Merge branch 'android-tegra-3.10' into android-tegra-flounder-3.10
d28c42e pstore: pmsg: return -ENOMEM on vmalloc failure
Signed-off-by: Mark Salyzyn <salyzyn@google.com>
Bug: 23385441
Change-Id: I6cc987cf7817eb3dbda62a625fb6137a468dd6c2
diff --git a/Documentation/ABI/testing/sysfs-fs-f2fs b/Documentation/ABI/testing/sysfs-fs-f2fs
new file mode 100644
index 0000000..f1a0bdb
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-fs-f2fs
@@ -0,0 +1,70 @@
+What: /sys/fs/f2fs/<disk>/gc_max_sleep_time
+Date: July 2013
+Contact: "Namjae Jeon" <namjae.jeon@samsung.com>
+Description:
+ Controls the maximum sleep time for gc_thread. Time
+ is in milliseconds.
+
+What: /sys/fs/f2fs/<disk>/gc_min_sleep_time
+Date: July 2013
+Contact: "Namjae Jeon" <namjae.jeon@samsung.com>
+Description:
+ Controls the minimum sleep time for gc_thread. Time
+ is in milliseconds.
+
+What: /sys/fs/f2fs/<disk>/gc_no_gc_sleep_time
+Date: July 2013
+Contact: "Namjae Jeon" <namjae.jeon@samsung.com>
+Description:
+ Controls the default sleep time for gc_thread. Time
+ is in milliseconds.
+
+What: /sys/fs/f2fs/<disk>/gc_idle
+Date: July 2013
+Contact: "Namjae Jeon" <namjae.jeon@samsung.com>
+Description:
+ Controls the victim selection policy for garbage collection.
+
+What: /sys/fs/f2fs/<disk>/reclaim_segments
+Date: October 2013
+Contact: "Jaegeuk Kim" <jaegeuk.kim@samsung.com>
+Description:
+ Controls the issue rate of segment discard commands.
+
+What: /sys/fs/f2fs/<disk>/ipu_policy
+Date: November 2013
+Contact: "Jaegeuk Kim" <jaegeuk.kim@samsung.com>
+Description:
+ Controls the in-place-update policy.
+
+What: /sys/fs/f2fs/<disk>/min_ipu_util
+Date: November 2013
+Contact: "Jaegeuk Kim" <jaegeuk.kim@samsung.com>
+Description:
+ Controls the FS utilization condition for the in-place-update
+ policies.
+
+What: /sys/fs/f2fs/<disk>/min_fsync_blocks
+Date: September 2014
+Contact: "Jaegeuk Kim" <jaegeuk@kernel.org>
+Description:
+ Controls the dirty page count condition for the in-place-update
+ policies.
+
+What: /sys/fs/f2fs/<disk>/max_small_discards
+Date: November 2013
+Contact: "Jaegeuk Kim" <jaegeuk.kim@samsung.com>
+Description:
+ Controls the issue rate of small discard commands.
+
+What: /sys/fs/f2fs/<disk>/max_victim_search
+Date: January 2014
+Contact: "Jaegeuk Kim" <jaegeuk.kim@samsung.com>
+Description:
+ Controls the number of trials to find a victim segment.
+
+What: /sys/fs/f2fs/<disk>/ram_thresh
+Date: March 2014
+Contact: "Jaegeuk Kim" <jaegeuk.kim@samsung.com>
+Description:
+ Controls the memory footprint used by f2fs.
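The ipu_policy entry above lists five F2FS_IPU_* values as hex constants. A minimal Python sketch of decoding such a value — the flag values come from this patch's f2fs.txt hunk; treating them as a combinable bitmask is an assumption for illustration:

```python
# In-place-update policy flags as documented for /sys/fs/f2fs/<disk>/ipu_policy.
IPU_FLAGS = {
    0x01: "F2FS_IPU_FORCE",
    0x02: "F2FS_IPU_SSR",
    0x04: "F2FS_IPU_UTIL",
    0x08: "F2FS_IPU_SSR_UTIL",
    0x10: "F2FS_IPU_FSYNC",
}

def decode_ipu_policy(value):
    """Return the names of the policy bits set in an ipu_policy value."""
    return [name for bit, name in sorted(IPU_FLAGS.items()) if value & bit]
```

For example, `decode_ipu_policy(0x05)` yields `["F2FS_IPU_FORCE", "F2FS_IPU_UTIL"]`.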
diff --git a/Documentation/devicetree/bindings/power/bq2419x-charger.txt b/Documentation/devicetree/bindings/power/bq2419x-charger.txt
index 71816ce..5ed8a1c 100644
--- a/Documentation/devicetree/bindings/power/bq2419x-charger.txt
+++ b/Documentation/devicetree/bindings/power/bq2419x-charger.txt
@@ -50,6 +50,8 @@
11 to 15 -> 2048mA
16 to 60 -> 5200 mA
> 60 : Charging will be disabled by HW.
+-ti,auto-recharge-time-suspend: time setting in seconds used to register
+ an RTC alarm timer to wake the device from LP0.
Subnode properties:
==================
diff --git a/Documentation/filesystems/f2fs.txt b/Documentation/filesystems/f2fs.txt
index bd3c56c..b2158f6 100644
--- a/Documentation/filesystems/f2fs.txt
+++ b/Documentation/filesystems/f2fs.txt
@@ -114,6 +114,18 @@
Default number is 6.
disable_ext_identify Disable the extension list configured by mkfs, so f2fs
does not aware of cold files such as media files.
+inline_xattr Enable the inline xattrs feature.
+inline_data Enable the inline data feature: newly created small (<~3.4k)
+ files can be written into the inode block.
+flush_merge Merge concurrent cache_flush commands as much as possible
+ to eliminate redundant command issues. If the underlying
+ device handles the cache_flush command relatively slowly,
+ enabling this option is recommended.
+nobarrier This option can be used if the underlying storage guarantees
+ that its cached data will be written to the non-volatile area.
+ If this option is set, no cache_flush commands are issued,
+ but f2fs still guarantees the write ordering of all
+ data writes.
================================================================================
DEBUGFS ENTRIES
@@ -128,6 +140,80 @@
- current memory footprint consumed by f2fs.
================================================================================
+SYSFS ENTRIES
+================================================================================
+
+Information about mounted f2fs file systems can be found in
+/sys/fs/f2fs. Each mounted filesystem will have a directory in
+/sys/fs/f2fs based on its device name (i.e., /sys/fs/f2fs/sda).
+The files in each per-device directory are shown in the table below.
+
+Files in /sys/fs/f2fs/<devname>
+(see also Documentation/ABI/testing/sysfs-fs-f2fs)
+..............................................................................
+ File Content
+
+ gc_max_sleep_time This tuning parameter controls the maximum sleep
+ time for the garbage collection thread. Time is
+ in milliseconds.
+
+ gc_min_sleep_time This tuning parameter controls the minimum sleep
+ time for the garbage collection thread. Time is
+ in milliseconds.
+
+ gc_no_gc_sleep_time This tuning parameter controls the default sleep
+ time for the garbage collection thread. Time is
+ in milliseconds.
+
+ gc_idle This parameter controls the victim selection
+ policy for garbage collection. Setting gc_idle = 0
+ (default) will disable this option. Setting
+ gc_idle = 1 will select the cost-benefit approach,
+ and gc_idle = 2 will select the greedy approach.
+
+ reclaim_segments This parameter controls the number of prefree
+ segments to be reclaimed. If the number of prefree
+ segments exceeds the given percentage of the
+ total volume size, f2fs conducts a checkpoint to
+ reclaim the prefree segments as free segments.
+ The default is 5% of the total # of segments.
+
+ ipu_policy This parameter controls the policy of in-place
+ updates in f2fs. There are five policies:
+ 0x01: F2FS_IPU_FORCE, 0x02: F2FS_IPU_SSR,
+ 0x04: F2FS_IPU_UTIL, 0x08: F2FS_IPU_SSR_UTIL,
+ 0x10: F2FS_IPU_FSYNC.
+
+ min_ipu_util This parameter controls the threshold to trigger
+ in-place-updates. The number indicates percentage
+ of the filesystem utilization, and used by
+ F2FS_IPU_UTIL and F2FS_IPU_SSR_UTIL policies.
+
+ min_fsync_blocks This parameter controls the threshold to trigger
+ in-place-updates when F2FS_IPU_FSYNC mode is set.
+ The value indicates the number of dirty pages
+ that fsync needs to flush on its call path. If
+ the count is smaller than this value, it triggers
+ in-place-updates.
+
+ max_victim_search This parameter controls the number of trials to
+ find a victim segment when conducting SSR and
+ cleaning operations. The default value is 4096,
+ which covers an 8GB block address range.
+
+ dir_level This parameter controls the directory level used
+ to support large directories. If a directory has
+ many files, increasing this dir_level value can
+ reduce the file lookup latency. Otherwise,
+ decrease this value to reduce the space overhead.
+ The default value is 0.
+
+ ram_thresh This parameter controls the memory footprint used
+ by free nids and cached nat entries. By default,
+ 10 is set, which indicates 10 MB / 1 GB RAM.
+
+================================================================================
USAGE
================================================================================
@@ -341,9 +427,11 @@
# of blocks in level #n = |
`- 4, Otherwise
- ,- 2^n, if n < MAX_DIR_HASH_DEPTH / 2,
+ ,- 2^(n + dir_level),
+ | if n + dir_level < MAX_DIR_HASH_DEPTH / 2,
# of buckets in level #n = |
- `- 2^((MAX_DIR_HASH_DEPTH / 2) - 1), Otherwise
+ `- 2^((MAX_DIR_HASH_DEPTH / 2) - 1),
+ Otherwise
When F2FS finds a file name in a directory, at first a hash value of the file
name is calculated. Then, F2FS scans the hash table in level #0 to find the
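The bucket formula updated in the f2fs.txt hunk above can be sketched directly. A minimal Python version, assuming MAX_DIR_HASH_DEPTH = 63 as defined in fs/f2fs/f2fs.h:

```python
MAX_DIR_HASH_DEPTH = 63  # assumed value, from fs/f2fs/f2fs.h

def buckets_in_level(n, dir_level=0):
    """Number of hash buckets in directory level #n, per the formula above."""
    if n + dir_level < MAX_DIR_HASH_DEPTH // 2:
        return 2 ** (n + dir_level)
    return 2 ** (MAX_DIR_HASH_DEPTH // 2 - 1)
```

With the default dir_level of 0, level #0 has a single bucket and each level doubles; a non-zero dir_level shifts the whole progression up, which is why it trades lookup latency against space overhead.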
diff --git a/arch/alpha/include/uapi/asm/fcntl.h b/arch/alpha/include/uapi/asm/fcntl.h
index 6d9e805..dfdadb0 100644
--- a/arch/alpha/include/uapi/asm/fcntl.h
+++ b/arch/alpha/include/uapi/asm/fcntl.h
@@ -32,6 +32,7 @@
#define O_SYNC (__O_SYNC|O_DSYNC)
#define O_PATH 040000000
+#define O_TMPFILE 0100000000
#define F_GETLK 7
#define F_SETLK 8
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 0f92917..cc338189 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -2011,16 +2011,16 @@
bool "Build a concatenated zImage/dtb by default"
depends on OF
help
- Enabling this option will cause a concatenated zImage and DTB to
- be built by default (instead of a standalone zImage.) The image
- will built in arch/arm/boot/zImage-dtb.<dtb name>
+ Enabling this option will cause a concatenated zImage and list of
+ DTBs to be built by default (instead of a standalone zImage.)
+ The image will be built in arch/arm/boot/zImage-dtb
-config BUILD_ARM_APPENDED_DTB_IMAGE_NAME
- string "Default dtb name"
+config BUILD_ARM_APPENDED_DTB_IMAGE_NAMES
+ string "Default dtb names"
depends on BUILD_ARM_APPENDED_DTB_IMAGE
help
- name of the dtb to append when building a concatenated
- zImage/dtb.
+ Space-separated list of dtb names to append when
+ building a concatenated zImage-dtb.
# Compressed boot loader in ROM. Yes, we really want to ask about
# TEXT and BSS so we preserve their values in the config files.
diff --git a/arch/arm/Makefile b/arch/arm/Makefile
index a5d9cef..276b9cb 100644
--- a/arch/arm/Makefile
+++ b/arch/arm/Makefile
@@ -269,7 +269,7 @@
ifeq ($(CONFIG_XIP_KERNEL),y)
KBUILD_IMAGE := xipImage
else ifeq ($(CONFIG_BUILD_ARM_APPENDED_DTB_IMAGE),y)
-KBUILD_IMAGE := zImage-dtb.$(CONFIG_BUILD_ARM_APPENDED_DTB_IMAGE_NAME)
+KBUILD_IMAGE := zImage-dtb
else
KBUILD_IMAGE := zImage
endif
@@ -301,6 +301,9 @@
dtbs: scripts
$(Q)$(MAKE) $(build)=$(boot)/dts MACHINE=$(MACHINE) dtbs
+zImage-dtb: vmlinux scripts dtbs
+ $(Q)$(MAKE) $(build)=$(boot) MACHINE=$(MACHINE) $(boot)/$@
+
# We use MRPROPER_FILES and CLEAN_FILES now
archclean:
$(Q)$(MAKE) $(clean)=$(boot)
diff --git a/arch/arm/boot/.gitignore b/arch/arm/boot/.gitignore
index 3c79f85..ad7a025 100644
--- a/arch/arm/boot/.gitignore
+++ b/arch/arm/boot/.gitignore
@@ -4,3 +4,4 @@
bootpImage
uImage
*.dtb
+zImage-dtb
\ No newline at end of file
diff --git a/arch/arm/boot/Makefile b/arch/arm/boot/Makefile
index 085bb96..65285bb 100644
--- a/arch/arm/boot/Makefile
+++ b/arch/arm/boot/Makefile
@@ -28,6 +28,14 @@
targets := Image zImage xipImage bootpImage uImage
+DTB_NAMES := $(subst $\",,$(CONFIG_BUILD_ARM_APPENDED_DTB_IMAGE_NAMES))
+ifneq ($(DTB_NAMES),)
+DTB_LIST := $(addsuffix .dtb,$(DTB_NAMES))
+else
+DTB_LIST := $(dtb-y)
+endif
+DTB_OBJS := $(addprefix $(obj)/dts/,$(DTB_LIST))
+
ifeq ($(CONFIG_XIP_KERNEL),y)
$(obj)/xipImage: vmlinux FORCE
@@ -56,6 +64,10 @@
$(call if_changed,objcopy)
@$(kecho) ' Kernel: $@ is ready'
+$(obj)/zImage-dtb: $(obj)/zImage $(DTB_OBJS) FORCE
+ $(call if_changed,cat)
+ @echo ' Kernel: $@ is ready'
+
endif
ifneq ($(LOADADDR),)
diff --git a/arch/arm/boot/dts/Makefile b/arch/arm/boot/dts/Makefile
index ba799ea..26717f4 100644
--- a/arch/arm/boot/dts/Makefile
+++ b/arch/arm/boot/dts/Makefile
@@ -265,13 +265,20 @@
wm8850-w70v2.dtb
dtb-$(CONFIG_ARCH_ZYNQ) += zynq-zc702.dtb
+DTB_NAMES := $(subst $\",,$(CONFIG_BUILD_ARM_APPENDED_DTB_IMAGE_NAMES))
+ifneq ($(DTB_NAMES),)
+DTB_LIST := $(addsuffix .dtb,$(DTB_NAMES))
+else
+DTB_LIST := $(dtb-y)
+endif
+
targets += dtbs
-targets += $(dtb-y)
+targets += $(DTB_LIST)
endif
# *.dtb used to be generated in the directory above. Clean out the
# old build results so people don't accidentally use them.
-dtbs: $(addprefix $(obj)/, $(dtb-y))
+dtbs: $(addprefix $(obj)/, $(DTB_LIST))
$(Q)rm -f $(obj)/../*.dtb
clean-files := *.dtb
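The selection logic added to both Makefiles above is the same: if CONFIG_BUILD_ARM_APPENDED_DTB_IMAGE_NAMES is non-empty, its quoted, space-separated names (with .dtb appended) replace the dtb-y list. A small Python sketch of that substitution; the board names in the usage note are hypothetical:

```python
def dtb_list(config_names, dtb_y):
    """Mimic the DTB_NAMES/DTB_LIST selection from the Makefile hunks above."""
    names = config_names.replace('"', "").split()  # $(subst $\",,...)
    if names:                                      # ifneq ($(DTB_NAMES),)
        return [n + ".dtb" for n in names]         # $(addsuffix .dtb,...)
    return dtb_y                                   # fall back to $(dtb-y)
```

For example, `dtb_list('"board-a board-b"', [])` gives `["board-a.dtb", "board-b.dtb"]`, while an empty config string falls back to the existing dtb-y list.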
diff --git a/arch/arm/boot/dts/tegra124-flounder-emc.dtsi b/arch/arm/boot/dts/tegra124-flounder-emc.dtsi
new file mode 100644
index 0000000..7680754
--- /dev/null
+++ b/arch/arm/boot/dts/tegra124-flounder-emc.dtsi
@@ -0,0 +1,2718 @@
+memory-controller@7001b000 {
+ compatible = "nvidia,tegra12-emc";
+ reg = <0x7001b000 0x800>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ emc-table@12750 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x16>;
+ nvidia,dvfs-version = "06_12750_02_V5.0.9_V0.4";
+ clock-frequency = <12750>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x4000003e>;
+ nvidia,burst-regs-num = <168>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000a
+ 0x00000003
+ 0x0000000b
+ 0x00000000
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000000
+ 0x00000006
+ 0x00000006
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000008
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x00000060
+ 0x00000000
+ 0x00000018
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000000
+ 0x00000007
+ 0x0000000f
+ 0x00000005
+ 0x00000005
+ 0x00000004
+ 0x00000005
+ 0x00000004
+ 0x00000000
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00000064
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1069a298
+ 0x002c00a0
+ 0x00008000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x10000280
+ 0x00000000
+ 0x00111111
+ 0x0130b118
+ 0x00000000
+ 0x00000000
+ 0x77ffc081
+ 0x00000e0e
+ 0x81f1f108
+ 0x07070004
+ 0x0000003f
+ 0x016eeeee
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000007
+ 0x00000000
+ 0x00000000
+ 0x00000042
+ 0x000e000e
+ 0x000e000e
+ 0x00000000
+ 0x00000000
+ 0xa1430000
+ 0x00000000
+ 0x00000003
+ 0x0000f2f3
+ 0x800001c5
+ 0x0000000a
+ 0x40040001
+ 0x8000000a
+ 0x00000001
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000008
+ 0x00000003
+ 0x00000002
+ 0x00000003
+ 0x00000006
+ 0x06030203
+ 0x000a0402
+ 0x77e30303
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000007
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x000800ff
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0024
+ 0x00ff00ff
+ 0x000000ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000042>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0x73240000>;
+ nvidia,emc-cfg-2 = <0x000008c5>;
+ nvidia,emc-sel-dpd-ctrl = <0x00040128>;
+ nvidia,emc-cfg-dig-dll = <0x002c0068>;
+ nvidia,emc-mode-0 = <0x80001221>;
+ nvidia,emc-mode-1 = <0x80100003>;
+ nvidia,emc-mode-2 = <0x80200008>;
+ nvidia,emc-mode-4 = <0x00000000>;
+ };
+ emc-table@20400 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x16>;
+ nvidia,dvfs-version = "06_20400_02_V5.0.9_V0.4";
+ clock-frequency = <20400>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x40000026>;
+ nvidia,burst-regs-num = <168>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000000
+ 0x00000005
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000a
+ 0x00000003
+ 0x0000000b
+ 0x00000000
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000000
+ 0x00000006
+ 0x00000006
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000008
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x0000009a
+ 0x00000000
+ 0x00000026
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000000
+ 0x00000007
+ 0x0000000f
+ 0x00000006
+ 0x00000006
+ 0x00000004
+ 0x00000005
+ 0x00000004
+ 0x00000000
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x000000a0
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1069a298
+ 0x002c00a0
+ 0x00008000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x10000280
+ 0x00000000
+ 0x00111111
+ 0x0130b118
+ 0x00000000
+ 0x00000000
+ 0x77ffc081
+ 0x00000e0e
+ 0x81f1f108
+ 0x07070004
+ 0x0000003f
+ 0x016eeeee
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x0000000b
+ 0x00000000
+ 0x00000000
+ 0x00000042
+ 0x000e000e
+ 0x000e000e
+ 0x00000000
+ 0x00000000
+ 0xa1430000
+ 0x00000000
+ 0x00000003
+ 0x0000f2f3
+ 0x8000023a
+ 0x0000000a
+ 0x40020001
+ 0x80000012
+ 0x00000001
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000008
+ 0x00000003
+ 0x00000002
+ 0x00000003
+ 0x00000006
+ 0x06030203
+ 0x000a0402
+ 0x76230303
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x0000000a
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x000800ff
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0024
+ 0x00ff00ff
+ 0x000000ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000042>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0x73240000>;
+ nvidia,emc-cfg-2 = <0x000008c5>;
+ nvidia,emc-sel-dpd-ctrl = <0x00040128>;
+ nvidia,emc-cfg-dig-dll = <0x002c0068>;
+ nvidia,emc-mode-0 = <0x80001221>;
+ nvidia,emc-mode-1 = <0x80100003>;
+ nvidia,emc-mode-2 = <0x80200008>;
+ nvidia,emc-mode-4 = <0x00000000>;
+ };
+ emc-table@40800 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x16>;
+ nvidia,dvfs-version = "06_40800_02_V5.0.9_V0.4";
+ clock-frequency = <40800>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x40000012>;
+ nvidia,burst-regs-num = <168>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000001
+ 0x0000000a
+ 0x00000000
+ 0x00000001
+ 0x00000000
+ 0x00000004
+ 0x0000000a
+ 0x00000003
+ 0x0000000b
+ 0x00000000
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000000
+ 0x00000006
+ 0x00000006
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000008
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x00000134
+ 0x00000000
+ 0x0000004d
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000000
+ 0x00000008
+ 0x0000000f
+ 0x0000000c
+ 0x0000000c
+ 0x00000004
+ 0x00000005
+ 0x00000004
+ 0x00000000
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x0000013f
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1069a298
+ 0x002c00a0
+ 0x00008000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x10000280
+ 0x00000000
+ 0x00111111
+ 0x0130b118
+ 0x00000000
+ 0x00000000
+ 0x77ffc081
+ 0x00000e0e
+ 0x81f1f108
+ 0x07070004
+ 0x0000003f
+ 0x016eeeee
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000015
+ 0x00000000
+ 0x00000000
+ 0x00000042
+ 0x000e000e
+ 0x000e000e
+ 0x00000000
+ 0x00000000
+ 0xa1430000
+ 0x00000000
+ 0x00000003
+ 0x0000f2f3
+ 0x80000370
+ 0x0000000a
+ 0xa0000001
+ 0x80000017
+ 0x00000001
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000008
+ 0x00000003
+ 0x00000002
+ 0x00000003
+ 0x00000006
+ 0x06030203
+ 0x000a0402
+ 0x74a30303
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000014
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x000800ff
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0024
+ 0x00ff00ff
+ 0x000000ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000042>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0x73240000>;
+ nvidia,emc-cfg-2 = <0x000008c5>;
+ nvidia,emc-sel-dpd-ctrl = <0x00040128>;
+ nvidia,emc-cfg-dig-dll = <0x002c0068>;
+ nvidia,emc-mode-0 = <0x80001221>;
+ nvidia,emc-mode-1 = <0x80100003>;
+ nvidia,emc-mode-2 = <0x80200008>;
+ nvidia,emc-mode-4 = <0x00000000>;
+ };
+ emc-table@68000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x16>;
+ nvidia,dvfs-version = "06_68000_02_V5.0.9_V0.4";
+ clock-frequency = <68000>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x4000000a>;
+ nvidia,burst-regs-num = <168>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000003
+ 0x00000011
+ 0x00000000
+ 0x00000002
+ 0x00000000
+ 0x00000004
+ 0x0000000a
+ 0x00000003
+ 0x0000000b
+ 0x00000000
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000000
+ 0x00000006
+ 0x00000006
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000008
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x00000202
+ 0x00000000
+ 0x00000080
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000000
+ 0x0000000f
+ 0x0000000f
+ 0x00000013
+ 0x00000013
+ 0x00000004
+ 0x00000005
+ 0x00000004
+ 0x00000001
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00000213
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1069a298
+ 0x002c00a0
+ 0x00008000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x10000280
+ 0x00000000
+ 0x00111111
+ 0x0130b118
+ 0x00000000
+ 0x00000000
+ 0x77ffc081
+ 0x00000e0e
+ 0x81f1f108
+ 0x07070004
+ 0x0000003f
+ 0x016eeeee
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000022
+ 0x00000000
+ 0x00000000
+ 0x00000042
+ 0x000e000e
+ 0x000e000e
+ 0x00000000
+ 0x00000000
+ 0xa1430000
+ 0x00000000
+ 0x00000003
+ 0x0000f2f3
+ 0x8000050e
+ 0x0000000a
+ 0x00000001
+ 0x8000001e
+ 0x00000001
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000008
+ 0x00000003
+ 0x00000002
+ 0x00000003
+ 0x00000006
+ 0x06030203
+ 0x000a0402
+ 0x74230403
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000021
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00b0
+ 0x00ff00ff
+ 0x00ff00ec
+ 0x00ff00ff
+ 0x00ff00ec
+ 0x00e90049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x000800ff
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff00ff
+ 0x00ff00a3
+ 0x00ff0024
+ 0x00ff00ff
+ 0x000000ff
+ 0x000000ef
+ 0x00ff00ff
+ 0x000000ef
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ee00ef
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000042>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0x73240000>;
+ nvidia,emc-cfg-2 = <0x000008c5>;
+ nvidia,emc-sel-dpd-ctrl = <0x00040128>;
+ nvidia,emc-cfg-dig-dll = <0x002c0068>;
+ nvidia,emc-mode-0 = <0x80001221>;
+ nvidia,emc-mode-1 = <0x80100003>;
+ nvidia,emc-mode-2 = <0x80200008>;
+ nvidia,emc-mode-4 = <0x00000000>;
+ };
+ emc-table@102000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x16>;
+ nvidia,dvfs-version = "06_102000_02_V5.0.9_V0.4";
+ clock-frequency = <102000>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x40000006>;
+ nvidia,burst-regs-num = <168>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000004
+ 0x0000001a
+ 0x00000000
+ 0x00000003
+ 0x00000001
+ 0x00000004
+ 0x0000000a
+ 0x00000003
+ 0x0000000b
+ 0x00000001
+ 0x00000001
+ 0x00000003
+ 0x00000003
+ 0x00000000
+ 0x00000006
+ 0x00000006
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000008
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x00000304
+ 0x00000000
+ 0x000000c1
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000000
+ 0x00000018
+ 0x0000000f
+ 0x0000001c
+ 0x0000001c
+ 0x00000004
+ 0x00000005
+ 0x00000004
+ 0x00000003
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x0000031c
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1069a298
+ 0x002c00a0
+ 0x00008000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x10000280
+ 0x00000000
+ 0x00111111
+ 0x0130b118
+ 0x00000000
+ 0x00000000
+ 0x77ffc081
+ 0x00000e0e
+ 0x81f1f108
+ 0x07070004
+ 0x0000003f
+ 0x016eeeee
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000033
+ 0x00000000
+ 0x00000000
+ 0x00000042
+ 0x000e000e
+ 0x000e000e
+ 0x00000000
+ 0x00000000
+ 0xa1430000
+ 0x00000000
+ 0x00000003
+ 0x0000f2f3
+ 0x80000713
+ 0x0000000a
+ 0x08000001
+ 0x80000026
+ 0x00000001
+ 0x00000001
+ 0x00000003
+ 0x00000000
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000008
+ 0x00000003
+ 0x00000002
+ 0x00000003
+ 0x00000006
+ 0x06030203
+ 0x000a0403
+ 0x73c30504
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000031
+ 0x00ff00da
+ 0x00ff00da
+ 0x00ff0075
+ 0x00ff00ff
+ 0x00ff009d
+ 0x00ff00ff
+ 0x00ff009d
+ 0x009b0049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x000800ad
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff00c6
+ 0x00ff006d
+ 0x00ff0024
+ 0x00ff00d6
+ 0x000000ff
+ 0x0000009f
+ 0x00ff00ff
+ 0x0000009f
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x009f00a0
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00da
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000042>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0x73240000>;
+ nvidia,emc-cfg-2 = <0x000008c5>;
+ nvidia,emc-sel-dpd-ctrl = <0x00040128>;
+ nvidia,emc-cfg-dig-dll = <0x002c0068>;
+ nvidia,emc-mode-0 = <0x80001221>;
+ nvidia,emc-mode-1 = <0x80100003>;
+ nvidia,emc-mode-2 = <0x80200008>;
+ nvidia,emc-mode-4 = <0x00000000>;
+ };
+ emc-table@204000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x16>;
+ nvidia,dvfs-version = "06_204000_03_V5.0.9_V0.4";
+ clock-frequency = <204000>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x40000002>;
+ nvidia,burst-regs-num = <168>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000009
+ 0x00000035
+ 0x00000000
+ 0x00000006
+ 0x00000002
+ 0x00000005
+ 0x0000000a
+ 0x00000003
+ 0x0000000b
+ 0x00000002
+ 0x00000002
+ 0x00000003
+ 0x00000003
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000004
+ 0x00000006
+ 0x00010000
+ 0x00000003
+ 0x00000008
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000003
+ 0x0000000d
+ 0x0000000f
+ 0x00000011
+ 0x00000607
+ 0x00000000
+ 0x00000181
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000000
+ 0x00000032
+ 0x0000000f
+ 0x00000038
+ 0x00000038
+ 0x00000004
+ 0x00000005
+ 0x00000004
+ 0x00000007
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00000638
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1069a298
+ 0x002c00a0
+ 0x00008000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00064000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00008000
+ 0x00000000
+ 0x00000000
+ 0x00008000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00006000
+ 0x00006000
+ 0x00006000
+ 0x00006000
+ 0x10000280
+ 0x00000000
+ 0x00111111
+ 0x0130b118
+ 0x00000000
+ 0x00000000
+ 0x77ffc081
+ 0x00000707
+ 0x81f1f108
+ 0x07070004
+ 0x0000003f
+ 0x016eeeee
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000066
+ 0x00000000
+ 0x00020000
+ 0x00000100
+ 0x000e000e
+ 0x000e000e
+ 0x00000000
+ 0x00000000
+ 0xa1430000
+ 0x00000000
+ 0x00000003
+ 0x0000d2b3
+ 0x80000d22
+ 0x0000000a
+ 0x01000003
+ 0x80000040
+ 0x00000001
+ 0x00000001
+ 0x00000004
+ 0x00000002
+ 0x00000004
+ 0x00000001
+ 0x00000002
+ 0x00000008
+ 0x00000003
+ 0x00000002
+ 0x00000004
+ 0x00000006
+ 0x06040203
+ 0x000a0404
+ 0x73840a05
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000062
+ 0x00ff006d
+ 0x00ff006d
+ 0x00ff003c
+ 0x00ff00af
+ 0x00ff004f
+ 0x00ff00af
+ 0x00ff004f
+ 0x004e0049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x00080057
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff0063
+ 0x00ff0036
+ 0x00ff0024
+ 0x00ff006b
+ 0x000000ff
+ 0x00000050
+ 0x00ff00ff
+ 0x00000050
+ 0x00ff00ff
+ 0x00d400ff
+ 0x00510050
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00c6
+ 0x00ff006d
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000042>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0x73240000>;
+ nvidia,emc-cfg-2 = <0x000008cd>;
+ nvidia,emc-sel-dpd-ctrl = <0x00040128>;
+ nvidia,emc-cfg-dig-dll = <0x002c0068>;
+ nvidia,emc-mode-0 = <0x80001221>;
+ nvidia,emc-mode-1 = <0x80100003>;
+ nvidia,emc-mode-2 = <0x80200008>;
+ nvidia,emc-mode-4 = <0x00000000>;
+ };
+ emc-table@300000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x16>;
+ nvidia,dvfs-version = "06_300000_01_V5.0.9_V0.4";
+ clock-frequency = <300000>;
+ nvidia,emc-min-mv = <810>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllc_out0";
+ nvidia,src-sel-reg = <0x20000002>;
+ nvidia,burst-regs-num = <168>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x0000000d
+ 0x0000004d
+ 0x00000000
+ 0x00000009
+ 0x00000003
+ 0x00000004
+ 0x00000008
+ 0x00000002
+ 0x00000009
+ 0x00000003
+ 0x00000003
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000005
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000006
+ 0x00030000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000002
+ 0x0000000d
+ 0x0000000e
+ 0x00000010
+ 0x000008e4
+ 0x00000000
+ 0x00000239
+ 0x00000001
+ 0x00000008
+ 0x00000001
+ 0x00000000
+ 0x0000004b
+ 0x0000000e
+ 0x00000052
+ 0x00000200
+ 0x00000004
+ 0x00000005
+ 0x00000004
+ 0x00000009
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00000924
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1049b098
+ 0x002c00a0
+ 0x00008000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00040000
+ 0x00040000
+ 0x00000000
+ 0x00040000
+ 0x00040000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00040000
+ 0x00040000
+ 0x00040000
+ 0x00040000
+ 0x00004000
+ 0x00004000
+ 0x00004000
+ 0x00004000
+ 0x10000280
+ 0x00000000
+ 0x00111111
+ 0x01231339
+ 0x00000000
+ 0x00000000
+ 0x77ffc081
+ 0x00000505
+ 0x81f1f108
+ 0x07070004
+ 0x00000000
+ 0x016eeeee
+ 0x51451420
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000096
+ 0x00000000
+ 0x00020000
+ 0x00000100
+ 0x0173000e
+ 0x0173000e
+ 0x00000000
+ 0x00000000
+ 0xa1430000
+ 0x00000000
+ 0x00000003
+ 0x0000d3b3
+ 0x800012d7
+ 0x00000009
+ 0x08000004
+ 0x80000040
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000004
+ 0x00000005
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000002
+ 0x00000002
+ 0x00000004
+ 0x00000006
+ 0x06040202
+ 0x000b0607
+ 0x77450e08
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000004
+ 0x00000090
+ 0x00ff004a
+ 0x00ff004a
+ 0x00ff003c
+ 0x00ff0090
+ 0x00ff0041
+ 0x00ff0090
+ 0x00ff0041
+ 0x00350049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x0008003b
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff0043
+ 0x00ff002d
+ 0x00ff0024
+ 0x00ff0049
+ 0x000000ff
+ 0x00000036
+ 0x00ff00ff
+ 0x00000036
+ 0x00ff00ff
+ 0x00d400ff
+ 0x00510036
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0087
+ 0x00ff004a
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000042>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0x73340000>;
+ nvidia,emc-cfg-2 = <0x000008cd>;
+ nvidia,emc-sel-dpd-ctrl = <0x00040128>;
+ nvidia,emc-cfg-dig-dll = <0x002c0068>;
+ nvidia,emc-mode-0 = <0x80000321>;
+ nvidia,emc-mode-1 = <0x80100002>;
+ nvidia,emc-mode-2 = <0x80200000>;
+ nvidia,emc-mode-4 = <0x00000000>;
+ };
+ emc-table@396000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x16>;
+ nvidia,dvfs-version = "06_396000_03_V5.0.9_V0.4";
+ clock-frequency = <396000>;
+ nvidia,emc-min-mv = <860>;
+ nvidia,gk20a-min-mv = <900>;
+ nvidia,source = "pllm_out0";
+ nvidia,src-sel-reg = <0x00000002>;
+ nvidia,burst-regs-num = <168>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000011
+ 0x00000066
+ 0x00000000
+ 0x0000000c
+ 0x00000004
+ 0x00000005
+ 0x00000008
+ 0x00000002
+ 0x0000000a
+ 0x00000004
+ 0x00000004
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000005
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000006
+ 0x00030000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000002
+ 0x0000000d
+ 0x0000000e
+ 0x00000010
+ 0x00000bd1
+ 0x00000000
+ 0x000002f4
+ 0x00000001
+ 0x00000008
+ 0x00000001
+ 0x00000000
+ 0x00000063
+ 0x0000000f
+ 0x0000006c
+ 0x00000200
+ 0x00000004
+ 0x00000005
+ 0x00000004
+ 0x0000000d
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00000c11
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1049b098
+ 0x002c00a0
+ 0x00008000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00030000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00040000
+ 0x00040000
+ 0x00000000
+ 0x00040000
+ 0x00040000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00044000
+ 0x00044000
+ 0x00044000
+ 0x00044000
+ 0x00004400
+ 0x00004400
+ 0x00004400
+ 0x00004400
+ 0x10000280
+ 0x00000000
+ 0x00111111
+ 0x01231339
+ 0x00000000
+ 0x00000000
+ 0x77ffc081
+ 0x00000505
+ 0x81f1f108
+ 0x07070004
+ 0x00000000
+ 0x016eeeee
+ 0x51451420
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x000000c6
+ 0x00000000
+ 0x00020000
+ 0x00000100
+ 0x015b000e
+ 0x015b000e
+ 0x00000000
+ 0x00000000
+ 0xa1430000
+ 0x00000000
+ 0x00000003
+ 0x0000d3b3
+ 0x8000188b
+ 0x00000009
+ 0x0f000005
+ 0x80000040
+ 0x00000001
+ 0x00000002
+ 0x00000009
+ 0x00000005
+ 0x00000007
+ 0x00000001
+ 0x00000002
+ 0x00000008
+ 0x00000002
+ 0x00000002
+ 0x00000004
+ 0x00000006
+ 0x06040202
+ 0x000d0709
+ 0x7586120a
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x0000000a
+ 0x000000be
+ 0x00ff0038
+ 0x00ff0038
+ 0x00ff003c
+ 0x00ff0090
+ 0x00ff0041
+ 0x00ff0090
+ 0x00ff0041
+ 0x00280049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x0008002d
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff0033
+ 0x00ff0022
+ 0x00ff0024
+ 0x00ff0037
+ 0x000000ff
+ 0x00000036
+ 0x00ff00ff
+ 0x00000036
+ 0x00ff00ff
+ 0x00d400ff
+ 0x00510029
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0066
+ 0x00ff0038
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000042>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0x73340000>;
+ nvidia,emc-cfg-2 = <0x0000088d>;
+ nvidia,emc-sel-dpd-ctrl = <0x00040008>;
+ nvidia,emc-cfg-dig-dll = <0x002c0068>;
+ nvidia,emc-mode-0 = <0x80000521>;
+ nvidia,emc-mode-1 = <0x80100002>;
+ nvidia,emc-mode-2 = <0x80200000>;
+ nvidia,emc-mode-4 = <0x00000000>;
+ };
+ emc-table@528000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x16>;
+ nvidia,dvfs-version = "06_528000_02_V5.0.9_V0.4";
+ clock-frequency = <528000>;
+ nvidia,emc-min-mv = <920>;
+ nvidia,gk20a-min-mv = <900>;
+ nvidia,source = "pllm_ud";
+ nvidia,src-sel-reg = <0x80000000>;
+ nvidia,burst-regs-num = <168>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000018
+ 0x00000088
+ 0x00000000
+ 0x00000010
+ 0x00000006
+ 0x00000006
+ 0x00000009
+ 0x00000002
+ 0x0000000d
+ 0x00000006
+ 0x00000006
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000004
+ 0x00000004
+ 0x00000008
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000007
+ 0x00060000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000e
+ 0x00000013
+ 0x00000015
+ 0x00000fd6
+ 0x00000000
+ 0x000003f5
+ 0x00000002
+ 0x0000000b
+ 0x00000001
+ 0x00000000
+ 0x00000085
+ 0x00000012
+ 0x00000090
+ 0x00000200
+ 0x00000004
+ 0x00000005
+ 0x00000004
+ 0x00000013
+ 0x00000000
+ 0x00000006
+ 0x00000006
+ 0x00001017
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1049b098
+ 0xe01200b1
+ 0x00008000
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00054000
+ 0x00054000
+ 0x00000000
+ 0x00054000
+ 0x00054000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x0000000e
+ 0x0000000e
+ 0x0000000e
+ 0x0000000e
+ 0x0000000e
+ 0x0000000e
+ 0x0000000e
+ 0x0000000e
+ 0x100002a0
+ 0x00000000
+ 0x00111111
+ 0x0123133d
+ 0x00000000
+ 0x00000000
+ 0x77ffc085
+ 0x00000505
+ 0x81f1f108
+ 0x07070004
+ 0x00000000
+ 0x016eeeee
+ 0x51451420
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0606003f
+ 0x00000000
+ 0x00000000
+ 0x00020000
+ 0x00000100
+ 0x0139000e
+ 0x0139000e
+ 0x00000000
+ 0x00000000
+ 0xa1430000
+ 0x00000000
+ 0x00000003
+ 0x000052a0
+ 0x80002062
+ 0x0000000c
+ 0x0f000007
+ 0x80000040
+ 0x00000002
+ 0x00000003
+ 0x0000000c
+ 0x00000007
+ 0x0000000a
+ 0x00000001
+ 0x00000002
+ 0x00000009
+ 0x00000002
+ 0x00000002
+ 0x00000005
+ 0x00000006
+ 0x06050202
+ 0x0010090c
+ 0x7428180d
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x0000000d
+ 0x000000fd
+ 0x00c10038
+ 0x00c10038
+ 0x00c1003c
+ 0x00c10090
+ 0x00c10041
+ 0x00c10090
+ 0x00c10041
+ 0x00270049
+ 0x00c10080
+ 0x00c10004
+ 0x00c10004
+ 0x00080021
+ 0x000000c1
+ 0x00c10004
+ 0x00c10026
+ 0x00c1001a
+ 0x00c10024
+ 0x00c10029
+ 0x000000c1
+ 0x00000036
+ 0x00c100c1
+ 0x00000036
+ 0x00c100c1
+ 0x00d400ff
+ 0x00510029
+ 0x00c100c1
+ 0x00c100c1
+ 0x00c10065
+ 0x00c1002a
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000042>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0x73300000>;
+ nvidia,emc-cfg-2 = <0x00000895>;
+ nvidia,emc-sel-dpd-ctrl = <0x00040008>;
+ nvidia,emc-cfg-dig-dll = <0xe0120069>;
+ nvidia,emc-mode-0 = <0x80000941>;
+ nvidia,emc-mode-1 = <0x80100002>;
+ nvidia,emc-mode-2 = <0x80200008>;
+ nvidia,emc-mode-4 = <0x00000000>;
+ };
+ emc-table@600000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x16>;
+ nvidia,dvfs-version = "06_600000_04_V5.0.9_V0.4";
+ clock-frequency = <600000>;
+ nvidia,emc-min-mv = <930>;
+ nvidia,gk20a-min-mv = <900>;
+ nvidia,source = "pllc_ud";
+ nvidia,src-sel-reg = <0xe0000000>;
+ nvidia,burst-regs-num = <168>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x0000001b
+ 0x0000009b
+ 0x00000000
+ 0x00000013
+ 0x00000007
+ 0x00000007
+ 0x0000000b
+ 0x00000003
+ 0x00000010
+ 0x00000007
+ 0x00000007
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x0000000a
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x0000000b
+ 0x00070000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000002
+ 0x00000012
+ 0x00000016
+ 0x00000018
+ 0x00001208
+ 0x00000000
+ 0x00000482
+ 0x00000002
+ 0x0000000d
+ 0x00000001
+ 0x00000000
+ 0x00000097
+ 0x00000015
+ 0x000000a3
+ 0x00000200
+ 0x00000004
+ 0x00000005
+ 0x00000004
+ 0x00000015
+ 0x00000000
+ 0x00000006
+ 0x00000006
+ 0x00001248
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1049b098
+ 0xe00e00b1
+ 0x00008000
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x0004c000
+ 0x0004c000
+ 0x00000000
+ 0x0004c000
+ 0x0004c000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x100002a0
+ 0x00000000
+ 0x00111111
+ 0x0121113d
+ 0x00000000
+ 0x00000000
+ 0x77ffc085
+ 0x00000404
+ 0x81f1f108
+ 0x07070004
+ 0x00000000
+ 0x016eeeee
+ 0x51451420
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0606003f
+ 0x00000000
+ 0x00000000
+ 0x00020000
+ 0x00000100
+ 0x0127000e
+ 0x0127000e
+ 0x00000000
+ 0x00000000
+ 0xa1430000
+ 0x00000000
+ 0x00000003
+ 0x000040a0
+ 0x800024a9
+ 0x0000000e
+ 0x00000009
+ 0x80000040
+ 0x00000003
+ 0x00000004
+ 0x0000000e
+ 0x00000009
+ 0x0000000b
+ 0x00000001
+ 0x00000003
+ 0x0000000b
+ 0x00000002
+ 0x00000002
+ 0x00000005
+ 0x00000007
+ 0x07050202
+ 0x00130b0e
+ 0x73a91b0f
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x0000000f
+ 0x00000120
+ 0x00aa0038
+ 0x00aa0038
+ 0x00aa003c
+ 0x00aa0090
+ 0x00aa0041
+ 0x00aa0090
+ 0x00aa0041
+ 0x00270049
+ 0x00aa0080
+ 0x00aa0004
+ 0x00aa0004
+ 0x0008001d
+ 0x000000aa
+ 0x00aa0004
+ 0x00aa0022
+ 0x00aa0018
+ 0x00aa0024
+ 0x00aa0024
+ 0x000000aa
+ 0x00000036
+ 0x00aa00aa
+ 0x00000036
+ 0x00aa00aa
+ 0x00d400ff
+ 0x00510029
+ 0x00aa00aa
+ 0x00aa00aa
+ 0x00aa0065
+ 0x00aa0025
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000042>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0x73300000>;
+ nvidia,emc-cfg-2 = <0x0000089d>;
+ nvidia,emc-sel-dpd-ctrl = <0x00040008>;
+ nvidia,emc-cfg-dig-dll = <0xe00e0069>;
+ nvidia,emc-mode-0 = <0x80000b61>;
+ nvidia,emc-mode-1 = <0x80100002>;
+ nvidia,emc-mode-2 = <0x80200010>;
+ nvidia,emc-mode-4 = <0x00000000>;
+ };
+ emc-table@792000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x16>;
+ nvidia,dvfs-version = "06_792000_05_V5.0.9_V0.4";
+ clock-frequency = <792000>;
+ nvidia,emc-min-mv = <1000>;
+ nvidia,gk20a-min-mv = <1000>;
+ nvidia,source = "pllm_ud";
+ nvidia,src-sel-reg = <0x80000000>;
+ nvidia,burst-regs-num = <168>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000024
+ 0x000000cd
+ 0x00000000
+ 0x00000019
+ 0x0000000a
+ 0x00000008
+ 0x0000000d
+ 0x00000004
+ 0x00000013
+ 0x0000000a
+ 0x0000000a
+ 0x00000003
+ 0x00000002
+ 0x00000000
+ 0x00000006
+ 0x00000006
+ 0x0000000b
+ 0x00000002
+ 0x00000000
+ 0x00000002
+ 0x0000000d
+ 0x00080000
+ 0x00000004
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000001
+ 0x00000014
+ 0x00000017
+ 0x00000019
+ 0x000017e2
+ 0x00000000
+ 0x000005f8
+ 0x00000003
+ 0x00000011
+ 0x00000001
+ 0x00000000
+ 0x000000c7
+ 0x00000018
+ 0x000000d7
+ 0x00000200
+ 0x00000005
+ 0x00000006
+ 0x00000005
+ 0x0000001d
+ 0x00000000
+ 0x00000008
+ 0x00000008
+ 0x00001822
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1049b098
+ 0xe00700b1
+ 0x00008000
+ 0x00000006
+ 0x00000006
+ 0x00000006
+ 0x00000006
+ 0x00000006
+ 0x00000006
+ 0x00000006
+ 0x00000006
+ 0x00000006
+ 0x00000006
+ 0x00000006
+ 0x00000006
+ 0x00000006
+ 0x00000006
+ 0x00000006
+ 0x00000006
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00030000
+ 0x00030000
+ 0x00000000
+ 0x00030000
+ 0x00030000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000009
+ 0x00000009
+ 0x00000009
+ 0x00000009
+ 0x00000009
+ 0x00000009
+ 0x00000009
+ 0x00000009
+ 0x100002a0
+ 0x00000000
+ 0x00111111
+ 0x0120113d
+ 0x00000000
+ 0x00000000
+ 0x77ffc085
+ 0x00000505
+ 0x81f1f108
+ 0x07070004
+ 0x00000000
+ 0x016eeeee
+ 0x61861820
+ 0x00514514
+ 0x00514514
+ 0x61861800
+ 0x0606003f
+ 0x00000000
+ 0x00000000
+ 0x00020000
+ 0x00000100
+ 0x00f7000e
+ 0x00f7000e
+ 0x00000000
+ 0x00000000
+ 0xa1430000
+ 0x00000000
+ 0x00000004
+ 0x000040a0
+ 0x80003012
+ 0x0000000f
+ 0x0e00000b
+ 0x80000040
+ 0x00000004
+ 0x00000005
+ 0x00000013
+ 0x0000000c
+ 0x0000000f
+ 0x00000002
+ 0x00000003
+ 0x0000000c
+ 0x00000002
+ 0x00000002
+ 0x00000006
+ 0x00000008
+ 0x08060202
+ 0x00170e13
+ 0x736c2414
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000013
+ 0x0000017c
+ 0x00810038
+ 0x00810038
+ 0x0081003c
+ 0x00810090
+ 0x00810041
+ 0x00810090
+ 0x00810041
+ 0x00270049
+ 0x00810080
+ 0x00810004
+ 0x00810004
+ 0x00080016
+ 0x00000081
+ 0x00810004
+ 0x00810019
+ 0x00810018
+ 0x00810024
+ 0x0081001c
+ 0x00000081
+ 0x00000036
+ 0x00810081
+ 0x00000036
+ 0x00810081
+ 0x00d400ff
+ 0x00510029
+ 0x00810081
+ 0x00810081
+ 0x00810065
+ 0x0081001c
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000042>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0x73300000>;
+ nvidia,emc-cfg-2 = <0x0000089d>;
+ nvidia,emc-sel-dpd-ctrl = <0x00040000>;
+ nvidia,emc-cfg-dig-dll = <0xe0070069>;
+ nvidia,emc-mode-0 = <0x80000d71>;
+ nvidia,emc-mode-1 = <0x80100002>;
+ nvidia,emc-mode-2 = <0x80200018>;
+ nvidia,emc-mode-4 = <0x00000000>;
+ };
+ emc-table@924000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x16>;
+ nvidia,dvfs-version = "06_924000_06_V5.0.9_V0.4";
+ clock-frequency = <924000>;
+ nvidia,emc-min-mv = <1040>;
+ nvidia,gk20a-min-mv = <1100>;
+ nvidia,source = "pllm_ud";
+ nvidia,src-sel-reg = <0x80000000>;
+ nvidia,burst-regs-num = <168>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x0000002b
+ 0x000000f0
+ 0x00000000
+ 0x0000001e
+ 0x0000000b
+ 0x0000000a
+ 0x0000000f
+ 0x00000005
+ 0x00000016
+ 0x0000000b
+ 0x0000000b
+ 0x00000004
+ 0x00000002
+ 0x00000000
+ 0x00000007
+ 0x00000007
+ 0x0000000d
+ 0x00000002
+ 0x00000000
+ 0x00000002
+ 0x0000000f
+ 0x000a0000
+ 0x00000004
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000001
+ 0x00000016
+ 0x0000001a
+ 0x0000001c
+ 0x00001be7
+ 0x00000000
+ 0x000006f9
+ 0x00000004
+ 0x00000015
+ 0x00000001
+ 0x00000000
+ 0x000000e7
+ 0x0000001b
+ 0x000000fb
+ 0x00000200
+ 0x00000006
+ 0x00000007
+ 0x00000006
+ 0x00000022
+ 0x00000000
+ 0x0000000a
+ 0x0000000a
+ 0x00001c28
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1049b898
+ 0xe00400b1
+ 0x00008000
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00030000
+ 0x00030000
+ 0x00000000
+ 0x00030000
+ 0x00030000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x100002a0
+ 0x00000000
+ 0x00111111
+ 0x0120113d
+ 0x00000000
+ 0x00000000
+ 0x77ffc085
+ 0x00000505
+ 0x81f1f108
+ 0x07070004
+ 0x00000000
+ 0x016eeeee
+ 0x5d75d720
+ 0x00514514
+ 0x00514514
+ 0x5d75d700
+ 0x0606003f
+ 0x00000000
+ 0x00000000
+ 0x00020000
+ 0x00000128
+ 0x00cd000e
+ 0x00cd000e
+ 0x00000000
+ 0x00000000
+ 0xa1430000
+ 0x00000000
+ 0x00000004
+ 0x00004080
+ 0x800037ea
+ 0x00000011
+ 0x0e00000d
+ 0x80000040
+ 0x00000005
+ 0x00000006
+ 0x00000016
+ 0x0000000e
+ 0x00000011
+ 0x00000002
+ 0x00000004
+ 0x0000000e
+ 0x00000002
+ 0x00000002
+ 0x00000007
+ 0x00000009
+ 0x09070202
+ 0x001a1016
+ 0x734e2a17
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000017
+ 0x000001bb
+ 0x006e0038
+ 0x006e0038
+ 0x006e003c
+ 0x006e0090
+ 0x006e0041
+ 0x006e0090
+ 0x006e0041
+ 0x00270049
+ 0x006e0080
+ 0x006e0004
+ 0x006e0004
+ 0x00080016
+ 0x0000006e
+ 0x006e0004
+ 0x006e0019
+ 0x006e0018
+ 0x006e0024
+ 0x006e001b
+ 0x0000006e
+ 0x00000036
+ 0x006e006e
+ 0x00000036
+ 0x006e006e
+ 0x00d400ff
+ 0x00510029
+ 0x006e006e
+ 0x006e006e
+ 0x006e0065
+ 0x006e001c
+ >;
+ nvidia,emc-zcal-cnt-long = <0x0000004c>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0x73300000>;
+ nvidia,emc-cfg-2 = <0x0000089d>;
+ nvidia,emc-sel-dpd-ctrl = <0x00040000>;
+ nvidia,emc-cfg-dig-dll = <0xe0040069>;
+ nvidia,emc-mode-0 = <0x80000f15>;
+ nvidia,emc-mode-1 = <0x80100002>;
+ nvidia,emc-mode-2 = <0x80200020>;
+ nvidia,emc-mode-4 = <0x00000000>;
+ };
+};
diff --git a/arch/arm/boot/dts/tegra124-flounder-generic.dtsi b/arch/arm/boot/dts/tegra124-flounder-generic.dtsi
new file mode 100644
index 0000000..b0b9999
--- /dev/null
+++ b/arch/arm/boot/dts/tegra124-flounder-generic.dtsi
@@ -0,0 +1,384 @@
+#include "tegra124.dtsi"
+
+/ {
+ gpio: gpio@6000d000 {
+ gpio-init-names = "default";
+ gpio-init-0 = <&gpio_default>;
+
+ gpio_default: default {
+ gpio-input = < TEGRA_GPIO(C, 7)
+ TEGRA_GPIO(G, 2)
+ TEGRA_GPIO(G, 3)
+ TEGRA_GPIO(H, 4)
+ TEGRA_GPIO(I, 5)
+ TEGRA_GPIO(I, 6)
+ TEGRA_GPIO(J, 0)
+ TEGRA_GPIO(K, 2)
+ TEGRA_GPIO(K, 3)
+ TEGRA_GPIO(N, 7)
+ TEGRA_GPIO(O, 2)
+ TEGRA_GPIO(O, 3)
+ TEGRA_GPIO(O, 5)
+ TEGRA_GPIO(O, 7)
+ TEGRA_GPIO(Q, 1)
+ TEGRA_GPIO(Q, 6)
+ TEGRA_GPIO(Q, 7)
+ TEGRA_GPIO(R, 4)
+ TEGRA_GPIO(S, 0)
+ TEGRA_GPIO(S, 1)
+ TEGRA_GPIO(S, 4)
+ TEGRA_GPIO(U, 2)
+ TEGRA_GPIO(U, 5)
+ TEGRA_GPIO(U, 6)
+ TEGRA_GPIO(V, 0)
+ TEGRA_GPIO(V, 1)
+ TEGRA_GPIO(W, 2)
+/*key*/
+ TEGRA_GPIO(Q, 0)
+ TEGRA_GPIO(Q, 5)
+ TEGRA_GPIO(V, 2)
+/*key*/
+/*headset*/
+ TEGRA_GPIO(S, 2)
+ TEGRA_GPIO(W, 3)
+ TEGRA_GPIO(B, 0)
+/*headset*/
+ TEGRA_GPIO(BB, 6)
+ TEGRA_GPIO(BB, 7)
+ TEGRA_GPIO(CC, 1)
+ TEGRA_GPIO(CC, 2)>;
+ gpio-output-low = <TEGRA_GPIO(G, 0)
+ TEGRA_GPIO(G, 1)
+ TEGRA_GPIO(H, 2)
+ TEGRA_GPIO(H, 3)
+ TEGRA_GPIO(H, 5)
+ TEGRA_GPIO(H, 6)
+ TEGRA_GPIO(H, 7)
+ TEGRA_GPIO(I, 0)
+/*key*/
+ TEGRA_GPIO(I, 3)
+/*key*/
+ TEGRA_GPIO(I, 4)
+/*headset*/
+ TEGRA_GPIO(J, 7)
+ TEGRA_GPIO(S, 3)
+/*headset*/
+ TEGRA_GPIO(K, 0)
+ TEGRA_GPIO(K, 1)
+ TEGRA_GPIO(K, 5)
+ TEGRA_GPIO(O, 0)
+ TEGRA_GPIO(O, 6)
+ TEGRA_GPIO(Q, 3)
+ TEGRA_GPIO(R, 1)
+ TEGRA_GPIO(R, 2)
+ TEGRA_GPIO(R, 5)
+ TEGRA_GPIO(S, 3)
+ TEGRA_GPIO(S, 6)
+ TEGRA_GPIO(U, 3)
+ TEGRA_GPIO(V, 3)
+ TEGRA_GPIO(X, 1)
+ TEGRA_GPIO(X, 3)
+ TEGRA_GPIO(X, 4)
+ TEGRA_GPIO(X, 5)
+ TEGRA_GPIO(X, 7)
+ TEGRA_GPIO(BB, 3)
+ TEGRA_GPIO(BB, 5)
+ TEGRA_GPIO(CC, 5)
+ TEGRA_GPIO(EE, 1)>;
+ gpio-output-high = <TEGRA_GPIO(B, 4)
+ TEGRA_GPIO(I, 2)
+ TEGRA_GPIO(K, 4)
+ TEGRA_GPIO(S, 5)
+ TEGRA_GPIO(R, 0)
+ TEGRA_GPIO(R, 3)
+ TEGRA_GPIO(X, 2)
+ TEGRA_GPIO(Q, 2)
+/*headset*/
+ TEGRA_GPIO(Q, 4)
+/*headset*/
+ TEGRA_GPIO(EE, 5)>;
+ };
+ };
+
+ host1x {
+ dsi {
+ nvidia,dsi-controller-vs = <1>;
+ status = "okay";
+ };
+ };
+
+ serial@70006000 {
+ compatible = "nvidia,tegra114-hsuart";
+ status = "okay";
+ };
+
+ serial@70006040 {
+ compatible = "nvidia,tegra114-hsuart";
+ status = "okay";
+ };
+
+ serial@70006200 {
+ compatible = "nvidia,tegra114-hsuart";
+ status = "okay";
+ };
+
+ serial@70006300 {
+ compatible = "nvidia,tegra114-hsuart";
+ status = "okay";
+ };
+
+ memory@80000000 {
+ device_type = "memory";
+ reg = <0x80000000 0x80000000>;
+ };
+
+ i2c@7000c000 {
+ status = "okay";
+ clock-frequency = <400000>;
+ };
+
+ i2c@7000c400 {
+ status = "okay";
+ clock-frequency = <400000>;
+ };
+
+ i2c@7000c500 {
+ status = "okay";
+ clock-frequency = <400000>;
+ };
+
+ i2c@7000c700 {
+ status = "okay";
+ clock-frequency = <400000>;
+ };
+
+ i2c@7000d000 {
+ status = "okay";
+ clock-frequency = <400000>;
+ nvidia,bit-banging-xfer-after-shutdown;
+
+ /include/ "tegra124-flounder-power.dtsi"
+ };
+
+ i2c@7000d100 {
+ status = "okay";
+ clock-frequency = <400000>;
+ };
+
+ spi@7000d400 {
+ status = "okay";
+ spi-max-frequency = <25000000>;
+ };
+
+ spi@7000d800 {
+ status = "okay";
+ spi-max-frequency = <25000000>;
+ };
+
+ spi@7000da00 {
+ status = "okay";
+ spi-max-frequency = <25000000>;
+ };
+
+ spi@7000dc00 {
+ status = "okay";
+ spi-max-frequency = <25000000>;
+ };
+
+ gpio-keys {
+ compatible = "gpio-keys";
+
+ power {
+ label = "Power";
+ gpios = <&gpio TEGRA_GPIO(Q, 0) GPIO_ACTIVE_LOW>;
+ linux,code = <KEY_POWER>;
+ gpio-key,wakeup;
+ debounce-interval = <20>;
+ };
+
+ volume_down {
+ label = "Volume Down";
+ gpios = <&gpio TEGRA_GPIO(Q, 5) GPIO_ACTIVE_LOW>;
+ linux,code = <KEY_VOLUMEDOWN>;
+ debounce-interval = <20>;
+ };
+
+ volume_up {
+ label = "Volume Up";
+ gpios = <&gpio TEGRA_GPIO(V, 2) GPIO_ACTIVE_LOW>;
+ linux,code = <KEY_VOLUMEUP>;
+ debounce-interval = <20>;
+ };
+ };
+
+ regulators {
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ vdd_ac_bat_reg: regulator@0 {
+ compatible = "regulator-fixed";
+ reg = <0>;
+ regulator-name = "vdd_ac_bat";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ regulator-always-on;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "vdd_sys_bl";
+ };
+ };
+ };
+
+ usb0_vbus: regulator@1 {
+ compatible = "regulator-fixed-sync";
+ reg = <1>;
+ regulator-name = "usb0-vbus";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ gpio = <&gpio 108 0>; /* TEGRA_PN4 */
+ enable-active-high;
+ gpio-open-drain;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "usb_vbus0";
+ regulator-consumer-device = "tegra-xhci";
+ };
+ };
+ };
+
+ usb1_vbus: regulator@2 {
+ compatible = "regulator-fixed-sync";
+ reg = <2>;
+ regulator-name = "usb1-vbus";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ enable-active-high;
+ gpio = <&gpio 109 0>; /* TEGRA_PN5 */
+ gpio-open-drain;
+ vin-supply = <&palmas_smps10_out2>;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "usb_vbus";
+ regulator-consumer-device = "tegra-ehci.1";
+ };
+ c2 {
+ regulator-consumer-supply = "usb_vbus1";
+ regulator-consumer-device = "tegra-xhci";
+ };
+ };
+ };
+
+ usb2_vbus: regulator@3 {
+ compatible = "regulator-fixed-sync";
+ reg = <3>;
+ regulator-name = "usb2-vbus";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ enable-active-high;
+ gpio = <&gpio 249 0>; /* TEGRA_PFF1 */
+ gpio-open-drain;
+ vin-supply = <&palmas_smps10_out2>;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "usb_vbus";
+ regulator-consumer-device = "tegra-ehci.2";
+ };
+ c2 {
+ regulator-consumer-supply = "usb_vbus2";
+ regulator-consumer-device = "tegra-xhci";
+ };
+ };
+ };
+
+ avdd_lcd: regulator@4 {
+ compatible = "regulator-fixed-sync";
+ reg = <4>;
+ regulator-name = "avdd-lcd";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ gpio = <&palmas_gpio 3 0>;
+ enable-active-high;
+ vin-supply = <&palmas_smps9>;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "avdd_lcd";
+ };
+ };
+ };
+
+ vdd_lcd: regulator@5 {
+ compatible = "regulator-fixed-sync";
+ reg = <5>;
+ regulator-name = "vdd-lcd";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <1200000>;
+ enable-active-high;
+ gpio = <&palmas_gpio 4 0>;
+ vin-supply = <&palmas_smps8>;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "vdd_lcd_1v2_s";
+ };
+ };
+ };
+
+ ldoen: regulator@6 {
+ compatible = "regulator-fixed-sync";
+ reg = <6>;
+ regulator-name = "ldoen";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <1200000>;
+ enable-active-high;
+ gpio = <&palmas_gpio 6 0>;
+ vin-supply = <&palmas_smps8>;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "ldoen";
+ regulator-consumer-device = "1-0052";
+ };
+ };
+ };
+
+ vpp_fuse: regulator@7 {
+ compatible = "regulator-fixed-sync";
+ reg = <7>;
+ regulator-name = "vpp-fuse";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ enable-active-high;
+ gpio = <&palmas_gpio 7 0>;
+ vin-supply = <&palmas_smps8>;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "vpp_fuse";
+ };
+ };
+ };
+
+ en_lcd_bl: regulator@8 {
+ compatible = "regulator-fixed-sync";
+ reg = <8>;
+ regulator-name = "en-lcd-bl";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ enable-active-high;
+ gpio = <&gpio 134 0>; /* TEGRA_PQ6 */
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "vdd_lcd_bl_en";
+ };
+ };
+ };
+ };
+
+};
diff --git a/arch/arm/boot/dts/tegra124-flounder-power.dtsi b/arch/arm/boot/dts/tegra124-flounder-power.dtsi
new file mode 100644
index 0000000..8c5cfed
--- /dev/null
+++ b/arch/arm/boot/dts/tegra124-flounder-power.dtsi
@@ -0,0 +1,602 @@
+palmas: tps65913 {
+ compatible = "ti,palmas";
+ reg = <0x58>;
+ interrupts = <0 86 0>;
+
+ #interrupt-cells = <2>;
+ interrupt-controller;
+
+ palmas_gpio: gpio {
+ compatible = "ti,palmas-gpio";
+ gpio-controller;
+ #gpio-cells = <2>;
+ ti,enable-boost-bypass;
+ v_boost_bypass_gpio = <4>;
+ };
+
+ rtc {
+ compatible = "ti,palmas-rtc";
+ interrupt-parent = <&palmas>;
+ interrupts = <8 0>;
+ };
+
+ pinmux {
+ compatible = "ti,tps65913-pinctrl";
+ pinctrl-names = "default";
+ pinctrl-0 = <&palmas_default>;
+
+ palmas_default: pinmux {
+ powergood {
+ pins = "powergood";
+ function = "powergood";
+ };
+
+ vac {
+ pins = "vac";
+ function = "vac";
+ };
+
+ pin_gpio0 {
+ pins = "gpio0";
+ function = "id";
+ bias-pull-up;
+ };
+
+ pin_gpio1 {
+ pins = "gpio1";
+ function = "vbus_det";
+ };
+
+ pin_gpio6 {
+ pins = "gpio2", "gpio3", "gpio4", "gpio6", "gpio7";
+ function = "gpio";
+ };
+
+ pin_gpio5 {
+ pins = "gpio5";
+ function = "clk32kgaudio";
+ };
+ };
+ };
+
+ extcon {
+ compatible = "ti,palmas-usb";
+ extcon-name = "palmas-extcon";
+ ti,wakeup;
+ ti,enable-vbus-detection;
+ };
+
+ power_pm {
+ compatible = "ti,palmas-pm";
+ system-pmic-power-off;
+ boot-up-at-vbus;
+ };
+
+ gpadc {
+ compatible = "ti,palmas-gpadc";
+ interrupt-parent = <&palmas>;
+ interrupts = <18 0
+ 16 0
+ 17 0>;
+ iio_map {
+ ch0 {
+ ti,adc-channel-number = <0>;
+ ti,adc-consumer-device = "bq2419x";
+ ti,adc-consumer-channel = "batt-id-channel";
+ };
+
+ ch4 {
+ ti,adc-channel-number = <4>;
+ ti,adc-consumer-device = "HTC_HEADSET_PMIC";
+ ti,adc-consumer-channel = "hs_channel";
+ };
+
+ ch10 {
+ ti,adc-channel-number = <10>;
+ ti,adc-consumer-device = "bq2419x";
+ ti,adc-consumer-channel = "charger-vbus";
+ };
+ };
+ };
+
+ clock {
+ compatible = "ti,palmas-clk";
+
+ clk32k_kg {
+ ti,clock-boot-enable;
+ };
+
+ clk32k_kg_audio {
+ ti,clock-boot-enable;
+ };
+ };
+
+ pmic {
+ compatible = "ti,tps65913-pmic", "ti,palmas-pmic";
+ ldo2-in-supply = <&palmas_smps8>;
+ ldo5-in-supply = <&palmas_smps8>;
+ ldo7-in-supply = <&palmas_smps8>;
+ ldoln-in-supply = <&palmas_smps10_out2>;
+ ti,disable-smps10-in-suspend;
+
+ regulators {
+ smps123 {
+ regulator-name = "vdd-cpu";
+ regulator-min-microvolt = <650000>;
+ regulator-max-microvolt = <1400000>;
+ regulator-always-on;
+ regulator-boot-on;
+ ti,roof-floor = <1>;
+ ti,config-flags = <8>;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "vdd_cpu";
+ };
+ };
+ };
+
+ smps457 {
+ regulator-name = "vdd-gpu";
+ regulator-min-microvolt = <700000>;
+ regulator-max-microvolt = <1400000>;
+ regulator-init-microvolt = <1000000>;
+ regulator-boot-on;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "vdd_gpu";
+ };
+ };
+ };
+
+ palmas_smps6: smps6 {
+ regulator-name = "vdd-core";
+ regulator-min-microvolt = <700000>;
+ regulator-max-microvolt = <1400000>;
+ regulator-always-on;
+ regulator-boot-on;
+ ti,roof-floor = <3>;
+ consumers {
+ c1 {
+ regulator-consumer-supply = "vdd_core";
+ };
+ };
+ };
+
+ palmas_smps8: smps8 {
+ regulator-name = "vdd-1v8";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-always-on;
+ regulator-boot-on;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "dbvdd";
+ regulator-consumer-device = "tegra-snd-rt5639.0";
+ };
+ c2 {
+ regulator-consumer-supply = "dbvdd";
+ regulator-consumer-device = "tegra-snd-rt5645.0";
+ };
+ c3 {
+ regulator-consumer-supply = "avdd";
+ regulator-consumer-device = "tegra-snd-rt5639.0";
+ };
+ c4 {
+ regulator-consumer-supply = "avdd";
+ regulator-consumer-device = "tegra-snd-rt5645.0";
+ };
+ c5 {
+ regulator-consumer-supply = "dmicvdd";
+ regulator-consumer-device = "tegra-snd-rt5639.0";
+ };
+ c6 {
+ regulator-consumer-supply = "dmicvdd";
+ regulator-consumer-device = "tegra-snd-rt5645.0";
+ };
+ c7 {
+ regulator-consumer-supply = "avdd_osc";
+ };
+ c8 {
+ regulator-consumer-supply = "vddio_sys";
+ };
+ c9 {
+ regulator-consumer-supply = "vddio_sys_2";
+ };
+ c10 {
+ regulator-consumer-supply = "pwrdet_nand";
+ };
+ c11 {
+ regulator-consumer-supply = "vddio_sdmmc";
+ regulator-consumer-device = "sdhci-tegra.0";
+ };
+ c12 {
+ regulator-consumer-supply = "pwrdet_sdmmc1";
+ };
+ c14 {
+ regulator-consumer-supply = "pwrdet_sdmmc4";
+ };
+ c15 {
+ regulator-consumer-supply = "avdd_pll_utmip";
+ regulator-consumer-device = "tegra-udc.0";
+ };
+ c16 {
+ regulator-consumer-supply = "avdd_pll_utmip";
+ regulator-consumer-device = "tegra-ehci.0";
+ };
+ c17 {
+ regulator-consumer-supply = "avdd_pll_utmip";
+ regulator-consumer-device = "tegra-ehci.1";
+ };
+ c18 {
+ regulator-consumer-supply = "avdd_pll_utmip";
+ regulator-consumer-device = "tegra-ehci.2";
+ };
+ c19 {
+ regulator-consumer-supply = "avdd_pll_utmip";
+ regulator-consumer-device = "tegra-xhci";
+ };
+ c20 {
+ regulator-consumer-supply = "vddio_audio";
+ };
+ c21 {
+ regulator-consumer-supply = "pwrdet_audio";
+ };
+ c22 {
+ regulator-consumer-supply = "vddio_uart";
+ };
+ c23 {
+ regulator-consumer-supply = "pwrdet_uart";
+ };
+ c24 {
+ regulator-consumer-supply = "vddio_bb";
+ };
+ c25 {
+ regulator-consumer-supply = "pwrdet_bb";
+ };
+ c26 {
+ regulator-consumer-supply = "vdd_dtv";
+ };
+ c27 {
+ regulator-consumer-supply = "vdd_1v8_eeprom";
+ };
+ c28 {
+ regulator-consumer-supply = "vddio_cam";
+ regulator-consumer-device = "tegra_camera";
+ };
+ c29 {
+ regulator-consumer-supply = "vddio_cam";
+ regulator-consumer-device = "vi";
+ };
+ c30 {
+ regulator-consumer-supply = "pwrdet_cam";
+ };
+ c31 {
+ regulator-consumer-supply = "dvdd";
+ regulator-consumer-device = "spi0.0";
+ };
+ c34 {
+ regulator-consumer-supply = "vddio";
+ regulator-consumer-device = "0-0077";
+ };
+ c35 {
+ regulator-consumer-supply = "dvdd_lcd";
+ };
+ c36 {
+ regulator-consumer-supply = "vdd_lcd_1v8_s";
+ };
+ c37 {
+ regulator-consumer-supply = "vddio_sdmmc";
+ regulator-consumer-device = "sdhci-tegra.3";
+ };
+ };
+ };
+
+ palmas_smps9: smps9 {
+ regulator-name = "vdd-1v2";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <1200000>;
+
+ consumers {
+ };
+ };
+
+ palmas_smps10_out1: smps10_out1 {
+ regulator-name = "vdd-hdmi";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "vdd_hdmi_5v0";
+ regulator-consumer-device = "tegradc.1";
+ };
+ };
+ };
+
+ palmas_smps10_out2: smps10_out2 {
+ regulator-name = "vdd-out2-5v0";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ consumers {
+ c1 {
+ regulator-consumer-supply = "vdd_5v0_mdm";
+ };
+ c2 {
+ regulator-consumer-supply = "vdd_5v0_snsr";
+ };
+ c3 {
+ regulator-consumer-supply = "vdd_5v0_dis";
+ };
+ c4 {
+ regulator-consumer-supply = "spkvdd";
+ regulator-consumer-device = "tegra-snd-rt5639.0";
+ };
+ c5 {
+ regulator-consumer-supply = "spkvdd";
+ regulator-consumer-device = "tegra-snd-rt5645.0";
+ };
+ c6 {
+ regulator-consumer-supply = "avddio_pex";
+ regulator-consumer-device = "tegra-pcie";
+ };
+ c7 {
+ regulator-consumer-supply = "dvddio_pex";
+ regulator-consumer-device = "tegra-pcie";
+ };
+ c8 {
+ regulator-consumer-supply = "avddio_usb";
+ regulator-consumer-device = "tegra-xhci";
+ };
+
+ };
+ };
+
+ ldo1 {
+ regulator-name = "avdd-pll";
+ regulator-min-microvolt = <1050000>;
+ regulator-max-microvolt = <1050000>;
+ regulator-always-on;
+ regulator-boot-on;
+ ti,roof-floor = <3>;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "avdd_pll_m";
+ };
+ c2 {
+ regulator-consumer-supply = "avdd_pll_ap_c2_c3";
+ };
+ c3 {
+ regulator-consumer-supply = "avdd_pll_cud2dpd";
+ };
+ c4 {
+ regulator-consumer-supply = "avdd_pll_c4";
+ };
+ c5 {
+ regulator-consumer-supply = "avdd_lvds0_io";
+ };
+ c6 {
+ regulator-consumer-supply = "vddio_ddr_hs";
+ };
+ c7 {
+ regulator-consumer-supply = "avdd_pll_erefe";
+ };
+ c8 {
+ regulator-consumer-supply = "avdd_pll_x";
+ };
+ c9 {
+ regulator-consumer-supply = "avdd_pll_cg";
+ };
+ c10 {
+ regulator-consumer-supply = "avdd_pex_pll";
+ regulator-consumer-device = "tegra-pcie";
+ };
+ c11 {
+ regulator-consumer-supply = "avdd_hdmi_pll";
+ regulator-consumer-device = "tegradc.1";
+ };
+
+ };
+ };
+
+ ldo2 {
+ regulator-name = "vdd-aud";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <1200000>;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "v_ldo2";
+ regulator-consumer-device = "tegra-snd-rt5677.0";
+ };
+
+ };
+ };
+
+ ldo3 {
+ regulator-name = "vdd-sensor-io";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-always-on;
+ regulator-boot-on;
+
+ consumers {
+ };
+ };
+
+ ldo4 {
+ regulator-name = "vdd-sensor-2v85";
+ regulator-min-microvolt = <2850000>;
+ regulator-max-microvolt = <2850000>;
+ regulator-always-on;
+ regulator-boot-on;
+
+ consumers {
+ };
+ };
+
+ ldo5 {
+ regulator-name = "avdd-dsi-csi";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <1200000>;
+ regulator-always-on;
+ regulator-boot-on;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "vddio_hsic";
+ regulator-consumer-device = "tegra-ehci.1";
+ };
+ c2 {
+ regulator-consumer-supply = "vddio_hsic";
+ regulator-consumer-device = "tegra-ehci.2";
+ };
+ c3 {
+ regulator-consumer-supply = "vddio_hsic";
+ regulator-consumer-device = "tegra-xhci";
+ };
+ c4 {
+ regulator-consumer-supply = "avdd_dsi_csi";
+ regulator-consumer-device = "tegradc.0";
+
+ };
+ c5 {
+ regulator-consumer-supply = "avdd_dsi_csi";
+ regulator-consumer-device = "tegradc.1";
+ };
+ c6 {
+ regulator-consumer-supply = "avdd_dsi_csi";
+ regulator-consumer-device = "vi.0";
+ };
+ c7 {
+ regulator-consumer-supply = "avdd_dsi_csi";
+ regulator-consumer-device = "vi.1";
+ };
+ c8 {
+ regulator-consumer-supply = "pwrdet_mipi";
+ };
+ c9 {
+ regulator-consumer-supply = "avdd_hsic_com";
+ };
+ c10 {
+ regulator-consumer-supply = "avdd_hsic_mdm";
+ };
+ c11 {
+ regulator-consumer-supply = "vdd_lcd_bl";
+ };
+
+ };
+ };
+
+ ldo6 {
+ regulator-name = "vdd-1v5-ldo6";
+ regulator-min-microvolt = <1500000>;
+ regulator-max-microvolt = <1800000>;
+
+ consumers {
+ };
+ };
+
+ ldo7 {
+ regulator-name = "v-slim-1v0";
+ regulator-min-microvolt = <1000000>;
+ regulator-max-microvolt = <1100000>;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "slimport_dvdd";
+ };
+ };
+ };
+
+ ldo8 {
+ regulator-name = "vdd-rtc";
+ regulator-min-microvolt = <950000>;
+ regulator-max-microvolt = <950000>;
+ regulator-always-on;
+ regulator-boot-on;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "vdd_rtc";
+ };
+ };
+ };
+
+ ldo9 {
+ regulator-name = "vdd-3p3-ldo9";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <3300000>;
+
+ consumers {
+ };
+ };
+
+ ldoln {
+ regulator-name = "vddio-hv";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ ti,roof-floor = <3>;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "pwrdet_hv";
+ };
+ };
+ };
+
+ ldousb {
+ regulator-name = "avdd-usb";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-always-on;
+ regulator-boot-on;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "pwrdet_pex_ctl";
+ };
+ c2 {
+ regulator-consumer-supply = "avdd_usb";
+ regulator-consumer-device = "tegra-udc.0";
+
+ };
+ c3 {
+ regulator-consumer-supply = "avdd_usb";
+ regulator-consumer-device = "tegra-ehci.0";
+ };
+ c4 {
+ regulator-consumer-supply = "avdd_usb";
+ regulator-consumer-device = "tegra-ehci.1";
+ };
+ c5 {
+ regulator-consumer-supply = "avdd_usb";
+ regulator-consumer-device = "tegra-ehci.2";
+ };
+ c6 {
+ regulator-consumer-supply = "hvdd_usb";
+ regulator-consumer-device = "tegra-xhci";
+
+ };
+ c7 {
+ regulator-consumer-supply = "hvdd_pex";
+ regulator-consumer-device = "tegra-pcie";
+ };
+ c8 {
+ regulator-consumer-supply = "hvdd_pex_pll_e";
+ regulator-consumer-device = "tegra-pcie";
+ };
+
+ };
+ };
+ };
+ };
+
+ voltage_monitor {
+ compatible = "ti,palmas-voltage-monitor";
+ ti,use-vbat-monitor;
+ };
+};
diff --git a/arch/arm/boot/dts/tegra124-flounder.dts b/arch/arm/boot/dts/tegra124-flounder.dts
new file mode 100644
index 0000000..c8fe1c0
--- /dev/null
+++ b/arch/arm/boot/dts/tegra124-flounder.dts
@@ -0,0 +1,66 @@
+/dts-v1/;
+
+#include "tegra124-flounder-generic.dtsi"
+
+/ {
+ model = "Flounder";
+ compatible = "google,flounder", "nvidia,tegra124";
+ nvidia-boardids = "1780:1100:2:B:7", "1794:1000:0:A:6";
+ #address-cells = <1>;
+ #size-cells = <1>;
+
+ chosen {
+ bootargs = "tegraid=40.0.0.00.00 vmalloc=256M video=tegrafb console=ttyS0,115200n8 earlyprintk";
+ linux,initrd-start = <0x85000000>;
+ linux,initrd-end = <0x851bc400>;
+ };
+
+ pinmux {
+ status = "disabled";
+ };
+
+ i2c@7000c000 {
+ status = "okay";
+
+ max17050@36 {
+ compatible = "maxim,max17050";
+ reg = <0x36>;
+ };
+
+ bq2419x: bq2419x@6b {
+ compatible = "ti,bq2419x";
+ reg = <0x6b>;
+
+ charger {
+ regulator-name = "batt_regulator";
+ regulator-max-microamp = <3000>;
+ watchdog-timeout = <40>;
+ rtc-alarm-time = <3600>;
+ auto-recharge-time = <1800>;
+ consumers {
+ c1 {
+ regulator-consumer-supply = "usb_bat_chg";
+ regulator-consumer-device = "tegra-udc.0";
+ };
+ };
+ };
+
+ vbus {
+ regulator-name = "vbus_regulator";
+ consumers {
+ c1 {
+ regulator-consumer-supply = "usb_vbus";
+ regulator-consumer-device = "tegra-ehci.0";
+ };
+
+ c2 {
+ regulator-consumer-supply = "usb_vbus";
+ regulator-consumer-device = "tegra-otg";
+ };
+ };
+ };
+ };
+
+};
+
diff --git a/arch/arm/mach-tegra/Kconfig b/arch/arm/mach-tegra/Kconfig
index f29663c..cc3fbf6 100644
--- a/arch/arm/mach-tegra/Kconfig
+++ b/arch/arm/mach-tegra/Kconfig
@@ -89,6 +89,7 @@
select PCI_MSI if PCI_TEGRA
select NVMAP_CACHE_MAINT_BY_SET_WAYS if TEGRA_NVMAP && !ARM64
select NVMAP_CACHE_MAINT_BY_SET_WAYS_ON_ONE_CPU if TEGRA_NVMAP && !ARM64
+ select NVMAP_USE_CMA_FOR_CARVEOUT if TEGRA_NVMAP && CMA && TRUSTED_LITTLE_KERNEL
select ARCH_TEGRA_HAS_CL_DVFS
select TEGRA_DUAL_CBUS
select SOC_BUS
@@ -164,6 +165,41 @@
help
Support for NVIDIA ARDBEG Development platform
+config MACH_FLOUNDER
+ bool "Flounder board"
+ depends on ARCH_TEGRA_12x_SOC
+ select MACH_HAS_SND_SOC_TEGRA_RT5677 if SND_SOC
+ help
+ Support for Flounder
+
+config QCT_9K_MODEM
+ bool "QCT MDM9K"
+ depends on MACH_T132_FLOUNDER
+ default n
+ select QCOM_USB_MODEM_POWER
+ select MDM_FTRACE_DEBUG
+ select MDM_ERRMSG
+ select MDM_POWEROFF_MODEM_IN_OFFMODE_CHARGING
+ select MSM_SUBSYSTEM_RESTART
+ select MSM_HSIC_SYSMON
+ select MSM_SYSMON_COMM
+ select MSM_RMNET_USB
+ select RMNET_DATA
+ select RMNET_DATA_DEBUG_PKT
+ select USB_QCOM_DIAG_BRIDGE
+ select USB_QCOM_MDM_BRIDGE
+ select USB_QCOM_KS_BRIDGE
+ select DIAG_CHAR
+ select DIAG_OVER_USB
+ select DIAG_HSIC_PIPE
+ select USB_ANDROID_DIAG
+ select USB_ANDROID_RMNET
+ select MSM_RMNET_BAM
+ select MODEM_SUPPORT
+ select MDM_SYSEDP
+ help
+ Support for Qualcomm 9K modem
+
config MACH_LOKI
bool "Loki board"
depends on ARCH_TEGRA_12x_SOC
diff --git a/arch/arm/mach-tegra/Makefile b/arch/arm/mach-tegra/Makefile
index 101192d..e044f87 100644
--- a/arch/arm/mach-tegra/Makefile
+++ b/arch/arm/mach-tegra/Makefile
@@ -168,6 +168,23 @@
obj-${CONFIG_SYSEDP_FRAMEWORK} += board-ardbeg-sysedp.o
endif
+obj-${CONFIG_MACH_FLOUNDER} += board-flounder.o
+obj-${CONFIG_MACH_FLOUNDER} += board-flounder-gps.o
+obj-${CONFIG_MACH_FLOUNDER} += board-flounder-kbc.o
+obj-${CONFIG_MACH_FLOUNDER} += board-flounder-sdhci.o
+obj-${CONFIG_MACH_FLOUNDER} += board-flounder-sensors.o
+obj-${CONFIG_MACH_FLOUNDER} += board-flounder-panel.o
+obj-${CONFIG_MACH_FLOUNDER} += board-flounder-memory.o
+obj-${CONFIG_MACH_FLOUNDER} += board-flounder-pinmux.o
+obj-${CONFIG_MACH_FLOUNDER} += board-flounder-power.o
+obj-${CONFIG_MACH_FLOUNDER} += panel-j-qxga-8-9.o
+obj-${CONFIG_MACH_FLOUNDER} += panel-s-wqxga-10-1.o
+obj-${CONFIG_MACH_FLOUNDER} += flounder-bdaddress.o
+obj-${CONFIG_SYSEDP_FRAMEWORK} += board-flounder-sysedp.o
+
+obj-${CONFIG_MACH_FLOUNDER} += board-flounder-bootparams.o
+obj-${CONFIG_MACH_FLOUNDER} += board-flounder-mdm9k.o
+
obj-${CONFIG_MACH_LOKI} += board-loki.o
obj-${CONFIG_MACH_LOKI} += board-loki-kbc.o
obj-${CONFIG_MACH_LOKI} += board-loki-sensors.o
diff --git a/arch/arm/mach-tegra/bcm_gps_hostwake.h b/arch/arm/mach-tegra/bcm_gps_hostwake.h
new file mode 100644
index 0000000..a545efb
--- /dev/null
+++ b/arch/arm/mach-tegra/bcm_gps_hostwake.h
@@ -0,0 +1,23 @@
+/******************************************************************************
+* Copyright (C) 2013 Broadcom Corporation
+*
+*
+* This program is free software; you can redistribute it and/or
+* modify it under the terms of the GNU General Public License as
+* published by the Free Software Foundation version 2.
+*
+* This program is distributed "as is" WITHOUT ANY WARRANTY of any
+* kind, whether express or implied; without even the implied warranty
+* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+* GNU General Public License for more details.
+******************************************************************************/
+
+#ifndef _BCM_GPS_HOSTWAKE_H_
+#define _BCM_GPS_HOSTWAKE_H_
+
+struct bcm_gps_hostwake_platform_data {
+ /* HOST_WAKE : to indicate that the ASIC has a geofence event. */
+ unsigned int gpio_hostwake;
+};
+
+#endif /*_BCM_GPS_HOSTWAKE_H_*/
diff --git a/arch/arm/mach-tegra/board-ardbeg-sensors.c b/arch/arm/mach-tegra/board-ardbeg-sensors.c
index fddf15f..b1e0caa 100644
--- a/arch/arm/mach-tegra/board-ardbeg-sensors.c
+++ b/arch/arm/mach-tegra/board-ardbeg-sensors.c
@@ -30,13 +30,16 @@
#include <media/ar0261.h>
#include <media/imx135.h>
#include <media/imx179.h>
+#include <media/imx219.h>
#include <media/dw9718.h>
#include <media/as364x.h>
#include <media/ov5693.h>
+#include <media/ov9760.h>
#include <media/ov7695.h>
#include <media/mt9m114.h>
#include <media/ad5823.h>
#include <media/max77387.h>
+#include <media/drv201.h>
#include <linux/platform_device.h>
#include <media/soc_camera.h>
@@ -256,6 +259,177 @@
.io_dpd_bit = 12,
};
+static struct regulator *imx219_ext_reg1;
+static struct regulator *imx219_ext_reg2;
+
+static int ardbeg_imx219_get_extra_regulators(void)
+{
+ if (!imx219_ext_reg1) {
+ imx219_ext_reg1 = regulator_get(NULL, "imx135_reg1");
+ if (WARN_ON(IS_ERR(imx219_ext_reg1))) {
+ pr_err("%s: can't get regulator imx135_reg1: %ld\n",
+ __func__, PTR_ERR(imx219_ext_reg1));
+ imx219_ext_reg1 = NULL;
+ return -ENODEV;
+ }
+ }
+
+ if (!imx219_ext_reg2) {
+ imx219_ext_reg2 = regulator_get(NULL, "imx135_reg2");
+ if (WARN_ON(IS_ERR(imx219_ext_reg2))) {
+ pr_err("%s: can't get regulator imx135_reg2: %ld\n",
+ __func__, PTR_ERR(imx219_ext_reg2));
+ imx219_ext_reg2 = NULL;
+ return -ENODEV;
+ }
+ }
+
+ return 0;
+}
+
+static int ardbeg_imx219_power_on(struct imx219_power_rail *pw)
+{
+ int err;
+
+ if (unlikely(WARN_ON(!pw || !pw->iovdd || !pw->avdd)))
+ return -EFAULT;
+
+ /* disable CSIA/B IOs DPD mode to turn on camera for ardbeg */
+ tegra_io_dpd_disable(&csia_io);
+ tegra_io_dpd_disable(&csib_io);
+
+ if (ardbeg_imx219_get_extra_regulators())
+ goto imx219_poweron_fail;
+
+ err = regulator_enable(imx219_ext_reg1);
+ if (unlikely(err))
+ goto imx219_poweron_fail;
+
+ err = regulator_enable(imx219_ext_reg2);
+ if (unlikely(err))
+ goto imx219_ext_reg2_fail;
+
+ gpio_set_value(CAM1_PWDN, 0);
+ gpio_set_value(CAM_RSTN, 0);
+ usleep_range(10, 20);
+
+ err = regulator_enable(pw->avdd);
+ if (err)
+ goto imx219_avdd_fail;
+
+ err = regulator_enable(pw->iovdd);
+ if (err)
+ goto imx219_iovdd_fail;
+
+ usleep_range(1, 2);
+ gpio_set_value(CAM1_PWDN, 1);
+ gpio_set_value(CAM_RSTN, 1);
+
+ usleep_range(300, 310);
+
+ /* return 1 to skip the in-driver power_on sequence */
+ return 1;
+
+imx219_iovdd_fail:
+ regulator_disable(pw->avdd);
+
+imx219_avdd_fail:
+ if (imx219_ext_reg2)
+ regulator_disable(imx219_ext_reg2);
+
+imx219_ext_reg2_fail:
+ if (imx219_ext_reg1)
+ regulator_disable(imx219_ext_reg1);
+
+imx219_poweron_fail:
+
+ tegra_io_dpd_enable(&csia_io);
+ tegra_io_dpd_enable(&csib_io);
+ pr_err("%s failed.\n", __func__);
+ return -ENODEV;
+}
+
+static int ardbeg_imx219_power_off(struct imx219_power_rail *pw)
+{
+ if (unlikely(WARN_ON(!pw || !pw->iovdd || !pw->avdd))) {
+ tegra_io_dpd_enable(&csia_io);
+ tegra_io_dpd_enable(&csib_io);
+ return -EFAULT;
+ }
+
+ regulator_disable(pw->iovdd);
+ regulator_disable(pw->avdd);
+
+ regulator_disable(imx219_ext_reg1);
+ regulator_disable(imx219_ext_reg2);
+
+ /* put CSIA/B IOs into DPD mode to save additional power for ardbeg */
+ tegra_io_dpd_enable(&csia_io);
+ tegra_io_dpd_enable(&csib_io);
+ return 0;
+}
+
+struct imx219_platform_data ardbeg_imx219_pdata = {
+ .power_on = ardbeg_imx219_power_on,
+ .power_off = ardbeg_imx219_power_off,
+};
+
+static int ardbeg_ov9760_power_on(struct ov9760_power_rail *pw)
+{
+ int err;
+
+ /* disable CSIE IO DPD mode to turn on camera for ardbeg */
+ tegra_io_dpd_disable(&csie_io);
+
+ if (unlikely(WARN_ON(!pw || !pw->iovdd || !pw->avdd)))
+ return -EFAULT;
+
+ gpio_set_value(CAM_RSTN, 0);
+ gpio_set_value(CAM2_PWDN, 0);
+
+ err = regulator_enable(pw->avdd);
+ if (err)
+ goto ov9760_avdd_fail;
+
+ gpio_set_value(CAM2_PWDN, 1);
+
+ err = regulator_enable(pw->iovdd);
+ if (err)
+ goto ov9760_iovdd_fail;
+
+ usleep_range(1000, 1020);
+ gpio_set_value(CAM_RSTN, 1);
+
+ /* return 1 to skip the in-driver power_on sequence */
+ return 1;
+
+ov9760_iovdd_fail:
+ regulator_disable(pw->avdd);
+
+ov9760_avdd_fail:
+ return -ENODEV;
+}
+
+static int ardbeg_ov9760_power_off(struct ov9760_power_rail *pw)
+{
+ if (unlikely(WARN_ON(!pw || !pw->iovdd || !pw->avdd))) {
+ tegra_io_dpd_disable(&csie_io);
+ return -EFAULT;
+ }
+
+ gpio_set_value(CAM_RSTN, 0);
+ usleep_range(1000, 1020);
+
+ regulator_disable(pw->iovdd);
+ regulator_disable(pw->avdd);
+
+ gpio_set_value(CAM2_PWDN, 0);
+
+ /* put CSIE IOs into DPD mode to save additional power for ardbeg */
+ tegra_io_dpd_enable(&csie_io);
+
+ return 0;
+}
+
+struct ov9760_platform_data ardbeg_ov9760_data = {
+ .power_on = ardbeg_ov9760_power_on,
+ .power_off = ardbeg_ov9760_power_off,
+ .mclk_name = "mclk2",
+};
+
static int ardbeg_ar0261_power_on(struct ar0261_power_rail *pw)
{
int err;
@@ -1030,6 +1204,67 @@
.has_eeprom = 0,
};
+static int ardbeg_drv201_power_on(struct drv201_power_rail *pw)
+{
+ int err;
+ pr_info("%s\n", __func__);
+
+ if (unlikely(!pw || !pw->vdd || !pw->vdd_i2c))
+ return -EFAULT;
+
+ err = regulator_enable(pw->vdd);
+ if (unlikely(err))
+ goto drv201_vdd_fail;
+
+ err = regulator_enable(pw->vdd_i2c);
+ if (unlikely(err))
+ goto drv201_i2c_fail;
+
+ usleep_range(1000, 1020);
+
+ /* return 1 to skip the in-driver power_on sequence */
+ pr_debug("%s --\n", __func__);
+ return 1;
+
+drv201_i2c_fail:
+ regulator_disable(pw->vdd);
+
+drv201_vdd_fail:
+ pr_err("%s FAILED\n", __func__);
+ return -ENODEV;
+}
+
+static int ardbeg_drv201_power_off(struct drv201_power_rail *pw)
+{
+ pr_info("%s\n", __func__);
+
+ if (unlikely(!pw || !pw->vdd || !pw->vdd_i2c))
+ return -EFAULT;
+
+ regulator_disable(pw->vdd);
+ regulator_disable(pw->vdd_i2c);
+
+ return 1;
+}
+
+static struct nvc_focus_cap ardbeg_drv201_cap = {
+ .version = NVC_FOCUS_CAP_VER2,
+ .settle_time = 35,
+ .focus_macro = 810,
+ .focus_infinity = 50,
+ .focus_hyper = 50,
+};
+
+static struct drv201_platform_data ardbeg_drv201_pdata = {
+ .cfg = 0,
+ .num = 0,
+ .sync = 0,
+ .dev_name = "focuser",
+ .cap = &ardbeg_drv201_cap,
+ .power_on = ardbeg_drv201_power_on,
+ .power_off = ardbeg_drv201_power_off,
+};
+
static int ardbeg_ad5823_power_on(struct ad5823_platform_data *pdata)
{
int err = 0;
diff --git a/arch/arm/mach-tegra/board-flounder-bootparams.c b/arch/arm/mach-tegra/board-flounder-bootparams.c
new file mode 100644
index 0000000..4c8c00d
--- /dev/null
+++ b/arch/arm/mach-tegra/board-flounder-bootparams.c
@@ -0,0 +1,132 @@
+/*
+ * linux/arch/arm/mach-tegra/board-flounder-bootparams.c
+ *
+ * Copyright (C) 2014 HTC Corporation.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/platform_device.h>
+#include <mach/board_htc.h>
+#include <linux/of.h>
+
+static int mfg_mode;
+int __init board_mfg_mode_initialize(char *s)
+{
+ if (!strcmp(s, "normal"))
+ mfg_mode = BOARD_MFG_MODE_NORMAL;
+ else if (!strcmp(s, "factory2"))
+ mfg_mode = BOARD_MFG_MODE_FACTORY2;
+ else if (!strcmp(s, "recovery"))
+ mfg_mode = BOARD_MFG_MODE_RECOVERY;
+ else if (!strcmp(s, "charge"))
+ mfg_mode = BOARD_MFG_MODE_CHARGE;
+ else if (!strcmp(s, "power_test"))
+ mfg_mode = BOARD_MFG_MODE_POWERTEST;
+ else if (!strcmp(s, "mfgkernel"))
+ mfg_mode = BOARD_MFG_MODE_MFGKERNEL;
+ else if (!strcmp(s, "modem_calibration"))
+ mfg_mode = BOARD_MFG_MODE_MODEM_CALIBRATION;
+
+ return 1;
+}
+
+int board_mfg_mode(void)
+{
+ return mfg_mode;
+}
+EXPORT_SYMBOL(board_mfg_mode);
+
+__setup("androidboot.mode=", board_mfg_mode_initialize);
+
+static unsigned long radio_flag;
+int __init radio_flag_init(char *s)
+{
+ if (strict_strtoul(s, 16, &radio_flag))
+ return -EINVAL;
+
+ return 1;
+}
+__setup("radioflag=", radio_flag_init);
+
+unsigned int get_radio_flag(void)
+{
+ return radio_flag;
+}
+EXPORT_SYMBOL(get_radio_flag);
+
+static int mdm_sku;
+static int get_modem_sku(void)
+{
+ int err, pid;
+
+ err = of_property_read_u32(
+ of_find_node_by_path("/chosen/board_info"),
+ "pid", &pid);
+ if (err) {
+ pr_err("%s: can't read pid from DT, error = %d (set to default sku)\n", __func__, err);
+ mdm_sku = MDM_SKU_WIFI_ONLY;
+ return mdm_sku;
+ }
+
+ switch (pid) {
+ case 302: /* #F sku */
+ mdm_sku = MDM_SKU_WIFI_ONLY;
+ break;
+
+ case 303: /* #UL sku */
+ mdm_sku = MDM_SKU_UL;
+ break;
+
+ case 304: /* #WL sku */
+ mdm_sku = MDM_SKU_WL;
+ break;
+
+ default:
+ mdm_sku = MDM_SKU_WIFI_ONLY;
+ break;
+ }
+
+ return mdm_sku;
+}
+
+static unsigned int radio_image_status;
+static int __init set_radio_image_status(char *read_mdm_version)
+{
+ /* TODO: Try to read baseband version */
+ if (strlen(read_mdm_version)) {
+ if (strncmp(read_mdm_version, "T1.xx", 3) == 0)
+ radio_image_status = RADIO_IMG_EXIST;
+ else
+ radio_image_status = RADIO_IMG_NON_EXIST;
+ } else {
+ radio_image_status = RADIO_IMG_NON_EXIST;
+ }
+
+ return 1;
+}
+__setup("androidboot.baseband=", set_radio_image_status);
+
+int get_radio_image_status(void)
+{
+ return radio_image_status;
+}
+EXPORT_SYMBOL(get_radio_image_status);
+
+bool is_mdm_modem(void)
+{
+ bool ret = false;
+ int mdm_modem = get_modem_sku();
+
+ if ((mdm_modem == MDM_SKU_UL) || (mdm_modem == MDM_SKU_WL))
+ ret = true;
+
+ return ret;
+}
+EXPORT_SYMBOL(is_mdm_modem);
diff --git a/arch/arm/mach-tegra/board-flounder-gps.c b/arch/arm/mach-tegra/board-flounder-gps.c
new file mode 100644
index 0000000..3a972cf
--- /dev/null
+++ b/arch/arm/mach-tegra/board-flounder-gps.c
@@ -0,0 +1,293 @@
+/******************************************************************************
+* Copyright (C) 2013 Broadcom Corporation
+*
+*
+* This program is free software; you can redistribute it and/or
+* modify it under the terms of the GNU General Public License as
+* published by the Free Software Foundation version 2.
+*
+* This program is distributed "as is" WITHOUT ANY WARRANTY of any
+* kind, whether express or implied; without even the implied warranty
+* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+* GNU General Public License for more details.
+******************************************************************************/
+
+#include <linux/device.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/input.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/slab.h>
+#include <linux/workqueue.h>
+#include <linux/uaccess.h>
+#include <linux/poll.h>
+#include <linux/miscdevice.h>
+#include <linux/wait.h>
+#include <linux/sched.h>
+#include <linux/platform_device.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/fs.h>
+#include <linux/list.h>
+#include <linux/io.h>
+#include <linux/version.h>
+#include <linux/workqueue.h>
+#include <linux/unistd.h>
+#include <linux/bug.h>
+#include <linux/mutex.h>
+#include <linux/gpio.h>
+#include <linux/wakelock.h>
+
+#include "bcm_gps_hostwake.h"
+
+#define GPS_VERSION "1.01"
+#define PFX "GPS: "
+
+/* gps daemon will access "/dev/gps_geofence_wake" */
+#define HOST_WAKE_MODULE_NAME "gps_geofence_wake"
+
+/* driver structure for HOST_WAKE module */
+struct gps_geofence_wake {
+ /* irq from gpio_to_irq() */
+ int irq;
+ /* HOST_WAKE_GPIO */
+ int host_req_pin;
+ /* misc driver structure */
+ struct miscdevice misc;
+ /* wake_lock */
+ struct wake_lock wake_lock;
+};
+static struct gps_geofence_wake g_geofence_wake;
+
+static int gps_geofence_wake_open(struct inode *inode, struct file *filp)
+{
+ pr_debug(PFX "%s\n", __func__);
+ return 0;
+}
+
+static int gps_geofence_wake_release(struct inode *inode, struct file *filp)
+{
+ pr_debug(PFX "%s\n", __func__);
+ return 0;
+}
+
+static long gps_geofence_wake_ioctl(struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ pr_debug(PFX "%s\n", __func__);
+ return 0;
+}
+
+static const struct file_operations gps_geofence_wake_fops = {
+ .owner = THIS_MODULE,
+ .open = gps_geofence_wake_open,
+ .release = gps_geofence_wake_release,
+ .unlocked_ioctl = gps_geofence_wake_ioctl
+};
+
+/* set/reset wake lock by HOST_WAKE level
+* \param gpio the value of HOST_WAKE_GPIO */
+
+static void gps_geofence_wake_lock(int gpio)
+{
+ struct gps_geofence_wake *ac_data = &g_geofence_wake;
+ pr_debug(PFX "%s : gpio value = %d\n", __func__, gpio);
+
+ if (gpio)
+ wake_lock(&ac_data->wake_lock);
+ else
+ wake_unlock(&ac_data->wake_lock);
+}
+
+static irqreturn_t gps_host_wake_isr(int irq, void *dev)
+{
+ struct gps_geofence_wake *ac_data = &g_geofence_wake;
+ int gps_host_wake = ac_data->host_req_pin;
+ char gpio_value = 0x00;
+
+ pr_debug(PFX "%s\n", __func__);
+
+ gpio_value = gpio_get_value(gps_host_wake);
+
+ /* wake_lock */
+ gps_geofence_wake_lock(gpio_value);
+
+ return IRQ_HANDLED;
+}
+
+/* initialize GPIO and IRQ
+* \param gpio the GPIO of HOST_WAKE
+* \return if SUCCESS, return the id of IRQ, if FAIL, return -EIO */
+static int gps_gpio_irq_init(int gpio)
+{
+ int ret;
+ int irq;
+
+ pr_debug(PFX "%s\n", __func__);
+ /* 1. Set GPIO */
+ if (gpio_request(gpio, "HOST_WAKE")) {
+ pr_err(PFX
+ "Can't request HOST_REQ GPIO %d. "
+ "It may already be registered in init.xyz.rc\n",
+ gpio);
+ return -EIO;
+ }
+ gpio_export(gpio, 1);
+ gpio_direction_input(gpio);
+
+ /* 2. Set IRQ */
+ irq = gpio_to_irq(gpio);
+ if (irq < 0) {
+ pr_err(PFX "Could not get IRQ for HOST_WAKE GPIO %d, err = %d\n",
+ gpio, irq);
+ gpio_free(gpio);
+ return -EIO;
+ }
+
+ ret = request_irq(irq, gps_host_wake_isr,
+ IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+ "gps_host_wake", NULL);
+ if (ret) {
+ pr_err(PFX "Request_host wake irq failed.\n");
+ gpio_free(gpio);
+ return -EIO;
+ }
+
+ ret = irq_set_irq_wake(irq, 1);
+
+ if (ret) {
+ pr_err(PFX "Set_irq_wake failed.\n");
+ free_irq(irq, NULL);
+ gpio_free(gpio);
+ return -EIO;
+ }
+
+ return irq;
+}
+
+/* cleanup GPIO and IRQ */
+static void gps_gpio_irq_cleanup(int gpio, int irq)
+{
+ pr_debug(PFX "%s\n", __func__);
+ free_irq(irq, NULL);
+ gpio_free(gpio);
+}
+
+static int gps_hostwake_probe(struct platform_device *pdev)
+{
+ int ret;
+ int irq;
+
+ struct gps_geofence_wake *ac_data = &g_geofence_wake;
+ struct bcm_gps_hostwake_platform_data *pdata;
+
+ /* HOST_WAKE : to indicate that the ASIC has a geofence event
+ * (unsigned int gpio_hostwake) */
+
+ pr_debug(PFX "%s\n", __func__);
+
+ /* Read device tree information */
+ pdata = pdev->dev.platform_data;
+ if (!pdata) {
+ pr_err(PFX "%s : failed to get platform_data.\n", __func__);
+ return -EINVAL;
+ }
+
+ /* 1. Init GPIO and IRQ for HOST_WAKE */
+ irq = gps_gpio_irq_init(pdata->gpio_hostwake);
+
+ if (irq < 0)
+ return -EIO;
+
+ /* 2. Register Driver */
+ memset(ac_data, 0, sizeof(struct gps_geofence_wake));
+
+ /* 2.1 Misc device setup */
+ ac_data->misc.minor = MISC_DYNAMIC_MINOR;
+ ac_data->misc.name = HOST_WAKE_MODULE_NAME;
+ ac_data->misc.fops = &gps_geofence_wake_fops;
+
+ /* 2.2 Information that be used later */
+ ac_data->irq = irq;
+ ac_data->host_req_pin = pdata->gpio_hostwake;
+
+ pr_notice(PFX "misc register, name %s, irq %d, host req pin num %d\n",
+ ac_data->misc.name, irq, ac_data->host_req_pin);
+ /* 2.3 Register misc driver */
+ ret = misc_register(&ac_data->misc);
+ if (ret) {
+ pr_err(PFX "cannot register gps geofence wake miscdev on minor=%d (%d)\n",
+ MISC_DYNAMIC_MINOR, ret);
+ return ret;
+ }
+
+ /* 3. Init wake_lock */
+ wake_lock_init(&ac_data->wake_lock, WAKE_LOCK_SUSPEND,
+ "gps_geofence_wakelock");
+ return 0;
+}
+
+static int gps_hostwake_remove(struct platform_device *pdev)
+{
+ struct gps_geofence_wake *ac_data = &g_geofence_wake;
+ int ret = 0;
+
+ pr_debug(PFX "%s\n", __func__);
+ /* 1. Cleanup GPIO and IRQ */
+ gps_gpio_irq_cleanup(ac_data->host_req_pin, ac_data->irq);
+
+ /* 2. Cleanup driver */
+
+ ret = misc_deregister(&ac_data->misc);
+ if (ret) {
+ pr_err(PFX "cannot unregister gps geofence wake miscdev on minor=%d (%d)\n",
+ MISC_DYNAMIC_MINOR, ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+static int gps_hostwake_suspend(
+ struct platform_device *pdev, pm_message_t state)
+{
+ pr_debug(PFX "%s\n", __func__);
+ return 0;
+}
+
+static int gps_hostwake_resume(struct platform_device *pdev)
+{
+ pr_debug(PFX "%s\n", __func__);
+ return 0;
+}
+
+static struct platform_driver gps_driver = {
+ .probe = gps_hostwake_probe,
+ .remove = gps_hostwake_remove,
+ .suspend = gps_hostwake_suspend,
+ .resume = gps_hostwake_resume,
+ .driver = {
+ .name = "bcm-gps-hostwake",
+ .owner = THIS_MODULE,
+ },
+};
+
+static int __init gps_hostwake_init(void)
+{
+ pr_notice(PFX "Init: Broadcom GPS HostWake Geofence Driver v%s\n",
+ GPS_VERSION);
+ return platform_driver_register(&gps_driver);
+}
+
+static void __exit gps_hostwake_exit(void)
+{
+ pr_notice(PFX "Exit: Broadcom GPS HostWake Geofence Driver v%s\n",
+ GPS_VERSION);
+ platform_driver_unregister(&gps_driver);
+}
+
+module_init(gps_hostwake_init);
+module_exit(gps_hostwake_exit);
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Broadcom GPS Driver with hostwake interrupt");
diff --git a/arch/arm/mach-tegra/board-flounder-kbc.c b/arch/arm/mach-tegra/board-flounder-kbc.c
new file mode 100644
index 0000000..cc7d107
--- /dev/null
+++ b/arch/arm/mach-tegra/board-flounder-kbc.c
@@ -0,0 +1,67 @@
+/*
+ * arch/arm/mach-tegra/board-flounder-kbc.c
+ * Keys configuration for Nvidia tegra4 flounder platform.
+ *
+ * Copyright (c) 2012-2013, NVIDIA CORPORATION. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
+ * 02111-1307, USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/platform_device.h>
+#include <linux/input.h>
+#include <linux/io.h>
+#include <linux/gpio.h>
+#include <linux/gpio_keys.h>
+#include <linux/delay.h>
+#include <linux/keycombo.h>
+#include <linux/keyreset.h>
+#include <linux/reboot.h>
+#include <linux/workqueue.h>
+
+#include "iomap.h"
+#include "wakeups-t12x.h"
+#include "gpio-names.h"
+
+#define RESTART_DELAY_MS 7000
+static int print_reset_warning(void)
+{
+ pr_info("Power held for %ld seconds. Restarting soon.\n",
+ RESTART_DELAY_MS / MSEC_PER_SEC);
+ return 0;
+}
+
+static struct keyreset_platform_data flounder_reset_keys_pdata = {
+ .reset_fn = &print_reset_warning,
+ .key_down_delay = RESTART_DELAY_MS,
+ .keys_down = {
+ KEY_POWER,
+ 0
+ }
+};
+
+static struct platform_device flounder_reset_keys_device = {
+ .name = KEYRESET_NAME,
+ .dev.platform_data = &flounder_reset_keys_pdata,
+ .id = PLATFORM_DEVID_AUTO,
+};
+
+int __init flounder_kbc_init(void)
+{
+ if (platform_device_register(&flounder_reset_keys_device))
+		pr_warn("%s: failed to register key reset device\n",
+ __func__);
+ return 0;
+}
diff --git a/arch/arm/mach-tegra/board-flounder-mdm9k.c b/arch/arm/mach-tegra/board-flounder-mdm9k.c
new file mode 100644
index 0000000..2e88dca
--- /dev/null
+++ b/arch/arm/mach-tegra/board-flounder-mdm9k.c
@@ -0,0 +1,378 @@
+/*
+ * arch/arm/mach-tegra/board-flounder-mdm9k.c
+ *
+ * Copyright (C) 2014 HTC Corporation.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#define pr_fmt(fmt) "[MDM]: " fmt
+
+#include <linux/gpio.h>
+#include <linux/delay.h>
+#include <linux/err.h>
+#include <linux/platform_device.h>
+#include <mach/pinmux-t12.h>
+#include <mach/pinmux.h>
+#include <linux/platform_data/qcom_usb_modem_power.h>
+
+#include "cpu-tegra.h"
+#include "devices.h"
+#include "board.h"
+#include "board-common.h"
+#include "board-flounder.h"
+#include "tegra-board-id.h"
+#include <mach/board_htc.h>
+
+static struct gpio modem_gpios[] = { /* QCT 9K modem */
+ {MDM2AP_ERRFATAL, GPIOF_IN, "MDM2AP_ERRFATAL"},
+ {AP2MDM_ERRFATAL, GPIOF_OUT_INIT_LOW, "AP2MDM_ERRFATAL"},
+ {MDM2AP_STATUS, GPIOF_IN, "MDM2AP_STATUS"},
+ {AP2MDM_STATUS, GPIOF_OUT_INIT_LOW, "AP2MDM_STATUS"},
+ {MDM2AP_WAKEUP, GPIOF_IN, "MDM2AP_WAKEUP"},
+ {AP2MDM_WAKEUP, GPIOF_OUT_INIT_LOW, "AP2MDM_WAKEUP"},
+ {MDM2AP_HSIC_READY, GPIOF_IN, "MDM2AP_HSIC_READY"},
+ {AP2MDM_PMIC_RESET_N, GPIOF_OUT_INIT_LOW, "AP2MDM_PMIC_RESET_N"},
+ {AP2MDM_IPC1, GPIOF_OUT_INIT_LOW, "AP2MDM_IPC1"},
+ {MDM2AP_IPC3, GPIOF_IN, "MDM2AP_IPC3"},
+ {AP2MDM_VDD_MIN, GPIOF_OUT_INIT_LOW, "AP2MDM_VDD_MIN"},
+ {MDM2AP_VDD_MIN, GPIOF_IN, "MDM2AP_VDD_MIN"},
+ {AP2MDM_CHNL_RDY_CPU, GPIOF_OUT_INIT_LOW, "AP2MDM_CHNL_RDY_CPU"},
+};
+
+static void modem_dump_gpio_value(struct qcom_usb_modem *modem, int gpio_value, char *label);
+
+static int modem_init(struct qcom_usb_modem *modem)
+{
+ int ret = 0;
+
+ if (!modem) {
+ pr_err("%s: mdm_drv = NULL\n", __func__);
+ return -EINVAL;
+ }
+
+ ret = gpio_request_array(modem_gpios, ARRAY_SIZE(modem_gpios));
+ if (ret) {
+		pr_warn("%s: gpio request failed\n", __func__);
+ return ret;
+ }
+
+	if (gpio_is_valid(modem->pdata->ap2mdm_status_gpio))
+		gpio_direction_output(modem->pdata->ap2mdm_status_gpio, 0);
+	else
+		pr_err("invalid gpio: ap2mdm_status_gpio\n");
+	if (gpio_is_valid(modem->pdata->ap2mdm_errfatal_gpio))
+		gpio_direction_output(modem->pdata->ap2mdm_errfatal_gpio, 0);
+	else
+		pr_err("invalid gpio: ap2mdm_errfatal_gpio\n");
+	if (gpio_is_valid(modem->pdata->ap2mdm_wakeup_gpio))
+		gpio_direction_output(modem->pdata->ap2mdm_wakeup_gpio, 0);
+	else
+		pr_err("invalid gpio: ap2mdm_wakeup_gpio\n");
+	if (gpio_is_valid(modem->pdata->ap2mdm_ipc1_gpio))
+		gpio_direction_output(modem->pdata->ap2mdm_ipc1_gpio, 0);
+	else
+		pr_err("invalid gpio: ap2mdm_ipc1_gpio\n");
+
+	if (gpio_is_valid(modem->pdata->mdm2ap_status_gpio))
+		gpio_direction_input(modem->pdata->mdm2ap_status_gpio);
+	else
+		pr_err("invalid gpio: mdm2ap_status_gpio\n");
+	if (gpio_is_valid(modem->pdata->mdm2ap_errfatal_gpio))
+		gpio_direction_input(modem->pdata->mdm2ap_errfatal_gpio);
+	else
+		pr_err("invalid gpio: mdm2ap_errfatal_gpio\n");
+	if (gpio_is_valid(modem->pdata->mdm2ap_wakeup_gpio))
+		gpio_direction_input(modem->pdata->mdm2ap_wakeup_gpio);
+	else
+		pr_err("invalid gpio: mdm2ap_wakeup_gpio\n");
+	if (gpio_is_valid(modem->pdata->mdm2ap_hsic_ready_gpio))
+		gpio_direction_input(modem->pdata->mdm2ap_hsic_ready_gpio);
+	else
+		pr_err("invalid gpio: mdm2ap_hsic_ready_gpio\n");
+	if (gpio_is_valid(modem->pdata->mdm2ap_ipc3_gpio))
+		gpio_direction_input(modem->pdata->mdm2ap_ipc3_gpio);
+	else
+		pr_err("invalid gpio: mdm2ap_ipc3_gpio\n");
+
+ /* export GPIO for user space access through sysfs */
+	if (!modem->mdm_gpio_exported) {
+		int idx;
+
+		for (idx = 0; idx < ARRAY_SIZE(modem_gpios); idx++)
+			gpio_export(modem_gpios[idx].gpio, true);
+	}
+
+ modem->mdm_gpio_exported = 1;
+
+ return 0;
+}
+
+static void modem_power_on(struct qcom_usb_modem *modem)
+{
+ pr_info("+power_on_mdm\n");
+
+ if (!modem) {
+ pr_err("-%s: mdm_drv = NULL\n", __func__);
+ return;
+ }
+
+ gpio_direction_output(modem->pdata->ap2mdm_wakeup_gpio, 0);
+
+ /* Pull RESET gpio low and wait for it to settle. */
+ pr_info("%s: Pulling RESET gpio low\n", __func__);
+
+ gpio_direction_output(modem->pdata->ap2mdm_pmic_reset_n_gpio, 0);
+
+ usleep_range(5000, 10000);
+
+ /* Deassert RESET first and wait for it to settle. */
+ pr_info("%s: Pulling RESET gpio high\n", __func__);
+
+ gpio_direction_output(modem->pdata->ap2mdm_pmic_reset_n_gpio, 1);
+
+ msleep(40);
+
+ gpio_direction_output(modem->pdata->ap2mdm_status_gpio, 1);
+
+	msleep(240);
+
+ pr_info("-power_on_mdm\n");
+}
+
+#define MDM_HOLD_TIME 4000
+
+static void modem_power_down(struct qcom_usb_modem *modem)
+{
+ pr_info("+power_down_mdm\n");
+
+ if (!modem) {
+ pr_err("%s-: mdm_drv = NULL\n", __func__);
+ return;
+ }
+
+	/*
+	 * APQ8064 uses a single pin to control both power-on and reset.
+	 * If this pin stays low for more than 300 ms, memory contents are
+	 * lost. Currently sbl1 helps pull this pin back up, which takes
+	 * 120~140 ms. To minimize the down time, do not shut down the MDM
+	 * here; keep it powered until the 8k PS_HOLD is pulled.
+	 */
+ pr_info("%s: Pulling RESET gpio LOW\n", __func__);
+
+	if (gpio_is_valid(modem->pdata->ap2mdm_pmic_reset_n_gpio))
+ gpio_direction_output(modem->pdata->ap2mdm_pmic_reset_n_gpio, 0);
+
+ msleep(MDM_HOLD_TIME);
+
+ pr_info("-power_down_mdm\n");
+}
+
+static void modem_remove(struct qcom_usb_modem *modem)
+{
+ gpio_free_array(modem_gpios, ARRAY_SIZE(modem_gpios));
+
+ modem->mdm_gpio_exported = 0;
+}
+
+static void modem_status_changed(struct qcom_usb_modem *modem, int value)
+{
+ if (!modem) {
+ pr_err("%s-: mdm_drv = NULL\n", __func__);
+ return;
+ }
+
+ if (modem->mdm_debug_on)
+ pr_info("%s: value:%d\n", __func__, value);
+
+	if (value && (modem->mdm_status & MDM_STATUS_BOOT_DONE))
+		gpio_direction_output(modem->pdata->ap2mdm_wakeup_gpio, 1);
+}
+
+static void modem_boot_done(struct qcom_usb_modem *modem)
+{
+ if (!modem) {
+ pr_err("%s-: mdm_drv = NULL\n", __func__);
+ return;
+ }
+
+	if (modem->mdm_status &
+	    (MDM_STATUS_BOOT_DONE | MDM_STATUS_STATUS_READY))
+		gpio_direction_output(modem->pdata->ap2mdm_wakeup_gpio, 1);
+}
+
+static void modem_nv_write_done(struct qcom_usb_modem *modem)
+{
+ gpio_direction_output(modem->pdata->ap2mdm_ipc1_gpio, 1);
+	usleep_range(1000, 2000);
+ gpio_direction_output(modem->pdata->ap2mdm_ipc1_gpio, 0);
+}
+
+static void trigger_modem_errfatal(struct qcom_usb_modem *modem)
+{
+ pr_info("%s+\n", __func__);
+
+ if (!modem) {
+ pr_err("%s-: mdm_drv = NULL\n", __func__);
+ return;
+ }
+
+ gpio_direction_output(modem->pdata->ap2mdm_errfatal_gpio, 0);
+ msleep(1000);
+ gpio_direction_output(modem->pdata->ap2mdm_errfatal_gpio, 1);
+ msleep(1000);
+ gpio_direction_output(modem->pdata->ap2mdm_errfatal_gpio, 0);
+
+ pr_info("%s-\n", __func__);
+}
+
+static void modem_debug_state_changed(struct qcom_usb_modem *modem, int value)
+{
+ if (!modem) {
+ pr_err("%s: mdm_drv = NULL\n", __func__);
+ return;
+ }
+
+	modem->mdm_debug_on = value ? true : false;
+}
+
+static void modem_dump_gpio_value(struct qcom_usb_modem *modem, int gpio_value, char *label)
+{
+	int i;
+
+	if (!modem) {
+		pr_err("%s-: mdm_drv = NULL\n", __func__);
+		return;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(modem_gpios); i++) {
+		if (gpio_is_valid(gpio_value)) {
+			if (gpio_value == modem_gpios[i].gpio) {
+				pr_info("%s: %s = %d\n", label,
+					modem_gpios[i].label,
+					gpio_get_value(gpio_value));
+				break;
+			}
+		} else {
+			pr_info("%s: %s = %d\n", label,
+				modem_gpios[i].label,
+				gpio_get_value(modem_gpios[i].gpio));
+		}
+	}
+}
+
+static const struct qcom_modem_operations mdm_operations = {
+ .init = modem_init,
+ .start = modem_power_on,
+ .stop = modem_power_down,
+ .stop2 = modem_power_down,
+ .remove = modem_remove,
+ .status_cb = modem_status_changed,
+ .normal_boot_done_cb = modem_boot_done,
+ .nv_write_done_cb = modem_nv_write_done,
+ .fatal_trigger_cb = trigger_modem_errfatal,
+ .debug_state_changed_cb = modem_debug_state_changed,
+ .dump_mdm_gpio_cb = modem_dump_gpio_value,
+};
+
+struct tegra_usb_platform_data tegra_ehci2_hsic_modem_pdata = {
+ .port_otg = false,
+ .has_hostpc = true,
+ .unaligned_dma_buf_supported = true,
+ .phy_intf = TEGRA_USB_PHY_INTF_HSIC,
+ .op_mode = TEGRA_USB_OPMODE_HOST,
+ .u_data.host = {
+ .vbus_gpio = -1,
+ .hot_plug = false,
+ .remote_wakeup_supported = false,
+ .power_off_on_suspend = true,
+ .turn_off_vbus_on_lp0 = true,
+ },
+/* In the native flounder design, the following values are hard-coded:
+ .u_cfg.hsic = {
+ .sync_start_delay = 9,
+ .idle_wait_delay = 17,
+ .term_range_adj = 0,
+ .elastic_underrun_limit = 16,
+ .elastic_overrun_limit = 16,
+ },
+*/
+};
+
+static struct qcom_usb_modem_power_platform_data mdm_pdata = {
+ .mdm_version = "4.0",
+ .ops = &mdm_operations,
+ .mdm2ap_errfatal_gpio = MDM2AP_ERRFATAL,
+ .ap2mdm_errfatal_gpio = AP2MDM_ERRFATAL,
+ .mdm2ap_status_gpio = MDM2AP_STATUS,
+ .ap2mdm_status_gpio = AP2MDM_STATUS,
+ .mdm2ap_wakeup_gpio = MDM2AP_WAKEUP,
+ .ap2mdm_wakeup_gpio = AP2MDM_WAKEUP,
+ .mdm2ap_vdd_min_gpio = -1,
+ .ap2mdm_vdd_min_gpio = -1,
+ .mdm2ap_hsic_ready_gpio = MDM2AP_HSIC_READY,
+ .ap2mdm_pmic_reset_n_gpio = AP2MDM_PMIC_RESET_N,
+ .ap2mdm_ipc1_gpio = AP2MDM_IPC1,
+ .ap2mdm_ipc2_gpio = -1,
+ .mdm2ap_ipc3_gpio = MDM2AP_IPC3,
+ .errfatal_irq_flags = IRQF_TRIGGER_RISING | IRQF_ONESHOT,
+ .status_irq_flags = IRQF_TRIGGER_RISING |
+ IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+ .wake_irq_flags = IRQF_TRIGGER_RISING | IRQF_ONESHOT,
+ .vdd_min_irq_flags = -1,
+ .hsic_ready_irq_flags = IRQF_TRIGGER_RISING |
+ IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+ .ipc3_irq_flags = IRQF_TRIGGER_RISING |
+ IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+ .autosuspend_delay = 2000,
+ .short_autosuspend_delay = -1,
+ .ramdump_delay_ms = 2000,
+ .tegra_ehci_device = &tegra_ehci2_device,
+ .tegra_ehci_pdata = &tegra_ehci2_hsic_modem_pdata,
+};
+
+static struct platform_device qcom_mdm_device = {
+ .name = "qcom_usb_modem_power",
+ .id = -1,
+ .dev = {
+ .platform_data = &mdm_pdata,
+ },
+};
+
+int __init flounder_mdm_9k_init(void)
+{
+/*
+ We add echi2 platform data here. In native design, it may
+ be added in flounder_usb_init().
+*/
+ if (is_mdm_modem()) {
+ pr_info("%s: add mdm_devices\n", __func__);
+ tegra_ehci2_device.dev.platform_data = &tegra_ehci2_hsic_modem_pdata;
+ platform_device_register(&qcom_mdm_device);
+ }
+
+ return 0;
+}
diff --git a/arch/arm/mach-tegra/board-flounder-memory.c b/arch/arm/mach-tegra/board-flounder-memory.c
new file mode 100644
index 0000000..0878936
--- /dev/null
+++ b/arch/arm/mach-tegra/board-flounder-memory.c
@@ -0,0 +1,2799 @@
+/*
+ * Copyright (c) 2013, NVIDIA CORPORATION. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
+ * 02111-1307, USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/platform_data/tegra_emc_pdata.h>
+
+#include "board.h"
+#include "board-flounder.h"
+#include "tegra-board-id.h"
+#include "tegra12_emc.h"
+#include "devices.h"
+
+static struct tegra12_emc_table flounder_lpddr3_emc_table[] = {
+ {
+ 0x19, /* V6.0.0 */
+ "011_12750_NoCfgVersion_V6.0.0_V1.1", /* DVFS table version */
+ 12750, /* SDRAM frequency */
+ 800, /* min voltage */
+ 800, /* gpu min voltage */
+ "pllp_out0", /* clock source id */
+ 0x4000003e, /* CLK_SOURCE_EMC */
+ 165, /* number of burst_regs */
+ 31, /* number of up_down_regs */
+ {
+ 0x00000000, /* EMC_RC */
+ 0x00000003, /* EMC_RFC */
+ 0x00000000, /* EMC_RFC_SLR */
+ 0x00000002, /* EMC_RAS */
+ 0x00000002, /* EMC_RP */
+ 0x00000006, /* EMC_R2W */
+ 0x00000008, /* EMC_W2R */
+ 0x00000003, /* EMC_R2P */
+ 0x0000000a, /* EMC_W2P */
+ 0x00000002, /* EMC_RD_RCD */
+ 0x00000002, /* EMC_WR_RCD */
+ 0x00000001, /* EMC_RRD */
+ 0x00000002, /* EMC_REXT */
+ 0x00000000, /* EMC_WEXT */
+ 0x00000003, /* EMC_WDV */
+ 0x00000003, /* EMC_WDV_MASK */
+ 0x00000006, /* EMC_QUSE */
+ 0x00000002, /* EMC_QUSE_WIDTH */
+ 0x00000000, /* EMC_IBDLY */
+ 0x00000005, /* EMC_EINPUT */
+ 0x00000005, /* EMC_EINPUT_DURATION */
+ 0x00010000, /* EMC_PUTERM_EXTRA */
+ 0x00000003, /* EMC_PUTERM_WIDTH */
+ 0x00000000, /* EMC_PUTERM_ADJ */
+ 0x00000000, /* EMC_CDB_CNTL_1 */
+ 0x00000000, /* EMC_CDB_CNTL_2 */
+ 0x00000000, /* EMC_CDB_CNTL_3 */
+ 0x00000004, /* EMC_QRST */
+ 0x0000000c, /* EMC_QSAFE */
+ 0x0000000d, /* EMC_RDV */
+ 0x0000000f, /* EMC_RDV_MASK */
+ 0x00000030, /* EMC_REFRESH */
+ 0x00000000, /* EMC_BURST_REFRESH_NUM */
+ 0x0000000c, /* EMC_PRE_REFRESH_REQ_CNT */
+ 0x00000002, /* EMC_PDEX2WR */
+ 0x00000002, /* EMC_PDEX2RD */
+ 0x00000002, /* EMC_PCHG2PDEN */
+ 0x00000000, /* EMC_ACT2PDEN */
+ 0x00000001, /* EMC_AR2PDEN */
+ 0x0000000c, /* EMC_RW2PDEN */
+ 0x00000003, /* EMC_TXSR */
+ 0x00000002, /* EMC_TXSRDLL */
+ 0x00000003, /* EMC_TCKE */
+ 0x00000003, /* EMC_TCKESR */
+ 0x00000003, /* EMC_TPD */
+ 0x00000006, /* EMC_TFAW */
+ 0x00000004, /* EMC_TRPAB */
+ 0x00000003, /* EMC_TCLKSTABLE */
+ 0x00000003, /* EMC_TCLKSTOP */
+ 0x00000036, /* EMC_TREFBW */
+ 0x00000000, /* EMC_FBIO_CFG6 */
+ 0x00000000, /* EMC_ODT_WRITE */
+ 0x00000000, /* EMC_ODT_READ */
+ 0x1361a296, /* EMC_FBIO_CFG5 */
+ 0x005800a0, /* EMC_CFG_DIG_DLL */
+ 0x00008000, /* EMC_CFG_DIG_DLL_PERIOD */
+ 0x00048000, /* EMC_DLL_XFORM_DQS0 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS1 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS2 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS3 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS4 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS5 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS6 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS7 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS8 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS9 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS10 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS11 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS12 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS13 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS14 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS15 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE0 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE1 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE2 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE3 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE4 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE5 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE6 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE7 */
+ 0x000fc000, /* EMC_DLL_XFORM_ADDR0 */
+ 0x000fc000, /* EMC_DLL_XFORM_ADDR1 */
+ 0x00000000, /* EMC_DLL_XFORM_ADDR2 */
+ 0x000fc000, /* EMC_DLL_XFORM_ADDR3 */
+ 0x000fc000, /* EMC_DLL_XFORM_ADDR4 */
+ 0x00000000, /* EMC_DLL_XFORM_ADDR5 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE8 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE9 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE10 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE11 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE12 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE13 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE14 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE15 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS0 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS1 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS2 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS3 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS4 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS5 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS6 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS7 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS8 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS9 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS10 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS11 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS12 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS13 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS14 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS15 */
+ 0x000fc000, /* EMC_DLL_XFORM_DQ0 */
+ 0x000fc000, /* EMC_DLL_XFORM_DQ1 */
+ 0x000fc000, /* EMC_DLL_XFORM_DQ2 */
+ 0x000fc000, /* EMC_DLL_XFORM_DQ3 */
+ 0x0000fc00, /* EMC_DLL_XFORM_DQ4 */
+ 0x0000fc00, /* EMC_DLL_XFORM_DQ5 */
+ 0x0000fc00, /* EMC_DLL_XFORM_DQ6 */
+ 0x0000fc00, /* EMC_DLL_XFORM_DQ7 */
+ 0x00000200, /* EMC_XM2CMDPADCTRL */
+ 0x00000000, /* EMC_XM2CMDPADCTRL4 */
+ 0x00100100, /* EMC_XM2CMDPADCTRL5 */
+ 0x0130b018, /* EMC_XM2DQSPADCTRL2 */
+ 0x00000000, /* EMC_XM2DQPADCTRL2 */
+ 0x00000000, /* EMC_XM2DQPADCTRL3 */
+ 0x77ffc000, /* EMC_XM2CLKPADCTRL */
+ 0x00000404, /* EMC_XM2CLKPADCTRL2 */
+ 0x81f1f008, /* EMC_XM2COMPPADCTRL */
+ 0x07070000, /* EMC_XM2VTTGENPADCTRL */
+ 0x0000003f, /* EMC_XM2VTTGENPADCTRL2 */
+ 0x015ddddd, /* EMC_XM2VTTGENPADCTRL3 */
+ 0x51451400, /* EMC_XM2DQSPADCTRL3 */
+ 0x00514514, /* EMC_XM2DQSPADCTRL4 */
+ 0x00514514, /* EMC_XM2DQSPADCTRL5 */
+ 0x51451400, /* EMC_XM2DQSPADCTRL6 */
+ 0x0000003f, /* EMC_DSR_VTTGEN_DRV */
+ 0x00000000, /* EMC_TXDSRVTTGEN */
+ 0x00000000, /* EMC_FBIO_SPARE */
+ 0x00064000, /* EMC_ZCAL_INTERVAL */
+ 0x00000011, /* EMC_ZCAL_WAIT_CNT */
+ 0x000d0011, /* EMC_MRS_WAIT_CNT */
+ 0x000d0011, /* EMC_MRS_WAIT_CNT2 */
+ 0x00000000, /* EMC_CTT */
+ 0x00000003, /* EMC_CTT_DURATION */
+ 0x0000f3f3, /* EMC_CFG_PIPE */
+ 0x80000164, /* EMC_DYN_SELF_REF_CONTROL */
+ 0x0000000a, /* EMC_QPOP */
+ 0x40040001, /* MC_EMEM_ARB_CFG */
+ 0x8000000a, /* MC_EMEM_ARB_OUTSTANDING_REQ */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_RCD */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_RP */
+ 0x00000002, /* MC_EMEM_ARB_TIMING_RC */
+ 0x00000000, /* MC_EMEM_ARB_TIMING_RAS */
+ 0x00000003, /* MC_EMEM_ARB_TIMING_FAW */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_RRD */
+ 0x00000002, /* MC_EMEM_ARB_TIMING_RAP2PRE */
+ 0x00000007, /* MC_EMEM_ARB_TIMING_WAP2PRE */
+ 0x00000002, /* MC_EMEM_ARB_TIMING_R2R */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_W2W */
+ 0x00000004, /* MC_EMEM_ARB_TIMING_R2W */
+ 0x00000005, /* MC_EMEM_ARB_TIMING_W2R */
+ 0x05040102, /* MC_EMEM_ARB_DA_TURNS */
+ 0x00090402, /* MC_EMEM_ARB_DA_COVERS */
+ 0x77c30303, /* MC_EMEM_ARB_MISC0 */
+ 0x70000f03, /* MC_EMEM_ARB_MISC1 */
+ 0x001f0000, /* MC_EMEM_ARB_RING1_THROTTLE */
+ },
+ {
+ 0x00000001, /* MC_MLL_MPCORER_PTSA_RATE */
+ 0x00000007, /* MC_PTSA_GRANT_DECREMENT */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_XUSB_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_XUSB_1 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_TSEC_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_SDMMCA_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_SDMMCAA_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_SDMMC_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_SDMMCAB_0 */
+ 0x00ff0049, /* MC_LATENCY_ALLOWANCE_PPCS_0 */
+ 0x00ff0080, /* MC_LATENCY_ALLOWANCE_PPCS_1 */
+ 0x00ff0004, /* MC_LATENCY_ALLOWANCE_MPCORE_0 */
+ 0x00ff0004, /* MC_LATENCY_ALLOWANCE_MPCORELP_0 */
+ 0x000800ff, /* MC_LATENCY_ALLOWANCE_HC_0 */
+ 0x000000ff, /* MC_LATENCY_ALLOWANCE_HC_1 */
+ 0x00ff0004, /* MC_LATENCY_ALLOWANCE_AVPC_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_GPU_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_MSENC_0 */
+ 0x00ff0024, /* MC_LATENCY_ALLOWANCE_HDA_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VIC_0 */
+ 0x000000ff, /* MC_LATENCY_ALLOWANCE_VI2_0 */
+ 0x000000ff, /* MC_LATENCY_ALLOWANCE_ISP2_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_ISP2_1 */
+ 0x000000ff, /* MC_LATENCY_ALLOWANCE_ISP2B_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_ISP2B_1 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VDE_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VDE_1 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VDE_2 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VDE_3 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_SATA_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_AFI_0 */
+ },
+ 0x00000015, /* EMC_ZCAL_WAIT_CNT after clock change */
+ 0x001fffff, /* EMC_AUTO_CAL_INTERVAL */
+ 0x00000802, /* EMC_CTT_TERM_CTRL */
+ 0xf3200000, /* EMC_CFG */
+ 0x000008c7, /* EMC_CFG_2 */
+ 0x0004013c, /* EMC_SEL_DPD_CTRL */
+ 0x00580068, /* EMC_CFG_DIG_DLL */
+ 0x00000008, /* EMC_BGBIAS_CTL0 */
+ 0x00000000, /* EMC_AUTO_CAL_CONFIG2 */
+ 0x00000000, /* EMC_AUTO_CAL_CONFIG3 */
+ 0xa1430000, /* EMC_AUTO_CAL_CONFIG */
+ 0x00000000, /* Mode Register 0 */
+ 0x00010083, /* Mode Register 1 */
+ 0x00020004, /* Mode Register 2 */
+ 0x800b0000, /* Mode Register 4 */
+ 57820, /* expected dvfs latency (ns) */
+ },
+ {
+ 0x19, /* V6.0.0 */
+ "011_20400_NoCfgVersion_V6.0.0_V1.1", /* DVFS table version */
+ 20400, /* SDRAM frequency */
+ 800, /* min voltage */
+ 800, /* gpu min voltage */
+ "pllp_out0", /* clock source id */
+ 0x40000026, /* CLK_SOURCE_EMC */
+ 165, /* number of burst_regs */
+ 31, /* number of up_down_regs */
+ {
+ 0x00000001, /* EMC_RC */
+ 0x00000003, /* EMC_RFC */
+ 0x00000000, /* EMC_RFC_SLR */
+ 0x00000002, /* EMC_RAS */
+ 0x00000002, /* EMC_RP */
+ 0x00000006, /* EMC_R2W */
+ 0x00000008, /* EMC_W2R */
+ 0x00000003, /* EMC_R2P */
+ 0x0000000a, /* EMC_W2P */
+ 0x00000002, /* EMC_RD_RCD */
+ 0x00000002, /* EMC_WR_RCD */
+ 0x00000001, /* EMC_RRD */
+ 0x00000002, /* EMC_REXT */
+ 0x00000000, /* EMC_WEXT */
+ 0x00000003, /* EMC_WDV */
+ 0x00000003, /* EMC_WDV_MASK */
+ 0x00000006, /* EMC_QUSE */
+ 0x00000002, /* EMC_QUSE_WIDTH */
+ 0x00000000, /* EMC_IBDLY */
+ 0x00000005, /* EMC_EINPUT */
+ 0x00000005, /* EMC_EINPUT_DURATION */
+ 0x00010000, /* EMC_PUTERM_EXTRA */
+ 0x00000003, /* EMC_PUTERM_WIDTH */
+ 0x00000000, /* EMC_PUTERM_ADJ */
+ 0x00000000, /* EMC_CDB_CNTL_1 */
+ 0x00000000, /* EMC_CDB_CNTL_2 */
+ 0x00000000, /* EMC_CDB_CNTL_3 */
+ 0x00000004, /* EMC_QRST */
+ 0x0000000c, /* EMC_QSAFE */
+ 0x0000000d, /* EMC_RDV */
+ 0x0000000f, /* EMC_RDV_MASK */
+ 0x0000004d, /* EMC_REFRESH */
+ 0x00000000, /* EMC_BURST_REFRESH_NUM */
+ 0x00000013, /* EMC_PRE_REFRESH_REQ_CNT */
+ 0x00000002, /* EMC_PDEX2WR */
+ 0x00000002, /* EMC_PDEX2RD */
+ 0x00000002, /* EMC_PCHG2PDEN */
+ 0x00000000, /* EMC_ACT2PDEN */
+ 0x00000001, /* EMC_AR2PDEN */
+ 0x0000000c, /* EMC_RW2PDEN */
+ 0x00000003, /* EMC_TXSR */
+ 0x00000003, /* EMC_TXSRDLL */
+ 0x00000003, /* EMC_TCKE */
+ 0x00000003, /* EMC_TCKESR */
+ 0x00000003, /* EMC_TPD */
+ 0x00000006, /* EMC_TFAW */
+ 0x00000004, /* EMC_TRPAB */
+ 0x00000003, /* EMC_TCLKSTABLE */
+ 0x00000003, /* EMC_TCLKSTOP */
+ 0x00000055, /* EMC_TREFBW */
+ 0x00000000, /* EMC_FBIO_CFG6 */
+ 0x00000000, /* EMC_ODT_WRITE */
+ 0x00000000, /* EMC_ODT_READ */
+ 0x1361a296, /* EMC_FBIO_CFG5 */
+ 0x005800a0, /* EMC_CFG_DIG_DLL */
+ 0x00008000, /* EMC_CFG_DIG_DLL_PERIOD */
+ 0x00048000, /* EMC_DLL_XFORM_DQS0 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS1 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS2 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS3 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS4 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS5 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS6 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS7 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS8 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS9 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS10 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS11 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS12 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS13 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS14 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS15 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE0 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE1 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE2 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE3 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE4 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE5 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE6 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE7 */
+ 0x000fc000, /* EMC_DLL_XFORM_ADDR0 */
+ 0x000fc000, /* EMC_DLL_XFORM_ADDR1 */
+ 0x00000000, /* EMC_DLL_XFORM_ADDR2 */
+ 0x000fc000, /* EMC_DLL_XFORM_ADDR3 */
+ 0x000fc000, /* EMC_DLL_XFORM_ADDR4 */
+ 0x00000000, /* EMC_DLL_XFORM_ADDR5 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE8 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE9 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE10 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE11 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE12 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE13 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE14 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE15 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS0 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS1 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS2 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS3 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS4 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS5 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS6 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS7 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS8 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS9 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS10 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS11 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS12 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS13 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS14 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS15 */
+ 0x000fc000, /* EMC_DLL_XFORM_DQ0 */
+ 0x000fc000, /* EMC_DLL_XFORM_DQ1 */
+ 0x000fc000, /* EMC_DLL_XFORM_DQ2 */
+ 0x000fc000, /* EMC_DLL_XFORM_DQ3 */
+ 0x0000fc00, /* EMC_DLL_XFORM_DQ4 */
+ 0x0000fc00, /* EMC_DLL_XFORM_DQ5 */
+ 0x0000fc00, /* EMC_DLL_XFORM_DQ6 */
+ 0x0000fc00, /* EMC_DLL_XFORM_DQ7 */
+ 0x00000200, /* EMC_XM2CMDPADCTRL */
+ 0x00000000, /* EMC_XM2CMDPADCTRL4 */
+ 0x00100100, /* EMC_XM2CMDPADCTRL5 */
+ 0x0130b018, /* EMC_XM2DQSPADCTRL2 */
+ 0x00000000, /* EMC_XM2DQPADCTRL2 */
+ 0x00000000, /* EMC_XM2DQPADCTRL3 */
+ 0x77ffc000, /* EMC_XM2CLKPADCTRL */
+ 0x00000404, /* EMC_XM2CLKPADCTRL2 */
+ 0x81f1f008, /* EMC_XM2COMPPADCTRL */
+ 0x07070000, /* EMC_XM2VTTGENPADCTRL */
+ 0x0000003f, /* EMC_XM2VTTGENPADCTRL2 */
+ 0x015ddddd, /* EMC_XM2VTTGENPADCTRL3 */
+ 0x51451400, /* EMC_XM2DQSPADCTRL3 */
+ 0x00514514, /* EMC_XM2DQSPADCTRL4 */
+ 0x00514514, /* EMC_XM2DQSPADCTRL5 */
+ 0x51451400, /* EMC_XM2DQSPADCTRL6 */
+ 0x0000003f, /* EMC_DSR_VTTGEN_DRV */
+ 0x00000000, /* EMC_TXDSRVTTGEN */
+ 0x00000000, /* EMC_FBIO_SPARE */
+ 0x00064000, /* EMC_ZCAL_INTERVAL */
+ 0x00000011, /* EMC_ZCAL_WAIT_CNT */
+ 0x00150011, /* EMC_MRS_WAIT_CNT */
+ 0x00150011, /* EMC_MRS_WAIT_CNT2 */
+ 0x00000000, /* EMC_CTT */
+ 0x00000003, /* EMC_CTT_DURATION */
+ 0x0000f3f3, /* EMC_CFG_PIPE */
+ 0x8000019f, /* EMC_DYN_SELF_REF_CONTROL */
+ 0x0000000a, /* EMC_QPOP */
+ 0x40020001, /* MC_EMEM_ARB_CFG */
+ 0x80000012, /* MC_EMEM_ARB_OUTSTANDING_REQ */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_RCD */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_RP */
+ 0x00000002, /* MC_EMEM_ARB_TIMING_RC */
+ 0x00000000, /* MC_EMEM_ARB_TIMING_RAS */
+ 0x00000003, /* MC_EMEM_ARB_TIMING_FAW */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_RRD */
+ 0x00000002, /* MC_EMEM_ARB_TIMING_RAP2PRE */
+ 0x00000007, /* MC_EMEM_ARB_TIMING_WAP2PRE */
+ 0x00000002, /* MC_EMEM_ARB_TIMING_R2R */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_W2W */
+ 0x00000004, /* MC_EMEM_ARB_TIMING_R2W */
+ 0x00000005, /* MC_EMEM_ARB_TIMING_W2R */
+ 0x05040102, /* MC_EMEM_ARB_DA_TURNS */
+ 0x00090402, /* MC_EMEM_ARB_DA_COVERS */
+ 0x74e30303, /* MC_EMEM_ARB_MISC0 */
+ 0x70000f03, /* MC_EMEM_ARB_MISC1 */
+ 0x001f0000, /* MC_EMEM_ARB_RING1_THROTTLE */
+ },
+ {
+ 0x00000001, /* MC_MLL_MPCORER_PTSA_RATE */
+ 0x0000000a, /* MC_PTSA_GRANT_DECREMENT */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_XUSB_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_XUSB_1 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_TSEC_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_SDMMCA_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_SDMMCAA_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_SDMMC_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_SDMMCAB_0 */
+ 0x00ff0049, /* MC_LATENCY_ALLOWANCE_PPCS_0 */
+ 0x00ff0080, /* MC_LATENCY_ALLOWANCE_PPCS_1 */
+ 0x00ff0004, /* MC_LATENCY_ALLOWANCE_MPCORE_0 */
+ 0x00ff0004, /* MC_LATENCY_ALLOWANCE_MPCORELP_0 */
+ 0x000800ff, /* MC_LATENCY_ALLOWANCE_HC_0 */
+ 0x000000ff, /* MC_LATENCY_ALLOWANCE_HC_1 */
+ 0x00ff0004, /* MC_LATENCY_ALLOWANCE_AVPC_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_GPU_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_MSENC_0 */
+ 0x00ff0024, /* MC_LATENCY_ALLOWANCE_HDA_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VIC_0 */
+ 0x000000ff, /* MC_LATENCY_ALLOWANCE_VI2_0 */
+ 0x000000ff, /* MC_LATENCY_ALLOWANCE_ISP2_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_ISP2_1 */
+ 0x000000ff, /* MC_LATENCY_ALLOWANCE_ISP2B_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_ISP2B_1 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VDE_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VDE_1 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VDE_2 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VDE_3 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_SATA_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_AFI_0 */
+ },
+ 0x00000015, /* EMC_ZCAL_WAIT_CNT after clock change */
+ 0x001fffff, /* EMC_AUTO_CAL_INTERVAL */
+ 0x00000802, /* EMC_CTT_TERM_CTRL */
+ 0xf3200000, /* EMC_CFG */
+ 0x000008c7, /* EMC_CFG_2 */
+ 0x0004013c, /* EMC_SEL_DPD_CTRL */
+ 0x00580068, /* EMC_CFG_DIG_DLL */
+ 0x00000008, /* EMC_BGBIAS_CTL0 */
+ 0x00000000, /* EMC_AUTO_CAL_CONFIG2 */
+ 0x00000000, /* EMC_AUTO_CAL_CONFIG3 */
+ 0xa1430000, /* EMC_AUTO_CAL_CONFIG */
+ 0x00000000, /* Mode Register 0 */
+ 0x00010083, /* Mode Register 1 */
+ 0x00020004, /* Mode Register 2 */
+ 0x800b0000, /* Mode Register 4 */
+ 35610, /* expected dvfs latency (ns) */
+ },
+ {
+ 0x19, /* V6.0.0 */
+ "011_40800_NoCfgVersion_V6.0.0_V1.1", /* DVFS table version */
+ 40800, /* SDRAM frequency */
+ 800, /* min voltage */
+ 800, /* gpu min voltage */
+ "pllp_out0", /* clock source id */
+ 0x40000012, /* CLK_SOURCE_EMC */
+ 165, /* number of burst_regs */
+ 31, /* number of up_down_regs */
+ {
+ 0x00000002, /* EMC_RC */
+ 0x00000005, /* EMC_RFC */
+ 0x00000000, /* EMC_RFC_SLR */
+ 0x00000002, /* EMC_RAS */
+ 0x00000002, /* EMC_RP */
+ 0x00000006, /* EMC_R2W */
+ 0x00000008, /* EMC_W2R */
+ 0x00000003, /* EMC_R2P */
+ 0x0000000a, /* EMC_W2P */
+ 0x00000002, /* EMC_RD_RCD */
+ 0x00000002, /* EMC_WR_RCD */
+ 0x00000001, /* EMC_RRD */
+ 0x00000002, /* EMC_REXT */
+ 0x00000000, /* EMC_WEXT */
+ 0x00000003, /* EMC_WDV */
+ 0x00000003, /* EMC_WDV_MASK */
+ 0x00000006, /* EMC_QUSE */
+ 0x00000002, /* EMC_QUSE_WIDTH */
+ 0x00000000, /* EMC_IBDLY */
+ 0x00000005, /* EMC_EINPUT */
+ 0x00000005, /* EMC_EINPUT_DURATION */
+ 0x00010000, /* EMC_PUTERM_EXTRA */
+ 0x00000003, /* EMC_PUTERM_WIDTH */
+ 0x00000000, /* EMC_PUTERM_ADJ */
+ 0x00000000, /* EMC_CDB_CNTL_1 */
+ 0x00000000, /* EMC_CDB_CNTL_2 */
+ 0x00000000, /* EMC_CDB_CNTL_3 */
+ 0x00000004, /* EMC_QRST */
+ 0x0000000c, /* EMC_QSAFE */
+ 0x0000000d, /* EMC_RDV */
+ 0x0000000f, /* EMC_RDV_MASK */
+ 0x0000009a, /* EMC_REFRESH */
+ 0x00000000, /* EMC_BURST_REFRESH_NUM */
+ 0x00000026, /* EMC_PRE_REFRESH_REQ_CNT */
+ 0x00000002, /* EMC_PDEX2WR */
+ 0x00000002, /* EMC_PDEX2RD */
+ 0x00000002, /* EMC_PCHG2PDEN */
+ 0x00000000, /* EMC_ACT2PDEN */
+ 0x00000001, /* EMC_AR2PDEN */
+ 0x0000000c, /* EMC_RW2PDEN */
+ 0x00000006, /* EMC_TXSR */
+ 0x00000006, /* EMC_TXSRDLL */
+ 0x00000003, /* EMC_TCKE */
+ 0x00000003, /* EMC_TCKESR */
+ 0x00000003, /* EMC_TPD */
+ 0x00000006, /* EMC_TFAW */
+ 0x00000004, /* EMC_TRPAB */
+ 0x00000003, /* EMC_TCLKSTABLE */
+ 0x00000003, /* EMC_TCLKSTOP */
+ 0x000000aa, /* EMC_TREFBW */
+ 0x00000000, /* EMC_FBIO_CFG6 */
+ 0x00000000, /* EMC_ODT_WRITE */
+ 0x00000000, /* EMC_ODT_READ */
+ 0x1361a296, /* EMC_FBIO_CFG5 */
+ 0x005800a0, /* EMC_CFG_DIG_DLL */
+ 0x00008000, /* EMC_CFG_DIG_DLL_PERIOD */
+ 0x00048000, /* EMC_DLL_XFORM_DQS0 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS1 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS2 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS3 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS4 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS5 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS6 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS7 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS8 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS9 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS10 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS11 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS12 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS13 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS14 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS15 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE0 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE1 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE2 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE3 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE4 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE5 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE6 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE7 */
+ 0x000fc000, /* EMC_DLL_XFORM_ADDR0 */
+ 0x000fc000, /* EMC_DLL_XFORM_ADDR1 */
+ 0x00000000, /* EMC_DLL_XFORM_ADDR2 */
+ 0x000fc000, /* EMC_DLL_XFORM_ADDR3 */
+ 0x000fc000, /* EMC_DLL_XFORM_ADDR4 */
+ 0x00000000, /* EMC_DLL_XFORM_ADDR5 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE8 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE9 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE10 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE11 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE12 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE13 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE14 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE15 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS0 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS1 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS2 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS3 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS4 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS5 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS6 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS7 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS8 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS9 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS10 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS11 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS12 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS13 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS14 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS15 */
+ 0x000fc000, /* EMC_DLL_XFORM_DQ0 */
+ 0x000fc000, /* EMC_DLL_XFORM_DQ1 */
+ 0x000fc000, /* EMC_DLL_XFORM_DQ2 */
+ 0x000fc000, /* EMC_DLL_XFORM_DQ3 */
+ 0x0000fc00, /* EMC_DLL_XFORM_DQ4 */
+ 0x0000fc00, /* EMC_DLL_XFORM_DQ5 */
+ 0x0000fc00, /* EMC_DLL_XFORM_DQ6 */
+ 0x0000fc00, /* EMC_DLL_XFORM_DQ7 */
+ 0x00000200, /* EMC_XM2CMDPADCTRL */
+ 0x00000000, /* EMC_XM2CMDPADCTRL4 */
+ 0x00100100, /* EMC_XM2CMDPADCTRL5 */
+ 0x0130b018, /* EMC_XM2DQSPADCTRL2 */
+ 0x00000000, /* EMC_XM2DQPADCTRL2 */
+ 0x00000000, /* EMC_XM2DQPADCTRL3 */
+ 0x77ffc000, /* EMC_XM2CLKPADCTRL */
+ 0x00000404, /* EMC_XM2CLKPADCTRL2 */
+ 0x81f1f008, /* EMC_XM2COMPPADCTRL */
+ 0x07070000, /* EMC_XM2VTTGENPADCTRL */
+ 0x0000003f, /* EMC_XM2VTTGENPADCTRL2 */
+ 0x015ddddd, /* EMC_XM2VTTGENPADCTRL3 */
+ 0x51451400, /* EMC_XM2DQSPADCTRL3 */
+ 0x00514514, /* EMC_XM2DQSPADCTRL4 */
+ 0x00514514, /* EMC_XM2DQSPADCTRL5 */
+ 0x51451400, /* EMC_XM2DQSPADCTRL6 */
+ 0x0000003f, /* EMC_DSR_VTTGEN_DRV */
+ 0x00000000, /* EMC_TXDSRVTTGEN */
+ 0x00000000, /* EMC_FBIO_SPARE */
+ 0x00064000, /* EMC_ZCAL_INTERVAL */
+ 0x00000011, /* EMC_ZCAL_WAIT_CNT */
+ 0x00290011, /* EMC_MRS_WAIT_CNT */
+ 0x00290011, /* EMC_MRS_WAIT_CNT2 */
+ 0x00000000, /* EMC_CTT */
+ 0x00000003, /* EMC_CTT_DURATION */
+ 0x0000f3f3, /* EMC_CFG_PIPE */
+ 0x8000023a, /* EMC_DYN_SELF_REF_CONTROL */
+ 0x0000000a, /* EMC_QPOP */
+ 0xa0000001, /* MC_EMEM_ARB_CFG */
+ 0x80000017, /* MC_EMEM_ARB_OUTSTANDING_REQ */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_RCD */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_RP */
+ 0x00000002, /* MC_EMEM_ARB_TIMING_RC */
+ 0x00000000, /* MC_EMEM_ARB_TIMING_RAS */
+ 0x00000003, /* MC_EMEM_ARB_TIMING_FAW */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_RRD */
+ 0x00000002, /* MC_EMEM_ARB_TIMING_RAP2PRE */
+ 0x00000007, /* MC_EMEM_ARB_TIMING_WAP2PRE */
+ 0x00000002, /* MC_EMEM_ARB_TIMING_R2R */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_W2W */
+ 0x00000004, /* MC_EMEM_ARB_TIMING_R2W */
+ 0x00000005, /* MC_EMEM_ARB_TIMING_W2R */
+ 0x05040102, /* MC_EMEM_ARB_DA_TURNS */
+ 0x00090402, /* MC_EMEM_ARB_DA_COVERS */
+ 0x73030303, /* MC_EMEM_ARB_MISC0 */
+ 0x70000f03, /* MC_EMEM_ARB_MISC1 */
+ 0x001f0000, /* MC_EMEM_ARB_RING1_THROTTLE */
+ },
+ {
+ 0x00000001, /* MC_MLL_MPCORER_PTSA_RATE */
+ 0x00000014, /* MC_PTSA_GRANT_DECREMENT */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_XUSB_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_XUSB_1 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_TSEC_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_SDMMCA_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_SDMMCAA_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_SDMMC_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_SDMMCAB_0 */
+ 0x00ff0049, /* MC_LATENCY_ALLOWANCE_PPCS_0 */
+ 0x00ff0080, /* MC_LATENCY_ALLOWANCE_PPCS_1 */
+ 0x00ff0004, /* MC_LATENCY_ALLOWANCE_MPCORE_0 */
+ 0x00ff0004, /* MC_LATENCY_ALLOWANCE_MPCORELP_0 */
+ 0x000800ff, /* MC_LATENCY_ALLOWANCE_HC_0 */
+ 0x000000ff, /* MC_LATENCY_ALLOWANCE_HC_1 */
+ 0x00ff0004, /* MC_LATENCY_ALLOWANCE_AVPC_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_GPU_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_MSENC_0 */
+ 0x00ff0024, /* MC_LATENCY_ALLOWANCE_HDA_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VIC_0 */
+ 0x000000ff, /* MC_LATENCY_ALLOWANCE_VI2_0 */
+ 0x000000ff, /* MC_LATENCY_ALLOWANCE_ISP2_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_ISP2_1 */
+ 0x000000ff, /* MC_LATENCY_ALLOWANCE_ISP2B_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_ISP2B_1 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VDE_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VDE_1 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VDE_2 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VDE_3 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_SATA_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_AFI_0 */
+ },
+ 0x00000015, /* EMC_ZCAL_WAIT_CNT after clock change */
+ 0x001fffff, /* EMC_AUTO_CAL_INTERVAL */
+ 0x00000802, /* EMC_CTT_TERM_CTRL */
+ 0xf3200000, /* EMC_CFG */
+ 0x000008c7, /* EMC_CFG_2 */
+ 0x0004013c, /* EMC_SEL_DPD_CTRL */
+ 0x00580068, /* EMC_CFG_DIG_DLL */
+ 0x00000008, /* EMC_BGBIAS_CTL0 */
+ 0x00000000, /* EMC_AUTO_CAL_CONFIG2 */
+ 0x00000000, /* EMC_AUTO_CAL_CONFIG3 */
+ 0xa1430000, /* EMC_AUTO_CAL_CONFIG */
+ 0x00000000, /* Mode Register 0 */
+ 0x00010083, /* Mode Register 1 */
+ 0x00020004, /* Mode Register 2 */
+ 0x800b0000, /* Mode Register 4 */
+ 20850, /* expected dvfs latency (ns) */
+ },
+ {
+ 0x19, /* V6.0.0 */
+ "011_68000_NoCfgVersion_V6.0.0_V1.1", /* DVFS table version */
+ 68000, /* SDRAM frequency */
+ 800, /* min voltage */
+ 800, /* gpu min voltage */
+ "pllp_out0", /* clock source id */
+ 0x4000000a, /* CLK_SOURCE_EMC */
+ 165, /* number of burst_regs */
+ 31, /* number of up_down_regs */
+ {
+ 0x00000004, /* EMC_RC */
+ 0x00000008, /* EMC_RFC */
+ 0x00000000, /* EMC_RFC_SLR */
+ 0x00000002, /* EMC_RAS */
+ 0x00000002, /* EMC_RP */
+ 0x00000006, /* EMC_R2W */
+ 0x00000008, /* EMC_W2R */
+ 0x00000003, /* EMC_R2P */
+ 0x0000000a, /* EMC_W2P */
+ 0x00000002, /* EMC_RD_RCD */
+ 0x00000002, /* EMC_WR_RCD */
+ 0x00000001, /* EMC_RRD */
+ 0x00000002, /* EMC_REXT */
+ 0x00000000, /* EMC_WEXT */
+ 0x00000003, /* EMC_WDV */
+ 0x00000003, /* EMC_WDV_MASK */
+ 0x00000006, /* EMC_QUSE */
+ 0x00000002, /* EMC_QUSE_WIDTH */
+ 0x00000000, /* EMC_IBDLY */
+ 0x00000005, /* EMC_EINPUT */
+ 0x00000005, /* EMC_EINPUT_DURATION */
+ 0x00010000, /* EMC_PUTERM_EXTRA */
+ 0x00000003, /* EMC_PUTERM_WIDTH */
+ 0x00000000, /* EMC_PUTERM_ADJ */
+ 0x00000000, /* EMC_CDB_CNTL_1 */
+ 0x00000000, /* EMC_CDB_CNTL_2 */
+ 0x00000000, /* EMC_CDB_CNTL_3 */
+ 0x00000004, /* EMC_QRST */
+ 0x0000000c, /* EMC_QSAFE */
+ 0x0000000d, /* EMC_RDV */
+ 0x0000000f, /* EMC_RDV_MASK */
+ 0x00000101, /* EMC_REFRESH */
+ 0x00000000, /* EMC_BURST_REFRESH_NUM */
+ 0x00000040, /* EMC_PRE_REFRESH_REQ_CNT */
+ 0x00000002, /* EMC_PDEX2WR */
+ 0x00000002, /* EMC_PDEX2RD */
+ 0x00000002, /* EMC_PCHG2PDEN */
+ 0x00000000, /* EMC_ACT2PDEN */
+ 0x00000001, /* EMC_AR2PDEN */
+ 0x0000000c, /* EMC_RW2PDEN */
+ 0x0000000a, /* EMC_TXSR */
+ 0x0000000a, /* EMC_TXSRDLL */
+ 0x00000003, /* EMC_TCKE */
+ 0x00000003, /* EMC_TCKESR */
+ 0x00000003, /* EMC_TPD */
+ 0x00000006, /* EMC_TFAW */
+ 0x00000004, /* EMC_TRPAB */
+ 0x00000003, /* EMC_TCLKSTABLE */
+ 0x00000003, /* EMC_TCLKSTOP */
+ 0x0000011b, /* EMC_TREFBW */
+ 0x00000000, /* EMC_FBIO_CFG6 */
+ 0x00000000, /* EMC_ODT_WRITE */
+ 0x00000000, /* EMC_ODT_READ */
+ 0x1361a296, /* EMC_FBIO_CFG5 */
+ 0x005800a0, /* EMC_CFG_DIG_DLL */
+ 0x00008000, /* EMC_CFG_DIG_DLL_PERIOD */
+ 0x00048000, /* EMC_DLL_XFORM_DQS0 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS1 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS2 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS3 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS4 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS5 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS6 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS7 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS8 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS9 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS10 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS11 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS12 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS13 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS14 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS15 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE0 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE1 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE2 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE3 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE4 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE5 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE6 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE7 */
+ 0x000fc000, /* EMC_DLL_XFORM_ADDR0 */
+ 0x000fc000, /* EMC_DLL_XFORM_ADDR1 */
+ 0x00000000, /* EMC_DLL_XFORM_ADDR2 */
+ 0x000fc000, /* EMC_DLL_XFORM_ADDR3 */
+ 0x000fc000, /* EMC_DLL_XFORM_ADDR4 */
+ 0x00000000, /* EMC_DLL_XFORM_ADDR5 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE8 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE9 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE10 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE11 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE12 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE13 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE14 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE15 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS0 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS1 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS2 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS3 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS4 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS5 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS6 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS7 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS8 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS9 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS10 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS11 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS12 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS13 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS14 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS15 */
+ 0x000fc000, /* EMC_DLL_XFORM_DQ0 */
+ 0x000fc000, /* EMC_DLL_XFORM_DQ1 */
+ 0x000fc000, /* EMC_DLL_XFORM_DQ2 */
+ 0x000fc000, /* EMC_DLL_XFORM_DQ3 */
+ 0x0000fc00, /* EMC_DLL_XFORM_DQ4 */
+ 0x0000fc00, /* EMC_DLL_XFORM_DQ5 */
+ 0x0000fc00, /* EMC_DLL_XFORM_DQ6 */
+ 0x0000fc00, /* EMC_DLL_XFORM_DQ7 */
+ 0x00000200, /* EMC_XM2CMDPADCTRL */
+ 0x00000000, /* EMC_XM2CMDPADCTRL4 */
+ 0x00100100, /* EMC_XM2CMDPADCTRL5 */
+ 0x0130b018, /* EMC_XM2DQSPADCTRL2 */
+ 0x00000000, /* EMC_XM2DQPADCTRL2 */
+ 0x00000000, /* EMC_XM2DQPADCTRL3 */
+ 0x77ffc000, /* EMC_XM2CLKPADCTRL */
+ 0x00000404, /* EMC_XM2CLKPADCTRL2 */
+ 0x81f1f008, /* EMC_XM2COMPPADCTRL */
+ 0x07070000, /* EMC_XM2VTTGENPADCTRL */
+ 0x0000003f, /* EMC_XM2VTTGENPADCTRL2 */
+ 0x015ddddd, /* EMC_XM2VTTGENPADCTRL3 */
+ 0x51451400, /* EMC_XM2DQSPADCTRL3 */
+ 0x00514514, /* EMC_XM2DQSPADCTRL4 */
+ 0x00514514, /* EMC_XM2DQSPADCTRL5 */
+ 0x51451400, /* EMC_XM2DQSPADCTRL6 */
+ 0x0000003f, /* EMC_DSR_VTTGEN_DRV */
+ 0x00000000, /* EMC_TXDSRVTTGEN */
+ 0x00000000, /* EMC_FBIO_SPARE */
+ 0x00064000, /* EMC_ZCAL_INTERVAL */
+ 0x00000019, /* EMC_ZCAL_WAIT_CNT */
+ 0x00440011, /* EMC_MRS_WAIT_CNT */
+ 0x00440011, /* EMC_MRS_WAIT_CNT2 */
+ 0x00000000, /* EMC_CTT */
+ 0x00000003, /* EMC_CTT_DURATION */
+ 0x0000f3f3, /* EMC_CFG_PIPE */
+ 0x80000309, /* EMC_DYN_SELF_REF_CONTROL */
+ 0x0000000a, /* EMC_QPOP */
+ 0x00000001, /* MC_EMEM_ARB_CFG */
+ 0x8000001e, /* MC_EMEM_ARB_OUTSTANDING_REQ */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_RCD */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_RP */
+ 0x00000002, /* MC_EMEM_ARB_TIMING_RC */
+ 0x00000000, /* MC_EMEM_ARB_TIMING_RAS */
+ 0x00000003, /* MC_EMEM_ARB_TIMING_FAW */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_RRD */
+ 0x00000002, /* MC_EMEM_ARB_TIMING_RAP2PRE */
+ 0x00000007, /* MC_EMEM_ARB_TIMING_WAP2PRE */
+ 0x00000002, /* MC_EMEM_ARB_TIMING_R2R */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_W2W */
+ 0x00000004, /* MC_EMEM_ARB_TIMING_R2W */
+ 0x00000005, /* MC_EMEM_ARB_TIMING_W2R */
+ 0x05040102, /* MC_EMEM_ARB_DA_TURNS */
+ 0x00090402, /* MC_EMEM_ARB_DA_COVERS */
+ 0x72630403, /* MC_EMEM_ARB_MISC0 */
+ 0x70000f03, /* MC_EMEM_ARB_MISC1 */
+ 0x001f0000, /* MC_EMEM_ARB_RING1_THROTTLE */
+ },
+ {
+ 0x00000001, /* MC_MLL_MPCORER_PTSA_RATE */
+ 0x00000021, /* MC_PTSA_GRANT_DECREMENT */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_XUSB_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_XUSB_1 */
+ 0x00ff00b0, /* MC_LATENCY_ALLOWANCE_TSEC_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_SDMMCA_0 */
+ 0x00ff00ec, /* MC_LATENCY_ALLOWANCE_SDMMCAA_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_SDMMC_0 */
+ 0x00ff00ec, /* MC_LATENCY_ALLOWANCE_SDMMCAB_0 */
+ 0x00e90049, /* MC_LATENCY_ALLOWANCE_PPCS_0 */
+ 0x00ff0080, /* MC_LATENCY_ALLOWANCE_PPCS_1 */
+ 0x00ff0004, /* MC_LATENCY_ALLOWANCE_MPCORE_0 */
+ 0x00ff0004, /* MC_LATENCY_ALLOWANCE_MPCORELP_0 */
+ 0x000800ff, /* MC_LATENCY_ALLOWANCE_HC_0 */
+ 0x000000ff, /* MC_LATENCY_ALLOWANCE_HC_1 */
+ 0x00ff0004, /* MC_LATENCY_ALLOWANCE_AVPC_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_GPU_0 */
+ 0x00ff00a3, /* MC_LATENCY_ALLOWANCE_MSENC_0 */
+ 0x00ff0024, /* MC_LATENCY_ALLOWANCE_HDA_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VIC_0 */
+ 0x000000ff, /* MC_LATENCY_ALLOWANCE_VI2_0 */
+ 0x000000ef, /* MC_LATENCY_ALLOWANCE_ISP2_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_ISP2_1 */
+ 0x000000ef, /* MC_LATENCY_ALLOWANCE_ISP2B_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_ISP2B_1 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VDE_0 */
+ 0x00ee00ef, /* MC_LATENCY_ALLOWANCE_VDE_1 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VDE_2 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VDE_3 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_SATA_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_AFI_0 */
+ },
+ 0x00000015, /* EMC_ZCAL_WAIT_CNT after clock change */
+ 0x001fffff, /* EMC_AUTO_CAL_INTERVAL */
+ 0x00000802, /* EMC_CTT_TERM_CTRL */
+ 0xf3200000, /* EMC_CFG */
+ 0x000008c7, /* EMC_CFG_2 */
+ 0x0004013c, /* EMC_SEL_DPD_CTRL */
+ 0x00580068, /* EMC_CFG_DIG_DLL */
+ 0x00000008, /* EMC_BGBIAS_CTL0 */
+ 0x00000000, /* EMC_AUTO_CAL_CONFIG2 */
+ 0x00000000, /* EMC_AUTO_CAL_CONFIG3 */
+ 0xa1430000, /* EMC_AUTO_CAL_CONFIG */
+ 0x00000000, /* Mode Register 0 */
+ 0x00010083, /* Mode Register 1 */
+ 0x00020004, /* Mode Register 2 */
+ 0x800b0000, /* Mode Register 4 */
+ 10720, /* expected dvfs latency (ns) */
+ },
+ {
+ 0x19, /* V6.0.0 */
+ "011_102000_NoCfgVersion_V6.0.0_V1.1", /* DVFS table version */
+ 102000, /* SDRAM frequency */
+ 800, /* min voltage */
+ 800, /* gpu min voltage */
+ "pllp_out0", /* clock source id */
+ 0x40000006, /* CLK_SOURCE_EMC */
+ 165, /* number of burst_regs */
+ 31, /* number of up_down_regs */
+ {
+ 0x00000006, /* EMC_RC */
+ 0x0000000d, /* EMC_RFC */
+ 0x00000000, /* EMC_RFC_SLR */
+ 0x00000004, /* EMC_RAS */
+ 0x00000002, /* EMC_RP */
+ 0x00000006, /* EMC_R2W */
+ 0x00000008, /* EMC_W2R */
+ 0x00000003, /* EMC_R2P */
+ 0x0000000a, /* EMC_W2P */
+ 0x00000002, /* EMC_RD_RCD */
+ 0x00000002, /* EMC_WR_RCD */
+ 0x00000001, /* EMC_RRD */
+ 0x00000002, /* EMC_REXT */
+ 0x00000000, /* EMC_WEXT */
+ 0x00000003, /* EMC_WDV */
+ 0x00000003, /* EMC_WDV_MASK */
+ 0x00000006, /* EMC_QUSE */
+ 0x00000002, /* EMC_QUSE_WIDTH */
+ 0x00000000, /* EMC_IBDLY */
+ 0x00000005, /* EMC_EINPUT */
+ 0x00000005, /* EMC_EINPUT_DURATION */
+ 0x00010000, /* EMC_PUTERM_EXTRA */
+ 0x00000003, /* EMC_PUTERM_WIDTH */
+ 0x00000000, /* EMC_PUTERM_ADJ */
+ 0x00000000, /* EMC_CDB_CNTL_1 */
+ 0x00000000, /* EMC_CDB_CNTL_2 */
+ 0x00000000, /* EMC_CDB_CNTL_3 */
+ 0x00000004, /* EMC_QRST */
+ 0x0000000c, /* EMC_QSAFE */
+ 0x0000000d, /* EMC_RDV */
+ 0x0000000f, /* EMC_RDV_MASK */
+ 0x00000182, /* EMC_REFRESH */
+ 0x00000000, /* EMC_BURST_REFRESH_NUM */
+ 0x00000060, /* EMC_PRE_REFRESH_REQ_CNT */
+ 0x00000002, /* EMC_PDEX2WR */
+ 0x00000002, /* EMC_PDEX2RD */
+ 0x00000002, /* EMC_PCHG2PDEN */
+ 0x00000000, /* EMC_ACT2PDEN */
+ 0x00000001, /* EMC_AR2PDEN */
+ 0x0000000c, /* EMC_RW2PDEN */
+ 0x0000000f, /* EMC_TXSR */
+ 0x0000000f, /* EMC_TXSRDLL */
+ 0x00000003, /* EMC_TCKE */
+ 0x00000003, /* EMC_TCKESR */
+ 0x00000003, /* EMC_TPD */
+ 0x00000006, /* EMC_TFAW */
+ 0x00000004, /* EMC_TRPAB */
+ 0x00000003, /* EMC_TCLKSTABLE */
+ 0x00000003, /* EMC_TCLKSTOP */
+ 0x000001a9, /* EMC_TREFBW */
+ 0x00000000, /* EMC_FBIO_CFG6 */
+ 0x00000000, /* EMC_ODT_WRITE */
+ 0x00000000, /* EMC_ODT_READ */
+ 0x1361a296, /* EMC_FBIO_CFG5 */
+ 0x005800a0, /* EMC_CFG_DIG_DLL */
+ 0x00008000, /* EMC_CFG_DIG_DLL_PERIOD */
+ 0x00048000, /* EMC_DLL_XFORM_DQS0 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS1 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS2 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS3 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS4 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS5 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS6 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS7 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS8 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS9 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS10 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS11 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS12 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS13 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS14 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS15 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE0 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE1 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE2 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE3 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE4 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE5 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE6 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE7 */
+ 0x000fc000, /* EMC_DLL_XFORM_ADDR0 */
+ 0x000fc000, /* EMC_DLL_XFORM_ADDR1 */
+ 0x00000000, /* EMC_DLL_XFORM_ADDR2 */
+ 0x000fc000, /* EMC_DLL_XFORM_ADDR3 */
+ 0x000fc000, /* EMC_DLL_XFORM_ADDR4 */
+ 0x00000000, /* EMC_DLL_XFORM_ADDR5 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE8 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE9 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE10 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE11 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE12 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE13 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE14 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE15 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS0 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS1 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS2 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS3 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS4 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS5 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS6 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS7 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS8 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS9 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS10 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS11 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS12 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS13 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS14 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS15 */
+ 0x000fc000, /* EMC_DLL_XFORM_DQ0 */
+ 0x000fc000, /* EMC_DLL_XFORM_DQ1 */
+ 0x000fc000, /* EMC_DLL_XFORM_DQ2 */
+ 0x000fc000, /* EMC_DLL_XFORM_DQ3 */
+ 0x0000fc00, /* EMC_DLL_XFORM_DQ4 */
+ 0x0000fc00, /* EMC_DLL_XFORM_DQ5 */
+ 0x0000fc00, /* EMC_DLL_XFORM_DQ6 */
+ 0x0000fc00, /* EMC_DLL_XFORM_DQ7 */
+ 0x00000200, /* EMC_XM2CMDPADCTRL */
+ 0x00000000, /* EMC_XM2CMDPADCTRL4 */
+ 0x00100100, /* EMC_XM2CMDPADCTRL5 */
+ 0x0130b018, /* EMC_XM2DQSPADCTRL2 */
+ 0x00000000, /* EMC_XM2DQPADCTRL2 */
+ 0x00000000, /* EMC_XM2DQPADCTRL3 */
+ 0x77ffc000, /* EMC_XM2CLKPADCTRL */
+ 0x00000404, /* EMC_XM2CLKPADCTRL2 */
+ 0x81f1f008, /* EMC_XM2COMPPADCTRL */
+ 0x07070000, /* EMC_XM2VTTGENPADCTRL */
+ 0x0000003f, /* EMC_XM2VTTGENPADCTRL2 */
+ 0x015ddddd, /* EMC_XM2VTTGENPADCTRL3 */
+ 0x51451400, /* EMC_XM2DQSPADCTRL3 */
+ 0x00514514, /* EMC_XM2DQSPADCTRL4 */
+ 0x00514514, /* EMC_XM2DQSPADCTRL5 */
+ 0x51451400, /* EMC_XM2DQSPADCTRL6 */
+ 0x0000003f, /* EMC_DSR_VTTGEN_DRV */
+ 0x00000000, /* EMC_TXDSRVTTGEN */
+ 0x00000000, /* EMC_FBIO_SPARE */
+ 0x00064000, /* EMC_ZCAL_INTERVAL */
+ 0x00000025, /* EMC_ZCAL_WAIT_CNT */
+ 0x00660011, /* EMC_MRS_WAIT_CNT */
+ 0x00660011, /* EMC_MRS_WAIT_CNT2 */
+ 0x00000000, /* EMC_CTT */
+ 0x00000003, /* EMC_CTT_DURATION */
+ 0x0000f3f3, /* EMC_CFG_PIPE */
+ 0x8000040b, /* EMC_DYN_SELF_REF_CONTROL */
+ 0x0000000a, /* EMC_QPOP */
+ 0x08000001, /* MC_EMEM_ARB_CFG */
+ 0x80000026, /* MC_EMEM_ARB_OUTSTANDING_REQ */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_RCD */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_RP */
+ 0x00000003, /* MC_EMEM_ARB_TIMING_RC */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_RAS */
+ 0x00000003, /* MC_EMEM_ARB_TIMING_FAW */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_RRD */
+ 0x00000002, /* MC_EMEM_ARB_TIMING_RAP2PRE */
+ 0x00000007, /* MC_EMEM_ARB_TIMING_WAP2PRE */
+ 0x00000002, /* MC_EMEM_ARB_TIMING_R2R */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_W2W */
+ 0x00000004, /* MC_EMEM_ARB_TIMING_R2W */
+ 0x00000005, /* MC_EMEM_ARB_TIMING_W2R */
+ 0x05040102, /* MC_EMEM_ARB_DA_TURNS */
+ 0x00090403, /* MC_EMEM_ARB_DA_COVERS */
+ 0x72430504, /* MC_EMEM_ARB_MISC0 */
+ 0x70000f03, /* MC_EMEM_ARB_MISC1 */
+ 0x001f0000, /* MC_EMEM_ARB_RING1_THROTTLE */
+ },
+ {
+ 0x00000001, /* MC_MLL_MPCORER_PTSA_RATE */
+ 0x00000031, /* MC_PTSA_GRANT_DECREMENT */
+ 0x00ff00da, /* MC_LATENCY_ALLOWANCE_XUSB_0 */
+ 0x00ff00da, /* MC_LATENCY_ALLOWANCE_XUSB_1 */
+ 0x00ff0075, /* MC_LATENCY_ALLOWANCE_TSEC_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_SDMMCA_0 */
+ 0x00ff009d, /* MC_LATENCY_ALLOWANCE_SDMMCAA_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_SDMMC_0 */
+ 0x00ff009d, /* MC_LATENCY_ALLOWANCE_SDMMCAB_0 */
+ 0x009b0049, /* MC_LATENCY_ALLOWANCE_PPCS_0 */
+ 0x00ff0080, /* MC_LATENCY_ALLOWANCE_PPCS_1 */
+ 0x00ff0004, /* MC_LATENCY_ALLOWANCE_MPCORE_0 */
+ 0x00ff0004, /* MC_LATENCY_ALLOWANCE_MPCORELP_0 */
+ 0x000800ad, /* MC_LATENCY_ALLOWANCE_HC_0 */
+ 0x000000ff, /* MC_LATENCY_ALLOWANCE_HC_1 */
+ 0x00ff0004, /* MC_LATENCY_ALLOWANCE_AVPC_0 */
+ 0x00ff00c6, /* MC_LATENCY_ALLOWANCE_GPU_0 */
+ 0x00ff006d, /* MC_LATENCY_ALLOWANCE_MSENC_0 */
+ 0x00ff0024, /* MC_LATENCY_ALLOWANCE_HDA_0 */
+ 0x00ff00d6, /* MC_LATENCY_ALLOWANCE_VIC_0 */
+ 0x000000ff, /* MC_LATENCY_ALLOWANCE_VI2_0 */
+ 0x0000009f, /* MC_LATENCY_ALLOWANCE_ISP2_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_ISP2_1 */
+ 0x0000009f, /* MC_LATENCY_ALLOWANCE_ISP2B_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_ISP2B_1 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VDE_0 */
+ 0x009f00a0, /* MC_LATENCY_ALLOWANCE_VDE_1 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VDE_2 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VDE_3 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_SATA_0 */
+ 0x00ff00da, /* MC_LATENCY_ALLOWANCE_AFI_0 */
+ },
+ 0x00000015, /* EMC_ZCAL_WAIT_CNT after clock change */
+ 0x001fffff, /* EMC_AUTO_CAL_INTERVAL */
+ 0x00000802, /* EMC_CTT_TERM_CTRL */
+ 0xf3200000, /* EMC_CFG */
+ 0x000008c7, /* EMC_CFG_2 */
+ 0x0004013c, /* EMC_SEL_DPD_CTRL */
+ 0x00580068, /* EMC_CFG_DIG_DLL */
+ 0x00000008, /* EMC_BGBIAS_CTL0 */
+ 0x00000000, /* EMC_AUTO_CAL_CONFIG2 */
+ 0x00000000, /* EMC_AUTO_CAL_CONFIG3 */
+ 0xa1430000, /* EMC_AUTO_CAL_CONFIG */
+ 0x00000000, /* Mode Register 0 */
+ 0x00010083, /* Mode Register 1 */
+ 0x00020004, /* Mode Register 2 */
+ 0x800b0000, /* Mode Register 4 */
+ 6890, /* expected dvfs latency (ns) */
+ },
+ {
+ 0x19, /* V6.0.0 */
+ "011_204000_NoCfgVersion_V6.0.0_V1.1", /* DVFS table version */
+ 204000, /* SDRAM frequency */
+ 800, /* min voltage */
+ 800, /* gpu min voltage */
+ "pllp_out0", /* clock source id */
+ 0x40000002, /* CLK_SOURCE_EMC */
+ 165, /* number of burst_regs */
+ 31, /* number of up_down_regs */
+ {
+ 0x0000000c, /* EMC_RC */
+ 0x0000001a, /* EMC_RFC */
+ 0x00000000, /* EMC_RFC_SLR */
+ 0x00000008, /* EMC_RAS */
+ 0x00000003, /* EMC_RP */
+ 0x00000007, /* EMC_R2W */
+ 0x00000008, /* EMC_W2R */
+ 0x00000003, /* EMC_R2P */
+ 0x0000000a, /* EMC_W2P */
+ 0x00000003, /* EMC_RD_RCD */
+ 0x00000003, /* EMC_WR_RCD */
+ 0x00000002, /* EMC_RRD */
+ 0x00000003, /* EMC_REXT */
+ 0x00000000, /* EMC_WEXT */
+ 0x00000002, /* EMC_WDV */
+ 0x00000002, /* EMC_WDV_MASK */
+ 0x00000005, /* EMC_QUSE */
+ 0x00000003, /* EMC_QUSE_WIDTH */
+ 0x00000000, /* EMC_IBDLY */
+ 0x00000003, /* EMC_EINPUT */
+ 0x00000007, /* EMC_EINPUT_DURATION */
+ 0x00010000, /* EMC_PUTERM_EXTRA */
+ 0x00000004, /* EMC_PUTERM_WIDTH */
+ 0x00000000, /* EMC_PUTERM_ADJ */
+ 0x00000000, /* EMC_CDB_CNTL_1 */
+ 0x00000000, /* EMC_CDB_CNTL_2 */
+ 0x00000000, /* EMC_CDB_CNTL_3 */
+ 0x00000002, /* EMC_QRST */
+ 0x0000000e, /* EMC_QSAFE */
+ 0x0000000f, /* EMC_RDV */
+ 0x00000011, /* EMC_RDV_MASK */
+ 0x00000304, /* EMC_REFRESH */
+ 0x00000000, /* EMC_BURST_REFRESH_NUM */
+ 0x000000c1, /* EMC_PRE_REFRESH_REQ_CNT */
+ 0x00000002, /* EMC_PDEX2WR */
+ 0x00000002, /* EMC_PDEX2RD */
+ 0x00000003, /* EMC_PCHG2PDEN */
+ 0x00000000, /* EMC_ACT2PDEN */
+ 0x00000001, /* EMC_AR2PDEN */
+ 0x0000000c, /* EMC_RW2PDEN */
+ 0x0000001d, /* EMC_TXSR */
+ 0x0000001d, /* EMC_TXSRDLL */
+ 0x00000003, /* EMC_TCKE */
+ 0x00000004, /* EMC_TCKESR */
+ 0x00000003, /* EMC_TPD */
+ 0x00000009, /* EMC_TFAW */
+ 0x00000005, /* EMC_TRPAB */
+ 0x00000003, /* EMC_TCLKSTABLE */
+ 0x00000003, /* EMC_TCLKSTOP */
+ 0x00000351, /* EMC_TREFBW */
+ 0x00000000, /* EMC_FBIO_CFG6 */
+ 0x00000000, /* EMC_ODT_WRITE */
+ 0x00000000, /* EMC_ODT_READ */
+ 0x1361a296, /* EMC_FBIO_CFG5 */
+ 0x005800a0, /* EMC_CFG_DIG_DLL */
+ 0x00008000, /* EMC_CFG_DIG_DLL_PERIOD */
+ 0x00048000, /* EMC_DLL_XFORM_DQS0 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS1 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS2 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS3 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS4 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS5 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS6 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS7 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS8 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS9 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS10 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS11 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS12 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS13 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS14 */
+ 0x00048000, /* EMC_DLL_XFORM_DQS15 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE0 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE1 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE2 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE3 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE4 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE5 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE6 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE7 */
+ 0x00080000, /* EMC_DLL_XFORM_ADDR0 */
+ 0x00080000, /* EMC_DLL_XFORM_ADDR1 */
+ 0x00000000, /* EMC_DLL_XFORM_ADDR2 */
+ 0x00080000, /* EMC_DLL_XFORM_ADDR3 */
+ 0x00080000, /* EMC_DLL_XFORM_ADDR4 */
+ 0x00000000, /* EMC_DLL_XFORM_ADDR5 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE8 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE9 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE10 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE11 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE12 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE13 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE14 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE15 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS0 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS1 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS2 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS3 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS4 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS5 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS6 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS7 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS8 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS9 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS10 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS11 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS12 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS13 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS14 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS15 */
+ 0x00090000, /* EMC_DLL_XFORM_DQ0 */
+ 0x00090000, /* EMC_DLL_XFORM_DQ1 */
+ 0x00090000, /* EMC_DLL_XFORM_DQ2 */
+ 0x00090000, /* EMC_DLL_XFORM_DQ3 */
+ 0x00009000, /* EMC_DLL_XFORM_DQ4 */
+ 0x00009000, /* EMC_DLL_XFORM_DQ5 */
+ 0x00009000, /* EMC_DLL_XFORM_DQ6 */
+ 0x00009000, /* EMC_DLL_XFORM_DQ7 */
+ 0x00000200, /* EMC_XM2CMDPADCTRL */
+ 0x00000000, /* EMC_XM2CMDPADCTRL4 */
+ 0x00100100, /* EMC_XM2CMDPADCTRL5 */
+ 0x0130b018, /* EMC_XM2DQSPADCTRL2 */
+ 0x00000000, /* EMC_XM2DQPADCTRL2 */
+ 0x00000000, /* EMC_XM2DQPADCTRL3 */
+ 0x77ffc000, /* EMC_XM2CLKPADCTRL */
+ 0x00000606, /* EMC_XM2CLKPADCTRL2 */
+ 0x81f1f008, /* EMC_XM2COMPPADCTRL */
+ 0x07070000, /* EMC_XM2VTTGENPADCTRL */
+ 0x0000003f, /* EMC_XM2VTTGENPADCTRL2 */
+ 0x015ddddd, /* EMC_XM2VTTGENPADCTRL3 */
+ 0x51451400, /* EMC_XM2DQSPADCTRL3 */
+ 0x00514514, /* EMC_XM2DQSPADCTRL4 */
+ 0x00514514, /* EMC_XM2DQSPADCTRL5 */
+ 0x51451400, /* EMC_XM2DQSPADCTRL6 */
+ 0x0000003f, /* EMC_DSR_VTTGEN_DRV */
+ 0x00000000, /* EMC_TXDSRVTTGEN */
+ 0x00000000, /* EMC_FBIO_SPARE */
+ 0x00064000, /* EMC_ZCAL_INTERVAL */
+ 0x0000004a, /* EMC_ZCAL_WAIT_CNT */
+ 0x00cc0011, /* EMC_MRS_WAIT_CNT */
+ 0x00cc0011, /* EMC_MRS_WAIT_CNT2 */
+ 0x00000000, /* EMC_CTT */
+ 0x00000004, /* EMC_CTT_DURATION */
+ 0x0000d3b3, /* EMC_CFG_PIPE */
+ 0x80000713, /* EMC_DYN_SELF_REF_CONTROL */
+ 0x0000000a, /* EMC_QPOP */
+ 0x01000003, /* MC_EMEM_ARB_CFG */
+ 0x80000040, /* MC_EMEM_ARB_OUTSTANDING_REQ */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_RCD */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_RP */
+ 0x00000006, /* MC_EMEM_ARB_TIMING_RC */
+ 0x00000003, /* MC_EMEM_ARB_TIMING_RAS */
+ 0x00000005, /* MC_EMEM_ARB_TIMING_FAW */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_RRD */
+ 0x00000002, /* MC_EMEM_ARB_TIMING_RAP2PRE */
+ 0x00000007, /* MC_EMEM_ARB_TIMING_WAP2PRE */
+ 0x00000003, /* MC_EMEM_ARB_TIMING_R2R */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_W2W */
+ 0x00000005, /* MC_EMEM_ARB_TIMING_R2W */
+ 0x00000005, /* MC_EMEM_ARB_TIMING_W2R */
+ 0x05050103, /* MC_EMEM_ARB_DA_TURNS */
+ 0x000a0506, /* MC_EMEM_ARB_DA_COVERS */
+ 0x71e40a07, /* MC_EMEM_ARB_MISC0 */
+ 0x70000f03, /* MC_EMEM_ARB_MISC1 */
+ 0x001f0000, /* MC_EMEM_ARB_RING1_THROTTLE */
+ },
+ {
+ 0x00000001, /* MC_MLL_MPCORER_PTSA_RATE */
+ 0x00000062, /* MC_PTSA_GRANT_DECREMENT */
+ 0x00ff006d, /* MC_LATENCY_ALLOWANCE_XUSB_0 */
+ 0x00ff006d, /* MC_LATENCY_ALLOWANCE_XUSB_1 */
+ 0x00ff003c, /* MC_LATENCY_ALLOWANCE_TSEC_0 */
+ 0x00ff00af, /* MC_LATENCY_ALLOWANCE_SDMMCA_0 */
+ 0x00ff004f, /* MC_LATENCY_ALLOWANCE_SDMMCAA_0 */
+ 0x00ff00af, /* MC_LATENCY_ALLOWANCE_SDMMC_0 */
+ 0x00ff004f, /* MC_LATENCY_ALLOWANCE_SDMMCAB_0 */
+ 0x004e0049, /* MC_LATENCY_ALLOWANCE_PPCS_0 */
+ 0x00ff0080, /* MC_LATENCY_ALLOWANCE_PPCS_1 */
+ 0x00ff0004, /* MC_LATENCY_ALLOWANCE_MPCORE_0 */
+ 0x00ff0004, /* MC_LATENCY_ALLOWANCE_MPCORELP_0 */
+ 0x00080057, /* MC_LATENCY_ALLOWANCE_HC_0 */
+ 0x000000ff, /* MC_LATENCY_ALLOWANCE_HC_1 */
+ 0x00ff0004, /* MC_LATENCY_ALLOWANCE_AVPC_0 */
+ 0x00ff0063, /* MC_LATENCY_ALLOWANCE_GPU_0 */
+ 0x00ff0036, /* MC_LATENCY_ALLOWANCE_MSENC_0 */
+ 0x00ff0024, /* MC_LATENCY_ALLOWANCE_HDA_0 */
+ 0x00ff006b, /* MC_LATENCY_ALLOWANCE_VIC_0 */
+ 0x000000ff, /* MC_LATENCY_ALLOWANCE_VI2_0 */
+ 0x00000050, /* MC_LATENCY_ALLOWANCE_ISP2_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_ISP2_1 */
+ 0x00000050, /* MC_LATENCY_ALLOWANCE_ISP2B_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_ISP2B_1 */
+ 0x00d400ff, /* MC_LATENCY_ALLOWANCE_VDE_0 */
+ 0x00510050, /* MC_LATENCY_ALLOWANCE_VDE_1 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VDE_2 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VDE_3 */
+ 0x00ff00c6, /* MC_LATENCY_ALLOWANCE_SATA_0 */
+ 0x00ff006d, /* MC_LATENCY_ALLOWANCE_AFI_0 */
+ },
+ 0x00000017, /* EMC_ZCAL_WAIT_CNT after clock change */
+ 0x001fffff, /* EMC_AUTO_CAL_INTERVAL */
+ 0x00000802, /* EMC_CTT_TERM_CTRL */
+ 0xf3200000, /* EMC_CFG */
+ 0x000008cf, /* EMC_CFG_2 */
+ 0x0004013c, /* EMC_SEL_DPD_CTRL */
+ 0x00580068, /* EMC_CFG_DIG_DLL */
+ 0x00000008, /* EMC_BGBIAS_CTL0 */
+ 0x00000000, /* EMC_AUTO_CAL_CONFIG2 */
+ 0x00000000, /* EMC_AUTO_CAL_CONFIG3 */
+ 0xa1430000, /* EMC_AUTO_CAL_CONFIG */
+ 0x00000000, /* Mode Register 0 */
+ 0x00010083, /* Mode Register 1 */
+ 0x00020004, /* Mode Register 2 */
+ 0x800b0000, /* Mode Register 4 */
+ 3420, /* expected dvfs latency (ns) */
+ },
+ {
+ 0x19, /* V6.0.0 */
+ "011_300000_NoCfgVersion_V6.0.0_V1.1", /* DVFS table version */
+ 300000, /* SDRAM frequency */
+ 820, /* min voltage */
+ 820, /* gpu min voltage */
+ "pllc_out0", /* clock source id */
+ 0x20000002, /* CLK_SOURCE_EMC */
+ 165, /* number of burst_regs */
+ 31, /* number of up_down_regs */
+ {
+ 0x00000011, /* EMC_RC */
+ 0x00000026, /* EMC_RFC */
+ 0x00000000, /* EMC_RFC_SLR */
+ 0x0000000c, /* EMC_RAS */
+ 0x00000005, /* EMC_RP */
+ 0x00000007, /* EMC_R2W */
+ 0x00000008, /* EMC_W2R */
+ 0x00000003, /* EMC_R2P */
+ 0x0000000a, /* EMC_W2P */
+ 0x00000005, /* EMC_RD_RCD */
+ 0x00000005, /* EMC_WR_RCD */
+ 0x00000002, /* EMC_RRD */
+ 0x00000003, /* EMC_REXT */
+ 0x00000000, /* EMC_WEXT */
+ 0x00000002, /* EMC_WDV */
+ 0x00000002, /* EMC_WDV_MASK */
+ 0x00000006, /* EMC_QUSE */
+ 0x00000003, /* EMC_QUSE_WIDTH */
+ 0x00000000, /* EMC_IBDLY */
+ 0x00000003, /* EMC_EINPUT */
+ 0x00000008, /* EMC_EINPUT_DURATION */
+ 0x00030000, /* EMC_PUTERM_EXTRA */
+ 0x00000004, /* EMC_PUTERM_WIDTH */
+ 0x00000000, /* EMC_PUTERM_ADJ */
+ 0x00000000, /* EMC_CDB_CNTL_1 */
+ 0x00000000, /* EMC_CDB_CNTL_2 */
+ 0x00000000, /* EMC_CDB_CNTL_3 */
+ 0x00000002, /* EMC_QRST */
+ 0x0000000f, /* EMC_QSAFE */
+ 0x00000012, /* EMC_RDV */
+ 0x00000014, /* EMC_RDV_MASK */
+ 0x0000046e, /* EMC_REFRESH */
+ 0x00000000, /* EMC_BURST_REFRESH_NUM */
+ 0x0000011b, /* EMC_PRE_REFRESH_REQ_CNT */
+ 0x00000002, /* EMC_PDEX2WR */
+ 0x00000002, /* EMC_PDEX2RD */
+ 0x00000005, /* EMC_PCHG2PDEN */
+ 0x00000000, /* EMC_ACT2PDEN */
+ 0x00000001, /* EMC_AR2PDEN */
+ 0x0000000c, /* EMC_RW2PDEN */
+ 0x0000002a, /* EMC_TXSR */
+ 0x0000002a, /* EMC_TXSRDLL */
+ 0x00000003, /* EMC_TCKE */
+ 0x00000005, /* EMC_TCKESR */
+ 0x00000003, /* EMC_TPD */
+ 0x0000000d, /* EMC_TFAW */
+ 0x00000007, /* EMC_TRPAB */
+ 0x00000003, /* EMC_TCLKSTABLE */
+ 0x00000003, /* EMC_TCLKSTOP */
+ 0x000004e0, /* EMC_TREFBW */
+ 0x00000000, /* EMC_FBIO_CFG6 */
+ 0x00000000, /* EMC_ODT_WRITE */
+ 0x00000000, /* EMC_ODT_READ */
+ 0x1361a096, /* EMC_FBIO_CFG5 */
+ 0x005800a0, /* EMC_CFG_DIG_DLL */
+ 0x00008000, /* EMC_CFG_DIG_DLL_PERIOD */
+ 0x00020000, /* EMC_DLL_XFORM_DQS0 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS1 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS2 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS3 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS4 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS5 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS6 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS7 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS8 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS9 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS10 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS11 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS12 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS13 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS14 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS15 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE0 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE1 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE2 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE3 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE4 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE5 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE6 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE7 */
+ 0x00050000, /* EMC_DLL_XFORM_ADDR0 */
+ 0x00050000, /* EMC_DLL_XFORM_ADDR1 */
+ 0x00000000, /* EMC_DLL_XFORM_ADDR2 */
+ 0x00050000, /* EMC_DLL_XFORM_ADDR3 */
+ 0x00050000, /* EMC_DLL_XFORM_ADDR4 */
+ 0x00000000, /* EMC_DLL_XFORM_ADDR5 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE8 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE9 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE10 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE11 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE12 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE13 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE14 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE15 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS0 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS1 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS2 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS3 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS4 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS5 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS6 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS7 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS8 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS9 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS10 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS11 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS12 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS13 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS14 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS15 */
+ 0x00060000, /* EMC_DLL_XFORM_DQ0 */
+ 0x00060000, /* EMC_DLL_XFORM_DQ1 */
+ 0x00060000, /* EMC_DLL_XFORM_DQ2 */
+ 0x00060000, /* EMC_DLL_XFORM_DQ3 */
+ 0x00006000, /* EMC_DLL_XFORM_DQ4 */
+ 0x00006000, /* EMC_DLL_XFORM_DQ5 */
+ 0x00006000, /* EMC_DLL_XFORM_DQ6 */
+ 0x00006000, /* EMC_DLL_XFORM_DQ7 */
+ 0x00000200, /* EMC_XM2CMDPADCTRL */
+ 0x00000000, /* EMC_XM2CMDPADCTRL4 */
+ 0x00100100, /* EMC_XM2CMDPADCTRL5 */
+ 0x01231239, /* EMC_XM2DQSPADCTRL2 */
+ 0x00000000, /* EMC_XM2DQPADCTRL2 */
+ 0x00000000, /* EMC_XM2DQPADCTRL3 */
+ 0x77ffc000, /* EMC_XM2CLKPADCTRL */
+ 0x00000606, /* EMC_XM2CLKPADCTRL2 */
+ 0x81f1f008, /* EMC_XM2COMPPADCTRL */
+ 0x07070000, /* EMC_XM2VTTGENPADCTRL */
+ 0x0000003f, /* EMC_XM2VTTGENPADCTRL2 */
+ 0x015ddddd, /* EMC_XM2VTTGENPADCTRL3 */
+ 0x51451420, /* EMC_XM2DQSPADCTRL3 */
+ 0x00514514, /* EMC_XM2DQSPADCTRL4 */
+ 0x00514514, /* EMC_XM2DQSPADCTRL5 */
+ 0x51451400, /* EMC_XM2DQSPADCTRL6 */
+ 0x0000003f, /* EMC_DSR_VTTGEN_DRV */
+ 0x00000000, /* EMC_TXDSRVTTGEN */
+ 0x00000000, /* EMC_FBIO_SPARE */
+ 0x00064000, /* EMC_ZCAL_INTERVAL */
+ 0x0000006c, /* EMC_ZCAL_WAIT_CNT */
+ 0x012c0011, /* EMC_MRS_WAIT_CNT */
+ 0x012c0011, /* EMC_MRS_WAIT_CNT2 */
+ 0x00000000, /* EMC_CTT */
+ 0x00000004, /* EMC_CTT_DURATION */
+ 0x000052a3, /* EMC_CFG_PIPE */
+ 0x800009ed, /* EMC_DYN_SELF_REF_CONTROL */
+ 0x0000000b, /* EMC_QPOP */
+ 0x08000004, /* MC_EMEM_ARB_CFG */
+ 0x80000040, /* MC_EMEM_ARB_OUTSTANDING_REQ */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_RCD */
+ 0x00000002, /* MC_EMEM_ARB_TIMING_RP */
+ 0x00000009, /* MC_EMEM_ARB_TIMING_RC */
+ 0x00000005, /* MC_EMEM_ARB_TIMING_RAS */
+ 0x00000007, /* MC_EMEM_ARB_TIMING_FAW */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_RRD */
+ 0x00000002, /* MC_EMEM_ARB_TIMING_RAP2PRE */
+ 0x00000007, /* MC_EMEM_ARB_TIMING_WAP2PRE */
+ 0x00000003, /* MC_EMEM_ARB_TIMING_R2R */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_W2W */
+ 0x00000005, /* MC_EMEM_ARB_TIMING_R2W */
+ 0x00000005, /* MC_EMEM_ARB_TIMING_W2R */
+ 0x05050103, /* MC_EMEM_ARB_DA_TURNS */
+ 0x000c0709, /* MC_EMEM_ARB_DA_COVERS */
+ 0x71c50e0a, /* MC_EMEM_ARB_MISC0 */
+ 0x70000f03, /* MC_EMEM_ARB_MISC1 */
+ 0x001f0000, /* MC_EMEM_ARB_RING1_THROTTLE */
+ },
+ {
+ 0x00000004, /* MC_MLL_MPCORER_PTSA_RATE */
+ 0x00000090, /* MC_PTSA_GRANT_DECREMENT */
+ 0x00ff004a, /* MC_LATENCY_ALLOWANCE_XUSB_0 */
+ 0x00ff004a, /* MC_LATENCY_ALLOWANCE_XUSB_1 */
+ 0x00ff003c, /* MC_LATENCY_ALLOWANCE_TSEC_0 */
+ 0x00ff0090, /* MC_LATENCY_ALLOWANCE_SDMMCA_0 */
+ 0x00ff0041, /* MC_LATENCY_ALLOWANCE_SDMMCAA_0 */
+ 0x00ff0090, /* MC_LATENCY_ALLOWANCE_SDMMC_0 */
+ 0x00ff0041, /* MC_LATENCY_ALLOWANCE_SDMMCAB_0 */
+ 0x00350049, /* MC_LATENCY_ALLOWANCE_PPCS_0 */
+ 0x00ff0080, /* MC_LATENCY_ALLOWANCE_PPCS_1 */
+ 0x00ff0004, /* MC_LATENCY_ALLOWANCE_MPCORE_0 */
+ 0x00ff0004, /* MC_LATENCY_ALLOWANCE_MPCORELP_0 */
+ 0x0008003b, /* MC_LATENCY_ALLOWANCE_HC_0 */
+ 0x000000ff, /* MC_LATENCY_ALLOWANCE_HC_1 */
+ 0x00ff0004, /* MC_LATENCY_ALLOWANCE_AVPC_0 */
+ 0x00ff0043, /* MC_LATENCY_ALLOWANCE_GPU_0 */
+ 0x00ff002d, /* MC_LATENCY_ALLOWANCE_MSENC_0 */
+ 0x00ff0024, /* MC_LATENCY_ALLOWANCE_HDA_0 */
+ 0x00ff0049, /* MC_LATENCY_ALLOWANCE_VIC_0 */
+ 0x000000ff, /* MC_LATENCY_ALLOWANCE_VI2_0 */
+ 0x00000036, /* MC_LATENCY_ALLOWANCE_ISP2_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_ISP2_1 */
+ 0x00000036, /* MC_LATENCY_ALLOWANCE_ISP2B_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_ISP2B_1 */
+ 0x00d400ff, /* MC_LATENCY_ALLOWANCE_VDE_0 */
+ 0x00510036, /* MC_LATENCY_ALLOWANCE_VDE_1 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VDE_2 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VDE_3 */
+ 0x00ff0087, /* MC_LATENCY_ALLOWANCE_SATA_0 */
+ 0x00ff004a, /* MC_LATENCY_ALLOWANCE_AFI_0 */
+ },
+ 0x0000001f, /* EMC_ZCAL_WAIT_CNT after clock change */
+ 0x001fffff, /* EMC_AUTO_CAL_INTERVAL */
+ 0x00000802, /* EMC_CTT_TERM_CTRL */
+ 0xf3300000, /* EMC_CFG */
+ 0x000008d7, /* EMC_CFG_2 */
+ 0x0004013c, /* EMC_SEL_DPD_CTRL */
+ 0x00580068, /* EMC_CFG_DIG_DLL */
+ 0x00000000, /* EMC_BGBIAS_CTL0 */
+ 0x00000000, /* EMC_AUTO_CAL_CONFIG2 */
+ 0x00000000, /* EMC_AUTO_CAL_CONFIG3 */
+ 0xa1430000, /* EMC_AUTO_CAL_CONFIG */
+ 0x00000000, /* Mode Register 0 */
+ 0x00010083, /* Mode Register 1 */
+ 0x00020004, /* Mode Register 2 */
+ 0x800b0000, /* Mode Register 4 */
+ 2680, /* expected dvfs latency (ns) */
+ },
+ {
+ 0x19, /* V6.0.0 */
+ "011_396000_NoCfgVersion_V6.0.0_V1.1", /* DVFS table version */
+ 396000, /* SDRAM frequency */
+ 850, /* min voltage */
+ 850, /* gpu min voltage */
+ "pllm_out0", /* clock source id */
+ 0x00000002, /* CLK_SOURCE_EMC */
+ 165, /* number of burst_regs */
+ 31, /* number of up_down_regs */
+ {
+ 0x00000017, /* EMC_RC */
+ 0x00000033, /* EMC_RFC */
+ 0x00000000, /* EMC_RFC_SLR */
+ 0x00000010, /* EMC_RAS */
+ 0x00000007, /* EMC_RP */
+ 0x00000008, /* EMC_R2W */
+ 0x00000008, /* EMC_W2R */
+ 0x00000003, /* EMC_R2P */
+ 0x0000000a, /* EMC_W2P */
+ 0x00000007, /* EMC_RD_RCD */
+ 0x00000007, /* EMC_WR_RCD */
+ 0x00000003, /* EMC_RRD */
+ 0x00000003, /* EMC_REXT */
+ 0x00000000, /* EMC_WEXT */
+ 0x00000002, /* EMC_WDV */
+ 0x00000002, /* EMC_WDV_MASK */
+ 0x00000006, /* EMC_QUSE */
+ 0x00000003, /* EMC_QUSE_WIDTH */
+ 0x00000000, /* EMC_IBDLY */
+ 0x00000002, /* EMC_EINPUT */
+ 0x00000009, /* EMC_EINPUT_DURATION */
+ 0x00030000, /* EMC_PUTERM_EXTRA */
+ 0x00000004, /* EMC_PUTERM_WIDTH */
+ 0x00000000, /* EMC_PUTERM_ADJ */
+ 0x00000000, /* EMC_CDB_CNTL_1 */
+ 0x00000000, /* EMC_CDB_CNTL_2 */
+ 0x00000000, /* EMC_CDB_CNTL_3 */
+ 0x00000001, /* EMC_QRST */
+ 0x00000010, /* EMC_QSAFE */
+ 0x00000012, /* EMC_RDV */
+ 0x00000014, /* EMC_RDV_MASK */
+ 0x000005d9, /* EMC_REFRESH */
+ 0x00000000, /* EMC_BURST_REFRESH_NUM */
+ 0x00000176, /* EMC_PRE_REFRESH_REQ_CNT */
+ 0x00000002, /* EMC_PDEX2WR */
+ 0x00000002, /* EMC_PDEX2RD */
+ 0x00000007, /* EMC_PCHG2PDEN */
+ 0x00000000, /* EMC_ACT2PDEN */
+ 0x00000001, /* EMC_AR2PDEN */
+ 0x0000000e, /* EMC_RW2PDEN */
+ 0x00000038, /* EMC_TXSR */
+ 0x00000038, /* EMC_TXSRDLL */
+ 0x00000003, /* EMC_TCKE */
+ 0x00000006, /* EMC_TCKESR */
+ 0x00000003, /* EMC_TPD */
+ 0x00000012, /* EMC_TFAW */
+ 0x00000009, /* EMC_TRPAB */
+ 0x00000003, /* EMC_TCLKSTABLE */
+ 0x00000003, /* EMC_TCLKSTOP */
+ 0x00000670, /* EMC_TREFBW */
+ 0x00000000, /* EMC_FBIO_CFG6 */
+ 0x00000000, /* EMC_ODT_WRITE */
+ 0x00000000, /* EMC_ODT_READ */
+ 0x1361a096, /* EMC_FBIO_CFG5 */
+ 0x005800a0, /* EMC_CFG_DIG_DLL */
+ 0x00008000, /* EMC_CFG_DIG_DLL_PERIOD */
+ 0x00020000, /* EMC_DLL_XFORM_DQS0 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS1 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS2 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS3 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS4 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS5 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS6 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS7 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS8 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS9 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS10 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS11 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS12 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS13 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS14 */
+ 0x00020000, /* EMC_DLL_XFORM_DQS15 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE0 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE1 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE2 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE3 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE4 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE5 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE6 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE7 */
+ 0x00050000, /* EMC_DLL_XFORM_ADDR0 */
+ 0x00050000, /* EMC_DLL_XFORM_ADDR1 */
+ 0x00000000, /* EMC_DLL_XFORM_ADDR2 */
+ 0x00050000, /* EMC_DLL_XFORM_ADDR3 */
+ 0x00050000, /* EMC_DLL_XFORM_ADDR4 */
+ 0x00000000, /* EMC_DLL_XFORM_ADDR5 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE8 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE9 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE10 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE11 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE12 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE13 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE14 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE15 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS0 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS1 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS2 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS3 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS4 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS5 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS6 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS7 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS8 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS9 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS10 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS11 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS12 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS13 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS14 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS15 */
+ 0x00040000, /* EMC_DLL_XFORM_DQ0 */
+ 0x00040000, /* EMC_DLL_XFORM_DQ1 */
+ 0x00040000, /* EMC_DLL_XFORM_DQ2 */
+ 0x00040000, /* EMC_DLL_XFORM_DQ3 */
+ 0x00004000, /* EMC_DLL_XFORM_DQ4 */
+ 0x00004000, /* EMC_DLL_XFORM_DQ5 */
+ 0x00004000, /* EMC_DLL_XFORM_DQ6 */
+ 0x00004000, /* EMC_DLL_XFORM_DQ7 */
+ 0x00000200, /* EMC_XM2CMDPADCTRL */
+ 0x00000000, /* EMC_XM2CMDPADCTRL4 */
+ 0x00100100, /* EMC_XM2CMDPADCTRL5 */
+ 0x01231239, /* EMC_XM2DQSPADCTRL2 */
+ 0x00000000, /* EMC_XM2DQPADCTRL2 */
+ 0x00000000, /* EMC_XM2DQPADCTRL3 */
+ 0x77ffc000, /* EMC_XM2CLKPADCTRL */
+ 0x00000606, /* EMC_XM2CLKPADCTRL2 */
+ 0x81f1f008, /* EMC_XM2COMPPADCTRL */
+ 0x07070000, /* EMC_XM2VTTGENPADCTRL */
+ 0x0000003f, /* EMC_XM2VTTGENPADCTRL2 */
+ 0x015ddddd, /* EMC_XM2VTTGENPADCTRL3 */
+ 0x51451420, /* EMC_XM2DQSPADCTRL3 */
+ 0x00514514, /* EMC_XM2DQSPADCTRL4 */
+ 0x00514514, /* EMC_XM2DQSPADCTRL5 */
+ 0x51451400, /* EMC_XM2DQSPADCTRL6 */
+ 0x0000003f, /* EMC_DSR_VTTGEN_DRV */
+ 0x00000000, /* EMC_TXDSRVTTGEN */
+ 0x00000000, /* EMC_FBIO_SPARE */
+ 0x00064000, /* EMC_ZCAL_INTERVAL */
+ 0x0000008f, /* EMC_ZCAL_WAIT_CNT */
+ 0x018c0011, /* EMC_MRS_WAIT_CNT */
+ 0x018c0011, /* EMC_MRS_WAIT_CNT2 */
+ 0x00000000, /* EMC_CTT */
+ 0x00000004, /* EMC_CTT_DURATION */
+ 0x000052a3, /* EMC_CFG_PIPE */
+ 0x80000cc7, /* EMC_DYN_SELF_REF_CONTROL */
+ 0x0000000b, /* EMC_QPOP */
+ 0x0f000005, /* MC_EMEM_ARB_CFG */
+ 0x80000040, /* MC_EMEM_ARB_OUTSTANDING_REQ */
+ 0x00000002, /* MC_EMEM_ARB_TIMING_RCD */
+ 0x00000003, /* MC_EMEM_ARB_TIMING_RP */
+ 0x0000000c, /* MC_EMEM_ARB_TIMING_RC */
+ 0x00000007, /* MC_EMEM_ARB_TIMING_RAS */
+ 0x00000009, /* MC_EMEM_ARB_TIMING_FAW */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_RRD */
+ 0x00000002, /* MC_EMEM_ARB_TIMING_RAP2PRE */
+ 0x00000007, /* MC_EMEM_ARB_TIMING_WAP2PRE */
+ 0x00000003, /* MC_EMEM_ARB_TIMING_R2R */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_W2W */
+ 0x00000005, /* MC_EMEM_ARB_TIMING_R2W */
+ 0x00000005, /* MC_EMEM_ARB_TIMING_W2R */
+ 0x05050103, /* MC_EMEM_ARB_DA_TURNS */
+ 0x000e090c, /* MC_EMEM_ARB_DA_COVERS */
+ 0x71c6120d, /* MC_EMEM_ARB_MISC0 */
+ 0x70000f03, /* MC_EMEM_ARB_MISC1 */
+ 0x001f0000, /* MC_EMEM_ARB_RING1_THROTTLE */
+ },
+ {
+ 0x0000000a, /* MC_MLL_MPCORER_PTSA_RATE */
+ 0x000000be, /* MC_PTSA_GRANT_DECREMENT */
+ 0x00ff0038, /* MC_LATENCY_ALLOWANCE_XUSB_0 */
+ 0x00ff0038, /* MC_LATENCY_ALLOWANCE_XUSB_1 */
+ 0x00ff003c, /* MC_LATENCY_ALLOWANCE_TSEC_0 */
+ 0x00ff0090, /* MC_LATENCY_ALLOWANCE_SDMMCA_0 */
+ 0x00ff0041, /* MC_LATENCY_ALLOWANCE_SDMMCAA_0 */
+ 0x00ff0090, /* MC_LATENCY_ALLOWANCE_SDMMC_0 */
+ 0x00ff0041, /* MC_LATENCY_ALLOWANCE_SDMMCAB_0 */
+ 0x00280049, /* MC_LATENCY_ALLOWANCE_PPCS_0 */
+ 0x00ff0080, /* MC_LATENCY_ALLOWANCE_PPCS_1 */
+ 0x00ff0004, /* MC_LATENCY_ALLOWANCE_MPCORE_0 */
+ 0x00ff0004, /* MC_LATENCY_ALLOWANCE_MPCORELP_0 */
+ 0x0008002d, /* MC_LATENCY_ALLOWANCE_HC_0 */
+ 0x000000ff, /* MC_LATENCY_ALLOWANCE_HC_1 */
+ 0x00ff0004, /* MC_LATENCY_ALLOWANCE_AVPC_0 */
+ 0x00ff0033, /* MC_LATENCY_ALLOWANCE_GPU_0 */
+ 0x00ff0022, /* MC_LATENCY_ALLOWANCE_MSENC_0 */
+ 0x00ff0024, /* MC_LATENCY_ALLOWANCE_HDA_0 */
+ 0x00ff0037, /* MC_LATENCY_ALLOWANCE_VIC_0 */
+ 0x000000ff, /* MC_LATENCY_ALLOWANCE_VI2_0 */
+ 0x00000036, /* MC_LATENCY_ALLOWANCE_ISP2_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_ISP2_1 */
+ 0x00000036, /* MC_LATENCY_ALLOWANCE_ISP2B_0 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_ISP2B_1 */
+ 0x00d400ff, /* MC_LATENCY_ALLOWANCE_VDE_0 */
+ 0x00510029, /* MC_LATENCY_ALLOWANCE_VDE_1 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VDE_2 */
+ 0x00ff00ff, /* MC_LATENCY_ALLOWANCE_VDE_3 */
+ 0x00ff0066, /* MC_LATENCY_ALLOWANCE_SATA_0 */
+ 0x00ff0038, /* MC_LATENCY_ALLOWANCE_AFI_0 */
+ },
+ 0x00000028, /* EMC_ZCAL_WAIT_CNT after clock change */
+ 0x001fffff, /* EMC_AUTO_CAL_INTERVAL */
+ 0x00000802, /* EMC_CTT_TERM_CTRL */
+ 0xf3300000, /* EMC_CFG */
+ 0x00000897, /* EMC_CFG_2 */
+ 0x0004001c, /* EMC_SEL_DPD_CTRL */
+ 0x00580068, /* EMC_CFG_DIG_DLL */
+ 0x00000000, /* EMC_BGBIAS_CTL0 */
+ 0x00000000, /* EMC_AUTO_CAL_CONFIG2 */
+ 0x00000000, /* EMC_AUTO_CAL_CONFIG3 */
+ 0xa1430000, /* EMC_AUTO_CAL_CONFIG */
+ 0x00000000, /* Mode Register 0 */
+ 0x00010083, /* Mode Register 1 */
+ 0x00020004, /* Mode Register 2 */
+ 0x800b0000, /* Mode Register 4 */
+ 2180, /* expected dvfs latency (ns) */
+ },
+ {
+ 0x19, /* V6.0.0 */
+ "011_528000_NoCfgVersion_V6.0.0_V1.1", /* DVFS table version */
+ 528000, /* SDRAM frequency */
+ 880, /* min voltage */
+ 870, /* gpu min voltage */
+ "pllm_ud", /* clock source id */
+ 0x80000000, /* CLK_SOURCE_EMC */
+ 165, /* number of burst_regs */
+ 31, /* number of up_down_regs */
+ {
+ 0x0000001f, /* EMC_RC */
+ 0x00000044, /* EMC_RFC */
+ 0x00000000, /* EMC_RFC_SLR */
+ 0x00000016, /* EMC_RAS */
+ 0x00000009, /* EMC_RP */
+ 0x00000009, /* EMC_R2W */
+ 0x00000009, /* EMC_W2R */
+ 0x00000003, /* EMC_R2P */
+ 0x0000000d, /* EMC_W2P */
+ 0x00000009, /* EMC_RD_RCD */
+ 0x00000009, /* EMC_WR_RCD */
+ 0x00000005, /* EMC_RRD */
+ 0x00000004, /* EMC_REXT */
+ 0x00000000, /* EMC_WEXT */
+ 0x00000002, /* EMC_WDV */
+ 0x00000002, /* EMC_WDV_MASK */
+ 0x00000008, /* EMC_QUSE */
+ 0x00000003, /* EMC_QUSE_WIDTH */
+ 0x00000000, /* EMC_IBDLY */
+ 0x00000003, /* EMC_EINPUT */
+ 0x0000000a, /* EMC_EINPUT_DURATION */
+ 0x00050000, /* EMC_PUTERM_EXTRA */
+ 0x00000004, /* EMC_PUTERM_WIDTH */
+ 0x00000000, /* EMC_PUTERM_ADJ */
+ 0x00000000, /* EMC_CDB_CNTL_1 */
+ 0x00000000, /* EMC_CDB_CNTL_2 */
+ 0x00000000, /* EMC_CDB_CNTL_3 */
+ 0x00000002, /* EMC_QRST */
+ 0x00000011, /* EMC_QSAFE */
+ 0x00000015, /* EMC_RDV */
+ 0x00000017, /* EMC_RDV_MASK */
+ 0x000007cd, /* EMC_REFRESH */
+ 0x00000000, /* EMC_BURST_REFRESH_NUM */
+ 0x000001f3, /* EMC_PRE_REFRESH_REQ_CNT */
+ 0x00000003, /* EMC_PDEX2WR */
+ 0x00000003, /* EMC_PDEX2RD */
+ 0x00000009, /* EMC_PCHG2PDEN */
+ 0x00000000, /* EMC_ACT2PDEN */
+ 0x00000001, /* EMC_AR2PDEN */
+ 0x00000011, /* EMC_RW2PDEN */
+ 0x0000004a, /* EMC_TXSR */
+ 0x0000004a, /* EMC_TXSRDLL */
+ 0x00000004, /* EMC_TCKE */
+ 0x00000008, /* EMC_TCKESR */
+ 0x00000004, /* EMC_TPD */
+ 0x00000019, /* EMC_TFAW */
+ 0x0000000c, /* EMC_TRPAB */
+ 0x00000003, /* EMC_TCLKSTABLE */
+ 0x00000003, /* EMC_TCLKSTOP */
+ 0x00000895, /* EMC_TREFBW */
+ 0x00000000, /* EMC_FBIO_CFG6 */
+ 0x00000000, /* EMC_ODT_WRITE */
+ 0x00000000, /* EMC_ODT_READ */
+ 0x1361a096, /* EMC_FBIO_CFG5 */
+ 0xe01200b9, /* EMC_CFG_DIG_DLL */
+ 0x00008000, /* EMC_CFG_DIG_DLL_PERIOD */
+ 0x0000000a, /* EMC_DLL_XFORM_DQS0 */
+ 0x0000000a, /* EMC_DLL_XFORM_DQS1 */
+ 0x0000000a, /* EMC_DLL_XFORM_DQS2 */
+ 0x0000000a, /* EMC_DLL_XFORM_DQS3 */
+ 0x0000000a, /* EMC_DLL_XFORM_DQS4 */
+ 0x0000000a, /* EMC_DLL_XFORM_DQS5 */
+ 0x0000000a, /* EMC_DLL_XFORM_DQS6 */
+ 0x0000000a, /* EMC_DLL_XFORM_DQS7 */
+ 0x0000000a, /* EMC_DLL_XFORM_DQS8 */
+ 0x0000000a, /* EMC_DLL_XFORM_DQS9 */
+ 0x0000000a, /* EMC_DLL_XFORM_DQS10 */
+ 0x0000000a, /* EMC_DLL_XFORM_DQS11 */
+ 0x0000000a, /* EMC_DLL_XFORM_DQS12 */
+ 0x0000000a, /* EMC_DLL_XFORM_DQS13 */
+ 0x0000000a, /* EMC_DLL_XFORM_DQS14 */
+ 0x0000000a, /* EMC_DLL_XFORM_DQS15 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE0 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE1 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE2 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE3 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE4 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE5 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE6 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE7 */
+ 0x00000010, /* EMC_DLL_XFORM_ADDR0 */
+ 0x00000010, /* EMC_DLL_XFORM_ADDR1 */
+ 0x00000000, /* EMC_DLL_XFORM_ADDR2 */
+ 0x00000010, /* EMC_DLL_XFORM_ADDR3 */
+ 0x00000010, /* EMC_DLL_XFORM_ADDR4 */
+ 0x00000000, /* EMC_DLL_XFORM_ADDR5 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE8 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE9 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE10 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE11 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE12 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE13 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE14 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE15 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS0 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS1 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS2 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS3 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS4 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS5 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS6 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS7 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS8 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS9 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS10 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS11 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS12 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS13 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS14 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS15 */
+ 0x0000000c, /* EMC_DLL_XFORM_DQ0 */
+ 0x0000000c, /* EMC_DLL_XFORM_DQ1 */
+ 0x0000000c, /* EMC_DLL_XFORM_DQ2 */
+ 0x0000000c, /* EMC_DLL_XFORM_DQ3 */
+ 0x0000000c, /* EMC_DLL_XFORM_DQ4 */
+ 0x0000000c, /* EMC_DLL_XFORM_DQ5 */
+ 0x0000000c, /* EMC_DLL_XFORM_DQ6 */
+ 0x0000000c, /* EMC_DLL_XFORM_DQ7 */
+ 0x00000220, /* EMC_XM2CMDPADCTRL */
+ 0x00000000, /* EMC_XM2CMDPADCTRL4 */
+ 0x00100100, /* EMC_XM2CMDPADCTRL5 */
+ 0x0123123d, /* EMC_XM2DQSPADCTRL2 */
+ 0x00000000, /* EMC_XM2DQPADCTRL2 */
+ 0x00000000, /* EMC_XM2DQPADCTRL3 */
+ 0x77ffc004, /* EMC_XM2CLKPADCTRL */
+ 0x00000505, /* EMC_XM2CLKPADCTRL2 */
+ 0x81f1f008, /* EMC_XM2COMPPADCTRL */
+ 0x07070000, /* EMC_XM2VTTGENPADCTRL */
+ 0x0000003f, /* EMC_XM2VTTGENPADCTRL2 */
+ 0x015ddddd, /* EMC_XM2VTTGENPADCTRL3 */
+ 0x51451420, /* EMC_XM2DQSPADCTRL3 */
+ 0x00514514, /* EMC_XM2DQSPADCTRL4 */
+ 0x00514514, /* EMC_XM2DQSPADCTRL5 */
+ 0x51451400, /* EMC_XM2DQSPADCTRL6 */
+ 0x0000003f, /* EMC_DSR_VTTGEN_DRV */
+ 0x00000000, /* EMC_TXDSRVTTGEN */
+ 0x00000000, /* EMC_FBIO_SPARE */
+ 0x00064000, /* EMC_ZCAL_INTERVAL */
+ 0x000000bf, /* EMC_ZCAL_WAIT_CNT */
+ 0x02100013, /* EMC_MRS_WAIT_CNT */
+ 0x02100013, /* EMC_MRS_WAIT_CNT2 */
+ 0x00000000, /* EMC_CTT */
+ 0x00000004, /* EMC_CTT_DURATION */
+ 0x000042a0, /* EMC_CFG_PIPE */
+ 0x800010b3, /* EMC_DYN_SELF_REF_CONTROL */
+ 0x0000000d, /* EMC_QPOP */
+ 0x0f000007, /* MC_EMEM_ARB_CFG */
+ 0x80000040, /* MC_EMEM_ARB_OUTSTANDING_REQ */
+ 0x00000003, /* MC_EMEM_ARB_TIMING_RCD */
+ 0x00000004, /* MC_EMEM_ARB_TIMING_RP */
+ 0x00000010, /* MC_EMEM_ARB_TIMING_RC */
+ 0x0000000a, /* MC_EMEM_ARB_TIMING_RAS */
+ 0x0000000d, /* MC_EMEM_ARB_TIMING_FAW */
+ 0x00000002, /* MC_EMEM_ARB_TIMING_RRD */
+ 0x00000002, /* MC_EMEM_ARB_TIMING_RAP2PRE */
+ 0x00000009, /* MC_EMEM_ARB_TIMING_WAP2PRE */
+ 0x00000003, /* MC_EMEM_ARB_TIMING_R2R */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_W2W */
+ 0x00000006, /* MC_EMEM_ARB_TIMING_R2W */
+ 0x00000006, /* MC_EMEM_ARB_TIMING_W2R */
+ 0x06060103, /* MC_EMEM_ARB_DA_TURNS */
+ 0x00120b10, /* MC_EMEM_ARB_DA_COVERS */
+ 0x71c81811, /* MC_EMEM_ARB_MISC0 */
+ 0x70000f03, /* MC_EMEM_ARB_MISC1 */
+ 0x001f0000, /* MC_EMEM_ARB_RING1_THROTTLE */
+ },
+ {
+ 0x0000000d, /* MC_MLL_MPCORER_PTSA_RATE */
+ 0x000000fd, /* MC_PTSA_GRANT_DECREMENT */
+ 0x00c10038, /* MC_LATENCY_ALLOWANCE_XUSB_0 */
+ 0x00c10038, /* MC_LATENCY_ALLOWANCE_XUSB_1 */
+ 0x00c1003c, /* MC_LATENCY_ALLOWANCE_TSEC_0 */
+ 0x00c10090, /* MC_LATENCY_ALLOWANCE_SDMMCA_0 */
+ 0x00c10041, /* MC_LATENCY_ALLOWANCE_SDMMCAA_0 */
+ 0x00c10090, /* MC_LATENCY_ALLOWANCE_SDMMC_0 */
+ 0x00c10041, /* MC_LATENCY_ALLOWANCE_SDMMCAB_0 */
+ 0x00270049, /* MC_LATENCY_ALLOWANCE_PPCS_0 */
+ 0x00c10080, /* MC_LATENCY_ALLOWANCE_PPCS_1 */
+ 0x00c10004, /* MC_LATENCY_ALLOWANCE_MPCORE_0 */
+ 0x00c10004, /* MC_LATENCY_ALLOWANCE_MPCORELP_0 */
+ 0x00080021, /* MC_LATENCY_ALLOWANCE_HC_0 */
+ 0x000000c1, /* MC_LATENCY_ALLOWANCE_HC_1 */
+ 0x00c10004, /* MC_LATENCY_ALLOWANCE_AVPC_0 */
+ 0x00c10026, /* MC_LATENCY_ALLOWANCE_GPU_0 */
+ 0x00c1001a, /* MC_LATENCY_ALLOWANCE_MSENC_0 */
+ 0x00c10024, /* MC_LATENCY_ALLOWANCE_HDA_0 */
+ 0x00c10029, /* MC_LATENCY_ALLOWANCE_VIC_0 */
+ 0x000000c1, /* MC_LATENCY_ALLOWANCE_VI2_0 */
+ 0x00000036, /* MC_LATENCY_ALLOWANCE_ISP2_0 */
+ 0x00c100c1, /* MC_LATENCY_ALLOWANCE_ISP2_1 */
+ 0x00000036, /* MC_LATENCY_ALLOWANCE_ISP2B_0 */
+ 0x00c100c1, /* MC_LATENCY_ALLOWANCE_ISP2B_1 */
+ 0x00d400ff, /* MC_LATENCY_ALLOWANCE_VDE_0 */
+ 0x00510029, /* MC_LATENCY_ALLOWANCE_VDE_1 */
+ 0x00c100c1, /* MC_LATENCY_ALLOWANCE_VDE_2 */
+ 0x00c100c1, /* MC_LATENCY_ALLOWANCE_VDE_3 */
+ 0x00c10065, /* MC_LATENCY_ALLOWANCE_SATA_0 */
+ 0x00c1002a, /* MC_LATENCY_ALLOWANCE_AFI_0 */
+ },
+ 0x00000034, /* EMC_ZCAL_WAIT_CNT after clock change */
+ 0x001fffff, /* EMC_AUTO_CAL_INTERVAL */
+ 0x00000802, /* EMC_CTT_TERM_CTRL */
+ 0xf3300000, /* EMC_CFG */
+ 0x0000089f, /* EMC_CFG_2 */
+ 0x0004001c, /* EMC_SEL_DPD_CTRL */
+ 0xe0120069, /* EMC_CFG_DIG_DLL */
+ 0x00000000, /* EMC_BGBIAS_CTL0 */
+ 0x00000000, /* EMC_AUTO_CAL_CONFIG2 */
+ 0x00000000, /* EMC_AUTO_CAL_CONFIG3 */
+ 0xa1430000, /* EMC_AUTO_CAL_CONFIG */
+ 0x00000000, /* Mode Register 0 */
+ 0x000100c3, /* Mode Register 1 */
+ 0x00020006, /* Mode Register 2 */
+ 0x800b0000, /* Mode Register 4 */
+ 1440, /* expected dvfs latency (ns) */
+ },
+ {
+ 0x19, /* V6.0.0 */
+ "011_600000_NoCfgVersion_V6.0.0_V1.1", /* DVFS table version */
+ 600000, /* SDRAM frequency */
+ 910, /* min voltage */
+ 910, /* gpu min voltage */
+ "pllc_ud", /* clock source id */
+ 0xe0000000, /* CLK_SOURCE_EMC */
+ 165, /* number of burst_regs */
+ 31, /* number of up_down_regs */
+ {
+ 0x00000023, /* EMC_RC */
+ 0x0000004d, /* EMC_RFC */
+ 0x00000000, /* EMC_RFC_SLR */
+ 0x00000019, /* EMC_RAS */
+ 0x0000000a, /* EMC_RP */
+ 0x0000000a, /* EMC_R2W */
+ 0x0000000b, /* EMC_W2R */
+ 0x00000004, /* EMC_R2P */
+ 0x0000000f, /* EMC_W2P */
+ 0x0000000a, /* EMC_RD_RCD */
+ 0x0000000a, /* EMC_WR_RCD */
+ 0x00000005, /* EMC_RRD */
+ 0x00000004, /* EMC_REXT */
+ 0x00000000, /* EMC_WEXT */
+ 0x00000004, /* EMC_WDV */
+ 0x00000004, /* EMC_WDV_MASK */
+ 0x0000000a, /* EMC_QUSE */
+ 0x00000004, /* EMC_QUSE_WIDTH */
+ 0x00000000, /* EMC_IBDLY */
+ 0x00000003, /* EMC_EINPUT */
+ 0x0000000d, /* EMC_EINPUT_DURATION */
+ 0x00070000, /* EMC_PUTERM_EXTRA */
+ 0x00000005, /* EMC_PUTERM_WIDTH */
+ 0x00000000, /* EMC_PUTERM_ADJ */
+ 0x00000000, /* EMC_CDB_CNTL_1 */
+ 0x00000000, /* EMC_CDB_CNTL_2 */
+ 0x00000000, /* EMC_CDB_CNTL_3 */
+ 0x00000002, /* EMC_QRST */
+ 0x00000014, /* EMC_QSAFE */
+ 0x00000018, /* EMC_RDV */
+ 0x0000001a, /* EMC_RDV_MASK */
+ 0x000008e4, /* EMC_REFRESH */
+ 0x00000000, /* EMC_BURST_REFRESH_NUM */
+ 0x00000239, /* EMC_PRE_REFRESH_REQ_CNT */
+ 0x00000004, /* EMC_PDEX2WR */
+ 0x00000004, /* EMC_PDEX2RD */
+ 0x0000000a, /* EMC_PCHG2PDEN */
+ 0x00000000, /* EMC_ACT2PDEN */
+ 0x00000001, /* EMC_AR2PDEN */
+ 0x00000013, /* EMC_RW2PDEN */
+ 0x00000054, /* EMC_TXSR */
+ 0x00000054, /* EMC_TXSRDLL */
+ 0x00000005, /* EMC_TCKE */
+ 0x00000009, /* EMC_TCKESR */
+ 0x00000005, /* EMC_TPD */
+ 0x0000001c, /* EMC_TFAW */
+ 0x0000000d, /* EMC_TRPAB */
+ 0x00000003, /* EMC_TCLKSTABLE */
+ 0x00000003, /* EMC_TCLKSTOP */
+ 0x000009c1, /* EMC_TREFBW */
+ 0x00000000, /* EMC_FBIO_CFG6 */
+ 0x00000000, /* EMC_ODT_WRITE */
+ 0x00000000, /* EMC_ODT_READ */
+ 0x1361a096, /* EMC_FBIO_CFG5 */
+ 0xe00e00b9, /* EMC_CFG_DIG_DLL */
+ 0x00008000, /* EMC_CFG_DIG_DLL_PERIOD */
+ 0x00000008, /* EMC_DLL_XFORM_DQS0 */
+ 0x00000008, /* EMC_DLL_XFORM_DQS1 */
+ 0x00000008, /* EMC_DLL_XFORM_DQS2 */
+ 0x00000008, /* EMC_DLL_XFORM_DQS3 */
+ 0x00000008, /* EMC_DLL_XFORM_DQS4 */
+ 0x00000008, /* EMC_DLL_XFORM_DQS5 */
+ 0x00000008, /* EMC_DLL_XFORM_DQS6 */
+ 0x00000008, /* EMC_DLL_XFORM_DQS7 */
+ 0x00000008, /* EMC_DLL_XFORM_DQS8 */
+ 0x00000008, /* EMC_DLL_XFORM_DQS9 */
+ 0x00000008, /* EMC_DLL_XFORM_DQS10 */
+ 0x00000008, /* EMC_DLL_XFORM_DQS11 */
+ 0x00000008, /* EMC_DLL_XFORM_DQS12 */
+ 0x00000008, /* EMC_DLL_XFORM_DQS13 */
+ 0x00000008, /* EMC_DLL_XFORM_DQS14 */
+ 0x00000008, /* EMC_DLL_XFORM_DQS15 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE0 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE1 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE2 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE3 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE4 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE5 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE6 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE7 */
+ 0x00000010, /* EMC_DLL_XFORM_ADDR0 */
+ 0x00000010, /* EMC_DLL_XFORM_ADDR1 */
+ 0x00000000, /* EMC_DLL_XFORM_ADDR2 */
+ 0x00000010, /* EMC_DLL_XFORM_ADDR3 */
+ 0x00000010, /* EMC_DLL_XFORM_ADDR4 */
+ 0x00000000, /* EMC_DLL_XFORM_ADDR5 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE8 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE9 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE10 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE11 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE12 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE13 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE14 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE15 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS0 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS1 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS2 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS3 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS4 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS5 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS6 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS7 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS8 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS9 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS10 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS11 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS12 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS13 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS14 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS15 */
+ 0x0000000b, /* EMC_DLL_XFORM_DQ0 */
+ 0x0000000b, /* EMC_DLL_XFORM_DQ1 */
+ 0x0000000b, /* EMC_DLL_XFORM_DQ2 */
+ 0x0000000b, /* EMC_DLL_XFORM_DQ3 */
+ 0x0000000b, /* EMC_DLL_XFORM_DQ4 */
+ 0x0000000b, /* EMC_DLL_XFORM_DQ5 */
+ 0x0000000b, /* EMC_DLL_XFORM_DQ6 */
+ 0x0000000b, /* EMC_DLL_XFORM_DQ7 */
+ 0x00000220, /* EMC_XM2CMDPADCTRL */
+ 0x00000000, /* EMC_XM2CMDPADCTRL4 */
+ 0x00100100, /* EMC_XM2CMDPADCTRL5 */
+ 0x0121103d, /* EMC_XM2DQSPADCTRL2 */
+ 0x00000000, /* EMC_XM2DQPADCTRL2 */
+ 0x00000000, /* EMC_XM2DQPADCTRL3 */
+ 0x77ffc004, /* EMC_XM2CLKPADCTRL */
+ 0x00000606, /* EMC_XM2CLKPADCTRL2 */
+ 0x81f1f008, /* EMC_XM2COMPPADCTRL */
+ 0x07070000, /* EMC_XM2VTTGENPADCTRL */
+ 0x0000003f, /* EMC_XM2VTTGENPADCTRL2 */
+ 0x015ddddd, /* EMC_XM2VTTGENPADCTRL3 */
+ 0x51451420, /* EMC_XM2DQSPADCTRL3 */
+ 0x00514514, /* EMC_XM2DQSPADCTRL4 */
+ 0x00514514, /* EMC_XM2DQSPADCTRL5 */
+ 0x51451400, /* EMC_XM2DQSPADCTRL6 */
+ 0x0000003f, /* EMC_DSR_VTTGEN_DRV */
+ 0x00000000, /* EMC_TXDSRVTTGEN */
+ 0x00000000, /* EMC_FBIO_SPARE */
+ 0x00064000, /* EMC_ZCAL_INTERVAL */
+ 0x000000d8, /* EMC_ZCAL_WAIT_CNT */
+ 0x02580014, /* EMC_MRS_WAIT_CNT */
+ 0x02580014, /* EMC_MRS_WAIT_CNT2 */
+ 0x00000000, /* EMC_CTT */
+ 0x00000005, /* EMC_CTT_DURATION */
+ 0x000040a0, /* EMC_CFG_PIPE */
+ 0x800012d7, /* EMC_DYN_SELF_REF_CONTROL */
+ 0x00000010, /* EMC_QPOP */
+ 0x00000009, /* MC_EMEM_ARB_CFG */
+ 0x80000040, /* MC_EMEM_ARB_OUTSTANDING_REQ */
+ 0x00000004, /* MC_EMEM_ARB_TIMING_RCD */
+ 0x00000005, /* MC_EMEM_ARB_TIMING_RP */
+ 0x00000012, /* MC_EMEM_ARB_TIMING_RC */
+ 0x0000000b, /* MC_EMEM_ARB_TIMING_RAS */
+ 0x0000000e, /* MC_EMEM_ARB_TIMING_FAW */
+ 0x00000002, /* MC_EMEM_ARB_TIMING_RRD */
+ 0x00000003, /* MC_EMEM_ARB_TIMING_RAP2PRE */
+ 0x0000000a, /* MC_EMEM_ARB_TIMING_WAP2PRE */
+ 0x00000003, /* MC_EMEM_ARB_TIMING_R2R */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_W2W */
+ 0x00000006, /* MC_EMEM_ARB_TIMING_R2W */
+ 0x00000007, /* MC_EMEM_ARB_TIMING_W2R */
+ 0x07060103, /* MC_EMEM_ARB_DA_TURNS */
+ 0x00140d12, /* MC_EMEM_ARB_DA_COVERS */
+ 0x71a91b13, /* MC_EMEM_ARB_MISC0 */
+ 0x70000f03, /* MC_EMEM_ARB_MISC1 */
+ 0x001f0000, /* MC_EMEM_ARB_RING1_THROTTLE */
+ },
+ {
+ 0x0000000f, /* MC_MLL_MPCORER_PTSA_RATE */
+ 0x00000120, /* MC_PTSA_GRANT_DECREMENT */
+ 0x00aa0038, /* MC_LATENCY_ALLOWANCE_XUSB_0 */
+ 0x00aa0038, /* MC_LATENCY_ALLOWANCE_XUSB_1 */
+ 0x00aa003c, /* MC_LATENCY_ALLOWANCE_TSEC_0 */
+ 0x00aa0090, /* MC_LATENCY_ALLOWANCE_SDMMCA_0 */
+ 0x00aa0041, /* MC_LATENCY_ALLOWANCE_SDMMCAA_0 */
+ 0x00aa0090, /* MC_LATENCY_ALLOWANCE_SDMMC_0 */
+ 0x00aa0041, /* MC_LATENCY_ALLOWANCE_SDMMCAB_0 */
+ 0x00270049, /* MC_LATENCY_ALLOWANCE_PPCS_0 */
+ 0x00aa0080, /* MC_LATENCY_ALLOWANCE_PPCS_1 */
+ 0x00aa0004, /* MC_LATENCY_ALLOWANCE_MPCORE_0 */
+ 0x00aa0004, /* MC_LATENCY_ALLOWANCE_MPCORELP_0 */
+ 0x0008001d, /* MC_LATENCY_ALLOWANCE_HC_0 */
+ 0x000000aa, /* MC_LATENCY_ALLOWANCE_HC_1 */
+ 0x00aa0004, /* MC_LATENCY_ALLOWANCE_AVPC_0 */
+ 0x00aa0022, /* MC_LATENCY_ALLOWANCE_GPU_0 */
+ 0x00aa0018, /* MC_LATENCY_ALLOWANCE_MSENC_0 */
+ 0x00aa0024, /* MC_LATENCY_ALLOWANCE_HDA_0 */
+ 0x00aa0024, /* MC_LATENCY_ALLOWANCE_VIC_0 */
+ 0x000000aa, /* MC_LATENCY_ALLOWANCE_VI2_0 */
+ 0x00000036, /* MC_LATENCY_ALLOWANCE_ISP2_0 */
+ 0x00aa00aa, /* MC_LATENCY_ALLOWANCE_ISP2_1 */
+ 0x00000036, /* MC_LATENCY_ALLOWANCE_ISP2B_0 */
+ 0x00aa00aa, /* MC_LATENCY_ALLOWANCE_ISP2B_1 */
+ 0x00d400ff, /* MC_LATENCY_ALLOWANCE_VDE_0 */
+ 0x00510029, /* MC_LATENCY_ALLOWANCE_VDE_1 */
+ 0x00aa00aa, /* MC_LATENCY_ALLOWANCE_VDE_2 */
+ 0x00aa00aa, /* MC_LATENCY_ALLOWANCE_VDE_3 */
+ 0x00aa0065, /* MC_LATENCY_ALLOWANCE_SATA_0 */
+ 0x00aa0025, /* MC_LATENCY_ALLOWANCE_AFI_0 */
+ },
+ 0x0000003a, /* EMC_ZCAL_WAIT_CNT after clock change */
+ 0x001fffff, /* EMC_AUTO_CAL_INTERVAL */
+ 0x00000802, /* EMC_CTT_TERM_CTRL */
+ 0xf3300000, /* EMC_CFG */
+ 0x0000089f, /* EMC_CFG_2 */
+ 0x0004001c, /* EMC_SEL_DPD_CTRL */
+ 0xe00e0069, /* EMC_CFG_DIG_DLL */
+ 0x00000000, /* EMC_BGBIAS_CTL0 */
+ 0x00000000, /* EMC_AUTO_CAL_CONFIG2 */
+ 0x00000000, /* EMC_AUTO_CAL_CONFIG3 */
+ 0xa1430000, /* EMC_AUTO_CAL_CONFIG */
+ 0x00000000, /* Mode Register 0 */
+ 0x000100e3, /* Mode Register 1 */
+ 0x00020007, /* Mode Register 2 */
+ 0x800b0000, /* Mode Register 4 */
+ 1440, /* expected dvfs latency (ns) */
+ },
+ {
+ 0x19, /* V6.0.0 */
+ "011_792000_NoCfgVersion_V6.0.0_V1.1", /* DVFS table version */
+ 792000, /* SDRAM frequency */
+ 980, /* min voltage */
+ 980, /* gpu min voltage */
+ "pllm_ud", /* clock source id */
+ 0x80000000, /* CLK_SOURCE_EMC */
+ 165, /* number of burst_regs */
+ 31, /* number of up_down_regs */
+ {
+ 0x0000002f, /* EMC_RC */
+ 0x00000066, /* EMC_RFC */
+ 0x00000000, /* EMC_RFC_SLR */
+ 0x00000021, /* EMC_RAS */
+ 0x0000000e, /* EMC_RP */
+ 0x0000000d, /* EMC_R2W */
+ 0x0000000d, /* EMC_W2R */
+ 0x00000005, /* EMC_R2P */
+ 0x00000013, /* EMC_W2P */
+ 0x0000000e, /* EMC_RD_RCD */
+ 0x0000000e, /* EMC_WR_RCD */
+ 0x00000007, /* EMC_RRD */
+ 0x00000004, /* EMC_REXT */
+ 0x00000000, /* EMC_WEXT */
+ 0x00000005, /* EMC_WDV */
+ 0x00000005, /* EMC_WDV_MASK */
+ 0x0000000e, /* EMC_QUSE */
+ 0x00000004, /* EMC_QUSE_WIDTH */
+ 0x00000000, /* EMC_IBDLY */
+ 0x00000005, /* EMC_EINPUT */
+ 0x0000000f, /* EMC_EINPUT_DURATION */
+ 0x000b0000, /* EMC_PUTERM_EXTRA */
+ 0x00000006, /* EMC_PUTERM_WIDTH */
+ 0x00000000, /* EMC_PUTERM_ADJ */
+ 0x00000000, /* EMC_CDB_CNTL_1 */
+ 0x00000000, /* EMC_CDB_CNTL_2 */
+ 0x00000000, /* EMC_CDB_CNTL_3 */
+ 0x00000004, /* EMC_QRST */
+ 0x00000016, /* EMC_QSAFE */
+ 0x0000001d, /* EMC_RDV */
+ 0x0000001f, /* EMC_RDV_MASK */
+ 0x00000bd1, /* EMC_REFRESH */
+ 0x00000000, /* EMC_BURST_REFRESH_NUM */
+ 0x000002f4, /* EMC_PRE_REFRESH_REQ_CNT */
+ 0x00000005, /* EMC_PDEX2WR */
+ 0x00000005, /* EMC_PDEX2RD */
+ 0x0000000e, /* EMC_PCHG2PDEN */
+ 0x00000000, /* EMC_ACT2PDEN */
+ 0x00000001, /* EMC_AR2PDEN */
+ 0x00000017, /* EMC_RW2PDEN */
+ 0x0000006f, /* EMC_TXSR */
+ 0x0000006f, /* EMC_TXSRDLL */
+ 0x00000006, /* EMC_TCKE */
+ 0x0000000c, /* EMC_TCKESR */
+ 0x00000006, /* EMC_TPD */
+ 0x00000026, /* EMC_TFAW */
+ 0x00000011, /* EMC_TRPAB */
+ 0x00000003, /* EMC_TCLKSTABLE */
+ 0x00000003, /* EMC_TCLKSTOP */
+ 0x00000cdf, /* EMC_TREFBW */
+ 0x00000000, /* EMC_FBIO_CFG6 */
+ 0x00000000, /* EMC_ODT_WRITE */
+ 0x00000000, /* EMC_ODT_READ */
+ 0x1361a096, /* EMC_FBIO_CFG5 */
+ 0xe00700b9, /* EMC_CFG_DIG_DLL */
+ 0x00008000, /* EMC_CFG_DIG_DLL_PERIOD */
+ 0x00000005, /* EMC_DLL_XFORM_DQS0 */
+ 0x00000005, /* EMC_DLL_XFORM_DQS1 */
+ 0x00000005, /* EMC_DLL_XFORM_DQS2 */
+ 0x00000005, /* EMC_DLL_XFORM_DQS3 */
+ 0x00000005, /* EMC_DLL_XFORM_DQS4 */
+ 0x00000005, /* EMC_DLL_XFORM_DQS5 */
+ 0x00000005, /* EMC_DLL_XFORM_DQS6 */
+ 0x00000005, /* EMC_DLL_XFORM_DQS7 */
+ 0x00000005, /* EMC_DLL_XFORM_DQS8 */
+ 0x00000005, /* EMC_DLL_XFORM_DQS9 */
+ 0x00000005, /* EMC_DLL_XFORM_DQS10 */
+ 0x00000005, /* EMC_DLL_XFORM_DQS11 */
+ 0x00000005, /* EMC_DLL_XFORM_DQS12 */
+ 0x00000005, /* EMC_DLL_XFORM_DQS13 */
+ 0x00000005, /* EMC_DLL_XFORM_DQS14 */
+ 0x00000005, /* EMC_DLL_XFORM_DQS15 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE0 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE1 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE2 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE3 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE4 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE5 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE6 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE7 */
+ 0x00004014, /* EMC_DLL_XFORM_ADDR0 */
+ 0x00004014, /* EMC_DLL_XFORM_ADDR1 */
+ 0x00000000, /* EMC_DLL_XFORM_ADDR2 */
+ 0x00004014, /* EMC_DLL_XFORM_ADDR3 */
+ 0x00004014, /* EMC_DLL_XFORM_ADDR4 */
+ 0x00000000, /* EMC_DLL_XFORM_ADDR5 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE8 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE9 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE10 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE11 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE12 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE13 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE14 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE15 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS0 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS1 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS2 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS3 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS4 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS5 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS6 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS7 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS8 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS9 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS10 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS11 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS12 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS13 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS14 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS15 */
+ 0x00000009, /* EMC_DLL_XFORM_DQ0 */
+ 0x00000009, /* EMC_DLL_XFORM_DQ1 */
+ 0x00000009, /* EMC_DLL_XFORM_DQ2 */
+ 0x00000009, /* EMC_DLL_XFORM_DQ3 */
+ 0x00000009, /* EMC_DLL_XFORM_DQ4 */
+ 0x00000009, /* EMC_DLL_XFORM_DQ5 */
+ 0x00000009, /* EMC_DLL_XFORM_DQ6 */
+ 0x00000009, /* EMC_DLL_XFORM_DQ7 */
+ 0x00000220, /* EMC_XM2CMDPADCTRL */
+ 0x00000000, /* EMC_XM2CMDPADCTRL4 */
+ 0x00100100, /* EMC_XM2CMDPADCTRL5 */
+ 0x0120103d, /* EMC_XM2DQSPADCTRL2 */
+ 0x00000000, /* EMC_XM2DQPADCTRL2 */
+ 0x00000000, /* EMC_XM2DQPADCTRL3 */
+ 0x77ffc004, /* EMC_XM2CLKPADCTRL */
+ 0x00000606, /* EMC_XM2CLKPADCTRL2 */
+ 0x81f1f008, /* EMC_XM2COMPPADCTRL */
+ 0x07070000, /* EMC_XM2VTTGENPADCTRL */
+ 0x00000000, /* EMC_XM2VTTGENPADCTRL2 */
+ 0x015ddddd, /* EMC_XM2VTTGENPADCTRL3 */
+ 0x59659620, /* EMC_XM2DQSPADCTRL3 */
+ 0x00492492, /* EMC_XM2DQSPADCTRL4 */
+ 0x00492492, /* EMC_XM2DQSPADCTRL5 */
+ 0x59659600, /* EMC_XM2DQSPADCTRL6 */
+ 0x0000003f, /* EMC_DSR_VTTGEN_DRV */
+ 0x00000000, /* EMC_TXDSRVTTGEN */
+ 0x00000000, /* EMC_FBIO_SPARE */
+ 0x00064000, /* EMC_ZCAL_INTERVAL */
+ 0x0000011e, /* EMC_ZCAL_WAIT_CNT */
+ 0x03180017, /* EMC_MRS_WAIT_CNT */
+ 0x03180017, /* EMC_MRS_WAIT_CNT2 */
+ 0x00000000, /* EMC_CTT */
+ 0x00000006, /* EMC_CTT_DURATION */
+ 0x00004080, /* EMC_CFG_PIPE */
+ 0x8000188b, /* EMC_DYN_SELF_REF_CONTROL */
+ 0x00000014, /* EMC_QPOP */
+ 0x0e00000b, /* MC_EMEM_ARB_CFG */
+ 0x80000040, /* MC_EMEM_ARB_OUTSTANDING_REQ */
+ 0x00000006, /* MC_EMEM_ARB_TIMING_RCD */
+ 0x00000007, /* MC_EMEM_ARB_TIMING_RP */
+ 0x00000018, /* MC_EMEM_ARB_TIMING_RC */
+ 0x0000000f, /* MC_EMEM_ARB_TIMING_RAS */
+ 0x00000013, /* MC_EMEM_ARB_TIMING_FAW */
+ 0x00000003, /* MC_EMEM_ARB_TIMING_RRD */
+ 0x00000003, /* MC_EMEM_ARB_TIMING_RAP2PRE */
+ 0x0000000c, /* MC_EMEM_ARB_TIMING_WAP2PRE */
+ 0x00000003, /* MC_EMEM_ARB_TIMING_R2R */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_W2W */
+ 0x00000008, /* MC_EMEM_ARB_TIMING_R2W */
+ 0x00000008, /* MC_EMEM_ARB_TIMING_W2R */
+ 0x08080103, /* MC_EMEM_ARB_DA_TURNS */
+ 0x001a1118, /* MC_EMEM_ARB_DA_COVERS */
+ 0x71ac2419, /* MC_EMEM_ARB_MISC0 */
+ 0x70000f02, /* MC_EMEM_ARB_MISC1 */
+ 0x001f0000, /* MC_EMEM_ARB_RING1_THROTTLE */
+ },
+ {
+ 0x00000013, /* MC_MLL_MPCORER_PTSA_RATE */
+ 0x0000017c, /* MC_PTSA_GRANT_DECREMENT */
+ 0x00810038, /* MC_LATENCY_ALLOWANCE_XUSB_0 */
+ 0x00810038, /* MC_LATENCY_ALLOWANCE_XUSB_1 */
+ 0x0081003c, /* MC_LATENCY_ALLOWANCE_TSEC_0 */
+ 0x00810090, /* MC_LATENCY_ALLOWANCE_SDMMCA_0 */
+ 0x00810041, /* MC_LATENCY_ALLOWANCE_SDMMCAA_0 */
+ 0x00810090, /* MC_LATENCY_ALLOWANCE_SDMMC_0 */
+ 0x00810041, /* MC_LATENCY_ALLOWANCE_SDMMCAB_0 */
+ 0x00270049, /* MC_LATENCY_ALLOWANCE_PPCS_0 */
+ 0x00810080, /* MC_LATENCY_ALLOWANCE_PPCS_1 */
+ 0x00810004, /* MC_LATENCY_ALLOWANCE_MPCORE_0 */
+ 0x00810004, /* MC_LATENCY_ALLOWANCE_MPCORELP_0 */
+ 0x00080016, /* MC_LATENCY_ALLOWANCE_HC_0 */
+ 0x00000081, /* MC_LATENCY_ALLOWANCE_HC_1 */
+ 0x00810004, /* MC_LATENCY_ALLOWANCE_AVPC_0 */
+ 0x00810019, /* MC_LATENCY_ALLOWANCE_GPU_0 */
+ 0x00810018, /* MC_LATENCY_ALLOWANCE_MSENC_0 */
+ 0x00810024, /* MC_LATENCY_ALLOWANCE_HDA_0 */
+ 0x0081001c, /* MC_LATENCY_ALLOWANCE_VIC_0 */
+ 0x00000081, /* MC_LATENCY_ALLOWANCE_VI2_0 */
+ 0x00000036, /* MC_LATENCY_ALLOWANCE_ISP2_0 */
+ 0x00810081, /* MC_LATENCY_ALLOWANCE_ISP2_1 */
+ 0x00000036, /* MC_LATENCY_ALLOWANCE_ISP2B_0 */
+ 0x00810081, /* MC_LATENCY_ALLOWANCE_ISP2B_1 */
+ 0x00d400ff, /* MC_LATENCY_ALLOWANCE_VDE_0 */
+ 0x00510029, /* MC_LATENCY_ALLOWANCE_VDE_1 */
+ 0x00810081, /* MC_LATENCY_ALLOWANCE_VDE_2 */
+ 0x00810081, /* MC_LATENCY_ALLOWANCE_VDE_3 */
+ 0x00810065, /* MC_LATENCY_ALLOWANCE_SATA_0 */
+ 0x0081001c, /* MC_LATENCY_ALLOWANCE_AFI_0 */
+ },
+ 0x0000004c, /* EMC_ZCAL_WAIT_CNT after clock change */
+ 0x001fffff, /* EMC_AUTO_CAL_INTERVAL */
+ 0x00000802, /* EMC_CTT_TERM_CTRL */
+ 0xf3300000, /* EMC_CFG */
+ 0x0000089f, /* EMC_CFG_2 */
+ 0x0004001c, /* EMC_SEL_DPD_CTRL */
+ 0xe0070069, /* EMC_CFG_DIG_DLL */
+ 0x00000000, /* EMC_BGBIAS_CTL0 */
+ 0x00000000, /* EMC_AUTO_CAL_CONFIG2 */
+ 0x00000000, /* EMC_AUTO_CAL_CONFIG3 */
+ 0xa1430404, /* EMC_AUTO_CAL_CONFIG */
+ 0x00000000, /* Mode Register 0 */
+ 0x00010043, /* Mode Register 1 */
+ 0x0002001a, /* Mode Register 2 */
+ 0x800b0000, /* Mode Register 4 */
+ 1200, /* expected dvfs latency (ns) */
+ },
+ {
+ 0x19, /* V6.0.0 */
+ "011_924000_00_V6.0.0_V1.1", /* DVFS table version */
+ 924000, /* SDRAM frequency */
+ 1010, /* min voltage */
+ 1010, /* gpu min voltage */
+ "pllm_ud", /* clock source id */
+ 0x80000000, /* CLK_SOURCE_EMC */
+ 165, /* number of burst_regs */
+ 31, /* number of up_down_regs */
+ {
+ 0x00000037, /* EMC_RC */
+ 0x00000078, /* EMC_RFC */
+ 0x00000000, /* EMC_RFC_SLR */
+ 0x00000026, /* EMC_RAS */
+ 0x00000010, /* EMC_RP */
+ 0x0000000f, /* EMC_R2W */
+ 0x00000010, /* EMC_W2R */
+ 0x00000006, /* EMC_R2P */
+ 0x00000017, /* EMC_W2P */
+ 0x00000010, /* EMC_RD_RCD */
+ 0x00000010, /* EMC_WR_RCD */
+ 0x00000009, /* EMC_RRD */
+ 0x00000005, /* EMC_REXT */
+ 0x00000000, /* EMC_WEXT */
+ 0x00000007, /* EMC_WDV */
+ 0x00000007, /* EMC_WDV_MASK */
+ 0x00000010, /* EMC_QUSE */
+ 0x00000005, /* EMC_QUSE_WIDTH */
+ 0x00000000, /* EMC_IBDLY */
+ 0x00000005, /* EMC_EINPUT */
+ 0x00000012, /* EMC_EINPUT_DURATION */
+ 0x000d0000, /* EMC_PUTERM_EXTRA */
+ 0x00000007, /* EMC_PUTERM_WIDTH */
+ 0x00000000, /* EMC_PUTERM_ADJ */
+ 0x00000000, /* EMC_CDB_CNTL_1 */
+ 0x00000000, /* EMC_CDB_CNTL_2 */
+ 0x00000000, /* EMC_CDB_CNTL_3 */
+ 0x00000004, /* EMC_QRST */
+ 0x00000019, /* EMC_QSAFE */
+ 0x00000020, /* EMC_RDV */
+ 0x00000022, /* EMC_RDV_MASK */
+ 0x00000dd4, /* EMC_REFRESH */
+ 0x00000000, /* EMC_BURST_REFRESH_NUM */
+ 0x00000375, /* EMC_PRE_REFRESH_REQ_CNT */
+ 0x00000006, /* EMC_PDEX2WR */
+ 0x00000006, /* EMC_PDEX2RD */
+ 0x00000010, /* EMC_PCHG2PDEN */
+ 0x00000000, /* EMC_ACT2PDEN */
+ 0x00000001, /* EMC_AR2PDEN */
+ 0x0000001b, /* EMC_RW2PDEN */
+ 0x00000082, /* EMC_TXSR */
+ 0x00000082, /* EMC_TXSRDLL */
+ 0x00000007, /* EMC_TCKE */
+ 0x0000000e, /* EMC_TCKESR */
+ 0x00000007, /* EMC_TPD */
+ 0x0000002d, /* EMC_TFAW */
+ 0x00000014, /* EMC_TRPAB */
+ 0x00000003, /* EMC_TCLKSTABLE */
+ 0x00000003, /* EMC_TCLKSTOP */
+ 0x00000f04, /* EMC_TREFBW */
+ 0x00000002, /* EMC_FBIO_CFG6 */
+ 0x00000000, /* EMC_ODT_WRITE */
+ 0x00000000, /* EMC_ODT_READ */
+ 0x1361a896, /* EMC_FBIO_CFG5 */
+ 0xe00400b9, /* EMC_CFG_DIG_DLL */
+ 0x00008000, /* EMC_CFG_DIG_DLL_PERIOD */
+ 0x00000005, /* EMC_DLL_XFORM_DQS0 */
+ 0x00000007, /* EMC_DLL_XFORM_DQS1 */
+ 0x00000007, /* EMC_DLL_XFORM_DQS2 */
+ 0x00000005, /* EMC_DLL_XFORM_DQS3 */
+ 0x00000003, /* EMC_DLL_XFORM_DQS4 */
+ 0x00000005, /* EMC_DLL_XFORM_DQS5 */
+ 0x00000005, /* EMC_DLL_XFORM_DQS6 */
+ 0x00000005, /* EMC_DLL_XFORM_DQS7 */
+ 0x00000005, /* EMC_DLL_XFORM_DQS8 */
+ 0x00000007, /* EMC_DLL_XFORM_DQS9 */
+ 0x00000007, /* EMC_DLL_XFORM_DQS10 */
+ 0x00000005, /* EMC_DLL_XFORM_DQS11 */
+ 0x00000003, /* EMC_DLL_XFORM_DQS12 */
+ 0x00000005, /* EMC_DLL_XFORM_DQS13 */
+ 0x00000005, /* EMC_DLL_XFORM_DQS14 */
+ 0x00000005, /* EMC_DLL_XFORM_DQS15 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE0 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE1 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE2 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE3 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE4 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE5 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE6 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE7 */
+ 0x0000000c, /* EMC_DLL_XFORM_ADDR0 */
+ 0x0000000c, /* EMC_DLL_XFORM_ADDR1 */
+ 0x00000000, /* EMC_DLL_XFORM_ADDR2 */
+ 0x0000000c, /* EMC_DLL_XFORM_ADDR3 */
+ 0x0000000c, /* EMC_DLL_XFORM_ADDR4 */
+ 0x00000000, /* EMC_DLL_XFORM_ADDR5 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE8 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE9 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE10 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE11 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE12 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE13 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE14 */
+ 0x00000000, /* EMC_DLL_XFORM_QUSE15 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS0 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS1 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS2 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS3 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS4 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS5 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS6 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS7 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS8 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS9 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS10 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS11 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS12 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS13 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS14 */
+ 0x00000000, /* EMC_DLI_TRIM_TXDQS15 */
+ 0x00000007, /* EMC_DLL_XFORM_DQ0 */
+ 0x00000007, /* EMC_DLL_XFORM_DQ1 */
+ 0x00000007, /* EMC_DLL_XFORM_DQ2 */
+ 0x00000007, /* EMC_DLL_XFORM_DQ3 */
+ 0x00000007, /* EMC_DLL_XFORM_DQ4 */
+ 0x00000007, /* EMC_DLL_XFORM_DQ5 */
+ 0x00000007, /* EMC_DLL_XFORM_DQ6 */
+ 0x00000007, /* EMC_DLL_XFORM_DQ7 */
+ 0x00000220, /* EMC_XM2CMDPADCTRL */
+ 0x00000000, /* EMC_XM2CMDPADCTRL4 */
+ 0x00100100, /* EMC_XM2CMDPADCTRL5 */
+ 0x0120103d, /* EMC_XM2DQSPADCTRL2 */
+ 0x00000000, /* EMC_XM2DQPADCTRL2 */
+ 0x00000000, /* EMC_XM2DQPADCTRL3 */
+ 0x77ffc004, /* EMC_XM2CLKPADCTRL */
+ 0x00000404, /* EMC_XM2CLKPADCTRL2 */
+ 0x81f1f008, /* EMC_XM2COMPPADCTRL */
+ 0x07070000, /* EMC_XM2VTTGENPADCTRL */
+ 0x00000000, /* EMC_XM2VTTGENPADCTRL2 */
+ 0x015ddddd, /* EMC_XM2VTTGENPADCTRL3 */
+ 0x55555520, /* EMC_XM2DQSPADCTRL3 */
+ 0x00514514, /* EMC_XM2DQSPADCTRL4 */
+ 0x00514514, /* EMC_XM2DQSPADCTRL5 */
+ 0x55555500, /* EMC_XM2DQSPADCTRL6 */
+ 0x0000003f, /* EMC_DSR_VTTGEN_DRV */
+ 0x00000000, /* EMC_TXDSRVTTGEN */
+ 0x00000000, /* EMC_FBIO_SPARE */
+ 0x00064000, /* EMC_ZCAL_INTERVAL */
+ 0x0000014d, /* EMC_ZCAL_WAIT_CNT */
+ 0x039c0019, /* EMC_MRS_WAIT_CNT */
+ 0x039c0019, /* EMC_MRS_WAIT_CNT2 */
+ 0x00000000, /* EMC_CTT */
+ 0x00000007, /* EMC_CTT_DURATION */
+ 0x00004080, /* EMC_CFG_PIPE */
+ 0x80001c77, /* EMC_DYN_SELF_REF_CONTROL */
+ 0x00000017, /* EMC_QPOP */
+ 0x0e00000d, /* MC_EMEM_ARB_CFG */
+ 0x80000040, /* MC_EMEM_ARB_OUTSTANDING_REQ */
+ 0x00000007, /* MC_EMEM_ARB_TIMING_RCD */
+ 0x00000008, /* MC_EMEM_ARB_TIMING_RP */
+ 0x0000001b, /* MC_EMEM_ARB_TIMING_RC */
+ 0x00000012, /* MC_EMEM_ARB_TIMING_RAS */
+ 0x00000017, /* MC_EMEM_ARB_TIMING_FAW */
+ 0x00000004, /* MC_EMEM_ARB_TIMING_RRD */
+ 0x00000004, /* MC_EMEM_ARB_TIMING_RAP2PRE */
+ 0x0000000e, /* MC_EMEM_ARB_TIMING_WAP2PRE */
+ 0x00000004, /* MC_EMEM_ARB_TIMING_R2R */
+ 0x00000001, /* MC_EMEM_ARB_TIMING_W2W */
+ 0x00000009, /* MC_EMEM_ARB_TIMING_R2W */
+ 0x00000009, /* MC_EMEM_ARB_TIMING_W2R */
+ 0x09090104, /* MC_EMEM_ARB_DA_TURNS */
+ 0x001e141b, /* MC_EMEM_ARB_DA_COVERS */
+ 0x71ae2a1c, /* MC_EMEM_ARB_MISC0 */
+ 0x70000f02, /* MC_EMEM_ARB_MISC1 */
+ 0x001f0000, /* MC_EMEM_ARB_RING1_THROTTLE */
+ },
+ {
+ 0x00000017, /* MC_MLL_MPCORER_PTSA_RATE */
+ 0x000001bb, /* MC_PTSA_GRANT_DECREMENT */
+ 0x006e0038, /* MC_LATENCY_ALLOWANCE_XUSB_0 */
+ 0x006e0038, /* MC_LATENCY_ALLOWANCE_XUSB_1 */
+ 0x006e003c, /* MC_LATENCY_ALLOWANCE_TSEC_0 */
+ 0x006e0090, /* MC_LATENCY_ALLOWANCE_SDMMCA_0 */
+ 0x006e0041, /* MC_LATENCY_ALLOWANCE_SDMMCAA_0 */
+ 0x006e0090, /* MC_LATENCY_ALLOWANCE_SDMMC_0 */
+ 0x006e0041, /* MC_LATENCY_ALLOWANCE_SDMMCAB_0 */
+ 0x00270049, /* MC_LATENCY_ALLOWANCE_PPCS_0 */
+ 0x006e0080, /* MC_LATENCY_ALLOWANCE_PPCS_1 */
+ 0x006e0004, /* MC_LATENCY_ALLOWANCE_MPCORE_0 */
+ 0x006e0004, /* MC_LATENCY_ALLOWANCE_MPCORELP_0 */
+ 0x00080016, /* MC_LATENCY_ALLOWANCE_HC_0 */
+ 0x0000006e, /* MC_LATENCY_ALLOWANCE_HC_1 */
+ 0x006e0004, /* MC_LATENCY_ALLOWANCE_AVPC_0 */
+ 0x006e0019, /* MC_LATENCY_ALLOWANCE_GPU_0 */
+ 0x006e0018, /* MC_LATENCY_ALLOWANCE_MSENC_0 */
+ 0x006e0024, /* MC_LATENCY_ALLOWANCE_HDA_0 */
+ 0x006e001b, /* MC_LATENCY_ALLOWANCE_VIC_0 */
+ 0x0000006e, /* MC_LATENCY_ALLOWANCE_VI2_0 */
+ 0x00000036, /* MC_LATENCY_ALLOWANCE_ISP2_0 */
+ 0x006e006e, /* MC_LATENCY_ALLOWANCE_ISP2_1 */
+ 0x00000036, /* MC_LATENCY_ALLOWANCE_ISP2B_0 */
+ 0x006e006e, /* MC_LATENCY_ALLOWANCE_ISP2B_1 */
+ 0x00d400ff, /* MC_LATENCY_ALLOWANCE_VDE_0 */
+ 0x00510029, /* MC_LATENCY_ALLOWANCE_VDE_1 */
+ 0x006e006e, /* MC_LATENCY_ALLOWANCE_VDE_2 */
+ 0x006e006e, /* MC_LATENCY_ALLOWANCE_VDE_3 */
+ 0x006e0065, /* MC_LATENCY_ALLOWANCE_SATA_0 */
+ 0x006e001c, /* MC_LATENCY_ALLOWANCE_AFI_0 */
+ },
+ 0x00000058, /* EMC_ZCAL_WAIT_CNT after clock change */
+ 0x001fffff, /* EMC_AUTO_CAL_INTERVAL */
+ 0x00000802, /* EMC_CTT_TERM_CTRL */
+ 0xf3300000, /* EMC_CFG */
+ 0x0000089f, /* EMC_CFG_2 */
+ 0x0004001c, /* EMC_SEL_DPD_CTRL */
+ 0xe0040069, /* EMC_CFG_DIG_DLL */
+ 0x00000000, /* EMC_BGBIAS_CTL0 */
+ 0x00000000, /* EMC_AUTO_CAL_CONFIG2 */
+ 0x00000000, /* EMC_AUTO_CAL_CONFIG3 */
+ 0xa1430808, /* EMC_AUTO_CAL_CONFIG */
+ 0x00000000, /* Mode Register 0 */
+ 0x00010083, /* Mode Register 1 */
+ 0x0002001c, /* Mode Register 2 */
+ 0x800b0000, /* Mode Register 4 */
+ 1180, /* expected dvfs latency (ns) */
+ },
+};
+
+#ifdef CONFIG_TEGRA_USE_NCT
+static struct tegra12_emc_pdata board_emc_pdata;
+#endif
+
+static struct tegra12_emc_pdata flounder_lpddr3_emc_pdata = {
+ .description = "flounder_emc_tables",
+ .tables = flounder_lpddr3_emc_table,
+ .num_tables = ARRAY_SIZE(flounder_lpddr3_emc_table),
+};
+
+/*
+ * Register the Flounder EMC tables and start the Tegra12 EMC driver.
+ */
+int __init flounder_emc_init(void)
+{
+ /*
+ * If the EMC table is successfully read from the NCT partition,
+ * load it directly and skip the board-id checks.
+ */
+ #ifdef CONFIG_TEGRA_USE_NCT
+ if (!tegra12_nct_emc_table_init(&board_emc_pdata)) {
+ tegra_emc_device.dev.platform_data = &board_emc_pdata;
+ pr_info("Loading EMC table read from NCT partition.\n");
+ } else
+ #endif
+ if (of_find_compatible_node(NULL, NULL, "nvidia,tegra12-emc")) {
+ /*
+ * If the device tree contains emc-tables, load them
+ * from there.
+ */
+ pr_info("Loading EMC tables from DeviceTree.\n");
+ } else {
+ pr_info("Loading Flounder EMC tables.\n");
+ tegra_emc_device.dev.platform_data = &flounder_lpddr3_emc_pdata;
+
+ platform_device_register(&tegra_emc_device);
+ }
+
+ tegra12_emc_init();
+ return 0;
+}
diff --git a/arch/arm/mach-tegra/board-flounder-panel.c b/arch/arm/mach-tegra/board-flounder-panel.c
new file mode 100644
index 0000000..0574f6c
--- /dev/null
+++ b/arch/arm/mach-tegra/board-flounder-panel.c
@@ -0,0 +1,616 @@
+/*
+ * arch/arm/mach-tegra/board-flounder-panel.c
+ *
+ * Copyright (c) 2013, NVIDIA CORPORATION. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+#include <linux/ioport.h>
+#include <linux/fb.h>
+#include <linux/nvmap.h>
+#include <linux/nvhost.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/gpio.h>
+#include <linux/tegra_pwm_bl.h>
+#include <linux/regulator/consumer.h>
+#include <linux/pwm_backlight.h>
+#include <linux/of.h>
+#include <linux/dma-contiguous.h>
+#include <linux/clk.h>
+
+#include <mach/irqs.h>
+#include <mach/dc.h>
+#include <mach/io_dpd.h>
+
+#include "board.h"
+#include "devices.h"
+#include "gpio-names.h"
+#include "board-flounder.h"
+#include "board-panel.h"
+#include "common.h"
+#include "iomap.h"
+#include "tegra12_host1x_devices.h"
+#include "dvfs.h"
+
+struct platform_device * __init flounder_host1x_init(void)
+{
+ struct platform_device *pdev = NULL;
+
+#ifdef CONFIG_TEGRA_GRHOST
+ pdev = to_platform_device(bus_find_device_by_name(
+ &platform_bus_type, NULL, "host1x"));
+
+ if (!pdev) {
+ pr_err("host1x devices registration failed\n");
+ return NULL;
+ }
+#endif
+ return pdev;
+}
+
+/* hdmi related regulators */
+static struct regulator *flounder_hdmi_reg;
+static struct regulator *flounder_hdmi_pll;
+static struct regulator *flounder_hdmi_vddio;
+
+static struct resource flounder_disp1_resources[] = {
+ {
+ .name = "irq",
+ .start = INT_DISPLAY_GENERAL,
+ .end = INT_DISPLAY_GENERAL,
+ .flags = IORESOURCE_IRQ,
+ },
+ {
+ .name = "regs",
+ .start = TEGRA_DISPLAY_BASE,
+ .end = TEGRA_DISPLAY_BASE + TEGRA_DISPLAY_SIZE - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .name = "fbmem",
+ .start = 0, /* Filled in by flounder_panel_init() */
+ .end = 0, /* Filled in by flounder_panel_init() */
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .name = "ganged_dsia_regs",
+ .start = 0, /* Filled in the panel file by init_resources() */
+ .end = 0, /* Filled in the panel file by init_resources() */
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .name = "ganged_dsib_regs",
+ .start = 0, /* Filled in the panel file by init_resources() */
+ .end = 0, /* Filled in the panel file by init_resources() */
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .name = "dsi_regs",
+ .start = 0, /* Filled in the panel file by init_resources() */
+ .end = 0, /* Filled in the panel file by init_resources() */
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .name = "mipi_cal",
+ .start = TEGRA_MIPI_CAL_BASE,
+ .end = TEGRA_MIPI_CAL_BASE + TEGRA_MIPI_CAL_SIZE - 1,
+ .flags = IORESOURCE_MEM,
+ },
+};
+
+static struct resource flounder_disp2_resources[] = {
+ {
+ .name = "irq",
+ .start = INT_DISPLAY_B_GENERAL,
+ .end = INT_DISPLAY_B_GENERAL,
+ .flags = IORESOURCE_IRQ,
+ },
+ {
+ .name = "regs",
+ .start = TEGRA_DISPLAY2_BASE,
+ .end = TEGRA_DISPLAY2_BASE + TEGRA_DISPLAY2_SIZE - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .name = "fbmem",
+ .start = 0, /* Filled in by flounder_panel_init() */
+ .end = 0, /* Filled in by flounder_panel_init() */
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .name = "hdmi_regs",
+ .start = TEGRA_HDMI_BASE,
+ .end = TEGRA_HDMI_BASE + TEGRA_HDMI_SIZE - 1,
+ .flags = IORESOURCE_MEM,
+ },
+};
+
+static struct tegra_dc_out flounder_disp1_out = {
+ .type = TEGRA_DC_OUT_DSI,
+};
+
+static int flounder_hdmi_enable(struct device *dev)
+{
+ int ret;
+ if (!flounder_hdmi_reg) {
+ flounder_hdmi_reg = regulator_get(dev, "avdd_hdmi");
+ if (IS_ERR_OR_NULL(flounder_hdmi_reg)) {
+ pr_err("hdmi: couldn't get regulator avdd_hdmi\n");
+ /* save the error before clearing the pointer:
+ * PTR_ERR(NULL) would wrongly return 0 (success) */
+ ret = flounder_hdmi_reg ? PTR_ERR(flounder_hdmi_reg) : -ENODEV;
+ flounder_hdmi_reg = NULL;
+ return ret;
+ }
+ }
+ ret = regulator_enable(flounder_hdmi_reg);
+ if (ret < 0) {
+ pr_err("hdmi: couldn't enable regulator avdd_hdmi\n");
+ return ret;
+ }
+ if (!flounder_hdmi_pll) {
+ flounder_hdmi_pll = regulator_get(dev, "avdd_hdmi_pll");
+ if (IS_ERR_OR_NULL(flounder_hdmi_pll)) {
+ pr_err("hdmi: couldn't get regulator avdd_hdmi_pll\n");
+ /* save the error before clearing the pointer:
+ * PTR_ERR(NULL) would wrongly return 0 (success) */
+ ret = flounder_hdmi_pll ? PTR_ERR(flounder_hdmi_pll) : -ENODEV;
+ flounder_hdmi_pll = NULL;
+ /* avdd_hdmi was enabled above; roll it back */
+ regulator_disable(flounder_hdmi_reg);
+ regulator_put(flounder_hdmi_reg);
+ flounder_hdmi_reg = NULL;
+ return ret;
+ }
+ }
+ ret = regulator_enable(flounder_hdmi_pll);
+ if (ret < 0) {
+ pr_err("hdmi: couldn't enable regulator avdd_hdmi_pll\n");
+ /* roll back the avdd_hdmi supply enabled above */
+ regulator_disable(flounder_hdmi_reg);
+ return ret;
+ }
+ return 0;
+}
+
+static int flounder_hdmi_disable(void)
+{
+ if (flounder_hdmi_reg) {
+ regulator_disable(flounder_hdmi_reg);
+ regulator_put(flounder_hdmi_reg);
+ flounder_hdmi_reg = NULL;
+ }
+
+ if (flounder_hdmi_pll) {
+ regulator_disable(flounder_hdmi_pll);
+ regulator_put(flounder_hdmi_pll);
+ flounder_hdmi_pll = NULL;
+ }
+ return 0;
+}
+
+static int flounder_hdmi_postsuspend(void)
+{
+ if (flounder_hdmi_vddio) {
+ regulator_disable(flounder_hdmi_vddio);
+ regulator_put(flounder_hdmi_vddio);
+ flounder_hdmi_vddio = NULL;
+ }
+ return 0;
+}
+
+static int flounder_hdmi_hotplug_init(struct device *dev)
+{
+ if (!flounder_hdmi_vddio) {
+ flounder_hdmi_vddio = regulator_get(dev, "vdd_hdmi_5v0");
+ if (WARN_ON(IS_ERR(flounder_hdmi_vddio))) {
+ pr_err("%s: couldn't get regulator vdd_hdmi_5v0: %ld\n",
+ __func__, PTR_ERR(flounder_hdmi_vddio));
+ flounder_hdmi_vddio = NULL;
+ } else {
+ return regulator_enable(flounder_hdmi_vddio);
+ }
+ }
+
+ return 0;
+}
+
+struct tmds_config flounder_tmds_config[] = {
+ { /* 480p/576p / 25.2MHz/27MHz modes */
+ .pclk = 27000000,
+ .pll0 = 0x01003110,
+ .pll1 = 0x00300F00,
+ .pe_current = 0x08080808,
+ .drive_current = 0x2e2e2e2e,
+ .peak_current = 0x00000000,
+ },
+ { /* 720p / 74.25MHz modes */
+ .pclk = 74250000,
+ .pll0 = 0x01003310,
+ .pll1 = 0x10300F00,
+ .pe_current = 0x08080808,
+ .drive_current = 0x20202020,
+ .peak_current = 0x00000000,
+ },
+ { /* 1080p / 148.5MHz modes */
+ .pclk = 148500000,
+ .pll0 = 0x01003310,
+ .pll1 = 0x10300F00,
+ .pe_current = 0x08080808,
+ .drive_current = 0x20202020,
+ .peak_current = 0x00000000,
+ },
+ {
+ .pclk = INT_MAX,
+ .pll0 = 0x01003310,
+ .pll1 = 0x10300F00,
+ .pe_current = 0x08080808,
+ .drive_current = 0x3A353536, /* lane3 needs a slightly lower current */
+ .peak_current = 0x00000000,
+ },
+};
+
+struct tegra_hdmi_out flounder_hdmi_out = {
+ .tmds_config = flounder_tmds_config,
+ .n_tmds_config = ARRAY_SIZE(flounder_tmds_config),
+};
+
+#ifdef CONFIG_FRAMEBUFFER_CONSOLE
+static struct tegra_dc_mode hdmi_panel_modes[] = {
+ {
+ .pclk = KHZ2PICOS(25200),
+ .h_ref_to_sync = 1,
+ .v_ref_to_sync = 1,
+ .h_sync_width = 96, /* hsync_len */
+ .v_sync_width = 2, /* vsync_len */
+ .h_back_porch = 48, /* left_margin */
+ .v_back_porch = 33, /* upper_margin */
+ .h_active = 640, /* xres */
+ .v_active = 480, /* yres */
+ .h_front_porch = 16, /* right_margin */
+ .v_front_porch = 10, /* lower_margin */
+ },
+};
+#endif /* CONFIG_FRAMEBUFFER_CONSOLE */
+
+static struct tegra_dc_out flounder_disp2_out = {
+ .type = TEGRA_DC_OUT_HDMI,
+ .flags = TEGRA_DC_OUT_HOTPLUG_HIGH,
+ .parent_clk = "pll_d2",
+
+ .dcc_bus = 3,
+ .hotplug_gpio = flounder_hdmi_hpd,
+ .hdmi_out = &flounder_hdmi_out,
+
+ /* TODO: update max pclk to POR */
+ .max_pixclock = KHZ2PICOS(297000),
+#ifdef CONFIG_FRAMEBUFFER_CONSOLE
+ .modes = hdmi_panel_modes,
+ .n_modes = ARRAY_SIZE(hdmi_panel_modes),
+ .depth = 24,
+#endif /* CONFIG_FRAMEBUFFER_CONSOLE */
+
+ .align = TEGRA_DC_ALIGN_MSB,
+ .order = TEGRA_DC_ORDER_RED_BLUE,
+
+ .enable = flounder_hdmi_enable,
+ .disable = flounder_hdmi_disable,
+ .postsuspend = flounder_hdmi_postsuspend,
+ .hotplug_init = flounder_hdmi_hotplug_init,
+};
+
+static struct tegra_fb_data flounder_disp1_fb_data = {
+ .win = 0,
+ .bits_per_pixel = 32,
+ .flags = TEGRA_FB_FLIP_ON_PROBE,
+};
+
+static struct tegra_dc_platform_data flounder_disp1_pdata = {
+ .flags = TEGRA_DC_FLAG_ENABLED,
+ .default_out = &flounder_disp1_out,
+ .fb = &flounder_disp1_fb_data,
+ .emc_clk_rate = 204000000,
+#ifdef CONFIG_TEGRA_DC_CMU
+ .cmu_enable = 0,
+#endif
+};
+
+static struct tegra_fb_data flounder_disp2_fb_data = {
+ .win = 0,
+ .xres = 1280,
+ .yres = 720,
+ .bits_per_pixel = 32,
+ .flags = TEGRA_FB_FLIP_ON_PROBE,
+};
+
+static struct tegra_dc_platform_data flounder_disp2_pdata = {
+ .flags = TEGRA_DC_FLAG_ENABLED,
+ .default_out = &flounder_disp2_out,
+ .fb = &flounder_disp2_fb_data,
+ .emc_clk_rate = 300000000,
+};
+
+static struct platform_device flounder_disp2_device = {
+ .name = "tegradc",
+ .id = 1,
+ .resource = flounder_disp2_resources,
+ .num_resources = ARRAY_SIZE(flounder_disp2_resources),
+ .dev = {
+ .platform_data = &flounder_disp2_pdata,
+ },
+};
+
+static struct platform_device flounder_disp1_device = {
+ .name = "tegradc",
+ .id = 0,
+ .resource = flounder_disp1_resources,
+ .num_resources = ARRAY_SIZE(flounder_disp1_resources),
+ .dev = {
+ .platform_data = &flounder_disp1_pdata,
+ },
+};
+
+static struct nvmap_platform_carveout flounder_carveouts[] = {
+ [0] = {
+ .name = "iram",
+ .usage_mask = NVMAP_HEAP_CARVEOUT_IRAM,
+ .base = TEGRA_IRAM_BASE + TEGRA_RESET_HANDLER_SIZE,
+ .size = TEGRA_IRAM_SIZE - TEGRA_RESET_HANDLER_SIZE,
+ .dma_dev = &tegra_iram_dev,
+ },
+ [1] = {
+ .name = "generic-0",
+ .usage_mask = NVMAP_HEAP_CARVEOUT_GENERIC,
+ .base = 0, /* Filled in by flounder_panel_init() */
+ .size = 0, /* Filled in by flounder_panel_init() */
+ .dma_dev = &tegra_generic_dev,
+ },
+ [2] = {
+ .name = "vpr",
+ .usage_mask = NVMAP_HEAP_CARVEOUT_VPR,
+ .base = 0, /* Filled in by flounder_panel_init() */
+ .size = 0, /* Filled in by flounder_panel_init() */
+ .dma_dev = &tegra_vpr_dev,
+ },
+};
+
+static struct nvmap_platform_data flounder_nvmap_data = {
+ .carveouts = flounder_carveouts,
+ .nr_carveouts = ARRAY_SIZE(flounder_carveouts),
+};
+static struct platform_device flounder_nvmap_device = {
+ .name = "tegra-nvmap",
+ .id = -1,
+ .dev = {
+ .platform_data = &flounder_nvmap_data,
+ },
+};
+
+/* can be called multiple times */
+static struct tegra_panel *flounder_panel_configure(struct board_info *board_out,
+ u8 *dsi_instance_out)
+{
+ struct tegra_panel *panel = NULL;
+ u8 dsi_instance = DSI_INSTANCE_0;
+ struct board_info boardtmp;
+
+ if (!board_out)
+ board_out = &boardtmp;
+ tegra_get_display_board_info(board_out);
+
+ panel = &dsi_j_qxga_8_9;
+ dsi_instance = DSI_INSTANCE_0;
+	/*
+	 * tegra_io_dpd_enable(&dsic_io);
+	 * tegra_io_dpd_enable(&dsid_io);
+	 */
+ if (board_out->board_id == BOARD_E1813)
+ panel = &dsi_s_wqxga_10_1;
+ if (dsi_instance_out)
+ *dsi_instance_out = dsi_instance;
+ return panel;
+}
+
+static void flounder_panel_select(void)
+{
+ struct tegra_panel *panel = NULL;
+ u8 dsi_instance;
+ struct board_info board;
+
+ panel = flounder_panel_configure(&board, &dsi_instance);
+
+ if (panel) {
+ if (panel->init_dc_out) {
+ panel->init_dc_out(&flounder_disp1_out);
+ if (flounder_disp1_out.type == TEGRA_DC_OUT_DSI) {
+ flounder_disp1_out.dsi->dsi_instance =
+ dsi_instance;
+ flounder_disp1_out.dsi->dsi_panel_rst_gpio =
+ DSI_PANEL_RST_GPIO;
+ flounder_disp1_out.dsi->dsi_panel_bl_pwm_gpio =
+ DSI_PANEL_BL_PWM_GPIO;
+ flounder_disp1_out.dsi->te_gpio = TEGRA_GPIO_PR6;
+ }
+ }
+
+ if (panel->init_fb_data)
+ panel->init_fb_data(&flounder_disp1_fb_data);
+
+ if (panel->init_cmu_data)
+ panel->init_cmu_data(&flounder_disp1_pdata);
+
+ if (panel->set_disp_device)
+ panel->set_disp_device(&flounder_disp1_device);
+
+ if (flounder_disp1_out.type == TEGRA_DC_OUT_DSI) {
+ tegra_dsi_resources_init(dsi_instance,
+ flounder_disp1_resources,
+ ARRAY_SIZE(flounder_disp1_resources));
+ }
+
+ if (panel->register_bl_dev)
+ panel->register_bl_dev();
+
+ if (panel->register_i2c_bridge)
+ panel->register_i2c_bridge();
+	}
+}
+
+int __init flounder_panel_init(void)
+{
+ int err = 0;
+ struct resource __maybe_unused *res;
+ struct platform_device *phost1x = NULL;
+
+#ifdef CONFIG_NVMAP_USE_CMA_FOR_CARVEOUT
+ struct dma_declare_info vpr_dma_info;
+ struct dma_declare_info generic_dma_info;
+#endif
+ flounder_panel_select();
+
+#ifdef CONFIG_TEGRA_NVMAP
+ flounder_carveouts[1].base = tegra_carveout_start;
+ flounder_carveouts[1].size = tegra_carveout_size;
+ flounder_carveouts[2].base = tegra_vpr_start;
+ flounder_carveouts[2].size = tegra_vpr_size;
+#ifdef CONFIG_NVMAP_USE_CMA_FOR_CARVEOUT
+ generic_dma_info.name = "generic";
+ generic_dma_info.base = tegra_carveout_start;
+ generic_dma_info.size = tegra_carveout_size;
+ generic_dma_info.resize = false;
+ generic_dma_info.cma_dev = NULL;
+
+ vpr_dma_info.name = "vpr";
+ vpr_dma_info.base = tegra_vpr_start;
+ vpr_dma_info.size = SZ_32M;
+ vpr_dma_info.resize = true;
+ vpr_dma_info.cma_dev = &tegra_vpr_cma_dev;
+ vpr_dma_info.notifier.ops = &vpr_dev_ops;
+
+ carveout_linear_set(&tegra_generic_cma_dev);
+ flounder_carveouts[1].cma_dev = &tegra_generic_cma_dev;
+ flounder_carveouts[1].resize = false;
+ carveout_linear_set(&tegra_vpr_cma_dev);
+ flounder_carveouts[2].cma_dev = &tegra_vpr_cma_dev;
+ flounder_carveouts[2].resize = true;
+
+ if (tegra_carveout_size) {
+ err = dma_declare_coherent_resizable_cma_memory(
+ &tegra_generic_dev, &generic_dma_info);
+ if (err) {
+ pr_err("Generic coherent memory declaration failed\n");
+ return err;
+ }
+ }
+ if (tegra_vpr_size) {
+ err = dma_declare_coherent_resizable_cma_memory(
+ &tegra_vpr_dev, &vpr_dma_info);
+ if (err) {
+ pr_err("VPR coherent memory declaration failed\n");
+ return err;
+ }
+ }
+#endif
+
+ err = platform_device_register(&flounder_nvmap_device);
+ if (err) {
+ pr_err("nvmap device registration failed\n");
+ return err;
+ }
+#endif
+
+ phost1x = flounder_host1x_init();
+ if (!phost1x) {
+ pr_err("host1x devices registration failed\n");
+ return -EINVAL;
+ }
+
+ res = platform_get_resource_byname(&flounder_disp1_device,
+ IORESOURCE_MEM, "fbmem");
+ res->start = tegra_fb_start;
+ res->end = tegra_fb_start + tegra_fb_size - 1;
+
+ /* Copy the bootloader fb to the fb. */
+ if (tegra_bootloader_fb_size)
+ __tegra_move_framebuffer(&flounder_nvmap_device,
+ tegra_fb_start, tegra_bootloader_fb_start,
+ min(tegra_fb_size, tegra_bootloader_fb_size));
+ else
+ __tegra_clear_framebuffer(&flounder_nvmap_device,
+ tegra_fb_start, tegra_fb_size);
+
+ flounder_disp1_device.dev.parent = &phost1x->dev;
+ err = platform_device_register(&flounder_disp1_device);
+ if (err) {
+ pr_err("disp1 device registration failed\n");
+ return err;
+ }
+
+ err = tegra_init_hdmi(&flounder_disp2_device, phost1x);
+ if (err)
+ return err;
+
+#ifdef CONFIG_TEGRA_NVAVP
+ nvavp_device.dev.parent = &phost1x->dev;
+ err = platform_device_register(&nvavp_device);
+ if (err) {
+ pr_err("nvavp device registration failed\n");
+ return err;
+ }
+#endif
+ return err;
+}
+
+int __init flounder_display_init(void)
+{
+ struct clk *disp1_clk = clk_get_sys("tegradc.0", NULL);
+ struct clk *disp2_clk = clk_get_sys("tegradc.1", NULL);
+ struct tegra_panel *panel;
+ struct board_info board;
+ long disp1_rate = 0;
+ long disp2_rate;
+
+ if (WARN_ON(IS_ERR(disp1_clk))) {
+ if (disp2_clk && !IS_ERR(disp2_clk))
+ clk_put(disp2_clk);
+ return PTR_ERR(disp1_clk);
+ }
+
+	if (WARN_ON(IS_ERR(disp2_clk))) {
+		clk_put(disp1_clk);
+		return PTR_ERR(disp2_clk);
+	}
+
+ panel = flounder_panel_configure(&board, NULL);
+
+ if (panel && panel->init_dc_out) {
+ panel->init_dc_out(&flounder_disp1_out);
+ if (flounder_disp1_out.n_modes && flounder_disp1_out.modes)
+ disp1_rate = flounder_disp1_out.modes[0].pclk;
+	} else {
+		printk(KERN_ERR "disp1 panel output not specified!\n");
+	}
+
+ printk(KERN_DEBUG "disp1 pclk=%ld\n", disp1_rate);
+ if (disp1_rate)
+ tegra_dvfs_resolve_override(disp1_clk, disp1_rate);
+
+ /* set up disp2 */
+ if (flounder_disp2_out.max_pixclock)
+ disp2_rate = PICOS2KHZ(flounder_disp2_out.max_pixclock) * 1000;
+ else
+ disp2_rate = 297000000; /* HDMI 4K */
+ printk(KERN_DEBUG "disp2 pclk=%ld\n", disp2_rate);
+ if (disp2_rate)
+ tegra_dvfs_resolve_override(disp2_clk, disp2_rate);
+
+ clk_put(disp1_clk);
+ clk_put(disp2_clk);
+ return 0;
+}
diff --git a/arch/arm/mach-tegra/board-flounder-pinmux-t12x.h b/arch/arm/mach-tegra/board-flounder-pinmux-t12x.h
new file mode 100644
index 0000000..93e5d27
--- /dev/null
+++ b/arch/arm/mach-tegra/board-flounder-pinmux-t12x.h
@@ -0,0 +1,376 @@
+/*
+ * arch/arm/mach-tegra/board-flounder-pinmux-t12x.h
+ *
+ * Copyright (c) 2013, NVIDIA Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth floor, Boston, MA 02110-1301, USA
+ */
+
+
+/* DO NOT EDIT THIS FILE. THIS FILE IS AUTO GENERATED FROM T124_CUSTOMER_PINMUX.XLSM */
+
+
+static __initdata struct tegra_pingroup_config flounder_pinmux_common[] = {
+
+ /*audio start*/
+ /* EXTPERIPH1 pinmux */
+ DEFAULT_PINMUX(DAP_MCLK1, EXTPERIPH1, NORMAL, NORMAL, OUTPUT),
+
+ /* I2S0 pinmux */
+ DEFAULT_PINMUX(DAP1_DIN, I2S0, NORMAL, TRISTATE, INPUT),
+ DEFAULT_PINMUX(DAP1_DOUT, I2S0, NORMAL, TRISTATE, OUTPUT),
+ DEFAULT_PINMUX(DAP1_FS, I2S0, NORMAL, TRISTATE, OUTPUT),
+ DEFAULT_PINMUX(DAP1_SCLK, I2S0, NORMAL, TRISTATE, OUTPUT),
+
+ /* I2S1 pinmux */
+ /*Tristated by default, will be turned on/off as required by audio machine driver*/
+ DEFAULT_PINMUX(DAP2_DIN, I2S1, NORMAL, TRISTATE, INPUT),
+ DEFAULT_PINMUX(DAP2_DOUT, I2S1, NORMAL, TRISTATE, OUTPUT),
+ DEFAULT_PINMUX(DAP2_FS, I2S1, NORMAL, TRISTATE, OUTPUT),
+ DEFAULT_PINMUX(DAP2_SCLK, I2S1, NORMAL, TRISTATE, OUTPUT),
+
+ /* I2S2 pinmux */
+ /*Tristated by default, will be turned on/off as required by audio machine driver*/
+ DEFAULT_PINMUX(DAP3_DIN, I2S2, NORMAL, TRISTATE, INPUT),
+ DEFAULT_PINMUX(DAP3_DOUT, I2S2, NORMAL, TRISTATE, OUTPUT),
+ DEFAULT_PINMUX(DAP3_FS, I2S2, NORMAL, TRISTATE, OUTPUT),
+ DEFAULT_PINMUX(DAP3_SCLK, I2S2, NORMAL, TRISTATE, OUTPUT),
+
+ /* I2S3 pinmux */
+ /*Tristated by default, will be turned on/off as required by audio machine driver*/
+ DEFAULT_PINMUX(DAP4_DIN, I2S3, NORMAL, TRISTATE, INPUT),
+ DEFAULT_PINMUX(DAP4_DOUT, I2S3, NORMAL, TRISTATE, OUTPUT),
+ DEFAULT_PINMUX(DAP4_FS, I2S3, NORMAL, TRISTATE, OUTPUT),
+ DEFAULT_PINMUX(DAP4_SCLK, I2S3, NORMAL, TRISTATE, OUTPUT),
+
+ /* SPI5 pinmux */
+ DEFAULT_PINMUX(ULPI_CLK, SPI5, NORMAL, NORMAL, OUTPUT),
+ DEFAULT_PINMUX(ULPI_DIR, SPI5, NORMAL, NORMAL, INPUT),
+ DEFAULT_PINMUX(ULPI_NXT, SPI5, NORMAL, NORMAL, OUTPUT),
+ DEFAULT_PINMUX(ULPI_STP, SPI5, NORMAL, NORMAL, OUTPUT),
+ /*audio end*/
+
+ /* I2C3 pinmux */
+ I2C_PINMUX(CAM_I2C_SCL, I2C3, NORMAL, NORMAL, INPUT, DISABLE, DISABLE),
+ I2C_PINMUX(CAM_I2C_SDA, I2C3, NORMAL, NORMAL, INPUT, DISABLE, DISABLE),
+
+ /* VI_ALT3 pinmux */
+ VI_PINMUX(CAM_MCLK, VI_ALT3, NORMAL, NORMAL, OUTPUT, DEFAULT, DISABLE),
+
+ /* VIMCLK2_ALT pinmux */
+ VI_PINMUX(GPIO_PBB0, VIMCLK2_ALT, NORMAL, NORMAL, OUTPUT, DEFAULT, DISABLE),
+
+ /* I2C2 pinmux */
+ I2C_PINMUX(GEN2_I2C_SCL, I2C2, NORMAL, NORMAL, INPUT, DISABLE, DISABLE),
+ I2C_PINMUX(GEN2_I2C_SDA, I2C2, NORMAL, NORMAL, INPUT, DISABLE, DISABLE),
+
+ /* SPI4 pinmux */
+ DEFAULT_PINMUX(GPIO_PG4, SPI4, NORMAL, NORMAL, OUTPUT),
+ DEFAULT_PINMUX(GPIO_PG5, SPI4, NORMAL, NORMAL, OUTPUT),
+ DEFAULT_PINMUX(GPIO_PG6, SPI4, NORMAL, NORMAL, OUTPUT),
+ DEFAULT_PINMUX(GPIO_PG7, SPI4, NORMAL, NORMAL, INPUT),
+
+ /* PWM0 pinmux */
+ DEFAULT_PINMUX(GPIO_PH0, DTV, PULL_DOWN, TRISTATE, INPUT),
+
+ /* PWM1 pinmux */
+ DEFAULT_PINMUX(GPIO_PH1, PWM1, NORMAL, NORMAL, OUTPUT),
+
+ /* SOC pinmux */
+ DEFAULT_PINMUX(GPIO_PJ2, SOC, PULL_UP, NORMAL, INPUT),
+ DEFAULT_PINMUX(CLK_32K_OUT, SOC, PULL_UP, NORMAL, INPUT),
+
+ /* EXTPERIPH2 pinmux */
+ DEFAULT_PINMUX(CLK2_OUT, EXTPERIPH2, NORMAL, NORMAL, OUTPUT),
+
+ /* SDMMC1 pinmux */
+ DEFAULT_PINMUX(SDMMC1_CLK, SDMMC1, NORMAL, NORMAL, INPUT),
+ DEFAULT_PINMUX(SDMMC1_CMD, SDMMC1, PULL_UP, NORMAL, INPUT),
+ DEFAULT_PINMUX(SDMMC1_DAT0, SDMMC1, PULL_UP, NORMAL, INPUT),
+ DEFAULT_PINMUX(SDMMC1_DAT1, SDMMC1, PULL_UP, NORMAL, INPUT),
+ DEFAULT_PINMUX(SDMMC1_DAT2, SDMMC1, PULL_UP, NORMAL, INPUT),
+ DEFAULT_PINMUX(SDMMC1_DAT3, SDMMC1, PULL_UP, NORMAL, INPUT),
+
+ /* SPI3 pinmux */
+ DEFAULT_PINMUX(SDMMC3_CLK, SPI3, NORMAL, NORMAL, INPUT),
+ DEFAULT_PINMUX(SDMMC3_DAT0, SPI3, PULL_UP, NORMAL, INPUT),
+ DEFAULT_PINMUX(SDMMC3_DAT1, SPI3, NORMAL, NORMAL, INPUT),
+ DEFAULT_PINMUX(SDMMC3_DAT2, SPI3, NORMAL, NORMAL, INPUT),
+ DEFAULT_PINMUX(SDMMC3_DAT3, SPI3, NORMAL, NORMAL, INPUT),
+
+ /* SDMMC4 pinmux */
+ DEFAULT_PINMUX(SDMMC4_CLK, SDMMC4, NORMAL, NORMAL, INPUT),
+ DEFAULT_PINMUX(SDMMC4_CMD, SDMMC4, PULL_UP, NORMAL, INPUT),
+ DEFAULT_PINMUX(SDMMC4_DAT0, SDMMC4, PULL_UP, NORMAL, INPUT),
+ DEFAULT_PINMUX(SDMMC4_DAT1, SDMMC4, PULL_UP, NORMAL, INPUT),
+ DEFAULT_PINMUX(SDMMC4_DAT2, SDMMC4, PULL_UP, NORMAL, INPUT),
+ DEFAULT_PINMUX(SDMMC4_DAT3, SDMMC4, PULL_UP, NORMAL, INPUT),
+ DEFAULT_PINMUX(SDMMC4_DAT4, SDMMC4, PULL_UP, NORMAL, INPUT),
+ DEFAULT_PINMUX(SDMMC4_DAT5, SDMMC4, PULL_UP, NORMAL, INPUT),
+ DEFAULT_PINMUX(SDMMC4_DAT6, SDMMC4, PULL_UP, NORMAL, INPUT),
+ DEFAULT_PINMUX(SDMMC4_DAT7, SDMMC4, PULL_UP, NORMAL, INPUT),
+
+ /* UARTA pinmux */
+ DEFAULT_PINMUX(GPIO_PU0, UARTA, NORMAL, NORMAL, OUTPUT),
+ DEFAULT_PINMUX(GPIO_PU1, UARTA, PULL_UP, NORMAL, INPUT),
+
+ /* I2CPWR pinmux */
+ I2C_PINMUX(PWR_I2C_SCL, I2CPWR, NORMAL, NORMAL, INPUT, DEFAULT, DISABLE),
+ I2C_PINMUX(PWR_I2C_SDA, I2CPWR, NORMAL, NORMAL, INPUT, DEFAULT, DISABLE),
+
+ /* RTCK pinmux */
+ DEFAULT_PINMUX(JTAG_RTCK, RTCK, PULL_UP, NORMAL, OUTPUT),
+
+ /* CLK pinmux */
+ DEFAULT_PINMUX(CLK_32K_IN, CLK, NORMAL, NORMAL, INPUT),
+
+ /* PWRON pinmux */
+ DEFAULT_PINMUX(CORE_PWR_REQ, PWRON, NORMAL, NORMAL, OUTPUT),
+
+ /* CPU pinmux */
+ DEFAULT_PINMUX(CPU_PWR_REQ, CPU, NORMAL, NORMAL, OUTPUT),
+
+ /* PMI pinmux */
+ DEFAULT_PINMUX(PWR_INT_N, PMI, PULL_UP, NORMAL, INPUT),
+
+ /* RESET_OUT_N pinmux */
+ DEFAULT_PINMUX(RESET_OUT_N, RESET_OUT_N, NORMAL, NORMAL, OUTPUT),
+
+ /* EXTPERIPH3 pinmux */
+ DEFAULT_PINMUX(CLK3_OUT, EXTPERIPH3, NORMAL, NORMAL, OUTPUT),
+
+ /* I2C1 pinmux */
+ I2C_PINMUX(GEN1_I2C_SCL, I2C1, NORMAL, NORMAL, INPUT, DISABLE, DISABLE),
+ I2C_PINMUX(GEN1_I2C_SDA, I2C1, NORMAL, NORMAL, INPUT, DISABLE, DISABLE),
+
+ /* UARTC pinmux */
+ DEFAULT_PINMUX(UART3_CTS_N, UARTC, PULL_UP, NORMAL, INPUT),
+ DEFAULT_PINMUX(UART3_RTS_N, UARTC, NORMAL, NORMAL, OUTPUT),
+ DEFAULT_PINMUX(UART3_RXD, UARTC, PULL_UP, NORMAL, INPUT),
+ DEFAULT_PINMUX(UART3_TXD, UARTC, NORMAL, NORMAL, OUTPUT),
+
+ /* GPIO pinmux */
+
+ /*audio start*/
+ GPIO_PINMUX(GPIO_PK0, NORMAL, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(GPIO_X1_AUD, NORMAL, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(GPIO_X4_AUD, NORMAL, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(GPIO_X5_AUD, NORMAL, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(KB_ROW11, NORMAL, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(KB_ROW12, PULL_DOWN, NORMAL, INPUT, DISABLE),
+ GPIO_PINMUX(SDMMC1_WP_N, NORMAL, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(ULPI_DATA7, NORMAL, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(KB_COL3, NORMAL, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(GPIO_X3_AUD, NORMAL, NORMAL, OUTPUT, DISABLE),
+ /*audio end*/
+
+ /* NFC start */
+ GPIO_PINMUX(GPIO_PB1, NORMAL, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(KB_ROW7, PULL_DOWN, NORMAL, INPUT, DISABLE),
+ GPIO_PINMUX(KB_ROW9, NORMAL, NORMAL, OUTPUT, DISABLE),
+ /*NFC end */
+
+ /*USB start*/
+ GPIO_PINMUX(SDMMC3_CMD, PULL_DOWN, NORMAL, INPUT, DISABLE),
+ /*USB end*/
+
+ /*key start*/
+ GPIO_PINMUX(KB_COL0, PULL_UP, NORMAL, INPUT, DISABLE),
+ GPIO_PINMUX(SDMMC3_CD_N, PULL_UP, NORMAL, INPUT, DISABLE),
+ GPIO_PINMUX(KB_COL5, PULL_UP, NORMAL, INPUT, DISABLE),
+ GPIO_PINMUX(GPIO_PI3, NORMAL, NORMAL, OUTPUT, DISABLE),
+ /*key end*/
+ /*headset start*/
+ GPIO_PINMUX(GPIO_W3_AUD, NORMAL, NORMAL, INPUT, DISABLE),
+ GPIO_PINMUX(KB_ROW10, NORMAL, NORMAL, INPUT, DISABLE),
+ GPIO_PINMUX(KB_ROW11, NORMAL, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(KB_COL4, NORMAL, NORMAL, OUTPUT, DISABLE),
+ /* UARTD pinmux one wired*/
+ DEFAULT_PINMUX(GPIO_PJ7, UARTD, NORMAL, NORMAL, OUTPUT),
+ DEFAULT_PINMUX(GPIO_PB0, UARTD, NORMAL, NORMAL, INPUT),
+ /*headset end*/
+ GPIO_PINMUX(GPIO_X6_AUD, PULL_UP, NORMAL, INPUT, DISABLE),
+ GPIO_PINMUX(GPIO_X7_AUD, NORMAL, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(GPIO_W2_AUD, NORMAL, NORMAL, INPUT, DISABLE),
+ GPIO_PINMUX(GPIO_PBB3, NORMAL, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(GPIO_PBB7, NORMAL, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(GPIO_PCC1, PULL_DOWN, NORMAL, INPUT, DISABLE),
+ GPIO_PINMUX(GPIO_PCC2, PULL_DOWN, NORMAL, INPUT, DISABLE),
+ GPIO_PINMUX(GPIO_PH2, NORMAL, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(GPIO_PH3, NORMAL, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(GPIO_PH6, NORMAL, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(GPIO_PH7, NORMAL, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(GPIO_PH5, NORMAL, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(GPIO_PK1, NORMAL, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(GPIO_PJ0, PULL_UP, NORMAL, INPUT, DISABLE),
+ GPIO_PINMUX(GPIO_PK3, PULL_UP, NORMAL, INPUT, DISABLE),
+ GPIO_PINMUX(USB_VBUS_EN0, PULL_DOWN, NORMAL, INPUT, DISABLE),
+ GPIO_PINMUX(GPIO_PI6, PULL_UP, NORMAL, INPUT, DISABLE),
+ GPIO_PINMUX(GPIO_PI5, NORMAL, NORMAL, INPUT, DISABLE),
+ GPIO_PINMUX(GPIO_PC7, NORMAL, NORMAL, INPUT, DISABLE),
+ GPIO_PINMUX(GPIO_PI0, NORMAL, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(CLK2_REQ, PULL_UP, NORMAL, INPUT, DISABLE),
+ GPIO_PINMUX(KB_COL1, NORMAL, NORMAL, INPUT, DISABLE),
+ GPIO_PINMUX(KB_COL2, PULL_UP, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(KB_ROW0, PULL_UP, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(KB_ROW1, NORMAL, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(KB_ROW2, NORMAL, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(KB_ROW4, NORMAL, NORMAL, INPUT, DISABLE),
+ GPIO_PINMUX(KB_ROW5, NORMAL, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(CLK3_REQ, NORMAL, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(GPIO_PU2, NORMAL, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(GPIO_PU3, NORMAL, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(GPIO_PU4, NORMAL, NORMAL, OUTPUT, DISABLE),
+ GPIO_PINMUX(GPIO_PU5, PULL_DOWN, NORMAL, INPUT, DISABLE),
+ GPIO_PINMUX(GPIO_PU6, NORMAL, NORMAL, INPUT, DISABLE),
+ GPIO_PINMUX(SPDIF_OUT, NORMAL, NORMAL, INPUT, DISABLE),
+ GPIO_PINMUX(GPIO_PK2, PULL_UP, NORMAL, INPUT, DISABLE),
+ GPIO_PINMUX(SDMMC3_CD_N, PULL_UP, NORMAL, INPUT, DISABLE),
+
+ /*radio start*/
+ GPIO_PINMUX(ULPI_DATA0, PULL_DOWN, NORMAL, INPUT, DISABLE), //AP2MDM_WAKEUP#PO1
+ GPIO_PINMUX(ULPI_DATA1, PULL_DOWN, NORMAL, INPUT, DISABLE), //AP2MDM_STATUS#PO2
+ GPIO_PINMUX(ULPI_DATA2, PULL_DOWN, NORMAL, INPUT, DISABLE), //AP2MDM_ERR_FATAL#PO3
+ GPIO_PINMUX(ULPI_DATA3, PULL_DOWN, NORMAL, INPUT, DISABLE), //AP2MDM_IPC1#PO4 (flashlight feature)
+ GPIO_PINMUX(ULPI_DATA4, PULL_DOWN, NORMAL, INPUT, DISABLE), //AP2MDM_IPC2#PO5 (EFS SYNC)
+ GPIO_PINMUX(ULPI_DATA5, PULL_DOWN, NORMAL, INPUT, DISABLE), //AP2MDM_VDDMIN#PO6 (reserved)
+ GPIO_PINMUX(GPIO_PBB5, PULL_DOWN, NORMAL, INPUT, DISABLE), //AP2MDM_PON_RESET_N#PBB5
+ GPIO_PINMUX(KB_COL7, PULL_DOWN, NORMAL, INPUT, DISABLE), //AP2MDM_CHNL_RDY_CPU (reserved)
+ GPIO_PINMUX(KB_ROW8, PULL_DOWN, NORMAL, INPUT, DISABLE), //MDM2AP_IPC3#PS0 (reserved)
+ GPIO_PINMUX(KB_ROW13, PULL_DOWN, NORMAL, INPUT, DISABLE), //MDM2AP_WAKEUP#PS5
+ GPIO_PINMUX(KB_ROW14, PULL_DOWN, NORMAL, INPUT, DISABLE), //MDM2AP_STATUS#PS6
+ GPIO_PINMUX(GPIO_PCC2, PULL_DOWN, NORMAL, INPUT, DISABLE), //MDM2AP_VDDMIN#PCC2 (reserved)
+ GPIO_PINMUX(GPIO_PV0, PULL_DOWN, NORMAL, INPUT, DISABLE), //MDM2AP_ERR_FATAL#PV0
+ GPIO_PINMUX(GPIO_PV1, PULL_DOWN, NORMAL, INPUT, DISABLE), //MDM2AP_HSIC_READY#PV1
+ /*radio end*/
+};
+
+static __initdata struct tegra_pingroup_config unused_pins_lowpower[] = {
+ UNUSED_PINMUX(DAP_MCLK1_REQ),
+ UNUSED_PINMUX(PEX_L0_CLKREQ_N),
+ UNUSED_PINMUX(PEX_L0_RST_N),
+ UNUSED_PINMUX(PEX_L1_CLKREQ_N),
+ UNUSED_PINMUX(PEX_L1_RST_N),
+ UNUSED_PINMUX(PEX_WAKE_N),
+ UNUSED_PINMUX(GPIO_PFF2),
+ UNUSED_PINMUX(KB_ROW15),
+ UNUSED_PINMUX(KB_ROW16),
+ UNUSED_PINMUX(KB_ROW17),
+ UNUSED_PINMUX(OWR),
+};
+
+static struct gpio_init_pin_info init_gpio_mode_flounder_common[] = {
+ /*audio start*/
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PK0, false, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PO0, false, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PS3, false, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PS4, true, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PV3, false, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PX1, false, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PX4, false, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PX5, false, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PQ3, false, 1),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PX3, false, 0),
+ /*audio end*/
+
+ /* NFC start */
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PB1, false, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PR7, true, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PS1, false, 0),
+ /* NFC end */
+
+ /*USB start*/
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PA7, true, 0),
+ /*USB end*/
+
+ /*headset start*/
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PW3, true, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PS2, true, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PS3, false, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PQ4, false, 1),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PJ7, false, 0),
+ /*headset end*/
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PX7, false, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PW2, true, 0),
+ /* DVFS_CLK set as GPIO to control RT8812 mode (high on boot) */
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PO7, false, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PBB3, false, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PBB7, false, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PCC1, true, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PCC2, true, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PH2, false, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PH3, false, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PH5, false, 1),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PH6, false, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PH7, false, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PK1, false, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PJ0, true, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PK3, true, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PN4, true, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PQ3, false, 1),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PX6, false, 1),/*touch reset*/
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PK2, true, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PU4, false, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PI6, true, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PC7, true, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PI0, false, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PCC5, true, 1),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PQ0, true, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PQ1, true, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PQ2, false, 1),
+/* key start */
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PQ0, true, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PQ5, true, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PV2, true, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PI3, false, 1),
+/* key end */
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PQ6, true, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PR0, false, 1),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PR1, false, 0),
+	/* KB_ROW13 GPIO should be set high to tristate VID for the boot
+	 * voltage. SW has to drive it low to change the RT8812 output
+	 * voltage depending on the PWM duty cycle. With the default
+	 * setting of KB_ROW13, the boot voltage is 1.0 V.
+	 */
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PR2, false, 1),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PR4, true, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PR5, false, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PEE1, false, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PEE5, false, 1),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PB4, false, 1),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PU2, false, 1),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PU3, false, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PU5, true, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PU6, true, 0),
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PK5, true, 0),
+
+ /*radio start*/
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PO1, true, 0), //AP2MDM_WAKEUP#PO1
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PO2, true, 0), //AP2MDM_STATUS#PO2
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PO3, true, 0), //AP2MDM_ERR_FATAL#PO3
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PO4, true, 0), //AP2MDM_IPC1#PO4 (flashlight feature)
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PO5, true, 0), //AP2MDM_IPC2#PO5 (EFS SYNC)
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PO6, true, 0), //AP2MDM_VDDMIN#PO6 (reserved)
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PBB5, true, 0), //AP2MDM_PON_RESET_N#PBB5
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PQ7, true, 0), //AP2MDM_CHNL_RDY_CPU (reserved)
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PS0, true, 0), //MDM2AP_IPC3#PS0 (reserved)
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PS5, true, 0), //MDM2AP_WAKEUP#PS5
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PS6, true, 0), //MDM2AP_STATUS#PS6
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PCC2, true, 0), //MDM2AP_VDDMIN#PCC2 (reserved)
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PV0, true, 0), //MDM2AP_ERR_FATAL#PV0
+ GPIO_INIT_PIN_MODE(TEGRA_GPIO_PV1, true, 0), //MDM2AP_HSIC_READY#PV1
+ /*radio end*/
+};
diff --git a/arch/arm/mach-tegra/board-flounder-pinmux.c b/arch/arm/mach-tegra/board-flounder-pinmux.c
new file mode 100644
index 0000000..63a355b
--- /dev/null
+++ b/arch/arm/mach-tegra/board-flounder-pinmux.c
@@ -0,0 +1,72 @@
+/*
+ * arch/arm/mach-tegra/board-flounder-pinmux.c
+ *
+ * Copyright (c) 2013, NVIDIA Corporation. All rights reserved.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/gpio.h>
+#include <linux/of.h>
+#include <mach/pinmux.h>
+#include <mach/gpio-tegra.h>
+
+#include "board.h"
+#include "board-flounder.h"
+#include "devices.h"
+#include "gpio-names.h"
+
+#include <mach/pinmux-t12.h>
+#include "board-flounder-pinmux-t12x.h"
+
+static __initdata struct tegra_drive_pingroup_config flounder_drive_pinmux[] = {
+
+ /* SDMMC1 */
+ SET_DRIVE(SDIO1, ENABLE, DISABLE, DIV_1, 46, 42, FASTEST, FASTEST),
+
+ /* SDMMC3 */
+ SET_DRIVE(SDIO3, ENABLE, DISABLE, DIV_1, 20, 36, FASTEST, FASTEST),
+
+ /* SDMMC4 */
+ SET_DRIVE_WITH_TYPE(GMA, ENABLE, DISABLE, DIV_1, 10, 20, FASTEST,
+ FASTEST, 1),
+};
+
+static void __init flounder_gpio_init_configure(void)
+{
+ int len;
+ int i;
+ struct gpio_init_pin_info *pins_info;
+
+ len = ARRAY_SIZE(init_gpio_mode_flounder_common);
+ pins_info = init_gpio_mode_flounder_common;
+
+ for (i = 0; i < len; ++i) {
+ tegra_gpio_init_configure(pins_info->gpio_nr,
+ pins_info->is_input, pins_info->value);
+ pins_info++;
+ }
+}
+
+int __init flounder_pinmux_init(void)
+{
+ if (!of_machine_is_compatible("nvidia,tn8"))
+ flounder_gpio_init_configure();
+
+	tegra_pinmux_config_table(flounder_pinmux_common,
+				  ARRAY_SIZE(flounder_pinmux_common));
+ tegra_drive_pinmux_config_table(flounder_drive_pinmux,
+ ARRAY_SIZE(flounder_drive_pinmux));
+ tegra_pinmux_config_table(unused_pins_lowpower,
+ ARRAY_SIZE(unused_pins_lowpower));
+
+ return 0;
+}
diff --git a/arch/arm/mach-tegra/board-flounder-power.c b/arch/arm/mach-tegra/board-flounder-power.c
new file mode 100644
index 0000000..7808c3f
--- /dev/null
+++ b/arch/arm/mach-tegra/board-flounder-power.c
@@ -0,0 +1,552 @@
+/*
+ * arch/arm/mach-tegra/board-flounder-power.c
+ *
+ * Copyright (c) 2013-2014, NVIDIA Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#include <linux/i2c.h>
+#include <linux/platform_device.h>
+#include <linux/resource.h>
+#include <linux/io.h>
+#include <mach/edp.h>
+#include <mach/irqs.h>
+#include <linux/edp.h>
+#include <linux/platform_data/tegra_edp.h>
+#include <linux/pid_thermal_gov.h>
+#include <linux/regulator/fixed.h>
+#include <linux/mfd/palmas.h>
+#include <linux/power/power_supply_extcon.h>
+#include <linux/regulator/machine.h>
+#include <linux/irq.h>
+#include <linux/gpio.h>
+#include <linux/regulator/tegra-dfll-bypass-regulator.h>
+#include <linux/tegra-fuse.h>
+
+#include <asm/mach-types.h>
+#include <mach/pinmux-t12.h>
+
+#include "pm.h"
+#include "dvfs.h"
+#include "board.h"
+#include "tegra-board-id.h"
+#include "board-common.h"
+#include "board-flounder.h"
+#include "board-pmu-defines.h"
+#include "devices.h"
+#include "iomap.h"
+#include "tegra_cl_dvfs.h"
+#include "tegra11_soctherm.h"
+
+#define PMC_CTRL 0x0
+#define PMC_CTRL_INTR_LOW (1 << 17)
+
+static void flounder_board_suspend(int state, enum suspend_stage stage)
+{
+	static bool requested;
+
+	if (state == TEGRA_SUSPEND_LP0 &&
+	    stage == TEGRA_SUSPEND_BEFORE_PERIPHERAL) {
+		if (!requested)
+			gpio_request_one(TEGRA_GPIO_PB5, GPIOF_OUT_INIT_HIGH,
+					 "sdmmc3_dat2");
+		else
+			gpio_direction_output(TEGRA_GPIO_PB5, 1);
+
+		requested = true;
+	}
+}
+
+static struct tegra_suspend_platform_data flounder_suspend_data = {
+ .cpu_timer = 500,
+ .cpu_off_timer = 300,
+ .suspend_mode = TEGRA_SUSPEND_LP0,
+ .core_timer = 0x157e,
+ .core_off_timer = 2000,
+ .corereq_high = true,
+ .sysclkreq_high = true,
+ .cpu_lp2_min_residency = 1000,
+ .min_residency_vmin_fmin = 1000,
+ .min_residency_ncpu_fast = 8000,
+ .min_residency_ncpu_slow = 5000,
+ .min_residency_mclk_stop = 5000,
+ .min_residency_crail = 20000,
+ .board_suspend = flounder_board_suspend,
+};
+
+static struct power_supply_extcon_plat_data extcon_pdata = {
+ .extcon_name = "tegra-udc",
+};
+
+static struct platform_device power_supply_extcon_device = {
+ .name = "power-supply-extcon",
+ .id = -1,
+ .dev = {
+ .platform_data = &extcon_pdata,
+ },
+};
+
+int __init flounder_suspend_init(void)
+{
+ tegra_init_suspend(&flounder_suspend_data);
+ return 0;
+}
+
+/************************ FLOUNDER CL-DVFS DATA *********************/
+#define FLOUNDER_DEFAULT_CVB_ALIGNMENT 10000
+
+#ifdef CONFIG_ARCH_TEGRA_HAS_CL_DVFS
+static struct tegra_cl_dvfs_cfg_param e1736_flounder_cl_dvfs_param = {
+ .sample_rate = 12500, /* i2c freq */
+
+ .force_mode = TEGRA_CL_DVFS_FORCE_FIXED,
+ .cf = 10,
+ .ci = 0,
+ .cg = 2,
+
+ .droop_cut_value = 0xF,
+ .droop_restore_ramp = 0x0,
+ .scale_out_ramp = 0x0,
+};
+
+/* E1736 voltage map. Fixed 10 mV steps from 700 mV to 1400 mV. */
+#define E1736_CPU_VDD_MAP_SIZE ((1400000 - 700000) / 10000 + 1)
+static struct voltage_reg_map e1736_cpu_vdd_map[E1736_CPU_VDD_MAP_SIZE];
+static inline void e1736_fill_reg_map(void)
+{
+ int i;
+ for (i = 0; i < E1736_CPU_VDD_MAP_SIZE; i++) {
+ /* 0.7V corresponds to 0b0011010 = 26 */
+ /* 1.4V corresponds to 0b1100000 = 96 */
+ e1736_cpu_vdd_map[i].reg_value = i + 26;
+ e1736_cpu_vdd_map[i].reg_uV = 700000 + 10000 * i;
+ }
+}
+
+static struct tegra_cl_dvfs_platform_data e1736_cl_dvfs_data = {
+ .dfll_clk_name = "dfll_cpu",
+ .pmu_if = TEGRA_CL_DVFS_PMU_I2C,
+ .u.pmu_i2c = {
+ .fs_rate = 400000,
+ .slave_addr = 0xb0, /* pmu i2c address */
+ .reg = 0x23, /* vdd_cpu rail reg address */
+ },
+ .vdd_map = e1736_cpu_vdd_map,
+ .vdd_map_size = E1736_CPU_VDD_MAP_SIZE,
+
+ .cfg_param = &e1736_flounder_cl_dvfs_param,
+};
+static int __init flounder_cl_dvfs_init(void)
+{
+	struct tegra_cl_dvfs_platform_data *data = &e1736_cl_dvfs_data;
+
+	e1736_fill_reg_map();
+
+	data->flags = TEGRA_CL_DVFS_DYN_OUTPUT_CFG;
+	tegra_cl_dvfs_device.dev.platform_data = data;
+	platform_device_register(&tegra_cl_dvfs_device);
+
+	return 0;
+}
+#else
+static inline int flounder_cl_dvfs_init(void)
+{ return 0; }
+#endif
+
+int __init flounder_rail_alignment_init(void)
+{
+#ifndef CONFIG_ARCH_TEGRA_13x_SOC
+	tegra12x_vdd_cpu_align(FLOUNDER_DEFAULT_CVB_ALIGNMENT, 0);
+#endif
+ return 0;
+}
+
+int __init flounder_regulator_init(void)
+{
+ void __iomem *pmc = IO_ADDRESS(TEGRA_PMC_BASE);
+ u32 pmc_ctrl;
+
+	/* TPS65913: the normal state of the INT request line is LOW.
+	 * Configure the power management controller to trigger PMU
+	 * interrupts when it is HIGH.
+	 */
+ pmc_ctrl = readl(pmc + PMC_CTRL);
+ writel(pmc_ctrl | PMC_CTRL_INTR_LOW, pmc + PMC_CTRL);
+
+ platform_device_register(&power_supply_extcon_device);
+
+ return 0;
+}
+
+int __init flounder_edp_init(void)
+{
+ unsigned int regulator_mA;
+
+	/* Both vdd_cpu and vdd_gpu use 3-phase rails, so the EDP current
+	 * limit is the same for both.
+	 */
+ regulator_mA = 16800;
+
+ pr_info("%s: CPU regulator %d mA\n", __func__, regulator_mA);
+ tegra_init_cpu_edp_limits(regulator_mA);
+
+ pr_info("%s: GPU regulator %d mA\n", __func__, regulator_mA);
+ tegra_init_gpu_edp_limits(regulator_mA);
+
+ return 0;
+}
+
+
+static struct pid_thermal_gov_params soctherm_pid_params = {
+ .max_err_temp = 9000,
+ .max_err_gain = 1000,
+
+ .gain_p = 1000,
+ .gain_d = 0,
+
+ .up_compensation = 20,
+ .down_compensation = 20,
+};
+
+static struct thermal_zone_params soctherm_tzp = {
+ .governor_name = "pid_thermal_gov",
+ .governor_params = &soctherm_pid_params,
+};
+
+static struct tegra_thermtrip_pmic_data tpdata_palmas = {
+ .reset_tegra = 1,
+ .pmu_16bit_ops = 0,
+ .controller_type = 0,
+ .pmu_i2c_addr = 0x58,
+ .i2c_controller_id = 4,
+ .poweroff_reg_addr = 0xa0,
+ .poweroff_reg_data = 0x0,
+};
+
+/*
+ * @PSKIP_CONFIG_NOTE: For T132, throttling config of PSKIP is no longer
+ * done in soctherm registers. These settings are now done via registers in
+ * denver:ccroc module which are at a different register offset. More
+ * importantly, there are _only_ three levels of throttling: 'low',
+ * 'medium' and 'heavy' and are selected via the 'throttling_depth' field
+ * in the throttle->devs[] section of the soctherm config. Since the depth
+ * specification is per device, it is necessary to manually make sure the
+ * depths specified along with a given level are the same across all devs,
+ * otherwise it will overwrite a previously set depth with a different
+ * depth. We will refer to this comment at each relevant location in the
+ * config sections below.
+ */
+static struct soctherm_platform_data flounder_soctherm_data = {
+ .oc_irq_base = TEGRA_SOC_OC_IRQ_BASE,
+ .num_oc_irqs = TEGRA_SOC_OC_NUM_IRQ,
+ .therm = {
+ [THERM_CPU] = {
+ .zone_enable = true,
+ .passive_delay = 1000,
+ .hotspot_offset = 6000,
+ .num_trips = 3,
+ .trips = {
+ {
+ .cdev_type = "tegra-shutdown",
+ .trip_temp = 101000,
+ .trip_type = THERMAL_TRIP_CRITICAL,
+ .upper = THERMAL_NO_LIMIT,
+ .lower = THERMAL_NO_LIMIT,
+ },
+ {
+ .cdev_type = "tegra-heavy",
+ .trip_temp = 99000,
+ .trip_type = THERMAL_TRIP_HOT,
+ .upper = THERMAL_NO_LIMIT,
+ .lower = THERMAL_NO_LIMIT,
+ },
+ {
+ .cdev_type = "cpu-balanced",
+ .trip_temp = 90000,
+ .trip_type = THERMAL_TRIP_PASSIVE,
+ .upper = THERMAL_NO_LIMIT,
+ .lower = THERMAL_NO_LIMIT,
+ },
+ },
+ .tzp = &soctherm_tzp,
+ },
+ [THERM_GPU] = {
+ .zone_enable = true,
+ .passive_delay = 1000,
+ .hotspot_offset = 6000,
+ .num_trips = 3,
+ .trips = {
+ {
+ .cdev_type = "tegra-shutdown",
+ .trip_temp = 101000,
+ .trip_type = THERMAL_TRIP_CRITICAL,
+ .upper = THERMAL_NO_LIMIT,
+ .lower = THERMAL_NO_LIMIT,
+ },
+ {
+ .cdev_type = "tegra-heavy",
+ .trip_temp = 99000,
+ .trip_type = THERMAL_TRIP_HOT,
+ .upper = THERMAL_NO_LIMIT,
+ .lower = THERMAL_NO_LIMIT,
+ },
+ {
+ .cdev_type = "gpu-balanced",
+ .trip_temp = 90000,
+ .trip_type = THERMAL_TRIP_PASSIVE,
+ .upper = THERMAL_NO_LIMIT,
+ .lower = THERMAL_NO_LIMIT,
+ },
+ },
+ .tzp = &soctherm_tzp,
+ },
+ [THERM_MEM] = {
+ .zone_enable = true,
+ .num_trips = 1,
+ .trips = {
+ {
+ .cdev_type = "tegra-shutdown",
+ .trip_temp = 101000, /* same as GPU shutdown */
+ .trip_type = THERMAL_TRIP_CRITICAL,
+ .upper = THERMAL_NO_LIMIT,
+ .lower = THERMAL_NO_LIMIT,
+ },
+ },
+ .tzp = &soctherm_tzp,
+ },
+ [THERM_PLL] = {
+ .zone_enable = true,
+ .tzp = &soctherm_tzp,
+ },
+ },
+ .throttle = {
+ [THROTTLE_HEAVY] = {
+ .priority = 100,
+ .devs = {
+ [THROTTLE_DEV_CPU] = {
+ .enable = true,
+ .depth = 80,
+ /* see @PSKIP_CONFIG_NOTE */
+ .throttling_depth = "heavy_throttling",
+ },
+ [THROTTLE_DEV_GPU] = {
+ .enable = true,
+ .throttling_depth = "heavy_throttling",
+ },
+ },
+ },
+ },
+};
+
+/* Only the diffs from flounder_soctherm_data structure */
+static struct soctherm_platform_data t132ref_v1_soctherm_data = {
+ .therm = {
+ [THERM_CPU] = {
+ .zone_enable = true,
+ .passive_delay = 1000,
+ .hotspot_offset = 10000,
+ },
+ [THERM_PLL] = {
+ .zone_enable = true,
+ .passive_delay = 1000,
+ .num_trips = 3,
+ .trips = {
+ {
+ .cdev_type = "tegra-shutdown",
+ .trip_temp = 97000,
+ .trip_type = THERMAL_TRIP_CRITICAL,
+ .upper = THERMAL_NO_LIMIT,
+ .lower = THERMAL_NO_LIMIT,
+ },
+ {
+ .cdev_type = "tegra-heavy",
+ .trip_temp = 94000,
+ .trip_type = THERMAL_TRIP_HOT,
+ .upper = THERMAL_NO_LIMIT,
+ .lower = THERMAL_NO_LIMIT,
+ },
+ {
+ .cdev_type = "cpu-balanced",
+ .trip_temp = 84000,
+ .trip_type = THERMAL_TRIP_PASSIVE,
+ .upper = THERMAL_NO_LIMIT,
+ .lower = THERMAL_NO_LIMIT,
+ },
+ },
+ .tzp = &soctherm_tzp,
+ },
+ },
+};
+
+/* Only the diffs from ardbeg_soctherm_data structure */
+static struct soctherm_platform_data t132ref_v2_soctherm_data = {
+ .therm = {
+ [THERM_CPU] = {
+ .zone_enable = true,
+ .passive_delay = 1000,
+ .hotspot_offset = 10000,
+ .num_trips = 3,
+ .trips = {
+ {
+ .cdev_type = "tegra-shutdown",
+ .trip_temp = 105000,
+ .trip_type = THERMAL_TRIP_CRITICAL,
+ .upper = THERMAL_NO_LIMIT,
+ .lower = THERMAL_NO_LIMIT,
+ },
+ {
+ .cdev_type = "tegra-heavy",
+ .trip_temp = 102000,
+ .trip_type = THERMAL_TRIP_HOT,
+ .upper = THERMAL_NO_LIMIT,
+ .lower = THERMAL_NO_LIMIT,
+ },
+ {
+ .cdev_type = "cpu-balanced",
+ .trip_temp = 85000,
+ .trip_type = THERMAL_TRIP_PASSIVE,
+ .upper = THERMAL_NO_LIMIT,
+ .lower = THERMAL_NO_LIMIT,
+ },
+ },
+ .tzp = &soctherm_tzp,
+ },
+ [THERM_GPU] = {
+ .zone_enable = true,
+ .passive_delay = 1000,
+ .hotspot_offset = 5000,
+ .num_trips = 3,
+ .trips = {
+ {
+ .cdev_type = "tegra-shutdown",
+ .trip_temp = 101000,
+ .trip_type = THERMAL_TRIP_CRITICAL,
+ .upper = THERMAL_NO_LIMIT,
+ .lower = THERMAL_NO_LIMIT,
+ },
+ {
+ .cdev_type = "tegra-heavy",
+ .trip_temp = 99000,
+ .trip_type = THERMAL_TRIP_HOT,
+ .upper = THERMAL_NO_LIMIT,
+ .lower = THERMAL_NO_LIMIT,
+ },
+ {
+ .cdev_type = "gpu-balanced",
+ .trip_temp = 89000,
+ .trip_type = THERMAL_TRIP_PASSIVE,
+ .upper = THERMAL_NO_LIMIT,
+ .lower = THERMAL_NO_LIMIT,
+ },
+ },
+ .tzp = &soctherm_tzp,
+ },
+ },
+};
+
+static struct soctherm_throttle battery_oc_throttle_t13x = {
+ .throt_mode = BRIEF,
+ .polarity = SOCTHERM_ACTIVE_LOW,
+ .priority = 50,
+ .intr = true,
+ .alarm_cnt_threshold = 15,
+ .alarm_filter = 5100000,
+ .devs = {
+ [THROTTLE_DEV_CPU] = {
+ .enable = true,
+ .depth = 50,
+ /* see @PSKIP_CONFIG_NOTE */
+ .throttling_depth = "low_throttling",
+ },
+ [THROTTLE_DEV_GPU] = {
+ .enable = true,
+ .throttling_depth = "medium_throttling",
+ },
+ },
+};
+
+int __init flounder_soctherm_init(void)
+{
+ const int t13x_cpu_edp_temp_margin = 5000,
+ t13x_gpu_edp_temp_margin = 6000;
+ int cpu_edp_temp_margin, gpu_edp_temp_margin;
+ int cp_rev, ft_rev;
+ enum soctherm_therm_id therm_cpu = THERM_CPU;
+
+ cp_rev = tegra_fuse_calib_base_get_cp(NULL, NULL);
+ ft_rev = tegra_fuse_calib_base_get_ft(NULL, NULL);
+
+ /* TODO: remove this part once the bootloader changes are merged */
+ tegra_gpio_disable(TEGRA_GPIO_PJ2);
+ tegra_gpio_disable(TEGRA_GPIO_PS7);
+
+ cpu_edp_temp_margin = t13x_cpu_edp_temp_margin;
+ gpu_edp_temp_margin = t13x_gpu_edp_temp_margin;
+
+ if (!cp_rev) {
+ /* ATE rev is NEW: use v2 table */
+ flounder_soctherm_data.therm[THERM_CPU] =
+ t132ref_v2_soctherm_data.therm[THERM_CPU];
+ flounder_soctherm_data.therm[THERM_GPU] =
+ t132ref_v2_soctherm_data.therm[THERM_GPU];
+ } else {
+ /* ATE rev is Old or Mid: use PLLx sensor only */
+ flounder_soctherm_data.therm[THERM_CPU] =
+ t132ref_v1_soctherm_data.therm[THERM_CPU];
+ flounder_soctherm_data.therm[THERM_PLL] =
+ t132ref_v1_soctherm_data.therm[THERM_PLL];
+ therm_cpu = THERM_PLL; /* override CPU with PLL zone */
+ }
+
+ /* do this only for supported CP,FT fuses */
+ if ((cp_rev >= 0) && (ft_rev >= 0)) {
+ tegra_platform_edp_init(
+ flounder_soctherm_data.therm[therm_cpu].trips,
+ &flounder_soctherm_data.therm[therm_cpu].num_trips,
+ cpu_edp_temp_margin);
+ tegra_platform_gpu_edp_init(
+ flounder_soctherm_data.therm[THERM_GPU].trips,
+ &flounder_soctherm_data.therm[THERM_GPU].num_trips,
+ gpu_edp_temp_margin);
+ tegra_add_cpu_vmax_trips(
+ flounder_soctherm_data.therm[therm_cpu].trips,
+ &flounder_soctherm_data.therm[therm_cpu].num_trips);
+ tegra_add_tgpu_trips(
+ flounder_soctherm_data.therm[THERM_GPU].trips,
+ &flounder_soctherm_data.therm[THERM_GPU].num_trips);
+ tegra_add_core_vmax_trips(
+ flounder_soctherm_data.therm[THERM_PLL].trips,
+ &flounder_soctherm_data.therm[THERM_PLL].num_trips);
+ }
+
+ tegra_add_cpu_vmin_trips(
+ flounder_soctherm_data.therm[therm_cpu].trips,
+ &flounder_soctherm_data.therm[therm_cpu].num_trips);
+ tegra_add_gpu_vmin_trips(
+ flounder_soctherm_data.therm[THERM_GPU].trips,
+ &flounder_soctherm_data.therm[THERM_GPU].num_trips);
+ tegra_add_core_vmin_trips(
+ flounder_soctherm_data.therm[THERM_PLL].trips,
+ &flounder_soctherm_data.therm[THERM_PLL].num_trips);
+
+ flounder_soctherm_data.tshut_pmu_trip_data = &tpdata_palmas;
+ /* Enable soc_therm OC throttling on selected platforms */
+ memcpy(&flounder_soctherm_data.throttle[THROTTLE_OC4],
+ &battery_oc_throttle_t13x,
+ sizeof(battery_oc_throttle_t13x));
+ return tegra11_soctherm_init(&flounder_soctherm_data);
+}
diff --git a/arch/arm/mach-tegra/board-flounder-sdhci.c b/arch/arm/mach-tegra/board-flounder-sdhci.c
new file mode 100644
index 0000000..56e7e4e
--- /dev/null
+++ b/arch/arm/mach-tegra/board-flounder-sdhci.c
@@ -0,0 +1,561 @@
+/*
+ * arch/arm/mach-tegra/board-flounder-sdhci.c
+ *
+ * Copyright (c) 2013, NVIDIA CORPORATION. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/resource.h>
+#include <linux/platform_device.h>
+#include <linux/wlan_plat.h>
+#include <linux/delay.h>
+#include <linux/gpio.h>
+#include <linux/clk.h>
+#include <linux/err.h>
+#include <linux/if.h>
+#include <linux/mmc/host.h>
+#include <linux/wl12xx.h>
+#include <linux/platform_data/mmc-sdhci-tegra.h>
+#include <linux/mfd/max77660/max77660-core.h>
+#include <linux/tegra-fuse.h>
+#include <linux/random.h>
+
+#include <asm/mach-types.h>
+#include <mach/irqs.h>
+#include <mach/gpio-tegra.h>
+
+#include "gpio-names.h"
+#include "board.h"
+#include "board-flounder.h"
+#include "dvfs.h"
+#include "iomap.h"
+#include "tegra-board-id.h"
+
+#define FLOUNDER_WLAN_PWR TEGRA_GPIO_PX7
+#define FLOUNDER_WLAN_WOW TEGRA_GPIO_PU5
+#if defined(CONFIG_BCMDHD_EDP_SUPPORT)
+#define ON 1020 /* 1019.16mW */
+#define OFF 0
+static unsigned int wifi_states[] = {ON, OFF};
+#endif
+
+#define FUSE_SOC_SPEEDO_0 0x134
+static void (*wifi_status_cb)(int card_present, void *dev_id);
+static void *wifi_status_cb_devid;
+static int flounder_wifi_status_register(void (*callback)(int, void *), void *);
+
+static int flounder_wifi_reset(int on);
+static int flounder_wifi_power(int on);
+static int flounder_wifi_set_carddetect(int val);
+static int flounder_wifi_get_mac_addr(unsigned char *buf);
+static void *flounder_wifi_get_country_code(char *country_iso_code, u32 flags);
+
+static struct wifi_platform_data flounder_wifi_control = {
+ .set_power = flounder_wifi_power,
+ .set_reset = flounder_wifi_reset,
+ .set_carddetect = flounder_wifi_set_carddetect,
+ .get_mac_addr = flounder_wifi_get_mac_addr,
+ .get_country_code = flounder_wifi_get_country_code,
+#if defined(CONFIG_BCMDHD_EDP_SUPPORT)
+ /* wifi edp client information */
+ .client_info = {
+ .name = "wifi_edp_client",
+ .states = wifi_states,
+ .num_states = ARRAY_SIZE(wifi_states),
+ .e0_index = 0,
+ .priority = EDP_MAX_PRIO,
+ },
+#endif
+};
+
+static struct resource wifi_resource[] = {
+ [0] = {
+ .name = "bcm4329_wlan_irq",
+ .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL
+ | IORESOURCE_IRQ_SHAREABLE,
+ },
+};
+
+static struct platform_device flounder_wifi_device = {
+ .name = "bcm4329_wlan",
+ .id = 1,
+ .num_resources = 1,
+ .resource = wifi_resource,
+ .dev = {
+ .platform_data = &flounder_wifi_control,
+ },
+};
+
+static struct resource mrvl_wifi_resource[] = {
+ [0] = {
+ .name = "mrvl_wlan_irq",
+ .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_LOWLEVEL | IORESOURCE_IRQ_SHAREABLE,
+ },
+};
+
+static struct platform_device marvell_wifi_device = {
+ .name = "mrvl_wlan",
+ .id = 1,
+ .num_resources = 1,
+ .resource = mrvl_wifi_resource,
+ .dev = {
+ .platform_data = &flounder_wifi_control,
+ },
+};
+
+static struct resource sdhci_resource0[] = {
+ [0] = {
+ .start = INT_SDMMC1,
+ .end = INT_SDMMC1,
+ .flags = IORESOURCE_IRQ,
+ },
+ [1] = {
+ .start = TEGRA_SDMMC1_BASE,
+ .end = TEGRA_SDMMC1_BASE + TEGRA_SDMMC1_SIZE-1,
+ .flags = IORESOURCE_MEM,
+ },
+};
+
+static struct resource sdhci_resource3[] = {
+ [0] = {
+ .start = INT_SDMMC4,
+ .end = INT_SDMMC4,
+ .flags = IORESOURCE_IRQ,
+ },
+ [1] = {
+ .start = TEGRA_SDMMC4_BASE,
+ .end = TEGRA_SDMMC4_BASE + TEGRA_SDMMC4_SIZE-1,
+ .flags = IORESOURCE_MEM,
+ },
+};
+
+#ifdef CONFIG_MMC_EMBEDDED_SDIO
+static struct embedded_sdio_data embedded_sdio_data0 = {
+ .cccr = {
+ .sdio_vsn = 2,
+ .multi_block = 1,
+ .low_speed = 0,
+ .wide_bus = 0,
+ .high_power = 1,
+ .high_speed = 1,
+ },
+ .cis = {
+ .vendor = 0x02d0,
+ .device = 0x4329,
+ },
+};
+#endif
+
+static struct tegra_sdhci_platform_data tegra_sdhci_platform_data0 = {
+ .mmc_data = {
+ .register_status_notify = flounder_wifi_status_register,
+#ifdef CONFIG_MMC_EMBEDDED_SDIO
+ .embedded_sdio = &embedded_sdio_data0,
+#endif
+ .built_in = 0,
+ .ocr_mask = MMC_OCR_1V8_MASK,
+ },
+ .cd_gpio = -1,
+ .wp_gpio = -1,
+ .power_gpio = -1,
+ .tap_delay = 0,
+ .trim_delay = 0x2,
+ .ddr_clk_limit = 41000000,
+ .uhs_mask = MMC_UHS_MASK_DDR50,
+ .calib_3v3_offsets = 0x7676,
+ .calib_1v8_offsets = 0x7676,
+ .default_drv_type = MMC_SET_DRIVER_TYPE_A,
+};
+
+static struct tegra_sdhci_platform_data tegra_sdhci_platform_data3 = {
+ .cd_gpio = -1,
+ .wp_gpio = -1,
+ .power_gpio = -1,
+ .is_8bit = 1,
+ .tap_delay = 0x4,
+ .trim_delay = 0x3,
+ .ddr_trim_delay = 0x0,
+ .mmc_data = {
+ .built_in = 1,
+ .ocr_mask = MMC_OCR_1V8_MASK,
+ },
+ .ddr_clk_limit = 51000000,
+ .max_clk_limit = 200000000,
+ .calib_3v3_offsets = 0x0202,
+ .calib_1v8_offsets = 0x0202,
+};
+
+static struct platform_device tegra_sdhci_device0 = {
+ .name = "sdhci-tegra",
+ .id = 0,
+ .resource = sdhci_resource0,
+ .num_resources = ARRAY_SIZE(sdhci_resource0),
+ .dev = {
+ .platform_data = &tegra_sdhci_platform_data0,
+ },
+};
+
+static struct platform_device tegra_sdhci_device3 = {
+ .name = "sdhci-tegra",
+ .id = 3,
+ .resource = sdhci_resource3,
+ .num_resources = ARRAY_SIZE(sdhci_resource3),
+ .dev = {
+ .platform_data = &tegra_sdhci_platform_data3,
+ },
+};
+
+static int flounder_wifi_status_register(
+ void (*callback)(int card_present, void *dev_id),
+ void *dev_id)
+{
+ if (wifi_status_cb)
+ return -EAGAIN;
+ wifi_status_cb = callback;
+ wifi_status_cb_devid = dev_id;
+ return 0;
+}
+
+static int flounder_wifi_set_carddetect(int val)
+{
+ pr_debug("%s: %d\n", __func__, val);
+ if (wifi_status_cb)
+ wifi_status_cb(val, wifi_status_cb_devid);
+ else
+ pr_warn("%s: Nobody to notify\n", __func__);
+ return 0;
+}
+
+static int flounder_wifi_power(int on)
+{
+ pr_err("%s: %d\n", __func__, on);
+
+ gpio_set_value(FLOUNDER_WLAN_PWR, on);
+ msleep(100);
+
+ return 0;
+}
+
+static int flounder_wifi_reset(int on)
+{
+ pr_debug("%s: do nothing\n", __func__);
+ return 0;
+}
+
+static unsigned char flounder_mac_addr[IFHWADDRLEN] = { 0, 0x90, 0x4c, 0, 0, 0 };
+
+static int __init flounder_mac_addr_setup(char *str)
+{
+ char macstr[IFHWADDRLEN*3];
+ char *macptr = macstr;
+ char *token;
+ int i = 0;
+
+ if (!str)
+ return 0;
+ pr_debug("wlan MAC = %s\n", str);
+ if (strlen(str) >= sizeof(macstr))
+ return 0;
+ strcpy(macstr, str);
+
+ while ((token = strsep(&macptr, ":")) != NULL) {
+ unsigned long val;
+ int res;
+
+ if (i >= IFHWADDRLEN)
+ break;
+ res = kstrtoul(token, 16, &val);
+ if (res < 0)
+ return 0;
+ flounder_mac_addr[i++] = (u8)val;
+ }
+
+ return 1;
+}
+
+__setup("flounder.wifimacaddr=", flounder_mac_addr_setup);
+
+static int flounder_wifi_get_mac_addr(unsigned char *buf)
+{
+ uint rand_mac;
+
+ if (!buf)
+ return -EFAULT;
+
+ if ((flounder_mac_addr[4] == 0) && (flounder_mac_addr[5] == 0)) {
+ prandom_seed((uint)jiffies);
+ rand_mac = prandom_u32();
+ flounder_mac_addr[3] = (unsigned char)rand_mac;
+ flounder_mac_addr[4] = (unsigned char)(rand_mac >> 8);
+ flounder_mac_addr[5] = (unsigned char)(rand_mac >> 16);
+ }
+ memcpy(buf, flounder_mac_addr, IFHWADDRLEN);
+ return 0;
+}
+
+#define WLC_CNTRY_BUF_SZ 4 /* Country string is 3 bytes + NUL */
+
+static char flounder_country_code[WLC_CNTRY_BUF_SZ] = { 0 };
+
+static int __init flounder_country_code_setup(char *str)
+{
+ if (!str)
+ return 0;
+ pr_debug("wlan country code = %s\n", str);
+ if (strlen(str) >= sizeof(flounder_country_code))
+ return 0;
+ strcpy(flounder_country_code, str);
+ return 1;
+}
+__setup("androidboot.wificountrycode=", flounder_country_code_setup);
+
+struct cntry_locales_custom {
+ char iso_abbrev[WLC_CNTRY_BUF_SZ]; /* ISO 3166-1 country abbreviation */
+ char custom_locale[WLC_CNTRY_BUF_SZ]; /* Custom firmware locale */
+ s32 custom_locale_rev; /* Custom locale revision, default -1 */
+};
+
+struct cntry_locales_custom country_code_custom_table[] = {
+/* Table should be filled out based on custom platform regulatory requirements */
+ {"", "XZ", 11}, /* Universal if Country code is unknown or empty */
+ {"US", "US", 0},
+ {"AE", "AE", 1},
+ {"AR", "AR", 21},
+ {"AT", "AT", 4},
+ {"AU", "AU", 6},
+ {"BE", "BE", 4},
+ {"BG", "BG", 4},
+ {"BN", "BN", 4},
+ {"BR", "BR", 4},
+ {"CA", "US", 0}, /* Previously was CA/31 */
+ {"CH", "CH", 4},
+ {"CY", "CY", 4},
+ {"CZ", "CZ", 4},
+ {"DE", "DE", 7},
+ {"DK", "DK", 4},
+ {"EE", "EE", 4},
+ {"ES", "ES", 4},
+ {"FI", "FI", 4},
+ {"FR", "FR", 5},
+ {"GB", "GB", 6},
+ {"GR", "GR", 4},
+ {"HK", "HK", 2},
+ {"HR", "HR", 4},
+ {"HU", "HU", 4},
+ {"IE", "IE", 5},
+ {"IN", "IN", 3},
+ {"IS", "IS", 4},
+ {"IT", "IT", 4},
+ {"ID", "ID", 13},
+ {"JP", "JP", 58},
+ {"KR", "KR", 57},
+ {"KW", "KW", 5},
+ {"LI", "LI", 4},
+ {"LT", "LT", 4},
+ {"LU", "LU", 3},
+ {"LV", "LV", 4},
+ {"MA", "MA", 2},
+ {"MT", "MT", 4},
+ {"MX", "MX", 20},
+ {"MY", "MY", 3},
+ {"NL", "NL", 4},
+ {"NO", "NO", 4},
+ {"NZ", "NZ", 4},
+ {"PL", "PL", 4},
+ {"PT", "PT", 4},
+ {"PY", "PY", 2},
+ {"QA", "QA", 0},
+ {"RO", "RO", 4},
+ {"RU", "RU", 13},
+ {"SA", "SA", 26},
+ {"SE", "SE", 4},
+ {"SG", "SG", 4},
+ {"SI", "SI", 4},
+ {"SK", "SK", 4},
+ {"TH", "TH", 5},
+ {"TR", "TR", 7},
+ {"TW", "TW", 1},
+ {"VN", "VN", 4},
+ {"IR", "XZ", 11}, /* Universal if Country code is IRAN, (ISLAMIC REPUBLIC OF) */
+ {"SD", "XZ", 11}, /* Universal if Country code is SUDAN */
+ {"SY", "XZ", 11}, /* Universal if Country code is SYRIAN ARAB REPUBLIC */
+ {"GL", "XZ", 11}, /* Universal if Country code is GREENLAND */
+ {"PS", "XZ", 11}, /* Universal if Country code is PALESTINIAN TERRITORIES */
+ {"TL", "XZ", 11}, /* Universal if Country code is TIMOR-LESTE (EAST TIMOR) */
+ {"MH", "XZ", 11}, /* Universal if Country code is MARSHALL ISLANDS */
+};
+
+struct cntry_locales_custom country_code_nodfs_table[] = {
+ {"", "XZ", 40}, /* Universal if Country code is unknown or empty */
+ {"US", "US", 172},
+ {"AM", "E0", 26},
+ {"AU", "AU", 37},
+ {"BG", "E0", 26},
+ {"BR", "BR", 18},
+ {"CA", "US", 172},
+ {"CH", "E0", 26},
+ {"CY", "E0", 26},
+ {"CZ", "E0", 26},
+ {"DE", "E0", 26},
+ {"DK", "E0", 26},
+ {"DZ", "E0", 26},
+ {"EE", "E0", 26},
+ {"ES", "E0", 26},
+ {"EU", "E0", 26},
+ {"FI", "E0", 26},
+ {"FR", "E0", 26},
+ {"GB", "E0", 26},
+ {"GR", "E0", 26},
+ {"HK", "SG", 17},
+ {"HR", "E0", 26},
+ {"HU", "E0", 26},
+ {"ID", "ID", 1},
+ {"IE", "E0", 26},
+ {"IL", "E0", 26},
+ {"IN", "IN", 27},
+ {"IQ", "E0", 26},
+ {"IS", "E0", 26},
+ {"IT", "E0", 26},
+ {"JP", "JP", 83},
+ {"KR", "KR", 79},
+ {"KW", "E0", 26},
+ {"KZ", "E0", 26},
+ {"LI", "E0", 26},
+ {"LT", "E0", 26},
+ {"LU", "E0", 26},
+ {"LV", "LV", 4},
+ {"LY", "E0", 26},
+ {"MA", "E0", 26},
+ {"MT", "E0", 26},
+ {"MY", "MY", 15},
+ {"MX", "US", 172},
+ {"NL", "E0", 26},
+ {"NO", "E0", 26},
+ {"OM", "E0", 26},
+ {"PL", "E0", 26},
+ {"PT", "E0", 26},
+ {"QA", "QA", 0},
+ {"RO", "E0", 26},
+ {"RS", "E0", 26},
+ {"SA", "SA", 26},
+ {"SE", "E0", 26},
+ {"SG", "SG", 17},
+ {"SI", "E0", 26},
+ {"SK", "E0", 26},
+ {"SZ", "E0", 26},
+ {"TH", "TH", 9},
+ {"TN", "E0", 26},
+ {"TR", "E0", 26},
+ {"TW", "TW", 60},
+ {"ZA", "E0", 26},
+};
+
+static void *flounder_wifi_get_country_code(char *country_iso_code, u32 flags)
+{
+ struct cntry_locales_custom *locales;
+ int size, i;
+
+ if (flags & WLAN_PLAT_NODFS_FLAG) {
+ locales = country_code_nodfs_table;
+ size = ARRAY_SIZE(country_code_nodfs_table);
+ } else {
+ locales = country_code_custom_table;
+ size = ARRAY_SIZE(country_code_custom_table);
+ }
+
+ if (size == 0)
+ return NULL;
+
+ if (!country_iso_code || country_iso_code[0] == 0)
+ country_iso_code = flounder_country_code;
+
+ for (i = 0; i < size; i++) {
+ if (strcmp(country_iso_code, locales[i].iso_abbrev) == 0)
+ return &locales[i];
+ }
+ /* if no country code matched return first universal code */
+ return &locales[0];
+}
+
+static int __init flounder_wifi_init(void)
+{
+ int rc;
+
+ rc = gpio_request(FLOUNDER_WLAN_PWR, "wlan_power");
+ if (rc)
+ pr_err("WLAN_PWR gpio request failed:%d\n", rc);
+ rc = gpio_request(FLOUNDER_WLAN_WOW, "bcmsdh_sdmmc");
+ if (rc)
+ pr_err("WLAN_WOW gpio request failed:%d\n", rc);
+
+ rc = gpio_direction_output(FLOUNDER_WLAN_PWR, 0);
+ if (rc)
+ pr_err("WLAN_PWR gpio direction configuration failed:%d\n", rc);
+ rc = gpio_direction_input(FLOUNDER_WLAN_WOW);
+ if (rc)
+ pr_err("WLAN_WOW gpio direction configuration failed:%d\n", rc);
+
+ wifi_resource[0].start = wifi_resource[0].end =
+ gpio_to_irq(FLOUNDER_WLAN_WOW);
+
+ platform_device_register(&flounder_wifi_device);
+
+ mrvl_wifi_resource[0].start = mrvl_wifi_resource[0].end =
+ gpio_to_irq(FLOUNDER_WLAN_WOW);
+ platform_device_register(&marvell_wifi_device);
+
+ return 0;
+}
+
+int __init flounder_sdhci_init(void)
+{
+ int nominal_core_mv;
+ int min_vcore_override_mv;
+ int boot_vcore_mv;
+ u32 speedo;
+
+ nominal_core_mv =
+ tegra_dvfs_rail_get_nominal_millivolts(tegra_core_rail);
+ if (nominal_core_mv) {
+ tegra_sdhci_platform_data0.nominal_vcore_mv = nominal_core_mv;
+ tegra_sdhci_platform_data3.nominal_vcore_mv = nominal_core_mv;
+ }
+ min_vcore_override_mv =
+ tegra_dvfs_rail_get_override_floor(tegra_core_rail);
+ if (min_vcore_override_mv) {
+ tegra_sdhci_platform_data0.min_vcore_override_mv =
+ min_vcore_override_mv;
+ tegra_sdhci_platform_data3.min_vcore_override_mv =
+ min_vcore_override_mv;
+ }
+ boot_vcore_mv = tegra_dvfs_rail_get_boot_level(tegra_core_rail);
+ if (boot_vcore_mv) {
+ tegra_sdhci_platform_data0.boot_vcore_mv = boot_vcore_mv;
+ tegra_sdhci_platform_data3.boot_vcore_mv = boot_vcore_mv;
+ }
+
+ tegra_sdhci_platform_data0.max_clk_limit = 204000000;
+
+ speedo = tegra_fuse_readl(FUSE_SOC_SPEEDO_0);
+ tegra_sdhci_platform_data0.cpu_speedo = speedo;
+ tegra_sdhci_platform_data3.cpu_speedo = speedo;
+
+ tegra_sdhci_platform_data3.max_clk_limit = 200000000;
+
+ platform_device_register(&tegra_sdhci_device3);
+ platform_device_register(&tegra_sdhci_device0);
+ flounder_wifi_init();
+
+ return 0;
+}
diff --git a/arch/arm/mach-tegra/board-flounder-sensors.c b/arch/arm/mach-tegra/board-flounder-sensors.c
new file mode 100644
index 0000000..4725e5f
--- /dev/null
+++ b/arch/arm/mach-tegra/board-flounder-sensors.c
@@ -0,0 +1,1079 @@
+/*
+ * arch/arm/mach-tegra/board-flounder-sensors.c
+ *
+ * Copyright (c) 2013, NVIDIA CORPORATION. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <asm/atomic.h>
+#include <linux/i2c.h>
+#include <linux/gpio.h>
+#include <linux/mpu.h>
+#include <linux/delay.h>
+#include <linux/err.h>
+#include <linux/nct1008.h>
+#include <linux/of_platform.h>
+#include <linux/pid_thermal_gov.h>
+#include <linux/tegra-fuse.h>
+#include <mach/edp.h>
+#include <mach/pinmux-t12.h>
+#include <mach/pinmux.h>
+#include <mach/io_dpd.h>
+#include <media/camera.h>
+#include <media/ar0261.h>
+#include <media/imx135.h>
+#include <media/dw9718.h>
+#include <media/as364x.h>
+#include <media/ov5693.h>
+#include <media/ov7695.h>
+#include <media/mt9m114.h>
+#include <media/ad5823.h>
+#include <media/max77387.h>
+#include <media/imx219.h>
+#include <media/ov9760.h>
+#include <media/drv201.h>
+#include <media/tps61310.h>
+
+#include <linux/platform_device.h>
+#include <linux/input/cy8c_sar.h>
+#include <media/soc_camera.h>
+#include <media/soc_camera_platform.h>
+#include <media/tegra_v4l2_camera.h>
+#include <linux/generic_adc_thermal.h>
+#include <mach/board_htc.h>
+
+#include "cpu-tegra.h"
+#include "devices.h"
+#include "board.h"
+#include "board-common.h"
+#include "board-flounder.h"
+#include "tegra-board-id.h"
+#ifdef CONFIG_THERMAL_GOV_ADAPTIVE_SKIN
+#include <linux/adaptive_skin.h>
+#endif
+
+int cy8c_sar1_reset(void)
+{
+ pr_debug("[SAR] %s Enter\n", __func__);
+ gpio_set_value_cansleep(TEGRA_GPIO_PG6, 1);
+ msleep(5);
+ gpio_set_value_cansleep(TEGRA_GPIO_PG6, 0);
+ msleep(50); /* wait for the chip reset to finish */
+ return 0;
+}
+
+int cy8c_sar_reset(void)
+{
+ pr_debug("[SAR] %s Enter\n", __func__);
+ gpio_set_value_cansleep(TEGRA_GPIO_PG7, 1);
+ msleep(5);
+ gpio_set_value_cansleep(TEGRA_GPIO_PG7, 0);
+ msleep(50); /* wait for the chip reset to finish */
+ return 0;
+}
+
+int cy8c_sar_powerdown(int activate)
+{
+ int gpio = TEGRA_GPIO_PO5;
+ int ret = 0;
+
+ if (!is_mdm_modem()) {
+ pr_debug("[SAR]%s:!is_mdm_modem()\n", __func__);
+ return ret;
+ }
+
+ if (activate) {
+ pr_debug("[SAR]:%s:gpio high,activate=%d\n",
+ __func__, activate);
+ ret = gpio_direction_output(gpio, 1);
+ if (ret < 0)
+ pr_debug("[SAR]%s: calling gpio_free(sar_modem)\n",
+ __func__);
+ } else {
+ pr_debug("[SAR]:%s:gpio low,activate=%d\n", __func__, activate);
+ ret = gpio_direction_output(gpio, 0);
+ if (ret < 0)
+ pr_debug("[SAR]%s: calling gpio_free(sar_modem)\n",
+ __func__);
+ }
+ return ret;
+}
+
+static struct i2c_board_info flounder_i2c_board_info_cm32181[] = {
+ {
+ I2C_BOARD_INFO("cm32181", 0x48),
+ },
+};
+
+struct cy8c_i2c_sar_platform_data sar1_cy8c_data[] = {
+ {
+ .gpio_irq = TEGRA_GPIO_PCC5,
+ .reset = cy8c_sar1_reset,
+ .position_id = 1,
+ .bl_addr = 0x61,
+ .ap_addr = 0x5d,
+ .powerdown = cy8c_sar_powerdown,
+ },
+};
+
+struct cy8c_i2c_sar_platform_data sar_cy8c_data[] = {
+ {
+ .gpio_irq = TEGRA_GPIO_PC7,
+ .reset = cy8c_sar_reset,
+ .position_id = 0,
+ .bl_addr = 0x60,
+ .ap_addr = 0x5c,
+ .powerdown = cy8c_sar_powerdown,
+ },
+};
+
+struct i2c_board_info flounder_i2c_board_info_cypress_sar[] = {
+ {
+ I2C_BOARD_INFO("CYPRESS_SAR", 0xB8 >> 1),
+ .platform_data = &sar_cy8c_data,
+ .irq = -1,
+ },
+};
+
+struct i2c_board_info flounder_i2c_board_info_cypress_sar1[] = {
+ {
+ I2C_BOARD_INFO("CYPRESS_SAR1", 0xBA >> 1),
+ .platform_data = &sar1_cy8c_data,
+ .irq = -1,
+ },
+};
+
+/*
+ * Soc Camera platform driver for testing
+ */
+#if IS_ENABLED(CONFIG_SOC_CAMERA_PLATFORM)
+static int flounder_soc_camera_add(struct soc_camera_device *icd);
+static void flounder_soc_camera_del(struct soc_camera_device *icd);
+
+static int flounder_soc_camera_set_capture(struct soc_camera_platform_info *info,
+ int enable)
+{
+ /* TODO: probably add a clk operation here */
+ return 0; /* camera sensor always enabled */
+}
+
+static struct soc_camera_platform_info flounder_soc_camera_info = {
+ .format_name = "RGB4",
+ .format_depth = 32,
+ .format = {
+ .code = V4L2_MBUS_FMT_RGBA8888_4X8_LE,
+ .colorspace = V4L2_COLORSPACE_SRGB,
+ .field = V4L2_FIELD_NONE,
+ .width = 1280,
+ .height = 720,
+ },
+ .set_capture = flounder_soc_camera_set_capture,
+};
+
+static struct tegra_camera_platform_data flounder_camera_platform_data = {
+ .flip_v = 0,
+ .flip_h = 0,
+ .port = TEGRA_CAMERA_PORT_CSI_A,
+ .lanes = 4,
+ .continuous_clk = 0,
+};
+
+static struct soc_camera_link flounder_soc_camera_link = {
+ .bus_id = 0, /* This must match the .id of tegra_vi01_device */
+ .add_device = flounder_soc_camera_add,
+ .del_device = flounder_soc_camera_del,
+ .module_name = "soc_camera_platform",
+ .priv = &flounder_camera_platform_data,
+ .dev_priv = &flounder_soc_camera_info,
+};
+
+static struct platform_device *flounder_pdev;
+
+static void flounder_soc_camera_release(struct device *dev)
+{
+ soc_camera_platform_release(&flounder_pdev);
+}
+
+static int flounder_soc_camera_add(struct soc_camera_device *icd)
+{
+ return soc_camera_platform_add(icd, &flounder_pdev,
+ &flounder_soc_camera_link,
+ flounder_soc_camera_release, 0);
+}
+
+static void flounder_soc_camera_del(struct soc_camera_device *icd)
+{
+ soc_camera_platform_del(icd, flounder_pdev, &flounder_soc_camera_link);
+}
+
+static struct platform_device flounder_soc_camera_device = {
+ .name = "soc-camera-pdrv",
+ .id = 0,
+ .dev = {
+ .platform_data = &flounder_soc_camera_link,
+ },
+};
+#endif
+
+static struct tegra_io_dpd csia_io = {
+ .name = "CSIA",
+ .io_dpd_reg_index = 0,
+ .io_dpd_bit = 0,
+};
+
+static struct tegra_io_dpd csib_io = {
+ .name = "CSIB",
+ .io_dpd_reg_index = 0,
+ .io_dpd_bit = 1,
+};
+
+static struct tegra_io_dpd csie_io = {
+ .name = "CSIE",
+ .io_dpd_reg_index = 1,
+ .io_dpd_bit = 12,
+};
+
+static atomic_t shared_gpios_refcnt = ATOMIC_INIT(0);
+
+static void flounder_enable_shared_gpios(void)
+{
+ if (1 == atomic_add_return(1, &shared_gpios_refcnt)) {
+ gpio_set_value(CAM_VCM2V85_EN, 1);
+ usleep_range(100, 120);
+ gpio_set_value(CAM_1V2_EN, 1);
+ gpio_set_value(CAM_A2V85_EN, 1);
+ gpio_set_value(CAM_1V8_EN, 1);
+ pr_debug("%s\n", __func__);
+ }
+}
+
+static void flounder_disable_shared_gpios(void)
+{
+ if (atomic_dec_and_test(&shared_gpios_refcnt)) {
+ gpio_set_value(CAM_1V8_EN, 0);
+ gpio_set_value(CAM_A2V85_EN, 0);
+ gpio_set_value(CAM_1V2_EN, 0);
+ gpio_set_value(CAM_VCM2V85_EN, 0);
+ pr_debug("%s\n", __func__);
+ }
+}
+
+static int flounder_imx219_power_on(struct imx219_power_rail *pw)
+{
+ /* disable CSIA/B IOs DPD mode to turn on camera for flounder */
+ tegra_io_dpd_disable(&csia_io);
+ tegra_io_dpd_disable(&csib_io);
+
+ gpio_set_value(CAM_PWDN, 0);
+
+ flounder_enable_shared_gpios();
+
+ usleep_range(1, 2);
+ gpio_set_value(CAM_PWDN, 1);
+
+ usleep_range(300, 310);
+ pr_debug("%s\n", __func__);
+ return 1;
+}
+
+static int flounder_imx219_power_off(struct imx219_power_rail *pw)
+{
+ gpio_set_value(CAM_PWDN, 0);
+ usleep_range(100, 120);
+ pr_debug("%s\n", __func__);
+
+ flounder_disable_shared_gpios();
+
+ /* put CSIA/B IOs into DPD mode to save additional power for flounder */
+ tegra_io_dpd_enable(&csia_io);
+ tegra_io_dpd_enable(&csib_io);
+ return 0;
+}
+
+static struct imx219_platform_data flounder_imx219_pdata = {
+ .power_on = flounder_imx219_power_on,
+ .power_off = flounder_imx219_power_off,
+};
+
+static int flounder_ov9760_power_on(struct ov9760_power_rail *pw)
+{
+ /* disable CSIE IO DPD mode to turn on camera for flounder */
+ tegra_io_dpd_disable(&csie_io);
+
+ gpio_set_value(CAM2_RST, 0);
+
+ flounder_enable_shared_gpios();
+
+ usleep_range(100, 120);
+ gpio_set_value(CAM2_RST, 1);
+ pr_debug("%s\n", __func__);
+
+ return 1;
+}
+
+static int flounder_ov9760_power_off(struct ov9760_power_rail *pw)
+{
+ gpio_set_value(CAM2_RST, 0);
+ usleep_range(100, 120);
+ pr_debug("%s\n", __func__);
+
+ flounder_disable_shared_gpios();
+
+ /* put CSIE IOs into DPD mode to save additional power for flounder */
+ tegra_io_dpd_enable(&csie_io);
+
+ return 0;
+}
+
+static struct ov9760_platform_data flounder_ov9760_pdata = {
+ .power_on = flounder_ov9760_power_on,
+ .power_off = flounder_ov9760_power_off,
+ .mclk_name = "mclk2",
+};
+
+static int flounder_drv201_power_on(struct drv201_power_rail *pw)
+{
+ gpio_set_value(CAM_VCM_PWDN, 0);
+
+ flounder_enable_shared_gpios();
+
+ gpio_set_value(CAM_VCM_PWDN, 1);
+ usleep_range(100, 120);
+ pr_debug("%s\n", __func__);
+
+ return 1;
+}
+
+static int flounder_drv201_power_off(struct drv201_power_rail *pw)
+{
+ gpio_set_value(CAM_VCM_PWDN, 0);
+ usleep_range(100, 120);
+ pr_debug("%s\n", __func__);
+
+ flounder_disable_shared_gpios();
+
+ return 1;
+}
+
+static struct nvc_focus_cap flounder_drv201_cap = {
+ .version = NVC_FOCUS_CAP_VER2,
+ .settle_time = 15,
+ .focus_macro = 810,
+ .focus_infinity = 50,
+ .focus_hyper = 50,
+};
+
+static struct drv201_platform_data flounder_drv201_pdata = {
+ .cfg = 0,
+ .num = 0,
+ .sync = 0,
+ .dev_name = "focuser",
+ .cap = &flounder_drv201_cap,
+ .power_on = flounder_drv201_power_on,
+ .power_off = flounder_drv201_power_off,
+};
+
+static struct nvc_torch_pin_state flounder_tps61310_pinstate = {
+ .mask = 1 << (CAM_FLASH_STROBE - TEGRA_GPIO_PBB0), /* VGP4 */
+ .values = 1 << (CAM_FLASH_STROBE - TEGRA_GPIO_PBB0),
+};
+
+static struct tps61310_platform_data flounder_tps61310_pdata = {
+ .dev_name = "torch",
+ .pinstate = &flounder_tps61310_pinstate,
+};
+
+static struct camera_data_blob flounder_camera_lut[] = {
+ {"flounder_imx219_pdata", &flounder_imx219_pdata},
+ {"flounder_drv201_pdata", &flounder_drv201_pdata},
+ {"flounder_tps61310_pdata", &flounder_tps61310_pdata},
+ {"flounder_ov9760_pdata", &flounder_ov9760_pdata},
+ {},
+};
+
+void __init flounder_camera_auxdata(void *data)
+{
+ struct of_dev_auxdata *aux_lut = data;
+ while (aux_lut && aux_lut->compatible) {
+ if (!strcmp(aux_lut->compatible, "nvidia,tegra124-camera")) {
+ pr_info("%s: update camera lookup table.\n", __func__);
+ aux_lut->platform_data = flounder_camera_lut;
+ }
+ aux_lut++;
+ }
+}
+
+static int flounder_camera_init(void)
+{
+ pr_debug("%s: ++\n", __func__);
+
+ tegra_io_dpd_enable(&csia_io);
+ tegra_io_dpd_enable(&csib_io);
+ tegra_io_dpd_enable(&csie_io);
+ tegra_gpio_disable(TEGRA_GPIO_PBB0);
+ tegra_gpio_disable(TEGRA_GPIO_PCC0);
+
+#if IS_ENABLED(CONFIG_SOC_CAMERA_PLATFORM)
+ platform_device_register(&flounder_soc_camera_device);
+#endif
+ return 0;
+}
+
+static struct pid_thermal_gov_params cpu_pid_params = {
+ .max_err_temp = 4000,
+ .max_err_gain = 1000,
+
+ .gain_p = 1000,
+ .gain_d = 0,
+
+ .up_compensation = 15,
+ .down_compensation = 15,
+};
+
+static struct thermal_zone_params cpu_tzp = {
+ .governor_name = "pid_thermal_gov",
+ .governor_params = &cpu_pid_params,
+};
+
+static struct thermal_zone_params therm_est_activ_tzp = {
+ .governor_name = "step_wise"
+};
+
+static struct throttle_table cpu_throttle_table[] = {
+ /* CPU_THROT_LOW cannot be used by anything other than the CPU */
+ /* CPU, GPU, C2BUS, C3BUS, SCLK, EMC */
+ { { 2295000, NO_CAP, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 2269500, NO_CAP, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 2244000, NO_CAP, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 2218500, NO_CAP, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 2193000, NO_CAP, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 2167500, NO_CAP, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 2142000, NO_CAP, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 2116500, NO_CAP, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 2091000, NO_CAP, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 2065500, NO_CAP, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 2040000, NO_CAP, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 2014500, NO_CAP, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1989000, NO_CAP, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1963500, NO_CAP, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1938000, NO_CAP, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1912500, NO_CAP, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1887000, NO_CAP, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1861500, NO_CAP, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1836000, NO_CAP, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1810500, NO_CAP, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1785000, NO_CAP, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1759500, NO_CAP, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1734000, NO_CAP, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1708500, NO_CAP, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1683000, NO_CAP, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1657500, NO_CAP, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1632000, NO_CAP, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1606500, 790000, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1581000, 776000, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1555500, 762000, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1530000, 749000, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1504500, 735000, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1479000, 721000, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1453500, 707000, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1428000, 693000, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1402500, 679000, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1377000, 666000, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1351500, 652000, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1326000, 638000, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1300500, 624000, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1275000, 610000, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1249500, 596000, NO_CAP, NO_CAP, NO_CAP, NO_CAP } },
+ { { 1224000, 582000, NO_CAP, NO_CAP, NO_CAP, 792000 } },
+ { { 1198500, 569000, NO_CAP, NO_CAP, NO_CAP, 792000 } },
+ { { 1173000, 555000, NO_CAP, NO_CAP, 360000, 792000 } },
+ { { 1147500, 541000, NO_CAP, NO_CAP, 360000, 792000 } },
+ { { 1122000, 527000, NO_CAP, 684000, 360000, 792000 } },
+ { { 1096500, 513000, 444000, 684000, 360000, 792000 } },
+ { { 1071000, 499000, 444000, 684000, 360000, 792000 } },
+ { { 1045500, 486000, 444000, 684000, 360000, 792000 } },
+ { { 1020000, 472000, 444000, 684000, 324000, 792000 } },
+ { { 994500, 458000, 444000, 684000, 324000, 792000 } },
+ { { 969000, 444000, 444000, 600000, 324000, 792000 } },
+ { { 943500, 430000, 444000, 600000, 324000, 792000 } },
+ { { 918000, 416000, 396000, 600000, 324000, 792000 } },
+ { { 892500, 402000, 396000, 600000, 324000, 792000 } },
+ { { 867000, 389000, 396000, 600000, 324000, 792000 } },
+ { { 841500, 375000, 396000, 600000, 288000, 792000 } },
+ { { 816000, 361000, 396000, 600000, 288000, 792000 } },
+ { { 790500, 347000, 396000, 600000, 288000, 792000 } },
+ { { 765000, 333000, 396000, 504000, 288000, 792000 } },
+ { { 739500, 319000, 348000, 504000, 288000, 792000 } },
+ { { 714000, 306000, 348000, 504000, 288000, 624000 } },
+ { { 688500, 292000, 348000, 504000, 288000, 624000 } },
+ { { 663000, 278000, 348000, 504000, 288000, 624000 } },
+ { { 637500, 264000, 348000, 504000, 288000, 624000 } },
+ { { 612000, 250000, 348000, 504000, 252000, 624000 } },
+ { { 586500, 236000, 348000, 504000, 252000, 624000 } },
+ { { 561000, 222000, 348000, 420000, 252000, 624000 } },
+ { { 535500, 209000, 288000, 420000, 252000, 624000 } },
+ { { 510000, 195000, 288000, 420000, 252000, 624000 } },
+ { { 484500, 181000, 288000, 420000, 252000, 624000 } },
+ { { 459000, 167000, 288000, 420000, 252000, 624000 } },
+ { { 433500, 153000, 288000, 420000, 252000, 396000 } },
+ { { 408000, 139000, 288000, 420000, 252000, 396000 } },
+ { { 382500, 126000, 288000, 420000, 252000, 396000 } },
+ { { 357000, 112000, 288000, 420000, 252000, 396000 } },
+ { { 331500, 98000, 288000, 420000, 252000, 396000 } },
+ { { 306000, 84000, 288000, 420000, 252000, 396000 } },
+ { { 280500, 84000, 288000, 420000, 252000, 396000 } },
+ { { 255000, 84000, 288000, 420000, 252000, 396000 } },
+ { { 229500, 84000, 288000, 420000, 252000, 396000 } },
+ { { 204000, 84000, 288000, 420000, 252000, 396000 } },
+};
+
+static struct balanced_throttle cpu_throttle = {
+ .throt_tab_size = ARRAY_SIZE(cpu_throttle_table),
+ .throt_tab = cpu_throttle_table,
+};
+
+static struct throttle_table gpu_throttle_table[] = {
+ /* CPU_THROT_LOW cannot be used by anything other than the CPU */
+ /* CPU, GPU, C2BUS, C3BUS, SCLK, EMC */
+ { { 2295000, 782800, 480000, 756000, 384000, 924000 } },
+ { { 2269500, 772200, 480000, 756000, 384000, 924000 } },
+ { { 2244000, 761600, 480000, 756000, 384000, 924000 } },
+ { { 2218500, 751100, 480000, 756000, 384000, 924000 } },
+ { { 2193000, 740500, 480000, 756000, 384000, 924000 } },
+ { { 2167500, 729900, 480000, 756000, 384000, 924000 } },
+ { { 2142000, 719300, 480000, 756000, 384000, 924000 } },
+ { { 2116500, 708700, 480000, 756000, 384000, 924000 } },
+ { { 2091000, 698100, 480000, 756000, 384000, 924000 } },
+ { { 2065500, 687500, 480000, 756000, 384000, 924000 } },
+ { { 2040000, 676900, 480000, 756000, 384000, 924000 } },
+ { { 2014500, 666000, 480000, 756000, 384000, 924000 } },
+ { { 1989000, 656000, 480000, 756000, 384000, 924000 } },
+ { { 1963500, 645000, 480000, 756000, 384000, 924000 } },
+ { { 1938000, 635000, 480000, 756000, 384000, 924000 } },
+ { { 1912500, 624000, 480000, 756000, 384000, 924000 } },
+ { { 1887000, 613000, 480000, 756000, 384000, 924000 } },
+ { { 1861500, 603000, 480000, 756000, 384000, 924000 } },
+ { { 1836000, 592000, 480000, 756000, 384000, 924000 } },
+ { { 1810500, 582000, 480000, 756000, 384000, 924000 } },
+ { { 1785000, 571000, 480000, 756000, 384000, 924000 } },
+ { { 1759500, 560000, 480000, 756000, 384000, 924000 } },
+ { { 1734000, 550000, 480000, 756000, 384000, 924000 } },
+ { { 1708500, 539000, 480000, 756000, 384000, 924000 } },
+ { { 1683000, 529000, 480000, 756000, 384000, 924000 } },
+ { { 1657500, 518000, 480000, 756000, 384000, 924000 } },
+ { { 1632000, 508000, 480000, 756000, 384000, 924000 } },
+ { { 1606500, 497000, 480000, 756000, 384000, 924000 } },
+ { { 1581000, 486000, 480000, 756000, 384000, 924000 } },
+ { { 1555500, 476000, 480000, 756000, 384000, 924000 } },
+ { { 1530000, 465000, 480000, 756000, 384000, 924000 } },
+ { { 1504500, 455000, 480000, 756000, 384000, 924000 } },
+ { { 1479000, 444000, 480000, 756000, 384000, 924000 } },
+ { { 1453500, 433000, 480000, 756000, 384000, 924000 } },
+ { { 1428000, 423000, 480000, 756000, 384000, 924000 } },
+ { { 1402500, 412000, 480000, 756000, 384000, 924000 } },
+ { { 1377000, 402000, 480000, 756000, 384000, 924000 } },
+ { { 1351500, 391000, 480000, 756000, 384000, 924000 } },
+ { { 1326000, 380000, 480000, 756000, 384000, 924000 } },
+ { { 1300500, 370000, 480000, 756000, 384000, 924000 } },
+ { { 1275000, 359000, 480000, 756000, 384000, 924000 } },
+ { { 1249500, 349000, 480000, 756000, 384000, 924000 } },
+ { { 1224000, 338000, 480000, 756000, 384000, 792000 } },
+ { { 1198500, 328000, 480000, 756000, 384000, 792000 } },
+ { { 1173000, 317000, 480000, 756000, 360000, 792000 } },
+ { { 1147500, 306000, 480000, 756000, 360000, 792000 } },
+ { { 1122000, 296000, 480000, 684000, 360000, 792000 } },
+ { { 1096500, 285000, 444000, 684000, 360000, 792000 } },
+ { { 1071000, 275000, 444000, 684000, 360000, 792000 } },
+ { { 1045500, 264000, 444000, 684000, 360000, 792000 } },
+ { { 1020000, 253000, 444000, 684000, 324000, 792000 } },
+ { { 994500, 243000, 444000, 684000, 324000, 792000 } },
+ { { 969000, 232000, 444000, 600000, 324000, 792000 } },
+ { { 943500, 222000, 444000, 600000, 324000, 792000 } },
+ { { 918000, 211000, 396000, 600000, 324000, 792000 } },
+ { { 892500, 200000, 396000, 600000, 324000, 792000 } },
+ { { 867000, 190000, 396000, 600000, 324000, 792000 } },
+ { { 841500, 179000, 396000, 600000, 288000, 792000 } },
+ { { 816000, 169000, 396000, 600000, 288000, 792000 } },
+ { { 790500, 158000, 396000, 600000, 288000, 792000 } },
+ { { 765000, 148000, 396000, 504000, 288000, 792000 } },
+ { { 739500, 137000, 348000, 504000, 288000, 792000 } },
+ { { 714000, 126000, 348000, 504000, 288000, 624000 } },
+ { { 688500, 116000, 348000, 504000, 288000, 624000 } },
+ { { 663000, 105000, 348000, 504000, 288000, 624000 } },
+ { { 637500, 95000, 348000, 504000, 288000, 624000 } },
+ { { 612000, 84000, 348000, 504000, 252000, 624000 } },
+ { { 586500, 84000, 348000, 504000, 252000, 624000 } },
+ { { 561000, 84000, 348000, 420000, 252000, 624000 } },
+ { { 535500, 84000, 288000, 420000, 252000, 624000 } },
+ { { 510000, 84000, 288000, 420000, 252000, 624000 } },
+ { { 484500, 84000, 288000, 420000, 252000, 624000 } },
+ { { 459000, 84000, 288000, 420000, 252000, 624000 } },
+ { { 433500, 84000, 288000, 420000, 252000, 396000 } },
+ { { 408000, 84000, 288000, 420000, 252000, 396000 } },
+ { { 382500, 84000, 288000, 420000, 252000, 396000 } },
+ { { 357000, 84000, 288000, 420000, 252000, 396000 } },
+ { { 331500, 84000, 288000, 420000, 252000, 396000 } },
+ { { 306000, 84000, 288000, 420000, 252000, 396000 } },
+ { { 280500, 84000, 288000, 420000, 252000, 396000 } },
+ { { 255000, 84000, 288000, 420000, 252000, 396000 } },
+ { { 229500, 84000, 288000, 420000, 252000, 396000 } },
+ { { 204000, 84000, 288000, 420000, 252000, 396000 } },
+};
+
+static struct balanced_throttle gpu_throttle = {
+ .throt_tab_size = ARRAY_SIZE(gpu_throttle_table),
+ .throt_tab = gpu_throttle_table,
+};
+
+static int __init flounder_tj_throttle_init(void)
+{
+ if (of_machine_is_compatible("google,flounder") ||
+ of_machine_is_compatible("google,flounder_lte") ||
+ of_machine_is_compatible("google,flounder64") ||
+ of_machine_is_compatible("google,flounder64_lte")) {
+ balanced_throttle_register(&cpu_throttle, "cpu-balanced");
+ balanced_throttle_register(&gpu_throttle, "gpu-balanced");
+ }
+
+ return 0;
+}
+late_initcall(flounder_tj_throttle_init);
+
+#ifdef CONFIG_TEGRA_SKIN_THROTTLE
+static struct thermal_trip_info skin_trips[] = {
+ {
+ .cdev_type = "skin-balanced",
+ .trip_temp = 45000,
+ .trip_type = THERMAL_TRIP_PASSIVE,
+ .upper = THERMAL_NO_LIMIT,
+ .lower = THERMAL_NO_LIMIT,
+ .hysteresis = 0,
+ }
+};
+
+static struct therm_est_subdevice skin_devs_wifi[] = {
+ {
+ .dev_data = "Tdiode_tegra",
+ .coeffs = {
+ 3, 0, 0, 0,
+ -1, 0, 0, 0,
+ 0, 0, 0, 0,
+ 0, 0, 0, 0,
+ 0, 0, 0, -1
+ },
+ },
+ {
+ .dev_data = "Tboard_tegra",
+ .coeffs = {
+ 7, 6, 5, 3,
+ 3, 4, 4, 4,
+ 4, 4, 4, 4,
+ 3, 4, 3, 3,
+ 4, 6, 9, 15
+ },
+ },
+};
+
+static struct therm_est_subdevice skin_devs_lte[] = {
+ {
+ .dev_data = "Tdiode_tegra",
+ .coeffs = {
+ 2, 0, 0, 0,
+ 0, 0, 0, 0,
+ 0, 0, 0, 0,
+ 0, 0, 0, 0,
+ 0, 0, -1, -2
+ },
+ },
+ {
+ .dev_data = "Tboard_tegra",
+ .coeffs = {
+ 7, 5, 4, 3,
+ 3, 3, 4, 4,
+ 3, 3, 3, 3,
+ 4, 4, 3, 3,
+ 3, 5, 10, 16
+ },
+ },
+};
+
+#ifdef CONFIG_THERMAL_GOV_ADAPTIVE_SKIN
+static struct adaptive_skin_thermal_gov_params skin_astg_params = {
+ .tj_tran_threshold = 2000,
+ .tj_std_threshold = 3000,
+ .tj_std_fup_threshold = 5000,
+
+ .tskin_tran_threshold = 500,
+ .tskin_std_threshold = 1000,
+
+ .target_state_tdp = 12,
+};
+
+static struct thermal_zone_params skin_astg_tzp = {
+ .governor_name = "adaptive_skin",
+ .governor_params = &skin_astg_params,
+};
+#endif
+
+static struct pid_thermal_gov_params skin_pid_params = {
+ .max_err_temp = 4000,
+ .max_err_gain = 1000,
+
+ .gain_p = 1000,
+ .gain_d = 0,
+
+ .up_compensation = 15,
+ .down_compensation = 15,
+};
+
+static struct thermal_zone_params skin_tzp = {
+ .governor_name = "pid_thermal_gov",
+ .governor_params = &skin_pid_params,
+};
+
+static struct thermal_zone_params skin_step_wise_tzp = {
+ .governor_name = "step_wise",
+};
+
+static struct therm_est_data skin_data = {
+ .num_trips = ARRAY_SIZE(skin_trips),
+ .trips = skin_trips,
+ .polling_period = 1100,
+ .passive_delay = 15000,
+ .tc1 = 10,
+ .tc2 = 1,
+ .tzp = &skin_step_wise_tzp,
+};
+
+static struct throttle_table skin_throttle_table[] = {
+ /* CPU_THROT_LOW cannot be used by anything other than the CPU */
+ /* CPU, GPU, C2BUS, C3BUS, SCLK, EMC */
+ { { 2000000, 804000, 480000, 756000, NO_CAP, NO_CAP } },
+ { { 1900000, 756000, 480000, 648000, NO_CAP, NO_CAP } },
+ { { 1800000, 708000, 444000, 648000, NO_CAP, NO_CAP } },
+ { { 1700000, 648000, 444000, 600000, NO_CAP, NO_CAP } },
+ { { 1600000, 648000, 444000, 600000, NO_CAP, NO_CAP } },
+ { { 1500000, 612000, 444000, 600000, NO_CAP, NO_CAP } },
+ { { 1450000, 612000, 444000, 600000, NO_CAP, NO_CAP } },
+ { { 1400000, 540000, 444000, 600000, NO_CAP, NO_CAP } },
+ { { 1350000, 540000, 444000, 600000, NO_CAP, NO_CAP } },
+ { { 1300000, 540000, 444000, 600000, NO_CAP, NO_CAP } },
+ { { 1250000, 540000, 444000, 600000, NO_CAP, NO_CAP } },
+ { { 1200000, 540000, 444000, 600000, NO_CAP, NO_CAP } },
+ { { 1150000, 468000, 444000, 600000, 240000, NO_CAP } },
+ { { 1100000, 468000, 444000, 600000, 240000, NO_CAP } },
+ { { 1050000, 468000, 396000, 600000, 240000, NO_CAP } },
+ { { 1000000, 468000, 396000, 504000, 204000, NO_CAP } },
+ { { 975000, 468000, 396000, 504000, 204000, 792000 } },
+ { { 950000, 468000, 396000, 504000, 204000, 792000 } },
+ { { 925000, 468000, 396000, 504000, 204000, 792000 } },
+ { { 900000, 468000, 348000, 504000, 204000, 792000 } },
+ { { 875000, 468000, 348000, 504000, 136000, 600000 } },
+ { { 850000, 468000, 348000, 420000, 136000, 600000 } },
+ { { 825000, 396000, 348000, 420000, 136000, 600000 } },
+ { { 800000, 396000, 348000, 420000, 136000, 528000 } },
+ { { 775000, 396000, 348000, 420000, 136000, 528000 } },
+};
+
+static struct balanced_throttle skin_throttle = {
+ .throt_tab_size = ARRAY_SIZE(skin_throttle_table),
+ .throt_tab = skin_throttle_table,
+};
+
+static int __init flounder_skin_init(void)
+{
+ if (of_machine_is_compatible("google,flounder") ||
+ of_machine_is_compatible("google,flounder64")) {
+ /* turn on tskin only on XE (DVT2) and later revision */
+ if (flounder_get_hw_revision() >= FLOUNDER_REV_DVT2) {
+ skin_data.ndevs = ARRAY_SIZE(skin_devs_wifi);
+ skin_data.devs = skin_devs_wifi;
+ skin_data.toffset = -4746;
+#ifdef CONFIG_THERMAL_GOV_ADAPTIVE_SKIN
+ skin_data.tzp = &skin_astg_tzp;
+ skin_data.passive_delay = 6000;
+#endif
+
+ balanced_throttle_register(&skin_throttle,
+ "skin-balanced");
+ tegra_skin_therm_est_device.dev.platform_data =
+ &skin_data;
+ platform_device_register(&tegra_skin_therm_est_device);
+ }
+ } else if (of_machine_is_compatible("google,flounder_lte") ||
+ of_machine_is_compatible("google,flounder64_lte")) {
+ /* turn on tskin only on LTE XD (DVT1) and later revision */
+ if (flounder_get_hw_revision() >= FLOUNDER_REV_DVT1) {
+ skin_data.ndevs = ARRAY_SIZE(skin_devs_lte);
+ skin_data.devs = skin_devs_lte;
+ skin_data.toffset = -1625;
+#ifdef CONFIG_THERMAL_GOV_ADAPTIVE_SKIN
+ skin_data.tzp = &skin_astg_tzp;
+ skin_data.passive_delay = 6000;
+#endif
+
+ balanced_throttle_register(&skin_throttle,
+ "skin-balanced");
+ tegra_skin_therm_est_device.dev.platform_data =
+ &skin_data;
+ platform_device_register(&tegra_skin_therm_est_device);
+ }
+ }
+ return 0;
+}
+late_initcall(flounder_skin_init);
+#endif
+
+static struct nct1008_platform_data flounder_nct72_pdata = {
+ .loc_name = "tegra",
+ .supported_hwrev = true,
+ .conv_rate = 0x06, /* 4Hz conversion rate */
+ .offset = 0,
+ .extended_range = true,
+
+ .sensors = {
+ [LOC] = {
+ .tzp = &therm_est_activ_tzp,
+ .shutdown_limit = 120, /* C */
+ .passive_delay = 1000,
+ .num_trips = 1,
+ .trips = {
+ {
+ .cdev_type = "therm_est_activ",
+ .trip_temp = 26000,
+ .trip_type = THERMAL_TRIP_ACTIVE,
+ .hysteresis = 1000,
+ .upper = THERMAL_NO_LIMIT,
+ .lower = THERMAL_NO_LIMIT,
+ .mask = 1,
+ },
+ },
+ },
+ [EXT] = {
+ .tzp = &cpu_tzp,
+ .shutdown_limit = 95, /* C */
+ .passive_delay = 1000,
+ .num_trips = 2,
+ .trips = {
+ {
+ .cdev_type = "shutdown_warning",
+ .trip_temp = 93000,
+ .trip_type = THERMAL_TRIP_PASSIVE,
+ .upper = THERMAL_NO_LIMIT,
+ .lower = THERMAL_NO_LIMIT,
+ .mask = 0,
+ },
+ {
+ .cdev_type = "cpu-balanced",
+ .trip_temp = 83000,
+ .trip_type = THERMAL_TRIP_PASSIVE,
+ .upper = THERMAL_NO_LIMIT,
+ .lower = THERMAL_NO_LIMIT,
+ .hysteresis = 1000,
+ .mask = 1,
+ },
+ }
+ }
+ }
+};
+
+#ifdef CONFIG_TEGRA_SKIN_THROTTLE
+static struct nct1008_platform_data flounder_nct72_tskin_pdata = {
+ .loc_name = "skin",
+
+ .supported_hwrev = true,
+ .conv_rate = 0x06, /* 4Hz conversion rate */
+ .offset = 0,
+ .extended_range = true,
+
+ .sensors = {
+ [LOC] = {
+ .shutdown_limit = 95, /* C */
+ .num_trips = 0,
+ .tzp = NULL,
+ },
+ [EXT] = {
+ .shutdown_limit = 85, /* C */
+ .passive_delay = 10000,
+ .polling_delay = 1000,
+ .tzp = &skin_tzp,
+ .num_trips = 1,
+ .trips = {
+ {
+ .cdev_type = "skin-balanced",
+ .trip_temp = 50000,
+ .trip_type = THERMAL_TRIP_PASSIVE,
+ .upper = THERMAL_NO_LIMIT,
+ .lower = THERMAL_NO_LIMIT,
+ .mask = 1,
+ },
+ },
+ }
+ }
+};
+#endif
+
+static struct i2c_board_info flounder_i2c_nct72_board_info[] = {
+ {
+ I2C_BOARD_INFO("nct72", 0x4c),
+ .platform_data = &flounder_nct72_pdata,
+ .irq = -1,
+ },
+#ifdef CONFIG_TEGRA_SKIN_THROTTLE
+ {
+ I2C_BOARD_INFO("nct72", 0x4d),
+ .platform_data = &flounder_nct72_tskin_pdata,
+ .irq = -1,
+ }
+#endif
+};
+
+static int flounder_nct72_init(void)
+{
+ s32 base_cp, shft_cp;
+ u32 base_ft, shft_ft;
+ int nct72_port = TEGRA_GPIO_PI6;
+ int ret = 0;
+ int i;
+ struct thermal_trip_info *trip_state;
+
+ /* raise the NCT's thresholds if the soctherm CP/FT fuses are OK */
+ if ((tegra_fuse_calib_base_get_cp(&base_cp, &shft_cp) >= 0) &&
+ (tegra_fuse_calib_base_get_ft(&base_ft, &shft_ft) >= 0)) {
+ flounder_nct72_pdata.sensors[EXT].shutdown_limit += 20;
+ for (i = 0; i < flounder_nct72_pdata.sensors[EXT].num_trips;
+ i++) {
+ trip_state = &flounder_nct72_pdata.sensors[EXT].trips[i];
+ if (!strncmp(trip_state->cdev_type, "cpu-balanced",
+ THERMAL_NAME_LENGTH)) {
+ trip_state->cdev_type = "_none_";
+ break;
+ }
+ }
+ } else {
+ tegra_platform_edp_init(
+ flounder_nct72_pdata.sensors[EXT].trips,
+ &flounder_nct72_pdata.sensors[EXT].num_trips,
+ 12000); /* edp temperature margin */
+ tegra_add_cpu_vmax_trips(
+ flounder_nct72_pdata.sensors[EXT].trips,
+ &flounder_nct72_pdata.sensors[EXT].num_trips);
+ tegra_add_tgpu_trips(
+ flounder_nct72_pdata.sensors[EXT].trips,
+ &flounder_nct72_pdata.sensors[EXT].num_trips);
+ tegra_add_vc_trips(
+ flounder_nct72_pdata.sensors[EXT].trips,
+ &flounder_nct72_pdata.sensors[EXT].num_trips);
+ tegra_add_core_vmax_trips(
+ flounder_nct72_pdata.sensors[EXT].trips,
+ &flounder_nct72_pdata.sensors[EXT].num_trips);
+ }
+
+ tegra_add_all_vmin_trips(flounder_nct72_pdata.sensors[EXT].trips,
+ &flounder_nct72_pdata.sensors[EXT].num_trips);
+
+ flounder_i2c_nct72_board_info[0].irq = gpio_to_irq(nct72_port);
+
+ ret = gpio_request(nct72_port, "temp_alert");
+ if (ret < 0)
+ return ret;
+
+ ret = gpio_direction_input(nct72_port);
+ if (ret < 0) {
+ pr_info("%s: calling gpio_free(nct72_port)", __func__);
+ gpio_free(nct72_port);
+ }
+
+ i2c_register_board_info(0, flounder_i2c_nct72_board_info,
+ ARRAY_SIZE(flounder_i2c_nct72_board_info));
+
+ return ret;
+}
+
+static int powerdown_gpio_init(void)
+{
+ int ret = 0;
+ static int done;
+ if (!is_mdm_modem()) {
+ pr_debug("[SAR]%s:!is_mdm_modem()\n", __func__);
+ return ret;
+ }
+
+ if (done == 0) {
+ if (!gpio_request(TEGRA_GPIO_PO5, "sar_modem")) {
+ pr_debug("[SAR]%s:gpio_request success\n", __func__);
+ ret = gpio_direction_output(TEGRA_GPIO_PO5, 0);
+ if (ret < 0) {
+ pr_debug(
+ "[SAR]%s: calling gpio_free(sar_modem)",
+ __func__);
+ gpio_free(TEGRA_GPIO_PO5);
+ }
+ done = 1;
+ }
+ }
+ return ret;
+}
+
+static int flounder_sar_init(void)
+{
+ int sar_intr = TEGRA_GPIO_PC7;
+ int ret;
+ pr_info("%s: GPIO pin:%d\n", __func__, sar_intr);
+ flounder_i2c_board_info_cypress_sar[0].irq = gpio_to_irq(sar_intr);
+ ret = gpio_request(sar_intr, "sar_interrupt");
+ if (ret < 0)
+ return ret;
+ ret = gpio_direction_input(sar_intr);
+ if (ret < 0) {
+ pr_info("%s: calling gpio_free(sar_intr)", __func__);
+ gpio_free(sar_intr);
+ }
+ powerdown_gpio_init();
+ i2c_register_board_info(1, flounder_i2c_board_info_cypress_sar,
+ ARRAY_SIZE(flounder_i2c_board_info_cypress_sar));
+ return 0;
+}
+
+static int flounder_sar1_init(void)
+{
+ int sar1_intr = TEGRA_GPIO_PCC5;
+ int ret;
+ pr_info("%s: GPIO pin:%d\n", __func__, sar1_intr);
+ flounder_i2c_board_info_cypress_sar1[0].irq = gpio_to_irq(sar1_intr);
+ ret = gpio_request(sar1_intr, "sar1_interrupt");
+ if (ret < 0)
+ return ret;
+ ret = gpio_direction_input(sar1_intr);
+ if (ret < 0) {
+ pr_info("%s: calling gpio_free(sar1_intr)", __func__);
+ gpio_free(sar1_intr);
+ }
+ powerdown_gpio_init();
+ i2c_register_board_info(1, flounder_i2c_board_info_cypress_sar1,
+ ARRAY_SIZE(flounder_i2c_board_info_cypress_sar1));
+ return 0;
+}
+
+int __init flounder_sensors_init(void)
+{
+ flounder_camera_init();
+ flounder_nct72_init();
+ flounder_sar_init();
+ flounder_sar1_init();
+
+ i2c_register_board_info(0, flounder_i2c_board_info_cm32181,
+ ARRAY_SIZE(flounder_i2c_board_info_cm32181));
+
+ return 0;
+}
diff --git a/arch/arm/mach-tegra/board-flounder-sysedp.c b/arch/arm/mach-tegra/board-flounder-sysedp.c
new file mode 100644
index 0000000..c2fb513
--- /dev/null
+++ b/arch/arm/mach-tegra/board-flounder-sysedp.c
@@ -0,0 +1,141 @@
+/*
+ * Copyright (c) 2013-2014, NVIDIA Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#include <linux/sysedp.h>
+#include <linux/platform_device.h>
+#include <linux/platform_data/tegra_edp.h>
+#include <linux/power_supply.h>
+#include <mach/edp.h>
+#include <linux/interrupt.h>
+#include "board-flounder.h"
+#include "board.h"
+#include "board-panel.h"
+#include "common.h"
+#include "tegra11_soctherm.h"
+
+/* --- EDP consumers data --- */
+static unsigned int imx219_states[] = { 0, 411 };
+static unsigned int ov9760_states[] = { 0, 180 };
+static unsigned int sdhci_states[] = { 0, 966 };
+static unsigned int speaker_states[] = { 0, 641 };
+static unsigned int wifi_states[] = { 0, 2318 };
+
+/* default - 19x12 8" panel*/
+static unsigned int backlight_default_states[] = {
+ 0, 110, 270, 430, 595, 765, 1055, 1340, 1655, 1970, 2380
+};
+
+static unsigned int tps61310_states[] = {
+ 0, 110, 3350
+};
+
+static unsigned int qcom_mdm_states[] = {
+ 8, 2570, 1363, 2570, 3287
+};
+
+static struct sysedp_consumer_data flounder_sysedp_consumer_data[] = {
+ SYSEDP_CONSUMER_DATA("imx219", imx219_states),
+ SYSEDP_CONSUMER_DATA("ov9760", ov9760_states),
+ SYSEDP_CONSUMER_DATA("speaker", speaker_states),
+ SYSEDP_CONSUMER_DATA("wifi", wifi_states),
+ SYSEDP_CONSUMER_DATA("tegra-dsi-backlight.0", backlight_default_states),
+ SYSEDP_CONSUMER_DATA("sdhci-tegra.0", sdhci_states),
+ SYSEDP_CONSUMER_DATA("sdhci-tegra.3", sdhci_states),
+ SYSEDP_CONSUMER_DATA("tps61310", tps61310_states),
+ SYSEDP_CONSUMER_DATA("qcom-mdm-9k", qcom_mdm_states),
+};
+
+static struct sysedp_platform_data flounder_sysedp_platform_data = {
+ .consumer_data = flounder_sysedp_consumer_data,
+ .consumer_data_size = ARRAY_SIZE(flounder_sysedp_consumer_data),
+ .margin = 0,
+ .min_budget = 4400,
+};
+
+static struct platform_device flounder_sysedp_device = {
+ .name = "sysedp",
+ .id = -1,
+ .dev = { .platform_data = &flounder_sysedp_platform_data }
+};
+
+void __init flounder_new_sysedp_init(void)
+{
+ int r;
+
+ r = platform_device_register(&flounder_sysedp_device);
+ WARN_ON(r);
+}
+
+static struct tegra_sysedp_platform_data flounder_sysedp_dynamic_capping_platdata = {
+ .core_gain = 125,
+ .init_req_watts = 20000,
+ .pthrot_ratio = 75,
+ .cap_method = TEGRA_SYSEDP_CAP_METHOD_DIRECT,
+};
+
+static struct platform_device flounder_sysedp_dynamic_capping = {
+ .name = "sysedp_dynamic_capping",
+ .id = -1,
+ .dev = { .platform_data = &flounder_sysedp_dynamic_capping_platdata }
+};
+
+struct sysedp_reactive_capping_platform_data flounder_battery_oc4_platdata = {
+ .max_capping_mw = 15000,
+ .step_alarm_mw = 1000,
+ .step_relax_mw = 500,
+ .relax_ms = 250,
+ .sysedpc = {
+ .name = "battery_oc4"
+ },
+ .irq = TEGRA_SOC_OC_IRQ_BASE + TEGRA_SOC_OC_IRQ_4,
+ .irq_flags = IRQF_ONESHOT | IRQF_TRIGGER_FALLING,
+};
+
+static struct platform_device flounder_sysedp_reactive_capping_oc4 = {
+ .name = "sysedp_reactive_capping",
+ .id = 1,
+ .dev = { .platform_data = &flounder_battery_oc4_platdata }
+};
+
+void __init flounder_sysedp_dynamic_capping_init(void)
+{
+ int r;
+ struct tegra_sysedp_corecap *corecap;
+ unsigned int corecap_size;
+
+ corecap = tegra_get_sysedp_corecap(&corecap_size);
+ if (!corecap) {
+ WARN_ON(1);
+ return;
+ }
+ flounder_sysedp_dynamic_capping_platdata.corecap = corecap;
+ flounder_sysedp_dynamic_capping_platdata.corecap_size = corecap_size;
+
+ flounder_sysedp_dynamic_capping_platdata.cpufreq_lim = tegra_get_system_edp_entries(
+ &flounder_sysedp_dynamic_capping_platdata.cpufreq_lim_size);
+ if (!flounder_sysedp_dynamic_capping_platdata.cpufreq_lim) {
+ WARN_ON(1);
+ return;
+ }
+
+ r = platform_device_register(&flounder_sysedp_dynamic_capping);
+ WARN_ON(r);
+
+ r = platform_device_register(&flounder_sysedp_reactive_capping_oc4);
+ WARN_ON(r);
+}
diff --git a/arch/arm/mach-tegra/board-flounder.c b/arch/arm/mach-tegra/board-flounder.c
new file mode 100644
index 0000000..7b837ba
--- /dev/null
+++ b/arch/arm/mach-tegra/board-flounder.c
@@ -0,0 +1,1476 @@
+/*
+ * arch/arm/mach-tegra/board-flounder.c
+ *
+ * Copyright (c) 2013, NVIDIA CORPORATION. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#include <linux/kernel.h>
+#include <linux/of.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/ctype.h>
+#include <linux/platform_device.h>
+#include <linux/clk.h>
+#include <linux/serial_8250.h>
+#include <linux/i2c.h>
+#include <linux/i2c/i2c-hid.h>
+#include <linux/dma-mapping.h>
+#include <linux/delay.h>
+#include <linux/i2c-tegra.h>
+#include <linux/gpio.h>
+#include <linux/input.h>
+#include <linux/platform_data/tegra_usb.h>
+#include <linux/spi/spi.h>
+#include <linux/spi/rm31080a_ts.h>
+#include <linux/maxim_sti.h>
+#include <linux/memblock.h>
+#include <linux/spi/spi-tegra.h>
+#include <linux/rfkill-gpio.h>
+#include <linux/nfc/bcm2079x.h>
+#include <linux/skbuff.h>
+#include <linux/ti_wilink_st.h>
+#include <linux/regulator/consumer.h>
+#include <linux/smb349-charger.h>
+#include <linux/max17048_battery.h>
+#include <linux/leds.h>
+#include <linux/i2c/at24.h>
+#include <linux/of_platform.h>
+#include <linux/platform_data/serial-tegra.h>
+#include <linux/edp.h>
+#include <linux/usb/tegra_usb_phy.h>
+#include <linux/mfd/palmas.h>
+#include <linux/clk/tegra.h>
+#include <media/tegra_dtv.h>
+#include <linux/clocksource.h>
+#include <linux/irqchip.h>
+#include <linux/irqchip/tegra.h>
+#include <linux/pci-tegra.h>
+#include <linux/max1187x.h>
+#include <linux/tegra-soc.h>
+
+#include <mach/irqs.h>
+#include <linux/tegra_fiq_debugger.h>
+
+#include <mach/pinmux.h>
+#include <mach/pinmux-t12.h>
+#include <mach/io_dpd.h>
+#include <mach/i2s.h>
+#include <mach/isomgr.h>
+#include <mach/tegra_asoc_pdata.h>
+#include <asm/mach-types.h>
+#include <asm/mach/arch.h>
+#include <mach/gpio-tegra.h>
+#include <mach/xusb.h>
+#include <linux/platform_data/tegra_ahci.h>
+#include <linux/htc_headset_mgr.h>
+#include <linux/htc_headset_pmic.h>
+#include <linux/htc_headset_one_wire.h>
+#include <../../../drivers/staging/android/timed_gpio.h>
+
+#include <mach/flounder-bdaddress.h>
+#include "bcm_gps_hostwake.h"
+#include "board.h"
+#include "board-flounder.h"
+#include "board-common.h"
+#include "board-touch-raydium.h"
+#include "board-touch-maxim_sti.h"
+#include "clock.h"
+#include "common.h"
+#include "devices.h"
+#include "gpio-names.h"
+#include "iomap.h"
+#include "pm.h"
+#include "tegra-board-id.h"
+#include "../../../sound/soc/codecs/rt5506.h"
+#include "../../../sound/soc/codecs/rt5677.h"
+#include "../../../sound/soc/codecs/tfa9895.h"
+#include "../../../sound/soc/codecs/rt5677-spi.h"
+
+static unsigned int flounder_hw_rev;
+static unsigned int flounder_eng_id;
+
+/* Parse "hw_revision=<rev>[.<eng_id>]" from the kernel command line */
+static int __init flounder_hw_revision(char *id)
+{
+ int ret;
+ char *hw_rev;
+
+ hw_rev = strsep(&id, ".");
+
+ ret = kstrtouint(hw_rev, 10, &flounder_hw_rev);
+ if (ret < 0) {
+ pr_err("Failed to parse flounder hw_revision=%s\n", hw_rev);
+ return ret;
+ }
+
+ if (id) {
+ ret = kstrtouint(id, 10, &flounder_eng_id);
+ if (ret < 0) {
+ pr_err("Failed to parse flounder eng_id=%s\n", id);
+ return ret;
+ }
+ }
+
+ pr_info("Flounder hardware revision = %d, engineer id = %d\n",
+ flounder_hw_rev, flounder_eng_id);
+
+ return 0;
+}
+early_param("hw_revision", flounder_hw_revision);
+
+int flounder_get_hw_revision(void)
+{
+ return flounder_hw_rev;
+}
+
+int flounder_get_eng_id(void)
+{
+ return flounder_eng_id;
+}
+
+struct aud_sfio_data {
+ const char *name;
+ int id;
+};
+
+static struct aud_sfio_data audio_sfio_pins[] = {
+ [0] = {
+ .name = "I2S1_LRCK",
+ .id = TEGRA_GPIO_PA2,
+ },
+ [1] = {
+ .name = "I2S1_SCLK",
+ .id = TEGRA_GPIO_PA3,
+ },
+ [2] = {
+ .name = "I2S1_SDATA_IN",
+ .id = TEGRA_GPIO_PA4,
+ },
+ [3] = {
+ .name = "I2S1_SDATA_OUT",
+ .id = TEGRA_GPIO_PA5,
+ },
+ [4] = {
+ .name = "I2S2_LRCK",
+ .id = TEGRA_GPIO_PP0,
+ },
+ [5] = {
+ .name = "I2S2_SDATA_IN",
+ .id = TEGRA_GPIO_PP1,
+ },
+ [6] = {
+ .name = "I2S2_SDATA_OUT",
+ .id = TEGRA_GPIO_PP2,
+ },
+ [7] = {
+ .name = "I2S2_SCLK",
+ .id = TEGRA_GPIO_PP3,
+ },
+ [8] = {
+ .name = "extperiph1_clk",
+ .id = TEGRA_GPIO_PW4,
+ },
+ [9] = {
+ .name = "SPI_MOSI",
+ .id = TEGRA_GPIO_PY0,
+ },
+ [10] = {
+ .name = "SPI_MISO",
+ .id = TEGRA_GPIO_PY1,
+ },
+ [11] = {
+ .name = "SPI_SCLK",
+ .id = TEGRA_GPIO_PY2,
+ },
+ [12] = {
+ .name = "SPI_CS",
+ .id = TEGRA_GPIO_PY3,
+ },
+};
+
+static struct resource flounder_bluedroid_pm_resources[] = {
+ [0] = {
+ .name = "shutdown_gpio",
+ .start = TEGRA_GPIO_PR1,
+ .end = TEGRA_GPIO_PR1,
+ .flags = IORESOURCE_IO,
+ },
+ [1] = {
+ .name = "host_wake",
+ .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHEDGE,
+ },
+ [2] = {
+ .name = "gpio_ext_wake",
+ .start = TEGRA_GPIO_PEE1,
+ .end = TEGRA_GPIO_PEE1,
+ .flags = IORESOURCE_IO,
+ },
+ [3] = {
+ .name = "gpio_host_wake",
+ .start = TEGRA_GPIO_PU6,
+ .end = TEGRA_GPIO_PU6,
+ .flags = IORESOURCE_IO,
+ },
+};
+
+static struct platform_device flounder_bluedroid_pm_device = {
+ .name = "bluedroid_pm",
+ .id = 0,
+ .num_resources = ARRAY_SIZE(flounder_bluedroid_pm_resources),
+ .resource = flounder_bluedroid_pm_resources,
+};
+
+static noinline void __init flounder_setup_bluedroid_pm(void)
+{
+ flounder_bluedroid_pm_resources[1].start =
+ flounder_bluedroid_pm_resources[1].end =
+ gpio_to_irq(TEGRA_GPIO_PU6);
+ platform_device_register(&flounder_bluedroid_pm_device);
+}
+
+static struct tfa9895_platform_data tfa9895_data = {
+ .tfa9895_power_enable = TEGRA_GPIO_PX5,
+};
+struct rt5677_priv rt5677_data = {
+ .vad_clock_en = TEGRA_GPIO_PX3,
+};
+
+static struct i2c_board_info __initdata rt5677_board_info = {
+ I2C_BOARD_INFO("rt5677", 0x2d),
+ .platform_data = &rt5677_data,
+};
+static struct i2c_board_info __initdata tfa9895_board_info = {
+ I2C_BOARD_INFO("tfa9895", 0x34),
+ .platform_data = &tfa9895_data,
+};
+static struct i2c_board_info __initdata tfa9895l_board_info = {
+ I2C_BOARD_INFO("tfa9895l", 0x35),
+};
+
+static struct bcm2079x_platform_data bcm2079x_pdata = {
+ .irq_gpio = TEGRA_GPIO_PR7,
+ .en_gpio = TEGRA_GPIO_PB1,
+ .wake_gpio = TEGRA_GPIO_PS1,
+};
+
+static struct i2c_board_info __initdata bcm2079x_board_info = {
+ I2C_BOARD_INFO("bcm2079x-i2c", 0x77),
+ .platform_data = &bcm2079x_pdata,
+};
+
+static __initdata struct tegra_clk_init_table flounder_clk_init_table[] = {
+ /* name parent rate enabled */
+ { "pll_m", NULL, 0, false},
+ { "hda", "pll_p", 108000000, false},
+ { "hda2codec_2x", "pll_p", 48000000, false},
+ { "pwm", "pll_p", 3187500, false},
+ { "i2s1", "pll_a_out0", 0, false},
+ { "i2s2", "pll_a_out0", 0, false},
+ { "i2s3", "pll_a_out0", 0, false},
+ { "i2s4", "pll_a_out0", 0, false},
+ { "spdif_out", "pll_a_out0", 0, false},
+ { "d_audio", "clk_m", 12000000, false},
+ { "dam0", "clk_m", 12000000, false},
+ { "dam1", "clk_m", 12000000, false},
+ { "dam2", "clk_m", 12000000, false},
+ { "audio1", "i2s1_sync", 0, false},
+ { "audio2", "i2s2_sync", 0, false},
+ { "audio3", "i2s3_sync", 0, false},
+ { "vi_sensor", "pll_p", 150000000, false},
+ { "vi_sensor2", "pll_p", 150000000, false},
+ { "cilab", "pll_p", 150000000, false},
+ { "cilcd", "pll_p", 150000000, false},
+ { "cile", "pll_p", 150000000, false},
+ { "i2c1", "pll_p", 3200000, false},
+ { "i2c2", "pll_p", 3200000, false},
+ { "i2c3", "pll_p", 3200000, false},
+ { "i2c4", "pll_p", 3200000, false},
+ { "i2c5", "pll_p", 3200000, false},
+ { "sbc1", "pll_p", 25000000, false},
+ { "sbc2", "pll_p", 25000000, false},
+ { "sbc3", "pll_p", 25000000, false},
+ { "sbc4", "pll_p", 25000000, false},
+ { "sbc5", "pll_p", 25000000, false},
+ { "sbc6", "pll_p", 25000000, false},
+ { "uarta", "pll_p", 408000000, false},
+ { "uartb", "pll_p", 408000000, false},
+ { "uartc", "pll_p", 408000000, false},
+ { "uartd", "pll_p", 408000000, false},
+ { NULL, NULL, 0, 0},
+};
+
+struct max1187x_board_config max1187x_config_data[] = {
+ {
+ .config_id = 0x03E7,
+ .chip_id = 0x75,
+ .major_ver = 1,
+ .minor_ver = 10,
+ .protocol_ver = 8,
+ .config_touch = {
+ 0x03E7, 0x141F, 0x0078, 0x001E, 0x0A01, 0x0100, 0x0302, 0x0504,
+ 0x0706, 0x0908, 0x0B0A, 0x0D0C, 0x0F0E, 0x1110, 0x1312, 0x1514,
+ 0x1716, 0x1918, 0x1B1A, 0x1D1C, 0xFF1E, 0xFFFF, 0xFFFF, 0xFFFF,
+ 0xFFFF, 0x0100, 0x0302, 0x0504, 0x0706, 0x0908, 0x0B0A, 0x0D0C,
+ 0x0F0E, 0x1110, 0x1312, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF,
+ 0xFFFF, 0x0EFF, 0x895F, 0xFF13, 0x0000, 0x1402, 0x04B0, 0x04B0,
+ 0x04B0, 0x0514, 0x00B4, 0x1A00, 0x0A08, 0x00B4, 0x0082, 0xFFFF,
+ 0xFFFF, 0x03E8, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF,
+ 0x5046
+ },
+ .config_cal = {
+ 0xFFF5, 0xFFEA, 0xFFDF, 0x001E, 0x001E, 0x001E, 0x001E, 0x001E,
+ 0x001E, 0x001E, 0x001E, 0x001E, 0x001E, 0x001E, 0x001E, 0x001E,
+ 0x001E, 0x001E, 0x001E, 0x001E, 0x001E, 0x001E, 0x001E, 0x001E,
+ 0x001E, 0x001E, 0x001E, 0x000F, 0x000F, 0x000F, 0x000F, 0x000F,
+ 0x000F, 0x000F, 0x000F, 0x000F, 0x000F, 0x000F, 0x000F, 0x000F,
+ 0x000F, 0x000F, 0x000F, 0x000F, 0x000F, 0x000F, 0x000F, 0x000F,
+ 0x000F, 0x000F, 0x000F, 0xFFFF, 0xFF1E, 0x00FF, 0x00FF, 0x00FF,
+ 0x00FF, 0x00FF, 0x00FF, 0x00FF, 0x00FF, 0x00FF, 0x000A, 0x0001,
+ 0x0001, 0x0002, 0x0002, 0x0003, 0x0001, 0x0001, 0x0002, 0x0002,
+ 0x0003, 0x0C26
+ },
+ .config_private = {
+ 0x0118, 0x0069, 0x0082, 0x0038, 0xF0FF, 0x1428, 0x001E, 0x0190,
+ 0x03B6, 0x00AA, 0x00C8, 0x0018, 0x04E2, 0x003C, 0x0000, 0xB232,
+ 0xFEFE, 0xFFFF, 0xFFFF, 0xFFFF, 0x00FF, 0xFF64, 0x4E21, 0x1403,
+ 0x78C8, 0x524C, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF,
+ 0xFFFF, 0xF22F
+ },
+ .config_lin_x = {
+ 0x002B, 0x3016, 0x644C, 0x8876, 0xAA99, 0xCBBB, 0xF0E0, 0x8437
+ },
+ .config_lin_y = {
+ 0x0030, 0x2E17, 0x664B, 0x8F7D, 0xAE9F, 0xCABC, 0xEADA, 0x8844
+ },
+ },
+ {
+ .config_id = 0,
+ .chip_id = 0x75,
+ .major_ver = 0,
+ .minor_ver = 0,
+ },
+};
+
+struct max1187x_pdata max1187x_platdata = {
+ .fw_config = max1187x_config_data,
+ .gpio_tirq = TEGRA_GPIO_PK2,
+ .gpio_reset = TEGRA_GPIO_PX6,
+ .num_fw_mappings = 1,
+ .fw_mapping[0] = {.chip_id = 0x75, .filename = "max11876.bin", .filesize = 0xC000, .file_codesize = 0xC000},
+ .defaults_allow = 1,
+ .default_config_id = 0x5800,
+ .default_chip_id = 0x75,
+ .i2c_words = 128,
+ .coordinate_settings = 0,
+ .panel_min_x = 0,
+ .panel_max_x = 3840,
+ .panel_min_y = 0,
+ .panel_max_y = 2600,
+ .lcd_x = 2560,
+ .lcd_y = 1600,
+ .num_rows = 32,
+ .num_cols = 20,
+ .input_protocol = MAX1187X_PROTOCOL_B,
+ .update_feature = MAX1187X_UPDATE_CONFIG,
+ .tw_mask = 0xF,
+ .button_code0 = KEY_HOME,
+ .button_code1 = KEY_BACK,
+ .button_code2 = KEY_RESERVED,
+ .button_code3 = KEY_RESERVED,
+ .report_mode = MAX1187X_REPORT_MODE_EXTEND,
+};
+
+static void flounder_i2c_init(void)
+{
+ i2c_register_board_info(1, &rt5677_board_info, 1);
+ i2c_register_board_info(1, &tfa9895_board_info, 1);
+ i2c_register_board_info(1, &tfa9895l_board_info, 1);
+
+ bcm2079x_board_info.irq = gpio_to_irq(TEGRA_GPIO_PR7);
+ i2c_register_board_info(0, &bcm2079x_board_info, 1);
+}
+
+#ifndef CONFIG_USE_OF
+static struct platform_device *flounder_uart_devices[] __initdata = {
+ &tegra_uarta_device,
+ &tegra_uartb_device,
+ &tegra_uartc_device,
+ &tegra_uartd_device,
+};
+
+static struct tegra_serial_platform_data flounder_uartb_pdata = {
+ .dma_req_selector = 9,
+ .modem_interrupt = false,
+};
+
+static struct tegra_serial_platform_data flounder_uartc_pdata = {
+ .dma_req_selector = 10,
+ .modem_interrupt = false,
+};
+
+static struct tegra_serial_platform_data flounder_uartd_pdata = {
+ .dma_req_selector = 19,
+ .modem_interrupt = false,
+};
+#endif
+static struct tegra_serial_platform_data flounder_uarta_pdata = {
+ .dma_req_selector = 8,
+ .modem_interrupt = false,
+};
+
+static struct tegra_asoc_platform_data flounder_audio_pdata_rt5677 = {
+ .gpio_hp_det = -1,
+ .gpio_ldo1_en = TEGRA_GPIO_PK0,
+ .gpio_ldo2_en = TEGRA_GPIO_PQ3,
+ .gpio_reset = TEGRA_GPIO_PX4,
+ .gpio_irq1 = TEGRA_GPIO_PS4,
+ .gpio_wakeup = TEGRA_GPIO_PO0,
+ .gpio_spkr_en = -1,
+ .gpio_spkr_ldo_en = TEGRA_GPIO_PX5,
+ .gpio_int_mic_en = TEGRA_GPIO_PV3,
+ .gpio_ext_mic_en = TEGRA_GPIO_PS3,
+ .gpio_hp_mute = -1,
+ .gpio_hp_en = TEGRA_GPIO_PX1,
+ .gpio_hp_ldo_en = -1,
+ .gpio_codec1 = -1,
+ .gpio_codec2 = -1,
+ .gpio_codec3 = -1,
+ .i2s_param[HIFI_CODEC] = {
+ .audio_port_id = 1,
+ .is_i2s_master = 1,
+ .i2s_mode = TEGRA_DAIFMT_I2S,
+ },
+ .i2s_param[SPEAKER] = {
+ .audio_port_id = 2,
+ .is_i2s_master = 1,
+ .i2s_mode = TEGRA_DAIFMT_I2S,
+ },
+ .i2s_param[BT_SCO] = {
+ .audio_port_id = 3,
+ .is_i2s_master = 1,
+ .i2s_mode = TEGRA_DAIFMT_DSP_A,
+ },
+ /* GPIO list used by the MI2S driver */
+ .i2s_set[HIFI_CODEC*4 + 0] = {
+ .name = "I2S1_LRCK",
+ .id = TEGRA_GPIO_PA2,
+ },
+ .i2s_set[HIFI_CODEC*4 + 1] = {
+ .name = "I2S1_SCLK",
+ .id = TEGRA_GPIO_PA3,
+ },
+ .i2s_set[HIFI_CODEC*4 + 2] = {
+ .name = "I2S1_SDATA_IN",
+ .id = TEGRA_GPIO_PA4,
+ .dir_in = 1,
+ .pg = TEGRA_PINGROUP_DAP2_DIN,
+ },
+ .i2s_set[HIFI_CODEC*4 + 3] = {
+ .name = "I2S1_SDATA_OUT",
+ .id = TEGRA_GPIO_PA5,
+ },
+ .i2s_set[SPEAKER*4 + 0] = {
+ .name = "I2S2_LRCK",
+ .id = TEGRA_GPIO_PP0,
+ },
+ .i2s_set[SPEAKER*4 + 1] = {
+ .name = "I2S2_SDATA_IN",
+ .id = TEGRA_GPIO_PP1,
+ .dir_in = 1,
+ .pg = TEGRA_PINGROUP_DAP3_DIN,
+ },
+ .i2s_set[SPEAKER*4 + 2] = {
+ .name = "I2S2_SDATA_OUT",
+ .id = TEGRA_GPIO_PP2,
+ },
+ .i2s_set[SPEAKER*4 + 3] = {
+ .name = "I2S2_SCLK",
+ .id = TEGRA_GPIO_PP3,
+ },
+ .first_time_free[HIFI_CODEC] = 1,
+ .first_time_free[SPEAKER] = 1,
+ .codec_mclk = {
+ .name = "extperiph1_clk",
+ .id = TEGRA_GPIO_PW4,
+ }
+};
+
+static struct tegra_spi_device_controller_data dev_cdata_rt5677 = {
+ .rx_clk_tap_delay = 0,
+ .tx_clk_tap_delay = 16,
+};
+
+static struct gpio_config flounder_spi_pdata_rt5677[] = {
+ [0] = {
+ .name = "SPI5_MOSI",
+ .id = TEGRA_GPIO_PY0,
+ },
+ [1] = {
+ .name = "SPI5_MISO",
+ .id = TEGRA_GPIO_PY1,
+ .dir_in = 1,
+ .pg = TEGRA_PINGROUP_ULPI_DIR,
+ },
+ [2] = {
+ .name = "SPI5_SCLK",
+ .id = TEGRA_GPIO_PY2,
+ },
+ [3] = {
+ .name = "SPI5_CS#",
+ .id = TEGRA_GPIO_PY3,
+ },
+};
+
+void flounder_rt5677_spi_suspend(bool on)
+{
+ int i, ret;
+ if (on) {
+ pr_debug("%s: suspend\n", __func__);
+ for (i = 0; i < ARRAY_SIZE(flounder_spi_pdata_rt5677); i++) {
+ ret = gpio_request(flounder_spi_pdata_rt5677[i].id,
+ flounder_spi_pdata_rt5677[i].name);
+ if (ret < 0) {
+ pr_err("%s: gpio_request failed for gpio[%d] %s, return %d\n",
+ __func__, flounder_spi_pdata_rt5677[i].id, flounder_spi_pdata_rt5677[i].name, ret);
+ continue;
+ }
+ if (!flounder_spi_pdata_rt5677[i].dir_in) {
+ gpio_direction_output(flounder_spi_pdata_rt5677[i].id, 0);
+ } else {
+ tegra_pinctrl_pg_set_pullupdown(flounder_spi_pdata_rt5677[i].pg,
+ TEGRA_PUPD_PULL_DOWN);
+ gpio_direction_input(flounder_spi_pdata_rt5677[i].id);
+ }
+ }
+ } else {
+ pr_debug("%s: resume\n", __func__);
+ for (i = 0; i < ARRAY_SIZE(flounder_spi_pdata_rt5677); i++)
+ gpio_free(flounder_spi_pdata_rt5677[i].id);
+ }
+}
+
+static struct rt5677_spi_platform_data rt5677_spi_pdata = {
+ .spi_suspend = flounder_rt5677_spi_suspend
+};
+
+struct spi_board_info rt5677_flounder_spi_board[1] = {
+ {
+ .modalias = "rt5677_spidev",
+ .bus_num = 4,
+ .chip_select = 0,
+ .max_speed_hz = 12 * 1000 * 1000,
+ .mode = SPI_MODE_0,
+ .controller_data = &dev_cdata_rt5677,
+ .platform_data = &rt5677_spi_pdata,
+ },
+};
+
+static void flounder_audio_init(void)
+{
+ int i;
+ flounder_audio_pdata_rt5677.codec_name = "rt5677.1-002d";
+ flounder_audio_pdata_rt5677.codec_dai_name = "rt5677-aif1";
+
+ spi_register_board_info(&rt5677_flounder_spi_board[0],
+ ARRAY_SIZE(rt5677_flounder_spi_board));
+ /* To prevent power leakage */
+ gpio_request(TEGRA_GPIO_PN1, "I2S0_SDATA_IN");
+ tegra_pinctrl_pg_set_pullupdown(TEGRA_PINGROUP_DAP1_DIN, TEGRA_PUPD_PULL_DOWN);
+
+ /* Configure these pins as SFIO */
+ for (i = 0; i < ARRAY_SIZE(audio_sfio_pins); i++)
+ if (tegra_is_gpio(audio_sfio_pins[i].id)) {
+ gpio_request(audio_sfio_pins[i].id, audio_sfio_pins[i].name);
+ gpio_free(audio_sfio_pins[i].id);
+ pr_info("%s: gpio_free for gpio[%d] %s\n",
+ __func__, audio_sfio_pins[i].id, audio_sfio_pins[i].name);
+ }
+
+ /* Configure these pins as GPIO */
+ for (i = 0; i < SPEAKER*2; i++) {
+ gpio_request(flounder_audio_pdata_rt5677.i2s_set[i].id,
+ flounder_audio_pdata_rt5677.i2s_set[i].name);
+ if (!flounder_audio_pdata_rt5677.i2s_set[i].dir_in) {
+ gpio_direction_output(flounder_audio_pdata_rt5677.i2s_set[i].id, 0);
+ } else {
+ tegra_pinctrl_pg_set_pullupdown(flounder_audio_pdata_rt5677.i2s_set[i].pg, TEGRA_PUPD_PULL_DOWN);
+ gpio_direction_input(flounder_audio_pdata_rt5677.i2s_set[i].id);
+ }
+ pr_info("%s: gpio_request for gpio[%d] %s\n",
+ __func__, flounder_audio_pdata_rt5677.i2s_set[i].id, flounder_audio_pdata_rt5677.i2s_set[i].name);
+ }
+}
+
+static struct platform_device flounder_audio_device_rt5677 = {
+ .name = "tegra-snd-rt5677",
+ .id = 0,
+ .dev = {
+ .platform_data = &flounder_audio_pdata_rt5677,
+ },
+};
+
+static void __init flounder_uart_init(void)
+{
+#ifndef CONFIG_USE_OF
+ tegra_uarta_device.dev.platform_data = &flounder_uarta_pdata;
+ tegra_uartb_device.dev.platform_data = &flounder_uartb_pdata;
+ tegra_uartc_device.dev.platform_data = &flounder_uartc_pdata;
+ tegra_uartd_device.dev.platform_data = &flounder_uartd_pdata;
+ platform_add_devices(flounder_uart_devices,
+ ARRAY_SIZE(flounder_uart_devices));
+#endif
+ tegra_uarta_device.dev.platform_data = &flounder_uarta_pdata;
+ if (!is_tegra_debug_uartport_hs()) {
+ int debug_port_id = uart_console_debug_init(0);
+ if (debug_port_id < 0)
+ return;
+
+#ifdef CONFIG_TEGRA_FIQ_DEBUGGER
+#ifndef CONFIG_TRUSTY_FIQ
+ tegra_serial_debug_init_irq_mode(TEGRA_UARTA_BASE, INT_UARTA, NULL, -1, -1);
+#endif
+#else
+ platform_device_register(uart_console_debug_device);
+#endif
+ } else {
+ platform_device_register(&tegra_uarta_device);
+ }
+}
+
+static struct resource tegra_rtc_resources[] = {
+ [0] = {
+ .start = TEGRA_RTC_BASE,
+ .end = TEGRA_RTC_BASE + TEGRA_RTC_SIZE - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ [1] = {
+ .start = INT_RTC,
+ .end = INT_RTC,
+ .flags = IORESOURCE_IRQ,
+ },
+};
+
+static struct platform_device tegra_rtc_device = {
+ .name = "tegra_rtc",
+ .id = -1,
+ .resource = tegra_rtc_resources,
+ .num_resources = ARRAY_SIZE(tegra_rtc_resources),
+};
+
+static struct timed_gpio flounder_vib_timed_gpios[] = {
+ {
+ .name = "vibrator",
+ .gpio = TEGRA_GPIO_PU3,
+ .max_timeout = 15000,
+ },
+};
+
+static struct timed_gpio_platform_data flounder_vib_pdata = {
+ .num_gpios = ARRAY_SIZE(flounder_vib_timed_gpios),
+ .gpios = flounder_vib_timed_gpios,
+};
+
+static struct platform_device flounder_vib_device = {
+ .name = TIMED_GPIO_NAME,
+ .id = -1,
+ .dev = {
+ .platform_data = &flounder_vib_pdata,
+ },
+};
+
+static struct platform_device *flounder_devices[] __initdata = {
+ &tegra_pmu_device,
+ &tegra_rtc_device,
+ &tegra_udc_device,
+#if defined(CONFIG_TEGRA_WATCHDOG)
+#ifndef CONFIG_TRUSTY_FIQ
+ &tegra_wdt0_device,
+#endif
+#endif
+#if defined(CONFIG_TEGRA_AVP)
+ &tegra_avp_device,
+#endif
+ &tegra_pcm_device,
+ &tegra_ahub_device,
+ &tegra_dam_device0,
+ &tegra_dam_device1,
+ &tegra_dam_device2,
+ &tegra_i2s_device1,
+ &tegra_i2s_device2,
+ &tegra_i2s_device3,
+ &tegra_i2s_device4,
+ &flounder_audio_device_rt5677,
+ &tegra_spdif_device,
+ &spdif_dit_device,
+ &bluetooth_dit_device,
+#if IS_ENABLED(CONFIG_SND_SOC_TEGRA_OFFLOAD)
+ &tegra_offload_device,
+#endif
+#if IS_ENABLED(CONFIG_SND_SOC_TEGRA30_AVP)
+ &tegra30_avp_audio_device,
+#endif
+
+#if defined(CONFIG_CRYPTO_DEV_TEGRA_AES)
+ &tegra_aes_device,
+#endif
+ &flounder_vib_device,
+};
+
+static struct tegra_usb_platform_data tegra_udc_pdata = {
+ .port_otg = true,
+ .has_hostpc = true,
+ .unaligned_dma_buf_supported = false,
+ .phy_intf = TEGRA_USB_PHY_INTF_UTMI,
+ .op_mode = TEGRA_USB_OPMODE_DEVICE,
+ .u_data.dev = {
+ .vbus_pmu_irq = 0,
+ .vbus_gpio = -1,
+ .charging_supported = true,
+ .remote_wakeup_supported = false,
+ },
+ .u_cfg.utmi = {
+ .hssync_start_delay = 0,
+ .elastic_limit = 16,
+ .idle_wait_delay = 17,
+ .term_range_adj = 6,
+ .xcvr_setup = 8,
+ .xcvr_lsfslew = 2,
+ .xcvr_lsrslew = 2,
+ .xcvr_hsslew_lsb = 3,
+ .xcvr_hsslew_msb = 3,
+ .xcvr_setup_offset = 3,
+ .xcvr_use_fuses = 1,
+ },
+};
+
+static struct tegra_usb_platform_data tegra_ehci1_utmi_pdata = {
+ .port_otg = true,
+ .has_hostpc = true,
+ .unaligned_dma_buf_supported = false,
+ .phy_intf = TEGRA_USB_PHY_INTF_UTMI,
+ .op_mode = TEGRA_USB_OPMODE_HOST,
+ .u_data.host = {
+ .vbus_gpio = -1,
+ .hot_plug = false,
+ .remote_wakeup_supported = true,
+ .power_off_on_suspend = true,
+ .support_y_cable = true,
+ },
+ .u_cfg.utmi = {
+ .hssync_start_delay = 0,
+ .elastic_limit = 16,
+ .idle_wait_delay = 17,
+ .term_range_adj = 6,
+ .xcvr_setup = 15,
+ .xcvr_lsfslew = 0,
+ .xcvr_lsrslew = 3,
+ .xcvr_hsslew_lsb = 2,
+ .xcvr_hsslew_msb = 3,
+ .xcvr_setup_offset = 3,
+ .xcvr_use_fuses = 1,
+ .vbus_oc_map = 0x4,
+ },
+};
+
+static struct tegra_usb_platform_data tegra_ehci2_utmi_pdata = {
+ .port_otg = false,
+ .has_hostpc = true,
+ .unaligned_dma_buf_supported = false,
+ .phy_intf = TEGRA_USB_PHY_INTF_UTMI,
+ .op_mode = TEGRA_USB_OPMODE_HOST,
+ .u_data.host = {
+ .vbus_gpio = -1,
+ .hot_plug = false,
+ .remote_wakeup_supported = true,
+ .power_off_on_suspend = true,
+ },
+ .u_cfg.utmi = {
+ .hssync_start_delay = 0,
+ .elastic_limit = 16,
+ .idle_wait_delay = 17,
+ .term_range_adj = 6,
+ .xcvr_setup = 8,
+ .xcvr_lsfslew = 2,
+ .xcvr_lsrslew = 2,
+ .xcvr_setup_offset = 0,
+ .xcvr_use_fuses = 1,
+ .vbus_oc_map = 0x5,
+ },
+};
+
+static struct tegra_usb_platform_data tegra_ehci3_utmi_pdata = {
+ .port_otg = false,
+ .has_hostpc = true,
+ .unaligned_dma_buf_supported = false,
+ .phy_intf = TEGRA_USB_PHY_INTF_UTMI,
+ .op_mode = TEGRA_USB_OPMODE_HOST,
+ .u_data.host = {
+ .vbus_gpio = -1,
+ .hot_plug = false,
+ .remote_wakeup_supported = true,
+ .power_off_on_suspend = true,
+ },
+ .u_cfg.utmi = {
+ .hssync_start_delay = 0,
+ .elastic_limit = 16,
+ .idle_wait_delay = 17,
+ .term_range_adj = 6,
+ .xcvr_setup = 8,
+ .xcvr_lsfslew = 2,
+ .xcvr_lsrslew = 2,
+ .xcvr_setup_offset = 0,
+ .xcvr_use_fuses = 1,
+ .vbus_oc_map = 0x5,
+ },
+};
+
+static struct tegra_usb_otg_data tegra_otg_pdata = {
+ .ehci_device = &tegra_ehci1_device,
+ .ehci_pdata = &tegra_ehci1_utmi_pdata,
+ .id_det_gpio = TEGRA_GPIO_PW2,
+};
+
+static void flounder_usb_init(void)
+{
+ int usb_port_owner_info = tegra_get_usb_port_owner_info();
+
+ /* Device cable is detected through PMU Interrupt */
+ tegra_udc_pdata.support_pmu_vbus = true;
+ tegra_ehci1_utmi_pdata.support_pmu_vbus = true;
+ tegra_ehci1_utmi_pdata.vbus_extcon_dev_name = "palmas-extcon";
+ /* Host cable is detected through GPIO Interrupt */
+ tegra_udc_pdata.id_det_type = TEGRA_USB_GPIO_ID;
+ tegra_udc_pdata.vbus_extcon_dev_name = "palmas-extcon";
+ tegra_ehci1_utmi_pdata.id_det_type = TEGRA_USB_GPIO_ID;
+ if (!is_mdm_modem())
+ tegra_ehci1_utmi_pdata.u_cfg.utmi.xcvr_setup_offset = -3;
+
+ if (!(usb_port_owner_info & UTMI1_PORT_OWNER_XUSB)) {
+ tegra_otg_pdata.is_xhci = false;
+ tegra_udc_pdata.u_data.dev.is_xhci = false;
+ } else {
+ tegra_otg_pdata.is_xhci = true;
+ tegra_udc_pdata.u_data.dev.is_xhci = true;
+ }
+ if (!is_mdm_modem())
+ tegra_udc_pdata.u_cfg.utmi.xcvr_setup_offset = -3;
+ tegra_otg_device.dev.platform_data = &tegra_otg_pdata;
+ platform_device_register(&tegra_otg_device);
+ /* Setup the udc platform data */
+ tegra_udc_device.dev.platform_data = &tegra_udc_pdata;
+
+ /* tegra_ehci2_device is reserved for the mdm9x25 modem */
+ if (!(usb_port_owner_info & UTMI2_PORT_OWNER_XUSB) &&
+ !is_mdm_modem()) {
+ tegra_ehci2_device.dev.platform_data =
+ &tegra_ehci2_utmi_pdata;
+ platform_device_register(&tegra_ehci2_device);
+ }
+ if (!(usb_port_owner_info & UTMI2_PORT_OWNER_XUSB)) {
+ tegra_ehci3_device.dev.platform_data = &tegra_ehci3_utmi_pdata;
+ platform_device_register(&tegra_ehci3_device);
+ }
+}
+
+static struct tegra_xusb_platform_data xusb_pdata = {
+ .portmap = TEGRA_XUSB_SS_P0 | TEGRA_XUSB_USB2_P0 | TEGRA_XUSB_SS_P1 |
+ TEGRA_XUSB_USB2_P1 | TEGRA_XUSB_USB2_P2,
+};
+
+static void flounder_xusb_init(void)
+{
+ int usb_port_owner_info = tegra_get_usb_port_owner_info();
+
+ xusb_pdata.lane_owner = (u8) tegra_get_lane_owner_info();
+
+ if (!(usb_port_owner_info & UTMI1_PORT_OWNER_XUSB))
+ xusb_pdata.portmap &= ~(TEGRA_XUSB_USB2_P0 |
+ TEGRA_XUSB_SS_P0);
+
+ if (!(usb_port_owner_info & UTMI2_PORT_OWNER_XUSB))
+ xusb_pdata.portmap &= ~(TEGRA_XUSB_USB2_P1 |
+ TEGRA_XUSB_USB2_P2 | TEGRA_XUSB_SS_P1);
+
+ if (usb_port_owner_info & HSIC1_PORT_OWNER_XUSB)
+ xusb_pdata.portmap |= TEGRA_XUSB_HSIC_P0;
+
+ if (usb_port_owner_info & HSIC2_PORT_OWNER_XUSB)
+ xusb_pdata.portmap |= TEGRA_XUSB_HSIC_P1;
+}
+
+#ifndef CONFIG_USE_OF
+static struct platform_device *flounder_spi_devices[] __initdata = {
+ &tegra11_spi_device1,
+ &tegra11_spi_device4,
+};
+
+static struct tegra_spi_platform_data flounder_spi1_pdata = {
+ .dma_req_sel = 15,
+ .spi_max_frequency = 25000000,
+ .clock_always_on = false,
+};
+
+static struct tegra_spi_platform_data flounder_spi4_pdata = {
+ .dma_req_sel = 18,
+ .spi_max_frequency = 25000000,
+ .clock_always_on = false,
+};
+static void __init flounder_spi_init(void)
+{
+ tegra11_spi_device1.dev.platform_data = &flounder_spi1_pdata;
+ tegra11_spi_device4.dev.platform_data = &flounder_spi4_pdata;
+ platform_add_devices(flounder_spi_devices,
+ ARRAY_SIZE(flounder_spi_devices));
+}
+#else
+static void __init flounder_spi_init(void)
+{
+}
+#endif
+
+#ifdef CONFIG_USE_OF
+static struct of_dev_auxdata flounder_auxdata_lookup[] __initdata = {
+ OF_DEV_AUXDATA("nvidia,tegra114-spi", 0x7000d400, "spi-tegra114.0",
+ NULL),
+ OF_DEV_AUXDATA("nvidia,tegra114-spi", 0x7000d600, "spi-tegra114.1",
+ NULL),
+ OF_DEV_AUXDATA("nvidia,tegra114-spi", 0x7000d800, "spi-tegra114.2",
+ NULL),
+ OF_DEV_AUXDATA("nvidia,tegra114-spi", 0x7000da00, "spi-tegra114.3",
+ NULL),
+ OF_DEV_AUXDATA("nvidia,tegra114-spi", 0x7000dc00, "spi-tegra114.4",
+ NULL),
+ OF_DEV_AUXDATA("nvidia,tegra114-spi", 0x7000de00, "spi-tegra114.5",
+ NULL),
+ OF_DEV_AUXDATA("nvidia,tegra124-apbdma", 0x60020000, "tegra-apbdma",
+ NULL),
+ OF_DEV_AUXDATA("nvidia,tegra124-se", 0x70012000, "tegra12-se", NULL),
+ OF_DEV_AUXDATA("nvidia,tegra124-host1x", TEGRA_HOST1X_BASE, "host1x",
+ NULL),
+ OF_DEV_AUXDATA("nvidia,tegra124-gk20a", TEGRA_GK20A_BAR0_BASE,
+ "gk20a.0", NULL),
+#ifdef CONFIG_ARCH_TEGRA_VIC
+ OF_DEV_AUXDATA("nvidia,tegra124-vic", TEGRA_VIC_BASE, "vic03.0", NULL),
+#endif
+ OF_DEV_AUXDATA("nvidia,tegra124-msenc", TEGRA_MSENC_BASE, "msenc",
+ NULL),
+ OF_DEV_AUXDATA("nvidia,tegra124-vi", TEGRA_VI_BASE, "vi.0", NULL),
+ OF_DEV_AUXDATA("nvidia,tegra124-isp", TEGRA_ISP_BASE, "isp.0", NULL),
+ OF_DEV_AUXDATA("nvidia,tegra124-isp", TEGRA_ISPB_BASE, "isp.1", NULL),
+ OF_DEV_AUXDATA("nvidia,tegra124-tsec", TEGRA_TSEC_BASE, "tsec", NULL),
+ OF_DEV_AUXDATA("nvidia,tegra114-hsuart", 0x70006000, "serial-tegra.0",
+ NULL),
+ OF_DEV_AUXDATA("nvidia,tegra114-hsuart", 0x70006040, "serial-tegra.1",
+ NULL),
+ OF_DEV_AUXDATA("nvidia,tegra114-hsuart", 0x70006200, "serial-tegra.2",
+ NULL),
+ OF_DEV_AUXDATA("nvidia,tegra114-hsuart", 0x70006300, "serial-tegra.3",
+ NULL),
+ OF_DEV_AUXDATA("nvidia,tegra124-i2c", 0x7000c000, "tegra12-i2c.0",
+ NULL),
+ OF_DEV_AUXDATA("nvidia,tegra124-i2c", 0x7000c400, "tegra12-i2c.1",
+ NULL),
+ OF_DEV_AUXDATA("nvidia,tegra124-i2c", 0x7000c500, "tegra12-i2c.2",
+ NULL),
+ OF_DEV_AUXDATA("nvidia,tegra124-i2c", 0x7000c700, "tegra12-i2c.3",
+ NULL),
+ OF_DEV_AUXDATA("nvidia,tegra124-i2c", 0x7000d000, "tegra12-i2c.4",
+ NULL),
+ OF_DEV_AUXDATA("nvidia,tegra124-i2c", 0x7000d100, "tegra12-i2c.5",
+ NULL),
+ OF_DEV_AUXDATA("nvidia,tegra124-xhci", 0x70090000, "tegra-xhci",
+ &xusb_pdata),
+ OF_DEV_AUXDATA("nvidia,tegra124-camera", 0, "pcl-generic",
+ NULL),
+ OF_DEV_AUXDATA("nvidia,tegra124-dfll", 0x70110000, "tegra_cl_dvfs",
+ NULL),
+ OF_DEV_AUXDATA("nvidia,tegra132-dfll", 0x70040084, "tegra_cl_dvfs",
+ NULL),
+ OF_DEV_AUXDATA("nvidia,tegra124-efuse", TEGRA_FUSE_BASE, "tegra-fuse",
+ NULL),
+ {}
+};
+#endif
+
+static struct maxim_sti_pdata maxim_sti_pdata = {
+ .touch_fusion = "/vendor/bin/touch_fusion",
+ .config_file = "/vendor/firmware/touch_fusion.cfg",
+ .fw_name = "maxim_fp35.bin",
+ .nl_family = TF_FAMILY_NAME,
+ .nl_mc_groups = 5,
+ .chip_access_method = 2,
+ .default_reset_state = 0,
+ .tx_buf_size = 4100,
+ .rx_buf_size = 4100,
+ .gpio_reset = TOUCH_GPIO_RST_MAXIM_STI_SPI,
+ .gpio_irq = TOUCH_GPIO_IRQ_MAXIM_STI_SPI
+};
+
+static struct tegra_spi_device_controller_data maxim_dev_cdata = {
+ .rx_clk_tap_delay = 0,
+ .is_hw_based_cs = true,
+ .tx_clk_tap_delay = 0,
+};
+
+static struct spi_board_info maxim_sti_spi_board = {
+ .modalias = MAXIM_STI_NAME,
+ .bus_num = TOUCH_SPI_ID,
+ .chip_select = TOUCH_SPI_CS,
+ .max_speed_hz = 12 * 1000 * 1000,
+ .mode = SPI_MODE_0,
+ .platform_data = &maxim_sti_pdata,
+ .controller_data = &maxim_dev_cdata,
+};
+
+static int __init flounder_touch_init(void)
+{
+ pr_info("%s: init spi touch\n", __func__);
+
+ if (of_find_node_by_path("/spi@7000d800/synaptics_dsx@0") == NULL) {
+ pr_info("[TP] %s init maxim spi touch\n", __func__);
+ (void)touch_init_maxim_sti(&maxim_sti_spi_board);
+ } else {
+ pr_info("[TP] synaptics device tree found\n");
+ }
+ return 0;
+}
+
+#define EARPHONE_DET TEGRA_GPIO_PW3
+#define HSMIC_2V85_EN TEGRA_GPIO_PS3
+#define AUD_REMO_PRES TEGRA_GPIO_PS2
+#define AUD_REMO_TX_OE TEGRA_GPIO_PQ4
+#define AUD_REMO_TX TEGRA_GPIO_PJ7
+#define AUD_REMO_RX TEGRA_GPIO_PB0
+
+static void headset_init(void)
+{
+ int ret;
+
+ ret = gpio_request(HSMIC_2V85_EN, "HSMIC_2V85_EN");
+ if (ret < 0)
+ pr_err("[HS] %s: gpio_request failed for gpio %s\n",
+ __func__, "HSMIC_2V85_EN");
+
+ ret = gpio_request(AUD_REMO_TX_OE, "AUD_REMO_TX_OE");
+ if (ret < 0)
+ pr_err("[HS] %s: gpio_request failed for gpio %s\n",
+ __func__, "AUD_REMO_TX_OE");
+
+ ret = gpio_request(AUD_REMO_TX, "AUD_REMO_TX");
+ if (ret < 0)
+ pr_err("[HS] %s: gpio_request failed for gpio %s\n",
+ __func__, "AUD_REMO_TX");
+
+ ret = gpio_request(AUD_REMO_RX, "AUD_REMO_RX");
+ if (ret < 0)
+ pr_err("[HS] %s: gpio_request failed for gpio %s\n",
+ __func__, "AUD_REMO_RX");
+
+ gpio_direction_output(HSMIC_2V85_EN, 0);
+ gpio_direction_output(AUD_REMO_TX_OE, 1);
+}
+
+static void headset_power(int enable)
+{
+ pr_info("[HS_BOARD] (%s) Set MIC bias %d\n", __func__, enable);
+
+ if (enable)
+ gpio_set_value(HSMIC_2V85_EN, 1);
+ else
+ gpio_set_value(HSMIC_2V85_EN, 0);
+}
+
+#ifdef CONFIG_HEADSET_DEBUG_UART
+#define AUD_DEBUG_EN TEGRA_GPIO_PK5
+static int headset_get_debug(void)
+{
+ int ret = 0;
+ ret = gpio_get_value(AUD_DEBUG_EN);
+ pr_info("[HS_BOARD] (%s) AUD_DEBUG_EN=%d\n", __func__, ret);
+
+ return ret;
+}
+#endif
+
+/* HTC_HEADSET_PMIC Driver */
+static struct htc_headset_pmic_platform_data htc_headset_pmic_data = {
+ .driver_flag = DRIVER_HS_PMIC_ADC,
+ .hpin_gpio = EARPHONE_DET,
+ .hpin_irq = 0,
+ .key_gpio = AUD_REMO_PRES,
+ .key_irq = 0,
+ .key_enable_gpio = 0,
+ .adc_mic = 0,
+ .adc_remote = {0, 117, 118, 230, 231, 414, 415, 829},
+ .hs_controller = 0,
+ .hs_switch = 0,
+ .iio_channel_name = "hs_channel",
+#ifdef CONFIG_HEADSET_DEBUG_UART
+ .debug_gpio = AUD_DEBUG_EN,
+ .debug_irq = 0,
+ .headset_get_debug = headset_get_debug,
+#endif
+};
+
+static struct platform_device htc_headset_pmic = {
+ .name = "HTC_HEADSET_PMIC",
+ .id = -1,
+ .dev = {
+ .platform_data = &htc_headset_pmic_data,
+ },
+};
+
+static struct htc_headset_1wire_platform_data htc_headset_1wire_data = {
+ .tx_level_shift_en = AUD_REMO_TX_OE,
+ .uart_sw = 0,
+ .one_wire_remote = {0x7E, 0x7F, 0x7D, 0x7F, 0x7B, 0x7F},
+ .remote_press = 0,
+ .onewire_tty_dev = "/dev/ttyTHS3",
+};
+
+static struct platform_device htc_headset_one_wire = {
+ .name = "HTC_HEADSET_1WIRE",
+ .id = -1,
+ .dev = {
+ .platform_data = &htc_headset_1wire_data,
+ },
+};
+
+static void uart_tx_gpo(int mode)
+{
+ pr_info("[HS_BOARD] (%s) Set uart_tx_gpo mode = %d\n", __func__, mode);
+ switch (mode) {
+ case 0:
+ gpio_direction_output(AUD_REMO_TX, 0);
+ break;
+ case 1:
+ gpio_direction_output(AUD_REMO_TX, 1);
+ break;
+ case 2:
+ tegra_gpio_disable(AUD_REMO_TX);
+ break;
+ }
+}
+
+static void uart_lv_shift_en(int enable)
+{
+ pr_info("[HS_BOARD] (%s) Set uart_lv_shift_en %d\n", __func__, enable);
+ gpio_direction_output(AUD_REMO_TX_OE, enable);
+}
+
+/* HTC_HEADSET_MGR Driver */
+static struct platform_device *headset_devices[] = {
+ &htc_headset_pmic,
+ &htc_headset_one_wire,
+ /* Keep the headset detection driver last in this list */
+};
+
+static struct headset_adc_config htc_headset_mgr_config[] = {
+ {
+ .type = HEADSET_MIC,
+ .adc_max = 3680,
+ .adc_min = 621,
+ },
+ {
+ .type = HEADSET_NO_MIC,
+ .adc_max = 620,
+ .adc_min = 0,
+ },
+};
+
+static struct htc_headset_mgr_platform_data htc_headset_mgr_data = {
+ .driver_flag = DRIVER_HS_MGR_FLOAT_DET,
+ .headset_devices_num = ARRAY_SIZE(headset_devices),
+ .headset_devices = headset_devices,
+ .headset_config_num = ARRAY_SIZE(htc_headset_mgr_config),
+ .headset_config = htc_headset_mgr_config,
+ .headset_init = headset_init,
+ .headset_power = headset_power,
+ .uart_tx_gpo = uart_tx_gpo,
+ .uart_lv_shift_en = uart_lv_shift_en,
+};
+
+static struct platform_device htc_headset_mgr = {
+ .name = "HTC_HEADSET_MGR",
+ .id = -1,
+ .dev = {
+ .platform_data = &htc_headset_mgr_data,
+ },
+};
+
+static int __init flounder_headset_init(void)
+{
+ pr_info("[HS]%s Headset device register enter\n", __func__);
+ platform_device_register(&htc_headset_mgr);
+ return 0;
+}
+
+static struct device *gps_dev;
+static struct class *gps_class;
+
+extern int tegra_get_hw_rev(void);
+
+#define GPS_HOSTWAKE_GPIO 69
+static struct bcm_gps_hostwake_platform_data gps_hostwake_data = {
+ .gpio_hostwake = GPS_HOSTWAKE_GPIO,
+};
+
+static struct platform_device bcm_gps_hostwake = {
+ .name = "bcm-gps-hostwake",
+ .id = -1,
+ .dev = {
+ .platform_data = &gps_hostwake_data,
+ },
+};
+
+#define PRJ_F 302
+static int __init flounder_gps_init(void)
+{
+ int ret;
+ int gps_onoff;
+ int product_id = 0; /* default if DT node/property is missing */
+
+ pr_info("[GPS]%s init gps onoff\n", __func__);
+ of_property_read_u32(
+ of_find_node_by_path("/chosen/board_info"),
+ "pid",
+ &product_id);
+
+ if (product_id == PRJ_F && flounder_get_hw_revision() <= FLOUNDER_REV_EVT1_1) {
+ gps_onoff = TEGRA_GPIO_PH5; /* XB */
+ } else {
+ gps_onoff = TEGRA_GPIO_PB4; /* XC */
+ }
+
+ gps_class = class_create(THIS_MODULE, "gps");
+ if (IS_ERR(gps_class)) {
+ pr_err("[GPS] %s: gps class create failed\n", __func__);
+ return PTR_ERR(gps_class);
+ }
+
+ gps_dev = device_create(gps_class, NULL, 0, NULL, "bcm47521");
+ if (IS_ERR(gps_dev)) {
+ pr_err("[GPS] %s: gps device create failed\n", __func__);
+ return PTR_ERR(gps_dev);
+ }
+
+ ret = gpio_request(gps_onoff, "gps_onoff");
+ if (ret < 0) {
+ pr_err("[GPS] %s: gpio_request failed for gpio %s\n",
+ __func__, "gps_onoff");
+ }
+
+ gpio_direction_output(gps_onoff, 0);
+ gpio_export(gps_onoff, 1);
+ gpio_export_link(gps_dev, "gps_onoff", gps_onoff);
+
+ if (product_id == PRJ_F) {
+ pr_info("GPS: init gps hostwake\n");
+ platform_device_register(&bcm_gps_hostwake);
+ }
+
+ return 0;
+}
+#undef PRJ_F
+
+static void __init flounder_force_recovery_gpio(void)
+{
+ int ret;
+
+ if (flounder_get_hw_revision() != FLOUNDER_REV_PVT)
+ return;
+ ret = gpio_request(TEGRA_GPIO_PI1, "force_recovery");
+ if (ret < 0) {
+ pr_err("force_recovery: gpio_request failed\n");
+ }
+ gpio_direction_input(TEGRA_GPIO_PI1);
+ tegra_pinctrl_pg_set_pullupdown(TEGRA_PINGROUP_GPIO_PI1, TEGRA_PUPD_PULL_UP);
+}
+
+static void __init sysedp_init(void)
+{
+ flounder_new_sysedp_init();
+}
+
+static void __init edp_init(void)
+{
+ flounder_edp_init();
+}
+
+static void __init sysedp_dynamic_capping_init(void)
+{
+ flounder_sysedp_dynamic_capping_init();
+}
+
+static void __init tegra_flounder_early_init(void)
+{
+ sysedp_init();
+ tegra_clk_init_from_table(flounder_clk_init_table);
+ tegra_clk_verify_parents();
+ tegra_soc_device_init("flounder");
+}
+
+static struct tegra_dtv_platform_data flounder_dtv_pdata = {
+ .dma_req_selector = 11,
+};
+
+static void __init flounder_dtv_init(void)
+{
+ tegra_dtv_device.dev.platform_data = &flounder_dtv_pdata;
+ platform_device_register(&tegra_dtv_device);
+}
+
+static struct tegra_io_dpd pexbias_io = {
+ .name = "PEX_BIAS",
+ .io_dpd_reg_index = 0,
+ .io_dpd_bit = 4,
+};
+static struct tegra_io_dpd pexclk1_io = {
+ .name = "PEX_CLK1",
+ .io_dpd_reg_index = 0,
+ .io_dpd_bit = 5,
+};
+static struct tegra_io_dpd pexclk2_io = {
+ .name = "PEX_CLK2",
+ .io_dpd_reg_index = 0,
+ .io_dpd_bit = 6,
+};
+
+static void __init tegra_flounder_late_init(void)
+{
+ platform_device_register(&tegra124_pinctrl_device);
+ flounder_pinmux_init();
+
+ flounder_display_init();
+ flounder_uart_init();
+ flounder_usb_init();
+ if (is_mdm_modem())
+ flounder_mdm_9k_init();
+ flounder_xusb_init();
+ flounder_i2c_init();
+ flounder_spi_init();
+ flounder_audio_init();
+ platform_add_devices(flounder_devices, ARRAY_SIZE(flounder_devices));
+ tegra_io_dpd_init();
+ flounder_sdhci_init();
+ flounder_regulator_init();
+ flounder_dtv_init();
+ flounder_suspend_init();
+
+ flounder_emc_init();
+ flounder_edp_init();
+ isomgr_init();
+ flounder_touch_init();
+ flounder_headset_init();
+ flounder_panel_init();
+ flounder_kbc_init();
+ flounder_gps_init();
+
+ /* put PEX pads into DPD mode to save additional power */
+ tegra_io_dpd_enable(&pexbias_io);
+ tegra_io_dpd_enable(&pexclk1_io);
+ tegra_io_dpd_enable(&pexclk2_io);
+
+#ifdef CONFIG_TEGRA_WDT_RECOVERY
+ tegra_wdt_recovery_init();
+#endif
+
+ flounder_sensors_init();
+
+ flounder_soctherm_init();
+
+ flounder_setup_bluedroid_pm();
+ sysedp_dynamic_capping_init();
+ flounder_force_recovery_gpio();
+}
+
+static void __init tegra_flounder_init_early(void)
+{
+ flounder_rail_alignment_init();
+ tegra12x_init_early();
+}
+
+static void __init tegra_flounder_dt_init(void)
+{
+ tegra_flounder_early_init();
+#ifdef CONFIG_USE_OF
+ flounder_camera_auxdata(flounder_auxdata_lookup);
+ of_platform_populate(NULL,
+ of_default_bus_match_table, flounder_auxdata_lookup,
+ &platform_bus);
+#endif
+
+ tegra_flounder_late_init();
+ bt_export_bd_address();
+}
+
+static void __init tegra_flounder_reserve(void)
+{
+ struct device_node *hdmi_node = of_find_node_by_path("/host1x/hdmi");
+ bool fb2_enabled = hdmi_node && of_device_is_available(hdmi_node);
+ of_node_put(hdmi_node);
+
+#if defined(CONFIG_NVMAP_CONVERT_CARVEOUT_TO_IOVMM) || \
+ defined(CONFIG_TEGRA_NO_CARVEOUT)
+ /* 1536*2048*4*2 = 25165824 bytes */
+ tegra_reserve4(0, SZ_16M + SZ_8M, 0, 100 * SZ_1M);
+#else
+ tegra_reserve4(SZ_1G, SZ_16M + SZ_8M, fb2_enabled ? SZ_4M : 0, 100 * SZ_1M);
+#endif
+}
+
+static const char * const flounder_dt_board_compat[] = {
+ "google,flounder",
+ "google,flounder_lte",
+ "google,flounder64",
+ "google,flounder64_lte",
+ NULL
+};
+
+DT_MACHINE_START(FLOUNDER, "flounder")
+ .atag_offset = 0x100,
+ .smp = smp_ops(tegra_smp_ops),
+ .map_io = tegra_map_common_io,
+ .reserve = tegra_flounder_reserve,
+ .init_early = tegra_flounder_init_early,
+ .init_irq = irqchip_init,
+ .init_time = clocksource_of_init,
+ .init_machine = tegra_flounder_dt_init,
+ .restart = tegra_assert_system_reset,
+ .dt_compat = flounder_dt_board_compat,
+ .init_late = tegra_init_late
+MACHINE_END
diff --git a/arch/arm/mach-tegra/board-flounder.h b/arch/arm/mach-tegra/board-flounder.h
new file mode 100644
index 0000000..b7dd716
--- /dev/null
+++ b/arch/arm/mach-tegra/board-flounder.h
@@ -0,0 +1,141 @@
+/*
+ * arch/arm/mach-tegra/board-flounder.h
+ *
+ * Copyright (c) 2013, NVIDIA Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#ifndef _MACH_TEGRA_BOARD_FLOUNDER_H
+#define _MACH_TEGRA_BOARD_FLOUNDER_H
+
+#include <mach/gpio-tegra.h>
+#include <mach/irqs.h>
+#include "gpio-names.h"
+#include <mach/board_htc.h>
+
+int flounder_pinmux_init(void);
+int flounder_emc_init(void);
+int flounder_display_init(void);
+int flounder_panel_init(void);
+int flounder_kbc_init(void);
+int flounder_sdhci_init(void);
+int flounder_sensors_init(void);
+int flounder_regulator_init(void);
+int flounder_suspend_init(void);
+int flounder_rail_alignment_init(void);
+int flounder_soctherm_init(void);
+int flounder_edp_init(void);
+void flounder_camera_auxdata(void *);
+int flounder_mdm_9k_init(void);
+void flounder_new_sysedp_init(void);
+void flounder_sysedp_dynamic_capping_init(void);
+
+/* generated soc_therm OC interrupts */
+#define TEGRA_SOC_OC_IRQ_BASE TEGRA_NR_IRQS
+#define TEGRA_SOC_OC_NUM_IRQ TEGRA_SOC_OC_IRQ_MAX
+
+#define PALMAS_TEGRA_GPIO_BASE TEGRA_NR_GPIOS
+#define PALMAS_TEGRA_IRQ_BASE TEGRA_NR_IRQS
+
+#define CAM_RSTN TEGRA_GPIO_PBB3
+#define CAM_FLASH_STROBE TEGRA_GPIO_PBB4
+#define CAM2_PWDN TEGRA_GPIO_PBB6
+#define CAM1_PWDN TEGRA_GPIO_PBB5
+#define CAM_AF_PWDN TEGRA_GPIO_PBB7
+#define CAM_BOARD_E1806
+
+#define CAM_VCM_PWDN TEGRA_GPIO_PBB7
+#define CAM_PWDN TEGRA_GPIO_PO7
+#define CAM_1V2_EN TEGRA_GPIO_PH6
+#define CAM_A2V85_EN TEGRA_GPIO_PH2
+#define CAM_VCM2V85_EN TEGRA_GPIO_PH3
+#define CAM_1V8_EN TEGRA_GPIO_PH7
+#define CAM2_RST TEGRA_GPIO_PBB3
+
+#define UTMI1_PORT_OWNER_XUSB 0x1
+#define UTMI2_PORT_OWNER_XUSB 0x2
+#define HSIC1_PORT_OWNER_XUSB 0x4
+#define HSIC2_PORT_OWNER_XUSB 0x8
+
+/* Touchscreen definitions */
+#define TOUCH_GPIO_IRQ_RAYDIUM_SPI TEGRA_GPIO_PK2
+#define TOUCH_GPIO_RST_RAYDIUM_SPI TEGRA_GPIO_PK4
+#define TOUCH_SPI_ID 2 /*SPI 3 on flounder_interposer*/
+#define TOUCH_SPI_CS 0 /*CS 0 on flounder_interposer*/
+
+#define TOUCH_GPIO_IRQ_MAXIM_STI_SPI TEGRA_GPIO_PK2
+#define TOUCH_GPIO_RST_MAXIM_STI_SPI TEGRA_GPIO_PX6
+
+/* Audio-related GPIOs */
+#define TEGRA_GPIO_CDC_IRQ TEGRA_GPIO_PH4
+#define TEGRA_GPIO_HP_DET TEGRA_GPIO_PR7
+#define TEGRA_GPIO_LDO_EN TEGRA_GPIO_PR2
+
+/*GPIOs used by board panel file */
+#define DSI_PANEL_RST_GPIO_EVT_1_1 TEGRA_GPIO_PB4
+#define DSI_PANEL_RST_GPIO TEGRA_GPIO_PH5
+#define DSI_PANEL_BL_PWM_GPIO TEGRA_GPIO_PH1
+
+/* HDMI Hotplug detection pin */
+#define flounder_hdmi_hpd TEGRA_GPIO_PN7
+
+/* I2C related GPIOs */
+/* Same for interposer and t124 */
+#define TEGRA_GPIO_I2C1_SCL TEGRA_GPIO_PC4
+#define TEGRA_GPIO_I2C1_SDA TEGRA_GPIO_PC5
+#define TEGRA_GPIO_I2C2_SCL TEGRA_GPIO_PT5
+#define TEGRA_GPIO_I2C2_SDA TEGRA_GPIO_PT6
+#define TEGRA_GPIO_I2C3_SCL TEGRA_GPIO_PBB1
+#define TEGRA_GPIO_I2C3_SDA TEGRA_GPIO_PBB2
+#define TEGRA_GPIO_I2C4_SCL TEGRA_GPIO_PV4
+#define TEGRA_GPIO_I2C4_SDA TEGRA_GPIO_PV5
+#define TEGRA_GPIO_I2C5_SCL TEGRA_GPIO_PZ6
+#define TEGRA_GPIO_I2C5_SDA TEGRA_GPIO_PZ7
+
+/* AUO Display related GPIO */
+#define en_vdd_bl TEGRA_GPIO_PP2 /* DAP3_DOUT */
+#define lvds_en TEGRA_GPIO_PI0 /* GMI_WR_N */
+#define refclk_en TEGRA_GPIO_PG4 /* GMI_AD4 */
+
+/* HID keyboard and trackpad irq same for interposer and t124 */
+#define I2C_KB_IRQ TEGRA_GPIO_PC7
+#define I2C_TP_IRQ TEGRA_GPIO_PW3
+
+/* Qualcomm mdm9k modem related GPIO */
+#define AP2MDM_VDD_MIN TEGRA_GPIO_PO6
+#define AP2MDM_PMIC_RESET_N TEGRA_GPIO_PBB5
+#define AP2MDM_WAKEUP TEGRA_GPIO_PO1
+#define AP2MDM_STATUS TEGRA_GPIO_PO2
+#define AP2MDM_ERRFATAL TEGRA_GPIO_PO3
+#define AP2MDM_IPC1 TEGRA_GPIO_PO4
+#define AP2MDM_IPC2 TEGRA_GPIO_PO5
+#define MDM2AP_ERRFATAL TEGRA_GPIO_PV0
+#define MDM2AP_HSIC_READY TEGRA_GPIO_PV1
+#define MDM2AP_VDD_MIN TEGRA_GPIO_PCC2
+#define MDM2AP_WAKEUP TEGRA_GPIO_PS5
+#define MDM2AP_STATUS TEGRA_GPIO_PS6
+#define MDM2AP_IPC3 TEGRA_GPIO_PS0
+#define AP2MDM_CHNL_RDY_CPU TEGRA_GPIO_PQ7
+
+#define FLOUNDER_REV_EVT1_1 1
+#define FLOUNDER_REV_EVT1_2 2
+#define FLOUNDER_REV_DVT1 3
+#define FLOUNDER_REV_DVT2 4
+#define FLOUNDER_REV_PVT 0x80
+
+int flounder_get_hw_revision(void);
+int flounder_get_eng_id(void);
+
+#endif
diff --git a/arch/arm/mach-tegra/board-panel.h b/arch/arm/mach-tegra/board-panel.h
index eef84ff..5fbcb78 100644
--- a/arch/arm/mach-tegra/board-panel.h
+++ b/arch/arm/mach-tegra/board-panel.h
@@ -65,6 +65,7 @@
extern struct tegra_panel dsi_p_wuxga_10_1;
extern struct tegra_panel dsi_a_1080p_11_6;
extern struct tegra_panel dsi_s_wqxga_10_1;
+extern struct tegra_panel dsi_j_qxga_8_9;
extern struct tegra_panel dsi_lgd_wxga_7_0;
extern struct tegra_panel dsi_a_1080p_14_0;
extern struct tegra_panel edp_a_1080p_14_0;
diff --git a/arch/arm/mach-tegra/common.c b/arch/arm/mach-tegra/common.c
index 5b9fa78..7f26c54 100644
--- a/arch/arm/mach-tegra/common.c
+++ b/arch/arm/mach-tegra/common.c
@@ -120,7 +120,13 @@
#ifdef CONFIG_PSTORE_RAM
#define RAMOOPS_MEM_SIZE SZ_2M
+#ifdef CONFIG_PSTORE_PMSG
+#define FTRACE_MEM_SIZE SZ_512K
+#define PMSG_MEM_SIZE SZ_512K
+#else
#define FTRACE_MEM_SIZE SZ_1M
+#define PMSG_MEM_SIZE 0
+#endif
#endif
phys_addr_t tegra_bootloader_fb_start;
@@ -206,12 +212,22 @@
{
int err = 0;
#ifdef CONFIG_TRUSTED_LITTLE_KERNEL
+#define MAX_RETRIES 6
+ int retries = MAX_RETRIES;
+retry:
err = gk20a_do_idle();
if (!err) {
/* Config VPR_BOM/_SIZE in MC */
err = te_set_vpr_params((void *)(uintptr_t)base, size);
gk20a_do_unidle();
+ } else {
+ if (retries--) {
+ pr_err("%s:%d: fail retry=%d\n",
+ __func__, __LINE__, MAX_RETRIES - retries);
+ msleep(1);
+ goto retry;
+ }
}
#endif
return err;
@@ -1890,8 +1906,10 @@
{
ramoops_data.mem_size = reserve_size;
ramoops_data.mem_address = memblock_end_of_4G(reserve_size);
- ramoops_data.console_size = reserve_size - FTRACE_MEM_SIZE;
+ ramoops_data.console_size = reserve_size - FTRACE_MEM_SIZE
+ - PMSG_MEM_SIZE;
ramoops_data.ftrace_size = FTRACE_MEM_SIZE;
+ ramoops_data.pmsg_size = PMSG_MEM_SIZE;
ramoops_data.dump_oops = 1;
memblock_remove(ramoops_data.mem_address, ramoops_data.mem_size);
}
diff --git a/arch/arm/mach-tegra/dvfs.c b/arch/arm/mach-tegra/dvfs.c
index 4d5bdfd..1ea646f 100644
--- a/arch/arm/mach-tegra/dvfs.c
+++ b/arch/arm/mach-tegra/dvfs.c
@@ -47,6 +47,8 @@
struct dvfs_rail *tegra_core_rail;
struct dvfs_rail *tegra_gpu_rail;
+static bool gpu_power_on;
+
static LIST_HEAD(dvfs_rail_list);
static DEFINE_MUTEX(dvfs_lock);
static DEFINE_MUTEX(rail_disable_lock);
@@ -2065,6 +2067,38 @@
*/
#ifdef CONFIG_TEGRA_DVFS_RAIL_CONNECT_ALL
+
+static ssize_t gpu_power_on_show(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ char *buf)
+{
+ return scnprintf(buf, PAGE_SIZE, "%u\n", gpu_power_on);
+}
+
+static ssize_t gpu_power_on_store(struct kobject *kobj, struct kobj_attribute *attr,
+ const char *buf, size_t count)
+{
+ bool val;
+ int err;
+
+ err = strtobool(buf, &val);
+ if (err < 0)
+ return err;
+
+ if (val != gpu_power_on) {
+ if (!val)
+ return -EBUSY;
+
+ tegra_dvfs_rail_power_up(tegra_gpu_rail);
+ gpu_power_on = val;
+ }
+
+ return count;
+}
+
+static struct kobj_attribute gpu_poweron_attr =
+ __ATTR(gpu_power_on, 0644, gpu_power_on_show, gpu_power_on_store);
+
/*
* Enable voltage scaling only if all the rails connect successfully
*/
@@ -2097,6 +2131,13 @@
return -ENODEV;
}
+ if (tegra_gpu_rail) {
+ int err;
+ tegra_dvfs_rail_power_down(tegra_gpu_rail);
+ err = sysfs_create_file(power_kobj, &gpu_poweron_attr.attr);
+ WARN_ON(err);
+ }
+
return 0;
}
#else
diff --git a/arch/arm/mach-tegra/flounder-bdaddress.c b/arch/arm/mach-tegra/flounder-bdaddress.c
new file mode 100644
index 0000000..b95c8c9
--- /dev/null
+++ b/arch/arm/mach-tegra/flounder-bdaddress.c
@@ -0,0 +1,59 @@
+/*
+ *
+ * Code to extract Bluetooth bd_address information
+ * from device_tree set up by the bootloader.
+ *
+ * Copyright (C) 2010 HTC Corporation
+ * Author:Yomin Lin <yomin_lin@htc.com>
+ * Author:Allen Ou <allen_ou@htc.com>
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/platform_device.h>
+#include <asm/setup.h>
+#include <mach/flounder-bdaddress.h>
+#include <linux/of.h>
+
+/*configuration tags specific to Bluetooth*/
+#define MAX_BT_SIZE 0x6U
+
+#define CALIBRATION_DATA_PATH "/calibration_data"
+#define BT_FLASH_DATA "bt_flash"
+
+static unsigned char bt_bd_ram[MAX_BT_SIZE];
+static char bdaddress[20];
+
+static unsigned char *get_bt_bd_ram(void)
+{
+ struct device_node *offset = of_find_node_by_path(CALIBRATION_DATA_PATH);
+ int p_size;
+ unsigned char *p_data;
+
+ p_size = 0;
+ p_data = NULL;
+ if (offset) {
+ p_data = (unsigned char *) of_get_property(offset, BT_FLASH_DATA, &p_size);
+ }
+ if (p_data != NULL) /* clamp to buffer size to avoid overflow */
+ memcpy(bt_bd_ram, p_data, min_t(int, p_size, MAX_BT_SIZE));
+
+ return (bt_bd_ram);
+}
+
+void bt_export_bd_address(void)
+{
+ unsigned char cTemp[MAX_BT_SIZE];
+
+ memcpy(cTemp, get_bt_bd_ram(), sizeof(cTemp));
+ sprintf(bdaddress, "%02x:%02x:%02x:%02x:%02x:%02x",
+ cTemp[0], cTemp[1], cTemp[2],
+ cTemp[3], cTemp[4], cTemp[5]);
+
+ printk(KERN_INFO "SET BD_ADDRESS=%s\n", bdaddress);
+}
+module_param_string(bdaddress, bdaddress, sizeof(bdaddress), S_IWUSR | S_IRUGO);
+MODULE_PARM_DESC(bdaddress, "BT MAC ADDRESS");
+
diff --git a/arch/arm/mach-tegra/include/mach/board_htc.h b/arch/arm/mach-tegra/include/mach/board_htc.h
new file mode 100644
index 0000000..b0a78f4
--- /dev/null
+++ b/arch/arm/mach-tegra/include/mach/board_htc.h
@@ -0,0 +1,53 @@
+/*
+ * linux/arch/arm/mach-tegra/include/mach/board_htc.h
+ *
+ * Copyright (C) 2014 HTC Corporation.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#ifndef __ARCH_ARM_MACH_TEGRA_BOARD_HTC_H
+#define __ARCH_ARM_MACH_TEGRA_BOARD_HTC_H
+
+enum {
+ BOARD_MFG_MODE_NORMAL = 0,
+ BOARD_MFG_MODE_FACTORY2,
+ BOARD_MFG_MODE_RECOVERY,
+ BOARD_MFG_MODE_CHARGE,
+ BOARD_MFG_MODE_POWERTEST,
+ BOARD_MFG_MODE_OFFMODE_CHARGING,
+ BOARD_MFG_MODE_MFGKERNEL,
+ BOARD_MFG_MODE_MODEM_CALIBRATION,
+};
+
+enum {
+ RADIO_FLAG_NONE = 0,
+ RADIO_FLAG_MORE_LOG = BIT(0),
+ RADIO_FLAG_FTRACE_ENABLE = BIT(1),
+ RADIO_FLAG_USB_UPLOAD = BIT(3),
+ RADIO_FLAG_DIAG_ENABLE = BIT(17),
+};
+
+enum {
+ MDM_SKU_WIFI_ONLY = 0,
+ MDM_SKU_UL,
+ MDM_SKU_WL,
+ NUM_MDM_SKU
+};
+
+enum {
+ RADIO_IMG_NON_EXIST = 0,
+ RADIO_IMG_EXIST
+};
+
+int board_mfg_mode(void);
+unsigned int get_radio_flag(void);
+int get_mdm_sku(void);
+bool is_mdm_modem(void);
+#endif
diff --git a/arch/arm/mach-tegra/include/mach/diag_bridge.h b/arch/arm/mach-tegra/include/mach/diag_bridge.h
new file mode 100644
index 0000000..d67d664
--- /dev/null
+++ b/arch/arm/mach-tegra/include/mach/diag_bridge.h
@@ -0,0 +1,54 @@
+/* Copyright (c) 2011, 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __LINUX_USB_DIAG_BRIDGE_H__
+#define __LINUX_USB_DIAG_BRIDGE_H__
+
+struct diag_bridge_ops {
+ void *ctxt;
+ void (*read_complete_cb)(void *ctxt, char *buf,
+ int buf_size, int actual);
+ void (*write_complete_cb)(void *ctxt, char *buf,
+ int buf_size, int actual);
+ int (*suspend)(void *ctxt);
+ void (*resume)(void *ctxt);
+};
+
+#if IS_ENABLED(CONFIG_USB_QCOM_DIAG_BRIDGE)
+
+extern int diag_bridge_read(int id, char *data, int size);
+extern int diag_bridge_write(int id, char *data, int size);
+extern int diag_bridge_open(int id, struct diag_bridge_ops *ops);
+extern void diag_bridge_close(int id);
+
+#else
+
+static int __maybe_unused diag_bridge_read(int id, char *data, int size)
+{
+ return -ENODEV;
+}
+
+static int __maybe_unused diag_bridge_write(int id, char *data, int size)
+{
+ return -ENODEV;
+}
+
+static int __maybe_unused diag_bridge_open(int id, struct diag_bridge_ops *ops)
+{
+ return -ENODEV;
+}
+
+static void __maybe_unused diag_bridge_close(int id) { }
+
+#endif
+
+#endif
diff --git a/arch/arm/mach-tegra/include/mach/flounder-bdaddress.h b/arch/arm/mach-tegra/include/mach/flounder-bdaddress.h
new file mode 100644
index 0000000..1b81960
--- /dev/null
+++ b/arch/arm/mach-tegra/include/mach/flounder-bdaddress.h
@@ -0,0 +1,12 @@
+/* arch/arm/mach-tegra/include/mach/flounder-bdaddress.h
+ *
+ * Code to extract Bluetooth bd_address information
+ * from device_tree set up by the bootloader.
+ *
+ * Copyright (C) 2010 HTC Corporation
+ * Author:Yomin Lin <yomin_lin@htc.com>
+ * Author:Allen Ou <allen_ou@htc.com>
+ *
+ */
+
+void bt_export_bd_address(void);
diff --git a/arch/arm/mach-tegra/include/mach/gpio-tegra.h b/arch/arm/mach-tegra/include/mach/gpio-tegra.h
index 8dea489..746812c 100644
--- a/arch/arm/mach-tegra/include/mach/gpio-tegra.h
+++ b/arch/arm/mach-tegra/include/mach/gpio-tegra.h
@@ -36,5 +36,6 @@
void tegra_gpio_init_configure(unsigned gpio, bool is_input, int value);
int tegra_gpio_get_bank_int_nr(int gpio);
int tegra_is_gpio(int);
+void tegra_gpio_disable(int gpio);
#endif
diff --git a/arch/arm/mach-tegra/include/mach/mcerr.h b/arch/arm/mach-tegra/include/mach/mcerr.h
index 916d2eb..753563e 100644
--- a/arch/arm/mach-tegra/include/mach/mcerr.h
+++ b/arch/arm/mach-tegra/include/mach/mcerr.h
@@ -104,7 +104,6 @@
MC_INT_ARBITRATION_EMEM | \
MC_INT_INVALID_SMMU_PAGE | \
MC_INT_INVALID_APB_ASID_UPDATE | \
- MC_INT_DECERR_VPR | \
MC_INT_SECERR_SEC | \
MC_INT_DECERR_MTS)
#endif
diff --git a/arch/arm/mach-tegra/include/mach/msm_smd.h b/arch/arm/mach-tegra/include/mach/msm_smd.h
new file mode 100644
index 0000000..5cfa3c1
--- /dev/null
+++ b/arch/arm/mach-tegra/include/mach/msm_smd.h
@@ -0,0 +1,405 @@
+/* linux/include/asm-arm/arch-tegra/msm_smd.h
+ *
+ * Copyright (C) 2007 Google, Inc.
+ * Copyright (c) 2009-2012, Code Aurora Forum. All rights reserved.
+ * Author: Brian Swetland <swetland@google.com>
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef __ASM_ARCH_MSM_SMD_H
+#define __ASM_ARCH_MSM_SMD_H
+
+#include <linux/io.h>
+#include <mach/msm_smsm.h>
+
+typedef struct smd_channel smd_channel_t;
+
+#define SMD_MAX_CH_NAME_LEN 20 /* includes null char at end */
+
+#define SMD_EVENT_DATA 1
+#define SMD_EVENT_OPEN 2
+#define SMD_EVENT_CLOSE 3
+#define SMD_EVENT_STATUS 4
+#define SMD_EVENT_REOPEN_READY 5
+
+/*
+ * SMD Processor ID's.
+ *
+ * For all processors that have both SMSM and SMD clients,
+ * the SMSM Processor ID and the SMD Processor ID will
+ * be the same. In cases where a processor only supports
+ * SMD, the entry will only exist in this enum.
+ */
+enum {
+ SMD_APPS = SMSM_APPS,
+ SMD_MODEM = SMSM_MODEM,
+ SMD_Q6 = SMSM_Q6,
+ SMD_WCNSS = SMSM_WCNSS,
+ SMD_DSPS = SMSM_DSPS,
+ SMD_MODEM_Q6_FW,
+ SMD_RPM,
+ NUM_SMD_SUBSYSTEMS,
+};
+
+enum {
+ SMD_APPS_MODEM = 0,
+ SMD_APPS_QDSP,
+ SMD_MODEM_QDSP,
+ SMD_APPS_DSPS,
+ SMD_MODEM_DSPS,
+ SMD_QDSP_DSPS,
+ SMD_APPS_WCNSS,
+ SMD_MODEM_WCNSS,
+ SMD_QDSP_WCNSS,
+ SMD_DSPS_WCNSS,
+ SMD_APPS_Q6FW,
+ SMD_MODEM_Q6FW,
+ SMD_QDSP_Q6FW,
+ SMD_DSPS_Q6FW,
+ SMD_WCNSS_Q6FW,
+ SMD_APPS_RPM,
+ SMD_MODEM_RPM,
+ SMD_QDSP_RPM,
+ SMD_WCNSS_RPM,
+ SMD_NUM_TYPE,
+ SMD_LOOPBACK_TYPE = 100,
+
+};
+
+/*
+ * SMD IRQ Configuration
+ *
+ * Used to initialize IRQ configurations from platform data
+ *
+ * @irq_name: irq_name to query platform data
+ * @irq_id: initialized to -1 in platform data, stores actual irq id on
+ * successful registration
+ * @out_base: if not null then settings used for outgoing interrupt
+ * initialized from platform data
+ */
+
+struct smd_irq_config {
+ /* incoming interrupt config */
+ const char *irq_name;
+ unsigned long flags;
+ int irq_id;
+ const char *device_name;
+ const void *dev_id;
+
+ /* outgoing interrupt config */
+ uint32_t out_bit_pos;
+ void __iomem *out_base;
+ uint32_t out_offset;
+};
+
+/*
+ * SMD subsystem configurations
+ *
+ * SMD subsystems configurations for platform data. This contains the
+ * M2A and A2M interrupt configurations for both SMD and SMSM per
+ * subsystem.
+ *
+ * @subsys_name: name of subsystem passed to PIL
+ * @irq_config_id: unique id for each subsystem
+ * @edge: maps to actual remote subsystem edge
+ *
+ */
+struct smd_subsystem_config {
+ unsigned irq_config_id;
+ const char *subsys_name;
+ int edge;
+
+ struct smd_irq_config smd_int;
+ struct smd_irq_config smsm_int;
+
+};
+
+/*
+ * Subsystem Restart Configuration
+ *
+ * @disable_smsm_reset_handshake
+ */
+struct smd_subsystem_restart_config {
+ int disable_smsm_reset_handshake;
+};
+
+/*
+ * Shared Memory Regions
+ *
+ * the array of these regions is expected to be in ascending order by phys_addr
+ *
+ * @phys_addr: physical base address of the region
+ * @size: size of the region in bytes
+ */
+struct smd_smem_regions {
+ void *phys_addr;
+ unsigned size;
+};
+
+struct smd_platform {
+ uint32_t num_ss_configs;
+ struct smd_subsystem_config *smd_ss_configs;
+ struct smd_subsystem_restart_config *smd_ssr_config;
+ uint32_t num_smem_areas;
+ struct smd_smem_regions *smd_smem_areas;
+};
+
+#ifdef CONFIG_MSM_SMD
+/* warning: notify() may be called before open returns */
+int smd_open(const char *name, smd_channel_t ** ch, void *priv, void (*notify) (void *priv, unsigned event));
+
+int smd_close(smd_channel_t * ch);
+
+/* passing a null pointer for data reads and discards */
+int smd_read(smd_channel_t * ch, void *data, int len);
+int smd_read_from_cb(smd_channel_t * ch, void *data, int len);
+/* Same as smd_read() but takes a data buffer from userspace
+ * The function might sleep. Only safe to call from user context
+ */
+int smd_read_user_buffer(smd_channel_t * ch, void *data, int len);
+
+/* Write to stream channels may do a partial write and return
+** the length actually written.
+** Write to packet channels will never do a partial write --
+** it will return the requested length written or an error.
+*/
+int smd_write(smd_channel_t * ch, const void *data, int len);
+/* Same as smd_write() but takes a data buffer from userspace
+ * The function might sleep. Only safe to call from user context
+ */
+int smd_write_user_buffer(smd_channel_t * ch, const void *data, int len);
+
+int smd_write_avail(smd_channel_t * ch);
+int smd_read_avail(smd_channel_t * ch);
+
+/* Returns the total size of the current packet being read.
+** Returns 0 if no packets available or a stream channel.
+*/
+int smd_cur_packet_size(smd_channel_t * ch);
+
+#if 0
+/* these are interruptible waits which will block you until the specified
+** number of bytes are readable or writable.
+*/
+int smd_wait_until_readable(smd_channel_t * ch, int bytes);
+int smd_wait_until_writable(smd_channel_t * ch, int bytes);
+#endif
+
+/* these are used to get and set the IF sigs of a channel.
+ * DTR and RTS can be set; DSR, CTS, CD and RI can be read.
+ */
+int smd_tiocmget(smd_channel_t * ch);
+int smd_tiocmset(smd_channel_t * ch, unsigned int set, unsigned int clear);
+int smd_tiocmset_from_cb(smd_channel_t * ch, unsigned int set, unsigned int clear);
+int smd_named_open_on_edge(const char *name, uint32_t edge, smd_channel_t ** _ch, void *priv, void (*notify) (void *, unsigned));
+
+/* Tells the other end of the smd channel that this end wants to receive
+ * interrupts when the written data is read. Read interrupts should only be
+ * enabled when there is no space left in the buffer to write to, thus the
+ * interrupt acts as notification that space may be available. If the
+ * other side does not support enabling/disabling interrupts on demand,
+ * then this function has no effect if called.
+ */
+void smd_enable_read_intr(smd_channel_t * ch);
+
+/* Tells the other end of the smd channel that this end does not want
+ * interrupts when written data is read. The interrupts should be
+ * disabled by default. If the other side does not support enabling/
+ * disabling interrupts on demand, then this function has no effect if
+ * called.
+ */
+void smd_disable_read_intr(smd_channel_t * ch);
+
+/* Starts a packet transaction. The size of the packet may exceed the total
+ * size of the smd ring buffer.
+ *
+ * @ch: channel to write the packet to
+ * @len: total length of the packet
+ *
+ * Returns:
+ * 0 - success
+ * -ENODEV - invalid smd channel
+ * -EACCES - non-packet channel specified
+ * -EINVAL - invalid length
+ * -EBUSY - transaction already in progress
+ * -EAGAIN - not enough memory in ring buffer to start transaction
+ * -EPERM - unable to successfully start transaction due to write error
+ */
+int smd_write_start(smd_channel_t * ch, int len);
+
+/* Writes a segment of the packet for a packet transaction.
+ *
+ * @ch: channel to write packet to
+ * @data: buffer of data to write
+ * @len: length of data buffer
+ * @user_buf: (0) - buffer from kernelspace (1) - buffer from userspace
+ *
+ * Returns:
+ * number of bytes written
+ * -ENODEV - invalid smd channel
+ * -EINVAL - invalid length
+ * -ENOEXEC - transaction not started
+ */
+int smd_write_segment(smd_channel_t * ch, void *data, int len, int user_buf);
+
+/* Completes a packet transaction. Do not call from interrupt context.
+ *
+ * @ch: channel to complete transaction on
+ *
+ * Returns:
+ * 0 - success
+ * -ENODEV - invalid smd channel
+ * -E2BIG - some amount of the packet is not yet written
+ */
+int smd_write_end(smd_channel_t * ch);
+
+/*
+ * Returns a pointer to the subsystem name or NULL if no
+ * subsystem name is available.
+ *
+ * @type - Edge definition
+ */
+const char *smd_edge_to_subsystem(uint32_t type);
+
+/*
+ * Returns a pointer to the subsystem name given the
+ * remote processor ID.
+ *
+ * @pid Remote processor ID
+ * @returns Pointer to subsystem name or NULL if not found
+ */
+const char *smd_pid_to_subsystem(uint32_t pid);
+
+/*
+ * Checks to see if a new packet has arrived on the channel. Only to be
+ * called with interrupts disabled.
+ *
+ * @ch: channel to check if a packet has arrived
+ *
+ * Returns:
+ * 0 - packet not available
+ * 1 - packet available
+ * -EINVAL - NULL parameter or non-packet based channel provided
+ */
+int smd_is_pkt_avail(smd_channel_t * ch);
+#else
+
+static inline int smd_open(const char *name, smd_channel_t ** ch, void *priv, void (*notify) (void *priv, unsigned event))
+{
+ return -ENODEV;
+}
+
+static inline int smd_close(smd_channel_t * ch)
+{
+ return -ENODEV;
+}
+
+static inline int smd_read(smd_channel_t * ch, void *data, int len)
+{
+ return -ENODEV;
+}
+
+static inline int smd_read_from_cb(smd_channel_t * ch, void *data, int len)
+{
+ return -ENODEV;
+}
+
+static inline int smd_read_user_buffer(smd_channel_t * ch, void *data, int len)
+{
+ return -ENODEV;
+}
+
+static inline int smd_write(smd_channel_t * ch, const void *data, int len)
+{
+ return -ENODEV;
+}
+
+static inline int smd_write_user_buffer(smd_channel_t * ch, const void *data, int len)
+{
+ return -ENODEV;
+}
+
+static inline int smd_write_avail(smd_channel_t * ch)
+{
+ return -ENODEV;
+}
+
+static inline int smd_read_avail(smd_channel_t * ch)
+{
+ return -ENODEV;
+}
+
+static inline int smd_cur_packet_size(smd_channel_t * ch)
+{
+ return -ENODEV;
+}
+
+static inline int smd_tiocmget(smd_channel_t * ch)
+{
+ return -ENODEV;
+}
+
+static inline int smd_tiocmset(smd_channel_t * ch, unsigned int set, unsigned int clear)
+{
+ return -ENODEV;
+}
+
+static inline int smd_tiocmset_from_cb(smd_channel_t * ch, unsigned int set, unsigned int clear)
+{
+ return -ENODEV;
+}
+
+static inline int smd_named_open_on_edge(const char *name, uint32_t edge, smd_channel_t ** _ch, void *priv, void (*notify) (void *, unsigned))
+{
+ return -ENODEV;
+}
+
+static inline void smd_enable_read_intr(smd_channel_t * ch)
+{
+}
+
+static inline void smd_disable_read_intr(smd_channel_t * ch)
+{
+}
+
+static inline int smd_write_start(smd_channel_t * ch, int len)
+{
+ return -ENODEV;
+}
+
+static inline int smd_write_segment(smd_channel_t * ch, void *data, int len, int user_buf)
+{
+ return -ENODEV;
+}
+
+static inline int smd_write_end(smd_channel_t * ch)
+{
+ return -ENODEV;
+}
+
+static inline const char *smd_edge_to_subsystem(uint32_t type)
+{
+ return NULL;
+}
+
+static inline const char *smd_pid_to_subsystem(uint32_t pid)
+{
+ return NULL;
+}
+
+static inline int smd_is_pkt_avail(smd_channel_t * ch)
+{
+ return -ENODEV;
+}
+#endif
+
+#endif
diff --git a/arch/arm/mach-tegra/include/mach/msm_smsm.h b/arch/arm/mach-tegra/include/mach/msm_smsm.h
new file mode 100644
index 0000000..0dfe400
--- /dev/null
+++ b/arch/arm/mach-tegra/include/mach/msm_smsm.h
@@ -0,0 +1,241 @@
+/* Copyright (c) 2011-2012, Code Aurora Forum. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _ARCH_ARM_MACH_MSM_SMSM_H_
+#define _ARCH_ARM_MACH_MSM_SMSM_H_
+
+#include <linux/notifier.h>
+#if defined(CONFIG_MSM_N_WAY_SMSM)
+enum {
+ SMSM_APPS_STATE,
+ SMSM_MODEM_STATE,
+ SMSM_Q6_STATE,
+ SMSM_APPS_DEM,
+ SMSM_WCNSS_STATE = SMSM_APPS_DEM,
+ SMSM_MODEM_DEM,
+ SMSM_DSPS_STATE = SMSM_MODEM_DEM,
+ SMSM_Q6_DEM,
+ SMSM_POWER_MASTER_DEM,
+ SMSM_TIME_MASTER_DEM,
+};
+extern uint32_t SMSM_NUM_ENTRIES;
+#else
+enum {
+ SMSM_APPS_STATE = 1,
+ SMSM_MODEM_STATE = 3,
+ SMSM_NUM_ENTRIES,
+};
+#endif
+
+enum {
+ SMSM_APPS,
+ SMSM_MODEM,
+ SMSM_Q6,
+ SMSM_WCNSS,
+ SMSM_DSPS,
+};
+extern uint32_t SMSM_NUM_HOSTS;
+
+#define SMSM_INIT 0x00000001
+#define SMSM_OSENTERED 0x00000002
+#define SMSM_SMDWAIT 0x00000004
+#define SMSM_SMDINIT 0x00000008
+#define SMSM_RPCWAIT 0x00000010
+#define SMSM_RPCINIT 0x00000020
+#define SMSM_RESET 0x00000040
+#define SMSM_RSA 0x00000080
+#define SMSM_RUN 0x00000100
+#define SMSM_PWRC 0x00000200
+#define SMSM_TIMEWAIT 0x00000400
+#define SMSM_TIMEINIT 0x00000800
+#define SMSM_PWRC_EARLY_EXIT 0x00001000
+#define SMSM_WFPI 0x00002000
+#define SMSM_SLEEP 0x00004000
+#define SMSM_SLEEPEXIT 0x00008000
+#define SMSM_OEMSBL_RELEASE 0x00010000
+#define SMSM_APPS_REBOOT 0x00020000
+#define SMSM_SYSTEM_POWER_DOWN 0x00040000
+#define SMSM_SYSTEM_REBOOT 0x00080000
+#define SMSM_SYSTEM_DOWNLOAD 0x00100000
+#define SMSM_PWRC_SUSPEND 0x00200000
+#define SMSM_APPS_SHUTDOWN 0x00400000
+#define SMSM_SMD_LOOPBACK 0x00800000
+#define SMSM_RUN_QUIET 0x01000000
+#define SMSM_MODEM_WAIT 0x02000000
+#define SMSM_MODEM_BREAK 0x04000000
+#define SMSM_MODEM_CONTINUE 0x08000000
+#define SMSM_SYSTEM_REBOOT_USR 0x20000000
+#define SMSM_SYSTEM_PWRDWN_USR 0x40000000
+#define SMSM_UNKNOWN 0x80000000
+
+#define SMSM_WKUP_REASON_RPC 0x00000001
+#define SMSM_WKUP_REASON_INT 0x00000002
+#define SMSM_WKUP_REASON_GPIO 0x00000004
+#define SMSM_WKUP_REASON_TIMER 0x00000008
+#define SMSM_WKUP_REASON_ALARM 0x00000010
+#define SMSM_WKUP_REASON_RESET 0x00000020
+#define SMSM_A2_FORCE_SHUTDOWN 0x00002000
+#define SMSM_A2_RESET_BAM 0x00004000
+
+#define SMSM_VENDOR 0x00020000
+
+#define SMSM_A2_POWER_CONTROL 0x00000002
+#define SMSM_A2_POWER_CONTROL_ACK 0x00000800
+
+#define SMSM_WLAN_TX_RINGS_EMPTY 0x00000200
+#define SMSM_WLAN_TX_ENABLE 0x00000400
+
+#define SMSM_ERR_SRV_READY 0x00008000
+
+void *smem_alloc(unsigned id, unsigned size);
+void *smem_alloc2(unsigned id, unsigned size_in);
+void *smem_get_entry(unsigned id, unsigned *size);
+int smsm_change_state(uint32_t smsm_entry, uint32_t clear_mask, uint32_t set_mask);
+
+/*
+ * Changes the global interrupt mask. The set and clear masks are re-applied
+ * every time the global interrupt mask is updated for callback registration
+ * and de-registration.
+ *
+ * The clear mask is applied first, so if a bit is set to 1 in both the clear
+ * mask and the set mask, the result will be that the interrupt is set.
+ *
+ * @smsm_entry SMSM entry to change
+ * @clear_mask 1 = clear bit, 0 = no-op
+ * @set_mask 1 = set bit, 0 = no-op
+ *
+ * @returns 0 for success, < 0 for error
+ */
+int smsm_change_intr_mask(uint32_t smsm_entry, uint32_t clear_mask, uint32_t set_mask);
+int smsm_get_intr_mask(uint32_t smsm_entry, uint32_t *intr_mask);
+uint32_t smsm_get_state(uint32_t smsm_entry);
+int smsm_state_cb_register(uint32_t smsm_entry, uint32_t mask, void (*notify)(void *, uint32_t old_state, uint32_t new_state), void *data);
+int smsm_state_cb_deregister(uint32_t smsm_entry, uint32_t mask, void (*notify)(void *, uint32_t, uint32_t), void *data);
+int smsm_driver_state_notifier_register(struct notifier_block *nb);
+int smsm_driver_state_notifier_unregister(struct notifier_block *nb);
+void smsm_print_sleep_info(uint32_t sleep_delay, uint32_t sleep_limit, uint32_t irq_mask, uint32_t wakeup_reason, uint32_t pending_irqs);
+void smsm_reset_modem(unsigned mode);
+void smsm_reset_modem_cont(void);
+void smd_sleep_exit(void);
+
+#define SMEM_NUM_SMD_STREAM_CHANNELS 64
+#define SMEM_NUM_SMD_BLOCK_CHANNELS 64
+
+enum {
+ /* fixed items */
+ SMEM_PROC_COMM = 0,
+ SMEM_HEAP_INFO,
+ SMEM_ALLOCATION_TABLE,
+ SMEM_VERSION_INFO,
+ SMEM_HW_RESET_DETECT,
+ SMEM_AARM_WARM_BOOT,
+ SMEM_DIAG_ERR_MESSAGE,
+ SMEM_SPINLOCK_ARRAY,
+ SMEM_MEMORY_BARRIER_LOCATION,
+ SMEM_FIXED_ITEM_LAST = SMEM_MEMORY_BARRIER_LOCATION,
+
+ /* dynamic items */
+ SMEM_AARM_PARTITION_TABLE,
+ SMEM_AARM_BAD_BLOCK_TABLE,
+ SMEM_RESERVE_BAD_BLOCKS,
+ SMEM_WM_UUID,
+ SMEM_CHANNEL_ALLOC_TBL,
+ SMEM_SMD_BASE_ID,
+ SMEM_SMEM_LOG_IDX = SMEM_SMD_BASE_ID + SMEM_NUM_SMD_STREAM_CHANNELS,
+ SMEM_SMEM_LOG_EVENTS,
+ SMEM_SMEM_STATIC_LOG_IDX,
+ SMEM_SMEM_STATIC_LOG_EVENTS,
+ SMEM_SMEM_SLOW_CLOCK_SYNC,
+ SMEM_SMEM_SLOW_CLOCK_VALUE,
+ SMEM_BIO_LED_BUF,
+ SMEM_SMSM_SHARED_STATE,
+ SMEM_SMSM_INT_INFO,
+ SMEM_SMSM_SLEEP_DELAY,
+ SMEM_SMSM_LIMIT_SLEEP,
+ SMEM_SLEEP_POWER_COLLAPSE_DISABLED,
+ SMEM_KEYPAD_KEYS_PRESSED,
+ SMEM_KEYPAD_STATE_UPDATED,
+ SMEM_KEYPAD_STATE_IDX,
+ SMEM_GPIO_INT,
+ SMEM_MDDI_LCD_IDX,
+ SMEM_MDDI_HOST_DRIVER_STATE,
+ SMEM_MDDI_LCD_DISP_STATE,
+ SMEM_LCD_CUR_PANEL,
+ SMEM_MARM_BOOT_SEGMENT_INFO,
+ SMEM_AARM_BOOT_SEGMENT_INFO,
+ SMEM_SLEEP_STATIC,
+ SMEM_SCORPION_FREQUENCY,
+ SMEM_SMD_PROFILES,
+ SMEM_TSSC_BUSY,
+ SMEM_HS_SUSPEND_FILTER_INFO,
+ SMEM_BATT_INFO,
+ SMEM_APPS_BOOT_MODE,
+ SMEM_VERSION_FIRST,
+ SMEM_VERSION_SMD = SMEM_VERSION_FIRST,
+ SMEM_VERSION_LAST = SMEM_VERSION_FIRST + 24,
+ SMEM_OSS_RRCASN1_BUF1,
+ SMEM_OSS_RRCASN1_BUF2,
+ SMEM_ID_VENDOR0,
+ SMEM_ID_VENDOR1,
+ SMEM_ID_VENDOR2,
+ SMEM_HW_SW_BUILD_ID,
+ SMEM_SMD_BLOCK_PORT_BASE_ID,
+ SMEM_SMD_BLOCK_PORT_PROC0_HEAP = SMEM_SMD_BLOCK_PORT_BASE_ID + SMEM_NUM_SMD_BLOCK_CHANNELS,
+ SMEM_SMD_BLOCK_PORT_PROC1_HEAP = SMEM_SMD_BLOCK_PORT_PROC0_HEAP + SMEM_NUM_SMD_BLOCK_CHANNELS,
+ SMEM_I2C_MUTEX = SMEM_SMD_BLOCK_PORT_PROC1_HEAP + SMEM_NUM_SMD_BLOCK_CHANNELS,
+ SMEM_SCLK_CONVERSION,
+ SMEM_SMD_SMSM_INTR_MUX,
+ SMEM_SMSM_CPU_INTR_MASK,
+ SMEM_APPS_DEM_SLAVE_DATA,
+ SMEM_QDSP6_DEM_SLAVE_DATA,
+ SMEM_CLKREGIM_BSP,
+ SMEM_CLKREGIM_SOURCES,
+ SMEM_SMD_FIFO_BASE_ID,
+ SMEM_USABLE_RAM_PARTITION_TABLE = SMEM_SMD_FIFO_BASE_ID + SMEM_NUM_SMD_STREAM_CHANNELS,
+ SMEM_POWER_ON_STATUS_INFO,
+ SMEM_DAL_AREA,
+ SMEM_SMEM_LOG_POWER_IDX,
+ SMEM_SMEM_LOG_POWER_WRAP,
+ SMEM_SMEM_LOG_POWER_EVENTS,
+ SMEM_ERR_CRASH_LOG,
+ SMEM_ERR_F3_TRACE_LOG,
+ SMEM_SMD_BRIDGE_ALLOC_TABLE,
+ SMEM_SMDLITE_TABLE,
+ SMEM_SD_IMG_UPGRADE_STATUS,
+ SMEM_SEFS_INFO,
+ SMEM_RESET_LOG,
+ SMEM_RESET_LOG_SYMBOLS,
+ SMEM_MODEM_SW_BUILD_ID,
+ SMEM_SMEM_LOG_MPROC_WRAP,
+ SMEM_BOOT_INFO_FOR_APPS,
+ SMEM_SMSM_SIZE_INFO,
+ SMEM_SMD_LOOPBACK_REGISTER,
+ SMEM_SSR_REASON_MSS0,
+ SMEM_SSR_REASON_WCNSS0,
+ SMEM_SSR_REASON_LPASS0,
+ SMEM_SSR_REASON_DSPS0,
+ SMEM_SSR_REASON_VCODEC0,
+ SMEM_MEM_LAST = SMEM_SSR_REASON_VCODEC0,
+ SMEM_NUM_ITEMS,
+};
+
+enum {
+ SMEM_APPS_Q6_SMSM = 3,
+ SMEM_Q6_APPS_SMSM = 5,
+ SMSM_NUM_INTR_MUX = 8,
+};
+
+int smsm_check_for_modem_crash(void);
+void *smem_find(unsigned id, unsigned size);
+void *smem_get_entry(unsigned id, unsigned *size);
+
+#endif
diff --git a/arch/arm/mach-tegra/include/mach/restart.h b/arch/arm/mach-tegra/include/mach/restart.h
new file mode 100644
index 0000000..44b5c9f
--- /dev/null
+++ b/arch/arm/mach-tegra/include/mach/restart.h
@@ -0,0 +1,31 @@
+/* Copyright (c) 2011, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef _ASM_ARCH_MSM_RESTART_H_
+#define _ASM_ARCH_MSM_RESTART_H_
+
+#define RESTART_NORMAL 0x0
+#define RESTART_DLOAD 0x1
+
+#if defined(CONFIG_MSM_NATIVE_RESTART)
+void msm_set_restart_mode(int mode);
+void msm_restart(char mode, const char *cmd);
+#elif defined(CONFIG_ARCH_FSM9XXX)
+void fsm_restart(char mode, const char *cmd);
+#else
+#define msm_set_restart_mode(mode)
+#endif
+
+extern int pmic_reset_irq;
+
+#endif
diff --git a/arch/arm/mach-tegra/include/mach/socinfo.h b/arch/arm/mach-tegra/include/mach/socinfo.h
new file mode 100644
index 0000000..2ff0cf9
--- /dev/null
+++ b/arch/arm/mach-tegra/include/mach/socinfo.h
@@ -0,0 +1,547 @@
+/* Copyright (c) 2009-2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef _ARCH_ARM_MACH_MSM_SOCINFO_H_
+#define _ARCH_ARM_MACH_MSM_SOCINFO_H_
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/of_fdt.h>
+#include <linux/of.h>
+
+#include <asm/cputype.h>
+#include <asm/mach-types.h>
+/*
+ * SOC version type with major number in the upper 16 bits and minor
+ * number in the lower 16 bits. For example:
+ * 1.0 -> 0x00010000
+ * 2.3 -> 0x00020003
+ */
+#define SOCINFO_VERSION_MAJOR(ver) ((ver & 0xffff0000) >> 16)
+#define SOCINFO_VERSION_MINOR(ver) (ver & 0x0000ffff)
+
+#ifdef CONFIG_OF
+#define of_board_is_cdp() of_machine_is_compatible("qcom,cdp")
+#define of_board_is_sim() of_machine_is_compatible("qcom,sim")
+#define of_board_is_rumi() of_machine_is_compatible("qcom,rumi")
+#define of_board_is_fluid() of_machine_is_compatible("qcom,fluid")
+#define of_board_is_liquid() of_machine_is_compatible("qcom,liquid")
+#define of_board_is_dragonboard() \
+ of_machine_is_compatible("qcom,dragonboard")
+#define of_board_is_mtp() of_machine_is_compatible("qcom,mtp")
+#define of_board_is_qrd() of_machine_is_compatible("qcom,qrd")
+#define of_board_is_xpm() of_machine_is_compatible("qcom,xpm")
+#define of_board_is_skuf() of_machine_is_compatible("qcom,skuf")
+
+#define machine_is_msm8974() of_machine_is_compatible("qcom,msm8974")
+#define machine_is_msm9625() of_machine_is_compatible("qcom,msm9625")
+#define machine_is_msm8610() of_machine_is_compatible("qcom,msm8610")
+#define machine_is_msm8226() of_machine_is_compatible("qcom,msm8226")
+#define machine_is_apq8074() of_machine_is_compatible("qcom,apq8074")
+#define machine_is_msm8926() of_machine_is_compatible("qcom,msm8926")
+
+#define early_machine_is_msm8610() \
+ of_flat_dt_is_compatible(of_get_flat_dt_root(), "qcom,msm8610")
+#define early_machine_is_mpq8092() \
+ of_flat_dt_is_compatible(of_get_flat_dt_root(), "qcom,mpq8092")
+#define early_machine_is_apq8084() \
+ of_flat_dt_is_compatible(of_get_flat_dt_root(), "qcom,apq8084")
+#define early_machine_is_msmkrypton() \
+ of_flat_dt_is_compatible(of_get_flat_dt_root(), "qcom,msmkrypton")
+#define early_machine_is_fsm9900() \
+ of_flat_dt_is_compatible(of_get_flat_dt_root(), "qcom,fsm9900")
+#define early_machine_is_msmsamarium() \
+ of_flat_dt_is_compatible(of_get_flat_dt_root(), "qcom,msmsamarium")
+#else
+#define of_board_is_sim() 0
+#define of_board_is_rumi() 0
+#define of_board_is_fluid() 0
+#define of_board_is_liquid() 0
+#define of_board_is_dragonboard() 0
+#define of_board_is_cdp() 0
+#define of_board_is_mtp() 0
+#define of_board_is_qrd() 0
+#define of_board_is_xpm() 0
+#define of_board_is_skuf() 0
+
+#define machine_is_msm8974() 0
+#define machine_is_msm9625() 0
+#define machine_is_msm8610() 0
+#define machine_is_msm8226() 0
+#define machine_is_apq8074() 0
+#define machine_is_msm8926() 0
+
+#define early_machine_is_msm8610() 0
+#define early_machine_is_mpq8092() 0
+#define early_machine_is_apq8084() 0
+#define early_machine_is_msmkrypton() 0
+#define early_machine_is_fsm9900() 0
+#define early_machine_is_msmsamarium() 0
+#endif
+
+enum msm_cpu {
+ MSM_CPU_UNKNOWN = 0,
+ MSM_CPU_7X01,
+ MSM_CPU_7X25,
+ MSM_CPU_7X27,
+ MSM_CPU_8X50,
+ MSM_CPU_8X50A,
+ MSM_CPU_7X30,
+ MSM_CPU_8X55,
+ MSM_CPU_8X60,
+ MSM_CPU_8960,
+ MSM_CPU_8960AB,
+ MSM_CPU_7X27A,
+ FSM_CPU_9XXX,
+ MSM_CPU_7X25A,
+ MSM_CPU_7X25AA,
+ MSM_CPU_7X25AB,
+ MSM_CPU_8064,
+ MSM_CPU_8064AB,
+ MSM_CPU_8064AA,
+ MSM_CPU_8930,
+ MSM_CPU_8930AA,
+ MSM_CPU_8930AB,
+ MSM_CPU_7X27AA,
+ MSM_CPU_9615,
+ MSM_CPU_8974,
+ MSM_CPU_8974PRO_AA,
+ MSM_CPU_8974PRO_AB,
+ MSM_CPU_8974PRO_AC,
+ MSM_CPU_8627,
+ MSM_CPU_8625,
+ MSM_CPU_9625,
+ MSM_CPU_8092,
+ MSM_CPU_8226,
+ MSM_CPU_8610,
+ MSM_CPU_8625Q,
+ MSM_CPU_8084,
+ MSM_CPU_KRYPTON,
+ FSM_CPU_9900,
+ MSM_CPU_SAMARIUM,
+};
+
+enum pmic_model {
+ PMIC_MODEL_PM8058 = 13,
+ PMIC_MODEL_PM8028 = 14,
+ PMIC_MODEL_PM8901 = 15,
+ PMIC_MODEL_PM8027 = 16,
+ PMIC_MODEL_ISL_9519 = 17,
+ PMIC_MODEL_PM8921 = 18,
+ PMIC_MODEL_PM8018 = 19,
+ PMIC_MODEL_PM8015 = 20,
+ PMIC_MODEL_PM8014 = 21,
+ PMIC_MODEL_PM8821 = 22,
+ PMIC_MODEL_PM8038 = 23,
+ PMIC_MODEL_PM8922 = 24,
+ PMIC_MODEL_PM8917 = 25,
+ PMIC_MODEL_UNKNOWN = 0xFFFFFFFF
+};
+
+enum msm_cpu socinfo_get_msm_cpu(void);
+uint32_t socinfo_get_id(void);
+uint32_t socinfo_get_version(void);
+uint32_t socinfo_get_raw_id(void);
+char *socinfo_get_build_id(void);
+uint32_t socinfo_get_platform_type(void);
+uint32_t socinfo_get_platform_subtype(void);
+uint32_t socinfo_get_platform_version(void);
+enum pmic_model socinfo_get_pmic_model(void);
+uint32_t socinfo_get_pmic_die_revision(void);
+int __init socinfo_init(void) __must_check;
+const int read_msm_cpu_type(void);
+const int get_core_count(void);
+const int cpu_is_krait(void);
+const int cpu_is_krait_v1(void);
+const int cpu_is_krait_v2(void);
+const int cpu_is_krait_v3(void);
+
+static inline int cpu_is_msm7x01(void)
+{
+#ifdef CONFIG_ARCH_MSM7X01A
+ enum msm_cpu cpu = socinfo_get_msm_cpu();
+
+ BUG_ON(cpu == MSM_CPU_UNKNOWN);
+ return cpu == MSM_CPU_7X01;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_msm7x25(void)
+{
+#ifdef CONFIG_ARCH_MSM7X25
+ enum msm_cpu cpu = socinfo_get_msm_cpu();
+
+ BUG_ON(cpu == MSM_CPU_UNKNOWN);
+ return cpu == MSM_CPU_7X25;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_msm7x27(void)
+{
+#if defined(CONFIG_ARCH_MSM7X27) && !defined(CONFIG_ARCH_MSM7X27A)
+ enum msm_cpu cpu = socinfo_get_msm_cpu();
+
+ BUG_ON(cpu == MSM_CPU_UNKNOWN);
+ return cpu == MSM_CPU_7X27;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_msm7x27a(void)
+{
+#ifdef CONFIG_ARCH_MSM7X27A
+ enum msm_cpu cpu = socinfo_get_msm_cpu();
+
+ BUG_ON(cpu == MSM_CPU_UNKNOWN);
+ return cpu == MSM_CPU_7X27A;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_msm7x27aa(void)
+{
+#ifdef CONFIG_ARCH_MSM7X27A
+ enum msm_cpu cpu = socinfo_get_msm_cpu();
+
+ BUG_ON(cpu == MSM_CPU_UNKNOWN);
+ return cpu == MSM_CPU_7X27AA;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_msm7x25a(void)
+{
+#ifdef CONFIG_ARCH_MSM7X27A
+ enum msm_cpu cpu = socinfo_get_msm_cpu();
+
+ BUG_ON(cpu == MSM_CPU_UNKNOWN);
+ return cpu == MSM_CPU_7X25A;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_msm7x25aa(void)
+{
+#ifdef CONFIG_ARCH_MSM7X27A
+ enum msm_cpu cpu = socinfo_get_msm_cpu();
+
+ BUG_ON(cpu == MSM_CPU_UNKNOWN);
+ return cpu == MSM_CPU_7X25AA;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_msm7x25ab(void)
+{
+#ifdef CONFIG_ARCH_MSM7X27A
+ enum msm_cpu cpu = socinfo_get_msm_cpu();
+
+ BUG_ON(cpu == MSM_CPU_UNKNOWN);
+ return cpu == MSM_CPU_7X25AB;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_msm7x30(void)
+{
+#ifdef CONFIG_ARCH_MSM7X30
+ enum msm_cpu cpu = socinfo_get_msm_cpu();
+
+ BUG_ON(cpu == MSM_CPU_UNKNOWN);
+ return cpu == MSM_CPU_7X30;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_qsd8x50(void)
+{
+#ifdef CONFIG_ARCH_QSD8X50
+ enum msm_cpu cpu = socinfo_get_msm_cpu();
+
+ BUG_ON(cpu == MSM_CPU_UNKNOWN);
+ return cpu == MSM_CPU_8X50;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_msm8x55(void)
+{
+#ifdef CONFIG_ARCH_MSM7X30
+ enum msm_cpu cpu = socinfo_get_msm_cpu();
+
+ BUG_ON(cpu == MSM_CPU_UNKNOWN);
+ return cpu == MSM_CPU_8X55;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_msm8x60(void)
+{
+#ifdef CONFIG_ARCH_MSM8X60
+ return read_msm_cpu_type() == MSM_CPU_8X60;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_msm8960(void)
+{
+#ifdef CONFIG_ARCH_MSM8960
+ return read_msm_cpu_type() == MSM_CPU_8960;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_msm8960ab(void)
+{
+#ifdef CONFIG_ARCH_MSM8960
+ return read_msm_cpu_type() == MSM_CPU_8960AB;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_apq8064(void)
+{
+#ifdef CONFIG_ARCH_APQ8064
+ return read_msm_cpu_type() == MSM_CPU_8064;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_apq8064ab(void)
+{
+#ifdef CONFIG_ARCH_APQ8064
+ return read_msm_cpu_type() == MSM_CPU_8064AB;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_apq8064aa(void)
+{
+#ifdef CONFIG_ARCH_APQ8064
+ return read_msm_cpu_type() == MSM_CPU_8064AA;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_msm8930(void)
+{
+#ifdef CONFIG_ARCH_MSM8930
+ return read_msm_cpu_type() == MSM_CPU_8930;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_msm8930aa(void)
+{
+#ifdef CONFIG_ARCH_MSM8930
+ return read_msm_cpu_type() == MSM_CPU_8930AA;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_msm8930ab(void)
+{
+#ifdef CONFIG_ARCH_MSM8930
+ return read_msm_cpu_type() == MSM_CPU_8930AB;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_msm8627(void)
+{
+/* 8930 and 8627 will share the same CONFIG_ARCH type unless otherwise needed */
+#ifdef CONFIG_ARCH_MSM8930
+ return read_msm_cpu_type() == MSM_CPU_8627;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_fsm9xxx(void)
+{
+#ifdef CONFIG_ARCH_FSM9XXX
+ enum msm_cpu cpu = socinfo_get_msm_cpu();
+
+ BUG_ON(cpu == MSM_CPU_UNKNOWN);
+ return cpu == FSM_CPU_9XXX;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_msm9615(void)
+{
+#ifdef CONFIG_ARCH_MSM9615
+ enum msm_cpu cpu = socinfo_get_msm_cpu();
+
+ BUG_ON(cpu == MSM_CPU_UNKNOWN);
+ return cpu == MSM_CPU_9615;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_msm8625(void)
+{
+#ifdef CONFIG_ARCH_MSM8625
+ enum msm_cpu cpu = socinfo_get_msm_cpu();
+
+ BUG_ON(cpu == MSM_CPU_UNKNOWN);
+ return cpu == MSM_CPU_8625;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_msm8974(void)
+{
+#ifdef CONFIG_ARCH_MSM8974
+ enum msm_cpu cpu = socinfo_get_msm_cpu();
+
+ BUG_ON(cpu == MSM_CPU_UNKNOWN);
+ return cpu == MSM_CPU_8974;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_msm8974pro_aa(void)
+{
+#ifdef CONFIG_ARCH_MSM8974
+ enum msm_cpu cpu = socinfo_get_msm_cpu();
+
+ BUG_ON(cpu == MSM_CPU_UNKNOWN);
+ return cpu == MSM_CPU_8974PRO_AA;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_msm8974pro_ab(void)
+{
+#ifdef CONFIG_ARCH_MSM8974
+ enum msm_cpu cpu = socinfo_get_msm_cpu();
+
+ BUG_ON(cpu == MSM_CPU_UNKNOWN);
+ return cpu == MSM_CPU_8974PRO_AB;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_msm8974pro_ac(void)
+{
+#ifdef CONFIG_ARCH_MSM8974
+ enum msm_cpu cpu = socinfo_get_msm_cpu();
+
+ BUG_ON(cpu == MSM_CPU_UNKNOWN);
+ return cpu == MSM_CPU_8974PRO_AC;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_mpq8092(void)
+{
+#ifdef CONFIG_ARCH_MPQ8092
+ enum msm_cpu cpu = socinfo_get_msm_cpu();
+
+ BUG_ON(cpu == MSM_CPU_UNKNOWN);
+ return cpu == MSM_CPU_8092;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_msm8226(void)
+{
+#ifdef CONFIG_ARCH_MSM8226
+ enum msm_cpu cpu = socinfo_get_msm_cpu();
+
+ BUG_ON(cpu == MSM_CPU_UNKNOWN);
+ return cpu == MSM_CPU_8226;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_msm8610(void)
+{
+#ifdef CONFIG_ARCH_MSM8610
+ enum msm_cpu cpu = socinfo_get_msm_cpu();
+
+ BUG_ON(cpu == MSM_CPU_UNKNOWN);
+ return cpu == MSM_CPU_8610;
+#else
+ return 0;
+#endif
+}
+
+static inline int cpu_is_msm8625q(void)
+{
+#ifdef CONFIG_ARCH_MSM8625
+ enum msm_cpu cpu = socinfo_get_msm_cpu();
+
+ BUG_ON(cpu == MSM_CPU_UNKNOWN);
+ return cpu == MSM_CPU_8625Q;
+#else
+ return 0;
+#endif
+}
+
+static inline int soc_class_is_msm8960(void)
+{
+ return cpu_is_msm8960() || cpu_is_msm8960ab();
+}
+
+static inline int soc_class_is_apq8064(void)
+{
+ return cpu_is_apq8064() || cpu_is_apq8064ab() || cpu_is_apq8064aa();
+}
+
+static inline int soc_class_is_msm8930(void)
+{
+ return cpu_is_msm8930() || cpu_is_msm8930aa() || cpu_is_msm8930ab() || cpu_is_msm8627();
+}
+
+static inline int soc_class_is_msm8974(void)
+{
+ return cpu_is_msm8974() || cpu_is_msm8974pro_aa() || cpu_is_msm8974pro_ab() || cpu_is_msm8974pro_ac();
+}
+
+#endif
diff --git a/arch/arm/mach-tegra/include/mach/tegra_asoc_pdata.h b/arch/arm/mach-tegra/include/mach/tegra_asoc_pdata.h
index 2f465b3..754feeb 100644
--- a/arch/arm/mach-tegra/include/mach/tegra_asoc_pdata.h
+++ b/arch/arm/mach-tegra/include/mach/tegra_asoc_pdata.h
@@ -15,6 +15,7 @@
*/
#define HIFI_CODEC 0
+#define SPEAKER 1
#define BASEBAND 1
#define BT_SCO 2
#define VOICE_CODEC 3
@@ -26,6 +27,13 @@
#define TEGRA_DAIFMT_RIGHT_J 3
#define TEGRA_DAIFMT_LEFT_J 4
+struct gpio_config {
+ const char *name;
+ int id;
+ int dir_in;
+ int pg;
+};
+
struct i2s_config {
int audio_port_id;
int is_i2s_master;
@@ -56,20 +64,32 @@
const char *codec_dai_name;
int num_links;
int gpio_spkr_en;
+ int gpio_spkr_ldo_en;
int gpio_hp_det;
int gpio_hp_det_active_high;
int gpio_hp_mute;
+ int gpio_hp_en;
+ int gpio_hp_ldo_en;
int gpio_int_mic_en;
int gpio_ext_mic_en;
int gpio_ldo1_en;
+ int gpio_ldo2_en;
+ int gpio_reset;
+ int gpio_irq1;
+ int gpio_wakeup;
int gpio_codec1;
int gpio_codec2;
int gpio_codec3;
+ struct gpio_config codec_mclk;
bool micbias_gpio_absent;
bool use_codec_jd_irq;
unsigned int debounce_time_hp;
bool edp_support;
unsigned int edp_states[TEGRA_SPK_EDP_NUM_STATES];
struct i2s_config i2s_param[NUM_I2S_DEVICES];
+ struct gpio_config i2s_set[NUM_I2S_DEVICES*4];
+ struct mutex i2s_gpio_lock[NUM_I2S_DEVICES];
+ int gpio_free_count[NUM_I2S_DEVICES];
+ bool first_time_free[NUM_I2S_DEVICES];
struct ahub_bbc1_config *ahub_bbc1_param;
};
diff --git a/arch/arm/mach-tegra/include/mach/tegra_usb_pad_ctrl.h b/arch/arm/mach-tegra/include/mach/tegra_usb_pad_ctrl.h
index 1c6178d..e25b49e 100644
--- a/arch/arm/mach-tegra/include/mach/tegra_usb_pad_ctrl.h
+++ b/arch/arm/mach-tegra/include/mach/tegra_usb_pad_ctrl.h
@@ -32,6 +32,17 @@
#define PCIE_LANES_X4_X0 2
#define PCIE_LANES_X2_X1 3
+/* UTMIP SPARE CFG */
+#define UTMIP_SPARE_CFG0 0x834
+#define HS_RX_IPG_ERROR_ENABLE (1 << 0)
+#define HS_RX_FLUSH_ALAP (1 << 1)
+#define HS_RX_LATE_SQUELCH (1 << 2)
+#define FUSE_SETUP_SEL (1 << 3)
+#define HS_TERM_RANGE_ADJ_SEL (1 << 4)
+#define FUSE_SPARE (1 << 5)
+#define FUSE_HS_SQUELCH_LEVEL (1 << 6)
+#define FUSE_HS_IREF_CAP_CFG (1 << 7)
+
/* xusb padctl regs for pad programming of t124 usb3 */
#define XUSB_PADCTL_IOPHY_PLL_S0_CTL1_0 0x138
#define XUSB_PADCTL_IOPHY_PLL_S0_CTL1_0_PLL0_REFCLK_NDIV_MASK (0x3 << 20)
diff --git a/arch/arm/mach-tegra/include/mach/usb_bridge.h b/arch/arm/mach-tegra/include/mach/usb_bridge.h
new file mode 100644
index 0000000..cb726c4
--- /dev/null
+++ b/arch/arm/mach-tegra/include/mach/usb_bridge.h
@@ -0,0 +1,156 @@
+/* Copyright (c) 2011-2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __LINUX_USB_BRIDGE_H__
+#define __LINUX_USB_BRIDGE_H__
+
+#include <linux/netdevice.h>
+#include <linux/usb.h>
+
+#define MAX_BRIDGE_DEVICES 4
+#define BRIDGE_NAME_MAX_LEN 20
+
+struct bridge_ops {
+ int (*send_pkt)(void *, void *, size_t actual);
+ void (*send_cbits)(void *, unsigned int);
+
+ /* flow control */
+ void (*unthrottle_tx)(void *);
+};
+
+#define TX_THROTTLED BIT(0)
+#define RX_THROTTLED BIT(1)
+
+struct bridge {
+ /* context of the gadget port using bridge driver */
+ void *ctx;
+
+ /* to map the bridge driver instance */
+ unsigned int ch_id;
+
+ /* to match against the bridge xport name to get the bridge driver instance */
+ char *name;
+
+ /* flow control bits */
+ unsigned long flags;
+
+ /* data/ctrl bridge callbacks */
+ struct bridge_ops ops;
+};
+
+/**
+ * timestamp_info: stores timestamp info for skb life cycle during data
+ * transfer for tethered rmnet/DUN.
+ * @created: stores timestamp at the time of creation of SKB.
+ * @rx_queued: stores timestamp when SKB queued to HW to receive
+ * data.
+ * @rx_done: stores timestamp when skb queued to h/w is completed.
+ * @rx_done_sent: stores timestamp when SKB is sent from gadget rmnet/DUN
+ * driver to bridge rmnet/DUN driver or vice versa.
+ * @tx_queued: stores timestamp when SKB is queued to send data.
+ *
+ * Note that the size of this struct must not exceed 48 bytes, which is the
+ * maximum that skb->cb holds.
+ */
+struct timestamp_info {
+ struct data_bridge *dev;
+
+ unsigned int created;
+ unsigned int rx_queued;
+ unsigned int rx_done;
+ unsigned int rx_done_sent;
+ unsigned int tx_queued;
+};
+
+/* Maximum timestamp message length */
+#define DBG_DATA_MSG 128UL
+
+/* Maximum timestamp messages */
+#define DBG_DATA_MAX 32UL
+
+/* timestamp buffer descriptor */
+struct timestamp_buf {
+ char (buf[DBG_DATA_MAX])[DBG_DATA_MSG]; /* buffer */
+ unsigned idx; /* index */
+ rwlock_t lck; /* lock */
+};
+
+#if defined(CONFIG_USB_QCOM_MDM_BRIDGE) || \
+ defined(CONFIG_USB_QCOM_MDM_BRIDGE_MODULE)
+
+/* Bridge APIs called by gadget driver */
+int ctrl_bridge_open(struct bridge *);
+void ctrl_bridge_close(unsigned int);
+int ctrl_bridge_write(unsigned int, char *, size_t);
+int ctrl_bridge_set_cbits(unsigned int, unsigned int);
+unsigned int ctrl_bridge_get_cbits_tohost(unsigned int);
+int data_bridge_open(struct bridge *brdg);
+void data_bridge_close(unsigned int);
+int data_bridge_write(unsigned int, struct sk_buff *);
+int data_bridge_unthrottle_rx(unsigned int);
+
+/* defined in control bridge */
+int ctrl_bridge_init(void);
+void ctrl_bridge_exit(void);
+int ctrl_bridge_probe(struct usb_interface *, struct usb_host_endpoint *, char *, int);
+void ctrl_bridge_disconnect(unsigned int);
+int ctrl_bridge_resume(unsigned int);
+int ctrl_bridge_suspend(unsigned int);
+
+#else
+
+static inline int __maybe_unused ctrl_bridge_open(struct bridge *brdg)
+{
+ return -ENODEV;
+}
+
+static inline void __maybe_unused ctrl_bridge_close(unsigned int id)
+{
+}
+
+static inline int __maybe_unused ctrl_bridge_write(unsigned int id, char *data, size_t size)
+{
+ return -ENODEV;
+}
+
+static inline int __maybe_unused ctrl_bridge_set_cbits(unsigned int id, unsigned int cbits)
+{
+ return -ENODEV;
+}
+
+static inline unsigned int __maybe_unused ctrl_bridge_get_cbits_tohost(unsigned int id)
+{
+ return -ENODEV;
+}
+
+static inline int __maybe_unused data_bridge_open(struct bridge *brdg)
+{
+ return -ENODEV;
+}
+
+static inline void __maybe_unused data_bridge_close(unsigned int id)
+{
+}
+
+static inline int __maybe_unused data_bridge_write(unsigned int id, struct sk_buff *skb)
+{
+ return -ENODEV;
+}
+
+static inline int __maybe_unused data_bridge_unthrottle_rx(unsigned int id)
+{
+ return -ENODEV;
+}
+
+#endif
+
+#endif
diff --git a/arch/arm/mach-tegra/include/mach/usb_gadget_xport.h b/arch/arm/mach-tegra/include/mach/usb_gadget_xport.h
new file mode 100644
index 0000000..fd3f0f8
--- /dev/null
+++ b/arch/arm/mach-tegra/include/mach/usb_gadget_xport.h
@@ -0,0 +1,75 @@
+/* Copyright (c) 2011, Code Aurora Forum. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef __LINUX_USB_GADGET_XPORT_H__
+#define __LINUX_USB_GADGET_XPORT_H__
+
+enum fserial_func_type {
+ USB_FSER_FUNC_NONE,
+ USB_FSER_FUNC_SERIAL,
+ USB_FSER_FUNC_MODEM,
+};
+
+enum transport_type {
+ USB_GADGET_XPORT_UNDEF,
+ USB_GADGET_XPORT_TTY,
+ USB_GADGET_XPORT_HSIC,
+ USB_GADGET_XPORT_NONE,
+};
+
+#define XPORT_STR_LEN 10
+
+static char *xport_to_str(enum transport_type t)
+{
+ switch (t) {
+ case USB_GADGET_XPORT_TTY:
+ return "TTY";
+ case USB_GADGET_XPORT_HSIC:
+ return "HSIC";
+ case USB_GADGET_XPORT_NONE:
+ return "NONE";
+ default:
+ return "UNDEFINED";
+ }
+}
+
+static enum transport_type str_to_xport(const char *name)
+{
+ if (!strncasecmp("TTY", name, XPORT_STR_LEN))
+ return USB_GADGET_XPORT_TTY;
+ if (!strncasecmp("HSIC", name, XPORT_STR_LEN))
+ return USB_GADGET_XPORT_HSIC;
+ if (!strncasecmp("", name, XPORT_STR_LEN))
+ return USB_GADGET_XPORT_NONE;
+
+ return USB_GADGET_XPORT_UNDEF;
+}
+
+enum gadget_type {
+ USB_GADGET_SERIAL,
+ USB_GADGET_RMNET,
+};
+
+#define NUM_RMNET_HSIC_PORTS 1
+#define NUM_DUN_HSIC_PORTS 1
+#define NUM_PORTS (NUM_RMNET_HSIC_PORTS \
+ + NUM_DUN_HSIC_PORTS)
+
+int ghsic_ctrl_connect(void *, int);
+void ghsic_ctrl_disconnect(void *, int);
+int ghsic_ctrl_setup(unsigned int, enum gadget_type);
+int ghsic_data_connect(void *, int);
+void ghsic_data_disconnect(void *, int);
+int ghsic_data_setup(unsigned int, enum gadget_type);
+
+#endif
diff --git a/arch/arm/mach-tegra/include/mach/usbdiag.h b/arch/arm/mach-tegra/include/mach/usbdiag.h
new file mode 100644
index 0000000..b9912b0
--- /dev/null
+++ b/arch/arm/mach-tegra/include/mach/usbdiag.h
@@ -0,0 +1,88 @@
+/* include/asm-arm/arch-msm/usbdiag.h
+ *
+ * Copyright (c) 2008-2010, 2012, The Linux Foundation. All rights reserved.
+ *
+ * All source code in this file is licensed under the following license except
+ * where indicated.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published
+ * by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * See the GNU General Public License for more details.
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, you can find it at http://www.fsf.org
+ */
+
+#ifndef _DRIVERS_USB_DIAG_H_
+#define _DRIVERS_USB_DIAG_H_
+
+#include <linux/err.h>
+
+#define DIAG_LEGACY "diag"
+#define DIAG_MDM "diag_mdm"
+#define DIAG_QSC "diag_qsc"
+#define DIAG_MDM2 "diag_mdm2"
+
+#define USB_DIAG_CONNECT 0
+#define USB_DIAG_DISCONNECT 1
+#define USB_DIAG_WRITE_DONE 2
+#define USB_DIAG_READ_DONE 3
+
+struct diag_request {
+ char *buf;
+ int length;
+ int actual;
+ int status;
+ void *context;
+};
+
+struct usb_diag_ch {
+ const char *name;
+ struct list_head list;
+ void (*notify)(void *priv, unsigned event, struct diag_request *d_req);
+ void *priv;
+ void *priv_usb;
+};
+
+#ifdef CONFIG_USB_G_ANDROID
+struct usb_diag_ch *usb_diag_open(const char *name, void *priv, void (*notify)(void *, unsigned, struct diag_request *));
+void usb_diag_close(struct usb_diag_ch *ch);
+int usb_diag_alloc_req(struct usb_diag_ch *ch, int n_write, int n_read);
+void usb_diag_free_req(struct usb_diag_ch *ch);
+int usb_diag_read(struct usb_diag_ch *ch, struct diag_request *d_req);
+int usb_diag_write(struct usb_diag_ch *ch, struct diag_request *d_req);
+#else
+static inline struct usb_diag_ch *usb_diag_open(const char *name, void *priv, void (*notify) (void *, unsigned, struct diag_request *))
+{
+ return ERR_PTR(-ENODEV);
+}
+
+static inline void usb_diag_close(struct usb_diag_ch *ch)
+{
+}
+
+static inline int usb_diag_alloc_req(struct usb_diag_ch *ch, int n_write, int n_read)
+{
+ return -ENODEV;
+}
+
+static inline void usb_diag_free_req(struct usb_diag_ch *ch)
+{
+}
+
+static inline int usb_diag_read(struct usb_diag_ch *ch, struct diag_request *d_req)
+{
+ return -ENODEV;
+}
+
+static inline int usb_diag_write(struct usb_diag_ch *ch, struct diag_request *d_req)
+{
+ return -ENODEV;
+}
+#endif /* CONFIG_USB_G_ANDROID */
+#endif /* _DRIVERS_USB_DIAG_H_ */
diff --git a/arch/arm/mach-tegra/panel-j-qxga-8-9.c b/arch/arm/mach-tegra/panel-j-qxga-8-9.c
new file mode 100644
index 0000000..0bc605c
--- /dev/null
+++ b/arch/arm/mach-tegra/panel-j-qxga-8-9.c
@@ -0,0 +1,596 @@
+/*
+ * arch/arm/mach-tegra/panel-j-qxga-8-9.c
+ *
+ * Copyright (c) 2012-2013, NVIDIA CORPORATION. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <mach/dc.h>
+#include <linux/delay.h>
+#include <linux/gpio.h>
+#include <linux/regulator/consumer.h>
+#include <linux/tegra_dsi_backlight.h>
+#include <linux/leds.h>
+#include <linux/ioport.h>
+#include <linux/export.h>
+#include <linux/of_gpio.h>
+
+#include <generated/mach-types.h>
+
+#include "board.h"
+#include "board-panel.h"
+#include "devices.h"
+#include "gpio-names.h"
+#include "tegra11_host1x_devices.h"
+
+#define TEGRA_DSI_GANGED_MODE 1
+
+#define DSI_PANEL_RESET 0
+
+#define DC_CTRL_MODE (TEGRA_DC_OUT_CONTINUOUS_MODE |\
+ TEGRA_DC_OUT_INITIALIZED_MODE)
+
+enum panel_gpios {
+ IOVDD_1V8 = 0,
+ AVDD_4V,
+ DCDC_EN,
+ LCM_RST,
+ NUM_PANEL_GPIOS,
+};
+
+static bool gpio_requested;
+static struct platform_device *disp_device;
+
+static int iovdd_1v8, avdd_4v, dcdc_en, lcm_rst;
+
+static struct gpio panel_init_gpios[] = {
+ {TEGRA_GPIO_PQ2, GPIOF_OUT_INIT_HIGH, "lcmio_1v8"},
+ {TEGRA_GPIO_PR0, GPIOF_OUT_INIT_HIGH, "avdd_4v"},
+ {TEGRA_GPIO_PEE5, GPIOF_OUT_INIT_HIGH, "dcdc_en"},
+ {TEGRA_GPIO_PH5, GPIOF_OUT_INIT_HIGH, "panel_rst"},
+};
+
+static struct tegra_dc_sd_settings dsi_j_qxga_8_9_sd_settings = {
+ .enable = 0, /* disabled by default. */
+ .use_auto_pwm = false,
+ .hw_update_delay = 0,
+ .bin_width = -1,
+ .aggressiveness = 5,
+ .use_vid_luma = false,
+ .phase_in_adjustments = 0,
+ .k_limit_enable = true,
+ .k_limit = 200,
+ .sd_window_enable = false,
+ .soft_clipping_enable = true,
+ /* Low soft clipping threshold to compensate for aggressive k_limit */
+ .soft_clipping_threshold = 128,
+ .smooth_k_enable = false,
+ .smooth_k_incr = 64,
+ /* Default video coefficients */
+ .coeff = {5, 9, 2},
+ .fc = {0, 0},
+ /* Immediate backlight changes */
+ .blp = {1024, 255},
+ /* Gammas: R: 2.2 G: 2.2 B: 2.2 */
+ /* Default BL TF */
+ .bltf = {
+ {
+ {57, 65, 73, 82},
+ {92, 103, 114, 125},
+ {138, 150, 164, 178},
+ {193, 208, 224, 241},
+ },
+ },
+ /* Default LUT */
+ .lut = {
+ {
+ {255, 255, 255},
+ {199, 199, 199},
+ {153, 153, 153},
+ {116, 116, 116},
+ {85, 85, 85},
+ {59, 59, 59},
+ {36, 36, 36},
+ {17, 17, 17},
+ {0, 0, 0},
+ },
+ },
+ .sd_brightness = &sd_brightness,
+ .use_vpulse2 = true,
+};
+
+static u8 ce[] = {0xCE, 0x5D, 0x40, 0x48, 0x56, 0x67, 0x78,
+ 0x88, 0x98, 0xA7, 0xB5, 0xC3, 0xD1, 0xDE,
+ 0xE9, 0xF2, 0xFA, 0xFF, 0x05, 0x00, 0x04,
+ 0x04, 0x00, 0x20};
+
+static struct tegra_dsi_cmd dsi_j_qxga_8_9_init_cmd[] = {
+ DSI_CMD_SHORT(DSI_GENERIC_SHORT_WRITE_2_PARAMS, 0xB0, 0x04),
+
+ DSI_CMD_LONG(DSI_GENERIC_LONG_WRITE, ce),
+
+ DSI_CMD_SHORT(DSI_GENERIC_SHORT_WRITE_2_PARAMS, 0xD6, 0x01),
+ DSI_CMD_SHORT(DSI_GENERIC_SHORT_WRITE_2_PARAMS, 0xB0, 0x03),
+
+ DSI_CMD_VBLANK_SHORT(DSI_DCS_WRITE_0_PARAM, DSI_DCS_EXIT_SLEEP_MODE, 0x0, CMD_NOT_CLUBBED),
+ DSI_DLY_MS(120),
+ DSI_CMD_VBLANK_SHORT(DSI_DCS_WRITE_1_PARAM, 0x53, 0x24, CMD_CLUBBED),
+ DSI_CMD_VBLANK_SHORT(DSI_DCS_WRITE_1_PARAM, 0x55, 0x00, CMD_CLUBBED),
+ DSI_CMD_VBLANK_SHORT(DSI_DCS_WRITE_1_PARAM, 0x35, 0x00, CMD_CLUBBED),
+ DSI_CMD_VBLANK_SHORT(DSI_DCS_WRITE_0_PARAM, DSI_DCS_SET_DISPLAY_ON, 0x0, CMD_CLUBBED),
+};
+
+static struct tegra_dsi_cmd dsi_j_qxga_8_9_suspend_cmd[] = {
+ DSI_CMD_VBLANK_SHORT(DSI_DCS_WRITE_1_PARAM, 0x51, 0x00, CMD_CLUBBED),
+ DSI_CMD_VBLANK_SHORT(DSI_DCS_WRITE_0_PARAM, DSI_DCS_SET_DISPLAY_OFF, 0x0, CMD_CLUBBED),
+ DSI_CMD_VBLANK_SHORT(DSI_DCS_WRITE_0_PARAM, DSI_DCS_ENTER_SLEEP_MODE, 0x0, CMD_CLUBBED),
+ DSI_SEND_FRAME(3),
+ DSI_CMD_VBLANK_SHORT(DSI_GENERIC_SHORT_WRITE_2_PARAMS, 0xB0, 0x04, CMD_CLUBBED),
+ DSI_CMD_VBLANK_SHORT(DSI_GENERIC_SHORT_WRITE_2_PARAMS, 0xB1, 0x01, CMD_CLUBBED),
+};
+
+static struct tegra_dsi_out dsi_j_qxga_8_9_pdata = {
+ .controller_vs = DSI_VS_1,
+
+ .n_data_lanes = 8,
+
+#if DC_CTRL_MODE & TEGRA_DC_OUT_ONE_SHOT_MODE
+ .video_data_type = TEGRA_DSI_VIDEO_TYPE_COMMAND_MODE,
+ .ganged_type = TEGRA_DSI_GANGED_SYMMETRIC_LEFT_RIGHT,
+ .suspend_aggr = DSI_HOST_SUSPEND_LV2,
+ .refresh_rate = 61,
+ .rated_refresh_rate = 60,
+ .te_polarity_low = true,
+#else
+ .ganged_type = TEGRA_DSI_GANGED_SYMMETRIC_LEFT_RIGHT,
+ .video_data_type = TEGRA_DSI_VIDEO_TYPE_VIDEO_MODE,
+ .video_burst_mode = TEGRA_DSI_VIDEO_NONE_BURST_MODE,
+ .refresh_rate = 60,
+#endif
+
+ .pixel_format = TEGRA_DSI_PIXEL_FORMAT_24BIT_P,
+ .virtual_channel = TEGRA_DSI_VIRTUAL_CHANNEL_0,
+
+ .panel_reset = DSI_PANEL_RESET,
+ .power_saving_suspend = true,
+ .video_clock_mode = TEGRA_DSI_VIDEO_CLOCK_TX_ONLY,
+ .dsi_init_cmd = dsi_j_qxga_8_9_init_cmd,
+ .n_init_cmd = ARRAY_SIZE(dsi_j_qxga_8_9_init_cmd),
+ .dsi_suspend_cmd = dsi_j_qxga_8_9_suspend_cmd,
+ .n_suspend_cmd = ARRAY_SIZE(dsi_j_qxga_8_9_suspend_cmd),
+ .lp00_pre_panel_wakeup = false,
+ .ulpm_not_supported = true,
+ .no_pkt_seq_hbp = false,
+};
+
+static int dsi_j_qxga_8_9_gpio_get(void)
+{
+ int err;
+
+ if (gpio_requested)
+ return 0;
+
+ err = gpio_request_array(panel_init_gpios, ARRAY_SIZE(panel_init_gpios));
+ if (err) {
+ pr_err("gpio array request failed\n");
+ return err;
+ }
+
+ gpio_requested = true;
+
+ return 0;
+}
+
+static int dsi_j_qxga_8_9_postpoweron(struct device *dev)
+{
+ int err;
+
+ err = dsi_j_qxga_8_9_gpio_get();
+ if (err) {
+ pr_err("failed to get panel gpios\n");
+ return err;
+ }
+
+ gpio_set_value(avdd_4v, 1);
+ usleep_range(1 * 1000, 1 * 1000 + 500);
+ gpio_set_value(dcdc_en, 1);
+ usleep_range(15 * 1000, 15 * 1000 + 500);
+ gpio_set_value(lcm_rst, 1);
+ usleep_range(15 * 1000, 15 * 1000 + 500);
+
+ return err;
+}
+
+static int dsi_j_qxga_8_9_enable(struct device *dev)
+{
+ gpio_set_value(iovdd_1v8, 1);
+ usleep_range(15 * 1000, 15 * 1000 + 500);
+ return 0;
+}
+
+static int dsi_j_qxga_8_9_disable(void)
+{
+ gpio_set_value(lcm_rst, 0);
+ msleep(1);
+ gpio_set_value(dcdc_en, 0);
+ msleep(15);
+ gpio_set_value(avdd_4v, 0);
+ gpio_set_value(iovdd_1v8, 0);
+ msleep(10);
+
+ return 0;
+}
+
+static int dsi_j_qxga_8_9_postsuspend(void)
+{
+ return 0;
+}
+
+static struct tegra_dc_mode dsi_j_qxga_8_9_modes[] = {
+ {
+#if DC_CTRL_MODE & TEGRA_DC_OUT_ONE_SHOT_MODE
+ .pclk = 294264000, /* @61Hz */
+ .h_ref_to_sync = 0,
+
+ /* dc constraint, min 1 */
+ .v_ref_to_sync = 1,
+
+ .h_sync_width = 32,
+
+ /* dc constraint, min 1 */
+ .v_sync_width = 1,
+
+ .h_back_porch = 80,
+
+ /* panel constraint, send frame after TE deassert */
+ .v_back_porch = 5,
+
+ .h_active = 2560,
+ .v_active = 1600,
+ .h_front_porch = 328,
+
+ /* dc constraint, min v_ref_to_sync + 1 */
+ .v_front_porch = 2,
+#else
+ .pclk = 247600000, /* @60Hz */
+ .h_ref_to_sync = 1,
+ .v_ref_to_sync = 1,
+ .h_sync_width = 76,
+ .v_sync_width = 4,
+ .h_back_porch = 80,
+ .v_back_porch = 8,
+ .h_active = 768 * 2,
+ .v_active = 2048,
+ .h_front_porch = 300,
+ .v_front_porch = 12,
+#endif
+ },
+};
+
+#ifdef CONFIG_TEGRA_DC_CMU
+static struct tegra_dc_cmu dsi_j_qxga_8_9_cmu = {
+ /* lut1 maps sRGB to linear space. */
+ {
+ 0, 1, 2, 4, 5, 6, 7, 9,
+ 10, 11, 12, 14, 15, 16, 18, 20,
+ 21, 23, 25, 27, 29, 31, 33, 35,
+ 37, 40, 42, 45, 48, 50, 53, 56,
+ 59, 62, 66, 69, 72, 76, 79, 83,
+ 87, 91, 95, 99, 103, 107, 112, 116,
+ 121, 126, 131, 136, 141, 146, 151, 156,
+ 162, 168, 173, 179, 185, 191, 197, 204,
+ 210, 216, 223, 230, 237, 244, 251, 258,
+ 265, 273, 280, 288, 296, 304, 312, 320,
+ 329, 337, 346, 354, 363, 372, 381, 390,
+ 400, 409, 419, 428, 438, 448, 458, 469,
+ 479, 490, 500, 511, 522, 533, 544, 555,
+ 567, 578, 590, 602, 614, 626, 639, 651,
+ 664, 676, 689, 702, 715, 728, 742, 755,
+ 769, 783, 797, 811, 825, 840, 854, 869,
+ 884, 899, 914, 929, 945, 960, 976, 992,
+ 1008, 1024, 1041, 1057, 1074, 1091, 1108, 1125,
+ 1142, 1159, 1177, 1195, 1213, 1231, 1249, 1267,
+ 1286, 1304, 1323, 1342, 1361, 1381, 1400, 1420,
+ 1440, 1459, 1480, 1500, 1520, 1541, 1562, 1582,
+ 1603, 1625, 1646, 1668, 1689, 1711, 1733, 1755,
+ 1778, 1800, 1823, 1846, 1869, 1892, 1916, 1939,
+ 1963, 1987, 2011, 2035, 2059, 2084, 2109, 2133,
+ 2159, 2184, 2209, 2235, 2260, 2286, 2312, 2339,
+ 2365, 2392, 2419, 2446, 2473, 2500, 2527, 2555,
+ 2583, 2611, 2639, 2668, 2696, 2725, 2754, 2783,
+ 2812, 2841, 2871, 2901, 2931, 2961, 2991, 3022,
+ 3052, 3083, 3114, 3146, 3177, 3209, 3240, 3272,
+ 3304, 3337, 3369, 3402, 3435, 3468, 3501, 3535,
+ 3568, 3602, 3636, 3670, 3705, 3739, 3774, 3809,
+ 3844, 3879, 3915, 3950, 3986, 4022, 4059, 4095,
+ },
+ /* csc */
+ {
+ 0x105, 0x3D5, 0x024, /* 1.021 -0.164 0.143 */
+ 0x3EA, 0x121, 0x3C1, /* -0.082 1.128 -0.245 */
+ 0x002, 0x00A, 0x0F4, /* 0.007 0.038 0.955 */
+ },
+ /* lut2 maps linear space to sRGB */
+ {
+ 0, 1, 2, 2, 3, 4, 5, 6,
+ 6, 7, 8, 9, 10, 10, 11, 12,
+ 13, 13, 14, 15, 15, 16, 16, 17,
+ 18, 18, 19, 19, 20, 20, 21, 21,
+ 22, 22, 23, 23, 23, 24, 24, 25,
+ 25, 25, 26, 26, 27, 27, 27, 28,
+ 28, 29, 29, 29, 30, 30, 30, 31,
+ 31, 31, 32, 32, 32, 33, 33, 33,
+ 34, 34, 34, 34, 35, 35, 35, 36,
+ 36, 36, 37, 37, 37, 37, 38, 38,
+ 38, 38, 39, 39, 39, 40, 40, 40,
+ 40, 41, 41, 41, 41, 42, 42, 42,
+ 42, 43, 43, 43, 43, 43, 44, 44,
+ 44, 44, 45, 45, 45, 45, 46, 46,
+ 46, 46, 46, 47, 47, 47, 47, 48,
+ 48, 48, 48, 48, 49, 49, 49, 49,
+ 49, 50, 50, 50, 50, 50, 51, 51,
+ 51, 51, 51, 52, 52, 52, 52, 52,
+ 53, 53, 53, 53, 53, 54, 54, 54,
+ 54, 54, 55, 55, 55, 55, 55, 55,
+ 56, 56, 56, 56, 56, 57, 57, 57,
+ 57, 57, 57, 58, 58, 58, 58, 58,
+ 58, 59, 59, 59, 59, 59, 59, 60,
+ 60, 60, 60, 60, 60, 61, 61, 61,
+ 61, 61, 61, 62, 62, 62, 62, 62,
+ 62, 63, 63, 63, 63, 63, 63, 64,
+ 64, 64, 64, 64, 64, 64, 65, 65,
+ 65, 65, 65, 65, 66, 66, 66, 66,
+ 66, 66, 66, 67, 67, 67, 67, 67,
+ 67, 67, 68, 68, 68, 68, 68, 68,
+ 68, 69, 69, 69, 69, 69, 69, 69,
+ 70, 70, 70, 70, 70, 70, 70, 71,
+ 71, 71, 71, 71, 71, 71, 72, 72,
+ 72, 72, 72, 72, 72, 72, 73, 73,
+ 73, 73, 73, 73, 73, 74, 74, 74,
+ 74, 74, 74, 74, 74, 75, 75, 75,
+ 75, 75, 75, 75, 75, 76, 76, 76,
+ 76, 76, 76, 76, 77, 77, 77, 77,
+ 77, 77, 77, 77, 78, 78, 78, 78,
+ 78, 78, 78, 78, 78, 79, 79, 79,
+ 79, 79, 79, 79, 79, 80, 80, 80,
+ 80, 80, 80, 80, 80, 81, 81, 81,
+ 81, 81, 81, 81, 81, 81, 82, 82,
+ 82, 82, 82, 82, 82, 82, 83, 83,
+ 83, 83, 83, 83, 83, 83, 83, 84,
+ 84, 84, 84, 84, 84, 84, 84, 84,
+ 85, 85, 85, 85, 85, 85, 85, 85,
+ 85, 86, 86, 86, 86, 86, 86, 86,
+ 86, 86, 87, 87, 87, 87, 87, 87,
+ 87, 87, 87, 88, 88, 88, 88, 88,
+ 88, 88, 88, 88, 88, 89, 89, 89,
+ 89, 89, 89, 89, 89, 89, 90, 90,
+ 90, 90, 90, 90, 90, 90, 90, 90,
+ 91, 91, 91, 91, 91, 91, 91, 91,
+ 91, 91, 92, 92, 92, 92, 92, 92,
+ 92, 92, 92, 92, 93, 93, 93, 93,
+ 93, 93, 93, 93, 93, 93, 94, 94,
+ 94, 94, 94, 94, 94, 94, 94, 94,
+ 95, 95, 95, 95, 95, 95, 95, 95,
+ 95, 95, 96, 96, 96, 96, 96, 96,
+ 96, 96, 96, 96, 96, 97, 97, 97,
+ 97, 97, 97, 97, 97, 97, 97, 98,
+ 98, 98, 98, 98, 98, 98, 98, 98,
+ 98, 98, 99, 99, 99, 99, 99, 99,
+ 99, 100, 101, 101, 102, 103, 103, 104,
+ 105, 105, 106, 107, 107, 108, 109, 109,
+ 110, 111, 111, 112, 113, 113, 114, 115,
+ 115, 116, 116, 117, 118, 118, 119, 119,
+ 120, 120, 121, 122, 122, 123, 123, 124,
+ 124, 125, 126, 126, 127, 127, 128, 128,
+ 129, 129, 130, 130, 131, 131, 132, 132,
+ 133, 133, 134, 134, 135, 135, 136, 136,
+ 137, 137, 138, 138, 139, 139, 140, 140,
+ 141, 141, 142, 142, 143, 143, 144, 144,
+ 145, 145, 145, 146, 146, 147, 147, 148,
+ 148, 149, 149, 150, 150, 150, 151, 151,
+ 152, 152, 153, 153, 153, 154, 154, 155,
+ 155, 156, 156, 156, 157, 157, 158, 158,
+ 158, 159, 159, 160, 160, 160, 161, 161,
+ 162, 162, 162, 163, 163, 164, 164, 164,
+ 165, 165, 166, 166, 166, 167, 167, 167,
+ 168, 168, 169, 169, 169, 170, 170, 170,
+ 171, 171, 172, 172, 172, 173, 173, 173,
+ 174, 174, 174, 175, 175, 176, 176, 176,
+ 177, 177, 177, 178, 178, 178, 179, 179,
+ 179, 180, 180, 180, 181, 181, 182, 182,
+ 182, 183, 183, 183, 184, 184, 184, 185,
+ 185, 185, 186, 186, 186, 187, 187, 187,
+ 188, 188, 188, 189, 189, 189, 189, 190,
+ 190, 190, 191, 191, 191, 192, 192, 192,
+ 193, 193, 193, 194, 194, 194, 195, 195,
+ 195, 196, 196, 196, 196, 197, 197, 197,
+ 198, 198, 198, 199, 199, 199, 200, 200,
+ 200, 200, 201, 201, 201, 202, 202, 202,
+ 202, 203, 203, 203, 204, 204, 204, 205,
+ 205, 205, 205, 206, 206, 206, 207, 207,
+ 207, 207, 208, 208, 208, 209, 209, 209,
+ 209, 210, 210, 210, 211, 211, 211, 211,
+ 212, 212, 212, 213, 213, 213, 213, 214,
+ 214, 214, 214, 215, 215, 215, 216, 216,
+ 216, 216, 217, 217, 217, 217, 218, 218,
+ 218, 219, 219, 219, 219, 220, 220, 220,
+ 220, 221, 221, 221, 221, 222, 222, 222,
+ 223, 223, 223, 223, 224, 224, 224, 224,
+ 225, 225, 225, 225, 226, 226, 226, 226,
+ 227, 227, 227, 227, 228, 228, 228, 228,
+ 229, 229, 229, 229, 230, 230, 230, 230,
+ 231, 231, 231, 231, 232, 232, 232, 232,
+ 233, 233, 233, 233, 234, 234, 234, 234,
+ 235, 235, 235, 235, 236, 236, 236, 236,
+ 237, 237, 237, 237, 238, 238, 238, 238,
+ 239, 239, 239, 239, 240, 240, 240, 240,
+ 240, 241, 241, 241, 241, 242, 242, 242,
+ 242, 243, 243, 243, 243, 244, 244, 244,
+ 244, 244, 245, 245, 245, 245, 246, 246,
+ 246, 246, 247, 247, 247, 247, 247, 248,
+ 248, 248, 248, 249, 249, 249, 249, 249,
+ 250, 250, 250, 250, 251, 251, 251, 251,
+ 251, 252, 252, 252, 252, 253, 253, 253,
+ 253, 253, 254, 254, 254, 254, 255, 255,
+ },
+};
+#endif
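A side note on the `csc` block above: each coefficient pairs a hex register value with an approximate float in the comment. A plausible decoding (an assumption inferred from those comments, not from documented register layout) is a 10-bit two's-complement value with 8 fractional bits:

```python
def csc_to_float(reg):
    """Decode one CSC coefficient, assuming a 10-bit two's-complement
    field with 8 fractional bits (inferred from the inline comments)."""
    if reg & 0x200:      # sign bit of the assumed 10-bit field
        reg -= 0x400
    return reg / 256.0

# Decoded values land close to the floats noted in the csc table,
# e.g. 0x105 -> ~1.020 (comment says 1.021), 0x3D5 -> ~-0.168 (-0.164).
```

The small residual differences suggest the comments record the original floats before quantization to the register format.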
+
+#define ORIG_PWM_MAX 255
+#define ORIG_PWM_DEF 133
+#define ORIG_PWM_MIN 10
+
+#define MAP_PWM_MAX 255
+#define MAP_PWM_DEF 90
+#define MAP_PWM_MIN 7
+
+static unsigned char shrink_pwm(int val)
+{
+ unsigned char shrink_br;
+
+ /* define line segments */
+ if (val <= ORIG_PWM_MIN)
+ shrink_br = MAP_PWM_MIN;
+ else if (val > ORIG_PWM_MIN && val <= ORIG_PWM_DEF)
+ shrink_br = MAP_PWM_MIN +
+ (val-ORIG_PWM_MIN)*(MAP_PWM_DEF-MAP_PWM_MIN)/(ORIG_PWM_DEF-ORIG_PWM_MIN);
+ else
+ shrink_br = MAP_PWM_DEF +
+ (val-ORIG_PWM_DEF)*(MAP_PWM_MAX-MAP_PWM_DEF)/(ORIG_PWM_MAX-ORIG_PWM_DEF);
+
+ return shrink_br;
+}
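The two line segments above remap the incoming brightness range 10..255 onto 7..255, pivoting at the default point (133 -> 90). A quick re-implementation sketch (hypothetical, mirroring the constants and C truncating integer division) shows the endpoints map exactly:

```python
# Hypothetical Python mirror of shrink_pwm() with the same constants.
ORIG_MIN, ORIG_DEF, ORIG_MAX = 10, 133, 255
MAP_MIN, MAP_DEF, MAP_MAX = 7, 90, 255

def shrink_pwm(val):
    if val <= ORIG_MIN:
        return MAP_MIN
    if val <= ORIG_DEF:
        # first segment: ORIG_MIN..ORIG_DEF -> MAP_MIN..MAP_DEF
        return MAP_MIN + (val - ORIG_MIN) * (MAP_DEF - MAP_MIN) // (ORIG_DEF - ORIG_MIN)
    # second segment: ORIG_DEF..ORIG_MAX -> MAP_DEF..MAP_MAX
    return MAP_DEF + (val - ORIG_DEF) * (MAP_MAX - MAP_DEF) // (ORIG_MAX - ORIG_DEF)

# Segment endpoints: 10 -> 7, 133 -> 90, 255 -> 255
```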
+
+static int dsi_j_qxga_8_9_bl_notify(struct device *unused, int brightness)
+{
+ /* Apply any backlight response curve */
+ if (brightness > 255)
+ pr_err("Error: brightness > 255\n");
+ else if (brightness > 0 && brightness <= 255)
+ brightness = shrink_pwm(brightness);
+
+ return brightness;
+}
+
+static int dsi_j_qxga_8_9_check_fb(struct device *dev, struct fb_info *info)
+{
+ return info->device == &disp_device->dev;
+}
+
+static struct tegra_dsi_cmd dsi_j_qxga_8_9_backlight_cmd[] = {
+ DSI_CMD_VBLANK_SHORT(DSI_DCS_WRITE_1_PARAM, 0x51, 0xFF, CMD_NOT_CLUBBED),
+};
+
+static struct tegra_dsi_bl_platform_data dsi_j_qxga_8_9_bl_data = {
+ .dsi_backlight_cmd = dsi_j_qxga_8_9_backlight_cmd,
+ .n_backlight_cmd = ARRAY_SIZE(dsi_j_qxga_8_9_backlight_cmd),
+ .dft_brightness = 224,
+ .notify = dsi_j_qxga_8_9_bl_notify,
+ /* Only toggle backlight on fb blank notifications for disp1 */
+ .check_fb = dsi_j_qxga_8_9_check_fb,
+};
+
+static struct platform_device __maybe_unused
+ dsi_j_qxga_8_9_bl_device = {
+ .name = "tegra-dsi-backlight",
+ .dev = {
+ .platform_data = &dsi_j_qxga_8_9_bl_data,
+ },
+};
+
+static struct platform_device __maybe_unused
+ *dsi_j_qxga_8_9_bl_devices[] __initdata = {
+ &dsi_j_qxga_8_9_bl_device,
+};
+
+static int __init dsi_j_qxga_8_9_register_bl_dev(void)
+{
+ int err = 0;
+ err = platform_add_devices(dsi_j_qxga_8_9_bl_devices,
+ ARRAY_SIZE(dsi_j_qxga_8_9_bl_devices));
+ if (err) {
+ pr_err("disp1 bl device registration failed\n");
+ return err;
+ }
+ return err;
+}
+
+static void dsi_j_qxga_8_9_set_disp_device(
+ struct platform_device *dalmore_display_device)
+{
+ disp_device = dalmore_display_device;
+}
+
+static void dsi_j_qxga_8_9_dc_out_init(struct tegra_dc_out *dc)
+{
+ int i;
+ struct device_node *np;
+
+ np = of_find_node_by_name(NULL, "panel_jdi_qxga_8_9");
+ if (np == NULL) {
+ pr_info("can't find device node\n");
+ } else {
+ for (i = 0; i < NUM_PANEL_GPIOS; i++) {
+ panel_init_gpios[i].gpio =
+ of_get_gpio_flags(np, i, NULL);
+ pr_info("gpio pin = %d\n", panel_init_gpios[i].gpio);
+ }
+ }
+
+ iovdd_1v8 = panel_init_gpios[IOVDD_1V8].gpio;
+ avdd_4v = panel_init_gpios[AVDD_4V].gpio;
+ dcdc_en = panel_init_gpios[DCDC_EN].gpio;
+ lcm_rst = panel_init_gpios[LCM_RST].gpio;
+
+ dc->dsi = &dsi_j_qxga_8_9_pdata;
+ dc->parent_clk = "pll_d_out0";
+ dc->modes = dsi_j_qxga_8_9_modes;
+ dc->n_modes = ARRAY_SIZE(dsi_j_qxga_8_9_modes);
+ dc->enable = dsi_j_qxga_8_9_enable;
+ dc->postpoweron = dsi_j_qxga_8_9_postpoweron;
+ dc->disable = dsi_j_qxga_8_9_disable;
+ dc->postsuspend = dsi_j_qxga_8_9_postsuspend;
+ dc->width = 135;
+ dc->height = 180;
+ dc->flags = DC_CTRL_MODE;
+}
+
+static void dsi_j_qxga_8_9_fb_data_init(struct tegra_fb_data *fb)
+{
+ fb->xres = dsi_j_qxga_8_9_modes[0].h_active;
+ fb->yres = dsi_j_qxga_8_9_modes[0].v_active;
+}
+
+static void
+dsi_j_qxga_8_9_sd_settings_init(struct tegra_dc_sd_settings *settings)
+{
+ *settings = dsi_j_qxga_8_9_sd_settings;
+ settings->bl_device_name = "tegra-dsi-backlight";
+}
+
+static void dsi_j_qxga_8_9_cmu_init(struct tegra_dc_platform_data *pdata)
+{
+ pdata->cmu = &dsi_j_qxga_8_9_cmu;
+}
+
+struct tegra_panel __initdata dsi_j_qxga_8_9 = {
+ .init_sd_settings = dsi_j_qxga_8_9_sd_settings_init,
+ .init_dc_out = dsi_j_qxga_8_9_dc_out_init,
+ .init_fb_data = dsi_j_qxga_8_9_fb_data_init,
+ .register_bl_dev = dsi_j_qxga_8_9_register_bl_dev,
+ .init_cmu_data = dsi_j_qxga_8_9_cmu_init,
+ .set_disp_device = dsi_j_qxga_8_9_set_disp_device,
+};
+EXPORT_SYMBOL(dsi_j_qxga_8_9);
diff --git a/arch/arm/mach-tegra/powerdetect.c b/arch/arm/mach-tegra/powerdetect.c
index 1b414ca..70fa42d 100644
--- a/arch/arm/mach-tegra/powerdetect.c
+++ b/arch/arm/mach-tegra/powerdetect.c
@@ -106,7 +106,9 @@
POWER_CELL("pwrdet_pex_ctl", (0x1 << 11), (0x1 << 11), 0xFFFFFFFF),
#endif
POWER_CELL("pwrdet_sdmmc1", (0x1 << 12), (0x1 << 12), 0xFFFFFFFF),
+#ifndef CONFIG_MACH_T132_FLOUNDER
POWER_CELL("pwrdet_sdmmc3", (0x1 << 13), (0x1 << 13), 0xFFFFFFFF),
+#endif
POWER_CELL("pwrdet_sdmmc4", 0, (0x1 << 14), 0xFFFFFFFF),
#if defined(CONFIG_ARCH_TEGRA_11x_SOC) || defined(CONFIG_ARCH_TEGRA_12x_SOC)
POWER_CELL("pwrdet_hv", (0x1 << 15), (0x1 << 15), 0xFFFFFFFF),
diff --git a/arch/arm/mach-tegra/powergate-ops-t1xx.c b/arch/arm/mach-tegra/powergate-ops-t1xx.c
index 9e0c163..37f9f8c 100644
--- a/arch/arm/mach-tegra/powergate-ops-t1xx.c
+++ b/arch/arm/mach-tegra/powergate-ops-t1xx.c
@@ -78,6 +78,10 @@
goto err_clk_on;
udelay(10);
+ /* Reset module first to avoid module in inconsistent state */
+ powergate_partition_assert_reset(pg_info);
+
+ udelay(10);
ret = tegra_powergate_remove_clamping(id);
if (ret)
diff --git a/arch/arm/mach-tegra/tegra-board-id.h b/arch/arm/mach-tegra/tegra-board-id.h
index 3991944..05c89cc 100644
--- a/arch/arm/mach-tegra/tegra-board-id.h
+++ b/arch/arm/mach-tegra/tegra-board-id.h
@@ -65,9 +65,9 @@
#define BOARD_E1937 0x0791
#define BOARD_PM366 0x016e
#define BOARD_E1549 0x060D
+#define BOARD_E1814 0x0716
#define BOARD_E1797 0x0705
#define BOARD_E1937 0x0791
-
#define BOARD_E1563 0x061b
diff --git a/arch/arm/mach-tegra/tegra12_clocks.c b/arch/arm/mach-tegra/tegra12_clocks.c
index 912a167..6a3d3a3 100644
--- a/arch/arm/mach-tegra/tegra12_clocks.c
+++ b/arch/arm/mach-tegra/tegra12_clocks.c
@@ -9371,7 +9371,7 @@
if ((tegra_revision != TEGRA_REVISION_A01) &&
(tegra_revision != TEGRA_REVISION_A02))
- return 0; /* no frequency dependency for A03+ revisions */
+ return 68000000; /* keep a 68 MHz EMC floor on A03+ revisions */
if (cpu_rate > 1020000)
emc_rate = 600000000; /* cpu > 1.02GHz, emc 600MHz */
diff --git a/arch/arm/mach-tegra/tegra12x_la.c b/arch/arm/mach-tegra/tegra12x_la.c
index 52b2b44..3e9918b 100644
--- a/arch/arm/mach-tegra/tegra12x_la.c
+++ b/arch/arm/mach-tegra/tegra12x_la.c
@@ -1,7 +1,7 @@
/*
* arch/arm/mach-tegra/tegra12x_la.c
*
- * Copyright (C) 2013-2014, NVIDIA CORPORATION. All rights reserved.
+ * Copyright (C) 2013-2015, NVIDIA CORPORATION. All rights reserved.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
@@ -498,7 +498,7 @@
writel(p->usbx_ptsa_max, T12X_MC_RA(USBX_PTSA_MAX_0));
writel(p->usbd_ptsa_min, T12X_MC_RA(USBD_PTSA_MIN_0));
- writel(p->usbd_ptsa_min, T12X_MC_RA(USBD_PTSA_MAX_0));
+ writel(p->usbd_ptsa_max, T12X_MC_RA(USBD_PTSA_MAX_0));
writel(p->ftop_ptsa_min, T12X_MC_RA(FTOP_PTSA_MIN_0));
writel(p->ftop_ptsa_max, T12X_MC_RA(FTOP_PTSA_MAX_0));
@@ -567,7 +567,7 @@
p->r0_dis_ptsa_max = readl(T12X_MC_RA(R0_DIS_PTSA_MAX_0));
p->r0_disb_ptsa_min = readl(T12X_MC_RA(R0_DISB_PTSA_MIN_0));
- p->r0_disb_ptsa_min = readl(T12X_MC_RA(R0_DISB_PTSA_MAX_0));
+ p->r0_disb_ptsa_max = readl(T12X_MC_RA(R0_DISB_PTSA_MAX_0));
p->vd_ptsa_min = readl(T12X_MC_RA(VD_PTSA_MIN_0));
p->vd_ptsa_max = readl(T12X_MC_RA(VD_PTSA_MAX_0));
@@ -579,7 +579,7 @@
p->gk_ptsa_max = readl(T12X_MC_RA(GK_PTSA_MAX_0));
p->vicpc_ptsa_min = readl(T12X_MC_RA(VICPC_PTSA_MIN_0));
- p->vicpc_ptsa_min = readl(T12X_MC_RA(VICPC_PTSA_MAX_0));
+ p->vicpc_ptsa_max = readl(T12X_MC_RA(VICPC_PTSA_MAX_0));
p->apb_ptsa_min = readl(T12X_MC_RA(APB_PTSA_MIN_0));
p->apb_ptsa_max = readl(T12X_MC_RA(APB_PTSA_MAX_0));
diff --git a/arch/arm/mach-tegra/tegra13_dvfs.c b/arch/arm/mach-tegra/tegra13_dvfs.c
index 17552b1..397ea3e 100644
--- a/arch/arm/mach-tegra/tegra13_dvfs.c
+++ b/arch/arm/mach-tegra/tegra13_dvfs.c
@@ -42,9 +42,16 @@
#define MHZ 1000000
#define VDD_SAFE_STEP 100
+#define FUSE_FT_REV 0x128
+#define FUSE_CP_REV 0x190
-static int cpu_vmin_offsets[] = { 0, -20, };
-static int gpu_vmin_offsets[] = { 0, -20, };
+/*
+ * Effectively no offsets. We still need the DFLL tuning adjustment according
+ * to SiMon grade, but SiMon callbacks are enabled only if offsets are
+ * installed. So, just make them "0".
+ */
+static int cpu_vmin_offsets[2];
+static int gpu_vmin_offsets[2];
static int vdd_core_vmin_trips_table[MAX_THERMAL_LIMITS] = { 20, };
static int vdd_core_therm_floors_table[MAX_THERMAL_LIMITS] = { 950, };
@@ -83,7 +90,7 @@
static struct dvfs_rail tegra13_dvfs_rail_vdd_cpu = {
.reg_id = "vdd_cpu",
- .version = "p4v17",
+ .version = "p4v18",
.max_millivolts = 1300,
.min_millivolts = 680,
.simon_domain = TEGRA_SIMON_DOMAIN_CPU,
@@ -234,7 +241,7 @@
.dfll_tune_data = {
.tune1 = 0x00000095,
.droop_rate_min = 1000000,
- .min_millivolts = 680,
+ .min_millivolts = 800,
.tune_high_margin_mv = 30,
},
.max_mv = 1260,
@@ -272,7 +279,7 @@
},
.cvb_vmin = { 0, { 2877000, -174300, 3600, -357, -339, 53}, },
.vmin_trips_table = { 15, 30, 50, 70, 120, },
- .therm_floors_table = { 890, 760, 740, 720, 700, },
+ .therm_floors_table = { 890, 800, 800, 800, 800, },
},
};
@@ -754,6 +761,35 @@
}
}
+static bool fuse_cp_ft_rev_check(void)
+{
+ u32 rev, rev_major, rev_minor;
+ bool cp_check, ft_check;
+
+ rev = tegra_fuse_readl(FUSE_CP_REV);
+ rev_minor = rev & 0x1f;
+ rev_major = (rev >> 5) & 0x3f;
+
+ cp_check = (rev_major == 0 && rev_minor == 12) ||
+ (rev_major == 1 && rev_minor == 0);
+
+ rev = tegra_fuse_readl(FUSE_FT_REV);
+ rev_minor = rev & 0x1f;
+ rev_major = (rev >> 5) & 0x3f;
+
+ ft_check = (rev_major == 0 && rev_minor == 12) ||
+ (rev_major == 0 && rev_minor == 13) ||
+ (rev_major == 1 && rev_minor == 0);
+
+ if (cp_check && ft_check)
+ return true;
+
+ return false;
+}
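The revision words read back from the CP and FT fuses pack a minor revision in bits [4:0] and a major revision in bits [10:5]. A minimal decoding sketch (hypothetical helper name) of that bit layout:

```python
def decode_fuse_rev(rev):
    """Split a raw fuse revision word into (major, minor), mirroring
    the bit layout used in fuse_cp_ft_rev_check() above:
    minor in bits [4:0], major in bits [10:5]."""
    return (rev >> 5) & 0x3F, rev & 0x1F
```

For example, a raw value of 0x20 decodes to major 1, minor 0, one of the revision pairs the check accepts.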
+
static void __init set_cpu_dfll_vmin_data(
struct cpu_cvb_dvfs *d, struct dvfs *cpu_dvfs,
int speedo, struct rail_alignment *align)
@@ -761,6 +797,16 @@
int mv, mvj, t, j;
struct dvfs_rail *rail = &tegra13_dvfs_rail_vdd_cpu;
+ /* Increase vmin if fuse check returned false */
+ if (!fuse_cp_ft_rev_check()) {
+ for (j = 0; j < MAX_THERMAL_LIMITS; j++) {
+ int *floor = &d->therm_floors_table[j];
+ if (*floor == 0)
+ break;
+ *floor = max(*floor, 820);
+ }
+ }
+
/* First install fixed Vmin profile */
tegra_dvfs_rail_init_vmin_thermal_profile(d->vmin_trips_table,
d->therm_floors_table, rail, &cpu_dvfs->dfll_data);
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 5e147c2..e69f969 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -39,7 +39,7 @@
select HAVE_DMA_ATTRS
select HAVE_DMA_CONTIGUOUS if MMU
select HAVE_DYNAMIC_FTRACE
- select HAVE_EFFICIENT_UNALIGNED_ACCESS
+ select HAVE_EFFICIENT_UNALIGNED_ACCESS if !DENVER_CPU
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_TRACER
select HAVE_FUNCTION_GRAPH_TRACER
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 3f3980a..b495e74 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -42,7 +42,7 @@
head-y := arch/arm64/kernel/head.o
# The byte offset of the kernel image in RAM from the start of RAM.
-TEXT_OFFSET := 0x00a80000
+TEXT_OFFSET := 0x00080000
export TEXT_OFFSET GZFLAGS
diff --git a/arch/arm64/boot/dts/Makefile b/arch/arm64/boot/dts/Makefile
index bc46994..5a608af 100644
--- a/arch/arm64/boot/dts/Makefile
+++ b/arch/arm64/boot/dts/Makefile
@@ -15,6 +15,10 @@
dtb-$(CONFIG_MACH_T132REF) += tegra132-bowmore-e1971-1100-a00-00-powerconfig.dtb
dtb-$(CONFIG_MACH_T132REF) += tegra132-tn8-p1761-1270-a03-battery.dtb
dtb-$(CONFIG_MACH_T132REF) += tegra132-tn8-p1761-1270-a03.dtb
+dtb-$(CONFIG_MACH_T132_FLOUNDER) += tegra132-flounder-xaxb.dtb
+dtb-$(CONFIG_MACH_T132_FLOUNDER) += tegra132-flounder-xc.dtb
+dtb-$(CONFIG_MACH_T132_FLOUNDER) += tegra132-flounder-xdxepvt.dtb
+dtb-$(CONFIG_MACH_T132_FLOUNDER) += tegra132-flounder_lte-xaxbxcxdpvt.dtb
DTB_NAMES := $(subst $\",,$(CONFIG_BUILD_ARM64_APPENDED_DTB_IMAGE_NAMES))
ifneq ($(DTB_NAMES),)
diff --git a/arch/arm64/boot/dts/tegra132-flounder-camera.dtsi b/arch/arm64/boot/dts/tegra132-flounder-camera.dtsi
new file mode 100644
index 0000000..904d107
--- /dev/null
+++ b/arch/arm64/boot/dts/tegra132-flounder-camera.dtsi
@@ -0,0 +1,175 @@
+#include <dt-bindings/media/camera.h>
+
+/ {
+ camera-pcl {
+ compatible = "nvidia,tegra124-camera", "simple-bus";
+ configuration = <0xAA55AA55>;
+
+ modules {
+ module1: module1@modules {
+ compatible = "sensor,rear";
+ badge_info = "flounder_rear_camera";
+
+ sensor {
+ profile = <&imx219_1>;
+ platformdata = "flounder_imx219_pdata";
+ };
+ focuser {
+ profile = <&drv201_1>;
+ platformdata = "flounder_drv201_pdata";
+ };
+ flash {
+ profile = <&tps61310_1>;
+ platformdata = "flounder_tps61310_pdata";
+ };
+ };
+ module2: module4@modules {
+ compatible = "sensor,front";
+ badge_info = "flounder_front_camera";
+
+ sensor {
+ profile = <&ov9760_1>;
+ platformdata = "flounder_ov9760_pdata";
+ };
+ };
+ };
+
+ profiles {
+ imx219_1: imx219@2_0010 {
+ index = <1>;
+ chipname = "pcl_IMX219";
+ type = "sensor";
+ guid = "s_IMX219";
+ position = <0>;
+ bustype = "i2c";
+ busnum = <2>;
+ addr = <0x10>;
+ datalen = <2>;
+ pinmuxgrp = <0xFFFF>;
+ gpios = <3>;
+ clocks = "mclk";
+ drivername = "imx219";
+ detect = <0x0002 0x0000 0xFFFF 0x0219>;
+ devid = <0x0219>;
+ poweron = <
+ CAMERA_IND_CLK_SET(10000)
+ CAMERA_GPIO_CLR(119)
+ CAMERA_GPIO_SET(59)
+ CAMERA_WAITMS(10)
+ CAMERA_GPIO_SET(58)
+ CAMERA_GPIO_SET(62)
+ CAMERA_GPIO_SET(63)
+ CAMERA_WAITMS(1)
+ CAMERA_GPIO_SET(119)
+ CAMERA_WAITMS(20)
+ CAMERA_END
+ >;
+ poweroff = <
+ CAMERA_IND_CLK_CLR
+ CAMERA_GPIO_CLR(119)
+ CAMERA_WAITUS(1)
+ CAMERA_GPIO_CLR(63)
+ CAMERA_GPIO_CLR(62)
+ CAMERA_GPIO_CLR(58)
+ CAMERA_GPIO_CLR(59)
+ CAMERA_END
+ >;
+ };
+ ov9760_1: ov9760@2_0036 {
+ index = <2>;
+ chipname = "pcl_OV9760";
+ type = "sensor";
+ guid = "s_OV9760";
+ position = <1>;
+ bustype = "i2c";
+ busnum = <2>;
+ addr = <0x36>;
+ datalen = <2>;
+ pinmuxgrp = <0xFFFF>;
+ clocks = "mclk2";
+ drivername = "ov9760";
+ detect = <0x0002 0x300A 0xFFFF 0x9760>;
+ devid = <0x9760>;
+ poweron = <
+ CAMERA_IND_CLK_SET(10000)
+ CAMERA_GPIO_CLR(219)
+ CAMERA_GPIO_SET(59)
+ CAMERA_WAITMS(10)
+ CAMERA_GPIO_SET(58)
+ CAMERA_GPIO_SET(62)
+ CAMERA_GPIO_SET(63)
+ CAMERA_WAITMS(1)
+ CAMERA_GPIO_SET(219)
+ CAMERA_WAITMS(20)
+ CAMERA_END
+ >;
+ poweroff = <
+ CAMERA_IND_CLK_CLR
+ CAMERA_GPIO_CLR(219)
+ CAMERA_WAITUS(1)
+ CAMERA_GPIO_CLR(63)
+ CAMERA_GPIO_CLR(62)
+ CAMERA_GPIO_CLR(58)
+ CAMERA_GPIO_CLR(59)
+ CAMERA_END
+ >;
+ };
+ tps61310_1: tps61310@0_0033 {
+ index = <3>;
+ chipname = "pcl_TPS61310";
+ type = "flash";
+ guid = "l_NVCAM0";
+ position = <0>;
+ bustype = "i2c";
+ busnum = <0>;
+ addr = <0x33>;
+ datalen = <1>;
+ pinmuxgrp = <0xFFFF>;
+ drivername = "tps61310";
+ detect = <0x0001 0x0003 0x00F0 0x00C0>;
+ devid = <0x6131>;
+ poweron = <
+ CAMERA_END
+ >;
+ poweroff = <
+ CAMERA_END
+ >;
+ };
+ drv201_1: drv201@2_000e {
+ index = <4>;
+ chipname = "pcl_DRV201";
+ type = "focuser";
+ guid = "f_NVCAM0";
+ position = <0>;
+ bustype = "i2c";
+ busnum = <2>;
+ addr = <0xe>;
+ datalen = <1>;
+ pinmuxgrp = <0xFFFF>;
+ drivername = "drv201";
+ detect = <0x0001 0x0007 0x00FF 0x0083>;
+ devid = <0x0201>;
+ poweron = <
+ CAMERA_GPIO_SET(59)
+ CAMERA_WAITMS(10)
+ CAMERA_GPIO_SET(58)
+ CAMERA_GPIO_SET(62)
+ CAMERA_GPIO_SET(63)
+ CAMERA_WAITMS(1)
+ CAMERA_GPIO_SET(223)
+ CAMERA_WAITMS(10)
+ CAMERA_END
+ >;
+ poweroff = <
+ CAMERA_GPIO_CLR(223)
+ CAMERA_GPIO_CLR(63)
+ CAMERA_GPIO_CLR(62)
+ CAMERA_GPIO_CLR(58)
+ CAMERA_GPIO_CLR(59)
+ CAMERA_END
+ >;
+ };
+ };
+ };
+};
+
diff --git a/arch/arm64/boot/dts/tegra132-flounder-emc.dtsi b/arch/arm64/boot/dts/tegra132-flounder-emc.dtsi
new file mode 100644
index 0000000..45bb151
--- /dev/null
+++ b/arch/arm64/boot/dts/tegra132-flounder-emc.dtsi
@@ -0,0 +1,11893 @@
+/*
+ * Copyright (c) 2014, NVIDIA CORPORATION. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+/ {
+emc@7001b000 {
+ compatible = "nvidia,tegra12-emc";
+ reg = <0x0 0x7001b000 0x0 0x800>;
+ nvidia,use-ram-code = <1>;
+ ram-code@1 { /* Elpida 2GB EDFA164A2MA JDF */
+ compatible = "nvidia,tegra12-emc";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ nvidia,ram-code = <1>;
+ emc-table@12750 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "01_12750_01_V6.0.3_V1.1";
+ clock-frequency = <12750>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x4000003e>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000000
+ 0x00000003
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000006
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x00000030
+ 0x00000000
+ 0x0000000c
+ 0x00000002
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x00000003
+ 0x00000002
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000004
+ 0x00000003
+ 0x00000003
+ 0x00000036
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000404
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x00000011
+ 0x000d0011
+ 0x000d0011
+ 0x00000000
+ 0x00000003
+ 0x0000f3f3
+ 0x80000164
+ 0x0000000a
+ 0x40040001
+ 0x8000000a
+ 0x00000001
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000002
+ 0x00000001
+ 0x00000004
+ 0x00000005
+ 0x05040102
+ 0x00090402
+ 0x77c30303
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000007
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x000800ff
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0024
+ 0x00ff00ff
+ 0x000000ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000015>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008c7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <57820>;
+ };
+ emc-table@20400 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "01_20400_01_V6.0.3_V1.1";
+ clock-frequency = <20400>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x40000026>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000001
+ 0x00000003
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000006
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x0000004d
+ 0x00000000
+ 0x00000013
+ 0x00000002
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000004
+ 0x00000003
+ 0x00000003
+ 0x00000055
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000404
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x00000011
+ 0x00150011
+ 0x00150011
+ 0x00000000
+ 0x00000003
+ 0x0000f3f3
+ 0x8000019f
+ 0x0000000a
+ 0x40020001
+ 0x80000012
+ 0x00000001
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000002
+ 0x00000001
+ 0x00000004
+ 0x00000005
+ 0x05040102
+ 0x00090402
+ 0x74e30303
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x0000000a
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x000800ff
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0024
+ 0x00ff00ff
+ 0x000000ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000015>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008c7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <35610>;
+ };
+ emc-table@40800 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "01_40800_01_V6.0.3_V1.1";
+ clock-frequency = <40800>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x40000012>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000002
+ 0x00000005
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000006
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x0000009a
+ 0x00000000
+ 0x00000026
+ 0x00000002
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x00000006
+ 0x00000006
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000004
+ 0x00000003
+ 0x00000003
+ 0x000000aa
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000404
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x00000011
+ 0x00290011
+ 0x00290011
+ 0x00000000
+ 0x00000003
+ 0x0000f3f3
+ 0x8000023a
+ 0x0000000a
+ 0xa0000001
+ 0x80000017
+ 0x00000001
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000002
+ 0x00000001
+ 0x00000004
+ 0x00000005
+ 0x05040102
+ 0x00090402
+ 0x73030303
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000014
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x000800ff
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0024
+ 0x00ff00ff
+ 0x000000ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000015>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008c7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <20850>;
+ };
+ emc-table@68000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "01_68000_01_V6.0.3_V1.1";
+ clock-frequency = <68000>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x4000000a>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000004
+ 0x00000008
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000006
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x00000101
+ 0x00000000
+ 0x00000040
+ 0x00000002
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x0000000a
+ 0x0000000a
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000004
+ 0x00000003
+ 0x00000003
+ 0x0000011b
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000404
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x00000019
+ 0x00440011
+ 0x00440011
+ 0x00000000
+ 0x00000003
+ 0x0000f3f3
+ 0x80000309
+ 0x0000000a
+ 0x00000001
+ 0x8000001e
+ 0x00000001
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000002
+ 0x00000001
+ 0x00000004
+ 0x00000005
+ 0x05040102
+ 0x00090402
+ 0x72630403
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000021
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00b0
+ 0x00ff00ff
+ 0x00ff00ec
+ 0x00ff00ff
+ 0x00ff00ec
+ 0x00e90049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x000800ff
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff00ff
+ 0x00ff00a3
+ 0x00ff0024
+ 0x00ff00ff
+ 0x000000ff
+ 0x000000ef
+ 0x00ff00ff
+ 0x000000ef
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ee00ef
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000015>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008c7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <10720>;
+ };
+ emc-table@102000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "01_102000_01_V6.0.3_V1.1";
+ clock-frequency = <102000>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x40000006>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000006
+ 0x0000000d
+ 0x00000000
+ 0x00000004
+ 0x00000002
+ 0x00000006
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x00000182
+ 0x00000000
+ 0x00000060
+ 0x00000002
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x0000000f
+ 0x0000000f
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000004
+ 0x00000003
+ 0x00000003
+ 0x000001a9
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000404
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x00000025
+ 0x00660011
+ 0x00660011
+ 0x00000000
+ 0x00000003
+ 0x0000f3f3
+ 0x8000040b
+ 0x0000000a
+ 0x08000001
+ 0x80000026
+ 0x00000001
+ 0x00000001
+ 0x00000003
+ 0x00000001
+ 0x00000003
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000002
+ 0x00000001
+ 0x00000004
+ 0x00000005
+ 0x05040102
+ 0x00090403
+ 0x72430504
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000031
+ 0x00ff00da
+ 0x00ff00da
+ 0x00ff0075
+ 0x00ff00ff
+ 0x00ff009d
+ 0x00ff00ff
+ 0x00ff009d
+ 0x009b0049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x000800ad
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff00c6
+ 0x00ff006d
+ 0x00ff0024
+ 0x00ff00d6
+ 0x000000ff
+ 0x0000009f
+ 0x00ff00ff
+ 0x0000009f
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x009f00a0
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00da
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000015>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008c7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <6890>;
+ };
+ emc-table@136000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "01_136000_01_V6.0.4_V1.1";
+ clock-frequency = <136000>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x40000004>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000008
+ 0x00000011
+ 0x00000000
+ 0x00000005
+ 0x00000002
+ 0x00000007
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x00000202
+ 0x00000000
+ 0x00000080
+ 0x00000002
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x00000014
+ 0x00000014
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000004
+ 0x00000003
+ 0x00000003
+ 0x00000236
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000404
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x00000031
+ 0x00880011
+ 0x00880011
+ 0x00000000
+ 0x00000003
+ 0x0000f3f3
+ 0x8000050e
+ 0x0000000a
+ 0x01000002
+ 0x8000002f
+ 0x00000001
+ 0x00000001
+ 0x00000004
+ 0x00000001
+ 0x00000003
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000002
+ 0x00000001
+ 0x00000005
+ 0x00000005
+ 0x05050102
+ 0x00090404
+ 0x72030705
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000042
+ 0x00ff00a3
+ 0x00ff00a3
+ 0x00ff0058
+ 0x00ff00ff
+ 0x00ff0076
+ 0x00ff00ff
+ 0x00ff0076
+ 0x00740049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x00080082
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff0094
+ 0x00ff0051
+ 0x00ff0024
+ 0x00ff00a1
+ 0x000000ff
+ 0x00000077
+ 0x00ff00ff
+ 0x00000077
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00770078
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00a3
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000015>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008c7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <5400>;
+ };
+ emc-table@204000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "01_204000_01_V6.0.3_V1.1";
+ clock-frequency = <204000>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x40000002>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x0000000c
+ 0x0000001a
+ 0x00000000
+ 0x00000008
+ 0x00000003
+ 0x00000007
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000003
+ 0x00000003
+ 0x00000002
+ 0x00000003
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000005
+ 0x00000003
+ 0x00000000
+ 0x00000003
+ 0x00000007
+ 0x00010000
+ 0x00000004
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000002
+ 0x0000000e
+ 0x0000000f
+ 0x00000011
+ 0x00000304
+ 0x00000000
+ 0x000000c1
+ 0x00000002
+ 0x00000002
+ 0x00000003
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x0000001d
+ 0x0000001d
+ 0x00000003
+ 0x00000004
+ 0x00000003
+ 0x00000009
+ 0x00000005
+ 0x00000003
+ 0x00000003
+ 0x00000351
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00080000
+ 0x00080000
+ 0x00000000
+ 0x00080000
+ 0x00080000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00090000
+ 0x00090000
+ 0x00090000
+ 0x00090000
+ 0x00009000
+ 0x00009000
+ 0x00009000
+ 0x00009000
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000606
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x0000004a
+ 0x00cc0011
+ 0x00cc0011
+ 0x00000000
+ 0x00000004
+ 0x0000d3b3
+ 0x80000713
+ 0x0000000a
+ 0x01000003
+ 0x80000040
+ 0x00000001
+ 0x00000001
+ 0x00000006
+ 0x00000003
+ 0x00000005
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000003
+ 0x00000001
+ 0x00000005
+ 0x00000005
+ 0x05050103
+ 0x000a0506
+ 0x71e40a07
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000062
+ 0x00ff006d
+ 0x00ff006d
+ 0x00ff003c
+ 0x00ff00af
+ 0x00ff004f
+ 0x00ff00af
+ 0x00ff004f
+ 0x004e0049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x00080057
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff0063
+ 0x00ff0036
+ 0x00ff0024
+ 0x00ff006b
+ 0x000000ff
+ 0x00000050
+ 0x00ff00ff
+ 0x00000050
+ 0x00ff00ff
+ 0x00d400ff
+ 0x00510050
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00c6
+ 0x00ff006d
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000017>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008cf>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <3420>;
+ };
+ emc-table@300000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "01_300000_01_V6.0.3_V1.1";
+ clock-frequency = <300000>;
+ nvidia,emc-min-mv = <820>;
+ nvidia,gk20a-min-mv = <820>;
+ nvidia,source = "pllc_out0";
+ nvidia,src-sel-reg = <0x20000002>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000011
+ 0x00000026
+ 0x00000000
+ 0x0000000c
+ 0x00000005
+ 0x00000007
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000005
+ 0x00000005
+ 0x00000002
+ 0x00000003
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000006
+ 0x00000003
+ 0x00000000
+ 0x00000003
+ 0x00000008
+ 0x00030000
+ 0x00000004
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000002
+ 0x0000000f
+ 0x00000012
+ 0x00000014
+ 0x0000046e
+ 0x00000000
+ 0x0000011b
+ 0x00000002
+ 0x00000002
+ 0x00000005
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x0000002a
+ 0x0000002a
+ 0x00000003
+ 0x00000005
+ 0x00000003
+ 0x0000000d
+ 0x00000007
+ 0x00000003
+ 0x00000003
+ 0x000004e0
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a096
+ 0x005800a0
+ 0x00008000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00050000
+ 0x00050000
+ 0x00000000
+ 0x00050000
+ 0x00050000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00006000
+ 0x00006000
+ 0x00006000
+ 0x00006000
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x01231239
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000606
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451420
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x0000006c
+ 0x012c0011
+ 0x012c0011
+ 0x00000000
+ 0x00000004
+ 0x000052a3
+ 0x800009ed
+ 0x0000000b
+ 0x08000004
+ 0x80000040
+ 0x00000001
+ 0x00000002
+ 0x00000009
+ 0x00000005
+ 0x00000007
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000003
+ 0x00000001
+ 0x00000005
+ 0x00000005
+ 0x05050103
+ 0x000c0709
+ 0x71c50e0a
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000004
+ 0x00000090
+ 0x00ff004a
+ 0x00ff004a
+ 0x00ff003c
+ 0x00ff0090
+ 0x00ff0041
+ 0x00ff0090
+ 0x00ff0041
+ 0x00350049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x0008003b
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff0043
+ 0x00ff002d
+ 0x00ff0024
+ 0x00ff0049
+ 0x000000ff
+ 0x00000036
+ 0x00ff00ff
+ 0x00000036
+ 0x00ff00ff
+ 0x00d400ff
+ 0x00510036
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0087
+ 0x00ff004a
+ >;
+ nvidia,emc-zcal-cnt-long = <0x0000001f>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3300000>;
+ nvidia,emc-cfg-2 = <0x000008d7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000000>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <2680>;
+ };
+ emc-table@396000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "01_396000_01_V6.0.3_V1.1";
+ clock-frequency = <396000>;
+ nvidia,emc-min-mv = <850>;
+ nvidia,gk20a-min-mv = <850>;
+ nvidia,source = "pllm_out0";
+ nvidia,src-sel-reg = <0x00000002>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000017
+ 0x00000033
+ 0x00000000
+ 0x00000010
+ 0x00000007
+ 0x00000008
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000007
+ 0x00000007
+ 0x00000003
+ 0x00000003
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000006
+ 0x00000003
+ 0x00000000
+ 0x00000002
+ 0x00000009
+ 0x00030000
+ 0x00000004
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000001
+ 0x00000010
+ 0x00000012
+ 0x00000014
+ 0x000005d9
+ 0x00000000
+ 0x00000176
+ 0x00000002
+ 0x00000002
+ 0x00000007
+ 0x00000000
+ 0x00000001
+ 0x0000000e
+ 0x00000038
+ 0x00000038
+ 0x00000003
+ 0x00000006
+ 0x00000003
+ 0x00000012
+ 0x00000009
+ 0x00000003
+ 0x00000003
+ 0x00000670
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a096
+ 0x005800a0
+ 0x00008000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00050000
+ 0x00050000
+ 0x00000000
+ 0x00050000
+ 0x00050000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00040000
+ 0x00040000
+ 0x00040000
+ 0x00040000
+ 0x00004000
+ 0x00004000
+ 0x00004000
+ 0x00004000
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x01231239
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000606
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451420
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x0000008f
+ 0x018c0011
+ 0x018c0011
+ 0x00000000
+ 0x00000004
+ 0x000052a3
+ 0x80000cc7
+ 0x0000000b
+ 0x0f000005
+ 0x80000040
+ 0x00000002
+ 0x00000003
+ 0x0000000c
+ 0x00000007
+ 0x00000009
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000003
+ 0x00000001
+ 0x00000005
+ 0x00000005
+ 0x05050103
+ 0x000e090c
+ 0x71c6120d
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x0000000a
+ 0x000000be
+ 0x00ff0038
+ 0x00ff0038
+ 0x00ff003c
+ 0x00ff0090
+ 0x00ff0041
+ 0x00ff0090
+ 0x00ff0041
+ 0x00280049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x0008002d
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff0033
+ 0x00ff0022
+ 0x00ff0024
+ 0x00ff0037
+ 0x000000ff
+ 0x00000036
+ 0x00ff00ff
+ 0x00000036
+ 0x00ff00ff
+ 0x00d400ff
+ 0x00510029
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0066
+ 0x00ff0038
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000028>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3300000>;
+ nvidia,emc-cfg-2 = <0x00000897>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004001c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000000>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <2180>;
+ };
+ emc-table@528000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "01_528000_01_V6.0.3_V1.1";
+ clock-frequency = <528000>;
+ nvidia,emc-min-mv = <880>;
+ nvidia,gk20a-min-mv = <870>;
+ nvidia,source = "pllm_ud";
+ nvidia,src-sel-reg = <0x80000000>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x0000001f
+ 0x00000044
+ 0x00000000
+ 0x00000016
+ 0x00000009
+ 0x00000009
+ 0x00000009
+ 0x00000003
+ 0x0000000d
+ 0x00000009
+ 0x00000009
+ 0x00000005
+ 0x00000004
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000008
+ 0x00000003
+ 0x00000000
+ 0x00000003
+ 0x0000000a
+ 0x00050000
+ 0x00000004
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000002
+ 0x00000011
+ 0x00000015
+ 0x00000017
+ 0x000007cd
+ 0x00000000
+ 0x000001f3
+ 0x00000003
+ 0x00000003
+ 0x00000009
+ 0x00000000
+ 0x00000001
+ 0x00000011
+ 0x0000004a
+ 0x0000004a
+ 0x00000004
+ 0x00000008
+ 0x00000004
+ 0x00000019
+ 0x0000000c
+ 0x00000003
+ 0x00000003
+ 0x00000895
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a096
+ 0xe01200b9
+ 0x00008000
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000010
+ 0x00000010
+ 0x00000000
+ 0x00000010
+ 0x00000010
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x0000000c
+ 0x0000000c
+ 0x0000000c
+ 0x0000000c
+ 0x0000000c
+ 0x0000000c
+ 0x0000000c
+ 0x0000000c
+ 0x00000220
+ 0x00000000
+ 0x00100100
+ 0x0123123d
+ 0x00000000
+ 0x00000000
+ 0x77ffc004
+ 0x00000505
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451420
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x000000bf
+ 0x02100013
+ 0x02100013
+ 0x00000000
+ 0x00000004
+ 0x000042a0
+ 0x800010b3
+ 0x0000000d
+ 0x0f000007
+ 0x80000040
+ 0x00000003
+ 0x00000004
+ 0x00000010
+ 0x0000000a
+ 0x0000000d
+ 0x00000002
+ 0x00000002
+ 0x00000009
+ 0x00000003
+ 0x00000001
+ 0x00000006
+ 0x00000006
+ 0x06060103
+ 0x00120b10
+ 0x71c81811
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x0000000d
+ 0x000000fd
+ 0x00c10038
+ 0x00c10038
+ 0x00c1003c
+ 0x00c10090
+ 0x00c10041
+ 0x00c10090
+ 0x00c10041
+ 0x00270049
+ 0x00c10080
+ 0x00c10004
+ 0x00c10004
+ 0x00080021
+ 0x000000c1
+ 0x00c10004
+ 0x00c10026
+ 0x00c1001a
+ 0x00c10024
+ 0x00c10029
+ 0x000000c1
+ 0x00000036
+ 0x00c100c1
+ 0x00000036
+ 0x00c100c1
+ 0x00d400ff
+ 0x00510029
+ 0x00c100c1
+ 0x00c100c1
+ 0x00c10065
+ 0x00c1002a
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000034>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3300000>;
+ nvidia,emc-cfg-2 = <0x0000089f>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004001c>;
+ nvidia,emc-cfg-dig-dll = <0xe0120069>;
+ nvidia,emc-bgbias-ctl0 = <0x00000000>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x000100c3>;
+ nvidia,emc-mode-2 = <0x00020006>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <1440>;
+ };
+ emc-table@600000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "01_600000_01_V6.0.3_V1.1";
+ clock-frequency = <600000>;
+ nvidia,emc-min-mv = <910>;
+ nvidia,gk20a-min-mv = <910>;
+ nvidia,source = "pllc_ud";
+ nvidia,src-sel-reg = <0xe0000000>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000023
+ 0x0000004d
+ 0x00000000
+ 0x00000019
+ 0x0000000a
+ 0x0000000a
+ 0x0000000b
+ 0x00000004
+ 0x0000000f
+ 0x0000000a
+ 0x0000000a
+ 0x00000005
+ 0x00000004
+ 0x00000000
+ 0x00000004
+ 0x00000004
+ 0x0000000a
+ 0x00000004
+ 0x00000000
+ 0x00000003
+ 0x0000000d
+ 0x00070000
+ 0x00000005
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000002
+ 0x00000014
+ 0x00000018
+ 0x0000001a
+ 0x000008e4
+ 0x00000000
+ 0x00000239
+ 0x00000004
+ 0x00000004
+ 0x0000000a
+ 0x00000000
+ 0x00000001
+ 0x00000013
+ 0x00000054
+ 0x00000054
+ 0x00000005
+ 0x00000009
+ 0x00000005
+ 0x0000001c
+ 0x0000000d
+ 0x00000003
+ 0x00000003
+ 0x000009c1
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a096
+ 0xe00e00b9
+ 0x00008000
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000010
+ 0x00000010
+ 0x00000000
+ 0x00000010
+ 0x00000010
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x0000000b
+ 0x0000000b
+ 0x0000000b
+ 0x0000000b
+ 0x0000000b
+ 0x0000000b
+ 0x0000000b
+ 0x0000000b
+ 0x00000220
+ 0x00000000
+ 0x00100100
+ 0x0121103d
+ 0x00000000
+ 0x00000000
+ 0x77ffc004
+ 0x00000606
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451420
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x000000d8
+ 0x02580014
+ 0x02580014
+ 0x00000000
+ 0x00000005
+ 0x000040a0
+ 0x800012d7
+ 0x00000010
+ 0x00000009
+ 0x80000040
+ 0x00000004
+ 0x00000005
+ 0x00000012
+ 0x0000000b
+ 0x0000000e
+ 0x00000002
+ 0x00000003
+ 0x0000000a
+ 0x00000003
+ 0x00000001
+ 0x00000006
+ 0x00000007
+ 0x07060103
+ 0x00140d12
+ 0x71a91b13
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x0000000f
+ 0x00000120
+ 0x00aa0038
+ 0x00aa0038
+ 0x00aa003c
+ 0x00aa0090
+ 0x00aa0041
+ 0x00aa0090
+ 0x00aa0041
+ 0x00270049
+ 0x00aa0080
+ 0x00aa0004
+ 0x00aa0004
+ 0x0008001d
+ 0x000000aa
+ 0x00aa0004
+ 0x00aa0022
+ 0x00aa0018
+ 0x00aa0024
+ 0x00aa0024
+ 0x000000aa
+ 0x00000036
+ 0x00aa00aa
+ 0x00000036
+ 0x00aa00aa
+ 0x00d400ff
+ 0x00510029
+ 0x00aa00aa
+ 0x00aa00aa
+ 0x00aa0065
+ 0x00aa0025
+ >;
+ nvidia,emc-zcal-cnt-long = <0x0000003a>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3300000>;
+ nvidia,emc-cfg-2 = <0x0000089f>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004001c>;
+ nvidia,emc-cfg-dig-dll = <0xe00e0069>;
+ nvidia,emc-bgbias-ctl0 = <0x00000000>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x000100e3>;
+ nvidia,emc-mode-2 = <0x00020007>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <1440>;
+ };
+ emc-table@792000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "01_792000_01_V6.0.3_V1.1";
+ clock-frequency = <792000>;
+ nvidia,emc-min-mv = <980>;
+ nvidia,gk20a-min-mv = <980>;
+ nvidia,source = "pllm_ud";
+ nvidia,src-sel-reg = <0x80000000>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x0000002f
+ 0x00000066
+ 0x00000000
+ 0x00000021
+ 0x0000000e
+ 0x0000000d
+ 0x0000000d
+ 0x00000005
+ 0x00000013
+ 0x0000000e
+ 0x0000000e
+ 0x00000007
+ 0x00000004
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x0000000e
+ 0x00000004
+ 0x00000000
+ 0x00000005
+ 0x0000000f
+ 0x000b0000
+ 0x00000006
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x00000016
+ 0x0000001d
+ 0x0000001f
+ 0x00000bd1
+ 0x00000000
+ 0x000002f4
+ 0x00000005
+ 0x00000005
+ 0x0000000e
+ 0x00000000
+ 0x00000001
+ 0x00000017
+ 0x0000006f
+ 0x0000006f
+ 0x00000006
+ 0x0000000c
+ 0x00000006
+ 0x00000026
+ 0x00000011
+ 0x00000003
+ 0x00000003
+ 0x00000cdf
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a096
+ 0xe00700b9
+ 0x00008000
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00004014
+ 0x00004014
+ 0x00000000
+ 0x00004014
+ 0x00004014
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000009
+ 0x00000009
+ 0x00000009
+ 0x00000009
+ 0x00000009
+ 0x00000009
+ 0x00000009
+ 0x00000009
+ 0x00000220
+ 0x00000000
+ 0x00100100
+ 0x0120103d
+ 0x00000000
+ 0x00000000
+ 0x77ffc004
+ 0x00000606
+ 0x81f1f008
+ 0x07070000
+ 0x00000000
+ 0x015ddddd
+ 0x59659620
+ 0x00492492
+ 0x00492492
+ 0x59659600
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x0000011e
+ 0x03180017
+ 0x03180017
+ 0x00000000
+ 0x00000006
+ 0x00004080
+ 0x8000188b
+ 0x00000014
+ 0x0e00000b
+ 0x80000040
+ 0x00000006
+ 0x00000007
+ 0x00000018
+ 0x0000000f
+ 0x00000013
+ 0x00000003
+ 0x00000003
+ 0x0000000c
+ 0x00000003
+ 0x00000001
+ 0x00000008
+ 0x00000008
+ 0x08080103
+ 0x001a1118
+ 0x71ac2419
+ 0x70000f02
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000013
+ 0x0000017c
+ 0x00810038
+ 0x00810038
+ 0x0081003c
+ 0x00810090
+ 0x00810041
+ 0x00810090
+ 0x00810041
+ 0x00270049
+ 0x00810080
+ 0x00810004
+ 0x00810004
+ 0x00080016
+ 0x00000081
+ 0x00810004
+ 0x00810019
+ 0x00810018
+ 0x00810024
+ 0x0081001c
+ 0x00000081
+ 0x00000036
+ 0x00810081
+ 0x00000036
+ 0x00810081
+ 0x00d400ff
+ 0x00510029
+ 0x00810081
+ 0x00810081
+ 0x00810065
+ 0x0081001c
+ >;
+ nvidia,emc-zcal-cnt-long = <0x0000004c>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3300000>;
+ nvidia,emc-cfg-2 = <0x0000089f>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004001c>;
+ nvidia,emc-cfg-dig-dll = <0xe0070069>;
+ nvidia,emc-bgbias-ctl0 = <0x00000000>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430404>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010043>;
+ nvidia,emc-mode-2 = <0x0002001a>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <1200>;
+ };
+ emc-table@924000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "01_924000_01_V6.0.3_V1.1";
+ clock-frequency = <924000>;
+ nvidia,emc-min-mv = <1010>;
+ nvidia,gk20a-min-mv = <1010>;
+ nvidia,source = "pllm_ud";
+ nvidia,src-sel-reg = <0x80000000>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000037
+ 0x00000078
+ 0x00000000
+ 0x00000026
+ 0x00000010
+ 0x0000000f
+ 0x00000010
+ 0x00000006
+ 0x00000017
+ 0x00000010
+ 0x00000010
+ 0x00000009
+ 0x00000005
+ 0x00000000
+ 0x00000007
+ 0x00000007
+ 0x00000010
+ 0x00000005
+ 0x00000000
+ 0x00000005
+ 0x00000012
+ 0x000d0000
+ 0x00000007
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x00000019
+ 0x00000020
+ 0x00000022
+ 0x00000dd4
+ 0x00000000
+ 0x00000375
+ 0x00000006
+ 0x00000006
+ 0x00000010
+ 0x00000000
+ 0x00000001
+ 0x0000001b
+ 0x00000082
+ 0x00000082
+ 0x00000007
+ 0x0000000e
+ 0x00000007
+ 0x0000002d
+ 0x00000014
+ 0x00000003
+ 0x00000003
+ 0x00000f04
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a896
+ 0xe00400b9
+ 0x00008000
+ 0x00000005
+ 0x00000007
+ 0x00000007
+ 0x00000005
+ 0x00000003
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000007
+ 0x00000007
+ 0x00000005
+ 0x00000003
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x0000800e
+ 0x0000800e
+ 0x00000000
+ 0x0000800e
+ 0x0000800e
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000007
+ 0x00000007
+ 0x00000007
+ 0x00000007
+ 0x00000007
+ 0x00000007
+ 0x00000007
+ 0x00000007
+ 0x00000220
+ 0x00000000
+ 0x00100100
+ 0x0120103d
+ 0x00000000
+ 0x00000000
+ 0x77ffc004
+ 0x00000404
+ 0x81f1f008
+ 0x07070000
+ 0x00000000
+ 0x015ddddd
+ 0x55555520
+ 0x00514514
+ 0x00514514
+ 0x55555500
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x0000014d
+ 0x039c0019
+ 0x039c0019
+ 0x00000000
+ 0x00000007
+ 0x00004080
+ 0x80001c77
+ 0x00000017
+ 0x0e00000d
+ 0x80000040
+ 0x00000007
+ 0x00000008
+ 0x0000001b
+ 0x00000012
+ 0x00000017
+ 0x00000004
+ 0x00000004
+ 0x0000000e
+ 0x00000004
+ 0x00000001
+ 0x00000009
+ 0x00000009
+ 0x09090104
+ 0x001e141b
+ 0x71ae2a1c
+ 0x70000f02
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000017
+ 0x000001bb
+ 0x006e0038
+ 0x006e0038
+ 0x006e003c
+ 0x006e0090
+ 0x006e0041
+ 0x006e0090
+ 0x006e0041
+ 0x00270049
+ 0x006e0080
+ 0x006e0004
+ 0x006e0004
+ 0x00080016
+ 0x0000006e
+ 0x006e0004
+ 0x006e0019
+ 0x006e0018
+ 0x006e0024
+ 0x006e001b
+ 0x0000006e
+ 0x00000036
+ 0x006e006e
+ 0x00000036
+ 0x006e006e
+ 0x00d400ff
+ 0x00510029
+ 0x006e006e
+ 0x006e006e
+ 0x006e0065
+ 0x006e001c
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000058>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3300000>;
+ nvidia,emc-cfg-2 = <0x0000089f>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004001c>;
+ nvidia,emc-cfg-dig-dll = <0xe0040069>;
+ nvidia,emc-bgbias-ctl0 = <0x00000000>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430808>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x0002001c>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <1180>;
+ };
+
+ emc-table-derated@12750 {
+ compatible = "nvidia,tegra12-emc-table-derated";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "01_12750_01_V6.0.3_V1.1";
+ clock-frequency = <12750>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x4000003e>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000000
+ 0x00000003
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000006
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x0000000b
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x00000003
+ 0x00000002
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000004
+ 0x00000003
+ 0x00000003
+ 0x00000036
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000404
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x00000011
+ 0x000d0011
+ 0x000d0011
+ 0x00000000
+ 0x00000003
+ 0x0000f3f3
+ 0x8000011c
+ 0x0000000a
+ 0x40040001
+ 0x8000000a
+ 0x00000001
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000002
+ 0x00000001
+ 0x00000004
+ 0x00000005
+ 0x05040102
+ 0x00090402
+ 0x77c30303
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000007
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x000800ff
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0024
+ 0x00ff00ff
+ 0x000000ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000015>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008c7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <57820>;
+ };
+ emc-table-derated@20400 {
+ compatible = "nvidia,tegra12-emc-table-derated";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "01_20400_01_V6.0.3_V1.1";
+ clock-frequency = <20400>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x40000026>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000001
+ 0x00000003
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000006
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x00000013
+ 0x00000000
+ 0x00000004
+ 0x00000002
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000004
+ 0x00000003
+ 0x00000003
+ 0x00000055
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000404
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x00000011
+ 0x00150011
+ 0x00150011
+ 0x00000000
+ 0x00000003
+ 0x0000f3f3
+ 0x8000012a
+ 0x0000000a
+ 0x40020001
+ 0x80000012
+ 0x00000001
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000002
+ 0x00000001
+ 0x00000004
+ 0x00000005
+ 0x05040102
+ 0x00090402
+ 0x74e30303
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x0000000a
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x000800ff
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0024
+ 0x00ff00ff
+ 0x000000ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000015>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008c7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <35610>;
+ };
+ emc-table-derated@40800 {
+ compatible = "nvidia,tegra12-emc-table-derated";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "01_40800_01_V6.0.3_V1.1";
+ clock-frequency = <40800>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x40000012>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000002
+ 0x00000005
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000006
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x00000026
+ 0x00000000
+ 0x00000009
+ 0x00000002
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x00000006
+ 0x00000006
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000004
+ 0x00000003
+ 0x00000003
+ 0x000000aa
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000404
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x00000011
+ 0x00290011
+ 0x00290011
+ 0x00000000
+ 0x00000003
+ 0x0000f3f3
+ 0x80000151
+ 0x0000000a
+ 0xa0000001
+ 0x80000017
+ 0x00000001
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000002
+ 0x00000001
+ 0x00000004
+ 0x00000005
+ 0x05040102
+ 0x00090402
+ 0x73030303
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000014
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x000800ff
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0024
+ 0x00ff00ff
+ 0x000000ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000015>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008c7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <20850>;
+ };
+ emc-table-derated@68000 {
+ compatible = "nvidia,tegra12-emc-table-derated";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "01_68000_01_V6.0.3_V1.1";
+ clock-frequency = <68000>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x4000000a>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000004
+ 0x00000008
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000006
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x00000040
+ 0x00000000
+ 0x00000010
+ 0x00000002
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x0000000a
+ 0x0000000a
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000004
+ 0x00000003
+ 0x00000003
+ 0x0000011b
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000404
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x00000019
+ 0x00440011
+ 0x00440011
+ 0x00000000
+ 0x00000003
+ 0x0000f3f3
+ 0x80000185
+ 0x0000000a
+ 0x00000001
+ 0x8000001e
+ 0x00000001
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000002
+ 0x00000001
+ 0x00000004
+ 0x00000005
+ 0x05040102
+ 0x00090402
+ 0x72630403
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000021
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00b0
+ 0x00ff00ff
+ 0x00ff00ec
+ 0x00ff00ff
+ 0x00ff00ec
+ 0x00e90049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x000800ff
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff00ff
+ 0x00ff00a3
+ 0x00ff0024
+ 0x00ff00ff
+ 0x000000ff
+ 0x000000ef
+ 0x00ff00ff
+ 0x000000ef
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ee00ef
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000015>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008c7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <10720>;
+ };
+ emc-table-derated@102000 {
+ compatible = "nvidia,tegra12-emc-table-derated";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "01_102000_01_V6.0.3_V1.1";
+ clock-frequency = <102000>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x40000006>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000006
+ 0x0000000d
+ 0x00000000
+ 0x00000004
+ 0x00000002
+ 0x00000006
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x00000060
+ 0x00000000
+ 0x00000018
+ 0x00000002
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x0000000f
+ 0x0000000f
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000004
+ 0x00000003
+ 0x00000003
+ 0x000001a9
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000404
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x00000025
+ 0x00660011
+ 0x00660011
+ 0x00000000
+ 0x00000003
+ 0x0000f3f3
+ 0x800001c5
+ 0x0000000a
+ 0x08000001
+ 0x80000026
+ 0x00000001
+ 0x00000001
+ 0x00000003
+ 0x00000001
+ 0x00000003
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000002
+ 0x00000001
+ 0x00000004
+ 0x00000005
+ 0x05040102
+ 0x00090403
+ 0x72430504
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000031
+ 0x00ff00da
+ 0x00ff00da
+ 0x00ff0075
+ 0x00ff00ff
+ 0x00ff009d
+ 0x00ff00ff
+ 0x00ff009d
+ 0x009b0049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x000800ad
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff00c6
+ 0x00ff006d
+ 0x00ff0024
+ 0x00ff00d6
+ 0x000000ff
+ 0x0000009f
+ 0x00ff00ff
+ 0x0000009f
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x009f00a0
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00da
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000015>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008c7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <6890>;
+ };
+ emc-table-derated@136000 {
+ compatible = "nvidia,tegra12-emc-table-derated";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "01_136000_01_V6.0.4_V1.1";
+ clock-frequency = <136000>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x40000004>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000008
+ 0x00000011
+ 0x00000000
+ 0x00000005
+ 0x00000002
+ 0x00000007
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x00000081
+ 0x00000000
+ 0x00000020
+ 0x00000002
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x00000014
+ 0x00000014
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000004
+ 0x00000003
+ 0x00000003
+ 0x00000236
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000404
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x00000031
+ 0x00880011
+ 0x00880011
+ 0x00000000
+ 0x00000003
+ 0x0000f3f3
+ 0x80000206
+ 0x0000000a
+ 0x01000002
+ 0x8000002f
+ 0x00000001
+ 0x00000001
+ 0x00000004
+ 0x00000001
+ 0x00000003
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000002
+ 0x00000001
+ 0x00000005
+ 0x00000005
+ 0x05050102
+ 0x00090404
+ 0x72030705
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000042
+ 0x00ff00a3
+ 0x00ff00a3
+ 0x00ff0058
+ 0x00ff00ff
+ 0x00ff0076
+ 0x00ff00ff
+ 0x00ff0076
+ 0x00740049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x00080082
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff0094
+ 0x00ff0051
+ 0x00ff0024
+ 0x00ff00a1
+ 0x000000ff
+ 0x00000077
+ 0x00ff00ff
+ 0x00000077
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00770078
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00a3
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000015>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008c7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <5400>;
+ };
+ emc-table-derated@204000 {
+ compatible = "nvidia,tegra12-emc-table-derated";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "01_204000_01_V6.0.3_V1.1";
+ clock-frequency = <204000>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x40000002>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x0000000c
+ 0x0000001a
+ 0x00000000
+ 0x00000008
+ 0x00000004
+ 0x00000007
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000004
+ 0x00000004
+ 0x00000002
+ 0x00000003
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000005
+ 0x00000003
+ 0x00000000
+ 0x00000003
+ 0x00000007
+ 0x00010000
+ 0x00000004
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000002
+ 0x0000000e
+ 0x0000000f
+ 0x00000011
+ 0x000000c1
+ 0x00000000
+ 0x00000030
+ 0x00000002
+ 0x00000002
+ 0x00000004
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x0000001d
+ 0x0000001d
+ 0x00000003
+ 0x00000004
+ 0x00000003
+ 0x00000009
+ 0x00000005
+ 0x00000003
+ 0x00000003
+ 0x00000351
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00080000
+ 0x00080000
+ 0x00000000
+ 0x00080000
+ 0x00080000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00090000
+ 0x00090000
+ 0x00090000
+ 0x00090000
+ 0x00009000
+ 0x00009000
+ 0x00009000
+ 0x00009000
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000606
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x0000004a
+ 0x00cc0011
+ 0x00cc0011
+ 0x00000000
+ 0x00000004
+ 0x0000d3b3
+ 0x80000287
+ 0x0000000a
+ 0x01000003
+ 0x80000040
+ 0x00000001
+ 0x00000002
+ 0x00000006
+ 0x00000003
+ 0x00000005
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000003
+ 0x00000001
+ 0x00000005
+ 0x00000005
+ 0x05050103
+ 0x000b0606
+ 0x71e40a07
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000062
+ 0x00ff006d
+ 0x00ff006d
+ 0x00ff003c
+ 0x00ff00af
+ 0x00ff004f
+ 0x00ff00af
+ 0x00ff004f
+ 0x004e0049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x00080057
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff0063
+ 0x00ff0036
+ 0x00ff0024
+ 0x00ff006b
+ 0x000000ff
+ 0x00000050
+ 0x00ff00ff
+ 0x00000050
+ 0x00ff00ff
+ 0x00d400ff
+ 0x00510050
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00c6
+ 0x00ff006d
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000017>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008cf>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <3420>;
+ };
+ emc-table-derated@300000 {
+ compatible = "nvidia,tegra12-emc-table-derated";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "01_300000_01_V6.0.3_V1.1";
+ clock-frequency = <300000>;
+ nvidia,emc-min-mv = <820>;
+ nvidia,gk20a-min-mv = <820>;
+ nvidia,source = "pllc_out0";
+ nvidia,src-sel-reg = <0x20000002>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000012
+ 0x00000026
+ 0x00000000
+ 0x0000000d
+ 0x00000005
+ 0x00000007
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000005
+ 0x00000005
+ 0x00000003
+ 0x00000003
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000006
+ 0x00000003
+ 0x00000000
+ 0x00000003
+ 0x00000008
+ 0x00030000
+ 0x00000004
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000002
+ 0x0000000f
+ 0x00000012
+ 0x00000014
+ 0x0000011c
+ 0x00000000
+ 0x00000047
+ 0x00000002
+ 0x00000002
+ 0x00000005
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x0000002a
+ 0x0000002a
+ 0x00000003
+ 0x00000005
+ 0x00000003
+ 0x0000000d
+ 0x00000007
+ 0x00000003
+ 0x00000003
+ 0x000004e0
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a096
+ 0x005800a0
+ 0x00008000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00050000
+ 0x00050000
+ 0x00000000
+ 0x00050000
+ 0x00050000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00006000
+ 0x00006000
+ 0x00006000
+ 0x00006000
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x01231239
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000606
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451420
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x0000006c
+ 0x012c0011
+ 0x012c0011
+ 0x00000000
+ 0x00000004
+ 0x000052a3
+ 0x8000033e
+ 0x0000000b
+ 0x08000004
+ 0x80000040
+ 0x00000001
+ 0x00000002
+ 0x00000009
+ 0x00000005
+ 0x00000007
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000003
+ 0x00000001
+ 0x00000005
+ 0x00000005
+ 0x05050103
+ 0x000c0709
+ 0x71c50e0a
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000004
+ 0x00000090
+ 0x00ff004a
+ 0x00ff004a
+ 0x00ff003c
+ 0x00ff0090
+ 0x00ff0041
+ 0x00ff0090
+ 0x00ff0041
+ 0x00350049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x0008003b
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff0043
+ 0x00ff002d
+ 0x00ff0024
+ 0x00ff0049
+ 0x000000ff
+ 0x00000036
+ 0x00ff00ff
+ 0x00000036
+ 0x00ff00ff
+ 0x00d400ff
+ 0x00510036
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0087
+ 0x00ff004a
+ >;
+ nvidia,emc-zcal-cnt-long = <0x0000001f>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3300000>;
+ nvidia,emc-cfg-2 = <0x000008d7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000000>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <2680>;
+ };
+ emc-table-derated@396000 {
+ compatible = "nvidia,tegra12-emc-table-derated";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "01_396000_01_V6.0.3_V1.1";
+ clock-frequency = <396000>;
+ nvidia,emc-min-mv = <850>;
+ nvidia,gk20a-min-mv = <850>;
+ nvidia,source = "pllm_out0";
+ nvidia,src-sel-reg = <0x00000002>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000018
+ 0x00000033
+ 0x00000000
+ 0x00000011
+ 0x00000007
+ 0x00000008
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000007
+ 0x00000007
+ 0x00000004
+ 0x00000003
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000006
+ 0x00000003
+ 0x00000000
+ 0x00000002
+ 0x00000009
+ 0x00030000
+ 0x00000004
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000001
+ 0x00000010
+ 0x00000012
+ 0x00000014
+ 0x00000176
+ 0x00000000
+ 0x0000005d
+ 0x00000002
+ 0x00000002
+ 0x00000007
+ 0x00000000
+ 0x00000001
+ 0x0000000e
+ 0x00000038
+ 0x00000038
+ 0x00000003
+ 0x00000006
+ 0x00000003
+ 0x00000012
+ 0x0000000a
+ 0x00000003
+ 0x00000003
+ 0x00000670
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a096
+ 0x005800a0
+ 0x00008000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00050000
+ 0x00050000
+ 0x00000000
+ 0x00050000
+ 0x00050000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00040000
+ 0x00040000
+ 0x00040000
+ 0x00040000
+ 0x00004000
+ 0x00004000
+ 0x00004000
+ 0x00004000
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x01231239
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000606
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451420
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x0000008f
+ 0x018c0011
+ 0x018c0011
+ 0x00000000
+ 0x00000004
+ 0x000052a3
+ 0x800003f4
+ 0x0000000b
+ 0x0f000005
+ 0x80000040
+ 0x00000002
+ 0x00000003
+ 0x0000000c
+ 0x00000007
+ 0x00000009
+ 0x00000002
+ 0x00000002
+ 0x00000007
+ 0x00000003
+ 0x00000001
+ 0x00000005
+ 0x00000005
+ 0x05050103
+ 0x000e090c
+ 0x71c6120d
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x0000000a
+ 0x000000be
+ 0x00ff0038
+ 0x00ff0038
+ 0x00ff003c
+ 0x00ff0090
+ 0x00ff0041
+ 0x00ff0090
+ 0x00ff0041
+ 0x00280049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x0008002d
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff0033
+ 0x00ff0022
+ 0x00ff0024
+ 0x00ff0037
+ 0x000000ff
+ 0x00000036
+ 0x00ff00ff
+ 0x00000036
+ 0x00ff00ff
+ 0x00d400ff
+ 0x00510029
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0066
+ 0x00ff0038
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000028>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3300000>;
+ nvidia,emc-cfg-2 = <0x00000897>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004001c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000000>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <2180>;
+ };
+ emc-table-derated@528000 {
+ compatible = "nvidia,tegra12-emc-table-derated";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "01_528000_01_V6.0.3_V1.1";
+ clock-frequency = <528000>;
+ nvidia,emc-min-mv = <880>;
+ nvidia,gk20a-min-mv = <870>;
+ nvidia,source = "pllm_ud";
+ nvidia,src-sel-reg = <0x80000000>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000020
+ 0x00000044
+ 0x00000000
+ 0x00000017
+ 0x0000000a
+ 0x0000000a
+ 0x00000009
+ 0x00000003
+ 0x0000000d
+ 0x0000000a
+ 0x0000000a
+ 0x00000006
+ 0x00000004
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000008
+ 0x00000003
+ 0x00000000
+ 0x00000003
+ 0x0000000a
+ 0x00050000
+ 0x00000004
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000002
+ 0x00000011
+ 0x00000015
+ 0x00000017
+ 0x000001f3
+ 0x00000000
+ 0x0000007c
+ 0x00000003
+ 0x00000003
+ 0x0000000a
+ 0x00000000
+ 0x00000001
+ 0x00000011
+ 0x0000004a
+ 0x0000004a
+ 0x00000004
+ 0x00000008
+ 0x00000004
+ 0x00000019
+ 0x0000000d
+ 0x00000003
+ 0x00000003
+ 0x00000895
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a096
+ 0xe01200b9
+ 0x00008000
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000010
+ 0x00000010
+ 0x00000000
+ 0x00000010
+ 0x00000010
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x0000000c
+ 0x0000000c
+ 0x0000000c
+ 0x0000000c
+ 0x0000000c
+ 0x0000000c
+ 0x0000000c
+ 0x0000000c
+ 0x00000220
+ 0x00000000
+ 0x00100100
+ 0x0123123d
+ 0x00000000
+ 0x00000000
+ 0x77ffc004
+ 0x00000505
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451420
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x000000bf
+ 0x02100013
+ 0x02100013
+ 0x00000000
+ 0x00000004
+ 0x000042a0
+ 0x800004ef
+ 0x0000000d
+ 0x0f000007
+ 0x80000040
+ 0x00000004
+ 0x00000005
+ 0x00000011
+ 0x0000000a
+ 0x0000000d
+ 0x00000003
+ 0x00000002
+ 0x00000009
+ 0x00000003
+ 0x00000001
+ 0x00000006
+ 0x00000006
+ 0x06060103
+ 0x00130c11
+ 0x71c81812
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x0000000d
+ 0x000000fd
+ 0x00c10038
+ 0x00c10038
+ 0x00c1003c
+ 0x00c10090
+ 0x00c10041
+ 0x00c10090
+ 0x00c10041
+ 0x00270049
+ 0x00c10080
+ 0x00c10004
+ 0x00c10004
+ 0x00080021
+ 0x000000c1
+ 0x00c10004
+ 0x00c10026
+ 0x00c1001a
+ 0x00c10024
+ 0x00c10029
+ 0x000000c1
+ 0x00000036
+ 0x00c100c1
+ 0x00000036
+ 0x00c100c1
+ 0x00d400ff
+ 0x00510029
+ 0x00c100c1
+ 0x00c100c1
+ 0x00c10065
+ 0x00c1002a
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000034>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3300000>;
+ nvidia,emc-cfg-2 = <0x0000089f>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004001c>;
+ nvidia,emc-cfg-dig-dll = <0xe0120069>;
+ nvidia,emc-bgbias-ctl0 = <0x00000000>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x000100c3>;
+ nvidia,emc-mode-2 = <0x00020006>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <1440>;
+ };
+ emc-table-derated@600000 {
+ compatible = "nvidia,tegra12-emc-table-derated";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "01_600000_01_V6.0.3_V1.1";
+ clock-frequency = <600000>;
+ nvidia,emc-min-mv = <910>;
+ nvidia,gk20a-min-mv = <910>;
+ nvidia,source = "pllc_ud";
+ nvidia,src-sel-reg = <0xe0000000>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000025
+ 0x0000004d
+ 0x00000000
+ 0x0000001a
+ 0x0000000b
+ 0x0000000a
+ 0x0000000b
+ 0x00000004
+ 0x0000000f
+ 0x0000000b
+ 0x0000000b
+ 0x00000007
+ 0x00000004
+ 0x00000000
+ 0x00000004
+ 0x00000004
+ 0x0000000a
+ 0x00000004
+ 0x00000000
+ 0x00000003
+ 0x0000000d
+ 0x00070000
+ 0x00000005
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000002
+ 0x00000014
+ 0x00000018
+ 0x0000001a
+ 0x00000237
+ 0x00000000
+ 0x0000008d
+ 0x00000004
+ 0x00000004
+ 0x0000000b
+ 0x00000000
+ 0x00000001
+ 0x00000013
+ 0x00000054
+ 0x00000054
+ 0x00000005
+ 0x00000009
+ 0x00000005
+ 0x0000001c
+ 0x0000000e
+ 0x00000003
+ 0x00000003
+ 0x000009c1
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a096
+ 0xe00e00b9
+ 0x00008000
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000010
+ 0x00000010
+ 0x00000000
+ 0x00000010
+ 0x00000010
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x0000000b
+ 0x0000000b
+ 0x0000000b
+ 0x0000000b
+ 0x0000000b
+ 0x0000000b
+ 0x0000000b
+ 0x0000000b
+ 0x00000220
+ 0x00000000
+ 0x00100100
+ 0x0121103d
+ 0x00000000
+ 0x00000000
+ 0x77ffc004
+ 0x00000606
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451420
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x000000d8
+ 0x02580014
+ 0x02580014
+ 0x00000000
+ 0x00000005
+ 0x000040a0
+ 0x80000578
+ 0x00000010
+ 0x00000009
+ 0x80000040
+ 0x00000004
+ 0x00000005
+ 0x00000013
+ 0x0000000c
+ 0x0000000e
+ 0x00000003
+ 0x00000003
+ 0x0000000a
+ 0x00000003
+ 0x00000001
+ 0x00000006
+ 0x00000007
+ 0x07060103
+ 0x00150e13
+ 0x71a91b14
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x0000000f
+ 0x00000120
+ 0x00aa0038
+ 0x00aa0038
+ 0x00aa003c
+ 0x00aa0090
+ 0x00aa0041
+ 0x00aa0090
+ 0x00aa0041
+ 0x00270049
+ 0x00aa0080
+ 0x00aa0004
+ 0x00aa0004
+ 0x0008001d
+ 0x000000aa
+ 0x00aa0004
+ 0x00aa0022
+ 0x00aa0018
+ 0x00aa0024
+ 0x00aa0024
+ 0x000000aa
+ 0x00000036
+ 0x00aa00aa
+ 0x00000036
+ 0x00aa00aa
+ 0x00d400ff
+ 0x00510029
+ 0x00aa00aa
+ 0x00aa00aa
+ 0x00aa0065
+ 0x00aa0025
+ >;
+ nvidia,emc-zcal-cnt-long = <0x0000003a>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3300000>;
+ nvidia,emc-cfg-2 = <0x0000089f>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004001c>;
+ nvidia,emc-cfg-dig-dll = <0xe00e0069>;
+ nvidia,emc-bgbias-ctl0 = <0x00000000>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x000100e3>;
+ nvidia,emc-mode-2 = <0x00020007>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <1440>;
+ };
+ emc-table-derated@792000 {
+ compatible = "nvidia,tegra12-emc-table-derated";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "01_792000_01_V6.0.3_V1.1";
+ clock-frequency = <792000>;
+ nvidia,emc-min-mv = <980>;
+ nvidia,gk20a-min-mv = <980>;
+ nvidia,source = "pllm_ud";
+ nvidia,src-sel-reg = <0x80000000>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000030
+ 0x00000066
+ 0x00000000
+ 0x00000022
+ 0x0000000f
+ 0x0000000d
+ 0x0000000d
+ 0x00000005
+ 0x00000013
+ 0x0000000f
+ 0x0000000f
+ 0x00000009
+ 0x00000004
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x0000000e
+ 0x00000004
+ 0x00000000
+ 0x00000005
+ 0x0000000f
+ 0x000b0000
+ 0x00000006
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x00000016
+ 0x0000001d
+ 0x0000001f
+ 0x000002ec
+ 0x00000000
+ 0x000000bb
+ 0x00000005
+ 0x00000005
+ 0x0000000f
+ 0x00000000
+ 0x00000001
+ 0x00000017
+ 0x0000006f
+ 0x0000006f
+ 0x00000006
+ 0x0000000c
+ 0x00000006
+ 0x00000026
+ 0x00000013
+ 0x00000003
+ 0x00000003
+ 0x00000cdf
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a096
+ 0xe00700b9
+ 0x00008000
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00004014
+ 0x00004014
+ 0x00000000
+ 0x00004014
+ 0x00004014
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000009
+ 0x00000009
+ 0x00000009
+ 0x00000009
+ 0x00000009
+ 0x00000009
+ 0x00000009
+ 0x00000009
+ 0x00000220
+ 0x00000000
+ 0x00100100
+ 0x0120103d
+ 0x00000000
+ 0x00000000
+ 0x77ffc004
+ 0x00000606
+ 0x81f1f008
+ 0x07070000
+ 0x00000000
+ 0x015ddddd
+ 0x59659620
+ 0x00492492
+ 0x00492492
+ 0x59659600
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x0000011e
+ 0x03180017
+ 0x03180017
+ 0x00000000
+ 0x00000006
+ 0x00004080
+ 0x800006e5
+ 0x00000014
+ 0x0e00000b
+ 0x80000040
+ 0x00000006
+ 0x00000007
+ 0x00000019
+ 0x00000010
+ 0x00000013
+ 0x00000004
+ 0x00000003
+ 0x0000000c
+ 0x00000003
+ 0x00000001
+ 0x00000008
+ 0x00000008
+ 0x08080103
+ 0x001b1219
+ 0x71ac241a
+ 0x70000f02
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000013
+ 0x0000017c
+ 0x00810038
+ 0x00810038
+ 0x0081003c
+ 0x00810090
+ 0x00810041
+ 0x00810090
+ 0x00810041
+ 0x00270049
+ 0x00810080
+ 0x00810004
+ 0x00810004
+ 0x00080016
+ 0x00000081
+ 0x00810004
+ 0x00810019
+ 0x00810018
+ 0x00810024
+ 0x0081001c
+ 0x00000081
+ 0x00000036
+ 0x00810081
+ 0x00000036
+ 0x00810081
+ 0x00d400ff
+ 0x00510029
+ 0x00810081
+ 0x00810081
+ 0x00810065
+ 0x0081001c
+ >;
+ nvidia,emc-zcal-cnt-long = <0x0000004c>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3300000>;
+ nvidia,emc-cfg-2 = <0x0000089f>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004001c>;
+ nvidia,emc-cfg-dig-dll = <0xe0070069>;
+ nvidia,emc-bgbias-ctl0 = <0x00000000>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430404>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010043>;
+ nvidia,emc-mode-2 = <0x0002001a>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <1200>;
+ };
+ emc-table-derated@924000 {
+ compatible = "nvidia,tegra12-emc-table-derated";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "01_924000_01_V6.0.3_V1.1";
+ clock-frequency = <924000>;
+ nvidia,emc-min-mv = <1010>;
+ nvidia,gk20a-min-mv = <1010>;
+ nvidia,source = "pllm_ud";
+ nvidia,src-sel-reg = <0x80000000>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000039
+ 0x00000078
+ 0x00000000
+ 0x00000028
+ 0x00000012
+ 0x0000000f
+ 0x00000010
+ 0x00000006
+ 0x00000017
+ 0x00000012
+ 0x00000012
+ 0x0000000a
+ 0x00000005
+ 0x00000000
+ 0x00000007
+ 0x00000007
+ 0x00000010
+ 0x00000005
+ 0x00000000
+ 0x00000005
+ 0x00000012
+ 0x000d0000
+ 0x00000007
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x00000019
+ 0x00000020
+ 0x00000022
+ 0x00000369
+ 0x00000000
+ 0x000000da
+ 0x00000006
+ 0x00000006
+ 0x00000012
+ 0x00000000
+ 0x00000001
+ 0x0000001b
+ 0x00000082
+ 0x00000082
+ 0x00000007
+ 0x0000000e
+ 0x00000007
+ 0x0000002d
+ 0x00000016
+ 0x00000003
+ 0x00000003
+ 0x00000f04
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a896
+ 0xe00400b9
+ 0x00008000
+ 0x00000005
+ 0x00000007
+ 0x00000007
+ 0x00000005
+ 0x00000003
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000007
+ 0x00000007
+ 0x00000005
+ 0x00000003
+ 0x00000005
+ 0x00000005
+ 0x00000005
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x0000800e
+ 0x0000800e
+ 0x00000000
+ 0x0000800e
+ 0x0000800e
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000007
+ 0x00000007
+ 0x00000007
+ 0x00000007
+ 0x00000007
+ 0x00000007
+ 0x00000007
+ 0x00000007
+ 0x00000220
+ 0x00000000
+ 0x00100100
+ 0x0120103d
+ 0x00000000
+ 0x00000000
+ 0x77ffc004
+ 0x00000404
+ 0x81f1f008
+ 0x07070000
+ 0x00000000
+ 0x015ddddd
+ 0x55555520
+ 0x00514514
+ 0x00514514
+ 0x55555500
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x0000014d
+ 0x039c0019
+ 0x039c0019
+ 0x00000000
+ 0x00000007
+ 0x00004080
+ 0x800007e0
+ 0x00000017
+ 0x0e00000d
+ 0x80000040
+ 0x00000008
+ 0x00000009
+ 0x0000001d
+ 0x00000013
+ 0x00000017
+ 0x00000005
+ 0x00000004
+ 0x0000000e
+ 0x00000004
+ 0x00000001
+ 0x00000009
+ 0x00000009
+ 0x09090104
+ 0x0020161d
+ 0x71ae2a1e
+ 0x70000f02
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000017
+ 0x000001bb
+ 0x006e0038
+ 0x006e0038
+ 0x006e003c
+ 0x006e0090
+ 0x006e0041
+ 0x006e0090
+ 0x006e0041
+ 0x00270049
+ 0x006e0080
+ 0x006e0004
+ 0x006e0004
+ 0x00080016
+ 0x0000006e
+ 0x006e0004
+ 0x006e0019
+ 0x006e0018
+ 0x006e0024
+ 0x006e001b
+ 0x0000006e
+ 0x00000036
+ 0x006e006e
+ 0x00000036
+ 0x006e006e
+ 0x00d400ff
+ 0x00510029
+ 0x006e006e
+ 0x006e006e
+ 0x006e0065
+ 0x006e001c
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000058>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3300000>;
+ nvidia,emc-cfg-2 = <0x0000089f>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004001c>;
+ nvidia,emc-cfg-dig-dll = <0xe0040069>;
+ nvidia,emc-bgbias-ctl0 = <0x00000000>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430808>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x0002001c>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <1180>;
+ };
+
+ };
+ ram-code@2 { /* Samsung_2G_K3QF2F20DM-AGCF */
+ #address-cells = <1>;
+ #size-cells = <0>;
+ nvidia,ram-code = <2>;
+ emc-table@12750 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "02_12750_01_V6.0.4_V1.1";
+ clock-frequency = <12750>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x4000003e>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000000
+ 0x00000003
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000006
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x00000030
+ 0x00000000
+ 0x0000000c
+ 0x00000002
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x00000003
+ 0x00000002
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000004
+ 0x00000003
+ 0x00000003
+ 0x00000036
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000404
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x00000011
+ 0x000d0011
+ 0x000d0011
+ 0x00000000
+ 0x00000003
+ 0x0000f3f3
+ 0x80000164
+ 0x0000000a
+ 0x40040001
+ 0x8000000a
+ 0x00000001
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000002
+ 0x00000001
+ 0x00000004
+ 0x00000005
+ 0x05040102
+ 0x00090402
+ 0x77c30303
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000007
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x000800ff
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0024
+ 0x00ff00ff
+ 0x000000ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000015>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008c7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <57820>;
+ };
+ emc-table@20400 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "02_20400_01_V6.0.4_V1.1";
+ clock-frequency = <20400>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x40000026>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000001
+ 0x00000003
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000006
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x0000004d
+ 0x00000000
+ 0x00000013
+ 0x00000002
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000004
+ 0x00000003
+ 0x00000003
+ 0x00000055
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000404
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x00000011
+ 0x00150011
+ 0x00150011
+ 0x00000000
+ 0x00000003
+ 0x0000f3f3
+ 0x8000019f
+ 0x0000000a
+ 0x40020001
+ 0x80000012
+ 0x00000001
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000002
+ 0x00000001
+ 0x00000004
+ 0x00000005
+ 0x05040102
+ 0x00090402
+ 0x74e30303
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x0000000a
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x000800ff
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0024
+ 0x00ff00ff
+ 0x000000ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000015>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008c7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <35610>;
+ };
+ emc-table@40800 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "02_40800_01_V6.0.4_V1.1";
+ clock-frequency = <40800>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x40000012>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000002
+ 0x00000005
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000006
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x0000009a
+ 0x00000000
+ 0x00000026
+ 0x00000002
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x00000006
+ 0x00000006
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000004
+ 0x00000003
+ 0x00000003
+ 0x000000aa
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000404
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x00000011
+ 0x00290011
+ 0x00290011
+ 0x00000000
+ 0x00000003
+ 0x0000f3f3
+ 0x8000023a
+ 0x0000000a
+ 0xa0000001
+ 0x80000017
+ 0x00000001
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000002
+ 0x00000001
+ 0x00000004
+ 0x00000005
+ 0x05040102
+ 0x00090402
+ 0x73030303
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000014
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x000800ff
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0024
+ 0x00ff00ff
+ 0x000000ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000015>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008c7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <20850>;
+ };
+ emc-table@68000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "02_68000_01_V6.0.4_V1.1";
+ clock-frequency = <68000>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x4000000a>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000004
+ 0x00000008
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000006
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x00000101
+ 0x00000000
+ 0x00000040
+ 0x00000002
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x0000000a
+ 0x0000000a
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000004
+ 0x00000003
+ 0x00000003
+ 0x0000011b
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000404
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x00000019
+ 0x00440011
+ 0x00440011
+ 0x00000000
+ 0x00000003
+ 0x0000f3f3
+ 0x80000309
+ 0x0000000a
+ 0x00000001
+ 0x8000001e
+ 0x00000001
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000002
+ 0x00000001
+ 0x00000004
+ 0x00000005
+ 0x05040102
+ 0x00090402
+ 0x72630403
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000021
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00b0
+ 0x00ff00ff
+ 0x00ff00ec
+ 0x00ff00ff
+ 0x00ff00ec
+ 0x00e90049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x000800ff
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff00ff
+ 0x00ff00a3
+ 0x00ff0024
+ 0x00ff00ff
+ 0x000000ff
+ 0x000000ef
+ 0x00ff00ff
+ 0x000000ef
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ee00ef
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000015>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008c7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <10720>;
+ };
+ emc-table@102000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "02_102000_01_V6.0.4_V1.1";
+ clock-frequency = <102000>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x40000006>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000006
+ 0x0000000d
+ 0x00000000
+ 0x00000004
+ 0x00000002
+ 0x00000006
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x00000182
+ 0x00000000
+ 0x00000060
+ 0x00000002
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x0000000f
+ 0x0000000f
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000004
+ 0x00000003
+ 0x00000003
+ 0x000001a9
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000404
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x00000025
+ 0x00660011
+ 0x00660011
+ 0x00000000
+ 0x00000003
+ 0x0000f3f3
+ 0x8000040b
+ 0x0000000a
+ 0x08000001
+ 0x80000026
+ 0x00000001
+ 0x00000001
+ 0x00000003
+ 0x00000001
+ 0x00000003
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000002
+ 0x00000001
+ 0x00000004
+ 0x00000005
+ 0x05040102
+ 0x00090403
+ 0x72430504
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000031
+ 0x00ff00da
+ 0x00ff00da
+ 0x00ff0075
+ 0x00ff00ff
+ 0x00ff009d
+ 0x00ff00ff
+ 0x00ff009d
+ 0x009b0049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x000800ad
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff00c6
+ 0x00ff006d
+ 0x00ff0024
+ 0x00ff00d6
+ 0x000000ff
+ 0x0000009f
+ 0x00ff00ff
+ 0x0000009f
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x009f00a0
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00da
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000015>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008c7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <6890>;
+ };
+ emc-table@136000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "02_136000_01_V6.0.4_V1.1";
+ clock-frequency = <136000>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x40000004>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000008
+ 0x00000011
+ 0x00000000
+ 0x00000005
+ 0x00000002
+ 0x00000007
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x00000202
+ 0x00000000
+ 0x00000080
+ 0x00000002
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x00000014
+ 0x00000014
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000004
+ 0x00000003
+ 0x00000003
+ 0x00000236
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000404
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x00000031
+ 0x00880011
+ 0x00880011
+ 0x00000000
+ 0x00000003
+ 0x0000f3f3
+ 0x8000050e
+ 0x0000000a
+ 0x01000002
+ 0x8000002f
+ 0x00000001
+ 0x00000001
+ 0x00000004
+ 0x00000001
+ 0x00000003
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000002
+ 0x00000001
+ 0x00000005
+ 0x00000005
+ 0x05050102
+ 0x00090404
+ 0x72030705
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000042
+ 0x00ff00a3
+ 0x00ff00a3
+ 0x00ff0058
+ 0x00ff00ff
+ 0x00ff0076
+ 0x00ff00ff
+ 0x00ff0076
+ 0x00740049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x00080082
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff0094
+ 0x00ff0051
+ 0x00ff0024
+ 0x00ff00a1
+ 0x000000ff
+ 0x00000077
+ 0x00ff00ff
+ 0x00000077
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00770078
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00a3
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000015>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008c7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <5400>;
+ };
+ emc-table@204000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "02_204000_01_V6.0.4_V1.1";
+ clock-frequency = <204000>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x40000002>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x0000000c
+ 0x0000001a
+ 0x00000000
+ 0x00000008
+ 0x00000003
+ 0x00000007
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000003
+ 0x00000003
+ 0x00000002
+ 0x00000003
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000005
+ 0x00000003
+ 0x00000000
+ 0x00000003
+ 0x00000007
+ 0x00010000
+ 0x00000004
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000002
+ 0x0000000e
+ 0x0000000f
+ 0x00000011
+ 0x00000304
+ 0x00000000
+ 0x000000c1
+ 0x00000002
+ 0x00000002
+ 0x00000003
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x0000001d
+ 0x0000001d
+ 0x00000003
+ 0x00000004
+ 0x00000003
+ 0x00000009
+ 0x00000005
+ 0x00000003
+ 0x00000003
+ 0x00000351
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00068000
+ 0x00068000
+ 0x00000000
+ 0x00068000
+ 0x00068000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00068000
+ 0x00068000
+ 0x00068000
+ 0x00068000
+ 0x00006800
+ 0x00006800
+ 0x00006800
+ 0x00006800
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000606
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x0000004a
+ 0x00cc0011
+ 0x00cc0011
+ 0x00000000
+ 0x00000004
+ 0x0000d3b3
+ 0x80000713
+ 0x0000000a
+ 0x01000003
+ 0x80000040
+ 0x00000001
+ 0x00000001
+ 0x00000006
+ 0x00000003
+ 0x00000005
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000003
+ 0x00000001
+ 0x00000005
+ 0x00000005
+ 0x05050103
+ 0x000a0506
+ 0x71e40a07
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000062
+ 0x00ff006d
+ 0x00ff006d
+ 0x00ff003c
+ 0x00ff00af
+ 0x00ff004f
+ 0x00ff00af
+ 0x00ff004f
+ 0x004e0049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x00080057
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff0063
+ 0x00ff0036
+ 0x00ff0024
+ 0x00ff006b
+ 0x000000ff
+ 0x00000050
+ 0x00ff00ff
+ 0x00000050
+ 0x00ff00ff
+ 0x00d400ff
+ 0x00510050
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00c6
+ 0x00ff006d
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000017>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008cf>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <3420>;
+ };
+ emc-table@300000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "02_300000_01_V6.0.4_V1.1";
+ clock-frequency = <300000>;
+ nvidia,emc-min-mv = <820>;
+ nvidia,gk20a-min-mv = <820>;
+ nvidia,source = "pllc_out0";
+ nvidia,src-sel-reg = <0x20000002>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000011
+ 0x00000026
+ 0x00000000
+ 0x0000000c
+ 0x00000005
+ 0x00000007
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000005
+ 0x00000005
+ 0x00000002
+ 0x00000003
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000006
+ 0x00000003
+ 0x00000000
+ 0x00000003
+ 0x00000008
+ 0x00030000
+ 0x00000004
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000002
+ 0x0000000f
+ 0x00000012
+ 0x00000014
+ 0x0000046e
+ 0x00000000
+ 0x0000011b
+ 0x00000002
+ 0x00000002
+ 0x00000005
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x0000002a
+ 0x0000002a
+ 0x00000003
+ 0x00000005
+ 0x00000003
+ 0x0000000d
+ 0x00000007
+ 0x00000003
+ 0x00000003
+ 0x000004e0
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a096
+ 0x005800a0
+ 0x00008000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00058000
+ 0x00058000
+ 0x00000000
+ 0x00058000
+ 0x00058000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00004800
+ 0x00004800
+ 0x00004800
+ 0x00004800
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x01231239
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000606
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451420
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x0000006c
+ 0x012c0011
+ 0x012c0011
+ 0x00000000
+ 0x00000004
+ 0x000052a3
+ 0x800009ed
+ 0x0000000b
+ 0x08000004
+ 0x80000040
+ 0x00000001
+ 0x00000002
+ 0x00000009
+ 0x00000005
+ 0x00000007
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000003
+ 0x00000001
+ 0x00000005
+ 0x00000005
+ 0x05050103
+ 0x000c0709
+ 0x71c50e0a
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000004
+ 0x00000090
+ 0x00ff004a
+ 0x00ff004a
+ 0x00ff003c
+ 0x00ff0090
+ 0x00ff0041
+ 0x00ff0090
+ 0x00ff0041
+ 0x00350049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x0008003b
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff0043
+ 0x00ff002d
+ 0x00ff0024
+ 0x00ff0049
+ 0x000000ff
+ 0x00000036
+ 0x00ff00ff
+ 0x00000036
+ 0x00ff00ff
+ 0x00d400ff
+ 0x00510036
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0087
+ 0x00ff004a
+ >;
+ nvidia,emc-zcal-cnt-long = <0x0000001f>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3300000>;
+ nvidia,emc-cfg-2 = <0x000008d7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000000>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <2680>;
+ };
+ emc-table@396000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "02_396000_01_V6.0.4_V1.1";
+ clock-frequency = <396000>;
+ nvidia,emc-min-mv = <850>;
+ nvidia,gk20a-min-mv = <850>;
+ nvidia,source = "pllm_out0";
+ nvidia,src-sel-reg = <0x00000002>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000017
+ 0x00000033
+ 0x00000000
+ 0x00000010
+ 0x00000007
+ 0x00000008
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000007
+ 0x00000007
+ 0x00000003
+ 0x00000003
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000006
+ 0x00000003
+ 0x00000000
+ 0x00000002
+ 0x00000009
+ 0x00030000
+ 0x00000004
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000001
+ 0x00000010
+ 0x00000012
+ 0x00000014
+ 0x000005d9
+ 0x00000000
+ 0x00000176
+ 0x00000002
+ 0x00000002
+ 0x00000007
+ 0x00000000
+ 0x00000001
+ 0x0000000e
+ 0x00000038
+ 0x00000038
+ 0x00000003
+ 0x00000006
+ 0x00000003
+ 0x00000012
+ 0x00000009
+ 0x00000003
+ 0x00000003
+ 0x00000670
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a096
+ 0x005800a0
+ 0x00008000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00058000
+ 0x00058000
+ 0x00000000
+ 0x00058000
+ 0x00058000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00040000
+ 0x00040000
+ 0x00040000
+ 0x00040000
+ 0x00004000
+ 0x00004000
+ 0x00004000
+ 0x00004000
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x01231239
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000606
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451420
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x0000008f
+ 0x018c0011
+ 0x018c0011
+ 0x00000000
+ 0x00000004
+ 0x000052a3
+ 0x80000cc7
+ 0x0000000b
+ 0x0f000005
+ 0x80000040
+ 0x00000002
+ 0x00000003
+ 0x0000000c
+ 0x00000007
+ 0x00000009
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000003
+ 0x00000001
+ 0x00000005
+ 0x00000005
+ 0x05050103
+ 0x000e090c
+ 0x71c6120d
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x0000000a
+ 0x000000be
+ 0x00ff0038
+ 0x00ff0038
+ 0x00ff003c
+ 0x00ff0090
+ 0x00ff0041
+ 0x00ff0090
+ 0x00ff0041
+ 0x00280049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x0008002d
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff0033
+ 0x00ff0022
+ 0x00ff0024
+ 0x00ff0037
+ 0x000000ff
+ 0x00000036
+ 0x00ff00ff
+ 0x00000036
+ 0x00ff00ff
+ 0x00d400ff
+ 0x00510029
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0066
+ 0x00ff0038
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000028>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3300000>;
+ nvidia,emc-cfg-2 = <0x00000897>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004001c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000000>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <2180>;
+ };
+ emc-table@528000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "02_528000_01_V6.0.4_V1.1";
+ clock-frequency = <528000>;
+ nvidia,emc-min-mv = <880>;
+ nvidia,gk20a-min-mv = <870>;
+ nvidia,source = "pllm_ud";
+ nvidia,src-sel-reg = <0x80000000>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x0000001f
+ 0x00000044
+ 0x00000000
+ 0x00000016
+ 0x00000009
+ 0x00000009
+ 0x00000009
+ 0x00000003
+ 0x0000000d
+ 0x00000009
+ 0x00000009
+ 0x00000005
+ 0x00000004
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000008
+ 0x00000003
+ 0x00000000
+ 0x00000003
+ 0x0000000a
+ 0x00050000
+ 0x00000004
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000002
+ 0x00000011
+ 0x00000015
+ 0x00000017
+ 0x000007cd
+ 0x00000000
+ 0x000001f3
+ 0x00000003
+ 0x00000003
+ 0x00000009
+ 0x00000000
+ 0x00000001
+ 0x00000011
+ 0x0000004a
+ 0x0000004a
+ 0x00000004
+ 0x00000008
+ 0x00000004
+ 0x00000019
+ 0x0000000c
+ 0x00000003
+ 0x00000003
+ 0x00000895
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a096
+ 0xe01200b9
+ 0x00008000
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000010
+ 0x00000010
+ 0x00000000
+ 0x00000010
+ 0x00000010
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x0000000c
+ 0x0000000c
+ 0x0000000c
+ 0x0000000c
+ 0x0000000c
+ 0x0000000c
+ 0x0000000c
+ 0x0000000c
+ 0x00000220
+ 0x00000000
+ 0x00100100
+ 0x0123123d
+ 0x00000000
+ 0x00000000
+ 0x77ffc004
+ 0x00000505
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451420
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x000000bf
+ 0x02100013
+ 0x02100013
+ 0x00000000
+ 0x00000004
+ 0x000042a0
+ 0x800010b3
+ 0x0000000d
+ 0x0f000007
+ 0x80000040
+ 0x00000003
+ 0x00000004
+ 0x00000010
+ 0x0000000a
+ 0x0000000d
+ 0x00000002
+ 0x00000002
+ 0x00000009
+ 0x00000003
+ 0x00000001
+ 0x00000006
+ 0x00000006
+ 0x06060103
+ 0x00120b10
+ 0x71c81811
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x0000000d
+ 0x000000fd
+ 0x00c10038
+ 0x00c10038
+ 0x00c1003c
+ 0x00c10090
+ 0x00c10041
+ 0x00c10090
+ 0x00c10041
+ 0x00270049
+ 0x00c10080
+ 0x00c10004
+ 0x00c10004
+ 0x00080021
+ 0x000000c1
+ 0x00c10004
+ 0x00c10026
+ 0x00c1001a
+ 0x00c10024
+ 0x00c10029
+ 0x000000c1
+ 0x00000036
+ 0x00c100c1
+ 0x00000036
+ 0x00c100c1
+ 0x00d400ff
+ 0x00510029
+ 0x00c100c1
+ 0x00c100c1
+ 0x00c10065
+ 0x00c1002a
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000034>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3300000>;
+ nvidia,emc-cfg-2 = <0x0000089f>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004001c>;
+ nvidia,emc-cfg-dig-dll = <0xe0120069>;
+ nvidia,emc-bgbias-ctl0 = <0x00000000>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x000100c3>;
+ nvidia,emc-mode-2 = <0x00020006>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <1440>;
+ };
+ emc-table@600000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "02_600000_01_V6.0.4_V1.1";
+ clock-frequency = <600000>;
+ nvidia,emc-min-mv = <910>;
+ nvidia,gk20a-min-mv = <910>;
+ nvidia,source = "pllc_ud";
+ nvidia,src-sel-reg = <0xe0000000>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000023
+ 0x0000004d
+ 0x00000000
+ 0x00000019
+ 0x0000000a
+ 0x0000000a
+ 0x0000000b
+ 0x00000004
+ 0x0000000f
+ 0x0000000a
+ 0x0000000a
+ 0x00000005
+ 0x00000004
+ 0x00000000
+ 0x00000004
+ 0x00000004
+ 0x0000000a
+ 0x00000004
+ 0x00000000
+ 0x00000003
+ 0x0000000d
+ 0x00070000
+ 0x00000005
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000002
+ 0x00000014
+ 0x00000018
+ 0x0000001a
+ 0x000008e4
+ 0x00000000
+ 0x00000239
+ 0x00000004
+ 0x00000004
+ 0x0000000a
+ 0x00000000
+ 0x00000001
+ 0x00000013
+ 0x00000054
+ 0x00000054
+ 0x00000005
+ 0x00000009
+ 0x00000005
+ 0x0000001c
+ 0x0000000d
+ 0x00000003
+ 0x00000003
+ 0x000009c0
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a096
+ 0xe00e00b9
+ 0x00008000
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000010
+ 0x00000010
+ 0x00000000
+ 0x00000010
+ 0x00000010
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x0000000b
+ 0x0000000b
+ 0x0000000b
+ 0x0000000b
+ 0x0000000b
+ 0x0000000b
+ 0x0000000b
+ 0x0000000b
+ 0x00000220
+ 0x00000000
+ 0x00100100
+ 0x0121103d
+ 0x00000000
+ 0x00000000
+ 0x77ffc004
+ 0x00000303
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451420
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x000000d8
+ 0x02580014
+ 0x02580014
+ 0x00000000
+ 0x00000005
+ 0x000040a0
+ 0x800012d7
+ 0x00000010
+ 0x00000009
+ 0x80000040
+ 0x00000004
+ 0x00000005
+ 0x00000012
+ 0x0000000b
+ 0x0000000e
+ 0x00000002
+ 0x00000003
+ 0x0000000a
+ 0x00000003
+ 0x00000001
+ 0x00000006
+ 0x00000007
+ 0x07060103
+ 0x00140d12
+ 0x71a91b13
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x0000000f
+ 0x00000120
+ 0x00aa0038
+ 0x00aa0038
+ 0x00aa003c
+ 0x00aa0090
+ 0x00aa0041
+ 0x00aa0090
+ 0x00aa0041
+ 0x00270049
+ 0x00aa0080
+ 0x00aa0004
+ 0x00aa0004
+ 0x0008001d
+ 0x000000aa
+ 0x00aa0004
+ 0x00aa0022
+ 0x00aa0018
+ 0x00aa0024
+ 0x00aa0024
+ 0x000000aa
+ 0x00000036
+ 0x00aa00aa
+ 0x00000036
+ 0x00aa00aa
+ 0x00d400ff
+ 0x00510029
+ 0x00aa00aa
+ 0x00aa00aa
+ 0x00aa0065
+ 0x00aa0025
+ >;
+ nvidia,emc-zcal-cnt-long = <0x0000003a>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3300000>;
+ nvidia,emc-cfg-2 = <0x0000089f>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004001c>;
+ nvidia,emc-cfg-dig-dll = <0xe00e0069>;
+ nvidia,emc-bgbias-ctl0 = <0x00000000>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x000100e3>;
+ nvidia,emc-mode-2 = <0x00020007>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <1440>;
+ };
+ emc-table@792000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "02_792000_01_V6.0.4_V1.1";
+ clock-frequency = <792000>;
+ nvidia,emc-min-mv = <980>;
+ nvidia,gk20a-min-mv = <980>;
+ nvidia,source = "pllm_ud";
+ nvidia,src-sel-reg = <0x80000000>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x0000002f
+ 0x00000066
+ 0x00000000
+ 0x00000021
+ 0x0000000e
+ 0x0000000d
+ 0x0000000d
+ 0x00000005
+ 0x00000013
+ 0x0000000e
+ 0x0000000e
+ 0x00000007
+ 0x00000004
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x0000000e
+ 0x00000004
+ 0x00000000
+ 0x00000005
+ 0x0000000f
+ 0x000b0000
+ 0x00000006
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x00000016
+ 0x0000001d
+ 0x0000001f
+ 0x00000bd1
+ 0x00000000
+ 0x000002f4
+ 0x00000005
+ 0x00000005
+ 0x0000000e
+ 0x00000000
+ 0x00000001
+ 0x00000017
+ 0x0000006f
+ 0x0000006f
+ 0x00000006
+ 0x0000000c
+ 0x00000006
+ 0x00000026
+ 0x00000011
+ 0x00000003
+ 0x00000003
+ 0x00000cdf
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a096
+ 0xe00700b9
+ 0x00008000
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x0000800f
+ 0x0000800f
+ 0x00000000
+ 0x0000800f
+ 0x0000800f
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x00000220
+ 0x00000000
+ 0x00100100
+ 0x0120103d
+ 0x00000000
+ 0x00000000
+ 0x77ffc004
+ 0x00000303
+ 0x81f1f008
+ 0x07070000
+ 0x00000000
+ 0x015ddddd
+ 0x49249220
+ 0x00492492
+ 0x00492492
+ 0x49249200
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x0000011e
+ 0x03180017
+ 0x03180017
+ 0x00000000
+ 0x00000006
+ 0x00004080
+ 0x8000188b
+ 0x00000014
+ 0x0e00000b
+ 0x80000040
+ 0x00000006
+ 0x00000007
+ 0x00000018
+ 0x0000000f
+ 0x00000013
+ 0x00000003
+ 0x00000003
+ 0x0000000c
+ 0x00000003
+ 0x00000001
+ 0x00000008
+ 0x00000008
+ 0x08080103
+ 0x001a1118
+ 0x71ac2419
+ 0x70000f02
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000013
+ 0x0000017c
+ 0x00810038
+ 0x00810038
+ 0x0081003c
+ 0x00810090
+ 0x00810041
+ 0x00810090
+ 0x00810041
+ 0x00270049
+ 0x00810080
+ 0x00810004
+ 0x00810004
+ 0x00080016
+ 0x00000081
+ 0x00810004
+ 0x00810019
+ 0x00810018
+ 0x00810024
+ 0x0081001c
+ 0x00000081
+ 0x00000036
+ 0x00810081
+ 0x00000036
+ 0x00810081
+ 0x00d400ff
+ 0x00510029
+ 0x00810081
+ 0x00810081
+ 0x00810065
+ 0x0081001c
+ >;
+ nvidia,emc-zcal-cnt-long = <0x0000004c>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3300000>;
+ nvidia,emc-cfg-2 = <0x0000089f>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004001c>;
+ nvidia,emc-cfg-dig-dll = <0xe0070069>;
+ nvidia,emc-bgbias-ctl0 = <0x00000000>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430404>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010043>;
+ nvidia,emc-mode-2 = <0x0002001a>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <1200>;
+ };
+ emc-table@924000 {
+ compatible = "nvidia,tegra12-emc-table";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "02_924000_01_V6.0.4_V1.1";
+ clock-frequency = <924000>;
+ nvidia,emc-min-mv = <1010>;
+ nvidia,gk20a-min-mv = <1010>;
+ nvidia,source = "pllm_ud";
+ nvidia,src-sel-reg = <0x80000000>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000037
+ 0x00000078
+ 0x00000000
+ 0x00000026
+ 0x00000010
+ 0x0000000f
+ 0x00000010
+ 0x00000006
+ 0x00000017
+ 0x00000010
+ 0x00000010
+ 0x00000009
+ 0x00000005
+ 0x00000000
+ 0x00000007
+ 0x00000007
+ 0x00000010
+ 0x00000005
+ 0x00000000
+ 0x00000005
+ 0x00000012
+ 0x000d0000
+ 0x00000007
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x00000019
+ 0x00000020
+ 0x00000022
+ 0x00000dd4
+ 0x00000000
+ 0x00000375
+ 0x00000006
+ 0x00000006
+ 0x00000010
+ 0x00000000
+ 0x00000001
+ 0x0000001b
+ 0x00000082
+ 0x00000082
+ 0x00000007
+ 0x0000000e
+ 0x00000007
+ 0x0000002d
+ 0x00000014
+ 0x00000003
+ 0x00000003
+ 0x00000f04
+ 0x00000002
+ 0x00000000
+ 0x00000000
+ 0x1361a896
+ 0xe00400b9
+ 0x00008000
+ 0x00000005
+ 0x007fc005
+ 0x007fc007
+ 0x00000005
+ 0x007fc005
+ 0x007fc005
+ 0x007fc006
+ 0x007fc005
+ 0x00000005
+ 0x007fc005
+ 0x007fc007
+ 0x00000005
+ 0x007fc005
+ 0x007fc005
+ 0x007fc006
+ 0x007fc005
+ 0x00008000
+ 0x00008000
+ 0x00008000
+ 0x00008000
+ 0x00008000
+ 0x00008000
+ 0x00008000
+ 0x00008000
+ 0x0000800e
+ 0x0000800e
+ 0x00000000
+ 0x0000800e
+ 0x0000800e
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x00000220
+ 0x00000000
+ 0x00100100
+ 0x0120103d
+ 0x00000000
+ 0x00000000
+ 0x77ffc004
+ 0x00000303
+ 0x81f1f008
+ 0x07070000
+ 0x00000000
+ 0x015ddddd
+ 0x59555520
+ 0x00554596
+ 0x00557594
+ 0x55555500
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x0000014d
+ 0x039c0019
+ 0x039c0019
+ 0x00000000
+ 0x00000007
+ 0x00004080
+ 0x80001c77
+ 0x00000017
+ 0x0e00000d
+ 0x80000040
+ 0x00000007
+ 0x00000008
+ 0x0000001b
+ 0x00000012
+ 0x00000017
+ 0x00000004
+ 0x00000004
+ 0x0000000e
+ 0x00000004
+ 0x00000001
+ 0x00000009
+ 0x00000009
+ 0x09090104
+ 0x001e141b
+ 0x71ae2a1c
+ 0x70000f02
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000017
+ 0x000001bb
+ 0x006e0038
+ 0x006e0038
+ 0x006e003c
+ 0x006e0090
+ 0x006e0041
+ 0x006e0090
+ 0x006e0041
+ 0x00270049
+ 0x006e0080
+ 0x006e0004
+ 0x006e0004
+ 0x00080016
+ 0x0000006e
+ 0x006e0004
+ 0x006e0019
+ 0x006e0018
+ 0x006e0024
+ 0x006e001b
+ 0x0000006e
+ 0x00000036
+ 0x006e006e
+ 0x00000036
+ 0x006e006e
+ 0x00d400ff
+ 0x00510029
+ 0x006e006e
+ 0x006e006e
+ 0x006e0065
+ 0x006e001c
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000058>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3300000>;
+ nvidia,emc-cfg-2 = <0x0000089f>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004001c>;
+ nvidia,emc-cfg-dig-dll = <0xe0040069>;
+ nvidia,emc-bgbias-ctl0 = <0x00000000>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430808>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x0002001c>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <1180>;
+ };
+ emc-table-derated@12750 {
+ compatible = "nvidia,tegra12-emc-table-derated";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "02_12750_01_V6.0.4_V1.1";
+ clock-frequency = <12750>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x4000003e>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000000
+ 0x00000003
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000006
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x0000000b
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x00000003
+ 0x00000002
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000004
+ 0x00000003
+ 0x00000003
+ 0x00000036
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000404
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x00000011
+ 0x000d0011
+ 0x000d0011
+ 0x00000000
+ 0x00000003
+ 0x0000f3f3
+ 0x8000011c
+ 0x0000000a
+ 0x40040001
+ 0x8000000a
+ 0x00000001
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000002
+ 0x00000001
+ 0x00000004
+ 0x00000005
+ 0x05040102
+ 0x00090402
+ 0x77c30303
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000007
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x000800ff
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0024
+ 0x00ff00ff
+ 0x000000ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000015>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008c7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <57820>;
+ };
+ emc-table-derated@20400 {
+ compatible = "nvidia,tegra12-emc-table-derated";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "02_20400_01_V6.0.4_V1.1";
+ clock-frequency = <20400>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x40000026>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000001
+ 0x00000003
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000006
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x00000013
+ 0x00000000
+ 0x00000004
+ 0x00000002
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000004
+ 0x00000003
+ 0x00000003
+ 0x00000055
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000404
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x00000011
+ 0x00150011
+ 0x00150011
+ 0x00000000
+ 0x00000003
+ 0x0000f3f3
+ 0x8000012a
+ 0x0000000a
+ 0x40020001
+ 0x80000012
+ 0x00000001
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000002
+ 0x00000001
+ 0x00000004
+ 0x00000005
+ 0x05040102
+ 0x00090402
+ 0x74e30303
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x0000000a
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x000800ff
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0024
+ 0x00ff00ff
+ 0x000000ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000015>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008c7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <35610>;
+ };
+ emc-table-derated@40800 {
+ compatible = "nvidia,tegra12-emc-table-derated";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "02_40800_01_V6.0.4_V1.1";
+ clock-frequency = <40800>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x40000012>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000002
+ 0x00000005
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000006
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x00000026
+ 0x00000000
+ 0x00000009
+ 0x00000002
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x00000006
+ 0x00000006
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000004
+ 0x00000003
+ 0x00000003
+ 0x000000aa
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000404
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x00000011
+ 0x00290011
+ 0x00290011
+ 0x00000000
+ 0x00000003
+ 0x0000f3f3
+ 0x80000151
+ 0x0000000a
+ 0xa0000001
+ 0x80000017
+ 0x00000001
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000002
+ 0x00000001
+ 0x00000004
+ 0x00000005
+ 0x05040102
+ 0x00090402
+ 0x73030303
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000014
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x000800ff
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0024
+ 0x00ff00ff
+ 0x000000ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x000000ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000015>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008c7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <20850>;
+ };
+ emc-table-derated@68000 {
+ compatible = "nvidia,tegra12-emc-table-derated";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "02_68000_01_V6.0.4_V1.1";
+ clock-frequency = <68000>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x4000000a>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000004
+ 0x00000008
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000006
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x00000040
+ 0x00000000
+ 0x00000010
+ 0x00000002
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x0000000a
+ 0x0000000a
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000004
+ 0x00000003
+ 0x00000003
+ 0x0000011b
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000404
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x00000019
+ 0x00440011
+ 0x00440011
+ 0x00000000
+ 0x00000003
+ 0x0000f3f3
+ 0x80000185
+ 0x0000000a
+ 0x00000001
+ 0x8000001e
+ 0x00000001
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000002
+ 0x00000001
+ 0x00000004
+ 0x00000005
+ 0x05040102
+ 0x00090402
+ 0x72630403
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000021
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00b0
+ 0x00ff00ff
+ 0x00ff00ec
+ 0x00ff00ff
+ 0x00ff00ec
+ 0x00e90049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x000800ff
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff00ff
+ 0x00ff00a3
+ 0x00ff0024
+ 0x00ff00ff
+ 0x000000ff
+ 0x000000ef
+ 0x00ff00ff
+ 0x000000ef
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ee00ef
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000015>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008c7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <10720>;
+ };
+ emc-table-derated@102000 {
+ compatible = "nvidia,tegra12-emc-table-derated";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "02_102000_01_V6.0.4_V1.1";
+ clock-frequency = <102000>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x40000006>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000006
+ 0x0000000d
+ 0x00000000
+ 0x00000004
+ 0x00000002
+ 0x00000006
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x00000060
+ 0x00000000
+ 0x00000018
+ 0x00000002
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x0000000f
+ 0x0000000f
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000004
+ 0x00000003
+ 0x00000003
+ 0x000001a9
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000404
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x00000025
+ 0x00660011
+ 0x00660011
+ 0x00000000
+ 0x00000003
+ 0x0000f3f3
+ 0x800001c5
+ 0x0000000a
+ 0x08000001
+ 0x80000026
+ 0x00000001
+ 0x00000001
+ 0x00000003
+ 0x00000001
+ 0x00000003
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000002
+ 0x00000001
+ 0x00000004
+ 0x00000005
+ 0x05040102
+ 0x00090403
+ 0x72430504
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000031
+ 0x00ff00da
+ 0x00ff00da
+ 0x00ff0075
+ 0x00ff00ff
+ 0x00ff009d
+ 0x00ff00ff
+ 0x00ff009d
+ 0x009b0049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x000800ad
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff00c6
+ 0x00ff006d
+ 0x00ff0024
+ 0x00ff00d6
+ 0x000000ff
+ 0x0000009f
+ 0x00ff00ff
+ 0x0000009f
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x009f00a0
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00da
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000015>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008c7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <6890>;
+ };
+ emc-table-derated@136000 {
+ compatible = "nvidia,tegra12-emc-table-derated";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "02_136000_01_V6.0.4_V1.1";
+ clock-frequency = <136000>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x40000004>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000008
+ 0x00000011
+ 0x00000000
+ 0x00000005
+ 0x00000002
+ 0x00000007
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000002
+ 0x00000002
+ 0x00000001
+ 0x00000002
+ 0x00000000
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000002
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x00010000
+ 0x00000003
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x0000000c
+ 0x0000000d
+ 0x0000000f
+ 0x00000081
+ 0x00000000
+ 0x00000020
+ 0x00000002
+ 0x00000002
+ 0x00000002
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x00000014
+ 0x00000014
+ 0x00000003
+ 0x00000003
+ 0x00000003
+ 0x00000006
+ 0x00000004
+ 0x00000003
+ 0x00000003
+ 0x00000236
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00080000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x000fc000
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x0000fc00
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000404
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x00000031
+ 0x00880011
+ 0x00880011
+ 0x00000000
+ 0x00000003
+ 0x0000f3f3
+ 0x80000206
+ 0x0000000a
+ 0x01000002
+ 0x8000002f
+ 0x00000001
+ 0x00000001
+ 0x00000004
+ 0x00000001
+ 0x00000003
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000002
+ 0x00000001
+ 0x00000005
+ 0x00000005
+ 0x05050102
+ 0x00090404
+ 0x72030705
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000042
+ 0x00ff00a3
+ 0x00ff00a3
+ 0x00ff0058
+ 0x00ff00ff
+ 0x00ff0076
+ 0x00ff00ff
+ 0x00ff0076
+ 0x00740049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x00080082
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff0094
+ 0x00ff0051
+ 0x00ff0024
+ 0x00ff00a1
+ 0x000000ff
+ 0x00000077
+ 0x00ff00ff
+ 0x00000077
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00770078
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00a3
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000015>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008c7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <5400>;
+ };
+ emc-table-derated@204000 {
+ compatible = "nvidia,tegra12-emc-table-derated";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "02_204000_01_V6.0.4_V1.1";
+ clock-frequency = <204000>;
+ nvidia,emc-min-mv = <800>;
+ nvidia,gk20a-min-mv = <800>;
+ nvidia,source = "pllp_out0";
+ nvidia,src-sel-reg = <0x40000002>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x0000000c
+ 0x0000001a
+ 0x00000000
+ 0x00000008
+ 0x00000004
+ 0x00000007
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000004
+ 0x00000004
+ 0x00000002
+ 0x00000003
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000005
+ 0x00000003
+ 0x00000000
+ 0x00000003
+ 0x00000007
+ 0x00010000
+ 0x00000004
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000002
+ 0x0000000e
+ 0x0000000f
+ 0x00000011
+ 0x000000c1
+ 0x00000000
+ 0x00000030
+ 0x00000002
+ 0x00000002
+ 0x00000004
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x0000001d
+ 0x0000001d
+ 0x00000003
+ 0x00000004
+ 0x00000003
+ 0x00000009
+ 0x00000005
+ 0x00000003
+ 0x00000003
+ 0x00000351
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a296
+ 0x005800a0
+ 0x00008000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00060000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00068000
+ 0x00068000
+ 0x00000000
+ 0x00068000
+ 0x00068000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00068000
+ 0x00068000
+ 0x00068000
+ 0x00068000
+ 0x00006800
+ 0x00006800
+ 0x00006800
+ 0x00006800
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x0130b018
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000606
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451400
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x0000004a
+ 0x00cc0011
+ 0x00cc0011
+ 0x00000000
+ 0x00000004
+ 0x0000d3b3
+ 0x80000287
+ 0x0000000a
+ 0x01000003
+ 0x80000040
+ 0x00000001
+ 0x00000002
+ 0x00000006
+ 0x00000003
+ 0x00000005
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000003
+ 0x00000001
+ 0x00000005
+ 0x00000005
+ 0x05050103
+ 0x000b0606
+ 0x71e40a07
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000001
+ 0x00000062
+ 0x00ff006d
+ 0x00ff006d
+ 0x00ff003c
+ 0x00ff00af
+ 0x00ff004f
+ 0x00ff00af
+ 0x00ff004f
+ 0x004e0049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x00080057
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff0063
+ 0x00ff0036
+ 0x00ff0024
+ 0x00ff006b
+ 0x000000ff
+ 0x00000050
+ 0x00ff00ff
+ 0x00000050
+ 0x00ff00ff
+ 0x00d400ff
+ 0x00510050
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff00c6
+ 0x00ff006d
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000017>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3200000>;
+ nvidia,emc-cfg-2 = <0x000008cf>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000008>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <3420>;
+ };
+ emc-table-derated@300000 {
+ compatible = "nvidia,tegra12-emc-table-derated";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "02_300000_01_V6.0.4_V1.1";
+ clock-frequency = <300000>;
+ nvidia,emc-min-mv = <820>;
+ nvidia,gk20a-min-mv = <820>;
+ nvidia,source = "pllc_out0";
+ nvidia,src-sel-reg = <0x20000002>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000012
+ 0x00000026
+ 0x00000000
+ 0x0000000d
+ 0x00000005
+ 0x00000007
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000005
+ 0x00000005
+ 0x00000003
+ 0x00000003
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000006
+ 0x00000003
+ 0x00000000
+ 0x00000003
+ 0x00000008
+ 0x00030000
+ 0x00000004
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000002
+ 0x0000000f
+ 0x00000012
+ 0x00000014
+ 0x0000011b
+ 0x00000000
+ 0x00000046
+ 0x00000002
+ 0x00000002
+ 0x00000005
+ 0x00000000
+ 0x00000001
+ 0x0000000c
+ 0x0000002a
+ 0x0000002a
+ 0x00000003
+ 0x00000005
+ 0x00000003
+ 0x0000000d
+ 0x00000007
+ 0x00000003
+ 0x00000003
+ 0x000004e0
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a096
+ 0x005800a0
+ 0x00008000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00058000
+ 0x00058000
+ 0x00000000
+ 0x00058000
+ 0x00058000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00048000
+ 0x00004800
+ 0x00004800
+ 0x00004800
+ 0x00004800
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x01231239
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000606
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451420
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x0000006c
+ 0x012c0011
+ 0x012c0011
+ 0x00000000
+ 0x00000004
+ 0x000052a3
+ 0x8000033e
+ 0x0000000b
+ 0x08000004
+ 0x80000040
+ 0x00000001
+ 0x00000002
+ 0x00000009
+ 0x00000005
+ 0x00000007
+ 0x00000001
+ 0x00000002
+ 0x00000007
+ 0x00000003
+ 0x00000001
+ 0x00000005
+ 0x00000005
+ 0x05050103
+ 0x000c0709
+ 0x71c50e0a
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000004
+ 0x00000090
+ 0x00ff004a
+ 0x00ff004a
+ 0x00ff003c
+ 0x00ff0090
+ 0x00ff0041
+ 0x00ff0090
+ 0x00ff0041
+ 0x00350049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x0008003b
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff0043
+ 0x00ff002d
+ 0x00ff0024
+ 0x00ff0049
+ 0x000000ff
+ 0x00000036
+ 0x00ff00ff
+ 0x00000036
+ 0x00ff00ff
+ 0x00d400ff
+ 0x00510036
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0087
+ 0x00ff004a
+ >;
+ nvidia,emc-zcal-cnt-long = <0x0000001f>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3300000>;
+ nvidia,emc-cfg-2 = <0x000008d7>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004013c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000000>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <2680>;
+ };
+ emc-table-derated@396000 {
+ compatible = "nvidia,tegra12-emc-table-derated";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "02_396000_01_V6.0.4_V1.1";
+ clock-frequency = <396000>;
+ nvidia,emc-min-mv = <850>;
+ nvidia,gk20a-min-mv = <850>;
+ nvidia,source = "pllm_out0";
+ nvidia,src-sel-reg = <0x00000002>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000018
+ 0x00000033
+ 0x00000000
+ 0x00000011
+ 0x00000007
+ 0x00000008
+ 0x00000008
+ 0x00000003
+ 0x0000000a
+ 0x00000007
+ 0x00000007
+ 0x00000004
+ 0x00000003
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000006
+ 0x00000003
+ 0x00000000
+ 0x00000002
+ 0x00000009
+ 0x00030000
+ 0x00000004
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000001
+ 0x00000010
+ 0x00000012
+ 0x00000014
+ 0x00000176
+ 0x00000000
+ 0x0000005d
+ 0x00000002
+ 0x00000002
+ 0x00000007
+ 0x00000000
+ 0x00000001
+ 0x0000000e
+ 0x00000038
+ 0x00000038
+ 0x00000003
+ 0x00000006
+ 0x00000003
+ 0x00000012
+ 0x0000000a
+ 0x00000003
+ 0x00000003
+ 0x00000670
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a096
+ 0x005800a0
+ 0x00008000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00020000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00058000
+ 0x00058000
+ 0x00000000
+ 0x00058000
+ 0x00058000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00040000
+ 0x00040000
+ 0x00040000
+ 0x00040000
+ 0x00004000
+ 0x00004000
+ 0x00004000
+ 0x00004000
+ 0x00000200
+ 0x00000000
+ 0x00100100
+ 0x01231239
+ 0x00000000
+ 0x00000000
+ 0x77ffc000
+ 0x00000606
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451420
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x0000008f
+ 0x018c0011
+ 0x018c0011
+ 0x00000000
+ 0x00000004
+ 0x000052a3
+ 0x800003f4
+ 0x0000000b
+ 0x0f000005
+ 0x80000040
+ 0x00000002
+ 0x00000003
+ 0x0000000c
+ 0x00000007
+ 0x00000009
+ 0x00000002
+ 0x00000002
+ 0x00000007
+ 0x00000003
+ 0x00000001
+ 0x00000005
+ 0x00000005
+ 0x05050103
+ 0x000e090c
+ 0x71c6120d
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x0000000a
+ 0x000000be
+ 0x00ff0038
+ 0x00ff0038
+ 0x00ff003c
+ 0x00ff0090
+ 0x00ff0041
+ 0x00ff0090
+ 0x00ff0041
+ 0x00280049
+ 0x00ff0080
+ 0x00ff0004
+ 0x00ff0004
+ 0x0008002d
+ 0x000000ff
+ 0x00ff0004
+ 0x00ff0033
+ 0x00ff0022
+ 0x00ff0024
+ 0x00ff0037
+ 0x000000ff
+ 0x00000036
+ 0x00ff00ff
+ 0x00000036
+ 0x00ff00ff
+ 0x00d400ff
+ 0x00510029
+ 0x00ff00ff
+ 0x00ff00ff
+ 0x00ff0066
+ 0x00ff0038
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000028>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3300000>;
+ nvidia,emc-cfg-2 = <0x00000897>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004001c>;
+ nvidia,emc-cfg-dig-dll = <0x00580068>;
+ nvidia,emc-bgbias-ctl0 = <0x00000000>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x00020004>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <2180>;
+ };
+ emc-table-derated@528000 {
+ compatible = "nvidia,tegra12-emc-table-derated";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "02_528000_01_V6.0.4_V1.1";
+ clock-frequency = <528000>;
+ nvidia,emc-min-mv = <880>;
+ nvidia,gk20a-min-mv = <870>;
+ nvidia,source = "pllm_ud";
+ nvidia,src-sel-reg = <0x80000000>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000020
+ 0x00000044
+ 0x00000000
+ 0x00000017
+ 0x0000000a
+ 0x0000000a
+ 0x00000009
+ 0x00000003
+ 0x0000000d
+ 0x0000000a
+ 0x0000000a
+ 0x00000006
+ 0x00000004
+ 0x00000000
+ 0x00000002
+ 0x00000002
+ 0x00000008
+ 0x00000003
+ 0x00000000
+ 0x00000003
+ 0x0000000a
+ 0x00050000
+ 0x00000004
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000002
+ 0x00000011
+ 0x00000015
+ 0x00000017
+ 0x000001f3
+ 0x00000000
+ 0x0000007c
+ 0x00000003
+ 0x00000003
+ 0x0000000a
+ 0x00000000
+ 0x00000001
+ 0x00000011
+ 0x0000004a
+ 0x0000004a
+ 0x00000004
+ 0x00000008
+ 0x00000004
+ 0x00000019
+ 0x0000000d
+ 0x00000003
+ 0x00000003
+ 0x00000895
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a096
+ 0xe01200b9
+ 0x00008000
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000010
+ 0x00000010
+ 0x00000000
+ 0x00000010
+ 0x00000010
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x0000000c
+ 0x0000000c
+ 0x0000000c
+ 0x0000000c
+ 0x0000000c
+ 0x0000000c
+ 0x0000000c
+ 0x0000000c
+ 0x00000220
+ 0x00000000
+ 0x00100100
+ 0x0123123d
+ 0x00000000
+ 0x00000000
+ 0x77ffc004
+ 0x00000505
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451420
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x000000bf
+ 0x02100013
+ 0x02100013
+ 0x00000000
+ 0x00000004
+ 0x000042a0
+ 0x800004ef
+ 0x0000000d
+ 0x0f000007
+ 0x80000040
+ 0x00000004
+ 0x00000005
+ 0x00000011
+ 0x0000000a
+ 0x0000000d
+ 0x00000003
+ 0x00000002
+ 0x00000009
+ 0x00000003
+ 0x00000001
+ 0x00000006
+ 0x00000006
+ 0x06060103
+ 0x00130c11
+ 0x71c81812
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x0000000d
+ 0x000000fd
+ 0x00c10038
+ 0x00c10038
+ 0x00c1003c
+ 0x00c10090
+ 0x00c10041
+ 0x00c10090
+ 0x00c10041
+ 0x00270049
+ 0x00c10080
+ 0x00c10004
+ 0x00c10004
+ 0x00080021
+ 0x000000c1
+ 0x00c10004
+ 0x00c10026
+ 0x00c1001a
+ 0x00c10024
+ 0x00c10029
+ 0x000000c1
+ 0x00000036
+ 0x00c100c1
+ 0x00000036
+ 0x00c100c1
+ 0x00d400ff
+ 0x00510029
+ 0x00c100c1
+ 0x00c100c1
+ 0x00c10065
+ 0x00c1002a
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000034>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3300000>;
+ nvidia,emc-cfg-2 = <0x0000089f>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004001c>;
+ nvidia,emc-cfg-dig-dll = <0xe0120069>;
+ nvidia,emc-bgbias-ctl0 = <0x00000000>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x000100c3>;
+ nvidia,emc-mode-2 = <0x00020006>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <1440>;
+ };
+ emc-table-derated@600000 {
+ compatible = "nvidia,tegra12-emc-table-derated";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "02_600000_01_V6.0.4_V1.1";
+ clock-frequency = <600000>;
+ nvidia,emc-min-mv = <910>;
+ nvidia,gk20a-min-mv = <910>;
+ nvidia,source = "pllc_ud";
+ nvidia,src-sel-reg = <0xe0000000>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000025
+ 0x0000004d
+ 0x00000000
+ 0x0000001a
+ 0x0000000b
+ 0x0000000a
+ 0x0000000b
+ 0x00000004
+ 0x0000000f
+ 0x0000000b
+ 0x0000000b
+ 0x00000007
+ 0x00000004
+ 0x00000000
+ 0x00000004
+ 0x00000004
+ 0x0000000a
+ 0x00000004
+ 0x00000000
+ 0x00000003
+ 0x0000000d
+ 0x00070000
+ 0x00000005
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000002
+ 0x00000014
+ 0x00000018
+ 0x0000001a
+ 0x00000237
+ 0x00000000
+ 0x0000008d
+ 0x00000004
+ 0x00000004
+ 0x0000000b
+ 0x00000000
+ 0x00000001
+ 0x00000013
+ 0x00000054
+ 0x00000054
+ 0x00000005
+ 0x00000009
+ 0x00000005
+ 0x0000001c
+ 0x0000000e
+ 0x00000003
+ 0x00000003
+ 0x000009c0
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a096
+ 0xe00e00b9
+ 0x00008000
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000008
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000010
+ 0x00000010
+ 0x00000000
+ 0x00000010
+ 0x00000010
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x0000000b
+ 0x0000000b
+ 0x0000000b
+ 0x0000000b
+ 0x0000000b
+ 0x0000000b
+ 0x0000000b
+ 0x0000000b
+ 0x00000220
+ 0x00000000
+ 0x00100100
+ 0x0121103d
+ 0x00000000
+ 0x00000000
+ 0x77ffc004
+ 0x00000303
+ 0x81f1f008
+ 0x07070000
+ 0x0000003f
+ 0x015ddddd
+ 0x51451420
+ 0x00514514
+ 0x00514514
+ 0x51451400
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x000000d8
+ 0x02580014
+ 0x02580014
+ 0x00000000
+ 0x00000005
+ 0x000040a0
+ 0x80000578
+ 0x00000010
+ 0x00000009
+ 0x80000040
+ 0x00000004
+ 0x00000005
+ 0x00000013
+ 0x0000000c
+ 0x0000000e
+ 0x00000003
+ 0x00000003
+ 0x0000000a
+ 0x00000003
+ 0x00000001
+ 0x00000006
+ 0x00000007
+ 0x07060103
+ 0x00150e13
+ 0x71a91b14
+ 0x70000f03
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x0000000f
+ 0x00000120
+ 0x00aa0038
+ 0x00aa0038
+ 0x00aa003c
+ 0x00aa0090
+ 0x00aa0041
+ 0x00aa0090
+ 0x00aa0041
+ 0x00270049
+ 0x00aa0080
+ 0x00aa0004
+ 0x00aa0004
+ 0x0008001d
+ 0x000000aa
+ 0x00aa0004
+ 0x00aa0022
+ 0x00aa0018
+ 0x00aa0024
+ 0x00aa0024
+ 0x000000aa
+ 0x00000036
+ 0x00aa00aa
+ 0x00000036
+ 0x00aa00aa
+ 0x00d400ff
+ 0x00510029
+ 0x00aa00aa
+ 0x00aa00aa
+ 0x00aa0065
+ 0x00aa0025
+ >;
+ nvidia,emc-zcal-cnt-long = <0x0000003a>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3300000>;
+ nvidia,emc-cfg-2 = <0x0000089f>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004001c>;
+ nvidia,emc-cfg-dig-dll = <0xe00e0069>;
+ nvidia,emc-bgbias-ctl0 = <0x00000000>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430000>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x000100e3>;
+ nvidia,emc-mode-2 = <0x00020007>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <1440>;
+ };
+ emc-table-derated@792000 {
+ compatible = "nvidia,tegra12-emc-table-derated";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "02_792000_01_V6.0.4_V1.1";
+ clock-frequency = <792000>;
+ nvidia,emc-min-mv = <980>;
+ nvidia,gk20a-min-mv = <980>;
+ nvidia,source = "pllm_ud";
+ nvidia,src-sel-reg = <0x80000000>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000030
+ 0x00000066
+ 0x00000000
+ 0x00000022
+ 0x0000000f
+ 0x0000000d
+ 0x0000000d
+ 0x00000005
+ 0x00000013
+ 0x0000000f
+ 0x0000000f
+ 0x00000009
+ 0x00000004
+ 0x00000000
+ 0x00000005
+ 0x00000005
+ 0x0000000e
+ 0x00000004
+ 0x00000000
+ 0x00000005
+ 0x0000000f
+ 0x000b0000
+ 0x00000006
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x00000016
+ 0x0000001d
+ 0x0000001f
+ 0x000002ec
+ 0x00000000
+ 0x000000bb
+ 0x00000005
+ 0x00000005
+ 0x0000000f
+ 0x00000000
+ 0x00000001
+ 0x00000017
+ 0x0000006f
+ 0x0000006f
+ 0x00000006
+ 0x0000000c
+ 0x00000006
+ 0x00000026
+ 0x00000013
+ 0x00000003
+ 0x00000003
+ 0x00000cdf
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a096
+ 0xe00700b9
+ 0x00008000
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x007f8008
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x0000800f
+ 0x0000800f
+ 0x00000000
+ 0x0000800f
+ 0x0000800f
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x00000220
+ 0x00000000
+ 0x00100100
+ 0x0120103d
+ 0x00000000
+ 0x00000000
+ 0x77ffc004
+ 0x00000303
+ 0x81f1f008
+ 0x07070000
+ 0x00000000
+ 0x015ddddd
+ 0x49249220
+ 0x00492492
+ 0x00492492
+ 0x49249200
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x0000011e
+ 0x03180017
+ 0x03180017
+ 0x00000000
+ 0x00000006
+ 0x00004080
+ 0x800006e5
+ 0x00000014
+ 0x0e00000b
+ 0x80000040
+ 0x00000006
+ 0x00000007
+ 0x00000019
+ 0x00000010
+ 0x00000013
+ 0x00000004
+ 0x00000003
+ 0x0000000c
+ 0x00000003
+ 0x00000001
+ 0x00000008
+ 0x00000008
+ 0x08080103
+ 0x001b1219
+ 0x71ac241a
+ 0x70000f02
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000013
+ 0x0000017c
+ 0x00810038
+ 0x00810038
+ 0x0081003c
+ 0x00810090
+ 0x00810041
+ 0x00810090
+ 0x00810041
+ 0x00270049
+ 0x00810080
+ 0x00810004
+ 0x00810004
+ 0x00080016
+ 0x00000081
+ 0x00810004
+ 0x00810019
+ 0x00810018
+ 0x00810024
+ 0x0081001c
+ 0x00000081
+ 0x00000036
+ 0x00810081
+ 0x00000036
+ 0x00810081
+ 0x00d400ff
+ 0x00510029
+ 0x00810081
+ 0x00810081
+ 0x00810065
+ 0x0081001c
+ >;
+ nvidia,emc-zcal-cnt-long = <0x0000004c>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3300000>;
+ nvidia,emc-cfg-2 = <0x0000089f>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004001c>;
+ nvidia,emc-cfg-dig-dll = <0xe0070069>;
+ nvidia,emc-bgbias-ctl0 = <0x00000000>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430404>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010043>;
+ nvidia,emc-mode-2 = <0x0002001a>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <1200>;
+ };
+ emc-table-derated@924000 {
+ compatible = "nvidia,tegra12-emc-table-derated";
+ nvidia,revision = <0x19>;
+ nvidia,dvfs-version = "02_924000_01_V6.0.4_V1.1";
+ clock-frequency = <924000>;
+ nvidia,emc-min-mv = <1010>;
+ nvidia,gk20a-min-mv = <1010>;
+ nvidia,source = "pllm_ud";
+ nvidia,src-sel-reg = <0x80000000>;
+ nvidia,burst-regs-num = <165>;
+ nvidia,burst-up-down-regs-num = <31>;
+ nvidia,emc-registers = <
+ 0x00000039
+ 0x00000078
+ 0x00000000
+ 0x00000028
+ 0x00000012
+ 0x0000000f
+ 0x00000010
+ 0x00000006
+ 0x00000017
+ 0x00000012
+ 0x00000012
+ 0x0000000a
+ 0x00000005
+ 0x00000000
+ 0x00000007
+ 0x00000007
+ 0x00000010
+ 0x00000005
+ 0x00000000
+ 0x00000005
+ 0x00000012
+ 0x000d0000
+ 0x00000007
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000004
+ 0x00000019
+ 0x00000020
+ 0x00000022
+ 0x00000369
+ 0x00000000
+ 0x000000da
+ 0x00000006
+ 0x00000006
+ 0x00000012
+ 0x00000000
+ 0x00000001
+ 0x0000001b
+ 0x00000082
+ 0x00000082
+ 0x00000007
+ 0x0000000e
+ 0x00000007
+ 0x0000002d
+ 0x00000016
+ 0x00000003
+ 0x00000003
+ 0x00000f04
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x1361a896
+ 0xe00400b9
+ 0x00008000
+ 0x00000005
+ 0x007fc005
+ 0x007fc007
+ 0x00000005
+ 0x007fc005
+ 0x007fc005
+ 0x007fc006
+ 0x007fc005
+ 0x00000005
+ 0x007fc005
+ 0x007fc007
+ 0x00000005
+ 0x007fc005
+ 0x007fc005
+ 0x007fc006
+ 0x007fc005
+ 0x00008000
+ 0x00008000
+ 0x00008000
+ 0x00008000
+ 0x00008000
+ 0x00008000
+ 0x00008000
+ 0x00008000
+ 0x0000800e
+ 0x0000800e
+ 0x00000000
+ 0x0000800e
+ 0x0000800e
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x00000000
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x0000000a
+ 0x00000220
+ 0x00000000
+ 0x00100100
+ 0x0120103d
+ 0x00000000
+ 0x00000000
+ 0x77ffc004
+ 0x00000303
+ 0x81f1f008
+ 0x07070000
+ 0x00000000
+ 0x015ddddd
+ 0x59555520
+ 0x00554596
+ 0x00557594
+ 0x55555500
+ 0x0000003f
+ 0x00000000
+ 0x00000000
+ 0x00064000
+ 0x0000014d
+ 0x039c0019
+ 0x039c0019
+ 0x00000000
+ 0x00000007
+ 0x00004080
+ 0x800007e0
+ 0x00000017
+ 0x0e00000d
+ 0x80000040
+ 0x00000008
+ 0x00000009
+ 0x0000001d
+ 0x00000013
+ 0x00000017
+ 0x00000005
+ 0x00000004
+ 0x0000000e
+ 0x00000004
+ 0x00000001
+ 0x00000009
+ 0x00000009
+ 0x09090104
+ 0x0020161d
+ 0x71ae2a1e
+ 0x70000f02
+ 0x001f0000
+ >;
+ nvidia,emc-burst-up-down-regs = <
+ 0x00000017
+ 0x000001bb
+ 0x006e0038
+ 0x006e0038
+ 0x006e003c
+ 0x006e0090
+ 0x006e0041
+ 0x006e0090
+ 0x006e0041
+ 0x00270049
+ 0x006e0080
+ 0x006e0004
+ 0x006e0004
+ 0x00080016
+ 0x0000006e
+ 0x006e0004
+ 0x006e0019
+ 0x006e0018
+ 0x006e0024
+ 0x006e001b
+ 0x0000006e
+ 0x00000036
+ 0x006e006e
+ 0x00000036
+ 0x006e006e
+ 0x00d400ff
+ 0x00510029
+ 0x006e006e
+ 0x006e006e
+ 0x006e0065
+ 0x006e001c
+ >;
+ nvidia,emc-zcal-cnt-long = <0x00000058>;
+ nvidia,emc-acal-interval = <0x001fffff>;
+ nvidia,emc-ctt-term_ctrl = <0x00000802>;
+ nvidia,emc-cfg = <0xf3300000>;
+ nvidia,emc-cfg-2 = <0x0000089f>;
+ nvidia,emc-sel-dpd-ctrl = <0x0004001c>;
+ nvidia,emc-cfg-dig-dll = <0xe0040069>;
+ nvidia,emc-bgbias-ctl0 = <0x00000000>;
+ nvidia,emc-auto-cal-config2 = <0x00000000>;
+ nvidia,emc-auto-cal-config3 = <0x00000000>;
+ nvidia,emc-auto-cal-config = <0xa1430808>;
+ nvidia,emc-mode-0 = <0x00000000>;
+ nvidia,emc-mode-1 = <0x00010083>;
+ nvidia,emc-mode-2 = <0x0002001c>;
+ nvidia,emc-mode-4 = <0x800b0000>;
+ nvidia,emc-clock-latency-change = <1180>;
+ };
+ };
+ };
+};
diff --git a/arch/arm64/boot/dts/tegra132-flounder-generic.dtsi b/arch/arm64/boot/dts/tegra132-flounder-generic.dtsi
new file mode 100644
index 0000000..dc16c6d3
--- /dev/null
+++ b/arch/arm64/boot/dts/tegra132-flounder-generic.dtsi
@@ -0,0 +1,532 @@
+#include <dt-bindings/gpio/tegra-gpio.h>
+#include <dt-bindings/input/input.h>
+#include <dt-bindings/interrupt-controller/arm-gic.h>
+
+#include "tegra132.dtsi"
+#include "tegra132-flounder-camera.dtsi"
+#include "tegra132-flounder-trusty.dtsi"
+#include "tegra132-flounder-sysedp.dtsi"
+#include "tegra132-flounder-powermon.dtsi"
+#include "tegra132-flounder-emc.dtsi"
+#include "tegra132-tn8-dfll.dtsi"
+#include "tegra132-flounder-touch.dtsi"
+
+/ {
+ model = "Flounder";
+
+ nvidia-boardids = "1780:1100:2:B:7","1794:1000:0:A:6";
+ #address-cells = <2>;
+ #size-cells = <2>;
+
+ chosen {
+ bootargs = "tegraid=40.0.0.00.00 vmalloc=256M video=tegrafb console=ttyS0,115200n8 earlyprintk";
+ linux,initrd-start = <0x85000000>;
+ linux,initrd-end = <0x851bc400>;
+ };
+
+ pinmux {
+ status = "disable";
+ };
+
+ gpio: gpio@6000d000 {
+ gpio-init-names = "default";
+ gpio-init-0 = <&gpio_default>;
+
+ gpio_default: default {
+ gpio-input = < TEGRA_GPIO(C, 7)
+ TEGRA_GPIO(A, 7)
+ TEGRA_GPIO(G, 2)
+ TEGRA_GPIO(G, 3)
+ TEGRA_GPIO(I, 5)
+ TEGRA_GPIO(I, 6)
+ TEGRA_GPIO(J, 0)
+ TEGRA_GPIO(K, 2)
+ TEGRA_GPIO(K, 3)
+ TEGRA_GPIO(N, 4)
+ TEGRA_GPIO(O, 2)
+ TEGRA_GPIO(O, 3)
+ TEGRA_GPIO(O, 5)
+ TEGRA_GPIO(O, 7)
+ TEGRA_GPIO(Q, 1)
+ TEGRA_GPIO(Q, 6)
+ TEGRA_GPIO(Q, 7)
+ TEGRA_GPIO(R, 4)
+ TEGRA_GPIO(S, 0)
+ TEGRA_GPIO(S, 1)
+ TEGRA_GPIO(S, 4)
+ TEGRA_GPIO(U, 5)
+ TEGRA_GPIO(U, 6)
+ TEGRA_GPIO(V, 0)
+ TEGRA_GPIO(V, 1)
+ TEGRA_GPIO(W, 2)
+/*key*/
+ TEGRA_GPIO(Q, 0)
+ TEGRA_GPIO(Q, 5)
+ TEGRA_GPIO(V, 2)
+/*key*/
+/*headset*/
+ TEGRA_GPIO(S, 2)
+ TEGRA_GPIO(W, 3)
+/*headset*/
+ TEGRA_GPIO(BB, 6)
+ TEGRA_GPIO(CC, 1)
+ TEGRA_GPIO(CC, 2)>;
+ gpio-output-low = <TEGRA_GPIO(H, 2)
+ TEGRA_GPIO(H, 3)
+ TEGRA_GPIO(H, 6)
+ TEGRA_GPIO(H, 7)
+ TEGRA_GPIO(I, 0)
+/*headset*/
+ TEGRA_GPIO(J, 7)
+ TEGRA_GPIO(S, 3)
+/*headset*/
+ TEGRA_GPIO(K, 0)
+ TEGRA_GPIO(K, 1)
+ TEGRA_GPIO(K, 5)
+ TEGRA_GPIO(O, 0)
+ TEGRA_GPIO(O, 6)
+ TEGRA_GPIO(Q, 3)
+ TEGRA_GPIO(R, 1)
+ TEGRA_GPIO(R, 2)
+ TEGRA_GPIO(R, 5)
+ TEGRA_GPIO(S, 3)
+ TEGRA_GPIO(S, 6)
+ TEGRA_GPIO(U, 3)
+ TEGRA_GPIO(V, 3)
+ TEGRA_GPIO(X, 1)
+ TEGRA_GPIO(X, 3)
+ TEGRA_GPIO(X, 4)
+ TEGRA_GPIO(X, 5)
+ TEGRA_GPIO(X, 7)
+ TEGRA_GPIO(BB, 3)
+ TEGRA_GPIO(BB, 5)
+ TEGRA_GPIO(BB, 7)
+ TEGRA_GPIO(EE, 1)>;
+ gpio-output-high = <TEGRA_GPIO(B, 4)
+/*key*/
+ TEGRA_GPIO(I, 3)
+/*key*/
+ TEGRA_GPIO(S, 5)
+ TEGRA_GPIO(R, 0)
+ TEGRA_GPIO(H, 5)
+ TEGRA_GPIO(Q, 2)
+/*headset*/
+ TEGRA_GPIO(Q, 4)
+/*headset*/
+ >;
+ };
+ };
+
+ host1x {
+ dsi {
+ nvidia,dsi-controller-vs = <1>;
+ status = "okay";
+ };
+
+ hdmi {
+ status = "disabled";
+ };
+ };
+
+ serial@70006000 {
+ compatible = "nvidia,tegra114-hsuart";
+ status = "okay";
+ };
+
+ serial@70006040 {
+ compatible = "nvidia,tegra114-hsuart";
+ status = "okay";
+ };
+
+ serial@70006200 {
+ compatible = "nvidia,tegra114-hsuart";
+ status = "okay";
+ };
+
+ serial@70006300 {
+ compatible = "nvidia,tegra114-hsuart";
+ status = "okay";
+ };
+
+ memory@80000000 {
+ device_type = "memory";
+ reg = <0x0 0x80000000 0x0 0x80000000>;
+ };
+
+ i2c@7000c000 {
+ status = "okay";
+ clock-frequency = <400000>;
+
+ max17050@36 {
+ compatible = "maxim,max17050";
+ reg = <0x36>;
+ battery-id-channel-name = "batt-id-channel";
+ param_adjust_map_by_id {
+ id0 {
+ id-number = <0>;
+ id-range = <0 524>;
+ temperature-normal-to-low-threshold = <(-50)>;
+ temperature-low-to-normal-threshold = <(-30)>;
+ temperature-normal-parameters = <0x964C 0x0068 0x0000>;
+ temperature-low-parameters = <0xAF59 0x4D84 0x2484>;
+ };
+ id1 {
+ id-number = <1>;
+ id-range = <525 1343>;
+ temperature-normal-to-low-threshold = <(-80)>;
+ temperature-low-to-normal-threshold = <(-50)>;
+ temperature-normal-parameters = <0x964C 0x1032 0x0000>;
+ temperature-low-parameters = <0xAF59 0x5F86 0x2790>;
+ };
+ };
+ };
+
+ bq2419x: bq2419x@6b {
+ compatible = "ti,bq2419x";
+ reg = <0x6b>;
+
+ interrupt-parent = <&gpio>;
+ interrupts = <TEGRA_GPIO(J, 0) 0x0>;
+
+ vbus {
+ regulator-name = "vbus_regulator";
+ consumers {
+ c1 {
+ regulator-consumer-supply = "usb_vbus";
+ regulator-consumer-device = "tegra-ehci.0";
+ };
+
+ c2 {
+ regulator-consumer-supply = "usb_vbus";
+ regulator-consumer-device = "tegra-otg";
+ };
+ };
+ };
+ };
+
+ htc_mcu@72 {
+ compatible = "htc_mcu";
+ reg = <0x72>;
+ interrupt-parent = <&gpio>;
+ interrupts = <140 0x0>;
+ mcu,intr-gpio = <&gpio 140 0>;
+ mcu,gs_chip_layout = <1>;
+ mcu,acceleration_axes = <7>;
+ mcu,magnetic_axes = <7>;
+ mcu,gyro_axes = <7>;
+ mcu,Cpu_wake_mcu-gpio = <&gpio 129 0>;
+ mcu,Reset-gpio = <&gpio 138 0>;
+ mcu,Chip_mode-gpio = <&gpio 164 0>;
+ };
+ };
+
+ i2c@7000c400 {
+ status = "okay";
+ clock-frequency = <400000>;
+
+ richtek_rt5506_amp@52 {
+ compatible = "richtek,rt5506-amp";
+ reg = <0x52>;
+ richtek,enable-gpio = <&gpio 185 0>;
+ };
+ };
+
+ i2c@7000c500 {
+ status = "okay";
+ clock-frequency = <400000>;
+ };
+
+ i2c@7000c700 {
+ status = "okay";
+ clock-frequency = <400000>;
+ };
+
+ i2c@7000d000 {
+ status = "okay";
+ clock-frequency = <400000>;
+ nvidia,bit-banging-xfer-after-shutdown;
+
+ /include/ "tegra124-flounder-power.dtsi"
+ };
+
+ i2c@7000d100 {
+ status = "okay";
+ clock-frequency = <400000>;
+ };
+
+ spi@7000d400 {
+ status = "okay";
+ spi-max-frequency = <25000000>;
+ };
+
+ spi@7000d800 {
+ status = "okay";
+ spi-max-frequency = <25000000>;
+ };
+
+ spi@7000da00 {
+ status = "okay";
+ spi-max-frequency = <25000000>;
+ };
+
+ spi@7000dc00 {
+ status = "okay";
+ spi-max-frequency = <25000000>;
+ };
+
+ denver_cpuidle_pmic {
+ type = <4>; /* TPS 65913 2.3 */
+ retention-voltage = <11>; /* Vret = 0.55v */
+ lock = <0>;
+ };
+
+ gpio-keys {
+ compatible = "gpio-keys";
+
+ power {
+ label = "Power";
+ gpios = <&gpio TEGRA_GPIO(Q, 0) GPIO_ACTIVE_LOW>;
+ linux,code = <KEY_POWER>;
+ gpio-key,wakeup;
+ debounce-interval = <20>;
+ };
+
+ volume_down {
+ label = "Volume Down";
+ gpios = <&gpio TEGRA_GPIO(Q, 5) GPIO_ACTIVE_LOW>;
+ linux,code = <KEY_VOLUMEDOWN>;
+ debounce-interval = <20>;
+ };
+
+ volume_up {
+ label = "Volume Up";
+ gpios = <&gpio TEGRA_GPIO(V, 2) GPIO_ACTIVE_LOW>;
+ linux,code = <KEY_VOLUMEUP>;
+ debounce-interval = <20>;
+ };
+ };
+
+ regulators {
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ vdd_ac_bat_reg: regulator@0 {
+ compatible = "regulator-fixed";
+ reg = <0>;
+ regulator-name = "vdd_ac_bat";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ regulator-always-on;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "vdd_sys_bl";
+ };
+ };
+ };
+
+ usb0_vbus: regulator@1 {
+ compatible = "regulator-fixed-sync";
+ reg = <1>;
+ regulator-name = "usb0-vbus";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ enable-active-high;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "usb_vbus0";
+ regulator-consumer-device = "tegra-xhci";
+ };
+ };
+ };
+
+ usb1_vbus: regulator@2 {
+ compatible = "regulator-fixed-sync";
+ reg = <2>;
+ regulator-name = "usb1-vbus";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ enable-active-high;
+ vin-supply = <&palmas_smps10_out2>;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "usb_vbus";
+ regulator-consumer-device = "tegra-ehci.1";
+ };
+ c2 {
+ regulator-consumer-supply = "usb_vbus1";
+ regulator-consumer-device = "tegra-xhci";
+ };
+ };
+ };
+
+ usb2_vbus: regulator@3 {
+ compatible = "regulator-fixed-sync";
+ reg = <3>;
+ regulator-name = "usb2-vbus";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ enable-active-high;
+ vin-supply = <&palmas_smps10_out2>;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "usb_vbus";
+ regulator-consumer-device = "tegra-ehci.2";
+ };
+ c2 {
+ regulator-consumer-supply = "usb_vbus2";
+ regulator-consumer-device = "tegra-xhci";
+ };
+ };
+ };
+
+ avdd_lcd: regulator@4 {
+ compatible = "regulator-fixed-sync";
+ reg = <4>;
+ regulator-name = "avdd-lcd";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ gpio = <&palmas_gpio 3 0>;
+ enable-active-high;
+ vin-supply = <&palmas_smps9>;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "avdd_lcd";
+ };
+ };
+ };
+
+ vdd_lcd: regulator@5 {
+ compatible = "regulator-fixed-sync";
+ reg = <5>;
+ regulator-name = "vdd-lcd";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <1200000>;
+ enable-active-high;
+ vin-supply = <&palmas_smps8>;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "vdd_lcd_1v2_s";
+ };
+ };
+ };
+
+ ldoen: regulator@6 {
+ compatible = "regulator-fixed-sync";
+ reg = <6>;
+ regulator-name = "ldoen";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <1200000>;
+ regulator-boot-on;
+ enable-active-high;
+ gpio = <&palmas_gpio 6 0>;
+ vin-supply = <&palmas_smps8>;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "ldoen";
+ regulator-consumer-device = "1-0052";
+ };
+ };
+ };
+
+ vpp_fuse: regulator@7 {
+ compatible = "regulator-fixed-sync";
+ reg = <7>;
+ regulator-name = "vpp-fuse";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ enable-active-high;
+ gpio = <&palmas_gpio 7 0>;
+ vin-supply = <&palmas_smps8>;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "vpp_fuse";
+ };
+ };
+ };
+
+ en_lcd_bl: regulator@8 {
+ compatible = "regulator-fixed-sync";
+ reg = <8>;
+ regulator-name = "en-lcd-bl";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ enable-active-high;
+
+ consumers {
+ c1 {
+ regulator-consumer-supply = "vdd_lcd_bl_en";
+ };
+ };
+ };
+ };
+
+ htc_battery_max17050 {
+ compatible = "htc,max17050_battery";
+ };
+
+ htc_battery_bq2419x {
+ compatible = "htc,bq2419x_battery";
+ regulator-name = "batt_regulator";
+ regulator-max-microamp = <1500000>;
+ auto-recharge-time = <30>;
+ input-voltage-limit-millivolt = <4440>;
+ fast-charge-current-limit-milliamp = <2944>;
+ pre-charge-current-limit-milliamp = <512>;
+ charge-voltage-limit-millivolt = <4352>;
+ charge-suspend-polling-time-sec = <3600>;
+ temp-polling-time-sec = <30>;
+ thermal-overtemp-control-no-thermister;
+ thermal-temperature-hot-deciC = <551>;
+ thermal-temperature-cold-deciC = <(-1)>;
+ thermal-temperature-warm-deciC = <481>;
+ thermal-temperature-cool-deciC = <100>;
+ thermal-temperature-hysteresis-deciC = <30>;
+ thermal-warm-voltage-millivolt = <4096>;
+ thermal-cool-voltage-millivolt = <4352>;
+ thermal-disable-warm-current-half;
+ thermal-overtemp-output-current-milliamp = <1344>;
+ unknown-battery-id-minimum = <1343>;
+ battery-id-channel-name = "batt-id-channel";
+ input-voltage-min-high-battery-millivolt = <4440>;
+ input-voltage-min-low-battery-millivolt = <4200>;
+ input-voltage-switch-millivolt = <4150>;
+ gauge-power-supply-name = "battery";
+ vbus-channel-name = "charger-vbus";
+ vbus-channel-max-voltage-mv = <6875>;
+ vbus-channel-max-adc = <4096>;
+ consumers {
+ c1 {
+ regulator-consumer-supply = "usb_bat_chg";
+ regulator-consumer-device = "tegra-udc.0";
+ };
+ c2 {
+ regulator-consumer-supply = "usb_bat_chg";
+ regulator-consumer-device = "tegra-otg";
+ };
+ };
+ };
+
+ vdd_hv_3v: fixedregulator@0 {
+ compatible = "regulator-fixed";
+ regulator-name = "vdd_hv_3v";
+ regulator-min-microvolt = <3000000>;
+ regulator-max-microvolt = <3000000>;
+ regulator-boot-on;
+ consumers {
+ c1 {
+ regulator-consumer-supply = "vdd";
+ regulator-consumer-device = "0-004c";
+ };
+ };
+ };
+
+};
diff --git a/arch/arm64/boot/dts/tegra132-flounder-powermon.dtsi b/arch/arm64/boot/dts/tegra132-flounder-powermon.dtsi
new file mode 100755
index 0000000..d2335f4
--- /dev/null
+++ b/arch/arm64/boot/dts/tegra132-flounder-powermon.dtsi
@@ -0,0 +1,35 @@
+/*
+ * Power monitor devices for the E1784
+ *
+ * Copyright (c) 2014, NVIDIA CORPORATION. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ */
+
+/ {
+ i2c@7000c400 {
+ ina230x: ina230x@40 {
+ compatible = "ti,ina230x";
+ reg = <0x40>;
+ ti,trigger-config = <0x7203>;
+ ti,continuous-config = <0x7207>;
+ ti,current-threshold = <9000>;
+ ti,rail-name = "VDD_BAT";
+ ti,resistor = <10>;
+ ti,calibration-data = <186>;
+ ti,power-lsb = <7>;
+ ti,divisor = <25>;
+ ti,shunt-resistor-mohm = <10>;
+ ti,precision-multiplier = <1>;
+ ti,shunt-polartiy-inverted = <0>; /* 'polartiy' spelling matches the property name the downstream driver parses */
+ };
+ };
+};
diff --git a/arch/arm64/boot/dts/tegra132-flounder-sysedp.dtsi b/arch/arm64/boot/dts/tegra132-flounder-sysedp.dtsi
new file mode 100644
index 0000000..4f67b7f
--- /dev/null
+++ b/arch/arm64/boot/dts/tegra132-flounder-sysedp.dtsi
@@ -0,0 +1,52 @@
+/*
+ * arch/arm64/boot/dts/tegra132-flounder-sysedp.dtsi
+ *
+ * Copyright (c) 2014, NVIDIA CORPORATION. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+/ {
+ sysedp_batmon_calc {
+ compatible = "nvidia,tegra124-sysedp_batmon_calc";
+ ocv_lut = <
+ 100 4300000
+ 80 4079000
+ 60 3893000
+ 40 3770000
+ 20 3710000
+ 0 3552000
+ >;
+ ibat_lut = <
+ 600 9000
+ 0 9000
+ (-200) 5000
+ (-210) 0
+ >;
+ rbat_data = <
+ 50000 60000 70000 76000 95000 115000 130000 155000
+ 50000 60000 70000 76000 95000 115000 130000 155000
+ 50000 60000 70000 76000 95000 115000 130000 155000
+ 50000 60000 70000 76000 95000 115000 130000 155000
+ 50000 60000 70000 76000 95000 115000 130000 155000
+ 50000 60000 70000 76000 95000 115000 130000 155000
+ >;
+ temp_axis = <600 450 300 230 150 50 0 (-200)>;
+ capacity_axis = <100 80 60 40 20 0>;
+ power_supply = "battery";
+ r_const = <40000>;
+ vsys_min = <2600000>;
+ };
+};
diff --git a/arch/arm64/boot/dts/tegra132-flounder-touch.dtsi b/arch/arm64/boot/dts/tegra132-flounder-touch.dtsi
new file mode 100755
index 0000000..7853bab
--- /dev/null
+++ b/arch/arm64/boot/dts/tegra132-flounder-touch.dtsi
@@ -0,0 +1,760 @@
+/ {
+ spi@7000d800 {
+ synaptics_dsx@0 {
+ compatible = "synaptics,dsx";
+ reg = <0x0>;
+ spi-max-frequency = <4000000>;
+ spi-cpha;
+ spi-cpol;
+ interrupt-parent = <&gpio>;
+ interrupts = <TEGRA_GPIO(K, 2) 0x2>;
+ synaptics,irq-gpio = <&gpio TEGRA_GPIO(K, 2) 0x00>;
+ synaptics,irq-flags = <0x2008>;
+ synaptics,reset-gpio = <&gpio TEGRA_GPIO(X, 6) 0x00>;
+ synaptics,reset-on-state = <0>;
+ synaptics,reset-active-ms = <20>;
+ synaptics,reset-delay-ms = <100>;
+ synaptics,byte-delay-us = <30>;
+ synaptics,block-delay-us = <30>;
+ synaptics,tw-pin-mask = <0x05>;
+
+ config_9 {
+ pr_number = <1732172>;
+ sensor_id = <0x05>;
+ config = [
+ 33 32 30 13 00 0F 01 3C
+ 05 00 0C 00 09 CD 4C CD
+ 4C 0D 0D 00 00 26 1C 1E
+ 05 92 14 58 07 32 36 FF
+ 38 FE F0 D2 F0 D2 C3 80
+ C4 C4 05 03 1C 23 00 0F
+ 0A 20 01 01 05 00 00 0A
+ C0 19 32 26 26 80 80 00
+ 00 C8 2F 14 00 10 B4 00
+ 14 00 00 00 00 00 00 FC
+ E6 00 00 0A 32 50 99 40
+ 00 50 00 00 55 00 80 00
+ 01 64 00 FF FF 5A 5A 01
+ 40 40 32 04 00 00 00 00
+ 00 0C 00 09 50 64 00 00
+ 00 00 4C 04 6C 07 1E 05
+ 00 00 00 01 05 0A 00 01
+ 14 03 06 0F 0A FF 00 C8
+ 00 80 04 1B 00 00 00 E8
+ 03 FF FF BF F5 FF FF 00
+ C0 80 00 71 00 E8 03 01
+ 00 03 03 07 07 00 E8 03
+ 02 04 01 04 83 2C 00 14
+ 00 00 00 02 00 02 56 03
+ 2B 00 13 00 01 00 02 00
+ 02 5A 03 2A 00 13 00 02
+ 00 02 00 02 5C 03 29 00
+ 12 00 03 00 02 00 02 5E
+ 83 28 00 12 00 02 02 02
+ 00 02 60 83 28 00 12 00
+ 05 00 02 00 02 62 83 27
+ 00 11 00 02 04 02 00 02
+ 64 82 25 00 11 00 06 02
+ 02 00 02 68 05 05 05 05
+ 05 05 05 04 04 04 04 04
+ 04 04 04 04 04 04 04 04
+ 04 04 04 04 04 04 04 04
+ 04 04 04 04 05 05 05 05
+ 05 05 0B 0B 0B 0B 0B 0B
+ 0B 0B 0B 07 07 07 07 07
+ 07 07 07 07 07 07 07 07
+ 07 07 07 07 07 07 07 07
+ 07 07 07 07 07 07 07 07
+ 0F 0F 07 07 07 07 07 05
+ 05 05 03 03 03 02 02 02
+ 01 01 01 01 01 01 01 01
+ 01 03 05 0F 1B 1B 01 00
+ 07 1B 1B 05 01 07 1E 00
+ 05 00 A6 00 14 64 01 00
+ 00 03 03 00 00 07 07 00
+ 32 05 06 00 35 35 00 05
+ 00 00 07 07 06 06 00 00
+ 03 05 32 C8 0C 26 0A 01
+ 05 08 08 A0 00 A0 00 64
+ 64 80 FE 00 00 00 00 00
+ 00 63 00 05 00 00 00 00
+ 0B 00 00 00 00 00 B2 03
+ 14 00 06 AB 03 14 00 06
+ 00 15 5B 50 5A 32 1E 00
+ 12 00 90 C6 00 3E 41 47
+ 49 37 3F 35 33 30 36 24
+ 4A 1F 1E 22 1A 1B 16 18
+ 34 32 31 2F 2B 28 2C 2A
+ 25 2D 23 20 21 1D 1C 17
+ 19 14 13 0F 0A 10 0C 0B
+ 05 03 08 06 0E 09 07 52
+ 56 5C 5E 54 55 53 58 57
+ 59 5B 5D 60 5F 63 62 02
+ 10 02 10 02 10 02 10 00
+ 10 00 10 00 10 00 10 F0
+ 0A 0A 0A 0A 1E
+ ];
+ };
+
+ config_8 {
+ pr_number = <1732172>;
+ sensor_id = <0x00>;
+ config = [
+ 33 32 33 13 00 0F 01 3C
+ 05 00 0C 00 09 CD 4C CD
+ 4C 0D 0D 00 00 26 1C 1E
+ 05 86 16 CD 07 32 33 FF
+ 2E 00 F0 D2 F0 D2 C3 80
+ C4 C4 05 03 1C 23 00 0F
+ 0A 20 01 01 05 00 00 0A
+ C0 19 32 26 26 80 80 00
+ 00 C8 2F 14 00 10 B4 00
+ 14 00 00 00 00 00 00 FC
+ E6 00 00 0A 32 50 99 40
+ 00 50 00 00 55 00 80 00
+ 01 64 00 FF FF 5A 5A 01
+ 40 40 32 04 00 00 00 00
+ 00 0C 00 09 50 64 00 00
+ 00 00 4C 04 6C 07 1E 05
+ 00 00 00 01 05 0A 00 01
+ 14 03 06 0F 0A FF 04 C8
+ 00 80 04 1B 00 00 00 E8
+ 03 FF FF BF F5 FF FF 00
+ C0 80 00 71 00 E8 03 01
+ 00 03 03 07 07 00 E8 03
+ 02 04 01 04 85 3F 00 14
+ 00 00 00 02 00 02 56 04
+ 3D 00 13 00 01 00 02 00
+ 02 5A 04 3C 00 13 00 02
+ 00 02 00 02 5C 04 3B 00
+ 12 00 03 00 02 00 02 5E
+ 84 39 00 12 00 03 01 02
+ 00 02 60 84 38 00 12 00
+ 05 00 02 00 02 62 84 37
+ 00 11 00 03 03 02 00 02
+ 64 84 35 00 11 00 02 06
+ 02 00 02 68 05 05 05 05
+ 05 05 05 04 04 04 04 04
+ 04 04 04 04 04 04 04 04
+ 04 04 04 04 04 04 04 04
+ 04 04 04 04 05 05 05 05
+ 05 05 0B 0B 0B 0B 0B 0B
+ 0B 0B 0B 07 07 07 07 07
+ 07 07 07 07 07 07 07 07
+ 07 07 07 07 07 07 07 07
+ 07 07 07 07 07 07 07 07
+ 0F 0F 0F 0B 0C 09 07 05
+ 05 05 03 03 03 01 00 00
+ 00 00 01 00 01 01 02 02
+ 04 03 05 0F 1B 1B 01 00
+ 07 1B 1B 05 01 07 1E 00
+ 05 00 A6 00 14 64 03 00
+ 00 03 03 00 00 07 07 00
+ 2F 05 06 00 35 35 00 05
+ 00 00 07 07 06 06 00 00
+ 03 05 32 C8 0C 26 0A 01
+ 00 05 05 A0 00 A0 00 64
+ 64 80 FE 00 00 00 00 00
+ 00 63 00 05 00 00 00 00
+ 0B 00 00 00 00 00 B2 03
+ 14 00 06 9B 03 14 00 06
+ 00 15 5A 50 5A 32 1E 00
+ 79 00 10 C0 00 3E 41 47
+ 49 37 3F 35 33 30 36 24
+ 4A 1F 1E 22 1A 1B 16 18
+ 34 32 31 2F 2B 28 2C 2A
+ 25 2D 23 20 21 1D 1C 17
+ 19 14 13 0F 0A 10 0C 0B
+ 05 03 08 06 0E 09 07 52
+ 56 5C 5E 54 55 53 58 57
+ 59 5B 5D 60 5F 63 62 02
+ 10 02 10 02 10 02 10 00
+ 10 00 10 00 10 00 10 F0
+ 0A 0A 0A 0A 1E
+ ];
+ };
+
+ config_7 {
+ pr_number = <1720643>;
+ sensor_id = <0x05>;
+ config = [
+ 33 32 30 09 00 0E 03 3C
+ 05 00 0C 00 09 CD 4C CD
+ 4C 0D 0D 00 00 26 1C 1E
+ 05 92 14 58 07 32 36 FF
+ 38 FE F0 D2 F0 D2 C3 80
+ C4 C4 05 03 1C 23 00 0F
+ 0A 20 01 01 05 00 00 0A
+ C0 19 32 26 26 80 80 00
+ 00 C8 2F 14 00 10 B4 00
+ 14 00 00 00 00 00 00 FC
+ E6 00 00 0A 32 50 99 40
+ 00 50 00 00 55 00 80 00
+ 01 64 00 FF FF 5A 5A 01
+ 40 40 32 04 00 00 00 00
+ 00 0C 00 09 50 64 00 00
+ 00 00 4C 04 6C 07 1E 05
+ 00 00 00 01 05 0A 00 01
+ 14 03 06 0F 0A FF 04 C8
+ 00 80 04 1B 00 00 00 E8
+ 03 FF FF BF F5 FF FF 00
+ C0 80 00 71 00 E8 03 01
+ 00 03 03 07 07 00 E8 03
+ 02 05 01 04 83 28 00 14
+ 00 00 00 02 00 02 56 03
+ 28 00 13 00 01 00 02 00
+ 02 5A 03 28 00 13 00 02
+ 00 02 00 02 5C 03 28 00
+ 12 00 03 00 02 00 02 5E
+ 83 28 00 12 00 04 00 02
+ 00 02 60 03 28 00 12 00
+ 05 00 02 00 02 62 83 28
+ 00 11 00 06 00 02 00 02
+ 64 83 28 00 11 00 07 00
+ 02 00 02 66 05 05 05 05
+ 05 05 05 04 04 04 04 04
+ 04 04 04 04 04 04 04 04
+ 04 04 04 04 04 04 04 04
+ 04 04 04 04 05 05 05 05
+ 05 05 0B 0B 0B 0B 0B 0B
+ 0B 0B 0B 07 07 07 07 07
+ 07 07 07 07 07 07 07 07
+ 07 07 07 07 07 07 07 07
+ 07 07 07 07 07 07 07 07
+ 0F 0F 07 07 07 07 07 05
+ 05 05 03 03 03 02 02 02
+ 01 01 01 01 01 01 01 01
+ 01 03 05 0F 1B 1B 01 00
+ 07 1B 1B 05 01 07 1E 00
+ 05 00 A6 00 14 64 01 00
+ 00 03 03 00 00 07 07 00
+ 32 05 06 00 35 35 00 05
+ 00 00 07 07 06 06 00 00
+ 03 05 32 C8 0C 26 0A 01
+ 05 08 08 A0 00 A0 00 64
+ 64 80 FE 00 00 00 00 00
+ 00 63 00 05 00 00 00 00
+ 0B 00 00 00 00 00 B2 03
+ 14 00 06 AB 03 14 00 06
+ 00 15 5B 50 5A 32 1E 00
+ 0A 00 90 C0 00 3E 41 47
+ 49 37 3F 35 33 30 36 24
+ 4A 1F 1E 22 1A 1B 16 18
+ 34 32 31 2F 2B 28 2C 2A
+ 25 2D 23 20 21 1D 1C 17
+ 19 14 13 0F 0A 10 0C 0B
+ 05 03 08 06 0E 09 07 52
+ 56 5C 5E 54 55 53 58 57
+ 59 5B 5D 60 5F 63 62 02
+ 10 02 10 02 10 02 10 00
+ 10 00 10 00 10 00 10 F0
+ 0A 0A 0A 0A 1E
+ ];
+ };
+
+ config_6 {
+ pr_number = <1720643>;
+ sensor_id = <0x00>;
+ config = [
+ 33 32 31 09 00 0E 03 3C
+ 05 00 0C 00 09 CD 4C CD
+ 4C 0D 0D 00 00 26 1C 1E
+ 05 86 16 CD 07 32 33 FF
+ 2E 00 F0 D2 F0 D2 C3 80
+ C4 C4 05 03 1C 23 00 0F
+ 0A 20 01 01 05 00 00 0A
+ C0 19 32 26 26 80 80 00
+ 00 C8 2F 14 00 10 B4 00
+ 14 00 01 00 00 00 00 FC
+ E6 00 00 0A 32 50 99 40
+ 00 50 00 00 55 00 80 00
+ 01 64 00 FF FF 5A 5A 01
+ 40 40 32 04 00 00 00 00
+ 00 0C 00 09 50 64 00 00
+ 00 00 4C 04 6C 07 1E 05
+ 00 00 00 01 05 0A 00 01
+ 14 03 06 0F 0A FF 04 C8
+ 00 80 04 1B 00 00 00 E8
+ 03 FF FF BF F5 FF FF 00
+ C0 80 00 71 00 E8 03 01
+ 00 03 03 07 07 00 E8 03
+ 02 05 01 04 83 29 00 14
+ 00 00 00 02 00 02 56 03
+ 28 00 13 00 01 00 02 00
+ 02 5A 03 27 00 13 00 01
+ 01 02 00 02 5C 02 26 00
+ 12 00 02 01 02 00 02 5E
+ 82 25 00 12 00 03 01 02
+ 00 02 60 82 24 00 12 00
+ 03 02 02 00 02 62 82 23
+ 00 11 00 04 02 02 00 02
+ 64 82 22 00 11 00 05 03
+ 02 00 02 68 05 05 05 05
+ 05 05 05 04 04 04 04 04
+ 04 04 04 04 04 04 04 04
+ 04 04 04 04 04 04 04 04
+ 04 04 04 04 05 05 05 05
+ 05 05 0B 0B 0B 0B 0B 0B
+ 0B 0B 0B 07 07 07 07 07
+ 07 07 07 07 07 07 07 07
+ 07 07 07 07 07 07 07 07
+ 07 07 07 07 07 07 07 07
+ 0F 0F 08 08 08 07 07 05
+ 05 05 03 03 03 01 02 02
+ 00 00 01 01 01 01 01 01
+ 01 03 05 0F 1B 1B 01 00
+ 07 1B 1B 05 01 07 1E 00
+ 05 00 A6 00 14 64 01 00
+ 00 03 03 00 00 07 07 00
+ 32 05 06 00 35 35 00 05
+ 00 00 07 07 06 06 00 00
+ 03 05 32 C8 0C 26 0A 01
+ 05 08 08 A0 00 A0 00 64
+ 64 80 FE 00 00 00 00 00
+ 00 63 00 05 00 00 00 00
+ 0B 00 00 00 00 00 B2 03
+ 14 00 06 9B 03 14 00 06
+ 00 15 5B 50 5A 32 1E 00
+ 0A 00 90 C0 00 3E 41 47
+ 49 37 3F 35 33 30 36 24
+ 4A 1F 1E 22 1A 1B 16 18
+ 34 32 31 2F 2B 28 2C 2A
+ 25 2D 23 20 21 1D 1C 17
+ 19 14 13 0F 0A 10 0C 0B
+ 05 03 08 06 0E 09 07 52
+ 56 5C 5E 54 55 53 58 57
+ 59 5B 5D 60 5F 63 62 02
+ 10 02 10 02 10 02 10 00
+ 10 00 10 00 10 00 10 F0
+ 0A 0A 0A 0A 1E
+ ];
+ };
+
+ config_5 { /* JDI */
+ pr_number = <1701038>;
+ config = [
+ 33 32 30 05 00 0E 03 32
+ 05 00 0C 00 09 CD 4C CD
+ 4C 0D 0D 00 00 26 1C 1E
+ 05 AA 15 1E 05 2D 30 FD
+ 43 FA F0 D2 F0 D2 C3 80
+ C4 C4 05 03 1E 28 00 0F
+ 0A 20 01 01 05 00 00 0A
+ C0 19 32 26 26 80 80 00
+ 00 C8 2F 14 00 10 B4 00
+ 14 00 01 00 00 00 00 FC
+ E6 00 00 0A 32 50 99 40
+ 00 50 00 00 55 00 80 00
+ 01 64 00 FF FF 80 80 01
+ 40 40 32 04 00 00 00 00
+ 00 0C 00 09 50 64 00 00
+ 00 00 4C 04 6C 07 1E 05
+ 00 00 00 01 05 0A 00 01
+ 14 03 06 0F 0A FF 00 C8
+ 00 80 04 1B 00 00 00 E8
+ 03 FF FF BF 32 FF FF 00
+ C0 80 00 71 00 E8 03 01
+ 00 03 03 07 07 00 E8 03
+ 02 05 01 04 83 30 00 14
+ 00 00 00 02 00 02 56 03
+ 2F 00 13 00 01 00 02 00
+ 02 5A 83 2E 00 13 00 02
+ 00 02 00 02 5C 03 2D 00
+ 12 00 03 00 02 00 02 5E
+ 03 2C 00 12 00 04 00 02
+ 00 02 60 03 2B 00 12 00
+ 05 00 02 00 02 62 83 2A
+ 00 11 00 06 00 02 00 02
+ 64 83 28 00 11 00 08 00
+ 02 00 02 68 05 05 05 05
+ 05 05 05 04 04 04 04 04
+ 04 04 04 04 04 04 04 04
+ 04 04 04 04 04 04 04 04
+ 04 04 04 04 05 05 05 05
+ 05 05 0B 0B 0B 0B 0B 0B
+ 0B 0B 0B 07 07 07 07 07
+ 07 07 07 07 07 07 07 07
+ 07 07 07 07 07 07 07 07
+ 07 07 07 07 07 07 07 07
+ 0F 0F 07 07 07 07 07 05
+ 05 05 03 03 03 02 02 02
+ 01 01 01 01 01 01 01 01
+ 01 03 05 0F 1B 1B 01 00
+ 07 1B 1B 05 01 07 1E 00
+ 05 00 A6 00 14 64 01 00
+ 00 03 03 00 00 07 07 00
+ 32 05 06 00 1E 1E 00 05
+ 00 00 07 07 06 06 00 00
+ 03 05 32 C8 0C 26 0A 01
+ 05 0C 0C A0 00 A0 00 50
+ 50 80 FE 00 00 00 00 00
+ 00 63 00 05 00 00 00 00
+ 0B 00 00 00 00 00 B2 03
+ 14 00 06 BB 03 14 00 06
+ 00 15 5A 50 5A 32 1E 00
+ 00 00 10 C0 00 3E 41 47
+ 49 37 3F 35 33 30 36 24
+ 4A 1F 1E 22 1A 1B 16 18
+ 34 32 31 2F 2B 28 2C 2A
+ 25 2D 23 20 21 1D 1C 17
+ 19 14 13 0F 0A 10 0C 0B
+ 05 03 08 06 0E 09 07 52
+ 56 5C 5E 54 55 53 58 57
+ 59 5B 5D 60 5F 63 62 02
+ 10 02 10 02 10 02 10 00
+ 10 00 10 00 10 00 10 F0
+ 0A 0A 0A 0A 1E
+ ];
+ };
+
+ config_4 {
+ pr_number = <1694742>;
+ config = [
+ 33 32 30 03 00 0F 03 1E
+ 05 00 0C 00 09 CD 4C CD
+ 4C 0D 0D 00 00 26 1C 1E
+ 05 AA 15 1E 05 2D 30 FD
+ 43 FA F0 D2 F0 D2 C3 80
+ C4 C4 05 03 1E 28 00 0F
+ 0A 20 01 01 05 00 00 0A
+ C0 19 32 26 26 80 80 00
+ 00 C8 2F 14 00 10 00 00
+ 00 00 01 00 00 00 00 FC
+ E6 00 00 0A 32 50 99 40
+ 00 50 00 00 55 00 80 00
+ 01 64 00 FF FF 80 80 01
+ 40 40 32 04 00 00 00 00
+ 00 0C 00 09 50 64 00 00
+ 00 00 4C 04 6C 07 1E 05
+ 00 00 00 01 05 0A 00 01
+ 14 03 06 0F 0A FF 04 C8
+ 00 80 04 1B 00 00 00 E8
+ 03 FF FF BF F5 FF FF 00
+ C0 80 00 71 00 E8 03 01
+ 00 03 03 07 07 00 E8 03
+ 02 05 01 04 83 28 00 14
+ 00 00 00 02 00 02 56 03
+ 28 00 13 00 01 00 02 00
+ 02 5A 03 28 00 13 00 02
+ 00 02 00 02 5C 03 28 00
+ 12 00 03 00 02 00 02 5E
+ 83 28 00 12 00 04 00 02
+ 00 02 60 03 28 00 12 00
+ 05 00 02 00 02 62 83 28
+ 00 11 00 06 00 02 00 02
+ 64 83 28 00 11 00 07 00
+ 02 00 02 66 05 05 05 05
+ 05 05 05 04 04 04 04 04
+ 04 04 04 04 04 04 04 04
+ 04 04 04 04 04 04 04 04
+ 04 04 04 04 05 05 05 05
+ 05 05 0B 0B 0B 0B 0B 0B
+ 0B 0B 0B 07 07 07 07 07
+ 07 07 07 07 07 07 07 07
+ 07 07 07 07 07 07 07 07
+ 07 07 07 07 07 07 07 07
+ 0F 0F 07 07 07 07 07 05
+ 05 05 03 03 03 02 02 02
+ 01 01 01 01 01 01 01 01
+ 01 03 05 0F 1B 1B 01 00
+ 07 1B 1B 05 01 07 1E 00
+ 05 00 A6 00 14 64 01 00
+ 00 03 03 00 00 07 07 00
+ 32 05 06 00 1E 1E 00 05
+ 00 00 07 07 06 06 00 00
+ 03 05 32 C8 0C 26 0A 01
+ 05 08 08 A0 00 A0 00 50
+ 50 80 FE 00 00 00 00 00
+ 00 63 00 05 00 00 00 00
+ 0B 00 00 00 00 00 B2 03
+ 14 00 06 BB 03 14 00 06
+ 00 15 5B 50 5A 32 1E 00
+ 0A 00 90 C0 00 3E 41 47
+ 49 37 3F 35 33 30 36 24
+ 4A 1F 1E 22 1A 1B 16 18
+ 34 32 31 2F 2B 28 2C 2A
+ 25 2D 23 20 21 1D 1C 17
+ 19 14 13 0F 0A 10 0C 0B
+ 05 03 08 06 0E 09 07 52
+ 56 5C 5E 54 55 53 58 57
+ 59 5B 5D 60 5F 63 62 02
+ 10 02 10 02 10 02 10 00
+ 10 00 10 00 10 00 10 F0
+ 0A 0A 0A 0A 1E 14
+ ];
+ };
+
+ config_3 {
+ pr_number = <1682175>;
+ config = [
+ 33 32 30 01 00 06 03 1E
+ 05 00 0C 00 09 CD 4C CD
+ 4C 0D 0D 00 00 26 1C 1E
+ 05 AA 15 1E 05 2D 30 FD
+ 43 FA F0 D2 F0 D2 C3 80
+ C4 C4 05 03 1E 28 00 0F
+ 0A 20 01 01 05 00 00 0A
+ C0 19 32 26 26 80 80 00
+ 00 C8 2F 14 00 10 00 00
+ 00 00 01 00 00 00 00 FC
+ E6 00 00 0A 32 50 99 40
+ 00 50 00 00 55 00 80 00
+ 01 64 00 FF FF 80 80 01
+ 40 40 32 04 00 00 00 00
+ 00 0C 00 09 3C 32 00 00
+ 00 00 4C 04 6C 07 1E 05
+ 00 00 00 01 05 0A 00 01
+ 14 03 06 0F 0A FF 00 C8
+ 00 80 04 1B 00 00 00 E8
+ 03 FF FF BF F5 FF FF 00
+ C0 80 00 71 00 E8 03 01
+ 00 03 03 07 07 00 E8 03
+ 02 05 01 04 83 28 00 14
+ 00 00 00 02 00 02 56 03
+ 28 00 13 00 01 00 02 00
+ 02 5A 03 28 00 13 00 02
+ 00 02 00 02 5C 03 28 00
+ 12 00 03 00 02 00 02 5E
+ 03 28 00 12 00 04 00 02
+ 00 02 60 03 28 00 12 00
+ 05 00 02 00 02 62 03 28
+ 00 11 00 06 00 02 00 02
+ 64 03 28 00 11 00 07 00
+ 02 00 02 66 05 05 05 05
+ 05 05 05 04 04 04 04 04
+ 04 04 04 04 04 04 04 04
+ 04 04 04 04 04 04 04 04
+ 04 04 04 04 05 05 05 05
+ 05 05 0B 0B 0B 0B 0B 0B
+ 0B 0B 0B 07 07 07 07 07
+ 07 07 07 07 07 07 07 07
+ 07 07 07 07 07 07 07 07
+ 07 07 07 07 07 07 07 07
+ 0F 0F 07 07 07 07 07 05
+ 05 05 03 03 03 02 02 02
+ 01 01 01 01 01 01 01 01
+ 01 03 05 0F 1B 1B 01 00
+ 07 1B 1B 05 01 07 1E 00
+ 05 00 A6 00 14 64 01 00
+ 00 03 03 00 00 07 07 00
+ 32 05 06 00 1E 1E 00 05
+ 00 00 07 07 06 06 00 00
+ 03 05 32 C8 0C 26 0A 01
+ 05 08 08 A0 00 A0 00 50
+ 50 80 FE 00 00 00 00 00
+ 00 63 00 05 00 00 00 00
+ 0B 00 00 00 00 00 B2 03
+ 14 00 06 BB 03 14 00 06
+ 00 15 5A 50 5A 32 1E 00
+ 00 00 10 C0 00 3E 41 47
+ 49 37 3F 35 33 30 36 24
+ 4A 1F 1E 22 1A 1B 16 18
+ 34 32 31 2F 2B 28 2C 2A
+ 25 2D 23 20 21 1D 1C 17
+ 19 14 13 0F 0A 10 0C 0B
+ 05 03 08 06 0E 09 07 52
+ 56 5C 5E 54 55 53 58 57
+ 59 5B 5D 60 5F 63 62 02
+ 10 02 10 02 10 02 10 00
+ 10 00 10 00 10 00 10 F0
+ 0A 0A 0A 0A 1E 0A
+ ];
+ };
+
+ config_2 {
+ pr_number = <1671106>;
+ config = [
+ 10 00 00 05 00 06 03 1E
+ 05 00 0C 00 09 CE 3E 72
+ 45 10 10 0F 0F 26 1C 1E
+ 05 AA 15 1E 05 2D 30 FD
+ 43 FA F0 D2 F0 D2 C3 80
+ C4 C4 05 03 1E 28 00 0F
+ 0A 01 01 05 00 00 0A C0
+ 19 32 26 26 80 80 00 00
+ 2F 14 00 10 00 00 00 00
+ 01 00 00 00 00 FC E6 00
+ 00 0A 32 50 99 40 00 50
+ 00 00 55 00 80 00 01 64
+ 00 FF FF 80 80 01 40 40
+ 32 04 00 00 00 00 00 0C
+ 00 09 3C 32 00 00 00 00
+ 4C 04 6C 07 1E 05 00 00
+ 00 01 05 0A 00 01 FF 00
+ 2A 01 80 04 1B 00 00 00
+ E8 03 FF FF BF F5 FF FF
+ 00 C0 80 00 71 00 E8 03
+ 01 00 03 03 07 07 03 E8
+ 03 02 05 01 04 83 28 00
+ 14 00 00 00 02 00 02 56
+ 03 28 00 13 00 01 00 02
+ 00 02 5A 03 28 00 13 00
+ 02 00 02 00 02 5C 03 28
+ 00 12 00 03 00 02 00 02
+ 5E 03 28 00 12 00 04 00
+ 02 00 02 60 03 28 00 12
+ 00 05 00 02 00 02 62 03
+ 28 00 11 00 06 00 02 00
+ 02 64 03 28 00 11 00 07
+ 00 02 00 02 66 05 05 05
+ 05 05 05 05 04 04 04 04
+ 04 04 04 04 04 04 04 04
+ 04 04 04 04 04 04 04 04
+ 04 04 04 04 04 05 05 05
+ 05 05 05 0B 0B 0B 0B 0B
+ 0B 0B 0B 0B 07 07 07 07
+ 07 07 07 07 07 07 07 07
+ 07 07 07 07 07 07 07 07
+ 07 07 07 07 07 07 07 07
+ 07 0F 0F 07 07 07 07 07
+ 05 05 05 03 03 03 02 02
+ 02 01 01 01 01 01 01 01
+ 01 01 03 05 0F 1B 1B 01
+ 00 07 1B 1B 05 01 07 1E
+ 00 05 00 A6 00 14 64 01
+ 00 00 03 03 00 00 07 07
+ 00 32 05 06 00 1E 1E 00
+ 05 00 00 07 07 06 06 00
+ 00 03 05 32 C8 0C 26 0A
+ 80 FE 00 00 00 00 00 00
+ 63 00 05 00 00 00 00 0B
+ 00 00 00 00 00 B2 03 14
+ 00 06 BB 03 14 00 06 00
+ 15 5A 50 5A 32 1E 00 00
+ 00 00 00 00 3E 41 47 49
+ 37 3F 35 33 30 36 24 4A
+ 1F 1E 22 1A 1B 16 18 34
+ 32 31 2F 2B 28 2C 2A 25
+ 2D 23 20 21 1D 1C 17 19
+ 14 13 0F 0A 10 0C 0B 05
+ 03 08 06 0E 09 07 52 56
+ 5C 5E 54 55 53 58 57 59
+ 5B 5D 60 5F 63 62 02 10
+ 02 10 02 10 02 10 00 10
+ 00 10 00 10 00 10 FF 0A
+ 0A 0A 0A 1E 0A
+ ];
+ };
+
+ config_1 {
+ pr_number = <1662137>;
+ config = [
+ 00 00 00 05 00 06 03 1E
+ 05 00 0C 00 09 AE 4B 9A
+ 4D 0E 0E 0D 0D 26 1C 1E
+ 05 58 10 A0 03 2D 1D 01
+ 32 00 52 BA 36 BE C3 80
+ C4 C4 05 03 1E 28 00 0F
+ 0A 01 01 07 05 05 0A C0
+ 19 C8 26 26 80 80 00 00
+ 2F 14 00 10 00 00 00 00
+ 00 00 00 00 00 FC E6 00
+ 00 0A 32 50 99 40 00 50
+ 00 00 55 00 80 00 01 64
+ 00 FF FF 66 66 01 40 40
+ 32 3C 00 00 00 00 00 0C
+ 00 09 3C 32 00 00 00 00
+ 4C 04 6C 07 1E 05 00 00
+ 00 01 05 0A 00 01 FF 00
+ 17 01 80 04 1B 00 00 00
+ C8 00 FF FF BF 64 88 13
+ 00 C0 80 00 71 00 D0 07
+ 0A 00 03 03 07 07 03 D0
+ 07 01 04 04 3B 00 13 00
+ 00 00 02 00 02 5A 84 3A
+ 00 12 00 01 00 02 00 02
+ 5E 04 39 00 12 00 02 00
+ 02 00 02 60 84 38 00 12
+ 00 03 00 02 00 02 62 84
+ 36 00 11 00 04 00 02 00
+ 02 64 84 34 00 11 00 06
+ 00 02 00 02 68 03 32 00
+ 10 00 08 00 02 00 02 6C
+ 83 30 00 10 00 0A 00 05
+ 00 04 5A 05 05 05 05 05
+ 05 05 05 05 05 05 05 05
+ 05 05 05 04 05 05 04 04
+ 05 05 05 05 04 05 05 05
+ 05 05 05 05 05 05 05 05
+ 05 20 00 05 00 A6 00 14
+ 64 01 00 07 07 06 06 00
+ 00 03 05 32 C8 0C 26 0A
+ 80 FE 00 00 00 00 00 00
+ 63 00 05 00 00 00 00 0B
+ 00 00 00 00 00 00 00 3E
+ 41 47 49 37 3F 35 33 30
+ 36 24 4A 1F 1E 22 1A 1B
+ 16 18 34 32 31 2F 2B 28
+ 2C 2A 25 2D 23 20 21 1D
+ 1C 17 19 14 13 0F 0A 10
+ 0C 0B 05 03 08 06 0E 09
+ 07 52 56 5C 5E 54 55 53
+ 58 57 59 5B 5D 60 5F 63
+ 62 02 10 02 10 02 10 02
+ 10 00 10 00 10 00 10 00
+ 10 FF 0A 0A 0A 0A 01
+ ];
+ };
+
+ config_0 {
+ pr_number = <1637894>;
+ config = [
+ 00 00 00 04 00 06 03 1E
+ 05 00 0C 00 09 AE 4B 9A
+ 4D 0E 0E 0D 0D 26 1C 1E
+ 05 58 10 A0 03 2D 1D 01
+ 32 00 52 BA 36 BE C3 80
+ C4 C4 05 03 0F 28 50 0F
+ 0A 01 01 07 05 05 0A C0
+ 19 C8 26 26 80 80 00 00
+ 2F 14 00 10 00 00 00 00
+ 00 00 00 00 00 FC E6 00
+ 00 0A 32 50 99 40 00 50
+ 00 00 55 00 80 00 01 64
+ 00 FF FF 80 80 01 40 40
+ 32 3C 00 00 00 00 00 0C
+ 00 09 3C 32 00 00 00 00
+ 4C 04 6C 07 1E 05 00 00
+ 00 01 05 0A 00 01 FF 00
+ 04 01 80 04 1B 00 00 00
+ C8 00 FF FF BF 64 F4 01
+ 00 C0 80 00 71 00 E8 03
+ 14 00 03 03 07 07 03 E8
+ 03 01 04 03 32 00 10 00
+ 00 00 02 00 02 6A 03 30
+ 00 10 00 02 00 05 00 04
+ 5A 03 2F 00 0F 00 04 00
+ 05 00 04 5D 03 2D 00 0F
+ 00 06 00 04 00 03 5A 03
+ 2C 00 0E 00 08 00 04 00
+ 03 5D 03 2A 00 0E 00 0A
+ 00 04 00 03 60 03 29 00
+ 0D 00 0C 00 03 00 02 58
+ 03 28 00 0D 00 0E 00 03
+ 00 02 5B 05 05 05 05 05
+ 05 05 05 05 05 05 05 05
+ 05 05 05 05 05 05 05 05
+ 05 05 05 05 05 05 05 05
+ 05 05 05 05 05 05 05 05
+ 05 28 00 05 00 A6 00 14
+ 64 05 00 07 07 06 06 00
+ 00 03 05 32 C8 0C 26 0A
+ 80 FE 00 00 00 00 00 00
+ 63 00 05 00 00 00 00 0B
+ 00 00 00 00 00 00 00 3E
+ 41 47 49 37 3F 35 33 30
+ 36 24 4A 1F 1E 22 1A 1B
+ 16 18 34 32 31 2F 2B 28
+ 2C 2A 25 2D 23 20 21 1D
+ 1C 17 19 14 13 0F 0A 10
+ 0C 0B 05 03 08 06 0E 09
+ 07 52 56 5C 5E 54 55 53
+ 58 57 59 5B 5D 60 5F 63
+ 62 02 10 02 10 02 10 02
+ 10 00 10 00 10 00 10 00
+ 10 9B 0A 0A 0A 0A 01
+ ];
+ };
+ };
+ };
+};
diff --git a/arch/arm64/boot/dts/tegra132-flounder-trusty.dtsi b/arch/arm64/boot/dts/tegra132-flounder-trusty.dtsi
new file mode 100644
index 0000000..67540dd
--- /dev/null
+++ b/arch/arm64/boot/dts/tegra132-flounder-trusty.dtsi
@@ -0,0 +1,43 @@
+/ {
+ trusty {
+ compatible = "android,trusty-smc-v1";
+ ranges;
+ #address-cells = <2>;
+ #size-cells = <2>;
+
+ irq {
+ compatible = "android,trusty-irq-v1";
+ };
+
+ fiq {
+ compatible = "android,trusty-fiq-v1";
+ ranges;
+ #address-cells = <2>;
+ #size-cells = <2>;
+
+ fiq-debugger {
+ compatible = "android,trusty-fiq-v1-tegra-uart";
+ reg = <0x0 0x70006000 0x0 0x40>;
+ interrupts = <0 36 0x04
+ 0 16 0x04>;
+ interrupt-names = "fiq", "signal";
+ };
+
+ watchdog {
+ compatible = "nvidia,tegra-wdt";
+ reg = <0x0 0x60005100 0x0 0x20
+ 0x0 0x60005070 0x0 0x08>;
+ interrupts = <0 123 0x04>;
+ interrupt-names = "fiq";
+ };
+ };
+
+ ote {
+ compatible = "android,trusty-ote-v1";
+ };
+
+ log {
+ compatible = "android,trusty-log-v1";
+ };
+ };
+};
diff --git a/arch/arm64/boot/dts/tegra132-flounder-xaxb.dts b/arch/arm64/boot/dts/tegra132-flounder-xaxb.dts
new file mode 100644
index 0000000..17671bc3
--- /dev/null
+++ b/arch/arm64/boot/dts/tegra132-flounder-xaxb.dts
@@ -0,0 +1,16 @@
+/dts-v1/;
+
+#include "tegra132-flounder-generic.dtsi"
+
+/ {
+ compatible = "google,flounder64", "nvidia,tegra132";
+ hw-revision = "xa,xb";
+
+ panel_jdi_qxga_8_9 {
+ gpios = <&gpio TEGRA_GPIO(Q, 2) 0>,
+ <&gpio TEGRA_GPIO(R, 0) 0>,
+ <&gpio TEGRA_GPIO(EE, 5) 0>,
+ <&gpio TEGRA_GPIO(B, 4) 0>;
+ };
+};
+
diff --git a/arch/arm64/boot/dts/tegra132-flounder-xc.dts b/arch/arm64/boot/dts/tegra132-flounder-xc.dts
new file mode 100644
index 0000000..e05e434
--- /dev/null
+++ b/arch/arm64/boot/dts/tegra132-flounder-xc.dts
@@ -0,0 +1,16 @@
+/dts-v1/;
+
+#include "tegra132-flounder-generic.dtsi"
+
+/ {
+ compatible = "google,flounder64", "nvidia,tegra132";
+ hw-revision = "xc";
+
+ panel_jdi_qxga_8_9 {
+ gpios = <&gpio TEGRA_GPIO(Q, 2) 0>,
+ <&gpio TEGRA_GPIO(R, 0) 0>,
+ <&gpio TEGRA_GPIO(EE, 5) 0>,
+ <&gpio TEGRA_GPIO(H, 5) 0>;
+ };
+};
+
diff --git a/arch/arm64/boot/dts/tegra132-flounder-xdxepvt.dts b/arch/arm64/boot/dts/tegra132-flounder-xdxepvt.dts
new file mode 100644
index 0000000..98c7f9cd
--- /dev/null
+++ b/arch/arm64/boot/dts/tegra132-flounder-xdxepvt.dts
@@ -0,0 +1,41 @@
+/dts-v1/;
+
+#include "tegra132-flounder-generic.dtsi"
+
+/ {
+ compatible = "google,flounder64", "nvidia,tegra132";
+ hw-revision = "xd,xe,pvt";
+
+ i2c@7000c000 {
+ htc_mcu@72 {
+ compatible = "htc_mcu";
+ reg = <0x72>;
+ interrupt-parent = <&gpio>;
+ interrupts = <140 0x0>;
+ mcu,intr-gpio = <&gpio 140 0>;
+ mcu,gs_chip_layout = <1>;
+ mcu,acceleration_axes = <6>;
+ mcu,magnetic_axes = <7>;
+ mcu,gyro_axes = <6>;
+ mcu,Cpu_wake_mcu-gpio = <&gpio 129 0>;
+ mcu,Reset-gpio = <&gpio 138 0>;
+ mcu,Chip_mode-gpio = <&gpio 164 0>;
+ };
+ };
+
+ panel_jdi_qxga_8_9 {
+ gpios = <&gpio TEGRA_GPIO(Q, 2) 0>,
+ <&gpio TEGRA_GPIO(R, 0) 0>,
+ <&gpio TEGRA_GPIO(I, 4) 0>,
+ <&gpio TEGRA_GPIO(H, 5) 0>;
+ };
+ i2c@7000c400 {
+ richtek_rt5506_amp@52 {
+ compatible = "richtek,rt5506-amp";
+ reg = <0x52>;
+ richtek,enable-gpio = <&gpio 185 0>;
+ richtek,enable-power-gpio = <&gpio 162 0>;
+ };
+ };
+};
+
diff --git a/arch/arm64/boot/dts/tegra132-flounder_lte-xaxbxcxdpvt.dts b/arch/arm64/boot/dts/tegra132-flounder_lte-xaxbxcxdpvt.dts
new file mode 100644
index 0000000..c9ab6a8
--- /dev/null
+++ b/arch/arm64/boot/dts/tegra132-flounder_lte-xaxbxcxdpvt.dts
@@ -0,0 +1,24 @@
+/dts-v1/;
+
+#include "tegra132-flounder-generic.dtsi"
+
+/ {
+ compatible = "google,flounder64_lte", "nvidia,tegra132";
+ hw-revision = "xa,xb,xc,xd,pvt";
+
+ panel_jdi_qxga_8_9 {
+ gpios = <&gpio TEGRA_GPIO(Q, 2) 0>,
+ <&gpio TEGRA_GPIO(R, 0) 0>,
+ <&gpio TEGRA_GPIO(I, 4) 0>,
+ <&gpio TEGRA_GPIO(H, 5) 0>;
+ };
+ i2c@7000c400 {
+ richtek_rt5506_amp@52 {
+ compatible = "richtek,rt5506-amp";
+ reg = <0x52>;
+ richtek,enable-gpio = <&gpio 185 0>;
+ richtek,enable-power-gpio = <&gpio 162 0>;
+ };
+ };
+};
+
diff --git a/arch/arm64/boot/dts/tegra132.dtsi b/arch/arm64/boot/dts/tegra132.dtsi
index 9c77252..a3a8415 100644
--- a/arch/arm64/boot/dts/tegra132.dtsi
+++ b/arch/arm64/boot/dts/tegra132.dtsi
@@ -22,6 +22,56 @@
enable-method = "spin-table";
cpu-release-addr = <0x0 0x8000fff8>;
power-states = <&power_states>;
+ // The currents (uA) correspond to the frequencies in the
+ // frequency table.
+ current = < 92060 //204000 kHz
+ 103560 //229500 kHz
+ 115070 //255000 kHz
+ 126580 //280500 kHz
+ 138080 //306000 kHz
+ 149590 //331500 kHz
+ 161100 //357000 kHz
+ 172600 //382500 kHz
+ 184110 //408000 kHz
+ 195620 //433500 kHz
+ 207120 //459000 kHz
+ 218630 //484500 kHz
+ 230140 //510000 kHz
+ 241640 //535500 kHz
+ 253150 //561000 kHz
+ 264660 //586500 kHz
+ 276170 //612000 kHz
+ 287670 //637500 kHz
+ 299180 //663000 kHz
+ 310690 //688500 kHz
+ 322190 //714000 kHz
+ 333700 //739500 kHz
+ 345210 //765000 kHz
+ 356710 //790500 kHz
+ 377360 //816000 kHz
+ 389150 //841500 kHz
+ 400950 //867000 kHz
+ 412740 //892500 kHz
+ 441640 //918000 kHz
+ 453910 //943500 kHz
+ 466180 //969000 kHz
+ 478440 //994500 kHz
+ 511520 //1020000 kHz
+ 587620 //1122000 kHz
+ 670620 //1224000 kHz
+ 761250 //1326000 kHz
+ 860270 //1428000 kHz
+ 968510 //1530000 kHz
+ 1086830 //1632000 kHz
+ 1215570 //1734000 kHz
+ 1356860 //1836000 kHz
+ 1511200 //1938000 kHz
+ 1570850 //2014500 kHz
+ 1769650 //2091000 kHz
+ 1961670 //2193000 kHz
+ 2170950 //2295000 kHz
+ 2398880 //2397000 kHz
+ 2646910>; //2499000 kHz
};
cpu@1 {
@@ -31,6 +81,57 @@
enable-method = "spin-table";
cpu-release-addr = <0x0 0x8000fff8>;
power-states = <&power_states>;
+ // The currents (uA) correspond to the frequencies in the
+ // frequency table.
+ current = < 60759 //204000 kHz
+ 68349 //229500 kHz
+ 75946 //255000 kHz
+ 83542 //280500 kHz
+ 91132 //306000 kHz
+ 98729 //331500 kHz
+ 106326 //357000 kHz
+ 113916 //382500 kHz
+ 121512 //408000 kHz
+ 129109 //433500 kHz
+ 136699 //459000 kHz
+ 144295 //484500 kHz
+ 151892 //510000 kHz
+ 159482 //535500 kHz
+ 167079 //561000 kHz
+ 174675 //586500 kHz
+ 182272 //612000 kHz
+ 189862 //637500 kHz
+ 197458 //663000 kHz
+ 205055 //688500 kHz
+ 212645 //714000 kHz
+ 220242 //739500 kHz
+ 227838 //765000 kHz
+ 235428 //790500 kHz
+ 249057 //816000 kHz
+ 256839 //841500 kHz
+ 264627 //867000 kHz
+ 272408 //892500 kHz
+ 291482 //918000 kHz
+ 299580 //943500 kHz
+ 307678 //969000 kHz
+ 315770 //994500 kHz
+ 337603 //1020000 kHz
+ 387829 //1122000 kHz
+ 442609 //1224000 kHz
+ 502425 //1326000 kHz
+ 567778 //1428000 kHz
+ 639216 //1530000 kHz
+ 717307 //1632000 kHz
+ 802276 //1734000 kHz
+ 895527 //1836000 kHz
+ 997392 //1938000 kHz
+ 1036761 //2014500 kHz
+ 1167969 //2091000 kHz
+ 1294702 //2193000 kHz
+ 1432827 //2295000 kHz
+ 1583260 //2397000 kHz
+ 1746960>; //2499000 kHz
+
};
};
diff --git a/arch/arm64/configs/flounder_defconfig b/arch/arm64/configs/flounder_defconfig
new file mode 100644
index 0000000..e013a71
--- /dev/null
+++ b/arch/arm64/configs/flounder_defconfig
@@ -0,0 +1,606 @@
+CONFIG_SYSVIPC=y
+CONFIG_AUDIT=y
+CONFIG_NO_HZ=y
+CONFIG_HIGH_RES_TIMERS=y
+CONFIG_IKCONFIG=y
+CONFIG_CGROUPS=y
+CONFIG_CGROUP_DEBUG=y
+CONFIG_CGROUP_FREEZER=y
+CONFIG_CGROUP_CPUACCT=y
+CONFIG_RESOURCE_COUNTERS=y
+CONFIG_CGROUP_SCHED=y
+CONFIG_RT_GROUP_SCHED=y
+CONFIG_BLK_DEV_INITRD=y
+CONFIG_PANIC_TIMEOUT=5
+CONFIG_KALLSYMS_ALL=y
+CONFIG_EMBEDDED=y
+CONFIG_PROFILING=y
+CONFIG_JUMP_LABEL=y
+# CONFIG_BLK_DEV_BSG is not set
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_IOSCHED_DEADLINE is not set
+# CONFIG_IOSCHED_CFQ is not set
+CONFIG_ARCH_TEGRA=y
+CONFIG_PM_DEVFREQ=y
+CONFIG_ARCH_TEGRA_12x_SOC=y
+CONFIG_MACH_ARDBEG=y
+CONFIG_QCT_9K_MODEM=y
+CONFIG_MACH_LOKI=y
+CONFIG_MACH_LAGUNA=y
+CONFIG_TEGRA_FIQ_DEBUGGER=y
+CONFIG_TEGRA_EMC_SCALING_ENABLE=y
+CONFIG_TEGRA_CLOCK_DEBUG_WRITE=y
+CONFIG_TEGRA_EDP_LIMITS=y
+CONFIG_TEGRA_GPU_EDP=y
+CONFIG_TEGRA_GADGET_BOOST_CPU_FREQ=1400
+CONFIG_TEGRA_EHCI_BOOST_CPU_FREQ=800
+CONFIG_TEGRA_DYNAMIC_PWRDET=y
+# CONFIG_TEGRA_EDP_EXACT_FREQ is not set
+CONFIG_TEGRA_WAKEUP_MONITOR=y
+CONFIG_TEGRA_SKIN_THROTTLE=y
+CONFIG_TEGRA_PLLM_SCALED=y
+CONFIG_ARCH_TEGRA_13x_SOC=y
+CONFIG_MACH_T132REF=y
+CONFIG_MACH_T132_FLOUNDER=y
+CONFIG_SMP=y
+CONFIG_NR_CPUS=2
+CONFIG_PREEMPT=y
+CONFIG_ARMV7_COMPAT=y
+# CONFIG_KSM is not set
+CONFIG_SECCOMP=y
+CONFIG_CMDLINE="no_console_suspend=1 tegra_wdt.enable_on_probe=1 tegra_wdt.heartbeat=120 androidboot.hardware=flounder"
+CONFIG_CMDLINE_EXTEND=y
+CONFIG_BUILD_ARM64_APPENDED_DTB_IMAGE=y
+CONFIG_BUILD_ARM64_APPENDED_DTB_IMAGE_NAMES="tegra132-flounder-xaxb tegra132-flounder-xc tegra132-flounder-xdxepvt tegra132-flounder_lte-xaxbxcxdpvt"
+CONFIG_COMPAT=y
+CONFIG_PM_AUTOSLEEP=y
+CONFIG_PM_WAKELOCKS=y
+CONFIG_PM_WAKELOCKS_LIMIT=0
+# CONFIG_PM_WAKELOCKS_GC is not set
+CONFIG_PM_RUNTIME=y
+CONFIG_PM_DEBUG=y
+CONFIG_PM_ADVANCED_DEBUG=y
+CONFIG_SUSPEND_TIME=y
+CONFIG_CPU_FREQ=y
+CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE=y
+CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
+CONFIG_CPU_FREQ_GOV_POWERSAVE=y
+CONFIG_CPU_FREQ_GOV_ONDEMAND=y
+CONFIG_CPU_FREQ_GOV_INTERACTIVE=y
+CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
+CONFIG_CPU_IDLE=y
+CONFIG_CPUQUIET_FRAMEWORK=y
+CONFIG_NET=y
+CONFIG_PACKET=y
+CONFIG_UNIX=y
+CONFIG_XFRM_USER=y
+CONFIG_NET_KEY=y
+CONFIG_INET=y
+CONFIG_IP_ADVANCED_ROUTER=y
+CONFIG_IP_MULTIPLE_TABLES=y
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
+CONFIG_IP_PNP_BOOTP=y
+CONFIG_IP_PNP_RARP=y
+CONFIG_INET_ESP=y
+# CONFIG_INET_XFRM_MODE_BEET is not set
+# CONFIG_INET_LRO is not set
+# CONFIG_INET_DIAG is not set
+CONFIG_IPV6_PRIVACY=y
+CONFIG_IPV6_ROUTER_PREF=y
+CONFIG_IPV6_ROUTE_INFO=y
+CONFIG_IPV6_OPTIMISTIC_DAD=y
+CONFIG_INET6_AH=y
+CONFIG_INET6_ESP=y
+CONFIG_INET6_IPCOMP=y
+CONFIG_IPV6_MIP6=y
+CONFIG_IPV6_TUNNEL=y
+CONFIG_IPV6_MULTIPLE_TABLES=y
+CONFIG_ANDROID_PARANOID_NETWORK=y
+CONFIG_NETFILTER=y
+CONFIG_NF_CONNTRACK=y
+CONFIG_NF_CONNTRACK_EVENTS=y
+CONFIG_NF_CT_PROTO_DCCP=y
+CONFIG_NF_CT_PROTO_SCTP=y
+CONFIG_NF_CT_PROTO_UDPLITE=y
+CONFIG_NF_CONNTRACK_AMANDA=y
+CONFIG_NF_CONNTRACK_FTP=y
+CONFIG_NF_CONNTRACK_H323=y
+CONFIG_NF_CONNTRACK_IRC=y
+CONFIG_NF_CONNTRACK_NETBIOS_NS=y
+CONFIG_NF_CONNTRACK_PPTP=y
+CONFIG_NF_CONNTRACK_SANE=y
+CONFIG_NF_CONNTRACK_TFTP=y
+CONFIG_NF_CT_NETLINK=y
+CONFIG_NETFILTER_TPROXY=y
+CONFIG_NETFILTER_XT_TARGET_CLASSIFY=y
+CONFIG_NETFILTER_XT_TARGET_CONNMARK=y
+CONFIG_NETFILTER_XT_TARGET_IDLETIMER=y
+CONFIG_NETFILTER_XT_TARGET_MARK=y
+CONFIG_NETFILTER_XT_TARGET_NFLOG=y
+CONFIG_NETFILTER_XT_TARGET_NFQUEUE=y
+CONFIG_NETFILTER_XT_TARGET_TPROXY=y
+CONFIG_NETFILTER_XT_TARGET_TRACE=y
+CONFIG_NETFILTER_XT_TARGET_TCPMSS=y
+CONFIG_NETFILTER_XT_MATCH_COMMENT=y
+CONFIG_NETFILTER_XT_MATCH_CONNBYTES=y
+CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=y
+CONFIG_NETFILTER_XT_MATCH_CONNMARK=y
+CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
+CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=y
+CONFIG_NETFILTER_XT_MATCH_HELPER=y
+CONFIG_NETFILTER_XT_MATCH_IPRANGE=y
+CONFIG_NETFILTER_XT_MATCH_LENGTH=y
+CONFIG_NETFILTER_XT_MATCH_LIMIT=y
+CONFIG_NETFILTER_XT_MATCH_MAC=y
+CONFIG_NETFILTER_XT_MATCH_MARK=y
+CONFIG_NETFILTER_XT_MATCH_POLICY=y
+CONFIG_NETFILTER_XT_MATCH_PKTTYPE=y
+CONFIG_NETFILTER_XT_MATCH_QTAGUID=y
+CONFIG_NETFILTER_XT_MATCH_QUOTA=y
+CONFIG_NETFILTER_XT_MATCH_QUOTA2=y
+CONFIG_NETFILTER_XT_MATCH_QUOTA2_LOG=y
+CONFIG_NETFILTER_XT_MATCH_SOCKET=y
+CONFIG_NETFILTER_XT_MATCH_STATE=y
+CONFIG_NETFILTER_XT_MATCH_STATISTIC=y
+CONFIG_NETFILTER_XT_MATCH_STRING=y
+CONFIG_NETFILTER_XT_MATCH_TIME=y
+CONFIG_NETFILTER_XT_MATCH_U32=y
+CONFIG_NF_CONNTRACK_IPV4=y
+CONFIG_IP_NF_IPTABLES=y
+CONFIG_IP_NF_MATCH_AH=y
+CONFIG_IP_NF_MATCH_ECN=y
+CONFIG_IP_NF_MATCH_TTL=y
+CONFIG_IP_NF_FILTER=y
+CONFIG_IP_NF_TARGET_REJECT=y
+CONFIG_IP_NF_TARGET_REJECT_SKERR=y
+CONFIG_NF_NAT_IPV4=y
+CONFIG_IP_NF_TARGET_MASQUERADE=y
+CONFIG_IP_NF_TARGET_NETMAP=y
+CONFIG_IP_NF_TARGET_REDIRECT=y
+CONFIG_IP_NF_MANGLE=y
+CONFIG_IP_NF_RAW=y
+CONFIG_IP_NF_ARPTABLES=y
+CONFIG_IP_NF_ARPFILTER=y
+CONFIG_IP_NF_ARP_MANGLE=y
+CONFIG_NF_CONNTRACK_IPV6=y
+CONFIG_IP6_NF_IPTABLES=y
+CONFIG_IP6_NF_FILTER=y
+CONFIG_IP6_NF_TARGET_REJECT=y
+CONFIG_IP6_NF_TARGET_REJECT_SKERR=y
+CONFIG_IP6_NF_MANGLE=y
+CONFIG_IP6_NF_RAW=y
+CONFIG_MHI=y
+CONFIG_NET_SCHED=y
+CONFIG_NET_SCH_HTB=y
+CONFIG_NET_SCH_INGRESS=y
+CONFIG_NET_CLS_U32=y
+CONFIG_NET_EMATCH=y
+CONFIG_NET_EMATCH_U32=y
+CONFIG_NET_CLS_ACT=y
+CONFIG_NET_ACT_POLICE=y
+CONFIG_NET_ACT_GACT=y
+CONFIG_NET_ACT_MIRRED=y
+CONFIG_CFG80211=y
+CONFIG_NL80211_TESTMODE=y
+CONFIG_MAC80211=y
+CONFIG_RFKILL=y
+CONFIG_RFKILL_GPIO=y
+CONFIG_CAIF=y
+CONFIG_NFC=y
+CONFIG_BCM2079X_NFC=y
+# CONFIG_FIRMWARE_IN_KERNEL is not set
+CONFIG_CMA=y
+CONFIG_PLATFORM_ENABLE_IOMMU=y
+CONFIG_BLK_DEV_LOOP=y
+CONFIG_AD525X_DPOT=y
+CONFIG_AD525X_DPOT_I2C=y
+CONFIG_APDS9802ALS=y
+CONFIG_SENSORS_NCT1008=y
+CONFIG_UID_STAT=y
+CONFIG_TEGRA_CRYPTO_DEV=y
+CONFIG_MAX1749_VIBRATOR=y
+CONFIG_THERM_EST=y
+CONFIG_FAN_THERM_EST=y
+CONFIG_BLUEDROID_PM=y
+CONFIG_UID_CPUTIME=y
+CONFIG_EEPROM_AT24=y
+CONFIG_TEGRA_PROFILER=y
+CONFIG_HTC_HEADSET_MGR=y
+CONFIG_HTC_HEADSET_PMIC=y
+CONFIG_HEADSET_DEBUG_UART=y
+CONFIG_SCSI=y
+CONFIG_BLK_DEV_SD=y
+CONFIG_BLK_DEV_SR=y
+CONFIG_BLK_DEV_SR_VENDOR=y
+CONFIG_CHR_DEV_SG=y
+CONFIG_SCSI_MULTI_LUN=y
+CONFIG_MD=y
+CONFIG_BLK_DEV_DM=y
+CONFIG_DM_CRYPT=y
+CONFIG_DM_UEVENT=y
+CONFIG_DM_VERITY=y
+CONFIG_NETDEVICES=y
+CONFIG_DUMMY=y
+CONFIG_TUN=y
+CONFIG_R8169=y
+CONFIG_PPP=y
+CONFIG_PPP_BSDCOMP=y
+CONFIG_PPP_DEFLATE=y
+CONFIG_PPP_FILTER=y
+CONFIG_PPP_MPPE=y
+CONFIG_PPPOLAC=y
+CONFIG_PPPOPNS=y
+CONFIG_PPP_ASYNC=y
+CONFIG_PPP_SYNC_TTY=y
+CONFIG_USB_USBNET=y
+CONFIG_USB_NET_SMSC95XX=y
+# CONFIG_USB_NET_NET1080 is not set
+# CONFIG_USB_BELKIN is not set
+# CONFIG_USB_ARMLINUX is not set
+# CONFIG_USB_NET_ZAURUS is not set
+CONFIG_WIFI_CONTROL_FUNC=y
+CONFIG_BCMDHD=y
+CONFIG_BCMDHD_SDIO=y
+CONFIG_BCM4354=y
+CONFIG_BCMDHD_FW_PATH="/vendor/firmware/fw_bcmdhd.bin"
+CONFIG_DHD_USE_SCHED_SCAN=y
+# CONFIG_INPUT_MOUSEDEV is not set
+CONFIG_INPUT_JOYDEV=y
+CONFIG_INPUT_EVDEV=y
+CONFIG_INPUT_KEYRESET=y
+CONFIG_INPUT_CFBOOST=y
+# CONFIG_KEYBOARD_ATKBD is not set
+CONFIG_KEYBOARD_GPIO=y
+CONFIG_KEYBOARD_TEGRA=y
+# CONFIG_INPUT_MOUSE is not set
+CONFIG_INPUT_JOYSTICK=y
+CONFIG_JOYSTICK_XPAD=y
+CONFIG_JOYSTICK_XPAD_FF=y
+CONFIG_JOYSTICK_XPAD_LEDS=y
+CONFIG_INPUT_TABLET=y
+CONFIG_TABLET_USB_ACECAD=y
+CONFIG_TABLET_USB_AIPTEK=y
+CONFIG_TABLET_USB_GTCO=y
+CONFIG_TABLET_USB_HANWANG=y
+CONFIG_TABLET_USB_KBTAB=y
+CONFIG_TABLET_USB_WACOM=y
+CONFIG_INPUT_TOUCHSCREEN=y
+CONFIG_CYPRESS_SAR=y
+CONFIG_TOUCHSCREEN_MAXIM_STI=y
+CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_SPI=y
+CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_CORE=y
+CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_RMI_DEV=y
+CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_FW_UPDATE=y
+CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_WAKEUP_GESTURE=y
+CONFIG_INPUT_MISC=y
+CONFIG_INPUT_KEYCHORD=y
+CONFIG_INPUT_UINPUT=y
+CONFIG_INPUT_GPIO=y
+# CONFIG_SERIO_I8042 is not set
+# CONFIG_SERIO_SERPORT is not set
+# CONFIG_VT is not set
+# CONFIG_LEGACY_PTYS is not set
+# CONFIG_DEVMEM is not set
+# CONFIG_DEVKMEM is not set
+CONFIG_SERIAL_8250=y
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_TEGRA=y
+# CONFIG_HW_RANDOM is not set
+CONFIG_I2C=y
+# CONFIG_I2C_COMPAT is not set
+CONFIG_I2C_CHARDEV=y
+CONFIG_I2C_MUX=y
+CONFIG_I2C_MUX_PCA954x=y
+CONFIG_INPUT_CWSTM32=y
+# CONFIG_I2C_HELPER_AUTO is not set
+CONFIG_I2C_TEGRA=y
+CONFIG_SPI=y
+CONFIG_SPI_TEGRA114=y
+CONFIG_PINCTRL_PALMAS=y
+CONFIG_PINCTRL_AS3722=y
+CONFIG_DEBUG_GPIO=y
+CONFIG_GPIO_SYSFS=y
+CONFIG_GPIO_MAX77663=y
+CONFIG_GPIO_PCA953X=y
+CONFIG_GPIO_PCA953X_IRQ=y
+CONFIG_GPIO_RC5T583=y
+CONFIG_GPIO_PALMAS=y
+CONFIG_GPIO_MAX77660=y
+CONFIG_POWER_SUPPLY_EXTCON=y
+CONFIG_CHARGER_BQ2419X_HTC=y
+CONFIG_HTC_BATTERY_BQ2419X=y
+CONFIG_CHARGER_BQ2471X=y
+CONFIG_CHARGER_BQ2477X=y
+CONFIG_CHARGER_MAX77665=y
+CONFIG_BATTERY_SBS=y
+CONFIG_BATTERY_MAX17048=y
+CONFIG_GAUGE_MAX17050=y
+CONFIG_FLOUNDER_BATTERY=y
+CONFIG_BATTERY_BQ27441=y
+CONFIG_CHARGER_GPIO=y
+CONFIG_CHARGER_EXTCON_MAX77660=y
+CONFIG_BATTERY_CW201X=y
+CONFIG_POWER_RESET_AS3722=y
+CONFIG_POWER_OFF_PALMAS=y
+CONFIG_BATTERY_SYSTEM_VOLTAGE_MONITOR=y
+CONFIG_VOLTAGE_MONITOR_PALMAS=y
+CONFIG_THERMAL_GOV_PID=y
+CONFIG_THERMAL_GOV_ADAPTIVE_SKIN=y
+CONFIG_GENERIC_ADC_THERMAL=y
+CONFIG_PWM_FAN=y
+CONFIG_TRUSTY=y
+CONFIG_WATCHDOG=y
+CONFIG_WATCHDOG_CORE=y
+CONFIG_WATCHDOG_NOWAYOUT=y
+CONFIG_TEGRA_WATCHDOG=y
+CONFIG_TEGRA_WATCHDOG_FIQ=y
+CONFIG_GPADC_TPS80031=y
+CONFIG_MFD_RC5T583=y
+CONFIG_MFD_MAX8831=y
+CONFIG_MFD_AS3722=y
+CONFIG_MFD_PALMAS=y
+CONFIG_MFD_MAX77663=y
+CONFIG_MFD_TPS65090=y
+CONFIG_MFD_TPS6586X=y
+CONFIG_MFD_TPS65910=y
+CONFIG_MFD_TPS80031=y
+CONFIG_MFD_MAX77660=y
+CONFIG_REGULATOR=y
+CONFIG_REGULATOR_FIXED_VOLTAGE=y
+CONFIG_REGULATOR_VIRTUAL_CONSUMER=y
+CONFIG_REGULATOR_USERSPACE_CONSUMER=y
+CONFIG_REGULATOR_GPIO=y
+CONFIG_REGULATOR_AS3722=y
+CONFIG_REGULATOR_MAX8973=y
+CONFIG_REGULATOR_MAX77663=y
+CONFIG_REGULATOR_RC5T583=y
+CONFIG_REGULATOR_PALMAS=y
+CONFIG_REGULATOR_TPS51632=y
+CONFIG_REGULATOR_TPS62360=y
+CONFIG_REGULATOR_TPS65090=y
+CONFIG_REGULATOR_TPS6586X=y
+CONFIG_REGULATOR_TPS65910=y
+CONFIG_REGULATOR_TPS80031=y
+CONFIG_REGULATOR_TPS6238X0=y
+CONFIG_REGULATOR_MAX77660=y
+CONFIG_MEDIA_SUPPORT=y
+CONFIG_MEDIA_CAMERA_SUPPORT=y
+CONFIG_MEDIA_USB_SUPPORT=y
+CONFIG_USB_VIDEO_CLASS=y
+CONFIG_V4L_PLATFORM_DRIVERS=y
+# CONFIG_TEGRA_RPC is not set
+CONFIG_TEGRA_NVAVP=y
+CONFIG_TEGRA_NVAVP_AUDIO=y
+# CONFIG_TEGRA_DTV is not set
+CONFIG_VIDEO_AR0832=y
+CONFIG_VIDEO_IMX091=y
+CONFIG_VIDEO_IMX135=y
+CONFIG_VIDEO_AR0261=y
+CONFIG_VIDEO_IMX132=y
+CONFIG_VIDEO_OV9772=y
+CONFIG_VIDEO_OV5693=y
+CONFIG_VIDEO_OV7695=y
+CONFIG_TORCH_SSL3250A=y
+CONFIG_MAX77665_FLASH=y
+CONFIG_TORCH_MAX77387=y
+CONFIG_TORCH_AS364X=y
+CONFIG_VIDEO_AD5823=y
+CONFIG_VIDEO_DW9718=y
+CONFIG_VIDEO_MT9M114=y
+CONFIG_VIDEO_CAMERA=y
+CONFIG_VIDEO_IMX219=y
+CONFIG_VIDEO_OV9760=y
+CONFIG_VIDEO_DRV201=y
+CONFIG_VIDEO_OUTPUT_CONTROL=y
+CONFIG_FB=y
+CONFIG_TEGRA_GRHOST=y
+CONFIG_TEGRA_DC=y
+CONFIG_ADF_TEGRA_FBDEV=y
+CONFIG_TEGRA_DSI=y
+CONFIG_TEGRA_DSI2EDP_SN65DSI86=y
+CONFIG_TEGRA_NVHDCP=y
+# CONFIG_TEGRA_CAMERA is not set
+CONFIG_BACKLIGHT_LCD_SUPPORT=y
+# CONFIG_BACKLIGHT_GENERIC is not set
+CONFIG_BACKLIGHT_PWM=y
+CONFIG_BACKLIGHT_TEGRA_PWM=y
+CONFIG_BACKLIGHT_TEGRA_DSI=y
+CONFIG_BACKLIGHT_MAX8831=y
+CONFIG_ADF=y
+CONFIG_SOUND=y
+CONFIG_SND=y
+CONFIG_SND_DYNAMIC_MINORS=y
+CONFIG_SND_USB_AUDIO=y
+CONFIG_SND_SOC=y
+CONFIG_SND_SOC_TEGRA=y
+CONFIG_SND_SOC_TEGRA_RT5677=y
+CONFIG_AMP_TFA9895=y
+CONFIG_AMP_TFA9895L=y
+CONFIG_HIDRAW=y
+CONFIG_UHID=y
+CONFIG_HID_A4TECH=y
+CONFIG_HID_ACRUX=y
+CONFIG_HID_ACRUX_FF=y
+CONFIG_HID_APPLE=y
+CONFIG_HID_BELKIN=y
+CONFIG_HID_CHERRY=y
+CONFIG_HID_CHICONY=y
+CONFIG_HID_PRODIKEYS=y
+CONFIG_HID_CYPRESS=y
+CONFIG_HID_DRAGONRISE=y
+CONFIG_DRAGONRISE_FF=y
+CONFIG_HID_EMS_FF=y
+CONFIG_HID_ELECOM=y
+CONFIG_HID_EZKEY=y
+CONFIG_HID_HOLTEK=y
+CONFIG_HOLTEK_FF=y
+CONFIG_HID_KEYTOUCH=y
+CONFIG_HID_KYE=y
+CONFIG_HID_UCLOGIC=y
+CONFIG_HID_WALTOP=y
+CONFIG_HID_GYRATION=y
+CONFIG_HID_TWINHAN=y
+CONFIG_HID_KENSINGTON=y
+CONFIG_HID_LCPOWER=y
+CONFIG_HID_LOGITECH=y
+CONFIG_HID_LOGITECH_DJ=y
+CONFIG_LOGITECH_FF=y
+CONFIG_LOGIRUMBLEPAD2_FF=y
+CONFIG_LOGIG940_FF=y
+CONFIG_HID_MAGICMOUSE=y
+CONFIG_HID_MICROSOFT=y
+CONFIG_HID_MONTEREY=y
+CONFIG_HID_MULTITOUCH=y
+CONFIG_HID_NTRIG=y
+CONFIG_HID_ORTEK=y
+CONFIG_HID_PANTHERLORD=y
+CONFIG_PANTHERLORD_FF=y
+CONFIG_HID_PETALYNX=y
+CONFIG_HID_PICOLCD=y
+CONFIG_HID_PICOLCD_FB=y
+CONFIG_HID_PICOLCD_BACKLIGHT=y
+CONFIG_HID_PICOLCD_LCD=y
+CONFIG_HID_PICOLCD_LEDS=y
+CONFIG_HID_PRIMAX=y
+CONFIG_HID_ROCCAT=y
+CONFIG_HID_SAITEK=y
+CONFIG_HID_SAMSUNG=y
+CONFIG_HID_SONY=y
+CONFIG_HID_SPEEDLINK=y
+CONFIG_HID_SUNPLUS=y
+CONFIG_HID_GREENASIA=y
+CONFIG_GREENASIA_FF=y
+CONFIG_HID_SMARTJOYPLUS=y
+CONFIG_SMARTJOYPLUS_FF=y
+CONFIG_HID_TIVO=y
+CONFIG_HID_TOPSEED=y
+CONFIG_HID_THRUSTMASTER=y
+CONFIG_THRUSTMASTER_FF=y
+CONFIG_HID_WACOM=y
+CONFIG_HID_WIIMOTE=y
+CONFIG_HID_ZEROPLUS=y
+CONFIG_ZEROPLUS_FF=y
+CONFIG_HID_ZYDACRON=y
+CONFIG_USB_HIDDEV=y
+CONFIG_I2C_HID=y
+CONFIG_USB_ANNOUNCE_NEW_DEVICES=y
+CONFIG_USB_OTG=y
+# CONFIG_USB_OTG_WHITELIST is not set
+CONFIG_USB_XHCI_HCD=y
+CONFIG_TEGRA_XUSB_PLATFORM=y
+CONFIG_USB_EHCI_HCD=y
+CONFIG_USB_ACM=y
+CONFIG_USB_WDM=y
+CONFIG_USB_STORAGE=y
+CONFIG_USB_SERIAL=y
+CONFIG_USB_SERIAL_PL2303=y
+CONFIG_USB_SERIAL_OPTION=y
+CONFIG_USB_SERIAL_BASEBAND=y
+CONFIG_USB_RENESAS_MODEM=y
+CONFIG_USB_NV_SHIELD_LED=y
+CONFIG_USB_OTG_WAKELOCK=y
+CONFIG_USB_TEGRA_OTG=y
+CONFIG_USB_GADGET=y
+CONFIG_USB_GADGET_VBUS_DRAW=500
+CONFIG_USB_TEGRA=y
+CONFIG_USB_G_ANDROID=y
+CONFIG_MMC=y
+CONFIG_MMC_UNSAFE_RESUME=y
+CONFIG_MMC_BLOCK_MINORS=16
+CONFIG_MMC_TEST=y
+CONFIG_MMC_SDHCI=y
+CONFIG_MMC_SDHCI_PLTFM=y
+CONFIG_MMC_SDHCI_TEGRA=y
+CONFIG_LEDS_MAX8831=y
+CONFIG_LEDS_GPIO=y
+CONFIG_SWITCH=y
+CONFIG_RTC_CLASS=y
+CONFIG_RTC_DRV_AS3722=y
+CONFIG_RTC_DRV_MAX77663=y
+CONFIG_RTC_DRV_TPS6586X=y
+CONFIG_RTC_DRV_TPS80031=y
+CONFIG_RTC_DRV_RC5T583=y
+CONFIG_RTC_DRV_PALMAS=y
+CONFIG_RTC_DRV_MAX77660=y
+CONFIG_DMADEVICES=y
+CONFIG_TEGRA20_APB_DMA=y
+CONFIG_STAGING=y
+CONFIG_MAX77660_ADC=y
+CONFIG_AS3722_ADC_EXTCON=y
+CONFIG_PALMAS_GPADC=y
+CONFIG_SENSORS_CM3218=y
+CONFIG_SENSORS_JSA1127=y
+CONFIG_SENSORS_STM8T143=y
+CONFIG_SENSORS_CM3217=y
+CONFIG_INA230=y
+CONFIG_ANDROID=y
+CONFIG_ASHMEM=y
+CONFIG_ANDROID_TIMED_GPIO=y
+CONFIG_ANDROID_LOW_MEMORY_KILLER=y
+CONFIG_FIQ_DEBUGGER_NO_SLEEP=y
+CONFIG_FIQ_DEBUGGER_CONSOLE=y
+CONFIG_PASR=y
+CONFIG_TEGRA_MC_DOMAINS=y
+CONFIG_CLK_PALMAS=y
+CONFIG_CLK_AS3722=y
+CONFIG_TEGRA_IOMMU_SMMU=y
+CONFIG_EXTCON=y
+CONFIG_EXTCON_PALMAS=y
+CONFIG_IIO=y
+CONFIG_IIO_BUFFER=y
+CONFIG_IIO_KFIFO_BUF=y
+CONFIG_PWM=y
+CONFIG_PWM_TEGRA=y
+CONFIG_GK20A_PMU=y
+CONFIG_GK20A_DEVFREQ=y
+CONFIG_ANDROID_BINDER_IPC=y
+CONFIG_EXT2_FS=y
+CONFIG_EXT2_FS_XATTR=y
+CONFIG_EXT2_FS_POSIX_ACL=y
+CONFIG_EXT2_FS_SECURITY=y
+CONFIG_EXT4_FS=y
+CONFIG_EXT4_FS_SECURITY=y
+# CONFIG_DNOTIFY is not set
+CONFIG_FUSE_FS=y
+CONFIG_MSDOS_FS=y
+CONFIG_VFAT_FS=y
+CONFIG_TMPFS=y
+CONFIG_TMPFS_POSIX_ACL=y
+CONFIG_PSTORE=y
+CONFIG_PSTORE_CONSOLE=y
+CONFIG_PSTORE_PMSG=y
+CONFIG_F2FS_FS=y
+CONFIG_F2FS_FS_SECURITY=y
+CONFIG_F2FS_CHECK_FS=y
+CONFIG_NLS_CODEPAGE_437=y
+CONFIG_NLS_ISO8859_1=y
+CONFIG_PRINTK_TIME=y
+CONFIG_MAGIC_SYSRQ=y
+CONFIG_DEBUG_SECTION_MISMATCH=y
+CONFIG_LOCKUP_DETECTOR=y
+CONFIG_BOOTPARAM_HARDLOCKUP_PANIC=y
+CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
+CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=10
+CONFIG_SCHEDSTATS=y
+CONFIG_TIMER_STATS=y
+# CONFIG_DEBUG_PREEMPT is not set
+CONFIG_DEBUG_ATOMIC_SLEEP=y
+CONFIG_DEBUG_INFO=y
+CONFIG_ENABLE_DEFAULT_TRACERS=y
+CONFIG_DYNAMIC_DEBUG=y
+CONFIG_SECURITY=y
+CONFIG_SECURITY_NETWORK=y
+CONFIG_SECURITY_SELINUX=y
+CONFIG_TRUSTED_LITTLE_KERNEL=y
+CONFIG_CRYPTO_SHA256=y
+CONFIG_CRYPTO_TWOFISH=y
+# CONFIG_CRYPTO_ANSI_CPRNG is not set
+CONFIG_CRYPTO_DEV_TEGRA_SE=y
+CONFIG_CRYPTO_DEV_TEGRA_SE_NO_ALGS=y
+CONFIG_ARM64_CRYPTO=y
+CONFIG_CRYPTO_SHA1_ARM64_CE=y
+CONFIG_CRYPTO_SHA2_ARM64_CE=y
+CONFIG_CRYPTO_AES_ARM64_CE=y
+CONFIG_CRYPTO_AES_ARM64_CE_CCM=y
+CONFIG_CRYPTO_AES_ARM64_CE_BLK=y
diff --git a/arch/arm64/mach-tegra/Kconfig b/arch/arm64/mach-tegra/Kconfig
index 6c5a2a1..d74b21e 100644
--- a/arch/arm64/mach-tegra/Kconfig
+++ b/arch/arm64/mach-tegra/Kconfig
@@ -46,6 +46,13 @@
help
Support for NVIDIA Exuma FPGA development platform
+config MACH_T132_FLOUNDER
+ bool "Flounder board with t132"
+ depends on ARCH_TEGRA_13x_SOC
+ select SYSEDP_FRAMEWORK
+ help
+ Support for the Flounder development platform
+
config DENVER_CPU
bool "Denver CPU"
help
diff --git a/arch/arm64/mach-tegra/Makefile b/arch/arm64/mach-tegra/Makefile
index b160e28..f6dc1b9 100644
--- a/arch/arm64/mach-tegra/Makefile
+++ b/arch/arm64/mach-tegra/Makefile
@@ -62,6 +62,24 @@
obj-${CONFIG_MACH_T132REF} += panel-s-wqxga-10-1.o
obj-${CONFIG_MACH_T132REF} += panel-i-edp-1080p-11-6.o
+obj-${CONFIG_MACH_T132_FLOUNDER} += board-flounder.o
+obj-${CONFIG_MACH_T132_FLOUNDER} += board-flounder-gps.o
+obj-${CONFIG_MACH_T132_FLOUNDER} += board-flounder-kbc.o
+obj-${CONFIG_MACH_T132_FLOUNDER} += board-flounder-sdhci.o
+obj-${CONFIG_MACH_T132_FLOUNDER} += board-flounder-sensors.o
+obj-${CONFIG_MACH_T132_FLOUNDER} += board-flounder-panel.o
+obj-${CONFIG_MACH_T132_FLOUNDER} += board-flounder-memory.o
+obj-${CONFIG_MACH_T132_FLOUNDER} += board-flounder-pinmux.o
+obj-${CONFIG_MACH_T132_FLOUNDER} += board-flounder-power.o
+obj-${CONFIG_MACH_T132_FLOUNDER} += panel-j-qxga-8-9.o
+obj-${CONFIG_MACH_T132_FLOUNDER} += flounder-bdaddress.o
+obj-${CONFIG_MACH_T132_FLOUNDER} += board-flounder-sysedp.o
+
+#CAM
+obj-${CONFIG_MACH_T132_FLOUNDER} += htc_awb_cal.o
+
+obj-${CONFIG_MACH_T132_FLOUNDER} += board-flounder-bootparams.o
+obj-${CONFIG_MACH_T132_FLOUNDER} += board-flounder-mdm9k.o
obj-y += board-touch-maxim_sti-spi.o
diff --git a/arch/arm64/mach-tegra/board-flounder-bootparams.c b/arch/arm64/mach-tegra/board-flounder-bootparams.c
new file mode 100644
index 0000000..eb3975d
--- /dev/null
+++ b/arch/arm64/mach-tegra/board-flounder-bootparams.c
@@ -0,0 +1,2 @@
+/* FIXME: temporary */
+#include "../../arm/mach-tegra/board-flounder-bootparams.c"
diff --git a/arch/arm64/mach-tegra/board-flounder-gps.c b/arch/arm64/mach-tegra/board-flounder-gps.c
new file mode 100644
index 0000000..d8f73c7
--- /dev/null
+++ b/arch/arm64/mach-tegra/board-flounder-gps.c
@@ -0,0 +1,2 @@
+/* FIXME: temporary */
+#include "../../arm/mach-tegra/board-flounder-gps.c"
diff --git a/arch/arm64/mach-tegra/board-flounder-kbc.c b/arch/arm64/mach-tegra/board-flounder-kbc.c
new file mode 100644
index 0000000..4c315cc
--- /dev/null
+++ b/arch/arm64/mach-tegra/board-flounder-kbc.c
@@ -0,0 +1,2 @@
+/* FIXME: temporary */
+#include "../../arm/mach-tegra/board-flounder-kbc.c"
diff --git a/arch/arm64/mach-tegra/board-flounder-mdm9k.c b/arch/arm64/mach-tegra/board-flounder-mdm9k.c
new file mode 100644
index 0000000..fffc136
--- /dev/null
+++ b/arch/arm64/mach-tegra/board-flounder-mdm9k.c
@@ -0,0 +1,2 @@
+/* FIXME: temporary */
+#include "../../arm/mach-tegra/board-flounder-mdm9k.c"
diff --git a/arch/arm64/mach-tegra/board-flounder-memory.c b/arch/arm64/mach-tegra/board-flounder-memory.c
new file mode 100644
index 0000000..d4e1eb1
--- /dev/null
+++ b/arch/arm64/mach-tegra/board-flounder-memory.c
@@ -0,0 +1,2 @@
+/* FIXME: temporary */
+#include "../../arm/mach-tegra/board-flounder-memory.c"
diff --git a/arch/arm64/mach-tegra/board-flounder-panel.c b/arch/arm64/mach-tegra/board-flounder-panel.c
new file mode 100644
index 0000000..0ab8392
--- /dev/null
+++ b/arch/arm64/mach-tegra/board-flounder-panel.c
@@ -0,0 +1,2 @@
+/* FIXME: temporary */
+#include "../../arm/mach-tegra/board-flounder-panel.c"
diff --git a/arch/arm64/mach-tegra/board-flounder-pinmux.c b/arch/arm64/mach-tegra/board-flounder-pinmux.c
new file mode 100644
index 0000000..9593672
--- /dev/null
+++ b/arch/arm64/mach-tegra/board-flounder-pinmux.c
@@ -0,0 +1,2 @@
+/* FIXME: temporary */
+#include "../../arm/mach-tegra/board-flounder-pinmux.c"
diff --git a/arch/arm64/mach-tegra/board-flounder-power.c b/arch/arm64/mach-tegra/board-flounder-power.c
new file mode 100644
index 0000000..cf34d2e
--- /dev/null
+++ b/arch/arm64/mach-tegra/board-flounder-power.c
@@ -0,0 +1,2 @@
+/* FIXME: temporary */
+#include "../../arm/mach-tegra/board-flounder-power.c"
diff --git a/arch/arm64/mach-tegra/board-flounder-sdhci.c b/arch/arm64/mach-tegra/board-flounder-sdhci.c
new file mode 100644
index 0000000..bb2891c
--- /dev/null
+++ b/arch/arm64/mach-tegra/board-flounder-sdhci.c
@@ -0,0 +1,2 @@
+/* FIXME: temporary */
+#include "../../arm/mach-tegra/board-flounder-sdhci.c"
diff --git a/arch/arm64/mach-tegra/board-flounder-sensors.c b/arch/arm64/mach-tegra/board-flounder-sensors.c
new file mode 100644
index 0000000..831086d
--- /dev/null
+++ b/arch/arm64/mach-tegra/board-flounder-sensors.c
@@ -0,0 +1,2 @@
+/* FIXME: temporary */
+#include "../../arm/mach-tegra/board-flounder-sensors.c"
diff --git a/arch/arm64/mach-tegra/board-flounder-sysedp.c b/arch/arm64/mach-tegra/board-flounder-sysedp.c
new file mode 100644
index 0000000..047f026
--- /dev/null
+++ b/arch/arm64/mach-tegra/board-flounder-sysedp.c
@@ -0,0 +1,2 @@
+/* FIXME: temporary */
+#include "../../arm/mach-tegra/board-flounder-sysedp.c"
diff --git a/arch/arm64/mach-tegra/board-flounder.c b/arch/arm64/mach-tegra/board-flounder.c
new file mode 100644
index 0000000..0de83b8
--- /dev/null
+++ b/arch/arm64/mach-tegra/board-flounder.c
@@ -0,0 +1,27 @@
+/*
+ * arch/arm64/mach-tegra/board-flounder.c
+ *
+ * NVIDIA Tegra132 board support
+ *
+ * Copyright (C) 2012-2013 NVIDIA Corporation. All rights reserved.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/of.h>
+#include <linux/of_platform.h>
+#include <linux/tegra-pmc.h>
+
+#include "pm.h"
+
+#include "../../arm/mach-tegra/board-flounder.c"
+
diff --git a/arch/arm64/mach-tegra/flounder-bdaddress.c b/arch/arm64/mach-tegra/flounder-bdaddress.c
new file mode 100644
index 0000000..50ba698
--- /dev/null
+++ b/arch/arm64/mach-tegra/flounder-bdaddress.c
@@ -0,0 +1,2 @@
+/* FIXME: temporary */
+#include "../../arm/mach-tegra/flounder-bdaddress.c"
diff --git a/arch/arm64/mach-tegra/htc_awb_cal.c b/arch/arm64/mach-tegra/htc_awb_cal.c
new file mode 100644
index 0000000..bafbbee
--- /dev/null
+++ b/arch/arm64/mach-tegra/htc_awb_cal.c
@@ -0,0 +1,113 @@
+/* arch/arm64/mach-tegra/htc_awb_cal.c
+ * Code to extract Camera AWB calibration information from ATAG
+ * set up by the bootloader.
+*/
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/platform_device.h>
+#include <linux/proc_fs.h>
+#include <asm/setup.h>
+
+#include <linux/syscalls.h>
+#include <linux/of.h>
+
+const char awb_cam_cal_data_path[] = "/calibration_data";
+const char awb_cam_cal_data_tag[] = "cam_awb";
+
+const loff_t awb_cal_front_offset = 0x1000; /* Offset of the front cam data */
+#define AWB_CAL_SIZE 897 /* size of the calibration data */
+
+static ssize_t awb_calibration_get(char *buf, size_t count, loff_t off,
+ bool is_front)
+{
+ int p_size = 0;
+ unsigned char *p_data = NULL;
+ struct device_node *dev_node = NULL;
+
+ dev_node = of_find_node_by_path(awb_cam_cal_data_path);
+ if (dev_node)
+ /* of_get_property will return address of property,
+ * and fill the length to *p_size */
+ p_data = (unsigned char *)of_get_property(dev_node,
+ awb_cam_cal_data_tag,
+ &p_size);
+
+ if (p_data == NULL) {
+ pr_err("%s: Unable to get calibration data\n", __func__);
+ return 0;
+ }
+
+ if (off >= AWB_CAL_SIZE)
+ return 0;
+
+ if (off + count > AWB_CAL_SIZE)
+ count = AWB_CAL_SIZE - off;
+
+ p_data += is_front ? awb_cal_front_offset : 0;
+ memcpy(buf, p_data + off, count);
+
+ return count;
+}
+
+static ssize_t awb_calibration_back_read(struct file *filp,
+ struct kobject *kobj,
+ struct bin_attribute *attr,
+ char *buf, loff_t off, size_t count)
+{
+ return awb_calibration_get(buf, count, off, false);
+}
+
+static ssize_t awb_calibration_front_read(struct file *filp,
+ struct kobject *kobj,
+ struct bin_attribute *attr,
+ char *buf, loff_t off, size_t count)
+{
+ return awb_calibration_get(buf, count, off, true);
+}
+
+static struct bin_attribute factory_back = {
+ .attr.name = "factory_back",
+ .attr.mode = 0444,
+ .size = AWB_CAL_SIZE,
+ .read = awb_calibration_back_read,
+};
+
+static struct bin_attribute factory_front = {
+ .attr.name = "factory_front",
+ .attr.mode = 0444,
+ .size = AWB_CAL_SIZE,
+ .read = awb_calibration_front_read,
+};
+
+static int cam_get_awb_cal(void)
+{
+ int ret;
+ struct kobject *cam_awb_cal;
+
+ cam_awb_cal = kobject_create_and_add("camera_awb_cal", firmware_kobj);
+ if (cam_awb_cal == NULL) {
+ pr_err("%s: failed to create /sys/camera_awb_cal\n",
+ __func__);
+ return -ENOMEM;
+ }
+
+ ret = sysfs_create_bin_file(cam_awb_cal, &factory_back);
+ if (ret) {
+ pr_err("%s: failed to create back camera file\n", __func__);
+ kobject_del(cam_awb_cal);
+ return ret;
+ }
+
+ ret = sysfs_create_bin_file(cam_awb_cal, &factory_front);
+ if (ret) {
+ pr_err("%s: failed to create front camera file\n", __func__);
+ kobject_del(cam_awb_cal);
+ return ret;
+ }
+
+ return 0;
+}
+
+late_initcall(cam_get_awb_cal);
diff --git a/arch/arm64/mach-tegra/panel-j-qxga-8-9.c b/arch/arm64/mach-tegra/panel-j-qxga-8-9.c
new file mode 100644
index 0000000..5dc4aca
--- /dev/null
+++ b/arch/arm64/mach-tegra/panel-j-qxga-8-9.c
@@ -0,0 +1 @@
+#include "../../arm/mach-tegra/panel-j-qxga-8-9.c"
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 6f0ec40..fe9c49d 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -480,6 +480,9 @@
dma_mmu_remap_num++;
}
+__init void iotable_init_va(struct map_desc *io_desc, int nr);
+__init void iotable_init_mapping(struct map_desc *io_desc, int nr);
+
void __init dma_contiguous_remap(void)
{
int i;
@@ -492,10 +495,7 @@
if (start >= end)
continue;
- map.pfn = __phys_to_pfn(start);
- map.virtual = __phys_to_virt(start);
- map.length = end - start;
- map.type = MT_NORMAL_NC;
+ map.type = MT_MEMORY_KERNEL_EXEC;
/*
* Clear previous low-memory mapping
@@ -504,30 +504,20 @@
addr += PMD_SIZE)
pmd_clear(pmd_off_k(addr));
- iotable_init(&map, 1);
+ for (addr = start; addr < end; addr += PAGE_SIZE) {
+ map.pfn = __phys_to_pfn(addr);
+ map.virtual = __phys_to_virt(addr);
+ map.length = PAGE_SIZE;
+ iotable_init_mapping(&map, 1);
+ }
+
+ map.pfn = __phys_to_pfn(start);
+ map.virtual = __phys_to_virt(start);
+ map.length = end - start;
+ iotable_init_va(&map, 1);
}
}
-static int __dma_update_pte(pte_t *pte, pgtable_t token, unsigned long addr,
- void *data)
-{
- struct page *page = virt_to_page(addr);
- pgprot_t prot = *(pgprot_t *)data;
-
- set_pte(pte, mk_pte(page, prot));
- return 0;
-}
-
-static void __dma_remap(struct page *page, size_t size, pgprot_t prot)
-{
- unsigned long start = (unsigned long) page_address(page);
- unsigned end = start + size;
-
- apply_to_page_range(&init_mm, start, size, __dma_update_pte, &prot);
- dsb();
- flush_tlb_kernel_range(start, end);
-}
-
static void *__alloc_remap_buffer(struct device *dev, size_t size, gfp_t gfp,
pgprot_t prot, struct page **ret_page,
const void *caller)
@@ -644,9 +634,6 @@
if (!page)
return NULL;
- __dma_clear_buffer(page, size);
- __dma_remap(page, size, prot);
-
*ret_page = page;
return page_address(page);
}
@@ -654,7 +641,6 @@
static void __free_from_contiguous(struct device *dev, struct page *page,
size_t size)
{
- __dma_remap(page, size, PG_PROT_KERNEL);
dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT);
}
@@ -1485,8 +1471,6 @@
if (!page)
goto error;
- __dma_clear_buffer(page, size);
-
for (i = 0; i < count; i++)
pages[i] = page + i;
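
The `dma_contiguous_remap()` hunk above replaces a single large `iotable_init()` call with a loop that creates one `PAGE_SIZE` mapping per page, then registers the whole VA span once via `iotable_init_va()`. A trivial sketch of the per-page iteration (page size hard-coded for illustration):

```c
#define PAGE_SIZE 4096UL

/* Count how many per-page map descriptors a page-aligned [start, end)
 * range produces when each page is mapped individually, mirroring the
 * "for (addr = start; addr < end; addr += PAGE_SIZE)" loop above. */
static unsigned long pages_in_range(unsigned long start, unsigned long end)
{
	unsigned long addr, n = 0;

	for (addr = start; addr < end; addr += PAGE_SIZE)
		n++;
	return n;
}
```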
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 7f04dc4..9d6e899 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -304,6 +304,31 @@
}
}
+__init void iotable_init_va(struct map_desc *io_desc, int nr)
+{
+ struct map_desc *md;
+ struct vm_struct *vm;
+
+ vm = early_alloc(sizeof(*vm) * nr);
+
+ for (md = io_desc; nr; md++, nr--) {
+ vm->addr = (void *)(md->virtual & PAGE_MASK);
+ vm->size = PAGE_ALIGN(md->length + (md->virtual & ~PAGE_MASK));
+ vm->phys_addr = __pfn_to_phys(md->pfn);
+ vm->flags = VM_IOREMAP;
+ vm->caller = iotable_init;
+ vm_area_add_early(vm++);
+ }
+}
+
+__init void iotable_init_mapping(struct map_desc *io_desc, int nr)
+{
+ struct map_desc *md;
+
+ for (md = io_desc; nr; md++, nr--)
+ create_mapping(md);
+}
+
#ifdef CONFIG_EARLY_PRINTK
/*
* Create an early I/O mapping using the pgd/pmd entries already populated
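
`iotable_init_va()` above rounds the vm area so it covers every byte of the requested mapping: the start is rounded down to a page boundary and the size rounded up by the amount of the sub-page offset. A standalone sketch of that arithmetic (macros reimplemented here for illustration):

```c
#define PAGE_SIZE  4096UL
#define PAGE_MASK  (~(PAGE_SIZE - 1))
#define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & PAGE_MASK)

struct va_span {
	unsigned long addr;
	unsigned long size;
};

/* Mirror of the vm->addr / vm->size computation in iotable_init_va():
 * the resulting span is page-aligned and contains all of
 * [virt, virt + length). */
static struct va_span span_for(unsigned long virt, unsigned long length)
{
	struct va_span s;

	s.addr = virt & PAGE_MASK;
	s.size = PAGE_ALIGN(length + (virt & ~PAGE_MASK));
	return s;
}
```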
diff --git a/arch/parisc/include/uapi/asm/fcntl.h b/arch/parisc/include/uapi/asm/fcntl.h
index 0304b92..cc61c47 100644
--- a/arch/parisc/include/uapi/asm/fcntl.h
+++ b/arch/parisc/include/uapi/asm/fcntl.h
@@ -20,6 +20,7 @@
#define O_INVISIBLE 004000000 /* invisible I/O, for DMAPI/XDSM */
#define O_PATH 020000000
+#define O_TMPFILE 040000000
#define F_GETLK64 8
#define F_SETLK64 9
diff --git a/arch/powerpc/platforms/cell/spufs/inode.c b/arch/powerpc/platforms/cell/spufs/inode.c
index 35f77a4..f390042 100644
--- a/arch/powerpc/platforms/cell/spufs/inode.c
+++ b/arch/powerpc/platforms/cell/spufs/inode.c
@@ -238,7 +238,7 @@
.release = spufs_dir_close,
.llseek = dcache_dir_lseek,
.read = generic_read_dir,
- .readdir = dcache_readdir,
+ .iterate = dcache_readdir,
.fsync = noop_fsync,
};
EXPORT_SYMBOL_GPL(spufs_context_fops);
diff --git a/arch/sparc/include/uapi/asm/fcntl.h b/arch/sparc/include/uapi/asm/fcntl.h
index d0b83f6..d73e5e0 100644
--- a/arch/sparc/include/uapi/asm/fcntl.h
+++ b/arch/sparc/include/uapi/asm/fcntl.h
@@ -35,6 +35,7 @@
#define O_SYNC (__O_SYNC|O_DSYNC)
#define O_PATH 0x1000000
+#define O_TMPFILE 0x2000000
#define F_GETOWN 5 /* for sockets. */
#define F_SETOWN 6 /* for sockets. */
diff --git a/block/partitions/efi.c b/block/partitions/efi.c
index 894174c..23220b9 100644
--- a/block/partitions/efi.c
+++ b/block/partitions/efi.c
@@ -523,6 +523,9 @@
return;
}
+/* Skip searching for the GPT at lastlba */
+#define SKIP_GPT_LASTLBA 1
+
/**
* find_valid_gpt() - Search disk for valid GPT headers and PTEs
* @state
@@ -570,8 +573,10 @@
good_agpt = is_gpt_valid(state,
le64_to_cpu(pgpt->alternate_lba),
&agpt, &aptes);
+#if !SKIP_GPT_LASTLBA
if (!good_agpt && force_gpt)
good_agpt = is_gpt_valid(state, lastlba, &agpt, &aptes);
+#endif
if (!good_agpt && force_gpt && force_gpt_sector)
good_agpt = is_gpt_valid(state, force_gpt_sector, &agpt, &aptes);
diff --git a/build.config b/build.config
new file mode 100644
index 0000000..9858768
--- /dev/null
+++ b/build.config
@@ -0,0 +1,13 @@
+ARCH=arm64
+BRANCH=android-tegra-flounder-3.10
+CROSS_COMPILE=aarch64-linux-android-
+DEFCONFIG=flounder_defconfig
+EXTRA_CMDS=''
+KERNEL_DIR=private/flounder
+LINUX_GCC_CROSS_COMPILE_PREBUILTS_BIN=prebuilts/gcc/linux-x86/aarch64/aarch64-linux-android-4.9/bin
+FILES="
+arch/arm64/boot/Image
+arch/arm64/boot/Image.gz-dtb
+vmlinux
+System.map
+"
diff --git a/drivers/Kconfig b/drivers/Kconfig
index 5c445c2..4a9cde5 100644
--- a/drivers/Kconfig
+++ b/drivers/Kconfig
@@ -176,6 +176,8 @@
source "drivers/gpu/nvgpu/Kconfig"
+source "drivers/htc_debug/stability/Kconfig"
+
source "drivers/android/Kconfig"
endmenu
diff --git a/drivers/Makefile b/drivers/Makefile
index c06e1c1..b105b54 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -159,4 +159,7 @@
obj-$(CONFIG_IPACK_BUS) += ipack/
obj-$(CONFIG_NTB) += ntb/
obj-$(CONFIG_MIPI_BIF) += mipi_bif/
+
+# HTC stability files
+obj-y += htc_debug/stability/
obj-$(CONFIG_ANDROID) += android/
diff --git a/drivers/base/dma-coherent.c b/drivers/base/dma-coherent.c
index 2bb4577..12bbd55 100644
--- a/drivers/base/dma-coherent.c
+++ b/drivers/base/dma-coherent.c
@@ -13,38 +13,41 @@
#include <linux/dma-contiguous.h>
#include <linux/debugfs.h>
#include <linux/highmem.h>
-#include <asm/cacheflush.h>
+#include <linux/kthread.h>
+#define RESIZE_MAGIC 0xC11A900d
struct heap_info {
+ int magic;
char *name;
- /* number of devices pointed by devs */
- unsigned int num_devs;
- /* devs to manage cma/coherent memory allocs, if resize allowed */
- struct device *devs;
+ /* number of memory chunks to manage */
+ unsigned int num_chunks;
+ /* dev to manage cma/coherent memory allocs, if resize allowed */
+ struct device dev;
/* device to allocate memory from cma */
struct device *cma_dev;
/* lock to synchronise heap resizing */
struct mutex resize_lock;
/* CMA chunk size if resize supported */
size_t cma_chunk_size;
- /* heap base */
- phys_addr_t base;
- /* heap size */
- size_t len;
+ /* heap current base */
+ phys_addr_t curr_base;
+ /* heap current length */
+ size_t curr_len;
+ /* heap lowest base */
phys_addr_t cma_base;
+ /* heap max length */
size_t cma_len;
size_t rem_chunk_size;
struct dentry *dma_debug_root;
int (*update_resize_cfg)(phys_addr_t , size_t);
+ /* The timer used to wakeup the shrink thread */
+ struct timer_list shrink_timer;
+ /* Pointer to the current shrink thread for this resizable heap */
+ struct task_struct *task;
+ unsigned long shrink_interval;
+ size_t floor_size;
};
-#define DMA_RESERVED_COUNT 8
-static struct dma_coherent_reserved {
- const struct device *dev;
-} dma_coherent_reserved[DMA_RESERVED_COUNT];
-
-static unsigned dma_coherent_reserved_count;
-
#ifdef CONFIG_ARM_DMA_IOMMU_ALIGNMENT
#define DMA_BUF_ALIGNMENT CONFIG_ARM_DMA_IOMMU_ALIGNMENT
#else
@@ -60,16 +63,24 @@
unsigned long *bitmap;
};
+static void shrink_timeout(unsigned long __data);
+static int shrink_thread(void *arg);
+static void shrink_resizable_heap(struct heap_info *h);
+static int heap_resize_locked(struct heap_info *h);
+#define RESIZE_DEFAULT_SHRINK_AGE 3
+
static bool dma_is_coherent_dev(struct device *dev)
{
- int i;
- struct dma_coherent_reserved *r = dma_coherent_reserved;
+ struct heap_info *h;
- for (i = 0; i < dma_coherent_reserved_count; i++, r++) {
- if (dev == r->dev)
- return true;
- }
- return false;
+ if (!dev)
+ return false;
+ h = dev_get_drvdata(dev);
+ if (!h)
+ return false;
+ if (h->magic != RESIZE_MAGIC)
+ return false;
+ return true;
}
static void dma_debugfs_init(struct device *dev, struct heap_info *heap)
{
@@ -81,10 +92,10 @@
}
}
- debugfs_create_x32("base", S_IRUGO,
- heap->dma_debug_root, (u32 *)&heap->base);
- debugfs_create_x32("size", S_IRUGO,
- heap->dma_debug_root, (u32 *)&heap->len);
+ debugfs_create_x32("curr_base", S_IRUGO,
+ heap->dma_debug_root, (u32 *)&heap->curr_base);
+ debugfs_create_x32("curr_size", S_IRUGO,
+ heap->dma_debug_root, (u32 *)&heap->curr_len);
debugfs_create_x32("cma_base", S_IRUGO,
heap->dma_debug_root, (u32 *)&heap->cma_base);
debugfs_create_x32("cma_size", S_IRUGO,
@@ -92,23 +103,35 @@
debugfs_create_x32("cma_chunk_size", S_IRUGO,
heap->dma_debug_root, (u32 *)&heap->cma_chunk_size);
debugfs_create_x32("num_cma_chunks", S_IRUGO,
- heap->dma_debug_root, (u32 *)&heap->num_devs);
+ heap->dma_debug_root, (u32 *)&heap->num_chunks);
+ debugfs_create_x32("floor_size", S_IRUGO,
+ heap->dma_debug_root, (u32 *)&heap->floor_size);
}
-static struct device *dma_create_dma_devs(const char *name, int num_devs)
+int dma_set_resizable_heap_floor_size(struct device *dev, size_t floor_size)
{
- int idx = 0;
- struct device *devs;
+ int ret = 0;
+ struct heap_info *h = NULL;
- devs = kzalloc(num_devs * sizeof(*devs), GFP_KERNEL);
- if (!devs)
- return NULL;
+ if (!dma_is_coherent_dev(dev))
+ return -ENODEV;
- for (idx = 0; idx < num_devs; idx++)
- dev_set_name(&devs[idx], "%s-heap-%d", name, idx);
+ h = dev_get_drvdata(dev);
+ if (!h)
+ return -ENOENT;
- return devs;
+ mutex_lock(&h->resize_lock);
+ h->floor_size = floor_size > h->cma_len ? h->cma_len : floor_size;
+ while (!ret && h->curr_len < h->floor_size)
+ ret = heap_resize_locked(h);
+ if (h->task)
+ mod_timer(&h->shrink_timer, jiffies + h->shrink_interval);
+ mutex_unlock(&h->resize_lock);
+ if (!h->task)
+ shrink_resizable_heap(h);
+ return ret;
}
+EXPORT_SYMBOL(dma_set_resizable_heap_floor_size);
int dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
dma_addr_t device_addr, size_t size, int flags)
@@ -192,13 +215,7 @@
int err = 0;
struct heap_info *heap_info = NULL;
struct dma_contiguous_stats stats;
- struct dma_coherent_reserved *r =
- &dma_coherent_reserved[dma_coherent_reserved_count];
-
- if (dma_coherent_reserved_count == ARRAY_SIZE(dma_coherent_reserved)) {
- pr_err("Not enough slots for DMA Coherent reserved regions!\n");
- return -ENOSPC;
- }
+ struct task_struct *shrink_task = NULL;
if (!dev || !dma_info || !dma_info->name || !dma_info->cma_dev)
return -EINVAL;
@@ -207,6 +224,7 @@
if (!heap_info)
return -ENOMEM;
+ heap_info->magic = RESIZE_MAGIC;
heap_info->name = kmalloc(strlen(dma_info->name) + 1, GFP_KERNEL);
if (!heap_info->name) {
kfree(heap_info);
@@ -219,9 +237,10 @@
strcpy(heap_info->name, dma_info->name);
dev_set_name(dev, "dma-%s", heap_info->name);
heap_info->cma_dev = dma_info->cma_dev;
- heap_info->cma_chunk_size = dma_info->size;
+ heap_info->cma_chunk_size = dma_info->size ? : stats.size;
heap_info->cma_base = stats.base;
heap_info->cma_len = stats.size;
+ heap_info->curr_base = stats.base;
dev_set_name(heap_info->cma_dev, "cma-%s-heap", heap_info->name);
mutex_init(&heap_info->resize_lock);
@@ -232,33 +251,48 @@
goto fail;
}
- heap_info->num_devs = div_u64_rem(heap_info->cma_len,
+ heap_info->num_chunks = div_u64_rem(heap_info->cma_len,
(u32)heap_info->cma_chunk_size, (u32 *)&heap_info->rem_chunk_size);
if (heap_info->rem_chunk_size) {
- heap_info->num_devs++;
+ heap_info->num_chunks++;
dev_info(dev, "heap size is not multiple of cma_chunk_size "
- "heap_info->num_devs (%d) rem_chunk_size(0x%zx)\n",
- heap_info->num_devs, heap_info->rem_chunk_size);
+ "heap_info->num_chunks (%d) rem_chunk_size(0x%zx)\n",
+ heap_info->num_chunks, heap_info->rem_chunk_size);
} else
heap_info->rem_chunk_size = heap_info->cma_chunk_size;
- heap_info->devs = dma_create_dma_devs(heap_info->name,
- heap_info->num_devs);
- if (!heap_info->devs) {
- dev_err(dev, "failed to alloc devices\n");
- err = -ENOMEM;
- goto fail;
- }
+
+ dev_set_name(&heap_info->dev, "%s-heap", heap_info->name);
+
if (dma_info->notifier.ops)
heap_info->update_resize_cfg =
dma_info->notifier.ops->resize;
- r->dev = dev;
- dma_coherent_reserved_count++;
-
dev_set_drvdata(dev, heap_info);
dma_debugfs_init(dev, heap_info);
+
+ if (declare_coherent_heap(&heap_info->dev,
+ heap_info->cma_base, heap_info->cma_len))
+ goto declare_fail;
+ heap_info->dev.dma_mem->size = 0;
+ heap_info->shrink_interval = HZ * RESIZE_DEFAULT_SHRINK_AGE;
+ shrink_task = kthread_run(shrink_thread, heap_info, "%s-shrink_thread",
+ heap_info->name);
+ /*
+ * Set up an interval timer which can be used to trigger a shrink
+ * wakeup after the shrink interval expires
+ */
+ if (!IS_ERR(shrink_task)) {
+ setup_timer(&heap_info->shrink_timer, shrink_timeout,
+ (unsigned long)shrink_task);
+ heap_info->task = shrink_task;
+ }
+
+ if (dma_info->notifier.ops && dma_info->notifier.ops->resize)
+ dma_contiguous_enable_replace_pages(dma_info->cma_dev);
pr_info("resizable cma heap=%s create successful", heap_info->name);
return 0;
+declare_fail:
+ kfree(heap_info->name);
fail:
kfree(heap_info);
return err;
@@ -305,75 +339,60 @@
size_t count = PAGE_ALIGN(len) >> PAGE_SHIFT;
dma_release_from_contiguous(h->cma_dev, page, count);
+ dev_dbg(h->cma_dev, "released at base (0x%pa) size (0x%zx)\n",
+ &base, len);
}
static void get_first_and_last_idx(struct heap_info *h,
int *first_alloc_idx, int *last_alloc_idx)
{
- int idx;
- struct device *d;
-
- *first_alloc_idx = -1;
- *last_alloc_idx = h->num_devs;
-
- for (idx = 0; idx < h->num_devs; idx++) {
- d = &h->devs[idx];
- if (d->dma_mem) {
- if (*first_alloc_idx == -1)
- *first_alloc_idx = idx;
- *last_alloc_idx = idx;
- }
+ if (!h->curr_len) {
+ *first_alloc_idx = -1;
+ *last_alloc_idx = h->num_chunks;
+ } else {
+ *first_alloc_idx = div_u64(h->curr_base - h->cma_base,
+ h->cma_chunk_size);
+ *last_alloc_idx = div_u64(h->curr_base - h->cma_base +
+ h->curr_len + h->cma_chunk_size -
+ h->rem_chunk_size,
+ h->cma_chunk_size) - 1;
}
}
-static void update_heap_base_len(struct heap_info *h)
+static void update_alloc_range(struct heap_info *h)
{
- int idx;
- struct device *d;
- phys_addr_t base = 0;
- size_t len = 0;
-
- for (idx = 0; idx < h->num_devs; idx++) {
- d = &h->devs[idx];
- if (d->dma_mem) {
- if (!base)
- base = idx * h->cma_chunk_size + h->cma_base;
- len += (idx == h->num_devs - 1) ?
- h->rem_chunk_size : h->cma_chunk_size;
- }
- }
-
- h->base = base;
- h->len = len;
+ if (!h->curr_len)
+ h->dev.dma_mem->size = 0;
+ else
+ h->dev.dma_mem->size = (h->curr_base - h->cma_base +
+ h->curr_len) >> PAGE_SHIFT;
}
static int heap_resize_locked(struct heap_info *h)
{
- int i;
int err = 0;
phys_addr_t base = -1;
size_t len = h->cma_chunk_size;
- phys_addr_t prev_base = h->base;
- size_t prev_len = h->len;
+ phys_addr_t prev_base = h->curr_base;
+ size_t prev_len = h->curr_len;
int alloc_at_idx = 0;
int first_alloc_idx;
int last_alloc_idx;
- phys_addr_t start_addr = 0;
+ phys_addr_t start_addr = h->cma_base;
get_first_and_last_idx(h, &first_alloc_idx, &last_alloc_idx);
pr_debug("req resize, fi=%d,li=%d\n", first_alloc_idx, last_alloc_idx);
/* All chunks are in use. Can't grow it. */
- if (first_alloc_idx == 0 && last_alloc_idx == h->num_devs - 1)
+ if (first_alloc_idx == 0 && last_alloc_idx == h->num_chunks - 1)
return -ENOMEM;
- /* All chunks are free. Can allocate anywhere in CMA with
- * cma_chunk_size alignment.
- */
+ /* All chunks are free. Attempt to allocate the first chunk. */
if (first_alloc_idx == -1) {
base = alloc_from_contiguous_heap(h, start_addr, len);
- if (!dma_mapping_error(h->cma_dev, base))
+ if (base == start_addr)
goto alloc_success;
+ BUG_ON(!dma_mapping_error(h->cma_dev, base));
}
/* Free chunk before previously allocated chunk. Attempt
@@ -389,9 +408,9 @@
}
/* Free chunk after previously allocated chunk. */
- if (last_alloc_idx < h->num_devs - 1) {
+ if (last_alloc_idx < h->num_chunks - 1) {
alloc_at_idx = last_alloc_idx + 1;
- len = (alloc_at_idx == h->num_devs - 1) ?
+ len = (alloc_at_idx == h->num_chunks - 1) ?
h->rem_chunk_size : h->cma_chunk_size;
start_addr = alloc_at_idx * h->cma_chunk_size + h->cma_base;
base = alloc_from_contiguous_heap(h, start_addr, len);
@@ -401,61 +420,45 @@
}
if (dma_mapping_error(h->cma_dev, base))
- dev_err(&h->devs[alloc_at_idx],
+ dev_err(&h->dev,
"Failed to allocate contiguous memory on heap grow req\n");
return -ENOMEM;
alloc_success:
- if (declare_coherent_heap(&h->devs[alloc_at_idx], base, len)) {
- dev_err(&h->devs[alloc_at_idx],
- "Failed to declare coherent memory\n");
- goto fail_declare;
- }
-
- for (i = 0; i < len >> PAGE_SHIFT; i++) {
- struct page *page = phys_to_page(i + base);
-
- if (PageHighMem(page)) {
- void *ptr = kmap_atomic(page);
- dmac_flush_range(ptr, ptr + PAGE_SIZE);
- kunmap_atomic(ptr);
- } else {
- void *ptr = page_address(page);
- dmac_flush_range(ptr, ptr + PAGE_SIZE);
- }
- }
-
- update_heap_base_len(h);
+ if (!h->curr_len || h->curr_base > base)
+ h->curr_base = base;
+ h->curr_len += len;
/* Handle VPR configuration updates*/
if (h->update_resize_cfg) {
- err = h->update_resize_cfg(h->base, h->len);
+ err = h->update_resize_cfg(h->curr_base, h->curr_len);
if (err) {
- dev_err(&h->devs[alloc_at_idx], "Failed to update heap resize\n");
+ dev_err(&h->dev, "Failed to update heap resize\n");
goto fail_update;
}
+ dev_dbg(&h->dev, "update vpr base to %pa, size=%zx\n",
+ &h->curr_base, h->curr_len);
}
- dev_dbg(&h->devs[alloc_at_idx],
+ update_alloc_range(h);
+ dev_dbg(&h->dev,
"grow heap base from=0x%pa to=0x%pa,"
" len from=0x%zx to=0x%zx\n",
- &prev_base, &h->base, prev_len, h->len);
+ &prev_base, &h->curr_base, prev_len, h->curr_len);
return 0;
fail_update:
- dma_release_declared_memory(&h->devs[alloc_at_idx]);
-fail_declare:
release_from_contiguous_heap(h, base, len);
- h->base = prev_base;
- h->len = prev_len;
+ h->curr_base = prev_base;
+ h->curr_len = prev_len;
return -ENOMEM;
}
/* retval: !0 on success, 0 on failure */
-static int dma_alloc_from_coherent_dev(struct device *dev, ssize_t size,
+static int dma_alloc_from_coherent_dev_at(struct device *dev, ssize_t size,
dma_addr_t *dma_handle, void **ret,
- struct dma_attrs *attrs)
+ struct dma_attrs *attrs, ulong start)
{
struct dma_coherent_mem *mem;
int order = get_order(size);
@@ -486,7 +489,7 @@
count = 1 << order;
pageno = bitmap_find_next_zero_area(mem->bitmap, mem->size,
- 0, count, align);
+ start, count, align);
if (pageno >= mem->size)
goto err;
@@ -513,14 +516,20 @@
return mem->flags & DMA_MEMORY_EXCLUSIVE;
}
+static int dma_alloc_from_coherent_dev(struct device *dev, ssize_t size,
+ dma_addr_t *dma_handle, void **ret,
+ struct dma_attrs *attrs)
+{
+ return dma_alloc_from_coherent_dev_at(dev, size, dma_handle,
+ ret, attrs, 0);
+}
+
/* retval: !0 on success, 0 on failure */
static int dma_alloc_from_coherent_heap_dev(struct device *dev, size_t len,
dma_addr_t *dma_handle, void **ret,
struct dma_attrs *attrs)
{
- int idx;
struct heap_info *h = NULL;
- struct device *d;
*dma_handle = DMA_ERROR_CODE;
if (!dma_is_coherent_dev(dev))
@@ -529,22 +538,18 @@
h = dev_get_drvdata(dev);
BUG_ON(!h);
if (!h)
- return 1;
+ return DMA_MEMORY_EXCLUSIVE;
dma_set_attr(DMA_ATTR_ALLOC_EXACT_SIZE, attrs);
mutex_lock(&h->resize_lock);
retry_alloc:
/* Try allocation from already existing CMA chunks */
- for (idx = 0; idx < h->num_devs; idx++) {
- d = &h->devs[idx];
- if (!d->dma_mem)
- continue;
- if (dma_alloc_from_coherent_dev(
- d, len, dma_handle, ret, attrs)) {
- dev_dbg(d, "allocated addr 0x%pa len 0x%zx\n",
- dma_handle, len);
- goto out;
- }
+ if (dma_alloc_from_coherent_dev_at(
+ &h->dev, len, dma_handle, ret, attrs,
+ (h->curr_base - h->cma_base) >> PAGE_SHIFT)) {
+ dev_dbg(&h->dev, "allocated addr 0x%pa len 0x%zx\n",
+ dma_handle, len);
+ goto out;
}
if (!heap_resize_locked(h))
@@ -593,13 +598,7 @@
{
int idx = 0;
int err = 0;
- int resize_err = 0;
- void *ret = NULL;
- dma_addr_t dev_base;
struct heap_info *h = NULL;
- size_t chunk_size;
- int first_alloc_idx;
- int last_alloc_idx;
if (!dma_is_coherent_dev(dev))
return 0;
@@ -618,84 +617,143 @@
dma_set_attr(DMA_ATTR_ALLOC_EXACT_SIZE, attrs);
mutex_lock(&h->resize_lock);
-
idx = div_u64((uintptr_t)base - h->cma_base, h->cma_chunk_size);
- dev_dbg(&h->devs[idx], "req free addr (%p) size (0x%zx) idx (%d)\n",
+ dev_dbg(&h->dev, "req free addr (%p) size (0x%zx) idx (%d)\n",
base, len, idx);
- err = dma_release_from_coherent_dev(&h->devs[idx], len, base, attrs);
+ err = dma_release_from_coherent_dev(&h->dev, len, base, attrs);
+ /* err = 0 on failure, !0 on successful release */
+ if (err && h->task)
+ mod_timer(&h->shrink_timer, jiffies + h->shrink_interval);
+ mutex_unlock(&h->resize_lock);
- if (!err)
- goto out_unlock;
+ if (err && !h->task)
+ shrink_resizable_heap(h);
+ return err;
+}
+
+static bool shrink_chunk_locked(struct heap_info *h, int idx)
+{
+ size_t chunk_size;
+ int resize_err;
+ void *ret = NULL;
+ dma_addr_t dev_base;
+ struct dma_attrs attrs;
+
+ dma_set_attr(DMA_ATTR_ALLOC_EXACT_SIZE, &attrs);
+ /* check if entire chunk is free */
+ chunk_size = (idx == h->num_chunks - 1) ? h->rem_chunk_size :
+ h->cma_chunk_size;
+ resize_err = dma_alloc_from_coherent_dev_at(&h->dev,
+ chunk_size, &dev_base, &ret, &attrs,
+ idx * h->cma_chunk_size >> PAGE_SHIFT);
+ if (!resize_err) {
+ goto out;
+ } else if (dev_base != h->cma_base + idx * h->cma_chunk_size) {
+ resize_err = dma_release_from_coherent_dev(
+ &h->dev, chunk_size,
+ (void *)(uintptr_t)dev_base, &attrs);
+ BUG_ON(!resize_err);
+ goto out;
+ } else {
+ dev_dbg(&h->dev,
+ "prep to remove chunk b=0x%pa, s=0x%zx\n",
+ &dev_base, chunk_size);
+ resize_err = dma_release_from_coherent_dev(
+ &h->dev, chunk_size,
+ (void *)(uintptr_t)dev_base, &attrs);
+ BUG_ON(!resize_err);
+ if (!resize_err) {
+ dev_err(&h->dev, "failed to rel mem\n");
+ goto out;
+ }
+
+ /* Handle VPR configuration updates */
+ if (h->update_resize_cfg) {
+ phys_addr_t new_base = h->curr_base;
+ size_t new_len = h->curr_len - chunk_size;
+ if (h->curr_base == dev_base)
+ new_base += chunk_size;
+ dev_dbg(&h->dev, "update vpr base to %pa, size=%zx\n",
+ &new_base, new_len);
+ resize_err =
+ h->update_resize_cfg(new_base, new_len);
+ if (resize_err) {
+ dev_err(&h->dev,
+ "update resize failed\n");
+ goto out;
+ }
+ }
+
+ if (h->curr_base == dev_base)
+ h->curr_base += chunk_size;
+ h->curr_len -= chunk_size;
+ update_alloc_range(h);
+ release_from_contiguous_heap(h, dev_base, chunk_size);
+ dev_dbg(&h->dev, "removed chunk b=0x%pa, s=0x%zx"
+ " new heap b=0x%pa, s=0x%zx\n", &dev_base,
+ chunk_size, &h->curr_base, h->curr_len);
+ return true;
+ }
+out:
+ return false;
+}
+
+static void shrink_resizable_heap(struct heap_info *h)
+{
+ bool unlock = false;
+ int first_alloc_idx, last_alloc_idx;
check_next_chunk:
- get_first_and_last_idx(h, &first_alloc_idx, &last_alloc_idx);
-
- /* Check if heap can be shrinked */
- if (idx == first_alloc_idx || idx == last_alloc_idx) {
- /* check if entire chunk is free */
- if (idx == h->num_devs - 1)
- chunk_size = h->rem_chunk_size;
- else
- chunk_size = h->cma_chunk_size;
-
- resize_err = dma_alloc_from_coherent_dev(&h->devs[idx],
- chunk_size,
- &dev_base, &ret, attrs);
- if (!resize_err)
- goto out_unlock;
- else {
- dev_dbg(&h->devs[idx],
- "prep to remove chunk b=0x%pa, s=0x%zx\n",
- &dev_base, chunk_size);
- resize_err = dma_release_from_coherent_dev(
- &h->devs[idx], chunk_size,
- (void *)(uintptr_t)dev_base, attrs);
- if (!resize_err) {
- dev_err(&h->devs[idx], "failed to rel mem\n");
- goto out_unlock;
- }
-
- dma_release_declared_memory(&h->devs[idx]);
- BUG_ON(h->devs[idx].dma_mem != NULL);
- update_heap_base_len(h);
-
- /* Handle VPR configuration updates */
- if (h->update_resize_cfg) {
- resize_err =
- h->update_resize_cfg(h->base, h->len);
- if (resize_err) {
- dev_err(&h->devs[idx],
- "update resize failed\n");
- /* On update failure re-declare heap */
- resize_err = declare_coherent_heap(
- &h->devs[idx], dev_base,
- chunk_size);
- if (resize_err) {
- /* on declare coherent failure
- * release heap chunk
- */
- release_from_contiguous_heap(h,
- dev_base, chunk_size);
- dev_err(&h->devs[idx],
- "declare failed\n");
- } else
- update_heap_base_len(h);
- goto out_unlock;
- }
- }
-
- idx == first_alloc_idx ? ++idx : --idx;
- release_from_contiguous_heap(h, dev_base, chunk_size);
- dev_dbg(&h->devs[idx], "removed chunk b=0x%pa, s=0x%zx"
- "new heap b=0x%pa, s=0x%zx",
- &dev_base, chunk_size, &h->base, h->len);
- }
- if (idx < h->num_devs)
- goto check_next_chunk;
+ if (unlock) {
+ mutex_unlock(&h->resize_lock);
+ cond_resched();
}
+ mutex_lock(&h->resize_lock);
+ unlock = true;
+ if (h->curr_len <= h->floor_size)
+ goto out_unlock;
+ get_first_and_last_idx(h, &first_alloc_idx, &last_alloc_idx);
+ /* All chunks are free. Exit. */
+ if (first_alloc_idx == -1)
+ goto out_unlock;
+ if (shrink_chunk_locked(h, first_alloc_idx))
+ goto check_next_chunk;
+ /* Only one chunk is in use. */
+ if (first_alloc_idx == last_alloc_idx)
+ goto out_unlock;
+ if (shrink_chunk_locked(h, last_alloc_idx))
+ goto check_next_chunk;
+
out_unlock:
mutex_unlock(&h->resize_lock);
- return err;
+}
+
+/*
+ * Timer callback used to wake the resizable-heap shrink thread
+ */
+
+static void shrink_timeout(unsigned long __data)
+{
+ struct task_struct *p = (struct task_struct *) __data;
+
+ wake_up_process(p);
+}
+
+static int shrink_thread(void *arg)
+{
+ struct heap_info *h = arg;
+
+ while (1) {
+ if (kthread_should_stop())
+ break;
+
+ shrink_resizable_heap(h);
+ /* resize done; go to sleep */
+ set_current_state(TASK_INTERRUPTIBLE);
+ schedule();
+ }
+
+ return 0;
}
void dma_release_declared_memory(struct device *dev)
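
The rewritten `get_first_and_last_idx()` above derives the first and last allocated chunk indices from `curr_base`/`curr_len` by plain division, with a correction because the final chunk may be a short "remainder" chunk. A standalone sketch of that arithmetic (struct and values illustrative; the empty-heap case is collapsed to `-1/-1` here, whereas the kernel code reports `-1` and `num_chunks`):

```c
struct heap {
	unsigned long cma_base;        /* base of the whole CMA region */
	unsigned long cma_chunk_size;  /* size of every chunk but the last */
	unsigned long rem_chunk_size;  /* size of the final chunk */
	unsigned long curr_base;       /* base of the in-use span */
	unsigned long curr_len;        /* length of the in-use span */
};

/* first = offset / chunk_size; last rounds the end of the span up to a
 * chunk boundary, compensating for the short remainder chunk, then
 * converts back to a zero-based index. */
static void first_last_idx(const struct heap *h, long *first, long *last)
{
	if (!h->curr_len) {
		*first = -1;
		*last = -1;
		return;
	}
	*first = (h->curr_base - h->cma_base) / h->cma_chunk_size;
	*last = (h->curr_base - h->cma_base + h->curr_len +
		 h->cma_chunk_size - h->rem_chunk_size) /
		h->cma_chunk_size - 1;
}
```

For example, a 10 MiB region split into 4+4+2 MiB chunks with only the 2 MiB remainder allocated yields first = last = 2.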
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index 4645fe3..907937a 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -22,6 +22,7 @@
#include <asm/page.h>
#include <asm/dma-contiguous.h>
+#include <linux/buffer_head.h>
#include <linux/memblock.h>
#include <linux/err.h>
#include <linux/mm.h>
@@ -32,6 +33,11 @@
#include <linux/swap.h>
#include <linux/mm_types.h>
#include <linux/dma-contiguous.h>
+#include <linux/dma-mapping.h>
+
+#include <asm/tlbflush.h>
+#include <asm/cacheflush.h>
+#include <asm/outercache.h>
struct cma {
unsigned long base_pfn;
@@ -196,6 +202,7 @@
struct device *dev;
} cma_reserved[MAX_CMA_AREAS] __initdata;
static unsigned cma_reserved_count __initdata;
+static unsigned long cma_total_pages;
static int __init cma_init_reserved_areas(void)
{
@@ -281,6 +288,7 @@
r->size = size;
r->dev = dev;
cma_reserved_count++;
+ cma_total_pages += ((unsigned long)size / PAGE_SIZE);
pr_info("CMA: reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
(unsigned long)base);
@@ -292,6 +300,51 @@
return base;
}
+unsigned long cma_get_total_pages(void)
+{
+ return cma_total_pages;
+}
+
+static int __dma_update_pte(pte_t *pte, pgtable_t token, unsigned long addr,
+ void *data)
+{
+ struct page *page = virt_to_page(addr);
+ pgprot_t prot = *(pgprot_t *)data;
+
+ set_pte(pte, mk_pte(page, prot));
+ return 0;
+}
+
+static void __dma_remap(struct page *page, size_t size, pgprot_t prot)
+{
+ unsigned long start = (unsigned long) page_address(page);
+ unsigned long end = start + size;
+ int err;
+
+ err = apply_to_page_range(&init_mm, start,
+ size, __dma_update_pte, &prot);
+ if (err)
+ pr_err("***%s: error=%d, pfn=%lx\n", __func__,
+ err, page_to_pfn(page));
+ dsb();
+ flush_tlb_kernel_range(start, end);
+}
+
+static void __dma_clear_buffer(struct page *page, size_t size)
+{
+ void *ptr;
+ /*
+ * Ensure that the allocated pages are zeroed, and that any data
+ * lurking in the kernel direct-mapped region is invalidated.
+ */
+ ptr = page_address(page);
+ if (ptr) {
+ memset(ptr, 0, size);
+ dmac_flush_range(ptr, ptr + size);
+ outer_flush_range(__pa(ptr), __pa(ptr) + size);
+ }
+}
+
struct page *dma_alloc_at_from_contiguous(struct device *dev, int count,
unsigned int align, phys_addr_t at_addr)
{
@@ -322,17 +375,24 @@
mutex_lock(&cma_mutex);
for (;;) {
+ unsigned long timeout = jiffies + msecs_to_jiffies(8000);
+
pageno = bitmap_find_next_zero_area(cma->bitmap, cma->count,
start, count, mask);
if (pageno >= cma->count || (start && start != pageno))
break;
pfn = cma->base_pfn + pageno;
+retry:
ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA);
if (ret == 0) {
bitmap_set(cma->bitmap, pageno, count);
page = pfn_to_page(pfn);
break;
+ } else if (start && time_before(jiffies, timeout)) {
+ cond_resched();
+ invalidate_bh_lrus();
+ goto retry;
} else if (ret != -EBUSY || start) {
break;
}
@@ -344,6 +404,11 @@
mutex_unlock(&cma_mutex);
pr_debug("%s(): returned %p\n", __func__, page);
+ if (page) {
+ __dma_remap(page, count << PAGE_SHIFT,
+ pgprot_dmacoherent(PAGE_KERNEL));
+ __dma_clear_buffer(page, count << PAGE_SHIFT);
+ }
return page;
}
@@ -392,6 +457,8 @@
VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
+ __dma_remap(pages, count << PAGE_SHIFT, PAGE_KERNEL_EXEC);
+
mutex_lock(&cma_mutex);
bitmap_clear(cma->bitmap, pfn - cma->base_pfn, count);
free_contig_range(pfn, count);
@@ -419,3 +486,61 @@
return 0;
}
+
+#define MAX_REPLACE_DEV 16
+static struct device *replace_dev_list[MAX_REPLACE_DEV];
+static atomic_t replace_dev_count;
+
+bool dma_contiguous_should_replace_page(struct page *page)
+{
+ int i;
+ ulong pfn;
+ struct cma *cma;
+ struct device *dev;
+ int count = atomic_read(&replace_dev_count);
+
+ if (!page)
+ return false;
+ pfn = page_to_pfn(page);
+
+ for (i = 0; i < count; i++) {
+ dev = replace_dev_list[i];
+ if (!dev)
+ continue;
+ cma = dev->cma_area;
+ if (!cma)
+ continue;
+ if (pfn >= cma->base_pfn &&
+ pfn < cma->base_pfn + cma->count)
+ return true;
+ }
+
+ return false;
+}
+
+/* Enable replacing pages during get_user_pages.
+ * Any reference count held on a CMA page by get_user_pages
+ * makes the page unmigratable and can cause CMA allocation
+ * failure. Enabling replacement forces get_user_pages to
+ * substitute non-CMA pages for the CMA pages it touches.
+ */
+int dma_contiguous_enable_replace_pages(struct device *dev)
+{
+ int idx;
+ struct cma *cma;
+
+ if (!dev)
+ return -EINVAL;
+
+ idx = atomic_inc_return(&replace_dev_count);
+ if (idx > MAX_REPLACE_DEV)
+ return -EINVAL;
+ replace_dev_list[idx - 1] = dev;
+ cma = dev->cma_area;
+ if (cma) {
+ pr_info("enabled page replacement for spfn=%lx, epfn=%lx\n",
+ cma->base_pfn, cma->base_pfn + cma->count);
+ }
+ return 0;
+}
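
`dma_contiguous_should_replace_page()` above reduces to a pfn membership test over the registered CMA areas: a page needs replacement iff its pfn lies in some area's `[base_pfn, base_pfn + count)`. A minimal sketch of that check (struct and values illustrative):

```c
#include <stdbool.h>

struct cma_area {
	unsigned long base_pfn;
	unsigned long count;
};

/* True iff pfn falls inside any of the n registered areas;
 * note the half-open interval: base_pfn + count is excluded. */
static bool pfn_in_areas(unsigned long pfn,
			 const struct cma_area *areas, int n)
{
	int i;

	for (i = 0; i < n; i++)
		if (pfn >= areas[i].base_pfn &&
		    pfn < areas[i].base_pfn + areas[i].count)
			return true;
	return false;
}
```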
diff --git a/drivers/base/power/wakeup.c b/drivers/base/power/wakeup.c
index 77d4aed..36f1e37 100644
--- a/drivers/base/power/wakeup.c
+++ b/drivers/base/power/wakeup.c
@@ -56,6 +56,11 @@
static ktime_t last_read_time;
+static struct wakeup_source deleted_ws = {
+ .name = "deleted",
+ .lock = __SPIN_LOCK_UNLOCKED(deleted_ws.lock),
+};
+
/**
* wakeup_source_prepare - Prepare a new wakeup source for initialization.
* @ws: Wakeup source to prepare.
@@ -107,6 +112,34 @@
}
EXPORT_SYMBOL_GPL(wakeup_source_drop);
+/*
+ * Fold the statistics of a wakeup_source being deleted into a dummy
+ * "deleted" wakeup_source.
+ */
+static void wakeup_source_record(struct wakeup_source *ws)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&deleted_ws.lock, flags);
+
+ if (ws->event_count) {
+ deleted_ws.total_time =
+ ktime_add(deleted_ws.total_time, ws->total_time);
+ deleted_ws.prevent_sleep_time =
+ ktime_add(deleted_ws.prevent_sleep_time,
+ ws->prevent_sleep_time);
+ deleted_ws.max_time =
+ ktime_compare(deleted_ws.max_time, ws->max_time) > 0 ?
+ deleted_ws.max_time : ws->max_time;
+ deleted_ws.event_count += ws->event_count;
+ deleted_ws.active_count += ws->active_count;
+ deleted_ws.relax_count += ws->relax_count;
+ deleted_ws.expire_count += ws->expire_count;
+ deleted_ws.wakeup_count += ws->wakeup_count;
+ }
+
+ spin_unlock_irqrestore(&deleted_ws.lock, flags);
+}
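wakeup_source_record() merges a dying source's durations by summation, keeps `max_time` by comparison, and skips sources whose `event_count` is zero. A simplified sketch of that merge, with plain millisecond integers standing in for ktime_t (struct and function names here are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Minimal stand-in for the statistics carried by a wakeup source. */
struct ws_stats {
	int64_t total_time_ms;     /* summed on merge */
	int64_t max_time_ms;       /* maximum kept on merge */
	unsigned long event_count; /* summed; zero means "never fired" */
};

/* Fold a deleted source's statistics into the aggregate, mirroring
 * wakeup_source_record(): sources with no events are skipped. */
static void ws_stats_merge(struct ws_stats *agg, const struct ws_stats *ws)
{
	if (!ws->event_count)
		return;
	agg->total_time_ms += ws->total_time_ms;
	if (ws->max_time_ms > agg->max_time_ms)
		agg->max_time_ms = ws->max_time_ms;
	agg->event_count += ws->event_count;
}
```

The zero-event skip means a source that registered but never triggered leaves the aggregate untouched, matching the `if (ws->event_count)` guard above.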
+
/**
* wakeup_source_destroy - Destroy a struct wakeup_source object.
* @ws: Wakeup source to destroy.
@@ -119,6 +152,7 @@
return;
wakeup_source_drop(ws);
+ wakeup_source_record(ws);
kfree(ws->name);
kfree(ws);
}
@@ -917,19 +951,13 @@
active_time = ktime_set(0, 0);
}
-#ifdef CONFIG_PM_ADVANCED_DEBUG
- ret = seq_printf(m, "%-24s%lu\t\t%lu\t\t%lu\t\t%lu\t\t"
+ ret = seq_printf(m, "%-12s\t%lu\t\t%lu\t\t%lu\t\t%lu\t\t"
"%lld\t\t%lld\t\t%lld\t\t%lld\t\t%lld\n",
ws->name, active_count, ws->event_count,
ws->wakeup_count, ws->expire_count,
ktime_to_ms(active_time), ktime_to_ms(total_time),
ktime_to_ms(max_time), ktime_to_ms(ws->last_time),
ktime_to_ms(prevent_sleep_time));
-#else
- ret = seq_printf(m, "\t%lu\t\t%lu\t\t%lu\t\t%lu\t\t%lld\t\t"
- "%-24s\n", active_count, ws->event_count, ws->wakeup_count,
- ws->expire_count, ktime_to_ms(active_time), ws->name);
-#endif
spin_unlock_irqrestore(&ws->lock, flags);
@@ -944,19 +972,17 @@
{
struct wakeup_source *ws;
-#ifdef CONFIG_PM_ADVANCED_DEBUG
- seq_puts(m, "name\t\t\tactive_count\tevent_count\twakeup_count\t"
+ seq_puts(m, "name\t\tactive_count\tevent_count\twakeup_count\t"
"expire_count\tactive_since\ttotal_time\tmax_time\t"
"last_change\tprevent_suspend_time\n");
-#else
- seq_puts(m, "active_count\tevent_count\twakeup_count\t"
- "expire_count\tactive_since\t\tname\n");
-#endif
+
rcu_read_lock();
list_for_each_entry_rcu(ws, &wakeup_sources, entry)
print_wakeup_source_stats(m, ws);
rcu_read_unlock();
+ print_wakeup_source_stats(m, &deleted_ws);
+
return 0;
}
diff --git a/drivers/base/regmap/regcache.c b/drivers/base/regmap/regcache.c
index c141183..26f8374 100644
--- a/drivers/base/regmap/regcache.c
+++ b/drivers/base/regmap/regcache.c
@@ -250,6 +250,39 @@
return 0;
}
+static int regcache_default_sync(struct regmap *map, unsigned int min,
+ unsigned int max)
+{
+ unsigned int reg;
+
+ for (reg = min; reg <= max; reg += map->reg_stride) {
+ unsigned int val;
+ int ret;
+
+ if (regmap_volatile(map, reg) ||
+ !regmap_writeable(map, reg))
+ continue;
+
+ ret = regcache_read(map, reg, &val);
+ if (ret)
+ return ret;
+
+ /* Is this the hardware default? If so skip. */
+ ret = regcache_lookup_reg(map, reg);
+ if (ret >= 0 && val == map->reg_defaults[ret].def)
+ continue;
+
+ map->cache_bypass = 1;
+ ret = _regmap_write(map, reg, val);
+ map->cache_bypass = 0;
+ if (ret)
+ return ret;
+ dev_dbg(map->dev, "Synced register %#x, value %#x\n", reg, val);
+ }
+
+ return 0;
+}
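regcache_default_sync() writes back only cached registers that are writeable, non-volatile, and differ from their hardware defaults. That filter can be sketched in isolation; the arrays below stand in for the regmap cache, the reg_defaults table, and the volatile predicate (all names here are illustrative):

```c
#include <assert.h>
#include <stddef.h>

#define NREGS 4

/* Stand-ins for the regmap cache, default table, and volatile flags. */
static const unsigned int cache[NREGS]    = { 7, 2, 9, 4 };
static const unsigned int defaults[NREGS] = { 7, 2, 0, 4 };
static const int is_volatile[NREGS]       = { 0, 0, 0, 1 };

/* Count the registers a default sync would actually write back:
 * non-volatile registers whose cached value differs from the default. */
static int regs_needing_sync(void)
{
	int n = 0;
	for (size_t reg = 0; reg < NREGS; reg++) {
		if (is_volatile[reg])
			continue;            /* never synced from the cache */
		if (cache[reg] == defaults[reg])
			continue;            /* already at hardware default */
		n++;                         /* would trigger _regmap_write() */
	}
	return n;
}
```

In the table above only register 2 (cached 9, default 0, not volatile) would be written back, so a cache-type driver with no `sync` op pays one bus write here.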
+
/**
* regcache_sync: Sync the register cache with the hardware.
*
@@ -268,7 +301,7 @@
const char *name;
unsigned int bypass;
- BUG_ON(!map->cache_ops || !map->cache_ops->sync);
+ BUG_ON(!map->cache_ops);
map->lock(map->lock_arg);
/* Remember the initial bypass state */
@@ -297,7 +330,10 @@
}
map->cache_bypass = 0;
- ret = map->cache_ops->sync(map, 0, map->max_register);
+ if (map->cache_ops->sync)
+ ret = map->cache_ops->sync(map, 0, map->max_register);
+ else
+ ret = regcache_default_sync(map, 0, map->max_register);
if (ret == 0)
map->cache_dirty = false;
@@ -331,7 +367,7 @@
const char *name;
unsigned int bypass;
- BUG_ON(!map->cache_ops || !map->cache_ops->sync);
+ BUG_ON(!map->cache_ops);
map->lock(map->lock_arg);
@@ -346,7 +382,10 @@
if (!map->cache_dirty)
goto out;
- ret = map->cache_ops->sync(map, min, max);
+ if (map->cache_ops->sync)
+ ret = map->cache_ops->sync(map, min, max);
+ else
+ ret = regcache_default_sync(map, min, max);
out:
trace_regcache_sync(map->dev, name, "stop region");
diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
index f0b0206..2b45dfe 100644
--- a/drivers/base/regmap/regmap.c
+++ b/drivers/base/regmap/regmap.c
@@ -123,7 +123,10 @@
if (map->volatile_table)
return _regmap_check_range_table(map, reg, map->volatile_table);
- return true;
+ if (map->cache_ops)
+ return false;
+ else
+ return true;
}
bool regmap_precious(struct regmap *map, unsigned int reg)
diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig
index f1281de..390cfa5 100644
--- a/drivers/char/Kconfig
+++ b/drivers/char/Kconfig
@@ -64,6 +64,8 @@
source "drivers/tty/serial/Kconfig"
+source "drivers/char/diag/Kconfig"
+
config TTY_PRINTK
bool "TTY driver to output user messages via printk"
depends on EXPERT && TTY
diff --git a/drivers/char/Makefile b/drivers/char/Makefile
index 95388f0..df10e21 100644
--- a/drivers/char/Makefile
+++ b/drivers/char/Makefile
@@ -67,3 +67,4 @@
obj-$(CONFIG_TEGRA_PFLASH) += tegra_pflash.o
obj-$(CONFIG_TEGRA_GMI_CHAR) += tegra_gmi_char.o
obj-$(CONFIG_TEGRA_GMI_ACCESS_CONTROL) += tegra_gmi_access.o
+obj-$(CONFIG_DIAG_CHAR) += diag/
diff --git a/drivers/char/diag/Kconfig b/drivers/char/diag/Kconfig
new file mode 100644
index 0000000..da9350a
--- /dev/null
+++ b/drivers/char/diag/Kconfig
@@ -0,0 +1,41 @@
+menu "Diag Support"
+
+config DIAG_CHAR
+ tristate "char driver interface and diag forwarding to/from modem"
+ default n
+ depends on USB_G_ANDROID || USB_FUNCTION_DIAG || USB_QCOM_MAEMO
+ depends on USB_QCOM_DIAG_BRIDGE
+ help
+ Char driver interface for the diag user space and diag forwarding to the modem ARM and back.
+ This enables diagchar for the Maemo or Android USB gadget, based on the selected config.
+endmenu
+
+menu "DIAG traffic over USB"
+
+config DIAG_OVER_USB
+ bool "Enable DIAG traffic to go over USB"
+ depends on USB_QCOM_DIAG_BRIDGE
+ default y
+ help
+ This feature helps segregate code required for DIAG traffic to go over USB.
+endmenu
+
+menu "SDIO support for DIAG"
+
+config DIAG_SDIO_PIPE
+ depends on MSM_SDIO_AL
+ default n
+ bool "Enable 9K DIAG traffic over SDIO"
+ help
+ SDIO Transport Layer for DIAG Router
+endmenu
+
+menu "HSIC/SMUX support for DIAG"
+
+config DIAGFWD_BRIDGE_CODE
+ depends on USB_QCOM_DIAG_BRIDGE
+ default y
+ bool "Enable QSC/9K DIAG traffic over SMUX/HSIC"
+ help
+ SMUX/HSIC Transport Layer for DIAG Router
+endmenu
diff --git a/drivers/char/diag/Makefile b/drivers/char/diag/Makefile
new file mode 100644
index 0000000..c9204ea
--- /dev/null
+++ b/drivers/char/diag/Makefile
@@ -0,0 +1,6 @@
+obj-$(CONFIG_DIAG_CHAR) := diagchar.o
+obj-$(CONFIG_DIAG_SDIO_PIPE) += diagfwd_sdio.o
+obj-$(CONFIG_DIAGFWD_BRIDGE_CODE) += diagfwd_bridge.o
+obj-$(CONFIG_DIAGFWD_BRIDGE_CODE) += diagfwd_hsic.o
+obj-$(CONFIG_DIAGFWD_BRIDGE_CODE) += diagfwd_smux.o
+diagchar-objs := diagchar_core.o diagchar_hdlc.o diagfwd.o diagmem.o diagfwd_cntl.o diag_dci.o diag_masks.o diag_debugfs.o
diff --git a/drivers/char/diag/diag_dci.c b/drivers/char/diag/diag_dci.c
new file mode 100644
index 0000000..026804ec
--- /dev/null
+++ b/drivers/char/diag/diag_dci.c
@@ -0,0 +1,1741 @@
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/slab.h>
+#include <linux/init.h>
+#include <linux/uaccess.h>
+#include <linux/diagchar.h>
+#include <linux/sched.h>
+#include <linux/err.h>
+#include <linux/delay.h>
+#include <linux/workqueue.h>
+#include <linux/pm_runtime.h>
+#include <linux/platform_device.h>
+#include <linux/pm_wakeup.h>
+#include <linux/spinlock.h>
+#include <linux/ratelimit.h>
+#include <asm/current.h>
+#ifdef CONFIG_DIAG_OVER_USB
+#include <mach/usbdiag.h>
+#endif
+#include "diagchar_hdlc.h"
+#include "diagmem.h"
+#include "diagchar.h"
+#include "diagfwd.h"
+#include "diagfwd_cntl.h"
+#include "diag_dci.h"
+
+static unsigned int dci_apps_tbl_size = 11;
+
+unsigned int dci_max_reg = 100;
+unsigned int dci_max_clients = 10;
+unsigned char dci_cumulative_log_mask[DCI_LOG_MASK_SIZE];
+unsigned char dci_cumulative_event_mask[DCI_EVENT_MASK_SIZE];
+struct mutex dci_log_mask_mutex;
+struct mutex dci_event_mask_mutex;
+struct mutex dci_health_mutex;
+
+spinlock_t ws_lock;
+unsigned long ws_lock_flags;
+
+/* Number of milliseconds anticipated to process the DCI data */
+#define DCI_WAKEUP_TIMEOUT 1
+
+#define DCI_CHK_CAPACITY(entry, new_data_len) \
+((entry->data_len + new_data_len > entry->total_capacity) ? 1 : 0) \
+
+#ifdef CONFIG_DEBUG_FS
+struct diag_dci_data_info *dci_data_smd;
+struct mutex dci_stat_mutex;
+
+void diag_dci_smd_record_info(int read_bytes, uint8_t ch_type)
+{
+ static int curr_dci_data_smd;
+ static unsigned long iteration;
+ struct diag_dci_data_info *temp_data = dci_data_smd;
+ if (!temp_data)
+ return;
+ mutex_lock(&dci_stat_mutex);
+ if (curr_dci_data_smd == DIAG_DCI_DEBUG_CNT)
+ curr_dci_data_smd = 0;
+ temp_data += curr_dci_data_smd;
+ temp_data->iteration = iteration + 1;
+ temp_data->data_size = read_bytes;
+ temp_data->ch_type = ch_type;
+ diag_get_timestamp(temp_data->time_stamp);
+ curr_dci_data_smd++;
+ iteration++;
+ mutex_unlock(&dci_stat_mutex);
+}
+#else
+void diag_dci_smd_record_info(int read_bytes, uint8_t ch_type)
+{
+}
+#endif
+
+/* Process the data read from apps userspace client */
+void diag_process_apps_dci_read_data(int data_type, void *buf, int recd_bytes)
+{
+ uint8_t cmd_code;
+
+ if (!buf) {
+ pr_err_ratelimited("diag: In %s, Null buf pointer\n", __func__);
+ return;
+ }
+
+ if (data_type != DATA_TYPE_DCI_LOG && data_type != DATA_TYPE_DCI_EVENT)
+ pr_err("diag: In %s, unsupported data_type: 0x%x\n", __func__, (unsigned int)data_type);
+
+ cmd_code = *(uint8_t *) buf;
+
+ if (cmd_code == LOG_CMD_CODE) {
+ extract_dci_log(buf, recd_bytes, APPS_DATA);
+ } else if (cmd_code == EVENT_CMD_CODE) {
+ extract_dci_events(buf, recd_bytes, APPS_DATA);
+ } else {
+ pr_err("diag: In %s, unsupported command code: 0x%x, not log or event\n", __func__, cmd_code);
+ }
+}
+
+/* Process the data read from the smd dci channel */
+int diag_process_smd_dci_read_data(struct diag_smd_info *smd_info, void *buf, int recd_bytes)
+{
+ int read_bytes, dci_pkt_len;
+ uint8_t recv_pkt_cmd_code;
+ struct list_head *start, *temp;
+ struct diag_dci_client_tbl *entry = NULL;
+
+ diag_dci_smd_record_info(recd_bytes, (uint8_t) smd_info->type);
+ /* Each SMD read can have multiple DCI packets */
+ read_bytes = 0;
+ while (read_bytes < recd_bytes) {
+ /* read actual length of dci pkt */
+ dci_pkt_len = *(uint16_t *) (buf + 2);
+
+ /* Check that the length of the current packet, including
+ * its framing overhead, does not exceed the remaining bytes
+ * in the received buffer. The overhead is the Start byte (1),
+ * Version byte (1), length bytes (2) and End byte (1).
+ */
+ if ((dci_pkt_len + 5) > (recd_bytes - read_bytes)) {
+ pr_err("diag: Invalid length in %s, len: %d, dci_pkt_len: %d", __func__, recd_bytes, dci_pkt_len);
+ diag_dci_try_deactivate_wakeup_source(smd_info->ch);
+ return 0;
+ }
+ /* process one dci packet */
+ pr_debug("diag: bytes read = %d, single dci pkt len = %d\n", read_bytes, dci_pkt_len);
+ /* print_hex_dump(KERN_DEBUG, "Single DCI packet :",
+ DUMP_PREFIX_ADDRESS, 16, 1, buf, 5 + dci_pkt_len, 1); */
+ recv_pkt_cmd_code = *(uint8_t *) (buf + 4);
+ if (recv_pkt_cmd_code == LOG_CMD_CODE) {
+ /* Don't include the 4 bytes for command code */
+ extract_dci_log(buf + 4, recd_bytes - 4, smd_info->peripheral);
+ } else if (recv_pkt_cmd_code == EVENT_CMD_CODE) {
+ /* Don't include the 4 bytes for command code */
+ extract_dci_events(buf + 4, recd_bytes - 4, smd_info->peripheral);
+ } else
+ extract_dci_pkt_rsp(smd_info, buf, recd_bytes);
+ read_bytes += 5 + dci_pkt_len;
+ buf += 5 + dci_pkt_len; /* advance to next DCI pkt */
+ }
+ /* Release wakeup source when there are no more clients to
+ process DCI data */
+ if (driver->num_dci_client == 0)
+ diag_dci_try_deactivate_wakeup_source(smd_info->ch);
+
+ /* wake up all sleeping DCI clients which have some data */
+ list_for_each_safe(start, temp, &driver->dci_client_list) {
+ entry = list_entry(start, struct diag_dci_client_tbl, track);
+ if (entry->data_len) {
+ smd_info->in_busy_1 = 1;
+ diag_update_sleeping_process(entry->client->tgid, DCI_DATA_TYPE);
+ }
+ }
+
+ return 0;
+}
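Each DCI packet in an SMD read is framed as a start byte (1), version byte (1), little-endian length (2), payload, and end byte (1), so the loop above advances by `dci_pkt_len + 5` per packet. A standalone sketch of the same walk-and-validate loop (assumes a little-endian host, matching the kernel code's direct `uint16_t` cast; names are illustrative):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define DCI_HDR_OVERHEAD 5 /* start(1) + version(1) + length(2) + end(1) */

/* Count well-formed DCI packets in a buffer, mirroring the length
 * check in diag_process_smd_dci_read_data(): the 16-bit length at
 * offset 2 plus 5 framing bytes must fit in the remaining bytes. */
static int count_dci_packets(const uint8_t *buf, int len)
{
	int read = 0, count = 0;

	while (read < len) {
		uint16_t pkt_len;

		if (len - read < 4)
			return -1; /* not even room for the header */
		memcpy(&pkt_len, buf + read + 2, sizeof(pkt_len));
		if (pkt_len + DCI_HDR_OVERHEAD > len - read)
			return -1; /* truncated or corrupt packet */
		count++;
		read += pkt_len + DCI_HDR_OVERHEAD;
	}
	return count;
}
```

A single SMD read can therefore carry several packets back to back, which is exactly why the kernel loop re-reads the length field at each new offset.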
+
+static int process_dci_apps_buffer(struct diag_dci_client_tbl *entry, uint16_t data_len)
+{
+ int ret = 0;
+ int err = 0;
+
+ if (!entry) {
+ err = -EINVAL;
+ return err;
+ }
+
+ if (!entry->dci_apps_data) {
+ if (entry->apps_in_busy_1 == 0) {
+ entry->dci_apps_data = entry->dci_apps_buffer;
+ entry->apps_in_busy_1 = 1;
+ } else {
+ entry->dci_apps_data = diagmem_alloc(driver, driver->itemsize_dci, POOL_TYPE_DCI);
+ }
+ entry->apps_data_len = 0;
+ if (!entry->dci_apps_data) {
+ ret = -ENOMEM;
+ return ret;
+ }
+ }
+
+ /* If the data will not fit into the buffer */
+ if ((int)driver->itemsize_dci - entry->apps_data_len <= data_len) {
+ err = dci_apps_write(entry);
+ if (err) {
+ ret = -EIO;
+ return ret;
+ }
+ entry->dci_apps_data = NULL;
+ entry->apps_data_len = 0;
+ if (entry->apps_in_busy_1 == 0) {
+ entry->dci_apps_data = entry->dci_apps_buffer;
+ entry->apps_in_busy_1 = 1;
+ } else {
+ entry->dci_apps_data = diagmem_alloc(driver, driver->itemsize_dci, POOL_TYPE_DCI);
+ }
+
+ if (!entry->dci_apps_data) {
+ ret = -ENOMEM;
+ return ret;
+ }
+ }
+
+ return ret;
+}
+
+static inline struct diag_dci_client_tbl *__diag_dci_get_client_entry(int client_id)
+{
+ struct list_head *start, *temp;
+ struct diag_dci_client_tbl *entry = NULL;
+ list_for_each_safe(start, temp, &driver->dci_client_list) {
+ entry = list_entry(start, struct diag_dci_client_tbl, track);
+ if (entry->client->tgid == client_id)
+ return entry;
+ }
+ return NULL;
+}
+
+static inline int __diag_dci_query_log_mask(struct diag_dci_client_tbl *entry, uint16_t log_code)
+{
+ uint16_t item_num;
+ uint8_t equip_id, *log_mask_ptr, byte_mask;
+ int byte_index, offset;
+
+ if (!entry) {
+ pr_err("diag: In %s, invalid client entry\n", __func__);
+ return 0;
+ }
+
+ equip_id = LOG_GET_EQUIP_ID(log_code);
+ item_num = LOG_GET_ITEM_NUM(log_code);
+ byte_index = item_num / 8 + 2;
+ byte_mask = 0x01 << (item_num % 8);
+ offset = equip_id * 514;
+
+ if (offset + byte_index > DCI_LOG_MASK_SIZE) {
+ pr_err("diag: In %s, invalid offset: %d, log_code: %d, byte_index: %d\n", __func__, offset, log_code, byte_index);
+ return 0;
+ }
+
+ log_mask_ptr = entry->dci_log_mask;
+ log_mask_ptr = log_mask_ptr + offset + byte_index;
+ return ((*log_mask_ptr & byte_mask) == byte_mask) ? 1 : 0;
+
+}
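The mask layout assumed by this lookup gives each equipment ID a 514-byte block: 2 header bytes followed by 512 bytes of item bits. A self-contained sketch of the same arithmetic, with the LOG_GET_EQUIP_ID / LOG_GET_ITEM_NUM split (top 4 bits vs. low 12 bits of the log code) written out explicitly; helper names are illustrative:

```c
#include <assert.h>
#include <stdint.h>

#define EQUIP_BLOCK_SIZE 514 /* 2 header bytes + 512 mask bytes per equip id */

/* Split a log code into equip id and item number, then test the
 * corresponding mask bit, as __diag_dci_query_log_mask() does. */
static int log_mask_enabled(const uint8_t *mask, uint16_t log_code)
{
	uint8_t equip_id = (log_code >> 12) & 0x0F; /* LOG_GET_EQUIP_ID */
	uint16_t item    = log_code & 0x0FFF;       /* LOG_GET_ITEM_NUM */
	int byte_index   = item / 8 + 2;            /* skip 2 header bytes */
	uint8_t bit      = 1u << (item % 8);

	return (mask[equip_id * EQUIP_BLOCK_SIZE + byte_index] & bit) != 0;
}

/* Set a bit so the lookup can be exercised. */
static void log_mask_set(uint8_t *mask, uint16_t log_code)
{
	uint8_t equip_id = (log_code >> 12) & 0x0F;
	uint16_t item    = log_code & 0x0FFF;

	mask[equip_id * EQUIP_BLOCK_SIZE + item / 8 + 2] |= 1u << (item % 8);
}
```

With 16 equipment IDs the full mask is 16 * 514 bytes, which matches the kernel's `offset = equip_id * 514` stride and the DCI_LOG_MASK_SIZE bound check.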
+
+static inline int __diag_dci_query_event_mask(struct diag_dci_client_tbl *entry, uint16_t event_id)
+{
+ uint8_t *event_mask_ptr, byte_mask;
+ int byte_index, bit_index;
+
+ if (!entry) {
+ pr_err("diag: In %s, invalid client entry\n", __func__);
+ return 0;
+ }
+
+ byte_index = event_id / 8;
+ bit_index = event_id % 8;
+ byte_mask = 0x1 << bit_index;
+
+ if (byte_index > DCI_EVENT_MASK_SIZE) {
+ pr_err("diag: In %s, invalid, event_id: %d, byte_index: %d\n", __func__, event_id, byte_index);
+ return 0;
+ }
+
+ event_mask_ptr = entry->dci_event_mask;
+ event_mask_ptr = event_mask_ptr + byte_index;
+ return ((*event_mask_ptr & byte_mask) == byte_mask) ? 1 : 0;
+}
+
+void extract_dci_pkt_rsp(struct diag_smd_info *smd_info, unsigned char *buf, int len)
+{
+ int i = 0, index = -1, cmd_code_len = 1;
+ int curr_client_pid = 0, write_len;
+ struct diag_dci_client_tbl *entry = NULL;
+ void *temp_buf = NULL;
+ uint8_t recv_pkt_cmd_code;
+
+ recv_pkt_cmd_code = *(uint8_t *) (buf + 4);
+ if (recv_pkt_cmd_code != DCI_PKT_RSP_CODE)
+ cmd_code_len = 4; /* delayed response */
+
+ /* Skip the Start(1) and the version(1) bytes */
+ write_len = (int)(*(uint16_t *) (buf + 2));
+ /* Check if the length embedded in the packet is correct.
+ * Include the start (1), version (1), length (2) and the end
+ * (1) bytes while checking. Total = 5 bytes
+ */
+ if ((write_len <= 0) || ((write_len + 5) > len)) {
+ pr_err("diag: Invalid length in %s, len: %d, write_len: %d", __func__, len, write_len);
+ return;
+ }
+ write_len -= cmd_code_len;
+ pr_debug("diag: len = %d\n", write_len);
+ /* look up DCI client with tag */
+ for (i = 0; i < dci_max_reg; i++) {
+ if (driver->req_tracking_tbl[i].tag == *(int *)(buf + (4 + cmd_code_len))) {
+ *(int *)(buf + 4 + cmd_code_len) = driver->req_tracking_tbl[i].uid;
+ curr_client_pid = driver->req_tracking_tbl[i].pid;
+ index = i;
+ break;
+ }
+ }
+ if (index == -1) {
+ pr_err("diag: No matching PID for DCI data\n");
+ return;
+ }
+
+ entry = __diag_dci_get_client_entry(curr_client_pid);
+ if (!entry) {
+ pr_err("diag: In %s, couldn't find entry\n", __func__);
+ return;
+ }
+
+ mutex_lock(&entry->data_mutex);
+ if (DCI_CHK_CAPACITY(entry, 8 + write_len)) {
+ pr_alert("diag: create capacity for pkt rsp\n");
+ entry->total_capacity += 8 + write_len;
+ temp_buf = krealloc(entry->dci_data, entry->total_capacity, GFP_KERNEL);
+ if (!temp_buf) {
+ pr_err("diag: DCI realloc failed\n");
+ mutex_unlock(&entry->data_mutex);
+ return;
+ } else {
+ entry->dci_data = temp_buf;
+ }
+ }
+ *(int *)(entry->dci_data + entry->data_len) = DCI_PKT_RSP_TYPE;
+ entry->data_len += sizeof(int);
+ *(int *)(entry->dci_data + entry->data_len) = write_len;
+ entry->data_len += sizeof(int);
+ memcpy(entry->dci_data + entry->data_len, buf + 4 + cmd_code_len, write_len);
+ entry->data_len += write_len;
+ mutex_unlock(&entry->data_mutex);
+ /* delete immediate response entry */
+ if (smd_info->buf_in_1[8 + cmd_code_len] != 0x80)
+ driver->req_tracking_tbl[index].pid = 0;
+}
+
+static void copy_dci_event_from_apps(uint8_t * event_data, unsigned int total_event_len, struct diag_dci_client_tbl *entry)
+{
+ int ret = 0;
+ unsigned int total_length = 4 + total_event_len;
+
+ if (!event_data) {
+ pr_err_ratelimited("diag: In %s, event_data null pointer\n", __func__);
+ return;
+ }
+
+ if (!entry) {
+ pr_err_ratelimited("diag: In %s, entry null pointer\n", __func__);
+ return;
+ }
+
+ mutex_lock(&dci_health_mutex);
+ mutex_lock(&entry->data_mutex);
+
+ ret = process_dci_apps_buffer(entry, total_length);
+
+ if (ret != 0) {
+ if (ret == -ENOMEM)
+ pr_err_ratelimited("diag: In %s, DCI event drop, ret: %d. Reduce data rate.\n", __func__, ret);
+ else
+ pr_err_ratelimited("diag: In %s, DCI event drop, ret: %d\n", __func__, ret);
+ entry->dropped_events++;
+ mutex_unlock(&entry->data_mutex);
+ mutex_unlock(&dci_health_mutex);
+ return;
+ }
+
+ entry->received_events++;
+ *(int *)(entry->dci_apps_data + entry->apps_data_len) = DCI_EVENT_TYPE;
+ memcpy(entry->dci_apps_data + entry->apps_data_len + 4, event_data, total_event_len);
+ entry->apps_data_len += total_length;
+
+ mutex_unlock(&entry->data_mutex);
+ mutex_unlock(&dci_health_mutex);
+
+ check_drain_timer();
+
+ return;
+}
+
+static void copy_dci_event_from_smd(uint8_t * event_data, int data_source, unsigned int total_event_len, struct diag_dci_client_tbl *entry)
+{
+ (void)data_source;
+
+ if (!event_data) {
+ pr_err_ratelimited("diag: In %s, event_data null pointer\n", __func__);
+ return;
+ }
+
+ if (!entry) {
+ pr_err_ratelimited("diag: In %s, entry null pointer\n", __func__);
+ return;
+ }
+
+ mutex_lock(&dci_health_mutex);
+ mutex_lock(&entry->data_mutex);
+
+ if (DCI_CHK_CAPACITY(entry, 4 + total_event_len)) {
+ pr_err("diag: In %s, DCI event drop\n", __func__);
+ entry->dropped_events++;
+ mutex_unlock(&entry->data_mutex);
+ mutex_unlock(&dci_health_mutex);
+ return;
+ }
+ entry->received_events++;
+ *(int *)(entry->dci_data + entry->data_len) = DCI_EVENT_TYPE;
+ /* 4 bytes for DCI_EVENT_TYPE */
+ memcpy(entry->dci_data + entry->data_len + 4, event_data, total_event_len);
+ entry->data_len += 4 + total_event_len;
+
+ mutex_unlock(&entry->data_mutex);
+ mutex_unlock(&dci_health_mutex);
+}
+
+void extract_dci_events(unsigned char *buf, int len, int data_source)
+{
+ uint16_t event_id, event_id_packet, length, temp_len;
+ uint8_t payload_len, payload_len_field;
+ uint8_t timestamp[8], timestamp_len;
+ uint8_t event_data[MAX_EVENT_SIZE];
+ unsigned int total_event_len;
+ struct list_head *start, *temp;
+ struct diag_dci_client_tbl *entry = NULL;
+
+ length = *(uint16_t *) (buf + 1); /* total length of event series */
+ if (length == 0) {
+ pr_err("diag: Incoming dci event length is invalid\n");
+ return;
+ }
+ /* Move directly to the start of the event series. 1 byte for
+ * event code and 2 bytes for the length field.
+ */
+ temp_len = 3;
+ while (temp_len < (length - 1)) {
+ event_id_packet = *(uint16_t *) (buf + temp_len);
+ event_id = event_id_packet & 0x0FFF; /* extract 12 bits */
+ if (event_id_packet & 0x8000) {
+ /* The packet carries only the two least significant
+ * bytes of the timestamp
+ */
+ timestamp_len = 2;
+ } else {
+ /* The packet has the full timestamp. The first event
+ * will always have full timestamp. Save it in the
+ * timestamp buffer and use it for subsequent events if
+ * necessary.
+ */
+ timestamp_len = 8;
+ memcpy(timestamp, buf + temp_len + 2, timestamp_len);
+ }
+ /* Bits 13 and 14 of the packet encode the payload length */
+ if (((event_id_packet & 0x6000) >> 13) == 3) {
+ payload_len_field = 1;
+ payload_len = *(uint8_t *)
+ (buf + temp_len + 2 + timestamp_len);
+ if (payload_len < (MAX_EVENT_SIZE - 13)) {
+ /* copy the payload length and the payload */
+ memcpy(event_data + 12, buf + temp_len + 2 + timestamp_len, 1);
+ memcpy(event_data + 13, buf + temp_len + 2 + timestamp_len + 1, payload_len);
+ } else {
+ pr_err("diag: event > %d, payload_len = %d\n", (MAX_EVENT_SIZE - 13), payload_len);
+ return;
+ }
+ } else {
+ payload_len_field = 0;
+ payload_len = (event_id_packet & 0x6000) >> 13;
+ /* copy the payload */
+ memcpy(event_data + 12, buf + temp_len + 2 + timestamp_len, payload_len);
+ }
+
+ /* Before copying the data to userspace, check if we are still
+ * within the buffer limit. This is an error case, don't count
+ * it towards the health statistics.
+ *
+ * Here, the offset of 2 bytes(uint16_t) is for the
+ * event_id_packet length
+ */
+ temp_len += sizeof(uint16_t) + timestamp_len + payload_len_field + payload_len;
+ if (temp_len > len) {
+ pr_err("diag: Invalid length in %s, len: %d, read: %d", __func__, len, temp_len);
+ return;
+ }
+
+ /* 2 bytes for the event id; the timestamp length is hard
+ coded to 8, as each copied event carries a full timestamp */
+ *(uint16_t *) (event_data) = 10 + payload_len_field + payload_len;
+ *(uint16_t *) (event_data + 2) = event_id_packet & 0x7FFF;
+ memcpy(event_data + 4, timestamp, 8);
+ /* 2 bytes for the event length field which is added to
+ the event data */
+ total_event_len = 2 + 10 + payload_len_field + payload_len;
+ /* parse through event mask tbl of each client and check mask */
+ list_for_each_safe(start, temp, &driver->dci_client_list) {
+ entry = list_entry(start, struct diag_dci_client_tbl, track);
+ if (__diag_dci_query_event_mask(entry, event_id)) {
+ /* copy to client buffer */
+ if (data_source == APPS_DATA) {
+ copy_dci_event_from_apps(event_data, total_event_len, entry);
+ } else {
+ copy_dci_event_from_smd(event_data, data_source, total_event_len, entry);
+ }
+ }
+ }
+ }
+}
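The 16-bit `event_id_packet` header decoded above packs three fields: bit 15 selects a truncated 2-byte timestamp, bits 13-14 give the payload length (the value 3 meaning a separate length byte follows), and bits 0-11 carry the event ID. A small decoder for just that header word (the struct name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Decoded fields of a DCI event header word. */
struct event_hdr {
	uint16_t event_id;    /* bits 0-11 */
	int payload_len_bits; /* bits 13-14: 0-2 inline, 3 = length byte */
	int timestamp_len;    /* bit 15 set: 2-byte, else full 8-byte */
};

/* Unpack the bit fields of an event_id_packet as the extractor does. */
static struct event_hdr decode_event_hdr(uint16_t event_id_packet)
{
	struct event_hdr h;

	h.event_id = event_id_packet & 0x0FFF;
	h.payload_len_bits = (event_id_packet & 0x6000) >> 13;
	h.timestamp_len = (event_id_packet & 0x8000) ? 2 : 8;
	return h;
}
```

Because the first event in a series always carries a full timestamp, the extractor caches those 8 bytes and reuses them for later events whose header flags the truncated form.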
+
+int dci_apps_write(struct diag_dci_client_tbl *entry)
+{
+ int i, j;
+ int err = -ENOMEM;
+ int found_it = 0;
+
+ if (!entry) {
+ pr_err("diag: In %s, null dci client entry pointer\n", __func__);
+ return -EINVAL;
+ }
+
+ /* Make sure we have a buffer and there is data in it */
+ if (!entry->dci_apps_data || entry->apps_data_len <= 0) {
+ pr_err("diag: In %s, Invalid dci apps data info, dci_apps_data: 0x%x, apps_data_len: %d\n", __func__, (unsigned int)entry->dci_apps_data, entry->apps_data_len);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < entry->dci_apps_tbl_size; i++) {
+ if (entry->dci_apps_tbl[i].buf == NULL) {
+ entry->dci_apps_tbl[i].buf = entry->dci_apps_data;
+ entry->dci_apps_tbl[i].length = entry->apps_data_len;
+ err = 0;
+ for (j = 0; j < driver->num_clients; j++) {
+ if (driver->client_map[j].pid == entry->client->tgid) {
+ driver->data_ready[j] |= DCI_DATA_TYPE;
+ break;
+ }
+ }
+ wake_up_interruptible(&driver->wait_q);
+ found_it = 1;
+ break;
+ }
+ }
+
+ if (!found_it)
+ pr_err_ratelimited("diag: In %s, Apps DCI data table full. Reduce data rate.\n", __func__);
+
+ return err;
+}
+
+static void copy_dci_log_from_apps(unsigned char *buf, int len, struct diag_dci_client_tbl *entry)
+{
+ int ret = 0;
+ uint16_t log_length, total_length = 0;
+
+ if (!buf || !entry)
+ return;
+
+ log_length = *(uint16_t *) (buf + 2);
+ total_length = 4 + log_length;
+
+ /* Check if we are within the len. The check should include the
+ * first 4 bytes for the Cmd Code(2) and the length bytes (2)
+ */
+ if (total_length > len) {
+ pr_err("diag: Invalid length in %s, log_len: %d, len: %d", __func__, log_length, len);
+ return;
+ }
+
+ mutex_lock(&dci_health_mutex);
+ mutex_lock(&entry->data_mutex);
+
+ ret = process_dci_apps_buffer(entry, total_length);
+
+ if (ret != 0) {
+ if (ret == -ENOMEM)
+ pr_err_ratelimited("diag: In %s, DCI log drop, ret: %d. Reduce data rate.\n", __func__, ret);
+ else
+ pr_err_ratelimited("diag: In %s, DCI log drop, ret: %d\n", __func__, ret);
+ entry->dropped_logs++;
+ mutex_unlock(&entry->data_mutex);
+ mutex_unlock(&dci_health_mutex);
+ return;
+ }
+
+ entry->received_logs++;
+ *(int *)(entry->dci_apps_data + entry->apps_data_len) = DCI_LOG_TYPE;
+ memcpy(entry->dci_apps_data + entry->apps_data_len + 4, buf + 4, log_length);
+ entry->apps_data_len += total_length;
+
+ mutex_unlock(&entry->data_mutex);
+ mutex_unlock(&dci_health_mutex);
+
+ check_drain_timer();
+
+ return;
+}
+
+static void copy_dci_log_from_smd(unsigned char *buf, int len, int data_source, struct diag_dci_client_tbl *entry)
+{
+ uint16_t log_length = *(uint16_t *) (buf + 2);
+ (void)data_source;
+
+ /* Check if we are within the len. The check should include the
+ * first 4 bytes for the Log code(2) and the length bytes (2)
+ */
+ if ((log_length + sizeof(uint16_t) + 2) > len) {
+ pr_err("diag: Invalid length in %s, log_len: %d, len: %d", __func__, log_length, len);
+ return;
+ }
+
+ mutex_lock(&dci_health_mutex);
+ mutex_lock(&entry->data_mutex);
+
+ if (DCI_CHK_CAPACITY(entry, 4 + log_length)) {
+ pr_err_ratelimited("diag: In %s, DCI log drop\n", __func__);
+ entry->dropped_logs++;
+ mutex_unlock(&entry->data_mutex);
+ mutex_unlock(&dci_health_mutex);
+ return;
+ }
+
+ entry->received_logs++;
+ *(int *)(entry->dci_data + entry->data_len) = DCI_LOG_TYPE;
+ memcpy(entry->dci_data + entry->data_len + 4, buf + 4, log_length);
+ entry->data_len += 4 + log_length;
+
+ mutex_unlock(&entry->data_mutex);
+ mutex_unlock(&dci_health_mutex);
+}
+
+void extract_dci_log(unsigned char *buf, int len, int data_source)
+{
+ uint16_t log_code, read_bytes = 0;
+ struct list_head *start, *temp;
+ struct diag_dci_client_tbl *entry = NULL;
+
+ /* The first six bytes for the incoming log packet contains
+ * Command code (2), the length of the packet (2) and the length
+ * of the log (2)
+ */
+ log_code = *(uint16_t *) (buf + 6);
+ read_bytes += sizeof(uint16_t) + 6;
+ if (read_bytes > len) {
+ pr_err("diag: Invalid length in %s, len: %d, read: %d", __func__, len, read_bytes);
+ return;
+ }
+
+ /* parse through log mask table of each client and check mask */
+ list_for_each_safe(start, temp, &driver->dci_client_list) {
+ entry = list_entry(start, struct diag_dci_client_tbl, track);
+ if (__diag_dci_query_log_mask(entry, log_code)) {
+ pr_debug("\t log code %x needed by client %d", log_code, entry->client->tgid);
+ /* copy to client buffer */
+ if (data_source == APPS_DATA) {
+ copy_dci_log_from_apps(buf, len, entry);
+ } else {
+ copy_dci_log_from_smd(buf, len, data_source, entry);
+ }
+ }
+ }
+}
+
+void diag_update_smd_dci_work_fn(struct work_struct *work)
+{
+ struct diag_smd_info *smd_info = container_of(work,
+ struct diag_smd_info,
+ diag_notify_update_smd_work);
+ int i, j;
+ char dirty_bits[16];
+ uint8_t *client_log_mask_ptr;
+ uint8_t *log_mask_ptr;
+ int ret;
+ int index = smd_info->peripheral;
+ struct list_head *start, *temp;
+ struct diag_dci_client_tbl *entry = NULL;
+
+ /* Update apps and peripheral(s) with the dci log and event masks */
+ memset(dirty_bits, 0, 16 * sizeof(uint8_t));
+
+ /*
+ * From the log entries used by each client, determine
+ * which entries in the cumulative log mask need to be
+ * updated on the peripheral.
+ */
+ list_for_each_safe(start, temp, &driver->dci_client_list) {
+ entry = list_entry(start, struct diag_dci_client_tbl, track);
+ client_log_mask_ptr = entry->dci_log_mask;
+ for (j = 0; j < 16; j++) {
+ if (*(client_log_mask_ptr + 1))
+ dirty_bits[j] = 1;
+ client_log_mask_ptr += 514;
+ }
+ }
+
+ mutex_lock(&dci_log_mask_mutex);
+ /* Update the appropriate dirty bits in the cumulative mask */
+ log_mask_ptr = dci_cumulative_log_mask;
+ for (i = 0; i < 16; i++) {
+ if (dirty_bits[i])
+ *(log_mask_ptr + 1) = dirty_bits[i];
+
+ log_mask_ptr += 514;
+ }
+ mutex_unlock(&dci_log_mask_mutex);
+
+ /* Send updated mask to userspace clients */
+ diag_update_userspace_clients(DCI_LOG_MASKS_TYPE);
+ /* Send updated log mask to peripherals */
+ ret = diag_send_dci_log_mask(driver->smd_cntl[index].ch);
+
+ /* Send updated event mask to userspace clients */
+ diag_update_userspace_clients(DCI_EVENT_MASKS_TYPE);
+ /* Send updated event mask to peripheral */
+ ret = diag_send_dci_event_mask(driver->smd_cntl[index].ch);
+
+ smd_info->notify_context = 0;
+}
+
+void diag_dci_notify_client(int peripheral_mask, int data)
+{
+ int stat;
+ struct siginfo info;
+ struct list_head *start, *temp;
+ struct diag_dci_client_tbl *entry = NULL;
+
+ memset(&info, 0, sizeof(struct siginfo));
+ info.si_code = SI_QUEUE;
+ info.si_int = (peripheral_mask | data);
+
+ /* Notify the DCI process that the peripheral DCI Channel is up */
+ list_for_each_safe(start, temp, &driver->dci_client_list) {
+ entry = list_entry(start, struct diag_dci_client_tbl, track);
+ if (entry->list & peripheral_mask) {
+ info.si_signo = entry->signal_type;
+ stat = send_sig_info(entry->signal_type, &info, entry->client);
+ if (stat)
+ pr_err("diag: Err sending dci signal to client, signal data: 0x%x, stat: %d\n", info.si_int, stat);
+ }
+ }
+}
+
+int diag_send_dci_pkt(struct diag_master_table entry, unsigned char *buf, int len, int index)
+{
+ int i, status = 0;
+ unsigned int read_len = 0;
+
+ /* The first 4 bytes are the UID tag and the next four bytes are
+ the minimum packet length of a request packet */
+ if (len < DCI_PKT_REQ_MIN_LEN) {
+ pr_err("diag: dci: Invalid pkt len %d in %s\n", len, __func__);
+ return -EIO;
+ }
+ if (len > APPS_BUF_SIZE - 10) {
+ pr_err("diag: dci: Invalid payload length in %s\n", __func__);
+ return -EIO;
+ }
+ /* remove UID from user space pkt before sending to peripheral */
+ buf = buf + sizeof(int);
+ read_len += sizeof(int);
+ len = len - sizeof(int);
+ mutex_lock(&driver->dci_mutex);
+ /* prepare DCI packet */
+ driver->apps_dci_buf[0] = CONTROL_CHAR; /* start */
+ driver->apps_dci_buf[1] = 1; /* version */
+ *(uint16_t *) (driver->apps_dci_buf + 2) = len + 4 + 1; /* length */
+ driver->apps_dci_buf[4] = DCI_PKT_RSP_CODE;
+ *(int *)(driver->apps_dci_buf + 5) = driver->req_tracking_tbl[index].tag;
+ for (i = 0; i < len; i++)
+ driver->apps_dci_buf[i + 9] = *(buf + i);
+ read_len += len;
+ driver->apps_dci_buf[9 + len] = CONTROL_CHAR; /* end */
+ if ((read_len + 9) >= USER_SPACE_DATA) {
+ pr_err("diag: dci: Invalid length while forming dci pkt in %s", __func__);
+ mutex_unlock(&driver->dci_mutex);
+ return -EIO;
+ }
+
+ for (i = 0; i < NUM_SMD_DCI_CHANNELS; i++) {
+ struct diag_smd_info *smd_info = driver->separate_cmdrsp[i] ? &driver->smd_dci_cmd[i] : &driver->smd_dci[i];
+ if (entry.client_id == smd_info->peripheral) {
+ if (smd_info->ch) {
+ smd_write(smd_info->ch, driver->apps_dci_buf, len + 10);
+ status = DIAG_DCI_NO_ERROR;
+ }
+ break;
+ }
+ }
+
+ if (status != DIAG_DCI_NO_ERROR) {
+ pr_alert("diag: check DCI channel\n");
+ status = DIAG_DCI_SEND_DATA_FAIL;
+ }
+ mutex_unlock(&driver->dci_mutex);
+ return status;
+}
+
+int diag_register_dci_transaction(int uid)
+{
+ int i, new_dci_client = 1, ret = -1;
+
+ for (i = 0; i < dci_max_reg; i++) {
+ if (driver->req_tracking_tbl[i].pid == current->tgid) {
+ new_dci_client = 0;
+ break;
+ }
+ }
+ mutex_lock(&driver->dci_mutex);
+ /* Make an entry in kernel DCI table */
+ driver->dci_tag++;
+ for (i = 0; i < dci_max_reg; i++) {
+ if (driver->req_tracking_tbl[i].pid == 0) {
+ driver->req_tracking_tbl[i].pid = current->tgid;
+ driver->req_tracking_tbl[i].uid = uid;
+ driver->req_tracking_tbl[i].tag = driver->dci_tag;
+ ret = i;
+ break;
+ }
+ }
+ mutex_unlock(&driver->dci_mutex);
+ return ret;
+}
+
+int diag_process_dci_transaction(unsigned char *buf, int len)
+{
+ unsigned char *temp = buf;
+ uint16_t subsys_cmd_code, log_code, item_num;
+ int subsys_id, cmd_code, ret = -1, index = -1, found = 0;
+ struct diag_master_table entry;
+ int count, set_mask, num_codes, bit_index, event_id, offset = 0, i;
+ unsigned int byte_index, read_len = 0;
+ uint8_t equip_id, *log_mask_ptr, *head_log_mask_ptr, byte_mask;
+ uint8_t *event_mask_ptr;
+ struct diag_dci_client_tbl *dci_entry = NULL;
+
+ if (!driver->smd_dci[MODEM_DATA].ch) {
+ pr_err("diag: DCI smd channel for peripheral %d not valid for dci updates\n", driver->smd_dci[MODEM_DATA].peripheral);
+ return DIAG_DCI_SEND_DATA_FAIL;
+ }
+
+ if (!temp) {
+ pr_err("diag: Invalid buffer in %s\n", __func__);
+ return -ENOMEM;
+ }
+
+ /* This is Pkt request/response transaction */
+ if (*(int *)temp > 0) {
+ if (len < DCI_PKT_REQ_MIN_LEN || len > USER_SPACE_DATA) {
+ pr_err("diag: dci: Invalid length %d in %s\n", len, __func__);
+ return -EIO;
+ }
+ /* enter this UID into kernel table and return index */
+ index = diag_register_dci_transaction(*(int *)temp);
+ if (index < 0) {
+ pr_alert("diag: registering new DCI transaction failed\n");
+ return DIAG_DCI_NO_REG;
+ }
+ temp += sizeof(int);
+ /*
+ * Check for registered peripheral and fwd pkt to
+ * appropriate proc
+ */
+ cmd_code = (int)(*(char *)temp);
+ temp++;
+ subsys_id = (int)(*(char *)temp);
+ temp++;
+ subsys_cmd_code = *(uint16_t *) temp;
+ temp += sizeof(uint16_t);
+ read_len += sizeof(int) + 2 + sizeof(uint16_t);
+ if (read_len >= USER_SPACE_DATA) {
+ pr_err("diag: dci: Invalid length in %s\n", __func__);
+ return -EIO;
+ }
+ pr_debug("diag: %d %d %d", cmd_code, subsys_id, subsys_cmd_code);
+ for (i = 0; i < diag_max_reg; i++) {
+ entry = driver->table[i];
+ if (entry.process_id != NO_PROCESS) {
+ if (entry.cmd_code == cmd_code && entry.subsys_id == subsys_id && entry.cmd_code_lo <= subsys_cmd_code && entry.cmd_code_hi >= subsys_cmd_code) {
+ ret = diag_send_dci_pkt(entry, buf, len, index);
+ } else if (entry.cmd_code == 255 && cmd_code == 75) {
+ if (entry.subsys_id == subsys_id && entry.cmd_code_lo <= subsys_cmd_code && entry.cmd_code_hi >= subsys_cmd_code) {
+ ret = diag_send_dci_pkt(entry, buf, len, index);
+ }
+ } else if (entry.cmd_code == 255 && entry.subsys_id == 255) {
+ if (entry.cmd_code_lo <= cmd_code && entry.cmd_code_hi >= cmd_code) {
+ ret = diag_send_dci_pkt(entry, buf, len, index);
+ }
+ }
+ }
+ }
+ } else if (*(int *)temp == DCI_LOG_TYPE) {
+ /* Minimum length of a log mask config is 12 + 2 bytes for
+ at least one log code to be set or reset */
+ if (len < DCI_LOG_CON_MIN_LEN || len > USER_SPACE_DATA) {
+ pr_err("diag: dci: Invalid length in %s\n", __func__);
+ return -EIO;
+ }
+ /* find client table entry */
+ dci_entry = diag_dci_get_client_entry();
+ if (!dci_entry) {
+ pr_err("diag: In %s, invalid client\n", __func__);
+ return ret;
+ }
+
+ /* Extract each log code and put in client table */
+ temp += sizeof(int);
+ read_len += sizeof(int);
+ set_mask = *(int *)temp;
+ temp += sizeof(int);
+ read_len += sizeof(int);
+ num_codes = *(int *)temp;
+ temp += sizeof(int);
+ read_len += sizeof(int);
+
+ if (num_codes == 0 || (num_codes >= (USER_SPACE_DATA - 8) / 2)) {
+ pr_err("diag: dci: Invalid number of log codes %d\n", num_codes);
+ return -EIO;
+ }
+
+ head_log_mask_ptr = dci_entry->dci_log_mask;
+ if (!head_log_mask_ptr) {
+ pr_err("diag: dci: Invalid Log mask pointer in %s\n", __func__);
+ return -ENOMEM;
+ }
+ pr_debug("diag: head of dci log mask %p\n", head_log_mask_ptr);
+ count = 0; /* iterator for extracting log codes */
+ while (count < num_codes) {
+ if (read_len >= USER_SPACE_DATA) {
+ pr_err("diag: dci: Invalid length for log type in %s\n", __func__);
+ return -EIO;
+ }
+ log_code = *(uint16_t *) temp;
+ equip_id = LOG_GET_EQUIP_ID(log_code);
+ item_num = LOG_GET_ITEM_NUM(log_code);
+ byte_index = item_num / 8 + 2;
+ if (byte_index >= (DCI_MAX_ITEMS_PER_LOG_CODE + 2)) {
+ pr_err("diag: dci: Log type, invalid byte index\n");
+ return ret;
+ }
+ byte_mask = 0x01 << (item_num % 8);
+ /*
+ * Parse through log mask table and find
+ * relevant range
+ */
+ log_mask_ptr = head_log_mask_ptr;
+ found = 0;
+ offset = 0;
+ while (log_mask_ptr && (offset < DCI_LOG_MASK_SIZE)) {
+ if (*log_mask_ptr == equip_id) {
+ found = 1;
+ pr_debug("diag: found equip id = %x at %p\n", equip_id, log_mask_ptr);
+ break;
+ } else {
+ pr_debug("diag: did not find equip id = %x at %d\n", equip_id, *log_mask_ptr);
+ log_mask_ptr += 514;
+ offset += 514;
+ }
+ }
+ if (!found) {
+ pr_err("diag: dci equip id not found\n");
+ return ret;
+ }
+ *(log_mask_ptr + 1) = 1; /* set the dirty byte */
+ log_mask_ptr = log_mask_ptr + byte_index;
+ if (set_mask)
+ *log_mask_ptr |= byte_mask;
+ else
+ *log_mask_ptr &= ~byte_mask;
+ /* add to cumulative mask */
+ update_dci_cumulative_log_mask(offset, byte_index, byte_mask);
+ temp += 2;
+ read_len += 2;
+ count++;
+ ret = DIAG_DCI_NO_ERROR;
+ }
+ /* send updated mask to userspace clients */
+ diag_update_userspace_clients(DCI_LOG_MASKS_TYPE);
+ /* send updated mask to peripherals */
+ ret = diag_send_dci_log_mask(driver->smd_cntl[MODEM_DATA].ch);
+ } else if (*(int *)temp == DCI_EVENT_TYPE) {
+ /* Minimum length of an event mask config is 12 + 4 bytes for
+ at least one event id to be set or reset. */
+ if (len < DCI_EVENT_CON_MIN_LEN || len > USER_SPACE_DATA) {
+ pr_err("diag: dci: Invalid length in %s\n", __func__);
+ return -EIO;
+ }
+ /* find client table entry */
+ dci_entry = diag_dci_get_client_entry();
+ if (!dci_entry) {
+ pr_err("diag: In %s, invalid client\n", __func__);
+ return ret;
+ }
+ /* Extract each event id and put in client table */
+ temp += sizeof(int);
+ read_len += sizeof(int);
+ set_mask = *(int *)temp;
+ temp += sizeof(int);
+ read_len += sizeof(int);
+ num_codes = *(int *)temp;
+ temp += sizeof(int);
+ read_len += sizeof(int);
+
+ /* Check for positive number of event ids. Also, the number of
+ event ids should fit in the buffer along with set_mask and
+ num_codes which are 4 bytes each */
+ if (num_codes == 0 || (num_codes >= (USER_SPACE_DATA - 8) / 2)) {
+ pr_err("diag: dci: Invalid number of event ids %d\n", num_codes);
+ return -EIO;
+ }
+
+ event_mask_ptr = dci_entry->dci_event_mask;
+ if (!event_mask_ptr) {
+ pr_err("diag: dci: Invalid event mask pointer in %s\n", __func__);
+ return -ENOMEM;
+ }
+ pr_debug("diag: head of dci event mask %p\n", event_mask_ptr);
+ count = 0; /* iterator for extracting event ids */
+ while (count < num_codes) {
+ if (read_len >= USER_SPACE_DATA) {
+ pr_err("diag: dci: Invalid length for event type in %s\n", __func__);
+ return -EIO;
+ }
+ event_id = *(int *)temp;
+ byte_index = event_id / 8;
+ if (byte_index >= DCI_EVENT_MASK_SIZE) {
+ pr_err("diag: dci: Event type, invalid byte index\n");
+ return ret;
+ }
+ bit_index = event_id % 8;
+ byte_mask = 0x1 << bit_index;
+ /*
+ * Parse through event mask table and set
+ * relevant byte & bit combination
+ */
+ if (set_mask)
+ *(event_mask_ptr + byte_index) |= byte_mask;
+ else
+ *(event_mask_ptr + byte_index) &= ~byte_mask;
+ /* add to cumulative mask */
+ update_dci_cumulative_event_mask(byte_index, byte_mask);
+ temp += sizeof(int);
+ read_len += sizeof(int);
+ count++;
+ ret = DIAG_DCI_NO_ERROR;
+ }
+ /* send updated mask to userspace clients */
+ diag_update_userspace_clients(DCI_EVENT_MASKS_TYPE);
+ /* send updated mask to peripherals */
+ ret = diag_send_dci_event_mask(driver->smd_cntl[MODEM_DATA].ch);
+ } else {
+ pr_alert("diag: Incorrect DCI transaction\n");
+ }
+ return ret;
+}
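The byte/bit arithmetic used in the log-mask branch above can be condensed into a small host-side sketch. The `LOG_GET_EQUIP_ID`/`LOG_GET_ITEM_NUM` splits are assumptions here (the standard diag convention: top nibble is the equipment ID, low 12 bits the item number); the `+ 2` skips the equip-id and dirty bytes at the head of each 514-byte block.

```c
#include <assert.h>
#include <stdint.h>

/* Assumed log-code split; the driver gets these from its own
 * LOG_GET_EQUIP_ID()/LOG_GET_ITEM_NUM() macros. */
#define LOG_GET_EQUIP_ID(code) (((code) & 0xF000) >> 12)
#define LOG_GET_ITEM_NUM(code) ((code) & 0x0FFF)

/* Locate the byte and bit for a log code inside one 514-byte equip block:
 * byte 0 = equip id, byte 1 = dirty flag, bytes 2..513 = item bitmask. */
static unsigned int log_code_byte_index(uint16_t code)
{
    return LOG_GET_ITEM_NUM(code) / 8 + 2;
}

static uint8_t log_code_byte_mask(uint16_t code)
{
    return (uint8_t)(0x01 << (LOG_GET_ITEM_NUM(code) % 8));
}
```

For example, log code 0x11A2 lands in the equip-id-1 block, at item 418, i.e. byte 54 of the block with bit mask 0x04.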
+
+struct diag_dci_client_tbl *diag_dci_get_client_entry(void)
+{
+ return __diag_dci_get_client_entry(current->tgid);
+}
+
+void update_dci_cumulative_event_mask(int offset, uint8_t byte_mask)
+{
+ uint8_t *event_mask_ptr;
+ uint8_t *update_ptr = dci_cumulative_event_mask;
+ struct list_head *start, *temp;
+ struct diag_dci_client_tbl *entry = NULL;
+ bool is_set = false;
+
+ mutex_lock(&dci_event_mask_mutex);
+ update_ptr += offset;
+ list_for_each_safe(start, temp, &driver->dci_client_list) {
+ entry = list_entry(start, struct diag_dci_client_tbl, track);
+ event_mask_ptr = entry->dci_event_mask;
+ event_mask_ptr += offset;
+ if ((*event_mask_ptr & byte_mask) == byte_mask) {
+ is_set = true;
+ /* break even if one client has the event mask set */
+ break;
+ }
+ }
+ if (is_set == false)
+ *update_ptr &= ~byte_mask;
+ else
+ *update_ptr |= byte_mask;
+ mutex_unlock(&dci_event_mask_mutex);
+}
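The cumulative-mask rule implemented by `update_dci_cumulative_event_mask()` — a bit stays set as long as at least one registered client still has it set — can be shown in miniature. This is an illustrative sketch with a flat array of client masks standing in for the driver's linked list:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* A bit in the cumulative mask is the OR of that bit across all clients:
 * set it if any client has it, clear it only when no client does. */
static void update_cumulative_bit(uint8_t *cumulative, const uint8_t *client_masks,
                                  size_t nclients, size_t mask_len,
                                  size_t offset, uint8_t byte_mask)
{
    size_t i;
    int is_set = 0;

    for (i = 0; i < nclients; i++) {
        if ((client_masks[i * mask_len + offset] & byte_mask) == byte_mask) {
            is_set = 1;
            break; /* one client with the bit set is enough */
        }
    }
    if (is_set)
        cumulative[offset] |= byte_mask;
    else
        cumulative[offset] &= ~byte_mask;
}
```

This mirrors why the loop above can `break` early on the first match, and why the clear path only runs after every client has been checked.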
+
+void diag_dci_invalidate_cumulative_event_mask(void)
+{
+ int i = 0;
+ struct list_head *start, *temp;
+ struct diag_dci_client_tbl *entry = NULL;
+ uint8_t *update_ptr, *event_mask_ptr;
+ update_ptr = dci_cumulative_event_mask;
+
+ mutex_lock(&dci_event_mask_mutex);
+ /* clear stale bits before rebuilding from the client masks */
+ create_dci_event_mask_tbl(update_ptr);
+ list_for_each_safe(start, temp, &driver->dci_client_list) {
+ entry = list_entry(start, struct diag_dci_client_tbl, track);
+ event_mask_ptr = entry->dci_event_mask;
+ for (i = 0; i < DCI_EVENT_MASK_SIZE; i++)
+ *(update_ptr + i) |= *(event_mask_ptr + i);
+ }
+ mutex_unlock(&dci_event_mask_mutex);
+}
+
+int diag_send_dci_event_mask(smd_channel_t *ch)
+{
+ void *buf = driver->buf_event_mask_update;
+ int header_size = sizeof(struct diag_ctrl_event_mask);
+ int wr_size = -ENOMEM, retry_count = 0, timer;
+ int ret = DIAG_DCI_NO_ERROR;
+
+ mutex_lock(&driver->diag_cntl_mutex);
+ /* send event mask update */
+ driver->event_mask->cmd_type = DIAG_CTRL_MSG_EVENT_MASK;
+ driver->event_mask->data_len = 7 + DCI_EVENT_MASK_SIZE;
+ driver->event_mask->stream_id = DCI_MASK_STREAM;
+ driver->event_mask->status = 3; /* status for valid mask */
+ driver->event_mask->event_config = 1; /* event config */
+ driver->event_mask->event_mask_size = DCI_EVENT_MASK_SIZE;
+ memcpy(buf, driver->event_mask, header_size);
+ memcpy(buf + header_size, dci_cumulative_event_mask, DCI_EVENT_MASK_SIZE);
+ if (ch) {
+ while (retry_count < 3) {
+ wr_size = smd_write(ch, buf, header_size + DCI_EVENT_MASK_SIZE);
+ if (wr_size == -ENOMEM) {
+ retry_count++;
+ for (timer = 0; timer < 5; timer++)
+ udelay(2000);
+ } else {
+ break;
+ }
+ }
+ if (wr_size != header_size + DCI_EVENT_MASK_SIZE) {
+ pr_err("diag: error writing dci event mask %d, tried %d\n", wr_size, header_size + DCI_EVENT_MASK_SIZE);
+ ret = DIAG_DCI_SEND_DATA_FAIL;
+ }
+ } else {
+ pr_err("diag: ch not valid for dci event mask update\n");
+ ret = DIAG_DCI_SEND_DATA_FAIL;
+ }
+ mutex_unlock(&driver->diag_cntl_mutex);
+
+ return ret;
+}
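The write-retry pattern used by `diag_send_dci_event_mask()` (and again in the log-mask sender below) is worth isolating: retry `smd_write()` up to three times when the channel FIFO is full (`-ENOMEM`), giving it roughly 10 ms to drain each time. A host-side sketch, with a function pointer standing in for `smd_write` and the delay elided so it can be unit-tested:

```c
#include <assert.h>
#include <errno.h>

typedef int (*write_fn_t)(void *ctx, const void *buf, int len);

/* Retry a write up to 3 times when the channel reports -ENOMEM
 * (FIFO full); any other result, success or failure, ends the loop. */
static int write_with_retry(write_fn_t write_fn, void *ctx, const void *buf, int len)
{
    int wr_size = -ENOMEM, retry_count = 0;

    while (retry_count < 3) {
        wr_size = write_fn(ctx, buf, len);
        if (wr_size != -ENOMEM)
            break;
        retry_count++; /* the driver sleeps ~10ms (5 x udelay(2000)) here */
    }
    return wr_size;
}

/* Fake channel that fails with -ENOMEM a configurable number of times. */
struct fake_ch { int fails_left; int calls; };

static int fake_write(void *ctx, const void *buf, int len)
{
    struct fake_ch *ch = ctx;
    (void)buf;
    ch->calls++;
    if (ch->fails_left-- > 0)
        return -ENOMEM;
    return len;
}
```

With two transient failures the third attempt succeeds; with more, the caller sees `-ENOMEM` after exactly three tries, which is why the functions above compare `wr_size` against the expected byte count afterwards.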
+
+void update_dci_cumulative_log_mask(int offset, unsigned int byte_index, uint8_t byte_mask)
+{
+ int i;
+ uint8_t *update_ptr = dci_cumulative_log_mask;
+ uint8_t *log_mask_ptr;
+ bool is_set = false;
+ struct list_head *start, *temp;
+ struct diag_dci_client_tbl *entry = NULL;
+
+ mutex_lock(&dci_log_mask_mutex);
+ *update_ptr = 0;
+ /* set the equipment IDs */
+ for (i = 0; i < 16; i++)
+ *(update_ptr + (i * 514)) = i;
+
+ update_ptr += offset;
+ /* update the dirty bit */
+ *(update_ptr + 1) = 1;
+ update_ptr = update_ptr + byte_index;
+ list_for_each_safe(start, temp, &driver->dci_client_list) {
+ entry = list_entry(start, struct diag_dci_client_tbl, track);
+ log_mask_ptr = entry->dci_log_mask;
+ log_mask_ptr = log_mask_ptr + offset + byte_index;
+ if ((*log_mask_ptr & byte_mask) == byte_mask) {
+ is_set = true;
+ /* break even if one client has the log mask set */
+ break;
+ }
+ }
+
+ if (is_set == false)
+ *update_ptr &= ~byte_mask;
+ else
+ *update_ptr |= byte_mask;
+ mutex_unlock(&dci_log_mask_mutex);
+}
+
+void diag_dci_invalidate_cumulative_log_mask(void)
+{
+ int i = 0;
+ struct list_head *start, *temp;
+ struct diag_dci_client_tbl *entry = NULL;
+ uint8_t *update_ptr, *log_mask_ptr;
+ update_ptr = dci_cumulative_log_mask;
+
+ mutex_lock(&dci_log_mask_mutex);
+ /* reset to a clean table (equip IDs intact) before OR-ing client masks */
+ create_dci_log_mask_tbl(update_ptr);
+ list_for_each_safe(start, temp, &driver->dci_client_list) {
+ entry = list_entry(start, struct diag_dci_client_tbl, track);
+ log_mask_ptr = entry->dci_log_mask;
+ for (i = 0; i < DCI_LOG_MASK_SIZE; i++)
+ *(update_ptr + i) |= *(log_mask_ptr + i);
+ }
+ mutex_unlock(&dci_log_mask_mutex);
+}
+
+int diag_send_dci_log_mask(smd_channel_t *ch)
+{
+ void *buf = driver->buf_log_mask_update;
+ int header_size = sizeof(struct diag_ctrl_log_mask);
+ uint8_t *log_mask_ptr = dci_cumulative_log_mask;
+ int i, wr_size = -ENOMEM, retry_count = 0, timer;
+ int ret = DIAG_DCI_NO_ERROR;
+
+ if (!ch) {
+ pr_err("diag: ch not valid for dci log mask update\n");
+ return DIAG_DCI_SEND_DATA_FAIL;
+ }
+
+ mutex_lock(&driver->diag_cntl_mutex);
+ for (i = 0; i < 16; i++) {
+ retry_count = 0;
+ driver->log_mask->cmd_type = DIAG_CTRL_MSG_LOG_MASK;
+ driver->log_mask->num_items = 512;
+ driver->log_mask->data_len = 11 + 512;
+ driver->log_mask->stream_id = DCI_MASK_STREAM;
+ driver->log_mask->status = 3; /* status for valid mask */
+ driver->log_mask->equip_id = *log_mask_ptr;
+ driver->log_mask->log_mask_size = 512;
+ memcpy(buf, driver->log_mask, header_size);
+ memcpy(buf + header_size, log_mask_ptr + 2, 512);
+ /* if dirty byte is set and channel is valid */
+ if (ch && *(log_mask_ptr + 1)) {
+ while (retry_count < 3) {
+ wr_size = smd_write(ch, buf, header_size + 512);
+ if (wr_size == -ENOMEM) {
+ retry_count++;
+ for (timer = 0; timer < 5; timer++)
+ udelay(2000);
+ } else
+ break;
+ }
+ if (wr_size != header_size + 512) {
+ pr_err("diag: dci log mask update failed %d, tried %d for equip_id %d\n", wr_size, header_size + 512, driver->log_mask->equip_id);
+ ret = DIAG_DCI_SEND_DATA_FAIL;
+
+ } else {
+ *(log_mask_ptr + 1) = 0; /* clear dirty byte */
+ pr_debug("diag: updated dci log equip ID %d\n", *log_mask_ptr);
+ }
+ }
+ log_mask_ptr += 514;
+ }
+ mutex_unlock(&driver->diag_cntl_mutex);
+
+ return ret;
+}
+
+void create_dci_log_mask_tbl(unsigned char *tbl_buf)
+{
+ uint8_t i;
+ int count = 0;
+
+ if (!tbl_buf)
+ return;
+
+ /* create hard coded table for log mask with 16 categories */
+ for (i = 0; i < 16; i++) {
+ *(uint8_t *) tbl_buf = i;
+ pr_debug("diag: put value %x at %p\n", i, tbl_buf);
+ memset(tbl_buf + 1, 0, 513); /* set dirty bit as 0 */
+ tbl_buf += 514;
+ count += 514;
+ }
+}
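The table that `create_dci_log_mask_tbl()` lays out — 16 blocks of 514 bytes, each an equipment-ID byte, a dirty byte, and a 512-byte item mask — can be mirrored and checked on the host. The named constants here are illustrative (the driver uses the literals 16 and 514):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define DCI_NUM_EQUIP_IDS 16   /* assumed name for the 16 log categories */
#define DCI_EQUIP_BLOCK   514  /* 1 equip-id byte + 1 dirty byte + 512 mask bytes */

/* Mirror of create_dci_log_mask_tbl(): each block starts with its
 * equipment ID, followed by a cleared dirty byte and a cleared mask. */
static void init_log_mask_tbl(uint8_t *tbl)
{
    int i;

    for (i = 0; i < DCI_NUM_EQUIP_IDS; i++) {
        tbl[0] = (uint8_t)i;     /* equipment ID */
        memset(tbl + 1, 0, 513); /* dirty byte + 512-byte mask cleared */
        tbl += DCI_EQUIP_BLOCK;
    }
}
```

This 514-byte stride is the same one the mask-update code walks with `log_mask_ptr += 514`, and 16 × 514 is exactly the `DCI_LOG_MASK_SIZE` defined in the header below.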
+
+void create_dci_event_mask_tbl(unsigned char *tbl_buf)
+{
+ if (!tbl_buf)
+ return;
+
+ memset(tbl_buf, 0, DCI_EVENT_MASK_SIZE);
+}
+
+static int diag_dci_probe(struct platform_device *pdev)
+{
+ int err = 0;
+ int index;
+
+ if (pdev->id == SMD_APPS_MODEM) {
+ index = MODEM_DATA;
+ err = smd_open("DIAG_2", &driver->smd_dci[index].ch, &driver->smd_dci[index], diag_smd_notify);
+ driver->smd_dci[index].ch_save = driver->smd_dci[index].ch;
+ driver->dci_device = &pdev->dev;
+ driver->dci_device->power.wakeup = wakeup_source_register("DIAG_DCI_WS");
+ if (err)
+ pr_err("diag: In %s, cannot open DCI port, Id = %d, err: %d\n", __func__, pdev->id, err);
+ }
+
+ return err;
+}
+
+static int diag_dci_cmd_probe(struct platform_device *pdev)
+{
+ int err = 0;
+ int index;
+
+ if (pdev->id == SMD_APPS_MODEM) {
+ index = MODEM_DATA;
+ err = smd_named_open_on_edge("DIAG_2_CMD", pdev->id, &driver->smd_dci_cmd[index].ch, &driver->smd_dci_cmd[index], diag_smd_notify);
+ driver->smd_dci_cmd[index].ch_save = driver->smd_dci_cmd[index].ch;
+ driver->dci_cmd_device = &pdev->dev;
+ driver->dci_cmd_device->power.wakeup = wakeup_source_register("DIAG_DCI_CMD_WS");
+ if (err)
+ pr_err("diag: In %s, cannot open DCI port, Id = %d, err: %d\n", __func__, pdev->id, err);
+ }
+
+ return err;
+}
+
+static int diag_dci_runtime_suspend(struct device *dev)
+{
+ dev_dbg(dev, "pm_runtime: suspending...\n");
+ return 0;
+}
+
+static int diag_dci_runtime_resume(struct device *dev)
+{
+ dev_dbg(dev, "pm_runtime: resuming...\n");
+ return 0;
+}
+
+static const struct dev_pm_ops diag_dci_dev_pm_ops = {
+ .runtime_suspend = diag_dci_runtime_suspend,
+ .runtime_resume = diag_dci_runtime_resume,
+};
+
+struct platform_driver msm_diag_dci_driver = {
+ .probe = diag_dci_probe,
+ .driver = {
+ .name = "DIAG_2",
+ .owner = THIS_MODULE,
+ .pm = &diag_dci_dev_pm_ops,
+ },
+};
+
+struct platform_driver msm_diag_dci_cmd_driver = {
+ .probe = diag_dci_cmd_probe,
+ .driver = {
+ .name = "DIAG_2_CMD",
+ .owner = THIS_MODULE,
+ .pm = &diag_dci_dev_pm_ops,
+ },
+};
+
+int diag_dci_init(void)
+{
+ int success = 0;
+ int i;
+
+ driver->dci_tag = 0;
+ driver->dci_client_id = 0;
+ driver->num_dci_client = 0;
+ driver->dci_device = NULL;
+ driver->dci_cmd_device = NULL;
+ mutex_init(&driver->dci_mutex);
+ mutex_init(&dci_log_mask_mutex);
+ mutex_init(&dci_event_mask_mutex);
+ mutex_init(&dci_health_mutex);
+ spin_lock_init(&ws_lock);
+
+ for (i = 0; i < NUM_SMD_DCI_CHANNELS; i++) {
+ success = diag_smd_constructor(&driver->smd_dci[i], i, SMD_DCI_TYPE);
+ if (!success)
+ goto err;
+ }
+
+ if (driver->supports_separate_cmdrsp) {
+ for (i = 0; i < NUM_SMD_DCI_CMD_CHANNELS; i++) {
+ success = diag_smd_constructor(&driver->smd_dci_cmd[i], i, SMD_DCI_CMD_TYPE);
+ if (!success)
+ goto err;
+ }
+ }
+
+ if (driver->req_tracking_tbl == NULL) {
+ driver->req_tracking_tbl = kzalloc(dci_max_reg * sizeof(struct dci_pkt_req_tracking_tbl), GFP_KERNEL);
+ if (driver->req_tracking_tbl == NULL)
+ goto err;
+ }
+ if (driver->apps_dci_buf == NULL) {
+ driver->apps_dci_buf = kzalloc(APPS_BUF_SIZE, GFP_KERNEL);
+ if (driver->apps_dci_buf == NULL)
+ goto err;
+ }
+ INIT_LIST_HEAD(&driver->dci_client_list);
+
+ driver->diag_dci_wq = create_singlethread_workqueue("diag_dci_wq");
+ success = platform_driver_register(&msm_diag_dci_driver);
+ if (success) {
+ pr_err("diag: Could not register DCI driver\n");
+ goto err;
+ }
+ if (driver->supports_separate_cmdrsp) {
+ success = platform_driver_register(&msm_diag_dci_cmd_driver);
+ if (success) {
+ pr_err("diag: Could not register DCI cmd driver\n");
+ goto err;
+ }
+ }
+ return DIAG_DCI_NO_ERROR;
+err:
+ pr_err("diag: Could not initialize diag DCI buffers\n");
+ kfree(driver->req_tracking_tbl);
+ kfree(driver->apps_dci_buf);
+ for (i = 0; i < NUM_SMD_DCI_CHANNELS; i++)
+ diag_smd_destructor(&driver->smd_dci[i]);
+
+ if (driver->supports_separate_cmdrsp)
+ for (i = 0; i < NUM_SMD_DCI_CMD_CHANNELS; i++)
+ diag_smd_destructor(&driver->smd_dci_cmd[i]);
+
+ if (driver->diag_dci_wq)
+ destroy_workqueue(driver->diag_dci_wq);
+ mutex_destroy(&driver->dci_mutex);
+ mutex_destroy(&dci_log_mask_mutex);
+ mutex_destroy(&dci_event_mask_mutex);
+ mutex_destroy(&dci_health_mutex);
+ return DIAG_DCI_NO_REG;
+}
+
+void diag_dci_exit(void)
+{
+ int i;
+
+ for (i = 0; i < NUM_SMD_DCI_CHANNELS; i++)
+ diag_smd_destructor(&driver->smd_dci[i]);
+
+ platform_driver_unregister(&msm_diag_dci_driver);
+
+ if (driver->supports_separate_cmdrsp) {
+ for (i = 0; i < NUM_SMD_DCI_CMD_CHANNELS; i++)
+ diag_smd_destructor(&driver->smd_dci_cmd[i]);
+
+ platform_driver_unregister(&msm_diag_dci_cmd_driver);
+ }
+ kfree(driver->req_tracking_tbl);
+ kfree(driver->apps_dci_buf);
+ mutex_destroy(&driver->dci_mutex);
+ mutex_destroy(&dci_log_mask_mutex);
+ mutex_destroy(&dci_event_mask_mutex);
+ mutex_destroy(&dci_health_mutex);
+ destroy_workqueue(driver->diag_dci_wq);
+}
+
+int diag_dci_clear_log_mask(void)
+{
+ int j, k, err = DIAG_DCI_NO_ERROR;
+ uint8_t *log_mask_ptr, *update_ptr;
+ struct list_head *start, *temp;
+ struct diag_dci_client_tbl *entry = NULL;
+
+ entry = diag_dci_get_client_entry();
+ if (!entry) {
+ pr_err("diag: In %s, invalid client entry\n", __func__);
+ return DIAG_DCI_TABLE_ERR;
+ }
+
+ mutex_lock(&dci_log_mask_mutex);
+ create_dci_log_mask_tbl(entry->dci_log_mask);
+ memset(dci_cumulative_log_mask, 0x0, DCI_LOG_MASK_SIZE);
+ list_for_each_safe(start, temp, &driver->dci_client_list) {
+ entry = list_entry(start, struct diag_dci_client_tbl, track);
+ update_ptr = dci_cumulative_log_mask;
+ log_mask_ptr = entry->dci_log_mask;
+ for (j = 0; j < 16; j++) {
+ *update_ptr = j;
+ *(update_ptr + 1) = 1;
+ update_ptr += 2;
+ log_mask_ptr += 2;
+ for (k = 0; k < 513; k++) {
+ *update_ptr |= *log_mask_ptr;
+ update_ptr++;
+ log_mask_ptr++;
+ }
+ }
+ }
+ mutex_unlock(&dci_log_mask_mutex);
+ /* send updated mask to userspace clients */
+ diag_update_userspace_clients(DCI_LOG_MASKS_TYPE);
+ /* Send updated mask to peripherals */
+ err = diag_send_dci_log_mask(driver->smd_cntl[MODEM_DATA].ch);
+ return err;
+}
+
+int diag_dci_clear_event_mask(void)
+{
+ int j, err = DIAG_DCI_NO_ERROR;
+ uint8_t *event_mask_ptr, *update_ptr;
+ struct list_head *start, *temp;
+ struct diag_dci_client_tbl *entry = NULL;
+
+ entry = diag_dci_get_client_entry();
+ if (!entry) {
+ pr_err("diag: In %s, invalid client entry\n", __func__);
+ return DIAG_DCI_TABLE_ERR;
+ }
+
+ mutex_lock(&dci_event_mask_mutex);
+ memset(entry->dci_event_mask, 0x0, DCI_EVENT_MASK_SIZE);
+ memset(dci_cumulative_event_mask, 0x0, DCI_EVENT_MASK_SIZE);
+ update_ptr = dci_cumulative_event_mask;
+
+ list_for_each_safe(start, temp, &driver->dci_client_list) {
+ entry = list_entry(start, struct diag_dci_client_tbl, track);
+ event_mask_ptr = entry->dci_event_mask;
+ for (j = 0; j < DCI_EVENT_MASK_SIZE; j++)
+ *(update_ptr + j) |= *(event_mask_ptr + j);
+ }
+ mutex_unlock(&dci_event_mask_mutex);
+ /* send updated mask to userspace clients */
+ diag_update_userspace_clients(DCI_EVENT_MASKS_TYPE);
+ /* Send updated mask to peripherals */
+ err = diag_send_dci_event_mask(driver->smd_cntl[MODEM_DATA].ch);
+ return err;
+}
+
+int diag_dci_query_log_mask(uint16_t log_code)
+{
+ return __diag_dci_query_log_mask(diag_dci_get_client_entry(), log_code);
+}
+
+int diag_dci_query_event_mask(uint16_t event_id)
+{
+ return __diag_dci_query_event_mask(diag_dci_get_client_entry(), event_id);
+}
+
+uint8_t diag_dci_get_cumulative_real_time(void)
+{
+ uint8_t real_time = MODE_NONREALTIME;
+ struct list_head *start, *temp;
+ struct diag_dci_client_tbl *entry = NULL;
+
+ list_for_each_safe(start, temp, &driver->dci_client_list) {
+ entry = list_entry(start, struct diag_dci_client_tbl, track);
+ if (entry->real_time == MODE_REALTIME) {
+ real_time = MODE_REALTIME;
+ break;
+ }
+ }
+ return real_time;
+}
+
+int diag_dci_set_real_time(uint8_t real_time)
+{
+ struct diag_dci_client_tbl *entry = NULL;
+ entry = diag_dci_get_client_entry();
+ if (!entry) {
+ pr_err("diag: In %s, invalid client entry\n", __func__);
+ return 0;
+ }
+ entry->real_time = real_time;
+ return 1;
+}
+
+void diag_dci_try_activate_wakeup_source(smd_channel_t *channel)
+{
+ spin_lock_irqsave(&ws_lock, ws_lock_flags);
+ if (channel == driver->smd_dci[MODEM_DATA].ch) {
+ pm_wakeup_event(driver->dci_device, DCI_WAKEUP_TIMEOUT);
+ pm_stay_awake(driver->dci_device);
+ } else if (channel == driver->smd_dci_cmd[MODEM_DATA].ch) {
+ pm_wakeup_event(driver->dci_cmd_device, DCI_WAKEUP_TIMEOUT);
+ pm_stay_awake(driver->dci_cmd_device);
+ }
+ spin_unlock_irqrestore(&ws_lock, ws_lock_flags);
+}
+
+void diag_dci_try_deactivate_wakeup_source(smd_channel_t *channel)
+{
+ spin_lock_irqsave(&ws_lock, ws_lock_flags);
+ if (channel == driver->smd_dci[MODEM_DATA].ch)
+ pm_relax(driver->dci_device);
+ else if (channel == driver->smd_dci_cmd[MODEM_DATA].ch)
+ pm_relax(driver->dci_cmd_device);
+ spin_unlock_irqrestore(&ws_lock, ws_lock_flags);
+}
+
+int diag_dci_register_client(uint16_t peripheral_list, int signal)
+{
+ int i;
+ struct diag_dci_client_tbl *new_entry = NULL;
+
+ if (driver->dci_state == DIAG_DCI_NO_REG)
+ return DIAG_DCI_NO_REG;
+
+ if (driver->num_dci_client >= MAX_DCI_CLIENTS)
+ return DIAG_DCI_NO_REG;
+
+ new_entry = kzalloc(sizeof(struct diag_dci_client_tbl), GFP_KERNEL);
+ if (new_entry == NULL) {
+ pr_err("diag: unable to alloc memory\n");
+ return -ENOMEM;
+ }
+
+ mutex_lock(&driver->dci_mutex);
+ if (!(driver->num_dci_client)) {
+ for (i = 0; i < NUM_SMD_DCI_CHANNELS; i++)
+ driver->smd_dci[i].in_busy_1 = 0;
+ if (driver->supports_separate_cmdrsp)
+ for (i = 0; i < NUM_SMD_DCI_CMD_CHANNELS; i++)
+ driver->smd_dci_cmd[i].in_busy_1 = 0;
+ }
+
+ new_entry->client = current;
+ new_entry->list = peripheral_list;
+ new_entry->signal_type = signal;
+ new_entry->dci_log_mask = kzalloc(DCI_LOG_MASK_SIZE, GFP_KERNEL);
+ if (!new_entry->dci_log_mask) {
+ pr_err("diag: Unable to create log mask for client, %d", driver->dci_client_id);
+ goto fail_alloc;
+ }
+ create_dci_log_mask_tbl(new_entry->dci_log_mask);
+ new_entry->dci_event_mask = kzalloc(DCI_EVENT_MASK_SIZE, GFP_KERNEL);
+ if (!new_entry->dci_event_mask) {
+ pr_err("diag: Unable to create event mask for client, %d", driver->dci_client_id);
+ goto fail_alloc;
+ }
+ create_dci_event_mask_tbl(new_entry->dci_event_mask);
+ new_entry->data_len = 0;
+ new_entry->dci_data = kzalloc(IN_BUF_SIZE, GFP_KERNEL);
+ if (!new_entry->dci_data) {
+ pr_err("diag: Unable to allocate dci data memory for client, %d", driver->dci_client_id);
+ goto fail_alloc;
+ }
+ new_entry->total_capacity = IN_BUF_SIZE;
+ new_entry->dci_apps_buffer = kzalloc(driver->itemsize_dci, GFP_KERNEL);
+ if (!new_entry->dci_apps_buffer) {
+ pr_err("diag: Unable to allocate dci apps data memory for client, %d", driver->dci_client_id);
+ goto fail_alloc;
+ }
+ new_entry->dci_apps_data = NULL;
+ new_entry->apps_data_len = 0;
+ new_entry->apps_in_busy_1 = 0;
+ new_entry->dci_apps_tbl_size = (dci_apps_tbl_size < driver->poolsize_dci + 1) ? (driver->poolsize_dci + 1) : dci_apps_tbl_size;
+ new_entry->dci_apps_tbl = kzalloc(new_entry->dci_apps_tbl_size * sizeof(struct diag_write_device), GFP_KERNEL);
+ if (!new_entry->dci_apps_tbl) {
+ pr_err("diag: Unable to allocate dci apps table for client, %d", driver->dci_client_id);
+ goto fail_alloc;
+ }
+ new_entry->dropped_logs = 0;
+ new_entry->dropped_events = 0;
+ new_entry->received_logs = 0;
+ new_entry->received_events = 0;
+ new_entry->real_time = 1;
+ mutex_init(&new_entry->data_mutex);
+ list_add(&new_entry->track, &driver->dci_client_list);
+ driver->num_dci_client++;
+ driver->dci_client_id++;
+ if (driver->num_dci_client == 1)
+ diag_update_proc_vote(DIAG_PROC_DCI, VOTE_UP);
+ queue_work(driver->diag_real_time_wq, &driver->diag_real_time_work);
+ mutex_unlock(&driver->dci_mutex);
+
+ return driver->dci_client_id;
+
+fail_alloc:
+ kfree(new_entry->dci_log_mask);
+ kfree(new_entry->dci_event_mask);
+ kfree(new_entry->dci_apps_tbl);
+ kfree(new_entry->dci_apps_buffer);
+ kfree(new_entry->dci_data);
+ kfree(new_entry);
+
+ return -ENOMEM;
+}
+
+int diag_dci_deinit_client(void)
+{
+ int ret = DIAG_DCI_NO_ERROR, real_time = MODE_REALTIME, i;
+ struct diag_dci_client_tbl *entry = diag_dci_get_client_entry();
+
+ if (!entry)
+ return DIAG_DCI_NOT_SUPPORTED;
+
+ mutex_lock(&driver->dci_mutex);
+ /*
+ * Remove the entry from the list before freeing the buffers
+ * to ensure that we don't have any invalid access.
+ */
+ list_del(&entry->track);
+ driver->num_dci_client--;
+ /*
+ * Clear the client's log and event masks, update the cumulative
+ * masks and send the masks to peripherals
+ */
+ kfree(entry->dci_log_mask);
+ diag_update_userspace_clients(DCI_LOG_MASKS_TYPE);
+ diag_dci_invalidate_cumulative_log_mask();
+ ret = diag_send_dci_log_mask(driver->smd_cntl[MODEM_DATA].ch);
+ if (ret != DIAG_DCI_NO_ERROR) {
+ mutex_unlock(&driver->dci_mutex);
+ return ret;
+ }
+ kfree(entry->dci_event_mask);
+ diag_update_userspace_clients(DCI_EVENT_MASKS_TYPE);
+ diag_dci_invalidate_cumulative_event_mask();
+ ret = diag_send_dci_event_mask(driver->smd_cntl[MODEM_DATA].ch);
+ if (ret != DIAG_DCI_NO_ERROR) {
+ mutex_unlock(&driver->dci_mutex);
+ return ret;
+ }
+
+ /* Clean up the client's apps buffer */
+ mutex_lock(&entry->data_mutex);
+ for (i = 0; i < entry->dci_apps_tbl_size; i++) {
+ if (entry->dci_apps_tbl[i].buf != NULL && (entry->dci_apps_tbl[i].buf != entry->dci_apps_buffer)) {
+ diagmem_free(driver, entry->dci_apps_tbl[i].buf, POOL_TYPE_DCI);
+ }
+ entry->dci_apps_tbl[i].buf = NULL;
+ entry->dci_apps_tbl[i].length = 0;
+ }
+
+ kfree(entry->dci_data);
+ kfree(entry->dci_apps_buffer);
+ kfree(entry->dci_apps_tbl);
+ mutex_unlock(&entry->data_mutex);
+ kfree(entry);
+
+ if (driver->num_dci_client == 0) {
+ diag_update_proc_vote(DIAG_PROC_DCI, VOTE_DOWN);
+ } else {
+ real_time = diag_dci_get_cumulative_real_time();
+ diag_update_real_time_vote(DIAG_PROC_DCI, real_time);
+ }
+ queue_work(driver->diag_real_time_wq, &driver->diag_real_time_work);
+
+ mutex_unlock(&driver->dci_mutex);
+
+ return DIAG_DCI_NO_ERROR;
+}
diff --git a/drivers/char/diag/diag_dci.h b/drivers/char/diag/diag_dci.h
new file mode 100644
index 0000000..ce9158b
--- /dev/null
+++ b/drivers/char/diag/diag_dci.h
@@ -0,0 +1,160 @@
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#ifndef DIAG_DCI_H
+#define DIAG_DCI_H
+
+#define MAX_DCI_CLIENTS 10
+#define DCI_PKT_RSP_CODE 0x93
+#define DCI_DELAYED_RSP_CODE 0x94
+#define LOG_CMD_CODE 0x10
+#define EVENT_CMD_CODE 0x60
+#define DCI_PKT_RSP_TYPE 0
+#define DCI_LOG_TYPE -1
+#define DCI_EVENT_TYPE -2
+#define SET_LOG_MASK 1
+#define DISABLE_LOG_MASK 0
+#define MAX_EVENT_SIZE 512
+#define DCI_CLIENT_INDEX_INVALID -1
+#define DCI_PKT_REQ_MIN_LEN 5
+#define DCI_LOG_CON_MIN_LEN 14
+#define DCI_EVENT_CON_MIN_LEN 16
+
+#ifdef CONFIG_DEBUG_FS
+#define DIAG_DCI_DEBUG_CNT 100
+#define DIAG_DCI_DEBUG_LEN 100
+#endif
+
+/* 16 log code categories, each has:
+ * 1 byte equip id + 1 dirty byte + 512 byte max log mask
+ */
+#define DCI_LOG_MASK_SIZE (16*514)
+#define DCI_EVENT_MASK_SIZE 512
+#define DCI_MASK_STREAM 2
+#define DCI_MAX_LOG_CODES 16
+#define DCI_MAX_ITEMS_PER_LOG_CODE 512
+
+extern unsigned int dci_max_reg;
+extern unsigned int dci_max_clients;
+extern unsigned char dci_cumulative_log_mask[DCI_LOG_MASK_SIZE];
+extern unsigned char dci_cumulative_event_mask[DCI_EVENT_MASK_SIZE];
+extern struct mutex dci_health_mutex;
+
+struct dci_pkt_req_tracking_tbl {
+ int pid;
+ int uid;
+ int tag;
+};
+
+struct diag_dci_client_tbl {
+ struct task_struct *client;
+ uint16_t list; /* bit mask */
+ int signal_type;
+ unsigned char *dci_log_mask;
+ unsigned char *dci_event_mask;
+ unsigned char *dci_data;
+ int data_len;
+ int total_capacity;
+ /* Buffer that each client owns for sending data */
+ unsigned char *dci_apps_buffer;
+ /* Pointer to buffer currently aggregating data in.
+ * May point to dci_apps_buffer or buffer from
+ * dci memory pool
+ */
+ unsigned char *dci_apps_data;
+ int dci_apps_tbl_size;
+ struct diag_write_device *dci_apps_tbl;
+ int apps_data_len;
+ int apps_in_busy_1;
+ int dropped_logs;
+ int dropped_events;
+ int received_logs;
+ int received_events;
+ struct mutex data_mutex;
+ uint8_t real_time;
+ struct list_head track;
+};
+
+/* This is used for DCI health stats */
+struct diag_dci_health_stats {
+ int dropped_logs;
+ int dropped_events;
+ int received_logs;
+ int received_events;
+ int reset_status;
+};
+
+/* This is used for querying DCI Log
+ or Event Mask */
+struct diag_log_event_stats {
+ uint16_t code;
+ int is_set;
+};
+
+enum {
+ DIAG_DCI_NO_ERROR = 1001, /* No error */
+ DIAG_DCI_NO_REG, /* Could not register */
+ DIAG_DCI_NO_MEM, /* Failed memory allocation */
+ DIAG_DCI_NOT_SUPPORTED, /* This particular client is not supported */
+ DIAG_DCI_HUGE_PACKET, /* Request/Response Packet too huge */
+ DIAG_DCI_SEND_DATA_FAIL, /* writing to kernel or peripheral fails */
+ DIAG_DCI_TABLE_ERR /* Error dealing with registration tables */
+};
+
+#ifdef CONFIG_DEBUG_FS
+/* To collect debug information during each smd read */
+struct diag_dci_data_info {
+ unsigned long iteration;
+ int data_size;
+ char time_stamp[DIAG_TS_SIZE];
+ uint8_t ch_type;
+};
+
+extern struct diag_dci_data_info *dci_data_smd;
+extern struct mutex dci_stat_mutex;
+#endif
+
+int diag_dci_init(void);
+void diag_dci_exit(void);
+int diag_dci_register_client(uint16_t peripheral_list, int signal);
+int diag_dci_deinit_client(void);
+void diag_update_smd_dci_work_fn(struct work_struct *);
+void diag_dci_notify_client(int peripheral_mask, int data);
+int dci_apps_write(struct diag_dci_client_tbl *entry);
+void diag_process_apps_dci_read_data(int data_type, void *buf, int recd_bytes);
+int diag_process_smd_dci_read_data(struct diag_smd_info *smd_info, void *buf, int recd_bytes);
+int diag_process_dci_transaction(unsigned char *buf, int len);
+int diag_send_dci_pkt(struct diag_master_table entry, unsigned char *buf, int len, int index);
+void extract_dci_pkt_rsp(struct diag_smd_info *smd_info, unsigned char *buf, int len);
+struct diag_dci_client_tbl *diag_dci_get_client_entry(void);
+/* DCI Log streaming functions */
+void create_dci_log_mask_tbl(unsigned char *tbl_buf);
+void update_dci_cumulative_log_mask(int offset, unsigned int byte_index, uint8_t byte_mask);
+void diag_dci_invalidate_cumulative_log_mask(void);
+int diag_send_dci_log_mask(smd_channel_t *ch);
+void extract_dci_log(unsigned char *buf, int len, int data_source);
+int diag_dci_clear_log_mask(void);
+int diag_dci_query_log_mask(uint16_t log_code);
+/* DCI event streaming functions */
+void update_dci_cumulative_event_mask(int offset, uint8_t byte_mask);
+void diag_dci_invalidate_cumulative_event_mask(void);
+int diag_send_dci_event_mask(smd_channel_t *ch);
+void extract_dci_events(unsigned char *buf, int len, int data_source);
+void create_dci_event_mask_tbl(unsigned char *tbl_buf);
+int diag_dci_clear_event_mask(void);
+int diag_dci_query_event_mask(uint16_t event_id);
+void diag_dci_smd_record_info(int read_bytes, uint8_t ch_type);
+uint8_t diag_dci_get_cumulative_real_time(void);
+int diag_dci_set_real_time(uint8_t real_time);
+/* Functions related to DCI wakeup sources */
+void diag_dci_try_activate_wakeup_source(smd_channel_t *channel);
+void diag_dci_try_deactivate_wakeup_source(smd_channel_t *channel);
+#endif
diff --git a/drivers/char/diag/diag_debugfs.c b/drivers/char/diag/diag_debugfs.c
new file mode 100644
index 0000000..723c218
--- /dev/null
+++ b/drivers/char/diag/diag_debugfs.c
@@ -0,0 +1,648 @@
+/* Copyright (c) 2011-2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifdef CONFIG_DEBUG_FS
+
+#include <linux/slab.h>
+#include <linux/debugfs.h>
+#include "diagchar.h"
+#include "diagfwd.h"
+#include "diagfwd_bridge.h"
+#include "diagfwd_hsic.h"
+#include "diagmem.h"
+#include "diag_dci.h"
+
+#define DEBUG_BUF_SIZE 4096
+static struct dentry *diag_dbgfs_dent;
+static int diag_dbgfs_table_index;
+static int diag_dbgfs_finished;
+static int diag_dbgfs_dci_data_index;
+static int diag_dbgfs_dci_finished;
+
+static ssize_t diag_dbgfs_read_status(struct file *file, char __user *ubuf, size_t count, loff_t *ppos)
+{
+ char *buf;
+ int ret;
+
+ buf = kzalloc(sizeof(char) * DEBUG_BUF_SIZE, GFP_KERNEL);
+ if (!buf) {
+ pr_err("diag: %s, Error allocating memory\n", __func__);
+ return -ENOMEM;
+ }
+
+ ret = scnprintf(buf, DEBUG_BUF_SIZE,
+ "modem ch: 0x%x\n"
+ "lpass ch: 0x%x\n"
+ "riva ch: 0x%x\n"
+ "dci ch: 0x%x\n"
+ "modem cntl_ch: 0x%x\n"
+ "lpass cntl_ch: 0x%x\n"
+ "riva cntl_ch: 0x%x\n"
+ "modem cmd ch: 0x%x\n"
+ "dci cmd ch: 0x%x\n"
+ "CPU Tools id: %d\n"
+ "Apps only: %d\n"
+ "Apps master: %d\n"
+ "Check Polling Response: %d\n"
+ "polling_reg_flag: %d\n"
+ "uses device tree: %d\n"
+ "supports separate cmdrsp: %d\n"
+ "Modem separate cmdrsp: %d\n"
+ "LPASS separate cmdrsp: %d\n"
+ "RIVA separate cmdrsp: %d\n"
+ "Modem in_busy_1: %d\n"
+ "Modem in_busy_2: %d\n"
+ "LPASS in_busy_1: %d\n"
+ "LPASS in_busy_2: %d\n"
+ "RIVA in_busy_1: %d\n"
+ "RIVA in_busy_2: %d\n"
+ "DCI Modem in_busy_1: %d\n"
+ "Modem CMD in_busy_1: %d\n"
+ "Modem CMD in_busy_2: %d\n"
+ "DCI CMD Modem in_busy_1: %d\n"
+ "Modem supports STM: %d\n"
+ "LPASS supports STM: %d\n"
+ "RIVA supports STM: %d\n"
+ "Modem STM state: %d\n"
+ "LPASS STM state: %d\n"
+ "RIVA STM state: %d\n"
+ "APPS STM state: %d\n"
+ "Modem STM requested state: %d\n"
+ "LPASS STM requested state: %d\n"
+ "RIVA STM requested state: %d\n"
+ "APPS STM requested state: %d\n"
+ "supports apps hdlc encoding: %d\n"
+ "Modem hdlc encoding: %d\n"
+ "Lpass hdlc encoding: %d\n"
+ "RIVA hdlc encoding: %d\n"
+ "Modem CMD hdlc encoding: %d\n"
+ "Modem DATA in_buf_1_size: %d\n"
+ "Modem DATA in_buf_2_size: %d\n"
+ "ADSP DATA in_buf_1_size: %d\n"
+ "ADSP DATA in_buf_2_size: %d\n"
+ "RIVA DATA in_buf_1_size: %d\n"
+ "RIVA DATA in_buf_2_size: %d\n"
+ "Modem DATA in_buf_1_raw_size: %d\n"
+ "Modem DATA in_buf_2_raw_size: %d\n"
+ "ADSP DATA in_buf_1_raw_size: %d\n"
+ "ADSP DATA in_buf_2_raw_size: %d\n"
+ "RIVA DATA in_buf_1_raw_size: %d\n"
+ "RIVA DATA in_buf_2_raw_size: %d\n"
+ "Modem CMD in_buf_1_size: %d\n"
+ "Modem CMD in_buf_1_raw_size: %d\n"
+ "Modem CNTL in_buf_1_size: %d\n"
+ "ADSP CNTL in_buf_1_size: %d\n"
+ "RIVA CNTL in_buf_1_size: %d\n"
+ "Modem DCI in_buf_1_size: %d\n"
+ "Modem DCI CMD in_buf_1_size: %d\n"
+ "Received Feature mask from Modem: %d\n"
+ "Received Feature mask from LPASS: %d\n"
+ "Received Feature mask from WCNSS: %d\n"
+ "logging_mode: %d\n"
+ "real_time_mode: %d\n",
+ (unsigned int)driver->smd_data[MODEM_DATA].ch,
+ (unsigned int)driver->smd_data[LPASS_DATA].ch,
+ (unsigned int)driver->smd_data[WCNSS_DATA].ch,
+ (unsigned int)driver->smd_dci[MODEM_DATA].ch,
+ (unsigned int)driver->smd_cntl[MODEM_DATA].ch,
+ (unsigned int)driver->smd_cntl[LPASS_DATA].ch,
+ (unsigned int)driver->smd_cntl[WCNSS_DATA].ch,
+ (unsigned int)driver->smd_cmd[MODEM_DATA].ch,
+ (unsigned int)driver->smd_dci_cmd[MODEM_DATA].ch,
+ chk_config_get_id(),
+ chk_apps_only(),
+ chk_apps_master(),
+ chk_polling_response(),
+ driver->polling_reg_flag,
+ driver->use_device_tree,
+ driver->supports_separate_cmdrsp,
+ driver->separate_cmdrsp[MODEM_DATA],
+ driver->separate_cmdrsp[LPASS_DATA],
+ driver->separate_cmdrsp[WCNSS_DATA],
+ driver->smd_data[MODEM_DATA].in_busy_1,
+ driver->smd_data[MODEM_DATA].in_busy_2,
+ driver->smd_data[LPASS_DATA].in_busy_1,
+ driver->smd_data[LPASS_DATA].in_busy_2,
+ driver->smd_data[WCNSS_DATA].in_busy_1,
+ driver->smd_data[WCNSS_DATA].in_busy_2,
+ driver->smd_dci[MODEM_DATA].in_busy_1,
+ driver->smd_cmd[MODEM_DATA].in_busy_1,
+ driver->smd_cmd[MODEM_DATA].in_busy_2,
+ driver->smd_dci_cmd[MODEM_DATA].in_busy_1,
+ driver->peripheral_supports_stm[MODEM_DATA],
+ driver->peripheral_supports_stm[LPASS_DATA],
+ driver->peripheral_supports_stm[WCNSS_DATA],
+ driver->stm_state[MODEM_DATA],
+ driver->stm_state[LPASS_DATA],
+ driver->stm_state[WCNSS_DATA],
+ driver->stm_state[APPS_DATA],
+ driver->stm_state_requested[MODEM_DATA],
+ driver->stm_state_requested[LPASS_DATA],
+ driver->stm_state_requested[WCNSS_DATA],
+ driver->stm_state_requested[APPS_DATA],
+ driver->supports_apps_hdlc_encoding,
+ driver->smd_data[MODEM_DATA].encode_hdlc,
+ driver->smd_data[LPASS_DATA].encode_hdlc,
+ driver->smd_data[WCNSS_DATA].encode_hdlc,
+ driver->smd_cmd[MODEM_DATA].encode_hdlc,
+ (unsigned int)driver->smd_data[MODEM_DATA].buf_in_1_size,
+ (unsigned int)driver->smd_data[MODEM_DATA].buf_in_2_size,
+ (unsigned int)driver->smd_data[LPASS_DATA].buf_in_1_size,
+ (unsigned int)driver->smd_data[LPASS_DATA].buf_in_2_size,
+ (unsigned int)driver->smd_data[WCNSS_DATA].buf_in_1_size,
+ (unsigned int)driver->smd_data[WCNSS_DATA].buf_in_2_size,
+ (unsigned int)driver->smd_data[MODEM_DATA].buf_in_1_raw_size,
+ (unsigned int)driver->smd_data[MODEM_DATA].buf_in_2_raw_size,
+ (unsigned int)driver->smd_data[LPASS_DATA].buf_in_1_raw_size,
+ (unsigned int)driver->smd_data[LPASS_DATA].buf_in_2_raw_size,
+ (unsigned int)driver->smd_data[WCNSS_DATA].buf_in_1_raw_size,
+ (unsigned int)driver->smd_data[WCNSS_DATA].buf_in_2_raw_size,
+ (unsigned int)driver->smd_cmd[MODEM_DATA].buf_in_1_size,
+ (unsigned int)driver->smd_cmd[MODEM_DATA].buf_in_1_raw_size,
+ (unsigned int)driver->smd_cntl[MODEM_DATA].buf_in_1_size,
+ (unsigned int)driver->smd_cntl[LPASS_DATA].buf_in_1_size,
+ (unsigned int)driver->smd_cntl[WCNSS_DATA].buf_in_1_size,
+ (unsigned int)driver->smd_dci[MODEM_DATA].buf_in_1_size,
+ (unsigned int)driver->smd_dci_cmd[MODEM_DATA].buf_in_1_size,
+ driver->rcvd_feature_mask[MODEM_DATA],
+ driver->rcvd_feature_mask[LPASS_DATA], driver->rcvd_feature_mask[WCNSS_DATA], driver->logging_mode, driver->real_time_mode);
+
+#ifdef CONFIG_DIAG_OVER_USB
+ ret += scnprintf(buf + ret, DEBUG_BUF_SIZE - ret, "usb_connected: %d\n", driver->usb_connected);
+#endif
+ ret = simple_read_from_buffer(ubuf, count, ppos, buf, ret);
+
+ kfree(buf);
+ return ret;
+}
+
+static ssize_t diag_dbgfs_read_dcistats(struct file *file, char __user *ubuf, size_t count, loff_t *ppos)
+{
+ char *buf = NULL;
+ int bytes_remaining, bytes_written = 0, bytes_in_buf = 0, i = 0;
+ struct diag_dci_data_info *temp_data = dci_data_smd;
+ int buf_size = (DEBUG_BUF_SIZE < count) ? DEBUG_BUF_SIZE : count;
+
+ if (diag_dbgfs_dci_finished) {
+ diag_dbgfs_dci_finished = 0;
+ return 0;
+ }
+
+ buf = kzalloc(sizeof(char) * buf_size, GFP_KERNEL);
+ if (ZERO_OR_NULL_PTR(buf)) {
+ pr_err("diag: %s, Error allocating memory\n", __func__);
+ return -ENOMEM;
+ }
+
+ bytes_remaining = buf_size;
+
+ if (diag_dbgfs_dci_data_index == 0) {
+ bytes_written =
+ scnprintf(buf, buf_size,
+ "number of clients: %d\n"
+ "dci proc active: %d\n"
+ "dci real time vote: %d\n",
+ driver->num_dci_client, (driver->proc_active_mask & DIAG_PROC_DCI) ? 1 : 0, (driver->proc_rt_vote_mask & DIAG_PROC_DCI) ? 1 : 0);
+ bytes_in_buf += bytes_written;
+ bytes_remaining -= bytes_written;
+#ifdef CONFIG_DIAG_OVER_USB
+ bytes_written = scnprintf(buf + bytes_in_buf, bytes_remaining, "usb_connected: %d\n", driver->usb_connected);
+ bytes_in_buf += bytes_written;
+ bytes_remaining -= bytes_written;
+#endif
+ if (driver->dci_device) {
+ bytes_written = scnprintf(buf + bytes_in_buf,
+ bytes_remaining,
+ "dci power active, relax: %lu, %lu\n",
+ driver->dci_device->power.wakeup->active_count, driver->dci_device->power.wakeup->relax_count);
+ bytes_in_buf += bytes_written;
+ bytes_remaining -= bytes_written;
+ }
+ if (driver->dci_cmd_device) {
+ bytes_written = scnprintf(buf + bytes_in_buf,
+ bytes_remaining,
+ "dci cmd power active, relax: %lu, %lu\n",
+ driver->dci_cmd_device->power.wakeup->active_count, driver->dci_cmd_device->power.wakeup->relax_count);
+ bytes_in_buf += bytes_written;
+ bytes_remaining -= bytes_written;
+ }
+ }
+ temp_data += diag_dbgfs_dci_data_index;
+ for (i = diag_dbgfs_dci_data_index; i < DIAG_DCI_DEBUG_CNT; i++) {
+ if (temp_data->iteration != 0) {
+ bytes_written = scnprintf(buf + bytes_in_buf, bytes_remaining,
+ "i %-10lu\t"
+ "s %-10d\t" "c %-10d\t" "t %-15s\n", temp_data->iteration, temp_data->data_size, temp_data->ch_type, temp_data->time_stamp);
+ bytes_in_buf += bytes_written;
+ bytes_remaining -= bytes_written;
+ /* Check if there is room for another entry */
+ if (bytes_remaining < bytes_written)
+ break;
+ }
+ temp_data++;
+ }
+
+ diag_dbgfs_dci_data_index = (i >= DIAG_DCI_DEBUG_CNT) ? 0 : i + 1;
+ bytes_written = simple_read_from_buffer(ubuf, count, ppos, buf, bytes_in_buf);
+ kfree(buf);
+ diag_dbgfs_dci_finished = 1;
+ return bytes_written;
+}
+
+static ssize_t diag_dbgfs_read_workpending(struct file *file, char __user *ubuf, size_t count, loff_t *ppos)
+{
+ char *buf;
+ int ret;
+
+ buf = kzalloc(sizeof(char) * DEBUG_BUF_SIZE, GFP_KERNEL);
+ if (!buf) {
+ pr_err("diag: %s, Error allocating memory\n", __func__);
+ return -ENOMEM;
+ }
+
+ ret = scnprintf(buf, DEBUG_BUF_SIZE,
+ "Pending status for work_structs:\n"
+ "diag_drain_work: %d\n"
+ "Modem data diag_read_smd_work: %d\n"
+ "LPASS data diag_read_smd_work: %d\n"
+ "RIVA data diag_read_smd_work: %d\n"
+ "Modem cntl diag_read_smd_work: %d\n"
+ "LPASS cntl diag_read_smd_work: %d\n"
+ "RIVA cntl diag_read_smd_work: %d\n"
+ "Modem dci diag_read_smd_work: %d\n"
+ "Modem data diag_notify_update_smd_work: %d\n"
+ "LPASS data diag_notify_update_smd_work: %d\n"
+ "RIVA data diag_notify_update_smd_work: %d\n"
+ "Modem cntl diag_notify_update_smd_work: %d\n"
+ "LPASS cntl diag_notify_update_smd_work: %d\n"
+ "RIVA cntl diag_notify_update_smd_work: %d\n"
+ "Modem dci diag_notify_update_smd_work: %d\n",
+ work_pending(&(driver->diag_drain_work)),
+ work_pending(&(driver->smd_data[MODEM_DATA].diag_read_smd_work)),
+ work_pending(&(driver->smd_data[LPASS_DATA].diag_read_smd_work)),
+ work_pending(&(driver->smd_data[WCNSS_DATA].diag_read_smd_work)),
+ work_pending(&(driver->smd_cntl[MODEM_DATA].diag_read_smd_work)),
+ work_pending(&(driver->smd_cntl[LPASS_DATA].diag_read_smd_work)),
+ work_pending(&(driver->smd_cntl[WCNSS_DATA].diag_read_smd_work)),
+ work_pending(&(driver->smd_dci[MODEM_DATA].diag_read_smd_work)),
+ work_pending(&(driver->smd_data[MODEM_DATA].diag_notify_update_smd_work)),
+ work_pending(&(driver->smd_data[LPASS_DATA].diag_notify_update_smd_work)),
+ work_pending(&(driver->smd_data[WCNSS_DATA].diag_notify_update_smd_work)),
+ work_pending(&(driver->smd_cntl[MODEM_DATA].diag_notify_update_smd_work)),
+ work_pending(&(driver->smd_cntl[LPASS_DATA].diag_notify_update_smd_work)),
+ work_pending(&(driver->smd_cntl[WCNSS_DATA].diag_notify_update_smd_work)), work_pending(&(driver->smd_dci[MODEM_DATA].diag_notify_update_smd_work)));
+
+#ifdef CONFIG_DIAG_OVER_USB
+ ret += scnprintf(buf + ret, DEBUG_BUF_SIZE - ret,
+ "diag_proc_hdlc_work: %d\n" "diag_read_work: %d\n", work_pending(&(driver->diag_proc_hdlc_work)), work_pending(&(driver->diag_read_work)));
+#endif
+ ret = simple_read_from_buffer(ubuf, count, ppos, buf, ret);
+
+ kfree(buf);
+ return ret;
+}
+
+static ssize_t diag_dbgfs_read_table(struct file *file, char __user *ubuf, size_t count, loff_t *ppos)
+{
+ char *buf;
+ int ret = 0;
+ int i;
+ int bytes_remaining;
+ int bytes_in_buffer = 0;
+ int bytes_written;
+ int buf_size = (DEBUG_BUF_SIZE < count) ? DEBUG_BUF_SIZE : count;
+
+ if (diag_dbgfs_table_index >= diag_max_reg) {
+ /* Done. Reset to prepare for future requests */
+ diag_dbgfs_table_index = 0;
+ return 0;
+ }
+
+ buf = kzalloc(sizeof(char) * buf_size, GFP_KERNEL);
+ if (ZERO_OR_NULL_PTR(buf)) {
+ pr_err("diag: %s, Error allocating memory\n", __func__);
+ return -ENOMEM;
+ }
+
+ bytes_remaining = buf_size;
+
+ if (diag_dbgfs_table_index == 0) {
+ bytes_written = scnprintf(buf + bytes_in_buffer, bytes_remaining,
+ "Client ids: Modem: %d, LPASS: %d, " "WCNSS: %d, APPS: %d\n", MODEM_DATA, LPASS_DATA, WCNSS_DATA, APPS_DATA);
+ bytes_in_buffer += bytes_written;
+ }
+
+ for (i = diag_dbgfs_table_index; i < diag_max_reg; i++) {
+ /* Do not process empty entries in the table */
+ if (driver->table[i].process_id == 0)
+ continue;
+
+ bytes_written = scnprintf(buf + bytes_in_buffer, bytes_remaining,
+ "i: %3d, cmd_code: %4x, subsys_id: %4x, "
+ "client: %2d, cmd_code_lo: %4x, "
+ "cmd_code_hi: %4x, process_id: %5d %s\n",
+ i,
+ driver->table[i].cmd_code,
+ driver->table[i].subsys_id,
+ driver->table[i].client_id,
+ driver->table[i].cmd_code_lo,
+ driver->table[i].cmd_code_hi, driver->table[i].process_id, (diag_find_polling_reg(i) ? "<- Polling cmd reg" : ""));
+
+ bytes_in_buffer += bytes_written;
+
+ /* Check if there is room to add another table entry */
+ bytes_remaining = buf_size - bytes_in_buffer;
+
+ if (bytes_remaining < bytes_written)
+ break;
+ }
+ diag_dbgfs_table_index = i + 1;
+
+ *ppos = 0;
+ ret = simple_read_from_buffer(ubuf, count, ppos, buf, bytes_in_buffer);
+
+ kfree(buf);
+ return ret;
+}
+
+#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
+static ssize_t diag_dbgfs_read_mempool(struct file *file, char __user *ubuf, size_t count, loff_t *ppos)
+{
+ char *buf = NULL;
+ int ret = 0, i = 0;
+
+ buf = kzalloc(sizeof(char) * DEBUG_BUF_SIZE, GFP_KERNEL);
+ if (ZERO_OR_NULL_PTR(buf)) {
+ pr_err("diag: %s, Error allocating memory\n", __func__);
+ return -ENOMEM;
+ }
+
+ ret = scnprintf(buf, DEBUG_BUF_SIZE,
+ "POOL_TYPE_COPY: [0x%p : 0x%p] count = %d\n"
+ "POOL_TYPE_HDLC: [0x%p : 0x%p] count = %d\n"
+ "POOL_TYPE_USER: [0x%p : 0x%p] count = %d\n"
+ "POOL_TYPE_WRITE_STRUCT: [0x%p : 0x%p] count = %d\n"
+ "POOL_TYPE_DCI: [0x%p : 0x%p] count = %d\n",
+ driver->diagpool,
+ diag_pools_array[POOL_COPY_IDX],
+ driver->count,
+ driver->diag_hdlc_pool,
+ diag_pools_array[POOL_HDLC_IDX],
+ driver->count_hdlc_pool,
+ driver->diag_user_pool,
+ diag_pools_array[POOL_USER_IDX],
+ driver->count_user_pool,
+ driver->diag_write_struct_pool,
+ diag_pools_array[POOL_WRITE_STRUCT_IDX], driver->count_write_struct_pool, driver->diag_dci_pool, diag_pools_array[POOL_DCI_IDX], driver->count_dci_pool);
+
+ for (i = 0; i < MAX_HSIC_CH; i++) {
+ if (!diag_hsic[i].hsic_inited)
+ continue;
+ ret += scnprintf(buf + ret, DEBUG_BUF_SIZE - ret,
+ "POOL_TYPE_HSIC_%d: [0x%p : 0x%p] count = %d\n",
+ i + 1, diag_hsic[i].diag_hsic_pool, diag_pools_array[POOL_HSIC_IDX + i], diag_hsic[i].count_hsic_pool);
+ }
+
+ for (i = 0; i < MAX_HSIC_CH; i++) {
+ if (!diag_hsic[i].hsic_inited)
+ continue;
+ ret += scnprintf(buf + ret, DEBUG_BUF_SIZE - ret,
+ "POOL_TYPE_HSIC_%d_WRITE: [0x%p : 0x%p] count = %d\n",
+ i + 1, diag_hsic[i].diag_hsic_write_pool, diag_pools_array[POOL_HSIC_WRITE_IDX + i], diag_hsic[i].count_hsic_write_pool);
+ }
+
+ ret = simple_read_from_buffer(ubuf, count, ppos, buf, ret);
+
+ kfree(buf);
+ return ret;
+}
+#else
+static ssize_t diag_dbgfs_read_mempool(struct file *file, char __user *ubuf, size_t count, loff_t *ppos)
+{
+ char *buf = NULL;
+ int ret = 0;
+
+ buf = kzalloc(sizeof(char) * DEBUG_BUF_SIZE, GFP_KERNEL);
+ if (ZERO_OR_NULL_PTR(buf)) {
+ pr_err("diag: %s, Error allocating memory\n", __func__);
+ return -ENOMEM;
+ }
+
+ ret = scnprintf(buf, DEBUG_BUF_SIZE,
+ "POOL_TYPE_COPY: [0x%p : 0x%p] count = %d\n"
+ "POOL_TYPE_HDLC: [0x%p : 0x%p] count = %d\n"
+ "POOL_TYPE_USER: [0x%p : 0x%p] count = %d\n"
+ "POOL_TYPE_WRITE_STRUCT: [0x%p : 0x%p] count = %d\n"
+ "POOL_TYPE_DCI: [0x%p : 0x%p] count = %d\n",
+ driver->diagpool,
+ diag_pools_array[POOL_COPY_IDX],
+ driver->count,
+ driver->diag_hdlc_pool,
+ diag_pools_array[POOL_HDLC_IDX],
+ driver->count_hdlc_pool,
+ driver->diag_user_pool,
+ diag_pools_array[POOL_USER_IDX],
+ driver->count_user_pool,
+ driver->diag_write_struct_pool,
+ diag_pools_array[POOL_WRITE_STRUCT_IDX], driver->count_write_struct_pool, driver->diag_dci_pool, diag_pools_array[POOL_DCI_IDX], driver->count_dci_pool);
+
+ ret = simple_read_from_buffer(ubuf, count, ppos, buf, ret);
+
+ kfree(buf);
+ return ret;
+}
+#endif
+
+#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
+static ssize_t diag_dbgfs_read_bridge(struct file *file, char __user *ubuf, size_t count, loff_t *ppos)
+{
+ char *buf;
+ int ret;
+ int i;
+ int bytes_remaining;
+ int bytes_in_buffer = 0;
+ int bytes_written;
+ int buf_size = (DEBUG_BUF_SIZE < count) ? DEBUG_BUF_SIZE : count;
+ int bytes_hsic_inited = 45;
+ int bytes_hsic_not_inited = 410;
+
+ if (diag_dbgfs_finished) {
+ diag_dbgfs_finished = 0;
+ return 0;
+ }
+
+ buf = kzalloc(sizeof(char) * buf_size, GFP_KERNEL);
+ if (ZERO_OR_NULL_PTR(buf)) {
+ pr_err("diag: %s, Error allocating memory\n", __func__);
+ return -ENOMEM;
+ }
+
+ bytes_remaining = buf_size;
+
+ /* Only one smux for now */
+ bytes_written = scnprintf(buf + bytes_in_buffer, bytes_remaining,
+ "Values for SMUX instance: 0\n"
+ "smux ch: %d\n"
+ "smux enabled %d\n"
+ "smux in busy %d\n" "smux connected %d\n\n", driver->lcid, driver->diag_smux_enabled, driver->in_busy_smux, driver->smux_connected);
+
+ bytes_in_buffer += bytes_written;
+ bytes_remaining = buf_size - bytes_in_buffer;
+
+ bytes_written = scnprintf(buf + bytes_in_buffer, bytes_remaining, "HSIC diag_disconnect_work: %d\n", work_pending(&(driver->diag_disconnect_work)));
+
+ bytes_in_buffer += bytes_written;
+ bytes_remaining = buf_size - bytes_in_buffer;
+
+ for (i = 0; i < MAX_HSIC_CH; i++) {
+ if (diag_hsic[i].hsic_inited) {
+ /* Check if there is room to add another HSIC entry */
+ if (bytes_remaining < bytes_hsic_inited)
+ break;
+ bytes_written = scnprintf(buf + bytes_in_buffer,
+ bytes_remaining,
+ "Values for HSIC Instance: %d\n"
+ "hsic ch: %d\n"
+ "hsic_inited: %d\n"
+ "hsic enabled: %d\n"
+ "hsic_opened: %d\n"
+ "hsic_suspend: %d\n"
+ "in_busy_hsic_read_on_device: %d\n"
+ "in_busy_hsic_write: %d\n"
+ "count_hsic_pool: %d\n"
+ "count_hsic_write_pool: %d\n"
+ "diag_hsic_pool: %x\n"
+ "diag_hsic_write_pool: %x\n"
+ "HSIC write_len: %d\n"
+ "num_hsic_buf_tbl_entries: %d\n"
+ "HSIC usb_connected: %d\n"
+ "HSIC diag_read_work: %d\n"
+ "diag_read_hsic_work: %d\n"
+ "diag_usb_read_complete_work: %d\n\n",
+ i,
+ diag_hsic[i].hsic_ch,
+ diag_hsic[i].hsic_inited,
+ diag_hsic[i].hsic_device_enabled,
+ diag_hsic[i].hsic_device_opened,
+ diag_hsic[i].hsic_suspend,
+ diag_hsic[i].in_busy_hsic_read_on_device,
+ diag_hsic[i].in_busy_hsic_write,
+ diag_hsic[i].count_hsic_pool,
+ diag_hsic[i].count_hsic_write_pool,
+ (unsigned int)diag_hsic[i].diag_hsic_pool,
+ (unsigned int)diag_hsic[i].diag_hsic_write_pool,
+ diag_bridge[i].write_len,
+ diag_hsic[i].num_hsic_buf_tbl_entries,
+ diag_bridge[i].usb_connected,
+ work_pending(&(diag_bridge[i].diag_read_work)),
+ work_pending(&(diag_hsic[i].diag_read_hsic_work)), work_pending(&(diag_bridge[i].usb_read_complete_work)));
+ if (bytes_written > bytes_hsic_inited)
+ bytes_hsic_inited = bytes_written;
+ } else {
+ /* Check if there is room to add another HSIC entry */
+ if (bytes_remaining < bytes_hsic_not_inited)
+ break;
+ bytes_written = scnprintf(buf + bytes_in_buffer, bytes_remaining, "HSIC Instance: %d has not been initialized\n\n", i);
+ if (bytes_written > bytes_hsic_not_inited)
+ bytes_hsic_not_inited = bytes_written;
+ }
+
+ bytes_in_buffer += bytes_written;
+
+ bytes_remaining = buf_size - bytes_in_buffer;
+ }
+
+ *ppos = 0;
+ ret = simple_read_from_buffer(ubuf, count, ppos, buf, bytes_in_buffer);
+
+ diag_dbgfs_finished = 1;
+ kfree(buf);
+ return ret;
+}
+
+const struct file_operations diag_dbgfs_bridge_ops = {
+ .read = diag_dbgfs_read_bridge,
+};
+#endif
+
+const struct file_operations diag_dbgfs_status_ops = {
+ .read = diag_dbgfs_read_status,
+};
+
+const struct file_operations diag_dbgfs_table_ops = {
+ .read = diag_dbgfs_read_table,
+};
+
+const struct file_operations diag_dbgfs_workpending_ops = {
+ .read = diag_dbgfs_read_workpending,
+};
+
+const struct file_operations diag_dbgfs_mempool_ops = {
+ .read = diag_dbgfs_read_mempool,
+};
+
+const struct file_operations diag_dbgfs_dcistats_ops = {
+ .read = diag_dbgfs_read_dcistats,
+};
+
+void diag_debugfs_init(void)
+{
+ diag_dbgfs_dent = debugfs_create_dir("diag", 0);
+ if (IS_ERR(diag_dbgfs_dent))
+ return;
+
+ debugfs_create_file("status", 0444, diag_dbgfs_dent, 0, &diag_dbgfs_status_ops);
+
+ debugfs_create_file("table", 0444, diag_dbgfs_dent, 0, &diag_dbgfs_table_ops);
+
+ debugfs_create_file("work_pending", 0444, diag_dbgfs_dent, 0, &diag_dbgfs_workpending_ops);
+
+ debugfs_create_file("mempool", 0444, diag_dbgfs_dent, 0, &diag_dbgfs_mempool_ops);
+
+ debugfs_create_file("dci_stats", 0444, diag_dbgfs_dent, 0, &diag_dbgfs_dcistats_ops);
+
+#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
+ debugfs_create_file("bridge", 0444, diag_dbgfs_dent, 0, &diag_dbgfs_bridge_ops);
+#endif
+
+ diag_dbgfs_table_index = 0;
+ diag_dbgfs_finished = 0;
+ diag_dbgfs_dci_data_index = 0;
+ diag_dbgfs_dci_finished = 0;
+
+ /* DCI related structures */
+ dci_data_smd = kzalloc(sizeof(struct diag_dci_data_info) * DIAG_DCI_DEBUG_CNT, GFP_KERNEL);
+ if (ZERO_OR_NULL_PTR(dci_data_smd))
+ pr_warn("diag: could not allocate memory for dci debug info\n");
+
+ mutex_init(&dci_stat_mutex);
+}
+
+void diag_debugfs_cleanup(void)
+{
+ if (diag_dbgfs_dent) {
+ debugfs_remove_recursive(diag_dbgfs_dent);
+ diag_dbgfs_dent = NULL;
+ }
+
+ kfree(dci_data_smd);
+ mutex_destroy(&dci_stat_mutex);
+}
+#else
+void diag_debugfs_init(void)
+{
+}
+
+void diag_debugfs_cleanup(void)
+{
+}
+#endif
diff --git a/drivers/char/diag/diag_debugfs.h b/drivers/char/diag/diag_debugfs.h
new file mode 100644
index 0000000..4bc8b0f
--- /dev/null
+++ b/drivers/char/diag/diag_debugfs.h
@@ -0,0 +1,19 @@
+/* Copyright (c) 2012, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef DIAG_DEBUGFS_H
+#define DIAG_DEBUGFS_H
+
+void diag_debugfs_init(void);
+void diag_debugfs_cleanup(void);
+
+#endif
diff --git a/drivers/char/diag/diag_masks.c b/drivers/char/diag/diag_masks.c
new file mode 100644
index 0000000..b6bb2db
--- /dev/null
+++ b/drivers/char/diag/diag_masks.c
@@ -0,0 +1,888 @@
+/* Copyright (c) 2008-2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include <linux/diagchar.h>
+#include <linux/kmemleak.h>
+#include <linux/workqueue.h>
+#include "diagchar.h"
+#include "diagfwd_cntl.h"
+#include "diag_masks.h"
+
+int diag_event_num_bytes;
+
+#define DIAG_CTRL_MASK_INVALID 0
+#define DIAG_CTRL_MASK_ALL_DISABLED 1
+#define DIAG_CTRL_MASK_ALL_ENABLED 2
+#define DIAG_CTRL_MASK_VALID 3
+
+#define ALL_EQUIP_ID 100
+#define ALL_SSID -1
+
+#define FEATURE_MASK_LEN_BYTES 2
+
+struct mask_info {
+ int equip_id;
+ int num_items;
+ int index;
+};
+
+#define CREATE_MSG_MASK_TBL_ROW(XX) \
+do { \
+ *(int *)(msg_mask_tbl_ptr) = MSG_SSID_ ## XX; \
+ msg_mask_tbl_ptr += 4; \
+ *(int *)(msg_mask_tbl_ptr) = MSG_SSID_ ## XX ## _LAST; \
+ msg_mask_tbl_ptr += 4; \
+ /* mimic the last entry as actual_last while creation */ \
+ *(int *)(msg_mask_tbl_ptr) = MSG_SSID_ ## XX ## _LAST; \
+ msg_mask_tbl_ptr += 4; \
+ /* increment by MAX_SSID_PER_RANGE cells */ \
+ msg_mask_tbl_ptr += MAX_SSID_PER_RANGE * sizeof(int); \
+} while (0)
+
+static void diag_print_mask_table(void)
+{
+/* Enable this to print mask table when updated */
+#ifdef MASK_DEBUG
+ int first, last, actual_last;
+ uint8_t *ptr = driver->msg_masks;
+ int i = 0;
+ pr_info("diag: F3 message mask table\n");
+ while (*(uint32_t *) (ptr + 4)) {
+ first = *(uint32_t *) ptr;
+ ptr += 4;
+ last = *(uint32_t *) ptr;
+ ptr += 4;
+ actual_last = *(uint32_t *) ptr;
+ ptr += 4;
+ pr_info("diag: SSID %d, %d - %d\n", first, last, actual_last);
+ for (i = 0; i <= actual_last - first; i++)
+ pr_info("diag: MASK:%x\n", *((uint32_t *) ptr + i));
+ ptr += MAX_SSID_PER_RANGE * 4;
+ }
+#endif
+}
+
+void diag_create_msg_mask_table(void)
+{
+ uint8_t *msg_mask_tbl_ptr = driver->msg_masks;
+
+ CREATE_MSG_MASK_TBL_ROW(0);
+ CREATE_MSG_MASK_TBL_ROW(1);
+ CREATE_MSG_MASK_TBL_ROW(2);
+ CREATE_MSG_MASK_TBL_ROW(3);
+ CREATE_MSG_MASK_TBL_ROW(4);
+ CREATE_MSG_MASK_TBL_ROW(5);
+ CREATE_MSG_MASK_TBL_ROW(6);
+ CREATE_MSG_MASK_TBL_ROW(7);
+ CREATE_MSG_MASK_TBL_ROW(8);
+ CREATE_MSG_MASK_TBL_ROW(9);
+ CREATE_MSG_MASK_TBL_ROW(10);
+ CREATE_MSG_MASK_TBL_ROW(11);
+ CREATE_MSG_MASK_TBL_ROW(12);
+ CREATE_MSG_MASK_TBL_ROW(13);
+ CREATE_MSG_MASK_TBL_ROW(14);
+ CREATE_MSG_MASK_TBL_ROW(15);
+ CREATE_MSG_MASK_TBL_ROW(16);
+ CREATE_MSG_MASK_TBL_ROW(17);
+ CREATE_MSG_MASK_TBL_ROW(18);
+ CREATE_MSG_MASK_TBL_ROW(19);
+ CREATE_MSG_MASK_TBL_ROW(20);
+ CREATE_MSG_MASK_TBL_ROW(21);
+ CREATE_MSG_MASK_TBL_ROW(22);
+ CREATE_MSG_MASK_TBL_ROW(23);
+}
+
+static void diag_set_msg_mask(int rt_mask)
+{
+ int first_ssid, last_ssid, i;
+ uint8_t *parse_ptr, *ptr = driver->msg_masks;
+
+ mutex_lock(&driver->diagchar_mutex);
+ driver->msg_status = rt_mask ? DIAG_CTRL_MASK_ALL_ENABLED : DIAG_CTRL_MASK_ALL_DISABLED;
+ while (*(uint32_t *) (ptr + 4)) {
+ first_ssid = *(uint32_t *) ptr;
+ ptr += 8; /* increment by 8 to skip 'last' */
+ last_ssid = *(uint32_t *) ptr;
+ ptr += 4;
+ parse_ptr = ptr;
+ pr_debug("diag: updating range %d %d\n", first_ssid, last_ssid);
+ for (i = 0; i < last_ssid - first_ssid + 1; i++) {
+ *(int *)parse_ptr = rt_mask;
+ parse_ptr += 4;
+ }
+ ptr += MAX_SSID_PER_RANGE * 4;
+ }
+ mutex_unlock(&driver->diagchar_mutex);
+}
+
+static void diag_update_msg_mask(int start, int end, uint8_t *buf)
+{
+ int found = 0, first, last, actual_last;
+ uint8_t *actual_last_ptr;
+ uint8_t *ptr = driver->msg_masks;
+ uint8_t *ptr_buffer_start = driver->msg_masks;
+ uint8_t *ptr_buffer_end = driver->msg_masks + MSG_MASK_SIZE;
+ uint32_t copy_len = (end - start + 1) * sizeof(int);
+
+ mutex_lock(&driver->diagchar_mutex);
+ /* First SSID can be zero : So check that last is non-zero */
+ while (*(uint32_t *) (ptr + 4)) {
+ first = *(uint32_t *) ptr;
+ ptr += 4;
+ last = *(uint32_t *) ptr;
+ ptr += 4;
+ actual_last = *(uint32_t *) ptr;
+ actual_last_ptr = ptr;
+ ptr += 4;
+ if (start >= first && start <= actual_last) {
+ ptr += (start - first) * 4;
+ if (end > actual_last) {
+ pr_info("diag: ssid range mismatch\n");
+ actual_last = end;
+ *(uint32_t *) (actual_last_ptr) = end;
+ }
+ if (actual_last - first >= MAX_SSID_PER_RANGE) {
+ pr_err("diag: In %s, truncating ssid range, %d-%d to max allowed: %d", __func__, first, actual_last, MAX_SSID_PER_RANGE);
+ copy_len = MAX_SSID_PER_RANGE;
+ actual_last = first + MAX_SSID_PER_RANGE;
+ *(uint32_t *) actual_last_ptr = actual_last;
+ }
+ if (CHK_OVERFLOW(ptr_buffer_start, ptr, ptr_buffer_end, copy_len)) {
+ pr_debug("diag: update ssid start %d, end %d\n", start, end);
+ memcpy(ptr, buf, copy_len);
+ } else
+ pr_alert("diag: Not enough space MSG_MASK\n");
+ found = 1;
+ break;
+ } else {
+ ptr += MAX_SSID_PER_RANGE * 4;
+ }
+ }
+ /* Entry was not found - add new table */
+ if (!found) {
+ if (CHK_OVERFLOW(ptr_buffer_start, ptr, ptr_buffer_end, 12 + ((end - start) + 1) * 4)) {
+ memcpy(ptr, &(start), 4);
+ ptr += 4;
+ memcpy(ptr, &(end), 4);
+ ptr += 4;
+ memcpy(ptr, &(end), 4); /* create actual_last entry */
+ ptr += 4;
+ pr_debug("diag: adding NEW ssid start %d, end %d\n", start, end);
+ memcpy(ptr, buf, ((end - start) + 1) * 4);
+ } else
+ pr_alert("diag: Not enough buffer space for MSG_MASK\n");
+ }
+ driver->msg_status = DIAG_CTRL_MASK_VALID;
+ mutex_unlock(&driver->diagchar_mutex);
+ diag_print_mask_table();
+}
+
+void diag_toggle_event_mask(int toggle)
+{
+ uint8_t *ptr = driver->event_masks;
+
+ mutex_lock(&driver->diagchar_mutex);
+ if (toggle) {
+ driver->event_status = DIAG_CTRL_MASK_ALL_ENABLED;
+ memset(ptr, 0xFF, EVENT_MASK_SIZE);
+ } else {
+ driver->event_status = DIAG_CTRL_MASK_ALL_DISABLED;
+ memset(ptr, 0, EVENT_MASK_SIZE);
+ }
+ mutex_unlock(&driver->diagchar_mutex);
+}
+
+static void diag_update_event_mask(uint8_t *buf, int num_bytes)
+{
+ uint8_t *ptr = driver->event_masks;
+ uint8_t *temp = buf + 2;
+
+ mutex_lock(&driver->diagchar_mutex);
+ if (CHK_OVERFLOW(ptr, ptr, ptr + EVENT_MASK_SIZE, num_bytes)) {
+ memcpy(ptr, temp, num_bytes);
+ driver->event_status = DIAG_CTRL_MASK_VALID;
+ } else {
+ pr_err("diag: In %s, not enough buffer space\n", __func__);
+ }
+ mutex_unlock(&driver->diagchar_mutex);
+}
+
+static void diag_disable_log_mask(void)
+{
+ int i = 0;
+ struct mask_info *parse_ptr = (struct mask_info *)(driver->log_masks);
+
+ pr_debug("diag: disable log masks\n");
+ mutex_lock(&driver->diagchar_mutex);
+ for (i = 0; i < MAX_EQUIP_ID; i++) {
+ pr_debug("diag: equip id %d\n", parse_ptr->equip_id);
+ if (!(parse_ptr->equip_id)) /* Reached a null entry */
+ break;
+ memset(driver->log_masks + parse_ptr->index, 0, (parse_ptr->num_items + 7) / 8);
+ parse_ptr++;
+ }
+ driver->log_status = DIAG_CTRL_MASK_ALL_DISABLED;
+ mutex_unlock(&driver->diagchar_mutex);
+}
+
+int chk_equip_id_and_mask(int equip_id, uint8_t *buf)
+{
+ int i = 0, flag = 0, num_items, offset;
+ unsigned char *ptr_data;
+ struct mask_info *ptr = (struct mask_info *)(driver->log_masks);
+
+ pr_debug("diag: received equip id = %d\n", equip_id);
+ /* Check if this is valid equipment ID */
+ for (i = 0; i < MAX_EQUIP_ID; i++) {
+ if ((ptr->equip_id == equip_id) && (ptr->index != 0)) {
+ offset = ptr->index;
+ num_items = ptr->num_items;
+ flag = 1;
+ break;
+ }
+ ptr++;
+ }
+ if (!flag)
+ return -EPERM;
+ ptr_data = driver->log_masks + offset;
+ memcpy(buf, ptr_data, (num_items + 7) / 8);
+ return 0;
+}
+
+static void diag_update_log_mask(int equip_id, uint8_t *buf, int num_items)
+{
+ uint8_t *temp = buf;
+ int i = 0;
+ unsigned char *ptr_data;
+ int offset = (sizeof(struct mask_info)) * MAX_EQUIP_ID;
+ struct mask_info *ptr = (struct mask_info *)(driver->log_masks);
+
+ pr_debug("diag: received equip id = %d\n", equip_id);
+ mutex_lock(&driver->diagchar_mutex);
+ /* Check if we already know index of this equipment ID */
+ for (i = 0; i < MAX_EQUIP_ID; i++) {
+ if ((ptr->equip_id == equip_id) && (ptr->index != 0)) {
+ offset = ptr->index;
+ break;
+ }
+ if ((ptr->equip_id == 0) && (ptr->index == 0)) {
+ /* Reached a null entry */
+ ptr->equip_id = equip_id;
+ ptr->num_items = num_items;
+ ptr->index = driver->log_masks_length;
+ offset = driver->log_masks_length;
+ driver->log_masks_length += ((num_items + 7) / 8);
+ break;
+ }
+ ptr++;
+ }
+ ptr_data = driver->log_masks + offset;
+ if (CHK_OVERFLOW(driver->log_masks, ptr_data, driver->log_masks + LOG_MASK_SIZE, (num_items + 7) / 8)) {
+ memcpy(ptr_data, temp, (num_items + 7) / 8);
+ driver->log_status = DIAG_CTRL_MASK_VALID;
+ } else {
+ pr_err("diag: Not enough buffer space for LOG_MASK\n");
+ driver->log_status = DIAG_CTRL_MASK_INVALID;
+ }
+ mutex_unlock(&driver->diagchar_mutex);
+}
+
+void diag_mask_update_fn(struct work_struct *work)
+{
+ struct diag_smd_info *smd_info = container_of(work,
+ struct diag_smd_info,
+ diag_notify_update_smd_work);
+ if (!smd_info) {
+ pr_err("diag: In %s, smd info is null, cannot update masks for the peripheral\n", __func__);
+ return;
+ }
+
+ diag_send_feature_mask_update(smd_info);
+ diag_send_msg_mask_update(smd_info->ch, ALL_SSID, ALL_SSID, smd_info->peripheral);
+ diag_send_log_mask_update(smd_info->ch, ALL_EQUIP_ID);
+ diag_send_event_mask_update(smd_info->ch, diag_event_num_bytes);
+
+ if (smd_info->notify_context == SMD_EVENT_OPEN)
+ diag_send_diag_mode_update_by_smd(smd_info, driver->real_time_mode);
+
+ smd_info->notify_context = 0;
+}
+
+void diag_send_log_mask_update(smd_channel_t *ch, int equip_id)
+{
+ void *buf = driver->buf_log_mask_update;
+ int header_size = sizeof(struct diag_ctrl_log_mask);
+ struct mask_info *ptr = (struct mask_info *)driver->log_masks;
+ int i, size, wr_size = -ENOMEM, retry_count = 0;
+
+ mutex_lock(&driver->diag_cntl_mutex);
+ for (i = 0; i < MAX_EQUIP_ID; i++) {
+ size = (ptr->num_items + 7) / 8;
+ /* reached null entry */
+ if ((ptr->equip_id == 0) && (ptr->index == 0))
+ break;
+ driver->log_mask->cmd_type = DIAG_CTRL_MSG_LOG_MASK;
+ driver->log_mask->num_items = ptr->num_items;
+ driver->log_mask->data_len = 11 + size;
+ driver->log_mask->stream_id = 1; /* 2, if dual stream */
+ driver->log_mask->equip_id = ptr->equip_id;
+ driver->log_mask->status = driver->log_status;
+ switch (driver->log_status) {
+ case DIAG_CTRL_MASK_ALL_DISABLED:
+ case DIAG_CTRL_MASK_ALL_ENABLED:
+ driver->log_mask->log_mask_size = 0;
+ break;
+ case DIAG_CTRL_MASK_VALID:
+ driver->log_mask->log_mask_size = size;
+ break;
+ default:
+ /* Log status is not set or the buffer is corrupted */
+ pr_err("diag: In %s, invalid status %d\n", __func__, driver->log_status);
+ driver->log_mask->status = DIAG_CTRL_MASK_INVALID;
+ }
+
+ if (driver->log_mask->status == DIAG_CTRL_MASK_INVALID) {
+ mutex_unlock(&driver->diag_cntl_mutex);
+ return;
+ }
+ /* send only desired update, NOT ALL */
+ if (equip_id == ALL_EQUIP_ID || equip_id == driver->log_mask->equip_id) {
+ memcpy(buf, driver->log_mask, header_size);
+ if (driver->log_status == DIAG_CTRL_MASK_VALID)
+ memcpy(buf + header_size, driver->log_masks + ptr->index, size);
+ if (ch) {
+ while (retry_count < 3) {
+ wr_size = smd_write(ch, buf, header_size + size);
+ if (wr_size == -ENOMEM) {
+ retry_count++;
+ usleep_range(10000, 10100);
+ } else
+ break;
+ }
+ if (wr_size != header_size + size)
+ pr_err("diag: log mask update failed %d, tried %d\n", wr_size, header_size + size);
+ else
+ pr_debug("diag: updated log equip ID %d,len %d\n", driver->log_mask->equip_id, driver->log_mask->log_mask_size);
+ } else
+ pr_err("diag: ch not valid for log update\n");
+ }
+ ptr++;
+ }
+ mutex_unlock(&driver->diag_cntl_mutex);
+}
+
+void diag_send_event_mask_update(smd_channel_t *ch, int num_bytes)
+{
+ void *buf = driver->buf_event_mask_update;
+ int header_size = sizeof(struct diag_ctrl_event_mask);
+ int wr_size = -ENOMEM, retry_count = 0;
+
+ mutex_lock(&driver->diag_cntl_mutex);
+ if (num_bytes == 0) {
+ pr_debug("diag: event mask not set yet, so no update\n");
+ mutex_unlock(&driver->diag_cntl_mutex);
+ return;
+ }
+ /* send event mask update */
+ driver->event_mask->cmd_type = DIAG_CTRL_MSG_EVENT_MASK;
+ driver->event_mask->data_len = 7 + num_bytes;
+ driver->event_mask->stream_id = 1; /* 2, if dual stream */
+ driver->event_mask->status = driver->event_status;
+
+ switch (driver->event_status) {
+ case DIAG_CTRL_MASK_ALL_DISABLED:
+ driver->event_mask->event_config = 0;
+ driver->event_mask->event_mask_size = 0;
+ break;
+ case DIAG_CTRL_MASK_ALL_ENABLED:
+ driver->event_mask->event_config = 1;
+ driver->event_mask->event_mask_size = 0;
+ break;
+ case DIAG_CTRL_MASK_VALID:
+ driver->event_mask->event_config = 1;
+ driver->event_mask->event_mask_size = num_bytes;
+ memcpy(buf + header_size, driver->event_masks, num_bytes);
+ break;
+ default:
+ /* Event status is not set yet or the buffer is corrupted */
+ pr_err("diag: In %s, invalid status %d\n", __func__, driver->event_status);
+ driver->event_mask->status = DIAG_CTRL_MASK_INVALID;
+ }
+
+ if (driver->event_mask->status == DIAG_CTRL_MASK_INVALID) {
+ mutex_unlock(&driver->diag_cntl_mutex);
+ return;
+ }
+ memcpy(buf, driver->event_mask, header_size);
+ if (ch) {
+ while (retry_count < 3) {
+ wr_size = smd_write(ch, buf, header_size + num_bytes);
+ if (wr_size == -ENOMEM) {
+ retry_count++;
+ usleep_range(10000, 10100);
+ } else
+ break;
+ }
+ if (wr_size != header_size + num_bytes)
+ pr_err("diag: error writing event mask %d, tried %d\n", wr_size, header_size + num_bytes);
+ } else
+ pr_err("diag: ch not valid for event update\n");
+ mutex_unlock(&driver->diag_cntl_mutex);
+}
+
+void diag_send_msg_mask_update(smd_channel_t *ch, int updated_ssid_first, int updated_ssid_last, int proc)
+{
+ void *buf = driver->buf_msg_mask_update;
+ int first, last, actual_last, size = -ENOMEM, retry_count = 0;
+ int header_size = sizeof(struct diag_ctrl_msg_mask);
+ uint8_t *ptr = driver->msg_masks;
+
+ mutex_lock(&driver->diag_cntl_mutex);
+ while (*(uint32_t *) (ptr + 4)) {
+ first = *(uint32_t *) ptr;
+ ptr += 4;
+ last = *(uint32_t *) ptr;
+ ptr += 4;
+ actual_last = *(uint32_t *) ptr;
+ ptr += 4;
+ if (!((updated_ssid_first >= first && updated_ssid_last <= actual_last) || (updated_ssid_first == ALL_SSID))) {
+ ptr += MAX_SSID_PER_RANGE * 4;
+ continue;
+ }
+ /* send f3 mask update */
+ driver->msg_mask->cmd_type = DIAG_CTRL_MSG_F3_MASK;
+ driver->msg_mask->status = driver->msg_status;
+ switch (driver->msg_status) {
+ case DIAG_CTRL_MASK_ALL_DISABLED:
+ driver->msg_mask->msg_mask_size = 0;
+ break;
+ case DIAG_CTRL_MASK_ALL_ENABLED:
+ driver->msg_mask->msg_mask_size = 1;
+ memcpy(buf + header_size, ptr, 4 * (driver->msg_mask->msg_mask_size));
+ break;
+ case DIAG_CTRL_MASK_VALID:
+ driver->msg_mask->msg_mask_size = actual_last - first + 1;
+ /* Limit the msg_mask_size to MAX_SSID_PER_RANGE */
+ if (driver->msg_mask->msg_mask_size > MAX_SSID_PER_RANGE) {
+ pr_err("diag: In %s, invalid msg mask size %d, max: %d\n", __func__, driver->msg_mask->msg_mask_size, MAX_SSID_PER_RANGE);
+ driver->msg_mask->msg_mask_size = MAX_SSID_PER_RANGE;
+ }
+ memcpy(buf + header_size, ptr, 4 * (driver->msg_mask->msg_mask_size));
+ break;
+ default:
+ /* Msg status is not set or the buffer is corrupted */
+ pr_err("diag: In %s, invalid status %d\n", __func__, driver->msg_status);
+ driver->msg_mask->status = DIAG_CTRL_MASK_INVALID;
+ }
+
+ if (driver->msg_mask->status == DIAG_CTRL_MASK_INVALID) {
+ mutex_unlock(&driver->diag_cntl_mutex);
+ return;
+ }
+ driver->msg_mask->data_len = 11 + 4 * (driver->msg_mask->msg_mask_size);
+ driver->msg_mask->stream_id = 1; /* 2, if dual stream */
+ driver->msg_mask->msg_mode = 0; /* Legacy mode */
+ driver->msg_mask->ssid_first = first;
+ driver->msg_mask->ssid_last = actual_last;
+ memcpy(buf, driver->msg_mask, header_size);
+ if (ch) {
+ while (retry_count < 3) {
+ size = smd_write(ch, buf, header_size + 4 * (driver->msg_mask->msg_mask_size));
+ if (size == -ENOMEM) {
+ retry_count++;
+ usleep_range(10000, 10100);
+ } else
+ break;
+ }
+ if (size != header_size + 4 * (driver->msg_mask->msg_mask_size))
+ pr_err("diag: proc %d, msg mask update fail %d, tried %d\n", proc, size, (header_size + 4 * (driver->msg_mask->msg_mask_size)));
+ else
+ pr_debug("diag: sending mask update for ssid first %d, last %d on PROC %d\n", first, actual_last, proc);
+ } else
+ pr_err("diag: proc %d, ch invalid msg mask update\n", proc);
+ ptr += MAX_SSID_PER_RANGE * 4;
+ }
+ mutex_unlock(&driver->diag_cntl_mutex);
+}
+
+void diag_send_feature_mask_update(struct diag_smd_info *smd_info)
+{
+ void *buf = driver->buf_feature_mask_update;
+ int header_size = sizeof(struct diag_ctrl_feature_mask);
+ int wr_size = -ENOMEM, retry_count = 0;
+ uint8_t feature_bytes[FEATURE_MASK_LEN_BYTES] = { 0, 0 };
+ int total_len = 0;
+
+ if (!smd_info) {
+ pr_err("diag: In %s, null smd info pointer\n", __func__);
+ return;
+ }
+
+ if (!smd_info->ch) {
+ pr_err("diag: In %s, smd channel not open for peripheral: %d, type: %d\n", __func__, smd_info->peripheral, smd_info->type);
+ return;
+ }
+
+ mutex_lock(&driver->diag_cntl_mutex);
+ /* send feature mask update */
+ driver->feature_mask->ctrl_pkt_id = DIAG_CTRL_MSG_FEATURE;
+ driver->feature_mask->ctrl_pkt_data_len = 4 + FEATURE_MASK_LEN_BYTES;
+ driver->feature_mask->feature_mask_len = FEATURE_MASK_LEN_BYTES;
+ memcpy(buf, driver->feature_mask, header_size);
+ feature_bytes[0] |= F_DIAG_INT_FEATURE_MASK;
+ feature_bytes[0] |= F_DIAG_LOG_ON_DEMAND_RSP_ON_MASTER;
+ feature_bytes[0] |= driver->supports_separate_cmdrsp ? F_DIAG_REQ_RSP_CHANNEL : 0;
+ feature_bytes[0] |= driver->supports_apps_hdlc_encoding ? F_DIAG_HDLC_ENCODE_IN_APPS_MASK : 0;
+ feature_bytes[1] |= F_DIAG_OVER_STM;
+ memcpy(buf + header_size, &feature_bytes, FEATURE_MASK_LEN_BYTES);
+ total_len = header_size + FEATURE_MASK_LEN_BYTES;
+
+ while (retry_count < 3) {
+ wr_size = smd_write(smd_info->ch, buf, total_len);
+ if (wr_size == -ENOMEM) {
+ retry_count++;
+ /*
+ * The smd channel is full. Delay while
+ * smd drains existing data and memory
+ * becomes available. The 10 ms delay was
+ * determined empirically as the best
+ * value to use.
+ */
+ usleep_range(10000, 10100);
+ } else
+ break;
+ }
+ if (wr_size != total_len)
+ pr_err("diag: In %s, peripheral %d fail feature update, size: %d, tried: %d\n", __func__, smd_info->peripheral, wr_size, total_len);
+
+ mutex_unlock(&driver->diag_cntl_mutex);
+}
+
+int diag_process_apps_masks(unsigned char *buf, int len)
+{
+ int packet_type = 1;
+ int i;
+ int ssid_first, ssid_last, ssid_range;
+ int rt_mask, rt_first_ssid, rt_last_ssid, rt_mask_size;
+ uint8_t *rt_mask_ptr;
+ int equip_id, num_items;
+#if defined(CONFIG_DIAG_OVER_USB)
+ int payload_length;
+#endif
+
+ /* Set log masks */
+ if (*buf == 0x73 && *(int *)(buf + 4) == 3) {
+ buf += 8;
+ /* Read Equip ID and pass as first param below */
+ diag_update_log_mask(*(int *)buf, buf + 8, *(int *)(buf + 4));
+ diag_update_userspace_clients(LOG_MASKS_TYPE);
+#if defined(CONFIG_DIAG_OVER_USB)
+ if (chk_apps_only()) {
+ driver->apps_rsp_buf[0] = 0x73;
+ *(int *)(driver->apps_rsp_buf + 4) = 0x3; /* op. ID */
+ *(int *)(driver->apps_rsp_buf + 8) = 0x0; /* success */
+ payload_length = 8 + ((*(int *)(buf + 4)) + 7) / 8;
+ if (payload_length > APPS_BUF_SIZE - 12) {
+ pr_err("diag: log masks: buffer overflow\n");
+ return -EIO;
+ }
+ for (i = 0; i < payload_length; i++)
+ *(int *)(driver->apps_rsp_buf + 12 + i) = *(buf + i);
+
+ for (i = 0; i < NUM_SMD_CONTROL_CHANNELS; i++) {
+ if (driver->smd_cntl[i].ch)
+ diag_send_log_mask_update(driver->smd_cntl[i].ch, *(int *)buf);
+ }
+ encode_rsp_and_send(12 + payload_length - 1);
+ return 0;
+ }
+#endif
+ } /* Get log masks */
+ else if (*buf == 0x73 && *(int *)(buf + 4) == 4) {
+#if defined(CONFIG_DIAG_OVER_USB)
+ if (!(driver->smd_data[MODEM_DATA].ch) && chk_apps_only()) {
+ equip_id = *(int *)(buf + 8);
+ num_items = *(int *)(buf + 12);
+ driver->apps_rsp_buf[0] = 0x73;
+ driver->apps_rsp_buf[1] = 0x0;
+ driver->apps_rsp_buf[2] = 0x0;
+ driver->apps_rsp_buf[3] = 0x0;
+ *(int *)(driver->apps_rsp_buf + 4) = 0x4;
+ if (!chk_equip_id_and_mask(equip_id, driver->apps_rsp_buf + 20))
+ *(int *)(driver->apps_rsp_buf + 8) = 0x0;
+ else
+ *(int *)(driver->apps_rsp_buf + 8) = 0x1;
+ *(int *)(driver->apps_rsp_buf + 12) = equip_id;
+ *(int *)(driver->apps_rsp_buf + 16) = num_items;
+ encode_rsp_and_send(20 + (num_items + 7) / 8 - 1);
+ return 0;
+ }
+#endif
+ } /* Disable log masks */
+ else if (*buf == 0x73 && *(int *)(buf + 4) == 0) {
+ /* Disable mask for each log code */
+ diag_disable_log_mask();
+ diag_update_userspace_clients(LOG_MASKS_TYPE);
+#if defined(CONFIG_DIAG_OVER_USB)
+ if (chk_apps_only()) {
+ driver->apps_rsp_buf[0] = 0x73;
+ driver->apps_rsp_buf[1] = 0x0;
+ driver->apps_rsp_buf[2] = 0x0;
+ driver->apps_rsp_buf[3] = 0x0;
+ *(int *)(driver->apps_rsp_buf + 4) = 0x0;
+ *(int *)(driver->apps_rsp_buf + 8) = 0x0; /* status */
+ for (i = 0; i < NUM_SMD_CONTROL_CHANNELS; i++) {
+ if (driver->smd_cntl[i].ch)
+ diag_send_log_mask_update(driver->smd_cntl[i].ch, ALL_EQUIP_ID);
+
+ }
+ encode_rsp_and_send(11);
+ return 0;
+ }
+#endif
+ } /* Get runtime message mask */
+ else if ((*buf == 0x7d) && (*(buf + 1) == 0x3)) {
+ ssid_first = *(uint16_t *) (buf + 2);
+ ssid_last = *(uint16_t *) (buf + 4);
+#if defined(CONFIG_DIAG_OVER_USB)
+ if (!(driver->smd_data[MODEM_DATA].ch) && chk_apps_only()) {
+ driver->apps_rsp_buf[0] = 0x7d;
+ driver->apps_rsp_buf[1] = 0x3;
+ *(uint16_t *) (driver->apps_rsp_buf + 2) = ssid_first;
+ *(uint16_t *) (driver->apps_rsp_buf + 4) = ssid_last;
+ driver->apps_rsp_buf[6] = 0x1; /* Success Status */
+ driver->apps_rsp_buf[7] = 0x0;
+ rt_mask_ptr = driver->msg_masks;
+ while (*(uint32_t *) (rt_mask_ptr + 4)) {
+ rt_first_ssid = *(uint32_t *) rt_mask_ptr;
+ rt_mask_ptr += 8; /* skip 'last'; use 'actual_last' as range end */
+ rt_last_ssid = *(uint32_t *) rt_mask_ptr;
+ rt_mask_ptr += 4;
+ if (ssid_first == rt_first_ssid && ssid_last == rt_last_ssid) {
+ rt_mask_size = 4 * (rt_last_ssid - rt_first_ssid + 1);
+ if (rt_mask_size > APPS_BUF_SIZE - 8) {
+ pr_err("diag: rt masks: buffer overflow\n");
+ return -EIO;
+ }
+ memcpy(driver->apps_rsp_buf + 8, rt_mask_ptr, rt_mask_size);
+ encode_rsp_and_send(8 + rt_mask_size - 1);
+ return 0;
+ }
+ rt_mask_ptr += MAX_SSID_PER_RANGE * 4;
+ }
+ }
+#endif
+ } /* Set runtime message mask */
+ else if ((*buf == 0x7d) && (*(buf + 1) == 0x4)) {
+ ssid_first = *(uint16_t *) (buf + 2);
+ ssid_last = *(uint16_t *) (buf + 4);
+ if (ssid_last < ssid_first) {
+ pr_err("diag: Invalid msg mask ssid values, first: %d, last: %d\n", ssid_first, ssid_last);
+ return -EIO;
+ }
+ ssid_range = 4 * (ssid_last - ssid_first + 1);
+ if (ssid_range > APPS_BUF_SIZE - 8) {
+ pr_err("diag: Not enough space for message mask, ssid_range: %d\n", ssid_range);
+ return -EIO;
+ }
+ pr_debug("diag: received mask update for ssid_first = %d, ssid_last = %d\n", ssid_first, ssid_last);
+ diag_update_msg_mask(ssid_first, ssid_last, buf + 8);
+ diag_update_userspace_clients(MSG_MASKS_TYPE);
+#if defined(CONFIG_DIAG_OVER_USB)
+ if (chk_apps_only()) {
+ for (i = 0; i < 8 + ssid_range; i++)
+ *(driver->apps_rsp_buf + i) = *(buf + i);
+ *(driver->apps_rsp_buf + 6) = 0x1;
+ for (i = 0; i < NUM_SMD_CONTROL_CHANNELS; i++) {
+ if (driver->smd_cntl[i].ch)
+ diag_send_msg_mask_update(driver->smd_cntl[i].ch, ssid_first, ssid_last, driver->smd_cntl[i].peripheral);
+
+ }
+ encode_rsp_and_send(8 + ssid_range - 1);
+ return 0;
+ }
+#endif
+ } /* Set ALL runtime message mask */
+ else if ((*buf == 0x7d) && (*(buf + 1) == 0x5)) {
+ rt_mask = *(int *)(buf + 4);
+ diag_set_msg_mask(rt_mask);
+ diag_update_userspace_clients(MSG_MASKS_TYPE);
+#if defined(CONFIG_DIAG_OVER_USB)
+ if (chk_apps_only()) {
+ driver->apps_rsp_buf[0] = 0x7d; /* cmd_code */
+ driver->apps_rsp_buf[1] = 0x5; /* set subcommand */
+ driver->apps_rsp_buf[2] = 1; /* success */
+ driver->apps_rsp_buf[3] = 0; /* rsvd */
+ *(int *)(driver->apps_rsp_buf + 4) = rt_mask;
+ /* send msg mask update to peripheral */
+ for (i = 0; i < NUM_SMD_CONTROL_CHANNELS; i++) {
+ if (driver->smd_cntl[i].ch)
+ diag_send_msg_mask_update(driver->smd_cntl[i].ch, ALL_SSID, ALL_SSID, driver->smd_cntl[i].peripheral);
+
+ }
+ encode_rsp_and_send(7);
+ return 0;
+ }
+#endif
+ } else if (*buf == 0x82) { /* event mask change */
+ buf += 4;
+ diag_event_num_bytes = (*(uint16_t *) buf) / 8 + 1;
+ diag_update_event_mask(buf, diag_event_num_bytes);
+ diag_update_userspace_clients(EVENT_MASKS_TYPE);
+#if defined(CONFIG_DIAG_OVER_USB)
+ if (chk_apps_only()) {
+ driver->apps_rsp_buf[0] = 0x82;
+ driver->apps_rsp_buf[1] = 0x0;
+ *(uint16_t *) (driver->apps_rsp_buf + 2) = 0x0;
+ *(uint16_t *) (driver->apps_rsp_buf + 4) = EVENT_LAST_ID + 1;
+ memcpy(driver->apps_rsp_buf + 6, driver->event_masks, EVENT_LAST_ID / 8 + 1);
+ for (i = 0; i < NUM_SMD_CONTROL_CHANNELS; i++) {
+ if (driver->smd_cntl[i].ch)
+ diag_send_event_mask_update(driver->smd_cntl[i].ch, diag_event_num_bytes);
+ }
+ encode_rsp_and_send(6 + EVENT_LAST_ID / 8);
+ return 0;
+ }
+#endif
+ } else if (*buf == 0x60) {
+ diag_toggle_event_mask(*(buf + 1));
+ diag_update_userspace_clients(EVENT_MASKS_TYPE);
+#if defined(CONFIG_DIAG_OVER_USB)
+ if (chk_apps_only()) {
+ driver->apps_rsp_buf[0] = 0x60;
+ driver->apps_rsp_buf[1] = 0x0;
+ driver->apps_rsp_buf[2] = 0x0;
+ for (i = 0; i < NUM_SMD_CONTROL_CHANNELS; i++) {
+ if (driver->smd_cntl[i].ch)
+ diag_send_event_mask_update(driver->smd_cntl[i].ch, diag_event_num_bytes);
+ }
+ encode_rsp_and_send(2);
+ return 0;
+ }
+#endif
+ } else if (*buf == 0x78) {
+ if (!(driver->smd_cntl[MODEM_DATA].ch) || (driver->log_on_demand_support)) {
+ driver->apps_rsp_buf[0] = 0x78;
+ /* Copy log code received */
+ *(uint16_t *) (driver->apps_rsp_buf + 1) = *(uint16_t *) (buf + 1);
+ driver->apps_rsp_buf[3] = 0x1; /* Unknown */
+ encode_rsp_and_send(3);
+ }
+ }
+
+ return packet_type;
+}
+
+void diag_masks_init(void)
+{
+ driver->event_status = DIAG_CTRL_MASK_INVALID;
+ driver->msg_status = DIAG_CTRL_MASK_INVALID;
+ driver->log_status = DIAG_CTRL_MASK_INVALID;
+
+ if (driver->event_mask == NULL) {
+ driver->event_mask = kzalloc(sizeof(struct diag_ctrl_event_mask), GFP_KERNEL);
+ if (driver->event_mask == NULL)
+ goto err;
+ kmemleak_not_leak(driver->event_mask);
+ }
+ if (driver->msg_mask == NULL) {
+ driver->msg_mask = kzalloc(sizeof(struct diag_ctrl_msg_mask), GFP_KERNEL);
+ if (driver->msg_mask == NULL)
+ goto err;
+ kmemleak_not_leak(driver->msg_mask);
+ }
+ if (driver->log_mask == NULL) {
+ driver->log_mask = kzalloc(sizeof(struct diag_ctrl_log_mask), GFP_KERNEL);
+ if (driver->log_mask == NULL)
+ goto err;
+ kmemleak_not_leak(driver->log_mask);
+ }
+
+ if (driver->buf_msg_mask_update == NULL) {
+ driver->buf_msg_mask_update = kzalloc(APPS_BUF_SIZE, GFP_KERNEL);
+ if (driver->buf_msg_mask_update == NULL)
+ goto err;
+ kmemleak_not_leak(driver->buf_msg_mask_update);
+ }
+ if (driver->buf_log_mask_update == NULL) {
+ driver->buf_log_mask_update = kzalloc(APPS_BUF_SIZE, GFP_KERNEL);
+ if (driver->buf_log_mask_update == NULL)
+ goto err;
+ kmemleak_not_leak(driver->buf_log_mask_update);
+ }
+ if (driver->buf_event_mask_update == NULL) {
+ driver->buf_event_mask_update = kzalloc(APPS_BUF_SIZE, GFP_KERNEL);
+ if (driver->buf_event_mask_update == NULL)
+ goto err;
+ kmemleak_not_leak(driver->buf_event_mask_update);
+ }
+ if (driver->msg_masks == NULL) {
+ driver->msg_masks = kzalloc(MSG_MASK_SIZE, GFP_KERNEL);
+ if (driver->msg_masks == NULL)
+ goto err;
+ kmemleak_not_leak(driver->msg_masks);
+ }
+ if (driver->buf_feature_mask_update == NULL) {
+ driver->buf_feature_mask_update = kzalloc(sizeof(struct diag_ctrl_feature_mask) + FEATURE_MASK_LEN_BYTES, GFP_KERNEL);
+ if (driver->buf_feature_mask_update == NULL)
+ goto err;
+ kmemleak_not_leak(driver->buf_feature_mask_update);
+ }
+ if (driver->feature_mask == NULL) {
+ driver->feature_mask = kzalloc(sizeof(struct diag_ctrl_feature_mask), GFP_KERNEL);
+ if (driver->feature_mask == NULL)
+ goto err;
+ kmemleak_not_leak(driver->feature_mask);
+ }
+ diag_create_msg_mask_table();
+ diag_event_num_bytes = 0;
+ if (driver->log_masks == NULL) {
+ driver->log_masks = kzalloc(LOG_MASK_SIZE, GFP_KERNEL);
+ if (driver->log_masks == NULL)
+ goto err;
+ kmemleak_not_leak(driver->log_masks);
+ }
+ driver->log_masks_length = (sizeof(struct mask_info)) * MAX_EQUIP_ID;
+ if (driver->event_masks == NULL) {
+ driver->event_masks = kzalloc(EVENT_MASK_SIZE, GFP_KERNEL);
+ if (driver->event_masks == NULL)
+ goto err;
+ kmemleak_not_leak(driver->event_masks);
+ }
+ return;
+err:
+ pr_err("diag: Could not initialize diag mask buffers\n");
+ kfree(driver->event_mask);
+ kfree(driver->log_mask);
+ kfree(driver->msg_mask);
+ kfree(driver->msg_masks);
+ kfree(driver->log_masks);
+ kfree(driver->event_masks);
+ kfree(driver->feature_mask);
+ kfree(driver->buf_feature_mask_update);
+ kfree(driver->buf_msg_mask_update);
+ kfree(driver->buf_log_mask_update);
+ kfree(driver->buf_event_mask_update);
+}
+
+void diag_masks_exit(void)
+{
+ kfree(driver->event_mask);
+ kfree(driver->log_mask);
+ kfree(driver->msg_mask);
+ kfree(driver->msg_masks);
+ kfree(driver->log_masks);
+ kfree(driver->event_masks);
+ kfree(driver->feature_mask);
+ kfree(driver->buf_feature_mask_update);
+ kfree(driver->buf_msg_mask_update);
+ kfree(driver->buf_log_mask_update);
+ kfree(driver->buf_event_mask_update);
+}
diff --git a/drivers/char/diag/diag_masks.h b/drivers/char/diag/diag_masks.h
new file mode 100644
index 0000000..b85a3ef3
--- /dev/null
+++ b/drivers/char/diag/diag_masks.h
@@ -0,0 +1,28 @@
+/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef DIAG_MASKS_H
+#define DIAG_MASKS_H
+
+#include "diagfwd.h"
+
+int chk_equip_id_and_mask(int equip_id, uint8_t *buf);
+void diag_send_event_mask_update(smd_channel_t *, int num_bytes);
+void diag_send_msg_mask_update(smd_channel_t *, int ssid_first, int ssid_last, int proc);
+void diag_send_log_mask_update(smd_channel_t *, int);
+void diag_mask_update_fn(struct work_struct *work);
+void diag_send_feature_mask_update(struct diag_smd_info *smd_info);
+int diag_process_apps_masks(unsigned char *buf, int len);
+void diag_masks_init(void);
+void diag_masks_exit(void);
+extern int diag_event_num_bytes;
+#endif
diff --git a/drivers/char/diag/diagchar.h b/drivers/char/diag/diagchar.h
new file mode 100644
index 0000000..dd3ee23
--- /dev/null
+++ b/drivers/char/diag/diagchar.h
@@ -0,0 +1,445 @@
+/* Copyright (c) 2008-2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef DIAGCHAR_H
+#define DIAGCHAR_H
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/mempool.h>
+#include <linux/mutex.h>
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/workqueue.h>
+#include <linux/sched.h>
+#include <linux/wakelock.h>
+#include <mach/msm_smd.h>
+#include <asm/atomic.h>
+#include <asm/mach-types.h>
+
+/* Size of the USB buffers used for read and write */
+#define USB_MAX_OUT_BUF 4096
+#define APPS_BUF_SIZE 4096
+#define IN_BUF_SIZE 16384
+#define MAX_IN_BUF_SIZE 32768
+#define MAX_SYNC_OBJ_NAME_SIZE 32
+/* Size of the buffer used for deframing a packet
+ * received from the PC tool */
+#define HDLC_MAX 4096
+#define HDLC_OUT_BUF_SIZE 8192
+#define POOL_TYPE_COPY 1
+#define POOL_TYPE_HDLC 2
+#define POOL_TYPE_USER 3
+#define POOL_TYPE_WRITE_STRUCT 4
+#define POOL_TYPE_HSIC 5
+#define POOL_TYPE_HSIC_2 6
+#define POOL_TYPE_HSIC_WRITE 11
+#define POOL_TYPE_HSIC_2_WRITE 12
+#define POOL_TYPE_ALL 10
+#define POOL_TYPE_DCI 20
+
+#define POOL_COPY_IDX 0
+#define POOL_HDLC_IDX 1
+#define POOL_USER_IDX 2
+#define POOL_WRITE_STRUCT_IDX 3
+#define POOL_DCI_IDX 4
+#define POOL_BRIDGE_BASE POOL_DCI_IDX
+#define POOL_HSIC_IDX (POOL_BRIDGE_BASE + 1)
+#define POOL_HSIC_2_IDX (POOL_BRIDGE_BASE + 2)
+#define POOL_HSIC_3_IDX (POOL_BRIDGE_BASE + 3)
+#define POOL_HSIC_4_IDX (POOL_BRIDGE_BASE + 4)
+#define POOL_HSIC_WRITE_IDX (POOL_BRIDGE_BASE + 5)
+#define POOL_HSIC_2_WRITE_IDX (POOL_BRIDGE_BASE + 6)
+#define POOL_HSIC_3_WRITE_IDX (POOL_BRIDGE_BASE + 7)
+#define POOL_HSIC_4_WRITE_IDX (POOL_BRIDGE_BASE + 8)
+
+#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
+#define NUM_MEMORY_POOLS 13
+#else
+#define NUM_MEMORY_POOLS 5
+#endif
+
+#define MAX_SSID_PER_RANGE 200
+
+#define MODEM_DATA 0
+#define LPASS_DATA 1
+#define WCNSS_DATA 2
+#define APPS_DATA 3
+#define SDIO_DATA 4
+#define HSIC_DATA 5
+#define HSIC_2_DATA 6
+#define SMUX_DATA 10
+#define APPS_PROC 1
+/*
+ * Each row contains First (uint32_t), Last (uint32_t), Actual
+ * last (uint32_t) values along with the range of SSIDs
+ * (MAX_SSID_PER_RANGE*uint32_t).
+ * And there are MSG_MASK_TBL_CNT rows.
+ */
+#define MSG_MASK_SIZE ((MAX_SSID_PER_RANGE+3) * 4 * MSG_MASK_TBL_CNT)
+#define LOG_MASK_SIZE 8000
+#define EVENT_MASK_SIZE 1000
+#define USER_SPACE_DATA 8192
+#define PKT_SIZE 4096
+#define MAX_EQUIP_ID 15
+#define DIAG_CTRL_MSG_LOG_MASK 9
+#define DIAG_CTRL_MSG_EVENT_MASK 10
+#define DIAG_CTRL_MSG_F3_MASK 11
+#define CONTROL_CHAR 0x7E
+
+#define DIAG_CON_APSS (0x0001) /* Bit mask for APSS */
+#define DIAG_CON_MPSS (0x0002) /* Bit mask for MPSS */
+#define DIAG_CON_LPASS (0x0004) /* Bit mask for LPASS */
+#define DIAG_CON_WCNSS (0x0008) /* Bit mask for WCNSS */
+
+#define NUM_STM_PROCESSORS 4
+
+#define DIAG_STM_MODEM 0x01
+#define DIAG_STM_LPASS 0x02
+#define DIAG_STM_WCNSS 0x04
+#define DIAG_STM_APPS 0x08
+
+/*
+ * The status bit masks when received in a signal handler are to be
+ * used in conjunction with the peripheral list bit mask to determine the
+ * status for a peripheral. For instance, 0x00010002 would denote an open
+ * status on the MPSS
+ */
+#define DIAG_STATUS_OPEN (0x00010000) /* DCI channel open status mask */
+#define DIAG_STATUS_CLOSED (0x00020000) /* DCI channel closed status mask */
+
+#define MODE_REALTIME 1
+#define MODE_NONREALTIME 0
+
+#define NUM_SMD_DATA_CHANNELS 3
+#define NUM_SMD_CONTROL_CHANNELS NUM_SMD_DATA_CHANNELS
+#define NUM_SMD_DCI_CHANNELS 1
+#define NUM_SMD_CMD_CHANNELS 1
+#define NUM_SMD_DCI_CMD_CHANNELS 1
+
+#define SMD_DATA_TYPE 0
+#define SMD_CNTL_TYPE 1
+#define SMD_DCI_TYPE 2
+#define SMD_CMD_TYPE 3
+#define SMD_DCI_CMD_TYPE 4
+
+#define DIAG_PROC_DCI 1
+#define DIAG_PROC_MEMORY_DEVICE 2
+
+/* Flags to vote the DCI or Memory device process up or down
+ * when it becomes active or inactive */
+#define VOTE_DOWN 0
+#define VOTE_UP 1
+
+#define DIAG_TS_SIZE 50
+
+/* Maximum number of pkt reg supported at initialization */
+extern int diag_max_reg;
+extern int diag_threshold_reg;
+
+#define APPEND_DEBUG(ch) \
+do { \
+ diag_debug_buf[diag_debug_buf_idx] = ch; \
+ (diag_debug_buf_idx < 1023) ? \
+ (diag_debug_buf_idx++) : (diag_debug_buf_idx = 0); \
+} while (0)
+
+/* List of remote processor supported */
+enum remote_procs {
+ MDM = 1,
+ MDM2 = 2,
+ MDM3 = 3,
+ MDM4 = 4,
+ QSC = 5,
+};
+
+struct diag_master_table {
+ uint16_t cmd_code;
+ uint16_t subsys_id;
+ uint32_t client_id;
+ uint16_t cmd_code_lo;
+ uint16_t cmd_code_hi;
+ int process_id;
+};
+
+struct bindpkt_params_per_process {
+ /* Name of the synchronization object associated with this proc */
+ char sync_obj_name[MAX_SYNC_OBJ_NAME_SIZE];
+ uint32_t count; /* Number of entries in this bind */
+ struct bindpkt_params *params; /* first bind params */
+};
+
+struct bindpkt_params {
+ uint16_t cmd_code;
+ uint16_t subsys_id;
+ uint16_t cmd_code_lo;
+ uint16_t cmd_code_hi;
+ /* For Central Routing, used to store Processor number */
+ uint16_t proc_id;
+ uint32_t event_id;
+ uint32_t log_code;
+ /* For Central Routing, used to store SMD channel pointer */
+ uint32_t client_id;
+};
+
+struct diag_write_device {
+ void *buf;
+ int length;
+};
+
+struct diag_client_map {
+ char name[20];
+ int pid;
+ int timeout;
+};
+
+struct diag_nrt_wake_lock {
+ int enabled;
+ int ref_count;
+ int copy_count;
+ struct wake_lock read_lock;
+ spinlock_t read_spinlock;
+};
+
+struct real_time_vote_t {
+ uint16_t proc;
+ uint8_t real_time_vote;
+};
+
+/* This structure is defined in USB header file */
+#ifndef CONFIG_DIAG_OVER_USB
+struct diag_request {
+ char *buf;
+ int length;
+ int actual;
+ int status;
+ void *context;
+};
+#endif
+
+struct diag_smd_info {
+ int peripheral; /* The peripheral this smd channel communicates with */
+ int type; /* The type of smd channel (data, control, dci) */
+ uint16_t peripheral_mask;
+ int encode_hdlc; /* Whether data is raw and needs to be hdlc encoded */
+
+ smd_channel_t *ch;
+ smd_channel_t *ch_save;
+
+ struct mutex smd_ch_mutex;
+
+ int in_busy_1;
+ int in_busy_2;
+
+ unsigned char *buf_in_1;
+ unsigned char *buf_in_2;
+
+ unsigned char *buf_in_1_raw;
+ unsigned char *buf_in_2_raw;
+
+ unsigned int buf_in_1_size;
+ unsigned int buf_in_2_size;
+
+ unsigned int buf_in_1_raw_size;
+ unsigned int buf_in_2_raw_size;
+
+ struct diag_request *write_ptr_1;
+ struct diag_request *write_ptr_2;
+
+ struct diag_nrt_wake_lock nrt_lock;
+
+ struct workqueue_struct *wq;
+
+ struct work_struct diag_read_smd_work;
+ struct work_struct diag_notify_update_smd_work;
+ int notify_context;
+ struct work_struct diag_general_smd_work;
+ int general_context;
+
+ /*
+ * Function ptr for function to call to process the data that
+ * was just read from the smd channel
+ */
+ int (*process_smd_read_data)(struct diag_smd_info *smd_info, void *buf, int num_bytes);
+};
+
+struct diagchar_dev {
+
+ /* State for the char driver */
+ unsigned int major;
+ unsigned int minor_start;
+ int num;
+ struct cdev *cdev;
+ char *name;
+ int dropped_count;
+ struct class *diagchar_class;
+ int ref_count;
+ struct mutex diagchar_mutex;
+ wait_queue_head_t wait_q;
+ wait_queue_head_t smd_wait_q;
+ struct diag_client_map *client_map;
+ int *data_ready;
+ int num_clients;
+ int polling_reg_flag;
+ struct diag_write_device *buf_tbl;
+ unsigned int buf_tbl_size;
+ int use_device_tree;
+ int supports_separate_cmdrsp;
+ int supports_apps_hdlc_encoding;
+ /* The state requested in the STM command */
+ int stm_state_requested[NUM_STM_PROCESSORS];
+ /* The current STM state */
+ int stm_state[NUM_STM_PROCESSORS];
+ /* Whether or not the peripheral supports STM */
+ int peripheral_supports_stm[NUM_SMD_CONTROL_CHANNELS];
+ /* DCI related variables */
+ struct dci_pkt_req_tracking_tbl *req_tracking_tbl;
+ struct list_head dci_client_list;
+ int dci_tag;
+ int dci_client_id;
+ struct mutex dci_mutex;
+ int num_dci_client;
+ unsigned char *apps_dci_buf;
+ int dci_state;
+ struct workqueue_struct *diag_dci_wq;
+ /* Memory pool parameters */
+ unsigned int itemsize;
+ unsigned int poolsize;
+ unsigned int itemsize_hdlc;
+ unsigned int poolsize_hdlc;
+ unsigned int itemsize_user;
+ unsigned int poolsize_user;
+ unsigned int itemsize_write_struct;
+ unsigned int poolsize_write_struct;
+ unsigned int itemsize_dci;
+ unsigned int poolsize_dci;
+ unsigned int debug_flag;
+ /* State for the mempool for the char driver */
+ mempool_t *diagpool;
+ mempool_t *diag_hdlc_pool;
+ mempool_t *diag_user_pool;
+ mempool_t *diag_write_struct_pool;
+ mempool_t *diag_dci_pool;
+ spinlock_t diag_mem_lock;
+ int count;
+ int count_hdlc_pool;
+ int count_user_pool;
+ int count_write_struct_pool;
+ int count_dci_pool;
+ int used;
+ /* Buffers for masks */
+ struct mutex diag_cntl_mutex;
+ struct diag_ctrl_event_mask *event_mask;
+ struct diag_ctrl_log_mask *log_mask;
+ struct diag_ctrl_msg_mask *msg_mask;
+ struct diag_ctrl_feature_mask *feature_mask;
+ /* State for diag forwarding */
+ struct diag_smd_info smd_data[NUM_SMD_DATA_CHANNELS];
+ struct diag_smd_info smd_cntl[NUM_SMD_CONTROL_CHANNELS];
+ struct diag_smd_info smd_dci[NUM_SMD_DCI_CHANNELS];
+ struct diag_smd_info smd_cmd[NUM_SMD_CMD_CHANNELS];
+ struct diag_smd_info smd_dci_cmd[NUM_SMD_DCI_CMD_CHANNELS];
+ int rcvd_feature_mask[NUM_SMD_CONTROL_CHANNELS];
+ int separate_cmdrsp[NUM_SMD_CONTROL_CHANNELS];
+ unsigned char *usb_buf_out;
+ unsigned char *apps_rsp_buf;
+ /* buffer for updating mask to peripherals */
+ unsigned char *buf_msg_mask_update;
+ unsigned char *buf_log_mask_update;
+ unsigned char *buf_event_mask_update;
+ unsigned char *buf_feature_mask_update;
+ int read_len_legacy;
+ struct mutex diag_hdlc_mutex;
+ unsigned char *hdlc_buf;
+ unsigned hdlc_count;
+ unsigned hdlc_escape;
+ int in_busy_pktdata;
+ struct device *dci_device;
+ struct device *dci_cmd_device;
+ /* Variables for non real time mode */
+ int real_time_mode;
+ int real_time_update_busy;
+ uint16_t proc_active_mask;
+ uint16_t proc_rt_vote_mask;
+ struct mutex real_time_mutex;
+ struct work_struct diag_real_time_work;
+ struct workqueue_struct *diag_real_time_wq;
+#ifdef CONFIG_DIAG_OVER_USB
+ int usb_connected;
+ struct usb_diag_ch *legacy_ch;
+ struct work_struct diag_proc_hdlc_work;
+ struct work_struct diag_read_work;
+ struct work_struct diag_usb_connect_work;
+ struct work_struct diag_usb_disconnect_work;
+#endif
+ struct workqueue_struct *diag_wq;
+ struct wake_lock wake_lock;
+ struct work_struct diag_drain_work;
+ struct workqueue_struct *diag_cntl_wq;
+ uint8_t *msg_masks;
+ uint8_t msg_status;
+ uint8_t *log_masks;
+ uint8_t log_status;
+ int log_masks_length;
+ uint8_t *event_masks;
+ uint8_t event_status;
+ uint8_t log_on_demand_support;
+ struct diag_master_table *table;
+ uint8_t *pkt_buf;
+ int pkt_length;
+ struct diag_request *usb_read_ptr;
+ struct diag_request *write_ptr_svc;
+ int logging_mode;
+ int mask_check;
+ int logging_process_id;
+ struct task_struct *socket_process;
+ struct task_struct *callback_process;
+#ifdef CONFIG_DIAG_SDIO_PIPE
+ unsigned char *buf_in_sdio;
+ unsigned char *usb_buf_mdm_out;
+ struct sdio_channel *sdio_ch;
+ int read_len_mdm;
+ int in_busy_sdio;
+ struct usb_diag_ch *mdm_ch;
+ struct work_struct diag_read_mdm_work;
+ struct workqueue_struct *diag_sdio_wq;
+ struct work_struct diag_read_sdio_work;
+ struct work_struct diag_close_sdio_work;
+ struct diag_request *usb_read_mdm_ptr;
+ struct diag_request *write_ptr_mdm;
+#endif
+#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
+ /* common for all bridges */
+ struct work_struct diag_connect_work;
+ struct work_struct diag_disconnect_work;
+ /* SGLTE variables */
+ int lcid;
+ unsigned char *buf_in_smux;
+ int in_busy_smux;
+ int diag_smux_enabled;
+ int smux_connected;
+ struct diag_request *write_ptr_mdm;
+#endif
+};
+
+extern struct diag_bridge_dev *diag_bridge;
+extern struct diag_hsic_dev *diag_hsic;
+extern struct diagchar_dev *driver;
+
+extern int wrap_enabled;
+extern uint16_t wrap_count;
+
+void diag_get_timestamp(char *time_str);
+int diag_find_polling_reg(int i);
+void check_drain_timer(void);
+
+#endif
diff --git a/drivers/char/diag/diagchar_core.c b/drivers/char/diag/diagchar_core.c
new file mode 100644
index 0000000..0582d97
--- /dev/null
+++ b/drivers/char/diag/diagchar_core.c
@@ -0,0 +1,2235 @@
+/* Copyright (c) 2008-2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/slab.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/cdev.h>
+#include <linux/fs.h>
+#include <linux/device.h>
+#include <linux/delay.h>
+#include <linux/uaccess.h>
+#include <linux/diagchar.h>
+#include <linux/platform_device.h>
+#include <linux/sched.h>
+#include <linux/ratelimit.h>
+#ifdef CONFIG_DIAG_OVER_USB
+#include <mach/usbdiag.h>
+#endif
+#include <asm/current.h>
+#include "diagchar_hdlc.h"
+#include "diagmem.h"
+#include "diagchar.h"
+#include "diagfwd.h"
+#include "diagfwd_cntl.h"
+#include "diag_dci.h"
+#ifdef CONFIG_DIAG_SDIO_PIPE
+#include "diagfwd_sdio.h"
+#endif
+#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
+#include "diagfwd_hsic.h"
+#include "diagfwd_smux.h"
+#endif
+#include <linux/timer.h>
+#include "diag_debugfs.h"
+#include "diag_masks.h"
+#include "diagfwd_bridge.h"
+
+#include <linux/coresight-stm.h>
+#include <linux/kernel.h>
+
+MODULE_DESCRIPTION("Diag Char Driver");
+MODULE_LICENSE("GPL v2");
+MODULE_VERSION("1.0");
+
+#define INIT 1
+#define EXIT -1
+struct diagchar_dev *driver;
+struct diagchar_priv {
+ int pid;
+};
+/* The following variables can be specified by module options */
+/* for copy buffer */
+static unsigned int itemsize = 4096; /* Size of item in the mempool */
+static unsigned int poolsize = 12; /* Number of items in the mempool */
+/* for hdlc buffer */
+static unsigned int itemsize_hdlc = 8192; /* Size of item in the mempool */
+static unsigned int poolsize_hdlc = 10; /* Number of items in the mempool */
+/* for user buffer */
+static unsigned int itemsize_user = 8192; /* Size of item in the mempool */
+static unsigned int poolsize_user = 8; /* Number of items in the mempool */
+/* for write structure buffer */
+static unsigned int itemsize_write_struct = 20; /* Size of item in the mempool */
+static unsigned int poolsize_write_struct = 10; /* Number of items in the mempool */
+/* For the dci memory pool */
+static unsigned int itemsize_dci = 8192; /* Size of item in the mempool */
+static unsigned int poolsize_dci = 10; /* Number of items in the mempool */
+/* This is the max number of user-space clients supported at initialization */
+static unsigned int max_clients = 15;
+static unsigned int threshold_client_limit = 30;
+/* This is the maximum number of pkt registrations supported at initialization */
+int diag_max_reg = 600;
+int diag_threshold_reg = 750;
+
+/* Timer variables */
+static struct timer_list drain_timer;
+static int timer_in_progress;
+void *buf_hdlc;
+module_param(itemsize, uint, 0);
+module_param(poolsize, uint, 0);
+module_param(max_clients, uint, 0);
+
+/*
+ * delayed_rsp_id 0 represents no delay in the response. Any other
+ * number means that the diag packet has a delayed response.
+ */
+static uint16_t delayed_rsp_id = 1;
+
+#define DIAGPKT_MAX_DELAYED_RSP 0xFFFF
+
+/*
+ * Returns the next delayed rsp id - rolls the id over if wrapping is
+ * enabled.
+ */
+uint16_t diagpkt_next_delayed_rsp_id(uint16_t rspid)
+{
+ if (rspid < DIAGPKT_MAX_DELAYED_RSP)
+ rspid++;
+ else {
+ if (wrap_enabled) {
+ rspid = 1;
+ wrap_count++;
+ } else
+ rspid = DIAGPKT_MAX_DELAYED_RSP;
+ }
+ delayed_rsp_id = rspid;
+ return delayed_rsp_id;
+}
+
+#define COPY_USER_SPACE_OR_EXIT(buf, data, length) \
+do { \
+ if ((count < ret+length) || (copy_to_user(buf, \
+ (void *)&data, length))) { \
+ ret = -EFAULT; \
+ goto exit; \
+ } \
+ ret += length; \
+} while (0)
+
+static void drain_timer_func(unsigned long data)
+{
+ queue_work(driver->diag_wq, &(driver->diag_drain_work));
+}
+
+void diag_drain_work_fn(struct work_struct *work)
+{
+ int err = 0;
+ struct list_head *start, *temp;
+ struct diag_dci_client_tbl *entry = NULL;
+ timer_in_progress = 0;
+
+ mutex_lock(&driver->diagchar_mutex);
+ if (buf_hdlc) {
+ err = diag_device_write(buf_hdlc, APPS_DATA, NULL);
+ if (err)
+ diagmem_free(driver, buf_hdlc, POOL_TYPE_HDLC);
+ buf_hdlc = NULL;
+#ifdef DIAG_DEBUG
+ pr_debug("diag: Number of bytes written from timer is %d\n", driver->used);
+#endif
+ driver->used = 0;
+ }
+
+ mutex_unlock(&driver->diagchar_mutex);
+
+ if (!(driver->num_dci_client > 0))
+ return;
+
+ list_for_each_safe(start, temp, &driver->dci_client_list) {
+ entry = list_entry(start, struct diag_dci_client_tbl, track);
+ if (entry->client == NULL)
+ continue;
+ mutex_lock(&entry->data_mutex);
+ if (entry->apps_data_len > 0) {
+ err = dci_apps_write(entry);
+ entry->dci_apps_data = NULL;
+ entry->apps_data_len = 0;
+ if (entry->apps_in_busy_1 == 0) {
+ entry->dci_apps_data = entry->dci_apps_buffer;
+ entry->apps_in_busy_1 = 1;
+ } else {
+ entry->dci_apps_data = diagmem_alloc(driver, driver->itemsize_dci, POOL_TYPE_DCI);
+ }
+
+ if (!entry->dci_apps_data)
+ pr_err_ratelimited("diag: In %s, Not able to acquire a buffer. Reduce data rate.\n", __func__);
+ }
+ mutex_unlock(&entry->data_mutex);
+ }
+}
+
+void check_drain_timer(void)
+{
+ int ret = 0;
+
+ if (!timer_in_progress) {
+ timer_in_progress = 1;
+ ret = mod_timer(&drain_timer, jiffies + msecs_to_jiffies(500));
+ }
+}
+
+#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
+void diag_clear_hsic_tbl(void)
+{
+ int i, j;
+
+ /* Clear for all active HSIC bridges */
+ for (j = 0; j < MAX_HSIC_CH; j++) {
+ if (diag_hsic[j].hsic_ch) {
+ diag_hsic[j].num_hsic_buf_tbl_entries = 0;
+ for (i = 0; i < diag_hsic[j].poolsize_hsic_write; i++) {
+ if (diag_hsic[j].hsic_buf_tbl[i].buf) {
+ /* Return the buffer to the pool */
+ diagmem_free(driver, (unsigned char *)
+ (diag_hsic[j].hsic_buf_tbl[i].buf), j + POOL_TYPE_HSIC);
+ diag_hsic[j].hsic_buf_tbl[i].buf = 0;
+ }
+ diag_hsic[j].hsic_buf_tbl[i].length = 0;
+ }
+ }
+ }
+}
+#else
+void diag_clear_hsic_tbl(void)
+{
+}
+#endif
+
+void diag_add_client(int i, struct file *file)
+{
+ struct diagchar_priv *diagpriv_data;
+
+ driver->client_map[i].pid = current->tgid;
+ driver->client_map[i].timeout = 0;
+ diagpriv_data = kmalloc(sizeof(struct diagchar_priv), GFP_KERNEL);
+ if (diagpriv_data)
+ diagpriv_data->pid = current->tgid;
+ file->private_data = diagpriv_data;
+ strlcpy(driver->client_map[i].name, current->comm, 20);
+ driver->client_map[i].name[19] = '\0';
+}
+
+static int diagchar_open(struct inode *inode, struct file *file)
+{
+ int i = 0;
+ void *temp;
+ pr_info("%s: %s (parent: %s): tgid=%d\n", __func__, current->comm, current->parent->comm, current->tgid);
+
+ if (driver) {
+ mutex_lock(&driver->diagchar_mutex);
+
+ for (i = 0; i < driver->num_clients; i++)
+ if (driver->client_map[i].pid == 0)
+ break;
+
+ if (i < driver->num_clients) {
+ diag_add_client(i, file);
+ } else {
+ if (i < threshold_client_limit) {
+ driver->num_clients++;
+ temp = krealloc(driver->client_map, (driver->num_clients) * sizeof(struct diag_client_map), GFP_KERNEL);
+ if (!temp)
+ goto fail;
+ else
+ driver->client_map = temp;
+ temp = krealloc(driver->data_ready, (driver->num_clients) * sizeof(int), GFP_KERNEL);
+ if (!temp)
+ goto fail;
+ else
+ driver->data_ready = temp;
+ diag_add_client(i, file);
+ } else {
+ mutex_unlock(&driver->diagchar_mutex);
+ pr_alert("Max client limit for DIAG reached\n");
+ pr_info("Cannot open handle %s %d\n", current->comm, current->tgid);
+ for (i = 0; i < driver->num_clients; i++)
+ pr_debug("%d) %s PID=%d", i, driver->client_map[i].name, driver->client_map[i].pid);
+ return -ENOMEM;
+ }
+ }
+ driver->data_ready[i] = 0x0;
+ driver->data_ready[i] |= MSG_MASKS_TYPE;
+ driver->data_ready[i] |= EVENT_MASKS_TYPE;
+ driver->data_ready[i] |= LOG_MASKS_TYPE;
+
+ if (driver->ref_count == 0)
+ diagmem_init(driver);
+ driver->ref_count++;
+ mutex_unlock(&driver->diagchar_mutex);
+ return 0;
+ }
+ return -ENOMEM;
+
+fail:
+ mutex_unlock(&driver->diagchar_mutex);
+ driver->num_clients--;
+ pr_alert("diag: Insufficient memory for new client\n");
+ return -ENOMEM;
+}
+
+static int diagchar_close(struct inode *inode, struct file *file)
+{
+ int i = -1;
+ struct diagchar_priv *diagpriv_data = file->private_data;
+
+ pr_debug("diag: process exit %s\n", current->comm);
+ if (!(file->private_data)) {
+ pr_alert("diag: Invalid file pointer\n");
+ return -ENOMEM;
+ }
+
+ if (!driver)
+ return -ENOMEM;
+
+ /*
+ * Clean up any DCI registrations if this is a DCI client.
+ * This is especially helpful in the case of an ungraceful exit of a
+ * DCI client, as it removes any pending registrations of that client.
+ */
+ diag_dci_deinit_client();
+ /* If the exiting process is the socket process */
+ mutex_lock(&driver->diagchar_mutex);
+ if (driver->socket_process && (driver->socket_process->tgid == current->tgid)) {
+ driver->socket_process = NULL;
+ diag_update_proc_vote(DIAG_PROC_MEMORY_DEVICE, VOTE_DOWN);
+ }
+ if (driver->callback_process && (driver->callback_process->tgid == current->tgid)) {
+ driver->callback_process = NULL;
+ diag_update_proc_vote(DIAG_PROC_MEMORY_DEVICE, VOTE_DOWN);
+ }
+ mutex_unlock(&driver->diagchar_mutex);
+
+#ifdef CONFIG_DIAG_OVER_USB
+ /* If the SD logging process exits, change logging to USB mode */
+ if (driver->logging_process_id == current->tgid) {
+ driver->logging_mode = USB_MODE;
+ diag_update_proc_vote(DIAG_PROC_MEMORY_DEVICE, VOTE_DOWN);
+ diagfwd_connect();
+#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
+ diag_clear_hsic_tbl();
+ diagfwd_cancel_hsic(REOPEN_HSIC);
+ diagfwd_connect_bridge(0);
+#endif
+ }
+#endif /* DIAG over USB */
+ /* Delete the pkt response table entry for the exiting process */
+ for (i = 0; i < diag_max_reg; i++)
+ if (driver->table[i].process_id == current->tgid)
+ driver->table[i].process_id = 0;
+
+ mutex_lock(&driver->diagchar_mutex);
+ driver->ref_count--;
+ /* On Client exit, try to destroy all 5 pools */
+ diagmem_exit(driver, POOL_TYPE_COPY);
+ diagmem_exit(driver, POOL_TYPE_HDLC);
+ diagmem_exit(driver, POOL_TYPE_USER);
+ diagmem_exit(driver, POOL_TYPE_WRITE_STRUCT);
+ diagmem_exit(driver, POOL_TYPE_DCI);
+ for (i = 0; i < driver->num_clients; i++) {
+ if (NULL != diagpriv_data && diagpriv_data->pid == driver->client_map[i].pid) {
+ driver->client_map[i].pid = 0;
+ driver->client_map[i].timeout = 0;
+ kfree(diagpriv_data);
+ diagpriv_data = NULL;
+ break;
+ }
+ }
+ mutex_unlock(&driver->diagchar_mutex);
+ return 0;
+}
+
+int diag_find_polling_reg(int i)
+{
+ uint16_t subsys_id, cmd_code_lo, cmd_code_hi;
+
+ subsys_id = driver->table[i].subsys_id;
+ cmd_code_lo = driver->table[i].cmd_code_lo;
+ cmd_code_hi = driver->table[i].cmd_code_hi;
+
+ if (driver->table[i].cmd_code == 0xFF) {
+ if (subsys_id == 0xFF && cmd_code_hi >= 0x0C && cmd_code_lo <= 0x0C)
+ return 1;
+ else if (subsys_id == 0x04 && cmd_code_hi >= 0x0E && cmd_code_lo <= 0x0E)
+ return 1;
+ else if (subsys_id == 0x08 && cmd_code_hi >= 0x02 && cmd_code_lo <= 0x02)
+ return 1;
+ else if (subsys_id == 0x32 && cmd_code_hi >= 0x03 && cmd_code_lo <= 0x03)
+ return 1;
+ }
+ return 0;
+}
+
+void diag_clear_reg(int peripheral)
+{
+ int i;
+
+ mutex_lock(&driver->diagchar_mutex);
+ /* reset polling flag */
+ driver->polling_reg_flag = 0;
+ for (i = 0; i < diag_max_reg; i++) {
+ if (driver->table[i].client_id == peripheral)
+ driver->table[i].process_id = 0;
+ }
+ /* re-scan the registration table */
+ for (i = 0; i < diag_max_reg; i++) {
+ if (driver->table[i].process_id != 0 && diag_find_polling_reg(i) == 1) {
+ driver->polling_reg_flag = 1;
+ break;
+ }
+ }
+ mutex_unlock(&driver->diagchar_mutex);
+}
+
+void diag_add_reg(int j, struct bindpkt_params *params, int *success, unsigned int *count_entries)
+{
+ *success = 1;
+ driver->table[j].cmd_code = params->cmd_code;
+ driver->table[j].subsys_id = params->subsys_id;
+ driver->table[j].cmd_code_lo = params->cmd_code_lo;
+ driver->table[j].cmd_code_hi = params->cmd_code_hi;
+
+ /* check if the incoming registration is polling and polling is not yet registered */
+ if (driver->polling_reg_flag == 0)
+ if (diag_find_polling_reg(j) == 1)
+ driver->polling_reg_flag = 1;
+ if (params->proc_id == APPS_PROC) {
+ driver->table[j].process_id = current->tgid;
+ driver->table[j].client_id = APPS_DATA;
+ } else {
+ driver->table[j].process_id = NON_APPS_PROC;
+ driver->table[j].client_id = params->client_id;
+ }
+ (*count_entries)++;
+}
+
+void diag_get_timestamp(char *time_str)
+{
+ struct timeval t;
+ struct tm broken_tm;
+ do_gettimeofday(&t);
+ if (!time_str)
+ return;
+ time_to_tm(t.tv_sec, 0, &broken_tm);
+ scnprintf(time_str, DIAG_TS_SIZE, "%d:%d:%d:%ld", broken_tm.tm_hour, broken_tm.tm_min, broken_tm.tm_sec, t.tv_usec);
+}
+
+static int diag_get_remote(int remote_info)
+{
+ int val = (remote_info < 0) ? -remote_info : remote_info;
+ int remote_val;
+
+ switch (val) {
+ case MDM:
+ case MDM2:
+ case MDM3:
+ case MDM4:
+ case QSC:
+ remote_val = -remote_info;
+ break;
+ default:
+ remote_val = 0;
+ break;
+ }
+
+ return remote_val;
+}
+
+#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
+uint16_t diag_get_remote_device_mask(void)
+{
+ uint16_t remote_dev = 0;
+ int i;
+
+ /* Check for MDM processor */
+ for (i = 0; i < MAX_HSIC_CH; i++)
+ if (diag_hsic[i].hsic_inited)
+ remote_dev |= 1 << i;
+
+ /* Check for QSC processor */
+ if (driver->diag_smux_enabled)
+ remote_dev |= 1 << SMUX;
+
+ return remote_dev;
+}
+
+int diag_copy_remote(char __user *buf, size_t count, int *pret, int *pnum_data)
+{
+ int i;
+ int index;
+ int exit_stat = 1;
+ int ret = *pret;
+ int num_data = *pnum_data;
+ int remote_token;
+ unsigned long spin_lock_flags;
+ struct diag_write_device hsic_buf_tbl[NUM_HSIC_BUF_TBL_ENTRIES];
+
+ remote_token = diag_get_remote(MDM);
+ for (index = 0; index < MAX_HSIC_CH; index++) {
+ if (!diag_hsic[index].hsic_inited) {
+ remote_token--;
+ continue;
+ }
+
+ spin_lock_irqsave(&diag_hsic[index].hsic_spinlock, spin_lock_flags);
+ for (i = 0; i < diag_hsic[index].poolsize_hsic_write; i++) {
+ hsic_buf_tbl[i].buf = diag_hsic[index].hsic_buf_tbl[i].buf;
+ diag_hsic[index].hsic_buf_tbl[i].buf = 0;
+ hsic_buf_tbl[i].length = diag_hsic[index].hsic_buf_tbl[i].length;
+ diag_hsic[index].hsic_buf_tbl[i].length = 0;
+ }
+ diag_hsic[index].num_hsic_buf_tbl_entries = 0;
+ spin_unlock_irqrestore(&diag_hsic[index].hsic_spinlock, spin_lock_flags);
+
+ for (i = 0; i < diag_hsic[index].poolsize_hsic_write; i++) {
+ if (hsic_buf_tbl[i].length > 0) {
+ pr_debug("diag: HSIC copy to user, i: %d, buf: %p, len: %d\n", i, hsic_buf_tbl[i].buf, hsic_buf_tbl[i].length);
+ num_data++;
+#if 0
+ /* Copy the negative token */
+ if (copy_to_user(buf + ret, &remote_token, 4)) {
+ num_data--;
+ goto drop_hsic;
+ }
+ ret += 4;
+ /* Copy the length of data being passed */
+ if (copy_to_user(buf + ret, (void *)&(hsic_buf_tbl[i].length), 4)) {
+ num_data--;
+ goto drop_hsic;
+ }
+ ret += 4;
+#endif
+ /* Copy the actual data being passed */
+ if (copy_to_user(buf + ret, (void *)hsic_buf_tbl[i].buf, hsic_buf_tbl[i].length)) {
+ ret -= 4;
+ num_data--;
+ goto drop_hsic;
+ }
+ ret += hsic_buf_tbl[i].length;
+drop_hsic:
+ /* Return the buffer to the pool */
+ diagmem_free(driver, (unsigned char *)(hsic_buf_tbl[i].buf), index + POOL_TYPE_HSIC);
+
+ /* Call the write complete function */
+ diagfwd_write_complete_hsic(NULL, index);
+ }
+ }
+ remote_token--;
+ }
+ if (driver->in_busy_smux == 1) {
+ remote_token = diag_get_remote(QSC);
+ num_data++;
+
+ /* Copy the negative token of data being passed */
+ COPY_USER_SPACE_OR_EXIT(buf + ret, remote_token, 4);
+ /* Copy the length of data being passed */
+ COPY_USER_SPACE_OR_EXIT(buf + ret, (driver->write_ptr_mdm->length), 4);
+ /* Copy the actual data being passed */
+ COPY_USER_SPACE_OR_EXIT(buf + ret, *(driver->buf_in_smux), driver->write_ptr_mdm->length);
+ pr_debug("diag: SMUX data copied\n");
+ driver->in_busy_smux = 0;
+ }
+ exit_stat = 0;
+exit:
+ *pret = ret;
+ *pnum_data = num_data;
+ return exit_stat;
+}
+
+#else
+inline uint16_t diag_get_remote_device_mask(void)
+{
+ return 0;
+}
+
+inline int diag_copy_remote(char __user *buf, size_t count, int *pret, int *pnum_data)
+{
+ return 0;
+}
+#endif
+
+static int diag_copy_dci(char __user *buf, size_t count, struct diag_dci_client_tbl *entry, int *pret)
+{
+ int total_data_len = 0;
+ int ret = 0;
+ int exit_stat = 1;
+ int i;
+
+ if (!buf || !entry || !pret)
+ return exit_stat;
+
+ ret = *pret;
+
+ /* Placeholder for the total data length */
+ ret += 4;
+
+ mutex_lock(&entry->data_mutex);
+ /* Copy the apps data */
+ for (i = 0; i < entry->dci_apps_tbl_size; i++) {
+ if (entry->dci_apps_tbl[i].buf != NULL) {
+ if (copy_to_user(buf + ret, (void *)(entry->dci_apps_tbl[i].buf), entry->dci_apps_tbl[i].length)) {
+ goto drop;
+ }
+ ret += entry->dci_apps_tbl[i].length;
+ total_data_len += entry->dci_apps_tbl[i].length;
+drop:
+ if (entry->dci_apps_tbl[i].buf == entry->dci_apps_buffer) {
+ entry->apps_in_busy_1 = 0;
+ } else {
+ diagmem_free(driver, entry->dci_apps_tbl[i].buf, POOL_TYPE_DCI);
+ }
+ entry->dci_apps_tbl[i].buf = NULL;
+ entry->dci_apps_tbl[i].length = 0;
+ }
+ }
+
+ /* Copy the smd data */
+ if (entry->data_len > 0) {
+ COPY_USER_SPACE_OR_EXIT(buf + ret, *(entry->dci_data), entry->data_len);
+ total_data_len += entry->data_len;
+ entry->data_len = 0;
+ }
+
+ if (total_data_len > 0) {
+ /* Copy the total data length */
+ COPY_USER_SPACE_OR_EXIT(buf + 4, total_data_len, 4);
+ ret -= 4;
+ } else {
+ pr_debug("diag: In %s, Trying to copy ZERO bytes, total_data_len: %d\n", __func__, total_data_len);
+ }
+
+ exit_stat = 0;
+exit:
+ *pret = ret;
+ mutex_unlock(&entry->data_mutex);
+
+ return exit_stat;
+}
+
+int diag_command_reg(unsigned long ioarg)
+{
+ int i = 0, success = -EINVAL, j;
+ void *temp_buf;
+ unsigned int count_entries = 0, interim_count = 0;
+ struct bindpkt_params_per_process pkt_params;
+ struct bindpkt_params *params;
+ struct bindpkt_params *head_params;
+ if (copy_from_user(&pkt_params, (void *)ioarg, sizeof(struct bindpkt_params_per_process))) {
+ return -EFAULT;
+ }
+ if ((UINT_MAX / sizeof(struct bindpkt_params)) < pkt_params.count) {
+ pr_warn("diag: integer overflow in multiplication\n");
+ return -EFAULT;
+ }
+ head_params = kzalloc(pkt_params.count * sizeof(struct bindpkt_params), GFP_KERNEL);
+ if (ZERO_OR_NULL_PTR(head_params)) {
+ pr_err("diag: unable to alloc memory\n");
+ return -ENOMEM;
+ } else
+ params = head_params;
+ if (copy_from_user(params, pkt_params.params, pkt_params.count * sizeof(struct bindpkt_params))) {
+ kfree(head_params);
+ return -EFAULT;
+ }
+ mutex_lock(&driver->diagchar_mutex);
+ for (i = 0; i < diag_max_reg; i++) {
+ if (driver->table[i].process_id == 0) {
+ diag_add_reg(i, params, &success, &count_entries);
+ if (pkt_params.count > count_entries) {
+ params++;
+ } else {
+ kfree(head_params);
+ mutex_unlock(&driver->diagchar_mutex);
+ return success;
+ }
+ }
+ }
+ if (i < diag_threshold_reg) {
+ /* Increase table size by amount required */
+ if (pkt_params.count >= count_entries) {
+ interim_count = pkt_params.count - count_entries;
+ } else {
+ pr_warn("diag: error in params count\n");
+ kfree(head_params);
+ mutex_unlock(&driver->diagchar_mutex);
+ return -EFAULT;
+ }
+ if (UINT_MAX - diag_max_reg >= interim_count) {
+ diag_max_reg += interim_count;
+ } else {
+ pr_warn("diag: Integer overflow\n");
+ kfree(head_params);
+ mutex_unlock(&driver->diagchar_mutex);
+ return -EFAULT;
+ }
+ /* Make sure the size doesn't go beyond the threshold */
+ if (diag_max_reg > diag_threshold_reg) {
+ diag_max_reg = diag_threshold_reg;
+ pr_err("diag: best case memory allocation\n");
+ }
+ if (UINT_MAX / sizeof(struct diag_master_table) < diag_max_reg) {
+ pr_warn("diag: integer overflow\n");
+ kfree(head_params);
+ mutex_unlock(&driver->diagchar_mutex);
+ return -EFAULT;
+ }
+ temp_buf = krealloc(driver->table, diag_max_reg * sizeof(struct diag_master_table), GFP_KERNEL);
+ if (!temp_buf) {
+ pr_err("diag: Insufficient memory for reg.\n");
+
+ if (pkt_params.count >= count_entries) {
+ interim_count = pkt_params.count - count_entries;
+ } else {
+ pr_warn("diag: params count error\n");
+ kfree(head_params);
+ mutex_unlock(&driver->diagchar_mutex);
+ return -EFAULT;
+ }
+ if (diag_max_reg >= interim_count) {
+ diag_max_reg -= interim_count;
+ } else {
+ pr_warn("diag: Integer underflow\n");
+ kfree(head_params);
+ mutex_unlock(&driver->diagchar_mutex);
+ return -EFAULT;
+ }
+ kfree(head_params);
+ mutex_unlock(&driver->diagchar_mutex);
+ return 0;
+ } else {
+ driver->table = temp_buf;
+ }
+ for (j = i; j < diag_max_reg; j++) {
+ diag_add_reg(j, params, &success, &count_entries);
+ if (pkt_params.count > count_entries) {
+ params++;
+ } else {
+ kfree(head_params);
+ mutex_unlock(&driver->diagchar_mutex);
+ return success;
+ }
+ }
+ kfree(head_params);
+ mutex_unlock(&driver->diagchar_mutex);
+ } else {
+ kfree(head_params);
+ mutex_unlock(&driver->diagchar_mutex);
+ pr_err("Max size reached, Pkt Registration failed for Process %d\n", current->tgid);
+ }
+ success = 0;
+ return success;
+}
+
+#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
+void diag_cmp_logging_modes_diagfwd_bridge(int old_mode, int new_mode)
+{
+ if (old_mode == MEMORY_DEVICE_MODE && new_mode == NO_LOGGING_MODE) {
+ diagfwd_disconnect_bridge(0);
+ diag_clear_hsic_tbl();
+ } else if (old_mode == NO_LOGGING_MODE && new_mode == MEMORY_DEVICE_MODE) {
+ int i;
+ for (i = 0; i < MAX_HSIC_CH; i++)
+ if (diag_hsic[i].hsic_inited)
+ diag_hsic[i].hsic_data_requested = driver->real_time_mode ? 1 : 0;
+ diagfwd_connect_bridge(0);
+ } else if (old_mode == USB_MODE && new_mode == NO_LOGGING_MODE) {
+ diagfwd_disconnect_bridge(0);
+ } else if (old_mode == NO_LOGGING_MODE && new_mode == USB_MODE) {
+ diagfwd_connect_bridge(0);
+ } else if (old_mode == USB_MODE && new_mode == MEMORY_DEVICE_MODE) {
+ if (driver->real_time_mode)
+ diagfwd_cancel_hsic(REOPEN_HSIC);
+ else
+ diagfwd_cancel_hsic(DONT_REOPEN_HSIC);
+ diagfwd_connect_bridge(0);
+ } else if (old_mode == MEMORY_DEVICE_MODE && new_mode == USB_MODE) {
+ diag_clear_hsic_tbl();
+ diagfwd_cancel_hsic(REOPEN_HSIC);
+ diagfwd_connect_bridge(0);
+ }
+}
+#else
+void diag_cmp_logging_modes_diagfwd_bridge(int old_mode, int new_mode)
+{
+
+}
+#endif
+
+#ifdef CONFIG_DIAG_SDIO_PIPE
+void diag_cmp_logging_modes_sdio_pipe(int old_mode, int new_mode)
+{
+ if (old_mode == MEMORY_DEVICE_MODE && new_mode == NO_LOGGING_MODE) {
+ mutex_lock(&driver->diagchar_mutex);
+ driver->in_busy_sdio = 1;
+ mutex_unlock(&driver->diagchar_mutex);
+ } else if (old_mode == NO_LOGGING_MODE && new_mode == MEMORY_DEVICE_MODE) {
+ mutex_lock(&driver->diagchar_mutex);
+ driver->in_busy_sdio = 0;
+ mutex_unlock(&driver->diagchar_mutex);
+ /* Poll SDIO channel to check for data */
+ if (driver->sdio_ch)
+ queue_work(driver->diag_sdio_wq, &(driver->diag_read_sdio_work));
+ } else if (old_mode == USB_MODE && new_mode == MEMORY_DEVICE_MODE) {
+ mutex_lock(&driver->diagchar_mutex);
+ driver->in_busy_sdio = 0;
+ mutex_unlock(&driver->diagchar_mutex);
+ /* Poll SDIO channel to check for data */
+ if (driver->sdio_ch)
+ queue_work(driver->diag_sdio_wq, &(driver->diag_read_sdio_work));
+ }
+}
+#else
+void diag_cmp_logging_modes_sdio_pipe(int old_mode, int new_mode)
+{
+
+}
+#endif
+
+int diag_switch_logging(unsigned long ioarg)
+{
+ int temp = 0, success = -EINVAL, status = 0;
+ int requested_mode = (int)ioarg;
+
+ switch (requested_mode) {
+ case USB_MODE:
+ case MEMORY_DEVICE_MODE:
+ case NO_LOGGING_MODE:
+ case UART_MODE:
+ case SOCKET_MODE:
+ case CALLBACK_MODE:
+ break;
+ default:
+ pr_err("diag: In %s, request to switch to invalid mode: %d\n", __func__, requested_mode);
+ return -EINVAL;
+ }
+
+ if (requested_mode == driver->logging_mode) {
+ if (requested_mode != MEMORY_DEVICE_MODE || driver->real_time_mode)
+ pr_info_ratelimited("diag: Already in the requested logging mode: %d\n", driver->logging_mode);
+ return 0;
+ }
+
+ diag_update_proc_vote(DIAG_PROC_MEMORY_DEVICE, VOTE_UP);
+ if (requested_mode != MEMORY_DEVICE_MODE)
+ diag_update_real_time_vote(DIAG_PROC_MEMORY_DEVICE, MODE_REALTIME);
+
+ if (!(requested_mode == MEMORY_DEVICE_MODE && driver->logging_mode == USB_MODE))
+ queue_work(driver->diag_real_time_wq, &driver->diag_real_time_work);
+
+ mutex_lock(&driver->diagchar_mutex);
+ temp = driver->logging_mode;
+ driver->logging_mode = requested_mode;
+
+ if (driver->logging_mode == MEMORY_DEVICE_MODE) {
+ diag_clear_hsic_tbl();
+ driver->mask_check = 1;
+ if (driver->socket_process) {
+ /*
+ * Notify the socket logging process that we
+ * are switching to MEMORY_DEVICE_MODE
+ */
+ status = send_sig(SIGCONT, driver->socket_process, 0);
+ if (status) {
+ pr_err("diag: %s, error notifying socket process, status: %d\n", __func__, status);
+ }
+ }
+ } else if (driver->logging_mode == SOCKET_MODE) {
+ driver->socket_process = current;
+ } else if (driver->logging_mode == CALLBACK_MODE) {
+ driver->callback_process = current;
+ }
+
+ if (driver->logging_mode == UART_MODE || driver->logging_mode == SOCKET_MODE || driver->logging_mode == CALLBACK_MODE) {
+ diag_clear_hsic_tbl();
+ driver->mask_check = 0;
+ driver->logging_mode = MEMORY_DEVICE_MODE;
+ }
+
+ driver->logging_process_id = current->tgid;
+ mutex_unlock(&driver->diagchar_mutex);
+
+ if (temp == MEMORY_DEVICE_MODE && driver->logging_mode == NO_LOGGING_MODE) {
+ diag_reset_smd_data(RESET_AND_NO_QUEUE);
+ diag_cmp_logging_modes_sdio_pipe(temp, driver->logging_mode);
+ diag_cmp_logging_modes_diagfwd_bridge(temp, driver->logging_mode);
+ } else if (temp == NO_LOGGING_MODE && driver->logging_mode == MEMORY_DEVICE_MODE) {
+ diag_reset_smd_data(RESET_AND_QUEUE);
+ diag_cmp_logging_modes_sdio_pipe(temp, driver->logging_mode);
+ diag_cmp_logging_modes_diagfwd_bridge(temp, driver->logging_mode);
+ } else if (temp == USB_MODE && driver->logging_mode == NO_LOGGING_MODE) {
+ diagfwd_disconnect();
+ diag_cmp_logging_modes_diagfwd_bridge(temp, driver->logging_mode);
+ } else if (temp == NO_LOGGING_MODE && driver->logging_mode == USB_MODE) {
+ diagfwd_connect();
+ diag_cmp_logging_modes_diagfwd_bridge(temp, driver->logging_mode);
+ } else if (temp == USB_MODE && driver->logging_mode == MEMORY_DEVICE_MODE) {
+ diagfwd_disconnect();
+ diag_reset_smd_data(RESET_AND_QUEUE);
+ diag_cmp_logging_modes_sdio_pipe(temp, driver->logging_mode);
+ diag_cmp_logging_modes_diagfwd_bridge(temp, driver->logging_mode);
+ } else if (temp == MEMORY_DEVICE_MODE && driver->logging_mode == USB_MODE) {
+ diagfwd_connect();
+ diag_cmp_logging_modes_diagfwd_bridge(temp, driver->logging_mode);
+ }
+ success = 1;
+ return success;
+}
+
+long diagchar_ioctl(struct file *filp, unsigned int iocmd, unsigned long ioarg)
+{
+ int i, result = -EINVAL, interim_size = 0, client_id = 0, real_time = 0;
+ int retry_count = 0, timer = 0;
+ uint16_t support_list = 0, interim_rsp_id, remote_dev;
+ struct diag_dci_client_tbl *dci_params;
+ struct diag_dci_health_stats stats;
+ struct diag_log_event_stats le_stats;
+ struct diagpkt_delay_params delay_params;
+ struct real_time_vote_t rt_vote;
+
+ switch (iocmd) {
+ case DIAG_IOCTL_COMMAND_REG:
+ result = diag_command_reg(ioarg);
+ break;
+ case DIAG_IOCTL_GET_DELAYED_RSP_ID:
+ if (copy_from_user(&delay_params, (void *)ioarg, sizeof(struct diagpkt_delay_params)))
+ return -EFAULT;
+ if ((delay_params.rsp_ptr) && (delay_params.size == sizeof(delayed_rsp_id)) && (delay_params.num_bytes_ptr)) {
+ interim_rsp_id = diagpkt_next_delayed_rsp_id(delayed_rsp_id);
+ if (copy_to_user((void *)delay_params.rsp_ptr, &interim_rsp_id, sizeof(uint16_t)))
+ return -EFAULT;
+ interim_size = sizeof(delayed_rsp_id);
+ if (copy_to_user((void *)delay_params.num_bytes_ptr, &interim_size, sizeof(int)))
+ return -EFAULT;
+ result = 0;
+ }
+ break;
+ case DIAG_IOCTL_DCI_REG:
+ dci_params = kzalloc(sizeof(struct diag_dci_client_tbl), GFP_KERNEL);
+ if (dci_params == NULL) {
+ pr_err("diag: unable to alloc memory\n");
+ return -ENOMEM;
+ }
+ if (copy_from_user(dci_params, (void *)ioarg, sizeof(struct diag_dci_client_tbl))) {
+ kfree(dci_params);
+ return -EFAULT;
+ }
+ result = diag_dci_register_client(dci_params->list, dci_params->signal_type);
+ kfree(dci_params);
+ break;
+ case DIAG_IOCTL_DCI_DEINIT:
+ result = diag_dci_deinit_client();
+ break;
+ case DIAG_IOCTL_DCI_SUPPORT:
+ support_list |= DIAG_CON_APSS;
+ for (i = 0; i < NUM_SMD_DCI_CHANNELS; i++) {
+ if (driver->smd_dci[i].ch)
+ support_list |= driver->smd_dci[i].peripheral_mask;
+ }
+ if (copy_to_user((void *)ioarg, &support_list, sizeof(uint16_t)))
+ return -EFAULT;
+ result = DIAG_DCI_NO_ERROR;
+ break;
+ case DIAG_IOCTL_DCI_HEALTH_STATS:
+ if (copy_from_user(&stats, (void *)ioarg, sizeof(struct diag_dci_health_stats)))
+ return -EFAULT;
+
+ dci_params = diag_dci_get_client_entry();
+ if (!dci_params) {
+ result = DIAG_DCI_NOT_SUPPORTED;
+ break;
+ }
+ stats.dropped_logs = dci_params->dropped_logs;
+ stats.dropped_events = dci_params->dropped_events;
+ stats.received_logs = dci_params->received_logs;
+ stats.received_events = dci_params->received_events;
+ if (stats.reset_status) {
+ mutex_lock(&dci_health_mutex);
+ dci_params->dropped_logs = 0;
+ dci_params->dropped_events = 0;
+ dci_params->received_logs = 0;
+ dci_params->received_events = 0;
+ mutex_unlock(&dci_health_mutex);
+ }
+ if (copy_to_user((void *)ioarg, &stats, sizeof(struct diag_dci_health_stats)))
+ return -EFAULT;
+ result = DIAG_DCI_NO_ERROR;
+ break;
+ case DIAG_IOCTL_DCI_LOG_STATUS:
+ if (copy_from_user(&le_stats, (void *)ioarg, sizeof(struct diag_log_event_stats)))
+ return -EFAULT;
+ le_stats.is_set = diag_dci_query_log_mask(le_stats.code);
+ if (copy_to_user((void *)ioarg, &le_stats, sizeof(struct diag_log_event_stats)))
+ return -EFAULT;
+ result = DIAG_DCI_NO_ERROR;
+ break;
+ case DIAG_IOCTL_DCI_EVENT_STATUS:
+ if (copy_from_user(&le_stats, (void *)ioarg, sizeof(struct diag_log_event_stats)))
+ return -EFAULT;
+ le_stats.is_set = diag_dci_query_event_mask(le_stats.code);
+ if (copy_to_user((void *)ioarg, &le_stats, sizeof(struct diag_log_event_stats)))
+ return -EFAULT;
+ result = DIAG_DCI_NO_ERROR;
+ break;
+ case DIAG_IOCTL_DCI_CLEAR_LOGS:
+ if (copy_from_user((void *)&client_id, (void *)ioarg, sizeof(int)))
+ return -EFAULT;
+ result = diag_dci_clear_log_mask();
+ break;
+ case DIAG_IOCTL_DCI_CLEAR_EVENTS:
+ if (copy_from_user(&client_id, (void *)ioarg, sizeof(int)))
+ return -EFAULT;
+ result = diag_dci_clear_event_mask();
+ break;
+ case DIAG_IOCTL_LSM_DEINIT:
+ for (i = 0; i < driver->num_clients; i++)
+ if (driver->client_map[i].pid == current->tgid)
+ break;
+ if (i == driver->num_clients)
+ return -EINVAL;
+ driver->data_ready[i] |= DEINIT_TYPE;
+ wake_up_interruptible(&driver->wait_q);
+ result = 1;
+ break;
+ case DIAG_IOCTL_SWITCH_LOGGING:
+ result = diag_switch_logging(ioarg);
+ break;
+ case DIAG_IOCTL_REMOTE_DEV:
+ remote_dev = diag_get_remote_device_mask();
+ if (copy_to_user((void *)ioarg, &remote_dev, sizeof(uint16_t)))
+ result = -EFAULT;
+ else
+ result = 1;
+ break;
+ case DIAG_IOCTL_VOTE_REAL_TIME:
+ if (copy_from_user(&rt_vote, (void *)ioarg, sizeof(struct real_time_vote_t)))
+ return -EFAULT;
+ driver->real_time_update_busy++;
+ if (rt_vote.proc == DIAG_PROC_DCI) {
+ diag_dci_set_real_time(rt_vote.real_time_vote);
+ real_time = diag_dci_get_cumulative_real_time();
+ } else {
+ real_time = rt_vote.real_time_vote;
+ }
+ diag_update_real_time_vote(rt_vote.proc, real_time);
+ queue_work(driver->diag_real_time_wq, &driver->diag_real_time_work);
+ result = 0;
+ break;
+ case DIAG_IOCTL_GET_REAL_TIME:
+ if (copy_from_user(&real_time, (void *)ioarg, sizeof(int)))
+ return -EFAULT;
+ while (retry_count < 3) {
+ if (driver->real_time_update_busy > 0) {
+ retry_count++;
+ /* The 10000us sleep was chosen empirically to give
+ the work queued on diag_real_time_wq time to
+ finish processing before retrying. */
+ for (timer = 0; timer < 5; timer++)
+ usleep_range(10000, 10100);
+ } else {
+ real_time = driver->real_time_mode;
+ if (copy_to_user((void *)ioarg, &real_time, sizeof(int)))
+ return -EFAULT;
+ result = 0;
+ break;
+ }
+ }
+ break;
+ case DIAG_IOCTL_NONBLOCKING_TIMEOUT:
+ for (i = 0; i < driver->num_clients; i++)
+ if (driver->client_map[i].pid == current->tgid)
+ break;
+ if (i == driver->num_clients)
+ return -EINVAL;
+ mutex_lock(&driver->diagchar_mutex);
+ driver->client_map[i].timeout = (int)ioarg;
+ mutex_unlock(&driver->diagchar_mutex);
+ result = 1;
+ break;
+ }
+ return result;
+}
+
+static int diagchar_read(struct file *file, char __user * buf, size_t count, loff_t * ppos)
+{
+ struct diag_dci_client_tbl *entry;
+ int index = -1, i = 0, ret = 0, timeout = 0;
+ int num_data = 0, data_type;
+ int remote_token;
+ int exit_stat;
+ int clear_read_wakelock;
+
+ for (i = 0; i < driver->num_clients; i++)
+ if (driver->client_map[i].pid == current->tgid) {
+ index = i;
+ timeout = driver->client_map[i].timeout;
+ }
+
+ if (index == -1) {
+ pr_err("diag: Client PID not found in table\n");
+ return -EINVAL;
+ }
+
+ if (timeout)
+ wait_event_interruptible_timeout(driver->wait_q, driver->data_ready[index], timeout * HZ);
+ else
+ wait_event_interruptible(driver->wait_q, driver->data_ready[index]);
+
+ mutex_lock(&driver->diagchar_mutex);
+
+ clear_read_wakelock = 0;
+ if ((driver->data_ready[index] & USER_SPACE_DATA_TYPE) && (driver->logging_mode == MEMORY_DEVICE_MODE)) {
+ remote_token = 0;
+ pr_debug("diag: process woken up\n");
+ /*Copy the type of data being passed */
+ data_type = driver->data_ready[index] & USER_SPACE_DATA_TYPE;
+ driver->data_ready[index] ^= USER_SPACE_DATA_TYPE;
+ COPY_USER_SPACE_OR_EXIT(buf, data_type, 4);
+ /* place holder for number of data field */
+ ret += 4;
+
+ for (i = 0; i < driver->buf_tbl_size; i++) {
+ if (driver->buf_tbl[i].length > 0) {
+#ifdef DIAG_DEBUG
+ pr_debug("diag: WRITING the buf address and length is %x, %d\n",
+ (unsigned int)(driver->buf_tbl[i].buf), driver->buf_tbl[i].length);
+#endif
+ num_data++;
+ /* Copy the length of data being passed */
+ if (copy_to_user(buf + ret, (void *)&(driver->buf_tbl[i].length), 4)) {
+ num_data--;
+ goto drop;
+ }
+ ret += 4;
+
+ /* Copy the actual data being passed */
+ if (copy_to_user(buf + ret, (void *)driver->buf_tbl[i].buf, driver->buf_tbl[i].length)) {
+ ret -= 4;
+ num_data--;
+ goto drop;
+ }
+ ret += driver->buf_tbl[i].length;
+drop:
+#ifdef DIAG_DEBUG
+ pr_debug("diag: DEQUEUE buf address and length is %x, %d\n",
+ (unsigned int)(driver->buf_tbl[i].buf), driver->buf_tbl[i].length);
+#endif
+ diagmem_free(driver, (unsigned char *)
+ (driver->buf_tbl[i].buf), POOL_TYPE_HDLC);
+ driver->buf_tbl[i].length = 0;
+ driver->buf_tbl[i].buf = 0;
+ }
+ }
+
+ /* Copy peripheral data */
+ for (i = 0; i < NUM_SMD_DATA_CHANNELS; i++) {
+ struct diag_smd_info *data = &driver->smd_data[i];
+ if (data->in_busy_1 == 1) {
+ num_data++;
+ /*Copy the length of data being passed */
+ COPY_USER_SPACE_OR_EXIT(buf + ret, (data->write_ptr_1->length), 4);
+ /*Copy the actual data being passed */
+ COPY_USER_SPACE_OR_EXIT(buf + ret, *(data->buf_in_1), data->write_ptr_1->length);
+ if (!driver->real_time_mode) {
+ process_lock_on_copy(&data->nrt_lock);
+ clear_read_wakelock++;
+ }
+ data->in_busy_1 = 0;
+ }
+ if (data->in_busy_2 == 1) {
+ num_data++;
+ /*Copy the length of data being passed */
+ COPY_USER_SPACE_OR_EXIT(buf + ret, (data->write_ptr_2->length), 4);
+ /*Copy the actual data being passed */
+ COPY_USER_SPACE_OR_EXIT(buf + ret, *(data->buf_in_2), data->write_ptr_2->length);
+ if (!driver->real_time_mode) {
+ process_lock_on_copy(&data->nrt_lock);
+ clear_read_wakelock++;
+ }
+ data->in_busy_2 = 0;
+ }
+ }
+ if (driver->supports_separate_cmdrsp) {
+ for (i = 0; i < NUM_SMD_CMD_CHANNELS; i++) {
+ struct diag_smd_info *data = &driver->smd_cmd[i];
+ if (!driver->separate_cmdrsp[i])
+ continue;
+
+ if (data->in_busy_1 == 1) {
+ num_data++;
+ /*Copy the length of data being passed */
+ COPY_USER_SPACE_OR_EXIT(buf + ret, (data->write_ptr_1->length), 4);
+ /*Copy the actual data being passed */
+ COPY_USER_SPACE_OR_EXIT(buf + ret, *(data->buf_in_1), data->write_ptr_1->length);
+ data->in_busy_1 = 0;
+ }
+ }
+ }
+#ifdef CONFIG_DIAG_SDIO_PIPE
+ /* copy 9K data over SDIO */
+ if (driver->in_busy_sdio == 1) {
+ remote_token = diag_get_remote(MDM);
+ num_data++;
+
+ /*Copy the negative token of data being passed */
+ COPY_USER_SPACE_OR_EXIT(buf + ret, remote_token, 4);
+ /*Copy the length of data being passed */
+ COPY_USER_SPACE_OR_EXIT(buf + ret, (driver->write_ptr_mdm->length), 4);
+ /*Copy the actual data being passed */
+ COPY_USER_SPACE_OR_EXIT(buf + ret, *(driver->buf_in_sdio), driver->write_ptr_mdm->length);
+ driver->in_busy_sdio = 0;
+ }
+#endif
+ /* Copy data from remote processors */
+ exit_stat = diag_copy_remote(buf, count, &ret, &num_data);
+ if (exit_stat == 1)
+ goto exit;
+
+ /* Copy the number of data fields into the placeholder
+ reserved earlier; ret already counts those 4 bytes */
+ COPY_USER_SPACE_OR_EXIT(buf + 4, num_data, 4);
+ ret -= 4;
+ for (i = 0; i < NUM_SMD_DATA_CHANNELS; i++) {
+ if (driver->smd_data[i].ch)
+ queue_work(driver->smd_data[i].wq, &(driver->smd_data[i].diag_read_smd_work));
+ }
+#ifdef CONFIG_DIAG_SDIO_PIPE
+ if (driver->sdio_ch)
+ queue_work(driver->diag_sdio_wq, &(driver->diag_read_sdio_work));
+#endif
+ APPEND_DEBUG('n');
+ goto exit;
+ } else if (driver->data_ready[index] & USER_SPACE_DATA_TYPE) {
+ /* The thread may wake up after the logging mode is no
+ longer memory device mode; clear the condition */
+ driver->data_ready[index] ^= USER_SPACE_DATA_TYPE;
+ } else if (driver->data_ready[index] & USERMODE_DIAGFWD) {
+ remote_token = 0;
+ pr_debug("diag: process woken up\n");
+ /*Copy the type of data being passed*/
+ data_type = USERMODE_DIAGFWD_LEGACY;
+ driver->data_ready[index] ^= USERMODE_DIAGFWD;
+ COPY_USER_SPACE_OR_EXIT(buf, data_type, 4);
+ /* place holder for number of data field */
+
+#if 0
+ spin_lock_irqsave(&driver->rsp_buf_busy_lock, flags);
+ if (driver->rsp_write_ptr->length > 0) {
+ if (copy_to_user(buf+ret,
+ (void *)(driver->rsp_write_ptr->buf),
+ driver->rsp_write_ptr->length)) {
+ ret -= sizeof(int);
+ goto drop_rsp_userspace;
+ }
+ num_data++;
+ ret += driver->rsp_write_ptr->length;
+drop_rsp_userspace:
+ driver->rsp_write_ptr->length = 0;
+ driver->rsp_buf_busy = 0;
+ }
+ spin_unlock_irqrestore(&driver->rsp_buf_busy_lock, flags);
+#endif
+
+ for (i = 0; i < driver->buf_tbl_size; i++) {
+ if (driver->buf_tbl[i].length > 0) {
+#ifdef DIAG_DEBUG
+ pr_debug("diag: WRITING the buf address and length is %p , %d\n",
+ driver->buf_tbl[i].buf,
+ driver->buf_tbl[i].length);
+#endif
+
+ /* Copy the actual data being passed */
+ if (copy_to_user(buf+ret, (void *)driver->
+ buf_tbl[i].buf, driver->buf_tbl[i].length)) {
+ ret -= 4;
+ num_data--;
+ goto drop_userspace;
+ }
+ ret += driver->buf_tbl[i].length;
+drop_userspace:
+#ifdef DIAG_DEBUG
+ pr_debug("diag: DEQUEUE buf address and length is %p, %d\n",
+ driver->buf_tbl[i].buf,
+ driver->buf_tbl[i].length);
+#endif
+ diagmem_free(driver, (unsigned char *)
+ (driver->buf_tbl[i].buf), POOL_TYPE_HDLC);
+ driver->buf_tbl[i].length = 0;
+ driver->buf_tbl[i].buf = 0;
+ }
+ }
+
+ /* Copy peripheral data */
+ for (i = 0; i < NUM_SMD_DATA_CHANNELS; i++) {
+ struct diag_smd_info *data = &driver->smd_data[i];
+ if (data->in_busy_1 == 1) {
+ /*Copy the actual data being passed*/
+ COPY_USER_SPACE_OR_EXIT(buf+ret,
+ *(data->buf_in_1),
+ data->write_ptr_1->length);
+ if (!driver->real_time_mode) {
+ process_lock_on_copy(&data->nrt_lock);
+ clear_read_wakelock++;
+ }
+ data->in_busy_1 = 0;
+ }
+ if (data->in_busy_2 == 1) {
+ /*Copy the actual data being passed*/
+ COPY_USER_SPACE_OR_EXIT(buf+ret,
+ *(data->buf_in_2),
+ data->write_ptr_2->length);
+ if (!driver->real_time_mode) {
+ process_lock_on_copy(&data->nrt_lock);
+ clear_read_wakelock++;
+ }
+ data->in_busy_2 = 0;
+ }
+ }
+ if (driver->supports_separate_cmdrsp) {
+ for (i = 0; i < NUM_SMD_CMD_CHANNELS; i++) {
+ struct diag_smd_info *data =
+ &driver->smd_cmd[i];
+ if (!driver->separate_cmdrsp[i])
+ continue;
+
+ if (data->in_busy_1 == 1) {
+ /*Copy the actual data being passed*/
+ COPY_USER_SPACE_OR_EXIT(buf+ret,
+ *(data->buf_in_1),
+ data->write_ptr_1->length);
+ data->in_busy_1 = 0;
+ }
+ }
+ }
+
+ /* Copy data from remote processors */
+ exit_stat = diag_copy_remote(buf, count, &ret, &num_data);
+ if (exit_stat == 1)
+ goto exit;
+ for (i = 0; i < NUM_SMD_DATA_CHANNELS; i++) {
+ if (driver->smd_data[i].ch)
+ queue_work(driver->smd_data[i].wq,
+ &(driver->smd_data[i].diag_read_smd_work));
+ }
+
+ APPEND_DEBUG('n');
+#ifdef DIAG_DEBUG
+ pr_debug("%s() returns %d bytes\n", __func__, ret);
+ print_hex_dump(KERN_INFO, "write packet data to user space (first 16 bytes): ",
+ DUMP_PREFIX_ADDRESS, 16, 1, buf, 16, 1);
+#endif
+ goto exit;
+ }
+
+ if (driver->data_ready[index] & DEINIT_TYPE) {
+ /*Copy the type of data being passed */
+ data_type = driver->data_ready[index] & DEINIT_TYPE;
+ COPY_USER_SPACE_OR_EXIT(buf, data_type, 4);
+ driver->data_ready[index] ^= DEINIT_TYPE;
+ goto exit;
+ }
+
+ if (driver->data_ready[index] & MSG_MASKS_TYPE) {
+ /*Copy the type of data being passed */
+ data_type = driver->data_ready[index] & MSG_MASKS_TYPE;
+ COPY_USER_SPACE_OR_EXIT(buf, data_type, 4);
+ COPY_USER_SPACE_OR_EXIT(buf + 4, *(driver->msg_masks), MSG_MASK_SIZE);
+ driver->data_ready[index] ^= MSG_MASKS_TYPE;
+ goto exit;
+ }
+
+ if (driver->data_ready[index] & EVENT_MASKS_TYPE) {
+ /*Copy the type of data being passed */
+ data_type = driver->data_ready[index] & EVENT_MASKS_TYPE;
+ COPY_USER_SPACE_OR_EXIT(buf, data_type, 4);
+ COPY_USER_SPACE_OR_EXIT(buf + 4, *(driver->event_masks), EVENT_MASK_SIZE);
+ driver->data_ready[index] ^= EVENT_MASKS_TYPE;
+ goto exit;
+ }
+
+ if (driver->data_ready[index] & LOG_MASKS_TYPE) {
+ /*Copy the type of data being passed */
+ data_type = driver->data_ready[index] & LOG_MASKS_TYPE;
+ COPY_USER_SPACE_OR_EXIT(buf, data_type, 4);
+ COPY_USER_SPACE_OR_EXIT(buf + 4, *(driver->log_masks), LOG_MASK_SIZE);
+ driver->data_ready[index] ^= LOG_MASKS_TYPE;
+ goto exit;
+ }
+
+ if (driver->data_ready[index] & PKT_TYPE) {
+ /*Copy the type of data being passed */
+ data_type = driver->data_ready[index] & PKT_TYPE;
+ COPY_USER_SPACE_OR_EXIT(buf, data_type, 4);
+ COPY_USER_SPACE_OR_EXIT(buf + 4, *(driver->pkt_buf), driver->pkt_length);
+ driver->data_ready[index] ^= PKT_TYPE;
+ driver->in_busy_pktdata = 0;
+ goto exit;
+ }
+
+ if (driver->data_ready[index] & DCI_EVENT_MASKS_TYPE) {
+ /*Copy the type of data being passed */
+ data_type = driver->data_ready[index] & DCI_EVENT_MASKS_TYPE;
+ COPY_USER_SPACE_OR_EXIT(buf, data_type, 4);
+ COPY_USER_SPACE_OR_EXIT(buf + 4, driver->num_dci_client, 4);
+ COPY_USER_SPACE_OR_EXIT(buf + 8, *(dci_cumulative_event_mask), DCI_EVENT_MASK_SIZE);
+ driver->data_ready[index] ^= DCI_EVENT_MASKS_TYPE;
+ goto exit;
+ }
+
+ if (driver->data_ready[index] & DCI_LOG_MASKS_TYPE) {
+ /*Copy the type of data being passed */
+ data_type = driver->data_ready[index] & DCI_LOG_MASKS_TYPE;
+ COPY_USER_SPACE_OR_EXIT(buf, data_type, 4);
+ COPY_USER_SPACE_OR_EXIT(buf + 4, driver->num_dci_client, 4);
+ COPY_USER_SPACE_OR_EXIT(buf + 8, *(dci_cumulative_log_mask), DCI_LOG_MASK_SIZE);
+ driver->data_ready[index] ^= DCI_LOG_MASKS_TYPE;
+ goto exit;
+ }
+
+ if (driver->data_ready[index] & DCI_DATA_TYPE) {
+ /* Copy the type of data being passed */
+ data_type = driver->data_ready[index] & DCI_DATA_TYPE;
+ driver->data_ready[index] ^= DCI_DATA_TYPE;
+ COPY_USER_SPACE_OR_EXIT(buf, data_type, 4);
+ /* check the current client and copy its data */
+ entry = diag_dci_get_client_entry();
+ if (entry) {
+ exit_stat = diag_copy_dci(buf, count, entry, &ret);
+ if (exit_stat == 1)
+ goto exit;
+ }
+ for (i = 0; i < NUM_SMD_DCI_CHANNELS; i++) {
+ driver->smd_dci[i].in_busy_1 = 0;
+ if (driver->smd_dci[i].ch) {
+ diag_dci_try_deactivate_wakeup_source(driver->smd_dci[i].ch);
+ queue_work(driver->diag_dci_wq, &(driver->smd_dci[i].diag_read_smd_work));
+ }
+ }
+ if (driver->supports_separate_cmdrsp) {
+ for (i = 0; i < NUM_SMD_DCI_CMD_CHANNELS; i++) {
+ if (!driver->separate_cmdrsp[i])
+ continue;
+ driver->smd_dci_cmd[i].in_busy_1 = 0;
+ if (driver->smd_dci_cmd[i].ch) {
+ diag_dci_try_deactivate_wakeup_source(driver->smd_dci_cmd[i].ch);
+ queue_work(driver->diag_dci_wq, &(driver->smd_dci_cmd[i].diag_read_smd_work));
+ }
+ }
+ }
+ goto exit;
+ }
+exit:
+ if (ret)
+ wake_lock_timeout(&driver->wake_lock, HZ / 2);
+
+ if (clear_read_wakelock) {
+ for (i = 0; i < NUM_SMD_DATA_CHANNELS; i++)
+ process_lock_on_copy_complete(&driver->smd_data[i].nrt_lock);
+ }
+ mutex_unlock(&driver->diagchar_mutex);
+ return ret;
+}
+
+static int diagchar_write(struct file *file, const char __user * buf, size_t count, loff_t * ppos)
+{
+ int err, ret = 0, pkt_type, token_offset = 0;
+ int remote_proc = 0;
+ uint8_t index;
+#ifdef DIAG_DEBUG
+ int length = 0, i;
+#endif
+ struct diag_send_desc_type send = { NULL, NULL, DIAG_STATE_START, 0 };
+ struct diag_hdlc_dest_type enc = { NULL, NULL, 0 };
+ void *buf_copy = NULL;
+ void *user_space_data = NULL;
+ unsigned int payload_size;
+
+ index = 0;
+ /* The first 4 bytes carry the packet type (F3/log/event/pkt response) */
+ if (count < 4) {
+ pr_err("diag: Client sending short data\n");
+ return -EBADMSG;
+ }
+ if (copy_from_user(&pkt_type, buf, 4))
+ return -EFAULT;
+ payload_size = count - 4;
+ if (payload_size > USER_SPACE_DATA) {
+ pr_err("diag: Dropping packet, packet payload size crosses 8KB limit. Current payload size %d\n", payload_size);
+ driver->dropped_count++;
+ return -EBADMSG;
+ }
+#ifdef CONFIG_DIAG_OVER_USB
+ if (driver->logging_mode == NO_LOGGING_MODE || (((pkt_type != DCI_DATA_TYPE) || ((pkt_type & (DATA_TYPE_DCI_LOG | DATA_TYPE_DCI_EVENT)) == 0))
+ && (driver->logging_mode == USB_MODE) && (!driver->usb_connected))) {
+ /*Drop the diag payload */
+ return -EIO;
+ }
+#endif /* DIAG over USB */
+ if (pkt_type == DCI_DATA_TYPE &&
+ driver->logging_process_id != current->tgid) {
+ user_space_data = diagmem_alloc(driver, payload_size, POOL_TYPE_USER);
+ if (!user_space_data) {
+ driver->dropped_count++;
+ return -ENOMEM;
+ }
+ err = copy_from_user(user_space_data, buf + 4, payload_size);
+ if (err) {
+ pr_alert("diag: copy failed for DCI data\n");
+ diagmem_free(driver, user_space_data, POOL_TYPE_USER);
+ user_space_data = NULL;
+ return DIAG_DCI_SEND_DATA_FAIL;
+ }
+ err = diag_process_dci_transaction(user_space_data, payload_size);
+ diagmem_free(driver, user_space_data, POOL_TYPE_USER);
+ user_space_data = NULL;
+ return err;
+ }
+ if (pkt_type == CALLBACK_DATA_TYPE) {
+ if (payload_size > itemsize) {
+ pr_err("diag: Dropping packet, packet payload size crosses 4KB limit. Current payload size %d\n", payload_size);
+ driver->dropped_count++;
+ return -EBADMSG;
+ }
+
+ mutex_lock(&driver->diagchar_mutex);
+ buf_copy = diagmem_alloc(driver, payload_size, POOL_TYPE_COPY);
+ if (!buf_copy) {
+ driver->dropped_count++;
+ mutex_unlock(&driver->diagchar_mutex);
+ return -ENOMEM;
+ }
+
+ err = copy_from_user(buf_copy, buf + 4, payload_size);
+ if (err) {
+ pr_err("diag: copy failed for user space data\n");
+ ret = -EIO;
+ goto fail_free_copy;
+ }
+ /* Check for proc_type */
+ remote_proc = diag_get_remote(*(int *)buf_copy);
+
+ if (!remote_proc) {
+ wait_event_interruptible(driver->wait_q, (driver->in_busy_pktdata == 0));
+ ret = diag_process_apps_pkt(buf_copy, payload_size);
+ goto fail_free_copy;
+ }
+ /* The packet is for the remote processor */
+ token_offset = 4;
+ payload_size -= 4;
+ buf += 4;
+ /* Perform HDLC encoding on incoming data */
+ send.state = DIAG_STATE_START;
+ send.pkt = (void *)(buf_copy + token_offset);
+ send.last = (void *)(buf_copy + token_offset - 1 + payload_size);
+ send.terminate = 1;
+
+ if (!buf_hdlc)
+ buf_hdlc = diagmem_alloc(driver, HDLC_OUT_BUF_SIZE, POOL_TYPE_HDLC);
+ if (!buf_hdlc) {
+ ret = -ENOMEM;
+ driver->used = 0;
+ goto fail_free_copy;
+ }
+ if (HDLC_OUT_BUF_SIZE < (2 * payload_size) + 3) {
+ pr_err("diag: Dropping packet, HDLC encoded packet size crosses buffer limit. Encoded size %d\n", ((2 * payload_size) + 3));
+ driver->dropped_count++;
+ ret = -EBADMSG;
+ goto fail_free_hdlc;
+ }
+ enc.dest = buf_hdlc + driver->used;
+ enc.dest_last = (void *)(buf_hdlc + driver->used + (2 * payload_size) + token_offset - 1);
+ diag_hdlc_encode(&send, &enc);
+
+#ifdef CONFIG_DIAG_SDIO_PIPE
+ /* send masks to 9k too */
+ if (driver->sdio_ch && (remote_proc == MDM)) {
+ wait_event_interruptible(driver->wait_q, (sdio_write_avail(driver->sdio_ch) >= payload_size));
+ if (driver->sdio_ch && (payload_size > 0)) {
+ sdio_write(driver->sdio_ch, (void *)
+ (char *)buf_hdlc, payload_size + 3);
+ }
+ }
+#endif
+#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
+ /* send masks to All 9k */
+ if ((remote_proc >= MDM) && (remote_proc <= MDM4)) {
+ index = remote_proc - MDM;
+ if (diag_hsic[index].hsic_ch && (payload_size > 0)) {
+ /* Wait to send mask updates until
+ * the HSIC channel is ready */
+ while (diag_hsic[index].in_busy_hsic_write) {
+ wait_event_interruptible(driver->wait_q, (diag_hsic[index].in_busy_hsic_write != 1));
+ }
+ diag_hsic[index].in_busy_hsic_write = 1;
+ diag_hsic[index].in_busy_hsic_read_on_device = 0;
+ err = diag_bridge_write(index, (char *)buf_hdlc, payload_size + 3);
+ if (err) {
+ pr_err("diag: err sending mask to MDM: %d\n", err);
+ /*
+ * If the error is recoverable, then
+ * clear the write flag, so we will
+ * resubmit a write on the next frame.
+ * Otherwise, don't resubmit a write
+ * on the next frame.
+ */
+ if ((-ESHUTDOWN) != err)
+ diag_hsic[index].in_busy_hsic_write = 0;
+ }
+ }
+ }
+ if (driver->diag_smux_enabled && (remote_proc == QSC)
+ && driver->lcid) {
+ if (payload_size > 0) {
+ err = msm_smux_write(driver->lcid, NULL, (char *)buf_hdlc, payload_size + 3);
+ if (err) {
+ pr_err("diag:send mask to MDM err %d", err);
+ ret = err;
+ }
+ }
+ }
+#endif
+ goto fail_free_hdlc;
+ }
+ if (pkt_type == USER_SPACE_DATA_TYPE) {
+ user_space_data = diagmem_alloc(driver, payload_size, POOL_TYPE_USER);
+ if (!user_space_data) {
+ driver->dropped_count++;
+ return -ENOMEM;
+ }
+ err = copy_from_user(user_space_data, buf + 4, payload_size);
+ if (err) {
+ pr_err("diag: copy failed for user space data\n");
+ diagmem_free(driver, user_space_data, POOL_TYPE_USER);
+ user_space_data = NULL;
+ return -EIO;
+ }
+ /* Check for proc_type */
+ remote_proc = diag_get_remote(*(int *)user_space_data);
+
+ if (remote_proc) {
+ token_offset = 4;
+ payload_size -= 4;
+ buf += 4;
+ }
+
+ /* Check masks for On-Device logging */
+#if 0
+ if (driver->mask_check) {
+ if (!mask_request_validate(user_space_data + token_offset)) {
+ pr_alert("diag: mask request Invalid\n");
+ diagmem_free(driver, user_space_data, POOL_TYPE_USER);
+ user_space_data = NULL;
+ return -EFAULT;
+ }
+ }
+#endif
+ buf = buf + 4;
+#ifdef DIAG_DEBUG
+ pr_debug("diag: user space data %d\n", payload_size);
+ for (i = 0; i < payload_size; i++)
+ pr_debug("\t %x", *((user_space_data + token_offset) + i));
+#endif
+#ifdef CONFIG_DIAG_SDIO_PIPE
+ /* send masks to 9k too */
+ if (driver->sdio_ch && (remote_proc == MDM)) {
+ wait_event_interruptible(driver->wait_q, (sdio_write_avail(driver->sdio_ch) >= payload_size));
+ if (driver->sdio_ch && (payload_size > 0)) {
+ sdio_write(driver->sdio_ch, (void *)
+ (user_space_data + token_offset), payload_size);
+ }
+ }
+#endif
+#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
+ /* send masks to All 9k */
+ if ((payload_size > 0)) {
+ index = 0;
+ /*
+ * If hsic data is being requested for this remote
+ * processor and its hsic is not open
+ */
+ if (!diag_hsic[index].hsic_device_opened) {
+ diag_hsic[index].hsic_data_requested = 1;
+ connect_bridge(0, index);
+ }
+
+ if (diag_hsic[index].hsic_ch) {
+ /* Wait to send mask updates until
+ * the HSIC channel is ready */
+ while (diag_hsic[index].in_busy_hsic_write) {
+ wait_event_interruptible(driver->wait_q, (diag_hsic[index].in_busy_hsic_write != 1));
+ }
+ diag_hsic[index].in_busy_hsic_write = 1;
+ diag_hsic[index].in_busy_hsic_read_on_device = 0;
+ err = diag_bridge_write(index, user_space_data + token_offset, payload_size);
+ if (err) {
+ pr_err("diag: err sending mask to MDM: %d\n", err);
+ /*
+ * If the error is recoverable, then
+ * clear the write flag, so we will
+ * resubmit a write on the next frame.
+ * Otherwise, don't resubmit a write
+ * on the next frame.
+ */
+ if ((-ESHUTDOWN) != err)
+ diag_hsic[index].in_busy_hsic_write = 0;
+ }
+ }
+ }
+ if (driver->diag_smux_enabled && (remote_proc == QSC)
+ && driver->lcid) {
+ if (payload_size > 0) {
+ err = msm_smux_write(driver->lcid, NULL, user_space_data + token_offset, payload_size);
+ if (err) {
+ pr_err("diag:send mask to MDM err %d", err);
+ diagmem_free(driver, user_space_data, POOL_TYPE_USER);
+ user_space_data = NULL;
+ return err;
+ }
+ }
+ }
+#endif
+#if 0
+ /* send masks to 8k now */
+ if (!remote_proc)
+ diag_process_hdlc((void *)
+ (user_space_data + token_offset), payload_size);
+#endif
+ diagmem_free(driver, user_space_data, POOL_TYPE_USER);
+ user_space_data = NULL;
+ return 0;
+ } else if (driver->logging_process_id == current->tgid) {
+
+ user_space_data = diagmem_alloc(driver, payload_size, POOL_TYPE_USER);
+ if (!user_space_data) {
+ driver->dropped_count++;
+ return -ENOMEM;
+ }
+ err = copy_from_user(user_space_data, buf + 4, payload_size);
+
+ if (err) {
+ pr_err("diag: copy failed for user space data\n");
+ diagmem_free(driver, user_space_data, POOL_TYPE_USER);
+ user_space_data = NULL;
+ return -EIO;
+ }
+
+ remote_proc = diag_get_remote(*(int *)user_space_data);
+ if (remote_proc) {
+ token_offset = 4;
+ payload_size -= 4;
+ buf += 4;
+ }
+#ifdef DIAG_DEBUG
+ pr_debug("diag: user space data %d\n", payload_size);
+ print_hex_dump(KERN_INFO, "write packet data to user space: ",
+ DUMP_PREFIX_ADDRESS, 16, 1,
+ user_space_data, payload_size, 1);
+#endif
+ /* send masks to All 9k */
+ if ((payload_size > 0)) {
+ index = 0;
+ /*
+ * If hsic data is being requested for this remote
+ * processor and its hsic is not open
+ */
+ if (!diag_hsic[index].hsic_device_opened) {
+ diag_hsic[index].hsic_data_requested = 1;
+ connect_bridge(0, index);
+ }
+
+ if (diag_hsic[index].hsic_ch) {
+ /* Wait to send mask updates until
+ * the HSIC channel is ready */
+ while (diag_hsic[index].in_busy_hsic_write) {
+ wait_event_interruptible(driver->wait_q, (diag_hsic[index].in_busy_hsic_write != 1));
+ }
+ diag_hsic[index].in_busy_hsic_write = 1;
+ diag_hsic[index].in_busy_hsic_read_on_device = 0;
+ err = diag_bridge_write(index, user_space_data + token_offset, payload_size);
+ if (err) {
+ pr_err("diag: err sending mask to MDM: %d\n", err);
+ /*
+ * If the error is recoverable, then
+ * clear the write flag, so we will
+ * resubmit a write on the next frame.
+ * Otherwise, don't resubmit a write
+ * on the next frame.
+ */
+ if ((-ESHUTDOWN) != err)
+ diag_hsic[index].in_busy_hsic_write = 0;
+ }
+ }
+ }
+
+ diagmem_free(driver, user_space_data, POOL_TYPE_USER);
+ user_space_data = NULL;
+ return count;
+ }
+
+ if (payload_size > itemsize) {
+ pr_err("diag: Dropping packet, packet payload size crosses 4KB limit. Current payload size %d\n", payload_size);
+ driver->dropped_count++;
+ return -EBADMSG;
+ }
+
+ buf_copy = diagmem_alloc(driver, payload_size, POOL_TYPE_COPY);
+ if (!buf_copy) {
+ driver->dropped_count++;
+ return -ENOMEM;
+ }
+
+ err = copy_from_user(buf_copy, buf + 4, payload_size);
+ if (err) {
+ pr_err("diagchar: copy_from_user failed\n");
+ diagmem_free(driver, buf_copy, POOL_TYPE_COPY);
+ buf_copy = NULL;
+ return -EFAULT;
+ }
+
+ if (pkt_type & (DATA_TYPE_DCI_LOG | DATA_TYPE_DCI_EVENT)) {
+ int data_type = pkt_type & (DATA_TYPE_DCI_LOG | DATA_TYPE_DCI_EVENT);
+ diag_process_apps_dci_read_data(data_type, buf_copy, payload_size);
+
+ if (pkt_type & DATA_TYPE_DCI_LOG)
+ pkt_type ^= DATA_TYPE_DCI_LOG;
+ else
+ pkt_type ^= DATA_TYPE_DCI_EVENT;
+
+ /*
+ * If the data is not headed for normal processing or the usb
+ * is unplugged and we are in usb mode
+ */
+ if ((pkt_type != DATA_TYPE_LOG && pkt_type != DATA_TYPE_EVENT)
+ || ((driver->logging_mode == USB_MODE) && (!driver->usb_connected))) {
+ diagmem_free(driver, buf_copy, POOL_TYPE_COPY);
+ return 0;
+ }
+ }
+
+ if (driver->stm_state[APPS_DATA] && (pkt_type >= DATA_TYPE_EVENT && pkt_type <= DATA_TYPE_LOG)) {
+ int stm_size = 0;
+
+ stm_size = stm_log_inv_ts(OST_ENTITY_DIAG, 0, buf_copy, payload_size);
+
+ if (stm_size == 0)
+ pr_debug("diag: In %s, stm_log_inv_ts returned size of 0\n", __func__);
+
+ diagmem_free(driver, buf_copy, POOL_TYPE_COPY);
+ return 0;
+ }
+#ifdef DIAG_DEBUG
+ printk(KERN_DEBUG "data is -->\n");
+ for (i = 0; i < payload_size; i++)
+ printk(KERN_DEBUG "\t %x \t", *(((unsigned char *)buf_copy) + i));
+#endif
+ send.state = DIAG_STATE_START;
+ send.pkt = buf_copy;
+ send.last = (void *)(buf_copy + payload_size - 1);
+ send.terminate = 1;
+#ifdef DIAG_DEBUG
+ pr_debug("diag: Already used bytes in buffer %d, and incoming payload size is %d\n", driver->used, payload_size);
+ printk(KERN_DEBUG "hdlc encoded data is -->\n");
+ for (i = 0; i < payload_size + 8; i++) {
+ printk(KERN_DEBUG "\t %x \t", *(((unsigned char *)buf_hdlc) + i));
+ if (*(((unsigned char *)buf_hdlc) + i) != 0x7e)
+ length++;
+ }
+#endif
+ mutex_lock(&driver->diagchar_mutex);
+ if (!buf_hdlc)
+ buf_hdlc = diagmem_alloc(driver, HDLC_OUT_BUF_SIZE, POOL_TYPE_HDLC);
+ if (!buf_hdlc) {
+ ret = -ENOMEM;
+ driver->used = 0;
+ goto fail_free_copy;
+ }
+ if (HDLC_OUT_BUF_SIZE < (2 * payload_size) + 3) {
+ pr_err("diag: Dropping packet, HDLC encoded packet size crosses buffer limit. Encoded size %d\n", ((2 * payload_size) + 3));
+ driver->dropped_count++;
+ ret = -EBADMSG;
+ goto fail_free_hdlc;
+ }
+ if (HDLC_OUT_BUF_SIZE - driver->used <= (2 * payload_size) + 3) {
+ err = diag_device_write(buf_hdlc, APPS_DATA, NULL);
+ if (err) {
+ ret = -EIO;
+ goto fail_free_hdlc;
+ }
+ buf_hdlc = NULL;
+ driver->used = 0;
+ buf_hdlc = diagmem_alloc(driver, HDLC_OUT_BUF_SIZE, POOL_TYPE_HDLC);
+ if (!buf_hdlc) {
+ ret = -ENOMEM;
+ goto fail_free_copy;
+ }
+ }
+
+ enc.dest = buf_hdlc + driver->used;
+ enc.dest_last = (void *)(buf_hdlc + driver->used + 2 * payload_size + 3);
+ diag_hdlc_encode(&send, &enc);
+
+ /* This is to check if after HDLC encoding, we are still within the
+ limits of aggregation buffer. If not, we write out the current buffer
+ and start aggregation in a newly allocated buffer */
+ if ((unsigned int)enc.dest >= (unsigned int)(buf_hdlc + HDLC_OUT_BUF_SIZE)) {
+ err = diag_device_write(buf_hdlc, APPS_DATA, NULL);
+ if (err) {
+ ret = -EIO;
+ goto fail_free_hdlc;
+ }
+ buf_hdlc = NULL;
+ driver->used = 0;
+ buf_hdlc = diagmem_alloc(driver, HDLC_OUT_BUF_SIZE, POOL_TYPE_HDLC);
+ if (!buf_hdlc) {
+ ret = -ENOMEM;
+ goto fail_free_copy;
+ }
+ enc.dest = buf_hdlc + driver->used;
+ enc.dest_last = (void *)(buf_hdlc + driver->used + (2 * payload_size) + 3);
+ diag_hdlc_encode(&send, &enc);
+ }
+
+ driver->used = (uint32_t) enc.dest - (uint32_t) buf_hdlc;
+ if (pkt_type == DATA_TYPE_RESPONSE) {
+ err = diag_device_write(buf_hdlc, APPS_DATA, NULL);
+ if (err) {
+ ret = -EIO;
+ goto fail_free_hdlc;
+ }
+ buf_hdlc = NULL;
+ driver->used = 0;
+ }
+
+ diagmem_free(driver, buf_copy, POOL_TYPE_COPY);
+ buf_copy = NULL;
+ mutex_unlock(&driver->diagchar_mutex);
+
+ check_drain_timer();
+
+ return 0;
+
+fail_free_hdlc:
+ diagmem_free(driver, buf_hdlc, POOL_TYPE_HDLC);
+ buf_hdlc = NULL;
+ driver->used = 0;
+ diagmem_free(driver, buf_copy, POOL_TYPE_COPY);
+ buf_copy = NULL;
+ mutex_unlock(&driver->diagchar_mutex);
+ return ret;
+
+fail_free_copy:
+ diagmem_free(driver, buf_copy, POOL_TYPE_COPY);
+ buf_copy = NULL;
+ mutex_unlock(&driver->diagchar_mutex);
+ return ret;
+}
+
+static void diag_real_time_info_init(void)
+{
+ if (!driver)
+ return;
+ driver->real_time_mode = 1;
+ driver->real_time_update_busy = 0;
+ driver->proc_active_mask = 0;
+ driver->proc_rt_vote_mask |= DIAG_PROC_DCI;
+ driver->proc_rt_vote_mask |= DIAG_PROC_MEMORY_DEVICE;
+ driver->diag_real_time_wq = create_singlethread_workqueue("diag_real_time_wq");
+ INIT_WORK(&(driver->diag_real_time_work), diag_real_time_work_fn);
+ mutex_init(&driver->real_time_mutex);
+}
+
+int mask_request_validate(unsigned char mask_buf[])
+{
+ uint8_t packet_id;
+ uint8_t subsys_id;
+ uint16_t ss_cmd;
+
+ packet_id = mask_buf[0];
+
+ if (packet_id == 0x4B) {
+ subsys_id = mask_buf[1];
+ ss_cmd = *(uint16_t *) (mask_buf + 2);
+ /* Packets with SSID which are allowed */
+ switch (subsys_id) {
+ case 0x04: /* DIAG_SUBSYS_WCDMA */
+ if ((ss_cmd == 0) || (ss_cmd == 0xF))
+ return 1;
+ break;
+ case 0x08: /* DIAG_SUBSYS_GSM */
+ if ((ss_cmd == 0) || (ss_cmd == 0x1))
+ return 1;
+ break;
+ case 0x09: /* DIAG_SUBSYS_UMTS */
+ case 0x0F: /* DIAG_SUBSYS_CM */
+ if (ss_cmd == 0)
+ return 1;
+ break;
+ case 0x0C: /* DIAG_SUBSYS_OS */
+ if ((ss_cmd == 2) || (ss_cmd == 0x100))
+ return 1; /* MPU and APU */
+ break;
+ case 0x12: /* DIAG_SUBSYS_DIAG_SERV */
+ if ((ss_cmd == 0) || (ss_cmd == 0x6) || (ss_cmd == 0x7))
+ return 1;
+ break;
+ case 0x13: /* DIAG_SUBSYS_FS */
+ if ((ss_cmd == 0) || (ss_cmd == 0x1))
+ return 1;
+ break;
+ default:
+ return 0;
+ }
+ } else {
+ switch (packet_id) {
+ case 0x00: /* Version Number */
+ case 0x0C: /* CDMA status packet */
+ case 0x1C: /* Diag Version */
+ case 0x1D: /* Time Stamp */
+ case 0x60: /* Event Report Control */
+ case 0x63: /* Status snapshot */
+ case 0x73: /* Logging Configuration */
+ case 0x7C: /* Extended build ID */
+ case 0x7D: /* Extended Message configuration */
+ case 0x81: /* Event get mask */
+ case 0x82: /* Set the event mask */
+ return 1;
+ default:
+ return 0;
+ }
+ }
+ return 0;
+}
+
+static const struct file_operations diagcharfops = {
+ .owner = THIS_MODULE,
+ .read = diagchar_read,
+ .write = diagchar_write,
+ .unlocked_ioctl = diagchar_ioctl,
+ .compat_ioctl = diagchar_ioctl,
+ .open = diagchar_open,
+ .release = diagchar_close
+};
+
+static int diagchar_setup_cdev(dev_t devno)
+{
+
+ int err;
+
+ cdev_init(driver->cdev, &diagcharfops);
+
+ driver->cdev->owner = THIS_MODULE;
+ driver->cdev->ops = &diagcharfops;
+
+ err = cdev_add(driver->cdev, devno, 1);
+
+ if (err) {
+ printk(KERN_ERR "diagchar cdev registration failed\n");
+ return err;
+ }
+
+ driver->diagchar_class = class_create(THIS_MODULE, "diag");
+
+ if (IS_ERR(driver->diagchar_class)) {
+ printk(KERN_ERR "Error creating diagchar class.\n");
+ return PTR_ERR(driver->diagchar_class);
+ }
+
+ device_create(driver->diagchar_class, NULL, devno, (void *)driver, "diag_mdm");
+
+ return 0;
+
+}
+
+static int diagchar_cleanup(void)
+{
+ if (driver) {
+ if (driver->cdev) {
+ /* TODO - Check if device exists before deleting */
+ device_destroy(driver->diagchar_class, MKDEV(driver->major, driver->minor_start));
+ cdev_del(driver->cdev);
+ }
+ if (!IS_ERR(driver->diagchar_class))
+ class_destroy(driver->diagchar_class);
+ wake_lock_destroy(&driver->wake_lock);
+ kfree(driver);
+ }
+ return 0;
+}
+
+#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
+static void diag_connect_work_fn(struct work_struct *w)
+{
+ diagfwd_connect_bridge(1);
+}
+
+static void diag_disconnect_work_fn(struct work_struct *w)
+{
+ diagfwd_disconnect_bridge(1);
+}
+#endif
+
+#ifdef CONFIG_DIAG_SDIO_PIPE
+void diag_sdio_fn(int type)
+{
+ if (machine_is_msm8x60_fusion() || machine_is_msm8x60_fusn_ffa()) {
+ if (type == INIT)
+ diagfwd_sdio_init();
+ else if (type == EXIT)
+ diagfwd_sdio_exit();
+ }
+}
+#else
+inline void diag_sdio_fn(int type)
+{
+}
+#endif
+
+#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
+void diagfwd_bridge_fn(int type)
+{
+ if (type == EXIT)
+ diagfwd_bridge_exit();
+}
+#else
+inline void diagfwd_bridge_fn(int type)
+{
+}
+#endif
+
+static int __init diagchar_init(void)
+{
+ dev_t dev;
+ int error, ret;
+
+ pr_debug("diagfwd initializing ..\n");
+ ret = 0;
+ driver = kzalloc(sizeof(struct diagchar_dev) + 5, GFP_KERNEL); /* +5 for the "diag" name */
+#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
+ diag_bridge = kzalloc(MAX_BRIDGES * sizeof(struct diag_bridge_dev), GFP_KERNEL);
+ if (!diag_bridge)
+ pr_warn("diag: could not allocate memory for bridges\n");
+ diag_hsic = kzalloc(MAX_HSIC_CH * sizeof(struct diag_hsic_dev), GFP_KERNEL);
+ if (!diag_hsic)
+ pr_warn("diag: could not allocate memory for hsic ch\n");
+#endif
+
+ if (driver) {
+ driver->used = 0;
+ timer_in_progress = 0;
+ driver->debug_flag = 1;
+ driver->dci_state = DIAG_DCI_NO_ERROR;
+ setup_timer(&drain_timer, drain_timer_func, 1234);
+ driver->itemsize = itemsize;
+ driver->poolsize = poolsize;
+ driver->itemsize_hdlc = itemsize_hdlc;
+ driver->poolsize_hdlc = poolsize_hdlc;
+ driver->itemsize_user = itemsize_user;
+ driver->poolsize_user = poolsize_user;
+ driver->itemsize_write_struct = itemsize_write_struct;
+ driver->poolsize_write_struct = poolsize_write_struct;
+ driver->itemsize_dci = itemsize_dci;
+ driver->poolsize_dci = poolsize_dci;
+ driver->num_clients = max_clients;
+ driver->logging_mode = USB_MODE;
+ driver->socket_process = NULL;
+ driver->callback_process = NULL;
+ driver->mask_check = 0;
+ driver->in_busy_pktdata = 0;
+ mutex_init(&driver->diagchar_mutex);
+ init_waitqueue_head(&driver->wait_q);
+ init_waitqueue_head(&driver->smd_wait_q);
+ INIT_WORK(&(driver->diag_drain_work), diag_drain_work_fn);
+ diag_real_time_info_init();
+ diag_debugfs_init();
+ diag_masks_init();
+ diagfwd_init();
+#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
+ diagfwd_bridge_init(HSIC);
+ diagfwd_bridge_init(HSIC_2);
+ /* register HSIC device */
+ ret = platform_driver_register(&msm_hsic_ch_driver);
+ if (ret)
+ pr_err("diag: could not register HSIC device, ret: %d\n", ret);
+ diagfwd_bridge_init(SMUX);
+ INIT_WORK(&(driver->diag_connect_work), diag_connect_work_fn);
+ INIT_WORK(&(driver->diag_disconnect_work), diag_disconnect_work_fn);
+#endif
+ diagfwd_cntl_init();
+ driver->dci_state = diag_dci_init();
+ diag_sdio_fn(INIT);
+
+ pr_debug("diagchar initializing ..\n");
+ driver->num = 1;
+ driver->name = ((void *)driver) + sizeof(struct diagchar_dev);
+ strlcpy(driver->name, "diag", 5);
+ wake_lock_init(&driver->wake_lock, WAKE_LOCK_SUSPEND, "diagchar");
+
+ /* Get major number from kernel and initialize */
+ error = alloc_chrdev_region(&dev, driver->minor_start, driver->num, driver->name);
+ if (!error) {
+ driver->major = MAJOR(dev);
+ driver->minor_start = MINOR(dev);
+ } else {
+ printk(KERN_ERR "Major number not allocated\n");
+ goto fail;
+ }
+ driver->cdev = cdev_alloc();
+ if (!driver->cdev)
+ goto fail;
+ error = diagchar_setup_cdev(dev);
+ if (error)
+ goto fail;
+ } else {
+ printk(KERN_ERR "diagchar: driver allocation failed\n");
+ goto fail;
+ }
+
+ pr_info("diagchar initialized\n");
+ return 0;
+
+fail:
+ diag_debugfs_cleanup();
+ diagchar_cleanup();
+ diagfwd_exit();
+ diagfwd_cntl_exit();
+ diag_dci_exit();
+ diag_masks_exit();
+ diag_sdio_fn(EXIT);
+ diagfwd_bridge_fn(EXIT);
+ return -1;
+}
+
+static void diagchar_exit(void)
+{
+ printk(KERN_INFO "diagchar exiting ..\n");
+ /* On Driver exit, send special pool type to
+ ensure no memory leaks */
+ diagmem_exit(driver, POOL_TYPE_ALL);
+ diagfwd_exit();
+ diagfwd_cntl_exit();
+ diag_dci_exit();
+ diag_masks_exit();
+ diag_sdio_fn(EXIT);
+ diagfwd_bridge_fn(EXIT);
+ diag_debugfs_cleanup();
+ diagchar_cleanup();
+ printk(KERN_INFO "done diagchar exit\n");
+}
+
+module_init(diagchar_init);
+module_exit(diagchar_exit);
diff --git a/drivers/char/diag/diagchar_hdlc.c b/drivers/char/diag/diagchar_hdlc.c
new file mode 100644
index 0000000..8d0bb78
--- /dev/null
+++ b/drivers/char/diag/diagchar_hdlc.c
@@ -0,0 +1,258 @@
+/* Copyright (c) 2008-2009, 2012-2013, The Linux Foundation.
+ * All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/cdev.h>
+#include <linux/fs.h>
+#include <linux/device.h>
+#include <linux/uaccess.h>
+#include <linux/ratelimit.h>
+#include <linux/crc-ccitt.h>
+#include "diagchar_hdlc.h"
+#include "diagchar.h"
+
+MODULE_LICENSE("GPL v2");
+
+#define CRC_16_L_SEED 0xFFFF
+
+#define CRC_16_L_STEP(xx_crc, xx_c) \
+ crc_ccitt_byte(xx_crc, xx_c)
+
+void diag_hdlc_encode(struct diag_send_desc_type *src_desc, struct diag_hdlc_dest_type *enc)
+{
+ uint8_t *dest;
+ uint8_t *dest_last;
+ const uint8_t *src;
+ const uint8_t *src_last;
+ uint16_t crc;
+ unsigned char src_byte = 0;
+ enum diag_send_state_enum_type state;
+ unsigned int used = 0;
+
+ if (src_desc && enc) {
+
+ /* Copy parts to local variables. */
+ src = src_desc->pkt;
+ src_last = src_desc->last;
+ state = src_desc->state;
+ dest = enc->dest;
+ dest_last = enc->dest_last;
+
+ if (state == DIAG_STATE_START) {
+ crc = CRC_16_L_SEED;
+ state++;
+ } else {
+ /* Get a local copy of the CRC */
+ crc = enc->crc;
+ }
+
+ /* dest or dest_last may be NULL to trigger a
+ state transition only */
+ if (dest && dest_last) {
+ /* This condition needs to include the possibility
+ of 2 dest bytes for an escaped byte */
+ while (src <= src_last && dest <= dest_last) {
+
+ src_byte = *src++;
+
+ if ((src_byte == CONTROL_CHAR) || (src_byte == ESC_CHAR)) {
+
+ /* If the escape character is not the
+ last byte */
+ if (dest != dest_last) {
+ crc = CRC_16_L_STEP(crc, src_byte);
+
+ *dest++ = ESC_CHAR;
+ used++;
+
+ *dest++ = src_byte ^ ESC_MASK;
+ used++;
+ } else {
+
+ src--;
+ break;
+ }
+
+ } else {
+ crc = CRC_16_L_STEP(crc, src_byte);
+ *dest++ = src_byte;
+ used++;
+ }
+ }
+
+ if (src > src_last) {
+
+ if (state == DIAG_STATE_BUSY) {
+ if (src_desc->terminate) {
+ crc = ~crc;
+ state++;
+ } else {
+ /* Done with fragment */
+ state = DIAG_STATE_COMPLETE;
+ }
+ }
+
+ while (dest <= dest_last && state >= DIAG_STATE_CRC1 && state < DIAG_STATE_TERM) {
+ /* Encode a byte of the CRC next */
+ src_byte = crc & 0xFF;
+
+ if ((src_byte == CONTROL_CHAR)
+ || (src_byte == ESC_CHAR)) {
+
+ if (dest != dest_last) {
+
+ *dest++ = ESC_CHAR;
+ used++;
+ *dest++ = src_byte ^ ESC_MASK;
+ used++;
+
+ crc >>= 8;
+ } else {
+
+ break;
+ }
+ } else {
+
+ crc >>= 8;
+ *dest++ = src_byte;
+ used++;
+ }
+
+ state++;
+ }
+
+ if (state == DIAG_STATE_TERM) {
+ if (dest_last >= dest) {
+ *dest++ = CONTROL_CHAR;
+ used++;
+ state++; /* Complete */
+ }
+ }
+ }
+ }
+ /* Copy local variables back into the encode structure. */
+
+ enc->dest = dest;
+ enc->dest_last = dest_last;
+ enc->crc = crc;
+ src_desc->pkt = src;
+ src_desc->last = src_last;
+ src_desc->state = state;
+ }
+
+ return;
+}
+
+int diag_hdlc_decode(struct diag_hdlc_decode_type *hdlc)
+{
+ uint8_t *src_ptr = NULL, *dest_ptr = NULL;
+ unsigned int src_length = 0, dest_length = 0;
+
+ unsigned int len = 0;
+ unsigned int i;
+ uint8_t src_byte;
+
+ int pkt_bnd = 0;
+ int msg_start;
+
+ if (hdlc && hdlc->src_ptr && hdlc->dest_ptr && (hdlc->src_size - hdlc->src_idx > 0) && (hdlc->dest_size - hdlc->dest_idx > 0)) {
+
+ msg_start = (hdlc->src_idx == 0) ? 1 : 0;
+
+ src_ptr = hdlc->src_ptr;
+ src_ptr = &src_ptr[hdlc->src_idx];
+ src_length = hdlc->src_size - hdlc->src_idx;
+
+ dest_ptr = hdlc->dest_ptr;
+ dest_ptr = &dest_ptr[hdlc->dest_idx];
+ dest_length = hdlc->dest_size - hdlc->dest_idx;
+
+ for (i = 0; i < src_length; i++) {
+
+ src_byte = src_ptr[i];
+
+ if (hdlc->escaping) {
+ dest_ptr[len++] = src_byte ^ ESC_MASK;
+ hdlc->escaping = 0;
+ } else if (src_byte == ESC_CHAR) {
+ if (i == (src_length - 1)) {
+ hdlc->escaping = 1;
+ i++;
+ break;
+ } else {
+ dest_ptr[len++] = src_ptr[++i] ^ ESC_MASK;
+ }
+ } else if (src_byte == CONTROL_CHAR) {
+ dest_ptr[len++] = src_byte;
+ /*
+ * If this is the first byte in the message,
+ * then it is part of the command. Otherwise,
+ * consider it as the last byte of the
+ * message.
+ */
+ if (msg_start && i == 0 && src_length > 1)
+ continue;
+ i++;
+ pkt_bnd = 1;
+ break;
+ } else {
+ dest_ptr[len++] = src_byte;
+ }
+
+ if (len >= dest_length) {
+ i++;
+ break;
+ }
+ }
+
+ hdlc->src_idx += i;
+ hdlc->dest_idx += len;
+ }
+
+ return pkt_bnd;
+}
+
+int crc_check(uint8_t *buf, uint16_t len)
+{
+ uint16_t crc = CRC_16_L_SEED;
+ uint8_t sent_crc[2] = { 0, 0 };
+
+ /*
+ * The minimum length of a valid incoming packet is 4: at least 1 byte
+ * of data, 2 CRC bytes and the trailing control character.
+ */
+ if (!buf || len < 4) {
+ pr_err_ratelimited("diag: In %s, invalid packet or length, buf: %pK, len: %d\n", __func__, buf, len);
+ return -EIO;
+ }
+
+ /*
+ * Run the CRC over the original input, skipping the last 3 bytes
+ * (2 CRC bytes plus the trailing control character).
+ */
+ crc = crc_ccitt(crc, buf, len - 3);
+ crc ^= CRC_16_L_SEED;
+
+ /* Check the computed CRC against the original CRC bytes. */
+ sent_crc[0] = buf[len - 3];
+ sent_crc[1] = buf[len - 2];
+ if (crc != *((uint16_t *) sent_crc)) {
+ pr_debug("diag: In %s, crc mismatch. expected: %x, sent: %x\n", __func__, crc, *((uint16_t *) sent_crc));
+ return -EIO;
+ }
+
+ return 0;
+}
diff --git a/drivers/char/diag/diagchar_hdlc.h b/drivers/char/diag/diagchar_hdlc.h
new file mode 100644
index 0000000..facbcee
--- /dev/null
+++ b/drivers/char/diag/diagchar_hdlc.h
@@ -0,0 +1,60 @@
+/* Copyright (c) 2008-2009, 2012-2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef DIAGCHAR_HDLC
+#define DIAGCHAR_HDLC
+
+enum diag_send_state_enum_type {
+ DIAG_STATE_START,
+ DIAG_STATE_BUSY,
+ DIAG_STATE_CRC1,
+ DIAG_STATE_CRC2,
+ DIAG_STATE_TERM,
+ DIAG_STATE_COMPLETE
+};
+
+struct diag_send_desc_type {
+ const void *pkt;
+ const void *last; /* Address of last byte to send. */
+ enum diag_send_state_enum_type state;
+ unsigned char terminate; /* True if this fragment
+ terminates the packet */
+};
+
+struct diag_hdlc_dest_type {
+ void *dest;
+ void *dest_last;
+ /* Below: internal use only */
+ uint16_t crc;
+};
+
+struct diag_hdlc_decode_type {
+ uint8_t *src_ptr;
+ unsigned int src_idx;
+ unsigned int src_size;
+ uint8_t *dest_ptr;
+ unsigned int dest_idx;
+ unsigned int dest_size;
+ int escaping;
+};
+
+void diag_hdlc_encode(struct diag_send_desc_type *src_desc, struct diag_hdlc_dest_type *enc);
+
+int diag_hdlc_decode(struct diag_hdlc_decode_type *hdlc);
+
+int crc_check(uint8_t *buf, uint16_t len);
+
+#define ESC_CHAR 0x7D
+#define ESC_MASK 0x20
+
+#endif
diff --git a/drivers/char/diag/diagfwd.c b/drivers/char/diag/diagfwd.c
new file mode 100644
index 0000000..353d36b
--- /dev/null
+++ b/drivers/char/diag/diagfwd.c
@@ -0,0 +1,2395 @@
+/* Copyright (c) 2008-2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#include <linux/slab.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/err.h>
+#include <linux/platform_device.h>
+#include <linux/sched.h>
+#include <linux/ratelimit.h>
+#include <linux/workqueue.h>
+#include <linux/pm_runtime.h>
+#include <linux/diagchar.h>
+#include <linux/delay.h>
+#include <linux/reboot.h>
+#include <linux/of.h>
+#include <linux/kmemleak.h>
+#ifdef CONFIG_DIAG_OVER_USB
+#include <mach/usbdiag.h>
+#endif
+#include <mach/msm_smd.h>
+#include <mach/socinfo.h>
+#include <mach/restart.h>
+#include "diagmem.h"
+#include "diagchar.h"
+#include "diagfwd.h"
+#include "diagfwd_cntl.h"
+#include "diagfwd_hsic.h"
+#include "diagchar_hdlc.h"
+#ifdef CONFIG_DIAG_SDIO_PIPE
+#include "diagfwd_sdio.h"
+#endif
+#include "diag_dci.h"
+#include "diag_masks.h"
+#include "diagfwd_bridge.h"
+
+#define MODE_CMD 41
+#define RESET_ID 2
+
+#define STM_CMD_VERSION_OFFSET 4
+#define STM_CMD_MASK_OFFSET 5
+#define STM_CMD_DATA_OFFSET 6
+#define STM_CMD_NUM_BYTES 7
+
+#define STM_RSP_VALID_INDEX 7
+#define STM_RSP_SUPPORTED_INDEX 8
+#define STM_RSP_SMD_COMPLY_INDEX 9
+#define STM_RSP_NUM_BYTES 10
+
+#define SMD_DRAIN_BUF_SIZE 4096
+
+int diag_debug_buf_idx;
+unsigned char diag_debug_buf[1024];
+/* Number of entries in table of buffers */
+static unsigned int buf_tbl_size = 10;
+struct diag_master_table entry;
+int wrap_enabled;
+uint16_t wrap_count;
+
+void encode_rsp_and_send(int buf_length)
+{
+ struct diag_send_desc_type send = { NULL, NULL, DIAG_STATE_START, 0 };
+ struct diag_hdlc_dest_type enc = { NULL, NULL, 0 };
+ struct diag_smd_info *data = &(driver->smd_data[MODEM_DATA]);
+
+ if (buf_length > APPS_BUF_SIZE) {
+ pr_err("diag: In %s, invalid len %d, permissible len %d\n", __func__, buf_length, APPS_BUF_SIZE);
+ return;
+ }
+
+ send.state = DIAG_STATE_START;
+ send.pkt = driver->apps_rsp_buf;
+ send.last = (void *)(driver->apps_rsp_buf + buf_length);
+ send.terminate = 1;
+ if (!data->in_busy_1) {
+ enc.dest = data->buf_in_1;
+ enc.dest_last = (void *)(data->buf_in_1 + APPS_BUF_SIZE - 1);
+ diag_hdlc_encode(&send, &enc);
+ data->write_ptr_1->buf = data->buf_in_1;
+ data->write_ptr_1->length = (int)(enc.dest - (void *)(data->buf_in_1));
+ data->in_busy_1 = 1;
+ diag_device_write(data->buf_in_1, data->peripheral, data->write_ptr_1);
+ memset(driver->apps_rsp_buf, '\0', APPS_BUF_SIZE);
+ }
+}
+
+/* Determine if this device uses a device tree */
+#ifdef CONFIG_OF
+static int has_device_tree(void)
+{
+ struct device_node *node;
+
+ node = of_find_node_by_path("/");
+ if (node) {
+ of_node_put(node);
+ return 1;
+ }
+ return 0;
+}
+#else
+static int has_device_tree(void)
+{
+ return 0;
+}
+#endif
+
+int chk_config_get_id(void)
+{
+#if 0
+ /* For all Fusion targets, Modem will always be present */
+ if (machine_is_msm8x60_fusion() || machine_is_msm8x60_fusn_ffa())
+ return 0;
+
+ switch (socinfo_get_msm_cpu()) {
+ case MSM_CPU_8X60:
+ return APQ8060_TOOLS_ID;
+ case MSM_CPU_8960:
+ case MSM_CPU_8960AB:
+ return AO8960_TOOLS_ID;
+ case MSM_CPU_8064:
+ case MSM_CPU_8064AB:
+ case MSM_CPU_8064AA:
+ return APQ8064_TOOLS_ID;
+ case MSM_CPU_8930:
+ case MSM_CPU_8930AA:
+ case MSM_CPU_8930AB:
+ return MSM8930_TOOLS_ID;
+ case MSM_CPU_8974:
+ return MSM8974_TOOLS_ID;
+ case MSM_CPU_8625:
+ return MSM8625_TOOLS_ID;
+ case MSM_CPU_8084:
+ return APQ8084_TOOLS_ID;
+ default:
+ if (driver->use_device_tree) {
+ if (machine_is_msm8974())
+ return MSM8974_TOOLS_ID;
+ else if (machine_is_apq8074())
+ return APQ8074_TOOLS_ID;
+ else
+ return 0;
+ } else {
+ return 0;
+ }
+ }
+#else
+ return 0;
+#endif
+}
+
+/*
+ * This will return TRUE for targets which support apps only mode and hence SSR.
+ * This applies to 8960 and newer targets.
+ */
+int chk_apps_only(void)
+{
+#if 0
+ if (driver->use_device_tree)
+ return 1;
+
+ switch (socinfo_get_msm_cpu()) {
+ case MSM_CPU_8960:
+ case MSM_CPU_8960AB:
+ case MSM_CPU_8064:
+ case MSM_CPU_8064AB:
+ case MSM_CPU_8064AA:
+ case MSM_CPU_8930:
+ case MSM_CPU_8930AA:
+ case MSM_CPU_8930AB:
+ case MSM_CPU_8627:
+ case MSM_CPU_9615:
+ case MSM_CPU_8974:
+ return 1;
+ default:
+ return 0;
+ }
+#else
+ return 0;
+#endif
+}
+
+/*
+ * This will return TRUE for targets which support apps as master.
+ * Thus, SW DLOAD and Mode Reset are supported on apps processor.
+ * This applies to 8960 and newer targets.
+ */
+int chk_apps_master(void)
+{
+ if (driver->use_device_tree)
+ return 1;
+ else if (soc_class_is_msm8960() || soc_class_is_msm8930() || soc_class_is_apq8064() || cpu_is_msm9615())
+ return 1;
+ else
+ return 0;
+}
+
+int chk_polling_response(void)
+{
+ if (!(driver->polling_reg_flag) && chk_apps_master())
+ /*
+ * If the apps processor is master and no other processor
+ * has registered to respond for polling
+ */
+ return 1;
+ else if (!((driver->smd_data[MODEM_DATA].ch) && (driver->rcvd_feature_mask[MODEM_DATA])) && (chk_apps_master()))
+ /*
+ * If the apps processor is not the master and the modem
+ * is not up or we did not receive the feature masks from Modem
+ */
+ return 1;
+ else
+ return 0;
+}
+
+/*
+ * This function should be called if you feel that the logging process may
+ * need to be woken up. For instance, if the logging mode is MEMORY_DEVICE MODE
+ * and, while trying to read data from an SMD data channel, there are no buffers
+ * available to read the data into, then this function should be called to
+ * determine if the logging process needs to be woken up.
+ */
+void chk_logging_wakeup(void)
+{
+ int i;
+
+ /* Find the index of the logging process */
+ for (i = 0; i < driver->num_clients; i++)
+ if (driver->client_map[i].pid == driver->logging_process_id)
+ break;
+
+ if (i < driver->num_clients) {
+ /* At very high logging rates a race condition can
+ * occur where the buffers containing the data from
+ * an smd channel are all in use, but the data_ready
+ * flag is cleared. In this case, the buffers never
+ * have their data read/logged. Detect and remedy this
+ * situation.
+ */
+ if ((driver->data_ready[i] & USER_SPACE_DATA_TYPE) == 0) {
+ driver->data_ready[i] |= USER_SPACE_DATA_TYPE;
+ pr_debug("diag: Force wakeup of logging process\n");
+ wake_up_interruptible(&driver->wait_q);
+ }
+ }
+}
+
+int diag_add_hdlc_encoding(struct diag_smd_info *smd_info, void *buf, int total_recd, uint8_t *encode_buf, int *encoded_length)
+{
+ struct diag_send_desc_type send = { NULL, NULL, DIAG_STATE_START, 0 };
+ struct diag_hdlc_dest_type enc = { NULL, NULL, 0 };
+ struct data_header {
+ uint8_t control_char;
+ uint8_t version;
+ uint16_t length;
+ };
+ struct data_header *header;
+ int header_size = sizeof(struct data_header);
+ uint8_t *end_control_char;
+ uint8_t *payload;
+ uint8_t *temp_buf;
+ uint8_t *temp_encode_buf;
+ int src_pkt_len;
+ int encoded_pkt_length;
+ int max_size;
+ int total_processed = 0;
+ int bytes_remaining;
+ int success = 1;
+
+ temp_buf = buf;
+ temp_encode_buf = encode_buf;
+ bytes_remaining = *encoded_length;
+ while (total_processed < total_recd) {
+ header = (struct data_header *)temp_buf;
+ /* Perform initial error checking */
+ if (header->control_char != CONTROL_CHAR || header->version != 1) {
+ success = 0;
+ break;
+ }
+ payload = temp_buf + header_size;
+ end_control_char = payload + header->length;
+ if (*end_control_char != CONTROL_CHAR) {
+ success = 0;
+ break;
+ }
+
+ max_size = 2 * header->length + 3;
+ if (bytes_remaining < max_size) {
+ pr_err("diag: In %s, Not enough room to encode remaining data for peripheral: %d, bytes available: %d, max_size: %d\n",
+ __func__, smd_info->peripheral, bytes_remaining, max_size);
+ success = 0;
+ break;
+ }
+
+ /* Prepare for encoding the data */
+ send.state = DIAG_STATE_START;
+ send.pkt = payload;
+ send.last = (void *)(payload + header->length - 1);
+ send.terminate = 1;
+
+ enc.dest = temp_encode_buf;
+ enc.dest_last = (void *)(temp_encode_buf + max_size);
+ enc.crc = 0;
+ diag_hdlc_encode(&send, &enc);
+
+ /* Prepare for next packet */
+ src_pkt_len = (header_size + header->length + 1);
+ total_processed += src_pkt_len;
+ temp_buf += src_pkt_len;
+
+ encoded_pkt_length = (uint8_t *) enc.dest - temp_encode_buf;
+ bytes_remaining -= encoded_pkt_length;
+ temp_encode_buf = enc.dest;
+ }
+
+ *encoded_length = (int)(temp_encode_buf - encode_buf);
+
+ return success;
+}
+
+static int check_bufsize_for_encoding(struct diag_smd_info *smd_info, void *buf, int total_recd)
+{
+ int buf_size = IN_BUF_SIZE;
+ int max_size = 2 * total_recd + 3;
+ unsigned char *temp_buf;
+
+ if (max_size > IN_BUF_SIZE) {
+ if (max_size > MAX_IN_BUF_SIZE) {
+ pr_err_ratelimited("diag: In %s, SMD sending packet of %d bytes that may expand to %d bytes, peripheral: %d\n",
+ __func__, total_recd, max_size, smd_info->peripheral);
+ max_size = MAX_IN_BUF_SIZE;
+ }
+ if (buf == smd_info->buf_in_1_raw) {
+ /* Only realloc if we need to increase the size */
+ if (smd_info->buf_in_1_size < max_size) {
+ temp_buf = krealloc(smd_info->buf_in_1, max_size, GFP_KERNEL);
+ if (temp_buf) {
+ smd_info->buf_in_1 = temp_buf;
+ smd_info->buf_in_1_size = max_size;
+ }
+ }
+ buf_size = smd_info->buf_in_1_size;
+ } else {
+ /* Only realloc if we need to increase the size */
+ if (smd_info->buf_in_2_size < max_size) {
+ temp_buf = krealloc(smd_info->buf_in_2, max_size, GFP_KERNEL);
+ if (temp_buf) {
+ smd_info->buf_in_2 = temp_buf;
+ smd_info->buf_in_2_size = max_size;
+ }
+ }
+ buf_size = smd_info->buf_in_2_size;
+ }
+ }
+
+ return buf_size;
+}
+
+void process_lock_enabling(struct diag_nrt_wake_lock *lock, int real_time)
+{
+ unsigned long read_lock_flags;
+
+ spin_lock_irqsave(&lock->read_spinlock, read_lock_flags);
+ if (real_time)
+ lock->enabled = 0;
+ else
+ lock->enabled = 1;
+ lock->ref_count = 0;
+ lock->copy_count = 0;
+ wake_unlock(&lock->read_lock);
+ spin_unlock_irqrestore(&lock->read_spinlock, read_lock_flags);
+}
+
+void process_lock_on_notify(struct diag_nrt_wake_lock *lock)
+{
+ unsigned long read_lock_flags;
+
+ spin_lock_irqsave(&lock->read_spinlock, read_lock_flags);
+ /*
+ * Do not work with ref_count here in case
+ * of spurious interrupt
+ */
+ if (lock->enabled)
+ wake_lock(&lock->read_lock);
+ spin_unlock_irqrestore(&lock->read_spinlock, read_lock_flags);
+}
+
+void process_lock_on_read(struct diag_nrt_wake_lock *lock, int pkt_len)
+{
+ unsigned long read_lock_flags;
+
+ spin_lock_irqsave(&lock->read_spinlock, read_lock_flags);
+ if (lock->enabled) {
+ if (pkt_len > 0) {
+ /*
+ * There is read data that needs to be
+ * processed; make sure the processor
+ * does not go to sleep.
+ */
+ lock->ref_count++;
+ if (!wake_lock_active(&lock->read_lock))
+ wake_lock(&lock->read_lock);
+ } else {
+ /*
+ * There was no data associated with the
+ * read from the smd, unlock the wake lock
+ * if it is not needed.
+ */
+ if (lock->ref_count < 1) {
+ if (wake_lock_active(&lock->read_lock))
+ wake_unlock(&lock->read_lock);
+ lock->ref_count = 0;
+ lock->copy_count = 0;
+ }
+ }
+ }
+ spin_unlock_irqrestore(&lock->read_spinlock, read_lock_flags);
+}
+
+void process_lock_on_copy(struct diag_nrt_wake_lock *lock)
+{
+ unsigned long read_lock_flags;
+
+ spin_lock_irqsave(&lock->read_spinlock, read_lock_flags);
+ if (lock->enabled)
+ lock->copy_count++;
+ spin_unlock_irqrestore(&lock->read_spinlock, read_lock_flags);
+}
+
+void process_lock_on_copy_complete(struct diag_nrt_wake_lock *lock)
+{
+ unsigned long read_lock_flags;
+
+ spin_lock_irqsave(&lock->read_spinlock, read_lock_flags);
+ if (lock->enabled) {
+ lock->ref_count -= lock->copy_count;
+ if (lock->ref_count < 1) {
+ wake_unlock(&lock->read_lock);
+ lock->ref_count = 0;
+ }
+ lock->copy_count = 0;
+ }
+ spin_unlock_irqrestore(&lock->read_spinlock, read_lock_flags);
+}
+
+/* Process the data read from the smd data channel */
+int diag_process_smd_read_data(struct diag_smd_info *smd_info, void *buf, int total_recd)
+{
+ struct diag_request *write_ptr_modem = NULL;
+ int *in_busy_ptr = NULL;
+ int err = 0;
+
+ /*
+ * Do not process data on command channel if the
+ * channel is not designated to do so
+ */
+ if ((smd_info->type == SMD_CMD_TYPE) && !driver->separate_cmdrsp[smd_info->peripheral]) {
+ /* This print is for debugging */
+ pr_err("diag: In %s, received data on non-designated command channel: %d\n", __func__, smd_info->peripheral);
+ return 0;
+ }
+
+ /* If the data is already hdlc encoded */
+ if (!smd_info->encode_hdlc) {
+ if (smd_info->buf_in_1 == buf) {
+ write_ptr_modem = smd_info->write_ptr_1;
+ in_busy_ptr = &smd_info->in_busy_1;
+ } else if (smd_info->buf_in_2 == buf) {
+ write_ptr_modem = smd_info->write_ptr_2;
+ in_busy_ptr = &smd_info->in_busy_2;
+ } else {
+ pr_err("diag: In %s, no match for in_busy_1, peripheral: %d\n", __func__, smd_info->peripheral);
+ }
+
+ if (write_ptr_modem) {
+ write_ptr_modem->length = total_recd;
+ *in_busy_ptr = 1;
+ err = diag_device_write(buf, smd_info->peripheral, write_ptr_modem);
+ if (err) {
+ /* Free up the buffer for future use */
+ *in_busy_ptr = 0;
+ pr_err_ratelimited("diag: In %s, diag_device_write error: %d\n", __func__, err);
+ }
+ }
+ } else {
+ /* The data is raw and needs to be hdlc encoded */
+ if (smd_info->buf_in_1_raw == buf) {
+ write_ptr_modem = smd_info->write_ptr_1;
+ in_busy_ptr = &smd_info->in_busy_1;
+ } else if (smd_info->buf_in_2_raw == buf) {
+ write_ptr_modem = smd_info->write_ptr_2;
+ in_busy_ptr = &smd_info->in_busy_2;
+ } else {
+ pr_err("diag: In %s, no match for in_busy_1, peripheral: %d\n", __func__, smd_info->peripheral);
+ }
+
+ if (write_ptr_modem) {
+ int success = 0;
+ int write_length = 0;
+ unsigned char *write_buf = NULL;
+
+ write_length = check_bufsize_for_encoding(smd_info, buf, total_recd);
+ if (write_length) {
+ write_buf = (buf == smd_info->buf_in_1_raw) ? smd_info->buf_in_1 : smd_info->buf_in_2;
+ success = diag_add_hdlc_encoding(smd_info, buf, total_recd, write_buf, &write_length);
+ if (success) {
+ write_ptr_modem->length = write_length;
+ *in_busy_ptr = 1;
+ err = diag_device_write(write_buf, smd_info->peripheral, write_ptr_modem);
+ if (err) {
+ /*
+ * Free up the buffer for
+ * future use
+ */
+ *in_busy_ptr = 0;
+ pr_err_ratelimited("diag: In %s, diag_device_write error: %d\n", __func__, err);
+ }
+ }
+ }
+ }
+ }
+
+ return 0;
+}
+
+static int diag_smd_resize_buf(struct diag_smd_info *smd_info, void **buf, unsigned int *buf_size, unsigned int requested_size)
+{
+ int success = 0;
+ void *temp_buf = NULL;
+ unsigned int new_buf_size = requested_size;
+
+ if (!smd_info)
+ return success;
+
+ if (requested_size <= MAX_IN_BUF_SIZE) {
+ pr_debug("diag: In %s, SMD peripheral: %d sending in packets up to %d bytes\n", __func__, smd_info->peripheral, requested_size);
+ } else {
+ pr_err_ratelimited("diag: In %s, SMD peripheral: %d, Packet size sent: %d, Max size supported (%d) exceeded. Data beyond max size will be lost\n",
+ __func__, smd_info->peripheral, requested_size, MAX_IN_BUF_SIZE);
+ new_buf_size = MAX_IN_BUF_SIZE;
+ }
+
+ /* Only resize if the buffer can be increased in size */
+ if (new_buf_size <= *buf_size) {
+ success = 1;
+ return success;
+ }
+
+ temp_buf = krealloc(*buf, new_buf_size, GFP_KERNEL);
+
+ if (temp_buf) {
+ /* Match the buffer and reset the pointer and size */
+ if (smd_info->encode_hdlc) {
+ /*
+ * This smd channel is supporting HDLC encoding
+ * on the apps
+ */
+ void *temp_hdlc = NULL;
+ if (*buf == smd_info->buf_in_1_raw) {
+ smd_info->buf_in_1_raw = temp_buf;
+ smd_info->buf_in_1_raw_size = new_buf_size;
+ temp_hdlc = krealloc(smd_info->buf_in_1, MAX_IN_BUF_SIZE, GFP_KERNEL);
+ if (temp_hdlc) {
+ smd_info->buf_in_1 = temp_hdlc;
+ smd_info->buf_in_1_size = MAX_IN_BUF_SIZE;
+ }
+ } else if (*buf == smd_info->buf_in_2_raw) {
+ smd_info->buf_in_2_raw = temp_buf;
+ smd_info->buf_in_2_raw_size = new_buf_size;
+ temp_hdlc = krealloc(smd_info->buf_in_2, MAX_IN_BUF_SIZE, GFP_KERNEL);
+ if (temp_hdlc) {
+ smd_info->buf_in_2 = temp_hdlc;
+ smd_info->buf_in_2_size = MAX_IN_BUF_SIZE;
+ }
+ }
+ } else {
+ if (*buf == smd_info->buf_in_1) {
+ smd_info->buf_in_1 = temp_buf;
+ smd_info->buf_in_1_size = new_buf_size;
+ } else if (*buf == smd_info->buf_in_2) {
+ smd_info->buf_in_2 = temp_buf;
+ smd_info->buf_in_2_size = new_buf_size;
+ }
+ }
+ *buf = temp_buf;
+ *buf_size = new_buf_size;
+ success = 1;
+ } else {
+ pr_err_ratelimited("diag: In %s, SMD peripheral: %d. packet size sent: %d, resize to support failed. Data beyond %d will be lost\n",
+ __func__, smd_info->peripheral, requested_size, *buf_size);
+ }
+
+ return success;
+}
+
+void diag_smd_send_req(struct diag_smd_info *smd_info)
+{
+ void *buf = NULL, *temp_buf = NULL;
+ int total_recd = 0, r = 0, pkt_len;
+ int loop_count = 0, total_recd_partial = 0;
+ int notify = 0;
+ unsigned int buf_size = 0;
+ int resize_success = 0;
+ int buf_full = 0;
+
+ if (!smd_info) {
+ pr_err("diag: In %s, no smd info. Not able to read.\n", __func__);
+ return;
+ }
+
+ /* Determine the buffer to read the data into. */
+ if (smd_info->type == SMD_DATA_TYPE) {
+ /* Data arrives raw from the peripheral; apps will do the HDLC encoding */
+ if (smd_info->encode_hdlc) {
+ if (!smd_info->in_busy_1) {
+ buf = smd_info->buf_in_1_raw;
+ buf_size = smd_info->buf_in_1_raw_size;
+ } else if (!smd_info->in_busy_2) {
+ buf = smd_info->buf_in_2_raw;
+ buf_size = smd_info->buf_in_2_raw_size;
+ }
+ } else {
+ if (!smd_info->in_busy_1) {
+ buf = smd_info->buf_in_1;
+ buf_size = smd_info->buf_in_1_size;
+ } else if (!smd_info->in_busy_2) {
+ buf = smd_info->buf_in_2;
+ buf_size = smd_info->buf_in_2_size;
+ }
+ }
+ } else if (smd_info->type == SMD_CMD_TYPE) {
+ /* Data arrives raw from the peripheral; apps will do the HDLC encoding */
+ if (smd_info->encode_hdlc) {
+ if (!smd_info->in_busy_1) {
+ buf = smd_info->buf_in_1_raw;
+ buf_size = smd_info->buf_in_1_raw_size;
+ }
+ } else {
+ if (!smd_info->in_busy_1) {
+ buf = smd_info->buf_in_1;
+ buf_size = smd_info->buf_in_1_size;
+ }
+ }
+ } else if (!smd_info->in_busy_1) {
+ buf = smd_info->buf_in_1;
+ buf_size = smd_info->buf_in_1_size;
+ }
+
+ if (!buf && (smd_info->type == SMD_DCI_TYPE || smd_info->type == SMD_DCI_CMD_TYPE))
+ diag_dci_try_deactivate_wakeup_source(smd_info->ch);
+
+ if (smd_info->ch && buf) {
+ int required_size = 0;
+ while ((pkt_len = smd_cur_packet_size(smd_info->ch)) != 0) {
+ total_recd_partial = 0;
+
+ required_size = pkt_len + total_recd;
+ if (required_size > buf_size)
+ resize_success = diag_smd_resize_buf(smd_info, &buf, &buf_size, required_size);
+
+ temp_buf = ((unsigned char *)buf) + total_recd;
+ while (pkt_len && (pkt_len != total_recd_partial)) {
+ loop_count++;
+ r = smd_read_avail(smd_info->ch);
+ pr_debug("diag: In %s, SMD peripheral: %d, received pkt %d %d\n", __func__, smd_info->peripheral, r, total_recd);
+ if (!r) {
+ /* Nothing to read from SMD */
+ wait_event(driver->smd_wait_q, ((smd_info->ch == 0) || smd_read_avail(smd_info->ch)));
+ /* If the smd channel is open */
+ if (smd_info->ch) {
+ pr_debug("diag: In %s, SMD peripheral: %d, return from wait_event\n", __func__, smd_info->peripheral);
+ continue;
+ } else {
+ pr_debug("diag: In %s, SMD peripheral: %d, return from wait_event ch closed\n", __func__, smd_info->peripheral);
+ goto fail_return;
+ }
+ }
+
+ if (pkt_len < r) {
+ pr_err("diag: In %s, SMD peripheral: %d, sending incorrect pkt\n", __func__, smd_info->peripheral);
+ goto fail_return;
+ }
+ if (pkt_len > r) {
+ pr_debug("diag: In %s, SMD sending partial pkt %d %d %d %d %d %d\n",
+ __func__, pkt_len, r, total_recd, loop_count, smd_info->peripheral, smd_info->type);
+ }
+
+ /* Protect from going beyond the end of the buffer */
+ if (total_recd < buf_size) {
+ if (total_recd + r > buf_size) {
+ r = buf_size - total_recd;
+ buf_full = 1;
+ }
+
+ total_recd += r;
+ total_recd_partial += r;
+
+ /* Keep reading for complete packet */
+ smd_read(smd_info->ch, temp_buf, r);
+ temp_buf += r;
+ } else {
+ /*
+ * This block handles the very rare case of a
+ * packet that is greater in length than what
+ * we can support. In this case, we
+ * incrementally drain the remaining portion
+ * of the packet that will not fit in the
+ * buffer, so that the entire packet is read
+ * from the smd.
+ */
+ int drain_bytes = (r > SMD_DRAIN_BUF_SIZE) ? SMD_DRAIN_BUF_SIZE : r;
+ unsigned char *drain_buf = kzalloc(drain_bytes,
+ GFP_KERNEL);
+ if (drain_buf) {
+ total_recd += drain_bytes;
+ total_recd_partial += drain_bytes;
+ smd_read(smd_info->ch, drain_buf, drain_bytes);
+ kfree(drain_buf);
+ } else {
+ pr_err("diag: In %s, SMD peripheral: %d, unable to allocate drain buffer\n", __func__, smd_info->peripheral);
+ break;
+ }
+ }
+ }
+
+ if (smd_info->type != SMD_CNTL_TYPE || buf_full)
+ break;
+
+ }
+
+ if (pkt_len == 0 && (smd_info->type == SMD_DCI_TYPE || smd_info->type == SMD_DCI_CMD_TYPE))
+ diag_dci_try_deactivate_wakeup_source(smd_info->ch);
+
+ if (!driver->real_time_mode && smd_info->type == SMD_DATA_TYPE)
+ process_lock_on_read(&smd_info->nrt_lock, pkt_len);
+
+ if (total_recd > 0) {
+ if (!buf) {
+ pr_err("diag: In %s, SMD peripheral: %d, Out of diagmem for Modem\n", __func__, smd_info->peripheral);
+ } else if (smd_info->process_smd_read_data) {
+ /*
+ * If the buffer was totally filled, reset
+ * total_recd appropriately
+ */
+ if (buf_full)
+ total_recd = buf_size;
+
+ notify = smd_info->process_smd_read_data(smd_info, buf, total_recd);
+ /* Poll SMD channels to check for data */
+ if (notify)
+ diag_smd_notify(smd_info, SMD_EVENT_DATA);
+ }
+ }
+ } else if (smd_info->ch && !buf && (driver->logging_mode == MEMORY_DEVICE_MODE)) {
+ chk_logging_wakeup();
+ }
+ return;
+
+fail_return:
+ if (smd_info->type == SMD_DCI_TYPE || smd_info->type == SMD_DCI_CMD_TYPE)
+ diag_dci_try_deactivate_wakeup_source(smd_info->ch);
+ return;
+}
+
+void diag_read_smd_work_fn(struct work_struct *work)
+{
+ struct diag_smd_info *smd_info = container_of(work,
+ struct diag_smd_info,
+ diag_read_smd_work);
+ diag_smd_send_req(smd_info);
+}
+
+int diag_device_write(void *buf, int data_type, struct diag_request *write_ptr)
+{
+ int i, err = 0, index = 0;
+
+ if (driver->logging_mode == MEMORY_DEVICE_MODE) {
+ if (data_type == APPS_DATA) {
+ for (i = 0; i < driver->buf_tbl_size; i++)
+ if (driver->buf_tbl[i].length == 0) {
+ driver->buf_tbl[i].buf = buf;
+ driver->buf_tbl[i].length = driver->used;
+#ifdef DIAG_DEBUG
+ pr_debug("diag: ENQUEUE buf ptr and length is %x , %d\n", (unsigned int)(driver->buf_tbl[i].buf), driver->buf_tbl[i].length);
+#endif
+ break;
+ }
+ }
+#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
+ else if (data_type == HSIC_DATA || data_type == HSIC_2_DATA) {
+ unsigned long flags;
+ int foundIndex = -1;
+ index = data_type - HSIC_DATA;
+ spin_lock_irqsave(&diag_hsic[index].hsic_spinlock, flags);
+ for (i = 0; i < diag_hsic[index].poolsize_hsic_write; i++) {
+ if (diag_hsic[index].hsic_buf_tbl[i].length == 0) {
+ diag_hsic[index].hsic_buf_tbl[i].buf = buf;
+ diag_hsic[index].hsic_buf_tbl[i].length = diag_bridge[index].write_len;
+ diag_hsic[index].num_hsic_buf_tbl_entries++;
+ foundIndex = i;
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&diag_hsic[index].hsic_spinlock, flags);
+ if (foundIndex == -1)
+ err = -1;
+ else
+ pr_debug("diag: ENQUEUE HSIC buf ptr and length is %x , %d, ch %d\n", (unsigned int)buf, diag_bridge[index].write_len, index);
+ }
+#endif
+ for (i = 0; i < driver->num_clients; i++)
+ if (driver->client_map[i].pid == driver->logging_process_id)
+ break;
+ if (i < driver->num_clients) {
+ pr_debug("diag: wake up logging process\n");
+ driver->data_ready[i] |= USERMODE_DIAGFWD;
+ wake_up_interruptible(&driver->wait_q);
+ } else
+ return -EINVAL;
+ } else if (driver->logging_mode == NO_LOGGING_MODE) {
+ if ((data_type >= MODEM_DATA) && (data_type <= WCNSS_DATA)) {
+ driver->smd_data[data_type].in_busy_1 = 0;
+ driver->smd_data[data_type].in_busy_2 = 0;
+ queue_work(driver->smd_data[data_type].wq, &(driver->smd_data[data_type].diag_read_smd_work));
+ if (data_type == MODEM_DATA && driver->separate_cmdrsp[data_type]) {
+ driver->smd_cmd[data_type].in_busy_1 = 0;
+ driver->smd_cmd[data_type].in_busy_2 = 0;
+ queue_work(driver->diag_wq, &(driver->smd_cmd[data_type].diag_read_smd_work));
+ }
+ }
+#ifdef CONFIG_DIAG_SDIO_PIPE
+ else if (data_type == SDIO_DATA) {
+ driver->in_busy_sdio = 0;
+ queue_work(driver->diag_sdio_wq, &(driver->diag_read_sdio_work));
+ }
+#endif
+#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
+ else if (data_type == HSIC_DATA || data_type == HSIC_2_DATA) {
+ index = data_type - HSIC_DATA;
+ if (diag_hsic[index].hsic_ch)
+ queue_work(diag_bridge[index].wq, &(diag_hsic[index].diag_read_hsic_work));
+ }
+#endif
+ err = -1;
+ }
+#ifdef CONFIG_DIAG_OVER_USB
+ else if (driver->logging_mode == USB_MODE) {
+ if (data_type == APPS_DATA) {
+ driver->write_ptr_svc = (struct diag_request *)
+ (diagmem_alloc(driver, sizeof(struct diag_request), POOL_TYPE_WRITE_STRUCT));
+ if (driver->write_ptr_svc) {
+ driver->write_ptr_svc->length = driver->used;
+ driver->write_ptr_svc->buf = buf;
+ err = usb_diag_write(driver->legacy_ch, driver->write_ptr_svc);
+ /* Free the buffer if write failed */
+ if (err) {
+ diagmem_free(driver, (unsigned char *)driver->write_ptr_svc, POOL_TYPE_WRITE_STRUCT);
+ }
+ } else {
+ err = -ENOMEM;
+ }
+ } else if ((data_type >= MODEM_DATA) && (data_type <= WCNSS_DATA)) {
+ write_ptr->buf = buf;
+#ifdef DIAG_DEBUG
+ printk(KERN_INFO "writing data to USB, pkt length %d\n", write_ptr->length);
+ print_hex_dump(KERN_DEBUG, "Written Packet Data to USB: ", DUMP_PREFIX_ADDRESS, 16, 1, buf, write_ptr->length, 1);
+#endif /* DIAG DEBUG */
+ err = usb_diag_write(driver->legacy_ch, write_ptr);
+ }
+#ifdef CONFIG_DIAG_SDIO_PIPE
+ else if (data_type == SDIO_DATA) {
+ if (machine_is_msm8x60_fusion() || machine_is_msm8x60_fusn_ffa()) {
+ write_ptr->buf = buf;
+ err = usb_diag_write(driver->mdm_ch, write_ptr);
+ } else
+ pr_err("diag: Incorrect sdio data while USB write\n");
+ }
+#endif
+#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
+ else if (data_type == HSIC_DATA || data_type == HSIC_2_DATA) {
+ index = data_type - HSIC_DATA;
+ if (diag_hsic[index].hsic_device_enabled) {
+ struct diag_request *write_ptr_mdm;
+ write_ptr_mdm = (struct diag_request *)
+ diagmem_alloc(driver, sizeof(struct diag_request), index + POOL_TYPE_HSIC_WRITE);
+ if (write_ptr_mdm) {
+ write_ptr_mdm->buf = buf;
+ write_ptr_mdm->length = diag_bridge[index].write_len;
+ write_ptr_mdm->context = (void *)index;
+ err = usb_diag_write(diag_bridge[index].ch, write_ptr_mdm);
+ /* Return to the pool immediately */
+ if (err) {
+ diagmem_free(driver, write_ptr_mdm, index + POOL_TYPE_HSIC_WRITE);
+ pr_err_ratelimited("diag: HSIC write failure, err: %d, ch %d\n", err, index);
+ }
+ } else {
+ pr_err("diag: allocate write fail\n");
+ err = -1;
+ }
+ } else {
+ pr_err("diag: Incorrect HSIC data while USB write\n");
+ err = -1;
+ }
+ } else if (data_type == SMUX_DATA) {
+ write_ptr->buf = buf;
+ write_ptr->context = (void *)SMUX;
+ pr_debug("diag: writing SMUX data\n");
+ err = usb_diag_write(diag_bridge[SMUX].ch, write_ptr);
+ }
+#endif
+ APPEND_DEBUG('d');
+ }
+#endif /* DIAG OVER USB */
+ return err;
+}
+
+static void diag_update_pkt_buffer(unsigned char *buf)
+{
+ unsigned char *ptr = driver->pkt_buf;
+ unsigned char *temp = buf;
+
+ mutex_lock(&driver->diagchar_mutex);
+ if (CHK_OVERFLOW(ptr, ptr, ptr + PKT_SIZE, driver->pkt_length)) {
+ memcpy(ptr, temp, driver->pkt_length);
+ driver->in_busy_pktdata = 1;
+ } else
+ printk(KERN_CRIT "Not enough buffer space for PKT_RESP\n");
+ mutex_unlock(&driver->diagchar_mutex);
+}
+
+void diag_update_userspace_clients(unsigned int type)
+{
+ int i;
+
+ mutex_lock(&driver->diagchar_mutex);
+ for (i = 0; i < driver->num_clients; i++)
+ if (driver->client_map[i].pid != 0)
+ driver->data_ready[i] |= type;
+ wake_up_interruptible(&driver->wait_q);
+ mutex_unlock(&driver->diagchar_mutex);
+}
+
+void diag_update_sleeping_process(int process_id, int data_type)
+{
+ int i;
+
+ mutex_lock(&driver->diagchar_mutex);
+ for (i = 0; i < driver->num_clients; i++)
+ if (driver->client_map[i].pid == process_id) {
+ driver->data_ready[i] |= data_type;
+ break;
+ }
+ wake_up_interruptible(&driver->wait_q);
+ mutex_unlock(&driver->diagchar_mutex);
+}
+
+static int diag_check_mode_reset(unsigned char *buf)
+{
+ int is_mode_reset = 0;
+ if (chk_apps_master() && (int)(*(char *)buf) == MODE_CMD)
+ if ((int)(*(char *)(buf + 1)) == RESET_ID)
+ is_mode_reset = 1;
+ return is_mode_reset;
+}
+
+void diag_send_data(struct diag_master_table entry, unsigned char *buf, int len, int type)
+{
+ driver->pkt_length = len;
+
+ /* If the process_id corresponds to an apps process */
+ if (entry.process_id != NON_APPS_PROC) {
+ /* If the message is to be sent to the apps process */
+ if (type != MODEM_DATA) {
+ diag_update_pkt_buffer(buf);
+ diag_update_sleeping_process(entry.process_id, PKT_TYPE);
+ }
+ } else {
+ if (len > 0) {
+ if (entry.client_id < NUM_SMD_DATA_CHANNELS) {
+ struct diag_smd_info *smd_info;
+ int index = entry.client_id;
+ /*
+ * Mode reset should work even if
+ * modem is down
+ */
+ if ((index == MODEM_DATA) && diag_check_mode_reset(buf)) {
+ return;
+ }
+ smd_info = (driver->separate_cmdrsp[index] && index < NUM_SMD_CMD_CHANNELS) ? &driver->smd_cmd[index] : &driver->smd_data[index];
+
+ if (smd_info->ch) {
+ mutex_lock(&smd_info->smd_ch_mutex);
+ smd_write(smd_info->ch, buf, len);
+ mutex_unlock(&smd_info->smd_ch_mutex);
+ } else {
+ pr_err("diag: In %s, smd channel %d not open, peripheral: %d, type: %d\n", __func__, index, smd_info->peripheral, smd_info->type);
+ }
+ } else {
+ pr_alert("diag: In %s, incorrect channel: %d", __func__, entry.client_id);
+ }
+ }
+ }
+}
+
+void diag_process_stm_mask(uint8_t cmd, uint8_t data_mask, int data_type, uint8_t *rsp_supported, uint8_t *rsp_smd_comply)
+{
+ int status = 0;
+ if (data_type >= MODEM_DATA && data_type <= WCNSS_DATA) {
+ if (driver->peripheral_supports_stm[data_type]) {
+ status = diag_send_stm_state(&driver->smd_cntl[data_type], cmd);
+ if (status == 1)
+ *rsp_smd_comply |= data_mask;
+ *rsp_supported |= data_mask;
+ } else if (driver->smd_cntl[data_type].ch) {
+ *rsp_smd_comply |= data_mask;
+ }
+ if ((*rsp_smd_comply & data_mask) && (*rsp_supported & data_mask))
+ driver->stm_state[data_type] = cmd;
+
+ driver->stm_state_requested[data_type] = cmd;
+ } else if (data_type == APPS_DATA) {
+ *rsp_supported |= data_mask;
+ *rsp_smd_comply |= data_mask;
+ driver->stm_state[data_type] = cmd;
+ driver->stm_state_requested[data_type] = cmd;
+ }
+}
+
+int diag_process_stm_cmd(unsigned char *buf)
+{
+ uint8_t version = *(buf + STM_CMD_VERSION_OFFSET);
+ uint8_t mask = *(buf + STM_CMD_MASK_OFFSET);
+ uint8_t cmd = *(buf + STM_CMD_DATA_OFFSET);
+ uint8_t rsp_supported = 0;
+ uint8_t rsp_smd_comply = 0;
+ int valid_command = 1;
+ int i;
+
+ /* Check if command is valid */
+ if ((version != 1) || (mask == 0) || (0 != (mask >> 4)) || (cmd != ENABLE_STM && cmd != DISABLE_STM)) {
+ valid_command = 0;
+ } else {
+ if (mask & DIAG_STM_MODEM)
+ diag_process_stm_mask(cmd, DIAG_STM_MODEM, MODEM_DATA, &rsp_supported, &rsp_smd_comply);
+
+ if (mask & DIAG_STM_LPASS)
+ diag_process_stm_mask(cmd, DIAG_STM_LPASS, LPASS_DATA, &rsp_supported, &rsp_smd_comply);
+
+ if (mask & DIAG_STM_WCNSS)
+ diag_process_stm_mask(cmd, DIAG_STM_WCNSS, WCNSS_DATA, &rsp_supported, &rsp_smd_comply);
+
+ if (mask & DIAG_STM_APPS)
+ diag_process_stm_mask(cmd, DIAG_STM_APPS, APPS_DATA, &rsp_supported, &rsp_smd_comply);
+ }
+
+ for (i = 0; i < STM_CMD_NUM_BYTES; i++)
+ driver->apps_rsp_buf[i] = *(buf + i);
+
+ driver->apps_rsp_buf[STM_RSP_VALID_INDEX] = valid_command;
+ driver->apps_rsp_buf[STM_RSP_SUPPORTED_INDEX] = rsp_supported;
+ driver->apps_rsp_buf[STM_RSP_SMD_COMPLY_INDEX] = rsp_smd_comply;
+
+ encode_rsp_and_send(STM_RSP_NUM_BYTES - 1);
+
+ return 0;
+}
+
+int diag_apps_responds(void)
+{
+ if (chk_apps_only()) {
+ if (driver->smd_data[MODEM_DATA].ch && driver->rcvd_feature_mask[MODEM_DATA]) {
+ return 0;
+ }
+ return 1;
+ }
+ return 0;
+}
+
+int diag_process_apps_pkt(unsigned char *buf, int len)
+{
+ uint16_t subsys_cmd_code;
+ int subsys_id, ssid_first, ssid_last, ssid_range;
+ int packet_type = 1, i, cmd_code;
+ unsigned char *temp = buf;
+ struct diag_master_table entry;
+ int data_type;
+ int mask_ret;
+#if defined(CONFIG_DIAG_OVER_USB)
+ unsigned char *ptr;
+#endif
+
+ /* Check if the command is a supported mask command */
+ mask_ret = diag_process_apps_masks(buf, len);
+ if (mask_ret <= 0)
+ return mask_ret;
+
+ /* Check for registered clients and forward packet to appropriate proc */
+ cmd_code = (int)(*(char *)buf);
+ temp++;
+ subsys_id = (int)(*(char *)temp);
+ temp++;
+ subsys_cmd_code = *(uint16_t *) temp;
+ temp += 2;
+ data_type = APPS_DATA;
+ /* Don't send any command other than mode reset */
+ if (chk_apps_master() && cmd_code == MODE_CMD) {
+ if (subsys_id != RESET_ID)
+ data_type = MODEM_DATA;
+ }
+
+ pr_debug("diag: %d %d %d\n", cmd_code, subsys_id, subsys_cmd_code);
+ for (i = 0; i < diag_max_reg; i++) {
+ entry = driver->table[i];
+ if (entry.process_id != NO_PROCESS && driver->rcvd_feature_mask[entry.client_id]) {
+ if (entry.cmd_code == cmd_code && entry.subsys_id == subsys_id && entry.cmd_code_lo <= subsys_cmd_code && entry.cmd_code_hi >= subsys_cmd_code) {
+ diag_send_data(entry, buf, len, data_type);
+ packet_type = 0;
+ } else if (entry.cmd_code == 255 && cmd_code == 75) {
+ if (entry.subsys_id == subsys_id && entry.cmd_code_lo <= subsys_cmd_code && entry.cmd_code_hi >= subsys_cmd_code) {
+ diag_send_data(entry, buf, len, data_type);
+ packet_type = 0;
+ }
+ } else if (entry.cmd_code == 255 && entry.subsys_id == 255) {
+ if (entry.cmd_code_lo <= cmd_code && entry.cmd_code_hi >= cmd_code) {
+ diag_send_data(entry, buf, len, data_type);
+ packet_type = 0;
+ }
+ }
+ }
+ }
+#if defined(CONFIG_DIAG_OVER_USB)
+ /* Check for the command/respond msg for the maximum packet length */
+ if ((*buf == 0x4b) && (*(buf + 1) == 0x12) && (*(uint16_t *) (buf + 2) == 0x0055)) {
+ for (i = 0; i < 4; i++)
+ *(driver->apps_rsp_buf + i) = *(buf + i);
+ *(uint32_t *) (driver->apps_rsp_buf + 4) = PKT_SIZE;
+ encode_rsp_and_send(7);
+ return 0;
+ } else if ((*buf == 0x4b) && (*(buf + 1) == 0x12) && (*(uint16_t *) (buf + 2) == 0x020E)) {
+ return diag_process_stm_cmd(buf);
+ }
+ /* Check for Apps Only & get event mask request */
+ else if (diag_apps_responds() && *buf == 0x81) {
+ driver->apps_rsp_buf[0] = 0x81;
+ driver->apps_rsp_buf[1] = 0x0;
+ *(uint16_t *) (driver->apps_rsp_buf + 2) = 0x0;
+ *(uint16_t *) (driver->apps_rsp_buf + 4) = EVENT_LAST_ID + 1;
+ for (i = 0; i < EVENT_LAST_ID / 8 + 1; i++)
+ *(unsigned char *)(driver->apps_rsp_buf + 6 + i) = 0x0;
+ encode_rsp_and_send(6 + EVENT_LAST_ID / 8);
+ return 0;
+ }
+ /* Get log ID range & Check for Apps Only */
+ else if (diag_apps_responds() && (*buf == 0x73) && *(int *)(buf + 4) == 1) {
+ driver->apps_rsp_buf[0] = 0x73;
+ *(int *)(driver->apps_rsp_buf + 4) = 0x1; /* operation ID */
+ *(int *)(driver->apps_rsp_buf + 8) = 0x0; /* success code */
+ *(int *)(driver->apps_rsp_buf + 12) = LOG_GET_ITEM_NUM(LOG_0);
+ *(int *)(driver->apps_rsp_buf + 16) = LOG_GET_ITEM_NUM(LOG_1);
+ *(int *)(driver->apps_rsp_buf + 20) = LOG_GET_ITEM_NUM(LOG_2);
+ *(int *)(driver->apps_rsp_buf + 24) = LOG_GET_ITEM_NUM(LOG_3);
+ *(int *)(driver->apps_rsp_buf + 28) = LOG_GET_ITEM_NUM(LOG_4);
+ *(int *)(driver->apps_rsp_buf + 32) = LOG_GET_ITEM_NUM(LOG_5);
+ *(int *)(driver->apps_rsp_buf + 36) = LOG_GET_ITEM_NUM(LOG_6);
+ *(int *)(driver->apps_rsp_buf + 40) = LOG_GET_ITEM_NUM(LOG_7);
+ *(int *)(driver->apps_rsp_buf + 44) = LOG_GET_ITEM_NUM(LOG_8);
+ *(int *)(driver->apps_rsp_buf + 48) = LOG_GET_ITEM_NUM(LOG_9);
+ *(int *)(driver->apps_rsp_buf + 52) = LOG_GET_ITEM_NUM(LOG_10);
+ *(int *)(driver->apps_rsp_buf + 56) = LOG_GET_ITEM_NUM(LOG_11);
+ *(int *)(driver->apps_rsp_buf + 60) = LOG_GET_ITEM_NUM(LOG_12);
+ *(int *)(driver->apps_rsp_buf + 64) = LOG_GET_ITEM_NUM(LOG_13);
+ *(int *)(driver->apps_rsp_buf + 68) = LOG_GET_ITEM_NUM(LOG_14);
+ *(int *)(driver->apps_rsp_buf + 72) = LOG_GET_ITEM_NUM(LOG_15);
+ encode_rsp_and_send(75);
+ return 0;
+ }
+ /* Respond to Get SSID Range request message */
+ else if (diag_apps_responds() && (*buf == 0x7d) && (*(buf + 1) == 0x1)) {
+ driver->apps_rsp_buf[0] = 0x7d;
+ driver->apps_rsp_buf[1] = 0x1;
+ driver->apps_rsp_buf[2] = 0x1;
+ driver->apps_rsp_buf[3] = 0x0;
+ /* -1 to exclude the OEM SSID range */
+ *(int *)(driver->apps_rsp_buf + 4) = MSG_MASK_TBL_CNT - 1;
+ *(uint16_t *) (driver->apps_rsp_buf + 8) = MSG_SSID_0;
+ *(uint16_t *) (driver->apps_rsp_buf + 10) = MSG_SSID_0_LAST;
+ *(uint16_t *) (driver->apps_rsp_buf + 12) = MSG_SSID_1;
+ *(uint16_t *) (driver->apps_rsp_buf + 14) = MSG_SSID_1_LAST;
+ *(uint16_t *) (driver->apps_rsp_buf + 16) = MSG_SSID_2;
+ *(uint16_t *) (driver->apps_rsp_buf + 18) = MSG_SSID_2_LAST;
+ *(uint16_t *) (driver->apps_rsp_buf + 20) = MSG_SSID_3;
+ *(uint16_t *) (driver->apps_rsp_buf + 22) = MSG_SSID_3_LAST;
+ *(uint16_t *) (driver->apps_rsp_buf + 24) = MSG_SSID_4;
+ *(uint16_t *) (driver->apps_rsp_buf + 26) = MSG_SSID_4_LAST;
+ *(uint16_t *) (driver->apps_rsp_buf + 28) = MSG_SSID_5;
+ *(uint16_t *) (driver->apps_rsp_buf + 30) = MSG_SSID_5_LAST;
+ *(uint16_t *) (driver->apps_rsp_buf + 32) = MSG_SSID_6;
+ *(uint16_t *) (driver->apps_rsp_buf + 34) = MSG_SSID_6_LAST;
+ *(uint16_t *) (driver->apps_rsp_buf + 36) = MSG_SSID_7;
+ *(uint16_t *) (driver->apps_rsp_buf + 38) = MSG_SSID_7_LAST;
+ *(uint16_t *) (driver->apps_rsp_buf + 40) = MSG_SSID_8;
+ *(uint16_t *) (driver->apps_rsp_buf + 42) = MSG_SSID_8_LAST;
+ *(uint16_t *) (driver->apps_rsp_buf + 44) = MSG_SSID_9;
+ *(uint16_t *) (driver->apps_rsp_buf + 46) = MSG_SSID_9_LAST;
+ *(uint16_t *) (driver->apps_rsp_buf + 48) = MSG_SSID_10;
+ *(uint16_t *) (driver->apps_rsp_buf + 50) = MSG_SSID_10_LAST;
+ *(uint16_t *) (driver->apps_rsp_buf + 52) = MSG_SSID_11;
+ *(uint16_t *) (driver->apps_rsp_buf + 54) = MSG_SSID_11_LAST;
+ *(uint16_t *) (driver->apps_rsp_buf + 56) = MSG_SSID_12;
+ *(uint16_t *) (driver->apps_rsp_buf + 58) = MSG_SSID_12_LAST;
+ *(uint16_t *) (driver->apps_rsp_buf + 60) = MSG_SSID_13;
+ *(uint16_t *) (driver->apps_rsp_buf + 62) = MSG_SSID_13_LAST;
+ *(uint16_t *) (driver->apps_rsp_buf + 64) = MSG_SSID_14;
+ *(uint16_t *) (driver->apps_rsp_buf + 66) = MSG_SSID_14_LAST;
+ *(uint16_t *) (driver->apps_rsp_buf + 68) = MSG_SSID_15;
+ *(uint16_t *) (driver->apps_rsp_buf + 70) = MSG_SSID_15_LAST;
+ *(uint16_t *) (driver->apps_rsp_buf + 72) = MSG_SSID_16;
+ *(uint16_t *) (driver->apps_rsp_buf + 74) = MSG_SSID_16_LAST;
+ *(uint16_t *) (driver->apps_rsp_buf + 76) = MSG_SSID_17;
+ *(uint16_t *) (driver->apps_rsp_buf + 78) = MSG_SSID_17_LAST;
+ *(uint16_t *) (driver->apps_rsp_buf + 80) = MSG_SSID_18;
+ *(uint16_t *) (driver->apps_rsp_buf + 82) = MSG_SSID_18_LAST;
+ *(uint16_t *) (driver->apps_rsp_buf + 84) = MSG_SSID_19;
+ *(uint16_t *) (driver->apps_rsp_buf + 86) = MSG_SSID_19_LAST;
+ *(uint16_t *) (driver->apps_rsp_buf + 88) = MSG_SSID_20;
+ *(uint16_t *) (driver->apps_rsp_buf + 90) = MSG_SSID_20_LAST;
+ *(uint16_t *) (driver->apps_rsp_buf + 92) = MSG_SSID_21;
+ *(uint16_t *) (driver->apps_rsp_buf + 94) = MSG_SSID_21_LAST;
+ *(uint16_t *) (driver->apps_rsp_buf + 96) = MSG_SSID_22;
+ *(uint16_t *) (driver->apps_rsp_buf + 98) = MSG_SSID_22_LAST;
+ encode_rsp_and_send(99);
+ return 0;
+ }
+ /* Check for Apps Only Respond to Get Subsys Build mask */
+ else if (diag_apps_responds() && (*buf == 0x7d) && (*(buf + 1) == 0x2)) {
+ ssid_first = *(uint16_t *) (buf + 2);
+ ssid_last = *(uint16_t *) (buf + 4);
+ ssid_range = 4 * (ssid_last - ssid_first + 1);
+ /* frame response */
+ driver->apps_rsp_buf[0] = 0x7d;
+ driver->apps_rsp_buf[1] = 0x2;
+ *(uint16_t *) (driver->apps_rsp_buf + 2) = ssid_first;
+ *(uint16_t *) (driver->apps_rsp_buf + 4) = ssid_last;
+ driver->apps_rsp_buf[6] = 0x1;
+ driver->apps_rsp_buf[7] = 0x0;
+ ptr = driver->apps_rsp_buf + 8;
+ /* bld time masks */
+ switch (ssid_first) {
+ case MSG_SSID_0:
+ if (ssid_range > sizeof(msg_bld_masks_0)) {
+ pr_warning("diag: truncating ssid range for ssid 0");
+ ssid_range = sizeof(msg_bld_masks_0);
+ }
+ for (i = 0; i < ssid_range; i += 4)
+ *(int *)(ptr + i) = msg_bld_masks_0[i / 4];
+ break;
+ case MSG_SSID_1:
+ if (ssid_range > sizeof(msg_bld_masks_1)) {
+ pr_warning("diag: truncating ssid range for ssid 1");
+ ssid_range = sizeof(msg_bld_masks_1);
+ }
+ for (i = 0; i < ssid_range; i += 4)
+ *(int *)(ptr + i) = msg_bld_masks_1[i / 4];
+ break;
+ case MSG_SSID_2:
+ if (ssid_range > sizeof(msg_bld_masks_2)) {
+ pr_warning("diag: truncating ssid range for ssid 2");
+ ssid_range = sizeof(msg_bld_masks_2);
+ }
+ for (i = 0; i < ssid_range; i += 4)
+ *(int *)(ptr + i) = msg_bld_masks_2[i / 4];
+ break;
+ case MSG_SSID_3:
+ if (ssid_range > sizeof(msg_bld_masks_3)) {
+ pr_warning("diag: truncating ssid range for ssid 3");
+ ssid_range = sizeof(msg_bld_masks_3);
+ }
+ for (i = 0; i < ssid_range; i += 4)
+ *(int *)(ptr + i) = msg_bld_masks_3[i / 4];
+ break;
+ case MSG_SSID_4:
+ if (ssid_range > sizeof(msg_bld_masks_4)) {
+ pr_warning("diag: truncating ssid range for ssid 4");
+ ssid_range = sizeof(msg_bld_masks_4);
+ }
+ for (i = 0; i < ssid_range; i += 4)
+ *(int *)(ptr + i) = msg_bld_masks_4[i / 4];
+ break;
+ case MSG_SSID_5:
+ if (ssid_range > sizeof(msg_bld_masks_5)) {
+ pr_warning("diag: truncating ssid range for ssid 5");
+ ssid_range = sizeof(msg_bld_masks_5);
+ }
+ for (i = 0; i < ssid_range; i += 4)
+ *(int *)(ptr + i) = msg_bld_masks_5[i / 4];
+ break;
+ case MSG_SSID_6:
+ if (ssid_range > sizeof(msg_bld_masks_6)) {
+ pr_warning("diag: truncating ssid range for ssid 6");
+ ssid_range = sizeof(msg_bld_masks_6);
+ }
+ for (i = 0; i < ssid_range; i += 4)
+ *(int *)(ptr + i) = msg_bld_masks_6[i / 4];
+ break;
+ case MSG_SSID_7:
+ if (ssid_range > sizeof(msg_bld_masks_7)) {
+ pr_warning("diag: truncating ssid range for ssid 7");
+ ssid_range = sizeof(msg_bld_masks_7);
+ }
+ for (i = 0; i < ssid_range; i += 4)
+ *(int *)(ptr + i) = msg_bld_masks_7[i / 4];
+ break;
+ case MSG_SSID_8:
+ if (ssid_range > sizeof(msg_bld_masks_8)) {
+ pr_warning("diag: truncating ssid range for ssid 8");
+ ssid_range = sizeof(msg_bld_masks_8);
+ }
+ for (i = 0; i < ssid_range; i += 4)
+ *(int *)(ptr + i) = msg_bld_masks_8[i / 4];
+ break;
+ case MSG_SSID_9:
+ if (ssid_range > sizeof(msg_bld_masks_9)) {
+ pr_warning("diag: truncating ssid range for ssid 9");
+ ssid_range = sizeof(msg_bld_masks_9);
+ }
+ for (i = 0; i < ssid_range; i += 4)
+ *(int *)(ptr + i) = msg_bld_masks_9[i / 4];
+ break;
+ case MSG_SSID_10:
+ if (ssid_range > sizeof(msg_bld_masks_10)) {
+ pr_warning("diag: truncating ssid range for ssid 10");
+ ssid_range = sizeof(msg_bld_masks_10);
+ }
+ for (i = 0; i < ssid_range; i += 4)
+ *(int *)(ptr + i) = msg_bld_masks_10[i / 4];
+ break;
+ case MSG_SSID_11:
+ if (ssid_range > sizeof(msg_bld_masks_11)) {
+ pr_warning("diag: truncating ssid range for ssid 11");
+ ssid_range = sizeof(msg_bld_masks_11);
+ }
+ for (i = 0; i < ssid_range; i += 4)
+ *(int *)(ptr + i) = msg_bld_masks_11[i / 4];
+ break;
+ case MSG_SSID_12:
+ if (ssid_range > sizeof(msg_bld_masks_12)) {
+ pr_warning("diag: truncating ssid range for ssid 12");
+ ssid_range = sizeof(msg_bld_masks_12);
+ }
+ for (i = 0; i < ssid_range; i += 4)
+ *(int *)(ptr + i) = msg_bld_masks_12[i / 4];
+ break;
+ case MSG_SSID_13:
+ if (ssid_range > sizeof(msg_bld_masks_13)) {
+ pr_warning("diag: truncating ssid range for ssid 13");
+ ssid_range = sizeof(msg_bld_masks_13);
+ }
+ for (i = 0; i < ssid_range; i += 4)
+ *(int *)(ptr + i) = msg_bld_masks_13[i / 4];
+ break;
+ case MSG_SSID_14:
+ if (ssid_range > sizeof(msg_bld_masks_14)) {
+ pr_warning("diag: truncating ssid range for ssid 14");
+ ssid_range = sizeof(msg_bld_masks_14);
+ }
+ for (i = 0; i < ssid_range; i += 4)
+ *(int *)(ptr + i) = msg_bld_masks_14[i / 4];
+ break;
+ case MSG_SSID_15:
+ if (ssid_range > sizeof(msg_bld_masks_15)) {
+ pr_warning("diag: truncating ssid range for ssid 15");
+ ssid_range = sizeof(msg_bld_masks_15);
+ }
+ for (i = 0; i < ssid_range; i += 4)
+ *(int *)(ptr + i) = msg_bld_masks_15[i / 4];
+ break;
+ case MSG_SSID_16:
+ if (ssid_range > sizeof(msg_bld_masks_16)) {
+ pr_warning("diag: truncating ssid range for ssid 16");
+ ssid_range = sizeof(msg_bld_masks_16);
+ }
+ for (i = 0; i < ssid_range; i += 4)
+ *(int *)(ptr + i) = msg_bld_masks_16[i / 4];
+ break;
+ case MSG_SSID_17:
+ if (ssid_range > sizeof(msg_bld_masks_17)) {
+ pr_warning("diag: truncating ssid range for ssid 17");
+ ssid_range = sizeof(msg_bld_masks_17);
+ }
+ for (i = 0; i < ssid_range; i += 4)
+ *(int *)(ptr + i) = msg_bld_masks_17[i / 4];
+ break;
+ case MSG_SSID_18:
+ if (ssid_range > sizeof(msg_bld_masks_18)) {
+ pr_warning("diag: truncating ssid range for ssid 18");
+ ssid_range = sizeof(msg_bld_masks_18);
+ }
+ for (i = 0; i < ssid_range; i += 4)
+ *(int *)(ptr + i) = msg_bld_masks_18[i / 4];
+ break;
+ case MSG_SSID_19:
+ if (ssid_range > sizeof(msg_bld_masks_19)) {
+ pr_warning("diag: truncating ssid range for ssid 19");
+ ssid_range = sizeof(msg_bld_masks_19);
+ }
+ for (i = 0; i < ssid_range; i += 4)
+ *(int *)(ptr + i) = msg_bld_masks_19[i / 4];
+ break;
+ case MSG_SSID_20:
+ if (ssid_range > sizeof(msg_bld_masks_20)) {
+ pr_warning("diag: truncating ssid range for ssid 20");
+ ssid_range = sizeof(msg_bld_masks_20);
+ }
+ for (i = 0; i < ssid_range; i += 4)
+ *(int *)(ptr + i) = msg_bld_masks_20[i / 4];
+ break;
+ case MSG_SSID_21:
+ if (ssid_range > sizeof(msg_bld_masks_21)) {
+ pr_warning("diag: truncating ssid range for ssid 21");
+ ssid_range = sizeof(msg_bld_masks_21);
+ }
+ for (i = 0; i < ssid_range; i += 4)
+ *(int *)(ptr + i) = msg_bld_masks_21[i / 4];
+ break;
+ case MSG_SSID_22:
+ if (ssid_range > sizeof(msg_bld_masks_22)) {
+ pr_warning("diag: truncating ssid range for ssid 22");
+ ssid_range = sizeof(msg_bld_masks_22);
+ }
+ for (i = 0; i < ssid_range; i += 4)
+ *(int *)(ptr + i) = msg_bld_masks_22[i / 4];
+ break;
+ }
+ encode_rsp_and_send(8 + ssid_range - 1);
+ return 0;
+ }
+ /* Check for download command */
+ else if ((cpu_is_msm8x60() || chk_apps_master()) && (*buf == 0x3A)) {
+ /* send response back */
+ driver->apps_rsp_buf[0] = *buf;
+ encode_rsp_and_send(0);
+ msleep(5000);
+ /* call download API */
+ msm_set_restart_mode(RESTART_DLOAD);
+ printk(KERN_CRIT "diag: download mode set, Rebooting SoC..\n");
+ kernel_restart(NULL);
+ /* Not reached; indicates the command isn't sent to the modem */
+ return 0;
+ }
+ /* Check for polling for Apps only DIAG */
+ else if ((*buf == 0x4b) && (*(buf + 1) == 0x32) && (*(buf + 2) == 0x03)) {
+ /* If no one has registered for polling */
+ if (chk_polling_response()) {
+ /* Respond to polling for Apps only DIAG */
+ for (i = 0; i < 3; i++)
+ driver->apps_rsp_buf[i] = *(buf + i);
+ for (i = 0; i < 13; i++)
+ driver->apps_rsp_buf[i + 3] = 0;
+
+ encode_rsp_and_send(15);
+ return 0;
+ }
+ }
+ /* Return the Delayed Response Wrap Status */
+ else if ((*buf == 0x4b) && (*(buf + 1) == 0x32) && (*(buf + 2) == 0x04) && (*(buf + 3) == 0x0)) {
+ memcpy(driver->apps_rsp_buf, buf, 4);
+ driver->apps_rsp_buf[4] = wrap_enabled;
+ encode_rsp_and_send(4);
+ return 0;
+ }
+ /* Wrap the Delayed Rsp ID */
+ else if ((*buf == 0x4b) && (*(buf + 1) == 0x32) && (*(buf + 2) == 0x05) && (*(buf + 3) == 0x0)) {
+ wrap_enabled = true;
+ memcpy(driver->apps_rsp_buf, buf, 4);
+ driver->apps_rsp_buf[4] = wrap_count;
+ encode_rsp_and_send(5);
+ return 0;
+ }
+ /* Check for ID for NO MODEM present */
+ else if (chk_polling_response()) {
+ /* respond to 0x0 command */
+ if (*buf == 0x00) {
+ for (i = 0; i < 55; i++)
+ driver->apps_rsp_buf[i] = 0;
+
+ encode_rsp_and_send(54);
+ return 0;
+ }
+ /* respond to 0x7c command */
+ else if (*buf == 0x7c) {
+ driver->apps_rsp_buf[0] = 0x7c;
+ for (i = 1; i < 8; i++)
+ driver->apps_rsp_buf[i] = 0;
+ /* Tools ID for APQ 8060 */
+ *(int *)(driver->apps_rsp_buf + 8) = chk_config_get_id();
+ *(unsigned char *)(driver->apps_rsp_buf + 12) = '\0';
+ *(unsigned char *)(driver->apps_rsp_buf + 13) = '\0';
+ encode_rsp_and_send(13);
+ return 0;
+ }
+ }
+#endif
+ return packet_type;
+}
+
+#ifdef CONFIG_DIAG_OVER_USB
+void diag_send_error_rsp(int index)
+{
+ int i;
+
+ /* -1 to accommodate the first byte 0x13 */
+ if (index > APPS_BUF_SIZE - 1) {
+ pr_err("diag: cannot send err rsp, huge length: %d\n", index);
+ return;
+ }
+
+ driver->apps_rsp_buf[0] = 0x13; /* error code 13 */
+ for (i = 0; i < index; i++)
+ driver->apps_rsp_buf[i + 1] = *(driver->hdlc_buf + i);
+ encode_rsp_and_send(index - 3);
+}
+#else
+static inline void diag_send_error_rsp(int index)
+{
+}
+#endif
+
+void diag_process_hdlc(void *data, unsigned len)
+{
+ struct diag_hdlc_decode_type hdlc;
+ int ret, type = 0, crc_chk = 0;
+
+ mutex_lock(&driver->diag_hdlc_mutex);
+
+ pr_debug("diag: HDLC decode fn, len of data %u\n", len);
+ hdlc.dest_ptr = driver->hdlc_buf;
+ hdlc.dest_size = USB_MAX_OUT_BUF;
+ hdlc.src_ptr = data;
+ hdlc.src_size = len;
+ hdlc.src_idx = 0;
+ hdlc.dest_idx = 0;
+ hdlc.escaping = 0;
+
+ ret = diag_hdlc_decode(&hdlc);
+ if (ret) {
+ crc_chk = crc_check(hdlc.dest_ptr, hdlc.dest_idx);
+ if (crc_chk) {
+ /* CRC check failed. */
+ pr_err_ratelimited("diag: In %s, bad CRC. Dropping packet\n", __func__);
+ mutex_unlock(&driver->diag_hdlc_mutex);
+ return;
+ }
+ }
+
+ /*
+ * If the message is 3 bytes or less in length then the message is
+ * too short. A message will need 4 bytes minimum, since there are
+ * 2 bytes for the CRC and 1 byte for the ending 0x7e for the hdlc
+ * encoding
+ */
+ if (hdlc.dest_idx < 4) {
+ pr_err_ratelimited("diag: In %s, message is too short, len: %u, dest len: %d\n", __func__, len, hdlc.dest_idx);
+ mutex_unlock(&driver->diag_hdlc_mutex);
+ return;
+ }
+
+ if (ret) {
+ type = diag_process_apps_pkt(driver->hdlc_buf, hdlc.dest_idx - 3);
+ if (type < 0) {
+ mutex_unlock(&driver->diag_hdlc_mutex);
+ return;
+ }
+ } else if (driver->debug_flag) {
+ pr_err("diag: In %s, partial packet received, dropping packet, len: %u\n", __func__, len);
+ print_hex_dump(KERN_DEBUG, "Dropped Packet Data: ", 16, 1, DUMP_PREFIX_ADDRESS, data, len, 1);
+ driver->debug_flag = 0;
+ }
+ /* send error responses from APPS for Central Routing */
+ if (type == 1 && chk_apps_only()) {
+ diag_send_error_rsp(hdlc.dest_idx);
+ type = 0;
+ }
+ /* implies this packet is NOT meant for apps */
+ if (!(driver->smd_data[MODEM_DATA].ch) && type == 1) {
+ if (chk_apps_only()) {
+ diag_send_error_rsp(hdlc.dest_idx);
+ } else { /* APQ 8060, Let Q6 respond */
+ if (driver->smd_data[LPASS_DATA].ch) {
+ mutex_lock(&driver->smd_data[LPASS_DATA].smd_ch_mutex);
+ smd_write(driver->smd_data[LPASS_DATA].ch, driver->hdlc_buf, hdlc.dest_idx - 3);
+ mutex_unlock(&driver->smd_data[LPASS_DATA].smd_ch_mutex);
+ }
+ }
+ type = 0;
+ }
+#ifdef DIAG_DEBUG
+ {
+ int i;
+
+ pr_debug("diag: hdlc.dest_idx = %d\n", hdlc.dest_idx);
+ for (i = 0; i < hdlc.dest_idx; i++)
+ printk(KERN_DEBUG "\t%x", *(((unsigned char *)driver->hdlc_buf) + i));
+ }
+#endif /* DIAG_DEBUG */
+ /* ignore 2 bytes for CRC, one for 7E and send */
+ if ((driver->smd_data[MODEM_DATA].ch) && (ret) && (type) && (hdlc.dest_idx > 3)) {
+ APPEND_DEBUG('g');
+ mutex_lock(&driver->smd_data[MODEM_DATA].smd_ch_mutex);
+ smd_write(driver->smd_data[MODEM_DATA].ch, driver->hdlc_buf, hdlc.dest_idx - 3);
+ mutex_unlock(&driver->smd_data[MODEM_DATA].smd_ch_mutex);
+ APPEND_DEBUG('h');
+#ifdef DIAG_DEBUG
+ printk(KERN_INFO "writing data to SMD, pkt length %d\n", len);
+ print_hex_dump(KERN_DEBUG, "Written Packet Data to SMD: ", 16, 1, DUMP_PREFIX_ADDRESS, data, len, 1);
+#endif /* DIAG_DEBUG */
+ }
+ mutex_unlock(&driver->diag_hdlc_mutex);
+}
+
+void diag_reset_smd_data(int queue)
+{
+ int i;
+
+ for (i = 0; i < NUM_SMD_DATA_CHANNELS; i++) {
+ driver->smd_data[i].in_busy_1 = 0;
+ driver->smd_data[i].in_busy_2 = 0;
+ if (queue)
+ /* Poll SMD data channels to check for data */
+ queue_work(driver->smd_data[i].wq, &(driver->smd_data[i].diag_read_smd_work));
+ }
+
+ if (driver->supports_separate_cmdrsp) {
+ for (i = 0; i < NUM_SMD_CMD_CHANNELS; i++) {
+ driver->smd_cmd[i].in_busy_1 = 0;
+ driver->smd_cmd[i].in_busy_2 = 0;
+ if (queue)
+ /* Poll SMD command channels to check for data */
+ queue_work(driver->diag_wq, &(driver->smd_cmd[i].diag_read_smd_work));
+ }
+ }
+}
+
+#ifdef CONFIG_DIAG_OVER_USB
+/* 2+1 for modem ; 2 for LPASS ; 1 for WCNSS */
+#define N_LEGACY_WRITE (driver->poolsize + 6)
+/* Additionally support number of command data and dci channels */
+#define N_LEGACY_WRITE_CMD ((N_LEGACY_WRITE) + 4)
+#define N_LEGACY_READ 1
+
+static void diag_usb_connect_work_fn(struct work_struct *w)
+{
+ diagfwd_connect();
+}
+
+static void diag_usb_disconnect_work_fn(struct work_struct *w)
+{
+ diagfwd_disconnect();
+}
+
+int diagfwd_connect(void)
+{
+ int err;
+ int i;
+
+ printk(KERN_DEBUG "diag: USB connected\n");
+ err = usb_diag_alloc_req(driver->legacy_ch, (driver->supports_separate_cmdrsp ? N_LEGACY_WRITE_CMD : N_LEGACY_WRITE), N_LEGACY_READ);
+ if (err)
+ printk(KERN_ERR "diag: unable to alloc USB req on legacy ch\n");
+
+ driver->usb_connected = 1;
+ diag_reset_smd_data(RESET_AND_QUEUE);
+ for (i = 0; i < NUM_SMD_DATA_CHANNELS; i++) {
+ /* Poll SMD CNTL channels to check for data */
+ diag_smd_notify(&(driver->smd_cntl[i]), SMD_EVENT_DATA);
+ }
+ queue_work(driver->diag_real_time_wq, &driver->diag_real_time_work);
+
+ /* Poll USB channel to check for data */
+ queue_work(driver->diag_wq, &(driver->diag_read_work));
+#ifdef CONFIG_DIAG_SDIO_PIPE
+ if (machine_is_msm8x60_fusion() || machine_is_msm8x60_fusn_ffa()) {
+ if (driver->mdm_ch && !IS_ERR(driver->mdm_ch))
+ diagfwd_connect_sdio();
+ else
+ printk(KERN_INFO "diag: No USB MDM ch\n");
+ }
+#endif
+ return 0;
+}
+
+int diagfwd_disconnect(void)
+{
+ int i;
+
+ printk(KERN_DEBUG "diag: USB disconnected\n");
+ driver->usb_connected = 0;
+ driver->debug_flag = 1;
+ usb_diag_free_req(driver->legacy_ch);
+ if (driver->logging_mode == USB_MODE) {
+ for (i = 0; i < NUM_SMD_DATA_CHANNELS; i++) {
+ driver->smd_data[i].in_busy_1 = 1;
+ driver->smd_data[i].in_busy_2 = 1;
+ }
+
+ if (driver->supports_separate_cmdrsp) {
+ for (i = 0; i < NUM_SMD_CMD_CHANNELS; i++) {
+ driver->smd_cmd[i].in_busy_1 = 1;
+ driver->smd_cmd[i].in_busy_2 = 1;
+ }
+ }
+ }
+ queue_work(driver->diag_real_time_wq, &driver->diag_real_time_work);
+#ifdef CONFIG_DIAG_SDIO_PIPE
+ if (machine_is_msm8x60_fusion() || machine_is_msm8x60_fusn_ffa())
+ if (driver->mdm_ch && !IS_ERR(driver->mdm_ch))
+ diagfwd_disconnect_sdio();
+#endif
+ /* TBD - notify and flow control SMD */
+ return 0;
+}
+
+static int diagfwd_check_buf_match(int num_channels, struct diag_smd_info *data, unsigned char *buf)
+{
+ int i;
+ int found_it = 0;
+
+ for (i = 0; i < num_channels; i++) {
+ if (buf == (void *)data[i].buf_in_1) {
+ data[i].in_busy_1 = 0;
+ found_it = 1;
+ break;
+ } else if (buf == (void *)data[i].buf_in_2) {
+ data[i].in_busy_2 = 0;
+ found_it = 1;
+ break;
+ }
+ }
+
+ if (found_it) {
+ if (data[i].type == SMD_DATA_TYPE)
+ queue_work(data[i].wq, &(data[i].diag_read_smd_work));
+ else
+ queue_work(driver->diag_wq, &(data[i].diag_read_smd_work));
+ }
+
+ return found_it;
+}
+
+int diagfwd_write_complete(struct diag_request *diag_write_ptr)
+{
+ unsigned char *buf = diag_write_ptr->buf;
+ int found_it = 0;
+
+ /* Determine if the write complete is for data from modem/apps/q6 */
+ found_it = diagfwd_check_buf_match(NUM_SMD_DATA_CHANNELS, driver->smd_data, buf);
+
+ if (!found_it && driver->supports_separate_cmdrsp)
+ found_it = diagfwd_check_buf_match(NUM_SMD_CMD_CHANNELS, driver->smd_cmd, buf);
+
+#ifdef CONFIG_DIAG_SDIO_PIPE
+ if (!found_it) {
+ if (buf == (void *)driver->buf_in_sdio) {
+ if (machine_is_msm8x60_fusion() || machine_is_msm8x60_fusn_ffa())
+ diagfwd_write_complete_sdio();
+ else
+ pr_err("diag: Incorrect buffer pointer while WRITE\n");
+ found_it = 1;
+ }
+ }
+#endif
+ if (!found_it) {
+ if (driver->logging_mode != USB_MODE)
+ pr_debug("diag: freeing buffer when not in usb mode\n");
+
+ diagmem_free(driver, (unsigned char *)buf, POOL_TYPE_HDLC);
+ diagmem_free(driver, (unsigned char *)diag_write_ptr, POOL_TYPE_WRITE_STRUCT);
+ }
+ return 0;
+}
+
+int diagfwd_read_complete(struct diag_request *diag_read_ptr)
+{
+ int status = diag_read_ptr->status;
+ unsigned char *buf = diag_read_ptr->buf;
+
+ /* Determine if the read complete is for data on legacy/mdm ch */
+ if (buf == (void *)driver->usb_buf_out) {
+ driver->read_len_legacy = diag_read_ptr->actual;
+ APPEND_DEBUG('s');
+#ifdef DIAG_DEBUG
+ printk(KERN_INFO "read data from USB, pkt length %d\n", diag_read_ptr->actual);
+ print_hex_dump(KERN_DEBUG, "Read Packet Data from USB: ", 16, 1, DUMP_PREFIX_ADDRESS, diag_read_ptr->buf, diag_read_ptr->actual, 1);
+#endif /* DIAG_DEBUG */
+ if (driver->logging_mode == USB_MODE) {
+ if (status != -ECONNRESET && status != -ESHUTDOWN)
+ queue_work(driver->diag_wq, &(driver->diag_proc_hdlc_work));
+ else
+ queue_work(driver->diag_wq, &(driver->diag_read_work));
+ }
+ }
+#ifdef CONFIG_DIAG_SDIO_PIPE
+ else if (buf == (void *)driver->usb_buf_mdm_out) {
+ if (machine_is_msm8x60_fusion() || machine_is_msm8x60_fusn_ffa()) {
+ driver->read_len_mdm = diag_read_ptr->actual;
+ diagfwd_read_complete_sdio();
+ } else
+ pr_err("diag: Incorrect buffer pointer while READ\n");
+ }
+#endif
+ else
+ printk(KERN_ERR "diag: Unknown buffer ptr from USB\n");
+
+ return 0;
+}
+
+void diag_read_work_fn(struct work_struct *work)
+{
+ APPEND_DEBUG('d');
+ driver->usb_read_ptr->buf = driver->usb_buf_out;
+ driver->usb_read_ptr->length = USB_MAX_OUT_BUF;
+ usb_diag_read(driver->legacy_ch, driver->usb_read_ptr);
+ APPEND_DEBUG('e');
+}
+
+void diag_process_hdlc_fn(struct work_struct *work)
+{
+ APPEND_DEBUG('D');
+ diag_process_hdlc(driver->usb_buf_out, driver->read_len_legacy);
+ diag_read_work_fn(work);
+ APPEND_DEBUG('E');
+}
+
+void diag_usb_legacy_notifier(void *priv, unsigned event, struct diag_request *d_req)
+{
+ switch (event) {
+ case USB_DIAG_CONNECT:
+ queue_work(driver->diag_wq, &driver->diag_usb_connect_work);
+ break;
+ case USB_DIAG_DISCONNECT:
+ queue_work(driver->diag_wq, &driver->diag_usb_disconnect_work);
+ break;
+ case USB_DIAG_READ_DONE:
+ diagfwd_read_complete(d_req);
+ break;
+ case USB_DIAG_WRITE_DONE:
+ diagfwd_write_complete(d_req);
+ break;
+ default:
+ printk(KERN_ERR "Unknown event from USB diag\n");
+ break;
+ }
+}
+
+#endif /* DIAG OVER USB */
+
+void diag_smd_notify(void *ctxt, unsigned event)
+{
+ struct diag_smd_info *smd_info = (struct diag_smd_info *)ctxt;
+ if (!smd_info)
+ return;
+
+ if (event == SMD_EVENT_CLOSE) {
+ smd_info->ch = 0;
+ wake_up(&driver->smd_wait_q);
+ if (smd_info->type == SMD_DATA_TYPE) {
+ smd_info->notify_context = event;
+ queue_work(driver->diag_cntl_wq, &(smd_info->diag_notify_update_smd_work));
+ } else if (smd_info->type == SMD_DCI_TYPE) {
+ /* Notify the clients of the close */
+ diag_dci_notify_client(smd_info->peripheral_mask, DIAG_STATUS_CLOSED);
+ } else if (smd_info->type == SMD_CNTL_TYPE) {
+ diag_cntl_stm_notify(smd_info, CLEAR_PERIPHERAL_STM_STATE);
+ }
+ return;
+ } else if (event == SMD_EVENT_OPEN) {
+ if (smd_info->ch_save)
+ smd_info->ch = smd_info->ch_save;
+
+ if (smd_info->type == SMD_CNTL_TYPE) {
+ smd_info->notify_context = event;
+ queue_work(driver->diag_cntl_wq, &(smd_info->diag_notify_update_smd_work));
+ } else if (smd_info->type == SMD_DCI_TYPE) {
+ smd_info->notify_context = event;
+ queue_work(driver->diag_dci_wq, &(smd_info->diag_notify_update_smd_work));
+ /* Notify the clients of the open */
+ diag_dci_notify_client(smd_info->peripheral_mask, DIAG_STATUS_OPEN);
+ }
+ } else if (event == SMD_EVENT_DATA && !driver->real_time_mode && smd_info->type == SMD_DATA_TYPE) {
+ process_lock_on_notify(&smd_info->nrt_lock);
+ }
+
+ wake_up(&driver->smd_wait_q);
+
+ if (smd_info->type == SMD_DCI_TYPE || smd_info->type == SMD_DCI_CMD_TYPE) {
+ if (event == SMD_EVENT_DATA)
+ diag_dci_try_activate_wakeup_source(smd_info->ch);
+ queue_work(driver->diag_dci_wq, &(smd_info->diag_read_smd_work));
+ } else if (smd_info->type == SMD_DATA_TYPE) {
+ queue_work(smd_info->wq, &(smd_info->diag_read_smd_work));
+ } else {
+ queue_work(driver->diag_wq, &(smd_info->diag_read_smd_work));
+ }
+}
+
+static int diag_smd_probe(struct platform_device *pdev)
+{
+ int r = 0;
+ int index = -1;
+ const char *channel_name = NULL;
+
+ if (pdev->id == SMD_APPS_MODEM) {
+ index = MODEM_DATA;
+ channel_name = "DIAG";
+ }
+#if defined(CONFIG_MSM_N_WAY_SMD)
+ else if (pdev->id == SMD_APPS_QDSP) {
+ index = LPASS_DATA;
+ channel_name = "DIAG";
+ }
+#endif
+ else if (pdev->id == SMD_APPS_WCNSS) {
+ index = WCNSS_DATA;
+ channel_name = "APPS_RIVA_DATA";
+ }
+
+ if (index != -1) {
+ r = smd_named_open_on_edge(channel_name, pdev->id, &driver->smd_data[index].ch, &driver->smd_data[index], diag_smd_notify);
+ driver->smd_data[index].ch_save = driver->smd_data[index].ch;
+ }
+
+ pm_runtime_set_active(&pdev->dev);
+ pm_runtime_enable(&pdev->dev);
+ pr_debug("diag: In %s, open SMD port, Id = %d, r = %d\n", __func__, pdev->id, r);
+
+ return 0;
+}
+
+static int diag_smd_cmd_probe(struct platform_device *pdev)
+{
+ int r = 0;
+ int index = -1;
+ const char *channel_name = NULL;
+
+ if (!driver->supports_separate_cmdrsp)
+ return 0;
+
+ if (pdev->id == SMD_APPS_MODEM) {
+ index = MODEM_DATA;
+ channel_name = "DIAG_CMD";
+ }
+
+ if (index != -1) {
+ r = smd_named_open_on_edge(channel_name, pdev->id, &driver->smd_cmd[index].ch, &driver->smd_cmd[index], diag_smd_notify);
+ driver->smd_cmd[index].ch_save = driver->smd_cmd[index].ch;
+ }
+
+ pr_debug("diag: In %s, open SMD CMD port, Id = %d, r = %d\n", __func__, pdev->id, r);
+
+ return 0;
+}
+
+static int diag_smd_runtime_suspend(struct device *dev)
+{
+ dev_dbg(dev, "pm_runtime: suspending...\n");
+ return 0;
+}
+
+static int diag_smd_runtime_resume(struct device *dev)
+{
+ dev_dbg(dev, "pm_runtime: resuming...\n");
+ return 0;
+}
+
+static const struct dev_pm_ops diag_smd_dev_pm_ops = {
+ .runtime_suspend = diag_smd_runtime_suspend,
+ .runtime_resume = diag_smd_runtime_resume,
+};
+
+static struct platform_driver msm_smd_ch1_driver = {
+ .probe = diag_smd_probe,
+ .driver = {
+ .name = "DIAG",
+ .owner = THIS_MODULE,
+ .pm = &diag_smd_dev_pm_ops,
+ },
+};
+
+static struct platform_driver diag_smd_lite_driver = {
+ .probe = diag_smd_probe,
+ .driver = {
+ .name = "APPS_RIVA_DATA",
+ .owner = THIS_MODULE,
+ .pm = &diag_smd_dev_pm_ops,
+ },
+};
+
+static struct platform_driver smd_lite_data_cmd_drivers[NUM_SMD_CMD_CHANNELS] = {
+ {
+ /* Modem data */
+ .probe = diag_smd_cmd_probe,
+ .driver = {
+ .name = "DIAG_CMD",
+ .owner = THIS_MODULE,
+ .pm = &diag_smd_dev_pm_ops,
+ },
+ }
+};
+
+int device_supports_separate_cmdrsp(void)
+{
+ return driver->use_device_tree;
+}
+
+void diag_smd_destructor(struct diag_smd_info *smd_info)
+{
+ if (smd_info->type == SMD_DATA_TYPE) {
+ wake_lock_destroy(&smd_info->nrt_lock.read_lock);
+ destroy_workqueue(smd_info->wq);
+ }
+
+ if (smd_info->ch)
+ smd_close(smd_info->ch);
+
+ smd_info->ch = 0;
+ smd_info->ch_save = 0;
+ kfree(smd_info->buf_in_1);
+ kfree(smd_info->buf_in_2);
+ kfree(smd_info->write_ptr_1);
+ kfree(smd_info->write_ptr_2);
+ kfree(smd_info->buf_in_1_raw);
+ kfree(smd_info->buf_in_2_raw);
+}
+
+int diag_smd_constructor(struct diag_smd_info *smd_info, int peripheral, int type)
+{
+ smd_info->peripheral = peripheral;
+ smd_info->type = type;
+ smd_info->encode_hdlc = 0;
+ mutex_init(&smd_info->smd_ch_mutex);
+
+ switch (peripheral) {
+ case MODEM_DATA:
+ smd_info->peripheral_mask = DIAG_CON_MPSS;
+ break;
+ case LPASS_DATA:
+ smd_info->peripheral_mask = DIAG_CON_LPASS;
+ break;
+ case WCNSS_DATA:
+ smd_info->peripheral_mask = DIAG_CON_WCNSS;
+ break;
+ default:
+ pr_err("diag: In %s, unknown peripheral, peripheral: %d\n", __func__, peripheral);
+ goto err;
+ }
+
+ smd_info->ch = 0;
+ smd_info->ch_save = 0;
+
+ if (smd_info->buf_in_1 == NULL) {
+ smd_info->buf_in_1 = kzalloc(IN_BUF_SIZE, GFP_KERNEL);
+ if (smd_info->buf_in_1 == NULL)
+ goto err;
+ smd_info->buf_in_1_size = IN_BUF_SIZE;
+ kmemleak_not_leak(smd_info->buf_in_1);
+ }
+
+ if (smd_info->write_ptr_1 == NULL) {
+ smd_info->write_ptr_1 = kzalloc(sizeof(struct diag_request), GFP_KERNEL);
+ if (smd_info->write_ptr_1 == NULL)
+ goto err;
+ kmemleak_not_leak(smd_info->write_ptr_1);
+ }
+
+ /* The smd data type needs two buffers */
+ if (smd_info->type == SMD_DATA_TYPE) {
+ if (smd_info->buf_in_2 == NULL) {
+ smd_info->buf_in_2 = kzalloc(IN_BUF_SIZE, GFP_KERNEL);
+ if (smd_info->buf_in_2 == NULL)
+ goto err;
+ smd_info->buf_in_2_size = IN_BUF_SIZE;
+ kmemleak_not_leak(smd_info->buf_in_2);
+ }
+ if (smd_info->write_ptr_2 == NULL) {
+ smd_info->write_ptr_2 = kzalloc(sizeof(struct diag_request), GFP_KERNEL);
+ if (smd_info->write_ptr_2 == NULL)
+ goto err;
+ kmemleak_not_leak(smd_info->write_ptr_2);
+ }
+ if (driver->supports_apps_hdlc_encoding) {
+ /* In support of hdlc encoding */
+ if (smd_info->buf_in_1_raw == NULL) {
+ smd_info->buf_in_1_raw = kzalloc(IN_BUF_SIZE, GFP_KERNEL);
+ if (smd_info->buf_in_1_raw == NULL)
+ goto err;
+ smd_info->buf_in_1_raw_size = IN_BUF_SIZE;
+ kmemleak_not_leak(smd_info->buf_in_1_raw);
+ }
+ if (smd_info->buf_in_2_raw == NULL) {
+ smd_info->buf_in_2_raw = kzalloc(IN_BUF_SIZE, GFP_KERNEL);
+ if (smd_info->buf_in_2_raw == NULL)
+ goto err;
+ smd_info->buf_in_2_raw_size = IN_BUF_SIZE;
+ kmemleak_not_leak(smd_info->buf_in_2_raw);
+ }
+ }
+ }
+
+ if (smd_info->type == SMD_CMD_TYPE && driver->supports_apps_hdlc_encoding) {
+ /* In support of hdlc encoding */
+ if (smd_info->buf_in_1_raw == NULL) {
+ smd_info->buf_in_1_raw = kzalloc(IN_BUF_SIZE, GFP_KERNEL);
+ if (smd_info->buf_in_1_raw == NULL)
+ goto err;
+ smd_info->buf_in_1_raw_size = IN_BUF_SIZE;
+ kmemleak_not_leak(smd_info->buf_in_1_raw);
+ }
+ }
+
+ /* The smd data type needs separate work queues for reads */
+ if (type == SMD_DATA_TYPE) {
+ switch (peripheral) {
+ case MODEM_DATA:
+ smd_info->wq = create_singlethread_workqueue("diag_modem_data_read_wq");
+ break;
+ case LPASS_DATA:
+ smd_info->wq = create_singlethread_workqueue("diag_lpass_data_read_wq");
+ break;
+ case WCNSS_DATA:
+ smd_info->wq = create_singlethread_workqueue("diag_wcnss_data_read_wq");
+ break;
+ default:
+ smd_info->wq = NULL;
+ break;
+ }
+ } else {
+ smd_info->wq = NULL;
+ }
+
+ INIT_WORK(&(smd_info->diag_read_smd_work), diag_read_smd_work_fn);
+
+ /*
+ * The update function assigned to the diag_notify_update_smd_work
+ * work_struct is meant to be used for updating that is not to
+ * be done in the context of the smd notify function. The
+ * notify_context variable can be used for passing additional
+ * information to the update function.
+ */
+ smd_info->notify_context = 0;
+ smd_info->general_context = 0;
+ switch (type) {
+ case SMD_DATA_TYPE:
+ case SMD_CMD_TYPE:
+ INIT_WORK(&(smd_info->diag_notify_update_smd_work), diag_clean_reg_fn);
+ INIT_WORK(&(smd_info->diag_general_smd_work), diag_cntl_smd_work_fn);
+ break;
+ case SMD_CNTL_TYPE:
+ INIT_WORK(&(smd_info->diag_notify_update_smd_work), diag_mask_update_fn);
+ INIT_WORK(&(smd_info->diag_general_smd_work), diag_cntl_smd_work_fn);
+ break;
+ case SMD_DCI_TYPE:
+ case SMD_DCI_CMD_TYPE:
+ INIT_WORK(&(smd_info->diag_notify_update_smd_work), diag_update_smd_dci_work_fn);
+ INIT_WORK(&(smd_info->diag_general_smd_work), diag_cntl_smd_work_fn);
+ break;
+ default:
+ pr_err("diag: In %s, unknown type, type: %d\n", __func__, type);
+ goto err;
+ }
+
+ /*
+ * Set function ptr for function to call to process the data that
+ * was just read from the smd channel
+ */
+ switch (type) {
+ case SMD_DATA_TYPE:
+ case SMD_CMD_TYPE:
+ smd_info->process_smd_read_data = diag_process_smd_read_data;
+ break;
+ case SMD_CNTL_TYPE:
+ smd_info->process_smd_read_data = diag_process_smd_cntl_read_data;
+ break;
+ case SMD_DCI_TYPE:
+ case SMD_DCI_CMD_TYPE:
+ smd_info->process_smd_read_data = diag_process_smd_dci_read_data;
+ break;
+ default:
+ pr_err("diag: In %s, unknown type, type: %d\n", __func__, type);
+ goto err;
+ }
+
+ smd_info->nrt_lock.enabled = 0;
+ smd_info->nrt_lock.ref_count = 0;
+ smd_info->nrt_lock.copy_count = 0;
+ if (type == SMD_DATA_TYPE) {
+ spin_lock_init(&smd_info->nrt_lock.read_spinlock);
+
+ switch (peripheral) {
+ case MODEM_DATA:
+ wake_lock_init(&smd_info->nrt_lock.read_lock, WAKE_LOCK_SUSPEND, "diag_nrt_modem_read");
+ break;
+ case LPASS_DATA:
+ wake_lock_init(&smd_info->nrt_lock.read_lock, WAKE_LOCK_SUSPEND, "diag_nrt_lpass_read");
+ break;
+ case WCNSS_DATA:
+ wake_lock_init(&smd_info->nrt_lock.read_lock, WAKE_LOCK_SUSPEND, "diag_nrt_wcnss_read");
+ break;
+ default:
+ break;
+ }
+ }
+
+ return 1;
+err:
+ kfree(smd_info->buf_in_1);
+ kfree(smd_info->buf_in_2);
+ kfree(smd_info->write_ptr_1);
+ kfree(smd_info->write_ptr_2);
+ kfree(smd_info->buf_in_1_raw);
+ kfree(smd_info->buf_in_2_raw);
+
+ return 0;
+}
+
+void diagfwd_init(void)
+{
+ int success;
+ int i;
+
+ wrap_enabled = 0;
+ wrap_count = 0;
+ diag_debug_buf_idx = 0;
+ driver->read_len_legacy = 0;
+ driver->use_device_tree = has_device_tree();
+ driver->real_time_mode = 1;
+ /*
+ * The number of entries in table of buffers
+ * should not be any smaller than hdlc poolsize.
+ */
+ driver->buf_tbl_size = (buf_tbl_size < driver->poolsize_hdlc) ? driver->poolsize_hdlc : buf_tbl_size;
+ driver->supports_separate_cmdrsp = device_supports_separate_cmdrsp();
+ driver->supports_apps_hdlc_encoding = 1;
+ mutex_init(&driver->diag_hdlc_mutex);
+ mutex_init(&driver->diag_cntl_mutex);
+
+ for (i = 0; i < NUM_SMD_CONTROL_CHANNELS; i++) {
+ driver->separate_cmdrsp[i] = 0;
+ driver->peripheral_supports_stm[i] = DISABLE_STM;
+ driver->rcvd_feature_mask[i] = 0;
+ }
+
+ for (i = 0; i < NUM_STM_PROCESSORS; i++) {
+ driver->stm_state_requested[i] = DISABLE_STM;
+ driver->stm_state[i] = DISABLE_STM;
+ }
+
+ for (i = 0; i < NUM_SMD_DATA_CHANNELS; i++) {
+ success = diag_smd_constructor(&driver->smd_data[i], i, SMD_DATA_TYPE);
+ if (!success)
+ goto err;
+ }
+
+ if (driver->supports_separate_cmdrsp) {
+ for (i = 0; i < NUM_SMD_CMD_CHANNELS; i++) {
+ success = diag_smd_constructor(&driver->smd_cmd[i], i, SMD_CMD_TYPE);
+ if (!success)
+ goto err;
+ }
+ }
+
+ if (driver->usb_buf_out == NULL && (driver->usb_buf_out = kzalloc(USB_MAX_OUT_BUF, GFP_KERNEL)) == NULL)
+ goto err;
+ kmemleak_not_leak(driver->usb_buf_out);
+ if (driver->hdlc_buf == NULL && (driver->hdlc_buf = kzalloc(HDLC_MAX, GFP_KERNEL)) == NULL)
+ goto err;
+ kmemleak_not_leak(driver->hdlc_buf);
+ if (driver->client_map == NULL && (driver->client_map = kzalloc((driver->num_clients) * sizeof(struct diag_client_map), GFP_KERNEL)) == NULL)
+ goto err;
+ kmemleak_not_leak(driver->client_map);
+ if (driver->buf_tbl == NULL)
+ driver->buf_tbl = kzalloc(driver->buf_tbl_size * sizeof(struct diag_write_device), GFP_KERNEL);
+ if (driver->buf_tbl == NULL)
+ goto err;
+ kmemleak_not_leak(driver->buf_tbl);
+ if (driver->data_ready == NULL && (driver->data_ready = kzalloc(driver->num_clients * sizeof(int), GFP_KERNEL)) == NULL)
+ goto err;
+ kmemleak_not_leak(driver->data_ready);
+ if (driver->table == NULL && (driver->table = kzalloc(diag_max_reg * sizeof(struct diag_master_table), GFP_KERNEL)) == NULL)
+ goto err;
+ kmemleak_not_leak(driver->table);
+
+ if (driver->usb_read_ptr == NULL) {
+ driver->usb_read_ptr = kzalloc(sizeof(struct diag_request), GFP_KERNEL);
+ if (driver->usb_read_ptr == NULL)
+ goto err;
+ kmemleak_not_leak(driver->usb_read_ptr);
+ }
+ if (driver->pkt_buf == NULL && (driver->pkt_buf = kzalloc(PKT_SIZE, GFP_KERNEL)) == NULL)
+ goto err;
+ kmemleak_not_leak(driver->pkt_buf);
+ if (driver->apps_rsp_buf == NULL) {
+ driver->apps_rsp_buf = kzalloc(APPS_BUF_SIZE, GFP_KERNEL);
+ if (driver->apps_rsp_buf == NULL)
+ goto err;
+ kmemleak_not_leak(driver->apps_rsp_buf);
+ }
+ driver->diag_wq = create_singlethread_workqueue("diag_wq");
+#ifdef CONFIG_DIAG_OVER_USB
+ INIT_WORK(&(driver->diag_usb_connect_work), diag_usb_connect_work_fn);
+ INIT_WORK(&(driver->diag_usb_disconnect_work), diag_usb_disconnect_work_fn);
+ INIT_WORK(&(driver->diag_proc_hdlc_work), diag_process_hdlc_fn);
+ INIT_WORK(&(driver->diag_read_work), diag_read_work_fn);
+ driver->legacy_ch = usb_diag_open(DIAG_LEGACY, driver, diag_usb_legacy_notifier);
+ if (IS_ERR(driver->legacy_ch)) {
+ printk(KERN_ERR "Unable to open USB diag legacy channel\n");
+ goto err;
+ }
+#endif
+ platform_driver_register(&msm_smd_ch1_driver);
+ platform_driver_register(&diag_smd_lite_driver);
+
+ if (driver->supports_separate_cmdrsp) {
+ for (i = 0; i < NUM_SMD_CMD_CHANNELS; i++)
+ platform_driver_register(&smd_lite_data_cmd_drivers[i]);
+ }
+
+ return;
+err:
+ pr_err("diag: Could not initialize diag buffers\n");
+
+ for (i = 0; i < NUM_SMD_DATA_CHANNELS; i++)
+ diag_smd_destructor(&driver->smd_data[i]);
+
+ for (i = 0; i < NUM_SMD_CMD_CHANNELS; i++)
+ diag_smd_destructor(&driver->smd_cmd[i]);
+
+ kfree(driver->buf_msg_mask_update);
+ kfree(driver->buf_log_mask_update);
+ kfree(driver->buf_event_mask_update);
+ kfree(driver->usb_buf_out);
+ kfree(driver->hdlc_buf);
+ kfree(driver->client_map);
+ kfree(driver->buf_tbl);
+ kfree(driver->data_ready);
+ kfree(driver->table);
+ kfree(driver->pkt_buf);
+ kfree(driver->usb_read_ptr);
+ kfree(driver->apps_rsp_buf);
+ if (driver->diag_wq)
+ destroy_workqueue(driver->diag_wq);
+}
+
+void diagfwd_exit(void)
+{
+ int i;
+
+ for (i = 0; i < NUM_SMD_DATA_CHANNELS; i++)
+ diag_smd_destructor(&driver->smd_data[i]);
+
+#ifdef CONFIG_DIAG_OVER_USB
+ if (driver->usb_connected)
+ usb_diag_free_req(driver->legacy_ch);
+ usb_diag_close(driver->legacy_ch);
+#endif
+ platform_driver_unregister(&msm_smd_ch1_driver);
+ platform_driver_unregister(&diag_smd_lite_driver);
+
+ if (driver->supports_separate_cmdrsp) {
+ for (i = 0; i < NUM_SMD_CMD_CHANNELS; i++) {
+ diag_smd_destructor(&driver->smd_cmd[i]);
+ platform_driver_unregister(&smd_lite_data_cmd_drivers[i]);
+ }
+ }
+
+ kfree(driver->buf_msg_mask_update);
+ kfree(driver->buf_log_mask_update);
+ kfree(driver->buf_event_mask_update);
+ kfree(driver->usb_buf_out);
+ kfree(driver->hdlc_buf);
+ kfree(driver->client_map);
+ kfree(driver->buf_tbl);
+ kfree(driver->data_ready);
+ kfree(driver->table);
+ kfree(driver->pkt_buf);
+ kfree(driver->usb_read_ptr);
+ kfree(driver->apps_rsp_buf);
+ destroy_workqueue(driver->diag_wq);
+}
diff --git a/drivers/char/diag/diagfwd.h b/drivers/char/diag/diagfwd.h
new file mode 100644
index 0000000..28baeff
--- /dev/null
+++ b/drivers/char/diag/diagfwd.h
@@ -0,0 +1,64 @@
+/* Copyright (c) 2008-2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef DIAGFWD_H
+#define DIAGFWD_H
+
+#define NO_PROCESS 0
+#define NON_APPS_PROC -1
+
+#define RESET_AND_NO_QUEUE 0
+#define RESET_AND_QUEUE 1
+
+#define CHK_OVERFLOW(bufStart, start, end, length) \
+ ((((bufStart) <= (start)) && ((end) - (start) >= (length))) ? 1 : 0)
+
+void diagfwd_init(void);
+void diagfwd_exit(void);
+void diag_process_hdlc(void *data, unsigned len);
+void diag_smd_send_req(struct diag_smd_info *smd_info);
+void process_lock_enabling(struct diag_nrt_wake_lock *lock, int real_time);
+void process_lock_on_notify(struct diag_nrt_wake_lock *lock);
+void process_lock_on_read(struct diag_nrt_wake_lock *lock, int pkt_len);
+void process_lock_on_copy(struct diag_nrt_wake_lock *lock);
+void process_lock_on_copy_complete(struct diag_nrt_wake_lock *lock);
+void diag_usb_legacy_notifier(void *, unsigned, struct diag_request *);
+long diagchar_ioctl(struct file *, unsigned int, unsigned long);
+int diag_device_write(void *, int, struct diag_request *);
+int mask_request_validate(unsigned char mask_buf[]);
+void diag_clear_reg(int);
+int chk_config_get_id(void);
+int chk_apps_only(void);
+int chk_apps_master(void);
+int chk_polling_response(void);
+void diag_update_userspace_clients(unsigned int type);
+void diag_update_sleeping_process(int process_id, int data_type);
+void encode_rsp_and_send(int buf_length);
+void diag_smd_notify(void *ctxt, unsigned event);
+int diag_smd_constructor(struct diag_smd_info *smd_info, int peripheral, int type);
+void diag_smd_destructor(struct diag_smd_info *smd_info);
+int diag_switch_logging(unsigned long);
+int diag_command_reg(unsigned long);
+void diag_cmp_logging_modes_sdio_pipe(int old_mode, int new_mode);
+void diag_cmp_logging_modes_diagfwd_bridge(int old_mode, int new_mode);
+int diag_process_apps_pkt(unsigned char *buf, int len);
+void diag_reset_smd_data(int queue);
+int diag_apps_responds(void);
+/* State for diag forwarding */
+#ifdef CONFIG_DIAG_OVER_USB
+int diagfwd_connect(void);
+int diagfwd_disconnect(void);
+#endif
+extern int diag_debug_buf_idx;
+extern unsigned char diag_debug_buf[1024];
+extern struct platform_driver msm_diag_dci_driver;
+#endif
diff --git a/drivers/char/diag/diagfwd_bridge.c b/drivers/char/diag/diagfwd_bridge.c
new file mode 100644
index 0000000..79e8146
--- /dev/null
+++ b/drivers/char/diag/diagfwd_bridge.c
@@ -0,0 +1,358 @@
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include <linux/diagchar.h>
+#include <linux/kmemleak.h>
+#include <linux/err.h>
+#include <linux/workqueue.h>
+#include <linux/ratelimit.h>
+#include <linux/platform_device.h>
+#include <linux/smux.h>
+#ifdef CONFIG_DIAG_OVER_USB
+#include <mach/usbdiag.h>
+#endif
+#include "diagchar.h"
+#include "diagmem.h"
+#include "diagfwd_cntl.h"
+#include "diagfwd_smux.h"
+#include "diagfwd_hsic.h"
+#include "diag_masks.h"
+#include "diagfwd_bridge.h"
+
+struct diag_bridge_dev *diag_bridge;
+
+/* diagfwd_connect_bridge is called when the USB mdm channel is connected */
+int diagfwd_connect_bridge(int process_cable)
+{
+ uint8_t i;
+
+ pr_debug("diag: in %s\n", __func__);
+
+ for (i = 0; i < MAX_BRIDGES; i++)
+ if (diag_bridge[i].enabled)
+ connect_bridge(process_cable, i);
+ return 0;
+}
+
+void connect_bridge(int process_cable, uint8_t index)
+{
+ int err;
+
+ mutex_lock(&diag_bridge[index].bridge_mutex);
+ /* If the usb cable is being connected */
+ if (process_cable) {
+ err = usb_diag_alloc_req(diag_bridge[index].ch, N_MDM_WRITE, N_MDM_READ);
+ if (err)
+ pr_err("diag: unable to alloc USB req for ch %d err:%d\n", index, err);
+
+ diag_bridge[index].usb_connected = 1;
+ }
+
+ if (index == SMUX) {
+ if (driver->diag_smux_enabled) {
+ driver->in_busy_smux = 0;
+ diagfwd_connect_smux();
+ }
+ } else {
+ if (index >= MAX_HSIC_CH) {
+ pr_err("diag: Invalid hsic channel index %d in %s\n", index, __func__);
+ mutex_unlock(&diag_bridge[index].bridge_mutex);
+ return;
+ }
+ if (diag_hsic[index].hsic_device_enabled && (driver->logging_mode != MEMORY_DEVICE_MODE || diag_hsic[index].hsic_data_requested)) {
+ diag_hsic[index].in_busy_hsic_read_on_device = 0;
+ diag_hsic[index].in_busy_hsic_write = 0;
+ /* If the HSIC (diag_bridge) platform
+ * device is not open */
+ if (!diag_hsic[index].hsic_device_opened) {
+ hsic_diag_bridge_ops[index].ctxt = (void *)(int)(index);
+ err = diag_bridge_open(index, &hsic_diag_bridge_ops[index]);
+ if (err) {
+ pr_err("diag: HSIC channel open error: %d\n", err);
+ } else {
+ pr_debug("diag: opened HSIC channel\n");
+ diag_hsic[index].hsic_device_opened = 1;
+ }
+ } else {
+ pr_debug("diag: HSIC channel already open\n");
+ }
+ /*
+ * Turn on communication over usb mdm and HSIC,
+ * if the HSIC device driver is enabled
+ * and opened
+ */
+ if (diag_hsic[index].hsic_device_opened) {
+ diag_hsic[index].hsic_ch = 1;
+ /* Poll USB mdm channel to check for data */
+ if (driver->logging_mode == USB_MODE)
+ queue_work(diag_bridge[index].wq, &diag_bridge[index].diag_read_work);
+ /* Poll HSIC channel to check for data */
+ queue_work(diag_bridge[index].wq, &diag_hsic[index].diag_read_hsic_work);
+ }
+ }
+ }
+ mutex_unlock(&diag_bridge[index].bridge_mutex);
+}
+
+/*
+ * diagfwd_disconnect_bridge is called when the USB mdm channel
+ * is disconnected. So disconnect should happen for all bridges
+ */
+int diagfwd_disconnect_bridge(int process_cable)
+{
+ int i;
+ pr_debug("diag: In %s, process_cable: %d\n", __func__, process_cable);
+
+ for (i = 0; i < MAX_BRIDGES; i++) {
+ if (diag_bridge[i].enabled) {
+ mutex_lock(&diag_bridge[i].bridge_mutex);
+ /* If the usb cable is being disconnected */
+ if (process_cable) {
+ diag_bridge[i].usb_connected = 0;
+ usb_diag_free_req(diag_bridge[i].ch);
+ }
+
+ if (i == SMUX) {
+ if (driver->diag_smux_enabled && driver->logging_mode == USB_MODE) {
+ driver->in_busy_smux = 1;
+ driver->lcid = LCID_INVALID;
+ driver->smux_connected = 0;
+ /*
+ * Turn off communication over usb
+ * and smux
+ */
+ msm_smux_close(LCID_VALID);
+ }
+ } else {
+ if (diag_hsic[i].hsic_device_enabled && (driver->logging_mode != MEMORY_DEVICE_MODE || !diag_hsic[i].hsic_data_requested)) {
+ diag_hsic[i].in_busy_hsic_read_on_device = 1;
+ diag_hsic[i].in_busy_hsic_write = 1;
+ /* Turn off communication over usb
+ * and HSIC */
+ diag_hsic_close(i);
+ }
+ }
+ mutex_unlock(&diag_bridge[i].bridge_mutex);
+ }
+ }
+ return 0;
+}
+
+/* Called after the asynchronous usb_diag_read() on the mdm channel is complete */
+int diagfwd_read_complete_bridge(struct diag_request *diag_read_ptr)
+{
+ int index = (int)(diag_read_ptr->context);
+
+ /* The USB read on the mdm channel (not HSIC/SMUX) has completed */
+ diag_bridge[index].read_len = diag_read_ptr->actual;
+
+ if (index == SMUX) {
+ if (driver->diag_smux_enabled) {
+ diagfwd_read_complete_smux();
+ return 0;
+ } else {
+ pr_warning("diag: incorrect callback for smux\n");
+ }
+ }
+
+ /* If SMUX not enabled, check for HSIC */
+ diag_hsic[index].in_busy_hsic_read_on_device = 0;
+ if (!diag_hsic[index].hsic_ch) {
+ pr_err("DIAG in %s: hsic_ch == 0, ch %d\n", __func__, index);
+ return 0;
+ }
+
+ /*
+ * The read of the usb driver on the mdm channel has completed.
+ * If there is no write on the HSIC in progress, check if the
+ * read has data to pass on to the HSIC. If so, pass the usb
+ * mdm data on to the HSIC.
+ */
+ if (!diag_hsic[index].in_busy_hsic_write && diag_bridge[index].usb_buf_out && (diag_bridge[index].read_len > 0)) {
+
+ /*
+ * Initiate the HSIC write. The HSIC write is
+ * asynchronous. When complete the write
+ * complete callback function will be called
+ */
+ int err;
+ diag_hsic[index].in_busy_hsic_write = 1;
+ err = diag_bridge_write(index, diag_bridge[index].usb_buf_out, diag_bridge[index].read_len);
+ if (err) {
+ pr_err_ratelimited("diag: mdm data on HSIC write err: %d\n", err);
+ /*
+ * If the error is recoverable, then clear
+ * the write flag, so we will resubmit a
+ * write on the next frame. Otherwise, don't
+ * resubmit a write on the next frame.
+ */
+ if ((-ENODEV) != err)
+ diag_hsic[index].in_busy_hsic_write = 0;
+ }
+ }
+
+ /*
+ * If there is no write of the usb mdm data on the
+ * HSIC channel
+ */
+ if (!diag_hsic[index].in_busy_hsic_write)
+ queue_work(diag_bridge[index].wq, &diag_bridge[index].diag_read_work);
+
+ return 0;
+}
+
+static void diagfwd_bridge_notifier(void *priv, unsigned event, struct diag_request *d_req)
+{
+ int index;
+
+ switch (event) {
+ case USB_DIAG_CONNECT:
+ queue_work(driver->diag_wq, &driver->diag_connect_work);
+ break;
+ case USB_DIAG_DISCONNECT:
+ queue_work(driver->diag_wq, &driver->diag_disconnect_work);
+ break;
+ case USB_DIAG_READ_DONE:
+ index = (int)(d_req->context);
+ queue_work(diag_bridge[index].wq, &diag_bridge[index].usb_read_complete_work);
+ break;
+ case USB_DIAG_WRITE_DONE:
+ index = (int)(d_req->context);
+ if (index == SMUX && driver->diag_smux_enabled)
+ diagfwd_write_complete_smux();
+ else if (diag_hsic[index].hsic_device_enabled)
+ diagfwd_write_complete_hsic(d_req, index);
+ break;
+ default:
+ pr_err("diag: in %s: Unknown event from USB diag:%u\n", __func__, event);
+ break;
+ }
+}
+
+void diagfwd_bridge_init(int index)
+{
+ int ret;
+ unsigned char name[20];
+
+ if (index == HSIC) {
+ strlcpy(name, "hsic", sizeof(name));
+ } else if (index == HSIC_2) {
+ strlcpy(name, "hsic_2", sizeof(name));
+ } else if (index == SMUX) {
+ strlcpy(name, "smux", sizeof(name));
+ } else {
+ pr_err("diag: incorrect bridge init, instance: %d\n", index);
+ return;
+ }
+
+ strlcpy(diag_bridge[index].name, name, sizeof(diag_bridge[index].name));
+ strlcat(name, "_diag_wq", sizeof(name));
+ diag_bridge[index].id = index;
+ diag_bridge[index].wq = create_singlethread_workqueue(name);
+ diag_bridge[index].read_len = 0;
+ diag_bridge[index].write_len = 0;
+ if (diag_bridge[index].usb_buf_out == NULL)
+ diag_bridge[index].usb_buf_out = kzalloc(USB_MAX_OUT_BUF, GFP_KERNEL);
+ if (diag_bridge[index].usb_buf_out == NULL)
+ goto err;
+ if (diag_bridge[index].usb_read_ptr == NULL)
+ diag_bridge[index].usb_read_ptr = kzalloc(sizeof(struct diag_request), GFP_KERNEL);
+ if (diag_bridge[index].usb_read_ptr == NULL)
+ goto err;
+ if (diag_bridge[index].usb_read_ptr->context == NULL)
+ diag_bridge[index].usb_read_ptr->context = kzalloc(sizeof(int), GFP_KERNEL);
+ if (diag_bridge[index].usb_read_ptr->context == NULL)
+ goto err;
+ mutex_init(&diag_bridge[index].bridge_mutex);
+
+ if (index == HSIC || index == HSIC_2) {
+ INIT_WORK(&(diag_bridge[index].usb_read_complete_work), diag_usb_read_complete_hsic_fn);
+#ifdef CONFIG_DIAG_OVER_USB
+ INIT_WORK(&(diag_bridge[index].diag_read_work), diag_read_usb_hsic_work_fn);
+ if (index == HSIC)
+ diag_bridge[index].ch = usb_diag_open(DIAG_MDM, (void *)index, diagfwd_bridge_notifier);
+ else if (index == HSIC_2)
+ diag_bridge[index].ch = usb_diag_open(DIAG_MDM2, (void *)index, diagfwd_bridge_notifier);
+ if (IS_ERR(diag_bridge[index].ch)) {
+ pr_err("diag: Unable to open USB MDM ch = %d\n", index);
+ goto err;
+ } else
+ diag_bridge[index].enabled = 1;
+#endif
+ } else if (index == SMUX) {
+ INIT_WORK(&(diag_bridge[index].usb_read_complete_work), diag_usb_read_complete_smux_fn);
+#ifdef CONFIG_DIAG_OVER_USB
+ INIT_WORK(&(diag_bridge[index].diag_read_work), diag_read_usb_smux_work_fn);
+ diag_bridge[index].ch = usb_diag_open(DIAG_QSC, (void *)index, diagfwd_bridge_notifier);
+ if (IS_ERR(diag_bridge[index].ch)) {
+ pr_err("diag: Unable to open USB diag QSC channel\n");
+ goto err;
+ } else
+ diag_bridge[index].enabled = 1;
+#endif
+ ret = platform_driver_register(&msm_diagfwd_smux_driver);
+ if (ret)
+ pr_err("diag: could not register SMUX device, ret: %d\n", ret);
+ }
+ return;
+err:
+ pr_err("diag: Could not initialize for bridge forwarding\n");
+ kfree(diag_bridge[index].usb_buf_out);
+ kfree(diag_hsic[index].hsic_buf_tbl);
+ kfree(driver->write_ptr_mdm);
+ kfree(diag_bridge[index].usb_read_ptr);
+ if (diag_bridge[index].wq)
+ destroy_workqueue(diag_bridge[index].wq);
+ return;
+}
+
+void diagfwd_bridge_exit(void)
+{
+ int i;
+ pr_debug("diag: in %s\n", __func__);
+
+ for (i = 0; i < MAX_HSIC_CH; i++) {
+ if (diag_hsic[i].hsic_device_enabled) {
+ diag_hsic_close(i);
+ diag_hsic[i].hsic_device_enabled = 0;
+ diag_bridge[i].enabled = 0;
+ }
+ diag_hsic[i].hsic_inited = 0;
+ kfree(diag_hsic[i].hsic_buf_tbl);
+ }
+ diagmem_exit(driver, POOL_TYPE_ALL);
+ if (driver->diag_smux_enabled) {
+ driver->lcid = LCID_INVALID;
+ kfree(driver->buf_in_smux);
+ driver->diag_smux_enabled = 0;
+ diag_bridge[SMUX].enabled = 0;
+ }
+ platform_driver_unregister(&msm_hsic_ch_driver);
+ platform_driver_unregister(&msm_diagfwd_smux_driver);
+ /* destroy USB MDM specific variables */
+ for (i = 0; i < MAX_BRIDGES; i++) {
+ if (diag_bridge[i].enabled) {
+#ifdef CONFIG_DIAG_OVER_USB
+ if (diag_bridge[i].usb_connected)
+ usb_diag_free_req(diag_bridge[i].ch);
+ usb_diag_close(diag_bridge[i].ch);
+#endif
+ kfree(diag_bridge[i].usb_buf_out);
+ kfree(diag_bridge[i].usb_read_ptr);
+ destroy_workqueue(diag_bridge[i].wq);
+ diag_bridge[i].enabled = 0;
+ }
+ }
+ kfree(driver->write_ptr_mdm);
+}
diff --git a/drivers/char/diag/diagfwd_bridge.h b/drivers/char/diag/diagfwd_bridge.h
new file mode 100644
index 0000000..ae1259b
--- /dev/null
+++ b/drivers/char/diag/diagfwd_bridge.h
@@ -0,0 +1,49 @@
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef DIAGFWD_BRIDGE_H
+#define DIAGFWD_BRIDGE_H
+
+#include "diagfwd.h"
+
+#define MAX_BRIDGES 5
+#define HSIC 0
+#define HSIC_2 1
+#define SMUX 4
+
+int diagfwd_connect_bridge(int);
+void connect_bridge(int, uint8_t);
+int diagfwd_disconnect_bridge(int);
+void diagfwd_bridge_init(int index);
+void diagfwd_bridge_exit(void);
+int diagfwd_read_complete_bridge(struct diag_request *diag_read_ptr);
+
+/* Diag bridge structure; multiple bridges can be in use at the same time,
+ * for instance SMUX and HSIC operating concurrently.
+ */
+struct diag_bridge_dev {
+ int id;
+ char name[20];
+ int enabled;
+ struct mutex bridge_mutex;
+ int usb_connected;
+ int read_len;
+ int write_len;
+ unsigned char *usb_buf_out;
+ struct usb_diag_ch *ch;
+ struct workqueue_struct *wq;
+ struct work_struct diag_read_work;
+ struct diag_request *usb_read_ptr;
+ struct work_struct usb_read_complete_work;
+};
+
+#endif
diff --git a/drivers/char/diag/diagfwd_cntl.c b/drivers/char/diag/diagfwd_cntl.c
new file mode 100644
index 0000000..16a5a83
--- /dev/null
+++ b/drivers/char/diag/diagfwd_cntl.c
@@ -0,0 +1,516 @@
+/* Copyright (c) 2011-2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/slab.h>
+#include <linux/diagchar.h>
+#include <linux/platform_device.h>
+#include <linux/kmemleak.h>
+#include <linux/delay.h>
+#include "diagchar.h"
+#include "diagfwd.h"
+#include "diagfwd_cntl.h"
+/* tracks which peripheral is undergoing SSR */
+static uint16_t reg_dirty;
+#define HDR_SIZ 8
+
+void diag_clean_reg_fn(struct work_struct *work)
+{
+ struct diag_smd_info *smd_info = container_of(work,
+ struct diag_smd_info,
+ diag_notify_update_smd_work);
+ if (!smd_info)
+ return;
+
+ pr_debug("diag: clean registration for peripheral: %d\n", smd_info->peripheral);
+
+ reg_dirty |= smd_info->peripheral_mask;
+ diag_clear_reg(smd_info->peripheral);
+ reg_dirty ^= smd_info->peripheral_mask;
+
+ /* Reset the feature mask flag */
+ driver->rcvd_feature_mask[smd_info->peripheral] = 0;
+
+ smd_info->notify_context = 0;
+}
+
+void diag_cntl_smd_work_fn(struct work_struct *work)
+{
+ struct diag_smd_info *smd_info = container_of(work,
+ struct diag_smd_info,
+ diag_general_smd_work);
+
+ if (!smd_info || smd_info->type != SMD_CNTL_TYPE)
+ return;
+
+ if (smd_info->general_context == UPDATE_PERIPHERAL_STM_STATE) {
+ if (driver->peripheral_supports_stm[smd_info->peripheral] == ENABLE_STM) {
+ int status = 0;
+ int index = smd_info->peripheral;
+ status = diag_send_stm_state(smd_info, (uint8_t) (driver->stm_state_requested[index]));
+ if (status == 1)
+ driver->stm_state[index] = driver->stm_state_requested[index];
+ }
+ }
+ smd_info->general_context = 0;
+}
+
+void diag_cntl_stm_notify(struct diag_smd_info *smd_info, int action)
+{
+ if (!smd_info || smd_info->type != SMD_CNTL_TYPE)
+ return;
+
+ if (action == CLEAR_PERIPHERAL_STM_STATE)
+ driver->peripheral_supports_stm[smd_info->peripheral] = DISABLE_STM;
+}
+
+static void process_stm_feature(struct diag_smd_info *smd_info, uint8_t feature_mask)
+{
+ if (feature_mask & F_DIAG_OVER_STM) {
+ driver->peripheral_supports_stm[smd_info->peripheral] = ENABLE_STM;
+ smd_info->general_context = UPDATE_PERIPHERAL_STM_STATE;
+ queue_work(driver->diag_cntl_wq, &(smd_info->diag_general_smd_work));
+ } else {
+ driver->peripheral_supports_stm[smd_info->peripheral] = DISABLE_STM;
+ }
+}
+
+static void process_hdlc_encoding_feature(struct diag_smd_info *smd_info, uint8_t feature_mask)
+{
+ /*
+ * Check if apps supports hdlc encoding and the
+ * peripheral supports apps hdlc encoding
+ */
+ if (driver->supports_apps_hdlc_encoding && (feature_mask & F_DIAG_HDLC_ENCODE_IN_APPS_MASK)) {
+ driver->smd_data[smd_info->peripheral].encode_hdlc = ENABLE_APPS_HDLC_ENCODING;
+ if (driver->separate_cmdrsp[smd_info->peripheral] && smd_info->peripheral < NUM_SMD_CMD_CHANNELS)
+ driver->smd_cmd[smd_info->peripheral].encode_hdlc = ENABLE_APPS_HDLC_ENCODING;
+ } else {
+ driver->smd_data[smd_info->peripheral].encode_hdlc = DISABLE_APPS_HDLC_ENCODING;
+ if (driver->separate_cmdrsp[smd_info->peripheral] && smd_info->peripheral < NUM_SMD_CMD_CHANNELS)
+ driver->smd_cmd[smd_info->peripheral].encode_hdlc = DISABLE_APPS_HDLC_ENCODING;
+ }
+}
+
+/* Process the data read from the smd control channel */
+int diag_process_smd_cntl_read_data(struct diag_smd_info *smd_info, void *buf, int total_recd)
+{
+ int data_len = 0, type = -1, count_bytes = 0, j, flag = 0;
+ struct bindpkt_params_per_process *pkt_params = kzalloc(sizeof(struct bindpkt_params_per_process), GFP_KERNEL);
+ struct diag_ctrl_msg *msg;
+ struct cmd_code_range *range;
+ struct bindpkt_params *temp;
+
+ if (pkt_params == NULL) {
+ pr_alert("diag: In %s, Memory allocation failure\n", __func__);
+ return 0;
+ }
+
+ if (!smd_info) {
+ pr_err("diag: In %s, No smd info. Not able to read.\n", __func__);
+ kfree(pkt_params);
+ return 0;
+ }
+
+ while (count_bytes + HDR_SIZ <= total_recd) {
+ type = *(uint32_t *) (buf);
+ data_len = *(uint32_t *) (buf + 4);
+ if (type < DIAG_CTRL_MSG_REG || type > DIAG_CTRL_MSG_LAST) {
+ pr_alert("diag: In %s, Invalid Msg type %d proc %d", __func__, type, smd_info->peripheral);
+ break;
+ }
+ if (data_len < 0 || data_len > total_recd) {
+ pr_alert("diag: In %s, Invalid data len %d, total_recd: %d, proc %d", __func__, data_len, total_recd, smd_info->peripheral);
+ break;
+ }
+ count_bytes = count_bytes + HDR_SIZ + data_len;
+ if (type == DIAG_CTRL_MSG_REG && total_recd >= count_bytes) {
+ msg = buf + HDR_SIZ;
+ range = buf + HDR_SIZ + sizeof(struct diag_ctrl_msg);
+ if (msg->count_entries == 0) {
+ pr_debug("diag: In %s, received reg tbl with no entries\n", __func__);
+ buf = buf + HDR_SIZ + data_len;
+ continue;
+ }
+ pkt_params->count = msg->count_entries;
+ pkt_params->params = kzalloc(pkt_params->count * sizeof(struct bindpkt_params), GFP_KERNEL);
+ if (!pkt_params->params) {
+ pr_alert("diag: In %s, Memory alloc fail for cmd_code: %d, subsys: %d\n", __func__, msg->cmd_code, msg->subsysid);
+ buf = buf + HDR_SIZ + data_len;
+ continue;
+ }
+ temp = pkt_params->params;
+ for (j = 0; j < pkt_params->count; j++) {
+ temp->cmd_code = msg->cmd_code;
+ temp->subsys_id = msg->subsysid;
+ temp->client_id = smd_info->peripheral;
+ temp->proc_id = NON_APPS_PROC;
+ temp->cmd_code_lo = range->cmd_code_lo;
+ temp->cmd_code_hi = range->cmd_code_hi;
+ range++;
+ temp++;
+ }
+ flag = 1;
+ /* A peripheral undergoing SSR should not
+ * record new registrations
+ */
+ if (!(reg_dirty & smd_info->peripheral_mask))
+ diagchar_ioctl(NULL, DIAG_IOCTL_COMMAND_REG, (unsigned long)pkt_params);
+ else
+ pr_err("diag: drop reg proc %d\n", smd_info->peripheral);
+ kfree(pkt_params->params);
+ } else if (type == DIAG_CTRL_MSG_FEATURE && total_recd >= count_bytes) {
+ uint8_t feature_mask = 0;
+ int feature_mask_len = *(int *)(buf + 8);
+ if (feature_mask_len > 0) {
+ int periph = smd_info->peripheral;
+ driver->rcvd_feature_mask[smd_info->peripheral] = 1;
+ feature_mask = *(uint8_t *) (buf + 12);
+ if (periph == MODEM_DATA)
+ driver->log_on_demand_support = feature_mask & F_DIAG_LOG_ON_DEMAND_RSP_ON_MASTER;
+ /*
+ * If apps supports separate cmd/rsp channels
+ * and the peripheral supports separate cmd/rsp
+ * channels
+ */
+ if (driver->supports_separate_cmdrsp && (feature_mask & F_DIAG_REQ_RSP_CHANNEL))
+ driver->separate_cmdrsp[periph] = ENABLE_SEPARATE_CMDRSP;
+ else
+ driver->separate_cmdrsp[periph] = DISABLE_SEPARATE_CMDRSP;
+ /*
+ * Check if apps supports hdlc encoding and the
+ * peripheral supports apps hdlc encoding
+ */
+ process_hdlc_encoding_feature(smd_info, feature_mask);
+ if (feature_mask_len > 1) {
+ feature_mask = *(uint8_t *) (buf + 13);
+ process_stm_feature(smd_info, feature_mask);
+ }
+ }
+ flag = 1;
+ } else if (type != DIAG_CTRL_MSG_REG) {
+ flag = 1;
+ }
+ buf = buf + HDR_SIZ + data_len;
+ }
+ kfree(pkt_params);
+
+ return flag;
+}
+
+void diag_update_proc_vote(uint16_t proc, uint8_t vote)
+{
+ mutex_lock(&driver->real_time_mutex);
+ if (vote)
+ driver->proc_active_mask |= proc;
+ else {
+ driver->proc_active_mask &= ~proc;
+ driver->proc_rt_vote_mask |= proc;
+ }
+ mutex_unlock(&driver->real_time_mutex);
+}
+
+void diag_update_real_time_vote(uint16_t proc, uint8_t real_time)
+{
+ mutex_lock(&driver->real_time_mutex);
+ if (real_time)
+ driver->proc_rt_vote_mask |= proc;
+ else
+ driver->proc_rt_vote_mask &= ~proc;
+ mutex_unlock(&driver->real_time_mutex);
+}
+
+#ifdef CONFIG_DIAG_OVER_USB
+void diag_real_time_work_fn(struct work_struct *work)
+{
+ int temp_real_time = MODE_REALTIME, i;
+
+ if (driver->proc_active_mask == 0) {
+ /* There are no DCI or Memory Device processes. Diag should
+ * be in Real Time mode irrespective of USB connection
+ */
+ temp_real_time = MODE_REALTIME;
+ } else if (driver->proc_rt_vote_mask & driver->proc_active_mask) {
+ /* At least one process is alive and is voting for Real Time
+ * data - Diag should be in real time mode irrespective of USB
+ * connection.
+ */
+ temp_real_time = MODE_REALTIME;
+ } else if (driver->usb_connected) {
+ /* If USB is connected, check individual process. If Memory
+ * Device Mode is active, set the mode requested by Memory
+ * Device process. Set to realtime mode otherwise.
+ */
+ if ((driver->proc_rt_vote_mask & DIAG_PROC_MEMORY_DEVICE) == 0)
+ temp_real_time = MODE_NONREALTIME;
+ else
+ temp_real_time = MODE_REALTIME;
+ } else {
+ /* We come here if USB is not connected and the active
+ * processes are voting for non-real-time mode.
+ */
+ temp_real_time = MODE_NONREALTIME;
+ }
+
+ if (temp_real_time != driver->real_time_mode) {
+ for (i = 0; i < NUM_SMD_CONTROL_CHANNELS; i++)
+ diag_send_diag_mode_update_by_smd(&driver->smd_cntl[i], temp_real_time);
+ } else {
+ pr_debug("diag: did not update real time mode, already in the req mode %d", temp_real_time);
+ }
+ if (driver->real_time_update_busy > 0)
+ driver->real_time_update_busy--;
+}
+#else
+void diag_real_time_work_fn(struct work_struct *work)
+{
+ int temp_real_time = MODE_REALTIME, i;
+
+ if (driver->proc_active_mask == 0) {
+ /* There are no DCI or Memory Device processes. Diag should
+ * be in Real Time mode.
+ */
+ temp_real_time = MODE_REALTIME;
+ } else if (!(driver->proc_rt_vote_mask & driver->proc_active_mask)) {
+ /* No active process is voting for real time mode */
+ temp_real_time = MODE_NONREALTIME;
+ }
+
+ if (temp_real_time != driver->real_time_mode) {
+ for (i = 0; i < NUM_SMD_CONTROL_CHANNELS; i++)
+ diag_send_diag_mode_update_by_smd(&driver->smd_cntl[i], temp_real_time);
+ } else {
+ pr_warn("diag: did not update real time mode, already in the req mode %d", temp_real_time);
+ }
+ if (driver->real_time_update_busy > 0)
+ driver->real_time_update_busy--;
+}
+#endif
+
+void diag_send_diag_mode_update_by_smd(struct diag_smd_info *smd_info, int real_time)
+{
+ struct diag_ctrl_msg_diagmode diagmode;
+ char buf[sizeof(struct diag_ctrl_msg_diagmode)];
+ int msg_size = sizeof(struct diag_ctrl_msg_diagmode);
+ int wr_size = -ENOMEM, retry_count = 0, timer;
+
+ /* For now only allow the modem to receive the message */
+ if (!smd_info || smd_info->type != SMD_CNTL_TYPE || (smd_info->peripheral != MODEM_DATA))
+ return;
+
+ mutex_lock(&driver->diag_cntl_mutex);
+ diagmode.ctrl_pkt_id = DIAG_CTRL_MSG_DIAGMODE;
+ diagmode.ctrl_pkt_data_len = 36;
+ diagmode.version = 1;
+ diagmode.sleep_vote = real_time ? 1 : 0;
+ /*
+ * 0 - Disables real-time logging (to prevent
+ * frequent APPS wake-ups, etc.).
+ * 1 - Enables real-time logging
+ */
+ diagmode.real_time = real_time;
+ diagmode.use_nrt_values = 0;
+ diagmode.commit_threshold = 0;
+ diagmode.sleep_threshold = 0;
+ diagmode.sleep_time = 0;
+ diagmode.drain_timer_val = 0;
+ diagmode.event_stale_timer_val = 0;
+
+ memcpy(buf, &diagmode, msg_size);
+
+ if (smd_info->ch) {
+ while (retry_count < 3) {
+ wr_size = smd_write(smd_info->ch, buf, msg_size);
+ if (wr_size == -ENOMEM) {
+ /*
+ * The smd channel is full. Delay while
+ * smd processes existing data and
+ * memory becomes available. The delay
+ * of 2000 us was determined empirically
+ * as the best value to use.
+ */
+ retry_count++;
+ for (timer = 0; timer < 5; timer++)
+ udelay(2000);
+ } else {
+ struct diag_smd_info *data = &driver->smd_data[smd_info->peripheral];
+ driver->real_time_mode = real_time;
+ process_lock_enabling(&data->nrt_lock, real_time);
+ break;
+ }
+ }
+ if (wr_size != msg_size)
+ pr_err("diag: proc %d fail feature update %d, tried %d", smd_info->peripheral, wr_size, msg_size);
+ } else {
+ pr_err("diag: ch invalid, feature update on proc %d\n", smd_info->peripheral);
+ }
+
+ mutex_unlock(&driver->diag_cntl_mutex);
+}
+
+int diag_send_stm_state(struct diag_smd_info *smd_info, uint8_t stm_control_data)
+{
+ struct diag_ctrl_msg_stm stm_msg;
+ int msg_size = sizeof(struct diag_ctrl_msg_stm);
+ int retry_count = 0;
+ int wr_size = 0;
+ int success = 0;
+
+ if (!smd_info || (smd_info->type != SMD_CNTL_TYPE) || (driver->peripheral_supports_stm[smd_info->peripheral] == DISABLE_STM)) {
+ return -EINVAL;
+ }
+
+ if (smd_info->ch) {
+ stm_msg.ctrl_pkt_id = 21;
+ stm_msg.ctrl_pkt_data_len = 5;
+ stm_msg.version = 1;
+ stm_msg.control_data = stm_control_data;
+ while (retry_count < 3) {
+ wr_size = smd_write(smd_info->ch, &stm_msg, msg_size);
+ if (wr_size == -ENOMEM) {
+ /*
+ * The smd channel is full. Delay while
+ * smd processes existing data and
+ * memory becomes available. The delay
+ * of 10000 us was determined empirically
+ * as the best value to use.
+ */
+ retry_count++;
+ usleep_range(10000, 10000);
+ } else {
+ success = 1;
+ break;
+ }
+ }
+ if (wr_size != msg_size) {
+ pr_err("diag: In %s, proc %d fail STM update %d, tried %d", __func__, smd_info->peripheral, wr_size, msg_size);
+ success = 0;
+ }
+ } else {
+ pr_err("diag: In %s, ch invalid, STM update on proc %d\n", __func__, smd_info->peripheral);
+ }
+ return success;
+}
+
+static int diag_smd_cntl_probe(struct platform_device *pdev)
+{
+ int r = 0;
+ int index = -1;
+ const char *channel_name = NULL;
+
+ /* open control ports only on 8960 & newer targets */
+ if (chk_apps_only()) {
+ if (pdev->id == SMD_APPS_MODEM) {
+ index = MODEM_DATA;
+ channel_name = "DIAG_CNTL";
+ }
+#if defined(CONFIG_MSM_N_WAY_SMD)
+ else if (pdev->id == SMD_APPS_QDSP) {
+ index = LPASS_DATA;
+ channel_name = "DIAG_CNTL";
+ }
+#endif
+ else if (pdev->id == SMD_APPS_WCNSS) {
+ index = WCNSS_DATA;
+ channel_name = "APPS_RIVA_CTRL";
+ }
+
+ if (index != -1) {
+ r = smd_named_open_on_edge(channel_name, pdev->id, &driver->smd_cntl[index].ch, &driver->smd_cntl[index], diag_smd_notify);
+ driver->smd_cntl[index].ch_save = driver->smd_cntl[index].ch;
+ }
+ pr_debug("diag: In %s, open SMD CNTL port, Id = %d, r = %d\n", __func__, pdev->id, r);
+ }
+
+ return 0;
+}
+
+static int diagfwd_cntl_runtime_suspend(struct device *dev)
+{
+ dev_dbg(dev, "pm_runtime: suspending...\n");
+ return 0;
+}
+
+static int diagfwd_cntl_runtime_resume(struct device *dev)
+{
+ dev_dbg(dev, "pm_runtime: resuming...\n");
+ return 0;
+}
+
+static const struct dev_pm_ops diagfwd_cntl_dev_pm_ops = {
+ .runtime_suspend = diagfwd_cntl_runtime_suspend,
+ .runtime_resume = diagfwd_cntl_runtime_resume,
+};
+
+static struct platform_driver msm_smd_ch1_cntl_driver = {
+
+ .probe = diag_smd_cntl_probe,
+ .driver = {
+ .name = "DIAG_CNTL",
+ .owner = THIS_MODULE,
+ .pm = &diagfwd_cntl_dev_pm_ops,
+ },
+};
+
+static struct platform_driver diag_smd_lite_cntl_driver = {
+
+ .probe = diag_smd_cntl_probe,
+ .driver = {
+ .name = "APPS_RIVA_CTRL",
+ .owner = THIS_MODULE,
+ .pm = &diagfwd_cntl_dev_pm_ops,
+ },
+};
+
+void diagfwd_cntl_init(void)
+{
+ int success;
+ int i;
+
+ reg_dirty = 0;
+ driver->polling_reg_flag = 0;
+ driver->log_on_demand_support = 1;
+ driver->diag_cntl_wq = create_singlethread_workqueue("diag_cntl_wq");
+
+ for (i = 0; i < NUM_SMD_CONTROL_CHANNELS; i++) {
+ success = diag_smd_constructor(&driver->smd_cntl[i], i, SMD_CNTL_TYPE);
+ if (!success)
+ goto err;
+ }
+
+ platform_driver_register(&msm_smd_ch1_cntl_driver);
+ platform_driver_register(&diag_smd_lite_cntl_driver);
+
+ return;
+err:
+ pr_err("diag: Could not initialize diag buffers");
+
+ for (i = 0; i < NUM_SMD_CONTROL_CHANNELS; i++)
+ diag_smd_destructor(&driver->smd_cntl[i]);
+
+ if (driver->diag_cntl_wq)
+ destroy_workqueue(driver->diag_cntl_wq);
+}
+
+void diagfwd_cntl_exit(void)
+{
+ int i;
+
+ for (i = 0; i < NUM_SMD_CONTROL_CHANNELS; i++)
+ diag_smd_destructor(&driver->smd_cntl[i]);
+
+ destroy_workqueue(driver->diag_cntl_wq);
+ destroy_workqueue(driver->diag_real_time_wq);
+
+ platform_driver_unregister(&msm_smd_ch1_cntl_driver);
+ platform_driver_unregister(&diag_smd_lite_cntl_driver);
+}
diff --git a/drivers/char/diag/diagfwd_cntl.h b/drivers/char/diag/diagfwd_cntl.h
new file mode 100644
index 0000000..9ced8ce
--- /dev/null
+++ b/drivers/char/diag/diagfwd_cntl.h
@@ -0,0 +1,155 @@
+/* Copyright (c) 2011-2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef DIAGFWD_CNTL_H
+#define DIAGFWD_CNTL_H
+
+/* Message registration commands */
+#define DIAG_CTRL_MSG_REG 1
+/* Message passing for DTR events */
+#define DIAG_CTRL_MSG_DTR 2
+/* Control Diag sleep vote, buffering etc */
+#define DIAG_CTRL_MSG_DIAGMODE 3
+/* Diag data based on "light" diag mask */
+#define DIAG_CTRL_MSG_DIAGDATA 4
+/* Send diag internal feature mask 'diag_int_feature_mask' */
+#define DIAG_CTRL_MSG_FEATURE 8
+/* Send Diag log mask for a particular equip id */
+#define DIAG_CTRL_MSG_EQUIP_LOG_MASK 9
+/* Send Diag event mask */
+#define DIAG_CTRL_MSG_EVENT_MASK_V2 10
+/* Send Diag F3 mask */
+#define DIAG_CTRL_MSG_F3_MASK_V2 11
+#define DIAG_CTRL_MSG_NUM_PRESETS 12
+#define DIAG_CTRL_MSG_SET_PRESET_ID 13
+#define DIAG_CTRL_MSG_LOG_MASK_WITH_PRESET_ID 14
+#define DIAG_CTRL_MSG_EVENT_MASK_WITH_PRESET_ID 15
+#define DIAG_CTRL_MSG_F3_MASK_WITH_PRESET_ID 16
+#define DIAG_CTRL_MSG_LAST DIAG_CTRL_MSG_F3_MASK_WITH_PRESET_ID
+
+/* Denotes that we support sending/receiving the feature mask */
+#define F_DIAG_INT_FEATURE_MASK 0x01
+/* Denotes that we support responding to "Log on Demand" */
+#define F_DIAG_LOG_ON_DEMAND_RSP_ON_MASTER 0x04
+/*
+ * Supports dedicated main request/response on
+ * new Data Rx and DCI Rx channels
+ */
+#define F_DIAG_REQ_RSP_CHANNEL 0x10
+/* Denotes we support diag over stm */
+#define F_DIAG_OVER_STM 0x02
+
+ /* Perform hdlc encoding of data coming from smd channel */
+#define F_DIAG_HDLC_ENCODE_IN_APPS_MASK 0x40
+
+#define ENABLE_SEPARATE_CMDRSP 1
+#define DISABLE_SEPARATE_CMDRSP 0
+
+#define ENABLE_STM 1
+#define DISABLE_STM 0
+
+#define UPDATE_PERIPHERAL_STM_STATE 1
+#define CLEAR_PERIPHERAL_STM_STATE 2
+
+#define ENABLE_APPS_HDLC_ENCODING 1
+#define DISABLE_APPS_HDLC_ENCODING 0
+
+struct cmd_code_range {
+ uint16_t cmd_code_lo;
+ uint16_t cmd_code_hi;
+ uint32_t data;
+};
+
+struct diag_ctrl_msg {
+ uint32_t version;
+ uint16_t cmd_code;
+ uint16_t subsysid;
+ uint16_t count_entries;
+ uint16_t port;
+};
+
+struct diag_ctrl_event_mask {
+ uint32_t cmd_type;
+ uint32_t data_len;
+ uint8_t stream_id;
+ uint8_t status;
+ uint8_t event_config;
+ uint32_t event_mask_size;
+ /* Copy event mask here */
+} __packed;
+
+struct diag_ctrl_log_mask {
+ uint32_t cmd_type;
+ uint32_t data_len;
+ uint8_t stream_id;
+ uint8_t status;
+ uint8_t equip_id;
+ uint32_t num_items; /* Last log code for this equip_id */
+ uint32_t log_mask_size; /* Size of log mask stored in log_mask[] */
+ /* Copy log mask here */
+} __packed;
+
+struct diag_ctrl_msg_mask {
+ uint32_t cmd_type;
+ uint32_t data_len;
+ uint8_t stream_id;
+ uint8_t status;
+ uint8_t msg_mode;
+ uint16_t ssid_first; /* Start of range of supported SSIDs */
+ uint16_t ssid_last; /* Last SSID in range */
+ uint32_t msg_mask_size; /* ssid_last - ssid_first + 1 */
+ /* Copy msg mask here */
+} __packed;
+
+struct diag_ctrl_feature_mask {
+ uint32_t ctrl_pkt_id;
+ uint32_t ctrl_pkt_data_len;
+ uint32_t feature_mask_len;
+ /* Copy feature mask here */
+} __packed;
+
+struct diag_ctrl_msg_diagmode {
+ uint32_t ctrl_pkt_id;
+ uint32_t ctrl_pkt_data_len;
+ uint32_t version;
+ uint32_t sleep_vote;
+ uint32_t real_time;
+ uint32_t use_nrt_values;
+ uint32_t commit_threshold;
+ uint32_t sleep_threshold;
+ uint32_t sleep_time;
+ uint32_t drain_timer_val;
+ uint32_t event_stale_timer_val;
+} __packed;
+
+struct diag_ctrl_msg_stm {
+ uint32_t ctrl_pkt_id;
+ uint32_t ctrl_pkt_data_len;
+ uint32_t version;
+ uint8_t control_data;
+} __packed;
+
+void diagfwd_cntl_init(void);
+void diagfwd_cntl_exit(void);
+void diag_read_smd_cntl_work_fn(struct work_struct *);
+void diag_notify_ctrl_update_fn(struct work_struct *work);
+void diag_clean_reg_fn(struct work_struct *work);
+void diag_cntl_smd_work_fn(struct work_struct *work);
+int diag_process_smd_cntl_read_data(struct diag_smd_info *smd_info, void *buf, int total_recd);
+void diag_send_diag_mode_update_by_smd(struct diag_smd_info *smd_info, int real_time);
+void diag_update_proc_vote(uint16_t proc, uint8_t vote);
+void diag_update_real_time_vote(uint16_t proc, uint8_t real_time);
+void diag_real_time_work_fn(struct work_struct *work);
+int diag_send_stm_state(struct diag_smd_info *smd_info, uint8_t stm_control_data);
+void diag_cntl_stm_notify(struct diag_smd_info *smd_info, int action);
+
+#endif
diff --git a/drivers/char/diag/diagfwd_hsic.c b/drivers/char/diag/diagfwd_hsic.c
new file mode 100644
index 0000000..9b3cb05
--- /dev/null
+++ b/drivers/char/diag/diagfwd_hsic.c
@@ -0,0 +1,473 @@
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/slab.h>
+#include <linux/init.h>
+#include <linux/uaccess.h>
+#include <linux/diagchar.h>
+#include <linux/sched.h>
+#include <linux/err.h>
+#include <linux/ratelimit.h>
+#include <linux/workqueue.h>
+#include <linux/pm_runtime.h>
+#include <linux/platform_device.h>
+#include <linux/smux.h>
+#include <asm/current.h>
+#ifdef CONFIG_DIAG_OVER_USB
+#include <mach/usbdiag.h>
+#endif
+#include "diagchar_hdlc.h"
+#include "diagmem.h"
+#include "diagchar.h"
+#include "diagfwd.h"
+#include "diagfwd_hsic.h"
+#include "diagfwd_smux.h"
+#include "diagfwd_bridge.h"
+
+#define READ_HSIC_BUF_SIZE 2048
+struct diag_hsic_dev *diag_hsic;
+
+static void diag_read_hsic_work_fn(struct work_struct *work)
+{
+ unsigned char *buf_in_hsic = NULL;
+ int num_reads_submitted = 0;
+ int err = 0;
+ int write_ptrs_available;
+ struct diag_hsic_dev *hsic_struct = container_of(work,
+ struct diag_hsic_dev, diag_read_hsic_work);
+ int index = hsic_struct->id;
+ static DEFINE_RATELIMIT_STATE(rl, 10 * HZ, 1);
+
+ if (!diag_hsic[index].hsic_ch) {
+ pr_err("DIAG in %s: diag_hsic[index].hsic_ch == 0\n", __func__);
+ return;
+ }
+
+ /*
+ * Determine the current number of available buffers for writing after
+ * reading from the HSIC has completed.
+ */
+ if (driver->logging_mode == MEMORY_DEVICE_MODE)
+ write_ptrs_available = diag_hsic[index].poolsize_hsic_write - diag_hsic[index].num_hsic_buf_tbl_entries;
+ else
+ write_ptrs_available = diag_hsic[index].poolsize_hsic_write - diag_hsic[index].count_hsic_write_pool;
+
+ /*
+ * Queue up a read on the HSIC for all available buffers in the
+ * pool, exhausting the pool.
+ */
+ do {
+ /*
+ * If no more write buffers are available,
+ * stop queuing reads
+ */
+ if (write_ptrs_available <= 0)
+ break;
+
+ write_ptrs_available--;
+
+ /*
+ * No sense queuing a read if the HSIC bridge was
+ * closed in another thread
+ */
+ if (!diag_hsic[index].hsic_ch)
+ break;
+
+ buf_in_hsic = diagmem_alloc(driver, READ_HSIC_BUF_SIZE, index + POOL_TYPE_HSIC);
+ if (buf_in_hsic) {
+ /*
+ * Initiate the read from the HSIC. The HSIC read is
+ * asynchronous. Once the read is complete the read
+ * callback function will be called.
+ */
+ pr_debug("diag: read from HSIC\n");
+ num_reads_submitted++;
+ err = diag_bridge_read(index, (char *)buf_in_hsic, READ_HSIC_BUF_SIZE);
+ if (err) {
+ num_reads_submitted--;
+
+ /* Return the buffer to the pool */
+ diagmem_free(driver, buf_in_hsic, index + POOL_TYPE_HSIC);
+
+ if (__ratelimit(&rl))
+ pr_err("diag: Error initiating HSIC read, err: %d\n", err);
+ /*
+ * An error occurred, discontinue queuing
+ * reads
+ */
+ break;
+ }
+ }
+ } while (buf_in_hsic);
+
+ /*
+ * If there are read buffers available and for some reason the
+ * read was not queued, and if no unrecoverable error occurred
+ * (-ENODEV is an unrecoverable error), then set up the next read
+ */
+ if ((diag_hsic[index].count_hsic_pool < diag_hsic[index].poolsize_hsic) && (num_reads_submitted == 0) && (err != -ENODEV) && (diag_hsic[index].hsic_ch != 0))
+ queue_work(diag_bridge[index].wq, &diag_hsic[index].diag_read_hsic_work);
+}
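The pool-exhausting loop above can be sketched in user space. This is a hypothetical analogue, not the driver's code: `fake_pool`, `queue_reads`, and the counters are stand-ins for `diagmem_alloc()`, `write_ptrs_available`, and the HSIC pool accounting. Each submitted read consumes both one free buffer and one write slot, and a submission error returns the buffer and stops queuing.

```c
/* User-space sketch of the read-queuing loop in diag_read_hsic_work_fn():
 * keep submitting reads while both a free buffer and a write slot remain,
 * and stop on the first submission error. All names here are illustrative. */
#include <stddef.h>

#define POOL_SIZE 4

struct fake_pool {
	int free_bufs;      /* buffers left in the read pool */
	int write_slots;    /* write pointers still available */
};

/* Returns the number of reads successfully queued; fail_after marks the
 * submission index at which a (simulated) diag_bridge_read() error occurs. */
static int queue_reads(struct fake_pool *p, int fail_after)
{
	int submitted = 0;

	while (p->write_slots > 0 && p->free_bufs > 0) {
		p->write_slots--;
		p->free_bufs--;          /* diagmem_alloc() analogue */
		if (submitted == fail_after) {
			p->free_bufs++;  /* return buffer to pool on error */
			break;           /* discontinue queuing reads */
		}
		submitted++;
	}
	return submitted;
}
```

Exhausting whichever resource runs out first mirrors the driver's `do { ... } while (buf_in_hsic)` loop bounded by `write_ptrs_available`.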
+
+static void diag_hsic_read_complete_callback(void *ctxt, char *buf, int buf_size, int actual_size)
+{
+ int err = -2;
+ int index = (int)ctxt;
+ static DEFINE_RATELIMIT_STATE(rl, 10 * HZ, 1);
+
+ if (!diag_hsic[index].hsic_ch) {
+ /*
+ * The HSIC channel is closed. Return the buffer to
+ * the pool. Do not send it on.
+ */
+ diagmem_free(driver, buf, index + POOL_TYPE_HSIC);
+ pr_debug("diag: In %s: hsic_ch == 0, actual_size: %d\n", __func__, actual_size);
+ return;
+ }
+
+ /*
+ * Note that zero length is valid and still needs to be sent to
+ * the USB only when we are logging data to the USB
+ */
+ if ((actual_size > 0) || ((actual_size == 0) && (driver->logging_mode == USB_MODE))) {
+ if (!buf) {
+ pr_err("diag: Out of diagmem for HSIC\n");
+ } else {
+ /*
+ * Send data in buf to be written on the
+ * appropriate device, e.g. USB MDM channel
+ */
+ diag_bridge[index].write_len = actual_size;
+ err = diag_device_write((void *)buf, index + HSIC_DATA, NULL);
+ /* If an error, return buffer to the pool */
+ if (err) {
+ diagmem_free(driver, buf, index + POOL_TYPE_HSIC);
+ if (__ratelimit(&rl))
+ pr_err("diag: In %s, error calling diag_device_write, err: %d\n", __func__, err);
+ }
+ }
+ } else {
+ /*
+ * The buffer has an error status associated with it. Do not
+ * pass it on. Note that -ENOENT is sent when the diag bridge
+ * is closed.
+ */
+ diagmem_free(driver, buf, index + POOL_TYPE_HSIC);
+ pr_debug("diag: In %s: error status: %d\n", __func__, actual_size);
+ }
+
+ /*
+ * If for some reason there was no HSIC data to write to the
+ * mdm channel, set up another read
+ */
+ if (err && ((driver->logging_mode == MEMORY_DEVICE_MODE) || (diag_bridge[index].usb_connected && !diag_hsic[index].hsic_suspend))) {
+ queue_work(diag_bridge[index].wq, &diag_hsic[index].diag_read_hsic_work);
+ }
+}
+
+static void diag_hsic_write_complete_callback(void *ctxt, char *buf, int buf_size, int actual_size)
+{
+ int index = (int)ctxt;
+
+ /* The write of the data to the HSIC bridge is complete */
+ diag_hsic[index].in_busy_hsic_write = 0;
+ wake_up_interruptible(&driver->wait_q);
+
+ if (!diag_hsic[index].hsic_ch) {
+ pr_err("DIAG in %s: hsic_ch == 0, ch = %d\n", __func__, index);
+ return;
+ }
+
+ if (actual_size < 0)
+ pr_err("DIAG in %s: actual_size: %d\n", __func__, actual_size);
+
+ if (diag_bridge[index].usb_connected && (driver->logging_mode == USB_MODE))
+ queue_work(diag_bridge[index].wq, &diag_bridge[index].diag_read_work);
+}
+
+static int diag_hsic_suspend(void *ctxt)
+{
+ int index = (int)ctxt;
+ pr_debug("diag: hsic_suspend\n");
+
+ /* Don't allow suspend if a write in the HSIC is in progress */
+ if (diag_hsic[index].in_busy_hsic_write)
+ return -EBUSY;
+
+ /*
+ * Don't allow suspend if in MEMORY_DEVICE_MODE and HSIC data
+ * has been requested
+ */
+ if (driver->logging_mode == MEMORY_DEVICE_MODE && diag_hsic[index].hsic_ch)
+ return -EBUSY;
+
+ diag_hsic[index].hsic_suspend = 1;
+
+ return 0;
+}
+
+static void diag_hsic_resume(void *ctxt)
+{
+ int index = (int)ctxt;
+
+ pr_debug("diag: hsic_resume\n");
+ diag_hsic[index].hsic_suspend = 0;
+
+ if ((diag_hsic[index].count_hsic_pool < diag_hsic[index].poolsize_hsic) && ((driver->logging_mode == MEMORY_DEVICE_MODE) || (diag_bridge[index].usb_connected)))
+ queue_work(diag_bridge[index].wq, &diag_hsic[index].diag_read_hsic_work);
+}
+
+struct diag_bridge_ops hsic_diag_bridge_ops[MAX_HSIC_CH] = {
+ {
+ .ctxt = NULL,
+ .read_complete_cb = diag_hsic_read_complete_callback,
+ .write_complete_cb = diag_hsic_write_complete_callback,
+ .suspend = diag_hsic_suspend,
+ .resume = diag_hsic_resume,
+ },
+ {
+ .ctxt = NULL,
+ .read_complete_cb = diag_hsic_read_complete_callback,
+ .write_complete_cb = diag_hsic_write_complete_callback,
+ .suspend = diag_hsic_suspend,
+ .resume = diag_hsic_resume,
+ }
+};
+
+void diag_hsic_close(int ch_id)
+{
+ if (diag_hsic[ch_id].hsic_device_enabled) {
+ diag_hsic[ch_id].hsic_ch = 0;
+ if (diag_hsic[ch_id].hsic_device_opened) {
+ diag_hsic[ch_id].hsic_device_opened = 0;
+ diag_bridge_close(ch_id);
+ pr_debug("diag: %s: closed successfully ch %d\n", __func__, ch_id);
+ } else {
+ pr_debug("diag: %s: already closed ch %d\n", __func__, ch_id);
+ }
+ } else {
+ pr_debug("diag: %s: HSIC device already removed ch %d\n", __func__, ch_id);
+ }
+}
+
+/* diagfwd_cancel_hsic is called to cancel outstanding read/writes */
+int diagfwd_cancel_hsic(int reopen)
+{
+ int err, i;
+
+ /* Cancel it for all active HSIC bridges */
+ for (i = 0; i < MAX_HSIC_CH; i++) {
+ if (!diag_bridge[i].enabled)
+ continue;
+ mutex_lock(&diag_bridge[i].bridge_mutex);
+ if (diag_hsic[i].hsic_device_enabled) {
+ if (diag_hsic[i].hsic_device_opened) {
+ diag_hsic[i].hsic_ch = 0;
+ diag_hsic[i].hsic_device_opened = 0;
+ diag_bridge_close(i);
+ if (reopen) {
+ hsic_diag_bridge_ops[i].ctxt = (void *)(i);
+ err = diag_bridge_open(i, &hsic_diag_bridge_ops[i]);
+ if (err) {
+ pr_err("diag: HSIC %d channel open error: %d\n", i, err);
+ } else {
+ pr_debug("diag: opened HSIC channel: %d\n", i);
+ diag_hsic[i].hsic_device_opened = 1;
+ diag_hsic[i].hsic_ch = 1;
+ }
+ diag_hsic[i].hsic_data_requested = 1;
+ } else {
+ diag_hsic[i].hsic_data_requested = 0;
+ }
+ }
+ }
+ mutex_unlock(&diag_bridge[i].bridge_mutex);
+ }
+ return 0;
+}
+
+/*
+ * diagfwd_write_complete_hsic is called after the asynchronous
+ * usb_diag_write() on mdm channel is complete
+ */
+int diagfwd_write_complete_hsic(struct diag_request *diag_write_ptr, int index)
+{
+ unsigned char *buf = (diag_write_ptr) ? diag_write_ptr->buf : NULL;
+
+ if (buf) {
+ /* Return buffers to their pools */
+ diagmem_free(driver, (unsigned char *)buf, index + POOL_TYPE_HSIC);
+ diagmem_free(driver, (unsigned char *)diag_write_ptr, index + POOL_TYPE_HSIC_WRITE);
+ }
+
+ if (!diag_hsic[index].hsic_ch) {
+ pr_err("diag: In %s: hsic_ch == 0\n", __func__);
+ return 0;
+ }
+
+ /* Read data from the HSIC */
+ queue_work(diag_bridge[index].wq, &diag_hsic[index].diag_read_hsic_work);
+
+ return 0;
+}
+
+void diag_usb_read_complete_hsic_fn(struct work_struct *w)
+{
+ struct diag_bridge_dev *bridge_struct = container_of(w,
+ struct diag_bridge_dev, usb_read_complete_work);
+
+ diagfwd_read_complete_bridge(diag_bridge[bridge_struct->id].usb_read_ptr);
+}
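The work functions above recover their device structure from the embedded `work_struct` via `container_of()`. A minimal user-space sketch of that idiom, with an `int` standing in for `struct work_struct` and a hypothetical `bridge_dev` in place of `diag_bridge_dev`:

```c
/* User-space sketch of the container_of() idiom used by the work functions:
 * recover the enclosing structure from a pointer to one of its members. */
#include <stddef.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct bridge_dev {
	int id;
	int usb_read_complete_work;  /* stand-in for struct work_struct */
};

/* Given a pointer to the embedded member, return the owner's id. */
static int bridge_id_from_work(int *work)
{
	return container_of(work, struct bridge_dev,
			    usb_read_complete_work)->id;
}
```

This is why each work function can be shared across bridges: the callback receives only the member pointer, and pointer arithmetic against `offsetof()` yields the per-channel structure.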
+
+void diag_read_usb_hsic_work_fn(struct work_struct *work)
+{
+ struct diag_bridge_dev *bridge_struct = container_of(work,
+ struct diag_bridge_dev, diag_read_work);
+ int index = bridge_struct->id;
+
+ if (!diag_hsic[index].hsic_ch) {
+ pr_err("diag: in %s: hsic_ch == 0\n", __func__);
+ return;
+ }
+ /*
+ * If no data is currently being read from the USB mdm channel
+ * and no mdm channel data is currently being written to the
+ * HSIC, set up the next read from the USB mdm channel.
+ */
+ if (!diag_hsic[index].in_busy_hsic_read_on_device && !diag_hsic[index].in_busy_hsic_write) {
+ APPEND_DEBUG('x');
+ /* Setup the next read from usb mdm channel */
+ diag_hsic[index].in_busy_hsic_read_on_device = 1;
+ diag_bridge[index].usb_read_ptr->buf = diag_bridge[index].usb_buf_out;
+ diag_bridge[index].usb_read_ptr->length = USB_MAX_OUT_BUF;
+ diag_bridge[index].usb_read_ptr->context = (void *)index;
+ usb_diag_read(diag_bridge[index].ch, diag_bridge[index].usb_read_ptr);
+ APPEND_DEBUG('y');
+ }
+ /* If for some reason there was no mdm channel read initiated,
+ * queue up the reading of data from the mdm channel
+ */
+
+ if (!diag_hsic[index].in_busy_hsic_read_on_device && (driver->logging_mode == USB_MODE))
+ queue_work(diag_bridge[index].wq, &(diag_bridge[index].diag_read_work));
+}
+
+static int diag_hsic_probe(struct platform_device *pdev)
+{
+ int err = 0;
+
+ /* pdev->id indicates which HSIC channel is in use: 0 stands for
+ * HSIC (CP1), 1 for HS-USB (CP2)
+ */
+ pr_debug("diag: in %s, ch = %d\n", __func__, pdev->id);
+ mutex_lock(&diag_bridge[pdev->id].bridge_mutex);
+ if (!diag_hsic[pdev->id].hsic_inited) {
+ spin_lock_init(&diag_hsic[pdev->id].hsic_spinlock);
+ diag_hsic[pdev->id].num_hsic_buf_tbl_entries = 0;
+ if (diag_hsic[pdev->id].hsic_buf_tbl == NULL)
+ diag_hsic[pdev->id].hsic_buf_tbl = kzalloc(NUM_HSIC_BUF_TBL_ENTRIES * sizeof(struct diag_write_device), GFP_KERNEL);
+ if (diag_hsic[pdev->id].hsic_buf_tbl == NULL) {
+ mutex_unlock(&diag_bridge[pdev->id].bridge_mutex);
+ return -ENOMEM;
+ }
+ diag_hsic[pdev->id].id = pdev->id;
+ diag_hsic[pdev->id].count_hsic_pool = 0;
+ diag_hsic[pdev->id].count_hsic_write_pool = 0;
+ diag_hsic[pdev->id].itemsize_hsic = READ_HSIC_BUF_SIZE;
+ diag_hsic[pdev->id].poolsize_hsic = N_MDM_WRITE;
+ diag_hsic[pdev->id].itemsize_hsic_write = sizeof(struct diag_request);
+ diag_hsic[pdev->id].poolsize_hsic_write = N_MDM_WRITE;
+ diagmem_hsic_init(pdev->id);
+ INIT_WORK(&(diag_hsic[pdev->id].diag_read_hsic_work), diag_read_hsic_work_fn);
+ diag_hsic[pdev->id].hsic_data_requested = (driver->logging_mode == MEMORY_DEVICE_MODE) ? 0 : 1;
+ diag_hsic[pdev->id].hsic_inited = 1;
+ }
+ /*
+ * The probe function was called after the usb was connected
+ * on the legacy channel OR ODL is turned on and hsic data is
+ * requested. Communication over usb mdm and HSIC needs to be
+ * turned on.
+ */
+ if ((diag_bridge[pdev->id].usb_connected &&
+ (driver->logging_mode != MEMORY_DEVICE_MODE)) || ((driver->logging_mode == MEMORY_DEVICE_MODE) && diag_hsic[pdev->id].hsic_data_requested)) {
+ if (diag_hsic[pdev->id].hsic_device_opened) {
+ /* should not happen. close it before re-opening */
+ pr_warn("diag: HSIC channel already opened in probe\n");
+ diag_bridge_close(pdev->id);
+ }
+ hsic_diag_bridge_ops[pdev->id].ctxt = (void *)(pdev->id);
+ err = diag_bridge_open(pdev->id, &hsic_diag_bridge_ops[pdev->id]);
+ if (err) {
+ pr_err("diag: could not open HSIC, err: %d\n", err);
+ diag_hsic[pdev->id].hsic_device_opened = 0;
+ mutex_unlock(&diag_bridge[pdev->id].bridge_mutex);
+ return err;
+ }
+
+ pr_info("diag: opened HSIC bridge, ch = %d\n", pdev->id);
+ diag_hsic[pdev->id].hsic_device_opened = 1;
+ diag_hsic[pdev->id].hsic_ch = 1;
+ diag_hsic[pdev->id].in_busy_hsic_read_on_device = 0;
+ diag_hsic[pdev->id].in_busy_hsic_write = 0;
+
+ if (diag_bridge[pdev->id].usb_connected) {
+ /* Poll USB mdm channel to check for data */
+ queue_work(diag_bridge[pdev->id].wq, &diag_bridge[pdev->id].diag_read_work);
+ }
+ /* Poll HSIC channel to check for data */
+ queue_work(diag_bridge[pdev->id].wq, &diag_hsic[pdev->id].diag_read_hsic_work);
+ }
+ /* The HSIC (diag_bridge) platform device driver is enabled */
+ diag_hsic[pdev->id].hsic_device_enabled = 1;
+ mutex_unlock(&diag_bridge[pdev->id].bridge_mutex);
+ return err;
+}
+
+static int diag_hsic_remove(struct platform_device *pdev)
+{
+ pr_debug("diag: %s called\n", __func__);
+ if (diag_hsic[pdev->id].hsic_device_enabled) {
+ mutex_lock(&diag_bridge[pdev->id].bridge_mutex);
+ diag_hsic_close(pdev->id);
+ diag_hsic[pdev->id].hsic_device_enabled = 0;
+ mutex_unlock(&diag_bridge[pdev->id].bridge_mutex);
+ }
+
+ return 0;
+}
+
+static int diagfwd_hsic_runtime_suspend(struct device *dev)
+{
+ dev_dbg(dev, "pm_runtime: suspending...\n");
+ return 0;
+}
+
+static int diagfwd_hsic_runtime_resume(struct device *dev)
+{
+ dev_dbg(dev, "pm_runtime: resuming...\n");
+ return 0;
+}
+
+static const struct dev_pm_ops diagfwd_hsic_dev_pm_ops = {
+ .runtime_suspend = diagfwd_hsic_runtime_suspend,
+ .runtime_resume = diagfwd_hsic_runtime_resume,
+};
+
+struct platform_driver msm_hsic_ch_driver = {
+ .probe = diag_hsic_probe,
+ .remove = diag_hsic_remove,
+ .driver = {
+ .name = "diag_bridge",
+ .owner = THIS_MODULE,
+ .pm = &diagfwd_hsic_dev_pm_ops,
+ },
+};
diff --git a/drivers/char/diag/diagfwd_hsic.h b/drivers/char/diag/diagfwd_hsic.h
new file mode 100644
index 0000000..64556f2
--- /dev/null
+++ b/drivers/char/diag/diagfwd_hsic.h
@@ -0,0 +1,59 @@
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef DIAGFWD_HSIC_H
+#define DIAGFWD_HSIC_H
+
+#include <mach/diag_bridge.h>
+
+#define N_MDM_WRITE 8
+#define N_MDM_READ 1
+#define NUM_HSIC_BUF_TBL_ENTRIES N_MDM_WRITE
+#define MAX_HSIC_CH 4
+#define REOPEN_HSIC 1
+#define DONT_REOPEN_HSIC 0
+int diagfwd_write_complete_hsic(struct diag_request *, int index);
+int diagfwd_cancel_hsic(int reopen);
+void diag_read_usb_hsic_work_fn(struct work_struct *work);
+void diag_usb_read_complete_hsic_fn(struct work_struct *w);
+extern struct diag_bridge_ops hsic_diag_bridge_ops[MAX_HSIC_CH];
+extern struct platform_driver msm_hsic_ch_driver;
+void diag_hsic_close(int);
+
+/* Diag-HSIC structure: up to n HSIC bridges can be used at the same time,
+ * for instance HSIC(0) and HS-USB(1) operating concurrently
+ */
+struct diag_hsic_dev {
+ int id;
+ int hsic_ch;
+ int hsic_inited;
+ int hsic_device_enabled;
+ int hsic_device_opened;
+ int hsic_suspend;
+ int hsic_data_requested;
+ int in_busy_hsic_read_on_device;
+ int in_busy_hsic_write;
+ struct work_struct diag_read_hsic_work;
+ int count_hsic_pool;
+ int count_hsic_write_pool;
+ unsigned int poolsize_hsic;
+ unsigned int poolsize_hsic_write;
+ unsigned int itemsize_hsic;
+ unsigned int itemsize_hsic_write;
+ mempool_t *diag_hsic_pool;
+ mempool_t *diag_hsic_write_pool;
+ int num_hsic_buf_tbl_entries;
+ struct diag_write_device *hsic_buf_tbl;
+ spinlock_t hsic_spinlock;
+};
+
+#endif
diff --git a/drivers/char/diag/diagfwd_sdio.c b/drivers/char/diag/diagfwd_sdio.c
new file mode 100644
index 0000000..53dc1de
--- /dev/null
+++ b/drivers/char/diag/diagfwd_sdio.c
@@ -0,0 +1,280 @@
+/* Copyright (c) 2011, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/slab.h>
+#include <linux/init.h>
+#include <linux/uaccess.h>
+#include <linux/diagchar.h>
+#include <linux/sched.h>
+#include <linux/err.h>
+#include <linux/workqueue.h>
+#include <linux/pm_runtime.h>
+#include <linux/platform_device.h>
+#include <asm/current.h>
+#ifdef CONFIG_DIAG_OVER_USB
+#include <mach/usbdiag.h>
+#endif
+#include "diagchar_hdlc.h"
+#include "diagmem.h"
+#include "diagchar.h"
+#include "diagfwd.h"
+#include "diagfwd_sdio.h"
+
+void __diag_sdio_send_req(void)
+{
+ int r = 0;
+ void *buf = driver->buf_in_sdio;
+
+ if (driver->sdio_ch && (!driver->in_busy_sdio)) {
+ r = sdio_read_avail(driver->sdio_ch);
+
+ if (r > IN_BUF_SIZE) {
+ if (r < MAX_IN_BUF_SIZE) {
+ pr_err("diag: SDIO sending packets more than %d bytes\n", r);
+ buf = krealloc(buf, r, GFP_KERNEL);
+ } else {
+ pr_err("diag: SDIO sending packets of more than %d bytes\n", MAX_IN_BUF_SIZE);
+ return;
+ }
+ }
+ if (r > 0) {
+ if (!buf)
+ printk(KERN_INFO "Out of diagmem for SDIO\n");
+ else {
+ APPEND_DEBUG('i');
+ sdio_read(driver->sdio_ch, buf, r);
+ if (((!driver->usb_connected) && (driver->logging_mode == USB_MODE)) || (driver->logging_mode == NO_LOGGING_MODE)) {
+ /* Drop the diag payload */
+ driver->in_busy_sdio = 0;
+ return;
+ }
+ APPEND_DEBUG('j');
+ driver->write_ptr_mdm->length = r;
+ driver->in_busy_sdio = 1;
+ diag_device_write(buf, SDIO_DATA, driver->write_ptr_mdm);
+ }
+ }
+ }
+}
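The grow-on-demand buffer handling in `__diag_sdio_send_req()` can be sketched with a user-space analogue: if the pending packet exceeds the current buffer but stays under a hard cap, reallocate; beyond the cap, refuse the read. The names, sizes, and `realloc()` (in place of `krealloc()`) are assumptions for illustration.

```c
/* User-space sketch of the buffer-fitting logic in __diag_sdio_send_req().
 * BUF_SIZE / MAX_BUF_SIZE stand in for IN_BUF_SIZE / MAX_IN_BUF_SIZE. */
#include <stdlib.h>

#define BUF_SIZE     4096
#define MAX_BUF_SIZE 16384

/* Return a buffer large enough for `need` bytes, growing *cap as needed,
 * or NULL if the request exceeds the hard cap (caller keeps the old buffer,
 * since realloc() on failure also leaves the original allocation intact). */
static void *fit_buffer(void *buf, size_t *cap, size_t need)
{
	void *nbuf;

	if (need <= *cap)
		return buf;          /* already large enough */
	if (need > MAX_BUF_SIZE)
		return NULL;         /* oversized packet: drop the read */
	nbuf = realloc(buf, need);   /* krealloc() analogue */
	if (nbuf)
		*cap = need;
	return nbuf;
}
```

Capping the growth keeps one misbehaving peer from forcing unbounded allocations, which is the same reason the driver bails out above `MAX_IN_BUF_SIZE`.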
+
+static void diag_read_sdio_work_fn(struct work_struct *work)
+{
+ __diag_sdio_send_req();
+}
+
+static void diag_sdio_notify(void *ctxt, unsigned event)
+{
+ if (event == SDIO_EVENT_DATA_READ_AVAIL)
+ queue_work(driver->diag_sdio_wq, &(driver->diag_read_sdio_work));
+
+ if (event == SDIO_EVENT_DATA_WRITE_AVAIL)
+ wake_up_interruptible(&driver->wait_q);
+}
+
+static int diag_sdio_close(void)
+{
+ queue_work(driver->diag_sdio_wq, &(driver->diag_close_sdio_work));
+ return 0;
+}
+
+static void diag_close_sdio_work_fn(struct work_struct *work)
+{
+ pr_debug("diag: sdio close called\n");
+ if (sdio_close(driver->sdio_ch))
+ pr_err("diag: could not close SDIO channel\n");
+ else
+ driver->sdio_ch = NULL; /* channel successfully closed */
+}
+
+int diagfwd_connect_sdio(void)
+{
+ int err;
+
+ err = usb_diag_alloc_req(driver->mdm_ch, N_MDM_SDIO_WRITE, N_MDM_SDIO_READ);
+ if (err)
+ pr_err("diag: unable to alloc USB req on mdm ch\n");
+
+ driver->in_busy_sdio = 0;
+ if (!driver->sdio_ch) {
+ err = sdio_open("SDIO_DIAG", &driver->sdio_ch, driver, diag_sdio_notify);
+ if (err)
+ pr_info("diag: could not open SDIO channel\n");
+ else
+ pr_info("diag: opened SDIO channel\n");
+ } else {
+ pr_info("diag: SDIO channel already open\n");
+ }
+
+ /* Poll USB channel to check for data */
+ queue_work(driver->diag_sdio_wq, &(driver->diag_read_mdm_work));
+ /* Poll SDIO channel to check for data */
+ queue_work(driver->diag_sdio_wq, &(driver->diag_read_sdio_work));
+ return 0;
+}
+
+int diagfwd_disconnect_sdio(void)
+{
+ usb_diag_free_req(driver->mdm_ch);
+ if (driver->sdio_ch && (driver->logging_mode == USB_MODE)) {
+ driver->in_busy_sdio = 1;
+ diag_sdio_close();
+ }
+ return 0;
+}
+
+int diagfwd_write_complete_sdio(void)
+{
+ driver->in_busy_sdio = 0;
+ APPEND_DEBUG('q');
+ queue_work(driver->diag_sdio_wq, &(driver->diag_read_sdio_work));
+ return 0;
+}
+
+int diagfwd_read_complete_sdio(void)
+{
+ queue_work(driver->diag_sdio_wq, &(driver->diag_read_mdm_work));
+ return 0;
+}
+
+void diag_read_mdm_work_fn(struct work_struct *work)
+{
+ if (driver->sdio_ch) {
+ wait_event_interruptible(driver->wait_q, ((sdio_write_avail(driver->sdio_ch) >= driver->read_len_mdm) || !(driver->sdio_ch)));
+ if (!(driver->sdio_ch)) {
+ pr_alert("diag: sdio channel not valid\n");
+ return;
+ }
+ if (driver->sdio_ch && driver->usb_buf_mdm_out && (driver->read_len_mdm > 0))
+ sdio_write(driver->sdio_ch, driver->usb_buf_mdm_out, driver->read_len_mdm);
+ APPEND_DEBUG('x');
+ driver->usb_read_mdm_ptr->buf = driver->usb_buf_mdm_out;
+ driver->usb_read_mdm_ptr->length = USB_MAX_OUT_BUF;
+ usb_diag_read(driver->mdm_ch, driver->usb_read_mdm_ptr);
+ APPEND_DEBUG('y');
+ }
+}
+
+static int diag_sdio_probe(struct platform_device *pdev)
+{
+ int err;
+
+ err = sdio_open("SDIO_DIAG", &driver->sdio_ch, driver, diag_sdio_notify);
+ if (err)
+ printk(KERN_INFO "DIAG could not open SDIO channel\n");
+ else {
+ printk(KERN_INFO "DIAG opened SDIO channel\n");
+ queue_work(driver->diag_sdio_wq, &(driver->diag_read_mdm_work));
+ }
+
+ return err;
+}
+
+static int diag_sdio_remove(struct platform_device *pdev)
+{
+ pr_debug("diag: sdio remove called\n");
+ /* Disable SDIO channel to prevent further read/write */
+ driver->sdio_ch = NULL;
+ return 0;
+}
+
+static int diagfwd_sdio_runtime_suspend(struct device *dev)
+{
+ dev_dbg(dev, "pm_runtime: suspending...\n");
+ return 0;
+}
+
+static int diagfwd_sdio_runtime_resume(struct device *dev)
+{
+ dev_dbg(dev, "pm_runtime: resuming...\n");
+ return 0;
+}
+
+static const struct dev_pm_ops diagfwd_sdio_dev_pm_ops = {
+ .runtime_suspend = diagfwd_sdio_runtime_suspend,
+ .runtime_resume = diagfwd_sdio_runtime_resume,
+};
+
+static struct platform_driver msm_sdio_ch_driver = {
+ .probe = diag_sdio_probe,
+ .remove = diag_sdio_remove,
+ .driver = {
+ .name = "SDIO_DIAG",
+ .owner = THIS_MODULE,
+ .pm = &diagfwd_sdio_dev_pm_ops,
+ },
+};
+
+void diagfwd_sdio_init(void)
+{
+ int ret;
+
+ driver->read_len_mdm = 0;
+ if (driver->buf_in_sdio == NULL)
+ driver->buf_in_sdio = kzalloc(IN_BUF_SIZE, GFP_KERNEL);
+ if (driver->buf_in_sdio == NULL)
+ goto err;
+ if (driver->usb_buf_mdm_out == NULL)
+ driver->usb_buf_mdm_out = kzalloc(USB_MAX_OUT_BUF, GFP_KERNEL);
+ if (driver->usb_buf_mdm_out == NULL)
+ goto err;
+ if (driver->write_ptr_mdm == NULL)
+ driver->write_ptr_mdm = kzalloc(sizeof(struct diag_request), GFP_KERNEL);
+ if (driver->write_ptr_mdm == NULL)
+ goto err;
+ if (driver->usb_read_mdm_ptr == NULL)
+ driver->usb_read_mdm_ptr = kzalloc(sizeof(struct diag_request), GFP_KERNEL);
+ if (driver->usb_read_mdm_ptr == NULL)
+ goto err;
+ driver->diag_sdio_wq = create_singlethread_workqueue("diag_sdio_wq");
+#ifdef CONFIG_DIAG_OVER_USB
+ driver->mdm_ch = usb_diag_open(DIAG_MDM, driver, diag_usb_legacy_notifier);
+ if (IS_ERR(driver->mdm_ch)) {
+ printk(KERN_ERR "Unable to open USB diag MDM channel\n");
+ goto err;
+ }
+ INIT_WORK(&(driver->diag_read_mdm_work), diag_read_mdm_work_fn);
+#endif
+ INIT_WORK(&(driver->diag_read_sdio_work), diag_read_sdio_work_fn);
+ INIT_WORK(&(driver->diag_close_sdio_work), diag_close_sdio_work_fn);
+ ret = platform_driver_register(&msm_sdio_ch_driver);
+ if (ret)
+ printk(KERN_INFO "DIAG could not register SDIO device\n");
+ else
+ printk(KERN_INFO "DIAG registered SDIO device\n");
+
+ return;
+err:
+ printk(KERN_INFO "diag: could not initialize diag buf for SDIO\n");
+ kfree(driver->buf_in_sdio);
+ kfree(driver->usb_buf_mdm_out);
+ kfree(driver->write_ptr_mdm);
+ kfree(driver->usb_read_mdm_ptr);
+ if (driver->diag_sdio_wq)
+ destroy_workqueue(driver->diag_sdio_wq);
+}
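The staged allocations and single `err:` label in `diagfwd_sdio_init()` rely on `kfree(NULL)` being a no-op, so one cleanup path covers failure at any stage. A user-space sketch of the same pattern (names hypothetical, `calloc`/`free` standing in for `kzalloc`/`kfree`):

```c
/* User-space sketch of the staged-allocation pattern in diagfwd_sdio_init():
 * allocate several buffers in sequence; on any failure, free everything
 * through one error path. free(NULL) is a no-op, matching kfree(NULL). */
#include <stdlib.h>

struct bufs {
	void *a, *b, *c;
};

/* Returns 0 on success; on failure returns -1 with all fields NULL. */
static int init_bufs(struct bufs *s, size_t na, size_t nb, size_t nc)
{
	s->a = calloc(1, na);
	if (!s->a)
		goto err;
	s->b = calloc(1, nb);
	if (!s->b)
		goto err;
	s->c = calloc(1, nc);
	if (!s->c)
		goto err;
	return 0;
err:
	free(s->a);   /* safe even if only some allocations succeeded */
	free(s->b);
	free(s->c);
	s->a = s->b = s->c = NULL;
	return -1;
}
```

One shared error path keeps the unwind code in a single place, which is the usual kernel rationale for `goto err` over nested cleanup.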
+
+void diagfwd_sdio_exit(void)
+{
+#ifdef CONFIG_DIAG_OVER_USB
+ if (driver->usb_connected)
+ usb_diag_free_req(driver->mdm_ch);
+#endif
+ platform_driver_unregister(&msm_sdio_ch_driver);
+#ifdef CONFIG_DIAG_OVER_USB
+ usb_diag_close(driver->mdm_ch);
+#endif
+ kfree(driver->buf_in_sdio);
+ kfree(driver->usb_buf_mdm_out);
+ kfree(driver->write_ptr_mdm);
+ kfree(driver->usb_read_mdm_ptr);
+ destroy_workqueue(driver->diag_sdio_wq);
+}
diff --git a/drivers/char/diag/diagfwd_sdio.h b/drivers/char/diag/diagfwd_sdio.h
new file mode 100644
index 0000000..ead7deb
--- /dev/null
+++ b/drivers/char/diag/diagfwd_sdio.h
@@ -0,0 +1,27 @@
+/* Copyright (c) 2011, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef DIAGFWD_SDIO_H
+#define DIAGFWD_SDIO_H
+
+#include <mach/sdio_al.h>
+#define N_MDM_SDIO_WRITE 1 /* Upgrade to 2 with ping pong buffer */
+#define N_MDM_SDIO_READ 1
+
+void diagfwd_sdio_init(void);
+void diagfwd_sdio_exit(void);
+int diagfwd_connect_sdio(void);
+int diagfwd_disconnect_sdio(void);
+int diagfwd_read_complete_sdio(void);
+int diagfwd_write_complete_sdio(void);
+
+#endif
diff --git a/drivers/char/diag/diagfwd_smux.c b/drivers/char/diag/diagfwd_smux.c
new file mode 100644
index 0000000..0335576
--- /dev/null
+++ b/drivers/char/diag/diagfwd_smux.c
@@ -0,0 +1,199 @@
+/* Copyright (c) 2012, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#include <linux/termios.h>
+#include <linux/slab.h>
+#include <linux/diagchar.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <mach/usbdiag.h>
+#include "diagchar.h"
+#include "diagfwd.h"
+#include "diagfwd_smux.h"
+#include "diagfwd_hsic.h"
+#include "diagfwd_bridge.h"
+
+void diag_smux_event(void *priv, int event_type, const void *metadata)
+{
+ unsigned char *rx_buf;
+ int len;
+
+ switch (event_type) {
+ case SMUX_CONNECTED:
+ pr_info("diag: SMUX_CONNECTED received\n");
+ driver->smux_connected = 1;
+ driver->in_busy_smux = 0;
+ /* read data from USB MDM channel & Initiate first write */
+ queue_work(diag_bridge[SMUX].wq, &diag_bridge[SMUX].diag_read_work);
+ break;
+ case SMUX_DISCONNECTED:
+ driver->smux_connected = 0;
+ driver->lcid = LCID_INVALID;
+ msm_smux_close(LCID_VALID);
+ pr_info("diag: SMUX_DISCONNECTED received\n");
+ break;
+ case SMUX_WRITE_DONE:
+ pr_debug("diag: SMUX Write done\n");
+ break;
+ case SMUX_WRITE_FAIL:
+ pr_info("diag: SMUX Write Failed\n");
+ break;
+ case SMUX_READ_FAIL:
+ pr_info("diag: SMUX Read Failed\n");
+ break;
+ case SMUX_READ_DONE:
+ len = ((struct smux_meta_read *)metadata)->len;
+ rx_buf = ((struct smux_meta_read *)metadata)->buffer;
+ driver->write_ptr_mdm->length = len;
+ diag_device_write(driver->buf_in_smux, SMUX_DATA, driver->write_ptr_mdm);
+ break;
+ }
+}
+
+int diagfwd_write_complete_smux(void)
+{
+ pr_debug("diag: clear in_busy_smux\n");
+ driver->in_busy_smux = 0;
+ return 0;
+}
+
+int diagfwd_read_complete_smux(void)
+{
+ queue_work(diag_bridge[SMUX].wq, &diag_bridge[SMUX].diag_read_work);
+ return 0;
+}
+
+int diag_get_rx_buffer(void *priv, void **pkt_priv, void **buffer, int size)
+{
+ if (!driver->in_busy_smux) {
+ *pkt_priv = (void *)0x1234;
+ *buffer = driver->buf_in_smux;
+ pr_debug("diag: set in_busy_smux as 1\n");
+ driver->in_busy_smux = 1;
+ } else {
+ pr_debug("diag: read buffer for SMUX is BUSY\n");
+ return -EAGAIN;
+ }
+ return 0;
+}
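`diag_get_rx_buffer()` implements a single-buffer handoff guarded by a busy flag: the RX path owns one buffer, hands it out once, and rejects further requests with `-EAGAIN` until the consumer releases it. A minimal user-space sketch, with `put_rx_buffer()` playing the role of `diagfwd_write_complete_smux()` (names and sizes are illustrative):

```c
/* User-space sketch of the busy-flag buffer handoff in diag_get_rx_buffer():
 * one buffer, one outstanding read; -EAGAIN while it is still in use. */
#include <errno.h>
#include <stddef.h>

static int in_busy;           /* analogue of driver->in_busy_smux */
static char rx_buf[2048];     /* analogue of driver->buf_in_smux */

static int get_rx_buffer(void **buffer)
{
	if (in_busy)
		return -EAGAIN;  /* previous read still being consumed */
	in_busy = 1;
	*buffer = rx_buf;
	return 0;
}

static void put_rx_buffer(void)
{
	in_busy = 0;             /* write-complete callback analogue */
}
```

Returning `-EAGAIN` pushes flow control back to the producer (SMUX retries the RX allocation), which avoids needing a queue of RX buffers at the cost of serializing reads.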
+
+void diag_usb_read_complete_smux_fn(struct work_struct *w)
+{
+ diagfwd_read_complete_bridge(diag_bridge[SMUX].usb_read_ptr);
+}
+
+void diag_read_usb_smux_work_fn(struct work_struct *work)
+{
+ int ret;
+
+ if (driver->diag_smux_enabled) {
+ if (driver->lcid && diag_bridge[SMUX].usb_buf_out && (diag_bridge[SMUX].read_len > 0) && driver->smux_connected) {
+ ret = msm_smux_write(driver->lcid, NULL, diag_bridge[SMUX].usb_buf_out, diag_bridge[SMUX].read_len);
+ if (ret)
+ pr_err("diag: writing to SMUX ch, r = %d, lcid = %d\n", ret, driver->lcid);
+ }
+ diag_bridge[SMUX].usb_read_ptr->buf = diag_bridge[SMUX].usb_buf_out;
+ diag_bridge[SMUX].usb_read_ptr->length = USB_MAX_OUT_BUF;
+ diag_bridge[SMUX].usb_read_ptr->context = (void *)SMUX;
+ usb_diag_read(diag_bridge[SMUX].ch, diag_bridge[SMUX].usb_read_ptr);
+ return;
+ }
+}
+
+static int diagfwd_smux_runtime_suspend(struct device *dev)
+{
+ dev_dbg(dev, "pm_runtime: suspending...\n");
+ return 0;
+}
+
+static int diagfwd_smux_runtime_resume(struct device *dev)
+{
+ dev_dbg(dev, "pm_runtime: resuming...\n");
+ return 0;
+}
+
+static const struct dev_pm_ops diagfwd_smux_dev_pm_ops = {
+ .runtime_suspend = diagfwd_smux_runtime_suspend,
+ .runtime_resume = diagfwd_smux_runtime_resume,
+};
+
+int diagfwd_connect_smux(void)
+{
+ void *priv = NULL;
+ int ret = 0;
+
+ if (driver->lcid == LCID_INVALID) {
+ ret = msm_smux_open(LCID_VALID, priv, diag_smux_event, diag_get_rx_buffer);
+ if (!ret) {
+ driver->lcid = LCID_VALID;
+ msm_smux_tiocm_set(driver->lcid, TIOCM_DTR, 0);
+ pr_info("diag: opened SMUX ch, r = %d\n", ret);
+ } else {
+ pr_err("diag: failed to open SMUX ch, r = %d\n", ret);
+ return ret;
+ }
+ }
+ /* Poll USB channel to check for data */
+ queue_work(diag_bridge[SMUX].wq, &(diag_bridge[SMUX].diag_read_work));
+ return ret;
+}
+
+static int diagfwd_smux_probe(struct platform_device *pdev)
+{
+ int ret = 0;
+
+ pr_info("diag: SMUX probe called\n");
+ driver->lcid = LCID_INVALID;
+ driver->diag_smux_enabled = 1;
+ if (driver->buf_in_smux == NULL) {
+ driver->buf_in_smux = kzalloc(IN_BUF_SIZE, GFP_KERNEL);
+ if (driver->buf_in_smux == NULL)
+ goto err;
+ }
+ /* Only required for the local loopback test:
+ * ret = msm_smux_set_ch_option(LCID_VALID,
+ *                              SMUX_CH_OPTION_LOCAL_LOOPBACK, 0);
+ * if (ret)
+ *         pr_err("diag: error setting SMUX ch option, r = %d\n", ret);
+ */
+ if (driver->write_ptr_mdm == NULL)
+ driver->write_ptr_mdm = kzalloc(sizeof(struct diag_request), GFP_KERNEL);
+ if (driver->write_ptr_mdm == NULL)
+ goto err;
+ ret = diagfwd_connect_smux();
+ return ret;
+
+err:
+ pr_err("diag: Could not initialize SMUX buffer\n");
+ kfree(driver->buf_in_smux);
+ return ret;
+}
+
+static int diagfwd_smux_remove(struct platform_device *pdev)
+{
+ driver->lcid = LCID_INVALID;
+ driver->smux_connected = 0;
+ driver->diag_smux_enabled = 0;
+ driver->in_busy_smux = 1;
+ kfree(driver->buf_in_smux);
+ driver->buf_in_smux = NULL;
+ return 0;
+}
+
+struct platform_driver msm_diagfwd_smux_driver = {
+ .probe = diagfwd_smux_probe,
+ .remove = diagfwd_smux_remove,
+ .driver = {
+ .name = "SMUX_DIAG",
+ .owner = THIS_MODULE,
+ .pm = &diagfwd_smux_dev_pm_ops,
+ },
+};
diff --git a/drivers/char/diag/diagfwd_smux.h b/drivers/char/diag/diagfwd_smux.h
new file mode 100644
index 0000000..fcf19d2
--- /dev/null
+++ b/drivers/char/diag/diagfwd_smux.h
@@ -0,0 +1,27 @@
+/* Copyright (c) 2012, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef DIAGFWD_SMUX_H
+#define DIAGFWD_SMUX_H
+
+#include <linux/smux.h>
+#define LCID_VALID SMUX_USB_DIAG_0
+#define LCID_INVALID 0
+
+int diagfwd_read_complete_smux(void);
+int diagfwd_write_complete_smux(void);
+int diagfwd_connect_smux(void);
+void diag_usb_read_complete_smux_fn(struct work_struct *w);
+void diag_read_usb_smux_work_fn(struct work_struct *work);
+extern struct platform_driver msm_diagfwd_smux_driver;
+
+#endif
diff --git a/drivers/char/diag/diagmem.c b/drivers/char/diag/diagmem.c
new file mode 100644
index 0000000..1fa32c1
--- /dev/null
+++ b/drivers/char/diag/diagmem.c
@@ -0,0 +1,310 @@
+/* Copyright (c) 2008-2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/mempool.h>
+#include <linux/mutex.h>
+#include <linux/atomic.h>
+#include "diagchar.h"
+#include "diagfwd_bridge.h"
+#include "diagfwd_hsic.h"
+
+mempool_t *diag_pools_array[NUM_MEMORY_POOLS];
+
+void *diagmem_alloc(struct diagchar_dev *driver, int size, int pool_type)
+{
+ void *buf = NULL;
+ unsigned long flags;
+ int index;
+
+ spin_lock_irqsave(&driver->diag_mem_lock, flags);
+ index = 0;
+ if (pool_type == POOL_TYPE_COPY) {
+ if (driver->diagpool) {
+ if ((driver->count < driver->poolsize) && (size <= driver->itemsize)) {
+ atomic_add(1, (atomic_t *)&driver->count);
+ buf = mempool_alloc(driver->diagpool, GFP_ATOMIC);
+ }
+ }
+ } else if (pool_type == POOL_TYPE_HDLC) {
+ if (driver->diag_hdlc_pool) {
+ if ((driver->count_hdlc_pool < driver->poolsize_hdlc) && (size <= driver->itemsize_hdlc)) {
+ atomic_add(1, (atomic_t *)&driver->count_hdlc_pool);
+ buf = mempool_alloc(driver->diag_hdlc_pool, GFP_ATOMIC);
+ }
+ }
+ } else if (pool_type == POOL_TYPE_USER) {
+ if (driver->diag_user_pool) {
+ if ((driver->count_user_pool < driver->poolsize_user) && (size <= driver->itemsize_user)) {
+ atomic_add(1, (atomic_t *)&driver->count_user_pool);
+ buf = mempool_alloc(driver->diag_user_pool, GFP_ATOMIC);
+ }
+ }
+ } else if (pool_type == POOL_TYPE_WRITE_STRUCT) {
+ if (driver->diag_write_struct_pool) {
+ if ((driver->count_write_struct_pool < driver->poolsize_write_struct) && (size <= driver->itemsize_write_struct)) {
+ atomic_add(1, (atomic_t *)&driver->count_write_struct_pool);
+ buf = mempool_alloc(driver->diag_write_struct_pool, GFP_ATOMIC);
+ }
+ }
+ } else if (pool_type == POOL_TYPE_DCI) {
+ if (driver->diag_dci_pool) {
+ if ((driver->count_dci_pool < driver->poolsize_dci) && (size <= driver->itemsize_dci)) {
+ atomic_add(1, (atomic_t *)&driver->count_dci_pool);
+ buf = mempool_alloc(driver->diag_dci_pool, GFP_ATOMIC);
+ }
+ }
+#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
+ } else if (pool_type == POOL_TYPE_HSIC || pool_type == POOL_TYPE_HSIC_2) {
+ index = pool_type - POOL_TYPE_HSIC;
+ if (diag_hsic[index].diag_hsic_pool) {
+ if ((diag_hsic[index].count_hsic_pool < diag_hsic[index].poolsize_hsic) && (size <= diag_hsic[index].itemsize_hsic)) {
+ atomic_add(1, (atomic_t *)&diag_hsic[index].count_hsic_pool);
+ buf = mempool_alloc(diag_hsic[index].diag_hsic_pool, GFP_ATOMIC);
+ }
+ }
+ } else if (pool_type == POOL_TYPE_HSIC_WRITE || pool_type == POOL_TYPE_HSIC_2_WRITE) {
+ index = pool_type - POOL_TYPE_HSIC_WRITE;
+ if (diag_hsic[index].diag_hsic_write_pool) {
+ if (diag_hsic[index].count_hsic_write_pool < diag_hsic[index].poolsize_hsic_write && (size <= diag_hsic[index].itemsize_hsic_write)) {
+ atomic_add(1, (atomic_t *)&diag_hsic[index].count_hsic_write_pool);
+ buf = mempool_alloc(diag_hsic[index].diag_hsic_write_pool, GFP_ATOMIC);
+ }
+ }
+#endif
+ }
+ spin_unlock_irqrestore(&driver->diag_mem_lock, flags);
+ return buf;
+}
+
+void diagmem_exit(struct diagchar_dev *driver, int pool_type)
+{
+ int index = 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&driver->diag_mem_lock, flags);
+ if (driver->diagpool) {
+ if (driver->count == 0 && driver->ref_count == 0) {
+ mempool_destroy(driver->diagpool);
+ driver->diagpool = NULL;
+ } else if (driver->ref_count == 0 && pool_type == POOL_TYPE_ALL) {
+ pr_err("diag: Unable to destroy COPY mempool");
+ }
+ }
+
+ if (driver->diag_hdlc_pool) {
+ if (driver->count_hdlc_pool == 0 && driver->ref_count == 0) {
+ mempool_destroy(driver->diag_hdlc_pool);
+ driver->diag_hdlc_pool = NULL;
+ } else if (driver->ref_count == 0 && pool_type == POOL_TYPE_ALL) {
+ pr_err("diag: Unable to destroy HDLC mempool");
+ }
+ }
+
+ if (driver->diag_user_pool) {
+ if (driver->count_user_pool == 0 && driver->ref_count == 0) {
+ mempool_destroy(driver->diag_user_pool);
+ driver->diag_user_pool = NULL;
+ } else if (driver->ref_count == 0 && pool_type == POOL_TYPE_ALL) {
+ pr_err("diag: Unable to destroy USER mempool");
+ }
+ }
+
+ if (driver->diag_write_struct_pool) {
+ /* Free up struct pool ONLY if there are no outstanding
+ transactions(aggregation buffer) with USB */
+ if (driver->count_write_struct_pool == 0 && driver->count_hdlc_pool == 0 && driver->ref_count == 0) {
+ mempool_destroy(driver->diag_write_struct_pool);
+ driver->diag_write_struct_pool = NULL;
+ } else if (driver->ref_count == 0 && pool_type == POOL_TYPE_ALL) {
+ pr_err("diag: Unable to destroy STRUCT mempool");
+ }
+ }
+
+ if (driver->diag_dci_pool) {
+ if (driver->count_dci_pool == 0 && driver->ref_count == 0) {
+ mempool_destroy(driver->diag_dci_pool);
+ driver->diag_dci_pool = NULL;
+ } else if (driver->ref_count == 0 && pool_type == POOL_TYPE_ALL) {
+ pr_err("diag: Unable to destroy DCI mempool");
+ }
+ }
+#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
+ for (index = 0; index < MAX_HSIC_CH; index++) {
+ if (diag_hsic[index].diag_hsic_pool && (diag_hsic[index].hsic_inited == 0)) {
+ if (diag_hsic[index].count_hsic_pool == 0) {
+ mempool_destroy(diag_hsic[index].diag_hsic_pool);
+ diag_hsic[index].diag_hsic_pool = NULL;
+ } else if (pool_type == POOL_TYPE_ALL)
+ pr_err("Unable to destroy HSIC mempool for ch %d\n", index);
+ }
+
+ if (diag_hsic[index].diag_hsic_write_pool && (diag_hsic[index].hsic_inited == 0)) {
+ /*
+ * Free up struct pool ONLY if there are no outstanding
+ * transactions(aggregation buffer) with USB
+ */
+ if (diag_hsic[index].count_hsic_write_pool == 0 && diag_hsic[index].count_hsic_pool == 0) {
+ mempool_destroy(diag_hsic[index].diag_hsic_write_pool);
+ diag_hsic[index].diag_hsic_write_pool = NULL;
+ } else if (pool_type == POOL_TYPE_ALL)
+ pr_err("Unable to destroy HSIC USB struct mempool for ch %d", index);
+ }
+ }
+#endif
+ spin_unlock_irqrestore(&driver->diag_mem_lock, flags);
+}
+
+void diagmem_free(struct diagchar_dev *driver, void *buf, int pool_type)
+{
+ int index;
+ unsigned long flags;
+
+ if (!buf)
+ return;
+
+ spin_lock_irqsave(&driver->diag_mem_lock, flags);
+ index = 0;
+ if (pool_type == POOL_TYPE_COPY) {
+ if (driver->diagpool != NULL && driver->count > 0) {
+ mempool_free(buf, driver->diagpool);
+ atomic_add(-1, (atomic_t *)&driver->count);
+ } else
+ pr_err("diag: Attempt to free up DIAG driver mempool memory which is already free %d", driver->count);
+ } else if (pool_type == POOL_TYPE_HDLC) {
+ if (driver->diag_hdlc_pool != NULL && driver->count_hdlc_pool > 0) {
+ mempool_free(buf, driver->diag_hdlc_pool);
+ atomic_add(-1, (atomic_t *)&driver->count_hdlc_pool);
+ } else
+ pr_err("diag: Attempt to free up DIAG driver HDLC mempool which is already free %d ", driver->count_hdlc_pool);
+ } else if (pool_type == POOL_TYPE_USER) {
+ if (driver->diag_user_pool != NULL && driver->count_user_pool > 0) {
+ mempool_free(buf, driver->diag_user_pool);
+ atomic_add(-1, (atomic_t *)&driver->count_user_pool);
+ } else {
+ pr_err("diag: Attempt to free up DIAG driver USER mempool which is already free %d ", driver->count_user_pool);
+ }
+ } else if (pool_type == POOL_TYPE_WRITE_STRUCT) {
+ if (driver->diag_write_struct_pool != NULL && driver->count_write_struct_pool > 0) {
+ mempool_free(buf, driver->diag_write_struct_pool);
+ atomic_add(-1, (atomic_t *)&driver->count_write_struct_pool);
+ } else
+ pr_err("diag: Attempt to free up DIAG driver USB structure mempool which is already free %d ", driver->count_write_struct_pool);
+ } else if (pool_type == POOL_TYPE_DCI) {
+ if (driver->diag_dci_pool != NULL && driver->count_dci_pool > 0) {
+ mempool_free(buf, driver->diag_dci_pool);
+ atomic_add(-1, (atomic_t *)&driver->count_dci_pool);
+ } else
+ pr_err("diag: Attempt to free up DIAG driver DCI mempool which is already free %d ", driver->count_dci_pool);
+#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
+ } else if (pool_type == POOL_TYPE_HSIC || pool_type == POOL_TYPE_HSIC_2) {
+ index = pool_type - POOL_TYPE_HSIC;
+ if (diag_hsic[index].diag_hsic_pool != NULL && diag_hsic[index].count_hsic_pool > 0) {
+ mempool_free(buf, diag_hsic[index].diag_hsic_pool);
+ atomic_add(-1, (atomic_t *)&diag_hsic[index].count_hsic_pool);
+ } else
+ pr_err("diag: Attempt to free up DIAG driver HSIC mempool which is already free %d, ch = %d", diag_hsic[index].count_hsic_pool, index);
+ } else if (pool_type == POOL_TYPE_HSIC_WRITE || pool_type == POOL_TYPE_HSIC_2_WRITE) {
+ index = pool_type - POOL_TYPE_HSIC_WRITE;
+ if (diag_hsic[index].diag_hsic_write_pool != NULL && diag_hsic[index].count_hsic_write_pool > 0) {
+ mempool_free(buf, diag_hsic[index].diag_hsic_write_pool);
+ atomic_add(-1, (atomic_t *)&diag_hsic[index].count_hsic_write_pool);
+ } else
+ pr_err("diag: Attempt to free up DIAG driver HSIC USB structure mempool which is already free %d, ch = %d\n", diag_hsic[index].count_hsic_write_pool, index);
+#endif
+ } else {
+ pr_err("diag: In %s, unknown pool type: %d\n", __func__, pool_type);
+ }
+ spin_unlock_irqrestore(&driver->diag_mem_lock, flags);
+ diagmem_exit(driver, pool_type);
+}
+
+void diagmem_init(struct diagchar_dev *driver)
+{
+ spin_lock_init(&driver->diag_mem_lock);
+
+ if (driver->count == 0) {
+ driver->diagpool = mempool_create_kmalloc_pool(driver->poolsize, driver->itemsize);
+ diag_pools_array[POOL_COPY_IDX] = driver->diagpool;
+ }
+
+ if (driver->count_hdlc_pool == 0) {
+ driver->diag_hdlc_pool = mempool_create_kmalloc_pool(driver->poolsize_hdlc, driver->itemsize_hdlc);
+ diag_pools_array[POOL_HDLC_IDX] = driver->diag_hdlc_pool;
+ }
+
+ if (driver->count_user_pool == 0) {
+ driver->diag_user_pool = mempool_create_kmalloc_pool(driver->poolsize_user, driver->itemsize_user);
+ diag_pools_array[POOL_USER_IDX] = driver->diag_user_pool;
+ }
+
+ if (driver->count_write_struct_pool == 0) {
+ driver->diag_write_struct_pool = mempool_create_kmalloc_pool(driver->poolsize_write_struct, driver->itemsize_write_struct);
+ diag_pools_array[POOL_WRITE_STRUCT_IDX] = driver->diag_write_struct_pool;
+ }
+
+ if (driver->count_dci_pool == 0) {
+ driver->diag_dci_pool = mempool_create_kmalloc_pool(driver->poolsize_dci, driver->itemsize_dci);
+ diag_pools_array[POOL_DCI_IDX] = driver->diag_dci_pool;
+ }
+
+ if (!driver->diagpool)
+ pr_err("diag: Cannot allocate diag mempool\n");
+
+ if (!driver->diag_hdlc_pool)
+ pr_err("diag: Cannot allocate diag HDLC mempool\n");
+
+ if (!driver->diag_user_pool)
+ pr_err("diag: Cannot allocate diag USER mempool\n");
+
+ if (!driver->diag_write_struct_pool)
+ pr_err("diag: Cannot allocate diag USB struct mempool\n");
+
+ if (!driver->diag_dci_pool)
+ pr_err("diag: Cannot allocate diag DCI mempool\n");
+
+}
+
+#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
+void diagmem_hsic_init(int index)
+{
+ if (index < 0 || index >= MAX_HSIC_CH) {
+ pr_err("diag: Invalid hsic index in %s\n", __func__);
+ return;
+ }
+
+ if (diag_hsic[index].count_hsic_pool == 0) {
+ diag_hsic[index].diag_hsic_pool = mempool_create_kmalloc_pool(diag_hsic[index].poolsize_hsic, diag_hsic[index].itemsize_hsic);
+ diag_pools_array[POOL_HSIC_IDX + index] = diag_hsic[index].diag_hsic_pool;
+ }
+
+ if (diag_hsic[index].count_hsic_write_pool == 0) {
+ diag_hsic[index].diag_hsic_write_pool = mempool_create_kmalloc_pool(diag_hsic[index].poolsize_hsic_write, diag_hsic[index].itemsize_hsic_write);
+ diag_pools_array[POOL_HSIC_WRITE_IDX + index] = diag_hsic[index].diag_hsic_write_pool;
+ }
+
+ if (!diag_hsic[index].diag_hsic_pool)
+ pr_err("Cannot allocate diag HSIC mempool for ch %d\n", index);
+
+ if (!diag_hsic[index].diag_hsic_write_pool)
+ pr_err("Cannot allocate diag HSIC struct mempool for ch %d\n", index);
+
+}
+#endif
diff --git a/drivers/char/diag/diagmem.h b/drivers/char/diag/diagmem.h
new file mode 100644
index 0000000..6861790
--- /dev/null
+++ b/drivers/char/diag/diagmem.h
@@ -0,0 +1,26 @@
+/* Copyright (c) 2008-2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef DIAGMEM_H
+#define DIAGMEM_H
+#include "diagchar.h"
+
+extern mempool_t *diag_pools_array[NUM_MEMORY_POOLS];
+
+void *diagmem_alloc(struct diagchar_dev *driver, int size, int pool_type);
+void diagmem_free(struct diagchar_dev *driver, void *buf, int pool_type);
+void diagmem_init(struct diagchar_dev *driver);
+void diagmem_exit(struct diagchar_dev *driver, int pool_type);
+#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
+void diagmem_hsic_init(int index);
+#endif
+#endif
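For context, the diagmem pool API above guards every `mempool_alloc`/`mempool_free` with a per-pool usage counter so a pool can never hand out more items than it was sized for, and a double-free is caught rather than corrupting the count. A minimal userspace sketch of that counter-guard pattern follows; the names (`pool_alloc`, `pool_free`, `POOL_SIZE`) are hypothetical stand-ins, not the driver's symbols, and plain `malloc` stands in for the kernel mempool backend.

```c
#include <stdio.h>
#include <stdlib.h>

/*
 * Hypothetical model of the diagmem counter guard: allocation is
 * refused once the pool is at capacity or the request exceeds the
 * item size; freeing below zero is reported instead of performed.
 */
#define POOL_SIZE 4
#define ITEM_SIZE 64

static int pool_count;	/* items currently handed out */

static void *pool_alloc(int size)
{
	if (pool_count >= POOL_SIZE || size > ITEM_SIZE)
		return NULL;	/* pool exhausted or item too big */
	pool_count++;
	return malloc(ITEM_SIZE);
}

static void pool_free(void *buf)
{
	if (!buf)
		return;
	if (pool_count <= 0) {
		fprintf(stderr, "free into an already-empty pool\n");
		return;
	}
	pool_count--;
	free(buf);
}
```

The same invariant appears in `diagmem_alloc()` (count checked under `diag_mem_lock` before `mempool_alloc`) and `diagmem_free()` (count checked before `mempool_free`).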
diff --git a/drivers/cpufreq/cpufreq_interactive.c b/drivers/cpufreq/cpufreq_interactive.c
index 0654e40..9dbf5b6 100644
--- a/drivers/cpufreq/cpufreq_interactive.c
+++ b/drivers/cpufreq/cpufreq_interactive.c
@@ -1019,6 +1019,30 @@
return count;
}
+static ssize_t show_rt_priority(struct cpufreq_interactive_tunables *tunables,
+ char *buf)
+{
+ return sprintf(buf, "%d\n", speedchange_task->rt_priority);
+}
+
+static ssize_t store_rt_priority(struct cpufreq_interactive_tunables *tunables,
+ const char *buf, size_t count)
+{
+ int ret;
+ struct sched_param param = {
+ .sched_priority = 0,
+ };
+
+ if (kstrtoint(buf, 0, &param.sched_priority) < 0)
+ return -EINVAL;
+
+ ret = sched_setscheduler(speedchange_task, SCHED_FIFO, &param);
+ if (ret < 0)
+ return ret;
+
+ return count;
+}
+
/*
* Create show/store routines
* - sys: One governor instance for complete SYSTEM
@@ -1066,6 +1090,7 @@
store_gov_pol_sys(boostpulse);
show_store_gov_pol_sys(boostpulse_duration);
show_store_gov_pol_sys(io_is_busy);
+show_store_gov_pol_sys(rt_priority);
#define gov_sys_attr_rw(_name) \
static struct global_attr _name##_gov_sys = \
@@ -1089,6 +1114,7 @@
gov_sys_pol_attr_rw(boost);
gov_sys_pol_attr_rw(boostpulse_duration);
gov_sys_pol_attr_rw(io_is_busy);
+gov_sys_pol_attr_rw(rt_priority);
static struct global_attr boostpulse_gov_sys =
__ATTR(boostpulse, 0200, NULL, store_boostpulse_gov_sys);
@@ -1109,6 +1135,7 @@
&boostpulse_gov_sys.attr,
&boostpulse_duration_gov_sys.attr,
&io_is_busy_gov_sys.attr,
+ &rt_priority_gov_sys.attr,
NULL,
};
@@ -1130,6 +1157,7 @@
&boostpulse_gov_pol.attr,
&boostpulse_duration_gov_pol.attr,
&io_is_busy_gov_pol.attr,
+ &rt_priority_gov_pol.attr,
NULL,
};
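The `store_rt_priority` handler above relies on `sched_setscheduler()` to reject out-of-range values. A small userspace analogue of that validation, using the POSIX priority-range queries; `valid_fifo_priority()` is a hypothetical helper, not part of the governor:

```c
#include <sched.h>

/*
 * Sketch: a SCHED_FIFO priority is acceptable only if it lies inside
 * the range the scheduler reports (1..99 on Linux), which is the
 * check sched_setscheduler() performs in-kernel for the store above.
 */
static int valid_fifo_priority(int prio)
{
	return prio >= sched_get_priority_min(SCHED_FIFO) &&
	       prio <= sched_get_priority_max(SCHED_FIFO);
}
```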
diff --git a/drivers/dma/tegra20-apb-dma.c b/drivers/dma/tegra20-apb-dma.c
index 3efc84e..115c20a 100644
--- a/drivers/dma/tegra20-apb-dma.c
+++ b/drivers/dma/tegra20-apb-dma.c
@@ -120,6 +120,12 @@
/* Channel base address offset from APBDMA base address */
#define TEGRA_APBDMA_CHANNEL_BASE_ADD_OFFSET 0x1000
+/*
+ * If next DMA sub-transfer request is small,
+ * then schedule this tasklet as high priority.
+ */
+#define TEGRA_APBDMA_LOW_LATENCY_REQ_LEN 512
+
struct tegra_dma;
/*
@@ -679,6 +685,7 @@
struct tegra_dma_channel *tdc = dev_id;
unsigned long status;
unsigned long flags;
+ struct tegra_dma_sg_req *sgreq;
spin_lock_irqsave(&tdc->lock, flags);
@@ -687,7 +694,14 @@
tdc_write(tdc, TEGRA_APBDMA_CHAN_STATUS, status);
tdc_write(tdc, TEGRA_APBDMA_CHAN_STATUS, TEGRA_APBDMA_STATUS_ISE_EOC);
tdc->isr_handler(tdc, false);
- tasklet_schedule(&tdc->tasklet);
+
+ sgreq = list_first_entry(&tdc->pending_sg_req, typeof(*sgreq),
+ node);
+ if (sgreq->req_len <= TEGRA_APBDMA_LOW_LATENCY_REQ_LEN)
+ tasklet_hi_schedule(&tdc->tasklet);
+ else
+ tasklet_schedule(&tdc->tasklet);
+
spin_unlock_irqrestore(&tdc->lock, flags);
return IRQ_HANDLED;
}
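The hunk above picks `tasklet_hi_schedule()` for sub-transfers at or below 512 bytes so short, latency-sensitive requests complete sooner. The dispatch decision can be modeled as a pure function; `pick_tasklet_prio()` and the enum are illustrative names, not driver symbols:

```c
/*
 * Sketch of the priority choice in the tegra20-apb-dma ISR: small
 * sub-transfer requests get the high-priority tasklet, everything
 * else takes the normal path.
 */
#define LOW_LATENCY_REQ_LEN 512

enum tasklet_prio { PRIO_NORMAL, PRIO_HIGH };

static enum tasklet_prio pick_tasklet_prio(unsigned int req_len)
{
	return req_len <= LOW_LATENCY_REQ_LEN ? PRIO_HIGH : PRIO_NORMAL;
}
```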
diff --git a/drivers/edp/sysedp_dynamic_capping.c b/drivers/edp/sysedp_dynamic_capping.c
index 9c0160e..4847427 100644
--- a/drivers/edp/sysedp_dynamic_capping.c
+++ b/drivers/edp/sysedp_dynamic_capping.c
@@ -39,7 +39,7 @@
static unsigned int gpu_window = 80;
static unsigned int gpu_high_hist;
static unsigned int gpu_high_count = 2;
-static unsigned int priority_bias = 75;
+static unsigned int priority_bias = 60;
static unsigned int online_cpu_count;
static bool gpu_busy;
static unsigned int fgpu;
diff --git a/drivers/gpio/gpio-palmas.c b/drivers/gpio/gpio-palmas.c
index e49638e..fc50716 100644
--- a/drivers/gpio/gpio-palmas.c
+++ b/drivers/gpio/gpio-palmas.c
@@ -27,10 +27,14 @@
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
+#include <linux/pm.h>
+#include <linux/syscore_ops.h>
struct palmas_gpio {
struct gpio_chip gpio_chip;
struct palmas *palmas;
+ bool enable_boost_bypass;
+ int v_boost_bypass_gpio;
};
static inline struct palmas_gpio *to_palmas_gpio(struct gpio_chip *chip)
@@ -165,6 +169,7 @@
struct palmas *palmas = dev_get_drvdata(pdev->dev.parent);
struct palmas_platform_data *palmas_pdata;
struct palmas_gpio *palmas_gpio;
+
int ret;
palmas_gpio = devm_kzalloc(&pdev->dev,
@@ -197,16 +202,79 @@
else
palmas_gpio->gpio_chip.base = -1;
+ ret = of_property_read_u32(pdev->dev.of_node, "v_boost_bypass_gpio",
+ &palmas_gpio->v_boost_bypass_gpio);
+ if (ret < 0) {
+ palmas_gpio->v_boost_bypass_gpio = -1;
+ dev_err(&pdev->dev, "%s:Could not find boost_bypass gpio\n",
+ __func__);
+ }
+
ret = gpiochip_add(&palmas_gpio->gpio_chip);
if (ret < 0) {
dev_err(&pdev->dev, "Could not register gpiochip, %d\n", ret);
return ret;
}
+ if (pdev->dev.of_node) {
+ palmas_gpio->enable_boost_bypass = of_property_read_bool(
+ pdev->dev.of_node, "ti,enable-boost-bypass");
+ }
+
+ /* Set Boost Bypass */
+ if (palmas_gpio->enable_boost_bypass &&
+ palmas_gpio->v_boost_bypass_gpio != -1) {
+ dev_dbg(&pdev->dev,
+ "%s:Enabling boost bypass feature, set PMIC GPIO_%d as output high\n",
+ __func__, palmas_gpio->v_boost_bypass_gpio);
+ ret = palmas_gpio_output(&(palmas_gpio->gpio_chip),
+ palmas_gpio->v_boost_bypass_gpio, 1);
+ if (ret < 0) {
+ dev_err(&pdev->dev,
+ "Could not enable boost bypass feature, ret:%d\n", ret);
+ }
+ }
+
platform_set_drvdata(pdev, palmas_gpio);
return ret;
}
+#ifdef CONFIG_PM_SLEEP
+static int palmas_gpio_resume(struct platform_device *pdev)
+{
+ struct palmas_gpio *palmas_gpio = platform_get_drvdata(pdev);
+ int ret = 0;
+
+ if (palmas_gpio->enable_boost_bypass &&
+ palmas_gpio->v_boost_bypass_gpio != -1) {
+ ret = palmas_gpio_output(&(palmas_gpio->gpio_chip),
+ palmas_gpio->v_boost_bypass_gpio, 1);
+ dev_dbg(&pdev->dev,
+ "%s:Enable boost bypass, set PMIC GPIO_%d as high: %d\n",
+ __func__, palmas_gpio->v_boost_bypass_gpio, ret);
+ }
+
+ return ret;
+}
+
+static int palmas_gpio_suspend(struct platform_device *pdev, pm_message_t state)
+{
+ struct palmas_gpio *palmas_gpio = platform_get_drvdata(pdev);
+ int ret = 0;
+
+ if (palmas_gpio->enable_boost_bypass &&
+ palmas_gpio->v_boost_bypass_gpio != -1) {
+ ret = palmas_gpio_output(&(palmas_gpio->gpio_chip),
+ palmas_gpio->v_boost_bypass_gpio, 0);
+ dev_dbg(&pdev->dev,
+ "%s:Disable boost bypass, set PMIC GPIO_%d as low: %d\n",
+ __func__, palmas_gpio->v_boost_bypass_gpio, ret);
+ }
+
+ return ret;
+}
+#endif
+
static int palmas_gpio_remove(struct platform_device *pdev)
{
struct palmas_gpio *palmas_gpio = platform_get_drvdata(pdev);
@@ -229,6 +297,10 @@
.driver.of_match_table = of_palmas_gpio_match,
.probe = palmas_gpio_probe,
.remove = palmas_gpio_remove,
+#ifdef CONFIG_PM_SLEEP
+ .suspend = palmas_gpio_suspend,
+ .resume = palmas_gpio_resume,
+#endif
};
static int __init palmas_gpio_init(void)
diff --git a/drivers/gpio/gpio-tegra.c b/drivers/gpio/gpio-tegra.c
index 3f35036..3313da7 100644
--- a/drivers/gpio/gpio-tegra.c
+++ b/drivers/gpio/gpio-tegra.c
@@ -136,10 +136,11 @@
EXPORT_SYMBOL(tegra_is_gpio);
-static void tegra_gpio_disable(int gpio)
+void tegra_gpio_disable(int gpio)
{
tegra_gpio_mask_write(GPIO_MSK_CNF(gpio), gpio, 0);
}
+EXPORT_SYMBOL(tegra_gpio_disable);
void tegra_gpio_init_configure(unsigned gpio, bool is_input, int value)
{
diff --git a/drivers/gpu/nvgpu/gk20a/as_gk20a.c b/drivers/gpu/nvgpu/gk20a/as_gk20a.c
index 1d604b8..c8e71f1 100644
--- a/drivers/gpu/nvgpu/gk20a/as_gk20a.c
+++ b/drivers/gpu/nvgpu/gk20a/as_gk20a.c
@@ -131,19 +131,14 @@
struct gk20a_as_share *as_share,
struct nvhost_as_map_buffer_ex_args *args)
{
- int i;
-
gk20a_dbg_fn("");
- /* ensure that padding is not set. this is required for ensuring that
- * we can safely use these fields later */
- for (i = 0; i < ARRAY_SIZE(args->padding); i++)
- if (args->padding[i])
- return -EINVAL;
-
return gk20a_vm_map_buffer(as_share, args->dmabuf_fd,
- &args->offset, args->flags,
- args->kind);
+ &args->as_offset, args->flags,
+ args->kind,
+ args->buffer_offset,
+ args->mapping_size
+ );
}
static int gk20a_as_ioctl_map_buffer(
@@ -152,8 +147,9 @@
{
gk20a_dbg_fn("");
return gk20a_vm_map_buffer(as_share, args->nvmap_handle,
- &args->o_a.align,
- args->flags, NV_KIND_DEFAULT);
+ &args->o_a.offset,
+ args->flags, NV_KIND_DEFAULT,
+ 0, 0);
/* args->o_a.offset will be set if !err */
}
diff --git a/drivers/gpu/nvgpu/gk20a/fifo_gk20a.c b/drivers/gpu/nvgpu/gk20a/fifo_gk20a.c
index 0f4ad88..223b806 100644
--- a/drivers/gpu/nvgpu/gk20a/fifo_gk20a.c
+++ b/drivers/gpu/nvgpu/gk20a/fifo_gk20a.c
@@ -417,6 +417,13 @@
fifo_fb_timeout_period_max_f());
gk20a_writel(g, fifo_fb_timeout_r(), timeout);
+ for (i = 0; i < pbdma_timeout__size_1_v(); i++) {
+ timeout = gk20a_readl(g, pbdma_timeout_r(i));
+ timeout = set_field(timeout, pbdma_timeout_period_m(),
+ pbdma_timeout_period_max_f());
+ gk20a_writel(g, pbdma_timeout_r(i), timeout);
+ }
+
if (tegra_platform_is_silicon()) {
timeout = gk20a_readl(g, fifo_pb_timeout_r());
timeout &= ~fifo_pb_timeout_detection_enabled_f();
diff --git a/drivers/gpu/nvgpu/gk20a/gk20a.c b/drivers/gpu/nvgpu/gk20a/gk20a.c
index 4026ecc..6dc10d1 100644
--- a/drivers/gpu/nvgpu/gk20a/gk20a.c
+++ b/drivers/gpu/nvgpu/gk20a/gk20a.c
@@ -914,6 +914,8 @@
goto done;
}
+ wait_event(g->pmu.boot_wq, g->pmu.pmu_state == PMU_STATE_STARTED);
+
gk20a_channel_resume(g);
set_user_nice(current, nice_value);
@@ -1469,6 +1471,7 @@
&gk20a->timeouts_enabled);
gk20a_pmu_debugfs_init(dev);
#endif
+ init_waitqueue_head(&gk20a->pmu.boot_wq);
gk20a_init_gr(gk20a);
diff --git a/drivers/gpu/nvgpu/gk20a/gr_gk20a.c b/drivers/gpu/nvgpu/gk20a/gr_gk20a.c
index ed6c7cf..40dd8b2 100644
--- a/drivers/gpu/nvgpu/gk20a/gr_gk20a.c
+++ b/drivers/gpu/nvgpu/gk20a/gr_gk20a.c
@@ -4430,6 +4430,7 @@
static u32 wl_addr_gk20a[] = {
/* this list must be sorted (low to high) */
0x404468, /* gr_pri_mme_max_instructions */
+ 0x408944, /* gr_pri_bes_crop_hww_esr */
0x418800, /* gr_pri_gpcs_setup_debug */
0x419a04, /* gr_pri_gpcs_tpcs_tex_lod_dbg */
0x419a08, /* gr_pri_gpcs_tpcs_tex_samp_dbg */
diff --git a/drivers/gpu/nvgpu/gk20a/hw_pbdma_gk20a.h b/drivers/gpu/nvgpu/gk20a/hw_pbdma_gk20a.h
index 6b353a0..79d8cf2 100644
--- a/drivers/gpu/nvgpu/gk20a/hw_pbdma_gk20a.h
+++ b/drivers/gpu/nvgpu/gk20a/hw_pbdma_gk20a.h
@@ -106,6 +106,22 @@
{
return 0x00040000 + i*8192;
}
+static inline u32 pbdma_timeout_r(u32 i)
+{
+ return 0x0004012c + i*8192;
+}
+static inline u32 pbdma_timeout__size_1_v(void)
+{
+ return 0x00000001;
+}
+static inline u32 pbdma_timeout_period_m(void)
+{
+ return 0xffffffff << 0;
+}
+static inline u32 pbdma_timeout_period_max_f(void)
+{
+ return 0xffffffff;
+}
static inline u32 pbdma_pb_fetch_r(u32 i)
{
return 0x00040054 + i*8192;
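The new `pbdma_timeout_*` accessors are consumed by the `set_field()` read-modify-write idiom in the fifo hunk earlier: clear the masked bits of the current register value, then OR in the new field. A self-contained sketch of that idiom (this `set_field` is a local stand-in with the same shape, not the gk20a helper itself):

```c
#include <stdint.h>

/*
 * Read-modify-write field update: bits under `mask` are replaced by
 * the corresponding bits of `field`; all other bits are preserved.
 */
static uint32_t set_field(uint32_t val, uint32_t mask, uint32_t field)
{
	return (val & ~mask) | (field & mask);
}
```

With `pbdma_timeout_period_m()` returning an all-ones mask, setting the period to `pbdma_timeout_period_max_f()` overwrites the whole register, as the fifo init loop does.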
diff --git a/drivers/gpu/nvgpu/gk20a/mm_gk20a.c b/drivers/gpu/nvgpu/gk20a/mm_gk20a.c
index b171915..52c0f3c 100644
--- a/drivers/gpu/nvgpu/gk20a/mm_gk20a.c
+++ b/drivers/gpu/nvgpu/gk20a/mm_gk20a.c
@@ -107,7 +107,7 @@
u32 kind);
static int update_gmmu_ptes_locked(struct vm_gk20a *vm,
enum gmmu_pgsz_gk20a pgsz_idx,
- struct sg_table *sgt,
+ struct sg_table *sgt, u64 buffer_offset,
u64 first_vaddr, u64 last_vaddr,
u8 kind_v, u32 ctag_offset, bool cacheable,
int rw_flag);
@@ -1055,7 +1055,7 @@
static int validate_fixed_buffer(struct vm_gk20a *vm,
struct buffer_attrs *bfr,
- u64 map_offset)
+ u64 map_offset, u64 map_size)
{
struct device *dev = dev_from_vm(vm);
struct vm_reserved_va_node *va_node;
@@ -1082,7 +1082,7 @@
&va_node->va_buffers_list, va_buffers_list) {
s64 begin = max(buffer->addr, map_offset);
s64 end = min(buffer->addr +
- buffer->size, map_offset + bfr->size);
+ buffer->size, map_offset + map_size);
if (end - begin > 0) {
gk20a_warn(dev, "overlapping buffer map requested");
return -EINVAL;
@@ -1095,6 +1095,7 @@
static u64 __locked_gmmu_map(struct vm_gk20a *vm,
u64 map_offset,
struct sg_table *sgt,
+ u64 buffer_offset,
u64 size,
int pgsz_idx,
u8 kind_v,
@@ -1137,6 +1138,7 @@
err = update_gmmu_ptes_locked(vm, pgsz_idx,
sgt,
+ buffer_offset,
map_offset, map_offset + size - 1,
kind_v,
ctag_offset,
@@ -1180,6 +1182,7 @@
err = update_gmmu_ptes_locked(vm,
pgsz_idx,
0, /* n/a for unmap */
+ 0,
vaddr,
vaddr + size - 1,
0, 0, false /* n/a for unmap */,
@@ -1272,7 +1275,9 @@
int kind,
struct sg_table **sgt,
bool user_mapped,
- int rw_flag)
+ int rw_flag,
+ u64 buffer_offset,
+ u64 mapping_size)
{
struct gk20a *g = gk20a_from_vm(vm);
struct gk20a_allocator *ctag_allocator = &g->gr.comp_tags;
@@ -1322,6 +1327,7 @@
buf_addr = (u64)sg_phys(bfr.sgt->sgl);
bfr.align = 1 << __ffs(buf_addr);
bfr.pgsz_idx = -1;
+ mapping_size = mapping_size ? mapping_size : bfr.size;
/* If FIX_OFFSET is set, pgsz is determined. Otherwise, select
* page size according to memory alignment */
@@ -1350,8 +1356,10 @@
gmmu_page_size = gmmu_page_sizes[bfr.pgsz_idx];
/* Check if we should use a fixed offset for mapping this buffer */
+
if (flags & NVHOST_AS_MAP_BUFFER_FLAGS_FIXED_OFFSET) {
- err = validate_fixed_buffer(vm, &bfr, offset_align);
+ err = validate_fixed_buffer(vm, &bfr,
+ offset_align, mapping_size);
if (err)
goto clean_up;
@@ -1400,11 +1408,13 @@
/* update gmmu ptes */
map_offset = __locked_gmmu_map(vm, map_offset,
bfr.sgt,
- bfr.size,
+ buffer_offset, /* sg offset */
+ mapping_size,
bfr.pgsz_idx,
bfr.kind_v,
bfr.ctag_offset,
flags, rw_flag);
+
if (!map_offset)
goto clean_up;
@@ -1447,7 +1457,7 @@
mapped_buffer->dmabuf = dmabuf;
mapped_buffer->sgt = bfr.sgt;
mapped_buffer->addr = map_offset;
- mapped_buffer->size = bfr.size;
+ mapped_buffer->size = mapping_size;
mapped_buffer->pgsz_idx = bfr.pgsz_idx;
mapped_buffer->ctag_offset = bfr.ctag_offset;
mapped_buffer->ctag_lines = bfr.ctag_lines;
@@ -1518,6 +1528,7 @@
mutex_lock(&vm->update_gmmu_lock);
vaddr = __locked_gmmu_map(vm, 0, /* already mapped? - No */
*sgt, /* sg table */
+ 0, /* sg offset */
size,
0, /* page size index = 0 i.e. SZ_4K */
0, /* kind */
@@ -1647,6 +1658,7 @@
static int update_gmmu_ptes_locked(struct vm_gk20a *vm,
enum gmmu_pgsz_gk20a pgsz_idx,
struct sg_table *sgt,
+ u64 buffer_offset,
u64 first_vaddr, u64 last_vaddr,
u8 kind_v, u32 ctag_offset,
bool cacheable,
@@ -1661,6 +1673,7 @@
u32 ctag_incr;
u32 page_size = gmmu_page_sizes[pgsz_idx];
u64 addr = 0;
+ u64 space_to_skip = buffer_offset;
pde_range_from_vaddr_range(vm, first_vaddr, last_vaddr,
&pde_lo, &pde_hi);
@@ -1673,13 +1686,31 @@
* comptags are active) is 128KB. We have checks elsewhere for that. */
ctag_incr = !!ctag_offset;
- if (sgt)
+ cur_offset = 0;
+ if (sgt) {
cur_chunk = sgt->sgl;
+ /* space_to_skip must be page aligned */
+ BUG_ON(space_to_skip & (page_size - 1));
+
+ while (space_to_skip > 0 && cur_chunk) {
+ u64 new_addr = gk20a_mm_iova_addr(cur_chunk);
+ if (new_addr) {
+ addr = new_addr;
+ addr += cur_offset;
+ }
+ cur_offset += page_size;
+ addr += page_size;
+ while (cur_chunk &&
+ cur_offset >= cur_chunk->length) {
+ cur_offset -= cur_chunk->length;
+ cur_chunk = sg_next(cur_chunk);
+ }
+ space_to_skip -= page_size;
+ }
+ }
else
cur_chunk = NULL;
- cur_offset = 0;
-
for (pde_i = pde_lo; pde_i <= pde_hi; pde_i++) {
u32 pte_lo, pte_hi;
u32 pte_cur;
@@ -1711,14 +1742,12 @@
gk20a_dbg(gpu_dbg_pte, "pte_lo=%d, pte_hi=%d", pte_lo, pte_hi);
for (pte_cur = pte_lo; pte_cur <= pte_hi; pte_cur++) {
-
if (likely(sgt)) {
u64 new_addr = gk20a_mm_iova_addr(cur_chunk);
if (new_addr) {
addr = new_addr;
addr += cur_offset;
}
-
pte_w[0] = gmmu_pte_valid_true_f() |
gmmu_pte_address_sys_f(addr
>> gmmu_pte_address_shift_v());
@@ -1735,20 +1764,16 @@
pte_w[1] |=
gmmu_pte_read_disable_true_f();
}
-
if (!cacheable)
pte_w[1] |= gmmu_pte_vol_true_f();
pte->ref_cnt++;
-
- gk20a_dbg(gpu_dbg_pte,
- "pte_cur=%d addr=0x%x,%08x kind=%d"
+ gk20a_dbg(gpu_dbg_pte, "pte_cur=%d addr=0x%x,%08x kind=%d"
" ctag=%d vol=%d refs=%d"
" [0x%08x,0x%08x]",
pte_cur, hi32(addr), lo32(addr),
kind_v, ctag, !cacheable,
pte->ref_cnt, pte_w[1], pte_w[0]);
-
ctag += ctag_incr;
cur_offset += page_size;
addr += page_size;
@@ -1924,7 +1949,7 @@
for (i = 0; i < num_pages; i++) {
u64 page_vaddr = __locked_gmmu_map(vm, vaddr,
- vm->zero_page_sgt, pgsz, pgsz_idx, 0, 0,
+ vm->zero_page_sgt, 0, pgsz, pgsz_idx, 0, 0,
NVHOST_AS_ALLOC_SPACE_FLAGS_FIXED_OFFSET,
gk20a_mem_flag_none);
@@ -2010,6 +2035,7 @@
gk20a_err(d, "invalid addr to unmap 0x%llx", offset);
return;
}
+
kref_put(&mapped_buffer->ref, gk20a_vm_unmap_locked_kref);
mutex_unlock(&vm->update_gmmu_lock);
}
@@ -2299,7 +2325,6 @@
va_node->sparse = true;
}
-
list_add_tail(&va_node->reserved_va_list, &vm->reserved_va_list);
mutex_unlock(&vm->update_gmmu_lock);
@@ -2438,7 +2463,9 @@
int dmabuf_fd,
u64 *offset_align,
u32 flags, /*NVHOST_AS_MAP_BUFFER_FLAGS_*/
- int kind)
+ int kind,
+ u64 buffer_offset,
+ u64 mapping_size)
{
int err = 0;
struct vm_gk20a *vm = as_share->vm;
@@ -2463,7 +2490,10 @@
ret_va = gk20a_vm_map(vm, dmabuf, *offset_align,
flags, kind, NULL, true,
- gk20a_mem_flag_none);
+ gk20a_mem_flag_none,
+ buffer_offset,
+ mapping_size);
+
*offset_align = ret_va;
if (!ret_va) {
dma_buf_put(dmabuf);
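The `space_to_skip` loop added to `update_gmmu_ptes_locked()` advances page by page through the scatterlist until `buffer_offset` bytes have been consumed, leaving the chunk cursor positioned where mapping should begin. A standalone model of that walk over an array of chunk lengths; `skip_offset` and `struct cursor` are hypothetical names for illustration:

```c
#include <stddef.h>

/* Position after skipping `to_skip` bytes (page-aligned) across chunks. */
struct cursor { size_t chunk; size_t offset; };

static struct cursor skip_offset(const size_t *chunk_len, size_t nchunks,
				 size_t page_size, size_t to_skip)
{
	struct cursor c = { 0, 0 };

	while (to_skip > 0 && c.chunk < nchunks) {
		c.offset += page_size;
		/* roll over into following chunks as needed */
		while (c.chunk < nchunks && c.offset >= chunk_len[c.chunk]) {
			c.offset -= chunk_len[c.chunk];
			c.chunk++;
		}
		to_skip -= page_size;
	}
	return c;
}
```

For example, skipping 12 KiB over chunks of 8 KiB, 4 KiB, and 12 KiB lands at the start of the third chunk, mirroring how the kernel loop leaves `cur_chunk`/`cur_offset` for the PTE-filling loop that follows.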
diff --git a/drivers/gpu/nvgpu/gk20a/mm_gk20a.h b/drivers/gpu/nvgpu/gk20a/mm_gk20a.h
index 4dfc2b7d..8904eb4 100644
--- a/drivers/gpu/nvgpu/gk20a/mm_gk20a.h
+++ b/drivers/gpu/nvgpu/gk20a/mm_gk20a.h
@@ -416,7 +416,9 @@
int kind,
struct sg_table **sgt,
bool user_mapped,
- int rw_flag);
+ int rw_flag,
+ u64 buffer_offset,
+ u64 mapping_size);
/* unmap handle from kernel */
void gk20a_vm_unmap(struct vm_gk20a *vm, u64 offset);
@@ -457,7 +459,9 @@
int dmabuf_fd,
u64 *offset_align,
u32 flags, /*NVHOST_AS_MAP_BUFFER_FLAGS_*/
- int kind);
+ int kind,
+ u64 buffer_offset,
+ u64 mapping_size);
int gk20a_vm_unmap_buffer(struct gk20a_as_share *, u64 offset);
int gk20a_dmabuf_alloc_drvdata(struct dma_buf *dmabuf, struct device *dev);
diff --git a/drivers/gpu/nvgpu/gk20a/pmu_gk20a.c b/drivers/gpu/nvgpu/gk20a/pmu_gk20a.c
index 96db9b6..5e8c687 100644
--- a/drivers/gpu/nvgpu/gk20a/pmu_gk20a.c
+++ b/drivers/gpu/nvgpu/gk20a/pmu_gk20a.c
@@ -1901,6 +1901,8 @@
gk20a_aelpg_init(g);
gk20a_aelpg_init_and_enable(g, PMU_AP_CTRL_ID_GRAPHICS);
}
+
+ wake_up(&g->pmu.boot_wq);
}
int gk20a_init_pmu_support(struct gk20a *g)
diff --git a/drivers/gpu/nvgpu/gk20a/pmu_gk20a.h b/drivers/gpu/nvgpu/gk20a/pmu_gk20a.h
index 9869d0d..72567a8 100644
--- a/drivers/gpu/nvgpu/gk20a/pmu_gk20a.h
+++ b/drivers/gpu/nvgpu/gk20a/pmu_gk20a.h
@@ -1035,6 +1035,7 @@
u32 elpg_stat;
int pmu_state;
+ wait_queue_head_t boot_wq;
#define PMU_ELPG_ENABLE_ALLOW_DELAY_MSEC 1 /* msec */
struct work_struct pg_init;
diff --git a/drivers/htc_debug/stability/Kconfig b/drivers/htc_debug/stability/Kconfig
new file mode 100644
index 0000000..f35cd11
--- /dev/null
+++ b/drivers/htc_debug/stability/Kconfig
@@ -0,0 +1,3 @@
+# Kconfig for the HTC stability mechanism.
+#
+
diff --git a/drivers/htc_debug/stability/Makefile b/drivers/htc_debug/stability/Makefile
new file mode 100644
index 0000000..71db382
--- /dev/null
+++ b/drivers/htc_debug/stability/Makefile
@@ -0,0 +1,4 @@
+# Makefile for the HTC stability mechanism.
+#
+
+obj-y += reboot_params.o
diff --git a/drivers/htc_debug/stability/reboot_params.c b/drivers/htc_debug/stability/reboot_params.c
new file mode 100644
index 0000000..3a12108
--- /dev/null
+++ b/drivers/htc_debug/stability/reboot_params.c
@@ -0,0 +1,265 @@
+/*
+ * Copyright (C) 2013 HTC Corporation. All rights reserved.
+ *
+ * @file /kernel/drivers/htc_debug/stability/reboot_params.c
+ *
+ * This software is distributed under dual licensing. These include
+ * the GNU General Public License version 2 and a commercial
+ * license of HTC. HTC reserves the right to change the license
+ * of future releases.
+ *
+ * Unless you and HTC execute a separate written software license
+ * agreement governing use of this software, this software is licensed
+ * to you under the terms of the GNU General Public License version 2,
+ * available at {link to GPL license term} (the "GPL").
+ */
+
+#include <linux/kernel.h>
+#include <linux/ctype.h>
+#include <linux/platform_device.h>
+#include <linux/of_platform.h>
+#include <linux/reboot.h>
+#include <linux/of_fdt.h>
+#include <linux/sched.h>
+
+#include <linux/kdebug.h>
+#include <linux/notifier.h>
+#include <linux/kallsyms.h>
+
+#include <asm/io.h>
+
+/* These constants are used in bootloader to decide actions. */
+#define RESTART_REASON_BOOT_BASE (0x77665500)
+#define RESTART_REASON_BOOTLOADER (RESTART_REASON_BOOT_BASE | 0x00)
+#define RESTART_REASON_REBOOT (RESTART_REASON_BOOT_BASE | 0x01)
+#define RESTART_REASON_RECOVERY (RESTART_REASON_BOOT_BASE | 0x02)
+#define RESTART_REASON_HTCBL (RESTART_REASON_BOOT_BASE | 0x03)
+#define RESTART_REASON_OFFMODE (RESTART_REASON_BOOT_BASE | 0x04)
+#define RESTART_REASON_RAMDUMP (RESTART_REASON_BOOT_BASE | 0xAA)
+#define RESTART_REASON_HARDWARE (RESTART_REASON_BOOT_BASE | 0xA0)
+#define RESTART_REASON_POWEROFF (RESTART_REASON_BOOT_BASE | 0xBB)
+
+/*
+ * The RESTART_REASON_OEM_BASE is used for oem commands.
+ * The actual value is parsed from reboot commands.
+ * RIL FATAL will use oem-99 to restart a device.
+ */
+#define RESTART_REASON_OEM_BASE (0x6f656d00)
+#define RESTART_REASON_RIL_FATAL (RESTART_REASON_OEM_BASE | 0x99)
+
+#define SZ_DIAG_ERR_MSG (128)
+
+struct htc_reboot_params {
+ u32 reboot_reason;
+ u32 radio_flag;
+ u32 battery_level;
+ char msg[SZ_DIAG_ERR_MSG];
+ char reserved[0];
+};
+static struct htc_reboot_params* reboot_params = NULL;
+
+static void set_restart_reason(u32 reason)
+{
+ reboot_params->reboot_reason = reason;
+}
+
+static void set_restart_msg(const char *msg)
+{
+ char* buf;
+ size_t buf_len;
+ if (unlikely(!msg)) {
+ WARN(1, "%s: argument msg is NULL\n", __func__);
+ msg = "";
+ }
+
+ buf = reboot_params->msg;
+ buf_len = sizeof(reboot_params->msg);
+
+ pr_debug("copy buffer from %pK (%s) to %pK for %zu bytes\n",
+ msg, msg, buf, min(strlen(msg), buf_len - 1));
+ snprintf(buf, buf_len, "%s", msg);
+}
+
+static struct cmd_reason_map {
+ char* cmd;
+ u32 reason;
+} cmd_reason_map[] = {
+ { .cmd = "", .reason = RESTART_REASON_REBOOT },
+ { .cmd = "bootloader", .reason = RESTART_REASON_BOOTLOADER },
+ { .cmd = "recovery", .reason = RESTART_REASON_RECOVERY },
+ { .cmd = "offmode", .reason = RESTART_REASON_OFFMODE },
+ { .cmd = "poweroff", .reason = RESTART_REASON_POWEROFF },
+ { .cmd = "force-hard", .reason = RESTART_REASON_RAMDUMP },
+};
+
+#define OEM_CMD_FMT "oem-%02x"
+
+static void set_restart_command(const char* command)
+{
+ int code;
+ int i;
+
+ if (unlikely(!command)) {
+ WARN(1, "%s: command is NULL\n", __func__);
+ command = "";
+ }
+
+ /* standard reboot command */
+ for (i = 0; i < ARRAY_SIZE(cmd_reason_map); i++)
+ if (!strcmp(command, cmd_reason_map[i].cmd)) {
+ set_restart_msg(cmd_reason_map[i].cmd);
+ set_restart_reason(cmd_reason_map[i].reason);
+ return;
+ }
+
+ /* oem reboot command */
+ if (1 == sscanf(command, OEM_CMD_FMT, &code)) {
+ /* oem-97, 98, 99 are RIL fatal */
+ if ((code == 0x97) || (code == 0x98))
+ code = 0x99;
+
+ set_restart_msg(command);
+ set_restart_reason(RESTART_REASON_OEM_BASE | code);
+ return;
+ }
+
+ /* unknown reboot command */
+ pr_warn("Unknown restart command: %s\n", command);
+ set_restart_msg("");
+ set_restart_reason(RESTART_REASON_REBOOT);
+}
+
+static int reboot_callback(struct notifier_block *nb,
+ unsigned long event, void *data)
+{
+ /*
+	 * NOTE: 'data' is NULL on shutdown, or on a reboot issued without a command.
+ */
+ char* cmd;
+
+ cmd = (char*) (data ? data : "");
+ pr_debug("restart command: %s\n", data ? cmd : "<null>");
+
+ switch (event) {
+ case SYS_RESTART:
+ set_restart_command(cmd);
+ pr_info("syscall: reboot - current task: %s (%d:%d)\n",
+ current->comm, current->tgid, current->pid);
+ dump_stack();
+ break;
+ case SYS_HALT:
+ case SYS_POWER_OFF:
+ default:
+ /*
+	 * - Clear reboot_params to prevent unnecessary RAM issues.
+ * - Set it to 'offmode' instead of 'poweroff' since
+ * it is required to make device enter offmode charging
+ * if cable attached
+ */
+ set_restart_command("offmode");
+ break;
+ }
+ return NOTIFY_DONE;
+}
+
+static struct notifier_block reboot_notifier = {
+ .notifier_call = reboot_callback,
+};
+
+static struct die_args *tombstone = NULL;
+
+int die_notify(struct notifier_block *self,
+ unsigned long val, void *data)
+{
+ static struct die_args args;
+ memcpy(&args, data, sizeof(args));
+ tombstone = &args;
+ pr_debug("saving oops: %p\n", (void*) tombstone);
+ return NOTIFY_DONE;
+}
+
+static struct notifier_block die_nb = {
+ .notifier_call = die_notify,
+};
+
+static int panic_event(struct notifier_block *this,
+ unsigned long event, void *ptr)
+{
+ char msg_buf[SZ_DIAG_ERR_MSG];
+
+	if (tombstone) { /* amend the panic message for an Oops */
+ char pc_symn[KSYM_NAME_LEN] = "<unknown>";
+ char lr_symn[KSYM_NAME_LEN] = "<unknown>";
+
+#if defined(CONFIG_ARM)
+ sprint_symbol(pc_symn, tombstone->regs->ARM_pc);
+ sprint_symbol(lr_symn, tombstone->regs->ARM_lr);
+#elif defined(CONFIG_ARM64)
+ sprint_symbol(pc_symn, tombstone->regs->pc);
+ sprint_symbol(lr_symn, tombstone->regs->regs[30]);
+#endif
+
+ snprintf(msg_buf, sizeof(msg_buf),
+ "KP: %s PC:%s LR:%s",
+ current->comm,
+ pc_symn, lr_symn);
+ } else {
+ snprintf(msg_buf, sizeof(msg_buf),
+ "KP: %s", (const char*) ptr);
+ }
+ set_restart_msg((const char*) msg_buf);
+
+ return NOTIFY_DONE;
+}
+
+static struct notifier_block panic_block = {
+ .notifier_call = panic_event,
+};
+
+static int htc_reboot_params_probe(struct platform_device *pdev)
+{
+ struct resource *param;
+
+ param = platform_get_resource_byname(pdev, IORESOURCE_MEM, "reboot_params");
+ if (param && resource_size(param) >= sizeof(struct htc_reboot_params)) {
+ reboot_params = ioremap_nocache(param->start, resource_size(param));
+ if (reboot_params) {
+ dev_info(&pdev->dev, "got reboot_params buffer at %p\n", reboot_params);
+
+ /*
+ * reboot_params is initialized by bootloader, so it's ready to
+ * register the reboot / die / panic handlers.
+ */
+ register_reboot_notifier(&reboot_notifier);
+ register_die_notifier(&die_nb);
+ atomic_notifier_chain_register(&panic_notifier_list, &panic_block);
+
+ } else
+ dev_warn(&pdev->dev,
+ "failed to map resource `reboot_params': %pR\n", param);
+ }
+
+ return 0;
+}
+
+#define MODULE_NAME "htc_reboot_params"
+static struct of_device_id htc_reboot_params_dt_match_table[] = {
+ {
+ .compatible = MODULE_NAME
+ },
+ {},
+};
+
+static struct platform_driver htc_reboot_params_driver = {
+ .driver = {
+ .name = MODULE_NAME,
+ .of_match_table = htc_reboot_params_dt_match_table,
+ },
+ .probe = htc_reboot_params_probe,
+};
+
+static int __init htc_reboot_params_init(void)
+{
+ return platform_driver_register(&htc_reboot_params_driver);
+}
+core_initcall(htc_reboot_params_init);
diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig
index 146ef82..af73e17 100644
--- a/drivers/hwmon/Kconfig
+++ b/drivers/hwmon/Kconfig
@@ -1570,6 +1570,27 @@
help
Support for the TI INA3221 power monitor sensor.
+config BATTERY_SYSTEM_VOLTAGE_MONITOR
+ bool "Battery System Voltage Monitor interface"
+ help
+	  If yes, the Battery System Voltage Monitor interface is
+	  enabled. It lets a client set a threshold and receive a
+	  notification when the VBAT or VSYS voltage crosses it.
+
+config CABLE_VBUS_MONITOR
+ bool "Cable VBUS monitor interface"
+ help
+	  If yes, the cable VBUS monitor interface is enabled. It
+	  reports whether VBUS has been latched at least once.
+
+config VOLTAGE_MONITOR_PALMAS
+ tristate "Palmas Voltage Monitor driver"
+ depends on MFD_PALMAS && BATTERY_SYSTEM_VOLTAGE_MONITOR
+ help
+ If yes, the Palmas Voltage Monitor driver will be enabled.
+	  It supports VBAT and VSYS monitoring by setting a threshold
+ for Palmas PMICs.
+
if ACPI
comment "ACPI drivers"
diff --git a/drivers/hwmon/Makefile b/drivers/hwmon/Makefile
index 55275e0..9189492 100644
--- a/drivers/hwmon/Makefile
+++ b/drivers/hwmon/Makefile
@@ -149,6 +149,9 @@
obj-$(CONFIG_SENSORS_TMON_TMP411) += tmon-tmp411.o
obj-$(CONFIG_PMBUS) += pmbus/
+obj-$(CONFIG_BATTERY_SYSTEM_VOLTAGE_MONITOR) += battery_system_voltage_monitor.o
+obj-$(CONFIG_CABLE_VBUS_MONITOR) += cable_vbus_monitor.o
+obj-$(CONFIG_VOLTAGE_MONITOR_PALMAS) += palmas_voltage_monitor.o
ccflags-$(CONFIG_HWMON_DEBUG_CHIP) := -DDEBUG
diff --git a/drivers/hwmon/battery_system_voltage_monitor.c b/drivers/hwmon/battery_system_voltage_monitor.c
new file mode 100644
index 0000000..679faf9
--- /dev/null
+++ b/drivers/hwmon/battery_system_voltage_monitor.c
@@ -0,0 +1,202 @@
+/*
+ * battery_system_voltage_monitor.c
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/err.h>
+#include <linux/battery_system_voltage_monitor.h>
+
+static DEFINE_MUTEX(worker_mutex);
+static struct battery_system_voltage_monitor_worker *vbat_worker;
+static struct battery_system_voltage_monitor_worker *vsys_worker;
+
+static inline int __voltage_monitor_worker_register(bool is_vbat,
+ struct battery_system_voltage_monitor_worker *worker)
+{
+ int ret = 0;
+
+ mutex_lock(&worker_mutex);
+ if (!worker || !worker->ops) {
+ ret = -EINVAL;
+ goto error;
+ }
+
+ if (is_vbat)
+ vbat_worker = worker;
+ else
+ vsys_worker = worker;
+
+error:
+ mutex_unlock(&worker_mutex);
+ return ret;
+}
+
+static inline int __voltage_monitor_on_once(bool is_vbat, unsigned int voltage)
+{
+ int ret = 0;
+ struct battery_system_voltage_monitor_worker *worker;
+
+ mutex_lock(&worker_mutex);
+ if (is_vbat)
+ worker = vbat_worker;
+ else
+ worker = vsys_worker;
+
+ if (!worker || !worker->ops || !worker->ops->monitor_on_once) {
+ ret = -ENODEV;
+ goto error;
+ }
+
+ ret = worker->ops->monitor_on_once(voltage, worker->data);
+
+error:
+ mutex_unlock(&worker_mutex);
+ return ret;
+}
+
+static inline int __voltage_monitor_off(bool is_vbat)
+{
+ int ret = 0;
+ struct battery_system_voltage_monitor_worker *worker;
+
+ mutex_lock(&worker_mutex);
+ if (is_vbat)
+ worker = vbat_worker;
+ else
+ worker = vsys_worker;
+
+ if (!worker || !worker->ops || !worker->ops->monitor_off) {
+ ret = -ENODEV;
+ goto error;
+ }
+
+ worker->ops->monitor_off(worker->data);
+
+error:
+ mutex_unlock(&worker_mutex);
+ return ret;
+}
+
+static inline int __voltage_monitor_listener_register(bool is_vbat,
+ int (*notification)(unsigned int voltage))
+{
+ int ret = 0;
+ struct battery_system_voltage_monitor_worker *worker;
+
+ if (!notification)
+ return -EINVAL;
+
+ mutex_lock(&worker_mutex);
+ if (is_vbat)
+ worker = vbat_worker;
+ else
+ worker = vsys_worker;
+
+ if (!worker || !worker->ops || !worker->ops->listener_register) {
+ ret = -ENODEV;
+ goto error;
+ }
+
+ ret = worker->ops->listener_register(notification, worker->data);
+
+error:
+ mutex_unlock(&worker_mutex);
+ return ret;
+}
+
+static inline int __voltage_monitor_listener_unregister(bool is_vbat)
+{
+ int ret = 0;
+ struct battery_system_voltage_monitor_worker *worker;
+
+ mutex_lock(&worker_mutex);
+ if (is_vbat)
+ worker = vbat_worker;
+ else
+ worker = vsys_worker;
+
+ if (!worker || !worker->ops || !worker->ops->listener_unregister) {
+ ret = -ENODEV;
+ goto error;
+ }
+
+ worker->ops->listener_unregister(worker->data);
+
+error:
+ mutex_unlock(&worker_mutex);
+ return ret;
+}
+
+int battery_voltage_monitor_worker_register(
+ struct battery_system_voltage_monitor_worker *worker)
+{
+ return __voltage_monitor_worker_register(true, worker);
+}
+EXPORT_SYMBOL_GPL(battery_voltage_monitor_worker_register);
+
+int battery_voltage_monitor_on_once(unsigned int voltage)
+{
+ return __voltage_monitor_on_once(true, voltage);
+}
+EXPORT_SYMBOL_GPL(battery_voltage_monitor_on_once);
+
+int battery_voltage_monitor_off(void)
+{
+ return __voltage_monitor_off(true);
+}
+EXPORT_SYMBOL_GPL(battery_voltage_monitor_off);
+
+int battery_voltage_monitor_listener_register(
+ int (*notification)(unsigned int voltage))
+{
+ return __voltage_monitor_listener_register(true, notification);
+}
+EXPORT_SYMBOL_GPL(battery_voltage_monitor_listener_register);
+
+int battery_voltage_monitor_listener_unregister(void)
+{
+ return __voltage_monitor_listener_unregister(true);
+}
+EXPORT_SYMBOL_GPL(battery_voltage_monitor_listener_unregister);
+
+int system_voltage_monitor_worker_register(
+ struct battery_system_voltage_monitor_worker *worker)
+{
+ return __voltage_monitor_worker_register(false, worker);
+}
+EXPORT_SYMBOL_GPL(system_voltage_monitor_worker_register);
+
+int system_voltage_monitor_on_once(unsigned int voltage)
+{
+ return __voltage_monitor_on_once(false, voltage);
+}
+EXPORT_SYMBOL_GPL(system_voltage_monitor_on_once);
+
+int system_voltage_monitor_off(void)
+{
+ return __voltage_monitor_off(false);
+}
+EXPORT_SYMBOL_GPL(system_voltage_monitor_off);
+
+int system_voltage_monitor_listener_register(
+ int (*notification)(unsigned int voltage))
+{
+ return __voltage_monitor_listener_register(false, notification);
+}
+EXPORT_SYMBOL_GPL(system_voltage_monitor_listener_register);
+
+int system_voltage_monitor_listener_unregister(void)
+{
+ return __voltage_monitor_listener_unregister(false);
+}
+EXPORT_SYMBOL_GPL(system_voltage_monitor_listener_unregister);
diff --git a/drivers/hwmon/cable_vbus_monitor.c b/drivers/hwmon/cable_vbus_monitor.c
new file mode 100644
index 0000000..5f5415a
--- /dev/null
+++ b/drivers/hwmon/cable_vbus_monitor.c
@@ -0,0 +1,78 @@
+/*
+ * cable_vbus_monitor.c
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/err.h>
+#include <linux/cable_vbus_monitor.h>
+
+static DEFINE_MUTEX(mutex);
+
+struct cable_vbus_monitor_data {
+ int (*is_vbus_latch_cb)(void *data);
+ void *is_vbus_latch_cb_data;
+} cable_vbus_monitor_data;
+
+int cable_vbus_monitor_latch_cb_register(int (*is_vbus_latched)(void *data),
+ void *data)
+{
+ int ret = 0;
+
+ if (!is_vbus_latched)
+ return -EINVAL;
+
+ mutex_lock(&mutex);
+ if (cable_vbus_monitor_data.is_vbus_latch_cb) {
+ ret = -EBUSY;
+ goto done;
+ }
+ cable_vbus_monitor_data.is_vbus_latch_cb = is_vbus_latched;
+ cable_vbus_monitor_data.is_vbus_latch_cb_data = data;
+done:
+ mutex_unlock(&mutex);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(cable_vbus_monitor_latch_cb_register);
+
+int cable_vbus_monitor_latch_cb_unregister(void *data)
+{
+ mutex_lock(&mutex);
+ if (data == cable_vbus_monitor_data.is_vbus_latch_cb_data) {
+ cable_vbus_monitor_data.is_vbus_latch_cb = NULL;
+ cable_vbus_monitor_data.is_vbus_latch_cb_data = NULL;
+ }
+ mutex_unlock(&mutex);
+ return 0;
+}
+EXPORT_SYMBOL_GPL(cable_vbus_monitor_latch_cb_unregister);
+
+int cable_vbus_monitor_is_vbus_latched(void)
+{
+ int ret = 0;
+
+ mutex_lock(&mutex);
+ if (cable_vbus_monitor_data.is_vbus_latch_cb) {
+ ret = cable_vbus_monitor_data.is_vbus_latch_cb(
+ cable_vbus_monitor_data.is_vbus_latch_cb_data);
+ if (ret < 0) {
+ pr_warn("unknown cable vbus latch status\n");
+ ret = 0;
+ }
+ }
+ mutex_unlock(&mutex);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(cable_vbus_monitor_is_vbus_latched);
diff --git a/drivers/hwmon/palmas_voltage_monitor.c b/drivers/hwmon/palmas_voltage_monitor.c
new file mode 100644
index 0000000..adc947d
--- /dev/null
+++ b/drivers/hwmon/palmas_voltage_monitor.c
@@ -0,0 +1,611 @@
+/*
+ * palmas_voltage_monitor.c
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/interrupt.h>
+#include <linux/platform_device.h>
+#include <linux/err.h>
+#include <linux/mfd/palmas.h>
+#include <linux/of.h>
+#include <linux/of_platform.h>
+#include <linux/battery_system_voltage_monitor.h>
+#include <linux/power_supply.h>
+#ifdef CONFIG_CABLE_VBUS_MONITOR
+#include <linux/cable_vbus_monitor.h>
+#endif
+
+struct palmas_voltage_monitor_dev {
+ int id;
+ int irq;
+ int reg;
+ int reg_enable_bit;
+ int monitor_volt_mv;
+ int monitor_on;
+ int (*notification)(unsigned int voltage);
+};
+
+struct palmas_voltage_monitor {
+ struct palmas *palmas;
+ struct device *dev;
+ struct power_supply v_monitor;
+
+ bool use_vbat_monitor;
+ bool use_vsys_monitor;
+
+ struct palmas_voltage_monitor_dev vbat_mon_dev;
+ struct palmas_voltage_monitor_dev vsys_mon_dev;
+
+ struct mutex mutex;
+
+#ifdef CONFIG_CABLE_VBUS_MONITOR
+ bool has_vbus_latched_cb;
+#endif
+};
+
+enum {
+ VBAT_MON_DEV,
+ VSYS_MON_DEV,
+};
+
+#ifdef CONFIG_CABLE_VBUS_MONITOR
+static int palmas_voltage_monitor_is_vbus_latched(void *data)
+{
+ struct palmas_voltage_monitor *monitor = data;
+ int ret;
+ unsigned int val;
+
+ if (!monitor)
+ return -EINVAL;
+
+ ret = palmas_read(monitor->palmas, PALMAS_USB_OTG_BASE,
+ PALMAS_USB_VBUS_INT_LATCH_SET, &val);
+ if (ret < 0) {
+ dev_err(monitor->dev,
+			"USB_VBUS_INT_LATCH_SET read fail, ret=%d\n", ret);
+ return 0;
+ }
+
+ ret = palmas_write(monitor->palmas, PALMAS_USB_OTG_BASE,
+ PALMAS_USB_VBUS_INT_LATCH_CLR,
+ PALMAS_USB_VBUS_INT_LATCH_SET_VA_VBUS_VLD);
+ if (ret < 0)
+ dev_err(monitor->dev,
+			"USB_VBUS_INT_LATCH_CLR write fail, ret=%d\n", ret);
+
+ return !!(val & PALMAS_USB_VBUS_INT_LATCH_SET_VA_VBUS_VLD);
+}
+#endif
+
+static inline struct palmas_voltage_monitor_dev *get_monitor_dev_by_id(
+ struct palmas_voltage_monitor *monitor,
+ int monitor_id)
+{
+
+ if (monitor_id == VBAT_MON_DEV) {
+ if (monitor->use_vbat_monitor)
+ return &monitor->vbat_mon_dev;
+ } else {
+ if (monitor->use_vsys_monitor)
+ return &monitor->vsys_mon_dev;
+ }
+
+ return NULL;
+}
+
+static int palmas_voltage_monitor_listener_register(
+ struct palmas_voltage_monitor *monitor, int monitor_id,
+ int (*notification)(unsigned int voltage))
+{
+ struct palmas_voltage_monitor_dev *mon_dev = NULL;
+ int ret = 0;
+
+ if (!monitor || !notification)
+ return -EINVAL;
+
+ mon_dev = get_monitor_dev_by_id(monitor, monitor_id);
+
+ if (!mon_dev)
+ return -ENODEV;
+
+ mutex_lock(&monitor->mutex);
+ if (mon_dev->notification)
+ ret = -EEXIST;
+ else
+ mon_dev->notification = notification;
+ mutex_unlock(&monitor->mutex);
+
+ return ret;
+}
+
+static void palmas_voltage_monitor_listener_unregister(
+ struct palmas_voltage_monitor *monitor, int monitor_id)
+{
+ struct palmas_voltage_monitor_dev *mon_dev;
+ int ret;
+
+ if (!monitor)
+ return;
+
+ mon_dev = get_monitor_dev_by_id(monitor, monitor_id);
+
+ if (!mon_dev)
+ return;
+
+ mutex_lock(&monitor->mutex);
+ disable_irq(mon_dev->irq);
+ mon_dev->monitor_volt_mv = 0;
+ mon_dev->monitor_on = false;
+
+ ret = palmas_write(monitor->palmas, PALMAS_PMU_CONTROL_BASE,
+ mon_dev->reg, 0);
+ if (ret < 0)
+ dev_err(monitor->dev, "palmas write fail, ret=%d\n", ret);
+
+ mon_dev->notification = NULL;
+ mutex_unlock(&monitor->mutex);
+}
+
+#define PALMAS_MON_THRESHOLD_MV_MIN (2300)
+#define PALMAS_MON_THRESHOLD_MV_MAX (4600)
+#define PALMAS_MON_BITS_MIN (0x06)
+#define PALMAS_MON_BITS_MAX (0x34)
+#define PALMAS_MON_MV_STEP_PER_BIT (50)
+#define PALMAS_MON_ENABLE_BIT (4600)
+static int palmas_voltage_monitor_voltage_monitor_on_once(
+ struct palmas_voltage_monitor *monitor, int monitor_id,
+ unsigned int voltage)
+{
+ unsigned int bits;
+ struct palmas_voltage_monitor_dev *mon_dev;
+
+ if (!monitor)
+ return -EINVAL;
+
+ mon_dev = get_monitor_dev_by_id(monitor, monitor_id);
+
+ if (!mon_dev)
+ return 0;
+
+ mutex_lock(&monitor->mutex);
+ if (mon_dev->notification && !mon_dev->monitor_on) {
+ mon_dev->monitor_volt_mv = voltage;
+
+ if (voltage <= PALMAS_MON_THRESHOLD_MV_MIN)
+ bits = PALMAS_MON_BITS_MIN;
+ else if (voltage >= PALMAS_MON_THRESHOLD_MV_MAX)
+ bits = PALMAS_MON_BITS_MAX;
+ else {
+ bits = PALMAS_MON_BITS_MIN +
+ (voltage - PALMAS_MON_THRESHOLD_MV_MIN) /
+ PALMAS_MON_MV_STEP_PER_BIT;
+ }
+ bits |= mon_dev->reg_enable_bit;
+
+ palmas_write(monitor->palmas, PALMAS_PMU_CONTROL_BASE,
+ mon_dev->reg, bits);
+
+ mon_dev->monitor_on = true;
+ enable_irq(mon_dev->irq);
+ }
+ mutex_unlock(&monitor->mutex);
+
+ return 0;
+}
+
+static void palmas_voltage_monitor_voltage_monitor_off(
+ struct palmas_voltage_monitor *monitor, int monitor_id)
+{
+ struct palmas_voltage_monitor_dev *mon_dev;
+
+ if (!monitor)
+ return;
+
+ mon_dev = get_monitor_dev_by_id(monitor, monitor_id);
+
+ if (!mon_dev)
+ return;
+
+ mutex_lock(&monitor->mutex);
+ if (mon_dev->monitor_on) {
+ disable_irq(mon_dev->irq);
+ mon_dev->monitor_volt_mv = 0;
+ mon_dev->monitor_on = false;
+
+ palmas_write(monitor->palmas, PALMAS_PMU_CONTROL_BASE,
+ mon_dev->reg, 0);
+ }
+ mutex_unlock(&monitor->mutex);
+}
+
+static irqreturn_t palmas_vbat_mon_irq_handler(int irq,
+ void *_palmas_voltage_monitor)
+{
+ struct palmas_voltage_monitor *monitor = _palmas_voltage_monitor;
+ unsigned int vbat_mon_line_state;
+ int ret;
+
+ ret = palmas_read(monitor->palmas, PALMAS_INTERRUPT_BASE,
+ PALMAS_INT1_LINE_STATE, &vbat_mon_line_state);
+ if (ret < 0)
+ dev_err(monitor->dev, "INT1_LINE_STATE read fail, ret=%d\n",
+ ret);
+ else
+ dev_dbg(monitor->dev, "vbat-mon-irq() INT1_LINE_STATE 0x%02x\n",
+ vbat_mon_line_state);
+
+ mutex_lock(&monitor->mutex);
+ if (monitor->vbat_mon_dev.monitor_on) {
+ disable_irq_nosync(monitor->vbat_mon_dev.irq);
+ if (monitor->vbat_mon_dev.notification)
+ monitor->vbat_mon_dev.notification(
+ monitor->vbat_mon_dev.monitor_volt_mv);
+ monitor->vbat_mon_dev.monitor_on = false;
+ }
+ power_supply_changed(&monitor->v_monitor);
+ mutex_unlock(&monitor->mutex);
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t palmas_vsys_mon_irq_handler(int irq,
+ void *_palmas_voltage_monitor)
+{
+ struct palmas_voltage_monitor *monitor = _palmas_voltage_monitor;
+ unsigned int vsys_mon_line_state;
+ int ret;
+
+ ret = palmas_read(monitor->palmas, PALMAS_INTERRUPT_BASE,
+ PALMAS_INT1_LINE_STATE, &vsys_mon_line_state);
+ if (ret < 0)
+ dev_err(monitor->dev, "INT1_LINE_STATE read fail, ret=%d\n",
+ ret);
+ else
+ dev_dbg(monitor->dev, "vsys-mon-irq() INT1_LINE_STATE 0x%02x\n",
+ vsys_mon_line_state);
+
+ mutex_lock(&monitor->mutex);
+ if (monitor->vsys_mon_dev.monitor_on) {
+ disable_irq_nosync(monitor->vsys_mon_dev.irq);
+ if (monitor->vsys_mon_dev.notification)
+ monitor->vsys_mon_dev.notification(
+ monitor->vsys_mon_dev.monitor_volt_mv);
+ monitor->vsys_mon_dev.monitor_on = false;
+ }
+ mutex_unlock(&monitor->mutex);
+
+ return IRQ_HANDLED;
+}
+
+static int palmas_voltage_monitor_vbat_listener_register(
+ int (*notification)(unsigned int voltage), void *data)
+{
+ struct palmas_voltage_monitor *monitor = data;
+
+ return palmas_voltage_monitor_listener_register(
+ monitor, VBAT_MON_DEV, notification);
+}
+
+static void palmas_voltage_monitor_vbat_listener_unregister(void *data)
+{
+ struct palmas_voltage_monitor *monitor = data;
+
+ palmas_voltage_monitor_listener_unregister(monitor, VBAT_MON_DEV);
+}
+
+static int palmas_voltage_monitor_vbat_monitor_on_once(
+ unsigned int voltage, void *data)
+{
+ struct palmas_voltage_monitor *monitor = data;
+
+ return palmas_voltage_monitor_voltage_monitor_on_once(
+ monitor, VBAT_MON_DEV, voltage);
+}
+
+static void palmas_voltage_monitor_vbat_monitor_off(void *data)
+{
+ struct palmas_voltage_monitor *monitor = data;
+
+ return palmas_voltage_monitor_voltage_monitor_off(monitor,
+ VBAT_MON_DEV);
+}
+
+struct battery_system_voltage_monitor_worker_operations vbat_monitor_ops = {
+ .monitor_on_once = palmas_voltage_monitor_vbat_monitor_on_once,
+ .monitor_off = palmas_voltage_monitor_vbat_monitor_off,
+ .listener_register = palmas_voltage_monitor_vbat_listener_register,
+ .listener_unregister = palmas_voltage_monitor_vbat_listener_unregister,
+};
+
+struct battery_system_voltage_monitor_worker vbat_monitor_worker = {
+ .ops = &vbat_monitor_ops,
+ .data = NULL,
+};
+
+static ssize_t voltage_monitor_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ unsigned int voltage;
+ struct palmas_voltage_monitor *pvm;
+
+ pvm = dev_get_drvdata(dev);
+	if (sscanf(buf, "%u", &voltage) != 1)
+		return -EINVAL;
+ if (voltage == 0)
+ palmas_voltage_monitor_vbat_monitor_off(pvm);
+ else
+ palmas_voltage_monitor_vbat_monitor_on_once(voltage, pvm);
+ return count;
+}
+
+static DEVICE_ATTR(voltage_monitor, S_IWUSR | S_IWGRP, NULL,
+ voltage_monitor_store);
+
+static int palmas_voltage_monitor_probe(struct platform_device *pdev)
+{
+ struct palmas *palmas = dev_get_drvdata(pdev->dev.parent);
+ struct palmas_platform_data *pdata;
+ struct palmas_voltage_monitor_platform_data *vapdata = NULL;
+ struct device_node *node = pdev->dev.of_node;
+ struct palmas_voltage_monitor *palmas_voltage_monitor;
+ int status, ret;
+
+ palmas_voltage_monitor = devm_kzalloc(&pdev->dev,
+ sizeof(*palmas_voltage_monitor), GFP_KERNEL);
+ if (!palmas_voltage_monitor)
+ return -ENOMEM;
+
+ pdata = dev_get_platdata(pdev->dev.parent);
+ if (pdata)
+ vapdata = pdata->voltage_monitor_pdata;
+
+ if (node && !vapdata) {
+ palmas_voltage_monitor->use_vbat_monitor =
+ of_property_read_bool(node, "ti,use-vbat-monitor");
+ palmas_voltage_monitor->use_vsys_monitor =
+ of_property_read_bool(node, "ti,use-vsys-monitor");
+ } else {
+ palmas_voltage_monitor->use_vbat_monitor = true;
+ palmas_voltage_monitor->use_vsys_monitor = true;
+
+ if (vapdata) {
+ palmas_voltage_monitor->use_vbat_monitor =
+ vapdata->use_vbat_monitor;
+ palmas_voltage_monitor->use_vsys_monitor =
+ vapdata->use_vsys_monitor;
+ }
+ }
+
+ mutex_init(&palmas_voltage_monitor->mutex);
+ dev_set_drvdata(&pdev->dev, palmas_voltage_monitor);
+
+ palmas_voltage_monitor->palmas = palmas;
+ palmas_voltage_monitor->dev = &pdev->dev;
+
+ palmas_voltage_monitor->vbat_mon_dev.id = VBAT_MON_DEV;
+ palmas_voltage_monitor->vbat_mon_dev.irq =
+ palmas_irq_get_virq(palmas, PALMAS_VBAT_MON_IRQ);
+ palmas_voltage_monitor->vbat_mon_dev.reg = PALMAS_VBAT_MON;
+ palmas_voltage_monitor->vbat_mon_dev.reg_enable_bit =
+ PALMAS_VBAT_MON_ENABLE;
+ palmas_voltage_monitor->vbat_mon_dev.notification = NULL;
+
+ palmas_voltage_monitor->vsys_mon_dev.id = VSYS_MON_DEV;
+ palmas_voltage_monitor->vsys_mon_dev.irq =
+ palmas_irq_get_virq(palmas, PALMAS_VSYS_MON_IRQ);
+ palmas_voltage_monitor->vsys_mon_dev.reg = PALMAS_VSYS_MON;
+ palmas_voltage_monitor->vsys_mon_dev.reg_enable_bit =
+ PALMAS_VSYS_MON_ENABLE;
+ palmas_voltage_monitor->vsys_mon_dev.notification = NULL;
+
+ if (palmas_voltage_monitor->use_vbat_monitor) {
+ status = devm_request_threaded_irq(palmas_voltage_monitor->dev,
+ palmas_voltage_monitor->vbat_mon_dev.irq,
+ NULL, palmas_vbat_mon_irq_handler,
+ IRQF_TRIGGER_FALLING | IRQF_TRIGGER_RISING |
+ IRQF_ONESHOT | IRQF_EARLY_RESUME,
+ "palmas_vbat_mon", palmas_voltage_monitor);
+ if (status < 0) {
+ dev_err(&pdev->dev, "can't get IRQ %d, err %d\n",
+ palmas_voltage_monitor->vbat_mon_dev.irq,
+ status);
+ } else {
+ palmas_voltage_monitor->vbat_mon_dev.monitor_on = false;
+ disable_irq(palmas_voltage_monitor->vbat_mon_dev.irq);
+ }
+ vbat_monitor_worker.data = palmas_voltage_monitor;
+ battery_voltage_monitor_worker_register(&vbat_monitor_worker);
+ }
+
+ if (palmas_voltage_monitor->use_vsys_monitor) {
+ status = devm_request_threaded_irq(palmas_voltage_monitor->dev,
+ palmas_voltage_monitor->vsys_mon_dev.irq,
+ NULL, palmas_vsys_mon_irq_handler,
+ IRQF_TRIGGER_FALLING | IRQF_TRIGGER_RISING |
+ IRQF_ONESHOT | IRQF_EARLY_RESUME,
+ "palmas_vsys_mon", palmas_voltage_monitor);
+ if (status < 0) {
+ dev_err(&pdev->dev, "can't get IRQ %d, err %d\n",
+ palmas_voltage_monitor->vsys_mon_dev.irq,
+ status);
+ } else {
+ palmas_voltage_monitor->vsys_mon_dev.monitor_on = false;
+ disable_irq(palmas_voltage_monitor->vsys_mon_dev.irq);
+ }
+ }
+
+ palmas_voltage_monitor->v_monitor.name = "palmas_voltage_monitor";
+ palmas_voltage_monitor->v_monitor.type = POWER_SUPPLY_TYPE_UNKNOWN;
+
+ ret = power_supply_register(palmas_voltage_monitor->dev,
+ &palmas_voltage_monitor->v_monitor);
+ if (ret) {
+ dev_err(palmas_voltage_monitor->dev,
+ "Failed: power supply register\n");
+ return ret;
+ }
+
+ ret = sysfs_create_file(&pdev->dev.kobj,
+ &dev_attr_voltage_monitor.attr);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "error creating sysfs file %d\n", ret);
+ return ret;
+ }
+
+#ifdef CONFIG_CABLE_VBUS_MONITOR
+ ret = palmas_write(palmas_voltage_monitor->palmas,
+ PALMAS_USB_OTG_BASE,
+ PALMAS_USB_VBUS_INT_EN_LO_SET,
+ PALMAS_USB_VBUS_INT_EN_LO_SET_VA_VBUS_VLD);
+ if (ret < 0) {
+ dev_err(palmas_voltage_monitor->dev,
+ "USB_VBUS_INT_EN_LO_SET update fail, ret=%d\n",
+ ret);
+ goto skip_vbus_latch_register;
+ }
+
+ cable_vbus_monitor_latch_cb_register(
+ palmas_voltage_monitor_is_vbus_latched,
+ palmas_voltage_monitor);
+ palmas_voltage_monitor->has_vbus_latched_cb = true;
+
+skip_vbus_latch_register:
+#endif
+ return 0;
+}
+
+static int palmas_voltage_monitor_remove(struct platform_device *pdev)
+{
+ struct palmas_voltage_monitor *monitor = dev_get_drvdata(&pdev->dev);
+#ifdef CONFIG_CABLE_VBUS_MONITOR
+ int ret;
+#endif
+
+ mutex_lock(&monitor->mutex);
+ if (monitor->use_vbat_monitor) {
+ disable_irq(monitor->vbat_mon_dev.irq);
+ devm_free_irq(monitor->dev, monitor->vbat_mon_dev.irq,
+ monitor);
+ }
+
+ if (monitor->use_vsys_monitor) {
+ disable_irq(monitor->vsys_mon_dev.irq);
+		devm_free_irq(monitor->dev, monitor->vsys_mon_dev.irq,
+ monitor);
+ }
+
+#ifdef CONFIG_CABLE_VBUS_MONITOR
+ if (monitor->has_vbus_latched_cb) {
+ cable_vbus_monitor_latch_cb_unregister(monitor);
+ monitor->has_vbus_latched_cb = false;
+ ret = palmas_write(monitor->palmas,
+ PALMAS_USB_OTG_BASE,
+ PALMAS_USB_VBUS_INT_EN_LO_CLR,
+ PALMAS_USB_VBUS_INT_EN_LO_CLR_VA_VBUS_VLD);
+ if (ret < 0)
+ dev_err(monitor->dev,
+				"USB_VBUS_INT_EN_LO_CLR update fail, ret=%d\n",
+ ret);
+ }
+#endif
+ mutex_unlock(&monitor->mutex);
+ sysfs_remove_file(&monitor->v_monitor.dev->kobj,
+ &dev_attr_voltage_monitor.attr);
+ power_supply_unregister(&monitor->v_monitor);
+ mutex_destroy(&monitor->mutex);
+ devm_kfree(monitor->dev, monitor);
+
+ return 0;
+}
+
+static void palmas_voltage_monitor_shutdown(struct platform_device *pdev)
+{
+ struct palmas_voltage_monitor *monitor = dev_get_drvdata(&pdev->dev);
+ int ret;
+
+ mutex_lock(&monitor->mutex);
+ if (monitor->use_vbat_monitor) {
+ disable_irq(monitor->vbat_mon_dev.irq);
+ devm_free_irq(monitor->dev,
+ monitor->vbat_mon_dev.irq,
+ monitor);
+ ret = palmas_write(monitor->palmas, PALMAS_PMU_CONTROL_BASE,
+ PALMAS_VBAT_MON, 0);
+ if (ret < 0)
+ dev_err(monitor->dev,
+ "PALMAS_VBAT_MON write fail, ret=%d\n",
+ ret);
+ }
+
+ if (monitor->use_vsys_monitor) {
+ disable_irq(monitor->vsys_mon_dev.irq);
+ devm_free_irq(monitor->dev,
+ monitor->vsys_mon_dev.irq,
+ monitor);
+ ret = palmas_write(monitor->palmas, PALMAS_PMU_CONTROL_BASE,
+ PALMAS_VSYS_MON, 0);
+ if (ret < 0)
+ dev_err(monitor->dev,
+ "PALMAS_VSYS_MON write fail, ret=%d\n",
+ ret);
+ }
+
+#ifdef CONFIG_CABLE_VBUS_MONITOR
+ if (monitor->has_vbus_latched_cb) {
+ cable_vbus_monitor_latch_cb_unregister(monitor);
+ monitor->has_vbus_latched_cb = false;
+ ret = palmas_write(monitor->palmas,
+ PALMAS_USB_OTG_BASE,
+ PALMAS_USB_VBUS_INT_EN_LO_CLR,
+ PALMAS_USB_VBUS_INT_EN_LO_CLR_VA_VBUS_VLD);
+ if (ret < 0)
+ dev_err(monitor->dev,
+				"USB_VBUS_INT_EN_LO_CLR update fail, ret=%d\n",
+ ret);
+ }
+#endif
+ mutex_unlock(&monitor->mutex);
+}
+
+static const struct of_device_id palmas_voltage_monitor_dt_match[] = {
+ { .compatible = "ti,palmas-voltage-monitor" },
+ { },
+};
+MODULE_DEVICE_TABLE(of, palmas_voltage_monitor_dt_match);
+
+static struct platform_driver palmas_voltage_monitor_driver = {
+ .driver = {
+ .name = "palmas_voltage_monitor",
+ .of_match_table = of_match_ptr(palmas_voltage_monitor_dt_match),
+ .owner = THIS_MODULE,
+ },
+ .probe = palmas_voltage_monitor_probe,
+ .remove = palmas_voltage_monitor_remove,
+ .shutdown = palmas_voltage_monitor_shutdown,
+};
+
+static int __init palmas_voltage_monitor_init(void)
+{
+ return platform_driver_register(&palmas_voltage_monitor_driver);
+}
+subsys_initcall(palmas_voltage_monitor_init);
+
+static void __exit palmas_voltage_monitor_exit(void)
+{
+ platform_driver_unregister(&palmas_voltage_monitor_driver);
+}
+module_exit(palmas_voltage_monitor_exit);
+
+MODULE_DESCRIPTION("TI Palmas voltage monitor driver");
+MODULE_LICENSE("GPL");
diff --git a/drivers/i2c/Kconfig b/drivers/i2c/Kconfig
index 6331123..d0a9b70 100644
--- a/drivers/i2c/Kconfig
+++ b/drivers/i2c/Kconfig
@@ -58,6 +58,7 @@
will be called i2c-mux.
source drivers/i2c/muxes/Kconfig
+source drivers/i2c/chips/Kconfig
config I2C_SLAVE
bool "I2C slave driver support"
diff --git a/drivers/i2c/Makefile b/drivers/i2c/Makefile
index 572cab8..d780078 100644
--- a/drivers/i2c/Makefile
+++ b/drivers/i2c/Makefile
@@ -9,7 +9,7 @@
obj-$(CONFIG_I2C_CHARDEV) += i2c-dev.o
obj-$(CONFIG_I2C_MUX) += i2c-mux.o
obj-$(CONFIG_I2C_SLAVE) += i2c-slave.o
-obj-y += algos/ busses/ muxes/
+obj-y += algos/ busses/ muxes/ chips/
obj-$(CONFIG_I2C_STUB) += i2c-stub.o
ccflags-$(CONFIG_I2C_DEBUG_CORE) := -DDEBUG
diff --git a/drivers/i2c/chips/CwMcuSensor.c b/drivers/i2c/chips/CwMcuSensor.c
new file mode 100644
index 0000000..8f27b9a
--- /dev/null
+++ b/drivers/i2c/chips/CwMcuSensor.c
@@ -0,0 +1,4430 @@
+/* CwMcuSensor.c - driver file for HTC SensorHUB
+ *
+ * Copyright (C) 2014 HTC Ltd.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/i2c.h>
+#include <linux/input.h>
+#include <linux/interrupt.h>
+#include <linux/delay.h>
+#include <linux/slab.h>
+#include <linux/pm.h>
+#include <linux/pm_runtime.h>
+#include <linux/CwMcuSensor.h>
+#include <linux/gpio.h>
+#include <linux/wakelock.h>
+#ifdef CONFIG_OF
+#include <linux/of_gpio.h>
+#endif
+#include <linux/mutex.h>
+#include <linux/spi/spi.h>
+#include <linux/workqueue.h>
+
+#include <linux/regulator/consumer.h>
+
+#include <linux/firmware.h>
+
+#include <linux/notifier.h>
+
+#include <linux/sensor_hub.h>
+
+#include <linux/iio/buffer.h>
+#include <linux/iio/iio.h>
+#include <linux/iio/sysfs.h>
+#include <linux/iio/trigger.h>
+#include <linux/iio/trigger_consumer.h>
+#include <linux/iio/kfifo_buf.h>
+#include <linux/irq_work.h>
+
+/*#include <mach/gpiomux.h>*/
+#define D(x...) pr_debug("[S_HUB][CW_MCU] " x)
+#define I(x...) pr_info("[S_HUB][CW_MCU] " x)
+#define E(x...) pr_err("[S_HUB][CW_MCU] " x)
+
+#define RETRY_TIMES 20
+#define LATCH_TIMES 1
+#define CHECK_FW_VER_TIMES 3
+#define UPDATE_FIRMWARE_RETRY_TIMES 5
+#define FW_ERASE_MIN 9000
+#define FW_ERASE_MAX 12000
+#define LATCH_ERROR_NO (-110)
+#define ACTIVE_RETRY_TIMES 10
+#define DPS_MAX (1 << (16 - 1))
+/* ========================================================================= */
+
+#define TOUCH_LOG_DELAY 5000
+#define CWMCU_BATCH_TIMEOUT_MIN 200
+#define MS_TO_PERIOD (1000 * 99 / 100)
+
+/* ========================================================================= */
+#define rel_significant_motion REL_WHEEL
+
+#define ACC_CALIBRATOR_LEN 3
+#define ACC_CALIBRATOR_RL_LEN 12
+#define MAG_CALIBRATOR_LEN 26
+#define GYRO_CALIBRATOR_LEN 3
+#define LIGHT_CALIBRATOR_LEN 4
+#define PRESSURE_CALIBRATOR_LEN 4
+
+#define REPORT_EVENT_COMMON_LEN 3
+
+#define FW_VER_INFO_LEN 31
+#define FW_VER_HEADER_LEN 7
+#define FW_VER_COUNT 6
+#define FW_RESPONSE_CODE 0x79
+#define FW_I2C_LEN_LIMIT 60
+
+#define REACTIVATE_PERIOD (10*HZ)
+#define RESET_PERIOD (30*HZ)
+#define SYNC_ACK_MAGIC 0x66
+#define EXHAUSTED_MAGIC 0x77
+
+#define CALIBRATION_DATA_PATH "/calibration_data"
+#define G_SENSOR_FLASH_DATA "gs_flash"
+#define GYRO_SENSOR_FLASH_DATA "gyro_flash"
+#define LIGHT_SENSOR_FLASH_DATA "als_flash"
+#define BARO_SENSOR_FLASH_DATA "bs_flash"
+
+#ifdef CONFIG_CWSTM32_DEBUG /* Remove this from defconfig when release */
+
+static int DEBUG_FLAG_GSENSOR;
+module_param(DEBUG_FLAG_GSENSOR, int, 0600);
+
+#else
+
+#define DEBUG_FLAG_GSENSOR 0
+
+#endif
+
+
+static int DEBUG_DISABLE;
+module_param(DEBUG_DISABLE, int, 0660);
+MODULE_PARM_DESC(DEBUG_DISABLE, "disable " CWMCU_I2C_NAME " driver");
+
+struct cwmcu_data {
+ struct i2c_client *client;
+ atomic_t delay;
+
+ /* mutex_lock protect:
+ * mcu_data->suspended,
+ * cw_set_pseudo_irq(indio_dev, state);
+ * iio_push_to_buffers(mcu_data->indio_dev, event);
+ */
+ struct mutex mutex_lock;
+
+ /* group_i2c_lock protect:
+ * set_calibrator_en(),
+ * set_k_value(),
+ * get_light_polling(),
+ * CWMCU_i2c_multi_write()
+ */
+ struct mutex group_i2c_lock;
+
+ /* activated_i2c_lock protect:
+ * CWMCU_i2c_write(),
+ * CWMCU_i2c_read(),
+ * reset_hub(),
+ * mcu_data->i2c_total_retry,
+ * mcu_data->i2c_latch_retry,
+ * mcu_data->i2c_jiffies
+ */
+ struct mutex activated_i2c_lock;
+
+ /* power_mode_lock protect:
+ * mcu_data->power_on_counter
+ */
+ struct mutex power_mode_lock;
+
+ struct iio_trigger *trig;
+ atomic_t pseudo_irq_enable;
+ struct mutex lock;
+
+ struct timeval now;
+ struct class *sensor_class;
+ struct device *sensor_dev;
+ u8 acceleration_axes;
+ u8 magnetic_axes;
+ u8 gyro_axes;
+
+ u64 enabled_list; /* Bit mask for sensor enable status */
+	u64 batched_list; /* Bit mask for FIFO usage; 32 MSBs are for wake-up */
+
+ /* report time */
+ s64 sensors_time[num_sensors];
+ s64 time_diff[num_sensors];
+ s32 report_period[num_sensors]; /* Microseconds * 0.99 */
+ u64 update_list;
+ u64 pending_flush;
+ s64 batch_timeout[num_sensors];
+ int IRQ;
+ struct delayed_work work;
+ struct work_struct one_shot_work;
+	/* Remember to add the flag in cwmcu_resume() when adding a new flag */
+ bool w_activated_i2c;
+ bool w_re_init;
+ bool w_facedown_set;
+ bool w_flush_fifo;
+ bool w_clear_fifo;
+ bool w_clear_fifo_running;
+ bool w_report_meta;
+
+ bool suspended;
+ bool probe_success;
+ bool is_block_i2c;
+
+ u32 gpio_wake_mcu;
+ u32 gpio_reset;
+ u32 gpio_chip_mode;
+ u32 gpio_mcu_irq;
+ s32 gs_chip_layout;
+ u32 gs_kvalue;
+ s16 gs_kvalue_R1;
+ s16 gs_kvalue_R2;
+ s16 gs_kvalue_R3;
+ s16 gs_kvalue_L1;
+ s16 gs_kvalue_L2;
+ s16 gs_kvalue_L3;
+ u32 gy_kvalue;
+ u32 als_kvalue;
+ u32 bs_kvalue;
+ u8 bs_kheader;
+ u8 gs_calibrated;
+ u8 ls_calibrated;
+ u8 bs_calibrated;
+ u8 gy_calibrated;
+
+ s32 i2c_total_retry;
+ s32 i2c_latch_retry;
+ unsigned long i2c_jiffies;
+ unsigned long reset_jiffies;
+
+ int disable_access_count;
+
+ s32 iio_data[6];
+ struct iio_dev *indio_dev;
+ struct irq_work iio_irq_work;
+
+ /* power status */
+ int power_on_counter;
+
+ struct input_dev *input;
+ u16 light_last_data[REPORT_EVENT_COMMON_LEN];
+ u64 time_base;
+ u64 wake_fifo_time_base;
+ u64 step_counter_base;
+
+ struct workqueue_struct *mcu_wq;
+ struct wake_lock significant_wake_lock;
+ struct wake_lock report_wake_lock;
+
+ int fw_update_status;
+ u16 erase_fw_wait;
+};
+
+BLOCKING_NOTIFIER_HEAD(double_tap_notifier_list);
+
+int register_notifier_by_facedown(struct notifier_block *nb)
+{
+ return blocking_notifier_chain_register(&double_tap_notifier_list, nb);
+}
+EXPORT_SYMBOL(register_notifier_by_facedown);
+
+int unregister_notifier_by_facedown(struct notifier_block *nb)
+{
+ return blocking_notifier_chain_unregister(&double_tap_notifier_list,
+ nb);
+}
+EXPORT_SYMBOL(unregister_notifier_by_facedown);
+
+static int CWMCU_i2c_read(struct cwmcu_data *mcu_data,
+ u8 reg_addr, void *data, u8 len);
+static int CWMCU_i2c_read_power(struct cwmcu_data *mcu_data,
+ u8 reg_addr, void *data, u8 len);
+static int CWMCU_i2c_write(struct cwmcu_data *mcu_data,
+ u8 reg_addr, const void *data, u8 len);
+static int CWMCU_i2c_write_power(struct cwmcu_data *mcu_data,
+ u8 reg_addr, const void *data, u8 len);
+static int firmware_odr(struct cwmcu_data *mcu_data, int sensors_id,
+ int delay_ms);
+static void cwmcu_batch_read(struct cwmcu_data *mcu_data);
+
+static void gpio_make_falling_edge(int gpio)
+{
+ if (!gpio_get_value(gpio))
+ gpio_set_value(gpio, 1);
+ gpio_set_value(gpio, 0);
+}
+
+static void cwmcu_powermode_switch(struct cwmcu_data *mcu_data, int onoff)
+{
+ mutex_lock(&mcu_data->power_mode_lock);
+ if (onoff) {
+ if (mcu_data->power_on_counter == 0) {
+ gpio_make_falling_edge(mcu_data->gpio_wake_mcu);
+ udelay(10);
+ gpio_set_value(mcu_data->gpio_wake_mcu, 1);
+ udelay(10);
+ gpio_set_value(mcu_data->gpio_wake_mcu, 0);
+ D("%s: 11 onoff = %d\n", __func__, onoff);
+ usleep_range(500, 600);
+ }
+ mcu_data->power_on_counter++;
+ } else {
+ mcu_data->power_on_counter--;
+ if (mcu_data->power_on_counter <= 0) {
+ mcu_data->power_on_counter = 0;
+ gpio_set_value(mcu_data->gpio_wake_mcu, 1);
+ D("%s: 22 onoff = %d\n", __func__, onoff);
+ }
+ }
+ mutex_unlock(&mcu_data->power_mode_lock);
+ D("%s: onoff = %d, power_counter = %d\n", __func__, onoff,
+ mcu_data->power_on_counter);
+}
+
+static int cw_send_event(struct cwmcu_data *mcu_data, u8 id, u16 *data,
+ s64 timestamp)
+{
+ u8 event[21];/* Sensor HAL uses fixed 21 bytes */
+
+ event[0] = id;
+ memcpy(&event[1], data, sizeof(u16)*3);
+ memset(&event[7], 0, sizeof(u16)*3);
+	memcpy(&event[13], &timestamp, sizeof(s64));
+
+ D("%s: active_scan_mask = 0x%p, masklength = %u, data(x, y, z) ="
+ "(%d, %d, %d)\n",
+ __func__, mcu_data->indio_dev->active_scan_mask,
+ mcu_data->indio_dev->masklength,
+ *(s16 *)&event[1], *(s16 *)&event[3], *(s16 *)&event[5]);
+
+ if (mcu_data->indio_dev->active_scan_mask &&
+ (!bitmap_empty(mcu_data->indio_dev->active_scan_mask,
+ mcu_data->indio_dev->masklength))) {
+ mutex_lock(&mcu_data->mutex_lock);
+ if (!mcu_data->w_clear_fifo_running)
+ iio_push_to_buffers(mcu_data->indio_dev, event);
+ else {
+ D(
+ "%s: Drop data(0, 1, 2, 3) = "
+ "(0x%x, 0x%x, 0x%x, 0x%x)\n", __func__,
+ data[0], data[1], data[2], data[3]);
+ }
+ mutex_unlock(&mcu_data->mutex_lock);
+ return 0;
+ } else if (mcu_data->indio_dev->active_scan_mask == NULL)
+ D("%s: active_scan_mask = NULL, event might be missing\n",
+ __func__);
+
+ return -EIO;
+}
+
+static int cw_send_event_special(struct cwmcu_data *mcu_data, u8 id, u16 *data,
+ u16 *bias, s64 timestamp)
+{
+ u8 event[1+(2*sizeof(u16)*REPORT_EVENT_COMMON_LEN)+sizeof(timestamp)];
+
+ event[0] = id;
+ memcpy(&event[1], data, sizeof(u16)*REPORT_EVENT_COMMON_LEN);
+ memcpy(&event[1+sizeof(u16)*REPORT_EVENT_COMMON_LEN], bias,
+ sizeof(u16)*REPORT_EVENT_COMMON_LEN);
+	memcpy(&event[1+(2*sizeof(u16)*REPORT_EVENT_COMMON_LEN)], &timestamp,
+ sizeof(timestamp));
+
+ if (mcu_data->indio_dev->active_scan_mask &&
+ (!bitmap_empty(mcu_data->indio_dev->active_scan_mask,
+ mcu_data->indio_dev->masklength))) {
+ mutex_lock(&mcu_data->mutex_lock);
+ if (!mcu_data->w_clear_fifo_running)
+ iio_push_to_buffers(mcu_data->indio_dev, event);
+ else {
+ D(
+ "%s: Drop data(0, 1, 2, 3) = "
+ "(0x%x, 0x%x, 0x%x, 0x%x)\n", __func__,
+ data[0], data[1], data[2], data[3]);
+ }
+ mutex_unlock(&mcu_data->mutex_lock);
+ return 0;
+ } else if (mcu_data->indio_dev->active_scan_mask == NULL)
+ D("%s: active_scan_mask = NULL, event might be missing\n",
+ __func__);
+
+ return -EIO;
+}
+
+static int cwmcu_get_calibrator_status(struct cwmcu_data *mcu_data,
+ u8 sensor_id, u8 *data)
+{
+ int error_msg = 0;
+
+ if (sensor_id == CW_ACCELERATION)
+ error_msg = CWMCU_i2c_read_power(mcu_data,
+ CW_I2C_REG_SENSORS_CALIBRATOR_STATUS_ACC,
+ data, 1);
+ else if (sensor_id == CW_MAGNETIC)
+ error_msg = CWMCU_i2c_read_power(mcu_data,
+ CW_I2C_REG_SENSORS_CALIBRATOR_STATUS_MAG,
+ data, 1);
+ else if (sensor_id == CW_GYRO)
+ error_msg = CWMCU_i2c_read_power(mcu_data,
+ CW_I2C_REG_SENSORS_CALIBRATOR_STATUS_GYRO,
+ data, 1);
+
+ return error_msg;
+}
+
+static int cwmcu_get_calibrator(struct cwmcu_data *mcu_data, u8 sensor_id,
+ s8 *data, u8 len)
+{
+ int error_msg = 0;
+
+ if ((sensor_id == CW_ACCELERATION) && (len == ACC_CALIBRATOR_LEN))
+ error_msg = CWMCU_i2c_read_power(mcu_data,
+ CW_I2C_REG_SENSORS_CALIBRATOR_GET_DATA_ACC,
+ data, len);
+ else if ((sensor_id == CW_MAGNETIC) && (len == MAG_CALIBRATOR_LEN))
+ error_msg = CWMCU_i2c_read_power(mcu_data,
+ CW_I2C_REG_SENSORS_CALIBRATOR_GET_DATA_MAG,
+ data, len);
+ else if ((sensor_id == CW_GYRO) && (len == GYRO_CALIBRATOR_LEN))
+ error_msg = CWMCU_i2c_read_power(mcu_data,
+ CW_I2C_REG_SENSORS_CALIBRATOR_GET_DATA_GYRO,
+ data, len);
+ else if ((sensor_id == CW_LIGHT) && (len == LIGHT_CALIBRATOR_LEN))
+ error_msg = CWMCU_i2c_read_power(mcu_data,
+ CW_I2C_REG_SENSORS_CALIBRATOR_GET_DATA_LIGHT,
+ data, len);
+ else if ((sensor_id == CW_PRESSURE) && (len == PRESSURE_CALIBRATOR_LEN))
+ error_msg = CWMCU_i2c_read_power(mcu_data,
+ CW_I2C_REG_SENSORS_CALIBRATOR_GET_DATA_PRESSURE,
+ data, len);
+ else
+ E("%s: invalid arguments, sensor_id = %u, len = %u\n",
+ __func__, sensor_id, len);
+
+ D("sensors_id = %u, calibrator data = (%d, %d, %d)\n", sensor_id,
+ data[0], data[1], data[2]);
+ return error_msg;
+}
+
+static ssize_t set_calibrator_en(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+ u8 data;
+ u8 data2;
+ unsigned long sensors_id;
+ int error;
+
+ error = kstrtoul(buf, 10, &sensors_id);
+ if (error) {
+ E("%s: kstrtoul fails, error = %d\n", __func__, error);
+ return error;
+ }
+
+	/* sensor_id should be between 0 and 31 */
+ data = (u8)sensors_id;
+ D("%s: data(sensors_id) = %u\n", __func__, data);
+
+ cwmcu_powermode_switch(mcu_data, 1);
+
+ mutex_lock(&mcu_data->group_i2c_lock);
+
+ switch (data) {
+ case 1:
+ error = CWMCU_i2c_read(mcu_data, G_SENSORS_STATUS, &data2, 1);
+ if (error < 0)
+ goto i2c_fail;
+ data = data2 | 16;
+ error = CWMCU_i2c_write(mcu_data, G_SENSORS_STATUS, &data, 1);
+ if (error < 0)
+ goto i2c_fail;
+ break;
+ case 2:
+ error = CWMCU_i2c_read(mcu_data, ECOMPASS_SENSORS_STATUS,
+ &data2, 1);
+ if (error < 0)
+ goto i2c_fail;
+ data = data2 | 16;
+ error = CWMCU_i2c_write(mcu_data, ECOMPASS_SENSORS_STATUS,
+ &data, 1);
+ if (error < 0)
+ goto i2c_fail;
+ break;
+ case 4:
+ error = CWMCU_i2c_read(mcu_data, GYRO_SENSORS_STATUS,
+ &data2, 1);
+ if (error < 0)
+ goto i2c_fail;
+ data = data2 | 16;
+ error = CWMCU_i2c_write(mcu_data, GYRO_SENSORS_STATUS,
+ &data, 1);
+ if (error < 0)
+ goto i2c_fail;
+ break;
+ case 7:
+ error = CWMCU_i2c_read(mcu_data, LIGHT_SENSORS_STATUS,
+ &data2, 1);
+ if (error < 0)
+ goto i2c_fail;
+ data = data2 | 16;
+ error = CWMCU_i2c_write(mcu_data, LIGHT_SENSORS_STATUS,
+ &data, 1);
+ if (error < 0)
+ goto i2c_fail;
+ break;
+ case 9:
+ data = 2; /* X- R calibration */
+		error = CWMCU_i2c_write(mcu_data,
+				CW_I2C_REG_SENSORS_CALIBRATOR_TARGET_ACC,
+				&data, 1);
+		if (error < 0)
+			goto i2c_fail;
+		error = CWMCU_i2c_read(mcu_data, G_SENSORS_STATUS, &data2, 1);
+ if (error < 0)
+ goto i2c_fail;
+ data = data2 | 16;
+ error = CWMCU_i2c_write(mcu_data, G_SENSORS_STATUS, &data, 1);
+ if (error < 0)
+ goto i2c_fail;
+ break;
+ case 10:
+ data = 1; /* X+ L calibration */
+		error = CWMCU_i2c_write(mcu_data,
+				CW_I2C_REG_SENSORS_CALIBRATOR_TARGET_ACC,
+				&data, 1);
+		if (error < 0)
+			goto i2c_fail;
+		error = CWMCU_i2c_read(mcu_data, G_SENSORS_STATUS, &data2, 1);
+ if (error < 0)
+ goto i2c_fail;
+ data = data2 | 16;
+ error = CWMCU_i2c_write(mcu_data, G_SENSORS_STATUS, &data, 1);
+ if (error < 0)
+ goto i2c_fail;
+ break;
+ case 11:
+ data = 0; /* Z+ */
+ error = CWMCU_i2c_write(mcu_data,
+ CW_I2C_REG_SENSORS_CALIBRATOR_TARGET_ACC,
+ &data, 1);
+ if (error < 0)
+ goto i2c_fail;
+ break;
+ case 12:
+ mcu_data->step_counter_base = 0;
+ D("%s: Reset step counter\n", __func__);
+ break;
+ default:
+ mutex_unlock(&mcu_data->group_i2c_lock);
+ cwmcu_powermode_switch(mcu_data, 0);
+ E("%s: Improper sensor_id = %u\n", __func__, data);
+ return -EINVAL;
+ }
+
+ error = count;
+
+i2c_fail:
+ mutex_unlock(&mcu_data->group_i2c_lock);
+
+ cwmcu_powermode_switch(mcu_data, 0);
+
+ D("%s--: data2 = 0x%x, rc = %d\n", __func__, data2, error);
+ return error;
+}
+
+static void print_hex_data(char *buf, u32 index, u8 *data, size_t len)
+{
+ int i;
+ int rc;
+ char *buf_start;
+ size_t buf_remaining =
+ 3*EXCEPTION_BLOCK_LEN; /* 3 characters per data */
+
+ buf_start = buf;
+
+ for (i = 0; i < len; i++) {
+ rc = scnprintf(buf, buf_remaining, "%02x%c", data[i],
+ (i == len - 1) ? '\0' : ' ');
+ buf += rc;
+ buf_remaining -= rc;
+ }
+
+ printk(KERN_ERR "[S_HUB][CW_MCU] Exception Buffer[%d] = %.*s\n",
+ index * EXCEPTION_BLOCK_LEN,
+ (int)(buf - buf_start),
+ buf_start);
+}
+
+static ssize_t sprint_data(char *buf, s8 *data, ssize_t len)
+{
+ int i;
+ int rc;
+ size_t buf_remaining = PAGE_SIZE;
+
+ for (i = 0; i < len; i++) {
+ rc = scnprintf(buf, buf_remaining, "%d%c", data[i],
+ (i == len - 1) ? '\n' : ' ');
+ buf += rc;
+ buf_remaining -= rc;
+ }
+ return PAGE_SIZE - buf_remaining;
+}
+
+static ssize_t show_calibrator_status_acc(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+ u8 data[6] = {0};
+
+ if (cwmcu_get_calibrator_status(mcu_data, CW_ACCELERATION, data) >= 0)
+ return scnprintf(buf, PAGE_SIZE, "0x%x\n", data[0]);
+
+ return scnprintf(buf, PAGE_SIZE, "0x1\n");
+}
+
+static ssize_t show_calibrator_status_mag(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+ u8 data[6] = {0};
+
+ if (cwmcu_get_calibrator_status(mcu_data, CW_MAGNETIC, data) >= 0)
+ return scnprintf(buf, PAGE_SIZE, "0x%x\n", data[0]);
+
+ return scnprintf(buf, PAGE_SIZE, "0x1\n");
+}
+
+static ssize_t show_calibrator_status_gyro(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+ u8 data[6] = {0};
+
+ if (cwmcu_get_calibrator_status(mcu_data, CW_GYRO, data) >= 0)
+ return scnprintf(buf, PAGE_SIZE, "0x%x\n", data[0]);
+
+ return scnprintf(buf, PAGE_SIZE, "0x1\n");
+}
+
+static ssize_t set_k_value(struct cwmcu_data *mcu_data, const char *buf,
+ size_t count, u8 reg_addr, u8 len)
+{
+ int i;
+ long data_temp[len];
+ char *str_buf;
+ char *running;
+ int error;
+
+ D(
+ "%s: count = %lu, strlen(buf) = %lu, PAGE_SIZE = %lu,"
+ " reg_addr = 0x%x\n",
+ __func__, count, strlen(buf), PAGE_SIZE, reg_addr);
+
+ str_buf = kstrndup(buf, count, GFP_KERNEL);
+ if (str_buf == NULL) {
+ E("%s: cannot allocate buffer\n", __func__);
+ return -ENOMEM;
+ }
+ running = str_buf;
+
+ for (i = 0; i < len; i++) {
+		char *token;
+
+ token = strsep(&running, " ");
+
+ if (token == NULL) {
+ D("%s: i = %d\n", __func__, i);
+ break;
+ } else {
+ if (reg_addr ==
+ CW_I2C_REG_SENSORS_CALIBRATOR_SET_DATA_PRESSURE)
+ error = kstrtol(token, 16, &data_temp[i]);
+ else
+ error = kstrtol(token, 10, &data_temp[i]);
+ if (error) {
+ E("%s: kstrtol fails, error = %d, i = %d\n",
+ __func__, error, i);
+ kfree(str_buf);
+ return error;
+ }
+ }
+ }
+ kfree(str_buf);
+
+ D("Set calibration by attr (%ld, %ld, %ld), len = %u, reg_addr = 0x%x\n"
+ , data_temp[0], data_temp[1], data_temp[2], len, reg_addr);
+
+ cwmcu_powermode_switch(mcu_data, 1);
+
+ mutex_lock(&mcu_data->group_i2c_lock);
+ for (i = 0; i < len; i++) {
+ u8 data = (u8)(data_temp[i]);
+		/* Firmware can't handle multi-byte writes */
+ error = CWMCU_i2c_write(mcu_data, reg_addr, &data, 1);
+ if (error < 0) {
+ mutex_unlock(&mcu_data->group_i2c_lock);
+ cwmcu_powermode_switch(mcu_data, 0);
+ E("%s: error = %d, i = %d\n", __func__, error, i);
+ return -EIO;
+ }
+ }
+ mutex_unlock(&mcu_data->group_i2c_lock);
+
+ cwmcu_powermode_switch(mcu_data, 0);
+
+ return count;
+}
+
+static ssize_t set_k_value_acc_f(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+
+ return set_k_value(mcu_data, buf, count,
+ CW_I2C_REG_SENSORS_CALIBRATOR_SET_DATA_ACC,
+ ACC_CALIBRATOR_LEN);
+}
+
+
+static ssize_t set_k_value_mag_f(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+
+ return set_k_value(mcu_data, buf, count,
+ CW_I2C_REG_SENSORS_CALIBRATOR_SET_DATA_MAG,
+ MAG_CALIBRATOR_LEN);
+}
+
+static ssize_t set_k_value_gyro_f(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+
+ return set_k_value(mcu_data, buf, count,
+ CW_I2C_REG_SENSORS_CALIBRATOR_SET_DATA_GYRO,
+ GYRO_CALIBRATOR_LEN);
+}
+
+static ssize_t set_k_value_barometer_f(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+
+ return set_k_value(mcu_data, buf, count,
+ CW_I2C_REG_SENSORS_CALIBRATOR_SET_DATA_PRESSURE,
+ PRESSURE_CALIBRATOR_LEN);
+}
+
+static ssize_t led_enable(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+ int error;
+ u8 data;
+ long data_temp = 0;
+
+ error = kstrtol(buf, 10, &data_temp);
+ if (error) {
+ E("%s: kstrtol fails, error = %d\n", __func__, error);
+ return error;
+ }
+
+ data = data_temp ? 2 : 4;
+
+ I("LED %s\n", (data == 2) ? "ENABLE" : "DISABLE");
+
+ error = CWMCU_i2c_write_power(mcu_data, 0xD0, &data, 1);
+ if (error < 0) {
+ E("%s: error = %d\n", __func__, error);
+ return -EIO;
+ }
+
+ return count;
+}
+
+static ssize_t get_k_value(struct cwmcu_data *mcu_data, int type, char *buf,
+ char *data, unsigned len)
+{
+ if (cwmcu_get_calibrator(mcu_data, type, data, len) < 0)
+ memset(data, 0, len);
+
+ return sprint_data(buf, data, len);
+}
+
+static ssize_t get_k_value_acc_f(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+ u8 data[ACC_CALIBRATOR_LEN];
+
+ return get_k_value(mcu_data, CW_ACCELERATION, buf, data, sizeof(data));
+}
+
+static ssize_t get_k_value_acc_rl_f(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+ u8 data[ACC_CALIBRATOR_RL_LEN] = {0};
+
+	if (CWMCU_i2c_read_power(mcu_data,
+			CW_I2C_REG_SENSORS_CALIBRATOR_RESULT_RL_ACC,
+			data, sizeof(data)) >= 0) {
+
+ if (DEBUG_FLAG_GSENSOR == 1) {
+ int i;
+
+ for (i = 0; i < sizeof(data); i++)
+ D("data[%d]: %u\n", i, data[i]);
+ }
+
+ mcu_data->gs_kvalue_L1 = ((s8)data[1] << 8) | data[0];
+ mcu_data->gs_kvalue_L2 = ((s8)data[3] << 8) | data[2];
+ mcu_data->gs_kvalue_L3 = ((s8)data[5] << 8) | data[4];
+ mcu_data->gs_kvalue_R1 = ((s8)data[7] << 8) | data[6];
+ mcu_data->gs_kvalue_R2 = ((s8)data[9] << 8) | data[8];
+ mcu_data->gs_kvalue_R3 = ((s8)data[11] << 8) | data[10];
+ }
+
+ return sprint_data(buf, data, sizeof(data));
+}
+
+static ssize_t ap_get_k_value_acc_rl_f(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+
+ return scnprintf(buf, PAGE_SIZE, "%d %d %d %d %d %d\n",
+ (s16)mcu_data->gs_kvalue_L1,
+ (s16)mcu_data->gs_kvalue_L2,
+ (s16)mcu_data->gs_kvalue_L3,
+ (s16)mcu_data->gs_kvalue_R1,
+ (s16)mcu_data->gs_kvalue_R2,
+ (s16)mcu_data->gs_kvalue_R3);
+}
+
+static ssize_t get_k_value_mag_f(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+ u8 data[MAG_CALIBRATOR_LEN];
+
+ return get_k_value(mcu_data, CW_MAGNETIC, buf, data, sizeof(data));
+}
+
+static ssize_t get_k_value_gyro_f(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+ u8 data[GYRO_CALIBRATOR_LEN];
+
+ return get_k_value(mcu_data, CW_GYRO, buf, data, sizeof(data));
+}
+
+static ssize_t get_k_value_light_f(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+ u8 data[LIGHT_CALIBRATOR_LEN] = {0};
+
+ if (cwmcu_get_calibrator(mcu_data, CW_LIGHT, data, sizeof(data)) < 0) {
+ E("%s: Get LIGHT Calibrator fails\n", __func__);
+ return -EIO;
+ }
+ return scnprintf(buf, PAGE_SIZE, "%x %x %x %x\n", data[0], data[1],
+ data[2], data[3]);
+}
+
+static ssize_t get_k_value_barometer_f(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+ u8 data[PRESSURE_CALIBRATOR_LEN];
+
+ return get_k_value(mcu_data, CW_PRESSURE, buf, data, sizeof(data));
+}
+
+static int CWMCU_i2c_read_power(struct cwmcu_data *mcu_data,
+ u8 reg_addr, void *data, u8 len)
+{
+ int ret;
+
+ cwmcu_powermode_switch(mcu_data, 1);
+ ret = CWMCU_i2c_read(mcu_data, reg_addr, data, len);
+ cwmcu_powermode_switch(mcu_data, 0);
+ return ret;
+}
+
+static int CWMCU_i2c_write_power(struct cwmcu_data *mcu_data,
+ u8 reg_addr, const void *data, u8 len)
+{
+ int ret;
+
+ cwmcu_powermode_switch(mcu_data, 1);
+ ret = CWMCU_i2c_write(mcu_data, reg_addr, data, len);
+ cwmcu_powermode_switch(mcu_data, 0);
+ return ret;
+}
+
+static ssize_t get_light_kadc(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+ u8 data[4] = {0};
+ u16 light_gadc;
+ u16 light_kadc;
+
+ CWMCU_i2c_read_power(mcu_data, LIGHT_SENSORS_CALIBRATION_DATA, data,
+ sizeof(data));
+
+ light_gadc = (data[1] << 8) | data[0];
+ light_kadc = (data[3] << 8) | data[2];
+ return scnprintf(buf, PAGE_SIZE, "gadc = 0x%x, kadc = 0x%x", light_gadc,
+ light_kadc);
+}
+
+static ssize_t get_firmware_version(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+ u8 firmware_version[FW_VER_COUNT] = {0};
+
+ CWMCU_i2c_read_power(mcu_data, FIRMWARE_VERSION, firmware_version,
+ sizeof(firmware_version));
+
+ return scnprintf(buf, PAGE_SIZE,
+ "Firmware Architecture version %u, "
+ "Sense version %u, Cywee lib version %u,"
+ " Water number %u"
+ ", Active Engine %u, Project Mapping %u\n",
+ firmware_version[0], firmware_version[1],
+ firmware_version[2], firmware_version[3],
+ firmware_version[4], firmware_version[5]);
+}
+
+static ssize_t get_hall_sensor(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+ u8 hall_sensor = 0;
+
+ CWMCU_i2c_read_power(mcu_data, CWSTM32_READ_Hall_Sensor,
+ &hall_sensor, 1);
+
+ return scnprintf(buf, PAGE_SIZE,
+ "Hall_1(S, N) = (%u, %u), Hall_2(S, N)"
+ " = (%u, %u), Hall_3(S, N) = (%u, %u)\n",
+ !!(hall_sensor & 0x1), !!(hall_sensor & 0x2),
+ !!(hall_sensor & 0x4), !!(hall_sensor & 0x8),
+ !!(hall_sensor & 0x10), !!(hall_sensor & 0x20));
+}
+
+static ssize_t get_barometer(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+ u8 data[6] = {0};
+
+ CWMCU_i2c_read_power(mcu_data, CWSTM32_READ_Pressure, data,
+ sizeof(data));
+
+ return scnprintf(buf, PAGE_SIZE, "%x %x %x %x\n", data[0], data[1],
+ data[2], data[3]);
+}
+
+static ssize_t get_light_polling(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+ u8 data[REPORT_EVENT_COMMON_LEN] = {0};
+ u8 data_polling_enable;
+ u16 light_adc;
+ int rc;
+
+ data_polling_enable = CW_MCU_BIT_LIGHT_POLLING;
+
+ cwmcu_powermode_switch(mcu_data, 1);
+
+ mutex_lock(&mcu_data->group_i2c_lock);
+ rc = CWMCU_i2c_write(mcu_data, LIGHT_SENSORS_STATUS,
+ &data_polling_enable, 1);
+ if (rc < 0) {
+ mutex_unlock(&mcu_data->group_i2c_lock);
+ cwmcu_powermode_switch(mcu_data, 0);
+ E("%s: write fail, rc = %d\n", __func__, rc);
+ return rc;
+ }
+	rc = CWMCU_i2c_read(mcu_data, CWSTM32_READ_Light, data, sizeof(data));
+ if (rc < 0) {
+ mutex_unlock(&mcu_data->group_i2c_lock);
+ cwmcu_powermode_switch(mcu_data, 0);
+ E("%s: read fail, rc = %d\n", __func__, rc);
+ return rc;
+ }
+ mutex_unlock(&mcu_data->group_i2c_lock);
+
+ cwmcu_powermode_switch(mcu_data, 0);
+
+ light_adc = (data[2] << 8) | data[1];
+
+ I("poll light[%x]=%u\n", light_adc, data[0]);
+
+ return scnprintf(buf, PAGE_SIZE, "ADC[0x%04X] => level %u\n", light_adc,
+ data[0]);
+}
+
+
+static ssize_t read_mcu_data(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+ int i;
+ u8 reg_addr;
+ u8 len;
+ long data_temp[2] = {0};
+ u8 mcu_rdata[128] = {0};
+ char *str_buf;
+ char *running;
+
+ str_buf = kstrndup(buf, count, GFP_KERNEL);
+ if (str_buf == NULL) {
+ E("%s: cannot allocate buffer\n", __func__);
+ return -ENOMEM;
+ }
+ running = str_buf;
+
+ for (i = 0; i < ARRAY_SIZE(data_temp); i++) {
+ int error;
+ char *token;
+
+ token = strsep(&running, " ");
+
+ if (i == 0)
+ error = kstrtol(token, 16, &data_temp[i]);
+ else {
+ if (token == NULL) {
+ data_temp[i] = 1;
+ D("%s: token 2 missing\n", __func__);
+ break;
+ } else
+ error = kstrtol(token, 10, &data_temp[i]);
+ }
+ if (error) {
+ E("%s: kstrtol fails, error = %d, i = %d\n",
+ __func__, error, i);
+ kfree(str_buf);
+ return error;
+ }
+ }
+ kfree(str_buf);
+
+ /* TESTME for changing array to variable */
+ reg_addr = (u8)(data_temp[0]);
+ len = (u8)(data_temp[1]);
+
+ if (len < sizeof(mcu_rdata)) {
+ CWMCU_i2c_read_power(mcu_data, reg_addr, mcu_rdata, len);
+
+ for (i = 0; i < len; i++)
+ D("read mcu reg_addr = 0x%x, reg[%u] = 0x%x\n",
+ reg_addr, (reg_addr + i), mcu_rdata[i]);
+ } else
+ E("%s: len = %u, out of range\n", __func__, len);
+
+ return count;
+}
+
+static inline bool retry_exhausted(struct cwmcu_data *mcu_data)
+{
+ return ((mcu_data->i2c_total_retry > RETRY_TIMES) ||
+ (mcu_data->i2c_latch_retry > LATCH_TIMES));
+}
+
+static inline void retry_reset(struct cwmcu_data *mcu_data)
+{
+ mcu_data->i2c_total_retry = 0;
+ mcu_data->i2c_latch_retry = 0;
+}
+
+static int CWMCU_i2c_write(struct cwmcu_data *mcu_data,
+ u8 reg_addr, const void *data, u8 len)
+{
+ s32 write_res;
+ int i;
+ const u8 *u8_data = data;
+
+ if (DEBUG_DISABLE) {
+ mcu_data->disable_access_count++;
+ if ((mcu_data->disable_access_count % 100) == 0)
+ I("%s: DEBUG_DISABLE = %d\n", __func__, DEBUG_DISABLE);
+ return len;
+ }
+
+ if (mcu_data->is_block_i2c) {
+ if (time_after(jiffies,
+ mcu_data->reset_jiffies + RESET_PERIOD))
+ mcu_data->is_block_i2c = 0;
+ return len;
+ }
+
+ mutex_lock(&mcu_data->mutex_lock);
+ if (mcu_data->suspended) {
+ mutex_unlock(&mcu_data->mutex_lock);
+ return len;
+ }
+ mutex_unlock(&mcu_data->mutex_lock);
+
+ mutex_lock(&mcu_data->activated_i2c_lock);
+ if (retry_exhausted(mcu_data)) {
+ mutex_unlock(&mcu_data->activated_i2c_lock);
+ D("%s: mcu_data->i2c_total_retry = %d, i2c_latch_retry = %d\n",
+ __func__,
+ mcu_data->i2c_total_retry, mcu_data->i2c_latch_retry);
+ /* Try to recover HUB in low CPU utilization */
+ mcu_data->w_activated_i2c = true;
+ queue_work(mcu_data->mcu_wq, &mcu_data->one_shot_work);
+ return -EIO;
+ }
+
+ for (i = 0; i < len; i++) {
+ while (!retry_exhausted(mcu_data)) {
+ write_res = i2c_smbus_write_byte_data(mcu_data->client,
+ reg_addr, u8_data[i]);
+ if (write_res >= 0) {
+ retry_reset(mcu_data);
+ break;
+ }
+ gpio_make_falling_edge(mcu_data->gpio_wake_mcu);
+ if (write_res == LATCH_ERROR_NO)
+ mcu_data->i2c_latch_retry++;
+ mcu_data->i2c_total_retry++;
+ E(
+ "%s: i2c write error, write_res = %d, total_retry ="
+ " %d, latch_retry = %d, addr = 0x%x, val = 0x%x\n",
+ __func__, write_res, mcu_data->i2c_total_retry,
+ mcu_data->i2c_latch_retry, reg_addr, u8_data[i]);
+ }
+
+ if (retry_exhausted(mcu_data)) {
+ mutex_unlock(&mcu_data->activated_i2c_lock);
+ E("%s: mcu_data->i2c_total_retry = %d, "
+ "i2c_latch_retry = %d, EIO\n", __func__,
+ mcu_data->i2c_total_retry, mcu_data->i2c_latch_retry);
+ return -EIO;
+ }
+ }
+
+ mutex_unlock(&mcu_data->activated_i2c_lock);
+
+ return 0;
+}
+
+static int CWMCU_i2c_multi_write(struct cwmcu_data *mcu_data,
+ u8 reg_addr, const void *data, u8 len)
+{
+ int rc, i;
+ const u8 *u8_data = data;
+
+ mutex_lock(&mcu_data->group_i2c_lock);
+
+ for (i = 0; i < len; i++) {
+ rc = CWMCU_i2c_write(mcu_data, reg_addr, &u8_data[i], 1);
+ if (rc) {
+ mutex_unlock(&mcu_data->group_i2c_lock);
+ E("%s: CWMCU_i2c_write fails, rc = %d, i = %d\n",
+ __func__, rc, i);
+ return -EIO;
+ }
+ }
+
+ mutex_unlock(&mcu_data->group_i2c_lock);
+ return 0;
+}
+
+static int cwmcu_set_sensor_kvalue(struct cwmcu_data *mcu_data)
+{
+ /* Write single Byte because firmware can't write multi bytes now */
+ u8 *gs_data = (u8 *)&mcu_data->gs_kvalue; /* gs_kvalue is u32 */
+ u8 *gy_data = (u8 *)&mcu_data->gy_kvalue; /* gy_kvalue is u32 */
+ u8 *bs_data = (u8 *)&mcu_data->bs_kvalue; /* bs_kvalue is u32 */
+ u8 firmware_version[FW_VER_COUNT] = {0};
+
+ mcu_data->gs_calibrated = 0;
+ mcu_data->gy_calibrated = 0;
+ mcu_data->ls_calibrated = 0;
+ mcu_data->bs_calibrated = 0;
+
+ CWMCU_i2c_read(mcu_data, FIRMWARE_VERSION, firmware_version,
+ sizeof(firmware_version));
+ I(
+ "Firmware Architecture version %u, Sense version %u,"
+ " Cywee lib version %u, Water number %u"
+ ", Active Engine %u, Project Mapping %u\n",
+ firmware_version[0], firmware_version[1], firmware_version[2],
+ firmware_version[3], firmware_version[4], firmware_version[5]);
+
+ if (gs_data[3] == 0x67) {
+ __be32 be32_gs_data = cpu_to_be32(mcu_data->gs_kvalue);
+ gs_data = (u8 *)&be32_gs_data;
+
+ CWMCU_i2c_write(mcu_data,
+ CW_I2C_REG_SENSORS_CALIBRATOR_SET_DATA_ACC,
+ gs_data + 1, ACC_CALIBRATOR_LEN);
+ mcu_data->gs_calibrated = 1;
+ D("Set g-sensor kvalue (x, y, z) = (0x%x, 0x%x, 0x%x)\n",
+ gs_data[1], gs_data[2], gs_data[3]);
+ }
+
+ if (gy_data[3] == 0x67) {
+ __be32 be32_gy_data = cpu_to_be32(mcu_data->gy_kvalue);
+ gy_data = (u8 *)&be32_gy_data;
+
+ CWMCU_i2c_write(mcu_data,
+ CW_I2C_REG_SENSORS_CALIBRATOR_SET_DATA_GYRO,
+ gy_data + 1, GYRO_CALIBRATOR_LEN);
+ mcu_data->gy_calibrated = 1;
+ D("Set gyro-sensor kvalue (x, y, z) = (0x%x, 0x%x, 0x%x)\n",
+ gy_data[1], gy_data[2], gy_data[3]);
+ }
+
+ if ((mcu_data->als_kvalue & 0x6DA50000) == 0x6DA50000) {
+ __le16 als_data[2];
+ als_data[0] = cpu_to_le16(0x0a38);
+ als_data[1] = cpu_to_le16(mcu_data->als_kvalue);
+ CWMCU_i2c_write(mcu_data,
+ CW_I2C_REG_SENSORS_CALIBRATOR_SET_DATA_LIGHT,
+ als_data, LIGHT_CALIBRATOR_LEN);
+ mcu_data->ls_calibrated = 1;
+ D("Set light-sensor kvalue = 0x%x\n", als_data[1]);
+ }
+
+ if (mcu_data->bs_kheader == 0x67) {
+ __be32 be32_bs_data = cpu_to_be32(mcu_data->bs_kvalue);
+
+ CWMCU_i2c_write(mcu_data,
+ CW_I2C_REG_SENSORS_CALIBRATOR_SET_DATA_PRESSURE,
+ &be32_bs_data, PRESSURE_CALIBRATOR_LEN);
+ mcu_data->bs_calibrated = 1;
+ D(
+ "Set barometer kvalue (a, b, c, d) = "
+ "(0x%x, 0x%x, 0x%x, 0x%x)\n",
+ bs_data[3], bs_data[2], bs_data[1], bs_data[0]);
+ }
+ I("Sensor calibration matrix is (gs %u gy %u ls %u bs %u)\n",
+ mcu_data->gs_calibrated, mcu_data->gy_calibrated,
+ mcu_data->ls_calibrated, mcu_data->bs_calibrated);
+ return 0;
+}
+
+
+static int cwmcu_sensor_placement(struct cwmcu_data *mcu_data)
+{
+ D("Set Sensor Placement\n");
+ CWMCU_i2c_write(mcu_data, GENSOR_POSITION, &mcu_data->acceleration_axes,
+ 1);
+ CWMCU_i2c_write(mcu_data, COMPASS_POSITION, &mcu_data->magnetic_axes,
+ 1);
+ CWMCU_i2c_write(mcu_data, GYRO_POSITION, &mcu_data->gyro_axes, 1);
+
+ return 0;
+}
+
+static void cwmcu_i2c_write_group(struct cwmcu_data *mcu_data, u8 write_addr,
+ u32 enable_list)
+{
+ int i;
+ __le32 buf = cpu_to_le32(enable_list);
+ u8 *data = (u8 *)&buf;
+
+ for (i = 0; i < sizeof(buf); ++i) {
+ D("%s: write_addr = 0x%x, write_val = 0x%x\n",
+ __func__, write_addr + i, data[i]);
+ CWMCU_i2c_write(mcu_data, write_addr + i, data + i, 1);
+ }
+}
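The byte layout that cwmcu_i2c_write_group() puts on the wire can be sketched in isolation: the 32-bit enable mask is serialized little-endian, one byte per consecutive register address. A minimal userspace mirror (function name hypothetical, not part of the driver):

```c
#include <stdint.h>

/* Little-endian split of a 32-bit enable mask, one byte per
 * consecutive register, matching cwmcu_i2c_write_group(). */
static void split_enable_list(uint32_t enable_list, uint8_t out[4])
{
	int i;

	for (i = 0; i < 4; i++)
		out[i] = (uint8_t)(enable_list >> (8 * i));
}
```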
+
+static int cwmcu_restore_status(struct cwmcu_data *mcu_data)
+{
+ int i, rc;
+ u8 data;
+ u8 reg_value = 0;
+ int delay_ms;
+
+ D("Restore status\n");
+
+ mcu_data->enabled_list |= (1LL << HTC_MAGIC_COVER);
+
+ cwmcu_i2c_write_group(mcu_data, CWSTM32_ENABLE_REG,
+ mcu_data->enabled_list
+ | (mcu_data->enabled_list >> 32));
+ cwmcu_i2c_write_group(mcu_data, CW_BATCH_ENABLE_REG,
+ mcu_data->batched_list);
+ cwmcu_i2c_write_group(mcu_data, CW_WAKE_UP_BATCH_ENABLE_REG,
+ mcu_data->batched_list >> 32);
+
+ D("%s: enable_list = 0x%llx\n", __func__, mcu_data->enabled_list);
+
+ for (i = 0; i < CW_SENSORS_ID_TOTAL; i++) {
+ delay_ms = mcu_data->report_period[i] / MS_TO_PERIOD;
+
+ rc = firmware_odr(mcu_data, i, delay_ms);
+ if (rc) {
+ E("%s: firmware_odr fails, rc = %d, i = %d\n",
+ __func__, rc, i);
+ return -EIO;
+ }
+ }
+
+#ifdef MCU_WARN_MSGS
+ reg_value = 1;
+ rc = CWMCU_i2c_write(mcu_data, CW_I2C_REG_WARN_MSG_ENABLE,
+ &reg_value, 1);
+ if (rc) {
+ E("%s: CWMCU_i2c_write(WARN_MSG) fails, rc = %d, i = %d\n",
+ __func__, rc, i);
+ return -EIO;
+ }
+ D("%s: WARN_MSGS enabled\n", __func__);
+#endif
+
+ reg_value = 1;
+ rc = CWMCU_i2c_write(mcu_data, CW_I2C_REG_WATCH_DOG_ENABLE,
+ &reg_value, 1);
+ if (rc) {
+ E("%s: CWMCU_i2c_write(WATCH_DOG) fails, rc = %d\n",
+ __func__, rc);
+ return -EIO;
+ }
+ D("%s: Watch dog enabled\n", __func__);
+
+ /* Inform SensorHUB that CPU is going to resume */
+ data = 1;
+ CWMCU_i2c_write(mcu_data, CW_CPU_STATUS_REG, &data, 1);
+ D("%s: write_addr = 0x%x, write_data = 0x%x\n", __func__,
+ CW_CPU_STATUS_REG, data);
+
+ return 0;
+}
+
+static int check_fw_version(struct cwmcu_data *mcu_data,
+ const struct firmware *fw)
+{
+ u8 firmware_version[FW_VER_COUNT] = {0};
+ u8 fw_version[FW_VER_COUNT];
+ char char_ver[4];
+ unsigned long ul_ver;
+ int i;
+ int rc;
+
+ if (mcu_data->fw_update_status & (FW_FLASH_FAILED | FW_ERASE_FAILED))
+ return 1;
+
+ CWMCU_i2c_read(mcu_data, FIRMWARE_VERSION, firmware_version,
+ sizeof(firmware_version));
+
+ /* Version example: HTCSHUB001.000.001.005.000.001 */
+ if (!strncmp(&fw->data[fw->size - FW_VER_INFO_LEN], "HTCSHUB",
+ sizeof("HTCSHUB") - 1)) {
+ for (i = 0; i < FW_VER_COUNT; i++) {
+ memcpy(char_ver, &fw->data[fw->size - FW_VER_INFO_LEN +
+ FW_VER_HEADER_LEN + (i * 4)],
+ sizeof(char_ver));
+ char_ver[sizeof(char_ver) - 1] = 0;
+ rc = kstrtol(char_ver, 10, &ul_ver);
+ if (rc) {
+ E("%s: kstrtol fails, rc = %d, i = %d\n",
+ __func__, rc, i);
+ return rc;
+ }
+ fw_version[i] = ul_ver;
+ D(
+ "%s: fw_version[%d] = %u, firmware_version[%d] ="
+ " %u\n", __func__, i, fw_version[i]
+ , i, firmware_version[i]);
+ }
+
+ if (memcmp(firmware_version, fw_version,
+ sizeof(firmware_version))) {
+ I("%s: Sensor HUB firmware update is required\n",
+ __func__);
+ return 1;
+ } else {
+ I("%s: Sensor HUB firmware is up-to-date\n", __func__);
+ return 0;
+ }
+
+ } else {
+ E("%s: fw version incorrect!\n", __func__);
+ return -ESPIPE;
+ }
+ return 0;
+}
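The version comparison above parses a dotted ASCII tail like "HTCSHUB001.000.001.005.000.001" into one byte per field. A standalone sketch of that parsing, assuming the same 3-digit fields on a 4-byte stride as check_fw_version() (the helper name and the inlined FW_VER_COUNT value of 6 are taken from this driver, not a public API):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define FW_VER_COUNT 6

/* Parse "HTCSHUBaaa.bbb.ccc.ddd.eee.fff" into six version bytes.
 * Returns 0 on success, -1 if the magic prefix is missing. */
static int parse_fw_version(const char *tail, uint8_t out[FW_VER_COUNT])
{
	char field[4];
	int i;

	if (strncmp(tail, "HTCSHUB", 7))
		return -1;
	tail += 7;
	for (i = 0; i < FW_VER_COUNT; i++) {
		memcpy(field, tail + i * 4, 3);	/* 3 digits per field */
		field[3] = '\0';
		out[i] = (uint8_t)strtol(field, NULL, 10);
	}
	return 0;
}
```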
+
+static int i2c_rx_bytes_locked(struct cwmcu_data *mcu_data, u8 *data,
+ u16 length)
+{
+ int retry;
+
+ struct i2c_msg msg[] = {
+ {
+ .addr = mcu_data->client->addr,
+ .flags = I2C_M_RD,
+ .len = length,
+ .buf = data,
+ }
+ };
+
+ for (retry = 0; retry < UPDATE_FIRMWARE_RETRY_TIMES; retry++) {
+ if (__i2c_transfer(mcu_data->client->adapter, msg, 1) == 1)
+ break;
+ mdelay(10);
+ }
+
+ if (retry == UPDATE_FIRMWARE_RETRY_TIMES) {
+ E("%s: Retry over %d\n", __func__,
+ UPDATE_FIRMWARE_RETRY_TIMES);
+ return -EIO;
+ }
+ return 0;
+}
+
+static int i2c_tx_bytes_locked(struct cwmcu_data *mcu_data, u8 *data,
+ u16 length)
+{
+ int retry;
+
+ struct i2c_msg msg[] = {
+ {
+ .addr = mcu_data->client->addr,
+ .flags = 0,
+ .len = length,
+ .buf = data,
+ }
+ };
+
+ for (retry = 0; retry < UPDATE_FIRMWARE_RETRY_TIMES; retry++) {
+ if (__i2c_transfer(mcu_data->client->adapter, msg, 1) == 1)
+ break;
+ mdelay(10);
+ }
+
+ if (retry == UPDATE_FIRMWARE_RETRY_TIMES) {
+ E("%s: Retry over %d\n", __func__,
+ UPDATE_FIRMWARE_RETRY_TIMES);
+ return -EIO;
+ }
+ return 0;
+}
+
+static int erase_mcu_flash_mem(struct cwmcu_data *mcu_data)
+{
+ u8 i2c_data[3] = {0};
+ int rc;
+
+ i2c_data[0] = 0x44;
+ i2c_data[1] = 0xBB;
+ rc = i2c_tx_bytes_locked(mcu_data, i2c_data, 2);
+ if (rc) {
+ E("%s: Failed to write 0xBB44, rc = %d\n", __func__, rc);
+ return rc;
+ }
+
+ rc = i2c_rx_bytes_locked(mcu_data, i2c_data, 1);
+ if (rc) {
+ E("%s: Failed to read, rc = %d\n", __func__, rc);
+ return rc;
+ }
+
+ if (i2c_data[0] != FW_RESPONSE_CODE) {
+ E("%s: FW NACK, i2c_data = 0x%x\n", __func__, i2c_data[0]);
+ return 1;
+ }
+
+
+ i2c_data[0] = 0xFF;
+ i2c_data[1] = 0xFF;
+ i2c_data[2] = 0;
+ rc = i2c_tx_bytes_locked(mcu_data, i2c_data, 3);
+ if (rc) {
+ E("%s: Failed to write_2, rc = %d\n", __func__, rc);
+ return rc;
+ }
+
+ D("%s: Tx size = %d\n", __func__, 3);
+ /* Erase needs 9 sec in worst case */
+ msleep(mcu_data->erase_fw_wait + FW_ERASE_MIN);
+ D("%s: After delay, Tx size = %d\n", __func__, 3);
+
+ return 0;
+}
+
+static int update_mcu_flash_mem_block(struct cwmcu_data *mcu_data,
+ u32 start_address,
+ u8 write_buf[],
+ int numberofbyte)
+{
+ u8 i2c_data[FW_I2C_LEN_LIMIT+2] = {0};
+ __be32 to_i2c_command;
+ int data_len, checksum;
+ int i;
+ int rc;
+
+ i2c_data[0] = 0x31;
+ i2c_data[1] = 0xCE;
+ rc = i2c_tx_bytes_locked(mcu_data, i2c_data, 2);
+ if (rc) {
+ E("%s: Failed to write 0xCE31, rc = %d\n", __func__, rc);
+ return rc;
+ }
+
+ rc = i2c_rx_bytes_locked(mcu_data, i2c_data, 1);
+ if (rc) {
+ E("%s: Failed to read, rc = %d\n", __func__, rc);
+ return rc;
+ }
+
+ if (i2c_data[0] != FW_RESPONSE_CODE) {
+ E("%s: FW NACK, i2c_data = 0x%x\n", __func__, i2c_data[0]);
+ return 1;
+ }
+
+
+ to_i2c_command = cpu_to_be32(start_address);
+ memcpy(i2c_data, &to_i2c_command, sizeof(__be32));
+ i2c_data[4] = i2c_data[0] ^ i2c_data[1] ^ i2c_data[2] ^ i2c_data[3];
+ rc = i2c_tx_bytes_locked(mcu_data, i2c_data, 5);
+ if (rc) {
+ E("%s: Failed to write_2, rc = %d\n", __func__, rc);
+ return rc;
+ }
+
+ rc = i2c_rx_bytes_locked(mcu_data, i2c_data, 1);
+ if (rc) {
+ E("%s: Failed to read_2, rc = %d\n", __func__, rc);
+ return rc;
+ }
+
+ if (i2c_data[0] != FW_RESPONSE_CODE) {
+ E("%s: FW NACK_2, i2c_data = 0x%x\n", __func__, i2c_data[0]);
+ return 1;
+ }
+
+
+ checksum = 0x0;
+ data_len = numberofbyte + 2;
+
+ i2c_data[0] = numberofbyte - 1;
+
+ for (i = 0; i < numberofbyte; i++)
+ i2c_data[i+1] = write_buf[i];
+
+ for (i = 0; i < (data_len - 1); i++)
+ checksum ^= i2c_data[i];
+
+ i2c_data[i] = checksum;
+ rc = i2c_tx_bytes_locked(mcu_data, i2c_data, data_len);
+ if (rc) {
+ E("%s: Failed to write_3, rc = %d\n", __func__, rc);
+ return rc;
+ }
+
+ i = numberofbyte * 35;
+ usleep_range(i, i + 1000);
+
+ rc = i2c_rx_bytes_locked(mcu_data, i2c_data, 1);
+ if (rc) {
+ E("%s: Failed to read_3, rc = %d\n", __func__, rc);
+ return rc;
+ }
+
+ if (i2c_data[0] != FW_RESPONSE_CODE) {
+ E("%s: FW NACK_3, i2c_data = 0x%x\n", __func__, i2c_data[0]);
+ return 1;
+ }
+
+ return 0;
+}
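The data phase above frames each write as: byte 0 is (N - 1), bytes 1..N are the payload, and the final byte is the XOR of everything before it. A minimal userspace mirror of that framing, assuming the same layout as update_mcu_flash_mem_block() (helper names are hypothetical):

```c
#include <stdint.h>
#include <stddef.h>

/* XOR checksum over the first len_without_csum bytes of a frame. */
static uint8_t frame_checksum(const uint8_t *frame, size_t len_without_csum)
{
	uint8_t csum = 0;
	size_t i;

	for (i = 0; i < len_without_csum; i++)
		csum ^= frame[i];
	return csum;
}

/* Build a write frame: [N-1][payload...][xor checksum].
 * Returns the total number of bytes to transmit. */
static size_t build_write_frame(uint8_t *frame, const uint8_t *payload,
				size_t n)
{
	size_t i;

	frame[0] = (uint8_t)(n - 1);	/* byte count minus one */
	for (i = 0; i < n; i++)
		frame[i + 1] = payload[i];
	frame[n + 1] = frame_checksum(frame, n + 1);
	return n + 2;
}
```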
+
+static void update_firmware(const struct firmware *fw, void *context)
+{
+ struct cwmcu_data *mcu_data = context;
+ int ret;
+ u8 write_buf[FW_I2C_LEN_LIMIT] = {0};
+ int block_size, data_len;
+ u32 address_point;
+ int i;
+
+ cwmcu_powermode_switch(mcu_data, 1);
+
+ if (!fw) {
+ E("%s: fw does not exist\n", __func__);
+ mcu_data->fw_update_status |= FW_DOES_NOT_EXIST;
+ goto fast_exit;
+ }
+
+ D("%s: firmware size = %zu\n", __func__, fw->size);
+
+ ret = check_fw_version(mcu_data, fw);
+ if (ret == 1) { /* Perform firmware update */
+
+ mutex_lock(&mcu_data->activated_i2c_lock);
+ i2c_lock_adapter(mcu_data->client->adapter);
+
+ mcu_data->client->addr = 0x39;
+
+ gpio_direction_output(mcu_data->gpio_chip_mode, 1);
+ mdelay(10);
+ gpio_direction_output(mcu_data->gpio_reset, 0);
+ mdelay(10);
+ gpio_direction_output(mcu_data->gpio_reset, 1);
+ mdelay(41);
+
+ mcu_data->fw_update_status |= FW_FLASH_FAILED;
+
+ ret = erase_mcu_flash_mem(mcu_data);
+ if (ret) {
+ E("%s: erase mcu flash memory fails, ret = %d\n",
+ __func__, ret);
+ mcu_data->fw_update_status |= FW_ERASE_FAILED;
+ } else {
+ mcu_data->fw_update_status &= ~FW_ERASE_FAILED;
+ }
+
+ D("%s: Start writing firmware\n", __func__);
+
+ block_size = fw->size / FW_I2C_LEN_LIMIT;
+ data_len = fw->size % FW_I2C_LEN_LIMIT;
+ address_point = 0x08000000;
+
+ for (i = 0; i < block_size; i++) {
+ memcpy(write_buf, &fw->data[FW_I2C_LEN_LIMIT*i],
+ FW_I2C_LEN_LIMIT);
+ ret = update_mcu_flash_mem_block(mcu_data,
+ address_point,
+ write_buf,
+ FW_I2C_LEN_LIMIT);
+ if (ret) {
+ E("%s: update_mcu_flash_mem_block fails,"
+ "ret = %d, i = %d\n", __func__, ret, i);
+ goto out;
+ }
+ address_point += FW_I2C_LEN_LIMIT;
+ }
+
+ if (data_len != 0) {
+ memcpy(write_buf, &fw->data[FW_I2C_LEN_LIMIT*i],
+ data_len);
+ ret = update_mcu_flash_mem_block(mcu_data,
+ address_point,
+ write_buf,
+ data_len);
+ if (ret) {
+ E("%s: update_mcu_flash_mem_block fails_2,"
+ "ret = %d\n", __func__, ret);
+ goto out;
+ }
+ }
+ mcu_data->fw_update_status &= ~FW_FLASH_FAILED;
+
+out:
+ D("%s: End writing firmware\n", __func__);
+
+ gpio_direction_output(mcu_data->gpio_chip_mode, 0);
+ mdelay(10);
+ gpio_direction_output(mcu_data->gpio_reset, 0);
+ mdelay(10);
+ gpio_direction_output(mcu_data->gpio_reset, 1);
+
+ /* HUB need at least 500ms to be ready */
+ usleep_range(500000, 1000000);
+
+ mcu_data->client->addr = 0x72;
+
+ i2c_unlock_adapter(mcu_data->client->adapter);
+ mutex_unlock(&mcu_data->activated_i2c_lock);
+
+ }
+ release_firmware(fw);
+
+fast_exit:
+ mcu_data->w_re_init = true;
+ queue_work(mcu_data->mcu_wq, &mcu_data->one_shot_work);
+
+ cwmcu_powermode_switch(mcu_data, 0);
+
+ mcu_data->fw_update_status &= ~FW_UPDATE_QUEUED;
+ D("%s: fw_update_status = 0x%x\n", __func__,
+ mcu_data->fw_update_status);
+
+ if (mcu_data->erase_fw_wait <= (FW_ERASE_MAX - FW_ERASE_MIN - 1000))
+ mcu_data->erase_fw_wait += 1000;
+}
+
+/* Returns the number of read bytes on success */
+static int CWMCU_i2c_read(struct cwmcu_data *mcu_data,
+ u8 reg_addr, void *data, u8 len)
+{
+ s32 rc = 0;
+ u8 *u8_data = data;
+
+ D("%s++: reg_addr = 0x%x, len = %d\n", __func__, reg_addr, len);
+
+ if (DEBUG_DISABLE) {
+ mcu_data->disable_access_count++;
+ if ((mcu_data->disable_access_count % 100) == 0)
+ I("%s: DEBUG_DISABLE = %d\n", __func__, DEBUG_DISABLE);
+ return len;
+ }
+
+ if (mcu_data->is_block_i2c) {
+ if (time_after(jiffies,
+ mcu_data->reset_jiffies + RESET_PERIOD))
+ mcu_data->is_block_i2c = 0;
+ return len;
+ }
+
+ mutex_lock(&mcu_data->mutex_lock);
+ if (mcu_data->suspended) {
+ mutex_unlock(&mcu_data->mutex_lock);
+ return len;
+ }
+ mutex_unlock(&mcu_data->mutex_lock);
+
+ mutex_lock(&mcu_data->activated_i2c_lock);
+ if (retry_exhausted(mcu_data)) {
+ memset(u8_data, 0, len); /* Assign data to 0 when chip NACK */
+
+ /* Try to recover HUB in low CPU utilization */
+ D(
+ "%s: mcu_data->i2c_total_retry = %d, "
+ "mcu_data->i2c_latch_retry = %d\n", __func__,
+ mcu_data->i2c_total_retry,
+ mcu_data->i2c_latch_retry);
+ mcu_data->w_activated_i2c = true;
+ queue_work(mcu_data->mcu_wq, &mcu_data->one_shot_work);
+
+ mutex_unlock(&mcu_data->activated_i2c_lock);
+ return len;
+ }
+
+ while (!retry_exhausted(mcu_data)) {
+ rc = i2c_smbus_read_i2c_block_data(mcu_data->client, reg_addr,
+ len, u8_data);
+ if (rc == len) {
+ retry_reset(mcu_data);
+ break;
+ } else {
+ gpio_make_falling_edge(mcu_data->gpio_wake_mcu);
+ mcu_data->i2c_total_retry++;
+ if (rc == LATCH_ERROR_NO)
+ mcu_data->i2c_latch_retry++;
+ E("%s: rc = %d, total_retry = %d, latch_retry = %d\n",
+ __func__,
+ rc, mcu_data->i2c_total_retry,
+ mcu_data->i2c_latch_retry);
+ }
+ }
+
+ if (retry_exhausted(mcu_data)) {
+ E("%s: total_retry = %d, latch_retry = %d, return\n",
+ __func__, mcu_data->i2c_total_retry,
+ mcu_data->i2c_latch_retry);
+ }
+
+ mutex_unlock(&mcu_data->activated_i2c_lock);
+
+ return rc;
+}
+
+static bool reset_hub(struct cwmcu_data *mcu_data)
+{
+ if (time_after(jiffies, mcu_data->reset_jiffies + RESET_PERIOD)) {
+ gpio_direction_output(mcu_data->gpio_reset, 0);
+ D("%s: gpio_reset = %d\n", __func__,
+ gpio_get_value_cansleep(mcu_data->gpio_reset));
+ usleep_range(10000, 15000);
+ gpio_direction_output(mcu_data->gpio_reset, 1);
+ D("%s: gpio_reset = %d\n", __func__,
+ gpio_get_value_cansleep(mcu_data->gpio_reset));
+
+ retry_reset(mcu_data);
+ mcu_data->i2c_jiffies = jiffies;
+
+ /* HUB need at least 500ms to be ready */
+ usleep_range(500000, 1000000);
+ mcu_data->is_block_i2c = false;
+ } else
+ mcu_data->is_block_i2c = true;
+
+ mcu_data->reset_jiffies = jiffies;
+ return !mcu_data->is_block_i2c;
+}
+
+/* This informs firmware for Output Data Rate of each sensor.
+ * Need powermode held by caller */
+static int firmware_odr(struct cwmcu_data *mcu_data, int sensors_id,
+ int delay_ms)
+{
+ u8 reg_addr;
+ u8 reg_value;
+ int rc;
+
+ switch (sensors_id) {
+ case CW_ACCELERATION:
+ reg_addr = ACCE_UPDATE_RATE;
+ break;
+ case CW_MAGNETIC:
+ reg_addr = MAGN_UPDATE_RATE;
+ break;
+ case CW_GYRO:
+ reg_addr = GYRO_UPDATE_RATE;
+ break;
+ case CW_ORIENTATION:
+ reg_addr = ORIE_UPDATE_RATE;
+ break;
+ case CW_ROTATIONVECTOR:
+ reg_addr = ROTA_UPDATE_RATE;
+ break;
+ case CW_LINEARACCELERATION:
+ reg_addr = LINE_UPDATE_RATE;
+ break;
+ case CW_GRAVITY:
+ reg_addr = GRAV_UPDATE_RATE;
+ break;
+ case CW_MAGNETIC_UNCALIBRATED:
+ reg_addr = MAGN_UNCA_UPDATE_RATE;
+ break;
+ case CW_GYROSCOPE_UNCALIBRATED:
+ reg_addr = GYRO_UNCA_UPDATE_RATE;
+ break;
+ case CW_GAME_ROTATION_VECTOR:
+ reg_addr = GAME_ROTA_UPDATE_RATE;
+ break;
+ case CW_GEOMAGNETIC_ROTATION_VECTOR:
+ reg_addr = GEOM_ROTA_UPDATE_RATE;
+ break;
+ case CW_SIGNIFICANT_MOTION:
+ reg_addr = SIGN_UPDATE_RATE;
+ break;
+ case CW_PRESSURE:
+ reg_addr = PRESSURE_UPDATE_RATE;
+ break;
+ case CW_STEP_COUNTER:
+ reg_addr = STEP_COUNTER_UPDATE_PERIOD;
+ break;
+ case CW_ACCELERATION_W:
+ reg_addr = ACCE_WAKE_UPDATE_RATE;
+ break;
+ case CW_MAGNETIC_W:
+ reg_addr = MAGN_WAKE_UPDATE_RATE;
+ break;
+ case CW_GYRO_W:
+ reg_addr = GYRO_WAKE_UPDATE_RATE;
+ break;
+ case CW_PRESSURE_W:
+ reg_addr = PRESSURE_WAKE_UPDATE_RATE;
+ break;
+ case CW_ORIENTATION_W:
+ reg_addr = ORIE_WAKE_UPDATE_RATE;
+ break;
+ case CW_ROTATIONVECTOR_W:
+ reg_addr = ROTA_WAKE_UPDATE_RATE;
+ break;
+ case CW_LINEARACCELERATION_W:
+ reg_addr = LINE_WAKE_UPDATE_RATE;
+ break;
+ case CW_GRAVITY_W:
+ reg_addr = GRAV_WAKE_UPDATE_RATE;
+ break;
+ case CW_MAGNETIC_UNCALIBRATED_W:
+ reg_addr = MAGN_UNCA_WAKE_UPDATE_RATE;
+ break;
+ case CW_GYROSCOPE_UNCALIBRATED_W:
+ reg_addr = GYRO_UNCA_WAKE_UPDATE_RATE;
+ break;
+ case CW_GAME_ROTATION_VECTOR_W:
+ reg_addr = GAME_ROTA_WAKE_UPDATE_RATE;
+ break;
+ case CW_GEOMAGNETIC_ROTATION_VECTOR_W:
+ reg_addr = GEOM_ROTA_WAKE_UPDATE_RATE;
+ break;
+ case CW_STEP_COUNTER_W:
+ reg_addr = STEP_COUNTER_UPDATE_PERIOD;
+ break;
+ default:
+ reg_addr = 0;
+ D(
+ "%s: Only report_period changed, sensors_id = %d,"
+ " delay_us = %6d\n",
+ __func__, sensors_id,
+ mcu_data->report_period[sensors_id]);
+ return 0;
+ }
+
+ if (delay_ms >= 200)
+ reg_value = UPDATE_RATE_NORMAL;
+ else if (delay_ms >= 100)
+ reg_value = UPDATE_RATE_RATE_10Hz;
+ else if (delay_ms >= 60)
+ reg_value = UPDATE_RATE_UI;
+ else if (delay_ms >= 40)
+ reg_value = UPDATE_RATE_RATE_25Hz;
+ else if (delay_ms >= 20)
+ reg_value = UPDATE_RATE_GAME;
+ else
+ reg_value = UPDATE_RATE_FASTEST;
+
+
+ if ((sensors_id != CW_STEP_COUNTER) && (sensors_id != CW_LIGHT) &&
+ (sensors_id != CW_STEP_COUNTER_W)) {
+ D("%s: reg_addr = 0x%x, reg_value = 0x%x\n",
+ __func__, reg_addr, reg_value);
+
+ rc = CWMCU_i2c_write(mcu_data, reg_addr, &reg_value, 1);
+ if (rc) {
+ E("%s: CWMCU_i2c_write fails, rc = %d\n", __func__, rc);
+ return -EIO;
+ }
+ } else {
+ __le32 period_data;
+
+ period_data = cpu_to_le32(delay_ms);
+
+ D("%s: reg_addr = 0x%x, period_data = 0x%x\n",
+ __func__, reg_addr, period_data);
+
+ rc = CWMCU_i2c_multi_write(mcu_data, reg_addr,
+ &period_data,
+ sizeof(period_data));
+ if (rc) {
+ E("%s: CWMCU_i2c_multi_write fails, rc = %d\n",
+ __func__, rc);
+ return -EIO;
+ }
+ }
+
+ return 0;
+}
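The delay-to-rate bucketing in firmware_odr() maps a requested period onto a small set of firmware rate codes. A standalone sketch of just that mapping; the actual UPDATE_RATE_* register values are firmware-defined, so symbolic names stand in for them here:

```c
/* Rate buckets mirroring the thresholds in firmware_odr(). */
enum update_rate {
	RATE_NORMAL,	/* >= 200 ms */
	RATE_10HZ,	/* >= 100 ms */
	RATE_UI,	/* >=  60 ms */
	RATE_25HZ,	/* >=  40 ms */
	RATE_GAME,	/* >=  20 ms */
	RATE_FASTEST	/* anything faster */
};

static enum update_rate delay_to_rate(int delay_ms)
{
	if (delay_ms >= 200)
		return RATE_NORMAL;
	if (delay_ms >= 100)
		return RATE_10HZ;
	if (delay_ms >= 60)
		return RATE_UI;
	if (delay_ms >= 40)
		return RATE_25HZ;
	if (delay_ms >= 20)
		return RATE_GAME;
	return RATE_FASTEST;
}
```

Note the buckets are closed on the slow side, so a 66 ms request (the driver's default) lands on RATE_UI, not RATE_25Hz.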
+
+int is_continuous_sensor(int sensors_id)
+{
+ switch (sensors_id) {
+ case CW_ACCELERATION:
+ case CW_MAGNETIC:
+ case CW_GYRO:
+ case CW_PRESSURE:
+ case CW_ORIENTATION:
+ case CW_ROTATIONVECTOR:
+ case CW_LINEARACCELERATION:
+ case CW_GRAVITY:
+ case CW_MAGNETIC_UNCALIBRATED:
+ case CW_GYROSCOPE_UNCALIBRATED:
+ case CW_GAME_ROTATION_VECTOR:
+ case CW_GEOMAGNETIC_ROTATION_VECTOR:
+ case CW_ACCELERATION_W:
+ case CW_MAGNETIC_W:
+ case CW_GYRO_W:
+ case CW_PRESSURE_W:
+ case CW_ORIENTATION_W:
+ case CW_ROTATIONVECTOR_W:
+ case CW_LINEARACCELERATION_W:
+ case CW_GRAVITY_W:
+ case CW_MAGNETIC_UNCALIBRATED_W:
+ case CW_GYROSCOPE_UNCALIBRATED_W:
+ case CW_GAME_ROTATION_VECTOR_W:
+ case CW_GEOMAGNETIC_ROTATION_VECTOR_W:
+ return 1;
+ break;
+ default:
+ return 0;
+ break;
+ }
+}
+
+static void setup_delay(struct cwmcu_data *mcu_data)
+{
+ u8 i;
+ int delay_ms;
+ int delay_candidate_ms;
+
+ delay_candidate_ms = CWMCU_NO_POLLING_DELAY;
+ for (i = 0; i < CW_SENSORS_ID_TOTAL; i++) {
+ D("%s: batch_timeout[%d] = %lld\n", __func__, i,
+ mcu_data->batch_timeout[i]);
+ if ((mcu_data->enabled_list & (1LL << i)) &&
+ is_continuous_sensor(i) &&
+ (mcu_data->batch_timeout[i] == 0)) {
+ D("%s: report_period[%d] = %d\n", __func__, i,
+ mcu_data->report_period[i]);
+
+ /* report_period is actual delay (us) * 0.99; convert to
+ * milliseconds */
+ delay_ms = mcu_data->report_period[i] /
+ MS_TO_PERIOD;
+ if (delay_ms > CWMCU_MAX_DELAY)
+ delay_ms = CWMCU_MAX_DELAY;
+
+ if (delay_candidate_ms > delay_ms)
+ delay_candidate_ms = delay_ms;
+ }
+ }
+
+ if (delay_candidate_ms != atomic_read(&mcu_data->delay)) {
+ cancel_delayed_work_sync(&mcu_data->work);
+ if (mcu_data->enabled_list & IIO_SENSORS_MASK) {
+ atomic_set(&mcu_data->delay, delay_candidate_ms);
+ queue_delayed_work(mcu_data->mcu_wq, &mcu_data->work,
+ 0);
+ } else
+ atomic_set(&mcu_data->delay, CWMCU_MAX_DELAY + 1);
+ }
+
+ D("%s: Minimum delay = %dms\n", __func__,
+ atomic_read(&mcu_data->delay));
+
+}
+
+static int handle_batch_list(struct cwmcu_data *mcu_data, int sensors_id,
+ bool is_wake)
+{
+ int rc;
+ u8 i;
+ u8 data;
+ u64 sensors_bit;
+ u8 write_addr;
+
+ if ((sensors_id == CW_LIGHT) || (sensors_id == CW_SIGNIFICANT_MOTION))
+ return 0;
+
+ sensors_bit = (1LL << sensors_id);
+ mcu_data->batched_list &= ~sensors_bit;
+ mcu_data->batched_list |= (mcu_data->enabled_list & sensors_bit)
+ ? sensors_bit : 0;
+
+ D("%s: sensors_bit = 0x%llx, batched_list = 0x%llx\n", __func__,
+ sensors_bit, mcu_data->batched_list);
+
+ i = (sensors_id / 8);
+ data = (u8)(mcu_data->batched_list >> (i*8));
+
+ write_addr = (is_wake) ? CW_WAKE_UP_BATCH_ENABLE_REG :
+ CW_BATCH_ENABLE_REG;
+
+ if (i > 3)
+ i = (i - 4);
+
+ D("%s: Writing, addr = 0x%x, data = 0x%x\n", __func__,
+ (write_addr+i), data);
+
+ rc = CWMCU_i2c_write_power(mcu_data, write_addr+i, &data, 1);
+ if (rc)
+ E("%s: CWMCU_i2c_write fails, rc = %d\n",
+ __func__, rc);
+
+ return rc;
+}
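The register-offset arithmetic in handle_batch_list() selects one byte of the 64-bit batched_list mask (eight sensor ids per register) and folds wake-sensor ids back into a 4-register bank. A userspace sketch of that selection (helper name hypothetical):

```c
#include <stdint.h>

/* Pick the register offset and data byte for a sensor id, as
 * handle_batch_list() does: ids are bits of a 64-bit mask, grouped
 * eight per register; wake ids (>= 32) map into a second bank that
 * starts at offset 0 again. */
static void batch_reg_byte(uint64_t batched_list, int sensors_id,
			   uint8_t *offset, uint8_t *data)
{
	uint8_t i = sensors_id / 8;

	*data = (uint8_t)(batched_list >> (i * 8));
	if (i > 3)
		i -= 4;		/* wake bank restarts at offset 0 */
	*offset = i;
}
```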
+
+static int setup_batch_timeout(struct cwmcu_data *mcu_data, bool is_wake)
+{
+ __le32 timeout_data;
+ s64 current_timeout;
+ u32 continuous_sensor_count;
+ u8 i;
+ u8 write_addr;
+ int rc;
+ int scan_limit;
+
+ current_timeout = 0;
+ if (is_wake) {
+ i = CW_ACCELERATION_W;
+ scan_limit = CW_SENSORS_ID_TOTAL;
+ } else {
+ i = CW_ACCELERATION;
+ scan_limit = CW_SENSORS_ID_FW;
+ }
+ for (continuous_sensor_count = 0; i < scan_limit; i++) {
+ if (mcu_data->batch_timeout[i] != 0) {
+ if ((current_timeout >
+ mcu_data->batch_timeout[i]) ||
+ (current_timeout == 0)) {
+ current_timeout =
+ mcu_data->batch_timeout[i];
+ }
+ D("sensorid = %d, current_timeout = %lld\n",
+ i, current_timeout);
+ } else
+ continuous_sensor_count++;
+ }
+
+ if (continuous_sensor_count == scan_limit)
+ current_timeout = 0;
+
+ timeout_data = cpu_to_le32(current_timeout);
+
+ write_addr = (is_wake) ? CWSTM32_WAKE_UP_BATCH_MODE_TIMEOUT :
+ CWSTM32_BATCH_MODE_TIMEOUT;
+
+ D(
+ "%s: Writing, write_addr = 0x%x, current_timeout = %lld,"
+ " timeout_data = 0x%x\n",
+ __func__, write_addr, current_timeout, timeout_data);
+
+ cwmcu_powermode_switch(mcu_data, 1);
+ rc = CWMCU_i2c_multi_write(mcu_data, write_addr,
+ &timeout_data,
+ sizeof(timeout_data));
+ cwmcu_powermode_switch(mcu_data, 0);
+ if (rc)
+ E("%s: CWMCU_i2c_write fails, rc = %d\n", __func__, rc);
+
+ return rc;
+}
+
+static u64 report_step_counter(struct cwmcu_data *mcu_data, u32 fw_step,
+ u64 timestamp, bool is_wake)
+{
+ u16 u16_data_buff[REPORT_EVENT_COMMON_LEN * 2];
+ u64 step_counter_buff;
+
+ mcu_data->sensors_time[CW_STEP_COUNTER] = 0;
+
+ step_counter_buff = mcu_data->step_counter_base + fw_step;
+
+ u16_data_buff[0] = step_counter_buff & 0xFFFF;
+ u16_data_buff[1] = (step_counter_buff >> 16) & 0xFFFF;
+ u16_data_buff[2] = 0;
+ u16_data_buff[3] = (step_counter_buff >> 32) & 0xFFFF;
+ u16_data_buff[4] = (step_counter_buff >> 48) & 0xFFFF;
+ u16_data_buff[5] = 0;
+
+ cw_send_event_special(mcu_data, (is_wake) ? CW_STEP_COUNTER_W
+ : CW_STEP_COUNTER,
+ u16_data_buff,
+ u16_data_buff + REPORT_EVENT_COMMON_LEN,
+ timestamp);
+
+ return step_counter_buff;
+}
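The event payload above carries a 64-bit step count as two 3-word groups, with the third word of each group zeroed. A self-contained round-trip sketch of that packing (function names hypothetical, layout copied from report_step_counter()):

```c
#include <stdint.h>

/* Split a u64 step count into the six 16-bit words that
 * report_step_counter() hands to cw_send_event_special(). */
static void pack_step_count(uint64_t steps, uint16_t w[6])
{
	w[0] = steps & 0xFFFF;
	w[1] = (steps >> 16) & 0xFFFF;
	w[2] = 0;
	w[3] = (steps >> 32) & 0xFFFF;
	w[4] = (steps >> 48) & 0xFFFF;
	w[5] = 0;
}

/* Inverse of pack_step_count(); words 2 and 5 are padding. */
static uint64_t unpack_step_count(const uint16_t w[6])
{
	return (uint64_t)w[0] | ((uint64_t)w[1] << 16) |
	       ((uint64_t)w[3] << 32) | ((uint64_t)w[4] << 48);
}
```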
+
+static ssize_t active_set(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+ long enabled = 0;
+ long sensors_id = 0;
+ u8 data;
+ u8 i;
+ char *str_buf;
+ char *running;
+ u64 sensors_bit;
+ int rc;
+ bool is_wake;
+ bool non_wake_bit;
+ bool wake_bit;
+ u32 write_list;
+
+ str_buf = kstrndup(buf, count, GFP_KERNEL);
+ if (str_buf == NULL) {
+ E("%s: cannot allocate buffer\n", __func__);
+ return -ENOMEM;
+ }
+ running = str_buf;
+
+ for (i = 0; i < 2; i++) {
+ int error;
+ char *token;
+
+ token = strsep(&running, " ");
+
+ if (i == 0)
+ error = kstrtol(token, 10, &sensors_id);
+ else {
+ if (token == NULL) {
+ enabled = sensors_id;
+ sensors_id = 0;
+ error = 0;
+ } else
+ error = kstrtol(token, 10, &enabled);
+ }
+ if (error) {
+ E("%s: kstrtol fails, error = %d, i = %d\n",
+ __func__, error, i);
+ kfree(str_buf);
+ return error;
+ }
+ }
+ kfree(str_buf);
+
+ if (!mcu_data->probe_success)
+ return -EBUSY;
+
+ if ((sensors_id >= CW_SENSORS_ID_TOTAL) ||
+ (sensors_id < 0)
+ ) {
+ E("%s: Invalid sensors_id = %ld\n", __func__, sensors_id);
+ return -EINVAL;
+ }
+
+ sensors_bit = 1LL << sensors_id;
+
+ is_wake = (sensors_id >= CW_ACCELERATION_W) &&
+ (sensors_id <= CW_STEP_COUNTER_W);
+ if (is_wake) {
+ wake_bit = (mcu_data->enabled_list & sensors_bit);
+ non_wake_bit = (mcu_data->enabled_list & (sensors_bit >> 32));
+ } else {
+ wake_bit = (mcu_data->enabled_list & (sensors_bit << 32));
+ non_wake_bit = (mcu_data->enabled_list & sensors_bit);
+ }
+
+ mcu_data->enabled_list &= ~sensors_bit;
+ mcu_data->enabled_list |= enabled ? sensors_bit : 0;
+
+ /* clean batch parameters if sensor turn off */
+ if (!enabled) {
+ mcu_data->batch_timeout[sensors_id] = 0;
+ mcu_data->batched_list &= ~sensors_bit;
+ mcu_data->sensors_time[sensors_id] = 0;
+ setup_batch_timeout(mcu_data, is_wake);
+ mcu_data->report_period[sensors_id] = 200000 * MS_TO_PERIOD;
+ mcu_data->pending_flush &= ~(sensors_bit);
+ } else {
+ do_gettimeofday(&mcu_data->now);
+ mcu_data->sensors_time[sensors_id] =
+ (mcu_data->now.tv_sec * NS_PER_US) +
+ mcu_data->now.tv_usec;
+ }
+
+ write_list = mcu_data->enabled_list | (mcu_data->enabled_list >> 32);
+
+ i = ((is_wake) ? (sensors_id - 32) : sensors_id) / 8;
+ data = (u8)(write_list >> (i*8));
+
+ if (enabled
+ ? !(wake_bit | non_wake_bit)
+ : (wake_bit ^ non_wake_bit)) {
+ D("%s: Writing: CWSTM32_ENABLE_REG+i = 0x%x, data = 0x%x\n",
+ __func__, CWSTM32_ENABLE_REG+i, data);
+ rc = CWMCU_i2c_write_power(mcu_data, CWSTM32_ENABLE_REG+i,
+ &data, 1);
+ if (rc) {
+ E("%s: CWMCU_i2c_write fails, rc = %d\n",
+ __func__, rc);
+ return -EIO;
+ }
+
+ /* Disabling Step counter and no other step counter enabled */
+ if (((sensors_id == CW_STEP_COUNTER) ||
+ (sensors_id == CW_STEP_COUNTER_W))
+ && !enabled
+ && !(mcu_data->enabled_list & STEP_COUNTER_MASK)) {
+ __le32 data[3];
+
+ rc = CWMCU_i2c_read_power(mcu_data,
+ CWSTM32_READ_STEP_COUNTER,
+ data, sizeof(data));
+ if (rc >= 0) {
+ mcu_data->step_counter_base +=
+ le32_to_cpu(data[2]);
+ D("%s: Record step = %llu\n",
+ __func__, mcu_data->step_counter_base);
+ } else {
+ D("%s: Step Counter i2c read fails, rc = %d\n",
+ __func__, rc);
+ }
+ }
+
+ if (!enabled
+ && (!(mcu_data->enabled_list & IIO_CONTINUOUS_MASK))) {
+ mutex_lock(&mcu_data->mutex_lock);
+ mcu_data->w_clear_fifo_running = true;
+ mcu_data->w_clear_fifo = true;
+ mutex_unlock(&mcu_data->mutex_lock);
+ queue_work(mcu_data->mcu_wq, &mcu_data->one_shot_work);
+ }
+
+ }
+
+ cwmcu_powermode_switch(mcu_data, 1);
+ rc = firmware_odr(mcu_data, sensors_id,
+ mcu_data->report_period[sensors_id] / MS_TO_PERIOD);
+ cwmcu_powermode_switch(mcu_data, 0);
+ if (rc) {
+ E("%s: firmware_odr fails, rc = %d\n", __func__, rc);
+ }
+
+ if ((sensors_id == CW_LIGHT) && (!!enabled)) {
+ D("%s: Initial lightsensor = %d\n",
+ __func__, mcu_data->light_last_data[0]);
+ cw_send_event(mcu_data, CW_LIGHT,
+ mcu_data->light_last_data, 0);
+ }
+
+ setup_delay(mcu_data);
+
+ rc = handle_batch_list(mcu_data, sensors_id, is_wake);
+ if (rc) {
+ E("%s: handle_batch_list fails, rc = %d\n", __func__,
+ rc);
+ return rc;
+ }
+
+ D("%s: sensors_id = %ld, enable = %ld, enable_list = 0x%llx\n",
+ __func__, sensors_id, enabled, mcu_data->enabled_list);
+
+ return count;
+}
+
+static ssize_t active_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+ u32 data;
+
+ CWMCU_i2c_read_power(mcu_data, CWSTM32_ENABLE_REG, &data, sizeof(data));
+
+ D("%s: enable = 0x%x\n", __func__, data);
+
+ return scnprintf(buf, PAGE_SIZE, "0x%llx, 0x%x\n",
+ mcu_data->enabled_list, data);
+}
+
+static ssize_t interval_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+
+ return scnprintf(buf, PAGE_SIZE, "%d\n", atomic_read(&mcu_data->delay));
+}
+
+static ssize_t interval_set(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+ long val = 0;
+ long sensors_id = 0;
+ int i, rc;
+ char *str_buf;
+ char *running;
+
+ str_buf = kstrndup(buf, count, GFP_KERNEL);
+ if (str_buf == NULL) {
+ E("%s: cannot allocate buffer\n", __func__);
+ return -ENOMEM;
+ }
+ running = str_buf;
+
+ for (i = 0; i < 2; i++) {
+ int error = 0;
+ char *token;
+
+ token = strsep(&running, " ");
+
+ if (i == 0)
+ error = kstrtol(token, 10, &sensors_id);
+ else {
+ if (token == NULL) {
+ val = 66;
+ D("%s: delay set to 66\n", __func__);
+ } else
+ error = kstrtol(token, 10, &val);
+ }
+ if (error) {
+ E("%s: kstrtol fails, error = %d, i = %d\n",
+ __func__, error, i);
+ kfree(str_buf);
+ return error;
+ }
+ }
+ kfree(str_buf);
+
+ if ((sensors_id < 0) || (sensors_id >= num_sensors)) {
+ D("%s: Invalid sensors_id = %ld\n", __func__, sensors_id);
+ return -EINVAL;
+ }
+
+ if (mcu_data->report_period[sensors_id] != val * MS_TO_PERIOD) {
+ /* period is actual delay(us) * 0.99 */
+ mcu_data->report_period[sensors_id] = val * MS_TO_PERIOD;
+
+ setup_delay(mcu_data);
+
+ cwmcu_powermode_switch(mcu_data, 1);
+ rc = firmware_odr(mcu_data, sensors_id, val);
+ cwmcu_powermode_switch(mcu_data, 0);
+ if (rc) {
+ E("%s: firmware_odr fails, rc = %d\n", __func__, rc);
+ return rc;
+ }
+ }
+
+ return count;
+}
+
+
+static ssize_t batch_set(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+ s64 timeout = 0;
+ int sensors_id = 0, flag = 0, delay_ms = 0;
+ u8 i;
+ int retry;
+ int rc;
+ char *token;
+ char *str_buf;
+ char *running;
+ long input_val;
+ unsigned long long input_val_l;
+ bool need_update_fw_odr;
+ s32 period;
+ bool is_wake;
+
+ if (!mcu_data->probe_success) {
+ E("%s: probe_success = %d\n", __func__,
+ mcu_data->probe_success);
+ return -ENODEV;
+ }
+
+ for (retry = 0; retry < ACTIVE_RETRY_TIMES; retry++) {
+ mutex_lock(&mcu_data->mutex_lock);
+ if (mcu_data->suspended) {
+ mutex_unlock(&mcu_data->mutex_lock);
+ D("%s: suspended, retry = %d\n",
+ __func__, retry);
+ usleep_range(5000, 10000);
+ } else {
+ mutex_unlock(&mcu_data->mutex_lock);
+ break;
+ }
+ }
+ if (retry >= ACTIVE_RETRY_TIMES) {
+ D("%s: resume not completed, retry = %d, retry fails!\n",
+ __func__, retry);
+ return -ETIMEDOUT;
+ }
+
+ str_buf = kstrndup(buf, count, GFP_KERNEL);
+ if (str_buf == NULL) {
+ E("%s: cannot allocate buffer\n", __func__);
+ return -ENOMEM;
+ }
+ running = str_buf;
+
+ for (i = 0; i < 4; i++) {
+ token = strsep(&running, " ");
+ if (token == NULL) {
+ E("%s: token = NULL, i = %d\n", __func__, i);
+ break;
+ }
+
+ switch (i) {
+ case 0:
+ rc = kstrtol(token, 10, &input_val);
+ sensors_id = (int)input_val;
+ break;
+ case 1:
+ rc = kstrtol(token, 10, &input_val);
+ flag = (int)input_val;
+ break;
+ case 2:
+ rc = kstrtol(token, 10, &input_val);
+ delay_ms = (int)input_val;
+ break;
+ case 3:
+ rc = kstrtoull(token, 10, &input_val_l);
+ timeout = (s64)input_val_l;
+ break;
+ default:
+ E("%s: Unknown i = %d\n", __func__, i);
+ break;
+ }
+
+ if (rc) {
+ E("%s: kstrtol fails, rc = %d, i = %d\n",
+ __func__, rc, i);
+ kfree(str_buf);
+ return rc;
+ }
+ }
+ kfree(str_buf);
+
+ D("%s: sensors_id = 0x%x, flag = %d, delay_ms = %d, timeout = %lld\n",
+ __func__, sensors_id, flag, delay_ms, timeout);
+
+ is_wake = (CW_ACCELERATION_W <= sensors_id) &&
+ (sensors_id <= CW_STEP_COUNTER_W);
+
+ /* period is actual delay(us) * 0.99 */
+ period = delay_ms * MS_TO_PERIOD;
+ need_update_fw_odr = mcu_data->report_period[sensors_id] != period;
+ D("%s: period = %d, report_period[%d] = %d\n",
+ __func__, period, sensors_id, mcu_data->report_period[sensors_id]);
+ mcu_data->report_period[sensors_id] = period;
+
+ switch (sensors_id) {
+ case CW_ACCELERATION:
+ case CW_MAGNETIC:
+ case CW_GYRO:
+ case CW_PRESSURE:
+ case CW_ORIENTATION:
+ case CW_ROTATIONVECTOR:
+ case CW_LINEARACCELERATION:
+ case CW_GRAVITY:
+ case CW_MAGNETIC_UNCALIBRATED:
+ case CW_GYROSCOPE_UNCALIBRATED:
+ case CW_GAME_ROTATION_VECTOR:
+ case CW_GEOMAGNETIC_ROTATION_VECTOR:
+ case CW_STEP_DETECTOR:
+ case CW_STEP_COUNTER:
+ case CW_ACCELERATION_W:
+ case CW_MAGNETIC_W:
+ case CW_GYRO_W:
+ case CW_PRESSURE_W:
+ case CW_ORIENTATION_W:
+ case CW_ROTATIONVECTOR_W:
+ case CW_LINEARACCELERATION_W:
+ case CW_GRAVITY_W:
+ case CW_MAGNETIC_UNCALIBRATED_W:
+ case CW_GYROSCOPE_UNCALIBRATED_W:
+ case CW_GAME_ROTATION_VECTOR_W:
+ case CW_GEOMAGNETIC_ROTATION_VECTOR_W:
+ case CW_STEP_DETECTOR_W:
+ case CW_STEP_COUNTER_W:
+ break;
+ case CW_LIGHT:
+ case CW_SIGNIFICANT_MOTION:
+ default:
+ D("%s: Batch not supported for this sensor_id = 0x%x\n",
+ __func__, sensors_id);
+ return count;
+ }
+
+ mcu_data->batch_timeout[sensors_id] = timeout;
+
+ setup_delay(mcu_data);
+
+ rc = setup_batch_timeout(mcu_data, is_wake);
+ if (rc) {
+ E("%s: setup_batch_timeout fails, rc = %d\n", __func__, rc);
+ return rc;
+ }
+
+ if (need_update_fw_odr &&
+ (mcu_data->enabled_list & (1LL << sensors_id))) {
+ int odr_sensors_id;
+
+ odr_sensors_id = (is_wake) ? (sensors_id + 32) : sensors_id;
+
+ cwmcu_powermode_switch(mcu_data, 1);
+ rc = firmware_odr(mcu_data, odr_sensors_id, delay_ms);
+ cwmcu_powermode_switch(mcu_data, 0);
+ if (rc) {
+ E("%s: firmware_odr fails, rc = %d\n", __func__, rc);
+ }
+ }
+
+ D(
+ "%s: sensors_id = %d, timeout = %lld, batched_list = 0x%llx,"
+ " delay_ms = %d\n",
+ __func__, sensors_id, timeout, mcu_data->batched_list,
+ delay_ms);
+
+ return (rc) ? rc : count;
+}
+
+static ssize_t batch_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+ u64 timestamp = 0;
+ struct timespec kt;
+ u64 k_timestamp;
+
+ kt = current_kernel_time();
+
+ CWMCU_i2c_read_power(mcu_data, CW_I2C_REG_MCU_TIME, &timestamp,
+ sizeof(timestamp));
+
+ le64_to_cpus(&timestamp);
+
+ k_timestamp = (u64)(kt.tv_sec*NSEC_PER_SEC) + (u64)kt.tv_nsec;
+
+ return scnprintf(buf, PAGE_SIZE, "%llu", timestamp);
+}
+
+
+static ssize_t flush_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+ int ret;
+ u8 data[4] = {0};
+
+ ret = CWMCU_i2c_read_power(mcu_data, CWSTM32_BATCH_MODE_DATA_COUNTER,
+ data, sizeof(data));
+ if (ret < 0)
+ D("%s: Read Counter fail, ret = %d\n", __func__, ret);
+
+ D("%s: DEBUG: Queue counter = %d\n", __func__,
+ *(u32 *)&data[0]);
+
+ return scnprintf(buf, PAGE_SIZE, "Queue counter = %d\n",
+ *(u32 *)&data[0]);
+}
+
+static void cwmcu_send_flush(struct cwmcu_data *mcu_data, int id)
+{
+ u8 type = CW_META_DATA;
+ u16 data[REPORT_EVENT_COMMON_LEN];
+ s64 timestamp = 0;
+ int rc;
+
+ data[0] = (u16)id;
+ data[1] = data[2] = 0;
+
+ D("%s: flush sensor: %d!!\n", __func__, id);
+
+ rc = cw_send_event(mcu_data, type, data, timestamp);
+ if (rc < 0)
+ E("%s: send_event fails, rc = %d\n", __func__, rc);
+
+ D("%s--:\n", __func__);
+}
+
+static ssize_t flush_set(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+ u8 data;
+ unsigned long handle;
+ int rc;
+
+ rc = kstrtoul(buf, 10, &handle);
+ if (rc) {
+ E("%s: kstrtoul fails, rc = %d\n", __func__, rc);
+ return rc;
+ }
+
+ D("%s: handle = %lu\n", __func__, handle);
+
+ data = handle;
+
+ D("%s: addr = 0x%x, data = 0x%x\n", __func__,
+ CWSTM32_BATCH_FLUSH, data);
+
+ rc = CWMCU_i2c_write_power(mcu_data, CWSTM32_BATCH_FLUSH, &data, 1);
+ if (rc)
+ E("%s: CWMCU_i2c_write fails, rc = %d\n", __func__, rc);
+
+ mcu_data->w_flush_fifo = true;
+ queue_work(mcu_data->mcu_wq, &mcu_data->one_shot_work);
+
+ if ((handle == CW_LIGHT) || (handle == CW_SIGNIFICANT_MOTION)) {
+ mutex_lock(&mcu_data->lock);
+ cwmcu_send_flush(mcu_data, handle);
+ mutex_unlock(&mcu_data->lock);
+ } else
+ mcu_data->pending_flush |= (1LL << handle);
+
+ D("%s: mcu_data->pending_flush = 0x%llx\n", __func__,
+ mcu_data->pending_flush);
+
+ return count;
+}
+
+static ssize_t facedown_set(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+ bool on;
+
+ if (strtobool(buf, &on) < 0)
+ return -EINVAL;
+
+ if (!!on == !!(mcu_data->enabled_list &
+ (1LL << HTC_FACEDOWN_DETECTION)))
+ return size;
+
+ if (on)
+ mcu_data->enabled_list |= (1LL << HTC_FACEDOWN_DETECTION);
+ else
+ mcu_data->enabled_list &= ~(1LL << HTC_FACEDOWN_DETECTION);
+
+ mcu_data->w_facedown_set = true;
+ queue_work(mcu_data->mcu_wq, &mcu_data->one_shot_work);
+
+ return size;
+}
+
+static ssize_t facedown_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+
+ return scnprintf(buf, PAGE_SIZE, "%d\n",
+ !!(mcu_data->enabled_list & (1ULL << HTC_FACEDOWN_DETECTION)));
+}
+
+/* Return if META is read out */
+static bool report_iio(struct cwmcu_data *mcu_data, int *i, u8 *data,
+ __le64 *data64, u32 *event_count, bool is_wake)
+{
+ s32 ret;
+ u8 data_buff;
+ u16 data_event[REPORT_EVENT_COMMON_LEN];
+ u16 bias_event[REPORT_EVENT_COMMON_LEN];
+ u16 timestamp_event;
+ u64 *handle_time_base;
+ bool is_meta_read = false;
+
+ if (is_wake) {
+ wake_lock_timeout(&mcu_data->report_wake_lock,
+ msecs_to_jiffies(200));
+ }
+
+ if (data[0] == CW_META_DATA) {
+ __le16 *data16 = (__le16 *)(data + 1);
+
+ data_event[0] = le16_to_cpup(data16 + 1);
+ cw_send_event(mcu_data, data[0], data_event, 0);
+ mcu_data->pending_flush &= ~(1LL << data_event[0]);
+ D(
+ "total count = %u, current_count = %d, META from firmware,"
+ " event_id = %d, pending_flush = 0x%llx\n", *event_count, *i,
+ data_event[0], mcu_data->pending_flush);
+ is_meta_read = true;
+ } else if (data[0] == CW_TIME_BASE) {
+ u64 timestamp;
+
+ timestamp = le64_to_cpup(data64 + 1);
+
+ handle_time_base = (is_wake) ? &mcu_data->wake_fifo_time_base :
+ &mcu_data->time_base;
+
+ D(
+ "total count = %u, current_count = %d, CW_TIME_BASE = %llu,"
+ " is_wake = %d\n", *event_count, *i, timestamp, is_wake);
+ *handle_time_base = timestamp;
+
+ } else if (data[0] == CW_STEP_DETECTOR) {
+ __le16 *data16 = (__le16 *)(data + 1);
+
+ timestamp_event = le16_to_cpup(data16);
+
+ data_event[0] = 1;
+ handle_time_base = (is_wake) ?
+ &mcu_data->wake_fifo_time_base :
+ &mcu_data->time_base;
+ cw_send_event(mcu_data,
+ (is_wake)
+ ? CW_STEP_DETECTOR_W
+ : CW_STEP_DETECTOR
+ , data_event
+ , timestamp_event + *handle_time_base);
+
+ D(
+ "Batch data: total count = %u, current count = %d, "
+ "STEP_DETECTOR%s, timediff = %d, time_base = %llu,"
+ " r_time = %llu\n"
+ , *event_count, *i, (is_wake) ? "_W" : ""
+ , timestamp_event
+ , *handle_time_base
+ , *handle_time_base + timestamp_event
+ );
+
+ } else if (data[0] == CW_STEP_COUNTER) {
+ __le16 *data16 = (__le16 *)(data + 1);
+ __le32 *data32 = (__le32 *)(data + 3);
+
+ timestamp_event = le16_to_cpup(data16);
+
+ handle_time_base = (is_wake) ?
+ &mcu_data->wake_fifo_time_base :
+ &mcu_data->time_base;
+ report_step_counter(mcu_data,
+ le32_to_cpu(*data32),
+ timestamp_event + *handle_time_base,
+ is_wake);
+
+ D(
+ "Batch data: total count = %u, current count = %d, "
+ "STEP_COUNTER%s, step = %d, "
+ "timediff = %d, time_base = %llu, r_time = %llu\n"
+ , *event_count, *i, (is_wake) ? "_W" : ""
+ , le32_to_cpu(*data32)
+ , timestamp_event
+ , *handle_time_base
+ , *handle_time_base + timestamp_event
+ );
+
+ } else if ((data[0] == CW_MAGNETIC_UNCALIBRATED_BIAS) ||
+ (data[0] == CW_GYROSCOPE_UNCALIBRATED_BIAS)) {
+ __le16 *data16 = (__le16 *)(data + 1);
+ u8 read_addr;
+
+ data_buff = (data[0] == CW_MAGNETIC_UNCALIBRATED_BIAS) ?
+ CW_MAGNETIC_UNCALIBRATED :
+ CW_GYROSCOPE_UNCALIBRATED;
+ data_buff += (is_wake) ? 32 : 0;
+
+ bias_event[0] = le16_to_cpup(data16 + 1);
+ bias_event[1] = le16_to_cpup(data16 + 2);
+ bias_event[2] = le16_to_cpup(data16 + 3);
+
+ read_addr = (is_wake) ? CWSTM32_WAKE_UP_BATCH_MODE_DATA_QUEUE :
+ CWSTM32_BATCH_MODE_DATA_QUEUE;
+ ret = CWMCU_i2c_read(mcu_data, read_addr, data, 9);
+ if (ret >= 0) {
+ (*i)++;
+ timestamp_event = le16_to_cpup(data16);
+ data_event[0] = le16_to_cpup(data16 + 1);
+ data_event[1] = le16_to_cpup(data16 + 2);
+ data_event[2] = le16_to_cpup(data16 + 3);
+
+ D(
+ "Batch data: total count = %u, current "
+ "count = %d, event_id = %d, data(x, y, z) = "
+ "(%d, %d, %d), bias(x, y, z) = "
+ "(%d, %d, %d)\n"
+ , *event_count, *i, data_buff
+ , data_event[0], data_event[1], data_event[2]
+ , bias_event[0], bias_event[1]
+ , bias_event[2]);
+
+ handle_time_base = (is_wake) ?
+ &mcu_data->wake_fifo_time_base :
+ &mcu_data->time_base;
+ cw_send_event_special(mcu_data, data_buff,
+ data_event,
+ bias_event,
+ timestamp_event +
+ *handle_time_base);
+ } else {
+ E("Read Uncalibrated data fails, ret = %d\n", ret);
+ }
+ } else {
+ __le16 *data16 = (__le16 *)(data + 1);
+
+ timestamp_event = le16_to_cpup(data16);
+ data_event[0] = le16_to_cpup(data16 + 1);
+ data_event[1] = le16_to_cpup(data16 + 2);
+ data_event[2] = le16_to_cpup(data16 + 3);
+
+ data[0] += (is_wake) ? 32 : 0;
+
+ handle_time_base = (is_wake) ?
+ &mcu_data->wake_fifo_time_base :
+ &mcu_data->time_base;
+
+ D(
+ "Batch data: total count = %u, current count = %d, "
+ "event_id = %d, data(x, y, z) = (%d, %d, %d), "
+ "timediff = %d, time_base = %llu, r_time = %llu\n"
+ , *event_count, *i, data[0]
+ , data_event[0], data_event[1], data_event[2]
+ , timestamp_event
+ , *handle_time_base
+ , *handle_time_base + timestamp_event
+ );
+
+ if ((data[0] == CW_MAGNETIC) || (data[0] == CW_ORIENTATION)) {
+ int rc;
+ u8 accuracy;
+ u16 bias_event[REPORT_EVENT_COMMON_LEN] = {0};
+
+ rc = CWMCU_i2c_read(mcu_data,
+ CW_I2C_REG_SENSORS_ACCURACY_MAG,
+ &accuracy, 1);
+ if (rc < 0) {
+ E(
+ "%s: read ACCURACY_MAG fails, rc = "
+ "%d\n", __func__, rc);
+ accuracy = 3;
+ }
+ bias_event[0] = accuracy;
+
+ cw_send_event_special(mcu_data, data[0], data_event,
+ bias_event,
+ timestamp_event +
+ *handle_time_base);
+ } else {
+ cw_send_event(mcu_data, data[0], data_event,
+ timestamp_event + *handle_time_base);
+ }
+ }
+ return is_meta_read;
+}
+
+/* Return if META is read out */
+static bool cwmcu_batch_fifo_read(struct cwmcu_data *mcu_data, int queue_id)
+{
+ s32 ret;
+ int i;
+ u32 *event_count;
+ u8 event_count_data[4] = {0};
+ u8 reg_addr;
+ bool is_meta_read = false;
+
+ mutex_lock(&mcu_data->lock);
+
+ reg_addr = (queue_id)
+ ? CWSTM32_WAKE_UP_BATCH_MODE_DATA_COUNTER
+ : CWSTM32_BATCH_MODE_DATA_COUNTER;
+
+ ret = CWMCU_i2c_read(mcu_data, reg_addr, event_count_data,
+ sizeof(event_count_data));
+ if (ret < 0) {
+ D(
+ "Read Batched data Counter fail, ret = %d, queue_id"
+ " = %d\n", ret, queue_id);
+ }
+
+ event_count = (u32 *)(&event_count_data[0]);
+ if (*event_count > MAX_EVENT_COUNT) {
+ I("%s: event_count = %u, strange, queue_id = %d\n",
+ __func__, *event_count, queue_id);
+ *event_count = 0;
+ }
+
+ D("%s: event_count = %u, queue_id = %d\n", __func__,
+ *event_count, queue_id);
+
+ reg_addr = (queue_id) ? CWSTM32_WAKE_UP_BATCH_MODE_DATA_QUEUE :
+ CWSTM32_BATCH_MODE_DATA_QUEUE;
+
+ for (i = 0; i < *event_count; i++) {
+ __le64 data64[2];
+ u8 *data = (u8 *)data64;
+
+ /* Offset by 7 so the 8 payload bytes after the event ID
+ * byte land exactly in data64[1] (the 64-bit timestamp).
+ */
+ data = data + 7;
+
+ ret = CWMCU_i2c_read(mcu_data, reg_addr, data, 9);
+ if (ret >= 0) {
+ /* check if there are no data from queue */
+ if (data[0] != CWMCU_NODATA) {
+ is_meta_read = report_iio(mcu_data, &i, data,
+ data64,
+ event_count,
+ queue_id);
+ }
+ } else {
+ E("Read Queue fails, ret = %d, queue_id = %d\n",
+ ret, queue_id);
+ }
+ }
+
+ mutex_unlock(&mcu_data->lock);
+
+ return is_meta_read;
+}
+
+static void cwmcu_meta_read(struct cwmcu_data *mcu_data)
+{
+ int i;
+ int queue_id;
+
+ for (i = 0; (i < 3) && mcu_data->pending_flush; i++) {
+ D("%s: mcu_data->pending_flush = 0x%llx, i = %d\n", __func__,
+ mcu_data->pending_flush, i);
+
+ queue_id = (mcu_data->pending_flush & 0xFFFFFFFF)
+ ? 0 : 1;
+ if (cwmcu_batch_fifo_read(mcu_data, queue_id))
+ break;
+
+ if (mcu_data->pending_flush)
+ usleep_range(6000, 9000);
+ else
+ break;
+ }
+ if (mcu_data->pending_flush && (i == 3))
+ D("%s: Fail to get META!!\n", __func__);
+
+}
+
+/* cwmcu_powermode_switch() must be held by caller */
+static void cwmcu_batch_read(struct cwmcu_data *mcu_data)
+{
+ int j;
+ u32 *non_wake_batch_list = (u32 *)&mcu_data->batched_list;
+ u32 *wake_batch_list = (non_wake_batch_list + 1);
+
+ D("%s++: batched_list = 0x%llx\n", __func__, mcu_data->batched_list);
+
+ for (j = 0; j < 2; j++) {
+ if ((!(*non_wake_batch_list) && (j == 0)) ||
+ (!(*wake_batch_list) && (j == 1))) {
+ D(
+ "%s++: nw_batched_list = 0x%x, w_batched_list = 0x%x,"
+ " j = %d, continue\n",
+ __func__, *non_wake_batch_list, *wake_batch_list, j);
+ continue;
+ }
+
+ cwmcu_batch_fifo_read(mcu_data, j);
+ }
+
+ D("%s--: batched_list = 0x%llx\n", __func__, mcu_data->batched_list);
+}
+
+static void cwmcu_check_sensor_update(struct cwmcu_data *mcu_data)
+{
+ int id;
+ s64 temp;
+
+ do_gettimeofday(&mcu_data->now);
+ temp = (mcu_data->now.tv_sec * NS_PER_US) + mcu_data->now.tv_usec;
+
+ for (id = 0; id < CW_SENSORS_ID_TOTAL; id++) {
+ mcu_data->time_diff[id] = temp - mcu_data->sensors_time[id];
+
+ if ((mcu_data->time_diff[id] >= mcu_data->report_period[id])
+ && (mcu_data->enabled_list & (1LL << id))) {
+ mcu_data->sensors_time[id] = temp;
+ mcu_data->update_list |= (1LL << id);
+ } else
+ mcu_data->update_list &= ~(1LL << id);
+ }
+}
+
+static void cwmcu_read(struct cwmcu_data *mcu_data, struct iio_poll_func *pf)
+{
+ int id_check;
+
+ if (!mcu_data->probe_success) {
+ E("%s: probe_success = %d\n", __func__,
+ mcu_data->probe_success);
+ return;
+ }
+
+ if (mcu_data->enabled_list) {
+
+ cwmcu_check_sensor_update(mcu_data);
+
+ for (id_check = 0; id_check < CW_SENSORS_ID_TOTAL; id_check++) {
+ if ((is_continuous_sensor(id_check)) &&
+ (mcu_data->update_list & (1LL<<id_check)) &&
+ (mcu_data->batch_timeout[id_check] == 0)) {
+ cwmcu_powermode_switch(mcu_data, 1);
+ cwmcu_batch_fifo_read(mcu_data,
+ id_check > CW_SENSORS_ID_FW);
+ cwmcu_powermode_switch(mcu_data, 0);
+ }
+ }
+ }
+
+}
+
+static int cwmcu_suspend(struct device *dev)
+{
+ struct cwmcu_data *mcu_data = dev_get_drvdata(dev);
+ int i;
+ u8 data;
+
+ D("[CWMCU] %s\n", __func__);
+
+ cancel_work_sync(&mcu_data->one_shot_work);
+ cancel_delayed_work_sync(&mcu_data->work);
+
+ disable_irq(mcu_data->IRQ);
+
+ /* Inform SensorHUB that CPU is going to suspend */
+ data = 0;
+ CWMCU_i2c_write_power(mcu_data, CW_CPU_STATUS_REG, &data, 1);
+ D("%s: write_addr = 0x%x, write_data = 0x%x\n", __func__,
+ CW_CPU_STATUS_REG, data);
+
+ mutex_lock(&mcu_data->mutex_lock);
+ mcu_data->suspended = true;
+ mutex_unlock(&mcu_data->mutex_lock);
+
+ for (i = 0; (mcu_data->power_on_counter != 0) &&
+ (gpio_get_value(mcu_data->gpio_wake_mcu) != 1) &&
+ (i < ACTIVE_RETRY_TIMES); i++)
+ usleep_range(10, 20);
+
+ gpio_set_value(mcu_data->gpio_wake_mcu, 1);
+ mcu_data->power_on_counter = 0;
+
+ return 0;
+}
+
+
+static int cwmcu_resume(struct device *dev)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct cwmcu_data *mcu_data = i2c_get_clientdata(client);
+ u8 data;
+
+ D("[CWMCU] %s++\n", __func__);
+
+ mutex_lock(&mcu_data->mutex_lock);
+ mcu_data->suspended = false;
+ mutex_unlock(&mcu_data->mutex_lock);
+
+ /* Inform SensorHUB that CPU is going to resume */
+ data = 1;
+ CWMCU_i2c_write_power(mcu_data, CW_CPU_STATUS_REG, &data, 1);
+ D("%s: write_addr = 0x%x, write_data = 0x%x\n", __func__,
+ CW_CPU_STATUS_REG, data);
+
+ enable_irq(mcu_data->IRQ);
+
+ if (mcu_data->w_activated_i2c
+ || mcu_data->w_re_init
+ || mcu_data->w_facedown_set
+ || mcu_data->w_clear_fifo
+ || mcu_data->w_flush_fifo
+ || mcu_data->w_report_meta)
+ queue_work(mcu_data->mcu_wq, &mcu_data->one_shot_work);
+
+ if (mcu_data->enabled_list & IIO_SENSORS_MASK) {
+ queue_delayed_work(mcu_data->mcu_wq, &mcu_data->work,
+ msecs_to_jiffies(atomic_read(&mcu_data->delay)));
+ }
+
+ D("[CWMCU] %s--\n", __func__);
+ return 0;
+}
+
+
+#ifdef MCU_WARN_MSGS
+static void print_warn_msg(struct cwmcu_data *mcu_data,
+ char *buf, u32 len, u32 index)
+{
+ int ret;
+ char *buf_start = buf;
+
+ while ((buf - buf_start) < len) {
+ ret = min((u32)WARN_MSG_BLOCK_LEN,
+ (u32)(len - (buf - buf_start)));
+ ret = CWMCU_i2c_read(mcu_data,
+ CW_I2C_REG_WARN_MSG_BUFFER,
+ buf, ret);
+ if (ret == 0) {
+ break;
+ } else if (ret < 0) {
+ E("%s: warn i2c_read: ret = %d\n", __func__, ret);
+ break;
+ } else
+ buf += ret;
+ }
+ printk(KERN_WARNING "[S_HUB][CW_MCU] Warning MSG[%d] = %.*s",
+ index, (int)(buf - buf_start), buf_start);
+}
+#endif
+
+void magic_cover_report_input(struct cwmcu_data *mcu_data, u8 val)
+{
+ u32 data = ((val >> 6) & 0x3);
+
+ if ((data == 1) || (data == 2)) {
+ input_report_switch(mcu_data->input, SW_LID, (data - 1));
+ input_sync(mcu_data->input);
+ } else if ((data == 3) || (data == 0)) {
+ input_report_switch(mcu_data->input, SW_CAMERA_LENS_COVER,
+ !data);
+ input_sync(mcu_data->input);
+ }
+ return;
+}
+
+void activate_double_tap(u8 facedown)
+{
+ blocking_notifier_call_chain(&double_tap_notifier_list, facedown, NULL);
+ return;
+}
+
+static irqreturn_t cwmcu_irq_handler(int irq, void *handle)
+{
+ struct cwmcu_data *mcu_data = handle;
+ s32 ret;
+ u8 INT_st1, INT_st2, INT_st3, INT_st4, err_st, batch_st;
+ u8 clear_intr;
+ u16 light_adc = 0;
+
+ if (!mcu_data->probe_success) {
+ D("%s: probe not completed\n", __func__);
+ return IRQ_HANDLED;
+ }
+
+ D("[CWMCU] %s\n", __func__);
+
+ cwmcu_powermode_switch(mcu_data, 1);
+
+ CWMCU_i2c_read(mcu_data, CWSTM32_INT_ST1, &INT_st1, 1);
+ CWMCU_i2c_read(mcu_data, CWSTM32_INT_ST2, &INT_st2, 1);
+ CWMCU_i2c_read(mcu_data, CWSTM32_INT_ST3, &INT_st3, 1);
+ CWMCU_i2c_read(mcu_data, CWSTM32_INT_ST4, &INT_st4, 1);
+ CWMCU_i2c_read(mcu_data, CWSTM32_ERR_ST, &err_st, 1);
+ CWMCU_i2c_read(mcu_data, CWSTM32_BATCH_MODE_COMMAND, &batch_st, 1);
+
+ D(
+ "%s: INT_st(1, 2, 3, 4) = (0x%x, 0x%x, 0x%x, 0x%x), err_st = 0x%x"
+ ", batch_st = 0x%x\n",
+ __func__, INT_st1, INT_st2, INT_st3, INT_st4, err_st, batch_st);
+
+ /* INT_st1: bit 3 */
+ if (INT_st1 & CW_MCU_INT_BIT_LIGHT) {
+ u8 data[REPORT_EVENT_COMMON_LEN] = {0};
+ u16 data_buff[REPORT_EVENT_COMMON_LEN] = {0};
+
+ if (mcu_data->enabled_list & (1LL << CW_LIGHT)) {
+ CWMCU_i2c_read(mcu_data, CWSTM32_READ_Light, data, 3);
+ if (data[0] < 11) {
+ mcu_data->sensors_time[CW_LIGHT] =
+ mcu_data->sensors_time[CW_LIGHT] -
+ mcu_data->report_period[CW_LIGHT];
+ light_adc = (data[2] << 8) | data[1];
+
+ data_buff[0] = data[0];
+ mcu_data->light_last_data[0] = data_buff[0];
+ cw_send_event(mcu_data, CW_LIGHT, data_buff, 0);
+ D(
+ "light interrupt occur value is %u, adc "
+ "is %x ls_calibration is %u\n",
+ data[0], light_adc,
+ mcu_data->ls_calibrated);
+ } else {
+ light_adc = (data[2] << 8) | data[1];
+ D(
+ "light interrupt occur value is %u, adc is"
+ " %x ls_calibration is %u (message only)\n",
+ data[0], light_adc,
+ mcu_data->ls_calibrated);
+ }
+ I("intr light[%x]=%u\n", light_adc, data[0]);
+ }
+ if (data[0] < 11) {
+ clear_intr = CW_MCU_INT_BIT_LIGHT;
+ CWMCU_i2c_write(mcu_data, CWSTM32_INT_ST1, &clear_intr,
+ 1);
+ }
+ }
+
+ /* INT_st2: bit 4 */
+ if (INT_st2 & CW_MCU_INT_BIT_MAGIC_COVER) {
+ if (mcu_data->enabled_list & (1LL << HTC_MAGIC_COVER)) {
+ u8 data;
+
+ ret = CWMCU_i2c_read(mcu_data, CWSTM32_READ_Hall_Sensor,
+ &data, 1);
+ if (ret >= 0) {
+ I("%s: MAGIC COVER = 0x%x\n", __func__,
+ ((data >> 6) & 0x3));
+ magic_cover_report_input(mcu_data, data);
+ } else {
+ E("%s: MAGIC COVER read fails, ret = %d\n",
+ __func__, ret);
+ }
+
+ clear_intr = CW_MCU_INT_BIT_MAGIC_COVER;
+ CWMCU_i2c_write(mcu_data, CWSTM32_INT_ST2, &clear_intr,
+ 1);
+ }
+ }
+
+ /* INT_st3: bit 4 */
+ if (INT_st3 & CW_MCU_INT_BIT_SIGNIFICANT_MOTION) {
+ if (mcu_data->enabled_list & (1LL << CW_SIGNIFICANT_MOTION)) {
+ u16 data_buff[REPORT_EVENT_COMMON_LEN] = {0};
+ __le64 data64[2];
+ u8 *data = (u8 *)data64;
+
+ data = data + sizeof(__le64) - sizeof(u8);
+
+ ret = CWMCU_i2c_read(mcu_data,
+ CWSTM32_READ_SIGNIFICANT_MOTION,
+ data, sizeof(u8) + sizeof(__le64));
+ if (ret >= 0) {
+ u64 timestamp_event;
+ __le64 *le64_timestamp = data64 + 1;
+
+ timestamp_event = le64_to_cpu(*le64_timestamp);
+
+ mcu_data->sensors_time[CW_SIGNIFICANT_MOTION]
+ = 0;
+
+ wake_lock_timeout(
+ &mcu_data->significant_wake_lock, HZ);
+
+ data_buff[0] = 1;
+ cw_send_event(mcu_data, CW_SIGNIFICANT_MOTION,
+ data_buff, timestamp_event);
+
+ D("%s: Significant timestamp = %llu\n"
+ , __func__, timestamp_event);
+ } else {
+ E(
+ "Read CWSTM32_READ_SIGNIFICANT_MOTION fails,"
+ " ret = %d\n", ret);
+ }
+ }
+ clear_intr = CW_MCU_INT_BIT_SIGNIFICANT_MOTION;
+ CWMCU_i2c_write(mcu_data, CWSTM32_INT_ST3, &clear_intr, 1);
+ }
+
+ /* INT_st3: bit 5 */
+ if (INT_st3 & CW_MCU_INT_BIT_STEP_DETECTOR) {
+ if (mcu_data->enabled_list & ((1ULL << CW_STEP_DETECTOR) |
+ (1ULL << CW_STEP_DETECTOR_W)))
+ cwmcu_batch_read(mcu_data);
+
+ clear_intr = CW_MCU_INT_BIT_STEP_DETECTOR;
+ CWMCU_i2c_write(mcu_data, CWSTM32_INT_ST3, &clear_intr, 1);
+ }
+
+ /* INT_st3: bit 6 */
+ if (INT_st3 & CW_MCU_INT_BIT_STEP_COUNTER) {
+ if (mcu_data->enabled_list & (1LL << CW_STEP_COUNTER_W))
+ cwmcu_batch_read(mcu_data);
+
+ if (mcu_data->enabled_list & (1LL << CW_STEP_COUNTER)) {
+ __le64 data64[2];
+ u8 *data = (u8 *)data64;
+ __le32 step_fw;
+
+ ret = CWMCU_i2c_read(mcu_data,
+ CWSTM32_READ_STEP_COUNTER,
+ data, 12);
+ if (ret >= 0) {
+ step_fw = *(__le32 *)(data + 8);
+ D("%s: From Firmware, step = %u\n",
+ __func__, le32_to_cpu(step_fw));
+
+ mcu_data->sensors_time[CW_STEP_COUNTER]
+ = 0;
+
+ report_step_counter(mcu_data,
+ le32_to_cpu(step_fw)
+ , le64_to_cpu(
+ data64[0])
+ , false);
+
+ D(
+ "%s: Step Counter INT, step = %llu"
+ ", timestamp = %llu\n"
+ , __func__
+ , mcu_data->step_counter_base
+ + le32_to_cpu(step_fw)
+ , le64_to_cpu(data64[0]));
+ } else {
+ E(
+ "%s: Step Counter i2c read fails, "
+ "ret = %d\n", __func__, ret);
+ }
+ }
+ clear_intr = CW_MCU_INT_BIT_STEP_COUNTER;
+ CWMCU_i2c_write(mcu_data, CWSTM32_INT_ST3, &clear_intr, 1);
+ }
+
+ /* INT_st3: bit 7 */
+ if (INT_st3 & CW_MCU_INT_BIT_FACEDOWN_DETECTION) {
+ if (mcu_data->enabled_list & (1LL << HTC_FACEDOWN_DETECTION)) {
+ u8 data;
+
+ ret = CWMCU_i2c_read(mcu_data,
+ CWSTM32_READ_FACEDOWN_DETECTION,
+ &data, sizeof(data));
+ if (ret >= 0) {
+ D("%s: FACEDOWN = %u\n", __func__, data);
+ activate_double_tap(data);
+ } else
+ E("%s: FACEDOWN i2c read fails, ret = %d\n",
+ __func__, ret);
+
+ }
+ clear_intr = CW_MCU_INT_BIT_FACEDOWN_DETECTION;
+ CWMCU_i2c_write(mcu_data, CWSTM32_INT_ST3, &clear_intr, 1);
+ }
+
+#ifdef MCU_WARN_MSGS
+ /* err_st: bit 5 */
+ if (err_st & CW_MCU_INT_BIT_ERROR_WARN_MSG) {
+ u8 buf_len[WARN_MSG_BUFFER_LEN_SIZE] = {0};
+
+ ret = CWMCU_i2c_read(mcu_data, CW_I2C_REG_WARN_MSG_BUFFER_LEN,
+ buf_len, sizeof(buf_len));
+ if (ret >= 0) {
+ int i;
+ char buf[WARN_MSG_PER_ITEM_LEN];
+
+ for (i = 0; i < WARN_MSG_BUFFER_LEN_SIZE; i++) {
+ if (buf_len[i] <= WARN_MSG_PER_ITEM_LEN)
+ print_warn_msg(mcu_data, buf,
+ buf_len[i], i);
+ }
+ } else {
+ E("%s: Warn MSG read fails, ret = %d\n",
+ __func__, ret);
+ }
+ clear_intr = CW_MCU_INT_BIT_ERROR_WARN_MSG;
+ ret = CWMCU_i2c_write(mcu_data, CWSTM32_ERR_ST, &clear_intr, 1);
+ }
+#endif
+
+ /* err_st: bit 6 */
+ if (err_st & CW_MCU_INT_BIT_ERROR_MCU_EXCEPTION) {
+ u8 buf_len[EXCEPTION_BUFFER_LEN_SIZE] = {0};
+ bool reset_done;
+
+ ret = CWMCU_i2c_read(mcu_data, CW_I2C_REG_EXCEPTION_BUFFER_LEN,
+ buf_len, sizeof(buf_len));
+ if (ret >= 0) {
+ u32 exception_len;
+ u8 data[EXCEPTION_BLOCK_LEN];
+ int i;
+
+ exception_len = le32_to_cpup((__le32 *)&buf_len[0]);
+ E("%s: exception_len = %u\n", __func__, exception_len);
+
+ for (i = 0; exception_len >= EXCEPTION_BLOCK_LEN; i++) {
+ memset(data, 0, sizeof(data));
+ ret = CWMCU_i2c_read(mcu_data,
+ CW_I2C_REG_EXCEPTION_BUFFER,
+ data, sizeof(data));
+ if (ret >= 0) {
+ char buf[3*EXCEPTION_BLOCK_LEN];
+
+ print_hex_data(buf, i, data,
+ EXCEPTION_BLOCK_LEN);
+ exception_len -= EXCEPTION_BLOCK_LEN;
+ } else {
+ E(
+ "%s: i = %d, excp1 i2c_read: ret = %d"
+ "\n", __func__, i, ret);
+ goto exception_end;
+ }
+ }
+ if ((exception_len > 0) &&
+ (exception_len < sizeof(data))) {
+ ret = CWMCU_i2c_read(mcu_data,
+ CW_I2C_REG_EXCEPTION_BUFFER,
+ data, exception_len);
+ if (ret >= 0) {
+ char buf[3*EXCEPTION_BLOCK_LEN];
+
+ print_hex_data(buf, i, data,
+ exception_len);
+ } else {
+ E(
+ "%s: i = %d, excp2 i2c_read: ret = %d"
+ "\n", __func__, i, ret);
+ }
+ }
+ } else {
+ E("%s: Exception status dump fails, ret = %d\n",
+ __func__, ret);
+ }
+exception_end:
+ mutex_lock(&mcu_data->activated_i2c_lock);
+ reset_done = reset_hub(mcu_data);
+ mutex_unlock(&mcu_data->activated_i2c_lock);
+
+ if (reset_done) {
+ mcu_data->w_re_init = true;
+ mcu_data->w_report_meta = true;
+ queue_work(mcu_data->mcu_wq, &mcu_data->one_shot_work);
+ E("%s: reset after exception done\n", __func__);
+ }
+
+ clear_intr = CW_MCU_INT_BIT_ERROR_MCU_EXCEPTION;
+ ret = CWMCU_i2c_write(mcu_data, CWSTM32_ERR_ST, &clear_intr, 1);
+ }
+
+ /* err_st: bit 7 */
+ if (err_st & CW_MCU_INT_BIT_ERROR_WATCHDOG_RESET) {
+ u8 data[WATCHDOG_STATUS_LEN] = {0};
+
+ E("[CWMCU] Watch Dog Reset\n");
+ usleep_range(5000, 6000);
+
+ ret = CWMCU_i2c_read(mcu_data, CW_I2C_REG_WATCHDOG_STATUS,
+ data, WATCHDOG_STATUS_LEN);
+ if (ret >= 0) {
+ int i;
+
+ for (i = 0; i < WATCHDOG_STATUS_LEN; i++) {
+ E("%s: Watchdog Status[%d] = 0x%x\n",
+ __func__, i, data[i]);
+ }
+ } else {
+ E("%s: Watchdog status dump fails, ret = %d\n",
+ __func__, ret);
+ }
+
+ mcu_data->w_re_init = true;
+ mcu_data->w_report_meta = true;
+ queue_work(mcu_data->mcu_wq, &mcu_data->one_shot_work);
+
+ clear_intr = CW_MCU_INT_BIT_ERROR_WATCHDOG_RESET;
+ ret = CWMCU_i2c_write(mcu_data, CWSTM32_ERR_ST, &clear_intr, 1);
+ }
+
+ /* batch_st */
+ if (batch_st & CW_MCU_INT_BIT_BATCH_INT_MASK) {
+ cwmcu_batch_read(mcu_data);
+
+ clear_intr = CW_MCU_INT_BIT_BATCH_INT_MASK;
+
+ D("%s: clear_intr = 0x%x, write_addr = 0x%x", __func__,
+ clear_intr, CWSTM32_BATCH_MODE_COMMAND);
+
+ ret = CWMCU_i2c_write(mcu_data,
+ CWSTM32_BATCH_MODE_COMMAND,
+ &clear_intr, 1);
+ }
+
+ cwmcu_powermode_switch(mcu_data, 0);
+
+ return IRQ_HANDLED;
+}
+
+/*=======iio device reg=========*/
+static void iio_trigger_work(struct irq_work *work)
+{
+ struct cwmcu_data *mcu_data = container_of((struct irq_work *)work,
+ struct cwmcu_data, iio_irq_work);
+
+ iio_trigger_poll(mcu_data->trig, iio_get_time_ns());
+}
+
+static irqreturn_t cw_trigger_handler(int irq, void *p)
+{
+ struct iio_poll_func *pf = p;
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct cwmcu_data *mcu_data = iio_priv(indio_dev);
+
+ cwmcu_read(mcu_data, pf);
+
+ mutex_lock(&mcu_data->lock);
+ iio_trigger_notify_done(mcu_data->indio_dev->trig);
+ mutex_unlock(&mcu_data->lock);
+
+ return IRQ_HANDLED;
+}
+
+static const struct iio_buffer_setup_ops cw_buffer_setup_ops = {
+ .preenable = &iio_sw_buffer_preenable,
+ .postenable = &iio_triggered_buffer_postenable,
+ .predisable = &iio_triggered_buffer_predisable,
+};
+
+static int cw_pseudo_irq_enable(struct iio_dev *indio_dev)
+{
+ struct cwmcu_data *mcu_data = iio_priv(indio_dev);
+
+ if (!atomic_cmpxchg(&mcu_data->pseudo_irq_enable, 0, 1)) {
+ D("%s:\n", __func__);
+ cancel_delayed_work_sync(&mcu_data->work);
+ queue_delayed_work(mcu_data->mcu_wq, &mcu_data->work, 0);
+ }
+
+ return 0;
+}
+
+static int cw_pseudo_irq_disable(struct iio_dev *indio_dev)
+{
+ struct cwmcu_data *mcu_data = iio_priv(indio_dev);
+
+ if (atomic_cmpxchg(&mcu_data->pseudo_irq_enable, 1, 0)) {
+ cancel_delayed_work_sync(&mcu_data->work);
+ D("%s:\n", __func__);
+ }
+ return 0;
+}
+
+static int cw_set_pseudo_irq(struct iio_dev *indio_dev, int enable)
+{
+ if (enable)
+ cw_pseudo_irq_enable(indio_dev);
+ else
+ cw_pseudo_irq_disable(indio_dev);
+
+ return 0;
+}
+
+static int cw_data_rdy_trigger_set_state(struct iio_trigger *trig,
+ bool state)
+{
+ struct iio_dev *indio_dev =
+ (struct iio_dev *)iio_trigger_get_drvdata(trig);
+ struct cwmcu_data *mcu_data = iio_priv(indio_dev);
+
+ mutex_lock(&mcu_data->mutex_lock);
+ cw_set_pseudo_irq(indio_dev, state);
+ mutex_unlock(&mcu_data->mutex_lock);
+
+ return 0;
+}
+
+static const struct iio_trigger_ops cw_trigger_ops = {
+ .owner = THIS_MODULE,
+ .set_trigger_state = &cw_data_rdy_trigger_set_state,
+};
+
+static int cw_probe_trigger(struct iio_dev *iio_dev)
+{
+ struct cwmcu_data *mcu_data = iio_priv(iio_dev);
+ int ret;
+
+ iio_dev->pollfunc = iio_alloc_pollfunc(&iio_pollfunc_store_time,
+ &cw_trigger_handler, IRQF_ONESHOT, iio_dev,
+ "%s_consumer%d", iio_dev->name, iio_dev->id);
+ if (iio_dev->pollfunc == NULL) {
+ ret = -ENOMEM;
+ goto error_ret;
+ }
+ mcu_data->trig = iio_trigger_alloc("%s-dev%d",
+ iio_dev->name,
+ iio_dev->id);
+ if (!mcu_data->trig) {
+ ret = -ENOMEM;
+ goto error_dealloc_pollfunc;
+ }
+ mcu_data->trig->dev.parent = &mcu_data->client->dev;
+ mcu_data->trig->ops = &cw_trigger_ops;
+ iio_trigger_set_drvdata(mcu_data->trig, iio_dev);
+
+ ret = iio_trigger_register(mcu_data->trig);
+ if (ret)
+ goto error_free_trig;
+
+ return 0;
+
+error_free_trig:
+ iio_trigger_free(mcu_data->trig);
+error_dealloc_pollfunc:
+ iio_dealloc_pollfunc(iio_dev->pollfunc);
+error_ret:
+ return ret;
+}
+
+static int cw_probe_buffer(struct iio_dev *iio_dev)
+{
+ int ret;
+ struct iio_buffer *buffer;
+
+ buffer = iio_kfifo_allocate(iio_dev);
+ if (!buffer) {
+ ret = -ENOMEM;
+ goto error_ret;
+ }
+
+ buffer->scan_timestamp = true;
+ iio_dev->buffer = buffer;
+ iio_dev->setup_ops = &cw_buffer_setup_ops;
+ iio_dev->modes |= INDIO_BUFFER_TRIGGERED;
+ ret = iio_buffer_register(iio_dev, iio_dev->channels,
+ iio_dev->num_channels);
+ if (ret)
+ goto error_free_buf;
+
+ iio_scan_mask_set(iio_dev, iio_dev->buffer, CW_SCAN_ID);
+ iio_scan_mask_set(iio_dev, iio_dev->buffer, CW_SCAN_X);
+ iio_scan_mask_set(iio_dev, iio_dev->buffer, CW_SCAN_Y);
+ iio_scan_mask_set(iio_dev, iio_dev->buffer, CW_SCAN_Z);
+ return 0;
+
+error_free_buf:
+ iio_kfifo_free(iio_dev->buffer);
+error_ret:
+ return ret;
+}
+
+static int cw_read_raw(struct iio_dev *indio_dev,
+ struct iio_chan_spec const *chan,
+ int *val,
+ int *val2,
+ long mask)
+{
+ struct cwmcu_data *mcu_data = iio_priv(indio_dev);
+ int ret = -EINVAL;
+
+ if (chan->type != IIO_ACCEL)
+ return ret;
+
+ mutex_lock(&mcu_data->lock);
+ switch (mask) {
+ case 0:
+ *val = mcu_data->iio_data[chan->channel2 - IIO_MOD_X];
+ ret = IIO_VAL_INT;
+ break;
+ case IIO_CHAN_INFO_SCALE:
+ /* Gain : counts / uT = 1000 [nT] */
+ /* Scaling factor : 1000000 / Gain = 1000 */
+ *val = 0;
+ *val2 = 1000;
+ ret = IIO_VAL_INT_PLUS_MICRO;
+ break;
+ }
+ mutex_unlock(&mcu_data->lock);
+
+ return ret;
+}
+
+#define CW_CHANNEL(axis) \
+{ \
+ .type = IIO_ACCEL, \
+ .modified = 1, \
+ .channel2 = axis+1, \
+ .info_mask = BIT(IIO_CHAN_INFO_SCALE), \
+ .scan_index = axis, \
+ .scan_type = IIO_ST('u', 32, 32, 0) \
+}
+
+static const struct iio_chan_spec cw_channels[] = {
+ CW_CHANNEL(CW_SCAN_ID),
+ CW_CHANNEL(CW_SCAN_X),
+ CW_CHANNEL(CW_SCAN_Y),
+ CW_CHANNEL(CW_SCAN_Z),
+ IIO_CHAN_SOFT_TIMESTAMP(CW_SCAN_TIMESTAMP)
+};
+
+static const struct iio_info cw_info = {
+ .read_raw = &cw_read_raw,
+ .driver_module = THIS_MODULE,
+};
+
+static int mcu_parse_dt(struct device *dev, struct cwmcu_data *pdata)
+{
+ struct property *prop;
+ struct device_node *dt = dev->of_node;
+ u32 buf = 0;
+ struct device_node *g_sensor_offset;
+ int g_sensor_cali_size = 0;
+ unsigned char *g_sensor_cali_data = NULL;
+ struct device_node *gyro_sensor_offset;
+ int gyro_sensor_cali_size = 0;
+ unsigned char *gyro_sensor_cali_data = NULL;
+ struct device_node *light_sensor_offset = NULL;
+ int light_sensor_cali_size = 0;
+ unsigned char *light_sensor_cali_data = NULL;
+ struct device_node *baro_sensor_offset;
+ int baro_sensor_cali_size = 0;
+ unsigned char *baro_sensor_cali_data = NULL;
+
+ int i;
+
+ g_sensor_offset = of_find_node_by_path(CALIBRATION_DATA_PATH);
+ if (g_sensor_offset) {
+ g_sensor_cali_data = (unsigned char *)
+ of_get_property(g_sensor_offset,
+ G_SENSOR_FLASH_DATA,
+ &g_sensor_cali_size);
+ D("%s: cali_size = %d\n", __func__, g_sensor_cali_size);
+ if (g_sensor_cali_data) {
+ for (i = 0; (i < g_sensor_cali_size) && (i < 4); i++) {
+ D("g sensor cali_data[%d] = %02x\n", i,
+ g_sensor_cali_data[i]);
+ pdata->gs_kvalue |= (g_sensor_cali_data[i] <<
+ (i * 8));
+ }
+ }
+
+ } else
+ E("%s: G-sensor Calibration data offset not found\n", __func__);
+
+ gyro_sensor_offset = of_find_node_by_path(CALIBRATION_DATA_PATH);
+ if (gyro_sensor_offset) {
+ gyro_sensor_cali_data = (unsigned char *)
+ of_get_property(gyro_sensor_offset,
+ GYRO_SENSOR_FLASH_DATA,
+ &gyro_sensor_cali_size);
+ D("%s:gyro cali_size = %d\n", __func__, gyro_sensor_cali_size);
+ if (gyro_sensor_cali_data) {
+ for (i = 0; (i < gyro_sensor_cali_size) && (i < 4);
+ i++) {
+ D("gyro sensor cali_data[%d] = %02x\n", i,
+ gyro_sensor_cali_data[i]);
+ pdata->gy_kvalue |= (gyro_sensor_cali_data[i] <<
+ (i * 8));
+ }
+ pdata->gs_kvalue_L1 = (gyro_sensor_cali_data[5] << 8) |
+ gyro_sensor_cali_data[4];
+ D("g sensor cali_data L1 = %x\n", pdata->gs_kvalue_L1);
+ pdata->gs_kvalue_L2 = (gyro_sensor_cali_data[7] << 8) |
+ gyro_sensor_cali_data[6];
+ D("g sensor cali_data L2 = %x\n", pdata->gs_kvalue_L2);
+ pdata->gs_kvalue_L3 = (gyro_sensor_cali_data[9] << 8) |
+ gyro_sensor_cali_data[8];
+ D("g sensor cali_data L3 = %x\n", pdata->gs_kvalue_L3);
+ pdata->gs_kvalue_R1 = (gyro_sensor_cali_data[11] << 8) |
+ gyro_sensor_cali_data[10];
+ D("g sensor cali_data R1 = %x\n", pdata->gs_kvalue_R1);
+ pdata->gs_kvalue_R2 = (gyro_sensor_cali_data[13] << 8) |
+ gyro_sensor_cali_data[12];
+ D("g sensor cali_data R2 = %x\n", pdata->gs_kvalue_R2);
+ pdata->gs_kvalue_R3 = (gyro_sensor_cali_data[15] << 8) |
+ gyro_sensor_cali_data[14];
+ D("g sensor cali_data R3 = %x\n", pdata->gs_kvalue_R3);
+ }
+
+ } else
+ E("%s: GYRO-sensor Calibration data offset not found\n",
+ __func__);
+
+ light_sensor_offset = of_find_node_by_path(CALIBRATION_DATA_PATH);
+ if (light_sensor_offset) {
+ light_sensor_cali_data = (unsigned char *)
+ of_get_property(light_sensor_offset,
+ LIGHT_SENSOR_FLASH_DATA,
+ &light_sensor_cali_size);
+ D("%s:light cali_size = %d\n", __func__,
+ light_sensor_cali_size);
+ if (light_sensor_cali_data) {
+ for (i = 0; (i < light_sensor_cali_size) && (i < 4);
+ i++) {
+ D("light sensor cali_data[%d] = %02x\n", i,
+ light_sensor_cali_data[i]);
+ pdata->als_kvalue |=
+ (light_sensor_cali_data[i] << (i * 8));
+ }
+ }
+ } else
+ E("%s: LIGHT-sensor Calibration data offset not found\n",
+ __func__);
+
+ baro_sensor_offset = of_find_node_by_path(CALIBRATION_DATA_PATH);
+ if (baro_sensor_offset) {
+ baro_sensor_cali_data = (unsigned char *)
+ of_get_property(baro_sensor_offset,
+ BARO_SENSOR_FLASH_DATA,
+ &baro_sensor_cali_size);
+ D("%s: cali_size = %d\n", __func__, baro_sensor_cali_size);
+ if (baro_sensor_cali_data) {
+ for (i = 0; (i < baro_sensor_cali_size) &&
+ (i < 5); i++) {
+ D("baro sensor cali_data[%d] = %02x\n", i,
+ baro_sensor_cali_data[i]);
+ if (i == baro_sensor_cali_size - 1)
+ pdata->bs_kheader =
+ baro_sensor_cali_data[i];
+ else
+ pdata->bs_kvalue |=
+ (baro_sensor_cali_data[i] <<
+ (i * 8));
+ }
+ }
+ } else
+ E("%s: Barometer-sensor Calibration data offset not found\n",
+ __func__);
+
+ pdata->gpio_wake_mcu = of_get_named_gpio(dt, "mcu,Cpu_wake_mcu-gpio",
+ 0);
+ if (!gpio_is_valid(pdata->gpio_wake_mcu))
+ E("DT:gpio_wake_mcu value is not valid\n");
+ else
+ D("DT:gpio_wake_mcu=%d\n", pdata->gpio_wake_mcu);
+
+ pdata->gpio_mcu_irq = of_get_named_gpio(dt, "mcu,intr-gpio", 0);
+ if (!gpio_is_valid(pdata->gpio_mcu_irq))
+ E("DT:gpio_mcu_irq value is not valid\n");
+ else
+ D("DT:gpio_mcu_irq=%d\n", pdata->gpio_mcu_irq);
+
+ pdata->gpio_reset = of_get_named_gpio(dt, "mcu,Reset-gpio", 0);
+ if (!gpio_is_valid(pdata->gpio_reset))
+ E("DT:gpio_reset value is not valid\n");
+ else
+ D("DT:gpio_reset=%d\n", pdata->gpio_reset);
+
+ pdata->gpio_chip_mode = of_get_named_gpio(dt, "mcu,Chip_mode-gpio", 0);
+ if (!gpio_is_valid(pdata->gpio_chip_mode))
+ E("DT:gpio_chip_mode value is not valid\n");
+ else
+ D("DT:gpio_chip_mode=%d\n", pdata->gpio_chip_mode);
+
+ prop = of_find_property(dt, "mcu,gs_chip_layout", NULL);
+ if (prop) {
+ of_property_read_u32(dt, "mcu,gs_chip_layout", &buf);
+ pdata->gs_chip_layout = buf;
+ D("%s: chip_layout = %d\n", __func__, pdata->gs_chip_layout);
+ } else
+ E("%s: mcu,gs_chip_layout not found\n", __func__);
+
+ prop = of_find_property(dt, "mcu,acceleration_axes", NULL);
+ if (prop) {
+ of_property_read_u32(dt, "mcu,acceleration_axes", &buf);
+ pdata->acceleration_axes = buf;
+ I("%s: acceleration axes = %u\n", __func__,
+ pdata->acceleration_axes);
+ } else
+ E("%s: g_sensor axes not found", __func__);
+
+ prop = of_find_property(dt, "mcu,magnetic_axes", NULL);
+ if (prop) {
+ of_property_read_u32(dt, "mcu,magnetic_axes", &buf);
+ pdata->magnetic_axes = buf;
+ I("%s: Compass axes = %u\n", __func__, pdata->magnetic_axes);
+ } else
+ E("%s: Compass axes not found", __func__);
+
+ prop = of_find_property(dt, "mcu,gyro_axes", NULL);
+ if (prop) {
+ of_property_read_u32(dt, "mcu,gyro_axes", &buf);
+ pdata->gyro_axes = buf;
+ I("%s: gyro axes = %u\n", __func__, pdata->gyro_axes);
+ } else
+ E("%s: gyro axes not found", __func__);
+
+ return 0;
+}
+
+static struct device_attribute attributes[] = {
+ __ATTR(enable, 0666, active_show, active_set),
+ __ATTR(batch_enable, 0666, batch_show, batch_set),
+ __ATTR(delay_ms, 0666, interval_show, interval_set),
+ __ATTR(flush, 0666, flush_show, flush_set),
+ __ATTR(calibrator_en, 0220, NULL, set_calibrator_en),
+ __ATTR(calibrator_status_acc, 0440, show_calibrator_status_acc, NULL),
+ __ATTR(calibrator_status_mag, 0440, show_calibrator_status_mag, NULL),
+ __ATTR(calibrator_status_gyro, 0440, show_calibrator_status_gyro, NULL),
+ __ATTR(calibrator_data_acc, 0666, get_k_value_acc_f, set_k_value_acc_f),
+ __ATTR(calibrator_data_acc_rl, 0440, get_k_value_acc_rl_f, NULL),
+ __ATTR(ap_calibrator_data_acc_rl, 0440, ap_get_k_value_acc_rl_f, NULL),
+ __ATTR(calibrator_data_mag, 0666, get_k_value_mag_f, set_k_value_mag_f),
+ __ATTR(calibrator_data_gyro, 0666, get_k_value_gyro_f,
+ set_k_value_gyro_f),
+ __ATTR(calibrator_data_light, 0440, get_k_value_light_f, NULL),
+ __ATTR(calibrator_data_barometer, 0666, get_k_value_barometer_f,
+ set_k_value_barometer_f),
+ __ATTR(data_barometer, 0440, get_barometer, NULL),
+ __ATTR(data_light_polling, 0440, get_light_polling, NULL),
+ __ATTR(sensor_hub_rdata, 0220, NULL, read_mcu_data),
+ __ATTR(data_light_kadc, 0440, get_light_kadc, NULL),
+ __ATTR(firmware_version, 0440, get_firmware_version, NULL),
+ __ATTR(hall_sensor, 0440, get_hall_sensor, NULL),
+ __ATTR(led_en, 0220, NULL, led_enable),
+ __ATTR(facedown_enabled, 0660, facedown_show, facedown_set),
+};
+
+static int create_sysfs_interfaces(struct cwmcu_data *mcu_data)
+{
+ int i;
+ int res;
+
+ mcu_data->sensor_class = class_create(THIS_MODULE, "htc_sensorhub");
+ if (IS_ERR(mcu_data->sensor_class))
+ return PTR_ERR(mcu_data->sensor_class);
+
+ mcu_data->sensor_dev = device_create(mcu_data->sensor_class, NULL, 0,
+ "%s", "sensor_hub");
+ if (IS_ERR(mcu_data->sensor_dev)) {
+ res = PTR_ERR(mcu_data->sensor_dev);
+ goto err_device_create;
+ }
+
+ res = dev_set_drvdata(mcu_data->sensor_dev, mcu_data);
+ if (res)
+ goto err_set_drvdata;
+
+ for (i = 0; i < ARRAY_SIZE(attributes); i++) {
+ res = device_create_file(mcu_data->sensor_dev, attributes + i);
+ if (res)
+ goto error;
+ }
+
+ res = sysfs_create_link(&mcu_data->sensor_dev->kobj,
+ &mcu_data->indio_dev->dev.kobj, "iio");
+ if (res < 0)
+ goto error;
+
+ return 0;
+
+error:
+ while (--i >= 0)
+ device_remove_file(mcu_data->sensor_dev, attributes + i);
+err_set_drvdata:
+ put_device(mcu_data->sensor_dev);
+ device_unregister(mcu_data->sensor_dev);
+err_device_create:
+ class_destroy(mcu_data->sensor_class);
+ return res;
+}
+
+static void destroy_sysfs_interfaces(struct cwmcu_data *mcu_data)
+{
+ int i;
+
+ sysfs_remove_link(&mcu_data->sensor_dev->kobj, "iio");
+ for (i = 0; i < ARRAY_SIZE(attributes); i++)
+ device_remove_file(mcu_data->sensor_dev, attributes + i);
+ put_device(mcu_data->sensor_dev);
+ device_unregister(mcu_data->sensor_dev);
+ class_destroy(mcu_data->sensor_class);
+}
+
+static void cwmcu_remove_trigger(struct iio_dev *indio_dev)
+{
+ struct cwmcu_data *mcu_data = iio_priv(indio_dev);
+
+ iio_trigger_unregister(mcu_data->trig);
+ iio_trigger_free(mcu_data->trig);
+ iio_dealloc_pollfunc(indio_dev->pollfunc);
+}
+
+static void cwmcu_remove_buffer(struct iio_dev *indio_dev)
+{
+ iio_buffer_unregister(indio_dev);
+ iio_kfifo_free(indio_dev->buffer);
+}
+
+static void cwmcu_one_shot(struct work_struct *work)
+{
+ struct cwmcu_data *mcu_data = container_of((struct work_struct *)work,
+ struct cwmcu_data, one_shot_work);
+
+ if (mcu_data->w_activated_i2c == true) {
+ mcu_data->w_activated_i2c = false;
+
+ mutex_lock(&mcu_data->activated_i2c_lock);
+ if (retry_exhausted(mcu_data) &&
+ time_after(jiffies, mcu_data->i2c_jiffies +
+ REACTIVATE_PERIOD)) {
+ bool reset_done;
+
+ reset_done = reset_hub(mcu_data);
+
+ I("%s: fw_update_status = 0x%x\n", __func__,
+ mcu_data->fw_update_status);
+
+ if (reset_done &&
+ (!(mcu_data->fw_update_status &
+ (FW_DOES_NOT_EXIST | FW_UPDATE_QUEUED)))) {
+ mcu_data->fw_update_status |= FW_UPDATE_QUEUED;
+ request_firmware_nowait(THIS_MODULE,
+ FW_ACTION_HOTPLUG,
+ "sensor_hub.img",
+ &mcu_data->client->dev,
+ GFP_KERNEL, mcu_data, update_firmware);
+ }
+
+ }
+
+ if (retry_exhausted(mcu_data)) {
+ D("%s: i2c_total_retry = %d, i2c_latch_retry = %d\n",
+ __func__, mcu_data->i2c_total_retry,
+ mcu_data->i2c_latch_retry);
+ mutex_unlock(&mcu_data->activated_i2c_lock);
+ return;
+ }
+
+ /* record the failure */
+ mcu_data->i2c_total_retry++;
+ mcu_data->i2c_jiffies = jiffies;
+
+ mutex_unlock(&mcu_data->activated_i2c_lock);
+ D(
+ "%s--: mcu_data->i2c_total_retry = %d, "
+ "mcu_data->i2c_latch_retry = %d\n", __func__,
+ mcu_data->i2c_total_retry, mcu_data->i2c_latch_retry);
+ }
+
+ if (mcu_data->w_re_init == true) {
+ mcu_data->w_re_init = false;
+
+ cwmcu_powermode_switch(mcu_data, 1);
+
+ cwmcu_sensor_placement(mcu_data);
+ cwmcu_set_sensor_kvalue(mcu_data);
+ cwmcu_restore_status(mcu_data);
+
+ cwmcu_powermode_switch(mcu_data, 0);
+ }
+
+ if (mcu_data->w_facedown_set == true) {
+ u8 data;
+ int i;
+
+ mcu_data->w_facedown_set = false;
+
+ i = (HTC_FACEDOWN_DETECTION / 8);
+
+ data = (u8)(mcu_data->enabled_list >> (i * 8));
+ CWMCU_i2c_write_power(mcu_data, CWSTM32_ENABLE_REG + i, &data,
+ 1);
+ }
+
+ if (mcu_data->w_flush_fifo == true) {
+ int j;
+ bool is_meta_read = false;
+
+ mcu_data->w_flush_fifo = false;
+
+ cwmcu_powermode_switch(mcu_data, 1);
+
+ for (j = 0; j < 2; j++) {
+ is_meta_read = cwmcu_batch_fifo_read(mcu_data, j);
+ if (is_meta_read)
+ break;
+ }
+
+ if (!is_meta_read)
+ cwmcu_meta_read(mcu_data);
+
+ cwmcu_powermode_switch(mcu_data, 0);
+
+ if (mcu_data->pending_flush && !mcu_data->w_flush_fifo) {
+ mcu_data->w_flush_fifo = true;
+ queue_work(mcu_data->mcu_wq, &mcu_data->one_shot_work);
+ }
+ }
+
+ mutex_lock(&mcu_data->mutex_lock);
+ if (mcu_data->w_clear_fifo == true) {
+ int j;
+
+ mcu_data->w_clear_fifo = false;
+ mutex_unlock(&mcu_data->mutex_lock);
+
+ cwmcu_powermode_switch(mcu_data, 1);
+
+ for (j = 0; j < 2; j++)
+ cwmcu_batch_fifo_read(mcu_data, j);
+
+ cwmcu_powermode_switch(mcu_data, 0);
+
+ mutex_lock(&mcu_data->mutex_lock);
+ if (!mcu_data->w_clear_fifo)
+ mcu_data->w_clear_fifo_running = false;
+ } else
+ mcu_data->w_clear_fifo_running = false;
+ mutex_unlock(&mcu_data->mutex_lock);
+
+ if (mcu_data->w_report_meta == true) {
+ int j;
+ u16 data_event[REPORT_EVENT_COMMON_LEN];
+
+ mcu_data->w_report_meta = false;
+
+ for (j = 0;
+ (j < CW_SENSORS_ID_TOTAL) && mcu_data->pending_flush;
+ j++) {
+ if (mcu_data->enabled_list & (1LL << j)) {
+ data_event[0] = j;
+ cw_send_event(mcu_data, CW_META_DATA,
+ data_event, 0);
+ I("%s: Reported META = %d from driver\n",
+ __func__, j);
+ }
+ mcu_data->pending_flush &= ~(1LL << j);
+ }
+ }
+}
+
+static void cwmcu_work_report(struct work_struct *work)
+{
+ struct cwmcu_data *mcu_data = container_of((struct delayed_work *)work,
+ struct cwmcu_data, work);
+
+ if (atomic_read(&mcu_data->pseudo_irq_enable)) {
+ unsigned long jiff;
+
+ jiff = msecs_to_jiffies(atomic_read(&mcu_data->delay));
+ if (!jiff)
+ jiff = 1;
+ D("%s: jiff = %lu\n", __func__, jiff);
+ irq_work_queue(&mcu_data->iio_irq_work);
+ queue_delayed_work(mcu_data->mcu_wq, &mcu_data->work, jiff);
+ }
+}
+
+static int cwmcu_input_init(struct input_dev **input)
+{
+ int err;
+
+ *input = input_allocate_device();
+ if (!*input)
+ return -ENOMEM;
+
+ set_bit(EV_SW, (*input)->evbit);
+
+ input_set_capability(*input, EV_SW, SW_LID);
+ input_set_capability(*input, EV_SW, SW_CAMERA_LENS_COVER);
+
+ (*input)->name = CWMCU_I2C_NAME;
+
+ err = input_register_device(*input);
+ if (err) {
+ input_free_device(*input);
+ return err;
+ }
+
+ return err;
+}
+
+static int CWMCU_i2c_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ struct cwmcu_data *mcu_data;
+ struct iio_dev *indio_dev;
+ int error;
+ int i;
+
+ I("%s++: Report pending META on FW exceptions\n", __func__);
+
+ if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C)) {
+ dev_err(&client->dev, "i2c_check_functionality error\n");
+ return -EIO;
+ }
+
+ D("%s: sizeof(*mcu_data) = %lu\n", __func__, sizeof(*mcu_data));
+
+ indio_dev = iio_device_alloc(sizeof(*mcu_data));
+ if (!indio_dev) {
+ E("%s: iio_device_alloc failed\n", __func__);
+ return -ENOMEM;
+ }
+
+ i2c_set_clientdata(client, indio_dev);
+
+ indio_dev->name = CWMCU_I2C_NAME;
+ indio_dev->dev.parent = &client->dev;
+ indio_dev->info = &cw_info;
+ indio_dev->channels = cw_channels;
+ indio_dev->num_channels = ARRAY_SIZE(cw_channels);
+ indio_dev->modes |= INDIO_BUFFER_TRIGGERED | INDIO_KFIFO_USE_VMALLOC;
+
+ mcu_data = iio_priv(indio_dev);
+ mcu_data->client = client;
+ mcu_data->indio_dev = indio_dev;
+
+ if (client->dev.of_node) {
+ D("Device Tree parsing.");
+
+ error = mcu_parse_dt(&client->dev, mcu_data);
+ if (error) {
+ dev_err(&client->dev,
+ "%s: mcu_parse_dt for pdata failed. err = %d\n"
+ , __func__, error);
+ goto exit_mcu_parse_dt_fail;
+ }
+ } else {
+ if (client->dev.platform_data != NULL) {
+ mcu_data->acceleration_axes =
+ ((struct cwmcu_platform_data *)
+ mcu_data->client->dev.platform_data)
+ ->acceleration_axes;
+ mcu_data->magnetic_axes =
+ ((struct cwmcu_platform_data *)
+ mcu_data->client->dev.platform_data)
+ ->magnetic_axes;
+ mcu_data->gyro_axes =
+ ((struct cwmcu_platform_data *)
+ mcu_data->client->dev.platform_data)
+ ->gyro_axes;
+ mcu_data->gpio_wake_mcu =
+ ((struct cwmcu_platform_data *)
+ mcu_data->client->dev.platform_data)
+ ->gpio_wake_mcu;
+ }
+ }
+
+ error = gpio_request(mcu_data->gpio_reset, "cwmcu_reset");
+ if (error)
+ E("%s : request reset gpio fail\n", __func__);
+
+ error = gpio_request(mcu_data->gpio_wake_mcu, "cwmcu_CPU2MCU");
+ if (error)
+ E("%s : request gpio_wake_mcu gpio fail\n", __func__);
+
+ error = gpio_request(mcu_data->gpio_chip_mode, "cwmcu_hub_boot_mode");
+ if (error)
+ E("%s : request chip mode gpio fail\n", __func__);
+
+ gpio_direction_output(mcu_data->gpio_reset, 1);
+ gpio_direction_output(mcu_data->gpio_wake_mcu, 0);
+ gpio_direction_output(mcu_data->gpio_chip_mode, 0);
+
+ error = gpio_request(mcu_data->gpio_mcu_irq, "cwmcu_int");
+ if (error) {
+ E("%s : request irq gpio fail\n", __func__);
+ }
+
+ mutex_init(&mcu_data->mutex_lock);
+ mutex_init(&mcu_data->group_i2c_lock);
+ mutex_init(&mcu_data->activated_i2c_lock);
+ mutex_init(&mcu_data->power_mode_lock);
+ mutex_init(&mcu_data->lock);
+
+ INIT_DELAYED_WORK(&mcu_data->work, cwmcu_work_report);
+ INIT_WORK(&mcu_data->one_shot_work, cwmcu_one_shot);
+
+ error = cw_probe_buffer(indio_dev);
+ if (error) {
+ E("%s: cw_probe_buffer failed\n", __func__);
+ goto error_free_dev;
+ }
+ error = cw_probe_trigger(indio_dev);
+ if (error) {
+ E("%s: cw_probe_trigger failed\n", __func__);
+ goto error_remove_buffer;
+ }
+ error = iio_device_register(indio_dev);
+ if (error) {
+ E("%s: iio_device_register failed\n", __func__);
+ goto error_remove_trigger;
+ }
+
+ error = create_sysfs_interfaces(mcu_data);
+ if (error)
+ goto err_free_mem;
+
+ for (i = 0; i < num_sensors; i++) {
+ mcu_data->sensors_time[i] = 0;
+ mcu_data->report_period[i] = 200000 * MS_TO_PERIOD;
+ }
+
+ wake_lock_init(&mcu_data->significant_wake_lock, WAKE_LOCK_SUSPEND,
+ "significant_wake_lock");
+ wake_lock_init(&mcu_data->report_wake_lock, WAKE_LOCK_SUSPEND,
+ "report_wake_lock");
+
+ atomic_set(&mcu_data->delay, CWMCU_MAX_DELAY);
+ init_irq_work(&mcu_data->iio_irq_work, iio_trigger_work);
+
+ mcu_data->mcu_wq = create_singlethread_workqueue("htc_mcu");
+ i2c_set_clientdata(client, mcu_data);
+ pm_runtime_enable(&client->dev);
+
+ client->irq = gpio_to_irq(mcu_data->gpio_mcu_irq);
+
+ mcu_data->IRQ = client->irq;
+ D("Requesting irq = %d\n", mcu_data->IRQ);
+ error = request_threaded_irq(mcu_data->IRQ, NULL, cwmcu_irq_handler,
+ IRQF_TRIGGER_RISING | IRQF_ONESHOT, "cwmcu", mcu_data);
+ if (error)
+ E("[CWMCU] could not request irq, error = %d\n", error);
+ error = enable_irq_wake(mcu_data->IRQ);
+ if (error < 0)
+ E("[CWMCU] could not enable irq as wakeup source %d\n", error);
+
+ mutex_lock(&mcu_data->mutex_lock);
+ mcu_data->suspended = false;
+ mutex_unlock(&mcu_data->mutex_lock);
+
+ request_firmware_nowait(THIS_MODULE, FW_ACTION_HOTPLUG,
+ "sensor_hub.img", &client->dev, GFP_KERNEL, mcu_data,
+ update_firmware);
+
+ error = cwmcu_input_init(&mcu_data->input);
+ if (error) {
+ E("%s: input_dev register failed", __func__);
+ goto err_register_input;
+ }
+ input_set_drvdata(mcu_data->input, mcu_data);
+
+ mcu_data->probe_success = true;
+ I("CWMCU_i2c_probe success!\n");
+
+ return 0;
+
+err_register_input:
+ free_irq(mcu_data->IRQ, mcu_data);
+err_free_mem:
+ if (indio_dev)
+ iio_device_unregister(indio_dev);
+error_remove_trigger:
+ if (indio_dev)
+ cwmcu_remove_trigger(indio_dev);
+error_remove_buffer:
+ if (indio_dev)
+ cwmcu_remove_buffer(indio_dev);
+error_free_dev:
+ if (client->dev.of_node &&
+ ((struct cwmcu_platform_data *)mcu_data->client->dev.platform_data))
+ kfree(mcu_data->client->dev.platform_data);
+exit_mcu_parse_dt_fail:
+ if (indio_dev)
+ iio_device_free(indio_dev);
+ i2c_set_clientdata(client, NULL);
+
+ return error;
+}
+
+static int CWMCU_i2c_remove(struct i2c_client *client)
+{
+ struct cwmcu_data *mcu_data = i2c_get_clientdata(client);
+
+ gpio_set_value(mcu_data->gpio_wake_mcu, 1);
+
+ wake_lock_destroy(&mcu_data->significant_wake_lock);
+ wake_lock_destroy(&mcu_data->report_wake_lock);
+ destroy_sysfs_interfaces(mcu_data);
+ kfree(mcu_data);
+ return 0;
+}
+
+static const struct dev_pm_ops cwmcu_pm_ops = {
+ .suspend = cwmcu_suspend,
+ .resume = cwmcu_resume
+};
+
+static const struct i2c_device_id cwmcu_id[] = {
+ {CWMCU_I2C_NAME, 0},
+ { }
+};
+#ifdef CONFIG_OF
+static struct of_device_id mcu_match_table[] = {
+ {.compatible = "htc_mcu" },
+ {},
+};
+#else
+#define mcu_match_table NULL
+#endif
+
+MODULE_DEVICE_TABLE(i2c, cwmcu_id);
+
+static struct i2c_driver cwmcu_driver = {
+ .driver = {
+ .name = CWMCU_I2C_NAME,
+ .owner = THIS_MODULE,
+ .pm = &cwmcu_pm_ops,
+ .of_match_table = mcu_match_table,
+ },
+ .probe = CWMCU_i2c_probe,
+ .remove = CWMCU_i2c_remove,
+ .id_table = cwmcu_id,
+};
+
+static int __init CWMCU_i2c_init(void)
+{
+ return i2c_add_driver(&cwmcu_driver);
+}
+module_init(CWMCU_i2c_init);
+
+static void __exit CWMCU_i2c_exit(void)
+{
+ i2c_del_driver(&cwmcu_driver);
+}
+module_exit(CWMCU_i2c_exit);
+
+MODULE_DESCRIPTION("CWMCU I2C Bus Driver V1.6");
+MODULE_AUTHOR("CyWee Group Ltd.");
+MODULE_LICENSE("GPL");
diff --git a/drivers/i2c/chips/Kconfig b/drivers/i2c/chips/Kconfig
new file mode 100644
index 0000000..0aa11aa
--- /dev/null
+++ b/drivers/i2c/chips/Kconfig
@@ -0,0 +1,20 @@
+#
+# Miscellaneous I2C chip drivers configuration
+#
+
+menu "Miscellaneous I2C Chip support"
+
+config INPUT_CWSTM32
+ tristate "CyWee CWSTM32 Sensor HUB"
+ depends on I2C && INPUT
+ select INPUT_POLLDEV
+ help
+ This driver provides support for the CyWee CWSTM32 Sensor HUB.
+
+config CWSTM32_DEBUG
+ tristate "CyWee CWSTM32 Sensor HUB DEBUG MECHANISM"
+ depends on INPUT_CWSTM32
+ help
+ This option enables the debug mechanism of the CWSTM32 Sensor HUB driver.
+
+endmenu
diff --git a/drivers/i2c/chips/Makefile b/drivers/i2c/chips/Makefile
new file mode 100644
index 0000000..eb96f52
--- /dev/null
+++ b/drivers/i2c/chips/Makefile
@@ -0,0 +1,5 @@
+#
+# Makefile for miscellaneous I2C chip drivers.
+#
+obj-$(CONFIG_INPUT_CWSTM32) += CwMcuSensor.o
+
diff --git a/drivers/iio/kfifo_buf.c b/drivers/iio/kfifo_buf.c
index a923c78..e7ea375 100644
--- a/drivers/iio/kfifo_buf.c
+++ b/drivers/iio/kfifo_buf.c
@@ -22,6 +22,9 @@
if ((length == 0) || (bytes_per_datum == 0))
return -EINVAL;
+ if (buf->buffer.kfifo_use_vmalloc)
+ return __kfifo_valloc((struct __kfifo *)&buf->kf, length,
+ bytes_per_datum);
return __kfifo_alloc((struct __kfifo *)&buf->kf, length,
bytes_per_datum, GFP_KERNEL);
}
@@ -151,6 +154,8 @@
kf->buffer.attrs = &iio_kfifo_attribute_group;
kf->buffer.access = &kfifo_access_funcs;
kf->buffer.length = 2;
+ kf->buffer.kfifo_use_vmalloc
+ = !!(indio_dev->modes & INDIO_KFIFO_USE_VMALLOC);
return &kf->buffer;
}
EXPORT_SYMBOL(iio_kfifo_allocate);
diff --git a/drivers/input/touchscreen/Kconfig b/drivers/input/touchscreen/Kconfig
index 3660466..7b4c6a9 100644
--- a/drivers/input/touchscreen/Kconfig
+++ b/drivers/input/touchscreen/Kconfig
@@ -136,6 +136,14 @@
To compile this driver as a module, choose M here: the
module will be called cy8ctmg110_ts.
+config CYPRESS_SAR
+ tristate "Cypress CY8C20045 SAR"
+ depends on I2C
+ default n
+ help
+ Say Y here if you have a SAR sensor on your device.
+ If unsure, say N.
+
config TOUCHSCREEN_CYTTSP_CORE
tristate "Cypress TTSP touchscreen"
help
@@ -938,4 +946,15 @@
To compile this driver as a module, choose M here: the
module will be called maxim_sti.
+config TOUCHSCREEN_MAX1187X
+ tristate "Maxim max1187X touchscreen"
+ depends on I2C
+ help
+ Say Y here if you have a Maxim max11871 touchscreen connected
+ to your system. max11871 conforms to the Maxim Touch Protocol
+ (MTP) defined by Maxim.
+
+ If unsure, say N.
+
+source "drivers/input/touchscreen/synaptics_dsx/Kconfig"
endif
diff --git a/drivers/input/touchscreen/Makefile b/drivers/input/touchscreen/Makefile
index 3d74676..427567f 100644
--- a/drivers/input/touchscreen/Makefile
+++ b/drivers/input/touchscreen/Makefile
@@ -74,5 +74,8 @@
obj-$(CONFIG_TOUCHSCREEN_W90X900) += w90p910_ts.o
obj-$(CONFIG_TOUCHSCREEN_TPS6507X) += tps6507x-ts.o
obj-$(CONFIG_TOUCHSCREEN_RM31080A) += rm31080a_ts.o rm31080a_ctrl.o
+obj-$(CONFIG_TOUCHSCREEN_MAX1187X) += max1187x.o
obj-$(CONFIG_TOUCHSCREEN_SYN_RMI4_SPI) += rmi4/
obj-$(CONFIG_TOUCHSCREEN_MAXIM_STI) += maxim_sti.o
+obj-$(CONFIG_TOUCHSCREEN_SYNAPTICS_DSX) += synaptics_dsx/
+obj-$(CONFIG_CYPRESS_SAR) += cy8c_sar.o
diff --git a/drivers/input/touchscreen/cy8c_sar.c b/drivers/input/touchscreen/cy8c_sar.c
new file mode 100755
index 0000000..3a3f7ba
--- /dev/null
+++ b/drivers/input/touchscreen/cy8c_sar.c
@@ -0,0 +1,901 @@
+/* drivers/input/touchscreen/cy8c_sar.c
+ *
+ * Copyright (C) 2011 HTC Corporation.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+#include <linux/input/cy8c_sar.h>
+#include <linux/delay.h>
+#include <linux/hrtimer.h>
+#include <linux/i2c.h>
+#include <linux/input.h>
+#include <linux/interrupt.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/gpio.h>
+#include <linux/workqueue.h>
+#include <linux/wakelock.h>
+#include <linux/irq.h>
+#include <linux/pm.h>
+#include <linux/firmware.h>
+#include <linux/ctype.h>
+#define SCNx8 "hhx"
+
+static LIST_HEAD(sar_list);
+static DEFINE_SPINLOCK(sar_list_lock);
+static int cy8c_check_sensor(struct cy8c_sar_data *sar);
+static void sar_event_handler(struct work_struct *work);
+static DECLARE_WORK(notifier_work, sar_event_handler);
+BLOCKING_NOTIFIER_HEAD(sar_notifier_list);
+
+/**
+ * register_notifier_by_sar - Register a hook for the Wifi core(s) to
+ * decrease/recover their transmission power
+ * @nb: Hook to be registered
+ *
+ * register_notifier_by_sar is used to notify the Wifi core(s) about
+ * SAR sensor state changes. Returns zero, as
+ * blocking_notifier_chain_register always does.
+ */
+int register_notifier_by_sar(struct notifier_block *nb)
+{
+ int ret = blocking_notifier_chain_register(&sar_notifier_list, nb);
+ schedule_work(&notifier_work);
+ return ret;
+}
+
+/**
+ * unregister_notifier_by_sar - Unregister a previously registered SAR notifier
+ * @nb: Hook to be unregistered
+ *
+ * Removes a notifier previously registered with register_notifier_by_sar.
+ * Returns zero, as blocking_notifier_chain_unregister always does.
+ */
+int unregister_notifier_by_sar(struct notifier_block *nb)
+{
+ return blocking_notifier_chain_unregister(&sar_notifier_list, nb);
+}
+
+static int i2c_cy8c_read(
+ struct i2c_client *client, uint8_t addr, uint8_t *data, uint8_t length)
+{
+ int retry, ret;
+
+ for (retry = 0; retry < CY8C_I2C_RETRY_TIMES; retry++) {
+ ret = i2c_smbus_read_i2c_block_data(client, addr, length, data);
+ if (ret > 0)
+ return ret;
+ mdelay(10);
+ }
+
+ pr_debug("[SAR] i2c_read_block retry over %d\n", CY8C_I2C_RETRY_TIMES);
+ return -EIO;
+}
+
+static int i2c_cy8c_write(
+ struct i2c_client *client, uint8_t addr, uint8_t *data, uint8_t length)
+{
+ int retry, ret;
+
+ for (retry = 0; retry < CY8C_I2C_RETRY_TIMES; retry++) {
+ ret = i2c_smbus_write_i2c_block_data(
+ client, addr, length, data);
+ if (ret == 0)
+ return ret;
+ msleep(10);
+ }
+
+ pr_debug("[SAR] i2c_write_block retry over %d\n", CY8C_I2C_RETRY_TIMES);
+ return -EIO;
+
+}
+
+static int i2c_cy8c_write_byte_data(
+ struct i2c_client *client, uint8_t addr, uint8_t value)
+{
+ return i2c_cy8c_write(client, addr, &value, 1);
+}
+
+static ssize_t reset(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct cy8c_sar_data *sar = dev_get_drvdata(dev);
+ struct cy8c_i2c_sar_platform_data *pdata;
+
+ pdata = sar->client->dev.platform_data;
+ pr_debug("[SAR] reset\n");
+ pdata->reset();
+ sar->sleep_mode = KEEP_AWAKE;
+ return scnprintf(buf, PAGE_SIZE, "Reset chip");
+}
+static DEVICE_ATTR(reset, S_IRUGO, reset, NULL);
+
+static ssize_t sar_vendor_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ char data[2] = {0};
+ int ret;
+ struct cy8c_sar_data *sar = dev_get_drvdata(dev);
+
+ ret = i2c_cy8c_read(sar->client, CS_FW_VERSION, data, sizeof(data));
+ if (ret < 0) {
+ pr_err("[SAR] i2c Read version Err\n");
+ return ret;
+ }
+
+ return scnprintf(buf, PAGE_SIZE, "%s_V%x", CYPRESS_SAR_NAME, data[0]);
+}
+static DEVICE_ATTR(vendor, S_IRUGO, sar_vendor_show, NULL);
+
+static ssize_t cy8c_sar_gpio_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ int ret;
+ struct cy8c_sar_data *sar = dev_get_drvdata(dev);
+ struct cy8c_i2c_sar_platform_data *pdata;
+
+ pdata = sar->client->dev.platform_data;
+
+ ret = gpio_get_value(pdata->gpio_irq);
+ pr_debug("[SAR] %d", pdata->gpio_irq);
+ return scnprintf(buf, PAGE_SIZE, "%d", ret);
+}
+static DEVICE_ATTR(gpio, S_IRUGO, cy8c_sar_gpio_show, NULL);
+
+static ssize_t sleep_store(
+ struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ unsigned int data;
+ struct cy8c_sar_data *sar = dev_get_drvdata(dev);
+ unsigned long spinlock_flags;
+
+ sscanf(buf, "%x", &data);
+ pr_debug("[SAR] %s +++", __func__);
+
+ if (data != KEEP_AWAKE && data != DEEP_SLEEP)
+ return count;
+ cancel_delayed_work_sync(&sar->sleep_work);
+
+ pr_debug("[SAR] %s: current mode = %d, new mode = %d\n",
+ __func__, sar->sleep_mode, data);
+ spin_lock_irqsave(&sar->spin_lock, spinlock_flags);
+ sar->radio_state = data;
+ queue_delayed_work(sar->cy8c_wq, &sar->sleep_work,
+ (sar->radio_state == KEEP_AWAKE) ? WAKEUP_DELAY : 0);
+
+ spin_unlock_irqrestore(&sar->spin_lock, spinlock_flags);
+ pr_debug("[SAR] %s ---", __func__);
+ return count;
+}
+
+static ssize_t sleep_show(
+ struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct cy8c_sar_data *sar = dev_get_drvdata(dev);
+ pr_debug("[SAR] %s: sleep_mode = %d\n", __func__, sar->sleep_mode);
+ return scnprintf(buf, PAGE_SIZE, "%d", sar->sleep_mode);
+}
+static DEVICE_ATTR(sleep, (S_IWUSR|S_IRUGO), sleep_show, sleep_store);
+
+static struct attribute *sar_attrs[] = {
+ &dev_attr_reset.attr,
+ &dev_attr_vendor.attr,
+ &dev_attr_gpio.attr,
+ &dev_attr_sleep.attr,
+ NULL,
+};
+
+static struct attribute_group sar_attr_group = {
+ .attrs = sar_attrs,
+};
+
+static int check_fw_version(char *buf, int len,
+ struct cy8c_sar_data *sar, int *tag_len)
+{
+ uint8_t ver = 0;
+ uint8_t readfwver = 0;
+ int taglen, ret;
+
+ *tag_len = taglen = strcspn(buf, "\n");
+ if (taglen >= 3)
+ sscanf(buf + taglen - 3, "%2hhx", &readfwver);
+ ret = i2c_cy8c_read(sar->client, CS_FW_VERSION, &ver, 1);
+ pr_info("[SAR] from FIRMWARE file = %x, Chip FW ver: %x\n"
+ , readfwver, ver);
+ if (ver != readfwver) {
+ pr_info("[SAR] %s : chip fw version need to update\n"
+ , __func__);
+ return 1;
+ } else {
+ pr_debug("[SAR] %s : bypass the fw update process\n"
+ , __func__);
+ return 0;
+ }
+}
+
+static int i2c_cy8c_lock_write(struct i2c_client *client, uint8_t *data, uint8_t length)
+{
+ int retry;
+
+ struct i2c_msg msg[] = {
+ {
+ .addr = client->addr,
+ .flags = 0,
+ .len = length,
+ .buf = data,
+ }
+ };
+
+ for (retry = 0; retry < CY8C_I2C_RETRY_TIMES; retry++) {
+ if (__i2c_transfer(client->adapter, msg, 1) == 1)
+ break;
+ mdelay(10);
+ }
+
+ if (retry == CY8C_I2C_RETRY_TIMES) {
+ printk(KERN_ERR "[SAR][ERR] i2c_write_block retry over %d\n",
+ CY8C_I2C_RETRY_TIMES);
+ return -EIO;
+ }
+ return 0;
+}
+
+static int i2c_tx_bytes(struct cy8c_sar_data *sar, u16 len, u8 *buf)
+{
+ int ret;
+ uint8_t retry;
+ struct i2c_client *client = sar->client;
+ struct cy8c_i2c_sar_platform_data *pdata = client->dev.platform_data;
+ uint16_t position_id = pdata->position_id;
+
+ for (retry = 0; retry < CY8C_I2C_RETRY_TIMES; retry++) {
+ ret = i2c_master_send(client, (char *)buf, (int)len);
+ if (ret > 0) {
+ if (ret == len)
+ return ret;
+ ret = -EIO;
+ break;
+ }
+ mdelay(10);
+ }
+
+ pr_err("[SAR%d][ERR] I2C TX fail (%d)\n", position_id, ret);
+ return ret;
+}
+
+static int i2c_rx_bytes(struct cy8c_sar_data *sar, u16 len, u8 *buf)
+{
+ int ret;
+ uint8_t retry;
+ struct i2c_client *client = sar->client;
+ struct cy8c_i2c_sar_platform_data *pdata = client->dev.platform_data;
+ uint16_t position_id = pdata->position_id;
+
+ for (retry = 0; retry < CY8C_I2C_RETRY_TIMES; retry++) {
+ ret = i2c_master_recv(client, (char *)buf, (int)len);
+ if (ret > 0) {
+ if (ret == len)
+ return ret;
+ ret = -EIO;
+ break;
+ }
+ mdelay(10);
+ }
+
+ pr_err("[SAR%d][ERR] I2C RX fail (%d)\n", position_id, ret);
+ return ret;
+}
+
+static void sar_event_handler(struct work_struct *work)
+{
+ struct cy8c_sar_data *sar;
+ int active = SAR_MISSING;
+ unsigned long spinlock_flags;
+ int (*fppowerdown)(int activate) = NULL;
+
+ pr_debug("[SAR] %s: enter\n", __func__);
+ spin_lock_irqsave(&sar_list_lock, spinlock_flags);
+ list_for_each_entry(sar, &sar_list, list) {
+ struct cy8c_i2c_sar_platform_data *pdata;
+
+ pdata = sar->client->dev.platform_data;
+ active &= ~SAR_MISSING;
+ if (sar->is_activated)
+ active |= 1 << pdata->position_id;
+ if (sar->dysfunctional)
+ active |= SAR_DYSFUNCTIONAL << pdata->position_id;
+ if (fppowerdown != pdata->powerdown)
+ fppowerdown = pdata->powerdown;
+ }
+ spin_unlock_irqrestore(&sar_list_lock, spinlock_flags);
+ if (fppowerdown)
+ fppowerdown(active & 0x03);
+ pr_info("[SAR] active=%x\n", active);
+ blocking_notifier_call_chain(&sar_notifier_list, active, NULL);
+}
+
+static int smart_blmode_check(struct cy8c_sar_data *sar)
+{
+ int ret;
+ uint8_t i2cbuf[3] = {0};
+ struct i2c_client *client = sar->client;
+ struct cy8c_i2c_sar_platform_data *pdata = client->dev.platform_data;
+ uint16_t position_id = pdata->position_id;
+
+ ret = i2c_tx_bytes(sar, 2, i2cbuf);
+ if (ret != 2) {
+ pr_err("[SAR%d][ERR] Shift i2c_write ERROR!!\n", position_id);
+ return -1;
+ }
+ ret = i2c_rx_bytes(sar, 3, i2cbuf);/*Confirm in BL mode*/
+ if (ret != 3)
+ pr_err("[SAR%d][ERR] i2c_BL_read ERROR!!\n", position_id);
+
+ switch (i2cbuf[BL_CODEADD]) {
+ case BL_RETMODE:
+ case BL_RETBL:
+ return 0;
+ case BL_BLIVE:
+ return (i2cbuf[BL_STATUSADD] == BL_BLMODE) ? 1 : -1;
+ default:
+ pr_err("[SAR%d][ERR] code = %x %x %x\n"
+ , position_id, i2cbuf[0], i2cbuf[1], i2cbuf[2]);
+ return -1;
+ }
+ return -1;
+}
+
+static int update_firmware(char *buf, int len, struct cy8c_sar_data *sar)
+{
+ int ret, i, j, cnt_blk = 0;
+ uint8_t *sarfw, wbuf[18] = {0};
+ uint8_t i2cbuf[3] = {0};
+ int taglen;
+ struct i2c_client *client = sar->client;
+ struct cy8c_i2c_sar_platform_data *pdata = client->dev.platform_data;
+ uint16_t position_id = pdata->position_id;
+
+ pr_info("[SAR%d] %s, %d\n", position_id, __func__, len);
+ sarfw = kzalloc(len/2, GFP_KERNEL);
+ if (!sarfw)
+ return -ENOMEM;
+
+ disable_irq(sar->intr_irq);
+ ret = check_fw_version(buf, len, sar, &taglen);
+ if (ret) {
+ for (i = j = 0; i < len - 1; ++i) {
+ char buffer[3];
+ memcpy(buffer, buf + taglen + i, 2);
+ if (!isxdigit(buffer[0]))
+ continue;
+ buffer[2] = '\0';
+ if (sscanf(buffer, "%2hhx", sarfw + j) == 1) {
+ ++j;
+ ++i;
+ }
+ }
+ j = 0;
+ /* wait chip ready and set BL command */
+ msleep(100);
+ i2cbuf[0] = CS_FW_BLADD;
+ i2cbuf[1] = 1;
+ client->addr = pdata->ap_addr;
+ ret = i2c_tx_bytes(sar, 2, i2cbuf);
+ if (ret != 2)
+ pr_info("[SAR%d] G2B command fail\n", position_id);
+
+ client->addr = pdata->bl_addr;
+ pr_debug("[SAR%d] @set BL mode addr:0x%x\n", position_id
+ , pdata->bl_addr);
+ /* wait chip into BL mode. */
+ msleep(300);
+ memcpy(wbuf+2, sarfw, 10);
+ ret = i2c_tx_bytes(sar, 12, wbuf);/*1st Block*/
+ if (ret != 12) {
+ pr_err("[SAR%d][ERR] 1st i2c_write ERROR!!\n"
+ , position_id);
+ goto error_fw_fail;
+ }
+
+ memset(wbuf, 0, sizeof(wbuf));
+ j += 10;
+
+ /* wait chip move buf data to RAM for 1ST FW block */
+ msleep(100);
+ ret = smart_blmode_check(sar);
+ if (ret < 0) {
+ pr_err("[SAR%d][ERR] 1st Blk BL Check Err\n"
+ , position_id);
+ goto error_fw_fail;
+ }
+ pr_debug("[SAR%d] check 1st BL mode Done!!\n", position_id);
+ while (1) {
+ if (sarfw[j+1] == 0x39) {
+ cnt_blk++;
+ msleep(10);
+ i2c_lock_adapter(client->adapter);
+
+ for (i = 0; i < 4; i++) {
+ wbuf[0] = 0;
+ wbuf[1] = 0x10*i;
+ memcpy(wbuf+2, sarfw+j, 16);
+ /* Write Blocks */
+ ret = i2c_cy8c_lock_write(client, wbuf, 18);
+ if (ret != 0) {
+ pr_err("[SAR%d][ERR] i2c_write ERROR!! Data Block = %d\n"
+ , position_id, cnt_blk);
+ goto error_fw_fail;
+ }
+ memset(wbuf, 0, sizeof(wbuf));
+ msleep(10);
+ j += 16;
+ }
+ msleep(10);
+
+ wbuf[1] = 0x40;
+ memcpy(wbuf+2, sarfw+j, 14);
+ ret = i2c_cy8c_lock_write(client, wbuf, 16);
+
+ if (ret != 0) {
+ pr_err("[SAR%d][ERR] i2c_write ERROR!! Data Block = %d\n"
+ , position_id, cnt_blk);
+ goto error_fw_fail;
+ }
+ memset(wbuf, 0, sizeof(wbuf));
+ j += 14;
+
+ /*
+ * wait chip move buf to RAM for
+ * each data block.
+ */
+ msleep(100);
+ i2c_unlock_adapter(client->adapter);
+ ret = smart_blmode_check(sar);/*confirm Bl*/
+ if (ret < 0) {
+ pr_err("[SAR%d][ERR] Check BL Error Blk = %d\n"
+ , position_id, cnt_blk);
+ goto error_fw_fail;
+ }
+ } else if (sarfw[j+1] == 0x3B) {
+ msleep(10);
+ memcpy(wbuf+2, sarfw+j, 10);
+ ret = i2c_tx_bytes(sar, 12, wbuf);
+ if (ret != 12) {
+ pr_err("[SAR%d][ERR] i2c_write ERROR!! Last Block\n"
+ , position_id);
+ goto error_fw_fail;
+ }
+ memset(wbuf, 0, sizeof(wbuf));
+ j += 10;
+
+ /*
+ * write all block done,
+ * wait chip internal reset time.
+ */
+ msleep(200);
+ pr_info("[SAR%d] Firmware Update OK!\n"
+ , position_id);
+ break;
+ } else {
+ pr_err("[SAR%d][ERR] Smart sensor firmware update error!!\n"
+ , position_id);
+ break;
+ }
+ }
+
+ client->addr = pdata->ap_addr;
+ pr_debug("[SAR%d] Firmware Update OK and set the slave addr to %x\n"
+ , position_id, pdata->ap_addr);
+ }
+ kfree(sarfw);
+ enable_irq(sar->intr_irq);
+ return 0;
+
+error_fw_fail:
+ kfree(sarfw);
+ client->addr = pdata->ap_addr;
+ enable_irq(sar->intr_irq);
+ return -1;
+}
+
+static void cy8c_sar_fw_update_func(const struct firmware *fw, void *context)
+{
+ int error, i;
+ struct cy8c_sar_data *sar = (struct cy8c_sar_data *)context;
+ struct i2c_client *client = sar->client;
+ struct cy8c_i2c_sar_platform_data *pdata = client->dev.platform_data;
+
+ if (!fw) {
+ pr_err("[SAR] sar%d_CY8C.img not available\n"
+ , pdata->position_id);
+ return;
+ }
+ for (i = 0; i < 3; i++) {
+ pdata->reset();
+ error = update_firmware((char *)fw->data, fw->size, sar);
+ if (error < 0) {
+ pr_err("[SAR%d] %s: firmware update failed, error = %d\n"
+ , pdata->position_id, __func__, error);
+ if (i == 2) {
+ pr_err("[SAR%d] failed 3 times; marking SAR dysfunctional\n"
+ , pdata->position_id);
+ sar->dysfunctional = 1;
+ schedule_work(&notifier_work);
+ }
+ } else {
+ cy8c_check_sensor(sar);
+ sar->dysfunctional = 0;
+ break;
+ }
+ }
+ release_firmware(fw);
+}
+
+static int sar_fw_update(struct cy8c_sar_data *sar)
+{
+ int ret;
+ struct i2c_client *client = sar->client;
+ struct cy8c_i2c_sar_platform_data *pdata = client->dev.platform_data;
+
+ ret = request_firmware_nowait(THIS_MODULE, FW_ACTION_HOTPLUG
+ , pdata->position_id ? "sar1_CY8C.img" : "sar0_CY8C.img"
+ , &client->dev, GFP_KERNEL
+ , sar, cy8c_sar_fw_update_func);
+ return ret;
+}
+
+static int cy8c_check_sensor(struct cy8c_sar_data *sar)
+{
+ uint8_t ver = 0, chip = 0;
+ int ret;
+ struct i2c_client *client = sar->client;
+ struct cy8c_i2c_sar_platform_data *pdata = client->dev.platform_data;
+
+ pr_info("[SAR%d] %s\n", pdata->position_id, __func__);
+ client->addr = pdata->ap_addr;
+ ret = i2c_cy8c_read(client, CS_FW_CHIPID, &chip, 1);
+ if (ret < 0) {
+ pr_info("[SAR%d] Retrieve chip ID under BL\n"
+ , pdata->position_id);
+ client->addr = pdata->bl_addr;
+ ret = i2c_cy8c_read(client, CS_FW_CHIPID, &chip, 1);
+ client->addr = pdata->ap_addr;
+ if (ret < 0)
+ goto err_chip_found;
+ else
+ goto err_fw_get_fail;
+ }
+
+ ret = i2c_cy8c_read(client, CS_FW_VERSION, &ver, 1);
+ if (ret < 0) {
+ pr_err("[SAR%d][ERR] Ver Read Err\n", pdata->position_id);
+ goto err_fw_get_fail;
+ } else
+ sar->id.version = ver;
+
+ if (chip == CS_CHIPID) {
+ sar->id.chipid = chip;
+ pr_info("[SAR%d] CY8C_SAR_V%x\n"
+ , pdata->position_id, sar->id.version);
+ } else
+ pr_info("[SAR%d] CY8C_Cap_V%x\n"
+ , pdata->position_id, sar->id.version);
+ return 0;
+
+err_fw_get_fail:
+ pr_info("[SAR%d] Block in BL mode need re-flash FW.\n"
+ , pdata->position_id);
+ return -2;
+err_chip_found:
+ pr_err("[SAR%d][ERR] Chip not found\n", pdata->position_id);
+ return -1;
+}
+
+static int sar_update_mode(int radio, int pm)
+{
+ pr_debug("[SAR] %s: radio=%d, pm=%d\n", __func__, radio, pm);
+ if (radio == KEEP_AWAKE && pm == KEEP_AWAKE)
+ return KEEP_AWAKE;
+ else
+ return DEEP_SLEEP;
+}
+
+static void sar_sleep_func(struct work_struct *work)
+{
+ struct cy8c_sar_data *sar = container_of(
+ work, struct cy8c_sar_data, sleep_work.work);
+ int mode, err;
+ struct i2c_client *client = sar->client;
+ struct cy8c_i2c_sar_platform_data *pdata = client->dev.platform_data;
+
+ pr_debug("[SAR] %s\n", __func__);
+
+ mode = sar_update_mode(sar->radio_state, sar->pm_state);
+ if (mode == sar->sleep_mode || sar->dysfunctional) {
+ pr_debug("[SAR] sleep mode no change.\n");
+ return;
+ }
+ switch (mode) {
+ case KEEP_AWAKE:
+ pdata->reset();
+ enable_irq(sar->intr_irq);
+ break;
+ case DEEP_SLEEP:
+ disable_irq(sar->intr_irq);
+ err = i2c_cy8c_write_byte_data(client, CS_MODE, CS_CMD_DSLEEP);
+ if (err < 0) {
+ pr_err("[SAR] %s: I2C write fail. reg %d, err %d\n"
+ , __func__, CS_CMD_DSLEEP, err);
+ return;
+ }
+ break;
+ default:
+ pr_info("[SAR] Unknown sleep mode\n");
+ return;
+ }
+ sar->sleep_mode = mode;
+ pr_debug("[SAR] Set SAR sleep mode = %d\n", sar->sleep_mode);
+}
+
+static int sysfs_create(struct cy8c_sar_data *sar)
+{
+ int ret;
+ struct cy8c_i2c_sar_platform_data *pdata
+ = sar->client->dev.platform_data;
+ uint16_t position_id = pdata->position_id;
+
+ pr_debug("[SAR] %s, position=%d\n", __func__, position_id);
+ if (position_id == 0) {
+ sar->sar_class = class_create(THIS_MODULE, "cap_sense");
+ if (IS_ERR(sar->sar_class)) {
+ pr_info("[SAR] %s, position=%d, create class fail1\n"
+ , __func__, position_id);
+ ret = PTR_ERR(sar->sar_class);
+ sar->sar_class = NULL;
+ return -ENXIO;
+ }
+ sar->sar_dev = device_create(sar->sar_class, NULL, 0,
+ sar, "sar");
+ if (unlikely(IS_ERR(sar->sar_dev))) {
+ pr_info("[SAR] %s, position=%d, create class fail2\n"
+ , __func__, position_id);
+ ret = PTR_ERR(sar->sar_dev);
+ sar->sar_dev = NULL;
+ return -ENOMEM;
+ }
+ } else if (position_id == 1) {
+ sar->sar_class = class_create(THIS_MODULE, "cap_sense1");
+ if (IS_ERR(sar->sar_class)) {
+ pr_info("[SAR] %s, position=%d, create class fail3\n"
+ , __func__, position_id);
+ ret = PTR_ERR(sar->sar_class);
+ sar->sar_class = NULL;
+ return -ENXIO;
+ }
+ sar->sar_dev = device_create(sar->sar_class, NULL, 0,
+ sar, "sar1");
+ if (unlikely(IS_ERR(sar->sar_dev))) {
+ pr_info("[SAR] %s, position=%d, create class fail4\n"
+ , __func__, position_id);
+ ret = PTR_ERR(sar->sar_dev);
+ sar->sar_dev = NULL;
+ return -ENOMEM;
+ }
+ }
+
+ ret = sysfs_create_group(&sar->sar_dev->kobj, &sar_attr_group);
+ if (ret) {
+ dev_err(sar->sar_dev,
+ "Unable to create sar attr, error: %d\n", ret);
+ return -EEXIST;
+ }
+
+ return 0;
+}
+
+static irqreturn_t cy8c_sar_irq_handler(int irq, void *dev_id)
+{
+ struct cy8c_sar_data *sar = dev_id;
+ struct i2c_client *client = sar->client;
+ struct cy8c_i2c_sar_platform_data *pdata = client->dev.platform_data;
+ int ret;
+ uint8_t buf[1] = {0};
+
+ pr_debug("[SAR%d] %s: enter\n",
+ pdata->position_id, __func__);
+ ret = i2c_cy8c_read(client, CS_STATUS, buf, 1);
+ if (ret < 0)
+ pr_err("[SAR] %s: CS_STATUS read failed (%d)\n", __func__, ret);
+ sar->is_activated = !!buf[0];
+ pr_debug("[SAR] CS_STATUS:0x%x, is_activated = %d\n",
+ buf[0], sar->is_activated);
+ schedule_work(&notifier_work);
+ return IRQ_HANDLED;
+}
+
+static int cy8c_sar_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ struct cy8c_sar_data *sar;
+ struct cy8c_i2c_sar_platform_data *pdata;
+ int ret;
+ unsigned long spinlock_flags;
+
+ pr_debug("[SAR] %s: enter\n", __func__);
+
+ if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C)) {
+ pr_err("[SAR][ERR] need I2C_FUNC_I2C\n");
+ ret = -ENODEV;
+ goto err_check_functionality_failed;
+ }
+
+ sar = kzalloc(sizeof(struct cy8c_sar_data), GFP_KERNEL);
+ if (sar == NULL) {
+ pr_err("[SAR][ERR] allocate cy8c_sar_data failed\n");
+ ret = -ENOMEM;
+ goto err_alloc_data_failed;
+ }
+
+ sar->client = client;
+ i2c_set_clientdata(client, sar);
+ pdata = client->dev.platform_data;
+
+ if (!pdata) {
+ pr_err("[SAR][ERR] platform data is missing\n");
+ ret = -EINVAL;
+ goto err_init_sensor_failed;
+ }
+ pdata->reset();
+
+ sar->intr_irq = gpio_to_irq(pdata->gpio_irq);
+
+ ret = cy8c_check_sensor(sar);
+ if (ret == -1)
+ goto err_init_sensor_failed;
+
+ sar->sleep_mode = sar->radio_state = sar->pm_state = KEEP_AWAKE;
+ sar->is_activated = 0;
+ sar->polarity = 0; /* 0: low active */
+
+ spin_lock_irqsave(&sar_list_lock, spinlock_flags);
+ list_add(&sar->list, &sar_list);
+ spin_unlock_irqrestore(&sar_list_lock, spinlock_flags);
+
+ sar->cy8c_wq = create_singlethread_workqueue("cypress_sar");
+ if (!sar->cy8c_wq) {
+ ret = -ENOMEM;
+ pr_err("[SAR][ERR] create_singlethread_workqueue cy8c_wq fail\n");
+ goto err_create_wq_failed;
+ }
+ INIT_DELAYED_WORK(&sar->sleep_work, sar_sleep_func);
+
+ ret = sysfs_create(sar);
+ if (ret == -EEXIST)
+ goto err_create_device_file;
+ else if (ret == -ENOMEM)
+ goto err_create_device;
+ else if (ret == -ENXIO)
+ goto err_create_class;
+
+ sar->use_irq = 1;
+ if (client->irq && sar->use_irq) {
+ ret = request_threaded_irq(sar->intr_irq, NULL
+ , cy8c_sar_irq_handler
+ , IRQF_TRIGGER_FALLING | IRQF_ONESHOT
+ , CYPRESS_SAR_NAME, sar);
+ if (ret < 0) {
+ dev_err(&client->dev,
+ "[SAR][ERR] request_irq failed for gpio %d, irq %d\n",
+ pdata->gpio_irq, client->irq);
+ goto err_request_irq;
+ }
+ }
+ ret = sar_fw_update(sar);
+ if (ret) {
+ pr_err("[SAR%d][ERR] Fail to create update work (%d).\n"
+ , pdata->position_id, ret);
+ if (client->irq)
+ free_irq(sar->intr_irq, sar);
+ goto err_request_irq;
+ }
+ return 0;
+
+err_request_irq:
+ sysfs_remove_group(&sar->sar_dev->kobj, &sar_attr_group);
+err_create_device_file:
+ device_unregister(sar->sar_dev);
+err_create_device:
+ class_destroy(sar->sar_class);
+err_create_class:
+ destroy_workqueue(sar->cy8c_wq);
+err_init_sensor_failed:
+err_create_wq_failed:
+ kfree(sar);
+
+err_alloc_data_failed:
+err_check_functionality_failed:
+ schedule_work(&notifier_work);
+ return ret;
+}
+
+static int cy8c_sar_remove(struct i2c_client *client)
+{
+ struct cy8c_sar_data *sar = i2c_get_clientdata(client);
+ unsigned long spinlock_flags;
+
+ disable_irq(client->irq);
+ spin_lock_irqsave(&sar_list_lock, spinlock_flags);
+ list_del(&sar->list);
+ spin_unlock_irqrestore(&sar_list_lock, spinlock_flags);
+ sysfs_remove_group(&sar->sar_dev->kobj, &sar_attr_group);
+ device_unregister(sar->sar_dev);
+ class_destroy(sar->sar_class);
+ destroy_workqueue(sar->cy8c_wq);
+ free_irq(client->irq, sar);
+
+ kfree(sar);
+
+ return 0;
+}
+
+static int cy8c_sar_suspend(struct device *dev)
+{
+ struct cy8c_sar_data *sar = dev_get_drvdata(dev);
+
+ pr_debug("[SAR] %s\n", __func__);
+ cancel_delayed_work_sync(&sar->sleep_work);
+ sar->pm_state = DEEP_SLEEP;
+ queue_delayed_work(sar->cy8c_wq, &sar->sleep_work, 0);
+ flush_delayed_work(&sar->sleep_work);
+ return 0;
+}
+
+static int cy8c_sar_resume(struct device *dev)
+{
+ struct cy8c_sar_data *sar = dev_get_drvdata(dev);
+
+ pr_debug("[SAR] %s\n", __func__);
+
+ cancel_delayed_work_sync(&sar->sleep_work);
+ sar->pm_state = KEEP_AWAKE;
+ queue_delayed_work(sar->cy8c_wq, &sar->sleep_work, WAKEUP_DELAY*2);
+ return 0;
+}
+
+static const struct i2c_device_id cy8c_sar_id[] = {
+ { CYPRESS_SAR_NAME, 0 },
+ { CYPRESS_SAR1_NAME, 0 },
+ { }
+};
+
+static const struct dev_pm_ops pm_ops = {
+ .suspend = cy8c_sar_suspend,
+ .resume = cy8c_sar_resume,
+};
+
+static struct i2c_driver cy8c_sar_driver = {
+ .probe = cy8c_sar_probe,
+ .remove = cy8c_sar_remove,
+ .id_table = cy8c_sar_id,
+ .driver = {
+ .name = CYPRESS_SAR_NAME,
+ .owner = THIS_MODULE,
+ .pm = &pm_ops,
+ },
+};
+
+static int __init cy8c_sar_init(void)
+{
+ pr_debug("[SAR] %s: enter\n", __func__);
+ return i2c_add_driver(&cy8c_sar_driver);
+}
+
+static void __exit cy8c_sar_exit(void)
+{
+ i2c_del_driver(&cy8c_sar_driver);
+}
+
+module_init(cy8c_sar_init);
+module_exit(cy8c_sar_exit);
+
+MODULE_DESCRIPTION("cy8c_sar driver");
+MODULE_LICENSE("GPL");
diff --git a/drivers/input/touchscreen/max1187x.c b/drivers/input/touchscreen/max1187x.c
new file mode 100644
index 0000000..9acb086
--- /dev/null
+++ b/drivers/input/touchscreen/max1187x.c
@@ -0,0 +1,3701 @@
+/* drivers/input/touchscreen/max1187x.c
+ *
+ * Copyright (c)2013 Maxim Integrated Products, Inc.
+ *
+ * Driver Version: 3.0.7.1
+ * Release Date: Mar 15, 2013
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/delay.h>
+#ifdef CONFIG_HAS_EARLYSUSPEND
+#include <linux/earlysuspend.h>
+#endif
+#include <linux/i2c.h>
+#include <linux/input.h>
+#include <linux/input/mt.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/kthread.h>
+#include <linux/firmware.h>
+#include <linux/crc16.h>
+#include <linux/types.h>
+#include <linux/gpio.h>
+#include <linux/errno.h>
+#include <linux/of.h>
+#include <linux/jiffies.h>
+#include <asm/byteorder.h>
+#include <linux/max1187x.h>
+
+#ifdef pr_fmt
+#undef pr_fmt
+#endif
+//#define pr_fmt(fmt) MAX1187X_LOG_NAME "(%s:%d): " fmt, __func__, __LINE__
+#define pr_fmt(fmt) MAX1187X_LOG_NAME ": " fmt
+
+#ifdef pr_info
+#undef pr_info
+#endif
+#define pr_info(fmt, ...) printk(KERN_INFO pr_fmt(fmt) "\n", ##__VA_ARGS__)
+
+#ifdef pr_err
+#undef pr_err
+#endif
+#define pr_err(fmt, ...) printk(KERN_ERR MAX1187X_LOG_NAME \
+ "TOUCH_ERR:(%s:%d): " fmt "\n", __func__, __LINE__, ##__VA_ARGS__)
+
+#define pr_dbg(a, b, ...) do { if (debug_mask & a) \
+ pr_info(b, ##__VA_ARGS__); \
+ } while (0)
+
+#define pr_info_if(a, b, ...) do { if ((debug_mask >> 16) & a) \
+ pr_info(b, ##__VA_ARGS__); \
+ } while (0)
+#define debugmask_if(a) ((debug_mask >> 16) & a)
+
+#define ENABLE_IRQ() \
+do { \
+ mutex_lock(&ts->irq_mutex); \
+ if (ts->irq_disabled) { \
+ enable_irq(ts->client->irq); \
+ ts->irq_disabled = 0; \
+ pr_info("ENABLE_IRQ()"); \
+ } \
+ mutex_unlock(&ts->irq_mutex); \
+} while (0)
+
+#define DISABLE_IRQ() \
+do { \
+ mutex_lock(&ts->irq_mutex); \
+ if (ts->irq_disabled == 0) { \
+ disable_irq(ts->client->irq); \
+ ts->irq_disabled = 1; \
+ pr_info("DISABLE_IRQ()"); \
+ } \
+ mutex_unlock(&ts->irq_mutex); \
+} while (0)
+
+#define NWORDS(a) (sizeof(a) / sizeof(u16))
+#define BYTE_SIZE(a) ((a) * sizeof(u16))
+#define BYTEH(a) ((a) >> 8)
+#define BYTEL(a) ((a) & 0xFF)
+
+#define PDATA(a) (ts->pdata->a)
+
+#define RETRY_TIMES 3
+#define SHIFT_BITS 10
+
+static u32 debug_mask = 0x00080000;
+static struct kobject *android_touch_kobj;
+static struct data *gl_ts;
+
+#ifdef MAX1187X_LOCAL_PDATA
+struct max1187x_pdata local_pdata = { };
+#endif
+
+/* tanlist - array containing tan(i)*(2^16-1) for i=[0,45], i in degrees */
+u16 tanlist[] = {0, 1144, 2289, 3435, 4583, 5734,
+ 6888, 8047, 9210, 10380, 11556, 12739,
+ 13930, 15130, 16340, 17560, 18792, 20036,
+ 21294, 22566, 23853, 25157, 26478, 27818,
+ 29178, 30559, 31964, 33392, 34846, 36327,
+ 37837, 39377, 40951, 42559, 44204, 45888,
+ 47614, 49384, 51202, 53069, 54990, 56969,
+ 59008, 61112, 63286, 65535};
+
+/* config num - touch, calib, private, lookup, image
+ p7 config num, p8 config num */
+u16 config_num[2][5] = {{42, 50, 23, 8, 1},
+ {65, 74, 34, 8, 0}};
+
+struct report_reader {
+ u16 report_id;
+ u16 reports_passed;
+ struct semaphore sem;
+ int status;
+};
+
+struct report_point {
+ u8 state;
+ int x;
+ int y;
+ int z;
+ int w;
+};
+
+struct data {
+ struct max1187x_pdata *pdata;
+ struct max1187x_board_config *fw_config;
+ struct i2c_client *client;
+ struct input_dev *input_dev;
+#ifdef CONFIG_HAS_EARLYSUSPEND
+ struct early_suspend early_suspend;
+#endif
+ u8 early_suspend_registered;
+ atomic_t scheduled_work_irq;
+ u32 irq_receive_time;
+ struct mutex irq_mutex;
+ struct mutex i2c_mutex;
+ struct mutex report_mutex;
+ struct semaphore report_sem;
+ struct report_reader report_readers[MAX_REPORT_READERS];
+ u8 irq_disabled;
+ u8 report_readers_outstanding;
+ u16 rx_report[1000]; /* with header */
+ u16 rx_report_len;
+ u16 rx_packet[MAX_WORDS_REPORT + 1]; /* with header */
+ u32 irq_count;
+ u16 framecounter;
+ u8 got_report;
+ int fw_index;
+ u16 fw_crc16;
+ u16 fw_version[MAX_WORDS_REPORT];
+ u16 touch_config[MAX_WORDS_COMMAND_ALL];
+ char phys[32];
+ u8 fw_responsive;
+ u8 have_fw;
+ u8 have_touchcfg;
+ u8 sysfs_created;
+ u8 is_raw_mode;
+ char debug_string[DEBUG_STRING_LEN_MAX];
+ u16 max11871_Touch_Configuration_Data[MAX1187X_TOUCH_CONFIG_MAX+2];
+ u16 max11871_Calibration_Table_Data[MAX1187X_CALIB_TABLE_MAX+2];
+ u16 max11871_Private_Configuration_Data[MAX1187X_PRIVATE_CONFIG_MAX+2];
+ u16 max11871_Lookup_Table_X_Data[MAX1187X_LOOKUP_TABLE_MAX+3];
+ u16 max11871_Lookup_Table_Y_Data[MAX1187X_LOOKUP_TABLE_MAX+3];
+ u16 max11871_Image_Factor_Table[MAX1187X_IMAGE_FACTOR_MAX];
+ u8 config_protocol;
+ struct report_point report_points[10];
+ u32 width_factor;
+ u32 height_factor;
+ u32 width_offset;
+ u32 height_offset;
+ u8 noise_level;
+ char fw_ver[10];
+ u8 protocol_ver;
+ u16 vendor_pin;
+ u8 baseline_mode;
+ u16 frame_rate[2];
+ u16 x_channel;
+ u16 y_channel;
+ int16_t report[1000];
+ u8 vk_press;
+ u8 finger_press;
+ u8 finger_log;
+ struct max1187x_virtual_key *button_data;
+ u16 button0:1;
+ u16 button1:1;
+ u16 button2:1;
+ u16 button3:1;
+ u16 cycles:1;
+};
+
+#ifdef CONFIG_HAS_EARLYSUSPEND
+static void early_suspend(struct early_suspend *h);
+static void late_resume(struct early_suspend *h);
+#endif
+static int device_init(struct i2c_client *client);
+static int device_deinit(struct i2c_client *client);
+
+static int bootloader_enter(struct data *ts);
+static int bootloader_exit(struct data *ts);
+static int bootloader_get_crc(struct data *ts, u16 *crc16,
+ u16 addr, u16 len, u16 delay);
+static int bootloader_set_byte_mode(struct data *ts);
+static int bootloader_erase_flash(struct data *ts);
+static int bootloader_write_flash(struct data *ts, const u8 *image, u16 length);
+
+static int set_touch_frame(struct i2c_client *client,
+ u16 idle_frame, u16 active_frame);
+static int set_baseline_mode(struct i2c_client *client, u16 mode);
+static int change_touch_rpt(struct i2c_client *client, u16 to);
+static int sreset(struct i2c_client *client);
+static int get_touch_config(struct i2c_client *client);
+static int get_fw_version(struct i2c_client *client);
+static void propagate_report(struct data *ts, int status, u16 *report);
+static int get_report(struct data *ts, u16 report_id, ulong timeout);
+static void release_report(struct data *ts);
+
+#if 0
+static u16 binary_search(const u16 *array, u16 len, u16 val);
+static s16 max1187x_orientation(s16 x, s16 y);
+static u16 max1187x_sqrt(u32 num);
+#endif
+
+static u8 bootloader;
+static u8 init_state;
+
+static char *keycode_check(int keycode)
+{
+ switch (keycode) {
+ case KEY_HOME:
+ return "HOME";
+ case KEY_BACK:
+ return "BACK";
+ case KEY_MENU:
+ return "MENU";
+ case KEY_SEARCH:
+ return "SEARCH";
+ default:
+ return "RESERVED";
+ }
+}
+
+/* I2C communication */
+/* debug_mask |= 0x10000 for I2C RX communication */
+static int i2c_rx_bytes(struct data *ts, u8 *buf, u16 len)
+{
+ int i, ret, written;
+
+ do {
+ ret = i2c_master_recv(ts->client, (char *) buf, (int) len);
+ } while (ret == -EAGAIN);
+ if (ret < 0) {
+ pr_err("I2C RX fail (%d)", ret);
+ return ret;
+ }
+
+ len = ret;
+
+ if (debugmask_if(1)) {
+ pr_info("I2C RX (%d):", len);
+ written = 0;
+ for (i = 0; i < len; i++) {
+ written += snprintf(ts->debug_string + written, 6,
+ "0x%02X,", buf[i]);
+ if (written + 6 >= DEBUG_STRING_LEN_MAX) {
+ pr_info("%s", ts->debug_string);
+ written = 0;
+ }
+ }
+ if (written > 0)
+ pr_info("%s", ts->debug_string);
+ }
+
+ return len;
+}
+
+static int i2c_rx_words(struct data *ts, u16 *buf, u16 len)
+{
+ int i, ret, written;
+
+ do {
+ ret = i2c_master_recv(ts->client,
+ (char *) buf, (int) (len * 2));
+ } while (ret == -EAGAIN);
+ if (ret < 0) {
+ pr_err("I2C RX fail (%d)", ret);
+ return ret;
+ }
+
+ if ((ret % 2) != 0) {
+ pr_err("I2C words RX fail: odd number of bytes (%d)", ret);
+ return -EIO;
+ }
+
+ len = ret/2;
+
+#ifdef __BIG_ENDIAN
+ for (i = 0; i < len; i++)
+ buf[i] = (buf[i] << 8) | (buf[i] >> 8);
+#endif
+ if (debugmask_if(1)) {
+ pr_info("I2C RX (%d):", len);
+ written = 0;
+ for (i = 0; i < len; i++) {
+ written += snprintf(ts->debug_string + written,
+ 8, "0x%04X,", buf[i]);
+ if (written + 8 >= DEBUG_STRING_LEN_MAX) {
+ pr_info("%s", ts->debug_string);
+ written = 0;
+ }
+ }
+ if (written > 0)
+ pr_info("%s", ts->debug_string);
+ }
+
+ return len;
+}
+
+/* debug_mask |= 0x20000 for I2C TX communication */
+static int i2c_tx_bytes(struct data *ts, u8 *buf, u16 len)
+{
+ int i, ret, written;
+
+ do {
+ ret = i2c_master_send(ts->client, (char *) buf, (int) len);
+ } while (ret == -EAGAIN);
+ if (ret < 0) {
+ pr_err("I2C TX fail (%d)", ret);
+ return ret;
+ }
+
+ len = ret;
+
+ if (debugmask_if(2)) {
+ pr_info("I2C TX (%d):", len);
+ written = 0;
+ for (i = 0; i < len; i++) {
+ written += snprintf(ts->debug_string + written, 6,
+ "0x%02X,", buf[i]);
+ if (written + 6 >= DEBUG_STRING_LEN_MAX) {
+ pr_info("%s", ts->debug_string);
+ written = 0;
+ }
+ }
+ if (written > 0)
+ pr_info("%s", ts->debug_string);
+ }
+
+ return len;
+}
+
+static int i2c_tx_words(struct data *ts, u16 *buf, u16 len)
+{
+ int i, ret, written;
+
+#ifdef __BIG_ENDIAN
+ for (i = 0; i < len; i++)
+ buf[i] = (buf[i] << 8) | (buf[i] >> 8);
+#endif
+ do {
+ ret = i2c_master_send(ts->client,
+ (char *) buf, (int) (len * 2));
+ } while (ret == -EAGAIN);
+ if (ret < 0) {
+ pr_err("I2C TX fail (%d)", ret);
+ return ret;
+ }
+ if ((ret % 2) != 0) {
+ pr_err("I2C words TX fail: odd number of bytes (%d)", ret);
+ return -EIO;
+ }
+
+ len = ret/2;
+
+ if (debugmask_if(2)) {
+ pr_info("I2C TX (%d):", len);
+ written = 0;
+ for (i = 0; i < len; i++) {
+ written += snprintf(ts->debug_string + written, 8,
+ "0x%04X,", buf[i]);
+ if (written + 8 >= DEBUG_STRING_LEN_MAX) {
+ pr_info("%s", ts->debug_string);
+ written = 0;
+ }
+ }
+ if (written > 0)
+ pr_info("%s", ts->debug_string);
+ }
+
+ return len;
+}
+
+/* Read report */
+static int read_mtp_report(struct data *ts, u16 *buf)
+{
+ int words = 1, words_tx, words_rx;
+ int ret = 0, remainder = 0, offset = 0, change_mode = 0;
+ u16 address = 0x000A;
+
+ mutex_lock(&ts->i2c_mutex);
+ /* read header, get size, read entire report */
+ words_tx = i2c_tx_words(ts, &address, 1);
+ if (words_tx != 1) {
+ mutex_unlock(&ts->i2c_mutex);
+ pr_err("Report RX fail: failed to set address");
+ return -EIO;
+ }
+
+ if (ts->is_raw_mode == 0) {
+ words_rx = i2c_rx_words(ts, buf, 2);
+ if (words_rx != 2 || BYTEL(buf[0]) > MAX_WORDS_REPORT) {
+ ret = -EIO;
+ pr_err("Report RX fail: received (%d) " \
+ "expected (%d) words, " \
+ "header (%04X)",
+ words_rx, words, buf[0]);
+ mutex_unlock(&ts->i2c_mutex);
+ return ret;
+ }
+ } else {
+ words_rx = i2c_rx_words(ts, buf,
+ (u16) PDATA(i2c_words));
+ if (words_rx != (u16) PDATA(i2c_words) || BYTEL(buf[0])
+ > MAX_WORDS_REPORT) {
+ ret = -EIO;
+ pr_err("Report RX fail: received (%d) " \
+ "expected (%d) words, header (%04X)",
+ words_rx, words, buf[0]);
+ mutex_unlock(&ts->i2c_mutex);
+ return ret;
+ }
+ }
+
+ if ((((BYTEH(buf[0])) & 0xF) == 0x1)
+ && buf[1] == 0x0800) {
+ if (ts->is_raw_mode == 0)
+ change_mode = 1;
+ ts->is_raw_mode = 1;
+ }
+
+ if ((((BYTEH(buf[0])) & 0xF) == 0x1)
+ && buf[1] != 0x0800) {
+ ts->is_raw_mode = 0;
+ }
+
+ if (ts->is_raw_mode == 0) {
+ words = BYTEL(buf[0]) + 1;
+ words_tx = i2c_tx_words(ts, &address, 1);
+ if (words_tx != 1) {
+ mutex_unlock(&ts->i2c_mutex);
+ pr_err("Report RX fail:" \
+ "failed to set address");
+ return -EIO;
+ }
+
+ words_rx = i2c_rx_words(ts, &buf[offset], words);
+ if (words_rx != words) {
+ mutex_unlock(&ts->i2c_mutex);
+ pr_err("Report RX fail 0x%X: received (%d) " \
+ "expected (%d) words",
+ address, words_rx, remainder);
+ return -EIO;
+
+ }
+ } else {
+ if (change_mode) {
+ words_rx = i2c_rx_words(ts, buf,
+ (u16) PDATA(i2c_words));
+ if (words_rx != (u16) PDATA(i2c_words) || BYTEL(buf[0])
+ > MAX_WORDS_REPORT) {
+ ret = -EIO;
+ pr_err("Report RX fail: received (%d) " \
+ "expected (%d) words, header (%04X)",
+ words_rx, words, buf[0]);
+ mutex_unlock(&ts->i2c_mutex);
+ return ret;
+ }
+ }
+
+ words = BYTEL(buf[0]) + 1;
+ remainder = words;
+
+ if (remainder - (u16) PDATA(i2c_words) > 0) {
+ remainder -= (u16) PDATA(i2c_words);
+ offset += (u16) PDATA(i2c_words);
+ address += (u16) PDATA(i2c_words);
+ }
+
+ words_tx = i2c_tx_words(ts, &address, 1);
+ if (words_tx != 1) {
+ mutex_unlock(&ts->i2c_mutex);
+ pr_err("Report RX fail: failed to set " \
+ "address 0x%X", address);
+ return -EIO;
+ }
+
+ words_rx = i2c_rx_words(ts, &buf[offset], remainder);
+ if (words_rx != remainder) {
+ mutex_unlock(&ts->i2c_mutex);
+ pr_err("Report RX fail 0x%X: received (%d) " \
+ "expected (%d) words",
+ address, words_rx, remainder);
+ return -EIO;
+ }
+ }
+
+ mutex_unlock(&ts->i2c_mutex);
+ return ret;
+}
+
+/* Send command */
+static int send_mtp_command(struct data *ts, u16 *buf, u16 len)
+{
+ u16 tx_buf[MAX_WORDS_COMMAND + 2]; /* with address and header */
+ u16 packets, words, words_tx;
+ int i, ret = 0;
+
+ /* check basics */
+ if (len < 2) {
+ pr_err("Command too short (%d); 2 words minimum", len);
+ return -EINVAL;
+ }
+ if ((buf[1] + 2) != len) {
+ pr_err("Inconsistent command length: " \
+ "expected (%d) given (%d)", (buf[1] + 2), len);
+ return -EINVAL;
+ }
+
+ if (len > MAX_WORDS_COMMAND_ALL) {
+ pr_err("Command too long (%d); maximum (%d) words",
+ len, MAX_WORDS_COMMAND_ALL);
+ return -EINVAL;
+ }
+
+ /* packetize and send */
+ packets = len / MAX_WORDS_COMMAND;
+ if (len % MAX_WORDS_COMMAND)
+ packets++;
+ tx_buf[0] = 0x0000;
+
+ mutex_lock(&ts->i2c_mutex);
+ for (i = 0; i < packets; i++) {
+ words = (i == (packets - 1)) ? len : MAX_WORDS_COMMAND;
+ tx_buf[1] = (packets << 12) | ((i + 1) << 8) | words;
+ memcpy(&tx_buf[2], &buf[i * MAX_WORDS_COMMAND],
+ BYTE_SIZE(words));
+ words_tx = i2c_tx_words(ts, tx_buf, words + 2);
+ if (words_tx != (words + 2)) {
+ ret = -EIO;
+ pr_err("Command TX fail: transmitted (%d) " \
+ "expected (%d) words, packet (%d)",
+ words_tx, words + 2, i);
+ }
+ len -= MAX_WORDS_COMMAND;
+ }
+ ts->got_report = 0;
+ mutex_unlock(&ts->i2c_mutex);
+
+ return ret;
+}
+
+/* Integer math operations */
+#if 0
+/* Returns index of element in array closest to val */
+static u16 binary_search(const u16 *array, u16 len, u16 val)
+{
+ s16 lt, rt, mid;
+ if (len < 2)
+ return 0;
+
+ lt = 0;
+ rt = len - 1;
+
+ while (lt <= rt) {
+ mid = (lt + rt)/2;
+ if (val == array[mid])
+ return mid;
+ if (val < array[mid])
+ rt = mid - 1;
+ else
+ lt = mid + 1;
+ }
+
+ if (lt >= len)
+ return len - 1;
+ if (rt < 0)
+ return 0;
+ if (array[lt] - val > val - array[lt-1])
+ return lt-1;
+ else
+ return lt;
+}
+
+/* Given values of x and y, it calculates the orientation
+ * with respect to y axis by calculating atan(x/y)
+ */
+static s16 max1187x_orientation(s16 x, s16 y)
+{
+ u16 sign = 0;
+ u16 len = sizeof(tanlist)/sizeof(tanlist[0]);
+ u32 quotient;
+ s16 angle;
+
+ if (x == y) {
+ angle = 45;
+ return angle;
+ }
+ if (x == 0) {
+ angle = 0;
+ return angle;
+ }
+ if (y == 0) {
+ if (x > 0)
+ angle = 90;
+ else
+ angle = -90;
+ return angle;
+ }
+
+ if (x < 0) {
+ sign = ~sign;
+ x = -x;
+ }
+ if (y < 0) {
+ sign = ~sign;
+ y = -y;
+ }
+
+ if (x == y)
+ angle = 45;
+ else if (x < y) {
+ quotient = ((u32)x << 16) - (u32)x;
+ quotient = quotient / y;
+ angle = binary_search(tanlist, len, quotient);
+ } else {
+ quotient = ((u32)y << 16) - (u32)y;
+ quotient = quotient / x;
+ angle = binary_search(tanlist, len, quotient);
+ angle = 90 - angle;
+ }
+ if (sign == 0)
+ return angle;
+ else
+ return -angle;
+}
+
+u16 max1187x_sqrt(u32 num)
+{
+ u16 mask = 0x8000;
+ u16 guess = 0;
+ u32 prod = 0;
+
+ if (num < 2)
+ return num;
+
+ while (mask) {
+ guess = guess ^ mask;
+ prod = guess*guess;
+ if (num < prod)
+ guess = guess ^ mask;
+ mask = mask>>1;
+ }
+ if (guess != 0xFFFF) {
+ prod = guess*guess;
+ if ((num - prod) > (prod + 2*guess + 1 - num))
+ guess++;
+ }
+
+ return guess;
+}
+#endif
+
+static void button_report(struct data *ts, int index, int state)
+{
+ if (!ts->button_data)
+ return;
+
+ if (state) {
+ switch (PDATA(input_protocol)) {
+ case MAX1187X_PROTOCOL_A:
+ if (PDATA(support_htc_event)) {
+#if defined(ABS_MT_AMPLITUDE) && defined(ABS_MT_POSITION)
+ input_report_abs(ts->input_dev, ABS_MT_AMPLITUDE,
+ (3000 << 16) | 0x0A);
+ input_report_abs(ts->input_dev, ABS_MT_POSITION,
+ (1 << 31) | (ts->button_data[index].x_position << 16) | ts->button_data[index].y_position);
+#endif
+ }
+ input_report_abs(ts->input_dev,
+ ABS_MT_TRACKING_ID, 0);
+ input_report_abs(ts->input_dev,
+ ABS_MT_POSITION_X, ts->button_data[index].x_position);
+ input_report_abs(ts->input_dev,
+ ABS_MT_POSITION_Y, ts->button_data[index].y_position);
+ input_report_abs(ts->input_dev,
+ ABS_MT_PRESSURE, 3000);
+ if (PDATA(report_mode) == MAX1187X_REPORT_MODE_EXTEND)
+ input_report_abs(ts->input_dev,
+ ABS_MT_WIDTH_MAJOR, 5);
+ input_sync(ts->input_dev);
+ break;
+ case MAX1187X_PROTOCOL_B:
+ if (PDATA(support_htc_event)) {
+#if defined(ABS_MT_AMPLITUDE) && defined(ABS_MT_POSITION)
+ input_report_abs(ts->input_dev, ABS_MT_AMPLITUDE,
+ (3000 << 16) | 0x0A);
+ input_report_abs(ts->input_dev, ABS_MT_POSITION,
+ (1 << 31) | (ts->button_data[index].x_position << 16) | ts->button_data[index].y_position);
+#endif
+ }
+ input_mt_slot(ts->input_dev, 0);
+ input_mt_report_slot_state(ts->input_dev,
+ MT_TOOL_FINGER, 1);
+ input_report_abs(ts->input_dev,
+ ABS_MT_POSITION_X, ts->button_data[index].x_position);
+ input_report_abs(ts->input_dev,
+ ABS_MT_POSITION_Y, ts->button_data[index].y_position);
+ input_report_abs(ts->input_dev,
+ ABS_MT_PRESSURE, 3000);
+ if (PDATA(report_mode) == MAX1187X_REPORT_MODE_EXTEND)
+ input_report_abs(ts->input_dev,
+ ABS_MT_WIDTH_MAJOR, 5);
+ input_sync(ts->input_dev);
+ break;
+ case MAX1187X_PROTOCOL_CUSTOM1:
+#if defined(ABS_MT_AMPLITUDE) && defined(ABS_MT_POSITION)
+ input_report_abs(ts->input_dev, ABS_MT_AMPLITUDE,
+ (3000 << 16) | 0x0A);
+ input_report_abs(ts->input_dev, ABS_MT_POSITION,
+ (1 << 31) | (ts->button_data[index].x_position << 16) | ts->button_data[index].y_position);
+#endif
+ break;
+ }
+ } else {
+ switch (PDATA(input_protocol)) {
+ case MAX1187X_PROTOCOL_A:
+ if (PDATA(support_htc_event)) {
+#if defined(ABS_MT_AMPLITUDE) && defined(ABS_MT_POSITION)
+ input_report_abs(ts->input_dev, ABS_MT_AMPLITUDE, 0);
+ input_report_abs(ts->input_dev, ABS_MT_POSITION, 1 << 31);
+#endif
+ }
+ input_mt_sync(ts->input_dev);
+ input_sync(ts->input_dev);
+ break;
+ case MAX1187X_PROTOCOL_B:
+ input_mt_slot(ts->input_dev, 0);
+ input_mt_report_slot_state(ts->input_dev, MT_TOOL_FINGER, 0);
+ if (PDATA(support_htc_event)) {
+#if defined(ABS_MT_AMPLITUDE) && defined(ABS_MT_POSITION)
+ input_report_abs(ts->input_dev, ABS_MT_AMPLITUDE, 0);
+ input_report_abs(ts->input_dev, ABS_MT_POSITION, 1 << 31);
+#endif
+ }
+ input_sync(ts->input_dev);
+ break;
+ case MAX1187X_PROTOCOL_CUSTOM1:
+#if defined(ABS_MT_AMPLITUDE) && defined(ABS_MT_POSITION)
+ input_report_abs(ts->input_dev, ABS_MT_AMPLITUDE, 0);
+ input_report_abs(ts->input_dev, ABS_MT_POSITION, 1 << 31);
+#endif
+ break;
+ default:
+ break;
+ }
+ }
+}
+
+/* debug_mask |= 0x40000 for touch reports */
+static void process_touch_report(struct data *ts, u16 *buf)
+{
+ u32 i;
+ u16 x, y, swap_u16;
+ u8 state[10] = {0};
+#if 0
+ u32 area;
+ u32 major_axis, minor_axis;
+ s16 xsize, ysize, orientation, swap_s16;
+#endif
+
+ struct max1187x_touch_report_header *header;
+ struct max1187x_touch_report_basic *reportb;
+ struct max1187x_touch_report_extended *reporte;
+
+ pr_dbg(1, "Touch:");
+ for (i = 0; i < (buf[0]&0x00FF); i++)
+ pr_dbg(1, " %04x", buf[i]);
+ pr_dbg(1, "\n");
+
+ header = (struct max1187x_touch_report_header *) buf;
+
+ if (!ts->input_dev)
+ goto err_process_touch_report_inputdev;
+ if (BYTEH(header->header) != 0x11)
+ goto err_process_touch_report_header;
+
+ if (header->report_id != MAX1187X_TOUCH_REPORT_BASIC &&
+ header->report_id != MAX1187X_TOUCH_REPORT_EXTENDED) {
+ if (header->report_id == 0x0134)
+ pr_info("Reset Baseline, Framecount number = %04x", buf[3]);
+ goto err_process_touch_report_reportid;
+ }
+
+ if (ts->framecounter == header->framecounter) {
+ pr_info("Same framecounter (%u) encountered at irq (%u)!\n",
+ ts->framecounter, ts->irq_count);
+ goto err_process_touch_report_framecounter;
+ }
+ ts->framecounter = header->framecounter;
+
+ if (!ts->finger_press) {
+ ts->finger_log = 0;
+ if (header->button0 != ts->button0) {
+ if (header->button0) {
+ ts->vk_press = 1;
+ pr_info("%s key pressed", keycode_check(PDATA(button_code0)));
+ } else {
+ ts->vk_press = 0;
+ pr_info("%s key released", keycode_check(PDATA(button_code0)));
+ }
+ if (ts->button_data)
+ button_report(ts, 0, header->button0);
+ else {
+ input_report_key(ts->input_dev, PDATA(button_code0), header->button0);
+ input_sync(ts->input_dev);
+ }
+ ts->button0 = header->button0;
+ }
+ if (header->button1 != ts->button1) {
+ if (header->button1) {
+ ts->vk_press = 1;
+ pr_info("%s key pressed", keycode_check(PDATA(button_code1)));
+ } else {
+ ts->vk_press = 0;
+ pr_info("%s key released", keycode_check(PDATA(button_code1)));
+ }
+ if (ts->button_data)
+ button_report(ts, 1, header->button1);
+ else {
+ input_report_key(ts->input_dev, PDATA(button_code1), header->button1);
+ input_sync(ts->input_dev);
+ }
+ ts->button1 = header->button1;
+ }
+ if (header->button2 != ts->button2) {
+ if (header->button2) {
+ ts->vk_press = 1;
+ pr_info("%s key pressed", keycode_check(PDATA(button_code2)));
+ } else {
+ ts->vk_press = 0;
+ pr_info("%s key released", keycode_check(PDATA(button_code2)));
+ }
+ if (ts->button_data)
+ button_report(ts, 2, header->button2);
+ else {
+ input_report_key(ts->input_dev, PDATA(button_code2), header->button2);
+ input_sync(ts->input_dev);
+ }
+ ts->button2 = header->button2;
+ }
+ if (header->button3 != ts->button3) {
+ if (header->button3) {
+ ts->vk_press = 1;
+ pr_info("%s key pressed", keycode_check(PDATA(button_code3)));
+ } else {
+ ts->vk_press = 0;
+ pr_info("%s key released", keycode_check(PDATA(button_code3)));
+ }
+ if (ts->button_data)
+ button_report(ts, 3, header->button3);
+ else {
+ input_report_key(ts->input_dev, PDATA(button_code3), header->button3);
+ input_sync(ts->input_dev);
+ }
+ ts->button3 = header->button3;
+ }
+ } else if ((header->button0 | header->button1 | header->button2 | header->button3) && !ts->finger_log) {
+ pr_info("Finger pressed! Ignore vkey press event.");
+ ts->finger_log = 1;
+ } else if (!(header->button0 | header->button1 | header->button2 | header->button3) && ts->finger_log) {
+ pr_info("Finger pressed! Ignore vkey release event.");
+ ts->finger_log = 0;
+ }
+
+ if (header->touch_count > 10) {
+ pr_err("Touch count (%u) out of bounds [0,10]!",
+ header->touch_count);
+ goto err_process_touch_report_touchcount;
+ }
+
+ if (header->touch_status != ts->noise_level) {
+ pr_info("Noise level %d -> %d", ts->noise_level, header->touch_status);
+ ts->noise_level = (u8)header->touch_status;
+ }
+
+ if (header->cycles != ts->cycles) {
+ pr_info("Cycles: %d -> %d", (ts->cycles == 1) ? 32 : 16, (header->cycles == 1) ? 32 : 16);
+ ts->cycles = header->cycles;
+ }
+
+ if (header->touch_count == 0) {
+ if (!ts->finger_press && ts->vk_press) {
+ return;
+ }
+ for (i = 0; i < MAX1187X_TOUCH_COUNT_MAX; i++) {
+ if (ts->report_points[i].state==1 && state[i]==0) {
+ if (PDATA(input_protocol) == MAX1187X_PROTOCOL_B) {
+ input_mt_slot(ts->input_dev, i);
+ input_mt_report_slot_state(ts->input_dev, MT_TOOL_FINGER, 0);
+ }
+ ts->report_points[i].state = 0;
+ if (debug_mask & BIT(3)) {
+ if(ts->width_factor && ts->height_factor) {
+ pr_dbg(8, "Screen:F[%02d]:Up, X=%d, Y=%d, Z=%d, W=%d",
+ i+1, ((ts->report_points[i].x-ts->width_offset)*ts->width_factor)>>SHIFT_BITS,
+ ((ts->report_points[i].y-ts->height_offset)*ts->height_factor)>>SHIFT_BITS,
+ ts->report_points[i].z, ts->report_points[i].w);
+ }
+ else {
+ pr_dbg(8, "Raw:F[%02d]:Up, X=%d, Y=%d, Z=%d, W=%d",
+ i+1, ts->report_points[i].x, ts->report_points[i].y,
+ ts->report_points[i].z, ts->report_points[i].w);
+ }
+ }
+ }
+ }
+ switch (PDATA(input_protocol)) {
+ case MAX1187X_PROTOCOL_A:
+ if (PDATA(support_htc_event)) {
+#if defined(ABS_MT_AMPLITUDE) && defined(ABS_MT_POSITION)
+ input_report_abs(ts->input_dev, ABS_MT_AMPLITUDE, 0);
+ input_report_abs(ts->input_dev, ABS_MT_POSITION, 1 << 31);
+#endif
+ }
+ input_mt_sync(ts->input_dev);
+ input_sync(ts->input_dev);
+ break;
+ case MAX1187X_PROTOCOL_B:
+ if (PDATA(support_htc_event)) {
+#if defined(ABS_MT_AMPLITUDE) && defined(ABS_MT_POSITION)
+ input_report_abs(ts->input_dev, ABS_MT_AMPLITUDE, 0);
+ input_report_abs(ts->input_dev, ABS_MT_POSITION, 1 << 31);
+#endif
+ }
+ input_sync(ts->input_dev);
+ break;
+ case MAX1187X_PROTOCOL_CUSTOM1:
+#if defined(ABS_MT_AMPLITUDE) && defined(ABS_MT_POSITION)
+ input_report_abs(ts->input_dev, ABS_MT_AMPLITUDE, 0);
+ input_report_abs(ts->input_dev, ABS_MT_POSITION, 1 << 31);
+#endif
+ break;
+ default:
+ break;
+ }
+ pr_info_if(4, "(TOUCH): Fingers up, Frame(%d) Noise(%d) Cycles(%d)",
+ ts->framecounter, ts->noise_level, (ts->cycles == 1)? 32 : 16);
+ pr_dbg(2, "Finger leave, Noise:%d, Cycles:%d", ts->noise_level, (ts->cycles == 1)? 32 : 16);
+ } else {
+ if (ts->vk_press) {
+ //pr_info("Vkey pressed! Ignore finger event.");
+ return;
+ }
+ reportb = (struct max1187x_touch_report_basic *)
+ ((u8 *)buf + sizeof(*header));
+ reporte = (struct max1187x_touch_report_extended *)
+ ((u8 *)buf + sizeof(*header));
+ for (i = 0; i < header->touch_count; i++) {
+ x = reportb->x;
+ y = reportb->y;
+ if (PDATA(coordinate_settings) & MAX1187X_SWAP_XY) {
+ swap_u16 = x;
+ x = y;
+ y = swap_u16;
+ }
+ if (PDATA(coordinate_settings) & MAX1187X_REVERSE_X) {
+ x = PDATA(panel_max_x) + PDATA(panel_min_x) - x;
+ }
+ if (PDATA(coordinate_settings) & MAX1187X_REVERSE_Y) {
+ y = PDATA(panel_max_y) + PDATA(panel_min_y) - y;
+ }
+ if (reportb->z == 0)
+ reportb->z++;
+ pr_info_if(4, "(TOUCH): (%u) Finger %u: "\
+ "X(%d) Y(%d) Z(%d) Frame(%d) Noise(%d) Finger status(%X) Cycles(%d)",
+ header->framecounter, reportb->finger_id,
+ x, y, reportb->z, ts->framecounter, ts->noise_level, reportb->finger_status,
+ (ts->cycles == 1)? 32 : 16);
+ pr_dbg(2, "Finger %d=> X:%d, Y:%d, Z:%d, Noise:%d, Cycles:%d",
+ reportb->finger_id+1, x, y, reportb->z, ts->noise_level, (ts->cycles == 1)? 32 : 16);
+ switch (PDATA(input_protocol)) {
+ case MAX1187X_PROTOCOL_A:
+ if (PDATA(support_htc_event)) {
+#if defined(ABS_MT_AMPLITUDE) && defined(ABS_MT_POSITION)
+ input_report_abs(ts->input_dev, ABS_MT_AMPLITUDE,
+ (reportb->z << 16) | 0x0A);
+ input_report_abs(ts->input_dev, ABS_MT_POSITION,
+ ((i == (header->touch_count - 1)) << 31) | (x << 16) | y);
+#endif
+ }
+ input_report_abs(ts->input_dev,
+ ABS_MT_TRACKING_ID, reportb->finger_id);
+ input_report_abs(ts->input_dev, ABS_MT_POSITION_X, x);
+ input_report_abs(ts->input_dev, ABS_MT_POSITION_Y, y);
+ input_report_abs(ts->input_dev,
+ ABS_MT_PRESSURE, reportb->z);
+ break;
+ case MAX1187X_PROTOCOL_B:
+ if (PDATA(support_htc_event)) {
+#if defined(ABS_MT_AMPLITUDE) && defined(ABS_MT_POSITION)
+ input_report_abs(ts->input_dev, ABS_MT_AMPLITUDE,
+ (reportb->z << 16) | 0x0A);
+ input_report_abs(ts->input_dev, ABS_MT_POSITION,
+ ((i == (header->touch_count - 1)) << 31) | (x << 16) | y);
+#endif
+ }
+ input_mt_slot(ts->input_dev, reportb->finger_id);
+ input_mt_report_slot_state(ts->input_dev,
+ MT_TOOL_FINGER, 1);
+ input_report_abs(ts->input_dev, ABS_MT_POSITION_X, x);
+ input_report_abs(ts->input_dev, ABS_MT_POSITION_Y, y);
+ input_report_abs(ts->input_dev,
+ ABS_MT_PRESSURE, reportb->z);
+ break;
+ case MAX1187X_PROTOCOL_CUSTOM1:
+#if defined(ABS_MT_AMPLITUDE) && defined(ABS_MT_POSITION)
+ input_report_abs(ts->input_dev, ABS_MT_AMPLITUDE,
+ (reportb->z << 16) | 0x0A);
+ input_report_abs(ts->input_dev, ABS_MT_POSITION,
+ ((i == (header->touch_count - 1)) << 31) | (x << 16) | y);
+#endif
+ break;
+ }
+ ts->report_points[reportb->finger_id].x = x;
+ ts->report_points[reportb->finger_id].y = y;
+ ts->report_points[reportb->finger_id].z = reportb->z;
+ state[reportb->finger_id] = 1;
+
+ if (header->report_id
+ == MAX1187X_TOUCH_REPORT_EXTENDED) {
+ pr_info_if(4, "(TOUCH): speed(%d,%d), pixel(%d,%d), area(%d), x(%d,%d), y(%d,%d)",
+ reporte->xspeed, reporte->yspeed, reporte->xpixel, reporte->ypixel, reporte->area,
+ reporte->xmin, reporte->xmax, reporte->ymin, reporte->ymax);
+ switch (PDATA(input_protocol)) {
+ case MAX1187X_PROTOCOL_A:
+ case MAX1187X_PROTOCOL_B:
+ input_report_abs(ts->input_dev,
+ ABS_MT_WIDTH_MAJOR, reporte->area);
+ break;
+ default:
+ break;
+ }
+ ts->report_points[reportb->finger_id].w = reporte->area;
+#if 0
+ xsize = (reporte->xpixel - 1)
+ * (s16)(PDATA(lcd_x)/PDATA(num_rows));
+ ysize = (reporte->ypixel - 1)
+ * (s16)(PDATA(lcd_y)/PDATA(num_cols));
+ if (PDATA(coordinate_settings)
+ & MAX1187X_REVERSE_X)
+ xsize = -xsize;
+ if (PDATA(coordinate_settings)
+ & MAX1187X_REVERSE_Y)
+ ysize = -ysize;
+ if (PDATA(coordinate_settings)
+ & MAX1187X_SWAP_XY) {
+ swap_s16 = xsize;
+ xsize = ysize;
+ ysize = swap_s16;
+ }
+ /* Calculate orientation as
+ * arctan of (xsize/ysize) */
+ orientation =
+ max1187x_orientation(xsize, ysize);
+ area = reporte->area
+ * (PDATA(lcd_x)/PDATA(num_rows))
+ * (PDATA(lcd_y)/PDATA(num_cols));
+ /* Major axis of ellipse is the hypotenuse
+ * formed by xsize and ysize */
+ major_axis = xsize*xsize + ysize*ysize;
+ major_axis = max1187x_sqrt(major_axis);
+ /* Minor axis can be reverse calculated
+ * using the area of ellipse:
+ * Area of ellipse =
+ * pi / 4 * Major axis * Minor axis
+ * Minor axis =
+ * 4 * Area / (pi * Major axis)
+ */
+ minor_axis = (2 * area) / major_axis;
+ minor_axis = (minor_axis<<17) / MAX1187X_PI;
+ pr_info_if(4, "(TOUCH): Finger %u: " \
+ "Orientation(%d) Area(%u) Major_axis(%u) Minor_axis(%u)",
+ reportb->finger_id, orientation,
+ area, major_axis, minor_axis);
+ input_report_abs(ts->input_dev,
+ ABS_MT_ORIENTATION, orientation);
+ input_report_abs(ts->input_dev,
+ ABS_MT_TOUCH_MAJOR, major_axis);
+ input_report_abs(ts->input_dev,
+ ABS_MT_TOUCH_MINOR, minor_axis);
+#endif
+ reporte++;
+ reportb = (struct max1187x_touch_report_basic *)
+ ((u8 *) reporte);
+ } else {
+ reportb++;
+ }
+
+ switch (PDATA(input_protocol)) {
+ case MAX1187X_PROTOCOL_A:
+ input_mt_sync(ts->input_dev);
+ break;
+ default:
+ break;
+ }
+ }
+ for (i = 0; i < MAX1187X_TOUCH_COUNT_MAX; i++) {
+ if (ts->report_points[i].state==1 && state[i]==0) {
+ if (PDATA(input_protocol) == MAX1187X_PROTOCOL_B) {
+ input_mt_slot(ts->input_dev, i);
+ input_mt_report_slot_state(ts->input_dev, MT_TOOL_FINGER, 0);
+ }
+ ts->report_points[i].state = 0;
+ if (debug_mask & BIT(3)) {
+ if(ts->width_factor && ts->height_factor) {
+ pr_dbg(8, "Screen:F[%02d]:Up, X=%d, Y=%d, Z=%d, W=%d",
+ i+1, ((ts->report_points[i].x-ts->width_offset)*ts->width_factor)>>SHIFT_BITS,
+ ((ts->report_points[i].y-ts->height_offset)*ts->height_factor)>>SHIFT_BITS,
+ ts->report_points[i].z, ts->report_points[i].w);
+ }
+ else {
+ pr_dbg(8, "Raw:F[%02d]:Up, X=%d, Y=%d, Z=%d, W=%d",
+ i+1, ts->report_points[i].x, ts->report_points[i].y,
+ ts->report_points[i].z, ts->report_points[i].w);
+ }
+ }
+ }
+ else if (ts->report_points[i].state ==0 && state[i]==1) {
+ ts->report_points[i].state = 1;
+ if (debug_mask & BIT(3)) {
+ if (ts->width_factor && ts->height_factor) {
+ pr_dbg(8, "Screen:F[%02d]:Down, X=%d, Y=%d, Z=%d, W=%d",
+ i+1, ((ts->report_points[i].x-ts->width_offset)*ts->width_factor)>>SHIFT_BITS,
+ ((ts->report_points[i].y-ts->height_offset)*ts->height_factor)>>SHIFT_BITS,
+ ts->report_points[i].z, ts->report_points[i].w);
+ }
+ else {
+ pr_dbg(8, "Raw:F[%02d]:Down, X=%d, Y=%d, Z=%d, W=%d",
+ i+1, ts->report_points[i].x, ts->report_points[i].y,
+ ts->report_points[i].z, ts->report_points[i].w);
+ }
+ }
+ }
+ }
+ switch (PDATA(input_protocol)) {
+ case MAX1187X_PROTOCOL_A:
+ case MAX1187X_PROTOCOL_B:
+ input_sync(ts->input_dev);
+ break;
+ case MAX1187X_PROTOCOL_CUSTOM1:
+ break;
+ }
+ }
+ ts->finger_press = header->touch_count;
+err_process_touch_report_touchcount:
+err_process_touch_report_inputdev:
+err_process_touch_report_header:
+err_process_touch_report_reportid:
+err_process_touch_report_framecounter:
+ return;
+}
+
+static irqreturn_t irq_handler(int irq, void *context)
+{
+ struct data *ts = (struct data *) context;
+ int read_retval;
+ u64 time_elapsed = jiffies;
+ struct timespec time_start, time_end, time_delta;
+
+ if (atomic_read(&ts->scheduled_work_irq) != 0)
+ return IRQ_HANDLED;
+
+ if (gpio_get_value(ts->pdata->gpio_tirq) != 0)
+ return IRQ_HANDLED;
+
+ /* disable_irq_nosync(ts->client->irq); */
+ atomic_inc(&ts->scheduled_work_irq);
+ ts->irq_receive_time = jiffies;
+ ts->irq_count++;
+
+ if (debug_mask & BIT(2)) {
+ getnstimeofday(&time_start);
+ }
+
+ read_retval = read_mtp_report(ts, ts->rx_packet);
+
+ if (time_elapsed >= ts->irq_receive_time)
+ time_elapsed = time_elapsed - ts->irq_receive_time;
+ else
+ time_elapsed = time_elapsed +
+ 0x100000000 - ts->irq_receive_time;
+
+ if (read_retval == 0 || time_elapsed > 2 * HZ) {
+ process_touch_report(ts, ts->rx_packet);
+ if (debug_mask & BIT(2)) {
+ getnstimeofday(&time_end);
+ time_delta.tv_nsec = (time_end.tv_sec*1000000000+time_end.tv_nsec)
+ -(time_start.tv_sec*1000000000+time_start.tv_nsec);
+ pr_dbg(4, "Touch latency = %ld us", time_delta.tv_nsec/1000);
+ }
+ propagate_report(ts, 0, ts->rx_packet);
+ }
+ atomic_dec(&ts->scheduled_work_irq);
+ /* enable_irq(ts->client->irq); */
+ return IRQ_HANDLED;
+}
+
+static ssize_t init_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ return snprintf(buf, PAGE_SIZE, "%d\n", init_state);
+}
+
+static ssize_t init_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ int value, ret;
+
+ if (sscanf(buf, "%d", &value) != 1) {
+ pr_err("bad parameter");
+ return -EINVAL;
+ }
+ switch (value) {
+ case 0:
+ if (init_state == 0)
+ break;
+ ret = device_deinit(to_i2c_client(dev));
+ if (ret != 0) {
+ pr_err("deinit error (%d)", ret);
+ return ret;
+ }
+ break;
+ case 1:
+ if (init_state == 1)
+ break;
+ ret = device_init(to_i2c_client(dev));
+ if (ret != 0) {
+ pr_err("init error (%d)", ret);
+ return ret;
+ }
+ break;
+ case 2:
+ if (init_state == 1) {
+ ret = device_deinit(to_i2c_client(dev));
+ if (ret != 0) {
+ pr_err("deinit error (%d)", ret);
+ return ret;
+ }
+ }
+ ret = device_init(to_i2c_client(dev));
+ if (ret != 0) {
+ pr_err("init error (%d)", ret);
+ return ret;
+ }
+ break;
+ default:
+ pr_err("bad value");
+ return -EINVAL;
+ }
+
+ return count;
+}
+
+static ssize_t hreset_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct data *ts = i2c_get_clientdata(client);
+ struct max1187x_pdata *pdata = client->dev.platform_data;
+
+ if (!pdata->gpio_reset)
+ return count;
+
+ DISABLE_IRQ();
+ mutex_lock(&ts->i2c_mutex);
+ gpio_set_value(pdata->gpio_reset, 0);
+ usleep_range(10000, 11000);
+ gpio_set_value(pdata->gpio_reset, 1);
+ bootloader = 0;
+ ts->got_report = 0;
+ mutex_unlock(&ts->i2c_mutex);
+ if (get_report(ts, 0x01A0, 3000) != 0) {
+ pr_err("Failed to receive system status report");
+ return count;
+ }
+ release_report(ts);
+
+ return count;
+}
+
+static ssize_t sreset_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct data *ts = i2c_get_clientdata(client);
+
+ DISABLE_IRQ();
+ if (sreset(client) != 0) {
+ pr_err("Failed to do soft reset.");
+ return count;
+ }
+ if (get_report(ts, 0x01A0, 3000) != 0) {
+ pr_err("Failed to receive system status report");
+ return count;
+ }
+
+ release_report(ts);
+ return count;
+}
+
+static ssize_t irq_count_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct data *ts = i2c_get_clientdata(client);
+
+ return snprintf(buf, PAGE_SIZE, "%u\n", ts->irq_count);
+}
+
+static ssize_t irq_count_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct data *ts = i2c_get_clientdata(client);
+
+ ts->irq_count = 0;
+ return count;
+}
+
+static ssize_t dflt_cfg_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct data *ts = i2c_get_clientdata(client);
+
+ return snprintf(buf, PAGE_SIZE, "%u 0x%x 0x%x\n", PDATA(defaults_allow),
+ PDATA(default_config_id), PDATA(default_chip_id));
+}
+
+static ssize_t dflt_cfg_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct data *ts = i2c_get_clientdata(client);
+
+ (void) sscanf(buf, "%u 0x%x 0x%x", &PDATA(defaults_allow),
+ &PDATA(default_config_id), &PDATA(default_chip_id));
+ return count;
+}
+
+static ssize_t panel_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct data *ts = i2c_get_clientdata(client);
+
+ return snprintf(buf, PAGE_SIZE, "%u %u %u %u %u %u\n",
+ PDATA(panel_max_x), PDATA(panel_min_x),
+ PDATA(panel_max_y), PDATA(panel_min_y),
+ PDATA(lcd_x), PDATA(lcd_y));
+}
+
+static ssize_t panel_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct data *ts = i2c_get_clientdata(client);
+
+ (void) sscanf(buf, "%u %u %u %u %u %u",
+ &PDATA(panel_max_x), &PDATA(panel_min_x),
+ &PDATA(panel_max_y), &PDATA(panel_min_y),
+ &PDATA(lcd_x), &PDATA(lcd_y));
+ return count;
+}
+
+static ssize_t fw_ver_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct data *ts = i2c_get_clientdata(client);
+ u16 build_number = 0;
+ u8 branch = BYTEL(ts->fw_version[3]) >> 6;
+
+ if (ts->fw_version[1] >= 3)
+ build_number = ts->fw_version[4];
+ return snprintf(
+ buf,
+ PAGE_SIZE,
+ "%u.%u.%u p%u%c "
+ "(CRC16 0x%04X=>0x%04X) Chip ID 0x%02X\n",
+ BYTEH(ts->fw_version[2]),
+ BYTEL(ts->fw_version[2]),
+ build_number,
+ BYTEL(ts->fw_version[3]) & 0x3F,
+ (branch == 0) ? ' ' : (branch - 1 + 'a'),
+ (ts->fw_index != -1) ? \
+ PDATA(fw_mapping[ts->fw_index]).file_codesize \
+ : 0, ts->fw_crc16, BYTEH(ts->fw_version[3]));
+}
+
+static ssize_t driver_ver_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ return snprintf(buf, PAGE_SIZE, "3.0.7.1: Mar 15, 2013\n");
+}
+
+static ssize_t debug_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ return snprintf(buf, PAGE_SIZE, "%08X\n", debug_mask);
+}
+
+static ssize_t debug_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ if (sscanf(buf, "%x", &debug_mask) != 1) {
+ pr_err("bad parameter");
+ return -EINVAL;
+ }
+
+ return count;
+}
+
+static ssize_t command_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct data *ts = i2c_get_clientdata(client);
+ u16 buffer[MAX_WORDS_COMMAND_ALL];
+ char scan_buf[5];
+ int i;
+
+ count--; /* ignore trailing newline */
+ if ((count % 4) != 0) {
+ pr_err("words not properly defined");
+ return -EINVAL;
+ }
+ scan_buf[4] = '\0';
+ for (i = 0; i < count; i += 4) {
+ memcpy(scan_buf, &buf[i], 4);
+ if (sscanf(scan_buf, "%hx", &buffer[i / 4]) != 1) {
+ pr_err("bad word (%s)", scan_buf);
+ return -EINVAL;
+ }
+
+ }
+ if (send_mtp_command(ts, buffer, count / 4))
+ pr_err("MTP command failed");
+ return ++count;
+}
+
+static ssize_t report_read(struct file *file, struct kobject *kobj,
+ struct bin_attribute *attr, char *buf, loff_t off, size_t count)
+{
+ struct i2c_client *client = kobj_to_i2c_client(kobj);
+ struct data *ts = i2c_get_clientdata(client);
+ int printed, i, offset = 0, payload;
+ int full_packet;
+ int num_term_char;
+
+ if (get_report(ts, 0xFFFF, 0xFFFFFFFF))
+ return 0;
+
+ payload = ts->rx_report_len;
+ full_packet = payload;
+ num_term_char = 2; /* number of term char */
+ if (count < (4 * full_packet + num_term_char))
+ return -EIO;
+ if (count > (4 * full_packet + num_term_char))
+ count = 4 * full_packet + num_term_char;
+
+ for (i = 1; i <= payload; i++) {
+ printed = snprintf(&buf[offset], PAGE_SIZE - offset, "%04X\n",
+ ts->rx_report[i]);
+ if (printed <= 0)
+ return -EIO;
+ offset += printed - 1;
+ }
+ snprintf(&buf[offset], PAGE_SIZE - offset, ",\n");
+ release_report(ts);
+
+ return count;
+}
+
+static ssize_t touch_vendor_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct data *ts = gl_ts;
+
+ return snprintf(buf, PAGE_SIZE, "Maxim-%s_p%u_chipID-0x%02X_twID-%02X\n",
+ ts->fw_ver, ts->protocol_ver, BYTEH(ts->fw_version[3]), ts->vendor_pin);
+}
+
+static ssize_t config_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct data *ts = gl_ts;
+ int i, ret;
+ size_t count = 0;
+ u16 mtpdata[] = {0x0000, 0x0000, 0x0000};
+
+ for(i=0; i<RETRY_TIMES; i++) {
+ DISABLE_IRQ();
+ //Get touch configuration
+ ret = get_touch_config(ts->client);
+ if (ret < 0)
+ pr_info("[W] Failed to retrieve touch config");
+ if (ret == 0) {
+ ret = get_report(ts, 0x0102, 150);
+ if (ret != 0)
+ pr_info("[W] Get touch config time out-%d, retry", i);
+ if (ret == 0) {
+ count += snprintf(buf + count, PAGE_SIZE - count, "Touch config:\n");
+ for (i = 3; i < config_num[ts->config_protocol][0]+3; i++) {
+ count += snprintf(buf + count, PAGE_SIZE - count, "%04X ", ts->rx_report[i]);
+ if (((i-3) % 16) == (16 - 1))
+ count += snprintf(buf + count, PAGE_SIZE - count, "\n");
+ }
+ count += snprintf(buf + count, PAGE_SIZE - count, "\n");
+ release_report(ts);
+ break;
+ }
+ }
+ }
+ if (i==RETRY_TIMES && ret!=0)
+ pr_err("Failed to receive touch config report");
+
+ for(i=0; i<RETRY_TIMES; i++) {
+ DISABLE_IRQ();
+ //Get calibration table
+ mtpdata[0]=0x0011;
+ mtpdata[1]=0x0000;
+ ret = send_mtp_command(ts, mtpdata, 2);
+ if (ret < 0)
+ pr_info("[W] Failed to retrieve calibration table");
+ if (ret == 0) {
+ ret = get_report(ts, 0x0111, 150);
+ if (ret != 0)
+ pr_info("[W] Get calibration table time out-%d, retry", i);
+ if (ret == 0) {
+ count += snprintf(buf + count, PAGE_SIZE - count, "Calibration Table:\n");
+ for (i = 3; i < config_num[ts->config_protocol][1]+3; i++) {
+ count += snprintf(buf + count, PAGE_SIZE - count, "%04X ", ts->rx_report[i]);
+ if (((i-3) % 16) == (16 - 1))
+ count += snprintf(buf + count, PAGE_SIZE - count, "\n");
+ }
+ count += snprintf(buf + count, PAGE_SIZE - count, "\n");
+ release_report(ts);
+ break;
+ }
+ }
+ }
+ if (i==RETRY_TIMES && ret!=0)
+ pr_err("Failed to receive calibration table report");
+
+ for(i=0; i<RETRY_TIMES; i++) {
+ DISABLE_IRQ();
+ //Get private configuration
+ mtpdata[0]=0x0004;
+ mtpdata[1]=0x0000;
+ ret = send_mtp_command(ts, mtpdata, 2);
+ if (ret < 0)
+ pr_info("[W] Failed to retrieve private config");
+ if (ret == 0) {
+ ret = get_report(ts, 0x0104, 150);
+ if (ret != 0)
+ pr_info("[W] Get private config time out-%d, retry", i);
+ if (ret == 0) {
+ count += snprintf(buf + count, PAGE_SIZE - count, "Private Config:\n");
+ for (i = 3; i < config_num[ts->config_protocol][2]+3; i++) {
+ count += snprintf(buf + count, PAGE_SIZE - count, "%04X ", ts->rx_report[i]);
+ if (((i-3) % 16) == (16 - 1))
+ count += snprintf(buf + count, PAGE_SIZE - count, "\n");
+ }
+ count += snprintf(buf + count, PAGE_SIZE - count, "\n");
+ release_report(ts);
+ break;
+ }
+ }
+ }
+ if (i==RETRY_TIMES && ret!=0)
+ pr_err("Failed to receive private config report");
+
+ for(i=0; i<RETRY_TIMES; i++) {
+ DISABLE_IRQ();
+ //Get Lookup table X
+ mtpdata[0]=0x0031;
+ mtpdata[1]=0x0001;
+ mtpdata[2]=0x0000;
+ ret = send_mtp_command(ts, mtpdata, 3);
+ if (ret < 0)
+ pr_info("[W] Failed to retrieve Lookup table X");
+ if (ret == 0) {
+ ret = get_report(ts, 0x0131, 150);
+ if (ret != 0)
+ pr_info("[W] Get Lookup table X time out-%d, retry", i);
+ if (ret == 0) {
+ count += snprintf(buf + count, PAGE_SIZE - count, "Lookup Table X:\n");
+ for (i = 3; i < config_num[ts->config_protocol][3]+3; i++) {
+ count += snprintf(buf + count, PAGE_SIZE - count, "%04X ", ts->rx_report[i]);
+ if (((i-3) % 16) == (16 - 1))
+ count += snprintf(buf + count, PAGE_SIZE - count, "\n");
+ }
+ count += snprintf(buf + count, PAGE_SIZE - count, "\n");
+ release_report(ts);
+ break;
+ }
+ }
+ }
+ if (i==RETRY_TIMES && ret!=0)
+ pr_err("Failed to receive Lookup table X report");
+
+ for(i=0; i<RETRY_TIMES; i++) {
+ DISABLE_IRQ();
+ //Get Lookup table Y
+ mtpdata[0]=0x0031;
+ mtpdata[1]=0x0001;
+ mtpdata[2]=0x0001;
+ ret = send_mtp_command(ts, mtpdata, 3);
+ if (ret < 0)
+ pr_info("[W] Failed to retrieve Lookup table Y");
+ if (ret == 0) {
+ ret = get_report(ts, 0x0131, 150);
+ if (ret != 0)
+ pr_info("[W] Get Lookup table Y time out-%d, retry", i);
+ if (ret == 0) {
+ count += scnprintf(buf + count, PAGE_SIZE - count, "Lookup Table Y:\n");
+ for (i = 3; i < config_num[ts->config_protocol][3]+3; i++) {
+ count += scnprintf(buf + count, PAGE_SIZE - count, "%04X ", ts->rx_report[i]);
+ if (((i-3) % 16) == (16 - 1))
+ count += scnprintf(buf + count, PAGE_SIZE - count, "\n");
+ }
+ count += scnprintf(buf + count, PAGE_SIZE - count, "\n");
+ release_report(ts);
+ break;
+ }
+ }
+ }
+ if (i == RETRY_TIMES && ret != 0)
+ pr_err("Failed to receive Lookup table Y report");
+
+ return count;
+}
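The show handlers above accumulate hex dumps into a single PAGE_SIZE sysfs buffer. A minimal userspace sketch of that accumulation pattern, always passing the *remaining* space so the page can never overrun (the function name and 16-words-per-line wrap are illustrative, not from the driver):

```c
#include <stdio.h>

#define PAGE_SIZE 4096

/* Append formatted hex words into buf, passing the remaining space to
 * each snprintf so the total can never overrun the page-sized buffer. */
static size_t dump_words(char *buf, const unsigned short *words, int n)
{
    size_t count = 0;
    int i;

    count += snprintf(buf + count, PAGE_SIZE - count, "Words:\n");
    for (i = 0; i < n; i++) {
        count += snprintf(buf + count, PAGE_SIZE - count, "%04X ", words[i]);
        if ((i % 16) == 15)   /* wrap after 16 words per line */
            count += snprintf(buf + count, PAGE_SIZE - count, "\n");
    }
    count += snprintf(buf + count, PAGE_SIZE - count, "\n");
    return count;
}
```

In the kernel itself, `scnprintf()` is the safer variant since it returns the number of characters actually written rather than the would-have-been length.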
+
+static ssize_t gpio_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ int ret = 0;
+ int ret1 = 0;
+ int ret2 = 0;
+ struct data *ts = gl_ts;
+ struct max1187x_pdata *pdata = ts->client->dev.platform_data;
+
+ if (!pdata->gpio_tirq)
+ return ret;
+
+ ret = gpio_get_value(pdata->gpio_tirq);
+ printk(KERN_DEBUG "[TP] GPIO_TP_INT_N=%d\n", ret);
+ ret1 = gpio_get_value(164);
+ printk(KERN_DEBUG "[TP] TP_I2C_SEL_CPU=%d\n", ret1);
+ ret2 = gpio_get_value(86);
+ printk(KERN_DEBUG "[TP] TW_I2C_OE=%d\n", ret2);
+ snprintf(buf, PAGE_SIZE, "GPIO_TP_INT_N=%d,TP_I2C_SEL_CPU=%d,TW_I2C_OE=%d\n", ret, ret1, ret2);
+ ret = strlen(buf) + 1;
+
+ return ret;
+}
+
+static ssize_t diag_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct data *ts = gl_ts;
+ size_t count = 0;
+ uint16_t i, j;
+ int ret, button_count = 0;
+
+ if (ts->baseline_mode != MAX1187X_AUTO_BASELINE)
+ if (set_baseline_mode(ts->client, ts->baseline_mode) < 0) {
+ pr_err("Failed to set up baseline mode");
+ return -EIO;
+ }
+ if (set_touch_frame(ts->client, ts->frame_rate[1], 0x0A) < 0) {
+ pr_err("Failed to set up frame rate");
+ return -EIO;
+ }
+
+ DISABLE_IRQ();
+ if (change_touch_rpt(ts->client, 0) < 0) {
+ pr_err("Failed to set up raw data report");
+ return -EIO;
+ }
+ ts->is_raw_mode = 1;
+ ret = get_report(ts, 0x0800, 500);
+ if (ret != 0)
+ pr_info("Failed to receive raw data report");
+
+ if (ret == 0) {
+ memcpy(ts->report, &ts->rx_report[5], BYTE_SIZE(ts->rx_report[2] - 2));
+ if (ts->rx_report[2] - 2 > (ts->x_channel*ts->y_channel))
+ button_count = ts->rx_report[2] - 2 - (ts->x_channel*ts->y_channel);
+ count += scnprintf(buf + count, PAGE_SIZE - count, "Channel: %dx%d\n", ts->x_channel, ts->y_channel);
+ for (i = 0; i < ts->y_channel; i++) {
+ for (j = 0; j < ts->x_channel; j++) {
+ count += scnprintf(buf + count, PAGE_SIZE - count, "%6d", ts->report[i*ts->x_channel + j]);
+ }
+ count += scnprintf(buf + count, PAGE_SIZE - count, "\n");
+ }
+ if (button_count) {
+ for (i = 0; i < button_count; i++) {
+ count += scnprintf(buf + count, PAGE_SIZE - count, "%6d", ts->report[ts->x_channel*ts->y_channel + i]);
+ }
+ count += scnprintf(buf + count, PAGE_SIZE - count, "\n");
+ }
+ release_report(ts);
+ }
+
+ DISABLE_IRQ();
+ ts->is_raw_mode = 0;
+ if (change_touch_rpt(ts->client, 1) < 0) {
+ pr_err("Failed to restore touch report mode");
+ return -EIO;
+ }
+ if (set_touch_frame(ts->client, ts->frame_rate[1], ts->frame_rate[0]) < 0) {
+ pr_err("Failed to set up frame rate");
+ return -EIO;
+ }
+ if (ts->baseline_mode != MAX1187X_AUTO_BASELINE)
+ if (set_baseline_mode(ts->client, 2) < 0) {
+ pr_err("Failed to set up baseline mode");
+ return -EIO;
+ }
+ ENABLE_IRQ();
+
+ return count;
+}
+
+static ssize_t diag_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ struct data *ts = gl_ts;
+ if (buf[0] == '1')
+ ts->baseline_mode = MAX1187X_AUTO_BASELINE;
+ else if (buf[0] == '2')
+ ts->baseline_mode = MAX1187X_NO_BASELINE;
+ else if (buf[0] == '3')
+ ts->baseline_mode = MAX1187X_FIX_BASELINE;
+
+ return count;
+}
+
+static ssize_t unlock_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ int unlock = -1;
+ if (buf[0] >= '0' && buf[0] <= '9' && buf[1] == '\n')
+ unlock = buf[0] - '0';
+
+ pr_info("Touch: unlock change to %d", unlock);
+ return count;
+}
+
+static ssize_t set_I2C_OE_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ int unlock = -1;
+ int ret = 0;
+ if (buf[0] >= '0' && buf[0] <= '9' && buf[1] == '\n')
+ unlock = buf[0] - '0';
+
+ if (unlock < 0)
+ return -EINVAL;
+
+ pr_info("Touch: set value %d", unlock);
+ gpio_set_value(86, unlock);
+ ret = gpio_get_value(86);
+ printk(KERN_DEBUG "[TP] TW_I2C_OE=%d\n", ret);
+
+ return count;
+}
+
+static ssize_t set_I2C_SEL_CPU_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ int unlock = -1;
+ int ret = 0;
+ if (buf[0] >= '0' && buf[0] <= '9' && buf[1] == '\n')
+ unlock = buf[0] - '0';
+
+ if (unlock < 0)
+ return -EINVAL;
+
+ pr_info("Touch: set value %d", unlock);
+ gpio_set_value(164, unlock);
+ ret = gpio_get_value(164);
+ printk(KERN_DEBUG "[TP] TP_I2C_SEL_CPU=%d\n", ret);
+
+ return count;
+}
+
+static DEVICE_ATTR(init, (S_IWUSR|S_IRUGO), init_show, init_store);
+static DEVICE_ATTR(hreset, S_IWUSR, NULL, hreset_store);
+static DEVICE_ATTR(sreset, S_IWUSR, NULL, sreset_store);
+static DEVICE_ATTR(irq_count, (S_IWUSR|S_IRUGO), irq_count_show, irq_count_store);
+static DEVICE_ATTR(dflt_cfg, (S_IWUSR|S_IRUGO), dflt_cfg_show, dflt_cfg_store);
+static DEVICE_ATTR(panel, (S_IWUSR|S_IRUGO), panel_show, panel_store);
+static DEVICE_ATTR(fw_ver, S_IRUGO, fw_ver_show, NULL);
+static DEVICE_ATTR(driver_ver, S_IRUGO, driver_ver_show, NULL);
+static DEVICE_ATTR(debug, (S_IWUSR|S_IRUGO), debug_show, debug_store);
+static DEVICE_ATTR(command, S_IWUSR, NULL, command_store);
+static struct bin_attribute dev_attr_report = {
+ .attr = {.name = "report", .mode = S_IRUGO}, .read = report_read };
+
+static struct device_attribute *dev_attrs[] = {
+ &dev_attr_hreset,
+ &dev_attr_sreset,
+ &dev_attr_irq_count,
+ &dev_attr_dflt_cfg,
+ &dev_attr_panel,
+ &dev_attr_fw_ver,
+ &dev_attr_driver_ver,
+ &dev_attr_debug,
+ &dev_attr_command,
+ NULL };
+
+static DEVICE_ATTR(debug_level, (S_IWUSR|S_IRUGO), debug_show, debug_store);
+static DEVICE_ATTR(vendor, S_IRUGO, touch_vendor_show, NULL);
+static DEVICE_ATTR(config, S_IRUGO, config_show, NULL);
+static DEVICE_ATTR(gpio, S_IRUGO, gpio_show, NULL);
+static DEVICE_ATTR(diag, (S_IWUSR|S_IRUGO), diag_show, diag_store);
+static DEVICE_ATTR(unlock, S_IWUSR, NULL, unlock_store);
+static DEVICE_ATTR(set_I2C_OE, (S_IWUSR|S_IRUGO), gpio_show, set_I2C_OE_store);
+static DEVICE_ATTR(set_I2C_SEL_CPU, (S_IWUSR|S_IRUGO), gpio_show, set_I2C_SEL_CPU_store);
+
+
+/* debug_mask |= 0x80000 for all driver INIT */
+static void collect_chip_data(struct data *ts)
+{
+ int ret, i, build_number = 0;
+
+ ret = get_report(ts, 0x01A0, 3000);
+ if (ret != 0) {
+ pr_err("Failed to receive system status report");
+ if (PDATA(defaults_allow) == 0)
+ msleep(5000);
+ } else {
+ ts->vendor_pin = BYTEH(ts->rx_report[3]) & PDATA(tw_mask);
+ pr_info_if(8, "(INIT): vendor_pin=%x", ts->vendor_pin);
+ release_report(ts);
+ ts->fw_responsive = 1;
+ }
+#if 0 /* Debug report */
+ DISABLE_IRQ();
+ ret = get_report(ts, 0x0121, 500);
+ if (ret == 0) {
+ pr_info_if(8, "(INIT): Get power mode report:%d", ts->rx_report[2]);
+ release_report(ts);
+ }
+#endif
+ for (i = 0; i < RETRY_TIMES; i++) {
+ DISABLE_IRQ();
+ ret = get_fw_version(ts->client);
+ if (ret < 0)
+ pr_err("Failed to retrieve firmware version");
+ if (ret == 0) {
+ ret = get_report(ts, 0x0140, 100);
+ if (ret != 0)
+ pr_info("[W] Get firmware version time out-%d, retry", i);
+ if (ret == 0) {
+ memcpy(ts->fw_version, &ts->rx_report[1],
+ BYTE_SIZE(ts->rx_report[2] + 2));
+ release_report(ts);
+ ts->have_fw = 1;
+ break;
+ }
+ }
+ }
+ if (i == RETRY_TIMES && ret != 0)
+ pr_err("Failed to receive firmware version report");
+ for (i = 0; i < RETRY_TIMES; i++) {
+ DISABLE_IRQ();
+ ret = get_touch_config(ts->client);
+ if (ret < 0)
+ pr_err("Failed to retrieve touch config");
+ if (ret == 0) {
+ ret = get_report(ts, 0x0102, 100);
+ if (ret != 0)
+ pr_info("[W] Get touch config time out-%d, retry", i);
+ if (ret == 0) {
+ memcpy(ts->touch_config, &ts->rx_report[1],
+ BYTE_SIZE(ts->rx_report[2] + 2));
+ release_report(ts);
+ ts->have_touchcfg = 1;
+ break;
+ }
+ }
+ }
+ if (i == RETRY_TIMES && ret != 0)
+ pr_err("Failed to receive touch config report");
+ ENABLE_IRQ();
+ pr_info_if(8, "(INIT): firmware responsive: (%u)", ts->fw_responsive);
+ if (ts->fw_responsive) {
+ if (ts->have_fw) {
+ if (ts->fw_version[1] >= 3)
+ build_number = ts->fw_version[4];
+ sprintf(ts->fw_ver, "%u.%u.%u", BYTEH(ts->fw_version[2]),
+ BYTEL(ts->fw_version[2]), build_number);
+ ts->protocol_ver = BYTEL(ts->fw_version[3]) & 0x3F;
+ pr_info_if(8, "(INIT): firmware version: %u.%u.%u_p%u Chip ID: "
+ "0x%02X", BYTEH(ts->fw_version[2]),
+ BYTEL(ts->fw_version[2]),
+ build_number,
+ BYTEL(ts->fw_version[3]) & 0x3F,
+ BYTEH(ts->fw_version[3]));
+ } else
+ sprintf(ts->fw_ver, "Bootloader");
+ if (ts->have_touchcfg) {
+ pr_info_if(8, "(INIT): configuration ID: 0x%04X",
+ ts->touch_config[2]);
+ ts->x_channel = BYTEH(ts->touch_config[3]);
+ ts->y_channel = BYTEL(ts->touch_config[3]);
+ pr_info_if(8, "(INIT): Channel=(%d,%d)", ts->x_channel, ts->y_channel);
+ ts->frame_rate[0] = ts->touch_config[4];
+ ts->frame_rate[1] = ts->touch_config[5];
+ pr_info_if(8, "(INIT): Frame Rate=(%d,%d)", ts->frame_rate[0], ts->frame_rate[1]);
+ }
+ }
+ else
+ sprintf(ts->fw_ver, "Failed");
+}
+
+static int device_fw_load(struct data *ts, const struct firmware *fw,
+ u16 fw_index, int tagLen)
+{
+ u16 filesize, file_codesize, loopcounter;
+ u16 file_crc16_1, file_crc16_2, local_crc16;
+ int chip_crc16_1 = -1, chip_crc16_2 = -1, ret;
+
+ filesize = PDATA(fw_mapping[fw_index]).filesize;
+ file_codesize = PDATA(fw_mapping[fw_index]).file_codesize;
+
+ if (fw->size - tagLen != filesize) {
+ pr_err("filesize (%zu) is not equal to expected size (%d)",
+ fw->size - tagLen, filesize);
+ return -EIO;
+ }
+
+ file_crc16_1 = crc16(0, fw->data+tagLen, file_codesize);
+
+ loopcounter = 0;
+ do {
+ ret = bootloader_enter(ts);
+ if (ret == 0)
+ ret = bootloader_get_crc(ts, &local_crc16,
+ 0, file_codesize, 200);
+ if (ret == 0)
+ chip_crc16_1 = local_crc16;
+ ret = bootloader_exit(ts);
+ loopcounter++;
+ } while (loopcounter < MAX_FW_RETRIES && chip_crc16_1 == -1);
+
+ pr_info_if(8, "(INIT): file_crc16_1 = 0x%04x, chip_crc16_1 = 0x%04x\n",
+ file_crc16_1, chip_crc16_1);
+
+ ts->fw_index = fw_index;
+ ts->fw_crc16 = chip_crc16_1;
+
+ if (file_crc16_1 != chip_crc16_1) {
+ loopcounter = 0;
+ file_crc16_2 = crc16(0, fw->data+tagLen, filesize);
+
+ while (loopcounter < MAX_FW_RETRIES && file_crc16_2
+ != chip_crc16_2) {
+ pr_info_if(8, "(INIT): Reprogramming chip. Attempt %d",
+ loopcounter+1);
+ ret = bootloader_enter(ts);
+ if (ret == 0)
+ ret = bootloader_erase_flash(ts);
+ if (ret == 0)
+ ret = bootloader_set_byte_mode(ts);
+ if (ret == 0)
+ ret = bootloader_write_flash(ts, fw->data+tagLen,
+ filesize);
+ if (ret == 0)
+ ret = bootloader_get_crc(ts, &local_crc16,
+ 0, filesize, 200);
+ if (ret == 0)
+ chip_crc16_2 = local_crc16;
+ pr_info_if(8, "(INIT): file_crc16_2 = 0x%04x, "\
+ "chip_crc16_2 = 0x%04x\n",
+ file_crc16_2, chip_crc16_2);
+ ret = bootloader_exit(ts);
+ loopcounter++;
+ }
+
+ if (file_crc16_2 != chip_crc16_2)
+ return -EAGAIN;
+ }
+
+ loopcounter = 0;
+ do {
+ ret = bootloader_exit(ts);
+ loopcounter++;
+ } while (loopcounter < MAX_FW_RETRIES && ret != 0);
+
+ if (ret != 0)
+ return -EIO;
+
+ ts->fw_crc16 = file_crc16_1;
+
+ collect_chip_data(ts);
+ if (ts->have_fw == 0 || ts->have_touchcfg == 0) {
+ pr_err("firmware is unresponsive or inconsistent and "\
+ "no valid configuration is present");
+ return -ENXIO;
+ }
+
+ return 0;
+}
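device_fw_load() compares crc16() over the firmware image against the CRC the bootloader reports before deciding whether to reflash. The kernel's lib/crc16.c implements CRC-16/ARC (reflected polynomial 0xA001, initial value 0); a self-contained userspace sketch of the same algorithm:

```c
#include <stddef.h>

/* Bitwise CRC-16/ARC (reflected poly 0xA001, init 0), the same
 * algorithm as the kernel's lib/crc16.c used by device_fw_load(). */
static unsigned short crc16_arc(unsigned short crc, const unsigned char *data,
                                size_t len)
{
    size_t i;
    int bit;

    for (i = 0; i < len; i++) {
        crc ^= data[i];
        for (bit = 0; bit < 8; bit++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xA001 : crc >> 1;
    }
    return crc;
}
```

The kernel version is table-driven for speed, but produces identical values; the standard check value for this CRC over "123456789" is 0xBB3D.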
+
+static int is_booting(void)
+{
+ unsigned long long t;
+
+ t = cpu_clock(smp_processor_id());
+ do_div(t, 1000000000);
+ return (t < 30) ? 1 : 0;
+}
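is_booting() divides the nanosecond CPU clock by 10^9 with do_div() and treats the first 30 seconds after boot specially. A userspace sketch of the same decision, where plain 64-bit division stands in for do_div():

```c
/* Userspace equivalent of is_booting(): a nanosecond clock value is
 * "booting" during the first 30 seconds of uptime. */
static int is_booting_ns(unsigned long long ns)
{
    unsigned long long sec = ns / 1000000000ULL;   /* do_div() analogue */

    return sec < 30 ? 1 : 0;
}
```

do_div() exists because 64-by-32 division is not natively available on all 32-bit architectures the kernel supports; it divides in place and returns the remainder.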
+
+static int compare_u16_arrays(u16 *buf1, u16 *buf2, u16 n)
+{
+ int i;
+ for (i = 0; i < n; i++) {
+ if (buf1[i] != buf2[i])
+ return 1;
+ }
+ return 0;
+}
+
+u16 calculate_checksum(u16 *buf, u16 n)
+{
+ u16 i, cs = 0;
+ for (i = 0; i < n; i++)
+ cs += buf[i];
+ return cs;
+}
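calculate_checksum() is a plain 16-bit additive checksum with natural wraparound; update_config() uses it both to verify the image-factor table on chip and to seal each chunk before download. An equivalent sketch:

```c
#include <stdint.h>

/* 16-bit additive checksum with natural modulo-2^16 wraparound, as
 * used to seal each image-factor chunk sent to the chip. */
static uint16_t checksum16(const uint16_t *buf, uint16_t n)
{
    uint16_t i, cs = 0;

    for (i = 0; i < n; i++)
        cs += buf[i];   /* unsigned overflow wraps, by design */
    return cs;
}
```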
+
+static void update_config(struct data *ts)
+{
+ int i, ret;
+ u16 reload_touch_config = 0, reload_calib_table = 0, reload_private_config = 0;
+ u16 reload_lookup_x = 0, reload_lookup_y = 0, reload_imagefactor_table = 0;
+ u16 mtpdata[] = {0x0000, 0x0000, 0x0000};
+ u16 imagefactor_data[104];
+
+ /* configure the chip */
+ if (ts->max11871_Touch_Configuration_Data) {
+ for (i = 0; i < RETRY_TIMES; i++) {
+ DISABLE_IRQ();
+ ret = get_touch_config(ts->client);
+ if (ret < 0)
+ pr_info("[W] Failed to retrieve touch config");
+ if (ret == 0) {
+ ret = get_report(ts, 0x0102, 150);
+ if (ret != 0)
+ pr_info("[W] Get touch config time out-%d, retry", i);
+ if (ret == 0) {
+ if (compare_u16_arrays(&ts->rx_report[2],
+ &ts->max11871_Touch_Configuration_Data[1], config_num[ts->config_protocol][0]+1)!=0) {
+ pr_info_if(8, "(Config): Touch Configuration Data mismatch");
+ reload_touch_config = 1;
+ } else {
+ pr_info_if(8, "(Config): Touch Configuration Data okay");
+ }
+ release_report(ts);
+ break;
+ }
+ }
+ }
+ if (i == RETRY_TIMES && ret != 0)
+ pr_err("Failed to receive touch config report");
+ }
+
+ if (ts->max11871_Calibration_Table_Data) {
+ for (i = 0; i < RETRY_TIMES; i++) {
+ DISABLE_IRQ();
+ /* Get calibration table */
+ mtpdata[0] = 0x0011;
+ mtpdata[1] = 0x0000;
+ ret = send_mtp_command(ts, mtpdata, 2);
+ if (ret < 0)
+ pr_info("[W] Failed to retrieve calibration table");
+ if (ret == 0) {
+ ret = get_report(ts, 0x0111, 150);
+ if (ret != 0)
+ pr_info("[W] Get calibration table time out-%d, retry", i);
+ if (ret == 0) {
+ if (compare_u16_arrays(&ts->rx_report[2],
+ &ts->max11871_Calibration_Table_Data[1], config_num[ts->config_protocol][1]+1)!=0) {
+ pr_info_if(8, "(Config): Calibration Table Data mismatch");
+ reload_calib_table = 1;
+ } else {
+ pr_info_if(8, "(Config): Calibration Table Data okay");
+ }
+ release_report(ts);
+ break;
+ }
+ }
+ }
+ if (i == RETRY_TIMES && ret != 0)
+ pr_err("Failed to receive calibration table report");
+ }
+
+ if (ts->max11871_Private_Configuration_Data) {
+ for (i = 0; i < RETRY_TIMES; i++) {
+ DISABLE_IRQ();
+ /* Get private configuration */
+ mtpdata[0] = 0x0004;
+ mtpdata[1] = 0x0000;
+ ret = send_mtp_command(ts, mtpdata, 2);
+ if (ret < 0)
+ pr_info("[W] Failed to retrieve private config");
+ if (ret == 0) {
+ ret = get_report(ts, 0x0104, 150);
+ if (ret != 0)
+ pr_info("[W] Get private config time out-%d, retry", i);
+ if (ret == 0) {
+ if (compare_u16_arrays(&ts->rx_report[2],
+ &ts->max11871_Private_Configuration_Data[1], config_num[ts->config_protocol][2]+1)!=0) {
+ pr_info_if(8, "(Config): Private Configuration Data mismatch");
+ reload_private_config = 1;
+ } else {
+ pr_info_if(8, "(Config): Private Configuration Data okay");
+ }
+ release_report(ts);
+ break;
+ }
+ }
+ }
+ if (i == RETRY_TIMES && ret != 0)
+ pr_err("Failed to receive private config report");
+ }
+
+ if (ts->max11871_Lookup_Table_X_Data) {
+ for (i = 0; i < RETRY_TIMES; i++) {
+ DISABLE_IRQ();
+ /* Get Lookup table X */
+ mtpdata[0] = 0x0031;
+ mtpdata[1] = 0x0001;
+ mtpdata[2] = 0x0000;
+ ret = send_mtp_command(ts, mtpdata, 3);
+ if (ret < 0)
+ pr_info("[W] Failed to retrieve Lookup table X");
+ if (ret == 0) {
+ ret = get_report(ts, 0x0131, 150);
+ if (ret != 0)
+ pr_info("[W] Get Lookup table X time out-%d, retry", i);
+ if (ret == 0) {
+ if (compare_u16_arrays(&ts->rx_report[3],
+ &ts->max11871_Lookup_Table_X_Data[3], config_num[ts->config_protocol][3])!=0) {
+ pr_info_if(8, "(Config): Lookup Table X Data mismatch");
+ reload_lookup_x = 1;
+ } else {
+ pr_info_if(8, "(Config): Lookup Table X Data okay");
+ }
+ release_report(ts);
+ break;
+ }
+ }
+ }
+ if (i == RETRY_TIMES && ret != 0)
+ pr_err("Failed to receive Lookup table X report");
+ }
+
+ if (ts->max11871_Lookup_Table_Y_Data) {
+ for (i = 0; i < RETRY_TIMES; i++) {
+ DISABLE_IRQ();
+ /* Get Lookup table Y */
+ mtpdata[0] = 0x0031;
+ mtpdata[1] = 0x0001;
+ mtpdata[2] = 0x0001;
+ ret = send_mtp_command(ts, mtpdata, 3);
+ if (ret < 0)
+ pr_info("[W] Failed to retrieve Lookup table Y");
+ if (ret == 0) {
+ ret = get_report(ts, 0x0131, 150);
+ if (ret != 0)
+ pr_info("[W] Get Lookup table Y time out-%d, retry", i);
+ if (ret == 0) {
+ if (compare_u16_arrays(&ts->rx_report[3],
+ &ts->max11871_Lookup_Table_Y_Data[3], config_num[ts->config_protocol][3])!=0) {
+ pr_info_if(8, "(Config): Lookup Table Y Data mismatch");
+ reload_lookup_y = 1;
+ } else {
+ pr_info_if(8, "(Config): Lookup Table Y Data okay");
+ }
+ release_report(ts);
+ break;
+ }
+ }
+ }
+ if (i == RETRY_TIMES && ret != 0)
+ pr_err("Failed to receive Lookup table Y report");
+ }
+
+ if (ts->max11871_Image_Factor_Table && config_num[ts->config_protocol][4]) {
+ for (i = 0; i < RETRY_TIMES; i++) {
+ DISABLE_IRQ();
+ /* Get Image Factor Table */
+ mtpdata[0] = 0x0047;
+ mtpdata[1] = 0x0000;
+ ret = send_mtp_command(ts, mtpdata, 2);
+ if (ret < 0)
+ pr_info("[W] Failed to retrieve Image Factor Table");
+ if (ret == 0) {
+ ret = get_report(ts, 0x0147, 150);
+ if (ret != 0)
+ pr_info("[W] Get Image Factor Table time out-%d, retry", i);
+ if (ret == 0) {
+ if (ts->rx_report[3] !=
+ calculate_checksum(ts->max11871_Image_Factor_Table, 460)) {
+ pr_info_if(8, "(Config): Image Factor Table mismatch");
+ reload_imagefactor_table = 1;
+ } else {
+ pr_info_if(8, "(Config): Image Factor Table okay");
+ }
+ release_report(ts);
+ break;
+ }
+ }
+ }
+ if (i == RETRY_TIMES && ret != 0)
+ pr_err("Failed to receive Image Factor Table report");
+ }
+
+ /* Configuration check has been done.
+ * Now download correct configurations if required.
+ */
+ if (reload_touch_config) {
+ pr_info_if(8, "(Config): Update Configuration Table");
+ DISABLE_IRQ();
+ ret = send_mtp_command(ts, ts->max11871_Touch_Configuration_Data, config_num[ts->config_protocol][0]+2);
+ if (ret < 0)
+ pr_err("Failed to send Touch Config");
+ msleep(100);
+ ENABLE_IRQ();
+ }
+ if (reload_calib_table) {
+ pr_info_if(8, "(Config): Update Calibration Table");
+ DISABLE_IRQ();
+ ret = send_mtp_command(ts, ts->max11871_Calibration_Table_Data, config_num[ts->config_protocol][1]+2);
+ if (ret < 0)
+ pr_err("Failed to send Calib Table");
+ msleep(100);
+ ENABLE_IRQ();
+ }
+ if (reload_private_config) {
+ pr_info_if(8, "(Config): Update Private Configuration Table");
+ DISABLE_IRQ();
+ ret = send_mtp_command(ts, ts->max11871_Private_Configuration_Data, config_num[ts->config_protocol][2]+2);
+ if (ret < 0)
+ pr_err("Failed to send Private Config");
+ msleep(100);
+ ENABLE_IRQ();
+ }
+ if (reload_lookup_x) {
+ pr_info_if(8, "(Config): Update Lookup Table X");
+ DISABLE_IRQ();
+ ret = send_mtp_command(ts, ts->max11871_Lookup_Table_X_Data, config_num[ts->config_protocol][3]+3);
+ if (ret < 0)
+ pr_err("Failed to send Lookup Table X");
+ msleep(100);
+ ENABLE_IRQ();
+ }
+ if (reload_lookup_y) {
+ pr_info_if(8, "(Config): Update Lookup Table Y");
+ DISABLE_IRQ();
+ ret = send_mtp_command(ts, ts->max11871_Lookup_Table_Y_Data, config_num[ts->config_protocol][3]+3);
+ if (ret < 0)
+ pr_err("Failed to send Lookup Table Y");
+ msleep(100);
+ ENABLE_IRQ();
+ }
+ if (reload_imagefactor_table && config_num[ts->config_protocol][4]) {
+ pr_info_if(8, "(Config): Update Image Factor Table");
+ DISABLE_IRQ();
+ /* words 0-59 */
+ imagefactor_data[0] = 0x0046;
+ imagefactor_data[1] = 0x003E;
+ imagefactor_data[2] = 0x0000;
+ memcpy(imagefactor_data+3, ts->max11871_Image_Factor_Table, 60 << 1);
+ imagefactor_data[63] = calculate_checksum(imagefactor_data+2, 61);
+ send_mtp_command(ts, imagefactor_data, 64);
+ msleep(100);
+ /* words 60-159 */
+ imagefactor_data[0] = 0x0046;
+ imagefactor_data[1] = 0x0066;
+ imagefactor_data[2] = 0x003C;
+ memcpy(imagefactor_data+3, ts->max11871_Image_Factor_Table+60, 100 << 1);
+ imagefactor_data[103] = calculate_checksum(imagefactor_data+2, 101);
+ send_mtp_command(ts, imagefactor_data, 104);
+ msleep(100);
+ /* words 160-259 */
+ imagefactor_data[0] = 0x0046;
+ imagefactor_data[1] = 0x0066;
+ imagefactor_data[2] = 0x00A0;
+ memcpy(imagefactor_data+3, ts->max11871_Image_Factor_Table+160, 100 << 1);
+ imagefactor_data[103] = calculate_checksum(imagefactor_data+2, 101);
+ send_mtp_command(ts, imagefactor_data, 104);
+ msleep(100);
+ /* words 260-359 */
+ imagefactor_data[0] = 0x0046;
+ imagefactor_data[1] = 0x0066;
+ imagefactor_data[2] = 0x0104;
+ memcpy(imagefactor_data+3, ts->max11871_Image_Factor_Table+260, 100 << 1);
+ imagefactor_data[103] = calculate_checksum(imagefactor_data+2, 101);
+ send_mtp_command(ts, imagefactor_data, 104);
+ msleep(100);
+ /* words 360-459 */
+ imagefactor_data[0] = 0x0046;
+ imagefactor_data[1] = 0x0066;
+ imagefactor_data[2] = 0x8168;
+ memcpy(imagefactor_data+3, ts->max11871_Image_Factor_Table+360, 100 << 1);
+ imagefactor_data[103] = calculate_checksum(imagefactor_data+2, 101);
+ send_mtp_command(ts, imagefactor_data, 104);
+ msleep(100);
+ ENABLE_IRQ();
+ }
+ if (reload_touch_config || reload_calib_table || reload_private_config ||
+ reload_lookup_x || reload_lookup_y || reload_imagefactor_table) {
+ DISABLE_IRQ();
+ if (sreset(ts->client) != 0) {
+ pr_err("Failed to do soft reset.");
+ return;
+ }
+ collect_chip_data(ts);
+ if (ts->have_fw == 0 || ts->have_touchcfg == 0) {
+ pr_err("firmware is unresponsive or inconsistent and "\
+ "no valid configuration is present");
+ return;
+ }
+ pr_info_if(8, "(INIT): Update config complete");
+ }
+}
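The image-factor download above frames each chunk as a command word (0x0046), a length word, an offset word, the payload, and a checksum computed over the offset word plus payload. A sketch of that framing (the helper name is illustrative; the layout mirrors what update_config() builds):

```c
#include <stdint.h>
#include <string.h>

/* Build one image-factor chunk frame as update_config() does:
 * [cmd=0x0046][len][offset][payload...][checksum over offset+payload].
 * Returns the total number of 16-bit words written into frame. */
static int build_if_chunk(uint16_t *frame, uint16_t offset,
                          const uint16_t *payload, uint16_t nwords)
{
    uint16_t i, cs;

    frame[0] = 0x0046;
    frame[1] = nwords + 2;              /* offset + payload + checksum */
    frame[2] = offset;
    memcpy(frame + 3, payload, nwords * sizeof(uint16_t));
    cs = 0;
    for (i = 0; i < nwords + 1; i++)    /* offset word plus payload */
        cs += frame[2 + i];
    frame[3 + nwords] = cs;
    return nwords + 4;
}
```

For the driver's 60-word first chunk this yields a length word of 0x003E (62) and a 64-word frame, matching the literals in the code above.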
+
+static int check_bin_version(const struct firmware *fw, int *tagLen, char *fw_ver)
+{
+ char tag[40];
+ int i = 0;
+
+ if (fw->data[0] == 'T' && fw->data[1] == 'P') {
+ while (i < sizeof(tag) - 1 && (tag[i] = fw->data[i]) != '\n')
+ i++;
+ tag[i] = '\0';
+ *tagLen = i+1;
+ pr_info_if(8, "(INIT): tag=%s", tag);
+ if (strstr(tag, fw_ver) != NULL) {
+ pr_info_if(8, "(INIT): Update Bypass");
+ return 0; /* bypass */
+ }
+ }
+
+ pr_info_if(8, "(INIT): Need Update");
+ return 1;
+}
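check_bin_version() reads an optional "TP..." tag line from the head of the firmware file and bypasses the download when the running version string appears in it. A bounded userspace sketch of the same parse (the function name is illustrative; the copy is limited to both the tag buffer and the image size):

```c
#include <string.h>
#include <stddef.h>

/* Decide whether a firmware image needs flashing, as check_bin_version()
 * does: copy a leading "TP..." tag up to the first '\n' (bounded), then
 * bypass if the running version string occurs in the tag. Returns 1 if
 * an update is needed, 0 to bypass; *tag_len gets the header length. */
static int needs_update(const unsigned char *data, size_t size,
                        const char *fw_ver, int *tag_len)
{
    char tag[40];
    size_t i = 0;

    *tag_len = 0;
    if (size >= 2 && data[0] == 'T' && data[1] == 'P') {
        while (i < sizeof(tag) - 1 && i < size && (tag[i] = data[i]) != '\n')
            i++;
        tag[i] = '\0';
        *tag_len = (int)(i + 1);
        if (strstr(tag, fw_ver) != NULL)
            return 0;   /* version already on chip: bypass */
    }
    return 1;
}
```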
+
+static void check_fw_and_config(struct data *ts)
+{
+ u16 config_id, chip_id;
+ const struct firmware *fw;
+ int i, ret, tagLen = 0;
+
+ sreset(ts->client);
+ collect_chip_data(ts);
+ if ((ts->have_fw == 0 || ts->have_touchcfg == 0) &&
+ PDATA(defaults_allow) == 0) {
+ pr_err("firmware is unresponsive or inconsistent "\
+ "and default selections are disabled");
+ return;
+ }
+ config_id = ts->have_touchcfg ? ts->touch_config[2]
+ : PDATA(default_config_id);
+ chip_id = ts->have_fw ? BYTEH(ts->fw_version[3]) : \
+ PDATA(default_chip_id);
+
+ if (PDATA(update_feature) & MAX1187X_UPDATE_BIN) {
+ for (i = 0; i < PDATA(num_fw_mappings); i++) {
+ if (PDATA(fw_mapping[i]).chip_id == chip_id)
+ break;
+ }
+
+ if (i == PDATA(num_fw_mappings)) {
+ pr_err("FW not found for configID(0x%04X) and chipID(0x%04X)",
+ config_id, chip_id);
+ return;
+ }
+
+ pr_info_if(8, "(INIT): Firmware file (%s)",
+ PDATA(fw_mapping[i]).filename);
+
+ ret = request_firmware(&fw, PDATA(fw_mapping[i]).filename,
+ &ts->client->dev);
+
+ if (ret || fw == NULL) {
+ pr_err("firmware request failed (ret = %d, fwptr = %p)",
+ ret, fw);
+ return;
+ }
+
+ if (check_bin_version(fw, &tagLen, ts->fw_ver)) {
+ if (device_fw_load(ts, fw, i, tagLen)) {
+ release_firmware(fw);
+ pr_err("firmware download failed");
+ return;
+ }
+ }
+
+ release_firmware(fw);
+ pr_info_if(8, "(INIT): firmware download OK");
+ }
+
+ /* configure the chip */
+ if (PDATA(update_feature) & MAX1187X_UPDATE_CONFIG) {
+ while (ts->fw_config->config_id != 0) {
+ if (ts->fw_config->protocol_ver != ts->protocol_ver) {
+ ts->fw_config++;
+ continue;
+ }
+ if (ts->fw_config->major_ver > BYTEH(ts->fw_version[2])) {
+ ts->fw_config++;
+ continue;
+ }
+ if (ts->fw_config->minor_ver > BYTEL(ts->fw_version[2])) {
+ ts->fw_config++;
+ continue;
+ }
+ if (PDATA(tw_mask) && ts->fw_config->vendor_pin) {
+ if (ts->fw_config->vendor_pin != ts->vendor_pin) {
+ ts->fw_config++;
+ continue;
+ }
+ }
+ if (ts->fw_config->protocol_ver <= 7)
+ ts->config_protocol = 0;
+ else
+ ts->config_protocol = 1;
+ if (ts->fw_config->config_touch) {
+ ts->max11871_Touch_Configuration_Data[0] = 0x0001;
+ ts->max11871_Touch_Configuration_Data[1] = config_num[ts->config_protocol][0];
+ memcpy(ts->max11871_Touch_Configuration_Data+2,
+ ts->fw_config->config_touch, BYTE_SIZE(config_num[ts->config_protocol][0]));
+ }
+ if (ts->fw_config->config_cal) {
+ ts->max11871_Calibration_Table_Data[0] = 0x0010;
+ ts->max11871_Calibration_Table_Data[1] = config_num[ts->config_protocol][1];
+ memcpy(ts->max11871_Calibration_Table_Data+2,
+ ts->fw_config->config_cal, BYTE_SIZE(config_num[ts->config_protocol][1]));
+ }
+ if (ts->fw_config->config_private) {
+ ts->max11871_Private_Configuration_Data[0] = 0x0003;
+ ts->max11871_Private_Configuration_Data[1] = config_num[ts->config_protocol][2];
+ memcpy(ts->max11871_Private_Configuration_Data+2,
+ ts->fw_config->config_private, BYTE_SIZE(config_num[ts->config_protocol][2]));
+ }
+ if (ts->fw_config->config_lin_x) {
+ ts->max11871_Lookup_Table_X_Data[0] = 0x0030;
+ ts->max11871_Lookup_Table_X_Data[1] = config_num[ts->config_protocol][3]+1;
+ ts->max11871_Lookup_Table_X_Data[2] = 0x0000;
+ memcpy(ts->max11871_Lookup_Table_X_Data+3,
+ ts->fw_config->config_lin_x, BYTE_SIZE(config_num[ts->config_protocol][3]));
+ }
+ if (ts->fw_config->config_lin_y) {
+ ts->max11871_Lookup_Table_Y_Data[0] = 0x0030;
+ ts->max11871_Lookup_Table_Y_Data[1] = config_num[ts->config_protocol][3]+1;
+ ts->max11871_Lookup_Table_Y_Data[2] = 0x0001;
+ memcpy(ts->max11871_Lookup_Table_Y_Data+3,
+ ts->fw_config->config_lin_y, BYTE_SIZE(config_num[ts->config_protocol][3]));
+ }
+ if (ts->fw_config->config_ifactor && config_num[ts->config_protocol][4]) {
+ memcpy(ts->max11871_Image_Factor_Table,
+ ts->fw_config->config_ifactor, BYTE_SIZE(MAX1187X_IMAGE_FACTOR_MAX));
+ }
+ break;
+ }
+ if (ts->fw_config->config_id != 0) {
+ update_config(ts);
+ pr_info_if(8, "(INIT): Check config finish");
+ }
+ }
+
+ ENABLE_IRQ();
+
+ if (change_touch_rpt(ts->client, PDATA(report_mode)) < 0) {
+ pr_err("Failed to set up touch report mode");
+ return;
+ }
+}
+
+/* #ifdef CONFIG_OF */
+static struct max1187x_pdata *max1187x_get_platdata_dt(struct device *dev)
+{
+ struct max1187x_pdata *pdata = NULL;
+ struct device_node *devnode = dev->of_node;
+ u32 i;
+ u32 datalist[MAX1187X_NUM_FW_MAPPINGS_MAX];
+
+ if (!devnode)
+ return NULL;
+
+ pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL);
+ if (!pdata) {
+ pr_err("Failed to allocate memory for pdata\n");
+ return NULL;
+ }
+
+ /* Parse gpio_tirq */
+ if (of_property_read_u32(devnode, "gpio_tirq", &pdata->gpio_tirq)) {
+ pr_err("Failed to get property: gpio_tirq\n");
+ goto err_max1187x_get_platdata_dt;
+ }
+
+ /* Parse gpio_reset */
+ if (of_property_read_u32(devnode, "gpio_reset", &pdata->gpio_reset)) {
+ pr_err("Failed to get property: gpio_reset\n");
+ goto err_max1187x_get_platdata_dt;
+ }
+
+ /* Parse num_fw_mappings */
+ if (of_property_read_u32(devnode, "num_fw_mappings",
+ &pdata->num_fw_mappings)) {
+ pr_err("Failed to get property: num_fw_mappings\n");
+ goto err_max1187x_get_platdata_dt;
+ }
+
+ if (pdata->num_fw_mappings > MAX1187X_NUM_FW_MAPPINGS_MAX)
+ pdata->num_fw_mappings = MAX1187X_NUM_FW_MAPPINGS_MAX;
+
+ /* Parse chip_id */
+ if (of_property_read_u32_array(devnode, "chip_id", datalist,
+ pdata->num_fw_mappings)) {
+ pr_err("Failed to get property: chip_id\n");
+ goto err_max1187x_get_platdata_dt;
+ }
+
+ for (i = 0; i < pdata->num_fw_mappings; i++)
+ pdata->fw_mapping[i].chip_id = datalist[i];
+
+ /* Parse filename */
+ for (i = 0; i < pdata->num_fw_mappings; i++) {
+ if (of_property_read_string_index(devnode, "filename", i,
+ (const char **) &pdata->fw_mapping[i].filename)) {
+ pr_err("Failed to get property: "\
+ "filename[%d]\n", i);
+ goto err_max1187x_get_platdata_dt;
+ }
+ }
+
+ /* Parse filesize */
+ if (of_property_read_u32_array(devnode, "filesize", datalist,
+ pdata->num_fw_mappings)) {
+ pr_err("Failed to get property: filesize\n");
+ goto err_max1187x_get_platdata_dt;
+ }
+
+ for (i = 0; i < pdata->num_fw_mappings; i++)
+ pdata->fw_mapping[i].filesize = datalist[i];
+
+ /* Parse file_codesize */
+ if (of_property_read_u32_array(devnode, "file_codesize", datalist,
+ pdata->num_fw_mappings)) {
+ pr_err("Failed to get property: file_codesize\n");
+ goto err_max1187x_get_platdata_dt;
+ }
+
+ for (i = 0; i < pdata->num_fw_mappings; i++)
+ pdata->fw_mapping[i].file_codesize = datalist[i];
+
+ /* Parse defaults_allow */
+ if (of_property_read_u32(devnode, "defaults_allow",
+ &pdata->defaults_allow)) {
+ pr_err("Failed to get property: defaults_allow\n");
+ goto err_max1187x_get_platdata_dt;
+ }
+
+ /* Parse default_config_id */
+ if (of_property_read_u32(devnode, "default_config_id",
+ &pdata->default_config_id)) {
+ pr_err("Failed to get property: default_config_id\n");
+ goto err_max1187x_get_platdata_dt;
+ }
+
+ /* Parse default_chip_id */
+ if (of_property_read_u32(devnode, "default_chip_id",
+ &pdata->default_chip_id)) {
+ pr_err("Failed to get property: default_chip_id\n");
+ goto err_max1187x_get_platdata_dt;
+ }
+
+ /* Parse i2c_words */
+ if (of_property_read_u32(devnode, "i2c_words", &pdata->i2c_words)) {
+ pr_err("Failed to get property: i2c_words\n");
+ goto err_max1187x_get_platdata_dt;
+ }
+
+ /* Parse coordinate_settings */
+ if (of_property_read_u32(devnode, "coordinate_settings",
+ &pdata->coordinate_settings)) {
+ pr_err("Failed to get property: coordinate_settings\n");
+ goto err_max1187x_get_platdata_dt;
+ }
+
+ /* Parse panel_max_x */
+ if (of_property_read_u32(devnode, "panel_max_x",
+ &pdata->panel_max_x)) {
+ pr_err("Failed to get property: panel_max_x\n");
+ goto err_max1187x_get_platdata_dt;
+ }
+
+ /* Parse panel_min_x */
+ if (of_property_read_u32(devnode, "panel_min_x",
+ &pdata->panel_min_x)) {
+ pr_err("Failed to get property: panel_min_x\n");
+ goto err_max1187x_get_platdata_dt;
+ }
+
+ /* Parse panel_max_y */
+ if (of_property_read_u32(devnode, "panel_max_y",
+ &pdata->panel_max_y)) {
+ pr_err("Failed to get property: panel_max_y\n");
+ goto err_max1187x_get_platdata_dt;
+ }
+
+ /* Parse panel_min_y */
+ if (of_property_read_u32(devnode, "panel_min_y",
+ &pdata->panel_min_y)) {
+ pr_err("Failed to get property: panel_min_y\n");
+ goto err_max1187x_get_platdata_dt;
+ }
+
+ /* Parse lcd_x */
+ if (of_property_read_u32(devnode, "lcd_x", &pdata->lcd_x)) {
+ pr_err("Failed to get property: lcd_x\n");
+ goto err_max1187x_get_platdata_dt;
+ }
+
+ /* Parse lcd_y */
+ if (of_property_read_u32(devnode, "lcd_y", &pdata->lcd_y)) {
+ pr_err("Failed to get property: lcd_y\n");
+ goto err_max1187x_get_platdata_dt;
+ }
+
+ /* Parse num_rows */
+ if (of_property_read_u32(devnode, "num_rows",
+ &pdata->num_rows)) {
+ pr_err("Failed to get property: num_rows\n");
+ goto err_max1187x_get_platdata_dt;
+ }
+
+ /* Parse num_cols */
+ if (of_property_read_u32(devnode, "num_cols",
+ &pdata->num_cols)) {
+ pr_err("Failed to get property: num_cols\n");
+ goto err_max1187x_get_platdata_dt;
+ }
+
+ /* Parse button_code0 */
+ if (of_property_read_u32(devnode, "button_code0",
+ &pdata->button_code0)) {
+ pr_err("Failed to get property: button_code0\n");
+ goto err_max1187x_get_platdata_dt;
+ }
+
+ /* Parse button_code1 */
+ if (of_property_read_u32(devnode, "button_code1",
+ &pdata->button_code1)) {
+ pr_err("Failed to get property: button_code1\n");
+ goto err_max1187x_get_platdata_dt;
+ }
+
+ /* Parse button_code2 */
+ if (of_property_read_u32(devnode, "button_code2",
+ &pdata->button_code2)) {
+ pr_err("Failed to get property: button_code2\n");
+ goto err_max1187x_get_platdata_dt;
+ }
+
+ /* Parse button_code3 */
+ if (of_property_read_u32(devnode, "button_code3",
+ &pdata->button_code3)) {
+ pr_err("Failed to get property: button_code3\n");
+ goto err_max1187x_get_platdata_dt;
+ }
+
+ return pdata;
+
+err_max1187x_get_platdata_dt:
+ devm_kfree(dev, pdata);
+ return NULL;
+}
+/*
+#else
+static inline struct max1187x_pdata *
+ max1187x_get_platdata_dt(struct device *dev)
+{
+ return NULL;
+}
+#endif
+*/
+
+static int validate_pdata(struct max1187x_pdata *pdata)
+{
+ if (pdata == NULL) {
+ pr_err("Platform data not found!\n");
+ goto err_validate_pdata;
+ }
+
+ if (pdata->gpio_tirq == 0) {
+ pr_err("gpio_tirq (%u) not defined!\n", pdata->gpio_tirq);
+ goto err_validate_pdata;
+ }
+
+ if (pdata->num_rows == 0 || pdata->num_rows > 40) {
+ pr_err("num_rows (%u) out of range!\n", pdata->num_rows);
+ goto err_validate_pdata;
+ }
+
+ if (pdata->num_cols == 0 || pdata->num_cols > 40) {
+ pr_err("num_cols (%u) out of range!\n", pdata->num_cols);
+ goto err_validate_pdata;
+ }
+
+ return 0;
+
+err_validate_pdata:
+ return -ENXIO;
+}
+
+static int max1187x_chip_init(struct max1187x_pdata *pdata, int value)
+{
+ int ret;
+
+ if (value) {
+ if (pdata->gpio_reset) {
+ ret = gpio_request(pdata->gpio_reset, "max1187x_reset");
+ if (ret) {
+ pr_err("GPIO request failed for max1187x_reset (%d)\n",
+ pdata->gpio_reset);
+ return -EIO;
+ }
+ gpio_direction_output(pdata->gpio_reset, 1);
+ }
+ ret = gpio_request(pdata->gpio_tirq, "max1187x_tirq");
+ if (ret) {
+ pr_err("GPIO request failed for max1187x_tirq (%d)\n",
+ pdata->gpio_tirq);
+ return -EIO;
+ }
+ ret = gpio_direction_input(pdata->gpio_tirq);
+ if (ret) {
+			pr_err("GPIO set input direction failed for max1187x_tirq (%d)\n",
+				pdata->gpio_tirq);
+ gpio_free(pdata->gpio_tirq);
+ return -EIO;
+ }
+ } else {
+ gpio_free(pdata->gpio_tirq);
+ if (pdata->gpio_reset)
+ gpio_free(pdata->gpio_reset);
+ }
+
+ return 0;
+}
+
+static int device_init_thread(void *arg)
+{
+ return device_init((struct i2c_client *) arg);
+}
+
+static int device_init(struct i2c_client *client)
+{
+ struct device *dev = &client->dev;
+ struct data *ts = NULL;
+ struct max1187x_pdata *pdata = NULL;
+ struct device_attribute **dev_attr = dev_attrs;
+ int ret = 0;
+
+ init_state = 1;
+ dev_info(dev, "(INIT): Start");
+
+
+ /* allocate control block; nothing more to do if we can't */
+ ts = kzalloc(sizeof(*ts), GFP_KERNEL);
+ if (!ts) {
+ pr_err("Failed to allocate control block memory");
+ ret = -ENOMEM;
+ goto err_device_init;
+ }
+
+ /* Get platform data */
+#ifdef MAX1187X_LOCAL_PDATA
+	pdata = &local_pdata;
+#else
+
+ if (client->dev.of_node)
+ pdata = max1187x_get_platdata_dt(dev);
+ else
+ pdata = dev_get_platdata(dev);
+
+ /* Validate if pdata values are okay */
+ ret = validate_pdata(pdata);
+ if (ret < 0)
+ goto err_device_init_pdata;
+ pr_info_if(8, "(INIT): Platform data OK");
+#endif
+
+ ts->pdata = pdata;
+ ts->fw_config = pdata->fw_config;
+ ts->client = client;
+ i2c_set_clientdata(client, ts);
+ mutex_init(&ts->irq_mutex);
+ mutex_init(&ts->i2c_mutex);
+ mutex_init(&ts->report_mutex);
+ sema_init(&ts->report_sem, 1);
+ ts->fw_index = -1;
+ ts->noise_level = 0;
+ ts->baseline_mode = MAX1187X_AUTO_BASELINE;
+ ts->button0 = 0;
+ ts->button1 = 0;
+ ts->button2 = 0;
+ ts->button3 = 0;
+ if (PDATA(button_data))
+ ts->button_data = PDATA(button_data);
+ if (!PDATA(report_mode))
+ PDATA(report_mode) = MAX1187X_REPORT_MODE_BASIC;
+
+ atomic_set(&ts->scheduled_work_irq, 0);
+
+ pr_info_if(8, "(INIT): Memory allocation OK");
+
+ /*debug_mask |= BIT(3);*/
+ pr_info_if(8, "(INIT): Debug level=0x%08X", debug_mask);
+
+ /* Initialize GPIO pins */
+ if (max1187x_chip_init(ts->pdata, 1) < 0) {
+ ret = -EIO;
+ goto err_device_init_gpio;
+ }
+ pr_info_if(8, "(INIT): chip init OK");
+
+ /* Setup IRQ and handler */
+ if (request_threaded_irq(client->irq, NULL, irq_handler,
+ IRQF_TRIGGER_FALLING | IRQF_ONESHOT, client->name, ts) != 0) {
+ pr_err("Failed to setup IRQ handler");
+ ret = -EIO;
+ goto err_device_init_gpio;
+ }
+ pr_info_if(8, "(INIT): IRQ handler OK");
+
+ /* collect controller ID and configuration ID data from firmware */
+ /* and perform firmware comparison/download if we have valid image */
+ check_fw_and_config(ts);
+
+ /* allocate and register touch device */
+ ts->input_dev = input_allocate_device();
+ if (!ts->input_dev) {
+ pr_err("Failed to allocate touch input device");
+ ret = -ENOMEM;
+ goto err_device_init_alloc_inputdev;
+ }
+ snprintf(ts->phys, sizeof(ts->phys), "%s/input0",
+ dev_name(dev));
+ ts->input_dev->name = MAX1187X_TOUCH;
+ ts->input_dev->phys = ts->phys;
+ ts->input_dev->id.bustype = BUS_I2C;
+ __set_bit(EV_SYN, ts->input_dev->evbit);
+ __set_bit(EV_ABS, ts->input_dev->evbit);
+ __set_bit(EV_KEY, ts->input_dev->evbit);
+ if (PDATA(input_protocol) == MAX1187X_PROTOCOL_B) {
+ input_mt_init_slots(ts->input_dev, MAX1187X_TOUCH_COUNT_MAX, 0);
+ } else {
+ input_set_abs_params(ts->input_dev, ABS_MT_TRACKING_ID, 0,
+ 10, 0, 0);
+ }
+	if (PDATA(input_protocol) == MAX1187X_PROTOCOL_CUSTOM1 ||
+			PDATA(support_htc_event)) {
+#if defined(ABS_MT_AMPLITUDE) && defined(ABS_MT_POSITION)
+		input_set_abs_params(ts->input_dev, ABS_MT_AMPLITUDE, 0,
+			0xFF14, 0, 0);
+		input_set_abs_params(ts->input_dev, ABS_MT_POSITION, 0,
+			((1U << 31) | (PDATA(panel_max_x) << 16) |
+			PDATA(panel_max_y)), 0, 0);
+#endif
+ }
+ input_set_abs_params(ts->input_dev, ABS_MT_POSITION_X,
+ PDATA(panel_min_x), PDATA(panel_max_x), 0, 0);
+ input_set_abs_params(ts->input_dev, ABS_MT_POSITION_Y,
+ PDATA(panel_min_y), PDATA(panel_max_y), 0, 0);
+ input_set_abs_params(ts->input_dev, ABS_MT_PRESSURE, 0, 0xFFFF, 0, 0);
+ if (PDATA(report_mode) == MAX1187X_REPORT_MODE_EXTEND)
+ input_set_abs_params(ts->input_dev, ABS_MT_WIDTH_MAJOR, 0, 0x64, 0, 0);
+#if 0
+ input_set_abs_params(ts->input_dev, ABS_MT_TOUCH_MAJOR,
+ 0, max(PDATA(panel_max_x)-PDATA(panel_min_x),
+ PDATA(panel_max_y)-PDATA(panel_min_y)), 0, 0);
+ input_set_abs_params(ts->input_dev, ABS_MT_TOUCH_MINOR,
+ 0, min(PDATA(panel_max_x)-PDATA(panel_min_x),
+ PDATA(panel_max_y)-PDATA(panel_min_y)), 0, 0);
+ input_set_abs_params(ts->input_dev, ABS_MT_ORIENTATION, -90, 90, 0, 0);
+#endif
+ if (PDATA(button_code0) != KEY_RESERVED)
+ set_bit(pdata->button_code0, ts->input_dev->keybit);
+ if (PDATA(button_code1) != KEY_RESERVED)
+ set_bit(pdata->button_code1, ts->input_dev->keybit);
+ if (PDATA(button_code2) != KEY_RESERVED)
+ set_bit(pdata->button_code2, ts->input_dev->keybit);
+ if (PDATA(button_code3) != KEY_RESERVED)
+ set_bit(pdata->button_code3, ts->input_dev->keybit);
+
+	ret = input_register_device(ts->input_dev);
+	if (ret) {
+		pr_err("Failed to register touch input device");
+		goto err_device_init_register_inputdev;
+	}
+ }
+ pr_info_if(8, "(INIT): Input touch device OK");
+
+	if (PDATA(panel_max_x) - PDATA(panel_min_x) != 0 &&
+			PDATA(panel_max_y) - PDATA(panel_min_y) != 0) {
+		ts->width_factor = (PDATA(lcd_x) << SHIFT_BITS) /
+			(PDATA(panel_max_x) - PDATA(panel_min_x));
+		ts->height_factor = (PDATA(lcd_y) << SHIFT_BITS) /
+			(PDATA(panel_max_y) - PDATA(panel_min_y));
+ ts->width_offset = PDATA(panel_min_x);
+ ts->height_offset = PDATA(panel_min_y);
+ }
+
+#ifdef CONFIG_HAS_EARLYSUSPEND
+ /* configure suspend/resume */
+ ts->early_suspend.level = EARLY_SUSPEND_LEVEL_STOP_DRAWING - 1;
+ ts->early_suspend.suspend = early_suspend;
+ ts->early_suspend.resume = late_resume;
+ register_early_suspend(&ts->early_suspend);
+ ts->early_suspend_registered = 1;
+ pr_info_if(8, "(INIT): suspend/resume registration OK");
+#endif
+
+ gl_ts = ts;
+ /* set up debug interface */
+ if (sysfs_create_file(android_touch_kobj, &dev_attr_debug_level.attr) < 0) {
+ pr_err("failed to create sysfs file [debug_level]");
+ return 0;
+ }
+ if (sysfs_create_file(android_touch_kobj, &dev_attr_vendor.attr) < 0) {
+ pr_err("failed to create sysfs file [vendor]");
+ return 0;
+ }
+ if (sysfs_create_file(android_touch_kobj, &dev_attr_config.attr) < 0) {
+ pr_err("failed to create sysfs file [config]");
+ return 0;
+ }
+ if (sysfs_create_file(android_touch_kobj, &dev_attr_gpio.attr) < 0) {
+ pr_err("failed to create sysfs file [gpio]");
+ return 0;
+ }
+ if (sysfs_create_file(android_touch_kobj, &dev_attr_diag.attr) < 0) {
+ pr_err("failed to create sysfs file [diag]");
+ return 0;
+ }
+ if (sysfs_create_file(android_touch_kobj, &dev_attr_unlock.attr) < 0) {
+ pr_err("failed to create sysfs file [unlock]");
+ return 0;
+ }
+	if (sysfs_create_file(android_touch_kobj, &dev_attr_set_I2C_OE.attr) < 0) {
+		pr_err("failed to create sysfs file [set_I2C_OE]");
+		return 0;
+	}
+	if (sysfs_create_file(android_touch_kobj, &dev_attr_set_I2C_SEL_CPU.attr) < 0) {
+		pr_err("failed to create sysfs file [set_I2C_SEL_CPU]");
+		return 0;
+	}
+ while (*dev_attr) {
+ if (device_create_file(&client->dev, *dev_attr) < 0) {
+ pr_err("failed to create sysfs file");
+ return 0;
+ }
+ ts->sysfs_created++;
+ dev_attr++;
+ }
+
+ if (device_create_bin_file(&client->dev, &dev_attr_report) < 0) {
+ pr_err("failed to create sysfs file [report]");
+ return 0;
+ }
+ ts->sysfs_created++;
+
+ pr_info("(INIT): Done\n");
+ return 0;
+
+err_device_init_register_inputdev:
+ input_free_device(ts->input_dev);
+ ts->input_dev = NULL;
+err_device_init_alloc_inputdev:
+err_device_init_gpio:
+err_device_init_pdata:
+ kfree(ts);
+err_device_init:
+ return ret;
+}
+
+static int device_deinit(struct i2c_client *client)
+{
+	struct data *ts = i2c_get_clientdata(client);
+	struct max1187x_pdata *pdata;
+	struct device_attribute **dev_attr = dev_attrs;
+
+	if (ts == NULL)
+		return 0;
+
+	pdata = ts->pdata;
+
+ propagate_report(ts, -1, NULL);
+
+ init_state = 0;
+ sysfs_remove_file(android_touch_kobj, &dev_attr_debug_level.attr);
+ sysfs_remove_file(android_touch_kobj, &dev_attr_vendor.attr);
+ sysfs_remove_file(android_touch_kobj, &dev_attr_config.attr);
+ sysfs_remove_file(android_touch_kobj, &dev_attr_gpio.attr);
+ sysfs_remove_file(android_touch_kobj, &dev_attr_diag.attr);
+ sysfs_remove_file(android_touch_kobj, &dev_attr_unlock.attr);
+	sysfs_remove_file(android_touch_kobj, &dev_attr_set_I2C_OE.attr);
+	sysfs_remove_file(android_touch_kobj, &dev_attr_set_I2C_SEL_CPU.attr);
+ while (*dev_attr) {
+ if (ts->sysfs_created && ts->sysfs_created--)
+ device_remove_file(&client->dev, *dev_attr);
+ dev_attr++;
+ }
+ if (ts->sysfs_created && ts->sysfs_created--)
+ device_remove_bin_file(&client->dev, &dev_attr_report);
+#ifdef CONFIG_HAS_EARLYSUSPEND
+ if (ts->early_suspend_registered)
+ unregister_early_suspend(&ts->early_suspend);
+#endif
+ if (ts->input_dev)
+ input_unregister_device(ts->input_dev);
+
+ if (client->irq)
+ free_irq(client->irq, ts);
+ (void) max1187x_chip_init(pdata, 0);
+ kfree(ts);
+
+ pr_info("(INIT): Deinitialized\n");
+ return 0;
+}
+
+static int check_chip_exist(struct i2c_client *client)
+{
+ char buf[32];
+ int read_len = 0, i;
+
+ /* if I2C functionality is not present we are done */
+	if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C)) {
+ pr_err("I2C core driver does not support I2C functionality");
+ return -1;
+ }
+
+	for (i = 0; i < RETRY_TIMES; i++) {
+ if (i != 0)
+ msleep(20);
+ buf[0] = 0x0A;
+ buf[1] = 0x00;
+ if (i2c_master_send(client, buf, 2) <= 0)
+ continue;
+ if (i2c_master_recv(client, buf, 2) <= 0)
+ continue;
+ read_len = (buf[0] + 1) << 1;
+		if (read_len > 10)
+			read_len = 10;
+ if (i2c_master_recv(client, buf, read_len) <= 0)
+ continue;
+ break;
+ }
+ if (i == RETRY_TIMES) {
+ pr_info("(PROBE) No Maxim chip");
+ return -1;
+ }
+
+ pr_info("(INIT): I2C functionality OK");
+ pr_info("(INIT): Chip exist");
+ return 0;
+}
+
+static int probe(struct i2c_client *client, const struct i2c_device_id *id)
+{
+ pr_info("(PROBE): max1187x_%s Enter", __func__);
+
+	if (check_chip_exist(client) < 0)
+		return -ENODEV;
+
+	android_touch_kobj = kobject_create_and_add("android_touch", NULL);
+	if (android_touch_kobj == NULL) {
+		pr_err("failed to create kobj");
+		return -ENOMEM;
+	}
+ if (device_create_file(&client->dev, &dev_attr_init) < 0) {
+ pr_err("failed to create sysfs file [init]");
+ return 0;
+ }
+ if (sysfs_create_link(android_touch_kobj, &client->dev.kobj, "maxim1187x") < 0) {
+ pr_err("failed to create link");
+ return 0;
+ }
+
+ if (!is_booting())
+ return device_init(client);
+ if (IS_ERR(kthread_run(device_init_thread, (void *) client,
+ MAX1187X_NAME))) {
+ pr_err("failed to start kernel thread");
+ return -EAGAIN;
+ }
+ return 0;
+}
+
+static int remove(struct i2c_client *client)
+{
+ int ret = device_deinit(client);
+
+ sysfs_remove_link(android_touch_kobj, "maxim1187x");
+ device_remove_file(&client->dev, &dev_attr_init);
+ kobject_del(android_touch_kobj);
+ return ret;
+}
+
+/*
+ COMMANDS
+ */
+static int sreset(struct i2c_client *client)
+{
+ struct data *ts = i2c_get_clientdata(client);
+ u16 data[] = { 0x00E9, 0x0000 };
+ return send_mtp_command(ts, data, NWORDS(data));
+}
+
+static int get_touch_config(struct i2c_client *client)
+{
+ struct data *ts = i2c_get_clientdata(client);
+ u16 data[] = { 0x0002, 0x0000 };
+ return send_mtp_command(ts, data, NWORDS(data));
+}
+
+static int get_fw_version(struct i2c_client *client)
+{
+ struct data *ts = i2c_get_clientdata(client);
+ u16 data[] = { 0x0040, 0x0000 };
+ return send_mtp_command(ts, data, NWORDS(data));
+}
+
+static int change_touch_rpt(struct i2c_client *client, u16 to)
+{
+ struct data *ts = i2c_get_clientdata(client);
+ u16 data[] = { 0x0018, 0x0001, to & 0x0003 };
+ return send_mtp_command(ts, data, NWORDS(data));
+}
+
+static int set_touch_frame(struct i2c_client *client,
+ u16 idle_frame, u16 active_frame)
+{
+ struct data *ts = i2c_get_clientdata(client);
+ u16 data[] = {0x0026, 0x0001,
+ (((idle_frame & 0xFF) << 8) | (active_frame & 0xFF))};
+ return send_mtp_command(ts, data, NWORDS(data));
+}
+
+static int set_baseline_mode(struct i2c_client *client, u16 mode)
+{
+ struct data *ts = i2c_get_clientdata(client);
+ u16 data[] = {0x0028, 0x0001, mode & 0x0003};
+ return send_mtp_command(ts, data, NWORDS(data));
+}
+
+static int combine_multipacketreport(struct data *ts, u16 *report)
+{
+ u16 packet_header = report[0];
+ u8 packet_seq_num = BYTEH(packet_header);
+ u8 packet_size = BYTEL(packet_header);
+ u16 total_packets, this_packet_num, offset;
+ static u16 packet_seq_combined;
+
+ if (packet_seq_num == 0x11) {
+ memcpy(ts->rx_report, report, (packet_size + 1) << 1);
+ ts->rx_report_len = packet_size;
+ packet_seq_combined = 1;
+ return 0;
+ }
+
+ total_packets = (packet_seq_num & 0xF0) >> 4;
+ this_packet_num = packet_seq_num & 0x0F;
+
+ if (this_packet_num == 1) {
+ if (report[1] == 0x0800) {
+ ts->rx_report_len = report[2] + 2;
+ packet_seq_combined = 1;
+ memcpy(ts->rx_report, report, (packet_size + 1) << 1);
+ return -EAGAIN;
+ } else {
+ return -EIO;
+ }
+ } else if (this_packet_num == packet_seq_combined + 1) {
+ packet_seq_combined++;
+ offset = (this_packet_num - 1) * 0xF4 + 1;
+ memcpy(ts->rx_report + offset, report + 1, packet_size << 1);
+		if (total_packets == this_packet_num)
+			return 0;
+		else
+			return -EAGAIN;
+ }
+ return -EIO;
+}
+
+static void propagate_report(struct data *ts, int status, u16 *report)
+{
+ int i, ret;
+
+ down(&ts->report_sem);
+ mutex_lock(&ts->report_mutex);
+
+ if (report) {
+ ret = combine_multipacketreport(ts, report);
+ if (ret) {
+ up(&ts->report_sem);
+ mutex_unlock(&ts->report_mutex);
+ return;
+ }
+ }
+
+ for (i = 0; i < MAX_REPORT_READERS; i++) {
+ if (status == 0) {
+ if (ts->report_readers[i].report_id == 0xFFFF
+ || (ts->rx_report[1] != 0
+ && ts->report_readers[i].report_id
+ == ts->rx_report[1])) {
+ up(&ts->report_readers[i].sem);
+ ts->report_readers[i].reports_passed++;
+ ts->report_readers_outstanding++;
+ }
+ } else {
+ if (ts->report_readers[i].report_id != 0) {
+ ts->report_readers[i].status = status;
+ up(&ts->report_readers[i].sem);
+ }
+ }
+ }
+	if (ts->report_readers_outstanding == 0)
+		up(&ts->report_sem);
+ mutex_unlock(&ts->report_mutex);
+}
+
+static int get_report(struct data *ts, u16 report_id, ulong timeout)
+{
+ int i, ret, status;
+
+ mutex_lock(&ts->report_mutex);
+ for (i = 0; i < MAX_REPORT_READERS; i++)
+ if (ts->report_readers[i].report_id == 0)
+ break;
+ if (i == MAX_REPORT_READERS) {
+ mutex_unlock(&ts->report_mutex);
+ ENABLE_IRQ();
+ pr_err("maximum readers reached");
+ return -EBUSY;
+ }
+ ts->report_readers[i].report_id = report_id;
+ sema_init(&ts->report_readers[i].sem, 1);
+ down(&ts->report_readers[i].sem);
+ ts->report_readers[i].status = 0;
+ ts->report_readers[i].reports_passed = 0;
+ mutex_unlock(&ts->report_mutex);
+ ENABLE_IRQ();
+
+ if (timeout == 0xFFFFFFFF)
+ ret = down_interruptible(&ts->report_readers[i].sem);
+ else
+ ret = down_timeout(&ts->report_readers[i].sem,
+ (timeout * HZ) / 1000);
+
+ mutex_lock(&ts->report_mutex);
+ if (ret && ts->report_readers[i].reports_passed > 0)
+ if (--ts->report_readers_outstanding == 0)
+ up(&ts->report_sem);
+ status = ts->report_readers[i].status;
+ ts->report_readers[i].report_id = 0;
+ mutex_unlock(&ts->report_mutex);
+
+ return (status == 0) ? ret : status;
+}
+
+static void release_report(struct data *ts)
+{
+ mutex_lock(&ts->report_mutex);
+ if (--ts->report_readers_outstanding == 0)
+ up(&ts->report_sem);
+ mutex_unlock(&ts->report_mutex);
+}
+
+#ifdef CONFIG_HAS_EARLYSUSPEND
+static void early_suspend(struct early_suspend *h)
+{
+ u16 data[] = {0x0020, 0x0001, 0x0000};
+ struct data *ts;
+ ts = container_of(h, struct data, early_suspend);
+
+ pr_info("max1187x_%s", __func__);
+ DISABLE_IRQ();
+ (void)send_mtp_command(ts, data, NWORDS(data));
+ ENABLE_IRQ();
+}
+
+static void late_resume(struct early_suspend *h)
+{
+	u16 data[] = {0x0020, 0x0001, 0x0002};
+ struct data *ts;
+ ts = container_of(h, struct data, early_suspend);
+
+ pr_info("max1187x_%s", __func__);
+
+ (void)send_mtp_command(ts, data, NWORDS(data));
+
+ (void)change_touch_rpt(ts->client, PDATA(report_mode));
+}
+#endif
+
+#define STATUS_ADDR_H 0x00
+#define STATUS_ADDR_L 0xFF
+#define DATA_ADDR_H 0x00
+#define DATA_ADDR_L 0xFE
+#define STATUS_READY_H 0xAB
+#define STATUS_READY_L 0xCC
+#define RXTX_COMPLETE_H 0x54
+#define RXTX_COMPLETE_L 0x32
+static int bootloader_read_status_reg(struct data *ts, const u8 byteL,
+ const u8 byteH)
+{
+ u8 buffer[] = { STATUS_ADDR_L, STATUS_ADDR_H }, i;
+
+ for (i = 0; i < 3; i++) {
+ if (i2c_tx_bytes(ts, buffer, 2) != 2) {
+ pr_err("TX fail");
+ return -EIO;
+ }
+ if (i2c_rx_bytes(ts, buffer, 2) != 2) {
+ pr_err("RX fail");
+ return -EIO;
+ }
+ if (buffer[0] == byteL && buffer[1] == byteH)
+ break;
+ }
+ if (i == 3) {
+ pr_err("Unexpected status => %02X%02X vs %02X%02X",
+ buffer[0], buffer[1], byteL, byteH);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+static int bootloader_write_status_reg(struct data *ts, const u8 byteL,
+ const u8 byteH)
+{
+ u8 buffer[] = { STATUS_ADDR_L, STATUS_ADDR_H, byteL, byteH };
+
+ if (i2c_tx_bytes(ts, buffer, 4) != 4) {
+ pr_err("TX fail");
+ return -EIO;
+ }
+ return 0;
+}
+
+static int bootloader_rxtx_complete(struct data *ts)
+{
+ return bootloader_write_status_reg(ts, RXTX_COMPLETE_L,
+ RXTX_COMPLETE_H);
+}
+
+static int bootloader_read_data_reg(struct data *ts, u8 *byteL, u8 *byteH)
+{
+ u8 buffer[] = { DATA_ADDR_L, DATA_ADDR_H, 0x00, 0x00 };
+
+ if (i2c_tx_bytes(ts, buffer, 2) != 2) {
+ pr_err("TX fail");
+ return -EIO;
+ }
+ if (i2c_rx_bytes(ts, buffer, 4) != 4) {
+ pr_err("RX fail");
+ return -EIO;
+ }
+	if (buffer[2] != 0xCC || buffer[3] != 0xAB) {
+ pr_err("Status is not ready");
+ return -EIO;
+ }
+
+ *byteL = buffer[0];
+ *byteH = buffer[1];
+ return bootloader_rxtx_complete(ts);
+}
+
+static int bootloader_write_data_reg(struct data *ts, const u8 byteL,
+ const u8 byteH)
+{
+ u8 buffer[6] = { DATA_ADDR_L, DATA_ADDR_H, byteL, byteH,
+ RXTX_COMPLETE_L, RXTX_COMPLETE_H };
+
+ if (bootloader_read_status_reg(ts, STATUS_READY_L,
+ STATUS_READY_H) < 0) {
+ pr_err("read status register fail");
+ return -EIO;
+ }
+ if (i2c_tx_bytes(ts, buffer, 6) != 6) {
+ pr_err("TX fail");
+ return -EIO;
+ }
+ return 0;
+}
+
+static int bootloader_rxtx(struct data *ts, u8 *byteL, u8 *byteH,
+ const int tx)
+{
+ if (tx > 0) {
+ if (bootloader_write_data_reg(ts, *byteL, *byteH) < 0) {
+ pr_err("write data register fail");
+ return -EIO;
+ }
+ return 0;
+ }
+
+ if (bootloader_read_data_reg(ts, byteL, byteH) < 0) {
+ pr_err("read data register fail");
+ return -EIO;
+ }
+ return 0;
+}
+
+static int bootloader_get_cmd_conf(struct data *ts, int retries)
+{
+ u8 byteL, byteH;
+
+ do {
+ if (bootloader_read_data_reg(ts, &byteL, &byteH) >= 0) {
+ if (byteH == 0x00 && byteL == 0x3E)
+ return 0;
+ }
+ retries--;
+ } while (retries > 0);
+
+ return -EIO;
+}
+
+static int bootloader_write_buffer(struct data *ts, u8 *buffer, int size)
+{
+ u8 byteH = 0x00;
+ int k;
+
+ for (k = 0; k < size; k++) {
+ if (bootloader_rxtx(ts, &buffer[k], &byteH, 1) < 0) {
+ pr_err("bootloader RX-TX fail");
+ return -EIO;
+ }
+ }
+ return 0;
+}
+
+static int bootloader_enter(struct data *ts)
+{
+ int i;
+ u16 enter[3][2] = { { 0x7F00, 0x0047 }, { 0x7F00, 0x00C7 }, { 0x7F00,
+ 0x0007 } };
+
+ DISABLE_IRQ();
+ for (i = 0; i < 3; i++) {
+ if (i2c_tx_words(ts, enter[i], 2) != 2) {
+ ENABLE_IRQ();
+ pr_err("Failed to enter bootloader");
+ return -EIO;
+ }
+ }
+
+ if (bootloader_get_cmd_conf(ts, 5) < 0) {
+ ENABLE_IRQ();
+ pr_err("Failed to enter bootloader mode");
+ return -EIO;
+ }
+ bootloader = 1;
+ return 0;
+}
+
+static int bootloader_exit(struct data *ts)
+{
+ u16 exit[] = { 0x00FE, 0x0001, 0x5432 };
+
+ bootloader = 0;
+ ts->got_report = 0;
+ if (i2c_tx_words(ts, exit, NWORDS(exit)) != NWORDS(exit)) {
+ pr_err("Failed to exit bootloader");
+ return -EIO;
+ }
+ return 0;
+}
+
+static int bootloader_get_crc(struct data *ts, u16 *crc16,
+ u16 addr, u16 len, u16 delay)
+{
+ u8 crc_command[] = {0x30, 0x02, BYTEL(addr),
+ BYTEH(addr), BYTEL(len), BYTEH(len)};
+ u8 byteL = 0, byteH = 0;
+ u16 rx_crc16 = 0;
+
+ if (bootloader_write_buffer(ts, crc_command, 6) < 0) {
+ pr_err("write buffer fail");
+ return -EIO;
+ }
+ msleep(delay);
+
+ /* reads low 8bits (crcL) */
+ if (bootloader_rxtx(ts, &byteL, &byteH, 0) < 0) {
+ pr_err("Failed to read low byte of crc response!");
+ return -EIO;
+ }
+ rx_crc16 = (u16) byteL;
+
+ /* reads high 8bits (crcH) */
+ if (bootloader_rxtx(ts, &byteL, &byteH, 0) < 0) {
+ pr_err("Failed to read high byte of crc response!");
+ return -EIO;
+ }
+ rx_crc16 = (u16)(byteL << 8) | rx_crc16;
+
+ if (bootloader_get_cmd_conf(ts, 5) < 0) {
+ pr_err("CRC get failed!");
+ return -EIO;
+ }
+ *crc16 = rx_crc16;
+
+ return 0;
+}
+
+static int bootloader_set_byte_mode(struct data *ts)
+{
+ u8 buffer[2] = { 0x0A, 0x00 };
+
+ if (bootloader_write_buffer(ts, buffer, 2) < 0) {
+ pr_err("write buffer fail");
+ return -EIO;
+ }
+ if (bootloader_get_cmd_conf(ts, 10) < 0) {
+ pr_err("command confirm fail");
+ return -EIO;
+ }
+ return 0;
+}
+
+static int bootloader_erase_flash(struct data *ts)
+{
+ u8 byteL = 0x02, byteH = 0x00;
+ int i, verify = 0;
+
+ if (bootloader_rxtx(ts, &byteL, &byteH, 1) < 0) {
+ pr_err("bootloader RX-TX fail");
+ return -EIO;
+ }
+
+ for (i = 0; i < 10; i++) {
+ msleep(60); /* wait 60ms */
+
+ if (bootloader_get_cmd_conf(ts, 0) < 0)
+ continue;
+
+ verify = 1;
+ break;
+ }
+
+ if (verify != 1) {
+ pr_err("Flash Erase failed");
+ return -EIO;
+ }
+
+ return 0;
+}
+
+static int bootloader_write_flash(struct data *ts, const u8 *image, u16 length)
+{
+ u8 buffer[130];
+ u8 length_L = length & 0xFF;
+ u8 length_H = (length >> 8) & 0xFF;
+ u8 command[] = { 0xF0, 0x00, length_H, length_L, 0x00 };
+ u16 blocks_of_128bytes;
+ int i, j;
+
+ if (bootloader_write_buffer(ts, command, 5) < 0) {
+ pr_err("write buffer fail");
+ return -EIO;
+ }
+
+ blocks_of_128bytes = length >> 7;
+
+ for (i = 0; i < blocks_of_128bytes; i++) {
+ for (j = 0; j < 100; j++) {
+ usleep_range(1500, 2000);
+ if (bootloader_read_status_reg(ts, STATUS_READY_L,
+ STATUS_READY_H) == 0)
+ break;
+ }
+ if (j == 100) {
+ pr_err("Failed to read Status register!");
+ return -EIO;
+ }
+
+ buffer[0] = ((i % 2) == 0) ? 0x00 : 0x40;
+ buffer[1] = 0x00;
+ memcpy(buffer + 2, image + i * 128, 128);
+
+ if (i2c_tx_bytes(ts, buffer, 130) != 130) {
+ pr_err("Failed to write data (%d)", i);
+ return -EIO;
+ }
+ if (bootloader_rxtx_complete(ts) < 0) {
+ pr_err("Transfer failure (%d)", i);
+ return -EIO;
+ }
+ }
+
+ usleep_range(10000, 11000);
+ if (bootloader_get_cmd_conf(ts, 5) < 0) {
+ pr_err("Flash programming failed");
+ return -EIO;
+ }
+ return 0;
+}
+
+/****************************************
+ *
+ * Standard Driver Structures/Functions
+ *
+ ****************************************/
+static const struct i2c_device_id id[] = { { MAX1187X_NAME, 0 }, { } };
+
+MODULE_DEVICE_TABLE(i2c, id);
+
+static const struct of_device_id max1187x_dt_match[] = {
+ { .compatible = "maxim,max1187x_tsc" }, { } };
+
+static struct i2c_driver driver = {
+ .probe = probe,
+ .remove = remove,
+ .id_table = id,
+ .driver = {
+ .name = MAX1187X_NAME,
+ .owner = THIS_MODULE,
+ .of_match_table = max1187x_dt_match,
+ },
+};
+
+static int __init max1187x_init(void)
+{
+ return i2c_add_driver(&driver);
+}
+
+static void __exit max1187x_exit(void)
+{
+ i2c_del_driver(&driver);
+}
+
+module_init(max1187x_init);
+module_exit(max1187x_exit);
+
+MODULE_AUTHOR("Maxim Integrated Products, Inc.");
+MODULE_DESCRIPTION("MAX1187X Touchscreen Driver");
+MODULE_LICENSE("GPL v2");
+MODULE_VERSION("3.0.7.1");
diff --git a/drivers/input/touchscreen/synaptics_dsx/Kconfig b/drivers/input/touchscreen/synaptics_dsx/Kconfig
new file mode 100644
index 0000000..006409d
--- /dev/null
+++ b/drivers/input/touchscreen/synaptics_dsx/Kconfig
@@ -0,0 +1,113 @@
+#
+# Synaptics DSX touchscreen driver configuration
+#
+menuconfig TOUCHSCREEN_SYNAPTICS_DSX
+ bool "Synaptics DSX touchscreen"
+ default y
+ help
+ Say Y here if you have a Synaptics DSX touchscreen connected
+ to your system.
+
+ If unsure, say N.
+
+if TOUCHSCREEN_SYNAPTICS_DSX
+
+choice
+ default TOUCHSCREEN_SYNAPTICS_DSX_I2C
+ prompt "Synaptics DSX bus interface"
+config TOUCHSCREEN_SYNAPTICS_DSX_I2C
+ bool "RMI over I2C"
+ depends on I2C
+config TOUCHSCREEN_SYNAPTICS_DSX_SPI
+ bool "RMI over SPI"
+ depends on SPI_MASTER
+config TOUCHSCREEN_SYNAPTICS_DSX_RMI_HID_I2C
+ bool "HID over I2C"
+ depends on I2C
+endchoice
+
+config TOUCHSCREEN_SYNAPTICS_DSX_CORE
+ tristate "Synaptics DSX core driver module"
+ depends on I2C || SPI_MASTER
+ help
+ Say Y here to enable basic touch reporting functionalities.
+
+ If unsure, say N.
+
+ To compile this driver as a module, choose M here: the
+ module will be called synaptics_dsx_core.
+
+config TOUCHSCREEN_SYNAPTICS_DSX_RMI_DEV
+ tristate "Synaptics DSX RMI device module"
+ depends on TOUCHSCREEN_SYNAPTICS_DSX_CORE
+ help
+ Say Y here to enable support for direct RMI register access.
+
+ If unsure, say N.
+
+ To compile this driver as a module, choose M here: the
+ module will be called synaptics_dsx_rmi_dev.
+
+config TOUCHSCREEN_SYNAPTICS_DSX_FW_UPDATE
+ tristate "Synaptics DSX firmware update module"
+ depends on TOUCHSCREEN_SYNAPTICS_DSX_CORE
+ help
+ Say Y here to enable support for doing firmware update.
+
+ If unsure, say N.
+
+ To compile this driver as a module, choose M here: the
+ module will be called synaptics_dsx_fw_update.
+
+config TOUCHSCREEN_SYNAPTICS_DSX_ACTIVE_PEN
+ tristate "Synaptics DSX active pen module"
+ depends on TOUCHSCREEN_SYNAPTICS_DSX_CORE
+ help
+ Say Y here to enable support for active pen functionalities.
+
+ If unsure, say N.
+
+ To compile this driver as a module, choose M here: the
+ module will be called synaptics_dsx_active_pen.
+
+config TOUCHSCREEN_SYNAPTICS_DSX_PROXIMITY
+ tristate "Synaptics DSX proximity module"
+ depends on TOUCHSCREEN_SYNAPTICS_DSX_CORE
+ help
+ Say Y here to enable support for proximity functionalities.
+
+ If unsure, say N.
+
+ To compile this driver as a module, choose M here: the
+ module will be called synaptics_dsx_proximity.
+
+config TOUCHSCREEN_SYNAPTICS_DSX_TEST_REPORTING
+ tristate "Synaptics DSX test reporting module"
+ depends on TOUCHSCREEN_SYNAPTICS_DSX_CORE
+ help
+ Say Y here to enable support for retrieving production test reports.
+
+ If unsure, say N.
+
+ To compile this driver as a module, choose M here: the
+ module will be called synaptics_dsx_test_reporting.
+
+config TOUCHSCREEN_SYNAPTICS_DSX_DEBUG
+ tristate "Synaptics DSX debug module"
+ depends on TOUCHSCREEN_SYNAPTICS_DSX_CORE
+ help
+ Say Y here to enable support for firmware debug functionalities.
+
+ If unsure, say N.
+
+ To compile this driver as a module, choose M here: the
+ module will be called synaptics_dsx_debug.
+
+config TOUCHSCREEN_SYNAPTICS_DSX_WAKEUP_GESTURE
+ bool "Synaptics DSX wake up gesture"
+ depends on TOUCHSCREEN_SYNAPTICS_DSX_CORE
+ help
+ Say Y here to enable support for double tap gesture.
+
+ If unsure, say N.
+endif
diff --git a/drivers/input/touchscreen/synaptics_dsx/Makefile b/drivers/input/touchscreen/synaptics_dsx/Makefile
new file mode 100644
index 0000000..8de8d60
--- /dev/null
+++ b/drivers/input/touchscreen/synaptics_dsx/Makefile
@@ -0,0 +1,16 @@
+#
+# Makefile for the Synaptics DSX touchscreen driver.
+#
+
+# Each configuration option enables a list of files.
+
+obj-$(CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_I2C) += synaptics_dsx_i2c.o
+obj-$(CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_SPI) += synaptics_dsx_spi.o
+obj-$(CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_RMI_HID_I2C) += synaptics_dsx_rmi_hid_i2c.o
+obj-$(CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_CORE) += synaptics_dsx_core.o
+obj-$(CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_RMI_DEV) += synaptics_dsx_rmi_dev.o
+obj-$(CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_FW_UPDATE) += synaptics_dsx_fw_update.o
+obj-$(CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_TEST_REPORTING) += synaptics_dsx_test_reporting.o
+obj-$(CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_PROXIMITY) += synaptics_dsx_proximity.o
+obj-$(CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_ACTIVE_PEN) += synaptics_dsx_active_pen.o
+obj-$(CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_DEBUG) += synaptics_dsx_debug.o
diff --git a/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_active_pen.c b/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_active_pen.c
new file mode 100644
index 0000000..8950f7b
--- /dev/null
+++ b/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_active_pen.c
@@ -0,0 +1,565 @@
+/*
+ * Synaptics DSX touchscreen driver
+ *
+ * Copyright (C) 2012 Synaptics Incorporated
+ *
+ * Copyright (C) 2012 Alexandra Chin <alexandra.chin@tw.synaptics.com>
+ * Copyright (C) 2012 Scott Lin <scott.lin@tw.synaptics.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/delay.h>
+#include <linux/input.h>
+#include <linux/platform_device.h>
+#include <linux/input/synaptics_dsx.h>
+#include "synaptics_dsx_core.h"
+
+#define APEN_PHYS_NAME "synaptics_dsx/active_pen"
+
+#define ACTIVE_PEN_MAX_PRESSURE_16BIT 65535
+#define ACTIVE_PEN_MAX_PRESSURE_8BIT 255
+
+struct synaptics_rmi4_f12_query_8 {
+ union {
+ struct {
+ unsigned char size_of_query9;
+ struct {
+ unsigned char data0_is_present:1;
+ unsigned char data1_is_present:1;
+ unsigned char data2_is_present:1;
+ unsigned char data3_is_present:1;
+ unsigned char data4_is_present:1;
+ unsigned char data5_is_present:1;
+ unsigned char data6_is_present:1;
+ unsigned char data7_is_present:1;
+ } __packed;
+ };
+ unsigned char data[2];
+ };
+};
+
+struct apen_data {
+ union {
+ struct {
+ unsigned char status_pen:1;
+ unsigned char status_invert:1;
+ unsigned char status_barrel:1;
+ unsigned char status_reserved:5;
+ unsigned char x_lsb;
+ unsigned char x_msb;
+ unsigned char y_lsb;
+ unsigned char y_msb;
+ unsigned char pressure_lsb;
+ unsigned char pressure_msb;
+ } __packed;
+ unsigned char data[7];
+ };
+};
+
+struct synaptics_rmi4_apen_handle {
+ bool apen_present;
+ unsigned char intr_mask;
+ unsigned short query_base_addr;
+ unsigned short control_base_addr;
+ unsigned short data_base_addr;
+ unsigned short command_base_addr;
+ unsigned short apen_data_addr;
+ unsigned short max_pressure;
+ struct input_dev *apen_dev;
+ struct apen_data *apen_data;
+ struct synaptics_rmi4_data *rmi4_data;
+};
+
+static struct synaptics_rmi4_apen_handle *apen;
+
+static DECLARE_COMPLETION(apen_remove_complete);
+
+static void apen_lift(void)
+{
+ input_report_key(apen->apen_dev, BTN_TOUCH, 0);
+ input_report_key(apen->apen_dev, BTN_TOOL_PEN, 0);
+ input_report_key(apen->apen_dev, BTN_TOOL_RUBBER, 0);
+ input_sync(apen->apen_dev);
+ apen->apen_present = false;
+
+ return;
+}
+
+static void apen_report(void)
+{
+ int retval;
+ int x;
+ int y;
+ int pressure;
+ static int invert = -1;
+ struct synaptics_rmi4_data *rmi4_data = apen->rmi4_data;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ apen->apen_data_addr,
+ apen->apen_data->data,
+ sizeof(apen->apen_data->data));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to read active pen data\n",
+ __func__);
+ return;
+ }
+
+ if (apen->apen_data->status_pen == 0) {
+ if (apen->apen_present) {
+ apen_lift();
+ invert = -1;
+ }
+
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: No active pen data\n",
+ __func__);
+
+ return;
+ }
+
+ x = (apen->apen_data->x_msb << 8) | (apen->apen_data->x_lsb);
+ y = (apen->apen_data->y_msb << 8) | (apen->apen_data->y_lsb);
+
+ /*
+ * x and y are assembled from unsigned bytes, so the invalid-data
+ * marker is an all-ones value of 0xffff in both coordinates (a
+ * comparison against -1 could never be true here).
+ */
+ if ((x == 0xffff) && (y == 0xffff)) {
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Active pen in range but no valid x & y\n",
+ __func__);
+ return;
+ }
+
+ if (invert != -1 && invert != apen->apen_data->status_invert)
+ apen_lift();
+
+ invert = apen->apen_data->status_invert;
+
+ if (apen->max_pressure == ACTIVE_PEN_MAX_PRESSURE_16BIT) {
+ pressure = (apen->apen_data->pressure_msb << 8) |
+ apen->apen_data->pressure_lsb;
+ } else {
+ pressure = apen->apen_data->pressure_lsb;
+ }
+
+ input_report_key(apen->apen_dev, BTN_TOUCH, pressure > 0 ? 1 : 0);
+ input_report_key(apen->apen_dev,
+ apen->apen_data->status_invert > 0 ?
+ BTN_TOOL_RUBBER : BTN_TOOL_PEN, 1);
+ input_report_key(apen->apen_dev,
+ BTN_STYLUS, apen->apen_data->status_barrel > 0 ?
+ 1 : 0);
+ input_report_abs(apen->apen_dev, ABS_X, x);
+ input_report_abs(apen->apen_dev, ABS_Y, y);
+ input_report_abs(apen->apen_dev, ABS_PRESSURE, pressure);
+
+ input_sync(apen->apen_dev);
+
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Active pen:\n"
+ "status = %d\n"
+ "invert = %d\n"
+ "barrel = %d\n"
+ "x = %d\n"
+ "y = %d\n"
+ "pressure = %d\n",
+ __func__,
+ apen->apen_data->status_pen,
+ apen->apen_data->status_invert,
+ apen->apen_data->status_barrel,
+ x, y, pressure);
+
+ apen->apen_present = true;
+
+ return;
+}
+
+static void apen_set_params(void)
+{
+ input_set_abs_params(apen->apen_dev, ABS_X, 0,
+ apen->rmi4_data->sensor_max_x, 0, 0);
+ input_set_abs_params(apen->apen_dev, ABS_Y, 0,
+ apen->rmi4_data->sensor_max_y, 0, 0);
+ input_set_abs_params(apen->apen_dev, ABS_PRESSURE, 0,
+ apen->max_pressure, 0, 0);
+
+ return;
+}
+
+static int apen_pressure(struct synaptics_rmi4_f12_query_8 *query_8)
+{
+ int retval;
+ unsigned char ii;
+ unsigned char data_reg_presence;
+ unsigned char size_of_query_9;
+ unsigned char *query_9;
+ unsigned char *data_desc;
+ struct synaptics_rmi4_data *rmi4_data = apen->rmi4_data;
+
+ data_reg_presence = query_8->data[1];
+
+ size_of_query_9 = query_8->size_of_query9;
+ query_9 = kmalloc(size_of_query_9, GFP_KERNEL);
+ if (!query_9)
+ return -ENOMEM;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ apen->query_base_addr + 9,
+ query_9,
+ size_of_query_9);
+ if (retval < 0)
+ goto exit;
+
+ data_desc = query_9;
+
+ for (ii = 0; ii < 6; ii++) {
+ if (!(data_reg_presence & (1 << ii)))
+ continue; /* The data register is not present */
+ data_desc++; /* Jump over the size entry */
+ while (*data_desc & (1 << 7))
+ data_desc++;
+ data_desc++; /* Go to the next descriptor */
+ }
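+ /*
+ * Editorial note (illustrative, not from the original source): each
+ * descriptor in query 9 starts with a size byte followed by one or
+ * more subpacket-presence bytes, where bit 7 set in a presence byte
+ * means another presence byte follows. The loop above therefore
+ * skips the descriptors of data registers 0 through 5, leaving
+ * data_desc pointing at the descriptor for data register 6.
+ */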
+
+ data_desc++; /* Jump over the size entry */
+ /* Check for the presence of subpackets 1 and 2 */
+ if ((*data_desc & (3 << 1)) == (3 << 1))
+ apen->max_pressure = ACTIVE_PEN_MAX_PRESSURE_16BIT;
+ else
+ apen->max_pressure = ACTIVE_PEN_MAX_PRESSURE_8BIT;
+
+exit:
+ kfree(query_9);
+
+ return retval;
+}
+
+static int apen_reg_init(void)
+{
+ int retval;
+ unsigned char data_offset;
+ unsigned char size_of_query8;
+ struct synaptics_rmi4_f12_query_8 query_8;
+ struct synaptics_rmi4_data *rmi4_data = apen->rmi4_data;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ apen->query_base_addr + 7,
+ &size_of_query8,
+ sizeof(size_of_query8));
+ if (retval < 0)
+ return retval;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ apen->query_base_addr + 8,
+ query_8.data,
+ size_of_query8);
+ if (retval < 0)
+ return retval;
+
+ if ((size_of_query8 >= 2) && (query_8.data6_is_present)) {
+ data_offset = query_8.data0_is_present +
+ query_8.data1_is_present +
+ query_8.data2_is_present +
+ query_8.data3_is_present +
+ query_8.data4_is_present +
+ query_8.data5_is_present;
+ apen->apen_data_addr = apen->data_base_addr + data_offset;
+ retval = apen_pressure(&query_8);
+ if (retval < 0)
+ return retval;
+ } else {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Active pen support unavailable\n",
+ __func__);
+ retval = -ENODEV;
+ }
+
+ return retval;
+}
+
+static int apen_scan_pdt(void)
+{
+ int retval;
+ unsigned char ii;
+ unsigned char page;
+ unsigned char intr_count = 0;
+ unsigned char intr_off;
+ unsigned char intr_src;
+ unsigned short addr;
+ struct synaptics_rmi4_fn_desc fd;
+ struct synaptics_rmi4_data *rmi4_data = apen->rmi4_data;
+
+ for (page = 0; page < PAGES_TO_SERVICE; page++) {
+ for (addr = PDT_START; addr > PDT_END; addr -= PDT_ENTRY_SIZE) {
+ addr |= (page << 8);
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ addr,
+ (unsigned char *)&fd,
+ sizeof(fd));
+ if (retval < 0)
+ return retval;
+
+ addr &= ~(MASK_8BIT << 8);
+
+ if (fd.fn_number) {
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Found F%02x\n",
+ __func__, fd.fn_number);
+ switch (fd.fn_number) {
+ case SYNAPTICS_RMI4_F12:
+ goto f12_found;
+ }
+ } else {
+ break;
+ }
+
+ intr_count += (fd.intr_src_count & MASK_3BIT);
+ }
+ }
+
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to find F12\n",
+ __func__);
+ return -EINVAL;
+
+f12_found:
+ apen->query_base_addr = fd.query_base_addr | (page << 8);
+ apen->control_base_addr = fd.ctrl_base_addr | (page << 8);
+ apen->data_base_addr = fd.data_base_addr | (page << 8);
+ apen->command_base_addr = fd.cmd_base_addr | (page << 8);
+
+ retval = apen_reg_init();
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to initialize active pen registers\n",
+ __func__);
+ return retval;
+ }
+
+ apen->intr_mask = 0;
+ intr_src = fd.intr_src_count;
+ intr_off = intr_count % 8;
+ for (ii = intr_off;
+ ii < ((intr_src & MASK_3BIT) +
+ intr_off);
+ ii++) {
+ apen->intr_mask |= 1 << ii;
+ }
+
+ rmi4_data->intr_mask[0] |= apen->intr_mask;
+
+ addr = rmi4_data->f01_ctrl_base_addr + 1;
+
+ retval = synaptics_rmi4_reg_write(rmi4_data,
+ addr,
+ &(rmi4_data->intr_mask[0]),
+ sizeof(rmi4_data->intr_mask[0]));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to set interrupt enable bit\n",
+ __func__);
+ return retval;
+ }
+
+ return 0;
+}
+
+static void synaptics_rmi4_apen_attn(struct synaptics_rmi4_data *rmi4_data,
+ unsigned char intr_mask)
+{
+ if (!apen)
+ return;
+
+ if (apen->intr_mask & intr_mask)
+ apen_report();
+
+ return;
+}
+
+static int synaptics_rmi4_apen_init(struct synaptics_rmi4_data *rmi4_data)
+{
+ int retval;
+
+ apen = kzalloc(sizeof(*apen), GFP_KERNEL);
+ if (!apen) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to alloc mem for apen\n",
+ __func__);
+ retval = -ENOMEM;
+ goto exit;
+ }
+
+ apen->apen_data = kzalloc(sizeof(*(apen->apen_data)), GFP_KERNEL);
+ if (!apen->apen_data) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to alloc mem for apen_data\n",
+ __func__);
+ retval = -ENOMEM;
+ goto exit_free_apen;
+ }
+
+ apen->rmi4_data = rmi4_data;
+
+ retval = apen_scan_pdt();
+ if (retval < 0)
+ goto exit_free_apen_data;
+
+ apen->apen_dev = input_allocate_device();
+ if (apen->apen_dev == NULL) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to allocate active pen device\n",
+ __func__);
+ retval = -ENOMEM;
+ goto exit_free_apen_data;
+ }
+
+ apen->apen_dev->name = PLATFORM_DRIVER_NAME;
+ apen->apen_dev->phys = APEN_PHYS_NAME;
+ apen->apen_dev->id.product = SYNAPTICS_DSX_DRIVER_PRODUCT;
+ apen->apen_dev->id.version = SYNAPTICS_DSX_DRIVER_VERSION;
+ apen->apen_dev->dev.parent = rmi4_data->pdev->dev.parent;
+ input_set_drvdata(apen->apen_dev, rmi4_data);
+
+ set_bit(EV_KEY, apen->apen_dev->evbit);
+ set_bit(EV_ABS, apen->apen_dev->evbit);
+ set_bit(BTN_TOUCH, apen->apen_dev->keybit);
+ set_bit(BTN_TOOL_PEN, apen->apen_dev->keybit);
+ set_bit(BTN_TOOL_RUBBER, apen->apen_dev->keybit);
+ set_bit(BTN_STYLUS, apen->apen_dev->keybit);
+#ifdef INPUT_PROP_DIRECT
+ set_bit(INPUT_PROP_DIRECT, apen->apen_dev->propbit);
+#endif
+
+ apen_set_params();
+
+ retval = input_register_device(apen->apen_dev);
+ if (retval) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to register active pen device\n",
+ __func__);
+ goto exit_free_input_device;
+ }
+
+ return 0;
+
+exit_free_input_device:
+ input_free_device(apen->apen_dev);
+
+exit_free_apen_data:
+ kfree(apen->apen_data);
+
+exit_free_apen:
+ kfree(apen);
+ apen = NULL;
+
+exit:
+ return retval;
+}
+
+static void synaptics_rmi4_apen_remove(struct synaptics_rmi4_data *rmi4_data)
+{
+ if (!apen)
+ goto exit;
+
+ input_unregister_device(apen->apen_dev);
+ kfree(apen->apen_data);
+ kfree(apen);
+ apen = NULL;
+
+exit:
+ complete(&apen_remove_complete);
+
+ return;
+}
+
+static void synaptics_rmi4_apen_reset(struct synaptics_rmi4_data *rmi4_data)
+{
+ if (!apen) {
+ synaptics_rmi4_apen_init(rmi4_data);
+ return;
+ }
+
+ apen_lift();
+
+ apen_scan_pdt();
+
+ apen_set_params();
+
+ return;
+}
+
+static void synaptics_rmi4_apen_reinit(struct synaptics_rmi4_data *rmi4_data)
+{
+ if (!apen)
+ return;
+
+ apen_lift();
+
+ return;
+}
+
+static void synaptics_rmi4_apen_e_suspend(struct synaptics_rmi4_data *rmi4_data)
+{
+ if (!apen)
+ return;
+
+ apen_lift();
+
+ return;
+}
+
+static void synaptics_rmi4_apen_suspend(struct synaptics_rmi4_data *rmi4_data)
+{
+ if (!apen)
+ return;
+
+ apen_lift();
+
+ return;
+}
+
+static struct synaptics_rmi4_exp_fn active_pen_module = {
+ .fn_type = RMI_ACTIVE_PEN,
+ .init = synaptics_rmi4_apen_init,
+ .remove = synaptics_rmi4_apen_remove,
+ .reset = synaptics_rmi4_apen_reset,
+ .reinit = synaptics_rmi4_apen_reinit,
+ .early_suspend = synaptics_rmi4_apen_e_suspend,
+ .suspend = synaptics_rmi4_apen_suspend,
+ .resume = NULL,
+ .late_resume = NULL,
+ .attn = synaptics_rmi4_apen_attn,
+};
+
+static int __init rmi4_active_pen_module_init(void)
+{
+ synaptics_rmi4_new_function(&active_pen_module, true);
+
+ return 0;
+}
+
+static void __exit rmi4_active_pen_module_exit(void)
+{
+ synaptics_rmi4_new_function(&active_pen_module, false);
+
+ wait_for_completion(&apen_remove_complete);
+
+ return;
+}
+
+module_init(rmi4_active_pen_module_init);
+module_exit(rmi4_active_pen_module_exit);
+
+MODULE_AUTHOR("Synaptics, Inc.");
+MODULE_DESCRIPTION("Synaptics DSX Active Pen Module");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_core.c b/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_core.c
new file mode 100644
index 0000000..8b6727e
--- /dev/null
+++ b/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_core.c
@@ -0,0 +1,3437 @@
+/*
+ * Synaptics DSX touchscreen driver
+ *
+ * Copyright (C) 2012 Synaptics Incorporated
+ *
+ * Copyright (C) 2012 Alexandra Chin <alexandra.chin@tw.synaptics.com>
+ * Copyright (C) 2012 Scott Lin <scott.lin@tw.synaptics.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/delay.h>
+#include <linux/input.h>
+#include <linux/gpio.h>
+#include <linux/platform_device.h>
+#include <linux/regulator/consumer.h>
+#include <linux/input/synaptics_dsx.h>
+#ifdef CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_WAKEUP_GESTURE
+#include <linux/sensor_hub.h>
+#endif
+#include "synaptics_dsx_core.h"
+#ifdef KERNEL_ABOVE_2_6_38
+#include <linux/input/mt.h>
+#endif
+
+#define INPUT_PHYS_NAME "synaptics_dsx/touch_input"
+
+#ifdef KERNEL_ABOVE_2_6_38
+#define TYPE_B_PROTOCOL
+#endif
+
+#ifdef CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_WAKEUP_GESTURE
+#define WAKEUP_GESTURE true
+#else
+#define WAKEUP_GESTURE false
+#endif
+
+#define NO_0D_WHILE_2D
+#define REPORT_2D_Z
+#define REPORT_2D_W
+
+#define F12_DATA_15_WORKAROUND
+
+/*
+#define IGNORE_FN_INIT_FAILURE
+*/
+
+#define RPT_TYPE (1 << 0)
+#define RPT_X_LSB (1 << 1)
+#define RPT_X_MSB (1 << 2)
+#define RPT_Y_LSB (1 << 3)
+#define RPT_Y_MSB (1 << 4)
+#define RPT_Z (1 << 5)
+#define RPT_WX (1 << 6)
+#define RPT_WY (1 << 7)
+#define RPT_DEFAULT (RPT_TYPE | RPT_X_LSB | RPT_X_MSB | RPT_Y_LSB | RPT_Y_MSB)
+
+#define EXP_FN_WORK_DELAY_MS 500 /* ms */
+#define MAX_F11_TOUCH_WIDTH 15
+
+#define CHECK_STATUS_TIMEOUT_MS 100
+
+#define F01_STD_QUERY_LEN 21
+#define F01_BUID_ID_OFFSET 18
+#define F11_STD_QUERY_LEN 9
+#define F11_STD_CTRL_LEN 10
+#define F11_STD_DATA_LEN 12
+
+#define STATUS_NO_ERROR 0x00
+#define STATUS_RESET_OCCURRED 0x01
+#define STATUS_INVALID_CONFIG 0x02
+#define STATUS_DEVICE_FAILURE 0x03
+#define STATUS_CONFIG_CRC_FAILURE 0x04
+#define STATUS_FIRMWARE_CRC_FAILURE 0x05
+#define STATUS_CRC_IN_PROGRESS 0x06
+
+#define NORMAL_OPERATION (0 << 0)
+#define SENSOR_SLEEP (1 << 0)
+#define NO_SLEEP_OFF (0 << 2)
+#define NO_SLEEP_ON (1 << 2)
+#define CONFIGURED (1 << 7)
+
+#define F11_CONTINUOUS_MODE 0x00
+#define F11_WAKEUP_GESTURE_MODE 0x04
+#define F12_CONTINUOUS_MODE 0x00
+#define F12_WAKEUP_GESTURE_MODE 0x02
+
+#define SHIFT_BITS (10)
+
+static int synaptics_rmi4_f12_set_enables(struct synaptics_rmi4_data *rmi4_data,
+ unsigned short ctrl28);
+
+static int synaptics_rmi4_free_fingers(struct synaptics_rmi4_data *rmi4_data);
+static int synaptics_rmi4_reinit_device(struct synaptics_rmi4_data *rmi4_data);
+static int synaptics_rmi4_reset_device(struct synaptics_rmi4_data *rmi4_data);
+
+static ssize_t synaptics_rmi4_full_pm_cycle_show(struct device *dev,
+ struct device_attribute *attr, char *buf);
+
+static ssize_t synaptics_rmi4_full_pm_cycle_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count);
+
+static void synaptics_rmi4_early_suspend(struct device *dev);
+
+static void synaptics_rmi4_late_resume(struct device *dev);
+
+static int synaptics_rmi4_suspend(struct device *dev);
+
+static int synaptics_rmi4_resume(struct device *dev);
+
+static ssize_t synaptics_rmi4_f01_reset_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count);
+
+static ssize_t synaptics_rmi4_f01_productinfo_show(struct device *dev,
+ struct device_attribute *attr, char *buf);
+
+static ssize_t synaptics_rmi4_f01_buildid_show(struct device *dev,
+ struct device_attribute *attr, char *buf);
+
+static ssize_t synaptics_rmi4_f01_flashprog_show(struct device *dev,
+ struct device_attribute *attr, char *buf);
+
+static ssize_t synaptics_rmi4_0dbutton_show(struct device *dev,
+ struct device_attribute *attr, char *buf);
+
+static ssize_t synaptics_rmi4_0dbutton_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count);
+
+static ssize_t synaptics_rmi4_suspend_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count);
+
+static ssize_t synaptics_rmi4_wake_gesture_show(struct device *dev,
+ struct device_attribute *attr, char *buf);
+
+static ssize_t synaptics_rmi4_wake_gesture_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count);
+
+static ssize_t synaptics_rmi4_interactive_show(struct device *dev,
+ struct device_attribute *attr, char *buf);
+
+static ssize_t synaptics_rmi4_interactive_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count);
+
+#ifdef CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_WAKEUP_GESTURE
+static int facedown_status_handler_func(struct notifier_block *this,
+ unsigned long status, void *unused);
+#endif
+
+struct synaptics_rmi4_f01_device_status {
+ union {
+ struct {
+ unsigned char status_code:4;
+ unsigned char reserved:2;
+ unsigned char flash_prog:1;
+ unsigned char unconfigured:1;
+ } __packed;
+ unsigned char data[1];
+ };
+};
+
+struct synaptics_rmi4_f12_query_5 {
+ union {
+ struct {
+ unsigned char size_of_query6;
+ struct {
+ unsigned char ctrl0_is_present:1;
+ unsigned char ctrl1_is_present:1;
+ unsigned char ctrl2_is_present:1;
+ unsigned char ctrl3_is_present:1;
+ unsigned char ctrl4_is_present:1;
+ unsigned char ctrl5_is_present:1;
+ unsigned char ctrl6_is_present:1;
+ unsigned char ctrl7_is_present:1;
+ } __packed;
+ struct {
+ unsigned char ctrl8_is_present:1;
+ unsigned char ctrl9_is_present:1;
+ unsigned char ctrl10_is_present:1;
+ unsigned char ctrl11_is_present:1;
+ unsigned char ctrl12_is_present:1;
+ unsigned char ctrl13_is_present:1;
+ unsigned char ctrl14_is_present:1;
+ unsigned char ctrl15_is_present:1;
+ } __packed;
+ struct {
+ unsigned char ctrl16_is_present:1;
+ unsigned char ctrl17_is_present:1;
+ unsigned char ctrl18_is_present:1;
+ unsigned char ctrl19_is_present:1;
+ unsigned char ctrl20_is_present:1;
+ unsigned char ctrl21_is_present:1;
+ unsigned char ctrl22_is_present:1;
+ unsigned char ctrl23_is_present:1;
+ } __packed;
+ struct {
+ unsigned char ctrl24_is_present:1;
+ unsigned char ctrl25_is_present:1;
+ unsigned char ctrl26_is_present:1;
+ unsigned char ctrl27_is_present:1;
+ unsigned char ctrl28_is_present:1;
+ unsigned char ctrl29_is_present:1;
+ unsigned char ctrl30_is_present:1;
+ unsigned char ctrl31_is_present:1;
+ } __packed;
+ };
+ unsigned char data[5];
+ };
+};
+
+struct synaptics_rmi4_f12_query_8 {
+ union {
+ struct {
+ unsigned char size_of_query9;
+ struct {
+ unsigned char data0_is_present:1;
+ unsigned char data1_is_present:1;
+ unsigned char data2_is_present:1;
+ unsigned char data3_is_present:1;
+ unsigned char data4_is_present:1;
+ unsigned char data5_is_present:1;
+ unsigned char data6_is_present:1;
+ unsigned char data7_is_present:1;
+ } __packed;
+ struct {
+ unsigned char data8_is_present:1;
+ unsigned char data9_is_present:1;
+ unsigned char data10_is_present:1;
+ unsigned char data11_is_present:1;
+ unsigned char data12_is_present:1;
+ unsigned char data13_is_present:1;
+ unsigned char data14_is_present:1;
+ unsigned char data15_is_present:1;
+ } __packed;
+ };
+ unsigned char data[3];
+ };
+};
+
+struct synaptics_rmi4_f12_ctrl_8 {
+ union {
+ struct {
+ unsigned char max_x_coord_lsb;
+ unsigned char max_x_coord_msb;
+ unsigned char max_y_coord_lsb;
+ unsigned char max_y_coord_msb;
+ unsigned char rx_pitch_lsb;
+ unsigned char rx_pitch_msb;
+ unsigned char tx_pitch_lsb;
+ unsigned char tx_pitch_msb;
+ unsigned char low_rx_clip;
+ unsigned char high_rx_clip;
+ unsigned char low_tx_clip;
+ unsigned char high_tx_clip;
+ unsigned char num_of_rx;
+ unsigned char num_of_tx;
+ };
+ unsigned char data[14];
+ };
+};
+
+struct synaptics_rmi4_f12_ctrl_23 {
+ union {
+ struct {
+ unsigned char obj_type_enable;
+ unsigned char max_reported_objects;
+ };
+ unsigned char data[2];
+ };
+};
+
+struct synaptics_rmi4_f12_finger_data {
+ unsigned char object_type_and_status;
+ unsigned char x_lsb;
+ unsigned char x_msb;
+ unsigned char y_lsb;
+ unsigned char y_msb;
+#ifdef REPORT_2D_Z
+ unsigned char z;
+#endif
+#ifdef REPORT_2D_W
+ unsigned char wx;
+ unsigned char wy;
+#endif
+};
+
+struct synaptics_rmi4_f1a_query {
+ union {
+ struct {
+ unsigned char max_button_count:3;
+ unsigned char reserved:5;
+ unsigned char has_general_control:1;
+ unsigned char has_interrupt_enable:1;
+ unsigned char has_multibutton_select:1;
+ unsigned char has_tx_rx_map:1;
+ unsigned char has_perbutton_threshold:1;
+ unsigned char has_release_threshold:1;
+ unsigned char has_strongestbtn_hysteresis:1;
+ unsigned char has_filter_strength:1;
+ } __packed;
+ unsigned char data[2];
+ };
+};
+
+struct synaptics_rmi4_f1a_control_0 {
+ union {
+ struct {
+ unsigned char multibutton_report:2;
+ unsigned char filter_mode:2;
+ unsigned char reserved:4;
+ } __packed;
+ unsigned char data[1];
+ };
+};
+
+struct synaptics_rmi4_f1a_control {
+ struct synaptics_rmi4_f1a_control_0 general_control;
+ unsigned char button_int_enable;
+ unsigned char multi_button;
+ unsigned char *txrx_map;
+ unsigned char *button_threshold;
+ unsigned char button_release_threshold;
+ unsigned char strongest_button_hysteresis;
+ unsigned char filter_strength;
+};
+
+struct synaptics_rmi4_f1a_handle {
+ int button_bitmask_size;
+ unsigned char max_count;
+ unsigned char valid_button_count;
+ unsigned char *button_data_buffer;
+ unsigned char *button_map;
+ struct synaptics_rmi4_f1a_query button_query;
+ struct synaptics_rmi4_f1a_control button_control;
+};
+
+struct synaptics_rmi4_exp_fhandler {
+ struct synaptics_rmi4_exp_fn *exp_fn;
+ bool insert;
+ bool remove;
+ struct list_head link;
+};
+
+struct synaptics_rmi4_exp_fn_data {
+ bool initialized;
+ bool queue_work;
+ struct mutex mutex;
+ struct list_head list;
+ struct delayed_work work;
+ struct workqueue_struct *workqueue;
+ struct synaptics_rmi4_data *rmi4_data;
+};
+
+static struct synaptics_rmi4_exp_fn_data exp_data;
+
+static struct device_attribute attrs[] = {
+ __ATTR(full_pm_cycle, (S_IRUGO | S_IWUGO),
+ synaptics_rmi4_full_pm_cycle_show,
+ synaptics_rmi4_full_pm_cycle_store),
+ __ATTR(reset, S_IWUGO,
+ synaptics_rmi4_show_error,
+ synaptics_rmi4_f01_reset_store),
+ __ATTR(productinfo, S_IRUGO,
+ synaptics_rmi4_f01_productinfo_show,
+ synaptics_rmi4_store_error),
+ __ATTR(buildid, S_IRUGO,
+ synaptics_rmi4_f01_buildid_show,
+ synaptics_rmi4_store_error),
+ __ATTR(flashprog, S_IRUGO,
+ synaptics_rmi4_f01_flashprog_show,
+ synaptics_rmi4_store_error),
+ __ATTR(0dbutton, (S_IRUGO | S_IWUGO),
+ synaptics_rmi4_0dbutton_show,
+ synaptics_rmi4_0dbutton_store),
+ __ATTR(suspend, S_IWUGO,
+ synaptics_rmi4_show_error,
+ synaptics_rmi4_suspend_store),
+ __ATTR(wake_gesture, (S_IRUGO | S_IWUGO),
+ synaptics_rmi4_wake_gesture_show,
+ synaptics_rmi4_wake_gesture_store),
+ __ATTR(interactive, (S_IRUGO | S_IWUGO),
+ synaptics_rmi4_interactive_show,
+ synaptics_rmi4_interactive_store),
+};
+
+static ssize_t synaptics_rmi4_full_pm_cycle_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct synaptics_rmi4_data *rmi4_data = dev_get_drvdata(dev);
+
+ return snprintf(buf, PAGE_SIZE, "%u\n",
+ rmi4_data->full_pm_cycle);
+}
+
+static ssize_t synaptics_rmi4_full_pm_cycle_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ unsigned int input;
+ struct synaptics_rmi4_data *rmi4_data = dev_get_drvdata(dev);
+
+ if (sscanf(buf, "%u", &input) != 1)
+ return -EINVAL;
+
+ rmi4_data->full_pm_cycle = input > 0 ? 1 : 0;
+
+ return count;
+}
+
+static ssize_t synaptics_rmi4_f01_reset_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ int retval;
+ unsigned int reset;
+ struct synaptics_rmi4_data *rmi4_data = dev_get_drvdata(dev);
+
+ if (sscanf(buf, "%u", &reset) != 1)
+ return -EINVAL;
+
+ if (reset != 1)
+ return -EINVAL;
+
+ retval = synaptics_rmi4_reset_device(rmi4_data);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to issue reset command, error = %d\n",
+ __func__, retval);
+ return retval;
+ }
+
+ return count;
+}
+
+static ssize_t synaptics_rmi4_f01_productinfo_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct synaptics_rmi4_data *rmi4_data = dev_get_drvdata(dev);
+
+ return snprintf(buf, PAGE_SIZE, "0x%02x 0x%02x\n",
+ (rmi4_data->rmi4_mod_info.product_info[0]),
+ (rmi4_data->rmi4_mod_info.product_info[1]));
+}
+
+static ssize_t synaptics_rmi4_f01_buildid_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct synaptics_rmi4_data *rmi4_data = dev_get_drvdata(dev);
+
+ return snprintf(buf, PAGE_SIZE, "%u\n",
+ rmi4_data->firmware_id);
+}
+
+static ssize_t synaptics_rmi4_f01_flashprog_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ int retval;
+ struct synaptics_rmi4_f01_device_status device_status;
+ struct synaptics_rmi4_data *rmi4_data = dev_get_drvdata(dev);
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ rmi4_data->f01_data_base_addr,
+ device_status.data,
+ sizeof(device_status.data));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to read device status, error = %d\n",
+ __func__, retval);
+ return retval;
+ }
+
+ return snprintf(buf, PAGE_SIZE, "%u\n",
+ device_status.flash_prog);
+}
+
+static ssize_t synaptics_rmi4_0dbutton_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct synaptics_rmi4_data *rmi4_data = dev_get_drvdata(dev);
+
+ return snprintf(buf, PAGE_SIZE, "%u\n",
+ rmi4_data->button_0d_enabled);
+}
+
+static ssize_t synaptics_rmi4_0dbutton_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ int retval;
+ unsigned int input;
+ unsigned char ii;
+ unsigned char intr_enable;
+ struct synaptics_rmi4_fn *fhandler;
+ struct synaptics_rmi4_data *rmi4_data = dev_get_drvdata(dev);
+ struct synaptics_rmi4_device_info *rmi;
+
+ rmi = &(rmi4_data->rmi4_mod_info);
+
+ if (sscanf(buf, "%u", &input) != 1)
+ return -EINVAL;
+
+ input = input > 0 ? 1 : 0;
+
+ if (rmi4_data->button_0d_enabled == input)
+ return count;
+
+ if (list_empty(&rmi->support_fn_list))
+ return -ENODEV;
+
+ list_for_each_entry(fhandler, &rmi->support_fn_list, link) {
+ if (fhandler->fn_number == SYNAPTICS_RMI4_F1A) {
+ ii = fhandler->intr_reg_num;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ rmi4_data->f01_ctrl_base_addr + 1 + ii,
+ &intr_enable,
+ sizeof(intr_enable));
+ if (retval < 0)
+ return retval;
+
+ if (input == 1)
+ intr_enable |= fhandler->intr_mask;
+ else
+ intr_enable &= ~fhandler->intr_mask;
+
+ retval = synaptics_rmi4_reg_write(rmi4_data,
+ rmi4_data->f01_ctrl_base_addr + 1 + ii,
+ &intr_enable,
+ sizeof(intr_enable));
+ if (retval < 0)
+ return retval;
+ }
+ }
+
+ rmi4_data->button_0d_enabled = input;
+
+ return count;
+}
+
+static ssize_t synaptics_rmi4_suspend_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ unsigned int input;
+
+ if (sscanf(buf, "%u", &input) != 1)
+ return -EINVAL;
+
+ if (input == 1)
+ synaptics_rmi4_suspend(dev);
+ else if (input == 0)
+ synaptics_rmi4_resume(dev);
+ else
+ return -EINVAL;
+
+ return count;
+}
+
+static ssize_t synaptics_rmi4_wake_gesture_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct synaptics_rmi4_data *rmi4_data = dev_get_drvdata(dev);
+
+ buf[0] = '0' + rmi4_data->enable_wakeup_gesture;
+ buf[1] = '\n';
+ buf[2] = 0;
+ return 2;
+}
+
+static ssize_t synaptics_rmi4_wake_gesture_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ unsigned int input;
+ struct synaptics_rmi4_data *rmi4_data = dev_get_drvdata(dev);
+
+ switch (buf[0]) {
+ case '0':
+ input = 0;
+ break;
+ case '1':
+ input = 1;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ if (rmi4_data->f11_wakeup_gesture || rmi4_data->f12_wakeup_gesture)
+ rmi4_data->enable_wakeup_gesture = input;
+
+ return count;
+}
+
+static ssize_t synaptics_rmi4_interactive_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct synaptics_rmi4_data *rmi4_data = dev_get_drvdata(dev);
+
+ buf[0] = '0' + rmi4_data->interactive;
+ buf[1] = '\n';
+ buf[2] = 0;
+ return 2;
+}
+
+static ssize_t synaptics_rmi4_interactive_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ struct synaptics_rmi4_data *rmi4_data = dev_get_drvdata(dev);
+
+ if (strtobool(buf, &rmi4_data->interactive))
+ return -EINVAL;
+
+ if (rmi4_data->interactive) {
+ dev_dbg(rmi4_data->pdev->dev.parent, " %s: resume\n", __func__);
+ synaptics_rmi4_late_resume(&(rmi4_data->input_dev->dev));
+ } else {
+ dev_dbg(rmi4_data->pdev->dev.parent, " %s: suspend\n", __func__);
+ synaptics_rmi4_early_suspend((&rmi4_data->input_dev->dev));
+ }
+ return count;
+}
+
+static unsigned short synaptics_sqrt(unsigned int num)
+{
+ unsigned short root, remainder, place;
+
+ root = 0;
+ remainder = num;
+ place = 0x4000;
+
+ while (place > remainder)
+ place = place >> 2;
+
+ while (place) {
+ if (remainder >= root + place) {
+ remainder = remainder - root - place;
+ root = root + (place << 1);
+ }
+ root = root >> 1;
+ place = place >> 2;
+ }
+
+ return root;
+}
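+/*
+ * Worked example (illustrative, not from the original source): with the
+ * maximum F11 touch widths wx = wy = 15, wx*wx + wy*wy = 450, and
+ * synaptics_sqrt(450) = 21, since 21 * 21 = 441 <= 450 < 484 = 22 * 22.
+ * The routine computes the integer square root digit-by-digit in base 4,
+ * starting from the highest even bit position (0x4000) that fits a
+ * 16-bit operand.
+ */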
+
+static int synaptics_rmi4_f11_abs_report(struct synaptics_rmi4_data *rmi4_data,
+ struct synaptics_rmi4_fn *fhandler)
+{
+ int retval;
+ unsigned char touch_count = 0; /* number of touch points */
+ unsigned char reg_index;
+ unsigned char finger;
+ unsigned char fingers_supported;
+ unsigned char num_of_finger_status_regs;
+ unsigned char finger_shift;
+ unsigned char finger_status;
+ unsigned char data_reg_blk_size;
+ unsigned char finger_status_reg[3];
+ unsigned char data[F11_STD_DATA_LEN];
+ unsigned char detected_gestures;
+ unsigned short data_addr;
+ unsigned short data_offset;
+ int x;
+ int y;
+ int z;
+ int wx;
+ int wy;
+ int temp;
+ struct synaptics_rmi4_f11_extra_data *extra_data;
+
+ /*
+ * The number of finger status registers is determined by the
+ * maximum number of fingers supported - 2 bits per finger. So
+ * the number of finger status registers to read is:
+ * register_count = ceil(max_num_of_fingers / 4)
+ */
+ fingers_supported = fhandler->num_of_data_points;
+ num_of_finger_status_regs = (fingers_supported + 3) / 4;
+ data_addr = fhandler->full_addr.data_base;
+ data_reg_blk_size = fhandler->size_of_data_register_block;
+ extra_data = (struct synaptics_rmi4_f11_extra_data *)fhandler->extra;
+
+ if (rmi4_data->suspend && rmi4_data->enable_wakeup_gesture) {
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ data_addr + extra_data->data38_offset,
+ &detected_gestures,
+ sizeof(detected_gestures));
+ if (retval < 0)
+ return 0;
+
+ if (detected_gestures) {
+ input_report_key(rmi4_data->input_dev, KEY_WAKEUP, 1);
+ input_sync(rmi4_data->input_dev);
+ input_report_key(rmi4_data->input_dev, KEY_WAKEUP, 0);
+ input_sync(rmi4_data->input_dev);
+ rmi4_data->suspend = false;
+ }
+
+ return 0;
+ }
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ data_addr,
+ finger_status_reg,
+ num_of_finger_status_regs);
+ if (retval < 0)
+ return 0;
+
+ mutex_lock(&(rmi4_data->rmi4_report_mutex));
+
+ for (finger = 0; finger < fingers_supported; finger++) {
+ reg_index = finger / 4;
+ finger_shift = (finger % 4) * 2;
+ finger_status = (finger_status_reg[reg_index] >> finger_shift)
+ & MASK_2BIT;
+
+ /*
+ * Each 2-bit finger status field represents the following:
+ * 00 = finger not present
+ * 01 = finger present and data accurate
+ * 10 = finger present but data may be inaccurate
+ * 11 = reserved
+ */
+#ifdef TYPE_B_PROTOCOL
+ input_mt_slot(rmi4_data->input_dev, finger);
+ input_mt_report_slot_state(rmi4_data->input_dev,
+ MT_TOOL_FINGER, finger_status);
+#endif
+
+ if (finger_status) {
+ data_offset = data_addr +
+ num_of_finger_status_regs +
+ (finger * data_reg_blk_size);
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ data_offset,
+ data,
+ data_reg_blk_size);
+ if (retval < 0) {
+ touch_count = 0;
+ goto exit;
+ }
+
+ x = (data[0] << 4) | (data[2] & MASK_4BIT);
+ y = (data[1] << 4) | ((data[2] >> 4) & MASK_4BIT);
+ wx = (data[3] & MASK_4BIT);
+ wy = (data[3] >> 4) & MASK_4BIT;
+#ifdef REPORT_2D_Z
+ z = data[4];
+#endif
+
+ if (rmi4_data->hw_if->board_data->swap_axes) {
+ temp = x;
+ x = y;
+ y = temp;
+ temp = wx;
+ wx = wy;
+ wy = temp;
+ }
+
+ if (rmi4_data->hw_if->board_data->x_flip)
+ x = rmi4_data->sensor_max_x - x;
+ if (rmi4_data->hw_if->board_data->y_flip)
+ y = rmi4_data->sensor_max_y - y;
+
+ input_report_abs(rmi4_data->input_dev,
+ ABS_MT_POSITION_X, x);
+ input_report_abs(rmi4_data->input_dev,
+ ABS_MT_POSITION_Y, y);
+#ifdef REPORT_2D_Z
+ input_report_abs(rmi4_data->input_dev,
+ ABS_MT_PRESSURE, z);
+#endif
+#ifdef REPORT_2D_W
+ input_report_abs(rmi4_data->input_dev,
+ ABS_MT_TOUCH_MAJOR, synaptics_sqrt(wx*wx + wy*wy));
+ input_report_abs(rmi4_data->input_dev,
+ ABS_MT_TOUCH_MINOR, min(wx, wy));
+#endif
+#ifndef TYPE_B_PROTOCOL
+ input_mt_sync(rmi4_data->input_dev);
+#endif
+
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Finger %d:\n"
+ "status = 0x%02x\n"
+ "x = %d\n"
+ "y = %d\n"
+ "z = %d\n"
+ "wx = %d\n"
+ "wy = %d\n",
+ __func__, finger,
+ finger_status,
+ x, y, z, wx, wy);
+
+ touch_count++;
+ }
+ }
+
+ if (touch_count == 0) {
+#ifndef TYPE_B_PROTOCOL
+ input_mt_sync(rmi4_data->input_dev);
+#endif
+ }
+
+ input_sync(rmi4_data->input_dev);
+
+exit:
+ mutex_unlock(&(rmi4_data->rmi4_report_mutex));
+
+ return touch_count;
+}
+
+static int synaptics_rmi4_f12_abs_report(struct synaptics_rmi4_data *rmi4_data,
+ struct synaptics_rmi4_fn *fhandler)
+{
+ int retval;
+ unsigned char touch_count = 0; /* number of touch points */
+ unsigned char finger;
+ unsigned char fingers_to_process;
+ unsigned char finger_status;
+ unsigned char size_of_2d_data;
+ unsigned char detected_gestures;
+ unsigned short data_addr;
+ int x;
+ int y;
+ int z;
+ int wx;
+ int wy;
+ int temp;
+ struct synaptics_rmi4_f12_extra_data *extra_data;
+ struct synaptics_rmi4_f12_finger_data *data;
+ struct synaptics_rmi4_f12_finger_data *finger_data;
+#ifdef F12_DATA_15_WORKAROUND
+ static unsigned char fingers_already_present;
+#endif
+
+ fingers_to_process = fhandler->num_of_data_points;
+ data_addr = fhandler->full_addr.data_base;
+ extra_data = (struct synaptics_rmi4_f12_extra_data *)fhandler->extra;
+ size_of_2d_data = sizeof(struct synaptics_rmi4_f12_finger_data);
+
+ if (rmi4_data->suspend && rmi4_data->enable_wakeup_gesture) {
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: enable_wakeup_gesture\n", __func__);
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ data_addr + extra_data->data4_offset,
+ &detected_gestures,
+ sizeof(detected_gestures));
+ if (retval < 0)
+ return 0;
+
+ if (detected_gestures) {
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: detected_gestures\n", __func__);
+ input_report_key(rmi4_data->input_dev, KEY_WAKEUP, 1);
+ input_sync(rmi4_data->input_dev);
+ input_report_key(rmi4_data->input_dev, KEY_WAKEUP, 0);
+ input_sync(rmi4_data->input_dev);
+ rmi4_data->suspend = false;
+ }
+
+ return 0;
+ }
+
+ /* Determine the total number of fingers to process */
+ if (extra_data->data15_size) {
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ data_addr + extra_data->data15_offset,
+ extra_data->data15_data,
+ extra_data->data15_size);
+ if (retval < 0)
+ return 0;
+
+ /* Start checking from the highest bit */
+ temp = extra_data->data15_size - 1; /* Highest byte */
+ finger = (fingers_to_process - 1) % 8; /* Highest bit */
+ do {
+ if (extra_data->data15_data[temp] & (1 << finger))
+ break;
+
+ if (finger) {
+ finger--;
+ } else {
+ temp--; /* Move to the next lower byte */
+ finger = 7;
+ }
+
+ fingers_to_process--;
+ } while (fingers_to_process);
+
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Number of fingers to process = %d\n",
+ __func__, fingers_to_process);
+ }
+
+#ifdef F12_DATA_15_WORKAROUND
+ fingers_to_process = max(fingers_to_process, fingers_already_present);
+#endif
+
+ if (!fingers_to_process) {
+ synaptics_rmi4_free_fingers(rmi4_data);
+ return 0;
+ }
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ data_addr + extra_data->data1_offset,
+ (unsigned char *)fhandler->data,
+ fingers_to_process * size_of_2d_data);
+ if (retval < 0)
+ return 0;
+
+ data = (struct synaptics_rmi4_f12_finger_data *)fhandler->data;
+
+ mutex_lock(&(rmi4_data->rmi4_report_mutex));
+
+ for (finger = 0; finger < fingers_to_process; finger++) {
+ finger_data = data + finger;
+ finger_status = finger_data->object_type_and_status;
+
+ switch (finger_status) {
+ case F12_FINGER_STATUS:
+ case F12_GLOVED_FINGER_STATUS:
+#ifdef TYPE_B_PROTOCOL
+ input_mt_slot(rmi4_data->input_dev, finger);
+ input_mt_report_slot_state(rmi4_data->input_dev,
+ MT_TOOL_FINGER, 1);
+#endif
+
+#ifdef F12_DATA_15_WORKAROUND
+ fingers_already_present = finger + 1;
+#endif
+
+ x = (finger_data->x_msb << 8) | (finger_data->x_lsb);
+ y = (finger_data->y_msb << 8) | (finger_data->y_lsb);
+#ifdef REPORT_2D_Z
+ z = finger_data->z;
+#endif
+#ifdef REPORT_2D_W
+ wx = finger_data->wx;
+ wy = finger_data->wy;
+#endif
+
+ if (rmi4_data->hw_if->board_data->swap_axes) {
+ temp = x;
+ x = y;
+ y = temp;
+ temp = wx;
+ wx = wy;
+ wy = temp;
+ }
+
+ if (rmi4_data->hw_if->board_data->x_flip)
+ x = rmi4_data->sensor_max_x - x;
+ if (rmi4_data->hw_if->board_data->y_flip)
+ y = rmi4_data->sensor_max_y - y;
+
+ input_report_abs(rmi4_data->input_dev,
+ ABS_MT_POSITION_X, x);
+ input_report_abs(rmi4_data->input_dev,
+ ABS_MT_POSITION_Y, y);
+#ifdef REPORT_2D_Z
+ input_report_abs(rmi4_data->input_dev,
+ ABS_MT_PRESSURE, z);
+#endif
+#ifdef REPORT_2D_W
+ input_report_abs(rmi4_data->input_dev,
+ ABS_MT_TOUCH_MAJOR, synaptics_sqrt(wx*wx + wy*wy));
+ input_report_abs(rmi4_data->input_dev,
+ ABS_MT_TOUCH_MINOR, min(wx, wy));
+#endif
+#ifndef TYPE_B_PROTOCOL
+ input_mt_sync(rmi4_data->input_dev);
+#endif
+
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Finger %d:\n"
+ "status = 0x%02x\n"
+ "x = %d\n"
+ "y = %d\n"
+ "z = %d\n"
+ "wx = %d\n"
+ "wy = %d\n",
+ __func__, finger,
+ finger_status,
+ x, y, z, wx, wy);
+
+ touch_count++;
+ break;
+ default:
+#ifdef TYPE_B_PROTOCOL
+ input_mt_slot(rmi4_data->input_dev, finger);
+ input_mt_report_slot_state(rmi4_data->input_dev,
+ MT_TOOL_FINGER, 0);
+#endif
+ break;
+ }
+ }
+
+ if (touch_count == 0) {
+#ifdef F12_DATA_15_WORKAROUND
+ fingers_already_present = 0;
+#endif
+#ifndef TYPE_B_PROTOCOL
+ input_mt_sync(rmi4_data->input_dev);
+#endif
+ }
+
+ input_sync(rmi4_data->input_dev);
+
+ mutex_unlock(&(rmi4_data->rmi4_report_mutex));
+
+ return touch_count;
+}
+
+static void synaptics_rmi4_f1a_report(struct synaptics_rmi4_data *rmi4_data,
+ struct synaptics_rmi4_fn *fhandler)
+{
+ int retval;
+ unsigned char touch_count = 0;
+ unsigned char button;
+ unsigned char index;
+ unsigned char shift;
+ unsigned char status;
+ unsigned char *data;
+ unsigned short data_addr = fhandler->full_addr.data_base;
+ struct synaptics_rmi4_f1a_handle *f1a = fhandler->data;
+ static unsigned char do_once = 1;
+ static bool current_status[MAX_NUMBER_OF_BUTTONS];
+#ifdef NO_0D_WHILE_2D
+ static bool before_2d_status[MAX_NUMBER_OF_BUTTONS];
+ static bool while_2d_status[MAX_NUMBER_OF_BUTTONS];
+#endif
+
+ if (do_once) {
+ memset(current_status, 0, sizeof(current_status));
+#ifdef NO_0D_WHILE_2D
+ memset(before_2d_status, 0, sizeof(before_2d_status));
+ memset(while_2d_status, 0, sizeof(while_2d_status));
+#endif
+ do_once = 0;
+ }
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ data_addr,
+ f1a->button_data_buffer,
+ f1a->button_bitmask_size);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to read button data registers\n",
+ __func__);
+ return;
+ }
+
+ data = f1a->button_data_buffer;
+
+ mutex_lock(&(rmi4_data->rmi4_report_mutex));
+
+ for (button = 0; button < f1a->valid_button_count; button++) {
+ index = button / 8;
+ shift = button % 8;
+ status = ((data[index] >> shift) & MASK_1BIT);
+
+ if (current_status[button] == status)
+ continue;
+ else
+ current_status[button] = status;
+
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Button %d (code %d) ->%d\n",
+ __func__, button,
+ f1a->button_map[button],
+ status);
+#ifdef NO_0D_WHILE_2D
+ if (rmi4_data->fingers_on_2d == false) {
+ if (status == 1) {
+ before_2d_status[button] = 1;
+ } else {
+ if (while_2d_status[button] == 1) {
+ while_2d_status[button] = 0;
+ continue;
+ } else {
+ before_2d_status[button] = 0;
+ }
+ }
+ touch_count++;
+ input_report_key(rmi4_data->input_dev,
+ f1a->button_map[button],
+ status);
+ } else {
+ if (before_2d_status[button] == 1) {
+ before_2d_status[button] = 0;
+ touch_count++;
+ input_report_key(rmi4_data->input_dev,
+ f1a->button_map[button],
+ status);
+ } else {
+ if (status == 1)
+ while_2d_status[button] = 1;
+ else
+ while_2d_status[button] = 0;
+ }
+ }
+#else
+ touch_count++;
+ input_report_key(rmi4_data->input_dev,
+ f1a->button_map[button],
+ status);
+#endif
+ }
+
+ if (touch_count)
+ input_sync(rmi4_data->input_dev);
+
+ mutex_unlock(&(rmi4_data->rmi4_report_mutex));
+
+ return;
+}
+
+static void synaptics_rmi4_report_touch(struct synaptics_rmi4_data *rmi4_data,
+ struct synaptics_rmi4_fn *fhandler)
+{
+ unsigned char touch_count_2d;
+
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Function %02x reporting\n",
+ __func__, fhandler->fn_number);
+
+ switch (fhandler->fn_number) {
+ case SYNAPTICS_RMI4_F11:
+ touch_count_2d = synaptics_rmi4_f11_abs_report(rmi4_data,
+ fhandler);
+
+ if (touch_count_2d)
+ rmi4_data->fingers_on_2d = true;
+ else
+ rmi4_data->fingers_on_2d = false;
+ break;
+ case SYNAPTICS_RMI4_F12:
+ touch_count_2d = synaptics_rmi4_f12_abs_report(rmi4_data,
+ fhandler);
+
+ if (touch_count_2d)
+ rmi4_data->fingers_on_2d = true;
+ else
+ rmi4_data->fingers_on_2d = false;
+ break;
+ case SYNAPTICS_RMI4_F1A:
+ synaptics_rmi4_f1a_report(rmi4_data, fhandler);
+ break;
+ default:
+ break;
+ }
+
+ return;
+}
+
+static void synaptics_rmi4_sensor_report(struct synaptics_rmi4_data *rmi4_data)
+{
+ int retval;
+ unsigned char data[MAX_INTR_REGISTERS + 1];
+ unsigned char *intr = &data[1];
+ struct synaptics_rmi4_f01_device_status status;
+ struct synaptics_rmi4_fn *fhandler;
+ struct synaptics_rmi4_exp_fhandler *exp_fhandler;
+ struct synaptics_rmi4_device_info *rmi;
+
+ rmi = &(rmi4_data->rmi4_mod_info);
+
+ /*
+ * Get interrupt status information from F01 Data1 register to
+ * determine the source(s) that are flagging the interrupt.
+ */
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ rmi4_data->f01_data_base_addr,
+ data,
+ rmi4_data->num_of_intr_regs + 1);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to read interrupt status\n",
+ __func__);
+ return;
+ }
+
+ status.data[0] = data[0];
+ if (status.unconfigured && !status.flash_prog) {
+ pr_notice("%s: spontaneous reset detected\n", __func__);
+ retval = synaptics_rmi4_reinit_device(rmi4_data);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to reinit device\n",
+ __func__);
+ }
+ }
+
+ /*
+ * Traverse the function handler list and service the source(s)
+ * of the interrupt accordingly.
+ */
+ if (!list_empty(&rmi->support_fn_list)) {
+ list_for_each_entry(fhandler, &rmi->support_fn_list, link) {
+ if (fhandler->num_of_data_sources) {
+ if (fhandler->intr_mask &
+ intr[fhandler->intr_reg_num]) {
+ synaptics_rmi4_report_touch(rmi4_data,
+ fhandler);
+ }
+ }
+ }
+ }
+
+ mutex_lock(&exp_data.mutex);
+ if (!list_empty(&exp_data.list)) {
+ list_for_each_entry(exp_fhandler, &exp_data.list, link) {
+ if (!exp_fhandler->insert &&
+ !exp_fhandler->remove &&
+ (exp_fhandler->exp_fn->attn != NULL))
+ exp_fhandler->exp_fn->attn(rmi4_data, intr[0]);
+ }
+ }
+ mutex_unlock(&exp_data.mutex);
+
+ return;
+}
+
+static irqreturn_t synaptics_rmi4_irq(int irq, void *data)
+{
+ struct synaptics_rmi4_data *rmi4_data = data;
+ const struct synaptics_dsx_board_data *bdata =
+ rmi4_data->hw_if->board_data;
+
+ if (gpio_get_value(bdata->irq_gpio) != bdata->irq_on_state)
+ goto exit;
+
+ synaptics_rmi4_sensor_report(rmi4_data);
+
+exit:
+ return IRQ_HANDLED;
+}
+
+static int synaptics_rmi4_int_enable(struct synaptics_rmi4_data *rmi4_data,
+ bool enable)
+{
+ int retval = 0;
+ unsigned char ii;
+ unsigned char zero = 0x00;
+ unsigned char *intr_mask;
+ unsigned short intr_addr;
+
+ intr_mask = rmi4_data->intr_mask;
+
+ for (ii = 0; ii < rmi4_data->num_of_intr_regs; ii++) {
+ if (intr_mask[ii] != 0x00) {
+ intr_addr = rmi4_data->f01_ctrl_base_addr + 1 + ii;
+ if (enable) {
+ retval = synaptics_rmi4_reg_write(rmi4_data,
+ intr_addr,
+ &(intr_mask[ii]),
+ sizeof(intr_mask[ii]));
+ if (retval < 0)
+ return retval;
+ } else {
+ retval = synaptics_rmi4_reg_write(rmi4_data,
+ intr_addr,
+ &zero,
+ sizeof(zero));
+ if (retval < 0)
+ return retval;
+ }
+ }
+ }
+
+ return retval;
+}
+
+static int synaptics_rmi4_irq_enable(struct synaptics_rmi4_data *rmi4_data,
+ bool enable)
+{
+ int retval = 0;
+
+ if (enable) {
+ if (rmi4_data->irq_enabled)
+ return retval;
+
+ retval = synaptics_rmi4_int_enable(rmi4_data, false);
+ if (retval < 0)
+ return retval;
+
+ /* Process and clear interrupts */
+ synaptics_rmi4_sensor_report(rmi4_data);
+
+ enable_irq(rmi4_data->irq);
+
+ retval = synaptics_rmi4_int_enable(rmi4_data, true);
+ if (retval < 0)
+ return retval;
+
+ rmi4_data->irq_enabled = true;
+ } else {
+ if (rmi4_data->irq_enabled) {
+ disable_irq(rmi4_data->irq);
+ rmi4_data->irq_enabled = false;
+ }
+ }
+
+ return retval;
+}
+
+static void synaptics_rmi4_set_intr_mask(struct synaptics_rmi4_fn *fhandler,
+ struct synaptics_rmi4_fn_desc *fd,
+ unsigned int intr_count)
+{
+ unsigned char ii;
+ unsigned char intr_offset;
+
+ fhandler->intr_reg_num = (intr_count + 7) / 8;
+ if (fhandler->intr_reg_num != 0)
+ fhandler->intr_reg_num -= 1;
+
+ /*
+ * Set an enable bit for each data source. For example, with
+ * intr_count = 10 and intr_src_count = 2 this yields
+ * intr_reg_num = 1, intr_offset = 2, intr_mask = 0x0c.
+ */
+ intr_offset = intr_count % 8;
+ fhandler->intr_mask = 0;
+ for (ii = intr_offset;
+ ii < ((fd->intr_src_count & MASK_3BIT) +
+ intr_offset);
+ ii++)
+ fhandler->intr_mask |= 1 << ii;
+
+ return;
+}
+
+static int synaptics_rmi4_f01_init(struct synaptics_rmi4_data *rmi4_data,
+ struct synaptics_rmi4_fn *fhandler,
+ struct synaptics_rmi4_fn_desc *fd,
+ unsigned int intr_count)
+{
+ fhandler->fn_number = fd->fn_number;
+ fhandler->num_of_data_sources = fd->intr_src_count;
+ fhandler->data = NULL;
+ fhandler->extra = NULL;
+
+ synaptics_rmi4_set_intr_mask(fhandler, fd, intr_count);
+
+ rmi4_data->f01_query_base_addr = fd->query_base_addr;
+ rmi4_data->f01_ctrl_base_addr = fd->ctrl_base_addr;
+ rmi4_data->f01_data_base_addr = fd->data_base_addr;
+ rmi4_data->f01_cmd_base_addr = fd->cmd_base_addr;
+
+ return 0;
+}
+
+static int synaptics_rmi4_f11_init(struct synaptics_rmi4_data *rmi4_data,
+ struct synaptics_rmi4_fn *fhandler,
+ struct synaptics_rmi4_fn_desc *fd,
+ unsigned int intr_count)
+{
+ int retval;
+ unsigned char abs_data_size;
+ unsigned char abs_data_blk_size;
+ unsigned char query[F11_STD_QUERY_LEN];
+ unsigned char control[F11_STD_CTRL_LEN];
+
+ fhandler->fn_number = fd->fn_number;
+ fhandler->num_of_data_sources = fd->intr_src_count;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ fhandler->full_addr.query_base,
+ query,
+ sizeof(query));
+ if (retval < 0)
+ return retval;
+
+ /* Maximum number of fingers supported */
+ if ((query[1] & MASK_3BIT) <= 4)
+ fhandler->num_of_data_points = (query[1] & MASK_3BIT) + 1;
+ else if ((query[1] & MASK_3BIT) == 5)
+ fhandler->num_of_data_points = 10;
+
+ rmi4_data->num_of_fingers = fhandler->num_of_data_points;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ fhandler->full_addr.ctrl_base,
+ control,
+ sizeof(control));
+ if (retval < 0)
+ return retval;
+
+ /* Maximum x, y and z */
+ rmi4_data->sensor_max_x = ((control[6] & MASK_8BIT) << 0) |
+ ((control[7] & MASK_4BIT) << 8);
+ rmi4_data->sensor_max_y = ((control[8] & MASK_8BIT) << 0) |
+ ((control[9] & MASK_4BIT) << 8);
+#ifdef REPORT_2D_Z
+ rmi4_data->sensor_max_z = 255;
+#endif
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Function %02x max x = %d max y = %d max z = %d\n",
+ __func__, fhandler->fn_number,
+ rmi4_data->sensor_max_x,
+ rmi4_data->sensor_max_y,
+ rmi4_data->sensor_max_z);
+
+ rmi4_data->max_touch_width = synaptics_sqrt(
+ MAX_F11_TOUCH_WIDTH * MAX_F11_TOUCH_WIDTH * 2);
+
+ synaptics_rmi4_set_intr_mask(fhandler, fd, intr_count);
+
+ abs_data_size = query[5] & MASK_2BIT;
+ abs_data_blk_size = 3 + (2 * (abs_data_size == 0 ? 1 : 0));
+ fhandler->size_of_data_register_block = abs_data_blk_size;
+ fhandler->data = NULL;
+ fhandler->extra = NULL;
+
+ return retval;
+}
+
+static int synaptics_rmi4_f12_set_enables(struct synaptics_rmi4_data *rmi4_data,
+ unsigned short ctrl28)
+{
+ static unsigned short ctrl_28_address;
+
+ if (ctrl28)
+ ctrl_28_address = ctrl28;
+
+ return synaptics_rmi4_reg_write(rmi4_data,
+ ctrl_28_address,
+ &rmi4_data->report_enable,
+ sizeof(rmi4_data->report_enable));
+}
+
+static int synaptics_rmi4_f12_init(struct synaptics_rmi4_data *rmi4_data,
+ struct synaptics_rmi4_fn *fhandler,
+ struct synaptics_rmi4_fn_desc *fd,
+ unsigned int intr_count)
+{
+ int retval;
+ unsigned char size_of_2d_data;
+ unsigned char size_of_query8;
+ unsigned char ctrl_8_offset;
+ unsigned char ctrl_20_offset;
+ unsigned char ctrl_23_offset;
+ unsigned char ctrl_28_offset;
+ unsigned char num_of_fingers;
+ struct synaptics_rmi4_f12_extra_data *extra_data;
+ struct synaptics_rmi4_f12_query_5 query_5;
+ struct synaptics_rmi4_f12_query_8 query_8;
+ struct synaptics_rmi4_f12_ctrl_8 ctrl_8;
+ struct synaptics_rmi4_f12_ctrl_23 ctrl_23;
+
+ fhandler->fn_number = fd->fn_number;
+ fhandler->num_of_data_sources = fd->intr_src_count;
+ fhandler->extra = kmalloc(sizeof(*extra_data), GFP_KERNEL);
+ if (!fhandler->extra)
+ return -ENOMEM;
+ extra_data = (struct synaptics_rmi4_f12_extra_data *)fhandler->extra;
+ size_of_2d_data = sizeof(struct synaptics_rmi4_f12_finger_data);
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ fhandler->full_addr.query_base + 5,
+ query_5.data,
+ sizeof(query_5.data));
+ if (retval < 0)
+ return retval;
+
+ ctrl_8_offset = query_5.ctrl0_is_present +
+ query_5.ctrl1_is_present +
+ query_5.ctrl2_is_present +
+ query_5.ctrl3_is_present +
+ query_5.ctrl4_is_present +
+ query_5.ctrl5_is_present +
+ query_5.ctrl6_is_present +
+ query_5.ctrl7_is_present;
+
+ ctrl_20_offset = ctrl_8_offset +
+ query_5.ctrl8_is_present +
+ query_5.ctrl9_is_present +
+ query_5.ctrl10_is_present +
+ query_5.ctrl11_is_present +
+ query_5.ctrl12_is_present +
+ query_5.ctrl13_is_present +
+ query_5.ctrl14_is_present +
+ query_5.ctrl15_is_present +
+ query_5.ctrl16_is_present +
+ query_5.ctrl17_is_present +
+ query_5.ctrl18_is_present +
+ query_5.ctrl19_is_present;
+
+ ctrl_23_offset = ctrl_20_offset +
+ query_5.ctrl20_is_present +
+ query_5.ctrl21_is_present +
+ query_5.ctrl22_is_present;
+
+ ctrl_28_offset = ctrl_23_offset +
+ query_5.ctrl23_is_present +
+ query_5.ctrl24_is_present +
+ query_5.ctrl25_is_present +
+ query_5.ctrl26_is_present +
+ query_5.ctrl27_is_present;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ fhandler->full_addr.ctrl_base + ctrl_23_offset,
+ ctrl_23.data,
+ sizeof(ctrl_23.data));
+ if (retval < 0)
+ return retval;
+
+ /* Maximum number of fingers supported */
+ fhandler->num_of_data_points = min(ctrl_23.max_reported_objects,
+ (unsigned char)F12_FINGERS_TO_SUPPORT);
+
+ num_of_fingers = fhandler->num_of_data_points;
+ rmi4_data->num_of_fingers = num_of_fingers;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ fhandler->full_addr.query_base + 7,
+ &size_of_query8,
+ sizeof(size_of_query8));
+ if (retval < 0)
+ return retval;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ fhandler->full_addr.query_base + 8,
+ query_8.data,
+ size_of_query8);
+ if (retval < 0)
+ return retval;
+
+ /* The presence of the Data0 register shifts the offset of Data1 */
+ extra_data->data1_offset = query_8.data0_is_present;
+
+ if ((size_of_query8 >= 3) && (query_8.data15_is_present)) {
+ extra_data->data15_offset = query_8.data0_is_present +
+ query_8.data1_is_present +
+ query_8.data2_is_present +
+ query_8.data3_is_present +
+ query_8.data4_is_present +
+ query_8.data5_is_present +
+ query_8.data6_is_present +
+ query_8.data7_is_present +
+ query_8.data8_is_present +
+ query_8.data9_is_present +
+ query_8.data10_is_present +
+ query_8.data11_is_present +
+ query_8.data12_is_present +
+ query_8.data13_is_present +
+ query_8.data14_is_present;
+ extra_data->data15_size = (num_of_fingers + 7) / 8;
+ } else {
+ extra_data->data15_size = 0;
+ }
+
+ rmi4_data->report_enable = RPT_DEFAULT;
+#ifdef REPORT_2D_Z
+ rmi4_data->report_enable |= RPT_Z;
+#endif
+#ifdef REPORT_2D_W
+ rmi4_data->report_enable |= (RPT_WX | RPT_WY);
+#endif
+
+ retval = synaptics_rmi4_f12_set_enables(rmi4_data,
+ fhandler->full_addr.ctrl_base + ctrl_28_offset);
+ if (retval < 0)
+ return retval;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ fhandler->full_addr.ctrl_base + ctrl_8_offset,
+ ctrl_8.data,
+ sizeof(ctrl_8.data));
+ if (retval < 0)
+ return retval;
+
+ /* Maximum x, y and z */
+ rmi4_data->sensor_max_x =
+ ((unsigned short)ctrl_8.max_x_coord_lsb << 0) |
+ ((unsigned short)ctrl_8.max_x_coord_msb << 8);
+ rmi4_data->sensor_max_y =
+ ((unsigned short)ctrl_8.max_y_coord_lsb << 0) |
+ ((unsigned short)ctrl_8.max_y_coord_msb << 8);
+#ifdef REPORT_2D_Z
+ rmi4_data->sensor_max_z = 255;
+#endif
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Function %02x max x = %d max y = %d max z = %d\n",
+ __func__, fhandler->fn_number,
+ rmi4_data->sensor_max_x,
+ rmi4_data->sensor_max_y,
+ rmi4_data->sensor_max_z);
+
+ rmi4_data->num_of_rx = ctrl_8.num_of_rx;
+ rmi4_data->num_of_tx = ctrl_8.num_of_tx;
+ rmi4_data->max_touch_width = synaptics_sqrt(
+ rmi4_data->num_of_rx*rmi4_data->num_of_rx +
+ rmi4_data->num_of_tx*rmi4_data->num_of_tx);
+
+ rmi4_data->f12_wakeup_gesture = query_5.ctrl27_is_present;
+ if (rmi4_data->f12_wakeup_gesture) {
+ extra_data->ctrl20_offset = ctrl_20_offset;
+ extra_data->data4_offset = query_8.data0_is_present +
+ query_8.data1_is_present +
+ query_8.data2_is_present +
+ query_8.data3_is_present;
+ }
+
+ synaptics_rmi4_set_intr_mask(fhandler, fd, intr_count);
+
+ /* Allocate memory for finger data storage space */
+ fhandler->data_size = num_of_fingers * size_of_2d_data;
+ fhandler->data = kmalloc(fhandler->data_size, GFP_KERNEL);
+ if (!fhandler->data)
+ return -ENOMEM;
+
+ return retval;
+}
+
+static int synaptics_rmi4_f1a_alloc_mem(struct synaptics_rmi4_data *rmi4_data,
+ struct synaptics_rmi4_fn *fhandler)
+{
+ int retval;
+ struct synaptics_rmi4_f1a_handle *f1a;
+
+ f1a = kzalloc(sizeof(*f1a), GFP_KERNEL);
+ if (!f1a) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to alloc mem for function handle\n",
+ __func__);
+ return -ENOMEM;
+ }
+
+ fhandler->data = (void *)f1a;
+ fhandler->extra = NULL;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ fhandler->full_addr.query_base,
+ f1a->button_query.data,
+ sizeof(f1a->button_query.data));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to read query registers\n",
+ __func__);
+ return retval;
+ }
+
+ f1a->max_count = f1a->button_query.max_button_count + 1;
+
+ f1a->button_control.txrx_map = kzalloc(f1a->max_count * 2, GFP_KERNEL);
+ if (!f1a->button_control.txrx_map) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to alloc mem for tx rx mapping\n",
+ __func__);
+ return -ENOMEM;
+ }
+
+ f1a->button_bitmask_size = (f1a->max_count + 7) / 8;
+
+ f1a->button_data_buffer = kcalloc(f1a->button_bitmask_size,
+ sizeof(*(f1a->button_data_buffer)), GFP_KERNEL);
+ if (!f1a->button_data_buffer) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to alloc mem for data buffer\n",
+ __func__);
+ return -ENOMEM;
+ }
+
+ f1a->button_map = kcalloc(f1a->max_count,
+ sizeof(*(f1a->button_map)), GFP_KERNEL);
+ if (!f1a->button_map) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to alloc mem for button map\n",
+ __func__);
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static int synaptics_rmi4_f1a_button_map(struct synaptics_rmi4_data *rmi4_data,
+ struct synaptics_rmi4_fn *fhandler)
+{
+ int retval;
+ unsigned char ii;
+ unsigned char mapping_offset = 0;
+ struct synaptics_rmi4_f1a_handle *f1a = fhandler->data;
+ const struct synaptics_dsx_board_data *bdata =
+ rmi4_data->hw_if->board_data;
+
+ mapping_offset = f1a->button_query.has_general_control +
+ f1a->button_query.has_interrupt_enable +
+ f1a->button_query.has_multibutton_select;
+
+ if (f1a->button_query.has_tx_rx_map) {
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ fhandler->full_addr.ctrl_base + mapping_offset,
+ f1a->button_control.txrx_map,
+ f1a->max_count * 2);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to read tx rx mapping\n",
+ __func__);
+ return retval;
+ }
+
+ rmi4_data->button_txrx_mapping = f1a->button_control.txrx_map;
+ }
+
+ if (!bdata->cap_button_map) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: cap_button_map is NULL in board file\n",
+ __func__);
+ return -ENODEV;
+ } else if (!bdata->cap_button_map->map) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Button map is missing in board file\n",
+ __func__);
+ return -ENODEV;
+ } else {
+ if (bdata->cap_button_map->nbuttons != f1a->max_count) {
+ f1a->valid_button_count = min(f1a->max_count,
+ bdata->cap_button_map->nbuttons);
+ } else {
+ f1a->valid_button_count = f1a->max_count;
+ }
+
+ for (ii = 0; ii < f1a->valid_button_count; ii++)
+ f1a->button_map[ii] = bdata->cap_button_map->map[ii];
+ }
+
+ return 0;
+}
+
+static void synaptics_rmi4_f1a_kfree(struct synaptics_rmi4_fn *fhandler)
+{
+ struct synaptics_rmi4_f1a_handle *f1a = fhandler->data;
+
+ if (f1a) {
+ kfree(f1a->button_control.txrx_map);
+ kfree(f1a->button_data_buffer);
+ kfree(f1a->button_map);
+ kfree(f1a);
+ fhandler->data = NULL;
+ }
+
+ return;
+}
+
+static int synaptics_rmi4_f1a_init(struct synaptics_rmi4_data *rmi4_data,
+ struct synaptics_rmi4_fn *fhandler,
+ struct synaptics_rmi4_fn_desc *fd,
+ unsigned int intr_count)
+{
+ int retval;
+
+ fhandler->fn_number = fd->fn_number;
+ fhandler->num_of_data_sources = fd->intr_src_count;
+
+ synaptics_rmi4_set_intr_mask(fhandler, fd, intr_count);
+
+ retval = synaptics_rmi4_f1a_alloc_mem(rmi4_data, fhandler);
+ if (retval < 0)
+ goto error_exit;
+
+ retval = synaptics_rmi4_f1a_button_map(rmi4_data, fhandler);
+ if (retval < 0)
+ goto error_exit;
+
+ rmi4_data->button_0d_enabled = 1;
+
+ return 0;
+
+error_exit:
+ synaptics_rmi4_f1a_kfree(fhandler);
+
+ return retval;
+}
+
+static int synaptics_rmi4_f54_init(struct synaptics_rmi4_data *rmi4_data,
+ struct synaptics_rmi4_fn *fhandler,
+ struct synaptics_rmi4_fn_desc *fd,
+ unsigned int intr_count,
+ unsigned int page_number)
+{
+ fhandler->fn_number = fd->fn_number;
+ fhandler->num_of_data_sources = fd->intr_src_count;
+ fhandler->data = NULL;
+ fhandler->extra = NULL;
+
+ synaptics_rmi4_set_intr_mask(fhandler, fd, intr_count);
+
+ rmi4_data->f54_query_base_addr =
+ (fd->query_base_addr | (page_number << 8));
+ rmi4_data->f54_ctrl_base_addr =
+ (fd->ctrl_base_addr | (page_number << 8));
+ rmi4_data->f54_data_base_addr =
+ (fd->data_base_addr | (page_number << 8));
+ rmi4_data->f54_cmd_base_addr =
+ (fd->cmd_base_addr | (page_number << 8));
+ return 0;
+}
+
+static void synaptics_rmi4_empty_fn_list(struct synaptics_rmi4_data *rmi4_data)
+{
+ struct synaptics_rmi4_fn *fhandler;
+ struct synaptics_rmi4_fn *fhandler_temp;
+ struct synaptics_rmi4_device_info *rmi;
+
+ rmi = &(rmi4_data->rmi4_mod_info);
+
+ if (!list_empty(&rmi->support_fn_list)) {
+ list_for_each_entry_safe(fhandler,
+ fhandler_temp,
+ &rmi->support_fn_list,
+ link) {
+ if (fhandler->fn_number == SYNAPTICS_RMI4_F1A) {
+ synaptics_rmi4_f1a_kfree(fhandler);
+ } else {
+ kfree(fhandler->extra);
+ kfree(fhandler->data);
+ }
+ list_del(&fhandler->link);
+ kfree(fhandler);
+ }
+ }
+ INIT_LIST_HEAD(&rmi->support_fn_list);
+
+ return;
+}
+
+static int synaptics_rmi4_check_status(struct synaptics_rmi4_data *rmi4_data,
+ bool *was_in_bl_mode)
+{
+ int retval;
+ int timeout = CHECK_STATUS_TIMEOUT_MS;
+ unsigned char intr_status;
+ struct synaptics_rmi4_f01_device_status status;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ rmi4_data->f01_data_base_addr,
+ status.data,
+ sizeof(status.data));
+ if (retval < 0)
+ return retval;
+
+ while (status.status_code == STATUS_CRC_IN_PROGRESS) {
+ if (timeout > 0)
+ msleep(20);
+ else
+ return -ETIMEDOUT;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ rmi4_data->f01_data_base_addr,
+ status.data,
+ sizeof(status.data));
+ if (retval < 0)
+ return retval;
+
+ timeout -= 20;
+ }
+
+ if (timeout != CHECK_STATUS_TIMEOUT_MS)
+ *was_in_bl_mode = true;
+
+ if (status.flash_prog == 1) {
+ rmi4_data->flash_prog_mode = true;
+ pr_notice("%s: In flash prog mode, status = 0x%02x\n",
+ __func__,
+ status.status_code);
+ } else {
+ rmi4_data->flash_prog_mode = false;
+ }
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ rmi4_data->f01_data_base_addr + 1,
+ &intr_status,
+ sizeof(intr_status));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to read interrupt status\n",
+ __func__);
+ return retval;
+ }
+
+ return 0;
+}
+
+static void synaptics_rmi4_set_configured(struct synaptics_rmi4_data *rmi4_data)
+{
+ int retval;
+ unsigned char device_ctrl;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ rmi4_data->f01_ctrl_base_addr,
+ &device_ctrl,
+ sizeof(device_ctrl));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to set configured\n",
+ __func__);
+ return;
+ }
+
+ rmi4_data->no_sleep_setting = device_ctrl & NO_SLEEP_ON;
+ device_ctrl |= CONFIGURED;
+
+ retval = synaptics_rmi4_reg_write(rmi4_data,
+ rmi4_data->f01_ctrl_base_addr,
+ &device_ctrl,
+ sizeof(device_ctrl));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to set configured\n",
+ __func__);
+ }
+
+ return;
+}
+
+static int synaptics_rmi4_alloc_fh(struct synaptics_rmi4_fn **fhandler,
+ struct synaptics_rmi4_fn_desc *rmi_fd, int page_number)
+{
+ *fhandler = kmalloc(sizeof(**fhandler), GFP_KERNEL);
+ if (!(*fhandler))
+ return -ENOMEM;
+
+ (*fhandler)->full_addr.data_base =
+ (rmi_fd->data_base_addr |
+ (page_number << 8));
+ (*fhandler)->full_addr.ctrl_base =
+ (rmi_fd->ctrl_base_addr |
+ (page_number << 8));
+ (*fhandler)->full_addr.cmd_base =
+ (rmi_fd->cmd_base_addr |
+ (page_number << 8));
+ (*fhandler)->full_addr.query_base =
+ (rmi_fd->query_base_addr |
+ (page_number << 8));
+
+ return 0;
+}
+
+static int synaptics_rmi4_query_device(struct synaptics_rmi4_data *rmi4_data)
+{
+ int retval;
+ unsigned char page_number;
+ unsigned char intr_count;
+ unsigned char f01_query[F01_STD_QUERY_LEN];
+ unsigned short pdt_entry_addr;
+ bool f01found;
+ bool was_in_bl_mode;
+ struct synaptics_rmi4_fn_desc rmi_fd;
+ struct synaptics_rmi4_fn *fhandler;
+ struct synaptics_rmi4_device_info *rmi;
+
+ rmi = &(rmi4_data->rmi4_mod_info);
+
+rescan_pdt:
+ f01found = false;
+ was_in_bl_mode = false;
+ intr_count = 0;
+ INIT_LIST_HEAD(&rmi->support_fn_list);
+
+ /* Scan the page description tables of the pages to service */
+ for (page_number = 0; page_number < PAGES_TO_SERVICE; page_number++) {
+ for (pdt_entry_addr = PDT_START; pdt_entry_addr > PDT_END;
+ pdt_entry_addr -= PDT_ENTRY_SIZE) {
+ pdt_entry_addr |= (page_number << 8);
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ pdt_entry_addr,
+ (unsigned char *)&rmi_fd,
+ sizeof(rmi_fd));
+ if (retval < 0)
+ return retval;
+
+ pdt_entry_addr &= ~(MASK_8BIT << 8);
+
+ fhandler = NULL;
+
+ if (rmi_fd.fn_number == 0) {
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Reached end of PDT\n",
+ __func__);
+ break;
+ }
+
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: F%02x found (page %d)\n",
+ __func__, rmi_fd.fn_number,
+ page_number);
+
+ switch (rmi_fd.fn_number) {
+ case SYNAPTICS_RMI4_F01:
+ if (rmi_fd.intr_src_count == 0)
+ break;
+
+ f01found = true;
+
+ retval = synaptics_rmi4_alloc_fh(&fhandler,
+ &rmi_fd, page_number);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to alloc for F%d\n",
+ __func__,
+ rmi_fd.fn_number);
+ return retval;
+ }
+
+ retval = synaptics_rmi4_f01_init(rmi4_data,
+ fhandler, &rmi_fd, intr_count);
+ if (retval < 0)
+ return retval;
+
+ retval = synaptics_rmi4_check_status(rmi4_data,
+ &was_in_bl_mode);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to check status\n",
+ __func__);
+ return retval;
+ }
+
+ if (was_in_bl_mode) {
+ kfree(fhandler);
+ fhandler = NULL;
+ goto rescan_pdt;
+ }
+
+ if (rmi4_data->flash_prog_mode)
+ goto flash_prog_mode;
+
+ break;
+ case SYNAPTICS_RMI4_F11:
+ if (rmi_fd.intr_src_count == 0)
+ break;
+
+ retval = synaptics_rmi4_alloc_fh(&fhandler,
+ &rmi_fd, page_number);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to alloc for F%d\n",
+ __func__,
+ rmi_fd.fn_number);
+ return retval;
+ }
+
+ retval = synaptics_rmi4_f11_init(rmi4_data,
+ fhandler, &rmi_fd, intr_count);
+ if (retval < 0)
+ return retval;
+ break;
+ case SYNAPTICS_RMI4_F12:
+ if (rmi_fd.intr_src_count == 0)
+ break;
+
+ retval = synaptics_rmi4_alloc_fh(&fhandler,
+ &rmi_fd, page_number);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to alloc for F%d\n",
+ __func__,
+ rmi_fd.fn_number);
+ return retval;
+ }
+
+ retval = synaptics_rmi4_f12_init(rmi4_data,
+ fhandler, &rmi_fd, intr_count);
+ if (retval < 0)
+ return retval;
+ break;
+ case SYNAPTICS_RMI4_F1A:
+ if (rmi_fd.intr_src_count == 0)
+ break;
+
+ retval = synaptics_rmi4_alloc_fh(&fhandler,
+ &rmi_fd, page_number);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to alloc for F%d\n",
+ __func__,
+ rmi_fd.fn_number);
+ return retval;
+ }
+
+ retval = synaptics_rmi4_f1a_init(rmi4_data,
+ fhandler, &rmi_fd, intr_count);
+ if (retval < 0) {
+#ifdef IGNORE_FN_INIT_FAILURE
+ kfree(fhandler);
+ fhandler = NULL;
+#else
+ return retval;
+#endif
+ }
+ break;
+ case SYNAPTICS_RMI4_F54:
+ if (rmi_fd.intr_src_count == 0)
+ break;
+
+ retval = synaptics_rmi4_alloc_fh(&fhandler,
+ &rmi_fd, page_number);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to alloc for F%d\n",
+ __func__,
+ rmi_fd.fn_number);
+ return retval;
+ }
+ retval = synaptics_rmi4_f54_init(rmi4_data,
+ fhandler, &rmi_fd, intr_count, page_number);
+ if (retval < 0)
+ return retval;
+ break;
+ }
+
+ /* Accumulate the interrupt count */
+ intr_count += (rmi_fd.intr_src_count & MASK_3BIT);
+
+ if (fhandler && rmi_fd.intr_src_count) {
+ list_add_tail(&fhandler->link,
+ &rmi->support_fn_list);
+ }
+ }
+ }
+
+ if (!f01found) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to find F01\n",
+ __func__);
+ return -EINVAL;
+ }
+
+flash_prog_mode:
+ rmi4_data->num_of_intr_regs = (intr_count + 7) / 8;
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Number of interrupt registers = %d\n",
+ __func__, rmi4_data->num_of_intr_regs);
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ rmi4_data->f01_query_base_addr,
+ f01_query,
+ sizeof(f01_query));
+ if (retval < 0)
+ return retval;
+
+ /* RMI Version 4.0 currently supported */
+ rmi->version_major = 4;
+ rmi->version_minor = 0;
+
+ rmi->manufacturer_id = f01_query[0];
+ rmi->product_props = f01_query[1];
+ rmi->product_info[0] = f01_query[2];
+ rmi->product_info[1] = f01_query[3];
+ memcpy(rmi->product_id_string, &f01_query[11], 10);
+
+ if (rmi->manufacturer_id != 1) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Non-Synaptics device found, manufacturer ID = %d\n",
+ __func__, rmi->manufacturer_id);
+ }
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ rmi4_data->f01_query_base_addr + F01_BUID_ID_OFFSET,
+ rmi->build_id,
+ sizeof(rmi->build_id));
+ if (retval < 0)
+ return retval;
+
+ rmi4_data->firmware_id = (unsigned int)rmi->build_id[0] +
+ (unsigned int)rmi->build_id[1] * 0x100 +
+ (unsigned int)rmi->build_id[2] * 0x10000;
+
+ memset(rmi4_data->intr_mask, 0x00, sizeof(rmi4_data->intr_mask));
+
+ /*
+ * Map out the interrupt bit masks for the interrupt sources
+ * from the registered function handlers.
+ */
+ if (!list_empty(&rmi->support_fn_list)) {
+ list_for_each_entry(fhandler, &rmi->support_fn_list, link) {
+ if (fhandler->num_of_data_sources) {
+ rmi4_data->intr_mask[fhandler->intr_reg_num] |=
+ fhandler->intr_mask;
+ }
+ }
+ }
+
+ if (rmi4_data->f11_wakeup_gesture || rmi4_data->f12_wakeup_gesture)
+ rmi4_data->enable_wakeup_gesture = WAKEUP_GESTURE;
+ else
+ rmi4_data->enable_wakeup_gesture = false;
+
+ synaptics_rmi4_set_configured(rmi4_data);
+
+ return 0;
+}
+
+static int synaptics_rmi4_gpio_setup(int gpio, bool config, int dir, int state)
+{
+ int retval = 0;
+ unsigned char buf[16];
+
+ if (config) {
+ snprintf(buf, sizeof(buf), "dsx_gpio_%d", gpio);
+ retval = gpio_request(gpio, buf);
+ if (retval) {
+ pr_err("%s: Failed to get gpio %d (code: %d)\n",
+ __func__, gpio, retval);
+ return retval;
+ }
+
+ if (dir == 0)
+ retval = gpio_direction_input(gpio);
+ else
+ retval = gpio_direction_output(gpio, state);
+ if (retval) {
+ pr_err("%s: Failed to set gpio %d direction\n",
+ __func__, gpio);
+ return retval;
+ }
+ } else {
+ gpio_free(gpio);
+ }
+
+ return retval;
+}
+
+static void synaptics_rmi4_set_params(struct synaptics_rmi4_data *rmi4_data)
+{
+ unsigned char ii;
+ struct synaptics_rmi4_f1a_handle *f1a;
+ struct synaptics_rmi4_fn *fhandler;
+ struct synaptics_rmi4_device_info *rmi;
+
+ rmi = &(rmi4_data->rmi4_mod_info);
+
+ input_set_abs_params(rmi4_data->input_dev,
+ ABS_MT_POSITION_X, 0,
+ rmi4_data->sensor_max_x, 0, 0);
+ input_set_abs_params(rmi4_data->input_dev,
+ ABS_MT_POSITION_Y, 0,
+ rmi4_data->sensor_max_y, 0, 0);
+#ifdef REPORT_2D_Z
+ input_set_abs_params(rmi4_data->input_dev,
+ ABS_MT_PRESSURE, 0,
+ rmi4_data->sensor_max_z, 0, 0);
+#endif
+#ifdef REPORT_2D_W
+ input_set_abs_params(rmi4_data->input_dev,
+ ABS_MT_TOUCH_MAJOR, 0,
+ rmi4_data->max_touch_width, 0, 0);
+ input_set_abs_params(rmi4_data->input_dev,
+ ABS_MT_TOUCH_MINOR, 0,
+ rmi4_data->max_touch_width, 0, 0);
+#endif
+
+#ifdef TYPE_B_PROTOCOL
+ if (rmi4_data->input_dev->mt &&
+ rmi4_data->input_dev->mt->num_slots != rmi4_data->num_of_fingers)
+ input_mt_destroy_slots(rmi4_data->input_dev);
+ input_mt_init_slots(rmi4_data->input_dev,
+ rmi4_data->num_of_fingers, 0);
+#endif
+
+ f1a = NULL;
+ if (!list_empty(&rmi->support_fn_list)) {
+ list_for_each_entry(fhandler, &rmi->support_fn_list, link) {
+ if (fhandler->fn_number == SYNAPTICS_RMI4_F1A)
+ f1a = fhandler->data;
+ }
+ }
+
+ if (f1a) {
+ for (ii = 0; ii < f1a->valid_button_count; ii++) {
+ set_bit(f1a->button_map[ii],
+ rmi4_data->input_dev->keybit);
+ input_set_capability(rmi4_data->input_dev,
+ EV_KEY, f1a->button_map[ii]);
+ }
+ }
+
+ if (rmi4_data->f11_wakeup_gesture || rmi4_data->f12_wakeup_gesture) {
+ set_bit(KEY_WAKEUP, rmi4_data->input_dev->keybit);
+ input_set_capability(rmi4_data->input_dev, EV_KEY, KEY_WAKEUP);
+ }
+
+ return;
+}
+
+static int synaptics_rmi4_set_input_dev(struct synaptics_rmi4_data *rmi4_data)
+{
+ int retval;
+ int temp;
+
+ rmi4_data->input_dev = input_allocate_device();
+ if (rmi4_data->input_dev == NULL) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to allocate input device\n",
+ __func__);
+ retval = -ENOMEM;
+ goto err_input_device;
+ }
+
+ retval = synaptics_rmi4_query_device(rmi4_data);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to query device\n",
+ __func__);
+ goto err_query_device;
+ }
+
+ rmi4_data->input_dev->name = PLATFORM_DRIVER_NAME;
+ rmi4_data->input_dev->phys = INPUT_PHYS_NAME;
+ rmi4_data->input_dev->id.product = SYNAPTICS_DSX_DRIVER_PRODUCT;
+ rmi4_data->input_dev->id.version = SYNAPTICS_DSX_DRIVER_VERSION;
+ rmi4_data->input_dev->dev.parent = rmi4_data->pdev->dev.parent;
+ input_set_drvdata(rmi4_data->input_dev, rmi4_data);
+
+ set_bit(EV_SYN, rmi4_data->input_dev->evbit);
+ set_bit(EV_KEY, rmi4_data->input_dev->evbit);
+ set_bit(EV_ABS, rmi4_data->input_dev->evbit);
+#ifdef INPUT_PROP_DIRECT
+ set_bit(INPUT_PROP_DIRECT, rmi4_data->input_dev->propbit);
+#endif
+
+ if (rmi4_data->hw_if->board_data->swap_axes) {
+ temp = rmi4_data->sensor_max_x;
+ rmi4_data->sensor_max_x = rmi4_data->sensor_max_y;
+ rmi4_data->sensor_max_y = temp;
+ }
+
+ synaptics_rmi4_set_params(rmi4_data);
+
+ retval = input_register_device(rmi4_data->input_dev);
+ if (retval) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to register input device\n",
+ __func__);
+ goto err_register_input;
+ }
+
+ return 0;
+
+err_register_input:
+err_query_device:
+ synaptics_rmi4_empty_fn_list(rmi4_data);
+ input_free_device(rmi4_data->input_dev);
+
+err_input_device:
+ return retval;
+}
+
+static int synaptics_rmi4_set_gpio(struct synaptics_rmi4_data *rmi4_data)
+{
+ int retval;
+ const struct synaptics_dsx_board_data *bdata =
+ rmi4_data->hw_if->board_data;
+
+ retval = synaptics_rmi4_gpio_setup(
+ bdata->irq_gpio,
+ true, 0, 0);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to configure attention GPIO\n",
+ __func__);
+ goto err_gpio_irq;
+ }
+
+ if (bdata->power_gpio >= 0) {
+ retval = synaptics_rmi4_gpio_setup(
+ bdata->power_gpio,
+ true, 1, !bdata->power_on_state);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to configure power GPIO\n",
+ __func__);
+ goto err_gpio_power;
+ }
+ }
+
+ if (bdata->reset_gpio >= 0) {
+ retval = synaptics_rmi4_gpio_setup(
+ bdata->reset_gpio,
+ true, 1, !bdata->reset_on_state);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to configure reset GPIO\n",
+ __func__);
+ goto err_gpio_reset;
+ }
+ }
+
+ if (bdata->power_gpio >= 0) {
+ gpio_set_value(bdata->power_gpio, bdata->power_on_state);
+ msleep(bdata->power_delay_ms);
+ }
+
+ if (bdata->reset_gpio >= 0) {
+ gpio_set_value(bdata->reset_gpio, bdata->reset_on_state);
+ msleep(bdata->reset_active_ms);
+ gpio_set_value(bdata->reset_gpio, !bdata->reset_on_state);
+ msleep(bdata->reset_delay_ms);
+ }
+
+ return 0;
+
+err_gpio_reset:
+ if (bdata->power_gpio >= 0)
+ synaptics_rmi4_gpio_setup(bdata->power_gpio, false, 0, 0);
+
+err_gpio_power:
+ synaptics_rmi4_gpio_setup(bdata->irq_gpio, false, 0, 0);
+
+err_gpio_irq:
+ return retval;
+}
+
+static int synaptics_rmi4_get_reg(struct synaptics_rmi4_data *rmi4_data,
+ bool get)
+{
+ int retval;
+ const struct synaptics_dsx_board_data *bdata =
+ rmi4_data->hw_if->board_data;
+
+ if (!get) {
+ retval = 0;
+ goto regulator_put;
+ }
+
+ if ((bdata->pwr_reg_name != NULL) && (*bdata->pwr_reg_name != 0)) {
+ rmi4_data->pwr_reg = regulator_get(rmi4_data->pdev->dev.parent,
+ bdata->pwr_reg_name);
+ if (IS_ERR(rmi4_data->pwr_reg)) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to get power regulator\n",
+ __func__);
+ retval = PTR_ERR(rmi4_data->pwr_reg);
+ goto regulator_put;
+ }
+ }
+
+ if ((bdata->bus_reg_name != NULL) && (*bdata->bus_reg_name != 0)) {
+ rmi4_data->bus_reg = regulator_get(rmi4_data->pdev->dev.parent,
+ bdata->bus_reg_name);
+ if (IS_ERR(rmi4_data->bus_reg)) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to get bus pullup regulator\n",
+ __func__);
+ retval = PTR_ERR(rmi4_data->bus_reg);
+ goto regulator_put;
+ }
+ }
+
+ return 0;
+
+regulator_put:
+ if (rmi4_data->pwr_reg) {
+ regulator_put(rmi4_data->pwr_reg);
+ rmi4_data->pwr_reg = NULL;
+ }
+
+ if (rmi4_data->bus_reg) {
+ regulator_put(rmi4_data->bus_reg);
+ rmi4_data->bus_reg = NULL;
+ }
+
+ return retval;
+}
+
+static int synaptics_rmi4_enable_reg(struct synaptics_rmi4_data *rmi4_data,
+ bool enable)
+{
+ int retval;
+ const struct synaptics_dsx_board_data *bdata =
+ rmi4_data->hw_if->board_data;
+
+ if (!enable) {
+ retval = 0;
+ goto disable_pwr_reg;
+ }
+
+ if (rmi4_data->bus_reg) {
+ retval = regulator_enable(rmi4_data->bus_reg);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to enable bus pullup regulator\n",
+ __func__);
+ goto exit;
+ }
+ }
+
+ if (rmi4_data->pwr_reg) {
+ retval = regulator_enable(rmi4_data->pwr_reg);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to enable power regulator\n",
+ __func__);
+ goto disable_bus_reg;
+ }
+ msleep(bdata->power_delay_ms);
+ }
+
+ return 0;
+
+disable_pwr_reg:
+ if (rmi4_data->pwr_reg)
+ regulator_disable(rmi4_data->pwr_reg);
+
+disable_bus_reg:
+ if (rmi4_data->bus_reg)
+ regulator_disable(rmi4_data->bus_reg);
+
+exit:
+ return retval;
+}
+
+static int synaptics_rmi4_free_fingers(struct synaptics_rmi4_data *rmi4_data)
+{
+ unsigned char ii;
+
+ mutex_lock(&(rmi4_data->rmi4_report_mutex));
+
+#ifdef TYPE_B_PROTOCOL
+ for (ii = 0; ii < rmi4_data->num_of_fingers; ii++) {
+ input_mt_slot(rmi4_data->input_dev, ii);
+ input_mt_report_slot_state(rmi4_data->input_dev,
+ MT_TOOL_FINGER, 0);
+ }
+#endif
+#ifndef TYPE_B_PROTOCOL
+ input_mt_sync(rmi4_data->input_dev);
+#endif
+ input_sync(rmi4_data->input_dev);
+
+ mutex_unlock(&(rmi4_data->rmi4_report_mutex));
+
+ rmi4_data->fingers_on_2d = false;
+
+ return 0;
+}
+
+static int synaptics_rmi4_force_cal(struct synaptics_rmi4_data *rmi4_data)
+{
+ int retval;
+ unsigned char command = 0x02;
+
+ dev_info(rmi4_data->pdev->dev.parent, " %s\n", __func__);
+ retval = synaptics_rmi4_reg_write(rmi4_data,
+ rmi4_data->f54_cmd_base_addr,
+ &command,
+ sizeof(command));
+ if (retval < 0)
+ return retval;
+
+ return 0;
+}
+
+static int synaptics_rmi4_sw_reset(struct synaptics_rmi4_data *rmi4_data)
+{
+ int retval;
+ unsigned char command = 0x01;
+
+ retval = synaptics_rmi4_reg_write(rmi4_data,
+ rmi4_data->f01_cmd_base_addr,
+ &command,
+ sizeof(command));
+ if (retval < 0)
+ return retval;
+
+ msleep(rmi4_data->hw_if->board_data->reset_delay_ms);
+
+ if (rmi4_data->hw_if->ui_hw_init) {
+ retval = rmi4_data->hw_if->ui_hw_init(rmi4_data);
+ if (retval < 0)
+ return retval;
+ }
+
+ return 0;
+}
+
+static int synaptics_rmi4_reinit_device(struct synaptics_rmi4_data *rmi4_data)
+{
+ int retval;
+ struct synaptics_rmi4_fn *fhandler;
+ struct synaptics_rmi4_exp_fhandler *exp_fhandler;
+ struct synaptics_rmi4_device_info *rmi;
+
+ rmi = &(rmi4_data->rmi4_mod_info);
+
+ mutex_lock(&(rmi4_data->rmi4_reset_mutex));
+
+ synaptics_rmi4_free_fingers(rmi4_data);
+
+ if (!list_empty(&rmi->support_fn_list)) {
+ list_for_each_entry(fhandler, &rmi->support_fn_list, link) {
+ if (fhandler->fn_number == SYNAPTICS_RMI4_F12) {
+ synaptics_rmi4_f12_set_enables(rmi4_data, 0);
+ break;
+ }
+ }
+ }
+
+ retval = synaptics_rmi4_int_enable(rmi4_data, true);
+ if (retval < 0)
+ goto exit;
+
+ mutex_lock(&exp_data.mutex);
+ if (!list_empty(&exp_data.list)) {
+ list_for_each_entry(exp_fhandler, &exp_data.list, link)
+ if (exp_fhandler->exp_fn->reinit != NULL)
+ exp_fhandler->exp_fn->reinit(rmi4_data);
+ }
+ mutex_unlock(&exp_data.mutex);
+
+ synaptics_rmi4_set_configured(rmi4_data);
+
+ retval = 0;
+
+exit:
+ mutex_unlock(&(rmi4_data->rmi4_reset_mutex));
+ return retval;
+}
+
+static int synaptics_rmi4_reset_device(struct synaptics_rmi4_data *rmi4_data)
+{
+ int retval;
+ int temp;
+ struct synaptics_rmi4_exp_fhandler *exp_fhandler;
+
+ mutex_lock(&(rmi4_data->rmi4_reset_mutex));
+
+ synaptics_rmi4_irq_enable(rmi4_data, false);
+
+ retval = synaptics_rmi4_sw_reset(rmi4_data);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to issue reset command\n",
+ __func__);
+ mutex_unlock(&(rmi4_data->rmi4_reset_mutex));
+ return retval;
+ }
+
+ synaptics_rmi4_free_fingers(rmi4_data);
+
+ synaptics_rmi4_empty_fn_list(rmi4_data);
+
+ retval = synaptics_rmi4_query_device(rmi4_data);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to query device\n",
+ __func__);
+ mutex_unlock(&(rmi4_data->rmi4_reset_mutex));
+ return retval;
+ }
+
+ if (rmi4_data->hw_if->board_data->swap_axes) {
+ temp = rmi4_data->sensor_max_x;
+ rmi4_data->sensor_max_x = rmi4_data->sensor_max_y;
+ rmi4_data->sensor_max_y = temp;
+ }
+
+ synaptics_rmi4_set_params(rmi4_data);
+
+ mutex_lock(&exp_data.mutex);
+ if (!list_empty(&exp_data.list)) {
+ list_for_each_entry(exp_fhandler, &exp_data.list, link)
+ if (exp_fhandler->exp_fn->reset != NULL)
+ exp_fhandler->exp_fn->reset(rmi4_data);
+ }
+ mutex_unlock(&exp_data.mutex);
+
+ synaptics_rmi4_irq_enable(rmi4_data, true);
+
+ mutex_unlock(&(rmi4_data->rmi4_reset_mutex));
+
+ return 0;
+}
+
+static void synaptics_rmi4_exp_fn_work(struct work_struct *work)
+{
+ struct synaptics_rmi4_exp_fhandler *exp_fhandler;
+ struct synaptics_rmi4_exp_fhandler *exp_fhandler_temp;
+ struct synaptics_rmi4_data *rmi4_data = exp_data.rmi4_data;
+
+ mutex_lock(&exp_data.mutex);
+ if (!list_empty(&exp_data.list)) {
+ list_for_each_entry_safe(exp_fhandler,
+ exp_fhandler_temp,
+ &exp_data.list,
+ link) {
+ if ((exp_fhandler->exp_fn->init != NULL) &&
+ exp_fhandler->insert) {
+ exp_fhandler->exp_fn->init(rmi4_data);
+ exp_fhandler->insert = false;
+ } else if ((exp_fhandler->exp_fn->remove != NULL) &&
+ exp_fhandler->remove) {
+ exp_fhandler->exp_fn->remove(rmi4_data);
+ list_del(&exp_fhandler->link);
+ kfree(exp_fhandler);
+ }
+ }
+ }
+ mutex_unlock(&exp_data.mutex);
+
+ return;
+}
+
+void synaptics_rmi4_new_function(struct synaptics_rmi4_exp_fn *exp_fn,
+ bool insert)
+{
+ struct synaptics_rmi4_exp_fhandler *exp_fhandler;
+
+ if (!exp_data.initialized) {
+ mutex_init(&exp_data.mutex);
+ INIT_LIST_HEAD(&exp_data.list);
+ exp_data.initialized = true;
+ }
+
+ mutex_lock(&exp_data.mutex);
+ if (insert) {
+ exp_fhandler = kzalloc(sizeof(*exp_fhandler), GFP_KERNEL);
+ if (!exp_fhandler) {
+ pr_err("%s: Failed to alloc mem for expansion function\n",
+ __func__);
+ goto exit;
+ }
+ exp_fhandler->exp_fn = exp_fn;
+ exp_fhandler->insert = true;
+ exp_fhandler->remove = false;
+ list_add_tail(&exp_fhandler->link, &exp_data.list);
+ } else if (!list_empty(&exp_data.list)) {
+ list_for_each_entry(exp_fhandler, &exp_data.list, link) {
+ if (exp_fhandler->exp_fn->fn_type == exp_fn->fn_type) {
+ exp_fhandler->insert = false;
+ exp_fhandler->remove = true;
+ goto exit;
+ }
+ }
+ }
+
+exit:
+ mutex_unlock(&exp_data.mutex);
+
+ if (exp_data.queue_work) {
+ queue_delayed_work(exp_data.workqueue,
+ &exp_data.work,
+ msecs_to_jiffies(EXP_FN_WORK_DELAY_MS));
+ }
+
+ return;
+}
+EXPORT_SYMBOL(synaptics_rmi4_new_function);
+
+#ifdef CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_WAKEUP_GESTURE
+static struct notifier_block facedown_status_handler = {
+ .notifier_call = facedown_status_handler_func,
+};
+#endif
+
+static int synaptics_rmi4_probe(struct platform_device *pdev)
+{
+ int retval;
+ unsigned char attr_count;
+ struct synaptics_rmi4_data *rmi4_data;
+ const struct synaptics_dsx_hw_interface *hw_if;
+ const struct synaptics_dsx_board_data *bdata;
+
+ pr_info(" %s\n", __func__);
+
+ hw_if = pdev->dev.platform_data;
+ if (!hw_if) {
+ dev_err(&pdev->dev,
+ "%s: No hardware interface found\n",
+ __func__);
+ return -EINVAL;
+ }
+
+ bdata = hw_if->board_data;
+ if (!bdata) {
+ dev_err(&pdev->dev,
+ "%s: No board data found\n",
+ __func__);
+ return -EINVAL;
+ }
+
+ rmi4_data = kzalloc(sizeof(*rmi4_data), GFP_KERNEL);
+ if (!rmi4_data) {
+ dev_err(&pdev->dev,
+ "%s: Failed to alloc mem for rmi4_data\n",
+ __func__);
+ return -ENOMEM;
+ }
+
+ rmi4_data->pdev = pdev;
+ rmi4_data->current_page = MASK_8BIT;
+ rmi4_data->hw_if = hw_if;
+ rmi4_data->sensor_sleep = false;
+ rmi4_data->suspend = false;
+ rmi4_data->irq_enabled = false;
+ rmi4_data->fingers_on_2d = false;
+ rmi4_data->f11_wakeup_gesture = false;
+ rmi4_data->f12_wakeup_gesture = false;
+
+ rmi4_data->irq_enable = synaptics_rmi4_irq_enable;
+ rmi4_data->reset_device = synaptics_rmi4_reset_device;
+
+ mutex_init(&(rmi4_data->rmi4_reset_mutex));
+ mutex_init(&(rmi4_data->rmi4_report_mutex));
+ mutex_init(&(rmi4_data->rmi4_io_ctrl_mutex));
+
+ platform_set_drvdata(pdev, rmi4_data);
+
+ retval = synaptics_rmi4_get_reg(rmi4_data, true);
+ if (retval < 0) {
+ dev_err(&pdev->dev,
+ "%s: Failed to get regulators\n",
+ __func__);
+ goto err_get_reg;
+ }
+
+ retval = synaptics_rmi4_enable_reg(rmi4_data, true);
+ if (retval < 0) {
+ dev_err(&pdev->dev,
+ "%s: Failed to enable regulators\n",
+ __func__);
+ goto err_enable_reg;
+ }
+
+ retval = synaptics_rmi4_set_gpio(rmi4_data);
+ if (retval < 0) {
+ dev_err(&pdev->dev,
+ "%s: Failed to set up GPIO's\n",
+ __func__);
+ goto err_set_gpio;
+ }
+
+ if (hw_if->ui_hw_init) {
+ retval = hw_if->ui_hw_init(rmi4_data);
+ if (retval < 0) {
+ dev_err(&pdev->dev,
+ "%s: Failed to initialize hardware interface\n",
+ __func__);
+ goto err_ui_hw_init;
+ }
+ }
+
+ retval = synaptics_rmi4_set_input_dev(rmi4_data);
+ if (retval < 0) {
+ dev_err(&pdev->dev,
+ "%s: Failed to set up input device\n",
+ __func__);
+ goto err_set_input_dev;
+ }
+
+#ifdef CONFIG_HAS_EARLYSUSPEND
+ rmi4_data->early_suspend.level = EARLY_SUSPEND_LEVEL_BLANK_SCREEN + 1;
+ rmi4_data->early_suspend.suspend = synaptics_rmi4_early_suspend;
+ rmi4_data->early_suspend.resume = synaptics_rmi4_late_resume;
+ register_early_suspend(&rmi4_data->early_suspend);
+#endif
+
+#ifdef CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_WAKEUP_GESTURE
+ retval = register_notifier_by_facedown(&facedown_status_handler);
+ if (retval < 0) {
+ dev_err(&pdev->dev,
+ "%s: Failed to register facedown notifier\n",
+ __func__);
+ goto err_register_notifier;
+ }
+#endif
+ rmi4_data->face_down = 0;
+
+ if (!exp_data.initialized) {
+ mutex_init(&exp_data.mutex);
+ INIT_LIST_HEAD(&exp_data.list);
+ exp_data.initialized = true;
+ }
+
+ rmi4_data->irq = gpio_to_irq(bdata->irq_gpio);
+
+ retval = request_threaded_irq(rmi4_data->irq, NULL,
+ synaptics_rmi4_irq, bdata->irq_flags,
+ PLATFORM_DRIVER_NAME, rmi4_data);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to create irq thread\n",
+ __func__);
+ goto err_enable_irq;
+ }
+ disable_irq(rmi4_data->irq);
+
+ irq_set_irq_wake(rmi4_data->irq, 1);
+
+ retval = synaptics_rmi4_irq_enable(rmi4_data, true);
+ if (retval < 0) {
+ dev_err(&pdev->dev,
+ "%s: Failed to enable attention interrupt\n",
+ __func__);
+ goto err_enable_irq;
+ }
+
+ for (attr_count = 0; attr_count < ARRAY_SIZE(attrs); attr_count++) {
+ retval = sysfs_create_file(&rmi4_data->input_dev->dev.kobj,
+ &attrs[attr_count].attr);
+ if (retval < 0) {
+ dev_err(&pdev->dev,
+ "%s: Failed to create sysfs attributes\n",
+ __func__);
+ goto err_sysfs;
+ }
+ }
+
+ exp_data.workqueue = create_singlethread_workqueue("dsx_exp_workqueue");
+ INIT_DELAYED_WORK(&exp_data.work, synaptics_rmi4_exp_fn_work);
+ exp_data.rmi4_data = rmi4_data;
+ exp_data.queue_work = true;
+ queue_delayed_work(exp_data.workqueue,
+ &exp_data.work,
+ 0);
+
+ return retval;
+
+err_sysfs:
+ /* attr_count is unsigned, so count down with an explicit check */
+ while (attr_count > 0) {
+ attr_count--;
+ sysfs_remove_file(&rmi4_data->input_dev->dev.kobj,
+ &attrs[attr_count].attr);
+ }
+
+ synaptics_rmi4_irq_enable(rmi4_data, false);
+ free_irq(rmi4_data->irq, rmi4_data);
+
+err_enable_irq:
+#ifdef CONFIG_HAS_EARLYSUSPEND
+ unregister_early_suspend(&rmi4_data->early_suspend);
+#endif
+
+ synaptics_rmi4_empty_fn_list(rmi4_data);
+ input_unregister_device(rmi4_data->input_dev);
+ rmi4_data->input_dev = NULL;
+
+#ifdef CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_WAKEUP_GESTURE
+err_register_notifier:
+#endif
+err_set_input_dev:
+ synaptics_rmi4_gpio_setup(bdata->irq_gpio, false, 0, 0);
+
+ if (bdata->reset_gpio >= 0)
+ synaptics_rmi4_gpio_setup(bdata->reset_gpio, false, 0, 0);
+
+ if (bdata->power_gpio >= 0)
+ synaptics_rmi4_gpio_setup(bdata->power_gpio, false, 0, 0);
+
+err_ui_hw_init:
+err_set_gpio:
+ synaptics_rmi4_enable_reg(rmi4_data, false);
+
+err_enable_reg:
+ synaptics_rmi4_get_reg(rmi4_data, false);
+
+err_get_reg:
+ kfree(rmi4_data);
+
+ return retval;
+}
+
+static int synaptics_rmi4_remove(struct platform_device *pdev)
+{
+ unsigned char attr_count;
+ struct synaptics_rmi4_data *rmi4_data = platform_get_drvdata(pdev);
+ const struct synaptics_dsx_board_data *bdata =
+ rmi4_data->hw_if->board_data;
+
+ cancel_delayed_work_sync(&exp_data.work);
+ flush_workqueue(exp_data.workqueue);
+ destroy_workqueue(exp_data.workqueue);
+
+ for (attr_count = 0; attr_count < ARRAY_SIZE(attrs); attr_count++) {
+ sysfs_remove_file(&rmi4_data->input_dev->dev.kobj,
+ &attrs[attr_count].attr);
+ }
+
+ synaptics_rmi4_irq_enable(rmi4_data, false);
+ free_irq(rmi4_data->irq, rmi4_data);
+
+#ifdef CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_WAKEUP_GESTURE
+ unregister_notifier_by_facedown(&facedown_status_handler);
+#endif
+
+#ifdef CONFIG_HAS_EARLYSUSPEND
+ unregister_early_suspend(&rmi4_data->early_suspend);
+#endif
+
+ synaptics_rmi4_empty_fn_list(rmi4_data);
+ input_unregister_device(rmi4_data->input_dev);
+ rmi4_data->input_dev = NULL;
+
+ synaptics_rmi4_gpio_setup(bdata->irq_gpio, false, 0, 0);
+
+ if (bdata->reset_gpio >= 0)
+ synaptics_rmi4_gpio_setup(bdata->reset_gpio, false, 0, 0);
+
+ if (bdata->power_gpio >= 0)
+ synaptics_rmi4_gpio_setup(bdata->power_gpio, false, 0, 0);
+
+ synaptics_rmi4_enable_reg(rmi4_data, false);
+ synaptics_rmi4_get_reg(rmi4_data, false);
+
+ kfree(rmi4_data);
+
+ return 0;
+}
+
+#ifdef CONFIG_PM
+static void synaptics_rmi4_f11_wg(struct synaptics_rmi4_data *rmi4_data,
+ bool enable)
+{
+ int retval;
+ unsigned char reporting_control;
+ struct synaptics_rmi4_fn *fhandler;
+ struct synaptics_rmi4_device_info *rmi;
+
+ rmi = &(rmi4_data->rmi4_mod_info);
+
+ list_for_each_entry(fhandler, &rmi->support_fn_list, link) {
+ if (fhandler->fn_number == SYNAPTICS_RMI4_F11)
+ break;
+ }
+
+ if (fhandler->fn_number != SYNAPTICS_RMI4_F11) {
+ dev_info(rmi4_data->pdev->dev.parent,
+ "%s: No F11 function\n",
+ __func__);
+ return;
+ }
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ fhandler->full_addr.ctrl_base,
+ &reporting_control,
+ sizeof(reporting_control));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to change reporting mode\n",
+ __func__);
+ return;
+ }
+
+ reporting_control = (reporting_control & ~MASK_3BIT);
+ if (enable)
+ reporting_control |= F11_WAKEUP_GESTURE_MODE;
+ else
+ reporting_control |= F11_CONTINUOUS_MODE;
+
+ retval = synaptics_rmi4_reg_write(rmi4_data,
+ fhandler->full_addr.ctrl_base,
+ &reporting_control,
+ sizeof(reporting_control));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to change reporting mode\n",
+ __func__);
+ return;
+ }
+
+ return;
+}
+
+static void synaptics_rmi4_f12_wg(struct synaptics_rmi4_data *rmi4_data,
+ bool enable)
+{
+ int retval;
+ unsigned char offset;
+ unsigned char reporting_control[3];
+ struct synaptics_rmi4_f12_extra_data *extra_data;
+ struct synaptics_rmi4_fn *fhandler;
+ struct synaptics_rmi4_device_info *rmi;
+
+ rmi = &(rmi4_data->rmi4_mod_info);
+
+ list_for_each_entry(fhandler, &rmi->support_fn_list, link) {
+ if (fhandler->fn_number == SYNAPTICS_RMI4_F12)
+ break;
+ }
+
+ if (fhandler->fn_number != SYNAPTICS_RMI4_F12) {
+ dev_info(rmi4_data->pdev->dev.parent,
+ "%s: No F12 function\n",
+ __func__);
+ return;
+ }
+
+ extra_data = (struct synaptics_rmi4_f12_extra_data *)fhandler->extra;
+ offset = extra_data->ctrl20_offset;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ fhandler->full_addr.ctrl_base + offset,
+ reporting_control,
+ sizeof(reporting_control));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to change reporting mode\n",
+ __func__);
+ return;
+ }
+
+ if (enable)
+ reporting_control[2] = F12_WAKEUP_GESTURE_MODE;
+ else
+ reporting_control[2] = F12_CONTINUOUS_MODE;
+
+ retval = synaptics_rmi4_reg_write(rmi4_data,
+ fhandler->full_addr.ctrl_base + offset,
+ reporting_control,
+ sizeof(reporting_control));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to change reporting mode\n",
+ __func__);
+ return;
+ }
+
+ return;
+}
+
+static void synaptics_rmi4_wakeup_gesture(struct synaptics_rmi4_data *rmi4_data,
+ bool enable)
+{
+ dev_dbg(rmi4_data->pdev->dev.parent, " %s:%d\n", __func__, enable);
+ if (rmi4_data->f11_wakeup_gesture)
+ synaptics_rmi4_f11_wg(rmi4_data, enable);
+ else if (rmi4_data->f12_wakeup_gesture)
+ synaptics_rmi4_f12_wg(rmi4_data, enable);
+
+ return;
+}
+
+static void synaptics_rmi4_sensor_sleep(struct synaptics_rmi4_data *rmi4_data)
+{
+ int retval;
+ unsigned char device_ctrl;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ rmi4_data->f01_ctrl_base_addr,
+ &device_ctrl,
+ sizeof(device_ctrl));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to enter sleep mode\n",
+ __func__);
+ return;
+ }
+
+ device_ctrl = (device_ctrl & ~MASK_3BIT);
+ device_ctrl = (device_ctrl | NO_SLEEP_OFF | SENSOR_SLEEP);
+
+ retval = synaptics_rmi4_reg_write(rmi4_data,
+ rmi4_data->f01_ctrl_base_addr,
+ &device_ctrl,
+ sizeof(device_ctrl));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to enter sleep mode\n",
+ __func__);
+ return;
+ }
+
+ rmi4_data->sensor_sleep = true;
+
+ return;
+}
+
+static void synaptics_rmi4_sensor_wake(struct synaptics_rmi4_data *rmi4_data)
+{
+ int retval;
+ unsigned char device_ctrl;
+ unsigned char no_sleep_setting = rmi4_data->no_sleep_setting;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ rmi4_data->f01_ctrl_base_addr,
+ &device_ctrl,
+ sizeof(device_ctrl));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to wake from sleep mode\n",
+ __func__);
+ return;
+ }
+
+ device_ctrl = (device_ctrl & ~MASK_3BIT);
+ device_ctrl = (device_ctrl | no_sleep_setting | NORMAL_OPERATION);
+
+ retval = synaptics_rmi4_reg_write(rmi4_data,
+ rmi4_data->f01_ctrl_base_addr,
+ &device_ctrl,
+ sizeof(device_ctrl));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to wake from sleep mode\n",
+ __func__);
+ return;
+ }
+
+ rmi4_data->sensor_sleep = false;
+
+ return;
+}
+
+static void synaptics_rmi4_early_suspend(struct device *dev)
+{
+ struct synaptics_rmi4_exp_fhandler *exp_fhandler;
+ struct synaptics_rmi4_data *rmi4_data = dev_get_drvdata(dev);
+
+ dev_dbg(rmi4_data->pdev->dev.parent, " %s\n", __func__);
+ if (rmi4_data->stay_awake)
+ return;
+
+ if (rmi4_data->enable_wakeup_gesture && !rmi4_data->face_down) {
+ dev_dbg(rmi4_data->pdev->dev.parent, " %s: gesture mode\n", __func__);
+ synaptics_rmi4_wakeup_gesture(rmi4_data, true);
+ goto exit;
+ }
+
+ dev_dbg(rmi4_data->pdev->dev.parent, " %s: sleep mode\n", __func__);
+ synaptics_rmi4_irq_enable(rmi4_data, false);
+ synaptics_rmi4_sensor_sleep(rmi4_data);
+ synaptics_rmi4_free_fingers(rmi4_data);
+
+ mutex_lock(&exp_data.mutex);
+ if (!list_empty(&exp_data.list)) {
+ list_for_each_entry(exp_fhandler, &exp_data.list, link)
+ if (exp_fhandler->exp_fn->early_suspend != NULL)
+ exp_fhandler->exp_fn->early_suspend(rmi4_data);
+ }
+ mutex_unlock(&exp_data.mutex);
+
+ if (rmi4_data->full_pm_cycle)
+ synaptics_rmi4_suspend(&(rmi4_data->input_dev->dev));
+
+exit:
+ rmi4_data->suspend = true;
+
+ return;
+}
+
+static void synaptics_rmi4_late_resume(struct device *dev)
+{
+ struct synaptics_rmi4_exp_fhandler *exp_fhandler;
+ struct synaptics_rmi4_data *rmi4_data = dev_get_drvdata(dev);
+
+ dev_dbg(rmi4_data->pdev->dev.parent, " %s\n", __func__);
+ if (rmi4_data->stay_awake)
+ return;
+
+ if (rmi4_data->enable_wakeup_gesture && !rmi4_data->face_down) {
+ dev_dbg(rmi4_data->pdev->dev.parent, " %s: wake up from gesture mode\n", __func__);
+ synaptics_rmi4_wakeup_gesture(rmi4_data, false);
+ synaptics_rmi4_force_cal(rmi4_data);
+ goto exit;
+ }
+
+ if (rmi4_data->full_pm_cycle)
+ synaptics_rmi4_resume(&(rmi4_data->input_dev->dev));
+
+ if (rmi4_data->suspend) {
+ dev_dbg(rmi4_data->pdev->dev.parent, " %s: wake up\n", __func__);
+ synaptics_rmi4_sensor_wake(rmi4_data);
+ synaptics_rmi4_irq_enable(rmi4_data, true);
+ }
+
+ mutex_lock(&exp_data.mutex);
+ if (!list_empty(&exp_data.list)) {
+ list_for_each_entry(exp_fhandler, &exp_data.list, link)
+ if (exp_fhandler->exp_fn->late_resume != NULL)
+ exp_fhandler->exp_fn->late_resume(rmi4_data);
+ }
+ mutex_unlock(&exp_data.mutex);
+
+exit:
+ rmi4_data->suspend = false;
+ rmi4_data->face_down = 0;
+
+ return;
+}
+
+#ifdef CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_WAKEUP_GESTURE
+static int facedown_status_handler_func(struct notifier_block *this,
+ unsigned long status, void *unused)
+{
+ struct synaptics_rmi4_data *rmi4_data = exp_data.rmi4_data;
+ dev_dbg(rmi4_data->pdev->dev.parent, " %s: status: %lu\n", __func__, status);
+
+ if (rmi4_data->stay_awake || (!rmi4_data->suspend))
+ return 0;
+ if (rmi4_data->face_down == status)
+ return 0;
+
+ if (!status) {
+ dev_dbg(rmi4_data->pdev->dev.parent, " %s: gesture mode\n", __func__);
+ if (rmi4_data->enable_wakeup_gesture) {
+ synaptics_rmi4_sensor_wake(rmi4_data);
+ synaptics_rmi4_irq_enable(rmi4_data, true);
+ synaptics_rmi4_wakeup_gesture(rmi4_data, true);
+ }
+ rmi4_data->face_down = 0;
+	} else {
+ dev_dbg(rmi4_data->pdev->dev.parent, " %s: sleep mode\n", __func__);
+ if (rmi4_data->enable_wakeup_gesture)
+ synaptics_rmi4_wakeup_gesture(rmi4_data, false);
+ synaptics_rmi4_irq_enable(rmi4_data, false);
+ synaptics_rmi4_sensor_sleep(rmi4_data);
+ rmi4_data->face_down = 1;
+ }
+ return 0;
+}
+#endif
+
+static int synaptics_rmi4_suspend(struct device *dev)
+{
+ struct synaptics_rmi4_exp_fhandler *exp_fhandler;
+ struct synaptics_rmi4_data *rmi4_data = dev_get_drvdata(dev);
+
+ dev_info(rmi4_data->pdev->dev.parent, " %s\n", __func__);
+ if (rmi4_data->stay_awake)
+ return 0;
+
+ if (rmi4_data->enable_wakeup_gesture && !rmi4_data->face_down) {
+ if (rmi4_data->irq_enabled) {
+ dev_dbg(rmi4_data->pdev->dev.parent, " %s, irq disabled\n", __func__);
+ disable_irq(rmi4_data->irq);
+ rmi4_data->irq_enabled = false;
+ }
+ }
+
+ mutex_lock(&exp_data.mutex);
+ if (!list_empty(&exp_data.list)) {
+ list_for_each_entry(exp_fhandler, &exp_data.list, link)
+ if (exp_fhandler->exp_fn->suspend != NULL)
+ exp_fhandler->exp_fn->suspend(rmi4_data);
+ }
+ mutex_unlock(&exp_data.mutex);
+
+ if (rmi4_data->pwr_reg)
+ regulator_disable(rmi4_data->pwr_reg);
+
+ return 0;
+}
+
+static int synaptics_rmi4_resume(struct device *dev)
+{
+ int retval;
+ struct synaptics_rmi4_exp_fhandler *exp_fhandler;
+ struct synaptics_rmi4_data *rmi4_data = dev_get_drvdata(dev);
+ const struct synaptics_dsx_board_data *bdata =
+ rmi4_data->hw_if->board_data;
+
+ dev_info(rmi4_data->pdev->dev.parent, " %s\n", __func__);
+ if (rmi4_data->stay_awake)
+ return 0;
+
+ if (rmi4_data->pwr_reg) {
+ retval = regulator_enable(rmi4_data->pwr_reg);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to enable power regulator\n",
+ __func__);
+ }
+ msleep(bdata->power_delay_ms);
+ rmi4_data->current_page = MASK_8BIT;
+ if (rmi4_data->hw_if->ui_hw_init)
+ rmi4_data->hw_if->ui_hw_init(rmi4_data);
+ }
+
+ if (rmi4_data->enable_wakeup_gesture && !rmi4_data->face_down) {
+ if (!rmi4_data->irq_enabled) {
+ dev_dbg(rmi4_data->pdev->dev.parent, " %s, irq enabled\n", __func__);
+ enable_irq(rmi4_data->irq);
+ rmi4_data->irq_enabled = true;
+ }
+ }
+
+ mutex_lock(&exp_data.mutex);
+ if (!list_empty(&exp_data.list)) {
+ list_for_each_entry(exp_fhandler, &exp_data.list, link)
+ if (exp_fhandler->exp_fn->resume != NULL)
+ exp_fhandler->exp_fn->resume(rmi4_data);
+ }
+ mutex_unlock(&exp_data.mutex);
+
+ return 0;
+}
+
+static const struct dev_pm_ops synaptics_rmi4_dev_pm_ops = {
+ .suspend = synaptics_rmi4_suspend,
+ .resume = synaptics_rmi4_resume,
+};
+#endif
+
+static struct platform_driver synaptics_rmi4_driver = {
+ .driver = {
+ .name = PLATFORM_DRIVER_NAME,
+ .owner = THIS_MODULE,
+#ifdef CONFIG_PM
+ .pm = &synaptics_rmi4_dev_pm_ops,
+#endif
+ },
+ .probe = synaptics_rmi4_probe,
+ .remove = synaptics_rmi4_remove,
+};
+
+static int __init synaptics_rmi4_init(void)
+{
+ int retval;
+
+ retval = synaptics_rmi4_bus_init();
+ if (retval)
+ return retval;
+
+ return platform_driver_register(&synaptics_rmi4_driver);
+}
+
+static void __exit synaptics_rmi4_exit(void)
+{
+ platform_driver_unregister(&synaptics_rmi4_driver);
+
+ synaptics_rmi4_bus_exit();
+
+ return;
+}
+
+module_init(synaptics_rmi4_init);
+module_exit(synaptics_rmi4_exit);
+
+MODULE_AUTHOR("Synaptics, Inc.");
+MODULE_DESCRIPTION("Synaptics DSX Touch Driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_core.h b/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_core.h
new file mode 100644
index 0000000..00f52e6
--- /dev/null
+++ b/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_core.h
@@ -0,0 +1,395 @@
+/*
+ * Synaptics DSX touchscreen driver
+ *
+ * Copyright (C) 2012 Synaptics Incorporated
+ *
+ * Copyright (C) 2012 Alexandra Chin <alexandra.chin@tw.synaptics.com>
+ * Copyright (C) 2012 Scott Lin <scott.lin@tw.synaptics.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#ifndef _SYNAPTICS_DSX_RMI4_H_
+#define _SYNAPTICS_DSX_RMI4_H_
+
+#define SYNAPTICS_DS4 (1 << 0)
+#define SYNAPTICS_DS5 (1 << 1)
+#define SYNAPTICS_DSX_DRIVER_PRODUCT (SYNAPTICS_DS4 | SYNAPTICS_DS5)
+#define SYNAPTICS_DSX_DRIVER_VERSION 0x2002
+
+#include <linux/version.h>
+
+#if (LINUX_VERSION_CODE > KERNEL_VERSION(2, 6, 38))
+#define KERNEL_ABOVE_2_6_38
+#endif
+
+#ifdef KERNEL_ABOVE_2_6_38
+#define sstrtoul(...) kstrtoul(__VA_ARGS__)
+#else
+#define sstrtoul(...) strict_strtoul(__VA_ARGS__)
+#endif
+
+#define PDT_PROPS (0x00EF)
+#define PDT_START (0x00E9)
+#define PDT_END (0x00D0)
+#define PDT_ENTRY_SIZE (0x0006)
+#define PAGES_TO_SERVICE (10)
+#define PAGE_SELECT_LEN (2)
+#define ADDRESS_WORD_LEN (2)
+
+#define SYNAPTICS_RMI4_F01 (0x01)
+#define SYNAPTICS_RMI4_F11 (0x11)
+#define SYNAPTICS_RMI4_F12 (0x12)
+#define SYNAPTICS_RMI4_F1A (0x1a)
+#define SYNAPTICS_RMI4_F34 (0x34)
+#define SYNAPTICS_RMI4_F51 (0x51)
+#define SYNAPTICS_RMI4_F54 (0x54)
+#define SYNAPTICS_RMI4_F55 (0x55)
+#define SYNAPTICS_RMI4_FDB (0xdb)
+
+#define SYNAPTICS_RMI4_PRODUCT_INFO_SIZE 2
+#define SYNAPTICS_RMI4_PRODUCT_ID_SIZE 10
+#define SYNAPTICS_RMI4_BUILD_ID_SIZE 3
+
+#define F12_FINGERS_TO_SUPPORT 10
+#define F12_NO_OBJECT_STATUS 0x00
+#define F12_FINGER_STATUS 0x01
+#define F12_STYLUS_STATUS 0x02
+#define F12_PALM_STATUS 0x03
+#define F12_HOVERING_FINGER_STATUS 0x05
+#define F12_GLOVED_FINGER_STATUS 0x06
+
+#define MAX_NUMBER_OF_BUTTONS 4
+#define MAX_INTR_REGISTERS 4
+
+#define MASK_16BIT 0xFFFF
+#define MASK_8BIT 0xFF
+#define MASK_7BIT 0x7F
+#define MASK_6BIT 0x3F
+#define MASK_5BIT 0x1F
+#define MASK_4BIT 0x0F
+#define MASK_3BIT 0x07
+#define MASK_2BIT 0x03
+#define MASK_1BIT 0x01
+
+#define SENSOR_ID_CHECKING_EN (1 << 16)
+
+enum exp_fn {
+ RMI_DEV = 0,
+ RMI_FW_UPDATER,
+ RMI_TEST_REPORTING,
+ RMI_PROXIMITY,
+ RMI_ACTIVE_PEN,
+ RMI_DEBUG,
+ RMI_LAST,
+};
+
+/*
+ * struct synaptics_rmi4_fn_desc - function descriptor fields in PDT entry
+ * @query_base_addr: base address for query registers
+ * @cmd_base_addr: base address for command registers
+ * @ctrl_base_addr: base address for control registers
+ * @data_base_addr: base address for data registers
+ * @intr_src_count: number of interrupt sources
+ * @fn_number: function number
+ */
+struct synaptics_rmi4_fn_desc {
+ unsigned char query_base_addr;
+ unsigned char cmd_base_addr;
+ unsigned char ctrl_base_addr;
+ unsigned char data_base_addr;
+ unsigned char intr_src_count;
+ unsigned char fn_number;
+};
+
+/*
+ * struct synaptics_rmi4_fn_full_addr - full 16-bit base addresses
+ * @query_base: 16-bit base address for query registers
+ * @cmd_base: 16-bit base address for command registers
+ * @ctrl_base: 16-bit base address for control registers
+ * @data_base: 16-bit base address for data registers
+ */
+struct synaptics_rmi4_fn_full_addr {
+ unsigned short query_base;
+ unsigned short cmd_base;
+ unsigned short ctrl_base;
+ unsigned short data_base;
+};
+
+/*
+ * struct synaptics_rmi4_f11_extra_data - extra data of F$11
+ * @data38_offset: offset to F11_2D_DATA38 register
+ */
+struct synaptics_rmi4_f11_extra_data {
+ unsigned char data38_offset;
+};
+
+/*
+ * struct synaptics_rmi4_f12_extra_data - extra data of F$12
+ * @data1_offset: offset to F12_2D_DATA01 register
+ * @data4_offset: offset to F12_2D_DATA04 register
+ * @data15_offset: offset to F12_2D_DATA15 register
+ * @data15_size: size of F12_2D_DATA15 register
+ * @data15_data: buffer for reading F12_2D_DATA15 register
+ * @ctrl20_offset: offset to F12_2D_CTRL20 register
+ */
+struct synaptics_rmi4_f12_extra_data {
+ unsigned char data1_offset;
+ unsigned char data4_offset;
+ unsigned char data15_offset;
+ unsigned char data15_size;
+ unsigned char data15_data[(F12_FINGERS_TO_SUPPORT + 7) / 8];
+ unsigned char ctrl20_offset;
+};
+
+/*
+ * struct synaptics_rmi4_fn - RMI function handler
+ * @fn_number: function number
+ * @num_of_data_sources: number of data sources
+ * @num_of_data_points: maximum number of fingers supported
+ * @size_of_data_register_block: data register block size
+ * @intr_reg_num: index to associated interrupt register
+ * @intr_mask: interrupt mask
+ * @full_addr: full 16-bit base addresses of function registers
+ * @link: linked list for function handlers
+ * @data_size: size of private data
+ * @data: pointer to private data
+ * @extra: pointer to extra data
+ */
+struct synaptics_rmi4_fn {
+ unsigned char fn_number;
+ unsigned char num_of_data_sources;
+ unsigned char num_of_data_points;
+ unsigned char size_of_data_register_block;
+ unsigned char intr_reg_num;
+ unsigned char intr_mask;
+ struct synaptics_rmi4_fn_full_addr full_addr;
+ struct list_head link;
+ int data_size;
+ void *data;
+ void *extra;
+};
+
+/*
+ * struct synaptics_rmi4_device_info - device information
+ * @version_major: RMI protocol major version number
+ * @version_minor: RMI protocol minor version number
+ * @manufacturer_id: manufacturer ID
+ * @product_props: product properties
+ * @product_info: product information
+ * @product_id_string: product ID
+ * @build_id: firmware build ID
+ * @support_fn_list: linked list for function handlers
+ */
+struct synaptics_rmi4_device_info {
+ unsigned int version_major;
+ unsigned int version_minor;
+ unsigned char manufacturer_id;
+ unsigned char product_props;
+ unsigned char product_info[SYNAPTICS_RMI4_PRODUCT_INFO_SIZE];
+ unsigned char product_id_string[SYNAPTICS_RMI4_PRODUCT_ID_SIZE + 1];
+ unsigned char build_id[SYNAPTICS_RMI4_BUILD_ID_SIZE];
+ struct list_head support_fn_list;
+};
+
+struct synaptics_rmi4_report_points {
+ uint8_t state;
+ int x;
+ int y;
+ int z;
+ int w;
+};
+
+/*
+ * struct synaptics_rmi4_data - RMI4 device instance data
+ * @pdev: pointer to platform device
+ * @input_dev: pointer to associated input device
+ * @hw_if: pointer to hardware interface data
+ * @rmi4_mod_info: device information
+ * @pwr_reg: pointer to regulator for power control
+ * @bus_reg: pointer to regulator for bus pullup control
+ * @rmi4_reset_mutex: mutex for software reset
+ * @rmi4_report_mutex: mutex for input event reporting
+ * @rmi4_io_ctrl_mutex: mutex for communication interface I/O
+ * @early_suspend: early suspend power management
+ * @current_page: current RMI page for register access
+ * @button_0d_enabled: switch for enabling 0d button support
+ * @full_pm_cycle: switch for enabling full power management cycle
+ * @num_of_tx: number of Tx channels for 2D touch
+ * @num_of_rx: number of Rx channels for 2D touch
+ * @num_of_fingers: maximum number of fingers for 2D touch
+ * @max_touch_width: maximum touch width
+ * @report_enable: input data to report for F$12
+ * @no_sleep_setting: default setting of NoSleep in F01_RMI_CTRL00 register
+ * @intr_mask: interrupt enable mask
+ * @button_txrx_mapping: Tx Rx mapping of 0D buttons
+ * @num_of_intr_regs: number of interrupt registers
+ * @f01_query_base_addr: query base address for f$01
+ * @f01_cmd_base_addr: command base address for f$01
+ * @f01_ctrl_base_addr: control base address for f$01
+ * @f01_data_base_addr: data base address for f$01
+ * @firmware_id: firmware build ID
+ * @irq: attention interrupt
+ * @sensor_max_x: maximum x coordinate for 2D touch
+ * @sensor_max_y: maximum y coordinate for 2D touch
+ * @flash_prog_mode: flag to indicate flash programming mode status
+ * @irq_enabled: flag to indicate attention interrupt enable status
+ * @fingers_on_2d: flag to indicate presence of fingers in 2D area
+ * @suspend: flag to indicate whether in suspend state
+ * @sensor_sleep: flag to indicate sleep state of sensor
+ * @stay_awake: flag to indicate whether to stay awake during suspend
+ * @irq_enable: pointer to interrupt enable function
+ * @f11_wakeup_gesture: flag to indicate support for wakeup gestures in F$11
+ * @f12_wakeup_gesture: flag to indicate support for wakeup gestures in F$12
+ * @enable_wakeup_gesture: flag to indicate usage of wakeup gestures
+ * @reset_device: pointer to device reset function
+ */
+struct synaptics_rmi4_data {
+ struct platform_device *pdev;
+ struct input_dev *input_dev;
+ const struct synaptics_dsx_hw_interface *hw_if;
+ struct synaptics_rmi4_device_info rmi4_mod_info;
+ struct regulator *pwr_reg;
+ struct regulator *bus_reg;
+ struct mutex rmi4_reset_mutex;
+ struct mutex rmi4_report_mutex;
+ struct mutex rmi4_io_ctrl_mutex;
+#ifdef CONFIG_HAS_EARLYSUSPEND
+ struct early_suspend early_suspend;
+#endif
+#ifdef CONFIG_FB
+ struct notifier_block fb_notifier;
+#endif
+ unsigned char current_page;
+ unsigned char button_0d_enabled;
+ unsigned char full_pm_cycle;
+ unsigned char num_of_tx;
+ unsigned char num_of_rx;
+ unsigned char num_of_fingers;
+ unsigned char max_touch_width;
+ unsigned char report_enable;
+ unsigned char no_sleep_setting;
+ unsigned char intr_mask[MAX_INTR_REGISTERS];
+ unsigned char *button_txrx_mapping;
+ unsigned short num_of_intr_regs;
+ unsigned short f01_query_base_addr;
+ unsigned short f01_cmd_base_addr;
+ unsigned short f01_ctrl_base_addr;
+ unsigned short f01_data_base_addr;
+ unsigned short f54_query_base_addr;
+ unsigned short f54_cmd_base_addr;
+ unsigned short f54_ctrl_base_addr;
+ unsigned short f54_data_base_addr;
+ unsigned int firmware_id;
+ int irq;
+ int sensor_max_x;
+ int sensor_max_y;
+ int sensor_max_z;
+ bool flash_prog_mode;
+ bool irq_enabled;
+ bool fingers_on_2d;
+ bool suspend;
+ bool sensor_sleep;
+ bool stay_awake;
+ int (*irq_enable)(struct synaptics_rmi4_data *rmi4_data, bool enable);
+ bool f11_wakeup_gesture;
+ bool f12_wakeup_gesture;
+ bool enable_wakeup_gesture;
+ int (*reset_device)(struct synaptics_rmi4_data *rmi4_data);
+ int tw_vendor_pin;
+ int face_down;
+ bool interactive;
+};
+
+struct synaptics_dsx_bus_access {
+ unsigned char type;
+ int (*read)(struct synaptics_rmi4_data *rmi4_data, unsigned short addr,
+ unsigned char *data, unsigned short length);
+ int (*write)(struct synaptics_rmi4_data *rmi4_data, unsigned short addr,
+ unsigned char *data, unsigned short length);
+};
+
+struct synaptics_dsx_hw_interface {
+ struct synaptics_dsx_board_data *board_data;
+ const struct synaptics_dsx_bus_access *bus_access;
+ int (*bl_hw_init)(struct synaptics_rmi4_data *rmi4_data);
+ int (*ui_hw_init)(struct synaptics_rmi4_data *rmi4_data);
+};
+
+struct synaptics_rmi4_exp_fn {
+ enum exp_fn fn_type;
+ int (*init)(struct synaptics_rmi4_data *rmi4_data);
+ void (*remove)(struct synaptics_rmi4_data *rmi4_data);
+ void (*reset)(struct synaptics_rmi4_data *rmi4_data);
+ void (*reinit)(struct synaptics_rmi4_data *rmi4_data);
+ void (*early_suspend)(struct synaptics_rmi4_data *rmi4_data);
+ void (*suspend)(struct synaptics_rmi4_data *rmi4_data);
+ void (*resume)(struct synaptics_rmi4_data *rmi4_data);
+ void (*late_resume)(struct synaptics_rmi4_data *rmi4_data);
+ void (*attn)(struct synaptics_rmi4_data *rmi4_data,
+ unsigned char intr_mask);
+};
+
+int synaptics_rmi4_bus_init(void);
+
+void synaptics_rmi4_bus_exit(void);
+
+void synaptics_rmi4_new_function(struct synaptics_rmi4_exp_fn *exp_fn_module,
+ bool insert);
+
+int synaptics_fw_updater(unsigned char *fw_data);
+int synaptics_config_updater(struct synaptics_dsx_board_data *bdata);
+
+static inline int synaptics_rmi4_reg_read(
+ struct synaptics_rmi4_data *rmi4_data,
+ unsigned short addr,
+ unsigned char *data,
+ unsigned short len)
+{
+ return rmi4_data->hw_if->bus_access->read(rmi4_data, addr, data, len);
+}
+
+static inline int synaptics_rmi4_reg_write(
+ struct synaptics_rmi4_data *rmi4_data,
+ unsigned short addr,
+ unsigned char *data,
+ unsigned short len)
+{
+ return rmi4_data->hw_if->bus_access->write(rmi4_data, addr, data, len);
+}
+
+static inline ssize_t synaptics_rmi4_show_error(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+	dev_warn(dev, "%s: Attempted to read from write-only attribute %s\n",
+ __func__, attr->attr.name);
+ return -EPERM;
+}
+
+static inline ssize_t synaptics_rmi4_store_error(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+	dev_warn(dev, "%s: Attempted to write to read-only attribute %s\n",
+ __func__, attr->attr.name);
+ return -EPERM;
+}
+
+static inline void batohs(unsigned short *dest, unsigned char *src)
+{
+ *dest = src[1] * 0x100 + src[0];
+}
+
+static inline void hstoba(unsigned char *dest, unsigned short src)
+{
+ dest[0] = src % 0x100;
+ dest[1] = src / 0x100;
+}
+
+#endif
diff --git a/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_fw_update.c b/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_fw_update.c
new file mode 100644
index 0000000..fb4576f
--- /dev/null
+++ b/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_fw_update.c
@@ -0,0 +1,2201 @@
+/*
+ * Synaptics DSX touchscreen driver
+ *
+ * Copyright (C) 2012 Synaptics Incorporated
+ *
+ * Copyright (C) 2012 Alexandra Chin <alexandra.chin@tw.synaptics.com>
+ * Copyright (C) 2012 Scott Lin <scott.lin@tw.synaptics.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/delay.h>
+#include <linux/gpio.h>
+#include <linux/input.h>
+#include <linux/firmware.h>
+#include <linux/platform_device.h>
+#include <linux/wakelock.h>
+#include <linux/input/synaptics_dsx.h>
+#include "synaptics_dsx_core.h"
+
+#define FW_IMAGE_NAME "synaptics.img"
+
+#define DO_STARTUP_FW_UPDATE
+
+#define FORCE_UPDATE false
+#define DO_LOCKDOWN false
+
+#define MAX_IMAGE_NAME_LEN 256
+#define MAX_FIRMWARE_ID_LEN 10
+
+#define LOCKDOWN_OFFSET 0xb0
+#define IMAGE_AREA_OFFSET 0x100
+
+#define BOOTLOADER_ID_OFFSET 0
+#define BLOCK_NUMBER_OFFSET 0
+
+#define V5_PROPERTIES_OFFSET 2
+#define V5_BLOCK_SIZE_OFFSET 3
+#define V5_BLOCK_COUNT_OFFSET 5
+#define V5_BLOCK_DATA_OFFSET 2
+
+#define V6_PROPERTIES_OFFSET 1
+#define V6_BLOCK_SIZE_OFFSET 2
+#define V6_BLOCK_COUNT_OFFSET 3
+#define V6_BLOCK_DATA_OFFSET 1
+#define V6_FLASH_COMMAND_OFFSET 2
+#define V6_FLASH_STATUS_OFFSET 3
+
+#define LOCKDOWN_BLOCK_COUNT 5
+
+#define REG_MAP (1 << 0)
+#define UNLOCKED (1 << 1)
+#define HAS_CONFIG_ID (1 << 2)
+#define HAS_PERM_CONFIG (1 << 3)
+#define HAS_BL_CONFIG (1 << 4)
+#define HAS_DISP_CONFIG (1 << 5)
+#define HAS_CTRL1 (1 << 6)
+
+#define UI_CONFIG_AREA 0x00
+#define PERM_CONFIG_AREA 0x01
+#define BL_CONFIG_AREA 0x02
+#define DISP_CONFIG_AREA 0x03
+
+#define CMD_WRITE_FW_BLOCK 0x2
+#define CMD_ERASE_ALL 0x3
+#define CMD_WRITE_LOCKDOWN_BLOCK 0x4
+#define CMD_READ_CONFIG_BLOCK 0x5
+#define CMD_WRITE_CONFIG_BLOCK 0x6
+#define CMD_ERASE_CONFIG 0x7
+#define CMD_READ_TW_PIN 0x8
+#define CMD_ERASE_BL_CONFIG 0x9
+#define CMD_ERASE_DISP_CONFIG 0xa
+#define CMD_ENABLE_FLASH_PROG 0xf
+
+#define SLEEP_MODE_NORMAL (0x00)
+#define SLEEP_MODE_SENSOR_SLEEP (0x01)
+#define SLEEP_MODE_RESERVED0 (0x02)
+#define SLEEP_MODE_RESERVED1 (0x03)
+
+#define ENABLE_WAIT_MS (1 * 1000)
+#define WRITE_WAIT_MS (3 * 1000)
+#define ERASE_WAIT_MS (5 * 1000)
+
+#define MIN_SLEEP_TIME_US 50
+#define MAX_SLEEP_TIME_US 100
+
+#define ENTER_FLASH_PROG_WAIT_MS 20
+
+static int fwu_do_reflash(void);
+static int fwu_do_write_config(void);
+
+static ssize_t fwu_sysfs_show_image(struct file *data_file,
+ struct kobject *kobj, struct bin_attribute *attributes,
+ char *buf, loff_t pos, size_t count);
+
+static ssize_t fwu_sysfs_store_image(struct file *data_file,
+ struct kobject *kobj, struct bin_attribute *attributes,
+ char *buf, loff_t pos, size_t count);
+
+static ssize_t fwu_sysfs_do_reflash_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count);
+
+static ssize_t fwu_sysfs_write_config_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count);
+
+static ssize_t fwu_sysfs_read_config_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count);
+
+static ssize_t fwu_sysfs_config_area_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count);
+
+static ssize_t fwu_sysfs_image_name_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count);
+
+static ssize_t fwu_sysfs_image_size_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count);
+
+static ssize_t fwu_sysfs_block_size_show(struct device *dev,
+ struct device_attribute *attr, char *buf);
+
+static ssize_t fwu_sysfs_firmware_block_count_show(struct device *dev,
+ struct device_attribute *attr, char *buf);
+
+static ssize_t fwu_sysfs_configuration_block_count_show(struct device *dev,
+ struct device_attribute *attr, char *buf);
+
+static ssize_t fwu_sysfs_perm_config_block_count_show(struct device *dev,
+ struct device_attribute *attr, char *buf);
+
+static ssize_t fwu_sysfs_bl_config_block_count_show(struct device *dev,
+ struct device_attribute *attr, char *buf);
+
+static ssize_t fwu_sysfs_disp_config_block_count_show(struct device *dev,
+ struct device_attribute *attr, char *buf);
+
+enum bl_version {
+ V5 = 5,
+ V6 = 6,
+};
+
+enum flash_area {
+ NONE,
+ UI_FIRMWARE,
+ CONFIG_AREA,
+};
+
+enum update_mode {
+ NORMAL = 1,
+ FORCE = 2,
+ LOCKDOWN = 8,
+};
+
+struct image_header {
+ /* 0x00 - 0x0f */
+ unsigned char checksum[4];
+ unsigned char reserved_04;
+ unsigned char reserved_05;
+ unsigned char options_firmware_id:1;
+ unsigned char options_bootloader:1;
+ unsigned char options_reserved:6;
+ unsigned char bootloader_version;
+ unsigned char firmware_size[4];
+ unsigned char config_size[4];
+ /* 0x10 - 0x1f */
+ unsigned char product_id[SYNAPTICS_RMI4_PRODUCT_ID_SIZE];
+ unsigned char package_id[2];
+ unsigned char package_id_revision[2];
+ unsigned char product_info[SYNAPTICS_RMI4_PRODUCT_INFO_SIZE];
+ /* 0x20 - 0x2f */
+ unsigned char bootloader_addr[4];
+ unsigned char bootloader_size[4];
+ unsigned char ui_addr[4];
+ unsigned char ui_size[4];
+ /* 0x30 - 0x3f */
+ unsigned char ds_id[16];
+ /* 0x40 - 0x4f */
+ unsigned char disp_config_addr[4];
+ unsigned char disp_config_size[4];
+ unsigned char reserved_48_4f[8];
+ /* 0x50 - 0x53 */
+ unsigned char firmware_id[4];
+};
+
+struct image_header_data {
+ bool contains_firmware_id;
+ bool contains_bootloader;
+ bool contains_disp_config;
+ unsigned int firmware_id;
+ unsigned int checksum;
+ unsigned int firmware_size;
+ unsigned int config_size;
+ unsigned int bootloader_size;
+ unsigned int disp_config_offset;
+ unsigned int disp_config_size;
+ unsigned char bootloader_version;
+ unsigned char product_id[SYNAPTICS_RMI4_PRODUCT_ID_SIZE + 1];
+ unsigned char product_info[SYNAPTICS_RMI4_PRODUCT_INFO_SIZE];
+};
+
+struct pdt_properties {
+ union {
+ struct {
+ unsigned char reserved_1:6;
+ unsigned char has_bsr:1;
+ unsigned char reserved_2:1;
+ } __packed;
+ unsigned char data[1];
+ };
+};
+
+struct f01_device_status {
+ union {
+ struct {
+ unsigned char status_code:4;
+ unsigned char reserved:2;
+ unsigned char flash_prog:1;
+ unsigned char unconfigured:1;
+ } __packed;
+ unsigned char data[1];
+ };
+};
+
+struct f01_device_control {
+ union {
+ struct {
+ unsigned char sleep_mode:2;
+ unsigned char nosleep:1;
+ unsigned char reserved:2;
+ unsigned char charger_connected:1;
+ unsigned char report_rate:1;
+ unsigned char configured:1;
+ } __packed;
+ unsigned char data[1];
+ };
+};
+
+struct synaptics_rmi4_fwu_handle {
+ enum bl_version bl_version;
+ bool initialized;
+ bool program_enabled;
+ bool has_perm_config;
+ bool has_bl_config;
+ bool has_disp_config;
+ bool force_update;
+ bool in_flash_prog_mode;
+ bool do_lockdown;
+ unsigned int data_pos;
+ unsigned int image_size;
+ unsigned char *image_name;
+ unsigned char *ext_data_source;
+ unsigned char *read_config_buf;
+ unsigned char intr_mask;
+ unsigned char command;
+ unsigned char bootloader_id[2];
+ unsigned char flash_properties;
+ unsigned char flash_status;
+ unsigned char productinfo1;
+ unsigned char productinfo2;
+ unsigned char properties_off;
+ unsigned char blk_size_off;
+ unsigned char blk_count_off;
+ unsigned char blk_data_off;
+ unsigned char flash_cmd_off;
+ unsigned char flash_status_off;
+ unsigned short block_size;
+ unsigned short fw_block_count;
+ unsigned short config_block_count;
+ unsigned short lockdown_block_count;
+ unsigned short perm_config_block_count;
+ unsigned short bl_config_block_count;
+ unsigned short disp_config_block_count;
+ unsigned short config_size;
+ unsigned short config_area;
+ char product_id[SYNAPTICS_RMI4_PRODUCT_ID_SIZE + 1];
+ const unsigned char *firmware_data;
+ const unsigned char *config_data;
+ const unsigned char *disp_config_data;
+ const unsigned char *lockdown_data;
+ struct workqueue_struct *fwu_workqueue;
+ struct work_struct fwu_work;
+ struct synaptics_rmi4_fn_desc f34_fd;
+ struct synaptics_rmi4_data *rmi4_data;
+ struct wake_lock fwu_wake_lock;
+};
+
+static struct bin_attribute dev_attr_data = {
+ .attr = {
+ .name = "data",
+ .mode = (S_IRUGO | S_IWUGO),
+ },
+ .size = 0,
+ .read = fwu_sysfs_show_image,
+ .write = fwu_sysfs_store_image,
+};
+
+static struct device_attribute attrs[] = {
+ __ATTR(doreflash, S_IWUGO,
+ synaptics_rmi4_show_error,
+ fwu_sysfs_do_reflash_store),
+ __ATTR(writeconfig, S_IWUGO,
+ synaptics_rmi4_show_error,
+ fwu_sysfs_write_config_store),
+ __ATTR(readconfig, S_IWUGO,
+ synaptics_rmi4_show_error,
+ fwu_sysfs_read_config_store),
+ __ATTR(configarea, S_IWUGO,
+ synaptics_rmi4_show_error,
+ fwu_sysfs_config_area_store),
+ __ATTR(imagename, S_IWUGO,
+ synaptics_rmi4_show_error,
+ fwu_sysfs_image_name_store),
+ __ATTR(imagesize, S_IWUGO,
+ synaptics_rmi4_show_error,
+ fwu_sysfs_image_size_store),
+ __ATTR(blocksize, S_IRUGO,
+ fwu_sysfs_block_size_show,
+ synaptics_rmi4_store_error),
+ __ATTR(fwblockcount, S_IRUGO,
+ fwu_sysfs_firmware_block_count_show,
+ synaptics_rmi4_store_error),
+ __ATTR(configblockcount, S_IRUGO,
+ fwu_sysfs_configuration_block_count_show,
+ synaptics_rmi4_store_error),
+ __ATTR(permconfigblockcount, S_IRUGO,
+ fwu_sysfs_perm_config_block_count_show,
+ synaptics_rmi4_store_error),
+ __ATTR(blconfigblockcount, S_IRUGO,
+ fwu_sysfs_bl_config_block_count_show,
+ synaptics_rmi4_store_error),
+ __ATTR(dispconfigblockcount, S_IRUGO,
+ fwu_sysfs_disp_config_block_count_show,
+ synaptics_rmi4_store_error),
+};
+
+static struct synaptics_rmi4_fwu_handle *fwu;
+
+DECLARE_COMPLETION(fwu_remove_complete);
+
+static uint32_t syn_crc(uint16_t *data, uint16_t len)
+{
+ uint32_t sum1, sum2;
+ sum1 = sum2 = 0xFFFF;
+ if (data) {
+ while (len--) {
+ sum1 += *data++;
+ sum2 += sum1;
+ sum1 = (sum1 & 0xFFFF) + (sum1 >> 16);
+ sum2 = (sum2 & 0xFFFF) + (sum2 >> 16);
+ }
+ } else {
+		pr_err("%s: data incorrect\n", __func__);
+ return (0xFFFF | 0xFFFF << 16);
+ }
+ return sum1 | (sum2 << 16);
+}
+
+static unsigned int le_to_uint(const unsigned char *ptr)
+{
+ return (unsigned int)ptr[0] +
+ (unsigned int)ptr[1] * 0x100 +
+ (unsigned int)ptr[2] * 0x10000 +
+ (unsigned int)ptr[3] * 0x1000000;
+}
+
+static unsigned int be_to_uint(const unsigned char *ptr)
+{
+ return (unsigned int)ptr[3] +
+ (unsigned int)ptr[2] * 0x100 +
+ (unsigned int)ptr[1] * 0x10000 +
+ (unsigned int)ptr[0] * 0x1000000;
+}
+
+static void parse_header(struct image_header_data *header,
+ const unsigned char *fw_image)
+{
+ struct image_header *data = (struct image_header *)fw_image;
+
+ header->checksum = le_to_uint(data->checksum);
+
+ header->bootloader_version = data->bootloader_version;
+
+ header->firmware_size = le_to_uint(data->firmware_size);
+
+ header->config_size = le_to_uint(data->config_size);
+
+ memcpy(header->product_id, data->product_id, sizeof(data->product_id));
+ header->product_id[sizeof(data->product_id)] = 0;
+
+ memcpy(header->product_info, data->product_info,
+ sizeof(data->product_info));
+
+ header->contains_firmware_id = data->options_firmware_id;
+ if (header->contains_firmware_id)
+ header->firmware_id = le_to_uint(data->firmware_id);
+
+ header->contains_bootloader = data->options_bootloader;
+ if (header->contains_bootloader)
+ header->bootloader_size = le_to_uint(data->bootloader_size);
+
+ if ((header->bootloader_version == V5) && header->contains_bootloader) {
+ header->contains_disp_config = true;
+ header->disp_config_offset = le_to_uint(data->disp_config_addr);
+ header->disp_config_size = le_to_uint(data->disp_config_size);
+ } else {
+ header->contains_disp_config = false;
+ }
+
+ return;
+}
+
+static int fwu_read_f01_device_status(struct f01_device_status *status)
+{
+ int retval;
+ struct synaptics_rmi4_data *rmi4_data = fwu->rmi4_data;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ rmi4_data->f01_data_base_addr,
+ status->data,
+ sizeof(status->data));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to read F01 device status\n",
+ __func__);
+ return retval;
+ }
+
+ return 0;
+}
+
+static int fwu_read_f34_queries(void)
+{
+ int retval;
+ unsigned char count;
+ unsigned char buf[10];
+ struct synaptics_rmi4_data *rmi4_data = fwu->rmi4_data;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ fwu->f34_fd.query_base_addr + BOOTLOADER_ID_OFFSET,
+ fwu->bootloader_id,
+ sizeof(fwu->bootloader_id));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to read bootloader ID\n",
+ __func__);
+ return retval;
+ }
+
+ if (fwu->bootloader_id[1] == '5') {
+ fwu->bl_version = V5;
+ } else if (fwu->bootloader_id[1] == '6') {
+ fwu->bl_version = V6;
+ } else {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Unrecognized bootloader version\n",
+ __func__);
+ return -EINVAL;
+ }
+
+ if (fwu->bl_version == V5) {
+ fwu->properties_off = V5_PROPERTIES_OFFSET;
+ fwu->blk_size_off = V5_BLOCK_SIZE_OFFSET;
+ fwu->blk_count_off = V5_BLOCK_COUNT_OFFSET;
+ fwu->blk_data_off = V5_BLOCK_DATA_OFFSET;
+ } else if (fwu->bl_version == V6) {
+ fwu->properties_off = V6_PROPERTIES_OFFSET;
+ fwu->blk_size_off = V6_BLOCK_SIZE_OFFSET;
+ fwu->blk_count_off = V6_BLOCK_COUNT_OFFSET;
+ fwu->blk_data_off = V6_BLOCK_DATA_OFFSET;
+ }
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ fwu->f34_fd.query_base_addr + fwu->properties_off,
+ &fwu->flash_properties,
+ sizeof(fwu->flash_properties));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to read flash properties\n",
+ __func__);
+ return retval;
+ }
+
+ count = 4;
+
+ if (fwu->flash_properties & HAS_PERM_CONFIG) {
+ fwu->has_perm_config = 1;
+ count += 2;
+ }
+
+ if (fwu->flash_properties & HAS_BL_CONFIG) {
+ fwu->has_bl_config = 1;
+ count += 2;
+ }
+
+ if (fwu->flash_properties & HAS_DISP_CONFIG) {
+ fwu->has_disp_config = 1;
+ count += 2;
+ }
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ fwu->f34_fd.query_base_addr + fwu->blk_size_off,
+ buf,
+ 2);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to read block size info\n",
+ __func__);
+ return retval;
+ }
+
+ batohs(&fwu->block_size, &(buf[0]));
+
+ if (fwu->bl_version == V5) {
+ fwu->flash_cmd_off = fwu->blk_data_off + fwu->block_size;
+ fwu->flash_status_off = fwu->flash_cmd_off;
+ } else if (fwu->bl_version == V6) {
+ fwu->flash_cmd_off = V6_FLASH_COMMAND_OFFSET;
+ fwu->flash_status_off = V6_FLASH_STATUS_OFFSET;
+ }
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ fwu->f34_fd.query_base_addr + fwu->blk_count_off,
+ buf,
+ count);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to read block count info\n",
+ __func__);
+ return retval;
+ }
+
+ batohs(&fwu->fw_block_count, &(buf[0]));
+ batohs(&fwu->config_block_count, &(buf[2]));
+
+ count = 4;
+
+ if (fwu->has_perm_config) {
+ batohs(&fwu->perm_config_block_count, &(buf[count]));
+ count += 2;
+ }
+
+ if (fwu->has_bl_config) {
+ batohs(&fwu->bl_config_block_count, &(buf[count]));
+ count += 2;
+ }
+
+ if (fwu->has_disp_config)
+ batohs(&fwu->disp_config_block_count, &(buf[count]));
+
+ return 0;
+}
+
+static int fwu_read_f34_flash_status(void)
+{
+ int retval;
+ unsigned char status;
+ unsigned char command;
+ struct synaptics_rmi4_data *rmi4_data = fwu->rmi4_data;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ fwu->f34_fd.data_base_addr + fwu->flash_status_off,
+ &status,
+ sizeof(status));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to read flash status\n",
+ __func__);
+ return retval;
+ }
+
+ fwu->program_enabled = status >> 7;
+
+ if (fwu->bl_version == V5)
+ fwu->flash_status = (status >> 4) & MASK_3BIT;
+ else if (fwu->bl_version == V6)
+ fwu->flash_status = status & MASK_3BIT;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ fwu->f34_fd.data_base_addr + fwu->flash_cmd_off,
+ &command,
+ sizeof(command));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to read flash command\n",
+ __func__);
+ return retval;
+ }
+
+ fwu->command = command & MASK_4BIT;
+
+ return 0;
+}
+
+static int fwu_write_f34_command(unsigned char cmd)
+{
+ int retval;
+ unsigned char command = cmd & MASK_4BIT;
+ struct synaptics_rmi4_data *rmi4_data = fwu->rmi4_data;
+
+ fwu->command = cmd;
+
+ retval = synaptics_rmi4_reg_write(rmi4_data,
+ fwu->f34_fd.data_base_addr + fwu->flash_cmd_off,
+ &command,
+ sizeof(command));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to write command 0x%02x\n",
+ __func__, command);
+ return retval;
+ }
+
+ return 0;
+}
+
+static int fwu_wait_for_idle(int timeout_ms)
+{
+ int count = 0;
+ int timeout_count = ((timeout_ms * 1000) / MAX_SLEEP_TIME_US) + 1;
+ struct synaptics_rmi4_data *rmi4_data = fwu->rmi4_data;
+
+ do {
+ usleep_range(MIN_SLEEP_TIME_US, MAX_SLEEP_TIME_US);
+
+ count++;
+ if (count == timeout_count)
+ fwu_read_f34_flash_status();
+
+ if ((fwu->command == 0x00) && (fwu->flash_status == 0x00))
+ return 0;
+ } while (count < timeout_count);
+
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Timed out waiting for idle status, command: %X, status: %X\n",
+ __func__, fwu->command, fwu->flash_status);
+
+ return -ETIMEDOUT;
+}
+
+static enum flash_area fwu_go_nogo(struct image_header_data *header)
+{
+ int retval;
+ enum flash_area flash_area = NONE;
+ unsigned char index = 0;
+ unsigned char config_id[4];
+ unsigned int device_config_id;
+ unsigned int image_config_id;
+ unsigned int device_fw_id;
+ unsigned long image_fw_id;
+ char *strptr;
+ char *firmware_id;
+ struct synaptics_rmi4_data *rmi4_data = fwu->rmi4_data;
+
+ if (fwu->force_update) {
+ flash_area = UI_FIRMWARE;
+ goto exit;
+ }
+
+ /* Update both UI and config if device is in bootloader mode */
+ if (fwu->in_flash_prog_mode) {
+ flash_area = UI_FIRMWARE;
+ goto exit;
+ }
+
+ /* Get device firmware ID */
+ device_fw_id = rmi4_data->firmware_id;
+ dev_info(rmi4_data->pdev->dev.parent,
+ "%s: Device firmware ID = %d\n",
+ __func__, device_fw_id);
+
+ /* Get image firmware ID */
+ if (header->contains_firmware_id) {
+ image_fw_id = header->firmware_id;
+ } else {
+ strptr = strstr(fwu->image_name, "PR");
+ if (!strptr) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: No valid PR number (PRxxxxxxx) found in image file name (%s)\n",
+ __func__, fwu->image_name);
+ flash_area = NONE;
+ goto exit;
+ }
+
+ strptr += 2;
+ firmware_id = kzalloc(MAX_FIRMWARE_ID_LEN, GFP_KERNEL);
+ if (!firmware_id) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to alloc mem for firmware ID\n",
+ __func__);
+ flash_area = NONE;
+ goto exit;
+ }
+ while (strptr[index] >= '0' && strptr[index] <= '9' &&
+ index < MAX_FIRMWARE_ID_LEN - 1) {
+ firmware_id[index] = strptr[index];
+ index++;
+ }
+
+ retval = sstrtoul(firmware_id, 10, &image_fw_id);
+ kfree(firmware_id);
+ if (retval) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to obtain image firmware ID\n",
+ __func__);
+ flash_area = NONE;
+ goto exit;
+ }
+ }
+ dev_info(rmi4_data->pdev->dev.parent,
+ "%s: Image firmware ID = %d\n",
+ __func__, (unsigned int)image_fw_id);
+
+ if (image_fw_id > device_fw_id) {
+ flash_area = UI_FIRMWARE;
+ goto exit;
+ } else if (image_fw_id < device_fw_id) {
+ dev_info(rmi4_data->pdev->dev.parent,
+ "%s: Image firmware ID older than device firmware ID\n",
+ __func__);
+ flash_area = NONE;
+ goto exit;
+ }
+
+ /* Get device config ID */
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ fwu->f34_fd.ctrl_base_addr,
+ config_id,
+ sizeof(config_id));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to read device config ID\n",
+ __func__);
+ flash_area = NONE;
+ goto exit;
+ }
+ device_config_id = be_to_uint(config_id);
+ dev_info(rmi4_data->pdev->dev.parent,
+ "%s: Device config ID = 0x%02x 0x%02x 0x%02x 0x%02x\n",
+ __func__,
+ config_id[0],
+ config_id[1],
+ config_id[2],
+ config_id[3]);
+
+ /* Get image config ID */
+ image_config_id = be_to_uint(fwu->config_data);
+ dev_info(rmi4_data->pdev->dev.parent,
+ "%s: Image config ID = 0x%02x 0x%02x 0x%02x 0x%02x\n",
+ __func__,
+ fwu->config_data[0],
+ fwu->config_data[1],
+ fwu->config_data[2],
+ fwu->config_data[3]);
+
+ /*if (image_config_id > device_config_id) {
+ flash_area = CONFIG_AREA;
+ goto exit;
+ }*/
+
+ flash_area = NONE;
+
+exit:
+ if (flash_area == NONE) {
+ dev_info(rmi4_data->pdev->dev.parent,
+ "%s: No need to do reflash\n",
+ __func__);
+ } else {
+ dev_info(rmi4_data->pdev->dev.parent,
+ "%s: Updating %s\n",
+ __func__,
+ flash_area == UI_FIRMWARE ?
+ "UI firmware" :
+ "config only");
+ }
+
+ return flash_area;
+}
+
+static int fwu_scan_pdt(void)
+{
+ int retval;
+ unsigned char ii;
+ unsigned char intr_count = 0;
+ unsigned char intr_off;
+ unsigned char intr_src;
+ unsigned short addr;
+ bool f01found = false;
+ bool f34found = false;
+ struct synaptics_rmi4_fn_desc rmi_fd;
+ struct synaptics_rmi4_data *rmi4_data = fwu->rmi4_data;
+
+ for (addr = PDT_START; addr > PDT_END; addr -= PDT_ENTRY_SIZE) {
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ addr,
+ (unsigned char *)&rmi_fd,
+ sizeof(rmi_fd));
+ if (retval < 0)
+ return retval;
+
+ if (rmi_fd.fn_number) {
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Found F%02x\n",
+ __func__, rmi_fd.fn_number);
+ switch (rmi_fd.fn_number) {
+ case SYNAPTICS_RMI4_F01:
+ f01found = true;
+
+ rmi4_data->f01_query_base_addr =
+ rmi_fd.query_base_addr;
+ rmi4_data->f01_ctrl_base_addr =
+ rmi_fd.ctrl_base_addr;
+ rmi4_data->f01_data_base_addr =
+ rmi_fd.data_base_addr;
+ rmi4_data->f01_cmd_base_addr =
+ rmi_fd.cmd_base_addr;
+ break;
+ case SYNAPTICS_RMI4_F34:
+ f34found = true;
+ fwu->f34_fd.query_base_addr =
+ rmi_fd.query_base_addr;
+ fwu->f34_fd.ctrl_base_addr =
+ rmi_fd.ctrl_base_addr;
+ fwu->f34_fd.data_base_addr =
+ rmi_fd.data_base_addr;
+
+ fwu->intr_mask = 0;
+ intr_src = rmi_fd.intr_src_count;
+ intr_off = intr_count % 8;
+ for (ii = intr_off;
+ ii < ((intr_src & MASK_3BIT) +
+ intr_off);
+ ii++) {
+ fwu->intr_mask |= 1 << ii;
+ }
+ break;
+ }
+ } else {
+ break;
+ }
+
+ intr_count += (rmi_fd.intr_src_count & MASK_3BIT);
+ }
+
+ if (!f01found || !f34found) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to find both F01 and F34\n",
+ __func__);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int fwu_write_blocks(unsigned char *block_ptr, unsigned short block_cnt,
+ unsigned char command)
+{
+ int retval;
+ unsigned char block_offset[] = {0, 0};
+ unsigned short block_num;
+ struct synaptics_rmi4_data *rmi4_data = fwu->rmi4_data;
+
+ block_offset[1] |= (fwu->config_area << 5);
+
+ retval = synaptics_rmi4_reg_write(rmi4_data,
+ fwu->f34_fd.data_base_addr + BLOCK_NUMBER_OFFSET,
+ block_offset,
+ sizeof(block_offset));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to write to block number registers\n",
+ __func__);
+ return retval;
+ }
+
+ for (block_num = 0; block_num < block_cnt; block_num++) {
+ retval = synaptics_rmi4_reg_write(rmi4_data,
+ fwu->f34_fd.data_base_addr + fwu->blk_data_off,
+ block_ptr,
+ fwu->block_size);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to write block data (block %d)\n",
+ __func__, block_num);
+ return retval;
+ }
+
+ retval = fwu_write_f34_command(command);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to write command for block %d\n",
+ __func__, block_num);
+ return retval;
+ }
+
+ retval = fwu_wait_for_idle(WRITE_WAIT_MS);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to wait for idle status (block %d)\n",
+ __func__, block_num);
+ return retval;
+ }
+
+ block_ptr += fwu->block_size;
+ }
+
+ return 0;
+}
+
+static int fwu_write_firmware(void)
+{
+ return fwu_write_blocks((unsigned char *)fwu->firmware_data,
+ fwu->fw_block_count, CMD_WRITE_FW_BLOCK);
+}
+
+static int fwu_write_configuration(void)
+{
+ return fwu_write_blocks((unsigned char *)fwu->config_data,
+ fwu->config_block_count, CMD_WRITE_CONFIG_BLOCK);
+}
+
+static int fwu_write_disp_configuration(void)
+{
+ fwu->config_area = DISP_CONFIG_AREA;
+ fwu->config_data = fwu->disp_config_data;
+ fwu->config_block_count = fwu->disp_config_block_count;
+
+ return fwu_do_write_config();
+}
+
+static int fwu_write_lockdown(void)
+{
+ return fwu_write_blocks((unsigned char *)fwu->lockdown_data,
+ fwu->lockdown_block_count, CMD_WRITE_LOCKDOWN_BLOCK);
+}
+
+static int fwu_get_tw_vendor(void)
+{
+ int retval;
+ struct synaptics_rmi4_data *rmi4_data = fwu->rmi4_data;
+ uint16_t tw_pin_mask = rmi4_data->hw_if->board_data->tw_pin_mask;
+ uint8_t data[6];
+
+ memcpy(&data, &tw_pin_mask, sizeof(tw_pin_mask));
+ data[2] = data[0];
+ data[3] = data[1];
+ retval = synaptics_rmi4_reg_write(rmi4_data,
+ fwu->f34_fd.data_base_addr + 1,
+ data,
+ 4);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to write tw vendor pin\n",
+ __func__);
+ return -EINVAL;
+ }
+ retval = fwu_write_f34_command(CMD_READ_TW_PIN);
+ if (retval < 0)
+ return retval;
+
+ retval = fwu_wait_for_idle(ENABLE_WAIT_MS);
+ if (retval < 0)
+ return retval;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ fwu->f34_fd.data_base_addr + 1,
+ data,
+ sizeof(data));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to read tw vendor pin\n",
+ __func__);
+ return -EINVAL;
+ }
+ rmi4_data->tw_vendor_pin = (data[5] << 8) | data[4];
+ dev_info(rmi4_data->pdev->dev.parent, " %s: tw_vendor_pin = %x\n", __func__,
+ rmi4_data->tw_vendor_pin);
+
+ return 0;
+}
+
+static int crc_comparison(uint32_t config_crc)
+{
+ struct synaptics_rmi4_data *rmi4_data = fwu->rmi4_data;
+ uint8_t data[17];
+ int retval;
+ uint32_t flash_crc;
+
+ data[0] = fwu->config_block_count-1;
+ data[1] = 0x00;
+ retval = synaptics_rmi4_reg_write(rmi4_data,
+ fwu->f34_fd.data_base_addr,
+ data,
+ 2);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to write crc addr\n",
+ __func__);
+ return retval;
+ }
+
+ retval = fwu_write_f34_command(CMD_READ_CONFIG_BLOCK);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to write read config command\n",
+ __func__);
+ return retval;
+ }
+
+ retval = fwu_wait_for_idle(WRITE_WAIT_MS);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to wait for idle status\n",
+ __func__);
+ return retval;
+ }
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ fwu->f34_fd.data_base_addr + 1,
+ data,
+ sizeof(data));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to read crc data\n",
+ __func__);
+ return retval;
+ }
+
+ memcpy(&flash_crc, &data[12], 4);
+ dev_info(rmi4_data->pdev->dev.parent, " %s: config_crc = %X, flash_crc = %X\n",
+ __func__, config_crc, flash_crc);
+
+ if (flash_crc == config_crc)
+ return 0;
+ else
+ return 1;
+}
+
+static int fwu_write_bootloader_id(void)
+{
+ int retval;
+ struct synaptics_rmi4_data *rmi4_data = fwu->rmi4_data;
+
+ retval = synaptics_rmi4_reg_write(rmi4_data,
+ fwu->f34_fd.data_base_addr + fwu->blk_data_off,
+ fwu->bootloader_id,
+ sizeof(fwu->bootloader_id));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to write bootloader ID\n",
+ __func__);
+ return retval;
+ }
+
+ return 0;
+}
+
+static int fwu_enter_flash_prog(void)
+{
+ int retval;
+ struct f01_device_status f01_device_status;
+ struct f01_device_control f01_device_control;
+ struct synaptics_rmi4_data *rmi4_data = fwu->rmi4_data;
+
+ retval = fwu_write_bootloader_id();
+ if (retval < 0)
+ return retval;
+
+ retval = fwu_write_f34_command(CMD_ENABLE_FLASH_PROG);
+ if (retval < 0)
+ return retval;
+
+ retval = fwu_wait_for_idle(ENABLE_WAIT_MS);
+ if (retval < 0)
+ return retval;
+
+ if (!fwu->program_enabled) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Program enabled bit not set\n",
+ __func__);
+ return -EINVAL;
+ }
+
+ if (rmi4_data->hw_if->bl_hw_init) {
+ retval = rmi4_data->hw_if->bl_hw_init(rmi4_data);
+ if (retval < 0)
+ return retval;
+ }
+
+ retval = fwu_scan_pdt();
+ if (retval < 0)
+ return retval;
+
+ retval = fwu_read_f01_device_status(&f01_device_status);
+ if (retval < 0)
+ return retval;
+
+ if (!f01_device_status.flash_prog) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Not in flash prog mode\n",
+ __func__);
+ return -EINVAL;
+ }
+
+ retval = fwu_read_f34_queries();
+ if (retval < 0)
+ return retval;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ rmi4_data->f01_ctrl_base_addr,
+ f01_device_control.data,
+ sizeof(f01_device_control.data));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to read F01 device control\n",
+ __func__);
+ return retval;
+ }
+
+ f01_device_control.nosleep = true;
+ f01_device_control.sleep_mode = SLEEP_MODE_NORMAL;
+
+ retval = synaptics_rmi4_reg_write(rmi4_data,
+ rmi4_data->f01_ctrl_base_addr,
+ f01_device_control.data,
+ sizeof(f01_device_control.data));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to write F01 device control\n",
+ __func__);
+ return retval;
+ }
+
+ msleep(ENTER_FLASH_PROG_WAIT_MS);
+
+ return retval;
+}
+
+static int fwu_do_reflash(void)
+{
+ int retval;
+ struct synaptics_rmi4_data *rmi4_data = fwu->rmi4_data;
+ dev_info(rmi4_data->pdev->dev.parent, " %s\n", __func__);
+
+ retval = fwu_write_bootloader_id();
+ if (retval < 0)
+ return retval;
+
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Bootloader ID written\n",
+ __func__);
+
+ retval = fwu_write_f34_command(CMD_ERASE_ALL);
+ if (retval < 0)
+ return retval;
+
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Erase all command written\n",
+ __func__);
+
+ retval = fwu_wait_for_idle(ERASE_WAIT_MS);
+ if (retval < 0)
+ return retval;
+
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Idle status detected\n",
+ __func__);
+
+ if (fwu->firmware_data) {
+ retval = fwu_write_firmware();
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to write firmware\n",
+ __func__);
+ return retval;
+ }
+ pr_notice("%s: Firmware programmed\n", __func__);
+ }
+
+ if (fwu->config_data) {
+ retval = fwu_write_configuration();
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to write configuration\n",
+ __func__);
+ return retval;
+ }
+ pr_notice("%s: Configuration programmed\n", __func__);
+ }
+
+ if (fwu->disp_config_data) {
+ retval = fwu_write_disp_configuration();
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to write disp configuration\n",
+ __func__);
+ return retval;
+ }
+ pr_notice("%s: Display configuration programmed\n", __func__);
+ }
+
+ dev_info(rmi4_data->pdev->dev.parent, " %s\n", __func__);
+ return retval;
+}
+
+static int fwu_do_write_config(void)
+{
+ int retval;
+ struct synaptics_rmi4_data *rmi4_data = fwu->rmi4_data;
+ dev_info(rmi4_data->pdev->dev.parent, " %s\n", __func__);
+
+ if (fwu->config_area == PERM_CONFIG_AREA) {
+ fwu->config_block_count = fwu->perm_config_block_count;
+ goto write_config;
+ }
+
+ retval = fwu_write_bootloader_id();
+ if (retval < 0)
+ return retval;
+
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Bootloader ID written\n",
+ __func__);
+
+ switch (fwu->config_area) {
+ case UI_CONFIG_AREA:
+ retval = fwu_write_f34_command(CMD_ERASE_CONFIG);
+ break;
+ case BL_CONFIG_AREA:
+ retval = fwu_write_f34_command(CMD_ERASE_BL_CONFIG);
+ fwu->config_block_count = fwu->bl_config_block_count;
+ break;
+ case DISP_CONFIG_AREA:
+ retval = fwu_write_f34_command(CMD_ERASE_DISP_CONFIG);
+ fwu->config_block_count = fwu->disp_config_block_count;
+ break;
+ }
+ if (retval < 0)
+ return retval;
+
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Erase command written\n",
+ __func__);
+
+ retval = fwu_wait_for_idle(ERASE_WAIT_MS);
+ if (retval < 0)
+ return retval;
+
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Idle status detected\n",
+ __func__);
+
+write_config:
+ retval = fwu_write_configuration();
+ if (retval < 0)
+ return retval;
+
+ pr_notice("%s: Config written\n", __func__);
+
+ return retval;
+}
+
+static int fwu_start_write_config(void)
+{
+ int retval;
+ unsigned short block_count;
+ struct image_header_data header;
+ struct synaptics_rmi4_data *rmi4_data = fwu->rmi4_data;
+
+ switch (fwu->config_area) {
+ case UI_CONFIG_AREA:
+ block_count = fwu->config_block_count;
+ break;
+ case PERM_CONFIG_AREA:
+ if (!fwu->has_perm_config)
+ return -EINVAL;
+ block_count = fwu->perm_config_block_count;
+ break;
+ case BL_CONFIG_AREA:
+ if (!fwu->has_bl_config)
+ return -EINVAL;
+ block_count = fwu->bl_config_block_count;
+ break;
+ case DISP_CONFIG_AREA:
+ if (!fwu->has_disp_config)
+ return -EINVAL;
+ block_count = fwu->disp_config_block_count;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ if (fwu->ext_data_source)
+ fwu->config_data = fwu->ext_data_source;
+ else
+ return -EINVAL;
+
+ fwu->config_size = fwu->block_size * block_count;
+
+ /* Jump to the config area if given a packrat image */
+ if ((fwu->config_area == UI_CONFIG_AREA) &&
+ (fwu->config_size != fwu->image_size)) {
+ parse_header(&header, fwu->ext_data_source);
+
+ if (header.config_size) {
+ fwu->config_data = fwu->ext_data_source +
+ IMAGE_AREA_OFFSET +
+ header.firmware_size;
+ if (header.contains_bootloader)
+ fwu->config_data += header.bootloader_size;
+ } else {
+ return -EINVAL;
+ }
+ }
+
+ pr_notice("%s: Start of write config process\n", __func__);
+
+ retval = fwu_enter_flash_prog();
+ if (retval < 0)
+ goto exit;
+
+ retval = fwu_do_write_config();
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to write config\n",
+ __func__);
+ }
+
+exit:
+ rmi4_data->reset_device(rmi4_data);
+
+ pr_notice("%s: End of write config process\n", __func__);
+
+ return retval;
+}
+
+static int fwu_do_read_config(void)
+{
+ int retval;
+ unsigned char block_offset[] = {0, 0};
+ unsigned short block_num;
+ unsigned short block_count;
+ unsigned short index = 0;
+ struct synaptics_rmi4_data *rmi4_data = fwu->rmi4_data;
+
+ retval = fwu_enter_flash_prog();
+ if (retval < 0)
+ goto exit;
+
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Entered flash prog mode\n",
+ __func__);
+
+ switch (fwu->config_area) {
+ case UI_CONFIG_AREA:
+ block_count = fwu->config_block_count;
+ break;
+ case PERM_CONFIG_AREA:
+ if (!fwu->has_perm_config) {
+ retval = -EINVAL;
+ goto exit;
+ }
+ block_count = fwu->perm_config_block_count;
+ break;
+ case BL_CONFIG_AREA:
+ if (!fwu->has_bl_config) {
+ retval = -EINVAL;
+ goto exit;
+ }
+ block_count = fwu->bl_config_block_count;
+ break;
+ case DISP_CONFIG_AREA:
+ if (!fwu->has_disp_config) {
+ retval = -EINVAL;
+ goto exit;
+ }
+ block_count = fwu->disp_config_block_count;
+ break;
+ default:
+ retval = -EINVAL;
+ goto exit;
+ }
+
+ fwu->config_size = fwu->block_size * block_count;
+
+ kfree(fwu->read_config_buf);
+ fwu->read_config_buf = kzalloc(fwu->config_size, GFP_KERNEL);
+ if (!fwu->read_config_buf) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to alloc mem for config buffer\n",
+ __func__);
+ retval = -ENOMEM;
+ goto exit;
+ }
+
+ block_offset[1] |= (fwu->config_area << 5);
+
+ retval = synaptics_rmi4_reg_write(rmi4_data,
+ fwu->f34_fd.data_base_addr + BLOCK_NUMBER_OFFSET,
+ block_offset,
+ sizeof(block_offset));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to write to block number registers\n",
+ __func__);
+ goto exit;
+ }
+
+ for (block_num = 0; block_num < block_count; block_num++) {
+ retval = fwu_write_f34_command(CMD_READ_CONFIG_BLOCK);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to write read config command\n",
+ __func__);
+ goto exit;
+ }
+
+ retval = fwu_wait_for_idle(WRITE_WAIT_MS);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to wait for idle status\n",
+ __func__);
+ goto exit;
+ }
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ fwu->f34_fd.data_base_addr + fwu->blk_data_off,
+ &fwu->read_config_buf[index],
+ fwu->block_size);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to read block data (block %d)\n",
+ __func__, block_num);
+ goto exit;
+ }
+
+ index += fwu->block_size;
+ }
+
+exit:
+ rmi4_data->reset_device(rmi4_data);
+
+ return retval;
+}
+
+static int fwu_do_lockdown(void)
+{
+ int retval;
+ struct synaptics_rmi4_data *rmi4_data = fwu->rmi4_data;
+
+ retval = fwu_enter_flash_prog();
+ if (retval < 0)
+ return retval;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ fwu->f34_fd.query_base_addr + fwu->properties_off,
+ &fwu->flash_properties,
+ sizeof(fwu->flash_properties));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to read flash properties\n",
+ __func__);
+ return retval;
+ }
+
+ if ((fwu->flash_properties & UNLOCKED) == 0) {
+ dev_info(rmi4_data->pdev->dev.parent,
+ "%s: Device already locked down\n",
+ __func__);
+ return retval;
+ }
+
+ retval = fwu_write_lockdown();
+ if (retval < 0)
+ return retval;
+
+ pr_notice("%s: Lockdown programmed\n", __func__);
+
+ return retval;
+}
+
+static int fwu_start_reflash(void)
+{
+ int retval = 0;
+ enum flash_area flash_area;
+ struct image_header_data header;
+ struct f01_device_status f01_device_status;
+ const unsigned char *fw_image;
+ const struct firmware *fw_entry = NULL;
+ struct synaptics_rmi4_data *rmi4_data = fwu->rmi4_data;
+ dev_info(rmi4_data->pdev->dev.parent, " %s\n", __func__);
+
+ if (rmi4_data->sensor_sleep) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Sensor sleeping\n",
+ __func__);
+ return -ENODEV;
+ }
+
+ rmi4_data->stay_awake = true;
+
+ pr_notice("%s: Start of reflash process\n", __func__);
+
+ if (fwu->ext_data_source) {
+ fw_image = fwu->ext_data_source;
+ } else {
+ strncpy(fwu->image_name, FW_IMAGE_NAME, MAX_IMAGE_NAME_LEN);
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Requesting firmware image %s\n",
+ __func__, fwu->image_name);
+
+ retval = request_firmware(&fw_entry, fwu->image_name,
+ rmi4_data->pdev->dev.parent);
+ if (retval != 0) {
+ dev_info(rmi4_data->pdev->dev.parent,
+ "%s: Firmware image %s not available\n",
+ __func__, fwu->image_name);
+ retval = -EINVAL;
+ goto exit;
+ }
+
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Firmware image size = %zu\n",
+ __func__, fw_entry->size);
+
+ fw_image = fw_entry->data;
+ }
+
+ parse_header(&header, fw_image);
+
+ if (fwu->bl_version != header.bootloader_version) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Bootloader version mismatch\n",
+ __func__);
+ retval = -EINVAL;
+ goto exit;
+ }
+
+ retval = fwu_read_f01_device_status(&f01_device_status);
+ if (retval < 0)
+ goto exit;
+
+ if (f01_device_status.flash_prog) {
+ dev_info(rmi4_data->pdev->dev.parent,
+ "%s: In flash prog mode\n",
+ __func__);
+ fwu->in_flash_prog_mode = true;
+ } else {
+ fwu->in_flash_prog_mode = false;
+ }
+
+ if (fwu->do_lockdown) {
+ switch (fwu->bl_version) {
+ case V5:
+ case V6:
+ fwu->lockdown_data = fw_image + LOCKDOWN_OFFSET;
+ fwu->lockdown_block_count = LOCKDOWN_BLOCK_COUNT;
+ retval = fwu_do_lockdown();
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to do lockdown\n",
+ __func__);
+ }
+ default:
+ break;
+ }
+ }
+
+ if (header.firmware_size)
+ fwu->firmware_data = fw_image + IMAGE_AREA_OFFSET;
+ else
+ fwu->firmware_data = NULL;
+
+ if (header.config_size)
+ fwu->config_data = fw_image + IMAGE_AREA_OFFSET +
+ header.firmware_size;
+ else
+ fwu->config_data = NULL;
+
+ if (header.contains_bootloader) {
+ if (header.firmware_size)
+ fwu->firmware_data += header.bootloader_size;
+ if (header.config_size)
+ fwu->config_data += header.bootloader_size;
+ }
+
+ if (header.contains_disp_config)
+ fwu->disp_config_data = fw_image + header.disp_config_offset;
+ else
+ fwu->disp_config_data = NULL;
+
+ flash_area = fwu_go_nogo(&header);
+
+ if (flash_area != NONE) {
+ retval = fwu_enter_flash_prog();
+ if (retval < 0)
+ goto exit;
+ }
+
+ switch (flash_area) {
+ case UI_FIRMWARE:
+ retval = fwu_do_reflash();
+ break;
+ case CONFIG_AREA:
+ retval = fwu_do_write_config();
+ break;
+ case NONE:
+ default:
+ goto exit;
+ }
+
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to do reflash\n",
+ __func__);
+ }
+
+exit:
+ rmi4_data->reset_device(rmi4_data);
+
+ if (fw_entry)
+ release_firmware(fw_entry);
+
+ pr_notice("%s: End of reflash process\n", __func__);
+
+ rmi4_data->stay_awake = false;
+
+ return retval;
+}
+
+int synaptics_fw_updater(unsigned char *fw_data)
+{
+ int retval;
+ pr_info("%s\n", __func__);
+
+ if (!fwu)
+ return -ENODEV;
+
+ if (!fwu->initialized)
+ return -ENODEV;
+
+ fwu->ext_data_source = fw_data;
+ fwu->config_area = UI_CONFIG_AREA;
+
+ retval = fwu_start_reflash();
+
+ return retval;
+}
+EXPORT_SYMBOL(synaptics_fw_updater);
+
+int synaptics_config_updater(struct synaptics_dsx_board_data *bdata)
+{
+ int retval, i;
+ struct synaptics_rmi4_config *cfg_table = bdata->config_table;
+ struct synaptics_rmi4_data *rmi4_data;
+ unsigned int device_fw_id;
+ uint16_t tw_pin_mask = bdata->tw_pin_mask;
+ int config_num = bdata->config_num;
+ uint32_t crc_checksum;
+ unsigned char config_id[4];
+ unsigned int device_config_id;
+ unsigned int image_config_id;
+ uint8_t *config_data;
+ int config_size;
+
+ if (!fwu)
+ return -ENODEV;
+
+ if (!fwu->initialized)
+ return -ENODEV;
+
+ rmi4_data = fwu->rmi4_data;
+ device_fw_id = rmi4_data->firmware_id;
+ config_size = fwu->config_block_count * fwu->block_size;
+ dev_info(rmi4_data->pdev->dev.parent, " %s\n", __func__);
+
+ rmi4_data->stay_awake = true;
+ /* Get device config ID */
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ fwu->f34_fd.ctrl_base_addr,
+ config_id,
+ sizeof(config_id));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to read device config ID\n",
+ __func__);
+ goto exit;
+ }
+ device_config_id = be_to_uint(config_id);
+ dev_info(rmi4_data->pdev->dev.parent,
+ "%s: Device config ID = 0x%02x 0x%02x 0x%02x 0x%02x\n",
+ __func__,
+ config_id[0],
+ config_id[1],
+ config_id[2],
+ config_id[3]);
+
+ fwu->config_area = UI_CONFIG_AREA;
+
+ pr_notice("%s: Start of write config process\n", __func__);
+
+ retval = fwu_enter_flash_prog();
+ if (retval < 0)
+ goto exit;
+
+ if (tw_pin_mask) {
+ retval = fwu_get_tw_vendor();
+ if (retval < 0)
+ goto exit;
+ }
+
+ i = 0;
+ while (cfg_table[i].pr_number > device_fw_id) {
+ i++;
+ if (i == config_num) {
+ dev_info(rmi4_data->pdev->dev.parent, " %s: no config data\n", __func__);
+ goto exit;
+ }
+ }
+
+ if (tw_pin_mask) {
+ while ((cfg_table[i].sensor_id > 0)
+ && (cfg_table[i].sensor_id != (rmi4_data->tw_vendor_pin | SENSOR_ID_CHECKING_EN))) {
+ i++;
+ if (i == config_num) {
+ dev_info(rmi4_data->pdev->dev.parent, " %s: no config data\n", __func__);
+ goto exit;
+ }
+ }
+ }
+
+ config_data = cfg_table[i].config;
+ /* Get image config ID */
+ image_config_id = be_to_uint(config_data);
+ dev_info(rmi4_data->pdev->dev.parent,
+ "%s: Image config ID = 0x%02x 0x%02x 0x%02x 0x%02x\n",
+ __func__,
+ config_data[0],
+ config_data[1],
+ config_data[2],
+ config_data[3]);
+
+ crc_checksum = syn_crc((uint16_t *)config_data, (config_size)/2-2);
+ if (crc_checksum == 0xFFFFFFFF) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: crc_checksum error\n", __func__);
+ retval = -EINVAL;
+ goto exit;
+ }
+ memcpy(&config_data[(config_size) - 4], &crc_checksum, 4);
+ dev_info(rmi4_data->pdev->dev.parent, " %s: crc_cksum = %X\n",
+ __func__, crc_checksum);
+ if (image_config_id == device_config_id) {
+ retval = crc_comparison(crc_checksum);
+ if (retval < 0) {
+ dev_info(rmi4_data->pdev->dev.parent, " %s: CRC comparison fail!\n",
+ __func__);
+ goto exit;
+ } else if (retval == 0) {
+ dev_info(rmi4_data->pdev->dev.parent, " %s: No need to update\n",
+ __func__);
+ goto exit;
+ }
+ }
+ fwu->config_data = config_data;
+
+ retval = fwu_do_write_config();
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to write config\n",
+ __func__);
+ }
+
+exit:
+ rmi4_data->reset_device(rmi4_data);
+
+ rmi4_data->stay_awake = false;
+ pr_notice("%s: End of write config process\n", __func__);
+
+ dev_info(rmi4_data->pdev->dev.parent, " %s end\n", __func__);
+ return retval;
+}
+EXPORT_SYMBOL(synaptics_config_updater);
+
+#ifdef DO_STARTUP_FW_UPDATE
+static void fwu_startup_fw_update_work(struct work_struct *work)
+{
+ wake_lock(&fwu->fwu_wake_lock);
+ synaptics_fw_updater(NULL);
+
+ synaptics_config_updater(fwu->rmi4_data->hw_if->board_data);
+ wake_unlock(&fwu->fwu_wake_lock);
+
+ return;
+}
+#endif
+
+static ssize_t fwu_sysfs_show_image(struct file *data_file,
+ struct kobject *kobj, struct bin_attribute *attributes,
+ char *buf, loff_t pos, size_t count)
+{
+ struct synaptics_rmi4_data *rmi4_data = fwu->rmi4_data;
+
+ if (count < fwu->config_size) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Not enough space (%zu bytes) in buffer\n",
+ __func__, count);
+ return -EINVAL;
+ }
+
+ memcpy(buf, fwu->read_config_buf, fwu->config_size);
+
+ return fwu->config_size;
+}
+
+static ssize_t fwu_sysfs_store_image(struct file *data_file,
+ struct kobject *kobj, struct bin_attribute *attributes,
+ char *buf, loff_t pos, size_t count)
+{
+ if (!fwu->ext_data_source)
+ return -EINVAL;
+
+ if (fwu->data_pos + count > fwu->image_size)
+ return -EINVAL;
+
+ memcpy((void *)(&fwu->ext_data_source[fwu->data_pos]),
+ (const void *)buf,
+ count);
+
+ fwu->data_pos += count;
+
+ return count;
+}
+
+static ssize_t fwu_sysfs_do_reflash_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ int retval;
+ unsigned int input;
+ struct synaptics_rmi4_data *rmi4_data = fwu->rmi4_data;
+
+ if (sscanf(buf, "%u", &input) != 1) {
+ retval = -EINVAL;
+ goto exit;
+ }
+
+ if (input & LOCKDOWN) {
+ fwu->do_lockdown = true;
+ input &= ~LOCKDOWN;
+ }
+
+ if ((input != NORMAL) && (input != FORCE)) {
+ retval = -EINVAL;
+ goto exit;
+ }
+
+ if (input == FORCE)
+ fwu->force_update = true;
+
+ retval = synaptics_fw_updater(fwu->ext_data_source);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to do reflash\n",
+ __func__);
+ goto exit;
+ }
+
+ retval = count;
+
+exit:
+ kfree(fwu->ext_data_source);
+ fwu->ext_data_source = NULL;
+ fwu->force_update = FORCE_UPDATE;
+ fwu->do_lockdown = DO_LOCKDOWN;
+ return retval;
+}
+
+static ssize_t fwu_sysfs_write_config_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ int retval;
+ unsigned int input;
+ struct synaptics_rmi4_data *rmi4_data = fwu->rmi4_data;
+
+ if (sscanf(buf, "%u", &input) != 1) {
+ retval = -EINVAL;
+ goto exit;
+ }
+
+ if (input != 1) {
+ retval = -EINVAL;
+ goto exit;
+ }
+
+ retval = fwu_start_write_config();
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to write config\n",
+ __func__);
+ goto exit;
+ }
+
+ retval = count;
+
+exit:
+ kfree(fwu->ext_data_source);
+ fwu->ext_data_source = NULL;
+ return retval;
+}
+
+static ssize_t fwu_sysfs_read_config_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ int retval;
+ unsigned int input;
+ struct synaptics_rmi4_data *rmi4_data = fwu->rmi4_data;
+
+ if (sscanf(buf, "%u", &input) != 1)
+ return -EINVAL;
+
+ if (input != 1)
+ return -EINVAL;
+
+ retval = fwu_do_read_config();
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to read config\n",
+ __func__);
+ return retval;
+ }
+
+ return count;
+}
+
+static ssize_t fwu_sysfs_config_area_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ int retval;
+ unsigned long config_area;
+
+ retval = sstrtoul(buf, 10, &config_area);
+ if (retval)
+ return retval;
+
+ fwu->config_area = config_area;
+
+ return count;
+}
+
+static ssize_t fwu_sysfs_image_name_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ if (count >= MAX_IMAGE_NAME_LEN)
+ return -EINVAL;
+
+ memcpy(fwu->image_name, buf, count);
+
+ return count;
+}
+
+static ssize_t fwu_sysfs_image_size_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ int retval;
+ unsigned long size;
+ struct synaptics_rmi4_data *rmi4_data = fwu->rmi4_data;
+
+ retval = sstrtoul(buf, 10, &size);
+ if (retval)
+ return retval;
+
+ fwu->image_size = size;
+ fwu->data_pos = 0;
+
+ kfree(fwu->ext_data_source);
+ fwu->ext_data_source = kzalloc(fwu->image_size, GFP_KERNEL);
+ if (!fwu->ext_data_source) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to alloc mem for image data\n",
+ __func__);
+ return -ENOMEM;
+ }
+
+ return count;
+}
+
+static ssize_t fwu_sysfs_block_size_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ return snprintf(buf, PAGE_SIZE, "%u\n", fwu->block_size);
+}
+
+static ssize_t fwu_sysfs_firmware_block_count_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ return snprintf(buf, PAGE_SIZE, "%u\n", fwu->fw_block_count);
+}
+
+static ssize_t fwu_sysfs_configuration_block_count_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ return snprintf(buf, PAGE_SIZE, "%u\n", fwu->config_block_count);
+}
+
+static ssize_t fwu_sysfs_perm_config_block_count_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ return snprintf(buf, PAGE_SIZE, "%u\n", fwu->perm_config_block_count);
+}
+
+static ssize_t fwu_sysfs_bl_config_block_count_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ return snprintf(buf, PAGE_SIZE, "%u\n", fwu->bl_config_block_count);
+}
+
+static ssize_t fwu_sysfs_disp_config_block_count_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ return snprintf(buf, PAGE_SIZE, "%u\n", fwu->disp_config_block_count);
+}
+
+static void synaptics_rmi4_fwu_attn(struct synaptics_rmi4_data *rmi4_data,
+ unsigned char intr_mask)
+{
+ if (!fwu)
+ return;
+
+ if (fwu->intr_mask & intr_mask)
+ fwu_read_f34_flash_status();
+
+ return;
+}
+
+static int synaptics_rmi4_fwu_init(struct synaptics_rmi4_data *rmi4_data)
+{
+ int retval;
+ unsigned char attr_count;
+ struct pdt_properties pdt_props;
+
+ dev_info(rmi4_data->pdev->dev.parent, "%s\n", __func__);
+
+ fwu = kzalloc(sizeof(*fwu), GFP_KERNEL);
+ if (!fwu) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to alloc mem for fwu\n",
+ __func__);
+ retval = -ENOMEM;
+ goto exit;
+ }
+
+ fwu->image_name = kzalloc(MAX_IMAGE_NAME_LEN, GFP_KERNEL);
+ if (!fwu->image_name) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to alloc mem for image name\n",
+ __func__);
+ retval = -ENOMEM;
+ goto exit_free_fwu;
+ }
+
+ fwu->rmi4_data = rmi4_data;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ PDT_PROPS,
+ pdt_props.data,
+ sizeof(pdt_props.data));
+ if (retval < 0) {
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Failed to read PDT properties, assuming 0x00\n",
+ __func__);
+ } else if (pdt_props.has_bsr) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Reflash for LTS not currently supported\n",
+ __func__);
+ retval = -ENODEV;
+ goto exit_free_mem;
+ }
+
+ retval = fwu_scan_pdt();
+ if (retval < 0)
+ goto exit_free_mem;
+
+ fwu->productinfo1 = rmi4_data->rmi4_mod_info.product_info[0];
+ fwu->productinfo2 = rmi4_data->rmi4_mod_info.product_info[1];
+ memcpy(fwu->product_id, rmi4_data->rmi4_mod_info.product_id_string,
+ SYNAPTICS_RMI4_PRODUCT_ID_SIZE);
+ fwu->product_id[SYNAPTICS_RMI4_PRODUCT_ID_SIZE] = 0;
+
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: F01 product info: 0x%04x 0x%04x\n",
+ __func__, fwu->productinfo1, fwu->productinfo2);
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: F01 product ID: %s\n",
+ __func__, fwu->product_id);
+
+ retval = fwu_read_f34_queries();
+ if (retval < 0)
+ goto exit_free_mem;
+
+ fwu->force_update = FORCE_UPDATE;
+ fwu->do_lockdown = DO_LOCKDOWN;
+ fwu->initialized = true;
+
+ retval = sysfs_create_bin_file(&rmi4_data->input_dev->dev.kobj,
+ &dev_attr_data);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to create sysfs bin file\n",
+ __func__);
+ goto exit_free_mem;
+ }
+
+ for (attr_count = 0; attr_count < ARRAY_SIZE(attrs); attr_count++) {
+ retval = sysfs_create_file(&rmi4_data->input_dev->dev.kobj,
+ &attrs[attr_count].attr);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to create sysfs attributes\n",
+ __func__);
+ goto exit_remove_attrs;
+ }
+ }
+
+#ifdef DO_STARTUP_FW_UPDATE
+ wake_lock_init(&fwu->fwu_wake_lock, WAKE_LOCK_SUSPEND, "fwu_wake_lock");
+ fwu->fwu_workqueue = create_singlethread_workqueue("fwu_workqueue");
+ INIT_WORK(&fwu->fwu_work, fwu_startup_fw_update_work);
+ queue_work(fwu->fwu_workqueue,
+ &fwu->fwu_work);
+#endif
+
+ return 0;
+
+exit_remove_attrs:
+ for (; attr_count > 0; attr_count--) {
+ sysfs_remove_file(&rmi4_data->input_dev->dev.kobj,
+ &attrs[attr_count - 1].attr);
+ }
+
+ sysfs_remove_bin_file(&rmi4_data->input_dev->dev.kobj, &dev_attr_data);
+
+exit_free_mem:
+ kfree(fwu->image_name);
+
+exit_free_fwu:
+ kfree(fwu);
+ fwu = NULL;
+
+exit:
+ return retval;
+}
+
+static void synaptics_rmi4_fwu_remove(struct synaptics_rmi4_data *rmi4_data)
+{
+ unsigned char attr_count;
+
+ if (!fwu)
+ goto exit;
+
+#ifdef DO_STARTUP_FW_UPDATE
+ cancel_work_sync(&fwu->fwu_work);
+ flush_workqueue(fwu->fwu_workqueue);
+ destroy_workqueue(fwu->fwu_workqueue);
+ wake_lock_destroy(&fwu->fwu_wake_lock);
+#endif
+
+ for (attr_count = 0; attr_count < ARRAY_SIZE(attrs); attr_count++) {
+ sysfs_remove_file(&rmi4_data->input_dev->dev.kobj,
+ &attrs[attr_count].attr);
+ }
+
+ sysfs_remove_bin_file(&rmi4_data->input_dev->dev.kobj, &dev_attr_data);
+
+ kfree(fwu->read_config_buf);
+ kfree(fwu->image_name);
+ kfree(fwu);
+ fwu = NULL;
+
+exit:
+ complete(&fwu_remove_complete);
+
+ return;
+}
+
+static struct synaptics_rmi4_exp_fn fwu_module = {
+ .fn_type = RMI_FW_UPDATER,
+ .init = synaptics_rmi4_fwu_init,
+ .remove = synaptics_rmi4_fwu_remove,
+ .reset = NULL,
+ .reinit = NULL,
+ .early_suspend = NULL,
+ .suspend = NULL,
+ .resume = NULL,
+ .late_resume = NULL,
+ .attn = synaptics_rmi4_fwu_attn,
+};
+
+static int __init rmi4_fw_update_module_init(void)
+{
+ pr_info("%s\n", __func__);
+ synaptics_rmi4_new_function(&fwu_module, true);
+
+ return 0;
+}
+
+static void __exit rmi4_fw_update_module_exit(void)
+{
+ synaptics_rmi4_new_function(&fwu_module, false);
+
+ wait_for_completion(&fwu_remove_complete);
+
+ return;
+}
+
+module_init(rmi4_fw_update_module_init);
+module_exit(rmi4_fw_update_module_exit);
+
+MODULE_AUTHOR("Synaptics, Inc.");
+MODULE_DESCRIPTION("Synaptics DSX FW Update Module");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_i2c.c b/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_i2c.c
new file mode 100644
index 0000000..9010e27
--- /dev/null
+++ b/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_i2c.c
@@ -0,0 +1,429 @@
+/*
+ * Synaptics DSX touchscreen driver
+ *
+ * Copyright (C) 2012 Synaptics Incorporated
+ *
+ * Copyright (C) 2012 Alexandra Chin <alexandra.chin@tw.synaptics.com>
+ * Copyright (C) 2012 Scott Lin <scott.lin@tw.synaptics.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/i2c.h>
+#include <linux/delay.h>
+#include <linux/input.h>
+#include <linux/types.h>
+#include <linux/of_gpio.h>
+#include <linux/platform_device.h>
+#include <linux/input/synaptics_dsx.h>
+#include "synaptics_dsx_core.h"
+
+#define SYN_I2C_RETRY_TIMES 10
+
+#ifdef CONFIG_OF
+static int parse_dt(struct device *dev, struct synaptics_dsx_board_data *bdata)
+{
+ int retval;
+ u32 value;
+ const char *name;
+ struct property *prop;
+ struct device_node *np = dev->of_node;
+
+ bdata->irq_gpio = of_get_named_gpio_flags(np,
+ "synaptics,irq-gpio", 0, NULL);
+
+ retval = of_property_read_u32(np, "synaptics,irq-flags", &value);
+ if (retval < 0)
+ return retval;
+ else
+ bdata->irq_flags = value;
+
+ retval = of_property_read_string(np, "synaptics,pwr-reg-name", &name);
+ if (retval == -EINVAL)
+ bdata->pwr_reg_name = NULL;
+ else if (retval < 0)
+ return retval;
+ else
+ bdata->pwr_reg_name = name;
+
+ retval = of_property_read_string(np, "synaptics,bus-reg-name", &name);
+ if (retval == -EINVAL)
+ bdata->bus_reg_name = NULL;
+ else if (retval < 0)
+ return retval;
+ else
+ bdata->bus_reg_name = name;
+
+ if (of_property_read_bool(np, "synaptics,power-gpio")) {
+ bdata->power_gpio = of_get_named_gpio_flags(np,
+ "synaptics,power-gpio", 0, NULL);
+ retval = of_property_read_u32(np, "synaptics,power-on-state",
+ &value);
+ if (retval < 0)
+ return retval;
+ else
+ bdata->power_on_state = value;
+ } else {
+ bdata->power_gpio = -1;
+ }
+
+ if (of_property_read_bool(np, "synaptics,power-delay-ms")) {
+ retval = of_property_read_u32(np, "synaptics,power-delay-ms",
+ &value);
+ if (retval < 0)
+ return retval;
+ else
+ bdata->power_delay_ms = value;
+ } else {
+ bdata->power_delay_ms = 0;
+ }
+
+ if (of_property_read_bool(np, "synaptics,reset-gpio")) {
+ bdata->reset_gpio = of_get_named_gpio_flags(np,
+ "synaptics,reset-gpio", 0, NULL);
+ retval = of_property_read_u32(np, "synaptics,reset-on-state",
+ &value);
+ if (retval < 0)
+ return retval;
+ else
+ bdata->reset_on_state = value;
+ retval = of_property_read_u32(np, "synaptics,reset-active-ms",
+ &value);
+ if (retval < 0)
+ return retval;
+ else
+ bdata->reset_active_ms = value;
+ } else {
+ bdata->reset_gpio = -1;
+ }
+
+ if (of_property_read_bool(np, "synaptics,reset-delay-ms")) {
+ retval = of_property_read_u32(np, "synaptics,reset-delay-ms",
+ &value);
+ if (retval < 0)
+ return retval;
+ else
+ bdata->reset_delay_ms = value;
+ } else {
+ bdata->reset_delay_ms = 0;
+ }
+
+ bdata->swap_axes = of_property_read_bool(np, "synaptics,swap-axes");
+
+ bdata->x_flip = of_property_read_bool(np, "synaptics,x-flip");
+
+ bdata->y_flip = of_property_read_bool(np, "synaptics,y-flip");
+
+ prop = of_find_property(np, "synaptics,cap-button-map", NULL);
+ if (prop && prop->length) {
+ bdata->cap_button_map->map = devm_kzalloc(dev,
+ prop->length,
+ GFP_KERNEL);
+ if (!bdata->cap_button_map->map)
+ return -ENOMEM;
+ bdata->cap_button_map->nbuttons = prop->length / sizeof(u32);
+ retval = of_property_read_u32_array(np,
+ "synaptics,cap-button-map",
+ bdata->cap_button_map->map,
+ bdata->cap_button_map->nbuttons);
+ if (retval < 0) {
+ bdata->cap_button_map->nbuttons = 0;
+ bdata->cap_button_map->map = NULL;
+ }
+ } else {
+ bdata->cap_button_map->nbuttons = 0;
+ bdata->cap_button_map->map = NULL;
+ }
+
+ return 0;
+}
+#endif
+
+static int synaptics_rmi4_i2c_set_page(struct synaptics_rmi4_data *rmi4_data,
+ unsigned short addr)
+{
+ int retval;
+ unsigned char retry;
+ unsigned char buf[PAGE_SELECT_LEN];
+ unsigned char page;
+ struct i2c_client *i2c = to_i2c_client(rmi4_data->pdev->dev.parent);
+
+ page = ((addr >> 8) & MASK_8BIT);
+ if (page != rmi4_data->current_page) {
+ buf[0] = MASK_8BIT;
+ buf[1] = page;
+ for (retry = 0; retry < SYN_I2C_RETRY_TIMES; retry++) {
+ retval = i2c_master_send(i2c, buf, PAGE_SELECT_LEN);
+ if (retval != PAGE_SELECT_LEN) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: I2C retry %d\n",
+ __func__, retry + 1);
+ msleep(20);
+ } else {
+ rmi4_data->current_page = page;
+ break;
+ }
+ }
+ } else {
+ retval = PAGE_SELECT_LEN;
+ }
+
+ return retval;
+}
+
+static int synaptics_rmi4_i2c_read(struct synaptics_rmi4_data *rmi4_data,
+ unsigned short addr, unsigned char *data, unsigned short length)
+{
+ int retval;
+ unsigned char retry;
+ unsigned char buf;
+ struct i2c_client *i2c = to_i2c_client(rmi4_data->pdev->dev.parent);
+ struct i2c_msg msg[] = {
+ {
+ .addr = i2c->addr,
+ .flags = 0,
+ .len = 1,
+ .buf = &buf,
+ },
+ {
+ .addr = i2c->addr,
+ .flags = I2C_M_RD,
+ .len = length,
+ .buf = data,
+ },
+ };
+
+ buf = addr & MASK_8BIT;
+
+ mutex_lock(&rmi4_data->rmi4_io_ctrl_mutex);
+
+ retval = synaptics_rmi4_i2c_set_page(rmi4_data, addr);
+ if (retval != PAGE_SELECT_LEN) {
+ retval = -EIO;
+ goto exit;
+ }
+
+ for (retry = 0; retry < SYN_I2C_RETRY_TIMES; retry++) {
+ if (i2c_transfer(i2c->adapter, msg, 2) == 2) {
+ retval = length;
+ break;
+ }
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: I2C retry %d\n",
+ __func__, retry + 1);
+ msleep(20);
+ }
+
+ if (retry == SYN_I2C_RETRY_TIMES) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: I2C read over retry limit\n",
+ __func__);
+ retval = -EIO;
+ }
+
+exit:
+ mutex_unlock(&rmi4_data->rmi4_io_ctrl_mutex);
+
+ return retval;
+}
+
+static int synaptics_rmi4_i2c_write(struct synaptics_rmi4_data *rmi4_data,
+ unsigned short addr, unsigned char *data, unsigned short length)
+{
+ int retval;
+ unsigned char retry;
+ unsigned char buf[length + 1];
+ struct i2c_client *i2c = to_i2c_client(rmi4_data->pdev->dev.parent);
+ struct i2c_msg msg[] = {
+ {
+ .addr = i2c->addr,
+ .flags = 0,
+ .len = length + 1,
+ .buf = buf,
+ }
+ };
+
+ mutex_lock(&rmi4_data->rmi4_io_ctrl_mutex);
+
+ retval = synaptics_rmi4_i2c_set_page(rmi4_data, addr);
+ if (retval != PAGE_SELECT_LEN) {
+ retval = -EIO;
+ goto exit;
+ }
+
+ buf[0] = addr & MASK_8BIT;
+ memcpy(&buf[1], &data[0], length);
+
+ for (retry = 0; retry < SYN_I2C_RETRY_TIMES; retry++) {
+ if (i2c_transfer(i2c->adapter, msg, 1) == 1) {
+ retval = length;
+ break;
+ }
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: I2C retry %d\n",
+ __func__, retry + 1);
+ msleep(20);
+ }
+
+ if (retry == SYN_I2C_RETRY_TIMES) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: I2C write over retry limit\n",
+ __func__);
+ retval = -EIO;
+ }
+
+exit:
+ mutex_unlock(&rmi4_data->rmi4_io_ctrl_mutex);
+
+ return retval;
+}
+
+static struct synaptics_dsx_bus_access bus_access = {
+ .type = BUS_I2C,
+ .read = synaptics_rmi4_i2c_read,
+ .write = synaptics_rmi4_i2c_write,
+};
+
+static struct synaptics_dsx_hw_interface hw_if;
+
+static struct platform_device *synaptics_dsx_i2c_device;
+
+static void synaptics_rmi4_i2c_dev_release(struct device *dev)
+{
+ kfree(synaptics_dsx_i2c_device);
+
+ return;
+}
+
+static int synaptics_rmi4_i2c_probe(struct i2c_client *client,
+ const struct i2c_device_id *dev_id)
+{
+ int retval;
+
+ if (!i2c_check_functionality(client->adapter,
+ I2C_FUNC_SMBUS_BYTE_DATA)) {
+ dev_err(&client->dev,
+ "%s: SMBus byte data commands not supported by host\n",
+ __func__);
+ return -EIO;
+ }
+
+ synaptics_dsx_i2c_device = kzalloc(
+ sizeof(struct platform_device),
+ GFP_KERNEL);
+ if (!synaptics_dsx_i2c_device) {
+ dev_err(&client->dev,
+ "%s: Failed to allocate memory for synaptics_dsx_i2c_device\n",
+ __func__);
+ return -ENOMEM;
+ }
+
+#ifdef CONFIG_OF
+ if (client->dev.of_node) {
+ hw_if.board_data = devm_kzalloc(&client->dev,
+ sizeof(struct synaptics_dsx_board_data),
+ GFP_KERNEL);
+ if (!hw_if.board_data) {
+ dev_err(&client->dev,
+ "%s: Failed to allocate memory for board data\n",
+ __func__);
+ kfree(synaptics_dsx_i2c_device);
+ return -ENOMEM;
+ }
+ hw_if.board_data->cap_button_map = devm_kzalloc(&client->dev,
+ sizeof(struct synaptics_dsx_cap_button_map),
+ GFP_KERNEL);
+ if (!hw_if.board_data->cap_button_map) {
+ dev_err(&client->dev,
+ "%s: Failed to allocate memory for button map\n",
+ __func__);
+ kfree(synaptics_dsx_i2c_device);
+ return -ENOMEM;
+ }
+ retval = parse_dt(&client->dev, hw_if.board_data);
+ if (retval < 0) {
+ kfree(synaptics_dsx_i2c_device);
+ return retval;
+ }
+ }
+#else
+ hw_if.board_data = client->dev.platform_data;
+#endif
+
+ hw_if.bus_access = &bus_access;
+
+ synaptics_dsx_i2c_device->name = PLATFORM_DRIVER_NAME;
+ synaptics_dsx_i2c_device->id = 0;
+ synaptics_dsx_i2c_device->num_resources = 0;
+ synaptics_dsx_i2c_device->dev.parent = &client->dev;
+ synaptics_dsx_i2c_device->dev.platform_data = &hw_if;
+ synaptics_dsx_i2c_device->dev.release = synaptics_rmi4_i2c_dev_release;
+
+ retval = platform_device_register(synaptics_dsx_i2c_device);
+ if (retval) {
+ dev_err(&client->dev,
+ "%s: Failed to register platform device\n",
+ __func__);
+ platform_device_put(synaptics_dsx_i2c_device);
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
+static int synaptics_rmi4_i2c_remove(struct i2c_client *client)
+{
+ platform_device_unregister(synaptics_dsx_i2c_device);
+
+ return 0;
+}
+
+static const struct i2c_device_id synaptics_rmi4_id_table[] = {
+ {I2C_DRIVER_NAME, 0},
+ {},
+};
+MODULE_DEVICE_TABLE(i2c, synaptics_rmi4_id_table);
+
+#ifdef CONFIG_OF
+static const struct of_device_id synaptics_rmi4_of_match_table[] = {
+ {
+ .compatible = "synaptics,dsx",
+ },
+ {},
+};
+MODULE_DEVICE_TABLE(of, synaptics_rmi4_of_match_table);
+#else
+#define synaptics_rmi4_of_match_table NULL
+#endif
+
+static struct i2c_driver synaptics_rmi4_i2c_driver = {
+ .driver = {
+ .name = I2C_DRIVER_NAME,
+ .owner = THIS_MODULE,
+ .of_match_table = synaptics_rmi4_of_match_table,
+ },
+ .probe = synaptics_rmi4_i2c_probe,
+ .remove = synaptics_rmi4_i2c_remove,
+ .id_table = synaptics_rmi4_id_table,
+};
+
+int synaptics_rmi4_bus_init(void)
+{
+ return i2c_add_driver(&synaptics_rmi4_i2c_driver);
+}
+EXPORT_SYMBOL(synaptics_rmi4_bus_init);
+
+void synaptics_rmi4_bus_exit(void)
+{
+ i2c_del_driver(&synaptics_rmi4_i2c_driver);
+
+ return;
+}
+EXPORT_SYMBOL(synaptics_rmi4_bus_exit);
+
+MODULE_AUTHOR("Synaptics, Inc.");
+MODULE_DESCRIPTION("Synaptics DSX I2C Bus Support Module");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_proximity.c b/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_proximity.c
new file mode 100644
index 0000000..1244254
--- /dev/null
+++ b/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_proximity.c
@@ -0,0 +1,673 @@
+/*
+ * Synaptics DSX touchscreen driver
+ *
+ * Copyright (C) 2012 Synaptics Incorporated
+ *
+ * Copyright (C) 2012 Alexandra Chin <alexandra.chin@tw.synaptics.com>
+ * Copyright (C) 2012 Scott Lin <scott.lin@tw.synaptics.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/delay.h>
+#include <linux/input.h>
+#include <linux/platform_device.h>
+#include <linux/input/synaptics_dsx.h>
+#include "synaptics_dsx_core.h"
+
+#define PROX_PHYS_NAME "synaptics_dsx/proximity"
+
+#define HOVER_Z_MAX (255)
+
+#define HOVERING_FINGER_EN (1 << 4)
+
+static ssize_t synaptics_rmi4_hover_finger_en_show(struct device *dev,
+ struct device_attribute *attr, char *buf);
+
+static ssize_t synaptics_rmi4_hover_finger_en_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count);
+
+static struct device_attribute attrs[] = {
+ __ATTR(hover_finger_en, (S_IRUGO | S_IWUSR),
+ synaptics_rmi4_hover_finger_en_show,
+ synaptics_rmi4_hover_finger_en_store),
+};
+
+struct synaptics_rmi4_f12_query_5 {
+ union {
+ struct {
+ unsigned char size_of_query6;
+ struct {
+ unsigned char ctrl0_is_present:1;
+ unsigned char ctrl1_is_present:1;
+ unsigned char ctrl2_is_present:1;
+ unsigned char ctrl3_is_present:1;
+ unsigned char ctrl4_is_present:1;
+ unsigned char ctrl5_is_present:1;
+ unsigned char ctrl6_is_present:1;
+ unsigned char ctrl7_is_present:1;
+ } __packed;
+ struct {
+ unsigned char ctrl8_is_present:1;
+ unsigned char ctrl9_is_present:1;
+ unsigned char ctrl10_is_present:1;
+ unsigned char ctrl11_is_present:1;
+ unsigned char ctrl12_is_present:1;
+ unsigned char ctrl13_is_present:1;
+ unsigned char ctrl14_is_present:1;
+ unsigned char ctrl15_is_present:1;
+ } __packed;
+ struct {
+ unsigned char ctrl16_is_present:1;
+ unsigned char ctrl17_is_present:1;
+ unsigned char ctrl18_is_present:1;
+ unsigned char ctrl19_is_present:1;
+ unsigned char ctrl20_is_present:1;
+ unsigned char ctrl21_is_present:1;
+ unsigned char ctrl22_is_present:1;
+ unsigned char ctrl23_is_present:1;
+ } __packed;
+ };
+ unsigned char data[4];
+ };
+};
+
+struct synaptics_rmi4_f12_query_8 {
+ union {
+ struct {
+ unsigned char size_of_query9;
+ struct {
+ unsigned char data0_is_present:1;
+ unsigned char data1_is_present:1;
+ unsigned char data2_is_present:1;
+ unsigned char data3_is_present:1;
+ unsigned char data4_is_present:1;
+ unsigned char data5_is_present:1;
+ unsigned char data6_is_present:1;
+ unsigned char data7_is_present:1;
+ } __packed;
+ };
+ unsigned char data[2];
+ };
+};
+
+struct prox_finger_data {
+ union {
+ struct {
+ unsigned char object_type_and_status;
+ unsigned char x_lsb;
+ unsigned char x_msb;
+ unsigned char y_lsb;
+ unsigned char y_msb;
+ unsigned char z;
+ } __packed;
+ unsigned char proximity_data[6];
+ };
+};
+
+struct synaptics_rmi4_prox_handle {
+ bool hover_finger_present;
+ bool hover_finger_en;
+ unsigned char intr_mask;
+ unsigned short query_base_addr;
+ unsigned short control_base_addr;
+ unsigned short data_base_addr;
+ unsigned short command_base_addr;
+ unsigned short hover_finger_en_addr;
+ unsigned short hover_finger_data_addr;
+ struct input_dev *prox_dev;
+ struct prox_finger_data *finger_data;
+ struct synaptics_rmi4_data *rmi4_data;
+};
+
+static struct synaptics_rmi4_prox_handle *prox;
+
+DECLARE_COMPLETION(prox_remove_complete);
+
+static void prox_hover_finger_lift(void)
+{
+ input_report_key(prox->prox_dev, BTN_TOUCH, 0);
+ input_report_key(prox->prox_dev, BTN_TOOL_FINGER, 0);
+ input_sync(prox->prox_dev);
+ prox->hover_finger_present = false;
+
+ return;
+}
+
+static void prox_hover_finger_report(void)
+{
+ int retval;
+ int x;
+ int y;
+ int z;
+ struct prox_finger_data *data;
+ struct synaptics_rmi4_data *rmi4_data = prox->rmi4_data;
+
+ data = prox->finger_data;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ prox->hover_finger_data_addr,
+ data->proximity_data,
+ sizeof(data->proximity_data));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to read hovering finger data\n",
+ __func__);
+ return;
+ }
+
+ if (data->object_type_and_status != F12_HOVERING_FINGER_STATUS) {
+ if (prox->hover_finger_present)
+ prox_hover_finger_lift();
+
+ return;
+ }
+
+ x = (data->x_msb << 8) | (data->x_lsb);
+ y = (data->y_msb << 8) | (data->y_lsb);
+ z = HOVER_Z_MAX - data->z;
+
+ input_report_key(prox->prox_dev, BTN_TOUCH, 0);
+ input_report_key(prox->prox_dev, BTN_TOOL_FINGER, 1);
+ input_report_abs(prox->prox_dev, ABS_X, x);
+ input_report_abs(prox->prox_dev, ABS_Y, y);
+ input_report_abs(prox->prox_dev, ABS_DISTANCE, z);
+
+ input_sync(prox->prox_dev);
+
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: x = %d y = %d z = %d\n",
+ __func__, x, y, z);
+
+ prox->hover_finger_present = true;
+
+ return;
+}
+
+static int prox_set_hover_finger_en(void)
+{
+ int retval;
+ unsigned char object_report_enable;
+ struct synaptics_rmi4_data *rmi4_data = prox->rmi4_data;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ prox->hover_finger_en_addr,
+ &object_report_enable,
+ sizeof(object_report_enable));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to read from object report enable register\n",
+ __func__);
+ return retval;
+ }
+
+ if (prox->hover_finger_en)
+ object_report_enable |= HOVERING_FINGER_EN;
+ else
+ object_report_enable &= ~HOVERING_FINGER_EN;
+
+ retval = synaptics_rmi4_reg_write(rmi4_data,
+ prox->hover_finger_en_addr,
+ &object_report_enable,
+ sizeof(object_report_enable));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to write to object report enable register\n",
+ __func__);
+ return retval;
+ }
+
+ return 0;
+}
+
+static void prox_set_params(void)
+{
+ input_set_abs_params(prox->prox_dev, ABS_X, 0,
+ prox->rmi4_data->sensor_max_x, 0, 0);
+ input_set_abs_params(prox->prox_dev, ABS_Y, 0,
+ prox->rmi4_data->sensor_max_y, 0, 0);
+ input_set_abs_params(prox->prox_dev, ABS_DISTANCE, 0,
+ HOVER_Z_MAX, 0, 0);
+
+ return;
+}
+
+static int prox_reg_init(void)
+{
+ int retval;
+ unsigned char ctrl_23_offset;
+ unsigned char data_1_offset;
+ struct synaptics_rmi4_f12_query_5 query_5;
+ struct synaptics_rmi4_f12_query_8 query_8;
+ struct synaptics_rmi4_data *rmi4_data = prox->rmi4_data;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ prox->query_base_addr + 5,
+ query_5.data,
+ sizeof(query_5.data));
+ if (retval < 0)
+ return retval;
+
+ ctrl_23_offset = query_5.ctrl0_is_present +
+ query_5.ctrl1_is_present +
+ query_5.ctrl2_is_present +
+ query_5.ctrl3_is_present +
+ query_5.ctrl4_is_present +
+ query_5.ctrl5_is_present +
+ query_5.ctrl6_is_present +
+ query_5.ctrl7_is_present +
+ query_5.ctrl8_is_present +
+ query_5.ctrl9_is_present +
+ query_5.ctrl10_is_present +
+ query_5.ctrl11_is_present +
+ query_5.ctrl12_is_present +
+ query_5.ctrl13_is_present +
+ query_5.ctrl14_is_present +
+ query_5.ctrl15_is_present +
+ query_5.ctrl16_is_present +
+ query_5.ctrl17_is_present +
+ query_5.ctrl18_is_present +
+ query_5.ctrl19_is_present +
+ query_5.ctrl20_is_present +
+ query_5.ctrl21_is_present +
+ query_5.ctrl22_is_present;
+
+ prox->hover_finger_en_addr = prox->control_base_addr + ctrl_23_offset;
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ prox->query_base_addr + 8,
+ query_8.data,
+ sizeof(query_8.data));
+ if (retval < 0)
+ return retval;
+
+ data_1_offset = query_8.data0_is_present;
+ prox->hover_finger_data_addr = prox->data_base_addr + data_1_offset;
+
+ return retval;
+}
+
+static int prox_scan_pdt(void)
+{
+ int retval;
+ unsigned char ii;
+ unsigned char page;
+ unsigned char intr_count = 0;
+ unsigned char intr_off;
+ unsigned char intr_src;
+ unsigned short addr;
+ struct synaptics_rmi4_fn_desc fd;
+ struct synaptics_rmi4_data *rmi4_data = prox->rmi4_data;
+
+ for (page = 0; page < PAGES_TO_SERVICE; page++) {
+ for (addr = PDT_START; addr > PDT_END; addr -= PDT_ENTRY_SIZE) {
+ addr |= (page << 8);
+
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ addr,
+ (unsigned char *)&fd,
+ sizeof(fd));
+ if (retval < 0)
+ return retval;
+
+ addr &= ~(MASK_8BIT << 8);
+
+ if (fd.fn_number) {
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Found F%02x\n",
+ __func__, fd.fn_number);
+ switch (fd.fn_number) {
+ case SYNAPTICS_RMI4_F12:
+ goto f12_found;
+ }
+ } else {
+ break;
+ }
+
+ intr_count += (fd.intr_src_count & MASK_3BIT);
+ }
+ }
+
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to find F12\n",
+ __func__);
+ return -EINVAL;
+
+f12_found:
+ prox->query_base_addr = fd.query_base_addr | (page << 8);
+ prox->control_base_addr = fd.ctrl_base_addr | (page << 8);
+ prox->data_base_addr = fd.data_base_addr | (page << 8);
+ prox->command_base_addr = fd.cmd_base_addr | (page << 8);
+
+ retval = prox_reg_init();
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to initialize proximity registers\n",
+ __func__);
+ return retval;
+ }
+
+ prox->intr_mask = 0;
+ intr_src = fd.intr_src_count;
+ intr_off = intr_count % 8;
+ for (ii = intr_off;
+ ii < ((intr_src & MASK_3BIT) +
+ intr_off);
+ ii++) {
+ prox->intr_mask |= 1 << ii;
+ }
+
+ rmi4_data->intr_mask[0] |= prox->intr_mask;
+
+ addr = rmi4_data->f01_ctrl_base_addr + 1;
+
+ retval = synaptics_rmi4_reg_write(rmi4_data,
+ addr,
+ &(rmi4_data->intr_mask[0]),
+ sizeof(rmi4_data->intr_mask[0]));
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to set interrupt enable bit\n",
+ __func__);
+ return retval;
+ }
+
+ return 0;
+}
+
+static ssize_t synaptics_rmi4_hover_finger_en_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ if (!prox)
+ return -ENODEV;
+
+ return snprintf(buf, PAGE_SIZE, "%u\n",
+ prox->hover_finger_en);
+}
+
+static ssize_t synaptics_rmi4_hover_finger_en_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ int retval;
+ unsigned int input;
+ struct synaptics_rmi4_data *rmi4_data = prox->rmi4_data;
+
+ if (!prox)
+ return -ENODEV;
+
+ if (sscanf(buf, "%x", &input) != 1)
+ return -EINVAL;
+
+ if (input == 1)
+ prox->hover_finger_en = true;
+ else if (input == 0)
+ prox->hover_finger_en = false;
+ else
+ return -EINVAL;
+
+ retval = prox_set_hover_finger_en();
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to change hovering finger enable setting\n",
+ __func__);
+ return retval;
+ }
+
+ return count;
+}
+
+int synaptics_rmi4_prox_hover_finger_en(bool enable)
+{
+ int retval;
+
+ if (!prox)
+ return -ENODEV;
+
+ prox->hover_finger_en = enable;
+
+ retval = prox_set_hover_finger_en();
+ if (retval < 0)
+ return retval;
+
+ return 0;
+}
+EXPORT_SYMBOL(synaptics_rmi4_prox_hover_finger_en);
+
+static void synaptics_rmi4_prox_attn(struct synaptics_rmi4_data *rmi4_data,
+ unsigned char intr_mask)
+{
+ if (!prox)
+ return;
+
+ if (prox->intr_mask & intr_mask)
+ prox_hover_finger_report();
+
+ return;
+}
+
+static int synaptics_rmi4_prox_init(struct synaptics_rmi4_data *rmi4_data)
+{
+ int retval;
+ unsigned char attr_count;
+
+ prox = kzalloc(sizeof(*prox), GFP_KERNEL);
+ if (!prox) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to alloc mem for prox\n",
+ __func__);
+ retval = -ENOMEM;
+ goto exit;
+ }
+
+ prox->finger_data = kzalloc(sizeof(*(prox->finger_data)), GFP_KERNEL);
+ if (!prox->finger_data) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to alloc mem for finger_data\n",
+ __func__);
+ retval = -ENOMEM;
+ goto exit_free_prox;
+ }
+
+ prox->rmi4_data = rmi4_data;
+
+ retval = prox_scan_pdt();
+ if (retval < 0)
+ goto exit_free_finger_data;
+
+ prox->hover_finger_en = true;
+
+ retval = prox_set_hover_finger_en();
+ if (retval < 0)
+ goto exit_free_finger_data;
+
+ prox->prox_dev = input_allocate_device();
+ if (prox->prox_dev == NULL) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to allocate proximity device\n",
+ __func__);
+ retval = -ENOMEM;
+ goto exit_free_finger_data;
+ }
+
+ prox->prox_dev->name = PLATFORM_DRIVER_NAME;
+ prox->prox_dev->phys = PROX_PHYS_NAME;
+ prox->prox_dev->id.product = SYNAPTICS_DSX_DRIVER_PRODUCT;
+ prox->prox_dev->id.version = SYNAPTICS_DSX_DRIVER_VERSION;
+ prox->prox_dev->dev.parent = rmi4_data->pdev->dev.parent;
+ input_set_drvdata(prox->prox_dev, rmi4_data);
+
+ set_bit(EV_KEY, prox->prox_dev->evbit);
+ set_bit(EV_ABS, prox->prox_dev->evbit);
+ set_bit(BTN_TOUCH, prox->prox_dev->keybit);
+ set_bit(BTN_TOOL_FINGER, prox->prox_dev->keybit);
+#ifdef INPUT_PROP_DIRECT
+ set_bit(INPUT_PROP_DIRECT, prox->prox_dev->propbit);
+#endif
+
+ prox_set_params();
+
+ retval = input_register_device(prox->prox_dev);
+ if (retval) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to register proximity device\n",
+ __func__);
+ goto exit_free_input_device;
+ }
+
+ for (attr_count = 0; attr_count < ARRAY_SIZE(attrs); attr_count++) {
+ retval = sysfs_create_file(&rmi4_data->input_dev->dev.kobj,
+ &attrs[attr_count].attr);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to create sysfs attributes\n",
+ __func__);
+ goto exit_free_sysfs;
+ }
+ }
+
+ return 0;
+
+exit_free_sysfs:
+ for (; attr_count > 0; attr_count--) {
+ sysfs_remove_file(&rmi4_data->input_dev->dev.kobj,
+ &attrs[attr_count - 1].attr);
+ }
+
+ input_unregister_device(prox->prox_dev);
+ prox->prox_dev = NULL;
+
+exit_free_input_device:
+ if (prox->prox_dev)
+ input_free_device(prox->prox_dev);
+
+exit_free_finger_data:
+ kfree(prox->finger_data);
+
+exit_free_prox:
+ kfree(prox);
+ prox = NULL;
+
+exit:
+ return retval;
+}
+
+static void synaptics_rmi4_prox_remove(struct synaptics_rmi4_data *rmi4_data)
+{
+ unsigned char attr_count;
+
+ if (!prox)
+ goto exit;
+
+ for (attr_count = 0; attr_count < ARRAY_SIZE(attrs); attr_count++) {
+ sysfs_remove_file(&rmi4_data->input_dev->dev.kobj,
+ &attrs[attr_count].attr);
+ }
+
+ input_unregister_device(prox->prox_dev);
+ kfree(prox->finger_data);
+ kfree(prox);
+ prox = NULL;
+
+exit:
+ complete(&prox_remove_complete);
+
+ return;
+}
+
+static void synaptics_rmi4_prox_reset(struct synaptics_rmi4_data *rmi4_data)
+{
+ if (!prox) {
+ synaptics_rmi4_prox_init(rmi4_data);
+ return;
+ }
+
+ prox_hover_finger_lift();
+
+ prox_scan_pdt();
+
+ prox_set_hover_finger_en();
+
+ prox_set_params();
+
+ return;
+}
+
+static void synaptics_rmi4_prox_reinit(struct synaptics_rmi4_data *rmi4_data)
+{
+ if (!prox)
+ return;
+
+ prox_hover_finger_lift();
+
+ prox_set_hover_finger_en();
+
+ return;
+}
+
+static void synaptics_rmi4_prox_e_suspend(struct synaptics_rmi4_data *rmi4_data)
+{
+ if (!prox)
+ return;
+
+ prox_hover_finger_lift();
+
+ return;
+}
+
+static void synaptics_rmi4_prox_suspend(struct synaptics_rmi4_data *rmi4_data)
+{
+ if (!prox)
+ return;
+
+ prox_hover_finger_lift();
+
+ return;
+}
+
+static struct synaptics_rmi4_exp_fn proximity_module = {
+ .fn_type = RMI_PROXIMITY,
+ .init = synaptics_rmi4_prox_init,
+ .remove = synaptics_rmi4_prox_remove,
+ .reset = synaptics_rmi4_prox_reset,
+ .reinit = synaptics_rmi4_prox_reinit,
+ .early_suspend = synaptics_rmi4_prox_e_suspend,
+ .suspend = synaptics_rmi4_prox_suspend,
+ .resume = NULL,
+ .late_resume = NULL,
+ .attn = synaptics_rmi4_prox_attn,
+};
+
+static int __init rmi4_proximity_module_init(void)
+{
+ synaptics_rmi4_new_function(&proximity_module, true);
+
+ return 0;
+}
+
+static void __exit rmi4_proximity_module_exit(void)
+{
+ synaptics_rmi4_new_function(&proximity_module, false);
+
+ wait_for_completion(&prox_remove_complete);
+
+ return;
+}
+
+module_init(rmi4_proximity_module_init);
+module_exit(rmi4_proximity_module_exit);
+
+MODULE_AUTHOR("Synaptics, Inc.");
+MODULE_DESCRIPTION("Synaptics DSX Proximity Module");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_rmi_dev.c b/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_rmi_dev.c
new file mode 100644
index 0000000..76decd0
--- /dev/null
+++ b/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_rmi_dev.c
@@ -0,0 +1,887 @@
+/*
+ * Synaptics DSX touchscreen driver
+ *
+ * Copyright (C) 2012 Synaptics Incorporated
+ *
+ * Copyright (C) 2012 Alexandra Chin <alexandra.chin@tw.synaptics.com>
+ * Copyright (C) 2012 Scott Lin <scott.lin@tw.synaptics.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/delay.h>
+#include <linux/input.h>
+#include <linux/signal.h>
+#include <linux/sched.h>
+#include <linux/gpio.h>
+#include <linux/uaccess.h>
+#include <linux/cdev.h>
+#include <linux/platform_device.h>
+#include <linux/input/synaptics_dsx.h>
+#include "synaptics_dsx_core.h"
+
+#define CHAR_DEVICE_NAME "rmi"
+#define DEVICE_CLASS_NAME "rmidev"
+#define SYSFS_FOLDER_NAME "rmidev"
+#define DEV_NUMBER 1
+#define REG_ADDR_LIMIT 0xFFFF
+
+static ssize_t rmidev_sysfs_data_show(struct file *data_file,
+ struct kobject *kobj, struct bin_attribute *attributes,
+ char *buf, loff_t pos, size_t count);
+
+static ssize_t rmidev_sysfs_data_store(struct file *data_file,
+ struct kobject *kobj, struct bin_attribute *attributes,
+ char *buf, loff_t pos, size_t count);
+
+static ssize_t rmidev_sysfs_open_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count);
+
+static ssize_t rmidev_sysfs_release_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count);
+
+static ssize_t rmidev_sysfs_attn_state_show(struct device *dev,
+ struct device_attribute *attr, char *buf);
+
+static ssize_t rmidev_sysfs_pid_show(struct device *dev,
+ struct device_attribute *attr, char *buf);
+
+static ssize_t rmidev_sysfs_pid_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count);
+
+static ssize_t rmidev_sysfs_term_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count);
+
+static ssize_t rmidev_sysfs_intr_mask_show(struct device *dev,
+ struct device_attribute *attr, char *buf);
+
+static ssize_t rmidev_sysfs_intr_mask_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count);
+
+struct rmidev_handle {
+ dev_t dev_no;
+ pid_t pid;
+ unsigned char intr_mask;
+ struct device dev;
+ struct synaptics_rmi4_data *rmi4_data;
+ struct kobject *sysfs_dir;
+ struct siginfo interrupt_signal;
+ struct siginfo terminate_signal;
+ struct task_struct *task;
+ void *data;
+ bool irq_enabled;
+};
+
+struct rmidev_data {
+ int ref_count;
+ struct cdev main_dev;
+ struct class *device_class;
+ struct mutex file_mutex;
+ struct rmidev_handle *rmi_dev;
+};
+
+static struct bin_attribute attr_data = {
+ .attr = {
+ .name = "data",
+ .mode = (S_IRUGO | S_IWUGO),
+ },
+ .size = 0,
+ .read = rmidev_sysfs_data_show,
+ .write = rmidev_sysfs_data_store,
+};
+
+static struct device_attribute attrs[] = {
+ __ATTR(open, S_IWUGO,
+ synaptics_rmi4_show_error,
+ rmidev_sysfs_open_store),
+ __ATTR(release, S_IWUGO,
+ synaptics_rmi4_show_error,
+ rmidev_sysfs_release_store),
+ __ATTR(attn_state, S_IRUGO,
+ rmidev_sysfs_attn_state_show,
+ synaptics_rmi4_store_error),
+ __ATTR(pid, S_IRUGO | S_IWUGO,
+ rmidev_sysfs_pid_show,
+ rmidev_sysfs_pid_store),
+ __ATTR(term, S_IWUGO,
+ synaptics_rmi4_show_error,
+ rmidev_sysfs_term_store),
+ __ATTR(intr_mask, S_IRUGO | S_IWUGO,
+ rmidev_sysfs_intr_mask_show,
+ rmidev_sysfs_intr_mask_store),
+};
+
+static int rmidev_major_num;
+
+static struct class *rmidev_device_class;
+
+static struct rmidev_handle *rmidev;
+
+DECLARE_COMPLETION(rmidev_remove_complete);
+
+static irqreturn_t rmidev_sysfs_irq(int irq, void *data)
+{
+ struct synaptics_rmi4_data *rmi4_data = data;
+
+ sysfs_notify(&rmi4_data->input_dev->dev.kobj,
+ SYSFS_FOLDER_NAME, "attn_state");
+
+ return IRQ_HANDLED;
+}
+
+static int rmidev_sysfs_irq_enable(struct synaptics_rmi4_data *rmi4_data,
+ bool enable)
+{
+ int retval = 0;
+ unsigned char intr_status[MAX_INTR_REGISTERS];
+ unsigned long irq_flags = IRQF_TRIGGER_FALLING | IRQF_TRIGGER_RISING;
+
+ if (enable) {
+ if (rmidev->irq_enabled)
+ return retval;
+
+ /* Clear interrupts first */
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ rmi4_data->f01_data_base_addr + 1,
+ intr_status,
+ rmi4_data->num_of_intr_regs);
+ if (retval < 0)
+ return retval;
+
+ retval = request_threaded_irq(rmi4_data->irq, NULL,
+ rmidev_sysfs_irq, irq_flags,
+ PLATFORM_DRIVER_NAME, rmi4_data);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to create irq thread\n",
+ __func__);
+ return retval;
+ }
+
+ rmidev->irq_enabled = true;
+ } else {
+ if (rmidev->irq_enabled) {
+ disable_irq(rmi4_data->irq);
+ free_irq(rmi4_data->irq, rmi4_data);
+ rmidev->irq_enabled = false;
+ }
+ }
+
+ return retval;
+}
+
+static ssize_t rmidev_sysfs_data_show(struct file *data_file,
+ struct kobject *kobj, struct bin_attribute *attributes,
+ char *buf, loff_t pos, size_t count)
+{
+ int retval;
+ unsigned int length = (unsigned int)count;
+ unsigned short address = (unsigned short)pos;
+ struct synaptics_rmi4_data *rmi4_data = rmidev->rmi4_data;
+
+ if (length > (REG_ADDR_LIMIT - address)) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Out of register map limit\n",
+ __func__);
+ return -EINVAL;
+ }
+
+ if (length) {
+ retval = synaptics_rmi4_reg_read(rmi4_data,
+ address,
+ (unsigned char *)buf,
+ length);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to read data\n",
+ __func__);
+ return retval;
+ }
+ } else {
+ return -EINVAL;
+ }
+
+ return length;
+}
+
+static ssize_t rmidev_sysfs_data_store(struct file *data_file,
+ struct kobject *kobj, struct bin_attribute *attributes,
+ char *buf, loff_t pos, size_t count)
+{
+ int retval;
+ unsigned int length = (unsigned int)count;
+ unsigned short address = (unsigned short)pos;
+ struct synaptics_rmi4_data *rmi4_data = rmidev->rmi4_data;
+
+ if (length > (REG_ADDR_LIMIT - address)) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Out of register map limit\n",
+ __func__);
+ return -EINVAL;
+ }
+
+ if (length) {
+ retval = synaptics_rmi4_reg_write(rmi4_data,
+ address,
+ (unsigned char *)buf,
+ length);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to write data\n",
+ __func__);
+ return retval;
+ }
+ } else {
+ return -EINVAL;
+ }
+
+ return length;
+}
+
+static ssize_t rmidev_sysfs_open_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ unsigned int input;
+ struct synaptics_rmi4_data *rmi4_data = rmidev->rmi4_data;
+
+ if (sscanf(buf, "%u", &input) != 1)
+ return -EINVAL;
+
+ if (input != 1)
+ return -EINVAL;
+
+ rmi4_data->irq_enable(rmi4_data, false);
+ rmidev_sysfs_irq_enable(rmi4_data, true);
+
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Attention interrupt disabled\n",
+ __func__);
+
+ return count;
+}
+
+static ssize_t rmidev_sysfs_release_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ unsigned int input;
+ struct synaptics_rmi4_data *rmi4_data = rmidev->rmi4_data;
+
+ if (sscanf(buf, "%u", &input) != 1)
+ return -EINVAL;
+
+ if (input != 1)
+ return -EINVAL;
+
+ rmidev_sysfs_irq_enable(rmi4_data, false);
+ rmi4_data->irq_enable(rmi4_data, true);
+
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Attention interrupt enabled\n",
+ __func__);
+
+ rmi4_data->reset_device(rmi4_data);
+
+ return count;
+}
+
+static ssize_t rmidev_sysfs_attn_state_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ int attn_state;
+ struct synaptics_rmi4_data *rmi4_data = rmidev->rmi4_data;
+ const struct synaptics_dsx_board_data *bdata =
+ rmi4_data->hw_if->board_data;
+
+ attn_state = gpio_get_value(bdata->irq_gpio);
+
+ return snprintf(buf, PAGE_SIZE, "%u\n", attn_state);
+}
+
+static ssize_t rmidev_sysfs_pid_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ return snprintf(buf, PAGE_SIZE, "%u\n", rmidev->pid);
+}
+
+static ssize_t rmidev_sysfs_pid_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ unsigned int input;
+ struct synaptics_rmi4_data *rmi4_data = rmidev->rmi4_data;
+
+ if (sscanf(buf, "%u", &input) != 1)
+ return -EINVAL;
+
+ rmidev->pid = input;
+
+ if (rmidev->pid) {
+ rmidev->task = pid_task(find_vpid(rmidev->pid), PIDTYPE_PID);
+ if (!rmidev->task) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to locate PID of data logging tool\n",
+ __func__);
+ return -EINVAL;
+ }
+ }
+
+ return count;
+}
+
+static ssize_t rmidev_sysfs_term_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ unsigned int input;
+
+ if (sscanf(buf, "%u", &input) != 1)
+ return -EINVAL;
+
+ if (input != 1)
+ return -EINVAL;
+
+ if (rmidev->pid)
+ send_sig_info(SIGTERM, &rmidev->terminate_signal, rmidev->task);
+
+ return count;
+}
+
+static ssize_t rmidev_sysfs_intr_mask_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ return snprintf(buf, PAGE_SIZE, "0x%02x\n", rmidev->intr_mask);
+}
+
+static ssize_t rmidev_sysfs_intr_mask_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ unsigned int input;
+
+ if (sscanf(buf, "%u", &input) != 1)
+ return -EINVAL;
+
+ rmidev->intr_mask = (unsigned char)input;
+
+ return count;
+}
+
+/*
+ * rmidev_llseek - set the register address to access on the RMI device
+ *
+ * @filp: pointer to file structure
+ * @off:
+ * if whence == SEEK_SET,
+ * off: 16-bit RMI register address
+ * if whence == SEEK_CUR,
+ * off: offset from current position
+ * if whence == SEEK_END,
+ * off: offset from end position (0xFFFF)
+ * @whence: SEEK_SET, SEEK_CUR, or SEEK_END
+ */
+static loff_t rmidev_llseek(struct file *filp, loff_t off, int whence)
+{
+ loff_t newpos;
+ struct rmidev_data *dev_data = filp->private_data;
+ struct synaptics_rmi4_data *rmi4_data = rmidev->rmi4_data;
+
+ if (IS_ERR(dev_data)) {
+ pr_err("%s: Pointer of char device data is invalid\n", __func__);
+ return -EBADF;
+ }
+
+ mutex_lock(&(dev_data->file_mutex));
+
+ switch (whence) {
+ case SEEK_SET:
+ newpos = off;
+ break;
+ case SEEK_CUR:
+ newpos = filp->f_pos + off;
+ break;
+ case SEEK_END:
+ newpos = REG_ADDR_LIMIT + off;
+ break;
+ default:
+ newpos = -EINVAL;
+ goto clean_up;
+ }
+
+ if (newpos < 0 || newpos > REG_ADDR_LIMIT) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: New position 0x%04x is invalid\n",
+ __func__, (unsigned int)newpos);
+ newpos = -EINVAL;
+ goto clean_up;
+ }
+
+ filp->f_pos = newpos;
+
+clean_up:
+ mutex_unlock(&(dev_data->file_mutex));
+
+ return newpos;
+}
+
+/*
+ * rmidev_read: read register data from RMI device
+ *
+ * @filp: pointer to file structure
+ * @buf: pointer to user space buffer
+ * @count: number of bytes to read
+ * @f_pos: starting RMI register address
+ */
+static ssize_t rmidev_read(struct file *filp, char __user *buf,
+ size_t count, loff_t *f_pos)
+{
+ ssize_t retval;
+ unsigned char *tmpbuf;
+ struct rmidev_data *dev_data = filp->private_data;
+
+ if (IS_ERR(dev_data)) {
+ pr_err("%s: Pointer of char device data is invalid\n", __func__);
+ return -EBADF;
+ }
+
+ if (count == 0)
+ return 0;
+
+ if (count > (REG_ADDR_LIMIT - *f_pos))
+ count = REG_ADDR_LIMIT - *f_pos;
+
+ /* Allocate on the heap; a variable-length array sized by the
+ * user-supplied count could overflow the kernel stack.
+ */
+ tmpbuf = kzalloc(count + 1, GFP_KERNEL);
+ if (!tmpbuf)
+ return -ENOMEM;
+
+ mutex_lock(&(dev_data->file_mutex));
+
+ retval = synaptics_rmi4_reg_read(rmidev->rmi4_data,
+ *f_pos,
+ tmpbuf,
+ count);
+ if (retval < 0)
+ goto clean_up;
+
+ if (copy_to_user(buf, tmpbuf, count))
+ retval = -EFAULT;
+ else
+ *f_pos += retval;
+
+clean_up:
+ mutex_unlock(&(dev_data->file_mutex));
+ kfree(tmpbuf);
+
+ return retval;
+}
+
+/*
+ * rmidev_write: write register data to RMI device
+ *
+ * @filp: pointer to file structure
+ * @buf: pointer to user space buffer
+ * @count: number of bytes to write
+ * @f_pos: starting RMI register address
+ */
+static ssize_t rmidev_write(struct file *filp, const char __user *buf,
+ size_t count, loff_t *f_pos)
+{
+ ssize_t retval;
+ unsigned char *tmpbuf;
+ struct rmidev_data *dev_data = filp->private_data;
+
+ if (IS_ERR(dev_data)) {
+ pr_err("%s: Pointer of char device data is invalid\n", __func__);
+ return -EBADF;
+ }
+
+ if (count == 0)
+ return 0;
+
+ if (count > (REG_ADDR_LIMIT - *f_pos))
+ count = REG_ADDR_LIMIT - *f_pos;
+
+ /* Heap allocation avoids a user-controlled variable-length array
+ * on the kernel stack.
+ */
+ tmpbuf = kzalloc(count + 1, GFP_KERNEL);
+ if (!tmpbuf)
+ return -ENOMEM;
+
+ if (copy_from_user(tmpbuf, buf, count)) {
+ kfree(tmpbuf);
+ return -EFAULT;
+ }
+
+ mutex_lock(&(dev_data->file_mutex));
+
+ retval = synaptics_rmi4_reg_write(rmidev->rmi4_data,
+ *f_pos,
+ tmpbuf,
+ count);
+ if (retval >= 0)
+ *f_pos += retval;
+
+ mutex_unlock(&(dev_data->file_mutex));
+
+ kfree(tmpbuf);
+
+ return retval;
+}
+
+static int rmidev_open(struct inode *inp, struct file *filp)
+{
+ int retval = 0;
+ struct synaptics_rmi4_data *rmi4_data = rmidev->rmi4_data;
+ struct rmidev_data *dev_data =
+ container_of(inp->i_cdev, struct rmidev_data, main_dev);
+
+ if (!dev_data)
+ return -EACCES;
+
+ filp->private_data = dev_data;
+
+ mutex_lock(&(dev_data->file_mutex));
+
+ rmi4_data->irq_enable(rmi4_data, false);
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Attention interrupt disabled\n",
+ __func__);
+
+ if (dev_data->ref_count < 1)
+ dev_data->ref_count++;
+ else
+ retval = -EACCES;
+
+ mutex_unlock(&(dev_data->file_mutex));
+
+ return retval;
+}
+
+static int rmidev_release(struct inode *inp, struct file *filp)
+{
+ struct synaptics_rmi4_data *rmi4_data = rmidev->rmi4_data;
+ struct rmidev_data *dev_data =
+ container_of(inp->i_cdev, struct rmidev_data, main_dev);
+
+ if (!dev_data)
+ return -EACCES;
+
+ mutex_lock(&(dev_data->file_mutex));
+
+ dev_data->ref_count--;
+ if (dev_data->ref_count < 0)
+ dev_data->ref_count = 0;
+
+ rmi4_data->irq_enable(rmi4_data, true);
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Attention interrupt enabled\n",
+ __func__);
+
+ mutex_unlock(&(dev_data->file_mutex));
+
+ rmi4_data->reset_device(rmi4_data);
+
+ return 0;
+}
+
+static const struct file_operations rmidev_fops = {
+ .owner = THIS_MODULE,
+ .llseek = rmidev_llseek,
+ .read = rmidev_read,
+ .write = rmidev_write,
+ .open = rmidev_open,
+ .release = rmidev_release,
+};
+
+static void rmidev_device_cleanup(struct rmidev_data *dev_data)
+{
+ dev_t devno;
+ struct synaptics_rmi4_data *rmi4_data = rmidev->rmi4_data;
+
+ if (dev_data) {
+ devno = dev_data->main_dev.dev;
+
+ if (dev_data->device_class)
+ device_destroy(dev_data->device_class, devno);
+
+ cdev_del(&dev_data->main_dev);
+
+ unregister_chrdev_region(devno, 1);
+
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: rmidev device removed\n",
+ __func__);
+ }
+
+ return;
+}
+
+static char *rmi_char_devnode(struct device *dev, umode_t *mode)
+{
+ if (!mode)
+ return NULL;
+
+ *mode = (S_IRUSR | S_IWUSR | S_IRGRP | S_IWGRP | S_IROTH | S_IWOTH);
+
+ return kasprintf(GFP_KERNEL, "rmi/%s", dev_name(dev));
+}
+
+static int rmidev_create_device_class(void)
+{
+ rmidev_device_class = class_create(THIS_MODULE, DEVICE_CLASS_NAME);
+
+ if (IS_ERR(rmidev_device_class)) {
+ pr_err("%s: Failed to create /dev/%s\n",
+ __func__, CHAR_DEVICE_NAME);
+ return -ENODEV;
+ }
+
+ rmidev_device_class->devnode = rmi_char_devnode;
+
+ return 0;
+}
+
+static void rmidev_attn(struct synaptics_rmi4_data *rmi4_data,
+ unsigned char intr_mask)
+{
+ if (!rmidev)
+ return;
+
+ if (rmidev->pid && (rmidev->intr_mask & intr_mask))
+ send_sig_info(SIGIO, &rmidev->interrupt_signal, rmidev->task);
+
+ return;
+}
+
+static int rmidev_init_device(struct synaptics_rmi4_data *rmi4_data)
+{
+ int retval;
+ dev_t dev_no;
+ unsigned char attr_count;
+ struct rmidev_data *dev_data;
+ struct device *device_ptr;
+ const struct synaptics_dsx_board_data *bdata =
+ rmi4_data->hw_if->board_data;
+
+ rmidev = kzalloc(sizeof(*rmidev), GFP_KERNEL);
+ if (!rmidev) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to alloc mem for rmidev\n",
+ __func__);
+ retval = -ENOMEM;
+ goto err_rmidev;
+ }
+
+ rmidev->rmi4_data = rmi4_data;
+
+ memset(&rmidev->interrupt_signal, 0, sizeof(rmidev->interrupt_signal));
+ rmidev->interrupt_signal.si_signo = SIGIO;
+ rmidev->interrupt_signal.si_code = SI_USER;
+
+ memset(&rmidev->terminate_signal, 0, sizeof(rmidev->terminate_signal));
+ rmidev->terminate_signal.si_signo = SIGTERM;
+ rmidev->terminate_signal.si_code = SI_USER;
+
+ retval = rmidev_create_device_class();
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to create device class\n",
+ __func__);
+ goto err_device_class;
+ }
+
+ if (rmidev_major_num) {
+ dev_no = MKDEV(rmidev_major_num, DEV_NUMBER);
+ retval = register_chrdev_region(dev_no, 1, CHAR_DEVICE_NAME);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to register char device region\n",
+ __func__);
+ goto err_device_region;
+ }
+ } else {
+ retval = alloc_chrdev_region(&dev_no, 0, 1, CHAR_DEVICE_NAME);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to allocate char device region\n",
+ __func__);
+ goto err_device_region;
+ }
+
+ rmidev_major_num = MAJOR(dev_no);
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Major number of rmidev = %d\n",
+ __func__, rmidev_major_num);
+ }
+
+ dev_data = kzalloc(sizeof(*dev_data), GFP_KERNEL);
+ if (!dev_data) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to alloc mem for dev_data\n",
+ __func__);
+ retval = -ENOMEM;
+ goto err_dev_data;
+ }
+
+ mutex_init(&dev_data->file_mutex);
+ dev_data->rmi_dev = rmidev;
+ rmidev->data = dev_data;
+ /* Record dev_no so rmidev_remove_device() unregisters the right region */
+ rmidev->dev_no = dev_no;
+
+ cdev_init(&dev_data->main_dev, &rmidev_fops);
+
+ retval = cdev_add(&dev_data->main_dev, dev_no, 1);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to add rmi char device\n",
+ __func__);
+ goto err_char_device;
+ }
+
+ dev_set_name(&rmidev->dev, "rmidev%d", MINOR(dev_no));
+ dev_data->device_class = rmidev_device_class;
+
+ device_ptr = device_create(dev_data->device_class, NULL, dev_no,
+ NULL, CHAR_DEVICE_NAME"%d", MINOR(dev_no));
+ if (IS_ERR(device_ptr)) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to create rmi char device\n",
+ __func__);
+ retval = -ENODEV;
+ goto err_char_device;
+ }
+
+ retval = gpio_export(bdata->irq_gpio, false);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to export attention gpio\n",
+ __func__);
+ } else {
+ retval = gpio_export_link(&(rmi4_data->input_dev->dev),
+ "attn", bdata->irq_gpio);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to create gpio symlink\n",
+ __func__);
+ } else {
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Exported attention gpio %d\n",
+ __func__, bdata->irq_gpio);
+ }
+ }
+
+ rmidev->sysfs_dir = kobject_create_and_add(SYSFS_FOLDER_NAME,
+ &rmi4_data->input_dev->dev.kobj);
+ if (!rmidev->sysfs_dir) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to create sysfs directory\n",
+ __func__);
+ retval = -ENODEV;
+ goto err_sysfs_dir;
+ }
+
+ retval = sysfs_create_bin_file(rmidev->sysfs_dir,
+ &attr_data);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to create sysfs bin file\n",
+ __func__);
+ goto err_sysfs_bin;
+ }
+
+ for (attr_count = 0; attr_count < ARRAY_SIZE(attrs); attr_count++) {
+ retval = sysfs_create_file(rmidev->sysfs_dir,
+ &attrs[attr_count].attr);
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to create sysfs attributes\n",
+ __func__);
+ retval = -ENODEV;
+ goto err_sysfs_attrs;
+ }
+ }
+
+ return 0;
+
+err_sysfs_attrs:
+ /* attr_count is unsigned char; count down without wrapping below zero */
+ while (attr_count > 0) {
+ attr_count--;
+ sysfs_remove_file(rmidev->sysfs_dir, &attrs[attr_count].attr);
+ }
+
+ sysfs_remove_bin_file(rmidev->sysfs_dir, &attr_data);
+
+err_sysfs_bin:
+ kobject_put(rmidev->sysfs_dir);
+
+err_sysfs_dir:
+err_char_device:
+ rmidev_device_cleanup(dev_data);
+ kfree(dev_data);
+
+err_dev_data:
+ unregister_chrdev_region(dev_no, 1);
+
+err_device_region:
+ class_destroy(rmidev_device_class);
+
+err_device_class:
+ kfree(rmidev);
+ rmidev = NULL;
+
+err_rmidev:
+ return retval;
+}
+
+static void rmidev_remove_device(struct synaptics_rmi4_data *rmi4_data)
+{
+ unsigned char attr_count;
+ struct rmidev_data *dev_data;
+
+ if (!rmidev)
+ goto exit;
+
+ for (attr_count = 0; attr_count < ARRAY_SIZE(attrs); attr_count++)
+ sysfs_remove_file(rmidev->sysfs_dir, &attrs[attr_count].attr);
+
+ sysfs_remove_bin_file(rmidev->sysfs_dir, &attr_data);
+
+ kobject_put(rmidev->sysfs_dir);
+
+ dev_data = rmidev->data;
+ if (dev_data) {
+ rmidev_device_cleanup(dev_data);
+ kfree(dev_data);
+ }
+
+ unregister_chrdev_region(rmidev->dev_no, 1);
+
+ class_destroy(rmidev_device_class);
+
+ kfree(rmidev);
+ rmidev = NULL;
+
+exit:
+ complete(&rmidev_remove_complete);
+
+ return;
+}
+
+static struct synaptics_rmi4_exp_fn rmidev_module = {
+ .fn_type = RMI_DEV,
+ .init = rmidev_init_device,
+ .remove = rmidev_remove_device,
+ .reset = NULL,
+ .reinit = NULL,
+ .early_suspend = NULL,
+ .suspend = NULL,
+ .resume = NULL,
+ .late_resume = NULL,
+ .attn = rmidev_attn,
+};
+
+static int __init rmidev_module_init(void)
+{
+ synaptics_rmi4_new_function(&rmidev_module, true);
+
+ return 0;
+}
+
+static void __exit rmidev_module_exit(void)
+{
+ synaptics_rmi4_new_function(&rmidev_module, false);
+
+ wait_for_completion(&rmidev_remove_complete);
+
+ return;
+}
+
+module_init(rmidev_module_init);
+module_exit(rmidev_module_exit);
+
+MODULE_AUTHOR("Synaptics, Inc.");
+MODULE_DESCRIPTION("Synaptics DSX RMI Dev Module");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_rmi_hid_i2c.c b/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_rmi_hid_i2c.c
new file mode 100644
index 0000000..54d5157
--- /dev/null
+++ b/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_rmi_hid_i2c.c
@@ -0,0 +1,712 @@
+/*
+ * Synaptics DSX touchscreen driver
+ *
+ * Copyright (C) 2012 Synaptics Incorporated
+ *
+ * Copyright (C) 2012 Alexandra Chin <alexandra.chin@tw.synaptics.com>
+ * Copyright (C) 2012 Scott Lin <scott.lin@tw.synaptics.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/i2c.h>
+#include <linux/delay.h>
+#include <linux/input.h>
+#include <linux/gpio.h>
+#include <linux/types.h>
+#include <linux/platform_device.h>
+#include <linux/input/synaptics_dsx.h>
+#include "synaptics_dsx_core.h"
+
+#define SYN_I2C_RETRY_TIMES 10
+
+#define REPORT_ID_GET_BLOB 0x07
+#define REPORT_ID_WRITE 0x09
+#define REPORT_ID_READ_ADDRESS 0x0a
+#define REPORT_ID_READ_DATA 0x0b
+#define REPORT_ID_SET_RMI_MODE 0x0f
+
+#define PREFIX_USAGE_PAGE_1BYTE 0x05
+#define PREFIX_USAGE_PAGE_2BYTES 0x06
+#define PREFIX_USAGE 0x09
+#define PREFIX_REPORT_ID 0x85
+#define PREFIX_REPORT_COUNT_1BYTE 0x95
+#define PREFIX_REPORT_COUNT_2BYTES 0x96
+
+#define USAGE_GET_BLOB 0xc5
+#define USAGE_WRITE 0x02
+#define USAGE_READ_ADDRESS 0x03
+#define USAGE_READ_DATA 0x04
+#define USAGE_SET_MODE 0x06
+
+#define FEATURE_REPORT_TYPE 0x03
+
+#define VENDOR_DEFINED_PAGE 0xff00
+
+#define BLOB_REPORT_SIZE 256
+
+#define RESET_COMMAND 0x01
+#define GET_REPORT_COMMAND 0x02
+#define SET_REPORT_COMMAND 0x03
+#define SET_POWER_COMMAND 0x08
+
+#define FINGER_MODE 0x00
+#define RMI_MODE 0x02
+
+struct hid_report_info {
+ unsigned char get_blob_id;
+ unsigned char write_id;
+ unsigned char read_addr_id;
+ unsigned char read_data_id;
+ unsigned char set_mode_id;
+ unsigned int blob_size;
+};
+
+static struct hid_report_info hid_report;
+
+struct hid_device_descriptor {
+ unsigned short device_descriptor_length;
+ unsigned short format_version;
+ unsigned short report_descriptor_length;
+ unsigned short report_descriptor_index;
+ unsigned short input_register_index;
+ unsigned short input_report_max_length;
+ unsigned short output_register_index;
+ unsigned short output_report_max_length;
+ unsigned short command_register_index;
+ unsigned short data_register_index;
+ unsigned short vendor_id;
+ unsigned short product_id;
+ unsigned short version_id;
+ unsigned int reserved;
+};
+
+static struct hid_device_descriptor hid_dd;
+
+struct i2c_rw_buffer {
+ unsigned char *read;
+ unsigned char *write;
+ unsigned short read_size;
+ unsigned short write_size;
+};
+
+static struct i2c_rw_buffer buffer;
+
+static int do_i2c_transfer(struct i2c_client *client, struct i2c_msg *msg)
+{
+ unsigned char retry;
+
+ for (retry = 0; retry < SYN_I2C_RETRY_TIMES; retry++) {
+ if (i2c_transfer(client->adapter, msg, 1) == 1)
+ break;
+ dev_err(&client->dev,
+ "%s: I2C retry %d\n",
+ __func__, retry + 1);
+ msleep(20);
+ }
+
+ if (retry == SYN_I2C_RETRY_TIMES) {
+ dev_err(&client->dev,
+ "%s: I2C transfer over retry limit\n",
+ __func__);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+static int check_buffer(unsigned char **buffer, unsigned short *buffer_size,
+ unsigned short length)
+{
+ if (*buffer_size < length) {
+ if (*buffer_size)
+ kfree(*buffer);
+ *buffer = kzalloc(length, GFP_KERNEL);
+ if (!(*buffer))
+ return -ENOMEM;
+ *buffer_size = length;
+ }
+
+ return 0;
+}
+
+static int generic_read(struct i2c_client *client, unsigned short length)
+{
+ int retval;
+ struct i2c_msg msg[] = {
+ {
+ .addr = client->addr,
+ .flags = I2C_M_RD,
+ .len = length,
+ }
+ };
+
+ retval = check_buffer(&buffer.read, &buffer.read_size, length);
+ if (retval < 0)
+ return retval;
+
+ msg[0].buf = buffer.read;
+
+ retval = do_i2c_transfer(client, msg);
+
+ return retval;
+}
+
+static int generic_write(struct i2c_client *client, unsigned short length)
+{
+ int retval;
+ struct i2c_msg msg[] = {
+ {
+ .addr = client->addr,
+ .flags = 0,
+ .len = length,
+ .buf = buffer.write,
+ }
+ };
+
+ retval = do_i2c_transfer(client, msg);
+
+ return retval;
+}
+
+static void traverse_report_descriptor(unsigned int *index)
+{
+ unsigned char size;
+ unsigned char *buf = buffer.read;
+
+ size = buf[*index] & MASK_2BIT;
+ switch (size) {
+ case 0: /* 0 bytes */
+ *index += 1;
+ break;
+ case 1: /* 1 byte */
+ *index += 2;
+ break;
+ case 2: /* 2 bytes */
+ *index += 3;
+ break;
+ case 3: /* 4 bytes */
+ *index += 5;
+ break;
+ default:
+ break;
+ }
+
+ return;
+}
+
+static void find_blob_size(unsigned int index)
+{
+ unsigned int ii = index;
+ unsigned char *buf = buffer.read;
+
+ while (ii < hid_dd.report_descriptor_length) {
+ if (buf[ii] == PREFIX_REPORT_COUNT_1BYTE) {
+ hid_report.blob_size = buf[ii + 1];
+ return;
+ } else if (buf[ii] == PREFIX_REPORT_COUNT_2BYTES) {
+ hid_report.blob_size = buf[ii + 1] | (buf[ii + 2] << 8);
+ return;
+ }
+ traverse_report_descriptor(&ii);
+ }
+
+ return;
+}
+
+static void find_reports(unsigned int index)
+{
+ unsigned int ii = index;
+ unsigned char *buf = buffer.read;
+ static unsigned int report_id_index;
+ static unsigned char report_id;
+ static unsigned short usage_page;
+
+ if (buf[ii] == PREFIX_REPORT_ID) {
+ report_id = buf[ii + 1];
+ report_id_index = ii;
+ return;
+ }
+
+ if (buf[ii] == PREFIX_USAGE_PAGE_1BYTE) {
+ usage_page = buf[ii + 1];
+ return;
+ } else if (buf[ii] == PREFIX_USAGE_PAGE_2BYTES) {
+ usage_page = buf[ii + 1] | (buf[ii + 2] << 8);
+ return;
+ }
+
+ if ((usage_page == VENDOR_DEFINED_PAGE) && (buf[ii] == PREFIX_USAGE)) {
+ switch (buf[ii + 1]) {
+ case USAGE_GET_BLOB:
+ hid_report.get_blob_id = report_id;
+ find_blob_size(report_id_index);
+ break;
+ case USAGE_WRITE:
+ hid_report.write_id = report_id;
+ break;
+ case USAGE_READ_ADDRESS:
+ hid_report.read_addr_id = report_id;
+ break;
+ case USAGE_READ_DATA:
+ hid_report.read_data_id = report_id;
+ break;
+ case USAGE_SET_MODE:
+ hid_report.set_mode_id = report_id;
+ break;
+ default:
+ break;
+ }
+ }
+
+ return;
+}
+
+static int parse_report_descriptor(struct synaptics_rmi4_data *rmi4_data)
+{
+ int retval;
+ unsigned int ii = 0;
+ unsigned char *buf;
+ struct i2c_client *i2c = to_i2c_client(rmi4_data->pdev->dev.parent);
+
+ retval = check_buffer(&buffer.write, &buffer.write_size, 2);
+ if (retval < 0)
+ return retval;
+
+ buffer.write[0] = hid_dd.report_descriptor_index & MASK_8BIT;
+ buffer.write[1] = hid_dd.report_descriptor_index >> 8;
+ retval = generic_write(i2c, 2);
+ if (retval < 0)
+ return retval;
+ retval = generic_read(i2c, hid_dd.report_descriptor_length);
+ if (retval < 0)
+ return retval;
+
+ buf = buffer.read;
+
+ hid_report.get_blob_id = REPORT_ID_GET_BLOB;
+ hid_report.write_id = REPORT_ID_WRITE;
+ hid_report.read_addr_id = REPORT_ID_READ_ADDRESS;
+ hid_report.read_data_id = REPORT_ID_READ_DATA;
+ hid_report.set_mode_id = REPORT_ID_SET_RMI_MODE;
+ hid_report.blob_size = BLOB_REPORT_SIZE;
+
+ while (ii < hid_dd.report_descriptor_length) {
+ find_reports(ii);
+ traverse_report_descriptor(&ii);
+ }
+
+ return 0;
+}
+
+static int switch_to_rmi(struct synaptics_rmi4_data *rmi4_data)
+{
+ int retval;
+ struct i2c_client *i2c = to_i2c_client(rmi4_data->pdev->dev.parent);
+
+ mutex_lock(&rmi4_data->rmi4_io_ctrl_mutex);
+
+ check_buffer(&buffer.write, &buffer.write_size, 11);
+
+ /* set rmi mode */
+ buffer.write[0] = hid_dd.command_register_index & MASK_8BIT;
+ buffer.write[1] = hid_dd.command_register_index >> 8;
+ buffer.write[2] = (FEATURE_REPORT_TYPE << 4) | hid_report.set_mode_id;
+ buffer.write[3] = SET_REPORT_COMMAND;
+ buffer.write[4] = hid_report.set_mode_id;
+ buffer.write[5] = hid_dd.data_register_index & MASK_8BIT;
+ buffer.write[6] = hid_dd.data_register_index >> 8;
+ buffer.write[7] = 0x04;
+ buffer.write[8] = 0x00;
+ buffer.write[9] = hid_report.set_mode_id;
+ buffer.write[10] = RMI_MODE;
+
+ retval = generic_write(i2c, 11);
+
+ mutex_unlock(&rmi4_data->rmi4_io_ctrl_mutex);
+
+ return retval;
+}
+
+static int check_report_mode(struct synaptics_rmi4_data *rmi4_data)
+{
+ int retval;
+ unsigned short report_size;
+ struct i2c_client *i2c = to_i2c_client(rmi4_data->pdev->dev.parent);
+
+ mutex_lock(&rmi4_data->rmi4_io_ctrl_mutex);
+
+ check_buffer(&buffer.write, &buffer.write_size, 7);
+
+ buffer.write[0] = hid_dd.command_register_index & MASK_8BIT;
+ buffer.write[1] = hid_dd.command_register_index >> 8;
+ buffer.write[2] = (FEATURE_REPORT_TYPE << 4) | hid_report.set_mode_id;
+ buffer.write[3] = GET_REPORT_COMMAND;
+ buffer.write[4] = hid_report.set_mode_id;
+ buffer.write[5] = hid_dd.data_register_index & MASK_8BIT;
+ buffer.write[6] = hid_dd.data_register_index >> 8;
+
+ retval = generic_write(i2c, 7);
+ if (retval < 0)
+ goto exit;
+
+ retval = generic_read(i2c, 2);
+ if (retval < 0)
+ goto exit;
+
+ report_size = (buffer.read[1] << 8) | buffer.read[0];
+
+ retval = generic_write(i2c, 7);
+ if (retval < 0)
+ goto exit;
+
+ retval = generic_read(i2c, report_size);
+ if (retval < 0)
+ goto exit;
+
+ retval = buffer.read[3];
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: Report mode = %d\n",
+ __func__, retval);
+
+exit:
+ mutex_unlock(&rmi4_data->rmi4_io_ctrl_mutex);
+
+ return retval;
+}
+
+static int hid_i2c_init(struct synaptics_rmi4_data *rmi4_data)
+{
+ int retval;
+ struct i2c_client *i2c = to_i2c_client(rmi4_data->pdev->dev.parent);
+ const struct synaptics_dsx_board_data *bdata =
+ rmi4_data->hw_if->board_data;
+
+ mutex_lock(&rmi4_data->rmi4_io_ctrl_mutex);
+
+ check_buffer(&buffer.write, &buffer.write_size, 6);
+
+ /* read device descriptor */
+ buffer.write[0] = bdata->device_descriptor_addr & MASK_8BIT;
+ buffer.write[1] = bdata->device_descriptor_addr >> 8;
+ retval = generic_write(i2c, 2);
+ if (retval < 0)
+ goto exit;
+ retval = generic_read(i2c, sizeof(hid_dd));
+ if (retval < 0)
+ goto exit;
+ memcpy((unsigned char *)&hid_dd, buffer.read, sizeof(hid_dd));
+
+ retval = parse_report_descriptor(rmi4_data);
+ if (retval < 0)
+ goto exit;
+
+ /* set power */
+ buffer.write[0] = hid_dd.command_register_index & MASK_8BIT;
+ buffer.write[1] = hid_dd.command_register_index >> 8;
+ buffer.write[2] = 0x00;
+ buffer.write[3] = SET_POWER_COMMAND;
+ retval = generic_write(i2c, 4);
+ if (retval < 0)
+ goto exit;
+
+ /* reset */
+ buffer.write[0] = hid_dd.command_register_index & MASK_8BIT;
+ buffer.write[1] = hid_dd.command_register_index >> 8;
+ buffer.write[2] = 0x00;
+ buffer.write[3] = RESET_COMMAND;
+ retval = generic_write(i2c, 4);
+ if (retval < 0)
+ goto exit;
+
+ while (gpio_get_value(bdata->irq_gpio))
+ msleep(20);
+
+ retval = generic_read(i2c, hid_dd.input_report_max_length);
+ if (retval < 0)
+ goto exit;
+
+ /* get blob */
+ buffer.write[0] = hid_dd.command_register_index & MASK_8BIT;
+ buffer.write[1] = hid_dd.command_register_index >> 8;
+ buffer.write[2] = (FEATURE_REPORT_TYPE << 4) | hid_report.get_blob_id;
+ buffer.write[3] = 0x02;
+ buffer.write[4] = hid_dd.data_register_index & MASK_8BIT;
+ buffer.write[5] = hid_dd.data_register_index >> 8;
+
+ retval = generic_write(i2c, 6);
+ if (retval < 0)
+ goto exit;
+
+ msleep(20);
+
+ retval = generic_read(i2c, hid_report.blob_size + 3);
+ if (retval < 0)
+ goto exit;
+
+exit:
+ mutex_unlock(&rmi4_data->rmi4_io_ctrl_mutex);
+
+ if (retval < 0) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to initialize HID/I2C interface\n",
+ __func__);
+ return retval;
+ }
+
+ retval = switch_to_rmi(rmi4_data);
+
+ return retval;
+}
+
+static int synaptics_rmi4_i2c_read(struct synaptics_rmi4_data *rmi4_data,
+ unsigned short addr, unsigned char *data, unsigned short length)
+{
+ int retval;
+ unsigned char retry;
+ unsigned char recover = 1;
+ unsigned short report_length;
+ struct i2c_client *i2c = to_i2c_client(rmi4_data->pdev->dev.parent);
+ struct i2c_msg msg[] = {
+ {
+ .addr = i2c->addr,
+ .flags = 0,
+ .len = hid_dd.output_report_max_length + 2,
+ },
+ {
+ .addr = i2c->addr,
+ .flags = I2C_M_RD,
+ .len = length + 4,
+ },
+ };
+
+recover:
+ mutex_lock(&rmi4_data->rmi4_io_ctrl_mutex);
+
+ check_buffer(&buffer.write, &buffer.write_size,
+ hid_dd.output_report_max_length + 2);
+ msg[0].buf = buffer.write;
+ buffer.write[0] = hid_dd.output_register_index & MASK_8BIT;
+ buffer.write[1] = hid_dd.output_register_index >> 8;
+ buffer.write[2] = hid_dd.output_report_max_length & MASK_8BIT;
+ buffer.write[3] = hid_dd.output_report_max_length >> 8;
+ buffer.write[4] = hid_report.read_addr_id;
+ buffer.write[5] = 0x00;
+ buffer.write[6] = addr & MASK_8BIT;
+ buffer.write[7] = addr >> 8;
+ buffer.write[8] = length & MASK_8BIT;
+ buffer.write[9] = length >> 8;
+
+ check_buffer(&buffer.read, &buffer.read_size, length + 4);
+ msg[1].buf = buffer.read;
+
+ retval = do_i2c_transfer(i2c, &msg[0]);
+ if (retval != 0)
+ goto exit;
+
+ retry = 0;
+ do {
+ retval = do_i2c_transfer(i2c, &msg[1]);
+ if (retval == 0)
+ retval = length;
+ else
+ goto exit;
+
+ report_length = (buffer.read[1] << 8) | buffer.read[0];
+ if (report_length == hid_dd.input_report_max_length) {
+ memcpy(&data[0], &buffer.read[4], length);
+ goto exit;
+ }
+
+ msleep(20);
+ retry++;
+ } while (retry < SYN_I2C_RETRY_TIMES);
+
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to receive read report\n",
+ __func__);
+ retval = -EIO;
+
+exit:
+ mutex_unlock(&rmi4_data->rmi4_io_ctrl_mutex);
+
+ if ((retval != length) && (recover == 1)) {
+ recover = 0;
+ if (check_report_mode(rmi4_data) != RMI_MODE) {
+ retval = hid_i2c_init(rmi4_data);
+ if (retval == 0)
+ goto recover;
+ }
+ }
+
+ return retval;
+}
+
+static int synaptics_rmi4_i2c_write(struct synaptics_rmi4_data *rmi4_data,
+ unsigned short addr, unsigned char *data, unsigned short length)
+{
+ int retval;
+ unsigned char recover = 1;
+ unsigned short msg_length;
+ struct i2c_client *i2c = to_i2c_client(rmi4_data->pdev->dev.parent);
+ struct i2c_msg msg[] = {
+ {
+ .addr = i2c->addr,
+ .flags = 0,
+ }
+ };
+
+ if ((length + 10) < (hid_dd.output_report_max_length + 2))
+ msg_length = hid_dd.output_report_max_length + 2;
+ else
+ msg_length = length + 10;
+
+recover:
+ mutex_lock(&rmi4_data->rmi4_io_ctrl_mutex);
+
+ check_buffer(&buffer.write, &buffer.write_size, msg_length);
+ msg[0].len = msg_length;
+ msg[0].buf = buffer.write;
+ buffer.write[0] = hid_dd.output_register_index & MASK_8BIT;
+ buffer.write[1] = hid_dd.output_register_index >> 8;
+ buffer.write[2] = hid_dd.output_report_max_length & MASK_8BIT;
+ buffer.write[3] = hid_dd.output_report_max_length >> 8;
+ buffer.write[4] = hid_report.write_id;
+ buffer.write[5] = 0x00;
+ buffer.write[6] = addr & MASK_8BIT;
+ buffer.write[7] = addr >> 8;
+ buffer.write[8] = length & MASK_8BIT;
+ buffer.write[9] = length >> 8;
+ memcpy(&buffer.write[10], &data[0], length);
+
+ retval = do_i2c_transfer(i2c, msg);
+ if (retval == 0)
+ retval = length;
+
+ mutex_unlock(&rmi4_data->rmi4_io_ctrl_mutex);
+
+ if ((retval != length) && (recover == 1)) {
+ recover = 0;
+ if (check_report_mode(rmi4_data) != RMI_MODE) {
+ retval = hid_i2c_init(rmi4_data);
+ if (retval == 0)
+ goto recover;
+ }
+ }
+
+ return retval;
+}
+
+static struct synaptics_dsx_bus_access bus_access = {
+ .type = BUS_I2C,
+ .read = synaptics_rmi4_i2c_read,
+ .write = synaptics_rmi4_i2c_write,
+};
+
+static struct synaptics_dsx_hw_interface hw_if;
+
+static struct platform_device *synaptics_dsx_i2c_device;
+
+static void synaptics_rmi4_i2c_dev_release(struct device *dev)
+{
+ kfree(synaptics_dsx_i2c_device);
+
+ return;
+}
+
+static int synaptics_rmi4_i2c_probe(struct i2c_client *client,
+ const struct i2c_device_id *dev_id)
+{
+ int retval;
+
+ if (!i2c_check_functionality(client->adapter,
+ I2C_FUNC_SMBUS_BYTE_DATA)) {
+ dev_err(&client->dev,
+ "%s: SMBus byte data commands not supported by host\n",
+ __func__);
+ return -EIO;
+ }
+
+ synaptics_dsx_i2c_device = kzalloc(
+ sizeof(struct platform_device),
+ GFP_KERNEL);
+ if (!synaptics_dsx_i2c_device) {
+ dev_err(&client->dev,
+ "%s: Failed to allocate memory for synaptics_dsx_i2c_device\n",
+ __func__);
+ return -ENOMEM;
+ }
+
+ hw_if.board_data = client->dev.platform_data;
+ hw_if.bus_access = &bus_access;
+ hw_if.bl_hw_init = switch_to_rmi;
+ hw_if.ui_hw_init = hid_i2c_init;
+
+ synaptics_dsx_i2c_device->name = PLATFORM_DRIVER_NAME;
+ synaptics_dsx_i2c_device->id = 0;
+ synaptics_dsx_i2c_device->num_resources = 0;
+ synaptics_dsx_i2c_device->dev.parent = &client->dev;
+ synaptics_dsx_i2c_device->dev.platform_data = &hw_if;
+ synaptics_dsx_i2c_device->dev.release = synaptics_rmi4_i2c_dev_release;
+
+ retval = platform_device_register(synaptics_dsx_i2c_device);
+ if (retval) {
+ dev_err(&client->dev,
+ "%s: Failed to register platform device\n",
+ __func__);
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
+static int synaptics_rmi4_i2c_remove(struct i2c_client *client)
+{
+ if (buffer.read_size)
+ kfree(buffer.read);
+
+ if (buffer.write_size)
+ kfree(buffer.write);
+
+ platform_device_unregister(synaptics_dsx_i2c_device);
+
+ return 0;
+}
+
+static const struct i2c_device_id synaptics_rmi4_id_table[] = {
+ {I2C_DRIVER_NAME, 0},
+ {},
+};
+MODULE_DEVICE_TABLE(i2c, synaptics_rmi4_id_table);
+
+static struct i2c_driver synaptics_rmi4_i2c_driver = {
+ .driver = {
+ .name = I2C_DRIVER_NAME,
+ .owner = THIS_MODULE,
+ },
+ .probe = synaptics_rmi4_i2c_probe,
+ .remove = synaptics_rmi4_i2c_remove,
+ .id_table = synaptics_rmi4_id_table,
+};
+
+int synaptics_rmi4_bus_init(void)
+{
+ return i2c_add_driver(&synaptics_rmi4_i2c_driver);
+}
+EXPORT_SYMBOL(synaptics_rmi4_bus_init);
+
+void synaptics_rmi4_bus_exit(void)
+{
+ i2c_del_driver(&synaptics_rmi4_i2c_driver);
+
+ return;
+}
+EXPORT_SYMBOL(synaptics_rmi4_bus_exit);
+
+MODULE_AUTHOR("Synaptics, Inc.");
+MODULE_DESCRIPTION("Synaptics DSX I2C Bus Support Module");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_spi.c b/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_spi.c
new file mode 100644
index 0000000..aeeb485
--- /dev/null
+++ b/drivers/input/touchscreen/synaptics_dsx/synaptics_dsx_spi.c
@@ -0,0 +1,628 @@
+/*
+ * Synaptics DSX touchscreen driver
+ *
+ * Copyright (C) 2012 Synaptics Incorporated
+ *
+ * Copyright (C) 2012 Alexandra Chin <alexandra.chin@tw.synaptics.com>
+ * Copyright (C) 2012 Scott Lin <scott.lin@tw.synaptics.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/spi/spi.h>
+#include <linux/delay.h>
+#include <linux/input.h>
+#include <linux/types.h>
+#include <linux/of_gpio.h>
+#include <linux/platform_device.h>
+#include <linux/input/synaptics_dsx.h>
+#include "synaptics_dsx_core.h"
+
+#define SPI_READ 0x80
+#define SPI_WRITE 0x00
+
+#define SYN_SPI_RETRY_TIMES 5
+
+#ifdef CONFIG_OF
+static int parse_config(struct device *dev, struct synaptics_dsx_board_data *bdata)
+{
+ struct synaptics_rmi4_config *cfg_table;
+ struct device_node *node, *pp = NULL;
+ struct property *prop;
+ uint8_t cnt = 0, i = 0;
+ u32 data = 0;
+ int len = 0;
+
+ pr_info(" %s\n", __func__);
+ node = dev->of_node;
+ if (node == NULL) {
+ pr_err(" %s: can't find device_node\n", __func__);
+ return -ENODEV;
+ }
+
+ while ((pp = of_get_next_child(node, pp)))
+ cnt++;
+
+ if (!cnt)
+ return -ENODEV;
+
+ cfg_table = kcalloc(cnt, sizeof(*cfg_table), GFP_KERNEL);
+ if (!cfg_table)
+ return -ENOMEM;
+
+ pp = NULL;
+ while ((pp = of_get_next_child(node, pp))) {
+ if (of_property_read_u32(pp, "sensor_id", &data) == 0)
+ cfg_table[i].sensor_id = (data | SENSOR_ID_CHECKING_EN);
+
+ if (of_property_read_u32(pp, "pr_number", &data) == 0)
+ cfg_table[i].pr_number = data;
+
+ prop = of_find_property(pp, "config", &len);
+ if (!prop) {
+ pr_err(" %s: Looking up %s property in node %s failed",
+ __func__, "config", pp->full_name);
+ kfree(cfg_table);
+ return -ENODEV;
+ } else if (!len) {
+ pr_err(" %s: Invalid length of configuration data\n",
+ __func__);
+ kfree(cfg_table);
+ return -EINVAL;
+ }
+
+ cfg_table[i].length = len;
+ memcpy(cfg_table[i].config, prop->value, cfg_table[i].length);
+ /*pr_info(" DT#%d-id:%05x, pr:%d, len:%d\n", i,
+ cfg_table[i].sensor_id, cfg_table[i].pr_number, cfg_table[i].length);
+ pr_info(" cfg=[%02x,%02x,%02x,%02x]\n", cfg_table[i].config[0],
+ cfg_table[i].config[1], cfg_table[i].config[2], cfg_table[i].config[3]);*/
+ i++;
+ }
+
+ bdata->config_num = cnt;
+ bdata->config_table = cfg_table;
+
+ return 0;
+}
+
+static int parse_dt(struct device *dev, struct synaptics_dsx_board_data *bdata)
+{
+ int retval;
+ u32 value;
+ const char *name;
+ struct property *prop;
+ struct device_node *np = dev->of_node;
+
+ pr_info("%s\n", __func__);
+ bdata->irq_gpio = of_get_named_gpio_flags(np,
+ "synaptics,irq-gpio", 0, NULL);
+
+ retval = of_property_read_u32(np, "synaptics,irq-flags", &value);
+ if (retval < 0)
+ return retval;
+ else
+ bdata->irq_flags = value;
+
+ retval = of_property_read_string(np, "synaptics,pwr-reg-name", &name);
+ if (retval == -EINVAL)
+ bdata->pwr_reg_name = NULL;
+ else if (retval < 0)
+ return retval;
+ else
+ bdata->pwr_reg_name = name;
+
+ retval = of_property_read_string(np, "synaptics,bus-reg-name", &name);
+ if (retval == -EINVAL)
+ bdata->bus_reg_name = NULL;
+ else if (retval < 0)
+ return retval;
+ else
+ bdata->bus_reg_name = name;
+
+ if (of_property_read_bool(np, "synaptics,power-gpio")) {
+ bdata->power_gpio = of_get_named_gpio_flags(np,
+ "synaptics,power-gpio", 0, NULL);
+ retval = of_property_read_u32(np, "synaptics,power-on-state",
+ &value);
+ if (retval < 0)
+ return retval;
+ else
+ bdata->power_on_state = value;
+ } else {
+ bdata->power_gpio = -1;
+ }
+
+ if (of_property_read_bool(np, "synaptics,power-delay-ms")) {
+ retval = of_property_read_u32(np, "synaptics,power-delay-ms",
+ &value);
+ if (retval < 0)
+ return retval;
+ else
+ bdata->power_delay_ms = value;
+ } else {
+ bdata->power_delay_ms = 0;
+ }
+
+ if (of_property_read_bool(np, "synaptics,reset-gpio")) {
+ bdata->reset_gpio = of_get_named_gpio_flags(np,
+ "synaptics,reset-gpio", 0, NULL);
+ retval = of_property_read_u32(np, "synaptics,reset-on-state",
+ &value);
+ if (retval < 0)
+ return retval;
+ else
+ bdata->reset_on_state = value;
+ retval = of_property_read_u32(np, "synaptics,reset-active-ms",
+ &value);
+ if (retval < 0)
+ return retval;
+ else
+ bdata->reset_active_ms = value;
+ } else {
+ bdata->reset_gpio = -1;
+ }
+
+ if (of_property_read_bool(np, "synaptics,reset-delay-ms")) {
+ retval = of_property_read_u32(np, "synaptics,reset-delay-ms",
+ &value);
+ if (retval < 0)
+ return retval;
+ else
+ bdata->reset_delay_ms = value;
+ } else {
+ bdata->reset_delay_ms = 0;
+ }
+
+ if (of_property_read_bool(np, "synaptics,byte-delay-us")) {
+ retval = of_property_read_u32(np, "synaptics,byte-delay-us",
+ &value);
+ if (retval < 0)
+ return retval;
+ else
+ bdata->byte_delay_us = value;
+ } else {
+ bdata->byte_delay_us = 0;
+ }
+
+ if (of_property_read_bool(np, "synaptics,block-delay-us")) {
+ retval = of_property_read_u32(np, "synaptics,block-delay-us",
+ &value);
+ if (retval < 0)
+ return retval;
+ else
+ bdata->block_delay_us = value;
+ } else {
+ bdata->block_delay_us = 0;
+ }
+
+ bdata->swap_axes = of_property_read_bool(np, "synaptics,swap-axes");
+
+ bdata->x_flip = of_property_read_bool(np, "synaptics,x-flip");
+
+ bdata->y_flip = of_property_read_bool(np, "synaptics,y-flip");
+
+ prop = of_find_property(np, "synaptics,cap-button-map", NULL);
+ if (prop && prop->length) {
+ bdata->cap_button_map->map = devm_kzalloc(dev,
+ prop->length,
+ GFP_KERNEL);
+ if (!bdata->cap_button_map->map)
+ return -ENOMEM;
+ bdata->cap_button_map->nbuttons = prop->length / sizeof(u32);
+ retval = of_property_read_u32_array(np,
+ "synaptics,cap-button-map",
+ bdata->cap_button_map->map,
+ bdata->cap_button_map->nbuttons);
+ if (retval < 0) {
+ bdata->cap_button_map->nbuttons = 0;
+ bdata->cap_button_map->map = NULL;
+ }
+ } else {
+ bdata->cap_button_map->nbuttons = 0;
+ bdata->cap_button_map->map = NULL;
+ }
+
+ if (of_property_read_bool(np, "synaptics,tw-pin-mask")) {
+ retval = of_property_read_u32(np, "synaptics,tw-pin-mask",
+ &value);
+ if (retval < 0)
+ return retval;
+ else
+ bdata->tw_pin_mask = value;
+ } else {
+ bdata->tw_pin_mask = 0;
+ }
+
+ parse_config(dev, bdata);
+
+ return 0;
+}
+#endif
+
+static int synaptics_rmi4_spi_set_page(struct synaptics_rmi4_data *rmi4_data,
+ unsigned short addr)
+{
+ int retval;
+ unsigned int index, retry;
+ unsigned int xfer_count = PAGE_SELECT_LEN + 1;
+ unsigned char txbuf[xfer_count];
+ unsigned char page;
+ struct spi_message msg;
+ struct spi_transfer xfers[xfer_count];
+ struct spi_device *spi = to_spi_device(rmi4_data->pdev->dev.parent);
+ const struct synaptics_dsx_board_data *bdata =
+ rmi4_data->hw_if->board_data;
+
+ page = ((addr >> 8) & ~MASK_7BIT);
+ if (page != rmi4_data->current_page) {
+ spi_message_init(&msg);
+
+ txbuf[0] = SPI_WRITE;
+ txbuf[1] = MASK_8BIT;
+ txbuf[2] = page;
+
+ for (index = 0; index < xfer_count; index++) {
+ memset(&xfers[index], 0, sizeof(struct spi_transfer));
+ xfers[index].len = 1;
+ xfers[index].delay_usecs = bdata->byte_delay_us;
+ xfers[index].tx_buf = &txbuf[index];
+ spi_message_add_tail(&xfers[index], &msg);
+ }
+
+ if (bdata->block_delay_us)
+ xfers[index - 1].delay_usecs = bdata->block_delay_us;
+
+ for (retry = 0; retry < SYN_SPI_RETRY_TIMES; retry++) {
+ retval = spi_sync(spi, &msg);
+ if (retval == 0) {
+ rmi4_data->current_page = page;
+ retval = PAGE_SELECT_LEN;
+ break;
+ } else {
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: SPI transfer, error = %d, retry = %d\n",
+ __func__, retval, retry + 1);
+ mdelay(5);
+ }
+ }
+
+ if (retry == SYN_SPI_RETRY_TIMES) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to complete SPI transfer, error = %d\n",
+ __func__, retval);
+ }
+ } else {
+ retval = PAGE_SELECT_LEN;
+ }
+
+ return retval;
+}
+
+static int synaptics_rmi4_spi_read(struct synaptics_rmi4_data *rmi4_data,
+ unsigned short addr, unsigned char *data, unsigned short length)
+{
+ int retval;
+ unsigned int index, retry;
+ unsigned int xfer_count = length + ADDRESS_WORD_LEN;
+ unsigned char txbuf[ADDRESS_WORD_LEN];
+ unsigned char *rxbuf = NULL;
+ struct spi_message msg;
+ struct spi_transfer *xfers = NULL;
+ struct spi_device *spi = to_spi_device(rmi4_data->pdev->dev.parent);
+ const struct synaptics_dsx_board_data *bdata =
+ rmi4_data->hw_if->board_data;
+
+ spi_message_init(&msg);
+
+ xfers = kcalloc(xfer_count, sizeof(struct spi_transfer), GFP_KERNEL);
+ if (!xfers) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to allocate memory for xfers\n",
+ __func__);
+ retval = -ENOMEM;
+ goto exit;
+ }
+
+ txbuf[0] = (addr >> 8) | SPI_READ;
+ txbuf[1] = addr & MASK_8BIT;
+
+ rxbuf = kmalloc(length, GFP_KERNEL);
+ if (!rxbuf) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to allocate memory for rxbuf\n",
+ __func__);
+ retval = -ENOMEM;
+ goto exit;
+ }
+
+ mutex_lock(&rmi4_data->rmi4_io_ctrl_mutex);
+
+ retval = synaptics_rmi4_spi_set_page(rmi4_data, addr);
+ if (retval != PAGE_SELECT_LEN) {
+ mutex_unlock(&rmi4_data->rmi4_io_ctrl_mutex);
+ retval = -EIO;
+ goto exit;
+ }
+
+ for (index = 0; index < xfer_count; index++) {
+ xfers[index].len = 1;
+ xfers[index].delay_usecs = bdata->byte_delay_us;
+ if (index < ADDRESS_WORD_LEN)
+ xfers[index].tx_buf = &txbuf[index];
+ else
+ xfers[index].rx_buf = &rxbuf[index - ADDRESS_WORD_LEN];
+ spi_message_add_tail(&xfers[index], &msg);
+ }
+
+ if (bdata->block_delay_us)
+ xfers[index - 1].delay_usecs = bdata->block_delay_us;
+
+ for (retry = 0; retry < SYN_SPI_RETRY_TIMES; retry++) {
+ retval = spi_sync(spi, &msg);
+ if (retval == 0) {
+ retval = length;
+ memcpy(data, rxbuf, length);
+ break;
+ } else {
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: SPI transfer, error = %d, retry = %d\n",
+ __func__, retval, retry + 1);
+ mdelay(5);
+ }
+ }
+
+ if (retry == SYN_SPI_RETRY_TIMES) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to complete SPI transfer, error = %d\n",
+ __func__, retval);
+ }
+
+ mutex_unlock(&rmi4_data->rmi4_io_ctrl_mutex);
+
+exit:
+ kfree(rxbuf);
+ kfree(xfers);
+
+ return retval;
+}
+
+static int synaptics_rmi4_spi_write(struct synaptics_rmi4_data *rmi4_data,
+ unsigned short addr, unsigned char *data, unsigned short length)
+{
+ int retval;
+ unsigned int index, retry;
+ unsigned int xfer_count = length + ADDRESS_WORD_LEN;
+ unsigned char *txbuf = NULL;
+ struct spi_message msg;
+ struct spi_transfer *xfers = NULL;
+ struct spi_device *spi = to_spi_device(rmi4_data->pdev->dev.parent);
+ const struct synaptics_dsx_board_data *bdata =
+ rmi4_data->hw_if->board_data;
+
+ spi_message_init(&msg);
+
+ xfers = kcalloc(xfer_count, sizeof(struct spi_transfer), GFP_KERNEL);
+ if (!xfers) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to allocate memory for xfers\n",
+ __func__);
+ retval = -ENOMEM;
+ goto exit;
+ }
+
+ txbuf = kmalloc(xfer_count, GFP_KERNEL);
+ if (!txbuf) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to allocate memory for txbuf\n",
+ __func__);
+ retval = -ENOMEM;
+ goto exit;
+ }
+
+ txbuf[0] = (addr >> 8) & ~SPI_READ;
+ txbuf[1] = addr & MASK_8BIT;
+ memcpy(&txbuf[ADDRESS_WORD_LEN], data, length);
+
+ mutex_lock(&rmi4_data->rmi4_io_ctrl_mutex);
+
+ retval = synaptics_rmi4_spi_set_page(rmi4_data, addr);
+ if (retval != PAGE_SELECT_LEN) {
+ mutex_unlock(&rmi4_data->rmi4_io_ctrl_mutex);
+ retval = -EIO;
+ goto exit;
+ }
+
+ for (index = 0; index < xfer_count; index++) {
+ xfers[index].len = 1;
+ xfers[index].delay_usecs = bdata->byte_delay_us;
+ xfers[index].tx_buf = &txbuf[index];
+ spi_message_add_tail(&xfers[index], &msg);
+ }
+
+ if (bdata->block_delay_us)
+ xfers[index - 1].delay_usecs = bdata->block_delay_us;
+
+ for (retry = 0; retry < SYN_SPI_RETRY_TIMES; retry++) {
+ retval = spi_sync(spi, &msg);
+ if (retval == 0) {
+ retval = length;
+ break;
+ } else {
+ dev_dbg(rmi4_data->pdev->dev.parent,
+ "%s: SPI transfer, error = %d, retry = %d\n",
+ __func__, retval, retry + 1);
+ mdelay(5);
+ }
+ }
+
+ if (retry == SYN_SPI_RETRY_TIMES) {
+ dev_err(rmi4_data->pdev->dev.parent,
+ "%s: Failed to complete SPI transfer, error = %d\n",
+ __func__, retval);
+ }
+
+ mutex_unlock(&rmi4_data->rmi4_io_ctrl_mutex);
+
+exit:
+ kfree(txbuf);
+ kfree(xfers);
+
+ return retval;
+}
+
+static struct synaptics_dsx_bus_access bus_access = {
+ .type = BUS_SPI,
+ .read = synaptics_rmi4_spi_read,
+ .write = synaptics_rmi4_spi_write,
+};
+
+static struct synaptics_dsx_hw_interface hw_if;
+
+static struct platform_device *synaptics_dsx_spi_device;
+
+static void synaptics_rmi4_spi_dev_release(struct device *dev)
+{
+ kfree(synaptics_dsx_spi_device);
+
+ return;
+}
+
+static int synaptics_rmi4_spi_probe(struct spi_device *spi)
+{
+ int retval;
+
+ pr_info("%s\n", __func__);
+ if (spi->master->flags & SPI_MASTER_HALF_DUPLEX) {
+ dev_err(&spi->dev,
+ "%s: Full duplex not supported by host\n",
+ __func__);
+ return -EIO;
+ }
+
+ synaptics_dsx_spi_device = kzalloc(
+ sizeof(struct platform_device),
+ GFP_KERNEL);
+ if (!synaptics_dsx_spi_device) {
+ dev_err(&spi->dev,
+ "%s: Failed to allocate memory for synaptics_dsx_spi_device\n",
+ __func__);
+ return -ENOMEM;
+ }
+
+ spi->bits_per_word = 8;
+ spi->mode = SPI_MODE_3;
+
+ retval = spi_setup(spi);
+ if (retval < 0) {
+ dev_err(&spi->dev,
+ "%s: Failed to perform SPI setup\n",
+ __func__);
+ return retval;
+ }
+
+#ifdef CONFIG_OF
+ if (spi->dev.of_node) {
+ hw_if.board_data = devm_kzalloc(&spi->dev,
+ sizeof(struct synaptics_dsx_board_data),
+ GFP_KERNEL);
+ if (!hw_if.board_data) {
+ dev_err(&spi->dev,
+ "%s: Failed to allocate memory for board data\n",
+ __func__);
+ return -ENOMEM;
+ }
+ hw_if.board_data->cap_button_map = devm_kzalloc(&spi->dev,
+ sizeof(struct synaptics_dsx_cap_button_map),
+ GFP_KERNEL);
+ if (!hw_if.board_data->cap_button_map) {
+ dev_err(&spi->dev,
+ "%s: Failed to allocate memory for button map\n",
+ __func__);
+ return -ENOMEM;
+ }
+ parse_dt(&spi->dev, hw_if.board_data);
+ }
+#else
+ hw_if.board_data = spi->dev.platform_data;
+#endif
+ hw_if.bus_access = &bus_access;
+
+ synaptics_dsx_spi_device->name = PLATFORM_DRIVER_NAME;
+ synaptics_dsx_spi_device->id = 0;
+ synaptics_dsx_spi_device->num_resources = 0;
+ synaptics_dsx_spi_device->dev.parent = &spi->dev;
+ synaptics_dsx_spi_device->dev.platform_data = &hw_if;
+ synaptics_dsx_spi_device->dev.release = synaptics_rmi4_spi_dev_release;
+
+ retval = platform_device_register(synaptics_dsx_spi_device);
+ if (retval) {
+ dev_err(&spi->dev,
+ "%s: Failed to register platform device\n",
+ __func__);
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
+static int synaptics_rmi4_spi_remove(struct spi_device *spi)
+{
+ platform_device_unregister(synaptics_dsx_spi_device);
+
+ return 0;
+}
+
+static const struct spi_device_id synaptics_rmi4_id_table[] = {
+ {SPI_DRIVER_NAME, 0},
+ {},
+};
+MODULE_DEVICE_TABLE(spi, synaptics_rmi4_id_table);
+
+#ifdef CONFIG_OF
+static const struct of_device_id synaptics_rmi4_of_match_table[] = {
+ {
+ .compatible = "synaptics,dsx",
+ },
+ {},
+};
+MODULE_DEVICE_TABLE(of, synaptics_rmi4_of_match_table);
+#else
+#define synaptics_rmi4_of_match_table NULL
+#endif
+
+static struct spi_driver synaptics_rmi4_spi_driver = {
+ .driver = {
+ .name = SPI_DRIVER_NAME,
+ .owner = THIS_MODULE,
+ .of_match_table = synaptics_rmi4_of_match_table,
+ },
+ .probe = synaptics_rmi4_spi_probe,
+ .remove = synaptics_rmi4_spi_remove,
+ .id_table = synaptics_rmi4_id_table,
+};
+
+
+int synaptics_rmi4_bus_init(void)
+{
+ return spi_register_driver(&synaptics_rmi4_spi_driver);
+}
+EXPORT_SYMBOL(synaptics_rmi4_bus_init);
+
+void synaptics_rmi4_bus_exit(void)
+{
+ spi_unregister_driver(&synaptics_rmi4_spi_driver);
+
+ return;
+}
+EXPORT_SYMBOL(synaptics_rmi4_bus_exit);
+
+MODULE_AUTHOR("Synaptics, Inc.");
+MODULE_DESCRIPTION("Synaptics DSX SPI Bus Support Module");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
index 93986d6..5a65a6e 100644
--- a/drivers/iommu/tegra-smmu.c
+++ b/drivers/iommu/tegra-smmu.c
@@ -1124,7 +1124,7 @@
}
pte = &ptbl[ptn];
- FLUSH_CPU_DCACHE(pte, tbl_page, count * sizeof(u32 *));
+ FLUSH_CPU_DCACHE(pte, tbl_page, count * sizeof(*pte));
if (!flush_all)
flush_ptc_and_tlb_range(smmu, as, iova, pte, tbl_page,
count);
@@ -1209,7 +1209,7 @@
}
pte = &ptbl[ptn];
- FLUSH_CPU_DCACHE(pte, tbl_page, count * sizeof(u32 *));
+ FLUSH_CPU_DCACHE(pte, tbl_page, count * sizeof(*pte));
if (!flush_all)
flush_ptc_and_tlb_range(smmu, as, iova, pte, tbl_page,
count);
diff --git a/drivers/leds/Kconfig b/drivers/leds/Kconfig
index ae92966..22f3813 100644
--- a/drivers/leds/Kconfig
+++ b/drivers/leds/Kconfig
@@ -224,6 +224,15 @@
Driver provides direct control via LED class and interface for
programming the engines.
+config LEDS_LP5521_HTC
+ tristate "LED Support for N.S. LP5521 LED driver chip"
+ depends on LEDS_CLASS && I2C
+ help
+ If you say yes here you get support for the National Semiconductor
+ LP5521 LED driver. It is a 3-channel chip with programmable engines.
+ Driver provides direct control via LED class and interface for
+ programming the engines.
+
config LEDS_LP5523
tristate "LED Support for TI/National LP5523/55231 LED driver chip"
depends on LEDS_CLASS && I2C
@@ -492,6 +501,14 @@
This option enables support for the BlinkM RGB LED connected
through I2C. Say Y to enable support for the BlinkM LED.
+config LEDS_TPS61310
+ tristate "TI TPS61310 leds"
+ depends on LEDS_CLASS
+ depends on I2C
+ help
+ This option enables support for the TI TPS61310 LED driver chip
+ connected through I2C.
+
comment "LED Triggers"
source "drivers/leds/trigger/Kconfig"
diff --git a/drivers/leds/Makefile b/drivers/leds/Makefile
index 7472b75..1666809 100644
--- a/drivers/leds/Makefile
+++ b/drivers/leds/Makefile
@@ -28,6 +28,7 @@
obj-$(CONFIG_LEDS_LP3944) += leds-lp3944.o
obj-$(CONFIG_LEDS_LP55XX_COMMON) += leds-lp55xx-common.o
obj-$(CONFIG_LEDS_LP5521) += leds-lp5521.o
+obj-$(CONFIG_LEDS_LP5521_HTC) += leds-lp5521_htc.o
obj-$(CONFIG_LEDS_LP5523) += leds-lp5523.o
obj-$(CONFIG_LEDS_LP5562) += leds-lp5562.o
obj-$(CONFIG_LEDS_LP8788) += leds-lp8788.o
@@ -57,6 +58,7 @@
obj-$(CONFIG_LEDS_MAX8997) += leds-max8997.o
obj-$(CONFIG_LEDS_LM355x) += leds-lm355x.o
obj-$(CONFIG_LEDS_BLINKM) += leds-blinkm.o
+obj-$(CONFIG_LEDS_TPS61310) += leds-tps61310.o
obj-$(CONFIG_LEDS_MAX8831) += leds-max8831.o
@@ -65,3 +67,4 @@
# LED Triggers
obj-$(CONFIG_LEDS_TRIGGERS) += trigger/
+
diff --git a/drivers/leds/leds-lp5521_htc.c b/drivers/leds/leds-lp5521_htc.c
new file mode 100644
index 0000000..4aa2900
--- /dev/null
+++ b/drivers/leds/leds-lp5521_htc.c
@@ -0,0 +1,1147 @@
+/* driver/leds/leds-lp5521_htc.c
+ *
+ * Copyright (C) 2010 HTC Corporation.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/i2c.h>
+#include <linux/gpio.h>
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include <linux/input.h>
+#include <linux/platform_device.h>
+#include <linux/hrtimer.h>
+#include <linux/interrupt.h>
+#include <linux/alarmtimer.h>
+#include <linux/leds.h>
+#include <linux/leds-lp5521_htc.h>
+#include <linux/regulator/consumer.h>
+#include <linux/module.h>
+#include <linux/of_gpio.h>
+#define LP5521_MAX_LEDS 9 /* Maximum number of LEDs */
+#define D(x...) printk(KERN_DEBUG "[LED]" x)
+#define I(x...) printk(KERN_INFO "[LED]" x)
+static int led_rw_delay, chip_enable;
+static int current_time;
+static struct i2c_client *private_lp5521_client;
+static struct mutex led_mutex;
+static struct workqueue_struct *g_led_work_queue;
+static uint32_t ModeRGB;
+#define Mode_Mask (0xff << 24)
+#define Red_Mask (0xff << 16)
+#define Green_Mask (0xff << 8)
+#define Blue_Mask 0xff
+
+
+struct lp5521_led {
+ int id;
+ u8 chan_nr;
+ u8 led_current;
+ u8 max_current;
+ struct led_classdev cdev;
+ struct mutex led_data_mutex;
+ struct alarm led_alarm;
+ struct work_struct led_work;
+ struct work_struct led_work_multicolor;
+ uint8_t Mode;
+ uint8_t Red;
+ uint8_t Green;
+ uint8_t Blue;
+ struct delayed_work blink_delayed_work;
+};
+
+struct lp5521_chip {
+ struct led_i2c_platform_data *pdata;
+ struct mutex led_i2c_rw_mutex; /* Serialize control */
+ struct i2c_client *client;
+ struct lp5521_led leds[LP5521_MAX_LEDS];
+};
+static int lp5521_parse_dt(struct device *, struct led_i2c_platform_data *);
+
+static char *hex2string(uint8_t *data, int len)
+{
+ static char buf[LED_I2C_WRITE_BLOCK_SIZE*4];
+ int i;
+
+ i = LED_I2C_WRITE_BLOCK_SIZE - 1;
+ if (len > i)
+ len = i;
+
+ for (i = 0; i < len; i++)
+ sprintf(buf + i * 4, "[%02X]", data[i]);
+
+ return buf;
+}
+
+static int i2c_write_block(struct i2c_client *client, uint8_t addr,
+ uint8_t *data, int length)
+{
+ int retry;
+ uint8_t buf[LED_I2C_WRITE_BLOCK_SIZE];
+ int i;
+ struct lp5521_chip *cdata;
+ struct i2c_msg msg[] = {
+ {
+ .addr = client->addr,
+ .flags = 0,
+ .len = length + 1,
+ .buf = buf,
+ }
+ };
+
+ dev_dbg(&client->dev, "W [%02X] = %s\n",
+ addr, hex2string(data, length));
+
+ cdata = i2c_get_clientdata(client);
+ if (length + 1 > LED_I2C_WRITE_BLOCK_SIZE) {
+ dev_err(&client->dev, "[LED] i2c_write_block length too long\n");
+ return -E2BIG;
+ }
+
+ buf[0] = addr;
+ for (i = 0; i < length; i++)
+ buf[i+1] = data[i];
+
+ mutex_lock(&cdata->led_i2c_rw_mutex);
+ msleep(1);
+ for (retry = 0; retry < I2C_WRITE_RETRY_TIMES; retry++) {
+ if (i2c_transfer(client->adapter, msg, 1) == 1)
+ break;
+ msleep(led_rw_delay);
+ }
+ if (retry >= I2C_WRITE_RETRY_TIMES) {
+ dev_err(&client->dev, "[LED] i2c_write_block retry over %d times\n",
+ I2C_WRITE_RETRY_TIMES);
+ mutex_unlock(&cdata->led_i2c_rw_mutex);
+ return -EIO;
+ }
+ mutex_unlock(&cdata->led_i2c_rw_mutex);
+
+ return 0;
+}
+
+
+static int I2C_RxData_2(char *rxData, int length)
+{
+ uint8_t loop_i;
+
+ struct i2c_msg msgs[] = {
+ {
+ .addr = private_lp5521_client->addr,
+ .flags = 0,
+ .len = 1,
+ .buf = rxData,
+ },
+ {
+ .addr = private_lp5521_client->addr,
+ .flags = I2C_M_RD,
+ .len = length,
+ .buf = rxData,
+ },
+ };
+
+ for (loop_i = 0; loop_i < I2C_WRITE_RETRY_TIMES; loop_i++) {
+ if (i2c_transfer(private_lp5521_client->adapter, msgs, 2) > 0)
+ break;
+ msleep(10);
+ }
+
+ if (loop_i >= I2C_WRITE_RETRY_TIMES) {
+ printk(KERN_ERR "[LED] %s retry over %d times\n",
+ __func__, I2C_WRITE_RETRY_TIMES);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+static int i2c_read_block(struct i2c_client *client,
+ uint8_t cmd, uint8_t *pdata, int length)
+{
+ char buffer[3] = {0};
+ int ret = 0, i;
+
+ if (pdata == NULL)
+ return -EFAULT;
+
+ if (length > 2) {
+ pr_err("[LED]%s: length %d > 2\n", __func__, length);
+ return -EINVAL;
+ }
+ buffer[0] = cmd;
+ ret = I2C_RxData_2(buffer, length);
+ if (ret < 0) {
+ pr_err("[LED]%s: I2C_RxData fail \n", __func__);
+ return ret;
+ }
+
+ for (i = 0; i < length; i++) {
+ *(pdata+i) = buffer[i];
+ }
+ return ret;
+}
+
+static void lp5521_led_enable(struct i2c_client *client)
+{
+ int ret = 0;
+ uint8_t data;
+ struct lp5521_chip *cdata = i2c_get_clientdata(client);
+ struct led_i2c_platform_data *pdata = cdata->pdata;
+ char data1[1] = {0};
+ I(" %s +++\n" , __func__);
+
+ /* === led pin enable ===*/
+ if (pdata->ena_gpio) {
+ ret = gpio_direction_output(pdata->ena_gpio, 1);
+ if (ret < 0) {
+ pr_err("[LED] %s: gpio_direction_output high failed %d\n", __func__, ret);
+ gpio_free(pdata->ena_gpio);
+ }
+ }
+ msleep(1);
+ /* === reset ===*/
+ data = 0xFF;
+ ret = i2c_write_block(client, 0x0d, &data, 1);
+ msleep(20);
+ ret = i2c_read_block(client, 0x05, data1, 1);
+ if (data1[0] != 0xaf) {
+ I(" %s reset not ready %x\n" , __func__, data1[0]);
+ }
+
+ chip_enable = 1;
+ mutex_lock(&led_mutex);
+ /* === enable CHIP_EN === */
+ data = 0x40;
+ ret = i2c_write_block(client, ENABLE_REGISTER, &data, 1);
+ udelay(550);
+ /* === configuration control in power save mode=== */
+ data = 0x29;
+ ret = i2c_write_block(client, 0x08, &data, 1);
+ /* === set RGB current to 9.5mA === */
+ data = (u8)95;
+ ret = i2c_write_block(client, 0x05, &data, 1);
+ data = (u8)95;
+ ret = i2c_write_block(client, 0x06, &data, 1);
+ data = (u8)95;
+ ret = i2c_write_block(client, 0x07, &data, 1);
+ mutex_unlock(&led_mutex);
+ I(" %s ---\n" , __func__);
+}
+static void lp5521_red_long_blink(struct i2c_client *client)
+{
+ uint8_t data = 0x00;
+ int ret;
+
+ I(" %s +++\n" , __func__);
+ mutex_lock(&led_mutex);
+ data = 0x10;
+ ret = i2c_write_block(client, OPRATION_REGISTER, &data, 1);
+ udelay(200);
+
+ /* === set pwm to 200 === */
+ data = 0x40;
+ ret = i2c_write_block(client, 0x10, &data, 1);
+ data = 0xc8;
+ ret = i2c_write_block(client, 0x11, &data, 1);
+ /* === wait 0.999s === */
+ data = 0x7f;
+ ret = i2c_write_block(client, 0x12, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x13, &data, 1);
+ /* === set pwm to 0 === */
+ data = 0x40;
+ ret = i2c_write_block(client, 0x14, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x15, &data, 1);
+ /* === wait 0.999s === */
+ data = 0x7f;
+ ret = i2c_write_block(client, 0x16, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x17, &data, 1);
+ /* === clear register === */
+ data = 0x00;
+ ret = i2c_write_block(client, 0x18, &data, 1);
+ ret = i2c_write_block(client, 0x19, &data, 1);
+
+ /* === run program === */
+
+ data = 0x20;
+ ret = i2c_write_block(client, OPRATION_REGISTER, &data, 1);
+ udelay(200);
+ data = 0x60;
+ ret = i2c_write_block(client, ENABLE_REGISTER, &data, 1);
+ udelay(550);
+ mutex_unlock(&led_mutex);
+ I(" %s ---\n" , __func__);
+}
+static void lp5521_breathing(struct i2c_client *client, uint8_t red, uint8_t green, uint8_t blue)
+{
+ uint8_t data = 0x00;
+// uint8_t data1[2]={0};
+ int ret;
+ uint8_t mode = 0x00;
+
+ I(" %s +++ red:%d, green:%d, blue:%d\n" , __func__, red, green, blue);
+ mutex_lock(&led_mutex);
+
+ if (red)
+ mode |= (3 << 4);
+ if (green)
+ mode |= (3 << 2);
+ if (blue)
+ mode |= 3;
+ data = mode & 0x15;
+
+ ret = i2c_write_block(client, OPRATION_REGISTER, &data, 1);
+ udelay(200);
+
+ if (red) {
+ data = 0x0a;
+ ret = i2c_write_block(client, 0x10, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x11, &data, 1);
+ data = 0x33;
+ ret = i2c_write_block(client, 0x12, &data, 1);
+ data = 0x13;
+ ret = i2c_write_block(client, 0x13, &data, 1);
+ data = 0x14;
+ ret = i2c_write_block(client, 0x14, &data, 1);
+ data = 0x31;
+ ret = i2c_write_block(client, 0x15, &data, 1);
+ data = 0x0c;
+ ret = i2c_write_block(client, 0x16, &data, 1);
+ data = 0x31;
+ ret = i2c_write_block(client, 0x17, &data, 1);
+ data = 0x1f;
+ ret = i2c_write_block(client, 0x18, &data, 1);
+ data = 0x13;
+ ret = i2c_write_block(client, 0x19, &data, 1);
+ data = 0x60;
+ ret = i2c_write_block(client, 0x1a, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x1b, &data, 1);
+ data = 0x1f;
+ ret = i2c_write_block(client, 0x1c, &data, 1);
+ data = 0x93;
+ ret = i2c_write_block(client, 0x1d, &data, 1);
+ data = 0x0c;
+ ret = i2c_write_block(client, 0x1e, &data, 1);
+ data = 0xb1;
+ ret = i2c_write_block(client, 0x1f, &data, 1);
+ data = 0x14;
+ ret = i2c_write_block(client, 0x20, &data, 1);
+ data = 0xb1;
+ ret = i2c_write_block(client, 0x21, &data, 1);
+ data = 0x3d;
+ ret = i2c_write_block(client, 0x22, &data, 1);
+ data = 0x93;
+ ret = i2c_write_block(client, 0x23, &data, 1);
+ data = 0x73;
+ ret = i2c_write_block(client, 0x24, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x25, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x26, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x27, &data, 1);
+ }
+
+ if (green) {
+ data = 0x0a;
+ ret = i2c_write_block(client, 0x30, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x31, &data, 1);
+ data = 0x33;
+ ret = i2c_write_block(client, 0x32, &data, 1);
+ data = 0x13;
+ ret = i2c_write_block(client, 0x33, &data, 1);
+ data = 0x14;
+ ret = i2c_write_block(client, 0x34, &data, 1);
+ data = 0x31;
+ ret = i2c_write_block(client, 0x35, &data, 1);
+ data = 0x0c;
+ ret = i2c_write_block(client, 0x36, &data, 1);
+ data = 0x31;
+ ret = i2c_write_block(client, 0x37, &data, 1);
+ data = 0x1f;
+ ret = i2c_write_block(client, 0x38, &data, 1);
+ data = 0x13;
+ ret = i2c_write_block(client, 0x39, &data, 1);
+ data = 0x60;
+ ret = i2c_write_block(client, 0x3a, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x3b, &data, 1);
+ data = 0x1f;
+ ret = i2c_write_block(client, 0x3c, &data, 1);
+ data = 0x93;
+ ret = i2c_write_block(client, 0x3d, &data, 1);
+ data = 0x0c;
+ ret = i2c_write_block(client, 0x3e, &data, 1);
+ data = 0xb1;
+ ret = i2c_write_block(client, 0x3f, &data, 1);
+ data = 0x14;
+ ret = i2c_write_block(client, 0x40, &data, 1);
+ data = 0xb1;
+ ret = i2c_write_block(client, 0x41, &data, 1);
+ data = 0x3d;
+ ret = i2c_write_block(client, 0x42, &data, 1);
+ data = 0x93;
+ ret = i2c_write_block(client, 0x43, &data, 1);
+ data = 0x73;
+ ret = i2c_write_block(client, 0x44, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x45, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x46, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x47, &data, 1);
+ }
+
+ if (blue) {
+ data = 0x0a;
+ ret = i2c_write_block(client, 0x50, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x51, &data, 1);
+ data = 0x33;
+ ret = i2c_write_block(client, 0x52, &data, 1);
+ data = 0x13;
+ ret = i2c_write_block(client, 0x53, &data, 1);
+ data = 0x14;
+ ret = i2c_write_block(client, 0x54, &data, 1);
+ data = 0x31;
+ ret = i2c_write_block(client, 0x55, &data, 1);
+ data = 0x0c;
+ ret = i2c_write_block(client, 0x56, &data, 1);
+ data = 0x31;
+ ret = i2c_write_block(client, 0x57, &data, 1);
+ data = 0x1f;
+ ret = i2c_write_block(client, 0x58, &data, 1);
+ data = 0x13;
+ ret = i2c_write_block(client, 0x59, &data, 1);
+ data = 0x60;
+ ret = i2c_write_block(client, 0x5a, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x5b, &data, 1);
+ data = 0x1f;
+ ret = i2c_write_block(client, 0x5c, &data, 1);
+ data = 0x93;
+ ret = i2c_write_block(client, 0x5d, &data, 1);
+ data = 0x0c;
+ ret = i2c_write_block(client, 0x5e, &data, 1);
+ data = 0xb1;
+ ret = i2c_write_block(client, 0x5f, &data, 1);
+ data = 0x14;
+ ret = i2c_write_block(client, 0x60, &data, 1);
+ data = 0xb1;
+ ret = i2c_write_block(client, 0x61, &data, 1);
+ data = 0x3d;
+ ret = i2c_write_block(client, 0x62, &data, 1);
+ data = 0x93;
+ ret = i2c_write_block(client, 0x63, &data, 1);
+ data = 0x73;
+ ret = i2c_write_block(client, 0x64, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x65, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x66, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x67, &data, 1);
+ }
+
+ /* === run program === */
+ data = mode & 0x2a;
+ ret = i2c_write_block(client, OPRATION_REGISTER, &data, 1);
+ udelay(200);
+ data = (mode & 0x2a) | 0x40;
+ ret = i2c_write_block(client, ENABLE_REGISTER, &data, 1);
+ udelay(550);
+
+ mutex_unlock(&led_mutex);
+ I(" %s ---\n" , __func__);
+
+}
+
+static void lp5521_color_blink(struct i2c_client *client, uint8_t red, uint8_t green, uint8_t blue)
+{
+ uint8_t data = 0x00;
+ int ret;
+ uint8_t mode = 0x00;
+ I(" %s +++ red:%d, green:%d, blue:%d\n" , __func__, red, green, blue);
+ mutex_lock(&led_mutex);
+
+ if (red)
+ mode |= (3 << 4);
+ if (green)
+ mode |= (3 << 2);
+ if (blue)
+ mode |= 3;
+ data = mode & 0x15;
+ ret = i2c_write_block(client, OPRATION_REGISTER, &data, 1);
+ udelay(200);
+
+ if (red) {
+ /* === wait 0.999s === */
+ data = 0x7f;
+ ret = i2c_write_block(client, 0x10, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x11, &data, 1);
+ /* === set red pwm === */
+ data = 0x40;
+ ret = i2c_write_block(client, 0x12, &data, 1);
+
+ ret = i2c_write_block(client, 0x13, &red, 1);
+ /* === wait 0.064s === */
+ data = 0x44;
+ ret = i2c_write_block(client, 0x14, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x15, &data, 1);
+ /* === set pwm to 0 === */
+ data = 0x40;
+ ret = i2c_write_block(client, 0x16, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x17, &data, 1);
+ /* === wait 0.935s === */
+ data = 0x7c;
+ ret = i2c_write_block(client, 0x18, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x19, &data, 1);
+ /* === clear register === */
+ data = 0x00;
+ ret = i2c_write_block(client, 0x1a, &data, 1);
+ ret = i2c_write_block(client, 0x1b, &data, 1);
+ }
+ if (green) {
+ /* === wait 0.999s === */
+ data = 0x7f;
+ ret = i2c_write_block(client, 0x30, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x31, &data, 1);
+ /* === set green pwm === */
+ data = 0x40;
+ ret = i2c_write_block(client, 0x32, &data, 1);
+
+ ret = i2c_write_block(client, 0x33, &green, 1);
+ /* === wait 0.064s === */
+ data = 0x44;
+ ret = i2c_write_block(client, 0x34, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x35, &data, 1);
+ /* === set pwm to 0 === */
+ data = 0x40;
+ ret = i2c_write_block(client, 0x36, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x37, &data, 1);
+ /* === wait 0.935s === */
+ data = 0x7c;
+ ret = i2c_write_block(client, 0x38, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x39, &data, 1);
+ /* === clear register === */
+ data = 0x00;
+ ret = i2c_write_block(client, 0x3a, &data, 1);
+ ret = i2c_write_block(client, 0x3b, &data, 1);
+ }
+ if (blue) {
+ /* === wait 0.999s === */
+ data = 0x7f;
+ ret = i2c_write_block(client, 0x50, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x51, &data, 1);
+ /* === set blue pwm === */
+ data = 0x40;
+ ret = i2c_write_block(client, 0x52, &data, 1);
+
+ ret = i2c_write_block(client, 0x53, &blue, 1);
+ /* === wait 0.064s === */
+ data = 0x44;
+ ret = i2c_write_block(client, 0x54, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x55, &data, 1);
+ /* === set pwm to 0 === */
+ data = 0x40;
+ ret = i2c_write_block(client, 0x56, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x57, &data, 1);
+ /* === wait 0.935s === */
+ data = 0x7c;
+ ret = i2c_write_block(client, 0x58, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x59, &data, 1);
+ /* === clear register === */
+ data = 0x00;
+ ret = i2c_write_block(client, 0x5a, &data, 1);
+ ret = i2c_write_block(client, 0x5b, &data, 1);
+ }
+
+ /* === run program === */
+ data = mode & 0x2a;
+ ret = i2c_write_block(client, OPRATION_REGISTER, &data, 1);
+ udelay(200);
+ data = (mode & 0x2a)|0x40;
+ ret = i2c_write_block(client, ENABLE_REGISTER, &data, 1);
+
+ udelay(550);
+ mutex_unlock(&led_mutex);
+ I(" %s ---\n" , __func__);
+}
+
+static void lp5521_dual_color_blink(struct i2c_client *client)
+{
+ uint8_t data = 0x00;
+ int ret;
+
+ I(" %s +++\n" , __func__);
+ lp5521_led_enable(client);
+ mutex_lock(&led_mutex);
+ data = 0x14;
+ ret = i2c_write_block(client, OPRATION_REGISTER, &data, 1);
+ udelay(200);
+
+
+ /* === set pwm to 200 === */
+ data = 0x40;
+ ret = i2c_write_block(client, 0x10, &data, 1);
+ data = 0xc8;
+ ret = i2c_write_block(client, 0x11, &data, 1);
+ /* === wait 0.064s === */
+ data = 0x44;
+ ret = i2c_write_block(client, 0x12, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x13, &data, 1);
+ /* === set pwm to 0 === */
+ data = 0x40;
+ ret = i2c_write_block(client, 0x14, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x15, &data, 1);
+ /* === wait 0.25s === */
+ data = 0x50;
+ ret = i2c_write_block(client, 0x16, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x17, &data, 1);
+ /* === trigger sg, wg === */
+ data = 0xe1;
+ ret = i2c_write_block(client, 0x18, &data, 1);
+ data = 0x04;
+ ret = i2c_write_block(client, 0x19, &data, 1);
+ /* === clear register === */
+ data = 0x00;
+ ret = i2c_write_block(client, 0x1a, &data, 1);
+ ret = i2c_write_block(client, 0x1b, &data, 1);
+ udelay(550);
+
+ /* === trigger wr === */
+ data = 0xe0;
+ ret = i2c_write_block(client, 0x30, &data, 1);
+ data = 0x80;
+ ret = i2c_write_block(client, 0x31, &data, 1);
+ udelay(550);
+ /* set pwm to 200 */
+ data = 0x40;
+ ret = i2c_write_block(client, 0x32, &data, 1);
+ data = 0xc8;
+ ret = i2c_write_block(client, 0x33, &data, 1);
+ /* === wait 0.064s === */
+ data = 0x44;
+ ret = i2c_write_block(client, 0x34, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x35, &data, 1);
+ /* === set pwm to 0 === */
+ data = 0x40;
+ ret = i2c_write_block(client, 0x36, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x37, &data, 1);
+ /* === wait 0.999s === */
+ data = 0x7f;
+ ret = i2c_write_block(client, 0x38, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x39, &data, 1);
+ /* === wait 0.622s === */
+ data = 0x68;
+ ret = i2c_write_block(client, 0x3a, &data, 1);
+ data = 0x00;
+ ret = i2c_write_block(client, 0x3b, &data, 1);
+ /* === trigger sr === */
+ data = 0xe0;
+ ret = i2c_write_block(client, 0x3c, &data, 1);
+ data = 0x02;
+ ret = i2c_write_block(client, 0x3d, &data, 1);
+ /* === clear register === */
+ data = 0x00;
+ ret = i2c_write_block(client, 0x3e, &data, 1);
+ ret = i2c_write_block(client, 0x3f, &data, 1);
+ udelay(550);
+
+ /* === run program === */
+
+ data = 0x28;
+ ret = i2c_write_block(client, OPRATION_REGISTER, &data, 1);
+ udelay(200);
+
+ data = 0x68;
+ ret = i2c_write_block(client, ENABLE_REGISTER, &data, 1);
+ udelay(550);
+ mutex_unlock(&led_mutex);
+ I(" %s ---\n" , __func__);
+}
+static void lp5521_led_off(struct i2c_client *client)
+{
+ uint8_t data = 0x00;
+ int ret;
+ char data1[1] = {0};
+ struct lp5521_chip *cdata = i2c_get_clientdata(client);
+ struct led_i2c_platform_data *pdata = cdata->pdata;
+
+ I(" %s +++\n" , __func__);
+ if (!chip_enable) {
+ I(" %s return, chip already disable\n" , __func__);
+ return;
+ }
+
+ ret = i2c_read_block(client, 0x00, data1, 1);
+ if (!data1[0]) {
+ I(" %s return, chip already disable\n" , __func__);
+ return;
+ }
+
+ mutex_lock(&led_mutex);
+ /* === reset red green blue === */
+ data = 0x00;
+ ret = i2c_write_block(client, B_PWM_CONTROL, &data, 1);
+ ret = i2c_write_block(client, G_PWM_CONTROL, &data, 1);
+ ret = i2c_write_block(client, R_PWM_CONTROL, &data, 1);
+ ret = i2c_write_block(client, OPRATION_REGISTER, &data, 1);
+ ret = i2c_write_block(client, ENABLE_REGISTER, &data, 1);
+ mutex_unlock(&led_mutex);
+ if (pdata->ena_gpio) {
+ ret = gpio_direction_output(pdata->ena_gpio, 0);
+ if (ret < 0) {
+ pr_err("[LED] %s: gpio_direction_output low failed %d\n", __func__, ret);
+ gpio_free(pdata->ena_gpio);
+ }
+ }
+ chip_enable = 0;
+ I(" %s ---\n" , __func__);
+}
+
+
+static void led_work_func(struct work_struct *work)
+{
+ struct i2c_client *client = private_lp5521_client;
+ struct lp5521_led *ldata;
+
+ I(" %s +++\n" , __func__);
+ ldata = container_of(work, struct lp5521_led, led_work);
+ lp5521_led_off(client);
+ I(" %s ---\n" , __func__);
+}
+
+static void multicolor_work_func(struct work_struct *work)
+{
+ struct i2c_client *client = private_lp5521_client;
+ struct lp5521_led *ldata;
+ int ret;
+ uint8_t data = 0x00;
+
+
+ ldata = container_of(work, struct lp5521_led, led_work_multicolor);
+ I(" %s , Mode = %x\n" , __func__, ldata->Mode);
+
+ if (ldata->Mode < 5)
+ lp5521_led_enable(client);
+
+ if (ldata->Mode == 0) {
+ lp5521_led_off(client);
+ } else if (ldata->Mode == 1) { /* === set red, green, blue direct control === */
+ mutex_lock(&led_mutex);
+ ret = i2c_write_block(client, R_PWM_CONTROL, &ldata->Red, 1);
+ ret = i2c_write_block(client, G_PWM_CONTROL, &ldata->Green, 1);
+ ret = i2c_write_block(client, B_PWM_CONTROL, &ldata->Blue, 1);
+ data = 0x3f;
+ ret = i2c_write_block(client, OPRATION_REGISTER, &data, 1);
+ udelay(200);
+ data = 0x40;
+ ret = i2c_write_block(client, ENABLE_REGISTER, &data, 1);
+ udelay(500);
+ mutex_unlock(&led_mutex);
+ } else if (ldata->Mode == 2) { /* === set short blink === */
+ lp5521_color_blink(client, ldata->Red, ldata->Green, ldata->Blue);
+ } else if (ldata->Mode == 3) { /* === set delayed short blink === */
+ msleep(1000);
+ lp5521_color_blink(client, ldata->Red, ldata->Green, ldata->Blue);
+ } else if (ldata->Mode == 4 && ldata->Red && !ldata->Green && !ldata->Blue) { /* === set red long blink === */
+ lp5521_red_long_blink(client);
+ } else if (ldata->Mode == 5 && ldata->Red && ldata->Green && !ldata->Blue) { /* === set red green blink === */
+ lp5521_dual_color_blink(client);
+ } else if (ldata->Mode == 6) { /* === set breathing === */
+ lp5521_breathing(client, ldata->Red, ldata->Green, ldata->Blue);
+ }
+
+}
+
+static enum alarmtimer_restart led_alarm_handler(struct alarm *alarm, ktime_t now)
+{
+ struct lp5521_led *ldata;
+
+ I(" %s +++\n" , __func__);
+ ldata = container_of(alarm, struct lp5521_led, led_alarm);
+ queue_work(g_led_work_queue, &ldata->led_work);
+ I(" %s ---\n" , __func__);
+ return ALARMTIMER_NORESTART;
+}
+static void led_blink_do_work(struct work_struct *work)
+{
+ struct i2c_client *client = private_lp5521_client;
+ struct lp5521_led *ldata;
+
+ I(" %s +++\n" , __func__);
+ ldata = container_of(work, struct lp5521_led, blink_delayed_work.work);
+ lp5521_color_blink(client, ldata->Red, ldata->Green, ldata->Blue);
+ I(" %s ---\n" , __func__);
+}
+
+static ssize_t lp5521_led_off_timer_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ return sprintf(buf, "%d\n", current_time);
+}
+
+static ssize_t lp5521_led_off_timer_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct led_classdev *led_cdev;
+ struct lp5521_led *ldata;
+ int min, sec;
+ uint16_t off_timer;
+ ktime_t interval;
+ ktime_t next_alarm;
+
+ min = -1;
+ sec = -1;
+ sscanf(buf, "%d %d", &min, &sec);
+ I(" %s , min = %d, sec = %d\n" , __func__, min, sec);
+ if (min < 0 || min > 255)
+ return -EINVAL;
+ if (sec < 0 || sec > 255)
+ return -EINVAL;
+
+ led_cdev = (struct led_classdev *)dev_get_drvdata(dev);
+ ldata = container_of(led_cdev, struct lp5521_led, cdev);
+
+ off_timer = min * 60 + sec;
+
+ alarm_cancel(&ldata->led_alarm);
+ cancel_work_sync(&ldata->led_work);
+ if (off_timer) {
+ interval = ktime_set(off_timer, 0);
+ next_alarm = ktime_add(ktime_get_real(), interval);
+ alarm_start(&ldata->led_alarm, next_alarm);
+ }
+
+ return count;
+}
+
+static DEVICE_ATTR(off_timer, 0644, lp5521_led_off_timer_show,
+ lp5521_led_off_timer_store);
+
+static ssize_t lp5521_led_multi_color_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ return sprintf(buf, "%x\n", ModeRGB);
+}
+
+static ssize_t lp5521_led_multi_color_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct led_classdev *led_cdev;
+ struct lp5521_led *ldata;
+ uint32_t val;
+ if (sscanf(buf, "%x", &val) != 1)
+ return -EINVAL;
+
+
+ led_cdev = (struct led_classdev *)dev_get_drvdata(dev);
+ ldata = container_of(led_cdev, struct lp5521_led, cdev);
+ ldata->Mode = (val & Mode_Mask) >> 24;
+ ldata->Red = (val & Red_Mask) >> 16;
+ ldata->Green = (val & Green_Mask) >> 8;
+ ldata->Blue = val & Blue_Mask;
+ ModeRGB = val;
+ I(" %s , ModeRGB = %x\n" , __func__, val);
+ queue_work(g_led_work_queue, &ldata->led_work_multicolor);
+ return count;
+}
+
+static DEVICE_ATTR(ModeRGB, 0644, lp5521_led_multi_color_show,
+ lp5521_led_multi_color_store);
+
+/* === read/write i2c and control enable pin for debug === */
+static ssize_t lp5521_led_i2c_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ int ret;
+ char data[1] = {0};
+ int i;
+ struct i2c_client *client = private_lp5521_client;
+
+ for (i = 0; i <= 0x6f; i++) {
+ ret = i2c_read_block(client, i, data, 1);
+ I(" %s i2c(%x) = 0x%x\n", __func__, i, data[0]);
+ }
+ return ret;
+}
+
+static ssize_t lp5521_led_i2c_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct i2c_client *client = private_lp5521_client;
+ int i, ret;
+ char *token[10];
+ unsigned long ul_reg, ul_data = 0;
+ uint8_t reg = 0, data;
+ char value[1] = {0};
+ struct lp5521_chip *cdata = i2c_get_clientdata(client);
+ struct led_i2c_platform_data *pdata = cdata->pdata;
+
+ for (i = 0; i < 2; i++) {
+ token[i] = strsep((char **)&buf, " ");
+ D("%s: token[%d] = %s\n", __func__, i, token[i]);
+ }
+ ret = strict_strtoul(token[0], 16, &ul_reg);
+ ret = strict_strtoul(token[1], 16, &ul_data);
+
+ reg = ul_reg;
+ data = ul_data;
+
+ if (reg < 0x6F) {
+ ret = i2c_write_block(client, reg, &data, 1);
+ ret = i2c_read_block(client, reg, value, 1);
+ I(" %s , ret = %d, Set REG=0x%x, data=0x%x\n" , __func__, ret, reg, data);
+ ret = i2c_read_block(client, reg, value, 1);
+ I(" %s , ret = %d, Get REG=0x%x, data=0x%x\n" , __func__, ret, reg, value[0]);
+ }
+ if (reg == 0x99) {
+ if (data == 1) {
+ I("%s , pull up enable pin\n", __func__);
+ if (pdata->ena_gpio) {
+ ret = gpio_direction_output(pdata->ena_gpio, 1);
+ if (ret < 0) {
+ pr_err("[LED] %s: gpio_direction_output high failed %d\n", __func__, ret);
+ gpio_free(pdata->ena_gpio);
+ }
+ }
+ } else if (data == 0) {
+ I("%s , pull down enable pin\n", __func__);
+ if (pdata->ena_gpio) {
+ ret = gpio_direction_output(pdata->ena_gpio, 0);
+ if (ret < 0) {
+ pr_err("[LED] %s: gpio_direction_output low failed %d\n", __func__, ret);
+ gpio_free(pdata->ena_gpio);
+ }
+ }
+ }
+ }
+ return count;
+}
+
+static DEVICE_ATTR(i2c, 0644, lp5521_led_i2c_show, lp5521_led_i2c_store);
+
+static int lp5521_parse_dt(struct device *dev, struct led_i2c_platform_data *pdata)
+{
+ struct property *prop;
+ struct device_node *dt = dev->of_node;
+ prop = of_find_property(dt, "lp5521,lp5521_en", NULL);
+ if (prop) {
+ pdata->ena_gpio = of_get_named_gpio(dt, "lp5521,lp5521_en", 0);
+ }
+ prop = of_find_property(dt, "lp5521,num_leds", NULL);
+ if (prop) {
+ of_property_read_u32(dt, "lp5521,num_leds", &pdata->num_leds);
+ }
+ return 0;
+}
+
+static int lp5521_led_probe(struct i2c_client *client
+ , const struct i2c_device_id *id)
+{
+ struct device *dev = &client->dev;
+ struct lp5521_chip *cdata;
+ struct led_i2c_platform_data *pdata;
+ int ret = 0;
+ int i;
+
+ printk("[LED][PROBE] led driver probe +++\n");
+
+ /* === init platform and client data === */
+ cdata = devm_kzalloc(dev, sizeof(struct lp5521_chip), GFP_KERNEL);
+ if (!cdata) {
+ ret = -ENOMEM;
+ dev_err(&client->dev, "[LED][PROBE_ERR] failed on allocate cdata\n");
+ goto err_cdata;
+ }
+
+ i2c_set_clientdata(client, cdata);
+ cdata->client = client;
+ pdata = devm_kzalloc(dev, sizeof(*cdata->pdata), GFP_KERNEL);
+ if (!pdata) {
+ ret = -ENOMEM;
+ dev_err(&client->dev, "[LED][PROBE_ERR] failed on allocate pdata\n");
+ goto err_exit;
+ }
+ cdata->pdata = pdata;
+
+ if (client->dev.platform_data) {
+ memcpy(pdata, client->dev.platform_data, sizeof(*pdata));
+ } else {
+ ret = lp5521_parse_dt(&client->dev, pdata);
+ if (ret < 0) {
+ dev_err(&client->dev, "[LED][PROBE_ERR] failed on get pdata\n");
+ goto err_exit;
+ }
+ }
+ led_rw_delay = 5;
+ /* === led enable pin === */
+ if (pdata->ena_gpio) {
+ ret = gpio_request(pdata->ena_gpio, "led_enable");
+ if (ret < 0) {
+ pr_err("[LED] %s: gpio_request failed %d\n", __func__, ret);
+ return ret;
+ }
+ ret = gpio_direction_output(pdata->ena_gpio, 1);
+ if (ret < 0) {
+ pr_err("[LED] %s: gpio_direction_output failed %d\n", __func__, ret);
+ gpio_free(pdata->ena_gpio);
+ return ret;
+ }
+ }
+ /* === led trigger signal pin === */
+ if (pdata->tri_gpio) {
+ ret = gpio_request(pdata->tri_gpio, "led_trigger");
+ if (ret < 0) {
+ pr_err("[LED] %s: gpio_request failed %d\n", __func__, ret);
+ return ret;
+ }
+ ret = gpio_direction_output(pdata->tri_gpio, 0);
+ if (ret < 0) {
+ pr_err("[LED] %s: gpio_direction_output failed %d\n", __func__, ret);
+ gpio_free(pdata->tri_gpio);
+ return ret;
+ }
+ }
+ private_lp5521_client = client;
+ g_led_work_queue = create_workqueue("led");
+ if (!g_led_work_queue) {
+ ret = -ENOMEM;
+ pr_err("[LED] %s: create workqueue fail %d\n", __func__, ret);
+ goto err_create_work_queue;
+ }
+ for (i = 0; i < pdata->num_leds; i++) {
+ cdata->leds[i].cdev.name = "indicator";
+ ret = led_classdev_register(dev, &cdata->leds[i].cdev);
+ if (ret < 0) {
+ dev_err(dev, "couldn't register led[%d]\n", i);
+ return ret;
+ }
+ ret = device_create_file(cdata->leds[i].cdev.dev, &dev_attr_ModeRGB);
+ if (ret < 0) {
+ pr_err("%s: failed on create attr ModeRGB [%d]\n", __func__, i);
+ goto err_register_attr_ModeRGB;
+ }
+ ret = device_create_file(cdata->leds[i].cdev.dev, &dev_attr_off_timer);
+ if (ret < 0) {
+ pr_err("%s: failed on create attr off_timer [%d]\n", __func__, i);
+ goto err_register_attr_off_timer;
+ }
+ ret = device_create_file(cdata->leds[i].cdev.dev, &dev_attr_i2c);
+ if (ret < 0) {
+ pr_err("%s: failed on create attr i2c [%d]\n", __func__, i);
+ }
+
+ INIT_WORK(&cdata->leds[i].led_work, led_work_func);
+ INIT_WORK(&cdata->leds[i].led_work_multicolor, multicolor_work_func);
+ INIT_DELAYED_WORK(&cdata->leds[i].blink_delayed_work, led_blink_do_work);
+ alarm_init(&cdata->leds[i].led_alarm,
+ ALARM_REALTIME,
+ led_alarm_handler);
+ }
+
+ mutex_init(&cdata->led_i2c_rw_mutex);
+ mutex_init(&led_mutex);
+ printk("[LED][PROBE] led driver probe ---\n");
+ return 0;
+
+
+err_register_attr_off_timer:
+ for (i = 0; i < pdata->num_leds; i++) {
+ device_remove_file(cdata->leds[i].cdev.dev, &dev_attr_off_timer);
+ }
+err_register_attr_ModeRGB:
+ for (i = 0; i < pdata->num_leds; i++) {
+ device_remove_file(cdata->leds[i].cdev.dev, &dev_attr_ModeRGB);
+ }
+err_create_work_queue:
+err_exit:
+err_cdata:
+ return ret;
+}
+
+static const struct i2c_device_id led_i2c_id[] = {
+ { "LP5521-LED", 0 },
+ {}
+};
+MODULE_DEVICE_TABLE(i2c, led_i2c_id);
+
+#ifdef CONFIG_OF
+static const struct of_device_id lp5521_mttable[] = {
+ { .compatible = "LP5521-LED"},
+ { },
+};
+#endif
+static struct i2c_driver led_i2c_driver = {
+ .driver = {
+ .owner = THIS_MODULE,
+ .name = "LP5521-LED",
+#ifdef CONFIG_OF
+ .of_match_table = lp5521_mttable,
+#endif
+ },
+ .id_table = led_i2c_id,
+ .probe = lp5521_led_probe,
+};
+module_i2c_driver(led_i2c_driver);
+#ifndef CONFIG_OF
+static int __init lp5521_led_init(void)
+{
+ int ret;
+
+ ret = i2c_add_driver(&led_i2c_driver);
+ if (ret)
+ return ret;
+ return 0;
+}
+
+static void __exit lp5521_led_exit(void)
+{
+ i2c_del_driver(&led_i2c_driver);
+}
+
+module_init(lp5521_led_init);
+module_exit(lp5521_led_exit);
+#endif
+MODULE_AUTHOR("<ShihHao_Shiung@htc.com>, <Dirk_Chang@htc.com>");
+MODULE_DESCRIPTION("LP5521 LED driver");
+
diff --git a/drivers/leds/leds-tps61310.c b/drivers/leds/leds-tps61310.c
new file mode 100644
index 0000000..4ae4b59
--- /dev/null
+++ b/drivers/leds/leds-tps61310.c
@@ -0,0 +1,1613 @@
+/* drivers/leds/leds-tps61310.c
+ *
+ * Copyright (C) 2008-2009 HTC Corporation.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/i2c.h>
+#include <linux/delay.h>
+#include <linux/platform_device.h>
+#include <linux/workqueue.h>
+#include <linux/leds.h>
+#include <linux/slab.h>
+#include <linux/gpio.h>
+#include <linux/of_gpio.h>
+#include <linux/module.h>
+
+#ifdef CONFIG_HAS_EARLYSUSPEND
+#include <linux/earlysuspend.h>
+#endif
+
+#define FLT_DBG_LOG(fmt, ...) \
+ printk(KERN_DEBUG "[FLT]TPS " fmt, ##__VA_ARGS__)
+#define FLT_INFO_LOG(fmt, ...) \
+ printk(KERN_INFO "[FLT]TPS " fmt, ##__VA_ARGS__)
+#define FLT_ERR_LOG(fmt, ...) \
+ printk(KERN_ERR "[FLT][ERR]TPS " fmt, ##__VA_ARGS__)
+
+#define FLASHLIGHT_NAME "flashlight"
+#define TPS61310_RETRY_COUNT 10
+
+enum flashlight_mode_flags {
+ FL_MODE_OFF = 0,
+ FL_MODE_TORCH,
+ FL_MODE_FLASH,
+ FL_MODE_PRE_FLASH,
+ FL_MODE_TORCH_LED_A,
+ FL_MODE_TORCH_LED_B,
+ FL_MODE_TORCH_LEVEL_1,
+ FL_MODE_TORCH_LEVEL_2,
+ FL_MODE_CAMERA_EFFECT_FLASH,
+ FL_MODE_CAMERA_EFFECT_PRE_FLASH,
+ FL_MODE_FLASH_LEVEL1,
+ FL_MODE_FLASH_LEVEL2,
+ FL_MODE_FLASH_LEVEL3,
+ FL_MODE_FLASH_LEVEL4,
+ FL_MODE_FLASH_LEVEL5,
+ FL_MODE_FLASH_LEVEL6,
+ FL_MODE_FLASH_LEVEL7,
+ FL_MODE_VIDEO_TORCH = 30,
+ FL_MODE_VIDEO_TORCH_1,
+ FL_MODE_VIDEO_TORCH_2,
+ FL_MODE_VIDEO_TORCH_3,
+ FL_MODE_VIDEO_TORCH_4,
+};
+
+struct TPS61310_flashlight_platform_data {
+ void (*gpio_init) (void);
+ uint32_t flash_duration_ms;
+ uint32_t led_count; /* 0: 1 LED, 1: 2 LED */
+ uint32_t tps61310_strb0;
+ uint32_t tps61310_strb1;
+ uint32_t tps61310_reset;
+ uint8_t mode_pin_suspend_state_low;
+ uint32_t enable_FLT_1500mA;
+ uint32_t disable_tx_mask;
+ uint32_t power_save;
+ uint32_t power_save_2;
+};
+
+struct tps61310_data {
+ struct led_classdev fl_lcdev;
+#ifdef CONFIG_HAS_EARLYSUSPEND
+ struct early_suspend fl_early_suspend;
+#endif
+ enum flashlight_mode_flags mode_status;
+ uint32_t flash_sw_timeout;
+ struct mutex tps61310_data_mutex;
+ uint32_t strb0;
+ uint32_t strb1;
+ uint32_t reset;
+ uint8_t led_count;
+ uint8_t mode_pin_suspend_state_low;
+ uint8_t enable_FLT_1500mA;
+ uint8_t disable_tx_mask;
+ uint32_t power_save;
+ uint32_t power_save_2;
+ struct tps61310_led_data *led_array;
+};
+
+static struct i2c_client *this_client;
+static struct tps61310_data *this_tps61310;
+static struct delayed_work tps61310_delayed_work;
+static struct workqueue_struct *tps61310_work_queue;
+static struct mutex tps61310_mutex;
+
+static int switch_state = 1;
+static int retry;
+static int reg_init_fail;
+
+static int regaddr;
+static int regdata;
+static int reg_buffered[256];
+
+static int tps61310_i2c_command(uint8_t, uint8_t);
+static int tps61310_flashlight_control(int);
+static int tps61310_flashlight_mode(int);
+
+static ssize_t sw_timeout_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ return sprintf(buf, "%d\n", this_tps61310->flash_sw_timeout);
+}
+static ssize_t sw_timeout_store(
+ struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int input;
+
+ if (sscanf(buf, "%d", &input) != 1)
+ return -EINVAL;
+ FLT_INFO_LOG("%s: %d\n", __func__, input);
+ this_tps61310->flash_sw_timeout = input;
+ return size;
+}
+static DEVICE_ATTR(sw_timeout, S_IRUGO | S_IWUSR, sw_timeout_show, sw_timeout_store);
+
+static ssize_t regaddr_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ return sprintf(buf, "0x%02x\n", regaddr);
+}
+static ssize_t regaddr_store(
+ struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int input;
+
+ if (sscanf(buf, "%x", &input) != 1)
+ return -EINVAL;
+ FLT_INFO_LOG("%s: 0x%x\n", __func__, input);
+ /* bound the index used into reg_buffered[256] */
+ regaddr = input & 0xFF;
+ return size;
+}
+static DEVICE_ATTR(regaddr, S_IRUGO | S_IWUSR, regaddr_show, regaddr_store);
+
+static ssize_t regdata_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ return sprintf(buf, "0x%02x\n", reg_buffered[regaddr]);
+}
+static ssize_t regdata_store(
+ struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int input;
+
+ if (sscanf(buf, "%x", &input) != 1)
+ return -EINVAL;
+ FLT_INFO_LOG("%s: 0x%x\n", __func__, input);
+ regdata = input;
+
+ tps61310_i2c_command(regaddr, regdata);
+
+ return size;
+}
+static DEVICE_ATTR(regdata, S_IRUGO | S_IWUSR, regdata_show, regdata_store);
+
+static ssize_t switch_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ return sprintf(buf, "switch status:%d \n", switch_state);
+}
+
+static ssize_t switch_store(
+ struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int switch_status;
+
+ if (sscanf(buf, "%d", &switch_status) != 1)
+ return -EINVAL;
+ FLT_INFO_LOG("%s: %d\n", __func__, switch_status);
+ switch_state = switch_status;
+ return size;
+}
+
+static DEVICE_ATTR(function_switch, S_IRUGO | S_IWUSR, switch_show, switch_store);
+
+static ssize_t max_current_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ if (this_tps61310->enable_FLT_1500mA)
+ return sprintf(buf, "1500\n");
+ else
+ return sprintf(buf, "750\n");
+}
+static DEVICE_ATTR(max_current, S_IRUGO, max_current_show, NULL);
+static ssize_t flash_store(
+ struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int val;
+
+ if (sscanf(buf, "%d", &val) != 1)
+ return -EINVAL;
+ FLT_INFO_LOG("%s: %d\n", __func__, val);
+ tps61310_flashlight_mode(val);
+ return size;
+}
+static DEVICE_ATTR(flash, S_IWUSR, NULL, flash_store);
+
+static int TPS61310_I2C_TxData(char *txData, int length)
+{
+ uint8_t loop_i;
+ struct i2c_msg msg[] = {
+ {
+ .addr = this_client->addr,
+ .flags = 0,
+ .len = length,
+ .buf = txData,
+ },
+ };
+
+ for (loop_i = 0; loop_i < TPS61310_RETRY_COUNT; loop_i++) {
+ if (i2c_transfer(this_client->adapter, msg, 1) > 0)
+ break;
+
+ msleep(10); /* sleepable context; avoid a 10 ms busy-wait per retry */
+ }
+
+ if (loop_i >= TPS61310_RETRY_COUNT) {
+ FLT_ERR_LOG("%s retry over %d\n", __func__,
+ TPS61310_RETRY_COUNT);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+static int tps61310_i2c_command(uint8_t address, uint8_t data)
+{
+ uint8_t buffer[2];
+ int ret;
+ int err = 0;
+
+ reg_buffered[address] = data;
+
+ buffer[0] = address;
+ buffer[1] = data;
+ ret = TPS61310_I2C_TxData(buffer, 2);
+ if (ret < 0) {
+ FLT_ERR_LOG("%s error\n", __func__);
+ if (this_tps61310->reset) {
+ FLT_INFO_LOG("reset register\n");
+ gpio_set_value_cansleep(this_tps61310->reset, 0);
+ mdelay(10);
+ gpio_set_value_cansleep(this_tps61310->reset, 1);
+ if (address != 0x07 && address != 0x04) {
+ if (this_tps61310->enable_FLT_1500mA) {
+ err |= tps61310_i2c_command(0x07, 0x46);
+ err |= tps61310_i2c_command(0x04, 0x10);
+ } else {
+ /* voltage drop monitor*/
+ err |= tps61310_i2c_command(0x07, 0xF6);
+ }
+ if (err)
+ reg_init_fail++;
+ } else {
+ reg_init_fail++;
+ }
+ }
+ return ret;
+ }
+ return 0;
+}
+
+static int flashlight_turn_off(void)
+{
+ int status;
+ FLT_INFO_LOG("%s\n", __func__);
+ gpio_set_value_cansleep(this_tps61310->strb0, 0);
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ tps61310_i2c_command(0x02, 0x08);
+ tps61310_i2c_command(0x01, 0x00);
+ FLT_INFO_LOG("%s %d\n", __func__,this_tps61310->mode_status);
+ /*
+ * Avoid excess current draw after a 1.5 A flash by toggling the
+ * mode-pin (power-save) function.
+ */
+ if (this_tps61310->power_save) {
+ status = this_tps61310->mode_status;
+ if (status == FL_MODE_FLASH ||
+ (status >= FL_MODE_FLASH_LEVEL1 && status <= FL_MODE_FLASH_LEVEL7)) {
+ FLT_INFO_LOG("Disable power saving\n");
+ gpio_set_value_cansleep(this_tps61310->power_save, 0);
+ } else if (status == FL_MODE_PRE_FLASH) {
+ FLT_INFO_LOG("Enable power saving\n");
+ gpio_set_value_cansleep(this_tps61310->power_save, 1);
+ }
+ }
+ if (this_tps61310->power_save_2) {
+ status = this_tps61310->mode_status;
+ if (status == FL_MODE_FLASH ||
+ (status >= FL_MODE_FLASH_LEVEL1 && status <= FL_MODE_FLASH_LEVEL7)) {
+ FLT_INFO_LOG("Disable power saving\n");
+ gpio_set_value_cansleep(this_tps61310->power_save_2, 0);
+ } else if (status == FL_MODE_PRE_FLASH) {
+ FLT_INFO_LOG("Enable power saving\n");
+ gpio_set_value_cansleep(this_tps61310->power_save_2, 1);
+ }
+ }
+ this_tps61310->mode_status = FL_MODE_OFF;
+ return 0;
+}
+
+/* Retry a failed control sequence once before giving up. */
+static void retry_flashlight_control(int err, int mode)
+{
+ if (err && !retry) {
+ FLT_INFO_LOG("%s error once\n", __func__);
+ retry++;
+ mutex_unlock(&tps61310_mutex);
+ tps61310_flashlight_control(mode);
+ mutex_lock(&tps61310_mutex);
+ } else if (err) {
+ FLT_INFO_LOG("%s error twice\n", __func__);
+ retry = 0;
+ }
+}
+
+static int tps61310_flashlight_mode(int mode)
+{
+ int err = 0;
+ uint8_t current_hex = 0x0;
+ FLT_INFO_LOG("camera flash current %d\n", mode);
+ mutex_lock(&tps61310_mutex);
+ if (this_tps61310->reset && reg_init_fail) {
+ reg_init_fail = 0;
+ if (this_tps61310->enable_FLT_1500mA) {
+ err |= tps61310_i2c_command(0x07, 0x46);
+ err |= tps61310_i2c_command(0x04, 0x10);
+ } else {
+ /* voltage drop monitor*/
+ err |= tps61310_i2c_command(0x07, 0xF6);
+ }
+ }
+ if (err) {
+ FLT_ERR_LOG("%s error init register\n", __func__);
+ reg_init_fail = 0;
+ mutex_unlock(&tps61310_mutex);
+ /* err may hold OR-ed negative codes; return a plain errno */
+ return -EIO;
+ }
+#if defined CONFIG_FLASHLIGHT_1500mA
+ if (mode == 0)
+ flashlight_turn_off();
+ else if (mode > 0) {
+ FLT_INFO_LOG("flash 1.5A\n");
+ if (mode >= 750) {
+ current_hex = (mode - 750) / 50;
+ current_hex += 0x80;
+ tps61310_i2c_command(0x05, 0x6F);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x01, 0x9E);
+ tps61310_i2c_command(0x02, current_hex);
+ } else {
+ current_hex = mode / 25;
+ current_hex += 0x80;
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x01, current_hex);
+ }
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+ gpio_set_value_cansleep(this_tps61310->strb0, 1);
+ queue_delayed_work(tps61310_work_queue, &tps61310_delayed_work,
+ msecs_to_jiffies(this_tps61310->flash_sw_timeout));
+ }
+#else /* !CONFIG_FLASHLIGHT_1500mA */
+ if (mode == 0)
+ flashlight_turn_off();
+ else if (mode > 0) {
+ current_hex = mode / 25;
+ current_hex += 0x80;
+ tps61310_i2c_command(0x05, 0x6F);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x01, current_hex);
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+ gpio_set_value_cansleep(this_tps61310->strb0, 1);
+ queue_delayed_work(tps61310_work_queue, &tps61310_delayed_work,
+ msecs_to_jiffies(this_tps61310->flash_sw_timeout));
+ }
+#endif
+ mutex_unlock(&tps61310_mutex);
+ return 0;
+}
+
+static int tps61310_flashlight_control(int mode)
+{
+ int ret = 0;
+ int err = 0;
+
+ mutex_lock(&tps61310_mutex);
+ if (this_tps61310->reset && reg_init_fail) {
+ reg_init_fail = 0;
+ if (this_tps61310->enable_FLT_1500mA) {
+ err |= tps61310_i2c_command(0x07, 0x46);
+ err |= tps61310_i2c_command(0x04, 0x10);
+ } else {
+ /* voltage drop monitor*/
+ err |= tps61310_i2c_command(0x07, 0xF6);
+ }
+ }
+ if (err) {
+ FLT_ERR_LOG("%s error init register\n", __func__);
+ reg_init_fail = 0;
+ mutex_unlock(&tps61310_mutex);
+ /* err may hold OR-ed negative codes; return a plain errno */
+ return -EIO;
+ }
+ if (this_tps61310->led_count == 1) {
+ if (this_tps61310->enable_FLT_1500mA) {
+#if defined CONFIG_FLASHLIGHT_1500mA
+ switch (mode) {
+ case FL_MODE_OFF:
+ flashlight_turn_off();
+ break;
+ case FL_MODE_FLASH:
+ FLT_INFO_LOG("flash 1.5A\n");
+ tps61310_i2c_command(0x05, 0x6F);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x01, 0x9E);
+ tps61310_i2c_command(0x02, 0x8F);
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+ gpio_set_value_cansleep(this_tps61310->strb0, 1);
+ queue_delayed_work(tps61310_work_queue, &tps61310_delayed_work,
+ msecs_to_jiffies(this_tps61310->flash_sw_timeout));
+ break;
+ case FL_MODE_FLASH_LEVEL1:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x01, 0x86);
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+ gpio_set_value_cansleep(this_tps61310->strb0, 1);
+ queue_delayed_work(tps61310_work_queue, &tps61310_delayed_work,
+ msecs_to_jiffies(this_tps61310->flash_sw_timeout));
+ break;
+ case FL_MODE_FLASH_LEVEL2:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x01, 0x88);
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+ gpio_set_value_cansleep(this_tps61310->strb0, 1);
+ queue_delayed_work(tps61310_work_queue, &tps61310_delayed_work,
+ msecs_to_jiffies(this_tps61310->flash_sw_timeout));
+ break;
+ case FL_MODE_FLASH_LEVEL3:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x01, 0x8C);
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+ gpio_set_value_cansleep(this_tps61310->strb0, 1);
+ queue_delayed_work(tps61310_work_queue, &tps61310_delayed_work,
+ msecs_to_jiffies(this_tps61310->flash_sw_timeout));
+ break;
+ case FL_MODE_FLASH_LEVEL4:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x01, 0x90);
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+ gpio_set_value_cansleep(this_tps61310->strb0, 1);
+ queue_delayed_work(tps61310_work_queue, &tps61310_delayed_work,
+ msecs_to_jiffies(this_tps61310->flash_sw_timeout));
+ break;
+ case FL_MODE_FLASH_LEVEL5:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x01, 0x94);
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+ gpio_set_value_cansleep(this_tps61310->strb0, 1);
+ queue_delayed_work(tps61310_work_queue, &tps61310_delayed_work,
+ msecs_to_jiffies(this_tps61310->flash_sw_timeout));
+ break;
+ case FL_MODE_FLASH_LEVEL6:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x01, 0x98);
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+ gpio_set_value_cansleep(this_tps61310->strb0, 1);
+ queue_delayed_work(tps61310_work_queue, &tps61310_delayed_work,
+ msecs_to_jiffies(this_tps61310->flash_sw_timeout));
+ break;
+ case FL_MODE_FLASH_LEVEL7:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x01, 0x9C);
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+ gpio_set_value_cansleep(this_tps61310->strb0, 1);
+ queue_delayed_work(tps61310_work_queue, &tps61310_delayed_work,
+ msecs_to_jiffies(this_tps61310->flash_sw_timeout));
+ break;
+ case FL_MODE_PRE_FLASH:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x04);
+ gpio_set_value_cansleep(this_tps61310->strb0, 0);
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ tps61310_i2c_command(0x01, 0x40);
+ break;
+ case FL_MODE_VIDEO_TORCH:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x04);
+ gpio_set_value_cansleep(this_tps61310->strb0, 0);
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ tps61310_i2c_command(0x01, 0x40);
+ break;
+ case FL_MODE_VIDEO_TORCH_1:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x01);
+ gpio_set_value_cansleep(this_tps61310->strb0, 0);
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ tps61310_i2c_command(0x01, 0x40);
+ break;
+ case FL_MODE_VIDEO_TORCH_2:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x02);
+ gpio_set_value_cansleep(this_tps61310->strb0, 0);
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ tps61310_i2c_command(0x01, 0x40);
+ break;
+ case FL_MODE_VIDEO_TORCH_3:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x03);
+ gpio_set_value_cansleep(this_tps61310->strb0, 0);
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ tps61310_i2c_command(0x01, 0x40);
+ break;
+ case FL_MODE_VIDEO_TORCH_4:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x04);
+ gpio_set_value_cansleep(this_tps61310->strb0, 0);
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ tps61310_i2c_command(0x01, 0x40);
+ break;
+ case FL_MODE_TORCH:
+ gpio_set_value_cansleep(this_tps61310->strb0, 0);
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ err |= tps61310_i2c_command(0x05, 0x6A);
+ err |= tps61310_i2c_command(0x00, 0x05);
+ err |= tps61310_i2c_command(0x01, 0x40);
+ if (this_tps61310->reset)
+ retry_flashlight_control(err, mode);
+ break;
+ case FL_MODE_TORCH_LEVEL_1:
+ gpio_set_value_cansleep(this_tps61310->strb0, 0);
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ err |= tps61310_i2c_command(0x05, 0x6A);
+ err |= tps61310_i2c_command(0x00, 0x01);
+ err |= tps61310_i2c_command(0x01, 0x40);
+ if (this_tps61310->reset)
+ retry_flashlight_control(err, mode);
+ break;
+ case FL_MODE_TORCH_LEVEL_2:
+ gpio_set_value_cansleep(this_tps61310->strb0, 0);
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ err |= tps61310_i2c_command(0x05, 0x6A);
+ err |= tps61310_i2c_command(0x00, 0x03);
+ err |= tps61310_i2c_command(0x01, 0x40);
+ if (this_tps61310->reset)
+ retry_flashlight_control(err, mode);
+ break;
+ default:
+ FLT_ERR_LOG("%s: unknown flash_light flags: %d\n",
+ __func__, mode);
+ ret = -EINVAL;
+ break;
+ }
+#endif
+ } else {
+#if !defined CONFIG_FLASHLIGHT_1500mA
+ switch (mode) {
+ case FL_MODE_OFF:
+ flashlight_turn_off();
+ break;
+ case FL_MODE_FLASH:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x01, 0x9E);
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+ gpio_set_value_cansleep(this_tps61310->strb0, 1);
+ queue_delayed_work(tps61310_work_queue, &tps61310_delayed_work,
+ msecs_to_jiffies(this_tps61310->flash_sw_timeout));
+ break;
+ case FL_MODE_FLASH_LEVEL1:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x01, 0x86);
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+ gpio_set_value_cansleep(this_tps61310->strb0, 1);
+ queue_delayed_work(tps61310_work_queue, &tps61310_delayed_work,
+ msecs_to_jiffies(this_tps61310->flash_sw_timeout));
+ break;
+ case FL_MODE_FLASH_LEVEL2:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x01, 0x88);
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+ gpio_set_value_cansleep(this_tps61310->strb0, 1);
+ queue_delayed_work(tps61310_work_queue, &tps61310_delayed_work,
+ msecs_to_jiffies(this_tps61310->flash_sw_timeout));
+ break;
+ case FL_MODE_FLASH_LEVEL3:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x01, 0x8C);
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+ gpio_set_value_cansleep(this_tps61310->strb0, 1);
+ queue_delayed_work(tps61310_work_queue, &tps61310_delayed_work,
+ msecs_to_jiffies(this_tps61310->flash_sw_timeout));
+ break;
+ case FL_MODE_FLASH_LEVEL4:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x01, 0x90);
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+ gpio_set_value_cansleep(this_tps61310->strb0, 1);
+ queue_delayed_work(tps61310_work_queue, &tps61310_delayed_work,
+ msecs_to_jiffies(this_tps61310->flash_sw_timeout));
+ break;
+ case FL_MODE_FLASH_LEVEL5:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x01, 0x94);
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+ gpio_set_value_cansleep(this_tps61310->strb0, 1);
+ queue_delayed_work(tps61310_work_queue, &tps61310_delayed_work,
+ msecs_to_jiffies(this_tps61310->flash_sw_timeout));
+ break;
+ case FL_MODE_FLASH_LEVEL6:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x01, 0x98);
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+ gpio_set_value_cansleep(this_tps61310->strb0, 1);
+ queue_delayed_work(tps61310_work_queue, &tps61310_delayed_work,
+ msecs_to_jiffies(this_tps61310->flash_sw_timeout));
+ break;
+ case FL_MODE_FLASH_LEVEL7:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x01, 0x9C);
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+ gpio_set_value_cansleep(this_tps61310->strb0, 1);
+ queue_delayed_work(tps61310_work_queue, &tps61310_delayed_work,
+ msecs_to_jiffies(this_tps61310->flash_sw_timeout));
+ break;
+ case FL_MODE_PRE_FLASH:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x04);
+ gpio_set_value_cansleep(this_tps61310->strb0, 0);
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ tps61310_i2c_command(0x01, 0x40);
+ break;
+ case FL_MODE_VIDEO_TORCH:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x04);
+ gpio_set_value_cansleep(this_tps61310->strb0, 0);
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ tps61310_i2c_command(0x01, 0x40);
+ break;
+ case FL_MODE_VIDEO_TORCH_1:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x01);
+ gpio_set_value_cansleep(this_tps61310->strb0, 0);
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ tps61310_i2c_command(0x01, 0x40);
+ break;
+ case FL_MODE_VIDEO_TORCH_2:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x02);
+ gpio_set_value_cansleep(this_tps61310->strb0, 0);
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ tps61310_i2c_command(0x01, 0x40);
+ break;
+ case FL_MODE_VIDEO_TORCH_3:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x03);
+ gpio_set_value_cansleep(this_tps61310->strb0, 0);
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ tps61310_i2c_command(0x01, 0x40);
+ break;
+ case FL_MODE_VIDEO_TORCH_4:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x04);
+ gpio_set_value_cansleep(this_tps61310->strb0, 0);
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ tps61310_i2c_command(0x01, 0x40);
+ break;
+ case FL_MODE_TORCH:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x05);
+ gpio_set_value_cansleep(this_tps61310->strb0, 0);
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ tps61310_i2c_command(0x01, 0x40);
+ break;
+ case FL_MODE_TORCH_LEVEL_1:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x01);
+ gpio_set_value_cansleep(this_tps61310->strb0, 0);
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ tps61310_i2c_command(0x01, 0x40);
+ break;
+ case FL_MODE_TORCH_LEVEL_2:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x03);
+ gpio_set_value_cansleep(this_tps61310->strb0, 0);
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ tps61310_i2c_command(0x01, 0x40);
+ break;
+ default:
+ FLT_ERR_LOG("%s: unknown flash_light flags: %d\n",
+ __func__, mode);
+ ret = -EINVAL;
+ break;
+ }
+#endif
+ }
+ } else if (this_tps61310->led_count == 2) {
+#if defined CONFIG_TWO_FLASHLIGHT
+ switch (mode) {
+ case FL_MODE_OFF:
+ flashlight_turn_off();
+ break;
+ case FL_MODE_FLASH:
+ tps61310_i2c_command(0x05, 0x6B);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x02, 0x90);
+ tps61310_i2c_command(0x01, 0x90);
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+ gpio_set_value_cansleep(this_tps61310->strb0, 1);
+ queue_delayed_work(tps61310_work_queue, &tps61310_delayed_work,
+ msecs_to_jiffies(this_tps61310->flash_sw_timeout));
+ break;
+ case FL_MODE_FLASH_LEVEL1:
+ tps61310_i2c_command(0x05, 0x6B);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x02, 0x83);
+ tps61310_i2c_command(0x01, 0x83);
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+ gpio_set_value_cansleep(this_tps61310->strb0, 1);
+ queue_delayed_work(tps61310_work_queue, &tps61310_delayed_work,
+ msecs_to_jiffies(this_tps61310->flash_sw_timeout));
+ break;
+ case FL_MODE_FLASH_LEVEL2:
+ tps61310_i2c_command(0x05, 0x6B);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x02, 0x84);
+ tps61310_i2c_command(0x01, 0x84);
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+ gpio_set_value_cansleep(this_tps61310->strb0, 1);
+ queue_delayed_work(tps61310_work_queue, &tps61310_delayed_work,
+ msecs_to_jiffies(this_tps61310->flash_sw_timeout));
+ break;
+ case FL_MODE_FLASH_LEVEL3:
+ tps61310_i2c_command(0x05, 0x6B);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x02, 0x86);
+ tps61310_i2c_command(0x01, 0x86);
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+ gpio_set_value_cansleep(this_tps61310->strb0, 1);
+ queue_delayed_work(tps61310_work_queue, &tps61310_delayed_work,
+ msecs_to_jiffies(this_tps61310->flash_sw_timeout));
+ break;
+ case FL_MODE_FLASH_LEVEL4:
+ tps61310_i2c_command(0x05, 0x6B);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x02, 0x88);
+ tps61310_i2c_command(0x01, 0x88);
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+ gpio_set_value_cansleep(this_tps61310->strb0, 1);
+ queue_delayed_work(tps61310_work_queue, &tps61310_delayed_work,
+ msecs_to_jiffies(this_tps61310->flash_sw_timeout));
+ break;
+ case FL_MODE_FLASH_LEVEL5:
+ tps61310_i2c_command(0x05, 0x6B);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x02, 0x8A);
+ tps61310_i2c_command(0x01, 0x8A);
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+ gpio_set_value_cansleep(this_tps61310->strb0, 1);
+ queue_delayed_work(tps61310_work_queue, &tps61310_delayed_work,
+ msecs_to_jiffies(this_tps61310->flash_sw_timeout));
+ break;
+ case FL_MODE_FLASH_LEVEL6:
+ tps61310_i2c_command(0x05, 0x6B);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x02, 0x8C);
+ tps61310_i2c_command(0x01, 0x8C);
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+ gpio_set_value_cansleep(this_tps61310->strb0, 1);
+ queue_delayed_work(tps61310_work_queue, &tps61310_delayed_work,
+ msecs_to_jiffies(this_tps61310->flash_sw_timeout));
+ break;
+ case FL_MODE_FLASH_LEVEL7:
+ tps61310_i2c_command(0x05, 0x6B);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x02, 0x8E);
+ tps61310_i2c_command(0x01, 0x8E);
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+ gpio_set_value_cansleep(this_tps61310->strb0, 1);
+ queue_delayed_work(tps61310_work_queue, &tps61310_delayed_work,
+ msecs_to_jiffies(this_tps61310->flash_sw_timeout));
+ break;
+ case FL_MODE_PRE_FLASH:
+ tps61310_i2c_command(0x05, 0x6B);
+ tps61310_i2c_command(0x00, 0x12);
+ gpio_set_value_cansleep(this_tps61310->strb0, 0);
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ tps61310_i2c_command(0x01, 0x40);
+ break;
+ case FL_MODE_VIDEO_TORCH:
+ tps61310_i2c_command(0x05, 0x6B);
+ tps61310_i2c_command(0x00, 0x12);
+ gpio_set_value_cansleep(this_tps61310->strb0, 0);
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ tps61310_i2c_command(0x01, 0x40);
+ break;
+ case FL_MODE_TORCH:
+ tps61310_i2c_command(0x05, 0x6B);
+ tps61310_i2c_command(0x00, 0x1B);
+ gpio_set_value_cansleep(this_tps61310->strb0, 0);
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ tps61310_i2c_command(0x01, 0x40);
+ break;
+ case FL_MODE_TORCH_LEVEL_1:
+ tps61310_i2c_command(0x05, 0x6B);
+ tps61310_i2c_command(0x00, 0x09);
+ gpio_set_value_cansleep(this_tps61310->strb0, 0);
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ tps61310_i2c_command(0x01, 0x40);
+ break;
+ case FL_MODE_TORCH_LEVEL_2:
+ tps61310_i2c_command(0x05, 0x6B);
+ tps61310_i2c_command(0x00, 0x12);
+ gpio_set_value_cansleep(this_tps61310->strb0, 0);
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ tps61310_i2c_command(0x01, 0x40);
+ break;
+ case FL_MODE_TORCH_LED_A:
+ tps61310_i2c_command(0x05, 0x69);
+ tps61310_i2c_command(0x00, 0x09);
+ gpio_set_value_cansleep(this_tps61310->strb0, 0);
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ tps61310_i2c_command(0x01, 0x40);
+ break;
+ case FL_MODE_TORCH_LED_B:
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, 0x09);
+ gpio_set_value_cansleep(this_tps61310->strb0, 0);
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ tps61310_i2c_command(0x01, 0x40);
+ break;
+
+ default:
+ FLT_ERR_LOG("%s: unknown flash_light flags: %d\n",
+ __func__, mode);
+ ret = -EINVAL;
+ break;
+ }
+#endif
+ }
+
+ FLT_INFO_LOG("%s: mode: %d\n", __func__, mode);
+ this_tps61310->mode_status = mode;
+ mutex_unlock(&tps61310_mutex);
+
+ return ret;
+}
+
+static void fl_lcdev_brightness_set(struct led_classdev *led_cdev,
+ enum led_brightness brightness)
+{
+ enum flashlight_mode_flags mode;
+ int ret = -1;
+
+ if (brightness > 0 && brightness <= LED_HALF) {
+ if (brightness == (LED_HALF - 2))
+ mode = FL_MODE_TORCH_LEVEL_1;
+ else if (brightness == (LED_HALF - 1))
+ mode = FL_MODE_TORCH_LEVEL_2;
+ else if (brightness == 1 && this_tps61310->led_count == 2)
+ mode = FL_MODE_TORCH_LED_A;
+ else if (brightness == 2 && this_tps61310->led_count == 2)
+ mode = FL_MODE_TORCH_LED_B;
+ else
+ mode = FL_MODE_TORCH;
+ } else if (brightness > LED_HALF && brightness <= LED_FULL) {
+ if (brightness == (LED_HALF + 1))
+ mode = FL_MODE_PRE_FLASH; /* pre-flash mode */
+ else if (brightness == (LED_HALF + 3))
+ mode = FL_MODE_FLASH_LEVEL1; /* Flashlight mode LEVEL1*/
+ else if (brightness == (LED_HALF + 4))
+ mode = FL_MODE_FLASH_LEVEL2; /* Flashlight mode LEVEL2*/
+ else if (brightness == (LED_HALF + 5))
+ mode = FL_MODE_FLASH_LEVEL3; /* Flashlight mode LEVEL3*/
+ else if (brightness == (LED_HALF + 6))
+ mode = FL_MODE_FLASH_LEVEL4; /* Flashlight mode LEVEL4*/
+ else if (brightness == (LED_HALF + 7))
+ mode = FL_MODE_FLASH_LEVEL5; /* Flashlight mode LEVEL5*/
+ else if (brightness == (LED_HALF + 8))
+ mode = FL_MODE_FLASH_LEVEL6; /* Flashlight mode LEVEL6*/
+ else if (brightness == (LED_HALF + 9))
+ mode = FL_MODE_FLASH_LEVEL7; /* Flashlight mode LEVEL7*/
+ else
+ mode = FL_MODE_FLASH; /* Flashlight mode */
+ } else
+ /* off and else */
+ mode = FL_MODE_OFF;
+
+ if ((mode != FL_MODE_OFF) && switch_state == 0) {
+ FLT_INFO_LOG("%s flashlight is disabled by switch, mode = %d\n",
+ __func__, mode);
+ return;
+ }
+
+ retry = 0;
+ ret = tps61310_flashlight_control(mode);
+ if (ret) {
+ FLT_ERR_LOG("%s: control failure rc:%d\n", __func__, ret);
+ return;
+ }
+}
+
+#ifdef CONFIG_HAS_EARLYSUSPEND
+static void flashlight_early_suspend(struct early_suspend *handler)
+{
+ FLT_INFO_LOG("%s\n", __func__);
+ if (this_tps61310->mode_status)
+ flashlight_turn_off();
+ if (this_tps61310->power_save)
+ gpio_set_value_cansleep(this_tps61310->power_save, 0);
+ if (this_tps61310->power_save_2)
+ gpio_set_value_cansleep(this_tps61310->power_save_2, 0);
+}
+
+static void flashlight_late_resume(struct early_suspend *handler)
+{
+}
+#endif
+
+static void flashlight_turn_off_work(struct work_struct *work)
+{
+ FLT_INFO_LOG("%s\n", __func__);
+ flashlight_turn_off();
+}
+
+static int tps61310_parse_dt(struct device *dev, struct TPS61310_flashlight_platform_data *pdata)
+{
+ struct property *prop;
+ struct device_node *dt = dev->of_node;
+ prop = of_find_property(dt, "tps61310,tps61310_strb0", NULL);
+ if (prop) {
+ pdata->tps61310_strb0 = of_get_named_gpio(dt, "tps61310,tps61310_strb0", 0);
+ }
+ prop = of_find_property(dt, "tps61310,tps61310_strb1", NULL);
+ if (prop) {
+ pdata->tps61310_strb1 = of_get_named_gpio(dt, "tps61310,tps61310_strb1", 0);
+ }
+ prop = of_find_property(dt, "tps61310,flash_duration_ms", NULL);
+ if (prop) {
+ of_property_read_u32(dt, "tps61310,flash_duration_ms", &pdata->flash_duration_ms);
+ }
+ prop = of_find_property(dt, "tps61310,enable_FLT_1500mA", NULL);
+ if (prop) {
+ of_property_read_u32(dt, "tps61310,enable_FLT_1500mA", &pdata->enable_FLT_1500mA);
+ }
+ prop = of_find_property(dt, "tps61310,led_count", NULL);
+ if (prop) {
+ of_property_read_u32(dt, "tps61310,led_count", &pdata->led_count);
+ }
+ prop = of_find_property(dt, "tps61310,disable_tx_mask", NULL);
+ if (prop) {
+ of_property_read_u32(dt, "tps61310,disable_tx_mask", &pdata->disable_tx_mask);
+ }
+
+ return 0;
+
+}
+
+enum led_status {
+ OFF = 0,
+ ON,
+ BLINK,
+};
+
+enum led_id {
+ LED_2 = 0,
+ LED_1_3 = 1,
+};
+
+struct tps61310_led_data {
+ u8 num_leds;
+ struct i2c_client *client_dev;
+ struct tps61310_data *tps61310;
+ int status;
+ struct led_classdev cdev;
+ int max_current;
+ int id;
+ u8 default_state;
+ int torch_mode;
+ struct mutex lock;
+ struct work_struct work;
+};
+
+static int tps61310_get_common_configs(struct tps61310_led_data *led,
+ struct device_node *node)
+{
+ int rc;
+ const char *temp_string;
+
+ led->cdev.default_trigger = "none";
+ rc = of_property_read_string(node, "linux,default-trigger",
+ &temp_string);
+ if (!rc)
+ led->cdev.default_trigger = temp_string;
+ else if (rc != -EINVAL)
+ return rc;
+
+ led->default_state = LEDS_GPIO_DEFSTATE_OFF;
+ rc = of_property_read_string(node, "default-state",
+ &temp_string);
+ if (!rc) {
+ if (!strcmp(temp_string, "keep"))
+ led->default_state = LEDS_GPIO_DEFSTATE_KEEP;
+ else if (!strcmp(temp_string, "on"))
+ led->default_state = LEDS_GPIO_DEFSTATE_ON;
+ else
+ led->default_state = LEDS_GPIO_DEFSTATE_OFF;
+ } else if (rc != -EINVAL)
+ return rc;
+
+ return 0;
+}
+
+static int tps61310_error_recover(void)
+{
+ int err = 0;
+ if (this_tps61310->reset && reg_init_fail) {
+ reg_init_fail = 0;
+ if (this_tps61310->enable_FLT_1500mA) {
+ err |= tps61310_i2c_command(0x07, 0x46);
+ err |= tps61310_i2c_command(0x04, 0x10);
+ } else {
+ /* voltage drop monitor*/
+ err |= tps61310_i2c_command(0x07, 0xF6);
+ }
+ }
+ if (err) {
+ FLT_ERR_LOG("%s error init register\n", __func__);
+ reg_init_fail = 0;
+ /* err may hold OR-ed negative codes; return a plain errno */
+ return -EIO;
+ }
+
+ return err;
+}
+
+static void tps61310_flash_strb(void)
+{
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+ gpio_set_value_cansleep(this_tps61310->strb0, 1);
+ queue_delayed_work(tps61310_work_queue, &tps61310_delayed_work,
+ msecs_to_jiffies(this_tps61310->flash_sw_timeout));
+}
+
+static void tps61310_torch_strb(void)
+{
+ gpio_set_value_cansleep(this_tps61310->strb0, 0);
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ tps61310_i2c_command(0x01, 0x40);
+}
+
+static int tps61310_flash_set(struct tps61310_led_data *led,
+ enum led_brightness value)
+{
+ int err = 0;
+ uint8_t current_hex = 0x0;
+
+ FLT_INFO_LOG("flash set:%d\n", value);
+
+ mutex_lock(&tps61310_mutex);
+ err = tps61310_error_recover();
+ if (err) {
+ mutex_unlock(&tps61310_mutex);
+ return err;
+ }
+
+ if (value == 0)
+ flashlight_turn_off();
+ else if (value > 0) {
+ uint8_t enled = 0x68;
+
+ switch (led->id) {
+ case LED_2:
+ current_hex = value / 25;
+ current_hex += 0x80;
+ enled |= 0x02;
+ tps61310_i2c_command(0x05, enled);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x01, current_hex);
+ printk(KERN_INFO "[FLT] set led2 current to 0x%x\n", current_hex);
+ break;
+ case LED_1_3:
+ current_hex = value / 50;
+ current_hex += 0x80;
+ enled |= 0x05;
+ tps61310_i2c_command(0x05, enled);
+ tps61310_i2c_command(0x00, 0x00);
+ tps61310_i2c_command(0x02, current_hex);
+ printk(KERN_INFO "[FLT] set led13 current to 0x%x\n", current_hex);
+ break;
+ }
+
+ tps61310_flash_strb();
+ }
+
+ mutex_unlock(&tps61310_mutex);
+ return err;
+}
+
+static int tps61310_torch_set(struct tps61310_led_data *led,
+ enum led_brightness value)
+{
+ int err = 0;
+ uint8_t current_hex = 0x0;
+
+ FLT_INFO_LOG("torch set:%d\n", value);
+
+ mutex_lock(&tps61310_mutex);
+ err = tps61310_error_recover();
+ if (err) {
+ mutex_unlock(&tps61310_mutex);
+ return err;
+ }
+
+ if (value == 0)
+ flashlight_turn_off();
+ else if (value > 0) {
+ switch (led->id) {
+ case LED_2:
+ current_hex = (value / 25) & 0x07;
+ tps61310_i2c_command(0x05, 0x6A);
+ tps61310_i2c_command(0x00, current_hex);
+ break;
+ case LED_1_3:
+ current_hex = ((value / 50) << 3) & 0x38;
+ tps61310_i2c_command(0x05, 0x6B);
+ tps61310_i2c_command(0x00, current_hex);
+ break;
+ }
+
+ tps61310_torch_strb();
+ }
+
+ mutex_unlock(&tps61310_mutex);
+ return err;
+}
+
+static void __tps61310_led_work(struct tps61310_led_data *led,
+ enum led_brightness value)
+{
+ mutex_lock(&led->lock);
+ if (led->torch_mode) {
+ switch (led->id) {
+ case LED_2:
+ case LED_1_3:
+ tps61310_torch_set(led, value);
+ break;
+ }
+ } else {
+ switch (led->id) {
+ case LED_2:
+ case LED_1_3:
+ tps61310_flash_set(led, value);
+ break;
+ }
+ }
+ mutex_unlock(&led->lock);
+}
+
+static void tps61310_led_work(struct work_struct *work)
+{
+ struct tps61310_led_data *led = container_of(work,
+ struct tps61310_led_data, work);
+
+ __tps61310_led_work(led, led->cdev.brightness);
+}
+
+static void tps61310_led_set(struct led_classdev *led_cdev,
+ enum led_brightness value)
+{
+ struct tps61310_led_data *led;
+
+ led = container_of(led_cdev, struct tps61310_led_data, cdev);
+ if (value < LED_OFF || value > led->cdev.max_brightness) {
+ printk(KERN_ERR "[FLT]"
+ "Invalid brightness value\n");
+ return;
+ }
+
+ led->cdev.brightness = value;
+ schedule_work(&led->work);
+}
+
+static enum led_brightness tps61310_led_get(struct led_classdev *led_cdev)
+{
+ struct tps61310_led_data *led;
+
+ led = container_of(led_cdev, struct tps61310_led_data, cdev);
+
+ return led->cdev.brightness;
+}
+
+static int tps61310_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ struct tps61310_data *tps61310;
+ struct TPS61310_flashlight_platform_data *pdata;
+ int i = 0;
+
+ struct tps61310_led_data *led, *led_array;
+ struct device_node *node, *temp;
+ int num_leds = 0, parsed_leds = 0;
+ const char *led_label;
+ int rc;
+
+ tps61310 = NULL;
+ pdata = NULL;
+
+ FLT_INFO_LOG("%s +\n", __func__);
+ pdata = kzalloc(sizeof(*pdata), GFP_KERNEL);
+ if (pdata == NULL) {
+ FLT_ERR_LOG("%s: kzalloc pdata fail !!!\n", __func__);
+ rc = -ENOMEM;
+ goto fail_allocate_memory;
+ }
+ rc = tps61310_parse_dt(&client->dev, pdata);
+ if (rc) {
+ FLT_ERR_LOG("%s: parse device tree fail !!!\n", __func__);
+ goto fail_allocate_memory;
+ }
+
+ if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C)) {
+ FLT_ERR_LOG("%s: i2c check fail !!!\n", __func__);
+ rc = -ENODEV;
+ goto fail_allocate_memory;
+ }
+
+ tps61310 = kzalloc(sizeof(struct tps61310_data), GFP_KERNEL);
+ if (!tps61310) {
+ FLT_ERR_LOG("%s: kzalloc data fail !!!\n", __func__);
+ rc = -ENOMEM;
+ goto fail_allocate_memory;
+ }
+
+ i2c_set_clientdata(client, tps61310);
+ this_client = client;
+
+ INIT_DELAYED_WORK(&tps61310_delayed_work, flashlight_turn_off_work);
+ tps61310_work_queue = create_singlethread_workqueue("tps61310_wq");
+ if (!tps61310_work_queue) {
+ rc = -ENOMEM;
+ goto err_create_tps61310_work_queue;
+ }
+
+ tps61310->fl_lcdev.name = FLASHLIGHT_NAME;
+ tps61310->fl_lcdev.brightness_set = fl_lcdev_brightness_set;
+ tps61310->strb0 = pdata->tps61310_strb0;
+ tps61310->strb1 = pdata->tps61310_strb1;
+ tps61310->reset = pdata->tps61310_reset;
+ tps61310->flash_sw_timeout = pdata->flash_duration_ms;
+ tps61310->led_count = (pdata->led_count) ? pdata->led_count : 1;
+ tps61310->mode_pin_suspend_state_low = pdata->mode_pin_suspend_state_low;
+ tps61310->enable_FLT_1500mA = pdata->enable_FLT_1500mA;
+ tps61310->disable_tx_mask = pdata->disable_tx_mask;
+ tps61310->power_save = pdata->power_save;
+ tps61310->power_save_2 = pdata->power_save_2;
+
+ if (tps61310->strb0) {
+ rc = gpio_request(tps61310->strb0, "strb0");
+ if (rc) {
+ FLT_ERR_LOG("%s: unable to request gpio %d (%d)\n",
+ __func__, tps61310->strb0, rc);
+ goto fail_allocate_resource;
+ }
+
+ rc = gpio_direction_output(tps61310->strb0, 0);
+ if (rc) {
+ FLT_ERR_LOG("%s: Unable to set direction (%d)\n", __func__, rc);
+ goto fail_allocate_resource;
+ }
+ }
+ if (tps61310->strb1) {
+ rc = gpio_request(tps61310->strb1, "strb1");
+ if (rc) {
+ FLT_ERR_LOG("%s: unable to request gpio %d (%d)\n",
+ __func__, tps61310->strb1, rc);
+ goto fail_allocate_resource;
+ }
+
+ rc = gpio_direction_output(tps61310->strb1, 1);
+ if (rc) {
+ FLT_ERR_LOG("%s: Unable to set direction (%d)\n", __func__, rc);
+ goto fail_allocate_resource;
+ }
+ }
+ if (tps61310->flash_sw_timeout <= 0)
+ tps61310->flash_sw_timeout = 600;
+
+ node = client->dev.of_node;
+
+ if (node == NULL) {
+ rc = -ENODEV;
+ goto fail_parsing_of_node;
+ }
+
+ temp = NULL;
+ while ((temp = of_get_next_child(node, temp)))
+ num_leds++;
+
+ if (!num_leds) {
+ rc = -ECHILD;
+ goto fail_parsing_of_node;
+ }
+
+ led_array = devm_kzalloc(&client->dev,
+ (sizeof(struct tps61310_led_data) * num_leds), GFP_KERNEL);
+ if (!led_array) {
+ dev_err(&client->dev, "Unable to allocate memory\n");
+ rc = -ENOMEM;
+ goto fail_parsing_of_node;
+ }
+
+ tps61310->led_array = led_array;
+ for_each_child_of_node(node, temp) {
+ led = &led_array[parsed_leds];
+ led->num_leds = num_leds;
+ led->client_dev = client;
+ led->tps61310 = tps61310;
+ led->status = OFF;
+
+ rc = of_property_read_string(temp, "label", &led_label);
+ if (rc < 0) {
+ printk(KERN_ERR "[FLT] "
+ "Failure reading label, rc = %d\n", rc);
+ goto fail_id_check;
+ }
+
+ rc = of_property_read_string(temp, "linux,name", &led->cdev.name);
+ if (rc < 0) {
+ printk(KERN_ERR "[FLT] "
+ "Failure reading led name, rc = %d\n", rc);
+ goto fail_id_check;
+ }
+
+ rc = of_property_read_u32(temp, "max-current", &led->max_current);
+ if (rc < 0) {
+ printk(KERN_ERR "[FLT] "
+ "Failure reading max_current, rc = %d\n", rc);
+ goto fail_id_check;
+ }
+
+ rc = of_property_read_u32(temp, "id", &led->id);
+ if (rc < 0) {
+ printk(KERN_ERR "[FLT] "
+ "Failure reading led id, rc = %d\n", rc);
+ goto fail_id_check;
+ }
+
+ rc = tps61310_get_common_configs(led, temp);
+ if (rc) {
+ printk(KERN_ERR "[FLT] "
+ "Failure reading common led configuration," \
+ " rc = %d\n", rc);
+ goto fail_id_check;
+ }
+
+ led->cdev.brightness_set = tps61310_led_set;
+ led->cdev.brightness_get = tps61310_led_get;
+
+ if (strncmp(led_label, "flash", sizeof("flash")) == 0) {
+ led->torch_mode = 0;
+ } else if (strncmp(led_label, "torch", sizeof("torch")) == 0) {
+ led->torch_mode = 1;
+ } else {
+ printk(KERN_ERR "[FLT] "
+ "No LED matching label\n");
+ rc = -EINVAL;
+ goto fail_id_check;
+ }
+
+ mutex_init(&led->lock);
+ INIT_WORK(&led->work, tps61310_led_work);
+
+ led->cdev.max_brightness = led->max_current;
+
+ rc = led_classdev_register(&client->dev, &led->cdev);
+ if (rc) {
+ printk(KERN_ERR "[FLT] "
+ "unable to register led %d,rc=%d\n",
+ led->id, rc);
+ goto fail_id_check;
+ }
+
+ /* configure default state */
+ switch (led->default_state) {
+ case LEDS_GPIO_DEFSTATE_OFF:
+ led->cdev.brightness = LED_OFF;
+ break;
+ case LEDS_GPIO_DEFSTATE_ON:
+ led->cdev.brightness = led->cdev.max_brightness;
+ schedule_work(&led->work);
+ break;
+ case LEDS_GPIO_DEFSTATE_KEEP:
+ led->cdev.brightness = led->cdev.max_brightness;
+ break;
+ }
+
+ parsed_leds++;
+ }
+
+ mutex_init(&tps61310_mutex);
+ rc = led_classdev_register(&client->dev, &tps61310->fl_lcdev);
+ if (rc < 0) {
+ FLT_ERR_LOG("%s: failed on led_classdev_register (%d)\n", __func__, rc);
+ goto platform_data_null;
+ }
+
+ this_tps61310 = tps61310;
+
+#ifdef CONFIG_HAS_EARLYSUSPEND
+ tps61310->fl_early_suspend.suspend = flashlight_early_suspend;
+ tps61310->fl_early_suspend.resume = flashlight_late_resume;
+ register_early_suspend(&tps61310->fl_early_suspend);
+#endif
+
+ rc = device_create_file(tps61310->fl_lcdev.dev, &dev_attr_sw_timeout);
+ if (rc < 0) {
+ FLT_ERR_LOG("%s, create sw_timeout sysfs fail\n", __func__);
+ }
+ rc = device_create_file(tps61310->fl_lcdev.dev, &dev_attr_regaddr);
+ if (rc < 0) {
+ FLT_ERR_LOG("%s, create regaddr sysfs fail\n", __func__);
+ }
+ rc = device_create_file(tps61310->fl_lcdev.dev, &dev_attr_regdata);
+ if (rc < 0) {
+ FLT_ERR_LOG("%s, create regdata sysfs fail\n", __func__);
+ }
+ rc = device_create_file(tps61310->fl_lcdev.dev, &dev_attr_function_switch);
+ if (rc < 0) {
+ FLT_ERR_LOG("%s, create function_switch sysfs fail\n", __func__);
+ }
+ rc = device_create_file(tps61310->fl_lcdev.dev, &dev_attr_max_current);
+ if (rc < 0) {
+ FLT_ERR_LOG("%s, create max_current sysfs fail\n", __func__);
+ }
+ rc = device_create_file(tps61310->fl_lcdev.dev, &dev_attr_flash);
+ if (rc < 0) {
+ FLT_ERR_LOG("%s, create flash sysfs fail\n", __func__);
+ }
+ /* initial register set as shutdown mode */
+ tps61310_i2c_command(0x01, 0x00);
+
+ if (this_tps61310->enable_FLT_1500mA) {
+ FLT_INFO_LOG("Flashlight with 1.5A\n");
+ tps61310_i2c_command(0x07, 0x46);
+ tps61310_i2c_command(0x04, 0x10);
+ } else {
+ /* disable voltage drop monitor */
+ tps61310_i2c_command(0x07, 0x76);
+ }
+ /* Disable Tx-mask for issue of can flash while using lower band radio */
+ if (this_tps61310->disable_tx_mask)
+ tps61310_i2c_command(0x03, 0xC0);
+ if (this_tps61310->reset)
+ FLT_INFO_LOG("%s reset pin exist\n", __func__);
+ else
+ FLT_INFO_LOG("%s no reset pin\n", __func__);
+ if (this_tps61310->power_save) {
+ FLT_INFO_LOG("%s power save pin exist\n", __func__);
+ gpio_set_value_cansleep(this_tps61310->power_save, 0);
+ }
+ else
+ FLT_INFO_LOG("%s no power save pin\n", __func__);
+ if (this_tps61310->power_save_2) {
+ FLT_INFO_LOG("%s power save pin_2 exist\n", __func__);
+ gpio_set_value_cansleep(this_tps61310->power_save_2, 0);
+ }
+ else
+ FLT_INFO_LOG("%s no power save pin_2\n", __func__);
+
+ FLT_INFO_LOG("%s -\n", __func__);
+
+ kfree(pdata);
+ return 0;
+
+platform_data_null:
+ destroy_workqueue(tps61310_work_queue);
+ mutex_destroy(&tps61310_mutex);
+fail_id_check:
+ for (i = 0; i < parsed_leds; i++) {
+ mutex_destroy(&led_array[i].lock);
+ led_classdev_unregister(&led_array[i].cdev);
+ }
+fail_parsing_of_node:
+ if (tps61310->strb1)
+ gpio_free(tps61310->strb1);
+ if (tps61310->strb0)
+ gpio_free(tps61310->strb0);
+fail_allocate_resource:
+err_create_tps61310_work_queue:
+fail_allocate_memory:
+ kfree(tps61310);
+ kfree(pdata);
+ return rc;
+}
+
+static int tps61310_remove(struct i2c_client *client)
+{
+ struct tps61310_data *tps61310 = i2c_get_clientdata(client);
+ struct tps61310_led_data *led_array = tps61310->led_array;
+
+ if (led_array) {
+ int i, parsed_leds = led_array->num_leds;
+ for (i = 0; i < parsed_leds; i++) {
+ cancel_work_sync(&led_array[i].work);
+ mutex_destroy(&led_array[i].lock);
+ led_classdev_unregister(&led_array[i].cdev);
+ }
+ }
+
+#ifdef CONFIG_HAS_EARLYSUSPEND
+ unregister_early_suspend(&tps61310->fl_early_suspend);
+#endif
+
+ destroy_workqueue(tps61310_work_queue);
+ mutex_destroy(&tps61310_mutex);
+ led_classdev_unregister(&tps61310->fl_lcdev);
+
+ if (tps61310->strb1)
+ gpio_free(tps61310->strb1);
+ if (tps61310->strb0)
+ gpio_free(tps61310->strb0);
+
+ kfree(tps61310);
+
+ FLT_INFO_LOG("%s:\n", __func__);
+ return 0;
+}
+
+static const struct i2c_device_id tps61310_id[] = {
+ { "tps61310", 0 },
+ { }
+};
+MODULE_DEVICE_TABLE(i2c, tps61310_id);
+static int tps61310_resume(struct i2c_client *client)
+{
+
+ FLT_INFO_LOG("%s:\n", __func__);
+ if (this_tps61310->mode_pin_suspend_state_low)
+ gpio_set_value_cansleep(this_tps61310->strb1, 1);
+ return 0;
+}
+static int tps61310_suspend(struct i2c_client *client, pm_message_t state)
+{
+
+ FLT_INFO_LOG("%s:\n", __func__);
+ if (this_tps61310->mode_pin_suspend_state_low)
+ gpio_set_value_cansleep(this_tps61310->strb1, 0);
+
+ return 0;
+}
+
+static const struct of_device_id tps61310_mttable[] = {
+ { .compatible = "ti,tps61310"},
+ { },
+};
+
+static struct i2c_driver tps61310_driver = {
+ .driver = {
+ .name = "tps61310",
+ .owner = THIS_MODULE,
+ .of_match_table = tps61310_mttable,
+ },
+ .probe = tps61310_probe,
+ .remove = tps61310_remove,
+ .suspend = tps61310_suspend,
+ .resume = tps61310_resume,
+ .id_table = tps61310_id,
+};
+module_i2c_driver(tps61310_driver);
+
+MODULE_DESCRIPTION("TPS61310 Led Flash driver");
+MODULE_LICENSE("GPL");
diff --git a/drivers/media/platform/tegra/Kconfig b/drivers/media/platform/tegra/Kconfig
index 8a59bdd..b8a2b16 100644
--- a/drivers/media/platform/tegra/Kconfig
+++ b/drivers/media/platform/tegra/Kconfig
@@ -218,3 +218,26 @@
---help---
This is a driver for generic camera devices
for use with the tegra isp.
+
+config VIDEO_IMX219
+ tristate "IMX219 camera sensor support"
+ depends on I2C && ARCH_TEGRA
+ ---help---
+ This is a driver for the IMX219 camera sensor
+ for use with the tegra isp. The sensor has a
+ maximum of 8MP (3280x2464) resolution.
+
+config VIDEO_OV9760
+ tristate "OV9760 camera sensor support"
+ depends on I2C && ARCH_TEGRA
+ ---help---
+ This is a driver for the OV9760 camera sensor
+ for use with the tegra isp. The sensor has a
+ maximum of 1.5MP (1472x1104) resolution.
+
+config VIDEO_DRV201
+ tristate "DRV201 focuser support"
+ depends on I2C && ARCH_TEGRA
+ ---help---
+ This is a driver for the DRV201 focuser
+ for use with the tegra isp.
diff --git a/drivers/media/platform/tegra/Makefile b/drivers/media/platform/tegra/Makefile
index 41f1ea9..d50bdfc 100644
--- a/drivers/media/platform/tegra/Makefile
+++ b/drivers/media/platform/tegra/Makefile
@@ -43,3 +43,6 @@
obj-$(CONFIG_VIDEO_OV7695) += ov7695.o
obj-$(CONFIG_VIDEO_MT9M114) += mt9m114.o
obj-$(CONFIG_VIDEO_CAMERA) += camera.o
+obj-$(CONFIG_VIDEO_IMX219) += imx219.o tps61310.o
+obj-$(CONFIG_VIDEO_OV9760) += ov9760.o
+obj-$(CONFIG_VIDEO_DRV201) += drv201.o
diff --git a/drivers/media/platform/tegra/drv201.c b/drivers/media/platform/tegra/drv201.c
new file mode 100644
index 0000000..ac19ec8
--- /dev/null
+++ b/drivers/media/platform/tegra/drv201.c
@@ -0,0 +1,865 @@
+/*
+ * drv201.c - a NVC kernel driver for focuser device drv201.
+ *
+ * Copyright (c) 2011-2013 NVIDIA Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+/* Implementation
+ * --------------
+ * The board level details about the device need to be provided in the board
+ * file with the <device>_platform_data structure.
+ * Standard among NVC kernel drivers in this structure is:
+ * .cfg = Use the NVC_CFG_ defines that are in nvc.h.
+ * Descriptions of the configuration options are with the defines.
+ * This value is typically 0.
+ * .num = The number of the instance of the device. This should start at 1 and
+ * increment for each device on the board. This number will be
+ * appended to the MISC driver name, Example: /dev/focuser.1
+ * If not used or 0, then nothing is appended to the name.
+ * .sync = If there is a need to synchronize two devices, then this value is
+ * the number of the device instance (.num above) this device is to
+ * sync to. For example:
+ * Device 1 platform entries =
+ * .num = 1,
+ * .sync = 2,
+ * Device 2 platform entries =
+ * .num = 2,
+ * .sync = 1,
+ * The above example sync's device 1 and 2.
+ * To disable sync, set .sync = 0. Note that the .num = 0 device is not
+ * allowed to be synced to.
+ * This is typically used for stereo applications.
+ * .dev_name = The MISC driver name the device registers as. If not used,
+ * then the part number of the device is used for the driver name.
+ * If using the NVC user driver then use the name found in this
+ * driver under _default_pdata.
+ * .gpio_count = The ARRAY_SIZE of the nvc_gpio_pdata table.
+ * .gpio = A pointer to the nvc_gpio_pdata structure's platform GPIO data.
+ * The GPIO mechanism works by cross referencing the .gpio_type key
+ * among the nvc_gpio_pdata GPIO data and the driver's nvc_gpio_init
+ * GPIO data to build a GPIO table the driver can use. The GPIO's
+ * defined in the device header file's _gpio_type enum are the
+ * gpio_type keys for the nvc_gpio_pdata and nvc_gpio_init structures.
+ * These need to be present in the board file's nvc_gpio_pdata
+ * structure for the GPIO's that are used.
+ * The driver's GPIO logic uses assert/deassert throughout until the
+ * low level _gpio_wr/rd calls where the .assert_high is used to
+ * convert the value to the correct signal level.
+ * See the GPIO notes in nvc.h for additional information.
+ *
+ * The following is specific to NVC kernel focus drivers:
+ * .nvc = Pointer to the nvc_focus_nvc structure. This structure needs to
+ * be defined and populated if overriding the driver defaults.
+ * .cap = Pointer to the nvc_focus_cap structure. This structure needs to
+ * be defined and populated if overriding the driver defaults.
+ *
+ * Power Requirements:
+ * The device's header file defines the voltage regulators needed with the
+ * enumeration <device>_vreg. The order these are enumerated is the order
+ * the regulators will be enabled when powering on the device. When the
+ * device is powered off the regulators are disabled in descending order.
+ * The <device>_vregs table in this driver uses the nvc_regulator_init
+ * structure to define the regulator ID strings that go with the regulators
+ * defined with <device>_vreg. These regulator ID strings (or supply names)
+ * will be used in the regulator_get function in the _vreg_init function.
+ * The board power file and <device>_vregs regulator ID strings must match.
+ */
+
+#include <linux/fs.h>
+#include <linux/i2c.h>
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include <linux/uaccess.h>
+#include <linux/list.h>
+#include <linux/regulator/consumer.h>
+#include <linux/gpio.h>
+#include <linux/module.h>
+
+#include "t124/t124.h"
+#include <media/drv201.h>
+#include <media/camera.h>
+
+#define DRV201_FOCAL_LENGTH_FLOAT (3.097f)
+#define DRV201_FNUMBER_FLOAT (2.4f)
+#define DRV201_FOCAL_LENGTH (0x4046353f) /* 3.097f */
+#define DRV201_FNUMBER (0x4019999a) /* 2.4f */
+#define DRV201_ACTUATOR_RANGE 1023
+#define DRV201_SETTLETIME 15
+#define DRV201_FOCUS_MACRO 810
+#define DRV201_FOCUS_INFINITY 50
+#define DRV201_POS_LOW_DEFAULT 0
+#define DRV201_POS_HIGH_DEFAULT 1023
+#define DRV201_POS_CLAMP 0x03ff
+
+#define DRV201_CONTROL_RING 0x0202BC
+
+
+struct drv201_info {
+ struct device *dev;
+ struct i2c_client *i2c_client;
+ struct drv201_platform_data *pdata;
+ struct miscdevice miscdev;
+ struct list_head list;
+ struct drv201_power_rail power;
+ struct nvc_focus_nvc nvc;
+ struct nvc_focus_cap cap;
+ struct nv_focuser_config nv_config;
+ struct nv_focuser_config cfg_usr;
+ atomic_t in_use;
+ bool reset_flag;
+ int pwr_dev;
+ s32 pos;
+ u16 dev_id;
+ struct camera_sync_dev *csync_dev;
+ struct regmap *regmap;
+};
+
+/**
+ * The following are default values
+ */
+
+static struct nvc_focus_cap drv201_default_cap = {
+ .version = NVC_FOCUS_CAP_VER2,
+ .actuator_range = DRV201_ACTUATOR_RANGE,
+ .settle_time = DRV201_SETTLETIME,
+ .focus_macro = DRV201_FOCUS_MACRO,
+ .focus_infinity = DRV201_FOCUS_INFINITY,
+ .focus_hyper = DRV201_FOCUS_INFINITY,
+};
+
+static struct nvc_focus_nvc drv201_default_nvc = {
+ .focal_length = DRV201_FOCAL_LENGTH,
+ .fnumber = DRV201_FNUMBER,
+};
+
+static struct drv201_platform_data drv201_default_pdata = {
+ .cfg = 0,
+ .num = 0,
+ .sync = 0,
+ .dev_name = "focuser",
+};
+static LIST_HEAD(drv201_info_list);
+static DEFINE_SPINLOCK(drv201_spinlock);
+
+static int drv201_i2c_wr8(struct drv201_info *info, u8 reg, u8 val)
+{
+ struct i2c_msg msg;
+ u8 buf[2];
+ buf[0] = reg;
+ buf[1] = val;
+ msg.addr = info->i2c_client->addr;
+ msg.flags = 0;
+ msg.len = 2;
+ msg.buf = &buf[0];
+ if (i2c_transfer(info->i2c_client->adapter, &msg, 1) != 1)
+ return -EIO;
+ return 0;
+}
+
+static int drv201_i2c_wr16(struct drv201_info *info, u8 reg, u16 val)
+{
+ struct i2c_msg msg;
+ u8 buf[3];
+ buf[0] = reg;
+ buf[1] = (u8)(val >> 8);
+ buf[2] = (u8)(val & 0xff);
+ msg.addr = info->i2c_client->addr;
+ msg.flags = 0;
+ msg.len = 3;
+ msg.buf = &buf[0];
+ if (i2c_transfer(info->i2c_client->adapter, &msg, 1) != 1)
+ return -EIO;
+ return 0;
+}
+
+void drv201_set_arc_mode(struct drv201_info *info)
+{
+ u32 sr = info->nv_config.slew_rate;
+ int err;
+
+
+ /* set ARC enable */
+ err = drv201_i2c_wr8(info, CONTROL, (sr >> 16) & 0xff);
+ if (err)
+ dev_err(info->dev, "%s: CONTROL reg write failed\n", __func__);
+
+ /* set the ARC RES mode */
+ err = drv201_i2c_wr8(info, MODE, (sr >> 8) & 0xff);
+ if (err)
+ dev_err(info->dev, "%s: MODE reg write failed\n", __func__);
+
+ /* set the VCM_FREQ */
+ err = drv201_i2c_wr8(info, VCM_FREQ, sr & 0xff);
+ if (err)
+ dev_err(info->dev, "%s: VCM_FREQ reg write failed\n", __func__);
+}
+
+static int drv201_position_wr(struct drv201_info *info, u16 position)
+{
+ int err;
+
+ position &= DRV201_POS_CLAMP;
+#ifdef TEGRA_12X_OR_HIGHER_CONFIG
+ err = camera_dev_sync_clear(info->csync_dev);
+ err = camera_dev_sync_wr_add(info->csync_dev,
+ VCM_CODE_MSB, position);
+ info->pos = position;
+#else
+ err = drv201_i2c_wr16(info, VCM_CODE_MSB, position);
+ if (!err)
+ info->pos = position;
+#endif
+ return err;
+}
+
+static int drv201_pm_wr(struct drv201_info *info, int pwr)
+{
+ int err = 0;
+ if ((info->pdata->cfg & (NVC_CFG_OFF2STDBY | NVC_CFG_BOOT_INIT)) &&
+ (pwr == NVC_PWR_OFF || pwr == NVC_PWR_STDBY_OFF))
+ pwr = NVC_PWR_STDBY;
+
+ if (pwr == info->pwr_dev)
+ return 0;
+
+ switch (pwr) {
+ case NVC_PWR_OFF_FORCE:
+ case NVC_PWR_OFF:
+ if (info->pdata && info->pdata->power_off)
+ info->pdata->power_off(&info->power);
+ break;
+ case NVC_PWR_STDBY_OFF:
+ case NVC_PWR_STDBY:
+ if (info->pdata && info->pdata->power_off)
+ info->pdata->power_off(&info->power);
+ break;
+ case NVC_PWR_COMM:
+ case NVC_PWR_ON:
+ if (info->pdata && info->pdata->power_on)
+ info->pdata->power_on(&info->power);
+ usleep_range(1000, 1020);
+ drv201_set_arc_mode(info);
+ drv201_position_wr(info, (u16)info->nv_config.pos_working_low);
+ break;
+ default:
+ err = -EINVAL;
+ break;
+ }
+
+ if (err < 0) {
+ dev_err(info->dev, "%s err %d\n", __func__, err);
+ pwr = NVC_PWR_ERR;
+ }
+
+ info->pwr_dev = pwr;
+ dev_dbg(info->dev, "%s pwr_dev=%d\n", __func__, info->pwr_dev);
+
+ return err;
+}
+
+static int drv201_power_put(struct drv201_power_rail *pw)
+{
+ if (unlikely(!pw))
+ return -EFAULT;
+
+ if (likely(pw->vdd))
+ regulator_put(pw->vdd);
+
+ if (likely(pw->vdd_i2c))
+ regulator_put(pw->vdd_i2c);
+
+ pw->vdd = NULL;
+ pw->vdd_i2c = NULL;
+
+ return 0;
+}
+
+static int drv201_regulator_get(struct drv201_info *info,
+ struct regulator **vreg, char vreg_name[])
+{
+ struct regulator *reg = NULL;
+ int err = 0;
+
+ reg = regulator_get(info->dev, vreg_name);
+ if (unlikely(IS_ERR(reg))) {
+ err = PTR_ERR(reg);
+ dev_err(info->dev, "%s %s ERR: %d\n",
+ __func__, vreg_name, err);
+ reg = NULL;
+ } else
+ dev_dbg(info->dev, "%s: %s\n", __func__, vreg_name);
+
+ *vreg = reg;
+ return err;
+}
+
+static int drv201_power_get(struct drv201_info *info)
+{
+ struct drv201_power_rail *pw = &info->power;
+
+ drv201_regulator_get(info, &pw->vdd, "avdd_cam1_cam");
+ drv201_regulator_get(info, &pw->vdd_i2c, "pwrdet_sdmmc3");
+
+ return 0;
+}
+
+static int drv201_pm_dev_wr(struct drv201_info *info, int pwr)
+{
+ if (pwr < info->pwr_dev)
+ pwr = info->pwr_dev;
+ return drv201_pm_wr(info, pwr);
+}
+
+static inline void drv201_pm_exit(struct drv201_info *info)
+{
+ drv201_pm_wr(info, NVC_PWR_OFF_FORCE);
+ drv201_power_put(&info->power);
+}
+
+static inline void drv201_pm_init(struct drv201_info *info)
+{
+ drv201_power_get(info);
+}
+
+static int drv201_reset(struct drv201_info *info, u32 level)
+{
+ int err = 0;
+
+ if (level == NVC_RESET_SOFT)
+ err |= drv201_i2c_wr8(info, CONTROL, 0x01); /* SW reset */
+ else
+ err = drv201_pm_wr(info, NVC_PWR_OFF_FORCE);
+
+ return err;
+}
+
+static void drv201_dump_focuser_capabilities(struct drv201_info *info)
+{
+ dev_dbg(info->dev, "%s:\n", __func__);
+ dev_dbg(info->dev, "focal_length: 0x%x\n",
+ info->nv_config.focal_length);
+ dev_dbg(info->dev, "fnumber: 0x%x\n",
+ info->nv_config.fnumber);
+ dev_dbg(info->dev, "max_aperture: 0x%x\n",
+ info->nv_config.max_aperture);
+ dev_dbg(info->dev, "pos_working_low: %d\n",
+ info->nv_config.pos_working_low);
+ dev_dbg(info->dev, "pos_working_high: %d\n",
+ info->nv_config.pos_working_high);
+ dev_dbg(info->dev, "pos_actual_low: %d\n",
+ info->nv_config.pos_actual_low);
+ dev_dbg(info->dev, "pos_actual_high: %d\n",
+ info->nv_config.pos_actual_high);
+ dev_dbg(info->dev, "slew_rate: 0x%x\n",
+ info->nv_config.slew_rate);
+ dev_dbg(info->dev, "circle_of_confusion: %d\n",
+ info->nv_config.circle_of_confusion);
+ dev_dbg(info->dev, "num_focuser_sets: %d\n",
+ info->nv_config.num_focuser_sets);
+ dev_dbg(info->dev, "focuser_set[0].macro: %d\n",
+ info->nv_config.focuser_set[0].macro);
+ dev_dbg(info->dev, "focuser_set[0].hyper: %d\n",
+ info->nv_config.focuser_set[0].hyper);
+ dev_dbg(info->dev, "focuser_set[0].inf: %d\n",
+ info->nv_config.focuser_set[0].inf);
+ dev_dbg(info->dev, "focuser_set[0].settle_time: %d\n",
+ info->nv_config.focuser_set[0].settle_time);
+}
+
+static void drv201_get_focuser_capabilities(struct drv201_info *info)
+{
+ memset(&info->nv_config, 0, sizeof(info->nv_config));
+
+ info->nv_config.focal_length = info->nvc.focal_length;
+ info->nv_config.fnumber = info->nvc.fnumber;
+ info->nv_config.max_aperture = info->nvc.fnumber;
+ info->nv_config.range_ends_reversed = 0;
+
+ info->nv_config.pos_working_low = info->cap.focus_infinity;
+ info->nv_config.pos_working_high = info->cap.focus_macro;
+
+ info->nv_config.pos_actual_low = DRV201_POS_LOW_DEFAULT;
+ info->nv_config.pos_actual_high = DRV201_POS_HIGH_DEFAULT;
+
+ info->nv_config.circle_of_confusion = -1;
+ info->nv_config.num_focuser_sets = 1;
+ info->nv_config.focuser_set[0].macro = info->cap.focus_macro;
+ info->nv_config.focuser_set[0].hyper = info->cap.focus_hyper;
+ info->nv_config.focuser_set[0].inf = info->cap.focus_infinity;
+ info->nv_config.focuser_set[0].settle_time = info->cap.settle_time;
+
+ /* set drive mode to linear */
+ info->nv_config.slew_rate = DRV201_CONTROL_RING;
+
+ drv201_dump_focuser_capabilities(info);
+}
+
+static int drv201_set_focuser_capabilities(struct drv201_info *info,
+ struct nvc_param *params)
+{
+ if (copy_from_user(&info->cfg_usr,
+ MAKE_CONSTUSER_PTR(params->p_value), sizeof(info->cfg_usr))) {
+ dev_err(info->dev, "%s Err: copy_from_user bytes %zu\n",
+ __func__, sizeof(info->cfg_usr));
+ return -EFAULT;
+ }
+
+ if (info->cfg_usr.focal_length)
+ info->nv_config.focal_length = info->cfg_usr.focal_length;
+ if (info->cfg_usr.fnumber)
+ info->nv_config.fnumber = info->cfg_usr.fnumber;
+ if (info->cfg_usr.max_aperture)
+ info->nv_config.max_aperture = info->cfg_usr.max_aperture;
+
+ if (info->cfg_usr.pos_working_low != AF_POS_INVALID_VALUE)
+ info->nv_config.pos_working_low = info->cfg_usr.pos_working_low;
+ if (info->cfg_usr.pos_working_high != AF_POS_INVALID_VALUE)
+ info->nv_config.pos_working_high =
+ info->cfg_usr.pos_working_high;
+ if (info->cfg_usr.pos_actual_low != AF_POS_INVALID_VALUE)
+ info->nv_config.pos_actual_low = info->cfg_usr.pos_actual_low;
+ if (info->cfg_usr.pos_actual_high != AF_POS_INVALID_VALUE)
+ info->nv_config.pos_actual_high = info->cfg_usr.pos_actual_high;
+
+ if (info->cfg_usr.circle_of_confusion != AF_POS_INVALID_VALUE)
+ info->nv_config.circle_of_confusion =
+ info->cfg_usr.circle_of_confusion;
+
+ if (info->cfg_usr.focuser_set[0].macro != AF_POS_INVALID_VALUE)
+ info->nv_config.focuser_set[0].macro =
+ info->cfg_usr.focuser_set[0].macro;
+ if (info->cfg_usr.focuser_set[0].hyper != AF_POS_INVALID_VALUE)
+ info->nv_config.focuser_set[0].hyper =
+ info->cfg_usr.focuser_set[0].hyper;
+ if (info->cfg_usr.focuser_set[0].inf != AF_POS_INVALID_VALUE)
+ info->nv_config.focuser_set[0].inf =
+ info->cfg_usr.focuser_set[0].inf;
+ if (info->cfg_usr.focuser_set[0].settle_time != AF_POS_INVALID_VALUE)
+ info->nv_config.focuser_set[0].settle_time =
+ info->cfg_usr.focuser_set[0].settle_time;
+
+ dev_dbg(info->dev,
+ "%s: copy_from_user bytes %zu info->cap.settle_time %d\n",
+ __func__, sizeof(struct nv_focuser_config),
+ info->cap.settle_time);
+
+ drv201_dump_focuser_capabilities(info);
+ return 0;
+}
+
+static int drv201_param_rd(struct drv201_info *info, unsigned long arg)
+{
+ struct nvc_param params;
+ const void *data_ptr = NULL;
+ u32 data_size = 0;
+ int err = 0;
+
+#ifdef CONFIG_COMPAT
+ memset(&params, 0, sizeof(params));
+ if (copy_from_user(&params, (const void __user *)arg,
+ sizeof(struct nvc_param_32))) {
+#else
+ if (copy_from_user(&params,
+ (const void __user *)arg,
+ sizeof(struct nvc_param))) {
+#endif
+ dev_err(info->dev, "%s %d copy_from_user err\n",
+ __func__, __LINE__);
+ return -EFAULT;
+ }
+
+ switch (params.param) {
+ case NVC_PARAM_LOCUS:
+ data_ptr = &info->pos;
+ data_size = sizeof(info->pos);
+ dev_dbg(info->dev, "%s LOCUS: %d\n", __func__, info->pos);
+ break;
+ case NVC_PARAM_FOCAL_LEN:
+ data_ptr = &info->nv_config.focal_length;
+ data_size = sizeof(info->nv_config.focal_length);
+ break;
+ case NVC_PARAM_MAX_APERTURE:
+ data_ptr = &info->nv_config.max_aperture;
+ data_size = sizeof(info->nv_config.max_aperture);
+ dev_dbg(info->dev, "%s MAX_APERTURE: %x\n",
+ __func__, info->nv_config.max_aperture);
+ break;
+ case NVC_PARAM_FNUMBER:
+ data_ptr = &info->nv_config.fnumber;
+ data_size = sizeof(info->nv_config.fnumber);
+ dev_dbg(info->dev, "%s FNUMBER: %u\n",
+ __func__, info->nv_config.fnumber);
+ break;
+ case NVC_PARAM_CAPS:
+ /* send back just what's requested or our max size */
+ data_ptr = &info->nv_config;
+ data_size = sizeof(info->nv_config);
+ dev_err(info->dev, "%s CAPS\n", __func__);
+ break;
+ case NVC_PARAM_STS:
+ /*data_ptr = &info->sts;
+ data_size = sizeof(info->sts);*/
+ dev_dbg(info->dev, "%s\n", __func__);
+ break;
+ case NVC_PARAM_STEREO:
+ default:
+ dev_err(info->dev, "%s unsupported parameter: %d\n",
+ __func__, params.param);
+ err = -EINVAL;
+ break;
+ }
+ if (!err && params.sizeofvalue < data_size) {
+ dev_err(info->dev,
+ "%s data size mismatch %d != %d Param: %d\n",
+ __func__, params.sizeofvalue, data_size, params.param);
+ return -EINVAL;
+ }
+ if (!err && copy_to_user(MAKE_USER_PTR(params.p_value),
+ data_ptr, data_size)) {
+ dev_err(info->dev,
+ "%s copy_to_user err line %d\n", __func__, __LINE__);
+ return -EFAULT;
+ }
+ return err;
+}
+
+static int drv201_param_wr(struct drv201_info *info, unsigned long arg)
+{
+ struct nvc_param params;
+ u8 u8val;
+ u32 u32val;
+ int err = 0;
+
+#ifdef CONFIG_COMPAT
+ memset(&params, 0, sizeof(params));
+ if (copy_from_user(&params, (const void __user *)arg,
+ sizeof(struct nvc_param_32))) {
+#else
+ if (copy_from_user(&params, (const void __user *)arg,
+ sizeof(struct nvc_param))) {
+#endif
+ dev_err(info->dev, "%s copy_from_user err line %d\n",
+ __func__, __LINE__);
+ return -EFAULT;
+ }
+ if (copy_from_user(&u32val,
+ MAKE_CONSTUSER_PTR(params.p_value), sizeof(u32val))) {
+ dev_err(info->dev, "%s %d copy_from_user err\n",
+ __func__, __LINE__);
+ return -EFAULT;
+ }
+ u8val = (u8)u32val;
+
+ /* parameters independent of sync mode */
+ switch (params.param) {
+ case NVC_PARAM_CAPS:
+ if (drv201_set_focuser_capabilities(info, &params)) {
+ dev_err(info->dev,
+ "%s: Error: copy_from_user bytes %d\n",
+ __func__, params.sizeofvalue);
+ err = -EFAULT;
+ }
+ break;
+ case NVC_PARAM_LOCUS:
+ dev_dbg(info->dev, "%s LOCUS: %d\n", __func__, u32val);
+ err = drv201_position_wr(info, (u16)u32val);
+ break;
+ case NVC_PARAM_RESET:
+ err = drv201_reset(info, u32val);
+ dev_dbg(info->dev, "%s RESET: %d\n", __func__, err);
+ break;
+ case NVC_PARAM_SELF_TEST:
+ err = 0;
+ dev_dbg(info->dev, "%s SELF_TEST: %d\n", __func__, err);
+ break;
+ default:
+ dev_dbg(info->dev, "%s unsupported parameter: %d\n",
+ __func__, params.param);
+ err = -EINVAL;
+ break;
+ }
+ return err;
+}
+
+static long drv201_ioctl(struct file *file,
+ unsigned int cmd,
+ unsigned long arg)
+{
+ struct drv201_info *info = file->private_data;
+ int pwr;
+ int err = 0;
+ switch (cmd) {
+ case NVC_IOCTL_PARAM_WR:
+#ifdef CONFIG_COMPAT
+ case NVC_IOCTL_32_PARAM_WR:
+#endif
+ drv201_pm_dev_wr(info, NVC_PWR_ON);
+ err = drv201_param_wr(info, arg);
+ drv201_pm_dev_wr(info, NVC_PWR_OFF);
+ return err;
+ case NVC_IOCTL_PARAM_RD:
+#ifdef CONFIG_COMPAT
+ case NVC_IOCTL_32_PARAM_RD:
+#endif
+ drv201_pm_dev_wr(info, NVC_PWR_ON);
+ err = drv201_param_rd(info, arg);
+ drv201_pm_dev_wr(info, NVC_PWR_OFF);
+ return err;
+ case NVC_IOCTL_PWR_WR:
+ /* This is a Guaranteed Level of Service (GLOS) call */
+ pwr = (int)arg * 2;
+ dev_dbg(info->dev, "%s PWR_WR: %d\n", __func__, pwr);
+ err = drv201_pm_wr(info, pwr);
+ return err;
+ case NVC_IOCTL_PWR_RD:
+ pwr = info->pwr_dev;
+ dev_dbg(info->dev, "%s PWR_RD: %d\n", __func__, pwr);
+ if (copy_to_user((void __user *)arg,
+ (const void *)&pwr, sizeof(pwr))) {
+ dev_err(info->dev, "%s copy_to_user err line %d\n",
+ __func__, __LINE__);
+ return -EFAULT;
+ }
+ return 0;
+ default:
+ dev_dbg(info->dev, "%s unsupported ioctl: %x\n", __func__, cmd);
+ }
+ return -EINVAL;
+}
+
+
+static void drv201_sdata_init(struct drv201_info *info)
+{
+ /* set defaults */
+ memcpy(&info->nvc, &drv201_default_nvc, sizeof(info->nvc));
+ memcpy(&info->cap, &drv201_default_cap, sizeof(info->cap));
+
+ /* set to proper value */
+ info->cap.actuator_range =
+ DRV201_POS_HIGH_DEFAULT - DRV201_POS_LOW_DEFAULT;
+
+ /* set overrides if any */
+ if (info->pdata->nvc) {
+ if (info->pdata->nvc->fnumber)
+ info->nvc.fnumber = info->pdata->nvc->fnumber;
+ if (info->pdata->nvc->focal_length)
+ info->nvc.focal_length = info->pdata->nvc->focal_length;
+ if (info->pdata->nvc->max_aperature)
+ info->nvc.max_aperature =
+ info->pdata->nvc->max_aperature;
+ }
+
+ if (info->pdata->cap) {
+ if (info->pdata->cap->actuator_range)
+ info->cap.actuator_range =
+ info->pdata->cap->actuator_range;
+ if (info->pdata->cap->settle_time)
+ info->cap.settle_time = info->pdata->cap->settle_time;
+ if (info->pdata->cap->focus_macro)
+ info->cap.focus_macro = info->pdata->cap->focus_macro;
+ if (info->pdata->cap->focus_hyper)
+ info->cap.focus_hyper = info->pdata->cap->focus_hyper;
+ if (info->pdata->cap->focus_infinity)
+ info->cap.focus_infinity =
+ info->pdata->cap->focus_infinity;
+ }
+
+ drv201_get_focuser_capabilities(info);
+}
+
+static int drv201_open(struct inode *inode, struct file *file)
+{
+ struct drv201_info *info = NULL;
+ struct drv201_info *pos = NULL;
+
+ rcu_read_lock();
+ list_for_each_entry_rcu(pos, &drv201_info_list, list) {
+ if (pos->miscdev.minor == iminor(inode)) {
+ info = pos;
+ break;
+ }
+ }
+ rcu_read_unlock();
+ if (!info)
+ return -ENODEV;
+
+ if (atomic_xchg(&info->in_use, 1))
+ return -EBUSY;
+ file->private_data = info;
+
+ dev_dbg(info->dev, "%s\n", __func__);
+ return 0;
+}
+
+static int drv201_release(struct inode *inode, struct file *file)
+{
+ struct drv201_info *info = file->private_data;
+ dev_dbg(info->dev, "%s\n", __func__);
+ drv201_pm_wr(info, NVC_PWR_OFF);
+ file->private_data = NULL;
+ WARN_ON(!atomic_xchg(&info->in_use, 0));
+ return 0;
+}
+
+static const struct file_operations drv201_fileops = {
+ .owner = THIS_MODULE,
+ .open = drv201_open,
+ .unlocked_ioctl = drv201_ioctl,
+#ifdef CONFIG_COMPAT
+ .compat_ioctl = drv201_ioctl,
+#endif
+ .release = drv201_release,
+};
+
+static void drv201_del(struct drv201_info *info)
+{
+ drv201_pm_exit(info);
+ spin_lock(&drv201_spinlock);
+ list_del_rcu(&info->list);
+ spin_unlock(&drv201_spinlock);
+ synchronize_rcu();
+}
+
+static int drv201_remove(struct i2c_client *client)
+{
+ struct drv201_info *info = i2c_get_clientdata(client);
+ dev_dbg(info->dev, "%s\n", __func__);
+ misc_deregister(&info->miscdev);
+ drv201_del(info);
+ return 0;
+}
+
+static int drv201_probe(
+ struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ struct drv201_info *info;
+ char dname[16];
+ int err = 0;
+ static struct regmap_config drv201_regmap_config = {
+ .reg_bits = 8,
+ .val_bits = 16,
+ };
+
+ dev_dbg(&client->dev, "%s\n", __func__);
+ pr_info("drv201: probing focuser.\n");
+
+ info = devm_kzalloc(&client->dev, sizeof(*info), GFP_KERNEL);
+ if (info == NULL) {
+ dev_err(&client->dev, "%s: kzalloc error\n", __func__);
+ return -ENOMEM;
+ }
+ info->i2c_client = client;
+ info->dev = &client->dev;
+ if (client->dev.platform_data)
+ info->pdata = client->dev.platform_data;
+ else {
+ info->pdata = &drv201_default_pdata;
+ dev_dbg(info->dev, "%s No platform data. Using defaults.\n",
+ __func__);
+ }
+
+ info->regmap = devm_regmap_init_i2c(client, &drv201_regmap_config);
+ if (IS_ERR(info->regmap)) {
+ err = PTR_ERR(info->regmap);
+ dev_err(info->dev,
+ "Failed to allocate register map: %d\n", err);
+ return err;
+ }
+
+ i2c_set_clientdata(client, info);
+ INIT_LIST_HEAD(&info->list);
+ spin_lock(&drv201_spinlock);
+ list_add_rcu(&info->list, &drv201_info_list);
+ spin_unlock(&drv201_spinlock);
+ drv201_pm_init(info);
+
+ if (info->pdata->cfg & (NVC_CFG_NODEV | NVC_CFG_BOOT_INIT)) {
+ err = drv201_pm_wr(info, NVC_PWR_COMM);
+ drv201_pm_wr(info, NVC_PWR_OFF);
+ if (err < 0) {
+ dev_err(info->dev, "%s device not found\n",
+ __func__);
+ if (info->pdata->cfg & NVC_CFG_NODEV) {
+ drv201_del(info);
+ return -ENODEV;
+ }
+ } else {
+ dev_dbg(info->dev, "%s device found\n", __func__);
+ if (info->pdata->cfg & NVC_CFG_BOOT_INIT) {
+ /* initial move causes full initialization */
+ drv201_pm_wr(info, NVC_PWR_ON);
+ drv201_position_wr(info,
+ (u16)info->nv_config.pos_working_low);
+ drv201_pm_wr(info, NVC_PWR_OFF);
+ }
+ }
+ }
+
+ drv201_sdata_init(info);
+
+ if (info->pdata->dev_name)
+ strlcpy(dname, info->pdata->dev_name, sizeof(dname));
+ else
+ strlcpy(dname, "drv201", sizeof(dname));
+
+ if (info->pdata->num) {
+ size_t len = strlen(dname);
+
+ snprintf(dname + len, sizeof(dname) - len,
+ ".%u", info->pdata->num);
+ }
+
+ info->miscdev.name = dname;
+ info->miscdev.fops = &drv201_fileops;
+ info->miscdev.minor = MISC_DYNAMIC_MINOR;
+ if (misc_register(&info->miscdev)) {
+ dev_err(info->dev, "%s unable to register misc device %s\n",
+ __func__, dname);
+ drv201_del(info);
+ return -ENODEV;
+ }
+
+#ifdef TEGRA_12X_OR_HIGHER_CONFIG
+ err = camera_dev_add_regmap(&info->csync_dev, "drv201", info->regmap);
+ if (err < 0) {
+ dev_err(info->dev, "%s unable to add i2c frame sync\n", __func__);
+ drv201_del(info);
+ return -ENODEV;
+ }
+#endif
+
+ return 0;
+}
+
+
+static const struct i2c_device_id drv201_id[] = {
+ { "drv201", 0 },
+ { },
+};
+
+MODULE_DEVICE_TABLE(i2c, drv201_id);
+
+static struct i2c_driver drv201_i2c_driver = {
+ .driver = {
+ .name = "drv201",
+ .owner = THIS_MODULE,
+ },
+ .id_table = drv201_id,
+ .probe = drv201_probe,
+ .remove = drv201_remove,
+};
+
+module_i2c_driver(drv201_i2c_driver);
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/media/platform/tegra/imx219.c b/drivers/media/platform/tegra/imx219.c
new file mode 100644
index 0000000..e258e05
--- /dev/null
+++ b/drivers/media/platform/tegra/imx219.c
@@ -0,0 +1,842 @@
+/*
+ * imx219.c - imx219 sensor driver
+ *
+ * Copyright (c) 2013, NVIDIA CORPORATION, All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/delay.h>
+#include <linux/fs.h>
+#include <linux/i2c.h>
+#include <linux/clk.h>
+#include <linux/miscdevice.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include <linux/regulator/consumer.h>
+#include <media/imx219.h>
+#include <linux/gpio.h>
+#include <linux/module.h>
+#include <linux/sysedp.h>
+#include <linux/kernel.h>
+#include <linux/debugfs.h>
+#include <linux/seq_file.h>
+
+#include "nvc_utilities.h"
+
+#include "imx219_tables.h"
+
+struct imx219_info {
+ struct miscdevice miscdev_info;
+ int mode;
+ struct imx219_power_rail power;
+ struct nvc_fuseid fuse_id;
+ struct i2c_client *i2c_client;
+ struct imx219_platform_data *pdata;
+ struct clk *mclk;
+ struct mutex imx219_camera_lock;
+ struct dentry *debugdir;
+ atomic_t in_use;
+#ifdef CONFIG_DEBUG_FS
+ struct dentry *debugfs_root;
+ u32 debug_i2c_offset;
+#endif
+ struct sysedp_consumer *sysedpc;
+ /* AF data */
+ u8 afdat[4];
+ bool afdat_read;
+ struct imx219_gain pre_gain;
+ bool pre_gain_delay;
+};
+
+static inline void
+msleep_range(unsigned int delay_base)
+{
+ usleep_range(delay_base*1000, delay_base*1000+500);
+}
+
+static inline void
+imx219_get_frame_length_regs(struct imx219_reg *regs, u32 frame_length)
+{
+ regs->addr = 0x0160;
+ regs->val = (frame_length >> 8) & 0xff;
+ (regs + 1)->addr = 0x0161;
+ (regs + 1)->val = (frame_length) & 0xff;
+}
+
+static inline void
+imx219_get_coarse_time_regs(struct imx219_reg *regs, u32 coarse_time)
+{
+ regs->addr = 0x015a;
+ regs->val = (coarse_time >> 8) & 0xff;
+ (regs + 1)->addr = 0x015b;
+ (regs + 1)->val = (coarse_time) & 0xff;
+}
+
+static inline void
+imx219_get_gain_reg(struct imx219_reg *regs, struct imx219_gain gain)
+{
+ regs->addr = 0x0157;
+ regs->val = gain.again;
+ (regs + 1)->addr = 0x0158;
+ (regs + 1)->val = gain.dgain_upper;
+ (regs + 2)->addr = 0x0159;
+ (regs + 2)->val = gain.dgain_lower;
+}
+
+static int
+imx219_read_reg(struct i2c_client *client, u16 addr, u8 *val)
+{
+ int err;
+ struct i2c_msg msg[2];
+ unsigned char data[3];
+
+ if (!client->adapter)
+ return -ENODEV;
+
+ msg[0].addr = client->addr;
+ msg[0].flags = 0;
+ msg[0].len = 2;
+ msg[0].buf = data;
+
+ /* high byte goes out first */
+ data[0] = (u8) (addr >> 8);
+ data[1] = (u8) (addr & 0xff);
+
+ msg[1].addr = client->addr;
+ msg[1].flags = I2C_M_RD;
+ msg[1].len = 1;
+ msg[1].buf = data + 2;
+
+ err = i2c_transfer(client->adapter, msg, 2);
+ if (err == 2) {
+ *val = data[2];
+ return 0;
+ }
+
+ pr_err("%s:i2c read failed, addr %x, err %d\n",
+ __func__, addr, err);
+
+ return err;
+}
+
+static int
+imx219_write_reg(struct i2c_client *client, u16 addr, u8 val)
+{
+ int err;
+ struct i2c_msg msg;
+ unsigned char data[3];
+
+ if (!client->adapter)
+ return -ENODEV;
+
+ data[0] = (u8) (addr >> 8);
+ data[1] = (u8) (addr & 0xff);
+ data[2] = (u8) (val & 0xff);
+
+ msg.addr = client->addr;
+ msg.flags = 0;
+ msg.len = 3;
+ msg.buf = data;
+
+ err = i2c_transfer(client->adapter, &msg, 1);
+ if (err == 1)
+ return 0;
+
+ pr_err("%s:i2c write failed, addr %x, val %x, err %d\n",
+ __func__, addr, val, err);
+
+ return err;
+}
+
+static int
+imx219_write_table(struct i2c_client *client,
+ const struct imx219_reg table[],
+ const struct imx219_reg override_list[],
+ int num_override_regs)
+{
+ int err;
+ const struct imx219_reg *next;
+ int i;
+ u16 val;
+
+ for (next = table; next->addr != IMX219_TABLE_END; next++) {
+ if (next->addr == IMX219_TABLE_WAIT_MS) {
+ msleep_range(next->val);
+ continue;
+ }
+
+ val = next->val;
+
+ /* When an override list is passed in, replace the reg */
+ /* value to write if the reg is in the list */
+ if (override_list) {
+ for (i = 0; i < num_override_regs; i++) {
+ if (next->addr == override_list[i].addr) {
+ val = override_list[i].val;
+ break;
+ }
+ }
+ }
+
+ err = imx219_write_reg(client, next->addr, val);
+ if (err) {
+ pr_err("%s: imx219_write_table: %d\n", __func__, err);
+ return err;
+ }
+ }
+ return 0;
+}
+
+static int
+imx219_set_mode(struct imx219_info *info, struct imx219_mode *mode)
+{
+ int sensor_mode;
+ int err;
+ struct imx219_reg reg_list[8];
+
+ pr_info("%s: xres %u yres %u framelength %u coarsetime %u again %u dgain %u/%u\n",
+ __func__, mode->xres, mode->yres, mode->frame_length,
+ mode->coarse_time, mode->gain.again,
+ mode->gain.dgain_upper, mode->gain.dgain_lower);
+
+ if (mode->xres == 3280 && mode->yres == 2460) {
+ sensor_mode = IMX219_MODE_3280x2460;
+ } else if (mode->xres == 1640 && mode->yres == 1230) {
+ sensor_mode = IMX219_MODE_1640x1230;
+ } else if (mode->xres == 3280 && mode->yres == 1846) {
+ sensor_mode = IMX219_MODE_3280x1846;
+ } else if (mode->xres == 1280 && mode->yres == 720) {
+ sensor_mode = IMX219_MODE_1280x720;
+ } else {
+ pr_err("%s: invalid resolution supplied to set mode %d %d\n",
+ __func__, mode->xres, mode->yres);
+ return -EINVAL;
+ }
+
+ /* get a list of override regs for the requested frame length, */
+ /* coarse integration time, and gain */
+ imx219_get_frame_length_regs(reg_list, mode->frame_length);
+ imx219_get_coarse_time_regs(reg_list + 2, mode->coarse_time);
+ imx219_get_gain_reg(reg_list + 4, mode->gain);
+
+ err = imx219_write_table(info->i2c_client,
+ mode_table[sensor_mode],
+ reg_list, 7);
+ if (err)
+ return err;
+
+ info->pre_gain = mode->gain;
+ info->pre_gain_delay = false;
+
+ info->mode = sensor_mode;
+ pr_info("[IMX219]: stream on.\n");
+ return 0;
+}
+
+static int
+imx219_get_status(struct imx219_info *info, u8 *dev_status)
+{
+ *dev_status = 0;
+ return 0;
+}
+
+static int
+imx219_set_frame_length(struct imx219_info *info, u32 frame_length)
+{
+ struct imx219_reg reg_list[2];
+ int i = 0;
+ int ret;
+
+ imx219_get_frame_length_regs(reg_list, frame_length);
+
+ for (i = 0; i < 2; i++) {
+ ret = imx219_write_reg(info->i2c_client, reg_list[i].addr,
+ reg_list[i].val);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
+static int
+imx219_set_coarse_time(struct imx219_info *info, u32 coarse_time)
+{
+ int ret;
+
+ struct imx219_reg reg_list[2];
+ int i = 0;
+
+ imx219_get_coarse_time_regs(reg_list, coarse_time);
+
+ for (i = 0; i < 2; i++) {
+ ret = imx219_write_reg(info->i2c_client, reg_list[i].addr,
+ reg_list[i].val);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
+static int
+imx219_set_gain(struct imx219_info *info, struct imx219_gain gain)
+{
+ int i;
+ int ret;
+ struct imx219_reg reg_list[3];
+
+ imx219_get_gain_reg(reg_list, gain);
+ for (i = 0; i < 3; ++i) {
+ ret = imx219_write_reg(info->i2c_client,
+ reg_list[i].addr,
+ reg_list[i].val);
+ if (ret) {
+ pr_err("%s: unable to write register: %d\n", __func__, ret);
+ return ret;
+ }
+ }
+
+ return ret;
+}
+
+static int
+imx219_set_group_hold(struct imx219_info *info, struct imx219_ae *ae)
+{
+ int ret = 0;
+ struct imx219_reg reg_list[8];
+ int offset = 0;
+
+ if (ae->frame_length_enable) {
+ imx219_get_frame_length_regs(reg_list + offset, ae->frame_length);
+ offset += 2;
+ }
+ if (ae->coarse_time_enable) {
+ imx219_get_coarse_time_regs(reg_list + offset, ae->coarse_time);
+ offset += 2;
+ }
+ if (ae->gain_enable) {
+ imx219_get_gain_reg(reg_list + offset, ae->gain);
+ offset += 3;
+ }
+
+ reg_list[offset].addr = IMX219_TABLE_END;
+
+ ret = imx219_write_table(info->i2c_client,
+ reg_list, NULL, 0);
+ return ret;
+}
+
+static int imx219_get_sensor_id(struct imx219_info *info)
+{
+ int ret = 0;
+ int i;
+ u8 bak = 0;
+
+ pr_info("%s\n", __func__);
+ if (info->fuse_id.size)
+ return 0;
+
+ /* Note 1: if the sensor is not powered at this point, the caller
+ must supply power first, e.g. by calling the power-on function */
+
+ for (i = 0; i < 4; i++) {
+ ret |= imx219_read_reg(info->i2c_client, 0x0004 + i, &bak);
+ pr_info("chip unique id 0x%x = 0x%02x\n", i, bak);
+ info->fuse_id.data[i] = bak;
+ }
+
+ for (i = 0; i < 2; i++) {
+ ret |= imx219_read_reg(info->i2c_client, 0x000d + i, &bak);
+ pr_info("chip unique id 0x%x = 0x%02x\n", i + 4, bak);
+ info->fuse_id.data[i + 4] = bak;
+ }
+
+ if (!ret)
+ info->fuse_id.size = 6;
+
+ /* Note 2: Need to clean up any action carried out in Note 1 */
+
+ return ret;
+}
+
+static int imx219_get_af_data(struct imx219_info *info)
+{
+ int ret = 0;
+ int i;
+ u8 bak = 0;
+ u8 *dat = (u8 *)info->afdat;
+
+ pr_info("%s\n", __func__);
+ if (info->afdat_read)
+ return 0;
+
+ imx219_write_reg(info->i2c_client, 0x0100, 0); /* SW standby */
+ msleep_range(33); /* wait one frame */
+
+ imx219_write_reg(info->i2c_client, 0x012A, 0x18); /* 24 MHz input */
+ imx219_write_reg(info->i2c_client, 0x012B, 0x00);
+
+ imx219_write_reg(info->i2c_client, 0x3302, 0x02); /* clock setting */
+ imx219_write_reg(info->i2c_client, 0x3303, 0x58); /* clock setting */
+ imx219_write_reg(info->i2c_client, 0x3300, 0); /* ECC on */
+ imx219_write_reg(info->i2c_client, 0x3200, 1); /* set 'Read' */
+
+ imx219_write_reg(info->i2c_client, 0x3202, 1); /* page 1 */
+
+ for (i = 0; i < 4; i++) {
+ ret |= imx219_read_reg(info->i2c_client, 0x3204 + i, &bak);
+ *(dat + 3 - i) = bak;
+ pr_info("afdat[%d] = 0x%02x\n", i, bak);
+ }
+ info->afdat_read = true;
+
+ return ret;
+}
+
+static void imx219_mclk_disable(struct imx219_info *info)
+{
+ dev_dbg(&info->i2c_client->dev, "%s: disable MCLK\n", __func__);
+ clk_disable_unprepare(info->mclk);
+}
+
+static int imx219_mclk_enable(struct imx219_info *info)
+{
+ int err;
+ unsigned long mclk_init_rate = 24000000;
+
+ dev_dbg(&info->i2c_client->dev, "%s: enable MCLK with %lu Hz\n",
+ __func__, mclk_init_rate);
+
+ err = clk_set_rate(info->mclk, mclk_init_rate);
+ if (!err)
+ err = clk_prepare_enable(info->mclk);
+ return err;
+}
+
+static long
+imx219_ioctl(struct file *file,
+ unsigned int cmd, unsigned long arg)
+{
+ int err = 0;
+ struct imx219_info *info = file->private_data;
+
+ switch (cmd) {
+ case IMX219_IOCTL_SET_POWER:
+ if (!info->pdata)
+ break;
+ if (arg && info->pdata->power_on) {
+ err = imx219_mclk_enable(info);
+ if (!err) {
+ sysedp_set_state(info->sysedpc, 1);
+ err = info->pdata->power_on(&info->power);
+ }
+ if (err < 0)
+ imx219_mclk_disable(info);
+ }
+ if (!arg && info->pdata->power_off) {
+ info->pdata->power_off(&info->power);
+ imx219_mclk_disable(info);
+ sysedp_set_state(info->sysedpc, 0);
+ }
+ break;
+ case IMX219_IOCTL_SET_MODE:
+ {
+ struct imx219_mode mode;
+ if (copy_from_user(&mode, (const void __user *)arg,
+ sizeof(struct imx219_mode))) {
+ pr_err("%s:Failed to get mode from user.\n", __func__);
+ return -EFAULT;
+ }
+ return imx219_set_mode(info, &mode);
+ }
+ case IMX219_IOCTL_SET_FRAME_LENGTH:
+ err = imx219_set_frame_length(info, (u32)arg);
+ break;
+ case IMX219_IOCTL_SET_COARSE_TIME:
+ err = imx219_set_coarse_time(info, (u32)arg);
+ break;
+ case IMX219_IOCTL_SET_GAIN:
+ {
+ struct imx219_gain gain;
+ if (copy_from_user(&gain, (const void __user *)arg,
+ sizeof(struct imx219_gain))) {
+ pr_err("%s:Failed to get gain from user\n", __func__);
+ return -EFAULT;
+ }
+ err = imx219_set_gain(info, gain);
+ break;
+ }
+ case IMX219_IOCTL_GET_STATUS:
+ {
+ u8 status;
+
+ err = imx219_get_status(info, &status);
+ if (err)
+ return err;
+ if (copy_to_user((void __user *)arg, &status, 1)) {
+ pr_err("%s:Failed to copy status to user\n", __func__);
+ return -EFAULT;
+ }
+ return 0;
+ }
+ case IMX219_IOCTL_GET_FUSEID:
+ {
+ err = imx219_get_sensor_id(info);
+
+ if (err) {
+ pr_err("%s:Failed to get fuse id info.\n", __func__);
+ return err;
+ }
+ if (copy_to_user((void __user *)arg, &info->fuse_id,
+ sizeof(struct nvc_fuseid))) {
+ pr_err("%s:Failed to copy fuse id to user space\n",
+ __func__);
+ return -EFAULT;
+ }
+ return 0;
+ }
+ case IMX219_IOCTL_SET_GROUP_HOLD:
+ {
+ struct imx219_ae ae;
+ if (copy_from_user(&ae, (const void __user *)arg,
+ sizeof(struct imx219_ae))) {
+ pr_err("%s:Failed to get group hold from user\n", __func__);
+ return -EFAULT;
+ }
+ return imx219_set_group_hold(info, &ae);
+ }
+ case IMX219_IOCTL_GET_AFDAT:
+ {
+ err = imx219_get_af_data(info);
+
+ if (err) {
+ pr_err("%s:Failed to get af data.\n", __func__);
+ return err;
+ }
+ if (copy_to_user((void __user *)arg, info->afdat, 4)) {
+ pr_err("%s:Failed to copy status to user\n", __func__);
+ return -EFAULT;
+ }
+ return 0;
+ }
+ case IMX219_IOCTL_SET_FLASH_MODE:
+ {
+ dev_dbg(&info->i2c_client->dev,
+ "IMX219_IOCTL_SET_FLASH_MODE not used\n");
+ }
+ /* fall through */
+ case IMX219_IOCTL_GET_FLASH_CAP:
+ return -ENODEV; /* Flounder does not support on-sensor strobe */
+ default:
+ pr_err("%s:unknown cmd: %u\n", __func__, cmd);
+ err = -EINVAL;
+ }
+
+ return err;
+}
+
+static int
+imx219_open(struct inode *inode, struct file *file)
+{
+ struct miscdevice *miscdev = file->private_data;
+ struct imx219_info *info;
+
+ info = container_of(miscdev, struct imx219_info, miscdev_info);
+ /* check if the device is in use */
+ if (atomic_xchg(&info->in_use, 1)) {
+ pr_info("%s:BUSY!\n", __func__);
+ return -EBUSY;
+ }
+
+ file->private_data = info;
+
+ return 0;
+}
+
+static int
+imx219_release(struct inode *inode, struct file *file)
+{
+ struct imx219_info *info = file->private_data;
+
+ file->private_data = NULL;
+
+ /* warn if device is already released */
+ WARN_ON(!atomic_xchg(&info->in_use, 0));
+ return 0;
+}
+
+static int imx219_power_put(struct imx219_power_rail *pw)
+{
+ if (unlikely(!pw))
+ return -EFAULT;
+
+ if (likely(pw->avdd))
+ regulator_put(pw->avdd);
+
+ if (likely(pw->iovdd))
+ regulator_put(pw->iovdd);
+
+ if (likely(pw->dvdd))
+ regulator_put(pw->dvdd);
+
+ pw->avdd = NULL;
+ pw->iovdd = NULL;
+ pw->dvdd = NULL;
+
+ return 0;
+}
+
+static const struct file_operations imx219_fileops = {
+ .owner = THIS_MODULE,
+ .open = imx219_open,
+ .unlocked_ioctl = imx219_ioctl,
+#ifdef CONFIG_COMPAT
+ .compat_ioctl = imx219_ioctl,
+#endif
+ .release = imx219_release,
+};
+
+static struct miscdevice imx219_device = {
+ .minor = MISC_DYNAMIC_MINOR,
+ .name = "imx219",
+ .fops = &imx219_fileops,
+};
+
+#ifdef CONFIG_DEBUG_FS
+static int imx219_stats_show(struct seq_file *s, void *data)
+{
+ struct imx219_info *info = s->private;
+
+ seq_printf(s, "%-20s : %-20s\n", "Name", "imx219-debugfs-testing");
+ seq_printf(s, "%-20s : 0x%X\n", "Current i2c-offset Addr",
+ info->debug_i2c_offset);
+ seq_printf(s, "%-20s : 0x%X\n", "DC BLC Enabled",
+ info->debug_i2c_offset);
+ return 0;
+}
+
+static int imx219_stats_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, imx219_stats_show, inode->i_private);
+}
+
+static const struct file_operations imx219_stats_fops = {
+ .open = imx219_stats_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
+static int debug_i2c_offset_w(void *data, u64 val)
+{
+ struct imx219_info *info = (struct imx219_info *)(data);
+ dev_info(&info->i2c_client->dev,
+ "imx219:%s setting i2c offset to 0x%X\n",
+ __func__, (u32)val);
+ info->debug_i2c_offset = (u32)val;
+ dev_info(&info->i2c_client->dev,
+ "imx219:%s new i2c offset is 0x%X\n", __func__,
+ info->debug_i2c_offset);
+ return 0;
+}
+
+static int debug_i2c_offset_r(void *data, u64 *val)
+{
+ struct imx219_info *info = (struct imx219_info *)(data);
+ *val = (u64)info->debug_i2c_offset;
+ dev_info(&info->i2c_client->dev,
+ "imx219:%s reading i2c offset is 0x%X\n", __func__,
+ info->debug_i2c_offset);
+ return 0;
+}
+
+static int debug_i2c_read(void *data, u64 *val)
+{
+ struct imx219_info *info = (struct imx219_info *)(data);
+ u8 temp1 = 0;
+ u8 temp2 = 0;
+ dev_info(&info->i2c_client->dev,
+ "imx219:%s reading offset 0x%X\n", __func__,
+ info->debug_i2c_offset);
+ if (imx219_read_reg(info->i2c_client,
+ info->debug_i2c_offset, &temp1)
+ || imx219_read_reg(info->i2c_client,
+ info->debug_i2c_offset+1, &temp2)) {
+ dev_err(&info->i2c_client->dev,
+ "imx219:%s failed\n", __func__);
+ return -EIO;
+ }
+ dev_info(&info->i2c_client->dev,
+ "imx219:%s read value is 0x%X\n", __func__,
+ temp1<<8 | temp2);
+ *val = (u64)(temp1<<8 | temp2);
+ return 0;
+}
+
+static int debug_i2c_write(void *data, u64 val)
+{
+ struct imx219_info *info = (struct imx219_info *)(data);
+ dev_info(&info->i2c_client->dev,
+ "imx219:%s writing 0x%X to offset 0x%X\n", __func__,
+ (u8)val, info->debug_i2c_offset);
+ if (imx219_write_reg(info->i2c_client,
+ info->debug_i2c_offset, (u8)val)) {
+ dev_err(&info->i2c_client->dev, "imx219:%s failed\n", __func__);
+ return -EIO;
+ }
+ return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(i2c_offset_fops, debug_i2c_offset_r,
+ debug_i2c_offset_w, "0x%llx\n");
+DEFINE_SIMPLE_ATTRIBUTE(i2c_read_fops, debug_i2c_read,
+ /*debug_i2c_dummy_w*/ NULL, "0x%llx\n");
+DEFINE_SIMPLE_ATTRIBUTE(i2c_write_fops, /*debug_i2c_dummy_r*/NULL,
+ debug_i2c_write, "0x%llx\n");
+
+static int imx219_debug_init(struct imx219_info *info)
+{
+ dev_dbg(&info->i2c_client->dev, "%s\n", __func__);
+
+ info->debugfs_root = debugfs_create_dir(imx219_device.name, NULL);
+
+ if (!info->debugfs_root)
+ goto err_out;
+
+ if (!debugfs_create_file("stats", S_IRUGO,
+ info->debugfs_root, info, &imx219_stats_fops))
+ goto err_out;
+
+ if (!debugfs_create_file("offset", S_IRUGO | S_IWUSR,
+ info->debugfs_root, info, &i2c_offset_fops))
+ goto err_out;
+
+ if (!debugfs_create_file("read", S_IRUGO,
+ info->debugfs_root, info, &i2c_read_fops))
+ goto err_out;
+
+ if (!debugfs_create_file("write", S_IWUSR,
+ info->debugfs_root, info, &i2c_write_fops))
+ goto err_out;
+
+ return 0;
+
+err_out:
+ dev_err(&info->i2c_client->dev, "ERROR: %s failed\n", __func__);
+ debugfs_remove_recursive(info->debugfs_root);
+ return -ENOMEM;
+}
+#endif
+
+static int
+imx219_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ struct imx219_info *info;
+ int err;
+ const char *mclk_name;
+
+ pr_info("[IMX219]: probing sensor.\n");
+
+ info = devm_kzalloc(&client->dev,
+ sizeof(struct imx219_info), GFP_KERNEL);
+ if (!info) {
+ pr_err("%s:Unable to allocate memory!\n", __func__);
+ return -ENOMEM;
+ }
+
+ info->pdata = client->dev.platform_data;
+ info->i2c_client = client;
+ atomic_set(&info->in_use, 0);
+ info->mode = -1;
+ info->afdat_read = false;
+
+ if (!info->pdata) {
+ dev_err(&client->dev, "%s: no platform data\n", __func__);
+ return -EINVAL;
+ }
+
+ mclk_name = info->pdata->mclk_name ?
+ info->pdata->mclk_name : "default_mclk";
+ info->mclk = devm_clk_get(&client->dev, mclk_name);
+ if (IS_ERR(info->mclk)) {
+ dev_err(&client->dev, "%s: unable to get clock %s\n",
+ __func__, mclk_name);
+ return PTR_ERR(info->mclk);
+ }
+
+ memcpy(&info->miscdev_info,
+ &imx219_device,
+ sizeof(struct miscdevice));
+
+ err = misc_register(&info->miscdev_info);
+ if (err) {
+ pr_err("%s:Unable to register misc device!\n", __func__);
+ goto imx219_probe_fail;
+ }
+
+ i2c_set_clientdata(client, info);
+ /* create debugfs interface */
+#ifdef CONFIG_DEBUG_FS
+ imx219_debug_init(info);
+#endif
+ info->sysedpc = sysedp_create_consumer("imx219", "imx219");
+ return 0;
+
+imx219_probe_fail:
+ imx219_power_put(&info->power);
+
+ return err;
+}
+
+static int
+imx219_remove(struct i2c_client *client)
+{
+ struct imx219_info *info;
+ info = i2c_get_clientdata(client);
+ misc_deregister(&imx219_device);
+
+ imx219_power_put(&info->power);
+
+#ifdef CONFIG_DEBUG_FS
+ debugfs_remove_recursive(info->debugfs_root);
+#endif
+ sysedp_free_consumer(info->sysedpc);
+ return 0;
+}
+
+static const struct i2c_device_id imx219_id[] = {
+ { "imx219", 0 },
+ { }
+};
+
+MODULE_DEVICE_TABLE(i2c, imx219_id);
+
+static struct i2c_driver imx219_i2c_driver = {
+ .driver = {
+ .name = "imx219",
+ .owner = THIS_MODULE,
+ },
+ .probe = imx219_probe,
+ .remove = imx219_remove,
+ .id_table = imx219_id,
+};
+
+static int __init imx219_init(void)
+{
+ pr_info("[IMX219] sensor driver loading\n");
+ return i2c_add_driver(&imx219_i2c_driver);
+}
+
+static void __exit imx219_exit(void)
+{
+ i2c_del_driver(&imx219_i2c_driver);
+}
+
+module_init(imx219_init);
+module_exit(imx219_exit);
diff --git a/drivers/media/platform/tegra/imx219_tables.h b/drivers/media/platform/tegra/imx219_tables.h
new file mode 100644
index 0000000..6caf11a
--- /dev/null
+++ b/drivers/media/platform/tegra/imx219_tables.h
@@ -0,0 +1,417 @@
+/*
+ * imx219_tables.h - sensor mode tables for imx219 HDR sensor.
+ *
+ * Copyright (c) 2013, NVIDIA CORPORATION, All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef IMX219_I2C_TABLES
+#define IMX219_I2C_TABLES
+
+#define IMX219_TABLE_WAIT_MS 0
+#define IMX219_TABLE_END 1
+#define IMX219_MAX_RETRIES 3
+#define IMX219_WAIT_MS 3
+
+struct imx219_reg {
+ u16 addr;
+ u8 val;
+};
+
+static struct imx219_reg mode_3280x2460[] = {
+ /* software reset */
+ {0x0103, 0x01},
+ /* global settings */
+ {0x30EB, 0x05},
+ {0x30EB, 0x0C},
+ {0x300A, 0xFF},
+ {0x300B, 0xFF},
+ {0x30EB, 0x05},
+ {0x30EB, 0x09},
+ {0x0114, 0x03},
+ {0x0128, 0x00},
+ {0x012A, 0x18},
+ {0x012B, 0x00},
+ /* Bank A Settings */
+ {0x0157, 0x00},
+ {0x015A, 0x08},
+ {0x015B, 0x8F},
+ {0x0160, 0x0A},
+ {0x0161, 0x83},
+ {0x0162, 0x0D},
+ {0x0163, 0x78},
+ {0x0164, 0x00},
+ {0x0165, 0x00},
+ {0x0166, 0x0C},
+ {0x0167, 0xCF},
+ {0x0168, 0x00},
+ {0x0169, 0x00},
+ {0x016A, 0x09},
+ {0x016B, 0x9F},
+ {0x016C, 0x0C},
+ {0x016D, 0xD0},
+ {0x016E, 0x09},
+ {0x016F, 0x9C},
+ {0x0170, 0x01},
+ {0x0171, 0x01},
+ {0x0174, 0x00},
+ {0x0175, 0x00},
+ {0x018C, 0x0A},
+ {0x018D, 0x0A},
+ /* Bank B Settings */
+ {0x0257, 0x00},
+ {0x025A, 0x08},
+ {0x025B, 0x8F},
+ {0x0260, 0x0A},
+ {0x0261, 0x83},
+ {0x0262, 0x0D},
+ {0x0263, 0x78},
+ {0x0264, 0x00},
+ {0x0265, 0x00},
+ {0x0266, 0x0C},
+ {0x0267, 0xCF},
+ {0x0268, 0x00},
+ {0x0269, 0x00},
+ {0x026A, 0x09},
+ {0x026B, 0x9F},
+ {0x026C, 0x0C},
+ {0x026D, 0xD0},
+ {0x026E, 0x09},
+ {0x026F, 0x9C},
+ {0x0270, 0x01},
+ {0x0271, 0x01},
+ {0x0274, 0x00},
+ {0x0275, 0x00},
+ {0x028C, 0x0A},
+ {0x028D, 0x0A},
+ /* clock setting */
+ {0x0301, 0x05},
+ {0x0303, 0x01},
+ {0x0304, 0x03},
+ {0x0305, 0x03},
+ {0x0306, 0x00},
+ {0x0307, 0x57},
+ {0x0309, 0x0A},
+ {0x030B, 0x01},
+ {0x030C, 0x00},
+ {0x030D, 0x5A},
+ {0x455E, 0x00},
+ {0x471E, 0x4B},
+ {0x4767, 0x0F},
+ {0x4750, 0x14},
+ {0x4540, 0x00},
+ {0x47B4, 0x14},
+ {0x4713, 0x30},
+ {0x478B, 0x10},
+ {0x478F, 0x10},
+ {0x4793, 0x10},
+ {0x4797, 0x0E},
+ {0x479B, 0x0E},
+ /* stream on */
+ {0x0100, 0x01},
+ {IMX219_TABLE_WAIT_MS, IMX219_WAIT_MS},
+ {IMX219_TABLE_END, 0x00}
+};
+static struct imx219_reg mode_3280x1846[] = {
+ /* software reset */
+ {0x0103, 0x01},
+ /* global settings */
+ {0x30EB, 0x05},
+ {0x30EB, 0x0C},
+ {0x300A, 0xFF},
+ {0x300B, 0xFF},
+ {0x30EB, 0x05},
+ {0x30EB, 0x09},
+ {0x0114, 0x03},
+ {0x0128, 0x00},
+ {0x012A, 0x18},
+ {0x012B, 0x00},
+ /* Bank A Settings */
+ {0x0157, 0x00},
+ {0x015A, 0x08},
+ {0x015B, 0x8F},
+ {0x0160, 0x07},
+ {0x0161, 0x5E},
+ {0x0162, 0x0D},
+ {0x0163, 0x78},
+ {0x0164, 0x00},
+ {0x0165, 0x00},
+ {0x0166, 0x0C},
+ {0x0167, 0xCF},
+ {0x0168, 0x01},
+ {0x0169, 0x36},
+ {0x016A, 0x08},
+ {0x016B, 0x6B},
+ {0x016C, 0x0C},
+ {0x016D, 0xD0},
+ {0x016E, 0x07},
+ {0x016F, 0x36},
+ {0x0170, 0x01},
+ {0x0171, 0x01},
+ {0x0174, 0x00},
+ {0x0175, 0x00},
+ {0x018C, 0x0A},
+ {0x018D, 0x0A},
+ /* Bank B Settings */
+ {0x0257, 0x00},
+ {0x025A, 0x08},
+ {0x025B, 0x8F},
+ {0x0260, 0x07},
+ {0x0261, 0x5E},
+ {0x0262, 0x0D},
+ {0x0263, 0x78},
+ {0x0264, 0x00},
+ {0x0265, 0x00},
+ {0x0266, 0x0C},
+ {0x0267, 0xCF},
+ {0x0268, 0x01},
+ {0x0269, 0x36},
+ {0x026A, 0x08},
+ {0x026B, 0x6B},
+ {0x026C, 0x0C},
+ {0x026D, 0xD0},
+ {0x026E, 0x07},
+ {0x026F, 0x36},
+ {0x0270, 0x01},
+ {0x0271, 0x01},
+ {0x0274, 0x00},
+ {0x0275, 0x00},
+ {0x028C, 0x0A},
+ {0x028D, 0x0A},
+ /* clock setting */
+ {0x0301, 0x05},
+ {0x0303, 0x01},
+ {0x0304, 0x03},
+ {0x0305, 0x03},
+ {0x0306, 0x00},
+ {0x0307, 0x57},
+ {0x0309, 0x0A},
+ {0x030B, 0x01},
+ {0x030C, 0x00},
+ {0x030D, 0x5A},
+ {0x455E, 0x00},
+ {0x471E, 0x4B},
+ {0x4767, 0x0F},
+ {0x4750, 0x14},
+ {0x4540, 0x00},
+ {0x47B4, 0x14},
+ {0x4713, 0x30},
+ {0x478B, 0x10},
+ {0x478F, 0x10},
+ {0x4793, 0x10},
+ {0x4797, 0x0E},
+ {0x479B, 0x0E},
+ /* stream on */
+ {0x0100, 0x01},
+ {IMX219_TABLE_WAIT_MS, IMX219_WAIT_MS},
+ {IMX219_TABLE_END, 0x00}
+};
+static struct imx219_reg mode_1640x1230[] = {
+ /* software reset */
+ {0x0103, 0x01},
+ /* global settings */
+ {0x30EB, 0x05},
+ {0x30EB, 0x0C},
+ {0x300A, 0xFF},
+ {0x300B, 0xFF},
+ {0x30EB, 0x05},
+ {0x30EB, 0x09},
+ {0x0114, 0x03},
+ {0x0128, 0x00},
+ {0x012A, 0x18},
+ {0x012B, 0x00},
+ /* Bank A Settings */
+ {0x0157, 0x00},
+ {0x015A, 0x08},
+ {0x015B, 0x8F},
+ {0x0160, 0x0A},
+ {0x0161, 0x83},
+ {0x0162, 0x0D},
+ {0x0163, 0x78},
+ {0x0164, 0x00},
+ {0x0165, 0x00},
+ {0x0166, 0x0C},
+ {0x0167, 0xCF},
+ {0x0168, 0x00},
+ {0x0169, 0x00},
+ {0x016A, 0x09},
+ {0x016B, 0x9F},
+ {0x016C, 0x06},
+ {0x016D, 0x68},
+ {0x016E, 0x04},
+ {0x016F, 0xCE},
+ {0x0170, 0x01},
+ {0x0171, 0x01},
+ {0x0174, 0x01},
+ {0x0175, 0x01},
+ {0x018C, 0x0A},
+ {0x018D, 0x0A},
+ /* Bank B Settings */
+ {0x0257, 0x00},
+ {0x025A, 0x08},
+ {0x025B, 0x8F},
+ {0x0260, 0x0A},
+ {0x0261, 0x83},
+ {0x0262, 0x0D},
+ {0x0263, 0x78},
+ {0x0264, 0x00},
+ {0x0265, 0x00},
+ {0x0266, 0x0C},
+ {0x0267, 0xCF},
+ {0x0268, 0x00},
+ {0x0269, 0xB0},
+ {0x026A, 0x09},
+ {0x026B, 0x9F},
+ {0x026C, 0x06},
+ {0x026D, 0x68},
+ {0x026E, 0x04},
+ {0x026F, 0xCE},
+ {0x0270, 0x01},
+ {0x0271, 0x01},
+ {0x0274, 0x01},
+ {0x0275, 0x01},
+ {0x028C, 0x0A},
+ {0x028D, 0x0A},
+ /* clock setting */
+ {0x0301, 0x05},
+ {0x0303, 0x01},
+ {0x0304, 0x03},
+ {0x0305, 0x03},
+ {0x0306, 0x00},
+ {0x0307, 0x57},
+ {0x0309, 0x0A},
+ {0x030B, 0x01},
+ {0x030C, 0x00},
+ {0x030D, 0x5A},
+ {0x455E, 0x00},
+ {0x471E, 0x4B},
+ {0x4767, 0x0F},
+ {0x4750, 0x14},
+ {0x4540, 0x00},
+ {0x47B4, 0x14},
+ {0x4713, 0x30},
+ {0x478B, 0x10},
+ {0x478F, 0x10},
+ {0x4793, 0x10},
+ {0x4797, 0x0E},
+ {0x479B, 0x0E},
+ /* stream on */
+ {0x0100, 0x01},
+ {IMX219_TABLE_WAIT_MS, IMX219_WAIT_MS},
+ {IMX219_TABLE_END, 0x00}
+};
+
+static struct imx219_reg mode_1280x720[] = {
+ /* software reset */
+ {0x0103, 0x01},
+ /* global settings */
+ {0x30EB, 0x05},
+ {0x30EB, 0x0C},
+ {0x300A, 0xFF},
+ {0x300B, 0xFF},
+ {0x30EB, 0x05},
+ {0x30EB, 0x09},
+ {0x0114, 0x03},
+ {0x0128, 0x00},
+ {0x012A, 0x18},
+ {0x012B, 0x00},
+ /* Bank A Settings */
+ {0x0160, 0x02},
+ {0x0161, 0x8C},
+ {0x0162, 0x0D},
+ {0x0163, 0xE8},
+ {0x0164, 0x01},
+ {0x0165, 0x68},
+ {0x0166, 0x0B},
+ {0x0167, 0x67},
+ {0x0168, 0x02},
+ {0x0169, 0x00},
+ {0x016A, 0x07},
+ {0x016B, 0x9F},
+ {0x016C, 0x05},
+ {0x016D, 0x00},
+ {0x016E, 0x02},
+ {0x016F, 0xD0},
+ {0x0170, 0x01},
+ {0x0171, 0x01},
+ {0x0174, 0x03},
+ {0x0175, 0x03},
+ {0x018C, 0x0A},
+ {0x018D, 0x0A},
+ /* Bank B Settings */
+ {0x0260, 0x02},
+ {0x0261, 0x8C},
+ {0x0262, 0x0D},
+ {0x0263, 0xE8},
+ {0x0264, 0x01},
+ {0x0265, 0x68},
+ {0x0266, 0x0B},
+ {0x0267, 0x67},
+ {0x0268, 0x02},
+ {0x0269, 0x00},
+ {0x026A, 0x07},
+ {0x026B, 0x9F},
+ {0x026C, 0x05},
+ {0x026D, 0x00},
+ {0x026E, 0x02},
+ {0x026F, 0xD0},
+ {0x0270, 0x01},
+ {0x0271, 0x01},
+ {0x0274, 0x03},
+ {0x0275, 0x03},
+ {0x028C, 0x0A},
+ {0x028D, 0x0A},
+ /* clock setting */
+ {0x0301, 0x05},
+ {0x0303, 0x01},
+ {0x0304, 0x03},
+ {0x0305, 0x03},
+ {0x0306, 0x00},
+ {0x0307, 0x57},
+ {0x0309, 0x0A},
+ {0x030B, 0x01},
+ {0x030C, 0x00},
+ {0x030D, 0x5A},
+ {0x455E, 0x00},
+ {0x471E, 0x4B},
+ {0x4767, 0x0F},
+ {0x4750, 0x14},
+ {0x4540, 0x00},
+ {0x47B4, 0x14},
+ {0x4713, 0x30},
+ {0x478B, 0x10},
+ {0x478F, 0x10},
+ {0x4793, 0x10},
+ {0x4797, 0x0E},
+ {0x479B, 0x0E},
+ /* stream on */
+ {0x0100, 0x01},
+ {IMX219_TABLE_WAIT_MS, IMX219_WAIT_MS},
+ {IMX219_TABLE_END, 0x00}
+};
+enum {
+ IMX219_MODE_3280x2460,
+ IMX219_MODE_3280x1846,
+ IMX219_MODE_1640x1230,
+ IMX219_MODE_1280x720,
+};
+
+static struct imx219_reg *mode_table[] = {
+ [IMX219_MODE_3280x2460] = mode_3280x2460,
+ [IMX219_MODE_3280x1846] = mode_3280x1846,
+ [IMX219_MODE_1640x1230] = mode_1640x1230,
+ [IMX219_MODE_1280x720] = mode_1280x720,
+};
+
+#endif
diff --git a/drivers/media/platform/tegra/nvavp/Kconfig b/drivers/media/platform/tegra/nvavp/Kconfig
index 294253a..4248bee 100644
--- a/drivers/media/platform/tegra/nvavp/Kconfig
+++ b/drivers/media/platform/tegra/nvavp/Kconfig
@@ -19,3 +19,13 @@
/dev/tegra_audio_avpchannel.
If unsure, say N
+
+config TEGRA_NVAVP_REF_DEBUG
+ bool "Enable Reference Debugging for Tegra NVAVP driver"
+ depends on TEGRA_NVAVP
+ default y
+ help
+ Enables tracking of references to the NVAVP device, to help debug
+ cases where the NVAVP driver prevents the system from entering sleep.
+
+ If unsure, say N
diff --git a/drivers/media/platform/tegra/nvavp/nvavp_dev.c b/drivers/media/platform/tegra/nvavp/nvavp_dev.c
index 2c54e49..dd84bde 100644
--- a/drivers/media/platform/tegra/nvavp/nvavp_dev.c
+++ b/drivers/media/platform/tegra/nvavp/nvavp_dev.c
@@ -1,7 +1,7 @@
/*
* drivers/media/video/tegra/nvavp/nvavp_dev.c
*
- * Copyright (c) 2011-2014, NVIDIA CORPORATION. All rights reserved.
+ * Copyright (c) 2011-2015, NVIDIA CORPORATION. All rights reserved.
*
* This file is licensed under the terms of the GNU General Public License
* version 2. This program is licensed "as is" without any warranty of any
@@ -45,7 +45,7 @@
#include <linux/memblock.h>
#include <linux/anon_inodes.h>
#include <linux/tegra_pm_domains.h>
-
+#include <linux/debugfs.h>
#include <linux/pm_qos.h>
@@ -55,6 +55,10 @@
#include <linux/of_address.h>
#include <linux/tegra-timer.h>
+#ifdef CONFIG_TRUSTED_LITTLE_KERNEL
+#include <linux/ote_protocol.h>
+#endif
+
#if defined(CONFIG_TEGRA_AVP_KERNEL_ON_MMU)
#include "../avp/headavp.h"
#endif
@@ -171,6 +175,10 @@
struct miscdevice audio_misc_dev;
#endif
struct task_struct *init_task;
+#ifdef CONFIG_TEGRA_NVAVP_REF_DEBUG
+ struct list_head ref_list;
+ struct dentry *dbg_dir;
+#endif
};
struct nvavp_clientctx {
@@ -185,6 +193,159 @@
};
static struct nvavp_info *nvavp_info_ctx;
+
+#ifdef CONFIG_TEGRA_NVAVP_REF_DEBUG
+struct nvavp_ref {
+ struct pid *pid;
+ struct list_head list;
+};
+
+static void nvavp_ref_add(struct nvavp_info *nvavp)
+{
+ struct nvavp_ref *ref;
+ struct pid *pid = get_task_pid(current, PIDTYPE_PGID);
+
+ ref = kmalloc(sizeof(*ref), GFP_KERNEL);
+ if (!ref) {
+ pr_err("%s: failed to record Process %d, kmalloc failed!\n",
+ __func__, pid_nr(pid));
+ put_pid(pid);
+ return;
+ }
+
+ ref->pid = pid;
+ list_add_tail(&ref->list, &nvavp->ref_list);
+ pr_err("nvavp: Process %d acquired a reference.\n", pid_nr(pid));
+}
+
+static inline void __nvavp_ref_remove(struct nvavp_ref *ref)
+{
+ list_del(&ref->list);
+ put_pid(ref->pid);
+ kfree(ref);
+}
+
+static void nvavp_ref_del(struct nvavp_info *nvavp)
+{
+ struct nvavp_ref *ref;
+ struct pid *pid = get_task_pid(current, PIDTYPE_PGID);
+
+ list_for_each_entry(ref, &nvavp->ref_list, list)
+ if (ref->pid == pid)
+ break;
+
+ if (&ref->list != &nvavp->ref_list) {
+ pr_err("nvavp: Process %d has given up a reference\n",
+ pid_nr(pid));
+ __nvavp_ref_remove(ref);
+ } else {
+ pr_err("%s: Reference not found for Process %d\n",
+ __func__, pid_nr(pid));
+ }
+
+ put_pid(pid);
+}
+
+static void nvavp_ref_dump(struct nvavp_info *nvavp)
+{
+ struct nvavp_ref *ref, *tmp;
+
+ pr_info("nvavp: Dumping reference list:\n");
+ list_for_each_entry_safe(ref, tmp, &nvavp->ref_list, list) {
+ if (pid_task(ref->pid, PIDTYPE_PGID)) {
+ pr_info("nvavp: Process %d\n", pid_nr(ref->pid));
+ } else {
+ pr_info("nvavp: Process %d no longer exists, "
+ "dangling reference removed!\n",
+ pid_nr(ref->pid));
+ __nvavp_ref_remove(ref);
+ }
+ }
+}
+
+#ifdef CONFIG_DEBUG_FS
+static int dbg_nvavp_refs_show(struct seq_file *m, void *unused)
+{
+ struct nvavp_info *nvavp = m->private;
+ struct nvavp_ref *ref;
+
+ list_for_each_entry(ref, &nvavp->ref_list, list)
+ if (pid_task(ref->pid, PIDTYPE_PGID))
+ seq_printf(m, "%d\n", pid_nr(ref->pid));
+ else
+ seq_printf(m, "%d [dangling]\n", pid_nr(ref->pid));
+
+ return 0;
+}
+
+static int dbg_nvavp_refs_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, dbg_nvavp_refs_show, inode->i_private);
+}
+
+static const struct file_operations dbg_nvavp_refs_fops = {
+ .open = dbg_nvavp_refs_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
+static void nvavp_debug_create(struct nvavp_info *nvavp)
+{
+ struct dentry *dbg_dir;
+ struct dentry *ret;
+
+ dbg_dir = debugfs_create_dir("nvavp", NULL);
+ if (!dbg_dir)
+ return;
+ nvavp->dbg_dir = dbg_dir;
+ ret = debugfs_create_file("refs", S_IRUGO, dbg_dir, nvavp,
+ &dbg_nvavp_refs_fops);
+ if (!ret)
+ debugfs_remove_recursive(dbg_dir);
+}
+#else
+static int dbg_nvavp_refs_show(struct seq_file *m, void *unused)
+{
+ return 0;
+}
+
+static int dbg_nvavp_refs_open(struct inode *inode, struct file *file)
+{
+ return 0;
+}
+
+static void nvavp_debug_create(struct nvavp_info *nvavp)
+{
+}
+#endif
+#endif
+
+static inline void nvavp_ref_inc(struct nvavp_info *nvavp)
+{
+#ifdef CONFIG_TEGRA_NVAVP_REF_DEBUG
+ nvavp_ref_add(nvavp);
+#endif
+}
+
+static inline void nvavp_ref_dec(struct nvavp_info *nvavp)
+{
+#ifdef CONFIG_TEGRA_NVAVP_REF_DEBUG
+ nvavp_ref_del(nvavp);
+#endif
+}
+
static int nvavp_runtime_get(struct nvavp_info *nvavp)
{
if (nvavp->init_task != current) {
@@ -563,6 +724,7 @@
} else {
nvavp->clk_enabled++;
}
+ nvavp_ref_inc(nvavp);
}
static void nvavp_clks_disable(struct nvavp_info *nvavp)
@@ -581,6 +743,7 @@
dev_dbg(&nvavp->nvhost_dev->dev, "%s: resetting emc_clk "
"and sclk\n", __func__);
}
+ nvavp_ref_dec(nvavp);
}
static u32 nvavp_check_idle(struct nvavp_info *nvavp, int channel_id)
@@ -1281,7 +1444,7 @@
audio_initialized = nvavp_get_audio_init_status(nvavp);
#endif
- pr_debug("nvavp_uninit video_initialized(%d) audio_initialized(%d)\n",
+ pr_err("nvavp_uninit video_initialized(%d) audio_initialized(%d)\n",
video_initialized, audio_initialized);
/* Video and Audio both are uninitialized */
@@ -1291,7 +1454,7 @@
nvavp->init_task = current;
if (video_initialized) {
- pr_debug("nvavp_uninit nvavp->video_initialized\n");
+ pr_err("nvavp_uninit nvavp->video_initialized\n");
nvavp_halt_vde(nvavp);
nvavp_set_video_init_status(nvavp, 0);
video_initialized = 0;
@@ -1307,7 +1470,7 @@
/* Video and Audio both becomes uninitialized */
if (!video_initialized && !audio_initialized) {
- pr_debug("nvavp_uninit both channels uninitialized\n");
+ pr_err("nvavp_uninit both channels uninitialized\n");
clk_disable_unprepare(nvavp->sclk);
clk_disable_unprepare(nvavp->emc_clk);
@@ -1848,13 +2011,13 @@
struct nvavp_clientctx *clientctx;
int ret = 0;
- dev_dbg(&nvavp->nvhost_dev->dev, "%s: ++\n", __func__);
+ dev_err(&nvavp->nvhost_dev->dev, "%s: ++\n", __func__);
clientctx = kzalloc(sizeof(*clientctx), GFP_KERNEL);
if (!clientctx)
return -ENOMEM;
- pr_debug("tegra_nvavp_open channel_id (%d)\n", channel_id);
+ pr_err("tegra_nvavp_open channel_id (%d)\n", channel_id);
clientctx->channel_id = channel_id;
@@ -1883,7 +2046,7 @@
struct nvavp_clientctx *clientctx;
int ret = 0;
- pr_debug("tegra_nvavp_video_open NVAVP_VIDEO_CHANNEL\n");
+ pr_err("tegra_nvavp_video_open NVAVP_VIDEO_CHANNEL\n");
nonseekable_open(inode, filp);
@@ -1903,7 +2066,7 @@
struct nvavp_clientctx *clientctx;
int ret = 0;
- pr_debug("tegra_nvavp_audio_open NVAVP_AUDIO_CHANNEL\n");
+ pr_err("tegra_nvavp_audio_open NVAVP_AUDIO_CHANNEL\n");
nonseekable_open(inode, filp);
@@ -1936,7 +2099,7 @@
struct nvavp_info *nvavp = clientctx->nvavp;
int ret = 0;
- dev_dbg(&nvavp->nvhost_dev->dev, "%s: ++\n", __func__);
+ dev_err(&nvavp->nvhost_dev->dev, "%s: ++\n", __func__);
if (!nvavp->refcount) {
dev_err(&nvavp->nvhost_dev->dev,
@@ -2061,6 +2224,7 @@
return err;
}
+extern struct device tegra_vpr_dev;
static long tegra_nvavp_ioctl(struct file *filp, unsigned int cmd,
unsigned long arg)
{
@@ -2068,6 +2232,7 @@
struct nvavp_clock_args config;
int ret = 0;
u8 buf[NVAVP_IOCTL_CHANNEL_MAX_ARG_SIZE];
+ u32 floor_size;
if (_IOC_TYPE(cmd) != NVAVP_IOCTL_MAGIC ||
_IOC_NR(cmd) < NVAVP_IOCTL_MIN_NR ||
@@ -2126,6 +2291,15 @@
ret = copy_to_user((void __user *)arg, buf,
_IOC_SIZE(cmd));
break;
+ case NVAVP_IOCTL_VPR_FLOOR_SIZE:
+ if (copy_from_user(&floor_size, (void __user *)arg,
+ sizeof(floor_size))) {
+ ret = -EFAULT;
+ break;
+ }
+ ret = dma_set_resizable_heap_floor_size(&tegra_vpr_dev,
+ floor_size);
+ break;
default:
ret = -EINVAL;
break;
@@ -2426,6 +2600,12 @@
}
#endif
+#ifdef CONFIG_TEGRA_NVAVP_REF_DEBUG
+ INIT_LIST_HEAD(&nvavp->ref_list);
+ nvavp_debug_create(nvavp);
+#endif
+
ret = request_irq(irq, nvavp_mbox_pending_isr, 0,
TEGRA_NVAVP_NAME, nvavp);
if (ret) {
@@ -2531,7 +2711,9 @@
PM_QOS_CPU_FREQ_MIN_DEFAULT_VALUE);
pm_qos_remove_request(&nvavp->min_online_cpus_req);
}
-
+#ifdef CONFIG_TEGRA_NVAVP_REF_DEBUG
+ debugfs_remove_recursive(nvavp->dbg_dir);
+#endif
kfree(nvavp);
return 0;
}
@@ -2560,7 +2742,10 @@
ret = -EBUSY;
}
}
-
+#ifdef CONFIG_TEGRA_NVAVP_REF_DEBUG
+ if (ret == -EBUSY)
+ nvavp_ref_dump(nvavp);
+#endif
mutex_unlock(&nvavp->open_lock);
return ret;
@@ -2595,6 +2780,12 @@
tegra_nvavp_runtime_resume(dev);
+#ifdef CONFIG_TRUSTED_LITTLE_KERNEL
+ nvavp_clks_enable(nvavp);
+ te_restore_keyslots();
+ nvavp_clks_disable(nvavp);
+#endif
+
return 0;
}
diff --git a/drivers/media/platform/tegra/ov9760.c b/drivers/media/platform/tegra/ov9760.c
new file mode 100644
index 0000000..51c035b
--- /dev/null
+++ b/drivers/media/platform/tegra/ov9760.c
@@ -0,0 +1,907 @@
+/*
+ * ov9760.c - ov9760 sensor driver
+ *
+ * Copyright (c) 2013, NVIDIA CORPORATION, All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/delay.h>
+#include <linux/fs.h>
+#include <linux/i2c.h>
+#include <linux/clk.h>
+#include <linux/miscdevice.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include <media/ov9760.h>
+#include <linux/regulator/consumer.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/of_gpio.h>
+#include <media/nvc.h>
+#include <linux/gpio.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/debugfs.h>
+#include <linux/sysedp.h>
+
+#define SIZEOF_I2C_TRANSBUF 32
+
+struct ov9760_reg {
+ u16 addr;
+ u16 val;
+};
+
+struct ov9760_info {
+ struct miscdevice miscdev_info;
+ int mode;
+ struct ov9760_power_rail power;
+ struct i2c_client *i2c_client;
+ struct clk *mclk;
+ struct ov9760_platform_data *pdata;
+ u8 i2c_trans_buf[SIZEOF_I2C_TRANSBUF];
+ struct ov9760_sensordata sensor_data;
+ atomic_t in_use;
+#ifdef CONFIG_DEBUG_FS
+ struct dentry *debugfs_root;
+ u32 debug_i2c_offset;
+#endif
+ struct sysedp_consumer *sysedpc;
+};
+
+#define OV9760_TABLE_WAIT_MS 0
+#define OV9760_TABLE_END 1
+#define OV9760_MAX_RETRIES 3
+
+static struct ov9760_reg mode_1472x1104[] = {
+ {0x0103, 0x01},
+ {OV9760_TABLE_WAIT_MS, 10},
+ {0X0340, 0X04},
+ {0X0341, 0XCC},
+ {0X0342, 0X07},
+ {0X0343, 0X00},
+ {0X0344, 0X00},
+ {0X0345, 0X10},
+ {0X0346, 0X00},
+ {0X0347, 0X08},
+ {0X0348, 0X05},
+ {0X0349, 0Xd7},
+ {0X034a, 0X04},
+ {0X034b, 0X68},
+ {0X3811, 0X04},
+ {0X3813, 0X04},
+ {0X034c, 0X05},
+ {0X034d, 0XC0},
+ {0X034e, 0X04},
+ {0X034f, 0X50},
+ {0X0383, 0X01},
+ {0X0387, 0X01},
+ {0X3820, 0X00},
+ {0X3821, 0X00},
+ {0X3660, 0X80},
+ {0X3680, 0Xf4},
+ {0X0100, 0X00},
+ {0X0101, 0X01},
+ {0X3002, 0X80},
+ {0X3012, 0X08},
+ {0X3014, 0X04},
+ {0X3022, 0X02},
+ {0X3023, 0X0f},
+ {0X3080, 0X00},
+ {0X3090, 0X02},
+ {0X3091, 0X2c},
+ {0X3092, 0X02},
+ {0X3093, 0X02},
+ {0X3094, 0X00},
+ {0X3095, 0X00},
+ {0X3096, 0X01},
+ {0X3097, 0X00},
+ {0X3098, 0X04},
+ {0X3099, 0X14},
+ {0X309a, 0X03},
+ {0X309c, 0X00},
+ {0X309d, 0X00},
+ {0X309e, 0X01},
+ {0X309f, 0X00},
+ {0X30a2, 0X01},
+ {0X30b0, 0X0a},
+ {0X30b3, 0Xc8},
+ {0X30b4, 0X06},
+ {0X30b5, 0X00},
+ {0X3503, 0X17},
+ {0X3509, 0X00},
+ {0X3600, 0X7c},
+ {0X3621, 0Xb8},
+ {0X3622, 0X23},
+ {0X3631, 0Xe2},
+ {0X3634, 0X03},
+ {0X3662, 0X14},
+ {0X366b, 0X03},
+ {0X3682, 0X82},
+ {0X3705, 0X20},
+ {0X3708, 0X64},
+ {0X371b, 0X60},
+ {0X3732, 0X40},
+ {0X3745, 0X00},
+ {0X3746, 0X18},
+ {0X3780, 0X2a},
+ {0X3781, 0X8c},
+ {0X378f, 0Xf5},
+ {0X3823, 0X37},
+ {0X383d, 0X88},
+ {0X4000, 0X23},
+ {0X4001, 0X04},
+ {0X4002, 0X45},
+ {0X4004, 0X08},
+ {0X4005, 0X40},
+ {0X4006, 0X40},
+ {0X4009, 0X40},
+ {0X404F, 0X8F},
+ {0X4058, 0X44},
+ {0X4101, 0X32},
+ {0X4102, 0Xa4},
+ {0X4520, 0Xb0},
+ {0X4580, 0X08},
+ {0X4582, 0X00},
+ {0X4307, 0X30},
+ {0X4605, 0X00},
+ {0X4608, 0X02},
+ {0X4609, 0X00},
+ {0X4801, 0X0f},
+ {0X4819, 0XB6},
+ {0X4837, 0X19},
+ {0X4906, 0Xff},
+ {0X4d00, 0X04},
+ {0X4d01, 0X4b},
+ {0X4d02, 0Xfe},
+ {0X4d03, 0X09},
+ {0X4d04, 0X1e},
+ {0X4d05, 0Xb7},
+ {0X3500, 0X00},
+ {0X3501, 0X04},
+ {0X3502, 0Xc8},
+ {0X3503, 0X17},
+ {0X3a04, 0X04},
+ {0X3a05, 0Xa4},
+ {0X3a06, 0X00},
+ {0X3a07, 0Xf8},
+ {0X5781, 0X17},
+ {0X5792, 0X00},
+ {0X0100, 0X01},
+ {OV9760_TABLE_END, 0x0000}
+};
+
+enum {
+ OV9760_MODE_1472x1104,
+};
+
+static struct ov9760_reg *mode_table[] = {
+ [OV9760_MODE_1472x1104] = mode_1472x1104,
+};
+
+#define OV9760_ENTER_GROUP_HOLD(group_hold) \
+ do { \
+ if (group_hold) { \
+ reg_list[offset].addr = 0x104; \
+ reg_list[offset].val = 0x01;\
+ offset++; \
+ } \
+ } while (0)
+
+#define OV9760_LEAVE_GROUP_HOLD(group_hold) \
+ do { \
+ if (group_hold) { \
+ reg_list[offset].addr = 0x104; \
+ reg_list[offset].val = 0x00;\
+ offset++; \
+ } \
+ } while (0)
+
+static void ov9760_mclk_disable(struct ov9760_info *info)
+{
+ dev_dbg(&info->i2c_client->dev, "%s: disable MCLK\n", __func__);
+ clk_disable_unprepare(info->mclk);
+}
+
+static int ov9760_mclk_enable(struct ov9760_info *info)
+{
+ int err;
+ unsigned long mclk_init_rate = 24000000;
+
+ dev_dbg(&info->i2c_client->dev, "%s: enable MCLK with %lu Hz\n",
+ __func__, mclk_init_rate);
+
+ err = clk_set_rate(info->mclk, mclk_init_rate);
+ if (!err)
+ err = clk_prepare_enable(info->mclk);
+ return err;
+}
+
+static inline int ov9760_get_frame_length_regs(struct ov9760_reg *regs,
+ u32 frame_length)
+{
+ regs->addr = 0x0340;
+ regs->val = (frame_length >> 8) & 0xff;
+ (regs + 1)->addr = 0x0341;
+ (regs + 1)->val = (frame_length) & 0xff;
+ return 2;
+}
+
+static inline int ov9760_get_coarse_time_regs(struct ov9760_reg *regs,
+ u32 coarse_time)
+{
+ regs->addr = 0x3500;
+ regs->val = (coarse_time >> 12) & 0xff;
+ (regs + 1)->addr = 0x3501;
+ (regs + 1)->val = (coarse_time >> 4) & 0xff;
+ (regs + 2)->addr = 0x3502;
+ (regs + 2)->val = (coarse_time & 0xf) << 4;
+ return 3;
+}
+
+static inline int ov9760_get_gain_reg(struct ov9760_reg *regs, u16 gain)
+{
+ regs->addr = 0x350a;
+ regs->val = (gain >> 8) & 0xff;
+ (regs + 1)->addr = 0x350b;
+ (regs + 1)->val = (gain) & 0xff;
+ return 2;
+}
+
+static int ov9760_read_reg(struct i2c_client *client, u16 addr, u8 *val)
+{
+ int err;
+ struct i2c_msg msg[2];
+ unsigned char data[3];
+
+ if (!client->adapter)
+ return -ENODEV;
+
+ msg[0].addr = client->addr;
+ msg[0].flags = 0;
+ msg[0].len = 2;
+ msg[0].buf = data;
+
+ /* high byte goes out first */
+ data[0] = (u8) (addr >> 8);
+ data[1] = (u8) (addr & 0xff);
+
+ msg[1].addr = client->addr;
+ msg[1].flags = I2C_M_RD;
+ msg[1].len = 1;
+ msg[1].buf = data + 2;
+
+ err = i2c_transfer(client->adapter, msg, 2);
+
+ if (err != 2)
+ return -EINVAL;
+
+ *val = data[2];
+
+ return 0;
+}
+
+static int ov9760_write_reg(struct i2c_client *client, u16 addr, u8 val)
+{
+ int err;
+ struct i2c_msg msg;
+ unsigned char data[3];
+ int retry = 0;
+
+ if (!client->adapter)
+ return -ENODEV;
+ dev_dbg(&client->dev, "%s\n", __func__);
+
+ data[0] = (u8) (addr >> 8);
+ data[1] = (u8) (addr & 0xff);
+ data[2] = (u8) (val & 0xff);
+
+ msg.addr = client->addr;
+ msg.flags = 0;
+ msg.len = 3;
+ msg.buf = data;
+
+ do {
+ err = i2c_transfer(client->adapter, &msg, 1);
+ if (err == 1)
+ return 0;
+ retry++;
+ pr_err("ov9760: i2c transfer failed, retrying %x %x\n",
+ addr, val);
+
+ usleep_range(3000, 3100);
+ } while (retry <= OV9760_MAX_RETRIES);
+
+ return err;
+}
+
+static int ov9760_write_bulk_reg(struct i2c_client *client, u8 *data, int len)
+{
+ int err;
+ struct i2c_msg msg;
+
+ if (!client->adapter)
+ return -ENODEV;
+
+ msg.addr = client->addr;
+ msg.flags = 0;
+ msg.len = len;
+ msg.buf = data;
+
+ err = i2c_transfer(client->adapter, &msg, 1);
+ if (err == 1)
+ return 0;
+
+ pr_err("ov9760: i2c bulk transfer failed at %x\n",
+ (int)data[0] << 8 | data[1]);
+
+ return err;
+}
+
+static int ov9760_write_table(struct ov9760_info *info,
+ const struct ov9760_reg table[],
+ const struct ov9760_reg override_list[],
+ int num_override_regs)
+{
+ int err;
+ const struct ov9760_reg *next, *n_next;
+ u8 *b_ptr = info->i2c_trans_buf;
+ unsigned int buf_filled = 0;
+ unsigned int i;
+ u16 val;
+
+ for (next = table; next->addr != OV9760_TABLE_END; next++) {
+ if (next->addr == OV9760_TABLE_WAIT_MS) {
+ msleep(next->val);
+ continue;
+ }
+
+ val = next->val;
+ /* When an override list is passed in, replace the reg */
+ /* value to write if the reg is in the list */
+ if (override_list) {
+ for (i = 0; i < num_override_regs; i++) {
+ if (next->addr == override_list[i].addr) {
+ val = override_list[i].val;
+ break;
+ }
+ }
+ }
+
+ if (!buf_filled) {
+ b_ptr = info->i2c_trans_buf;
+ *b_ptr++ = next->addr >> 8;
+ *b_ptr++ = next->addr & 0xff;
+ buf_filled = 2;
+ }
+ *b_ptr++ = val;
+ buf_filled++;
+
+ n_next = next + 1;
+ if (n_next->addr != OV9760_TABLE_END &&
+ n_next->addr != OV9760_TABLE_WAIT_MS &&
+ buf_filled < SIZEOF_I2C_TRANSBUF &&
+ n_next->addr == next->addr + 1) {
+ continue;
+ }
+
+ err = ov9760_write_bulk_reg(info->i2c_client,
+ info->i2c_trans_buf, buf_filled);
+ if (err)
+ return err;
+ buf_filled = 0;
+ }
+ return 0;
+}
+
+static int ov9760_set_mode(struct ov9760_info *info, struct ov9760_mode *mode)
+{
+ int sensor_mode;
+ int err;
+ struct ov9760_reg reg_list[7];
+
+ sensor_mode = OV9760_MODE_1472x1104;
+ /* get a list of override regs for the asking frame length, */
+ /* coarse integration time, and gain. */
+ ov9760_get_frame_length_regs(reg_list, mode->frame_length);
+ ov9760_get_coarse_time_regs(reg_list + 2, mode->coarse_time);
+ ov9760_get_gain_reg(reg_list + 5, mode->gain);
+
+ err = ov9760_write_table(info, mode_table[sensor_mode],
+ reg_list, 7);
+ if (err)
+ return err;
+
+ info->mode = sensor_mode;
+ return 0;
+}
+
+static int ov9760_set_frame_length(struct ov9760_info *info, u32 frame_length)
+{
+ int ret;
+ struct ov9760_reg reg_list[2];
+ u8 *b_ptr = info->i2c_trans_buf;
+
+ ov9760_get_frame_length_regs(reg_list, frame_length);
+
+ *b_ptr++ = reg_list[0].addr >> 8;
+ *b_ptr++ = reg_list[0].addr & 0xff;
+ *b_ptr++ = reg_list[0].val & 0xff;
+ *b_ptr++ = reg_list[1].val & 0xff;
+ ret = ov9760_write_bulk_reg(info->i2c_client, info->i2c_trans_buf, 4);
+
+ return ret;
+}
+
+static int ov9760_set_coarse_time(struct ov9760_info *info, u32 coarse_time)
+{
+ int ret;
+ struct ov9760_reg reg_list[3];
+ u8 *b_ptr = info->i2c_trans_buf;
+
+ dev_info(&info->i2c_client->dev, "coarse_time 0x%x\n", coarse_time);
+ ov9760_get_coarse_time_regs(reg_list, coarse_time);
+
+ *b_ptr++ = reg_list[0].addr >> 8;
+ *b_ptr++ = reg_list[0].addr & 0xff;
+ *b_ptr++ = reg_list[0].val & 0xff;
+ *b_ptr++ = reg_list[1].val & 0xff;
+ *b_ptr++ = reg_list[2].val & 0xff;
+
+ ret = ov9760_write_bulk_reg(info->i2c_client, info->i2c_trans_buf, 5);
+ return ret;
+}
+
+static int ov9760_set_gain(struct ov9760_info *info, u16 gain)
+{
+ int ret;
+ struct ov9760_reg reg_list[2];
+ int i = 0;
+
+ ov9760_get_gain_reg(reg_list, gain);
+
+ for (i = 0; i < 2; i++) {
+ ret = ov9760_write_reg(info->i2c_client, reg_list[i].addr,
+ reg_list[i].val);
+ if (ret)
+ return ret;
+ }
+ return ret;
+}
+
+static int ov9760_set_group_hold(struct ov9760_info *info, struct ov9760_ae *ae)
+{
+ int err = 0;
+ struct ov9760_reg reg_list[12];
+ int offset = 0;
+ bool group_hold = true;
+
+ OV9760_ENTER_GROUP_HOLD(group_hold);
+ if (ae->gain_enable)
+ offset += ov9760_get_gain_reg(reg_list + offset,
+ ae->gain);
+ if (ae->frame_length_enable)
+ offset += ov9760_get_frame_length_regs(reg_list + offset,
+ ae->frame_length);
+ if (ae->coarse_time_enable)
+ offset += ov9760_get_coarse_time_regs(reg_list + offset,
+ ae->coarse_time);
+ OV9760_LEAVE_GROUP_HOLD(group_hold);
+
+ reg_list[offset].addr = OV9760_TABLE_END;
+ err |= ov9760_write_table(info, reg_list, NULL, 0);
+
+ return err;
+}
+
+static int ov9760_get_sensor_id(struct ov9760_info *info)
+{
+ int ret = 0;
+ int i;
+ u8 bak;
+
+ dev_info(&info->i2c_client->dev, "%s\n", __func__);
+ if (info->sensor_data.fuse_id_size)
+ return 0;
+
+ usleep_range(1000, 1100);
+ ov9760_write_reg(info->i2c_client, 0x0100, 0x01);
+ usleep_range(2000, 2500);
+ /* select bank 31 */
+ ov9760_write_reg(info->i2c_client, 0x3d84, 0xc0 & 31);
+ usleep_range(2000, 2500);
+ ov9760_write_reg(info->i2c_client, 0x3d81, 0x01);
+ usleep_range(2000, 2500);
+ for (i = 0; i < 8; i++) {
+ ret |= ov9760_read_reg(info->i2c_client, 0x3d00 + i, &bak);
+ info->sensor_data.fuse_id[i] = bak;
+ dev_info(&info->i2c_client->dev, "%s %x = %x\n",
+ __func__, 0x3d00 + i, bak);
+ }
+
+ if (!ret)
+ info->sensor_data.fuse_id_size = 8;
+
+ return ret;
+}
+
+static int ov9760_get_status(struct ov9760_info *info, u8 *status)
+{
+ int err;
+
+ *status = 0;
+ err = ov9760_read_reg(info->i2c_client, 0x002, status);
+ return err;
+}
+
+static long ov9760_ioctl(struct file *file,
+ unsigned int cmd, unsigned long arg)
+{
+ int err;
+ struct ov9760_info *info = file->private_data;
+
+ switch (_IOC_NR(cmd)) {
+ case _IOC_NR(OV9760_IOCTL_SET_MODE):
+ {
+ struct ov9760_mode mode;
+ if (copy_from_user(&mode,
+ (const void __user *)arg,
+ sizeof(struct ov9760_mode))) {
+ return -EFAULT;
+ }
+
+ return ov9760_set_mode(info, &mode);
+ }
+ case _IOC_NR(OV9760_IOCTL_SET_FRAME_LENGTH):
+ return ov9760_set_frame_length(info, (u32)arg);
+ case _IOC_NR(OV9760_IOCTL_SET_COARSE_TIME):
+ return ov9760_set_coarse_time(info, (u32)arg);
+ case _IOC_NR(OV9760_IOCTL_SET_GAIN):
+ return ov9760_set_gain(info, (u16)arg);
+ case _IOC_NR(OV9760_IOCTL_SET_GROUP_HOLD):
+ {
+ struct ov9760_ae ae;
+ if (copy_from_user(&ae,
+ (const void __user *)arg,
+ sizeof(struct ov9760_ae))) {
+ return -EFAULT;
+ }
+ return ov9760_set_group_hold(info, &ae);
+ }
+ case _IOC_NR(OV9760_IOCTL_GET_STATUS):
+ {
+ u8 status;
+
+ err = ov9760_get_status(info, &status);
+ if (err)
+ return err;
+ if (copy_to_user((void __user *)arg, &status,
+ sizeof(status))) {
+ return -EFAULT;
+ }
+ return 0;
+ }
+ case _IOC_NR(OV9760_IOCTL_GET_FUSEID):
+ {
+ err = ov9760_get_sensor_id(info);
+ if (err)
+ return err;
+ if (copy_to_user((void __user *)arg,
+ &info->sensor_data,
+ sizeof(struct ov9760_sensordata))) {
+ return -EFAULT;
+ }
+ return 0;
+ }
+ default:
+ return -EINVAL;
+ }
+ return 0;
+}
+
+static int ov9760_open(struct inode *inode, struct file *file)
+{
+ struct miscdevice *miscdev = file->private_data;
+ struct ov9760_info *info;
+
+ info = container_of(miscdev, struct ov9760_info, miscdev_info);
+ /* check if the device is in use */
+ if (atomic_xchg(&info->in_use, 1)) {
+ dev_info(&info->i2c_client->dev, "%s:BUSY!\n", __func__);
+ return -EBUSY;
+ }
+
+ file->private_data = info;
+
+ if (info->pdata && info->pdata->power_on) {
+ sysedp_set_state(info->sysedpc, 1);
+ ov9760_mclk_enable(info);
+ info->pdata->power_on(&info->power);
+ } else {
+ dev_err(&info->i2c_client->dev,
+ "%s:no valid power_on function.\n", __func__);
+ atomic_set(&info->in_use, 0);
+ return -EEXIST;
+ }
+
+ return 0;
+}
+
+int ov9760_release(struct inode *inode, struct file *file)
+{
+ struct ov9760_info *info = file->private_data;
+
+ if (info->pdata && info->pdata->power_off) {
+ ov9760_mclk_disable(info);
+ info->pdata->power_off(&info->power);
+ sysedp_set_state(info->sysedpc, 0);
+ }
+ file->private_data = NULL;
+
+ /* warn if device is already released */
+ WARN_ON(!atomic_xchg(&info->in_use, 0));
+ return 0;
+}
+
+
+static int ov9760_power_put(struct ov9760_power_rail *pw)
+{
+ if (likely(pw->avdd))
+ regulator_put(pw->avdd);
+
+ if (likely(pw->iovdd))
+ regulator_put(pw->iovdd);
+
+ pw->dvdd = NULL;
+ pw->avdd = NULL;
+ pw->iovdd = NULL;
+
+ return 0;
+}
+
+static const struct file_operations ov9760_fileops = {
+ .owner = THIS_MODULE,
+ .open = ov9760_open,
+ .unlocked_ioctl = ov9760_ioctl,
+#ifdef CONFIG_COMPAT
+ .compat_ioctl = ov9760_ioctl,
+#endif
+ .release = ov9760_release,
+};
+
+static struct miscdevice ov9760_device = {
+ .minor = MISC_DYNAMIC_MINOR,
+ .name = "ov9760",
+ .fops = &ov9760_fileops,
+};
+
+#ifdef CONFIG_DEBUG_FS
+static int ov9760_stats_show(struct seq_file *s, void *data)
+{
+ struct ov9760_info *info = (struct ov9760_info *)(s->private);
+
+ seq_printf(s, "%-20s : %-20s\n", "Name", "ov9760-debugfs-testing");
+ seq_printf(s, "%-20s : 0x%X\n", "Current i2c-offset Addr",
+ info->debug_i2c_offset);
+ seq_printf(s, "%-20s : 0x%X\n", "DC BLC Enabled",
+ info->debug_i2c_offset);
+ return 0;
+}
+
+static int ov9760_stats_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, ov9760_stats_show, inode->i_private);
+}
+
+static const struct file_operations ov9760_stats_fops = {
+ .open = ov9760_stats_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
+static int debug_i2c_offset_w(void *data, u64 val)
+{
+ struct ov9760_info *info = (struct ov9760_info *)(data);
+ dev_info(&info->i2c_client->dev,
+ "ov9760:%s setting i2c offset to 0x%X\n",
+ __func__, (u32)val);
+ info->debug_i2c_offset = (u32)val;
+ dev_info(&info->i2c_client->dev,
+ "ov9760:%s new i2c offset is 0x%X\n", __func__,
+ info->debug_i2c_offset);
+ return 0;
+}
+
+static int debug_i2c_offset_r(void *data, u64 *val)
+{
+ struct ov9760_info *info = (struct ov9760_info *)(data);
+ *val = (u64)info->debug_i2c_offset;
+ dev_info(&info->i2c_client->dev,
+ "ov9760:%s reading i2c offset is 0x%X\n", __func__,
+ info->debug_i2c_offset);
+ return 0;
+}
+
+static int debug_i2c_read(void *data, u64 *val)
+{
+ struct ov9760_info *info = (struct ov9760_info *)(data);
+ u8 temp1 = 0;
+ u8 temp2 = 0;
+ dev_info(&info->i2c_client->dev,
+ "ov9760:%s reading offset 0x%X\n", __func__,
+ info->debug_i2c_offset);
+ if (ov9760_read_reg(info->i2c_client,
+ info->debug_i2c_offset, &temp1)
+ || ov9760_read_reg(info->i2c_client,
+ info->debug_i2c_offset+1, &temp2)) {
+ dev_err(&info->i2c_client->dev,
+ "ov9760:%s failed\n", __func__);
+ return -EIO;
+ }
+ dev_info(&info->i2c_client->dev,
+ "ov9760:%s read value is 0x%X\n", __func__,
+ temp1<<8 | temp2);
+ *val = (u64)(temp1<<8 | temp2);
+ return 0;
+}
+
+static int debug_i2c_write(void *data, u64 val)
+{
+ struct ov9760_info *info = (struct ov9760_info *)(data);
+ dev_info(&info->i2c_client->dev,
+ "ov9760:%s writing 0x%X to offset 0x%X\n", __func__,
+ (u8)val, info->debug_i2c_offset);
+ if (ov9760_write_reg(info->i2c_client,
+ info->debug_i2c_offset, (u8)val)) {
+ dev_err(&info->i2c_client->dev, "ov9760:%s failed\n", __func__);
+ return -EIO;
+ }
+ return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(i2c_offset_fops, debug_i2c_offset_r,
+ debug_i2c_offset_w, "0x%llx\n");
+DEFINE_SIMPLE_ATTRIBUTE(i2c_read_fops, debug_i2c_read,
+ /*debug_i2c_dummy_w*/ NULL, "0x%llx\n");
+DEFINE_SIMPLE_ATTRIBUTE(i2c_write_fops, /*debug_i2c_dummy_r*/NULL,
+ debug_i2c_write, "0x%llx\n");
+
+static int ov9760_debug_init(struct ov9760_info *info)
+{
+ dev_dbg(&info->i2c_client->dev, "%s\n", __func__);
+
+ info->debugfs_root = debugfs_create_dir(ov9760_device.name, NULL);
+
+ if (!info->debugfs_root)
+ goto err_out;
+
+ if (!debugfs_create_file("stats", S_IRUGO,
+ info->debugfs_root, info, &ov9760_stats_fops))
+ goto err_out;
+
+ if (!debugfs_create_file("offset", S_IRUGO | S_IWUSR,
+ info->debugfs_root, info, &i2c_offset_fops))
+ goto err_out;
+
+ if (!debugfs_create_file("read", S_IRUGO,
+ info->debugfs_root, info, &i2c_read_fops))
+ goto err_out;
+
+ if (!debugfs_create_file("write", S_IWUSR,
+ info->debugfs_root, info, &i2c_write_fops))
+ goto err_out;
+
+ return 0;
+
+err_out:
+ dev_err(&info->i2c_client->dev, "ERROR: %s failed\n", __func__);
+ debugfs_remove_recursive(info->debugfs_root);
+ return -ENOMEM;
+}
+#endif
+
+static int ov9760_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ struct ov9760_info *info;
+ const char *mclk_name;
+ int err = 0;
+
+ pr_info("ov9760: probing sensor.\n");
+
+ info = devm_kzalloc(&client->dev,
+ sizeof(struct ov9760_info), GFP_KERNEL);
+ if (!info) {
+ pr_err("ov9760:%s:Unable to allocate memory!\n", __func__);
+ return -ENOMEM;
+ }
+
+ info->pdata = client->dev.platform_data;
+ if (!info->pdata) {
+ dev_err(&client->dev, "ov9760:%s: no platform data\n", __func__);
+ return -EINVAL;
+ }
+ info->i2c_client = client;
+ atomic_set(&info->in_use, 0);
+ info->mode = -1;
+
+ mclk_name = info->pdata->mclk_name ?
+ info->pdata->mclk_name : "default_mclk";
+ info->mclk = devm_clk_get(&client->dev, mclk_name);
+ if (IS_ERR(info->mclk)) {
+ dev_err(&client->dev, "%s: unable to get clock %s\n",
+ __func__, mclk_name);
+ return PTR_ERR(info->mclk);
+ }
+
+ memcpy(&info->miscdev_info,
+ &ov9760_device,
+ sizeof(struct miscdevice));
+
+ err = misc_register(&info->miscdev_info);
+ if (err) {
+ ov9760_power_put(&info->power);
+ pr_err("ov9760:%s:Unable to register misc device!\n",
+ __func__);
+ return err;
+ }
+
+ i2c_set_clientdata(client, info);
+
+#ifdef CONFIG_DEBUG_FS
+ ov9760_debug_init(info);
+#endif
+ info->sysedpc = sysedp_create_consumer("ov9760", "ov9760");
+
+ return err;
+}
+
+static int ov9760_remove(struct i2c_client *client)
+{
+ struct ov9760_info *info = i2c_get_clientdata(client);
+
+ ov9760_power_put(&info->power);
+ misc_deregister(&info->miscdev_info);
+ sysedp_free_consumer(info->sysedpc);
+#ifdef CONFIG_DEBUG_FS
+ debugfs_remove_recursive(info->debugfs_root);
+#endif
+ return 0;
+}
+
+static const struct i2c_device_id ov9760_id[] = {
+ { "ov9760", 0 },
+ { },
+};
+
+MODULE_DEVICE_TABLE(i2c, ov9760_id);
+
+static struct i2c_driver ov9760_i2c_driver = {
+ .driver = {
+ .name = "ov9760",
+ .owner = THIS_MODULE,
+ },
+ .probe = ov9760_probe,
+ .remove = ov9760_remove,
+ .id_table = ov9760_id,
+};
+
+static int __init ov9760_init(void)
+{
+ pr_info("ov9760 sensor driver loading\n");
+ return i2c_add_driver(&ov9760_i2c_driver);
+}
+
+static void __exit ov9760_exit(void)
+{
+ i2c_del_driver(&ov9760_i2c_driver);
+}
+
+module_init(ov9760_init);
+module_exit(ov9760_exit);
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/media/platform/tegra/tps61310.c b/drivers/media/platform/tegra/tps61310.c
new file mode 100644
index 0000000..6de8fa4
--- /dev/null
+++ b/drivers/media/platform/tegra/tps61310.c
@@ -0,0 +1,940 @@
+/*
+ * tps61310.c - tps61310 flash/torch kernel driver
+ *
+ * Copyright (c) 2014, NVIDIA Corporation. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
+ * 02111-1307, USA
+ */
+
+/* Implementation
+ * --------------
+ * The board level details about the device need to be provided in the board
+ * file with the tps61310_platform_data structure.
+ * Standard among NVC kernel drivers in this structure is:
+ * .cfg = Use the NVC_CFG_ defines that are in nvc_torch.h.
+ * Descriptions of the configuration options are with the defines.
+ * This value is typically 0.
+ * .num = The number of the instance of the device. This should start at 1
+ * and increment for each device on the board. This number will be
+ * appended to the MISC driver name, Example: /dev/tps61310.1
+ * .sync = If there is a need to synchronize two devices, then this value is
+ * the number of the device instance this device is allowed to sync to.
+ * This is typically used for stereo applications.
+ * .dev_name = The MISC driver name the device registers as. If not used,
+ * then the part number of the device is used for the driver name.
+ * If using the NVC user driver then use the name found in this
+ * driver under _default_pdata.
+ *
+ * The following is specific to NVC kernel flash/torch drivers:
+ * .pinstate = a pointer to the nvc_torch_pin_state structure. This
+ * structure gives the details of which VI GPIO to use to trigger
+ * the flash. The mask selects the pin and the values field gives the
+ * level. For example, if VI GPIO pin 6 is used, then
+ * .mask = 0x0040
+ * .values = 0x0040
+ * If VI GPIO pin 0 is used, then
+ * .mask = 0x0001
+ * .values = 0x0001
+ * This is typically just one pin but there is some legacy
+ * here that suggests more than one pin can be used.
+ * When the flash level is set, then the driver will return the
+ * value in values. When the flash level is off, the driver will
+ * return 0 for the values to deassert the signal.
+ * If a VI GPIO is not used, then the mask and values must be set
+ * to 0. The flash may then be triggered via I2C instead.
+ * However, a VI GPIO is strongly encouraged since it allows
+ * tighter timing with the picture taken as well as reduced power,
+ * since the trigger signal is asserted only when needed.
+ * .max_amp_torch = The maximum torch value allowed. The value is 0 to
+ * _MAX_TORCH_LEVEL. This allows limiting the amount of
+ * current used. If left blank then _MAX_TORCH_LEVEL will
+ * be used.
+ * .max_amp_flash = The maximum flash value allowed. The value is 0 to
+ * _MAX_FLASH_LEVEL. This allows limiting the amount of
+ * current used. If left blank then _MAX_FLASH_LEVEL will
+ * be used.
+ *
+ * The following is specific to only this NVC kernel flash/torch driver:
+ * N/A
+ *
+ * Power Requirements
+ * The board power file must contain the following labels for the power
+ * regulator(s) of this device:
+ * "vdd_i2c" = the power regulator for the I2C power.
+ * Note that this device is typically connected directly to the battery rail
+ * and does not need a source power regulator (vdd).
+ *
+ * The above values should be all that is needed to use the device with this
+ * driver. Modifications of this driver should not be needed.
+ */
+
+#include <linux/fs.h>
+#include <linux/i2c.h>
+#include <linux/miscdevice.h>
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include <linux/uaccess.h>
+#include <linux/list.h>
+#include <linux/regulator/consumer.h>
+#include <linux/module.h>
+#include <media/nvc.h>
+#include <media/camera.h>
+#include <media/tps61310.h>
+#include <linux/gpio.h>
+#include <linux/sysedp.h>
+#include <linux/backlight.h>
+
+#define STRB0 220 /*GPIO PBB4*/
+#define STRB1 181 /*GPIO PW5*/
+#define TPS61310_REG0 0x00
+#define TPS61310_REG1 0x01
+#define TPS61310_REG2 0x02
+#define TPS61310_REG3 0x03
+#define TPS61310_REG5 0x05
+#define tps61310_flash_cap_size (sizeof(tps61310_flash_cap.numberoflevels) \
+ + (sizeof(tps61310_flash_cap.levels[0]) \
+ * (TPS61310_MAX_FLASH_LEVEL + 1)))
+#define tps61310_torch_cap_size (sizeof(tps61310_torch_cap.numberoflevels) \
+ + (sizeof(tps61310_torch_cap.guidenum[0]) \
+ * (TPS61310_MAX_TORCH_LEVEL + 1)))
+
+#define SYSEDP_OFF_MODE 0
+#define SYSEDP_TORCH_MODE 1
+#define SYSEDP_FLASH_MODE 2
+
+static struct nvc_torch_flash_capabilities tps61310_flash_cap = {
+ TPS61310_MAX_FLASH_LEVEL + 1,
+ {
+ { 0, 0xFFFFFFFF, 0 },
+ { 50, 558, 2 },
+ { 100, 558, 2 },
+ { 150, 558, 2 },
+ { 200, 558, 2 },
+ { 250, 558, 2 },
+ { 300, 558, 2 },
+ { 350, 558, 2 },
+ { 400, 558, 2 },
+ { 450, 558, 2 },
+ { 500, 558, 2 },
+ { 550, 558, 2 },
+ { 600, 558, 2 },
+ { 650, 558, 2 },
+ { 700, 558, 2 },
+ { 750, 558, 2 },
+ }
+};
+
+static struct nvc_torch_torch_capabilities tps61310_torch_cap = {
+ TPS61310_MAX_TORCH_LEVEL + 1,
+ {
+ 0,
+ 25,
+ 50,
+ 75,
+ 100,
+ 125,
+ 150,
+ 175,
+ }
+};
+
+struct tps61310_info {
+ atomic_t in_use;
+ struct i2c_client *i2c_client;
+ struct tps61310_platform_data *pdata;
+ struct miscdevice miscdev;
+ struct list_head list;
+ int pwr_api;
+ int pwr_dev;
+ u8 s_mode;
+ struct tps61310_info *s_info;
+ struct sysedp_consumer *sysedpc;
+ char devname[16];
+};
+
+static struct nvc_torch_pin_state tps61310_default_pinstate = {
+ .mask = 0x0000,
+ .values = 0x0000,
+};
+
+static struct tps61310_platform_data tps61310_default_pdata = {
+ .cfg = 0,
+ .num = 0,
+ .sync = 0,
+ .dev_name = "torch",
+ .pinstate = &tps61310_default_pinstate,
+ .max_amp_torch = TPS61310_MAX_TORCH_LEVEL,
+ .max_amp_flash = TPS61310_MAX_FLASH_LEVEL,
+};
+
+static LIST_HEAD(tps61310_info_list);
+static DEFINE_SPINLOCK(tps61310_spinlock);
+
+
+static int tps61310_i2c_rd(struct tps61310_info *info, u8 reg, u8 *val)
+{
+ struct i2c_msg msg[2];
+ u8 buf[2];
+
+ buf[0] = reg;
+ msg[0].addr = info->i2c_client->addr;
+ msg[0].flags = 0;
+ msg[0].len = 1;
+ msg[0].buf = &buf[0];
+ msg[1].addr = info->i2c_client->addr;
+ msg[1].flags = I2C_M_RD;
+ msg[1].len = 1;
+ msg[1].buf = &buf[1];
+ *val = 0;
+ if (i2c_transfer(info->i2c_client->adapter, msg, 2) != 2)
+ return -EIO;
+
+ *val = buf[1];
+ return 0;
+}
+
+static int tps61310_i2c_wr(struct tps61310_info *info, u8 reg, u8 val)
+{
+ struct i2c_msg msg;
+ u8 buf[2];
+
+ buf[0] = reg;
+ buf[1] = val;
+ msg.addr = info->i2c_client->addr;
+ msg.flags = 0;
+ msg.len = 2;
+ msg.buf = &buf[0];
+ if (i2c_transfer(info->i2c_client->adapter, &msg, 1) != 1)
+ return -EIO;
+
+ return 0;
+}
+
+static int tps61310_pm_wr(struct tps61310_info *info, int pwr)
+{
+ int err = 0;
+ u8 reg;
+
+ if (pwr == info->pwr_dev)
+ return 0;
+
+ switch (pwr) {
+ case NVC_PWR_OFF:
+ if ((info->pdata->cfg & NVC_CFG_OFF2STDBY) ||
+ (info->pdata->cfg & NVC_CFG_BOOT_INIT)) {
+ pwr = NVC_PWR_STDBY;
+ } else {
+ err |= tps61310_i2c_wr(info, TPS61310_REG1, 0x00);
+ err |= tps61310_i2c_wr(info, TPS61310_REG2, 0x00);
+ break;
+ }
+ /* fall through */
+ case NVC_PWR_STDBY_OFF:
+ if ((info->pdata->cfg & NVC_CFG_OFF2STDBY) ||
+ (info->pdata->cfg & NVC_CFG_BOOT_INIT)) {
+ pwr = NVC_PWR_STDBY;
+ } else {
+ err |= tps61310_i2c_wr(info, TPS61310_REG1, 0x00);
+ err |= tps61310_i2c_wr(info, TPS61310_REG2, 0x00);
+ break;
+ }
+ /* fall through */
+ case NVC_PWR_STDBY:
+ err |= tps61310_i2c_rd(info, TPS61310_REG1, &reg);
+ reg &= 0x3F; /* 7:6 = mode */
+ err |= tps61310_i2c_wr(info, TPS61310_REG1, reg);
+ break;
+
+ case NVC_PWR_COMM:
+ case NVC_PWR_ON:
+ break;
+
+ default:
+ err = -EINVAL;
+ break;
+ }
+
+ if (err < 0) {
+ dev_err(&info->i2c_client->dev, "%s error\n", __func__);
+ pwr = NVC_PWR_ERR;
+ }
+ info->pwr_dev = pwr;
+ if (err > 0)
+ return 0;
+
+ return err;
+}
+
+static int tps61310_pm_wr_s(struct tps61310_info *info, int pwr)
+{
+ int err1 = 0;
+ int err2 = 0;
+
+ if ((info->s_mode == NVC_SYNC_OFF) ||
+ (info->s_mode == NVC_SYNC_MASTER) ||
+ (info->s_mode == NVC_SYNC_STEREO))
+ err1 = tps61310_pm_wr(info, pwr);
+ if ((info->s_mode == NVC_SYNC_SLAVE) ||
+ (info->s_mode == NVC_SYNC_STEREO))
+ err2 = tps61310_pm_wr(info->s_info, pwr);
+ return err1 | err2;
+}
+
+static int tps61310_pm_api_wr(struct tps61310_info *info, int pwr)
+{
+ int err = 0;
+
+ if (!pwr || (pwr > NVC_PWR_ON))
+ return 0;
+
+ if (pwr > info->pwr_dev)
+ err = tps61310_pm_wr_s(info, pwr);
+ if (!err)
+ info->pwr_api = pwr;
+ else
+ info->pwr_api = NVC_PWR_ERR;
+ if (info->pdata->cfg & NVC_CFG_NOERR)
+ return 0;
+
+ return err;
+}
+
+static int tps61310_pm_dev_wr(struct tps61310_info *info, int pwr)
+{
+ if (pwr < info->pwr_api)
+ pwr = info->pwr_api;
+ return tps61310_pm_wr(info, pwr);
+}
+
+static void tps61310_pm_exit(struct tps61310_info *info)
+{
+ tps61310_pm_wr_s(info, NVC_PWR_OFF);
+}
+
+struct tps61310_reg_init {
+ u8 mask;
+ u8 val;
+};
+
+static struct tps61310_reg_init tps61310_reg_init_id[] = {
+ {0xFF, 0x00},
+ {0xFF, 0x00},
+ {0xFF, 0x00},
+ {0xFF, 0xC0},
+};
+
+static int tps61310_param_rd(struct tps61310_info *info, long arg)
+{
+ struct nvc_param params;
+ struct nvc_torch_pin_state pinstate;
+ const void *data_ptr;
+ u8 reg;
+ u32 data_size = 0;
+ int err;
+
+#ifdef CONFIG_COMPAT
+ memset(&params, 0, sizeof(params));
+ if (copy_from_user(&params, (const void __user *)arg,
+ sizeof(struct nvc_param_32))) {
+#else
+ if (copy_from_user(&params,
+ (const void __user *)arg,
+ sizeof(struct nvc_param))) {
+#endif
+ dev_err(&info->i2c_client->dev, "%s %d copy_from_user err\n",
+ __func__, __LINE__);
+ return -EINVAL;
+ }
+
+ if (info->s_mode == NVC_SYNC_SLAVE)
+ info = info->s_info;
+ switch (params.param) {
+ case NVC_PARAM_FLASH_CAPS:
+ dev_dbg(&info->i2c_client->dev, "%s FLASH_CAPS\n", __func__);
+ data_ptr = &tps61310_flash_cap;
+ data_size = tps61310_flash_cap_size;
+ break;
+
+ case NVC_PARAM_FLASH_LEVEL:
+ tps61310_pm_dev_wr(info, NVC_PWR_COMM);
+ err = tps61310_i2c_rd(info, TPS61310_REG1, &reg);
+ tps61310_pm_dev_wr(info, NVC_PWR_OFF);
+ if (err < 0)
+ return err;
+
+ if (reg & 0x80) { /* 7:7 flash on/off */
+ reg = (reg & 0x1e) >> 1; /* 4:1 flash setting */
+ reg++; /* flash setting +1 if flash on */
+ } else {
+ reg = 0; /* flash is off */
+ }
+ dev_dbg(&info->i2c_client->dev, "%s FLASH_LEVEL: %u\n",
+ __func__,
+ (unsigned)tps61310_flash_cap.levels[reg].guidenum);
+ data_ptr = &tps61310_flash_cap.levels[reg].guidenum;
+ data_size = sizeof(tps61310_flash_cap.levels[reg].guidenum);
+ break;
+
+ case NVC_PARAM_TORCH_CAPS:
+ dev_dbg(&info->i2c_client->dev, "%s TORCH_CAPS\n", __func__);
+ data_ptr = &tps61310_torch_cap;
+ data_size = tps61310_torch_cap_size;
+ break;
+
+ case NVC_PARAM_TORCH_LEVEL:
+ tps61310_pm_dev_wr(info, NVC_PWR_COMM);
+ err = tps61310_i2c_rd(info, TPS61310_REG0, &reg);
+ tps61310_pm_dev_wr(info, NVC_PWR_OFF);
+ if (err < 0)
+ return err;
+
+ reg &= 0x07;
+ dev_dbg(&info->i2c_client->dev, "%s TORCH_LEVEL: %u\n",
+ __func__,
+ (unsigned)tps61310_torch_cap.guidenum[reg]);
+ data_ptr = &tps61310_torch_cap.guidenum[reg];
+ data_size = sizeof(tps61310_torch_cap.guidenum[reg]);
+ break;
+
+ case NVC_PARAM_FLASH_PIN_STATE:
+ pinstate.mask = info->pdata->pinstate->mask;
+ tps61310_pm_dev_wr(info, NVC_PWR_COMM);
+ err = tps61310_i2c_rd(info, TPS61310_REG1, &reg);
+ tps61310_pm_dev_wr(info, NVC_PWR_OFF);
+ dev_dbg(&info->i2c_client->dev, "%s REG1 0x%x\n", __func__, reg);
+ if (err < 0)
+ return err;
+
+ reg &= 0x80; /* 7:7=flash enable */
+ if (reg)
+ /* assert strobe */
+ pinstate.values = info->pdata->pinstate->values;
+ else
+ pinstate.values = 0; /* deassert strobe */
+ dev_dbg(&info->i2c_client->dev, "%s FLASH_PIN_STATE: %x&%x\n",
+ __func__, pinstate.mask, pinstate.values);
+ data_ptr = &pinstate;
+ data_size = sizeof(struct nvc_torch_pin_state);
+ break;
+
+ case NVC_PARAM_STEREO:
+ dev_dbg(&info->i2c_client->dev, "%s STEREO: %d\n",
+ __func__, (int)info->s_mode);
+ data_ptr = &info->s_mode;
+ data_size = sizeof(info->s_mode);
+ break;
+
+ default:
+ dev_err(&info->i2c_client->dev,
+ "%s unsupported parameter: %d\n",
+ __func__, params.param);
+ return -EINVAL;
+ }
+
+ if (params.sizeofvalue < data_size) {
+ dev_err(&info->i2c_client->dev,
+ "%s data size mismatch %d != %d\n",
+ __func__, params.sizeofvalue, data_size);
+ return -EINVAL;
+ }
+
+ if (copy_to_user(MAKE_USER_PTR(params.p_value),
+ data_ptr,
+ data_size)) {
+ dev_err(&info->i2c_client->dev,
+ "%s copy_to_user err line %d\n",
+ __func__, __LINE__);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int tps61310_param_wr_s(struct tps61310_info *info,
+ struct nvc_param *params,
+ u8 val)
+{
+ u8 reg;
+ int err = 0;
+ static u8 sysedp_state = SYSEDP_OFF_MODE;
+ static u8 sysedp_old_state = SYSEDP_OFF_MODE;
+ static u8 sysedp_bl_state = SYSEDP_OFF_MODE;
+ struct backlight_device *bd;
+
+ bd = get_backlight_device_by_name("tegra-dsi-backlight.0");
+ if (bd == NULL)
+ pr_err("%s: error getting backlight device!\n", __func__);
+ else if (bd->sysedpc == NULL)
+ pr_err("%s: backlight sysedpc is not initialized!\n", __func__);
+ /*
+ * 7:6 flash/torch mode
+ * 0 0 = off (power save)
+ * 0 1 = torch only (torch power is 2:0 REG0 where 0 = off)
+ * 1 0 = flash and torch (flash power is 2:0 REG1 (0 is a power level))
+ * 1 1 = N/A
+ * Note that 7:6 of REG1 and REG2 are shadowed with each other.
+ * In the code below we want to turn on/off one
+ * without affecting the other.
+ */
+ switch (params->param) {
+ case NVC_PARAM_FLASH_LEVEL:
+ dev_dbg(&info->i2c_client->dev, "%s FLASH_LEVEL: %d\n",
+ __func__, val);
+ tps61310_pm_dev_wr(info, NVC_PWR_ON);
+ err |= tps61310_i2c_wr(info, TPS61310_REG5, 0x6a);
+ gpio_set_value(STRB1, val ? 0 : 1);
+ if (val) {
+ val--;
+ sysedp_state = SYSEDP_FLASH_MODE;
+ if (val > tps61310_default_pdata.max_amp_flash)
+ val = tps61310_default_pdata.max_amp_flash;
+ /* Amp limit values are in the board-sensors file. */
+ if (info->pdata->max_amp_flash &&
+ (val > info->pdata->max_amp_flash))
+ val = info->pdata->max_amp_flash;
+ val = val << 1; /* 1:4 flash current*/
+ val |= 0x80; /* 7:7=flash mode */
+ } else {
+ sysedp_state = SYSEDP_OFF_MODE;
+ err = tps61310_i2c_rd(info, TPS61310_REG0, &reg);
+ if (reg & 0x07) /* 2:0=torch setting */
+ val = 0x40; /* 6:6 enable just torch */
+ }
+ gpio_set_value(STRB0, val ? 1 : 0);
+ if (sysedp_state != sysedp_old_state) {
+ /*
+ * Remove the backlight budget while the flash is
+ * enabled and restore it once the flash finishes.
+ */
+ if (bd && bd->sysedpc) {
+ if (sysedp_state == SYSEDP_FLASH_MODE) {
+ sysedp_bl_state = sysedp_get_state(bd->sysedpc);
+ sysedp_set_state(bd->sysedpc, SYSEDP_OFF_MODE);
+ } else {
+ sysedp_set_state(bd->sysedpc, sysedp_bl_state);
+ }
+ }
+ sysedp_set_state(info->sysedpc, sysedp_state);
+ }
+ sysedp_old_state = sysedp_state;
+ dev_dbg(&info->i2c_client->dev, "write reg1: 0x%x\n", val);
+ err |= tps61310_i2c_wr(info, TPS61310_REG1, val);
+ return err;
+
+ case NVC_PARAM_TORCH_LEVEL:
+ dev_dbg(&info->i2c_client->dev, "%s TORCH_LEVEL: %d\n",
+ __func__, val);
+ tps61310_pm_dev_wr(info, NVC_PWR_ON);
+ err |= tps61310_i2c_wr(info, TPS61310_REG5, 0x6a);
+ err |= tps61310_i2c_wr(info, TPS61310_REG0, val);
+ err |= tps61310_i2c_rd(info, TPS61310_REG1, &reg);
+ gpio_set_value(STRB1, val ? 1 : 0);
+ reg &= 0x80; /* 7:7=flash */
+ if (val) {
+ sysedp_state = SYSEDP_TORCH_MODE;
+ if (val > tps61310_default_pdata.max_amp_torch)
+ val = tps61310_default_pdata.max_amp_torch;
+ /* Amp limit values are in the board-sensors file. */
+ if (info->pdata->max_amp_torch &&
+ (val > info->pdata->max_amp_torch))
+ val = info->pdata->max_amp_torch;
+ if (!reg) /* test if flash/torch off */
+ val |= (0x40); /* 6:6=torch only mode */
+ } else {
+ sysedp_state = SYSEDP_OFF_MODE;
+ val |= reg;
+ }
+ if (sysedp_state != sysedp_old_state) {
+ sysedp_set_state(info->sysedpc, sysedp_state);
+ sysedp_old_state = sysedp_state;
+ }
+ err |= tps61310_i2c_wr(info, TPS61310_REG1, val);
+ val &= 0xC0; /* 7:6=mode */
+ if (!val) /* turn pwr off if no torch && no pwr_api */
+ tps61310_pm_dev_wr(info, NVC_PWR_OFF);
+ gpio_set_value(STRB0, 0);
+ return err;
+
+ case NVC_PARAM_FLASH_PIN_STATE:
+ dev_dbg(&info->i2c_client->dev, "%s FLASH_PIN_STATE: %d\n",
+ __func__, val);
+ return err;
+
+ default:
+ dev_err(&info->i2c_client->dev,
+ "%s unsupported parameter: %d\n",
+ __func__, params->param);
+ return -EINVAL;
+ }
+}
+
+static int tps61310_param_wr(struct tps61310_info *info, long arg)
+{
+ struct nvc_param params;
+ u8 val;
+ int err = 0;
+
+#ifdef CONFIG_COMPAT
+ memset(&params, 0, sizeof(params));
+ if (copy_from_user(&params, (const void __user *)arg,
+ sizeof(struct nvc_param_32))) {
+#else
+ if (copy_from_user(&params, (const void __user *)arg,
+ sizeof(struct nvc_param))) {
+#endif
+ dev_err(&info->i2c_client->dev, "%s %d copy_from_user err\n",
+ __func__, __LINE__);
+ return -EINVAL;
+ }
+
+ if (copy_from_user(&val, MAKE_CONSTUSER_PTR(params.p_value),
+ sizeof(val))) {
+ dev_err(&info->i2c_client->dev, "%s %d copy_from_user err\n",
+ __func__, __LINE__);
+ return -EINVAL;
+ }
+
+ /* parameters independent of sync mode */
+ switch (params.param) {
+ case NVC_PARAM_STEREO:
+ dev_dbg(&info->i2c_client->dev, "%s STEREO: %d\n",
+ __func__, (int)val);
+ if (val == info->s_mode)
+ return 0;
+
+ switch (val) {
+ case NVC_SYNC_OFF:
+ info->s_mode = val;
+ if (info->s_info != NULL) {
+ info->s_info->s_mode = val;
+ tps61310_pm_wr(info->s_info, NVC_PWR_OFF);
+ }
+ break;
+
+ case NVC_SYNC_MASTER:
+ info->s_mode = val;
+ if (info->s_info != NULL)
+ info->s_info->s_mode = val;
+ break;
+
+ case NVC_SYNC_SLAVE:
+ case NVC_SYNC_STEREO:
+ if (info->s_info != NULL) {
+ /* sync power */
+ info->s_info->pwr_api = info->pwr_api;
+ err = tps61310_pm_wr(info->s_info,
+ info->pwr_dev);
+ if (!err) {
+ info->s_mode = val;
+ info->s_info->s_mode = val;
+ } else {
+ tps61310_pm_wr(info->s_info,
+ NVC_PWR_OFF);
+ err = -EIO;
+ }
+ } else {
+ err = -EINVAL;
+ }
+ break;
+
+ default:
+ err = -EINVAL;
+ }
+ if (info->pdata->cfg & NVC_CFG_NOERR)
+ return 0;
+
+ return err;
+
+ default:
+ /* parameters dependent on sync mode */
+ switch (info->s_mode) {
+ case NVC_SYNC_OFF:
+ case NVC_SYNC_MASTER:
+ return tps61310_param_wr_s(info, &params, val);
+
+ case NVC_SYNC_SLAVE:
+ return tps61310_param_wr_s(info->s_info,
+ &params,
+ val);
+
+ case NVC_SYNC_STEREO:
+ err = tps61310_param_wr_s(info, &params, val);
+ if (!(info->pdata->cfg & NVC_CFG_SYNC_I2C_MUX))
+ err |= tps61310_param_wr_s(info->s_info,
+ &params,
+ val);
+ return err;
+
+ default:
+ dev_err(&info->i2c_client->dev, "%s %d internal err\n",
+ __func__, __LINE__);
+ return -EINVAL;
+ }
+ }
+}
+
+static long tps61310_ioctl(struct file *file,
+ unsigned int cmd,
+ unsigned long arg)
+{
+ struct tps61310_info *info = file->private_data;
+ int pwr;
+
+ switch (cmd) {
+ case NVC_IOCTL_PARAM_WR:
+#ifdef CONFIG_COMPAT
+ case NVC_IOCTL_32_PARAM_WR:
+#endif
+ return tps61310_param_wr(info, arg);
+
+ case NVC_IOCTL_PARAM_RD:
+#ifdef CONFIG_COMPAT
+ case NVC_IOCTL_32_PARAM_RD:
+#endif
+ return tps61310_param_rd(info, arg);
+
+ case NVC_IOCTL_PWR_WR:
+ /* This is a Guaranteed Level of Service (GLOS) call */
+ pwr = (int)arg * 2;
+ dev_dbg(&info->i2c_client->dev, "%s PWR_WR: %d\n",
+ __func__, pwr);
+ return tps61310_pm_api_wr(info, pwr);
+
+ case NVC_IOCTL_PWR_RD:
+ if (info->s_mode == NVC_SYNC_SLAVE)
+ pwr = info->s_info->pwr_api / 2;
+ else
+ pwr = info->pwr_api / 2;
+ dev_dbg(&info->i2c_client->dev, "%s PWR_RD: %d\n",
+ __func__, pwr);
+ if (copy_to_user((void __user *)arg, (const void *)&pwr,
+ sizeof(pwr))) {
+ dev_err(&info->i2c_client->dev,
+ "%s copy_to_user err line %d\n",
+ __func__, __LINE__);
+ return -EFAULT;
+ }
+ return 0;
+
+ default:
+ dev_err(&info->i2c_client->dev, "%s unsupported ioctl: %x\n",
+ __func__, cmd);
+ return -EINVAL;
+ }
+}
+
+static int tps61310_sync_en(int dev1, int dev2)
+{
+ struct tps61310_info *sync1 = NULL;
+ struct tps61310_info *sync2 = NULL;
+ struct tps61310_info *pos = NULL;
+
+ rcu_read_lock();
+ list_for_each_entry_rcu(pos, &tps61310_info_list, list) {
+ if (pos->pdata->num == dev1) {
+ sync1 = pos;
+ break;
+ }
+ }
+ pos = NULL;
+ list_for_each_entry_rcu(pos, &tps61310_info_list, list) {
+ if (pos->pdata->num == dev2) {
+ sync2 = pos;
+ break;
+ }
+ }
+ rcu_read_unlock();
+ if (sync1 != NULL)
+ sync1->s_info = NULL;
+ if (sync2 != NULL)
+ sync2->s_info = NULL;
+ if (!dev1 && !dev2)
+ return 0; /* no err if default instance 0's used */
+
+ if (dev1 == dev2)
+ return -EINVAL; /* err if sync instance is itself */
+
+ if ((sync1 != NULL) && (sync2 != NULL)) {
+ sync1->s_info = sync2;
+ sync2->s_info = sync1;
+ }
+ return 0;
+}
+
+static int tps61310_sync_dis(struct tps61310_info *info)
+{
+ if (info->s_info != NULL) {
+ info->s_info->s_mode = 0;
+ info->s_info->s_info = NULL;
+ info->s_mode = 0;
+ info->s_info = NULL;
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static int tps61310_open(struct inode *inode, struct file *file)
+{
+ struct tps61310_info *info = NULL;
+ struct tps61310_info *pos = NULL;
+ int err;
+
+ rcu_read_lock();
+ list_for_each_entry_rcu(pos, &tps61310_info_list, list) {
+ if (pos->miscdev.minor == iminor(inode)) {
+ info = pos;
+ break;
+ }
+ }
+ rcu_read_unlock();
+ if (!info)
+ return -ENODEV;
+
+ err = tps61310_sync_en(info->pdata->num, info->pdata->sync);
+ if (err == -EINVAL)
+ dev_err(&info->i2c_client->dev,
+ "%s err: invalid num (%u) and sync (%u) instance\n",
+ __func__, info->pdata->num, info->pdata->sync);
+ if (atomic_xchg(&info->in_use, 1))
+ return -EBUSY;
+
+ if (info->s_info != NULL) {
+ if (atomic_xchg(&info->s_info->in_use, 1))
+ return -EBUSY;
+ }
+
+ file->private_data = info;
+ dev_dbg(&info->i2c_client->dev, "%s\n", __func__);
+ return 0;
+}
+
+static int tps61310_release(struct inode *inode, struct file *file)
+{
+ struct tps61310_info *info = file->private_data;
+
+ dev_dbg(&info->i2c_client->dev, "%s\n", __func__);
+ gpio_set_value(STRB0, 0);
+ gpio_set_value(STRB1, 1);
+ tps61310_pm_wr_s(info, NVC_PWR_OFF);
+ file->private_data = NULL;
+ WARN_ON(!atomic_xchg(&info->in_use, 0));
+ if (info->s_info != NULL)
+ WARN_ON(!atomic_xchg(&info->s_info->in_use, 0));
+ tps61310_sync_dis(info);
+ return 0;
+}
+
+static const struct file_operations tps61310_fileops = {
+ .owner = THIS_MODULE,
+ .open = tps61310_open,
+#ifdef CONFIG_COMPAT
+ .compat_ioctl = tps61310_ioctl,
+#endif
+ .unlocked_ioctl = tps61310_ioctl,
+ .release = tps61310_release,
+};
+
+static void tps61310_del(struct tps61310_info *info)
+{
+ tps61310_pm_exit(info);
+ tps61310_sync_dis(info);
+ spin_lock(&tps61310_spinlock);
+ list_del_rcu(&info->list);
+ spin_unlock(&tps61310_spinlock);
+ synchronize_rcu();
+}
+
+static int tps61310_remove(struct i2c_client *client)
+{
+ struct tps61310_info *info = i2c_get_clientdata(client);
+
+ dev_dbg(&info->i2c_client->dev, "%s\n", __func__);
+ misc_deregister(&info->miscdev);
+ sysedp_free_consumer(info->sysedpc);
+ tps61310_del(info);
+ return 0;
+}
+
+static int tps61310_probe(
+ struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ struct tps61310_info *info;
+ int err;
+
+ dev_dbg(&client->dev, "%s\n", __func__);
+ info = devm_kzalloc(&client->dev, sizeof(*info), GFP_KERNEL);
+ if (info == NULL) {
+ dev_err(&client->dev, "%s: kzalloc error\n", __func__);
+ return -ENOMEM;
+ }
+
+ info->i2c_client = client;
+ if (client->dev.platform_data) {
+ info->pdata = client->dev.platform_data;
+ } else {
+ info->pdata = &tps61310_default_pdata;
+ dev_dbg(&client->dev,
+ "%s No platform data. Using defaults.\n",
+ __func__);
+ }
+ i2c_set_clientdata(client, info);
+ INIT_LIST_HEAD(&info->list);
+ spin_lock(&tps61310_spinlock);
+ list_add_rcu(&info->list, &tps61310_info_list);
+ spin_unlock(&tps61310_spinlock);
+
+ if (info->pdata->dev_name)
+ strncpy(info->devname, info->pdata->dev_name,
+ sizeof(info->devname) - 1);
+ else
+ strncpy(info->devname, "tps61310", sizeof(info->devname) - 1);
+
+ if (info->pdata->num) {
+ size_t len = strlen(info->devname);
+
+ /* append past the base name; passing devname as both the
+ * destination and a source of snprintf() is undefined */
+ snprintf(info->devname + len, sizeof(info->devname) - len,
+ ".%u", info->pdata->num);
+ }
+
+ info->miscdev.name = info->devname;
+ info->miscdev.fops = &tps61310_fileops;
+ info->miscdev.minor = MISC_DYNAMIC_MINOR;
+ if (misc_register(&info->miscdev)) {
+ dev_err(&client->dev, "%s unable to register misc device %s\n",
+ __func__, info->devname);
+ tps61310_del(info);
+ return -ENODEV;
+ }
+
+ info->sysedpc = sysedp_create_consumer("tps61310", "tps61310");
+ return 0;
+}
+
+static const struct i2c_device_id tps61310_id[] = {
+ { "tps61310", 0 },
+ { },
+};
+
+MODULE_DEVICE_TABLE(i2c, tps61310_id);
+
+static struct i2c_driver tps61310_i2c_driver = {
+ .driver = {
+ .name = "tps61310",
+ .owner = THIS_MODULE,
+ },
+ .id_table = tps61310_id,
+ .probe = tps61310_probe,
+ .remove = tps61310_remove,
+};
+
+module_i2c_driver(tps61310_i2c_driver);
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/mfd/palmas.c b/drivers/mfd/palmas.c
index 9ad2213..c1b4afc 100644
--- a/drivers/mfd/palmas.c
+++ b/drivers/mfd/palmas.c
@@ -72,6 +72,7 @@
PALMAS_PM_ID,
PALMAS_THERM_ID,
PALMAS_LDOUSB_IN_ID,
+ PALMAS_VOLTAGE_MONITOR_ID,
};
static struct resource palmas_rtc_resources[] = {
@@ -90,7 +91,7 @@
BIT(PALMAS_CLK_ID) | BIT(PALMAS_PWM_ID) | \
BIT(PALMAS_USB_ID) | BIT(PALMAS_EXTCON_ID) | \
BIT(PALMAS_PM_ID) | BIT(PALMAS_THERM_ID) | \
- BIT(PALMAS_LDOUSB_IN_ID))
+ BIT(PALMAS_LDOUSB_IN_ID) | BIT(PALMAS_VOLTAGE_MONITOR_ID))
#define TPS80036_SUB_MODULE (TPS65913_SUB_MODULE | \
BIT(PALMAS_BATTERY_GAUGE_ID) | BIT(PALMAS_CHARGER_ID) | \
@@ -187,6 +188,10 @@
.name = "palmas-ldousb-in",
.id = PALMAS_LDOUSB_IN_ID,
},
+ {
+ .name = "palmas-voltage-monitor",
+ .id = PALMAS_VOLTAGE_MONITOR_ID,
+ },
};
static bool is_volatile_palmas_func_reg(struct device *dev, unsigned int reg)
@@ -1254,6 +1259,11 @@
children[PALMAS_CLK_ID].platform_data = pdata->clk_pdata;
children[PALMAS_CLK_ID].pdata_size = sizeof(*pdata->clk_pdata);
+ children[PALMAS_VOLTAGE_MONITOR_ID].platform_data =
+ pdata->voltage_monitor_pdata;
+ children[PALMAS_VOLTAGE_MONITOR_ID].pdata_size =
+ sizeof(*pdata->voltage_monitor_pdata);
+
ret = mfd_add_devices(palmas->dev, -1,
children, child_count,
NULL, palmas->irq_chip_data->irq_base,
diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index 8bb8e6b..dcd692c 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -659,4 +659,6 @@
source "drivers/misc/issp/Kconfig"
source "drivers/misc/tegra-profiler/Kconfig"
source "drivers/misc/gps/Kconfig"
+source "drivers/misc/headset/Kconfig"
+source "drivers/misc/qcom-mdm-9k/Kconfig"
endmenu
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index ce7e4c1..fd7b22b 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -80,9 +80,11 @@
obj-y += issp/
obj-$(CONFIG_TEGRA_PROFILER) += tegra-profiler/
obj-$(CONFIG_MTK_GPS) += gps/
+obj-$(CONFIG_HTC_HEADSET_MGR) += headset/
obj-y += tegra-fuse/
obj-$(CONFIG_DENVER_CPU) += force_idle_t132.o
obj-$(CONFIG_ARCH_TEGRA) +=tegra_timerinfo.o
obj-$(CONFIG_MODS) += mods/
obj-$(CONFIG_DENVER_CPU) += idle_test_t132.o
+obj-$(CONFIG_QCT_9K_MODEM) += qcom-mdm-9k/
obj-$(CONFIG_UID_CPUTIME) += uid_cputime.o
diff --git a/drivers/misc/headset/Kconfig b/drivers/misc/headset/Kconfig
new file mode 100644
index 0000000..4ef0210
--- /dev/null
+++ b/drivers/misc/headset/Kconfig
@@ -0,0 +1,44 @@
+# Copyright Statement:
+#
+# HTC headset driver configuration
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# Copyright MediaTek Corporation, 2010
+#
+# Authors: Hua Fu <Hua.Fu@mediatek.com>
+# Yongqing Han <yongqing.han@mediatek.com>
+
+config HTC_HEADSET_MGR
+ tristate "HTC headset manager driver"
+ default n
+ help
+ Provides support for the HTC headset manager.
+
+config HTC_HEADSET_PMIC
+ tristate "HTC PMIC headset detection driver"
+ depends on HTC_HEADSET_MGR
+ default n
+ help
+ Provides support for HTC PMIC headset detection.
+
+config HTC_HEADSET_ONE_WIRE
+ tristate "HTC 1-wire headset detection driver"
+ depends on HTC_HEADSET_MGR
+ default n
+ help
+ Provides support for HTC 1-wire headset detection.
+
+config HEADSET_DEBUG_UART
+ tristate "Headset Debug UART"
+ default n
+ help
+ Provides support for the headset debug UART.
diff --git a/drivers/misc/headset/Makefile b/drivers/misc/headset/Makefile
new file mode 100644
index 0000000..1cccb24
--- /dev/null
+++ b/drivers/misc/headset/Makefile
@@ -0,0 +1,20 @@
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+#
+
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+# more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>
+
+obj-$(CONFIG_HTC_HEADSET_MGR) += htc_headset_mgr.o
+obj-$(CONFIG_HTC_HEADSET_PMIC) += htc_headset_pmic.o
+obj-$(CONFIG_HTC_HEADSET_ONE_WIRE) += htc_headset_one_wire.o
+
+# EOF
diff --git a/drivers/misc/headset/htc_headset_mgr.c b/drivers/misc/headset/htc_headset_mgr.c
new file mode 100644
index 0000000..56a9386
--- /dev/null
+++ b/drivers/misc/headset/htc_headset_mgr.c
@@ -0,0 +1,2258 @@
+/*
+ *
+ * drivers/misc/headset/htc_headset_mgr.c
+ *
+ * HTC headset manager driver.
+ *
+ * Copyright (C) 2010 HTC, Inc.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/delay.h>
+#include <linux/module.h>
+#include <linux/gpio.h>
+#include <linux/gpio_event.h>
+#include <linux/rtc.h>
+#include <linux/slab.h>
+
+#include <linux/htc_headset_mgr.h>
+
+#define DRIVER_NAME "HS_MGR"
+
+static struct workqueue_struct *detect_wq;
+static void insert_detect_work_func(struct work_struct *work);
+static DECLARE_DELAYED_WORK(insert_detect_work, insert_detect_work_func);
+static void remove_detect_work_func(struct work_struct *work);
+static DECLARE_DELAYED_WORK(remove_detect_work, remove_detect_work_func);
+static void mic_detect_work_func(struct work_struct *work);
+static DECLARE_DELAYED_WORK(mic_detect_work, mic_detect_work_func);
+
+static struct workqueue_struct *button_wq;
+static void button_35mm_work_func(struct work_struct *work);
+static DECLARE_DELAYED_WORK(button_35mm_work, button_35mm_work_func);
+
+static void button_1wire_work_func(struct work_struct *work);
+static DECLARE_DELAYED_WORK(button_1wire_work, button_1wire_work_func);
+
+static struct workqueue_struct *debug_wq;
+static void debug_work_func(struct work_struct *work);
+static DECLARE_WORK(debug_work, debug_work_func);
+
+#ifdef HTC_HEADSET_CONFIG_MSM_RPC
+static int hs_mgr_rpc_call(struct msm_rpc_server *server,
+ struct rpc_request_hdr *req, unsigned len);
+
+static struct msm_rpc_server hs_rpc_server = {
+ .prog = HS_RPC_SERVER_PROG,
+ .vers = HS_RPC_SERVER_VERS,
+ .rpc_call = hs_mgr_rpc_call,
+};
+#endif
+
+struct button_work {
+ struct delayed_work key_work;
+ int key_code;
+};
+
+static struct htc_headset_mgr_info *hi;
+static struct hs_notifier_func hs_mgr_notifier;
+
+static int hpin_report = 0,
+ hpin_bounce = 0,
+ key_report = 0,
+ key_bounce = 0;
+
+static void init_next_driver(void)
+{
+ int i = hi->driver_init_seq;
+
+ if (!hi->pdata.headset_devices_num)
+ return;
+
+ if (i < hi->pdata.headset_devices_num) {
+ hi->driver_init_seq++;
+ platform_device_register(hi->pdata.headset_devices[i]);
+ }
+}
+
+int hs_debug_log_state(void)
+{
+ return (hi->debug_flag & DEBUG_FLAG_LOG) ? 1 : 0;
+}
+
+void hs_notify_driver_ready(char *name)
+{
+ HS_LOG("%s ready", name);
+ init_next_driver();
+}
+
+void hs_notify_hpin_irq(void)
+{
+ hi->hpin_jiffies = jiffies;
+/* HS_LOG("HPIN IRQ");*/
+ hpin_bounce++;
+}
+
+struct class *hs_get_attribute_class(void)
+{
+ return hi->htc_accessory_class;
+}
+
+int hs_hpin_stable(void)
+{
+ unsigned long last_hpin_jiffies = 0;
+ unsigned long unstable_jiffies = HZ * 6 / 5; /* 1.2 sec; avoid FP math in kernel code */
+
+ HS_DBG();
+
+ last_hpin_jiffies = hi->hpin_jiffies;
+
+ if (time_before_eq(jiffies, last_hpin_jiffies + unstable_jiffies))
+ return 0;
+
+ return 1;
+}
+
+int get_mic_state(void)
+{
+ HS_DBG();
+
+ switch (hi->hs_35mm_type) {
+ case HEADSET_MIC:
+ case HEADSET_METRICO:
+ case HEADSET_BEATS:
+ case HEADSET_BEATS_SOLO:
+ return 1;
+ default:
+ break;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(get_mic_state);
+
+static void update_mic_status(int count)
+{
+ HS_DBG();
+
+ if (hi->is_ext_insert) {
+ HS_LOG("Start MIC status polling (%d)", count);
+ cancel_delayed_work_sync(&mic_detect_work);
+ hi->mic_detect_counter = count;
+ queue_delayed_work(detect_wq, &mic_detect_work,
+ HS_JIFFIES_MIC_DETECT);
+ }
+}
+
+static void headset_notifier_update(int id)
+{
+ if (!hi) {
+ HS_LOG("HS_MGR driver is not ready");
+ return;
+ }
+
+ switch (id) {
+ case HEADSET_REG_HPIN_GPIO:
+ break;
+ case HEADSET_REG_REMOTE_ADC:
+ update_mic_status(HS_DEF_MIC_DETECT_COUNT);
+ break;
+ case HEADSET_REG_REMOTE_KEYCODE:
+ case HEADSET_REG_RPC_KEY:
+ break;
+ case HEADSET_REG_MIC_STATUS:
+ update_mic_status(HS_DEF_MIC_DETECT_COUNT);
+ break;
+ case HEADSET_REG_MIC_BIAS:
+ if (!hi->pdata.headset_power &&
+ hi->hs_35mm_type != HEADSET_UNPLUG) {
+ hs_mgr_notifier.mic_bias_enable(1);
+ hi->mic_bias_state = 1;
+ msleep(HS_DELAY_MIC_BIAS);
+ update_mic_status(HS_DEF_MIC_DETECT_COUNT);
+ }
+ break;
+ case HEADSET_REG_MIC_SELECT:
+ case HEADSET_REG_KEY_INT_ENABLE:
+ case HEADSET_REG_KEY_ENABLE:
+ case HEADSET_REG_INDICATOR_ENABLE:
+ break;
+ default:
+ break;
+ }
+}
+
+int headset_notifier_register(struct headset_notifier *notifier)
+{
+ if (!notifier->func) {
+ HS_LOG("NULL register function");
+ return 0;
+ }
+
+ switch (notifier->id) {
+ case HEADSET_REG_HPIN_GPIO:
+ HS_LOG("Register HPIN_GPIO notifier");
+ hs_mgr_notifier.hpin_gpio = notifier->func;
+ break;
+ case HEADSET_REG_REMOTE_ADC:
+ HS_LOG("Register REMOTE_ADC notifier");
+ hs_mgr_notifier.remote_adc = notifier->func;
+ break;
+ case HEADSET_REG_REMOTE_KEYCODE:
+ HS_LOG("Register REMOTE_KEYCODE notifier");
+ hs_mgr_notifier.remote_keycode = notifier->func;
+ break;
+ case HEADSET_REG_RPC_KEY:
+ HS_LOG("Register RPC_KEY notifier");
+ hs_mgr_notifier.rpc_key = notifier->func;
+ break;
+ case HEADSET_REG_MIC_STATUS:
+ HS_LOG("Register MIC_STATUS notifier");
+ hs_mgr_notifier.mic_status = notifier->func;
+ break;
+ case HEADSET_REG_MIC_BIAS:
+ HS_LOG("Register MIC_BIAS notifier");
+ hs_mgr_notifier.mic_bias_enable = notifier->func;
+ break;
+ case HEADSET_REG_MIC_SELECT:
+ HS_LOG("Register MIC_SELECT notifier");
+ hs_mgr_notifier.mic_select = notifier->func;
+ break;
+ case HEADSET_REG_KEY_INT_ENABLE:
+ HS_LOG("Register KEY_INT_ENABLE notifier");
+ hs_mgr_notifier.key_int_enable = notifier->func;
+ break;
+ case HEADSET_REG_KEY_ENABLE:
+ HS_LOG("Register KEY_ENABLE notifier");
+ hs_mgr_notifier.key_enable = notifier->func;
+ break;
+ case HEADSET_REG_INDICATOR_ENABLE:
+ HS_LOG("Register INDICATOR_ENABLE notifier");
+ hs_mgr_notifier.indicator_enable = notifier->func;
+ break;
+ case HEADSET_REG_UART_SET:
+ HS_LOG("Register UART_SET notifier");
+ hs_mgr_notifier.uart_set = notifier->func;
+ break;
+ case HEADSET_REG_1WIRE_INIT:
+ HS_LOG("Register 1WIRE_INIT notifier");
+ hs_mgr_notifier.hs_1wire_init = notifier->func;
+ hi->driver_one_wire_exist = 1;
+ break;
+ case HEADSET_REG_1WIRE_QUERY:
+ HS_LOG("Register 1WIRE_QUERY notifier");
+ hs_mgr_notifier.hs_1wire_query = notifier->func;
+ break;
+ case HEADSET_REG_1WIRE_READ_KEY:
+ HS_LOG("Register 1WIRE_READ_KEY notifier");
+ hs_mgr_notifier.hs_1wire_read_key = notifier->func;
+ break;
+ case HEADSET_REG_1WIRE_DEINIT:
+ HS_LOG("Register 1WIRE_DEINIT notifier");
+ hs_mgr_notifier.hs_1wire_deinit = notifier->func;
+ break;
+ case HEADSET_REG_1WIRE_REPORT_TYPE:
+ HS_LOG("Register 1WIRE_REPORT_TYPE notifier");
+ hs_mgr_notifier.hs_1wire_report_type = notifier->func;
+ break;
+ case HEADSET_REG_1WIRE_OPEN:
+ HS_LOG("Register HEADSET_REG_1WIRE_OPEN notifier");
+ hs_mgr_notifier.hs_1wire_open = notifier->func;
+ break;
+ case HEADSET_REG_HS_INSERT:
+ HS_LOG("Register HS_INSERT notifier");
+ hs_mgr_notifier.hs_insert = notifier->func;
+ break;
+ default:
+ HS_LOG("Unknown register ID");
+ return 0;
+ }
+
+ headset_notifier_update(notifier->id);
+
+ return 1;
+}
+EXPORT_SYMBOL(headset_notifier_register);
+
+#ifdef HTC_HEADSET_CONFIG_MSM_RPC
+static int hs_mgr_rpc_call(struct msm_rpc_server *server,
+ struct rpc_request_hdr *req, unsigned len)
+{
+ struct hs_rpc_server_args_key *args_key;
+
+ wake_lock_timeout(&hi->hs_wake_lock, HS_WAKE_LOCK_TIMEOUT);
+
+ HS_DBG();
+
+ switch (req->procedure) {
+ case HS_RPC_SERVER_PROC_NULL:
+ HS_LOG("RPC_SERVER_NULL");
+ break;
+ case HS_RPC_SERVER_PROC_KEY:
+ args_key = (struct hs_rpc_server_args_key *)(req + 1);
+ args_key->adc = be32_to_cpu(args_key->adc);
+ HS_LOG("RPC_SERVER_KEY ADC = %u (0x%X)",
+ args_key->adc, args_key->adc);
+ if (hs_mgr_notifier.rpc_key)
+ hs_mgr_notifier.rpc_key(args_key->adc);
+ else
+ HS_LOG("RPC_KEY notify function doesn't exist");
+ break;
+ default:
+ HS_LOG("Unknown RPC procedure");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+#endif
+
+static ssize_t h2w_print_name(struct switch_dev *sdev, char *buf)
+{
+ return sprintf(buf, "Headset\n");
+}
+
+static void get_key_name(int keycode, char *buf)
+{
+ switch (keycode) {
+ case HS_MGR_KEYCODE_END:
+ sprintf(buf, "END");
+ break;
+ case HS_MGR_KEYCODE_MUTE:
+ sprintf(buf, "MUTE");
+ break;
+ case HS_MGR_KEYCODE_VOLDOWN:
+ sprintf(buf, "VOLDOWN");
+ break;
+ case HS_MGR_KEYCODE_VOLUP:
+ sprintf(buf, "VOLUP");
+ break;
+ case HS_MGR_KEYCODE_FORWARD:
+ sprintf(buf, "FORWARD");
+ break;
+ case HS_MGR_KEYCODE_PLAY:
+ sprintf(buf, "PLAY");
+ break;
+ case HS_MGR_KEYCODE_BACKWARD:
+ sprintf(buf, "BACKWARD");
+ break;
+ case HS_MGR_KEYCODE_MEDIA:
+ sprintf(buf, "MEDIA");
+ break;
+ case HS_MGR_KEYCODE_SEND:
+ sprintf(buf, "SEND");
+ break;
+ case HS_MGR_KEYCODE_FF:
+ sprintf(buf, "FastForward");
+ break;
+ case HS_MGR_KEYCODE_RW:
+ sprintf(buf, "ReWind");
+ break;
+ case HS_MGR_KEYCODE_ASSIST:
+ sprintf(buf, "ASSIST");
+ break;
+ default:
+ sprintf(buf, "%d", keycode);
+ }
+}
+
+void button_pressed(int type)
+{
+ char key_name[16];
+
+ get_key_name(type, key_name);
+ HS_LOG_TIME("%s (%d) pressed", key_name, type);
+ atomic_set(&hi->btn_state, type);
+ input_report_key(hi->input, type, 1);
+ input_sync(hi->input);
+ key_report++;
+}
+
+void button_released(int type)
+{
+ char key_name[16];
+
+ get_key_name(type, key_name);
+ HS_LOG_TIME("%s (%d) released", key_name, type);
+ atomic_set(&hi->btn_state, 0);
+ input_report_key(hi->input, type, 0);
+ input_sync(hi->input);
+ key_report++;
+}
+
+void headset_button_event(int is_press, int type)
+{
+ HS_DBG();
+
+ if (hi->hs_35mm_type == HEADSET_UNPLUG &&
+ hi->h2w_35mm_type == HEADSET_UNPLUG) {
+ HS_LOG("IGNORE key %d (HEADSET_UNPLUG)", type);
+ return;
+ }
+
+ if (!hs_hpin_stable()) {
+ HS_LOG("IGNORE key %d (Unstable HPIN)", type);
+ return;
+ }
+
+ if (!get_mic_state()) {
+ HS_LOG("IGNORE key %d (Not support MIC)", type);
+ return;
+ }
+
+ if (!is_press)
+ button_released(type);
+ else if (!atomic_read(&hi->btn_state))
+ button_pressed(type);
+}
+
+void hs_set_mic_select(int state)
+{
+ HS_DBG();
+
+ if (hs_mgr_notifier.mic_select)
+ hs_mgr_notifier.mic_select(state);
+}
+
+static int get_mic_status(void)
+{
+ int i = 0;
+ int adc = 0;
+ int mic = HEADSET_UNKNOWN_MIC;
+
+ if (hi->pdata.headset_config_num && hs_mgr_notifier.remote_adc) {
+ hs_mgr_notifier.remote_adc(&adc);
+ for (i = 0; i < hi->pdata.headset_config_num; i++) {
+ if (adc >= hi->pdata.headset_config[i].adc_min &&
+ adc <= hi->pdata.headset_config[i].adc_max)
+ return hi->pdata.headset_config[i].type;
+ }
+ if (hi->pdata.driver_flag & DRIVER_HS_MGR_FLOAT_DET) {
+ return HEADSET_UNPLUG;
+ }
+ } else if (hs_mgr_notifier.mic_status) {
+ mic = hs_mgr_notifier.mic_status();
+ } else {
+ HS_LOG("Failed to get MIC status");
+ }
+ return mic;
+}
+
+int headset_get_type(void)
+{
+ return hi->hs_35mm_type;
+}
+
+int headset_get_type_sync(int count, unsigned int interval)
+{
+ int current_type = hi->hs_35mm_type;
+ int new_type = HEADSET_UNKNOWN_MIC;
+
+ while (count--) {
+ new_type = get_mic_status();
+ if (new_type != current_type)
+ break;
+ if (count)
+ msleep(interval);
+ }
+
+ if (new_type != current_type) {
+ update_mic_status(HS_DEF_MIC_DETECT_COUNT);
+ return HEADSET_UNKNOWN_MIC;
+ }
+
+ return hi->hs_35mm_type;
+}
+
+static void set_35mm_hw_state(int state)
+{
+ HS_DBG();
+
+/*
+ if (hs_mgr_notifier.key_int_enable && !state)
+ hs_mgr_notifier.key_int_enable(state);
+*/
+
+ if (hi->pdata.headset_power || hs_mgr_notifier.mic_bias_enable) {
+ if (hi->mic_bias_state != state) {
+ if (hi->pdata.headset_power)
+ hi->pdata.headset_power(state);
+ if (hs_mgr_notifier.mic_bias_enable)
+ hs_mgr_notifier.mic_bias_enable(state);
+
+ hi->mic_bias_state = state;
+ if (state) /* Wait for MIC bias stable */
+ msleep(HS_DELAY_MIC_BIAS);
+ }
+ }
+
+ hs_set_mic_select(state);
+
+ if (hs_mgr_notifier.key_enable)
+ hs_mgr_notifier.key_enable(state);
+/*
+ if (hs_mgr_notifier.key_int_enable && state)
+ hs_mgr_notifier.key_int_enable(state);
+*/
+
+}
+
+static int tv_out_detect(void)
+{
+ int adc = 0;
+ int mic = HEADSET_NO_MIC;
+
+ HS_DBG();
+
+ if (!hs_mgr_notifier.remote_adc)
+ return HEADSET_NO_MIC;
+
+ if (!hi->pdata.hptv_det_hp_gpio || !hi->pdata.hptv_det_tv_gpio)
+ return HEADSET_NO_MIC;
+
+ gpio_set_value(hi->pdata.hptv_det_hp_gpio, 0);
+ gpio_set_value(hi->pdata.hptv_det_tv_gpio, 1);
+ msleep(HS_DELAY_MIC_BIAS);
+
+ hs_mgr_notifier.remote_adc(&adc);
+ if (adc >= HS_DEF_HPTV_ADC_16_BIT_MIN &&
+ adc <= HS_DEF_HPTV_ADC_16_BIT_MAX)
+ mic = HEADSET_TV_OUT;
+
+ gpio_set_value(hi->pdata.hptv_det_hp_gpio, 1);
+ gpio_set_value(hi->pdata.hptv_det_tv_gpio, 0);
+
+ return mic;
+}
+
+#if 0
+static void insert_h2w_35mm(int *state)
+{
+ int mic = HEADSET_NO_MIC;
+
+ HS_LOG_TIME("Insert H2W 3.5mm headset");
+ set_35mm_hw_state(1);
+
+ mic = get_mic_status();
+
+ if (mic == HEADSET_NO_MIC) {
+ *state |= BIT_HEADSET_NO_MIC;
+ hi->h2w_35mm_type = HEADSET_NO_MIC;
+ HS_LOG_TIME("H2W 3.5mm without microphone");
+ } else {
+ *state |= BIT_HEADSET;
+ hi->h2w_35mm_type = HEADSET_MIC;
+ HS_LOG_TIME("H2W 3.5mm with microphone");
+ }
+}
+
+static void remove_h2w_35mm(void)
+{
+ HS_LOG_TIME("Remove H2W 3.5mm headset");
+
+ set_35mm_hw_state(0);
+
+ if (atomic_read(&hi->btn_state))
+ button_released(atomic_read(&hi->btn_state));
+ hi->h2w_35mm_type = HEADSET_UNPLUG;
+}
+#endif /* #if 0 */
+
+static void enable_metrico_headset(int enable)
+{
+ HS_DBG();
+
+ if (enable && !hi->metrico_status) {
+#if 0
+ enable_mos_test(1);
+#endif
+ hi->metrico_status = 1;
+ HS_LOG("Enable metrico headset");
+ }
+
+ if (!enable && hi->metrico_status) {
+#if 0
+ enable_mos_test(0);
+#endif
+ hi->metrico_status = 0;
+ HS_LOG("Disable metrico headset");
+ }
+}
+
+static void mic_detect_work_func(struct work_struct *work)
+{
+ int mic = HEADSET_NO_MIC;
+ int old_state, new_state;
+ int adc = 0;
+
+ wake_lock_timeout(&hi->hs_wake_lock, HS_MIC_DETECT_TIMEOUT);
+
+ HS_DBG();
+
+ if (!hi->pdata.headset_config_num && !hs_mgr_notifier.mic_status) {
+ HS_LOG("Failed to get MIC status");
+ return;
+ }
+
+ if (hs_mgr_notifier.key_int_enable)
+ hs_mgr_notifier.key_int_enable(0);
+
+ mutex_lock(&hi->mutex_lock);
+/*Polling 1wire AID start*/
+
+ if (hi->driver_one_wire_exist && hi->one_wire_mode == 0) {
+ HS_LOG("1-wire re-detecting sequence");
+ if (hi->pdata.uart_tx_gpo)
+ hi->pdata.uart_tx_gpo(0);
+ if (hi->pdata.uart_lv_shift_en)
+ hi->pdata.uart_lv_shift_en(0);
+ msleep(20);
+ if (hi->pdata.uart_lv_shift_en)
+ hi->pdata.uart_lv_shift_en(1);
+ if (hi->pdata.uart_tx_gpo)
+ hi->pdata.uart_tx_gpo(2);
+ msleep(150);
+ if (hs_mgr_notifier.remote_adc)
+ hs_mgr_notifier.remote_adc(&adc);
+ hi->one_wire_mode = 0;
+/*Check one wire accessory for every plug event*/
+
+ if (adc > 1149) {
+ HS_LOG("Not HEADSET_NO_MIC, start 1wire init");
+ if (hs_mgr_notifier.hs_1wire_init() == 0) {
+ hi->one_wire_mode = 1;
+/*Report as normal headset with MIC*/
+ old_state = switch_get_state(&hi->sdev_h2w);
+ new_state = BIT_HEADSET;
+ if (old_state == BIT_HEADSET_NO_MIC) {
+ HS_LOG("no_mic to mic workaround");
+ new_state = BIT_HEADSET | BIT_HEADSET_NO_MIC;
+ }
+ HS_LOG("old_state = 0x%x, new_state = 0x%x", old_state, new_state);
+ switch_set_state(&hi->sdev_h2w, old_state & ~MASK_35MM_HEADSET);
+ switch_set_state(&hi->sdev_h2w, new_state);
+ hi->hs_35mm_type = HEADSET_BEATS;
+ mutex_unlock(&hi->mutex_lock);
+ if (hs_mgr_notifier.key_int_enable)
+ hs_mgr_notifier.key_int_enable(1);
+ return;
+ } else {
+ hi->one_wire_mode = 0;
+ HS_LOG("Legacy mode");
+ if (hi->pdata.uart_tx_gpo)
+ hi->pdata.uart_tx_gpo(2);
+ }
+ }
+ }
+
+/*Polling 1wire AID end*/
+
+ mic = get_mic_status();
+
+ if (mic == HEADSET_NO_MIC)
+ mic = tv_out_detect();
+
+ if (mic == HEADSET_TV_OUT && hi->pdata.hptv_sel_gpio)
+ gpio_set_value(hi->pdata.hptv_sel_gpio, 1);
+
+ if (mic == HEADSET_METRICO && !hi->metrico_status)
+ enable_metrico_headset(1);
+
+ if (mic == HEADSET_UNKNOWN_MIC || mic == HEADSET_UNPLUG) {
+ mutex_unlock(&hi->mutex_lock);
+ if (hi->mic_detect_counter--) {
+ queue_delayed_work(detect_wq, &mic_detect_work,
+ HS_JIFFIES_MIC_DETECT);
+ } else {
+ HS_LOG("MIC polling timeout (UNKNOWN/Floating MIC status)");
+ set_35mm_hw_state(0); /*Turn off mic bias*/
+ }
+ return;
+ }
+
+ if (hi->hs_35mm_type == HEADSET_UNSTABLE && hi->mic_detect_counter--) {
+ mutex_unlock(&hi->mutex_lock);
+ queue_delayed_work(detect_wq, &mic_detect_work,
+ HS_JIFFIES_MIC_DETECT);
+ return;
+ }
+
+ old_state = switch_get_state(&hi->sdev_h2w);
+ if (!(old_state & MASK_35MM_HEADSET) && !(hi->is_ext_insert)) {
+ HS_LOG("Headset has been removed");
+ mutex_unlock(&hi->mutex_lock);
+ return;
+ }
+
+ new_state = old_state & ~MASK_35MM_HEADSET;
+
+ switch (mic) {
+ case HEADSET_UNPLUG:
+ new_state &= ~MASK_35MM_HEADSET;
+ HS_LOG("HEADSET_UNPLUG (FLOAT)");
+ break;
+ case HEADSET_NO_MIC:
+ new_state |= BIT_HEADSET_NO_MIC;
+ HS_LOG("HEADSET_NO_MIC");
+ set_35mm_hw_state(0);
+ break;
+ case HEADSET_MIC:
+ new_state |= BIT_HEADSET;
+ HS_LOG("HEADSET_MIC");
+ break;
+ case HEADSET_METRICO:
+ new_state |= BIT_HEADSET;
+ HS_LOG("HEADSET_METRICO");
+ break;
+ case HEADSET_TV_OUT:
+ new_state |= BIT_TV_OUT;
+ HS_LOG("HEADSET_TV_OUT");
+#if defined(CONFIG_FB_MSM_TVOUT) && defined(CONFIG_ARCH_MSM8X60)
+ tvout_enable_detection(1);
+#endif
+ break;
+ case HEADSET_BEATS:
+ new_state |= BIT_HEADSET;
+ HS_LOG("HEADSET_BEATS");
+ break;
+ case HEADSET_BEATS_SOLO:
+ new_state |= BIT_HEADSET;
+ HS_LOG("HEADSET_BEATS_SOLO");
+ break;
+ case HEADSET_INDICATOR:
+ HS_LOG("HEADSET_INDICATOR");
+ break;
+ case HEADSET_UART:
+ HS_LOG("HEADSET_UART");
+ if (hs_mgr_notifier.uart_set)
+ hs_mgr_notifier.uart_set(1);
+ break;
+ }
+
+ if (new_state != old_state) {
+ HS_LOG_TIME("Plug/Unplug accessory, old_state 0x%x, new_state 0x%x", old_state, new_state);
+ hi->hs_35mm_type = mic;
+ new_state |= old_state;
+ switch_set_state(&hi->sdev_h2w, new_state);
+ HS_LOG_TIME("Sent uevent 0x%x ==> 0x%x", old_state, new_state);
+ hpin_report++;
+ } else
+ HS_LOG("MIC status has not changed");
+
+ if (mic != HEADSET_NO_MIC) {
+ if (hs_mgr_notifier.key_int_enable)
+ hs_mgr_notifier.key_int_enable(1);
+ }
+ mutex_unlock(&hi->mutex_lock);
+}
+
+static void button_35mm_work_func(struct work_struct *work)
+{
+ int key;
+ struct button_work *works;
+
+ wake_lock_timeout(&hi->hs_wake_lock, HS_WAKE_LOCK_TIMEOUT);
+
+ HS_DBG();
+
+ works = container_of(work, struct button_work, key_work.work);
+ hi->key_level_flag = works->key_code;
+
+ if (hi->key_level_flag) {
+ switch (hi->key_level_flag) {
+ case 1:
+ key = HS_MGR_KEYCODE_MEDIA;
+ break;
+ case 2:
+ key = HS_MGR_KEYCODE_VOLUP;
+ break;
+ case 3:
+ key = HS_MGR_KEYCODE_VOLDOWN;
+ break;
+ case 4:
+ key = HS_MGR_KEYCODE_ASSIST;
+ break;
+ default:
+ HS_LOG("3.5mm RC: WRONG Button Pressed");
+ kfree(works);
+ return;
+ }
+ headset_button_event(1, key);
+ } else { /* key release */
+ if (atomic_read(&hi->btn_state))
+ headset_button_event(0, atomic_read(&hi->btn_state));
+ else
+ HS_LOG("3.5mm RC: WRONG Button Release");
+ }
+
+ kfree(works);
+}
+
+static void debug_work_func(struct work_struct *work)
+{
+ int flag = 0;
+ int adc = -EINVAL;
+ int hpin_gpio = -EINVAL;
+
+ HS_DBG();
+
+ while (hi->debug_flag & DEBUG_FLAG_ADC) {
+ flag = hi->debug_flag;
+ if (hs_mgr_notifier.hpin_gpio)
+ hpin_gpio = hs_mgr_notifier.hpin_gpio();
+ if (hs_mgr_notifier.remote_adc)
+ hs_mgr_notifier.remote_adc(&adc);
+ HS_LOG("Debug Flag %d, HP_DET %d, ADC %d", flag,
+ hpin_gpio, adc);
+ msleep(HS_DELAY_SEC);
+ }
+}
+
+static void remove_detect_work_func(struct work_struct *work)
+{
+ int state;
+
+ wake_lock_timeout(&hi->hs_wake_lock, HS_WAKE_LOCK_TIMEOUT);
+
+ HS_DBG();
+
+ if (time_before_eq(jiffies, hi->insert_jiffies + HZ)) {
+ HS_LOG("Waiting for HPIN stable");
+ msleep(HS_DELAY_SEC - HS_DELAY_REMOVE);
+ }
+
+ if (hi->is_ext_insert || hs_mgr_notifier.hpin_gpio() == 0) {
+ HS_LOG("Headset has been inserted");
+ return;
+ }
+
+ if (hi->hs_35mm_type == HEADSET_INDICATOR &&
+ hs_mgr_notifier.indicator_enable)
+ hs_mgr_notifier.indicator_enable(0);
+
+ set_35mm_hw_state(0);
+#if defined(CONFIG_FB_MSM_TVOUT) && defined(CONFIG_ARCH_MSM8X60)
+ if (hi->hs_35mm_type == HEADSET_TV_OUT && hi->pdata.hptv_sel_gpio) {
+ HS_LOG_TIME("Remove 3.5mm TVOUT cable");
+ tvout_enable_detection(0);
+ gpio_set_value(hi->pdata.hptv_sel_gpio, 0);
+ }
+#endif
+ if (hi->metrico_status)
+ enable_metrico_headset(0);
+
+ if (atomic_read(&hi->btn_state))
+ button_released(atomic_read(&hi->btn_state));
+ hi->hs_35mm_type = HEADSET_UNPLUG;
+
+ mutex_lock(&hi->mutex_lock);
+
+ if (hs_mgr_notifier.uart_set)
+ hs_mgr_notifier.uart_set(0);
+
+ state = switch_get_state(&hi->sdev_h2w);
+ if (!(state & MASK_35MM_HEADSET)) {
+ HS_LOG("Headset has been removed");
+ mutex_unlock(&hi->mutex_lock);
+ return;
+ }
+
+#if 0
+ if (hi->cable_in1 && !gpio_get_value(hi->cable_in1)) {
+ state &= ~BIT_35MM_HEADSET;
+ switch_set_state(&hi->sdev_h2w, state);
+ queue_delayed_work(detect_wq, &detect_h2w_work,
+ HS_DELAY_ZERO_JIFFIES);
+ } else {
+ state &= ~(MASK_35MM_HEADSET | MASK_FM_ATTRIBUTE);
+ switch_set_state(&hi->sdev_h2w, state);
+ }
+#else
+ state &= ~(MASK_35MM_HEADSET | MASK_FM_ATTRIBUTE);
+ switch_set_state(&hi->sdev_h2w, state);
+#endif
+ if (hi->one_wire_mode == 1) {
+ hi->one_wire_mode = 0;
+ }
+ HS_LOG_TIME("Remove 3.5mm accessory");
+ hpin_report++;
+ mutex_unlock(&hi->mutex_lock);
+
+#ifdef HTC_HEADSET_CONFIG_QUICK_BOOT
+ if (gpio_event_get_quickboot_status())
+ HS_LOG("quick_boot_status = 1");
+#endif
+}
+
+static void insert_detect_work_func(struct work_struct *work)
+{
+ int old_state, new_state;
+ int mic = HEADSET_NO_MIC;
+ int adc = 0;
+
+ wake_lock_timeout(&hi->hs_wake_lock, HS_WAKE_LOCK_TIMEOUT);
+
+ HS_DBG();
+
+ if (!hi->is_ext_insert || hs_mgr_notifier.hpin_gpio() == 1) {
+ HS_LOG("Headset has been removed");
+ return;
+ }
+
+ if (hs_mgr_notifier.key_int_enable)
+ hs_mgr_notifier.key_int_enable(0);
+
+ set_35mm_hw_state(1);
+ msleep(250); /* de-bouncing time for MIC output Volt */
+
+ HS_LOG("Start 1-wire detecting sequence");
+ if (hi->pdata.uart_tx_gpo)
+ hi->pdata.uart_tx_gpo(0);
+ if (hi->pdata.uart_lv_shift_en)
+ hi->pdata.uart_lv_shift_en(0);
+ msleep(20);
+ if (hi->pdata.uart_lv_shift_en)
+ hi->pdata.uart_lv_shift_en(1);
+ if (hi->pdata.uart_tx_gpo)
+ hi->pdata.uart_tx_gpo(2);
+ hi->insert_jiffies = jiffies;
+ msleep(150);
+ if (hs_mgr_notifier.remote_adc)
+ hs_mgr_notifier.remote_adc(&adc);
+
+ mutex_lock(&hi->mutex_lock);
+
+ hi->one_wire_mode = 0;
+/*Check one wire accessory for every plug event*/
+ if (hi->driver_one_wire_exist && adc > 915) {
+ HS_LOG("[HS_1wire]1wire driver exists, starting init");
+ if (hs_mgr_notifier.hs_1wire_init() == 0) {
+ hi->one_wire_mode = 1;
+ /*Report as normal headset with MIC*/
+ old_state = switch_get_state(&hi->sdev_h2w);
+ new_state = BIT_HEADSET;
+ if (old_state == BIT_HEADSET_NO_MIC) {
+ HS_LOG("Send fake remove event");
+ switch_set_state(&hi->sdev_h2w, old_state & ~MASK_35MM_HEADSET);
+ }
+ switch_set_state(&hi->sdev_h2w, new_state);
+ hi->hs_35mm_type = HEADSET_BEATS;
+ mutex_unlock(&hi->mutex_lock);
+ if (hs_mgr_notifier.key_int_enable)
+ hs_mgr_notifier.key_int_enable(1);
+ return;
+ } else {
+ hi->one_wire_mode = 0;
+ HS_LOG("Legacy mode");
+ if (hi->pdata.uart_tx_gpo)
+ hi->pdata.uart_tx_gpo(2);
+ }
+ }
+ mic = get_mic_status();
+ if (hi->pdata.driver_flag & DRIVER_HS_MGR_FLOAT_DET) {
+ HS_LOG("Headset float detect enable");
+ if (mic == HEADSET_UNPLUG) {
+ mutex_unlock(&hi->mutex_lock);
+ /*update_mic_status(HS_DEF_MIC_DETECT_COUNT);*/
+ set_35mm_hw_state(0);
+ return;
+ }
+ }
+
+ if (mic == HEADSET_NO_MIC)
+ mic = tv_out_detect();
+
+ if (mic == HEADSET_TV_OUT && hi->pdata.hptv_sel_gpio)
+ gpio_set_value(hi->pdata.hptv_sel_gpio, 1);
+
+ if (mic == HEADSET_METRICO && !hi->metrico_status)
+ enable_metrico_headset(1);
+
+ old_state = switch_get_state(&hi->sdev_h2w);
+ new_state = old_state & ~MASK_35MM_HEADSET;
+
+ switch (mic) {
+
+ case HEADSET_NO_MIC:
+ new_state |= BIT_HEADSET_NO_MIC;
+ HS_LOG_TIME("HEADSET_NO_MIC");
+ set_35mm_hw_state(0);
+ break;
+ case HEADSET_MIC:
+ new_state |= BIT_HEADSET;
+ HS_LOG_TIME("HEADSET_MIC");
+ break;
+ case HEADSET_METRICO:
+ mic = HEADSET_UNSTABLE;
+ HS_LOG_TIME("HEADSET_METRICO (UNSTABLE)");
+ break;
+ case HEADSET_UNKNOWN_MIC:
+ new_state |= BIT_HEADSET_NO_MIC;
+ HS_LOG_TIME("HEADSET_UNKNOWN_MIC");
+ break;
+ case HEADSET_TV_OUT:
+ new_state |= BIT_TV_OUT;
+ HS_LOG_TIME("HEADSET_TV_OUT");
+#if defined(CONFIG_FB_MSM_TVOUT) && defined(CONFIG_ARCH_MSM8X60)
+ tvout_enable_detection(1);
+#endif
+ break;
+ case HEADSET_BEATS:
+ new_state |= BIT_HEADSET;
+ HS_LOG_TIME("HEADSET_BEATS (UNSTABLE)");
+ break;
+ case HEADSET_BEATS_SOLO:
+ new_state |= BIT_HEADSET;
+ HS_LOG_TIME("HEADSET_BEATS_SOLO (UNSTABLE)");
+ break;
+ case HEADSET_INDICATOR:
+ HS_LOG_TIME("HEADSET_INDICATOR");
+ break;
+ case HEADSET_UART:
+ HS_LOG_TIME("HEADSET_UART");
+ if (hs_mgr_notifier.uart_set)
+ hs_mgr_notifier.uart_set(1);
+ break;
+ }
+ if ((old_state == BIT_HEADSET_NO_MIC) && (new_state == BIT_HEADSET)) {
+ HS_LOG("no_mic to mic workaround");
+ new_state = BIT_HEADSET_NO_MIC | BIT_HEADSET;
+ hi->hpin_jiffies = jiffies;
+ }
+ hi->hs_35mm_type = mic;
+ HS_LOG_TIME("Send uevent for state change, %d => %d", old_state, new_state);
+ switch_set_state(&hi->sdev_h2w, new_state);
+ hpin_report++;
+
+ if (mic != HEADSET_NO_MIC) {
+ if (hs_mgr_notifier.key_int_enable)
+ hs_mgr_notifier.key_int_enable(1);
+ }
+ mutex_unlock(&hi->mutex_lock);
+
+#ifdef HTC_HEADSET_CONFIG_QUICK_BOOT
+ if (gpio_event_get_quickboot_status())
+ HS_LOG("quick_boot_status = 1");
+#endif
+
+ if (mic == HEADSET_UNKNOWN_MIC)
+ update_mic_status(HS_DEF_MIC_DETECT_COUNT);
+ else if (mic == HEADSET_UNSTABLE)
+ update_mic_status(0);
+ else if (mic == HEADSET_INDICATOR) {
+ if (headset_get_type_sync(3, HS_DELAY_SEC) == HEADSET_INDICATOR)
+ HS_LOG("Delay check: HEADSET_INDICATOR");
+ else
+ HS_LOG("Delay check: HEADSET_UNKNOWN_MIC");
+ }
+}
+
+int hs_notify_plug_event(int insert, unsigned int intr_id)
+{
+ int ret = 0;
+ HS_LOG("Headset status++%d++ %d", intr_id, insert);
+
+ mutex_lock(&hi->mutex_lock);
+ hi->is_ext_insert = insert;
+ mutex_unlock(&hi->mutex_lock);
+
+ if (hs_mgr_notifier.hs_insert)
+ hs_mgr_notifier.hs_insert(insert);
+
+ cancel_delayed_work_sync(&mic_detect_work);
+ ret = cancel_delayed_work_sync(&insert_detect_work);
+ if (ret && hs_mgr_notifier.key_int_enable) {
+ HS_LOG("Cancel insert work success");
+ if (!insert)
+ hs_mgr_notifier.key_int_enable(1);
+ }
+ ret = cancel_delayed_work_sync(&remove_detect_work);
+ if (ret && hs_mgr_notifier.key_int_enable) {
+ HS_LOG("Cancel remove work success");
+ if (insert)
+ hs_mgr_notifier.key_int_enable(0);
+ }
+ if (hi->is_ext_insert) {
+ ret = queue_delayed_work(detect_wq, &insert_detect_work,
+ HS_JIFFIES_INSERT);
+ HS_LOG("queue insert work, ret = %d", ret);
+ } else {
+ if (hi->pdata.driver_flag & DRIVER_HS_MGR_OLD_AJ) {
+ HS_LOG("Old AJ work long remove delay");
+ ret = queue_delayed_work(detect_wq, &remove_detect_work,
+ HS_JIFFIES_REMOVE_LONG);
+ } else {
+ ret = queue_delayed_work(detect_wq, &remove_detect_work,
+ HS_JIFFIES_REMOVE);
+ }
+ HS_LOG("queue remove work, ret = %d", ret);
+ }
+
+ HS_LOG("Headset status--%d-- %d", intr_id, insert);
+ return 1;
+}
+
+int hs_notify_key_event(int key_code)
+{
+ struct button_work *work;
+
+ HS_DBG();
+
+ if (hi->hs_35mm_type == HEADSET_INDICATOR) {
+ HS_LOG("Not support remote control");
+ return 1;
+ }
+
+ if (hi->hs_35mm_type == HEADSET_UNKNOWN_MIC ||
+ hi->hs_35mm_type == HEADSET_NO_MIC ||
+ hi->h2w_35mm_type == HEADSET_NO_MIC)
+ update_mic_status(HS_DEF_MIC_DETECT_COUNT);
+ else if (hi->hs_35mm_type == HEADSET_UNSTABLE)
+ update_mic_status(0);
+ else if (!hs_hpin_stable()) {
+ HS_LOG("IGNORE key %d (Unstable HPIN)", key_code);
+ return 1;
+ } else if (hi->hs_35mm_type == HEADSET_UNPLUG && hi->is_ext_insert == 1) {
+ HS_LOG("MIC status has changed from float, re-polling to decide accessory type");
+ update_mic_status(HS_DEF_MIC_DETECT_COUNT);
+ return 1;
+ } else {
+ work = kzalloc(sizeof(struct button_work), GFP_KERNEL);
+ if (!work) {
+ HS_ERR("Failed to allocate button memory");
+ return 1;
+ }
+ work->key_code = key_code;
+ INIT_DELAYED_WORK(&work->key_work, button_35mm_work_func);
+ queue_delayed_work(button_wq, &work->key_work,
+ HS_JIFFIES_BUTTON);
+ }
+
+ return 1;
+}
+
+static void proc_comb_keys(void)
+{
+ int j, k;
+ if (hi->key_code_1wire_index >= 5) {
+ for (j = 0; j <= hi->key_code_1wire_index - 5; j++) {
+ if (hi->key_code_1wire[j] == 1 && hi->key_code_1wire[j+2] == 1 && hi->key_code_1wire[j+4] == 1) {
+ hi->key_code_1wire[j] = HS_MGR_3X_KEY_MEDIA;
+ HS_LOG("key[%d] = %d", j, HS_MGR_3X_KEY_MEDIA);
+ for (k = j + 1; k < (hi->key_code_1wire_index - 4); k++) {
+ hi->key_code_1wire[k] = hi->key_code_1wire[k+4];
+ HS_LOG("key[%d] <= key[%d]", k, k+4);
+ }
+ hi->key_code_1wire_index -= 4;
+ }
+ }
+ }
+
+ if (hi->key_code_1wire_index >= 3) {
+ for (j = 0; j <= hi->key_code_1wire_index - 3; j++) {
+ if (hi->key_code_1wire[j] == 1 && hi->key_code_1wire[j+2] == 1) {
+ hi->key_code_1wire[j] = HS_MGR_2X_KEY_MEDIA;
+ HS_LOG("key[%d] = %d", j, HS_MGR_2X_KEY_MEDIA);
+ for (k = j + 1; k < (hi->key_code_1wire_index - 2); k++) {
+ hi->key_code_1wire[k] = hi->key_code_1wire[k+2];
+ HS_LOG("key[%d] <= key[%d]", k, k+2);
+ }
+ hi->key_code_1wire_index -= 2;
+ }
+ }
+ }
+}
+
+static void proc_long_press(void)
+{
+ if (hi->key_code_1wire[hi->key_code_1wire_index - 1] == HS_MGR_2X_KEY_MEDIA) { /*If last key is NEXT press, change it to FF*/
+ HS_LOG("long press key found, replace key[%d] = %d ==> %d", hi->key_code_1wire_index - 1,
+ hi->key_code_1wire[hi->key_code_1wire_index - 1], HS_MGR_2X_HOLD_MEDIA);
+ hi->key_code_1wire[hi->key_code_1wire_index - 1] = HS_MGR_2X_HOLD_MEDIA;
+ }
+
+ if (hi->key_code_1wire[hi->key_code_1wire_index - 1] == HS_MGR_3X_KEY_MEDIA) { /*If last key is PREVIOUS press, change it to RW*/
+ HS_LOG("long press key found, replace key[%d] = %d ==> %d", hi->key_code_1wire_index - 1,
+ hi->key_code_1wire[hi->key_code_1wire_index - 1], HS_MGR_3X_HOLD_MEDIA);
+ hi->key_code_1wire[hi->key_code_1wire_index - 1] = HS_MGR_3X_HOLD_MEDIA;
+ }
+}
+
+
+static void button_1wire_work_func(struct work_struct *work)
+{
+ int i;
+ static int pre_key = 0;
+ if (hi->key_code_1wire_index >= 15)
+ HS_LOG("key_code_1wire buffer overflow");
+ proc_comb_keys();
+ proc_long_press();
+ for (i = 0; i < hi->key_code_1wire_index; i++) {
+ HS_LOG("1wire key [%d] = %d", i, hi->key_code_1wire[i]);
+ switch (hi->key_code_1wire[i]) {
+ case 1:
+ /*single click MEDIA*/
+ button_pressed(HS_MGR_KEYCODE_MEDIA);
+ pre_key = HS_MGR_KEYCODE_MEDIA;
+ break;
+ case 2:
+ button_pressed(HS_MGR_KEYCODE_VOLUP);
+ pre_key = HS_MGR_KEYCODE_VOLUP;
+ break;
+ case 3:
+ button_pressed(HS_MGR_KEYCODE_VOLDOWN);
+ pre_key = HS_MGR_KEYCODE_VOLDOWN;
+ break;
+ case HS_MGR_2X_KEY_MEDIA:
+ button_pressed(HS_MGR_KEYCODE_FORWARD);
+ pre_key = HS_MGR_KEYCODE_FORWARD;
+ /*double click MEDIA*/
+ break;
+ case HS_MGR_3X_KEY_MEDIA:
+ button_pressed(HS_MGR_KEYCODE_BACKWARD);
+ pre_key = HS_MGR_KEYCODE_BACKWARD;
+ /*triple click MEDIA*/
+ break;
+ case HS_MGR_2X_HOLD_MEDIA:
+ button_pressed(HS_MGR_KEYCODE_FF);
+ pre_key = HS_MGR_KEYCODE_FF;
+ /*double click and hold MEDIA*/
+ break;
+ case HS_MGR_3X_HOLD_MEDIA:
+ button_pressed(HS_MGR_KEYCODE_RW);
+ pre_key = HS_MGR_KEYCODE_RW;
+ /*triple click and hold MEDIA*/
+ break;
+ case 0:
+ button_released(pre_key);
+ break;
+ default:
+ break;
+ }
+ msleep(10);
+ }
+ hi->key_code_1wire_index = 0;
+
+}
+
+
+int hs_notify_key_irq(void)
+{
+ int adc = 0;
+ int key_code = HS_MGR_KEY_INVALID;
+ static int pre_key = 0;
+
+ if (hi->one_wire_mode == 1 && hs_hpin_stable() && hi->is_ext_insert) {
+ wake_lock_timeout(&hi->hs_wake_lock, HS_WAKE_LOCK_TIMEOUT);
+ key_code = hs_mgr_notifier.hs_1wire_read_key();
+ if (key_code < 0) {
+ wake_unlock(&hi->hs_wake_lock);
+ return 1;
+ }
+ if (key_code == 2 || key_code == 3 || pre_key == 2 || pre_key == 3) {
+ queue_delayed_work(button_wq, &button_1wire_work, HS_JIFFIES_1WIRE_BUTTON_SHORT);
+ HS_LOG("Use short delay");
+ } else {
+ queue_delayed_work(button_wq, &button_1wire_work, hi->onewire_key_delay);
+ HS_LOG("Use long delay");
+ }
+
+ HS_LOG("key_code = 0x%x", key_code);
+ /* bound the index so a key burst cannot overrun the buffer */
+ if (hi->key_code_1wire_index < ARRAY_SIZE(hi->key_code_1wire))
+ hi->key_code_1wire[hi->key_code_1wire_index++] = key_code;
+ else
+ HS_LOG("key_code_1wire buffer full, key 0x%x dropped", key_code);
+ pre_key = key_code;
+ return 1;
+ }
+
+ key_bounce++;
+ if (hi->hs_35mm_type == HEADSET_INDICATOR) {
+ HS_LOG("Not support remote control");
+ return 1;
+ }
+
+ if (!hs_mgr_notifier.remote_adc || !hs_mgr_notifier.remote_keycode) {
+ HS_LOG("Failed to get remote key code");
+ return 1;
+ }
+
+ /* only handle key events for no_mic devices during the first 10 seconds */
+ if ((hi->hs_35mm_type == HEADSET_NO_MIC || hi->hs_35mm_type == HEADSET_UNKNOWN_MIC) &&
+ time_before_eq(jiffies, hi->hpin_jiffies + 10 * HZ)) {
+ HS_LOG("IGNORE key IRQ (Unstable HPIN)");
+ /* hs_notify_hpin_irq(); */
+ update_mic_status(HS_DEF_MIC_DETECT_COUNT);
+ } else if (hs_hpin_stable()) {
+#ifndef CONFIG_HTC_HEADSET_ONE_WIRE
+ msleep(50);
+#endif
+ hs_mgr_notifier.remote_adc(&adc);
+ key_code = hs_mgr_notifier.remote_keycode(adc);
+ hs_notify_key_event(key_code);
+ }
+
+ return 1;
+}
+
+static void usb_headset_detect(int type)
+{
+ int state_h2w = 0;
+ int state_usb = 0;
+
+ HS_DBG();
+
+ mutex_lock(&hi->mutex_lock);
+ state_h2w = switch_get_state(&hi->sdev_h2w);
+
+ switch (type) {
+ case USB_NO_HEADSET:
+ hi->usb_headset.type = USB_NO_HEADSET;
+ hi->usb_headset.status = STATUS_DISCONNECTED;
+ state_h2w &= ~MASK_USB_HEADSET;
+ state_usb = GOOGLE_USB_AUDIO_UNPLUG;
+ HS_LOG_TIME("Remove USB_HEADSET (state %d, %d)",
+ state_h2w, state_usb);
+ break;
+ case USB_AUDIO_OUT:
+ hi->usb_headset.type = USB_AUDIO_OUT;
+ hi->usb_headset.status = STATUS_CONNECTED_ENABLED;
+ state_h2w |= BIT_USB_AUDIO_OUT;
+ state_usb = GOOGLE_USB_AUDIO_ANLG;
+ HS_LOG_TIME("Insert USB_AUDIO_OUT (state %d, %d)",
+ state_h2w, state_usb);
+ break;
+#ifdef CONFIG_SUPPORT_USB_SPEAKER
+ case USB_AUDIO_OUT_DGTL:
+ hi->usb_headset.type = USB_AUDIO_OUT;
+ hi->usb_headset.status = STATUS_CONNECTED_ENABLED;
+ state_h2w |= BIT_USB_AUDIO_OUT;
+ state_usb = GOOGLE_USB_AUDIO_DGTL;
+ HS_LOG_TIME("Insert USB_AUDIO_OUT DGTL (state %d, %d)",
+ state_h2w, state_usb);
+ break;
+#endif
+ default:
+ HS_LOG("Unknown headset type");
+ }
+
+ switch_set_state(&hi->sdev_h2w, state_h2w);
+ mutex_unlock(&hi->mutex_lock);
+}
+
+void headset_ext_detect(int type)
+{
+ HS_DBG();
+
+ switch (type) {
+ case H2W_NO_HEADSET:
+ /* Release Key */
+ case H2W_HEADSET:
+ case H2W_35MM_HEADSET:
+ case H2W_REMOTE_CONTROL:
+ case H2W_USB_CRADLE:
+ case H2W_UART_DEBUG:
+ case H2W_TVOUT:
+ break;
+ case USB_NO_HEADSET:
+ /* Release Key */
+ case USB_AUDIO_OUT:
+#ifdef CONFIG_SUPPORT_USB_SPEAKER
+ case USB_AUDIO_OUT_DGTL:
+#endif
+ usb_headset_detect(type);
+ break;
+ default:
+ HS_LOG("Unknown headset type");
+ }
+}
+
+void headset_ext_button(int headset_type, int key_code, int press)
+{
+ HS_LOG("Headset %d, Key %d, Press %d", headset_type, key_code, press);
+ headset_button_event(press, key_code);
+}
+
+int switch_send_event(unsigned int bit, int on)
+{
+ unsigned long state;
+
+ HS_DBG();
+
+ mutex_lock(&hi->mutex_lock);
+ state = switch_get_state(&hi->sdev_h2w);
+ state &= ~(bit);
+
+ if (on)
+ state |= bit;
+
+ switch_set_state(&hi->sdev_h2w, state);
+ mutex_unlock(&hi->mutex_lock);
+ return 0;
+}
+
+static ssize_t headset_state_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ int length = 0;
+ char *state = NULL;
+
+ HS_DBG();
+
+ switch (hi->hs_35mm_type) {
+ case HEADSET_UNPLUG:
+ state = "headset_unplug";
+ break;
+ case HEADSET_NO_MIC:
+ state = "headset_no_mic";
+ break;
+ case HEADSET_MIC:
+ state = "headset_mic";
+ break;
+ case HEADSET_METRICO:
+ state = "headset_metrico";
+ break;
+ case HEADSET_UNKNOWN_MIC:
+ state = "headset_unknown_mic";
+ break;
+ case HEADSET_TV_OUT:
+ state = "headset_tv_out";
+ break;
+ case HEADSET_UNSTABLE:
+ state = "headset_unstable";
+ break;
+ case HEADSET_BEATS:
+ if (hi->one_wire_mode == 1 && hs_mgr_notifier.hs_1wire_report_type)
+ hs_mgr_notifier.hs_1wire_report_type(&state);
+ else
+ state = "headset_beats";
+ break;
+ case HEADSET_BEATS_SOLO:
+ state = "headset_beats_solo";
+ break;
+ case HEADSET_INDICATOR:
+ state = "headset_indicator";
+ break;
+ case HEADSET_UART:
+ state = "headset_uart";
+ break;
+ default:
+ state = "error_state";
+ }
+
+ length = sprintf(buf, "%s\n", state);
+
+ return length;
+}
+
+static ssize_t headset_state_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ HS_DBG();
+ return count; /* returning 0 would make userspace retry the write forever */
+}
+
+static DEVICE_HEADSET_ATTR(state, 0644, headset_state_show,
+ headset_state_store);
+
+static ssize_t headset_1wire_state_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ return sprintf(buf, "%d\n", hi->one_wire_mode);
+}
+
+static ssize_t headset_1wire_state_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ HS_DBG();
+ return count; /* returning 0 would make userspace retry the write forever */
+}
+
+static DEVICE_HEADSET_ATTR(1wire_state, 0644, headset_1wire_state_show,
+ headset_1wire_state_store);
+
+static ssize_t headset_simulate_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ HS_DBG();
+ return sprintf(buf, "Command is not supported\n");
+}
+
+static ssize_t headset_simulate_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ unsigned long state = 0;
+
+ HS_DBG();
+
+ state = MASK_35MM_HEADSET | MASK_USB_HEADSET;
+ switch_send_event(state, 0);
+
+ if (strncmp(buf, "headset_unplug", count - 1) == 0) {
+ HS_LOG("Headset simulation: headset_unplug");
+ set_35mm_hw_state(0);
+ hi->hs_35mm_type = HEADSET_UNPLUG;
+ return count;
+ }
+
+ set_35mm_hw_state(1);
+ state = BIT_35MM_HEADSET;
+
+ if (strncmp(buf, "headset_no_mic", count - 1) == 0) {
+ HS_LOG("Headset simulation: headset_no_mic");
+ hi->hs_35mm_type = HEADSET_NO_MIC;
+ state = BIT_HEADSET_NO_MIC;
+ } else if (strncmp(buf, "headset_mic", count - 1) == 0) {
+ HS_LOG("Headset simulation: headset_mic");
+ hi->hs_35mm_type = HEADSET_MIC;
+ state = BIT_HEADSET;
+ } else if (strncmp(buf, "headset_metrico", count - 1) == 0) {
+ HS_LOG("Headset simulation: headset_metrico");
+ hi->hs_35mm_type = HEADSET_METRICO;
+ state = BIT_HEADSET;
+ } else if (strncmp(buf, "headset_unknown_mic", count - 1) == 0) {
+ HS_LOG("Headset simulation: headset_unknown_mic");
+ hi->hs_35mm_type = HEADSET_UNKNOWN_MIC;
+ state = BIT_HEADSET_NO_MIC;
+ } else if (strncmp(buf, "headset_tv_out", count - 1) == 0) {
+ HS_LOG("Headset simulation: headset_tv_out");
+ hi->hs_35mm_type = HEADSET_TV_OUT;
+ state = BIT_TV_OUT;
+#if defined(CONFIG_FB_MSM_TVOUT) && defined(CONFIG_ARCH_MSM8X60)
+ tvout_enable_detection(1);
+#endif
+ } else if (strncmp(buf, "headset_indicator", count - 1) == 0) {
+ HS_LOG("Headset simulation: headset_indicator");
+ hi->hs_35mm_type = HEADSET_INDICATOR;
+ } else if (strncmp(buf, "headset_beats", count - 1) == 0) {
+ HS_LOG("Headset simulation: headset_beats");
+ hi->hs_35mm_type = HEADSET_BEATS;
+ state = BIT_HEADSET;
+ } else if (strncmp(buf, "headset_beats_solo", count - 1) == 0) {
+ HS_LOG("Headset simulation: headset_beats_solo");
+ hi->hs_35mm_type = HEADSET_BEATS_SOLO;
+ state = BIT_HEADSET;
+ } else {
+ HS_LOG("Invalid parameter");
+ return count;
+ }
+
+ switch_send_event(state, 1);
+
+ return count;
+}
+
+static DEVICE_HEADSET_ATTR(simulate, 0644, headset_simulate_show,
+ headset_simulate_store);
+
+static ssize_t headset_1wire_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ char *s = buf;
+ HS_DBG();
+ s += sprintf(s, "onewire key delay is %dms\n", jiffies_to_msecs(hi->onewire_key_delay));
+ return (s - buf);
+}
+
+static ssize_t headset_1wire_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ unsigned long delay_t = 0;
+
+ HS_DBG();
+
+ /* kstrtoul() accepts the trailing newline from sysfs and rejects non-digits */
+ if (kstrtoul(buf, 10, &delay_t))
+ return -EINVAL;
+
+ HS_LOG("delay_t = %ld", delay_t);
+ hi->onewire_key_delay = msecs_to_jiffies(delay_t);
+
+ return count;
+}
+
+static DEVICE_HEADSET_ATTR(onewire, 0644, headset_1wire_show,
+ headset_1wire_store);
+
+static ssize_t tty_flag_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ char *s = buf;
+
+ HS_DBG();
+
+ mutex_lock(&hi->mutex_lock);
+ s += sprintf(s, "%d\n", hi->tty_enable_flag);
+ mutex_unlock(&hi->mutex_lock);
+ return (s - buf);
+}
+
+static ssize_t tty_flag_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ int state;
+
+ HS_DBG();
+
+ mutex_lock(&hi->mutex_lock);
+ state = switch_get_state(&hi->sdev_h2w);
+ state &= ~(BIT_TTY_FULL | BIT_TTY_VCO | BIT_TTY_HCO);
+
+ if (count == (strlen("enable") + 1) &&
+ strncmp(buf, "enable", strlen("enable")) == 0) {
+ hi->tty_enable_flag = 1;
+ switch_set_state(&hi->sdev_h2w, state | BIT_TTY_FULL);
+ mutex_unlock(&hi->mutex_lock);
+ HS_LOG("Enable TTY FULL");
+ return count;
+ }
+ if (count == (strlen("vco_enable") + 1) &&
+ strncmp(buf, "vco_enable", strlen("vco_enable")) == 0) {
+ hi->tty_enable_flag = 2;
+ switch_set_state(&hi->sdev_h2w, state | BIT_TTY_VCO);
+ mutex_unlock(&hi->mutex_lock);
+ HS_LOG("Enable TTY VCO");
+ return count;
+ }
+ if (count == (strlen("hco_enable") + 1) &&
+ strncmp(buf, "hco_enable", strlen("hco_enable")) == 0) {
+ hi->tty_enable_flag = 3;
+ switch_set_state(&hi->sdev_h2w, state | BIT_TTY_HCO);
+ mutex_unlock(&hi->mutex_lock);
+ HS_LOG("Enable TTY HCO");
+ return count;
+ }
+ if (count == (strlen("disable") + 1) &&
+ strncmp(buf, "disable", strlen("disable")) == 0) {
+ hi->tty_enable_flag = 0;
+ switch_set_state(&hi->sdev_h2w, state);
+ mutex_unlock(&hi->mutex_lock);
+ HS_LOG("Disable TTY");
+ return count;
+ }
+
+ mutex_unlock(&hi->mutex_lock);
+ HS_LOG("Invalid TTY argument");
+
+ return -EINVAL;
+}
+
+static DEVICE_ACCESSORY_ATTR(tty, 0644, tty_flag_show, tty_flag_store);
+
+static ssize_t fm_flag_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ char *s = buf;
+ char *state;
+
+ HS_DBG();
+
+ mutex_lock(&hi->mutex_lock);
+ switch (hi->fm_flag) {
+ case 0:
+ state = "disable";
+ break;
+ case 1:
+ state = "fm_headset";
+ break;
+ case 2:
+ state = "fm_speaker";
+ break;
+ default:
+ state = "unknown_fm_status";
+ }
+
+ s += sprintf(s, "%s\n", state);
+ mutex_unlock(&hi->mutex_lock);
+ return (s - buf);
+}
+
+static ssize_t fm_flag_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ int state;
+
+ HS_DBG();
+
+ mutex_lock(&hi->mutex_lock);
+ state = switch_get_state(&hi->sdev_h2w);
+ state &= ~(BIT_FM_HEADSET | BIT_FM_SPEAKER);
+
+ if (count == (strlen("fm_headset") + 1) &&
+ strncmp(buf, "fm_headset", strlen("fm_headset")) == 0) {
+ hi->fm_flag = 1;
+ state |= BIT_FM_HEADSET;
+ HS_LOG("Enable FM HEADSET");
+ } else if (count == (strlen("fm_speaker") + 1) &&
+ strncmp(buf, "fm_speaker", strlen("fm_speaker")) == 0) {
+ hi->fm_flag = 2;
+ state |= BIT_FM_SPEAKER;
+ HS_LOG("Enable FM SPEAKER");
+ } else if (count == (strlen("disable") + 1) &&
+ strncmp(buf, "disable", strlen("disable")) == 0) {
+ hi->fm_flag = 0 ;
+ HS_LOG("Disable FM");
+ } else {
+ mutex_unlock(&hi->mutex_lock);
+ HS_LOG("Invalid FM argument");
+ return -EINVAL;
+ }
+
+ switch_set_state(&hi->sdev_h2w, state);
+ mutex_unlock(&hi->mutex_lock);
+
+ return count;
+}
+
+static DEVICE_ACCESSORY_ATTR(fm, 0644, fm_flag_show, fm_flag_store);
+
+static ssize_t debug_flag_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ int flag = hi->debug_flag;
+ int adc = -EINVAL;
+ int hpin_gpio = -EINVAL;
+ int len, i;
+ char *s;
+
+ HS_DBG();
+
+ if (hs_mgr_notifier.hpin_gpio)
+ hpin_gpio = hs_mgr_notifier.hpin_gpio();
+ if (hs_mgr_notifier.remote_adc)
+ hs_mgr_notifier.remote_adc(&adc);
+
+ s = buf;
+ len = sprintf(buf, "Debug Flag = %d\nHP_DET = %d\nADC = %d\n", flag,
+ hpin_gpio, adc);
+ buf += len;
+ len = sprintf(buf, "DET report count = %d\nDET bounce count = %d\n", hpin_report, hpin_bounce);
+ buf += len;
+ len = sprintf(buf, "KEY report count = %d\nKEY bounce count = %d\n", key_report, key_bounce);
+ buf += len;
+ for (i = 0; i < hi->pdata.headset_config_num; i++) {
+ switch (hi->pdata.headset_config[i].type) {
+ case HEADSET_NO_MIC:
+ len = sprintf(buf, "headset_no_mic_adc_max = %d\n", hi->pdata.headset_config[i].adc_max);
+ buf += len;
+ len = sprintf(buf, "headset_no_mic_adc_min = %d\n", hi->pdata.headset_config[i].adc_min);
+ buf += len;
+ break;
+ case HEADSET_BEATS_SOLO:
+ len = sprintf(buf, "headset_beats_solo_adc_max = %d\n", hi->pdata.headset_config[i].adc_max);
+ buf += len;
+ len = sprintf(buf, "headset_beats_solo_adc_min = %d\n", hi->pdata.headset_config[i].adc_min);
+ buf += len;
+ break;
+ case HEADSET_BEATS:
+ len = sprintf(buf, "headset_beats_adc_max = %d\n", hi->pdata.headset_config[i].adc_max);
+ buf += len;
+ len = sprintf(buf, "headset_beats_adc_min = %d\n", hi->pdata.headset_config[i].adc_min);
+ buf += len;
+ break;
+ case HEADSET_MIC:
+ len = sprintf(buf, "headset_mic_adc_max = %d\n", hi->pdata.headset_config[i].adc_max);
+ buf += len;
+ len = sprintf(buf, "headset_mic_adc_min = %d\n", hi->pdata.headset_config[i].adc_min);
+ buf += len;
+ break;
+ default:
+ break;
+ }
+ }
+ key_report = 0;
+ key_bounce = 0;
+ hpin_report = 0;
+ hpin_bounce = 0;
+ return (buf - s);
+}
+
+static ssize_t debug_flag_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ unsigned long state = 0;
+
+ HS_DBG();
+
+ if (strncmp(buf, "enable", count - 1) == 0) {
+ if (hi->debug_flag & DEBUG_FLAG_ADC) {
+ HS_LOG("Debug work is already running");
+ return count;
+ }
+ if (!debug_wq) {
+ debug_wq = create_workqueue("debug");
+ if (!debug_wq) {
+ HS_LOG("Failed to create debug workqueue");
+ return count;
+ }
+ }
+ HS_LOG("Enable headset debug");
+ mutex_lock(&hi->mutex_lock);
+ hi->debug_flag |= DEBUG_FLAG_ADC;
+ mutex_unlock(&hi->mutex_lock);
+ queue_work(debug_wq, &debug_work);
+ } else if (strncmp(buf, "disable", count - 1) == 0) {
+ if (!(hi->debug_flag & DEBUG_FLAG_ADC)) {
+ HS_LOG("Debug work has been stopped");
+ return count;
+ }
+ HS_LOG("Disable headset debug");
+ mutex_lock(&hi->mutex_lock);
+ hi->debug_flag &= ~DEBUG_FLAG_ADC;
+ mutex_unlock(&hi->mutex_lock);
+ if (debug_wq) {
+ flush_workqueue(debug_wq);
+ destroy_workqueue(debug_wq);
+ debug_wq = NULL;
+ }
+ } else if (strncmp(buf, "debug_log_enable", count - 1) == 0) {
+ HS_LOG("Enable headset debug log");
+ hi->debug_flag |= DEBUG_FLAG_LOG;
+ } else if (strncmp(buf, "debug_log_disable", count - 1) == 0) {
+ HS_LOG("Disable headset debug log");
+ hi->debug_flag &= ~DEBUG_FLAG_LOG;
+ } else if (strncmp(buf, "no_headset", count - 1) == 0) {
+ HS_LOG("Headset simulation: no_headset");
+ state = BIT_HEADSET | BIT_HEADSET_NO_MIC | BIT_35MM_HEADSET |
+ BIT_TV_OUT | BIT_USB_AUDIO_OUT;
+ switch_send_event(state, 0);
+ } else if (strncmp(buf, "35mm_mic", count - 1) == 0) {
+ HS_LOG("Headset simulation: 35mm_mic");
+ state = BIT_HEADSET | BIT_35MM_HEADSET;
+ switch_send_event(state, 1);
+ } else if (strncmp(buf, "35mm_no_mic", count - 1) == 0) {
+ HS_LOG("Headset simulation: 35mm_no_mic");
+ state = BIT_HEADSET_NO_MIC | BIT_35MM_HEADSET;
+ switch_send_event(state, 1);
+ } else if (strncmp(buf, "35mm_tv_out", count - 1) == 0) {
+ HS_LOG("Headset simulation: 35mm_tv_out");
+ state = BIT_TV_OUT | BIT_35MM_HEADSET;
+ switch_send_event(state, 1);
+ } else if (strncmp(buf, "usb_audio", count - 1) == 0) {
+ HS_LOG("Headset simulation: usb_audio");
+ state = BIT_USB_AUDIO_OUT;
+ switch_send_event(state, 1);
+ } else if (strncmp(buf, "1wire_init", count - 1) == 0) {
+ if (hs_mgr_notifier.hs_1wire_init)
+ hs_mgr_notifier.hs_1wire_init();
+ } else if (strncmp(buf, "init_gpio", count - 1) == 0) {
+ if (hi->pdata.uart_lv_shift_en)
+ hi->pdata.uart_lv_shift_en(0);
+ if (hi->pdata.uart_tx_gpo)
+ hi->pdata.uart_tx_gpo(2);
+ } else {
+ HS_LOG("Invalid parameter");
+ return count;
+ }
+
+ return count;
+}
+
+static DEVICE_ACCESSORY_ATTR(debug, 0644, debug_flag_show, debug_flag_store);
+
+static int register_attributes(void)
+{
+ int ret = 0;
+
+ hi->htc_accessory_class = class_create(THIS_MODULE, "htc_accessory");
+ if (IS_ERR(hi->htc_accessory_class)) {
+ ret = PTR_ERR(hi->htc_accessory_class);
+ hi->htc_accessory_class = NULL;
+ goto err_create_class;
+ }
+
+ /* Register headset attributes */
+ hi->headset_dev = device_create(hi->htc_accessory_class,
+ NULL, 0, "%s", "headset");
+ if (unlikely(IS_ERR(hi->headset_dev))) {
+ ret = PTR_ERR(hi->headset_dev);
+ hi->headset_dev = NULL;
+ goto err_create_headset_device;
+ }
+
+ ret = device_create_file(hi->headset_dev, &dev_attr_headset_state);
+ if (ret)
+ goto err_create_headset_state_device_file;
+
+ ret = device_create_file(hi->headset_dev, &dev_attr_headset_simulate);
+ if (ret)
+ goto err_create_headset_simulate_device_file;
+
+ ret = device_create_file(hi->headset_dev, &dev_attr_headset_1wire_state);
+ if (ret)
+ goto err_create_headset_state_device_file;
+
+ /* Register TTY attributes */
+ hi->tty_dev = device_create(hi->htc_accessory_class,
+ NULL, 0, "%s", "tty");
+ if (unlikely(IS_ERR(hi->tty_dev))) {
+ ret = PTR_ERR(hi->tty_dev);
+ hi->tty_dev = NULL;
+ goto err_create_tty_device;
+ }
+
+ ret = device_create_file(hi->tty_dev, &dev_attr_tty);
+ if (ret)
+ goto err_create_tty_device_file;
+
+ /* Register FM attributes */
+ hi->fm_dev = device_create(hi->htc_accessory_class,
+ NULL, 0, "%s", "fm");
+ if (unlikely(IS_ERR(hi->fm_dev))) {
+ ret = PTR_ERR(hi->fm_dev);
+ hi->fm_dev = NULL;
+ goto err_create_fm_device;
+ }
+
+ ret = device_create_file(hi->fm_dev, &dev_attr_fm);
+ if (ret)
+ goto err_create_fm_device_file;
+
+ /* Register debug attributes */
+ hi->debug_dev = device_create(hi->htc_accessory_class,
+ NULL, 0, "%s", "debug");
+ if (unlikely(IS_ERR(hi->debug_dev))) {
+ ret = PTR_ERR(hi->debug_dev);
+ hi->debug_dev = NULL;
+ goto err_create_debug_device;
+ }
+
+ /* register the attributes */
+ ret = device_create_file(hi->debug_dev, &dev_attr_debug);
+ if (ret)
+ goto err_create_debug_device_file;
+
+ ret = device_create_file(hi->debug_dev, &dev_attr_headset_onewire);
+ if (ret)
+ goto err_create_debug_device_file;
+
+ return 0;
+
+err_create_debug_device_file:
+ device_unregister(hi->debug_dev);
+
+err_create_debug_device:
+ device_remove_file(hi->fm_dev, &dev_attr_fm);
+
+err_create_fm_device_file:
+ device_unregister(hi->fm_dev);
+
+err_create_fm_device:
+ device_remove_file(hi->tty_dev, &dev_attr_tty);
+
+err_create_tty_device_file:
+ device_unregister(hi->tty_dev);
+
+err_create_tty_device:
+ device_remove_file(hi->headset_dev, &dev_attr_headset_simulate);
+
+err_create_headset_simulate_device_file:
+ device_remove_file(hi->headset_dev, &dev_attr_headset_state);
+
+err_create_headset_state_device_file:
+ device_unregister(hi->headset_dev);
+
+err_create_headset_device:
+ class_destroy(hi->htc_accessory_class);
+
+err_create_class:
+
+ return ret;
+}
+
+static void unregister_attributes(void)
+{
+ device_remove_file(hi->debug_dev, &dev_attr_debug);
+ device_unregister(hi->debug_dev);
+ device_remove_file(hi->fm_dev, &dev_attr_fm);
+ device_unregister(hi->fm_dev);
+ device_remove_file(hi->tty_dev, &dev_attr_tty);
+ device_unregister(hi->tty_dev);
+ device_remove_file(hi->headset_dev, &dev_attr_headset_simulate);
+ device_remove_file(hi->headset_dev, &dev_attr_headset_state);
+ device_unregister(hi->headset_dev);
+ class_destroy(hi->htc_accessory_class);
+}
+
+static void headset_mgr_init(void)
+{
+ if (hi->pdata.hptv_det_hp_gpio)
+ gpio_set_value(hi->pdata.hptv_det_hp_gpio, 1);
+ if (hi->pdata.hptv_det_tv_gpio)
+ gpio_set_value(hi->pdata.hptv_det_tv_gpio, 0);
+ if (hi->pdata.hptv_sel_gpio)
+ gpio_set_value(hi->pdata.hptv_sel_gpio, 0);
+}
+
+#ifdef CONFIG_HAS_EARLYSUSPEND
+static void htc_headset_mgr_early_suspend(struct early_suspend *h)
+{
+ HS_DBG();
+}
+
+static void htc_headset_mgr_late_resume(struct early_suspend *h)
+{
+#ifdef HTC_HEADSET_CONFIG_QUICK_BOOT
+ int state = 0;
+ HS_DBG();
+
+ if (hi->quick_boot_status) {
+ mutex_lock(&hi->mutex_lock);
+ state = switch_get_state(&hi->sdev_h2w);
+ HS_LOG_TIME("Resend quick boot U-Event (state = %d)",
+ state | BIT_UNDEFINED);
+ switch_set_state(&hi->sdev_h2w, state | BIT_UNDEFINED);
+ HS_LOG_TIME("Resend quick boot U-Event (state = %d)", state);
+ switch_set_state(&hi->sdev_h2w, state);
+ hi->quick_boot_status = 0;
+ mutex_unlock(&hi->mutex_lock);
+ }
+#else
+ HS_DBG();
+#endif
+}
+#endif
+
+static int htc_headset_mgr_suspend(struct platform_device *pdev,
+ pm_message_t mesg)
+{
+ HS_DBG();
+
+#ifdef HTC_HEADSET_CONFIG_QUICK_BOOT
+ if (gpio_event_get_quickboot_status())
+ hi->quick_boot_status = 1;
+#endif
+
+ return 0;
+}
+
+static int htc_headset_mgr_resume(struct platform_device *pdev)
+{
+ HS_DBG();
+ if (hi->one_wire_mode == 1 && hs_mgr_notifier.hs_1wire_open)
+ hs_mgr_notifier.hs_1wire_open();
+ return 0;
+}
+
+static int htc_headset_mgr_probe(struct platform_device *pdev)
+{
+ int ret;
+
+ struct htc_headset_mgr_platform_data *pdata = pdev->dev.platform_data;
+
+ HS_LOG("++++++++++++++++++++");
+
+ if (!pdata) {
+ HS_ERR("No platform data");
+ return -EINVAL;
+ }
+
+ hi = kzalloc(sizeof(struct htc_headset_mgr_info), GFP_KERNEL);
+ if (!hi)
+ return -ENOMEM;
+
+ hi->pdata.driver_flag = pdata->driver_flag;
+ hi->pdata.headset_devices_num = pdata->headset_devices_num;
+ hi->pdata.headset_devices = pdata->headset_devices;
+ hi->pdata.headset_config_num = pdata->headset_config_num;
+ hi->pdata.headset_config = pdata->headset_config;
+
+ hi->pdata.hptv_det_hp_gpio = pdata->hptv_det_hp_gpio;
+ hi->pdata.hptv_det_tv_gpio = pdata->hptv_det_tv_gpio;
+ hi->pdata.hptv_sel_gpio = pdata->hptv_sel_gpio;
+
+ hi->pdata.headset_init = pdata->headset_init;
+ hi->pdata.headset_power = pdata->headset_power;
+ hi->pdata.uart_lv_shift_en = pdata->uart_lv_shift_en;
+ hi->pdata.uart_tx_gpo = pdata->uart_tx_gpo;
+
+ if (hi->pdata.headset_init)
+ hi->pdata.headset_init();
+
+ hi->driver_init_seq = 0;
+#ifdef CONFIG_HAS_EARLYSUSPEND
+ hi->early_suspend.suspend = htc_headset_mgr_early_suspend;
+ hi->early_suspend.resume = htc_headset_mgr_late_resume;
+ register_early_suspend(&hi->early_suspend);
+#endif
+ wake_lock_init(&hi->hs_wake_lock, WAKE_LOCK_SUSPEND, DRIVER_NAME);
+
+ hi->hpin_jiffies = jiffies;
+ hi->usb_headset.type = USB_NO_HEADSET;
+ hi->usb_headset.status = STATUS_DISCONNECTED;
+
+ hi->hs_35mm_type = HEADSET_UNPLUG;
+ hi->h2w_35mm_type = HEADSET_UNPLUG;
+ hi->is_ext_insert = 0;
+ hi->mic_bias_state = 0;
+ hi->mic_detect_counter = 0;
+ hi->key_level_flag = -1;
+ hi->quick_boot_status = 0;
+ hi->driver_one_wire_exist = 0;
+ atomic_set(&hi->btn_state, 0);
+
+ hi->tty_enable_flag = 0;
+ hi->fm_flag = 0;
+ hi->debug_flag = 0;
+ hi->key_code_1wire_index = 0;
+ hi->onewire_key_delay = HS_JIFFIES_1WIRE_BUTTON;
+
+ mutex_init(&hi->mutex_lock);
+
+ hi->sdev_h2w.name = "h2w";
+ hi->sdev_h2w.print_name = h2w_print_name;
+
+ ret = switch_dev_register(&hi->sdev_h2w);
+ if (ret < 0)
+ goto err_h2w_switch_dev_register;
+
+ detect_wq = create_workqueue("detect");
+ if (detect_wq == NULL) {
+ ret = -ENOMEM;
+ HS_ERR("Failed to create detect workqueue");
+ goto err_create_detect_work_queue;
+ }
+
+ button_wq = create_workqueue("button");
+ if (button_wq == NULL) {
+ ret = -ENOMEM;
+ HS_ERR("Failed to create button workqueue");
+ goto err_create_button_work_queue;
+ }
+
+ hi->input = input_allocate_device();
+ if (!hi->input) {
+ ret = -ENOMEM;
+ goto err_request_input_dev;
+ }
+
+ hi->input->name = "h2w headset";
+ set_bit(EV_SYN, hi->input->evbit);
+ set_bit(EV_KEY, hi->input->evbit);
+ set_bit(KEY_END, hi->input->keybit);
+ set_bit(KEY_MUTE, hi->input->keybit);
+ set_bit(KEY_VOLUMEDOWN, hi->input->keybit);
+ set_bit(KEY_VOLUMEUP, hi->input->keybit);
+ set_bit(KEY_NEXTSONG, hi->input->keybit);
+ set_bit(KEY_PLAYPAUSE, hi->input->keybit);
+ set_bit(KEY_PREVIOUSSONG, hi->input->keybit);
+ set_bit(KEY_MEDIA, hi->input->keybit);
+ set_bit(KEY_SEND, hi->input->keybit);
+ set_bit(KEY_FASTFORWARD, hi->input->keybit);
+ set_bit(KEY_REWIND, hi->input->keybit);
+ set_bit(KEY_VOICECOMMAND, hi->input->keybit);
+
+ ret = input_register_device(hi->input);
+ if (ret < 0)
+ goto err_register_input_dev;
+
+ ret = register_attributes();
+ if (ret)
+ goto err_register_attributes;
+
+#ifdef HTC_HEADSET_CONFIG_MSM_RPC
+ if (hi->pdata.driver_flag & DRIVER_HS_MGR_RPC_SERVER) {
+ /* Create RPC server */
+ ret = msm_rpc_create_server(&hs_rpc_server);
+ if (ret < 0) {
+ HS_ERR("Failed to create RPC server");
+ goto err_create_rpc_server;
+ }
+ HS_LOG("Create RPC server successfully");
+ }
+#else
+ HS_DBG("NOT support RPC");
+#endif
+
+ headset_mgr_init();
+ hs_notify_driver_ready(DRIVER_NAME);
+
+ HS_LOG("--------------------");
+
+ return 0;
+
+#ifdef HTC_HEADSET_CONFIG_MSM_RPC
+err_create_rpc_server:
+#endif
+
+err_register_attributes:
+ input_unregister_device(hi->input);
+
+err_register_input_dev:
+ input_free_device(hi->input);
+
+err_request_input_dev:
+ destroy_workqueue(button_wq);
+
+err_create_button_work_queue:
+ destroy_workqueue(detect_wq);
+
+err_create_detect_work_queue:
+ switch_dev_unregister(&hi->sdev_h2w);
+
+err_h2w_switch_dev_register:
+ mutex_destroy(&hi->mutex_lock);
+ wake_lock_destroy(&hi->hs_wake_lock);
+ kfree(hi);
+
+ HS_ERR("Failed to register %s driver", DRIVER_NAME);
+
+ return ret;
+}
+
+static int htc_headset_mgr_remove(struct platform_device *pdev)
+{
+#if 0
+ if ((switch_get_state(&hi->sdev_h2w) & MASK_HEADSET) != 0)
+ remove_headset();
+#endif
+
+ unregister_attributes();
+ input_unregister_device(hi->input);
+ destroy_workqueue(button_wq);
+ destroy_workqueue(detect_wq);
+ switch_dev_unregister(&hi->sdev_h2w);
+ mutex_destroy(&hi->mutex_lock);
+ wake_lock_destroy(&hi->hs_wake_lock);
+ kfree(hi);
+
+ return 0;
+}
+
+static struct platform_driver htc_headset_mgr_driver = {
+ .probe = htc_headset_mgr_probe,
+ .remove = htc_headset_mgr_remove,
+ .suspend = htc_headset_mgr_suspend,
+ .resume = htc_headset_mgr_resume,
+ .driver = {
+ .name = "HTC_HEADSET_MGR",
+ .owner = THIS_MODULE,
+ },
+};
+
+
+static int __init htc_headset_mgr_init(void)
+{
+ return platform_driver_register(&htc_headset_mgr_driver);
+}
+
+static void __exit htc_headset_mgr_exit(void)
+{
+ platform_driver_unregister(&htc_headset_mgr_driver);
+}
+
+late_initcall(htc_headset_mgr_init);
+module_exit(htc_headset_mgr_exit);
+
+MODULE_DESCRIPTION("HTC headset manager driver");
+MODULE_LICENSE("GPL");
diff --git a/drivers/misc/headset/htc_headset_one_wire.c b/drivers/misc/headset/htc_headset_one_wire.c
new file mode 100644
index 0000000..a09920f
--- /dev/null
+++ b/drivers/misc/headset/htc_headset_one_wire.c
@@ -0,0 +1,422 @@
+/*
+ *
+ * drivers/misc/headset/htc_headset_one_wire.c
+ *
+ * HTC 1-wire headset driver.
+ *
+ * Copyright (C) 2010 HTC, Inc.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+#include <linux/gpio.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/platform_device.h>
+#include <linux/rtc.h>
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include <linux/module.h>
+#include <linux/termios.h>
+#include <linux/tty.h>
+
+#include <linux/htc_headset_mgr.h>
+#include <linux/htc_headset_one_wire.h>
+
+#define DRIVER_NAME "HS_1WIRE"
+#define hr_msleep(a) msleep(a)
+
+static struct workqueue_struct *onewire_wq;
+static void onewire_init_work_func(struct work_struct *work);
+static void onewire_closefile_work_func(struct work_struct *work);
+static DECLARE_DELAYED_WORK(onewire_init_work, onewire_init_work_func);
+static DECLARE_DELAYED_WORK(onewire_closefile_work, onewire_closefile_work_func);
+
+static struct htc_35mm_1wire_info *hi;
+static struct file *fp;
+int fp_count;
+static inline void usleep(unsigned long usecs)
+{
+ usleep_range(usecs, usecs);
+}
+
+static struct file *openFile(char *path, int flag, int mode)
+{
+ mm_segment_t old_fs;
+ mutex_lock(&hi->mutex_lock);
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ HS_LOG("Open: fp count = %d", ++fp_count);
+ fp = filp_open(path, flag, mode);
+ set_fs(old_fs);
+ if (IS_ERR(fp)) {
+ /* filp_open() returns ERR_PTR() on failure, never NULL */
+ HS_LOG("File Open Error:%s", path);
+ HS_LOG("Open failed: fp count = %d", --fp_count);
+ fp = NULL;
+ mutex_unlock(&hi->mutex_lock);
+ return NULL;
+ }
+
+ if (!fp->f_op)
+ HS_LOG("File Operation Method Error!!");
+
+ return fp;
+}
+
+static int readFile(struct file *fp,char *buf,int readlen)
+{
+ int ret;
+ mm_segment_t old_fs;
+ old_fs = get_fs();
+ if (fp && fp->f_op && fp->f_op->read) {
+ set_fs(KERNEL_DS);
+ ret = fp->f_op->read(fp,buf,readlen, &fp->f_pos);
+ set_fs(old_fs);
+ return ret;
+ } else
+ return -1;
+}
+
+static int writeFile(struct file *fp, char *buf, int writelen)
+{
+ int ret;
+ mm_segment_t old_fs;
+ old_fs = get_fs();
+ if (fp && fp->f_op && fp->f_op->write) {
+ set_fs(KERNEL_DS);
+ ret = fp->f_op->write(fp, buf, writelen, &fp->f_pos);
+ set_fs(old_fs);
+ return ret;
+ } else
+ return -1;
+}
+
+static void setup_hs_tty(struct file *tty_fp)
+{
+ struct termios hs_termios;
+ mm_segment_t old_fs;
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ tty_ioctl(tty_fp, TCGETS, (unsigned long)&hs_termios);
+ hs_termios.c_iflag &= ~(IGNBRK|BRKINT|PARMRK|ISTRIP|INLCR|IGNCR|ICRNL|IXON);
+ hs_termios.c_oflag &= ~OPOST;
+ hs_termios.c_lflag &= ~(ECHO|ECHONL|ICANON|ISIG|IEXTEN);
+ hs_termios.c_cflag &= ~(CSIZE|CBAUD|PARENB|CSTOPB);
+ hs_termios.c_cflag |= (CREAD|CS8|CLOCAL|CRTSCTS|B38400);
+ tty_ioctl(tty_fp, TCSETS, (unsigned long)&hs_termios);
+ set_fs(old_fs);
+}
+
+int closeFile(struct file *fp)
+{
+ mm_segment_t old_fs;
+ old_fs = get_fs();
+ set_fs(KERNEL_DS);
+ HS_LOG("Close: fp count = %d", --fp_count);
+ filp_close(fp,NULL);
+ set_fs(old_fs);
+ mutex_unlock(&hi->mutex_lock);
+ return 0;
+}
+
+static void onewire_init_work_func(struct work_struct *work)
+{
+ HS_LOG("Open %s", hi->pdata.onewire_tty_dev);
+ fp = openFile(hi->pdata.onewire_tty_dev,O_CREAT|O_RDWR|O_NONBLOCK,0666);
+ if (fp != NULL) {
+ if (!fp->private_data)
+ HS_LOG("No private data");
+ else {
+ HS_LOG("Private data exist");
+ }
+ closeFile(fp);
+ return;
+ } else
+ HS_LOG("%s: openFile returned NULL\n", __func__);
+}
+
+static void onewire_closefile_work_func(struct work_struct *work)
+{
+ if (fp)
+ closeFile(fp);
+}
+
+static int hs_read_aid(void)
+{
+ char in_buf[10];
+ int read_count, retry, i;
+ for (retry = 0; retry < 3; retry++) {
+ read_count = readFile(fp, in_buf, 10);
+ HS_LOG("[1wire]read_count = %d", read_count);
+ if (read_count > 0) {
+ for (i = 0; i < read_count; i++) {
+ HS_LOG("[1wire]in_buf[%d] = 0x%x", i, in_buf[i]);
+ if ( (in_buf[i] & 0xF0) == 0x80 && in_buf[i] > 0x80) {
+ hi->aid = in_buf[i];
+ return 0;
+ }
+ }
+ }
+ }
+ return -1;
+}
+
+static int hs_1wire_query(int type)
+{
+ return 0; /* TODO: not yet implemented */
+}
+
+static int hs_1wire_open(void)
+{
+ int ret;
+ ret = cancel_delayed_work_sync(&onewire_closefile_work);
+ HS_LOG("[1-wire]hs_1wire_open");
+ if (!ret) {
+ HS_LOG("Cancel fileclose_work failed, ret = %d", ret);
+ fp = openFile(hi->pdata.onewire_tty_dev,O_CREAT|O_RDWR|O_NONBLOCK,0666);
+ }
+ queue_delayed_work(onewire_wq, &onewire_closefile_work, msecs_to_jiffies(2000));
+ if (!fp)
+ return -1;
+ return 0;
+}
+
+static int hs_1wire_read_key(void)
+{
+ char key_code[10];
+ int read_count, retry, i;
+
+ if (hs_1wire_open() != 0)
+ return -1;
+ for (retry = 0; retry < 3; retry++) {
+ read_count = readFile(fp, key_code, 10);
+ HS_LOG("[1wire]key read_count = %d", read_count);
+ if (read_count > 0) {
+ for (i = 0; i < read_count; i++) {
+ HS_LOG("[1wire]key_code[%d] = 0x%x", i, key_code[i]);
+ if (key_code[i] == hi->pdata.one_wire_remote[0])
+ return 1;
+ else if (key_code[i] == hi->pdata.one_wire_remote[2])
+ return 2;
+ else if (key_code[i] == hi->pdata.one_wire_remote[4])
+ return 3;
+ else if (key_code[i] == hi->pdata.one_wire_remote[1])
+ return 0;
+ else
+ HS_LOG("Non key data, dropped");
+ }
+ }
+ hr_msleep(50);
+ }
+ return -1;
+}
+
+static int hs_1wire_init(void)
+{
+ char all_zero = 0;
+ char send_data = 0xF5;
+
+ HS_LOG("[1-wire]hs_1wire_init");
+ fp = openFile(hi->pdata.onewire_tty_dev, O_CREAT|O_RDWR|O_SYNC|O_NONBLOCK, 0666);
+ HS_LOG("Open %s", hi->pdata.onewire_tty_dev);
+ if (fp != NULL) {
+ if (!fp->private_data) {
+ HS_LOG("No private data");
+ if (hi->pdata.tx_level_shift_en)
+ gpio_set_value_cansleep(hi->pdata.tx_level_shift_en, 1);
+ if (hi->pdata.uart_sw)
+ gpio_set_value_cansleep(hi->pdata.uart_sw, 0);
+ hi->aid = 0;
+ closeFile(fp);
+ return -1;
+ }
+ } else {
+ HS_LOG("%s: openFile returned NULL\n", __func__);
+ return -1;
+ }
+ setup_hs_tty(fp);
+ HS_LOG("Setup HS tty");
+ if (hi->pdata.tx_level_shift_en) {
+ gpio_set_value_cansleep(hi->pdata.tx_level_shift_en, 0); /*Level shift low enable*/
+ HS_LOG("[HS]set tx_level_shift_en to 0");
+ }
+ if (hi->pdata.uart_sw) {
+ gpio_set_value_cansleep(hi->pdata.uart_sw, 1); /* 1: UART I/O to Audio Jack, 0: UART I/O to others */
+ HS_LOG("[HS]Set uart sw = 1");
+ }
+ hi->aid = 0;
+ hr_msleep(20);
+ writeFile(fp, &all_zero, 1);
+ hr_msleep(5);
+ writeFile(fp, &send_data, 1);
+// if (hi->pdata.remote_press) {
+// while(gpio_get_value(hi->pdata.remote_press) == 1) {
+// HS_LOG("[HS]Polling remote_press low");
+// }
+// }
+ HS_LOG("Send 0x00 0xF5");
+ usleep(300);
+ if (hi->pdata.tx_level_shift_en)
+ gpio_set_value_cansleep(hi->pdata.tx_level_shift_en, 1);
+ HS_LOG("[HS]Disable level shift");
+ hr_msleep(22);
+ if (hs_read_aid() == 0) {
+ HS_LOG("[1-wire]Valid AID received, enter 1-wire mode");
+ if (hi->pdata.tx_level_shift_en)
+ gpio_set_value_cansleep(hi->pdata.tx_level_shift_en, 1);
+ closeFile(fp);
+ return 0;
+ } else {
+ if (hi->pdata.tx_level_shift_en)
+ gpio_set_value_cansleep(hi->pdata.tx_level_shift_en, 1);
+ if (hi->pdata.uart_sw)
+ gpio_set_value_cansleep(hi->pdata.uart_sw, 0);
+ hi->aid = 0;
+ closeFile(fp);
+ return -1;
+ }
+}
+
+static void hs_1wire_deinit(void)
+{
+ if (fp) {
+ closeFile(fp);
+ fp = NULL;
+ }
+}
+
+static int hs_1wire_report_type(char **string)
+{
+ const int type_num = 3; /*How many 1-wire accessories supported*/
+ char *hs_type[] = {
+ "headset_beats_20",
+ "headset_mic_midtier",
+ "headset_mic_oneseg",
+ };
+ hi->aid &= 0x7f;
+ HS_LOG("[1wire]AID = 0x%x", hi->aid);
+ if (hi->aid > type_num || hi->aid < 1) {
+ *string = "1wire_unknown";
+ return 14;
+ } else {
+ *string = hs_type[hi->aid - 1];
+ /* sizeof() on a char * yields the pointer size; report the string length instead */
+ HS_LOG("Report %s type, size %zu", *string, strlen(*string) + 1);
+ return strlen(*string) + 1;
+ }
+}
+
+static void hs_1wire_register(void)
+{
+ struct headset_notifier notifier;
+
+ notifier.id = HEADSET_REG_1WIRE_INIT;
+ notifier.func = hs_1wire_init;
+ headset_notifier_register(&notifier);
+
+ notifier.id = HEADSET_REG_1WIRE_QUERY;
+ notifier.func = hs_1wire_query;
+ headset_notifier_register(&notifier);
+
+ notifier.id = HEADSET_REG_1WIRE_READ_KEY;
+ notifier.func = hs_1wire_read_key;
+ headset_notifier_register(&notifier);
+
+ notifier.id = HEADSET_REG_1WIRE_DEINIT;
+ notifier.func = hs_1wire_deinit;
+ headset_notifier_register(&notifier);
+
+ notifier.id = HEADSET_REG_1WIRE_OPEN;
+ notifier.func = hs_1wire_open;
+ headset_notifier_register(&notifier);
+
+ notifier.id = HEADSET_REG_1WIRE_REPORT_TYPE;
+ notifier.func = hs_1wire_report_type;
+ headset_notifier_register(&notifier);
+}
+
+void one_wire_gpio_tx(int enable)
+{
+ HS_LOG("Set gpio[%d] = %d", hi->pdata.uart_tx, enable);
+ gpio_set_value(hi->pdata.uart_tx, enable);
+}
+
+void one_wire_lv_en(int enable)
+{
+ /* The TX level shifter is enabled active-low, so drive 0 regardless of 'enable' */
+ gpio_set_value(hi->pdata.tx_level_shift_en, 0);
+}
+
+void one_wire_uart_sw(int enable)
+{
+ gpio_set_value(hi->pdata.uart_sw, enable);
+}
+
+static int htc_headset_1wire_probe(struct platform_device *pdev)
+{
+ struct htc_headset_1wire_platform_data *pdata = pdev->dev.platform_data;
+
+ if (!pdata)
+ return -EINVAL;
+
+ HS_LOG("1-wire probe starts");
+
+ hi = kzalloc(sizeof(struct htc_35mm_1wire_info), GFP_KERNEL);
+ if (!hi)
+ return -ENOMEM;
+
+ hi->pdata.tx_level_shift_en = pdata->tx_level_shift_en;
+ hi->pdata.uart_sw = pdata->uart_sw;
+ if (pdata->one_wire_remote[5])
+ memcpy(hi->pdata.one_wire_remote, pdata->one_wire_remote,
+ sizeof(hi->pdata.one_wire_remote));
+ hi->pdata.uart_tx = pdata->uart_tx;
+ hi->pdata.uart_rx = pdata->uart_rx;
+ hi->pdata.remote_press = pdata->remote_press;
+ fp_count = 0;
+ strncpy(hi->pdata.onewire_tty_dev, pdata->onewire_tty_dev, 15);
+ HS_LOG("1wire tty device %s", hi->pdata.onewire_tty_dev);
+ onewire_wq = create_workqueue("ONEWIRE_WQ");
+ if (onewire_wq == NULL) {
+ HS_ERR("Failed to create onewire workqueue");
+ kfree(hi);
+ return -ENOMEM;
+ }
+ mutex_init(&hi->mutex_lock);
+ hs_1wire_register();
+ queue_delayed_work(onewire_wq, &onewire_init_work, msecs_to_jiffies(3000));
+ hs_notify_driver_ready(DRIVER_NAME);
+
+ HS_LOG("--------------------");
+
+ return 0;
+}
+
+static int htc_headset_1wire_remove(struct platform_device *pdev)
+{
+ return 0;
+}
+
+
+static struct platform_driver htc_headset_1wire_driver = {
+ .probe = htc_headset_1wire_probe,
+ .remove = htc_headset_1wire_remove,
+ .driver = {
+ .name = "HTC_HEADSET_1WIRE",
+ .owner = THIS_MODULE,
+ },
+};
+
+static int __init htc_headset_1wire_init(void)
+{
+ return platform_driver_register(&htc_headset_1wire_driver);
+}
+
+static void __exit htc_headset_1wire_exit(void)
+{
+ platform_driver_unregister(&htc_headset_1wire_driver);
+}
+
+late_initcall(htc_headset_1wire_init);
+module_exit(htc_headset_1wire_exit);
+
+MODULE_DESCRIPTION("HTC 1-wire headset driver");
+MODULE_LICENSE("GPL");
diff --git a/drivers/misc/headset/htc_headset_pmic.c b/drivers/misc/headset/htc_headset_pmic.c
new file mode 100644
index 0000000..57e47e2
--- /dev/null
+++ b/drivers/misc/headset/htc_headset_pmic.c
@@ -0,0 +1,859 @@
+/*
+ *
+ * drivers/misc/headset/htc_headset_pmic.c
+ *
+ * HTC PMIC headset driver.
+ *
+ * Copyright (C) 2010 HTC, Inc.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/gpio.h>
+#include <linux/module.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/platform_device.h>
+#include <linux/rtc.h>
+#include <linux/slab.h>
+#include <linux/hrtimer.h>
+
+#include <linux/mfd/pm8xxx/core.h>
+#ifdef HTC_HEADSET_CONFIG_MSM_RPC
+#include <mach/msm_rpcrouter.h>
+#endif
+#include <linux/htc_headset_mgr.h>
+#include <linux/htc_headset_pmic.h>
+
+#ifdef HTC_HEADSET_CONFIG_PMIC_8XXX_ADC
+#include <linux/mfd/pm8xxx/pm8xxx-adc.h>
+#endif
+
+#define DRIVER_NAME "HS_PMIC"
+
+#ifdef HTC_HEADSET_CONFIG_PMIC_TPS80032_ADC
+#include <linux/iio/consumer.h>
+#include <linux/iio/types.h>
+static struct iio_channel *adc_channel;
+#endif
+
+#ifdef CONFIG_HEADSET_DEBUG_UART
+static bool hpin_irq_disabled;
+#endif
+static struct workqueue_struct *detect_wq;
+static void detect_pmic_work_func(struct work_struct *work);
+//static DECLARE_DELAYED_WORK(detect_pmic_work, detect_pmic_work_func);
+
+static void irq_init_work_func(struct work_struct *work);
+static DECLARE_DELAYED_WORK(irq_init_work, irq_init_work_func);
+
+static struct workqueue_struct *button_wq;
+static void button_pmic_work_func(struct work_struct *work);
+static DECLARE_DELAYED_WORK(button_pmic_work, button_pmic_work_func);
+static unsigned int hpin_count_global;
+static struct htc_35mm_pmic_info *hi;
+static struct detect_pmic_work_info {
+ struct delayed_work hpin_work;
+ unsigned int intr_count;
+ unsigned int insert;
+} detect_pmic_work;
+
+#ifdef HTC_HEADSET_CONFIG_MSM_RPC
+static struct msm_rpc_endpoint *endpoint_adc;
+static struct msm_rpc_endpoint *endpoint_current;
+
+static struct hs_pmic_current_threshold current_threshold_lut[] = {
+ {
+ .adc_max = 14909, /* 0x3A3D */
+ .adc_min = 0, /* 0x0000 */
+ .current_uA = 500,
+ },
+ {
+ .adc_max = 29825, /* 0x7481 */
+ .adc_min = 14910, /* 0x3A3E */
+ .current_uA = 600,
+ },
+ {
+ .adc_max = 65535, /* 0xFFFF */
+ .adc_min = 29826, /* 0x7482 */
+ .current_uA = 500,
+ },
+};
+#endif
+
+static enum hrtimer_restart hs_hpin_irq_enable_func(struct hrtimer *timer)
+{
+ HS_LOG("Re-Enable HPIN IRQ");
+ enable_irq(hi->pdata.hpin_irq);
+ return HRTIMER_NORESTART;
+}
+
+static int hs_pmic_hpin_state(void)
+{
+ HS_DBG();
+
+ return gpio_get_value(hi->pdata.hpin_gpio);
+}
+
+#ifdef HTC_HEADSET_CONFIG_MSM_RPC
+static int hs_pmic_remote_threshold(uint32_t adc)
+{
+ int i = 0;
+ int ret = 0;
+ int array_size = 0;
+ uint32_t status;
+ struct hs_pmic_rpc_request req;
+ struct hs_pmic_rpc_reply rep;
+
+ HS_DBG();
+
+ if (!(hi->pdata.driver_flag & DRIVER_HS_PMIC_DYNAMIC_THRESHOLD))
+ return 0;
+
+ req.hs_controller = cpu_to_be32(hi->pdata.hs_controller);
+ req.hs_switch = cpu_to_be32(hi->pdata.hs_switch);
+ req.current_uA = cpu_to_be32(HS_PMIC_HTC_CURRENT_THRESHOLD);
+
+ array_size = ARRAY_SIZE(current_threshold_lut);
+
+ for (i = 0; i < array_size; i++) {
+ if (adc >= current_threshold_lut[i].adc_min &&
+ adc <= current_threshold_lut[i].adc_max)
+ req.current_uA = cpu_to_be32(current_threshold_lut[i].
+ current_uA);
+ }
+
+ ret = msm_rpc_call_reply(endpoint_current,
+ HS_PMIC_RPC_CLIENT_PROC_THRESHOLD,
+ &req, sizeof(req), &rep, sizeof(rep),
+ HS_RPC_TIMEOUT);
+
+ if (ret < 0) {
+ HS_ERR("Failed to send remote threshold RPC");
+ return 0;
+ } else {
+ status = be32_to_cpu(rep.status);
+ if (status != HS_PMIC_RPC_ERR_SUCCESS) {
+ HS_ERR("Failed to set remote threshold");
+ return 0;
+ }
+ }
+
+ HS_LOG("Set remote threshold (%u, %u, %u)", hi->pdata.hs_controller,
+ hi->pdata.hs_switch, be32_to_cpu(req.current_uA));
+
+ return 1;
+}
+
+static int hs_pmic_remote_adc(int *adc)
+{
+ int ret = 0;
+ struct rpc_request_hdr req;
+ struct hs_rpc_client_rep_adc rep;
+
+ HS_DBG();
+
+ ret = msm_rpc_call_reply(endpoint_adc, HS_RPC_CLIENT_PROC_ADC,
+ &req, sizeof(req), &rep, sizeof(rep),
+ HS_RPC_TIMEOUT);
+ if (ret < 0) {
+ *adc = -1;
+ HS_LOG("Failed to read remote ADC");
+ return 0;
+ }
+
+ *adc = (int) be32_to_cpu(rep.adc);
+ HS_LOG("Remote ADC %d (0x%X)", *adc, *adc);
+
+ return 1;
+}
+#endif
+
+#ifdef HTC_HEADSET_CONFIG_PMIC_8XXX_ADC
+static int hs_pmic_remote_adc_pm8921(int *adc)
+{
+ struct pm8xxx_adc_chan_result result;
+
+ HS_DBG();
+
+ result.physical = -EINVAL;
+ pm8xxx_adc_mpp_config_read(hi->pdata.adc_mpp, hi->pdata.adc_amux,
+ &result);
+ *adc = (int) result.physical;
+ *adc = *adc / 1000; /* uV to mV */
+ HS_LOG("Remote ADC %d (0x%X)", *adc, *adc);
+
+ return 1;
+}
+#endif
+
+#ifdef HTC_HEADSET_CONFIG_PMIC_TPS80032_ADC
+static int hs_pmic_remote_adc_tps80032(int *adc)
+{
+ int ret;
+
+ if (!adc_channel) {
+ HS_LOG("adc_channel is NULL");
+ /* Report failure so the caller does not use an uninitialized ADC value */
+ return 0;
+ }
+
+ ret = iio_read_channel_processed(adc_channel, adc);
+ if (ret < 0)
+ ret = iio_read_channel_raw(adc_channel, adc);
+ HS_LOG("Remote ADC %d (0x%X)", *adc, *adc);
+ return 1;
+}
+#endif
+
+static int hs_pmic_mic_status(void)
+{
+ int adc = 0;
+ int mic = HEADSET_UNKNOWN_MIC;
+
+ HS_DBG();
+
+#ifdef HTC_HEADSET_CONFIG_MSM_RPC
+ if (!hs_pmic_remote_adc(&adc))
+ return HEADSET_UNKNOWN_MIC;
+
+ if (hi->pdata.driver_flag & DRIVER_HS_PMIC_DYNAMIC_THRESHOLD)
+ hs_pmic_remote_threshold((unsigned int) adc);
+#endif
+
+#ifdef HTC_HEADSET_CONFIG_PMIC_8XXX_ADC
+ if (!hs_pmic_remote_adc_pm8921(&adc))
+ return HEADSET_UNKNOWN_MIC;
+#endif
+
+#ifdef HTC_HEADSET_CONFIG_PMIC_TPS80032_ADC
+ if (!hs_pmic_remote_adc_tps80032(&adc))
+ return HEADSET_UNKNOWN_MIC;
+#endif
+
+ if (adc >= hi->pdata.adc_mic_bias[0] &&
+ adc <= hi->pdata.adc_mic_bias[1])
+ mic = HEADSET_MIC;
+ else if (adc < hi->pdata.adc_mic_bias[0])
+ mic = HEADSET_NO_MIC;
+ else
+ mic = HEADSET_UNKNOWN_MIC;
+
+ return mic;
+}
+
+static int hs_pmic_adc_to_keycode(int adc)
+{
+ int key_code = HS_MGR_KEY_INVALID;
+
+ HS_DBG();
+
+ if (!hi->pdata.adc_remote[7])
+ return HS_MGR_KEY_INVALID;
+
+ if (adc >= hi->pdata.adc_remote[0] &&
+ adc <= hi->pdata.adc_remote[1])
+ key_code = HS_MGR_KEY_PLAY;
+ else if (adc >= hi->pdata.adc_remote[2] &&
+ adc <= hi->pdata.adc_remote[3])
+ key_code = HS_MGR_KEY_ASSIST;
+ else if (adc >= hi->pdata.adc_remote[4] &&
+ adc <= hi->pdata.adc_remote[5])
+ key_code = HS_MGR_KEY_VOLUP;
+ else if (adc >= hi->pdata.adc_remote[6] &&
+ adc <= hi->pdata.adc_remote[7])
+ key_code = HS_MGR_KEY_VOLDOWN;
+ else if (adc > hi->pdata.adc_remote[7])
+ key_code = HS_MGR_KEY_NONE;
+
+ if (key_code != HS_MGR_KEY_INVALID)
+ HS_LOG("Key code %d", key_code);
+ else
+ HS_LOG("Unknown key code %d", key_code);
+
+ return key_code;
+}
+
+static void hs_pmic_rpc_key(int adc)
+{
+ int key_code = hs_pmic_adc_to_keycode(adc);
+
+ HS_DBG();
+
+ if (key_code != HS_MGR_KEY_INVALID)
+ hs_notify_key_event(key_code);
+}
+
+static void hs_pmic_key_enable(int enable)
+{
+ HS_DBG();
+
+ if (hi->pdata.key_enable_gpio)
+ gpio_set_value(hi->pdata.key_enable_gpio, enable);
+}
+
+static void detect_pmic_work_func(struct work_struct *work)
+{
+ struct detect_pmic_work_info *detect_pmic_work_ptr;
+ detect_pmic_work_ptr = container_of(work, struct detect_pmic_work_info, hpin_work.work);
+
+ HS_DBG();
+
+ hs_notify_plug_event(detect_pmic_work_ptr->insert, detect_pmic_work_ptr->intr_count);
+}
+
+static irqreturn_t detect_irq_handler(int irq, void *data)
+{
+ unsigned int irq_mask = IRQF_TRIGGER_HIGH | IRQF_TRIGGER_LOW;
+ unsigned int hpin_count_local;
+
+ disable_irq_nosync(hi->pdata.hpin_irq);
+ HS_LOG("Disable HPIN IRQ");
+ hrtimer_start(&hi->timer, ktime_set(0, 200*NSEC_PER_MSEC), HRTIMER_MODE_REL);
+ hpin_count_local = hpin_count_global++;
+ detect_pmic_work.insert = gpio_get_value(hi->pdata.hpin_gpio);
+ HS_LOG("HPIN++%d++, value = %d, trigger_type = 0x%x", hpin_count_local, detect_pmic_work.insert, hi->hpin_irq_type);
+ hs_notify_hpin_irq();
+ HS_DBG();
+
+ if (!(hi->pdata.driver_flag & DRIVER_HS_PMIC_EDGE_IRQ)) {
+ if (hi->hpin_irq_type == IRQF_TRIGGER_LOW)
+ detect_pmic_work.insert = 1;
+ else
+ detect_pmic_work.insert = 0;
+ hi->hpin_irq_type ^= irq_mask;
+ set_irq_type(hi->pdata.hpin_irq, hi->hpin_irq_type);
+ }
+
+ wake_lock_timeout(&hi->hs_wake_lock, HS_WAKE_LOCK_TIMEOUT);
+ detect_pmic_work.intr_count = hpin_count_local;
+ queue_delayed_work(detect_wq, &detect_pmic_work.hpin_work, hi->hpin_debounce);
+
+ HS_LOG("HPIN--%d--, insert = %d, trigger_type = 0x%x", hpin_count_local, detect_pmic_work.insert, hi->hpin_irq_type);
+ return IRQ_HANDLED;
+}
+
+static void button_pmic_work_func(struct work_struct *work)
+{
+ HS_DBG();
+ hs_notify_key_irq();
+}
+
+static irqreturn_t button_irq_handler(int irq, void *dev_id)
+{
+ unsigned int irq_mask = IRQF_TRIGGER_HIGH | IRQF_TRIGGER_LOW;
+
+ HS_DBG();
+ if (!(hi->pdata.driver_flag & DRIVER_HS_PMIC_EDGE_IRQ)) {
+ hi->key_irq_type ^= irq_mask;
+ set_irq_type(hi->pdata.key_irq, hi->key_irq_type);
+ }
+ wake_lock_timeout(&hi->hs_wake_lock, HS_WAKE_LOCK_TIMEOUT);
+ queue_delayed_work(button_wq, &button_pmic_work, HS_JIFFIES_ZERO);
+ return IRQ_HANDLED;
+}
+
+#ifdef CONFIG_HEADSET_DEBUG_UART
+static irqreturn_t debug_irq_handler(int irq, void *data)
+{
+ unsigned int irq_mask = IRQF_TRIGGER_HIGH | IRQF_TRIGGER_LOW;
+ if (hi->pdata.headset_get_debug) {
+ if (hi->pdata.headset_get_debug()) {
+ HS_LOG("HEADSET_DEBUG_EN on");
+ if (!hpin_irq_disabled) {
+ disable_irq_nosync(hi->pdata.hpin_irq);
+ irq_set_irq_wake(hi->pdata.hpin_irq, 0);
+ HS_LOG("Disable HPIN IRQ");
+ hpin_irq_disabled = true;
+ }
+ hi->debug_irq_type = IRQF_TRIGGER_LOW;
+ set_irq_type(hi->pdata.debug_irq, hi->debug_irq_type);
+ } else {
+ HS_LOG("HEADSET_DEBUG_EN off");
+ detect_pmic_work.insert = gpio_get_value(hi->pdata.hpin_gpio);
+ if (!(hi->pdata.driver_flag & DRIVER_HS_PMIC_EDGE_IRQ)) {
+ if (hi->hpin_irq_type == IRQF_TRIGGER_LOW)
+ detect_pmic_work.insert = 1;
+ else
+ detect_pmic_work.insert = 0;
+ hi->hpin_irq_type ^= irq_mask;
+ set_irq_type(hi->pdata.hpin_irq, hi->hpin_irq_type);
+ }
+ if (hpin_irq_disabled) {
+ irq_set_irq_wake(hi->pdata.hpin_irq, 1);
+ enable_irq(hi->pdata.hpin_irq);
+ HS_LOG("Re-Enable HPIN IRQ");
+ hpin_irq_disabled = false;
+ }
+ hi->debug_irq_type = IRQF_TRIGGER_HIGH;
+ set_irq_type(hi->pdata.debug_irq, hi->debug_irq_type);
+ }
+ }
+
+ return IRQ_HANDLED;
+}
+#endif
+
+static void irq_init_work_func(struct work_struct *work)
+{
+ unsigned int irq_type = IRQF_TRIGGER_LOW;
+
+ HS_DBG();
+
+ if (hi->pdata.driver_flag & DRIVER_HS_PMIC_EDGE_IRQ)
+ irq_type = IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING;
+
+ if (hi->pdata.hpin_gpio) {
+ HS_LOG("Enable detect IRQ");
+ hi->hpin_irq_type = irq_type;
+ set_irq_type(hi->pdata.hpin_irq, hi->hpin_irq_type);
+ enable_irq(hi->pdata.hpin_irq);
+ }
+
+ if (hi->pdata.key_gpio) {
+ HS_LOG("Setup button IRQ type");
+ hi->key_irq_type = irq_type;
+ set_irq_type(hi->pdata.key_irq, hi->key_irq_type);
+ if (set_irq_wake(hi->pdata.key_irq, 0) < 0)
+ HS_LOG("Disable remote key irq wake failed");
+ }
+
+#ifdef CONFIG_HEADSET_DEBUG_UART
+ if (hi->pdata.debug_gpio) {
+ HS_LOG("Enable debug IRQ");
+ hi->debug_irq_type = IRQF_TRIGGER_HIGH;
+ set_irq_type(hi->pdata.debug_irq, hi->debug_irq_type);
+ enable_irq(hi->pdata.debug_irq);
+ }
+#endif
+}
+
+static void hs_pmic_key_int_enable(int enable)
+{
+ static int enable_count = 0;
+ if (enable == 1 && enable_count == 0) {
+ enable_irq(hi->pdata.key_irq);
+ enable_count++;
+ if (set_irq_wake(hi->pdata.key_irq, 1) < 0)
+ HS_LOG("Enable remote key irq wake failed");
+ HS_LOG("Enable remote key irq and wake");
+ } else if (enable == 0 && enable_count == 1) {
+ disable_irq_nosync(hi->pdata.key_irq);
+ enable_count--;
+ if (set_irq_wake(hi->pdata.key_irq, 0) < 0)
+ HS_LOG("Disable remote key irq wake failed");
+ HS_LOG("Disable remote key irq and wake");
+ }
+ HS_LOG("enable_count = %d", enable_count);
+}
+
+static int hs_pmic_request_irq(unsigned int gpio, unsigned int *irq,
+ irq_handler_t handler, unsigned long flags,
+ const char *name, unsigned int wake)
+{
+ int ret = 0;
+
+ HS_DBG();
+
+ ret = gpio_request(gpio, name);
+ if (ret < 0)
+ return ret;
+
+ ret = gpio_direction_input(gpio);
+ if (ret < 0) {
+ gpio_free(gpio);
+ return ret;
+ }
+
+ if (!(*irq)) {
+ ret = gpio_to_irq(gpio);
+ if (ret < 0) {
+ gpio_free(gpio);
+ return ret;
+ }
+ *irq = (unsigned int) ret;
+ }
+
+ ret = request_any_context_irq(*irq, handler, flags, name, NULL);
+ if (ret < 0) {
+ gpio_free(gpio);
+ return ret;
+ }
+
+ ret = set_irq_wake(*irq, wake);
+ if (ret < 0) {
+ free_irq(*irq, 0);
+ gpio_free(gpio);
+ return ret;
+ }
+
+ return 1;
+}
+
+static void hs_pmic_register(void)
+{
+ struct headset_notifier notifier;
+
+ if (hi->pdata.hpin_gpio) {
+ notifier.id = HEADSET_REG_HPIN_GPIO;
+ notifier.func = hs_pmic_hpin_state;
+ headset_notifier_register(&notifier);
+ }
+
+ if ((hi->pdata.driver_flag & DRIVER_HS_PMIC_RPC_KEY) ||
+ (hi->pdata.driver_flag & DRIVER_HS_PMIC_ADC)) {
+#ifdef HTC_HEADSET_CONFIG_MSM_RPC
+ notifier.id = HEADSET_REG_REMOTE_ADC;
+ notifier.func = hs_pmic_remote_adc;
+ headset_notifier_register(&notifier);
+#endif
+
+#ifdef HTC_HEADSET_CONFIG_PMIC_8XXX_ADC
+ notifier.id = HEADSET_REG_REMOTE_ADC;
+ notifier.func = hs_pmic_remote_adc_pm8921;
+ headset_notifier_register(&notifier);
+#endif
+
+#ifdef HTC_HEADSET_CONFIG_PMIC_TPS80032_ADC
+ notifier.id = HEADSET_REG_REMOTE_ADC;
+ notifier.func = hs_pmic_remote_adc_tps80032;
+ headset_notifier_register(&notifier);
+#endif
+
+ notifier.id = HEADSET_REG_REMOTE_KEYCODE;
+ notifier.func = hs_pmic_adc_to_keycode;
+ headset_notifier_register(&notifier);
+
+ notifier.id = HEADSET_REG_RPC_KEY;
+ notifier.func = hs_pmic_rpc_key;
+ headset_notifier_register(&notifier);
+
+ notifier.id = HEADSET_REG_MIC_STATUS;
+ notifier.func = hs_pmic_mic_status;
+ headset_notifier_register(&notifier);
+ }
+
+ if (hi->pdata.key_enable_gpio) {
+ notifier.id = HEADSET_REG_KEY_ENABLE;
+ notifier.func = hs_pmic_key_enable;
+ headset_notifier_register(&notifier);
+ }
+
+
+ if (hi->pdata.key_gpio) {
+ notifier.id = HEADSET_REG_KEY_INT_ENABLE;
+ notifier.func = hs_pmic_key_int_enable;
+ headset_notifier_register(&notifier);
+ }
+}
+
+static ssize_t pmic_adc_debug_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ int adc = 0;
+
+#ifdef HTC_HEADSET_CONFIG_MSM_RPC
+ if (!hs_pmic_remote_adc(&adc))
+ adc = -1;
+#endif
+ HS_DBG("button ADC = %d", adc);
+ /* A sysfs show() must return the number of bytes written into buf */
+ return scnprintf(buf, PAGE_SIZE, "%d\n", adc);
+}
+
+
+static struct device_attribute dev_attr_pmic_headset_adc =
+ __ATTR_RO(pmic_adc_debug);
+
+int register_attributes(void)
+{
+ int ret = 0;
+ hi->pmic_dev = device_create(hi->htc_accessory_class,
+ NULL, 0, "%s", "pmic");
+ if (unlikely(IS_ERR(hi->pmic_dev))) {
+ ret = PTR_ERR(hi->pmic_dev);
+ hi->pmic_dev = NULL;
+ /* Without a device there is nothing to attach the attribute to */
+ return ret;
+ }
+
+ /*register the attributes */
+ ret = device_create_file(hi->pmic_dev, &dev_attr_pmic_headset_adc);
+ if (ret)
+ goto err_create_pmic_device_file;
+ return 0;
+
+err_create_pmic_device_file:
+ device_unregister(hi->pmic_dev);
+ HS_ERR("Failed to register pmic attribute file");
+ return ret;
+}
+static int htc_headset_pmic_probe(struct platform_device *pdev)
+{
+ int ret = 0;
+ struct htc_headset_pmic_platform_data *pdata = pdev->dev.platform_data;
+#ifdef HTC_HEADSET_CONFIG_MSM_RPC
+ uint32_t vers = 0;
+#endif
+
+ HS_LOG("++++++++++++++++++++");
+ if (!pdata)
+ return -EINVAL;
+
+ hi = kzalloc(sizeof(struct htc_35mm_pmic_info), GFP_KERNEL);
+ if (!hi)
+ return -ENOMEM;
+
+ hi->pdata.driver_flag = pdata->driver_flag;
+ hi->pdata.hpin_gpio = pdata->hpin_gpio;
+ hi->pdata.hpin_irq = pdata->hpin_irq;
+ hi->pdata.key_gpio = pdata->key_gpio;
+ hi->pdata.key_irq = pdata->key_irq;
+ hi->pdata.key_enable_gpio = pdata->key_enable_gpio;
+ hi->pdata.adc_mpp = pdata->adc_mpp;
+ hi->pdata.adc_amux = pdata->adc_amux;
+ hi->pdata.hs_controller = pdata->hs_controller;
+ hi->pdata.hs_switch = pdata->hs_switch;
+ hi->pdata.adc_mic = pdata->adc_mic;
+#ifdef CONFIG_HEADSET_DEBUG_UART
+ hi->pdata.debug_gpio = pdata->debug_gpio;
+ hi->pdata.debug_irq = pdata->debug_irq;
+ hi->pdata.headset_get_debug = pdata->headset_get_debug;
+#endif
+
+ hi->htc_accessory_class = hs_get_attribute_class();
+ hpin_count_global = 0;
+
+ register_attributes();
+ INIT_DELAYED_WORK(&detect_pmic_work.hpin_work, detect_pmic_work_func);
+ hrtimer_init(&hi->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ hi->timer.function = hs_hpin_irq_enable_func;
+
+ if (!hi->pdata.adc_mic)
+ hi->pdata.adc_mic = HS_DEF_MIC_ADC_16_BIT_MIN;
+
+ if (pdata->adc_mic_bias[0] && pdata->adc_mic_bias[1]) {
+ memcpy(hi->pdata.adc_mic_bias, pdata->adc_mic_bias,
+ sizeof(hi->pdata.adc_mic_bias));
+ hi->pdata.adc_mic = hi->pdata.adc_mic_bias[0];
+ } else {
+ hi->pdata.adc_mic_bias[0] = hi->pdata.adc_mic;
+ hi->pdata.adc_mic_bias[1] = HS_DEF_MIC_ADC_16_BIT_MAX;
+ }
+
+ if (pdata->adc_remote[5])
+ memcpy(hi->pdata.adc_remote, pdata->adc_remote,
+ sizeof(hi->pdata.adc_remote));
+
+ if (pdata->adc_metrico[0] && pdata->adc_metrico[1])
+ memcpy(hi->pdata.adc_metrico, pdata->adc_metrico,
+ sizeof(hi->pdata.adc_metrico));
+
+ hi->hpin_irq_type = IRQF_TRIGGER_LOW;
+ hi->hpin_debounce = HS_JIFFIES_ZERO;
+ hi->key_irq_type = IRQF_TRIGGER_LOW;
+#ifdef CONFIG_HEADSET_DEBUG_UART
+ hi->debug_irq_type = IRQF_TRIGGER_LOW;
+#endif
+
+ wake_lock_init(&hi->hs_wake_lock, WAKE_LOCK_SUSPEND, DRIVER_NAME);
+
+ detect_wq = create_workqueue("HS_PMIC_DETECT");
+ if (detect_wq == NULL) {
+ ret = -ENOMEM;
+ HS_ERR("Failed to create detect workqueue");
+ goto err_create_detect_work_queue;
+ }
+
+ button_wq = create_workqueue("HS_PMIC_BUTTON");
+ if (button_wq == NULL) {
+ ret = -ENOMEM;
+ HS_ERR("Failed to create button workqueue");
+ goto err_create_button_work_queue;
+ }
+
+ if (hi->pdata.hpin_gpio) {
+ ret = hs_pmic_request_irq(hi->pdata.hpin_gpio,
+ &hi->pdata.hpin_irq, detect_irq_handler,
+ hi->hpin_irq_type, "HS_PMIC_DETECT", 1);
+ if (ret < 0) {
+ HS_ERR("Failed to request PMIC HPIN IRQ (0x%X)", ret);
+ goto err_request_detect_irq;
+ }
+ disable_irq(hi->pdata.hpin_irq);
+ }
+ if (hi->pdata.key_gpio) {
+ ret = hs_pmic_request_irq(hi->pdata.key_gpio,
+ &hi->pdata.key_irq, button_irq_handler,
+ hi->key_irq_type, "HS_PMIC_BUTTON", 1);
+ if (ret < 0) {
+ HS_ERR("Failed to request PMIC button IRQ (0x%X)", ret);
+ goto err_request_button_irq;
+ }
+ disable_irq(hi->pdata.key_irq);
+ }
+#ifdef CONFIG_HEADSET_DEBUG_UART
+ if (hi->pdata.debug_gpio) {
+ ret = hs_pmic_request_irq(hi->pdata.debug_gpio,
+ &hi->pdata.debug_irq, debug_irq_handler,
+ hi->debug_irq_type, "HS_PMIC_DEBUG", 1);
+ if (ret < 0) {
+ HS_ERR("Failed to request PMIC DEBUG IRQ (0x%X)", ret);
+ goto err_request_debug_irq;
+ }
+ disable_irq(hi->pdata.debug_irq);
+ }
+#endif
+
+#ifdef HTC_HEADSET_CONFIG_PMIC_TPS80032_ADC
+ adc_channel = iio_channel_get(&pdev->dev, pdata->iio_channel_name);
+ if (IS_ERR(adc_channel)) {
+ dev_err(&pdev->dev, "%s: Failed to get channel %s, %ld\n",
+ __func__, pdata->iio_channel_name,
+ PTR_ERR(adc_channel));
+ adc_channel = NULL;
+ }
+ HS_LOG("%s: iio_channel_get(%s)\n", __func__, pdata->iio_channel_name);
+#endif
+
+
+#ifdef HTC_HEADSET_CONFIG_MSM_RPC
+ if (hi->pdata.driver_flag & DRIVER_HS_PMIC_RPC_KEY) {
+ /* Register ADC RPC client */
+ endpoint_adc = msm_rpc_connect(HS_RPC_CLIENT_PROG,
+ HS_RPC_CLIENT_VERS, 0);
+ if (IS_ERR(endpoint_adc)) {
+ hi->pdata.driver_flag &= ~DRIVER_HS_PMIC_RPC_KEY;
+ HS_LOG("Failed to register ADC RPC client");
+ } else
+ HS_LOG("Register ADC RPC client successfully");
+ }
+
+ if (hi->pdata.driver_flag & DRIVER_HS_PMIC_DYNAMIC_THRESHOLD) {
+ /* Register threshold RPC client */
+ vers = HS_PMIC_RPC_CLIENT_VERS_3_1;
+ endpoint_current = msm_rpc_connect_compatible(
+ HS_PMIC_RPC_CLIENT_PROG, vers, 0);
+ if (IS_ERR(endpoint_current)) {
+ vers = HS_PMIC_RPC_CLIENT_VERS_2_1;
+ endpoint_current = msm_rpc_connect(
+ HS_PMIC_RPC_CLIENT_PROG, vers, 0);
+ }
+ if (IS_ERR(endpoint_current)) {
+ vers = HS_PMIC_RPC_CLIENT_VERS_1_1;
+ endpoint_current = msm_rpc_connect(
+ HS_PMIC_RPC_CLIENT_PROG, vers, 0);
+ }
+ if (IS_ERR(endpoint_current)) {
+ vers = HS_PMIC_RPC_CLIENT_VERS;
+ endpoint_current = msm_rpc_connect(
+ HS_PMIC_RPC_CLIENT_PROG, vers, 0);
+ }
+ if (IS_ERR(endpoint_current)) {
+ hi->pdata.driver_flag &=
+ ~DRIVER_HS_PMIC_DYNAMIC_THRESHOLD;
+ HS_LOG("Failed to register threshold RPC client");
+ } else
+ HS_LOG("Register threshold RPC client successfully"
+ " (0x%X)", vers);
+ }
+#else
+ hi->pdata.driver_flag &= ~DRIVER_HS_PMIC_RPC_KEY;
+ hi->pdata.driver_flag &= ~DRIVER_HS_PMIC_DYNAMIC_THRESHOLD;
+#endif
+
+ queue_delayed_work(detect_wq, &irq_init_work, HS_JIFFIES_IRQ_INIT);
+
+ hs_pmic_register();
+ hs_notify_driver_ready(DRIVER_NAME);
+
+ HS_LOG("--------------------");
+
+ return 0;
+
+ /* Unwind in reverse order of acquisition */
+#ifdef CONFIG_HEADSET_DEBUG_UART
+err_request_debug_irq:
+ if (hi->pdata.key_gpio) {
+ free_irq(hi->pdata.key_irq, 0);
+ gpio_free(hi->pdata.key_gpio);
+ }
+#endif
+
+err_request_button_irq:
+ if (hi->pdata.hpin_gpio) {
+ free_irq(hi->pdata.hpin_irq, 0);
+ gpio_free(hi->pdata.hpin_gpio);
+ }
+
+err_request_detect_irq:
+ destroy_workqueue(button_wq);
+
+err_create_button_work_queue:
+ destroy_workqueue(detect_wq);
+
+err_create_detect_work_queue:
+ wake_lock_destroy(&hi->hs_wake_lock);
+ kfree(hi);
+
+ HS_ERR("Failed to register %s driver", DRIVER_NAME);
+
+ return ret;
+}
+
+static int htc_headset_pmic_remove(struct platform_device *pdev)
+{
+ if (hi->pdata.key_gpio) {
+ free_irq(hi->pdata.key_irq, 0);
+ gpio_free(hi->pdata.key_gpio);
+ }
+
+ if (hi->pdata.hpin_gpio) {
+ free_irq(hi->pdata.hpin_irq, 0);
+ gpio_free(hi->pdata.hpin_gpio);
+ }
+
+#ifdef CONFIG_HEADSET_DEBUG_UART
+ if (hi->pdata.debug_gpio) {
+ free_irq(hi->pdata.debug_irq, 0);
+ gpio_free(hi->pdata.debug_gpio);
+ }
+#endif
+
+ destroy_workqueue(button_wq);
+ destroy_workqueue(detect_wq);
+ wake_lock_destroy(&hi->hs_wake_lock);
+
+ kfree(hi);
+
+ return 0;
+}
+
+static struct platform_driver htc_headset_pmic_driver = {
+ .probe = htc_headset_pmic_probe,
+ .remove = htc_headset_pmic_remove,
+ .driver = {
+ .name = "HTC_HEADSET_PMIC",
+ .owner = THIS_MODULE,
+ },
+};
+
+static int __init htc_headset_pmic_init(void)
+{
+ return platform_driver_register(&htc_headset_pmic_driver);
+}
+
+static void __exit htc_headset_pmic_exit(void)
+{
+ platform_driver_unregister(&htc_headset_pmic_driver);
+}
+
+late_initcall(htc_headset_pmic_init);
+module_exit(htc_headset_pmic_exit);
+
+MODULE_DESCRIPTION("HTC PMIC headset driver");
+MODULE_LICENSE("GPL");
diff --git a/drivers/misc/qcom-mdm-9k/Kconfig b/drivers/misc/qcom-mdm-9k/Kconfig
new file mode 100644
index 0000000..439dcf5
--- /dev/null
+++ b/drivers/misc/qcom-mdm-9k/Kconfig
@@ -0,0 +1,72 @@
+menu "Qualcomm Modem Drivers"
+
+if QCT_9K_MODEM
+
+config QCOM_USB_MODEM_POWER
+ bool "Qualcomm USB modem power driver"
+ depends on USB
+ default n
+ ---help---
+ Say Y if you want to use one of the following modems
+ QCT 9x15
+ QCT 9x25
+
+ Disabled by default. Choose Y here if you want to build the driver.
+
+config MDM_FTRACE_DEBUG
+ bool "Enable ftrace debug support"
+ depends on QCOM_USB_MODEM_POWER
+ default n
+ help
+ Enable ftrace-based debugging for the modem driver.
+
+config MDM_ERRMSG
+ bool "set error message"
+ depends on QCOM_USB_MODEM_POWER
+ default n
+ help
+ Print error message in panic
+
+config MDM_POWEROFF_MODEM_IN_OFFMODE_CHARGING
+ bool "power off modem in offmode charging"
+ depends on QCOM_USB_MODEM_POWER
+ default n
+ help
+ Power off the modem during off-mode charging.
+
+config MSM_SUBSYSTEM_RESTART
+ bool "MSM Subsystem Restart Driver"
+ depends on QCOM_USB_MODEM_POWER
+ default n
+ help
+ This option enables the MSM subsystem restart driver, which provides
+ a framework to handle subsystem crashes.
+
+config MSM_HSIC_SYSMON
+ tristate "MSM HSIC system monitor driver"
+ depends on USB
+ default n
+ help
+ Add support for bridging with the system monitor interface of MDM
+ over HSIC. This driver allows the local system monitor to
+ communicate with the remote system monitor interface.
+
+config MSM_SYSMON_COMM
+ bool "MSM System Monitor communication support"
+ depends on MSM_SUBSYSTEM_RESTART
+ default n
+ help
+ This option adds support for MSM System Monitor library, which
+ provides an API that may be used for notifying subsystems within
+ the SoC about other subsystems' power-up/down state-changes.
+
+config MDM_SYSEDP
+ bool "Modem Sysedp support"
+ depends on SYSEDP_FRAMEWORK
+ default n
+ help
+ This option adds support to system EDP to handle the modem's
+ power budget.
+
+endif # QCT_9K_MODEM
+
+endmenu
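As a reading aid (not part of the patch): the dependency chain above implies a configuration fragment roughly like the following; the exact option set depends on the target board and which debug features are wanted.

```
CONFIG_QCT_9K_MODEM=y
CONFIG_QCOM_USB_MODEM_POWER=y
CONFIG_MSM_SUBSYSTEM_RESTART=y
CONFIG_MSM_SYSMON_COMM=y
CONFIG_MSM_HSIC_SYSMON=y
```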
diff --git a/drivers/misc/qcom-mdm-9k/Makefile b/drivers/misc/qcom-mdm-9k/Makefile
new file mode 100644
index 0000000..0e670fc
--- /dev/null
+++ b/drivers/misc/qcom-mdm-9k/Makefile
@@ -0,0 +1,13 @@
+#
+# Makefile for Qualcomm MDM9K modem support.
+#
+
+GCOV_PROFILE := y
+
+subdir-ccflags-y := -Werror
+
+obj-$(CONFIG_QCOM_USB_MODEM_POWER) += qcom_usb_modem_power.o
+obj-$(CONFIG_MSM_SUBSYSTEM_RESTART) += subsystem_notif.o
+obj-$(CONFIG_MSM_SUBSYSTEM_RESTART) += subsystem_restart.o
+obj-$(CONFIG_MSM_HSIC_SYSMON) += hsic_sysmon.o
+obj-$(CONFIG_MSM_SYSMON_COMM) += sysmon.o
diff --git a/drivers/misc/qcom-mdm-9k/hsic_sysmon.c b/drivers/misc/qcom-mdm-9k/hsic_sysmon.c
new file mode 100644
index 0000000..6ca65d0
--- /dev/null
+++ b/drivers/misc/qcom-mdm-9k/hsic_sysmon.c
@@ -0,0 +1,479 @@
+/* Copyright (c) 2012-2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/* add additional information to our printk's */
+#define pr_fmt(fmt) "[HSIC][SYSMON]%s: " fmt "\n", __func__
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/kref.h>
+#include <linux/platform_device.h>
+#include <linux/uaccess.h>
+#include <linux/usb.h>
+#include <linux/debugfs.h>
+#ifdef CONFIG_QCT_9K_MODEM
+#include <mach/board_htc.h>
+#endif
+
+#include "hsic_sysmon.h"
+#include "sysmon.h"
+
+#define DRIVER_DESC "HSIC System monitor driver"
+
+enum hsic_sysmon_op {
+ HSIC_SYSMON_OP_READ = 0,
+ HSIC_SYSMON_OP_WRITE,
+ NUM_OPS
+};
+
+struct hsic_sysmon {
+ struct usb_device *udev;
+ struct usb_interface *ifc;
+ __u8 in_epaddr;
+ __u8 out_epaddr;
+ unsigned int pipe[NUM_OPS];
+ struct kref kref;
+ struct platform_device pdev;
+ int id;
+
+ /* debugging counters */
+ atomic_t dbg_bytecnt[NUM_OPS];
+ atomic_t dbg_pending[NUM_OPS];
+};
+static struct hsic_sysmon *hsic_sysmon_devices[NUM_HSIC_SYSMON_DEVS];
+
+static void hsic_sysmon_delete(struct kref *kref)
+{
+ struct hsic_sysmon *hs = container_of(kref, struct hsic_sysmon, kref);
+
+ usb_put_dev(hs->udev);
+ hsic_sysmon_devices[hs->id] = NULL;
+ kfree(hs);
+}
+
+/**
+ * hsic_sysmon_open() - Opens the system monitor bridge.
+ * @id: the HSIC system monitor device to open
+ *
+ * This should only be called after the platform_device "sys_mon" with id
+ * SYSMON_SS_EXT_MODEM has been added. The simplest way to do that is to
+ * register a platform_driver and its probe will be called when the HSIC
+ * device is ready.
+ */
+int hsic_sysmon_open(enum hsic_sysmon_device_id id)
+{
+ struct hsic_sysmon *hs;
+
+ if (id >= NUM_HSIC_SYSMON_DEVS) {
+ pr_err("invalid dev id(%d)", id);
+ return -ENODEV;
+ }
+
+ hs = hsic_sysmon_devices[id];
+ if (!hs) {
+ pr_err("dev is null");
+ return -ENODEV;
+ }
+
+ kref_get(&hs->kref);
+
+ return 0;
+}
+EXPORT_SYMBOL(hsic_sysmon_open);
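The kernel-doc above says the bridge may only be opened after the "sys_mon" platform_device has been added. A minimal sketch of the platform_driver pattern it describes — hypothetical names, kernel-space code, not compilable outside a kernel tree:

```c
/* Sketch only: probe fires when hsic_sysmon_probe() registers the
 * "sys_mon" platform_device, so the bridge is safe to open here. */
static int sys_mon_probe(struct platform_device *pdev)
{
	return hsic_sysmon_open(HSIC_SYSMON_DEV_EXT_MODEM);
}

static struct platform_driver sys_mon_driver = {
	.probe	= sys_mon_probe,
	.driver	= { .name = "sys_mon" },
};
module_platform_driver(sys_mon_driver);
```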
+
+/**
+ * hsic_sysmon_close() - Closes the system monitor bridge.
+ * @id: the HSIC system monitor device to close
+ */
+void hsic_sysmon_close(enum hsic_sysmon_device_id id)
+{
+ struct hsic_sysmon *hs;
+
+ if (id >= NUM_HSIC_SYSMON_DEVS) {
+ pr_err("invalid dev id(%d)", id);
+ return;
+ }
+
+ hs = hsic_sysmon_devices[id];
+ if (!hs) {
+ pr_err("dev is null");
+ return;
+ }
+
+ kref_put(&hs->kref, hsic_sysmon_delete);
+}
+EXPORT_SYMBOL(hsic_sysmon_close);
+
+/**
+ * hsic_sysmon_readwrite() - Common function to send read/write over HSIC
+ */
+static int hsic_sysmon_readwrite(enum hsic_sysmon_device_id id, void *data,
+ size_t len, size_t *actual_len, int timeout,
+ enum hsic_sysmon_op op)
+{
+ struct hsic_sysmon *hs;
+ int ret;
+ const char *opstr = (op == HSIC_SYSMON_OP_READ) ?
+ "read" : "write";
+
+ pr_debug("%s: id:%d, data len:%zd, timeout:%d", opstr, id, len, timeout);
+
+ if (id >= NUM_HSIC_SYSMON_DEVS) {
+ pr_err("invalid dev id(%d)", id);
+ return -ENODEV;
+ }
+
+ if (!len) {
+ pr_err("length(%zd) must be greater than 0", len);
+ return -EINVAL;
+ }
+
+ hs = hsic_sysmon_devices[id];
+ if (!hs) {
+ pr_err("device was not opened");
+ return -ENODEV;
+ }
+
+ if (!hs->ifc) {
+ pr_err("can't %s, device disconnected", opstr);
+ return -ENODEV;
+ }
+
+ ret = usb_autopm_get_interface(hs->ifc);
+ if (ret < 0) {
+ dev_err(&hs->ifc->dev, "can't %s, autopm_get failed:%d\n",
+ opstr, ret);
+ return ret;
+ }
+
+ atomic_inc(&hs->dbg_pending[op]);
+
+ ret = usb_bulk_msg(hs->udev, hs->pipe[op], data, len, (int *)actual_len,
+ timeout);
+
+ atomic_dec(&hs->dbg_pending[op]);
+
+ if (ret)
+ dev_err(&hs->ifc->dev,
+ "can't %s, usb_bulk_msg failed, err:%d\n", opstr, ret);
+ else
+ atomic_add(*actual_len, &hs->dbg_bytecnt[op]);
+
+ usb_autopm_put_interface(hs->ifc);
+ return ret;
+}
+
+/**
+ * hsic_sysmon_read() - Read data from the HSIC sysmon interface.
+ * @id: the HSIC system monitor device to open
+ * @data: pointer to caller-allocated buffer to fill in
+ * @len: length in bytes of the buffer
+ * @actual_len: pointer to a location to put the actual length read
+ * in bytes
+ * @timeout: time in msecs to wait for the message to complete before
+ * timing out (if 0 the wait is forever)
+ *
+ * Context: !in_interrupt()
+ *
+ * Synchronously reads data from the HSIC interface. The call will return
+ * after the read has completed, encountered an error, or timed out. Upon
+ * successful return actual_len will reflect the number of bytes read.
+ *
+ * If successful, it returns 0, otherwise a negative error number. The number
+ * of actual bytes transferred will be stored in the actual_len parameter.
+ */
+int hsic_sysmon_read(enum hsic_sysmon_device_id id, char *data, size_t len,
+ size_t *actual_len, int timeout)
+{
+ return hsic_sysmon_readwrite(id, data, len, actual_len,
+ timeout, HSIC_SYSMON_OP_READ);
+}
+EXPORT_SYMBOL(hsic_sysmon_read);
+
+/**
+ * hsic_sysmon_write() - Write data to the HSIC sysmon interface.
+ * @id: the HSIC system monitor device to open
+ * @data: pointer to caller-allocated buffer to write
+ * @len: length in bytes of the data in buffer to write
+ * @timeout: time in msecs to wait for the message to complete before
+ * timing out (if 0 the wait is forever)
+ *
+ * Context: !in_interrupt()
+ *
+ * Synchronously writes data to the HSIC interface. The call will return
+ * after the write has completed, encountered an error, or timed out.
+ *
+ * If successful, it returns 0; otherwise a negative error number.
+ */
+int hsic_sysmon_write(enum hsic_sysmon_device_id id, const char *data,
+ size_t len, int timeout)
+{
+ size_t actual_len;
+ return hsic_sysmon_readwrite(id, (void *)data, len, &actual_len,
+ timeout, HSIC_SYSMON_OP_WRITE);
+}
+EXPORT_SYMBOL(hsic_sysmon_write);
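Taken together, the four exported calls form a simple synchronous bridge API. A hedged usage sketch (kernel-space, illustrative payload and timeout values only — the command string is hypothetical, not a documented sysmon message):

```c
/* Illustrative only: send a command to the external modem's system
 * monitor port and read back the reply over the HSIC bulk pipes. */
char reply[64];
size_t n = 0;
int err = hsic_sysmon_open(HSIC_SYSMON_DEV_EXT_MODEM);

if (!err) {
	err = hsic_sysmon_write(HSIC_SYSMON_DEV_EXT_MODEM,
				"example-cmd", 11, 1000 /* ms */);
	if (!err)
		err = hsic_sysmon_read(HSIC_SYSMON_DEV_EXT_MODEM, reply,
				       sizeof(reply), &n, 1000 /* ms */);
	hsic_sysmon_close(HSIC_SYSMON_DEV_EXT_MODEM);
}
```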
+
+#if defined(CONFIG_DEBUG_FS)
+#define DEBUG_BUF_SIZE 512
+static ssize_t sysmon_debug_read_stats(struct file *file, char __user *ubuf,
+ size_t count, loff_t *ppos)
+{
+ char *buf;
+ int i, ret = 0;
+
+ buf = kzalloc(DEBUG_BUF_SIZE, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
+ for (i = 0; i < NUM_HSIC_SYSMON_DEVS; i++) {
+ struct hsic_sysmon *hs = hsic_sysmon_devices[i];
+ if (!hs)
+ continue;
+
+ ret += scnprintf(buf + ret, DEBUG_BUF_SIZE - ret,
+ "---HSIC Sysmon #%d---\n"
+ "epin:%d, epout:%d\n"
+ "bytes to host: %d\n"
+ "bytes to mdm: %d\n"
+ "pending reads: %d\n"
+ "pending writes: %d\n",
+ i, hs->in_epaddr & ~0x80, hs->out_epaddr,
+ atomic_read(
+ &hs->dbg_bytecnt[HSIC_SYSMON_OP_READ]),
+ atomic_read(
+ &hs->dbg_bytecnt[HSIC_SYSMON_OP_WRITE]),
+ atomic_read(
+ &hs->dbg_pending[HSIC_SYSMON_OP_READ]),
+ atomic_read(
+ &hs->dbg_pending[HSIC_SYSMON_OP_WRITE])
+ );
+ }
+
+ ret = simple_read_from_buffer(ubuf, count, ppos, buf, ret);
+ kfree(buf);
+ return ret;
+}
+
+static ssize_t sysmon_debug_reset_stats(struct file *file,
+ const char __user *buf,
+ size_t count, loff_t *ppos)
+{
+ int i;
+
+ for (i = 0; i < NUM_HSIC_SYSMON_DEVS; i++) {
+ struct hsic_sysmon *hs = hsic_sysmon_devices[i];
+ if (hs) {
+ atomic_set(&hs->dbg_bytecnt[HSIC_SYSMON_OP_READ], 0);
+ atomic_set(&hs->dbg_bytecnt[HSIC_SYSMON_OP_WRITE], 0);
+ atomic_set(&hs->dbg_pending[HSIC_SYSMON_OP_READ], 0);
+ atomic_set(&hs->dbg_pending[HSIC_SYSMON_OP_WRITE], 0);
+ }
+ }
+
+ return count;
+}
+
+const struct file_operations sysmon_stats_ops = {
+ .read = sysmon_debug_read_stats,
+ .write = sysmon_debug_reset_stats,
+};
+
+static struct dentry *dent;
+
+static void hsic_sysmon_debugfs_init(void)
+{
+ struct dentry *dfile;
+
+ dent = debugfs_create_dir("hsic_sysmon", 0);
+ if (IS_ERR(dent))
+ return;
+
+ dfile = debugfs_create_file("status", 0444, dent, 0, &sysmon_stats_ops);
+ if (!dfile || IS_ERR(dfile))
+ debugfs_remove(dent);
+}
+
+static void hsic_sysmon_debugfs_cleanup(void)
+{
+ if (dent) {
+ debugfs_remove_recursive(dent);
+ dent = NULL;
+ }
+}
+#else
+static inline void hsic_sysmon_debugfs_init(void) { }
+static inline void hsic_sysmon_debugfs_cleanup(void) { }
+#endif
+
+static void hsic_sysmon_pdev_release(struct device *dev) { }
+
+static int
+hsic_sysmon_probe(struct usb_interface *ifc, const struct usb_device_id *id)
+{
+ struct hsic_sysmon *hs;
+ struct usb_host_interface *ifc_desc;
+ struct usb_endpoint_descriptor *ep_desc;
+ int i;
+ int ret = -ENOMEM;
+
+ hs = kzalloc(sizeof(*hs), GFP_KERNEL);
+ if (!hs) {
+ pr_err("unable to allocate hsic_sysmon");
+ return -ENOMEM;
+ }
+
+#ifdef CONFIG_QCT_9K_MODEM
+ {
+ __u8 ifc_num;
+ ifc_num = ifc->cur_altsetting->desc.bInterfaceNumber;
+
+ pr_info("%s ifc_num:%u", __func__, ifc_num);
+ }
+#endif
+
+ hs->udev = usb_get_dev(interface_to_usbdev(ifc));
+ hs->ifc = ifc;
+ kref_init(&hs->kref);
+
+ ifc_desc = ifc->cur_altsetting;
+ for (i = 0; i < ifc_desc->desc.bNumEndpoints; i++) {
+ ep_desc = &ifc_desc->endpoint[i].desc;
+
+ if (!hs->in_epaddr && usb_endpoint_is_bulk_in(ep_desc)) {
+ hs->in_epaddr = ep_desc->bEndpointAddress;
+ hs->pipe[HSIC_SYSMON_OP_READ] =
+ usb_rcvbulkpipe(hs->udev, hs->in_epaddr);
+ }
+
+ if (!hs->out_epaddr && usb_endpoint_is_bulk_out(ep_desc)) {
+ hs->out_epaddr = ep_desc->bEndpointAddress;
+ hs->pipe[HSIC_SYSMON_OP_WRITE] =
+ usb_sndbulkpipe(hs->udev, hs->out_epaddr);
+ }
+ }
+
+ if (!(hs->in_epaddr && hs->out_epaddr)) {
+ pr_err("could not find bulk in and bulk out endpoints");
+ ret = -ENODEV;
+ goto error;
+ }
+
+ hs->id = HSIC_SYSMON_DEV_EXT_MODEM + id->driver_info;
+ if (hs->id >= NUM_HSIC_SYSMON_DEVS) {
+ pr_warn("invalid dev id(%d)", hs->id);
+ hs->id = 0;
+ }
+
+ hsic_sysmon_devices[hs->id] = hs;
+ usb_set_intfdata(ifc, hs);
+
+ hs->pdev.name = "sys_mon";
+ hs->pdev.id = SYSMON_SS_EXT_MODEM + hs->id;
+ hs->pdev.dev.release = hsic_sysmon_pdev_release;
+ platform_device_register(&hs->pdev);
+
+ pr_debug("complete");
+
+ return 0;
+
+error:
+ if (hs)
+ kref_put(&hs->kref, hsic_sysmon_delete);
+
+ return ret;
+}
+
+static void hsic_sysmon_disconnect(struct usb_interface *ifc)
+{
+ struct hsic_sysmon *hs = usb_get_intfdata(ifc);
+
+ platform_device_unregister(&hs->pdev);
+ kref_put(&hs->kref, hsic_sysmon_delete);
+ usb_set_intfdata(ifc, NULL);
+}
+
+static int hsic_sysmon_suspend(struct usb_interface *ifc, pm_message_t message)
+{
+ return 0;
+}
+
+static int hsic_sysmon_resume(struct usb_interface *ifc)
+{
+ return 0;
+}
+
+/* driver_info is the instance number when multiple devices are present */
+static const struct usb_device_id hsic_sysmon_ids[] = {
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x9048, 1), .driver_info = 0, },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x904C, 1), .driver_info = 0, },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x9075, 1), .driver_info = 0, },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x9079, 1), .driver_info = 1, },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x908A, 1), .driver_info = 0, },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x909C, 1), .driver_info = 0, },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x909D, 1), .driver_info = 0, },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x909E, 2), .driver_info = 0, },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x90A0, 1), .driver_info = 0, },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x90A4, 2), .driver_info = 0, },
+
+ {} /* terminating entry */
+};
+MODULE_DEVICE_TABLE(usb, hsic_sysmon_ids);
+
+static struct usb_driver hsic_sysmon_driver = {
+ .name = "hsic_sysmon",
+ .probe = hsic_sysmon_probe,
+ .disconnect = hsic_sysmon_disconnect,
+ .suspend = hsic_sysmon_suspend,
+ .resume = hsic_sysmon_resume,
+ .reset_resume = hsic_sysmon_resume,
+ .id_table = hsic_sysmon_ids,
+ .supports_autosuspend = 1,
+};
+
+static int __init hsic_sysmon_init(void)
+{
+ int ret;
+
+#ifdef CONFIG_QCT_9K_MODEM
+ if (!is_mdm_modem())
+ return 0;
+#endif
+
+ ret = usb_register(&hsic_sysmon_driver);
+ if (ret) {
+ pr_err("unable to register " DRIVER_DESC);
+ return ret;
+ }
+
+ hsic_sysmon_debugfs_init();
+ return 0;
+}
+
+static void __exit hsic_sysmon_exit(void)
+{
+#ifdef CONFIG_QCT_9K_MODEM
+ if (!is_mdm_modem())
+ return;
+#endif
+
+ hsic_sysmon_debugfs_cleanup();
+ usb_deregister(&hsic_sysmon_driver);
+}
+
+module_init(hsic_sysmon_init);
+module_exit(hsic_sysmon_exit);
+
+MODULE_DESCRIPTION(DRIVER_DESC);
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/misc/qcom-mdm-9k/hsic_sysmon.h b/drivers/misc/qcom-mdm-9k/hsic_sysmon.h
new file mode 100644
index 0000000..9655dc03
--- /dev/null
+++ b/drivers/misc/qcom-mdm-9k/hsic_sysmon.h
@@ -0,0 +1,57 @@
+/* Copyright (c) 2012-2013 The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __HSIC_SYSMON_H__
+#define __HSIC_SYSMON_H__
+
+/**
+ * enum hsic_sysmon_device_id - Supported HSIC subsystem devices
+ */
+enum hsic_sysmon_device_id {
+ HSIC_SYSMON_DEV_EXT_MODEM,
+ HSIC_SYSMON_DEV_EXT_MODEM_2,
+ NUM_HSIC_SYSMON_DEVS
+};
+
+#if defined(CONFIG_MSM_HSIC_SYSMON) || defined(CONFIG_MSM_HSIC_SYSMON_MODULE)
+
+extern int hsic_sysmon_open(enum hsic_sysmon_device_id id);
+extern void hsic_sysmon_close(enum hsic_sysmon_device_id id);
+extern int hsic_sysmon_read(enum hsic_sysmon_device_id id, char *data,
+ size_t len, size_t *actual_len, int timeout);
+extern int hsic_sysmon_write(enum hsic_sysmon_device_id id, const char *data,
+ size_t len, int timeout);
+
+#else /* CONFIG_MSM_HSIC_SYSMON || CONFIG_MSM_HSIC_SYSMON_MODULE */
+
+static inline int hsic_sysmon_open(enum hsic_sysmon_device_id id)
+{
+ return -ENODEV;
+}
+
+static inline void hsic_sysmon_close(enum hsic_sysmon_device_id id) { }
+
+static inline int hsic_sysmon_read(enum hsic_sysmon_device_id id, char *data,
+ size_t len, size_t *actual_len, int timeout)
+{
+ return -ENODEV;
+}
+
+static inline int hsic_sysmon_write(enum hsic_sysmon_device_id id,
+ const char *data, size_t len, int timeout)
+{
+ return -ENODEV;
+}
+
+#endif /* CONFIG_MSM_HSIC_SYSMON || CONFIG_MSM_HSIC_SYSMON_MODULE */
+
+#endif /* __HSIC_SYSMON_H__ */
diff --git a/drivers/misc/qcom-mdm-9k/qcom_usb_modem_power.c b/drivers/misc/qcom-mdm-9k/qcom_usb_modem_power.c
new file mode 100644
index 0000000..3539266
--- /dev/null
+++ b/drivers/misc/qcom-mdm-9k/qcom_usb_modem_power.c
@@ -0,0 +1,2346 @@
+/*
+ * Copyright (c) 2014, HTC CORPORATION. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#define pr_fmt(fmt) "[MDM]: " fmt
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/platform_device.h>
+#include <linux/platform_data/tegra_usb.h>
+#include <linux/workqueue.h>
+#include <linux/gpio.h>
+#include <linux/err.h>
+#include <linux/pm_runtime.h>
+#include <linux/suspend.h>
+#include <linux/slab.h>
+#include <mach/gpio-tegra.h>
+#include <mach/board_htc.h>
+#include <linux/platform_data/qcom_usb_modem_power.h>
+#include <linux/proc_fs.h>
+#include <asm/uaccess.h>
+#include <linux/debugfs.h>
+#include <linux/completion.h>
+
+#ifdef CONFIG_MSM_SUBSYSTEM_RESTART
+#include "subsystem_restart.h"
+#define EXTERNAL_MODEM "external_modem"
+#endif
+
+#ifdef CONFIG_MSM_SYSMON_COMM
+#include "sysmon.h"
+#endif
+
+#define BOOST_CPU_FREQ_MIN 1200000
+#define BOOST_CPU_FREQ_TIMEOUT 5000
+
+#define WAKELOCK_TIMEOUT_FOR_USB_ENUM (HZ * 10)
+#define WAKELOCK_TIMEOUT_FOR_REMOTE_WAKE (HZ)
+
+/* MDM timeout value definition */
+#define MDM_MODEM_TIMEOUT 6000
+#define MDM_MODEM_DELTA 100
+#define MDM_BOOT_TIMEOUT 60000L
+#define MDM_RDUMP_TIMEOUT 180000L
+
+/* MDM misc driver ioctl definition */
+#define CHARM_CODE 0xCC
+#define WAKE_CHARM _IO(CHARM_CODE, 1)
+#define RESET_CHARM _IO(CHARM_CODE, 2)
+#define CHECK_FOR_BOOT _IOR(CHARM_CODE, 3, int)
+#define WAIT_FOR_BOOT _IO(CHARM_CODE, 4)
+#define NORMAL_BOOT_DONE _IOW(CHARM_CODE, 5, int)
+#ifdef CONFIG_MSM_SUBSYSTEM_RESTART
+#define RAM_DUMP_DONE _IOW(CHARM_CODE, 6, int)
+#define WAIT_FOR_RESTART _IOR(CHARM_CODE, 7, int)
+#endif
+#define EFS_SYNC_TIMEOUT _IO(CHARM_CODE, 92)
+#ifdef CONFIG_MDM_FTRACE_DEBUG
+#define GET_FTRACE_CMD _IO(CHARM_CODE, 93)
+#endif
+#define GET_MFG_MODE _IO(CHARM_CODE, 94)
+#define GET_RADIO_FLAG _IO(CHARM_CODE, 95)
+#ifdef CONFIG_MDM_ERRMSG
+#define MODEM_ERRMSG_LEN 256
+#define SET_MODEM_ERRMSG _IOW(CHARM_CODE, 96, char[MODEM_ERRMSG_LEN])
+#ifdef CONFIG_COMPAT
+#define COMPAT_SET_MODEM_ERRMSG _IOW(CHARM_CODE, 96, compat_uptr_t)
+#endif
+#endif
+#define TRIGGER_MODEM_FATAL _IO(CHARM_CODE, 97)
+#define EFS_SYNC_DONE _IO(CHARM_CODE, 99)
+#define NV_WRITE_DONE _IO(CHARM_CODE, 100)
+#define POWER_OFF_CHARM _IOW(CHARM_CODE, 101, int)
+
+#ifdef CONFIG_MDM_SYSEDP
+#define SYSEDP_RADIO_STATE _IOW(CHARM_CODE, 111, int)
+#endif
+
+#ifdef CONFIG_MSM_SUBSYSTEM_RESTART
+extern int get_enable_ramdumps(void);
+extern void set_enable_ramdumps(int);
+#else
+static int get_enable_ramdumps(void)
+{
+ return 0;
+}
+#endif
+
+static void mdm_enable_irqs(struct qcom_usb_modem *modem, bool is_wake_irq)
+{
+ if(is_wake_irq)
+ {
+ if(!modem->mdm_wake_irq_enabled)
+ {
+ if(modem->wake_irq > 0)
+ {
+ enable_irq(modem->wake_irq);
+ if(modem->wake_irq_wakeable)
+ {
+ if(modem->mdm_debug_on)
+ pr_info("%s: enable wake irq\n", __func__);
+ enable_irq_wake(modem->wake_irq);
+ }
+ }
+ }
+ modem->mdm_wake_irq_enabled = true;
+ }
+ else
+ {
+ if(!modem->mdm_irq_enabled)
+ {
+ if(modem->mdm_debug_on)
+ pr_info("%s: enable mdm irq\n", __func__);
+
+ if(modem->errfatal_irq > 0)
+ {
+ enable_irq(modem->errfatal_irq);
+ if(modem->errfatal_irq_wakeable)
+ enable_irq_wake(modem->errfatal_irq);
+ }
+ if(modem->hsic_ready_irq > 0)
+ {
+ enable_irq(modem->hsic_ready_irq);
+ if(modem->hsic_ready_irq_wakeable)
+ enable_irq_wake(modem->hsic_ready_irq);
+ }
+ if(modem->status_irq > 0)
+ {
+ enable_irq(modem->status_irq);
+ if(modem->status_irq_wakeable)
+ enable_irq_wake(modem->status_irq);
+ }
+ if(modem->ipc3_irq > 0)
+ {
+ enable_irq(modem->ipc3_irq);
+ if(modem->ipc3_irq_wakeable)
+ enable_irq_wake(modem->ipc3_irq);
+ }
+ if(modem->vdd_min_irq > 0)
+ {
+ enable_irq(modem->vdd_min_irq);
+ if(modem->vdd_min_irq_wakeable)
+ enable_irq_wake(modem->vdd_min_irq);
+ }
+ }
+
+ modem->mdm_irq_enabled = true;
+ }
+
+ return;
+}
+
+static void mdm_disable_irqs(struct qcom_usb_modem *modem, bool is_wake_irq)
+{
+ if(is_wake_irq)
+ {
+ if(modem->mdm_wake_irq_enabled)
+ {
+ if(modem->wake_irq > 0)
+ {
+ disable_irq_nosync(modem->wake_irq);
+ if(modem->wake_irq_wakeable)
+ {
+ if(modem->mdm_debug_on)
+ pr_info("%s: disable wake irq\n", __func__);
+ disable_irq_wake(modem->wake_irq);
+ }
+ }
+ }
+ modem->mdm_wake_irq_enabled = false;
+ }
+ else
+ {
+ if(modem->mdm_irq_enabled)
+ {
+ if(modem->mdm_debug_on)
+ pr_info("%s: disable mdm irq\n", __func__);
+
+ if(modem->errfatal_irq > 0)
+ {
+ disable_irq_nosync(modem->errfatal_irq);
+ if(modem->errfatal_irq_wakeable)
+ disable_irq_wake(modem->errfatal_irq);
+ }
+ if(modem->hsic_ready_irq > 0)
+ {
+ disable_irq_nosync(modem->hsic_ready_irq);
+ if(modem->hsic_ready_irq_wakeable)
+ disable_irq_wake(modem->hsic_ready_irq);
+ }
+ if(modem->status_irq > 0)
+ {
+ disable_irq_nosync(modem->status_irq);
+ if(modem->status_irq_wakeable)
+ disable_irq_wake(modem->status_irq);
+ }
+ if(modem->ipc3_irq > 0)
+ {
+ disable_irq_nosync(modem->ipc3_irq);
+ if(modem->ipc3_irq_wakeable)
+ disable_irq_wake(modem->ipc3_irq);
+ }
+ if(modem->vdd_min_irq > 0)
+ {
+ disable_irq_nosync(modem->vdd_min_irq);
+ if(modem->vdd_min_irq_wakeable)
+ disable_irq_wake(modem->vdd_min_irq);
+ }
+ }
+
+ modem->mdm_irq_enabled = false;
+ }
+
+ return;
+}
+
+static int mdm_panic_prep(struct notifier_block *this, unsigned long event, void *ptr)
+{
+ int i;
+ struct device *dev;
+ struct qcom_usb_modem *modem;
+
+ pr_err("%s: AP CPU kernel panic!!!\n", __func__);
+
+ dev = bus_find_device_by_name(&platform_bus_type, NULL,
+ "MDM");
+ if (!dev) {
+ pr_warn("%s unable to find device name\n", __func__);
+ goto done;
+ }
+
+ modem = dev_get_drvdata(dev);
+
+ gpio_set_value(modem->pdata->ap2mdm_errfatal_gpio, 1);
+
+ if (modem->pdata->ap2mdm_wakeup_gpio >= 0)
+ gpio_set_value(modem->pdata->ap2mdm_wakeup_gpio, 1);
+
+ for (i = MDM_MODEM_TIMEOUT; i > 0; i -= MDM_MODEM_DELTA) {
+ /* pet_watchdog(); */
+ mdelay(MDM_MODEM_DELTA);
+ if (gpio_get_value(modem->pdata->mdm2ap_status_gpio) == 0)
+ break;
+ }
+
+ if (i <= 0)
+ pr_err("%s: MDM2AP_STATUS never went low\n", __func__);
+
+done:
+ return NOTIFY_DONE;
+}
+
+static struct notifier_block mdm_panic_blk = {
+ .notifier_call = mdm_panic_prep,
+};
+
+/* supported modems */
+static const struct usb_device_id modem_list[] = {
+ {USB_DEVICE(0x05c6, 0x9008), /* USB MDM Boot Device */
+ .driver_info = 0,
+ },
+ {USB_DEVICE(0x05c6, 0x9048), /* USB MDM Device */
+ .driver_info = 0,
+ },
+ {}
+};
+
+static ssize_t load_unload_usb_host(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count);
+
+static void cpu_freq_unboost(struct work_struct *ws)
+{
+ struct qcom_usb_modem *modem = container_of(ws, struct qcom_usb_modem,
+ cpu_unboost_work.work);
+
+ pm_qos_update_request(&modem->cpu_boost_req, PM_QOS_DEFAULT_VALUE);
+}
+
+static void cpu_freq_boost(struct work_struct *ws)
+{
+ struct qcom_usb_modem *modem = container_of(ws, struct qcom_usb_modem,
+ cpu_boost_work);
+
+ cancel_delayed_work_sync(&modem->cpu_unboost_work);
+ pm_qos_update_request(&modem->cpu_boost_req, BOOST_CPU_FREQ_MIN);
+ queue_delayed_work(modem->wq, &modem->cpu_unboost_work,
+ msecs_to_jiffies(BOOST_CPU_FREQ_TIMEOUT));
+}
+
+static irqreturn_t qcom_usb_modem_wake_thread(int irq, void *data)
+{
+ struct qcom_usb_modem *modem = (struct qcom_usb_modem *)data;
+ unsigned long start_time = jiffies;
+
+ if (modem->mdm_debug_on)
+ pr_info("%s start\n", __func__);
+
+ mutex_lock(&modem->lock);
+
+ if (modem->udev && modem->udev->state != USB_STATE_NOTATTACHED) {
+ wake_lock_timeout(&modem->wake_lock,
+ WAKELOCK_TIMEOUT_FOR_REMOTE_WAKE);
+
+ dev_info(&modem->pdev->dev, "remote wake (%u)\n",
+ ++(modem->wake_cnt));
+
+ if (!modem->system_suspend) {
+ mutex_unlock(&modem->lock);
+ usb_lock_device(modem->udev);
+ if (usb_autopm_get_interface(modem->intf) == 0)
+ {
+ pr_info("%s(%d) usb_autopm_get_interface OK %u ms\n", __func__, __LINE__, jiffies_to_msecs(jiffies-start_time));
+ usb_autopm_put_interface_async(modem->intf);
+ }
+ usb_unlock_device(modem->udev);
+ mutex_lock(&modem->lock);
+ }
+ else
+ modem->hsic_wakeup_pending = true;
+
+#ifdef CONFIG_PM
+ if (modem->short_autosuspend_enabled && modem->pdata->autosuspend_delay > 0) {
+ pm_runtime_set_autosuspend_delay(&modem->udev->dev,
+ modem->pdata->autosuspend_delay);
+ modem->short_autosuspend_enabled = 0;
+ }
+#endif
+ }
+
+ mutex_unlock(&modem->lock);
+
+ if (modem->mdm_debug_on)
+ pr_info("%s end\n", __func__);
+
+ return IRQ_HANDLED;
+}
+
+static void mdm_hsic_ready(struct work_struct *ws)
+{
+ struct qcom_usb_modem *modem = container_of(ws, struct qcom_usb_modem,
+ mdm_hsic_ready_work);
+ int value;
+
+ mutex_lock(&modem->lock);
+
+ value = gpio_get_value(modem->pdata->mdm2ap_hsic_ready_gpio);
+ if (value == 1) {
+ modem->mdm_status |= MDM_STATUS_HSIC_READY;
+ if(modem->mdm_debug_on)
+ pr_info("%s: modem->mdm_status=0x%x\n", __func__, modem->mdm_status);
+ }
+
+ mutex_unlock(&modem->lock);
+
+ return;
+}
+static irqreturn_t qcom_usb_modem_hsic_ready_thread(int irq, void *data)
+{
+ struct qcom_usb_modem *modem = (struct qcom_usb_modem *)data;
+
+
+ if(modem->mdm_debug_on)
+ pr_info("%s: mdm sent hsic_ready interrupt\n", __func__);
+
+ queue_work(modem->wq, &modem->mdm_hsic_ready_work);
+
+ return IRQ_HANDLED;
+}
+
+static void mdm_status_changed(struct work_struct *ws)
+{
+ struct qcom_usb_modem *modem = container_of(ws, struct qcom_usb_modem,
+ mdm_status_work);
+ int value;
+
+ mutex_lock(&modem->lock);
+ value = gpio_get_value(modem->pdata->mdm2ap_status_gpio);
+
+ if(modem->mdm_debug_on)
+ pr_info("%s: status: %d, mdm_status: 0x%x\n", __func__, value, modem->mdm_status);
+
+ if(((modem->mdm_status & MDM_STATUS_STATUS_READY)?1:0) != value)
+ {
+ if(value && (modem->mdm_status & MDM_STATUS_BOOT_DONE))
+ {
+ if (!work_pending(&modem->host_reset_work))
+ queue_work(modem->usb_host_wq, &modem->host_reset_work);
+
+ mutex_unlock (&modem->lock);
+ wait_for_completion(&modem->usb_host_reset_done);
+ mutex_lock (&modem->lock);
+ INIT_COMPLETION(modem->usb_host_reset_done);
+
+ if(modem->ops && modem->ops->status_cb)
+ modem->ops->status_cb(modem, value);
+ modem->mdm9k_status = 1;
+ }
+ }
+
+ if ((value == 0) && (modem->mdm_status & MDM_STATUS_BOOT_DONE)) {
+ if(modem->mdm_status & MDM_STATUS_RESET)
+ pr_info("%s: mdm is already under reset! Skip reset.\n", __func__);
+ else
+ {
+ modem->mdm_status = MDM_STATUS_RESET;
+ if(modem->mdm_debug_on)
+ pr_info("%s: modem->mdm_status=0x%x\n", __func__, modem->mdm_status);
+ pr_err("%s: unexpected reset external modem\n", __func__);
+ if(modem->ops && modem->ops->dump_mdm_gpio_cb)
+ modem->ops->dump_mdm_gpio_cb(modem, -1, "mdm_status_changed");
+#ifdef CONFIG_MSM_SUBSYSTEM_RESTART
+ subsystem_restart(EXTERNAL_MODEM);
+#endif
+ }
+ } else if (value == 1) {
+ pr_info("%s: mdm status = 1: mdm is now ready\n", __func__);
+ modem->mdm_status |= MDM_STATUS_STATUS_READY;
+ if(modem->mdm_debug_on)
+ pr_info("%s: modem->mdm_status=0x%x\n", __func__, modem->mdm_status);
+ }
+ else
+ {
+ modem->mdm_status &= ~MDM_STATUS_STATUS_READY;
+ if(modem->mdm_debug_on)
+ pr_info("%s: modem->mdm_status=0x%x\n", __func__, modem->mdm_status);
+ }
+
+ mutex_unlock(&modem->lock);
+
+ return;
+}
+
+static irqreturn_t qcom_usb_modem_status_thread(int irq, void *data)
+{
+ struct qcom_usb_modem *modem = (struct qcom_usb_modem *)data;
+
+ if(modem->mdm_debug_on)
+ pr_info("%s: mdm sent status interrupt\n", __func__);
+
+ queue_work(modem->wq, &modem->mdm_status_work);
+
+ return IRQ_HANDLED;
+}
+
+static void mdm_fatal(struct work_struct *ws)
+{
+ struct qcom_usb_modem *modem = container_of(ws, struct qcom_usb_modem,
+ mdm_errfatal_work);
+
+ if(modem->mdm_status & MDM_STATUS_RESETTING)
+ {
+ pr_info("%s: Already under resetting procedure. Skip this reset.\n", __func__);
+ return;
+ }
+ else
+ pr_info("%s: Resetting the mdm due to an errfatal\n", __func__);
+
+ mutex_lock(&modem->lock);
+
+ modem->mdm_status = MDM_STATUS_RESET;
+ modem->system_suspend = false;
+
+ mutex_unlock(&modem->lock);
+
+ if(modem->mdm_debug_on)
+ pr_info("%s: modem->mdm_status=0x%x\n", __func__, modem->mdm_status);
+
+ if(modem->ops && modem->ops->dump_mdm_gpio_cb)
+ modem->ops->dump_mdm_gpio_cb(modem, -1, "mdm_fatal");
+
+#ifdef CONFIG_PM
+ if (modem->udev) {
+ usb_disable_autosuspend(modem->udev);
+ pr_info("disable autosuspend for %s %s\n",
+ modem->udev->manufacturer, modem->udev->product);
+ }
+#endif
+
+#ifdef CONFIG_MSM_SUBSYSTEM_RESTART
+ subsystem_restart(EXTERNAL_MODEM);
+#endif
+
+ return;
+}
+
+static irqreturn_t qcom_usb_modem_errfatal_thread(int irq, void *data)
+{
+ struct qcom_usb_modem *modem = (struct qcom_usb_modem *)data;
+
+ if(!modem)
+ goto done;
+
+ if(modem->mdm_debug_on)
+ pr_info("%s: mdm got errfatal interrupt\n", __func__);
+
+#ifdef CONFIG_MDM_FTRACE_DEBUG
+ if (modem->ftrace_enable) {
+ trace_printk("%s: mdm got errfatal interrupt\n", __func__);
+ tracing_off();
+ }
+#endif
+
+ if (modem->mdm_status & (MDM_STATUS_BOOT_DONE | MDM_STATUS_STATUS_READY)) {
+ if(modem->mdm_debug_on)
+ pr_info("%s: scheduling errfatal work now\n", __func__);
+ queue_work(modem->mdm_recovery_wq, &modem->mdm_errfatal_work);
+ }
+
+done:
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t qcom_usb_modem_ipc3_thread(int irq, void *data)
+{
+ struct qcom_usb_modem *modem = (struct qcom_usb_modem *)data;
+ int value;
+ unsigned int elapsed_ms;
+ static unsigned long last_ipc3_high_jiffies = 0;
+
+ mutex_lock(&modem->lock);
+
+ value = gpio_get_value(modem->pdata->mdm2ap_ipc3_gpio);
+
+ if (modem->mdm_status & MDM_STATUS_BOOT_DONE) {
+ if (value == 1) {
+ last_ipc3_high_jiffies = get_jiffies_64();
+ } else if (last_ipc3_high_jiffies != 0) {
+ elapsed_ms = jiffies_to_msecs(get_jiffies_64() - last_ipc3_high_jiffies);
+
+ if (elapsed_ms >= 450 && elapsed_ms <= 550) {
+ pr_info("need to trigger mdm reset by Kickstart: normal reset\n");
+ modem->boot_type = CHARM_NORMAL_BOOT;
+ complete(&modem->mdm_needs_reload);
+ } else if (elapsed_ms >= 50 && elapsed_ms <= 150) {
+ pr_info("need to trigger mdm reset by Kickstart: CNV reset\n");
+ modem->boot_type = CHARM_CNV_RESET;
+ complete(&modem->mdm_needs_reload);
+ } else {
+ pr_info("IPC3 interrupt is noise; interval is %d ms.\n", elapsed_ms);
+ }
+ last_ipc3_high_jiffies = 0;
+ }
+ } else {
+ last_ipc3_high_jiffies = 0;
+ }
+
+ if (modem->mdm_debug_on && modem->ops && modem->ops->dump_mdm_gpio_cb)
+ modem->ops->dump_mdm_gpio_cb(modem, modem->pdata->mdm2ap_ipc3_gpio, "qcom_usb_modem_ipc3_thread");
+
+ modem->mdm2ap_ipc3_status = value;
+
+#ifdef CONFIG_MDM_FTRACE_DEBUG
+ if (modem->ftrace_enable)
+ trace_printk("mdm2ap_ipc3_status=%d\n", modem->mdm2ap_ipc3_status);
+#endif
+
+ mutex_unlock(&modem->lock);
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t qcom_usb_modem_vdd_min_thread(int irq, void *data)
+{
+ /* Currently nothing to do */
+
+ return IRQ_HANDLED;
+}
+
+static void tegra_usb_host_reset(struct work_struct *ws)
+{
+ struct qcom_usb_modem *modem = container_of(ws, struct qcom_usb_modem,
+ host_reset_work);
+ load_unload_usb_host(&modem->pdev->dev, NULL, "0", 1);
+ load_unload_usb_host(&modem->pdev->dev, NULL, "1", 1);
+
+ mutex_lock(&modem->lock);
+ complete(&modem->usb_host_reset_done);
+ mutex_unlock(&modem->lock);
+}
+
+static void tegra_usb_host_load(struct work_struct *ws)
+{
+ struct qcom_usb_modem *modem = container_of(ws, struct qcom_usb_modem,
+ host_load_work);
+ load_unload_usb_host(&modem->pdev->dev, NULL, "1", 1);
+}
+
+static void tegra_usb_host_unload(struct work_struct *ws)
+{
+ struct qcom_usb_modem *modem = container_of(ws, struct qcom_usb_modem,
+ host_unload_work);
+ load_unload_usb_host(&modem->pdev->dev, NULL, "0", 1);
+}
+
+static void device_add_handler(struct qcom_usb_modem *modem,
+ struct usb_device *udev)
+{
+ const struct usb_device_descriptor *desc = &udev->descriptor;
+ struct usb_interface *intf = usb_ifnum_to_if(udev, 0);
+ const struct usb_device_id *id = NULL;
+
+ if (intf) {
+ /* Only look for specific modems if modem_list is provided in
+ * platform data; otherwise, look for modems in the default
+ * supported modem list.
+ */
+ if (modem->pdata->modem_list)
+ id = usb_match_id(intf, modem->pdata->modem_list);
+ else
+ id = usb_match_id(intf, modem_list);
+ }
+
+ if (id) {
+ /* hold wakelock to ensure ril has enough time to restart */
+ wake_lock_timeout(&modem->wake_lock,
+ WAKELOCK_TIMEOUT_FOR_USB_ENUM);
+
+ pr_info("Add device %d <%s %s>\n", udev->devnum,
+ udev->manufacturer, udev->product);
+
+ mutex_lock(&modem->lock);
+ modem->udev = udev;
+ modem->parent = udev->parent;
+ modem->intf = intf;
+ modem->vid = desc->idVendor;
+ modem->pid = desc->idProduct;
+ modem->wake_cnt = 0;
+ mutex_unlock(&modem->lock);
+
+ pr_info("persist_enabled: %u\n", udev->persist_enabled);
+
+#ifdef CONFIG_PM
+ if (modem->pdata->autosuspend_delay > 0) {
+ pm_runtime_set_autosuspend_delay(&udev->dev,
+ modem->pdata->autosuspend_delay);
+ usb_enable_autosuspend(udev);
+ pr_info("enable autosuspend for %s %s\n",
+ udev->manufacturer, udev->product);
+ }
+ modem->short_autosuspend_enabled = 0;
+
+ /* allow the device to wake up the system */
+ if (udev->actconfig->desc.bmAttributes &
+ USB_CONFIG_ATT_WAKEUP)
+ device_set_wakeup_enable(&udev->dev, true);
+#endif
+ }
+}
+
+static void device_remove_handler(struct qcom_usb_modem *modem,
+ struct usb_device *udev)
+{
+ const struct usb_device_descriptor *desc = &udev->descriptor;
+
+ if (desc->idVendor == modem->vid && desc->idProduct == modem->pid) {
+ pr_info("Remove device %d <%s %s>\n", udev->devnum,
+ udev->manufacturer, udev->product);
+
+ mutex_lock(&modem->lock);
+ modem->udev = NULL;
+ modem->intf = NULL;
+ modem->vid = 0;
+ mutex_unlock(&modem->lock);
+ }
+}
+
+static int mdm_usb_notifier(struct notifier_block *notifier,
+ unsigned long usb_event, void *udev)
+{
+ struct qcom_usb_modem *modem =
+ container_of(notifier, struct qcom_usb_modem, usb_notifier);
+
+ switch (usb_event) {
+ case USB_DEVICE_ADD:
+ device_add_handler(modem, udev);
+ break;
+ case USB_DEVICE_REMOVE:
+ device_remove_handler(modem, udev);
+ break;
+ }
+ return NOTIFY_OK;
+}
+
+static int mdm_pm_notifier(struct notifier_block *notifier,
+ unsigned long pm_event, void *unused)
+{
+ struct qcom_usb_modem *modem =
+ container_of(notifier, struct qcom_usb_modem, pm_notifier);
+
+ mutex_lock(&modem->lock);
+ if (!modem->udev) {
+ mutex_unlock(&modem->lock);
+ return NOTIFY_DONE;
+ }
+
+ pr_info("%s: event %ld\n", __func__, pm_event);
+ switch (pm_event) {
+ case PM_SUSPEND_PREPARE:
+ pr_info("%s: PM_SUSPEND_PREPARE\n", __func__);
+ if (wake_lock_active(&modem->wake_lock)) {
+ pr_warn("%s: wakelock was active, aborting suspend\n",
+ __func__);
+ mutex_unlock(&modem->lock);
+ return NOTIFY_STOP;
+ }
+
+ modem->system_suspend = 1;
+#ifdef CONFIG_PM
+ if (modem->udev && modem->pdata->short_autosuspend_delay > 0 &&
+ modem->udev->state != USB_STATE_NOTATTACHED) {
+ pm_runtime_set_autosuspend_delay(&modem->udev->dev,
+ modem->pdata->short_autosuspend_delay);
+ modem->short_autosuspend_enabled = 1;
+ pr_info("%s: short autosuspend enabled, delay %d ms\n",
+ __func__, modem->pdata->short_autosuspend_delay);
+ }
+#endif
+ mutex_unlock(&modem->lock);
+ return NOTIFY_OK;
+ case PM_POST_SUSPEND:
+ pr_info("%s: PM_POST_SUSPEND\n", __func__);
+ modem->system_suspend = 0;
+ if (modem->hsic_wakeup_pending) {
+ if (modem->mdm_debug_on)
+ pr_info("%s: hsic wakeup pending\n", __func__);
+
+ usb_lock_device(modem->udev);
+ if (usb_autopm_get_interface(modem->intf) == 0) {
+ pr_info("%s: usb_autopm_get_interface OK\n", __func__);
+ usb_autopm_put_interface_async(modem->intf);
+ }
+ usb_unlock_device(modem->udev);
+ modem->hsic_wakeup_pending = false;
+ }
+ mutex_unlock(&modem->lock);
+ return NOTIFY_OK;
+ }
+
+ mutex_unlock(&modem->lock);
+ return NOTIFY_DONE;
+}
+
+static int mdm_request_irq(struct qcom_usb_modem *modem,
+ irq_handler_t thread_fn,
+ unsigned int irq_gpio,
+ unsigned long irq_flags,
+ const char *label,
+ unsigned int *irq,
+ bool *is_wakeable)
+{
+ int ret;
+
+ /* gpio request is done in the modem_init callback */
+
+ /* enable IRQ for GPIO */
+ *irq = gpio_to_irq(irq_gpio);
+
+ ret = request_threaded_irq(*irq, NULL, thread_fn, irq_flags, label, modem);
+ if (ret) {
+ *irq = 0;
+ return ret;
+ }
+
+ ret = enable_irq_wake(*irq);
+ *is_wakeable = !ret;
+
+ return 0;
+}
+
+static void mdm_hsic_phy_open(void)
+{
+ struct device *dev;
+ struct qcom_usb_modem *modem;
+
+ dev = bus_find_device_by_name(&platform_bus_type, NULL, "MDM");
+ if (!dev) {
+ pr_warn("%s: unable to find MDM device\n", __func__);
+ return;
+ }
+
+ modem = dev_get_drvdata(dev);
+
+ return;
+}
+
+static void mdm_hsic_phy_init(void)
+{
+ struct device *dev;
+ struct qcom_usb_modem *modem;
+
+ dev = bus_find_device_by_name(&platform_bus_type, NULL, "MDM");
+ if (!dev) {
+ pr_warn("%s: unable to find MDM device\n", __func__);
+ return;
+ }
+
+ modem = dev_get_drvdata(dev);
+
+ if(modem->mdm_debug_on)
+ pr_info("%s\n", __func__);
+
+ return;
+}
+
+static void mdm_hsic_print_interface_pm_info(struct usb_device *udev)
+{
+ struct usb_interface *intf;
+ int i = 0, n = 0;
+
+ if (udev == NULL)
+ return;
+
+ dev_info(&udev->dev, "%s:\n", __func__);
+
+ if (udev->actconfig) {
+ n = udev->actconfig->desc.bNumInterfaces;
+ for (i = 0; i < n; i++) {
+ intf = udev->actconfig->interface[i];
+ pr_info("[HSIC_PM_DBG] intf:%d pm_usage_cnt:%d usage_count:%d\n", i,
+ atomic_read(&intf->pm_usage_cnt), atomic_read(&intf->dev.power.usage_count));
+ }
+ }
+}
+
+static void mdm_hsic_phy_suspend(struct qcom_usb_modem *modem)
+{
+ unsigned long elapsed_ms = 0;
+ static unsigned int suspend_cnt;
+
+ pr_info("%s\n", __func__);
+
+#ifdef CONFIG_MDM_FTRACE_DEBUG
+ if (modem->ftrace_enable)
+ trace_printk("%s\n", __func__);
+#endif
+
+ mutex_lock(&modem->hsic_phy_lock);
+
+ if (modem->mdm_hsic_phy_resume_jiffies != 0) {
+ elapsed_ms = jiffies_to_msecs(jiffies - modem->mdm_hsic_phy_resume_jiffies);
+ modem->mdm_hsic_phy_active_total_ms += elapsed_ms;
+ }
+
+ suspend_cnt++;
+ if (elapsed_ms > 30000 || suspend_cnt >= 10) {
+ suspend_cnt = 0;
+ if (modem->mdm_debug_on) {
+ pr_info("%s: elapsed_ms: %lu ms\n", __func__, elapsed_ms);
+ mdm_hsic_print_interface_pm_info(modem->udev);
+ }
+ }
+
+ pr_info("%s: phy_active_total_ms: %lu ms\n", __func__, modem->mdm_hsic_phy_active_total_ms);
+
+ mutex_unlock(&modem->hsic_phy_lock);
+
+ return;
+}
+
+static void mdm_hsic_phy_pre_suspend(void)
+{
+ struct device *dev;
+ struct qcom_usb_modem *modem;
+
+ dev = bus_find_device_by_name(&platform_bus_type, NULL, "MDM");
+ if (!dev) {
+ pr_warn("%s: unable to find MDM device\n", __func__);
+ return;
+ }
+
+ modem = dev_get_drvdata(dev);
+
+ pr_info("%s\n", __func__);
+
+#ifdef CONFIG_MDM_FTRACE_DEBUG
+ if (modem->ftrace_enable)
+ trace_printk("%s\n", __func__);
+#endif
+
+ return;
+}
+
+static void mdm_hsic_phy_post_suspend(void)
+{
+ struct device *dev;
+ struct qcom_usb_modem *modem;
+
+ dev = bus_find_device_by_name(&platform_bus_type, NULL, "MDM");
+ if (!dev) {
+ pr_warn("%s: unable to find MDM device\n", __func__);
+ return;
+ }
+
+ modem = dev_get_drvdata(dev);
+
+ mdm_hsic_phy_suspend(modem);
+
+ pr_info("%s\n", __func__);
+
+#ifdef CONFIG_MDM_FTRACE_DEBUG
+ if (modem->ftrace_enable)
+ trace_printk("%s\n", __func__);
+#endif
+
+ return;
+}
+
+static void mdm_hsic_phy_resume(struct qcom_usb_modem *modem)
+{
+ pr_info("%s\n", __func__);
+
+#ifdef CONFIG_MDM_FTRACE_DEBUG
+ if (modem->ftrace_enable)
+ trace_printk("%s\n", __func__);
+#endif
+
+ mutex_lock(&modem->hsic_phy_lock);
+
+ modem->mdm_hsic_phy_resume_jiffies = jiffies;
+
+ mutex_unlock(&modem->hsic_phy_lock);
+
+ return;
+}
+
+static void mdm_hsic_phy_pre_resume(void)
+{
+ struct device *dev;
+ struct qcom_usb_modem *modem;
+
+ dev = bus_find_device_by_name(&platform_bus_type, NULL, "MDM");
+ if (!dev) {
+ pr_warn("%s: unable to find MDM device\n", __func__);
+ return;
+ }
+
+ modem = dev_get_drvdata(dev);
+
+ pr_info("%s\n", __func__);
+
+#ifdef CONFIG_MDM_FTRACE_DEBUG
+ if (modem->ftrace_enable)
+ trace_printk("%s\n", __func__);
+#endif
+
+ mdm_hsic_phy_resume(modem);
+
+ return;
+}
+
+static void mdm_hsic_phy_post_resume(void)
+{
+ struct device *dev;
+ struct qcom_usb_modem *modem;
+
+ dev = bus_find_device_by_name(&platform_bus_type, NULL, "MDM");
+ if (!dev) {
+ pr_warn("%s: unable to find MDM device\n", __func__);
+ return;
+ }
+
+ modem = dev_get_drvdata(dev);
+
+ pr_info("%s\n", __func__);
+
+#ifdef CONFIG_MDM_FTRACE_DEBUG
+ if (modem->ftrace_enable)
+ trace_printk("%s\n", __func__);
+#endif
+
+ return;
+}
+
+static void mdm_post_remote_wakeup(void)
+{
+ struct device *dev;
+ struct qcom_usb_modem *modem;
+
+ dev = bus_find_device_by_name(&platform_bus_type, NULL, "MDM");
+ if (!dev) {
+ pr_warn("%s: unable to find MDM device\n", __func__);
+ return;
+ }
+
+ modem = dev_get_drvdata(dev);
+
+ mutex_lock(&modem->lock);
+
+#ifdef CONFIG_PM
+ if (modem->udev &&
+ modem->udev->state != USB_STATE_NOTATTACHED &&
+ modem->short_autosuspend_enabled && modem->pdata->autosuspend_delay > 0) {
+ pm_runtime_set_autosuspend_delay(&modem->udev->dev,
+ modem->pdata->autosuspend_delay);
+ modem->short_autosuspend_enabled = 0;
+ }
+#endif
+ wake_lock_timeout(&modem->wake_lock, WAKELOCK_TIMEOUT_FOR_REMOTE_WAKE);
+
+ mutex_unlock(&modem->lock);
+
+ return;
+}
+
+void mdm_hsic_phy_close(void)
+{
+ struct device *dev;
+ struct qcom_usb_modem *modem;
+
+ dev = bus_find_device_by_name(&platform_bus_type, NULL, "MDM");
+ if (!dev) {
+ pr_warn("%s: unable to find MDM device\n", __func__);
+ return;
+ }
+
+ modem = dev_get_drvdata(dev);
+
+ return;
+}
+
+/* load USB host controller */
+static struct platform_device *tegra_usb_host_register(
+ const struct qcom_usb_modem *modem)
+{
+ const struct platform_device *hc_device =
+ modem->pdata->tegra_ehci_device;
+ struct platform_device *pdev;
+ int val;
+
+ pdev = platform_device_alloc(hc_device->name, hc_device->id);
+ if (!pdev)
+ return NULL;
+
+ val = platform_device_add_resources(pdev, hc_device->resource,
+ hc_device->num_resources);
+ if (val)
+ goto error;
+
+ pdev->dev.dma_mask = hc_device->dev.dma_mask;
+ pdev->dev.coherent_dma_mask = hc_device->dev.coherent_dma_mask;
+
+ val = platform_device_add_data(pdev, modem->pdata->tegra_ehci_pdata,
+ sizeof(struct tegra_usb_platform_data));
+ if (val)
+ goto error;
+
+ val = platform_device_add(pdev);
+ if (val)
+ goto error;
+
+ return pdev;
+
+error:
+ pr_err("%s: err %d\n", __func__, val);
+ platform_device_put(pdev);
+ return NULL;
+}
+
+/* unload USB host controller */
+static void tegra_usb_host_unregister(struct platform_device *pdev)
+{
+ platform_device_unregister(pdev);
+}
+
+static ssize_t load_unload_usb_host(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct qcom_usb_modem *modem = dev_get_drvdata(dev);
+ int host;
+
+ if (sscanf(buf, "%d", &host) != 1 || host < 0 || host > 1)
+ return -EINVAL;
+
+ pr_info("%s USB host\n", (host) ? "load" : "unload");
+
+ mutex_lock(&modem->hc_lock);
+ if (host) {
+ if (!modem->hc)
+ modem->hc = tegra_usb_host_register(modem);
+ } else {
+ if (modem->hc) {
+ tegra_usb_host_unregister(modem->hc);
+ modem->hc = NULL;
+ }
+ }
+ mutex_unlock(&modem->hc_lock);
+
+ return count;
+}
+
+static struct tegra_usb_phy_platform_ops qcom_usb_modem_debug_remote_wakeup_ops = {
+ .open = mdm_hsic_phy_open,
+ .init = mdm_hsic_phy_init,
+ .pre_suspend = mdm_hsic_phy_pre_suspend,
+ .post_suspend = mdm_hsic_phy_post_suspend,
+ .pre_resume = mdm_hsic_phy_pre_resume,
+ .post_resume = mdm_hsic_phy_post_resume,
+ .post_remote_wakeup = mdm_post_remote_wakeup,
+ .close = mdm_hsic_phy_close,
+};
+
+static struct tegra_usb_phy_platform_ops qcom_usb_modem_remote_wakeup_ops = {
+ .post_remote_wakeup = mdm_post_remote_wakeup,
+};
+
+static int proc_mdm9k_status(struct seq_file *s, void *unused)
+{
+ struct device *dev;
+ struct qcom_usb_modem *modem;
+
+ dev = bus_find_device_by_name(&platform_bus_type, NULL, "MDM");
+ if (!dev) {
+ pr_warn("%s: unable to find MDM device\n", __func__);
+
+ seq_printf(s, "0\n");
+ return 0;
+ }
+
+ modem = dev_get_drvdata(dev);
+
+ seq_printf(s, "%d\n", modem->mdm9k_status);
+ return 0;
+}
+
+static int proc_mdm9k_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, proc_mdm9k_status, PDE_DATA(inode));
+}
+
+static const struct file_operations mdm9k_proc_ops = {
+ .owner = THIS_MODULE,
+ .open = proc_mdm9k_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ /* single_release() undoes single_open(): seq_release() plus kfree() of the ops */
+ .release = single_release,
+};
+
+static void mdm_loaded_info(struct qcom_usb_modem *modem)
+{
+ modem->mdm9k_status = 0;
+
+ modem->mdm9k_pde = proc_create_data("mdm9k_status", 0, NULL, &mdm9k_proc_ops, NULL);
+}
+
+static void mdm_unloaded_info(struct qcom_usb_modem *modem)
+{
+ if (modem->mdm9k_pde)
+ remove_proc_entry("mdm9k_status", NULL);
+}
+
+#ifdef CONFIG_MDM_FTRACE_DEBUG
+static void execute_ftrace_cmd(char *cmd, struct qcom_usb_modem *modem)
+{
+ int ret;
+
+ if (get_radio_flag() == RADIO_FLAG_NONE)
+ return;
+
+ /* wait until ftrace cmd can be executed */
+ ret = wait_for_completion_interruptible(&modem->ftrace_cmd_can_be_executed);
+ INIT_COMPLETION(modem->ftrace_cmd_can_be_executed);
+ if (!ret) {
+ /* copy cmd to ftrace_cmd buffer */
+ mutex_lock(&modem->ftrace_cmd_lock);
+ memset(modem->ftrace_cmd, 0, sizeof(modem->ftrace_cmd));
+ strlcpy(modem->ftrace_cmd, cmd, sizeof(modem->ftrace_cmd));
+ pr_info("%s: ftrace_cmd (%s)\n", __func__, modem->ftrace_cmd);
+ mutex_unlock(&modem->ftrace_cmd_lock);
+
+ /* signal the waiting thread there is pending cmd */
+ complete(&modem->ftrace_cmd_pending);
+ }
+}
+
+static void ftrace_enable_basic_log_fn(struct work_struct *ws)
+{
+ struct qcom_usb_modem *modem = container_of(ws, struct qcom_usb_modem,
+ ftrace_enable_log_work);
+
+ pr_info("%s+\n", __func__);
+ execute_ftrace_cmd("echo 8192 > /sys/kernel/debug/tracing/buffer_size_kb", modem);
+ execute_ftrace_cmd("echo 1 > /sys/kernel/debug/tracing/tracing_on", modem);
+ pr_info("%s-\n", __func__);
+}
+#endif
+
+#ifdef CONFIG_MDM_ERRMSG
+static int set_mdm_errmsg(void __user *msg, struct qcom_usb_modem *modem)
+{
+ if (!modem)
+ return -EFAULT;
+
+ memset(modem->mdm_errmsg, 0, sizeof(modem->mdm_errmsg));
+ if (unlikely(copy_from_user(modem->mdm_errmsg, msg, sizeof(modem->mdm_errmsg)))) {
+ pr_err("%s: copy modem_errmsg failed\n", __func__);
+ return -EFAULT;
+ }
+
+ modem->mdm_errmsg[sizeof(modem->mdm_errmsg) - 1] = '\0';
+ pr_info("%s: set mdm errmsg: %s\n", __func__, modem->mdm_errmsg);
+
+ return 0;
+}
+
+char *get_mdm_errmsg(struct qcom_usb_modem *modem)
+{
+ if (!modem)
+ return NULL;
+
+ if (modem->mdm_errmsg[0] == '\0') {
+ pr_err("%s: can not get mdm errmsg.\n", __func__);
+ return NULL;
+ }
+
+ return modem->mdm_errmsg;
+}
+
+EXPORT_SYMBOL(get_mdm_errmsg);
+#endif
+
+static int mdm_modem_open(struct inode *inode, struct file *file)
+{
+ return 0;
+}
+
+long mdm_modem_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+{
+ int status, ret = 0;
+#ifdef CONFIG_MDM_SYSEDP
+ int radio_state = -1;
+#endif
+ struct device *dev;
+ struct qcom_usb_modem *modem;
+
+ if (_IOC_TYPE(cmd) != CHARM_CODE) {
+ pr_err("%s: invalid ioctl code\n", __func__);
+ return -EINVAL;
+ }
+
+ dev = bus_find_device_by_name(&platform_bus_type, NULL, "MDM");
+ if (!dev) {
+ pr_warn("%s: unable to find MDM device\n", __func__);
+ return -EINVAL;
+ }
+
+ modem = dev_get_drvdata(dev);
+
+ if (modem->mdm_debug_on)
+ pr_info("%s: entering ioctl cmd = %d\n", __func__, _IOC_NR(cmd));
+
+ mutex_lock(&modem->lock);
+
+ switch (cmd) {
+ case WAKE_CHARM:
+ pr_info("%s: Powering on mdm\n", __func__);
+ if (!(modem->mdm_status & (MDM_STATUS_RAMDUMP | MDM_STATUS_RESET | MDM_STATUS_RESETTING))) {
+ modem->mdm_status = MDM_STATUS_POWER_DOWN;
+ if (modem->mdm_debug_on)
+ pr_info("%s: modem->mdm_status=0x%x\n", __func__, modem->mdm_status);
+ }
+
+ if (!work_pending(&modem->host_reset_work))
+ queue_work(modem->usb_host_wq, &modem->host_reset_work);
+
+ /* hold wake lock to complete the enumeration */
+ wake_lock_timeout(&modem->wake_lock, WAKELOCK_TIMEOUT_FOR_USB_ENUM);
+
+ /* boost CPU freq */
+ if (!work_pending(&modem->cpu_boost_work))
+ queue_work(modem->wq, &modem->cpu_boost_work);
+
+ /* Wait for usb host reset done */
+ mutex_unlock(&modem->lock);
+ wait_for_completion(&modem->usb_host_reset_done);
+ mutex_lock(&modem->lock);
+ INIT_COMPLETION(modem->usb_host_reset_done);
+
+ if (modem->ops && modem->ops->dump_mdm_gpio_cb)
+ modem->ops->dump_mdm_gpio_cb(modem, -1, "power_on_mdm (before)");
+
+ /* Enable irq */
+ mdm_enable_irqs(modem, false);
+ mdm_enable_irqs(modem, true);
+
+ /* start modem */
+ if (modem->ops && modem->ops->start)
+ modem->ops->start(modem);
+
+ modem->mdm_status |= MDM_STATUS_POWER_ON;
+ if(modem->mdm_debug_on)
+ pr_info("%s: modem->mdm_status=0x%x\n", __func__, modem->mdm_status);
+ break;
+
+ case CHECK_FOR_BOOT:
+ if (gpio_get_value(modem->pdata->mdm2ap_status_gpio) == 0)
+ put_user(1, (unsigned long __user *)arg);
+ else
+ put_user(0, (unsigned long __user *)arg);
+ break;
+
+ case NORMAL_BOOT_DONE:
+ {
+ if(modem->mdm_debug_on)
+ pr_info("%s: check if mdm is booted up\n", __func__);
+
+ get_user(status, (unsigned long __user *)arg);
+ if (status) {
+ pr_err("%s: normal boot failed\n", __func__);
+ } else {
+ pr_info("%s: normal boot done\n", __func__);
+ modem->mdm_status |= MDM_STATUS_BOOT_DONE;
+ if(modem->mdm_debug_on)
+ pr_info("%s: modem->mdm_status=0x%x\n", __func__, modem->mdm_status);
+ }
+
+ if ((modem->mdm_status & MDM_STATUS_BOOT_DONE) &&
+ (modem->mdm_status & MDM_STATUS_STATUS_READY)) {
+ if (!work_pending(&modem->host_reset_work))
+ queue_work(modem->usb_host_wq, &modem->host_reset_work);
+
+ mutex_unlock(&modem->lock);
+ wait_for_completion(&modem->usb_host_reset_done);
+ mutex_lock(&modem->lock);
+ INIT_COMPLETION(modem->usb_host_reset_done);
+
+ if (modem->ops->normal_boot_done_cb != NULL) {
+ pr_info("normal_boot_done_cb\n");
+ modem->ops->normal_boot_done_cb(modem);
+ }
+ modem->mdm9k_status = 1;
+ }
+
+#ifdef CONFIG_MSM_SUBSYSTEM_RESTART
+ if (modem->mdm_status & MDM_STATUS_RESET)
+ {
+ pr_info("%s: modem is under reset: complete mdm_boot\n", __func__);
+ complete(&modem->mdm_boot);
+ modem->mdm_status &= ~MDM_STATUS_RESET;
+ if(modem->mdm_debug_on)
+ pr_info("%s: modem->mdm_status=0x%x\n", __func__, modem->mdm_status);
+ }
+#endif
+
+#ifdef CONFIG_MDM_FTRACE_DEBUG
+ if (modem->ftrace_enable && modem->ftrace_wq)
+ queue_work(modem->ftrace_wq, &modem->ftrace_enable_log_work);
+#endif
+ }
+ break;
+
+#ifdef CONFIG_MSM_SUBSYSTEM_RESTART
+ case RAM_DUMP_DONE:
+ if(modem->mdm_debug_on)
+ pr_info("%s: mdm done collecting RAM dumps\n", __func__);
+
+ get_user(status, (unsigned long __user *)arg);
+ if (status)
+ pr_err("%s: ramdump collection fail.\n", __func__);
+ else
+ pr_info("%s: ramdump collection completed\n", __func__);
+
+ modem->mdm_status &= ~MDM_STATUS_RAMDUMP;
+ if(modem->mdm_debug_on)
+ pr_info("%s: modem->mdm_status=0x%x\n", __func__, modem->mdm_status);
+
+ complete(&modem->mdm_ram_dumps);
+ break;
+
+ case WAIT_FOR_RESTART:
+ if(modem->mdm_debug_on)
+ pr_info("%s: wait for mdm to need images reloaded\n", __func__);
+
+ mutex_unlock(&modem->lock);
+ ret = wait_for_completion_interruptible(&modem->mdm_needs_reload);
+ mutex_lock(&modem->lock);
+
+ if (modem->boot_type == CHARM_NORMAL_BOOT)
+ pr_info("%s: modem boot_type=Normal_boot\n", __func__);
+ else
+ pr_info("%s: modem boot_type=%s\n", __func__,
+ (modem->boot_type == CHARM_RAM_DUMPS) ? "Ram_dump" : "CNV_Reset");
+
+ if (!ret)
+ put_user(modem->boot_type, (unsigned long __user *)arg);
+ INIT_COMPLETION(modem->mdm_needs_reload);
+ break;
+#endif
+
+#ifdef CONFIG_MDM_FTRACE_DEBUG
+ case GET_FTRACE_CMD:
+ {
+ /* execute ftrace cmd only when radio flag is non-zero */
+ if (get_radio_flag() != RADIO_FLAG_NONE) {
+ complete(&modem->ftrace_cmd_can_be_executed);
+ mutex_unlock(&modem->lock);
+ ret = wait_for_completion_interruptible(&modem->ftrace_cmd_pending);
+ mutex_lock(&modem->lock);
+ if (!ret) {
+ mutex_lock(&modem->ftrace_cmd_lock);
+ pr_info("ioctl GET_FTRACE_CMD: %s\n", modem->ftrace_cmd);
+ if (copy_to_user((void __user *)arg, modem->ftrace_cmd, sizeof(modem->ftrace_cmd))) {
+ pr_err("GET_FTRACE_CMD read fail\n");
+ }
+ mutex_unlock(&modem->ftrace_cmd_lock);
+ }
+ INIT_COMPLETION(modem->ftrace_cmd_pending);
+ }
+ break;
+ }
+#endif
+
+ case GET_MFG_MODE:
+ pr_info("%s: board_mfg_mode() = %d\n", __func__, board_mfg_mode());
+ put_user(board_mfg_mode(), (unsigned long __user *)arg);
+ break;
+
+ case GET_RADIO_FLAG:
+ pr_info("%s: get_radio_flag() = %x\n", __func__, get_radio_flag());
+ put_user(get_radio_flag(), (unsigned long __user *)arg);
+ break;
+
+#ifdef CONFIG_MDM_ERRMSG
+ case SET_MODEM_ERRMSG:
+ pr_info("%s: Set modem fatal errmsg\n", __func__);
+ ret = set_mdm_errmsg((void __user *)arg, modem);
+ break;
+#endif
+
+ case TRIGGER_MODEM_FATAL:
+#ifdef CONFIG_MSM_SUBSYSTEM_RESTART
+ get_user(status, (unsigned long __user *)arg);
+ pr_info("%s: Trigger modem fatal!!(Ignore ramdump=%d)\n", __func__, status);
+
+ if(status && !(modem->mdm_status & MDM_STATUS_RESET))
+ {
+ modem->ramdump_save = get_enable_ramdumps();
+ set_enable_ramdumps(0);
+ }
+#else
+ pr_info("%s: Trigger modem fatal!!\n", __func__);
+#endif
+ if(!(modem->mdm_status & MDM_STATUS_RESET))
+ {
+ if(modem->ops && modem->ops->fatal_trigger_cb)
+ modem->ops->fatal_trigger_cb(modem);
+ }
+ else
+ pr_info("%s: modem reset is in progress!\n", __func__);
+ break;
+
+ case EFS_SYNC_DONE:
+ pr_info("%s:%s efs sync is done\n", __func__, (atomic_read(&modem->final_efs_wait) ? " FINAL" : ""));
+ atomic_set(&modem->final_efs_wait, 0);
+ break;
+
+ case NV_WRITE_DONE:
+ pr_info("%s: NV write done!\n", __func__);
+ if (modem->ops && modem->ops->nv_write_done_cb) {
+ modem->ops->nv_write_done_cb(modem);
+ }
+ break;
+
+ case EFS_SYNC_TIMEOUT:
+ break;
+
+ case POWER_OFF_CHARM:
+ pr_info("%s: (HTC_POWER_OFF_CHARM)Powering off mdm\n", __func__);
+ if (modem->ops && modem->ops->stop2) {
+ modem->mdm_status &= (MDM_STATUS_POWER_DOWN | MDM_STATUS_RESET);
+ if(modem->mdm_debug_on)
+ pr_info("%s: modem->mdm_status=0x%x\n", __func__, modem->mdm_status);
+ modem->ops->stop2(modem);
+ }
+ break;
+
+#ifdef CONFIG_MDM_SYSEDP
+ case SYSEDP_RADIO_STATE:
+ if(modem->mdm_debug_on)
+ pr_info("%s: set sysdep radio state\n", __func__);
+ get_user(radio_state, (unsigned long __user *)arg);
+ if (radio_state < 0 || radio_state >= MDM_SYSEDP_MAX) {
+ pr_err("%s: invalid radio state %d\n", __func__, radio_state);
+ } else {
+ modem->radio_state = radio_state;
+ sysedp_set_state(modem->sysedpc, modem->radio_state);
+ pr_info("%s: sent radio state %d to sysedp\n", __func__, modem->radio_state);
+ }
+ break;
+#endif
+
+ default:
+ pr_err("%s: invalid ioctl cmd = %d\n", __func__, _IOC_NR(cmd));
+ ret = -EINVAL;
+ break;
+ }
+
+ if (modem->mdm_debug_on)
+ pr_info("%s: ioctl cmd = %d done\n", __func__, _IOC_NR(cmd));
+
+ mutex_unlock(&modem->lock);
+
+ return ret;
+}
+
+#ifdef CONFIG_COMPAT
+long mdm_modem_ioctl_compat(struct file *filp, unsigned int cmd, unsigned long arg)
+{
+ switch (cmd) {
+ /* TODO: check which commands need conversion for compatibility */
+ case COMPAT_SET_MODEM_ERRMSG:
+ cmd = SET_MODEM_ERRMSG;
+ break;
+
+ default:
+ break;
+ }
+
+ return mdm_modem_ioctl(filp, cmd, (unsigned long) compat_ptr(arg));
+}
+#endif
+
+static const struct file_operations mdm_modem_fops = {
+ .owner = THIS_MODULE,
+ .open = mdm_modem_open,
+ .unlocked_ioctl = mdm_modem_ioctl,
+#ifdef CONFIG_COMPAT
+ .compat_ioctl = mdm_modem_ioctl_compat,
+#endif
+};
+
+static struct miscdevice mdm_modem_misc = {
+ .minor = MISC_DYNAMIC_MINOR,
+ .name = "mdm",
+ .fops = &mdm_modem_fops
+};
+
+static int mdm_debug_on_set(void *data, u64 val)
+{
+ struct device *dev;
+ struct qcom_usb_modem *modem;
+
+ dev = bus_find_device_by_name(&platform_bus_type, NULL, "MDM");
+ if (!dev) {
+ pr_warn("%s: unable to find MDM device\n", __func__);
+ goto done;
+ }
+
+ modem = dev_get_drvdata(dev);
+
+ mutex_lock(&modem->lock);
+
+ if (modem->ops && modem->ops->debug_state_changed_cb)
+ modem->ops->debug_state_changed_cb(modem, val);
+
+ mutex_unlock(&modem->lock);
+
+done:
+ return 0;
+}
+
+static int mdm_debug_on_get(void *data, u64 * val)
+{
+ struct device *dev;
+ struct qcom_usb_modem *modem;
+
+ dev = bus_find_device_by_name(&platform_bus_type, NULL, "MDM");
+ if (!dev) {
+ pr_warn("%s: unable to find MDM device\n", __func__);
+ goto done;
+ }
+
+ modem = dev_get_drvdata(dev);
+
+ *val = modem->mdm_debug_on;
+
+done:
+ return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(mdm_debug_on_fops, mdm_debug_on_get, mdm_debug_on_set, "%llu\n");
+
+static int mdm_debugfs_init(void)
+{
+ struct dentry *dent;
+
+ dent = debugfs_create_dir("mdm_dbg", NULL);
+ if (IS_ERR_OR_NULL(dent))
+ return dent ? PTR_ERR(dent) : -ENOMEM;
+
+ debugfs_create_file("debug_on", 0644, dent, NULL, &mdm_debug_on_fops);
+ return 0;
+}
+
+#ifdef CONFIG_MSM_SUBSYSTEM_RESTART
+static int mdm_subsys_shutdown(const struct subsys_data *crashed_subsys)
+{
+ struct qcom_usb_modem *modem = crashed_subsys->modem;
+
+ if(modem->mdm_debug_on)
+ pr_info("[%s] start\n", __func__);
+
+ gpio_direction_output(modem->pdata->ap2mdm_errfatal_gpio, 1);
+ if (get_enable_ramdumps() && modem->pdata->ramdump_delay_ms > 0) {
+ /* Wait for the external modem to complete
+ * its preparation for ramdumps.
+ */
+ msleep(modem->pdata->ramdump_delay_ms);
+ }
+
+ mutex_lock(&modem->lock);
+
+ /* power down mdm */
+ mdm_disable_irqs(modem, false);
+
+ if ((modem->mdm_status & MDM_STATUS_RESETTING) && get_enable_ramdumps()) {
+ pr_info("%s: MDM RAM dump needed; not pulling RESET gpio LOW to avoid losing MDM memory\n", __func__);
+ } else {
+ pr_info("%s: pulling RESET gpio LOW\n", __func__);
+ if (modem->ops && modem->ops->stop)
+ modem->ops->stop(modem);
+ }
+
+ mutex_unlock(&modem->lock);
+
+ /* Workaround for real-time MDM ramdump during subsystem restart:
+ * ap2mdm_errfatal_gpio must be pulled low, otherwise MDM will assume
+ * an 8K fatal after bootup.
+ */
+ gpio_direction_output(modem->pdata->ap2mdm_errfatal_gpio, 0);
+
+ if(modem->ops && modem->ops->dump_mdm_gpio_cb)
+ modem->ops->dump_mdm_gpio_cb(modem, -1, "mdm_subsys_shutdown");
+
+ if(modem->mdm_debug_on)
+ pr_info("[%s] end\n", __func__);
+
+ return 0;
+}
+
+static int mdm_subsys_powerup(const struct subsys_data *crashed_subsys)
+{
+ struct qcom_usb_modem *modem = crashed_subsys->modem;
+ int mdm_boot_status = 0;
+
+ if(modem->mdm_debug_on)
+ pr_info("[%s] start\n", __func__);
+
+ gpio_direction_output(modem->pdata->ap2mdm_errfatal_gpio, 0);
+ gpio_direction_output(modem->pdata->ap2mdm_status_gpio, 1);
+
+ mutex_lock(&modem->lock);
+ if (modem->ramdump_save > 0) {
+ set_enable_ramdumps(modem->ramdump_save);
+ modem->ramdump_save = -1;
+ }
+ modem->boot_type = CHARM_NORMAL_BOOT;
+ mutex_unlock(&modem->lock);
+ complete(&modem->mdm_needs_reload);
+ if (!wait_for_completion_timeout(&modem->mdm_boot, msecs_to_jiffies(MDM_BOOT_TIMEOUT))) {
+ mdm_boot_status = -ETIMEDOUT;
+ pr_err("%s: mdm modem restart timed out.\n", __func__);
+ } else {
+ pr_info("%s: mdm modem has been restarted\n", __func__);
+
+#ifdef CONFIG_MSM_SYSMON_COMM
+ /* Log the reason for the restart */
+ if(modem->mdm_restart_wq)
+ queue_work_on(0, modem->mdm_restart_wq, &modem->mdm_restart_reason_work);
+#endif
+ }
+ INIT_COMPLETION(modem->mdm_boot);
+
+ if(modem->mdm_debug_on)
+ pr_info("[%s] end (mdm_boot_status=%d)\n", __func__, mdm_boot_status);
+
+ return mdm_boot_status;
+}
+
+static int mdm_subsys_ramdumps(int want_dumps, const struct subsys_data *crashed_subsys)
+{
+ struct qcom_usb_modem *modem = crashed_subsys->modem;
+ int mdm_ram_dump_status = 0;
+
+ if(modem->mdm_debug_on)
+ pr_info("%s: want_dumps is %d\n", __func__, want_dumps);
+
+ if (want_dumps) {
+ mutex_lock(&modem->lock);
+ modem->mdm_status |= MDM_STATUS_RAMDUMP;
+ modem->boot_type = CHARM_RAM_DUMPS;
+ complete(&modem->mdm_needs_reload);
+ mutex_unlock(&modem->lock);
+
+ wait_for_completion(&modem->mdm_ram_dumps);
+ INIT_COMPLETION(modem->mdm_ram_dumps);
+ gpio_direction_output(modem->pdata->ap2mdm_errfatal_gpio, 1);
+
+ mutex_lock(&modem->lock);
+ mdm_disable_irqs(modem, false);
+
+ if ((modem->mdm_status & MDM_STATUS_RESETTING) && get_enable_ramdumps()) {
+ pr_info("%s: MDM RAM dump needed; not pulling RESET gpio LOW to avoid losing MDM memory\n", __func__);
+ } else {
+ pr_info("%s: pulling RESET gpio LOW\n", __func__);
+ if (modem->ops && modem->ops->stop)
+ modem->ops->stop(modem);
+ }
+ mutex_unlock(&modem->lock);
+
+ /* Workaround for real-time MDM ramdump during subsystem restart:
+ * ap2mdm_errfatal_gpio must be pulled low, otherwise MDM will assume
+ * an 8K fatal after bootup.
+ */
+ gpio_direction_output(modem->pdata->ap2mdm_errfatal_gpio, 0);
+
+ if(modem->ops && modem->ops->dump_mdm_gpio_cb)
+ modem->ops->dump_mdm_gpio_cb(modem, -1, "mdm_subsys_ramdumps");
+ }
+ return mdm_ram_dump_status;
+}
+
+static struct subsys_data mdm_subsystem = {
+ .shutdown = mdm_subsys_shutdown,
+ .ramdump = mdm_subsys_ramdumps,
+ .powerup = mdm_subsys_powerup,
+ .name = EXTERNAL_MODEM,
+};
+#endif
+
+#ifdef CONFIG_MSM_SYSMON_COMM
+#define RD_BUF_SIZE 100
+#define SFR_MAX_RETRIES 10
+#define SFR_RETRY_INTERVAL 1000
+
+static void mdm_restart_reason_fn(struct work_struct *ws)
+{
+ struct qcom_usb_modem *modem = container_of(ws, struct qcom_usb_modem,
+ mdm_restart_reason_work);
+
+ int ret, ntries = 0;
+ char sfr_buf[RD_BUF_SIZE];
+
+ do {
+ msleep(SFR_RETRY_INTERVAL);
+ ret = sysmon_get_reason(SYSMON_SS_EXT_MODEM, sfr_buf, sizeof(sfr_buf));
+ if (ret) {
+ /*
+ * The sysmon device may not have been probed as yet
+ * after the restart.
+ */
+ pr_err("%s: error retrieving mdm restart reason, ret = %d, %d/%d tries\n",
+ __func__, ret, ntries + 1, SFR_MAX_RETRIES);
+ } else {
+ pr_err("mdm restart reason: %s\n", sfr_buf);
+ mutex_lock(&modem->lock);
+ modem->msr_info_list[modem->mdm_msr_index].valid = 1;
+ modem->msr_info_list[modem->mdm_msr_index].msr_time = current_kernel_time();
+ snprintf(modem->msr_info_list[modem->mdm_msr_index].modem_errmsg, RD_BUF_SIZE, "%s", sfr_buf);
+ if (++modem->mdm_msr_index >= MODEM_ERRMSG_LIST_LEN) {
+ modem->mdm_msr_index = 0;
+ }
+ mutex_unlock(&modem->lock);
+ break;
+ }
+ } while (++ntries < SFR_MAX_RETRIES);
+}
+#endif
+
+static int mdm_init(struct qcom_usb_modem *modem, struct platform_device *pdev)
+{
+ struct qcom_usb_modem_power_platform_data *pdata =
+ pdev->dev.platform_data;
+ int ret = 0;
+
+ pr_info("%s\n", __func__);
+
+ modem->pdata = pdata;
+ modem->pdev = pdev;
+
+ /* get modem operations from platform data */
+ modem->ops = (const struct qcom_modem_operations *)pdata->ops;
+
+ mutex_init(&(modem->lock));
+ mutex_init(&modem->hc_lock);
+ mutex_init(&modem->hsic_phy_lock);
+#ifdef CONFIG_MDM_FTRACE_DEBUG
+ mutex_init(&modem->ftrace_cmd_lock);
+#endif
+ wake_lock_init(&modem->wake_lock, WAKE_LOCK_SUSPEND, "mdm_lock");
+ if (pdev->id >= 0)
+ dev_set_name(&pdev->dev, "MDM%d", pdev->id);
+ else
+ dev_set_name(&pdev->dev, "MDM");
+
+ INIT_WORK(&modem->host_reset_work, tegra_usb_host_reset);
+ INIT_WORK(&modem->host_load_work, tegra_usb_host_load);
+ INIT_WORK(&modem->host_unload_work, tegra_usb_host_unload);
+ INIT_WORK(&modem->cpu_boost_work, cpu_freq_boost);
+ INIT_DELAYED_WORK(&modem->cpu_unboost_work, cpu_freq_unboost);
+ INIT_WORK(&modem->mdm_hsic_ready_work, mdm_hsic_ready);
+ INIT_WORK(&modem->mdm_status_work, mdm_status_changed);
+ INIT_WORK(&modem->mdm_errfatal_work, mdm_fatal);
+#ifdef CONFIG_MDM_FTRACE_DEBUG
+ INIT_WORK(&modem->ftrace_enable_log_work, ftrace_enable_basic_log_fn);
+#endif
+#ifdef CONFIG_MSM_SYSMON_COMM
+ INIT_WORK(&modem->mdm_restart_reason_work, mdm_restart_reason_fn);
+#endif
+ pm_qos_add_request(&modem->cpu_boost_req, PM_QOS_CPU_FREQ_MIN,
+ PM_QOS_DEFAULT_VALUE);
+
+ modem->pm_notifier.notifier_call = mdm_pm_notifier;
+ modem->usb_notifier.notifier_call = mdm_usb_notifier;
+
+ usb_register_notify(&modem->usb_notifier);
+ register_pm_notifier(&modem->pm_notifier);
+
+ mdm_loaded_info(modem);
+ if (modem->ops && modem->ops->debug_state_changed_cb)
+ modem->ops->debug_state_changed_cb(modem,
+ (get_radio_flag() & RADIO_FLAG_MORE_LOG) ? 1 : 0);
+
+#ifdef CONFIG_MDM_FTRACE_DEBUG
+ modem->ftrace_enable =
+ !!(get_radio_flag() & RADIO_FLAG_FTRACE_ENABLE);
+
+ /* Initialize completion */
+ init_completion(&modem->ftrace_cmd_pending);
+ init_completion(&modem->ftrace_cmd_can_be_executed);
+#endif
+
+ /* Register kernel panic notification */
+ atomic_notifier_chain_register(&panic_notifier_list, &mdm_panic_blk);
+ mdm_debugfs_init();
+
+#ifdef CONFIG_MSM_SUBSYSTEM_RESTART
+ /* Register subsystem handlers */
+ mdm_subsystem.modem = modem;
+ ssr_register_subsystem(&mdm_subsystem);
+
+ /* Initialize completion */
+ init_completion(&modem->mdm_needs_reload);
+ init_completion(&modem->mdm_boot);
+ init_completion(&modem->mdm_ram_dumps);
+ modem->ramdump_save = -1;
+#endif
+
+ /* Initialize other variables */
+ modem->boot_type = CHARM_NORMAL_BOOT;
+ modem->mdm_status = MDM_STATUS_POWER_DOWN;
+ modem->mdm_wake_irq_enabled = false;
+ modem->mdm9k_status = 0;
+ init_completion(&modem->usb_host_reset_done);
+ atomic_set(&modem->final_efs_wait, 0);
+ modem->mdm2ap_ipc3_status = gpio_get_value(modem->pdata->mdm2ap_ipc3_gpio);
+
+ /* hsic wakeup */
+ modem->mdm_hsic_phy_resume_jiffies = 0;
+ modem->mdm_hsic_phy_active_total_ms = 0;
+ modem->hsic_wakeup_pending = false;
+
+#ifdef CONFIG_MDM_FTRACE_DEBUG
+ memset(modem->ftrace_cmd, 0, sizeof(modem->ftrace_cmd));
+#endif
+#ifdef CONFIG_MSM_SYSMON_COMM
+ memset(modem->msr_info_list, 0, sizeof(modem->msr_info_list));
+ modem->mdm_msr_index = 0;
+#endif
+
+ /* create work queues */
+ modem->usb_host_wq = create_singlethread_workqueue("usb_host_queue");
+ if (!modem->usb_host_wq)
+ goto error;
+
+ modem->wq = create_singlethread_workqueue("qcom_usb_mdm_queue");
+ if (!modem->wq)
+ goto error;
+
+ modem->mdm_recovery_wq = create_singlethread_workqueue("mdm_recovery_queue");
+ if (!modem->mdm_recovery_wq)
+ goto error;
+
+#ifdef CONFIG_MDM_FTRACE_DEBUG
+ modem->ftrace_wq = create_singlethread_workqueue("qcom_usb_mdm_ftrace_queue");
+ if (!modem->ftrace_wq)
+ goto error;
+#endif
+#ifdef CONFIG_MSM_SYSMON_COMM
+ modem->mdm_restart_wq = alloc_workqueue("qcom_usb_mdm_restart_queue", 0, 0);
+ if (!modem->mdm_restart_wq)
+ goto error;
+#endif
+
+ /* modem init to request gpio settings */
+ if (modem->ops && modem->ops->init) {
+ ret = modem->ops->init(modem);
+ if (ret)
+ goto error;
+ }
+
+ /* Request IRQ */
+ /* if wake gpio is not specified we rely on native usb remote wake */
+ if (gpio_is_valid(pdata->mdm2ap_wakeup_gpio)) {
+ /* request remote wakeup irq from platform data */
+ ret = mdm_request_irq(modem,
+ qcom_usb_modem_wake_thread,
+ pdata->mdm2ap_wakeup_gpio,
+ pdata->wake_irq_flags,
+ "MDM2AP_WAKEUP",
+ &modem->wake_irq,
+ &modem->wake_irq_wakeable);
+ if (ret) {
+ dev_err(&pdev->dev, "request wake irq error\n");
+ goto error;
+ }
+ }
+
+ /* Register hsic usb ops */
+ if (modem->mdm_debug_on)
+ modem->pdata->tegra_ehci_pdata->ops =
+ &qcom_usb_modem_debug_remote_wakeup_ops;
+ else
+ modem->pdata->tegra_ehci_pdata->ops =
+ &qcom_usb_modem_remote_wakeup_ops;
+
+ if (gpio_is_valid(pdata->mdm2ap_hsic_ready_gpio)) {
+ /* request hsic ready irq from platform data */
+ ret = mdm_request_irq(modem,
+ qcom_usb_modem_hsic_ready_thread,
+ pdata->mdm2ap_hsic_ready_gpio,
+ pdata->hsic_ready_irq_flags,
+ "MDM2AP_HSIC_READY",
+ &modem->hsic_ready_irq,
+ &modem->hsic_ready_irq_wakeable);
+ if (ret) {
+ dev_err(&pdev->dev, "request hsic ready irq error\n");
+ goto error;
+ }
+ }
+
+ if (gpio_is_valid(pdata->mdm2ap_status_gpio)) {
+ /* request status irq from platform data */
+ ret = mdm_request_irq(modem,
+ qcom_usb_modem_status_thread,
+ pdata->mdm2ap_status_gpio,
+ pdata->status_irq_flags,
+ "MDM2AP_STATUS",
+ &modem->status_irq,
+ &modem->status_irq_wakeable);
+ if (ret) {
+ dev_err(&pdev->dev, "request status irq error\n");
+ goto error;
+ }
+ }
+
+ if (gpio_is_valid(pdata->mdm2ap_errfatal_gpio)) {
+ /* request error fatal irq from platform data */
+ ret = mdm_request_irq(modem,
+ qcom_usb_modem_errfatal_thread,
+ pdata->mdm2ap_errfatal_gpio,
+ pdata->errfatal_irq_flags,
+ "MDM2AP_ERRFATAL",
+ &modem->errfatal_irq,
+ &modem->errfatal_irq_wakeable);
+ if (ret) {
+ dev_err(&pdev->dev, "request errfatal irq error\n");
+ goto error;
+ }
+ }
+
+ if (gpio_is_valid(pdata->mdm2ap_ipc3_gpio)) {
+ /* request ipc3 irq from platform data */
+ ret = mdm_request_irq(modem,
+ qcom_usb_modem_ipc3_thread,
+ pdata->mdm2ap_ipc3_gpio,
+ pdata->ipc3_irq_flags,
+ "MDM2AP_IPC3",
+ &modem->ipc3_irq,
+ &modem->ipc3_irq_wakeable);
+ if (ret) {
+ dev_err(&pdev->dev, "request ipc3 irq error\n");
+ goto error;
+ }
+ }
+
+ if (gpio_is_valid(pdata->mdm2ap_vdd_min_gpio)) {
+ /* request vdd min irq from platform data */
+ ret = mdm_request_irq(modem,
+ qcom_usb_modem_vdd_min_thread,
+ pdata->mdm2ap_vdd_min_gpio,
+ pdata->vdd_min_irq_flags,
+ "mdm_vdd_min",
+ &modem->vdd_min_irq,
+ &modem->vdd_min_irq_wakeable);
+ if (ret) {
+ dev_err(&pdev->dev, "request vdd min irq error\n");
+ goto error;
+ }
+ }
+
+ /* Force all IRQs disabled by default */
+ modem->mdm_wake_irq_enabled = true;
+ mdm_disable_irqs(modem, true);
+ modem->mdm_irq_enabled = true;
+ mdm_disable_irqs(modem, false);
+
+#ifdef CONFIG_MDM_SYSEDP
+ modem->sysedpc = sysedp_create_consumer("qcom-mdm-9k", "qcom-mdm-9k");
+ if (modem->sysedpc == NULL) {
+ dev_err(&pdev->dev, "failed to create sysedp consumer\n");
+ goto error;
+ }
+#endif
+
+ /* Register misc mdm driver */
+ pr_info("%s: Registering mdm modem\n", __func__);
+ ret = misc_register(&mdm_modem_misc);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to register mdm_modem_misc driver\n");
+ goto error;
+ }
+
+ return ret;
+error:
+
+ mdm_unloaded_info(modem);
+
+ unregister_pm_notifier(&modem->pm_notifier);
+ usb_unregister_notify(&modem->usb_notifier);
+
+ cancel_work_sync(&modem->host_reset_work);
+ cancel_work_sync(&modem->host_load_work);
+ cancel_work_sync(&modem->host_unload_work);
+ cancel_work_sync(&modem->cpu_boost_work);
+ cancel_delayed_work_sync(&modem->cpu_unboost_work);
+ cancel_work_sync(&modem->mdm_hsic_ready_work);
+ cancel_work_sync(&modem->mdm_status_work);
+ cancel_work_sync(&modem->mdm_errfatal_work);
+#ifdef CONFIG_MDM_FTRACE_DEBUG
+ cancel_work_sync(&modem->ftrace_enable_log_work);
+#endif
+#ifdef CONFIG_MSM_SYSMON_COMM
+ cancel_work_sync(&modem->mdm_restart_reason_work);
+#endif
+
+ if (modem->usb_host_wq)
+ destroy_workqueue(modem->usb_host_wq);
+ if (modem->wq)
+ destroy_workqueue(modem->wq);
+ if (modem->mdm_recovery_wq)
+ destroy_workqueue(modem->mdm_recovery_wq);
+#ifdef CONFIG_MDM_FTRACE_DEBUG
+ if (modem->ftrace_wq)
+ destroy_workqueue(modem->ftrace_wq);
+#endif
+#ifdef CONFIG_MSM_SYSMON_COMM
+ if (modem->mdm_restart_wq)
+ destroy_workqueue(modem->mdm_restart_wq);
+#endif
+
+ pm_qos_remove_request(&modem->cpu_boost_req);
+
+ if (modem->wake_irq)
+ free_irq(modem->wake_irq, modem);
+ if (modem->hsic_ready_irq)
+ free_irq(modem->hsic_ready_irq, modem);
+ if (modem->status_irq)
+ free_irq(modem->status_irq, modem);
+ if (modem->errfatal_irq)
+ free_irq(modem->errfatal_irq, modem);
+ if (modem->ipc3_irq)
+ free_irq(modem->ipc3_irq, modem);
+ if (modem->vdd_min_irq)
+ free_irq(modem->vdd_min_irq, modem);
+
+ return ret;
+}
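mdm_init() above acquires many resources in sequence and funnels every failure to a single `error:` label whose cleanup steps are all NULL-safe, so it is correct no matter how far initialization got. A hypothetical userspace sketch of that goto-cleanup pattern (struct, field names, and `fail_second` knob are illustrative only):

```c
#include <assert.h>
#include <stdlib.h>

struct ctx {
	int *res_a;
	int *res_b;
};

static int ctx_init(struct ctx *c, int fail_second)
{
	/* start from a known state so the error path can free blindly */
	c->res_a = NULL;
	c->res_b = NULL;

	c->res_a = malloc(sizeof(int));
	if (!c->res_a)
		goto error;

	/* fail_second simulates a later acquisition step failing */
	c->res_b = fail_second ? NULL : malloc(sizeof(int));
	if (!c->res_b)
		goto error;

	return 0;

error:
	/* single cleanup label: free(NULL) is a no-op, so this is
	 * correct regardless of which step failed */
	free(c->res_a);
	free(c->res_b);
	c->res_a = NULL;
	c->res_b = NULL;
	return -1;
}
```

The driver's `error:` block follows the same discipline: each `destroy_workqueue()` and `free_irq()` is guarded by a check that the resource was actually obtained.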
+
+static int qcom_usb_modem_probe(struct platform_device *pdev)
+{
+ struct qcom_usb_modem_power_platform_data *pdata =
+ pdev->dev.platform_data;
+ struct qcom_usb_modem *modem;
+ int ret = 0;
+
+#ifdef CONFIG_MDM_POWEROFF_MODEM_IN_OFFMODE_CHARGING
+ int mfg_mode = BOARD_MFG_MODE_NORMAL;
+
+ /* Don't load the mdm2 driver if the device is in offmode charging */
+ mfg_mode = board_mfg_mode();
+ if (mfg_mode == BOARD_MFG_MODE_OFFMODE_CHARGING) {
+ /* TODO: pull AP2MDM_PMIC_RESET_N to output low to save power */
+ pr_info("%s: BOARD_MFG_MODE_OFFMODE_CHARGING\n", __func__);
+
+ return 0;
+ } else {
+ pr_info("%s: mfg_mode=[%d]\n", __func__, mfg_mode);
+ }
+#else
+ pr_info("%s: CONFIG_MDM_POWEROFF_MODEM_IN_OFFMODE_CHARGING not set\n", __func__);
+#endif
+
+ if (!pdata) {
+ dev_err(&pdev->dev, "platform_data not available\n");
+ return -EINVAL;
+ }
+
+ modem = kzalloc(sizeof(struct qcom_usb_modem), GFP_KERNEL);
+ if (!modem) {
+ dev_dbg(&pdev->dev, "failed to allocate memory\n");
+ return -ENOMEM;
+ }
+
+ ret = mdm_init(modem, pdev);
+ if (ret) {
+ kfree(modem);
+ return ret;
+ }
+
+ dev_set_drvdata(&pdev->dev, modem);
+
+ return ret;
+}
+
+static int __exit qcom_usb_modem_remove(struct platform_device *pdev)
+{
+ struct qcom_usb_modem *modem = platform_get_drvdata(pdev);
+
+#ifdef CONFIG_MDM_POWEROFF_MODEM_IN_OFFMODE_CHARGING
+ /* Skip teardown if the device booted in offmode charging (driver was never loaded) */
+ int mfg_mode = BOARD_MFG_MODE_NORMAL;
+ mfg_mode = board_mfg_mode();
+ if (mfg_mode == BOARD_MFG_MODE_OFFMODE_CHARGING) {
+ /* TODO: To free AP2MDM_PMIC_RESET_N gpio */
+ pr_info("%s: BOARD_MFG_MODE_OFFMODE_CHARGING\n", __func__);
+
+ return 0;
+ }
+#endif
+
+ misc_deregister(&mdm_modem_misc);
+
+#ifdef CONFIG_MDM_SYSEDP
+ sysedp_free_consumer(modem->sysedpc);
+#endif
+
+ mdm_unloaded_info(modem);
+
+ unregister_pm_notifier(&modem->pm_notifier);
+ usb_unregister_notify(&modem->usb_notifier);
+
+ if (modem->wake_irq)
+ free_irq(modem->wake_irq, modem);
+ if (modem->errfatal_irq)
+ free_irq(modem->errfatal_irq, modem);
+ if (modem->hsic_ready_irq)
+ free_irq(modem->hsic_ready_irq, modem);
+ if (modem->status_irq)
+ free_irq(modem->status_irq, modem);
+ if (modem->ipc3_irq)
+ free_irq(modem->ipc3_irq, modem);
+ if (modem->vdd_min_irq)
+ free_irq(modem->vdd_min_irq, modem);
+
+ if (modem->ops && modem->ops->remove)
+ modem->ops->remove(modem);
+
+ cancel_work_sync(&modem->host_reset_work);
+ cancel_work_sync(&modem->host_load_work);
+ cancel_work_sync(&modem->host_unload_work);
+ cancel_work_sync(&modem->cpu_boost_work);
+ cancel_delayed_work_sync(&modem->cpu_unboost_work);
+ cancel_work_sync(&modem->mdm_hsic_ready_work);
+ cancel_work_sync(&modem->mdm_status_work);
+ cancel_work_sync(&modem->mdm_errfatal_work);
+#ifdef CONFIG_MDM_FTRACE_DEBUG
+ cancel_work_sync(&modem->ftrace_enable_log_work);
+#endif
+#ifdef CONFIG_MSM_SYSMON_COMM
+ cancel_work_sync(&modem->mdm_restart_reason_work);
+#endif
+
+ if (modem->usb_host_wq)
+ destroy_workqueue(modem->usb_host_wq);
+ if (modem->wq)
+ destroy_workqueue(modem->wq);
+ if (modem->mdm_recovery_wq)
+ destroy_workqueue(modem->mdm_recovery_wq);
+#ifdef CONFIG_MDM_FTRACE_DEBUG
+ if (modem->ftrace_wq)
+ destroy_workqueue(modem->ftrace_wq);
+#endif
+#ifdef CONFIG_MSM_SYSMON_COMM
+ if (modem->mdm_restart_wq)
+ destroy_workqueue(modem->mdm_restart_wq);
+#endif
+ pm_qos_remove_request(&modem->cpu_boost_req);
+
+ kfree(modem);
+ return 0;
+}
+
+static void qcom_usb_modem_shutdown(struct platform_device *pdev)
+{
+ struct qcom_usb_modem *modem = platform_get_drvdata(pdev);
+
+ if (modem->mdm_debug_on)
+ pr_info("%s: setting AP2MDM_STATUS low for a graceful restart\n", __func__);
+
+ mutex_lock(&modem->lock);
+ mdm_disable_irqs(modem, false);
+ mdm_disable_irqs(modem, true);
+
+ modem->mdm_status = MDM_STATUS_POWER_DOWN;
+ if (modem->mdm_debug_on)
+ pr_info("%s: modem->mdm_status=0x%x\n", __func__, modem->mdm_status);
+ mutex_unlock(&modem->lock);
+
+ atomic_set(&modem->final_efs_wait, 1);
+
+ if (gpio_is_valid(modem->pdata->ap2mdm_status_gpio))
+ gpio_set_value(modem->pdata->ap2mdm_status_gpio, 0);
+
+ if (gpio_is_valid(modem->pdata->ap2mdm_wakeup_gpio))
+ gpio_set_value(modem->pdata->ap2mdm_wakeup_gpio, 1);
+
+ if (modem->ops && modem->ops->stop)
+ modem->ops->stop(modem);
+
+ if (gpio_is_valid(modem->pdata->ap2mdm_wakeup_gpio))
+ gpio_set_value(modem->pdata->ap2mdm_wakeup_gpio, 0);
+
+ return;
+}
+
+#ifdef CONFIG_PM
+static int qcom_usb_modem_suspend(struct platform_device *pdev,
+ pm_message_t state)
+{
+ struct qcom_usb_modem *modem = platform_get_drvdata(pdev);
+
+ if (modem->mdm_debug_on)
+ pr_info("%s\n", __func__);
+
+ /* send L3 hint to modem */
+ if (modem->ops && modem->ops->suspend)
+ modem->ops->suspend();
+
+ return 0;
+}
+
+static int qcom_usb_modem_resume(struct platform_device *pdev)
+{
+ struct qcom_usb_modem *modem = platform_get_drvdata(pdev);
+
+ if (modem->mdm_debug_on)
+ pr_info("%s\n", __func__);
+
+ /* send L3->L0 hint to modem */
+ if (modem->ops && modem->ops->resume)
+ modem->ops->resume();
+
+ return 0;
+}
+#endif
+
+static struct platform_driver qcom_usb_modem_power_driver = {
+ .driver = {
+ .name = "qcom_usb_modem_power",
+ .owner = THIS_MODULE,
+ },
+ .probe = qcom_usb_modem_probe,
+ .remove = __exit_p(qcom_usb_modem_remove),
+ .shutdown = qcom_usb_modem_shutdown,
+#ifdef CONFIG_PM
+ .suspend = qcom_usb_modem_suspend,
+ .resume = qcom_usb_modem_resume,
+#endif
+};
+
+static int __init qcom_usb_modem_power_init(void)
+{
+ return platform_driver_register(&qcom_usb_modem_power_driver);
+}
+
+module_init(qcom_usb_modem_power_init);
+
+static void __exit qcom_usb_modem_power_exit(void)
+{
+ platform_driver_unregister(&qcom_usb_modem_power_driver);
+}
+
+module_exit(qcom_usb_modem_power_exit);
+
+MODULE_DESCRIPTION("Qualcomm USB modem power management driver");
+MODULE_LICENSE("GPL");
diff --git a/drivers/misc/qcom-mdm-9k/subsystem_notif.c b/drivers/misc/qcom-mdm-9k/subsystem_notif.c
new file mode 100644
index 0000000..73e4176
--- /dev/null
+++ b/drivers/misc/qcom-mdm-9k/subsystem_notif.c
@@ -0,0 +1,222 @@
+/* Copyright (c) 2011, 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ *
+ * Subsystem Notifier -- Provides notifications
+ * of subsys events.
+ *
+ * Use subsys_notif_register_notifier to register for notifications
+ * and subsys_notif_queue_notification to send notifications.
+ *
+ */
+
+#include <linux/notifier.h>
+#include <linux/init.h>
+#include <linux/debugfs.h>
+#include <linux/module.h>
+#include <linux/workqueue.h>
+#include <linux/stringify.h>
+#include <linux/delay.h>
+#include <linux/slab.h>
+
+#include "subsystem_notif.h"
+
+struct subsys_notif_info {
+ char name[50];
+ struct srcu_notifier_head subsys_notif_rcvr_list;
+ struct list_head list;
+};
+
+static LIST_HEAD(subsystem_list);
+static DEFINE_MUTEX(notif_lock);
+static DEFINE_MUTEX(notif_add_lock);
+
+#if defined(SUBSYS_RESTART_DEBUG)
+static void subsys_notif_reg_test_notifier(const char *);
+#endif
+
+static struct subsys_notif_info *_notif_find_subsys(const char *subsys_name)
+{
+ struct subsys_notif_info *subsys;
+
+ mutex_lock(&notif_lock);
+ list_for_each_entry(subsys, &subsystem_list, list)
+ if (!strncmp(subsys->name, subsys_name,
+ ARRAY_SIZE(subsys->name))) {
+ mutex_unlock(&notif_lock);
+ return subsys;
+ }
+ mutex_unlock(&notif_lock);
+
+ return NULL;
+}
+
+void *subsys_notif_register_notifier(
+ const char *subsys_name, struct notifier_block *nb)
+{
+ int ret;
+ struct subsys_notif_info *subsys = _notif_find_subsys(subsys_name);
+
+ if (!subsys) {
+
+ /* Possible first time reference to this subsystem. Add it. */
+ subsys = (struct subsys_notif_info *)
+ subsys_notif_add_subsys(subsys_name);
+
+ if (!subsys)
+ return ERR_PTR(-EINVAL);
+ }
+
+ ret = srcu_notifier_chain_register(
+ &subsys->subsys_notif_rcvr_list, nb);
+
+ if (ret < 0)
+ return ERR_PTR(ret);
+
+ return subsys;
+}
+EXPORT_SYMBOL(subsys_notif_register_notifier);
+
+int subsys_notif_unregister_notifier(void *subsys_handle,
+ struct notifier_block *nb)
+{
+ int ret;
+ struct subsys_notif_info *subsys =
+ (struct subsys_notif_info *)subsys_handle;
+
+ if (!subsys)
+ return -EINVAL;
+
+ ret = srcu_notifier_chain_unregister(
+ &subsys->subsys_notif_rcvr_list, nb);
+
+ return ret;
+}
+EXPORT_SYMBOL(subsys_notif_unregister_notifier);
+
+void *subsys_notif_add_subsys(const char *subsys_name)
+{
+ struct subsys_notif_info *subsys = NULL;
+
+ if (!subsys_name)
+ goto done;
+
+ mutex_lock(&notif_add_lock);
+
+ subsys = _notif_find_subsys(subsys_name);
+
+ if (subsys) {
+ mutex_unlock(&notif_add_lock);
+ goto done;
+ }
+
+ subsys = kmalloc(sizeof(struct subsys_notif_info), GFP_KERNEL);
+
+ if (!subsys) {
+ mutex_unlock(&notif_add_lock);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ strlcpy(subsys->name, subsys_name, ARRAY_SIZE(subsys->name));
+
+ srcu_init_notifier_head(&subsys->subsys_notif_rcvr_list);
+
+ INIT_LIST_HEAD(&subsys->list);
+
+ mutex_lock(&notif_lock);
+ list_add_tail(&subsys->list, &subsystem_list);
+ mutex_unlock(&notif_lock);
+
+ #if defined(SUBSYS_RESTART_DEBUG)
+ subsys_notif_reg_test_notifier(subsys->name);
+ #endif
+
+ mutex_unlock(&notif_add_lock);
+
+done:
+ return subsys;
+}
+EXPORT_SYMBOL(subsys_notif_add_subsys);
+
+int subsys_notif_queue_notification(void *subsys_handle,
+ enum subsys_notif_type notif_type,
+ void *data)
+{
+ int ret = 0;
+ struct subsys_notif_info *subsys =
+ (struct subsys_notif_info *) subsys_handle;
+
+ if (!subsys)
+ return -EINVAL;
+
+ if (notif_type < 0 || notif_type >= SUBSYS_NOTIF_TYPE_COUNT)
+ return -EINVAL;
+
+ ret = srcu_notifier_call_chain(
+ &subsys->subsys_notif_rcvr_list, notif_type,
+ data);
+ return ret;
+}
+EXPORT_SYMBOL(subsys_notif_queue_notification);
+
+#if defined(SUBSYS_RESTART_DEBUG)
+static const char *notif_to_string(enum subsys_notif_type notif_type)
+{
+ switch (notif_type) {
+
+ case SUBSYS_BEFORE_SHUTDOWN:
+ return __stringify(SUBSYS_BEFORE_SHUTDOWN);
+
+ case SUBSYS_AFTER_SHUTDOWN:
+ return __stringify(SUBSYS_AFTER_SHUTDOWN);
+
+ case SUBSYS_BEFORE_POWERUP:
+ return __stringify(SUBSYS_BEFORE_POWERUP);
+
+ case SUBSYS_AFTER_POWERUP:
+ return __stringify(SUBSYS_AFTER_POWERUP);
+
+ default:
+ return "unknown";
+ }
+}
+
+static int subsys_notifier_test_call(struct notifier_block *this,
+ unsigned long code,
+ void *data)
+{
+ switch (code) {
+
+ default:
+ printk(KERN_WARNING "%s: Notification %s from subsystem %p\n",
+ __func__, notif_to_string(code), data);
+ break;
+
+ }
+
+ return NOTIFY_DONE;
+}
+
+static struct notifier_block nb = {
+ .notifier_call = subsys_notifier_test_call,
+};
+
+static void subsys_notif_reg_test_notifier(const char *subsys_name)
+{
+ void *handle = subsys_notif_register_notifier(subsys_name, &nb);
+ printk(KERN_WARNING "%s: Registered test notifier, handle=%p\n",
+ __func__, handle);
+}
+#endif
+
+MODULE_DESCRIPTION("Subsystem Restart Notifier");
+MODULE_VERSION("1.0");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/misc/qcom-mdm-9k/subsystem_notif.h b/drivers/misc/qcom-mdm-9k/subsystem_notif.h
new file mode 100644
index 0000000..db421ca
--- /dev/null
+++ b/drivers/misc/qcom-mdm-9k/subsystem_notif.h
@@ -0,0 +1,87 @@
+/* Copyright (c) 2011, 2013 - 2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ *
+ * Subsystem restart notifier API header
+ *
+ */
+
+#ifndef _SUBSYS_NOTIFIER_H
+#define _SUBSYS_NOTIFIER_H
+
+#include <linux/notifier.h>
+
+enum subsys_notif_type {
+ SUBSYS_BEFORE_SHUTDOWN,
+ SUBSYS_AFTER_SHUTDOWN,
+ SUBSYS_BEFORE_POWERUP,
+ SUBSYS_AFTER_POWERUP,
+ SUBSYS_RAMDUMP_NOTIFICATION,
+ SUBSYS_POWERUP_FAILURE,
+ SUBSYS_PROXY_VOTE,
+ SUBSYS_PROXY_UNVOTE,
+ SUBSYS_SOC_RESET,
+ SUBSYS_NOTIF_TYPE_COUNT
+};
+
+#if defined(CONFIG_MSM_SUBSYSTEM_RESTART)
+/* Use the subsys_notif_register_notifier API to register for notifications for
+ * a particular subsystem. This API will return a handle that can be used to
+ * un-reg for notifications using the subsys_notif_unregister_notifier API by
+ * passing in that handle as an argument.
+ *
+ * On receiving a notification, the second (unsigned long) argument of the
+ * notifier callback will contain the notification type, and the third (void *)
+ * argument will contain the handle that was returned by
+ * subsys_notif_register_notifier.
+ */
+void *subsys_notif_register_notifier(
+ const char *subsys_name, struct notifier_block *nb);
+int subsys_notif_unregister_notifier(void *subsys_handle,
+ struct notifier_block *nb);
+
+/* Use the subsys_notif_add_subsys API to initialize the notifier chains for
+ * a particular subsystem. This API will return a handle that can be used to
+ * queue notifications using the subsys_notif_queue_notification API by passing
+ * in that handle as an argument.
+ */
+void *subsys_notif_add_subsys(const char *);
+int subsys_notif_queue_notification(void *subsys_handle,
+ enum subsys_notif_type notif_type,
+ void *data);
+#else
+
+static inline void *subsys_notif_register_notifier(
+ const char *subsys_name, struct notifier_block *nb)
+{
+ return NULL;
+}
+
+static inline int subsys_notif_unregister_notifier(void *subsys_handle,
+ struct notifier_block *nb)
+{
+ return 0;
+}
+
+static inline void *subsys_notif_add_subsys(const char *subsys_name)
+{
+ return NULL;
+}
+
+static inline int subsys_notif_queue_notification(void *subsys_handle,
+ enum subsys_notif_type notif_type,
+ void *data)
+{
+ return 0;
+}
+#endif /* CONFIG_MSM_SUBSYSTEM_RESTART */
+
+#endif
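The API above is a classic notifier-chain design: receivers register callbacks against a named subsystem, and the restart machinery broadcasts event types down the chain. A hypothetical userspace analogue (the real code uses SRCU notifier chains; this sketch keeps a fixed callback array, and all names here are illustrative):

```c
#include <assert.h>
#include <stddef.h>

enum notif_type { BEFORE_SHUTDOWN, AFTER_SHUTDOWN, NOTIF_TYPE_COUNT };

#define MAX_CBS 4
typedef int (*notif_cb)(enum notif_type, void *);

struct notif_chain {
	notif_cb cbs[MAX_CBS];
	int n;
};

static int chain_register(struct notif_chain *c, notif_cb cb)
{
	if (c->n >= MAX_CBS)
		return -1;
	c->cbs[c->n++] = cb;
	return 0;
}

static int chain_notify(struct notif_chain *c, enum notif_type t, void *data)
{
	int i, ret = 0;

	/* invoke every registered receiver with the event type */
	for (i = 0; i < c->n; i++)
		ret |= c->cbs[i](t, data);
	return ret;
}

/* test receiver: records that it was called and with what */
static int seen;
static int test_cb(enum notif_type t, void *data)
{
	seen = (int)t + 1;
	return 0;
}
```

In the driver, `subsys_notif_queue_notification()` plays the role of `chain_notify()`, and the second argument of each receiver's callback carries the `subsys_notif_type`.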
diff --git a/drivers/misc/qcom-mdm-9k/subsystem_restart.c b/drivers/misc/qcom-mdm-9k/subsystem_restart.c
new file mode 100644
index 0000000..eab203c
--- /dev/null
+++ b/drivers/misc/qcom-mdm-9k/subsystem_restart.c
@@ -0,0 +1,686 @@
+/* Copyright (c) 2011-2012, Code Aurora Forum. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#define pr_fmt(fmt) "subsys-restart: %s(): " fmt, __func__
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/uaccess.h>
+#include <linux/module.h>
+#include <linux/fs.h>
+#include <linux/proc_fs.h>
+#include <linux/delay.h>
+#include <linux/list.h>
+#include <linux/io.h>
+#include <linux/kthread.h>
+#include <linux/time.h>
+#include <linux/wakelock.h>
+#include <linux/suspend.h>
+#include <asm/current.h>
+#include <mach/board_htc.h>
+#include <mach/socinfo.h>
+
+#include "subsystem_notif.h"
+#include "subsystem_restart.h"
+
+#define EXTERNAL_MODEM "external_modem"
+
+struct subsys_soc_restart_order {
+ const char * const *subsystem_list;
+ int count;
+
+ struct mutex shutdown_lock;
+ struct mutex powerup_lock;
+ struct subsys_data *subsys_ptrs[];
+};
+
+struct restart_wq_data {
+ struct subsys_data *subsys;
+ struct wake_lock ssr_wake_lock;
+ char wakelockname[64];
+ int coupled;
+ struct work_struct work;
+};
+
+struct restart_log {
+ struct timeval time;
+ struct subsys_data *subsys;
+ struct list_head list;
+};
+
+static int restart_level;
+static int enable_ramdumps;
+struct workqueue_struct *ssr_wq;
+
+static LIST_HEAD(restart_log_list);
+static LIST_HEAD(subsystem_list);
+static DEFINE_SPINLOCK(subsystem_list_lock);
+static DEFINE_MUTEX(soc_order_reg_lock);
+static DEFINE_MUTEX(restart_log_mutex);
+
+bool is_in_subsystem_restart = false;
+
+/* SOC specific restart orders go here */
+
+#define DEFINE_SINGLE_RESTART_ORDER(name, order) \
+ static struct subsys_soc_restart_order __##name = { \
+ .subsystem_list = order, \
+ .count = ARRAY_SIZE(order), \
+ .subsys_ptrs = {[ARRAY_SIZE(order)] = NULL} \
+ }; \
+ static struct subsys_soc_restart_order *name[] = { \
+ &__##name, \
+ }
+
+/* MSM 8x60 restart ordering info */
+static const char * const _order_8x60_all[] = {
+ "external_modem", "modem", "lpass"
+};
+DEFINE_SINGLE_RESTART_ORDER(orders_8x60_all, _order_8x60_all);
+
+static const char * const _order_8x60_modems[] = {"external_modem", "modem"};
+DEFINE_SINGLE_RESTART_ORDER(orders_8x60_modems, _order_8x60_modems);
+
+/* MSM 8960 restart ordering info */
+static const char * const order_8960[] = {"modem", "lpass"};
+
+static struct subsys_soc_restart_order restart_orders_8960_one = {
+ .subsystem_list = order_8960,
+ .count = ARRAY_SIZE(order_8960),
+ .subsys_ptrs = {[ARRAY_SIZE(order_8960)] = NULL}
+ };
+
+static struct subsys_soc_restart_order *restart_orders_8960[] = {
+ &restart_orders_8960_one,
+};
+
+/* These will be assigned to one of the sets above after
+ * runtime SoC identification.
+ */
+static struct subsys_soc_restart_order **restart_orders;
+static int n_restart_orders;
+
+module_param(enable_ramdumps, int, S_IRUGO | S_IWUSR);
+
+static struct subsys_soc_restart_order *_update_restart_order(
+ struct subsys_data *subsys);
+
+int get_restart_level(void)
+{
+ return restart_level;
+}
+EXPORT_SYMBOL(get_restart_level);
+
+int get_enable_ramdumps(void)
+{
+ return enable_ramdumps;
+}
+EXPORT_SYMBOL(get_enable_ramdumps);
+
+void set_enable_ramdumps(int en)
+{
+ enable_ramdumps = en;
+}
+EXPORT_SYMBOL(set_enable_ramdumps);
+
+static void restart_level_changed(void)
+{
+ struct subsys_data *subsys;
+ unsigned long flags;
+
+ if (cpu_is_msm8x60() && restart_level == RESET_SUBSYS_COUPLED) {
+ restart_orders = orders_8x60_all;
+ n_restart_orders = ARRAY_SIZE(orders_8x60_all);
+ }
+
+ if (cpu_is_msm8x60() && restart_level == RESET_SUBSYS_MIXED) {
+ restart_orders = orders_8x60_modems;
+ n_restart_orders = ARRAY_SIZE(orders_8x60_modems);
+ }
+
+ spin_lock_irqsave(&subsystem_list_lock, flags);
+ list_for_each_entry(subsys, &subsystem_list, list)
+ subsys->restart_order = _update_restart_order(subsys);
+ spin_unlock_irqrestore(&subsystem_list_lock, flags);
+}
+
+static int restart_level_set(const char *val, struct kernel_param *kp)
+{
+ int ret;
+ int old_val = restart_level;
+
+ if (cpu_is_msm9615()) {
+ pr_err("Only Phase 1 subsystem restart is supported\n");
+ return -EINVAL;
+ }
+
+ ret = param_set_int(val, kp);
+ if (ret)
+ return ret;
+
+ switch (restart_level) {
+
+ case RESET_SOC:
+ case RESET_SUBSYS_COUPLED:
+ case RESET_SUBSYS_INDEPENDENT:
+ pr_info("Phase %d behavior activated.\n", restart_level);
+ break;
+
+ case RESET_SUBSYS_MIXED:
+ pr_info("Phase 2+ behavior activated.\n");
+ break;
+
+ default:
+ restart_level = old_val;
+ return -EINVAL;
+
+ }
+
+ if (restart_level != old_val)
+ restart_level_changed();
+
+ return 0;
+}
+
+module_param_call(restart_level, restart_level_set, param_get_int,
+ &restart_level, 0644);
+
+static struct subsys_data *_find_subsystem(const char *subsys_name)
+{
+ struct subsys_data *subsys;
+ unsigned long flags;
+
+ spin_lock_irqsave(&subsystem_list_lock, flags);
+ list_for_each_entry(subsys, &subsystem_list, list)
+ if (!strncmp(subsys->name, subsys_name,
+ SUBSYS_NAME_MAX_LENGTH)) {
+ spin_unlock_irqrestore(&subsystem_list_lock, flags);
+ return subsys;
+ }
+ spin_unlock_irqrestore(&subsystem_list_lock, flags);
+
+ return NULL;
+}
+
+static struct subsys_soc_restart_order *_update_restart_order(
+ struct subsys_data *subsys)
+{
+ int i, j;
+
+ if (!subsys)
+ return NULL;
+
+ if (!subsys->name)
+ return NULL;
+
+ mutex_lock(&soc_order_reg_lock);
+ for (j = 0; j < n_restart_orders; j++) {
+ for (i = 0; i < restart_orders[j]->count; i++)
+ if (!strncmp(restart_orders[j]->subsystem_list[i],
+ subsys->name, SUBSYS_NAME_MAX_LENGTH)) {
+
+ restart_orders[j]->subsys_ptrs[i] =
+ subsys;
+ mutex_unlock(&soc_order_reg_lock);
+ return restart_orders[j];
+ }
+ }
+
+ mutex_unlock(&soc_order_reg_lock);
+
+ return NULL;
+}
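_update_restart_order() above scans every registered restart order for the subsystem's name and returns the first order containing it. A small userspace sketch of that lookup, with illustrative data standing in for the SoC tables (names and array sizes are assumptions):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct order {
	const char *const *names;
	int count;
};

static const char *const modems[] = { "external_modem", "modem" };
static const char *const all[]    = { "external_modem", "modem", "lpass" };

static const struct order orders[] = {
	{ modems, 2 },
	{ all, 3 },
};

static const struct order *find_order(const char *name)
{
	size_t j;
	int i;

	for (j = 0; j < sizeof(orders) / sizeof(orders[0]); j++)
		for (i = 0; i < orders[j].count; i++)
			if (strcmp(orders[j].names[i], name) == 0)
				return &orders[j];	/* first match wins */
	return NULL;	/* subsystem restarts independently */
}
```

The driver additionally caches a back-pointer (`subsys_ptrs[i]`) inside the matched order so the restart worker can later walk the coupled subsystems directly.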
+
+static void _send_notification_to_order(struct subsys_data
+ **restart_list, int count,
+ enum subsys_notif_type notif_type)
+{
+ int i;
+
+ for (i = 0; i < count; i++)
+ if (restart_list[i])
+ subsys_notif_queue_notification(
+ restart_list[i]->notif_handle, notif_type, NULL);
+}
+
+static int max_restarts;
+module_param(max_restarts, int, 0644);
+
+static long max_history_time = 3600;
+module_param(max_history_time, long, 0644);
+
+static void do_epoch_check(struct subsys_data *subsys)
+{
+ int n = 0;
+ struct timeval *time_first = NULL, *curr_time;
+ struct restart_log *r_log, *temp;
+ static int max_restarts_check;
+ static long max_history_time_check;
+
+ mutex_lock(&restart_log_mutex);
+
+ max_restarts_check = max_restarts;
+ max_history_time_check = max_history_time;
+
+ /* Check if epoch checking is enabled */
+ if (!max_restarts_check)
+ goto out;
+
+ r_log = kmalloc(sizeof(struct restart_log), GFP_KERNEL);
+ if (!r_log)
+ goto out;
+ r_log->subsys = subsys;
+ do_gettimeofday(&r_log->time);
+ curr_time = &r_log->time;
+ INIT_LIST_HEAD(&r_log->list);
+
+ list_add_tail(&r_log->list, &restart_log_list);
+
+ list_for_each_entry_safe(r_log, temp, &restart_log_list, list) {
+
+ if ((curr_time->tv_sec - r_log->time.tv_sec) >
+ max_history_time_check) {
+
+ pr_debug("Deleted node with restart_time = %ld\n",
+ r_log->time.tv_sec);
+ list_del(&r_log->list);
+ kfree(r_log);
+ continue;
+ }
+ if (!n) {
+ time_first = &r_log->time;
+ pr_debug("Time_first: %ld\n", time_first->tv_sec);
+ }
+ n++;
+ pr_debug("Restart_time: %ld\n", r_log->time.tv_sec);
+ }
+
+ if (time_first && n >= max_restarts_check) {
+ if ((curr_time->tv_sec - time_first->tv_sec) <
+ max_history_time_check)
+ panic("Subsystems have crashed %d times in less than %ld seconds!",
+ max_restarts_check, max_history_time_check);
+ }
+
+out:
+ mutex_unlock(&restart_log_mutex);
+}
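do_epoch_check() above implements a sliding-window rate limit: log each restart timestamp, age out entries older than `max_history_time`, and panic once `max_restarts` land inside the window. A hypothetical userspace sketch of that logic using plain second counters instead of `timeval` and a list (struct layout and names are illustrative):

```c
#include <assert.h>

#define MAX_LOG 16

struct epoch {
	long times[MAX_LOG];
	int n;
	int max_restarts;
	long max_history_time;	/* window width, seconds */
};

/* Record one restart at time `now`. Returns 1 when the event count
 * inside the window reaches the limit (where the driver panics). */
static int epoch_check(struct epoch *e, long now)
{
	int i, kept = 0;

	/* drop events that have aged out of the history window */
	for (i = 0; i < e->n; i++)
		if (now - e->times[i] <= e->max_history_time)
			e->times[kept++] = e->times[i];
	e->n = kept;

	if (e->n < MAX_LOG)
		e->times[e->n++] = now;

	return e->n >= e->max_restarts;
}
```

As in the driver, setting the restart limit to zero would disable the check entirely, since the threshold can never be reached meaningfully.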
+
+static void subsystem_restart_wq_func(struct work_struct *work)
+{
+ struct restart_wq_data *r_work = container_of(work,
+ struct restart_wq_data, work);
+ struct subsys_data **restart_list;
+ struct subsys_data *subsys = r_work->subsys;
+ struct subsys_soc_restart_order *soc_restart_order = NULL;
+
+ struct mutex *powerup_lock;
+ struct mutex *shutdown_lock;
+
+ int i;
+ int restart_list_count = 0;
+
+ if (r_work->coupled)
+ soc_restart_order = subsys->restart_order;
+
+ /* It's OK to not take the registration lock at this point.
+ * This is because the subsystem list inside the relevant
+ * restart order is not being traversed.
+ */
+ if (!soc_restart_order) {
+ restart_list = subsys->single_restart_list;
+ restart_list_count = 1;
+ powerup_lock = &subsys->powerup_lock;
+ shutdown_lock = &subsys->shutdown_lock;
+ } else {
+ restart_list = soc_restart_order->subsys_ptrs;
+ restart_list_count = soc_restart_order->count;
+ powerup_lock = &soc_restart_order->powerup_lock;
+ shutdown_lock = &soc_restart_order->shutdown_lock;
+ }
+
+ pr_debug("[%p]: Attempting to get shutdown lock!\n", current);
+
+ /* Try to acquire shutdown_lock. If this fails, these subsystems are
+ * already being restarted - return.
+ */
+ if (!mutex_trylock(shutdown_lock))
+ goto out;
+
+ pr_debug("[%p]: Attempting to get powerup lock!\n", current);
+
+ /* Now that we've acquired the shutdown lock, either we're the first to
+ * restart these subsystems or some other thread is doing the powerup
+ * sequence for these subsystems. In the latter case, panic and bail
+ * out, since a subsystem died in its powerup sequence.
+ */
+ if (!mutex_trylock(powerup_lock))
+ panic("%s[%p]: Subsystem died during powerup!",
+ __func__, current);
+
+ do_epoch_check(subsys);
+
+ /* Now it is necessary to take the registration lock. This is because
+ * the subsystem list in the SoC restart order will be traversed
+ * and it shouldn't be changed until _this_ restart sequence completes.
+ */
+ mutex_lock(&soc_order_reg_lock);
+
+ is_in_subsystem_restart = true;
+
+#ifdef CONFIG_QCT_9K_MODEM
+ mutex_lock(&subsys->modem->lock);
+ subsys->modem->mdm_status |= MDM_STATUS_RESETTING;
+	if (subsys->modem->mdm_debug_on)
+ pr_info("%s: modem->mdm_status=0x%x\n", __func__, subsys->modem->mdm_status);
+ mutex_unlock(&subsys->modem->lock);
+#endif
+
+ pr_debug("[%p]: Starting restart sequence for %s\n", current,
+ r_work->subsys->name);
+
+ _send_notification_to_order(restart_list,
+ restart_list_count,
+ SUBSYS_BEFORE_SHUTDOWN);
+
+ for (i = 0; i < restart_list_count; i++) {
+
+ if (!restart_list[i])
+ continue;
+
+ pr_info("[%p]: Shutting down %s\n", current,
+ restart_list[i]->name);
+
+ if (restart_list[i]->shutdown(subsys) < 0)
+ panic("subsys-restart: %s[%p]: Failed to shutdown %s!",
+ __func__, current, restart_list[i]->name);
+ }
+
+ _send_notification_to_order(restart_list, restart_list_count,
+ SUBSYS_AFTER_SHUTDOWN);
+
+ /* Now that we've finished shutting down these subsystems, release the
+ * shutdown lock. If a subsystem restart request comes in for a
+ * subsystem in _this_ restart order after the unlock below, and
+ * before the powerup lock is released, panic and bail out.
+ */
+ mutex_unlock(shutdown_lock);
+
+ /* Collect ram dumps for all subsystems in order here */
+ for (i = 0; i < restart_list_count; i++) {
+ if (!restart_list[i])
+ continue;
+
+ pr_info("[%p]: Ramdump[%d] %s\n", current, i,
+ restart_list[i]->name);
+
+ if (restart_list[i]->ramdump) {
+ if (restart_list[i]->ramdump(enable_ramdumps,
+ subsys) < 0) {
+ pr_warn("%s[%p]: Ramdump failed.\n",
+ restart_list[i]->name, current);
+ } else {
+ pr_info("%s[%p]: Ramdump ok.\n",
+ restart_list[i]->name, current);
+ }
+ } else {
+ pr_info("%s[%p]: no ramdump.\n",
+ restart_list[i]->name, current);
+ }
+ }
+
+ _send_notification_to_order(restart_list,
+ restart_list_count,
+ SUBSYS_BEFORE_POWERUP);
+
+ for (i = restart_list_count - 1; i >= 0; i--) {
+
+ if (!restart_list[i])
+ continue;
+
+ pr_info("[%p]: Powering up %s\n", current,
+ restart_list[i]->name);
+
+ if (restart_list[i]->powerup(subsys) < 0)
+ panic("%s[%p]: Failed to powerup %s!", __func__,
+ current, restart_list[i]->name);
+ }
+
+ _send_notification_to_order(restart_list,
+ restart_list_count,
+ SUBSYS_AFTER_POWERUP);
+
+ pr_info("[%p]: Restart sequence for %s completed.\n",
+ current, r_work->subsys->name);
+
+ is_in_subsystem_restart = false;
+
+#ifdef CONFIG_QCT_9K_MODEM
+ mutex_lock(&subsys->modem->lock);
+ subsys->modem->mdm_status &= ~MDM_STATUS_RESETTING;
+	if (subsys->modem->mdm_debug_on)
+ pr_info("%s: modem->mdm_status=0x%x\n", __func__, subsys->modem->mdm_status);
+ mutex_unlock(&subsys->modem->lock);
+#endif
+
+ mutex_unlock(powerup_lock);
+
+ mutex_unlock(&soc_order_reg_lock);
+
+ pr_debug("[%p]: Released powerup lock!\n", current);
+
+out:
+ wake_unlock(&r_work->ssr_wake_lock);
+ wake_lock_destroy(&r_work->ssr_wake_lock);
+ kfree(r_work);
+}
+
+int subsystem_restart(const char *subsys_name)
+{
+ struct subsys_data *subsys;
+ struct restart_wq_data *data = NULL;
+ int rc;
+
+ if (!subsys_name) {
+ pr_err("Invalid subsystem name.\n");
+ return -EINVAL;
+ }
+
+ pr_info("Restart sequence requested for %s, restart_level = %d.\n",
+ subsys_name, restart_level);
+
+ /* List of subsystems is protected by a lock. New subsystems can
+ * still come in.
+ */
+ subsys = _find_subsystem(subsys_name);
+
+ if (!subsys) {
+ pr_warn("Unregistered subsystem %s!\n", subsys_name);
+ return -EINVAL;
+ }
+
+ if (restart_level != RESET_SOC) {
+ data = kzalloc(sizeof(struct restart_wq_data), GFP_KERNEL);
+ if (!data) {
+ restart_level = RESET_SOC;
+ pr_warn("Failed to alloc restart data. Resetting.\n");
+ } else {
+ if (restart_level == RESET_SUBSYS_COUPLED ||
+ restart_level == RESET_SUBSYS_MIXED)
+ data->coupled = 1;
+ else
+ data->coupled = 0;
+
+ data->subsys = subsys;
+ }
+ }
+
+ switch (restart_level) {
+
+ case RESET_SUBSYS_COUPLED:
+ case RESET_SUBSYS_MIXED:
+ case RESET_SUBSYS_INDEPENDENT:
+ pr_debug("Restarting %s [level=%d]!\n", subsys_name,
+ restart_level);
+
+ snprintf(data->wakelockname, sizeof(data->wakelockname),
+ "ssr(%s)", subsys_name);
+ wake_lock_init(&data->ssr_wake_lock, WAKE_LOCK_SUSPEND,
+ data->wakelockname);
+ wake_lock(&data->ssr_wake_lock);
+
+		INIT_WORK(&data->work, subsystem_restart_wq_func);
+		/* Queue onto the dedicated ssr workqueue allocated at init.
+		 * queue_work() returns false only if the work item was
+		 * already pending, which cannot happen for a freshly
+		 * allocated item.
+		 */
+		rc = queue_work(ssr_wq, &data->work);
+
+		if (!rc)
+			panic("%s: Unable to schedule work to restart %s",
+				__func__, subsys->name);
+ break;
+
+ case RESET_SOC:
+#ifdef CONFIG_MDM_ERRMSG
+ if (strcmp(subsys_name, EXTERNAL_MODEM) == 0) {
+ char *errmsg = get_mdm_errmsg(subsys->modem);
+ panic("subsys-restart: %s crashed. %s",
+ subsys->name,
+ errmsg ? errmsg : "");
+ }
+ else
+#endif
+ {
+ panic("subsys-restart: Resetting the SoC - %s crashed.",
+ subsys->name);
+ }
+ break;
+
+ default:
+ panic("subsys-restart: Unknown restart level!\n");
+ break;
+
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(subsystem_restart);
+
+int ssr_register_subsystem(struct subsys_data *subsys)
+{
+ unsigned long flags;
+
+ if (!subsys)
+ goto err;
+
+ if (!subsys->name)
+ goto err;
+
+ if (!subsys->powerup || !subsys->shutdown)
+ goto err;
+
+ subsys->notif_handle = subsys_notif_add_subsys(subsys->name);
+ subsys->restart_order = _update_restart_order(subsys);
+ subsys->single_restart_list[0] = subsys;
+
+ mutex_init(&subsys->shutdown_lock);
+ mutex_init(&subsys->powerup_lock);
+
+ spin_lock_irqsave(&subsystem_list_lock, flags);
+ list_add(&subsys->list, &subsystem_list);
+ spin_unlock_irqrestore(&subsystem_list_lock, flags);
+
+ return 0;
+
+err:
+ return -EINVAL;
+}
+EXPORT_SYMBOL(ssr_register_subsystem);
+
+static int ssr_panic_handler(struct notifier_block *this,
+ unsigned long event, void *ptr)
+{
+ struct subsys_data *subsys;
+
+ list_for_each_entry(subsys, &subsystem_list, list)
+ if (subsys->crash_shutdown)
+ subsys->crash_shutdown(subsys);
+ return NOTIFY_DONE;
+}
+
+static struct notifier_block panic_nb = {
+ .notifier_call = ssr_panic_handler,
+};
+
+static int __init ssr_init_soc_restart_orders(void)
+{
+ int i;
+
+ atomic_notifier_chain_register(&panic_notifier_list,
+ &panic_nb);
+
+ if (cpu_is_msm8x60()) {
+ for (i = 0; i < ARRAY_SIZE(orders_8x60_all); i++) {
+ mutex_init(&orders_8x60_all[i]->powerup_lock);
+ mutex_init(&orders_8x60_all[i]->shutdown_lock);
+ }
+
+ for (i = 0; i < ARRAY_SIZE(orders_8x60_modems); i++) {
+ mutex_init(&orders_8x60_modems[i]->powerup_lock);
+ mutex_init(&orders_8x60_modems[i]->shutdown_lock);
+ }
+
+ restart_orders = orders_8x60_all;
+ n_restart_orders = ARRAY_SIZE(orders_8x60_all);
+ }
+
+ if (cpu_is_msm8960() || cpu_is_msm8930() || cpu_is_msm9615() ||
+ cpu_is_apq8064()) {
+ restart_orders = restart_orders_8960;
+ n_restart_orders = ARRAY_SIZE(restart_orders_8960);
+ }
+
+ if (restart_orders == NULL || n_restart_orders < 1) {
+ WARN_ON(1);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int __init subsys_restart_init(void)
+{
+ int ret = 0;
+
+ restart_level = RESET_SUBSYS_INDEPENDENT;
+ /* Set default ramdump capture to 0 */
+ enable_ramdumps = 0;
+
+ ssr_wq = alloc_workqueue("ssr_wq", 0, 0);
+
+ if (!ssr_wq)
+ panic("Couldn't allocate workqueue for subsystem restart.\n");
+
+	/* Enable mdm ramdump capture when USB upload is requested */
+	if (get_radio_flag() & RADIO_FLAG_USB_UPLOAD)
+		enable_ramdumps = 1;
+
+ ret = ssr_init_soc_restart_orders();
+
+ return ret;
+}
+
+arch_initcall(subsys_restart_init);
+
+MODULE_DESCRIPTION("Subsystem Restart Driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/misc/qcom-mdm-9k/subsystem_restart.h b/drivers/misc/qcom-mdm-9k/subsystem_restart.h
new file mode 100644
index 0000000..58deab4
--- /dev/null
+++ b/drivers/misc/qcom-mdm-9k/subsystem_restart.h
@@ -0,0 +1,83 @@
+/* Copyright (c) 2011, Code Aurora Forum. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef __SUBSYS_RESTART_H
+#define __SUBSYS_RESTART_H
+
+#include <linux/spinlock.h>
+#ifdef CONFIG_QCT_9K_MODEM
+#include <linux/platform_data/qcom_usb_modem_power.h>
+#endif
+
+#define SUBSYS_NAME_MAX_LENGTH 40
+
+enum {
+ RESET_SOC = 1,
+ RESET_SUBSYS_COUPLED,
+ RESET_SUBSYS_INDEPENDENT,
+ RESET_SUBSYS_MIXED = 25,
+ RESET_LEVEL_MAX
+};
+
+struct subsys_data {
+ const char *name;
+ int (*shutdown) (const struct subsys_data *);
+ int (*powerup) (const struct subsys_data *);
+ void (*crash_shutdown) (const struct subsys_data *);
+ int (*ramdump) (int, const struct subsys_data *);
+
+ /* Internal use only */
+ struct list_head list;
+ void *notif_handle;
+
+ struct mutex shutdown_lock;
+ struct mutex powerup_lock;
+
+ void *restart_order;
+ struct subsys_data *single_restart_list[1];
+
+ struct qcom_usb_modem *modem;
+};
+
+#if defined(CONFIG_MSM_SUBSYSTEM_RESTART)
+
+int get_restart_level(void);
+int subsystem_restart(const char *subsys_name);
+int ssr_register_subsystem(struct subsys_data *subsys);
+
+#ifdef CONFIG_QCT_9K_MODEM
+#ifdef CONFIG_MDM_ERRMSG
+char *get_mdm_errmsg(struct qcom_usb_modem *modem);
+#endif
+#endif
+
+#else
+
+static inline int get_restart_level(void)
+{
+ return 0;
+}
+
+static inline int subsystem_restart(const char *subsystem_name)
+{
+ return 0;
+}
+
+static inline int ssr_register_subsystem(struct subsys_data *subsys)
+{
+ return 0;
+}
+
+#endif /* CONFIG_MSM_SUBSYSTEM_RESTART */
+
+#endif
diff --git a/drivers/misc/qcom-mdm-9k/sysmon.c b/drivers/misc/qcom-mdm-9k/sysmon.c
new file mode 100644
index 0000000..2dae906
--- /dev/null
+++ b/drivers/misc/qcom-mdm-9k/sysmon.c
@@ -0,0 +1,395 @@
+/*
+ * Copyright (c) 2011-2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#define pr_fmt(fmt) "[SYSMON]: " fmt
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/string.h>
+#include <linux/completion.h>
+#include <linux/platform_device.h>
+#include <mach/msm_smd.h>
+
+#include "subsystem_notif.h"
+#include "hsic_sysmon.h"
+#include "sysmon.h"
+
+#ifdef CONFIG_QCT_9K_MODEM
+#include <mach/board_htc.h>
+#endif
+
+#define TX_BUF_SIZE 50
+#define RX_BUF_SIZE 500
+#define TIMEOUT_MS 5000
+
+enum transports {
+ TRANSPORT_SMD,
+ TRANSPORT_HSIC,
+};
+
+struct sysmon_subsys {
+ struct mutex lock;
+ struct smd_channel *chan;
+ bool chan_open;
+ struct completion resp_ready;
+ char rx_buf[RX_BUF_SIZE];
+ enum transports transport;
+ struct device *dev;
+};
+
+static struct sysmon_subsys subsys[SYSMON_NUM_SS] = {
+ [SYSMON_SS_MODEM].transport = TRANSPORT_SMD,
+ [SYSMON_SS_LPASS].transport = TRANSPORT_SMD,
+ [SYSMON_SS_WCNSS].transport = TRANSPORT_SMD,
+ [SYSMON_SS_DSPS].transport = TRANSPORT_SMD,
+ [SYSMON_SS_Q6FW].transport = TRANSPORT_SMD,
+ [SYSMON_SS_EXT_MODEM].transport = TRANSPORT_HSIC,
+};
+
+static const char *notif_name[SUBSYS_NOTIF_TYPE_COUNT] = {
+ [SUBSYS_BEFORE_SHUTDOWN] = "before_shutdown",
+ [SUBSYS_AFTER_SHUTDOWN] = "after_shutdown",
+ [SUBSYS_BEFORE_POWERUP] = "before_powerup",
+ [SUBSYS_AFTER_POWERUP] = "after_powerup",
+};
+
+struct enum_name_map {
+ int id;
+ const char name[50];
+};
+
+static struct enum_name_map map[SYSMON_NUM_SS] = {
+ {SYSMON_SS_WCNSS, "wcnss"},
+ {SYSMON_SS_MODEM, "modem"},
+ {SYSMON_SS_LPASS, "adsp"},
+ {SYSMON_SS_Q6FW, "modem_fw"},
+ {SYSMON_SS_EXT_MODEM, "external_modem"},
+ {SYSMON_SS_DSPS, "dsps"},
+};
+
+static int sysmon_send_smd(struct sysmon_subsys *ss, const char *tx_buf,
+ size_t len)
+{
+ int ret;
+
+ if (!ss->chan_open)
+ return -ENODEV;
+
+ init_completion(&ss->resp_ready);
+ pr_debug("Sending SMD message: %s\n", tx_buf);
+ smd_write(ss->chan, tx_buf, len);
+ ret = wait_for_completion_timeout(&ss->resp_ready,
+ msecs_to_jiffies(TIMEOUT_MS));
+ if (!ret)
+ return -ETIMEDOUT;
+
+ return 0;
+}
+
+static int sysmon_send_hsic(struct sysmon_subsys *ss, const char *tx_buf,
+ size_t len)
+{
+ int ret;
+ size_t actual_len;
+
+ pr_debug("Sending HSIC message: %s\n", tx_buf);
+ ret = hsic_sysmon_write(HSIC_SYSMON_DEV_EXT_MODEM,
+ tx_buf, len, TIMEOUT_MS);
+ if (ret)
+ return ret;
+ ret = hsic_sysmon_read(HSIC_SYSMON_DEV_EXT_MODEM, ss->rx_buf,
+ ARRAY_SIZE(ss->rx_buf), &actual_len, TIMEOUT_MS);
+ return ret;
+}
+
+static int sysmon_send_msg(struct sysmon_subsys *ss, const char *tx_buf,
+ size_t len)
+{
+ int ret;
+
+ switch (ss->transport) {
+ case TRANSPORT_SMD:
+ ret = sysmon_send_smd(ss, tx_buf, len);
+ break;
+ case TRANSPORT_HSIC:
+ ret = sysmon_send_hsic(ss, tx_buf, len);
+ break;
+ default:
+ ret = -EINVAL;
+ }
+
+ if (!ret)
+ pr_debug("Received response: %s\n", ss->rx_buf);
+
+ return ret;
+}
+
+/**
+ * sysmon_send_event() - Notify a subsystem of another's state change
+ * @dest_ss: ID of subsystem the notification should be sent to
+ * @event_ss: String name of the subsystem that generated the notification
+ * @notif: ID of the notification type (ex. SUBSYS_BEFORE_SHUTDOWN)
+ *
+ * Returns 0 for success, -EINVAL for invalid destination or notification IDs,
+ * -ENODEV if the transport channel is not open, -ETIMEDOUT if the destination
+ * subsystem does not respond, and -ENOSYS if the destination subsystem
+ * responds, but with something other than an acknowledgement.
+ *
+ * If CONFIG_MSM_SYSMON_COMM is not defined, always return success (0).
+ */
+int sysmon_send_event(const char *dest_ss, const char *event_ss,
+ enum subsys_notif_type notif)
+{
+
+ char tx_buf[TX_BUF_SIZE];
+ int ret, i;
+ struct sysmon_subsys *ss = NULL;
+
+ for (i = 0; i < ARRAY_SIZE(map); i++) {
+ if (!strcmp(map[i].name, dest_ss)) {
+ ss = &subsys[map[i].id];
+ break;
+ }
+ }
+
+ if (ss == NULL)
+ return -EINVAL;
+
+ if (ss->dev == NULL)
+ return -ENODEV;
+
+ if (notif < 0 || notif >= SUBSYS_NOTIF_TYPE_COUNT || event_ss == NULL ||
+ notif_name[notif] == NULL)
+ return -EINVAL;
+
+ snprintf(tx_buf, ARRAY_SIZE(tx_buf), "ssr:%s:%s", event_ss,
+ notif_name[notif]);
+
+ mutex_lock(&ss->lock);
+ ret = sysmon_send_msg(ss, tx_buf, strlen(tx_buf));
+ if (ret)
+ goto out;
+
+ if (strncmp(ss->rx_buf, "ssr:ack", ARRAY_SIZE(ss->rx_buf)))
+ ret = -ENOSYS;
+out:
+ mutex_unlock(&ss->lock);
+ return ret;
+}
+
+/**
+ * sysmon_send_shutdown() - send shutdown command to a
+ * subsystem.
+ * @dest_ss: ID of subsystem to send to.
+ *
+ * Returns 0 for success, -EINVAL for an invalid destination, -ENODEV if
+ * the SMD transport channel is not open, -ETIMEDOUT if the destination
+ * subsystem does not respond, and -ENOSYS if the destination subsystem
+ * responds with something unexpected.
+ *
+ * If CONFIG_MSM_SYSMON_COMM is not defined, always return success (0).
+ */
+int sysmon_send_shutdown(enum subsys_id dest_ss)
+{
+	struct sysmon_subsys *ss;
+	const char tx_buf[] = "system:shutdown";
+	const char expect[] = "system:ack";
+	size_t prefix_len = ARRAY_SIZE(expect) - 1;
+	int ret;
+
+	/* Validate the index before using it to address the subsys array */
+	if (dest_ss < 0 || dest_ss >= SYSMON_NUM_SS)
+		return -EINVAL;
+
+	ss = &subsys[dest_ss];
+	if (ss->dev == NULL)
+		return -ENODEV;
+
+ mutex_lock(&ss->lock);
+ ret = sysmon_send_msg(ss, tx_buf, ARRAY_SIZE(tx_buf));
+ if (ret)
+ goto out;
+
+ if (strncmp(ss->rx_buf, expect, prefix_len))
+ ret = -ENOSYS;
+out:
+ mutex_unlock(&ss->lock);
+ return ret;
+}
+
+/**
+ * sysmon_get_reason() - Retrieve failure reason from a subsystem.
+ * @dest_ss: ID of subsystem to query
+ * @buf: Caller-allocated buffer for the returned NUL-terminated reason
+ * @len: Length of @buf
+ *
+ * Returns 0 for success, -EINVAL for an invalid destination, -ENODEV if
+ * the SMD transport channel is not open, -ETIMEDOUT if the destination
+ * subsystem does not respond, and -ENOSYS if the destination subsystem
+ * responds with something unexpected.
+ *
+ * If CONFIG_MSM_SYSMON_COMM is not defined, always return success (0).
+ */
+int sysmon_get_reason(enum subsys_id dest_ss, char *buf, size_t len)
+{
+	struct sysmon_subsys *ss;
+	const char tx_buf[] = "ssr:retrieve:sfr";
+	const char expect[] = "ssr:return:";
+	size_t prefix_len = ARRAY_SIZE(expect) - 1;
+	int ret;
+
+	/* Validate arguments before indexing into the subsys array */
+	if (dest_ss < 0 || dest_ss >= SYSMON_NUM_SS ||
+	    buf == NULL || len == 0)
+		return -EINVAL;
+
+	ss = &subsys[dest_ss];
+	if (ss->dev == NULL)
+		return -ENODEV;
+
+ mutex_lock(&ss->lock);
+ ret = sysmon_send_msg(ss, tx_buf, ARRAY_SIZE(tx_buf));
+ if (ret)
+ goto out;
+
+ if (strncmp(ss->rx_buf, expect, prefix_len)) {
+ ret = -ENOSYS;
+ goto out;
+ }
+ strlcpy(buf, ss->rx_buf + prefix_len, len);
+out:
+ mutex_unlock(&ss->lock);
+ return ret;
+}
+
+static void sysmon_smd_notify(void *priv, unsigned int smd_event)
+{
+ struct sysmon_subsys *ss = priv;
+
+ switch (smd_event) {
+ case SMD_EVENT_DATA: {
+ if (smd_read_avail(ss->chan) > 0) {
+ smd_read_from_cb(ss->chan, ss->rx_buf,
+ ARRAY_SIZE(ss->rx_buf));
+ complete(&ss->resp_ready);
+ }
+ break;
+ }
+ case SMD_EVENT_OPEN:
+ ss->chan_open = true;
+ break;
+ case SMD_EVENT_CLOSE:
+ ss->chan_open = false;
+ break;
+ }
+}
+
+static int sysmon_probe(struct platform_device *pdev)
+{
+ struct sysmon_subsys *ss;
+ int ret;
+
+ if (pdev->id < 0 || pdev->id >= SYSMON_NUM_SS)
+ return -ENODEV;
+
+#ifdef CONFIG_QCT_9K_MODEM
+ pr_info("%s() name=%s\n", __func__, pdev->name);
+#endif
+
+ ss = &subsys[pdev->id];
+ mutex_init(&ss->lock);
+
+ switch (ss->transport) {
+ case TRANSPORT_SMD:
+ if (pdev->id >= SMD_NUM_TYPE)
+ return -EINVAL;
+
+ ret = smd_named_open_on_edge("sys_mon", pdev->id, &ss->chan, ss,
+ sysmon_smd_notify);
+ if (ret) {
+ pr_err("SMD open failed\n");
+ return ret;
+ }
+
+ smd_disable_read_intr(ss->chan);
+ break;
+ case TRANSPORT_HSIC:
+ if (pdev->id < SMD_NUM_TYPE)
+ return -EINVAL;
+
+ ret = hsic_sysmon_open(HSIC_SYSMON_DEV_EXT_MODEM);
+ if (ret) {
+ pr_err("HSIC open failed\n");
+ return ret;
+ }
+ break;
+ default:
+ return -EINVAL;
+ }
+ ss->dev = &pdev->dev;
+
+ return 0;
+}
+
+static int sysmon_remove(struct platform_device *pdev)
+{
+ struct sysmon_subsys *ss = &subsys[pdev->id];
+
+ ss->dev = NULL;
+
+ mutex_lock(&ss->lock);
+ switch (ss->transport) {
+ case TRANSPORT_SMD:
+ smd_close(ss->chan);
+ break;
+ case TRANSPORT_HSIC:
+ hsic_sysmon_close(HSIC_SYSMON_DEV_EXT_MODEM);
+ break;
+ }
+ mutex_unlock(&ss->lock);
+
+ return 0;
+}
+
+static struct platform_driver sysmon_driver = {
+ .probe = sysmon_probe,
+ .remove = sysmon_remove,
+ .driver = {
+ .name = "sys_mon",
+ .owner = THIS_MODULE,
+ },
+};
+
+static int __init sysmon_init(void)
+{
+#ifdef CONFIG_QCT_9K_MODEM
+ if (!is_mdm_modem())
+ return 0;
+#endif
+
+ return platform_driver_register(&sysmon_driver);
+}
+
+subsys_initcall(sysmon_init);
+
+static void __exit sysmon_exit(void)
+{
+#ifdef CONFIG_QCT_9K_MODEM
+ if (!is_mdm_modem())
+ return;
+#endif
+
+ platform_driver_unregister(&sysmon_driver);
+}
+module_exit(sysmon_exit);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("system monitor communication library");
+MODULE_ALIAS("platform:sys_mon");
diff --git a/drivers/misc/qcom-mdm-9k/sysmon.h b/drivers/misc/qcom-mdm-9k/sysmon.h
new file mode 100644
index 0000000..e9adb4a
--- /dev/null
+++ b/drivers/misc/qcom-mdm-9k/sysmon.h
@@ -0,0 +1,60 @@
+/*
+ * Copyright (c) 2011-2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef __MSM_SYSMON_H
+#define __MSM_SYSMON_H
+
+#include <mach/msm_smd.h>
+#include "subsystem_notif.h"
+
+/**
+ * enum subsys_id - Destination subsystems for events.
+ */
+enum subsys_id {
+ /* SMD subsystems */
+ SYSMON_SS_MODEM = SMD_APPS_MODEM,
+ SYSMON_SS_LPASS = SMD_APPS_QDSP,
+ SYSMON_SS_WCNSS = SMD_APPS_WCNSS,
+ SYSMON_SS_DSPS = SMD_APPS_DSPS,
+ SYSMON_SS_Q6FW = SMD_APPS_Q6FW,
+
+ /* Non-SMD subsystems */
+ SYSMON_SS_EXT_MODEM = SMD_NUM_TYPE,
+ SYSMON_NUM_SS
+};
+
+#ifdef CONFIG_MSM_SYSMON_COMM
+int sysmon_send_event(const char *dest_ss, const char *event_ss,
+ enum subsys_notif_type notif);
+int sysmon_get_reason(enum subsys_id dest_ss, char *buf, size_t len);
+int sysmon_send_shutdown(enum subsys_id dest_ss);
+#else
+static inline int sysmon_send_event(const char *dest_ss,
+ const char *event_ss,
+ enum subsys_notif_type notif)
+{
+ return 0;
+}
+static inline int sysmon_get_reason(enum subsys_id dest_ss, char *buf,
+ size_t len)
+{
+ return 0;
+}
+static inline int sysmon_send_shutdown(enum subsys_id dest_ss)
+{
+ return 0;
+}
+#endif
+
+#endif
diff --git a/drivers/misc/tegra-fuse/tegra12x_fuse_offsets.h b/drivers/misc/tegra-fuse/tegra12x_fuse_offsets.h
index baa6c73..e379bf9 100644
--- a/drivers/misc/tegra-fuse/tegra12x_fuse_offsets.h
+++ b/drivers/misc/tegra-fuse/tegra12x_fuse_offsets.h
@@ -84,12 +84,15 @@
#define FUSE_FAB_CODE_MASK 0x3f
#define FUSE_LOT_CODE_0 0x208
#define FUSE_LOT_CODE_1 0x20c
+#define FUSE_LOT_CODE_1_MASK 0x0fffffff
#define FUSE_WAFER_ID 0x210
#define FUSE_WAFER_ID_MASK 0x3f
#define FUSE_X_COORDINATE 0x214
#define FUSE_X_COORDINATE_MASK 0x1ff
#define FUSE_Y_COORDINATE 0x218
#define FUSE_Y_COORDINATE_MASK 0x1ff
+#define FUSE_OPS_RESERVED 0x220
+#define FUSE_OPS_RESERVED_MASK 0x3f
#define FUSE_GPU_INFO 0x390
#define FUSE_GPU_INFO_MASK (1<<2)
#define FUSE_SPARE_BIT 0x300
@@ -309,6 +312,90 @@
return uid;
}
+/* return uid in bootloader format */
+static void tegra_chip_unique_id(u32 uid[4])
+{
+ u32 vendor;
+ u32 fab;
+ u32 wafer;
+ u32 lot0;
+ u32 lot1;
+ u32 x, y;
+ u32 rsvd;
+
+ /** For t12x:
+ *
+ * Field Bits Data
+ * (LSB first)
+ * -------- ---- ----------------------------------------
+ * Reserved 6
+ * Y 9 Wafer Y-coordinate
+ * X 9 Wafer X-coordinate
+ * WAFER 6 Wafer id
+ * LOT_0 32 Lot code 0
+ * LOT_1 28 Lot code 1
+ * FAB 6 FAB code
+ * VENDOR 4 Vendor code
+ * -------- ----
+ * Total 100
+ *
+ * Gather up all the bits and pieces.
+ *
+ * <Vendor:4>
+ * <Fab:6><Lot0:26>
+ * <Lot0:6><Lot1:26>
+ * <Lot1:2><Wafer:6><X:9><Y:9><Reserved:6>
+ *
+ **/
+
+ vendor = tegra_fuse_readl(FUSE_VENDOR_CODE) & FUSE_VENDOR_CODE_MASK;
+ fab = tegra_fuse_readl(FUSE_FAB_CODE) & FUSE_FAB_CODE_MASK;
+ wafer = tegra_fuse_readl(FUSE_WAFER_ID) & FUSE_WAFER_ID_MASK;
+ x = tegra_fuse_readl(FUSE_X_COORDINATE) & FUSE_X_COORDINATE_MASK;
+ y = tegra_fuse_readl(FUSE_Y_COORDINATE) & FUSE_Y_COORDINATE_MASK;
+
+ lot0 = tegra_fuse_readl(FUSE_LOT_CODE_0);
+ lot1 = tegra_fuse_readl(FUSE_LOT_CODE_1) & FUSE_LOT_CODE_1_MASK;
+ rsvd = tegra_fuse_readl(FUSE_OPS_RESERVED) & FUSE_OPS_RESERVED_MASK;
+
+ /* <Lot1:2><Wafer:6><X:9><Y:9><Reserved:6> */
+ uid[3] = (lot1 << 30) | (wafer << 24) | (x << 15) | (y << 6) | rsvd;
+ /* <Lot0:6><Lot1:26> */
+ uid[2] = (lot0 << 26) | (lot1 >> 2);
+ /* <Fab:6><Lot0:26> */
+ uid[1] = (fab << 26) | (lot0 >> 6);
+ /* <Vendor:4> */
+ uid[0] = vendor;
+}
+
+
+static ssize_t tegra_cprev_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ u32 rev, major, minor;
+
+ rev = tegra_fuse_readl(FUSE_CP_REV);
+ minor = rev & 0x1f;
+ major = (rev >> 5) & 0x3f;
+
+	return sprintf(buf, "%u.%u\n", major, minor);
+}
+
+static ssize_t tegra_uid_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ u32 uid[4];
+
+ tegra_chip_unique_id(uid);
+	return sprintf(buf, "%X%08X%08X%08X\n", uid[0], uid[1], uid[2], uid[3]);
+}
+
+DEVICE_ATTR(cp_rev, 0444, tegra_cprev_show, NULL);
+DEVICE_ATTR(uid, 0444, tegra_uid_show, NULL);
+
static int tsensor_calib_offset[] = {
[0] = 0x198,
[1] = 0x184,
@@ -415,11 +502,16 @@
dev_attr_public_key.attr.mode = 0440;
dev_attr_pkc_disable.attr.mode = 0440;
dev_attr_vp8_enable.attr.mode = 0440;
+ dev_attr_cp_rev.attr.mode = 0444;
+ dev_attr_uid.attr.mode = 0444;
} else {
dev_attr_public_key.attr.mode = 0640;
dev_attr_pkc_disable.attr.mode = 0640;
dev_attr_vp8_enable.attr.mode = 0640;
+ dev_attr_cp_rev.attr.mode = 0444;
+ dev_attr_uid.attr.mode = 0444;
}
+
CHK_ERR(&pdev->dev, sysfs_create_file(&pdev->dev.kobj,
&dev_attr_public_key.attr));
CHK_ERR(&pdev->dev, sysfs_create_file(&pdev->dev.kobj,
@@ -428,6 +520,10 @@
&dev_attr_vp8_enable.attr));
CHK_ERR(&pdev->dev, sysfs_create_file(&pdev->dev.kobj,
&dev_attr_odm_lock.attr));
+ CHK_ERR(&pdev->dev, sysfs_create_file(&pdev->dev.kobj,
+ &dev_attr_cp_rev.attr));
+ CHK_ERR(&pdev->dev, sysfs_create_file(&pdev->dev.kobj,
+ &dev_attr_uid.attr));
return 0;
}
@@ -438,6 +534,8 @@
sysfs_remove_file(&pdev->dev.kobj, &dev_attr_pkc_disable.attr);
sysfs_remove_file(&pdev->dev.kobj, &dev_attr_vp8_enable.attr);
sysfs_remove_file(&pdev->dev.kobj, &dev_attr_odm_lock.attr);
+ sysfs_remove_file(&pdev->dev.kobj, &dev_attr_cp_rev.attr);
+ sysfs_remove_file(&pdev->dev.kobj, &dev_attr_uid.attr);
return 0;
}
@@ -450,6 +548,10 @@
&dev_attr_pkc_disable.attr, 0440));
CHK_ERR(dev, sysfs_chmod_file(kobj,
&dev_attr_vp8_enable.attr, 0440));
+ CHK_ERR(dev, sysfs_chmod_file(kobj,
+ &dev_attr_cp_rev.attr, 0444));
+ CHK_ERR(dev, sysfs_chmod_file(kobj,
+ &dev_attr_uid.attr, 0444));
return 0;
}
diff --git a/drivers/mmc/core/Kconfig b/drivers/mmc/core/Kconfig
index 80c731d..86c53c5 100644
--- a/drivers/mmc/core/Kconfig
+++ b/drivers/mmc/core/Kconfig
@@ -51,3 +51,20 @@
device frequency dynamically. Enable this config only if
there is a custom implementation to determine the frequency
using the device stats.
+
+config ENABLE_MMC_USER_CONTROL
+	bool "Enable sending commands (e.g. shutdown) to the mmc"
+	default n
+	help
+	  If you say Y here, you can enable the feature at runtime by
+	  setting the mmc_user_control_cmd module parameter to any of the
+	  following:
+	  1) "r: <msec_delay> <write_pattern>": instructs the mmc to shut
+	  down msec_delay msecs after a write command that starts with
+	  write_pattern.
+config MMC_SHUTDOWN_GPIO
+ int "The gpio number that is activated in order to shutdown the mmc"
+ depends on ENABLE_MMC_USER_CONTROL
+	help
+	  The gpio number that will be triggered to shut down the mmc after
+	  encountering the MMC_SHUTDOWN_PATTERN at the head of an mmc data
+	  write request.
diff --git a/drivers/mmc/core/Makefile b/drivers/mmc/core/Makefile
index 76ac10a..600cacd 100644
--- a/drivers/mmc/core/Makefile
+++ b/drivers/mmc/core/Makefile
@@ -11,3 +11,5 @@
quirks.o slot-gpio.o
mmc_core-$(CONFIG_DEBUG_FS) += debugfs.o
+
+mmc_core-$(CONFIG_ENABLE_MMC_USER_CONTROL) += mmc_user_control.o
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 0da82fe..cb92de6 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -31,6 +31,8 @@
#include <linux/wakelock.h>
#include <linux/devfreq.h>
#include <linux/slab.h>
+#include <linux/timer.h>
+#include <linux/gpio.h>
#include <trace/events/mmc.h>
@@ -50,6 +52,8 @@
#include "sd_ops.h"
#include "sdio_ops.h"
+#include "mmc_user_control.h"
+
/* If the device is not responding */
#define MMC_CORE_TIMEOUT_MS (10 * 60 * 1000) /* 10 minute timeout */
@@ -213,6 +217,17 @@
EXPORT_SYMBOL(mmc_request_done);
+#ifdef CONFIG_ENABLE_MMC_USER_CONTROL
+extern struct mmc_shutdown_data mmc_shutdown_config;
+static struct timer_list shutdown_mmc_timer;
+
+static void shutdown_mmc_timer_callback(unsigned long data)
+{
+ gpio_set_value(CONFIG_MMC_SHUTDOWN_GPIO, 1);
+ pr_info("MMC was shutdown after encountering shutdown pattern.\n");
+}
+#endif
+
static void
mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
{
@@ -221,6 +236,25 @@
struct scatterlist *sg;
#endif
+#ifdef CONFIG_ENABLE_MMC_USER_CONTROL
+ char *data_head;
+
+ read_lock(&mmc_shutdown_config.root_lock);
+ if (mmc_shutdown_config.enabled && mrq->data) {
+ data_head = (char *)(sg_virt(mrq->data->sg));
+ if (!strncmp(data_head,
+ mmc_shutdown_config.pattern_buff,
+ mmc_shutdown_config.pattern_len)) {
+ if (mod_timer(&shutdown_mmc_timer,
+ jiffies + msecs_to_jiffies(
+ mmc_shutdown_config.delay_ms))) {
+ pr_info("Failed to set MMC shutdown timer.\n");
+ }
+ }
+ }
+ read_unlock(&mmc_shutdown_config.root_lock);
+#endif
+
if (mrq->sbc) {
pr_debug("<%s: starting CMD%u arg %08x flags %08x>\n",
mmc_hostname(host), mrq->sbc->opcode,
@@ -363,8 +397,13 @@
*/
static void mmc_wait_data_done(struct mmc_request *mrq)
{
+ unsigned long flags;
+ struct mmc_context_info *context_info = &mrq->host->context_info;
+
+ spin_lock_irqsave(&context_info->lock, flags);
mrq->host->context_info.is_done_rcv = true;
wake_up_interruptible(&mrq->host->context_info.wait);
+ spin_unlock_irqrestore(&context_info->lock, flags);
}
static void mmc_wait_done(struct mmc_request *mrq)
@@ -426,15 +465,17 @@
struct mmc_context_info *context_info = &host->context_info;
int err;
unsigned long flags;
+ bool is_done_rcv = false;
while (1) {
wait_event_interruptible(context_info->wait,
(context_info->is_done_rcv ||
context_info->is_new_req));
spin_lock_irqsave(&context_info->lock, flags);
+ is_done_rcv = context_info->is_done_rcv;
context_info->is_waiting_last_req = false;
spin_unlock_irqrestore(&context_info->lock, flags);
- if (context_info->is_done_rcv) {
+ if (is_done_rcv) {
context_info->is_done_rcv = false;
context_info->is_new_req = false;
cmd = mrq->cmd;
@@ -3229,6 +3270,10 @@
{
int ret;
+#ifdef CONFIG_ENABLE_MMC_USER_CONTROL
+ rwlock_init(&mmc_shutdown_config.root_lock);
+ setup_timer(&shutdown_mmc_timer, shutdown_mmc_timer_callback, 0);
+#endif
workqueue = alloc_ordered_workqueue("kmmcd", 0);
if (!workqueue)
return -ENOMEM;
diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
index 02b3906..019b057 100644
--- a/drivers/mmc/core/mmc.c
+++ b/drivers/mmc/core/mmc.c
@@ -294,7 +294,7 @@
}
card->ext_csd.rev = ext_csd[EXT_CSD_REV];
- if (card->ext_csd.rev > 7) {
+ if (card->ext_csd.rev > 8) {
pr_err("%s: unrecognised EXT_CSD revision %d\n",
mmc_hostname(card->host), card->ext_csd.rev);
err = -EINVAL;
@@ -447,21 +447,18 @@
}
}
- if (card->ext_csd.rev < 6) {
- card->ext_csd.sec_trim_mult =
- ext_csd[EXT_CSD_SEC_TRIM_MULT];
- card->ext_csd.sec_erase_mult =
- ext_csd[EXT_CSD_SEC_ERASE_MULT];
- card->ext_csd.sec_feature_support =
+ card->ext_csd.sec_trim_mult =
+ ext_csd[EXT_CSD_SEC_TRIM_MULT];
+ card->ext_csd.sec_erase_mult =
+ ext_csd[EXT_CSD_SEC_ERASE_MULT];
+ card->ext_csd.sec_feature_support =
ext_csd[EXT_CSD_SEC_FEATURE_SUPPORT];
- }
- if (card->ext_csd.rev == 6) {
+ if (card->ext_csd.rev >= 6) {
card->ext_csd.sec_feature_support =
ext_csd[EXT_CSD_SEC_FEATURE_SUPPORT] &
~EXT_CSD_SEC_ER_EN;
}
-
card->ext_csd.trim_timeout = 300 *
ext_csd[EXT_CSD_TRIM_MULT];
diff --git a/drivers/mmc/core/mmc_ops.c b/drivers/mmc/core/mmc_ops.c
index 49f04bc..9b15ffb 100644
--- a/drivers/mmc/core/mmc_ops.c
+++ b/drivers/mmc/core/mmc_ops.c
@@ -455,6 +455,14 @@
if (time_after(jiffies, timeout)) {
pr_err("%s: Card stuck in programming state! %s\n",
mmc_hostname(card->host), __func__);
+ if (index == EXT_CSD_SANITIZE_START) {
+ pr_err("%s: Send HPI command due to Sanitize command timeout\n", __func__);
+ err = mmc_interrupt_hpi(card);
+ if (err && (err != -EINVAL))
+ pr_err("%s: Failed to send HPI command (err=%d)\n", __func__, err);
+ }
return -ETIMEDOUT;
}
} while (R1_CURRENT_STATE(status) == R1_STATE_PRG);
diff --git a/drivers/mmc/core/mmc_user_control.c b/drivers/mmc/core/mmc_user_control.c
new file mode 100644
index 0000000..6c67d66
--- /dev/null
+++ b/drivers/mmc/core/mmc_user_control.c
@@ -0,0 +1,89 @@
+#include <linux/module.h>
+#include <linux/slab.h>
+
+#include "mmc_user_control.h"
+
+/* START: MMC shutdown related info. */
+struct mmc_shutdown_data mmc_shutdown_config = {
+ .enabled = 0,
+ .pattern_buff = NULL,
+ .pattern_len = 0,
+ .delay_ms = 0,
+};
+EXPORT_SYMBOL(mmc_shutdown_config);
+
+/* Must already hold mmc_shutdown_data write_lock */
+static void mmc_shutdown_data_clear(struct mmc_shutdown_data *data)
+{
+ data->enabled = 0;
+ kfree(data->pattern_buff);
+ data->pattern_buff = NULL;
+ data->pattern_len = 0;
+}
+
+/* Must already hold mmc_shutdown_data write_lock */
+static
+void mmc_shutdown_data_set(struct mmc_shutdown_data *data, const char *cmd)
+{
+ char *pattern;
+ int delay_ms;
+
+ if (data == NULL)
+ return;
+ mmc_shutdown_data_clear(data);
+
+ if (cmd == NULL)
+ return;
+ pattern = kmalloc(strlen(cmd) + 1, GFP_ATOMIC);
+ if (pattern == NULL)
+ return;
+ if (sscanf(cmd, "r: %d %s", &delay_ms, pattern) == 2) {
+ data->delay_ms = delay_ms;
+ data->pattern_buff = pattern;
+ data->pattern_len = strlen(pattern);
+ data->enabled = 1;
+ } else {
+ kfree(pattern);
+ }
+}
+/* END: MMC shutdown related info. */
+
+/*
+ * TODO: Add support for multiple commands simultaneously.
+ */
+static
+int mmc_user_control_cmd_set(const char *cmd, const struct kernel_param *kp)
+{
+ char cmd_type;
+
+ /* Clear any mmc user command data to ensure stateless framework. */
+ write_lock(&mmc_shutdown_config.root_lock);
+ mmc_shutdown_data_clear(&mmc_shutdown_config);
+ write_unlock(&mmc_shutdown_config.root_lock);
+
+ if (sscanf(cmd, "%c:", &cmd_type) != 1)
+ return 0;
+
+ /* Set any mmc user command data. */
+ switch (cmd_type) {
+ case 'r':
+ write_lock(&mmc_shutdown_config.root_lock);
+ mmc_shutdown_data_set(&mmc_shutdown_config, cmd);
+ write_unlock(&mmc_shutdown_config.root_lock);
+ break;
+ default:
+ pr_info("Unknown mmc user command: %s\n", cmd);
+ }
+ return 0;
+}
+
+static const struct kernel_param_ops mmc_user_control_cmd_ops = {
+ .set = mmc_user_control_cmd_set,
+};
+
+/*
+ * The following module param is used to control the mmc from user space.
+ */
+module_param_cb(mmc_user_control_cmd, &mmc_user_control_cmd_ops, NULL, 0600);
+MODULE_PARM_DESC(
+ mmc_user_control_cmd,
+ "The specific command to send to the mmc (see Kconfig for ENABLE_MMC_USER_CONTROL).");
+
diff --git a/drivers/mmc/core/mmc_user_control.h b/drivers/mmc/core/mmc_user_control.h
new file mode 100644
index 0000000..c8ae3d5
--- /dev/null
+++ b/drivers/mmc/core/mmc_user_control.h
@@ -0,0 +1,18 @@
+#ifndef _MMC_USER_CONTROL_H
+#define _MMC_USER_CONTROL_H
+
+#ifdef CONFIG_ENABLE_MMC_USER_CONTROL
+
+/* START: MMC shutdown related structs */
+struct mmc_shutdown_data {
+ bool enabled;
+ char *pattern_buff;
+ int pattern_len;
+ int delay_ms;
+ rwlock_t root_lock;
+};
+/* END: MMC shutdown related structs */
+
+#endif
+
+#endif
diff --git a/drivers/mmc/host/sdhci-tegra.c b/drivers/mmc/host/sdhci-tegra.c
index 62ad3f2..a70306b 100644
--- a/drivers/mmc/host/sdhci-tegra.c
+++ b/drivers/mmc/host/sdhci-tegra.c
@@ -1696,6 +1696,7 @@
unsigned int tap_delay)
{
u32 vendor_ctrl;
+ u16 clk;
/* Max tap delay value is 255 */
if (tap_delay > MAX_TAP_VALUES) {
@@ -1706,10 +1707,18 @@
return;
}
+ clk = sdhci_readw(sdhci, SDHCI_CLOCK_CONTROL);
+ clk &= ~SDHCI_CLOCK_CARD_EN;
+ sdhci_writew(sdhci, clk, SDHCI_CLOCK_CONTROL);
+
vendor_ctrl = sdhci_readl(sdhci, SDHCI_VNDR_CLK_CTRL);
vendor_ctrl &= ~(0xFF << SDHCI_VNDR_CLK_CTRL_TAP_VALUE_SHIFT);
vendor_ctrl |= (tap_delay << SDHCI_VNDR_CLK_CTRL_TAP_VALUE_SHIFT);
sdhci_writel(sdhci, vendor_ctrl, SDHCI_VNDR_CLK_CTRL);
+
+ clk = sdhci_readw(sdhci, SDHCI_CLOCK_CONTROL);
+ clk |= SDHCI_CLOCK_CARD_EN;
+ sdhci_writew(sdhci, clk, SDHCI_CLOCK_CONTROL);
}
static void sdhci_tegra_set_trim_delay(struct sdhci_host *sdhci,
@@ -2164,7 +2173,6 @@
else
err = -EIO;
}
- mdelay(1);
out:
return err;
}
diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index 8950f64..4b1c0c2 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -623,8 +623,10 @@
BUG_ON(len > 65536);
/* tran, valid */
- sdhci_set_adma_desc(host, desc, addr, len, 0x21);
- desc += next_desc;
+ if (len > 0) {
+ sdhci_set_adma_desc(host, desc, addr, len, 0x21);
+ desc += next_desc;
+ }
/*
* If this triggers then we have a calculation bug
@@ -2714,6 +2716,10 @@
pr_err("%s: Timeout waiting for hardware "
"interrupt.\n", mmc_hostname(host->mmc));
sdhci_dumpregs(host);
+ if (host->mmc->card)
+ pr_err("%s: card's cid is %x %x %x %x\n", __func__, host->mmc->card->raw_cid[0],
+ host->mmc->card->raw_cid[1], host->mmc->card->raw_cid[2],
+ host->mmc->card->raw_cid[3]);
if (host->data) {
host->data->error = -ETIMEDOUT;
diff --git a/drivers/net/usb/Kconfig b/drivers/net/usb/Kconfig
index b2c01c2..2b489dc 100644
--- a/drivers/net/usb/Kconfig
+++ b/drivers/net/usb/Kconfig
@@ -558,4 +558,14 @@
help
This option enables VOH mode which is required for SMSC compliance.
+config MSM_RMNET_USB
+ tristate "RMNET USB Driver"
+ depends on USB_USBNET
+ help
+ Select this if you have a Qualcomm modem device connected via USB
+ supporting RMNET network interface.
+
+ To compile this driver as a module, choose M here: the module
+ will be called rmnet_usb. If unsure, choose N.
+
endmenu
diff --git a/drivers/net/usb/Makefile b/drivers/net/usb/Makefile
index 8ec8adb..734de3b 100644
--- a/drivers/net/usb/Makefile
+++ b/drivers/net/usb/Makefile
@@ -37,3 +37,6 @@
obj-$(CONFIG_USB_NET_CDC_MBIM) += cdc_mbim.o
obj-$(CONFIG_USB_NET_RAW_IP) += raw_ip_net.o
+rmnet_usb-y := rmnet_usb_ctrl.o rmnet_usb_data.o
+obj-$(CONFIG_MSM_RMNET_USB) += rmnet_usb.o
+
diff --git a/drivers/net/usb/rmnet_usb.h b/drivers/net/usb/rmnet_usb.h
new file mode 100644
index 0000000..912baee
--- /dev/null
+++ b/drivers/net/usb/rmnet_usb.h
@@ -0,0 +1,138 @@
+/* Copyright (c) 2011-2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __RMNET_USB_H
+#define __RMNET_USB_H
+
+#include <linux/mutex.h>
+#include <linux/usb.h>
+#include <linux/cdev.h>
+#include <linux/usb/ch9.h>
+#include <linux/usb/cdc.h>
+
+#define MAX_RMNET_DEVS 4
+#define MAX_RMNET_INSTS_PER_DEV 17
+#define TOTAL_RMNET_DEV_COUNT (MAX_RMNET_DEVS * MAX_RMNET_INSTS_PER_DEV)
+
+#define CTRL_DEV_MAX_LEN 10
+
+#define RMNET_CTRL_DEV_OPEN 0
+#define RMNET_CTRL_DEV_READY 1
+#define RMNET_CTRL_DEV_MUX_EN 2
+
+/*data MUX header bit mask*/
+#define MUX_PAD_SHIFT 0x2
+
+/*big endian format ctrl MUX header bit masks*/
+#define MUX_CTRL_PADLEN_MASK 0x3F
+#define MUX_CTRL_MASK 0x80
+
+/*max padding bytes for n byte alignment*/
+#define MAX_PAD_BYTES(n) (n-1)
+
+/*
+ *MUX Header big endian Format
+ *BIT 0 - 5 : Pad bytes
+ *BIT 6: Reserved
+ *BIT 7: Mux type 0: Data, 1: control
+ *BIT 8-15: Mux ID
+ *BIT 16-31: PACKET_LEN_WITH_PADDING (Bytes)
+ */
+struct mux_hdr {
+ __u8 padding_info;
+ __u8 mux_id;
+ __u16 pkt_len_w_padding;
+} __packed;
+
+struct rmnet_ctrl_udev {
+
+ /*
+ * In case of non-mux ctrl channel there is a one to one mapping
+ * between rmnet_ctrl_dev and rmnet_ctrl_udev. Save the claimed
+ * device id.
+ */
+ unsigned int ctrldev_id;
+
+ unsigned int rdev_num;
+ struct usb_interface *intf;
+ unsigned int int_pipe;
+ struct urb *rcvurb;
+ struct urb *inturb;
+ struct usb_anchor tx_submitted;
+ struct usb_anchor rx_submitted;
+ void *rcvbuf;
+ void *intbuf;
+ struct usb_ctrlrequest *in_ctlreq;
+
+ struct workqueue_struct *wq;
+ struct work_struct get_encap_work;
+
+ unsigned long status;
+
+ /*counters*/
+ unsigned int snd_encap_cmd_cnt;
+ unsigned int get_encap_resp_cnt;
+ unsigned int resp_avail_cnt;
+ unsigned int get_encap_failure_cnt;
+ unsigned int set_ctrl_line_state_cnt;
+ unsigned int tx_ctrl_err_cnt;
+ unsigned int zlp_cnt;
+ unsigned int invalid_mux_id_cnt;
+ unsigned int ignore_encap_work;
+
+ /*mutex*/
+ struct mutex udev_lock;
+};
+
+struct rmnet_ctrl_dev {
+
+ /*for debugging purpose*/
+ char name[CTRL_DEV_MAX_LEN];
+
+ struct cdev cdev;
+ struct device *devicep;
+ unsigned ch_id;
+
+ struct rmnet_ctrl_udev *cudev;
+
+ spinlock_t rx_lock;
+ struct mutex dev_lock;
+ struct list_head rx_list;
+ wait_queue_head_t read_wait_queue;
+ wait_queue_head_t open_wait_queue;
+ unsigned long status;
+
+ bool claimed;
+
+ unsigned int mdm_wait_timeout;
+
+ /*input control lines (DSR, CTS, CD, RI)*/
+ unsigned int cbits_tolocal;
+ /*output control lines (DTR, RTS)*/
+ unsigned int cbits_tomdm;
+};
+
+extern struct workqueue_struct *usbnet_wq;
+
+extern int rmnet_usb_ctrl_start_rx(struct rmnet_ctrl_udev *);
+extern int rmnet_usb_ctrl_suspend(struct rmnet_ctrl_udev *dev);
+extern int rmnet_usb_ctrl_init(int num_devs, int insts_per_dev,
+ unsigned long mux_info);
+extern void rmnet_usb_ctrl_exit(int num_devs, int insts_per_dev,
+ unsigned long mux_info);
+extern int rmnet_usb_ctrl_probe(struct usb_interface *intf,
+ struct usb_host_endpoint *int_in,
+ unsigned long rmnet_devnum,
+ unsigned long *data);
+extern void rmnet_usb_ctrl_disconnect(struct rmnet_ctrl_udev *);
+
+#endif /* __RMNET_USB_H*/
diff --git a/drivers/net/usb/rmnet_usb_ctrl.c b/drivers/net/usb/rmnet_usb_ctrl.c
new file mode 100644
index 0000000..2b36bf2
--- /dev/null
+++ b/drivers/net/usb/rmnet_usb_ctrl.c
@@ -0,0 +1,1441 @@
+/* Copyright (c) 2011-2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/slab.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/uaccess.h>
+#include <linux/termios.h>
+#include <linux/poll.h>
+#include <linux/ratelimit.h>
+#include <linux/debugfs.h>
+#ifdef CONFIG_QCT_9K_MODEM
+#include <mach/board_htc.h>
+#endif
+#include "rmnet_usb.h"
+
+static char *rmnet_dev_names[MAX_RMNET_DEVS] = {"hsicctl"};
+module_param_array(rmnet_dev_names, charp, NULL, S_IRUGO | S_IWUSR);
+
+#define DEFAULT_READ_URB_LENGTH 0x1000
+#define UNLINK_TIMEOUT_MS 500 /*random value*/
+
+/*Output control lines.*/
+#define ACM_CTRL_DTR BIT(0)
+#define ACM_CTRL_RTS BIT(1)
+
+
+/*Input control lines.*/
+#define ACM_CTRL_DSR BIT(0)
+#define ACM_CTRL_CTS BIT(1)
+#define ACM_CTRL_RI BIT(2)
+#define ACM_CTRL_CD BIT(3)
+
+/*echo modem_wait > /sys/class/hsicctl/hsicctlx/modem_wait*/
+static ssize_t modem_wait_store(struct device *d, struct device_attribute *attr,
+ const char *buf, size_t n)
+{
+ unsigned int mdm_wait;
+ struct rmnet_ctrl_dev *dev = dev_get_drvdata(d);
+
+ if (!dev)
+ return -ENODEV;
+
+ sscanf(buf, "%u", &mdm_wait);
+
+ dev->mdm_wait_timeout = mdm_wait;
+
+ return n;
+}
+
+static ssize_t modem_wait_show(struct device *d, struct device_attribute *attr,
+ char *buf)
+{
+ struct rmnet_ctrl_dev *dev = dev_get_drvdata(d);
+
+ if (!dev)
+ return -ENODEV;
+
+ return snprintf(buf, PAGE_SIZE, "%u\n", dev->mdm_wait_timeout);
+}
+
+static DEVICE_ATTR(modem_wait, 0664, modem_wait_show, modem_wait_store);
+
+static int ctl_msg_dbg_mask;
+module_param_named(dump_ctrl_msg, ctl_msg_dbg_mask, int,
+ S_IRUGO | S_IWUSR | S_IWGRP);
+
+enum {
+ MSM_USB_CTL_DEBUG = 1U << 0,
+ MSM_USB_CTL_DUMP_BUFFER = 1U << 1,
+};
+
+#define DUMP_BUFFER(prestr, cnt, buf) \
+do { \
+ if (ctl_msg_dbg_mask & MSM_USB_CTL_DUMP_BUFFER) \
+ print_hex_dump(KERN_INFO, prestr, DUMP_PREFIX_NONE, \
+ 16, 1, buf, cnt, false); \
+} while (0)
+
+#define DBG(x...) \
+ do { \
+ if (ctl_msg_dbg_mask & MSM_USB_CTL_DEBUG) \
+ pr_info(x); \
+ } while (0)
+
+/* passed in rmnet_usb_ctrl_init */
+static int num_devs;
+static int insts_per_dev;
+
+/* dynamically allocated 2-D array of num_devs*insts_per_dev ctrl_devs */
+static struct rmnet_ctrl_dev **ctrl_devs;
+static struct class *ctrldev_classp[MAX_RMNET_DEVS];
+static dev_t ctrldev_num[MAX_RMNET_DEVS];
+
+struct ctrl_pkt {
+ size_t data_size;
+ void *data;
+ void *ctxt;
+};
+
+struct ctrl_pkt_list_elem {
+ struct list_head list;
+ struct ctrl_pkt cpkt;
+};
+
+static void resp_avail_cb(struct urb *);
+
+static int rmnet_usb_ctrl_dmux(struct ctrl_pkt_list_elem *clist)
+{
+ struct mux_hdr *hdr;
+ size_t pad_len;
+ size_t total_len;
+ unsigned int mux_id;
+
+ hdr = (struct mux_hdr *)clist->cpkt.data;
+ pad_len = hdr->padding_info & MUX_CTRL_PADLEN_MASK;
+ if (pad_len > MAX_PAD_BYTES(4)) {
+ pr_err_ratelimited("%s: Invalid pad len %zu\n", __func__,
+ pad_len);
+ return -EINVAL;
+ }
+
+ mux_id = hdr->mux_id;
+ if (!mux_id || mux_id > insts_per_dev) {
+ pr_err_ratelimited("%s: Invalid mux id %d\n", __func__, mux_id);
+ return -EINVAL;
+ }
+
+ total_len = ntohs(hdr->pkt_len_w_padding);
+ if (!total_len || !(total_len - pad_len)) {
+ pr_err_ratelimited("%s: Invalid pkt length %zu\n", __func__,
+ total_len);
+ return -EINVAL;
+ }
+
+ clist->cpkt.data_size = total_len - pad_len;
+
+ return mux_id - 1;
+}
+
+static void rmnet_usb_ctrl_mux(unsigned int id, struct ctrl_pkt *cpkt)
+{
+ struct mux_hdr *hdr;
+ size_t len;
+ size_t pad_len = 0;
+
+ hdr = (struct mux_hdr *)cpkt->data;
+ hdr->mux_id = id + 1;
+ len = cpkt->data_size - sizeof(struct mux_hdr) - MAX_PAD_BYTES(4);
+
+ /*add padding if len is not 4 byte aligned*/
+ pad_len = ALIGN(len, 4) - len;
+
+ hdr->pkt_len_w_padding = htons(len + pad_len);
+ hdr->padding_info = (pad_len & MUX_CTRL_PADLEN_MASK) | MUX_CTRL_MASK;
+
+ cpkt->data_size = sizeof(struct mux_hdr) +
+ ntohs(hdr->pkt_len_w_padding);
+}
+
+static void get_encap_work(struct work_struct *w)
+{
+ struct usb_device *udev;
+ struct rmnet_ctrl_udev *dev =
+ container_of(w, struct rmnet_ctrl_udev, get_encap_work);
+ int status;
+
+ if (!test_bit(RMNET_CTRL_DEV_READY, &dev->status))
+ return;
+
+ if (dev->rcvurb->anchor) {
+ dev->ignore_encap_work++;
+ return;
+ }
+
+ udev = interface_to_usbdev(dev->intf);
+
+ status = usb_autopm_get_interface(dev->intf);
+ if (status < 0 && status != -EAGAIN && status != -EACCES) {
+ dev->get_encap_failure_cnt++;
+ return;
+ }
+
+ usb_fill_control_urb(dev->rcvurb, udev,
+ usb_rcvctrlpipe(udev, 0),
+ (unsigned char *)dev->in_ctlreq,
+ dev->rcvbuf,
+ DEFAULT_READ_URB_LENGTH,
+ resp_avail_cb, dev);
+
+
+ usb_anchor_urb(dev->rcvurb, &dev->rx_submitted);
+ status = usb_submit_urb(dev->rcvurb, GFP_KERNEL);
+ if (status) {
+ dev->get_encap_failure_cnt++;
+ usb_unanchor_urb(dev->rcvurb);
+ usb_autopm_put_interface(dev->intf);
+ if (status != -ENODEV)
+ pr_err("%s: Error submitting Read URB %d\n",
+ __func__, status);
+ goto resubmit_int_urb;
+ }
+
+ return;
+
+resubmit_int_urb:
+ /*check if it is already submitted in resume*/
+ if (!dev->inturb->anchor) {
+ usb_anchor_urb(dev->inturb, &dev->rx_submitted);
+ status = usb_submit_urb(dev->inturb, GFP_KERNEL);
+ if (status) {
+ usb_unanchor_urb(dev->inturb);
+ if (status != -ENODEV)
+ pr_err("%s: Error re-submitting Int URB %d\n",
+ __func__, status);
+ }
+ }
+}
+
+static void notification_available_cb(struct urb *urb)
+{
+ int status;
+ struct usb_cdc_notification *ctrl;
+ struct usb_device *udev;
+ struct rmnet_ctrl_udev *dev = urb->context;
+ struct rmnet_ctrl_dev *cdev;
+
+ /*usb device disconnect*/
+ if (urb->dev->state == USB_STATE_NOTATTACHED)
+ return;
+
+ udev = interface_to_usbdev(dev->intf);
+
+#ifdef CONFIG_QCT_9K_MODEM
+ if (get_radio_flag() & RADIO_FLAG_MORE_LOG)
+ pr_info("[RMNET] ncb\n");
+#endif
+
+ switch (urb->status) {
+ case 0:
+ /*if non-zero length of data received while unlink*/
+ case -ENOENT:
+ /*success*/
+ break;
+
+ /*do not resubmit*/
+ case -ESHUTDOWN:
+ case -ECONNRESET:
+ case -EPROTO:
+ return;
+ case -EPIPE:
+ pr_err_ratelimited("%s: Stall on int endpoint\n", __func__);
+ /* TBD : halt to be cleared in work */
+ return;
+
+ /*resubmit*/
+ case -EOVERFLOW:
+ pr_err_ratelimited("%s: Babble error happened\n", __func__);
+ default:
+ pr_debug_ratelimited("%s: Non zero urb status = %d\n",
+ __func__, urb->status);
+ goto resubmit_int_urb;
+ }
+
+ if (!urb->actual_length)
+ return;
+
+ ctrl = urb->transfer_buffer;
+
+ switch (ctrl->bNotificationType) {
+ case USB_CDC_NOTIFY_RESPONSE_AVAILABLE:
+#ifdef CONFIG_QCT_9K_MODEM
+ if (get_radio_flag() & RADIO_FLAG_MORE_LOG)
+ pr_info("[RMNET] ncb: CDC\n");
+#endif
+ dev->resp_avail_cnt++;
+ /* If MUX is not enabled, wake up the open process
+ * upon first notify response available.
+ */
+ if (!test_bit(RMNET_CTRL_DEV_READY, &dev->status)) {
+ set_bit(RMNET_CTRL_DEV_READY, &dev->status);
+
+ cdev = &ctrl_devs[dev->rdev_num][dev->ctrldev_id];
+ wake_up(&cdev->open_wait_queue);
+ }
+
+ usb_mark_last_busy(udev);
+ queue_work(dev->wq, &dev->get_encap_work);
+
+ return;
+ default:
+ dev_err(&dev->intf->dev,
+ "%s: Command not implemented\n", __func__);
+ }
+
+resubmit_int_urb:
+ usb_anchor_urb(urb, &dev->rx_submitted);
+ status = usb_submit_urb(urb, GFP_ATOMIC);
+ if (status) {
+ usb_unanchor_urb(urb);
+ if (status != -ENODEV)
+ pr_err("%s: Error re-submitting Int URB %d\n",
+ __func__, status);
+ }
+
+ return;
+}
+
+static void resp_avail_cb(struct urb *urb)
+{
+ struct usb_device *udev;
+ struct ctrl_pkt_list_elem *list_elem = NULL;
+ struct rmnet_ctrl_udev *dev = urb->context;
+ struct rmnet_ctrl_dev *rx_dev;
+ void *cpkt;
+ int status = 0;
+ int ch_id = -EINVAL;
+ size_t cpkt_size = 0;
+
+ /*usb device disconnect*/
+ if (urb->dev->state == USB_STATE_NOTATTACHED)
+ return;
+
+ udev = interface_to_usbdev(dev->intf);
+
+#ifdef CONFIG_QCT_9K_MODEM
+ if (get_radio_flag() & RADIO_FLAG_MORE_LOG)
+ pr_info("[RMNET] rcb\n");
+#endif
+
+ usb_autopm_put_interface_async(dev->intf);
+
+ switch (urb->status) {
+ case 0:
+ /*success*/
+ break;
+
+ /*do not resubmit*/
+ case -ESHUTDOWN:
+ case -ENOENT:
+ case -ECONNRESET:
+ case -EPROTO:
+ return;
+
+ /*resubmit*/
+ case -EOVERFLOW:
+ pr_err_ratelimited("%s: Babble error happened\n", __func__);
+ default:
+ pr_debug_ratelimited("%s: Non zero urb status = %d\n",
+ __func__, urb->status);
+ goto resubmit_int_urb;
+ }
+
+#ifdef CONFIG_QCT_9K_MODEM
+ if (get_radio_flag() & RADIO_FLAG_MORE_LOG)
+ pr_info("[RMNET] rcb: %i\n", urb->actual_length);
+#endif
+ cpkt = urb->transfer_buffer;
+ cpkt_size = urb->actual_length;
+ if (!cpkt_size) {
+ dev->zlp_cnt++;
+ dev_dbg(&dev->intf->dev, "%s: zero length pkt received\n",
+ __func__);
+ goto resubmit_int_urb;
+ }
+
+ list_elem = kmalloc(sizeof(struct ctrl_pkt_list_elem), GFP_ATOMIC);
+ if (!list_elem) {
+ dev_err(&dev->intf->dev, "%s: list_elem alloc failed\n",
+ __func__);
+ return;
+ }
+ list_elem->cpkt.data = kmalloc(cpkt_size, GFP_ATOMIC);
+ if (!list_elem->cpkt.data) {
+ dev_err(&dev->intf->dev, "%s: list_elem->data alloc failed\n",
+ __func__);
+ kfree(list_elem);
+ return;
+ }
+ memcpy(list_elem->cpkt.data, cpkt, cpkt_size);
+ list_elem->cpkt.data_size = cpkt_size;
+
+#ifdef CONFIG_QCT_9K_MODEM
+ if (get_radio_flag() & RADIO_FLAG_MORE_LOG)
+ pr_info("[RMNET] wake_up\n");
+#endif
+ ch_id = dev->ctrldev_id;
+
+ if (test_bit(RMNET_CTRL_DEV_MUX_EN, &dev->status)) {
+ ch_id = rmnet_usb_ctrl_dmux(list_elem);
+ if (ch_id < 0) {
+ dev->invalid_mux_id_cnt++;
+ kfree(list_elem->cpkt.data);
+ kfree(list_elem);
+ goto resubmit_int_urb;
+ }
+ }
+
+ rx_dev = &ctrl_devs[dev->rdev_num][ch_id];
+
+ dev->get_encap_resp_cnt++;
+ dev_dbg(&dev->intf->dev, "Read %d bytes for %s\n",
+ urb->actual_length, rx_dev->name);
+
+ spin_lock(&rx_dev->rx_lock);
+ list_add_tail(&list_elem->list, &rx_dev->rx_list);
+ spin_unlock(&rx_dev->rx_lock);
+
+ wake_up(&rx_dev->read_wait_queue);
+
+resubmit_int_urb:
+ /*check if it is already submitted in resume*/
+ if (!dev->inturb->anchor) {
+ usb_mark_last_busy(udev);
+ usb_anchor_urb(dev->inturb, &dev->rx_submitted);
+ status = usb_submit_urb(dev->inturb, GFP_ATOMIC);
+ if (status) {
+ usb_unanchor_urb(dev->inturb);
+ if (status != -ENODEV)
+ pr_err("%s: Error re-submitting Int URB %d\n",
+ __func__, status);
+ }
+ }
+}
+
+int rmnet_usb_ctrl_start_rx(struct rmnet_ctrl_udev *dev)
+{
+ int retval = 0;
+
+ mutex_lock(&dev->udev_lock);
+ if (!dev->inturb->anchor) {
+ usb_anchor_urb(dev->inturb, &dev->rx_submitted);
+ retval = usb_submit_urb(dev->inturb, GFP_KERNEL);
+ if (retval < 0) {
+ usb_unanchor_urb(dev->inturb);
+ if (retval != -ENODEV)
+ pr_err("%s Intr submit %d\n", __func__, retval);
+ }
+
+#ifdef CONFIG_QCT_9K_MODEM
+ if (get_radio_flag() & RADIO_FLAG_MORE_LOG) {
+ if (dev && dev->intf) {
+ dev_info(&(dev->intf->dev), "%s submit dev->inturb:0x%p, retval:%x\n", __func__, dev->inturb, retval);
+ }
+ }
+#endif
+ }
+ mutex_unlock(&dev->udev_lock);
+
+ return retval;
+}
+
+static int rmnet_usb_ctrl_alloc_rx(struct rmnet_ctrl_udev *dev)
+{
+ dev->rcvurb = usb_alloc_urb(0, GFP_KERNEL);
+ if (!dev->rcvurb) {
+ pr_err("%s: Error allocating read urb\n", __func__);
+ goto nomem;
+ }
+
+ dev->rcvbuf = kmalloc(DEFAULT_READ_URB_LENGTH, GFP_KERNEL);
+ if (!dev->rcvbuf) {
+ pr_err("%s: Error allocating read buffer\n", __func__);
+ goto nomem;
+ }
+
+ dev->in_ctlreq = kmalloc(sizeof(*dev->in_ctlreq), GFP_KERNEL);
+ if (!dev->in_ctlreq) {
+ pr_err("%s: Error allocating setup packet buffer\n", __func__);
+ goto nomem;
+ }
+
+ return 0;
+
+nomem:
+ usb_free_urb(dev->rcvurb);
+ kfree(dev->rcvbuf);
+ kfree(dev->in_ctlreq);
+
+ return -ENOMEM;
+
+}
+
+#ifdef CONFIG_QCT_9K_MODEM
+static int rmnet_usb_ctrl_write_cmd(struct rmnet_ctrl_dev *dev)
+{
+ struct usb_device *udev;
+
+ if (!test_bit(RMNET_CTRL_DEV_READY, &dev->cudev->status))
+ return -ENODEV;
+
+ udev = interface_to_usbdev(dev->cudev->intf);
+ dev->cudev->set_ctrl_line_state_cnt++;
+ return usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
+ USB_CDC_REQ_SET_CONTROL_LINE_STATE,
+ (USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE),
+ dev->cbits_tomdm, dev->cudev->intf->cur_altsetting->desc.bInterfaceNumber, NULL, 0, USB_CTRL_SET_TIMEOUT);
+}
+
+static void ctrl_write_callback(struct urb *urb)
+{
+ struct ctrl_pkt *cpkt = urb->context;
+ struct rmnet_ctrl_dev *dev = cpkt->ctxt;
+
+#ifdef CONFIG_QCT_9K_MODEM
+ if (get_radio_flag() & RADIO_FLAG_MORE_LOG)
+ pr_info("[RMNET] wcb: %d/%d\n", urb->status, urb->actual_length);
+#endif
+
+ if (urb->status) {
+ dev->cudev->tx_ctrl_err_cnt++;
+ /* error case */
+ pr_info("[RMNET] %s: Write status/size %d/%d\n", __func__, urb->status, urb->actual_length);
+ pr_debug_ratelimited("Write status/size %d/%d\n", urb->status, urb->actual_length);
+ }
+
+ kfree(urb->setup_packet);
+ kfree(urb->transfer_buffer);
+ usb_free_urb(urb);
+ kfree(cpkt);
+ usb_autopm_put_interface_async(dev->cudev->intf);
+}
+
+static int rmnet_usb_ctrl_write(struct rmnet_ctrl_dev *dev, struct ctrl_pkt *cpkt, size_t size)
+{
+ int result;
+ struct urb *sndurb;
+ struct usb_ctrlrequest *out_ctlreq;
+ struct usb_device *udev;
+
+ if (!test_bit(RMNET_CTRL_DEV_READY, &dev->cudev->status))
+ return -ENETRESET;
+
+ udev = interface_to_usbdev(dev->cudev->intf);
+
+ sndurb = usb_alloc_urb(0, GFP_KERNEL);
+ if (!sndurb) {
+ dev_err(dev->devicep, "Error allocating read urb\n");
+ return -ENOMEM;
+ }
+
+ out_ctlreq = kmalloc(sizeof(*out_ctlreq), GFP_KERNEL);
+ if (!out_ctlreq) {
+ usb_free_urb(sndurb);
+ dev_err(dev->devicep, "Error allocating setup packet buffer\n");
+ return -ENOMEM;
+ }
+
+ /* CDC Send Encapsulated Request packet */
+ out_ctlreq->bRequestType = (USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE);
+ out_ctlreq->bRequest = USB_CDC_SEND_ENCAPSULATED_COMMAND;
+ out_ctlreq->wValue = 0;
+ out_ctlreq->wIndex = dev->cudev->intf->cur_altsetting->desc.bInterfaceNumber;
+ out_ctlreq->wLength = cpu_to_le16(cpkt->data_size);
+
+ usb_fill_control_urb(sndurb, udev, usb_sndctrlpipe(udev, 0), (unsigned char *)out_ctlreq, (void *)cpkt->data, cpkt->data_size, ctrl_write_callback, cpkt);
+
+ result = usb_autopm_get_interface(dev->cudev->intf);
+ if (result < 0) {
+ dev_dbg(dev->devicep, "%s: Unable to resume interface: %d\n", __func__, result);
+
+ /*
+ * Revisit: if (result == -EPERM)
+ * rmnet_usb_suspend(dev->intf, PMSG_SUSPEND);
+ */
+
+ usb_free_urb(sndurb);
+ kfree(out_ctlreq);
+ return result;
+ }
+
+ usb_anchor_urb(sndurb, &dev->cudev->tx_submitted);
+ dev->cudev->snd_encap_cmd_cnt++;
+ result = usb_submit_urb(sndurb, GFP_KERNEL);
+ if (result < 0) {
+ if (result != -ENODEV)
+ dev_err(dev->devicep, "%s: Submit URB error %d\n", __func__, result);
+ dev->cudev->snd_encap_cmd_cnt--;
+ usb_autopm_put_interface(dev->cudev->intf);
+ usb_unanchor_urb(sndurb);
+ usb_free_urb(sndurb);
+ kfree(out_ctlreq);
+ return result;
+ }
+
+ return size;
+}
+#else
+static int rmnet_usb_ctrl_write_cmd(struct rmnet_ctrl_udev *dev, u8 req,
+ u16 val, void *data, u16 size)
+{
+ struct usb_device *udev;
+ int ret;
+
+ if (!test_bit(RMNET_CTRL_DEV_READY, &dev->status))
+ return -ENETRESET;
+
+ ret = usb_autopm_get_interface(dev->intf);
+ if (ret < 0) {
+ pr_debug("%s: Unable to resume interface: %d\n",
+ __func__, ret);
+ return ret;
+ }
+
+ udev = interface_to_usbdev(dev->intf);
+ ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
+ req,
+ (USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE),
+ val,
+ dev->intf->cur_altsetting->desc.bInterfaceNumber,
+ data, size, USB_CTRL_SET_TIMEOUT);
+ if (ret < 0)
+ dev->tx_ctrl_err_cnt++;
+
+ usb_autopm_put_interface(dev->intf);
+
+ return ret;
+}
+#endif
+
+static int rmnet_ctl_open(struct inode *inode, struct file *file)
+{
+ struct ctrl_pkt_list_elem *list_elem = NULL;
+ unsigned long flag;
+ int retval = 0;
+ struct rmnet_ctrl_dev *dev =
+ container_of(inode->i_cdev, struct rmnet_ctrl_dev, cdev);
+
+#ifdef CONFIG_QCT_9K_MODEM
+ pr_info("%s+ \n", __func__);
+#endif
+
+ if (!dev)
+ return -ENODEV;
+
+ if (test_bit(RMNET_CTRL_DEV_OPEN, &dev->status))
+ goto already_opened;
+
+ if (dev->mdm_wait_timeout &&
+ !test_bit(RMNET_CTRL_DEV_READY, &dev->cudev->status)) {
+ retval = wait_event_interruptible_timeout(
+ dev->open_wait_queue,
+ test_bit(RMNET_CTRL_DEV_READY,
+ &dev->cudev->status),
+ msecs_to_jiffies(dev->mdm_wait_timeout * 1000));
+ if (retval == 0) {
+ dev_err(dev->devicep, "%s: Timeout opening %s\n",
+ __func__, dev->name);
+ return -ETIMEDOUT;
+ } else if (retval < 0) {
+ dev_err(dev->devicep, "%s: Error waiting for %s\n",
+ __func__, dev->name);
+ return retval;
+ }
+ }
+
+ if (!test_bit(RMNET_CTRL_DEV_READY, &dev->cudev->status)) {
+ dev_dbg(dev->devicep, "%s: Connection timedout opening %s\n",
+ __func__, dev->name);
+ return -ETIMEDOUT;
+ }
+
+ /* clear stale data if device close called but channel was ready */
+ spin_lock_irqsave(&dev->rx_lock, flag);
+ while (!list_empty(&dev->rx_list)) {
+ list_elem = list_first_entry(
+ &dev->rx_list,
+ struct ctrl_pkt_list_elem,
+ list);
+ list_del(&list_elem->list);
+ kfree(list_elem->cpkt.data);
+ kfree(list_elem);
+ }
+ spin_unlock_irqrestore(&dev->rx_lock, flag);
+
+ set_bit(RMNET_CTRL_DEV_OPEN, &dev->status);
+
+ file->private_data = dev;
+
+already_opened:
+ DBG("%s: Open called for %s\n", __func__, dev->name);
+#ifdef CONFIG_QCT_9K_MODEM
+ pr_info("%s- \n", __func__);
+#endif
+
+ return 0;
+}
+
+static int rmnet_ctl_release(struct inode *inode, struct file *file)
+{
+ struct ctrl_pkt_list_elem *list_elem = NULL;
+ struct rmnet_ctrl_dev *dev;
+ unsigned long flag;
+
+ dev = file->private_data;
+ if (!dev)
+ return -ENODEV;
+
+ DBG("%s Called on %s device\n", __func__, dev->name);
+
+ spin_lock_irqsave(&dev->rx_lock, flag);
+ while (!list_empty(&dev->rx_list)) {
+ list_elem = list_first_entry(
+ &dev->rx_list,
+ struct ctrl_pkt_list_elem,
+ list);
+ list_del(&list_elem->list);
+ kfree(list_elem->cpkt.data);
+ kfree(list_elem);
+ }
+ spin_unlock_irqrestore(&dev->rx_lock, flag);
+
+ clear_bit(RMNET_CTRL_DEV_OPEN, &dev->status);
+
+ file->private_data = NULL;
+
+ return 0;
+}
+
+static unsigned int rmnet_ctl_poll(struct file *file, poll_table *wait)
+{
+ unsigned int mask = 0;
+ struct rmnet_ctrl_dev *dev;
+
+ dev = file->private_data;
+ if (!dev)
+ return POLLERR;
+
+ poll_wait(file, &dev->read_wait_queue, wait);
+ if (!test_bit(RMNET_CTRL_DEV_READY, &dev->cudev->status)) {
+ dev_dbg(dev->devicep, "%s: Device not connected\n",
+ __func__);
+ return POLLERR;
+ }
+
+ if (!list_empty(&dev->rx_list))
+ mask |= POLLIN | POLLRDNORM;
+
+ return mask;
+}
+
+static ssize_t rmnet_ctl_read(struct file *file, char __user *buf, size_t count,
+ loff_t *ppos)
+{
+ int retval = 0;
+ int bytes_to_read;
+ unsigned int hdr_len = 0;
+ struct rmnet_ctrl_dev *dev;
+ struct ctrl_pkt_list_elem *list_elem = NULL;
+ unsigned long flags;
+
+ dev = file->private_data;
+ if (!dev)
+ return -ENODEV;
+
+ DBG("%s: Read from %s\n", __func__, dev->name);
+
+ctrl_read:
+ if (!test_bit(RMNET_CTRL_DEV_READY, &dev->cudev->status)) {
+ dev_dbg(dev->devicep, "%s: Device not connected\n",
+ __func__);
+ return -ENETRESET;
+ }
+ spin_lock_irqsave(&dev->rx_lock, flags);
+ if (list_empty(&dev->rx_list)) {
+ spin_unlock_irqrestore(&dev->rx_lock, flags);
+
+ retval = wait_event_interruptible(dev->read_wait_queue,
+ !list_empty(&dev->rx_list) ||
+ !test_bit(RMNET_CTRL_DEV_READY,
+ &dev->cudev->status));
+ if (retval < 0)
+ return retval;
+
+ goto ctrl_read;
+ }
+
+ list_elem = list_first_entry(&dev->rx_list,
+ struct ctrl_pkt_list_elem, list);
+ bytes_to_read = (uint32_t)(list_elem->cpkt.data_size);
+ if (bytes_to_read > count) {
+ spin_unlock_irqrestore(&dev->rx_lock, flags);
+ dev_err(dev->devicep, "%s: Packet size %d > buf size %zu\n",
+ __func__, bytes_to_read, count);
+ return -ENOMEM;
+ }
+ spin_unlock_irqrestore(&dev->rx_lock, flags);
+
+ if (test_bit(RMNET_CTRL_DEV_MUX_EN, &dev->status))
+ hdr_len = sizeof(struct mux_hdr);
+
+ if (copy_to_user(buf, list_elem->cpkt.data + hdr_len, bytes_to_read)) {
+ dev_err(dev->devicep,
+ "%s: copy_to_user failed for %s\n",
+ __func__, dev->name);
+ return -EFAULT;
+ }
+ spin_lock_irqsave(&dev->rx_lock, flags);
+ list_del(&list_elem->list);
+ spin_unlock_irqrestore(&dev->rx_lock, flags);
+
+ kfree(list_elem->cpkt.data);
+ kfree(list_elem);
+ DBG("%s: Returning %d bytes to %s\n", __func__, bytes_to_read,
+ dev->name);
+ DUMP_BUFFER("Read: ", bytes_to_read, buf);
+
+#ifdef CONFIG_QCT_9K_MODEM
+ if (get_radio_flag() & RADIO_FLAG_MORE_LOG)
+ pr_info("[RMNET] R: %i\n", bytes_to_read);
+#endif
+
+ return bytes_to_read;
+}
+
+static ssize_t rmnet_ctl_write(struct file *file, const char __user * buf,
+ size_t size, loff_t *pos)
+{
+ int status;
+ size_t total_len;
+ void *wbuf;
+ void *actual_data;
+ struct ctrl_pkt *cpkt;
+ struct rmnet_ctrl_dev *dev = file->private_data;
+
+ if (!dev)
+ return -ENODEV;
+
+ if (!size)
+ return -EINVAL;
+
+ if (!test_bit(RMNET_CTRL_DEV_READY, &dev->cudev->status))
+ return -ENETRESET;
+
+ DBG("%s: Writing %zu bytes on %s\n", __func__, size, dev->name);
+
+#ifdef CONFIG_QCT_9K_MODEM
+ if (get_radio_flag() & RADIO_FLAG_MORE_LOG)
+ pr_info("[RMNET] W: %zu\n", size);
+#endif
+
+ total_len = size;
+
+ if (test_bit(RMNET_CTRL_DEV_MUX_EN, &dev->status))
+ total_len += sizeof(struct mux_hdr) + MAX_PAD_BYTES(4);
+
+ wbuf = kmalloc(total_len, GFP_KERNEL);
+ if (!wbuf)
+ return -ENOMEM;
+
+ cpkt = kmalloc(sizeof(struct ctrl_pkt), GFP_KERNEL);
+ if (!cpkt) {
+ kfree(wbuf);
+ return -ENOMEM;
+ }
+ actual_data = cpkt->data = wbuf;
+ cpkt->data_size = total_len;
+ cpkt->ctxt = dev;
+
+ if (test_bit(RMNET_CTRL_DEV_MUX_EN, &dev->status)) {
+ actual_data = wbuf + sizeof(struct mux_hdr);
+ rmnet_usb_ctrl_mux(dev->ch_id, cpkt);
+ }
+
+ if (copy_from_user(actual_data, buf, size)) {
+ dev_err(dev->devicep,
+ "%s: Unable to copy data from userspace\n",
+ __func__);
+ kfree(wbuf);
+ kfree(cpkt);
+ return -EFAULT;
+ }
+ DUMP_BUFFER("Write: ", size, buf);
+
+#ifdef CONFIG_QCT_9K_MODEM
+ status = rmnet_usb_ctrl_write(dev, cpkt, size);
+ if (status == size)
+ return size;
+ else
+ pr_err("[%s] status %d\n", __func__, status);
+#else
+ status = rmnet_usb_ctrl_write_cmd(dev->cudev,
+ USB_CDC_SEND_ENCAPSULATED_COMMAND, 0, cpkt->data,
+ cpkt->data_size);
+ if (status > 0)
+ dev->cudev->snd_encap_cmd_cnt++;
+
+ kfree(cpkt->data);
+ kfree(cpkt);
+#endif
+
+ return status;
+}
+
+static int rmnet_ctrl_tiocmset(struct rmnet_ctrl_dev *dev, unsigned int set,
+ unsigned int clear)
+{
+ int retval;
+
+ mutex_lock(&dev->dev_lock);
+ if (set & TIOCM_DTR)
+ dev->cbits_tomdm |= ACM_CTRL_DTR;
+
+ /*
+ * TBD if (set & TIOCM_RTS)
+ * dev->cbits_tomdm |= ACM_CTRL_RTS;
+ */
+
+ if (clear & TIOCM_DTR)
+ dev->cbits_tomdm &= ~ACM_CTRL_DTR;
+
+ /*
+ * (clear & TIOCM_RTS)
+ * dev->cbits_tomdm &= ~ACM_CTRL_RTS;
+ */
+
+ mutex_unlock(&dev->dev_lock);
+
+#ifdef CONFIG_QCT_9K_MODEM
+ retval = usb_autopm_get_interface(dev->cudev->intf);
+ if (retval < 0) {
+ dev_dbg(dev->devicep, "%s: Unable to resume interface: %d\n",
+ __func__, retval);
+ return retval;
+ }
+
+ retval = rmnet_usb_ctrl_write_cmd(dev);
+
+ usb_autopm_put_interface(dev->cudev->intf);
+#else
+ retval = rmnet_usb_ctrl_write_cmd(dev->cudev,
+ USB_CDC_REQ_SET_CONTROL_LINE_STATE, 0, NULL, 0);
+ if (!retval)
+ dev->cudev->set_ctrl_line_state_cnt++;
+#endif
+
+ return retval;
+}
+
+static int rmnet_ctrl_tiocmget(struct rmnet_ctrl_dev *dev)
+{
+ int ret;
+
+ mutex_lock(&dev->dev_lock);
+ ret =
+ /*
+ * TBD(dev->cbits_tolocal & ACM_CTRL_DSR ? TIOCM_DSR : 0) |
+ * (dev->cbits_tolocal & ACM_CTRL_CTS ? TIOCM_CTS : 0) |
+ */
+ (dev->cbits_tolocal & ACM_CTRL_CD ? TIOCM_CD : 0) |
+ /*
+ * TBD (dev->cbits_tolocal & ACM_CTRL_RI ? TIOCM_RI : 0) |
+ *(dev->cbits_tomdm & ACM_CTRL_RTS ? TIOCM_RTS : 0) |
+ */
+ (dev->cbits_tomdm & ACM_CTRL_DTR ? TIOCM_DTR : 0);
+ mutex_unlock(&dev->dev_lock);
+
+ return ret;
+}
+
+static long rmnet_ctrl_ioctl(struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ int ret;
+ struct rmnet_ctrl_dev *dev;
+
+ dev = file->private_data;
+ if (!dev)
+ return -ENODEV;
+
+ switch (cmd) {
+ case TIOCMGET:
+
+ ret = rmnet_ctrl_tiocmget(dev);
+ break;
+ case TIOCMSET:
+ ret = rmnet_ctrl_tiocmset(dev, arg, ~arg);
+ break;
+ default:
+ ret = -EINVAL;
+ }
+
+ return ret;
+}
+
+static const struct file_operations ctrldev_fops = {
+ .owner = THIS_MODULE,
+ .read = rmnet_ctl_read,
+ .write = rmnet_ctl_write,
+ .unlocked_ioctl = rmnet_ctrl_ioctl,
+ .open = rmnet_ctl_open,
+ .release = rmnet_ctl_release,
+ .poll = rmnet_ctl_poll,
+};
+
+int rmnet_usb_ctrl_probe(struct usb_interface *intf,
+ struct usb_host_endpoint *int_in,
+ unsigned long rmnet_devnum,
+ unsigned long *data)
+{
+ struct rmnet_ctrl_udev *cudev;
+ struct rmnet_ctrl_dev *dev = NULL;
+ u16 wMaxPacketSize;
+ struct usb_endpoint_descriptor *ep;
+ struct usb_device *udev = interface_to_usbdev(intf);
+ int interval;
+ int ret = 0, n;
+
+ /* Find next available ctrl_dev */
+ for (n = 0; n < insts_per_dev; n++) {
+ dev = &ctrl_devs[rmnet_devnum][n];
+ if (!dev->claimed)
+ break;
+ }
+
+ if (!dev || n == insts_per_dev) {
+ pr_err("%s: No available ctrl devices for %lu\n", __func__,
+ rmnet_devnum);
+ return -ENODEV;
+ }
+
+ cudev = dev->cudev;
+
+ cudev->int_pipe = usb_rcvintpipe(udev,
+ int_in->desc.bEndpointAddress & USB_ENDPOINT_NUMBER_MASK);
+
+ cudev->intf = intf;
+
+ cudev->inturb = usb_alloc_urb(0, GFP_KERNEL);
+ if (!cudev->inturb) {
+ dev_err(&intf->dev, "Error allocating int urb\n");
+ return -ENOMEM;
+ }
+
+ /*use max pkt size from ep desc*/
+ ep = &cudev->intf->cur_altsetting->endpoint[0].desc;
+ wMaxPacketSize = le16_to_cpu(ep->wMaxPacketSize);
+
+ cudev->intbuf = kmalloc(wMaxPacketSize, GFP_KERNEL);
+ if (!cudev->intbuf) {
+ usb_free_urb(cudev->inturb);
+ cudev->inturb = NULL;
+ dev_err(&intf->dev, "Error allocating int buffer\n");
+ return -ENOMEM;
+ }
+
+ cudev->in_ctlreq->bRequestType =
+ (USB_DIR_IN | USB_TYPE_CLASS | USB_RECIP_INTERFACE);
+ cudev->in_ctlreq->bRequest = USB_CDC_GET_ENCAPSULATED_RESPONSE;
+ cudev->in_ctlreq->wValue = 0;
+ cudev->in_ctlreq->wIndex =
+ cudev->intf->cur_altsetting->desc.bInterfaceNumber;
+ cudev->in_ctlreq->wLength = cpu_to_le16(DEFAULT_READ_URB_LENGTH);
+
+ interval = int_in->desc.bInterval;
+
+ usb_fill_int_urb(cudev->inturb, udev,
+ cudev->int_pipe,
+ cudev->intbuf, wMaxPacketSize,
+ notification_available_cb, cudev, interval);
+
+ usb_mark_last_busy(udev);
+ mutex_init(&cudev->udev_lock);
+ ret = rmnet_usb_ctrl_start_rx(cudev);
+ if (ret) {
+ usb_free_urb(cudev->inturb);
+ cudev->inturb = NULL;
+ kfree(cudev->intbuf);
+ cudev->intbuf = NULL;
+ return ret;
+ }
+
+ *data = (unsigned long)cudev;
+
+ /* If MUX is enabled, wakeup the open process here */
+ if (test_bit(RMNET_CTRL_DEV_MUX_EN, &cudev->status)) {
+ set_bit(RMNET_CTRL_DEV_READY, &cudev->status);
+ for (n = 0; n < insts_per_dev; n++) {
+ dev = &ctrl_devs[rmnet_devnum][n];
+ wake_up(&dev->open_wait_queue);
+ }
+ } else {
+ cudev->ctrldev_id = n;
+ dev->claimed = true;
+ }
+
+ return 0;
+}
+
+void rmnet_usb_ctrl_disconnect(struct rmnet_ctrl_udev *dev)
+{
+ struct rmnet_ctrl_dev *cdev;
+ int n;
+
+ clear_bit(RMNET_CTRL_DEV_READY, &dev->status);
+
+ if (test_bit(RMNET_CTRL_DEV_MUX_EN, &dev->status)) {
+ for (n = 0; n < insts_per_dev; n++) {
+ cdev = &ctrl_devs[dev->rdev_num][n];
+ wake_up(&cdev->read_wait_queue);
+ mutex_lock(&cdev->dev_lock);
+ cdev->cbits_tolocal = ~ACM_CTRL_CD;
+ cdev->cbits_tomdm = ~ACM_CTRL_DTR;
+ mutex_unlock(&cdev->dev_lock);
+ }
+ } else {
+ cdev = &ctrl_devs[dev->rdev_num][dev->ctrldev_id];
+ cdev->claimed = false;
+ wake_up(&cdev->read_wait_queue);
+ mutex_lock(&cdev->dev_lock);
+ cdev->cbits_tolocal = ~ACM_CTRL_CD;
+ cdev->cbits_tomdm = ~ACM_CTRL_DTR;
+ mutex_unlock(&cdev->dev_lock);
+ }
+
+ cancel_work_sync(&dev->get_encap_work);
+
+ usb_kill_anchored_urbs(&dev->tx_submitted);
+ usb_kill_anchored_urbs(&dev->rx_submitted);
+
+ usb_free_urb(dev->inturb);
+ dev->inturb = NULL;
+
+ kfree(dev->intbuf);
+ dev->intbuf = NULL;
+}
+
+#if defined(CONFIG_DEBUG_FS)
+#define DEBUG_BUF_SIZE 4096
+static ssize_t rmnet_usb_ctrl_read_stats(struct file *file, char __user *ubuf,
+ size_t count, loff_t *ppos)
+{
+ struct rmnet_ctrl_udev *dev;
+ struct rmnet_ctrl_dev *cdev;
+ char *buf;
+ int ret;
+ int i, n;
+ int temp = 0;
+
+ buf = kzalloc(DEBUG_BUF_SIZE, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
+ for (i = 0; i < num_devs; i++) {
+ for (n = 0; n < insts_per_dev; n++) {
+ cdev = &ctrl_devs[i][n];
+ dev = cdev->cudev;
+ temp += scnprintf(buf + temp, DEBUG_BUF_SIZE - temp,
+ "\n#ctrl_dev: %p Name: %s#\n"
+ "snd encap cmd cnt %u\n"
+ "resp avail cnt: %u\n"
+ "get encap resp cnt: %u\n"
+ "set ctrl line state cnt: %u\n"
+ "tx_err_cnt: %u\n"
+ "cbits_tolocal: %d\n"
+ "cbits_tomdm: %d\n"
+ "mdm_wait_timeout: %u\n"
+ "zlp_cnt: %u\n"
+ "get_encap_failure_cnt %u\n"
+ "ignore_encap_work %u\n"
+ "invalid mux id cnt %u\n"
+ "RMNET_CTRL_DEV_MUX_EN: %d\n"
+ "RMNET_CTRL_DEV_OPEN: %d\n"
+ "RMNET_CTRL_DEV_READY: %d\n",
+ cdev, cdev->name,
+ dev->snd_encap_cmd_cnt,
+ dev->resp_avail_cnt,
+ dev->get_encap_resp_cnt,
+ dev->set_ctrl_line_state_cnt,
+ dev->tx_ctrl_err_cnt,
+ cdev->cbits_tolocal,
+ cdev->cbits_tomdm,
+ cdev->mdm_wait_timeout,
+ dev->zlp_cnt,
+ dev->get_encap_failure_cnt,
+ dev->ignore_encap_work,
+ dev->invalid_mux_id_cnt,
+ test_bit(RMNET_CTRL_DEV_MUX_EN,
+ &dev->status),
+ test_bit(RMNET_CTRL_DEV_OPEN,
+ &dev->status),
+ test_bit(RMNET_CTRL_DEV_READY,
+ &dev->status));
+ }
+ }
+
+ ret = simple_read_from_buffer(ubuf, count, ppos, buf, temp);
+ kfree(buf);
+ return ret;
+}
+
+static ssize_t rmnet_usb_ctrl_reset_stats(struct file *file,
+ const char __user *buf, size_t count, loff_t *ppos)
+{
+ struct rmnet_ctrl_udev *dev;
+ struct rmnet_ctrl_dev *cdev;
+ int i, n;
+
+ for (i = 0; i < num_devs; i++) {
+ for (n = 0; n < insts_per_dev; n++) {
+ cdev = &ctrl_devs[i][n];
+ dev = cdev->cudev;
+
+ dev->snd_encap_cmd_cnt = 0;
+ dev->resp_avail_cnt = 0;
+ dev->get_encap_resp_cnt = 0;
+ dev->set_ctrl_line_state_cnt = 0;
+ dev->tx_ctrl_err_cnt = 0;
+ dev->zlp_cnt = 0;
+ dev->invalid_mux_id_cnt = 0;
+ dev->ignore_encap_work = 0;
+ }
+ }
+ return count;
+}
+
+const struct file_operations rmnet_usb_ctrl_stats_ops = {
+ .read = rmnet_usb_ctrl_read_stats,
+ .write = rmnet_usb_ctrl_reset_stats,
+};
+
+struct dentry *usb_ctrl_dent;
+struct dentry *usb_ctrl_dfile;
+static void rmnet_usb_ctrl_debugfs_init(void)
+{
+ usb_ctrl_dent = debugfs_create_dir("rmnet_usb_ctrl", NULL);
+ if (!usb_ctrl_dent || IS_ERR(usb_ctrl_dent))
+ return;
+
+ usb_ctrl_dfile = debugfs_create_file("status", 0644, usb_ctrl_dent,
+ NULL, &rmnet_usb_ctrl_stats_ops);
+ if (!usb_ctrl_dfile || IS_ERR(usb_ctrl_dfile))
+ debugfs_remove(usb_ctrl_dent);
+}
+
+static void rmnet_usb_ctrl_debugfs_exit(void)
+{
+ debugfs_remove(usb_ctrl_dfile);
+ debugfs_remove(usb_ctrl_dent);
+}
+
+#else
+static void rmnet_usb_ctrl_debugfs_init(void) { }
+static void rmnet_usb_ctrl_debugfs_exit(void) { }
+#endif
+
+static void free_rmnet_ctrl_udev(struct rmnet_ctrl_udev *cudev)
+{
+ kfree(cudev->in_ctlreq);
+ kfree(cudev->rcvbuf);
+ kfree(cudev->intbuf);
+ usb_free_urb(cudev->rcvurb);
+ usb_free_urb(cudev->inturb);
+ destroy_workqueue(cudev->wq);
+ kfree(cudev);
+}
+
+int rmnet_usb_ctrl_init(int no_rmnet_devs, int no_rmnet_insts_per_dev,
+ unsigned long mux_info)
+{
+ struct rmnet_ctrl_dev *dev;
+ struct rmnet_ctrl_udev *cudev;
+ int i, n;
+ int status;
+ int cmux_enabled;
+
+ num_devs = no_rmnet_devs;
+ insts_per_dev = no_rmnet_insts_per_dev;
+
+ ctrl_devs = kzalloc(num_devs * sizeof(*ctrl_devs), GFP_KERNEL);
+ if (!ctrl_devs)
+ return -ENOMEM;
+
+ for (i = 0; i < num_devs; i++) {
+ ctrl_devs[i] = kzalloc(insts_per_dev * sizeof(*ctrl_devs[i]),
+ GFP_KERNEL);
+ if (!ctrl_devs[i])
+ return -ENOMEM;
+
+ status = alloc_chrdev_region(&ctrldev_num[i], 0, insts_per_dev,
+ rmnet_dev_names[i]);
+ if (status < 0) {
+ pr_err("ERROR:%s: alloc_chrdev_region() ret %i.\n",
+ __func__, status);
+ return status;
+ }
+
+ ctrldev_classp[i] = class_create(THIS_MODULE,
+ rmnet_dev_names[i]);
+ if (IS_ERR(ctrldev_classp[i])) {
+ pr_err("ERROR:%s: class_create() ENOMEM\n", __func__);
+ status = PTR_ERR(ctrldev_classp[i]);
+ return status;
+ }
+
+ for (n = 0; n < insts_per_dev; n++) {
+ dev = &ctrl_devs[i][n];
+
+ /*for debug purpose*/
+ snprintf(dev->name, CTRL_DEV_MAX_LEN, "%s%d",
+ rmnet_dev_names[i], n);
+
+ /* ctrl usb dev inits */
+ cmux_enabled = test_bit(i, &mux_info);
+ if (n && cmux_enabled)
+ /* for mux config one cudev maps to n dev */
+ goto skip_cudev_init;
+
+ cudev = kzalloc(sizeof(*cudev), GFP_KERNEL);
+ if (!cudev) {
+ pr_err("Error allocating rmnet usb ctrl dev\n");
+ return -ENOMEM;
+ }
+
+ cudev->rdev_num = i;
+ cudev->wq = create_singlethread_workqueue(dev->name);
+ if (!cudev->wq) {
+ pr_err("unable to allocate workqueue\n");
+ kfree(cudev);
+ return -ENOMEM;
+ }
+
+ init_usb_anchor(&cudev->tx_submitted);
+ init_usb_anchor(&cudev->rx_submitted);
+ INIT_WORK(&cudev->get_encap_work, get_encap_work);
+
+ status = rmnet_usb_ctrl_alloc_rx(cudev);
+ if (status) {
+ destroy_workqueue(cudev->wq);
+ kfree(cudev);
+ return status;
+ }
+
+skip_cudev_init:
+ /* ctrl dev inits */
+ dev->cudev = cudev;
+
+ if (cmux_enabled) {
+ set_bit(RMNET_CTRL_DEV_MUX_EN, &dev->status);
+ set_bit(RMNET_CTRL_DEV_MUX_EN,
+ &dev->cudev->status);
+ }
+
+ dev->ch_id = n;
+
+ mutex_init(&dev->dev_lock);
+ spin_lock_init(&dev->rx_lock);
+ init_waitqueue_head(&dev->read_wait_queue);
+ init_waitqueue_head(&dev->open_wait_queue);
+ INIT_LIST_HEAD(&dev->rx_list);
+
+ cdev_init(&dev->cdev, &ctrldev_fops);
+ dev->cdev.owner = THIS_MODULE;
+
+ status = cdev_add(&dev->cdev, (ctrldev_num[i] + n), 1);
+ if (status) {
+ pr_err("%s: cdev_add() ret %i\n", __func__,
+ status);
+ free_rmnet_ctrl_udev(dev->cudev);
+ return status;
+ }
+
+ dev->devicep = device_create(ctrldev_classp[i], NULL,
+ (ctrldev_num[i] + n), NULL,
+ "%s%d", rmnet_dev_names[i],
+ n);
+ if (IS_ERR(dev->devicep)) {
+ pr_err("%s: device_create() returned %ld\n",
+ __func__, PTR_ERR(dev->devicep));
+ cdev_del(&dev->cdev);
+ free_rmnet_ctrl_udev(dev->cudev);
+ return PTR_ERR(dev->devicep);
+ }
+
+ /*create /sys/class/hsicctl/hsicctlx/modem_wait*/
+ status = device_create_file(dev->devicep,
+ &dev_attr_modem_wait);
+ if (status) {
+ device_destroy(dev->devicep->class,
+ dev->devicep->devt);
+ cdev_del(&dev->cdev);
+ free_rmnet_ctrl_udev(dev->cudev);
+ return status;
+ }
+ dev_set_drvdata(dev->devicep, dev);
+ }
+ }
+
+ rmnet_usb_ctrl_debugfs_init();
+ pr_info("rmnet usb ctrl Initialized.\n");
+ return 0;
+}
+
+static void free_rmnet_ctrl_dev(struct rmnet_ctrl_dev *dev)
+{
+ device_remove_file(dev->devicep, &dev_attr_modem_wait);
+ cdev_del(&dev->cdev);
+ device_destroy(dev->devicep->class,
+ dev->devicep->devt);
+}
+
+void rmnet_usb_ctrl_exit(int no_rmnet_devs, int no_rmnet_insts_per_dev,
+ unsigned long mux_info)
+{
+ int i, n;
+
+ for (i = 0; i < no_rmnet_devs; i++) {
+ for (n = 0; n < no_rmnet_insts_per_dev; n++) {
+ free_rmnet_ctrl_dev(&ctrl_devs[i][n]);
+ if (n && test_bit(i, &mux_info))
+ continue;
+ free_rmnet_ctrl_udev((&ctrl_devs[i][n])->cudev);
+ }
+
+ kfree(ctrl_devs[i]);
+
+ class_destroy(ctrldev_classp[i]);
+ if (ctrldev_num[i])
+ unregister_chrdev_region(ctrldev_num[i], insts_per_dev);
+ }
+
+ kfree(ctrl_devs);
+ rmnet_usb_ctrl_debugfs_exit();
+}
diff --git a/drivers/net/usb/rmnet_usb_data.c b/drivers/net/usb/rmnet_usb_data.c
new file mode 100644
index 0000000..1fc5b0e
--- /dev/null
+++ b/drivers/net/usb/rmnet_usb_data.c
@@ -0,0 +1,819 @@
+/* Copyright (c) 2011-2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/mii.h>
+#include <linux/if_arp.h>
+#include <linux/etherdevice.h>
+#include <linux/debugfs.h>
+#include <linux/seq_file.h>
+#include <linux/usb.h>
+#include <linux/ratelimit.h>
+#include <linux/usb/usbnet.h>
+#include <linux/msm_rmnet.h>
+#ifdef CONFIG_QCT_9K_MODEM
+#include <mach/board_htc.h>
+#endif
+
+#include "rmnet_usb.h"
+
+#define RMNET_DATA_LEN 2000
+#define RMNET_HEADROOM sizeof(struct QMI_QOS_HDR_S)
+
+static unsigned int no_rmnet_devs = 1;
+module_param(no_rmnet_devs, uint, S_IRUGO | S_IWUSR);
+
+unsigned int no_rmnet_insts_per_dev = 4;
+module_param(no_rmnet_insts_per_dev, uint, S_IRUGO | S_IWUSR);
+
+/*
+ * To support mux on multiple devices, the bit position represents the
+ * device and the bit value represents whether mux is enabled.
+ * e.g. bit 0: mdm over HSIC, bit 1: mdm over hsusb
+ */
+static unsigned long mux_enabled;
+module_param(mux_enabled, ulong, S_IRUGO | S_IWUSR);
+
+struct usbnet *unet_list[TOTAL_RMNET_DEV_COUNT];
+
+/* net device name prefixes, indexed by driver_info->data */
+static const char * const rmnet_names[] = {
+ "rmnet_usb%d",
+ "rmnet2_usb%d",
+};
+
+static int data_msg_dbg_mask;
+
+enum {
+ DEBUG_MASK_LVL0 = 1U << 0,
+ DEBUG_MASK_LVL1 = 1U << 1,
+ DEBUG_MASK_LVL2 = 1U << 2,
+};
+
+#define DBG(m, x...) do { \
+ if (data_msg_dbg_mask & m) \
+ pr_info(x); \
+} while (0)
+
+/*echo dbg_mask > /sys/class/net/rmnet_usbx/dbg_mask*/
+static ssize_t dbg_mask_store(struct device *d,
+ struct device_attribute *attr,
+ const char *buf, size_t n)
+{
+ unsigned int dbg_mask;
+ struct net_device *dev = to_net_dev(d);
+ struct usbnet *unet = netdev_priv(dev);
+
+ if (!dev)
+ return -ENODEV;
+
+ if (sscanf(buf, "%u", &dbg_mask) != 1)
+ return -EINVAL;
+
+ /*enable dbg msgs for data driver*/
+ data_msg_dbg_mask = dbg_mask;
+
+ /*set default msg level*/
+ unet->msg_enable = NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_LINK;
+
+ /*enable netif_xxx msgs*/
+ if (dbg_mask & DEBUG_MASK_LVL0)
+ unet->msg_enable |= NETIF_MSG_IFUP | NETIF_MSG_IFDOWN;
+ if (dbg_mask & DEBUG_MASK_LVL1)
+ unet->msg_enable |= NETIF_MSG_TX_ERR | NETIF_MSG_RX_ERR
+ | NETIF_MSG_TX_QUEUED | NETIF_MSG_TX_DONE
+ | NETIF_MSG_RX_STATUS;
+
+ return n;
+}
+
+static ssize_t dbg_mask_show(struct device *d,
+ struct device_attribute *attr, char *buf)
+{
+ return snprintf(buf, PAGE_SIZE, "%d\n", data_msg_dbg_mask);
+}
+
+static DEVICE_ATTR(dbg_mask, 0644, dbg_mask_show, dbg_mask_store);
+
+#define DBG0(x...) DBG(DEBUG_MASK_LVL0, x)
+#define DBG1(x...) DBG(DEBUG_MASK_LVL1, x)
+#define DBG2(x...) DBG(DEBUG_MASK_LVL2, x)
+
+static int rmnet_data_start(void);
+static bool rmnet_data_init;
+
+static int rmnet_init(const char *val, const struct kernel_param *kp)
+{
+ int ret = 0;
+
+ if (rmnet_data_init) {
+ pr_err("dynamic setting rmnet params currently unsupported\n");
+ return -EINVAL;
+ }
+
+ ret = param_set_bool(val, kp);
+ if (ret)
+ return ret;
+
+ rmnet_data_start();
+
+ return ret;
+}
+
+static struct kernel_param_ops rmnet_init_ops = {
+ .set = rmnet_init,
+ .get = param_get_bool,
+};
+module_param_cb(rmnet_data_init, &rmnet_init_ops, &rmnet_data_init,
+ S_IRUGO | S_IWUSR);
+
+static void rmnet_usb_setup(struct net_device *);
+static int rmnet_ioctl(struct net_device *, struct ifreq *, int);
+
+static int rmnet_usb_suspend(struct usb_interface *iface, pm_message_t message)
+{
+ struct usbnet *unet = usb_get_intfdata(iface);
+ struct rmnet_ctrl_udev *dev;
+
+ dev = (struct rmnet_ctrl_udev *)unet->data[1];
+ if (work_busy(&dev->get_encap_work))
+ return -EBUSY;
+
+ usb_kill_anchored_urbs(&dev->rx_submitted);
+ if (work_busy(&dev->get_encap_work))
+ return -EBUSY;
+
+ usbnet_pause_rx(unet);
+
+ if (usbnet_suspend(iface, message)) {
+ usbnet_resume_rx(unet);
+ rmnet_usb_ctrl_start_rx(dev);
+ return -EBUSY;
+ }
+
+ return 0;
+}
+
+static int rmnet_usb_resume(struct usb_interface *iface)
+{
+ struct usbnet *unet = usb_get_intfdata(iface);
+ struct rmnet_ctrl_udev *dev;
+
+ dev = (struct rmnet_ctrl_udev *)unet->data[1];
+
+ usbnet_resume(iface);
+ usbnet_resume_rx(unet);
+
+ return rmnet_usb_ctrl_start_rx(dev);
+}
+
+static int rmnet_usb_bind(struct usbnet *usbnet, struct usb_interface *iface)
+{
+ struct usb_host_endpoint *endpoint = NULL;
+ struct usb_host_endpoint *bulk_in = NULL;
+ struct usb_host_endpoint *bulk_out = NULL;
+ struct usb_host_endpoint *int_in = NULL;
+ struct driver_info *info = usbnet->driver_info;
+ int status = 0;
+ int i;
+ int numends;
+
+ numends = iface->cur_altsetting->desc.bNumEndpoints;
+ for (i = 0; i < numends; i++) {
+ endpoint = iface->cur_altsetting->endpoint + i;
+ if (!endpoint) {
+ dev_err(&iface->dev, "%s: invalid endpoint %u\n",
+ __func__, i);
+ status = -EINVAL;
+ goto out;
+ }
+ if (usb_endpoint_is_bulk_in(&endpoint->desc))
+ bulk_in = endpoint;
+ else if (usb_endpoint_is_bulk_out(&endpoint->desc))
+ bulk_out = endpoint;
+ else if (usb_endpoint_is_int_in(&endpoint->desc))
+ int_in = endpoint;
+ }
+
+ if (!bulk_in || !bulk_out || !int_in) {
+ dev_err(&iface->dev, "%s: invalid endpoints\n", __func__);
+ status = -EINVAL;
+ goto out;
+ }
+ usbnet->in = usb_rcvbulkpipe(usbnet->udev,
+ bulk_in->desc.bEndpointAddress & USB_ENDPOINT_NUMBER_MASK);
+ usbnet->out = usb_sndbulkpipe(usbnet->udev,
+ bulk_out->desc.bEndpointAddress & USB_ENDPOINT_NUMBER_MASK);
+ usbnet->status = int_in;
+
+ strlcpy(usbnet->net->name, rmnet_names[info->data],
+ IFNAMSIZ);
+
+out:
+ return status;
+}
+
+static struct sk_buff *rmnet_usb_tx_fixup(struct usbnet *dev,
+ struct sk_buff *skb, gfp_t flags)
+{
+ struct QMI_QOS_HDR_S *qmih;
+
+ if (test_bit(RMNET_MODE_QOS, &dev->data[0])) {
+ qmih = (struct QMI_QOS_HDR_S *)
+ skb_push(skb, sizeof(struct QMI_QOS_HDR_S));
+ qmih->version = 1;
+ qmih->flags = 0;
+ qmih->flow_id = skb->mark;
+ }
+
+ DBG1("[%s] Tx packet #%lu len=%d mark=0x%x\n",
+ dev->net->name, dev->net->stats.tx_packets,
+ skb->len, skb->mark);
+
+ return skb;
+}
+
+static __be16 rmnet_ip_type_trans(struct sk_buff *skb)
+{
+ __be16 protocol = 0;
+
+ switch (skb->data[0] & 0xf0) {
+ case 0x40:
+ protocol = htons(ETH_P_IP);
+ break;
+ case 0x60:
+ protocol = htons(ETH_P_IPV6);
+ break;
+ default:
+ /*
+ * There is no good way to determine if a packet has
+ * a MAP header. For now default to MAP protocol
+ */
+ protocol = htons(ETH_P_MAP);
+ }
+
+ return protocol;
+}
+
+static int rmnet_usb_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
+{
+ if (test_bit(RMNET_MODE_LLP_IP, &dev->data[0]))
+ skb->protocol = rmnet_ip_type_trans(skb);
+ else /*set zero for eth mode*/
+ skb->protocol = 0;
+
+ DBG1("[%s] Rx packet #%lu len=%d\n",
+ dev->net->name, dev->net->stats.rx_packets, skb->len);
+
+ return 1;
+}
+
+static int rmnet_usb_manage_power(struct usbnet *dev, int on)
+{
+ dev->intf->needs_remote_wakeup = on;
+ return 0;
+}
+
+static int rmnet_change_mtu(struct net_device *dev, int new_mtu)
+{
+ if (new_mtu < 0 || new_mtu > RMNET_DATA_LEN)
+ return -EINVAL;
+
+ DBG0("[%s] MTU change: old=%d new=%d\n", dev->name, dev->mtu, new_mtu);
+
+ dev->mtu = new_mtu;
+
+ return 0;
+}
+
+static struct net_device_stats *rmnet_get_stats(struct net_device *dev)
+{
+ return &dev->stats;
+}
+
+static const struct net_device_ops rmnet_usb_ops_ether = {
+ .ndo_open = usbnet_open,
+ .ndo_stop = usbnet_stop,
+ .ndo_start_xmit = usbnet_start_xmit,
+ .ndo_get_stats = rmnet_get_stats,
+ /*.ndo_set_multicast_list = rmnet_set_multicast_list,*/
+ .ndo_tx_timeout = usbnet_tx_timeout,
+ .ndo_do_ioctl = rmnet_ioctl,
+ .ndo_change_mtu = usbnet_change_mtu,
+ .ndo_set_mac_address = eth_mac_addr,
+ .ndo_validate_addr = eth_validate_addr,
+};
+
+static const struct net_device_ops rmnet_usb_ops_ip = {
+ .ndo_open = usbnet_open,
+ .ndo_stop = usbnet_stop,
+ .ndo_start_xmit = usbnet_start_xmit,
+ .ndo_get_stats = rmnet_get_stats,
+ /*.ndo_set_multicast_list = rmnet_set_multicast_list,*/
+ .ndo_tx_timeout = usbnet_tx_timeout,
+ .ndo_do_ioctl = rmnet_ioctl,
+ .ndo_change_mtu = rmnet_change_mtu,
+ .ndo_set_mac_address = NULL,
+ .ndo_validate_addr = NULL,
+};
+
+static int rmnet_ioctl_extended(struct net_device *dev, struct ifreq *ifr)
+{
+ struct rmnet_ioctl_extended_s ext_cmd;
+ int rc = 0;
+ struct usbnet *unet = netdev_priv(dev);
+
+ rc = copy_from_user(&ext_cmd, ifr->ifr_ifru.ifru_data,
+ sizeof(struct rmnet_ioctl_extended_s));
+
+ if (rc) {
+ DBG0("%s(): copy_from_user() failed\n", __func__);
+ return -EFAULT;
+ }
+
+ switch (ext_cmd.extended_ioctl) {
+ case RMNET_IOCTL_GET_SUPPORTED_FEATURES:
+ ext_cmd.u.data = 0;
+ break;
+
+ case RMNET_IOCTL_SET_MRU:
+ if (test_bit(EVENT_DEV_OPEN, &unet->flags))
+ return -EBUSY;
+
+ /* 16K max */
+ if ((size_t)ext_cmd.u.data > 0x4000)
+ return -EINVAL;
+
+ unet->rx_urb_size = (size_t)ext_cmd.u.data;
+ DBG0("[%s] rmnet_ioctl(): SET MRU to %zu\n", dev->name,
+ unet->rx_urb_size);
+ break;
+
+ case RMNET_IOCTL_GET_MRU:
+ ext_cmd.u.data = (uint32_t)unet->rx_urb_size;
+ break;
+
+ case RMNET_IOCTL_GET_DRIVER_NAME:
+ strlcpy(ext_cmd.u.if_name, unet->driver_name,
+ sizeof(ext_cmd.u.if_name));
+ break;
+ case RMNET_IOCTL_GET_EPID:
+ ext_cmd.u.data =
+ unet->intf->cur_altsetting->desc.bInterfaceNumber;
+ break;
+ }
+
+ if (copy_to_user(ifr->ifr_ifru.ifru_data, &ext_cmd,
+ sizeof(struct rmnet_ioctl_extended_s))) {
+ DBG0("%s(): copy_to_user() failed\n", __func__);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int rmnet_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+{
+ struct usbnet *unet = netdev_priv(dev);
+ unsigned long old_opmode;
+ int prev_mtu = dev->mtu;
+ int rc = 0;
+ struct rmnet_ioctl_data_s ioctl_data;
+
+ old_opmode = unet->data[0]; /*data[0] saves operation mode*/
+ /* Process IOCTL command */
+ switch (cmd) {
+ case RMNET_IOCTL_SET_LLP_ETHERNET: /*Set Ethernet protocol*/
+ /* Perform Ethernet config only if in IP mode currently*/
+ if (test_bit(RMNET_MODE_LLP_IP, &unet->data[0])) {
+ ether_setup(dev);
+ random_ether_addr(dev->dev_addr);
+ dev->mtu = prev_mtu;
+ dev->netdev_ops = &rmnet_usb_ops_ether;
+ clear_bit(RMNET_MODE_LLP_IP, &unet->data[0]);
+ set_bit(RMNET_MODE_LLP_ETH, &unet->data[0]);
+ DBG0("[%s] rmnet_ioctl(): set Ethernet protocol mode\n",
+ dev->name);
+ }
+ break;
+
+ case RMNET_IOCTL_SET_LLP_IP: /* Set RAWIP protocol*/
+ /* Perform IP config only if in Ethernet mode currently*/
+ if (test_bit(RMNET_MODE_LLP_ETH, &unet->data[0])) {
+
+ /* Undo config done in ether_setup() */
+ dev->header_ops = NULL; /* No header */
+ dev->type = ARPHRD_RAWIP;
+ dev->hard_header_len = 0;
+ dev->mtu = prev_mtu;
+ dev->addr_len = 0;
+ dev->flags &= ~(IFF_BROADCAST | IFF_MULTICAST);
+ dev->netdev_ops = &rmnet_usb_ops_ip;
+ clear_bit(RMNET_MODE_LLP_ETH, &unet->data[0]);
+ set_bit(RMNET_MODE_LLP_IP, &unet->data[0]);
+ DBG0("[%s] rmnet_ioctl(): set IP protocol mode\n",
+ dev->name);
+ }
+ break;
+
+ case RMNET_IOCTL_GET_LLP: /* Get link protocol state */
+ ioctl_data.u.operation_mode = (unet->data[0]
+ & (RMNET_MODE_LLP_ETH
+ | RMNET_MODE_LLP_IP));
+ if (copy_to_user(ifr->ifr_ifru.ifru_data, &ioctl_data,
+ sizeof(struct rmnet_ioctl_data_s)))
+ rc = -EFAULT;
+ break;
+
+ case RMNET_IOCTL_SET_QOS_ENABLE: /* Set QoS header enabled*/
+ set_bit(RMNET_MODE_QOS, &unet->data[0]);
+ DBG0("[%s] rmnet_ioctl(): set QMI QOS header enable\n",
+ dev->name);
+ break;
+
+ case RMNET_IOCTL_SET_QOS_DISABLE: /* Set QoS header disabled */
+ clear_bit(RMNET_MODE_QOS, &unet->data[0]);
+ DBG0("[%s] rmnet_ioctl(): set QMI QOS header disable\n",
+ dev->name);
+ break;
+
+ case RMNET_IOCTL_GET_QOS: /* Get QoS header state */
+ ioctl_data.u.operation_mode = (unet->data[0]
+ & RMNET_MODE_QOS);
+ if (copy_to_user(ifr->ifr_ifru.ifru_data, &ioctl_data,
+ sizeof(struct rmnet_ioctl_data_s)))
+ rc = -EFAULT;
+ break;
+
+ case RMNET_IOCTL_GET_OPMODE: /* Get operation mode*/
+ ioctl_data.u.operation_mode = unet->data[0];
+ if (copy_to_user(ifr->ifr_ifru.ifru_data, &ioctl_data,
+ sizeof(struct rmnet_ioctl_data_s)))
+ rc = -EFAULT;
+ break;
+
+ case RMNET_IOCTL_OPEN: /* Open transport port */
+ rc = usbnet_open(dev);
+ DBG0("[%s] rmnet_ioctl(): open transport port\n", dev->name);
+ break;
+
+ case RMNET_IOCTL_CLOSE: /* Close transport port*/
+ rc = usbnet_stop(dev);
+ DBG0("[%s] rmnet_ioctl(): close transport port\n", dev->name);
+ break;
+
+ case RMNET_IOCTL_EXTENDED:
+ rc = rmnet_ioctl_extended(dev, ifr);
+ break;
+
+ default:
+ dev_dbg(&unet->intf->dev, "[%s] error: rmnet_ioctl called for unsupported cmd[0x%x]\n",
+ dev->name, cmd);
+ return -EINVAL;
+ }
+
+ DBG2("[%s] %s: cmd=0x%x opmode old=0x%08lx new=0x%08lx\n",
+ dev->name, __func__, cmd, old_opmode, unet->data[0]);
+
+ return rc;
+}
+
+static void rmnet_usb_setup(struct net_device *dev)
+{
+ /* Using Ethernet mode by default */
+ dev->netdev_ops = &rmnet_usb_ops_ether;
+
+ /* set this after calling ether_setup */
+ dev->mtu = RMNET_DATA_LEN;
+
+ /* for QOS header */
+ dev->needed_headroom = RMNET_HEADROOM;
+
+ random_ether_addr(dev->dev_addr);
+ dev->watchdog_timeo = 1000; /* 10 seconds? */
+}
+
+static int rmnet_usb_data_status(struct seq_file *s, void *unused)
+{
+ struct usbnet *unet = s->private;
+
+ seq_printf(s, "RMNET_MODE_LLP_IP: %d\n",
+ test_bit(RMNET_MODE_LLP_IP, &unet->data[0]));
+ seq_printf(s, "RMNET_MODE_LLP_ETH: %d\n",
+ test_bit(RMNET_MODE_LLP_ETH, &unet->data[0]));
+ seq_printf(s, "RMNET_MODE_QOS: %d\n",
+ test_bit(RMNET_MODE_QOS, &unet->data[0]));
+ seq_printf(s, "Net MTU: %u\n", unet->net->mtu);
+ seq_printf(s, "rx_urb_size: %u\n", unet->rx_urb_size);
+ seq_printf(s, "rx skb q len: %u\n", unet->rxq.qlen);
+ seq_printf(s, "rx skb done q len: %u\n", unet->done.qlen);
+ seq_printf(s, "rx errors: %lu\n", unet->net->stats.rx_errors);
+ seq_printf(s, "rx over errors: %lu\n",
+ unet->net->stats.rx_over_errors);
+ seq_printf(s, "rx length errors: %lu\n",
+ unet->net->stats.rx_length_errors);
+ seq_printf(s, "rx packets: %lu\n", unet->net->stats.rx_packets);
+ seq_printf(s, "rx bytes: %lu\n", unet->net->stats.rx_bytes);
+ seq_printf(s, "tx skb q len: %u\n", unet->txq.qlen);
+ seq_printf(s, "tx errors: %lu\n", unet->net->stats.tx_errors);
+ seq_printf(s, "tx packets: %lu\n", unet->net->stats.tx_packets);
+ seq_printf(s, "tx bytes: %lu\n", unet->net->stats.tx_bytes);
+ seq_printf(s, "EVENT_DEV_OPEN: %d\n",
+ test_bit(EVENT_DEV_OPEN, &unet->flags));
+ seq_printf(s, "EVENT_TX_HALT: %d\n",
+ test_bit(EVENT_TX_HALT, &unet->flags));
+ seq_printf(s, "EVENT_RX_HALT: %d\n",
+ test_bit(EVENT_RX_HALT, &unet->flags));
+ seq_printf(s, "EVENT_RX_MEMORY: %d\n",
+ test_bit(EVENT_RX_MEMORY, &unet->flags));
+ seq_printf(s, "EVENT_DEV_ASLEEP: %d\n",
+ test_bit(EVENT_DEV_ASLEEP, &unet->flags));
+
+ return 0;
+}
+
+static int rmnet_usb_data_status_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, rmnet_usb_data_status, inode->i_private);
+}
+
+const struct file_operations rmnet_usb_data_fops = {
+ .open = rmnet_usb_data_status_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
+static int rmnet_usb_data_debugfs_init(struct usbnet *unet)
+{
+ struct dentry *rmnet_usb_data_dbg_root;
+ struct dentry *rmnet_usb_data_dentry;
+
+ rmnet_usb_data_dbg_root = debugfs_create_dir(unet->net->name, NULL);
+ if (!rmnet_usb_data_dbg_root || IS_ERR(rmnet_usb_data_dbg_root))
+ return -ENODEV;
+
+ rmnet_usb_data_dentry = debugfs_create_file("status",
+ S_IRUGO | S_IWUSR,
+ rmnet_usb_data_dbg_root, unet,
+ &rmnet_usb_data_fops);
+
+ if (!rmnet_usb_data_dentry || IS_ERR(rmnet_usb_data_dentry)) {
+ debugfs_remove_recursive(rmnet_usb_data_dbg_root);
+ return -ENODEV;
+ }
+
+ unet->data[2] = (unsigned long)rmnet_usb_data_dbg_root;
+
+ return 0;
+}
+
+static void rmnet_usb_data_debugfs_cleanup(struct usbnet *unet)
+{
+ struct dentry *root = (struct dentry *)unet->data[2];
+
+ if (root) {
+ debugfs_remove_recursive(root);
+ unet->data[2] = 0;
+ }
+}
+
+static int rmnet_usb_probe(struct usb_interface *iface,
+ const struct usb_device_id *prod)
+{
+ struct usbnet *unet;
+ struct driver_info *info = (struct driver_info *)prod->driver_info;
+ struct usb_device *udev;
+ int status = 0;
+
+ udev = interface_to_usbdev(iface);
+
+ if (iface->num_altsetting != 1) {
+ dev_err(&iface->dev, "%s invalid num_altsetting %u\n",
+ __func__, iface->num_altsetting);
+ return -EINVAL;
+ }
+
+ status = usbnet_probe(iface, prod);
+ if (status < 0) {
+ dev_err(&iface->dev, "usbnet_probe failed %d\n",
+ status);
+ return status;
+ }
+
+ unet = usb_get_intfdata(iface);
+
+ /*set rmnet operation mode to eth by default*/
+ set_bit(RMNET_MODE_LLP_ETH, &unet->data[0]);
+
+ /*update net device*/
+ rmnet_usb_setup(unet->net);
+
+ /*create /sys/class/net/rmnet_usbx/dbg_mask*/
+ status = device_create_file(&unet->net->dev,
+ &dev_attr_dbg_mask);
+ if (status) {
+ usbnet_disconnect(iface);
+ return status;
+ }
+
+ status = rmnet_usb_ctrl_probe(iface, unet->status, info->data,
+ &unet->data[1]);
+ if (status) {
+ device_remove_file(&unet->net->dev, &dev_attr_dbg_mask);
+ usbnet_disconnect(iface);
+ return status;
+ }
+
+ status = rmnet_usb_data_debugfs_init(unet);
+ if (status)
+ dev_dbg(&iface->dev,
+ "mode debugfs file is not available\n");
+
+ usb_enable_autosuspend(udev);
+
+ if (udev->parent && !udev->parent->parent) {
+ /* allow modem and roothub to wake up suspended system */
+ device_set_wakeup_enable(&udev->dev, 1);
+ device_set_wakeup_enable(&udev->parent->dev, 1);
+ }
+
+ return 0;
+}
+
+static void rmnet_usb_disconnect(struct usb_interface *intf)
+{
+ struct usbnet *unet = usb_get_intfdata(intf);
+ struct rmnet_ctrl_udev *dev;
+
+ device_set_wakeup_enable(&unet->udev->dev, 0);
+
+ device_remove_file(&unet->net->dev, &dev_attr_dbg_mask);
+
+ dev = (struct rmnet_ctrl_udev *)unet->data[1];
+ rmnet_usb_ctrl_disconnect(dev);
+ unet->data[0] = 0;
+ unet->data[1] = 0;
+ rmnet_usb_data_debugfs_cleanup(unet);
+ usbnet_disconnect(intf);
+}
+
+static struct driver_info rmnet_info = {
+ .description = "RmNET net device",
+ .flags = FLAG_SEND_ZLP,
+ .bind = rmnet_usb_bind,
+ .tx_fixup = rmnet_usb_tx_fixup,
+ .rx_fixup = rmnet_usb_rx_fixup,
+ .manage_power = rmnet_usb_manage_power,
+ .data = 0,
+};
+
+static struct driver_info rmnet_usb_info = {
+ .description = "RmNET net device",
+ .flags = FLAG_SEND_ZLP,
+ .bind = rmnet_usb_bind,
+ .tx_fixup = rmnet_usb_tx_fixup,
+ .rx_fixup = rmnet_usb_rx_fixup,
+ .manage_power = rmnet_usb_manage_power,
+ .data = 1,
+};
+
+static const struct usb_device_id vidpids[] = {
+ { USB_DEVICE_INTERFACE_NUMBER(0x05c6, 0x9034, 4),
+ .driver_info = (unsigned long)&rmnet_info,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x05c6, 0x9034, 5),
+ .driver_info = (unsigned long)&rmnet_info,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x05c6, 0x9034, 6),
+ .driver_info = (unsigned long)&rmnet_info,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x05c6, 0x9034, 7),
+ .driver_info = (unsigned long)&rmnet_info,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x05c6, 0x9048, 5),
+ .driver_info = (unsigned long)&rmnet_info,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x05c6, 0x9048, 6),
+ .driver_info = (unsigned long)&rmnet_info,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x05c6, 0x9048, 7),
+ .driver_info = (unsigned long)&rmnet_info,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x05c6, 0x9048, 8),
+ .driver_info = (unsigned long)&rmnet_info,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x05c6, 0x904c, 6),
+ .driver_info = (unsigned long)&rmnet_info,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x05c6, 0x904c, 7),
+ .driver_info = (unsigned long)&rmnet_info,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x05c6, 0x904c, 8),
+ .driver_info = (unsigned long)&rmnet_info,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x05c6, 0x9075, 6), /*mux over hsic mdm*/
+ .driver_info = (unsigned long)&rmnet_info,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x05c6, 0x908E, 8),
+ .driver_info = (unsigned long)&rmnet_info,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x05c6, 0x9079, 5),
+ .driver_info = (unsigned long)&rmnet_usb_info,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x05c6, 0x9079, 6),
+ .driver_info = (unsigned long)&rmnet_usb_info,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x05c6, 0x9079, 7),
+ .driver_info = (unsigned long)&rmnet_usb_info,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x05c6, 0x9079, 8),
+ .driver_info = (unsigned long)&rmnet_usb_info,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x05c6, 0x908A, 6), /*mux over hsic mdm*/
+ .driver_info = (unsigned long)&rmnet_info,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x05c6, 0x90A0, 6), /*mux over hsic mdm*/
+ .driver_info = (unsigned long)&rmnet_info,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x05c6, 0x90A4, 8), /*mux over hsic mdm*/
+ .driver_info = (unsigned long)&rmnet_info,
+ },
+
+ { }, /* Terminating entry */
+};
+
+MODULE_DEVICE_TABLE(usb, vidpids);
+
+static struct usb_driver rmnet_usb = {
+ .name = "rmnet_usb",
+ .id_table = vidpids,
+ .probe = rmnet_usb_probe,
+ .disconnect = rmnet_usb_disconnect,
+ .suspend = rmnet_usb_suspend,
+ .resume = rmnet_usb_resume,
+ .reset_resume = rmnet_usb_resume,
+ .supports_autosuspend = true,
+};
+
+static int rmnet_data_start(void)
+{
+ int retval;
+
+ if (no_rmnet_devs > MAX_RMNET_DEVS) {
+		pr_err("%s: param no_rmnet_devs(%d) exceeds maximum(%d)\n",
+				__func__, no_rmnet_devs, MAX_RMNET_DEVS);
+ return -EINVAL;
+ }
+
+ /* initialize ctrl devices */
+ retval = rmnet_usb_ctrl_init(no_rmnet_devs, no_rmnet_insts_per_dev,
+ mux_enabled);
+ if (retval) {
+		pr_err("rmnet_usb_ctrl_init failed: %d\n", retval);
+ return retval;
+ }
+
+ retval = usb_register(&rmnet_usb);
+	if (retval) {
+		pr_err("usb_register failed: %d\n", retval);
+		rmnet_usb_ctrl_exit(no_rmnet_devs, no_rmnet_insts_per_dev,
+				mux_enabled);
+		return retval;
+	}
+
+ return retval;
+}
+
+#ifdef CONFIG_QCT_9K_MODEM
+static int __init rmnet_usb_init(void)
+{
+ if (is_mdm_modem())
+ rmnet_data_start();
+ return 0;
+}
+module_init(rmnet_usb_init);
+#endif
+
+static void __exit rmnet_usb_exit(void)
+{
+#ifdef CONFIG_QCT_9K_MODEM
+	if (!is_mdm_modem())
+		return;
+#endif
+	usb_deregister(&rmnet_usb);
+	rmnet_usb_ctrl_exit(no_rmnet_devs, no_rmnet_insts_per_dev,
+			mux_enabled);
+}
+module_exit(rmnet_usb_exit);
+
+MODULE_DESCRIPTION("MSM RmNet USB device");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
index 0e746f0..c36aead 100644
--- a/drivers/net/usb/usbnet.c
+++ b/drivers/net/usb/usbnet.c
@@ -326,7 +326,9 @@
return;
}
- skb->protocol = eth_type_trans (skb, dev->net);
+ if (!skb->protocol)
+ skb->protocol = eth_type_trans(skb, dev->net);
+
dev->net->stats.rx_packets++;
dev->net->stats.rx_bytes += skb->len;
diff --git a/drivers/net/wireless/Makefile b/drivers/net/wireless/Makefile
index f2ddf0d..03abc54 100644
--- a/drivers/net/wireless/Makefile
+++ b/drivers/net/wireless/Makefile
@@ -57,10 +57,6 @@
obj-$(CONFIG_BCMDHD) += bcmdhd/
-obj-$(CONFIG_BCM43241) += bcm43241/
-
-obj-$(CONFIG_BCM43341) += bcm43341/
-
obj-$(CONFIG_BRCMFMAC) += brcm80211/
obj-$(CONFIG_BRCMSMAC) += brcm80211/
obj-$(CONFIG_SD8897) += sd8897/
diff --git a/drivers/net/wireless/bcm43241/Makefile b/drivers/net/wireless/bcm43241/Makefile
deleted file mode 100644
index 5755e34..0000000
--- a/drivers/net/wireless/bcm43241/Makefile
+++ /dev/null
@@ -1,193 +0,0 @@
-# bcmdhd
-#####################
-# SDIO Basic feature
-#####################
-
-DHDCFLAGS += -Wall -Wstrict-prototypes -Dlinux -DLINUX -DBCMDRIVER \
- -DBCMDONGLEHOST -DUNRELEASEDCHIP -DBCMDMA32 -DBCMFILEIMAGE \
- -DDHDTHREAD -DSHOW_EVENTS -DBCMDBG -DWLP2P \
- -DWIFI_ACT_FRAME -DARP_OFFLOAD_SUPPORT \
- -DKEEP_ALIVE -DCSCAN -DPKT_FILTER_SUPPORT \
- -DEMBEDDED_PLATFORM -DPNO_SUPPORT \
- -DDHD_DONOT_FORWARD_BCMEVENT_AS_NETWORK_PKT -DGET_CUSTOM_MAC_ENABLE \
- -DCUSTOMER_HW2 -DENABLE_INSMOD_NO_FW_LOAD -DQMONITOR
-
-#################
-# Common feature
-#################
-DHDCFLAGS += -DWL_CFG80211
-# Print out kernel panic point of file and line info when assertion happened
-DHDCFLAGS += -DBCMASSERT_LOG
-
-# keepalive
-DHDCFLAGS += -DCUSTOM_KEEP_ALIVE_SETTING=28000
-
-DHDCFLAGS += -DVSDB
-
-# For p2p connection issue
-DHDCFLAGS += -DWL_SCB_TIMEOUT=10
-
-# TDLS enable
-DHDCFLAGS += -DWLTDLS -DWLTDLS_AUTO_ENABLE
-# For TDLS tear down inactive time 40 sec
-DHDCFLAGS += -DCUSTOM_TDLS_IDLE_MODE_SETTING=40000
-# for TDLS RSSI HIGH for establishing TDLS link
-DHDCFLAGS += -DCUSTOM_TDLS_RSSI_THRESHOLD_HIGH=-60
-# for TDLS RSSI HIGH for tearing down TDLS link
-DHDCFLAGS += -DCUSTOM_TDLS_RSSI_THRESHOLD_LOW=-70
-
-# Roaming
-DHDCFLAGS += -DROAM_AP_ENV_DETECTION
-DHDCFLAGS += -DROAM_ENABLE -DROAM_CHANNEL_CACHE -DROAM_API
-DHDCFLAGS += -DENABLE_FW_ROAM_SUSPEND
-# Roaming trigger
-DHDCFLAGS += -DCUSTOM_ROAM_TRIGGER_SETTING=-75
-DHDCFLAGS += -DCUSTOM_ROAM_DELTA_SETTING=10
-# Set PM 2 always regardless suspend/resume
-DHDCFLAGS += -DSUPPORT_PM2_ONLY
-
-# For special PNO Event keep wake lock for 10sec
-DHDCFLAGS += -DCUSTOM_PNO_EVENT_LOCK_xTIME=10
-DHDCFLAGS += -DMIRACAST_AMPDU_SIZE=8
-
-# Early suspend
-DHDCFLAGS += -DDHD_USE_EARLYSUSPEND
-
-# For Scan result patch
-DHDCFLAGS += -DESCAN_RESULT_PATCH
-
-# For Static Buffer
-ifeq ($(CONFIG_BROADCOM_WIFI_RESERVED_MEM),y)
- DHDCFLAGS += -DCONFIG_DHD_USE_STATIC_BUF
- DHDCFLAGS += -DENHANCED_STATIC_BUF
- DHDCFLAGS += -DSTATIC_WL_PRIV_STRUCT
-endif
-ifneq ($(CONFIG_DHD_USE_SCHED_SCAN),)
-DHDCFLAGS += -DWL_SCHED_SCAN
-endif
-
-# Ioctl timeout 5000ms
-DHDCFLAGS += -DIOCTL_RESP_TIMEOUT=5000
-
-# Prevent rx thread monopolize
-DHDCFLAGS += -DWAIT_DEQUEUE
-
-# Config PM Control
-DHDCFLAGS += -DCONFIG_CONTROL_PM
-
-# idle count
-DHDCFLAGS += -DDHD_USE_IDLECOUNT
-
-# SKB TAILPAD to avoid out of boundary memory access
-DHDCFLAGS += -DDHDENABLE_TAILPAD
-
-# Wi-Fi Direct
-DHDCFLAGS += -DWL_CFG80211_VSDB_PRIORITIZE_SCAN_REQUEST
-DHDCFLAGS += -DWL_CFG80211_STA_EVENT
-DHDCFLAGS += -DWL_IFACE_COMB_NUM_CHANNELS
-DHDCFLAGS += -DWL_ENABLE_P2P_IF
-
-##########################
-# driver type
-# m: module type driver
-# y: built-in type driver
-##########################
-DRIVER_TYPE ?= $(CONFIG_BCMDHD)
-
-DHDCFLAGS += -DBCM4339_CHIP -DBCM43241_CHIP -DBCM4354_CHIP
-DHDCFLAGS += -DPROP_TXSTATUS_VSDB
-DHDCFLAGS += -DCUSTOM_DPC_PRIO_SETTING=99
-
-#########################
-# BCM43241Chip dependent feature
-#########################
-DHDCFLAGS += -DMIMO_ANT_SETTING
-DHDCFLAGS += -DCUSTOM_GLOM_SETTING=1 -DCUSTOM_SDIO_F2_BLKSIZE=128
-DHDCFLAGS += -DAMPDU_HOSTREORDER
-DHDCFLAGS += -DCUSTOM_AMPDU_BA_WSIZE=32
-ifeq ($(DRIVER_TYPE),m)
- DHDCFLAGS += -fno-pic
-endif
-
-ifneq ($(CONFIG_BCM4354),)
-# tput enhancement
- DHDCFLAGS += -DCUSTOM_GLOM_SETTING=8 -DCUSTOM_RXCHAIN=1
- DHDCFLAGS += -DUSE_DYNAMIC_F2_BLKSIZE -DDYNAMIC_F2_BLKSIZE_FOR_NONLEGACY=128
- DHDCFLAGS += -DBCMSDIOH_TXGLOM -DCUSTOM_TXGLOM=1 -DBCMSDIOH_TXGLOM_HIGHSPEED
- DHDCFLAGS += -DDHDTCPACK_SUPPRESS
- DHDCFLAGS += -DUSE_WL_TXBF
- DHDCFLAGS += -DUSE_WL_FRAMEBURST
- DHDCFLAGS += -DRXFRAME_THREAD
- DHDCFLAGS += -DREPEAT_READFRAME
- DHDCFLAGS += -DCUSTOM_AMPDU_BA_WSIZE=64
- DHDCFLAGS += -DCUSTOM_DPC_CPUCORE=0
- DHDCFLAGS += -DCUSTOM_MAX_TXGLOM_SIZE=40
- DHDCFLAGS += -DMAX_HDR_READ=128
- DHDCFLAGS += -DDHD_FIRSTREAD=128
- DHDCFLAGS += -DCUSTOM_AMPDU_MPDU=16
-
-# New Features
- DHDCFLAGS += -DWL11U
- DHDCFLAGS += -DDHD_ENABLE_LPC
- DHDCFLAGS += -DCUSTOM_PSPRETEND_THR=30
-endif
-
-ifneq ($(CONFIG_BCM4339),)
- # tput enhancement
- DHDCFLAGS += -DCUSTOM_GLOM_SETTING=8 -DCUSTOM_RXCHAIN=1
- DHDCFLAGS += -DUSE_DYNAMIC_F2_BLKSIZE -DDYNAMIC_F2_BLKSIZE_FOR_NONLEGACY=128
- DHDCFLAGS += -DBCMSDIOH_TXGLOM -DCUSTOM_TXGLOM=1 -DBCMSDIOH_TXGLOM_HIGHSPEED
- DHDCFLAGS += -DDHDTCPACK_SUPPRESS
- DHDCFLAGS += -DUSE_WL_TXBF
- DHDCFLAGS += -DUSE_WL_FRAMEBURST
- DHDCFLAGS += -DRXFRAME_THREAD
- DHDCFLAGS += -DCUSTOM_AMPDU_BA_WSIZE=64
- DHDCFLAGS += -DCUSTOM_DPC_CPUCORE=0
- DHDCFLAGS += -DCUSTOM_MAX_TXGLOM_SIZE=32
-
- # New Features
- DHDCFLAGS += -DWL11U
- DHDCFLAGS += -DDHD_ENABLE_LPC
- DHDCFLAGS += -DCUSTOM_PSPRETEND_THR=30
-endif
-
-ifneq ($(CONFIG_BCMDHD_SDIO),)
- DHDCFLAGS += -DBDC -DDHD_BCMEVENTS -DMMC_SDIO_ABORT
- DHDCFLAGS += -DBCMSDIO -DBCMLXSDMMC -DUSE_SDIOFIFO_IOVAR
- DHDCFLAGS += -DPROP_TXSTATUS
-endif
-
-ifeq ($(CONFIG_BCMDHD_HW_OOB),y)
- DHDCFLAGS += -DHW_OOB -DOOB_INTR_ONLY
-else
- DHDCFLAGS += -DSDIO_ISR_THREAD
-endif
-
-ifneq ($(CONFIG_BCMDHD_PCIE),)
- DHDCFLAGS += -DPCIE_FULL_DONGLE -DBCMPCIE -DCUSTOM_DPC_PRIO_SETTING=-1
-endif
-
-#EXTRA_LDFLAGS += --strip-debug
-
-EXTRA_CFLAGS += $(DHDCFLAGS) -DDHD_DEBUG
-EXTRA_CFLAGS += -DSRCBASE=\"$(src)\"
-EXTRA_CFLAGS += -I$(src)/include/ -I$(src)/
-KBUILD_CFLAGS += -I$(LINUXDIR)/include -I$(shell pwd) -Idrivers/net/wireless/bcmdhd -Idrivers/net/wireless/bcmdhd/include
-
-DHDOFILES := src/dhd_pno.o src/dhd_common.o src/dhd_ip.o src/dhd_custom_gpio.o \
- src/dhd_linux.o src/dhd_linux_sched.o src/dhd_cfg80211.o src/dhd_linux_wq.o src/aiutils.o src/bcmevent.o \
- src/bcmutils.o src/bcmwifi_channels.o src/hndpmu.o src/linux_osl.o src/sbutils.o src/siutils.o \
- src/wl_android.o src/wl_cfg80211.o src/wl_cfgp2p.o src/wl_cfg_btcoex.o src/wldev_common.o src/wl_linux_mon.o \
- src/dhd_linux_platdev.o src/dhd_pno.o src/dhd_linux_wq.o src/wl_cfg_btcoex.o src/dhd_qmon.o
-
-ifneq ($(CONFIG_BCMDHD_SDIO),)
- DHDOFILES += src/bcmsdh.o src/bcmsdh_linux.o src/bcmsdh_sdmmc.o src/bcmsdh_sdmmc_linux.o
- DHDOFILES += src/dhd_cdc.o src/dhd_wlfc.o src/dhd_sdio.o
-endif
-
-ifneq ($(CONFIG_BCMDHD_PCIE),)
- DHDOFILES += src/dhd_pcie.o src/dhd_pcie_linux.o src/dhd_msgbuf.o src/circularbuf.o
-endif
-
-bcm43241-objs := $(DHDOFILES)
-obj-$(DRIVER_TYPE) += bcm43241.o
diff --git a/drivers/net/wireless/bcm43241/src b/drivers/net/wireless/bcm43241/src
deleted file mode 120000
index e68b802..0000000
--- a/drivers/net/wireless/bcm43241/src
+++ /dev/null
@@ -1 +0,0 @@
-../bcmdhd
\ No newline at end of file
diff --git a/drivers/net/wireless/bcm43341/Makefile b/drivers/net/wireless/bcm43341/Makefile
deleted file mode 100644
index a7de0ec..0000000
--- a/drivers/net/wireless/bcm43341/Makefile
+++ /dev/null
@@ -1,127 +0,0 @@
-# bcmdhd
-
-DHDCFLAGS = -Wall -Wstrict-prototypes -Dlinux -DBCMDRIVER \
- -DBCMDONGLEHOST -DUNRELEASEDCHIP -DBCMDMA32 -DBCMFILEIMAGE \
- -DDHDTHREAD -DDHD_DEBUG -DSDTEST -DBDC -DTOE \
- -DDHD_BCMEVENTS -DSHOW_EVENTS -DPROP_TXSTATUS -DBCMDBG \
- -DCUSTOMER_HW2 \
- -DMMC_SDIO_ABORT -DBCMSDIO -DBCMLXSDMMC -DBCMPLATFORM_BUS -DWLP2P \
- -DWIFI_ACT_FRAME -DARP_OFFLOAD_SUPPORT \
- -DKEEP_ALIVE -DGET_CUSTOM_MAC_ENABLE -DPKT_FILTER_SUPPORT \
- -DEMBEDDED_PLATFORM -DPNO_SUPPORT -DWLTDLS \
- -DDHD_USE_IDLECOUNT -DSET_RANDOM_MAC_SOFTAP -DROAM_ENABLE -DVSDB \
- -DWL_CFG80211_VSDB_PRIORITIZE_SCAN_REQUEST \
- -DESCAN_RESULT_PATCH -DHT40_GO -DPASS_ARP_PACKET \
- -DDHD_DONOT_FORWARD_BCMEVENT_AS_NETWORK_PKT -DSUPPORT_PM2_ONLY \
- -DMIRACAST_AMPDU_SIZE=8 \
- -Idrivers/net/wireless/bcmdhd -Idrivers/net/wireless/bcmdhd/include
-
-DHDCFLAGS += -DWL_CFG80211 -DWL_CFG80211_STA_EVENT
-# for < K3.8
-#DHDCFLAGS += -DWL_ENABLE_P2P_IF -DWL_IFACE_COMB_NUM_CHANNELS
-# for >= K3.8
-DHDCFLAGS += -DWL_CFG80211_P2P_DEV_IF -DWL_IFACE_COMB_NUM_CHANNELS
-
-DHDCFLAGS += -DDEBUGFS_CFG80211
-DHDCFLAGS += -DCUSTOM_DPC_CPUCORE=0
-DHDCFLAGS += -DCUSTOM_DPC_PRIO_SETTING=99
-DHDCFLAGS += -DIOCTL_RESP_TIMEOUT=5000
-DHDCFLAGS += -DRXFRAME_THREAD
-DHDCFLAGS += -DDHDTCPACK_SUPPRESS
-
-ifeq ($(CONFIG_BCMDHD_HW_OOB),y)
- DHDCFLAGS += -DHW_OOB -DOOB_INTR_ONLY
-else
- DHDCFLAGS += -DSDIO_ISR_THREAD
-endif
-
-ifeq ($(CONFIG_BCMDHD_INSMOD_NO_FW_LOAD),y)
- DHDCFLAGS += -DENABLE_INSMOD_NO_FW_LOAD
-endif
-
-ifneq ($(CONFIG_DHD_USE_SCHED_SCAN),)
- DHDCFLAGS += -DWL_SCHED_SCAN
-endif
-
-#ifeq ($(CONFIG_BCM43241),y)
-# DHDCFLAGS += -DBCM43241_CHIP
-# DHDCFLAGS += -DUSE_SDIOFIFO_IOVAR
-# DHDCFLAGS += -DCUSTOM_SDIO_F2_BLKSIZE=128
-# DHDCFLAGS += -DCUSTOM_ROAM_TRIGGER_SETTING=-65
-# DHDCFLAGS += -DCUSTOM_ROAM_DELTA_SETTING=15
-# DHDCFLAGS += -DCUSTOM_KEEP_ALIVE_SETTING=28000
-# DHDCFLAGS += -DVSDB_BW_ALLOCATE_ENABLE
-# DHDCFLAGS += -DQMONITOR
-# DHDCFLAGS += -DP2P_DISCOVERY_WAR
-#endif
-
-ifeq ($(CONFIG_BCM43341),y)
- DHDCFLAGS += -DBCM43341_CHIP
- DHDCFLAGS += -DDHD_ENABLE_LPC
- DHDCFLAGS += -DVSDB_BW_ALLOCATE_ENABLE
- DHDCFLAGS += -DQMONITOR
- DHDCFLAGS += -DNV_BCM943341_WBFGN_MULTI_MODULE_SUPPORT
-endif
-
-#BCMDHD for BCM4335 and BCM4339 chip
-#ifneq ($(CONFIG_BCMDHD),)
-# DHDCFLAGS += -DBCM4335_CHIP
-# DHDCFLAGS += -DUSE_SDIOFIFO_IOVAR
-# DHDCFLAGS += -DCUSTOM_SDIO_F2_BLKSIZE=256
-# DHDCFLAGS += -DCUSTOM_GLOM_SETTING=8 -DCUSTOM_RXCHAIN=1
-# DHDCFLAGS += -DBCMSDIOH_TXGLOM -DCUSTOM_TXGLOM=1 -DBCMSDIOH_TXGLOM_HIGHSPEED
-# DHDCFLAGS += -DCUSTOM_MAX_TXGLOM_SIZE=32
-# DHDCFLAGS += -DUSE_WL_TXBF
-# DHDCFLAGS += -DUSE_WL_FRAMEBURST
-# DHDCFLAGS += -DDHD_ENABLE_LPC
-# DHDCFLAGS += -DVSDB_BW_ALLOCATE_ENABLE
-# DHDCFLAGS += -DQMONITOR
-# DHDCFLAGS += -DENABLE_4335BT_WAR
-#endif
-
-#ifeq ($(CONFIG_BCM4339),y)
-# DHDCFLAGS += -DBCM4339_CHIP
-# DHDCFLAGS += -DUSE_SDIOFIFO_IOVAR
-# DHDCFLAGS += -DCUSTOM_SDIO_F2_BLKSIZE=256
-# DHDCFLAGS += -DCUSTOM_GLOM_SETTING=8 -DCUSTOM_RXCHAIN=1
-# DHDCFLAGS += -DBCMSDIOH_TXGLOM -DCUSTOM_TXGLOM=1 -DBCMSDIOH_TXGLOM_HIGHSPEED
-# DHDCFLAGS += -DCUSTOM_MAX_TXGLOM_SIZE=32
-# DHDCFLAGS += -DUSE_WL_TXBF
-# DHDCFLAGS += -DUSE_WL_FRAMEBURST
-# DHDCFLAGS += -DDHD_ENABLE_LPC
-# DHDCFLAGS += -DVSDB_BW_ALLOCATE_ENABLE
-# DHDCFLAGS += -DQMONITOR
-#endif
-
-#ifeq ($(CONFIG_BCM4350),y)
-# DHDCFLAGS += -DBCM4350_CHIP
-# DHDCFLAGS += -DUSE_SDIOFIFO_IOVAR
-# DHDCFLAGS += -DCUSTOM_SDIO_F2_BLKSIZE=256
-# DHDCFLAGS += -DCUSTOM_GLOM_SETTING=8 -DCUSTOM_RXCHAIN=1
-# DHDCFLAGS += -DBCMSDIOH_TXGLOM -DCUSTOM_TXGLOM=1 -DBCMSDIOH_TXGLOM_HIGHSPEED
-# DHDCFLAGS += -DCUSTOM_MAX_TXGLOM_SIZE=32
-# DHDCFLAGS += -DUSE_WL_TXBF
-# DHDCFLAGS += -DUSE_WL_FRAMEBURST
-# DHDCFLAGS += -DDHD_ENABLE_LPC
-# DHDCFLAGS += -DVSDB_BW_ALLOCATE_ENABLE
-# DHDCFLAGS += -DQMONITOR
-#endif
-
-DHDOFILES = src/bcmsdh.o src/bcmsdh_linux.o src/bcmsdh_sdmmc.o src/bcmsdh_sdmmc_linux.o \
- src/dhd_cdc.o src/dhd_cfg80211.o src/dhd_common.o src/dhd_custom_gpio.o src/dhd_ip.o \
- src/dhd_linux.o src/dhd_linux_sched.o src/dhd_pno.o src/dhd_sdio.o src/dhd_wlfc.o \
- src/aiutils.o src/bcmevent.o src/bcmutils.o src/bcmwifi_channels.o src/hndpmu.o \
- src/linux_osl.o src/sbutils.o src/siutils.o src/wldev_common.o src/wl_android.o \
- src/wl_cfg80211.o src/wl_cfgp2p.o src/wl_linux_mon.o
-
-ifneq ($(findstring QMONITOR, $(DHDCFLAGS)),)
- DHDOFILES += src/dhd_qmon.o
-endif
-
-obj-$(CONFIG_BCMDHD) += bcm43341.o
-bcm43341-objs += $(DHDOFILES)
-
-EXTRA_CFLAGS = $(DHDCFLAGS)
-ifeq ($(CONFIG_BCMDHD),m)
- EXTRA_LDFLAGS += --strip-debug
-endif
diff --git a/drivers/net/wireless/bcm43341/src b/drivers/net/wireless/bcm43341/src
deleted file mode 120000
index e68b802..0000000
--- a/drivers/net/wireless/bcm43341/src
+++ /dev/null
@@ -1 +0,0 @@
-../bcmdhd
\ No newline at end of file
diff --git a/drivers/net/wireless/bcmdhd/Kconfig b/drivers/net/wireless/bcmdhd/Kconfig
index 2d6ac7e..b19c557 100644
--- a/drivers/net/wireless/bcmdhd/Kconfig
+++ b/drivers/net/wireless/bcmdhd/Kconfig
@@ -10,23 +10,25 @@
config BCMDHD_SDIO
bool "SDIO bus interface support"
depends on BCMDHD && MMC
- default y
config BCMDHD_PCIE
bool "PCIe bus interface support"
depends on BCMDHD && PCI && !BCMDHD_SDIO
-config BCM43241
- tristate "Broadcom 43241 wireless cards support"
- depends on WLAN
- ---help---
- This module adds support for wireless adapters based on
- Broadcom 43241 chipset.
-
config BCM4354
tristate "BCM4354 support"
depends on BCMDHD
+config BCM4356
+ tristate "BCM4356 support"
+ depends on BCMDHD
+ default n
+
+config BCM4358
+ tristate "BCM4358 support"
+ depends on BCMDHD
+ default n
+
config BCMDHD_FW_PATH
depends on BCMDHD
string "Firmware path"
@@ -41,13 +43,6 @@
---help---
Path to the calibration file.
-config BCMDHD_HW_OOB
- bool "Use out of band interrupt"
- depends on BCMDHD
- default y
- ---help---
- Use out of band interrupt for card interrupt and wake on wireless.
-
config BCMDHD_WEXT
bool "Enable WEXT support"
depends on BCMDHD && CFG80211 = n
@@ -69,3 +64,15 @@
default n
---help---
Use CFG80211 sched scan
+
+config DHD_SET_RANDOM_MAC_VAL
+ hex "Vendor OUI"
+ depends on BCMDHD
+ default 0x001A11
+ ---help---
+ Set vendor OUI for SoftAP
+
+config DHD_OF_SUPPORT
+	bool "Use in-driver platform device"
+ depends on BCMDHD
+ default n
diff --git a/drivers/net/wireless/bcmdhd/Makefile b/drivers/net/wireless/bcmdhd/Makefile
index 98afe0f..d01b227 100644
--- a/drivers/net/wireless/bcmdhd/Makefile
+++ b/drivers/net/wireless/bcmdhd/Makefile
@@ -8,9 +8,9 @@
-DDHDTHREAD -DSHOW_EVENTS -DBCMDBG -DWLP2P \
-DWIFI_ACT_FRAME -DARP_OFFLOAD_SUPPORT \
-DKEEP_ALIVE -DCSCAN -DPKT_FILTER_SUPPORT \
- -DEMBEDDED_PLATFORM -DPNO_SUPPORT \
- -DDHD_DONOT_FORWARD_BCMEVENT_AS_NETWORK_PKT -DGET_CUSTOM_MAC_ENABLE \
- -DCUSTOMER_HW2 -DENABLE_INSMOD_NO_FW_LOAD -DQMONITOR -DTOE
+ -DEMBEDDED_PLATFORM -DPNO_SUPPORT -DSHOW_LOGTRACE \
+ -DDHD_DONOT_FORWARD_BCMEVENT_AS_NETWORK_PKT \
+ -DCUSTOMER_HW2 -DGET_CUSTOM_MAC_ENABLE
#################
# Common feature
@@ -27,6 +27,7 @@
# For p2p connection issue
DHDCFLAGS += -DWL_SCB_TIMEOUT=10
+
# TDLS enable
DHDCFLAGS += -DWLTDLS -DWLTDLS_AUTO_ENABLE
# For TDLS tear down inactive time 40 sec
@@ -50,6 +51,16 @@
DHDCFLAGS += -DCUSTOM_PNO_EVENT_LOCK_xTIME=10
DHDCFLAGS += -DMIRACAST_AMPDU_SIZE=8
+#PNO trigger
+#DHDCFLAGS += -DPNO_MIN_RSSI_TRIGGER=-75
+
+#Gscan
+DHDCFLAGS += -DGSCAN_SUPPORT
+DHDCFLAGS += -DWL_VENDOR_EXT_SUPPORT
+#Link Statistics
+DHDCFLAGS += -DLINKSTAT_SUPPORT
+
+
# Early suspend
DHDCFLAGS += -DDHD_USE_EARLYSUSPEND
@@ -57,15 +68,19 @@
DHDCFLAGS += -DESCAN_RESULT_PATCH
# For Static Buffer
-ifeq ($(CONFIG_BROADCOM_WIFI_RESERVED_MEM),y)
- DHDCFLAGS += -DCONFIG_DHD_USE_STATIC_BUF
+ifeq ($(CONFIG_DHD_USE_STATIC_BUF),y)
DHDCFLAGS += -DENHANCED_STATIC_BUF
DHDCFLAGS += -DSTATIC_WL_PRIV_STRUCT
endif
+
ifneq ($(CONFIG_DHD_USE_SCHED_SCAN),)
DHDCFLAGS += -DWL_SCHED_SCAN
endif
+ifeq ($(CONFIG_DHD_OF_SUPPORT),y)
+ DHDCFLAGS += -DDHD_OF_SUPPORT
+endif
+
# Ioctl timeout 5000ms
DHDCFLAGS += -DIOCTL_RESP_TIMEOUT=5000
@@ -81,60 +96,177 @@
# SKB TAILPAD to avoid out of boundary memory access
DHDCFLAGS += -DDHDENABLE_TAILPAD
+# DTIM skip interval
+DHDCFLAGS += -DCUSTOM_SUSPEND_BCN_LI_DTIM=2 -DMAX_DTIM_ALLOWED_INTERVAL=600
+
# Wi-Fi Direct
DHDCFLAGS += -DWL_CFG80211_VSDB_PRIORITIZE_SCAN_REQUEST
DHDCFLAGS += -DWL_CFG80211_STA_EVENT
DHDCFLAGS += -DWL_IFACE_COMB_NUM_CHANNELS
DHDCFLAGS += -DWL_ENABLE_P2P_IF
+DHDCFLAGS += -DWL_CFG80211_ACL
+DHDCFLAGS += -DDISABLE_11H_SOFTAP
+DHDCFLAGS += -DSET_RANDOM_MAC_SOFTAP
+DHDCFLAGS += -DCUSTOM_FORCE_NODFS_FLAG
+DHDCFLAGS += -DCUSTOM_SET_SHORT_DWELL_TIME
+
##########################
# driver type
# m: module type driver
# y: built-in type driver
##########################
-DRIVER_TYPE ?= $(CONFIG_BCMDHD)
-
-DHDCFLAGS += -DBCM4339_CHIP -DBCM43241_CHIP -DBCM4354_CHIP
-DHDCFLAGS += -DPROP_TXSTATUS_VSDB
-DHDCFLAGS += -DCUSTOM_DPC_PRIO_SETTING=99
-DHDCFLAGS += -DRXFRAME_THREAD
-DHDCFLAGS += -DDHDTCPACK_SUPPRESS
+DRIVER_TYPE ?= y
#########################
-# BCM43241Chip dependent feature
+# Chip dependent feature
#########################
-DHDCFLAGS += -DMIMO_ANT_SETTING
-DHDCFLAGS += -DCUSTOM_SDIO_F2_BLKSIZE=128
-DHDCFLAGS += -DAMPDU_HOSTREORDER
-DHDCFLAGS += -DCUSTOM_AMPDU_BA_WSIZE=32
-ifeq ($(DRIVER_TYPE),m)
- DHDCFLAGS += -fno-pic
-endif
-ifneq ($(CONFIG_BCM4354),)
+ifneq ($(CONFIG_BCM4358),)
+ DHDCFLAGS += -DUSE_WL_TXBF
+ DHDCFLAGS += -DUSE_WL_FRAMEBURST
+ DHDCFLAGS += -DCUSTOM_DPC_CPUCORE=0
+ DHDCFLAGS += -DMAX_AP_CLIENT_CNT=10
+ DHDCFLAGS += -DMAX_GO_CLIENT_CNT=5
+
+# New Features
+ DHDCFLAGS += -DWL11U
+ DHDCFLAGS += -DMFP
+ DHDCFLAGS += -DDHD_ENABLE_LPC
+ DHDCFLAGS += -DCUSTOM_COUNTRY_CODE
+ DHDCFLAGS += -DRTT_SUPPORT -DRTT_DEBUG
+
+# DHDCFLAGS += -DSAR_SUPPORT
+
+# debug info
+# DHDCFLAGS += -DDHD_WAKE_STATUS
+
+ifneq ($(CONFIG_BCMDHD_SDIO),)
+ DHDCFLAGS += -DBDC -DOOB_INTR_ONLY -DHW_OOB -DDHD_BCMEVENTS -DMMC_SDIO_ABORT
+ DHDCFLAGS += -DBCMSDIO -DBCMLXSDMMC -DUSE_SDIOFIFO_IOVAR
+ DHDCFLAGS += -DPROP_TXSTATUS
+ DHDCFLAGS += -DCUSTOM_AMPDU_MPDU=16
+ DHDCFLAGS += -DCUSTOM_AMPDU_BA_WSIZE=64
# tput enhancement
DHDCFLAGS += -DCUSTOM_GLOM_SETTING=8 -DCUSTOM_RXCHAIN=1
DHDCFLAGS += -DUSE_DYNAMIC_F2_BLKSIZE -DDYNAMIC_F2_BLKSIZE_FOR_NONLEGACY=128
DHDCFLAGS += -DBCMSDIOH_TXGLOM -DCUSTOM_TXGLOM=1 -DBCMSDIOH_TXGLOM_HIGHSPEED
DHDCFLAGS += -DDHDTCPACK_SUPPRESS
- DHDCFLAGS += -DUSE_WL_TXBF
- DHDCFLAGS += -DUSE_WL_FRAMEBURST
DHDCFLAGS += -DRXFRAME_THREAD
DHDCFLAGS += -DREPEAT_READFRAME
- DHDCFLAGS += -DCUSTOM_AMPDU_BA_WSIZE=64
- DHDCFLAGS += -DCUSTOM_DPC_CPUCORE=0
DHDCFLAGS += -DCUSTOM_MAX_TXGLOM_SIZE=40
DHDCFLAGS += -DMAX_HDR_READ=128
DHDCFLAGS += -DDHD_FIRSTREAD=128
- DHDCFLAGS += -DCUSTOM_AMPDU_MPDU=16
+
+# bcn_timeout
+ DHDCFLAGS += -DCUSTOM_BCN_TIMEOUT_SETTING=5
+
+ DHDCFLAGS += -DWLFC_STATE_PREALLOC
+endif
+
+ifneq ($(CONFIG_BCMDHD_PCIE),)
+ DHDCFLAGS += -DPCIE_FULL_DONGLE -DBCMPCIE -DCUSTOM_DPC_PRIO_SETTING=-1
+# tput enhancement
+ DHDCFLAGS += -DCUSTOM_AMPDU_BA_WSIZE=64
+ DHDCFLAGS += -DCUSTOM_AMPDU_MPDU=32
+ DHDCFLAGS += -DCUSTOM_AMPDU_RELEASE=16
+ DHDCFLAGS += -DPROP_TXSTATUS_VSDB
+# Disable watchdog thread
+ DHDCFLAGS += -DCUSTOM_DHD_WATCHDOG_MS=0
+
+ DHDCFLAGS += -DMAX_CNTL_TX_TIMEOUT=1
+ifneq ($(CONFIG_ARCH_MSM),)
+ DHDCFLAGS += -DMSM_PCIE_LINKDOWN_RECOVERY
+endif
+ifeq ($(CONFIG_DHD_USE_STATIC_BUF),y)
+ DHDCFLAGS += -DDHD_USE_STATIC_IOCTLBUF
+endif
+
+ DHDCFLAGS += -DDONGLE_ENABLE_ISOLATION
+endif
+
+# Print 802.1X packets
+ DHDCFLAGS += -DDHD_8021X_DUMP
+# Print DHCP packets
+# DHDCFLAGS += -DDHD_DHCP_DUMP
+endif
+
+ifneq ($(filter y, $(CONFIG_BCM4354) $(CONFIG_BCM4356)),)
+ DHDCFLAGS += -DUSE_WL_TXBF
+ DHDCFLAGS += -DUSE_WL_FRAMEBURST
+ DHDCFLAGS += -DCUSTOM_DPC_CPUCORE=0
+ DHDCFLAGS += -DMAX_AP_CLIENT_CNT=10
+ DHDCFLAGS += -DMAX_GO_CLIENT_CNT=5
# New Features
DHDCFLAGS += -DWL11U
+ DHDCFLAGS += -DMFP
DHDCFLAGS += -DDHD_ENABLE_LPC
- DHDCFLAGS += -DCUSTOM_PSPRETEND_THR=30
+ DHDCFLAGS += -DCUSTOM_COUNTRY_CODE
+ DHDCFLAGS += -DSAR_SUPPORT
+
+# debug info
+ DHDCFLAGS += -DDHD_WAKE_STATUS -DDHD_WAKE_RX_STATUS
+ DHDCFLAGS += -DDHD_WAKE_EVENT_STATUS
+ifneq ($(CONFIG_BCM4356),)
+ DHDCFLAGS += -DRTT_SUPPORT -DRTT_DEBUG
+endif
+
+ifneq ($(CONFIG_BCMDHD_SDIO),)
+ DHDCFLAGS += -DBDC -DOOB_INTR_ONLY -DHW_OOB -DDHD_BCMEVENTS -DMMC_SDIO_ABORT
+ DHDCFLAGS += -DBCMSDIO -DBCMLXSDMMC -DUSE_SDIOFIFO_IOVAR
+ DHDCFLAGS += -DPROP_TXSTATUS
+ DHDCFLAGS += -DCUSTOM_AMPDU_MPDU=16
+ DHDCFLAGS += -DCUSTOM_AMPDU_BA_WSIZE=64
+# tput enhancement
+ DHDCFLAGS += -DCUSTOM_GLOM_SETTING=8 -DCUSTOM_RXCHAIN=1
+ DHDCFLAGS += -DUSE_DYNAMIC_F2_BLKSIZE -DDYNAMIC_F2_BLKSIZE_FOR_NONLEGACY=128
+ DHDCFLAGS += -DBCMSDIOH_TXGLOM -DCUSTOM_TXGLOM=1 -DBCMSDIOH_TXGLOM_HIGHSPEED
+ DHDCFLAGS += -DDHDTCPACK_SUPPRESS
+ DHDCFLAGS += -DRXFRAME_THREAD
+ DHDCFLAGS += -DREPEAT_READFRAME
+ DHDCFLAGS += -DCUSTOM_MAX_TXGLOM_SIZE=40
+ DHDCFLAGS += -DMAX_HDR_READ=128
+ DHDCFLAGS += -DDHD_FIRSTREAD=128
+
+# bcn_timeout
+ DHDCFLAGS += -DCUSTOM_BCN_TIMEOUT_SETTING=5
+
+ DHDCFLAGS += -DWLFC_STATE_PREALLOC
+endif
+
+ifneq ($(CONFIG_BCMDHD_PCIE),)
+ DHDCFLAGS += -DPCIE_FULL_DONGLE -DBCMPCIE -DCUSTOM_DPC_PRIO_SETTING=-1
+# tput enhancement
+ DHDCFLAGS += -DCUSTOM_AMPDU_BA_WSIZE=64
+ DHDCFLAGS += -DCUSTOM_AMPDU_MPDU=32
+ DHDCFLAGS += -DCUSTOM_AMPDU_RELEASE=16
+ DHDCFLAGS += -DPROP_TXSTATUS_VSDB
+# Disable watchdog thread
+ DHDCFLAGS += -DCUSTOM_DHD_WATCHDOG_MS=0
+
+ DHDCFLAGS += -DMAX_CNTL_TX_TIMEOUT=1
+ifneq ($(CONFIG_ARCH_MSM),)
+ DHDCFLAGS += -DMSM_PCIE_LINKDOWN_RECOVERY
+endif
+ifeq ($(CONFIG_DHD_USE_STATIC_BUF),y)
+ DHDCFLAGS += -DDHD_USE_STATIC_IOCTLBUF
+endif
+
+ DHDCFLAGS += -DDONGLE_ENABLE_ISOLATION
+endif
+
+# Print 802.1X packets
+ DHDCFLAGS += -DDHD_8021X_DUMP
+# prioritize 802.1x packet
+ DHDCFLAGS += -DEAPOL_PKT_PRIO
+# Print DHCP packets
+# DHDCFLAGS += -DDHD_DHCP_DUMP
endif
ifneq ($(CONFIG_BCM4339),)
+ DHDCFLAGS += -DBCM4339_CHIP -DHW_OOB
+
# tput enhancement
DHDCFLAGS += -DCUSTOM_GLOM_SETTING=8 -DCUSTOM_RXCHAIN=1
DHDCFLAGS += -DUSE_DYNAMIC_F2_BLKSIZE -DDYNAMIC_F2_BLKSIZE_FOR_NONLEGACY=128
@@ -145,6 +277,7 @@
DHDCFLAGS += -DRXFRAME_THREAD
DHDCFLAGS += -DCUSTOM_AMPDU_BA_WSIZE=64
DHDCFLAGS += -DCUSTOM_DPC_CPUCORE=0
+ DHDCFLAGS += -DPROP_TXSTATUS_VSDB
DHDCFLAGS += -DCUSTOM_MAX_TXGLOM_SIZE=32
# New Features
@@ -153,34 +286,28 @@
DHDCFLAGS += -DCUSTOM_PSPRETEND_THR=30
endif
-ifneq ($(CONFIG_BCMDHD_SDIO),)
- DHDCFLAGS += -DBDC -DDHD_BCMEVENTS -DMMC_SDIO_ABORT
- DHDCFLAGS += -DBCMSDIO -DBCMLXSDMMC -DUSE_SDIOFIFO_IOVAR
- DHDCFLAGS += -DPROP_TXSTATUS
-endif
-
-ifeq ($(CONFIG_BCMDHD_HW_OOB),y)
- DHDCFLAGS += -DHW_OOB -DOOB_INTR_ONLY
-else
- DHDCFLAGS += -DSDIO_ISR_THREAD
-endif
-
-ifneq ($(CONFIG_BCMDHD_PCIE),)
- DHDCFLAGS += -DPCIE_FULL_DONGLE -DBCMPCIE -DCUSTOM_DPC_PRIO_SETTING=-1
-endif
#EXTRA_LDFLAGS += --strip-debug
+ifeq ($(DRIVER_TYPE),y)
+ DHDCFLAGS += -DENABLE_INSMOD_NO_FW_LOAD
+ DHDCFLAGS += -DUSE_LATE_INITCALL_SYNC
+endif
+
EXTRA_CFLAGS += $(DHDCFLAGS) -DDHD_DEBUG
EXTRA_CFLAGS += -DSRCBASE=\"$(src)\"
EXTRA_CFLAGS += -I$(src)/include/ -I$(src)/
-KBUILD_CFLAGS += -I$(LINUXDIR)/include -I$(shell pwd) -Idrivers/net/wireless/bcmdhd -Idrivers/net/wireless/bcmdhd/include
+KBUILD_CFLAGS += -I$(LINUXDIR)/include -I$(shell pwd)
DHDOFILES := dhd_pno.o dhd_common.o dhd_ip.o dhd_custom_gpio.o \
dhd_linux.o dhd_linux_sched.o dhd_cfg80211.o dhd_linux_wq.o aiutils.o bcmevent.o \
bcmutils.o bcmwifi_channels.o hndpmu.o linux_osl.o sbutils.o siutils.o \
- wl_android.o wl_cfg80211.o wl_cfgp2p.o wl_cfg_btcoex.o wldev_common.o wl_linux_mon.o \
- dhd_linux_platdev.o dhd_pno.o dhd_linux_wq.o wl_cfg_btcoex.o dhd_qmon.o
+ wl_android.o wl_roam.o wl_cfg80211.o wl_cfgp2p.o wl_cfg_btcoex.o wldev_common.o wl_linux_mon.o \
+ dhd_linux_platdev.o dhd_pno.o dhd_rtt.o dhd_linux_wq.o wl_cfg_btcoex.o \
+ hnd_pktq.o hnd_pktpool.o wl_cfgvendor.o bcmxtlv.o dhd_debug.o dhd_debug_linux.o
+ifneq ($(CONFIG_DHD_OF_SUPPORT),)
+ DHDOFILES += dhd_custom_platdev.o
+endif
ifneq ($(CONFIG_BCMDHD_SDIO),)
DHDOFILES += bcmsdh.o bcmsdh_linux.o bcmsdh_sdmmc.o bcmsdh_sdmmc_linux.o
@@ -188,8 +315,21 @@
endif
ifneq ($(CONFIG_BCMDHD_PCIE),)
- DHDOFILES += dhd_pcie.o dhd_pcie_linux.o dhd_msgbuf.o circularbuf.o
+ DHDOFILES += dhd_pcie.o dhd_pcie_linux.o dhd_msgbuf.o dhd_flowring.o
+ DHDOFILES += pcie_core.o
endif
bcmdhd-objs := $(DHDOFILES)
obj-$(DRIVER_TYPE) += bcmdhd.o
+
+all:
+ @echo "$(MAKE) --no-print-directory -C $(KDIR) SUBDIRS=$(CURDIR) modules"
+ @$(MAKE) --no-print-directory -C $(KDIR) SUBDIRS=$(CURDIR) modules
+
+clean:
+ rm -rf *.o *.ko *.mod.c *~ .*.cmd *.o.cmd .*.o.cmd \
+ Module.symvers modules.order .tmp_versions modules.builtin
+
+install:
+ @$(MAKE) --no-print-directory -C $(KDIR) \
+ SUBDIRS=$(CURDIR) modules_install
diff --git a/drivers/net/wireless/bcmdhd/aiutils.c b/drivers/net/wireless/bcmdhd/aiutils.c
old mode 100755
new mode 100644
index 611f9a7..9095894
--- a/drivers/net/wireless/bcmdhd/aiutils.c
+++ b/drivers/net/wireless/bcmdhd/aiutils.c
@@ -22,7 +22,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: aiutils.c 432226 2013-10-26 04:34:36Z $
+ * $Id: aiutils.c 467150 2014-04-02 17:30:43Z $
*/
#include <bcm_cfg.h>
#include <typedefs.h>
@@ -39,6 +39,7 @@
#define BCM47162_DMP() (0)
#define BCM5357_DMP() (0)
#define BCM4707_DMP() (0)
+#define PMU_DMP() (0)
#define remap_coreid(sih, coreid) (coreid)
#define remap_corerev(sih, corerev) (corerev)
@@ -211,7 +212,8 @@
sii->oob_router = addrl;
}
}
- if (cid != GMAC_COMMON_4706_CORE_ID && cid != NS_CCB_CORE_ID)
+ if (cid != GMAC_COMMON_4706_CORE_ID && cid != NS_CCB_CORE_ID &&
+ cid != PMU_CORE_ID && cid != GCI_CORE_ID)
continue;
}
@@ -339,6 +341,9 @@
return;
}
+#define AI_SETCOREIDX_MAPSIZE(coreid) \
+ (((coreid) == NS_CCB_CORE_ID) ? 15 * SI_CORE_SIZE : SI_CORE_SIZE)
+
/* This function changes the logical "focus" to the indicated core.
* Return the current core's virtual address.
*/
@@ -366,7 +371,8 @@
case SI_BUS:
/* map new one */
if (!cores_info->regs[coreidx]) {
- cores_info->regs[coreidx] = REG_MAP(addr, SI_CORE_SIZE);
+ cores_info->regs[coreidx] = REG_MAP(addr,
+ AI_SETCOREIDX_MAPSIZE(cores_info->coreid[coreidx]));
ASSERT(GOODREGS(cores_info->regs[coreidx]));
}
sii->curmap = regs = cores_info->regs[coreidx];
@@ -564,7 +570,17 @@
__FUNCTION__));
return sii->curidx;
}
+
+#ifdef REROUTE_OOBINT
+ if (PMU_DMP()) {
+ SI_ERROR(("%s: Attempting to read PMU DMP registers\n",
+ __FUNCTION__));
+ return PMU_OOB_BIT;
+ }
+#endif /* REROUTE_OOBINT */
+
ai = sii->curwrap;
+ ASSERT(ai != NULL);
return (R_REG(sii->osh, &ai->oobselouta30) & 0x1f);
}
@@ -588,6 +604,14 @@
__FUNCTION__));
return sii->curidx;
}
+#ifdef REROUTE_OOBINT
+ if (PMU_DMP()) {
+ SI_ERROR(("%s: Attempting to read PMU DMP registers\n",
+ __FUNCTION__));
+ return PMU_OOB_BIT;
+ }
+#endif /* REROUTE_OOBINT */
+
ai = sii->curwrap;
return ((R_REG(sii->osh, &ai->oobselouta30) >> AI_OOBSEL_1_SHIFT) & AI_OOBSEL_MASK);
@@ -922,6 +946,11 @@
__FUNCTION__));
return;
}
+ if (PMU_DMP()) {
+ SI_ERROR(("%s: Accessing PMU DMP register (ioctrl)\n",
+ __FUNCTION__));
+ return;
+ }
ASSERT(GOODREGS(sii->curwrap));
ai = sii->curwrap;
@@ -957,6 +986,11 @@
return 0;
}
+ if (PMU_DMP()) {
+ SI_ERROR(("%s: Accessing PMU DMP register (ioctrl)\n",
+ __FUNCTION__));
+ return 0;
+ }
ASSERT(GOODREGS(sii->curwrap));
ai = sii->curwrap;
@@ -992,6 +1026,11 @@
__FUNCTION__));
return 0;
}
+ if (PMU_DMP()) {
+ SI_ERROR(("%s: Accessing PMU DMP register (ioctrl)\n",
+ __FUNCTION__));
+ return 0;
+ }
ASSERT(GOODREGS(sii->curwrap));
ai = sii->curwrap;
@@ -1006,3 +1045,71 @@
return R_REG(sii->osh, &ai->iostatus);
}
+
+#if defined(BCMDBG_PHYDUMP)
+/* print interesting aidmp registers */
+void
+ai_dumpregs(si_t *sih, struct bcmstrbuf *b)
+{
+ si_info_t *sii = SI_INFO(sih);
+ si_cores_info_t *cores_info = (si_cores_info_t *)sii->cores_info;
+ osl_t *osh;
+ aidmp_t *ai;
+ uint i;
+
+ osh = sii->osh;
+
+ for (i = 0; i < sii->numcores; i++) {
+ si_setcoreidx(&sii->pub, i);
+ ai = sii->curwrap;
+
+ bcm_bprintf(b, "core 0x%x: \n", cores_info->coreid[i]);
+ if (BCM47162_DMP()) {
+ bcm_bprintf(b, "Skipping mips74k in 47162a0\n");
+ continue;
+ }
+ if (BCM5357_DMP()) {
+ bcm_bprintf(b, "Skipping usb20h in 5357\n");
+ continue;
+ }
+ if (BCM4707_DMP()) {
+ bcm_bprintf(b, "Skipping chipcommonb in 4707\n");
+ continue;
+ }
+
+ if (PMU_DMP()) {
+ bcm_bprintf(b, "Skipping pmu core\n");
+ continue;
+ }
+
+ bcm_bprintf(b, "ioctrlset 0x%x ioctrlclear 0x%x ioctrl 0x%x iostatus 0x%x "
+ "ioctrlwidth 0x%x iostatuswidth 0x%x\n"
+ "resetctrl 0x%x resetstatus 0x%x resetreadid 0x%x resetwriteid 0x%x\n"
+ "errlogctrl 0x%x errlogdone 0x%x errlogstatus 0x%x "
+ "errlogaddrlo 0x%x errlogaddrhi 0x%x\n"
+ "errlogid 0x%x errloguser 0x%x errlogflags 0x%x\n"
+ "intstatus 0x%x config 0x%x itcr 0x%x\n",
+ R_REG(osh, &ai->ioctrlset),
+ R_REG(osh, &ai->ioctrlclear),
+ R_REG(osh, &ai->ioctrl),
+ R_REG(osh, &ai->iostatus),
+ R_REG(osh, &ai->ioctrlwidth),
+ R_REG(osh, &ai->iostatuswidth),
+ R_REG(osh, &ai->resetctrl),
+ R_REG(osh, &ai->resetstatus),
+ R_REG(osh, &ai->resetreadid),
+ R_REG(osh, &ai->resetwriteid),
+ R_REG(osh, &ai->errlogctrl),
+ R_REG(osh, &ai->errlogdone),
+ R_REG(osh, &ai->errlogstatus),
+ R_REG(osh, &ai->errlogaddrlo),
+ R_REG(osh, &ai->errlogaddrhi),
+ R_REG(osh, &ai->errlogid),
+ R_REG(osh, &ai->errloguser),
+ R_REG(osh, &ai->errlogflags),
+ R_REG(osh, &ai->intstatus),
+ R_REG(osh, &ai->config),
+ R_REG(osh, &ai->itcr));
+ }
+}
+#endif
diff --git a/drivers/net/wireless/bcmdhd/bcmevent.c b/drivers/net/wireless/bcmdhd/bcmevent.c
old mode 100755
new mode 100644
index 93beccb..c7a902d
--- a/drivers/net/wireless/bcmdhd/bcmevent.c
+++ b/drivers/net/wireless/bcmdhd/bcmevent.c
@@ -2,13 +2,13 @@
* bcmevent read-only data shared by kernel or app layers
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -16,11 +16,11 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
- * $Id: bcmevent.c 440870 2013-12-04 05:23:45Z $
+ * $Id: bcmevent.c 470794 2014-04-16 12:01:41Z $
*/
#include <typedefs.h>
@@ -29,10 +29,17 @@
#include <proto/bcmeth.h>
#include <proto/bcmevent.h>
+
+/* Table of event name strings for UIs and debugging dumps */
+typedef struct {
+ uint event;
+ const char *name;
+} bcmevent_name_str_t;
+
/* Use the actual name for event tracing */
#define BCMEVENT_NAME(_event) {(_event), #_event}
-const bcmevent_name_t bcmevent_names[] = {
+static const bcmevent_name_str_t bcmevent_names[] = {
BCMEVENT_NAME(WLC_E_SET_SSID),
BCMEVENT_NAME(WLC_E_JOIN),
BCMEVENT_NAME(WLC_E_START),
@@ -68,6 +75,9 @@
BCMEVENT_NAME(WLC_E_ROAM_PREP),
BCMEVENT_NAME(WLC_E_PFN_NET_FOUND),
BCMEVENT_NAME(WLC_E_PFN_NET_LOST),
+ BCMEVENT_NAME(WLC_E_JOIN_START),
+ BCMEVENT_NAME(WLC_E_ROAM_START),
+ BCMEVENT_NAME(WLC_E_ASSOC_START),
#if defined(IBSS_PEER_DISCOVERY_EVENT)
BCMEVENT_NAME(WLC_E_IBSS_ASSOC),
#endif /* defined(IBSS_PEER_DISCOVERY_EVENT) */
@@ -119,7 +129,6 @@
#endif
BCMEVENT_NAME(WLC_E_ASSOC_REQ_IE),
BCMEVENT_NAME(WLC_E_ASSOC_RESP_IE),
- BCMEVENT_NAME(WLC_E_ACTION_FRAME_RX_NDIS),
BCMEVENT_NAME(WLC_E_BEACON_FRAME_RX),
#ifdef WLTDLS
BCMEVENT_NAME(WLC_E_TDLS_PEER_EVENT),
@@ -136,15 +145,50 @@
#ifdef WLWNM
BCMEVENT_NAME(WLC_E_WNM_STA_SLEEP),
#endif /* WLWNM */
-#if defined(WL_PROXDETECT)
BCMEVENT_NAME(WLC_E_PROXD),
-#endif
BCMEVENT_NAME(WLC_E_CCA_CHAN_QUAL),
BCMEVENT_NAME(WLC_E_BSSID),
#ifdef PROP_TXSTATUS
BCMEVENT_NAME(WLC_E_BCMC_CREDIT_SUPPORT),
#endif
BCMEVENT_NAME(WLC_E_TXFAIL_THRESH),
+#ifdef GSCAN_SUPPORT
+ BCMEVENT_NAME(WLC_E_PFN_GSCAN_FULL_RESULT),
+ BCMEVENT_NAME(WLC_E_PFN_SWC),
+#endif /* GSCAN_SUPPORT */
+#ifdef WLBSSLOAD_REPORT
+ BCMEVENT_NAME(WLC_E_BSS_LOAD),
+#endif
+#if defined(BT_WIFI_HANDOVER) || defined(WL_TBOW)
+ BCMEVENT_NAME(WLC_E_BT_WIFI_HANDOVER_REQ),
+#endif
+#ifdef GSCAN_SUPPORT
+ BCMEVENT_NAME(WLC_E_PFN_SSID_EXT),
+ BCMEVENT_NAME(WLC_E_ROAM_EXP_EVENT)
+#endif /* GSCAN_SUPPORT */
};
-const int bcmevent_names_size = ARRAYSIZE(bcmevent_names);
+
+const char *bcmevent_get_name(uint event_type)
+{
+ /* Note: this was first coded as a static const, but some
+ * ROMs already have a symbol called event_name, so it was
+ * changed so we don't keep a variable for the
+ * 'unknown' string.
+ */
+ const char *event_name = NULL;
+
+ uint idx;
+ for (idx = 0; idx < (uint)ARRAYSIZE(bcmevent_names); idx++) {
+
+ if (bcmevent_names[idx].event == event_type) {
+ event_name = bcmevent_names[idx].name;
+ break;
+ }
+ }
+
+ /* If we find an event name in the array, return it;
+ * otherwise return the unknown string.
+ */
+ return ((event_name) ? event_name : "Unknown Event");
+}
diff --git a/drivers/net/wireless/bcmdhd/bcmsdh.c b/drivers/net/wireless/bcmdhd/bcmsdh.c
old mode 100755
new mode 100644
index f77de60..5ee526b
--- a/drivers/net/wireless/bcmdhd/bcmsdh.c
+++ b/drivers/net/wireless/bcmdhd/bcmsdh.c
@@ -22,7 +22,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: bcmsdh.c 455573 2014-02-14 17:49:31Z $
+ * $Id: bcmsdh.c 450676 2014-01-22 22:45:13Z $
*/
/**
diff --git a/drivers/net/wireless/bcmdhd/bcmsdh_linux.c b/drivers/net/wireless/bcmdhd/bcmsdh_linux.c
old mode 100755
new mode 100644
index a2888df..1604a18
--- a/drivers/net/wireless/bcmdhd/bcmsdh_linux.c
+++ b/drivers/net/wireless/bcmdhd/bcmsdh_linux.c
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: bcmsdh_linux.c 455573 2014-02-14 17:49:31Z $
+ * $Id: bcmsdh_linux.c 461444 2014-03-12 02:55:28Z $
*/
/**
@@ -34,6 +34,9 @@
#include <linuxver.h>
#include <linux/pci.h>
#include <linux/completion.h>
+#ifdef DHD_WAKE_STATUS
+#include <linux/wakeup_reason.h>
+#endif
#include <osl.h>
#include <pcicfg.h>
@@ -44,6 +47,9 @@
#include <bcmutils.h>
#include <dngl_stats.h>
#include <dhd.h>
+#if defined(CONFIG_ARCH_ODIN)
+#include <linux/platform_data/gpio-odin.h>
+#endif /* defined(CONFIG_ARCH_ODIN) */
#include <dhd_linux.h>
/* driver info, initialized when bcmsdh_register is called */
@@ -74,6 +80,7 @@
void *context; /* context returned from upper layer */
void *sdioh; /* handle to lower layer (sdioh) */
void *dev; /* handle to the underlying device */
+ void *adapter; /* handle to adapter */
bool dev_wake_enabled;
} bcmsdh_os_info_t;
@@ -152,6 +159,7 @@
bcmsdh->os_cxt = bcmsdh_osinfo;
bcmsdh_osinfo->sdioh = sdioh;
bcmsdh_osinfo->dev = dev;
+ bcmsdh_osinfo->adapter = adapter_info;
osl_set_bus_handle(osh, bcmsdh);
#if !defined(CONFIG_HAS_WAKELOCK) && (LINUX_VERSION_CODE > KERNEL_VERSION(2, 6, 36))
@@ -180,6 +188,11 @@
goto err;
}
+#ifdef DHD_WAKE_STATUS
+ bcmsdh->wake_irq = wifi_platform_get_wake_irq(adapter_info);
+ if (bcmsdh->wake_irq == -1)
+ bcmsdh->wake_irq = bcmsdh_osinfo->oob_irq_num;
+#endif
return bcmsdh;
/* error handling */
@@ -208,12 +221,38 @@
return 0;
}
+#ifdef DHD_WAKE_STATUS
+int bcmsdh_get_total_wake(bcmsdh_info_t *bcmsdh)
+{
+ return bcmsdh->total_wake_count;
+}
+
+int bcmsdh_set_get_wake(bcmsdh_info_t *bcmsdh, int flag)
+{
+ bcmsdh_os_info_t *bcmsdh_osinfo = bcmsdh->os_cxt;
+ unsigned long flags;
+ int ret;
+
+ spin_lock_irqsave(&bcmsdh_osinfo->oob_irq_spinlock, flags);
+
+ ret = bcmsdh->pkt_wake;
+ bcmsdh->total_wake_count += flag;
+ bcmsdh->pkt_wake = flag;
+
+ spin_unlock_irqrestore(&bcmsdh_osinfo->oob_irq_spinlock, flags);
+ return ret;
+}
+#endif
+
int bcmsdh_suspend(bcmsdh_info_t *bcmsdh)
{
bcmsdh_os_info_t *bcmsdh_osinfo = bcmsdh->os_cxt;
if (drvinfo.suspend && drvinfo.suspend(bcmsdh_osinfo->context))
return -EBUSY;
+#ifdef CONFIG_PARTIALRESUME
+ wifi_process_partial_resume(bcmsdh_osinfo->adapter, WIFI_PR_INIT);
+#endif
return 0;
}
@@ -221,6 +260,16 @@
{
bcmsdh_os_info_t *bcmsdh_osinfo = bcmsdh->os_cxt;
+#ifdef DHD_WAKE_STATUS
+ if (check_wakeup_reason(bcmsdh->wake_irq)) {
+#ifdef CONFIG_PARTIALRESUME
+ wifi_process_partial_resume(bcmsdh_osinfo->adapter,
+ WIFI_PR_NOTIFY_RESUME);
+#endif
+ bcmsdh_set_get_wake(bcmsdh, 1);
+ }
+#endif
+
if (drvinfo.resume)
return drvinfo.resume(bcmsdh_osinfo->context);
return 0;
@@ -338,8 +387,13 @@
(int)bcmsdh_osinfo->oob_irq_num, (int)bcmsdh_osinfo->oob_irq_flags));
bcmsdh_osinfo->oob_irq_handler = oob_irq_handler;
bcmsdh_osinfo->oob_irq_handler_context = oob_irq_handler_context;
+#if defined(CONFIG_ARCH_ODIN)
+ err = odin_gpio_sms_request_irq(bcmsdh_osinfo->oob_irq_num, wlan_oob_irq,
+ bcmsdh_osinfo->oob_irq_flags, "bcmsdh_sdmmc", bcmsdh);
+#else
err = request_irq(bcmsdh_osinfo->oob_irq_num, wlan_oob_irq,
bcmsdh_osinfo->oob_irq_flags, "bcmsdh_sdmmc", bcmsdh);
+#endif /* defined(CONFIG_ARCH_ODIN) */
if (err) {
SDLX_MSG(("%s: request_irq failed with %d\n", __FUNCTION__, err));
return err;
diff --git a/drivers/net/wireless/bcmdhd/bcmsdh_sdmmc.c b/drivers/net/wireless/bcmdhd/bcmsdh_sdmmc.c
old mode 100755
new mode 100644
index 390ad0b..15c4e0f
--- a/drivers/net/wireless/bcmdhd/bcmsdh_sdmmc.c
+++ b/drivers/net/wireless/bcmdhd/bcmsdh_sdmmc.c
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: bcmsdh_sdmmc.c 457662 2014-02-24 15:07:28Z $
+ * $Id: bcmsdh_sdmmc.c 459285 2014-03-03 02:54:39Z $
*/
#include <typedefs.h>
@@ -468,7 +468,7 @@
switch (actionid) {
case IOV_GVAL(IOV_MSGLEVEL):
int_val = (int32)sd_msglevel;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_MSGLEVEL):
@@ -477,7 +477,7 @@
case IOV_GVAL(IOV_BLOCKMODE):
int_val = (int32)si->sd_blockmode;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_BLOCKMODE):
@@ -491,7 +491,7 @@
break;
}
int_val = (int32)si->client_block_size[int_val];
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_BLOCKSIZE):
@@ -527,12 +527,12 @@
case IOV_GVAL(IOV_RXCHAIN):
int_val = (int32)si->use_rxchain;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_GVAL(IOV_DMA):
int_val = (int32)si->sd_use_dma;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_DMA):
@@ -541,7 +541,7 @@
case IOV_GVAL(IOV_USEINTS):
int_val = (int32)si->use_client_ints;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_USEINTS):
@@ -555,7 +555,7 @@
case IOV_GVAL(IOV_DIVISOR):
int_val = (uint32)sd_divisor;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_DIVISOR):
@@ -564,7 +564,7 @@
case IOV_GVAL(IOV_POWER):
int_val = (uint32)sd_power;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_POWER):
@@ -573,7 +573,7 @@
case IOV_GVAL(IOV_CLOCK):
int_val = (uint32)sd_clock;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_CLOCK):
@@ -582,7 +582,7 @@
case IOV_GVAL(IOV_SDMODE):
int_val = (uint32)sd_sdmode;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_SDMODE):
@@ -591,7 +591,7 @@
case IOV_GVAL(IOV_HISPEED):
int_val = (uint32)sd_hiok;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_HISPEED):
@@ -600,12 +600,12 @@
case IOV_GVAL(IOV_NUMINTS):
int_val = (int32)si->intrcount;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_GVAL(IOV_NUMLOCALINTS):
int_val = (int32)0;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_GVAL(IOV_HOSTREG):
@@ -1331,7 +1331,7 @@
2.6.27. The implementation prior to that is buggy, and needs broadcom's
patch for it
*/
- if ((ret = mmc_power_restore_host(sd->func[0]->card->host))) {
+ if ((ret = sdio_reset_comm(sd->func[0]->card))) {
sd_err(("%s Failed, error = %d\n", __FUNCTION__, ret));
return ret;
}
@@ -1418,8 +1418,6 @@
#endif
bcmsdh_oob_intr_set(sd->bcmsdh, FALSE);
#endif /* !defined(OOB_INTR_ONLY) */
- if (mmc_power_save_host((sd->func[0])->card->host))
- sd_err(("%s card power save fail\n", __FUNCTION__));
}
else
sd_err(("%s Failed\n", __FUNCTION__));
diff --git a/drivers/net/wireless/bcmdhd/bcmsdh_sdmmc_linux.c b/drivers/net/wireless/bcmdhd/bcmsdh_sdmmc_linux.c
old mode 100755
new mode 100644
index e8c1958..f89b81b
--- a/drivers/net/wireless/bcmdhd/bcmsdh_sdmmc_linux.c
+++ b/drivers/net/wireless/bcmdhd/bcmsdh_sdmmc_linux.c
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: bcmsdh_sdmmc_linux.c 434724 2013-11-07 05:38:43Z $
+ * $Id: bcmsdh_sdmmc_linux.c 434777 2013-11-07 09:30:27Z $
*/
#include <typedefs.h>
@@ -40,9 +40,6 @@
#include <dhd_linux.h>
#include <bcmsdh_sdmmc.h>
#include <dhd_dbg.h>
-#if defined(CONFIG_WIFI_CONTROL_FUNC)
-#include <linux/wlan_plat.h>
-#endif
#if !defined(SDIO_VENDOR_ID_BROADCOM)
#define SDIO_VENDOR_ID_BROADCOM 0x02d0
@@ -106,30 +103,12 @@
wifi_adapter_info_t *adapter;
osl_t *osh = NULL;
sdioh_info_t *sdioh = NULL;
-#if defined(CONFIG_WIFI_CONTROL_FUNC)
- struct wifi_platform_data *plat_data;
-#endif
sd_err(("bus num (host idx)=%d, slot num (rca)=%d\n", host_idx, rca));
adapter = dhd_wifi_platform_get_adapter(SDIO_BUS, host_idx, rca);
- if (adapter != NULL) {
+ if (adapter != NULL)
sd_err(("found adapter info '%s'\n", adapter->name));
-#if defined(CONFIG_WIFI_CONTROL_FUNC)
- if (adapter->wifi_plat_data) {
- plat_data = adapter->wifi_plat_data;
- /* sdio card detection is completed,
- * so stop card detection here */
- if (plat_data->set_carddetect) {
- sd_debug(("stopping card detection\n"));
- plat_data->set_carddetect(0);
- }
- else
- sd_err(("set_carddetect is not registered\n"));
- }
- else
- sd_err(("platform data is NULL\n"));
-#endif
- } else
+ else
sd_err(("can't find adapter info for this chip\n"));
#ifdef WL_CFG80211
@@ -197,11 +176,8 @@
sd_info(("Function#: 0x%04x\n", func->num));
/* 4318 doesn't have function 2 */
- if ((func->num == 2) || (func->num == 1 && func->device == 0x4)) {
+ if ((func->num == 2) || (func->num == 1 && func->device == 0x4))
ret = sdioh_probe(func);
- if (mmc_power_save_host(func->card->host))
- sd_err(("%s: card power save fail", __FUNCTION__));
- }
return ret;
}
diff --git a/drivers/net/wireless/bcmdhd/bcmsdspi_linux.c b/drivers/net/wireless/bcmdhd/bcmsdspi_linux.c
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/bcmspibrcm.c b/drivers/net/wireless/bcmdhd/bcmspibrcm.c
old mode 100755
new mode 100644
index 97a253b..f0a6102
--- a/drivers/net/wireless/bcmdhd/bcmspibrcm.c
+++ b/drivers/net/wireless/bcmdhd/bcmspibrcm.c
@@ -385,7 +385,7 @@
switch (actionid) {
case IOV_GVAL(IOV_MSGLEVEL):
int_val = (int32)sd_msglevel;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_MSGLEVEL):
@@ -398,12 +398,12 @@
break;
}
int_val = (int32)si->client_block_size[int_val];
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_GVAL(IOV_DMA):
int_val = (int32)si->sd_use_dma;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_DMA):
@@ -412,7 +412,7 @@
case IOV_GVAL(IOV_USEINTS):
int_val = (int32)si->use_client_ints;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_USEINTS):
@@ -420,7 +420,7 @@
case IOV_GVAL(IOV_DIVISOR):
int_val = (uint32)sd_divisor;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_DIVISOR):
@@ -433,7 +433,7 @@
case IOV_GVAL(IOV_POWER):
int_val = (uint32)sd_power;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_POWER):
@@ -442,7 +442,7 @@
case IOV_GVAL(IOV_CLOCK):
int_val = (uint32)sd_clock;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_CLOCK):
@@ -451,7 +451,7 @@
case IOV_GVAL(IOV_SDMODE):
int_val = (uint32)sd_sdmode;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_SDMODE):
@@ -460,7 +460,7 @@
case IOV_GVAL(IOV_HISPEED):
int_val = (uint32)sd_hiok;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_HISPEED):
@@ -476,12 +476,12 @@
case IOV_GVAL(IOV_NUMINTS):
int_val = (int32)si->intrcount;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_GVAL(IOV_NUMLOCALINTS):
int_val = (int32)si->local_intrcount;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_GVAL(IOV_DEVREG):
{
@@ -525,7 +525,7 @@
case IOV_GVAL(IOV_RESP_DELAY_ALL):
int_val = (int32)si->resp_delay_all;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_RESP_DELAY_ALL):
diff --git a/drivers/net/wireless/bcmdhd/bcmutils.c b/drivers/net/wireless/bcmdhd/bcmutils.c
old mode 100755
new mode 100644
index dee05aa..1c14325
--- a/drivers/net/wireless/bcmdhd/bcmutils.c
+++ b/drivers/net/wireless/bcmdhd/bcmutils.c
@@ -2,13 +2,13 @@
* Driver O/S-independent utility routines
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -16,11 +16,11 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
- * $Id: bcmutils.c 457888 2014-02-25 03:34:39Z $
+ * $Id: bcmutils.c 473326 2014-04-29 00:37:35Z $
*/
#include <bcm_cfg.h>
@@ -65,6 +65,7 @@
+
#ifdef BCMDRIVER
@@ -256,489 +257,6 @@
return p;
}
-/*
- * osl multiple-precedence packet queue
- * hi_prec is always >= the number of the highest non-empty precedence
- */
-void * BCMFASTPATH
-pktq_penq(struct pktq *pq, int prec, void *p)
-{
- struct pktq_prec *q;
-
- ASSERT(prec >= 0 && prec < pq->num_prec);
- ASSERT(PKTLINK(p) == NULL); /* queueing chains not allowed */
-
- ASSERT(!pktq_full(pq));
- ASSERT(!pktq_pfull(pq, prec));
-
- q = &pq->q[prec];
-
- if (q->head)
- PKTSETLINK(q->tail, p);
- else
- q->head = p;
-
- q->tail = p;
- q->len++;
-
- pq->len++;
-
- if (pq->hi_prec < prec)
- pq->hi_prec = (uint8)prec;
-
- return p;
-}
-
-void * BCMFASTPATH
-pktq_penq_head(struct pktq *pq, int prec, void *p)
-{
- struct pktq_prec *q;
-
- ASSERT(prec >= 0 && prec < pq->num_prec);
- ASSERT(PKTLINK(p) == NULL); /* queueing chains not allowed */
-
- ASSERT(!pktq_full(pq));
- ASSERT(!pktq_pfull(pq, prec));
-
- q = &pq->q[prec];
-
- if (q->head == NULL)
- q->tail = p;
-
- PKTSETLINK(p, q->head);
- q->head = p;
- q->len++;
-
- pq->len++;
-
- if (pq->hi_prec < prec)
- pq->hi_prec = (uint8)prec;
-
- return p;
-}
-
-void * BCMFASTPATH
-pktq_pdeq(struct pktq *pq, int prec)
-{
- struct pktq_prec *q;
- void *p;
-
- ASSERT(prec >= 0 && prec < pq->num_prec);
-
- q = &pq->q[prec];
-
- if ((p = q->head) == NULL)
- return NULL;
-
- if ((q->head = PKTLINK(p)) == NULL)
- q->tail = NULL;
-
- q->len--;
-
- pq->len--;
-
- PKTSETLINK(p, NULL);
-
- return p;
-}
-
-void * BCMFASTPATH
-pktq_pdeq_prev(struct pktq *pq, int prec, void *prev_p)
-{
- struct pktq_prec *q;
- void *p;
-
- ASSERT(prec >= 0 && prec < pq->num_prec);
-
- q = &pq->q[prec];
-
- if (prev_p == NULL)
- return NULL;
-
- if ((p = PKTLINK(prev_p)) == NULL)
- return NULL;
-
- q->len--;
-
- pq->len--;
-
- PKTSETLINK(prev_p, PKTLINK(p));
- PKTSETLINK(p, NULL);
-
- return p;
-}
-
-void * BCMFASTPATH
-pktq_pdeq_with_fn(struct pktq *pq, int prec, ifpkt_cb_t fn, int arg)
-{
- struct pktq_prec *q;
- void *p, *prev = NULL;
-
- ASSERT(prec >= 0 && prec < pq->num_prec);
-
- q = &pq->q[prec];
- p = q->head;
-
- while (p) {
- if (fn == NULL || (*fn)(p, arg)) {
- break;
- } else {
- prev = p;
- p = PKTLINK(p);
- }
- }
- if (p == NULL)
- return NULL;
-
- if (prev == NULL) {
- if ((q->head = PKTLINK(p)) == NULL) {
- q->tail = NULL;
- }
- } else {
- PKTSETLINK(prev, PKTLINK(p));
- if (q->tail == p) {
- q->tail = prev;
- }
- }
-
- q->len--;
-
- pq->len--;
-
- PKTSETLINK(p, NULL);
-
- return p;
-}
-
-void * BCMFASTPATH
-pktq_pdeq_tail(struct pktq *pq, int prec)
-{
- struct pktq_prec *q;
- void *p, *prev;
-
- ASSERT(prec >= 0 && prec < pq->num_prec);
-
- q = &pq->q[prec];
-
- if ((p = q->head) == NULL)
- return NULL;
-
- for (prev = NULL; p != q->tail; p = PKTLINK(p))
- prev = p;
-
- if (prev)
- PKTSETLINK(prev, NULL);
- else
- q->head = NULL;
-
- q->tail = prev;
- q->len--;
-
- pq->len--;
-
- return p;
-}
-
-void
-pktq_pflush(osl_t *osh, struct pktq *pq, int prec, bool dir, ifpkt_cb_t fn, int arg)
-{
- struct pktq_prec *q;
- void *p, *prev = NULL;
-
- q = &pq->q[prec];
- p = q->head;
- while (p) {
- if (fn == NULL || (*fn)(p, arg)) {
- bool head = (p == q->head);
- if (head)
- q->head = PKTLINK(p);
- else
- PKTSETLINK(prev, PKTLINK(p));
- PKTSETLINK(p, NULL);
- PKTFREE(osh, p, dir);
- q->len--;
- pq->len--;
- p = (head ? q->head : PKTLINK(prev));
- } else {
- prev = p;
- p = PKTLINK(p);
- }
- }
-
- if (q->head == NULL) {
- ASSERT(q->len == 0);
- q->tail = NULL;
- }
-}
-
-bool BCMFASTPATH
-pktq_pdel(struct pktq *pq, void *pktbuf, int prec)
-{
- struct pktq_prec *q;
- void *p;
-
- ASSERT(prec >= 0 && prec < pq->num_prec);
-
- if (!pktbuf)
- return FALSE;
-
- q = &pq->q[prec];
-
- if (q->head == pktbuf) {
- if ((q->head = PKTLINK(pktbuf)) == NULL)
- q->tail = NULL;
- } else {
- for (p = q->head; p && PKTLINK(p) != pktbuf; p = PKTLINK(p))
- ;
- if (p == NULL)
- return FALSE;
-
- PKTSETLINK(p, PKTLINK(pktbuf));
- if (q->tail == pktbuf)
- q->tail = p;
- }
-
- q->len--;
- pq->len--;
- PKTSETLINK(pktbuf, NULL);
- return TRUE;
-}
-
-void
-pktq_init(struct pktq *pq, int num_prec, int max_len)
-{
- int prec;
-
- ASSERT(num_prec > 0 && num_prec <= PKTQ_MAX_PREC);
-
- /* pq is variable size; only zero out what's requested */
- bzero(pq, OFFSETOF(struct pktq, q) + (sizeof(struct pktq_prec) * num_prec));
-
- pq->num_prec = (uint16)num_prec;
-
- pq->max = (uint16)max_len;
-
- for (prec = 0; prec < num_prec; prec++)
- pq->q[prec].max = pq->max;
-}
-
-void
-pktq_set_max_plen(struct pktq *pq, int prec, int max_len)
-{
- ASSERT(prec >= 0 && prec < pq->num_prec);
-
- if (prec < pq->num_prec)
- pq->q[prec].max = (uint16)max_len;
-}
-
-void * BCMFASTPATH
-pktq_deq(struct pktq *pq, int *prec_out)
-{
- struct pktq_prec *q;
- void *p;
- int prec;
-
- if (pq->len == 0)
- return NULL;
-
- while ((prec = pq->hi_prec) > 0 && pq->q[prec].head == NULL)
- pq->hi_prec--;
-
- q = &pq->q[prec];
-
- if ((p = q->head) == NULL)
- return NULL;
-
- if ((q->head = PKTLINK(p)) == NULL)
- q->tail = NULL;
-
- q->len--;
-
- pq->len--;
-
- if (prec_out)
- *prec_out = prec;
-
- PKTSETLINK(p, NULL);
-
- return p;
-}
-
-void * BCMFASTPATH
-pktq_deq_tail(struct pktq *pq, int *prec_out)
-{
- struct pktq_prec *q;
- void *p, *prev;
- int prec;
-
- if (pq->len == 0)
- return NULL;
-
- for (prec = 0; prec < pq->hi_prec; prec++)
- if (pq->q[prec].head)
- break;
-
- q = &pq->q[prec];
-
- if ((p = q->head) == NULL)
- return NULL;
-
- for (prev = NULL; p != q->tail; p = PKTLINK(p))
- prev = p;
-
- if (prev)
- PKTSETLINK(prev, NULL);
- else
- q->head = NULL;
-
- q->tail = prev;
- q->len--;
-
- pq->len--;
-
- if (prec_out)
- *prec_out = prec;
-
- PKTSETLINK(p, NULL);
-
- return p;
-}
-
-void *
-pktq_peek(struct pktq *pq, int *prec_out)
-{
- int prec;
-
- if (pq->len == 0)
- return NULL;
-
- while ((prec = pq->hi_prec) > 0 && pq->q[prec].head == NULL)
- pq->hi_prec--;
-
- if (prec_out)
- *prec_out = prec;
-
- return (pq->q[prec].head);
-}
-
-void *
-pktq_peek_tail(struct pktq *pq, int *prec_out)
-{
- int prec;
-
- if (pq->len == 0)
- return NULL;
-
- for (prec = 0; prec < pq->hi_prec; prec++)
- if (pq->q[prec].head)
- break;
-
- if (prec_out)
- *prec_out = prec;
-
- return (pq->q[prec].tail);
-}
-
-void
-pktq_flush(osl_t *osh, struct pktq *pq, bool dir, ifpkt_cb_t fn, int arg)
-{
- int prec;
-
- /* Optimize flush, if pktq len = 0, just return.
- * pktq len of 0 means pktq's prec q's are all empty.
- */
- if (pq->len == 0) {
- return;
- }
-
- for (prec = 0; prec < pq->num_prec; prec++)
- pktq_pflush(osh, pq, prec, dir, fn, arg);
- if (fn == NULL)
- ASSERT(pq->len == 0);
-}
-
-/* Return sum of lengths of a specific set of precedences */
-int
-pktq_mlen(struct pktq *pq, uint prec_bmp)
-{
- int prec, len;
-
- len = 0;
-
- for (prec = 0; prec <= pq->hi_prec; prec++)
- if (prec_bmp & (1 << prec))
- len += pq->q[prec].len;
-
- return len;
-}
-
-/* Priority peek from a specific set of precedences */
-void * BCMFASTPATH
-pktq_mpeek(struct pktq *pq, uint prec_bmp, int *prec_out)
-{
- struct pktq_prec *q;
- void *p;
- int prec;
-
- if (pq->len == 0)
- {
- return NULL;
- }
- while ((prec = pq->hi_prec) > 0 && pq->q[prec].head == NULL)
- pq->hi_prec--;
-
- while ((prec_bmp & (1 << prec)) == 0 || pq->q[prec].head == NULL)
- if (prec-- == 0)
- return NULL;
-
- q = &pq->q[prec];
-
- if ((p = q->head) == NULL)
- return NULL;
-
- if (prec_out)
- *prec_out = prec;
-
- return p;
-}
-/* Priority dequeue from a specific set of precedences */
-void * BCMFASTPATH
-pktq_mdeq(struct pktq *pq, uint prec_bmp, int *prec_out)
-{
- struct pktq_prec *q;
- void *p;
- int prec;
-
- if (pq->len == 0)
- return NULL;
-
- while ((prec = pq->hi_prec) > 0 && pq->q[prec].head == NULL)
- pq->hi_prec--;
-
- while ((pq->q[prec].head == NULL) || ((prec_bmp & (1 << prec)) == 0))
- if (prec-- == 0)
- return NULL;
-
- q = &pq->q[prec];
-
- if ((p = q->head) == NULL)
- return NULL;
-
- if ((q->head = PKTLINK(p)) == NULL)
- q->tail = NULL;
-
- q->len--;
-
- if (prec_out)
- *prec_out = prec;
-
- pq->len--;
-
- PKTSETLINK(p, NULL);
-
- return p;
-}
-
#endif /* BCMDRIVER */
#if !defined(BCMROMOFFLOAD_EXCLUDE_BCMUTILS_FUNCS)
@@ -849,8 +367,8 @@
if ((haystack == NULL) || (needle == NULL))
return DISCARD_QUAL(haystack, char);
- nlen = strlen(needle);
- len = strlen(haystack) - nlen + 1;
+ nlen = (int)strlen(needle);
+ len = (int)strlen(haystack) - nlen + 1;
for (i = 0; i < len; i++)
if (memcmp(needle, &haystack[i], nlen) == 0)
@@ -859,6 +377,16 @@
}
char *
+bcmstrnstr(const char *s, uint s_len, const char *substr, uint substr_len)
+{
+ for (; s_len >= substr_len; s++, s_len--)
+ if (strncmp(s, substr, substr_len) == 0)
+ return DISCARD_QUAL(s, char);
+
+ return NULL;
+}
+
+char *
bcmstrcat(char *dest, const char *src)
{
char *p;
@@ -1213,7 +741,7 @@
for (p = p0; p; p = PKTNEXT(osh, p))
prhex(NULL, PKTDATA(osh, p), PKTLEN(osh, p));
}
-#endif
+#endif
/* Takes an Ethernet frame and sets out-of-bound PKTPRIO.
* Also updates the inplace vlan tag if requested.
@@ -1242,7 +770,8 @@
vlan_tag = ntoh16(evh->vlan_tag);
vlan_prio = (int) (vlan_tag >> VLAN_PRI_SHIFT) & VLAN_PRI_MASK;
- if (evh->ether_type == hton16(ETHER_TYPE_IP)) {
+ if ((evh->ether_type == hton16(ETHER_TYPE_IP)) ||
+ (evh->ether_type == hton16(ETHER_TYPE_IPV6))) {
uint8 *ip_body = pktdata + sizeof(struct ethervlan_header);
uint8 tos_tc = IP_TOS46(ip_body);
dscp_prio = (int)(tos_tc >> IPV4_TOS_PREC_SHIFT);
@@ -1269,7 +798,14 @@
evh->vlan_tag = hton16(vlan_tag);
rc |= PKTPRIO_UPD;
}
- } else if (eh->ether_type == hton16(ETHER_TYPE_IP)) {
+
+#ifdef EAPOL_PKT_PRIO
+ } else if (eh->ether_type == hton16(ETHER_TYPE_802_1X)) {
+ priority = PRIO_8021D_NC;
+ rc = PKTPRIO_DSCP;
+#endif /* EAPOL_PKT_PRIO */
+ } else if ((eh->ether_type == hton16(ETHER_TYPE_IP)) ||
+ (eh->ether_type == hton16(ETHER_TYPE_IPV6))) {
uint8 *ip_body = pktdata + sizeof(struct ether_header);
uint8 tos_tc = IP_TOS46(ip_body);
uint8 dscp = tos_tc >> IPV4_TOS_DSCP_SHIFT;
@@ -1303,6 +839,43 @@
return (rc | priority);
}
+/* Returns TRUE and DSCP if IP header found, FALSE otherwise.
+ */
+bool BCMFASTPATH
+pktgetdscp(uint8 *pktdata, uint pktlen, uint8 *dscp)
+{
+ struct ether_header *eh;
+ struct ethervlan_header *evh;
+ uint8 *ip_body;
+ bool rc = FALSE;
+
+ /* minimum length is ether header and IP header */
+ if (pktlen < sizeof(struct ether_header) + IPV4_MIN_HEADER_LEN)
+ return FALSE;
+
+ eh = (struct ether_header *) pktdata;
+
+ if (eh->ether_type == HTON16(ETHER_TYPE_IP)) {
+ ip_body = pktdata + sizeof(struct ether_header);
+ *dscp = IP_DSCP46(ip_body);
+ rc = TRUE;
+ }
+ else if (eh->ether_type == HTON16(ETHER_TYPE_8021Q)) {
+ evh = (struct ethervlan_header *)eh;
+
+ /* minimum length is ethervlan header and IP header */
+ if (pktlen >= sizeof(struct ethervlan_header) + IPV4_MIN_HEADER_LEN &&
+ evh->ether_type == HTON16(ETHER_TYPE_IP)) {
+ ip_body = pktdata + sizeof(struct ethervlan_header);
+ *dscp = IP_DSCP46(ip_body);
+ rc = TRUE;
+ }
+ }
+
+ return rc;
+}
+
+/* The 0.5KB string table is not removed by the compiler even though it is unused */
static char bcm_undeferrstr[32];
static const char *bcmerrorstrtable[] = BCMERRSTRINGTABLE;
@@ -1327,6 +900,7 @@
/* iovar table lookup */
+/* could mandate sorted tables and do a binary search */
const bcm_iovar_t*
bcm_iovar_lookup(const bcm_iovar_t *table, const char *name)
{
@@ -1831,6 +1405,22 @@
/*
* Traverse a string of 1-byte tag/1-byte length/variable-length value
* triples, returning a pointer to the substring whose first element
+ * matches tag
+ * return NULL if not found or length field < min_varlen
+ */
+bcm_tlv_t *
+bcm_parse_tlvs_min_bodylen(void *buf, int buflen, uint key, int min_bodylen)
+{
+ bcm_tlv_t * ret = bcm_parse_tlvs(buf, buflen, key);
+ if (ret == NULL || ret->len < min_bodylen) {
+ return NULL;
+ }
+ return ret;
+}
+
+/*
+ * Traverse a string of 1-byte tag/1-byte length/variable-length value
+ * triples, returning a pointer to the substring whose first element
* matches tag. Stop parsing when we see an element whose ID is greater
* than the target key.
*/
@@ -1958,7 +1548,7 @@
}
return (int)(p - str);
}
-#endif
+#endif
/* pretty hex print a contiguous buffer */
void
@@ -2059,7 +1649,7 @@
uint len, max_len;
char c;
- len = strlen(buf);
+ len = (uint)strlen(buf);
max_len = BUFSIZE_TODUMP_ATONCE;
@@ -2110,7 +1700,7 @@
{
uint len;
- len = strlen(name) + 1;
+ len = (uint)strlen(name) + 1;
if ((len + datalen) > buflen)
return 0;
@@ -2382,7 +1972,7 @@
return (int)(p - buf);
}
-#endif
+#endif
#endif /* BCMDRIVER */
@@ -2532,6 +2122,35 @@
#endif /* setbit */
void
+set_bitrange(void *array, uint start, uint end, uint maxbit)
+{
+ uint startbyte = start/NBBY;
+ uint endbyte = end/NBBY;
+ uint i, startbytelastbit, endbytestartbit;
+
+ if (end >= start) {
+ if (endbyte - startbyte > 1)
+ {
+ startbytelastbit = (startbyte+1)*NBBY - 1;
+ endbytestartbit = endbyte*NBBY;
+ for (i = startbyte+1; i < endbyte; i++)
+ ((uint8 *)array)[i] = 0xFF;
+ for (i = start; i <= startbytelastbit; i++)
+ setbit(array, i);
+ for (i = endbytestartbit; i <= end; i++)
+ setbit(array, i);
+ } else {
+ for (i = start; i <= end; i++)
+ setbit(array, i);
+ }
+ }
+ else {
+ set_bitrange(array, start, maxbit, maxbit);
+ set_bitrange(array, 0, end, maxbit);
+ }
+}
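`set_bitrange()` above fast-paths whole bytes (filling them with 0xFF) and recurses when `end < start` so the range wraps through `maxbit` back to bit 0. A minimal model of that contract, assuming the driver's `setbit()` sets bit `(i % 8)` of byte `(i / 8)`, LSB first:

```c
#include <assert.h>
#include <stdint.h>

#define NBBY 8

static void setbit_(uint8_t *a, unsigned i)
{
    a[i / NBBY] |= (uint8_t)(1u << (i % NBBY));
}

/* Simple (non-fast-path) form: set bits start..end inclusive; when
 * end < start, wrap through maxbit back to bit 0, as the driver does. */
static void set_bitrange_(uint8_t *a, unsigned start, unsigned end,
                          unsigned maxbit)
{
    if (end >= start) {
        for (unsigned i = start; i <= end; i++)
            setbit_(a, i);
    } else {                      /* wrapped range */
        set_bitrange_(a, start, maxbit, maxbit);
        set_bitrange_(a, 0, end, maxbit);
    }
}
```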
+
+void
bcm_bitprint32(const uint32 u32)
{
int i;
@@ -2542,6 +2161,27 @@
printf("\n");
}
+/* calculate checksum for ip header, tcp / udp header / data */
+uint16
+bcm_ip_cksum(uint8 *buf, uint32 len, uint32 sum)
+{
+ while (len > 1) {
+ sum += (buf[0] << 8) | buf[1];
+ buf += 2;
+ len -= 2;
+ }
+
+ if (len > 0) {
+ sum += (*buf) << 8;
+ }
+
+ while (sum >> 16) {
+ sum = (sum & 0xffff) + (sum >> 16);
+ }
+
+ return ((uint16)~sum);
+}
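`bcm_ip_cksum()` is the standard Internet checksum (RFC 1071): sum the data as 16-bit big-endian words, pad an odd trailing byte with zero, fold the carries back into the low 16 bits, and complement. A stand-alone copy for illustration:

```c
#include <assert.h>
#include <stdint.h>

static uint16_t ip_cksum(const uint8_t *buf, uint32_t len, uint32_t sum)
{
    while (len > 1) {
        sum += (uint32_t)((buf[0] << 8) | buf[1]);  /* 16-bit BE words */
        buf += 2;
        len -= 2;
    }
    if (len > 0)                      /* odd trailing byte pads with zero */
        sum += (uint32_t)buf[0] << 8;
    while (sum >> 16)                 /* fold carries into low 16 bits */
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}
```

The `sum` parameter lets a caller chain partial checksums, e.g. feeding the IP pseudo-header sum in before checksumming a TCP/UDP segment.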
+
#ifdef BCMDRIVER
/*
* Hierarchical Multiword bitmap based small id allocator.
@@ -2709,7 +2349,7 @@
}
/* Allocate a unique small index using a multiword bitmap index allocator. */
-uint32
+uint32 BCMFASTPATH
bcm_mwbmap_alloc(struct bcm_mwbmap * mwbmap_hdl)
{
bcm_mwbmap_t * mwbmap_p;
@@ -2845,7 +2485,7 @@
}
/* Free a previously allocated index back into the multiword bitmap allocator */
-void
+void BCMFASTPATH
bcm_mwbmap_free(struct bcm_mwbmap * mwbmap_hdl, uint32 bitix)
{
bcm_mwbmap_t * mwbmap_p;
@@ -2998,10 +2638,266 @@
}
}
- ASSERT(free_cnt == mwbmap_p->ifree);
+ ASSERT((int)free_cnt == mwbmap_p->ifree);
}
/* END : Multiword bitmap based 64bit to Unique 32bit Id allocator. */
+/* Simple 16bit Id allocator using a stack implementation. */
+typedef struct id16_map {
+ uint16 total; /* total number of ids managed by allocator */
+ uint16 start; /* start value of 16bit ids to be managed */
+ uint32 failures; /* count of failures */
+ void *dbg; /* debug placeholder */
+ int stack_idx; /* index into stack of available ids */
+ uint16 stack[0]; /* stack of 16 bit ids */
+} id16_map_t;
+
+#define ID16_MAP_SZ(items) (sizeof(id16_map_t) + \
+ (sizeof(uint16) * (items)))
+
+#if defined(BCM_DBG)
+
+/* Uncomment BCM_DBG_ID16 to debug double free */
+/* #define BCM_DBG_ID16 */
+
+typedef struct id16_map_dbg {
+ uint16 total;
+ bool avail[0];
+} id16_map_dbg_t;
+#define ID16_MAP_DBG_SZ(items) (sizeof(id16_map_dbg_t) + \
+ (sizeof(bool) * (items)))
+#define ID16_MAP_MSG(x) printf x
+#else
+#define ID16_MAP_MSG(x)
+#endif /* BCM_DBG */
+
+void * /* Construct an id16 allocator: [start_val16 .. start_val16+total_ids) */
+id16_map_init(osl_t *osh, uint16 total_ids, uint16 start_val16)
+{
+ uint16 idx, val16;
+ id16_map_t * id16_map;
+
+ ASSERT(total_ids > 0);
+ ASSERT((start_val16 + total_ids) < ID16_INVALID);
+
+ id16_map = (id16_map_t *) MALLOC(osh, ID16_MAP_SZ(total_ids));
+ if (id16_map == NULL) {
+ return NULL;
+ }
+
+ id16_map->total = total_ids;
+ id16_map->start = start_val16;
+ id16_map->failures = 0;
+ id16_map->dbg = NULL;
+
+ /* Populate stack with 16bit id values, commencing with start_val16 */
+ id16_map->stack_idx = 0;
+ val16 = start_val16;
+
+ for (idx = 0; idx < total_ids; idx++, val16++) {
+ id16_map->stack_idx = idx;
+ id16_map->stack[id16_map->stack_idx] = val16;
+ }
+
+#if defined(BCM_DBG) && defined(BCM_DBG_ID16)
+ id16_map->dbg = MALLOC(osh, ID16_MAP_DBG_SZ(total_ids));
+
+ if (id16_map->dbg) {
+ id16_map_dbg_t *id16_map_dbg = (id16_map_dbg_t *)id16_map->dbg;
+
+ id16_map_dbg->total = total_ids;
+ for (idx = 0; idx < total_ids; idx++) {
+ id16_map_dbg->avail[idx] = TRUE;
+ }
+ }
+#endif /* BCM_DBG && BCM_DBG_ID16 */
+
+ return (void *)id16_map;
+}
+
+void * /* Destruct an id16 allocator instance */
+id16_map_fini(osl_t *osh, void * id16_map_hndl)
+{
+ uint16 total_ids;
+ id16_map_t * id16_map;
+
+ if (id16_map_hndl == NULL)
+ return NULL;
+
+ id16_map = (id16_map_t *)id16_map_hndl;
+
+ total_ids = id16_map->total;
+ ASSERT(total_ids > 0);
+
+#if defined(BCM_DBG) && defined(BCM_DBG_ID16)
+ if (id16_map->dbg) {
+ MFREE(osh, id16_map->dbg, ID16_MAP_DBG_SZ(total_ids));
+ id16_map->dbg = NULL;
+ }
+#endif /* BCM_DBG && BCM_DBG_ID16 */
+
+ id16_map->total = 0;
+ MFREE(osh, id16_map, ID16_MAP_SZ(total_ids));
+
+ return NULL;
+}
+
+void
+id16_map_clear(void * id16_map_hndl, uint16 total_ids, uint16 start_val16)
+{
+ uint16 idx, val16;
+ id16_map_t * id16_map;
+
+ ASSERT(total_ids > 0);
+ ASSERT((start_val16 + total_ids) < ID16_INVALID);
+
+ id16_map = (id16_map_t *)id16_map_hndl;
+ if (id16_map == NULL) {
+ return;
+ }
+
+ id16_map->total = total_ids;
+ id16_map->start = start_val16;
+ id16_map->failures = 0;
+
+ /* Populate stack with 16bit id values, commencing with start_val16 */
+ id16_map->stack_idx = 0;
+ val16 = start_val16;
+
+ for (idx = 0; idx < total_ids; idx++, val16++) {
+ id16_map->stack_idx = idx;
+ id16_map->stack[id16_map->stack_idx] = val16;
+ }
+
+#if defined(BCM_DBG) && defined(BCM_DBG_ID16)
+ if (id16_map->dbg) {
+ id16_map_dbg_t *id16_map_dbg = (id16_map_dbg_t *)id16_map->dbg;
+
+ id16_map_dbg->total = total_ids;
+ for (idx = 0; idx < total_ids; idx++) {
+ id16_map_dbg->avail[idx] = TRUE;
+ }
+ }
+#endif /* BCM_DBG && BCM_DBG_ID16 */
+}
+
+
+uint16 BCMFASTPATH /* Allocate a unique 16bit id */
+id16_map_alloc(void * id16_map_hndl)
+{
+ uint16 val16;
+ id16_map_t * id16_map;
+
+ ASSERT(id16_map_hndl != NULL);
+
+ id16_map = (id16_map_t *)id16_map_hndl;
+
+ ASSERT(id16_map->total > 0);
+
+ if (id16_map->stack_idx < 0) {
+ id16_map->failures++;
+ return ID16_INVALID;
+ }
+
+ val16 = id16_map->stack[id16_map->stack_idx];
+ id16_map->stack_idx--;
+
+#if defined(BCM_DBG) && defined(BCM_DBG_ID16)
+
+ ASSERT(val16 < (id16_map->start + id16_map->total));
+
+ if (id16_map->dbg) { /* Validate val16 */
+ id16_map_dbg_t *id16_map_dbg = (id16_map_dbg_t *)id16_map->dbg;
+
+ ASSERT(id16_map_dbg->avail[val16 - id16_map->start] == TRUE);
+ id16_map_dbg->avail[val16 - id16_map->start] = FALSE;
+ }
+#endif /* BCM_DBG && BCM_DBG_ID16 */
+
+ return val16;
+}
+
+
+void BCMFASTPATH /* Free a 16bit id value into the id16 allocator */
+id16_map_free(void * id16_map_hndl, uint16 val16)
+{
+ id16_map_t * id16_map;
+
+ ASSERT(id16_map_hndl != NULL);
+
+ id16_map = (id16_map_t *)id16_map_hndl;
+
+#if defined(BCM_DBG) && defined(BCM_DBG_ID16)
+
+ ASSERT(val16 < (id16_map->start + id16_map->total));
+
+ if (id16_map->dbg) { /* Validate val16 */
+ id16_map_dbg_t *id16_map_dbg = (id16_map_dbg_t *)id16_map->dbg;
+
+ ASSERT(id16_map_dbg->avail[val16 - id16_map->start] == FALSE);
+ id16_map_dbg->avail[val16 - id16_map->start] = TRUE;
+ }
+#endif /* BCM_DBG && BCM_DBG_ID16 */
+
+ id16_map->stack_idx++;
+ id16_map->stack[id16_map->stack_idx] = val16;
+}
+
+uint32 /* Returns the number of failures to allocate a unique id16 */
+id16_map_failures(void * id16_map_hndl)
+{
+ ASSERT(id16_map_hndl != NULL);
+ return ((id16_map_t *)id16_map_hndl)->failures;
+}
+
+bool
+id16_map_audit(void * id16_map_hndl)
+{
+ int idx;
+ int insane = 0;
+ id16_map_t * id16_map;
+
+ ASSERT(id16_map_hndl != NULL);
+
+ id16_map = (id16_map_t *)id16_map_hndl;
+
+ ASSERT((id16_map->stack_idx >= -1) && (id16_map->stack_idx < id16_map->total));
+ for (idx = 0; idx <= id16_map->stack_idx; idx++) {
+ ASSERT(id16_map->stack[idx] >= id16_map->start);
+ ASSERT(id16_map->stack[idx] < (id16_map->start + id16_map->total));
+
+#if defined(BCM_DBG) && defined(BCM_DBG_ID16)
+ if (id16_map->dbg) {
+ uint16 val16 = id16_map->stack[idx];
+ if (((id16_map_dbg_t *)(id16_map->dbg))->avail[val16 - id16_map->start] != TRUE) {
+ insane |= 1;
+ ID16_MAP_MSG(("id16_map<%p>: stack_idx %u invalid val16 %u\n",
+ id16_map_hndl, idx, val16));
+ }
+ }
+#endif /* BCM_DBG && BCM_DBG_ID16 */
+ }
+
+#if defined(BCM_DBG) && defined(BCM_DBG_ID16)
+ if (id16_map->dbg) {
+ uint16 avail = 0; /* Audit available ids counts */
+ for (idx = 0; idx < ((id16_map_dbg_t *)(id16_map->dbg))->total; idx++) {
+ if (((id16_map_dbg_t *)(id16_map->dbg))->avail[idx] == TRUE)
+ avail++;
+ }
+ if (avail && (avail != (id16_map->stack_idx + 1))) {
+ insane |= 1;
+ ID16_MAP_MSG(("id16_map<%p>: avail %u stack_idx %u\n",
+ id16_map_hndl, avail, id16_map->stack_idx));
+ }
+ }
+#endif /* BCM_DBG && BCM_DBG_ID16 */
+
+ return (!!insane);
+}
+/* END: Simple id16 allocator */
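The allocator above is a plain LIFO stack of ids: init pushes `start .. start+total-1`, alloc pops the top, and free pushes back, so the most recently freed id is reused first. A minimal stand-alone model of that behavior (fixed-capacity, no osh/debug machinery; `ID16_INVALID` assumed to be 0xffff as in the driver headers):

```c
#include <assert.h>
#include <stdint.h>

#define ID16_INVALID 0xffffu

typedef struct { int stack_idx; uint16_t stack[8]; } id16_t;

static void id16_init(id16_t *m, uint16_t total, uint16_t start)
{
    for (uint16_t i = 0; i < total; i++)
        m->stack[i] = (uint16_t)(start + i);
    m->stack_idx = total - 1;                 /* top of stack */
}

static uint16_t id16_alloc(id16_t *m)
{
    if (m->stack_idx < 0)                     /* exhausted */
        return (uint16_t)ID16_INVALID;
    return m->stack[m->stack_idx--];
}

static void id16_free(id16_t *m, uint16_t val)
{
    m->stack[++m->stack_idx] = val;
}
```

Exhaustion is detected by `stack_idx` going negative, which is why the driver keeps it as a signed `int` rather than a `uint16`.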
+
+
#endif /* BCMDRIVER */
/* calculate a >> b; and returns only lower 32 bits */
@@ -3080,3 +2976,84 @@
#define counter_printlog(a) do {} while (0)
#endif /* OSL_SYSUPTIME_SUPPORT == TRUE */
#endif /* DEBUG_COUNTER */
+
+#ifdef BCMDRIVER
+void
+dll_pool_detach(void * osh, dll_pool_t * pool, uint16 elems_max, uint16 elem_size)
+{
+ uint32 mem_size;
+ mem_size = sizeof(dll_pool_t) + (elems_max * elem_size);
+ if (pool)
+ MFREE(osh, pool, mem_size);
+}
+dll_pool_t *
+dll_pool_init(void * osh, uint16 elems_max, uint16 elem_size)
+{
+ uint32 mem_size, i;
+ dll_pool_t * dll_pool_p;
+ dll_t * elem_p;
+
+ ASSERT(elem_size > sizeof(dll_t));
+
+ mem_size = sizeof(dll_pool_t) + (elems_max * elem_size);
+
+ if ((dll_pool_p = (dll_pool_t *)MALLOC(osh, mem_size)) == NULL) {
+ printf("dll_pool_init: elems_max<%u> elem_size<%u> malloc failure\n",
+ elems_max, elem_size);
+ ASSERT(0);
+ return dll_pool_p;
+ }
+
+ bzero(dll_pool_p, mem_size);
+
+ dll_init(&dll_pool_p->free_list);
+ dll_pool_p->elems_max = elems_max;
+ dll_pool_p->elem_size = elem_size;
+
+ elem_p = dll_pool_p->elements;
+ for (i = 0; i < elems_max; i++) {
+ dll_append(&dll_pool_p->free_list, elem_p);
+ elem_p = (dll_t *)((uintptr)elem_p + elem_size);
+ }
+
+ dll_pool_p->free_count = elems_max;
+
+ return dll_pool_p;
+}
+
+
+void *
+dll_pool_alloc(dll_pool_t * dll_pool_p)
+{
+ dll_t * elem_p;
+
+ if (dll_pool_p->free_count == 0) {
+ ASSERT(dll_empty(&dll_pool_p->free_list));
+ return NULL;
+ }
+
+ elem_p = dll_head_p(&dll_pool_p->free_list);
+ dll_delete(elem_p);
+ dll_pool_p->free_count -= 1;
+
+ return (void *)elem_p;
+}
+
+void
+dll_pool_free(dll_pool_t * dll_pool_p, void * elem_p)
+{
+ dll_t * node_p = (dll_t *)elem_p;
+ dll_prepend(&dll_pool_p->free_list, node_p);
+ dll_pool_p->free_count += 1;
+}
+
+
+void
+dll_pool_free_tail(dll_pool_t * dll_pool_p, void * elem_p)
+{
+ dll_t * node_p = (dll_t *)elem_p;
+ dll_append(&dll_pool_p->free_list, node_p);
+ dll_pool_p->free_count += 1;
+}
+
+#endif /* BCMDRIVER */
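The dll_pool above carves `elems_max` fixed-size elements out of a single allocation and threads them onto a free list; alloc pops the head and free pushes back. A sketch of the same idea with a singly linked free list (the driver uses its `dll_t` doubly linked list, which additionally allows freeing to the tail; names here are illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

typedef struct node { struct node *next; } node_t;

typedef struct {
    node_t *free_list;
    unsigned free_count;
    unsigned char mem[];          /* elems_max * elem_size bytes */
} pool_t;

static pool_t *pool_init(unsigned elems_max, size_t elem_size)
{
    assert(elem_size >= sizeof(node_t));   /* free-list link lives in the element */
    pool_t *p = calloc(1, sizeof(*p) + elems_max * elem_size);
    if (!p)
        return NULL;
    for (unsigned i = 0; i < elems_max; i++) {
        node_t *n = (node_t *)(p->mem + i * elem_size);
        n->next = p->free_list;
        p->free_list = n;
    }
    p->free_count = elems_max;
    return p;
}

static void *pool_alloc(pool_t *p)
{
    node_t *n = p->free_list;
    if (!n)
        return NULL;              /* pool exhausted */
    p->free_list = n->next;
    p->free_count--;
    return n;
}

static void pool_free(pool_t *p, void *e)
{
    node_t *n = e;
    n->next = p->free_list;       /* push back onto the free list */
    p->free_list = n;
    p->free_count++;
}
```

This is why the driver asserts `elem_size > sizeof(dll_t)`: the list link is stored inside the free element itself, so no per-element bookkeeping is needed.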
diff --git a/drivers/net/wireless/bcmdhd/bcmwifi_channels.c b/drivers/net/wireless/bcmdhd/bcmwifi_channels.c
old mode 100755
new mode 100644
index f092699..8655937
--- a/drivers/net/wireless/bcmdhd/bcmwifi_channels.c
+++ b/drivers/net/wireless/bcmdhd/bcmwifi_channels.c
@@ -4,13 +4,13 @@
* software that might want wifi things as it grows.
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -18,7 +18,7 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
@@ -232,6 +232,18 @@
return -1;
}
+/* wrapper function for wf_chspec_ntoa. In case of an error it puts
+ * the original chanspec in the output buffer, prepended with "invalid".
+ * Can be used directly in print routines as it handles the NULL return case.
+ */
+char *
+wf_chspec_ntoa_ex(chanspec_t chspec, char *buf)
+{
+ if (wf_chspec_ntoa(chspec, buf) == NULL)
+ snprintf(buf, CHANSPEC_STR_LEN, "invalid 0x%04x", chspec);
+ return buf;
+}
+
/* given a chanspec and a string buffer, format the chanspec as a
* string, and return the original pointer a.
* Min buffer length must be CHANSPEC_STR_LEN.
@@ -524,32 +536,25 @@
int ch1_id = 0, ch2_id = 0;
int sb;
+ /* look up the channel ID for the specified channel numbers */
ch1_id = channel_80mhz_to_id(ch1);
ch2_id = channel_80mhz_to_id(ch2);
/* validate channels */
- if (ch1 >= ch2 || ch1_id < 0 || ch2_id < 0)
+ if (ch1_id < 0 || ch2_id < 0)
return 0;
- /* combined channel in chspec */
- chspec_ch = (((uint16)ch1_id << WL_CHANSPEC_CHAN1_SHIFT) |
- ((uint16)ch2_id << WL_CHANSPEC_CHAN2_SHIFT));
+ /* combine 2 channel IDs in channel field of chspec */
+ chspec_ch = (((uint)ch1_id << WL_CHANSPEC_CHAN1_SHIFT) |
+ ((uint)ch2_id << WL_CHANSPEC_CHAN2_SHIFT));
- /* figure out ctl sideband */
+ /* figure out primary 20 MHz sideband */
- /* does the primary channel fit with the 1st 80MHz channel ? */
+ /* is the primary channel contained in the 1st 80MHz channel? */
sb = channel_to_sb(ch1, ctl_ch, bw);
if (sb < 0) {
- /* no, so does the primary channel fit with the 2nd 80MHz channel ? */
- sb = channel_to_sb(ch2, ctl_ch, bw);
- if (sb < 0) {
- /* no match for ctl_ch to either 80MHz center channel */
- return 0;
- }
- /* sb index is 0-3 for the low 80MHz channel, and 4-7 for
- * the high 80MHz channel. Add 4 to to shift to high set.
- */
- sb += 4;
+ /* no match for primary channel 'ctl_ch' in segment0 80MHz channel */
+ return 0;
}
chspec_sb = sb << WL_CHANSPEC_CTL_SB_SHIFT;
@@ -586,15 +591,12 @@
if (chspec_bw == WL_CHANSPEC_BW_8080) {
uint ch1_id, ch2_id;
- /* channel number in 80+80 must be in range */
+ /* channel IDs in 80+80 must be in range */
ch1_id = CHSPEC_CHAN1(chanspec);
ch2_id = CHSPEC_CHAN2(chanspec);
if (ch1_id >= WF_NUM_5G_80M_CHANS || ch2_id >= WF_NUM_5G_80M_CHANS)
return TRUE;
- /* ch2 must be above ch1 for the chanspec */
- if (ch2_id <= ch1_id)
- return TRUE;
} else if (chspec_bw == WL_CHANSPEC_BW_20 || chspec_bw == WL_CHANSPEC_BW_40 ||
chspec_bw == WL_CHANSPEC_BW_80 || chspec_bw == WL_CHANSPEC_BW_160) {
@@ -617,11 +619,14 @@
} else if (chspec_bw == WL_CHANSPEC_BW_40) {
if (CHSPEC_CTL_SB(chanspec) > WL_CHANSPEC_CTL_SB_LLU)
return TRUE;
- } else if (chspec_bw == WL_CHANSPEC_BW_80) {
+ } else if (chspec_bw == WL_CHANSPEC_BW_80 ||
+ chspec_bw == WL_CHANSPEC_BW_8080) {
if (CHSPEC_CTL_SB(chanspec) > WL_CHANSPEC_CTL_SB_LUU)
return TRUE;
}
-
+ else if (chspec_bw == WL_CHANSPEC_BW_160) {
+ ASSERT(CHSPEC_CTL_SB(chanspec) <= WL_CHANSPEC_CTL_SB_UUU);
+ }
return FALSE;
}
@@ -654,10 +659,9 @@
ch1 = wf_5g_80m_chans[CHSPEC_CHAN1(chanspec)];
ch2 = wf_5g_80m_chans[CHSPEC_CHAN2(chanspec)];
- /* the two channels must be separated by more than 80MHz by VHT req,
- * and ch2 above ch1 for the chanspec
- */
- if (ch2 > ch1 + CH_80MHZ_APART)
+ /* the two channels must be separated by more than 80MHz by VHT req */
+ if ((ch2 > ch1 + CH_80MHZ_APART) ||
+ (ch1 > ch2 + CH_80MHZ_APART))
return TRUE;
} else {
const uint8 *center_ch;
@@ -740,18 +744,15 @@
sb = CHSPEC_CTL_SB(chspec) >> WL_CHANSPEC_CTL_SB_SHIFT;
if (CHSPEC_IS8080(chspec)) {
+ /* For an 80+80 MHz channel, the sideband 'sb' field is an 80 MHz sideband
+ * (LL, LU, UL, UU) for the 80 MHz frequency segment 0.
+ */
+ uint chan_id = CHSPEC_CHAN1(chspec);
+
bw_mhz = 80;
- if (sb < 4) {
- center_chan = CHSPEC_CHAN1(chspec);
- }
- else {
- center_chan = CHSPEC_CHAN2(chspec);
- sb -= 4;
- }
-
/* convert from channel index to channel number */
- center_chan = wf_5g_80m_chans[center_chan];
+ center_chan = wf_5g_80m_chans[chan_id];
}
else {
bw_mhz = bw_chspec_to_mhz(chspec);
@@ -762,6 +763,13 @@
}
}
+/* given a chanspec, return the bandwidth string */
+char *
+wf_chspec_to_bw_str(chanspec_t chspec)
+{
+ return (char *)wf_chspec_bw_str[(CHSPEC_BW(chspec) >> WL_CHANSPEC_BW_SHIFT)];
+}
+
/*
* This function returns the chanspec of the control channel of a given chanspec
*/
@@ -847,22 +855,25 @@
ASSERT(!wf_chspec_malformed(chspec));
+ /* if the chanspec is > 80MHz, use the helper routine to find the primary 80 MHz channel */
+ if (CHSPEC_IS8080(chspec) || CHSPEC_IS160(chspec)) {
+ chspec = wf_chspec_primary80_chspec(chspec);
+ }
+
+ /* determine primary 40 MHz sub-channel of an 80 MHz chanspec */
if (CHSPEC_IS80(chspec)) {
center_chan = CHSPEC_CHANNEL(chspec);
sb = CHSPEC_CTL_SB(chspec);
- if (sb == WL_CHANSPEC_CTL_SB_UL) {
- /* Primary 40MHz is on upper side */
- sb = WL_CHANSPEC_CTL_SB_L;
- center_chan += CH_20MHZ_APART;
- } else if (sb == WL_CHANSPEC_CTL_SB_UU) {
- /* Primary 40MHz is on upper side */
- sb = WL_CHANSPEC_CTL_SB_U;
- center_chan += CH_20MHZ_APART;
- } else {
+ if (sb < WL_CHANSPEC_CTL_SB_UL) {
/* Primary 40MHz is on lower side */
- /* sideband bits are the same for LL/LU and L/U */
center_chan -= CH_20MHZ_APART;
+ /* sideband bits are the same for LL/LU and L/U */
+ } else {
+ /* Primary 40MHz is on upper side */
+ center_chan += CH_20MHZ_APART;
+ /* sideband bits need to be adjusted by UL offset */
+ sb -= WL_CHANSPEC_CTL_SB_UL;
}
/* Create primary 40MHz chanspec */
@@ -962,52 +973,101 @@
return freq;
}
+static const uint16 sidebands[] = {
+ WL_CHANSPEC_CTL_SB_LLL, WL_CHANSPEC_CTL_SB_LLU,
+ WL_CHANSPEC_CTL_SB_LUL, WL_CHANSPEC_CTL_SB_LUU,
+ WL_CHANSPEC_CTL_SB_ULL, WL_CHANSPEC_CTL_SB_ULU,
+ WL_CHANSPEC_CTL_SB_UUL, WL_CHANSPEC_CTL_SB_UUU
+};
+
+/*
+ * Returns the chanspec 80Mhz channel corresponding to the following input
+ * parameters
+ *
+ * primary_channel - primary 20Mhz channel
+ * center_channel - center frequency of the 80Mhz channel
+ *
+ * The center_channel can be one of {42, 58, 106, 122, 138, 155}
+ *
+ * returns INVCHANSPEC in case of error
+ */
+chanspec_t
+wf_chspec_80(uint8 center_channel, uint8 primary_channel)
+{
+
+ chanspec_t chanspec = INVCHANSPEC;
+ chanspec_t chanspec_cur;
+ uint i;
+
+ for (i = 0; i < WF_NUM_SIDEBANDS_80MHZ; i++) {
+ chanspec_cur = CH80MHZ_CHSPEC(center_channel, sidebands[i]);
+ if (primary_channel == wf_chspec_ctlchan(chanspec_cur)) {
+ chanspec = chanspec_cur;
+ break;
+ }
+ }
+ /* If the loop ended early, we are good; otherwise we did not
+ * find an 80MHz chanspec with the given center_channel that had a
+ * primary channel matching the given primary_channel.
+ */
+ return chanspec;
+}
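The sideband search in `wf_chspec_80()` relies on a fixed layout: an 80 MHz channel centered on channel N contains four 20 MHz channels, 4 channel numbers apart, starting at N - 6, and the sideband index 0..3 (LL, LU, UL, UU) is the primary channel's position among them. An illustrative stand-alone model of that mapping (not the driver's `channel_to_sb()`):

```c
#include <assert.h>
#include <stdint.h>

/* Return the 80 MHz sideband index (0=LL, 1=LU, 2=UL, 3=UU) of a primary
 * 20 MHz channel within the 80 MHz channel centered on 'center', or -1 if
 * the primary channel does not lie in that 80 MHz channel. */
static int sb_index_80(uint8_t center, uint8_t primary)
{
    int lowest = center - 6;      /* lowest 20 MHz channel in the 80 MHz */
    int off = primary - lowest;
    if (off < 0 || off > 12 || (off % 4) != 0)
        return -1;
    return off / 4;
}
```

For example, center channel 42 covers 20 MHz channels 36, 40, 44, 48; channel 52 belongs to the neighboring 80 MHz channel (center 58) and so yields no sideband.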
+
/*
* Returns the 80+80 chanspec corresponding to the following input parameters
*
- * primary_20mhz - Primary 20 Mhz channel
- * chan1 - channel number of first 80 Mhz band
- * chan2 - channel number of second 80 Mhz band
+ * primary_20mhz - Primary 20 MHz channel
+ * chan0 - center channel number of one frequency segment
+ * chan1 - center channel number of the other frequency segment
*
- * parameters chan1 and chan2 are channel numbers in {42, 58, 106, 122, 138, 155}
+ * Parameters chan0 and chan1 are channel numbers in {42, 58, 106, 122, 138, 155}.
+ * The primary channel must be contained in one of the 80MHz channels. This routine
+ * will determine which frequency segment is the primary 80 MHz segment.
*
- * returns INVCHANSPEC in case of error
+ * Returns INVCHANSPEC in case of error.
+ *
+ * Refer to IEEE802.11ac section 22.3.14 "Channelization".
*/
-
chanspec_t
-wf_chspec_get8080_chspec(uint8 primary_20mhz, uint8 chan1, uint8 chan2)
+wf_chspec_get8080_chspec(uint8 primary_20mhz, uint8 chan0, uint8 chan1)
{
int sb = 0;
uint16 chanspec = 0;
- int chan1_id = 0, chan2_id = 0;
+ int chan0_id = 0, chan1_id = 0;
+ int seg0, seg1;
+
+ chan0_id = channel_80mhz_to_id(chan0);
+ chan1_id = channel_80mhz_to_id(chan1);
+
+ /* make sure the channel numbers were valid */
+ if (chan0_id == -1 || chan1_id == -1)
+ return INVCHANSPEC;
/* does the primary channel fit with the 1st 80MHz channel ? */
- sb = channel_to_sb(chan1, primary_20mhz, 80);
- if (sb < 0) {
+ sb = channel_to_sb(chan0, primary_20mhz, 80);
+ if (sb >= 0) {
+ /* yes, so chan0 is frequency segment 0, and chan1 is seg 1 */
+ seg0 = chan0_id;
+ seg1 = chan1_id;
+ } else {
/* no, so does the primary channel fit with the 2nd 80MHz channel ? */
- sb = channel_to_sb(chan2, primary_20mhz, 80);
+ sb = channel_to_sb(chan1, primary_20mhz, 80);
if (sb < 0) {
/* no match for ctl_ch to either 80MHz center channel */
return INVCHANSPEC;
}
- /* sb index is 0-3 for the low 80MHz channel, and 4-7 for
- * the high 80MHz channel. Add 4 to to shift to high set.
- */
- sb += 4;
+ /* swapped, so chan1 is frequency segment 0, and chan0 is seg 1 */
+ seg0 = chan1_id;
+ seg1 = chan0_id;
}
- chan1_id = channel_80mhz_to_id(chan1);
- chan2_id = channel_80mhz_to_id(chan2);
- if (chan1_id == -1 || chan2_id == -1)
- return INVCHANSPEC;
- chanspec = (chan1_id << WL_CHANSPEC_CHAN1_SHIFT)|
- (chan2_id << WL_CHANSPEC_CHAN2_SHIFT)|
- (sb << WL_CHANSPEC_CTL_SB_SHIFT)|
- (WL_CHANSPEC_BW_8080)|
- (WL_CHANSPEC_BAND_5G);
+ chanspec = ((seg0 << WL_CHANSPEC_CHAN1_SHIFT) |
+ (seg1 << WL_CHANSPEC_CHAN2_SHIFT) |
+ (sb << WL_CHANSPEC_CTL_SB_SHIFT) |
+ WL_CHANSPEC_BW_8080 |
+ WL_CHANSPEC_BAND_5G);
return chanspec;
-
}
/*
@@ -1033,46 +1093,29 @@
uint8
wf_chspec_primary80_channel(chanspec_t chanspec)
{
- uint8 chan1 = 0, chan2 = 0, primary_20mhz = 0, primary80_chan = 0;
- int sb = 0;
-
- primary_20mhz = wf_chspec_ctlchan(chanspec);
+ uint8 primary80_chan;
if (CHSPEC_IS80(chanspec)) {
primary80_chan = CHSPEC_CHANNEL(chanspec);
}
else if (CHSPEC_IS8080(chanspec)) {
- chan1 = wf_chspec_get80Mhz_ch(CHSPEC_CHAN1(chanspec));
- chan2 = wf_chspec_get80Mhz_ch(CHSPEC_CHAN2(chanspec));
-
- /* does the primary channel fit with the 1st 80MHz channel ? */
- sb = channel_to_sb(chan1, primary_20mhz, 80);
- if (sb < 0) {
- /* no, so does the primary channel fit with the 2nd 80MHz channel ? */
- sb = channel_to_sb(chan2, primary_20mhz, 80);
- if (!(sb < 0)) {
- primary80_chan = chan2;
- }
- }
- else {
- primary80_chan = chan1;
- }
+ /* Channel ID 1 corresponds to frequency segment 0, the primary 80 MHz segment */
+ primary80_chan = wf_chspec_get80Mhz_ch(CHSPEC_CHAN1(chanspec));
}
else if (CHSPEC_IS160(chanspec)) {
- chan1 = CHSPEC_CHANNEL(chanspec);
- sb = channel_to_sb(chan1, primary_20mhz, 160);
- if (!(sb < 0)) {
- /* based on the sb value primary 80 channel can be retrieved
- * if sb is in range 0 to 3 the lower band is the 80Mhz primary band
- */
- if (sb < 4) {
- primary80_chan = chan1 - CH_40MHZ_APART;
- }
- /* if sb is in range 4 to 7 the lower band is the 80Mhz primary band */
- else
- {
- primary80_chan = chan1 + CH_40MHZ_APART;
- }
+ uint8 center_chan = CHSPEC_CHANNEL(chanspec);
+ uint sb = CHSPEC_CTL_SB(chanspec) >> WL_CHANSPEC_CTL_SB_SHIFT;
+
+ /* based on the sb value primary 80 channel can be retrieved
+ * if sb is in range 0 to 3 the lower band is the 80Mhz primary band
+ */
+ if (sb < 4) {
+ primary80_chan = center_chan - CH_40MHZ_APART;
+ }
+ /* if sb is in range 4 to 7 the upper band is the 80Mhz primary band */
+ else
+ {
+ primary80_chan = center_chan + CH_40MHZ_APART;
}
}
else {
@@ -1087,55 +1130,35 @@
*
* chanspec - Input chanspec for which the 80MHz secondary channel has to be retrieved
*
- * returns -1 in case the provided channel is 20/40 Mhz chanspec
+ * returns -1 in case the provided channel is 20/40/80 Mhz chanspec
*/
uint8
wf_chspec_secondary80_channel(chanspec_t chanspec)
{
- uint8 chan1 = 0, chan2 = 0, primary_20mhz = 0, secondary80_chan = 0;
- int sb = 0;
+ uint8 secondary80_chan;
- primary_20mhz = wf_chspec_ctlchan(chanspec);
- if (CHSPEC_IS80(chanspec)) {
- secondary80_chan = -1;
- }
- else if (CHSPEC_IS8080(chanspec)) {
- chan1 = wf_chspec_get80Mhz_ch(CHSPEC_CHAN1(chanspec));
- chan2 = wf_chspec_get80Mhz_ch(CHSPEC_CHAN2(chanspec));
-
- /* does the primary channel fit with the 1st 80MHz channel ? */
- sb = channel_to_sb(chan1, primary_20mhz, 80);
- if (sb < 0) {
- /* no, so does the primary channel fit with the 2nd 80MHz channel ? */
- sb = channel_to_sb(chan2, primary_20mhz, 80);
- if (!(sb < 0)) {
- secondary80_chan = chan1;
- }
- }
- else {
- secondary80_chan = chan2;
- }
+ if (CHSPEC_IS8080(chanspec)) {
+ secondary80_chan = wf_chspec_get80Mhz_ch(CHSPEC_CHAN2(chanspec));
}
else if (CHSPEC_IS160(chanspec)) {
- chan1 = CHSPEC_CHANNEL(chanspec);
- sb = channel_to_sb(chan1, primary_20mhz, 160);
- if (!(sb < 0)) {
- /* based on the sb value secondary 80 channel can be retrieved
- *if sb is in range 0 to 3 upper band is the secondary 80Mhz band
- */
- if (sb < 4) {
- secondary80_chan = chan1 + CH_40MHZ_APART;
- }
- /* if sb is in range 4 to 7 the lower band is the secondary 80Mhz band */
- else
- {
- secondary80_chan = chan1 - CH_40MHZ_APART;
- }
+ uint8 center_chan = CHSPEC_CHANNEL(chanspec);
+ uint sb = CHSPEC_CTL_SB(chanspec) >> WL_CHANSPEC_CTL_SB_SHIFT;
+
+ /* based on the sb value secondary 80 channel can be retrieved
+ * if sb is in range 0 to 3 upper band is the secondary 80Mhz band
+ */
+ if (sb < 4) {
+ secondary80_chan = center_chan + CH_40MHZ_APART;
+ }
+ /* if sb is in range 4 to 7 the lower band is the secondary 80Mhz band */
+ else
+ {
+ secondary80_chan = center_chan - CH_40MHZ_APART;
}
}
else {
- /* for 20 and 40 Mhz */
- secondary80_chan = -1;
+ /* for 20, 40, and 80 Mhz */
+ secondary80_chan = -1;
}
return secondary80_chan;
}
@@ -1145,55 +1168,62 @@
*
* chanspec - Input chanspec for which the primary 80Mhz chanspec has to be retreived
*
- * returns INVCHANSPEC in case the provided channel is 20/40 Mhz chanspec
+ * returns the input chanspec in case the provided chanspec is an 80 MHz chanspec
+ * returns INVCHANSPEC in case the provided channel is 20/40 MHz chanspec
*/
chanspec_t
wf_chspec_primary80_chspec(chanspec_t chspec)
{
chanspec_t chspec80;
- uint center_chan, chan1 = 0, chan2 = 0;
+ uint center_chan;
uint sb;
ASSERT(!wf_chspec_malformed(chspec));
- if (CHSPEC_IS8080(chspec)) {
- chan1 = wf_chspec_get80Mhz_ch(CHSPEC_CHAN1(chspec));
- chan2 = wf_chspec_get80Mhz_ch(CHSPEC_CHAN2(chspec));
+ if (CHSPEC_IS80(chspec)) {
+ chspec80 = chspec;
+ }
+ else if (CHSPEC_IS8080(chspec)) {
+
+ /* Channel ID 1 corresponds to frequency segment 0, the primary 80 MHz segment */
+ center_chan = wf_chspec_get80Mhz_ch(CHSPEC_CHAN1(chspec));
sb = CHSPEC_CTL_SB(chspec);
- if (sb < 4) {
- /* Primary 80MHz is on lower side */
- center_chan = chan1;
- }
- else
- {
- /* Primary 80MHz is on upper side */
- center_chan = chan2;
- sb -= 4;
- }
/* Create primary 80MHz chanspec */
- chspec80 = (WL_CHANSPEC_BAND_5G | WL_CHANSPEC_BW_80 |sb | center_chan);
+ chspec80 = (WL_CHANSPEC_BAND_5G | WL_CHANSPEC_BW_80 | sb | center_chan);
}
else if (CHSPEC_IS160(chspec)) {
center_chan = CHSPEC_CHANNEL(chspec);
sb = CHSPEC_CTL_SB(chspec);
- if (sb < 4) {
- /* Primary 80MHz is on upper side */
+ if (sb < WL_CHANSPEC_CTL_SB_ULL) {
+ /* Primary 80MHz is on lower side */
center_chan -= CH_40MHZ_APART;
}
- else
- {
- /* Primary 80MHz is on lower side */
+ else {
+ /* Primary 80MHz is on upper side */
center_chan += CH_40MHZ_APART;
- sb -= 4;
+ sb -= WL_CHANSPEC_CTL_SB_ULL;
}
/* Create primary 80MHz chanspec */
chspec80 = (WL_CHANSPEC_BAND_5G | WL_CHANSPEC_BW_80 | sb | center_chan);
}
- else
- {
+ else {
chspec80 = INVCHANSPEC;
}
+
return chspec80;
}
+
+#ifdef WL11AC_80P80
+uint8
+wf_chspec_channel(chanspec_t chspec)
+{
+ if (CHSPEC_IS8080(chspec)) {
+ return wf_chspec_primary80_channel(chspec);
+ }
+ else {
+ return ((uint8)((chspec) & WL_CHANSPEC_CHAN_MASK));
+ }
+}
+#endif /* WL11AC_80P80 */
diff --git a/drivers/net/wireless/bcmdhd/bcmwifi_channels.h b/drivers/net/wireless/bcmdhd/bcmwifi_channels.h
old mode 100755
new mode 100644
index f642555..b3a446e
--- a/drivers/net/wireless/bcmdhd/bcmwifi_channels.h
+++ b/drivers/net/wireless/bcmdhd/bcmwifi_channels.h
@@ -4,13 +4,13 @@
* both the wl driver, tools & Apps.
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -18,7 +18,7 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
@@ -43,10 +43,15 @@
#define CH_10MHZ_APART 2
#define CH_5MHZ_APART 1 /* 2G band channels are 5 Mhz apart */
#define CH_MAX_2G_CHANNEL 14 /* Max channel in 2G band */
-#define MAXCHANNEL 224 /* max # supported channels. The max channel no is 216,
+#define MAXCHANNEL 224 /* max # supported channels. The max channel no is above,
* this is that + 1 rounded up to a multiple of NBBY (8).
* DO NOT MAKE it > 255: channels are uint8's all over
*/
+#define MAXCHANNEL_NUM (MAXCHANNEL - 1) /* max channel number */
+
+/* make sure channel num is within valid range */
+#define CH_NUM_VALID_RANGE(ch_num) ((ch_num) > 0 && (ch_num) <= MAXCHANNEL_NUM)
+
#define CHSPEC_CTLOVLP(sp1, sp2, sep) (ABS(wf_chspec_ctlchan(sp1) - wf_chspec_ctlchan(sp2)) < \
(sep))
@@ -131,7 +136,11 @@
WL_CHANSPEC_BW_160 | WL_CHANSPEC_BAND_5G)
/* simple MACROs to get different fields of chanspec */
+#ifdef WL11AC_80P80
+#define CHSPEC_CHANNEL(chspec) wf_chspec_channel(chspec)
+#else
#define CHSPEC_CHANNEL(chspec) ((uint8)((chspec) & WL_CHANSPEC_CHAN_MASK))
+#endif
#define CHSPEC_CHAN1(chspec) ((chspec) & WL_CHANSPEC_CHAN1_MASK) >> WL_CHANSPEC_CHAN1_SHIFT
#define CHSPEC_CHAN2(chspec) ((chspec) & WL_CHANSPEC_CHAN2_MASK) >> WL_CHANSPEC_CHAN2_SHIFT
#define CHSPEC_BAND(chspec) ((chspec) & WL_CHANSPEC_BAND_MASK)
@@ -294,12 +303,34 @@
#define WLC_2G_25MHZ_OFFSET 5 /* 2.4GHz band channel offset */
/**
+ * Number of sub-band values of the specified MHz chanspec
+ */
+#define WF_NUM_SIDEBANDS_40MHZ 2
+#define WF_NUM_SIDEBANDS_80MHZ 4
+#define WF_NUM_SIDEBANDS_8080MHZ 4
+#define WF_NUM_SIDEBANDS_160MHZ 8
+
+/**
* Convert chanspec to ascii string
*
* @param chspec chanspec format
* @param buf ascii string of chanspec
*
* @return pointer to buf with room for at least CHANSPEC_STR_LEN bytes
+ * Original chanspec in case of error
+ *
+ * @see CHANSPEC_STR_LEN
+ */
+extern char * wf_chspec_ntoa_ex(chanspec_t chspec, char *buf);
+
+/**
+ * Convert chanspec to ascii string
+ *
+ * @param chspec chanspec format
+ * @param buf ascii string of chanspec
+ *
+ * @return pointer to buf with room for at least CHANSPEC_STR_LEN bytes
+ * NULL in case of error
*
* @see CHANSPEC_STR_LEN
*/
@@ -350,6 +381,17 @@
extern uint8 wf_chspec_ctlchan(chanspec_t chspec);
/**
+ * Return the bandwidth string.
+ *
+ * This function returns the bandwidth string for the passed chanspec.
+ *
+ * @param chspec input chanspec
+ *
+ * @return Returns the bandwidth string
+ */
+extern char * wf_chspec_to_bw_str(chanspec_t chspec);
+
+/**
* Return the primary (control) chanspec.
*
* This function returns the chanspec of the primary 20MHz channel. For 20MHz
@@ -429,6 +471,19 @@
extern int wf_channel2mhz(uint channel, uint start_factor);
/**
+ * Returns the chanspec 80Mhz channel corresponding to the following input
+ * parameters
+ *
+ * primary_channel - primary 20Mhz channel
+ * center_channel - center frequency of the 80Mhz channel
+ *
+ * The center_channel can be one of {42, 58, 106, 122, 138, 155}
+ *
+ * returns INVCHANSPEC in case of error
+ */
+extern chanspec_t wf_chspec_80(uint8 center_channel, uint8 primary_channel);
+
+/**
* Convert ctl chan and bw to chanspec
*
* @param ctl_ch channel
@@ -443,19 +498,22 @@
extern uint wf_freq2channel(uint freq);
/*
- * Returns the 80+80 chanspec corresponding to the following input parameters
+ * Returns the 80+80 MHz chanspec corresponding to the following input parameters
*
- * primary_20mhz - Primary 20 Mhz channel
- * chan1 - channel number of first 80 Mhz band
- * chan2 - channel number of second 80 Mhz band
+ * primary_20mhz - Primary 20 MHz channel
+ * chan0_80MHz - center channel number of one frequency segment
+ * chan1_80MHz - center channel number of the other frequency segment
*
- * parameters chan1 and chan2 are channel numbers in {42, 58, 106, 122, 138, 155}
+ * Parameters chan0_80MHz and chan1_80MHz are channel numbers in {42, 58, 106, 122, 138, 155}.
+ * The primary channel must be contained in one of the 80MHz channels. This routine
+ * will determine which frequency segment is the primary 80 MHz segment.
*
- * returns INVCHANSPEC in case of error
+ * Returns INVCHANSPEC in case of error.
+ *
+ * Refer to IEEE802.11ac section 22.3.14 "Channelization".
*/
-
extern chanspec_t wf_chspec_get8080_chspec(uint8 primary_20mhz,
-uint8 chan1_80Mhz, uint8 chan2_80Mhz);
+ uint8 chan0_80Mhz, uint8 chan1_80Mhz);
/*
* Returns the primary 80 Mhz channel for the provided chanspec
@@ -480,5 +538,11 @@
*/
extern chanspec_t wf_chspec_primary80_chspec(chanspec_t chspec);
-
+#ifdef WL11AC_80P80
+/*
+ * This function returns the centre channel for the given chanspec.
+ * In case of an 80+80 chanspec it returns the primary 80 MHz centre channel.
+ */
+extern uint8 wf_chspec_channel(chanspec_t chspec);
+#endif
#endif /* _bcmwifi_channels_h_ */
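The 80 MHz chanspec construction documented for `wf_chspec_80()` above hinges on locating the primary 20 MHz channel inside the 80 MHz channel: the four 20 MHz sub-channels sit at center−6, center−2, center+2 and center+6 (channel numbers are spaced 4 apart). A standalone sketch of that sideband lookup — a hypothetical helper for illustration, not the driver's implementation:

```c
#include <stdint.h>

/* Hypothetical helper: maps a primary 20 MHz channel onto its sideband
 * index (0..3) within an 80 MHz channel whose center channel is one of
 * {42, 58, 106, 122, 138, 155}. Returns -1 if the center channel is
 * invalid or the primary channel is not contained in that 80 MHz channel.
 */
static int sb_index_80(uint8_t center_channel, uint8_t primary_channel)
{
	static const uint8_t valid_centers[] = { 42, 58, 106, 122, 138, 155 };
	int i, ok = 0;

	for (i = 0; i < (int)(sizeof(valid_centers) / sizeof(valid_centers[0])); i++)
		if (valid_centers[i] == center_channel)
			ok = 1;
	if (!ok)
		return -1;

	/* sub-channels live at center-6, center-2, center+2, center+6 */
	if (primary_channel < center_channel - 6 ||
	    primary_channel > center_channel + 6 ||
	    (primary_channel - (center_channel - 6)) % 4 != 0)
		return -1;

	return (primary_channel - (center_channel - 6)) / 4;
}
```

For example, channel 36 is the lowest sub-channel of the 80 MHz channel centered on 42, so it maps to index 0; channel 50 is not a valid 20 MHz channel number inside that block.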
diff --git a/drivers/net/wireless/bcmdhd/bcmwifi_rates.h b/drivers/net/wireless/bcmdhd/bcmwifi_rates.h
old mode 100755
new mode 100644
index 38d339b..f8983a1
--- a/drivers/net/wireless/bcmdhd/bcmwifi_rates.h
+++ b/drivers/net/wireless/bcmdhd/bcmwifi_rates.h
@@ -34,9 +34,16 @@
#define WL_RATESET_SZ_DSSS 4
#define WL_RATESET_SZ_OFDM 8
-#define WL_RATESET_SZ_HT_MCS 8
#define WL_RATESET_SZ_VHT_MCS 10
+#if defined(WLPROPRIETARY_11N_RATES)
+#define WL_RATESET_SZ_HT_MCS WL_RATESET_SZ_VHT_MCS
+#else
+#define WL_RATESET_SZ_HT_MCS 8
+#endif
+
+#define WL_RATESET_SZ_HT_IOCTL 8 /* MAC histogram, compatibility with wl utility */
+
#define WL_TX_CHAINS_MAX 3
#define WL_RATE_DISABLED (-128) /* Power value corresponding to unsupported rate */
@@ -46,14 +53,19 @@
WL_TX_BW_20,
WL_TX_BW_40,
WL_TX_BW_80,
- WL_TX_BW_160,
WL_TX_BW_20IN40,
WL_TX_BW_20IN80,
WL_TX_BW_40IN80,
+ WL_TX_BW_160,
WL_TX_BW_20IN160,
WL_TX_BW_40IN160,
WL_TX_BW_80IN160,
- WL_TX_BW_ALL
+ WL_TX_BW_ALL,
+ WL_TX_BW_8080,
+ WL_TX_BW_8080CHAN2,
+ WL_TX_BW_20IN8080,
+ WL_TX_BW_40IN8080,
+ WL_TX_BW_80IN8080
} wl_tx_bw_t;
diff --git a/drivers/net/wireless/bcmdhd/bcmxtlv.c b/drivers/net/wireless/bcmdhd/bcmxtlv.c
new file mode 100644
index 0000000..f89d151
--- /dev/null
+++ b/drivers/net/wireless/bcmdhd/bcmxtlv.c
@@ -0,0 +1,403 @@
+/*
+ * Driver O/S-independent utility routines
+ *
+ * $Copyright Broadcom Corporation$
+ * $Id: bcmxtlv.c 458062 2014-02-25 19:34:27Z nehru $
+ */
+
+#ifndef __FreeBSD__
+#include <bcm_cfg.h>
+#endif
+
+#include <typedefs.h>
+#include <bcmdefs.h>
+
+#if defined(__FreeBSD__) || defined(__NetBSD__)
+#include <machine/stdarg.h>
+#else
+#include <stdarg.h>
+#endif /* __FreeBSD__ */
+
+#ifdef BCMDRIVER
+ #include <osl.h>
+#else /* !BCMDRIVER */
	#include <stdlib.h>
	#include <stdio.h>
	#include <string.h>
+#ifndef ASSERT
+ #define ASSERT(exp)
+#endif
+inline void* MALLOCZ(void *o, size_t s) { BCM_REFERENCE(o); return calloc(1, s); }
+inline void MFREE(void *o, void *p, size_t s) { BCM_REFERENCE(o); BCM_REFERENCE(s); free(p); }
+#endif /* !BCMDRIVER */
+
+#include <bcmendian.h>
+#include <bcmutils.h>
+
+static INLINE int bcm_xtlv_size_for_data(int dlen, bcm_xtlv_opts_t opts)
+{
+ return ((opts & BCM_XTLV_OPTION_ALIGN32) ? ALIGN_SIZE(dlen + BCM_XTLV_HDR_SIZE, 4)
+ : (dlen + BCM_XTLV_HDR_SIZE));
+}
+
+bcm_xtlv_t *
+bcm_next_xtlv(bcm_xtlv_t *elt, int *buflen, bcm_xtlv_opts_t opts)
+{
+ int sz;
+#ifdef BCMDBG
+ /* validate current elt */
+ if (!bcm_valid_xtlv(elt, *buflen, opts))
+ return NULL;
+#endif
+ /* advance to next elt */
+ sz = BCM_XTLV_SIZE(elt, opts);
+ elt = (bcm_xtlv_t*)((uint8 *)elt + sz);
+ *buflen -= sz;
+
+ /* validate next elt */
+ if (!bcm_valid_xtlv(elt, *buflen, opts))
+ return NULL;
+
+ return elt;
+}
+
+int
+bcm_xtlv_buf_init(bcm_xtlvbuf_t *tlv_buf, uint8 *buf, uint16 len, bcm_xtlv_opts_t opts)
+{
+ if (!tlv_buf || !buf || !len)
+ return BCME_BADARG;
+
+ tlv_buf->opts = opts;
+ tlv_buf->size = len;
+ tlv_buf->head = buf;
+ tlv_buf->buf = buf;
+ return BCME_OK;
+}
+
+uint16
+bcm_xtlv_buf_len(bcm_xtlvbuf_t *tbuf)
+{
+ if (tbuf == NULL) return 0;
+ return (tbuf->buf - tbuf->head);
+}
+uint16
+bcm_xtlv_buf_rlen(bcm_xtlvbuf_t *tbuf)
+{
+ if (tbuf == NULL) return 0;
+ return tbuf->size - bcm_xtlv_buf_len(tbuf);
+}
+uint8 *
+bcm_xtlv_buf(bcm_xtlvbuf_t *tbuf)
+{
+ if (tbuf == NULL) return NULL;
+ return tbuf->buf;
+}
+uint8 *
+bcm_xtlv_head(bcm_xtlvbuf_t *tbuf)
+{
+ if (tbuf == NULL) return NULL;
+ return tbuf->head;
+}
+int
+bcm_xtlv_put_data(bcm_xtlvbuf_t *tbuf, uint16 type, const void *data, uint16 dlen)
+{
+ bcm_xtlv_t *xtlv;
+ int size;
+
+ if (tbuf == NULL)
+ return BCME_BADARG;
+ size = bcm_xtlv_size_for_data(dlen, tbuf->opts);
+ if (bcm_xtlv_buf_rlen(tbuf) < size)
+ return BCME_NOMEM;
+ xtlv = (bcm_xtlv_t *)bcm_xtlv_buf(tbuf);
+ xtlv->id = htol16(type);
+ xtlv->len = htol16(dlen);
+ memcpy(xtlv->data, data, dlen);
+ tbuf->buf += size;
+ return BCME_OK;
+}
+int
+bcm_xtlv_put_8(bcm_xtlvbuf_t *tbuf, uint16 type, const int8 data)
+{
+ bcm_xtlv_t *xtlv;
+ int size;
+
+ if (tbuf == NULL)
+ return BCME_BADARG;
+ size = bcm_xtlv_size_for_data(1, tbuf->opts);
+ if (bcm_xtlv_buf_rlen(tbuf) < size)
+ return BCME_NOMEM;
+ xtlv = (bcm_xtlv_t *)bcm_xtlv_buf(tbuf);
+ xtlv->id = htol16(type);
+ xtlv->len = htol16(sizeof(data));
+ xtlv->data[0] = data;
+ tbuf->buf += size;
+ return BCME_OK;
+}
+int
+bcm_xtlv_put_16(bcm_xtlvbuf_t *tbuf, uint16 type, const int16 data)
+{
+ bcm_xtlv_t *xtlv;
+ int size;
+
+ if (tbuf == NULL)
+ return BCME_BADARG;
+ size = bcm_xtlv_size_for_data(2, tbuf->opts);
+ if (bcm_xtlv_buf_rlen(tbuf) < size)
+ return BCME_NOMEM;
+
+ xtlv = (bcm_xtlv_t *)bcm_xtlv_buf(tbuf);
+ xtlv->id = htol16(type);
+ xtlv->len = htol16(sizeof(data));
+ htol16_ua_store(data, xtlv->data);
+ tbuf->buf += size;
+ return BCME_OK;
+}
+int
+bcm_xtlv_put_32(bcm_xtlvbuf_t *tbuf, uint16 type, const int32 data)
+{
+ bcm_xtlv_t *xtlv;
+ int size;
+
+ if (tbuf == NULL)
+ return BCME_BADARG;
+ size = bcm_xtlv_size_for_data(4, tbuf->opts);
+ if (bcm_xtlv_buf_rlen(tbuf) < size)
+ return BCME_NOMEM;
+ xtlv = (bcm_xtlv_t *)bcm_xtlv_buf(tbuf);
+ xtlv->id = htol16(type);
+ xtlv->len = htol16(sizeof(data));
+ htol32_ua_store(data, xtlv->data);
+ tbuf->buf += size;
+ return BCME_OK;
+}
+
+/*
+ * unpacks an xtlv record from buf and checks the type,
+ * copies data to the caller's buffer,
+ * advances the tlv pointer to the next record;
+ * the caller is responsible for the dst space check
+ */
+int
+bcm_unpack_xtlv_entry(uint8 **tlv_buf, uint16 xpct_type, uint16 xpct_len, void *dst,
+ bcm_xtlv_opts_t opts)
+{
+ bcm_xtlv_t *ptlv = (bcm_xtlv_t *)*tlv_buf;
+ uint16 len;
+ uint16 type;
+
+ ASSERT(ptlv);
+ /* tlv header is always packed in LE order */
+ len = ltoh16(ptlv->len);
+ type = ltoh16(ptlv->id);
+ if (len == 0) {
+ /* z-len tlv headers: allow, but don't process */
+ printf("z-len, skip unpack\n");
+ } else {
+ if ((type != xpct_type) ||
+ (len > xpct_len)) {
+ printf("xtlv_unpack Error: found[type:%d,len:%d] != xpct[type:%d,len:%d]\n",
+ type, len, xpct_type, xpct_len);
+ return BCME_BADARG;
+ }
+ /* copy tlv record to caller's buffer */
+ memcpy(dst, ptlv->data, len);
+ }
+ *tlv_buf += BCM_XTLV_SIZE(ptlv, opts);
+ return BCME_OK;
+}
+
+/*
+ * packs user data into tlv record
+ * advances tlv pointer to next xtlv slot
+ * buflen is used for tlv_buf space check
+ */
+int
+bcm_pack_xtlv_entry(uint8 **tlv_buf, uint16 *buflen, uint16 type, uint16 len, void *src,
+ bcm_xtlv_opts_t opts)
+{
+ bcm_xtlv_t *ptlv = (bcm_xtlv_t *)*tlv_buf;
+ int size;
+
+ ASSERT(ptlv);
+ ASSERT(src);
+
+ size = bcm_xtlv_size_for_data(len, opts);
+
+ /* check that the record fits into the caller's tlv buffer */
+ if (size > *buflen) {
+ printf("bcm_pack_xtlv_entry: no space tlv_buf: requested:%d, available:%d\n",
+ size, *buflen);
+ return BCME_BADLEN;
+ }
+ ptlv->id = htol16(type);
+ ptlv->len = htol16(len);
+
+ /* copy callers data */
+ memcpy(ptlv->data, src, len);
+
+ /* advance callers pointer to tlv buff */
+ *tlv_buf += size;
+ /* decrement the len */
+ *buflen -= size;
+ return BCME_OK;
+}
+
+/*
+ * unpack all xtlv records from the buffer, issuing a callback
+ * to the set function, one call per found tlv record
+ */
+int
+bcm_unpack_xtlv_buf(void *ctx, uint8 *tlv_buf, uint16 buflen, bcm_xtlv_opts_t opts,
+ bcm_xtlv_unpack_cbfn_t *cbfn)
+{
+ uint16 len;
+ uint16 type;
+ int res = 0;
+ int size;
+ bcm_xtlv_t *ptlv;
+ int sbuflen = buflen;
+
+ ASSERT(!buflen || tlv_buf);
+ ASSERT(!buflen || cbfn);
+
+ while (sbuflen >= (int)BCM_XTLV_HDR_SIZE) {
+ ptlv = (bcm_xtlv_t *)tlv_buf;
+
+ /* tlv header is always packed in LE order */
+ len = ltoh16(ptlv->len);
+ type = ltoh16(ptlv->id);
+
+ size = bcm_xtlv_size_for_data(len, opts);
+
+ sbuflen -= size;
+ /* check for possible buffer overrun */
+ if (sbuflen < 0)
+ break;
+
+ if ((res = cbfn(ctx, ptlv->data, type, len)) != BCME_OK)
+ break;
+ tlv_buf += size;
+ }
+ return res;
+}
+
+int
+bcm_pack_xtlv_buf(void *ctx, void *tlv_buf, uint16 buflen, bcm_xtlv_opts_t opts,
+ bcm_pack_xtlv_next_info_cbfn_t get_next, bcm_pack_xtlv_pack_next_cbfn_t pack_next,
+ int *outlen)
+{
+ int res = BCME_OK;
+ uint16 tlv_id;
+ uint16 tlv_len;
+ uint8 *startp;
+ uint8 *endp;
+ uint8 *buf;
+ bool more;
+ int size;
+
+ ASSERT(get_next && pack_next);
+
+ buf = (uint8 *)tlv_buf;
+ startp = buf;
+ endp = (uint8 *)buf + buflen;
+ more = TRUE;
+ while (more && (buf < endp)) {
+ more = get_next(ctx, &tlv_id, &tlv_len);
+ size = bcm_xtlv_size_for_data(tlv_len, opts);
+ if ((buf + size) >= endp) {
+ res = BCME_BUFTOOSHORT;
+ goto done;
+ }
+
+ htol16_ua_store(tlv_id, buf);
+ htol16_ua_store(tlv_len, buf + sizeof(tlv_id));
+ pack_next(ctx, tlv_id, tlv_len, buf + BCM_XTLV_HDR_SIZE);
+ buf += size;
+ }
+
+ if (more)
+ res = BCME_BUFTOOSHORT;
+
+done:
+ if (outlen)
+ *outlen = buf - startp;
+ return res;
+}
+
+/*
+ * pack xtlv buffer from memory according to xtlv_desc_t
+ */
+int
+bcm_pack_xtlv_buf_from_mem(void **tlv_buf, uint16 *buflen, xtlv_desc_t *items,
+ bcm_xtlv_opts_t opts)
+{
+ int res = 0;
+ uint8 *ptlv = (uint8 *)*tlv_buf;
+
+ while (items->type != 0) {
+ if ((items->len > 0) && (res = bcm_pack_xtlv_entry(&ptlv,
+ buflen, items->type,
+ items->len, items->ptr, opts) != BCME_OK)) {
+ break;
+ }
+ items++;
+ }
+ *tlv_buf = ptlv; /* update the external pointer */
+ return res;
+}
+
+/*
+ * unpack xtlv buffer to memory according to xtlv_desc_t
+ *
+ */
+int
+bcm_unpack_xtlv_buf_to_mem(void *tlv_buf, int *buflen, xtlv_desc_t *items, bcm_xtlv_opts_t opts)
+{
+ int res = BCME_OK;
+ bcm_xtlv_t *elt;
+
+ elt = bcm_valid_xtlv((bcm_xtlv_t *)tlv_buf, *buflen, opts) ? (bcm_xtlv_t *)tlv_buf : NULL;
+ if (!elt || !items) {
+ res = BCME_BADARG;
+ return res;
+ }
+
+ for (; elt != NULL && res == BCME_OK; elt = bcm_next_xtlv(elt, buflen, opts)) {
+ /* find matches in desc_t items */
+ xtlv_desc_t *dst_desc = items;
+ uint16 len = ltoh16(elt->len);
+
+ while (dst_desc->type != 0) {
+ if (ltoh16(elt->id) != dst_desc->type) {
+ dst_desc++;
+ continue;
+ }
+ if (len != dst_desc->len)
+ res = BCME_BADLEN;
+ else
+ memcpy(dst_desc->ptr, elt->data, len);
+ break;
+ }
+ if (dst_desc->type == 0)
+ res = BCME_NOTFOUND;
+ }
+
+ if (*buflen != 0 && res == BCME_OK)
+ res = BCME_BUFTOOSHORT;
+
+ return res;
+}
+
+int bcm_xtlv_size(const bcm_xtlv_t *elt, bcm_xtlv_opts_t opts)
+{
+ int size; /* entire size of the XTLV including header, data, and optional padding */
+ int len; /* XTLV's real value length without padding */
+
+ len = BCM_XTLV_LEN(elt);
+
+ size = bcm_xtlv_size_for_data(len, opts);
+
+ return size;
+}
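The XTLV helpers above all share one wire format: a 4-byte header of little-endian uint16 id and len, followed by the value, with the whole record optionally padded out to a 4-byte boundary when BCM_XTLV_OPTION_ALIGN32 is set (see `bcm_xtlv_size_for_data()`). A self-contained sketch of that encoding, using plain stdint types rather than the driver's typedefs:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define XTLV_HDR_SIZE 4  /* uint16 id + uint16 len, both little-endian */

/* Total record size for a payload of dlen bytes, mirroring the
 * driver's bcm_xtlv_size_for_data(): header + data, rounded up to a
 * 4-byte boundary when align32 is requested. */
static size_t xtlv_size_for_data(size_t dlen, int align32)
{
	size_t sz = XTLV_HDR_SIZE + dlen;
	return align32 ? (sz + 3u) & ~(size_t)3u : sz;
}

/* Append one record; returns bytes written, or 0 if it does not fit. */
static size_t xtlv_put(uint8_t *buf, size_t room, uint16_t id,
		       const void *data, uint16_t dlen, int align32)
{
	size_t sz = xtlv_size_for_data(dlen, align32);

	if (sz > room)
		return 0;
	buf[0] = (uint8_t)(id & 0xff);    /* header is little-endian */
	buf[1] = (uint8_t)(id >> 8);
	buf[2] = (uint8_t)(dlen & 0xff);
	buf[3] = (uint8_t)(dlen >> 8);
	memcpy(buf + XTLV_HDR_SIZE, data, dlen);
	/* zero the alignment padding, if any */
	memset(buf + XTLV_HDR_SIZE + dlen, 0, sz - XTLV_HDR_SIZE - dlen);
	return sz;
}
```

With 32-bit alignment, a 3-byte payload occupies 8 bytes on the wire (4-byte header + 3 data + 1 pad); without it, 7 bytes.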
diff --git a/drivers/net/wireless/bcmdhd/circularbuf.c b/drivers/net/wireless/bcmdhd/circularbuf.c
deleted file mode 100755
index 6f89f73..0000000
--- a/drivers/net/wireless/bcmdhd/circularbuf.c
+++ /dev/null
@@ -1,326 +0,0 @@
-/*
- * Initialization and support routines for self-booting compressed image.
- *
- * Copyright (C) 1999-2014, Broadcom Corporation
- *
- * Unless you and Broadcom execute a separate written software license
- * agreement governing use of this software, this software is licensed to you
- * under the terms of the GNU General Public License version 2 (the "GPL"),
- * available at http://www.broadcom.com/licenses/GPLv2.php, with the
- * following added to such license:
- *
- * As a special exception, the copyright holders of this software give you
- * permission to link this software with independent modules, and to copy and
- * distribute the resulting executable under terms of your choice, provided that
- * you also meet, for each linked independent module, the terms and conditions of
- * the license of that module. An independent module is a module which is not
- * derived from this software. The special exception does not apply to any
- * modifications of the software.
- *
- * Notwithstanding the above, under no circumstances may you combine this
- * software in any way with any other Broadcom software provided under a license
- * other than the GPL, without Broadcom's express prior written consent.
- *
- * $Id: circularbuf.c 452261 2014-01-29 19:30:23Z $
- */
-
-#include <circularbuf.h>
-#include <bcmmsgbuf.h>
-#include <osl.h>
-
-#define CIRCULARBUF_READ_SPACE_AT_END(x) \
- ((x->w_ptr >= x->rp_ptr) ? (x->w_ptr - x->rp_ptr) : (x->e_ptr - x->rp_ptr))
-
-#define CIRCULARBUF_READ_SPACE_AVAIL(x) \
- (((CIRCULARBUF_READ_SPACE_AT_END(x) == 0) && (x->w_ptr < x->rp_ptr)) ? \
- x->w_ptr : CIRCULARBUF_READ_SPACE_AT_END(x))
-
-int cbuf_msg_level = CBUF_ERROR_VAL | CBUF_TRACE_VAL | CBUF_INFORM_VAL;
-
-/* #define CBUF_DEBUG */
-#ifdef CBUF_DEBUG
-#define CBUF_DEBUG_CHECK(x) x
-#else
-#define CBUF_DEBUG_CHECK(x)
-#endif /* CBUF_DEBUG */
-
-/*
- * -----------------------------------------------------------------------------
- * Function : circularbuf_init
- * Description:
- *
- *
- * Input Args :
- *
- *
- * Return Values :
- *
- * -----------------------------------------------------------------------------
- */
-void
-circularbuf_init(circularbuf_t *handle, void *buf_base_addr, uint16 total_buf_len)
-{
- handle->buf_addr = buf_base_addr;
-
- handle->depth = handle->e_ptr = HTOL32(total_buf_len);
-
- /* Initialize Read and Write pointers */
- handle->w_ptr = handle->r_ptr = handle->wp_ptr = handle->rp_ptr = HTOL32(0);
- handle->mb_ring_bell = NULL;
- handle->mb_ctx = NULL;
-
- return;
-}
-
-void
-circularbuf_register_cb(circularbuf_t *handle, mb_ring_t mb_ring_func, void *ctx)
-{
- handle->mb_ring_bell = mb_ring_func;
- handle->mb_ctx = ctx;
-}
-
-#ifdef CBUF_DEBUG
-static void
-circularbuf_check_sanity(circularbuf_t *handle)
-{
- if ((handle->e_ptr > handle->depth) ||
- (handle->r_ptr > handle->e_ptr) ||
- (handle->rp_ptr > handle->e_ptr) ||
- (handle->w_ptr > handle->e_ptr))
- {
- printf("%s:%d: Pointers are corrupted.\n", __FUNCTION__, __LINE__);
- circularbuf_debug_print(handle);
- ASSERT(0);
- }
- return;
-}
-#endif /* CBUF_DEBUG */
-
-/*
- * -----------------------------------------------------------------------------
- * Function : circularbuf_reserve_for_write
- *
- * Description:
- * This function reserves N bytes for write in the circular buffer. The circularbuf
- * implementation will only reserve space in the ciruclar buffer and return
- * the pointer to the address where the new data can be written.
- * The actual write implementation (bcopy/dma) is outside the scope of
- * circularbuf implementation.
- *
- * Input Args :
- * size - No. of bytes to reserve for write
- *
- * Return Values :
- * void * : Pointer to the reserved location. This is the address
- * that will be used for write (dma/bcopy)
- *
- * -----------------------------------------------------------------------------
- */
-void * BCMFASTPATH
-circularbuf_reserve_for_write(circularbuf_t *handle, uint16 size)
-{
- int16 avail_space;
- void *ret_ptr = NULL;
-
- CBUF_DEBUG_CHECK(circularbuf_check_sanity(handle));
- ASSERT(size < handle->depth);
-
- if (handle->wp_ptr >= handle->r_ptr)
- avail_space = handle->depth - handle->wp_ptr;
- else
- avail_space = handle->r_ptr - handle->wp_ptr;
-
- ASSERT(avail_space <= handle->depth);
- if (avail_space > size)
- {
- /* Great. We have enough space. */
- ret_ptr = CIRCULARBUF_START(handle) + handle->wp_ptr;
-
- /*
- * We need to update the wp_ptr for the next guy to write.
- *
- * Please Note : We are not updating the write pointer here. This can be
- * done only after write is complete (In case of DMA, we can only schedule
- * the DMA. Actual completion will be known only on DMA complete interrupt).
- */
- handle->wp_ptr += size;
- return ret_ptr;
- }
-
- /*
- * If there is no available space, we should check if there is some space left
- * in the beginning of the circular buffer. Wrap-around case, where there is
- * not enough space in the end of the circular buffer. But, there might be
- * room in the beginning of the buffer.
- */
- if (handle->wp_ptr >= handle->r_ptr)
- {
- avail_space = handle->r_ptr;
- if (avail_space > size)
- {
- /* OK. There is room in the beginning. Let's go ahead and use that.
- * But, before that, we have left a hole at the end of the circular
- * buffer as that was not sufficient to accomodate the requested
- * size. Let's make sure this is updated in the circularbuf structure
- * so that consumer does not use the hole.
- */
- handle->e_ptr = handle->wp_ptr;
- handle->wp_ptr = size;
-
- return CIRCULARBUF_START(handle);
- }
- }
-
- /* We have tried enough to accomodate the new packet. There is no room for now. */
- return NULL;
-}
-
-/*
- * -----------------------------------------------------------------------------
- * Function : circularbuf_write_complete
- *
- * Description:
- * This function has to be called by the producer end of circularbuf to indicate to
- * the circularbuf layer that data has been written and the write pointer can be
- * updated. In the process, if there was a doorbell callback registered, that
- * function would also be invoked.
- *
- * Input Args :
- * dest_addr : Address where the data was written. This would be the
- * same address that was reserved earlier.
- * bytes_written : Length of data written
- *
- * -----------------------------------------------------------------------------
- */
-void BCMFASTPATH
-circularbuf_write_complete(circularbuf_t *handle, uint16 bytes_written)
-{
- CBUF_DEBUG_CHECK(circularbuf_check_sanity(handle));
-
- /* Update the write pointer */
- if ((handle->w_ptr + bytes_written) >= handle->depth) {
- OSL_CACHE_FLUSH((void *) CIRCULARBUF_START(handle), bytes_written);
- handle->w_ptr = bytes_written;
- } else {
- OSL_CACHE_FLUSH((void *) (CIRCULARBUF_START(handle) + handle->w_ptr),
- bytes_written);
- handle->w_ptr += bytes_written;
- }
-
- /* And ring the door bell (mail box interrupt) to indicate to the peer that
- * message is available for consumption.
- */
- if (handle->mb_ring_bell)
- handle->mb_ring_bell(handle->mb_ctx);
-}
-
-/*
- * -----------------------------------------------------------------------------
- * Function : circularbuf_get_read_ptr
- *
- * Description:
- * This function will be called by the consumer of circularbuf for reading data from
- * the circular buffer. This will typically be invoked when the consumer gets a
- * doorbell interrupt.
- * Please note that the function only returns the pointer (and length) from
- * where the data can be read. Actual read implementation is upto the
- * consumer. It could be a bcopy or dma.
- *
- * Input Args :
- * void * : Address from where the data can be read.
- * available_len : Length of data available for read.
- *
- * -----------------------------------------------------------------------------
- */
-void * BCMFASTPATH
-circularbuf_get_read_ptr(circularbuf_t *handle, uint16 *available_len)
-{
- uint8 *ret_addr;
-
- CBUF_DEBUG_CHECK(circularbuf_check_sanity(handle));
-
- /* First check if there is any data available in the circular buffer */
- *available_len = CIRCULARBUF_READ_SPACE_AVAIL(handle);
- if (*available_len == 0)
- return NULL;
-
- /*
- * Although there might be data in the circular buffer for read, in
- * cases of write wrap-around and read still in the end of the circular
- * buffer, we might have to wrap around the read pending pointer also.
- */
- if (CIRCULARBUF_READ_SPACE_AT_END(handle) == 0)
- handle->rp_ptr = 0;
-
- ret_addr = CIRCULARBUF_START(handle) + handle->rp_ptr;
-
- /*
- * Please note that we do not update the read pointer here. Only
- * read pending pointer is updated, so that next reader knows where
- * to read data from.
- * read pointer can only be updated when the read is complete.
- */
- handle->rp_ptr = (uint16)(ret_addr - CIRCULARBUF_START(handle) + *available_len);
-
- ASSERT(*available_len <= handle->depth);
-
- OSL_CACHE_INV((void *) ret_addr, *available_len);
-
- return ret_addr;
-}
-
-/*
- * -----------------------------------------------------------------------------
- * Function : circularbuf_read_complete
- * Description:
- * This function has to be called by the consumer end of circularbuf to indicate
- * that data has been consumed and the read pointer can be updated.
- *
- * Input Args :
- * bytes_read : No. of bytes consumed by the consumer. This has to match
- * the length returned by circularbuf_get_read_ptr
- *
- * Return Values :
- * CIRCULARBUF_SUCCESS : Otherwise
- *
- * -----------------------------------------------------------------------------
- */
-circularbuf_ret_t BCMFASTPATH
-circularbuf_read_complete(circularbuf_t *handle, uint16 bytes_read)
-{
- CBUF_DEBUG_CHECK(circularbuf_check_sanity(handle));
- ASSERT(bytes_read < handle->depth);
-
- /* Update the read pointer */
- if ((handle->r_ptr + bytes_read) >= handle->depth)
- handle->r_ptr = bytes_read;
- else
- handle->r_ptr += bytes_read;
-
- return CIRCULARBUF_SUCCESS;
-}
-/*
- * -----------------------------------------------------------------------------
- * Function : circularbuf_revert_rp_ptr
- *
- * Description:
- * The rp_ptr update during circularbuf_get_read_ptr() is done to reflect the amount of data
- * that is sent out to be read by the consumer. But the consumer may not always read the
- * entire data. In such a case, the rp_ptr needs to be reverted back by 'left' bytes, where
- * 'left' is the no. of bytes left unread.
- *
- * Input args:
- * bytes : The no. of bytes left unread by the consumer
- *
- * -----------------------------------------------------------------------------
- */
-circularbuf_ret_t
-circularbuf_revert_rp_ptr(circularbuf_t *handle, uint16 bytes)
-{
- CBUF_DEBUG_CHECK(circularbuf_check_sanity(handle));
- ASSERT(bytes < handle->depth);
-
- handle->rp_ptr -= bytes;
-
- return CIRCULARBUF_SUCCESS;
-}
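The deleted circularbuf code above follows a two-phase reserve/commit pattern: `circularbuf_reserve_for_write()` only advances a pending write pointer and hands back the write address, and `circularbuf_write_complete()` publishes it once the copy or DMA has finished. A minimal single-producer sketch of that idea — illustrative only, with simplified names, not the driver code:

```c
#include <stdint.h>
#include <stddef.h>

struct mini_cbuf {
	uint8_t *base;
	uint16_t depth;   /* total size in bytes */
	uint16_t w, wp;   /* committed / pending write offsets */
	uint16_t r;       /* read offset */
};

/* Reserve size bytes: returns the write address and advances only the
 * pending offset, so a DMA can be scheduled before commit. On
 * wrap-around, an unusable "hole" is left at the old end when the tail
 * space is too small, as the original comments describe. */
static void *mini_cbuf_reserve(struct mini_cbuf *cb, uint16_t size)
{
	uint16_t avail = (cb->wp >= cb->r) ? cb->depth - cb->wp
					   : cb->r - cb->wp;

	if (avail > size) {                     /* room at current position */
		void *p = cb->base + cb->wp;
		cb->wp += size;
		return p;
	}
	if (cb->wp >= cb->r && cb->r > size) {  /* wrap: room at the start */
		cb->wp = size;                  /* leaves a hole at the end */
		return cb->base;
	}
	return NULL;                            /* no room for now */
}

/* Publish the pending offset after the data is actually in place
 * (the driver version also flushes caches and rings a doorbell). */
static void mini_cbuf_commit(struct mini_cbuf *cb)
{
	cb->w = cb->wp;
}
```

Keeping `avail > size` (strictly greater) means write and read offsets can never become equal through a write, which keeps "full" distinguishable from "empty" without a separate count.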
diff --git a/drivers/net/wireless/bcmdhd/dhd.h b/drivers/net/wireless/bcmdhd/dhd.h
old mode 100755
new mode 100644
index 735a106..7ddde04
--- a/drivers/net/wireless/bcmdhd/dhd.h
+++ b/drivers/net/wireless/bcmdhd/dhd.h
@@ -5,13 +5,13 @@
* DHD OS, bus, and protocol modules.
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -19,12 +19,12 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: dhd.h 457888 2014-02-25 03:34:39Z $
+ * $Id: dhd.h 474409 2014-05-01 04:27:15Z $
*/
/****************
@@ -53,12 +53,23 @@
struct sched_param;
int setScheduler(struct task_struct *p, int policy, struct sched_param *param);
int get_scheduler_policy(struct task_struct *p);
+#define MAX_EVENT 16
#define ALL_INTERFACES 0xff
#include <wlioctl.h>
#include <wlfc_proto.h>
+#if defined(BCMWDF)
+#include <wdf.h>
+#include <WdfMiniport.h>
+#endif /* (BCMWDF) */
+
+#if defined(WL11U)
+#ifndef MFP
+#define MFP /* Applying interaction with MFP by spec HS2.0 REL2 */
+#endif /* MFP */
+#endif /* WL11U */
#if defined(KEEP_ALIVE)
/* Default KEEP_ALIVE Period is 55 sec to prevent AP from sending Keep Alive probe frame */
@@ -70,14 +81,22 @@
struct dhd_prot;
struct dhd_info;
struct dhd_ioctl;
-
+struct dhd_dbg;
/* The level of bus communication with the dongle */
enum dhd_bus_state {
DHD_BUS_DOWN, /* Not ready for frame transfers */
DHD_BUS_LOAD, /* Download access only (CPU reset) */
- DHD_BUS_DATA /* Ready for frame transfers */
+ DHD_BUS_DATA, /* Ready for frame transfers */
+ DHD_BUS_SUSPEND, /* Bus has been suspended */
};
+#define DHD_IF_ROLE_STA(role) (role == WLC_E_IF_ROLE_STA ||\
+ role == WLC_E_IF_ROLE_P2P_CLIENT)
+
+/* For supporting multiple interfaces */
+#define DHD_MAX_IFS 16
+#define DHD_DEL_IF -0xE
+#define DHD_BAD_IF -0xF
enum dhd_op_flags {
/* Firmware requested operation mode */
@@ -103,8 +122,9 @@
#define MAX_CNTL_RX_TIMEOUT 1
#endif /* MAX_CNTL_RX_TIMEOUT */
-#define DHD_SCAN_ASSOC_ACTIVE_TIME 35 /* ms: Embedded default Active setting from DHD */
-#define DHD_SCAN_UNASSOC_ACTIVE_TIME 65 /* ms: Embedded def. Unassoc Active setting from DHD */
+#define DHD_SCAN_ASSOC_ACTIVE_TIME 20 /* ms: Embedded default Active setting from DHD */
+#define DHD_SCAN_UNASSOC_ACTIVE_TIME 40 /* ms: Embedded def. Unassoc Active setting from DHD */
+#define DHD_SCAN_UNASSOC_ACTIVE_TIME_PS 30
#define DHD_SCAN_PASSIVE_TIME 130 /* ms: Embedded default Passive setting from DHD */
#ifndef POWERUP_MAX_RETRY
@@ -138,7 +158,16 @@
#if defined(STATIC_WL_PRIV_STRUCT)
DHD_PREALLOC_WIPHY_ESCAN0 = 5,
#endif /* STATIC_WL_PRIV_STRUCT */
- DHD_PREALLOC_DHD_INFO = 7
+ DHD_PREALLOC_DHD_INFO = 7,
+ DHD_PREALLOC_IF_FLOW_LKUP = 9
+};
+
+enum dhd_dongledump_mode {
+ DUMP_DISABLED = 0,
+ DUMP_MEMONLY,
+ DUMP_MEMFILE,
+ DUMP_MEMFILE_BUGON,
+ DUMP_MEMFILE_MAX
};
/* Packet alignment for most efficient SDIO (can change based on platform) */
@@ -158,17 +187,44 @@
} reorder_info_t;
#ifdef DHDTCPACK_SUPPRESS
-#define TCPACK_SUP_OFF 0 /* TCPACK suppress off */
-/* Replace TCPACK in txq when new coming one has higher ACK number. */
-#define TCPACK_SUP_REPLACE 1
-/* TCPACK_SUP_REPLACE + delayed TCPACK TX unless ACK to PSH DATA.
- * This will give benefits to Half-Duplex bus interface(e.g. SDIO) that
- * 1. we are able to read TCP DATA packets first from the bus
- * 2. TCPACKs that do not need to hurry delivered remains longer in TXQ so can be suppressed.
- */
-#define TCPACK_SUP_DELAYTX 2
+
+enum {
+ /* TCPACK suppress off */
+ TCPACK_SUP_OFF,
+ /* Replace TCPACK in txq when new coming one has higher ACK number. */
+ TCPACK_SUP_REPLACE,
+ /* TCPACK_SUP_REPLACE + delayed TCPACK TX unless ACK to PSH DATA.
+ * This will give benefits to Half-Duplex bus interface(e.g. SDIO) that
+ * 1. we are able to read TCP DATA packets first from the bus
+ * 2. TCPACKs that don't need to hurry delivered remains longer in TXQ so can be suppressed.
+ */
+ TCPACK_SUP_DELAYTX,
+ TCPACK_SUP_LAST_MODE
+};
#endif /* DHDTCPACK_SUPPRESS */
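The TCPACK_SUP_REPLACE mode described in the enum above can be illustrated with a small standalone sketch (all names hypothetical, not driver code): when a TCP ACK for the same flow is already queued and a newer one arrives with a higher acknowledgement number, the queued ACK is overwritten in place, so only the most recent cumulative ACK is ever transmitted.

```c
#include <stdint.h>

/* Hypothetical queue slot: one pending TCP ACK per flow. */
struct ack_slot {
	uint32_t flow_id;  /* identifies the TCP connection */
	uint32_t ack_no;   /* cumulative acknowledgement number */
	int used;
};

/* Enqueue an ACK in TCPACK_SUP_REPLACE style. Returns 1 if the new ACK
 * was merged with (or superseded by) an existing queued ACK for the
 * same flow, 0 if it was placed in a free slot (or dropped when full). */
static int ack_enqueue(struct ack_slot *q, int qlen,
		       uint32_t flow_id, uint32_t ack_no)
{
	int i, free_i = -1;

	for (i = 0; i < qlen; i++) {
		if (!q[i].used) {
			if (free_i < 0)
				free_i = i;
			continue;
		}
		if (q[i].flow_id == flow_id) {
			/* serial-number compare handles sequence wrap */
			if ((int32_t)(ack_no - q[i].ack_no) > 0)
				q[i].ack_no = ack_no;  /* replace in place */
			return 1;
		}
	}
	if (free_i >= 0) {
		q[free_i].flow_id = flow_id;
		q[free_i].ack_no = ack_no;
		q[free_i].used = 1;
	}
	return 0;
}
```

This is the benefit the enum comment alludes to: ACKs that linger in the TX queue on a half-duplex bus can be coalesced instead of each consuming a bus transaction.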
+
+/* DMA'ing r/w indices for rings supported */
+#ifdef BCM_INDX_TCM /* FW gets r/w indices in TCM */
+#define DMA_INDX_ENAB(dma_indxsup) 0
+#elif defined BCM_INDX_DMA /* FW gets r/w indices from Host memory */
+#define DMA_INDX_ENAB(dma_indxsup) 1
+#else /* r/w indices in TCM or host memory based on FW/Host agreement */
+#define DMA_INDX_ENAB(dma_indxsup) dma_indxsup
+#endif /* BCM_INDX_TCM */
+
+#if defined(WLTDLS) && defined(PCIE_FULL_DONGLE)
+struct tdls_peer_node {
+ uint8 addr[ETHER_ADDR_LEN];
+ struct tdls_peer_node *next;
+};
+typedef struct tdls_peer_node tdls_peer_node_t;
+typedef struct {
+ tdls_peer_node_t *node;
+ uint8 tdls_peer_count;
+} tdls_peer_tbl_t;
+#endif /* defined(WLTDLS) && defined(PCIE_FULL_DONGLE) */
+
/* Common structure for module and instance linkage */
typedef struct dhd_pub {
/* Linkage ponters */
@@ -176,7 +232,7 @@
struct dhd_bus *bus; /* Bus module handle */
struct dhd_prot *prot; /* Protocol module handle */
struct dhd_info *info; /* Info module handle */
-
+ struct dhd_dbg *dbg;
/* to NDIS developer, the structure dhd_common is redundant,
* please do NOT merge it back from other branches !!!
*/
@@ -200,6 +256,7 @@
/* Additional stats for the bus level */
ulong tx_packets; /* Data packets sent to dongle */
+ ulong tx_dropped; /* Data packets dropped in dhd */
ulong tx_multicast; /* Multicast data packets sent to dongle */
ulong tx_errors; /* Errors in sending data to dongle */
ulong tx_ctlpkts; /* Control packets sent to dongle */
@@ -247,6 +304,8 @@
int pktfilter_count;
wl_country_t dhd_cspec; /* Current Locale info */
+ u32 dhd_cflags;
+ bool force_country_change;
char eventmask[WL_EVENTING_MASK_LEN];
int op_mode; /* STA, HostAPD, WFD, SoftAP */
@@ -259,7 +318,7 @@
#if (LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 25))
struct mutex wl_start_stop_lock; /* lock/unlock for Android start/stop */
struct mutex wl_softap_lock; /* lock/unlock for any SoftAP/STA settings */
-#endif
+#endif
#ifdef PROP_TXSTATUS
bool wlfc_enabled;
@@ -278,6 +337,8 @@
bool proptxstatus_module_ignore;
bool proptxstatus_credit_ignore;
bool proptxstatus_txstatus_ignore;
+
+ bool wlfc_rxpkt_chk;
/*
* implement below functions in each platform if needed.
*/
@@ -290,6 +351,7 @@
#ifdef PNO_SUPPORT
void *pno_state;
#endif
+ void *rtt_state;
#ifdef ROAM_AP_ENV_DETECTION
bool roam_env_detection;
#endif
@@ -323,8 +385,77 @@
struct task_struct * current_rxf;
int chan_isvht80;
#endif /* CUSTOM_SET_CPUCORE */
+
+
+ void *sta_pool; /* pre-allocated pool of sta objects */
+ void *staid_allocator; /* allocator of sta indexes */
+
+ void *flowid_allocator; /* unique flowid allocator */
+ void *flow_ring_table; /* flow ring table, include prot and bus info */
+ void *if_flow_lkup; /* per interface flowid lkup hash table */
+ void *flowid_lock; /* per os lock for flowid info protection */
+ uint32 num_flow_rings;
+ uint8 flow_prio_map[NUMPRIO];
+ uint8 flow_prio_map_type;
+ char enable_log[MAX_EVENT];
+ bool dma_d2h_ring_upd_support;
+ bool dma_h2d_ring_upd_support;
+ int short_dwell_time;
+#ifdef DHD_WMF
+ bool wmf_ucast_igmp;
+#ifdef DHD_IGMP_UCQUERY
+ bool wmf_ucast_igmp_query;
+#endif
+#ifdef DHD_UCAST_UPNP
+ bool wmf_ucast_upnp;
+#endif
+#endif /* DHD_WMF */
+#ifdef DHD_UNICAST_DHCP
+ bool dhcp_unicast;
+#endif /* DHD_UNICAST_DHCP */
+#ifdef DHD_L2_FILTER
+ bool block_ping;
+#endif
+#if defined(WLTDLS) && defined(PCIE_FULL_DONGLE)
+ tdls_peer_tbl_t peer_tbl;
+#endif
+#ifdef GSCAN_SUPPORT
+ bool lazy_roam_enable;
+#endif /* GSCAN_SUPPORT */
+ uint8 *soc_ram;
+ uint32 soc_ram_length;
+ uint32 memdump_enabled;
+ uint8 rand_mac_oui[DOT11_OUI_LEN];
} dhd_pub_t;
+typedef struct {
+ uint rxwake;
+ uint rcwake;
+#ifdef DHD_WAKE_RX_STATUS
+ uint rx_bcast;
+ uint rx_arp;
+ uint rx_mcast;
+ uint rx_multi_ipv6;
+ uint rx_icmpv6;
+ uint rx_icmpv6_ra;
+ uint rx_icmpv6_na;
+ uint rx_icmpv6_ns;
+ uint rx_multi_ipv4;
+ uint rx_multi_other;
+ uint rx_ucast;
+#endif
+#ifdef DHD_WAKE_EVENT_STATUS
+ uint rc_event[WLC_E_LAST];
+#endif
+} wake_counts_t;
+
+#if defined(BCMWDF)
+typedef struct {
+ dhd_pub_t *dhd_pub;
+} dhd_workitem_context_t;
+
+WDF_DECLARE_CONTEXT_TYPE_WITH_NAME(dhd_workitem_context_t, dhd_get_dhd_workitem_context)
+#endif /* (BCMWDF) */
#if (LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 27)) && defined(CONFIG_PM_SLEEP)
@@ -350,7 +481,7 @@
#else
#define DHD_PM_RESUME_RETURN_ERROR(a) do { \
if (dhd_mmc_suspend) return a; } while (0)
- #endif
+ #endif
#define DHD_PM_RESUME_RETURN do { if (dhd_mmc_suspend) return; } while (0)
#define DHD_SPINWAIT_SLEEP_INIT(a) DECLARE_WAIT_QUEUE_HEAD(a);
@@ -387,8 +518,6 @@
#define DHD_IF_VIF 0x01 /* Virtual IF (Hidden from user) */
-unsigned long dhd_os_spin_lock(dhd_pub_t *pub);
-void dhd_os_spin_unlock(dhd_pub_t *pub, unsigned long flags);
#ifdef PNO_SUPPORT
int dhd_pno_clean(dhd_pub_t *dhd);
#endif /* PNO_SUPPORT */
@@ -404,6 +533,9 @@
extern int dhd_os_wake_lock_ctrl_timeout_cancel(dhd_pub_t *pub);
extern int dhd_os_wd_wake_lock(dhd_pub_t *pub);
extern int dhd_os_wd_wake_unlock(dhd_pub_t *pub);
+extern int dhd_os_wake_lock_waive(dhd_pub_t *pub);
+extern int dhd_os_wake_lock_restore(dhd_pub_t *pub);
+int dhd_os_get_wake_irq(dhd_pub_t *pub);
inline static void MUTEX_LOCK_SOFTAP_SET_INIT(dhd_pub_t * dhdp)
{
@@ -428,8 +560,6 @@
#define DHD_OS_WAKE_LOCK(pub) dhd_os_wake_lock(pub)
#define DHD_OS_WAKE_UNLOCK(pub) dhd_os_wake_unlock(pub)
-#define DHD_OS_WD_WAKE_LOCK(pub) dhd_os_wd_wake_lock(pub)
-#define DHD_OS_WD_WAKE_UNLOCK(pub) dhd_os_wd_wake_unlock(pub)
#define DHD_OS_WAKE_LOCK_TIMEOUT(pub) dhd_os_wake_lock_timeout(pub)
#define DHD_OS_WAKE_LOCK_RX_TIMEOUT_ENABLE(pub, val) \
dhd_os_wake_lock_rx_timeout_enable(pub, val)
@@ -437,6 +567,11 @@
dhd_os_wake_lock_ctrl_timeout_enable(pub, val)
#define DHD_OS_WAKE_LOCK_CTRL_TIMEOUT_CANCEL(pub) \
dhd_os_wake_lock_ctrl_timeout_cancel(pub)
+#define DHD_OS_WAKE_LOCK_WAIVE(pub) dhd_os_wake_lock_waive(pub)
+#define DHD_OS_WAKE_LOCK_RESTORE(pub) dhd_os_wake_lock_restore(pub)
+
+#define DHD_OS_WD_WAKE_LOCK(pub) dhd_os_wd_wake_lock(pub)
+#define DHD_OS_WD_WAKE_UNLOCK(pub) dhd_os_wd_wake_unlock(pub)
#define DHD_PACKET_TIMEOUT_MS 500
#define DHD_EVENT_TIMEOUT_MS 1500
@@ -488,6 +623,8 @@
/* Indication from bus module regarding removal/absence of dongle */
extern void dhd_detach(dhd_pub_t *dhdp);
extern void dhd_free(dhd_pub_t *dhdp);
+extern void dhd_clear(dhd_pub_t *dhdp);
+
/* Indication from bus module to change flow-control state */
extern void dhd_txflowcontrol(dhd_pub_t *dhdp, int ifidx, bool on);
@@ -498,7 +635,8 @@
extern bool dhd_prec_enq(dhd_pub_t *dhdp, struct pktq *q, void *pkt, int prec);
/* Receive frame for delivery to OS. Callee disposes of rxp. */
-extern void dhd_rx_frame(dhd_pub_t *dhdp, int ifidx, void *rxp, int numpkt, uint8 chan);
+extern void dhd_rx_frame(dhd_pub_t *dhdp, int ifidx, void *rxp, int numpkt,
+ uint8 chan, int pkt_wake, wake_counts_t *wcp);
/* Return pointer to interface name */
extern char *dhd_ifname(dhd_pub_t *dhdp, int idx);
@@ -509,11 +647,54 @@
/* Notify tx completion */
extern void dhd_txcomplete(dhd_pub_t *dhdp, void *txp, bool success);
+#define WIFI_FEATURE_INFRA 0x0001 /* Basic infrastructure mode */
+#define WIFI_FEATURE_INFRA_5G 0x0002 /* Support for 5 GHz Band */
+#define WIFI_FEATURE_HOTSPOT 0x0004 /* Support for GAS/ANQP */
+#define WIFI_FEATURE_P2P 0x0008 /* Wifi-Direct */
+#define WIFI_FEATURE_SOFT_AP 0x0010 /* Soft AP */
+#define WIFI_FEATURE_GSCAN 0x0020 /* Google-Scan APIs */
+#define WIFI_FEATURE_NAN 0x0040 /* Neighbor Awareness Networking */
+#define WIFI_FEATURE_D2D_RTT 0x0080 /* Device-to-device RTT */
+#define WIFI_FEATURE_D2AP_RTT 0x0100 /* Device-to-AP RTT */
+#define WIFI_FEATURE_BATCH_SCAN 0x0200 /* Batched Scan (legacy) */
+#define WIFI_FEATURE_PNO 0x0400 /* Preferred network offload */
+#define WIFI_FEATURE_ADDITIONAL_STA 0x0800 /* Support for two STAs */
+#define WIFI_FEATURE_TDLS 0x1000 /* Tunneled direct link setup */
+#define WIFI_FEATURE_TDLS_OFFCHANNEL 0x2000 /* Support for TDLS off channel */
+#define WIFI_FEATURE_EPR 0x4000 /* Enhanced power reporting */
+#define WIFI_FEATURE_AP_STA 0x8000 /* Support for AP STA Concurrency */
+#define WIFI_FEATURE_LINKSTAT 0x10000 /* Support for Linkstats */
+#define WIFI_FEATURE_HAL_EPNO 0x40000 /* WiFi PNO enhanced */
+#define WIFI_FEATUE_RSSI_MONITOR 0x80000 /* RSSI Monitor */
+
+#define MAX_FEATURE_SET_CONCURRRENT_GROUPS 3
+
+extern int dhd_dev_get_feature_set(struct net_device *dev);
+extern int *dhd_dev_get_feature_set_matrix(struct net_device *dev, int *num);
+extern int dhd_dev_set_nodfs(struct net_device *dev, u32 nodfs);
+extern int dhd_dev_cfg_rand_mac_oui(struct net_device *dev, uint8 *oui);
+extern int dhd_set_rand_mac_oui(dhd_pub_t *dhd);
+
+#ifdef GSCAN_SUPPORT
+extern int dhd_dev_set_lazy_roam_cfg(struct net_device *dev,
+ wlc_roam_exp_params_t *roam_param);
+extern int dhd_dev_lazy_roam_enable(struct net_device *dev, uint32 enable);
+extern int dhd_dev_set_lazy_roam_bssid_pref(struct net_device *dev,
+ wl_bssid_pref_cfg_t *bssid_pref, uint32 flush);
+extern int dhd_dev_set_blacklist_bssid(struct net_device *dev, maclist_t *blacklist,
+ uint32 len, uint32 flush);
+extern int dhd_dev_set_whitelist_ssid(struct net_device *dev, wl_ssid_whitelist_t *whitelist,
+ uint32 len, uint32 flush);
+#endif /* GSCAN_SUPPORT */
+
/* OS independent layer functions */
extern int dhd_os_proto_block(dhd_pub_t * pub);
extern int dhd_os_proto_unblock(dhd_pub_t * pub);
extern int dhd_os_ioctl_resp_wait(dhd_pub_t * pub, uint * condition, bool * pending);
extern int dhd_os_ioctl_resp_wake(dhd_pub_t * pub);
+extern int dhd_os_d3ack_wait(dhd_pub_t * pub, uint * condition, bool * pending);
+extern int dhd_os_d3ack_wake(dhd_pub_t * pub);
+extern struct net_device *dhd_linux_get_primary_netdev(dhd_pub_t *dhdp);
extern unsigned int dhd_os_get_ioctl_resp_timeout(void);
extern void dhd_os_set_ioctl_resp_timeout(unsigned int timeout_msec);
@@ -536,16 +717,22 @@
extern int dhd_customer_oob_irq_map(void *adapter, unsigned long *irq_flags_ptr);
extern int dhd_customer_gpio_wlan_ctrl(void *adapter, int onoff);
extern int dhd_custom_get_mac_address(void *adapter, unsigned char *buf);
-extern void get_customized_country_code(void *adapter, char *country_iso_code, wl_country_t *cspec);
+extern void get_customized_country_code(void *adapter, char *country_iso_code,
+ wl_country_t *cspec, u32 flags);
extern void dhd_os_sdunlock_sndup_rxq(dhd_pub_t * pub);
extern void dhd_os_sdlock_eventq(dhd_pub_t * pub);
extern void dhd_os_sdunlock_eventq(dhd_pub_t * pub);
extern bool dhd_os_check_hang(dhd_pub_t *dhdp, int ifidx, int ret);
extern int dhd_os_send_hang_message(dhd_pub_t *dhdp);
extern void dhd_set_version_info(dhd_pub_t *pub, char *fw);
+extern void dhd_set_short_dwell_time(dhd_pub_t *dhd, int set);
+#ifdef CUSTOM_SET_SHORT_DWELL_TIME
+extern void net_set_short_dwell_time(struct net_device *dev, int set);
+#endif
extern bool dhd_os_check_if_up(dhd_pub_t *pub);
extern int dhd_os_check_wakelock(dhd_pub_t *pub);
-
+extern int dhd_os_check_wakelock_all(dhd_pub_t *pub);
+extern int dhd_get_instance(dhd_pub_t *pub);
#ifdef CUSTOM_SET_CPUCORE
extern void dhd_set_cpucore(dhd_pub_t *dhd, int set);
#endif /* CUSTOM_SET_CPUCORE */
@@ -568,15 +755,22 @@
extern int net_os_rxfilter_add_remove(struct net_device *dev, int val, int num);
#endif /* PKT_FILTER_SUPPORT */
-extern int dhd_get_suspend_bcn_li_dtim(dhd_pub_t *dhd);
+extern int dhd_get_suspend_bcn_li_dtim(dhd_pub_t *dhd, int *dtim_period, int *bcn_interval);
extern bool dhd_support_sta_mode(dhd_pub_t *dhd);
#ifdef DHD_DEBUG
extern int write_to_file(dhd_pub_t *dhd, uint8 *buf, int size);
#endif /* DHD_DEBUG */
-extern void dhd_os_sdtxlock(dhd_pub_t * pub);
-extern void dhd_os_sdtxunlock(dhd_pub_t * pub);
+extern int dhd_dev_set_rssi_monitor_cfg(struct net_device *dev, int start,
+ int8 max_rssi, int8 min_rssi);
+
+#define DHD_RSSI_MONITOR_EVT_VERSION 1
+typedef struct {
+ uint8 version;
+ int8 cur_rssi;
+ struct ether_addr BSSID;
+} dhd_rssi_monitor_evt_t;
typedef struct {
uint32 limit; /* Expiration time (usec) */
@@ -585,20 +779,48 @@
uint32 tick; /* O/S tick time (usec) */
} dhd_timeout_t;
+#ifdef SHOW_LOGTRACE
+typedef struct {
+ int num_fmts;
+ char **fmts;
+ char *raw_fmts;
+ char *raw_sstr;
+ uint32 ramstart;
+ uint32 rodata_start;
+ uint32 rodata_end;
+ char *rom_raw_sstr;
+ uint32 rom_ramstart;
+ uint32 rom_rodata_start;
+ uint32 rom_rodata_end;
+} dhd_event_log_t;
+#endif /* SHOW_LOGTRACE */
+
+#if defined(KEEP_ALIVE)
+extern int dhd_dev_start_mkeep_alive(dhd_pub_t *dhd_pub, u8 mkeep_alive_id, u8 *ip_pkt,
+ u16 ip_pkt_len, u8* src_mac_addr, u8* dst_mac_addr, u32 period_msec);
+extern int dhd_dev_stop_mkeep_alive(dhd_pub_t *dhd_pub, u8 mkeep_alive_id);
+#endif /* defined(KEEP_ALIVE) */
+
extern void dhd_timeout_start(dhd_timeout_t *tmo, uint usec);
extern int dhd_timeout_expired(dhd_timeout_t *tmo);
extern int dhd_ifname2idx(struct dhd_info *dhd, char *name);
+extern int dhd_ifidx2hostidx(struct dhd_info *dhd, int ifidx);
extern int dhd_net2idx(struct dhd_info *dhd, struct net_device *net);
extern struct net_device * dhd_idx2net(void *pub, int ifidx);
extern int net_os_send_hang_message(struct net_device *dev);
extern int wl_host_event(dhd_pub_t *dhd_pub, int *idx, void *pktdata,
- wl_event_msg_t *, void **data_ptr);
+ wl_event_msg_t *, void **data_ptr, void *);
extern void wl_event_to_host_order(wl_event_msg_t * evt);
extern int dhd_wl_ioctl(dhd_pub_t *dhd_pub, int ifindex, wl_ioctl_t *ioc, void *buf, int len);
extern int dhd_wl_ioctl_cmd(dhd_pub_t *dhd_pub, int cmd, void *arg, int len, uint8 set,
int ifindex);
+extern int dhd_wl_ioctl_get_intiovar(dhd_pub_t *dhd_pub, char *name, uint *pval,
+ int cmd, uint8 set, int ifidx);
+extern int dhd_wl_ioctl_set_intiovar(dhd_pub_t *dhd_pub, char *name, uint val,
+ int cmd, uint8 set, int ifidx);
+
extern void dhd_common_init(osl_t *osh);
extern int dhd_do_driver_init(struct net_device *net);
@@ -632,18 +854,38 @@
extern int dhd_bus_membytes(dhd_pub_t *dhdp, bool set, uint32 address, uint8 *data, uint size);
extern void dhd_print_buf(void *pbuf, int len, int bytes_per_line);
extern bool dhd_is_associated(dhd_pub_t *dhd, void *bss_buf, int *retval);
-#if defined(BCMSDIO)
+#if defined(BCMSDIO) || defined(BCMPCIE)
extern uint dhd_bus_chip_id(dhd_pub_t *dhdp);
extern uint dhd_bus_chiprev_id(dhd_pub_t *dhdp);
extern uint dhd_bus_chippkg_id(dhd_pub_t *dhdp);
-#endif /* defined(BCMSDIO) */
+#endif /* defined(BCMSDIO) || defined(BCMPCIE) */
#if defined(KEEP_ALIVE)
extern int dhd_keep_alive_onoff(dhd_pub_t *dhd);
#endif /* KEEP_ALIVE */
+/* OS spin lock API */
+extern void *dhd_os_spin_lock_init(osl_t *osh);
+extern void dhd_os_spin_lock_deinit(osl_t *osh, void *lock);
+extern unsigned long dhd_os_spin_lock(void *lock);
+void dhd_os_spin_unlock(void *lock, unsigned long flags);
+
+/*
+ * Manage sta objects in an interface. Interface is identified by an ifindex and
+ * sta(s) within an interface are managed using the MAC address of the sta.
+ */
+struct dhd_sta;
+extern struct dhd_sta *dhd_findadd_sta(void *pub, int ifidx, void *ea);
+extern void dhd_del_sta(void *pub, int ifidx, void *ea);
+extern int dhd_get_ap_isolate(dhd_pub_t *dhdp, uint32 idx);
+extern int dhd_set_ap_isolate(dhd_pub_t *dhdp, uint32 idx, int val);
+extern int dhd_bssidx2idx(dhd_pub_t *dhdp, uint32 bssidx);
+
extern bool dhd_is_concurrent_mode(dhd_pub_t *dhd);
extern int dhd_iovar(dhd_pub_t *pub, int ifidx, char *name, char *cmd_buf, uint cmd_len, int set);
+extern int dhd_getiovar(dhd_pub_t *pub, int ifidx, char *name, char *cmd_buf,
+ uint cmd_len, char **resptr, uint resp_len);
+
typedef enum cust_gpio_modes {
WLAN_RESET_ON,
WLAN_RESET_OFF,
@@ -708,6 +950,13 @@
/* Override to force tx queueing all the time */
extern uint dhd_force_tx_queueing;
+
+/* Default bcn_timeout value is 4 */
+#define DEFAULT_BCN_TIMEOUT_VALUE 4
+#ifndef CUSTOM_BCN_TIMEOUT_SETTING
+#define CUSTOM_BCN_TIMEOUT_SETTING DEFAULT_BCN_TIMEOUT_VALUE
+#endif
+
/* Default KEEP_ALIVE Period is 55 sec to prevent AP from sending Keep Alive probe frame */
#define DEFAULT_KEEP_ALIVE_VALUE 55000 /* msec */
#ifndef CUSTOM_KEEP_ALIVE_SETTING
@@ -739,10 +988,16 @@
/* hooks for custom PNO Event wake lock to guarantee enough time
 for the platform to detect the event before the system is suspended
*/
-#define DEFAULT_PNO_EVENT_LOCK_xTIME 2 /* multiplay of DHD_PACKET_TIMEOUT_MS */
+#define DEFAULT_PNO_EVENT_LOCK_xTIME 2 /* multiplier of DHD_PACKET_TIMEOUT_MS */
#ifndef CUSTOM_PNO_EVENT_LOCK_xTIME
#define CUSTOM_PNO_EVENT_LOCK_xTIME DEFAULT_PNO_EVENT_LOCK_xTIME
#endif
+
+#define DEFAULT_DHCP_LOCK_xTIME 2 /* multiplier of DHD_PACKET_TIMEOUT_MS */
+#ifndef CUSTOM_DHCP_LOCK_xTIME
+#define CUSTOM_DHCP_LOCK_xTIME DEFAULT_DHCP_LOCK_xTIME
+#endif
+
/* hooks for custom dhd_dpc_prio setting option via Makefile */
#define DEFAULT_DHP_DPC_PRIO 1
#ifndef CUSTOM_DPC_PRIO_SETTING
@@ -753,11 +1008,15 @@
#define CUSTOM_LISTEN_INTERVAL LISTEN_INTERVAL
#endif /* CUSTOM_LISTEN_INTERVAL */
-#define DEFAULT_SUSPEND_BCN_LI_DTIM 3
+#define DEFAULT_SUSPEND_BCN_LI_DTIM 5
#ifndef CUSTOM_SUSPEND_BCN_LI_DTIM
#define CUSTOM_SUSPEND_BCN_LI_DTIM DEFAULT_SUSPEND_BCN_LI_DTIM
#endif
+#ifndef BCN_TIMEOUT_IN_SUSPEND
+#define BCN_TIMEOUT_IN_SUSPEND 6 /* bcn timeout value in suspend mode */
+#endif
+
#ifndef CUSTOM_RXF_PRIO_SETTING
#define CUSTOM_RXF_PRIO_SETTING MAX((CUSTOM_DPC_PRIO_SETTING - 1), 1)
#endif
@@ -770,6 +1029,11 @@
#define WIFI_TURNON_DELAY DEFAULT_WIFI_TURNON_DELAY
#endif /* WIFI_TURNON_DELAY */
+#define DEFAULT_DHD_WATCHDOG_INTERVAL_MS 10 /* msec */
+#ifndef CUSTOM_DHD_WATCHDOG_MS
+#define CUSTOM_DHD_WATCHDOG_MS DEFAULT_DHD_WATCHDOG_INTERVAL_MS
+#endif /* DEFAULT_DHD_WATCHDOG_INTERVAL_MS */
+
#ifdef WLTDLS
#ifndef CUSTOM_TDLS_IDLE_MODE_SETTING
#define CUSTOM_TDLS_IDLE_MODE_SETTING 60000 /* 60 sec to tear down TDLS if not active */
@@ -785,8 +1049,12 @@
#define MAX_DTIM_SKIP_BEACON_INTERVAL 100 /* max allowed associated AP beacon for DTIM skip */
#ifndef MAX_DTIM_ALLOWED_INTERVAL
-#define MAX_DTIM_ALLOWED_INTERVAL 600 /* max allowed total beacon interval for DTIM skip */
+#define MAX_DTIM_ALLOWED_INTERVAL 900 /* max allowed total beacon interval for DTIM skip */
#endif
+#ifndef MIN_DTIM_FOR_ROAM_THRES_EXTEND
+#define MIN_DTIM_FOR_ROAM_THRES_EXTEND 600 /* minimum dtim interval to extend roam threshold */
+#endif
+
#define NO_DTIM_SKIP 1
#ifdef SDTEST
/* Echo packet generator (SDIO), pkts/s */
@@ -810,11 +1078,6 @@
extern uint dhd_download_fw_on_driverload;
-/* For supporting multiple interfaces */
-#define DHD_MAX_IFS 16
-#define DHD_DEL_IF -0xe
-#define DHD_BAD_IF -0xf
-
extern void dhd_wait_for_event(dhd_pub_t *dhd, bool *lockvar);
extern void dhd_wait_event_wakeup(dhd_pub_t *dhd);
@@ -837,7 +1100,10 @@
#endif /* ARP_OFFLOAD_SUPPORT */
#ifdef WLTDLS
int dhd_tdls_enable(struct net_device *dev, bool tdls_on, bool auto_on, struct ether_addr *mac);
-#endif
+#ifdef PCIE_FULL_DONGLE
+void dhd_tdls_update_peer_info(struct net_device *dev, bool connect_disconnect, uint8 *addr);
+#endif /* PCIE_FULL_DONGLE */
+#endif /* WLTDLS */
/* Neighbor Discovery Offload Support */
int dhd_ndo_enable(dhd_pub_t * dhd, int ndo_enable);
int dhd_ndo_add_ip(dhd_pub_t *dhd, char* ipaddr, int idx);
@@ -855,11 +1121,21 @@
#ifdef PROP_TXSTATUS
int dhd_os_wlfc_block(dhd_pub_t *pub);
int dhd_os_wlfc_unblock(dhd_pub_t *pub);
+extern const uint8 prio2fifo[];
#endif /* PROP_TXSTATUS */
+void dhd_save_fwdump(dhd_pub_t *dhd_pub, void * buffer, uint32 length);
+void dhd_schedule_memdump(dhd_pub_t *dhdp, uint8 *buf, uint32 size);
+int dhd_os_socram_dump(struct net_device *dev, uint32 *dump_size);
+int dhd_os_get_socram_dump(struct net_device *dev, char **buf, uint32 *size);
+int dhd_common_socram_dump(dhd_pub_t *dhdp);
+int dhd_os_get_version(struct net_device *dev, bool dhd_ver, char **buf, uint32 size);
+
uint8* dhd_os_prealloc(dhd_pub_t *dhdpub, int section, uint size, bool kmalloc_if_fail);
void dhd_os_prefree(dhd_pub_t *dhdpub, void *addr, uint size);
+int dhd_process_cid_mac(dhd_pub_t *dhdp, bool prepost);
+
#if defined(CONFIG_DHD_USE_STATIC_BUF)
#define DHD_OS_PREALLOC(dhdpub, section, size) dhd_os_prealloc(dhdpub, section, size, FALSE)
#define DHD_OS_PREFREE(dhdpub, addr, size) dhd_os_prefree(dhdpub, addr, size)
@@ -869,4 +1145,40 @@
#endif /* defined(CONFIG_DHD_USE_STATIC_BUF) */
+#define dhd_add_flowid(pub, ifidx, ac_prio, ea, flowid) do {} while (0)
+#define dhd_del_flowid(pub, ifidx, flowid) do {} while (0)
+
+extern unsigned long dhd_os_general_spin_lock(dhd_pub_t *pub);
+extern void dhd_os_general_spin_unlock(dhd_pub_t *pub, unsigned long flags);
+
+/** Miscellaneous DHD Spin Locks */
+
+/* Disable router 3GMAC bypass path perimeter lock */
+#define DHD_PERIM_LOCK(dhdp) do {} while (0)
+#define DHD_PERIM_UNLOCK(dhdp) do {} while (0)
+
+/* Enable DHD general spin lock/unlock */
+#define DHD_GENERAL_LOCK(dhdp, flags) \
+ (flags) = dhd_os_general_spin_lock(dhdp)
+#define DHD_GENERAL_UNLOCK(dhdp, flags) \
+ dhd_os_general_spin_unlock((dhdp), (flags))
+
+/* Enable DHD flowring spin lock/unlock */
+#define DHD_FLOWRING_LOCK(lock, flags) (flags) = dhd_os_spin_lock(lock)
+#define DHD_FLOWRING_UNLOCK(lock, flags) dhd_os_spin_unlock((lock), (flags))
+
+/* Enable DHD common flowring info spin lock/unlock */
+#define DHD_FLOWID_LOCK(lock, flags) (flags) = dhd_os_spin_lock(lock)
+#define DHD_FLOWID_UNLOCK(lock, flags) dhd_os_spin_unlock((lock), (flags))
+
+
+
+typedef struct wl_io_pport {
+ dhd_pub_t *dhd_pub;
+ uint ifidx;
+} wl_io_pport_t;
+
+extern void *dhd_pub_wlinfo(dhd_pub_t *dhd_pub);
+
+
#endif /* _dhd_h_ */
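The `WIFI_FEATURE_*` flags added to `dhd.h` above form the capability bitmask that `dhd_dev_get_feature_set()` reports to the Android framework. A standalone sketch of how such a mask is composed and queried — the flag values are copied from the defines above, but the helper function is illustrative only, not part of the driver:

```c
#include <stdint.h>

/* Standalone sketch of the capability bitmask reported by
 * dhd_dev_get_feature_set().  Flag values are copied from the
 * WIFI_FEATURE_* defines added to dhd.h above; the helper below
 * is illustrative, not driver code. */
#define WIFI_FEATURE_INFRA    0x0001u  /* Basic infrastructure mode */
#define WIFI_FEATURE_INFRA_5G 0x0002u  /* Support for 5 GHz band */
#define WIFI_FEATURE_P2P      0x0008u  /* Wifi-Direct */
#define WIFI_FEATURE_TDLS     0x1000u  /* Tunneled direct link setup */

/* A feature set is a plain OR of capability bits;
 * membership is a bitwise AND test. */
static int feature_supported(uint32_t feature_set, uint32_t feature)
{
    return (feature_set & feature) != 0;
}
```

A caller builds the set with `WIFI_FEATURE_INFRA | WIFI_FEATURE_INFRA_5G | ...` and tests individual capabilities with `feature_supported()`.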
diff --git a/drivers/net/wireless/bcmdhd/dhd_bta.c b/drivers/net/wireless/bcmdhd/dhd_bta.c
old mode 100755
new mode 100644
index 46ee3d4..d82d6d2
--- a/drivers/net/wireless/bcmdhd/dhd_bta.c
+++ b/drivers/net/wireless/bcmdhd/dhd_bta.c
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: dhd_bta.c 434656 2013-11-07 01:11:33Z $
+ * $Id: dhd_bta.c 434434 2013-11-06 07:16:02Z $
*/
#error "WLBTAMP is not defined"
diff --git a/drivers/net/wireless/bcmdhd/dhd_bta.h b/drivers/net/wireless/bcmdhd/dhd_bta.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/dhd_bus.h b/drivers/net/wireless/bcmdhd/dhd_bus.h
old mode 100755
new mode 100644
index f22ac6a..efcbab2
--- a/drivers/net/wireless/bcmdhd/dhd_bus.h
+++ b/drivers/net/wireless/bcmdhd/dhd_bus.h
@@ -5,13 +5,13 @@
* DHD OS, bus, and protocol modules.
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -19,12 +19,12 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: dhd_bus.h 457888 2014-02-25 03:34:39Z $
+ * $Id: dhd_bus.h 469959 2014-04-11 23:07:39Z $
*/
#ifndef _dhd_bus_h_
@@ -110,7 +110,11 @@
extern void *dhd_bus_txq(struct dhd_bus *bus);
extern void *dhd_bus_sih(struct dhd_bus *bus);
extern uint dhd_bus_hdrlen(struct dhd_bus *bus);
+#ifdef BCMSDIO
extern void dhd_bus_set_dotxinrx(struct dhd_bus *bus, bool val);
+#else
+#define dhd_bus_set_dotxinrx(a, b) do {} while (0)
+#endif
#define DHD_SET_BUS_STATE_DOWN(_bus) do { \
(_bus)->dhd->busstate = DHD_BUS_DOWN; \
@@ -126,33 +130,61 @@
#ifdef BCMPCIE
enum {
DNGL_TO_HOST_BUF_IOCT,
- DNGL_TO_HOST_BUF_ADDR,
- HOST_TO_DNGL_BUF_ADDR,
- HOST_TO_DNGL_WPTR,
- HOST_TO_DNGL_RPTR,
- DNGL_TO_HOST_WPTR,
- DNGL_TO_HOST_RPTR,
+ DNGL_TO_HOST_DMA_SCRATCH_BUFFER,
+ DNGL_TO_HOST_DMA_SCRATCH_BUFFER_LEN,
+ HOST_TO_DNGL_DMA_WRITEINDX_BUFFER,
+ HOST_TO_DNGL_DMA_READINDX_BUFFER,
+ DNGL_TO_HOST_DMA_WRITEINDX_BUFFER,
+ DNGL_TO_HOST_DMA_READINDX_BUFFER,
TOTAL_LFRAG_PACKET_CNT,
- HOST_TO_DNGL_CTRLBUF_ADDR,
- DNGL_TO_HOST_CTRLBUF_ADDR,
- HTOD_CTRL_RPTR,
- HTOD_CTRL_WPTR,
- DTOH_CTRL_RPTR,
- DTOH_CTRL_WPTR,
HTOD_MB_DATA,
DTOH_MB_DATA,
+ RING_BUF_ADDR,
+ H2D_DMA_WRITEINDX,
+ H2D_DMA_READINDX,
+ D2H_DMA_WRITEINDX,
+ D2H_DMA_READINDX,
+ RING_READ_PTR,
+ RING_WRITE_PTR,
+ RING_LEN_ITEMS,
+ RING_MAX_ITEM,
MAX_HOST_RXBUFS
};
typedef void (*dhd_mb_ring_t) (struct dhd_bus *, uint32);
-extern void dhd_bus_cmn_writeshared(struct dhd_bus *bus, void * data, uint32 len, uint8 type);
+extern void dhd_bus_cmn_writeshared(struct dhd_bus *bus, void * data, uint32 len, uint8 type,
+ uint16 ringid);
extern void dhd_bus_ringbell(struct dhd_bus *bus, uint32 value);
-extern void dhd_bus_cmn_readshared(struct dhd_bus *bus, void* data, uint8 type);
+extern void dhd_bus_cmn_readshared(struct dhd_bus *bus, void* data, uint8 type, uint16 ringid);
extern uint32 dhd_bus_get_sharedflags(struct dhd_bus *bus);
-extern void dhd_bus_rx_frame(struct dhd_bus *bus, void* pkt, int ifidx, uint pkt_count);
+extern void dhd_bus_rx_frame(struct dhd_bus *bus, void* pkt, int ifidx, uint pkt_count, int pkt_wake);
extern void dhd_bus_start_queue(struct dhd_bus *bus);
extern void dhd_bus_stop_queue(struct dhd_bus *bus);
-extern void dhd_bus_update_retlen(struct dhd_bus *bus, uint32 retlen, uint32 cmd_id, uint32 status,
- uint32 inline_data);
+extern void dhd_bus_update_retlen(struct dhd_bus *bus, uint32 retlen, uint32 cmd_id, uint16 status,
+ uint32 resp_len);
extern dhd_mb_ring_t dhd_bus_get_mbintr_fn(struct dhd_bus *bus);
+extern void dhd_bus_write_flow_ring_states(struct dhd_bus *bus,
+ void * data, uint16 flowid);
+extern void dhd_bus_read_flow_ring_states(struct dhd_bus *bus,
+ void * data, uint8 flowid);
+extern int dhd_bus_flow_ring_create_request(struct dhd_bus *bus, void *flow_ring_node);
+extern void dhd_bus_clean_flow_ring(struct dhd_bus *bus, void *flow_ring_node);
+extern void dhd_bus_flow_ring_create_response(struct dhd_bus *bus, uint16 flow_id, int32 status);
+extern int dhd_bus_flow_ring_delete_request(struct dhd_bus *bus, void *flow_ring_node);
+extern void dhd_bus_flow_ring_delete_response(struct dhd_bus *bus, uint16 flowid, uint32 status);
+extern int dhd_bus_flow_ring_flush_request(struct dhd_bus *bus, void *flow_ring_node);
+extern void dhd_bus_flow_ring_flush_response(struct dhd_bus *bus, uint16 flowid, uint32 status);
+extern uint8 dhd_bus_is_txmode_push(struct dhd_bus *bus);
+extern uint32 dhd_bus_max_h2d_queues(struct dhd_bus *bus, uint8 *txpush);
+extern int dhd_bus_schedule_queue(struct dhd_bus *bus, uint16 flow_id, bool txs);
+extern int dhdpcie_bus_clock_start(struct dhd_bus *bus);
+extern int dhdpcie_bus_clock_stop(struct dhd_bus *bus);
+extern int dhdpcie_bus_enable_device(struct dhd_bus *bus);
+extern int dhdpcie_bus_disable_device(struct dhd_bus *bus);
+extern int dhdpcie_bus_alloc_resource(struct dhd_bus *bus);
+extern void dhdpcie_bus_free_resource(struct dhd_bus *bus);
+extern bool dhdpcie_bus_dongle_attach(struct dhd_bus *bus);
+extern int dhd_bus_release_dongle(struct dhd_bus *bus);
+extern int dhd_bus_request_irq(struct dhd_bus *bus);
+
#endif /* BCMPCIE */
#endif /* _dhd_bus_h_ */
diff --git a/drivers/net/wireless/bcmdhd/dhd_cdc.c b/drivers/net/wireless/bcmdhd/dhd_cdc.c
old mode 100755
new mode 100644
index ad7ab42..f6addba
--- a/drivers/net/wireless/bcmdhd/dhd_cdc.c
+++ b/drivers/net/wireless/bcmdhd/dhd_cdc.c
@@ -2,13 +2,13 @@
* DHD Protocol Module for CDC and BDC.
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -16,12 +16,12 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: dhd_cdc.c 449353 2014-01-16 21:34:16Z $
+ * $Id: dhd_cdc.c 472193 2014-04-23 06:27:38Z $
*
* BDC is like CDC, except it includes a header for data packets to convey
* packet priority over the bus, and flags (e.g. to indicate checksum status
@@ -379,6 +379,17 @@
}
#undef PKTBUF /* Only defined in the above routine */
+uint
+dhd_prot_hdrlen(dhd_pub_t *dhd, void *PKTBUF)
+{
+ uint hdrlen = 0;
+#ifdef BDC
+ /* Length of BDC(+WLFC) headers pushed */
+ hdrlen = BDC_HEADER_LEN + (((struct bdc_header *)PKTBUF)->dataOffset * 4);
+#endif
+ return hdrlen;
+}
+
int
dhd_prot_hdrpull(dhd_pub_t *dhd, int *ifidx, void *pktbuf, uchar *reorder_buf_info,
uint *reorder_info_len)
@@ -476,6 +487,9 @@
dhd->hdrlen += BDC_HEADER_LEN;
#endif
dhd->maxctl = WLC_IOCTL_MAXLEN + sizeof(cdc_ioctl_t) + ROUND_UP_MARGIN;
+ /* set the memdump capability */
+ dhd->memdump_enabled = DUMP_MEMONLY;
+
return 0;
fail:
@@ -498,7 +512,8 @@
void
dhd_prot_dstats(dhd_pub_t *dhd)
{
-/* No stats from dongle added yet, copy bus stats */
+ /* copy bus stats */
+
dhd->dstats.tx_packets = dhd->tx_packets;
dhd->dstats.tx_errors = dhd->tx_errors;
dhd->dstats.rx_packets = dhd->rx_packets;
@@ -509,7 +524,7 @@
}
int
-dhd_prot_init(dhd_pub_t *dhd)
+dhd_sync_with_dongle(dhd_pub_t *dhd)
{
int ret = 0;
wlc_rev_info_t revinfo;
@@ -523,8 +538,13 @@
goto done;
+ dhd_process_cid_mac(dhd, TRUE);
+
ret = dhd_preinit_ioctls(dhd);
+ if (!ret)
+ dhd_process_cid_mac(dhd, FALSE);
+
/* Always assumes wl for now */
dhd->iswl = TRUE;
@@ -532,6 +552,11 @@
return ret;
}
+int dhd_prot_init(dhd_pub_t *dhd)
+{
+ return TRUE;
+}
+
void
dhd_prot_stop(dhd_pub_t *dhd)
{
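The new `dhd_prot_hdrlen()` above returns the fixed BDC header length plus `dataOffset` extension words of 4 bytes each. A userspace model of that arithmetic — the 4-byte fixed header and the struct layout mirror the conventional BDC header but are restated here as assumptions for illustration:

```c
#include <stdint.h>

/* Model of the length computation in dhd_prot_hdrlen():
 * pushed header = fixed BDC header + dataOffset 32-bit words.
 * The 4-byte fixed length and field layout are assumptions based on
 * the conventional BDC header; names are illustrative. */
#define MODEL_BDC_HEADER_LEN 4

struct model_bdc_header {
    uint8_t flags;
    uint8_t priority;    /* 802.1d priority in the low bits */
    uint8_t flags2;
    uint8_t dataOffset;  /* extra header space, in 4-byte words */
};

static unsigned int model_hdrlen(const struct model_bdc_header *h)
{
    return MODEL_BDC_HEADER_LEN + (unsigned int)h->dataOffset * 4u;
}
```

So a packet whose BDC header carries `dataOffset == 2` has 4 + 8 = 12 bytes of protocol header pushed in front of the payload.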
diff --git a/drivers/net/wireless/bcmdhd/dhd_cfg80211.c b/drivers/net/wireless/bcmdhd/dhd_cfg80211.c
old mode 100755
new mode 100644
index 987912f..eb98e28
--- a/drivers/net/wireless/bcmdhd/dhd_cfg80211.c
+++ b/drivers/net/wireless/bcmdhd/dhd_cfg80211.c
@@ -30,7 +30,6 @@
#include <bcmutils.h>
#include <wldev_common.h>
#include <wl_cfg80211.h>
-#include <brcm_nl80211.h>
#include <dhd_cfg80211.h>
#ifdef PKT_FILTER_SUPPORT
@@ -52,9 +51,14 @@
#include <dhd.h>
#include <dhdioctl.h>
#include <wlioctl.h>
+#include <brcm_nl80211.h>
#include <dhd_cfg80211.h>
+#ifdef PCIE_FULL_DONGLE
+#include <dhd_flowring.h>
+#endif
-static s32 wl_dongle_up(struct net_device *ndev, u32 up);
+static s32 wl_dongle_up(struct net_device *ndev);
+static s32 wl_dongle_down(struct net_device *ndev);
/**
* Function implementations
@@ -74,6 +78,17 @@
s32 dhd_cfg80211_down(struct bcm_cfg80211 *cfg)
{
+ struct net_device *ndev;
+ s32 err = 0;
+
+ WL_TRACE(("In\n"));
+ if (!dhd_dongle_up) {
+ WL_ERR(("Dongle is already down\n"));
+ return err;
+ }
+
+ ndev = bcmcfg_to_prmry_ndev(cfg);
+ wl_dongle_down(ndev);
dhd_dongle_up = FALSE;
return 0;
}
@@ -127,9 +142,34 @@
return dhd_remove_if(cfg->pub, ifidx, FALSE);
}
-static s32 wl_dongle_up(struct net_device *ndev, u32 up)
+struct net_device * dhd_cfg80211_netdev_free(struct net_device *ndev)
+{
+ if (ndev) {
+ if (ndev->ieee80211_ptr) {
+ kfree(ndev->ieee80211_ptr);
+ ndev->ieee80211_ptr = NULL;
+ }
+ free_netdev(ndev);
+ return NULL;
+ }
+
+ return ndev;
+}
+
+void dhd_netdev_free(struct net_device *ndev)
+{
+#ifdef WL_CFG80211
+ ndev = dhd_cfg80211_netdev_free(ndev);
+#endif
+ if (ndev)
+ free_netdev(ndev);
+}
+
+static s32
+wl_dongle_up(struct net_device *ndev)
{
s32 err = 0;
+ u32 up = 0;
err = wldev_ioctl(ndev, WLC_UP, &up, sizeof(up), true);
if (unlikely(err)) {
@@ -138,6 +178,20 @@
return err;
}
+static s32
+wl_dongle_down(struct net_device *ndev)
+{
+ s32 err = 0;
+ u32 down = 0;
+
+ err = wldev_ioctl(ndev, WLC_DOWN, &down, sizeof(down), true);
+ if (unlikely(err)) {
+ WL_ERR(("WLC_DOWN error (%d)\n", err));
+ }
+ return err;
+}
+
+
s32 dhd_config_dongle(struct bcm_cfg80211 *cfg)
{
#ifndef DHD_SDALIGN
@@ -154,7 +208,7 @@
ndev = bcmcfg_to_prmry_ndev(cfg);
- err = wl_dongle_up(ndev, 0);
+ err = wl_dongle_up(ndev);
if (unlikely(err)) {
WL_ERR(("wl_dongle_up failed\n"));
goto default_conf_out;
@@ -167,8 +221,22 @@
}
+#ifdef PCIE_FULL_DONGLE
+void wl_roam_flowring_cleanup(struct bcm_cfg80211 *cfg)
+{
+ int hostidx = 0;
+ dhd_pub_t *dhd_pub = (dhd_pub_t *)(cfg->pub);
+ hostidx = dhd_ifidx2hostidx(dhd_pub->info, hostidx);
+ dhd_flow_rings_delete(dhd_pub, hostidx);
+}
+#endif
+
#ifdef CONFIG_NL80211_TESTMODE
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 11, 0))
+int dhd_cfg80211_testmode_cmd(struct wiphy *wiphy, struct wireless_dev *wdev, void *data, int len)
+#else
int dhd_cfg80211_testmode_cmd(struct wiphy *wiphy, void *data, int len)
+#endif /* LINUX_VERSION_CODE >= KERNEL_VERSION(3, 11, 0) */
{
struct sk_buff *reply;
struct bcm_cfg80211 *cfg;
@@ -180,6 +248,10 @@
u16 buflen;
u16 maxmsglen = PAGE_SIZE - 0x100;
bool newbuf = false;
+ int8 index = 0;
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 11, 0))
+ struct net_device *ndev = NULL;
+#endif /* LINUX_VERSION_CODE >= KERNEL_VERSION(3, 11, 0) */
WL_TRACE(("entry: cmd = %d\n", nlioc->cmd));
cfg = wiphy_priv(wiphy);
@@ -213,11 +285,20 @@
}
}
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 11, 0))
+ ndev = wdev_to_wlc_ndev(wdev, cfg);
+ index = dhd_net2idx(dhd->info, ndev);
+ if (index == DHD_BAD_IF) {
+ WL_ERR(("Bad ifidx from wdev:%p\n", wdev));
+ return BCME_ERROR;
+ }
+#endif /* LINUX_VERSION_CODE >= KERNEL_VERSION(3, 11, 0) */
+
ioc.cmd = nlioc->cmd;
ioc.len = nlioc->len;
ioc.set = nlioc->set;
ioc.driver = nlioc->magic;
- err = dhd_ioctl_process(dhd, 0, &ioc, buf);
+ err = dhd_ioctl_process(dhd, index, &ioc, buf);
if (err) {
WL_TRACE(("dhd_ioctl_process return err %d\n", err));
err = OSL_ERROR(err);
diff --git a/drivers/net/wireless/bcmdhd/dhd_cfg80211.h b/drivers/net/wireless/bcmdhd/dhd_cfg80211.h
old mode 100755
new mode 100644
index 905b306..bf89f12
--- a/drivers/net/wireless/bcmdhd/dhd_cfg80211.h
+++ b/drivers/net/wireless/bcmdhd/dhd_cfg80211.h
@@ -31,20 +31,39 @@
#include <wl_cfg80211.h>
#include <wl_cfgp2p.h>
+#ifndef WL_ERR
+#define WL_ERR CFG80211_ERR
+#endif
+#ifndef WL_TRACE
+#define WL_TRACE CFG80211_TRACE
+#endif
+
s32 dhd_cfg80211_init(struct bcm_cfg80211 *cfg);
s32 dhd_cfg80211_deinit(struct bcm_cfg80211 *cfg);
s32 dhd_cfg80211_down(struct bcm_cfg80211 *cfg);
s32 dhd_cfg80211_set_p2p_info(struct bcm_cfg80211 *cfg, int val);
s32 dhd_cfg80211_clean_p2p_info(struct bcm_cfg80211 *cfg);
s32 dhd_config_dongle(struct bcm_cfg80211 *cfg);
+#ifdef PCIE_FULL_DONGLE
+void wl_roam_flowring_cleanup(struct bcm_cfg80211 *cfg);
+#endif
#ifdef CONFIG_NL80211_TESTMODE
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 11, 0))
+int dhd_cfg80211_testmode_cmd(struct wiphy *wiphy, struct wireless_dev *wdev, void *data, int len);
+#else
int dhd_cfg80211_testmode_cmd(struct wiphy *wiphy, void *data, int len);
+#endif /* LINUX_VERSION_CODE >= KERNEL_VERSION(3, 11, 0) */
+#else
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 11, 0))
+static inline int
+dhd_cfg80211_testmode_cmd(struct wiphy *wiphy, struct wireless_dev *wdev, void *data, int len)
#else
static inline int dhd_cfg80211_testmode_cmd(struct wiphy *wiphy, void *data, int len)
+#endif /* LINUX_VERSION_CODE >= KERNEL_VERSION(3, 11, 0) */
{
return 0;
}
-#endif
+#endif /* CONFIG_NL80211_TESTMODE */
#endif /* __DHD_CFG80211__ */
diff --git a/drivers/net/wireless/bcmdhd/dhd_common.c b/drivers/net/wireless/bcmdhd/dhd_common.c
old mode 100755
new mode 100644
index 21ce01d..0959db9
--- a/drivers/net/wireless/bcmdhd/dhd_common.c
+++ b/drivers/net/wireless/bcmdhd/dhd_common.c
@@ -2,13 +2,13 @@
* Broadcom Dongle Host Driver (DHD), common DHD core.
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -16,12 +16,12 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: dhd_common.c 457888 2014-02-25 03:34:39Z $
+ * $Id: dhd_common.c 473079 2014-04-27 07:47:16Z $
*/
#include <typedefs.h>
#include <osl.h>
@@ -34,12 +34,21 @@
#include <wlioctl.h>
#include <dhd.h>
#include <dhd_ip.h>
-
#include <proto/bcmevent.h>
+#include <proto/dnglevent.h>
+
+#ifdef SHOW_LOGTRACE
+#include <event_log.h>
+#endif /* SHOW_LOGTRACE */
+
+#ifdef BCMPCIE
+#include <dhd_flowring.h>
+#endif
#include <dhd_bus.h>
#include <dhd_proto.h>
#include <dhd_dbg.h>
+#include <dhd_debug.h>
#include <msgtrace.h>
#ifdef WL_CFG80211
@@ -48,9 +57,8 @@
#ifdef PNO_SUPPORT
#include <dhd_pno.h>
#endif
-#ifdef SET_RANDOM_MAC_SOFTAP
-#include <linux/random.h>
-#include <linux/jiffies.h>
+#ifdef RTT_SUPPORT
+#include <dhd_rtt.h>
#endif
#define htod32(i) (i)
@@ -65,6 +73,12 @@
#include <dhd_wlfc.h>
#endif
+#ifdef DHD_WMF
+#include <dhd_linux.h>
+#include <dhd_wmf_linux.h>
+#endif /* DHD_WMF */
+
+
#ifdef WLMEDIA_HTSF
extern void htsf_update(struct dhd_info *dhd, void *data);
#endif
@@ -83,7 +97,6 @@
uint32 dhd_conn_status;
uint32 dhd_conn_reason;
-extern int disable_proptx;
extern int dhd_iscan_request(void * dhdp, uint16 action);
extern void dhd_ind_scan_confirm(void *h, bool status);
extern int dhd_iscan_in_progress(void *h);
@@ -93,6 +106,9 @@
#if !defined(AP) && defined(WLP2P)
extern int dhd_get_concurrent_capabilites(dhd_pub_t *dhd);
#endif
+extern int dhd_socram_dump(struct dhd_bus *bus);
+static void dngl_host_event_process(dhd_pub_t *dhdp, bcm_dngl_event_t *event);
+static int dngl_host_event(dhd_pub_t *dhdp, void *pktdata);
bool ap_cfg_running = FALSE;
bool ap_fw_loaded = FALSE;
@@ -109,10 +125,12 @@
DHD_COMPILED " on " __DATE__ " at " __TIME__;
#else
const char dhd_version[] = "\nDongle Host Driver, version " EPI_VERSION_STR "\nCompiled from ";
-#endif
+#endif
void dhd_set_timer(void *bus, uint wdtick);
+
+
/* IOVar table */
enum {
IOV_VERSION = 1,
@@ -135,13 +153,10 @@
IOV_PROPTXSTATUS_ENABLE,
IOV_PROPTXSTATUS_MODE,
IOV_PROPTXSTATUS_OPT,
-#ifdef QMONITOR
- IOV_QMON_TIME_THRES,
- IOV_QMON_TIME_PERCENT,
-#endif /* QMONITOR */
IOV_PROPTXSTATUS_MODULE_IGNORE,
IOV_PROPTXSTATUS_CREDIT_IGNORE,
IOV_PROPTXSTATUS_TXSTATUS_IGNORE,
+ IOV_PROPTXSTATUS_RXPKT_CHK,
#endif /* PROP_TXSTATUS */
IOV_BUS_TYPE,
#ifdef WLMEDIA_HTSF
@@ -152,6 +167,24 @@
#ifdef DHDTCPACK_SUPPRESS
IOV_TCPACK_SUPPRESS,
#endif /* DHDTCPACK_SUPPRESS */
+#ifdef DHD_WMF
+ IOV_WMF_BSS_ENAB,
+ IOV_WMF_UCAST_IGMP,
+ IOV_WMF_MCAST_DATA_SENDUP,
+#ifdef WL_IGMP_UCQUERY
+ IOV_WMF_UCAST_IGMP_QUERY,
+#endif /* WL_IGMP_UCQUERY */
+#ifdef DHD_UCAST_UPNP
+ IOV_WMF_UCAST_UPNP,
+#endif /* DHD_UCAST_UPNP */
+#endif /* DHD_WMF */
+ IOV_AP_ISOLATE,
+#ifdef DHD_UNICAST_DHCP
+ IOV_DHCP_UNICAST,
+#endif /* DHD_UNICAST_DHCP */
+#ifdef DHD_L2_FILTER
+ IOV_BLOCK_PING,
+#endif
IOV_LAST
};
@@ -181,13 +214,10 @@
*/
{"ptxmode", IOV_PROPTXSTATUS_MODE, 0, IOVT_UINT32, 0 },
{"proptx_opt", IOV_PROPTXSTATUS_OPT, 0, IOVT_UINT32, 0 },
-#ifdef QMONITOR
- {"qtime_thres", IOV_QMON_TIME_THRES, 0, IOVT_UINT32, 0 },
- {"qtime_percent", IOV_QMON_TIME_PERCENT, 0, IOVT_UINT32, 0 },
-#endif /* QMONITOR */
{"pmodule_ignore", IOV_PROPTXSTATUS_MODULE_IGNORE, 0, IOVT_BOOL, 0 },
{"pcredit_ignore", IOV_PROPTXSTATUS_CREDIT_IGNORE, 0, IOVT_BOOL, 0 },
{"ptxstatus_ignore", IOV_PROPTXSTATUS_TXSTATUS_IGNORE, 0, IOVT_BOOL, 0 },
+ {"rxpkt_chk", IOV_PROPTXSTATUS_RXPKT_CHK, 0, IOVT_BOOL, 0 },
#endif /* PROP_TXSTATUS */
{"bustype", IOV_BUS_TYPE, 0, IOVT_UINT32, 0},
#ifdef WLMEDIA_HTSF
@@ -199,14 +229,50 @@
#ifdef DHDTCPACK_SUPPRESS
{"tcpack_suppress", IOV_TCPACK_SUPPRESS, 0, IOVT_UINT8, 0 },
#endif /* DHDTCPACK_SUPPRESS */
+#ifdef DHD_WMF
+ {"wmf_bss_enable", IOV_WMF_BSS_ENAB, 0, IOVT_BOOL, 0 },
+ {"wmf_ucast_igmp", IOV_WMF_UCAST_IGMP, 0, IOVT_BOOL, 0 },
+ {"wmf_mcast_data_sendup", IOV_WMF_MCAST_DATA_SENDUP, 0, IOVT_BOOL, 0 },
+#ifdef WL_IGMP_UCQUERY
+ {"wmf_ucast_igmp_query", IOV_WMF_UCAST_IGMP_QUERY, (0), IOVT_BOOL, 0 },
+#endif /* WL_IGMP_UCQUERY */
+#ifdef DHD_UCAST_UPNP
+ {"wmf_ucast_upnp", IOV_WMF_UCAST_UPNP, (0), IOVT_BOOL, 0 },
+#endif /* DHD_UCAST_UPNP */
+#endif /* DHD_WMF */
+#ifdef DHD_UNICAST_DHCP
+ {"dhcp_unicast", IOV_DHCP_UNICAST, (0), IOVT_BOOL, 0 },
+#endif /* DHD_UNICAST_DHCP */
+ {"ap_isolate", IOV_AP_ISOLATE, (0), IOVT_BOOL, 0},
+#ifdef DHD_L2_FILTER
+ {"block_ping", IOV_BLOCK_PING, (0), IOVT_BOOL, 0},
+#endif
{NULL, 0, 0, 0, 0 }
};
+void dhd_save_fwdump(dhd_pub_t *dhd_pub, void *buffer, uint32 length)
+{
+ if (dhd_pub->soc_ram == NULL) {
+ DHD_ERROR(("%s: Failed to allocate memory for fw crash snapshot.\n",
+ __FUNCTION__));
+ return;
+ }
+
+ if (dhd_pub->soc_ram != buffer) {
+ memset(dhd_pub->soc_ram, 0, dhd_pub->soc_ram_length);
+ dhd_pub->soc_ram_length = length;
+ memcpy(dhd_pub->soc_ram, buffer, length);
+ }
+}
#define DHD_IOVAR_BUF_SIZE 128
/* to NDIS developer, the structure dhd_common is redundant,
* please do NOT merge it back from other branches !!!
*/
+int dhd_common_socram_dump(dhd_pub_t *dhdp)
+{
+ return dhd_socram_dump(dhdp->bus);
+}
static int
dhd_dump(dhd_pub_t *dhdp, char *buf, int buflen)
@@ -239,8 +305,8 @@
bcm_bprintf(strbuf, "multicast %lu\n", dhdp->dstats.multicast);
bcm_bprintf(strbuf, "bus stats:\n");
- bcm_bprintf(strbuf, "tx_packets %lu tx_multicast %lu tx_errors %lu\n",
- dhdp->tx_packets, dhdp->tx_multicast, dhdp->tx_errors);
+ bcm_bprintf(strbuf, "tx_packets %lu tx_dropped %lu tx_multicast %lu tx_errors %lu\n",
+ dhdp->tx_packets, dhdp->tx_dropped, dhdp->tx_multicast, dhdp->tx_errors);
bcm_bprintf(strbuf, "tx_ctlpkts %lu tx_ctlerrs %lu\n",
dhdp->tx_ctlpkts, dhdp->tx_ctlerrs);
bcm_bprintf(strbuf, "rx_packets %lu rx_multicast %lu rx_errors %lu \n",
@@ -258,11 +324,12 @@
/* Add any bus info */
dhd_bus_dump(dhdp, strbuf);
+
return (!strbuf->size ? BCME_BUFTOOSHORT : 0);
}
int
-dhd_wl_ioctl_cmd(dhd_pub_t *dhd_pub, int cmd, void *arg, int len, uint8 set, int ifindex)
+dhd_wl_ioctl_cmd(dhd_pub_t *dhd_pub, int cmd, void *arg, int len, uint8 set, int ifidx)
{
wl_ioctl_t ioc;
@@ -271,22 +338,35 @@
ioc.len = len;
ioc.set = set;
- return dhd_wl_ioctl(dhd_pub, ifindex, &ioc, arg, len);
+ return dhd_wl_ioctl(dhd_pub, ifidx, &ioc, arg, len);
}
-
int
-dhd_wl_ioctl(dhd_pub_t *dhd_pub, int ifindex, wl_ioctl_t *ioc, void *buf, int len)
+dhd_wl_ioctl(dhd_pub_t *dhd_pub, int ifidx, wl_ioctl_t *ioc, void *buf, int len)
{
- int ret = 0;
+ int ret = BCME_ERROR;
if (dhd_os_proto_block(dhd_pub))
{
+#if defined(WL_WLC_SHIM)
+ wl_info_t *wl = dhd_pub_wlinfo(dhd_pub);
- ret = dhd_prot_ioctl(dhd_pub, ifindex, ioc, buf, len);
- if ((ret) && (dhd_pub->up))
+ wl_io_pport_t io_pport;
+ io_pport.dhd_pub = dhd_pub;
+ io_pport.ifidx = ifidx;
+
+ ret = wl_shim_ioctl(wl->shim, ioc, &io_pport);
+ if (ret != BCME_OK) {
+ DHD_ERROR(("%s: wl_shim_ioctl(%d) ERR %d\n", __FUNCTION__, ioc->cmd, ret));
+ }
+#else
+ ret = dhd_prot_ioctl(dhd_pub, ifidx, ioc, buf, len);
+#endif /* defined(WL_WLC_SHIM) */
+
+ if (ret && dhd_pub->up) {
/* Send hang event only if dhd_open() was success */
- dhd_os_check_hang(dhd_pub, ifindex, ret);
+ dhd_os_check_hang(dhd_pub, ifidx, ret);
+ }
if (ret == -ETIMEDOUT && !dhd_pub->up) {
DHD_ERROR(("%s: 'resumed on timeout' error is "
@@ -297,12 +377,58 @@
dhd_os_proto_unblock(dhd_pub);
-
}
return ret;
}
+uint wl_get_port_num(wl_io_pport_t *io_pport)
+{
+ return 0;
+}
+
+/* Get bssidx from iovar params
+ * Input: dhd_pub - pointer to dhd_pub_t
+ * params - IOVAR params
+ * Output: idx - BSS index
+ * val - pointer to the IOVAR arguments
+ */
+static int
+dhd_iovar_parse_bssidx(dhd_pub_t *dhd_pub, char *params, int *idx, char **val)
+{
+ char *prefix = "bsscfg:";
+ uint32 bssidx;
+
+ if (!(strncmp(params, prefix, strlen(prefix)))) {
+ /* per bss setting should be prefixed with 'bsscfg:' */
+ char *p = (char *)params + strlen(prefix);
+
+ /* Skip Name */
+ while (*p != '\0')
+ p++;
+ /* consider null */
+ p = p + 1;
+ bcopy(p, &bssidx, sizeof(uint32));
+ /* Get corresponding dhd index */
+ bssidx = dhd_bssidx2idx(dhd_pub, bssidx);
+
+ if (bssidx >= DHD_MAX_IFS) {
+ DHD_ERROR(("%s Wrong bssidx provided\n", __FUNCTION__));
+ return BCME_ERROR;
+ }
+
+ /* skip bss idx */
+ p += sizeof(uint32);
+ *val = p;
+ *idx = bssidx;
+ } else {
+ DHD_ERROR(("%s: bad parameter for per bss iovar\n", __FUNCTION__));
+ return BCME_ERROR;
+ }
+
+ return BCME_OK;
+}
+
static int
dhd_doiovar(dhd_pub_t *dhd_pub, const bcm_iovar_t *vi, uint32 actionid, const char *name,
void *params, int plen, void *arg, int len, int val_size)
@@ -327,7 +453,7 @@
case IOV_GVAL(IOV_MSGLEVEL):
int_val = (int32)dhd_msg_level;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_MSGLEVEL):
@@ -348,12 +474,12 @@
case IOV_GVAL(IOV_BCMERROR):
int_val = (int32)dhd_pub->bcmerror;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_GVAL(IOV_WDTICK):
int_val = (int32)dhd_watchdog_ms;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_WDTICK):
@@ -371,7 +497,7 @@
#ifdef DHD_DEBUG
case IOV_GVAL(IOV_DCONSOLE_POLL):
int_val = (int32)dhd_console_ms;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_DCONSOLE_POLL):
@@ -389,6 +515,7 @@
dhd_pub->tx_errors = dhd_pub->rx_errors = 0;
dhd_pub->tx_ctlpkts = dhd_pub->rx_ctlpkts = 0;
dhd_pub->tx_ctlerrs = dhd_pub->rx_ctlerrs = 0;
+ dhd_pub->tx_dropped = 0;
dhd_pub->rx_dropped = 0;
dhd_pub->rx_readahead_cnt = 0;
dhd_pub->tx_realloc = 0;
@@ -424,7 +551,7 @@
if (bcmerror != BCME_OK)
goto exit;
int_val = wlfc_enab ? 1 : 0;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
}
case IOV_SVAL(IOV_PROPTXSTATUS_ENABLE): {
@@ -433,7 +560,6 @@
if (bcmerror != BCME_OK)
goto exit;
- disable_proptx = int_val ? FALSE : TRUE;
/* wlfc is already set as desired */
if (wlfc_enab == (int_val == 0 ? FALSE : TRUE))
goto exit;
@@ -449,36 +575,18 @@
bcmerror = dhd_wlfc_get_mode(dhd_pub, &int_val);
if (bcmerror != BCME_OK)
goto exit;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_PROPTXSTATUS_MODE):
dhd_wlfc_set_mode(dhd_pub, int_val);
break;
-#ifdef QMONITOR
- case IOV_GVAL(IOV_QMON_TIME_THRES): {
- int_val = dhd_qmon_thres(dhd_pub, FALSE, 0);
- bcopy(&int_val, arg, val_size);
- break;
- }
-
- case IOV_SVAL(IOV_QMON_TIME_THRES): {
- dhd_qmon_thres(dhd_pub, TRUE, int_val);
- break;
- }
-
- case IOV_GVAL(IOV_QMON_TIME_PERCENT): {
- int_val = dhd_qmon_getpercent(dhd_pub);
- bcopy(&int_val, arg, val_size);
- break;
- }
-#endif /* QMONITOR */
case IOV_GVAL(IOV_PROPTXSTATUS_MODULE_IGNORE):
bcmerror = dhd_wlfc_get_module_ignore(dhd_pub, &int_val);
if (bcmerror != BCME_OK)
goto exit;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_PROPTXSTATUS_MODULE_IGNORE):
@@ -489,7 +597,7 @@
bcmerror = dhd_wlfc_get_credit_ignore(dhd_pub, &int_val);
if (bcmerror != BCME_OK)
goto exit;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_PROPTXSTATUS_CREDIT_IGNORE):
@@ -500,12 +608,24 @@
bcmerror = dhd_wlfc_get_txstatus_ignore(dhd_pub, &int_val);
if (bcmerror != BCME_OK)
goto exit;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_PROPTXSTATUS_TXSTATUS_IGNORE):
dhd_wlfc_set_txstatus_ignore(dhd_pub, int_val);
break;
+
+ case IOV_GVAL(IOV_PROPTXSTATUS_RXPKT_CHK):
+ bcmerror = dhd_wlfc_get_rxpkt_chk(dhd_pub, &int_val);
+ if (bcmerror != BCME_OK)
+ goto exit;
+ bcopy(&int_val, arg, sizeof(int_val));
+ break;
+
+ case IOV_SVAL(IOV_PROPTXSTATUS_RXPKT_CHK):
+ dhd_wlfc_set_rxpkt_chk(dhd_pub, int_val);
+ break;
+
#endif /* PROP_TXSTATUS */
case IOV_GVAL(IOV_BUS_TYPE):
@@ -519,14 +639,14 @@
#ifdef PCIE_FULL_DONGLE
int_val = BUS_TYPE_PCIE;
#endif
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
#ifdef WLMEDIA_HTSF
case IOV_GVAL(IOV_WLPKTDLYSTAT_SZ):
int_val = dhd_pub->htsfdlystat_sz;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_WLPKTDLYSTAT_SZ):
@@ -560,7 +680,7 @@
#ifdef DHDTCPACK_SUPPRESS
case IOV_GVAL(IOV_TCPACK_SUPPRESS): {
int_val = (uint32)dhd_pub->tcpack_sup_mode;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
}
case IOV_SVAL(IOV_TCPACK_SUPPRESS): {
@@ -568,6 +688,175 @@
break;
}
#endif /* DHDTCPACK_SUPPRESS */
+#ifdef DHD_WMF
+ case IOV_GVAL(IOV_WMF_BSS_ENAB): {
+ uint32 bssidx;
+ dhd_wmf_t *wmf;
+ char *val;
+
+ if (dhd_iovar_parse_bssidx(dhd_pub, (char *)name, &bssidx, &val) != BCME_OK) {
+ DHD_ERROR(("%s: wmf_bss_enable: bad parameter\n", __FUNCTION__));
+ bcmerror = BCME_BADARG;
+ break;
+ }
+
+ wmf = dhd_wmf_conf(dhd_pub, bssidx);
+ int_val = wmf->wmf_enable ? 1 : 0;
+ bcopy(&int_val, arg, sizeof(int_val));
+ break;
+ }
+ case IOV_SVAL(IOV_WMF_BSS_ENAB): {
+ /* Enable/Disable WMF */
+ uint32 bssidx;
+ dhd_wmf_t *wmf;
+ char *val;
+
+ if (dhd_iovar_parse_bssidx(dhd_pub, (char *)name, &bssidx, &val) != BCME_OK) {
+ DHD_ERROR(("%s: wmf_bss_enable: bad parameter\n", __FUNCTION__));
+ bcmerror = BCME_BADARG;
+ break;
+ }
+
+ ASSERT(val);
+ bcopy(val, &int_val, sizeof(uint32));
+ wmf = dhd_wmf_conf(dhd_pub, bssidx);
+ if (wmf->wmf_enable == int_val)
+ break;
+ if (int_val) {
+ /* Enable WMF */
+ if (dhd_wmf_instance_add(dhd_pub, bssidx) != BCME_OK) {
+ DHD_ERROR(("%s: Error in creating WMF instance\n",
+ __FUNCTION__));
+ break;
+ }
+ if (dhd_wmf_start(dhd_pub, bssidx) != BCME_OK) {
+ DHD_ERROR(("%s: Failed to start WMF\n", __FUNCTION__));
+ break;
+ }
+ wmf->wmf_enable = TRUE;
+ } else {
+ /* Disable WMF */
+ wmf->wmf_enable = FALSE;
+ dhd_wmf_stop(dhd_pub, bssidx);
+ dhd_wmf_instance_del(dhd_pub, bssidx);
+ }
+ break;
+ }
+ case IOV_GVAL(IOV_WMF_UCAST_IGMP):
+ int_val = dhd_pub->wmf_ucast_igmp ? 1 : 0;
+ bcopy(&int_val, arg, sizeof(int_val));
+ break;
+ case IOV_SVAL(IOV_WMF_UCAST_IGMP):
+ if (dhd_pub->wmf_ucast_igmp == int_val)
+ break;
+
+ if (int_val >= OFF && int_val <= ON)
+ dhd_pub->wmf_ucast_igmp = int_val;
+ else
+ bcmerror = BCME_RANGE;
+ break;
+ case IOV_GVAL(IOV_WMF_MCAST_DATA_SENDUP):
+ int_val = dhd_wmf_mcast_data_sendup(dhd_pub, 0, FALSE, FALSE);
+ bcopy(&int_val, arg, sizeof(int_val));
+ break;
+ case IOV_SVAL(IOV_WMF_MCAST_DATA_SENDUP):
+ dhd_wmf_mcast_data_sendup(dhd_pub, 0, TRUE, int_val);
+ break;
+
+#ifdef WL_IGMP_UCQUERY
+ case IOV_GVAL(IOV_WMF_UCAST_IGMP_QUERY):
+ int_val = dhd_pub->wmf_ucast_igmp_query ? 1 : 0;
+ bcopy(&int_val, arg, sizeof(int_val));
+ break;
+ case IOV_SVAL(IOV_WMF_UCAST_IGMP_QUERY):
+ if (dhd_pub->wmf_ucast_igmp_query == int_val)
+ break;
+
+ if (int_val >= OFF && int_val <= ON)
+ dhd_pub->wmf_ucast_igmp_query = int_val;
+ else
+ bcmerror = BCME_RANGE;
+ break;
+#endif /* WL_IGMP_UCQUERY */
+#ifdef DHD_UCAST_UPNP
+ case IOV_GVAL(IOV_WMF_UCAST_UPNP):
+ int_val = dhd_pub->wmf_ucast_upnp ? 1 : 0;
+ bcopy(&int_val, arg, sizeof(int_val));
+ break;
+ case IOV_SVAL(IOV_WMF_UCAST_UPNP):
+ if (dhd_pub->wmf_ucast_upnp == int_val)
+ break;
+
+ if (int_val >= OFF && int_val <= ON)
+ dhd_pub->wmf_ucast_upnp = int_val;
+ else
+ bcmerror = BCME_RANGE;
+ break;
+#endif /* DHD_UCAST_UPNP */
+#endif /* DHD_WMF */
+
+
+#ifdef DHD_UNICAST_DHCP
+ case IOV_GVAL(IOV_DHCP_UNICAST):
+ int_val = dhd_pub->dhcp_unicast;
+ bcopy(&int_val, arg, sizeof(int_val));
+ break;
+ case IOV_SVAL(IOV_DHCP_UNICAST):
+ if (dhd_pub->dhcp_unicast == int_val)
+ break;
+
+ if (int_val >= OFF && int_val <= ON) {
+ dhd_pub->dhcp_unicast = int_val;
+ } else {
+ bcmerror = BCME_RANGE;
+ }
+ break;
+#endif /* DHD_UNICAST_DHCP */
+#ifdef DHD_L2_FILTER
+ case IOV_GVAL(IOV_BLOCK_PING):
+ int_val = dhd_pub->block_ping;
+ bcopy(&int_val, arg, sizeof(int_val));
+ break;
+ case IOV_SVAL(IOV_BLOCK_PING):
+ if (dhd_pub->block_ping == int_val)
+ break;
+ if (int_val >= OFF && int_val <= ON) {
+ dhd_pub->block_ping = int_val;
+ } else {
+ bcmerror = BCME_RANGE;
+ }
+ break;
+#endif
+
+ case IOV_GVAL(IOV_AP_ISOLATE): {
+ uint32 bssidx;
+ char *val;
+
+ if (dhd_iovar_parse_bssidx(dhd_pub, (char *)name, &bssidx, &val) != BCME_OK) {
+ DHD_ERROR(("%s: ap isolate: bad parameter\n", __FUNCTION__));
+ bcmerror = BCME_BADARG;
+ break;
+ }
+
+ int_val = dhd_get_ap_isolate(dhd_pub, bssidx);
+ bcopy(&int_val, arg, sizeof(int_val));
+ break;
+ }
+ case IOV_SVAL(IOV_AP_ISOLATE): {
+ uint32 bssidx;
+ char *val;
+
+ if (dhd_iovar_parse_bssidx(dhd_pub, (char *)name, &bssidx, &val) != BCME_OK) {
+ DHD_ERROR(("%s: ap isolate: bad parameter\n", __FUNCTION__));
+ bcmerror = BCME_BADARG;
+ break;
+ }
+
+ ASSERT(val);
+ bcopy(val, &int_val, sizeof(uint32));
+ dhd_set_ap_isolate(dhd_pub, bssidx, int_val);
+ break;
+ }
default:
bcmerror = BCME_UNSUPPORTED;
@@ -869,7 +1158,8 @@
#ifdef SHOW_EVENTS
static void
-wl_show_host_event(wl_event_msg_t *event, void *event_data)
+wl_show_host_event(dhd_pub_t *dhd_pub, wl_event_msg_t *event, void *event_data,
+ void *raw_event_ptr, char *eventmask)
{
uint i, status, reason;
bool group = FALSE, flush_txq = FALSE, link = FALSE;
@@ -896,10 +1186,8 @@
(uchar)event->addr.octet[4]&0xff,
(uchar)event->addr.octet[5]&0xff);
- event_name = "UNKNOWN";
- for (i = 0; i < (uint)bcmevent_names_size; i++)
- if (bcmevent_names[i].event == event_type)
- event_name = bcmevent_names[i].name;
+ event_name = bcmevent_get_name(event_type);
+ BCM_REFERENCE(event_name);
if (flags & WLC_EVENT_MSG_LINK)
link = TRUE;
@@ -1027,6 +1315,9 @@
case WLC_E_PFN_SCAN_COMPLETE:
case WLC_E_PFN_SCAN_NONE:
case WLC_E_PFN_SCAN_ALLGONE:
+ case WLC_E_PFN_GSCAN_FULL_RESULT:
+ case WLC_E_PFN_SWC:
+ case WLC_E_PFN_SSID_EXT:
DHD_EVENT(("PNOEVENT: %s\n", event_name));
break;
@@ -1042,94 +1333,12 @@
break;
#endif /* WIFI_ACT_FRAME */
- case WLC_E_TRACE: {
- static uint32 seqnum_prev = 0;
- static uint32 logtrace_seqnum_prev = 0;
- msgtrace_hdr_t hdr;
- uint32 nblost;
- char *s, *p;
-
- buf = (uchar *) event_data;
- memcpy(&hdr, buf, MSGTRACE_HDRLEN);
-
- if (hdr.version != MSGTRACE_VERSION) {
- printf("\nMACEVENT: %s [unsupported version --> "
- "dhd version:%d dongle version:%d]\n",
- event_name, MSGTRACE_VERSION, hdr.version);
- /* Reset datalen to avoid display below */
- datalen = 0;
- break;
- }
-
- if (hdr.trace_type == MSGTRACE_HDR_TYPE_MSG) {
- /* There are 2 bytes available at the end of data */
- buf[MSGTRACE_HDRLEN + ntoh16(hdr.len)] = '\0';
-
- if (ntoh32(hdr.discarded_bytes) || ntoh32(hdr.discarded_printf)) {
- printf("\nWLC_E_TRACE: [Discarded traces in dongle -->"
- "discarded_bytes %d discarded_printf %d]\n",
- ntoh32(hdr.discarded_bytes), ntoh32(hdr.discarded_printf));
- }
-
- nblost = ntoh32(hdr.seqnum) - seqnum_prev - 1;
- if (nblost > 0) {
- printf("\nWLC_E_TRACE: [Event lost (msg) --> seqnum %d nblost %d\n",
- ntoh32(hdr.seqnum), nblost);
- }
- seqnum_prev = ntoh32(hdr.seqnum);
-
- /* Display the trace buffer. Advance from \n to \n to avoid display big
- * printf (issue with Linux printk )
- */
- p = (char *)&buf[MSGTRACE_HDRLEN];
- while (*p != '\0' && (s = strstr(p, "\n")) != NULL) {
- *s = '\0';
- printf("%s\n", p);
- p = s+1;
- }
- if (*p) printf("%s", p);
-
- /* Reset datalen to avoid display below */
- datalen = 0;
-
- } else if (hdr.trace_type == MSGTRACE_HDR_TYPE_LOG) {
- /* Let the standard event printing work for now */
- uint32 timestamp, w;
- if (ntoh32(hdr.seqnum) == logtrace_seqnum_prev) {
- printf("\nWLC_E_TRACE: [Event duplicate (log) %d",
- logtrace_seqnum_prev);
- } else {
- nblost = ntoh32(hdr.seqnum) - logtrace_seqnum_prev - 1;
- if (nblost > 0) {
- printf("\nWLC_E_TRACE: [Event lost (log)"
- " --> seqnum %d nblost %d\n",
- ntoh32(hdr.seqnum), nblost);
- }
- logtrace_seqnum_prev = ntoh32(hdr.seqnum);
-
- p = (char *)&buf[MSGTRACE_HDRLEN];
- datalen -= MSGTRACE_HDRLEN;
- w = ntoh32((uint32) *p);
- p += 4;
- datalen -= 4;
- timestamp = ntoh32((uint32) *p);
- printf("Logtrace %x timestamp %x %x",
- logtrace_seqnum_prev, timestamp, w);
-
- while (datalen > 4) {
- p += 4;
- datalen -= 4;
- /* Print each word. DO NOT ntoh it. */
- printf(" %8.8x", *((uint32 *) p));
- }
- printf("\n");
- }
- datalen = 0;
- }
-
- break;
+#ifdef SHOW_LOGTRACE
+ case WLC_E_TRACE:
+ {
+ dhd_dbg_trace_evnt_handler(dhd_pub, event_data, raw_event_ptr, datalen);
}
-
+#endif /* SHOW_LOGTRACE */
case WLC_E_RSSI:
DHD_EVENT(("MACEVENT: %s %d\n", event_name, ntoh32(*((int *)event_data))));
@@ -1141,6 +1350,12 @@
DHD_EVENT(("MACEVENT: %s, MAC %s\n", event_name, eabuf));
break;
+#ifdef BT_WIFI_HANDOBER
+ case WLC_E_BT_WIFI_HANDOVER_REQ:
+ DHD_EVENT(("MACEVENT: %s, MAC %s\n", event_name, eabuf));
+ break;
+#endif
+
default:
DHD_EVENT(("MACEVENT: %s %d, MAC %s, status %d, reason %d, auth %d\n",
event_name, event_type, eabuf, (int)status, (int)reason,
@@ -1151,6 +1366,7 @@
/* show any appended data */
if (DHD_BYTES_ON() && DHD_EVENT_ON() && datalen) {
buf = (uchar *) event_data;
+ BCM_REFERENCE(buf);
DHD_EVENT((" data (%d) : ", datalen));
for (i = 0; i < datalen; i++)
DHD_EVENT((" 0x%02x ", *buf++));
@@ -1159,9 +1375,127 @@
}
#endif /* SHOW_EVENTS */
+/* Check whether packet is a BRCM dngl event pkt. If it is, process event data. */
int
-wl_host_event(dhd_pub_t *dhd_pub, int *ifidx, void *pktdata,
- wl_event_msg_t *event, void **data_ptr)
+dngl_host_event(dhd_pub_t *dhdp, void *pktdata)
+{
+ bcm_dngl_event_t *pvt_data = (bcm_dngl_event_t *)pktdata;
+
+ if (bcmp(BRCM_OUI, &pvt_data->bcm_hdr.oui[0], DOT11_OUI_LEN)) {
+ DHD_ERROR(("%s: mismatched OUI, bailing\n", __FUNCTION__));
+ return BCME_ERROR;
+ }
+ /* Check to see if this is a DNGL event */
+ if (ntoh16_ua((void *)&pvt_data->bcm_hdr.usr_subtype) ==
+ BCMILCP_BCM_SUBTYPE_DNGLEVENT) {
+ dngl_host_event_process(dhdp, pvt_data);
+ return BCME_OK;
+ }
+ return BCME_ERROR;
+}
+
+void
+dngl_host_event_process(dhd_pub_t *dhdp, bcm_dngl_event_t *event)
+{
+ bcm_dngl_event_msg_t *dngl_event = &event->dngl_event;
+ uint8 *p = (uint8 *)(event + 1);
+ uint16 type = ntoh16_ua((void *)&dngl_event->event_type);
+ uint16 datalen = ntoh16_ua((void *)&dngl_event->datalen);
+ uint16 version = ntoh16_ua((void *)&dngl_event->version);
+
+ DHD_EVENT(("VERSION:%d, EVENT TYPE:%d, DATALEN:%d\n", version, type, datalen));
+ if (version != BCM_DNGL_EVENT_MSG_VERSION) {
+ DHD_ERROR(("%s:version mismatch:%d:%d\n", __FUNCTION__,
+ version, BCM_DNGL_EVENT_MSG_VERSION));
+ return;
+ }
+ if (dhd_socram_dump(dhdp->bus)) {
+ DHD_ERROR(("%s: socram dump failed\n", __FUNCTION__));
+ } else {
+ dhd_dbg_send_urgent_evt(dhdp, p, datalen);
+ }
+ switch (type) {
+ case DNGL_E_SOCRAM_IND:
+ {
+ bcm_dngl_socramind_t *socramind_ptr = (bcm_dngl_socramind_t *)p;
+ uint16 tag = ltoh32(socramind_ptr->tag);
+ uint16 taglen = ltoh32(socramind_ptr->length);
+ p = (uint8 *)socramind_ptr->value;
+ DHD_EVENT(("Tag:%d Len:%d Datalen:%d\n", tag, taglen, datalen));
+ switch (tag) {
+ case SOCRAM_IND_ASSRT_TAG:
+ {
+ /*
+ * The payload consists of -
+ * null terminated function name padded till 32 bit boundary +
+ * Line number - (32 bits)
+ * Caller address (32 bits)
+ */
+ char *fnname = (char *)p;
+ if (datalen < (ROUNDUP(strlen(fnname) + 1, sizeof(uint32)) +
+ sizeof(uint32) * 2)) {
+ DHD_ERROR(("Wrong length:%d\n", datalen));
+ return;
+ }
+ DHD_EVENT(("ASSRT Function:%s ", p));
+ p += ROUNDUP(strlen(p) + 1, sizeof(uint32));
+ DHD_EVENT(("Line:%d ", *(uint32 *)p));
+ p += sizeof(uint32);
+ DHD_EVENT(("Caller Addr:0x%x\n", *(uint32 *)p));
+ break;
+ }
+ case SOCRAM_IND_TAG_HEALTH_CHECK:
+ {
+ bcm_dngl_healthcheck_t *dngl_hc = (bcm_dngl_healthcheck_t *)p;
+ DHD_EVENT(("SOCRAM_IND_HEALTHCHECK_TAG:%d Len:%d\n",
+ ltoh32(dngl_hc->top_module_tag), ltoh32(dngl_hc->top_module_len)));
+ if (DHD_EVENT_ON()) {
+ prhex("HEALTHCHECK", p, ltoh32(dngl_hc->top_module_len));
+ }
+ p = (uint8 *)dngl_hc->value;
+
+ switch (ltoh32(dngl_hc->top_module_tag)) {
+ case HEALTH_CHECK_TOP_LEVEL_MODULE_PCIEDEV_RTE:
+ {
+ bcm_dngl_pcie_hc_t *pcie_hc = (bcm_dngl_pcie_hc_t *)p;
+ if (ltoh32(dngl_hc->top_module_len) < sizeof(bcm_dngl_pcie_hc_t)) {
+ DHD_ERROR(("Wrong length:%d\n",
+ ltoh32(dngl_hc->top_module_len)));
+ return;
+ }
+ DHD_EVENT(("%d:PCIE HC error:%d flag:0x%x, control:0x%x\n",
+ ltoh32(pcie_hc->version),
+ ltoh32(pcie_hc->pcie_err_ind_type),
+ ltoh32(pcie_hc->pcie_flag),
+ ltoh32(pcie_hc->pcie_control_reg)));
+ break;
+ }
+ default:
+ DHD_ERROR(("%s:Unknown module TAG:%d\n",
+ __FUNCTION__, ltoh32(dngl_hc->top_module_tag)));
+ break;
+ }
+ break;
+ }
+ default:
+ DHD_ERROR(("%s:Unknown TAG", __FUNCTION__));
+ if (p && DHD_EVENT_ON()) {
+ prhex("SOCRAMIND", p, taglen);
+ }
+ break;
+ }
+ break;
+ }
+ default:
+ DHD_ERROR(("%s:Unknown DNGL Event Type:%d", __FUNCTION__, type));
+ if (p && DHD_EVENT_ON()) {
+ prhex("SOCRAMIND", p, datalen);
+ }
+ break;
+ }
+}
+int wl_host_event(dhd_pub_t *dhd_pub, int *ifidx, void *pktdata,
+ wl_event_msg_t *event, void **data_ptr, void *raw_event)
{
/* check whether packet is a BRCM event pkt */
bcm_event_t *pvt_data = (bcm_event_t *)pktdata;
@@ -1169,6 +1503,12 @@
uint32 type, status, datalen;
uint16 flags;
int evlen;
+ int hostidx;
+
+ /* If it is a DNGL event process it first */
+ if (dngl_host_event(dhd_pub, pktdata) == BCME_OK) {
+ return BCME_OK;
+ }
if (bcmp(BRCM_OUI, &pvt_data->bcm_hdr.oui[0], DOT11_OUI_LEN)) {
DHD_ERROR(("%s: mismatched OUI, bailing\n", __FUNCTION__));
@@ -1184,6 +1524,7 @@
*data_ptr = &pvt_data[1];
event_data = *data_ptr;
+
/* memcpy since BRCM event pkt may be unaligned. */
memcpy(event, &pvt_data->event, sizeof(wl_event_msg_t));
@@ -1193,6 +1534,9 @@
datalen = ntoh32_ua((void *)&event->datalen);
evlen = datalen + sizeof(bcm_event_t);
+ /* find equivalent host index for event ifidx */
+ hostidx = dhd_ifidx2hostidx(dhd_pub->info, event->ifidx);
+
switch (type) {
#ifdef PROP_TXSTATUS
case WLC_E_FIFO_CREDIT_MAP:
@@ -1209,15 +1553,19 @@
break;
#endif
- case WLC_E_IF: {
+ case WLC_E_IF:
+ {
struct wl_event_data_if *ifevent = (struct wl_event_data_if *)event_data;
/* Ignore the event if NOIF is set */
if (ifevent->reserved & WLC_E_IF_FLAGS_BSSCFG_NOIF) {
- DHD_ERROR(("WLC_E_IF: NO_IF set, event Ignored\r\n"));
- return (BCME_OK);
+ DHD_ERROR(("WLC_E_IF: NO_IF set, event Ignored\n"));
+ return (BCME_UNSUPPORTED);
}
-
+#ifdef PCIE_FULL_DONGLE
+ dhd_update_interface_flow_info(dhd_pub, ifevent->ifidx,
+ ifevent->opcode, ifevent->role);
+#endif
#ifdef PROP_TXSTATUS
{
uint8* ea = pvt_data->eth.ether_dhost;
@@ -1264,18 +1612,18 @@
#endif /* WL_CFG80211 */
}
} else {
-#ifndef PROP_TXSTATUS
+#if !defined(PROP_TXSTATUS) || !defined(PCIE_FULL_DONGLE)
DHD_ERROR(("%s: Invalid ifidx %d for %s\n",
- __FUNCTION__, ifevent->ifidx, event->ifname));
+ __FUNCTION__, ifevent->ifidx, event->ifname));
#endif /* !PROP_TXSTATUS */
}
-
- /* send up the if event: btamp user needs it */
- *ifidx = dhd_ifname2idx(dhd_pub->info, event->ifname);
- /* push up to external supp/auth */
- dhd_event(dhd_pub->info, (char *)pvt_data, evlen, *ifidx);
+ /* send up the if event: btamp user needs it */
+ *ifidx = hostidx;
+ /* push up to external supp/auth */
+ dhd_event(dhd_pub->info, (char *)pvt_data, evlen, *ifidx);
break;
}
+
#ifdef WLMEDIA_HTSF
case WLC_E_HTSFSYNC:
htsf_update(dhd_pub->info, event_data);
@@ -1286,6 +1634,7 @@
memcpy((void *)(&pvt_data->event.event_type), &temp,
sizeof(pvt_data->event.event_type));
+ break;
}
case WLC_E_PFN_NET_FOUND:
case WLC_E_PFN_NET_LOST:
@@ -1296,19 +1645,52 @@
case WLC_E_PFN_BEST_BATCHING:
dhd_pno_event_handler(dhd_pub, event, (void *)event_data);
break;
-#endif
+#endif
+#if defined(RTT_SUPPORT)
+ case WLC_E_PROXD:
+ dhd_rtt_event_handler(dhd_pub, event, (void *)event_data);
+ break;
+#endif /* RTT_SUPPORT */
/* These are what external supplicant/authenticator wants */
- /* fall through */
+ case WLC_E_ASSOC_IND:
+ case WLC_E_AUTH_IND:
+ case WLC_E_REASSOC_IND:
+ dhd_findadd_sta(dhd_pub, hostidx, &event->addr.octet);
+ break;
case WLC_E_LINK:
+#ifdef PCIE_FULL_DONGLE
+ if (dhd_update_interface_link_status(dhd_pub, (uint8)hostidx,
+ (uint8)flags) != BCME_OK)
+ break;
+ if (!flags) {
+ dhd_flow_rings_delete(dhd_pub, hostidx);
+ }
+ /* fall through */
+#endif
case WLC_E_DEAUTH:
case WLC_E_DEAUTH_IND:
case WLC_E_DISASSOC:
case WLC_E_DISASSOC_IND:
+ if (type != WLC_E_LINK) {
+ dhd_del_sta(dhd_pub, hostidx, &event->addr.octet);
+ }
DHD_EVENT(("%s: Link event %d, flags %x, status %x\n",
__FUNCTION__, type, flags, status));
+#ifdef PCIE_FULL_DONGLE
+ if (type != WLC_E_LINK) {
+ uint8 ifindex = (uint8)hostidx;
+ uint8 role = dhd_flow_rings_ifindex2role(dhd_pub, ifindex);
+ if (DHD_IF_ROLE_STA(role)) {
+ dhd_flow_rings_delete(dhd_pub, ifindex);
+ } else {
+ dhd_flow_rings_delete_for_peer(dhd_pub, ifindex,
+ &event->addr.octet[0]);
+ }
+ }
+#endif
/* fall through */
default:
- *ifidx = dhd_ifname2idx(dhd_pub->info, event->ifname);
+ *ifidx = hostidx;
/* push up to external supp/auth */
dhd_event(dhd_pub->info, (char *)pvt_data, evlen, *ifidx);
DHD_TRACE(("%s: MAC event %d, flags %x, status %x\n",
@@ -1320,7 +1702,8 @@
}
#ifdef SHOW_EVENTS
- wl_show_host_event(event, (void *)event_data);
+ wl_show_host_event(dhd_pub, event,
+ (void *)event_data, raw_event, dhd_pub->enable_log);
#endif /* SHOW_EVENTS */
return (BCME_OK);
@@ -1400,12 +1783,12 @@
{
char *argv[8];
int i = 0;
- const char *str;
+ const char *str;
int buf_len;
int str_len;
char *arg_save = 0, *arg_org = 0;
int rc;
- char buf[128];
+ char buf[32] = {0};
wl_pkt_filter_enable_t enable_parm;
wl_pkt_filter_enable_t * pkt_filterp;
@@ -1413,7 +1796,7 @@
return;
if (!(arg_save = MALLOC(dhd->osh, strlen(arg) + 1))) {
- DHD_ERROR(("%s: kmalloc failed\n", __FUNCTION__));
+ DHD_ERROR(("%s: malloc failed\n", __FUNCTION__));
goto fail;
}
arg_org = arg_save;
@@ -1429,8 +1812,8 @@
str = "pkt_filter_enable";
str_len = strlen(str);
- bcm_strncpy_s(buf, sizeof(buf), str, str_len);
- buf[str_len] = '\0';
+ bcm_strncpy_s(buf, sizeof(buf) - 1, str, sizeof(buf) - 1);
+ buf[ sizeof(buf) - 1 ] = '\0';
buf_len = str_len + 1;
pkt_filterp = (wl_pkt_filter_enable_t *)(buf + str_len + 1);
@@ -1489,14 +1872,14 @@
return;
if (!(arg_save = MALLOC(dhd->osh, strlen(arg) + 1))) {
- DHD_ERROR(("%s: kmalloc failed\n", __FUNCTION__));
+ DHD_ERROR(("%s: malloc failed\n", __FUNCTION__));
goto fail;
}
arg_org = arg_save;
if (!(buf = MALLOC(dhd->osh, BUF_SIZE))) {
- DHD_ERROR(("%s: kmalloc failed\n", __FUNCTION__));
+ DHD_ERROR(("%s: malloc failed\n", __FUNCTION__));
goto fail;
}
@@ -1945,74 +2328,66 @@
}
}
-
/* Function to estimate possible DTIM_SKIP value */
int
-dhd_get_suspend_bcn_li_dtim(dhd_pub_t *dhd)
+dhd_get_suspend_bcn_li_dtim(dhd_pub_t *dhd, int *dtim_period, int *bcn_interval)
{
int bcn_li_dtim = 1; /* deafult no dtim skip setting */
int ret = -1;
- int dtim_period = 0;
- int ap_beacon = 0;
int allowed_skip_dtim_cnt = 0;
/* Check if associated */
if (dhd_is_associated(dhd, NULL, NULL) == FALSE) {
DHD_TRACE(("%s NOT assoc ret %d\n", __FUNCTION__, ret));
- goto exit;
+ return bcn_li_dtim;
}
+ if (dtim_period == NULL || bcn_interval == NULL)
+ return bcn_li_dtim;
/* read associated AP beacon interval */
if ((ret = dhd_wl_ioctl_cmd(dhd, WLC_GET_BCNPRD,
- &ap_beacon, sizeof(ap_beacon), FALSE, 0)) < 0) {
+ bcn_interval, sizeof(*bcn_interval), FALSE, 0)) < 0) {
DHD_ERROR(("%s get beacon failed code %d\n", __FUNCTION__, ret));
- goto exit;
- }
-
- /* if associated APs Beacon more that 100msec do no dtim skip */
- if (ap_beacon > MAX_DTIM_SKIP_BEACON_INTERVAL) {
- DHD_ERROR(("%s NO dtim skip for AP with beacon %d ms\n", __FUNCTION__, ap_beacon));
- goto exit;
+ return bcn_li_dtim;
}
/* read associated ap's dtim setup */
if ((ret = dhd_wl_ioctl_cmd(dhd, WLC_GET_DTIMPRD,
- &dtim_period, sizeof(dtim_period), FALSE, 0)) < 0) {
+ dtim_period, sizeof(*dtim_period), FALSE, 0)) < 0) {
DHD_ERROR(("%s failed code %d\n", __FUNCTION__, ret));
- goto exit;
+ return bcn_li_dtim;
}
	/* if not associated just exit */
- if (dtim_period == 0) {
- goto exit;
+ if (*dtim_period == 0) {
+ return bcn_li_dtim;
}
	/* attempt to use platform-defined dtim skip interval */
bcn_li_dtim = dhd->suspend_bcn_li_dtim;
/* check if sta listen interval fits into AP dtim */
- if (dtim_period > CUSTOM_LISTEN_INTERVAL) {
+ if (*dtim_period > CUSTOM_LISTEN_INTERVAL) {
		/* AP DTIM too big for our Listen Interval : no dtim skipping */
bcn_li_dtim = NO_DTIM_SKIP;
DHD_ERROR(("%s DTIM=%d > Listen=%d : too big ...\n",
- __FUNCTION__, dtim_period, CUSTOM_LISTEN_INTERVAL));
- goto exit;
+ __FUNCTION__, *dtim_period, CUSTOM_LISTEN_INTERVAL));
+ return bcn_li_dtim;
}
- if ((dtim_period * ap_beacon * bcn_li_dtim) > MAX_DTIM_ALLOWED_INTERVAL) {
- allowed_skip_dtim_cnt = MAX_DTIM_ALLOWED_INTERVAL / (dtim_period * ap_beacon);
+ if (((*dtim_period) * (*bcn_interval) * bcn_li_dtim) > MAX_DTIM_ALLOWED_INTERVAL) {
+ allowed_skip_dtim_cnt = MAX_DTIM_ALLOWED_INTERVAL / ((*dtim_period) * (*bcn_interval));
bcn_li_dtim = (allowed_skip_dtim_cnt != 0) ? allowed_skip_dtim_cnt : NO_DTIM_SKIP;
}
- if ((bcn_li_dtim * dtim_period) > CUSTOM_LISTEN_INTERVAL) {
+ if ((bcn_li_dtim * (*dtim_period)) > CUSTOM_LISTEN_INTERVAL) {
/* Round up dtim_skip to fit into STAs Listen Interval */
- bcn_li_dtim = (int)(CUSTOM_LISTEN_INTERVAL / dtim_period);
+ bcn_li_dtim = (int)(CUSTOM_LISTEN_INTERVAL / *dtim_period);
		DHD_TRACE(("%s adjust dtim_skip as %d\n", __FUNCTION__, bcn_li_dtim));
}
DHD_ERROR(("%s beacon=%d bcn_li_dtim=%d DTIM=%d Listen=%d\n",
- __FUNCTION__, ap_beacon, bcn_li_dtim, dtim_period, CUSTOM_LISTEN_INTERVAL));
+ __FUNCTION__, *bcn_interval, bcn_li_dtim, *dtim_period, CUSTOM_LISTEN_INTERVAL));
-exit:
return bcn_li_dtim;
}
@@ -2031,7 +2406,7 @@
#if defined(KEEP_ALIVE)
int dhd_keep_alive_onoff(dhd_pub_t *dhd)
{
- char buf[256];
+ char buf[32] = {0};
const char *str;
wl_mkeep_alive_pkt_t mkeep_alive_pkt = {0};
wl_mkeep_alive_pkt_t *mkeep_alive_pktp;
@@ -2046,8 +2421,8 @@
str = "mkeep_alive";
str_len = strlen(str);
- strncpy(buf, str, str_len);
- buf[ str_len ] = '\0';
+ strncpy(buf, str, sizeof(buf) - 1);
+ buf[ sizeof(buf) - 1 ] = '\0';
mkeep_alive_pktp = (wl_mkeep_alive_pkt_t *) (buf + str_len + 1);
mkeep_alive_pkt.period_msec = CUSTOM_KEEP_ALIVE_SETTING;
buf_len = str_len + 1;
@@ -2173,7 +2548,7 @@
* SSIDs list parsing from cscan tlv list
*/
int
-wl_iw_parse_ssid_list_tlv(char** list_str, wlc_ssid_t* ssid, int max, int *bytes_left)
+wl_iw_parse_ssid_list_tlv(char** list_str, wlc_ssid_ext_t* ssid, int max, int *bytes_left)
{
char* str;
int idx = 0;
@@ -2194,7 +2569,7 @@
/* Get proper CSCAN_TLV_TYPE_SSID_IE */
*bytes_left -= 1;
str += 1;
-
+ ssid[idx].rssi_thresh = 0;
if (str[0] == 0) {
/* Broadcast SSID */
ssid[idx].SSID_len = 0;
@@ -2221,6 +2596,7 @@
*bytes_left -= ssid[idx].SSID_len;
str += ssid[idx].SSID_len;
+ ssid[idx].hidden = TRUE;
DHD_TRACE(("%s :size=%d left=%d\n",
(char*)ssid[idx].SSID, ssid[idx].SSID_len, *bytes_left));
diff --git a/drivers/net/wireless/bcmdhd/dhd_custom_gpio.c b/drivers/net/wireless/bcmdhd/dhd_custom_gpio.c
old mode 100755
new mode 100644
index 10afdaa..b7d162c
--- a/drivers/net/wireless/bcmdhd/dhd_custom_gpio.c
+++ b/drivers/net/wireless/bcmdhd/dhd_custom_gpio.c
@@ -20,7 +20,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
-* $Id: dhd_custom_gpio.c 447089 2014-01-08 04:05:58Z $
+* $Id: dhd_custom_gpio.c 447105 2014-01-08 05:27:09Z $
*/
#include <typedefs.h>
@@ -256,7 +256,8 @@
* input : ISO 3166-1 country abbreviation
* output: customized cspec
*/
-void get_customized_country_code(void *adapter, char *country_iso_code, wl_country_t *cspec)
+void get_customized_country_code(void *adapter, char *country_iso_code,
+ wl_country_t *cspec, u32 flags)
{
#if defined(CUSTOMER_HW2) && (LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 39))
@@ -265,7 +266,8 @@
if (!cspec)
return;
- cloc_ptr = wifi_platform_get_country_code(adapter, country_iso_code);
+ cloc_ptr = wifi_platform_get_country_code(adapter, country_iso_code,
+ flags);
if (cloc_ptr) {
strlcpy(cspec->ccode, cloc_ptr->custom_locale, WLC_CNTRY_BUF_SZ);
cspec->rev = cloc_ptr->custom_locale_rev;
diff --git a/drivers/net/wireless/bcmdhd/dhd_custom_platdev.c b/drivers/net/wireless/bcmdhd/dhd_custom_platdev.c
new file mode 100644
index 0000000..0e38740
--- /dev/null
+++ b/drivers/net/wireless/bcmdhd/dhd_custom_platdev.c
@@ -0,0 +1,663 @@
+/*
+ * Custom file for dealing with platform-specific resources
+ *
+ * Copyright (C) 1999-2015, Broadcom Corporation
+ *
+ * Unless you and Broadcom execute a separate written software license
+ * agreement governing use of this software, this software is licensed to you
+ * under the terms of the GNU General Public License version 2 (the "GPL"),
+ * available at http://www.broadcom.com/licenses/GPLv2.php, with the
+ * following added to such license:
+ *
+ * As a special exception, the copyright holders of this software give you
+ * permission to link this software with independent modules, and to copy and
+ * distribute the resulting executable under terms of your choice, provided that
+ * you also meet, for each linked independent module, the terms and conditions of
+ * the license of that module. An independent module is a module which is not
+ * derived from this software. The special exception does not apply to any
+ * modifications of the software.
+ *
+ * Notwithstanding the above, under no circumstances may you combine this
+ * software in any way with any other Broadcom software provided under a license
+ * other than the GPL, without Broadcom's express prior written consent.
+ *
+ * $Id: dhd_custom_msm.c 520105 2014-12-10 07:22:01Z $
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/platform_device.h>
+#include <linux/delay.h>
+#include <linux/err.h>
+#include <linux/gpio.h>
+#include <linux/skbuff.h>
+#include <linux/wlan_plat.h>
+#include <linux/mmc/host.h>
+#include <linux/if.h>
+
+#if defined(CONFIG_ARCH_MSM)
+#if defined(CONFIG_64BIT)
+#include <linux/msm_pcie.h>
+#else
+#include <mach/msm_pcie.h>
+#endif
+#endif
+
+#include <linux/of_gpio.h>
+
+#include <linux/fcntl.h>
+#include <linux/fs.h>
+
+
+#define BCM_DBG pr_debug
+
+static int gpio_wl_reg_on = -1;
+static int brcm_wake_irq = -1;
+
+#if !defined(CONFIG_WIFI_CONTROL_FUNC)
+#define WLAN_PLAT_NODFS_FLAG 0x01
+#define WLAN_PLAT_AP_FLAG 0x02
+#endif
+
+#ifdef CONFIG_DHD_USE_STATIC_BUF
+
+enum dhd_prealloc_index {
+ DHD_PREALLOC_PROT = 0,
+ DHD_PREALLOC_RXBUF,
+ DHD_PREALLOC_DATABUF,
+ DHD_PREALLOC_OSL_BUF,
+ DHD_PREALLOC_SKB_BUF,
+ DHD_PREALLOC_WIPHY_ESCAN0 = 5,
+ DHD_PREALLOC_WIPHY_ESCAN1 = 6,
+ DHD_PREALLOC_DHD_INFO = 7,
+ DHD_PREALLOC_DHD_WLFC_INFO = 8,
+ DHD_PREALLOC_IF_FLOW_LKUP = 9,
+ DHD_PREALLOC_FLOWRING = 10,
+ DHD_PREALLOC_MAX
+};
+
+#define STATIC_BUF_MAX_NUM 20
+#define STATIC_BUF_SIZE (PAGE_SIZE*2)
+
+#define DHD_PREALLOC_PROT_SIZE (512)
+#define DHD_PREALLOC_WIPHY_ESCAN0_SIZE (64 * 1024)
+#define DHD_PREALLOC_DHD_INFO_SIZE (24 * 1024)
+#ifdef CONFIG_64BIT
+#define DHD_PREALLOC_IF_FLOW_LKUP_SIZE (20 * 1024 * 2)
+#else
+#define DHD_PREALLOC_IF_FLOW_LKUP_SIZE (20 * 1024)
+#endif
+#define DHD_PREALLOC_OSL_BUF_SIZE (STATIC_BUF_MAX_NUM * STATIC_BUF_SIZE)
+
+#define WLAN_SCAN_BUF_SIZE (64 * 1024)
+
+#if defined(CONFIG_64BIT)
+#define WLAN_DHD_INFO_BUF_SIZE (24 * 1024)
+#define WLAN_DHD_WLFC_BUF_SIZE (64 * 1024)
+#define WLAN_DHD_IF_FLOW_LKUP_SIZE (64 * 1024)
+#else
+#define WLAN_DHD_INFO_BUF_SIZE (16 * 1024)
+#define WLAN_DHD_WLFC_BUF_SIZE (16 * 1024)
+#define WLAN_DHD_IF_FLOW_LKUP_SIZE (20 * 1024)
+#endif /* CONFIG_64BIT */
+#define WLAN_DHD_MEMDUMP_SIZE (800 * 1024)
+
+#define PREALLOC_WLAN_SEC_NUM 4
+#define PREALLOC_WLAN_BUF_NUM 160
+#define PREALLOC_WLAN_SECTION_HEADER 24
+
+#ifdef CONFIG_BCMDHD_PCIE
+#define DHD_SKB_1PAGE_BUFSIZE (PAGE_SIZE*1)
+#define DHD_SKB_2PAGE_BUFSIZE (PAGE_SIZE*2)
+#define DHD_SKB_4PAGE_BUFSIZE (PAGE_SIZE*4)
+
+#define WLAN_SECTION_SIZE_0 (PREALLOC_WLAN_BUF_NUM * 128)
+#define WLAN_SECTION_SIZE_1 0
+#define WLAN_SECTION_SIZE_2 0
+#define WLAN_SECTION_SIZE_3 (PREALLOC_WLAN_BUF_NUM * 1024)
+
+#define DHD_SKB_1PAGE_BUF_NUM 0
+#define DHD_SKB_2PAGE_BUF_NUM 64
+#define DHD_SKB_4PAGE_BUF_NUM 0
+
+#else
+#define DHD_SKB_HDRSIZE 336
+#define DHD_SKB_1PAGE_BUFSIZE ((PAGE_SIZE*1)-DHD_SKB_HDRSIZE)
+#define DHD_SKB_2PAGE_BUFSIZE ((PAGE_SIZE*2)-DHD_SKB_HDRSIZE)
+#define DHD_SKB_4PAGE_BUFSIZE ((PAGE_SIZE*4)-DHD_SKB_HDRSIZE)
+
+#define WLAN_SECTION_SIZE_0 (PREALLOC_WLAN_BUF_NUM * 128)
+#define WLAN_SECTION_SIZE_1 (PREALLOC_WLAN_BUF_NUM * 128)
+#define WLAN_SECTION_SIZE_2 (PREALLOC_WLAN_BUF_NUM * 512)
+#define WLAN_SECTION_SIZE_3 (PREALLOC_WLAN_BUF_NUM * 1024)
+
+#define DHD_SKB_1PAGE_BUF_NUM 8
+#define DHD_SKB_2PAGE_BUF_NUM 8
+#define DHD_SKB_4PAGE_BUF_NUM 1
+#endif /* CONFIG_BCMDHD_PCIE */
+
+#define WLAN_SKB_1_2PAGE_BUF_NUM ((DHD_SKB_1PAGE_BUF_NUM) + \
+ (DHD_SKB_2PAGE_BUF_NUM))
+#define WLAN_SKB_BUF_NUM ((WLAN_SKB_1_2PAGE_BUF_NUM) + \
+ (DHD_SKB_4PAGE_BUF_NUM))
+
+
+void *wlan_static_prot = NULL;
+void *wlan_static_scan_buf0 = NULL;
+void *wlan_static_scan_buf1 = NULL;
+void *wlan_static_dhd_info_buf = NULL;
+void *wlan_static_if_flow_lkup = NULL;
+void *wlan_static_osl_buf = NULL;
+
+static struct sk_buff *wlan_static_skb[WLAN_SKB_BUF_NUM];
+
+
+static void *dhd_wlan_mem_prealloc(int section, unsigned long size)
+{
+ if (section == DHD_PREALLOC_PROT)
+ return wlan_static_prot;
+
+ if (section == DHD_PREALLOC_SKB_BUF)
+ return wlan_static_skb;
+
+ if (section == DHD_PREALLOC_WIPHY_ESCAN0)
+ return wlan_static_scan_buf0;
+
+ if (section == DHD_PREALLOC_WIPHY_ESCAN1)
+ return wlan_static_scan_buf1;
+
+ if (section == DHD_PREALLOC_OSL_BUF) {
+ if (size > DHD_PREALLOC_OSL_BUF_SIZE) {
+ pr_err("request OSL_BUF(%lu) is bigger than static size(%ld).\n",
+ size, DHD_PREALLOC_OSL_BUF_SIZE);
+ return NULL;
+ }
+ return wlan_static_osl_buf;
+ }
+
+ if (section == DHD_PREALLOC_DHD_INFO) {
+ if (size > DHD_PREALLOC_DHD_INFO_SIZE) {
+ pr_err("request DHD_INFO size(%lu) is bigger than static size(%d).\n",
+ size, DHD_PREALLOC_DHD_INFO_SIZE);
+ return NULL;
+ }
+ return wlan_static_dhd_info_buf;
+ }
+ if (section == DHD_PREALLOC_IF_FLOW_LKUP) {
+ if (size > DHD_PREALLOC_IF_FLOW_LKUP_SIZE) {
+ pr_err("request DHD_IF_FLOW_LKUP size(%lu) is bigger than static size(%d).\n",
+ size, DHD_PREALLOC_IF_FLOW_LKUP_SIZE);
+ return NULL;
+ }
+
+ return wlan_static_if_flow_lkup;
+ }
+ if ((section < 0) || (section > DHD_PREALLOC_MAX))
+ pr_err("request section id(%d) is out of max index %d\n",
+ section, DHD_PREALLOC_MAX);
+
+ return NULL;
+}
+
+static int dhd_init_wlan_mem(void)
+{
+
+ int i;
+ int j;
+
+ for (i = 0; i < DHD_SKB_1PAGE_BUF_NUM; i++) {
+ wlan_static_skb[i] = dev_alloc_skb(DHD_SKB_1PAGE_BUFSIZE);
+ if (!wlan_static_skb[i]) {
+ goto err_skb_alloc;
+ }
+ }
+
+ for (i = DHD_SKB_1PAGE_BUF_NUM; i < WLAN_SKB_1_2PAGE_BUF_NUM; i++) {
+ wlan_static_skb[i] = dev_alloc_skb(DHD_SKB_2PAGE_BUFSIZE);
+ if (!wlan_static_skb[i]) {
+ goto err_skb_alloc;
+ }
+ }
+
+#if !defined(CONFIG_BCMDHD_PCIE)
+ wlan_static_skb[i] = dev_alloc_skb(DHD_SKB_4PAGE_BUFSIZE);
+ if (!wlan_static_skb[i]) {
+ goto err_skb_alloc;
+ }
+#endif /* !CONFIG_BCMDHD_PCIE */
+
+ wlan_static_prot = kmalloc(DHD_PREALLOC_PROT_SIZE, GFP_KERNEL);
+ if (!wlan_static_prot) {
+ pr_err("Failed to alloc wlan_static_prot\n");
+ goto err_mem_alloc;
+ }
+
+ wlan_static_osl_buf = kmalloc(DHD_PREALLOC_OSL_BUF_SIZE, GFP_KERNEL);
+ if (!wlan_static_osl_buf) {
+ pr_err("Failed to alloc wlan_static_osl_buf\n");
+ goto err_mem_alloc;
+ }
+
+ wlan_static_scan_buf0 = kmalloc(DHD_PREALLOC_WIPHY_ESCAN0_SIZE, GFP_KERNEL);
+ if (!wlan_static_scan_buf0) {
+ pr_err("Failed to alloc wlan_static_scan_buf0\n");
+ goto err_mem_alloc;
+ }
+
+
+ wlan_static_dhd_info_buf = kmalloc(DHD_PREALLOC_DHD_INFO_SIZE, GFP_KERNEL);
+ if (!wlan_static_dhd_info_buf) {
+ pr_err("Failed to alloc wlan_static_dhd_info_buf\n");
+ goto err_mem_alloc;
+ }
+#ifdef CONFIG_BCMDHD_PCIE
+ wlan_static_if_flow_lkup = kmalloc(DHD_PREALLOC_IF_FLOW_LKUP_SIZE, GFP_KERNEL);
+ if (!wlan_static_if_flow_lkup) {
+ pr_err("Failed to alloc wlan_static_if_flow_lkup\n");
+ goto err_mem_alloc;
+ }
+#endif /* CONFIG_BCMDHD_PCIE */
+
+ return 0;
+
+err_mem_alloc:
+
+ if (wlan_static_prot)
+ kfree(wlan_static_prot);
+
+ if (wlan_static_dhd_info_buf)
+ kfree(wlan_static_dhd_info_buf);
+
+ if (wlan_static_scan_buf1)
+ kfree(wlan_static_scan_buf1);
+
+ if (wlan_static_scan_buf0)
+ kfree(wlan_static_scan_buf0);
+
+ if (wlan_static_osl_buf)
+ kfree(wlan_static_osl_buf);
+
+#ifdef CONFIG_BCMDHD_PCIE
+ if (wlan_static_if_flow_lkup)
+ kfree(wlan_static_if_flow_lkup);
+#endif
+ pr_err("Failed to mem_alloc for WLAN\n");
+
+ i = WLAN_SKB_BUF_NUM;
+
+err_skb_alloc:
+ pr_err("Failed to skb_alloc for WLAN\n");
+ for (j = 0; j < i; j++) {
+ dev_kfree_skb(wlan_static_skb[j]);
+ }
+
+ return -ENOMEM;
+}
+#endif /* CONFIG_DHD_USE_STATIC_BUF */
+
+int dhd_wifi_init_gpio(void)
+{
+ int wl_reg_on, wl_host_wake;
+ char *wlan_node = "android,bcmdhd_wlan";
+ struct device_node *np;
+
+ np = of_find_compatible_node(NULL, NULL, wlan_node);
+ if (!np) {
+ WARN(1, "failed to get device node of BRCM WLAN\n");
+ return -ENODEV;
+ }
+
+ /* get wlan_reg_on */
+ wl_reg_on = of_get_named_gpio(np, "wl_reg_on", 0);
+ if (wl_reg_on >= 0) {
+ gpio_wl_reg_on = wl_reg_on;
+ BCM_DBG("%s: gpio_wl_reg_on:%d.\n", __FUNCTION__, gpio_wl_reg_on);
+ }
+
+ /* get host_wake irq */
+ wl_host_wake = of_get_named_gpio(np, "wl_host_wake", 0);
+ if (wl_host_wake >= 0) {
+ BCM_DBG("%s: wl_host_wake:%d.\n", __FUNCTION__, wl_host_wake);
+ brcm_wake_irq = gpio_to_irq(wl_host_wake);
+ }
+
+ if (gpio_request(gpio_wl_reg_on, "WL_REG_ON"))
+ pr_err("%s: Failed to request gpio %d for WL_REG_ON\n",
+ __func__, gpio_wl_reg_on);
+ else
+ pr_err("%s: gpio_request WL_REG_ON done\n", __func__);
+
+ if (gpio_direction_output(gpio_wl_reg_on, 1))
+ pr_err("%s: WL_REG_ON failed to pull up\n", __func__);
+ else
+ BCM_DBG("%s: WL_REG_ON is pulled up\n", __func__);
+
+ if (gpio_get_value(gpio_wl_reg_on))
+ BCM_DBG("%s: Initial WL_REG_ON: [%d]\n",
+ __func__, gpio_get_value(gpio_wl_reg_on));
+
+ return 0;
+}
+
+int dhd_wlan_power(int on)
+{
+ pr_info("%s Enter: power %s\n", __func__, on ? "on" : "off");
+
+ if (on) {
+ if (gpio_direction_output(gpio_wl_reg_on, 1)) {
+ pr_err("%s: WL_REG_ON didn't output high\n", __func__);
+ return -EIO;
+ }
+ if (!gpio_get_value(gpio_wl_reg_on))
+ pr_err("[%s] gpio didn't set high.\n", __func__);
+ } else {
+ if (gpio_direction_output(gpio_wl_reg_on, 0)) {
+ pr_err("%s: WL_REG_ON didn't output low\n", __func__);
+ return -EIO;
+ }
+ }
+ return 0;
+}
+EXPORT_SYMBOL(dhd_wlan_power);
+
+static int dhd_wlan_reset(int onoff)
+{
+ return 0;
+}
+
+static int dhd_wlan_set_carddetect(int val)
+{
+ return 0;
+}
+
+/* Customized Locale table : OPTIONAL feature */
+#define WLC_CNTRY_BUF_SZ 4
+struct cntry_locales_custom {
+ char iso_abbrev[WLC_CNTRY_BUF_SZ];
+ char custom_locale[WLC_CNTRY_BUF_SZ];
+ int custom_locale_rev;
+};
+
+static struct cntry_locales_custom brcm_wlan_translate_custom_table[] = {
+ /* Table should be filled out based on custom platform regulatory requirement */
+ {"", "XT", 49}, /* Universal if Country code is unknown or empty */
+ {"US", "US", 176},
+ {"AE", "AE", 1},
+ {"AR", "AR", 21},
+ {"AT", "AT", 4},
+ {"AU", "AU", 40},
+ {"BE", "BE", 4},
+ {"BG", "BG", 4},
+ {"BN", "BN", 4},
+ {"BR", "BR", 4},
+ {"CA", "US", 176}, /* Previously was CA/31 */
+ {"CH", "CH", 4},
+ {"CY", "CY", 4},
+ {"CZ", "CZ", 4},
+ {"DE", "DE", 7},
+ {"DK", "DK", 4},
+ {"EE", "EE", 4},
+ {"ES", "ES", 4},
+ {"FI", "FI", 4},
+ {"FR", "FR", 5},
+ {"GB", "GB", 6},
+ {"GR", "GR", 4},
+ {"HK", "HK", 2},
+ {"HR", "HR", 4},
+ {"HU", "HU", 4},
+ {"IE", "IE", 5},
+ {"IN", "IN", 28},
+ {"IS", "IS", 4},
+ {"IT", "IT", 4},
+ {"ID", "ID", 5},
+ {"JP", "JP", 86},
+ {"KR", "KR", 57},
+ {"KW", "KW", 5},
+ {"LI", "LI", 4},
+ {"LT", "LT", 4},
+ {"LU", "LU", 3},
+ {"LV", "LV", 4},
+ {"MA", "MA", 2},
+ {"MT", "MT", 4},
+ {"MX", "MX", 20},
+ {"MY", "MY", 16},
+ {"NL", "NL", 4},
+ {"NO", "NO", 4},
+ {"NZ", "NZ", 4},
+ {"PL", "PL", 4},
+ {"PT", "PT", 4},
+ {"PY", "PY", 2},
+ {"RO", "RO", 4},
+ {"RU", "RU", 13},
+ {"SE", "SE", 4},
+ {"SG", "SG", 19},
+ {"SI", "SI", 4},
+ {"SK", "SK", 4},
+ {"TH", "TH", 5},
+ {"TR", "TR", 7},
+ {"TW", "TW", 1},
+ {"VN", "VN", 4},
+};
+
+struct cntry_locales_custom brcm_wlan_translate_nodfs_table[] = {
+ {"", "XT", 50}, /* Universal if Country code is unknown or empty */
+ {"US", "US", 177},
+ {"AU", "AU", 41},
+ {"BR", "BR", 18},
+ {"CA", "US", 177},
+ {"CH", "E0", 33},
+ {"CY", "E0", 33},
+ {"CZ", "E0", 33},
+ {"DE", "E0", 33},
+ {"DK", "E0", 33},
+ {"EE", "E0", 33},
+ {"ES", "E0", 33},
+ {"EU", "E0", 33},
+ {"FI", "E0", 33},
+ {"FR", "E0", 33},
+ {"GB", "E0", 33},
+ {"GR", "E0", 33},
+ {"HK", "SG", 20},
+ {"HR", "E0", 33},
+ {"HU", "E0", 33},
+ {"IE", "E0", 33},
+ {"IN", "IN", 29},
+ {"ID", "ID", 5},
+ {"IS", "E0", 33},
+ {"IT", "E0", 33},
+ {"JP", "JP", 87},
+ {"KR", "KR", 79},
+ {"KW", "KW", 5},
+ {"LI", "E0", 33},
+ {"LT", "E0", 33},
+ {"LU", "E0", 33},
+ {"LV", "LV", 4},
+ {"MA", "MA", 2},
+ {"MT", "E0", 33},
+ {"MY", "MY", 17},
+ {"MX", "US", 177},
+ {"NL", "E0", 33},
+ {"NO", "E0", 33},
+ {"PL", "E0", 33},
+ {"PT", "E0", 33},
+ {"RO", "E0", 33},
+ {"SE", "E0", 33},
+ {"SG", "SG", 20},
+ {"SI", "E0", 33},
+ {"SK", "E0", 33},
+ {"SZ", "E0", 33},
+ {"TH", "TH", 9},
+ {"TW", "TW", 60},
+};
+
+static void *dhd_wlan_get_country_code(char *ccode, u32 flags)
+{
+ struct cntry_locales_custom *locales;
+ int size;
+ int i;
+
+ if (!ccode)
+ return NULL;
+
+ if (flags & WLAN_PLAT_NODFS_FLAG) {
+ locales = brcm_wlan_translate_nodfs_table;
+ size = ARRAY_SIZE(brcm_wlan_translate_nodfs_table);
+ } else {
+ locales = brcm_wlan_translate_custom_table;
+ size = ARRAY_SIZE(brcm_wlan_translate_custom_table);
+ }
+
+ for (i = 0; i < size; i++)
+ if (strcmp(ccode, locales[i].iso_abbrev) == 0)
+ return &locales[i];
+ return &locales[0];
+}
+
+static unsigned char brcm_mac_addr[IFHWADDRLEN] = { 0, 0x90, 0x4c, 0, 0, 0 };
+
+static int __init dhd_mac_addr_setup(char *str)
+{
+ char macstr[IFHWADDRLEN*3];
+ char *macptr = macstr;
+ char *token;
+ int i = 0;
+
+ if (!str)
+ return 0;
+ BCM_DBG("wlan MAC = %s\n", str);
+ if (strlen(str) >= sizeof(macstr))
+ return 0;
+ strlcpy(macstr, str, sizeof(macstr));
+
+ while (((token = strsep(&macptr, ":")) != NULL) && (i < IFHWADDRLEN)) {
+ unsigned long val;
+ int res;
+
+ res = kstrtoul(token, 0x10, &val);
+ if (res < 0)
+ break;
+ brcm_mac_addr[i++] = (u8)val;
+ }
+
+ if (i < IFHWADDRLEN && strlen(macstr)==IFHWADDRLEN*2) {
+ /* try again with wrong format (sans colons) */
+ u64 mac;
+ if (kstrtoull(macstr, 0x10, &mac) < 0)
+ return 0;
+ for (i=0; i<IFHWADDRLEN; i++)
+ brcm_mac_addr[IFHWADDRLEN-1-i] = (u8)((0xFF)&(mac>>(i*8)));
+ }
+
+ return i==IFHWADDRLEN ? 1:0;
+}
+
+__setup("androidboot.wifimacaddr=", dhd_mac_addr_setup);
+
+static int dhd_wifi_get_mac_addr(unsigned char *buf)
+{
+ uint rand_mac;
+
+ if (!buf)
+ return -EFAULT;
+
+ if ((brcm_mac_addr[4] == 0) && (brcm_mac_addr[5] == 0)) {
+ prandom_seed((uint)jiffies);
+ rand_mac = prandom_u32();
+ brcm_mac_addr[3] = (unsigned char)rand_mac;
+ brcm_mac_addr[4] = (unsigned char)(rand_mac >> 8);
+ brcm_mac_addr[5] = (unsigned char)(rand_mac >> 16);
+ }
+ memcpy(buf, brcm_mac_addr, IFHWADDRLEN);
+ return 0;
+}
+
+static int dhd_wlan_get_wake_irq(void)
+{
+ return brcm_wake_irq;
+}
+
+struct resource dhd_wlan_resources[] = {
+ [0] = {
+ .name = "bcmdhd_wlan_irq",
+ .start = 0, /* Dummy */
+ .end = 0, /* Dummy */
+ .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_SHAREABLE
+ | IORESOURCE_IRQ_HIGHLEVEL, /* Dummy */
+ },
+};
+EXPORT_SYMBOL(dhd_wlan_resources);
+
+
+struct wifi_platform_data dhd_wlan_control = {
+ .set_power = dhd_wlan_power,
+ .set_reset = dhd_wlan_reset,
+ .set_carddetect = dhd_wlan_set_carddetect,
+ .get_mac_addr = dhd_wifi_get_mac_addr,
+#ifdef CONFIG_DHD_USE_STATIC_BUF
+ .mem_prealloc = dhd_wlan_mem_prealloc,
+#endif
+ .get_wake_irq = dhd_wlan_get_wake_irq,
+ .get_country_code = dhd_wlan_get_country_code,
+};
+EXPORT_SYMBOL(dhd_wlan_control);
+
+int __init dhd_wlan_init(void)
+{
+ int ret;
+
+ printk(KERN_INFO "%s: START\n", __FUNCTION__);
+
+#ifdef CONFIG_DHD_USE_STATIC_BUF
+ ret = dhd_init_wlan_mem();
+#endif
+
+ ret = dhd_wifi_init_gpio();
+ dhd_wlan_resources[0].start = dhd_wlan_resources[0].end =
+ brcm_wake_irq;
+
+ return ret;
+}
+
+void __exit dhd_wlan_exit(void)
+{
+#ifdef CONFIG_DHD_USE_STATIC_BUF
+ int i;
+
+ for (i = 0; i < DHD_SKB_1PAGE_BUF_NUM; i++) {
+ if (wlan_static_skb[i])
+ dev_kfree_skb(wlan_static_skb[i]);
+ }
+
+ for (i = DHD_SKB_1PAGE_BUF_NUM; i < WLAN_SKB_1_2PAGE_BUF_NUM; i++) {
+ if (wlan_static_skb[i])
+ dev_kfree_skb(wlan_static_skb[i]);
+ }
+
+#if !defined(CONFIG_BCMDHD_PCIE)
+ if (wlan_static_skb[i])
+ dev_kfree_skb(wlan_static_skb[i]);
+#endif /* !CONFIG_BCMDHD_PCIE */
+
+ if (wlan_static_prot)
+ kfree(wlan_static_prot);
+
+ if (wlan_static_osl_buf)
+ kfree(wlan_static_osl_buf);
+
+ if (wlan_static_scan_buf0)
+ kfree(wlan_static_scan_buf0);
+
+ if (wlan_static_dhd_info_buf)
+ kfree(wlan_static_dhd_info_buf);
+
+ if (wlan_static_scan_buf1)
+ kfree(wlan_static_scan_buf1);
+
+#ifdef CONFIG_BCMDHD_PCIE
+ if (wlan_static_if_flow_lkup)
+ kfree(wlan_static_if_flow_lkup);
+#endif
+#endif /* CONFIG_DHD_USE_STATIC_BUF */
+ return;
+}
diff --git a/drivers/net/wireless/bcmdhd/dhd_dbg.h b/drivers/net/wireless/bcmdhd/dhd_dbg.h
old mode 100755
new mode 100644
index b9f1c6f..de08350
--- a/drivers/net/wireless/bcmdhd/dhd_dbg.h
+++ b/drivers/net/wireless/bcmdhd/dhd_dbg.h
@@ -2,13 +2,13 @@
* Debug/trace/assert driver definitions for Dongle Host Driver.
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -16,7 +16,7 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
@@ -44,10 +44,11 @@
#define DHD_GLOM(args) do {if (dhd_msg_level & DHD_GLOM_VAL) printf args;} while (0)
#define DHD_EVENT(args) do {if (dhd_msg_level & DHD_EVENT_VAL) printf args;} while (0)
#define DHD_BTA(args) do {if (dhd_msg_level & DHD_BTA_VAL) printf args;} while (0)
-#define DHD_ISCAN(args) do {if (dhd_msg_level & DHD_ISCAN_VAL) printf args;} while (0)
+#define DHD_RING(args) do {if (dhd_msg_level & DHD_RING_VAL) printf args;} while (0)
#define DHD_ARPOE(args) do {if (dhd_msg_level & DHD_ARPOE_VAL) printf args;} while (0)
#define DHD_REORDER(args) do {if (dhd_msg_level & DHD_REORDER_VAL) printf args;} while (0)
#define DHD_PNO(args) do {if (dhd_msg_level & DHD_PNO_VAL) printf args;} while (0)
+#define DHD_RTT(args) do {if (dhd_msg_level & DHD_RTT_VAL) printf args;} while (0)
#define DHD_TRACE_HW4 DHD_TRACE
@@ -63,11 +64,12 @@
#define DHD_GLOM_ON() (dhd_msg_level & DHD_GLOM_VAL)
#define DHD_EVENT_ON() (dhd_msg_level & DHD_EVENT_VAL)
#define DHD_BTA_ON() (dhd_msg_level & DHD_BTA_VAL)
-#define DHD_ISCAN_ON() (dhd_msg_level & DHD_ISCAN_VAL)
+#define DHD_RING_ON() (dhd_msg_level & DHD_RING_VAL)
#define DHD_ARPOE_ON() (dhd_msg_level & DHD_ARPOE_VAL)
#define DHD_REORDER_ON() (dhd_msg_level & DHD_REORDER_VAL)
#define DHD_NOCHECKDIED_ON() (dhd_msg_level & DHD_NOCHECKDIED_VAL)
#define DHD_PNO_ON() (dhd_msg_level & DHD_PNO_VAL)
+#define DHD_RTT_ON() (dhd_msg_level & DHD_RTT_VAL)
#else /* defined(BCMDBG) || defined(DHD_DEBUG) */
@@ -83,7 +85,7 @@
#define DHD_GLOM(args)
#define DHD_EVENT(args)
#define DHD_BTA(args)
-#define DHD_ISCAN(args)
+#define DHD_RING(args)
#define DHD_ARPOE(args)
#define DHD_REORDER(args)
#define DHD_PNO(args)
@@ -102,13 +104,13 @@
#define DHD_GLOM_ON() 0
#define DHD_EVENT_ON() 0
#define DHD_BTA_ON() 0
-#define DHD_ISCAN_ON() 0
+#define DHD_RING_ON() 0
#define DHD_ARPOE_ON() 0
#define DHD_REORDER_ON() 0
#define DHD_NOCHECKDIED_ON() 0
#define DHD_PNO_ON() 0
-
-#endif
+#define DHD_RTT_ON() 0
+#endif
#define DHD_LOG(args)
diff --git a/drivers/net/wireless/bcmdhd/dhd_debug.c b/drivers/net/wireless/bcmdhd/dhd_debug.c
new file mode 100644
index 0000000..17e7e19
--- /dev/null
+++ b/drivers/net/wireless/bcmdhd/dhd_debug.c
Binary files differ
diff --git a/drivers/net/wireless/bcmdhd/dhd_debug.h b/drivers/net/wireless/bcmdhd/dhd_debug.h
new file mode 100644
index 0000000..d82f7bc
--- /dev/null
+++ b/drivers/net/wireless/bcmdhd/dhd_debug.h
@@ -0,0 +1,312 @@
+/*
+ * Linux Debugability support code
+ *
+ * Copyright (C) 1999-2014, Broadcom Corporation
+ *
+ * Unless you and Broadcom execute a separate written software license
+ * agreement governing use of this software, this software is licensed to you
+ * under the terms of the GNU General Public License version 2 (the "GPL"),
+ * available at http://www.broadcom.com/licenses/GPLv2.php, with the
+ * following added to such license:
+ *
+ * As a special exception, the copyright holders of this software give you
+ * permission to link this software with independent modules, and to copy and
+ * distribute the resulting executable under terms of your choice, provided that
+ * you also meet, for each linked independent module, the terms and conditions of
+ * the license of that module. An independent module is a module which is not
+ * derived from this software. The special exception does not apply to any
+ * modifications of the software.
+ *
+ * Notwithstanding the above, under no circumstances may you combine this
+ * software in any way with any other Broadcom software provided under a license
+ * other than the GPL, without Broadcom's express prior written consent.
+ *
+ * $Id: dhd_debug.h 545157 2015-03-30 23:47:38Z $
+ */
+
+
+#ifndef _dhd_debug_h_
+#define _dhd_debug_h_
+enum {
+ DEBUG_RING_ID_INVALID = 0,
+ FW_VERBOSE_RING_ID,
+ FW_EVENT_RING_ID,
+ DHD_EVENT_RING_ID,
+ /* add new id here */
+ DEBUG_RING_ID_MAX
+};
+
+enum {
+ /* Feature set */
+ DBG_MEMORY_DUMP_SUPPORTED = (1 << (0)), /* Memory dump of FW */
+ DBG_PER_PACKET_TX_RX_STATUS_SUPPORTED = (1 << (1)), /* PKT Status */
+ DBG_CONNECT_EVENT_SUPPORTED = (1 << (2)), /* Connectivity Event */
+ DBG_POWER_EVENT_SUPOORTED = (1 << (3)), /* POWER of Driver */
+ DBG_WAKE_LOCK_SUPPORTED = (1 << (4)), /* WAKE LOCK of Driver */
+ DBG_VERBOSE_LOG_SUPPORTED = (1 << (5)), /* verbose log of FW */
+ DBG_HEALTH_CHECK_SUPPORTED = (1 << (6)), /* monitor the health of FW */
+};
+
+enum {
+ /* set for binary entries */
+ DBG_RING_ENTRY_FLAGS_HAS_BINARY = (1 << (0)),
+ /* set if 64 bits timestamp is present */
+ DBG_RING_ENTRY_FLAGS_HAS_TIMESTAMP = (1 << (1))
+};
+
+#define DBGRING_NAME_MAX 32
+/* firmware verbose ring, ring id 1 */
+#define FW_VERBOSE_RING_NAME "fw_verbose"
+#define FW_VERBOSE_RING_SIZE (64 * 1024)
+/* firmware event ring, ring id 2 */
+#define FW_EVENT_RING_NAME "fw_event"
+#define FW_EVENT_RING_SIZE (64 * 1024)
+/* DHD connection event ring, ring id 3 */
+#define DHD_EVENT_RING_NAME "dhd_event"
+#define DHD_EVENT_RING_SIZE (64 * 1024)
+
+#define DBG_RING_STATUS_SIZE (sizeof(dhd_dbg_ring_status_t))
+
+#define VALID_RING(id) \
+ (id > DEBUG_RING_ID_INVALID && id < DEBUG_RING_ID_MAX)
+
+/* driver receive association command from kernel */
+#define WIFI_EVENT_ASSOCIATION_REQUESTED 0
+#define WIFI_EVENT_AUTH_COMPLETE 1
+#define WIFI_EVENT_ASSOC_COMPLETE 2
+/* received firmware event indicating auth frames are sent */
+#define WIFI_EVENT_FW_AUTH_STARTED 3
+/* received firmware event indicating assoc frames are sent */
+#define WIFI_EVENT_FW_ASSOC_STARTED 4
+/* received firmware event indicating reassoc frames are sent */
+#define WIFI_EVENT_FW_RE_ASSOC_STARTED 5
+#define WIFI_EVENT_DRIVER_SCAN_REQUESTED 6
+#define WIFI_EVENT_DRIVER_SCAN_RESULT_FOUND 7
+#define WIFI_EVENT_DRIVER_SCAN_COMPLETE 8
+#define WIFI_EVENT_G_SCAN_STARTED 9
+#define WIFI_EVENT_G_SCAN_COMPLETE 10
+#define WIFI_EVENT_DISASSOCIATION_REQUESTED 11
+#define WIFI_EVENT_RE_ASSOCIATION_REQUESTED 12
+#define WIFI_EVENT_ROAM_REQUESTED 13
+/* received beacon from AP (event enabled only in verbose mode) */
+#define WIFI_EVENT_BEACON_RECEIVED 14
+/* firmware has triggered a roam scan (not g-scan) */
+#define WIFI_EVENT_ROAM_SCAN_STARTED 15
+/* firmware has completed a roam scan (not g-scan) */
+#define WIFI_EVENT_ROAM_SCAN_COMPLETE 16
+/* firmware has started searching for roam candidates (with reason =xx) */
+#define WIFI_EVENT_ROAM_SEARCH_STARTED 17
+/* firmware has stopped searching for roam candidates (with reason =xx) */
+#define WIFI_EVENT_ROAM_SEARCH_STOPPED 18
+/* received channel switch announcement from AP */
+#define WIFI_EVENT_CHANNEL_SWITCH_ANOUNCEMENT 20
+/* fw start transmit eapol frame, with EAPOL index 1-4 */
+#define WIFI_EVENT_FW_EAPOL_FRAME_TRANSMIT_START 21
+/* fw gives up eapol frame, with rate, success/failure and number retries */
+#define WIFI_EVENT_FW_EAPOL_FRAME_TRANSMIT_STOP 22
+/* kernel queue EAPOL for transmission in driver with EAPOL index 1-4 */
+#define WIFI_EVENT_DRIVER_EAPOL_FRAME_TRANSMIT_REQUESTED 23
+/* with rate, regardless of the fact that EAPOL frame is accepted or rejected by firmware */
+#define WIFI_EVENT_FW_EAPOL_FRAME_RECEIVED 24
+/* with rate, and eapol index, driver has received */
+/* EAPOL frame and will queue it up to wpa_supplicant */
+#define WIFI_EVENT_DRIVER_EAPOL_FRAME_RECEIVED 26
+/* with success/failure, parameters */
+#define WIFI_EVENT_BLOCK_ACK_NEGOTIATION_COMPLETE 27
+#define WIFI_EVENT_BT_COEX_BT_SCO_START 28
+#define WIFI_EVENT_BT_COEX_BT_SCO_STOP 29
+/* for paging/scan etc..., when BT starts transmitting twice per BT slot */
+#define WIFI_EVENT_BT_COEX_BT_SCAN_START 30
+#define WIFI_EVENT_BT_COEX_BT_SCAN_STOP 31
+#define WIFI_EVENT_BT_COEX_BT_HID_START 32
+#define WIFI_EVENT_BT_COEX_BT_HID_STOP 33
+/* firmware sends auth frame in roaming to next candidate */
+#define WIFI_EVENT_ROAM_AUTH_STARTED 34
+/* firmware receive auth confirm from ap */
+#define WIFI_EVENT_ROAM_AUTH_COMPLETE 35
+/* firmware sends assoc/reassoc frame in */
+#define WIFI_EVENT_ROAM_ASSOC_STARTED 36
+/* firmware receive assoc/reassoc confirm from ap */
+#define WIFI_EVENT_ROAM_ASSOC_COMPLETE 37
+
+#define WIFI_TAG_VENDOR_SPECIFIC 0 /* takes a byte stream as parameter */
+#define WIFI_TAG_BSSID 1 /* takes a 6-byte MAC address as parameter */
+#define WIFI_TAG_ADDR 2 /* takes a 6-byte MAC address as parameter */
+#define WIFI_TAG_SSID 3 /* takes a 32-byte SSID as parameter */
+#define WIFI_TAG_STATUS 4 /* takes an integer as parameter */
+#define WIFI_TAG_CHANNEL_SPEC 5 /* takes one or more wifi_channel_spec as parameter */
+#define WIFI_TAG_WAKE_LOCK_EVENT 6 /* takes a wake_lock_event struct as parameter */
+#define WIFI_TAG_ADDR1 7 /* takes a 6-byte MAC address as parameter */
+#define WIFI_TAG_ADDR2 8 /* takes a 6-byte MAC address as parameter */
+#define WIFI_TAG_ADDR3 9 /* takes a 6-byte MAC address as parameter */
+#define WIFI_TAG_ADDR4 10 /* takes a 6-byte MAC address as parameter */
+#define WIFI_TAG_TSF 11 /* takes a 64-bit TSF value as parameter */
+#define WIFI_TAG_IE 12 /* takes one or more specific 802.11 IEs as parameter; */
+	/* IEs are in turn indicated */
+	/* in TLV format as per the 802.11 spec */
+#define WIFI_TAG_INTERFACE 13 /* takes an interface name as parameter */
+#define WIFI_TAG_REASON_CODE 14 /* takes an 802.11 reason code as parameter */
+#define WIFI_TAG_RATE_MBPS 15 /* takes a wifi rate in 0.5 Mbps units */
+
+typedef struct {
+ uint16 tag;
+ uint16 len; /* length of value */
+ uint8 value[0];
+} __attribute__ ((packed)) tlv_log;
+
+typedef struct per_packet_status_entry {
+ uint8 flags;
+	uint8 tid; /* transmit or receive TID */
+ uint16 MCS; /* modulation and bandwidth */
+ /*
+ * TX: RSSI of ACK for that packet
+ * RX: RSSI of packet
+ */
+ uint8 rssi;
+ uint8 num_retries; /* number of attempted retries */
+ uint16 last_transmit_rate; /* last transmit rate in .5 mbps */
+	/* transmit/receive sequence for that MPDU packet */
+ uint16 link_layer_transmit_sequence;
+ /*
+ * TX: firmware timestamp (us) when packet is queued within firmware buffer
+ * for SDIO/HSIC or into PCIe buffer
+ * RX : firmware receive timestamp
+ */
+ uint64 firmware_entry_timestamp;
+ /*
+ * firmware timestamp (us) when packet start contending for the
+ * medium for the first time, at head of its AC queue,
+ * or as part of an MPDU or A-MPDU. This timestamp is not updated
+ * for each retry, only the first transmit attempt.
+ */
+ uint64 start_contention_timestamp;
+ /*
+	 * firmware timestamp (us) when packet is successfully transmitted
+ * or aborted because it has exhausted its maximum number of retries
+ */
+ uint64 transmit_success_timestamp;
+ /*
+	 * packet data. The length of packet data is determined by the entry_size field of
+	 * the wifi_ring_buffer_entry structure. It is expected that the first bytes of the
+	 * packet, or the packet headers only (up to TCP or RTP/UDP headers), will be copied into the ring
+ */
+ uint8 data[0];
+} __attribute__ ((packed)) per_packet_status_entry_t;
+
+typedef struct log_conn_event {
+ uint16 event;
+ tlv_log tlvs[0];
+	/*
+	 * A separate parameter structure per event is provided, plus optional data.
+	 * The event_data is expected to include an official Android part, with
+	 * parameters such as transmit rate, number of retries, number of scan
+	 * results found, etc. It can also include a vendor-proprietary part that
+	 * is understood by the developer only.
+	 */
+} __attribute__ ((packed)) log_conn_event_t;
+
+/*
+ * Ring buffer name for the power events ring. Note that power events are extremely
+ * frequent and thus should be stored in their own ring/file so as not to clobber connectivity events.
+ */
+
+typedef struct wake_lock_event {
+ uint32 status; /* 0 taken, 1 released */
+ uint32 reason; /* reason why this wake lock is taken */
+ char name[0]; /* null terminated */
+} __attribute__ ((packed)) wake_lock_event_t;
+
+typedef struct wifi_power_event {
+ uint16 event;
+ tlv_log tlvs[0];
+} __attribute__ ((packed)) wifi_power_event_t;
+
+/* entry type */
+enum {
+ DBG_RING_ENTRY_EVENT_TYPE = 1,
+ DBG_RING_ENTRY_PKT_TYPE,
+ DBG_RING_ENTRY_WAKE_LOCK_EVENT_TYPE,
+ DBG_RING_ENTRY_POWER_EVENT_TYPE,
+ DBG_RING_ENTRY_DATA_TYPE
+};
+
+typedef struct dhd_dbg_ring_entry {
+ uint16 len; /* payload length excluding the header */
+ uint8 flags;
+ uint8 type; /* Per ring specific */
+ uint64 timestamp; /* present if has_timestamp bit is set. */
+} __attribute__ ((packed)) dhd_dbg_ring_entry_t;
+
+#define DBG_RING_ENTRY_SIZE (sizeof(dhd_dbg_ring_entry_t))
+#define ENTRY_LENGTH(hdr) (hdr->len + DBG_RING_ENTRY_SIZE)
+#define DBG_EVENT_LOG(dhd, connect_state) \
+	do { \
+		uint16 state = connect_state; \
+		dhd_os_push_push_ring_data(dhd, DHD_EVENT_RING_ID, \
+			&state, sizeof(state)); \
+	} while (0)
+
+typedef struct dhd_dbg_ring_status {
+ uint8 name[DBGRING_NAME_MAX];
+ uint32 flags;
+ int ring_id; /* unique integer representing the ring */
+ /* total memory size allocated for the buffer */
+ uint32 ring_buffer_byte_size;
+ uint32 verbose_level;
+	/* number of bytes that were written to the buffer by the driver */
+	uint32 written_bytes;
+	/* number of bytes that were read from the buffer by userland */
+	uint32 read_bytes;
+	/* number of records that were written to the buffer by the driver */
+	uint32 written_records;
+} dhd_dbg_ring_status_t;
+
+struct log_level_table {
+ int log_level;
+ uint16 tag;
+ char *desc;
+};
+
+typedef void (*dbg_pullreq_t)(void *os_priv, const int ring_id);
+
+typedef void (*dbg_urgent_noti_t) (dhd_pub_t *dhdp, const void *data, const uint32 len);
+/* dhd_dbg functions */
+extern int dhd_dbg_attach(dhd_pub_t *dhdp, dbg_pullreq_t os_pullreq,
+ dbg_urgent_noti_t os_urgent_notifier, void *os_priv);
+extern void dhd_dbg_detach(dhd_pub_t *dhdp);
+extern int dhd_dbg_start(dhd_pub_t *dhdp, bool start);
+extern void dhd_dbg_trace_evnt_handler(dhd_pub_t *dhdp, void *event_data,
+ void *raw_event_ptr, uint datalen);
+extern int dhd_dbg_set_configuration(dhd_pub_t *dhdp, int ring_id,
+ int log_level, int flags, int threshold);
+extern int dhd_dbg_get_ring_status(dhd_pub_t *dhdp, int ring_id,
+ dhd_dbg_ring_status_t *dbg_ring_status);
+
+extern int dhd_dbg_ring_push(dhd_pub_t *dhdp, int ring_id, dhd_dbg_ring_entry_t *hdr, void *data);
+
+extern int dhd_dbg_ring_pull(dhd_pub_t *dhdp, int ring_id, void *data, uint32 buf_len);
+extern int dhd_dbg_find_ring_id(dhd_pub_t *dhdp, char *ring_name);
+extern void *dhd_dbg_get_priv(dhd_pub_t *dhdp);
+extern int dhd_dbg_send_urgent_evt(dhd_pub_t *dhdp, const void *data, const uint32 len);
+
+/* wrapper function */
+extern int dhd_os_dbg_attach(dhd_pub_t *dhdp);
+extern void dhd_os_dbg_detach(dhd_pub_t *dhdp);
+extern int dhd_os_dbg_register_callback(int ring_id,
+ void (*dbg_ring_sub_cb)(void *ctx, const int ring_id, const void *data,
+ const uint32 len, const dhd_dbg_ring_status_t dbg_ring_status));
+extern int dhd_os_dbg_register_urgent_notifier(dhd_pub_t *dhdp,
+ void (*urgent_noti)(void *ctx, const void *data, const uint32 len, const uint32 fw_len));
+
+extern int dhd_os_start_logging(dhd_pub_t *dhdp, char *ring_name, int log_level,
+ int flags, int time_intval, int threshold);
+extern int dhd_os_reset_logging(dhd_pub_t *dhdp);
+extern int dhd_os_suppress_logging(dhd_pub_t *dhdp, bool suppress);
+
+extern int dhd_os_get_ring_status(dhd_pub_t *dhdp, int ring_id,
+ dhd_dbg_ring_status_t *dbg_ring_status);
+extern int dhd_os_trigger_get_ring_data(dhd_pub_t *dhdp, char *ring_name);
+extern int dhd_os_push_push_ring_data(dhd_pub_t *dhdp, int ring_id, void *data, int32 data_len);
+
+extern int dhd_os_dbg_get_feature(dhd_pub_t *dhdp, int32 *features);
+#endif /* _dhd_debug_h_ */
diff --git a/drivers/net/wireless/bcmdhd/dhd_debug_linux.c b/drivers/net/wireless/bcmdhd/dhd_debug_linux.c
new file mode 100644
index 0000000..70e3ede
--- /dev/null
+++ b/drivers/net/wireless/bcmdhd/dhd_debug_linux.c
@@ -0,0 +1,412 @@
+/*
+ * DHD debuggability Linux OS layer
+ *
+ * Copyright (C) 1999-2014, Broadcom Corporation
+ *
+ * Unless you and Broadcom execute a separate written software license
+ * agreement governing use of this software, this software is licensed to you
+ * under the terms of the GNU General Public License version 2 (the "GPL"),
+ * available at http://www.broadcom.com/licenses/GPLv2.php, with the
+ * following added to such license:
+ *
+ * As a special exception, the copyright holders of this software give you
+ * permission to link this software with independent modules, and to copy and
+ * distribute the resulting executable under terms of your choice, provided that
+ * you also meet, for each linked independent module, the terms and conditions of
+ * the license of that module. An independent module is a module which is not
+ * derived from this software. The special exception does not apply to any
+ * modifications of the software.
+ *
+ * Notwithstanding the above, under no circumstances may you combine this
+ * software in any way with any other Broadcom software provided under a license
+ * other than the GPL, without Broadcom's express prior written consent.
+ *
+ * $Id: dhd_debug_linux.c 545157 2015-03-30 23:47:38Z $
+ */
+
+#include <typedefs.h>
+#include <osl.h>
+#include <bcmutils.h>
+#include <bcmendian.h>
+#include <bcmpcie.h>
+#include <dngl_stats.h>
+#include <dhd.h>
+#include <dhd_dbg.h>
+#include <dhd_debug.h>
+
+#include <net/cfg80211.h>
+#include <wl_cfgvendor.h>
+
+typedef void (*dbg_ring_send_sub_t)(void *ctx, const int ring_id, const void *data,
+ const uint32 len, const dhd_dbg_ring_status_t ring_status);
+typedef void (*dbg_urgent_noti_sub_t)(void *ctx, const void *data,
+ const uint32 len, const uint32 fw_len);
+
+static dbg_ring_send_sub_t ring_send_sub_cb[DEBUG_RING_ID_MAX];
+static dbg_urgent_noti_sub_t urgent_noti_sub_cb;
+typedef struct dhd_dbg_os_ring_info {
+ dhd_pub_t *dhdp;
+ int ring_id;
+ int log_level;
+ unsigned long interval;
+ struct delayed_work work;
+ uint64 tsoffset;
+} linux_dbgring_info_t;
+
+struct log_level_table dhd_event_map[] = {
+ {1, WIFI_EVENT_DRIVER_EAPOL_FRAME_TRANSMIT_REQUESTED, "DRIVER EAPOL TX REQ"},
+ {1, WIFI_EVENT_DRIVER_EAPOL_FRAME_RECEIVED, "DRIVER EAPOL RX"},
+ {2, WIFI_EVENT_DRIVER_SCAN_REQUESTED, "SCAN_REQUESTED"},
+	{2, WIFI_EVENT_DRIVER_SCAN_COMPLETE, "SCAN COMPLETE"},
+ {3, WIFI_EVENT_DRIVER_SCAN_RESULT_FOUND, "SCAN RESULT FOUND"}
+};
+
+static void
+debug_data_send(dhd_pub_t *dhdp, int ring_id, const void *data, const uint32 len,
+ const dhd_dbg_ring_status_t ring_status)
+{
+ struct net_device *ndev;
+ dbg_ring_send_sub_t ring_sub_send;
+ ndev = dhd_linux_get_primary_netdev(dhdp);
+ if (!ndev)
+ return;
+ if (ring_send_sub_cb[ring_id]) {
+ ring_sub_send = ring_send_sub_cb[ring_id];
+ ring_sub_send(ndev, ring_id, data, len, ring_status);
+ }
+}
+
+static void
+dhd_os_dbg_urgent_notifier(dhd_pub_t *dhdp, const void *data, const uint32 len)
+{
+ struct net_device *ndev;
+ ndev = dhd_linux_get_primary_netdev(dhdp);
+ if (!ndev)
+ return;
+ if (urgent_noti_sub_cb) {
+ urgent_noti_sub_cb(ndev, data, len, dhdp->soc_ram_length);
+ }
+}
+
+static void
+dbg_ring_poll_worker(struct work_struct *work)
+{
+ struct delayed_work *d_work = to_delayed_work(work);
+ linux_dbgring_info_t *ring_info =
+ container_of(d_work, linux_dbgring_info_t, work);
+ dhd_pub_t *dhdp = ring_info->dhdp;
+ int ringid = ring_info->ring_id;
+ dhd_dbg_ring_status_t ring_status;
+ void *buf;
+ dhd_dbg_ring_entry_t *hdr;
+ uint32 buflen, rlen;
+
+ dhd_dbg_get_ring_status(dhdp, ringid, &ring_status);
+ if (ring_status.written_bytes > ring_status.read_bytes)
+ buflen = ring_status.written_bytes - ring_status.read_bytes;
+	else if (ring_status.written_bytes < ring_status.read_bytes)
+		/* unsigned 32-bit subtraction handles the counter wraparound */
+		buflen = ring_status.written_bytes - ring_status.read_bytes;
+ else
+ goto exit;
+ buf = MALLOC(dhdp->osh, buflen);
+ if (!buf) {
+ DHD_ERROR(("%s failed to allocate read buf\n", __FUNCTION__));
+ return;
+ }
+ rlen = dhd_dbg_ring_pull(dhdp, ringid, buf, buflen);
+ hdr = (dhd_dbg_ring_entry_t *)buf;
+ while (rlen > 0) {
+ ring_status.read_bytes += ENTRY_LENGTH(hdr);
+ /* offset fw ts to host ts */
+ hdr->timestamp += ring_info->tsoffset;
+ debug_data_send(dhdp, ringid, hdr, ENTRY_LENGTH(hdr),
+ ring_status);
+ rlen -= ENTRY_LENGTH(hdr);
+ hdr = (dhd_dbg_ring_entry_t *)((void *)hdr + ENTRY_LENGTH(hdr));
+ }
+ MFREE(dhdp->osh, buf, buflen);
+
+ if (!ring_info->interval)
+ return;
+ dhd_dbg_get_ring_status(dhdp, ring_info->ring_id, &ring_status);
+
+exit:
+ if (ring_info->interval) {
+ /* retrigger the work at same interval */
+ if (ring_status.written_bytes == ring_status.read_bytes)
+ schedule_delayed_work(d_work, ring_info->interval);
+ else
+ schedule_delayed_work(d_work, 0);
+ }
+
+ return;
+}
+
+int
+dhd_os_dbg_register_callback(int ring_id, dbg_ring_send_sub_t callback)
+{
+ if (!VALID_RING(ring_id))
+ return BCME_RANGE;
+
+ ring_send_sub_cb[ring_id] = callback;
+ return BCME_OK;
+}
+
+int
+dhd_os_dbg_register_urgent_notifier(dhd_pub_t *dhdp, dbg_urgent_noti_sub_t urgent_noti_sub)
+{
+ if (!dhdp || !urgent_noti_sub)
+ return BCME_BADARG;
+ urgent_noti_sub_cb = urgent_noti_sub;
+
+ return BCME_OK;
+}
+
+int
+dhd_os_start_logging(dhd_pub_t *dhdp, char *ring_name, int log_level,
+ int flags, int time_intval, int threshold)
+{
+ int ret = BCME_OK;
+ int ring_id;
+ linux_dbgring_info_t *os_priv, *ring_info;
+	uint32 ms = 0; /* fw time sync offset; stays 0 if the iovar fails */
+
+ ring_id = dhd_dbg_find_ring_id(dhdp, ring_name);
+ if (!VALID_RING(ring_id))
+ return BCME_UNSUPPORTED;
+
+	DHD_RING(("%s, log_level : %d, time_intval : %d, threshold %d bytes\n",
+ __FUNCTION__, log_level, time_intval, threshold));
+
+ /* change the configuration */
+ ret = dhd_dbg_set_configuration(dhdp, ring_id, log_level, flags, threshold);
+ if (ret) {
+		DHD_ERROR(("dhd_dbg_set_configuration failed : %d\n", ret));
+ return ret;
+ }
+
+ os_priv = dhd_dbg_get_priv(dhdp);
+ if (!os_priv)
+ return BCME_ERROR;
+ ring_info = &os_priv[ring_id];
+ ring_info->log_level = log_level;
+ if (ring_id == FW_VERBOSE_RING_ID || ring_id == FW_EVENT_RING_ID) {
+ ring_info->tsoffset = local_clock();
+ if (dhd_wl_ioctl_get_intiovar(dhdp, "rte_timesync", &ms, WLC_GET_VAR,
+ FALSE, 0))
+ DHD_ERROR(("%s rte_timesync failed\n", __FUNCTION__));
+ do_div(ring_info->tsoffset, 1000000);
+ ring_info->tsoffset -= ms;
+ }
+ if (time_intval == 0 || log_level == 0) {
+ ring_info->interval = 0;
+ cancel_delayed_work_sync(&ring_info->work);
+ } else {
+ ring_info->interval = msecs_to_jiffies(time_intval * MSEC_PER_SEC);
+ schedule_delayed_work(&ring_info->work, ring_info->interval);
+ }
+
+ return ret;
+}
+
+int
+dhd_os_reset_logging(dhd_pub_t *dhdp)
+{
+ int ret = BCME_OK;
+ int ring_id;
+ linux_dbgring_info_t *os_priv, *ring_info;
+
+ os_priv = dhd_dbg_get_priv(dhdp);
+ if (!os_priv)
+ return BCME_ERROR;
+
+ /* Stop all rings */
+ for (ring_id = DEBUG_RING_ID_INVALID + 1; ring_id < DEBUG_RING_ID_MAX; ring_id++) {
+ DHD_RING(("%s: Stop ring buffer %d\n", __FUNCTION__, ring_id));
+
+ ring_info = &os_priv[ring_id];
+ /* cancel any pending work */
+ cancel_delayed_work_sync(&ring_info->work);
+		/* log level zero stops logging on that ring */
+ ring_info->log_level = 0;
+ ring_info->interval = 0;
+ /* change the configuration */
+ ret = dhd_dbg_set_configuration(dhdp, ring_id, 0, 0, 0);
+ if (ret) {
+			DHD_ERROR(("dhd_dbg_set_configuration failed : %d\n", ret));
+ return ret;
+ }
+ }
+ return ret;
+}
+
+#define SUPPRESS_LOG_LEVEL 1
+int
+dhd_os_suppress_logging(dhd_pub_t *dhdp, bool suppress)
+{
+ int ret = BCME_OK;
+ int max_log_level;
+ int enable = (suppress) ? 0 : 1;
+ linux_dbgring_info_t *os_priv;
+
+ os_priv = dhd_dbg_get_priv(dhdp);
+ if (!os_priv)
+ return BCME_ERROR;
+
+ max_log_level = MAX(os_priv[FW_VERBOSE_RING_ID].log_level, os_priv[FW_EVENT_RING_ID].log_level);
+ if (max_log_level == SUPPRESS_LOG_LEVEL) {
+		/* suppress logging in FW so it does not wake the host while the device is in suspend mode */
+ ret = dhd_iovar(dhdp, 0, "logtrace", (char *)&enable, sizeof(enable), 1);
+ if (ret < 0 && (ret != BCME_UNSUPPORTED)) {
+			DHD_ERROR(("logtrace iovar failed : %d\n", ret));
+ }
+ }
+
+ return ret;
+}
+
+int
+dhd_os_get_ring_status(dhd_pub_t *dhdp, int ring_id, dhd_dbg_ring_status_t *dbg_ring_status)
+{
+ return dhd_dbg_get_ring_status(dhdp, ring_id, dbg_ring_status);
+}
+
+int
+dhd_os_trigger_get_ring_data(dhd_pub_t *dhdp, char *ring_name)
+{
+ int ret = BCME_OK;
+ int ring_id;
+ linux_dbgring_info_t *os_priv, *ring_info;
+ ring_id = dhd_dbg_find_ring_id(dhdp, ring_name);
+ if (!VALID_RING(ring_id))
+ return BCME_UNSUPPORTED;
+ os_priv = dhd_dbg_get_priv(dhdp);
+ if (os_priv) {
+ ring_info = &os_priv[ring_id];
+ if (ring_info->interval) {
+ cancel_delayed_work_sync(&ring_info->work);
+ }
+ schedule_delayed_work(&ring_info->work, 0);
+ } else {
+ DHD_ERROR(("%s : os_priv is NULL\n", __FUNCTION__));
+ ret = BCME_ERROR;
+ }
+ return ret;
+}
+
+int
+dhd_os_push_push_ring_data(dhd_pub_t *dhdp, int ring_id, void *data, int32 data_len)
+{
+ int ret = BCME_OK, i;
+ dhd_dbg_ring_entry_t msg_hdr;
+ log_conn_event_t event_data;
+ linux_dbgring_info_t *os_priv, *ring_info = NULL;
+
+ if (!VALID_RING(ring_id))
+ return BCME_UNSUPPORTED;
+ os_priv = dhd_dbg_get_priv(dhdp);
+
+	if (!os_priv)
+		return BCME_ERROR;
+	ring_info = &os_priv[ring_id];
+ memset(&msg_hdr, 0, sizeof(dhd_dbg_ring_entry_t));
+
+ if (ring_id == DHD_EVENT_RING_ID) {
+ msg_hdr.type = DBG_RING_ENTRY_EVENT_TYPE;
+ msg_hdr.flags |= DBG_RING_ENTRY_FLAGS_HAS_TIMESTAMP;
+ msg_hdr.flags |= DBG_RING_ENTRY_FLAGS_HAS_BINARY;
+ msg_hdr.timestamp = local_clock();
+ /* convert to ms */
+ do_div(msg_hdr.timestamp, 1000000);
+ msg_hdr.len = sizeof(event_data);
+ event_data.event = *((uint16 *)(data));
+		/* drop events that require a higher log level than the ring's current level */
+ for (i = 0; i < ARRAYSIZE(dhd_event_map); i++) {
+ if ((dhd_event_map[i].tag == event_data.event) &&
+ dhd_event_map[i].log_level > ring_info->log_level) {
+ return ret;
+ }
+ }
+ }
+ ret = dhd_dbg_ring_push(dhdp, ring_id, &msg_hdr, &event_data);
+ if (ret) {
+ DHD_ERROR(("%s : failed to push data into the ring (%d) with ret(%d)\n",
+ __FUNCTION__, ring_id, ret));
+ }
+ return ret;
+}
+
+int
+dhd_os_dbg_get_feature(dhd_pub_t *dhdp, int32 *features)
+{
+ int ret = BCME_OK;
+ /* XXX : we need to find a way to get the features for dbg */
+ *features = 0;
+ *features |= DBG_MEMORY_DUMP_SUPPORTED;
+ if (FW_SUPPORTED(dhdp, logtrace)) {
+ *features |= DBG_CONNECT_EVENT_SUPPORTED;
+ *features |= DBG_VERBOSE_LOG_SUPPORTED;
+ }
+ if (FW_SUPPORTED(dhdp, hchk)) {
+ *features |= DBG_HEALTH_CHECK_SUPPORTED;
+ }
+ return ret;
+}
+
+static void
+dhd_os_dbg_pullreq(void *os_priv, int ring_id)
+{
+ linux_dbgring_info_t *ring_info;
+
+ ring_info = &((linux_dbgring_info_t *)os_priv)[ring_id];
+ if (ring_info->interval != 0)
+ schedule_delayed_work(&ring_info->work, 0);
+}
+
+int
+dhd_os_dbg_attach(dhd_pub_t *dhdp)
+{
+ int ret = BCME_OK;
+ linux_dbgring_info_t *os_priv, *ring_info;
+ int ring_id;
+
+ /* os_dbg data */
+ os_priv = MALLOCZ(dhdp->osh, sizeof(*os_priv) * DEBUG_RING_ID_MAX);
+ if (!os_priv)
+ return BCME_NOMEM;
+
+ for (ring_id = DEBUG_RING_ID_INVALID + 1; ring_id < DEBUG_RING_ID_MAX;
+ ring_id++) {
+ ring_info = &os_priv[ring_id];
+ INIT_DELAYED_WORK(&ring_info->work, dbg_ring_poll_worker);
+ ring_info->dhdp = dhdp;
+ ring_info->ring_id = ring_id;
+ }
+
+ ret = dhd_dbg_attach(dhdp, dhd_os_dbg_pullreq, dhd_os_dbg_urgent_notifier, os_priv);
+ if (ret)
+ MFREE(dhdp->osh, os_priv, sizeof(*os_priv) * DEBUG_RING_ID_MAX);
+
+ return ret;
+}
+
+void
+dhd_os_dbg_detach(dhd_pub_t *dhdp)
+{
+ linux_dbgring_info_t *os_priv, *ring_info;
+ int ring_id;
+ /* free os_dbg data */
+ os_priv = dhd_dbg_get_priv(dhdp);
+	/* abort any pending job */
+ for (ring_id = DEBUG_RING_ID_INVALID + 1; ring_id < DEBUG_RING_ID_MAX; ring_id++) {
+ ring_info = &os_priv[ring_id];
+ if (ring_info->interval) {
+ ring_info->interval = 0;
+ cancel_delayed_work_sync(&ring_info->work);
+ }
+ }
+ MFREE(dhdp->osh, os_priv, sizeof(*os_priv) * DEBUG_RING_ID_MAX);
+
+	dhd_dbg_detach(dhdp);
+}
diff --git a/drivers/net/wireless/bcmdhd/dhd_flowring.c b/drivers/net/wireless/bcmdhd/dhd_flowring.c
new file mode 100644
index 0000000..d5addad
--- /dev/null
+++ b/drivers/net/wireless/bcmdhd/dhd_flowring.c
@@ -0,0 +1,823 @@
+/*
+ * Broadcom Dongle Host Driver (DHD), Flow ring specific code at top level
+ * Copyright (C) 1999-2014, Broadcom Corporation
+ *
+ * Unless you and Broadcom execute a separate written software license
+ * agreement governing use of this software, this software is licensed to you
+ * under the terms of the GNU General Public License version 2 (the "GPL"),
+ * available at http://www.broadcom.com/licenses/GPLv2.php, with the
+ * following added to such license:
+ *
+ * As a special exception, the copyright holders of this software give you
+ * permission to link this software with independent modules, and to copy and
+ * distribute the resulting executable under terms of your choice, provided that
+ * you also meet, for each linked independent module, the terms and conditions of
+ * the license of that module. An independent module is a module which is not
+ * derived from this software. The special exception does not apply to any
+ * modifications of the software.
+ *
+ * Notwithstanding the above, under no circumstances may you combine this
+ * software in any way with any other Broadcom software provided under a license
+ * other than the GPL, without Broadcom's express prior written consent.
+ *
+ * $Id: dhd_flowrings.c jaganlv $
+ */
+
+#include <typedefs.h>
+#include <bcmutils.h>
+#include <bcmendian.h>
+#include <bcmdevs.h>
+
+#include <proto/ethernet.h>
+#include <proto/bcmevent.h>
+#include <dngl_stats.h>
+
+#include <dhd.h>
+
+#include <dhd_flowring.h>
+#include <dhd_bus.h>
+#include <dhd_proto.h>
+#include <dhd_dbg.h>
+#include <proto/802.1d.h>
+#include <pcie_core.h>
+#include <bcmmsgbuf.h>
+#include <dhd_pcie.h>
+
+static INLINE uint16 dhd_flowid_find(dhd_pub_t *dhdp, uint8 ifindex,
+ uint8 prio, char *sa, char *da);
+
+static INLINE uint16 dhd_flowid_alloc(dhd_pub_t *dhdp, uint8 ifindex,
+ uint8 prio, char *sa, char *da);
+
+static INLINE int dhd_flowid_lookup(dhd_pub_t *dhdp, uint8 ifindex,
+ uint8 prio, char *sa, char *da, uint16 *flowid);
+int BCMFASTPATH dhd_flow_queue_overflow(flow_queue_t *queue, void *pkt);
+
+#define FLOW_QUEUE_PKT_NEXT(p) PKTLINK(p)
+#define FLOW_QUEUE_PKT_SETNEXT(p, x) PKTSETLINK((p), (x))
+
+#ifdef EAPOL_PKT_PRIO
+const uint8 prio2ac[8] = { 0, 1, 1, 0, 2, 2, 3, 7 };
+#else
+const uint8 prio2ac[8] = { 0, 1, 1, 0, 2, 2, 3, 3 };
+#endif /* EAPOL_PKT_PRIO */
+const uint8 prio2tid[8] = { 0, 1, 2, 3, 4, 5, 6, 7 };
+
+int BCMFASTPATH
+dhd_flow_queue_overflow(flow_queue_t *queue, void *pkt)
+{
+ return BCME_NORESOURCE;
+}
+
+/* Flow ring's queue management functions */
+
+void /* Initialize a flow ring's queue */
+dhd_flow_queue_init(dhd_pub_t *dhdp, flow_queue_t *queue, int max)
+{
+ ASSERT((queue != NULL) && (max > 0));
+
+ dll_init(&queue->list);
+ queue->head = queue->tail = NULL;
+ queue->len = 0;
+ queue->max = max - 1;
+ queue->failures = 0U;
+ queue->cb = &dhd_flow_queue_overflow;
+}
+
+void /* Register an enqueue overflow callback handler */
+dhd_flow_queue_register(flow_queue_t *queue, flow_queue_cb_t cb)
+{
+ ASSERT(queue != NULL);
+ queue->cb = cb;
+}
+
+
+int BCMFASTPATH /* Enqueue a packet in a flow ring's queue */
+dhd_flow_queue_enqueue(dhd_pub_t *dhdp, flow_queue_t *queue, void *pkt)
+{
+ int ret = BCME_OK;
+
+ ASSERT(queue != NULL);
+
+ if (queue->len >= queue->max) {
+ queue->failures++;
+ ret = (*queue->cb)(queue, pkt);
+ goto done;
+ }
+
+ if (queue->head) {
+ FLOW_QUEUE_PKT_SETNEXT(queue->tail, pkt);
+ } else {
+ queue->head = pkt;
+ }
+
+ FLOW_QUEUE_PKT_SETNEXT(pkt, NULL);
+
+ queue->tail = pkt; /* at tail */
+
+ queue->len++;
+
+done:
+ return ret;
+}
+
+void * BCMFASTPATH /* Dequeue a packet from a flow ring's queue, from head */
+dhd_flow_queue_dequeue(dhd_pub_t *dhdp, flow_queue_t *queue)
+{
+ void * pkt;
+
+ ASSERT(queue != NULL);
+
+ pkt = queue->head; /* from head */
+
+ if (pkt == NULL) {
+ ASSERT((queue->len == 0) && (queue->tail == NULL));
+ goto done;
+ }
+
+ queue->head = FLOW_QUEUE_PKT_NEXT(pkt);
+ if (queue->head == NULL)
+ queue->tail = NULL;
+
+ queue->len--;
+
+	FLOW_QUEUE_PKT_SETNEXT(pkt, NULL); /* detach packet from queue */
+
+done:
+ return pkt;
+}
+
+void BCMFASTPATH /* Reinsert a dequeued packet back at the head */
+dhd_flow_queue_reinsert(dhd_pub_t *dhdp, flow_queue_t *queue, void *pkt)
+{
+ if (queue->head == NULL) {
+ queue->tail = pkt;
+ }
+
+ FLOW_QUEUE_PKT_SETNEXT(pkt, queue->head);
+ queue->head = pkt;
+ queue->len++;
+}
+
+
+/* Init Flow Ring specific data structures */
+int
+dhd_flow_rings_init(dhd_pub_t *dhdp, uint32 num_flow_rings)
+{
+ uint32 idx;
+ uint32 flow_ring_table_sz;
+ uint32 if_flow_lkup_sz;
+ void * flowid_allocator;
+ flow_ring_table_t *flow_ring_table;
+ if_flow_lkup_t *if_flow_lkup = NULL;
+ void *lock = NULL;
+ unsigned long flags;
+
+
+ DHD_INFO(("%s\n", __FUNCTION__));
+
+	/* Construct a 16-bit flowid allocator */
+ flowid_allocator = id16_map_init(dhdp->osh,
+ num_flow_rings - FLOW_RING_COMMON, FLOWID_RESERVED);
+ if (flowid_allocator == NULL) {
+ DHD_ERROR(("%s: flowid allocator init failure\n", __FUNCTION__));
+ return BCME_NOMEM;
+ }
+
+	/* Allocate a flow ring table comprising the requested number of rings */
+ flow_ring_table_sz = (num_flow_rings * sizeof(flow_ring_node_t));
+ flow_ring_table = (flow_ring_table_t *)MALLOC(dhdp->osh, flow_ring_table_sz);
+ if (flow_ring_table == NULL) {
+ DHD_ERROR(("%s: flow ring table alloc failure\n", __FUNCTION__));
+ goto fail;
+ }
+
+ /* Initialize flow ring table state */
+ bzero((uchar *)flow_ring_table, flow_ring_table_sz);
+ for (idx = 0; idx < num_flow_rings; idx++) {
+ flow_ring_table[idx].status = FLOW_RING_STATUS_CLOSED;
+ flow_ring_table[idx].flowid = (uint16)idx;
+ flow_ring_table[idx].lock = dhd_os_spin_lock_init(dhdp->osh);
+ if (flow_ring_table[idx].lock == NULL) {
+ DHD_ERROR(("%s: Failed to init spinlock for queue!\n", __FUNCTION__));
+ goto fail;
+ }
+
+ dll_init(&flow_ring_table[idx].list);
+
+ /* Initialize the per flow ring backup queue */
+ dhd_flow_queue_init(dhdp, &flow_ring_table[idx].queue,
+ FLOW_RING_QUEUE_THRESHOLD);
+ }
+
+ /* Allocate per interface hash table */
+ if_flow_lkup_sz = sizeof(if_flow_lkup_t) * DHD_MAX_IFS;
+ if_flow_lkup = (if_flow_lkup_t *)DHD_OS_PREALLOC(dhdp,
+ DHD_PREALLOC_IF_FLOW_LKUP, if_flow_lkup_sz);
+ if (if_flow_lkup == NULL) {
+ DHD_ERROR(("%s: if flow lkup alloc failure\n", __FUNCTION__));
+ goto fail;
+ }
+
+ /* Initialize per interface hash table */
+ bzero((uchar *)if_flow_lkup, if_flow_lkup_sz);
+ for (idx = 0; idx < DHD_MAX_IFS; idx++) {
+ int hash_ix;
+ if_flow_lkup[idx].status = 0;
+ if_flow_lkup[idx].role = 0;
+ for (hash_ix = 0; hash_ix < DHD_FLOWRING_HASH_SIZE; hash_ix++)
+ if_flow_lkup[idx].fl_hash[hash_ix] = NULL;
+ }
+
+ lock = dhd_os_spin_lock_init(dhdp->osh);
+ if (lock == NULL)
+ goto fail;
+
+ dhdp->flow_prio_map_type = DHD_FLOW_PRIO_AC_MAP;
+ bcopy(prio2ac, dhdp->flow_prio_map, sizeof(uint8) * NUMPRIO);
+
+ /* Now populate into dhd pub */
+ DHD_FLOWID_LOCK(lock, flags);
+ dhdp->num_flow_rings = num_flow_rings;
+ dhdp->flowid_allocator = (void *)flowid_allocator;
+ dhdp->flow_ring_table = (void *)flow_ring_table;
+ dhdp->if_flow_lkup = (void *)if_flow_lkup;
+ dhdp->flowid_lock = lock;
+ DHD_FLOWID_UNLOCK(lock, flags);
+
+ DHD_INFO(("%s done\n", __FUNCTION__));
+ return BCME_OK;
+
+fail:
+ if (lock != NULL)
+ dhd_os_spin_lock_deinit(dhdp->osh, lock);
+
+ /* Destruct the per interface flow lkup table */
+ if (dhdp->if_flow_lkup != NULL) {
+ DHD_OS_PREFREE(dhdp, if_flow_lkup, if_flow_lkup_sz);
+ }
+ if (flow_ring_table != NULL) {
+ for (idx = 0; idx < num_flow_rings; idx++) {
+ if (flow_ring_table[idx].lock != NULL)
+ dhd_os_spin_lock_deinit(dhdp->osh, flow_ring_table[idx].lock);
+ }
+ MFREE(dhdp->osh, flow_ring_table, flow_ring_table_sz);
+ }
+ id16_map_fini(dhdp->osh, flowid_allocator);
+
+ return BCME_NOMEM;
+}
+
+/* Deinit Flow Ring specific data structures */
+void dhd_flow_rings_deinit(dhd_pub_t *dhdp)
+{
+ uint16 idx;
+ uint32 flow_ring_table_sz;
+ uint32 if_flow_lkup_sz;
+ flow_ring_table_t *flow_ring_table;
+ unsigned long flags;
+ void *lock;
+
+ DHD_INFO(("dhd_flow_rings_deinit\n"));
+
+ if (dhdp->flow_ring_table != NULL) {
+
+ ASSERT(dhdp->num_flow_rings > 0);
+
+ DHD_FLOWID_LOCK(dhdp->flowid_lock, flags);
+ flow_ring_table = (flow_ring_table_t *)dhdp->flow_ring_table;
+ dhdp->flow_ring_table = NULL;
+ DHD_FLOWID_UNLOCK(dhdp->flowid_lock, flags);
+ for (idx = 0; idx < dhdp->num_flow_rings; idx++) {
+ if (flow_ring_table[idx].active) {
+ dhd_bus_clean_flow_ring(dhdp->bus, &flow_ring_table[idx]);
+ }
+ ASSERT(flow_queue_empty(&flow_ring_table[idx].queue));
+
+ /* Deinit flow ring queue locks before destroying flow ring table */
+ dhd_os_spin_lock_deinit(dhdp->osh, flow_ring_table[idx].lock);
+ flow_ring_table[idx].lock = NULL;
+ }
+
+ /* Destruct the flow ring table */
+ flow_ring_table_sz = dhdp->num_flow_rings * sizeof(flow_ring_table_t);
+ MFREE(dhdp->osh, flow_ring_table, flow_ring_table_sz);
+ }
+
+ DHD_FLOWID_LOCK(dhdp->flowid_lock, flags);
+
+ /* Destruct the per interface flow lkup table */
+ if (dhdp->if_flow_lkup != NULL) {
+ if_flow_lkup_sz = sizeof(if_flow_lkup_t) * DHD_MAX_IFS;
+		memset(dhdp->if_flow_lkup, 0, if_flow_lkup_sz);
+ DHD_OS_PREFREE(dhdp, dhdp->if_flow_lkup, if_flow_lkup_sz);
+ dhdp->if_flow_lkup = NULL;
+ }
+
+ /* Destruct the flowid allocator */
+ if (dhdp->flowid_allocator != NULL)
+ dhdp->flowid_allocator = id16_map_fini(dhdp->osh, dhdp->flowid_allocator);
+
+ dhdp->num_flow_rings = 0U;
+ lock = dhdp->flowid_lock;
+ dhdp->flowid_lock = NULL;
+
+ DHD_FLOWID_UNLOCK(lock, flags);
+ dhd_os_spin_lock_deinit(dhdp->osh, lock);
+}
+
+uint8
+dhd_flow_rings_ifindex2role(dhd_pub_t *dhdp, uint8 ifindex)
+{
+ if_flow_lkup_t *if_flow_lkup = (if_flow_lkup_t *)dhdp->if_flow_lkup;
+ ASSERT(if_flow_lkup);
+ return if_flow_lkup[ifindex].role;
+}
+
+#ifdef WLTDLS
+bool is_tdls_destination(dhd_pub_t *dhdp, uint8 *da)
+{
+ tdls_peer_node_t *cur = dhdp->peer_tbl.node;
+ while (cur != NULL) {
+ if (!memcmp(da, cur->addr, ETHER_ADDR_LEN)) {
+ return TRUE;
+ }
+ cur = cur->next;
+ }
+ return FALSE;
+}
+#endif /* WLTDLS */
+
+/* For a given interface, search the hash table for a matching flow */
+static INLINE uint16
+dhd_flowid_find(dhd_pub_t *dhdp, uint8 ifindex, uint8 prio, char *sa, char *da)
+{
+ int hash;
+ bool ismcast = FALSE;
+ flow_hash_info_t *cur;
+ if_flow_lkup_t *if_flow_lkup;
+ unsigned long flags;
+
+ DHD_FLOWID_LOCK(dhdp->flowid_lock, flags);
+ if_flow_lkup = (if_flow_lkup_t *)dhdp->if_flow_lkup;
+
+ if (DHD_IF_ROLE_STA(if_flow_lkup[ifindex].role)) {
+#ifdef WLTDLS
+ if (dhdp->peer_tbl.tdls_peer_count && !(ETHER_ISMULTI(da)) &&
+ is_tdls_destination(dhdp, da)) {
+ hash = DHD_FLOWRING_HASHINDEX(da, prio);
+ cur = if_flow_lkup[ifindex].fl_hash[hash];
+ while (cur != NULL) {
+ if (!memcmp(cur->flow_info.da, da, ETHER_ADDR_LEN)) {
+ DHD_FLOWID_UNLOCK(dhdp->flowid_lock, flags);
+ return cur->flowid;
+ }
+ cur = cur->next;
+ }
+ DHD_FLOWID_UNLOCK(dhdp->flowid_lock, flags);
+ return FLOWID_INVALID;
+ }
+#endif /* WLTDLS */
+ cur = if_flow_lkup[ifindex].fl_hash[prio];
+ if (cur) {
+ DHD_FLOWID_UNLOCK(dhdp->flowid_lock, flags);
+ return cur->flowid;
+ }
+
+ } else {
+
+ if (ETHER_ISMULTI(da)) {
+ ismcast = TRUE;
+ hash = 0;
+ } else {
+ hash = DHD_FLOWRING_HASHINDEX(da, prio);
+ }
+
+ cur = if_flow_lkup[ifindex].fl_hash[hash];
+
+ while (cur) {
+ if ((ismcast && ETHER_ISMULTI(cur->flow_info.da)) ||
+ (!memcmp(cur->flow_info.da, da, ETHER_ADDR_LEN) &&
+ (cur->flow_info.tid == prio))) {
+ DHD_FLOWID_UNLOCK(dhdp->flowid_lock, flags);
+ return cur->flowid;
+ }
+ cur = cur->next;
+ }
+ }
+ DHD_FLOWID_UNLOCK(dhdp->flowid_lock, flags);
+
+ return FLOWID_INVALID;
+}
+
+/* Allocate Flow ID */
+static INLINE uint16
+dhd_flowid_alloc(dhd_pub_t *dhdp, uint8 ifindex, uint8 prio, char *sa, char *da)
+{
+ flow_hash_info_t *fl_hash_node, *cur;
+ if_flow_lkup_t *if_flow_lkup;
+ int hash;
+ uint16 flowid;
+ unsigned long flags;
+
+ fl_hash_node = (flow_hash_info_t *) MALLOC(dhdp->osh, sizeof(flow_hash_info_t));
+ if (fl_hash_node == NULL) {
+ DHD_ERROR(("%s: fl_hash_node alloc failed \n", __FUNCTION__));
+ return FLOWID_INVALID;
+ }
+ memcpy(fl_hash_node->flow_info.da, da, sizeof(fl_hash_node->flow_info.da));
+
+ DHD_FLOWID_LOCK(dhdp->flowid_lock, flags);
+ ASSERT(dhdp->flowid_allocator != NULL);
+ flowid = id16_map_alloc(dhdp->flowid_allocator);
+ DHD_FLOWID_UNLOCK(dhdp->flowid_lock, flags);
+
+ if (flowid == FLOWID_INVALID) {
+ MFREE(dhdp->osh, fl_hash_node, sizeof(flow_hash_info_t));
+ DHD_ERROR(("%s: cannot get free flowid \n", __FUNCTION__));
+ return FLOWID_INVALID;
+ }
+
+ fl_hash_node->flowid = flowid;
+ fl_hash_node->flow_info.tid = prio;
+ fl_hash_node->flow_info.ifindex = ifindex;
+ fl_hash_node->next = NULL;
+
+ DHD_FLOWID_LOCK(dhdp->flowid_lock, flags);
+ if_flow_lkup = (if_flow_lkup_t *)dhdp->if_flow_lkup;
+ if (DHD_IF_ROLE_STA(if_flow_lkup[ifindex].role)) {
+ /* For STA non-TDLS destinations, allocate the entry based on prio only */
+#ifdef WLTDLS
+ if (dhdp->peer_tbl.tdls_peer_count &&
+ (is_tdls_destination(dhdp, da))) {
+ hash = DHD_FLOWRING_HASHINDEX(da, prio);
+ cur = if_flow_lkup[ifindex].fl_hash[hash];
+ if (cur) {
+ while (cur->next) {
+ cur = cur->next;
+ }
+ cur->next = fl_hash_node;
+ } else {
+ if_flow_lkup[ifindex].fl_hash[hash] = fl_hash_node;
+ }
+ } else
+#endif /* WLTDLS */
+ if_flow_lkup[ifindex].fl_hash[prio] = fl_hash_node;
+ } else {
+
+ /* For bcast/mcast assign the first slot in the interface */
+ hash = ETHER_ISMULTI(da) ? 0 : DHD_FLOWRING_HASHINDEX(da, prio);
+ cur = if_flow_lkup[ifindex].fl_hash[hash];
+ if (cur) {
+ while (cur->next) {
+ cur = cur->next;
+ }
+ cur->next = fl_hash_node;
+ } else
+ if_flow_lkup[ifindex].fl_hash[hash] = fl_hash_node;
+ }
+ DHD_FLOWID_UNLOCK(dhdp->flowid_lock, flags);
+
+ DHD_INFO(("%s: allocated flowid %d\n", __FUNCTION__, fl_hash_node->flowid));
+
+ return fl_hash_node->flowid;
+}
+
+/* Get flow ring ID, if not present try to create one */
+static INLINE int
+dhd_flowid_lookup(dhd_pub_t *dhdp, uint8 ifindex,
+ uint8 prio, char *sa, char *da, uint16 *flowid)
+{
+ uint16 id;
+ flow_ring_node_t *flow_ring_node;
+ flow_ring_table_t *flow_ring_table;
+ unsigned long flags;
+
+ DHD_INFO(("%s\n", __FUNCTION__));
+
+ if (!dhdp->flow_ring_table)
+ return BCME_ERROR;
+
+ flow_ring_table = (flow_ring_table_t *)dhdp->flow_ring_table;
+
+ id = dhd_flowid_find(dhdp, ifindex, prio, sa, da);
+
+ if (id == FLOWID_INVALID) {
+
+ if_flow_lkup_t *if_flow_lkup;
+ if_flow_lkup = (if_flow_lkup_t *)dhdp->if_flow_lkup;
+
+ if (!if_flow_lkup[ifindex].status)
+ return BCME_ERROR;
+
+ id = dhd_flowid_alloc(dhdp, ifindex, prio, sa, da);
+ if (id == FLOWID_INVALID) {
+ DHD_ERROR(("%s: flowid alloc failed, ifindex %u status %u\n",
+ __FUNCTION__, ifindex, if_flow_lkup[ifindex].status));
+ return BCME_ERROR;
+ }
+
+ /* register this flowid in dhd_pub */
+ dhd_add_flowid(dhdp, ifindex, prio, da, id);
+ }
+
+ ASSERT(id < dhdp->num_flow_rings);
+
+ flow_ring_node = (flow_ring_node_t *) &flow_ring_table[id];
+ DHD_FLOWRING_LOCK(flow_ring_node->lock, flags);
+ if (flow_ring_node->active) {
+ DHD_FLOWRING_UNLOCK(flow_ring_node->lock, flags);
+ *flowid = id;
+ return BCME_OK;
+ }
+
+ /* Init Flow info */
+ memcpy(flow_ring_node->flow_info.sa, sa, sizeof(flow_ring_node->flow_info.sa));
+ memcpy(flow_ring_node->flow_info.da, da, sizeof(flow_ring_node->flow_info.da));
+ flow_ring_node->flow_info.tid = prio;
+ flow_ring_node->flow_info.ifindex = ifindex;
+ flow_ring_node->active = TRUE;
+ flow_ring_node->status = FLOW_RING_STATUS_PENDING;
+ DHD_FLOWRING_UNLOCK(flow_ring_node->lock, flags);
+ DHD_FLOWID_LOCK(dhdp->flowid_lock, flags);
+ dll_prepend(&dhdp->bus->const_flowring, &flow_ring_node->list);
+ DHD_FLOWID_UNLOCK(dhdp->flowid_lock, flags);
+
+ /* Create and inform device about the new flow */
+ if (dhd_bus_flow_ring_create_request(dhdp->bus, (void *)flow_ring_node)
+ != BCME_OK) {
+ DHD_ERROR(("%s: create error %d\n", __FUNCTION__, id));
+ return BCME_ERROR;
+ }
+
+ *flowid = id;
+ return BCME_OK;
+}
+
+/* Update flowid information on the packet */
+int BCMFASTPATH
+dhd_flowid_update(dhd_pub_t *dhdp, uint8 ifindex, uint8 prio, void *pktbuf)
+{
+ uint8 *pktdata = (uint8 *)PKTDATA(dhdp->osh, pktbuf);
+ struct ether_header *eh = (struct ether_header *)pktdata;
+ uint16 flowid;
+
+ if (dhd_bus_is_txmode_push(dhdp->bus))
+ return BCME_OK;
+
+ ASSERT(ifindex < DHD_MAX_IFS);
+ if (ifindex >= DHD_MAX_IFS) {
+ return BCME_BADARG;
+ }
+
+ if (!dhdp->flowid_allocator) {
+ DHD_ERROR(("%s: Flow ring not initialized yet\n", __FUNCTION__));
+ return BCME_ERROR;
+ }
+ if (dhd_flowid_lookup(dhdp, ifindex, prio, eh->ether_shost, eh->ether_dhost,
+ &flowid) != BCME_OK) {
+ return BCME_ERROR;
+ }
+
+ DHD_INFO(("%s: prio %d flowid %d\n", __FUNCTION__, prio, flowid));
+
+ /* Tag the packet with flowid */
+ DHD_PKTTAG_SET_FLOWID((dhd_pkttag_fr_t *)PKTTAG(pktbuf), flowid);
+ return BCME_OK;
+}
+
+void
+dhd_flowid_free(dhd_pub_t *dhdp, uint8 ifindex, uint16 flowid)
+{
+ int hashix;
+ bool found = FALSE;
+ flow_hash_info_t *cur, *prev;
+ if_flow_lkup_t *if_flow_lkup;
+ unsigned long flags;
+
+ DHD_FLOWID_LOCK(dhdp->flowid_lock, flags);
+ if_flow_lkup = (if_flow_lkup_t *)dhdp->if_flow_lkup;
+
+ for (hashix = 0; hashix < DHD_FLOWRING_HASH_SIZE; hashix++) {
+
+ cur = if_flow_lkup[ifindex].fl_hash[hashix];
+
+ if (cur) {
+ prev = NULL;
+ while (cur) {
+ if (cur->flowid == flowid) {
+ found = TRUE;
+ break;
+ }
+ prev = cur;
+ cur = cur->next;
+ }
+ if (found) {
+ if (!prev) {
+ if_flow_lkup[ifindex].fl_hash[hashix] = cur->next;
+ } else {
+ prev->next = cur->next;
+ }
+
+ /* deregister flowid from dhd_pub. */
+ dhd_del_flowid(dhdp, ifindex, flowid);
+
+ id16_map_free(dhdp->flowid_allocator, flowid);
+ DHD_FLOWID_UNLOCK(dhdp->flowid_lock, flags);
+ MFREE(dhdp->osh, cur, sizeof(flow_hash_info_t));
+
+ return;
+ }
+ }
+ }
+
+
+ DHD_FLOWID_UNLOCK(dhdp->flowid_lock, flags);
+ DHD_ERROR(("%s: could not free flow ring hash entry flowid %d\n",
+ __FUNCTION__, flowid));
+}
+
+
+/* Delete all Flow rings associated with the given interface */
+void
+dhd_flow_rings_delete(dhd_pub_t *dhdp, uint8 ifindex)
+{
+ uint32 id;
+ flow_ring_table_t *flow_ring_table;
+
+ DHD_INFO(("%s: ifindex %u\n", __FUNCTION__, ifindex));
+
+ ASSERT(ifindex < DHD_MAX_IFS);
+ if (ifindex >= DHD_MAX_IFS)
+ return;
+
+ if (!dhdp->flow_ring_table)
+ return;
+
+ flow_ring_table = (flow_ring_table_t *)dhdp->flow_ring_table;
+ for (id = 0; id < dhdp->num_flow_rings; id++) {
+ if (flow_ring_table[id].active &&
+ (flow_ring_table[id].flow_info.ifindex == ifindex) &&
+ (flow_ring_table[id].status != FLOW_RING_STATUS_DELETE_PENDING)) {
+ DHD_INFO(("%s: deleting flowid %d\n",
+ __FUNCTION__, flow_ring_table[id].flowid));
+ dhd_bus_flow_ring_delete_request(dhdp->bus,
+ (void *) &flow_ring_table[id]);
+ }
+ }
+}
+
+/* Delete flow/s for given peer address */
+void
+dhd_flow_rings_delete_for_peer(dhd_pub_t *dhdp, uint8 ifindex, char *addr)
+{
+ uint32 id;
+ flow_ring_table_t *flow_ring_table;
+
+ DHD_ERROR(("%s: ifindex %u\n", __FUNCTION__, ifindex));
+
+ ASSERT(ifindex < DHD_MAX_IFS);
+ if (ifindex >= DHD_MAX_IFS)
+ return;
+
+ if (!dhdp->flow_ring_table)
+ return;
+
+ flow_ring_table = (flow_ring_table_t *)dhdp->flow_ring_table;
+ for (id = 0; id < dhdp->num_flow_rings; id++) {
+ if (flow_ring_table[id].active &&
+ (flow_ring_table[id].flow_info.ifindex == ifindex) &&
+ (!memcmp(flow_ring_table[id].flow_info.da, addr, ETHER_ADDR_LEN)) &&
+ (flow_ring_table[id].status != FLOW_RING_STATUS_DELETE_PENDING)) {
+ DHD_INFO(("%s: deleting flowid %d\n",
+ __FUNCTION__, flow_ring_table[id].flowid));
+ dhd_bus_flow_ring_delete_request(dhdp->bus,
+ (void *) &flow_ring_table[id]);
+ }
+ }
+}
+
+/* Handle Interface ADD, DEL operations */
+void
+dhd_update_interface_flow_info(dhd_pub_t *dhdp, uint8 ifindex,
+ uint8 op, uint8 role)
+{
+ if_flow_lkup_t *if_flow_lkup;
+ unsigned long flags;
+
+ ASSERT(ifindex < DHD_MAX_IFS);
+ if (ifindex >= DHD_MAX_IFS)
+ return;
+
+ DHD_INFO(("%s: ifindex %u op %u role is %u \n",
+ __FUNCTION__, ifindex, op, role));
+ if (!dhdp->flowid_allocator) {
+ DHD_ERROR(("%s: Flow ring not initialized yet\n", __FUNCTION__));
+ return;
+ }
+
+ DHD_FLOWID_LOCK(dhdp->flowid_lock, flags);
+ if_flow_lkup = (if_flow_lkup_t *)dhdp->if_flow_lkup;
+
+ if (op == WLC_E_IF_ADD || op == WLC_E_IF_CHANGE) {
+
+ if_flow_lkup[ifindex].role = role;
+
+ if (!(DHD_IF_ROLE_STA(role))) {
+ if_flow_lkup[ifindex].status = TRUE;
+ DHD_INFO(("%s: Mcast Flow ring for ifindex %d role is %d \n",
+ __FUNCTION__, ifindex, role));
+ /* Create Mcast Flow */
+ }
+ } else if (op == WLC_E_IF_DEL) {
+ if_flow_lkup[ifindex].status = FALSE;
+ DHD_INFO(("%s: cleanup all Flow rings for ifindex %d role is %d \n",
+ __FUNCTION__, ifindex, role));
+ }
+ DHD_FLOWID_UNLOCK(dhdp->flowid_lock, flags);
+}
+
+/* Handle a STA interface link status update */
+int
+dhd_update_interface_link_status(dhd_pub_t *dhdp, uint8 ifindex, uint8 status)
+{
+ if_flow_lkup_t *if_flow_lkup;
+ unsigned long flags;
+
+ ASSERT(ifindex < DHD_MAX_IFS);
+ if (ifindex >= DHD_MAX_IFS)
+ return BCME_BADARG;
+
+ DHD_INFO(("%s: ifindex %d status %d\n", __FUNCTION__, ifindex, status));
+
+ DHD_FLOWID_LOCK(dhdp->flowid_lock, flags);
+ if_flow_lkup = (if_flow_lkup_t *)dhdp->if_flow_lkup;
+
+ if (DHD_IF_ROLE_STA(if_flow_lkup[ifindex].role)) {
+ if (status)
+ if_flow_lkup[ifindex].status = TRUE;
+ else
+ if_flow_lkup[ifindex].status = FALSE;
+ }
+ DHD_FLOWID_UNLOCK(dhdp->flowid_lock, flags);
+
+ return BCME_OK;
+}
+/* Update flow priority mapping */
+int dhd_update_flow_prio_map(dhd_pub_t *dhdp, uint8 map)
+{
+ uint16 flowid;
+ flow_ring_node_t *flow_ring_node;
+
+ if (map > DHD_FLOW_PRIO_LLR_MAP)
+ return BCME_BADOPTION;
+
+ /* Check if we need to change prio map */
+ if (map == dhdp->flow_prio_map_type)
+ return BCME_OK;
+
+ /* If any ring is active we cannot change priority mapping for flow rings */
+ for (flowid = 0; flowid < dhdp->num_flow_rings; flowid++) {
+ flow_ring_node = DHD_FLOW_RING(dhdp, flowid);
+ if (flow_ring_node->active)
+ return BCME_EPERM;
+ }
+ /* Inform firmware about the new mapping type */
+ if (BCME_OK != dhd_flow_prio_map(dhdp, &map, TRUE))
+ return BCME_ERROR;
+
+ /* update internal structures */
+ dhdp->flow_prio_map_type = map;
+ if (dhdp->flow_prio_map_type == DHD_FLOW_PRIO_TID_MAP)
+ bcopy(prio2tid, dhdp->flow_prio_map, sizeof(uint8) * NUMPRIO);
+ else
+ bcopy(prio2ac, dhdp->flow_prio_map, sizeof(uint8) * NUMPRIO);
+
+ return BCME_OK;
+}
+
+/* Set/Get flow ring priority map */
+int dhd_flow_prio_map(dhd_pub_t *dhd, uint8 *map, bool set)
+{
+ uint8 iovbuf[24];
+ if (!set) {
+ bcm_mkiovar("bus:fl_prio_map", NULL, 0, (char*)iovbuf, sizeof(iovbuf));
+ if (dhd_wl_ioctl_cmd(dhd, WLC_GET_VAR, iovbuf, sizeof(iovbuf), FALSE, 0) < 0) {
+ DHD_ERROR(("%s: failed to get fl_prio_map\n", __FUNCTION__));
+ return BCME_ERROR;
+ }
+ *map = iovbuf[0];
+ return BCME_OK;
+ }
+ bcm_mkiovar("bus:fl_prio_map", (char *)map, 4, (char*)iovbuf, sizeof(iovbuf));
+ if (dhd_wl_ioctl_cmd(dhd, WLC_SET_VAR, iovbuf, sizeof(iovbuf), TRUE, 0) < 0) {
+ DHD_ERROR(("%s: failed to set fl_prio_map \n",
+ __FUNCTION__));
+ return BCME_ERROR;
+ }
+ return BCME_OK;
+}
diff --git a/drivers/net/wireless/bcmdhd/dhd_flowring.h b/drivers/net/wireless/bcmdhd/dhd_flowring.h
new file mode 100644
index 0000000..211a0a1
--- /dev/null
+++ b/drivers/net/wireless/bcmdhd/dhd_flowring.h
@@ -0,0 +1,176 @@
+/*
+ * Header file describing the DHD flow ring interfaces.
+ *
+ * Provides type definitions and function prototypes used to create, delete,
+ * and manage flow rings at a high level.
+ *
+ * Copyright (C) 1999-2014, Broadcom Corporation
+ *
+ * Unless you and Broadcom execute a separate written software license
+ * agreement governing use of this software, this software is licensed to you
+ * under the terms of the GNU General Public License version 2 (the "GPL"),
+ * available at http://www.broadcom.com/licenses/GPLv2.php, with the
+ * following added to such license:
+ *
+ * As a special exception, the copyright holders of this software give you
+ * permission to link this software with independent modules, and to copy and
+ * distribute the resulting executable under terms of your choice, provided that
+ * you also meet, for each linked independent module, the terms and conditions of
+ * the license of that module. An independent module is a module which is not
+ * derived from this software. The special exception does not apply to any
+ * modifications of the software.
+ *
+ * Notwithstanding the above, under no circumstances may you combine this
+ * software in any way with any other Broadcom software provided under a license
+ * other than the GPL, without Broadcom's express prior written consent.
+ *
+ * $Id: dhd_flowrings.h jaganlv $
+ */
+
+/****************
+ * Common types *
+ */
+
+#ifndef _dhd_flowrings_h_
+#define _dhd_flowrings_h_
+
+/* Max pkts held in a flow ring's backup queue */
+#define FLOW_RING_QUEUE_THRESHOLD (2048)
+
+/* Number of H2D common rings : PCIE Spec Rev? */
+#define FLOW_RING_COMMON 2
+
+#define FLOWID_INVALID (ID16_INVALID)
+#define FLOWID_RESERVED (FLOW_RING_COMMON)
+
+#define FLOW_RING_STATUS_OPEN 0
+#define FLOW_RING_STATUS_PENDING 1
+#define FLOW_RING_STATUS_CLOSED 2
+#define FLOW_RING_STATUS_DELETE_PENDING 3
+#define FLOW_RING_STATUS_FLUSH_PENDING 4
+
+#define DHD_FLOWRING_RX_BUFPOST_PKTSZ 2048
+
+#define DHD_FLOW_PRIO_AC_MAP 0
+#define DHD_FLOW_PRIO_TID_MAP 1
+#define DHD_FLOW_PRIO_LLR_MAP 2
+
+
+/* Pkttag not compatible with PROP_TXSTATUS or WLFC */
+typedef struct dhd_pkttag_fr {
+ uint16 flowid;
+ int dataoff;
+} dhd_pkttag_fr_t;
+
+#define DHD_PKTTAG_SET_FLOWID(tag, flow) ((tag)->flowid = (uint16)(flow))
+#define DHD_PKTTAG_SET_DATAOFF(tag, offset) ((tag)->dataoff = (int)(offset))
+
+#define DHD_PKTTAG_FLOWID(tag) ((tag)->flowid)
+#define DHD_PKTTAG_DATAOFF(tag) ((tag)->dataoff)
+
+/* Hash a MAC address for lookup in a per-interface flow hash table */
+#define DHD_FLOWRING_HASH_SIZE 256
+#define DHD_FLOWRING_HASHINDEX(ea, prio) \
+ ((((uint8 *)(ea))[3] ^ ((uint8 *)(ea))[4] ^ ((uint8 *)(ea))[5] ^ ((uint8)(prio))) \
+ % DHD_FLOWRING_HASH_SIZE)
+
+#define DHD_IF_ROLE(pub, idx) (((if_flow_lkup_t *)(pub)->if_flow_lkup)[idx].role)
+#define DHD_IF_ROLE_AP(pub, idx) (DHD_IF_ROLE(pub, idx) == WLC_E_IF_ROLE_AP)
+#define DHD_IF_ROLE_P2PGO(pub, idx) (DHD_IF_ROLE(pub, idx) == WLC_E_IF_ROLE_P2P_GO)
+#define DHD_FLOW_RING(dhdp, flowid) \
+ (flow_ring_node_t *)&(((flow_ring_node_t *)((dhdp)->flow_ring_table))[flowid])
+
+struct flow_queue;
+
+/* Flow Ring Queue Enqueue overflow callback */
+typedef int (*flow_queue_cb_t)(struct flow_queue * queue, void * pkt);
+
+typedef struct flow_queue {
+ dll_t list; /* manage a flowring queue in a dll */
+ void * head; /* first packet in the queue */
+ void * tail; /* last packet in the queue */
+ uint16 len; /* number of packets in the queue */
+ uint16 max; /* maximum number of packets the queue may hold */
+ uint32 failures; /* enqueue failures due to queue overflow */
+ flow_queue_cb_t cb; /* callback invoked on threshold crossing */
+} flow_queue_t;
+
+#define flow_queue_len(queue) ((int)(queue)->len)
+#define flow_queue_max(queue) ((int)(queue)->max)
+#define flow_queue_avail(queue) ((int)((queue)->max - (queue)->len))
+#define flow_queue_full(queue) ((queue)->len >= (queue)->max)
+#define flow_queue_empty(queue) ((queue)->len == 0)
+
+typedef struct flow_info {
+ uint8 tid;
+ uint8 ifindex;
+ char sa[ETHER_ADDR_LEN];
+ char da[ETHER_ADDR_LEN];
+} flow_info_t;
+
+typedef struct flow_ring_node {
+ dll_t list; /* manage a constructed flowring in a dll, must be at first place */
+ flow_queue_t queue;
+ bool active;
+ uint8 status;
+ uint16 flowid;
+ flow_info_t flow_info;
+ void *prot_info;
+ void *lock; /* lock for flowring access protection */
+} flow_ring_node_t;
+typedef flow_ring_node_t flow_ring_table_t;
+
+typedef struct flow_hash_info {
+ uint16 flowid;
+ flow_info_t flow_info;
+ struct flow_hash_info *next;
+} flow_hash_info_t;
+
+typedef struct if_flow_lkup {
+ bool status;
+ uint8 role; /* Interface role: STA/AP */
+ flow_hash_info_t *fl_hash[DHD_FLOWRING_HASH_SIZE]; /* Lkup Hash table */
+} if_flow_lkup_t;
+
+static INLINE flow_ring_node_t *
+dhd_constlist_to_flowring(dll_t *item)
+{
+ return ((flow_ring_node_t *)item);
+}
+
+/* Exported API */
+
+/* Flow ring's queue management functions */
+extern void dhd_flow_queue_init(dhd_pub_t *dhdp, flow_queue_t *queue, int max);
+extern void dhd_flow_queue_register(flow_queue_t *queue, flow_queue_cb_t cb);
+extern int dhd_flow_queue_enqueue(dhd_pub_t *dhdp, flow_queue_t *queue, void *pkt);
+extern void * dhd_flow_queue_dequeue(dhd_pub_t *dhdp, flow_queue_t *queue);
+extern void dhd_flow_queue_reinsert(dhd_pub_t *dhdp, flow_queue_t *queue, void *pkt);
+
+extern int dhd_flow_rings_init(dhd_pub_t *dhdp, uint32 num_flow_rings);
+
+extern void dhd_flow_rings_deinit(dhd_pub_t *dhdp);
+
+extern int dhd_flowid_update(dhd_pub_t *dhdp, uint8 ifindex, uint8 prio,
+ void *pktbuf);
+
+extern void dhd_flowid_free(dhd_pub_t *dhdp, uint8 ifindex, uint16 flowid);
+
+extern void dhd_flow_rings_delete(dhd_pub_t *dhdp, uint8 ifindex);
+
+extern void dhd_flow_rings_delete_for_peer(dhd_pub_t *dhdp, uint8 ifindex,
+ char *addr);
+
+/* Handle Interface ADD, DEL operations */
+extern void dhd_update_interface_flow_info(dhd_pub_t *dhdp, uint8 ifindex,
+ uint8 op, uint8 role);
+
+/* Handle a STA interface link status update */
+extern int dhd_update_interface_link_status(dhd_pub_t *dhdp, uint8 ifindex,
+ uint8 status);
+extern int dhd_flow_prio_map(dhd_pub_t *dhd, uint8 *map, bool set);
+extern int dhd_update_flow_prio_map(dhd_pub_t *dhdp, uint8 map);
+
+extern uint8 dhd_flow_rings_ifindex2role(dhd_pub_t *dhdp, uint8 ifindex);
+#endif /* _dhd_flowrings_h_ */
diff --git a/drivers/net/wireless/bcmdhd/dhd_ip.c b/drivers/net/wireless/bcmdhd/dhd_ip.c
old mode 100755
new mode 100644
index 3db2ed8..55657c3
--- a/drivers/net/wireless/bcmdhd/dhd_ip.c
+++ b/drivers/net/wireless/bcmdhd/dhd_ip.c
@@ -2,13 +2,13 @@
* IP Packet Parser Module.
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -16,12 +16,12 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: dhd_ip.c 457995 2014-02-25 13:53:31Z $
+ * $Id: dhd_ip.c 468932 2014-04-09 06:58:15Z $
*/
#include <typedefs.h>
#include <osl.h>
@@ -31,13 +31,14 @@
#include <proto/802.3.h>
#include <proto/bcmip.h>
#include <bcmendian.h>
-
+#include <bcmutils.h>
#include <dhd_dbg.h>
#include <dhd_ip.h>
#ifdef DHDTCPACK_SUPPRESS
#include <dhd_bus.h>
+#include <dhd_proto.h>
#include <proto/bcmtcp.h>
#endif /* DHDTCPACK_SUPPRESS */
@@ -115,6 +116,68 @@
}
}
+bool pkt_is_dhcp(osl_t *osh, void *p)
+{
+ uint8 *frame;
+ int length;
+ uint8 *pt; /* Pointer to type field */
+ uint16 ethertype;
+ struct ipv4_hdr *iph; /* IP frame pointer */
+ int ipl; /* IP frame length */
+ uint16 src_port;
+
+ frame = PKTDATA(osh, p);
+ length = PKTLEN(osh, p);
+
+ /* Process Ethernet II or SNAP-encapsulated 802.3 frames */
+ if (length < ETHER_HDR_LEN) {
+ DHD_INFO(("%s: short eth frame (%d)\n", __FUNCTION__, length));
+ return FALSE;
+ } else if (ntoh16(*(uint16 *)(frame + ETHER_TYPE_OFFSET)) >= ETHER_TYPE_MIN) {
+ /* Frame is Ethernet II */
+ pt = frame + ETHER_TYPE_OFFSET;
+ } else if (length >= ETHER_HDR_LEN + SNAP_HDR_LEN + ETHER_TYPE_LEN &&
+ !bcmp(llc_snap_hdr, frame + ETHER_HDR_LEN, SNAP_HDR_LEN)) {
+ pt = frame + ETHER_HDR_LEN + SNAP_HDR_LEN;
+ } else {
+ DHD_INFO(("%s: non-SNAP 802.3 frame\n", __FUNCTION__));
+ return FALSE;
+ }
+
+ ethertype = ntoh16(*(uint16 *)pt);
+
+ /* Skip VLAN tag, if any */
+ if (ethertype == ETHER_TYPE_8021Q) {
+ pt += VLAN_TAG_LEN;
+
+ if (pt + ETHER_TYPE_LEN > frame + length) {
+ DHD_INFO(("%s: short VLAN frame (%d)\n", __FUNCTION__, length));
+ return FALSE;
+ }
+
+ ethertype = ntoh16(*(uint16 *)pt);
+ }
+
+ if (ethertype != ETHER_TYPE_IP) {
+ DHD_INFO(("%s: non-IP frame (ethertype 0x%x, length %d)\n",
+ __FUNCTION__, ethertype, length));
+ return FALSE;
+ }
+
+ iph = (struct ipv4_hdr *)(pt + ETHER_TYPE_LEN);
+ ipl = (uint)(length - (pt + ETHER_TYPE_LEN - frame));
+
+ /* We support IPv4 only */
+ if ((ipl < (IPV4_OPTIONS_OFFSET + 2)) || (IP_VER(iph) != IP_VER_4)) {
+ DHD_INFO(("%s: short frame (%d) or non-IPv4\n", __FUNCTION__, ipl));
+ return FALSE;
+ }
+
+ src_port = ntoh16(*(uint16 *)(pt + ETHER_TYPE_LEN + IPV4_OPTIONS_OFFSET));
+
+ return (src_port == 0x43 || src_port == 0x44);
+}
+
#ifdef DHDTCPACK_SUPPRESS
typedef struct {
@@ -286,7 +349,11 @@
goto exit;
}
- if (mode > TCPACK_SUP_DELAYTX) {
+ if (mode >= TCPACK_SUP_LAST_MODE ||
+#ifndef BCMSDIO
+ mode == TCPACK_SUP_DELAYTX ||
+#endif
+ FALSE) {
DHD_ERROR(("%s %d: Invalid mode %d\n", __FUNCTION__, __LINE__, mode));
ret = BCME_BADARG;
goto exit;
@@ -374,7 +441,6 @@
tcpack_sup_module_t *tcpack_sup_mod;
tcpack_info_t *tcpack_info_tbl;
int tbl_cnt;
- uint pushed_len;
int ret = BCME_OK;
void *pdata;
uint32 pktlen;
@@ -383,10 +449,7 @@
goto exit;
pdata = PKTDATA(dhdp->osh, pkt);
-
- /* Length of BDC(+WLFC) headers pushed */
- pushed_len = BDC_HEADER_LEN + (((struct bdc_header *)pdata)->dataOffset * 4);
- pktlen = PKTLEN(dhdp->osh, pkt) - pushed_len;
+ pktlen = PKTLEN(dhdp->osh, pkt) - dhd_prot_hdrlen(dhdp, pdata);
if (pktlen < TCPACKSZMIN || pktlen > TCPACKSZMAX) {
DHD_TRACE(("%s %d: Too short or long length %d to be TCP ACK\n",
@@ -679,10 +742,16 @@
tack_tbl.cnt[2]++;
#endif /* DEBUG_COUNTER && DHDTCPACK_SUP_DBG */
ret = TRUE;
- } else
- DHD_TRACE(("%s %d: lenth mismatch %d != %d || %d != %d\n",
- __FUNCTION__, __LINE__, new_ip_hdr_len, old_ip_hdr_len,
- new_tcp_hdr_len, old_tcp_hdr_len));
+ } else {
+#if defined(DEBUG_COUNTER) && defined(DHDTCPACK_SUP_DBG)
+ tack_tbl.cnt[6]++;
+#endif /* DEBUG_COUNTER && DHDTCPACK_SUP_DBG */
+ DHD_TRACE(("%s %d: length mismatch %d != %d || %d != %d"
+ " ACK %u -> %u\n", __FUNCTION__, __LINE__,
+ new_ip_hdr_len, old_ip_hdr_len,
+ new_tcp_hdr_len, old_tcp_hdr_len,
+ old_tcpack_num, new_tcp_ack_num));
+ }
} else if (new_tcp_ack_num == old_tcpack_num) {
set_dotxinrx = TRUE;
/* TCPACK retransmission */
@@ -865,7 +934,7 @@
bcopy(last_tdata_info, tdata_info_tmp, sizeof(tcpdata_info_t));
}
bzero(last_tdata_info, sizeof(tcpdata_info_t));
- DHD_ERROR(("%s %d: tcpdata_info(idx %d) is aged out. ttl cnt is now %d\n",
+ DHD_TRACE(("%s %d: tcpdata_info(idx %d) is aged out. ttl cnt is now %d\n",
__FUNCTION__, __LINE__, i, tcpack_sup_mod->tcpdata_info_cnt));
/* Don't increase "i" here, so that the prev last tcpdata_info is checked */
} else
@@ -894,7 +963,7 @@
/* No TCP flow with the same IP addr and TCP port is found
* in tcp_data_info_tbl. So add this flow to the table.
*/
- DHD_ERROR(("%s %d: Add data info to tbl[%d]: IP addr "IPV4_ADDR_STR" "IPV4_ADDR_STR
+ DHD_TRACE(("%s %d: Add data info to tbl[%d]: IP addr "IPV4_ADDR_STR" "IPV4_ADDR_STR
" TCP port %d %d\n",
__FUNCTION__, __LINE__, tcpack_sup_mod->tcpdata_info_cnt,
IPV4_ADDR_TO_STR(ntoh32_ua(&ip_hdr[IPV4_SRC_IP_OFFSET])),
diff --git a/drivers/net/wireless/bcmdhd/dhd_ip.h b/drivers/net/wireless/bcmdhd/dhd_ip.h
old mode 100755
new mode 100644
index 328329e..835046c
--- a/drivers/net/wireless/bcmdhd/dhd_ip.h
+++ b/drivers/net/wireless/bcmdhd/dhd_ip.h
@@ -4,13 +4,13 @@
* Provides type definitions and function prototypes used to parse ip packet.
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -18,12 +18,12 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: dhd_ip.h 457888 2014-02-25 03:34:39Z $
+ * $Id: dhd_ip.h 458522 2014-02-27 02:26:15Z $
*/
#ifndef _dhd_ip_h_
@@ -44,6 +44,7 @@
} pkt_frag_t;
extern pkt_frag_t pkt_frag_info(osl_t *osh, void *p);
+extern bool pkt_is_dhcp(osl_t *osh, void *p);
#ifdef DHDTCPACK_SUPPRESS
#define TCPACKSZMIN (ETHER_HDR_LEN + IPV4_MIN_HEADER_LEN + TCP_MIN_HEADER_LEN)
@@ -63,6 +64,7 @@
extern bool dhd_tcpack_suppress(dhd_pub_t *dhdp, void *pkt);
extern bool dhd_tcpdata_info_get(dhd_pub_t *dhdp, void *pkt);
+/* #define DHDTCPACK_SUP_DBG */
#if defined(DEBUG_COUNTER) && defined(DHDTCPACK_SUP_DBG)
extern counter_tbl_t tack_tbl;
#endif /* DEBUG_COUNTER && DHDTCPACK_SUP_DBG */
diff --git a/drivers/net/wireless/bcmdhd/dhd_linux.c b/drivers/net/wireless/bcmdhd/dhd_linux.c
old mode 100755
new mode 100644
index a50b365..4b57190
--- a/drivers/net/wireless/bcmdhd/dhd_linux.c
+++ b/drivers/net/wireless/bcmdhd/dhd_linux.c
@@ -3,13 +3,13 @@
* Basically selected code segments from usb-cdc.c and usb-rndis.c
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -17,17 +17,22 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: dhd_linux.c 457888 2014-02-25 03:34:39Z $
+ * $Id: dhd_linux.c 477711 2014-05-14 08:45:17Z $
*/
#include <typedefs.h>
#include <linuxver.h>
#include <osl.h>
+#ifdef SHOW_LOGTRACE
+#include <linux/syscalls.h>
+#include <event_log.h>
+#endif /* SHOW_LOGTRACE */
+
#include <linux/init.h>
#include <linux/kernel.h>
@@ -43,9 +48,12 @@
#include <linux/fcntl.h>
#include <linux/fs.h>
#include <linux/ip.h>
-#include <linux/compat.h>
+#include <linux/reboot.h>
+#include <linux/notifier.h>
#include <net/addrconf.h>
+#ifdef ENABLE_ADAPTIVE_SCHED
#include <linux/cpufreq.h>
+#endif /* ENABLE_ADAPTIVE_SCHED */
#include <asm/uaccess.h>
#include <asm/unaligned.h>
@@ -57,13 +65,25 @@
#include <proto/ethernet.h>
#include <proto/bcmevent.h>
+#include <proto/vlan.h>
+#include <proto/bcmudp.h>
+#include <proto/bcmdhcp.h>
+#ifdef DHD_L2_FILTER
+#include <proto/bcmicmp.h>
+#endif
+#include <proto/802.3.h>
+
#include <dngl_stats.h>
#include <dhd_linux_wq.h>
#include <dhd.h>
#include <dhd_linux.h>
+#ifdef PCIE_FULL_DONGLE
+#include <dhd_flowring.h>
+#endif
#include <dhd_bus.h>
#include <dhd_proto.h>
#include <dhd_dbg.h>
+#include <dhd_debug.h>
#ifdef CONFIG_HAS_WAKELOCK
#include <linux/wakelock.h>
#endif
@@ -73,6 +93,17 @@
#ifdef PNO_SUPPORT
#include <dhd_pno.h>
#endif
+#ifdef RTT_SUPPORT
+#include <dhd_rtt.h>
+#endif
+
+#ifdef CONFIG_COMPAT
+#include <linux/compat.h>
+#endif
+
+#ifdef DHD_WMF
+#include <dhd_wmf_linux.h>
+#endif /* DHD_WMF */
#ifdef DHDTCPACK_SUPPRESS
#include <dhd_ip.h>
@@ -112,6 +143,12 @@
extern bool ap_fw_loaded;
#endif
+#ifdef SET_RANDOM_MAC_SOFTAP
+#ifndef CONFIG_DHD_SET_RANDOM_MAC_VAL
+#define CONFIG_DHD_SET_RANDOM_MAC_VAL 0x001A11
+#endif
+static u32 vendor_oui = CONFIG_DHD_SET_RANDOM_MAC_VAL;
+#endif
#ifdef ENABLE_ADAPTIVE_SCHED
#define DEFAULT_CPUFREQ_THRESH 1000000 /* threshold frequency : 1000000 = 1GHz */
@@ -134,6 +171,13 @@
#include <wl_android.h>
+/* Maximum STA per radio */
+#define DHD_MAX_STA 32
+
+
+const uint8 wme_fifo2ac[] = { 0, 1, 2, 3, 1, 1 };
+const uint8 prio2fifo[8] = { 1, 0, 0, 1, 2, 2, 3, 3 };
+#define WME_PRIO2AC(prio) wme_fifo2ac[prio2fifo[(prio)]]
#ifdef ARP_OFFLOAD_SUPPORT
void aoe_update_host_ipv4_table(dhd_pub_t *dhd_pub, u32 ipa, bool add, int idx);
@@ -148,6 +192,7 @@
static bool dhd_inetaddr_notifier_registered = FALSE;
#endif /* ARP_OFFLOAD_SUPPORT */
+#ifdef CONFIG_IPV6
static int dhd_inet6addr_notifier_call(struct notifier_block *this,
unsigned long event, void *ptr);
static struct notifier_block dhd_inet6addr_notifier = {
@@ -157,6 +202,7 @@
* created in kernel notifier link list (with 'next' pointing to itself)
*/
static bool dhd_inet6addr_notifier_registered = FALSE;
+#endif
#if (LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 27)) && defined(CONFIG_PM_SLEEP)
#include <linux/suspend.h>
@@ -166,10 +212,10 @@
#if defined(OOB_INTR_ONLY)
extern void dhd_enable_oob_intr(struct dhd_bus *bus, bool enable);
-#endif
+#endif
#if (LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 27))
static void dhd_hang_process(void *dhd_info, void *event_data, u8 event);
-#endif
+#endif
#if (LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 0))
MODULE_LICENSE("GPL v2");
#endif /* LinuxVer */
@@ -210,7 +256,6 @@
#include <linux/earlysuspend.h>
#endif /* defined(CONFIG_HAS_EARLYSUSPEND) && defined(DHD_USE_EARLYSUSPEND) */
-extern int dhd_get_suspend_bcn_li_dtim(dhd_pub_t *dhd);
#ifdef PKT_FILTER_SUPPORT
extern void dhd_pktfilter_offload_set(dhd_pub_t * dhd, char *arg);
@@ -218,7 +263,6 @@
extern void dhd_pktfilter_offload_delete(dhd_pub_t *dhd, int id);
#endif
-
#ifdef READ_MACADDR
extern int dhd_read_macaddr(struct dhd_info *dhd);
#else
@@ -231,6 +275,18 @@
#endif
+
+static int dhd_reboot_callback(struct notifier_block *this, unsigned long code, void *unused);
+static struct notifier_block dhd_reboot_notifier = {
+ .notifier_call = dhd_reboot_callback,
+ .priority = 1,
+};
+
+typedef struct dhd_dump {
+ uint8 *buf;
+ int bufsize;
+} dhd_dump_t;
+
typedef struct dhd_if_event {
struct list_head list;
wl_event_data_if_t event;
@@ -243,16 +299,26 @@
struct dhd_info *info; /* back pointer to dhd_info */
/* OS/stack specifics */
struct net_device *net;
- struct net_device_stats stats;
- int idx; /* iface idx in dongle */
- uint subunit; /* subunit */
+ int idx; /* iface idx in dongle */
+ uint subunit; /* subunit */
uint8 mac_addr[ETHER_ADDR_LEN]; /* assigned MAC address */
+ bool set_macaddress;
+ bool set_multicast;
+ uint8 bssidx; /* bsscfg index for the interface */
bool attached; /* Delayed attachment when unset */
bool txflowcontrol; /* Per interface flow control indicator */
char name[IFNAMSIZ+1]; /* linux interface name */
- uint8 bssidx; /* bsscfg index for the interface */
- bool set_macaddress;
- bool set_multicast;
+ struct net_device_stats stats;
+#ifdef DHD_WMF
+ dhd_wmf_t wmf; /* per bsscfg wmf setting */
+#endif /* DHD_WMF */
+#ifdef PCIE_FULL_DONGLE
+ struct list_head sta_list; /* sll of associated stations */
+#if !defined(BCM_GMAC3)
+ spinlock_t sta_list_lock; /* lock for manipulating sll */
+#endif /* ! BCM_GMAC3 */
+#endif /* PCIE_FULL_DONGLE */
+ uint32 ap_isolate; /* ap-isolation settings */
} dhd_if_t;
#ifdef WLMEDIA_HTSF
@@ -289,20 +355,24 @@
unsigned long event;
};
+/* When Perimeter locks are deployed, any blocking calls must be preceded
+ * with a PERIM UNLOCK and followed by a PERIM LOCK.
+ * Examples of blocking calls are: schedule_timeout(), down_interruptible(),
+ * wait_event_timeout().
+ */
+
/* Local private structure (extension of pub) */
typedef struct dhd_info {
#if defined(WL_WIRELESS_EXT)
wl_iw_t iw; /* wireless extensions state (must be first) */
#endif /* defined(WL_WIRELESS_EXT) */
-
dhd_pub_t pub;
+ dhd_if_t *iflist[DHD_MAX_IFS]; /* for supporting multiple interfaces */
+
void *adapter; /* adapter information, interrupt, fw path etc. */
char fw_path[PATH_MAX]; /* path to firmware image */
char nv_path[PATH_MAX]; /* path to nvram vars file */
- /* For supporting multiple interfaces */
- dhd_if_t *iflist[DHD_MAX_IFS];
-
struct semaphore proto_sem;
#ifdef PROP_TXSTATUS
spinlock_t wlfc_spinlock;
@@ -312,6 +382,8 @@
htsf_t htsf;
#endif
wait_queue_head_t ioctl_resp_wait;
+ wait_queue_head_t d3ack_wait;
+
uint32 default_wd_interval;
struct timer_list timer;
@@ -346,16 +418,19 @@
#endif
spinlock_t wakelock_spinlock;
uint32 wakelock_counter;
- bool waive_wakelock;
- uint32 wakelock_before_waive;
int wakelock_wd_counter;
int wakelock_rx_timeout_enable;
int wakelock_ctrl_timeout_enable;
+ bool waive_wakelock;
+ uint32 wakelock_before_waive;
/* Thread to issue ioctl for multicast */
wait_queue_head_t ctrl_wait;
atomic_t pend_8021x_cnt;
dhd_attach_states_t dhd_state;
+#ifdef SHOW_LOGTRACE
+ dhd_event_log_t event_data;
+#endif /* SHOW_LOGTRACE */
#if defined(CONFIG_HAS_EARLYSUSPEND) && defined(DHD_USE_EARLYSUSPEND)
struct early_suspend early_suspend;
@@ -381,8 +456,14 @@
#endif
unsigned int unit;
struct notifier_block pm_notifier;
+#ifdef SAR_SUPPORT
+ struct notifier_block sar_notifier;
+ s32 sar_enable;
+#endif
} dhd_info_t;
+#define DHDIF_FWDER(dhdif) FALSE
+
/* Flag to indicate if we should download firmware on driver load */
uint dhd_download_fw_on_driverload = TRUE;
@@ -392,6 +473,10 @@
char firmware_path[MOD_PARAM_PATHLEN];
char nvram_path[MOD_PARAM_PATHLEN];
+/* backup buffer for firmware and nvram path */
+char fw_bak_path[MOD_PARAM_PATHLEN];
+char nv_bak_path[MOD_PARAM_PATHLEN];
+
/* information string to keep firmware, chip, chip revision version info visible from log */
char info_string[MOD_PARAM_INFOLEN];
module_param_string(info_string, info_string, MOD_PARAM_INFOLEN, 0444);
@@ -408,7 +493,13 @@
static void dhd_ifdel_event_handler(void *handle, void *event_info, u8 event);
static void dhd_set_mac_addr_handler(void *handle, void *event_info, u8 event);
static void dhd_set_mcast_list_handler(void *handle, void *event_info, u8 event);
+#ifdef CONFIG_IPV6
static void dhd_inet6_work_handler(void *dhd_info, void *event_data, u8 event);
+#endif
+
+#ifdef WL_CFG80211
+extern void dhd_netdev_free(struct net_device *ndev);
+#endif /* WL_CFG80211 */
/* Error bits */
module_param(dhd_msg_level, int, 0);
@@ -436,7 +527,7 @@
/* extend watchdog expiration to 2 seconds when DPC is running */
#define WATCHDOG_EXTEND_INTERVAL (2000)
-uint dhd_watchdog_ms = 10;
+uint dhd_watchdog_ms = CUSTOM_DHD_WATCHDOG_MS;
module_param(dhd_watchdog_ms, uint, 0);
#if defined(DHD_DEBUG)
@@ -484,6 +575,28 @@
static int instance_base = 0; /* Starting instance number */
module_param(instance_base, int, 0644);
+
+/* DHD Perimeter lock only used in router with bypass forwarding. */
+#define DHD_PERIM_RADIO_INIT() do { /* noop */ } while (0)
+#define DHD_PERIM_LOCK_TRY(unit, flag) do { /* noop */ } while (0)
+#define DHD_PERIM_UNLOCK_TRY(unit, flag) do { /* noop */ } while (0)
+#define DHD_PERIM_LOCK_ALL() do { /* noop */ } while (0)
+#define DHD_PERIM_UNLOCK_ALL() do { /* noop */ } while (0)
+
+#ifdef PCIE_FULL_DONGLE
+#if defined(BCM_GMAC3)
+#define DHD_IF_STA_LIST_LOCK_INIT(ifp) do { /* noop */ } while (0)
+#define DHD_IF_STA_LIST_LOCK(ifp, flags) ({ BCM_REFERENCE(flags); })
+#define DHD_IF_STA_LIST_UNLOCK(ifp, flags) ({ BCM_REFERENCE(flags); })
+#else /* ! BCM_GMAC3 */
+#define DHD_IF_STA_LIST_LOCK_INIT(ifp) spin_lock_init(&(ifp)->sta_list_lock)
+#define DHD_IF_STA_LIST_LOCK(ifp, flags) \
+ spin_lock_irqsave(&(ifp)->sta_list_lock, (flags))
+#define DHD_IF_STA_LIST_UNLOCK(ifp, flags) \
+ spin_unlock_irqrestore(&(ifp)->sta_list_lock, (flags))
+#endif /* ! BCM_GMAC3 */
+#endif /* PCIE_FULL_DONGLE */
+
/* Control fw roaming */
uint dhd_roam_disable = 0;
@@ -527,8 +640,8 @@
module_param(dhd_deferred_tx, uint, 0);
#ifdef BCMDBGFS
-extern void dhd_dbg_init(dhd_pub_t *dhdp);
-extern void dhd_dbg_remove(void);
+extern void dhd_dbgfs_init(dhd_pub_t *dhdp);
+extern void dhd_dbgfs_remove(void);
#endif /* BCMDBGFS */
#endif /* BCMSDIO */
@@ -589,14 +702,19 @@
static int dhd_wl_host_event(dhd_info_t *dhd, int *ifidx, void *pktdata,
wl_event_msg_t *event_ptr, void **data_ptr);
+#ifdef DHD_UNICAST_DHCP
+static const uint8 llc_snap_hdr[SNAP_HDR_LEN] = {0xaa, 0xaa, 0x03, 0x00, 0x00, 0x00};
+static int dhd_get_pkt_ip_type(dhd_pub_t *dhd, void *skb, uint8 **data_ptr,
+ int *len_ptr, uint8 *prot_ptr);
+static int dhd_get_pkt_ether_type(dhd_pub_t *dhd, void *skb, uint8 **data_ptr,
+ int *len_ptr, uint16 *et_ptr, bool *snap_ptr);
-#ifdef PROP_TXSTATUS
-static int dhd_wakelock_waive(dhd_info_t *dhdinfo);
-static int dhd_wakelock_restore(dhd_info_t *dhdinfo);
+static int dhd_convert_dhcp_broadcast_ack_to_unicast(dhd_pub_t *pub, void *pktbuf, int ifidx);
+#endif /* DHD_UNICAST_DHCP */
+#ifdef DHD_L2_FILTER
+static int dhd_l2_filter_block_ping(dhd_pub_t *pub, void *pktbuf, int ifidx);
#endif
-
-#if (LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 27)) && (LINUX_VERSION_CODE <= \
- KERNEL_VERSION(2, 6, 39)) && defined(CONFIG_PM_SLEEP)
+#if defined(CONFIG_PM_SLEEP)
static int dhd_pm_callback(struct notifier_block *nfb, unsigned long action, void *ignored)
{
int ret = NOTIFY_DONE;
@@ -605,35 +723,40 @@
BCM_REFERENCE(dhdinfo);
switch (action) {
- case PM_HIBERNATION_PREPARE:
- case PM_SUSPEND_PREPARE:
- suspend = TRUE;
- break;
- case PM_POST_HIBERNATION:
- case PM_POST_SUSPEND:
- suspend = FALSE;
- break;
+ case PM_HIBERNATION_PREPARE:
+ case PM_SUSPEND_PREPARE:
+ suspend = TRUE;
+ break;
+ case PM_POST_HIBERNATION:
+ case PM_POST_SUSPEND:
+ suspend = FALSE;
+ break;
}
+#if defined(SUPPORT_P2P_GO_PS)
#ifdef PROP_TXSTATUS
if (suspend) {
- dhd_wakelock_waive(dhdinfo);
+ DHD_OS_WAKE_LOCK_WAIVE(&dhdinfo->pub);
dhd_wlfc_suspend(&dhdinfo->pub);
- dhd_wakelock_restore(dhdinfo);
- } else {
+ DHD_OS_WAKE_LOCK_RESTORE(&dhdinfo->pub);
+ } else
dhd_wlfc_resume(&dhdinfo->pub);
- }
-
#endif
+#endif /* defined(SUPPORT_P2P_GO_PS) */
#if (LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 27)) && (LINUX_VERSION_CODE <= \
KERNEL_VERSION(2, 6, 39))
dhd_mmc_suspend = suspend;
smp_mb();
#endif
+
return ret;
}
+static struct notifier_block dhd_pm_notifier = {
+ .notifier_call = dhd_pm_callback,
+ .priority = 10
+};
/* to make sure we won't register the same notifier twice, otherwise a loop is likely to be
* created in kernel notifier link list (with 'next' pointing to itself)
*/
@@ -641,13 +764,544 @@
extern int register_pm_notifier(struct notifier_block *nb);
extern int unregister_pm_notifier(struct notifier_block *nb);
-#endif /* (LINUX_VERSION >= 2.6.27 && LINUX_VERSION <= 2.6.39 && CONFIG_PM_SLEEP) */
+#endif /* CONFIG_PM_SLEEP */
/* Request scheduling of the bus rx frame */
static void dhd_sched_rxf(dhd_pub_t *dhdp, void *skb);
static void dhd_os_rxflock(dhd_pub_t *pub);
static void dhd_os_rxfunlock(dhd_pub_t *pub);
+/** priv_link is the link between netdev and the dhdif and dhd_info structs. */
+typedef struct dhd_dev_priv {
+ dhd_info_t * dhd; /* cached pointer to dhd_info in netdevice priv */
+ dhd_if_t * ifp; /* cached pointer to dhd_if in netdevice priv */
+ int ifidx; /* interface index */
+} dhd_dev_priv_t;
+
+#define DHD_DEV_PRIV_SIZE (sizeof(dhd_dev_priv_t))
+#define DHD_DEV_PRIV(dev) ((dhd_dev_priv_t *)DEV_PRIV(dev))
+#define DHD_DEV_INFO(dev) (((dhd_dev_priv_t *)DEV_PRIV(dev))->dhd)
+#define DHD_DEV_IFP(dev) (((dhd_dev_priv_t *)DEV_PRIV(dev))->ifp)
+#define DHD_DEV_IFIDX(dev) (((dhd_dev_priv_t *)DEV_PRIV(dev))->ifidx)
+
+#if defined(DHD_OF_SUPPORT)
+extern int dhd_wlan_init(void);
+extern void dhd_wlan_exit(void);
+#endif /* defined(DHD_OF_SUPPORT) */
+
+
+/** Clear the dhd net_device's private structure. */
+static inline void
+dhd_dev_priv_clear(struct net_device * dev)
+{
+ dhd_dev_priv_t * dev_priv;
+ ASSERT(dev != (struct net_device *)NULL);
+ dev_priv = DHD_DEV_PRIV(dev);
+ dev_priv->dhd = (dhd_info_t *)NULL;
+ dev_priv->ifp = (dhd_if_t *)NULL;
+ dev_priv->ifidx = DHD_BAD_IF;
+}
+
+/** Setup the dhd net_device's private structure. */
+static inline void
+dhd_dev_priv_save(struct net_device * dev, dhd_info_t * dhd, dhd_if_t * ifp,
+ int ifidx)
+{
+ dhd_dev_priv_t * dev_priv;
+ ASSERT(dev != (struct net_device *)NULL);
+ dev_priv = DHD_DEV_PRIV(dev);
+ dev_priv->dhd = dhd;
+ dev_priv->ifp = ifp;
+ dev_priv->ifidx = ifidx;
+}
+#ifdef SAR_SUPPORT
+static int dhd_sar_callback(struct notifier_block *nfb, unsigned long action, void *data)
+{
+ dhd_info_t *dhd = (dhd_info_t*)container_of(nfb, struct dhd_info, sar_notifier);
+ char iovbuf[32];
+ s32 sar_enable;
+ s32 txpower;
+ int ret;
+
+ if (dhd->pub.busstate == DHD_BUS_DOWN)
+ return NOTIFY_DONE;
+
+ if (data) {
+ /* if data != NULL then we expect that the notifier passed
+ * the exact value of max tx power in quarters of dB.
+ * qtxpower variable allows us to override TX power.
+ */
+ txpower = *(s32*)data;
+ if (txpower == -1 || txpower > 127)
+ txpower = 127; /* Max val of 127 qdbm */
+
+ txpower |= WL_TXPWR_OVERRIDE;
+ txpower = htod32(txpower);
+
+ bcm_mkiovar("qtxpower", (char *)&txpower, 4, iovbuf, sizeof(iovbuf));
+ if ((ret = dhd_wl_ioctl_cmd(&dhd->pub, WLC_SET_VAR,
+ iovbuf, sizeof(iovbuf), TRUE, 0)) < 0)
+ DHD_ERROR(("%s wl qtxpower failed %d\n", __FUNCTION__, ret));
+ } else {
+ /* '1' means activate sarlimit and '0' means back to normal
+ * state (deactivate sarlimit)
+ */
+ sar_enable = action ? 1 : 0;
+ if (dhd->sar_enable == sar_enable)
+ return NOTIFY_DONE;
+ bcm_mkiovar("sar_enable", (char *)&sar_enable, 4, iovbuf, sizeof(iovbuf));
+ if ((ret = dhd_wl_ioctl_cmd(&dhd->pub, WLC_SET_VAR, iovbuf, sizeof(iovbuf), TRUE, 0)) < 0)
+ DHD_ERROR(("%s wl sar_enable %d failed %d\n", __FUNCTION__, sar_enable, ret));
+ else
+ dhd->sar_enable = sar_enable;
+ }
+
+ return NOTIFY_DONE;
+}
+
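The qtxpower branch above clamps the requested power before OR-ing in WL_TXPWR_OVERRIDE. A standalone sketch of just the clamping rule (hypothetical helper name, plain C, no driver dependencies):

```c
#include <assert.h>

/* Clamp a requested TX power (quarter-dB units) the way the SAR notifier
 * does: -1 means "no limit requested" and anything above 127 qdBm is
 * capped, since 127 is the maximum value the firmware accepts. */
static int clamp_qtxpower(int txpower)
{
	if (txpower == -1 || txpower > 127)
		txpower = 127;
	return txpower;
}
```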
+static bool dhd_sar_notifier_registered = FALSE;
+
+extern int register_notifier_by_sar(struct notifier_block *nb);
+extern int unregister_notifier_by_sar(struct notifier_block *nb);
+#endif
+
+#ifdef PCIE_FULL_DONGLE
+
+/** Dummy objects are defined with state representing bad|down.
+ * Performance gains come from fewer branch conditionals, better instruction
+ * parallelism and dual issue, fewer load shadows, and use of larger pipelines.
+ * Use DHD_XXX_NULL instead of (dhd_xxx_t *)NULL whenever an object pointer
+ * is accessed via the dhd_sta_t.
+ */
+
+/* Dummy dhd_info object */
+dhd_info_t dhd_info_null = {
+#if defined(BCM_GMAC3)
+ .fwdh = FWDER_NULL,
+#endif
+ .pub = {
+ .info = &dhd_info_null,
+#ifdef DHDTCPACK_SUPPRESS
+ .tcpack_sup_mode = TCPACK_SUP_REPLACE,
+#endif /* DHDTCPACK_SUPPRESS */
+ .up = FALSE, .busstate = DHD_BUS_DOWN
+ }
+};
+#define DHD_INFO_NULL (&dhd_info_null)
+#define DHD_PUB_NULL (&dhd_info_null.pub)
+
+/* Dummy netdevice object */
+struct net_device dhd_net_dev_null = {
+ .reg_state = NETREG_UNREGISTERED
+};
+#define DHD_NET_DEV_NULL (&dhd_net_dev_null)
+
+/* Dummy dhd_if object */
+dhd_if_t dhd_if_null = {
+#if defined(BCM_GMAC3)
+ .fwdh = FWDER_NULL,
+#endif
+#ifdef WMF
+ .wmf = { .wmf_enable = TRUE },
+#endif
+ .info = DHD_INFO_NULL,
+ .net = DHD_NET_DEV_NULL,
+ .idx = DHD_BAD_IF
+};
+#define DHD_IF_NULL (&dhd_if_null)
+
+#define DHD_STA_NULL ((dhd_sta_t *)NULL)
+
+/** Interface STA list management. */
+
+/** Fetch the dhd_if object, given the interface index in the dhd. */
+static inline dhd_if_t *dhd_get_ifp(dhd_pub_t *dhdp, uint32 ifidx);
+
+/** Alloc/Free a dhd_sta object from the dhd instances' sta_pool. */
+static void dhd_sta_free(dhd_pub_t *pub, dhd_sta_t *sta);
+static dhd_sta_t * dhd_sta_alloc(dhd_pub_t * dhdp);
+
+/* Delete a dhd_sta or flush all dhd_sta in an interface's sta_list. */
+static void dhd_if_del_sta_list(dhd_if_t * ifp);
+static void dhd_if_flush_sta(dhd_if_t * ifp);
+
+/* Construct/Destruct a sta pool. */
+static int dhd_sta_pool_init(dhd_pub_t *dhdp, int max_sta);
+static void dhd_sta_pool_fini(dhd_pub_t *dhdp, int max_sta);
+static void dhd_sta_pool_clear(dhd_pub_t *dhdp, int max_sta);
+
+
+/* Return interface pointer */
+static inline dhd_if_t *dhd_get_ifp(dhd_pub_t *dhdp, uint32 ifidx)
+{
+ ASSERT(ifidx < DHD_MAX_IFS);
+ if (ifidx >= DHD_MAX_IFS) {
+ return NULL;
+ }
+ return dhdp->info->iflist[ifidx];
+}
+
+/** Reset a dhd_sta object and free into the dhd pool. */
+static void
+dhd_sta_free(dhd_pub_t * dhdp, dhd_sta_t * sta)
+{
+ int prio;
+
+ ASSERT((sta != DHD_STA_NULL) && (sta->idx != ID16_INVALID));
+
+ ASSERT((dhdp->staid_allocator != NULL) && (dhdp->sta_pool != NULL));
+ id16_map_free(dhdp->staid_allocator, sta->idx);
+ for (prio = 0; prio < (int)NUMPRIO; prio++)
+ sta->flowid[prio] = FLOWID_INVALID;
+ sta->ifp = DHD_IF_NULL; /* dummy dhd_if object */
+ sta->ifidx = DHD_BAD_IF;
+ bzero(sta->ea.octet, ETHER_ADDR_LEN);
+ INIT_LIST_HEAD(&sta->list);
+ sta->idx = ID16_INVALID; /* implying free */
+}
+
+/** Allocate a dhd_sta object from the dhd pool. */
+static dhd_sta_t *
+dhd_sta_alloc(dhd_pub_t * dhdp)
+{
+ uint16 idx;
+ dhd_sta_t * sta;
+ dhd_sta_pool_t * sta_pool;
+
+ ASSERT((dhdp->staid_allocator != NULL) && (dhdp->sta_pool != NULL));
+
+ idx = id16_map_alloc(dhdp->staid_allocator);
+ if (idx == ID16_INVALID) {
+ DHD_ERROR(("%s: cannot get free staid\n", __FUNCTION__));
+ return DHD_STA_NULL;
+ }
+
+ sta_pool = (dhd_sta_pool_t *)(dhdp->sta_pool);
+ sta = &sta_pool[idx];
+
+ ASSERT((sta->idx == ID16_INVALID) &&
+ (sta->ifp == DHD_IF_NULL) && (sta->ifidx == DHD_BAD_IF));
+ sta->idx = idx; /* implying allocated */
+
+ return sta;
+}
+
+/** Delete all STAs in an interface's STA list. */
+static void
+dhd_if_del_sta_list(dhd_if_t *ifp)
+{
+ dhd_sta_t *sta, *next;
+ unsigned long flags;
+
+ DHD_IF_STA_LIST_LOCK(ifp, flags);
+
+ list_for_each_entry_safe(sta, next, &ifp->sta_list, list) {
+#if defined(BCM_GMAC3)
+ if (ifp->fwdh) {
+ /* Remove sta from WOFA forwarder. */
+ fwder_deassoc(ifp->fwdh, (uint16 *)(sta->ea.octet), (wofa_t)sta);
+ }
+#endif /* BCM_GMAC3 */
+ list_del(&sta->list);
+ dhd_sta_free(&ifp->info->pub, sta);
+ }
+
+ DHD_IF_STA_LIST_UNLOCK(ifp, flags);
+
+ return;
+}
+
+/** Router/GMAC3: Flush all station entries in the forwarder's WOFA database. */
+static void
+dhd_if_flush_sta(dhd_if_t * ifp)
+{
+#if defined(BCM_GMAC3)
+
+ if (ifp && (ifp->fwdh != FWDER_NULL)) {
+ dhd_sta_t *sta, *next;
+ unsigned long flags;
+
+ DHD_IF_STA_LIST_LOCK(ifp, flags);
+
+ list_for_each_entry_safe(sta, next, &ifp->sta_list, list) {
+ /* Remove any sta entry from WOFA forwarder. */
+ fwder_flush(ifp->fwdh, (wofa_t)sta);
+ }
+
+ DHD_IF_STA_LIST_UNLOCK(ifp, flags);
+ }
+#endif /* BCM_GMAC3 */
+}
+
+/** Construct a pool of dhd_sta_t objects to be used by interfaces. */
+static int
+dhd_sta_pool_init(dhd_pub_t *dhdp, int max_sta)
+{
+ int idx, sta_pool_memsz;
+ dhd_sta_t * sta;
+ dhd_sta_pool_t * sta_pool;
+ void * staid_allocator;
+
+ ASSERT(dhdp != (dhd_pub_t *)NULL);
+ ASSERT((dhdp->staid_allocator == NULL) && (dhdp->sta_pool == NULL));
+
+ /* dhd_sta objects per radio are managed in a table. id#0 reserved. */
+ staid_allocator = id16_map_init(dhdp->osh, max_sta, 1);
+ if (staid_allocator == NULL) {
+ DHD_ERROR(("%s: sta id allocator init failure\n", __FUNCTION__));
+ return BCME_ERROR;
+ }
+
+ /* Pre allocate a pool of dhd_sta objects (one extra). */
+ sta_pool_memsz = ((max_sta + 1) * sizeof(dhd_sta_t)); /* skip idx 0 */
+ sta_pool = (dhd_sta_pool_t *)MALLOC(dhdp->osh, sta_pool_memsz);
+ if (sta_pool == NULL) {
+ DHD_ERROR(("%s: sta table alloc failure\n", __FUNCTION__));
+ id16_map_fini(dhdp->osh, staid_allocator);
+ return BCME_ERROR;
+ }
+
+ dhdp->sta_pool = sta_pool;
+ dhdp->staid_allocator = staid_allocator;
+
+ /* Initialize all sta(s) for the pre-allocated free pool. */
+ bzero((uchar *)sta_pool, sta_pool_memsz);
+ for (idx = max_sta; idx >= 1; idx--) { /* skip sta_pool[0] */
+ sta = &sta_pool[idx];
+ sta->idx = id16_map_alloc(staid_allocator);
+ ASSERT(sta->idx <= max_sta);
+ }
+ /* Now place them into the pre-allocated free pool. */
+ for (idx = 1; idx <= max_sta; idx++) {
+ sta = &sta_pool[idx];
+ dhd_sta_free(dhdp, sta);
+ }
+
+ return BCME_OK;
+}
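dhd_sta_pool_init() above pairs a preallocated array with an id16 allocator in which id 0 is reserved. A minimal userspace sketch of that allocator pattern (hypothetical names, no osl/id16_map dependencies):

```c
#include <assert.h>

#define ID_INVALID 0xFFFF
#define MAX_STA    4

/* LIFO stack of free ids; ids run 1..MAX_STA and id 0 stays reserved,
 * mirroring id16_map_init(osh, max_sta, 1). */
typedef struct {
	unsigned short free_ids[MAX_STA];
	int top;
} id_map_t;

static void id_map_init(id_map_t *m)
{
	int id;
	m->top = 0;
	for (id = MAX_STA; id >= 1; id--)	/* push MAX_STA..1; 1 ends on top */
		m->free_ids[m->top++] = (unsigned short)id;
}

static unsigned short id_map_alloc(id_map_t *m)
{
	return m->top ? m->free_ids[--m->top] : ID_INVALID;
}

static void id_map_free(id_map_t *m, unsigned short id)
{
	m->free_ids[m->top++] = id;
}
```

Ids come back in ascending order after init, and a freed id is handed out again before untouched ones, which is the same recycle-first behavior the sta pool relies on.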
+
+/** Destruct the pool of dhd_sta_t objects.
+ * Caller must ensure that no STA objects are currently associated with an if.
+ */
+static void
+dhd_sta_pool_fini(dhd_pub_t *dhdp, int max_sta)
+{
+ dhd_sta_pool_t * sta_pool = (dhd_sta_pool_t *)dhdp->sta_pool;
+
+ if (sta_pool) {
+ int idx;
+ int sta_pool_memsz = ((max_sta + 1) * sizeof(dhd_sta_t));
+ for (idx = 1; idx <= max_sta; idx++) {
+ ASSERT(sta_pool[idx].ifp == DHD_IF_NULL);
+ ASSERT(sta_pool[idx].idx == ID16_INVALID);
+ }
+ MFREE(dhdp->osh, dhdp->sta_pool, sta_pool_memsz);
+ dhdp->sta_pool = NULL;
+ }
+
+ id16_map_fini(dhdp->osh, dhdp->staid_allocator);
+ dhdp->staid_allocator = NULL;
+}
+
+
+
+/* Clear the pool of dhd_sta_t objects for built-in type driver */
+static void
+dhd_sta_pool_clear(dhd_pub_t *dhdp, int max_sta)
+{
+ int idx, sta_pool_memsz;
+ dhd_sta_t * sta;
+ dhd_sta_pool_t * sta_pool;
+ void *staid_allocator;
+
+ if (!dhdp) {
+ DHD_ERROR(("%s: dhdp is NULL\n", __FUNCTION__));
+ return;
+ }
+
+ sta_pool = (dhd_sta_pool_t *)dhdp->sta_pool;
+ staid_allocator = dhdp->staid_allocator;
+
+ if (!sta_pool) {
+ DHD_ERROR(("%s: sta_pool is NULL\n", __FUNCTION__));
+ return;
+ }
+
+ if (!staid_allocator) {
+ DHD_ERROR(("%s: staid_allocator is NULL\n", __FUNCTION__));
+ return;
+ }
+
+ /* clear free pool */
+ sta_pool_memsz = ((max_sta + 1) * sizeof(dhd_sta_t));
+ bzero((uchar *)sta_pool, sta_pool_memsz);
+
+ /* dhd_sta objects per radio are managed in a table. id#0 reserved. */
+ id16_map_clear(staid_allocator, max_sta, 1);
+
+ /* Initialize all sta(s) for the pre-allocated free pool. */
+ for (idx = max_sta; idx >= 1; idx--) { /* skip sta_pool[0] */
+ sta = &sta_pool[idx];
+ sta->idx = id16_map_alloc(staid_allocator);
+ ASSERT(sta->idx <= max_sta);
+ }
+ /* Now place them into the pre-allocated free pool. */
+ for (idx = 1; idx <= max_sta; idx++) {
+ sta = &sta_pool[idx];
+ dhd_sta_free(dhdp, sta);
+ }
+}
+
+
+/** Find STA with MAC address ea in an interface's STA list. */
+dhd_sta_t *
+dhd_find_sta(void *pub, int ifidx, void *ea)
+{
+ dhd_sta_t *sta, *next;
+ dhd_if_t *ifp;
+ unsigned long flags;
+
+ ASSERT(ea != NULL);
+ ifp = dhd_get_ifp((dhd_pub_t *)pub, ifidx);
+ if (ifp == NULL)
+ return DHD_STA_NULL;
+
+ DHD_IF_STA_LIST_LOCK(ifp, flags);
+
+ list_for_each_entry_safe(sta, next, &ifp->sta_list, list) {
+ if (!memcmp(sta->ea.octet, ea, ETHER_ADDR_LEN)) {
+ DHD_IF_STA_LIST_UNLOCK(ifp, flags);
+ return sta;
+ }
+ }
+
+ DHD_IF_STA_LIST_UNLOCK(ifp, flags);
+
+ return DHD_STA_NULL;
+}
+
+/** Add STA into the interface's STA list. */
+dhd_sta_t *
+dhd_add_sta(void *pub, int ifidx, void *ea)
+{
+ dhd_sta_t *sta;
+ dhd_if_t *ifp;
+ unsigned long flags;
+
+ ASSERT(ea != NULL);
+ ifp = dhd_get_ifp((dhd_pub_t *)pub, ifidx);
+ if (ifp == NULL)
+ return DHD_STA_NULL;
+
+ sta = dhd_sta_alloc((dhd_pub_t *)pub);
+ if (sta == DHD_STA_NULL) {
+ DHD_ERROR(("%s: Alloc failed\n", __FUNCTION__));
+ return DHD_STA_NULL;
+ }
+
+ memcpy(sta->ea.octet, ea, ETHER_ADDR_LEN);
+
+ /* link the sta and the dhd interface */
+ sta->ifp = ifp;
+ sta->ifidx = ifidx;
+ INIT_LIST_HEAD(&sta->list);
+
+ DHD_IF_STA_LIST_LOCK(ifp, flags);
+
+ list_add_tail(&sta->list, &ifp->sta_list);
+
+#if defined(BCM_GMAC3)
+ if (ifp->fwdh) {
+ ASSERT(ISALIGNED(ea, 2));
+ /* Add sta to WOFA forwarder. */
+ fwder_reassoc(ifp->fwdh, (uint16 *)ea, (wofa_t)sta);
+ }
+#endif /* BCM_GMAC3 */
+
+ DHD_IF_STA_LIST_UNLOCK(ifp, flags);
+
+ return sta;
+}
+
+/** Delete STA from the interface's STA list. */
+void
+dhd_del_sta(void *pub, int ifidx, void *ea)
+{
+ dhd_sta_t *sta, *next;
+ dhd_if_t *ifp;
+ unsigned long flags;
+
+ ASSERT(ea != NULL);
+ ifp = dhd_get_ifp((dhd_pub_t *)pub, ifidx);
+ if (ifp == NULL)
+ return;
+
+ DHD_IF_STA_LIST_LOCK(ifp, flags);
+
+ list_for_each_entry_safe(sta, next, &ifp->sta_list, list) {
+ if (!memcmp(sta->ea.octet, ea, ETHER_ADDR_LEN)) {
+#if defined(BCM_GMAC3)
+ if (ifp->fwdh) { /* Found a sta, remove from WOFA forwarder. */
+ ASSERT(ISALIGNED(ea, 2));
+ fwder_deassoc(ifp->fwdh, (uint16 *)ea, (wofa_t)sta);
+ }
+#endif /* BCM_GMAC3 */
+ list_del(&sta->list);
+ dhd_sta_free(&ifp->info->pub, sta);
+ }
+ }
+
+ DHD_IF_STA_LIST_UNLOCK(ifp, flags);
+
+ return;
+}
+
+/** Add STA if it doesn't exist. Not reentrant. */
+dhd_sta_t*
+dhd_findadd_sta(void *pub, int ifidx, void *ea)
+{
+ dhd_sta_t *sta;
+
+ sta = dhd_find_sta(pub, ifidx, ea);
+
+ if (!sta) {
+ /* Add entry */
+ sta = dhd_add_sta(pub, ifidx, ea);
+ }
+
+ return sta;
+}
+#else
+static inline void dhd_if_flush_sta(dhd_if_t * ifp) { }
+static inline void dhd_if_del_sta_list(dhd_if_t *ifp) {}
+static inline int dhd_sta_pool_init(dhd_pub_t *dhdp, int max_sta) { return BCME_OK; }
+static inline void dhd_sta_pool_fini(dhd_pub_t *dhdp, int max_sta) {}
+dhd_sta_t *dhd_findadd_sta(void *pub, int ifidx, void *ea) { return NULL; }
+void dhd_del_sta(void *pub, int ifidx, void *ea) {}
+#endif /* PCIE_FULL_DONGLE */
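dhd_findadd_sta() above is a lookup-then-insert pair that is only safe because the caller serializes access (the comment notes it is not reentrant). A userspace sketch of the same find-or-add shape over a singly linked list (illustrative names, malloc in place of the sta pool, no locking):

```c
#include <stdlib.h>
#include <string.h>

#define ETH_ALEN 6

typedef struct node {
	unsigned char ea[ETH_ALEN];	/* station MAC address */
	struct node *next;
} node_t;

static node_t *find_sta(node_t *head, const unsigned char *ea)
{
	node_t *n;
	for (n = head; n; n = n->next)
		if (memcmp(n->ea, ea, ETH_ALEN) == 0)
			return n;
	return NULL;
}

/* Find-or-add: the lookup/insert pair is not atomic, so as in the driver
 * the caller must serialize concurrent calls for the same list. */
static node_t *findadd_sta(node_t **head, const unsigned char *ea)
{
	node_t *n = find_sta(*head, ea);
	if (n)
		return n;
	n = calloc(1, sizeof(*n));
	if (!n)
		return NULL;
	memcpy(n->ea, ea, ETH_ALEN);
	n->next = *head;
	*head = n;
	return n;
}
```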
+
+
+/* Returns dhd iflist index corresponding to the bssidx provided by apps */
+int dhd_bssidx2idx(dhd_pub_t *dhdp, uint32 bssidx)
+{
+ dhd_if_t *ifp;
+ dhd_info_t *dhd = dhdp->info;
+ int i;
+
+ ASSERT(bssidx < DHD_MAX_IFS);
+ ASSERT(dhdp);
+
+ for (i = 0; i < DHD_MAX_IFS; i++) {
+ ifp = dhd->iflist[i];
+ if (ifp && (ifp->bssidx == bssidx)) {
+ DHD_TRACE(("Index manipulated for %s from %d to %d\n",
+ ifp->name, bssidx, i));
+ break;
+ }
+ }
+ return i;
+}
+
static inline int dhd_rxf_enqueue(dhd_pub_t *dhdp, void* skb)
{
uint32 store_idx;
@@ -664,6 +1318,11 @@
if (dhdp->skbbuf[store_idx] != NULL) {
/* Make sure the previous packets are processed */
dhd_os_rxfunlock(dhdp);
+#ifdef RXF_DEQUEUE_ON_BUSY
+ DHD_TRACE(("dhd_rxf_enqueue: pktbuf not consumed %p, store idx %d sent idx %d\n",
+ skb, store_idx, sent_idx));
+ return BCME_BUSY;
+#else /* RXF_DEQUEUE_ON_BUSY */
DHD_ERROR(("dhd_rxf_enqueue: pktbuf not consumed %p, store idx %d sent idx %d\n",
skb, store_idx, sent_idx));
/* removed msleep here, should use wait_event_timeout if we
@@ -673,6 +1332,7 @@
OSL_SLEEP(1);
#endif
return BCME_ERROR;
+#endif /* RXF_DEQUEUE_ON_BUSY */
}
DHD_TRACE(("dhd_rxf_enqueue: Store SKB %p. idx %d -> %d\n",
skb, store_idx, (store_idx + 1) & (MAXSKBPEND - 1)));
@@ -713,7 +1373,7 @@
return skb;
}
-static int dhd_process_cid_mac(dhd_pub_t *dhdp, bool prepost)
+int dhd_process_cid_mac(dhd_pub_t *dhdp, bool prepost)
{
dhd_info_t *dhd = (dhd_info_t *)dhdp->info;
@@ -802,14 +1462,13 @@
uint roamvar = 1;
#endif /* ENABLE_FW_ROAM_SUSPEND */
uint nd_ra_filter = 0;
+ int lpas = 0;
+ int dtim_period = 0;
+ int bcn_interval = 0;
+ int bcn_to_dly = 0;
+ int bcn_timeout = CUSTOM_BCN_TIMEOUT_SETTING;
int ret = 0;
-#ifdef DYNAMIC_SWOOB_DURATION
-#ifndef CUSTOM_INTR_WIDTH
-#define CUSTOM_INTR_WIDTH 100
-#endif /* CUSTOM_INTR_WIDTH */
- int intr_width = 0;
-#endif /* DYNAMIC_SWOOB_DURATION */
if (!dhd)
return -ENODEV;
@@ -830,7 +1489,9 @@
#endif
/* Kernel suspended */
DHD_ERROR(("%s: force extra Suspend setting \n", __FUNCTION__));
-
+#ifdef CUSTOM_SET_SHORT_DWELL_TIME
+ dhd_set_short_dwell_time(dhd, TRUE);
+#endif
#ifndef SUPPORT_PM2_ONLY
dhd_wl_ioctl_cmd(dhd, WLC_SET_PM, (char *)&power_mode,
sizeof(power_mode), TRUE, 0);
@@ -839,17 +1500,34 @@
/* Enable packet filter, only allow unicast packet to send up */
dhd_enable_packet_filter(1, dhd);
-
/* If DTIM skip is set up as default, force it to wake
* each third DTIM for better power savings. Note that
* one side effect is a chance to miss BC/MC packet.
*/
- bcn_li_dtim = dhd_get_suspend_bcn_li_dtim(dhd);
- bcm_mkiovar("bcn_li_dtim", (char *)&bcn_li_dtim,
- 4, iovbuf, sizeof(iovbuf));
- if (dhd_wl_ioctl_cmd(dhd, WLC_SET_VAR, iovbuf, sizeof(iovbuf),
- TRUE, 0) < 0)
- DHD_ERROR(("%s: set dtim failed\n", __FUNCTION__));
+ bcn_li_dtim = dhd_get_suspend_bcn_li_dtim(dhd, &dtim_period, &bcn_interval);
+ dhd_iovar(dhd, 0, "bcn_li_dtim", (char *)&bcn_li_dtim, sizeof(bcn_li_dtim), 1);
+ if (bcn_li_dtim * dtim_period * bcn_interval >= MIN_DTIM_FOR_ROAM_THRES_EXTEND) {
+ /*
+ * Increase max roaming threshold from 2 secs to 8 secs
+ * the real roam threshold is MIN(max_roam_threshold, bcn_timeout/2)
+ */
+ lpas = 1;
+ dhd_iovar(dhd, 0, "lpas", (char *)&lpas, sizeof(lpas), 1);
+
+ bcn_to_dly = 1;
+ /*
+ * if bcn_to_dly is 1, the real roam threshold is MIN(max_roam_threshold, bcn_timeout -1);
+ * notify link down event after roaming procedure complete if we hit bcn_timeout
+ * while we are in roaming progress.
+ *
+ */
+ dhd_iovar(dhd, 0, "bcn_to_dly", (char *)&bcn_to_dly, sizeof(bcn_to_dly), 1);
+ /* Increase beacon timeout to 6 secs */
+ bcn_timeout = (bcn_timeout < BCN_TIMEOUT_IN_SUSPEND) ?
+ BCN_TIMEOUT_IN_SUSPEND : bcn_timeout;
+ dhd_iovar(dhd, 0, "bcn_timeout", (char *)&bcn_timeout, sizeof(bcn_timeout), 1);
+ }
+
#ifndef ENABLE_FW_ROAM_SUSPEND
/* Disable firmware roaming during suspend */
@@ -867,29 +1545,16 @@
DHD_ERROR(("failed to set nd_ra_filter (%d)\n",
ret));
}
-#ifdef DYNAMIC_SWOOB_DURATION
- intr_width = CUSTOM_INTR_WIDTH;
- bcm_mkiovar("bus:intr_width", (char *)&intr_width, 4,
- iovbuf, sizeof(iovbuf));
- if ((ret = dhd_wl_ioctl_cmd(dhd, WLC_SET_VAR, iovbuf,
- sizeof(iovbuf), TRUE, 0)) < 0)
- DHD_ERROR(("failed to set intr_width (%d)\n", ret));
-#endif /* DYNAMIC_SWOOB_DURATION */
+ dhd_os_suppress_logging(dhd, TRUE);
} else {
#ifdef PKT_FILTER_SUPPORT
dhd->early_suspended = 0;
#endif
/* Kernel resumed */
DHD_ERROR(("%s: Remove extra suspend setting \n", __FUNCTION__));
-#ifdef DYNAMIC_SWOOB_DURATION
- intr_width = 0;
- bcm_mkiovar("bus:intr_width", (char *)&intr_width, 4,
- iovbuf, sizeof(iovbuf));
- if ((ret = dhd_wl_ioctl_cmd(dhd, WLC_SET_VAR, iovbuf,
- sizeof(iovbuf), TRUE, 0)) < 0)
- DHD_ERROR(("failed to set intr_width (%d)\n", ret));
-#endif /* DYNAMIC_SWOOB_DURATION */
-
+#ifdef CUSTOM_SET_SHORT_DWELL_TIME
+ dhd_set_short_dwell_time(dhd, FALSE);
+#endif
#ifndef SUPPORT_PM2_ONLY
power_mode = PM_FAST;
dhd_wl_ioctl_cmd(dhd, WLC_SET_PM, (char *)&power_mode,
@@ -900,11 +1565,15 @@
dhd_enable_packet_filter(0, dhd);
#endif /* PKT_FILTER_SUPPORT */
- /* restore pre-suspend setting for dtim_skip */
- bcm_mkiovar("bcn_li_dtim", (char *)&bcn_li_dtim,
- 4, iovbuf, sizeof(iovbuf));
+ /* restore pre-suspend setting */
+ dhd_iovar(dhd, 0, "bcn_li_dtim", (char *)&bcn_li_dtim, sizeof(bcn_li_dtim), 1);
- dhd_wl_ioctl_cmd(dhd, WLC_SET_VAR, iovbuf, sizeof(iovbuf), TRUE, 0);
+ dhd_iovar(dhd, 0, "lpas", (char *)&lpas, sizeof(lpas), 1);
+
+ dhd_iovar(dhd, 0, "bcn_to_dly", (char *)&bcn_to_dly, sizeof(bcn_to_dly), 1);
+
+ dhd_iovar(dhd, 0, "bcn_timeout", (char *)&bcn_timeout, sizeof(bcn_timeout), 1);
+
#ifndef ENABLE_FW_ROAM_SUSPEND
roamvar = dhd_roam_disable;
bcm_mkiovar("roam_off", (char *)&roamvar, 4, iovbuf,
@@ -921,6 +1590,7 @@
DHD_ERROR(("failed to set nd_ra_filter (%d)\n",
ret));
}
+ dhd_os_suppress_logging(dhd, FALSE);
}
}
dhd_suspend_unlock(dhd);
@@ -934,6 +1604,8 @@
int ret = 0;
DHD_OS_WAKE_LOCK(dhdp);
+ DHD_PERIM_LOCK(dhdp);
+
/* Set flag when early suspend was called */
dhdp->in_suspend = val;
if ((force || !dhdp->suspend_disable_flag) &&
@@ -942,6 +1614,7 @@
ret = dhd_set_suspend(val, dhdp);
}
+ DHD_PERIM_UNLOCK(dhdp);
DHD_OS_WAKE_UNLOCK(dhdp);
return ret;
}
@@ -978,6 +1651,58 @@
* fatal();
*/
+#ifdef CONFIG_PARTIALRESUME
+static unsigned int dhd_get_ipv6_stat(u8 type)
+{
+ static unsigned int ra = 0;
+ static unsigned int na = 0;
+ static unsigned int other = 0;
+
+ switch (type) {
+ case NDISC_ROUTER_ADVERTISEMENT:
+ ra++;
+ return ra;
+ case NDISC_NEIGHBOUR_ADVERTISEMENT:
+ na++;
+ return na;
+ default:
+ other++;
+ break;
+ }
+ return other;
+}
+#endif
+
+static int dhd_rx_suspend_again(struct sk_buff *skb)
+{
+#ifdef CONFIG_PARTIALRESUME
+ u8 *pptr = skb_mac_header(skb);
+
+ if (pptr &&
+ (memcmp(pptr, "\x33\x33\x00\x00\x00\x01", ETHER_ADDR_LEN) == 0) &&
+ (ntoh16(skb->protocol) == ETHER_TYPE_IPV6)) {
+ u8 type = 0;
+#define ETHER_ICMP6_TYPE 54
+#define ETHER_ICMP6_DADDR 38
+ if (skb->len > ETHER_ICMP6_TYPE)
+ type = pptr[ETHER_ICMP6_TYPE];
+ if ((type == NDISC_NEIGHBOUR_ADVERTISEMENT) &&
+ (ipv6_addr_equal(&in6addr_linklocal_allnodes,
+ (const struct in6_addr *)(pptr + ETHER_ICMP6_DADDR)))) {
+ pr_debug("%s: Suspend, type = %d [%u]\n", __func__,
+ type, dhd_get_ipv6_stat(type));
+ return 0;
+ } else {
+ pr_debug("%s: Resume, type = %d [%u]\n", __func__,
+ type, dhd_get_ipv6_stat(type));
+ }
+#undef ETHER_ICMP6_TYPE
+#undef ETHER_ICMP6_DADDR
+ }
+#endif
+ return DHD_PACKET_TIMEOUT_MS;
+}
+
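The magic offsets 54 and 38 in dhd_rx_suspend_again() are the ICMPv6 type byte and the IPv6 destination address, both counted from the start of the MAC header. A small sketch deriving them from the standard header sizes (enum names are illustrative):

```c
/* The Ethernet header is 14 bytes, the fixed IPv6 header is 40 bytes, and
 * the destination address starts 24 bytes into the IPv6 header. The ICMPv6
 * type is the first byte of the IPv6 payload, hence offset 14 + 40 = 54;
 * the destination address sits at 14 + 24 = 38. */
enum {
	ETH_HDR_LEN    = 14,
	IPV6_HDR_LEN   = 40,
	IPV6_DADDR_OFF = 24,
};

static int ether_icmp6_type_off(void)
{
	return ETH_HDR_LEN + IPV6_HDR_LEN;
}

static int ether_icmp6_daddr_off(void)
{
	return ETH_HDR_LEN + IPV6_DADDR_OFF;
}
```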
void
dhd_timeout_start(dhd_timeout_t *tmo, uint usec)
{
@@ -1013,7 +1738,7 @@
init_waitqueue_head(&delay_wait);
add_wait_queue(&delay_wait, &wait);
set_current_state(TASK_INTERRUPTIBLE);
- schedule_timeout(1);
+ (void)schedule_timeout(1);
remove_wait_queue(&delay_wait, &wait);
set_current_state(TASK_RUNNING);
}
@@ -1068,6 +1793,27 @@
return i; /* default - the primary interface */
}
+int
+dhd_ifidx2hostidx(dhd_info_t *dhd, int ifidx)
+{
+ int i = DHD_MAX_IFS;
+
+ ASSERT(dhd);
+
+ if (ifidx < 0 || ifidx >= DHD_MAX_IFS) {
+ DHD_TRACE(("%s: ifidx %d out of range\n", __FUNCTION__, ifidx));
+ return 0; /* default - the primary interface */
+ }
+
+ while (--i > 0)
+ if (dhd->iflist[i] && (dhd->iflist[i]->idx == ifidx))
+ break;
+
+ DHD_TRACE(("%s: return hostidx %d for ifidx %d\n", __FUNCTION__, i, ifidx));
+
+ return i; /* default - the primary interface */
+}
+
char *
dhd_ifname(dhd_pub_t *dhdp, int ifidx)
{
@@ -1294,6 +2040,10 @@
struct net_device *ndev;
int ifidx, bssidx;
int ret;
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 11, 0))
+ struct wireless_dev *vwdev, *primary_wdev;
+ struct net_device *primary_ndev;
+#endif /* OEM_ANDROID && (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 11, 0)) */
if (event != DHD_WQ_WORK_IF_ADD) {
DHD_ERROR(("%s: unexpected event \n", __FUNCTION__));
@@ -1312,6 +2062,7 @@
dhd_net_if_lock_local(dhd);
DHD_OS_WAKE_LOCK(&dhd->pub);
+ DHD_PERIM_LOCK(&dhd->pub);
ifidx = if_event->event.ifidx;
bssidx = if_event->event.bssidx;
@@ -1324,14 +2075,44 @@
goto done;
}
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 11, 0))
+ vwdev = kzalloc(sizeof(*vwdev), GFP_KERNEL);
+ if (unlikely(!vwdev)) {
+ WL_ERR(("Could not allocate wireless device\n"));
+ goto done;
+ }
+ primary_ndev = dhd->pub.info->iflist[0]->net;
+ primary_wdev = ndev_to_wdev(primary_ndev);
+ vwdev->wiphy = primary_wdev->wiphy;
+ vwdev->iftype = if_event->event.role;
+ vwdev->netdev = ndev;
+ ndev->ieee80211_ptr = vwdev;
+ SET_NETDEV_DEV(ndev, wiphy_dev(vwdev->wiphy));
+ DHD_ERROR(("virtual interface(%s) is created\n", if_event->name));
+#endif /* OEM_ANDROID && (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 11, 0)) */
+
+ DHD_PERIM_UNLOCK(&dhd->pub);
ret = dhd_register_if(&dhd->pub, ifidx, TRUE);
+ DHD_PERIM_LOCK(&dhd->pub);
if (ret != BCME_OK) {
DHD_ERROR(("%s: dhd_register_if failed\n", __FUNCTION__));
dhd_remove_if(&dhd->pub, ifidx, TRUE);
}
+#ifdef PCIE_FULL_DONGLE
+ /* Turn on AP isolation in the firmware for interfaces operating in AP mode */
+ if (FW_SUPPORTED((&dhd->pub), ap) && !(DHD_IF_ROLE_STA(if_event->event.role))) {
+ char iovbuf[WLC_IOCTL_SMLEN];
+ uint32 var_int = 1;
+
+ memset(iovbuf, 0, sizeof(iovbuf));
+ bcm_mkiovar("ap_isolate", (char *)&var_int, 4, iovbuf, sizeof(iovbuf));
+ dhd_wl_ioctl_cmd(&dhd->pub, WLC_SET_VAR, iovbuf, sizeof(iovbuf), TRUE, ifidx);
+ }
+#endif /* PCIE_FULL_DONGLE */
done:
MFREE(dhd->pub.osh, if_event, sizeof(dhd_if_event_t));
+ DHD_PERIM_UNLOCK(&dhd->pub);
DHD_OS_WAKE_UNLOCK(&dhd->pub);
dhd_net_if_unlock_local(dhd);
}
@@ -1361,6 +2142,7 @@
dhd_net_if_lock_local(dhd);
DHD_OS_WAKE_LOCK(&dhd->pub);
+ DHD_PERIM_LOCK(&dhd->pub);
ifidx = if_event->event.ifidx;
DHD_TRACE(("Removing interface with idx %d\n", ifidx));
@@ -1369,6 +2151,7 @@
MFREE(dhd->pub.osh, if_event, sizeof(dhd_if_event_t));
+ DHD_PERIM_UNLOCK(&dhd->pub);
DHD_OS_WAKE_UNLOCK(&dhd->pub);
dhd_net_if_unlock_local(dhd);
}
@@ -1379,11 +2162,6 @@
dhd_info_t *dhd = handle;
dhd_if_t *ifp = event_info;
-#ifdef SOFTAP
- unsigned long flags;
- bool in_ap = FALSE;
-#endif
-
if (event != DHD_WQ_WORK_SET_MAC) {
DHD_ERROR(("%s: unexpected event \n", __FUNCTION__));
}
@@ -1393,19 +2171,25 @@
return;
}
-#ifdef SOFTAP
- flags = dhd_os_spin_lock(&dhd->pub);
- in_ap = (ap_net_dev != NULL);
- dhd_os_spin_unlock(&dhd->pub, flags);
-
- if (in_ap) {
- DHD_ERROR(("attempt to set MAC for %s in AP Mode, blocked. \n",
- ifp->net->name));
- return;
- }
-#endif
dhd_net_if_lock_local(dhd);
DHD_OS_WAKE_LOCK(&dhd->pub);
+ DHD_PERIM_LOCK(&dhd->pub);
+
+#ifdef SOFTAP
+ {
+ unsigned long flags;
+ bool in_ap = FALSE;
+ DHD_GENERAL_LOCK(&dhd->pub, flags);
+ in_ap = (ap_net_dev != NULL);
+ DHD_GENERAL_UNLOCK(&dhd->pub, flags);
+
+ if (in_ap) {
+ DHD_ERROR(("attempt to set MAC for %s in AP Mode, blocked. \n",
+ ifp->net->name));
+ goto done;
+ }
+ }
+#endif /* SOFTAP */
if (ifp == NULL || !dhd->pub.up) {
DHD_ERROR(("%s: interface info not available/down \n", __FUNCTION__));
@@ -1420,6 +2204,7 @@
DHD_ERROR(("%s: _dhd_set_mac_address() failed\n", __FUNCTION__));
done:
+ DHD_PERIM_UNLOCK(&dhd->pub);
DHD_OS_WAKE_UNLOCK(&dhd->pub);
dhd_net_if_unlock_local(dhd);
}
@@ -1431,11 +2216,6 @@
dhd_if_t *ifp = event_info;
int ifidx;
-#ifdef SOFTAP
- bool in_ap = FALSE;
- unsigned long flags;
-#endif
-
if (event != DHD_WQ_WORK_SET_MCAST_LIST) {
DHD_ERROR(("%s: unexpected event \n", __FUNCTION__));
return;
@@ -1446,21 +2226,26 @@
return;
}
-#ifdef SOFTAP
- flags = dhd_os_spin_lock(&dhd->pub);
- in_ap = (ap_net_dev != NULL);
- dhd_os_spin_unlock(&dhd->pub, flags);
-
- if (in_ap) {
- DHD_ERROR(("set MULTICAST list for %s in AP Mode, blocked. \n",
- ifp->net->name));
- ifp->set_multicast = FALSE;
- return;
- }
-#endif
-
dhd_net_if_lock_local(dhd);
DHD_OS_WAKE_LOCK(&dhd->pub);
+ DHD_PERIM_LOCK(&dhd->pub);
+
+#ifdef SOFTAP
+ {
+ bool in_ap = FALSE;
+ unsigned long flags;
+ DHD_GENERAL_LOCK(&dhd->pub, flags);
+ in_ap = (ap_net_dev != NULL);
+ DHD_GENERAL_UNLOCK(&dhd->pub, flags);
+
+ if (in_ap) {
+ DHD_ERROR(("set MULTICAST list for %s in AP Mode, blocked. \n",
+ ifp->net->name));
+ ifp->set_multicast = FALSE;
+ goto done;
+ }
+ }
+#endif /* SOFTAP */
if (ifp == NULL || !dhd->pub.up) {
DHD_ERROR(("%s: interface info not available/down \n", __FUNCTION__));
@@ -1474,6 +2259,7 @@
DHD_INFO(("%s: set multicast list for if %d\n", __FUNCTION__, ifidx));
done:
+ DHD_PERIM_UNLOCK(&dhd->pub);
DHD_OS_WAKE_UNLOCK(&dhd->pub);
dhd_net_if_unlock_local(dhd);
}
@@ -1483,7 +2269,7 @@
{
int ret = 0;
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
struct sockaddr *sa = (struct sockaddr *)addr;
int ifidx;
dhd_if_t *dhdif;
@@ -1498,7 +2284,7 @@
memcpy(dhdif->mac_addr, sa->sa_data, ETHER_ADDR_LEN);
dhdif->set_macaddress = TRUE;
dhd_net_if_unlock_local(dhd);
- dhd_deferred_schedule_work((void *)dhdif, DHD_WQ_WORK_SET_MAC,
+ dhd_deferred_schedule_work(dhd->dhd_deferred_wq, (void *)dhdif, DHD_WQ_WORK_SET_MAC,
dhd_set_mac_addr_handler, DHD_WORK_PRIORITY_LOW);
return ret;
}
@@ -1506,7 +2292,7 @@
static void
dhd_set_multicast_list(struct net_device *dev)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
int ifidx;
ifidx = dhd_net2idx(dhd, dev);
@@ -1514,8 +2300,8 @@
return;
dhd->iflist[ifidx]->set_multicast = TRUE;
- dhd_deferred_schedule_work((void *)dhd->iflist[ifidx], DHD_WQ_WORK_SET_MCAST_LIST,
- dhd_set_mcast_list_handler, DHD_WORK_PRIORITY_LOW);
+ dhd_deferred_schedule_work(dhd->dhd_deferred_wq, (void *)dhd->iflist[ifidx],
+ DHD_WQ_WORK_SET_MCAST_LIST, dhd_set_mcast_list_handler, DHD_WORK_PRIORITY_LOW);
}
#ifdef PROP_TXSTATUS
@@ -1538,11 +2324,25 @@
return 1;
}
-const uint8 wme_fifo2ac[] = { 0, 1, 2, 3, 1, 1 };
-uint8 prio2fifo[8] = { 1, 0, 0, 1, 2, 2, 3, 3 };
-#define WME_PRIO2AC(prio) wme_fifo2ac[prio2fifo[(prio)]]
-
#endif /* PROP_TXSTATUS */
+
+#if defined(DHD_8021X_DUMP)
+void
+dhd_tx_dump(osl_t *osh, void *pkt)
+{
+ uint8 *dump_data;
+ uint16 protocol;
+
+ dump_data = PKTDATA(osh, pkt);
+ protocol = (dump_data[12] << 8) | dump_data[13];
+
+ if (protocol == ETHER_TYPE_802_1X) {
+ DHD_ERROR(("ETHER_TYPE_802_1X [TX]: ver %d, type %d, replay %d\n",
+ dump_data[14], dump_data[15], dump_data[30]));
+ }
+}
+#endif /* DHD_8021X_DUMP */
+
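`dhd_tx_dump()` above reads the EtherType from raw frame bytes 12-13 (network byte order) and compares it against `ETHER_TYPE_802_1X` (0x888E) before printing the EAPOL version/type/replay fields. A minimal standalone version of that check, assuming only the standard Ethernet header layout:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Return 1 if the raw frame carries an 802.1X (EAPOL) EtherType. */
static int is_eapol_frame(const uint8_t *data, size_t len)
{
    if (len < 14)                        /* need a full Ethernet header */
        return 0;
    uint16_t proto = ((uint16_t)data[12] << 8) | data[13];
    return proto == 0x888E;              /* ETHER_TYPE_802_1X */
}
```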
int BCMFASTPATH
dhd_sendpkt(dhd_pub_t *dhdp, int ifidx, void *pktbuf)
{
@@ -1557,6 +2357,21 @@
return -ENODEV;
}
+#ifdef PCIE_FULL_DONGLE
+ if (dhdp->busstate == DHD_BUS_SUSPEND) {
+ DHD_ERROR(("%s : pcie is still in suspend state!!\n", __FUNCTION__));
+ PKTFREE(dhdp->osh, pktbuf, TRUE);
+ return -EBUSY;
+ }
+#endif /* PCIE_FULL_DONGLE */
+
+#ifdef DHD_UNICAST_DHCP
+ /* If dhcp_unicast is enabled, convert the broadcast
+  * DHCP ACK/REPLY packets to unicast.
+  */
+ if (dhdp->dhcp_unicast) {
+ dhd_convert_dhcp_broadcast_ack_to_unicast(dhdp, pktbuf, ifidx);
+ }
+#endif /* DHD_UNICAST_DHCP */
/* Update multicast statistic */
if (PKTLEN(dhdp->osh, pktbuf) >= ETHER_HDR_LEN) {
uint8 *pktdata = (uint8 *)PKTDATA(dhdp->osh, pktbuf);
@@ -1564,8 +2379,41 @@
if (ETHER_ISMULTI(eh->ether_dhost))
dhdp->tx_multicast++;
- if (ntoh16(eh->ether_type) == ETHER_TYPE_802_1X)
+ if (ntoh16(eh->ether_type) == ETHER_TYPE_802_1X) {
+ DBG_EVENT_LOG(dhdp, WIFI_EVENT_DRIVER_EAPOL_FRAME_TRANSMIT_REQUESTED);
atomic_inc(&dhd->pend_8021x_cnt);
+ }
+#ifdef DHD_DHCP_DUMP
+ if (ntoh16(eh->ether_type) == ETHER_TYPE_IP) {
+ uint16 dump_hex;
+ uint16 source_port;
+ uint16 dest_port;
+ uint16 udp_port_pos;
+ uint8 *ptr8 = (uint8 *)&pktdata[ETHER_HDR_LEN];
+ uint8 ip_header_len = (*ptr8 & 0x0f)<<2;
+
+ udp_port_pos = ETHER_HDR_LEN + ip_header_len;
+ source_port = (pktdata[udp_port_pos] << 8) | pktdata[udp_port_pos+1];
+ dest_port = (pktdata[udp_port_pos+2] << 8) | pktdata[udp_port_pos+3];
+ if (source_port == 0x0044 || dest_port == 0x0044) {
+ dump_hex = (pktdata[udp_port_pos+249] << 8) |
+ pktdata[udp_port_pos+250];
+ if (dump_hex == 0x0101) {
+ DHD_ERROR(("DHCP - DISCOVER [TX]\n"));
+ } else if (dump_hex == 0x0102) {
+ DHD_ERROR(("DHCP - OFFER [TX]\n"));
+ } else if (dump_hex == 0x0103) {
+ DHD_ERROR(("DHCP - REQUEST [TX]\n"));
+ } else if (dump_hex == 0x0105) {
+ DHD_ERROR(("DHCP - ACK [TX]\n"));
+ } else {
+ DHD_ERROR(("DHCP - 0x%X [TX]\n", dump_hex));
+ }
+ } else if (source_port == 0x0043 || dest_port == 0x0043) {
+ DHD_ERROR(("DHCP - BOOTP [TX]\n"));
+ }
+ }
+#endif /* DHD_DHCP_DUMP */
} else {
PKTFREE(dhd->pub.osh, pktbuf, TRUE);
return BCME_ERROR;
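The DHCP dump block above computes the UDP header position from the IPv4 IHL field (`(byte & 0x0f) << 2`), checks for the DHCP client port, and reads the message-type byte at a fixed offset from the UDP header. A self-contained sketch of that parsing, using the same fixed offsets as the dump code; `dhcp_dump_code` is an illustrative name, and the function returns 0 for frames that are too short or not DHCP:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define ETHER_HDR_LEN 14

/* Return the 16-bit value the dump code reads as the DHCP message
 * type (e.g. 0x0101 DISCOVER, 0x0105 ACK), or 0 if not applicable. */
static uint16_t dhcp_dump_code(const uint8_t *pkt, size_t len)
{
    if (len < ETHER_HDR_LEN + 1)
        return 0;
    uint8_t ip_header_len = (pkt[ETHER_HDR_LEN] & 0x0f) << 2;  /* IHL * 4 */
    size_t udp_port_pos = ETHER_HDR_LEN + ip_header_len;
    if (len < udp_port_pos + 251)
        return 0;
    uint16_t src = (pkt[udp_port_pos] << 8) | pkt[udp_port_pos + 1];
    uint16_t dst = (pkt[udp_port_pos + 2] << 8) | pkt[udp_port_pos + 3];
    if (src != 0x0044 && dst != 0x0044)  /* DHCP client port 68 */
        return 0;
    return (pkt[udp_port_pos + 249] << 8) | pkt[udp_port_pos + 250];
}
```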
@@ -1580,9 +2428,23 @@
/* Look into the packet and update the packet priority */
#ifndef PKTPRIO_OVERRIDE
if (PKTPRIO(pktbuf) == 0)
-#endif
+#endif
pktsetprio(pktbuf, FALSE);
+
+#ifdef PCIE_FULL_DONGLE
+ /*
+ * Look up the per-interface hash table for a matching flowring. If one is not
+ * available, allocate a unique flowid and add a flowring entry.
+ * The found or newly created flowid is placed into the pktbuf's tag.
+ */
+ ret = dhd_flowid_update(dhdp, ifidx, dhdp->flow_prio_map[(PKTPRIO(pktbuf))], pktbuf);
+ if (ret != BCME_OK) {
+ PKTCFREE(dhd->pub.osh, pktbuf, TRUE);
+ return ret;
+ }
+#endif
+
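The comment above describes a find-or-allocate pattern: look up an existing flowring for the (interface, priority) pair, else allocate a fresh flow id. A toy model of that shape, under the assumption that only `ifidx` and `prio` key the lookup (the real `dhd_flowid_update()` also considers the destination and returns BCM error codes):

```c
#include <assert.h>

#define MAX_FLOWS 16

struct flow { int ifidx, prio, id; };
static struct flow flows[MAX_FLOWS];
static int nflows;

/* Return the existing flow id for (ifidx, prio), or allocate the
 * next free one; -1 models the table-full error path. */
static int flowid_update(int ifidx, int prio)
{
    for (int i = 0; i < nflows; i++)
        if (flows[i].ifidx == ifidx && flows[i].prio == prio)
            return flows[i].id;
    if (nflows == MAX_FLOWS)
        return -1;
    flows[nflows] = (struct flow){ ifidx, prio, nflows };
    return flows[nflows++].id;
}
```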
#ifdef PROP_TXSTATUS
if (dhd_wlfc_is_supported(dhdp)) {
/* store the interface ID */
@@ -1606,12 +2468,19 @@
#ifdef WLMEDIA_HTSF
dhd_htsf_addtxts(dhdp, pktbuf);
#endif
+#if defined(DHD_8021X_DUMP)
+ dhd_tx_dump(dhdp->osh, pktbuf);
+#endif
#ifdef PROP_TXSTATUS
{
if (dhd_wlfc_commit_packets(dhdp, (f_commitpkt_t)dhd_bus_txdata,
dhdp->bus, pktbuf, TRUE) == WLFC_UNSUPPORTED) {
/* non-proptxstatus way */
+#ifdef BCMPCIE
+ ret = dhd_bus_txdata(dhdp->bus, pktbuf, (uint8)ifidx);
+#else
ret = dhd_bus_txdata(dhdp->bus, pktbuf);
+#endif /* BCMPCIE */
}
}
#else
@@ -1631,7 +2500,7 @@
int ret;
uint datalen;
void *pktbuf;
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(net);
+ dhd_info_t *dhd = DHD_DEV_INFO(net);
dhd_if_t *ifp = NULL;
int ifidx;
#ifdef WLMEDIA_HTSF
@@ -1639,10 +2508,24 @@
#else
uint8 htsfdlystat_sz = 0;
#endif
+#ifdef DHD_WMF
+ struct ether_header *eh;
+ uint8 *iph;
+#endif /* DHD_WMF */
DHD_TRACE(("%s: Enter\n", __FUNCTION__));
-
+#ifdef PCIE_FULL_DONGLE
+ if (dhd->pub.busstate == DHD_BUS_SUSPEND) {
+ DHD_ERROR(("%s : pcie is still in suspend state!!\n", __FUNCTION__));
+ dev_kfree_skb_any(skb);
+ ifp = DHD_DEV_IFP(net);
+ ifp->stats.tx_dropped++;
+ dhd->pub.tx_dropped++;
+ return NETDEV_TX_OK;
+ }
+#endif
DHD_OS_WAKE_LOCK(&dhd->pub);
+ DHD_PERIM_LOCK_TRY(DHD_FWDER_UNIT(dhd), TRUE);
/* Reject if down */
if (dhd->pub.busstate == DHD_BUS_DOWN || dhd->pub.hang_was_sent) {
@@ -1654,6 +2537,7 @@
DHD_ERROR(("%s: Event HANG sent up\n", __FUNCTION__));
net_os_send_hang_message(net);
}
+ DHD_PERIM_UNLOCK_TRY(DHD_FWDER_UNIT(dhd), TRUE);
DHD_OS_WAKE_UNLOCK(&dhd->pub);
#if (LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 20))
return -ENODEV;
@@ -1662,10 +2546,16 @@
#endif
}
- ifidx = dhd_net2idx(dhd, net);
+ ifp = DHD_DEV_IFP(net);
+ ifidx = DHD_DEV_IFIDX(net);
+
+ ASSERT(ifidx == dhd_net2idx(dhd, net));
+ ASSERT((ifp != NULL) && (ifp == dhd->iflist[ifidx]));
+
if (ifidx == DHD_BAD_IF) {
DHD_ERROR(("%s: bad ifidx %d\n", __FUNCTION__, ifidx));
netif_stop_queue(net);
+ DHD_PERIM_UNLOCK_TRY(DHD_FWDER_UNIT(dhd), TRUE);
DHD_OS_WAKE_UNLOCK(&dhd->pub);
#if (LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 20))
return -ENODEV;
@@ -1674,7 +2564,7 @@
#endif
}
- /* re-align socket buffer if "skb->data" is odd adress */
+ /* re-align socket buffer if "skb->data" is odd address */
if (((unsigned long)(skb->data)) & 0x1) {
unsigned char *data = skb->data;
uint32 length = skb->len;
@@ -1683,7 +2573,6 @@
PKTSETLEN(dhd->pub.osh, skb, length);
}
- ifp = dhd->iflist[ifidx];
datalen = PKTLEN(dhd->pub.osh, skb);
/* Make sure there's enough room for any header */
@@ -1725,12 +2614,83 @@
}
}
#endif
+#ifdef DHD_WMF
+ eh = (struct ether_header *)PKTDATA(dhd->pub.osh, pktbuf);
+ iph = (uint8 *)eh + ETHER_HDR_LEN;
+
+ /* WMF processing for multicast packets
+ * Only IPv4 packets are handled
+ */
+ if (ifp->wmf.wmf_enable && (ntoh16(eh->ether_type) == ETHER_TYPE_IP) &&
+ (IP_VER(iph) == IP_VER_4) && (ETHER_ISMULTI(eh->ether_dhost) ||
+ ((IPV4_PROT(iph) == IP_PROT_IGMP) && dhd->pub.wmf_ucast_igmp))) {
+#if defined(DHD_IGMP_UCQUERY) || defined(DHD_UCAST_UPNP)
+ void *sdu_clone;
+ bool ucast_convert = FALSE;
+#ifdef DHD_UCAST_UPNP
+ uint32 dest_ip;
+
+ dest_ip = ntoh32(*((uint32 *)(iph + IPV4_DEST_IP_OFFSET)));
+ ucast_convert = dhd->pub.wmf_ucast_upnp && MCAST_ADDR_UPNP_SSDP(dest_ip);
+#endif /* DHD_UCAST_UPNP */
+#ifdef DHD_IGMP_UCQUERY
+ ucast_convert |= dhd->pub.wmf_ucast_igmp_query &&
+ (IPV4_PROT(iph) == IP_PROT_IGMP) &&
+ (*(iph + IPV4_HLEN(iph)) == IGMPV2_HOST_MEMBERSHIP_QUERY);
+#endif /* DHD_IGMP_UCQUERY */
+ if (ucast_convert) {
+ dhd_sta_t *sta;
+ unsigned long flags;
+
+ DHD_IF_STA_LIST_LOCK(ifp, flags);
+
+ /* Convert upnp/igmp query to unicast for each assoc STA */
+ list_for_each_entry(sta, &ifp->sta_list, list) {
+ if ((sdu_clone = PKTDUP(dhd->pub.osh, pktbuf)) == NULL) {
+ DHD_IF_STA_LIST_UNLOCK(ifp, flags);
+ DHD_PERIM_UNLOCK_TRY(DHD_FWDER_UNIT(dhd), TRUE);
+ DHD_OS_WAKE_UNLOCK(&dhd->pub);
+ return (WMF_NOP);
+ }
+ dhd_wmf_forward(ifp->wmf.wmfh, sdu_clone, 0, sta, 1);
+ }
+
+ DHD_IF_STA_LIST_UNLOCK(ifp, flags);
+ DHD_PERIM_UNLOCK_TRY(DHD_FWDER_UNIT(dhd), TRUE);
+ DHD_OS_WAKE_UNLOCK(&dhd->pub);
+
+ PKTFREE(dhd->pub.osh, pktbuf, TRUE);
+ return NETDEV_TX_OK;
+ } else
+#endif /* defined(DHD_IGMP_UCQUERY) || defined(DHD_UCAST_UPNP) */
+ {
+ /* There will be no STA info if the packet is coming from LAN host
+ * Pass as NULL
+ */
+ ret = dhd_wmf_packets_handle(&dhd->pub, pktbuf, NULL, ifidx, 0);
+ switch (ret) {
+ case WMF_TAKEN:
+ case WMF_DROP:
+ /* Either taken by WMF or we should drop it.
+ * Exiting send path
+ */
+ DHD_PERIM_UNLOCK_TRY(DHD_FWDER_UNIT(dhd), TRUE);
+ DHD_OS_WAKE_UNLOCK(&dhd->pub);
+ return NETDEV_TX_OK;
+ default:
+ /* Continue the transmit path */
+ break;
+ }
+ }
+ }
+#endif /* DHD_WMF */
ret = dhd_sendpkt(&dhd->pub, ifidx, pktbuf);
done:
if (ret) {
ifp->stats.tx_dropped++;
+ dhd->pub.tx_dropped++;
}
else {
dhd->pub.tx_packets++;
@@ -1738,6 +2698,7 @@
ifp->stats.tx_bytes += datalen;
}
+ DHD_PERIM_UNLOCK_TRY(DHD_FWDER_UNIT(dhd), TRUE);
DHD_OS_WAKE_UNLOCK(&dhd->pub);
/* Return ok: we always eat the packet */
@@ -1748,6 +2709,7 @@
#endif
}
+
void
dhd_txflowcontrol(dhd_pub_t *dhdp, int ifidx, bool state)
{
@@ -1814,8 +2776,19 @@
#endif /* DHD_RX_DUMP */
+#ifdef DHD_WMF
+bool
+dhd_is_rxthread_enabled(dhd_pub_t *dhdp)
+{
+ dhd_info_t *dhd = dhdp->info;
+
+ return dhd->rxthread_enabled;
+}
+#endif /* DHD_WMF */
+
void
-dhd_rx_frame(dhd_pub_t *dhdp, int ifidx, void *pktbuf, int numpkt, uint8 chan)
+dhd_rx_frame(dhd_pub_t *dhdp, int ifidx, void *pktbuf, int numpkt, uint8 chan,
+ int pkt_wake, wake_counts_t *wcp)
{
dhd_info_t *dhd = (dhd_info_t *)dhdp->info;
struct sk_buff *skb;
@@ -1829,10 +2802,10 @@
int tout_ctrl = 0;
void *skbhead = NULL;
void *skbprev = NULL;
-#if defined(DHD_RX_DUMP) || defined(DHD_8021X_DUMP)
- char *dump_data;
uint16 protocol;
-#endif /* DHD_RX_DUMP || DHD_8021X_DUMP */
+#if defined(DHD_RX_DUMP) || defined(DHD_8021X_DUMP) || defined(DHD_WAKE_STATUS)
+ char *dump_data;
+#endif /* DHD_RX_DUMP || DHD_8021X_DUMP || DHD_WAKE_STATUS */
DHD_TRACE(("%s: Enter\n", __FUNCTION__));
@@ -1846,10 +2819,12 @@
if (ifp == NULL) {
DHD_ERROR(("%s: ifp is NULL. drop packet\n",
__FUNCTION__));
- PKTFREE(dhdp->osh, pktbuf, FALSE);
+ PKTCFREE(dhdp->osh, pktbuf, FALSE);
continue;
}
+
eh = (struct ether_header *)PKTDATA(dhdp->osh, pktbuf);
+
/* Dropping only data packets before registering net device to avoid kernel panic */
#ifndef PROP_TXSTATUS_VSDB
if ((!ifp->net || ifp->net->reg_state != NETREG_REGISTERED) &&
@@ -1860,7 +2835,7 @@
#endif /* PROP_TXSTATUS_VSDB */
DHD_ERROR(("%s: net device is NOT registered yet. drop packet\n",
__FUNCTION__));
- PKTFREE(dhdp->osh, pktbuf, FALSE);
+ PKTCFREE(dhdp->osh, pktbuf, FALSE);
continue;
}
@@ -1871,10 +2846,43 @@
there is an urgent message but no packet to
piggy-back on
*/
- PKTFREE(dhdp->osh, pktbuf, FALSE);
+ PKTCFREE(dhdp->osh, pktbuf, FALSE);
continue;
}
#endif
+#ifdef DHD_L2_FILTER
+ /* If block_ping is enabled drop the ping packet */
+ if (dhdp->block_ping) {
+ if (dhd_l2_filter_block_ping(dhdp, pktbuf, ifidx) == BCME_OK) {
+ PKTFREE(dhdp->osh, pktbuf, FALSE);
+ continue;
+ }
+ }
+#endif
+#ifdef DHD_WMF
+ /* WMF processing for multicast packets */
+ if (ifp->wmf.wmf_enable && (ETHER_ISMULTI(eh->ether_dhost))) {
+ dhd_sta_t *sta;
+ int ret;
+
+ sta = dhd_find_sta(dhdp, ifidx, (void *)eh->ether_shost);
+ ret = dhd_wmf_packets_handle(dhdp, pktbuf, sta, ifidx, 1);
+ switch (ret) {
+ case WMF_TAKEN:
+ /* The packet is taken by WMF. Continue to next iteration */
+ continue;
+ case WMF_DROP:
+ /* Packet DROP decision by WMF. Toss it */
+ DHD_ERROR(("%s: WMF decides to drop packet\n",
+ __FUNCTION__));
+ PKTCFREE(dhdp->osh, pktbuf, FALSE);
+ continue;
+ default:
+ /* Continue the receive path */
+ break;
+ }
+ }
+#endif /* DHD_WMF */
#ifdef DHDTCPACK_SUPPRESS
dhd_tcpdata_info_get(dhdp, pktbuf);
#endif
@@ -1887,6 +2895,21 @@
ASSERT(ifp);
skb->dev = ifp->net;
+#ifdef PCIE_FULL_DONGLE
+ if ((DHD_IF_ROLE_AP(dhdp, ifidx) || DHD_IF_ROLE_P2PGO(dhdp, ifidx)) &&
+ (!ifp->ap_isolate)) {
+ eh = (struct ether_header *)PKTDATA(dhdp->osh, pktbuf);
+ if (ETHER_ISUCAST(eh->ether_dhost)) {
+ if (dhd_find_sta(dhdp, ifidx, (void *)eh->ether_dhost)) {
+ dhd_sendpkt(dhdp, ifidx, pktbuf);
+ continue;
+ }
+ } else {
+ void *npktbuf = PKTDUP(dhdp->osh, pktbuf);
+ dhd_sendpkt(dhdp, ifidx, npktbuf);
+ }
+ }
+#endif /* PCIE_FULL_DONGLE */
/* Get the protocol, maintain skb around eth_type_trans()
* The main reason for this hack is for the limitation of
@@ -1899,18 +2922,54 @@
*/
eth = skb->data;
len = skb->len;
-
-#if defined(DHD_RX_DUMP) || defined(DHD_8021X_DUMP)
- dump_data = skb->data;
- protocol = (dump_data[12] << 8) | dump_data[13];
+ protocol = (skb->data[12] << 8) | skb->data[13];
if (protocol == ETHER_TYPE_802_1X) {
- DHD_ERROR(("ETHER_TYPE_802_1X: "
+ DBG_EVENT_LOG(dhdp, WIFI_EVENT_DRIVER_EAPOL_FRAME_RECEIVED);
+ }
+#if defined(DHD_RX_DUMP) || defined(DHD_8021X_DUMP) || defined(DHD_DHCP_DUMP) \
+ || defined(DHD_WAKE_STATUS)
+ dump_data = skb->data;
+#endif /* DHD_RX_DUMP || DHD_8021X_DUMP || DHD_DHCP_DUMP || DHD_WAKE_STATUS */
+#ifdef DHD_8021X_DUMP
+ if (protocol == ETHER_TYPE_802_1X) {
+ DHD_ERROR(("ETHER_TYPE_802_1X [RX]: "
"ver %d, type %d, replay %d\n",
dump_data[14], dump_data[15],
dump_data[30]));
}
-#endif /* DHD_RX_DUMP || DHD_8021X_DUMP */
+#endif /* DHD_8021X_DUMP */
+#ifdef DHD_DHCP_DUMP
+ if (protocol != ETHER_TYPE_BRCM && protocol == ETHER_TYPE_IP) {
+ uint16 dump_hex;
+ uint16 source_port;
+ uint16 dest_port;
+ uint16 udp_port_pos;
+ uint8 *ptr8 = (uint8 *)&dump_data[ETHER_HDR_LEN];
+ uint8 ip_header_len = (*ptr8 & 0x0f)<<2;
+
+ udp_port_pos = ETHER_HDR_LEN + ip_header_len;
+ source_port = (dump_data[udp_port_pos] << 8) | dump_data[udp_port_pos+1];
+ dest_port = (dump_data[udp_port_pos+2] << 8) | dump_data[udp_port_pos+3];
+ if (source_port == 0x0044 || dest_port == 0x0044) {
+ dump_hex = (dump_data[udp_port_pos+249] << 8) |
+ dump_data[udp_port_pos+250];
+ if (dump_hex == 0x0101) {
+ DHD_ERROR(("DHCP - DISCOVER [RX]\n"));
+ } else if (dump_hex == 0x0102) {
+ DHD_ERROR(("DHCP - OFFER [RX]\n"));
+ } else if (dump_hex == 0x0103) {
+ DHD_ERROR(("DHCP - REQUEST [RX]\n"));
+ } else if (dump_hex == 0x0105) {
+ DHD_ERROR(("DHCP - ACK [RX]\n"));
+ } else {
+ DHD_ERROR(("DHCP - 0x%X [RX]\n", dump_hex));
+ }
+ } else if (source_port == 0x0043 || dest_port == 0x0043) {
+ DHD_ERROR(("DHCP - BOOTP [RX]\n"));
+ }
+ }
+#endif /* DHD_DHCP_DUMP */
#if defined(DHD_RX_DUMP)
DHD_ERROR(("RX DUMP - %s\n", _get_packet_type_str(protocol)));
if (protocol != ETHER_TYPE_BRCM) {
@@ -1944,6 +3003,7 @@
if (skb->pkt_type == PACKET_MULTICAST) {
dhd->pub.rx_multicast++;
+ ifp->stats.multicast++;
}
skb->data = eth;
@@ -1978,12 +3038,81 @@
}
#endif /* PNO_SUPPORT */
+#ifdef DHD_WAKE_STATUS
+ if (unlikely(pkt_wake)) {
+ wcp->rcwake++;
+#ifdef DHD_WAKE_EVENT_STATUS
+ if (event.event_type < WLC_E_LAST)
+ wcp->rc_event[event.event_type]++;
+#endif
+ pkt_wake = 0;
+ }
+#endif
+
#ifdef DHD_DONOT_FORWARD_BCMEVENT_AS_NETWORK_PKT
PKTFREE(dhdp->osh, pktbuf, FALSE);
continue;
#endif /* DHD_DONOT_FORWARD_BCMEVENT_AS_NETWORK_PKT */
} else {
- tout_rx = DHD_PACKET_TIMEOUT_MS;
+ if (dhd_rx_suspend_again(skb) != 0) {
+ if (skb->dev->ieee80211_ptr && skb->dev->ieee80211_ptr->ps == false)
+ tout_rx = CUSTOM_DHCP_LOCK_xTIME * DHD_PACKET_TIMEOUT_MS;
+ else
+ tout_rx = DHD_PACKET_TIMEOUT_MS;
+ }
+#ifdef PROP_TXSTATUS
+ dhd_wlfc_save_rxpath_ac_time(dhdp, (uint8)PKTPRIO(skb));
+#endif /* PROP_TXSTATUS */
+
+#ifdef DHD_WAKE_STATUS
+ if (unlikely(pkt_wake)) {
+ wcp->rxwake++;
+#ifdef DHD_WAKE_RX_STATUS
+#define ETHER_ICMP6_HEADER 20
+#define ETHER_IPV6_SADDR (ETHER_ICMP6_HEADER + 2)
+#define ETHER_IPV6_DAADR (ETHER_IPV6_SADDR + IPV6_ADDR_LEN)
+#define ETHER_ICMPV6_TYPE (ETHER_IPV6_DAADR + IPV6_ADDR_LEN)
+ if (ntoh16(skb->protocol) == ETHER_TYPE_ARP) /* ARP */
+ wcp->rx_arp++;
+ if (dump_data[0] == 0xFF) { /* Broadcast */
+ wcp->rx_bcast++;
+ } else if (dump_data[0] & 0x01) { /* Multicast */
+ wcp->rx_mcast++;
+ if (ntoh16(skb->protocol) == ETHER_TYPE_IPV6) {
+ wcp->rx_multi_ipv6++;
+ if ((skb->len > ETHER_ICMP6_HEADER) &&
+ (dump_data[ETHER_ICMP6_HEADER] == IPPROTO_ICMPV6)) {
+ wcp->rx_icmpv6++;
+ if (skb->len > ETHER_ICMPV6_TYPE) {
+ switch (dump_data[ETHER_ICMPV6_TYPE]) {
+ case NDISC_ROUTER_ADVERTISEMENT:
+ wcp->rx_icmpv6_ra++;
+ break;
+ case NDISC_NEIGHBOUR_ADVERTISEMENT:
+ wcp->rx_icmpv6_na++;
+ break;
+ case NDISC_NEIGHBOUR_SOLICITATION:
+ wcp->rx_icmpv6_ns++;
+ break;
+ }
+ }
+ }
+ } else if (dump_data[2] == 0x5E) {
+ wcp->rx_multi_ipv4++;
+ } else {
+ wcp->rx_multi_other++;
+ }
+ } else { /* Unicast */
+ wcp->rx_ucast++;
+ }
+#undef ETHER_ICMP6_HEADER
+#undef ETHER_IPV6_SADDR
+#undef ETHER_IPV6_DAADR
+#undef ETHER_ICMPV6_TYPE
+#endif
+ pkt_wake = 0;
+ }
+#endif
}
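The wake-status accounting above classifies the destination address by its first byte (0xFF for broadcast, low bit set for multicast). A fuller standalone classification checks all six bytes for broadcast and the I/G bit for multicast; this is a sketch of the general rule, not the driver's exact code:

```c
#include <assert.h>
#include <stdint.h>

enum mac_class { MAC_UCAST, MAC_MCAST, MAC_BCAST };

/* Broadcast is all-FF; any other address with the I/G bit (bit 0 of
 * the first byte) set is multicast; everything else is unicast. */
static enum mac_class classify_dest_mac(const uint8_t mac[6])
{
    int all_ff = 1;
    for (int i = 0; i < 6; i++)
        if (mac[i] != 0xFF) { all_ff = 0; break; }
    if (all_ff)
        return MAC_BCAST;
    return (mac[0] & 0x01) ? MAC_MCAST : MAC_UCAST;
}
```

The `dump_data[2] == 0x5E` test in the hunk matches the 01:00:5E prefix that IPv4 multicast addresses map onto.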
ASSERT(ifidx < DHD_MAX_IFS && dhd->iflist[ifidx]);
@@ -1992,11 +3121,12 @@
if (ifp->net)
ifp->net->last_rx = jiffies;
- dhdp->dstats.rx_bytes += skb->len;
- dhdp->rx_packets++; /* Local count */
- ifp->stats.rx_bytes += skb->len;
- ifp->stats.rx_packets++;
-
+ if (ntoh16(skb->protocol) != ETHER_TYPE_BRCM) {
+ dhdp->dstats.rx_bytes += skb->len;
+ dhdp->rx_packets++; /* Local count */
+ ifp->stats.rx_bytes += skb->len;
+ ifp->stats.rx_packets++;
+ }
if (in_interrupt()) {
netif_rx(skb);
@@ -2033,6 +3163,12 @@
DHD_OS_WAKE_LOCK_RX_TIMEOUT_ENABLE(dhdp, tout_rx);
DHD_OS_WAKE_LOCK_CTRL_TIMEOUT_ENABLE(dhdp, tout_ctrl);
+
+#ifdef CONFIG_PARTIALRESUME
+ if (tout_rx || tout_ctrl)
+ wifi_process_partial_resume(dhd->adapter,
+ WIFI_PR_VOTE_FOR_RESUME);
+#endif
}
void
@@ -2062,7 +3198,7 @@
static struct net_device_stats *
dhd_get_stats(struct net_device *net)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(net);
+ dhd_info_t *dhd = DHD_DEV_INFO(net);
dhd_if_t *ifp;
int ifidx;
@@ -2083,18 +3219,6 @@
/* Use the protocol to get dongle stats */
dhd_prot_dstats(&dhd->pub);
}
-
- /* Copy dongle stats to net device stats */
- ifp->stats.rx_packets = dhd->pub.dstats.rx_packets;
- ifp->stats.tx_packets = dhd->pub.dstats.tx_packets;
- ifp->stats.rx_bytes = dhd->pub.dstats.rx_bytes;
- ifp->stats.tx_bytes = dhd->pub.dstats.tx_bytes;
- ifp->stats.rx_errors = dhd->pub.dstats.rx_errors;
- ifp->stats.tx_errors = dhd->pub.dstats.tx_errors;
- ifp->stats.rx_dropped = dhd->pub.dstats.rx_dropped;
- ifp->stats.tx_dropped = dhd->pub.dstats.tx_dropped;
- ifp->stats.multicast = dhd->pub.dstats.multicast;
-
return &ifp->stats;
}
@@ -2124,14 +3248,14 @@
break;
}
- dhd_os_sdlock(&dhd->pub);
if (dhd->pub.dongle_reset == FALSE) {
DHD_TIMER(("%s:\n", __FUNCTION__));
/* Call the bus module watchdog */
dhd_bus_watchdog(&dhd->pub);
- flags = dhd_os_spin_lock(&dhd->pub);
+
+ DHD_GENERAL_LOCK(&dhd->pub, flags);
/* Count the tick for reference */
dhd->pub.tickcnt++;
time_lapse = jiffies - jiffies_at_start;
@@ -2139,12 +3263,11 @@
/* Reschedule the watchdog */
if (dhd->wd_timer_valid)
mod_timer(&dhd->timer,
- jiffies +
- msecs_to_jiffies(dhd_watchdog_ms) -
- min(msecs_to_jiffies(dhd_watchdog_ms), time_lapse));
- dhd_os_spin_unlock(&dhd->pub, flags);
- }
- dhd_os_sdunlock(&dhd->pub);
+ jiffies +
+ msecs_to_jiffies(dhd_watchdog_ms) -
+ min(msecs_to_jiffies(dhd_watchdog_ms), time_lapse));
+ DHD_GENERAL_UNLOCK(&dhd->pub, flags);
+ }
} else {
break;
}
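The `mod_timer()` expression above reschedules the watchdog at `jiffies + interval - min(interval, time_lapse)`, so time already spent in the watchdog is credited against the next period without ever producing an expiry in the past. A sketch of that arithmetic with illustrative names:

```c
#include <assert.h>

/* Next expiry, crediting elapsed time against the interval but
 * clamping so the result is never earlier than `now`. */
static unsigned long next_watchdog_expiry(unsigned long now,
                                          unsigned long interval,
                                          unsigned long elapsed)
{
    unsigned long lapse = elapsed < interval ? elapsed : interval;
    return now + interval - lapse;
}
```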
@@ -2166,19 +3289,18 @@
return;
}
- dhd_os_sdlock(&dhd->pub);
/* Call the bus module watchdog */
dhd_bus_watchdog(&dhd->pub);
- flags = dhd_os_spin_lock(&dhd->pub);
+ DHD_GENERAL_LOCK(&dhd->pub, flags);
/* Count the tick for reference */
dhd->pub.tickcnt++;
/* Reschedule the watchdog */
if (dhd->wd_timer_valid)
mod_timer(&dhd->timer, jiffies + msecs_to_jiffies(dhd_watchdog_ms));
- dhd_os_spin_unlock(&dhd->pub, flags);
- dhd_os_sdunlock(&dhd->pub);
+ DHD_GENERAL_UNLOCK(&dhd->pub, flags);
+
}
#ifdef ENABLE_ADAPTIVE_SCHED
@@ -2191,8 +3313,8 @@
setScheduler(current, SCHED_NORMAL, &param);
} else {
if (get_scheduler_policy(current) != SCHED_FIFO) {
- param.sched_priority = (prio < MAX_RT_PRIO)? prio : (MAX_RT_PRIO-1);
- setScheduler(current, SCHED_FIFO, &param);
+ param.sched_priority = (prio < MAX_RT_PRIO)? prio : (MAX_RT_PRIO-1);
+ setScheduler(current, SCHED_FIFO, &param);
}
}
}
@@ -2276,11 +3398,11 @@
{
tsk_ctl_t *tsk = (tsk_ctl_t *)data;
dhd_info_t *dhd = (dhd_info_t *)tsk->parent;
- dhd_pub_t *pub = &dhd->pub;
#if defined(WAIT_DEQUEUE)
#define RXF_WATCHDOG_TIME 250 /* BARK_TIME(1000) / */
ulong watchdogTime = OSL_SYSUPTIME(); /* msec */
#endif
+ dhd_pub_t *pub = &dhd->pub;
/* This thread doesn't need any user-level access,
* so get rid of all our resources
@@ -2353,6 +3475,26 @@
complete_and_exit(&tsk->completed, 0);
}
+#ifdef BCMPCIE
+void dhd_dpc_kill(dhd_pub_t *dhdp)
+{
+ dhd_info_t *dhd;
+
+ if (!dhdp)
+ return;
+
+ dhd = dhdp->info;
+
+ if (!dhd)
+ return;
+
+ tasklet_kill(&dhd->tasklet);
+ DHD_ERROR(("%s: tasklet disabled\n", __FUNCTION__));
+}
+#endif
+
+static int isresched = 0;
+
static void
dhd_dpc(ulong data)
{
@@ -2366,7 +3508,8 @@
*/
/* Call bus dpc unless it indicated down (then clean stop) */
if (dhd->pub.busstate != DHD_BUS_DOWN) {
- if (dhd_bus_dpc(dhd->pub.bus))
+ isresched = dhd_bus_dpc(dhd->pub.bus);
+ if (isresched)
tasklet_schedule(&dhd->tasklet);
else
DHD_OS_WAKE_UNLOCK(&dhd->pub);
@@ -2381,16 +3524,19 @@
{
dhd_info_t *dhd = (dhd_info_t *)dhdp->info;
- DHD_OS_WAKE_LOCK(dhdp);
if (dhd->thr_dpc_ctl.thr_pid >= 0) {
/* If the semaphore does not get up,
* wake unlock should be done here
*/
+ DHD_OS_WAKE_LOCK(dhdp);
if (!binary_sema_up(&dhd->thr_dpc_ctl))
DHD_OS_WAKE_UNLOCK(dhdp);
return;
} else {
- tasklet_schedule(&dhd->tasklet);
+ if (!test_bit(TASKLET_STATE_SCHED, &dhd->tasklet.state) && !isresched) {
+ DHD_OS_WAKE_LOCK(dhdp);
+ tasklet_schedule(&dhd->tasklet);
+ }
}
}
@@ -2398,11 +3544,40 @@
dhd_sched_rxf(dhd_pub_t *dhdp, void *skb)
{
dhd_info_t *dhd = (dhd_info_t *)dhdp->info;
+#ifdef RXF_DEQUEUE_ON_BUSY
+ int ret = BCME_OK;
+ int retry = 2;
+#endif /* RXF_DEQUEUE_ON_BUSY */
DHD_OS_WAKE_LOCK(dhdp);
DHD_TRACE(("dhd_sched_rxf: Enter\n"));
+#ifdef RXF_DEQUEUE_ON_BUSY
+ do {
+ ret = dhd_rxf_enqueue(dhdp, skb);
+ if (ret == BCME_OK || ret == BCME_ERROR)
+ break;
+ else
+ OSL_SLEEP(50); /* waiting for dequeueing */
+ } while (retry-- > 0);
+ if (retry <= 0 && ret == BCME_BUSY) {
+ void *skbp = skb;
+
+ while (skbp) {
+ void *skbnext = PKTNEXT(dhdp->osh, skbp);
+ PKTSETNEXT(dhdp->osh, skbp, NULL);
+ netif_rx_ni(skbp);
+ skbp = skbnext;
+ }
+ DHD_ERROR(("send skb to kernel backlog without rxf_thread\n"));
+ }
+ else {
+ if (dhd->thr_rxf_ctl.thr_pid >= 0) {
+ up(&dhd->thr_rxf_ctl.sema);
+ }
+ }
+#else /* RXF_DEQUEUE_ON_BUSY */
do {
if (dhd_rxf_enqueue(dhdp, skb) == BCME_OK)
break;
@@ -2411,6 +3586,7 @@
up(&dhd->thr_rxf_ctl.sema);
}
return;
+#endif /* RXF_DEQUEUE_ON_BUSY */
}
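The RXF_DEQUEUE_ON_BUSY path above tries `dhd_rxf_enqueue()` a bounded number of times, sleeping between attempts, and on sustained BCME_BUSY walks the packet chain into the kernel backlog with `netif_rx_ni()` instead. A sketch of that bounded-retry-with-fallback shape; the queue and fallback functions here are injected test doubles, not the driver's real ones, and the 50 ms sleep is omitted:

```c
#include <assert.h>

typedef int (*try_fn)(void *);

/* 0: queued. <0: hard error, give up. >0: busy, retry then fall back. */
static int enqueue_with_fallback(void *pkt, try_fn enqueue, try_fn fallback,
                                 int retries)
{
    do {
        int ret = enqueue(pkt);
        if (ret == 0)
            return 0;          /* queued for the rxf thread */
        if (ret < 0)
            return ret;        /* hard error */
    } while (retries-- > 0);   /* busy: retry (driver sleeps 50 ms here) */
    return fallback(pkt);      /* still busy: bypass the rxf thread */
}

/* toy queue that reports "busy" until busy_left drains to zero */
static int busy_left;
static int fallback_used;
static int try_enqueue(void *pkt) { (void)pkt; return busy_left-- > 0 ? 2 : 0; }
static int direct_path(void *pkt) { (void)pkt; fallback_used++; return 0; }
```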
#ifdef TOE
@@ -2494,7 +3670,7 @@
static void
dhd_ethtool_get_drvinfo(struct net_device *net, struct ethtool_drvinfo *info)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(net);
+ dhd_info_t *dhd = DHD_DEV_INFO(net);
snprintf(info->driver, sizeof(info->driver), "wl");
snprintf(info->version, sizeof(info->version), "%lu", dhd->pub.drv_version);
@@ -2621,7 +3797,7 @@
static bool dhd_check_hang(struct net_device *net, dhd_pub_t *dhdp, int error)
{
dhd_info_t *dhd;
-
+ int dump_len = 0;
if (!dhdp) {
DHD_ERROR(("%s: dhdp is NULL\n", __FUNCTION__));
return FALSE;
@@ -2636,8 +3812,12 @@
DHD_ERROR(("%s : skipped due to negative pid - unloading?\n", __FUNCTION__));
return FALSE;
}
-#endif
-
+#endif
+ if (error == -ETIMEDOUT && dhdp->busstate != DHD_BUS_DOWN) {
+ if (dhd_os_socram_dump(net, &dump_len) == BCME_OK) {
+ dhd_dbg_send_urgent_evt(dhdp, NULL, 0);
+ }
+ }
if ((error == -ETIMEDOUT) || (error == -EREMOTEIO) ||
((dhdp->busstate == DHD_BUS_DOWN) && (!dhdp->dongle_reset))) {
DHD_ERROR(("%s: Event HANG send up due to re=%d te=%d e=%d s=%d\n", __FUNCTION__,
@@ -2671,7 +3851,6 @@
goto done;
}
-
/* send to dongle (must be up, and wl). */
if (pub->busstate != DHD_BUS_DATA) {
bcmerror = BCME_DONGLE_DOWN;
@@ -2768,11 +3947,8 @@
static int
dhd_ioctl_entry(struct net_device *net, struct ifreq *ifr, int cmd)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(net);
+ dhd_info_t *dhd = DHD_DEV_INFO(net);
dhd_ioctl_t ioc;
-#ifdef CONFIG_COMPAT
- dhd_ioctl_compat_t ioc_compat;
-#endif
int bcmerror = 0;
int ifidx;
int ret;
@@ -2780,6 +3956,15 @@
u16 buflen = 0;
DHD_OS_WAKE_LOCK(&dhd->pub);
+ DHD_PERIM_LOCK(&dhd->pub);
+
+ /* Interface up check for built-in type */
+ if (!dhd_download_fw_on_driverload && dhd->pub.up == 0) {
+ DHD_ERROR(("%s: Interface is down \n", __FUNCTION__));
+ DHD_PERIM_UNLOCK(&dhd->pub);
+ DHD_OS_WAKE_UNLOCK(&dhd->pub);
+ return BCME_NOTUP;
+ }
/* send to dongle only if we are not waiting for reload already */
if (dhd->pub.hang_was_sent) {
@@ -2794,6 +3979,7 @@
if (ifidx == DHD_BAD_IF) {
DHD_ERROR(("%s: BAD IF\n", __FUNCTION__));
+ DHD_PERIM_UNLOCK(&dhd->pub);
DHD_OS_WAKE_UNLOCK(&dhd->pub);
return -1;
}
@@ -2803,6 +3989,7 @@
if ((cmd >= SIOCIWFIRST) && (cmd <= SIOCIWLAST)) {
/* may recurse, do NOT lock */
ret = wl_iw_ioctl(net, ifr, cmd);
+ DHD_PERIM_UNLOCK(&dhd->pub);
DHD_OS_WAKE_UNLOCK(&dhd->pub);
return ret;
}
@@ -2811,6 +3998,7 @@
#if LINUX_VERSION_CODE > KERNEL_VERSION(2, 4, 2)
if (cmd == SIOCETHTOOL) {
ret = dhd_ethtool(dhd, (void*)ifr->ifr_data);
+ DHD_PERIM_UNLOCK(&dhd->pub);
DHD_OS_WAKE_UNLOCK(&dhd->pub);
return ret;
}
@@ -2824,28 +4012,35 @@
}
if (cmd != SIOCDEVPRIVATE) {
+ DHD_PERIM_UNLOCK(&dhd->pub);
DHD_OS_WAKE_UNLOCK(&dhd->pub);
return -EOPNOTSUPP;
}
memset(&ioc, 0, sizeof(ioc));
-#ifdef CONFIG_COMPAT
- memset(&ioc_compat, 0, sizeof(ioc_compat));
+#ifdef CONFIG_COMPAT
if (is_compat_task()) {
- /* Copy the ioc control structure part of ioctl request */
- if (copy_from_user(&ioc_compat, ifr->ifr_data, sizeof(dhd_ioctl_compat_t))) {
+ compat_wl_ioctl_t compat_ioc;
+ if (copy_from_user(&compat_ioc, ifr->ifr_data, sizeof(compat_wl_ioctl_t))) {
bcmerror = BCME_BADADDR;
goto done;
}
- ioc.cmd = ioc_compat.cmd;
- ioc.buf = (void *)(uintptr_t) ioc_compat.buf;
- ioc.len = ioc_compat.len;
- ioc.set = ioc_compat.set;
- ioc.used = ioc_compat.used;
- ioc.needed = ioc_compat.needed;
- ioc.driver = ioc_compat.driver;
- } else {
+ ioc.cmd = compat_ioc.cmd;
+ ioc.buf = compat_ptr(compat_ioc.buf);
+ ioc.len = compat_ioc.len;
+ ioc.set = compat_ioc.set;
+ ioc.used = compat_ioc.used;
+ ioc.needed = compat_ioc.needed;
+ /* To differentiate between wl and dhd read 4 more bytes */
+ if ((copy_from_user(&ioc.driver, (char *)ifr->ifr_data + sizeof(compat_wl_ioctl_t),
+ sizeof(uint)) != 0)) {
+ bcmerror = BCME_BADADDR;
+ goto done;
+ }
+ } else
+#endif /* CONFIG_COMPAT */
+ {
/* Copy the ioc control structure part of ioctl request */
if (copy_from_user(&ioc, ifr->ifr_data, sizeof(wl_ioctl_t))) {
bcmerror = BCME_BADADDR;
@@ -2859,20 +4054,6 @@
goto done;
}
}
-#else
- /* Copy the ioc control structure part of ioctl request */
- if (copy_from_user(&ioc, ifr->ifr_data, sizeof(wl_ioctl_t))) {
- bcmerror = BCME_BADADDR;
- goto done;
- }
-
- /* To differentiate between wl and dhd read 4 more byes */
- if ((copy_from_user(&ioc.driver, (char *)ifr->ifr_data + sizeof(wl_ioctl_t),
- sizeof(uint)) != 0)) {
- bcmerror = BCME_BADADDR;
- goto done;
- }
-#endif
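The compat branch above copies a 32-bit userspace ioctl struct and widens its 32-bit buffer handle to a native pointer via `compat_ptr()`. A userspace-testable sketch of that widening; the struct layouts here are illustrative, not the driver's real `compat_wl_ioctl_t`:

```c
#include <assert.h>
#include <stdint.h>

struct compat_ioc { uint32_t cmd; uint32_t buf; uint32_t len; };
struct native_ioc { uint32_t cmd; void *buf; uint32_t len; };

/* Widen a compat (32-bit) ioctl request into the native form,
 * mirroring the compat_ptr() conversion of the buffer handle. */
static void widen_ioc(const struct compat_ioc *in, struct native_ioc *out)
{
    out->cmd = in->cmd;
    out->buf = (void *)(uintptr_t)in->buf;
    out->len = in->len;
}
```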
if (!capable(CAP_NET_ADMIN)) {
bcmerror = BCME_EPERM;
@@ -2885,24 +4066,32 @@
bcmerror = BCME_NOMEM;
goto done;
}
+
+ DHD_PERIM_UNLOCK(&dhd->pub);
if (copy_from_user(local_buf, ioc.buf, buflen)) {
+ DHD_PERIM_LOCK(&dhd->pub);
bcmerror = BCME_BADADDR;
goto done;
}
+ DHD_PERIM_LOCK(&dhd->pub);
+
*(char *)(local_buf + buflen) = '\0';
}
bcmerror = dhd_ioctl_process(&dhd->pub, ifidx, &ioc, local_buf);
if (!bcmerror && buflen && local_buf && ioc.buf) {
+ DHD_PERIM_UNLOCK(&dhd->pub);
if (copy_to_user(ioc.buf, local_buf, buflen))
bcmerror = -EFAULT;
+ DHD_PERIM_LOCK(&dhd->pub);
}
done:
if (local_buf)
MFREE(dhd->pub.osh, local_buf, buflen+1);
+ DHD_PERIM_UNLOCK(&dhd->pub);
DHD_OS_WAKE_UNLOCK(&dhd->pub);
return OSL_ERROR(bcmerror);
@@ -2914,13 +4103,16 @@
dhd_stop(struct net_device *net)
{
int ifidx = 0;
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(net);
+ dhd_info_t *dhd = DHD_DEV_INFO(net);
DHD_OS_WAKE_LOCK(&dhd->pub);
+ DHD_PERIM_LOCK(&dhd->pub);
DHD_TRACE(("%s: Enter %p\n", __FUNCTION__, net));
if (dhd->pub.up == 0) {
goto exit;
}
+ dhd_if_flush_sta(DHD_DEV_IFP(net));
+
ifidx = dhd_net2idx(dhd, net);
BCM_REFERENCE(ifidx);
@@ -2941,10 +4133,21 @@
if ((dhd->dhd_state & DHD_ATTACH_STATE_ADD_IF) &&
(dhd->dhd_state & DHD_ATTACH_STATE_CFG80211)) {
int i;
+ dhd_if_t *ifp;
dhd_net_if_lock_local(dhd);
for (i = 1; i < DHD_MAX_IFS; i++)
- dhd_remove_if(&dhd->pub, i, TRUE);
+ dhd_remove_if(&dhd->pub, i, FALSE);
+
+ /* remove sta list for primary interface */
+ ifp = dhd->iflist[0];
+ if (ifp && ifp->net) {
+ dhd_if_del_sta_list(ifp);
+ }
+#ifdef PCIE_FULL_DONGLE
+ /* Initialize STA info list */
+ INIT_LIST_HEAD(&ifp->sta_list);
+#endif
dhd_net_if_unlock_local(dhd);
}
}
@@ -2961,13 +4164,22 @@
exit:
#if defined(WL_CFG80211)
if (ifidx == 0 && !dhd_download_fw_on_driverload)
- wl_android_wifi_off(net);
-#endif
+ wl_android_wifi_off(net, TRUE);
+#endif
dhd->pub.rxcnt_timeout = 0;
dhd->pub.txcnt_timeout = 0;
dhd->pub.hang_was_sent = 0;
+ /* Clear country spec for built-in type driver */
+#ifndef CUSTOM_COUNTRY_CODE
+ if (!dhd_download_fw_on_driverload) {
+ dhd->pub.dhd_cspec.country_abbrev[0] = 0x00;
+ dhd->pub.dhd_cspec.rev = 0;
+ dhd->pub.dhd_cspec.ccode[0] = 0x00;
+ }
+#endif
+ DHD_PERIM_UNLOCK(&dhd->pub);
DHD_OS_WAKE_UNLOCK(&dhd->pub);
return 0;
}
@@ -2989,6 +4201,16 @@
DHD_ERROR(("%s: enabling interworking failed, ret=%d\n", __FUNCTION__, ret));
}
+ if (ret == BCME_OK) {
+ /* basic capabilities for HS20 REL2 */
+ uint32 cap = WL_WNM_BSSTRANS | WL_WNM_NOTIF;
+ bcm_mkiovar("wnm", (char *)&cap, sizeof(cap), iovbuf, sizeof(iovbuf));
+ if ((ret = dhd_wl_ioctl_cmd(dhd, WLC_SET_VAR,
+ iovbuf, sizeof(iovbuf), TRUE, 0)) < 0) {
+ DHD_ERROR(("%s: failed to set WNM info, ret=%d\n", __FUNCTION__, ret));
+ }
+ }
+
return ret;
}
#endif /* WL11u */
@@ -2996,16 +4218,15 @@
static int
dhd_open(struct net_device *net)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(net);
+ dhd_info_t *dhd = DHD_DEV_INFO(net);
#ifdef TOE
uint32 toe_ol;
#endif
int ifidx;
int32 ret = 0;
-
-
DHD_OS_WAKE_LOCK(&dhd->pub);
+ DHD_PERIM_LOCK(&dhd->pub);
dhd->pub.dongle_trap_occured = 0;
dhd->pub.hang_was_sent = 0;
@@ -3022,7 +4243,7 @@
goto exit;
}
-#endif
+#endif
ifidx = dhd_net2idx(dhd, net);
DHD_TRACE(("%s: ifidx %d\n", __FUNCTION__, ifidx));
@@ -3055,12 +4276,15 @@
goto exit;
}
}
-#endif
+#endif
if (dhd->pub.busstate != DHD_BUS_DATA) {
/* try to bring up bus */
- if ((ret = dhd_bus_start(&dhd->pub)) != 0) {
+ DHD_PERIM_UNLOCK(&dhd->pub);
+ ret = dhd_bus_start(&dhd->pub);
+ DHD_PERIM_LOCK(&dhd->pub);
+ if (ret) {
DHD_ERROR(("%s: failed with code %d\n", __FUNCTION__, ret));
ret = -1;
goto exit;
@@ -3068,7 +4292,7 @@
}
- /* dhd_prot_init has been called in dhd_bus_start or wl_android_wifi_on */
+ /* dhd_sync_with_dongle has been called in dhd_bus_start or wl_android_wifi_on */
memcpy(net->dev_addr, dhd->pub.mac.octet, ETHER_ADDR_LEN);
#ifdef TOE
@@ -3093,7 +4317,7 @@
dhd->pub.up = 1;
#ifdef BCMDBGFS
- dhd_dbg_init(&dhd->pub);
+ dhd_dbgfs_init(&dhd->pub);
#endif
OLD_MOD_INC_USE_COUNT;
@@ -3101,6 +4325,7 @@
if (ret)
dhd_stop(net);
+ DHD_PERIM_UNLOCK(&dhd->pub);
DHD_OS_WAKE_UNLOCK(&dhd->pub);
@@ -3118,7 +4343,7 @@
/* && defined(OEM_ANDROID) && defined(BCMSDIO) */
- dhd = *(dhd_info_t **)netdev_priv(net);
+ dhd = DHD_DEV_INFO(net);
/* If driver is already initialized, do nothing
*/
@@ -3156,8 +4381,8 @@
memcpy(if_event->mac, mac, ETHER_ADDR_LEN);
strncpy(if_event->name, name, IFNAMSIZ);
if_event->name[IFNAMSIZ - 1] = '\0';
- dhd_deferred_schedule_work((void *)if_event, DHD_WQ_WORK_IF_ADD,
- dhd_ifadd_event_handler, DHD_WORK_PRIORITY_LOW);
+ dhd_deferred_schedule_work(dhdinfo->dhd_deferred_wq, (void *)if_event,
+ DHD_WQ_WORK_IF_ADD, dhd_ifadd_event_handler, DHD_WORK_PRIORITY_LOW);
}
return BCME_OK;
@@ -3169,12 +4394,8 @@
dhd_if_event_t *if_event;
#ifdef WL_CFG80211
-#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 6, 0))
- wl_cfg80211_notify_ifdel(ifevent->ifidx, name, mac, ifevent->bssidx);
-#else
if (wl_cfg80211_notify_ifdel(ifevent->ifidx, name, mac, ifevent->bssidx) == BCME_OK)
return BCME_OK;
-#endif /* (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 6, 0)) */
#endif /* WL_CFG80211 */
/* handle IF event caused by wl commands, SoftAP, WEXT and
@@ -3185,7 +4406,7 @@
memcpy(if_event->mac, mac, ETHER_ADDR_LEN);
strncpy(if_event->name, name, IFNAMSIZ);
if_event->name[IFNAMSIZ - 1] = '\0';
- dhd_deferred_schedule_work((void *)if_event, DHD_WQ_WORK_IF_DEL,
+ dhd_deferred_schedule_work(dhdinfo->dhd_deferred_wq, (void *)if_event, DHD_WQ_WORK_IF_DEL,
dhd_ifdel_event_handler, DHD_WORK_PRIORITY_LOW);
return BCME_OK;
@@ -3209,6 +4430,8 @@
if (ifp->net != NULL) {
DHD_ERROR(("%s: free existing IF %s\n", __FUNCTION__, ifp->net->name));
+ dhd_dev_priv_clear(ifp->net); /* clear net_device private */
+
/* in unregister_netdev case, the interface gets freed by net->destructor
* (which is set to free_netdev)
*/
@@ -3239,25 +4462,43 @@
memcpy(&ifp->mac_addr, mac, ETHER_ADDR_LEN);
/* Allocate etherdev, including space for private structure */
- ifp->net = alloc_etherdev(sizeof(dhdinfo));
+ ifp->net = alloc_etherdev(DHD_DEV_PRIV_SIZE);
if (ifp->net == NULL) {
DHD_ERROR(("%s: OOM - alloc_etherdev(%zu)\n", __FUNCTION__, (size_t)DHD_DEV_PRIV_SIZE));
goto fail;
}
- memcpy(netdev_priv(ifp->net), &dhdinfo, sizeof(dhdinfo));
+
+ /* Setup the dhd interface's netdevice private structure. */
+ dhd_dev_priv_save(ifp->net, dhdinfo, ifp, ifidx);
+
if (name && name[0]) {
strncpy(ifp->net->name, name, IFNAMSIZ);
ifp->net->name[IFNAMSIZ - 1] = '\0';
}
+#ifdef WL_CFG80211
+ if (ifidx == 0)
+ ifp->net->destructor = free_netdev;
+ else
+ ifp->net->destructor = dhd_netdev_free;
+#else
ifp->net->destructor = free_netdev;
+#endif /* WL_CFG80211 */
strncpy(ifp->name, ifp->net->name, IFNAMSIZ);
ifp->name[IFNAMSIZ - 1] = '\0';
dhdinfo->iflist[ifidx] = ifp;
+
+#ifdef PCIE_FULL_DONGLE
+ /* Initialize STA info list */
+ INIT_LIST_HEAD(&ifp->sta_list);
+ DHD_IF_STA_LIST_LOCK_INIT(ifp);
+#endif /* PCIE_FULL_DONGLE */
+
return ifp->net;
fail:
if (ifp != NULL) {
if (ifp->net != NULL) {
+ dhd_dev_priv_clear(ifp->net);
free_netdev(ifp->net);
ifp->net = NULL;
}
@@ -3290,6 +4531,8 @@
} else {
netif_stop_queue(ifp->net);
+
+
if (need_rtnl_lock)
unregister_netdev(ifp->net);
else
@@ -3297,6 +4540,11 @@
}
ifp->net = NULL;
}
+#ifdef DHD_WMF
+ dhd_wmf_cleanup(dhdpub, ifidx);
+#endif /* DHD_WMF */
+
+ dhd_if_del_sta_list(ifp);
dhdinfo->iflist[ifidx] = NULL;
MFREE(dhdinfo->pub.osh, ifp, sizeof(*ifp));
@@ -3339,6 +4587,162 @@
#endif
+#ifdef SHOW_LOGTRACE
+#define DEFAULT_LOG_STR_PATH "/vendor/firmware/logstrs.bin"
+static char logstrs_path[MOD_PARAM_PATHLEN] = DEFAULT_LOG_STR_PATH;
+
+module_param_string(logstrs_path, logstrs_path, MOD_PARAM_PATHLEN, 0660);
+
+int
+dhd_init_logstrs_array(dhd_event_log_t *temp)
+{
+ struct file *filep = NULL;
+ struct kstat stat;
+ mm_segment_t fs;
+ char *raw_fmts = NULL;
+ int logstrs_size = 0;
+ gfp_t kflags;
+ logstr_header_t *hdr = NULL;
+ uint32 *lognums = NULL;
+ char *logstrs = NULL;
+ int ram_index = 0;
+ char **fmts;
+ int num_fmts = 0;
+ uint32 i = 0;
+ int error = 0;
+
+ if (temp->fmts && temp->raw_fmts) {
+ return BCME_OK;
+ }
+ kflags = in_atomic() ? GFP_ATOMIC : GFP_KERNEL;
+
+ /* Save previous address limit first and then change to KERNEL_DS address limit */
+ fs = get_fs();
+ set_fs(KERNEL_DS);
+
+ filep = filp_open(logstrs_path, O_RDONLY, 0);
+ if (IS_ERR(filep)) {
+ DHD_ERROR(("%s: Failed to open logstrs file %s\n", __FUNCTION__, logstrs_path));
+ goto fail;
+ }
+ error = vfs_stat(logstrs_path, &stat);
+ if (error) {
+ DHD_ERROR(("%s: Failed to stat logstrs file %s\n", __FUNCTION__, logstrs_path));
+ goto fail;
+ }
+ logstrs_size = (int) stat.size;
+
+ raw_fmts = kmalloc(logstrs_size, kflags);
+ if (raw_fmts == NULL) {
+ DHD_ERROR(("Failed to allocate raw_fmts memory\n"));
+ goto fail;
+ }
+ if (vfs_read(filep, raw_fmts, logstrs_size, &filep->f_pos) != logstrs_size) {
+ DHD_ERROR(("Error: Log strings file read failed\n"));
+ goto fail;
+ }
+
+ /* Remember header from the logstrs.bin file */
+ hdr = (logstr_header_t *) (raw_fmts + logstrs_size -
+ sizeof(logstr_header_t));
+
+ if (hdr->log_magic == LOGSTRS_MAGIC) {
+ /*
+ * logstrs.bin start with header.
+ */
+ num_fmts = hdr->rom_logstrs_offset / sizeof(uint32);
+ ram_index = (hdr->ram_lognums_offset -
+ hdr->rom_lognums_offset) / sizeof(uint32);
+ lognums = (uint32 *) &raw_fmts[hdr->rom_lognums_offset];
+ logstrs = (char *) &raw_fmts[hdr->rom_logstrs_offset];
+ } else {
+ /*
+ * Legacy logstrs.bin format without header.
+ */
+ num_fmts = *((uint32 *) (raw_fmts)) / sizeof(uint32);
+ if (num_fmts == 0) {
+ /* Legacy ROM/RAM logstrs.bin format:
+ * - ROM 'lognums' section
+ * - RAM 'lognums' section
+ * - ROM 'logstrs' section.
+ * - RAM 'logstrs' section.
+ *
+ * 'lognums' is an array of indexes for the strings in the
+ * 'logstrs' section. The first uint32 is 0 (index of first
+ * string in ROM 'logstrs' section).
+ *
+ * The 4324b5 is the only ROM that uses this legacy format. Use the
+ * fixed number of ROM fmtnums to find the start of the RAM
+ * 'lognums' section. Use the fixed first ROM string ("Con\n") to
+ * find the ROM 'logstrs' section.
+ */
+ #define NUM_4324B5_ROM_FMTS 186
+ #define FIRST_4324B5_ROM_LOGSTR "Con\n"
+ ram_index = NUM_4324B5_ROM_FMTS;
+ lognums = (uint32 *) raw_fmts;
+ num_fmts = ram_index;
+ logstrs = (char *) &raw_fmts[num_fmts << 2];
+ while (strncmp(FIRST_4324B5_ROM_LOGSTR, logstrs, 4)) {
+ num_fmts++;
+ logstrs = (char *) &raw_fmts[num_fmts << 2];
+ }
+ } else {
+ /* Legacy RAM-only logstrs.bin format:
+ * - RAM 'lognums' section
+ * - RAM 'logstrs' section.
+ *
+ * 'lognums' is an array of indexes for the strings in the
+ * 'logstrs' section. The first uint32 is an index to the
+ * start of 'logstrs'. Therefore, if this index is divided
+ * by 'sizeof(uint32)' it provides the number of logstr
+ * entries.
+ */
+ ram_index = 0;
+ lognums = (uint32 *) raw_fmts;
+ logstrs = (char *) &raw_fmts[num_fmts << 2];
+ }
+ }
+ fmts = kmalloc(num_fmts * sizeof(char *), kflags);
+ if (fmts == NULL) {
+ DHD_ERROR(("Failed to allocate fmts memory\n"));
+ goto fail;
+ }
+
+ for (i = 0; i < num_fmts; i++) {
+ /* ROM lognums index into logstrs using 'rom_logstrs_offset' as a base
+ * (they are 0-indexed relative to 'rom_logstrs_offset').
+ *
+ * RAM lognums are already indexed to point to the correct RAM logstrs (they
+ * are 0-indexed relative to the start of the logstrs.bin file).
+ */
+ if (i == ram_index) {
+ logstrs = raw_fmts;
+ }
+ fmts[i] = &logstrs[lognums[i]];
+ }
+ temp->fmts = fmts;
+ temp->raw_fmts = raw_fmts;
+ temp->num_fmts = num_fmts;
+ filp_close(filep, NULL);
+ set_fs(fs);
+ return 0;
+fail:
+ if (raw_fmts) {
+ kfree(raw_fmts);
+ raw_fmts = NULL;
+ }
+ if (!IS_ERR(filep))
+ filp_close(filep, NULL);
+
+ /* Restore previous address limit */
+ set_fs(fs);
+
+ temp->fmts = NULL;
+ return -1;
+}
+#endif /* SHOW_LOGTRACE */
+
+
dhd_pub_t *
dhd_attach(osl_t *osh, struct dhd_bus *bus, uint bus_hdrlen)
{
@@ -3356,7 +4760,7 @@
/* will implement get_ids for DBUS later */
#if defined(BCMSDIO)
dhd_bus_get_ids(bus, &bus_type, &bus_num, &slot_num);
-#endif
+#endif
adapter = dhd_wifi_platform_get_adapter(bus_type, bus_num, slot_num);
/* Allocate primary dhd_info */
@@ -3371,12 +4775,26 @@
memset(dhd, 0, sizeof(dhd_info_t));
dhd_state |= DHD_ATTACH_STATE_DHD_ALLOC;
+ dhd->unit = dhd_found + instance_base; /* do not increment dhd_found, yet */
+
dhd->pub.osh = osh;
dhd->adapter = adapter;
#ifdef GET_CUSTOM_MAC_ENABLE
wifi_platform_get_mac_addr(dhd->adapter, dhd->pub.mac.octet);
#endif /* GET_CUSTOM_MAC_ENABLE */
+#ifdef CUSTOM_FORCE_NODFS_FLAG
+ dhd->pub.dhd_cflags |= WLAN_PLAT_NODFS_FLAG;
+ dhd->pub.force_country_change = TRUE;
+#endif
+#ifdef CUSTOM_COUNTRY_CODE
+ get_customized_country_code(dhd->adapter,
+ dhd->pub.dhd_cspec.country_abbrev, &dhd->pub.dhd_cspec,
+ dhd->pub.dhd_cflags);
+#endif /* CUSTOM_COUNTRY_CODE */
+
+ dhd->pub.short_dwell_time = -1;
+
dhd->thr_dpc_ctl.thr_pid = DHD_PID_KT_TL_INVALID;
dhd->thr_wdt_ctl.thr_pid = DHD_PID_KT_INVALID;
@@ -3391,6 +4809,8 @@
/* Link to info module */
dhd->pub.info = dhd;
+
+
/* Link to bus module */
dhd->pub.bus = bus;
dhd->pub.hdrlen = bus_hdrlen;
@@ -3425,10 +4845,16 @@
dhd->pub.skip_fc = dhd_wlfc_skip_fc;
dhd->pub.plat_init = dhd_wlfc_plat_init;
dhd->pub.plat_deinit = dhd_wlfc_plat_deinit;
+#ifdef WLFC_STATE_PREALLOC
+ dhd->pub.wlfc_state = MALLOC(dhd->pub.osh, sizeof(athost_wl_status_info_t));
+ if (dhd->pub.wlfc_state == NULL)
+ DHD_ERROR(("%s: wlfc_state prealloc failed\n", __FUNCTION__));
+#endif /* WLFC_STATE_PREALLOC */
#endif /* PROP_TXSTATUS */
/* Initialize other structure content */
init_waitqueue_head(&dhd->ioctl_resp_wait);
+ init_waitqueue_head(&dhd->d3ack_wait);
init_waitqueue_head(&dhd->ctrl_wait);
/* Initialize the spinlocks */
@@ -3450,7 +4876,6 @@
dhd->wakelock_wd_counter = 0;
dhd->wakelock_rx_timeout_enable = 0;
dhd->wakelock_ctrl_timeout_enable = 0;
- dhd->waive_wakelock = FALSE;
#ifdef CONFIG_HAS_WAKELOCK
wake_lock_init(&dhd->wl_wifi, WAKE_LOCK_SUSPEND, "wlan_wake");
wake_lock_init(&dhd->wl_rxwake, WAKE_LOCK_SUSPEND, "wlan_rx_wake");
@@ -3491,6 +4916,16 @@
}
#endif /* defined(WL_WIRELESS_EXT) */
+ /* attach debug support */
+ if (dhd_os_dbg_attach(&dhd->pub)) {
+ DHD_ERROR(("%s debug module attach failed\n", __FUNCTION__));
+ goto fail;
+ }
+
+ if (dhd_sta_pool_init(&dhd->pub, DHD_MAX_STA) != BCME_OK) {
+ DHD_ERROR(("%s: Initializing %u STAs failed\n", __FUNCTION__, DHD_MAX_STA));
+ goto fail;
+ }
/* Set up the watchdog timer */
init_timer(&dhd->timer);
@@ -3528,21 +4963,20 @@
dhd_state |= DHD_ATTACH_STATE_THREADS_CREATED;
- /*
- * Save the dhd_info into the priv
- */
- memcpy(netdev_priv(net), &dhd, sizeof(dhd));
-
-#if (LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 27)) && (LINUX_VERSION_CODE <= \
- KERNEL_VERSION(2, 6, 39)) && defined(CONFIG_PM_SLEEP)
- dhd->pm_notifier.notifier_call = dhd_pm_callback;
- dhd->pm_notifier.priority = 10;
+#if defined(CONFIG_PM_SLEEP)
if (!dhd_pm_notifier_registered) {
dhd_pm_notifier_registered = TRUE;
- register_pm_notifier(&dhd->pm_notifier);
+ register_pm_notifier(&dhd_pm_notifier);
}
-#endif /* (LINUX_VERSION >= 2.6.27 && LINUX_VERSION <= 2.6.39 && CONFIG_PM_SLEEP) */
-
+#endif /* CONFIG_PM_SLEEP */
+#ifdef SAR_SUPPORT
+ dhd->sar_notifier.notifier_call = dhd_sar_callback;
+ if (!dhd_sar_notifier_registered) {
+ dhd_sar_notifier_registered = TRUE;
+ dhd->sar_enable = 1; /* unknown state value */
+ register_notifier_by_sar(&dhd->sar_notifier);
+ }
+#endif /* SAR_SUPPORT */
#if defined(CONFIG_HAS_EARLYSUSPEND) && defined(DHD_USE_EARLYSUSPEND)
dhd->early_suspend.level = EARLY_SUSPEND_LEVEL_BLANK_SCREEN + 20;
dhd->early_suspend.suspend = dhd_early_suspend;
@@ -3558,10 +4992,12 @@
register_inetaddr_notifier(&dhd_inetaddr_notifier);
}
#endif /* ARP_OFFLOAD_SUPPORT */
+#ifdef CONFIG_IPV6
if (!dhd_inet6addr_notifier_registered) {
dhd_inet6addr_notifier_registered = TRUE;
register_inet6addr_notifier(&dhd_inet6addr_notifier);
}
+#endif
dhd->dhd_deferred_wq = dhd_deferred_work_init((void *)dhd);
#ifdef DEBUG_CPU_FREQ
dhd->new_freq = alloc_percpu(int);
@@ -3570,7 +5006,9 @@
#endif
#ifdef DHDTCPACK_SUPPRESS
#ifdef BCMSDIO
- dhd_tcpack_suppress_set(&dhd->pub, TCPACK_SUP_DELAYTX);
+ dhd_tcpack_suppress_set(&dhd->pub, TCPACK_SUP_REPLACE);
+#elif defined(BCMPCIE)
+ dhd_tcpack_suppress_set(&dhd->pub, TCPACK_SUP_REPLACE);
#else
dhd_tcpack_suppress_set(&dhd->pub, TCPACK_SUP_OFF);
#endif /* BCMSDIO */
@@ -3579,7 +5017,6 @@
dhd_state |= DHD_ATTACH_STATE_DONE;
dhd->dhd_state = dhd_state;
- dhd->unit = dhd_found + instance_base;
dhd_found++;
return &dhd->pub;
@@ -3681,8 +5118,9 @@
/* clear the path in module parameter */
firmware_path[0] = '\0';
- nvram_path[0] = '\0';
+#ifndef BCMEMBEDIMAGE
+ /* fw_path and nv_path are not mandatory for BCMEMBEDIMAGE */
if (dhdinfo->fw_path[0] == '\0') {
DHD_ERROR(("firmware path not found\n"));
return FALSE;
@@ -3691,6 +5129,7 @@
DHD_ERROR(("nvram path not found\n"));
return FALSE;
}
+#endif /* BCMEMBEDIMAGE */
return TRUE;
}
@@ -3707,6 +5146,8 @@
DHD_TRACE(("Enter %s:\n", __FUNCTION__));
+ DHD_PERIM_LOCK(dhdp);
+
/* try to download image and nvram to the dongle */
if (dhd->pub.busstate == DHD_BUS_DOWN && dhd_update_fw_nv_path(dhd)) {
DHD_INFO(("%s download fw %s, nv %s\n", __FUNCTION__, dhd->fw_path, dhd->nv_path));
@@ -3715,10 +5156,15 @@
if (ret < 0) {
DHD_ERROR(("%s: failed to download firmware %s\n",
__FUNCTION__, dhd->fw_path));
+ DHD_PERIM_UNLOCK(dhdp);
return ret;
}
}
+#ifdef SHOW_LOGTRACE
+ dhd_init_logstrs_array(&dhd->event_data);
+#endif /* SHOW_LOGTRACE */
if (dhd->pub.busstate != DHD_BUS_LOAD) {
+ DHD_PERIM_UNLOCK(dhdp);
return -ENETDOWN;
}
@@ -3733,6 +5179,7 @@
DHD_ERROR(("%s, dhd_bus_init failed %d\n", __FUNCTION__, ret));
dhd_os_sdunlock(dhdp);
+ DHD_PERIM_UNLOCK(dhdp);
return ret;
}
#if defined(OOB_INTR_ONLY)
@@ -3740,42 +5187,58 @@
if (dhd_bus_oob_intr_register(dhdp)) {
/* deactivate timer and wait for the handler to finish */
- flags = dhd_os_spin_lock(&dhd->pub);
+ DHD_GENERAL_LOCK(&dhd->pub, flags);
dhd->wd_timer_valid = FALSE;
- dhd_os_spin_unlock(&dhd->pub, flags);
+ DHD_GENERAL_UNLOCK(&dhd->pub, flags);
del_timer_sync(&dhd->timer);
DHD_ERROR(("%s Host failed to register for OOB\n", __FUNCTION__));
dhd_os_sdunlock(dhdp);
+ DHD_PERIM_UNLOCK(dhdp);
DHD_OS_WD_WAKE_UNLOCK(&dhd->pub);
return -ENODEV;
}
/* Enable oob at firmware */
dhd_enable_oob_intr(dhd->pub.bus, TRUE);
-#endif
+#endif
+#ifdef PCIE_FULL_DONGLE
+ {
+ uint8 txpush = 0;
+ uint32 num_flowrings; /* includes H2D common rings */
+ num_flowrings = dhd_bus_max_h2d_queues(dhd->pub.bus, &txpush);
+ DHD_ERROR(("%s: Initializing %u flowrings\n", __FUNCTION__,
+ num_flowrings));
+ if ((ret = dhd_flow_rings_init(&dhd->pub, num_flowrings)) != BCME_OK) {
+ DHD_PERIM_UNLOCK(dhdp);
+ return ret;
+ }
+ }
+#endif /* PCIE_FULL_DONGLE */
+
+ /* Do protocol initialization necessary for IOCTL/IOVAR */
+ dhd_prot_init(&dhd->pub);
/* If bus is not ready, can't come up */
if (dhd->pub.busstate != DHD_BUS_DATA) {
- flags = dhd_os_spin_lock(&dhd->pub);
+ DHD_GENERAL_LOCK(&dhd->pub, flags);
dhd->wd_timer_valid = FALSE;
- dhd_os_spin_unlock(&dhd->pub, flags);
+ DHD_GENERAL_UNLOCK(&dhd->pub, flags);
del_timer_sync(&dhd->timer);
DHD_ERROR(("%s failed bus is not ready\n", __FUNCTION__));
dhd_os_sdunlock(dhdp);
+ DHD_PERIM_UNLOCK(dhdp);
DHD_OS_WD_WAKE_UNLOCK(&dhd->pub);
return -ENODEV;
}
dhd_os_sdunlock(dhdp);
- dhd_process_cid_mac(dhdp, TRUE);
-
- /* Bus is ready, do any protocol initialization */
- if ((ret = dhd_prot_init(&dhd->pub)) < 0)
+ /* Bus is ready, query any dongle information */
+ if ((ret = dhd_sync_with_dongle(&dhd->pub)) < 0) {
+ DHD_PERIM_UNLOCK(dhdp);
return ret;
-
- dhd_process_cid_mac(dhdp, FALSE);
+ }
#ifdef ARP_OFFLOAD_SUPPORT
if (dhd->pend_ipaddr) {
@@ -3786,6 +5249,7 @@
}
#endif /* ARP_OFFLOAD_SUPPORT */
+ DHD_PERIM_UNLOCK(dhdp);
return 0;
}
#ifdef WLTDLS
@@ -3848,7 +5312,7 @@
}
int dhd_tdls_enable(struct net_device *dev, bool tdls_on, bool auto_on, struct ether_addr *mac)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
int ret = 0;
if (dhd)
ret = _dhd_tdls_enable(&dhd->pub, tdls_on, auto_on, mac);
@@ -3856,7 +5320,63 @@
ret = BCME_ERROR;
return ret;
}
-#endif
+#ifdef PCIE_FULL_DONGLE
+void dhd_tdls_update_peer_info(struct net_device *dev, bool connect, uint8 *da)
+{
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
+ dhd_pub_t *dhdp = (dhd_pub_t *)&dhd->pub;
+ tdls_peer_node_t *cur = dhdp->peer_tbl.node;
+ tdls_peer_node_t *new = NULL, *prev = NULL;
+ dhd_if_t *dhdif;
+ uint8 sa[ETHER_ADDR_LEN];
+ int ifidx = dhd_net2idx(dhd, dev);
+
+ if (ifidx == DHD_BAD_IF)
+ return;
+
+ dhdif = dhd->iflist[ifidx];
+ memcpy(sa, dhdif->mac_addr, ETHER_ADDR_LEN);
+
+ if (connect) {
+ while (cur != NULL) {
+ if (!memcmp(da, cur->addr, ETHER_ADDR_LEN)) {
+ DHD_ERROR(("%s: TDLS peer exists already, line %d\n",
+ __FUNCTION__, __LINE__));
+ return;
+ }
+ cur = cur->next;
+ }
+
+ new = MALLOC(dhdp->osh, sizeof(tdls_peer_node_t));
+ if (new == NULL) {
+ DHD_ERROR(("%s: Failed to allocate memory\n", __FUNCTION__));
+ return;
+ }
+ memcpy(new->addr, da, ETHER_ADDR_LEN);
+ new->next = dhdp->peer_tbl.node;
+ dhdp->peer_tbl.node = new;
+ dhdp->peer_tbl.tdls_peer_count++;
+
+ } else {
+ while (cur != NULL) {
+ if (!memcmp(da, cur->addr, ETHER_ADDR_LEN)) {
+ dhd_flow_rings_delete_for_peer(dhdp, ifidx, da);
+ if (prev)
+ prev->next = cur->next;
+ else
+ dhdp->peer_tbl.node = cur->next;
+ MFREE(dhdp->osh, cur, sizeof(tdls_peer_node_t));
+ dhdp->peer_tbl.tdls_peer_count--;
+ return;
+ }
+ prev = cur;
+ cur = cur->next;
+ }
+ DHD_ERROR(("%s: TDLS peer entry not found\n", __FUNCTION__));
+ }
+}
+#endif /* PCIE_FULL_DONGLE */
+#endif
bool dhd_is_concurrent_mode(dhd_pub_t *dhd)
{
@@ -3919,14 +5439,13 @@
return ret;
#else
return 0;
-#endif
+#endif
}
}
}
return 0;
}
-#endif
-
+#endif
int
dhd_preinit_ioctls(dhd_pub_t *dhd)
@@ -3935,11 +5454,18 @@
char eventmask[WL_EVENTING_MASK_LEN];
char iovbuf[WL_EVENTING_MASK_LEN + 12]; /* Room for "event_msgs" + '\0' + bitvec */
uint32 buf_key_b4_m4 = 1;
+ uint8 msglen;
+ eventmsgs_ext_t *eventmask_msg;
+ char iov_buf[WLC_IOCTL_SMLEN];
+ int ret2 = 0;
#if defined(CUSTOM_AMPDU_BA_WSIZE)
uint32 ampdu_ba_wsize = 0;
-#endif
+#endif
#if defined(CUSTOM_AMPDU_MPDU)
- uint32 ampdu_mpdu = 0;
+ int32 ampdu_mpdu = 0;
+#endif
+#if defined(CUSTOM_AMPDU_RELEASE)
+ int32 ampdu_release = 0;
#endif
#if defined(BCMSDIO)
@@ -3947,10 +5473,12 @@
int wlfc_enable = TRUE;
#ifndef DISABLE_11N
uint32 hostreorder = 1;
- int ret2 = 0;
#endif /* DISABLE_11N */
#endif /* PROP_TXSTATUS */
-#endif
+#endif
+#ifdef PCIE_FULL_DONGLE
+ uint32 wl_ap_isolate;
+#endif /* PCIE_FULL_DONGLE */
#ifdef DHD_ENABLE_LPC
uint32 lpc = 1;
@@ -3963,14 +5491,11 @@
#if defined(CUSTOMER_HW2) && defined(USE_WL_CREDALL)
uint32 credall = 1;
#endif
- uint bcn_timeout = 4;
+ uint bcn_timeout = CUSTOM_BCN_TIMEOUT_SETTING;
uint retry_max = 3;
#if defined(ARP_OFFLOAD_SUPPORT)
int arpoe = 1;
#endif
- int scan_assoc_time = DHD_SCAN_ASSOC_ACTIVE_TIME;
- int scan_unassoc_time = DHD_SCAN_UNASSOC_ACTIVE_TIME;
- int scan_passive_time = DHD_SCAN_PASSIVE_TIME;
char buf[WLC_IOCTL_SMLEN];
char *ptr;
uint32 listen_interval = CUSTOM_LISTEN_INTERVAL; /* Default Listen Interval in Beacons */
@@ -4001,16 +5526,16 @@
struct ether_addr p2p_ea;
#endif
-#if defined(AP) || defined(WLP2P)
+#if (defined(AP) || defined(WLP2P)) && !defined(SOFTAP_AND_GC)
uint32 apsta = 1; /* Enable APSTA mode */
-#endif /* defined(AP) || defined(WLP2P) */
+#elif defined(SOFTAP_AND_GC)
+ uint32 apsta = 0;
+ int ap_mode = 1;
+#endif /* (defined(AP) || defined(WLP2P)) && !defined(SOFTAP_AND_GC) */
#ifdef GET_CUSTOM_MAC_ENABLE
struct ether_addr ea_addr;
#endif /* GET_CUSTOM_MAC_ENABLE */
-#ifdef CUSTOM_AMPDU_BA_WSIZE
- struct ampdu_tid_control atc;
-#endif
#ifdef DISABLE_11N
uint32 nmode = 0;
#endif /* DISABLE_11N */
@@ -4024,15 +5549,24 @@
#ifdef CUSTOM_PSPRETEND_THR
uint32 pspretend_thr = CUSTOM_PSPRETEND_THR;
#endif
+#ifdef MAX_AP_CLIENT_CNT
+ uint32 max_assoc = MAX_AP_CLIENT_CNT;
+#endif
+
#ifdef PKT_FILTER_SUPPORT
dhd_pkt_filter_enable = TRUE;
#endif /* PKT_FILTER_SUPPORT */
#ifdef WLTDLS
dhd->tdls_enable = FALSE;
#endif /* WLTDLS */
+#ifdef DONGLE_ENABLE_ISOLATION
+ dhd->dongle_isolation = TRUE;
+#endif /* DONGLE_ENABLE_ISOLATION */
dhd->suspend_bcn_li_dtim = CUSTOM_SUSPEND_BCN_LI_DTIM;
DHD_TRACE(("Enter %s\n", __FUNCTION__));
dhd->op_mode = 0;
+ /* clear AP flags */
+ dhd->dhd_cflags &= ~WLAN_PLAT_AP_FLAG;
if ((!op_mode && dhd_get_fw_mode(dhd->info) == DHD_FLAG_MFG_MODE) ||
(op_mode == DHD_FLAG_MFG_MODE)) {
/* Check and adjust IOCTL response timeout for Manufactring firmware */
@@ -4071,6 +5605,7 @@
#ifdef GET_CUSTOM_MAC_ENABLE
}
#endif /* GET_CUSTOM_MAC_ENABLE */
+
/* get a capabilities from firmware */
memset(dhd->fw_capabilities, 0, sizeof(dhd->fw_capabilities));
bcm_mkiovar("cap", 0, 0, dhd->fw_capabilities, sizeof(dhd->fw_capabilities));
@@ -4095,9 +5630,9 @@
#ifdef SET_RANDOM_MAC_SOFTAP
SRANDOM32((uint)jiffies);
rand_mac = RANDOM32();
- iovbuf[0] = 0x02; /* locally administered bit */
- iovbuf[1] = 0x1A;
- iovbuf[2] = 0x11;
+ iovbuf[0] = (unsigned char)(vendor_oui >> 16) | 0x02; /* locally administered bit */
+ iovbuf[1] = (unsigned char)(vendor_oui >> 8);
+ iovbuf[2] = (unsigned char)vendor_oui;
iovbuf[3] = (unsigned char)(rand_mac & 0x0F) | 0xF0;
iovbuf[4] = (unsigned char)(rand_mac >> 8);
iovbuf[5] = (unsigned char)(rand_mac >> 16);
@@ -4117,6 +5652,15 @@
DHD_ERROR(("%s mpc for HostAPD failed %d\n", __FUNCTION__, ret));
}
#endif
+#ifdef MAX_AP_CLIENT_CNT
+ bcm_mkiovar("maxassoc", (char *)&max_assoc, 4, iovbuf, sizeof(iovbuf));
+ if ((ret = dhd_wl_ioctl_cmd(dhd, WLC_SET_VAR, iovbuf,
+ sizeof(iovbuf), TRUE, 0)) < 0) {
+ DHD_ERROR(("%s maxassoc for HostAPD failed %d\n", __FUNCTION__, ret));
+ }
+#endif
+ /* set AP flag for specific country code of SOFTAP */
+ dhd->dhd_cflags |= WLAN_PLAT_AP_FLAG;
} else if ((!op_mode && dhd_get_fw_mode(dhd->info) == DHD_FLAG_MFG_MODE) ||
(op_mode == DHD_FLAG_MFG_MODE)) {
#if defined(ARP_OFFLOAD_SUPPORT)
@@ -4159,6 +5703,12 @@
DHD_ERROR(("%s APSTA for P2P failed ret= %d\n", __FUNCTION__, ret));
}
+#if defined(SOFTAP_AND_GC)
+ if ((ret = dhd_wl_ioctl_cmd(dhd, WLC_SET_AP,
+ (char *)&ap_mode, sizeof(ap_mode), TRUE, 0)) < 0) {
+ DHD_ERROR(("%s WLC_SET_AP failed %d\n", __FUNCTION__, ret));
+ }
+#endif
memcpy(&p2p_ea, &dhd->mac, ETHER_ADDR_LEN);
ETHER_SET_LOCALADDR(&p2p_ea);
bcm_mkiovar("p2p_da_override", (char *)&p2p_ea,
@@ -4172,11 +5722,14 @@
}
#else
(void)concurrent_mode;
-#endif
+#endif
}
DHD_ERROR(("Firmware up: op_mode=0x%04x, MAC="MACDBG"\n",
dhd->op_mode, MAC2STRDBG(dhd->mac.octet)));
+ /* get a ccode and revision for the country code */
+ get_customized_country_code(dhd->info->adapter, dhd->dhd_cspec.country_abbrev,
+ &dhd->dhd_cspec, dhd->dhd_cflags);
/* Set Country code */
if (dhd->dhd_cspec.ccode[0] != 0) {
bcm_mkiovar("country", (char *)&dhd->dhd_cspec,
@@ -4232,7 +5785,18 @@
bcm_mkiovar("lpc", (char *)&lpc, 4, iovbuf, sizeof(iovbuf));
if ((ret = dhd_wl_ioctl_cmd(dhd, WLC_SET_VAR, iovbuf,
sizeof(iovbuf), TRUE, 0)) < 0) {
- DHD_ERROR(("%s Set lpc failed %d\n", __FUNCTION__, ret));
+ if (ret != BCME_NOTDOWN) {
+ DHD_ERROR(("%s Set lpc failed %d\n", __FUNCTION__, ret));
+ } else {
+ u32 wl_down = 1;
+ ret = dhd_wl_ioctl_cmd(dhd, WLC_DOWN,
+ (char *)&wl_down, sizeof(wl_down), TRUE, 0);
+ DHD_ERROR(("%s lpc fail WL_DOWN : %d, lpc = %d\n", __FUNCTION__, ret, lpc));
+
+ bcm_mkiovar("lpc", (char *)&lpc, 4, iovbuf, sizeof(iovbuf));
+ ret = dhd_wl_ioctl_cmd(dhd, WLC_SET_VAR, iovbuf, sizeof(iovbuf), TRUE, 0);
+ DHD_ERROR(("%s Set lpc ret --> %d\n", __FUNCTION__, ret));
+ }
}
#endif /* DHD_ENABLE_LPC */
@@ -4276,7 +5840,7 @@
if (ap_fw_loaded == TRUE) {
dhd_wl_ioctl_cmd(dhd, WLC_SET_DTIMPRD, (char *)&dtim, sizeof(dtim), TRUE, 0);
}
-#endif
+#endif
#if defined(KEEP_ALIVE)
{
@@ -4285,7 +5849,7 @@
#if defined(SOFTAP)
if (ap_fw_loaded == FALSE)
-#endif
+#endif
if (!(dhd->op_mode &
(DHD_FLAG_HOSTAP_MODE | DHD_FLAG_MFG_MODE))) {
if ((res = dhd_keep_alive_onoff(dhd)) < 0)
@@ -4294,6 +5858,7 @@
}
}
#endif /* defined(KEEP_ALIVE) */
+
#ifdef USE_WL_TXBF
bcm_mkiovar("txbf", (char *)&txbf, 4, iovbuf, sizeof(iovbuf));
if ((ret = dhd_wl_ioctl_cmd(dhd, WLC_SET_VAR, iovbuf,
@@ -4321,13 +5886,9 @@
__FUNCTION__, ampdu_ba_wsize, ret));
}
}
+#endif
- atc.tid = 7;
- atc.enable = 0;
- bcm_mkiovar("ampdu_rx_tid", (char *)&atc, sizeof(atc), iovbuf, sizeof(iovbuf));
- dhd_wl_ioctl_cmd(dhd, WLC_SET_VAR, iovbuf, sizeof(iovbuf), TRUE, 0);
-#endif
#if defined(CUSTOM_AMPDU_MPDU)
ampdu_mpdu = CUSTOM_AMPDU_MPDU;
if (ampdu_mpdu != 0 && (ampdu_mpdu <= ampdu_ba_wsize)) {
@@ -4340,6 +5901,18 @@
}
#endif /* CUSTOM_AMPDU_MPDU */
+#if defined(CUSTOM_AMPDU_RELEASE)
+ ampdu_release = CUSTOM_AMPDU_RELEASE;
+ if (ampdu_release != 0 && (ampdu_release <= ampdu_ba_wsize)) {
+ bcm_mkiovar("ampdu_release", (char *)&ampdu_release, 4, iovbuf, sizeof(iovbuf));
+ if ((ret = dhd_wl_ioctl_cmd(dhd, WLC_SET_VAR, iovbuf,
+ sizeof(iovbuf), TRUE, 0)) < 0) {
+ DHD_ERROR(("%s Set ampdu_release to %d failed %d\n",
+ __FUNCTION__, CUSTOM_AMPDU_RELEASE, ret));
+ }
+ }
+#endif /* CUSTOM_AMPDU_RELEASE */
+
#ifdef CUSTOM_PSPRETEND_THR
/* Turn off MPC in AP mode */
bcm_mkiovar("pspretend_threshold", (char *)&pspretend_thr, 4,
@@ -4369,6 +5942,7 @@
setbit(eventmask, WLC_E_SET_SSID);
setbit(eventmask, WLC_E_PRUNE);
setbit(eventmask, WLC_E_AUTH);
+ setbit(eventmask, WLC_E_AUTH_IND);
setbit(eventmask, WLC_E_ASSOC);
setbit(eventmask, WLC_E_REASSOC);
setbit(eventmask, WLC_E_REASSOC_IND);
@@ -4390,7 +5964,6 @@
setbit(eventmask, WLC_E_TXFAIL);
#endif
setbit(eventmask, WLC_E_JOIN_START);
- setbit(eventmask, WLC_E_SCAN_COMPLETE);
#ifdef WLMEDIA_HTSF
setbit(eventmask, WLC_E_HTSFSYNC);
#endif /* WLMEDIA_HTSF */
@@ -4406,6 +5979,9 @@
#ifdef WLTDLS
setbit(eventmask, WLC_E_TDLS_PEER_EVENT);
#endif /* WLTDLS */
+#ifdef RTT_SUPPORT
+ setbit(eventmask, WLC_E_PROXD);
+#endif /* RTT_SUPPORT */
#ifdef WL_CFG80211
setbit(eventmask, WLC_E_ESCAN_RESULT);
if (dhd->op_mode & DHD_FLAG_P2P_MODE) {
@@ -4413,6 +5989,13 @@
setbit(eventmask, WLC_E_P2P_DISC_LISTEN_COMPLETE);
}
#endif /* WL_CFG80211 */
+ setbit(eventmask, WLC_E_TRACE);
+
+#ifdef EAPOL_PKT_PRIO
+#ifdef CONFIG_BCMDHD_PCIE
+ dhd_update_flow_prio_map(dhd, DHD_FLOW_PRIO_LLR_MAP);
+#endif /* CONFIG_BCMDHD_PCIE */
+#endif /* EAPOL_PKT_PRIO */
/* Write updated Event mask */
bcm_mkiovar("event_msgs", eventmask, WL_EVENTING_MASK_LEN, iovbuf, sizeof(iovbuf));
@@ -4421,12 +6004,56 @@
goto done;
}
- dhd_wl_ioctl_cmd(dhd, WLC_SET_SCAN_CHANNEL_TIME, (char *)&scan_assoc_time,
- sizeof(scan_assoc_time), TRUE, 0);
- dhd_wl_ioctl_cmd(dhd, WLC_SET_SCAN_UNASSOC_TIME, (char *)&scan_unassoc_time,
- sizeof(scan_unassoc_time), TRUE, 0);
- dhd_wl_ioctl_cmd(dhd, WLC_SET_SCAN_PASSIVE_TIME, (char *)&scan_passive_time,
- sizeof(scan_passive_time), TRUE, 0);
+ /* make up event mask ext message iovar for event larger than 128 */
+ msglen = ROUNDUP(WLC_E_LAST, NBBY)/NBBY + EVENTMSGS_EXT_STRUCT_SIZE;
+ eventmask_msg = (eventmsgs_ext_t*)kmalloc(msglen, GFP_KERNEL);
+ if (eventmask_msg == NULL) {
+ DHD_ERROR(("failed to allocate %d bytes for event_msg_ext\n", msglen));
+ return BCME_NOMEM;
+ }
+ bzero(eventmask_msg, msglen);
+ eventmask_msg->ver = EVENTMSGS_VER;
+ eventmask_msg->len = ROUNDUP(WLC_E_LAST, NBBY)/NBBY;
+
+ /* Read event_msgs_ext mask */
+ bcm_mkiovar("event_msgs_ext", (char *)eventmask_msg, msglen, iov_buf, sizeof(iov_buf));
+ ret2 = dhd_wl_ioctl_cmd(dhd, WLC_GET_VAR, iov_buf, sizeof(iov_buf), FALSE, 0);
+ if (ret2 != BCME_UNSUPPORTED)
+ ret = ret2;
+ if (ret2 == 0) { /* event_msgs_ext must be supported */
+ bcopy(iov_buf, eventmask_msg, msglen);
+ setbit(eventmask_msg->mask, WLC_E_RSSI_LQM);
+#ifdef GSCAN_SUPPORT
+ setbit(eventmask_msg->mask, WLC_E_PFN_GSCAN_FULL_RESULT);
+ setbit(eventmask_msg->mask, WLC_E_PFN_SCAN_COMPLETE);
+ setbit(eventmask_msg->mask, WLC_E_PFN_SWC);
+ setbit(eventmask_msg->mask, WLC_E_PFN_SSID_EXT);
+ setbit(eventmask_msg->mask, WLC_E_ROAM_EXP_EVENT);
+#endif /* GSCAN_SUPPORT */
+#ifdef BT_WIFI_HANDOVER
+ setbit(eventmask_msg->mask, WLC_E_BT_WIFI_HANDOVER_REQ);
+#endif /* BT_WIFI_HANDOVER */
+
+ /* Write updated Event mask */
+ eventmask_msg->ver = EVENTMSGS_VER;
+ eventmask_msg->command = EVENTMSGS_SET_MASK;
+ eventmask_msg->len = ROUNDUP(WLC_E_LAST, NBBY)/NBBY;
+ bcm_mkiovar("event_msgs_ext", (char *)eventmask_msg,
+ msglen, iov_buf, sizeof(iov_buf));
+ if ((ret = dhd_wl_ioctl_cmd(dhd, WLC_SET_VAR,
+ iov_buf, sizeof(iov_buf), TRUE, 0)) < 0) {
+ DHD_ERROR(("%s write event mask ext failed %d\n", __FUNCTION__, ret));
+ kfree(eventmask_msg);
+ goto done;
+ }
+ } else if (ret2 < 0 && ret2 != BCME_UNSUPPORTED) {
+ DHD_ERROR(("%s read event mask ext failed %d\n", __FUNCTION__, ret2));
+ kfree(eventmask_msg);
+ goto done;
+ } /* unsupported is ok */
+ kfree(eventmask_msg);
+
+ dhd_set_short_dwell_time(dhd, FALSE);
#ifdef ARP_OFFLOAD_SUPPORT
/* Set and enable ARP offload feature for STA only */
@@ -4434,7 +6061,7 @@
if (arpoe && !ap_fw_loaded) {
#else
if (arpoe) {
-#endif
+#endif
dhd_arp_offload_enable(dhd, TRUE);
dhd_arp_offload_set(dhd, dhd_arp_mode);
} else {
@@ -4452,8 +6079,7 @@
dhd->pktfilter[DHD_BROADCAST_FILTER_NUM] = NULL;
dhd->pktfilter[DHD_MULTICAST4_FILTER_NUM] = NULL;
dhd->pktfilter[DHD_MULTICAST6_FILTER_NUM] = NULL;
- /* Add filter to pass multicastDNS packet and NOT filter out as Broadcast */
- dhd->pktfilter[DHD_MDNS_FILTER_NUM] = "104 0 0 0 0xFFFFFFFFFFFF 0x01005E0000FB";
+ dhd->pktfilter[DHD_MDNS_FILTER_NUM] = NULL;
/* apply APP pktfilter */
dhd->pktfilter[DHD_ARP_FILTER_NUM] = "105 0 0 12 0xFFFF 0x0806";
@@ -4471,7 +6097,6 @@
DHD_ERROR(("%s wl nmode 0 failed %d\n", __FUNCTION__, ret));
#endif /* DISABLE_11N */
-
/* query for 'ver' to get version info from firmware */
memset(buf, 0, sizeof(buf));
ptr = buf;
@@ -4482,9 +6107,8 @@
bcmstrtok(&ptr, "\n", 0);
/* Print fw version info */
DHD_ERROR(("Firmware version = %s\n", buf));
-#if defined(BCMSDIO)
+
dhd_set_version_info(dhd, buf);
-#endif /* defined(BCMSDIO) */
}
#if defined(BCMSDIO)
@@ -4507,10 +6131,14 @@
bcm_mkiovar("ampdu_hostreorder", (char *)&hostreorder, 4, iovbuf, sizeof(iovbuf));
if ((ret2 = dhd_wl_ioctl_cmd(dhd, WLC_SET_VAR, iovbuf, sizeof(iovbuf), TRUE, 0)) < 0) {
DHD_ERROR(("%s wl ampdu_hostreorder failed %d\n", __FUNCTION__, ret2));
+ if (ret2 != BCME_UNSUPPORTED)
+ ret = ret2;
if (ret2 != BCME_OK)
hostreorder = 0;
}
#endif /* DISABLE_11N */
+
if (wlfc_enable)
dhd_wlfc_init(dhd);
#ifndef DISABLE_11N
@@ -4520,11 +6148,29 @@
#endif /* PROP_TXSTATUS */
#endif /* BCMSDIO || BCMBUS */
+#ifdef PCIE_FULL_DONGLE
+ /* For FD we need all the packets at DHD to handle intra-BSS forwarding */
+ if (FW_SUPPORTED(dhd, ap)) {
+ wl_ap_isolate = AP_ISOLATE_SENDUP_ALL;
+ bcm_mkiovar("ap_isolate", (char *)&wl_ap_isolate, 4, iovbuf, sizeof(iovbuf));
+ if ((ret = dhd_wl_ioctl_cmd(dhd, WLC_SET_VAR, iovbuf, sizeof(iovbuf), TRUE, 0)) < 0)
+ DHD_ERROR(("%s failed %d\n", __FUNCTION__, ret));
+ }
+#endif /* PCIE_FULL_DONGLE */
#ifdef PNO_SUPPORT
if (!dhd->pno_state) {
dhd_pno_init(dhd);
}
#endif
+#ifdef RTT_SUPPORT
+ if (!dhd->rtt_state) {
+ ret = dhd_rtt_init(dhd);
+ if (ret < 0) {
+ DHD_ERROR(("%s failed to initialize RTT\n", __FUNCTION__));
+ }
+ }
+#endif
+
#ifdef WL11U
dhd_interworking_enable(dhd);
#endif /* WL11U */
@@ -4533,22 +6179,6 @@
return ret;
}
-void dhd_set_ampdu_rx_tid(struct net_device *dev, int ampdu_rx_tid)
-{
- int i, ret = 0;
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
- dhd_pub_t *pub = &dhd->pub;
- char iovbuf[32];
- for (i = 0; i < 8; i++) { /* One bit each for traffic class CS7 - CS0 */
- struct ampdu_tid_control atc;
- atc.tid = i;
- atc.enable = (ampdu_rx_tid >> i) & 1;
- bcm_mkiovar("ampdu_rx_tid", (char *)&atc, sizeof(atc), iovbuf,sizeof(iovbuf));
- ret = dhd_wl_ioctl_cmd(pub, WLC_SET_VAR, iovbuf, sizeof(iovbuf),TRUE, 0);
- if (ret < 0)
- DHD_ERROR(("%s failed %d\n", __func__, ret));
- }
-}
int
dhd_iovar(dhd_pub_t *pub, int ifidx, char *name, char *cmd_buf, uint cmd_len, int set)
@@ -4574,6 +6204,81 @@
return ret;
}
+int
+dhd_getiovar(dhd_pub_t *pub, int ifidx, char *name, char *cmd_buf,
+ uint cmd_len, char **resptr, uint resp_len)
+{
+ int len = resp_len;
+ int ret;
+ char *buf = *resptr;
+ wl_ioctl_t ioc;
+ if (resp_len > WLC_IOCTL_MAXLEN)
+ return BCME_BADARG;
+
+ memset(buf, 0, resp_len);
+
+ bcm_mkiovar(name, cmd_buf, cmd_len, buf, len);
+
+ memset(&ioc, 0, sizeof(ioc));
+
+ ioc.cmd = WLC_GET_VAR;
+ ioc.buf = buf;
+ ioc.len = len;
+ ioc.set = 0;
+
+ ret = dhd_wl_ioctl(pub, ifidx, &ioc, ioc.buf, ioc.len);
+
+ return ret;
+}
+
+int
+dhd_wl_ioctl_get_intiovar(dhd_pub_t *dhd_pub, char *name, uint *pval,
+ int cmd, uint8 set, int ifidx)
+{
+ char iovbuf[WLC_IOCTL_SMLEN];
+ int ret = -1;
+
+ /* memset(iovbuf, 0, sizeof(iovbuf)); */
+ if (bcm_mkiovar(name, NULL, 0, iovbuf, sizeof(iovbuf))) {
+ ret = dhd_wl_ioctl_cmd(dhd_pub, cmd, iovbuf, sizeof(iovbuf), set, ifidx);
+ if (!ret) {
+ *pval = ltoh32(*((uint*)iovbuf));
+ } else {
+ DHD_ERROR(("%s: get int iovar %s failed, ERR %d\n",
+ __FUNCTION__, name, ret));
+ }
+ } else {
+ DHD_ERROR(("%s: mkiovar %s failed\n",
+ __FUNCTION__, name));
+ }
+
+ return ret;
+}
+
+int
+dhd_wl_ioctl_set_intiovar(dhd_pub_t *dhd_pub, char *name, uint val,
+ int cmd, uint8 set, int ifidx)
+{
+ char iovbuf[WLC_IOCTL_SMLEN];
+ int ret = -1;
+ int lval = htol32(val);
+
+ /* memset(iovbuf, 0, sizeof(iovbuf)); */
+ if (bcm_mkiovar(name, (char*)&lval, sizeof(lval), iovbuf, sizeof(iovbuf))) {
+ ret = dhd_wl_ioctl_cmd(dhd_pub, cmd, iovbuf, sizeof(iovbuf), set, ifidx);
+ if (ret) {
+ DHD_ERROR(("%s: set int iovar %s failed, ERR %d\n",
+ __FUNCTION__, name, ret));
+ }
+ } else {
+ DHD_ERROR(("%s: mkiovar %s failed\n",
+ __FUNCTION__, name));
+ }
+
+ return ret;
+}
+
int dhd_change_mtu(dhd_pub_t *dhdp, int new_mtu, int ifidx)
{
struct dhd_info *dhd = dhdp->info;
@@ -4683,7 +6388,7 @@
}
#endif /* LINUX_VERSION_CODE */
- dhd = *(dhd_info_t **)netdev_priv(ifa->ifa_dev->dev);
+ dhd = DHD_DEV_INFO(ifa->ifa_dev->dev);
if (!dhd)
return NOTIFY_DONE;
@@ -4751,6 +6456,7 @@
}
#endif /* ARP_OFFLOAD_SUPPORT */
+#ifdef CONFIG_IPV6
+ /* Neighbor Discovery Offload: deferred handler */
static void
dhd_inet6_work_handler(void *dhd_info, void *event_data, u8 event)
@@ -4842,7 +6548,7 @@
}
#endif /* LINUX_VERSION_CODE */
- dhd = *(dhd_info_t **)netdev_priv(inet6_ifa->idev->dev);
+ dhd = DHD_DEV_INFO(inet6_ifa->idev->dev);
if (!dhd)
return NOTIFY_DONE;
@@ -4863,15 +6569,17 @@
memcpy(&ndo_info->ipv6_addr[0], ipv6_addr, IPV6_ADDR_LEN);
/* defer the work to thread as it may block kernel */
- dhd_deferred_schedule_work((void *)ndo_info, DHD_WQ_WORK_IPV6_NDO,
+ dhd_deferred_schedule_work(dhd->dhd_deferred_wq, (void *)ndo_info, DHD_WQ_WORK_IPV6_NDO,
dhd_inet6_work_handler, DHD_WORK_PRIORITY_LOW);
return NOTIFY_DONE;
}
+#endif /* CONFIG_IPV6 */
int
dhd_register_if(dhd_pub_t *dhdp, int ifidx, bool need_rtnl_lock)
{
dhd_info_t *dhd = (dhd_info_t *)dhdp->info;
+ dhd_if_t *ifp;
struct net_device *net = NULL;
int err = 0;
uint8 temp_addr[ETHER_ADDR_LEN] = { 0x00, 0x90, 0x4c, 0x11, 0x22, 0x33 };
@@ -4879,8 +6587,9 @@
DHD_TRACE(("%s: ifidx %d\n", __FUNCTION__, ifidx));
ASSERT(dhd && dhd->iflist[ifidx]);
- net = dhd->iflist[ifidx]->net;
- ASSERT(net);
+ ifp = dhd->iflist[ifidx];
+ net = ifp->net;
+ ASSERT(net && (ifp->idx == ifidx));
#if (LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 31))
ASSERT(!net->open);
@@ -4912,7 +6621,7 @@
/*
* We have to use the primary MAC for virtual interfaces
*/
- memcpy(temp_addr, dhd->iflist[ifidx]->mac_addr, ETHER_ADDR_LEN);
+ memcpy(temp_addr, ifp->mac_addr, ETHER_ADDR_LEN);
/*
* Android sets the locally administered bit to indicate that this is a
* portable hotspot. This will not work in simultaneous AP/STA mode,
@@ -4958,6 +6667,7 @@
}
+
printf("Register interface [%s] MAC: "MACDBG"\n\n", net->name,
MAC2STRDBG(net->dev_addr));
@@ -4979,6 +6689,16 @@
}
}
#endif /* OEM_ANDROID && BCMLXSDMMC && (LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 27)) */
+
+#if defined(BCMPCIE)
+ if (ifidx == 0) {
+ if (!dhd_download_fw_on_driverload) {
+ dhd_net_bus_devreset(net, TRUE);
+ wifi_platform_set_power(dhdp->info->adapter, FALSE, WIFI_TURNOFF_DELAY);
+ }
+ }
+#endif /* BCMPCIE */
+
return 0;
fail:
@@ -5015,7 +6735,7 @@
#if defined(OOB_INTR_ONLY)
dhd_bus_oob_intr_unregister(dhdp);
-#endif
+#endif
}
}
}
@@ -5045,11 +6765,19 @@
}
if (dhd->dhd_state & DHD_ATTACH_STATE_PROT_ATTACH) {
+#ifdef PCIE_FULL_DONGLE
+ dhd_flow_rings_deinit(dhdp);
+#endif
dhd_bus_detach(dhdp);
if (dhdp->prot)
dhd_prot_detach(dhdp);
}
+#ifdef PROP_TXSTATUS
+#ifdef WLFC_STATE_PREALLOC
+ MFREE(dhd->pub.osh, dhd->pub.wlfc_state, sizeof(athost_wl_status_info_t));
+#endif /* WLFC_STATE_PREALLOC */
+#endif /* PROP_TXSTATUS */
#ifdef ARP_OFFLOAD_SUPPORT
if (dhd_inetaddr_notifier_registered) {
@@ -5057,10 +6785,12 @@
unregister_inetaddr_notifier(&dhd_inetaddr_notifier);
}
#endif /* ARP_OFFLOAD_SUPPORT */
+#ifdef CONFIG_IPV6
if (dhd_inet6addr_notifier_registered) {
dhd_inet6addr_notifier_registered = FALSE;
unregister_inet6addr_notifier(&dhd_inet6addr_notifier);
}
+#endif
#if defined(CONFIG_HAS_EARLYSUSPEND) && defined(DHD_USE_EARLYSUSPEND)
if (dhd->dhd_state & DHD_ATTACH_STATE_EARLYSUSPEND_DONE) {
@@ -5094,6 +6824,9 @@
ASSERT(ifp);
ASSERT(ifp->net);
if (ifp && ifp->net) {
/* in unregister_netdev case, the interface gets freed by net->destructor
* (which is set to free_netdev)
*/
@@ -5102,16 +6835,22 @@
else
unregister_netdev(ifp->net);
ifp->net = NULL;
+#ifdef DHD_WMF
+ dhd_wmf_cleanup(dhdp, 0);
+#endif /* DHD_WMF */
+
+ dhd_if_del_sta_list(ifp);
+
MFREE(dhd->pub.osh, ifp, sizeof(*ifp));
dhd->iflist[0] = NULL;
}
}
/* Clear the watchdog timer */
- flags = dhd_os_spin_lock(&dhd->pub);
+ DHD_GENERAL_LOCK(&dhd->pub, flags);
timer_valid = dhd->wd_timer_valid;
dhd->wd_timer_valid = FALSE;
- dhd_os_spin_unlock(&dhd->pub, flags);
+ DHD_GENERAL_UNLOCK(&dhd->pub, flags);
if (timer_valid)
del_timer_sync(&dhd->timer);
@@ -5139,17 +6878,35 @@
dhd_deferred_work_deinit(dhd->dhd_deferred_wq);
dhd->dhd_deferred_wq = NULL;
+ if (dhdp->dbg)
+ dhd_os_dbg_detach(dhdp);
+#ifdef SHOW_LOGTRACE
+ if (dhd->event_data.fmts)
+ kfree(dhd->event_data.fmts);
+ if (dhd->event_data.raw_fmts)
+ kfree(dhd->event_data.raw_fmts);
+#endif /* SHOW_LOGTRACE */
+
#ifdef PNO_SUPPORT
if (dhdp->pno_state)
dhd_pno_deinit(dhdp);
#endif
-#if (LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 27)) && (LINUX_VERSION_CODE <= \
- KERNEL_VERSION(2, 6, 39)) && defined(CONFIG_PM_SLEEP)
+#ifdef RTT_SUPPORT
+ if (dhdp->rtt_state)
+ dhd_rtt_deinit(dhdp);
+#endif
+#if defined(CONFIG_PM_SLEEP)
if (dhd_pm_notifier_registered) {
- unregister_pm_notifier(&dhd->pm_notifier);
+ unregister_pm_notifier(&dhd_pm_notifier);
dhd_pm_notifier_registered = FALSE;
}
-#endif /* (LINUX_VERSION >= 2.6.27 && LINUX_VERSION <= 2.6.39 && CONFIG_PM_SLEEP) */
+#endif /* CONFIG_PM_SLEEP */
+#ifdef SAR_SUPPORT
+ if (dhd_sar_notifier_registered) {
+ unregister_notifier_by_sar(&dhd->sar_notifier);
+ dhd_sar_notifier_registered = FALSE;
+ }
+#endif /* SAR_SUPPORT */
#ifdef DEBUG_CPU_FREQ
if (dhd->new_freq)
free_percpu(dhd->new_freq);
@@ -5170,7 +6927,9 @@
#endif /* CONFIG_HAS_WAKELOCK */
}
-
+ if (dhdp->soc_ram) {
+ MFREE(dhdp->osh, dhdp->soc_ram, dhdp->soc_ram_length);
+ }
#ifdef DHDTCPACK_SUPPRESS
/* This will free all MEM allocated for TCPACK SUPPRESS */
dhd_tcpack_suppress_set(&dhd->pub, TCPACK_SUP_OFF);
@@ -5201,6 +6960,9 @@
dhdp->reorder_bufs[i] = NULL;
}
}
+
+ dhd_sta_pool_fini(dhdp, DHD_MAX_STA);
+
dhd = (dhd_info_t *)dhdp->info;
/* If pointer is allocated by dhd_os_prealloc then avoid MFREE */
if (dhd &&
@@ -5208,9 +6970,42 @@
MFREE(dhd->pub.osh, dhd, sizeof(*dhd));
dhd = NULL;
}
+ if (dhdp->soc_ram) {
+ memset(dhdp->soc_ram, 0, dhdp->soc_ram_length);
+ }
+}
+void
+dhd_clear(dhd_pub_t *dhdp)
+{
+ DHD_TRACE(("%s: Enter\n", __FUNCTION__));
+
+#ifdef PCIE_FULL_DONGLE
+ if (dhdp) {
+ int i;
+ for (i = 0; i < ARRAYSIZE(dhdp->reorder_bufs); i++) {
+ if (dhdp->reorder_bufs[i]) {
+ reorder_info_t *ptr;
+ uint32 buf_size = sizeof(struct reorder_info);
+ ptr = dhdp->reorder_bufs[i];
+ buf_size += ((ptr->max_idx + 1) * sizeof(void*));
+ DHD_REORDER(("free flow id buf %d, maxidx is %d, buf_size %d\n",
+ i, ptr->max_idx, buf_size));
+
+ MFREE(dhdp->osh, dhdp->reorder_bufs[i], buf_size);
+ dhdp->reorder_bufs[i] = NULL;
+ }
+ }
+ dhd_sta_pool_clear(dhdp, DHD_MAX_STA);
+ }
+#endif
+
+ if (dhdp->soc_ram) {
+ memset(dhdp->soc_ram, 0, dhdp->soc_ram_length);
+ }
}
-static void __exit
+static void
dhd_module_cleanup(void)
{
DHD_TRACE(("%s: Enter\n", __FUNCTION__));
@@ -5222,17 +7017,76 @@
dhd_wifi_platform_unregister_drv();
}
+static void __exit
+dhd_module_exit(void)
+{
+ dhd_module_cleanup();
+ unregister_reboot_notifier(&dhd_reboot_notifier);
+#if defined(DHD_OF_SUPPORT)
+ dhd_wlan_exit();
+#endif /* defined(DHD_OF_SUPPORT) */
+}
+
static int __init
dhd_module_init(void)
{
int err;
+ int retry = POWERUP_MAX_RETRY;
DHD_ERROR(("%s in\n", __FUNCTION__));
- err = dhd_wifi_platform_register_drv();
+#if defined(DHD_OF_SUPPORT)
+ err = dhd_wlan_init();
+ if (err) {
+ DHD_ERROR(("%s: failed in dhd_wlan_init\n", __FUNCTION__));
+ return err;
+ }
+#endif /* defined(DHD_OF_SUPPORT) */
+
+
+ DHD_PERIM_RADIO_INIT();
+
+ if (firmware_path[0] != '\0') {
+ strncpy(fw_bak_path, firmware_path, MOD_PARAM_PATHLEN);
+ fw_bak_path[MOD_PARAM_PATHLEN-1] = '\0';
+ }
+
+ if (nvram_path[0] != '\0') {
+ strncpy(nv_bak_path, nvram_path, MOD_PARAM_PATHLEN);
+ nv_bak_path[MOD_PARAM_PATHLEN-1] = '\0';
+ }
+
+ do {
+ err = dhd_wifi_platform_register_drv();
+ if (!err) {
+ register_reboot_notifier(&dhd_reboot_notifier);
+ break;
+ }
+ else {
+ DHD_ERROR(("%s: Failed to load the driver, try cnt %d\n",
+ __FUNCTION__, retry));
+ strncpy(firmware_path, fw_bak_path, MOD_PARAM_PATHLEN);
+ firmware_path[MOD_PARAM_PATHLEN-1] = '\0';
+ strncpy(nvram_path, nv_bak_path, MOD_PARAM_PATHLEN);
+ nvram_path[MOD_PARAM_PATHLEN-1] = '\0';
+ }
+ } while (retry--);
+
+ if (err)
+ DHD_ERROR(("%s: Failed to load driver; max retry reached\n", __FUNCTION__));
return err;
}
+static int
+dhd_reboot_callback(struct notifier_block *this, unsigned long code, void *unused)
+{
+ DHD_TRACE(("%s: code = %ld\n", __FUNCTION__, code));
+ if (code == SYS_RESTART) {
+ }
+
+ return NOTIFY_DONE;
+}
+
#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 0)
#if defined(CONFIG_DEFERRED_INITCALLS)
@@ -5246,7 +7100,7 @@
module_init(dhd_module_init);
#endif /* LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 0) */
-module_exit(dhd_module_cleanup);
+module_exit(dhd_module_exit);
/*
* OS specific functions required to implement DHD driver in OS independent way
@@ -5257,7 +7111,11 @@
dhd_info_t * dhd = (dhd_info_t *)(pub->info);
if (dhd) {
+ DHD_PERIM_UNLOCK(pub);
+
down(&dhd->proto_sem);
+
+ DHD_PERIM_LOCK(pub);
return 1;
}
@@ -5302,7 +7160,12 @@
timeout = dhd_ioctl_timeout_msec * HZ / 1000;
#endif
+ DHD_PERIM_UNLOCK(pub);
+
timeout = wait_event_timeout(dhd->ioctl_resp_wait, (*condition), timeout);
+
+ DHD_PERIM_LOCK(pub);
+
return timeout;
}
@@ -5315,6 +7178,36 @@
return 0;
}
+int
+dhd_os_d3ack_wait(dhd_pub_t *pub, uint *condition, bool *pending)
+{
+ dhd_info_t * dhd = (dhd_info_t *)(pub->info);
+ int timeout;
+
+ /* Convert timeout in milliseconds to jiffies */
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 27))
+ timeout = msecs_to_jiffies(dhd_ioctl_timeout_msec);
+#else
+ timeout = dhd_ioctl_timeout_msec * HZ / 1000;
+#endif
+
+ DHD_PERIM_UNLOCK(pub);
+ timeout = wait_event_timeout(dhd->d3ack_wait, (*condition), timeout);
+ DHD_PERIM_LOCK(pub);
+
+ return timeout;
+}
+
+int
+dhd_os_d3ack_wake(dhd_pub_t *pub)
+{
+ dhd_info_t *dhd = (dhd_info_t *)(pub->info);
+
+ wake_up(&dhd->d3ack_wait);
+ return 0;
+}
+
void
dhd_os_wd_timer_extend(void *bus, bool extend)
{
@@ -5342,11 +7235,11 @@
return;
}
- flags = dhd_os_spin_lock(pub);
+ DHD_GENERAL_LOCK(pub, flags);
/* don't start the wd until fw is loaded */
if (pub->busstate == DHD_BUS_DOWN) {
- dhd_os_spin_unlock(pub, flags);
+ DHD_GENERAL_UNLOCK(pub, flags);
if (!wdtick)
DHD_OS_WD_WAKE_UNLOCK(pub);
return;
@@ -5355,7 +7248,7 @@
/* Totally stop the timer */
if (!wdtick && dhd->wd_timer_valid == TRUE) {
dhd->wd_timer_valid = FALSE;
- dhd_os_spin_unlock(pub, flags);
+ DHD_GENERAL_UNLOCK(pub, flags);
del_timer_sync(&dhd->timer);
DHD_OS_WD_WAKE_UNLOCK(pub);
return;
@@ -5368,7 +7261,7 @@
mod_timer(&dhd->timer, jiffies + msecs_to_jiffies(dhd_watchdog_ms));
dhd->wd_timer_valid = TRUE;
}
- dhd_os_spin_unlock(pub, flags);
+ DHD_GENERAL_UNLOCK(pub, flags);
}
void *
@@ -5466,18 +7359,6 @@
{
}
-void
-dhd_os_sdtxlock(dhd_pub_t *pub)
-{
- dhd_os_sdlock(pub);
-}
-
-void
-dhd_os_sdtxunlock(dhd_pub_t *pub)
-{
- dhd_os_sdunlock(pub);
-}
-
static void
dhd_os_rxflock(dhd_pub_t *pub)
{
@@ -5539,7 +7420,7 @@
dhd_get_wireless_stats(struct net_device *dev)
{
int res = 0;
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
if (!dhd->pub.up) {
return NULL;
@@ -5561,7 +7442,12 @@
int bcmerror = 0;
ASSERT(dhd != NULL);
- bcmerror = wl_host_event(&dhd->pub, ifidx, pktdata, event, data);
+#ifdef SHOW_LOGTRACE
+ bcmerror = wl_host_event(&dhd->pub, ifidx, pktdata, event, data, &dhd->event_data);
+#else
+ bcmerror = wl_host_event(&dhd->pub, ifidx, pktdata, event, data, NULL);
+#endif /* SHOW_LOGTRACE */
+
if (bcmerror != BCME_OK)
return (bcmerror);
@@ -5687,64 +7573,71 @@
return;
}
-#ifdef BCMSDIO
+#if defined(BCMSDIO) || defined(BCMPCIE)
int
dhd_net_bus_devreset(struct net_device *dev, uint8 flag)
{
- int ret;
+ int ret = 0;
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
-
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
if (flag == TRUE) {
/* Issue wl down command before resetting the chip */
if (dhd_wl_ioctl_cmd(&dhd->pub, WLC_DOWN, NULL, 0, TRUE, 0) < 0) {
DHD_TRACE(("%s: wl down failed\n", __FUNCTION__));
}
#ifdef PROP_TXSTATUS
- if (dhd->pub.wlfc_enabled)
+ if (dhd->pub.wlfc_enabled) {
dhd_wlfc_deinit(&dhd->pub);
+ }
#endif /* PROP_TXSTATUS */
#ifdef PNO_SUPPORT
- if (dhd->pub.pno_state)
- dhd_pno_deinit(&dhd->pub);
-#endif
+ if (dhd->pub.pno_state) {
+ dhd_pno_deinit(&dhd->pub);
+ }
+#endif /* PNO_SUPPORT */
+#ifdef RTT_SUPPORT
+ if (dhd->pub.rtt_state) {
+ dhd_rtt_deinit(&dhd->pub);
+ }
+#endif /* RTT_SUPPORT */
}
-
+#ifdef BCMSDIO
if (!flag) {
dhd_update_fw_nv_path(dhd);
/* update firmware and nvram path to sdio bus */
dhd_bus_update_fw_nv_path(dhd->pub.bus,
dhd->fw_path, dhd->nv_path);
}
-
+#endif /* BCMSDIO */
ret = dhd_bus_devreset(&dhd->pub, flag);
if (ret) {
DHD_ERROR(("%s: dhd_bus_devreset: %d\n", __FUNCTION__, ret));
return ret;
}
-
return ret;
}
+#ifdef BCMSDIO
int
dhd_net_bus_suspend(struct net_device *dev)
{
- dhd_info_t *dhdinfo = *(dhd_info_t **)netdev_priv(dev);
- return dhd_bus_suspend(&dhdinfo->pub);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
+ return dhd_bus_suspend(&dhd->pub);
}
int
dhd_net_bus_resume(struct net_device *dev, uint8 stage)
{
- dhd_info_t *dhdinfo = *(dhd_info_t **)netdev_priv(dev);
- return dhd_bus_resume(&dhdinfo->pub, stage);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
+ return dhd_bus_resume(&dhd->pub, stage);
}
#endif /* BCMSDIO */
+#endif /* BCMSDIO || BCMPCIE */
int net_os_set_suspend_disable(struct net_device *dev, int val)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
int ret = 0;
if (dhd) {
@@ -5757,7 +7650,7 @@
int net_os_set_suspend(struct net_device *dev, int val, int force)
{
int ret = 0;
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
if (dhd) {
#if defined(CONFIG_HAS_EARLYSUSPEND) && defined(DHD_USE_EARLYSUSPEND)
@@ -5774,7 +7667,7 @@
int net_os_set_suspend_bcn_li_dtim(struct net_device *dev, int val)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
if (dhd)
dhd->pub.suspend_bcn_li_dtim = val;
@@ -5785,13 +7678,12 @@
#ifdef PKT_FILTER_SUPPORT
int net_os_rxfilter_add_remove(struct net_device *dev, int add_remove, int num)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
char *filterp = NULL;
int filter_id = 0;
int ret = 0;
- if (!dhd || (num == DHD_UNICAST_FILTER_NUM) ||
- (num == DHD_MDNS_FILTER_NUM))
+ if (!dhd || (num == DHD_UNICAST_FILTER_NUM))
return ret;
if (num >= dhd->pub.pktfilter_count)
return -EINVAL;
@@ -5808,6 +7700,10 @@
filterp = "103 0 0 0 0xFFFF 0x3333";
filter_id = 103;
break;
+ case DHD_MDNS_FILTER_NUM:
+ filterp = "104 0 0 0 0xFFFFFFFFFFFF 0x01005E0000FB";
+ filter_id = 104;
+ break;
default:
return -EINVAL;
}
@@ -5847,7 +7743,7 @@
/* function to enable/disable packet for Network device */
int net_os_enable_packet_filter(struct net_device *dev, int val)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
return dhd_os_enable_packet_filter(&dhd->pub, val);
}
@@ -5856,35 +7752,152 @@
int
dhd_dev_init_ioctl(struct net_device *dev)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
int ret;
- dhd_process_cid_mac(&dhd->pub, TRUE);
-
- if ((ret = dhd_prot_init(&dhd->pub)) < 0)
+ if ((ret = dhd_sync_with_dongle(&dhd->pub)) < 0)
goto done;
- dhd_process_cid_mac(&dhd->pub, FALSE);
-
done:
return ret;
}
+int dhd_dev_get_feature_set(struct net_device *dev)
+{
+ dhd_info_t *ptr = *(dhd_info_t **)netdev_priv(dev);
+ dhd_pub_t *dhd = (&ptr->pub);
+ int feature_set = 0;
+
+ if (!dhd)
+ return feature_set;
+
+ if (FW_SUPPORTED(dhd, sta))
+ feature_set |= WIFI_FEATURE_INFRA;
+ if (FW_SUPPORTED(dhd, dualband))
+ feature_set |= WIFI_FEATURE_INFRA_5G;
+ if (FW_SUPPORTED(dhd, p2p))
+ feature_set |= WIFI_FEATURE_P2P;
+ if (dhd->op_mode & DHD_FLAG_HOSTAP_MODE)
+ feature_set |= WIFI_FEATURE_SOFT_AP;
+ if (FW_SUPPORTED(dhd, tdls))
+ feature_set |= WIFI_FEATURE_TDLS;
+ if (FW_SUPPORTED(dhd, vsdb))
+ feature_set |= WIFI_FEATURE_TDLS_OFFCHANNEL;
+ if (FW_SUPPORTED(dhd, nan)) {
+ feature_set |= WIFI_FEATURE_NAN;
+ /* NAN is essential for D2D RTT */
+ if (FW_SUPPORTED(dhd, rttd2d))
+ feature_set |= WIFI_FEATURE_D2D_RTT;
+ }
+#ifdef RTT_SUPPORT
+ feature_set |= WIFI_FEATURE_D2AP_RTT;
+#endif /* RTT_SUPPORT */
+#ifdef LINKSTAT_SUPPORT
+ feature_set |= WIFI_FEATURE_LINKSTAT;
+#endif /* LINKSTAT_SUPPORT */
+ /* Supports STA + STA always */
+ feature_set |= WIFI_FEATURE_ADDITIONAL_STA;
+#ifdef PNO_SUPPORT
+ if (dhd_is_pno_supported(dhd)) {
+ feature_set |= WIFI_FEATURE_PNO;
+ feature_set |= WIFI_FEATURE_BATCH_SCAN;
+#ifdef GSCAN_SUPPORT
+ feature_set |= WIFI_FEATURE_GSCAN;
+ feature_set |= WIFI_FEATURE_HAL_EPNO;
+#endif /* GSCAN_SUPPORT */
+ }
+ if (FW_SUPPORTED(dhd, rssi_mon)) {
+ feature_set |= WIFI_FEATUE_RSSI_MONITOR;
+ }
+#endif /* PNO_SUPPORT */
+#ifdef WL11U
+ feature_set |= WIFI_FEATURE_HOTSPOT;
+#endif /* WL11U */
+ return feature_set;
+}
+
+int *dhd_dev_get_feature_set_matrix(struct net_device *dev, int *num)
+{
+ int feature_set_full, mem_needed;
+ int *ret;
+
+ *num = 0;
+ mem_needed = sizeof(int) * MAX_FEATURE_SET_CONCURRRENT_GROUPS;
+ ret = (int *) kmalloc(mem_needed, GFP_KERNEL);
+
+ if (!ret) {
+ DHD_ERROR(("%s: failed to allocate %d bytes\n", __FUNCTION__,
+ mem_needed));
+ return ret;
+ }
+
+ feature_set_full = dhd_dev_get_feature_set(dev);
+
+ ret[0] = (feature_set_full & WIFI_FEATURE_INFRA) |
+ (feature_set_full & WIFI_FEATURE_INFRA_5G) |
+ (feature_set_full & WIFI_FEATURE_NAN) |
+ (feature_set_full & WIFI_FEATURE_D2D_RTT) |
+ (feature_set_full & WIFI_FEATURE_D2AP_RTT) |
+ (feature_set_full & WIFI_FEATURE_PNO) |
+ (feature_set_full & WIFI_FEATURE_HAL_EPNO) |
+ (feature_set_full & WIFI_FEATUE_RSSI_MONITOR) |
+ (feature_set_full & WIFI_FEATURE_BATCH_SCAN) |
+ (feature_set_full & WIFI_FEATURE_GSCAN) |
+ (feature_set_full & WIFI_FEATURE_HOTSPOT) |
+ (feature_set_full & WIFI_FEATURE_ADDITIONAL_STA) |
+ (feature_set_full & WIFI_FEATURE_EPR);
+
+ ret[1] = (feature_set_full & WIFI_FEATURE_INFRA) |
+ (feature_set_full & WIFI_FEATURE_INFRA_5G) |
+ (feature_set_full & WIFI_FEATUE_RSSI_MONITOR) |
+ /* Not yet verified NAN with P2P */
+ /* (feature_set_full & WIFI_FEATURE_NAN) | */
+ (feature_set_full & WIFI_FEATURE_P2P) |
+ (feature_set_full & WIFI_FEATURE_D2AP_RTT) |
+ (feature_set_full & WIFI_FEATURE_D2D_RTT) |
+ (feature_set_full & WIFI_FEATURE_EPR);
+
+ ret[2] = (feature_set_full & WIFI_FEATURE_INFRA) |
+ (feature_set_full & WIFI_FEATURE_INFRA_5G) |
+ (feature_set_full & WIFI_FEATUE_RSSI_MONITOR) |
+ (feature_set_full & WIFI_FEATURE_NAN) |
+ (feature_set_full & WIFI_FEATURE_D2D_RTT) |
+ (feature_set_full & WIFI_FEATURE_D2AP_RTT) |
+ (feature_set_full & WIFI_FEATURE_TDLS) |
+ (feature_set_full & WIFI_FEATURE_TDLS_OFFCHANNEL) |
+ (feature_set_full & WIFI_FEATURE_EPR);
+ *num = MAX_FEATURE_SET_CONCURRRENT_GROUPS;
+
+ return ret;
+}
+
+int
+dhd_dev_set_nodfs(struct net_device *dev, u32 nodfs)
+{
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
+
+ if (nodfs)
+ dhd->pub.dhd_cflags |= WLAN_PLAT_NODFS_FLAG;
+ else
+ dhd->pub.dhd_cflags &= ~WLAN_PLAT_NODFS_FLAG;
+ dhd->pub.force_country_change = TRUE;
+ return 0;
+}
#ifdef PNO_SUPPORT
/* Linux wrapper to call common dhd_pno_stop_for_ssid */
int
dhd_dev_pno_stop_for_ssid(struct net_device *dev)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
return (dhd_pno_stop_for_ssid(&dhd->pub));
}
/* Linux wrapper to call common dhd_pno_set_for_ssid */
int
-dhd_dev_pno_set_for_ssid(struct net_device *dev, wlc_ssid_t* ssids_local, int nssid,
+dhd_dev_pno_set_for_ssid(struct net_device *dev, wlc_ssid_ext_t* ssids_local, int nssid,
uint16 scan_fr, int pno_repeat, int pno_freq_expo_max, uint16 *channel_list, int nchan)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
return (dhd_pno_set_for_ssid(&dhd->pub, ssids_local, nssid, scan_fr,
pno_repeat, pno_freq_expo_max, channel_list, nchan));
@@ -5894,7 +7907,7 @@
int
dhd_dev_pno_enable(struct net_device *dev, int enable)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
return (dhd_pno_enable(&dhd->pub, enable));
}
@@ -5904,32 +7917,625 @@
dhd_dev_pno_set_for_hotlist(struct net_device *dev, wl_pfn_bssid_t *p_pfn_bssid,
struct dhd_pno_hotlist_params *hotlist_params)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
return (dhd_pno_set_for_hotlist(&dhd->pub, p_pfn_bssid, hotlist_params));
}
/* Linux wrapper to call common dhd_dev_pno_stop_for_batch */
int
dhd_dev_pno_stop_for_batch(struct net_device *dev)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
return (dhd_pno_stop_for_batch(&dhd->pub));
}
/* Linux wrapper to call common dhd_dev_pno_set_for_batch */
int
dhd_dev_pno_set_for_batch(struct net_device *dev, struct dhd_pno_batch_params *batch_params)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
return (dhd_pno_set_for_batch(&dhd->pub, batch_params));
}
/* Linux wrapper to call common dhd_dev_pno_get_for_batch */
int
dhd_dev_pno_get_for_batch(struct net_device *dev, char *buf, int bufsize)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
return (dhd_pno_get_for_batch(&dhd->pub, buf, bufsize, PNO_STATUS_NORMAL));
}
#endif /* PNO_SUPPORT */
+#ifdef GSCAN_SUPPORT
+/* Linux wrapper to call common dhd_pno_set_cfg_gscan */
+int
+dhd_dev_pno_set_cfg_gscan(struct net_device *dev, dhd_pno_gscan_cmd_cfg_t type,
+ void *buf, uint8 flush)
+{
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+
+ return (dhd_pno_set_cfg_gscan(&dhd->pub, type, buf, flush));
+}
+
+/* Linux wrapper to call common dhd_pno_get_gscan */
+void *
+dhd_dev_pno_get_gscan(struct net_device *dev, dhd_pno_gscan_cmd_cfg_t type,
+ void *info, uint32 *len)
+{
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+
+ return (dhd_pno_get_gscan(&dhd->pub, type, info, len));
+}
+
+/* Linux wrapper to call common dhd_wait_batch_results_complete */
+int dhd_dev_wait_batch_results_complete(struct net_device *dev)
+{
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+
+ return (dhd_wait_batch_results_complete(&dhd->pub));
+}
+
+/* Linux wrapper to call common dhd_pno_lock_batch_results */
+int
+dhd_dev_pno_lock_access_batch_results(struct net_device *dev)
+{
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+
+ return (dhd_pno_lock_batch_results(&dhd->pub));
+}
+/* Linux wrapper to call common dhd_pno_unlock_batch_results */
+void
+dhd_dev_pno_unlock_access_batch_results(struct net_device *dev)
+{
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+
+ return (dhd_pno_unlock_batch_results(&dhd->pub));
+}
+
+/* Linux wrapper to call common dhd_pno_initiate_gscan_request */
+int dhd_dev_pno_run_gscan(struct net_device *dev, bool run, bool flush)
+{
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+
+ return (dhd_pno_initiate_gscan_request(&dhd->pub, run, flush));
+}
+
+/* Linux wrapper to call common dhd_pno_enable_full_scan_result */
+int dhd_dev_pno_enable_full_scan_result(struct net_device *dev, bool real_time_flag)
+{
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+
+ return (dhd_pno_enable_full_scan_result(&dhd->pub, real_time_flag));
+}
+
+/* Linux wrapper to call common dhd_handle_swc_evt */
+void * dhd_dev_swc_scan_event(struct net_device *dev, const void *data, int *send_evt_bytes)
+{
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+
+ return (dhd_handle_swc_evt(&dhd->pub, data, send_evt_bytes));
+}
+
+/* Linux wrapper to call common dhd_handle_hotlist_scan_evt */
+void * dhd_dev_hotlist_scan_event(struct net_device *dev,
+ const void *data, int *send_evt_bytes, hotlist_type_t type)
+{
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+
+ return (dhd_handle_hotlist_scan_evt(&dhd->pub, data, send_evt_bytes, type));
+}
+
+/* Linux wrapper to call common dhd_process_full_gscan_result */
+void * dhd_dev_process_full_gscan_result(struct net_device *dev,
+const void *data, int *send_evt_bytes)
+{
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+
+ return (dhd_process_full_gscan_result(&dhd->pub, data, send_evt_bytes));
+}
+
+void dhd_dev_gscan_hotlist_cache_cleanup(struct net_device *dev, hotlist_type_t type)
+{
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+
+ dhd_gscan_hotlist_cache_cleanup(&dhd->pub, type);
+
+ return;
+}
+
+int dhd_dev_gscan_batch_cache_cleanup(struct net_device *dev)
+{
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+
+ return (dhd_gscan_batch_cache_cleanup(&dhd->pub));
+}
+
+/* Linux wrapper to call common dhd_retreive_batch_scan_results */
+int dhd_dev_retrieve_batch_scan(struct net_device *dev)
+{
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+
+ return (dhd_retreive_batch_scan_results(&dhd->pub));
+}
+/* Linux wrapper to call common dhd_pno_process_epno_result */
+void * dhd_dev_process_epno_result(struct net_device *dev,
+ const void *data, uint32 event, int *send_evt_bytes)
+{
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+
+ return (dhd_pno_process_epno_result(&dhd->pub, data, event, send_evt_bytes));
+}
+
+int dhd_dev_set_lazy_roam_cfg(struct net_device *dev,
+ wlc_roam_exp_params_t *roam_param)
+{
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ wl_roam_exp_cfg_t roam_exp_cfg;
+ int err;
+
+ if (!roam_param) {
+ return BCME_BADARG;
+ }
+
+ DHD_ERROR(("a_band_boost_thr %d a_band_penalty_thr %d\n",
+ roam_param->a_band_boost_threshold, roam_param->a_band_penalty_threshold));
+ DHD_ERROR(("a_band_boost_factor %d a_band_penalty_factor %d cur_bssid_boost %d\n",
+ roam_param->a_band_boost_factor, roam_param->a_band_penalty_factor,
+ roam_param->cur_bssid_boost));
+ DHD_ERROR(("alert_roam_trigger_thr %d a_band_max_boost %d\n",
+ roam_param->alert_roam_trigger_threshold, roam_param->a_band_max_boost));
+
+ memcpy(&roam_exp_cfg.params, roam_param, sizeof(*roam_param));
+ roam_exp_cfg.version = ROAM_EXP_CFG_VERSION;
+ roam_exp_cfg.flags = ROAM_EXP_CFG_PRESENT;
+ if (dhd->pub.lazy_roam_enable) {
+ roam_exp_cfg.flags |= ROAM_EXP_ENABLE_FLAG;
+ }
+ err = dhd_iovar(&(dhd->pub), 0, "roam_exp_params", (char *)&roam_exp_cfg,
+ sizeof(roam_exp_cfg), 1);
+ if (err < 0) {
+ DHD_ERROR(("%s : Failed to execute roam_exp_params %d\n", __FUNCTION__, err));
+ }
+ return err;
+}
+
+int dhd_dev_lazy_roam_enable(struct net_device *dev, uint32 enable)
+{
+ int err;
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ wl_roam_exp_cfg_t roam_exp_cfg;
+
+ memset(&roam_exp_cfg, 0, sizeof(roam_exp_cfg));
+ roam_exp_cfg.version = ROAM_EXP_CFG_VERSION;
+ if (enable) {
+ roam_exp_cfg.flags = ROAM_EXP_ENABLE_FLAG;
+ }
+
+ err = dhd_iovar(&(dhd->pub), 0, "roam_exp_params", (char *)&roam_exp_cfg,
+ sizeof(roam_exp_cfg), 1);
+ if (err < 0) {
+ DHD_ERROR(("%s : Failed to execute roam_exp_params %d\n", __FUNCTION__, err));
+ } else {
+ dhd->pub.lazy_roam_enable = (enable != 0);
+ }
+ return err;
+}
+int dhd_dev_set_lazy_roam_bssid_pref(struct net_device *dev,
+ wl_bssid_pref_cfg_t *bssid_pref, uint32 flush)
+{
+ int err;
+ int len;
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+
+ bssid_pref->version = BSSID_PREF_LIST_VERSION;
+ /* By default programming bssid pref flushes out old values */
+ bssid_pref->flags = (flush && !bssid_pref->count) ? ROAM_EXP_CLEAR_BSSID_PREF : 0;
+ len = sizeof(wl_bssid_pref_cfg_t);
+ len += (bssid_pref->count - 1) * sizeof(wl_bssid_pref_list_t);
+ err = dhd_iovar(&(dhd->pub), 0, "roam_exp_bssid_pref", (char *)bssid_pref,
+ len, 1);
+ if (err != BCME_OK) {
+ DHD_ERROR(("%s : Failed to execute roam_exp_bssid_pref %d\n", __FUNCTION__, err));
+ }
+ return err;
+}
+int dhd_dev_set_blacklist_bssid(struct net_device *dev, maclist_t *blacklist,
+ uint32 len, uint32 flush)
+{
+ int err;
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ int macmode;
+
+ if (blacklist) {
+ err = dhd_wl_ioctl_cmd(&(dhd->pub), WLC_SET_MACLIST, (char *)blacklist,
+ len, TRUE, 0);
+ if (err != BCME_OK) {
+ DHD_ERROR(("%s : WLC_SET_MACLIST failed %d\n", __FUNCTION__, err));
+ return err;
+ }
+ }
+ /* By default programming blacklist flushes out old values */
+ macmode = (flush && !blacklist) ? WLC_MACMODE_DISABLED : WLC_MACMODE_DENY;
+ err = dhd_wl_ioctl_cmd(&(dhd->pub), WLC_SET_MACMODE, (char *)&macmode,
+ sizeof(macmode), TRUE, 0);
+ if (err != BCME_OK) {
+ DHD_ERROR(("%s : WLC_SET_MACMODE failed %d\n", __FUNCTION__, err));
+ }
+ return err;
+}
+int dhd_dev_set_whitelist_ssid(struct net_device *dev, wl_ssid_whitelist_t *ssid_whitelist,
+ uint32 len, uint32 flush)
+{
+ int err;
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ wl_ssid_whitelist_t whitelist_ssid_flush;
+
+ if (!ssid_whitelist) {
+ if (flush) {
+ ssid_whitelist = &whitelist_ssid_flush;
+ ssid_whitelist->ssid_count = 0;
+ } else {
+ DHD_ERROR(("%s : Nothing to do here\n", __FUNCTION__));
+ return BCME_BADARG;
+ }
+ }
+ ssid_whitelist->version = SSID_WHITELIST_VERSION;
+ ssid_whitelist->flags = flush ? ROAM_EXP_CLEAR_SSID_WHITELIST : 0;
+ err = dhd_iovar(&(dhd->pub), 0, "roam_exp_ssid_whitelist", (char *)ssid_whitelist,
+ len, 1);
+ if (err != BCME_OK) {
+ DHD_ERROR(("%s : Failed to execute roam_exp_ssid_whitelist %d\n", __FUNCTION__, err));
+ }
+ return err;
+}
+
+void * dhd_dev_process_anqpo_result(struct net_device *dev,
+ const void *data, uint32 event, int *send_evt_bytes)
+{
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+
+ return (dhd_pno_process_anqpo_result(&dhd->pub, data, event, send_evt_bytes));
+}
+#endif /* GSCAN_SUPPORT */
+
+int dhd_dev_set_rssi_monitor_cfg(struct net_device *dev, int start,
+ int8 max_rssi, int8 min_rssi)
+{
+ int err;
+ wl_rssi_monitor_cfg_t rssi_monitor;
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+
+ rssi_monitor.version = RSSI_MONITOR_VERSION;
+ rssi_monitor.max_rssi = max_rssi;
+ rssi_monitor.min_rssi = min_rssi;
+ rssi_monitor.flags = start ? 0 : RSSI_MONITOR_STOP;
+ err = dhd_iovar(&(dhd->pub), 0, "rssi_monitor", (char *)&rssi_monitor,
+ sizeof(rssi_monitor), 1);
+ if (err != BCME_OK) {
+ DHD_ERROR(("%s : Failed to execute rssi_monitor %d\n", __FUNCTION__, err));
+ }
+ return err;
+}
+
+bool dhd_dev_is_legacy_pno_enabled(struct net_device *dev)
+{
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+
+ return (dhd_is_legacy_pno_enabled(&dhd->pub));
+}
+
+int dhd_dev_cfg_rand_mac_oui(struct net_device *dev, uint8 *oui)
+{
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_pub_t *dhdp = &dhd->pub;
+
+ if (!dhdp || !oui) {
+ DHD_ERROR(("NULL POINTER : %s\n",
+ __FUNCTION__));
+ return BCME_ERROR;
+ }
+ if (ETHER_ISMULTI(oui)) {
+ DHD_ERROR(("Expected unicast OUI\n"));
+ return BCME_ERROR;
+ } else {
+ uint8 *rand_mac_oui = dhdp->rand_mac_oui;
+ memcpy(rand_mac_oui, oui, DOT11_OUI_LEN);
+ DHD_ERROR(("Random MAC OUI to be used - %02x:%02x:%02x\n", rand_mac_oui[0],
+ rand_mac_oui[1], rand_mac_oui[2]));
+ }
+ return BCME_OK;
+}
+
+int dhd_set_rand_mac_oui(dhd_pub_t *dhd)
+{
+ int err;
+ wl_pfn_macaddr_cfg_t cfg;
+ uint8 *rand_mac_oui = dhd->rand_mac_oui;
+
+ memset(&cfg.macaddr, 0, ETHER_ADDR_LEN);
+ memcpy(&cfg.macaddr, rand_mac_oui, DOT11_OUI_LEN);
+ cfg.version = WL_PFN_MACADDR_CFG_VER;
+ if (ETHER_ISNULLADDR(&cfg.macaddr))
+ cfg.flags = 0;
+ else
+ cfg.flags = (WL_PFN_MAC_OUI_ONLY_MASK | WL_PFN_SET_MAC_UNASSOC_MASK);
+
+ DHD_ERROR(("Setting rand mac oui to FW - %02x:%02x:%02x\n", rand_mac_oui[0],
+ rand_mac_oui[1], rand_mac_oui[2]));
+
+ err = dhd_iovar(dhd, 0, "pfn_macaddr", (char *)&cfg, sizeof(cfg), 1);
+ if (err < 0) {
+ DHD_ERROR(("%s : failed to execute pfn_macaddr %d\n", __FUNCTION__, err));
+ }
+ return err;
+}
+
+#ifdef RTT_SUPPORT
+/* Linux wrapper to call common dhd_pno_set_cfg_gscan */
+int
+dhd_dev_rtt_set_cfg(struct net_device *dev, void *buf)
+{
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+
+ return (dhd_rtt_set_cfg(&dhd->pub, buf));
+}
+int
+dhd_dev_rtt_cancel_cfg(struct net_device *dev, struct ether_addr *mac_list, int mac_cnt)
+{
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+
+ return (dhd_rtt_stop(&dhd->pub, mac_list, mac_cnt));
+}
+
+int
+dhd_dev_rtt_register_noti_callback(struct net_device *dev, void *ctx, dhd_rtt_compl_noti_fn noti_fn)
+{
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+
+ return (dhd_rtt_register_noti_callback(&dhd->pub, ctx, noti_fn));
+}
+int
+dhd_dev_rtt_unregister_noti_callback(struct net_device *dev, dhd_rtt_compl_noti_fn noti_fn)
+{
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+
+ return (dhd_rtt_unregister_noti_callback(&dhd->pub, noti_fn));
+}
+
+int
+dhd_dev_rtt_capability(struct net_device *dev, rtt_capabilities_t *capa)
+{
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+
+ return (dhd_rtt_capability(&dhd->pub, capa));
+}
+#endif /* RTT_SUPPORT */
+
+#if defined(KEEP_ALIVE)
+#define TEMP_BUF_SIZE 512
+#define FRAME_SIZE 300
+int
+dhd_dev_start_mkeep_alive(dhd_pub_t *dhd_pub, u8 mkeep_alive_id, u8 *ip_pkt, u16 ip_pkt_len,
+ u8* src_mac, u8* dst_mac, u32 period_msec)
+{
+ const int ETHERTYPE_LEN = 2;
+ char *pbuf;
+ const char *str;
+ wl_mkeep_alive_pkt_t mkeep_alive_pkt = {0};
+ wl_mkeep_alive_pkt_t *mkeep_alive_pktp;
+ int buf_len;
+ int str_len;
+ int res = BCME_ERROR;
+ int len_bytes = 0;
+ int i;
+
+ /* ether frame to have both max IP pkt (256 bytes) and ether header */
+ char *pmac_frame;
+
+ /*
+ * The mkeep_alive packet is for STA interface only; if the bss is configured as AP,
+ * dongle shall reject a mkeep_alive request.
+ */
+ if (!dhd_support_sta_mode(dhd_pub))
+ return res;
+
+ DHD_TRACE(("%s execution\n", __FUNCTION__));
+
+ if ((pbuf = kzalloc(TEMP_BUF_SIZE, GFP_KERNEL)) == NULL) {
+ DHD_ERROR(("failed to allocate buf with size %d\n", TEMP_BUF_SIZE));
+ res = BCME_NOMEM;
+ return res;
+ }
+
+ if ((pmac_frame = kzalloc(FRAME_SIZE, GFP_KERNEL)) == NULL) {
+ DHD_ERROR(("failed to allocate mac_frame with size %d\n", FRAME_SIZE));
+ res = BCME_NOMEM;
+ goto exit;
+ }
+
+ /*
+ * Get current mkeep-alive status.
+ */
+ bcm_mkiovar("mkeep_alive", &mkeep_alive_id, sizeof(mkeep_alive_id),
+ pbuf, TEMP_BUF_SIZE);
+
+ if ((res = dhd_wl_ioctl_cmd(dhd_pub, WLC_GET_VAR, pbuf, TEMP_BUF_SIZE,
+ FALSE, 0)) < 0) {
+ DHD_ERROR(("%s: Get mkeep_alive failed (error=%d)\n", __FUNCTION__, res));
+ goto exit;
+ } else {
+ /* Check available ID whether it is occupied */
+ mkeep_alive_pktp = (wl_mkeep_alive_pkt_t *) pbuf;
+ if (dtoh32(mkeep_alive_pktp->period_msec) != 0) {
+ DHD_ERROR(("%s: Get mkeep_alive failed, ID %u is in use.\n",
+ __FUNCTION__, mkeep_alive_id));
+
+ /* Current occupied ID info */
+ DHD_ERROR(("%s: mkeep_alive\n", __FUNCTION__));
+ DHD_ERROR((" Id : %d\n"
+ " Period: %d msec\n"
+ " Length: %d\n"
+ " Packet: 0x",
+ mkeep_alive_pktp->keep_alive_id,
+ dtoh32(mkeep_alive_pktp->period_msec),
+ dtoh16(mkeep_alive_pktp->len_bytes)));
+
+ for (i = 0; i < mkeep_alive_pktp->len_bytes; i++) {
+ DHD_ERROR(("%02x", mkeep_alive_pktp->data[i]));
+ }
+ DHD_ERROR(("\n"));
+
+ res = BCME_NOTFOUND;
+ goto exit;
+ }
+ }
+
+ /* Request the specified ID */
+ memset(&mkeep_alive_pkt, 0, sizeof(wl_mkeep_alive_pkt_t));
+ memset(pbuf, 0, TEMP_BUF_SIZE);
+ str = "mkeep_alive";
+ str_len = strlen(str);
+ strncpy(pbuf, str, str_len);
+ pbuf[str_len] = '\0';
+
+ mkeep_alive_pktp = (wl_mkeep_alive_pkt_t *) (pbuf + str_len + 1);
+ mkeep_alive_pkt.period_msec = htod32(period_msec);
+ buf_len = str_len + 1;
+ mkeep_alive_pkt.version = htod16(WL_MKEEP_ALIVE_VERSION);
+ mkeep_alive_pkt.length = htod16(WL_MKEEP_ALIVE_FIXED_LEN);
+
+ /* ID assigned */
+ mkeep_alive_pkt.keep_alive_id = mkeep_alive_id;
+
+ buf_len += WL_MKEEP_ALIVE_FIXED_LEN;
+
+ /*
+ * Build up Ethernet Frame
+ */
+
+ /* Build the frame at fixed offsets from pmac_frame so the base pointer
+ * stays intact for the later copy into mkeep_alive_pktp->data and for
+ * kfree() at exit (advancing it would copy from and free past the data).
+ */
+
+ /* Mapping dest mac addr */
+ memcpy(pmac_frame, dst_mac, ETHER_ADDR_LEN);
+
+ /* Mapping src mac addr */
+ memcpy(pmac_frame + ETHER_ADDR_LEN, src_mac, ETHER_ADDR_LEN);
+
+ /* Mapping Ethernet type (ETHERTYPE_IP: 0x0800) */
+ pmac_frame[ETHER_ADDR_LEN * 2] = 0x08;
+ pmac_frame[ETHER_ADDR_LEN * 2 + 1] = 0x00;
+
+ /* Mapping IP pkt */
+ memcpy(pmac_frame + ETHER_ADDR_LEN * 2 + ETHERTYPE_LEN, ip_pkt, ip_pkt_len);
+
+ /*
+ * Length of ether frame
+ * = dst mac + src mac + ether type + ip pkt len
+ */
+ len_bytes = ETHER_ADDR_LEN * 2 + ETHERTYPE_LEN + ip_pkt_len;
+ memcpy(mkeep_alive_pktp->data, pmac_frame, len_bytes);
+ buf_len += len_bytes;
+ mkeep_alive_pkt.len_bytes = htod16(len_bytes);
+
+ /*
+ * Keep-alive attributes are set in local variable (mkeep_alive_pkt), and
+ * then memcpy'ed into buffer (mkeep_alive_pktp) since there is no
+ * guarantee that the buffer is properly aligned.
+ */
+ memcpy((char *)mkeep_alive_pktp, &mkeep_alive_pkt, WL_MKEEP_ALIVE_FIXED_LEN);
+
+ res = dhd_wl_ioctl_cmd(dhd_pub, WLC_SET_VAR, pbuf, buf_len, TRUE, 0);
+exit:
+ kfree(pmac_frame);
+ kfree(pbuf);
+ return res;
+}
+
+int
+dhd_dev_stop_mkeep_alive(dhd_pub_t *dhd_pub, u8 mkeep_alive_id)
+{
+ char *pbuf;
+ const char *str;
+ wl_mkeep_alive_pkt_t mkeep_alive_pkt;
+ wl_mkeep_alive_pkt_t *mkeep_alive_pktp;
+ int buf_len;
+ int str_len;
+ int res = BCME_ERROR;
+ int i;
+
+ /*
+ * The mkeep_alive packet is for STA interface only; if the bss is configured as AP,
+ * dongle shall reject a mkeep_alive request.
+ */
+ if (!dhd_support_sta_mode(dhd_pub))
+ return res;
+
+ DHD_TRACE(("%s execution\n", __FUNCTION__));
+
+ /*
+ * Get current mkeep-alive status. Skip ID 0 which is being used for NULL pkt.
+ */
+ if ((pbuf = kzalloc(TEMP_BUF_SIZE, GFP_KERNEL)) == NULL) {
+ DHD_ERROR(("failed to allocate buf with size %d\n", TEMP_BUF_SIZE));
+ return res;
+ }
+
+ bcm_mkiovar("mkeep_alive", &mkeep_alive_id, sizeof(mkeep_alive_id), pbuf, TEMP_BUF_SIZE);
+
+ if ((res = dhd_wl_ioctl_cmd(dhd_pub, WLC_GET_VAR, pbuf, TEMP_BUF_SIZE, FALSE, 0)) < 0) {
+ DHD_ERROR(("%s: Get mkeep_alive failed (error=%d)\n", __FUNCTION__, res));
+ goto exit;
+ } else {
+ /* Check occupied ID */
+ mkeep_alive_pktp = (wl_mkeep_alive_pkt_t *) pbuf;
+ DHD_INFO(("%s: mkeep_alive\n", __FUNCTION__));
+ DHD_INFO((" Id : %d\n"
+ " Period: %d msec\n"
+ " Length: %d\n"
+ " Packet: 0x",
+ mkeep_alive_pktp->keep_alive_id,
+ dtoh32(mkeep_alive_pktp->period_msec),
+ dtoh16(mkeep_alive_pktp->len_bytes)));
+
+ for (i = 0; i < mkeep_alive_pktp->len_bytes; i++) {
+ DHD_INFO(("%02x", mkeep_alive_pktp->data[i]));
+ }
+ DHD_INFO(("\n"));
+ }
+
+ /* Make it stop if available */
+ if (dtoh32(mkeep_alive_pktp->period_msec) != 0) {
+ DHD_INFO(("stop mkeep_alive on ID %d\n", mkeep_alive_id));
+ memset(&mkeep_alive_pkt, 0, sizeof(wl_mkeep_alive_pkt_t));
+ memset(pbuf, 0, TEMP_BUF_SIZE);
+ str = "mkeep_alive";
+ str_len = strlen(str);
+ strncpy(pbuf, str, str_len);
+ pbuf[str_len] = '\0';
+
+ mkeep_alive_pktp = (wl_mkeep_alive_pkt_t *) (pbuf + str_len + 1);
+
+ mkeep_alive_pkt.period_msec = 0;
+ buf_len = str_len + 1;
+ mkeep_alive_pkt.version = htod16(WL_MKEEP_ALIVE_VERSION);
+ mkeep_alive_pkt.length = htod16(WL_MKEEP_ALIVE_FIXED_LEN);
+ mkeep_alive_pkt.keep_alive_id = mkeep_alive_id;
+ buf_len += WL_MKEEP_ALIVE_FIXED_LEN;
+
+ /*
+ * Keep-alive attributes are set in local variable (mkeep_alive_pkt), and
+ * then memcpy'ed into buffer (mkeep_alive_pktp) since there is no
+ * guarantee that the buffer is properly aligned.
+ */
+ memcpy((char *)mkeep_alive_pktp, &mkeep_alive_pkt, WL_MKEEP_ALIVE_FIXED_LEN);
+
+ res = dhd_wl_ioctl_cmd(dhd_pub, WLC_SET_VAR, pbuf, buf_len, TRUE, 0);
+ } else {
+ DHD_ERROR(("%s: ID %u does not exist.\n", __FUNCTION__, mkeep_alive_id));
+ res = BCME_NOTFOUND;
+ }
+exit:
+ kfree(pbuf);
+ return res;
+}
+#endif /* defined(KEEP_ALIVE) */
+
#if (LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 27))
static void dhd_hang_process(void *dhd_info, void *event_info, u8 event)
{
@@ -5940,9 +8546,11 @@
dev = dhd->iflist[0]->net;
if (dev) {
+#ifndef NO_AUTO_RECOVERY
rtnl_lock();
dev_close(dev);
rtnl_unlock();
+#endif
#if defined(WL_WIRELESS_EXT)
wl_iw_send_priv_event(dev, "HANG");
#endif
@@ -5958,8 +8566,8 @@
if (dhdp) {
if (!dhdp->hang_was_sent) {
dhdp->hang_was_sent = 1;
- dhd_deferred_schedule_work((void *)dhdp, DHD_WQ_WORK_HANG_MSG,
- dhd_hang_process, DHD_WORK_PRIORITY_HIGH);
+ dhd_deferred_schedule_work(dhdp->info->dhd_deferred_wq, (void *)dhdp,
+ DHD_WQ_WORK_HANG_MSG, dhd_hang_process, DHD_WORK_PRIORITY_HIGH);
}
}
return ret;
@@ -5967,7 +8575,7 @@
int net_os_send_hang_message(struct net_device *dev)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
int ret = 0;
if (dhd) {
@@ -5992,21 +8600,33 @@
int dhd_net_wifi_platform_set_power(struct net_device *dev, bool on, unsigned long delay_msec)
{
- dhd_info_t *dhdinfo = *(dhd_info_t **)netdev_priv(dev);
- return wifi_platform_set_power(dhdinfo->adapter, on, delay_msec);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
+ return wifi_platform_set_power(dhd->adapter, on, delay_msec);
+}
+
+bool dhd_force_country_change(struct net_device *dev)
+{
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
+
+ if (dhd && dhd->pub.up)
+ return dhd->pub.force_country_change;
+ return FALSE;
}
void dhd_get_customized_country_code(struct net_device *dev, char *country_iso_code,
wl_country_t *cspec)
{
- dhd_info_t *dhdinfo = *(dhd_info_t **)netdev_priv(dev);
- get_customized_country_code(dhdinfo->adapter, country_iso_code, cspec);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
+ get_customized_country_code(dhd->adapter, country_iso_code, cspec,
+ dhd->pub.dhd_cflags);
}
+
void dhd_bus_country_set(struct net_device *dev, wl_country_t *cspec, bool notify)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
if (dhd && dhd->pub.up) {
memcpy(&dhd->pub.dhd_cspec, cspec, sizeof(wl_country_t));
+ dhd->pub.force_country_change = FALSE;
#ifdef WL_CFG80211
wl_update_wiphybands(NULL, notify);
#endif
@@ -6015,7 +8635,7 @@
void dhd_bus_band_set(struct net_device *dev, uint band)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
if (dhd && dhd->pub.up) {
#ifdef WL_CFG80211
wl_update_wiphybands(NULL, true);
@@ -6025,7 +8645,7 @@
int dhd_net_set_fw_path(struct net_device *dev, char *fw)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
if (!fw || fw[0] == '\0')
return -EINVAL;
@@ -6041,19 +8661,19 @@
DHD_INFO(("GOT STA FIRMWARE\n"));
ap_fw_loaded = FALSE;
}
-#endif
+#endif
return 0;
}
void dhd_net_if_lock(struct net_device *dev)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
dhd_net_if_lock_local(dhd);
}
void dhd_net_if_unlock(struct net_device *dev)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
dhd_net_if_unlock_local(dhd);
}
@@ -6091,7 +8711,7 @@
#endif
}
-unsigned long dhd_os_spin_lock(dhd_pub_t *pub)
+unsigned long dhd_os_general_spin_lock(dhd_pub_t *pub)
{
dhd_info_t *dhd = (dhd_info_t *)(pub->info);
unsigned long flags = 0;
@@ -6102,7 +8722,7 @@
return flags;
}
-void dhd_os_spin_unlock(dhd_pub_t *pub, unsigned long flags)
+void dhd_os_general_spin_unlock(dhd_pub_t *pub, unsigned long flags)
{
dhd_info_t *dhd = (dhd_info_t *)(pub->info);
@@ -6110,18 +8730,52 @@
spin_unlock_irqrestore(&dhd->dhd_lock, flags);
}
+/* Linux specific multipurpose spinlock API */
+void *
+dhd_os_spin_lock_init(osl_t *osh)
+{
+ /* Adding 4 bytes since the sizeof(spinlock_t) could be 0 */
+ /* if CONFIG_SMP and CONFIG_DEBUG_SPINLOCK are not defined */
+ /* and this results in kernel asserts in internal builds */
+ spinlock_t * lock = MALLOC(osh, sizeof(spinlock_t) + 4);
+ if (lock)
+ spin_lock_init(lock);
+ return ((void *)lock);
+}
+void
+dhd_os_spin_lock_deinit(osl_t *osh, void *lock)
+{
+ MFREE(osh, lock, sizeof(spinlock_t) + 4);
+}
+unsigned long
+dhd_os_spin_lock(void *lock)
+{
+ unsigned long flags = 0;
+
+ if (lock)
+ spin_lock_irqsave((spinlock_t *)lock, flags);
+
+ return flags;
+}
+void
+dhd_os_spin_unlock(void *lock, unsigned long flags)
+{
+ if (lock)
+ spin_unlock_irqrestore((spinlock_t *)lock, flags);
+}
+
static int
dhd_get_pend_8021x_cnt(dhd_info_t *dhd)
{
return (atomic_read(&dhd->pend_8021x_cnt));
}
-#define MAX_WAIT_FOR_8021X_TX 50
+#define MAX_WAIT_FOR_8021X_TX 100
int
dhd_wait_pend8021x(struct net_device *dev)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
int timeout = msecs_to_jiffies(10);
int ntimes = MAX_WAIT_FOR_8021X_TX;
int pend = dhd_get_pend_8021x_cnt(dhd);
@@ -6129,7 +8783,9 @@
while (ntimes && pend) {
if (pend) {
set_current_state(TASK_INTERRUPTIBLE);
+ DHD_PERIM_UNLOCK(&dhd->pub);
schedule_timeout(timeout);
+ DHD_PERIM_LOCK(&dhd->pub);
set_current_state(TASK_RUNNING);
ntimes--;
}
@@ -6157,7 +8813,7 @@
set_fs(KERNEL_DS);
/* open file to write */
- fp = filp_open("/tmp/mem_dump", O_WRONLY|O_CREAT, 0640);
+ fp = filp_open("/data/misc/wifi/mem_dump", O_WRONLY|O_CREAT, 0640);
if (!fp) {
printf("%s: open file error\n", __FUNCTION__);
ret = -1;
@@ -6207,7 +8863,7 @@
int net_os_wake_lock_timeout(struct net_device *dev)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
int ret = 0;
if (dhd)
@@ -6262,7 +8918,7 @@
int net_os_wake_lock_rx_timeout_enable(struct net_device *dev, int val)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
int ret = 0;
if (dhd)
@@ -6272,7 +8928,7 @@
int net_os_wake_lock_ctrl_timeout_enable(struct net_device *dev, int val)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
int ret = 0;
if (dhd)
@@ -6292,7 +8948,7 @@
#ifdef CONFIG_HAS_WAKELOCK
wake_lock(&dhd->wl_wifi);
#elif defined(BCMSDIO) && (LINUX_VERSION_CODE > KERNEL_VERSION(2, 6, 36))
- dhd_bus_dev_pm_stay_awake(pub);
+ dhd_bus_dev_pm_stay_awake(pub);
#endif
}
dhd->wakelock_counter++;
@@ -6304,7 +8960,7 @@
int net_os_wake_lock(struct net_device *dev)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
int ret = 0;
if (dhd)
@@ -6327,7 +8983,7 @@
#ifdef CONFIG_HAS_WAKELOCK
wake_unlock(&dhd->wl_wifi);
#elif defined(BCMSDIO) && (LINUX_VERSION_CODE > KERNEL_VERSION(2, 6, 36))
- dhd_bus_dev_pm_relax(pub);
+ dhd_bus_dev_pm_relax(pub);
#endif
}
ret = dhd->wakelock_counter;
@@ -6359,9 +9015,36 @@
#endif
return 0;
}
+
+int dhd_os_check_wakelock_all(dhd_pub_t *pub)
+{
+#if defined(CONFIG_HAS_WAKELOCK) || \
+ (defined(BCMSDIO) && (LINUX_VERSION_CODE > KERNEL_VERSION(2, 6, 36)))
+ dhd_info_t *dhd;
+
+ if (!pub)
+ return 0;
+ dhd = (dhd_info_t *)(pub->info);
+#endif /* CONFIG_HAS_WAKELOCK || BCMSDIO */
+
+#ifdef CONFIG_HAS_WAKELOCK
+ /* Indicate to the SD Host to avoid going to suspend if internal locks are up */
+ if (dhd && (wake_lock_active(&dhd->wl_wifi) ||
+ wake_lock_active(&dhd->wl_wdwake) ||
+ wake_lock_active(&dhd->wl_rxwake) ||
+ wake_lock_active(&dhd->wl_ctrlwake))) {
+ return 1;
+ }
+#elif defined(BCMSDIO) && (LINUX_VERSION_CODE > KERNEL_VERSION(2, 6, 36))
+ if (dhd && (dhd->wakelock_counter > 0) && dhd_bus_dev_pm_enabled(pub))
+ return 1;
+#endif
+ return 0;
+}
+
int net_os_wake_unlock(struct net_device *dev)
{
- dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
int ret = 0;
if (dhd)
@@ -6377,12 +9060,11 @@
if (dhd) {
spin_lock_irqsave(&dhd->wakelock_spinlock, flags);
- if (dhd->wakelock_wd_counter == 0 && !dhd->waive_wakelock) {
#ifdef CONFIG_HAS_WAKELOCK
- /* if wakelock_wd_counter was never used : lock it at once */
+ /* if wakelock_wd_counter was never used : lock it at once */
+ if (!dhd->wakelock_wd_counter)
wake_lock(&dhd->wl_wdwake);
#endif
- }
dhd->wakelock_wd_counter++;
ret = dhd->wakelock_wd_counter;
spin_unlock_irqrestore(&dhd->wakelock_spinlock, flags);
@@ -6398,77 +9080,78 @@
if (dhd) {
spin_lock_irqsave(&dhd->wakelock_spinlock, flags);
- if (dhd->wakelock_wd_counter > 0) {
+ if (dhd->wakelock_wd_counter) {
dhd->wakelock_wd_counter = 0;
- if (!dhd->waive_wakelock) {
#ifdef CONFIG_HAS_WAKELOCK
- wake_unlock(&dhd->wl_wdwake);
+ wake_unlock(&dhd->wl_wdwake);
#endif
- }
}
spin_unlock_irqrestore(&dhd->wakelock_spinlock, flags);
}
return ret;
}
-#ifdef PROP_TXSTATUS
/* waive wakelocks for operations such as IOVARs in suspend function, must be closed
* by a paired function call to dhd_wakelock_restore. returns current wakelock counter
*/
-int dhd_wakelock_waive(dhd_info_t *dhdinfo)
+int dhd_os_wake_lock_waive(dhd_pub_t *pub)
{
+ dhd_info_t *dhd = (dhd_info_t *)(pub->info);
unsigned long flags;
int ret = 0;
- spin_lock_irqsave(&dhdinfo->wakelock_spinlock, flags);
- /* dhd_wakelock_waive/dhd_wakelock_restore must be paired */
- if (dhdinfo->waive_wakelock)
- goto exit;
- /* record current lock status */
- dhdinfo->wakelock_before_waive = dhdinfo->wakelock_counter;
- dhdinfo->waive_wakelock = TRUE;
-
-exit:
- ret = dhdinfo->wakelock_wd_counter;
- spin_unlock_irqrestore(&dhdinfo->wakelock_spinlock, flags);
+ if (dhd) {
+ spin_lock_irqsave(&dhd->wakelock_spinlock, flags);
+ /* dhd_wakelock_waive/dhd_wakelock_restore must be paired */
+ if (dhd->waive_wakelock == FALSE) {
+ /* record current lock status */
+ dhd->wakelock_before_waive = dhd->wakelock_counter;
+ dhd->waive_wakelock = TRUE;
+ }
+ ret = dhd->wakelock_wd_counter;
+ spin_unlock_irqrestore(&dhd->wakelock_spinlock, flags);
+ }
return ret;
}
-int dhd_wakelock_restore(dhd_info_t *dhdinfo)
+int dhd_os_wake_lock_restore(dhd_pub_t *pub)
{
+ dhd_info_t *dhd = (dhd_info_t *)(pub->info);
unsigned long flags;
int ret = 0;
- spin_lock_irqsave(&dhdinfo->wakelock_spinlock, flags);
+ if (!dhd)
+ return 0;
+
+ spin_lock_irqsave(&dhd->wakelock_spinlock, flags);
/* dhd_wakelock_waive/dhd_wakelock_restore must be paired */
- if (!dhdinfo->waive_wakelock)
+ if (!dhd->waive_wakelock)
goto exit;
- dhdinfo->waive_wakelock = FALSE;
+ dhd->waive_wakelock = FALSE;
/* if somebody else acquires wakelock between dhd_wakelock_waive/dhd_wakelock_restore,
- * we need to make it up by calling wake_lock or pm_stay_awake. or if somebody releases
- * the lock in between, do the same by calling wake_unlock or pm_relax
- */
- if (dhdinfo->wakelock_before_waive == 0 && dhdinfo->wakelock_counter > 0) {
+ * we need to make it up by calling wake_lock or pm_stay_awake. or if somebody releases
+ * the lock in between, do the same by calling wake_unlock or pm_relax
+ */
+ if (dhd->wakelock_before_waive == 0 && dhd->wakelock_counter > 0) {
#ifdef CONFIG_HAS_WAKELOCK
- wake_lock(&dhdinfo->wl_wifi);
-#elif (LINUX_VERSION_CODE > KERNEL_VERSION(2, 6, 36))
- dhd_bus_dev_pm_stay_awake(&dhdinfo->pub);
+ wake_lock(&dhd->wl_wifi);
+#elif defined(BCMSDIO) && (LINUX_VERSION_CODE > KERNEL_VERSION(2, 6, 36))
+ dhd_bus_dev_pm_stay_awake(&dhd->pub);
#endif
- } else if (dhdinfo->wakelock_before_waive > 0 && dhdinfo->wakelock_counter == 0) {
+ } else if (dhd->wakelock_before_waive > 0 && dhd->wakelock_counter == 0) {
#ifdef CONFIG_HAS_WAKELOCK
- wake_unlock(&dhdinfo->wl_wifi);
-#elif (LINUX_VERSION_CODE > KERNEL_VERSION(2, 6, 36))
- dhd_bus_dev_pm_relax(&dhdinfo->pub);
+ wake_unlock(&dhd->wl_wifi);
+#elif defined(BCMSDIO) && (LINUX_VERSION_CODE > KERNEL_VERSION(2, 6, 36))
+ dhd_bus_dev_pm_relax(&dhd->pub);
#endif
}
- dhdinfo->wakelock_before_waive = 0;
+ dhd->wakelock_before_waive = 0;
exit:
- ret = dhdinfo->wakelock_wd_counter;
- spin_unlock_irqrestore(&dhdinfo->wakelock_spinlock, flags);
+ ret = dhd->wakelock_wd_counter;
+ spin_unlock_irqrestore(&dhd->wakelock_spinlock, flags);
return ret;
}
-#endif /* PROP_TXSTATUS */
bool dhd_os_check_if_up(dhd_pub_t *pub)
{
@@ -6477,7 +9160,13 @@
return pub->up;
}
-#if defined(BCMSDIO)
+int dhd_os_get_wake_irq(dhd_pub_t *pub)
+{
+ if (!pub)
+ return -1;
+ return wifi_platform_get_wake_irq(pub->info->adapter);
+}
+
/* function to collect firmware, chip id and chip version info */
void dhd_set_version_info(dhd_pub_t *dhdp, char *fw)
{
@@ -6493,19 +9182,18 @@
"\n Chip: %x Rev %x Pkg %x", dhd_bus_chip_id(dhdp),
dhd_bus_chiprev_id(dhdp), dhd_bus_chippkg_id(dhdp));
}
-#endif /* defined(BCMSDIO) */
int dhd_ioctl_entry_local(struct net_device *net, wl_ioctl_t *ioc, int cmd)
{
int ifidx;
int ret = 0;
dhd_info_t *dhd = NULL;
- if (!net || !netdev_priv(net)) {
+ if (!net || !DEV_PRIV(net)) {
DHD_ERROR(("%s invalid parameter\n", __FUNCTION__));
return -EINVAL;
}
- dhd = *(dhd_info_t **)netdev_priv(net);
+ dhd = DHD_DEV_INFO(net);
if (!dhd)
return -EINVAL;
@@ -6516,8 +9204,12 @@
}
DHD_OS_WAKE_LOCK(&dhd->pub);
+ DHD_PERIM_LOCK(&dhd->pub);
+
ret = dhd_wl_ioctl(&dhd->pub, ifidx, ioc, ioc->buf, ioc->len);
dhd_check_hang(net, &dhd->pub, ret);
+
+ DHD_PERIM_UNLOCK(&dhd->pub);
DHD_OS_WAKE_UNLOCK(&dhd->pub);
return ret;
@@ -6536,6 +9228,46 @@
return dhd_check_hang(net, dhdp, ret);
}
+/* Return instance */
+int dhd_get_instance(dhd_pub_t *dhdp)
+{
+ return dhdp->info->unit;
+}
+
+void dhd_set_short_dwell_time(dhd_pub_t *dhd, int set)
+{
+ int scan_assoc_time = DHD_SCAN_ASSOC_ACTIVE_TIME;
+ int scan_unassoc_time = DHD_SCAN_UNASSOC_ACTIVE_TIME;
+ int scan_passive_time = DHD_SCAN_PASSIVE_TIME;
+
+ DHD_TRACE(("%s: Enter: %d\n", __FUNCTION__, set));
+ if (dhd->short_dwell_time != set) {
+ if (set) {
+ scan_unassoc_time = DHD_SCAN_UNASSOC_ACTIVE_TIME_PS;
+ }
+ dhd_wl_ioctl_cmd(dhd, WLC_SET_SCAN_UNASSOC_TIME,
+ (char *)&scan_unassoc_time,
+ sizeof(scan_unassoc_time), TRUE, 0);
+ if (dhd->short_dwell_time == -1) {
+ dhd_wl_ioctl_cmd(dhd, WLC_SET_SCAN_CHANNEL_TIME,
+ (char *)&scan_assoc_time,
+ sizeof(scan_assoc_time), TRUE, 0);
+ dhd_wl_ioctl_cmd(dhd, WLC_SET_SCAN_PASSIVE_TIME,
+ (char *)&scan_passive_time,
+ sizeof(scan_passive_time), TRUE, 0);
+ }
+ dhd->short_dwell_time = set;
+ }
+}
+
+#ifdef CUSTOM_SET_SHORT_DWELL_TIME
+void net_set_short_dwell_time(struct net_device *dev, int set)
+{
+ dhd_info_t *dhd = DHD_DEV_INFO(dev);
+
+ dhd_set_short_dwell_time(&dhd->pub, set);
+}
+#endif
#ifdef PROP_TXSTATUS
@@ -6667,13 +9399,13 @@
}
}
-void dhd_dbg_init(dhd_pub_t *dhdp)
+void dhd_dbgfs_init(dhd_pub_t *dhdp)
{
int err;
-
+#ifdef BCMDBGFS_MEM
g_dbgfs.dhdp = dhdp;
g_dbgfs.size = 0x20000000; /* Allow access to various cores regs */
-
+#endif
g_dbgfs.debugfs_dir = debugfs_create_dir("dhd", 0);
if (IS_ERR(g_dbgfs.debugfs_dir)) {
err = PTR_ERR(g_dbgfs.debugfs_dir);
@@ -6686,18 +9418,19 @@
return;
}
-void dhd_dbg_remove(void)
+void dhd_dbgfs_remove(void)
{
+#ifdef BCMDBGFS_MEM
debugfs_remove(g_dbgfs.debugfs_mem);
+#endif
debugfs_remove(g_dbgfs.debugfs_dir);
bzero((unsigned char *) &g_dbgfs, sizeof(g_dbgfs));
}
-#endif /* ifdef BCMDBGFS */
+#endif /* BCMDBGFS */
#ifdef WLMEDIA_HTSF
-
static
void dhd_htsf_addtxts(dhd_pub_t *dhdp, void *pktbuf)
{
@@ -7074,3 +9807,342 @@
return;
}
#endif /* CUSTOM_SET_CPUCORE */
+
+/* Get interface specific ap_isolate configuration */
+int dhd_get_ap_isolate(dhd_pub_t *dhdp, uint32 idx)
+{
+ dhd_info_t *dhd = dhdp->info;
+ dhd_if_t *ifp;
+
+ ASSERT(idx < DHD_MAX_IFS);
+
+ ifp = dhd->iflist[idx];
+
+ return ifp->ap_isolate;
+}
+
+/* Set interface specific ap_isolate configuration */
+int dhd_set_ap_isolate(dhd_pub_t *dhdp, uint32 idx, int val)
+{
+ dhd_info_t *dhd = dhdp->info;
+ dhd_if_t *ifp;
+
+ ASSERT(idx < DHD_MAX_IFS);
+
+ ifp = dhd->iflist[idx];
+
+ ifp->ap_isolate = val;
+
+ return 0;
+}
+
+static void
+dhd_mem_dump_to_file(void *handle, void *event_info, u8 event)
+{
+ dhd_info_t *dhd = handle;
+ dhd_dump_t *dump = event_info;
+
+ if (!dhd) {
+ DHD_ERROR(("%s: dhd is NULL\n", __FUNCTION__));
+ return;
+ }
+
+ if (!dump) {
+ DHD_ERROR(("%s: dump is NULL\n", __FUNCTION__));
+ return;
+ }
+
+ if (dhd->pub.memdump_enabled == DUMP_MEMFILE) {
+ if (write_to_file(&dhd->pub, dump->buf, dump->bufsize)) {
+ DHD_ERROR(("%s: writing SoC_RAM dump to the file failed\n", __FUNCTION__));
+ }
+ }
+
+ if (dhd->pub.memdump_enabled == DUMP_MEMFILE_BUGON) {
+ BUG_ON(1);
+ }
+ MFREE(dhd->pub.osh, dump, sizeof(dhd_dump_t));
+}
+
+void dhd_schedule_memdump(dhd_pub_t *dhdp, uint8 *buf, uint32 size)
+{
+ dhd_dump_t *dump = NULL;
+ dump = (dhd_dump_t *)MALLOC(dhdp->osh, sizeof(dhd_dump_t));
+ if (dump == NULL) {
+ DHD_ERROR(("%s: dhd dump memory allocation failed\n", __FUNCTION__));
+ return;
+ }
+ dump->buf = buf;
+ dump->bufsize = size;
+
+ dhd_deferred_schedule_work(dhdp->info->dhd_deferred_wq, (void *)dump,
+ DHD_WQ_WORK_SOC_RAM_DUMP, dhd_mem_dump_to_file, DHD_WORK_PRIORITY_HIGH);
+}
+int dhd_os_socram_dump(struct net_device *dev, uint32 *dump_size)
+{
+ int ret = BCME_OK;
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_pub_t *dhdp = &dhd->pub;
+
+ if (dhdp->busstate == DHD_BUS_DOWN) {
+ return BCME_ERROR;
+ }
+ ret = dhd_common_socram_dump(dhdp);
+ if (ret == BCME_OK) {
+ *dump_size = dhdp->soc_ram_length;
+ }
+ return ret;
+}
+
+int dhd_os_get_socram_dump(struct net_device *dev, char **buf, uint32 *size)
+{
+ int ret = BCME_OK;
+ int orig_len = 0;
+ dhd_info_t *dhd = *(dhd_info_t **)netdev_priv(dev);
+ dhd_pub_t *dhdp = &dhd->pub;
+ if (buf == NULL)
+ return BCME_ERROR;
+ orig_len = *size;
+ if (dhdp->soc_ram) {
+ if (orig_len >= dhdp->soc_ram_length) {
+ memcpy(*buf, dhdp->soc_ram, dhdp->soc_ram_length);
+ /* reset the storage of dump */
+ memset(dhdp->soc_ram, 0, dhdp->soc_ram_length);
+ *size = dhdp->soc_ram_length;
+ dhdp->soc_ram_length = 0;
+ } else {
+ ret = BCME_BUFTOOSHORT;
+ DHD_ERROR(("The length of the buffer is too short"
+ " to save the memory dump with %d\n", dhdp->soc_ram_length));
+ }
+ } else {
+ DHD_ERROR(("socram_dump is not ready to get\n"));
+ ret = BCME_NOTREADY;
+ }
+ return ret;
+}
+
+int dhd_os_get_version(struct net_device *dev, bool dhd_ver, char **buf, uint32 size)
+{
+ int ret = BCME_OK;
+ memset(*buf, 0, size);
+ if (dhd_ver) {
+ strncpy(*buf, dhd_version, size - 1);
+ } else {
+ const char *fw_str = strstr(info_string, "Firmware: ");
+ if (fw_str == NULL) {
+ return BCME_NOTFOUND;
+ }
+ strncpy(*buf, fw_str, size - 1);
+ }
+ return ret;
+}
+
+#ifdef DHD_WMF
+/* Returns interface specific WMF configuration */
+dhd_wmf_t* dhd_wmf_conf(dhd_pub_t *dhdp, uint32 idx)
+{
+ dhd_info_t *dhd = dhdp->info;
+ dhd_if_t *ifp;
+
+ ASSERT(idx < DHD_MAX_IFS);
+
+ ifp = dhd->iflist[idx];
+ return &ifp->wmf;
+}
+#endif /* DHD_WMF */
+
+
+#ifdef DHD_UNICAST_DHCP
+static int
+dhd_get_pkt_ether_type(dhd_pub_t *pub, void *pktbuf,
+ uint8 **data_ptr, int *len_ptr, uint16 *et_ptr, bool *snap_ptr)
+{
+ uint8 *frame = PKTDATA(pub->osh, pktbuf);
+ int length = PKTLEN(pub->osh, pktbuf);
+ uint8 *pt; /* Pointer to type field */
+ uint16 ethertype;
+ bool snap = FALSE;
+ /* Process Ethernet II or SNAP-encapsulated 802.3 frames */
+ if (length < ETHER_HDR_LEN) {
+ DHD_ERROR(("dhd: %s: short eth frame (%d)\n",
+ __FUNCTION__, length));
+ return BCME_ERROR;
+ } else if (ntoh16_ua(frame + ETHER_TYPE_OFFSET) >= ETHER_TYPE_MIN) {
+ /* Frame is Ethernet II */
+ pt = frame + ETHER_TYPE_OFFSET;
+ } else if (length >= ETHER_HDR_LEN + SNAP_HDR_LEN + ETHER_TYPE_LEN &&
+ !bcmp(llc_snap_hdr, frame + ETHER_HDR_LEN, SNAP_HDR_LEN)) {
+ pt = frame + ETHER_HDR_LEN + SNAP_HDR_LEN;
+ snap = TRUE;
+ } else {
+ DHD_INFO(("DHD: %s: non-SNAP 802.3 frame\n",
+ __FUNCTION__));
+ return BCME_ERROR;
+ }
+
+ ethertype = ntoh16_ua(pt);
+
+ /* Skip VLAN tag, if any */
+ if (ethertype == ETHER_TYPE_8021Q) {
+ pt += VLAN_TAG_LEN;
+
+ if ((pt + ETHER_TYPE_LEN) > (frame + length)) {
+ DHD_ERROR(("dhd: %s: short VLAN frame (%d)\n",
+ __FUNCTION__, length));
+ return BCME_ERROR;
+ }
+
+ ethertype = ntoh16_ua(pt);
+ }
+
+ *data_ptr = pt + ETHER_TYPE_LEN;
+ *len_ptr = length - (pt + ETHER_TYPE_LEN - frame);
+ *et_ptr = ethertype;
+ *snap_ptr = snap;
+ return BCME_OK;
+}
+
+static int
+dhd_get_pkt_ip_type(dhd_pub_t *pub, void *pktbuf,
+ uint8 **data_ptr, int *len_ptr, uint8 *prot_ptr)
+{
+ struct ipv4_hdr *iph; /* IP frame pointer */
+ int iplen; /* IP frame length */
+ uint16 ethertype, iphdrlen, ippktlen;
+ uint16 iph_frag;
+ uint8 prot;
+ bool snap;
+
+ if (dhd_get_pkt_ether_type(pub, pktbuf, (uint8 **)&iph,
+ &iplen, &ethertype, &snap) != 0)
+ return BCME_ERROR;
+
+ if (ethertype != ETHER_TYPE_IP) {
+ return BCME_ERROR;
+ }
+
+ /* We support IPv4 only */
+ if (iplen < IPV4_OPTIONS_OFFSET || (IP_VER(iph) != IP_VER_4)) {
+ return BCME_ERROR;
+ }
+
+ /* Header length sanity */
+ iphdrlen = IPV4_HLEN(iph);
+
+ /*
+ * Packet length sanity; sometimes we receive eth-frame size bigger
+ * than the IP content, which results in a bad tcp chksum
+ */
+ ippktlen = ntoh16(iph->tot_len);
+ if (ippktlen < iplen) {
+
+ DHD_INFO(("%s: extra frame length ignored\n",
+ __FUNCTION__));
+ iplen = ippktlen;
+ } else if (ippktlen > iplen) {
+ DHD_ERROR(("dhd: %s: truncated IP packet (%d)\n",
+ __FUNCTION__, ippktlen - iplen));
+ return BCME_ERROR;
+ }
+
+ if (iphdrlen < IPV4_OPTIONS_OFFSET || iphdrlen > iplen) {
+ DHD_ERROR(("DHD: %s: IP-header-len (%d) out of range (%d-%d)\n",
+ __FUNCTION__, iphdrlen, IPV4_OPTIONS_OFFSET, iplen));
+ return BCME_ERROR;
+ }
+
+ /*
+ * We don't handle fragmented IP packets. A first frag is indicated by the MF
+ * (more frag) bit and a subsequent frag is indicated by a non-zero frag offset.
+ */
+ iph_frag = ntoh16(iph->frag);
+
+ if ((iph_frag & IPV4_FRAG_MORE) || (iph_frag & IPV4_FRAG_OFFSET_MASK) != 0) {
+ DHD_INFO(("DHD: %s: IP fragment not handled\n",
+ __FUNCTION__));
+ return BCME_ERROR;
+ }
+
+ prot = IPV4_PROT(iph);
+
+ *data_ptr = (((uint8 *)iph) + iphdrlen);
+ *len_ptr = iplen - iphdrlen;
+ *prot_ptr = prot;
+ return BCME_OK;
+}
+
+/** check the packet type, if it is DHCP ACK/REPLY, convert into unicast packet */
+static
+int dhd_convert_dhcp_broadcast_ack_to_unicast(dhd_pub_t *pub, void *pktbuf, int ifidx)
+{
+ dhd_sta_t* stainfo;
+ uint8 *eh = PKTDATA(pub->osh, pktbuf);
+ uint8 *udph;
+ uint8 *dhcp;
+ uint8 *chaddr;
+ int udpl;
+ int dhcpl;
+ uint16 port;
+ uint8 prot;
+
+ if (!ETHER_ISMULTI(eh + ETHER_DEST_OFFSET))
+ return BCME_ERROR;
+ if (dhd_get_pkt_ip_type(pub, pktbuf, &udph, &udpl, &prot) != 0)
+ return BCME_ERROR;
+ if (prot != IP_PROT_UDP)
+ return BCME_ERROR;
+ /* check frame length, at least UDP_HDR_LEN */
+ if (udpl < UDP_HDR_LEN) {
+ DHD_ERROR(("DHD: %s: short UDP frame, ignored\n",
+ __FUNCTION__));
+ return BCME_ERROR;
+ }
+ port = ntoh16_ua(udph + UDP_DEST_PORT_OFFSET);
+ /* only process DHCP packets from server to client */
+ if (port != DHCP_PORT_CLIENT)
+ return BCME_ERROR;
+
+ dhcp = udph + UDP_HDR_LEN;
+ dhcpl = udpl - UDP_HDR_LEN;
+
+ if (dhcpl < DHCP_CHADDR_OFFSET + ETHER_ADDR_LEN) {
+ DHD_ERROR(("DHD: %s: short DHCP frame, ignored\n",
+ __FUNCTION__));
+ return BCME_ERROR;
+ }
+ /* only process DHCP reply(offer/ack) packets */
+ if (*(dhcp + DHCP_TYPE_OFFSET) != DHCP_TYPE_REPLY)
+ return BCME_ERROR;
+ chaddr = dhcp + DHCP_CHADDR_OFFSET;
+ stainfo = dhd_find_sta(pub, ifidx, chaddr);
+ if (stainfo) {
+ bcopy(chaddr, eh + ETHER_DEST_OFFSET, ETHER_ADDR_LEN);
+ return BCME_OK;
+ }
+ return BCME_ERROR;
+}
+#endif /* DHD_UNICAST_DHD */
+#ifdef DHD_L2_FILTER
+/* Check if packet type is ICMP ECHO */
+static
+int dhd_l2_filter_block_ping(dhd_pub_t *pub, void *pktbuf, int ifidx)
+{
+ struct bcmicmp_hdr *icmph;
+ int udpl;
+ uint8 prot;
+
+ if (dhd_get_pkt_ip_type(pub, pktbuf, (uint8 **)&icmph, &udpl, &prot) != 0)
+ return BCME_ERROR;
+ if (prot == IP_PROT_ICMP) {
+ if (icmph->type == ICMP_TYPE_ECHO_REQUEST)
+ return BCME_OK;
+ }
+ return BCME_ERROR;
+}
+#endif /* DHD_L2_FILTER */
+struct net_device *
+dhd_linux_get_primary_netdev(dhd_pub_t *dhdp)
+{
+ dhd_info_t *dhd = dhdp->info;
+
+ if (dhd->iflist[0] && dhd->iflist[0]->net)
+ return dhd->iflist[0]->net;
+ else
+ return NULL;
+}
diff --git a/drivers/net/wireless/bcmdhd/dhd_linux.h b/drivers/net/wireless/bcmdhd/dhd_linux.h
old mode 100755
new mode 100644
index 7a15fc1..e3cf3af
--- a/drivers/net/wireless/bcmdhd/dhd_linux.h
+++ b/drivers/net/wireless/bcmdhd/dhd_linux.h
@@ -2,13 +2,13 @@
* DHD Linux header file (dhd_linux exports for cfg80211 and other components)
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -16,7 +16,7 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
@@ -34,8 +34,45 @@
#include <linux/kernel.h>
#include <linux/init.h>
+#include <linux/fs.h>
#include <dngl_stats.h>
#include <dhd.h>
+#ifdef DHD_WMF
+#include <dhd_wmf_linux.h>
+#endif
+/* Linux wireless extension support */
+#if defined(WL_WIRELESS_EXT)
+#include <wl_iw.h>
+#endif /* defined(WL_WIRELESS_EXT) */
+#if defined(CONFIG_HAS_EARLYSUSPEND) && defined(DHD_USE_EARLYSUSPEND)
+#include <linux/earlysuspend.h>
+#endif /* defined(CONFIG_HAS_EARLYSUSPEND) && defined(DHD_USE_EARLYSUSPEND) */
+#if defined(CONFIG_WIFI_CONTROL_FUNC)
+#include <linux/wlan_plat.h>
+#endif
+
+#if !defined(CONFIG_WIFI_CONTROL_FUNC)
+#define WLAN_PLAT_NODFS_FLAG 0x01
+#define WLAN_PLAT_AP_FLAG 0x02
+
+struct wifi_platform_data {
+ int (*set_power)(int val);
+ int (*set_reset)(int val);
+ int (*set_carddetect)(int val);
+ void *(*mem_prealloc)(int section, unsigned long size);
+ int (*get_mac_addr)(unsigned char *buf);
+ int (*get_wake_irq)(void);
+ void *(*get_country_code)(char *ccode, u32 flags);
+#ifdef CONFIG_PARTIALRESUME
+#define WIFI_PR_INIT 0
+#define WIFI_PR_NOTIFY_RESUME 1
+#define WIFI_PR_VOTE_FOR_RESUME 2
+#define WIFI_PR_VOTE_FOR_SUSPEND 3
+#define WIFI_PR_WAIT_FOR_READY 4
+ bool (*partial_resume)(int action);
+#endif
+};
+#endif /* CONFIG_WIFI_CONTROL_FUNC */
#define DHD_REGISTRATION_TIMEOUT 12000 /* msec : allowed time to finished dhd registration */
@@ -56,6 +93,17 @@
wifi_adapter_info_t *adapters;
} bcmdhd_wifi_platdata_t;
+/** Per STA params. A list of dhd_sta objects are managed in dhd_if */
+typedef struct dhd_sta {
+ uint16 flowid[NUMPRIO]; /* allocated flow ring ids (by priority) */
+ void * ifp; /* associated dhd_if */
+ struct ether_addr ea; /* station's ethernet mac address */
+ struct list_head list; /* link into dhd_if::sta_list */
+ int idx; /* index of self in dhd_pub::sta_pool[] */
+ int ifidx; /* index of interface in dhd */
+} dhd_sta_t;
+typedef dhd_sta_t dhd_sta_pool_t;
+
int dhd_wifi_platform_register_drv(void);
void dhd_wifi_platform_unregister_drv(void);
wifi_adapter_info_t* dhd_wifi_platform_get_adapter(uint32 bus_type, uint32 bus_num,
@@ -64,11 +112,17 @@
int wifi_platform_bus_enumerate(wifi_adapter_info_t *adapter, bool device_present);
int wifi_platform_get_irq_number(wifi_adapter_info_t *adapter, unsigned long *irq_flags_ptr);
int wifi_platform_get_mac_addr(wifi_adapter_info_t *adapter, unsigned char *buf);
-void *wifi_platform_get_country_code(wifi_adapter_info_t *adapter, char *ccode);
+void *wifi_platform_get_country_code(wifi_adapter_info_t *adapter, char *ccode,
+ u32 flags);
void* wifi_platform_prealloc(wifi_adapter_info_t *adapter, int section, unsigned long size);
void* wifi_platform_get_prealloc_func_ptr(wifi_adapter_info_t *adapter);
+int wifi_platform_get_wake_irq(wifi_adapter_info_t *adapter);
+bool wifi_process_partial_resume(wifi_adapter_info_t *adapter, int action);
int dhd_get_fw_mode(struct dhd_info *dhdinfo);
bool dhd_update_fw_nv_path(struct dhd_info *dhdinfo);
+#ifdef DHD_WMF
+dhd_wmf_t* dhd_wmf_conf(dhd_pub_t *dhdp, uint32 idx);
+#endif /* DHD_WMF */
#endif /* __DHD_LINUX_H__ */
diff --git a/drivers/net/wireless/bcmdhd/dhd_linux_platdev.c b/drivers/net/wireless/bcmdhd/dhd_linux_platdev.c
old mode 100755
new mode 100644
index 332d2be..e277859
--- a/drivers/net/wireless/bcmdhd/dhd_linux_platdev.c
+++ b/drivers/net/wireless/bcmdhd/dhd_linux_platdev.c
@@ -2,13 +2,13 @@
* Linux platform device for DHD WLAN adapter
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -16,7 +16,7 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
@@ -36,20 +36,6 @@
#include <dhd_bus.h>
#include <dhd_linux.h>
#include <wl_android.h>
-#if defined(CONFIG_WIFI_CONTROL_FUNC)
-#include <linux/wlan_plat.h>
-#endif
-
-#if !defined(CONFIG_WIFI_CONTROL_FUNC)
-struct wifi_platform_data {
- int (*set_power)(int val);
- int (*set_reset)(int val);
- int (*set_carddetect)(int val);
- void *(*mem_prealloc)(int section, unsigned long size);
- int (*get_mac_addr)(unsigned char *buf);
- void *(*get_country_code)(char *ccode);
-};
-#endif /* CONFIG_WIFI_CONTROL_FUNC */
#define WIFI_PLAT_NAME "bcmdhd_wlan"
#define WIFI_PLAT_NAME2 "bcm4329_wlan"
@@ -59,7 +45,7 @@
bcmdhd_wifi_platdata_t *dhd_wifi_platdata = NULL;
static int wifi_plat_dev_probe_ret = 0;
static bool is_power_on = FALSE;
-#ifdef DHD_OF_SUPPORT
+#if defined(DHD_OF_SUPPORT)
static bool dts_enabled = TRUE;
extern struct resource dhd_wlan_resources;
extern struct wifi_platform_data dhd_wlan_control;
@@ -67,7 +53,7 @@
static bool dts_enabled = FALSE;
struct resource dhd_wlan_resources = {0};
struct wifi_platform_data dhd_wlan_control = {0};
-#endif /* CONFIG_OF && !defined(CONFIG_ARCH_MSM) */
+#endif /* defined(DHD_OF_SUPPORT) */
static int dhd_wifi_platform_load(void);
@@ -113,10 +99,12 @@
if (size != 0L)
bzero(alloc_ptr, size);
return alloc_ptr;
+ } else {
+ DHD_ERROR(("%s: failed to alloc static mem section %d\n",
+ __FUNCTION__, section));
}
}
- DHD_ERROR(("%s: failed to alloc static mem section %d\n", __FUNCTION__, section));
return NULL;
}
@@ -164,12 +152,12 @@
#endif /* ENABLE_4335BT_WAR */
if (on)
- sysedp_set_state(plat_data->sysedpc, on);
+ sysedp_set_state(plat_data->sysedpc, 1);
err = plat_data->set_power(on);
- if (!on)
- sysedp_set_state(plat_data->sysedpc, on);
+ if (!on || err)
+ sysedp_set_state(plat_data->sysedpc, 0);
}
if (msec && !err)
@@ -195,11 +183,44 @@
DHD_ERROR(("%s device present %d\n", __FUNCTION__, device_present));
if (plat_data->set_carddetect) {
err = plat_data->set_carddetect(device_present);
+ } else {
+#if defined(CONFIG_ARCH_MSM) && defined(BCMPCIE)
+ extern int msm_pcie_enumerate(u32 rc_idx);
+ msm_pcie_enumerate(1);
+#endif
}
return err;
}
+int wifi_platform_get_wake_irq(wifi_adapter_info_t *adapter)
+{
+ struct wifi_platform_data *plat_data;
+
+ if (!adapter || !adapter->wifi_plat_data)
+ return -1;
+ plat_data = adapter->wifi_plat_data;
+ if (plat_data->get_wake_irq)
+ return plat_data->get_wake_irq();
+ return -1;
+}
+
+bool wifi_process_partial_resume(wifi_adapter_info_t *adapter, int action)
+{
+#ifdef CONFIG_PARTIALRESUME
+ struct wifi_platform_data *plat_data;
+
+ if (!adapter || !adapter->wifi_plat_data)
+ return false;
+ plat_data = adapter->wifi_plat_data;
+ if (plat_data->partial_resume)
+ return plat_data->partial_resume(action);
+ return false;
+#else
+ return false;
+#endif
+}
+
int wifi_platform_get_mac_addr(wifi_adapter_info_t *adapter, unsigned char *buf)
{
struct wifi_platform_data *plat_data;
@@ -214,7 +235,8 @@
return -EOPNOTSUPP;
}
-void *wifi_platform_get_country_code(wifi_adapter_info_t *adapter, char *ccode)
+void *wifi_platform_get_country_code(wifi_adapter_info_t *adapter, char *ccode,
+ u32 flags)
{
/* get_country_code was added after 2.6.39 */
#if (LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 39))
@@ -226,7 +248,7 @@
DHD_TRACE(("%s\n", __FUNCTION__));
if (plat_data->get_country_code) {
- return plat_data->get_country_code(ccode);
+ return plat_data->get_country_code(ccode, flags);
}
#endif /* (LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 39)) */
@@ -256,7 +278,9 @@
adapter->intr_flags = resource->flags & IRQF_TRIGGER_MASK;
}
+
wifi_ctrl->sysedpc = sysedp_create_consumer("wifi", "wifi");
+
wifi_plat_dev_probe_ret = dhd_wifi_platform_load();
return wifi_plat_dev_probe_ret;
}
@@ -274,12 +298,18 @@
ASSERT(dhd_wifi_platdata->num_adapters == 1);
adapter = &dhd_wifi_platdata->adapters[0];
if (is_power_on) {
+#ifdef BCMPCIE
+ wifi_platform_bus_enumerate(adapter, FALSE);
+ OSL_SLEEP(100);
+ wifi_platform_set_power(adapter, FALSE, WIFI_TURNOFF_DELAY);
+#else
wifi_platform_set_power(adapter, FALSE, WIFI_TURNOFF_DELAY);
wifi_platform_bus_enumerate(adapter, FALSE);
+#endif /* BCMPCIE */
}
sysedp_free_consumer(wifi_ctrl->sysedpc);
- wifi_ctrl->sysedpc = 0;
+ wifi_ctrl->sysedpc = NULL;
return 0;
}
@@ -346,6 +376,7 @@
dev1 = bus_find_device(&platform_bus_type, NULL, WIFI_PLAT_NAME, wifi_platdev_match);
dev2 = bus_find_device(&platform_bus_type, NULL, WIFI_PLAT_NAME2, wifi_platdev_match);
+
if (!dts_enabled) {
if (dev1 == NULL && dev2 == NULL) {
DHD_ERROR(("no wifi platform data, skip\n"));
@@ -394,12 +425,14 @@
wifi_plat_dev_probe_ret = dhd_wifi_platform_load();
}
+
 /* return probe function's return value if registration succeeded */
return wifi_plat_dev_probe_ret;
}
void wifi_ctrlfunc_unregister_drv(void)
{
+
struct device *dev1, *dev2;
dev1 = bus_find_device(&platform_bus_type, NULL, WIFI_PLAT_NAME, wifi_platdev_match);
@@ -421,6 +454,7 @@
wifi_platform_bus_enumerate(adapter, FALSE);
}
}
+
kfree(dhd_wifi_platdata->adapters);
dhd_wifi_platdata->adapters = NULL;
dhd_wifi_platdata->num_adapters = 0;
@@ -499,7 +533,77 @@
static int dhd_wifi_platform_load_pcie(void)
{
int err = 0;
- err = dhd_bus_register();
+ int i;
+ wifi_adapter_info_t *adapter;
+
+ BCM_REFERENCE(i);
+ BCM_REFERENCE(adapter);
+
+ if (dhd_wifi_platdata == NULL) {
+ err = dhd_bus_register();
+ } else {
+ if (dhd_download_fw_on_driverload) {
+ /* power up all adapters */
+ for (i = 0; i < dhd_wifi_platdata->num_adapters; i++) {
+ int retry = POWERUP_MAX_RETRY;
+ adapter = &dhd_wifi_platdata->adapters[i];
+
+ DHD_ERROR(("Power-up adapter '%s'\n", adapter->name));
+ DHD_INFO((" - irq %d [flags %d], firmware: %s, nvram: %s\n",
+ adapter->irq_num, adapter->intr_flags, adapter->fw_path,
+ adapter->nv_path));
+ DHD_INFO((" - bus type %d, bus num %d, slot num %d\n\n",
+ adapter->bus_type, adapter->bus_num, adapter->slot_num));
+
+ do {
+ err = wifi_platform_set_power(adapter,
+ TRUE, WIFI_TURNON_DELAY);
+ if (err) {
+ DHD_ERROR(("failed to power up %s,"
+ " %d retry left\n",
+ adapter->name, retry));
+ /* WL_REG_ON state unknown, power off forcibly */
+ wifi_platform_set_power(adapter,
+ FALSE, WIFI_TURNOFF_DELAY);
+ continue;
+ } else {
+ err = wifi_platform_bus_enumerate(adapter, TRUE);
+ if (err) {
+ DHD_ERROR(("failed to enumerate bus %s, "
+ "%d retry left\n",
+ adapter->name, retry));
+ wifi_platform_set_power(adapter, FALSE,
+ WIFI_TURNOFF_DELAY);
+ } else {
+ break;
+ }
+ }
+ } while (retry--);
+
+ if (!retry) {
+ DHD_ERROR(("failed to power up %s, max retry reached**\n",
+ adapter->name));
+ return -ENODEV;
+ }
+ }
+ }
+
+ err = dhd_bus_register();
+
+ if (err) {
+ DHD_ERROR(("%s: pcie_register_driver failed\n", __FUNCTION__));
+ if (dhd_download_fw_on_driverload) {
+ /* power down all adapters */
+ for (i = 0; i < dhd_wifi_platdata->num_adapters; i++) {
+ adapter = &dhd_wifi_platdata->adapters[i];
+ wifi_platform_bus_enumerate(adapter, FALSE);
+ wifi_platform_set_power(adapter,
+ FALSE, WIFI_TURNOFF_DELAY);
+ }
+ }
+ }
+ }
+
return err;
}
#else
@@ -523,7 +627,7 @@
extern uint dhd_deferred_tx;
#if defined(BCMLXSDMMC)
extern struct semaphore dhd_registration_sem;
-#endif
+#endif
#ifdef BCMSDIO
static int dhd_wifi_platform_load_sdio(void)
@@ -543,14 +647,12 @@
return -EINVAL;
#if defined(BCMLXSDMMC)
- sema_init(&dhd_registration_sem, 0);
-#endif
-#if defined(BCMLXSDMMC) && !defined(CONFIG_TEGRA_PREPOWER_WIFI)
if (dhd_wifi_platdata == NULL) {
DHD_ERROR(("DHD wifi platform data is required for Android build\n"));
return -EINVAL;
}
+ sema_init(&dhd_registration_sem, 0);
/* power up all adapters */
for (i = 0; i < dhd_wifi_platdata->num_adapters; i++) {
bool chip_up = FALSE;
@@ -636,7 +738,7 @@
/* x86 bring-up PC needs no power-up operations */
err = dhd_bus_register();
-#endif
+#endif
return err;
}
diff --git a/drivers/net/wireless/bcmdhd/dhd_linux_sched.c b/drivers/net/wireless/bcmdhd/dhd_linux_sched.c
old mode 100755
new mode 100644
index ba78dfd..8fc4ff5
--- a/drivers/net/wireless/bcmdhd/dhd_linux_sched.c
+++ b/drivers/net/wireless/bcmdhd/dhd_linux_sched.c
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: dhd_linux_sched.c 457596 2014-02-24 02:24:14Z $
+ * $Id: dhd_linux_sched.c 457570 2014-02-23 13:54:46Z $
*/
#include <linux/kernel.h>
#include <linux/module.h>
diff --git a/drivers/net/wireless/bcmdhd/dhd_linux_wq.c b/drivers/net/wireless/bcmdhd/dhd_linux_wq.c
old mode 100755
new mode 100644
index 0364c9d..2d01570
--- a/drivers/net/wireless/bcmdhd/dhd_linux_wq.c
+++ b/drivers/net/wireless/bcmdhd/dhd_linux_wq.c
@@ -22,7 +22,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: dhd_linux_wq.c 411851 2013-07-10 20:48:00Z $
+ * $Id: dhd_linux_wq.c 449578 2014-01-17 13:53:20Z $
*/
#include <linux/init.h>
@@ -66,7 +66,6 @@
spinlock_t work_lock;
void *dhd_info; /* review: does it require */
};
-struct dhd_deferred_wq *deferred_wq = NULL;
static inline struct kfifo*
dhd_kfifo_init(u8 *buf, int size, spinlock_t *lock)
@@ -90,6 +89,10 @@
dhd_kfifo_free(struct kfifo *fifo)
{
kfifo_free(fifo);
+#if (LINUX_VERSION_CODE > KERNEL_VERSION(2, 6, 31))
+ /* FC11 releases the fifo memory */
+ kfree(fifo);
+#endif
}
/* deferred work functions */
@@ -154,7 +157,6 @@
}
work->dhd_info = dhd_info;
- deferred_wq = work;
DHD_ERROR(("%s: work queue initialized \n", __FUNCTION__));
return work;
@@ -191,9 +193,6 @@
dhd_kfifo_free(deferred_work->work_fifo);
kfree(deferred_work);
-
- /* deinit internal reference pointer */
- deferred_wq = NULL;
}
/*
@@ -201,8 +200,10 @@
* Schedules the event
*/
int
-dhd_deferred_schedule_work(void *event_data, u8 event, event_handler_t event_handler, u8 priority)
+dhd_deferred_schedule_work(void *workq, void *event_data, u8 event,
+ event_handler_t event_handler, u8 priority)
{
+ struct dhd_deferred_wq *deferred_wq = (struct dhd_deferred_wq *) workq;
struct dhd_deferred_event_t deferred_event;
int status;
@@ -246,7 +247,7 @@
}
static int
-dhd_get_scheduled_work(struct dhd_deferred_event_t *event)
+dhd_get_scheduled_work(struct dhd_deferred_wq *deferred_wq, struct dhd_deferred_event_t *event)
{
int status = 0;
@@ -293,7 +294,7 @@
}
do {
- status = dhd_get_scheduled_work(&work_event);
+ status = dhd_get_scheduled_work(deferred_work, &work_event);
DHD_TRACE(("%s: event to handle %d \n", __FUNCTION__, status));
if (!status) {
DHD_TRACE(("%s: No event to handle %d \n", __FUNCTION__, status));
diff --git a/drivers/net/wireless/bcmdhd/dhd_linux_wq.h b/drivers/net/wireless/bcmdhd/dhd_linux_wq.h
old mode 100755
new mode 100644
index 3a4ad1c..7c37173
--- a/drivers/net/wireless/bcmdhd/dhd_linux_wq.h
+++ b/drivers/net/wireless/bcmdhd/dhd_linux_wq.h
@@ -3,13 +3,13 @@
* Generic interface to handle dhd deferred work events
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -17,12 +17,12 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: dhd_linux_wq.h 408802 2013-06-20 19:08:47Z $
+ * $Id: dhd_linux_wq.h 449578 2014-01-17 13:53:20Z $
*/
#ifndef _dhd_linux_wq_h_
#define _dhd_linux_wq_h_
@@ -36,7 +36,7 @@
DHD_WQ_WORK_SET_MCAST_LIST,
DHD_WQ_WORK_IPV6_NDO,
DHD_WQ_WORK_HANG_MSG,
-
+ DHD_WQ_WORK_SOC_RAM_DUMP,
DHD_MAX_WQ_EVENTS
};
@@ -58,7 +58,7 @@
typedef void (*event_handler_t)(void *handle, void *event_data, u8 event);
void *dhd_deferred_work_init(void *dhd);
-void dhd_deferred_work_deinit(void *work);
-int dhd_deferred_schedule_work(void *event_data, u8 event,
+void dhd_deferred_work_deinit(void *workq);
+int dhd_deferred_schedule_work(void *workq, void *event_data, u8 event,
event_handler_t evt_handler, u8 priority);
#endif /* _dhd_linux_wq_h_ */
diff --git a/drivers/net/wireless/bcmdhd/dhd_msgbuf.c b/drivers/net/wireless/bcmdhd/dhd_msgbuf.c
old mode 100755
new mode 100644
index 344874f..b1f8414
--- a/drivers/net/wireless/bcmdhd/dhd_msgbuf.c
+++ b/drivers/net/wireless/bcmdhd/dhd_msgbuf.c
@@ -5,13 +5,13 @@
* DHD OS, bus, and protocol modules.
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -19,18 +19,17 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: dhd_msgbuf.c 452261 2014-01-29 19:30:23Z $
+ * $Id: dhd_msgbuf.c 474409 2014-05-01 04:27:15Z $
*/
#include <typedefs.h>
#include <osl.h>
#include <bcmutils.h>
-#include <circularbuf.h>
#include <bcmmsgbuf.h>
#include <bcmendian.h>
@@ -39,89 +38,130 @@
#include <dhd_proto.h>
#include <dhd_bus.h>
#include <dhd_dbg.h>
+#include <dhd_debug.h>
+#include <siutils.h>
+#include <dhd_flowring.h>
+
#ifdef PROP_TXSTATUS
#include <wlfc_proto.h>
#include <dhd_wlfc.h>
#endif
+
#include <pcie_core.h>
#include <bcmpcie.h>
-
+#include <dhd_pcie.h>
+#include <dhd_ip.h>
#define RETRIES 2 /* # of retries to retrieve matching ioctl response */
#define IOCTL_HDR_LEN 12
-#define DEFAULT_RX_BUFFERS_TO_POST 255
-#define RXBUFPOST_THRESHOLD 16
-#define RX_BUF_BURST 8
+#define DEFAULT_RX_BUFFERS_TO_POST 256
+#define RXBUFPOST_THRESHOLD 32
+#define RX_BUF_BURST 16
-#define DHD_STOP_QUEUE_THRESHOLD 24
-#define DHD_START_QUEUE_THRESHOLD 32
-#define MAX_INLINE_IOCTL_LEN 64 /* anything beyond this len will not be inline reqst */
-
-/* Required for Native to PktId mapping incase of 64bit hosts */
-#define MAX_PKTID_ITEMS (2048)
-
-/* Given packet pointer and physical address, macro should return unique 32 bit pktid */
-/* And given 32bit pktid, macro should return packet pointer and physical address */
-extern void *pktid_map_init(void *osh, uint32 count);
-extern void pktid_map_uninit(void *pktid_map_handle);
-extern uint32 pktid_map_unique(void *pktid_map_handle,
- void *pkt, dmaaddr_t physaddr, uint32 physlen, uint32 dma);
-extern void *pktid_get_packet(void *pktid_map_handle,
- uint32 id, dmaaddr_t *physaddr, uint32 *physlen);
-
-#define NATIVE_TO_PKTID_INIT(osh, count) pktid_map_init(osh, count)
-#define NATIVE_TO_PKTID_UNINIT(pktid_map_handle) pktid_map_uninit(pktid_map_handle)
-
-#define NATIVE_TO_PKTID(pktid_map_handle, pkt, pa, pa_len, dma) \
- pktid_map_unique((pktid_map_handle), (void *)(pkt), (pa), (uint32) (pa_len), (uint32)dma)
-#define PKTID_TO_NATIVE(pktid_map_handle, id, pa, pa_len) \
- pktid_get_packet((pktid_map_handle), (uint32)(id), (void *)&(pa), (uint32 *) &(pa_len))
+#define DHD_STOP_QUEUE_THRESHOLD 200
+#define DHD_START_QUEUE_THRESHOLD 100
#define MODX(x, n) ((x) & ((n) -1))
#define align(x, n) (MODX(x, n) ? ((x) - MODX(x, n) + (n)) : ((x) - MODX(x, n)))
-#define RX_DMA_OFFSET 8
+#define RX_DMA_OFFSET 8
#define IOCT_RETBUF_SIZE (RX_DMA_OFFSET + WLC_IOCTL_MAXLEN)
+#define DMA_D2H_SCRATCH_BUF_LEN 8
+#define DMA_ALIGN_LEN 4
+#define DMA_XFER_LEN_LIMIT 0x400000
+
+#define DHD_FLOWRING_IOCTL_BUFPOST_PKTSZ 8192
+
+#define DHD_FLOWRING_DEFAULT_NITEMS_POSTED_H2D 1
+#define DHD_FLOWRING_MAX_EVENTBUF_POST 8
+#define DHD_FLOWRING_MAX_IOCTLRESPBUF_POST 8
+
+#define DHD_PROT_FUNCS 22
+
+typedef struct dhd_mem_map {
+ void *va;
+ dmaaddr_t pa;
+ void *dmah;
+} dhd_mem_map_t;
+
+typedef struct dhd_dmaxfer {
+ dhd_mem_map_t srcmem;
+ dhd_mem_map_t destmem;
+ uint32 len;
+ uint32 srcdelay;
+ uint32 destdelay;
+} dhd_dmaxfer_t;
+
+#define TXP_FLUSH_NITEMS
+#define TXP_FLUSH_MAX_ITEMS_FLUSH_CNT 48
+
+typedef struct msgbuf_ring {
+ bool inited;
+ uint16 idx;
+ uchar name[24];
+ dhd_mem_map_t ring_base;
+#ifdef TXP_FLUSH_NITEMS
+ void* start_addr;
+ uint16 pend_items_count;
+#endif /* TXP_FLUSH_NITEMS */
+ ring_mem_t *ringmem;
+ ring_state_t *ringstate;
+} msgbuf_ring_t;
+
+
typedef struct dhd_prot {
+ osl_t *osh; /* OSL handle */
uint32 reqid;
- uint16 hdr_len;
uint32 lastcmd;
uint32 pending;
uint16 rxbufpost;
uint16 max_rxbufpost;
+ uint16 max_eventbufpost;
+ uint16 max_ioctlrespbufpost;
+ uint16 cur_event_bufs_posted;
+ uint16 cur_ioctlresp_bufs_posted;
uint16 active_tx_count;
uint16 max_tx_count;
- dmaaddr_t htod_physaddr;
- dmaaddr_t dtoh_physaddr;
- bool txflow_en;
- circularbuf_t *dtohbuf;
- circularbuf_t *htodbuf;
- uint32 rx_dataoffset;
- void* retbuf;
- dmaaddr_t retbuf_phys;
- void* ioctbuf; /* For holding ioct request buf */
- dmaaddr_t ioctbuf_phys; /* physical address for ioctbuf */
- dhd_mb_ring_t mb_ring_fn;
- void *htod_ring;
- void *dtoh_ring;
- /* Flag to check if splitbuf support is enabled. */
- /* Set to False at dhd_prot_attach. Set to True at dhd_prot_init */
- bool htodsplit;
- bool dtohsplit;
- /* H2D/D2H Ctrl rings */
- dmaaddr_t htod_ctrl_physaddr; /* DMA mapped physical addr ofr H2D ctrl ring */
- dmaaddr_t dtoh_ctrl_physaddr; /* DMA mapped phys addr for D2H ctrl ring */
- circularbuf_t *htod_ctrlbuf; /* Cbuf handle for H2D ctrl ring */
- circularbuf_t *dtoh_ctrlbuf; /* Cbuf handle for D2H ctrl ring */
- void *htod_ctrl_ring; /* address for H2D control buf */
- void *dtoh_ctrl_ring; /* address for D2H control buf */
+ uint16 txp_threshold;
+ /* Ring info */
+ msgbuf_ring_t *h2dring_txp_subn;
+ msgbuf_ring_t *h2dring_rxp_subn;
+ msgbuf_ring_t *h2dring_ctrl_subn; /* Cbuf handle for H2D ctrl ring */
+ msgbuf_ring_t *d2hring_tx_cpln;
+ msgbuf_ring_t *d2hring_rx_cpln;
+ msgbuf_ring_t *d2hring_ctrl_cpln; /* Cbuf handle for D2H ctrl ring */
+ uint32 rx_dataoffset;
+ dhd_mem_map_t retbuf;
+ dhd_mem_map_t ioctbuf; /* For holding ioct request buf */
+ dhd_mb_ring_t mb_ring_fn;
+ uint32 d2h_dma_scratch_buf_len; /* length of the D2H DMA scratch buffer */
+ dhd_mem_map_t d2h_dma_scratch_buf; /* scratch buffer for D2H DMA */
- uint16 ioctl_seq_no;
- uint16 data_seq_no;
- void *pktid_map_handle;
+ uint32 h2d_dma_writeindx_buf_len; /* For holding dma ringupd buf - submission write */
+ dhd_mem_map_t h2d_dma_writeindx_buf; /* For holding dma ringupd buf - submission write */
+
+ uint32 h2d_dma_readindx_buf_len; /* For holding dma ringupd buf - submission read */
+ dhd_mem_map_t h2d_dma_readindx_buf; /* For holding dma ringupd buf - submission read */
+
+ uint32 d2h_dma_writeindx_buf_len; /* For holding dma ringupd buf - completion write */
+ dhd_mem_map_t d2h_dma_writeindx_buf; /* For holding dma ringupd buf - completion write */
+
+ uint32 d2h_dma_readindx_buf_len; /* For holding dma ringupd buf - completion read */
+ dhd_mem_map_t d2h_dma_readindx_buf; /* For holding dma ringupd buf - completion read */
+
+ dhd_dmaxfer_t dmaxfer;
+ bool dmaxfer_in_progress;
+
+ uint16 ioctl_seq_no;
+ uint16 data_seq_no;
+ uint16 ioctl_trans_id;
+ void *pktid_map_handle;
+ uint16 rx_metadata_offset;
+ uint16 tx_metadata_offset;
+ uint16 rx_cpln_early_upd_idx;
} dhd_prot_t;
static int dhdmsgbuf_query_ioctl(dhd_pub_t *dhd, int ifidx, uint cmd,
@@ -129,184 +169,800 @@
static int dhd_msgbuf_set_ioctl(dhd_pub_t *dhd, int ifidx, uint cmd,
void *buf, uint len, uint8 action);
static int dhdmsgbuf_cmplt(dhd_pub_t *dhd, uint32 id, uint32 len, void* buf, void* retbuf);
-static int dhd_msgbuf_init_dtoh(dhd_pub_t *dhd);
static int dhd_msgbuf_rxbuf_post(dhd_pub_t *dhd);
-static int dhd_msgbuf_init_htod(dhd_pub_t *dhd);
-static int dhd_msgbuf_init_htod_ctrl(dhd_pub_t *dhd);
-static int dhd_msgbuf_init_dtoh_ctrl(dhd_pub_t *dhd);
-static int dhd_prot_rxbufpost(dhd_pub_t *dhd, uint32 count);
+static int dhd_prot_rxbufpost(dhd_pub_t *dhd, uint16 count);
static void dhd_prot_return_rxbuf(dhd_pub_t *dhd, uint16 rxcnt);
-static void dhd_prot_rxcmplt_process(dhd_pub_t *dhd, void* buf);
-static void dhd_prot_event_process(dhd_pub_t *dhd, uint8* buf, uint16 len);
-static void dhd_prot_process_msgtype(dhd_pub_t *dhd, uint8* buf, uint16 len);
-static void dhd_process_msgtype(dhd_pub_t *dhd, uint8* buf, uint16 len);
+static void dhd_prot_rxcmplt_process(dhd_pub_t *dhd, void* buf, uint16 msglen);
+static void dhd_prot_event_process(dhd_pub_t *dhd, void* buf, uint16 len);
+static int dhd_prot_process_msgtype(dhd_pub_t *dhd, msgbuf_ring_t *ring, uint8* buf, uint16 len);
+static int dhd_process_msgtype(dhd_pub_t *dhd, msgbuf_ring_t *ring, uint8* buf, uint16 len);
-static void dhd_prot_txstatus_process(dhd_pub_t *dhd, void * buf);
-static void dhd_prot_ioctcmplt_process(dhd_pub_t *dhd, void * buf);
-void* dhd_alloc_circularbuf_space(dhd_pub_t *dhd, circularbuf_t *handle, uint16 msglen, uint path);
-static int dhd_fillup_ioct_reqst(dhd_pub_t *dhd, uint16 len, uint cmd, void* buf, int ifidx);
+static void dhd_prot_txstatus_process(dhd_pub_t *dhd, void * buf, uint16 msglen);
+static void dhd_prot_ioctcmplt_process(dhd_pub_t *dhd, void * buf, uint16 msglen);
+static void dhd_prot_ioctack_process(dhd_pub_t *dhd, void * buf, uint16 msglen);
+static void dhd_prot_ringstatus_process(dhd_pub_t *dhd, void * buf, uint16 msglen);
+static void dhd_prot_genstatus_process(dhd_pub_t *dhd, void * buf, uint16 msglen);
+static void* dhd_alloc_ring_space(dhd_pub_t *dhd, msgbuf_ring_t *ring,
+ uint16 msglen, uint16 *alloced);
static int dhd_fillup_ioct_reqst_ptrbased(dhd_pub_t *dhd, uint16 len, uint cmd, void* buf,
int ifidx);
-static INLINE void dhd_prot_packet_free(dhd_pub_t *dhd, uint32 pktid);
-static INLINE void *dhd_prot_packet_get(dhd_pub_t *dhd, uint32 pktid);
+static INLINE void dhd_prot_packet_free(dhd_pub_t *dhd, uint32 pktid, uint8 buf_type);
+static INLINE void *dhd_prot_packet_get(dhd_pub_t *dhd, uint32 pktid, uint8 buf_type);
+static void dmaxfer_free_dmaaddr(dhd_pub_t *dhd, dhd_dmaxfer_t *dma);
+static int dmaxfer_prepare_dmaaddr(dhd_pub_t *dhd, uint len, uint srcdelay,
+ uint destdelay, dhd_dmaxfer_t *dma);
+static void dhdmsgbuf_dmaxfer_compare(dhd_pub_t *dhd, void *buf, uint16 msglen);
+static void dhd_prot_process_flow_ring_create_response(dhd_pub_t *dhd, void* buf, uint16 msglen);
+static void dhd_prot_process_flow_ring_delete_response(dhd_pub_t *dhd, void* buf, uint16 msglen);
+static void dhd_prot_process_flow_ring_flush_response(dhd_pub_t *dhd, void* buf, uint16 msglen);
+
+#ifdef DHD_RX_CHAINING
+#define PKT_CTF_CHAINABLE(dhd, ifidx, evh, prio, h_sa, h_da, h_prio) \
+ (!ETHER_ISNULLDEST(((struct ether_header *)(evh))->ether_dhost) && \
+ !ETHER_ISMULTI(((struct ether_header *)(evh))->ether_dhost) && \
+ !eacmp((h_da), ((struct ether_header *)(evh))->ether_dhost) && \
+ !eacmp((h_sa), ((struct ether_header *)(evh))->ether_shost) && \
+ ((h_prio) == (prio)) && (dhd_ctf_hotbrc_check((dhd), (evh), (ifidx))) && \
+ ((((struct ether_header *)(evh))->ether_type == HTON16(ETHER_TYPE_IP)) || \
+ (((struct ether_header *)(evh))->ether_type == HTON16(ETHER_TYPE_IPV6))))
+
+static INLINE void BCMFASTPATH dhd_rxchain_reset(rxchain_info_t *rxchain);
+static void BCMFASTPATH dhd_rxchain_frame(dhd_pub_t *dhd, void *pkt, uint ifidx);
+static void BCMFASTPATH dhd_rxchain_commit(dhd_pub_t *dhd);
+
+#define DHD_PKT_CTF_MAX_CHAIN_LEN 64
+#endif /* DHD_RX_CHAINING */
+
+static uint16 dhd_msgbuf_rxbuf_post_ctrlpath(dhd_pub_t *dhd, bool event_buf, uint32 max_to_post);
+static int dhd_msgbuf_rxbuf_post_ioctlresp_bufs(dhd_pub_t *pub);
+static int dhd_msgbuf_rxbuf_post_event_bufs(dhd_pub_t *pub);
+
+static void dhd_prot_ring_detach(dhd_pub_t *dhd, msgbuf_ring_t * ring);
+static void dhd_ring_init(dhd_pub_t *dhd, msgbuf_ring_t *ring);
+static msgbuf_ring_t* prot_ring_attach(dhd_prot_t * prot, char* name, uint16 max_item,
+ uint16 len_item, uint16 ringid);
+static void* prot_get_ring_space(msgbuf_ring_t *ring, uint16 nitems, uint16 * alloced);
+static void dhd_set_dmaed_index(dhd_pub_t *dhd, uint8 type, uint16 ringid, uint16 new_index);
+static uint16 dhd_get_dmaed_index(dhd_pub_t *dhd, uint8 type, uint16 ringid);
+static void prot_ring_write_complete(dhd_pub_t *dhd, msgbuf_ring_t * ring, void* p, uint16 len);
+static void prot_upd_read_idx(dhd_pub_t *dhd, msgbuf_ring_t * ring);
+static uint8* prot_get_src_addr(dhd_pub_t *dhd, msgbuf_ring_t *ring, uint16 *available_len);
+static void prot_store_rxcpln_read_idx(dhd_pub_t *dhd, msgbuf_ring_t *ring);
+static void prot_early_upd_rxcpln_read_idx(dhd_pub_t *dhd, msgbuf_ring_t * ring);
+typedef void (*dhd_msgbuf_func_t)(dhd_pub_t *dhd, void * buf, uint16 msglen);
+static dhd_msgbuf_func_t table_lookup[DHD_PROT_FUNCS] = {
+ NULL,
+ dhd_prot_genstatus_process, /* MSG_TYPE_GEN_STATUS */
+ dhd_prot_ringstatus_process, /* MSG_TYPE_RING_STATUS */
+ NULL,
+ dhd_prot_process_flow_ring_create_response, /* MSG_TYPE_FLOW_RING_CREATE_CMPLT */
+ NULL,
+ dhd_prot_process_flow_ring_delete_response, /* MSG_TYPE_FLOW_RING_DELETE_CMPLT */
+ NULL,
+ dhd_prot_process_flow_ring_flush_response, /* MSG_TYPE_FLOW_RING_FLUSH_CMPLT */
+ NULL,
+ dhd_prot_ioctack_process, /* MSG_TYPE_IOCTLPTR_REQ_ACK */
+ NULL,
+ dhd_prot_ioctcmplt_process, /* MSG_TYPE_IOCTL_CMPLT */
+ NULL,
+ dhd_prot_event_process, /* MSG_TYPE_WL_EVENT */
+ NULL,
+ dhd_prot_txstatus_process, /* MSG_TYPE_TX_STATUS */
+ NULL,
+ dhd_prot_rxcmplt_process, /* MSG_TYPE_RX_CMPLT */
+ NULL,
+ dhdmsgbuf_dmaxfer_compare, /* MSG_TYPE_LPBK_DMAXFER_CMPLT */
+ NULL,
+};
+
+/*
+ * +---------------------------------------------------------------------------+
+ * PktId Map: Provides a native packet pointer to unique 32bit PktId mapping.
+ * The packet id map also includes storage for selected packet parameters.
+ * A native packet pointer, along with those parameters, may be saved, and a
+ * unique 32bit pkt id is returned. Later, the saved packet pointer and the
+ * metadata may be retrieved using the previously allocated packet id.
+ * +---------------------------------------------------------------------------+
+ */
+#define MAX_PKTID_ITEMS (3072) /* Maximum number of pktids supported */
+
+typedef void * dhd_pktid_map_handle_t; /* opaque handle to a pktid map */
+
+/* Construct a packet id mapping table, returning an opaque map handle */
+static dhd_pktid_map_handle_t *dhd_pktid_map_init(void *osh, uint32 num_items);
+
+/* Destroy a packet id mapping table, freeing all packets active in the table */
+static void dhd_pktid_map_fini(dhd_pktid_map_handle_t *map);
+
+/* Determine number of pktids that are available */
+static INLINE uint32 dhd_pktid_map_avail_cnt(dhd_pktid_map_handle_t *map);
+
+/* Allocate a unique pktid against which a pkt and some metadata is saved */
+static INLINE uint32 dhd_pktid_map_reserve(dhd_pktid_map_handle_t *handle,
+ void *pkt);
+static INLINE void dhd_pktid_map_save(dhd_pktid_map_handle_t *handle, void *pkt,
+ uint32 nkey, dmaaddr_t physaddr, uint32 len, uint8 dma, uint8 buf_type);
+static uint32 dhd_pktid_map_alloc(dhd_pktid_map_handle_t *map, void *pkt,
+ dmaaddr_t physaddr, uint32 len, uint8 dma, uint8 buf_type);
+
+/* Return an allocated pktid, retrieving previously saved pkt and metadata */
+static void *dhd_pktid_map_free(dhd_pktid_map_handle_t *map, uint32 id,
+ dmaaddr_t *physaddr, uint32 *len, uint8 buf_type);
+
+/* Packet metadata saved in packet id mapper */
+
+typedef enum pkt_buf_type {
+ BUFF_TYPE_DATA_TX = 0,
+ BUFF_TYPE_DATA_RX,
+ BUFF_TYPE_IOCTL_RX,
+ BUFF_TYPE_EVENT_RX,
+ /* This works around the following scenario: in dhd_prot_txdata,
+ * NATIVE_TO_PKTID_RSV is called to just reserve the pkt id; later, if
+ * ring space is not available, the pktid is freed. Note that
+ * dhd_prot_pkt_free compares the stored buf_type against the caller's
+ * buffer type and fails if they don't match. Passing this flag ensures
+ * that no such comparison is made. The other option considered was to
+ * use the physaddr field itself: zero it in xxx_free and, when it is
+ * zero, skip the dma != buf_type comparison. But that logic is too
+ * implicit, so this flag is used to explicitly skip the check only in
+ * this case.
+ BUFF_TYPE_NO_CHECK
+} pkt_buf_type_t;
+
+/* Packet metadata saved in packet id mapper */
+typedef struct dhd_pktid_item {
+ bool inuse; /* tag an item to be in use */
+ uint8 dma; /* map direction: flush or invalidate */
+ uint8 buf_type;
+ /* This field is used to colour the
+ * buffer pointers held in the locker.
+ */
+ uint16 len; /* length of mapped packet's buffer */
+ void *pkt; /* opaque native pointer to a packet */
+ dmaaddr_t physaddr; /* physical address of mapped packet's buffer */
+} dhd_pktid_item_t;
+
+typedef struct dhd_pktid_map {
+ void *osh;
+ int items; /* total items in map */
+ int avail; /* total available items */
+ int failures; /* lockers unavailable count */
+ uint32 keys[MAX_PKTID_ITEMS + 1]; /* stack of unique pkt ids */
+ dhd_pktid_item_t lockers[0]; /* metadata storage */
+} dhd_pktid_map_t;
+
+/*
+ * PktId (Locker) #0 is never allocated and is considered invalid.
+ *
+ * On request for a pktid, a value DHD_PKTID_INVALID must be treated as a
+ * depleted pktid pool and must not be used by the caller.
+ *
+ * Likewise, a caller must never free a pktid of value DHD_PKTID_INVALID.
+ */
+#define DHD_PKTID_INVALID (0U)
+
+#define DHD_PKTID_ITEM_SZ (sizeof(dhd_pktid_item_t))
+#define DHD_PKTID_MAP_SZ(items) (sizeof(dhd_pktid_map_t) + \
+ (DHD_PKTID_ITEM_SZ * ((items) + 1)))
+
+#define NATIVE_TO_PKTID_INIT(osh, items) dhd_pktid_map_init((osh), (items))
+#define NATIVE_TO_PKTID_FINI(map) dhd_pktid_map_fini(map)
+#define NATIVE_TO_PKTID_CLEAR(map) dhd_pktid_map_clear(map)
+
+#define NATIVE_TO_PKTID_RSV(map, pkt) dhd_pktid_map_reserve((map), (pkt))
+#define NATIVE_TO_PKTID_SAVE(map, pkt, nkey, pa, len, dma, buf_type) \
+ dhd_pktid_map_save((map), (void *)(pkt), (nkey), (pa), (uint32)(len), \
+ (uint8)dma, (uint8)buf_type)
+#define NATIVE_TO_PKTID(map, pkt, pa, len, dma, buf_type) \
+ dhd_pktid_map_alloc((map), (void *)(pkt), (pa), (uint32)(len), \
+ (uint8)dma, (uint8)buf_type)
+
+#define PKTID_TO_NATIVE(map, pktid, pa, len, buf_type) \
+ dhd_pktid_map_free((map), (uint32)(pktid), \
+ (dmaaddr_t *)&(pa), (uint32 *)&(len), (uint8)buf_type)
+
+#define PKTID_AVAIL(map) dhd_pktid_map_avail_cnt(map)
+
+/*
+ * +---------------------------------------------------------------------------+
+ * Packet to Packet Id mapper using a <numbered_key, locker> paradigm.
+ *
+ * dhd_pktid_map manages a set of unique Packet Ids in the range
+ * [1..MAX_PKTID_ITEMS].
+ *
+ * dhd_pktid_map_alloc() may be used to save some packet metadata, and a unique
+ * packet id is returned. This unique packet id may be used to retrieve the
+ * previously saved packet metadata, using dhd_pktid_map_free(). On invocation
+ * of dhd_pktid_map_free(), the unique packet id is essentially freed. A
+ * subsequent call to dhd_pktid_map_alloc() may reuse this packet id.
+ *
+ * Implementation Note:
+ * Convert this into a <key, locker> abstraction and move it into bcmutils.
+ * The locker abstraction should treat its contents as opaque storage, and a
+ * callback should be registered to handle in-use lockers in the destructor.
+ *
+ * +---------------------------------------------------------------------------+
+ */
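+
+/*
+ * Illustrative usage sketch (not part of this change; error paths elided,
+ * and the DMA map direction constant DMA_TX is assumed from the OSL):
+ *
+ *	uint32 pktid;
+ *	pktid = NATIVE_TO_PKTID(map, pkt, pa, len, DMA_TX, BUFF_TYPE_DATA_TX);
+ *	if (pktid == DHD_PKTID_INVALID)
+ *		return BCME_NOMEM;	(depleted pktid pool)
+ *	...
+ *	pkt = PKTID_TO_NATIVE(map, pktid, pa, len, BUFF_TYPE_DATA_TX);
+ */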
+
+/* Allocate and initialize a mapper of num_items <numbered_key, locker> */
+static dhd_pktid_map_handle_t *
+dhd_pktid_map_init(void *osh, uint32 num_items)
+{
+ uint32 nkey;
+ dhd_pktid_map_t *map;
+ uint32 dhd_pktid_map_sz;
+
+ ASSERT((num_items >= 1) && (num_items <= MAX_PKTID_ITEMS));
+ dhd_pktid_map_sz = DHD_PKTID_MAP_SZ(num_items);
+
+ if ((map = (dhd_pktid_map_t *)MALLOC(osh, dhd_pktid_map_sz)) == NULL) {
+ DHD_ERROR(("%s:%d: MALLOC failed for size %d\n",
+ __FUNCTION__, __LINE__, dhd_pktid_map_sz));
+ return NULL;
+ }
+ bzero(map, dhd_pktid_map_sz);
+
+ map->osh = osh;
+ map->items = num_items;
+ map->avail = num_items;
+
+ map->lockers[DHD_PKTID_INVALID].inuse = TRUE; /* tag locker #0 as inuse */
+
+ for (nkey = 1; nkey <= num_items; nkey++) { /* locker #0 is reserved */
+ map->keys[nkey] = nkey; /* populate with unique keys */
+ map->lockers[nkey].inuse = FALSE;
+ }
+
+ return (dhd_pktid_map_handle_t *)map; /* opaque handle */
+}
+
+/*
+ * Retrieve all allocated keys and free all <numbered_key, locker>.
+ * Freeing implies: unmapping the buffers and freeing the native packet
+ * This could have been a callback registered with the pktid mapper.
+ */
+static void
+dhd_pktid_map_fini(dhd_pktid_map_handle_t *handle)
+{
+ void *osh;
+ int nkey;
+ dhd_pktid_map_t *map;
+ uint32 dhd_pktid_map_sz;
+ dhd_pktid_item_t *locker;
+
+ if (handle == NULL)
+ return;
+
+ map = (dhd_pktid_map_t *)handle;
+ osh = map->osh;
+ dhd_pktid_map_sz = DHD_PKTID_MAP_SZ(map->items);
+
+ nkey = 1; /* skip reserved KEY #0, and start from 1 */
+ locker = &map->lockers[nkey];
+
+ for (; nkey <= map->items; nkey++, locker++) {
+ if (locker->inuse == TRUE) { /* numbered key still in use */
+ locker->inuse = FALSE; /* force open the locker */
+
+ { /* This could be a callback registered with dhd_pktid_map */
+ DMA_UNMAP(osh, locker->physaddr, locker->len,
+ locker->dma, 0, 0);
+#ifdef DHD_USE_STATIC_IOCTLBUF
+ if (locker->buf_type == BUFF_TYPE_IOCTL_RX)
+ PKTFREE_STATIC(osh, (ulong*)locker->pkt, FALSE);
+ else
+ PKTFREE(osh, (ulong*)locker->pkt, FALSE);
+#else
+ PKTFREE(osh, (ulong*)locker->pkt, FALSE);
+#endif
+
+ }
+ }
+ }
+
+ MFREE(osh, handle, dhd_pktid_map_sz);
+}
+
+static void
+dhd_pktid_map_clear(dhd_pktid_map_handle_t *handle)
+{
+ void *osh;
+ int nkey;
+ dhd_pktid_map_t *map;
+ dhd_pktid_item_t *locker;
+
+ DHD_TRACE(("%s\n", __FUNCTION__));
+
+ if (handle == NULL)
+ return;
+
+ map = (dhd_pktid_map_t *)handle;
+ osh = map->osh;
+ map->failures = 0;
+
+ nkey = 1; /* skip reserved KEY #0, and start from 1 */
+ locker = &map->lockers[nkey];
+
+ for (; nkey <= map->items; nkey++, locker++) {
+ map->keys[nkey] = nkey; /* populate with unique keys */
+ if (locker->inuse == TRUE) { /* numbered key still in use */
+ locker->inuse = FALSE; /* force open the locker */
+ DHD_TRACE(("%s free id%d\n", __FUNCTION__, nkey));
+ DMA_UNMAP(osh, (uint32)locker->physaddr, locker->len,
+ locker->dma, 0, 0);
+#ifdef DHD_USE_STATIC_IOCTLBUF
+ if (locker->buf_type == BUFF_TYPE_IOCTL_RX)
+ PKTFREE_STATIC(osh, (ulong*)locker->pkt, FALSE);
+ else
+ PKTFREE(osh, (ulong*)locker->pkt, FALSE);
+#else
+ PKTFREE(osh, (ulong*)locker->pkt, FALSE);
+#endif
+
+ }
+ }
+ map->avail = map->items;
+}
+
+/* Get the pktid free count */
+static INLINE uint32 BCMFASTPATH
+dhd_pktid_map_avail_cnt(dhd_pktid_map_handle_t *handle)
+{
+ dhd_pktid_map_t *map;
+
+ ASSERT(handle != NULL);
+ map = (dhd_pktid_map_t *)handle;
+
+ return map->avail;
+}
+
+/*
+ * Allocate a locker, save pkt contents, and return the locker's numbered key.
+ * Neither dhd_pktid_map_reserve() nor dhd_pktid_map_alloc() is reentrant;
+ * serialization is the caller's responsibility. The caller must treat a
+ * returned value of DHD_PKTID_INVALID as a failure case, implying a
+ * depleted pool of pktids.
+ */
+static INLINE uint32
+dhd_pktid_map_reserve(dhd_pktid_map_handle_t *handle, void *pkt)
+{
+ uint32 nkey;
+ dhd_pktid_map_t *map;
+ dhd_pktid_item_t *locker;
+
+ ASSERT(handle != NULL);
+ map = (dhd_pktid_map_t *)handle;
+
+ if (map->avail <= 0) { /* no more pktids to allocate */
+ map->failures++;
+ DHD_INFO(("%s:%d: failed, no free keys\n", __FUNCTION__, __LINE__));
+ return DHD_PKTID_INVALID; /* failed alloc request */
+ }
+ ASSERT(map->avail <= map->items);
+
+ nkey = map->keys[map->avail]; /* fetch a free locker, pop stack */
+ map->avail--;
+
+ locker = &map->lockers[nkey]; /* save packet metadata in locker */
+ locker->inuse = TRUE; /* reserve this locker */
+ locker->pkt = pkt;
+ locker->len = 0;
+ ASSERT(nkey != DHD_PKTID_INVALID);
+ return nkey; /* return locker's numbered key */
+}
+
+static INLINE void
+dhd_pktid_map_save(dhd_pktid_map_handle_t *handle, void *pkt, uint32 nkey,
+ dmaaddr_t physaddr, uint32 len, uint8 dma, uint8 buf_type)
+{
+ dhd_pktid_map_t *map;
+ dhd_pktid_item_t *locker;
+
+ ASSERT(handle != NULL);
+ map = (dhd_pktid_map_t *)handle;
+
+ ASSERT((nkey != DHD_PKTID_INVALID) && (nkey <= (uint32)map->items));
+
+ locker = &map->lockers[nkey];
+ ASSERT(locker->pkt == pkt);
+
+ locker->dma = dma; /* store contents in locker */
+ locker->physaddr = physaddr;
+ locker->len = (uint16)len; /* 16bit len */
+ locker->buf_type = buf_type;
+}
+
+static uint32 BCMFASTPATH
+dhd_pktid_map_alloc(dhd_pktid_map_handle_t *handle, void *pkt,
+ dmaaddr_t physaddr, uint32 len, uint8 dma, uint8 buf_type)
+{
+ uint32 nkey = dhd_pktid_map_reserve(handle, pkt);
+ if (nkey != DHD_PKTID_INVALID) {
+ dhd_pktid_map_save(handle, pkt, nkey, physaddr, len, dma, buf_type);
+ }
+ return nkey;
+}
+
+/*
+ * Given a numbered key, return the locker contents.
+ * dhd_pktid_map_free() is not reentrant; serialization is the caller's
+ * responsibility. The caller must never free a pktid of value
+ * DHD_PKTID_INVALID or an arbitrary pktid value; only a previously
+ * allocated pktid may be freed.
+ */
+static void * BCMFASTPATH
+dhd_pktid_map_free(dhd_pktid_map_handle_t *handle, uint32 nkey,
+ dmaaddr_t *physaddr, uint32 *len, uint8 buf_type)
+{
+ dhd_pktid_map_t *map;
+ dhd_pktid_item_t *locker;
+ void *pkt;
+ ASSERT(handle != NULL);
+
+ map = (dhd_pktid_map_t *)handle;
+ ASSERT((nkey != DHD_PKTID_INVALID) && (nkey <= (uint32)map->items));
+
+ locker = &map->lockers[nkey];
+
+ if (locker->inuse == FALSE) { /* Debug check for cloned numbered key */
+ DHD_ERROR(("%s:%d: Error! freeing invalid pktid<%u>\n",
+ __FUNCTION__, __LINE__, nkey));
+ ASSERT(locker->inuse != FALSE);
+ return NULL;
+ }
+ if ((buf_type != BUFF_TYPE_NO_CHECK) && (locker->buf_type != buf_type)) {
+ DHD_ERROR(("%s:%d: Error! Invalid Buffer Free for pktid<%u> \n",
+ __FUNCTION__, __LINE__, nkey));
+ return NULL;
+ }
+
+ map->avail++;
+ map->keys[map->avail] = nkey; /* make this numbered key available */
+
+ locker->inuse = FALSE; /* open and free Locker */
+
+ *physaddr = locker->physaddr; /* return contents of locker */
+ *len = (uint32)locker->len;
+ pkt = locker->pkt;
+ locker->pkt = NULL; /* Clear pkt */
+ locker->len = 0;
+
+ return pkt;
+}
/* Linkage, sets prot link and updates hdrlen in pub */
int dhd_prot_attach(dhd_pub_t *dhd)
{
uint alloced = 0;
- dhd_prot_t *msg_buf;
- if (!(msg_buf = (dhd_prot_t *)DHD_OS_PREALLOC(dhd, DHD_PREALLOC_PROT,
+ dhd_prot_t *prot;
+
+ /* Allocate prot structure */
+ if (!(prot = (dhd_prot_t *)DHD_OS_PREALLOC(dhd, DHD_PREALLOC_PROT,
sizeof(dhd_prot_t)))) {
- DHD_ERROR(("%s: kmalloc failed\n", __FUNCTION__));
- goto fail;
- }
- memset(msg_buf, 0, sizeof(dhd_prot_t));
+ DHD_ERROR(("%s: kmalloc failed\n", __FUNCTION__));
+ goto fail;
+ }
+ memset(prot, 0, sizeof(*prot));
- msg_buf->hdr_len = sizeof(ioctl_req_hdr_t) + sizeof(cmn_msg_hdr_t) + sizeof(ret_buf_t);
- msg_buf->dtohbuf = MALLOC(dhd->osh, sizeof(circularbuf_t));
- msg_buf->htodbuf = MALLOC(dhd->osh, sizeof(circularbuf_t));
+ prot->osh = dhd->osh;
+ dhd->prot = prot;
- memset(msg_buf->dtohbuf, 0, sizeof(circularbuf_t));
- memset(msg_buf->htodbuf, 0, sizeof(circularbuf_t));
+ /* DMA'ing of ring updates supported? FALSE by default */
+ dhd->dma_d2h_ring_upd_support = FALSE;
+ dhd->dma_h2d_ring_upd_support = FALSE;
- dhd->prot = msg_buf;
- dhd->maxctl = WLC_IOCTL_MAXLEN + msg_buf->hdr_len;
+ /* set the memdump capability */
+ dhd->memdump_enabled = DUMP_MEMONLY;
- /* ret buf for ioctl */
- msg_buf->retbuf = DMA_ALLOC_CONSISTENT(dhd->osh, IOCT_RETBUF_SIZE, 4,
- &alloced, &msg_buf->retbuf_phys, NULL);
- if (msg_buf->retbuf == NULL) {
+ /* Ring Allocations */
+ /* 1.0 H2D TXPOST ring */
+ if (!(prot->h2dring_txp_subn = prot_ring_attach(prot, "h2dtxp",
+ H2DRING_TXPOST_MAX_ITEM, H2DRING_TXPOST_ITEMSIZE,
+ BCMPCIE_H2D_TXFLOWRINGID))) {
+ DHD_ERROR(("%s: kmalloc for H2D TXPOST ring failed\n", __FUNCTION__));
+ goto fail;
+ }
+
+ /* 2.0 H2D RXPOST ring */
+ if (!(prot->h2dring_rxp_subn = prot_ring_attach(prot, "h2drxp",
+ H2DRING_RXPOST_MAX_ITEM, H2DRING_RXPOST_ITEMSIZE,
+ BCMPCIE_H2D_MSGRING_RXPOST_SUBMIT))) {
+ DHD_ERROR(("%s: kmalloc for H2D RXPOST ring failed\n", __FUNCTION__));
+ goto fail;
+
+ }
+
+ /* 3.0 H2D CTRL_SUBMISSION ring */
+ if (!(prot->h2dring_ctrl_subn = prot_ring_attach(prot, "h2dctrl",
+ H2DRING_CTRL_SUB_MAX_ITEM, H2DRING_CTRL_SUB_ITEMSIZE,
+ BCMPCIE_H2D_MSGRING_CONTROL_SUBMIT))) {
+ DHD_ERROR(("%s: kmalloc for H2D CTRL_SUBMISSION ring failed\n",
+ __FUNCTION__));
+ goto fail;
+
+ }
+
+ /* 4.0 D2H TX_COMPLETION ring */
+ if (!(prot->d2hring_tx_cpln = prot_ring_attach(prot, "d2htxcpl",
+ D2HRING_TXCMPLT_MAX_ITEM, D2HRING_TXCMPLT_ITEMSIZE,
+ BCMPCIE_D2H_MSGRING_TX_COMPLETE))) {
+ DHD_ERROR(("%s: kmalloc for D2H TX_COMPLETION ring failed\n",
+ __FUNCTION__));
+ goto fail;
+
+ }
+
+ /* 5.0 D2H RX_COMPLETION ring */
+ if (!(prot->d2hring_rx_cpln = prot_ring_attach(prot, "d2hrxcpl",
+ D2HRING_RXCMPLT_MAX_ITEM, D2HRING_RXCMPLT_ITEMSIZE,
+ BCMPCIE_D2H_MSGRING_RX_COMPLETE))) {
+ DHD_ERROR(("%s: kmalloc for D2H RX_COMPLETION ring failed\n",
+ __FUNCTION__));
+ goto fail;
+
+ }
+
+ /* 6.0 D2H CTRL_COMPLETION ring */
+ if (!(prot->d2hring_ctrl_cpln = prot_ring_attach(prot, "d2hctrl",
+ D2HRING_CTRL_CMPLT_MAX_ITEM, D2HRING_CTRL_CMPLT_ITEMSIZE,
+ BCMPCIE_D2H_MSGRING_CONTROL_COMPLETE))) {
+ DHD_ERROR(("%s: kmalloc for D2H CTRL_COMPLETION ring failed\n",
+ __FUNCTION__));
+ goto fail;
+ }
+
+ /* Return buffer for ioctl */
+ prot->retbuf.va = DMA_ALLOC_CONSISTENT(dhd->osh, IOCT_RETBUF_SIZE, DMA_ALIGN_LEN,
+ &alloced, &prot->retbuf.pa, &prot->retbuf.dmah);
+ if (prot->retbuf.va == NULL) {
ASSERT(0);
return BCME_NOMEM;
}
- ASSERT(MODX((unsigned long)msg_buf->retbuf, 4) == 0);
+ ASSERT(MODX((unsigned long)prot->retbuf.va, DMA_ALIGN_LEN) == 0);
+ bzero(prot->retbuf.va, IOCT_RETBUF_SIZE);
+ OSL_CACHE_FLUSH((void *) prot->retbuf.va, IOCT_RETBUF_SIZE);
- msg_buf->ioctbuf = DMA_ALLOC_CONSISTENT(dhd->osh, MSGBUF_MAX_MSG_SIZE, 4,
- &alloced, &msg_buf->ioctbuf_phys, NULL);
+ /* IOCTL request buffer */
+ prot->ioctbuf.va = DMA_ALLOC_CONSISTENT(dhd->osh, IOCT_RETBUF_SIZE, DMA_ALIGN_LEN,
+ &alloced, &prot->ioctbuf.pa, &prot->ioctbuf.dmah);
- if (msg_buf->ioctbuf == NULL) {
+ if (prot->ioctbuf.va == NULL) {
ASSERT(0);
return BCME_NOMEM;
}
- ASSERT(MODX((unsigned long)msg_buf->ioctbuf, 4) == 0);
+ ASSERT(MODX((unsigned long)prot->ioctbuf.va, DMA_ALIGN_LEN) == 0);
+ bzero(prot->ioctbuf.va, IOCT_RETBUF_SIZE);
+ OSL_CACHE_FLUSH((void *) prot->ioctbuf.va, IOCT_RETBUF_SIZE);
- msg_buf->pktid_map_handle = NATIVE_TO_PKTID_INIT(dhd->osh, MAX_PKTID_ITEMS);
- if (msg_buf->pktid_map_handle == NULL) {
+ /* Scratch buffer for dma rx offset */
+ prot->d2h_dma_scratch_buf_len = DMA_D2H_SCRATCH_BUF_LEN;
+ prot->d2h_dma_scratch_buf.va = DMA_ALLOC_CONSISTENT(dhd->osh, DMA_D2H_SCRATCH_BUF_LEN,
+ DMA_ALIGN_LEN, &alloced, &prot->d2h_dma_scratch_buf.pa,
+ &prot->d2h_dma_scratch_buf.dmah);
+
+ if (prot->d2h_dma_scratch_buf.va == NULL) {
+ ASSERT(0);
+ return BCME_NOMEM;
+ }
+ ASSERT(MODX((unsigned long)prot->d2h_dma_scratch_buf.va, DMA_ALIGN_LEN) == 0);
+ bzero(prot->d2h_dma_scratch_buf.va, DMA_D2H_SCRATCH_BUF_LEN);
+ OSL_CACHE_FLUSH((void *)prot->d2h_dma_scratch_buf.va, DMA_D2H_SCRATCH_BUF_LEN);
+
+
+ /* PKTID handle INIT */
+ prot->pktid_map_handle = NATIVE_TO_PKTID_INIT(dhd->osh, MAX_PKTID_ITEMS);
+ if (prot->pktid_map_handle == NULL) {
ASSERT(0);
return BCME_NOMEM;
}
- msg_buf->htod_ring = DMA_ALLOC_CONSISTENT(dhd->osh, HOST_TO_DNGL_MSGBUF_SZ, 4,
- &alloced, &msg_buf->htod_physaddr, NULL);
- if (msg_buf->htod_ring == NULL) {
- ASSERT(0);
- return BCME_NOMEM;
- }
+ prot->dmaxfer.srcmem.va = NULL;
+ prot->dmaxfer.destmem.va = NULL;
+ prot->dmaxfer_in_progress = FALSE;
- ASSERT(MODX((unsigned long)msg_buf->htod_ring, 4) == 0);
+ prot->rx_metadata_offset = 0;
+ prot->tx_metadata_offset = 0;
- msg_buf->dtoh_ring = DMA_ALLOC_CONSISTENT(dhd->osh, DNGL_TO_HOST_MSGBUF_SZ, 4,
- &alloced, &msg_buf->dtoh_physaddr, NULL);
- if (msg_buf->dtoh_ring == NULL) {
- ASSERT(0);
- return BCME_NOMEM;
- }
-
- ASSERT(MODX((unsigned long)msg_buf->dtoh_ring, 4) == 0);
-
- /* At this point we assume splitbuf is not supported by dongle */
- msg_buf->htodsplit = FALSE;
- msg_buf->dtohsplit = FALSE;
-
+#ifdef DHD_RX_CHAINING
+ dhd_rxchain_reset(&prot->rxchain);
+#endif
return 0;
fail:
#ifndef CONFIG_DHD_USE_STATIC_BUF
- if (msg_buf != NULL)
- MFREE(dhd->osh, msg_buf, sizeof(dhd_prot_t));
+ if (prot != NULL)
+ dhd_prot_detach(dhd);
#endif /* CONFIG_DHD_USE_STATIC_BUF */
return BCME_NOMEM;
}
+/* Init memory block on host for DMA'ing indices */
+int
+dhd_prot_init_index_dma_block(dhd_pub_t *dhd, uint8 type, uint32 length)
+{
+ uint alloced = 0;
+
+ dhd_prot_t *prot = dhd->prot;
+ uint32 dma_block_size = 4 * length;
+
+ if (prot == NULL) {
+ DHD_ERROR(("prot is not inited\n"));
+ return BCME_ERROR;
+ }
+
+ switch (type) {
+ case HOST_TO_DNGL_DMA_WRITEINDX_BUFFER:
+ /* ring update dma buffer for submission write */
+ prot->h2d_dma_writeindx_buf_len = dma_block_size;
+ prot->h2d_dma_writeindx_buf.va = DMA_ALLOC_CONSISTENT(dhd->osh,
+ dma_block_size, DMA_ALIGN_LEN, &alloced,
+ &prot->h2d_dma_writeindx_buf.pa,
+ &prot->h2d_dma_writeindx_buf.dmah);
+
+ if (prot->h2d_dma_writeindx_buf.va == NULL) {
+ return BCME_NOMEM;
+ }
+
+ ASSERT(ISALIGNED(prot->h2d_dma_writeindx_buf.va, 4));
+ bzero(prot->h2d_dma_writeindx_buf.va, dma_block_size);
+ OSL_CACHE_FLUSH((void *)prot->h2d_dma_writeindx_buf.va, dma_block_size);
+ DHD_ERROR(("H2D_WRITEINDX_ARRAY_HOST: %d-bytes "
+ "inited for dma'ing h2d-w indices\n",
+ prot->h2d_dma_writeindx_buf_len));
+ break;
+
+ case HOST_TO_DNGL_DMA_READINDX_BUFFER:
+ /* ring update dma buffer for submission read */
+ prot->h2d_dma_readindx_buf_len = dma_block_size;
+ prot->h2d_dma_readindx_buf.va = DMA_ALLOC_CONSISTENT(dhd->osh,
+ dma_block_size, DMA_ALIGN_LEN, &alloced,
+ &prot->h2d_dma_readindx_buf.pa,
+ &prot->h2d_dma_readindx_buf.dmah);
+ if (prot->h2d_dma_readindx_buf.va == NULL) {
+ return BCME_NOMEM;
+ }
+
+ ASSERT(ISALIGNED(prot->h2d_dma_readindx_buf.va, 4));
+ bzero(prot->h2d_dma_readindx_buf.va, dma_block_size);
+ OSL_CACHE_FLUSH((void *)prot->h2d_dma_readindx_buf.va, dma_block_size);
+ DHD_ERROR(("H2D_READINDX_ARRAY_HOST %d-bytes "
+ "inited for dma'ing h2d-r indices\n",
+ prot->h2d_dma_readindx_buf_len));
+ break;
+
+ case DNGL_TO_HOST_DMA_WRITEINDX_BUFFER:
+ /* ring update dma buffer for completion write */
+ prot->d2h_dma_writeindx_buf_len = dma_block_size;
+ prot->d2h_dma_writeindx_buf.va = DMA_ALLOC_CONSISTENT(dhd->osh,
+ dma_block_size, DMA_ALIGN_LEN, &alloced,
+ &prot->d2h_dma_writeindx_buf.pa,
+ &prot->d2h_dma_writeindx_buf.dmah);
+
+ if (prot->d2h_dma_writeindx_buf.va == NULL) {
+ return BCME_NOMEM;
+ }
+
+ ASSERT(ISALIGNED(prot->d2h_dma_writeindx_buf.va, 4));
+ bzero(prot->d2h_dma_writeindx_buf.va, dma_block_size);
+ OSL_CACHE_FLUSH((void *)prot->d2h_dma_writeindx_buf.va, dma_block_size);
+ DHD_ERROR(("D2H_WRITEINDX_ARRAY_HOST %d-bytes "
+ "inited for dma'ing d2h-w indices\n",
+ prot->d2h_dma_writeindx_buf_len));
+ break;
+
+ case DNGL_TO_HOST_DMA_READINDX_BUFFER:
+ /* ring update dma buffer for completion read */
+ prot->d2h_dma_readindx_buf_len = dma_block_size;
+ prot->d2h_dma_readindx_buf.va = DMA_ALLOC_CONSISTENT(dhd->osh,
+ dma_block_size, DMA_ALIGN_LEN, &alloced,
+ &prot->d2h_dma_readindx_buf.pa,
+ &prot->d2h_dma_readindx_buf.dmah);
+
+ if (prot->d2h_dma_readindx_buf.va == NULL) {
+ return BCME_NOMEM;
+ }
+
+ ASSERT(ISALIGNED(prot->d2h_dma_readindx_buf.va, 4));
+ bzero(prot->d2h_dma_readindx_buf.va, dma_block_size);
+ OSL_CACHE_FLUSH((void *)prot->d2h_dma_readindx_buf.va, dma_block_size);
+ DHD_ERROR(("D2H_READINDX_ARRAY_HOST %d-bytes "
+ "inited for dma'ing d2h-r indices\n",
+ prot->d2h_dma_readindx_buf_len));
+ break;
+
+ default:
+ DHD_ERROR(("%s: Unexpected option\n", __FUNCTION__));
+ return BCME_BADOPTION;
+ }
+
+ return BCME_OK;
+
+}
+
/* Unlink, frees allocated protocol memory (including dhd_prot) */
void dhd_prot_detach(dhd_pub_t *dhd)
{
- /* Stop the protocol module */
+ dhd_prot_t *prot = dhd->prot;
+ /* Stop the protocol module */
if (dhd->prot) {
- if (dhd->prot->dtoh_ring) {
- DMA_FREE_CONSISTENT(dhd->osh, dhd->prot->dtoh_ring,
- DNGL_TO_HOST_MSGBUF_SZ, dhd->prot->dtoh_physaddr, NULL);
-
- dhd->prot->dtoh_ring = NULL;
- PHYSADDRHISET(dhd->prot->dtoh_physaddr, 0);
- PHYSADDRLOSET(dhd->prot->dtoh_physaddr, 0);
+ /* free up scratch buffer */
+ if (prot->d2h_dma_scratch_buf.va) {
+ DMA_FREE_CONSISTENT(dhd->osh, prot->d2h_dma_scratch_buf.va,
+ DMA_D2H_SCRATCH_BUF_LEN, prot->d2h_dma_scratch_buf.pa,
+ prot->d2h_dma_scratch_buf.dmah);
+ prot->d2h_dma_scratch_buf.va = NULL;
+ }
+ /* free up ring upd buffer for submission writes */
+ if (prot->h2d_dma_writeindx_buf.va) {
+ DMA_FREE_CONSISTENT(dhd->osh, prot->h2d_dma_writeindx_buf.va,
+ prot->h2d_dma_writeindx_buf_len, prot->h2d_dma_writeindx_buf.pa,
+ prot->h2d_dma_writeindx_buf.dmah);
+ prot->h2d_dma_writeindx_buf.va = NULL;
}
- if (dhd->prot->htod_ring) {
- DMA_FREE_CONSISTENT(dhd->osh, dhd->prot->htod_ring,
- HOST_TO_DNGL_MSGBUF_SZ, dhd->prot->htod_physaddr, NULL);
-
- dhd->prot->htod_ring = NULL;
- PHYSADDRHISET(dhd->prot->htod_physaddr, 0);
- PHYSADDRLOSET(dhd->prot->htod_physaddr, 0);
+ /* free up ring upd buffer for submission reads */
+ if (prot->h2d_dma_readindx_buf.va) {
+ DMA_FREE_CONSISTENT(dhd->osh, prot->h2d_dma_readindx_buf.va,
+ prot->h2d_dma_readindx_buf_len, prot->h2d_dma_readindx_buf.pa,
+ prot->h2d_dma_readindx_buf.dmah);
+ prot->h2d_dma_readindx_buf.va = NULL;
}
- if (dhd->prot->dtohbuf) {
- MFREE(dhd->osh, dhd->prot->dtohbuf, sizeof(circularbuf_t));
- dhd->prot->dtohbuf = NULL;
+ /* free up ring upd buffer for completion writes */
+ if (prot->d2h_dma_writeindx_buf.va) {
+ DMA_FREE_CONSISTENT(dhd->osh, prot->d2h_dma_writeindx_buf.va,
+ prot->d2h_dma_writeindx_buf_len, prot->d2h_dma_writeindx_buf.pa,
+ prot->d2h_dma_writeindx_buf.dmah);
+ prot->d2h_dma_writeindx_buf.va = NULL;
}
- if (dhd->prot->htodbuf) {
- MFREE(dhd->osh, dhd->prot->htodbuf, sizeof(circularbuf_t));
- dhd->prot->htodbuf = NULL;
+ /* free up ring upd buffer for completion reads */
+ if (prot->d2h_dma_readindx_buf.va) {
+ DMA_FREE_CONSISTENT(dhd->osh, prot->d2h_dma_readindx_buf.va,
+ prot->d2h_dma_readindx_buf_len, prot->d2h_dma_readindx_buf.pa,
+ prot->d2h_dma_readindx_buf.dmah);
+ prot->d2h_dma_readindx_buf.va = NULL;
}
- if (dhd->prot->htod_ctrl_ring) {
- DMA_FREE_CONSISTENT(dhd->osh, dhd->prot->htod_ctrl_ring,
- HOST_TO_DNGL_CTRLRING_SZ, dhd->prot->htod_ctrl_physaddr, NULL);
-
- dhd->prot->htod_ctrl_ring = NULL;
- dhd->prot->htod_ctrl_physaddr = 0;
+ /* ioctl return buffer */
+ if (prot->retbuf.va) {
+ DMA_FREE_CONSISTENT(dhd->osh, dhd->prot->retbuf.va,
+ IOCT_RETBUF_SIZE, dhd->prot->retbuf.pa, dhd->prot->retbuf.dmah);
+ dhd->prot->retbuf.va = NULL;
}
- if (dhd->prot->dtoh_ctrl_ring) {
- DMA_FREE_CONSISTENT(dhd->osh, dhd->prot->dtoh_ctrl_ring,
- DNGL_TO_HOST_CTRLRING_SZ, dhd->prot->dtoh_ctrl_physaddr, NULL);
+ /* ioctl request buffer */
+ if (prot->ioctbuf.va) {
+ DMA_FREE_CONSISTENT(dhd->osh, dhd->prot->ioctbuf.va,
+ IOCT_RETBUF_SIZE, dhd->prot->ioctbuf.pa, dhd->prot->ioctbuf.dmah);
- dhd->prot->dtoh_ctrl_ring = NULL;
- dhd->prot->dtoh_ctrl_physaddr = 0;
+ dhd->prot->ioctbuf.va = NULL;
}
- if (dhd->prot->htod_ctrlbuf) {
- MFREE(dhd->osh, dhd->prot->htod_ctrlbuf, sizeof(circularbuf_t));
- dhd->prot->htod_ctrlbuf = NULL;
- }
- if (dhd->prot->dtoh_ctrlbuf) {
- MFREE(dhd->osh, dhd->prot->dtoh_ctrlbuf, sizeof(circularbuf_t));
- dhd->prot->dtoh_ctrlbuf = NULL;
- }
+ /* 1.0 H2D TXPOST ring */
+ dhd_prot_ring_detach(dhd, prot->h2dring_txp_subn);
+ /* 2.0 H2D RXPOST ring */
+ dhd_prot_ring_detach(dhd, prot->h2dring_rxp_subn);
+ /* 3.0 H2D CTRL_SUBMISSION ring */
+ dhd_prot_ring_detach(dhd, prot->h2dring_ctrl_subn);
+ /* 4.0 D2H TX_COMPLETION ring */
+ dhd_prot_ring_detach(dhd, prot->d2hring_tx_cpln);
+ /* 5.0 D2H RX_COMPLETION ring */
+ dhd_prot_ring_detach(dhd, prot->d2hring_rx_cpln);
+ /* 6.0 D2H CTRL_COMPLETION ring */
+ dhd_prot_ring_detach(dhd, prot->d2hring_ctrl_cpln);
- if (dhd->prot->retbuf) {
- DMA_FREE_CONSISTENT(dhd->osh, dhd->prot->retbuf,
- IOCT_RETBUF_SIZE, dhd->prot->retbuf_phys, NULL);
- dhd->prot->retbuf = NULL;
- }
-
- if (dhd->prot->ioctbuf) {
- DMA_FREE_CONSISTENT(dhd->osh, dhd->prot->ioctbuf,
- MSGBUF_MAX_MSG_SIZE, dhd->prot->ioctbuf_phys, NULL);
-
- dhd->prot->ioctbuf = NULL;
- }
-
- NATIVE_TO_PKTID_UNINIT(dhd->prot->pktid_map_handle);
+ NATIVE_TO_PKTID_FINI(dhd->prot->pktid_map_handle);
#ifndef CONFIG_DHD_USE_STATIC_BUF
MFREE(dhd->osh, dhd->prot, sizeof(dhd_prot_t));
@@ -327,58 +983,15 @@
/* Initialize protocol: sync w/dongle state.
* Sets dongle media info (iswl, drv_version, mac address).
*/
-int dhd_prot_init(dhd_pub_t *dhd)
+int dhd_sync_with_dongle(dhd_pub_t *dhd)
{
int ret = 0;
wlc_rev_info_t revinfo;
- dhd_prot_t *prot = dhd->prot;
- uint32 shared_flags;
+
DHD_TRACE(("%s: Enter\n", __FUNCTION__));
- dhd_bus_cmn_readshared(dhd->bus, &prot->max_tx_count, TOTAL_LFRAG_PACKET_CNT);
- if (prot->max_tx_count == 0) {
- /* This can happen if LFrag pool is not enabled for the LFRAG's */
- /* on the dongle. Let's use some default value */
- prot->max_tx_count = 64;
- }
- DHD_INFO(("%s:%d: MAX_TX_COUNT = %d\n", __FUNCTION__, __LINE__, prot->max_tx_count));
-
- dhd_bus_cmn_readshared(dhd->bus, &prot->max_rxbufpost, MAX_HOST_RXBUFS);
- if (prot->max_rxbufpost == 0) {
- /* This would happen if the dongle firmware is not */
- /* using the latest shared structure template */
- prot->max_rxbufpost = DEFAULT_RX_BUFFERS_TO_POST;
- }
- DHD_INFO(("%s:%d: MAX_RXBUFPOST = %d\n", __FUNCTION__, __LINE__, prot->max_rxbufpost));
-
- prot->active_tx_count = 0;
- prot->txflow_en = FALSE;
- prot->mb_ring_fn = dhd_bus_get_mbintr_fn(dhd->bus);
- prot->data_seq_no = 0;
- prot->ioctl_seq_no = 0;
- /* initialise msgbufs */
- shared_flags = dhd_bus_get_sharedflags(dhd->bus);
- if (shared_flags & PCIE_SHARED_HTOD_SPLIT) {
- prot->htodsplit = TRUE;
- if (dhd_msgbuf_init_htod_ctrl(dhd) == BCME_NOMEM)
- {
- prot->htodsplit = FALSE;
- DHD_ERROR(("%s:%d: HTOD ctrl ring alloc failed!\n",
- __FUNCTION__, __LINE__));
- }
- }
- if (shared_flags & PCIE_SHARED_DTOH_SPLIT) {
- prot->dtohsplit = TRUE;
- if (dhd_msgbuf_init_dtoh_ctrl(dhd) == BCME_NOMEM)
- {
- prot->dtohsplit = FALSE;
- DHD_ERROR(("%s:%d: DTOH ctrl ring alloc failed!\n",
- __FUNCTION__, __LINE__));
- }
- }
- ret = dhd_msgbuf_init_htod(dhd);
- ret = dhd_msgbuf_init_dtoh(dhd);
- ret = dhd_msgbuf_rxbuf_post(dhd);
+ /* Post event buffer after shim layer is attached */
+ ret = dhd_msgbuf_rxbuf_post_event_bufs(dhd);
/* Get the device rev info */
@@ -386,37 +999,207 @@
ret = dhd_wl_ioctl_cmd(dhd, WLC_GET_REVINFO, &revinfo, sizeof(revinfo), FALSE, 0);
if (ret < 0)
goto done;
-#if defined(WL_CFG80211)
- if (dhd_download_fw_on_driverload)
-#endif /* defined(WL_CFG80211) */
- ret = dhd_preinit_ioctls(dhd);
+
+ dhd_process_cid_mac(dhd, TRUE);
+
+ ret = dhd_preinit_ioctls(dhd);
+
+ if (!ret)
+ dhd_process_cid_mac(dhd, FALSE);
+
/* Always assumes wl for now */
dhd->iswl = TRUE;
done:
return ret;
-
}
+/* This function does all the initialization needed
+ * for the IOCTL/IOVAR path.
+ */
+int dhd_prot_init(dhd_pub_t *dhd)
+{
+ int ret = 0;
+ dhd_prot_t *prot = dhd->prot;
+
+ /* Max pkts in ring */
+ prot->max_tx_count = H2DRING_TXPOST_MAX_ITEM;
+
+ DHD_INFO(("%s:%d: MAX_TX_COUNT = %d\n", __FUNCTION__, __LINE__, prot->max_tx_count));
+
+ /* Read max rx packets supported by dongle */
+ dhd_bus_cmn_readshared(dhd->bus, &prot->max_rxbufpost, MAX_HOST_RXBUFS, 0);
+ if (prot->max_rxbufpost == 0) {
+ /* This would happen if the dongle firmware is not */
+ /* using the latest shared structure template */
+ prot->max_rxbufpost = DEFAULT_RX_BUFFERS_TO_POST;
+ }
+ DHD_INFO(("%s:%d: MAX_RXBUFPOST = %d\n", __FUNCTION__, __LINE__, prot->max_rxbufpost));
+
+ prot->max_eventbufpost = DHD_FLOWRING_MAX_EVENTBUF_POST;
+ prot->max_ioctlrespbufpost = DHD_FLOWRING_MAX_IOCTLRESPBUF_POST;
+
+ prot->active_tx_count = 0;
+ prot->data_seq_no = 0;
+ prot->ioctl_seq_no = 0;
+ prot->txp_threshold = TXP_FLUSH_MAX_ITEMS_FLUSH_CNT;
+
+ prot->ioctl_trans_id = 1;
+
+ /* Register the interrupt function upfront */
+ /* remove corerev checks in data path */
+ prot->mb_ring_fn = dhd_bus_get_mbintr_fn(dhd->bus);
+
+ /* Initialise rings */
+ /* 1.0 H2D TXPOST ring */
+ if (dhd_bus_is_txmode_push(dhd->bus)) {
+ dhd_ring_init(dhd, prot->h2dring_txp_subn);
+ }
+
+ /* 2.0 H2D RXPOST ring */
+ dhd_ring_init(dhd, prot->h2dring_rxp_subn);
+ /* 3.0 H2D CTRL_SUBMISSION ring */
+ dhd_ring_init(dhd, prot->h2dring_ctrl_subn);
+ /* 4.0 D2H TX_COMPLETION ring */
+ dhd_ring_init(dhd, prot->d2hring_tx_cpln);
+ /* 5.0 D2H RX_COMPLETION ring */
+ dhd_ring_init(dhd, prot->d2hring_rx_cpln);
+ /* 6.0 D2H CTRL_COMPLETION ring */
+ dhd_ring_init(dhd, prot->d2hring_ctrl_cpln);
+
+ /* init the scratch buffer */
+ dhd_bus_cmn_writeshared(dhd->bus, &prot->d2h_dma_scratch_buf.pa,
+ sizeof(prot->d2h_dma_scratch_buf.pa), DNGL_TO_HOST_DMA_SCRATCH_BUFFER, 0);
+ dhd_bus_cmn_writeshared(dhd->bus, &prot->d2h_dma_scratch_buf_len,
+ sizeof(prot->d2h_dma_scratch_buf_len), DNGL_TO_HOST_DMA_SCRATCH_BUFFER_LEN, 0);
+
+ /* If supported by the host, indicate the memory block
+ * for completion writes / submission reads to shared space
+ */
+ if (DMA_INDX_ENAB(dhd->dma_d2h_ring_upd_support)) {
+ dhd_bus_cmn_writeshared(dhd->bus, &prot->d2h_dma_writeindx_buf.pa,
+ sizeof(prot->d2h_dma_writeindx_buf.pa),
+ DNGL_TO_HOST_DMA_WRITEINDX_BUFFER, 0);
+ dhd_bus_cmn_writeshared(dhd->bus, &prot->h2d_dma_readindx_buf.pa,
+ sizeof(prot->h2d_dma_readindx_buf.pa),
+ HOST_TO_DNGL_DMA_READINDX_BUFFER, 0);
+ }
+
+ if (DMA_INDX_ENAB(dhd->dma_h2d_ring_upd_support)) {
+ dhd_bus_cmn_writeshared(dhd->bus, &prot->h2d_dma_writeindx_buf.pa,
+ sizeof(prot->h2d_dma_writeindx_buf.pa),
+ HOST_TO_DNGL_DMA_WRITEINDX_BUFFER, 0);
+ dhd_bus_cmn_writeshared(dhd->bus, &prot->d2h_dma_readindx_buf.pa,
+ sizeof(prot->d2h_dma_readindx_buf.pa),
+ DNGL_TO_HOST_DMA_READINDX_BUFFER, 0);
+
+ }
+
+ ret = dhd_msgbuf_rxbuf_post(dhd);
+ ret = dhd_msgbuf_rxbuf_post_ioctlresp_bufs(dhd);
+
+ return ret;
+}
+
+#define DHD_DBG_SHOW_METADATA 0
+#if DHD_DBG_SHOW_METADATA
+static void BCMFASTPATH
+dhd_prot_print_metadata(dhd_pub_t *dhd, void *ptr, int len)
+{
+ uint8 tlv_t;
+ uint8 tlv_l;
+ uint8 *tlv_v = (uint8 *)ptr;
+
+ if (len <= BCMPCIE_D2H_METADATA_HDRLEN)
+ return;
+
+ len -= BCMPCIE_D2H_METADATA_HDRLEN;
+ tlv_v += BCMPCIE_D2H_METADATA_HDRLEN;
+
+ while (len > TLV_HDR_LEN) {
+ tlv_t = tlv_v[TLV_TAG_OFF];
+ tlv_l = tlv_v[TLV_LEN_OFF];
+
+ len -= TLV_HDR_LEN;
+ tlv_v += TLV_HDR_LEN;
+ if (len < tlv_l)
+ break;
+ if ((tlv_t == 0) || (tlv_t == WLFC_CTL_TYPE_FILLER))
+ break;
+
+ switch (tlv_t) {
+ case WLFC_CTL_TYPE_TXSTATUS:
+ bcm_print_bytes("METADATA TX_STATUS", tlv_v, tlv_l);
+ break;
+
+ case WLFC_CTL_TYPE_RSSI:
+ bcm_print_bytes("METADATA RX_RSSI", tlv_v, tlv_l);
+ break;
+
+ case WLFC_CTL_TYPE_FIFO_CREDITBACK:
+ bcm_print_bytes("METADATA FIFO_CREDITBACK", tlv_v, tlv_l);
+ break;
+
+ case WLFC_CTL_TYPE_TX_ENTRY_STAMP:
+ bcm_print_bytes("METADATA TX_ENTRY", tlv_v, tlv_l);
+ break;
+
+ case WLFC_CTL_TYPE_RX_STAMP:
+ bcm_print_bytes("METADATA RX_TIMESTAMP", tlv_v, tlv_l);
+ break;
+
+ case WLFC_CTL_TYPE_TRANS_ID:
+ bcm_print_bytes("METADATA TRANS_ID", tlv_v, tlv_l);
+ break;
+
+ case WLFC_CTL_TYPE_COMP_TXSTATUS:
+ bcm_print_bytes("METADATA COMP_TXSTATUS", tlv_v, tlv_l);
+ break;
+
+ default:
+ bcm_print_bytes("METADATA UNKNOWN", tlv_v, tlv_l);
+ break;
+ }
+
+ len -= tlv_l;
+ tlv_v += tlv_l;
+ }
+}
+#endif /* DHD_DBG_SHOW_METADATA */
+
static INLINE void BCMFASTPATH
-dhd_prot_packet_free(dhd_pub_t *dhd, uint32 pktid)
+dhd_prot_packet_free(dhd_pub_t *dhd, uint32 pktid, uint8 buf_type)
{
void *PKTBUF;
dmaaddr_t pa;
uint32 pa_len;
- PKTBUF = PKTID_TO_NATIVE(dhd->prot->pktid_map_handle, pktid, pa, pa_len);
- DMA_UNMAP(dhd->osh, (uint) pa, (uint) pa_len, DMA_TX, 0, 0);
- PKTFREE(dhd->osh, PKTBUF, TRUE);
+ PKTBUF = PKTID_TO_NATIVE(dhd->prot->pktid_map_handle, pktid, pa,
+ pa_len, buf_type);
+
+ if (PKTBUF) {
+ DMA_UNMAP(dhd->osh, pa, (uint) pa_len, DMA_TX, 0, 0);
+#ifdef DHD_USE_STATIC_IOCTLBUF
+ if (buf_type == BUFF_TYPE_IOCTL_RX)
+ PKTFREE_STATIC(dhd->osh, PKTBUF, FALSE);
+ else
+ PKTFREE(dhd->osh, PKTBUF, FALSE);
+#else
+ PKTFREE(dhd->osh, PKTBUF, FALSE);
+#endif
+ }
return;
}
static INLINE void * BCMFASTPATH
-dhd_prot_packet_get(dhd_pub_t *dhd, uint32 pktid)
+dhd_prot_packet_get(dhd_pub_t *dhd, uint32 pktid, uint8 buf_type)
{
void *PKTBUF;
- ulong pa;
+ dmaaddr_t pa;
uint32 pa_len;
- PKTBUF = PKTID_TO_NATIVE(dhd->prot->pktid_map_handle, pktid, pa, pa_len);
- DMA_UNMAP(dhd->osh, (uint) pa, (uint) pa_len, DMA_RX, 0, 0);
+ PKTBUF = PKTID_TO_NATIVE(dhd->prot->pktid_map_handle, pktid, pa, pa_len, buf_type);
+ if (PKTBUF) {
+ DMA_UNMAP(dhd->osh, pa, (uint) pa_len, DMA_RX, 0, 0);
+ }
+
return PKTBUF;
}
@@ -424,243 +1207,366 @@
dhd_msgbuf_rxbuf_post(dhd_pub_t *dhd)
{
dhd_prot_t *prot = dhd->prot;
- unsigned long flags;
- uint32 fillbufs;
- uint32 i;
+ int16 fillbufs;
+ uint16 cnt = 64;
+ int retcount = 0;
+
fillbufs = prot->max_rxbufpost - prot->rxbufpost;
+ while (fillbufs > 0) {
+ cnt--;
+ if (cnt == 0) {
+ /* find a better way to reschedule rx buf post if space not available */
+ DHD_ERROR(("h2d rx post ring not available to post host buffers \n"));
+ DHD_ERROR(("Current posted host buf count %d \n", prot->rxbufpost));
+ break;
+ }
- for (i = 0; i < fillbufs; ) {
- int retcount;
- uint32 buf_count = (fillbufs - i) > RX_BUF_BURST ? RX_BUF_BURST : (fillbufs - i);
+		/* Post in a burst of 8 buffers at a time */
+ fillbufs = MIN(fillbufs, RX_BUF_BURST);
- flags = dhd_os_spin_lock(dhd);
- retcount = dhd_prot_rxbufpost(dhd, buf_count);
+ /* Post buffers */
+ retcount = dhd_prot_rxbufpost(dhd, fillbufs);
+
if (retcount > 0) {
prot->rxbufpost += (uint16)retcount;
- i += (uint16)retcount;
- dhd_os_spin_unlock(dhd, flags);
+
+ /* how many more to post */
+ fillbufs = prot->max_rxbufpost - prot->rxbufpost;
} else {
- dhd_os_spin_unlock(dhd, flags);
- break;
+ /* Make sure we don't run loop any further */
+ fillbufs = 0;
}
}
return 0;
}
+/* Post 'count' rx buffers down to the dongle */
static int BCMFASTPATH
-dhd_prot_rxbufpost(dhd_pub_t *dhd, uint32 count)
+dhd_prot_rxbufpost(dhd_pub_t *dhd, uint16 count)
{
void *p;
- uint16 pktsz = 2048;
- uint32 i;
- rxdesc_msghdr_t *rxbuf_post;
- rx_lenptr_tup_t *rx_tup;
+ uint16 pktsz = DHD_FLOWRING_RX_BUFPOST_PKTSZ;
+ uint8 *rxbuf_post_tmp;
+ host_rxbuf_post_t *rxbuf_post;
+ void* msg_start;
dmaaddr_t physaddr;
uint32 pktlen;
- uint32 msglen = sizeof(rxdesc_msghdr_t) + count * sizeof(rx_lenptr_tup_t);
-
dhd_prot_t *prot = dhd->prot;
- circularbuf_t *htod_msgbuf = (circularbuf_t *)prot->htodbuf;
+ msgbuf_ring_t * ring = prot->h2dring_rxp_subn;
+ uint8 i = 0;
+ uint16 alloced = 0;
+ unsigned long flags;
- rxbuf_post = (rxdesc_msghdr_t *)dhd_alloc_circularbuf_space(dhd,
- htod_msgbuf, (uint16)msglen, HOST_TO_DNGL_DATA);
- if (rxbuf_post == NULL) {
- DHD_INFO(("%s:%d: HTOD Msgbuf Not available\n",
- __FUNCTION__, __LINE__));
+ DHD_GENERAL_LOCK(dhd, flags);
+	/* Claim space for 'count' messages */
+ msg_start = (void *)dhd_alloc_ring_space(dhd, ring, count, &alloced);
+ DHD_GENERAL_UNLOCK(dhd, flags);
+
+ if (msg_start == NULL) {
+ DHD_INFO(("%s:%d: Rxbufpost Msgbuf Not available\n", __FUNCTION__, __LINE__));
return -1;
}
+	/* if msg_start != NULL, we should have alloced space for at least 1 item */
+ ASSERT(alloced > 0);
- /* CMN msg header */
- rxbuf_post->msg.msglen = htol16((uint16)msglen);
- rxbuf_post->msg.msgtype = MSG_TYPE_RXBUF_POST;
- rxbuf_post->msg.ifidx = 0;
- rxbuf_post->msg.u.seq.seq_no = htol16(++prot->data_seq_no);
+ rxbuf_post_tmp = (uint8*)msg_start;
- /* RX specific hdr */
- rxbuf_post->rsvd0 = 0;
- rxbuf_post->rsvd1 = 0;
- rxbuf_post->descnt = (uint8)count;
-
- rx_tup = (rx_lenptr_tup_t *) &(rxbuf_post->rx_tup[0]);
-
- for (i = 0; i < count; i++) {
+ /* loop through each message */
+ for (i = 0; i < alloced; i++) {
+ rxbuf_post = (host_rxbuf_post_t *)rxbuf_post_tmp;
+ /* Create a rx buffer */
if ((p = PKTGET(dhd->osh, pktsz, FALSE)) == NULL) {
DHD_ERROR(("%s:%d: PKTGET for rxbuf failed\n", __FUNCTION__, __LINE__));
- printf("%s:%d: PKTGET for rxbuf failed. Need to handle this gracefully\n",
- __FUNCTION__, __LINE__);
return -1;
}
pktlen = PKTLEN(dhd->osh, p);
- physaddr = DMA_MAP(dhd->osh, PKTDATA(dhd->osh, p), pktlen, DMA_RX, 0, 0);
- if (physaddr == 0) {
- DHD_ERROR(("Something really bad, unless 0 is a valid phyaddr\n"));
+ physaddr = DMA_MAP(dhd->osh, PKTDATA(dhd->osh, p), pktlen, DMA_RX, p, 0);
+ if (PHYSADDRISZERO(physaddr)) {
+ if (RING_WRITE_PTR(ring) < alloced - i)
+ RING_WRITE_PTR(ring) = RING_MAX_ITEM(ring) - alloced + i;
+ else
+ RING_WRITE_PTR(ring) -= alloced - i;
+ alloced = i;
+ DMA_UNMAP(dhd->osh, physaddr, pktlen, DMA_RX, 0, 0);
+ PKTFREE(dhd->osh, p, FALSE);
+ DHD_ERROR(("Invalid phyaddr 0\n"));
ASSERT(0);
+ break;
}
- /* Each bufid-len-ptr tuple */
- rx_tup->rxbufid = htol32(NATIVE_TO_PKTID(dhd->prot->pktid_map_handle,
- p, physaddr, pktlen, DMA_RX));
- rx_tup->len = htol16((uint16)PKTLEN(dhd->osh, p));
- rx_tup->rsvd2 = 0;
- rx_tup->ret_buf.high_addr = htol32(PHYSADDRHI(physaddr));
- rx_tup->ret_buf.low_addr = htol32(PHYSADDRLO(physaddr));
- rx_tup++;
+ PKTPULL(dhd->osh, p, prot->rx_metadata_offset);
+ pktlen = PKTLEN(dhd->osh, p);
+
+ /* CMN msg header */
+ rxbuf_post->cmn_hdr.msg_type = MSG_TYPE_RXBUF_POST;
+ rxbuf_post->cmn_hdr.if_id = 0;
+
+ /* get the lock before calling NATIVE_TO_PKTID */
+ DHD_GENERAL_LOCK(dhd, flags);
+
+ rxbuf_post->cmn_hdr.request_id =
+ htol32(NATIVE_TO_PKTID(dhd->prot->pktid_map_handle, p, physaddr,
+ pktlen, DMA_RX, BUFF_TYPE_DATA_RX));
+
+ /* free lock */
+ DHD_GENERAL_UNLOCK(dhd, flags);
+
+ if (rxbuf_post->cmn_hdr.request_id == DHD_PKTID_INVALID) {
+ if (RING_WRITE_PTR(ring) < alloced - i)
+ RING_WRITE_PTR(ring) = RING_MAX_ITEM(ring) - alloced + i;
+ else
+ RING_WRITE_PTR(ring) -= alloced - i;
+ alloced = i;
+ DMA_UNMAP(dhd->osh, physaddr, pktlen, DMA_RX, 0, 0);
+ PKTFREE(dhd->osh, p, FALSE);
+ DHD_ERROR(("Pktid pool depleted.\n"));
+ break;
+ }
+
+ rxbuf_post->data_buf_len = htol16((uint16)pktlen);
+ rxbuf_post->data_buf_addr.high_addr = htol32(PHYSADDRHI(physaddr));
+ rxbuf_post->data_buf_addr.low_addr =
+ htol32(PHYSADDRLO(physaddr) + prot->rx_metadata_offset);
+
+ if (prot->rx_metadata_offset) {
+ rxbuf_post->metadata_buf_len = prot->rx_metadata_offset;
+ rxbuf_post->metadata_buf_addr.high_addr = htol32(PHYSADDRHI(physaddr));
+ rxbuf_post->metadata_buf_addr.low_addr = htol32(PHYSADDRLO(physaddr));
+ } else {
+ rxbuf_post->metadata_buf_len = 0;
+ rxbuf_post->metadata_buf_addr.high_addr = 0;
+ rxbuf_post->metadata_buf_addr.low_addr = 0;
+ }
+
+ /* Move rxbuf_post_tmp to next item */
+ rxbuf_post_tmp = rxbuf_post_tmp + RING_LEN_ITEMS(ring);
}
+ /* Update the write pointer in TCM & ring bell */
+ if (alloced > 0)
+ prot_ring_write_complete(dhd, prot->h2dring_rxp_subn, msg_start, alloced);
- /* Since, we are filling the data directly into the bufptr obtained
- * from the msgbuf, we can directly call the write_complete
- */
- circularbuf_write_complete(htod_msgbuf, (uint16)msglen);
-
- return count;
-}
-
-void BCMFASTPATH
-dhd_msgbuf_ringbell(void *ctx)
-{
- dhd_pub_t *dhd = (dhd_pub_t *) ctx;
- dhd_prot_t *prot = dhd->prot;
- circularbuf_t *htod_msgbuf = (circularbuf_t *)prot->htodbuf;
-
- /* Following will take care of writing both the Write and End pointers (32 bits) */
- dhd_bus_cmn_writeshared(dhd->bus, &(CIRCULARBUF_WRITE_PTR(htod_msgbuf)),
- sizeof(uint32), HOST_TO_DNGL_WPTR);
-
- prot->mb_ring_fn(dhd->bus, *(uint32 *) &(CIRCULARBUF_WRITE_PTR(htod_msgbuf)));
-}
-
-void BCMFASTPATH
-dhd_ctrlbuf_ringbell(void *ctx)
-{
- dhd_pub_t *dhd = (dhd_pub_t *) ctx;
- dhd_prot_t *prot = dhd->prot;
- circularbuf_t *htod_ctrlbuf = (circularbuf_t *)prot->htod_ctrlbuf;
-
- /* Following will take care of writing both the Write and End pointers (32 bits) */
- dhd_bus_cmn_writeshared(dhd->bus, &(CIRCULARBUF_WRITE_PTR(htod_ctrlbuf)),
- sizeof(uint32), HTOD_CTRL_WPTR);
-
- prot->mb_ring_fn(dhd->bus, *(uint32 *) &(CIRCULARBUF_WRITE_PTR(htod_ctrlbuf)));
+ return alloced;
}
static int
-dhd_msgbuf_init_htod(dhd_pub_t *dhd)
+dhd_prot_rxbufpost_ctrl(dhd_pub_t *dhd, bool event_buf)
{
+ void *p;
+ uint16 pktsz;
+ ioctl_resp_evt_buf_post_msg_t *rxbuf_post;
+ dmaaddr_t physaddr;
+ uint32 pktlen;
dhd_prot_t *prot = dhd->prot;
- circularbuf_t *htod_msgbuf = (circularbuf_t *)prot->htodbuf;
+ uint16 alloced = 0;
+ unsigned long flags;
+ uint8 buf_type;
- circularbuf_init(htod_msgbuf, prot->htod_ring, HOST_TO_DNGL_MSGBUF_SZ);
- circularbuf_register_cb(htod_msgbuf, dhd_msgbuf_ringbell, (void *)dhd);
- dhd_bus_cmn_writeshared(dhd->bus, &prot->htod_physaddr,
- sizeof(prot->htod_physaddr), HOST_TO_DNGL_BUF_ADDR);
+ if (event_buf) {
+ /* Allocate packet for event buffer post */
+ pktsz = DHD_FLOWRING_RX_BUFPOST_PKTSZ;
+ buf_type = BUFF_TYPE_EVENT_RX;
+ } else {
+ /* Allocate packet for ctrl/ioctl buffer post */
+ pktsz = DHD_FLOWRING_IOCTL_BUFPOST_PKTSZ;
+ buf_type = BUFF_TYPE_IOCTL_RX;
+ }
- dhd_bus_cmn_writeshared(dhd->bus, &(CIRCULARBUF_WRITE_PTR(htod_msgbuf)),
- sizeof(uint32), HOST_TO_DNGL_WPTR);
+#ifdef DHD_USE_STATIC_IOCTLBUF
+ if (!event_buf)
+ p = PKTGET_STATIC(dhd->osh, pktsz, FALSE);
+ else
+ p = PKTGET(dhd->osh, pktsz, FALSE);
+#else
+ p = PKTGET(dhd->osh, pktsz, FALSE);
+#endif
- return 0;
+ pktlen = PKTLEN(dhd->osh, p);
+ physaddr = DMA_MAP(dhd->osh, PKTDATA(dhd->osh, p), pktlen, DMA_RX, p, 0);
+ if (PHYSADDRISZERO(physaddr)) {
+ DHD_ERROR(("Invalid phyaddr 0\n"));
+ ASSERT(0);
+ goto free_pkt_return;
+ }
+
+ DHD_GENERAL_LOCK(dhd, flags);
+ rxbuf_post = (ioctl_resp_evt_buf_post_msg_t *)dhd_alloc_ring_space(dhd,
+ prot->h2dring_ctrl_subn, DHD_FLOWRING_DEFAULT_NITEMS_POSTED_H2D, &alloced);
+ if (rxbuf_post == NULL) {
+ DHD_GENERAL_UNLOCK(dhd, flags);
+ DHD_ERROR(("%s:%d: Ctrl submit Msgbuf Not available to post buffer \n",
+ __FUNCTION__, __LINE__));
+ DMA_UNMAP(dhd->osh, physaddr, pktlen, DMA_RX, 0, 0);
+ goto free_pkt_return;
+ }
+
+ /* CMN msg header */
+ if (event_buf)
+ rxbuf_post->cmn_hdr.msg_type = MSG_TYPE_EVENT_BUF_POST;
+ else
+ rxbuf_post->cmn_hdr.msg_type = MSG_TYPE_IOCTLRESP_BUF_POST;
+ rxbuf_post->cmn_hdr.if_id = 0;
+
+ rxbuf_post->cmn_hdr.request_id =
+ htol32(NATIVE_TO_PKTID(dhd->prot->pktid_map_handle, p, physaddr,
+ pktlen, DMA_RX, buf_type));
+
+ if (rxbuf_post->cmn_hdr.request_id == DHD_PKTID_INVALID) {
+ if (RING_WRITE_PTR(prot->h2dring_ctrl_subn) == 0)
+ RING_WRITE_PTR(prot->h2dring_ctrl_subn) =
+ RING_MAX_ITEM(prot->h2dring_ctrl_subn) - 1;
+ else
+ RING_WRITE_PTR(prot->h2dring_ctrl_subn)--;
+ DHD_GENERAL_UNLOCK(dhd, flags);
+ DMA_UNMAP(dhd->osh, physaddr, pktlen, DMA_RX, 0, 0);
+ goto free_pkt_return;
+ }
+
+ rxbuf_post->cmn_hdr.flags = 0;
+ rxbuf_post->host_buf_len = htol16((uint16)PKTLEN(dhd->osh, p));
+ rxbuf_post->host_buf_addr.high_addr = htol32(PHYSADDRHI(physaddr));
+ rxbuf_post->host_buf_addr.low_addr = htol32(PHYSADDRLO(physaddr));
+
+ /* Update the write pointer in TCM & ring bell */
+ prot_ring_write_complete(dhd, prot->h2dring_ctrl_subn, rxbuf_post,
+ DHD_FLOWRING_DEFAULT_NITEMS_POSTED_H2D);
+ DHD_GENERAL_UNLOCK(dhd, flags);
+
+ return 1;
+
+free_pkt_return:
+#ifdef DHD_USE_STATIC_IOCTLBUF
+ if (buf_type == BUFF_TYPE_IOCTL_RX)
+ PKTFREE_STATIC(dhd->osh, p, FALSE);
+ else
+ PKTFREE(dhd->osh, p, FALSE);
+#else
+ PKTFREE(dhd->osh, p, FALSE);
+#endif
+
+ return -1;
}
+
+static uint16
+dhd_msgbuf_rxbuf_post_ctrlpath(dhd_pub_t *dhd, bool event_buf, uint32 max_to_post)
+{
+ uint32 i = 0;
+ int32 ret_val;
+
+ DHD_INFO(("max to post %d, event %d \n", max_to_post, event_buf));
+ while (i < max_to_post) {
+ ret_val = dhd_prot_rxbufpost_ctrl(dhd, event_buf);
+ if (ret_val < 0)
+ break;
+ i++;
+ }
+ DHD_INFO(("posted %d buffers to event_pool/ioctl_resp_pool %d\n", i, event_buf));
+ return (uint16)i;
+}
+
static int
-dhd_msgbuf_init_dtoh(dhd_pub_t *dhd)
+dhd_msgbuf_rxbuf_post_ioctlresp_bufs(dhd_pub_t *dhd)
{
dhd_prot_t *prot = dhd->prot;
- circularbuf_t *dtoh_msgbuf = (circularbuf_t *)prot->dtohbuf;
+ uint16 retcnt = 0;
- prot->rxbufpost = 0;
- circularbuf_init(dtoh_msgbuf, prot->dtoh_ring, DNGL_TO_HOST_MSGBUF_SZ);
- dhd_bus_cmn_writeshared(dhd->bus, &prot->dtoh_physaddr,
- sizeof(prot->dtoh_physaddr), DNGL_TO_HOST_BUF_ADDR);
-
- dhd_bus_cmn_writeshared(dhd->bus, &CIRCULARBUF_READ_PTR(dtoh_msgbuf),
- sizeof(uint16), DNGL_TO_HOST_RPTR);
-
- /* One dummy interrupt to the device. This would trigger */
- /* the msgbuf initializations at the device side. */
- /* Send dummy intr to device here, only if support for split data/ctrl rings is disabled */
- /* Else send the dummy initialization intr at dtoh ctrl buf init */
-
- dhd_bus_ringbell(dhd->bus, PCIE_INTB);
+ DHD_INFO(("ioctl resp buf post\n"));
+ retcnt = dhd_msgbuf_rxbuf_post_ctrlpath(dhd, FALSE,
+ prot->max_ioctlrespbufpost - prot->cur_ioctlresp_bufs_posted);
+ prot->cur_ioctlresp_bufs_posted += retcnt;
return 0;
}
-/* Allocate space for HTOD ctrl ring on host and initialize handle/doorbell for the same */
-static int dhd_msgbuf_init_htod_ctrl(dhd_pub_t *dhd)
+static int
+dhd_msgbuf_rxbuf_post_event_bufs(dhd_pub_t *dhd)
{
- uint alloced;
dhd_prot_t *prot = dhd->prot;
- prot->htod_ctrlbuf = MALLOC(dhd->osh, sizeof(circularbuf_t));
- memset(prot->htod_ctrlbuf, 0, sizeof(circularbuf_t));
-
- prot->htod_ctrl_ring = DMA_ALLOC_CONSISTENT(dhd->osh, HOST_TO_DNGL_CTRLRING_SZ, 4,
- &alloced, &prot->htod_ctrl_physaddr, NULL);
- if (prot->htod_ctrl_ring == NULL) {
- return BCME_NOMEM;
- }
-
- ASSERT(MODX((unsigned long)prot->htod_ctrl_ring, 4) == 0);
-
- circularbuf_init(prot->htod_ctrlbuf, prot->htod_ctrl_ring, HOST_TO_DNGL_CTRLRING_SZ);
- circularbuf_register_cb(prot->htod_ctrlbuf, dhd_ctrlbuf_ringbell, (void *)dhd);
- dhd_bus_cmn_writeshared(dhd->bus, &prot->htod_ctrl_physaddr,
- sizeof(prot->htod_ctrl_physaddr), HOST_TO_DNGL_CTRLBUF_ADDR);
-
- dhd_bus_cmn_writeshared(dhd->bus, &(CIRCULARBUF_WRITE_PTR(prot->htod_ctrlbuf)),
- sizeof(uint32), HTOD_CTRL_WPTR);
-
- return 0;
-}
-/* Allocate space for DTOH ctrl ring on host and initialize msgbuf handle in dhd_prot_t */
-static int dhd_msgbuf_init_dtoh_ctrl(dhd_pub_t *dhd)
-{
- uint alloced;
- dhd_prot_t *prot = dhd->prot;
- prot->dtoh_ctrlbuf = MALLOC(dhd->osh, sizeof(circularbuf_t));
- memset(prot->dtoh_ctrlbuf, 0, sizeof(circularbuf_t));
-
- prot->dtoh_ctrl_ring = DMA_ALLOC_CONSISTENT(dhd->osh, DNGL_TO_HOST_CTRLRING_SZ, 4,
- &alloced, &prot->dtoh_ctrl_physaddr, NULL);
- if (prot->dtoh_ctrl_ring == NULL) {
- return BCME_NOMEM;
- }
- ASSERT(MODX((unsigned long)prot->dtoh_ctrl_ring, 4) == 0);
-
- circularbuf_init(prot->dtoh_ctrlbuf, prot->dtoh_ctrl_ring, DNGL_TO_HOST_CTRLRING_SZ);
- dhd_bus_cmn_writeshared(dhd->bus, &prot->dtoh_ctrl_physaddr,
- sizeof(prot->dtoh_ctrl_physaddr), DNGL_TO_HOST_CTRLBUF_ADDR);
-
- dhd_bus_cmn_writeshared(dhd->bus, &(CIRCULARBUF_READ_PTR(prot->dtoh_ctrlbuf)),
- sizeof(uint32), DTOH_CTRL_RPTR);
+ prot->cur_event_bufs_posted += dhd_msgbuf_rxbuf_post_ctrlpath(dhd, TRUE,
+ prot->max_eventbufpost - prot->cur_event_bufs_posted);
return 0;
}
int BCMFASTPATH
-dhd_prot_process_msgbuf(dhd_pub_t *dhd)
+dhd_prot_process_msgbuf_rxcpl(dhd_pub_t *dhd)
{
dhd_prot_t *prot = dhd->prot;
- circularbuf_t *dtoh_msgbuf = (circularbuf_t *)prot->dtohbuf;
-
- dhd_bus_cmn_readshared(dhd->bus, &CIRCULARBUF_WRITE_PTR(dtoh_msgbuf), DNGL_TO_HOST_WPTR);
/* Process all the messages - DTOH direction */
while (TRUE) {
uint8 *src_addr;
uint16 src_len;
-
- src_addr = circularbuf_get_read_ptr(dtoh_msgbuf, &src_len);
+ /* Store current read pointer */
+ /* Read pointer will be updated in prot_early_upd_rxcpln_read_idx */
+ prot_store_rxcpln_read_idx(dhd, prot->d2hring_rx_cpln);
+ /* Get the message from ring */
+ src_addr = prot_get_src_addr(dhd, prot->d2hring_rx_cpln, &src_len);
if (src_addr == NULL)
break;
/* Prefetch data to populate the cache */
OSL_PREFETCH(src_addr);
- dhd_prot_process_msgtype(dhd, src_addr, src_len);
- circularbuf_read_complete(dtoh_msgbuf, src_len);
+ if (dhd_prot_process_msgtype(dhd, prot->d2hring_rx_cpln, src_addr,
+ src_len) != BCME_OK) {
+ prot_upd_read_idx(dhd, prot->d2hring_rx_cpln);
+ DHD_ERROR(("%s: Error at process rxpl msgbuf of len %d\n",
+ __FUNCTION__, src_len));
+ }
+
+ /* Update read pointer */
+ prot_upd_read_idx(dhd, prot->d2hring_rx_cpln);
+ }
+
+ return 0;
+}
+
+void
+dhd_prot_update_txflowring(dhd_pub_t *dhd, uint16 flow_id, void *msgring_info)
+{
+ uint16 r_index = 0;
+ msgbuf_ring_t *ring = (msgbuf_ring_t *)msgring_info;
+
+ /* Update read pointer */
+ if (DMA_INDX_ENAB(dhd->dma_d2h_ring_upd_support)) {
+ r_index = dhd_get_dmaed_index(dhd, H2D_DMA_READINDX, ring->idx);
+ ring->ringstate->r_offset = r_index;
+ }
+
+ DHD_TRACE(("flow %d, write %d read %d \n\n", flow_id, RING_WRITE_PTR(ring),
+ RING_READ_PTR(ring)));
+
+ /* Need more logic here, but for now use it directly */
+ dhd_bus_schedule_queue(dhd->bus, flow_id, TRUE);
+}
+
+
+int BCMFASTPATH
+dhd_prot_process_msgbuf_txcpl(dhd_pub_t *dhd)
+{
+ dhd_prot_t *prot = dhd->prot;
+
+ /* Process all the messages - DTOH direction */
+ while (TRUE) {
+ uint8 *src_addr;
+ uint16 src_len;
+
+ src_addr = prot_get_src_addr(dhd, prot->d2hring_tx_cpln, &src_len);
+ if (src_addr == NULL)
+ break;
+
+ /* Prefetch data to populate the cache */
+ OSL_PREFETCH(src_addr);
+
+ if (dhd_prot_process_msgtype(dhd, prot->d2hring_tx_cpln, src_addr,
+ src_len) != BCME_OK) {
+ DHD_ERROR(("%s: Error at process txcmpl msgbuf of len %d\n",
+ __FUNCTION__, src_len));
+ }
/* Write to dngl rd ptr */
- dhd_bus_cmn_writeshared(dhd->bus, &CIRCULARBUF_READ_PTR(dtoh_msgbuf),
- sizeof(uint16), DNGL_TO_HOST_RPTR);
+ prot_upd_read_idx(dhd, prot->d2hring_tx_cpln);
}
return 0;
@@ -670,40 +1576,40 @@
dhd_prot_process_ctrlbuf(dhd_pub_t * dhd)
{
dhd_prot_t *prot = dhd->prot;
- circularbuf_t *dtoh_ctrlbuf = (circularbuf_t *)prot->dtoh_ctrlbuf;
-
- dhd_bus_cmn_readshared(dhd->bus, &CIRCULARBUF_WRITE_PTR(dtoh_ctrlbuf), DTOH_CTRL_WPTR);
/* Process all the messages - DTOH direction */
while (TRUE) {
uint8 *src_addr;
uint16 src_len;
+ src_addr = prot_get_src_addr(dhd, prot->d2hring_ctrl_cpln, &src_len);
- src_addr = circularbuf_get_read_ptr(dtoh_ctrlbuf, &src_len);
if (src_addr == NULL) {
break;
}
+
/* Prefetch data to populate the cache */
OSL_PREFETCH(src_addr);
-
- dhd_prot_process_msgtype(dhd, src_addr, src_len);
- circularbuf_read_complete(dtoh_ctrlbuf, src_len);
+ if (dhd_prot_process_msgtype(dhd, prot->d2hring_ctrl_cpln, src_addr,
+ src_len) != BCME_OK) {
+ DHD_ERROR(("%s: Error at process ctrlmsgbuf of len %d\n",
+ __FUNCTION__, src_len));
+ }
/* Write to dngl rd ptr */
- dhd_bus_cmn_writeshared(dhd->bus, &CIRCULARBUF_READ_PTR(dtoh_ctrlbuf),
- sizeof(uint16), DTOH_CTRL_RPTR);
+ prot_upd_read_idx(dhd, prot->d2hring_ctrl_cpln);
}
return 0;
}
-static void BCMFASTPATH
-dhd_prot_process_msgtype(dhd_pub_t *dhd, uint8* buf, uint16 len)
+static int BCMFASTPATH
+dhd_prot_process_msgtype(dhd_pub_t *dhd, msgbuf_ring_t *ring, uint8* buf, uint16 len)
{
dhd_prot_t *prot = dhd->prot;
uint32 cur_dma_len = 0;
+ int ret = BCME_OK;
- DHD_TRACE(("%s: process msgbuf of len %d\n", __FUNCTION__, len));
+ DHD_INFO(("%s: process msgbuf of len %d\n", __FUNCTION__, len));
while (len > 0) {
ASSERT(len > (sizeof(cmn_msg_hdr_t) + prot->rx_dataoffset));
@@ -716,188 +1622,244 @@
else {
cur_dma_len = len;
}
- dhd_process_msgtype(dhd, buf, (uint16)cur_dma_len);
+ if (dhd_process_msgtype(dhd, ring, buf, (uint16)cur_dma_len) != BCME_OK) {
+ DHD_ERROR(("%s: Error at process msg of dmalen %d\n",
+ __FUNCTION__, cur_dma_len));
+ ret = BCME_ERROR;
+ }
+
len -= (uint16)cur_dma_len;
buf += cur_dma_len;
}
+ return ret;
}
-
-static void
-dhd_check_sequence_num(cmn_msg_hdr_t *msg)
+#define PCIE_M2M_D2H_DMA_WAIT_TRIES 256
+#define PCIE_D2H_RESET_MARK 0xdeadbeef
+void dhd_msgbuf_d2h_check_cmplt(msgbuf_ring_t *ring, void *msg)
{
- static uint32 ioctl_seq_no_old = 0;
- static uint32 data_seq_no_old = 0;
+ uint32 tries;
+ uint32 *marker = (uint32 *)msg + RING_LEN_ITEMS(ring) / sizeof(uint32) - 1;
- switch (msg->msgtype) {
- case MSG_TYPE_IOCTL_CMPLT:
- if (msg->u.seq.seq_no && msg->u.seq.seq_no != (ioctl_seq_no_old + 1))
- {
- DHD_ERROR(("Error in IOCTL MsgBuf Sequence number!!"
- "new seq no %u, old seq number %u\n",
- msg->u.seq.seq_no, ioctl_seq_no_old));
- }
- ioctl_seq_no_old = msg->u.seq.seq_no;
- break;
-
- case MSG_TYPE_RX_CMPLT:
- case MSG_TYPE_WL_EVENT :
- case MSG_TYPE_TX_STATUS :
- case MSG_TYPE_LOOPBACK:
- if (msg->u.seq.seq_no && msg->u.seq.seq_no != (data_seq_no_old + 1))
- {
- DHD_ERROR(("Error in DATA MsgBuf Sequence number!!"
- "new seq no %u old seq number %u\n",
- msg->u.seq.seq_no, data_seq_no_old));
- }
- data_seq_no_old = msg->u.seq.seq_no;
- break;
-
- default:
- printf("Unknown MSGTYPE in %s \n", __FUNCTION__);
- break;
-
+ for (tries = 0; tries < PCIE_M2M_D2H_DMA_WAIT_TRIES; tries++) {
+ if (*(volatile uint32 *)marker != PCIE_D2H_RESET_MARK)
+ return;
+ OSL_CACHE_INV(msg, RING_LEN_ITEMS(ring));
}
+
+ /* only print error for data ring */
+ if (ring->idx == BCMPCIE_D2H_MSGRING_TX_COMPLETE ||
+ ring->idx == BCMPCIE_D2H_MSGRING_RX_COMPLETE)
+ DHD_ERROR(("%s: stale msgbuf content after %d retries\n",
+ __FUNCTION__, tries));
}
-static void BCMFASTPATH
-dhd_process_msgtype(dhd_pub_t *dhd, uint8* buf, uint16 len)
+static int BCMFASTPATH
+dhd_process_msgtype(dhd_pub_t *dhd, msgbuf_ring_t *ring, uint8* buf, uint16 len)
{
uint16 pktlen = len;
uint16 msglen;
uint8 msgtype;
cmn_msg_hdr_t *msg = NULL;
+ int ret = BCME_OK;
+
+ ASSERT(ring && ring->ringmem);
+ msglen = RING_LEN_ITEMS(ring);
+ if (msglen == 0) {
+ DHD_ERROR(("%s: ringidx %d, msglen is %d, pktlen is %d \n",
+ __FUNCTION__, ring->idx, msglen, pktlen));
+ return BCME_ERROR;
+ }
+
while (pktlen > 0) {
msg = (cmn_msg_hdr_t *)buf;
- msgtype = msg->msgtype;
- msglen = msg->msglen;
- /* Prefetch data to populate the cache */
- OSL_PREFETCH(buf+msglen);
+ dhd_msgbuf_d2h_check_cmplt(ring, msg);
- dhd_check_sequence_num(msg);
+ msgtype = msg->msg_type;
- DHD_INFO(("msgtype %d, msglen is %d \n", msgtype, msglen));
- switch (msgtype) {
- case MSG_TYPE_IOCTL_CMPLT:
- DHD_INFO((" MSG_TYPE_IOCTL_CMPLT\n"));
- dhd_prot_ioctcmplt_process(dhd, buf);
- break;
- case MSG_TYPE_RX_CMPLT:
- DHD_INFO((" MSG_TYPE_RX_CMPLT\n"));
- dhd_prot_rxcmplt_process(dhd, buf);
- break;
- case MSG_TYPE_WL_EVENT:
- DHD_INFO((" MSG_TYPE_WL_EVENT\n"));
- dhd_prot_event_process(dhd, buf, msglen);
- break;
- case MSG_TYPE_TX_STATUS:
- DHD_INFO((" MSG_TYPE_TX_STATUS\n"));
- dhd_prot_txstatus_process(dhd, buf);
- break;
- case MSG_TYPE_LOOPBACK:
- bcm_print_bytes("LPBK RESP: ", (uint8 *)msg, msglen);
- DHD_ERROR((" MSG_TYPE_LOOPBACK, len %d\n", msglen));
- break;
- default :
- DHD_ERROR(("Unknown state in %s,"
- "rxoffset %d\n", __FUNCTION__, dhd->prot->rx_dataoffset));
- bcm_print_bytes("UNKNOWN msg", (uchar *)msg, msglen);
- break;
+
+ DHD_INFO(("msgtype %d, msglen is %d, pktlen is %d \n",
+ msgtype, msglen, pktlen));
+ if (msgtype == MSG_TYPE_LOOPBACK) {
+ bcm_print_bytes("LPBK RESP: ", (uint8 *)msg, msglen);
+ DHD_ERROR((" MSG_TYPE_LOOPBACK, len %d\n", msglen));
}
- DHD_INFO(("pktlen is %d, msglen is %d\n", pktlen, msglen));
- if (pktlen < msglen) {
- return;
+ ASSERT(msgtype < DHD_PROT_FUNCS);
+ if (table_lookup[msgtype]) {
+ table_lookup[msgtype](dhd, buf, msglen);
+ }
+
+ if (pktlen < msglen) {
+ ret = BCME_ERROR;
+ goto done;
}
pktlen = pktlen - msglen;
buf = buf + msglen;
+ if (msgtype == MSG_TYPE_RX_CMPLT)
+ prot_early_upd_rxcpln_read_idx(dhd,
+ dhd->prot->d2hring_rx_cpln);
}
+done:
+
+#ifdef DHD_RX_CHAINING
+ dhd_rxchain_commit(dhd);
+#endif
+
+ return ret;
}
+
static void
-dhd_prot_ioctcmplt_process(dhd_pub_t *dhd, void * buf)
+dhd_prot_ringstatus_process(dhd_pub_t *dhd, void * buf, uint16 msglen)
{
- uint32 retlen, status, inline_data = 0;
- uint32 pkt_id, xt_id;
-
- ioct_resp_hdr_t * ioct_resp = (ioct_resp_hdr_t *)buf;
- retlen = ltoh32(ioct_resp->ret_len);
- pkt_id = ltoh32(ioct_resp->pkt_id);
- xt_id = ltoh32(ioct_resp->xt_id);
- status = ioct_resp->status;
- if (retlen <= 4) {
- inline_data = ltoh32(ioct_resp->inline_data);
- } else {
- OSL_CACHE_INV((void *) dhd->prot->retbuf, retlen);
- }
- DHD_CTL(("status from the pkt_id is %d, ioctl is %d, ret_len is %d, xt_id %d\n",
- pkt_id, status, retlen, xt_id));
-
- if (retlen == 0)
- retlen = 1;
-
- dhd_bus_update_retlen(dhd->bus, retlen, pkt_id, status, inline_data);
- dhd_os_ioctl_resp_wake(dhd);
-}
-
-static void BCMFASTPATH
-dhd_prot_txstatus_process(dhd_pub_t *dhd, void * buf)
-{
- dhd_prot_t *prot = dhd->prot;
- txstatus_hdr_t * txstatus;
- unsigned long flags;
- uint32 pktid;
-
- /* locks required to protect circular buffer accesses */
- flags = dhd_os_spin_lock(dhd);
-
- txstatus = (txstatus_hdr_t *)buf;
- pktid = ltoh32(txstatus->pktid);
-
- prot->active_tx_count--;
-
- ASSERT(pktid != 0);
- dhd_prot_packet_free(dhd, pktid);
-
- if (prot->txflow_en == TRUE) {
- /* If the pktpool availability is above the high watermark, */
- /* let's resume the flow of packets to dongle. */
- if ((prot->max_tx_count - prot->active_tx_count) > DHD_START_QUEUE_THRESHOLD) {
- dhd_bus_start_queue(dhd->bus);
- prot->txflow_en = FALSE;
- }
- }
-
- dhd_os_spin_unlock(dhd, flags);
+ pcie_ring_status_t * ring_status = (pcie_ring_status_t *)buf;
+ DHD_ERROR(("ring status: request_id %d, status 0x%04x, flow ring %d, w_offset %d \n",
+ ring_status->cmn_hdr.request_id, ring_status->compl_hdr.status,
+ ring_status->compl_hdr.flow_ring_id, ring_status->write_idx));
+ /* How do we track this to pair it with ??? */
return;
}
static void
-dhd_prot_event_process(dhd_pub_t *dhd, uint8* buf, uint16 len)
+dhd_prot_genstatus_process(dhd_pub_t *dhd, void * buf, uint16 msglen)
{
- wl_event_hdr_t * evnt;
+ pcie_gen_status_t * gen_status = (pcie_gen_status_t *)buf;
+ DHD_ERROR(("gen status: request_id %d, status 0x%04x, flow ring %d \n",
+ gen_status->cmn_hdr.request_id, gen_status->compl_hdr.status,
+ gen_status->compl_hdr.flow_ring_id));
+
+ /* How do we track this to pair it with ??? */
+ return;
+}
+
+static void
+dhd_prot_ioctack_process(dhd_pub_t *dhd, void * buf, uint16 msglen)
+{
+ ioctl_req_ack_msg_t * ioct_ack = (ioctl_req_ack_msg_t *)buf;
+
+ DHD_CTL(("ioctl req ack: request_id %d, status 0x%04x, flow ring %d \n",
+ ioct_ack->cmn_hdr.request_id, ioct_ack->compl_hdr.status,
+ ioct_ack->compl_hdr.flow_ring_id));
+ if (ioct_ack->compl_hdr.status != 0) {
+ DHD_ERROR(("got an error status for the ioctl request; handling is still TODO\n"));
+ }
+
+ memset(buf, 0, msglen);
+ ioct_ack->marker = PCIE_D2H_RESET_MARK;
+}
+static void
+dhd_prot_ioctcmplt_process(dhd_pub_t *dhd, void * buf, uint16 msglen)
+{
+ uint16 status;
+ uint32 resp_len = 0;
+ uint32 pkt_id, xt_id;
+ ioctl_comp_resp_msg_t * ioct_resp = (ioctl_comp_resp_msg_t *)buf;
+
+ resp_len = ltoh16(ioct_resp->resp_len);
+ xt_id = ltoh16(ioct_resp->trans_id);
+ pkt_id = ltoh32(ioct_resp->cmn_hdr.request_id);
+ status = ioct_resp->compl_hdr.status;
+
+ memset(buf, 0, msglen);
+ ioct_resp->marker = PCIE_D2H_RESET_MARK;
+
+ DHD_CTL(("IOCTL_COMPLETE: pktid %x xtid %d status %x resplen %d\n",
+ pkt_id, xt_id, status, resp_len));
+
+ dhd_bus_update_retlen(dhd->bus, sizeof(ioctl_comp_resp_msg_t), pkt_id, status, resp_len);
+ dhd_os_ioctl_resp_wake(dhd);
+}
+
+static void BCMFASTPATH
+dhd_prot_txstatus_process(dhd_pub_t *dhd, void * buf, uint16 msglen)
+{
+ dhd_prot_t *prot = dhd->prot;
+ host_txbuf_cmpl_t * txstatus;
+ unsigned long flags;
+ uint32 pktid;
+ void *pkt;
+
+ /* locks required to protect circular buffer accesses */
+ DHD_GENERAL_LOCK(dhd, flags);
+
+ txstatus = (host_txbuf_cmpl_t *)buf;
+ pktid = ltoh32(txstatus->cmn_hdr.request_id);
+
+ DHD_INFO(("txstatus for pktid 0x%04x\n", pktid));
+ if (prot->active_tx_count)
+ prot->active_tx_count--;
+ else
+ DHD_ERROR(("Extra packets are freed\n"));
+
+ ASSERT(pktid != 0);
+ pkt = dhd_prot_packet_get(dhd, pktid, BUFF_TYPE_DATA_TX);
+ if (pkt) {
+#if defined(BCMPCIE)
+ dhd_txcomplete(dhd, pkt, true);
+#endif
+
+#if DHD_DBG_SHOW_METADATA
+ if (dhd->prot->tx_metadata_offset && txstatus->metadata_len) {
+ uchar *ptr;
+ /* The Ethernet header of TX frame was copied and removed.
+ * Here, move the data pointer forward by Ethernet header size.
+ */
+ PKTPULL(dhd->osh, pkt, ETHER_HDR_LEN);
+ ptr = PKTDATA(dhd->osh, pkt) - (dhd->prot->tx_metadata_offset);
+ bcm_print_bytes("txmetadata", ptr, txstatus->metadata_len);
+ dhd_prot_print_metadata(dhd, ptr, txstatus->metadata_len);
+ }
+#endif /* DHD_DBG_SHOW_METADATA */
+ PKTFREE(dhd->osh, pkt, TRUE);
+ }
+
+ memset(buf, 0, msglen);
+ txstatus->marker = PCIE_D2H_RESET_MARK;
+
+ DHD_GENERAL_UNLOCK(dhd, flags);
+
+ return;
+}
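The completion path above round-trips each transmitted packet through a 32-bit request_id allocated from the PKTID map (NATIVE_TO_PKTID_RSV on the post side, dhd_prot_packet_get here). A minimal sketch of such an id map follows, with hypothetical names and a toy fixed-size table; the driver's real map also records the DMA address, length, and buffer type:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical miniature pktid map: slot 0 is never handed out so that
 * a request_id of 0 can be treated as invalid, as the ASSERT(pktid != 0)
 * in the txstatus handler relies on. */
#define PKTID_INVALID 0u
#define PKTID_SLOTS   8u

struct pktid_map {
    void    *pkt[PKTID_SLOTS];   /* native packet pointer per slot */
    uint32_t next;               /* round-robin allocation cursor  */
};

/* Reserve an id for pkt; returns PKTID_INVALID when the pool is depleted. */
static uint32_t pktid_reserve(struct pktid_map *m, void *pkt)
{
    for (uint32_t i = 0; i < PKTID_SLOTS - 1; i++) {
        uint32_t id = 1 + (m->next + i) % (PKTID_SLOTS - 1);
        if (m->pkt[id] == NULL) {
            m->pkt[id] = pkt;
            m->next = id;        /* next search starts after this slot */
            return id;
        }
    }
    return PKTID_INVALID;        /* pool depleted */
}

/* Release the id (as the txstatus handler does) and recover the packet. */
static void *pktid_release(struct pktid_map *m, uint32_t id)
{
    void *pkt;
    if (id == PKTID_INVALID || id >= PKTID_SLOTS)
        return NULL;
    pkt = m->pkt[id];
    m->pkt[id] = NULL;           /* slot becomes reusable */
    return pkt;
}
```

When the pool is depleted the driver's tx path bails out via err_no_res_pktfree, which corresponds to the PKTID_INVALID return here.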
+
+static void
+dhd_prot_event_process(dhd_pub_t *dhd, void* buf, uint16 len)
+{
+ wlevent_req_msg_t *evnt;
uint32 bufid;
uint16 buflen;
int ifidx = 0;
- uint pkt_count = 1;
void* pkt;
unsigned long flags;
+ dhd_prot_t *prot = dhd->prot;
+ int pkt_wake = 0;
+#ifdef DHD_WAKE_STATUS
+ pkt_wake = bcmpcie_set_get_wake(dhd->bus, 0);
+#endif
/* Event complete header */
- evnt = (wl_event_hdr_t *)buf;
- bufid = ltoh32(evnt->rxbufid);
- buflen = ltoh16(evnt->retbuf_len);
+ evnt = (wlevent_req_msg_t *)buf;
+ bufid = ltoh32(evnt->cmn_hdr.request_id);
+ buflen = ltoh16(evnt->event_data_len);
+
+ ifidx = BCMMSGBUF_API_IFIDX(&evnt->cmn_hdr);
/* Post another rxbuf to the device */
- dhd_prot_return_rxbuf(dhd, 1);
+ if (prot->cur_event_bufs_posted)
+ prot->cur_event_bufs_posted--;
+ dhd_msgbuf_rxbuf_post_event_bufs(dhd);
+
+ memset(buf, 0, len);
+ evnt->marker = PCIE_D2H_RESET_MARK;
/* locks required to protect pktid_map */
- flags = dhd_os_spin_lock(dhd);
+ DHD_GENERAL_LOCK(dhd, flags);
+ pkt = dhd_prot_packet_get(dhd, ltoh32(bufid), BUFF_TYPE_EVENT_RX);
+ DHD_GENERAL_UNLOCK(dhd, flags);
- pkt = dhd_prot_packet_get(dhd, ltoh32(bufid));
-
- dhd_os_spin_unlock(dhd, flags);
+ if (!pkt)
+ return;
/* DMA RX offset updated through shared area */
if (dhd->prot->rx_dataoffset)
@@ -905,85 +1867,85 @@
PKTSETLEN(dhd->osh, pkt, buflen);
- /* remove WL header */
- PKTPULL(dhd->osh, pkt, 4); /* WL Header */
-
- dhd_bus_rx_frame(dhd->bus, pkt, ifidx, pkt_count);
+ dhd_bus_rx_frame(dhd->bus, pkt, ifidx, 1, pkt_wake);
}
static void BCMFASTPATH
-dhd_prot_rxcmplt_process(dhd_pub_t *dhd, void* buf)
+dhd_prot_rxcmplt_process(dhd_pub_t *dhd, void* buf, uint16 msglen)
{
- rxcmplt_hdr_t *rxcmplt_h;
- rxcmplt_tup_t *rx_tup;
- uint32 bufid;
- uint16 buflen, cmpltcnt;
+ host_rxbuf_cmpl_t *rxcmplt_h;
uint16 data_offset; /* offset at which data starts */
void * pkt;
- int ifidx = 0;
- uint pkt_count = 0;
- uint32 i;
- void *pkthead = NULL;
- void *pkttail = NULL;
+ unsigned long flags;
+ static uint8 current_phase = 0;
+ uint ifidx;
+ int pkt_wake = 0;
+#ifdef DHD_WAKE_STATUS
+ pkt_wake = bcmpcie_set_get_wake(dhd->bus, 0);
+#endif
/* RXCMPLT HDR */
- rxcmplt_h = (rxcmplt_hdr_t *)buf;
- cmpltcnt = ltoh16(rxcmplt_h->rxcmpltcnt);
+ rxcmplt_h = (host_rxbuf_cmpl_t *)buf;
/* Post another set of rxbufs to the device */
- dhd_prot_return_rxbuf(dhd, cmpltcnt);
- ifidx = rxcmplt_h->msg.ifidx;
+ dhd_prot_return_rxbuf(dhd, 1);
- rx_tup = (rxcmplt_tup_t *) &(rxcmplt_h->rx_tup[0]);
- for (i = 0; i < cmpltcnt; i++) {
- unsigned long flags;
+ /* offset from which data starts is populated in rxstatus0 */
+ data_offset = ltoh16(rxcmplt_h->data_offset);
- bufid = ltoh32(rx_tup->rxbufid);
- buflen = ltoh16(rx_tup->retbuf_len);
+ DHD_GENERAL_LOCK(dhd, flags);
+ pkt = dhd_prot_packet_get(dhd, ltoh32(rxcmplt_h->cmn_hdr.request_id), BUFF_TYPE_DATA_RX);
+ DHD_GENERAL_UNLOCK(dhd, flags);
- /* offset from which data starts is populated in rxstatus0 */
- data_offset = ltoh16(rx_tup->data_offset);
-
- /* locks required to protect pktid_map */
- flags = dhd_os_spin_lock(dhd);
- pkt = dhd_prot_packet_get(dhd, ltoh32(bufid));
- dhd_os_spin_unlock(dhd, flags);
-
- /* data_offset from buf start */
- if (data_offset) {
- /* data offset given from dongle after split rx */
- PKTPULL(dhd->osh, pkt, data_offset); /* data offset */
- } else {
- /* DMA RX offset updated through shared area */
- if (dhd->prot->rx_dataoffset)
- PKTPULL(dhd->osh, pkt, dhd->prot->rx_dataoffset);
- }
-
- /* Actual length of the packet */
- PKTSETLEN(dhd->osh, pkt, buflen);
-
- /* remove WL header */
- PKTPULL(dhd->osh, pkt, 4); /* WL Header */
-
- pkt_count++;
- rx_tup++;
-
- /* Chain the packets and release in one shot to dhd_linux. */
- /* Interface and destination checks are not required here. */
- PKTSETNEXT(dhd->osh, pkt, NULL);
- if (pkttail == NULL) {
- pkthead = pkttail = pkt;
- } else {
- PKTSETNEXT(dhd->osh, pkttail, pkt);
- pkttail = pkt;
- }
+ if (!pkt) {
+ return;
}
- if (pkthead) {
- /* Release the packets to dhd_linux */
- dhd_bus_rx_frame(dhd->bus, pkthead, ifidx, pkt_count);
+ DHD_INFO(("id 0x%04x, offset %d, len %d, idx %d, phase 0x%02x, pktdata %p, metalen %d\n",
+ ltoh32(rxcmplt_h->cmn_hdr.request_id), data_offset, ltoh16(rxcmplt_h->data_len),
+ rxcmplt_h->cmn_hdr.if_id, rxcmplt_h->cmn_hdr.flags, PKTDATA(dhd->osh, pkt),
+ ltoh16(rxcmplt_h->metadata_len)));
+
+#if DHD_DBG_SHOW_METADATA
+ if (dhd->prot->rx_metadata_offset && rxcmplt_h->metadata_len) {
+ uchar *ptr;
+ ptr = PKTDATA(dhd->osh, pkt) - (dhd->prot->rx_metadata_offset);
+ /* header followed by data */
+ bcm_print_bytes("rxmetadata", ptr, rxcmplt_h->metadata_len);
+ dhd_prot_print_metadata(dhd, ptr, rxcmplt_h->metadata_len);
}
+#endif /* DHD_DBG_SHOW_METADATA */
+
+ if (current_phase != rxcmplt_h->cmn_hdr.flags) {
+ current_phase = rxcmplt_h->cmn_hdr.flags;
+ }
+ if (rxcmplt_h->flags & BCMPCIE_PKT_FLAGS_FRAME_802_11)
+ DHD_INFO(("D11 frame rxed \n"));
+ /* data_offset from buf start */
+ if (data_offset) {
+ /* data offset given from dongle after split rx */
+ PKTPULL(dhd->osh, pkt, data_offset); /* data offset */
+ } else {
+ /* DMA RX offset updated through shared area */
+ if (dhd->prot->rx_dataoffset)
+ PKTPULL(dhd->osh, pkt, dhd->prot->rx_dataoffset);
+ }
+ /* Actual length of the packet */
+ PKTSETLEN(dhd->osh, pkt, ltoh16(rxcmplt_h->data_len));
+
+ ifidx = rxcmplt_h->cmn_hdr.if_id;
+ memset(buf, 0, msglen);
+ rxcmplt_h->marker = PCIE_D2H_RESET_MARK;
+
+#ifdef DHD_RX_CHAINING
+ /* Chain the packets */
+ dhd_rxchain_frame(dhd, pkt, ifidx);
+#else /* ! DHD_RX_CHAINING */
+ /* offset from which data starts is populated in rxstatus0 */
+ dhd_bus_rx_frame(dhd->bus, pkt, ifidx, 1, pkt_wake);
+#endif /* ! DHD_RX_CHAINING */
}
+
/* Stop protocol: sync w/dongle state. */
void dhd_prot_stop(dhd_pub_t *dhd)
{
@@ -993,11 +1955,19 @@
/* Add any protocol-specific data header.
* Caller must reserve prot_hdrlen prepend space.
*/
-void dhd_prot_hdrpush(dhd_pub_t *dhd, int ifidx, void *PKTBUF)
+void BCMFASTPATH
+dhd_prot_hdrpush(dhd_pub_t *dhd, int ifidx, void *PKTBUF)
{
return;
}
+uint
+dhd_prot_hdrlen(dhd_pub_t *dhd, void *PKTBUF)
+{
+ return 0;
+}
+
+
#define PKTBUF pktbuf
int BCMFASTPATH
@@ -1005,103 +1975,212 @@
{
unsigned long flags;
dhd_prot_t *prot = dhd->prot;
- circularbuf_t *htod_msgbuf = (circularbuf_t *)prot->htodbuf;
- txdescr_msghdr_t *txdesc = NULL;
- tx_lenptr_tup_t *tx_tup;
- dmaaddr_t physaddr;
+ host_txbuf_post_t *txdesc = NULL;
+ dmaaddr_t physaddr, meta_physaddr;
uint8 *pktdata;
- uint8 *etherhdr;
uint16 pktlen;
- uint16 hdrlen;
uint32 pktid;
+ uint8 prio;
+ uint16 flowid = 0;
+ uint16 alloced = 0;
+ uint16 headroom;
+ msgbuf_ring_t *msg_ring;
+ uint8 dhcp_pkt;
+
+ if (!dhd_bus_is_txmode_push(dhd->bus)) {
+ flow_ring_table_t *flow_ring_table;
+ flow_ring_node_t *flow_ring_node;
+
+ flowid = (uint16)DHD_PKTTAG_FLOWID((dhd_pkttag_fr_t*)PKTTAG(PKTBUF));
+
+ flow_ring_table = (flow_ring_table_t *)dhd->flow_ring_table;
+ flow_ring_node = (flow_ring_node_t *)&flow_ring_table[flowid];
+
+ msg_ring = (msgbuf_ring_t *)flow_ring_node->prot_info;
+ } else {
+ msg_ring = prot->h2dring_txp_subn;
+ }
+
+ DHD_GENERAL_LOCK(dhd, flags);
+
+ /* Create a unique 32-bit packet id */
+ pktid = NATIVE_TO_PKTID_RSV(dhd->prot->pktid_map_handle, PKTBUF);
+ if (pktid == DHD_PKTID_INVALID) {
+ DHD_ERROR(("Pktid pool depleted.\n"));
+ /*
+ * If we returned an error here, the caller would queue the
+ * packet again, so just free the skb allocated in the DMA zone.
+ * Since the original SKB has not been freed yet, the caller
+ * will requeue the same packet.
+ */
+ goto err_no_res_pktfree;
+ }
+
+ /* Reserve space in the circular buffer */
+ txdesc = (host_txbuf_post_t *)dhd_alloc_ring_space(dhd,
+ msg_ring, DHD_FLOWRING_DEFAULT_NITEMS_POSTED_H2D, &alloced);
+ if (txdesc == NULL) {
+ DHD_INFO(("%s:%d: HTOD Msgbuf Not available TxCount = %d\n",
+ __FUNCTION__, __LINE__, prot->active_tx_count));
+ /* Free up the PKTID */
+ PKTID_TO_NATIVE(dhd->prot->pktid_map_handle, pktid, physaddr,
+ pktlen, BUFF_TYPE_NO_CHECK);
+ goto err_no_res_pktfree;
+ }
+ /* Test whether this is a DHCP packet */
+ dhcp_pkt = pkt_is_dhcp(dhd->osh, PKTBUF);
+ txdesc->flag2 = (txdesc->flag2 & ~(BCMPCIE_PKT_FLAGS2_FORCELOWRATE_MASK <<
+ BCMPCIE_PKT_FLAGS2_FORCELOWRATE_SHIFT)) | ((dhcp_pkt &
+ BCMPCIE_PKT_FLAGS2_FORCELOWRATE_MASK) << BCMPCIE_PKT_FLAGS2_FORCELOWRATE_SHIFT);
/* Extract the data pointer and length information */
pktdata = PKTDATA(dhd->osh, PKTBUF);
pktlen = (uint16)PKTLEN(dhd->osh, PKTBUF);
+ /* Ethernet header: Copy before we cache flush packet using DMA_MAP */
+ bcopy(pktdata, txdesc->txhdr, ETHER_HDR_LEN);
+
/* Extract the ethernet header and adjust the data pointer and length */
- etherhdr = pktdata;
- pktdata += ETHER_HDR_LEN;
- pktlen -= ETHER_HDR_LEN;
-
-
- flags = dhd_os_spin_lock(dhd);
+ pktdata = PKTPULL(dhd->osh, PKTBUF, ETHER_HDR_LEN);
+ pktlen -= ETHER_HDR_LEN;
/* Map the data pointer to a DMA-able address */
- physaddr = DMA_MAP(dhd->osh, pktdata, pktlen, DMA_TX, 0, 0);
- if (physaddr == 0) {
+ physaddr = DMA_MAP(dhd->osh, PKTDATA(dhd->osh, PKTBUF), pktlen, DMA_TX, PKTBUF, 0);
+ if ((PHYSADDRHI(physaddr) == 0) && (PHYSADDRLO(physaddr) == 0)) {
DHD_ERROR(("Something really bad, unless 0 is a valid phyaddr\n"));
ASSERT(0);
}
- /* Create a unique 32-bit packet id */
- pktid = NATIVE_TO_PKTID(dhd->prot->pktid_map_handle, PKTBUF, physaddr, pktlen, DMA_TX);
+ /* No need to lock. Save the rest of the packet's metadata */
+ NATIVE_TO_PKTID_SAVE(dhd->prot->pktid_map_handle, PKTBUF, pktid,
+ physaddr, pktlen, DMA_TX, BUFF_TYPE_DATA_TX);
- /* Reserve space in the circular buffer */
- hdrlen = sizeof(txdescr_msghdr_t) + (1 * sizeof(tx_lenptr_tup_t));
-
- txdesc = (txdescr_msghdr_t *)dhd_alloc_circularbuf_space(dhd,
- htod_msgbuf, hdrlen, HOST_TO_DNGL_DATA);
- if (txdesc == NULL) {
- dhd_prot_packet_free(dhd, pktid);
- dhd_os_spin_unlock(dhd, flags);
-
- DHD_INFO(("%s:%d: HTOD Msgbuf Not available TxCount = %d\n",
- __FUNCTION__, __LINE__, prot->active_tx_count));
- return BCME_NORESOURCE;
- }
+#ifdef TXP_FLUSH_NITEMS
+ if (msg_ring->pend_items_count == 0)
+ msg_ring->start_addr = (void *)txdesc;
+ msg_ring->pend_items_count++;
+#endif
/* Form the Tx descriptor message buffer */
/* Common message hdr */
- txdesc->txcmn.msg.msglen = htol16(hdrlen);
- txdesc->txcmn.msg.msgtype = MSG_TYPE_TX_POST;
- txdesc->txcmn.msg.u.seq.seq_no = htol16(++prot->data_seq_no);
+ txdesc->cmn_hdr.msg_type = MSG_TYPE_TX_POST;
+ txdesc->cmn_hdr.request_id = htol32(pktid);
+ txdesc->cmn_hdr.if_id = ifidx;
+ txdesc->flags = BCMPCIE_PKT_FLAGS_FRAME_802_3;
+ prio = (uint8)PKTPRIO(PKTBUF);
- /* Ethernet header */
- txdesc->txcmn.hdrlen = htol16(ETHER_HDR_LEN);
- bcopy(etherhdr, txdesc->txhdr, ETHER_HDR_LEN);
- /* Packet ID */
- txdesc->txcmn.pktid = htol32(pktid);
+ txdesc->flags |= (prio & 0x7) << BCMPCIE_PKT_FLAGS_PRIO_SHIFT;
+ txdesc->seg_cnt = 1;
- /* Descriptor count - Linux needs only one */
- txdesc->txcmn.descrcnt = 0x1;
+ txdesc->data_len = htol16(pktlen);
+ txdesc->data_buf_addr.high_addr = htol32(PHYSADDRHI(physaddr));
+ txdesc->data_buf_addr.low_addr = htol32(PHYSADDRLO(physaddr));
- tx_tup = (tx_lenptr_tup_t *) &(txdesc->tx_tup);
+ /* Move data pointer to keep ether header in local PKTBUF for later reference */
+ PKTPUSH(dhd->osh, PKTBUF, ETHER_HDR_LEN);
- /* Descriptor - 0 */
- tx_tup->pktlen = htol16(pktlen);
- tx_tup->ret_buf.high_addr = htol32(PHYSADDRHI(physaddr));
- tx_tup->ret_buf.low_addr = htol32(PHYSADDRLO(physaddr));
- /* Descriptor 1 - should be filled here - if required */
+ /* Handle Tx metadata */
+ headroom = (uint16)PKTHEADROOM(dhd->osh, PKTBUF);
+ if (prot->tx_metadata_offset && (headroom < prot->tx_metadata_offset))
+ DHD_ERROR(("No headroom for Metadata tx %d %d\n",
+ prot->tx_metadata_offset, headroom));
- /* Reserved for future use */
- txdesc->txcmn.priority = (uint8)PKTPRIO(PKTBUF);
- txdesc->txcmn.flowid = 0;
- txdesc->txcmn.msg.ifidx = ifidx;
+ if (prot->tx_metadata_offset && (headroom >= prot->tx_metadata_offset)) {
+ DHD_TRACE(("Metadata in tx %d\n", prot->tx_metadata_offset));
- /* Since, we are filling the data directly into the bufptr obtained
- * from the circularbuf, we can directly call the write_complete
- */
- circularbuf_write_complete(htod_msgbuf, hdrlen);
+ /* Adjust the data pointer to account for meta data in DMA_MAP */
+ PKTPUSH(dhd->osh, PKTBUF, prot->tx_metadata_offset);
+ meta_physaddr = DMA_MAP(dhd->osh, PKTDATA(dhd->osh, PKTBUF),
+ prot->tx_metadata_offset, DMA_RX, PKTBUF, 0);
+ if (PHYSADDRISZERO(meta_physaddr)) {
+ DHD_ERROR(("DMA_MAP returned a zero physical address for tx metadata\n"));
+ ASSERT(0);
+ }
+
+ /* Adjust the data pointer back to original value */
+ PKTPULL(dhd->osh, PKTBUF, prot->tx_metadata_offset);
+
+ txdesc->metadata_buf_len = prot->tx_metadata_offset;
+ txdesc->metadata_buf_addr.high_addr = htol32(PHYSADDRHI(meta_physaddr));
+ txdesc->metadata_buf_addr.low_addr = htol32(PHYSADDRLO(meta_physaddr));
+ }
+ else {
+ txdesc->metadata_buf_len = htol16(0);
+ txdesc->metadata_buf_addr.high_addr = 0;
+ txdesc->metadata_buf_addr.low_addr = 0;
+ }
+
+
+ DHD_TRACE(("txpost: data_len %d, pktid 0x%04x\n", txdesc->data_len,
+ txdesc->cmn_hdr.request_id));
+
+ /* Update the write pointer in TCM & ring bell */
+#ifdef TXP_FLUSH_NITEMS
+ /* Flush if we have either hit the txp_threshold or if this msg */
+ /* occupies the last slot in the flow_ring, before wrap-around. */
+ if ((msg_ring->pend_items_count == prot->txp_threshold) ||
+ ((uint8 *) txdesc == (uint8 *) HOST_RING_END(msg_ring))) {
+ dhd_prot_txdata_write_flush(dhd, flowid, TRUE);
+ }
+#else
+ prot_ring_write_complete(dhd, msg_ring, txdesc, DHD_FLOWRING_DEFAULT_NITEMS_POSTED_H2D);
+#endif
prot->active_tx_count++;
- /* If we have accounted for most of the lfrag packets on the dongle, */
- /* it's time to stop the packet flow - Assert flow control. */
- if ((prot->max_tx_count - prot->active_tx_count) < DHD_STOP_QUEUE_THRESHOLD) {
- dhd_bus_stop_queue(dhd->bus);
- prot->txflow_en = TRUE;
- }
-
- dhd_os_spin_unlock(dhd, flags);
+ DHD_GENERAL_UNLOCK(dhd, flags);
return BCME_OK;
+
+err_no_res_pktfree:
+
+ DHD_GENERAL_UNLOCK(dhd, flags);
+ return BCME_NORESOURCE;
+
+}
+
+/* called with a lock */
+void BCMFASTPATH
+dhd_prot_txdata_write_flush(dhd_pub_t *dhd, uint16 flowid, bool in_lock)
+{
+#ifdef TXP_FLUSH_NITEMS
+ unsigned long flags = 0;
+ flow_ring_table_t *flow_ring_table;
+ flow_ring_node_t *flow_ring_node;
+ msgbuf_ring_t *msg_ring;
+
+
+ if (!in_lock) {
+ DHD_GENERAL_LOCK(dhd, flags);
+ }
+
+ flow_ring_table = (flow_ring_table_t *)dhd->flow_ring_table;
+ flow_ring_node = (flow_ring_node_t *)&flow_ring_table[flowid];
+ msg_ring = (msgbuf_ring_t *)flow_ring_node->prot_info;
+
+ /* Update the write pointer in TCM & ring bell */
+ if (msg_ring->pend_items_count) {
+ prot_ring_write_complete(dhd, msg_ring, msg_ring->start_addr,
+ msg_ring->pend_items_count);
+ msg_ring->pend_items_count = 0;
+ msg_ring->start_addr = NULL;
+ }
+
+ if (!in_lock) {
+ DHD_GENERAL_UNLOCK(dhd, flags);
+ }
+#endif /* TXP_FLUSH_NITEMS */
}
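dhd_prot_txdata_write_flush() publishes a batch of staged descriptors with a single write-index update and doorbell instead of ringing the device once per packet. A toy model of that batching, with hypothetical names:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical ring view: items are staged one by one, but the write
 * index (and the doorbell to the device) is published only when the
 * batch threshold is reached, mirroring the TXP_FLUSH_NITEMS path. */
struct batch_ring {
    uint16_t w_offset;   /* write index visible to the device   */
    uint16_t pend_items; /* items staged but not yet published  */
    uint16_t threshold;  /* flush when pend_items reaches this  */
    unsigned doorbells;  /* how many times the device was rung  */
};

static void ring_doorbell(struct batch_ring *r)
{
    r->w_offset += r->pend_items;   /* publish all staged items at once */
    r->pend_items = 0;
    r->doorbells++;
}

/* Stage one item; flush automatically at the threshold. */
static void ring_stage_item(struct batch_ring *r)
{
    r->pend_items++;
    if (r->pend_items >= r->threshold)
        ring_doorbell(r);
}

/* Explicit flush, as dhd_prot_txdata_write_flush() performs. */
static void ring_flush(struct batch_ring *r)
{
    if (r->pend_items)
        ring_doorbell(r);
}
```

The explicit flush covers stragglers below the threshold, which is why the driver also flushes when a descriptor lands in the ring's last slot.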
#undef PKTBUF /* Only defined in the above routine */
-int dhd_prot_hdrpull(dhd_pub_t *dhd, int *ifidx, void *pkt, uchar *buf, uint *len)
+int BCMFASTPATH
+dhd_prot_hdrpull(dhd_pub_t *dhd, int *ifidx, void *pkt, uchar *buf, uint *len)
{
return 0;
}
@@ -1111,13 +2190,20 @@
{
dhd_prot_t *prot = dhd->prot;
- prot->rxbufpost -= rxcnt;
+ if (prot->rxbufpost >= rxcnt) {
+ prot->rxbufpost -= rxcnt;
+ } else {
+ /* ASSERT(0); */
+ prot->rxbufpost = 0;
+ }
+
if (prot->rxbufpost <= (prot->max_rxbufpost - RXBUFPOST_THRESHOLD))
dhd_msgbuf_rxbuf_post(dhd);
return;
}
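dhd_prot_return_rxbuf() above clamps the posted-buffer count against underflow and reposts once the count falls below max_rxbufpost - RXBUFPOST_THRESHOLD. The same watermark logic in isolation (hypothetical names; refill modeled as topping the count back up):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical constants standing in for max_rxbufpost and
 * RXBUFPOST_THRESHOLD. */
#define MAX_RXBUFPOST   16u
#define RXBUF_THRESHOLD  4u

struct rxpost {
    uint16_t posted;    /* buffers currently owned by the device */
    unsigned refills;   /* times the post routine was triggered  */
};

static void rxbuf_return(struct rxpost *rp, uint16_t rxcnt)
{
    if (rp->posted >= rxcnt)
        rp->posted -= rxcnt;
    else
        rp->posted = 0;             /* clamp instead of underflowing */

    /* Repost only after dropping below the refill watermark. */
    if (rp->posted <= MAX_RXBUFPOST - RXBUF_THRESHOLD) {
        rp->posted = MAX_RXBUFPOST; /* stand-in for posting new bufs */
        rp->refills++;
    }
}
```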
+
/* Use protocol to issue ioctl to dongle */
int dhd_prot_ioctl(dhd_pub_t *dhd, int ifidx, wl_ioctl_t * ioc, void * buf, int len)
{
@@ -1130,6 +2216,11 @@
goto done;
}
+ if (dhd->busstate == DHD_BUS_SUSPEND) {
+ DHD_ERROR(("%s : bus is suspended\n", __FUNCTION__));
+ goto done;
+ }
+
DHD_TRACE(("%s: Enter\n", __FUNCTION__));
ASSERT(len <= WLC_IOCTL_MAXLEN);
@@ -1150,6 +2241,8 @@
prot->pending = TRUE;
prot->lastcmd = ioc->cmd;
action = ioc->set;
+
+
if (action & WL_IOCTL_ACTION_SET) {
ret = dhd_msgbuf_set_ioctl(dhd, ifidx, ioc->cmd, buf, len, action);
} else {
@@ -1161,7 +2254,9 @@
if (ret >= 0)
ret = 0;
else {
- DHD_INFO(("%s: status ret value is %d \n", __FUNCTION__, ret));
+ if (ret != BCME_NOTASSOCIATED) {
+ DHD_ERROR(("%s: status ret value is %d \n", __FUNCTION__, ret));
+ }
dhd->dongle_error = ret;
}
@@ -1175,6 +2270,7 @@
dhd->wme_dp = (uint8) ltoh32(val);
}
+
prot->pending = FALSE;
done:
@@ -1187,37 +2283,25 @@
{
unsigned long flags;
dhd_prot_t *prot = dhd->prot;
- circularbuf_t *htod_msgbuf;
+ uint16 alloced = 0;
ioct_reqst_hdr_t *ioct_rqst;
uint16 hdrlen = sizeof(ioct_reqst_hdr_t);
uint16 msglen = len + hdrlen;
- if (dhd->prot->htodsplit)
- htod_msgbuf = (circularbuf_t *) prot->htod_ctrlbuf;
- else
- htod_msgbuf = (circularbuf_t *) prot->htodbuf;
if (msglen > MSGBUF_MAX_MSG_SIZE)
msglen = MSGBUF_MAX_MSG_SIZE;
- msglen = align(msglen, 4);
+ msglen = align(msglen, DMA_ALIGN_LEN);
- /* locks required to protect circular buffer accesses */
- flags = dhd_os_spin_lock(dhd);
-
- if (dhd->prot->htodsplit) {
- ioct_rqst = (ioct_reqst_hdr_t *)dhd_alloc_circularbuf_space(dhd,
- htod_msgbuf, msglen, HOST_TO_DNGL_CTRL);
- }
- else {
- ioct_rqst = (ioct_reqst_hdr_t *)dhd_alloc_circularbuf_space(dhd,
- htod_msgbuf, msglen, HOST_TO_DNGL_DATA);
- }
+ DHD_GENERAL_LOCK(dhd, flags);
+ ioct_rqst = (ioct_reqst_hdr_t *)dhd_alloc_ring_space(dhd,
+ prot->h2dring_ctrl_subn, DHD_FLOWRING_DEFAULT_NITEMS_POSTED_H2D, &alloced);
if (ioct_rqst == NULL) {
- dhd_os_spin_unlock(dhd, flags);
+ DHD_GENERAL_UNLOCK(dhd, flags);
return 0;
}
@@ -1233,20 +2317,163 @@
/* Common msg buf hdr */
- ioct_rqst->msg.msglen = htol16(msglen);
- ioct_rqst->msg.msgtype = MSG_TYPE_LOOPBACK;
- ioct_rqst->msg.ifidx = 0;
- ioct_rqst->msg.u.seq.seq_no = htol16(++prot->data_seq_no);
+ ioct_rqst->msg.msg_type = MSG_TYPE_LOOPBACK;
+ ioct_rqst->msg.if_id = 0;
bcm_print_bytes("LPBK REQ: ", (uint8 *)ioct_rqst, msglen);
- circularbuf_write_complete(htod_msgbuf, msglen);
-
- dhd_os_spin_unlock(dhd, flags);
+ /* Update the write pointer in TCM & ring bell */
+ prot_ring_write_complete(dhd, prot->h2dring_ctrl_subn, ioct_rqst,
+ DHD_FLOWRING_DEFAULT_NITEMS_POSTED_H2D);
+ DHD_GENERAL_UNLOCK(dhd, flags);
return 0;
}
+void dmaxfer_free_dmaaddr(dhd_pub_t *dhd, dhd_dmaxfer_t *dma)
+{
+ if (dma == NULL)
+ return;
+
+ if (dma->srcmem.va) {
+ DMA_FREE_CONSISTENT(dhd->osh, dma->srcmem.va,
+ dma->len, dma->srcmem.pa, dma->srcmem.dmah);
+ dma->srcmem.va = NULL;
+ }
+ if (dma->destmem.va) {
+ DMA_FREE_CONSISTENT(dhd->osh, dma->destmem.va,
+ dma->len + 8, dma->destmem.pa, dma->destmem.dmah);
+ dma->destmem.va = NULL;
+ }
+}
+
+int dmaxfer_prepare_dmaaddr(dhd_pub_t *dhd, uint len,
+ uint srcdelay, uint destdelay, dhd_dmaxfer_t *dma)
+{
+ uint i;
+
+ if (!dma)
+ return BCME_ERROR;
+
+ /* First free up existing buffers */
+ dmaxfer_free_dmaaddr(dhd, dma);
+
+ dma->srcmem.va = DMA_ALLOC_CONSISTENT(dhd->osh, len, DMA_ALIGN_LEN,
+ &i, &dma->srcmem.pa, &dma->srcmem.dmah);
+ if (dma->srcmem.va == NULL) {
+ return BCME_NOMEM;
+ }
+
+ /* Populate source with a pattern */
+ for (i = 0; i < len; i++) {
+ ((uint8*)dma->srcmem.va)[i] = i % 256;
+ }
+ OSL_CACHE_FLUSH(dma->srcmem.va, len);
+
+ dma->destmem.va = DMA_ALLOC_CONSISTENT(dhd->osh, len + 8, DMA_ALIGN_LEN,
+ &i, &dma->destmem.pa, &dma->destmem.dmah);
+ if (dma->destmem.va == NULL) {
+ DMA_FREE_CONSISTENT(dhd->osh, dma->srcmem.va,
+ dma->len, dma->srcmem.pa, dma->srcmem.dmah);
+ dma->srcmem.va = NULL;
+ return BCME_NOMEM;
+ }
+
+
+ /* Clear the destination buffer */
+ bzero(dma->destmem.va, len + 8);
+ OSL_CACHE_FLUSH(dma->destmem.va, len + 8);
+
+ dma->len = len;
+ dma->srcdelay = srcdelay;
+ dma->destdelay = destdelay;
+
+ return BCME_OK;
+}
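dmaxfer_prepare_dmaaddr() seeds the source buffer with an i % 256 pattern so that dhdmsgbuf_dmaxfer_compare() can later memcmp source against destination. The host-side fill and verify steps, sketched without the DMA and cache plumbing (names hypothetical):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Fill the source buffer with the same repeating i % 256 pattern the
 * driver uses before kicking off the loopback transfer. */
static void dmaxfer_fill_pattern(uint8_t *src, size_t len)
{
    for (size_t i = 0; i < len; i++)
        src[i] = (uint8_t)(i % 256);
}

/* Returns 0 when the transfer round-tripped intact, -1 otherwise;
 * the driver dumps both buffers with bcm_print_bytes() on mismatch. */
static int dmaxfer_verify(const uint8_t *src, const uint8_t *dst, size_t len)
{
    return memcmp(src, dst, len) ? -1 : 0;
}
```

In the driver, the copy between the two buffers is performed by the dongle's DMA engines; a plain memcpy stands in for it here.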
+
+static void
+dhdmsgbuf_dmaxfer_compare(dhd_pub_t *dhd, void * buf, uint16 msglen)
+{
+ dhd_prot_t *prot = dhd->prot;
+
+ OSL_CACHE_INV(prot->dmaxfer.destmem.va, prot->dmaxfer.len);
+ if (prot->dmaxfer.srcmem.va && prot->dmaxfer.destmem.va) {
+ if (memcmp(prot->dmaxfer.srcmem.va,
+ prot->dmaxfer.destmem.va,
+ prot->dmaxfer.len)) {
+ bcm_print_bytes("XFER SRC: ",
+ prot->dmaxfer.srcmem.va, prot->dmaxfer.len);
+ bcm_print_bytes("XFER DEST: ",
+ prot->dmaxfer.destmem.va, prot->dmaxfer.len);
+ }
+ else {
+ DHD_INFO(("DMA successful\n"));
+ }
+ }
+ dmaxfer_free_dmaaddr(dhd, &prot->dmaxfer);
+ dhd->prot->dmaxfer_in_progress = FALSE;
+}
+
+int
+dhdmsgbuf_dmaxfer_req(dhd_pub_t *dhd, uint len, uint srcdelay, uint destdelay)
+{
+ unsigned long flags;
+ int ret = BCME_OK;
+ dhd_prot_t *prot = dhd->prot;
+ pcie_dma_xfer_params_t *dmap;
+ uint32 xferlen = len > DMA_XFER_LEN_LIMIT ? DMA_XFER_LEN_LIMIT : len;
+ uint16 msglen = sizeof(pcie_dma_xfer_params_t);
+ uint16 alloced = 0;
+
+ if (prot->dmaxfer_in_progress) {
+ DHD_ERROR(("DMA is in progress...\n"));
+ return ret;
+ }
+ prot->dmaxfer_in_progress = TRUE;
+ if ((ret = dmaxfer_prepare_dmaaddr(dhd, xferlen, srcdelay, destdelay,
+ &prot->dmaxfer)) != BCME_OK) {
+ prot->dmaxfer_in_progress = FALSE;
+ return ret;
+ }
+
+
+ if (msglen > MSGBUF_MAX_MSG_SIZE)
+ msglen = MSGBUF_MAX_MSG_SIZE;
+
+ msglen = align(msglen, DMA_ALIGN_LEN);
+
+ DHD_GENERAL_LOCK(dhd, flags);
+ dmap = (pcie_dma_xfer_params_t *)dhd_alloc_ring_space(dhd,
+ prot->h2dring_ctrl_subn, DHD_FLOWRING_DEFAULT_NITEMS_POSTED_H2D, &alloced);
+
+ if (dmap == NULL) {
+ dmaxfer_free_dmaaddr(dhd, &prot->dmaxfer);
+ prot->dmaxfer_in_progress = FALSE;
+ DHD_GENERAL_UNLOCK(dhd, flags);
+ return BCME_NOMEM;
+ }
+
+ /* Common msg buf hdr */
+ dmap->cmn_hdr.msg_type = MSG_TYPE_LPBK_DMAXFER;
+ dmap->cmn_hdr.request_id = 0x1234;
+
+ dmap->host_input_buf_addr.high = htol32(PHYSADDRHI(prot->dmaxfer.srcmem.pa));
+ dmap->host_input_buf_addr.low = htol32(PHYSADDRLO(prot->dmaxfer.srcmem.pa));
+ dmap->host_ouput_buf_addr.high = htol32(PHYSADDRHI(prot->dmaxfer.destmem.pa));
+ dmap->host_ouput_buf_addr.low = htol32(PHYSADDRLO(prot->dmaxfer.destmem.pa));
+ dmap->xfer_len = htol32(prot->dmaxfer.len);
+ dmap->srcdelay = htol32(prot->dmaxfer.srcdelay);
+ dmap->destdelay = htol32(prot->dmaxfer.destdelay);
+
+ /* Update the write pointer in TCM & ring bell */
+ prot_ring_write_complete(dhd, prot->h2dring_ctrl_subn, dmap,
+ DHD_FLOWRING_DEFAULT_NITEMS_POSTED_H2D);
+ DHD_GENERAL_UNLOCK(dhd, flags);
+
+ DHD_ERROR(("DMA Started...\n"));
+
+ return BCME_OK;
+}
static int
dhdmsgbuf_query_ioctl(dhd_pub_t *dhd, int ifidx, uint cmd, void *buf, uint len, uint8 action)
@@ -1272,20 +2499,13 @@
}
}
- /* Fill up msgbuf for ioctl req */
- if (len < MAX_INLINE_IOCTL_LEN) {
- /* Inline ioct resuest */
- ret = dhd_fillup_ioct_reqst(dhd, (uint16)len, cmd, buf, ifidx);
- } else {
- /* Non inline ioct resuest */
- ret = dhd_fillup_ioct_reqst_ptrbased(dhd, (uint16)len, cmd, buf, ifidx);
- }
+ ret = dhd_fillup_ioct_reqst_ptrbased(dhd, (uint16)len, cmd, buf, ifidx);
 DHD_INFO(("ACTION %d ifidx %d cmd %d len %d \n",
action, ifidx, cmd, len));
/* wait for interrupt and get first fragment */
- ret = dhdmsgbuf_cmplt(dhd, prot->reqid, len, buf, prot->retbuf);
+ ret = dhdmsgbuf_cmplt(dhd, prot->reqid, len, buf, prot->retbuf.va);
done:
return ret;
@@ -1294,31 +2514,55 @@
dhdmsgbuf_cmplt(dhd_pub_t *dhd, uint32 id, uint32 len, void* buf, void* retbuf)
{
dhd_prot_t *prot = dhd->prot;
- ioct_resp_hdr_t ioct_resp;
- uint8* data;
+ ioctl_comp_resp_msg_t ioct_resp;
+ void* pkt;
int retlen;
int msgbuf_len = 0;
+ unsigned long flags;
DHD_TRACE(("%s: Enter\n", __FUNCTION__));
+ if (prot->cur_ioctlresp_bufs_posted)
+ prot->cur_ioctlresp_bufs_posted--;
+
+ dhd_msgbuf_rxbuf_post_ioctlresp_bufs(dhd);
+
retlen = dhd_bus_rxctl(dhd->bus, (uchar*)&ioct_resp, msgbuf_len);
-
- if (retlen <= 0)
- return -1;
-
- /* get ret buf */
- if (buf != NULL) {
- if (retlen <= 4) {
- bcopy((void*)&ioct_resp.inline_data, buf, retlen);
- DHD_INFO(("%s: data is %d, ret_len is %d\n",
- __FUNCTION__, ioct_resp.inline_data, retlen));
- }
- else {
- data = (uint8*)retbuf;
- bcopy((void*)&data[prot->rx_dataoffset], buf, retlen);
- }
+ if (retlen <= 0) {
+ DHD_ERROR(("IOCTL request failed with error code %d\n", retlen));
+ return retlen;
}
- return ioct_resp.status;
+ DHD_INFO(("ioctl resp retlen %d status %d, resp_len %d, pktid %d\n",
+ retlen, ioct_resp.compl_hdr.status, ioct_resp.resp_len,
+ ioct_resp.cmn_hdr.request_id));
+ if (ioct_resp.resp_len != 0) {
+ DHD_GENERAL_LOCK(dhd, flags);
+ pkt = dhd_prot_packet_get(dhd, ioct_resp.cmn_hdr.request_id, BUFF_TYPE_IOCTL_RX);
+ DHD_GENERAL_UNLOCK(dhd, flags);
+
+ DHD_INFO(("ioctl ret buf %p retlen %d status %x \n", pkt, retlen,
+ ioct_resp.compl_hdr.status));
+ /* get ret buf */
+ if ((buf) && (pkt)) {
+ /* bcopy(PKTDATA(dhd->osh, pkt), buf, ioct_resp.resp_len); */
+ /* use the caller's len: resp_len may have been adjusted to exceed 8 bytes */
+ bcopy(PKTDATA(dhd->osh, pkt), buf, len);
+ }
+ if (pkt) {
+#ifdef DHD_USE_STATIC_IOCTLBUF
+ PKTFREE_STATIC(dhd->osh, pkt, FALSE);
+#else
+ PKTFREE(dhd->osh, pkt, FALSE);
+#endif /* DHD_USE_STATIC_IOCTLBUF */
+
+ }
+ } else {
+ DHD_GENERAL_LOCK(dhd, flags);
+ dhd_prot_packet_free(dhd, ioct_resp.cmn_hdr.request_id, BUFF_TYPE_IOCTL_RX);
+ DHD_GENERAL_UNLOCK(dhd, flags);
+ }
+
+ return (int)(ioct_resp.compl_hdr.status);
}
static int
dhd_msgbuf_set_ioctl(dhd_pub_t *dhd, int ifidx, uint cmd, void *buf, uint len, uint8 action)
@@ -1343,18 +2587,12 @@
}
/* Fill up msgbuf for ioctl req */
- if (len < MAX_INLINE_IOCTL_LEN) {
- /* Inline ioct resuest */
- ret = dhd_fillup_ioct_reqst(dhd, (uint16)len, cmd, buf, ifidx);
- } else {
- /* Non inline ioct resuest */
- ret = dhd_fillup_ioct_reqst_ptrbased(dhd, (uint16)len, cmd, buf, ifidx);
- }
+ ret = dhd_fillup_ioct_reqst_ptrbased(dhd, (uint16)len, cmd, buf, ifidx);
 DHD_INFO(("ACTION %d ifidx %d cmd %d len %d \n",
action, ifidx, cmd, len));
- ret = dhdmsgbuf_cmplt(dhd, prot->reqid, len, buf, prot->retbuf);
+ ret = dhdmsgbuf_cmplt(dhd, prot->reqid, len, buf, prot->retbuf.va);
return ret;
}
@@ -1395,34 +2633,22 @@
{
unsigned long flags;
hostevent_hdr_t *hevent = NULL;
- uint16 msglen = sizeof(hostevent_hdr_t);
+ uint16 alloced = 0;
dhd_prot_t *prot = dhd->prot;
- circularbuf_t *htod_msgbuf;
- /* locks required to protect circular buffer accesses */
- flags = dhd_os_spin_lock(dhd);
- if (dhd->prot->htodsplit) {
- htod_msgbuf = (circularbuf_t *)prot->htod_ctrlbuf;
- hevent = (hostevent_hdr_t *)dhd_alloc_circularbuf_space(dhd,
- htod_msgbuf, msglen, HOST_TO_DNGL_CTRL);
- }
- else {
- htod_msgbuf = (circularbuf_t *)prot->htodbuf;
- hevent = (hostevent_hdr_t *)dhd_alloc_circularbuf_space(dhd,
- htod_msgbuf, msglen, HOST_TO_DNGL_DATA);
- }
+ DHD_GENERAL_LOCK(dhd, flags);
+ hevent = (hostevent_hdr_t *)dhd_alloc_ring_space(dhd,
+ prot->h2dring_ctrl_subn, DHD_FLOWRING_DEFAULT_NITEMS_POSTED_H2D, &alloced);
if (hevent == NULL) {
- dhd_os_spin_unlock(dhd, flags);
+ DHD_GENERAL_UNLOCK(dhd, flags);
return -1;
}
/* CMN msg header */
- hevent->msg.msglen = htol16(msglen);
- hevent->msg.msgtype = MSG_TYPE_HOST_EVNT;
- hevent->msg.ifidx = 0;
- hevent->msg.u.seq.seq_no = htol16(++prot->data_seq_no);
+ hevent->msg.msg_type = MSG_TYPE_HOST_EVNT;
+ hevent->msg.if_id = 0;
/* Event payload */
hevent->evnt_pyld = htol32(HOST_EVENT_CONS_CMD);
@@ -1430,110 +2656,46 @@
/* Since, we are filling the data directly into the bufptr obtained
* from the msgbuf, we can directly call the write_complete
*/
- circularbuf_write_complete(htod_msgbuf, msglen);
- dhd_os_spin_unlock(dhd, flags);
+ prot_ring_write_complete(dhd, prot->h2dring_ctrl_subn, hevent,
+ DHD_FLOWRING_DEFAULT_NITEMS_POSTED_H2D);
+ DHD_GENERAL_UNLOCK(dhd, flags);
return 0;
}
-void * BCMFASTPATH
-dhd_alloc_circularbuf_space(dhd_pub_t *dhd, circularbuf_t *handle, uint16 msglen, uint path)
+
+static void * BCMFASTPATH
+dhd_alloc_ring_space(dhd_pub_t *dhd, msgbuf_ring_t *ring, uint16 nitems, uint16 * alloced)
{
void * ret_buf;
+ uint16 r_index = 0;
- ret_buf = circularbuf_reserve_for_write(handle, msglen);
+ /* Alloc space for nitems in the ring */
+ ret_buf = prot_get_ring_space(ring, nitems, alloced);
+
if (ret_buf == NULL) {
- /* Try again after updating the read ptr from dongle */
- if (path == HOST_TO_DNGL_DATA)
- dhd_bus_cmn_readshared(dhd->bus, &(CIRCULARBUF_READ_PTR(handle)),
- HOST_TO_DNGL_RPTR);
- else if (path == HOST_TO_DNGL_CTRL)
- dhd_bus_cmn_readshared(dhd->bus, &(CIRCULARBUF_READ_PTR(handle)),
- HTOD_CTRL_RPTR);
- else
- DHD_ERROR(("%s:%d: Unknown path value \n", __FUNCTION__, __LINE__));
- ret_buf = circularbuf_reserve_for_write(handle, msglen);
+ /* if alloc failed , invalidate cached read ptr */
+ if (DMA_INDX_ENAB(dhd->dma_d2h_ring_upd_support)) {
+ r_index = dhd_get_dmaed_index(dhd, H2D_DMA_READINDX, ring->idx);
+ ring->ringstate->r_offset = r_index;
+ } else
+ dhd_bus_cmn_readshared(dhd->bus, &(RING_READ_PTR(ring)),
+ RING_READ_PTR, ring->idx);
+
+ /* Try allocating once more */
+ ret_buf = prot_get_ring_space(ring, nitems, alloced);
+
if (ret_buf == NULL) {
- DHD_INFO(("%s:%d: HTOD Msgbuf Not available \n", __FUNCTION__, __LINE__));
+ DHD_INFO(("%s: Ring space not available \n", ring->name));
return NULL;
}
}
+ /* Return alloced space */
return ret_buf;
}
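dhd_alloc_ring_space() tries the allocation against the cached read pointer first and only falls back to re-reading the shared-memory index when the ring looks full, saving a bus or DMA-index read on the fast path. A minimal model of that two-attempt reservation on a circular ring (hypothetical names; one slot is kept empty to distinguish full from empty):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical circular ring with a cached read index; shared_r stands
 * in for the authoritative index the dongle updates in shared memory. */
struct ring {
    uint16_t depth;      /* total slots                        */
    uint16_t w_offset;   /* host write index                   */
    uint16_t r_offset;   /* cached device read index           */
    uint16_t shared_r;   /* authoritative index in shared mem  */
};

static uint16_t ring_free(const struct ring *r)
{
    /* one slot stays empty so full and empty are distinguishable */
    return (uint16_t)((r->r_offset + r->depth - r->w_offset - 1) % r->depth);
}

static bool ring_reserve(struct ring *r, uint16_t nitems)
{
    if (ring_free(r) < nitems) {
        r->r_offset = r->shared_r;      /* refresh cached read index */
        if (ring_free(r) < nitems)
            return false;               /* genuinely full */
    }
    r->w_offset = (uint16_t)((r->w_offset + nitems) % r->depth);
    return true;
}
```

The refresh step corresponds to the DMA_INDX_ENAB branch versus dhd_bus_cmn_readshared() in the driver; both are just ways of fetching the device's current read index.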
-INLINE bool
-dhd_prot_dtohsplit(dhd_pub_t* dhd)
-{
- return dhd->prot->dtohsplit;
-}
-static int
-dhd_fillup_ioct_reqst(dhd_pub_t *dhd, uint16 len, uint cmd, void* buf, int ifidx)
-{
- dhd_prot_t *prot = dhd->prot;
- ioct_reqst_hdr_t *ioct_rqst;
- uint16 hdrlen = sizeof(ioct_reqst_hdr_t);
- uint16 msglen = len + hdrlen;
- circularbuf_t *htod_msgbuf;
- unsigned long flags;
- uint16 rqstlen = len;
- /* Limit ioct request to MSGBUF_MAX_MSG_SIZE bytes including hdrs */
- if (rqstlen + hdrlen > MSGBUF_MAX_MSG_SIZE)
- rqstlen = MSGBUF_MAX_MSG_SIZE - hdrlen;
+#define DHD_IOCTL_REQ_PKTID 0xFFFE
- /* Messge = hdr + rqstbuf */
- msglen = rqstlen + hdrlen;
-
- /* align it to 4 bytes, so that all start addr form cbuf is 4 byte aligned */
- msglen = align(msglen, 4);
-
- /* locks required to protect circular buffer accesses */
- flags = dhd_os_spin_lock(dhd);
-
- /* Request for cbuf space */
- if (dhd->prot->htodsplit) {
- htod_msgbuf = (circularbuf_t *)prot->htod_ctrlbuf;
- ioct_rqst = (ioct_reqst_hdr_t *)dhd_alloc_circularbuf_space(dhd,
- htod_msgbuf, msglen, HOST_TO_DNGL_CTRL);
- }
- else {
- htod_msgbuf = (circularbuf_t *)prot->htodbuf;
- ioct_rqst = (ioct_reqst_hdr_t *)dhd_alloc_circularbuf_space(dhd,
- htod_msgbuf, msglen, HOST_TO_DNGL_DATA);
- }
-
- if (ioct_rqst == NULL) {
- dhd_os_spin_unlock(dhd, flags);
- return -1;
- }
-
- /* Common msg buf hdr */
- ioct_rqst->msg.msglen = htol16(msglen);
- ioct_rqst->msg.msgtype = MSG_TYPE_IOCTL_REQ;
- ioct_rqst->msg.ifidx = (uint8)ifidx;
- ioct_rqst->msg.u.seq.seq_no = htol16(++prot->ioctl_seq_no);
-
- /* Ioctl specific Message buf header */
- ioct_rqst->ioct_hdr.cmd = htol32(cmd);
- ioct_rqst->ioct_hdr.pkt_id = htol32(++prot->reqid);
- ioct_rqst->ioct_hdr.retbuf_len = htol16(len);
- ioct_rqst->ioct_hdr.xt_id = (uint16)ioct_rqst->ioct_hdr.pkt_id;
- DHD_CTL(("sending IOCTL_REQ cmd %d, pkt_id %d xt_id %d\n",
- ioct_rqst->ioct_hdr.cmd, ioct_rqst->ioct_hdr.pkt_id, ioct_rqst->ioct_hdr.xt_id));
-
- /* Ret buf ptr */
- ioct_rqst->ret_buf.high_addr = htol32(PHYSADDRHI(prot->retbuf_phys));
- ioct_rqst->ret_buf.low_addr = htol32(PHYSADDRLO(prot->retbuf_phys));
-
- /* copy ioct payload */
- if (buf)
- memcpy(&ioct_rqst[1], buf, rqstlen);
-
- /* upd wrt ptr and raise interrupt */
- circularbuf_write_complete(htod_msgbuf, msglen);
- dhd_os_spin_unlock(dhd, flags);
-
- return 0;
-}
/* Non inline ioct request */
/* Form a ioctl request first as per ioctptr_reqst_hdr_t header in the circular buffer */
/* Form a separate request buffer where a 4 byte cmn header is added in the front */
@@ -1542,79 +2704,64 @@
dhd_fillup_ioct_reqst_ptrbased(dhd_pub_t *dhd, uint16 len, uint cmd, void* buf, int ifidx)
{
dhd_prot_t *prot = dhd->prot;
- ioctptr_reqst_hdr_t *ioct_rqst;
- uint16 msglen = sizeof(ioctptr_reqst_hdr_t);
- circularbuf_t * htod_msgbuf;
- cmn_msg_hdr_t * ioct_buf; /* For ioctl payload */
- uint16 alignlen, rqstlen = len;
+ ioctl_req_msg_t *ioct_rqst;
+ void * ioct_buf; /* For ioctl payload */
+ uint16 rqstlen, resplen;
unsigned long flags;
+ uint16 alloced = 0;
+
+ rqstlen = len;
+ resplen = len;
/* Limit ioct request to MSGBUF_MAX_MSG_SIZE bytes including hdrs */
- if ((rqstlen + sizeof(cmn_msg_hdr_t)) > MSGBUF_MAX_MSG_SIZE)
- rqstlen = MSGBUF_MAX_MSG_SIZE - sizeof(cmn_msg_hdr_t);
+ /* 8K allocation of dongle buffer fails */
+ /* dhd doesn't give separate input & output buf lens, */
+ /* so assume the input length can never exceed 1.5K */
+ rqstlen = MIN(rqstlen, MSGBUF_MAX_MSG_SIZE);
- /* align it to 4 bytes, so that all start addr form cbuf is 4 byte aligned */
- alignlen = align(rqstlen, 4);
-
- /* locks required to protect circular buffer accesses */
- flags = dhd_os_spin_lock(dhd);
+ DHD_GENERAL_LOCK(dhd, flags);
/* Request for cbuf space */
- if (dhd->prot->htodsplit) {
- htod_msgbuf = (circularbuf_t *)prot->htod_ctrlbuf;
- ioct_rqst = (ioctptr_reqst_hdr_t*)dhd_alloc_circularbuf_space(dhd,
- htod_msgbuf, msglen, HOST_TO_DNGL_CTRL);
- }
- else {
- htod_msgbuf = (circularbuf_t *)prot->htodbuf;
- ioct_rqst = (ioctptr_reqst_hdr_t*)dhd_alloc_circularbuf_space(dhd,
- htod_msgbuf, msglen, HOST_TO_DNGL_DATA);
- }
+ ioct_rqst = (ioctl_req_msg_t*)dhd_alloc_ring_space(dhd, prot->h2dring_ctrl_subn,
+ DHD_FLOWRING_DEFAULT_NITEMS_POSTED_H2D, &alloced);
if (ioct_rqst == NULL) {
- dhd_os_spin_unlock(dhd, flags);
+ DHD_ERROR(("couldn't allocate space on msgring to send ioctl request\n"));
+ DHD_GENERAL_UNLOCK(dhd, flags);
return -1;
}
/* Common msg buf hdr */
- ioct_rqst->msg.msglen = htol16(msglen);
- ioct_rqst->msg.msgtype = MSG_TYPE_IOCTLPTR_REQ;
- ioct_rqst->msg.ifidx = (uint8)ifidx;
- ioct_rqst->msg.u.seq.seq_no = htol16(++prot->ioctl_seq_no);
+ ioct_rqst->cmn_hdr.msg_type = MSG_TYPE_IOCTLPTR_REQ;
+ ioct_rqst->cmn_hdr.if_id = (uint8)ifidx;
+ ioct_rqst->cmn_hdr.flags = 0;
+ ioct_rqst->cmn_hdr.request_id = DHD_IOCTL_REQ_PKTID;
- /* Ioctl specific Message buf header */
- ioct_rqst->ioct_hdr.cmd = htol32(cmd);
- ioct_rqst->ioct_hdr.pkt_id = htol32(++prot->reqid);
- ioct_rqst->ioct_hdr.retbuf_len = htol16(len);
- ioct_rqst->ioct_hdr.xt_id = (uint16)ioct_rqst->ioct_hdr.pkt_id;
-
- DHD_CTL(("sending IOCTL_PTRREQ cmd %d, pkt_id %d xt_id %d\n",
- ioct_rqst->ioct_hdr.cmd, ioct_rqst->ioct_hdr.pkt_id, ioct_rqst->ioct_hdr.xt_id));
-
- /* Ret buf ptr */
- ioct_rqst->ret_buf.high_addr = htol32(PHYSADDRHI(prot->retbuf_phys));
- ioct_rqst->ret_buf.low_addr = htol32(PHYSADDRLO(prot->retbuf_phys));
-
- /* copy ioct payload */
- ioct_buf = (cmn_msg_hdr_t *) prot->ioctbuf;
- ioct_buf->msglen = htol16(alignlen + sizeof(cmn_msg_hdr_t));
- ioct_buf->msgtype = MSG_TYPE_IOCT_PYLD;
-
- if (buf) {
- memcpy(&ioct_buf[1], buf, rqstlen);
- OSL_CACHE_FLUSH((void *) prot->ioctbuf, rqstlen+sizeof(cmn_msg_hdr_t));
- }
-
- if ((ulong)ioct_buf % 4)
- printf("host ioct address unaligned !!!!! \n");
+ ioct_rqst->cmd = htol32(cmd);
+ ioct_rqst->output_buf_len = htol16(resplen);
+ ioct_rqst->trans_id = prot->ioctl_trans_id++;
/* populate ioctl buffer info */
- ioct_rqst->ioct_hdr.buflen = htol16(alignlen + sizeof(cmn_msg_hdr_t));
- ioct_rqst->ioct_buf.high_addr = htol32(PHYSADDRHI(prot->ioctbuf_phys));
- ioct_rqst->ioct_buf.low_addr = htol32(PHYSADDRLO(prot->ioctbuf_phys));
+ ioct_rqst->input_buf_len = htol16(rqstlen);
+ ioct_rqst->host_input_buf_addr.high = htol32(PHYSADDRHI(prot->ioctbuf.pa));
+ ioct_rqst->host_input_buf_addr.low = htol32(PHYSADDRLO(prot->ioctbuf.pa));
+ /* copy ioct payload */
+ ioct_buf = (void *) prot->ioctbuf.va;
+
+ if (buf)
+ memcpy(ioct_buf, buf, len);
+
+ OSL_CACHE_FLUSH((void *) prot->ioctbuf.va, len);
+
+ if ((ulong)ioct_buf % DMA_ALIGN_LEN)
+ DHD_ERROR(("host ioct address unaligned !!!!! \n"));
+
+ DHD_CTL(("submitted IOCTL request request_id %d, cmd %d, output_buf_len %d, tx_id %d\n",
+ ioct_rqst->cmn_hdr.request_id, cmd, ioct_rqst->output_buf_len,
+ ioct_rqst->trans_id));
/* upd wrt ptr and raise interrupt */
- circularbuf_write_complete(htod_msgbuf, msglen);
-
- dhd_os_spin_unlock(dhd, flags);
+ prot_ring_write_complete(dhd, prot->h2dring_ctrl_subn, ioct_rqst,
+ DHD_FLOWRING_DEFAULT_NITEMS_POSTED_H2D);
+ DHD_GENERAL_UNLOCK(dhd, flags);
return 0;
}
@@ -1684,11 +2831,10 @@
Here we can do dma unmapping for 32 bit also.
Since this in removal path, it will not affect performance
*/
- DMA_UNMAP(osh, (uint) handle->pktid_list[ix+1].pa,
+ DMA_UNMAP(osh, handle->pktid_list[ix+1].pa,
(uint) handle->pktid_list[ix+1].pa_len,
handle->pktid_list[ix+1].dma, 0, 0);
- PKTFREE(osh,
- (unsigned long*)handle->pktid_list[ix+1].native, TRUE);
+ PKTFREE(osh, (unsigned long*)handle->pktid_list[ix+1].native, TRUE);
}
}
bcm_mwbmap_fini(osh, handle->mwbmap_hdl);
@@ -1721,7 +2867,7 @@
handle->pktid_list[id].native = (ulong) pkt;
handle->pktid_list[id].pa = physaddr;
handle->pktid_list[id].pa_len = (uint32) physlen;
- handle->pktid_list[id].dma = dma;
+ handle->pktid_list[id].dma = (uchar)dma;
return id;
}
@@ -1739,8 +2885,8 @@
/* Debug check */
if (bcm_mwbmap_isfree(handle->mwbmap_hdl, (id-1))) {
- printf("%s:%d: Error !!!. How can the slot (%d) be free if the app is using it.\n",
- __FUNCTION__, __LINE__, (id-1));
+ printf("%s:%d: Error: slot (%d/0x%04x) is free but the app is using it.\n",
+ __FUNCTION__, __LINE__, (id-1), (id-1));
return NULL;
}
@@ -1753,3 +2899,950 @@
return native;
}
+static msgbuf_ring_t*
+prot_ring_attach(dhd_prot_t * prot, char* name, uint16 max_item, uint16 len_item, uint16 ringid)
+{
+ uint alloced = 0;
+ msgbuf_ring_t *ring;
+ dmaaddr_t physaddr;
+ uint16 size, cnt;
+ uint32 *marker;
+
+ ASSERT(name);
+ BCM_REFERENCE(physaddr);
+
+ /* allocate ring info */
+ ring = MALLOC(prot->osh, sizeof(msgbuf_ring_t));
+ if (ring == NULL) {
+ ASSERT(0);
+ return NULL;
+ }
+ bzero(ring, sizeof(*ring));
+
+ /* Init name */
+ strncpy(ring->name, name, sizeof(ring->name));
+
+ /* Ringid in the order given in bcmpcie.h */
+ ring->idx = ringid;
+
+ /* init ringmem */
+ ring->ringmem = MALLOC(prot->osh, sizeof(ring_mem_t));
+ if (ring->ringmem == NULL)
+ goto fail;
+ bzero(ring->ringmem, sizeof(*ring->ringmem));
+
+ ring->ringmem->max_item = max_item;
+ ring->ringmem->len_items = len_item;
+ size = max_item * len_item;
+
+ /* Ring memory allocation */
+ ring->ring_base.va = DMA_ALLOC_CONSISTENT(prot->osh, size, DMA_ALIGN_LEN,
+ &alloced, &ring->ring_base.pa, &ring->ring_base.dmah);
+
+ if (ring->ring_base.va == NULL)
+ goto fail;
+ ring->ringmem->base_addr.high_addr = htol32(PHYSADDRHI(ring->ring_base.pa));
+ ring->ringmem->base_addr.low_addr = htol32(PHYSADDRLO(ring->ring_base.pa));
+
+ ASSERT(MODX((unsigned long)ring->ring_base.va, DMA_ALIGN_LEN) == 0);
+ bzero(ring->ring_base.va, size);
+ for (cnt = 0; cnt < max_item; cnt++) {
+ marker = (uint32 *)ring->ring_base.va +
+ (cnt + 1) * len_item / sizeof(uint32) - 1;
+ *marker = PCIE_D2H_RESET_MARK;
+ }
+ OSL_CACHE_FLUSH((void *) ring->ring_base.va, size);
+
+ /* Ring state init */
+ ring->ringstate = MALLOC(prot->osh, sizeof(ring_state_t));
+ if (ring->ringstate == NULL)
+ goto fail;
+ bzero(ring->ringstate, sizeof(*ring->ringstate));
+
+ DHD_INFO(("RING_ATTACH : %s Max item %d len item %d total size %d "
+ "ring start %p buf phys addr %x:%x \n",
+ ring->name, ring->ringmem->max_item, ring->ringmem->len_items,
+ size, ring->ring_base.va, ring->ringmem->base_addr.high_addr,
+ ring->ringmem->base_addr.low_addr));
+ return ring;
+fail:
+ if (ring->ring_base.va) {
+ PHYSADDRHISET(physaddr, ring->ringmem->base_addr.high_addr);
+ PHYSADDRLOSET(physaddr, ring->ringmem->base_addr.low_addr);
+ size = ring->ringmem->max_item * ring->ringmem->len_items;
+ DMA_FREE_CONSISTENT(prot->osh, ring->ring_base.va, size, ring->ring_base.pa, NULL);
+ ring->ring_base.va = NULL;
+ }
+ if (ring->ringmem)
+ MFREE(prot->osh, ring->ringmem, sizeof(ring_mem_t));
+ MFREE(prot->osh, ring, sizeof(msgbuf_ring_t));
+ ASSERT(0);
+ return NULL;
+}
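The per-item reset-marker placement in `prot_ring_attach()` is easy to get off by one, so the arithmetic can be checked on its own. `marker_word()` below is a hypothetical helper that returns the index, in 32-bit words from the ring base, of the word the loop stamps for item `cnt`, matching `(cnt + 1) * len_item / sizeof(uint32) - 1`.

```c
#include <stddef.h>
#include <stdint.h>

/* Index of the last 32-bit word of work item 'cnt' in a ring whose items
 * are 'len_item' bytes each; this is the word prot_ring_attach() stamps
 * with PCIE_D2H_RESET_MARK so stale (un-DMAed) items can be detected. */
static size_t marker_word(uint16_t cnt, uint16_t len_item)
{
    return (size_t)(cnt + 1) * len_item / sizeof(uint32_t) - 1;
}
```

For 8-byte items the stamped words are 1, 3, 5, ..., i.e. always the trailing word of each item, never the first word of the next one.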
+static void
+dhd_ring_init(dhd_pub_t *dhd, msgbuf_ring_t *ring)
+{
+ /* update buffer address of ring */
+ dhd_bus_cmn_writeshared(dhd->bus, &ring->ringmem->base_addr,
+ sizeof(ring->ringmem->base_addr), RING_BUF_ADDR, ring->idx);
+
+ /* Update max items possible in ring */
+ dhd_bus_cmn_writeshared(dhd->bus, &ring->ringmem->max_item,
+ sizeof(ring->ringmem->max_item), RING_MAX_ITEM, ring->idx);
+
+ /* Update length of each item in the ring */
+ dhd_bus_cmn_writeshared(dhd->bus, &ring->ringmem->len_items,
+ sizeof(ring->ringmem->len_items), RING_LEN_ITEMS, ring->idx);
+
+ /* ring inited */
+ ring->inited = TRUE;
+}
+static void
+dhd_prot_ring_detach(dhd_pub_t *dhd, msgbuf_ring_t * ring)
+{
+ dmaaddr_t phyaddr;
+ uint16 size;
+ dhd_prot_t *prot = dhd->prot;
+
+ BCM_REFERENCE(phyaddr);
+
+ if (ring == NULL)
+ return;
+
+ ring->inited = FALSE;
+
+ PHYSADDRHISET(phyaddr, ring->ringmem->base_addr.high_addr);
+ PHYSADDRLOSET(phyaddr, ring->ringmem->base_addr.low_addr);
+ size = ring->ringmem->max_item * ring->ringmem->len_items;
+ /* Free up ring */
+ if (ring->ring_base.va) {
+ DMA_FREE_CONSISTENT(prot->osh, ring->ring_base.va, size, ring->ring_base.pa,
+ ring->ring_base.dmah);
+ ring->ring_base.va = NULL;
+ }
+
+ /* Free up ring mem space */
+ if (ring->ringmem) {
+ MFREE(prot->osh, ring->ringmem, sizeof(ring_mem_t));
+ ring->ringmem = NULL;
+ }
+
+ /* Free up ring state info */
+ if (ring->ringstate) {
+ MFREE(prot->osh, ring->ringstate, sizeof(ring_state_t));
+ ring->ringstate = NULL;
+ }
+
+ /* free up ring info */
+ MFREE(prot->osh, ring, sizeof(msgbuf_ring_t));
+}
+/* Assumes only one index is updated at a time */
+static void *BCMFASTPATH
+prot_get_ring_space(msgbuf_ring_t *ring, uint16 nitems, uint16 * alloced)
+{
+ void *ret_ptr = NULL;
+ uint16 ring_avail_cnt;
+
+ ASSERT(nitems <= RING_MAX_ITEM(ring));
+
+ ring_avail_cnt = CHECK_WRITE_SPACE(RING_READ_PTR(ring), RING_WRITE_PTR(ring),
+ RING_MAX_ITEM(ring));
+
+ if (ring_avail_cnt == 0) {
+ DHD_INFO(("RING space not available on ring %s for %d items \n",
+ ring->name, nitems));
+ DHD_INFO(("write %d read %d \n\n", RING_WRITE_PTR(ring),
+ RING_READ_PTR(ring)));
+ return NULL;
+ }
+ *alloced = MIN(nitems, ring_avail_cnt);
+
+ /* Return next available space */
+ ret_ptr = (char*)HOST_RING_BASE(ring) + (RING_WRITE_PTR(ring) * RING_LEN_ITEMS(ring));
+
+ /* Update write pointer */
+ if ((RING_WRITE_PTR(ring) + *alloced) == RING_MAX_ITEM(ring))
+ RING_WRITE_PTR(ring) = 0;
+ else if ((RING_WRITE_PTR(ring) + *alloced) < RING_MAX_ITEM(ring))
+ RING_WRITE_PTR(ring) += *alloced;
+ else {
+ /* Should never hit this */
+ ASSERT(0);
+ return NULL;
+ }
+
+ return ret_ptr;
+}
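The space check and write-pointer wrap above can be sketched as plain functions. `check_write_space()` is a plausible reading of the `CHECK_WRITE_SPACE` macro under the usual one-slot-empty convention; the real macro lives elsewhere in the driver, so treat this as an assumption.

```c
#include <stdint.h>

/* A plausible CHECK_WRITE_SPACE(): with read index r, write index w and
 * depth d, at most d-1 items may be outstanding so that w == r always
 * means "empty" rather than "full". */
static uint16_t check_write_space(uint16_t r, uint16_t w, uint16_t d)
{
    if (w >= r)
        return (uint16_t)(d - w + r - 1);  /* free slots wrap past the end */
    return (uint16_t)(r - w - 1);          /* free slots are contiguous */
}

/* Write-pointer update from prot_get_ring_space(): wrap exactly at the
 * ring depth; overshooting it would indicate a caller bug (the ASSERT(0)
 * branch in the original). */
static uint16_t advance_write_ptr(uint16_t w, uint16_t alloced, uint16_t d)
{
    if (w + alloced == d)
        return 0;
    return (uint16_t)(w + alloced);   /* assumes w + alloced < d */
}
```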
+
+static void BCMFASTPATH
+prot_ring_write_complete(dhd_pub_t *dhd, msgbuf_ring_t * ring, void* p, uint16 nitems)
+{
+ dhd_prot_t *prot = dhd->prot;
+
+ /* cache flush */
+ OSL_CACHE_FLUSH(p, RING_LEN_ITEMS(ring) * nitems);
+
+ /* update write pointer */
+ /* If dma'ing h2d indices are supported
+ * update the values in the host memory
+ * o/w update the values in TCM
+ */
+ if (DMA_INDX_ENAB(dhd->dma_h2d_ring_upd_support))
+ dhd_set_dmaed_index(dhd, H2D_DMA_WRITEINDX,
+ ring->idx, (uint16)RING_WRITE_PTR(ring));
+ else
+ dhd_bus_cmn_writeshared(dhd->bus, &(RING_WRITE_PTR(ring)),
+ sizeof(uint16), RING_WRITE_PTR, ring->idx);
+
+ /* raise h2d interrupt */
+ prot->mb_ring_fn(dhd->bus, RING_WRITE_PTR(ring));
+}
+
+/* If dma'ing h2d indices are supported
+ * this function updates the indices in
+ * the host memory
+ */
+static void
+dhd_set_dmaed_index(dhd_pub_t *dhd, uint8 type, uint16 ringid, uint16 new_index)
+{
+ dhd_prot_t *prot = dhd->prot;
+
+ uint32 *ptr = NULL;
+ uint16 offset = 0;
+
+ switch (type) {
+ case H2D_DMA_WRITEINDX:
+ ptr = (uint32 *)(prot->h2d_dma_writeindx_buf.va);
+
+ /* Flow-Rings start at Id BCMPCIE_COMMON_MSGRINGS
+ * but in host memory their indices start
+ * after H2D Common Rings
+ */
+ if (ringid >= BCMPCIE_COMMON_MSGRINGS)
+ offset = ringid - BCMPCIE_COMMON_MSGRINGS +
+ BCMPCIE_H2D_COMMON_MSGRINGS;
+ else
+ offset = ringid;
+ ptr += offset;
+
+ *ptr = htol16(new_index);
+
+ /* cache flush */
+ OSL_CACHE_FLUSH((void *)prot->h2d_dma_writeindx_buf.va,
+ prot->h2d_dma_writeindx_buf_len);
+
+ break;
+
+ case D2H_DMA_READINDX:
+ ptr = (uint32 *)(prot->d2h_dma_readindx_buf.va);
+
+ /* D2H common rings start at Id BCMPCIE_H2D_COMMON_MSGRINGS */
+ offset = ringid - BCMPCIE_H2D_COMMON_MSGRINGS;
+ ptr += offset;
+
+ *ptr = htol16(new_index);
+ /* cache flush */
+ OSL_CACHE_FLUSH((void *)prot->d2h_dma_readindx_buf.va,
+ prot->d2h_dma_readindx_buf_len);
+
+ break;
+
+ default:
+ DHD_ERROR(("%s: Invalid option for DMAing read/write index\n",
+ __FUNCTION__));
+
+ break;
+ }
+ DHD_TRACE(("%s: Data 0x%p, ringId %d, new_index %d\n",
+ __FUNCTION__, ptr, ringid, new_index));
+}
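The ring-id to array-slot mapping in the `H2D_DMA_WRITEINDX` case deserves a concrete check: flow rings get ids starting at `BCMPCIE_COMMON_MSGRINGS`, but in the host index array they are packed right after the H2D common rings. The constants below are illustrative stand-ins for the `bcmpcie.h` values, not the real definitions.

```c
/* Illustrative stand-ins for the bcmpcie.h layout: two H2D common rings
 * (control submit, rx post) and flow rings numbered after the combined
 * H2D+D2H common ring id space. */
#define H2D_COMMON_MSGRINGS 2
#define COMMON_MSGRINGS     5

/* Slot of ring 'ringid' in the host-resident H2D write-index array:
 * common rings keep their id, flow rings are packed immediately after
 * the H2D common rings (mirroring dhd_set_dmaed_index()). */
static unsigned h2d_index_slot(unsigned ringid)
{
    if (ringid >= COMMON_MSGRINGS)
        return ringid - COMMON_MSGRINGS + H2D_COMMON_MSGRINGS;
    return ringid;
}
```

The first flow ring (id 5 in this model) therefore lands in slot 2, directly behind the two common-ring slots, with no gap for the D2H rings that occupy ids 2..4.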
+
+
+static uint16
+dhd_get_dmaed_index(dhd_pub_t *dhd, uint8 type, uint16 ringid)
+{
+ uint32 *ptr = NULL;
+ uint16 data = 0;
+ uint16 offset = 0;
+
+ switch (type) {
+ case H2D_DMA_WRITEINDX:
+ OSL_CACHE_INV((void *)dhd->prot->h2d_dma_writeindx_buf.va,
+ dhd->prot->h2d_dma_writeindx_buf_len);
+ ptr = (uint32 *)(dhd->prot->h2d_dma_writeindx_buf.va);
+
+ /* Flow-Rings start at Id BCMPCIE_COMMON_MSGRINGS
+ * but in host memory their indices start
+ * after H2D Common Rings
+ */
+ if (ringid >= BCMPCIE_COMMON_MSGRINGS)
+ offset = ringid - BCMPCIE_COMMON_MSGRINGS +
+ BCMPCIE_H2D_COMMON_MSGRINGS;
+ else
+ offset = ringid;
+ ptr += offset;
+
+ data = LTOH16((uint16)*ptr);
+ break;
+
+ case H2D_DMA_READINDX:
+ OSL_CACHE_INV((void *)dhd->prot->h2d_dma_readindx_buf.va,
+ dhd->prot->h2d_dma_readindx_buf_len);
+ ptr = (uint32 *)(dhd->prot->h2d_dma_readindx_buf.va);
+
+ /* Flow-Rings start at Id BCMPCIE_COMMON_MSGRINGS
+ * but in host memory their indices start
+ * after H2D Common Rings
+ */
+ if (ringid >= BCMPCIE_COMMON_MSGRINGS)
+ offset = ringid - BCMPCIE_COMMON_MSGRINGS +
+ BCMPCIE_H2D_COMMON_MSGRINGS;
+ else
+ offset = ringid;
+ ptr += offset;
+
+ data = LTOH16((uint16)*ptr);
+ break;
+
+ case D2H_DMA_WRITEINDX:
+ OSL_CACHE_INV((void *)dhd->prot->d2h_dma_writeindx_buf.va,
+ dhd->prot->d2h_dma_writeindx_buf_len);
+ ptr = (uint32 *)(dhd->prot->d2h_dma_writeindx_buf.va);
+
+ /* D2H common rings start at Id BCMPCIE_H2D_COMMON_MSGRINGS */
+ offset = ringid - BCMPCIE_H2D_COMMON_MSGRINGS;
+ ptr += offset;
+
+ data = LTOH16((uint16)*ptr);
+ break;
+
+ case D2H_DMA_READINDX:
+ OSL_CACHE_INV((void *)dhd->prot->d2h_dma_readindx_buf.va,
+ dhd->prot->d2h_dma_readindx_buf_len);
+ ptr = (uint32 *)(dhd->prot->d2h_dma_readindx_buf.va);
+
+ /* D2H common rings start at Id BCMPCIE_H2D_COMMON_MSGRINGS */
+ offset = ringid - BCMPCIE_H2D_COMMON_MSGRINGS;
+ ptr += offset;
+
+ data = LTOH16((uint16)*ptr);
+ break;
+
+ default:
+ DHD_ERROR(("%s: Invalid option for DMAing read/write index\n",
+ __FUNCTION__));
+
+ break;
+ }
+ DHD_TRACE(("%s: Data 0x%p, data %d\n", __FUNCTION__, ptr, data));
+ return (data);
+}
+
+/* D2H direction: get next space to read from */
+static uint8*
+prot_get_src_addr(dhd_pub_t *dhd, msgbuf_ring_t * ring, uint16* available_len)
+{
+ uint16 w_ptr;
+ uint16 r_ptr;
+ uint16 depth;
+ void* ret_addr = NULL;
+ uint16 d2h_w_index = 0;
+
+ DHD_TRACE(("%s: h2d_dma_readindx_buf %p, d2h_dma_writeindx_buf %p\n",
+ __FUNCTION__, (uint32 *)(dhd->prot->h2d_dma_readindx_buf.va),
+ (uint32 *)(dhd->prot->d2h_dma_writeindx_buf.va)));
+
+ /* update write pointer */
+ if (DMA_INDX_ENAB(dhd->dma_d2h_ring_upd_support)) {
+ /* DMAing write/read indices supported */
+ d2h_w_index = dhd_get_dmaed_index(dhd, D2H_DMA_WRITEINDX, ring->idx);
+ ring->ringstate->w_offset = d2h_w_index;
+ } else
+ dhd_bus_cmn_readshared(dhd->bus,
+ &(RING_WRITE_PTR(ring)), RING_WRITE_PTR, ring->idx);
+
+ w_ptr = ring->ringstate->w_offset;
+ r_ptr = ring->ringstate->r_offset;
+ depth = ring->ringmem->max_item;
+
+ /* check for avail space */
+ *available_len = READ_AVAIL_SPACE(w_ptr, r_ptr, depth);
+ if (*available_len == 0)
+ return NULL;
+
+ ASSERT(*available_len <= ring->ringmem->max_item);
+
+ /* if space available, calculate address to be read */
+ ret_addr = (char*)ring->ring_base.va + (r_ptr * ring->ringmem->len_items);
+
+ /* update read pointer */
+ if ((ring->ringstate->r_offset + *available_len) >= ring->ringmem->max_item)
+ ring->ringstate->r_offset = 0;
+ else
+ ring->ringstate->r_offset += *available_len;
+
+ ASSERT(ring->ringstate->r_offset < ring->ringmem->max_item);
+
+ /* convert index to bytes */
+ *available_len = *available_len * ring->ringmem->len_items;
+
+ /* return read address */
+ return ret_addr;
+}
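The availability check above returns a contiguous run, which is why the read offset is simply reset to 0 on wrap. A plausible `READ_AVAIL_SPACE` consistent with that update (the real macro is defined elsewhere, so this is an assumption) is:

```c
#include <stdint.h>

/* A plausible READ_AVAIL_SPACE(): number of items readable in one
 * contiguous run. When the writer has wrapped past the reader, only the
 * items up to the end of the ring are reported; prot_get_src_addr()
 * then resets r_offset to 0 and picks up the rest on the next call. */
static uint16_t read_avail_space(uint16_t w, uint16_t r, uint16_t d)
{
    return (w >= r) ? (uint16_t)(w - r) : (uint16_t)(d - r);
}
```

Reporting only the contiguous run keeps the returned address usable as a single flat buffer, at the cost of an occasional second pass per wrap.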
+static void
+prot_upd_read_idx(dhd_pub_t *dhd, msgbuf_ring_t * ring)
+{
+ /* update read index */
+ /* If dma'ing h2d indices supported
+ * update the r -indices in the
+ * host memory o/w in TCM
+ */
+ if (DMA_INDX_ENAB(dhd->dma_h2d_ring_upd_support))
+ dhd_set_dmaed_index(dhd, D2H_DMA_READINDX,
+ ring->idx, (uint16)RING_READ_PTR(ring));
+ else
+ dhd_bus_cmn_writeshared(dhd->bus, &(RING_READ_PTR(ring)),
+ sizeof(uint16), RING_READ_PTR, ring->idx);
+}
+static void
+prot_store_rxcpln_read_idx(dhd_pub_t *dhd, msgbuf_ring_t * ring)
+{
+ dhd_prot_t *prot;
+ if (!dhd || !dhd->prot)
+ return;
+ prot = dhd->prot;
+ prot->rx_cpln_early_upd_idx = RING_READ_PTR(ring);
+}
+static void
+prot_early_upd_rxcpln_read_idx(dhd_pub_t *dhd, msgbuf_ring_t * ring)
+{
+ dhd_prot_t *prot;
+ if (!dhd || !dhd->prot)
+ return;
+ prot = dhd->prot;
+ if (prot->rx_cpln_early_upd_idx == RING_READ_PTR(ring))
+ return;
+ if (++prot->rx_cpln_early_upd_idx >= RING_MAX_ITEM(ring))
+ prot->rx_cpln_early_upd_idx = 0;
+ if (DMA_INDX_ENAB(dhd->dma_h2d_ring_upd_support))
+ dhd_set_dmaed_index(dhd, D2H_DMA_READINDX,
+ ring->idx, (uint16)prot->rx_cpln_early_upd_idx);
+ else
+ dhd_bus_cmn_writeshared(dhd->bus, &(prot->rx_cpln_early_upd_idx),
+ sizeof(uint16), RING_READ_PTR, ring->idx);
+}
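The early rx-completion update advances its cached index one slot at a time, wrapping at the ring depth. The single-step helper below is a minimal sketch of that increment:

```c
#include <stdint.h>

/* One-slot catch-up as in prot_early_upd_rxcpln_read_idx(): the cached
 * rx-completion read index advances a single item and wraps at depth. */
static uint16_t next_ring_index(uint16_t idx, uint16_t depth)
{
    return (uint16_t)((idx + 1 >= depth) ? 0 : idx + 1);
}
```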
+
+int
+dhd_prot_flow_ring_create(dhd_pub_t *dhd, flow_ring_node_t *flow_ring_node)
+{
+ tx_flowring_create_request_t *flow_create_rqst;
+ msgbuf_ring_t *msgbuf_flow_info;
+ dhd_prot_t *prot = dhd->prot;
+ uint16 hdrlen = sizeof(tx_flowring_create_request_t);
+ uint16 msglen = hdrlen;
+ unsigned long flags;
+ char eabuf[ETHER_ADDR_STR_LEN];
+ uint16 alloced = 0;
+
+ if (!(msgbuf_flow_info = prot_ring_attach(prot, "h2dflr",
+ H2DRING_TXPOST_MAX_ITEM, H2DRING_TXPOST_ITEMSIZE,
+ BCMPCIE_H2D_TXFLOWRINGID +
+ (flow_ring_node->flowid - BCMPCIE_H2D_COMMON_MSGRINGS)))) {
+ DHD_ERROR(("%s: prot_ring_attach for H2D TX Flow ring failed\n", __FUNCTION__));
+ return BCME_NOMEM;
+ }
+ /* Clear write pointer of the ring */
+ flow_ring_node->prot_info = (void *)msgbuf_flow_info;
+
+ /* align it to 4 bytes, so that all start addresses from cbuf are 4-byte aligned */
+ msglen = align(msglen, DMA_ALIGN_LEN);
+
+
+ DHD_GENERAL_LOCK(dhd, flags);
+ /* Request for ring buffer space */
+ flow_create_rqst = (tx_flowring_create_request_t *)dhd_alloc_ring_space(dhd,
+ prot->h2dring_ctrl_subn, DHD_FLOWRING_DEFAULT_NITEMS_POSTED_H2D, &alloced);
+
+ if (flow_create_rqst == NULL) {
+ DHD_ERROR(("%s: No space in control ring for Flow create req\n", __FUNCTION__));
+ DHD_GENERAL_UNLOCK(dhd, flags);
+ return BCME_NOMEM;
+ }
+ msgbuf_flow_info->inited = TRUE;
+
+ /* Common msg buf hdr */
+ flow_create_rqst->msg.msg_type = MSG_TYPE_FLOW_RING_CREATE;
+ flow_create_rqst->msg.if_id = (uint8)flow_ring_node->flow_info.ifindex;
+ flow_create_rqst->msg.request_id = htol16(0); /* TBD */
+
+ /* Update flow create message */
+ flow_create_rqst->tid = flow_ring_node->flow_info.tid;
+ flow_create_rqst->flow_ring_id = htol16((uint16)flow_ring_node->flowid);
+ memcpy(flow_create_rqst->sa, flow_ring_node->flow_info.sa, sizeof(flow_create_rqst->sa));
+ memcpy(flow_create_rqst->da, flow_ring_node->flow_info.da, sizeof(flow_create_rqst->da));
+ flow_create_rqst->flow_ring_ptr.low_addr = msgbuf_flow_info->ringmem->base_addr.low_addr;
+ flow_create_rqst->flow_ring_ptr.high_addr = msgbuf_flow_info->ringmem->base_addr.high_addr;
+ flow_create_rqst->max_items = htol16(H2DRING_TXPOST_MAX_ITEM);
+ flow_create_rqst->len_item = htol16(H2DRING_TXPOST_ITEMSIZE);
+ bcm_ether_ntoa((struct ether_addr *)flow_ring_node->flow_info.da, eabuf);
+ DHD_ERROR(("%s Send Flow create req: flow ID %d for peer %s prio %d ifindex %d\n",
+ __FUNCTION__, flow_ring_node->flowid, eabuf, flow_ring_node->flow_info.tid,
+ flow_ring_node->flow_info.ifindex));
+
+ /* upd wrt ptr and raise interrupt */
+ prot_ring_write_complete(dhd, prot->h2dring_ctrl_subn, flow_create_rqst,
+ DHD_FLOWRING_DEFAULT_NITEMS_POSTED_H2D);
+
+ /* If dma'ing indices supported
+ * update the w-index in host memory o/w in TCM
+ */
+ if (DMA_INDX_ENAB(dhd->dma_h2d_ring_upd_support))
+ dhd_set_dmaed_index(dhd, H2D_DMA_WRITEINDX,
+ msgbuf_flow_info->idx, (uint16)RING_WRITE_PTR(msgbuf_flow_info));
+ else
+ dhd_bus_cmn_writeshared(dhd->bus, &(RING_WRITE_PTR(msgbuf_flow_info)),
+ sizeof(uint16), RING_WRITE_PTR, msgbuf_flow_info->idx);
+ DHD_GENERAL_UNLOCK(dhd, flags);
+
+ return BCME_OK;
+}
+
+static void
+dhd_prot_process_flow_ring_create_response(dhd_pub_t *dhd, void* buf, uint16 msglen)
+{
+ tx_flowring_create_response_t *flow_create_resp = (tx_flowring_create_response_t *)buf;
+
+ DHD_ERROR(("%s Flow create Response status = %d Flow %d\n", __FUNCTION__,
+ flow_create_resp->cmplt.status, flow_create_resp->cmplt.flow_ring_id));
+
+ dhd_bus_flow_ring_create_response(dhd->bus, flow_create_resp->cmplt.flow_ring_id,
+ flow_create_resp->cmplt.status);
+}
+
+void dhd_prot_clean_flow_ring(dhd_pub_t *dhd, void *msgbuf_flow_info)
+{
+ msgbuf_ring_t *flow_ring = (msgbuf_ring_t *)msgbuf_flow_info;
+ dhd_prot_ring_detach(dhd, flow_ring);
+ DHD_INFO(("%s Cleaning up Flow \n", __FUNCTION__));
+}
+
+void dhd_prot_print_flow_ring(dhd_pub_t *dhd, void *msgbuf_flow_info,
+ struct bcmstrbuf *strbuf)
+{
+ msgbuf_ring_t *flow_ring = (msgbuf_ring_t *)msgbuf_flow_info;
+ uint16 rd, wrt;
+ dhd_bus_cmn_readshared(dhd->bus, &rd, RING_READ_PTR, flow_ring->idx);
+ dhd_bus_cmn_readshared(dhd->bus, &wrt, RING_WRITE_PTR, flow_ring->idx);
+ bcm_bprintf(strbuf, "RD %d WR %d\n", rd, wrt);
+}
+
+void dhd_prot_print_info(dhd_pub_t *dhd, struct bcmstrbuf *strbuf)
+{
+ bcm_bprintf(strbuf, "CtrlPost: ");
+ dhd_prot_print_flow_ring(dhd, dhd->prot->h2dring_ctrl_subn, strbuf);
+ bcm_bprintf(strbuf, "CtrlCpl: ");
+ dhd_prot_print_flow_ring(dhd, dhd->prot->d2hring_ctrl_cpln, strbuf);
+ bcm_bprintf(strbuf, "RxPost: ");
+ bcm_bprintf(strbuf, "RBP %d ", dhd->prot->rxbufpost);
+ dhd_prot_print_flow_ring(dhd, dhd->prot->h2dring_rxp_subn, strbuf);
+ bcm_bprintf(strbuf, "RxCpl: ");
+ dhd_prot_print_flow_ring(dhd, dhd->prot->d2hring_rx_cpln, strbuf);
+ if (dhd_bus_is_txmode_push(dhd->bus)) {
+ bcm_bprintf(strbuf, "TxPost: ");
+ dhd_prot_print_flow_ring(dhd, dhd->prot->h2dring_txp_subn, strbuf);
+ }
+ bcm_bprintf(strbuf, "TxCpl: ");
+ dhd_prot_print_flow_ring(dhd, dhd->prot->d2hring_tx_cpln, strbuf);
+ bcm_bprintf(strbuf, "active_tx_count %d pktidmap_avail %d\n",
+ dhd->prot->active_tx_count,
+ dhd_pktid_map_avail_cnt(dhd->prot->pktid_map_handle));
+}
+
+int
+dhd_prot_flow_ring_delete(dhd_pub_t *dhd, flow_ring_node_t *flow_ring_node)
+{
+ tx_flowring_delete_request_t *flow_delete_rqst;
+ dhd_prot_t *prot = dhd->prot;
+ uint16 msglen = sizeof(tx_flowring_delete_request_t);
+ unsigned long flags;
+ uint16 alloced = 0;
+
+ /* align it to 4 bytes, so that all start addresses from cbuf are 4-byte aligned */
+ msglen = align(msglen, DMA_ALIGN_LEN);
+
+ /* Request for ring buffer space */
+ DHD_GENERAL_LOCK(dhd, flags);
+ flow_delete_rqst = (tx_flowring_delete_request_t *)dhd_alloc_ring_space(dhd,
+ prot->h2dring_ctrl_subn, DHD_FLOWRING_DEFAULT_NITEMS_POSTED_H2D, &alloced);
+
+ if (flow_delete_rqst == NULL) {
+ DHD_GENERAL_UNLOCK(dhd, flags);
+ DHD_ERROR(("%s Flow Delete req failure no ring mem %d \n", __FUNCTION__, msglen));
+ return BCME_NOMEM;
+ }
+
+ /* Common msg buf hdr */
+ flow_delete_rqst->msg.msg_type = MSG_TYPE_FLOW_RING_DELETE;
+ flow_delete_rqst->msg.if_id = (uint8)flow_ring_node->flow_info.ifindex;
+ flow_delete_rqst->msg.request_id = htol16(0); /* TBD */
+
+ /* Update Delete info */
+ flow_delete_rqst->flow_ring_id = htol16((uint16)flow_ring_node->flowid);
+ flow_delete_rqst->reason = htol16(BCME_OK);
+
+ DHD_ERROR(("%s sending FLOW RING Delete req msglen %d \n", __FUNCTION__, msglen));
+
+ /* upd wrt ptr and raise interrupt */
+ prot_ring_write_complete(dhd, prot->h2dring_ctrl_subn, flow_delete_rqst,
+ DHD_FLOWRING_DEFAULT_NITEMS_POSTED_H2D);
+ DHD_GENERAL_UNLOCK(dhd, flags);
+
+ return BCME_OK;
+}
+
+static void
+dhd_prot_process_flow_ring_delete_response(dhd_pub_t *dhd, void* buf, uint16 msglen)
+{
+ tx_flowring_delete_response_t *flow_delete_resp = (tx_flowring_delete_response_t *)buf;
+
+ DHD_INFO(("%s Flow Delete Response status = %d \n", __FUNCTION__,
+ flow_delete_resp->cmplt.status));
+
+ dhd_bus_flow_ring_delete_response(dhd->bus, flow_delete_resp->cmplt.flow_ring_id,
+ flow_delete_resp->cmplt.status);
+}
+
+int
+dhd_prot_flow_ring_flush(dhd_pub_t *dhd, flow_ring_node_t *flow_ring_node)
+{
+ tx_flowring_flush_request_t *flow_flush_rqst;
+ dhd_prot_t *prot = dhd->prot;
+ uint16 msglen = sizeof(tx_flowring_flush_request_t);
+ unsigned long flags;
+ uint16 alloced = 0;
+
+ /* align it to 4 bytes, so that all start addresses from cbuf are 4-byte aligned */
+ msglen = align(msglen, DMA_ALIGN_LEN);
+
+ /* Request for ring buffer space */
+ DHD_GENERAL_LOCK(dhd, flags);
+ flow_flush_rqst = (tx_flowring_flush_request_t *)dhd_alloc_ring_space(dhd,
+ prot->h2dring_ctrl_subn, DHD_FLOWRING_DEFAULT_NITEMS_POSTED_H2D, &alloced);
+ if (flow_flush_rqst == NULL) {
+ DHD_GENERAL_UNLOCK(dhd, flags);
+ DHD_ERROR(("%s Flow Flush req failure no ring mem %d \n", __FUNCTION__, msglen));
+ return BCME_NOMEM;
+ }
+
+ /* Common msg buf hdr */
+ flow_flush_rqst->msg.msg_type = MSG_TYPE_FLOW_RING_FLUSH;
+ flow_flush_rqst->msg.if_id = (uint8)flow_ring_node->flow_info.ifindex;
+ flow_flush_rqst->msg.request_id = htol16(0); /* TBD */
+
+ flow_flush_rqst->flow_ring_id = htol16((uint16)flow_ring_node->flowid);
+ flow_flush_rqst->reason = htol16(BCME_OK);
+
+ DHD_INFO(("%s sending FLOW RING Flush req msglen %d \n", __FUNCTION__, msglen));
+
+ /* upd wrt ptr and raise interrupt */
+ prot_ring_write_complete(dhd, prot->h2dring_ctrl_subn, flow_flush_rqst,
+ DHD_FLOWRING_DEFAULT_NITEMS_POSTED_H2D);
+ DHD_GENERAL_UNLOCK(dhd, flags);
+
+ return BCME_OK;
+}
+
+static void
+dhd_prot_process_flow_ring_flush_response(dhd_pub_t *dhd, void* buf, uint16 msglen)
+{
+ tx_flowring_flush_response_t *flow_flush_resp = (tx_flowring_flush_response_t *)buf;
+
+ DHD_INFO(("%s Flow Flush Response status = %d \n", __FUNCTION__,
+ flow_flush_resp->cmplt.status));
+
+ dhd_bus_flow_ring_flush_response(dhd->bus, flow_flush_resp->cmplt.flow_ring_id,
+ flow_flush_resp->cmplt.status);
+}
+
+int
+dhd_prot_ringupd_dump(dhd_pub_t *dhd, struct bcmstrbuf *b)
+{
+ uint32 *ptr;
+ uint32 value;
+ uint32 i;
+ uint8 txpush = 0;
+ uint32 max_h2d_queues = dhd_bus_max_h2d_queues(dhd->bus, &txpush);
+
+ OSL_CACHE_INV((void *)dhd->prot->d2h_dma_writeindx_buf.va,
+ dhd->prot->d2h_dma_writeindx_buf_len);
+
+ ptr = (uint32 *)(dhd->prot->d2h_dma_writeindx_buf.va);
+
+ bcm_bprintf(b, "\n max_tx_queues %d, txpush mode %d\n", max_h2d_queues, txpush);
+
+ bcm_bprintf(b, "\nRPTR block H2D common rings, 0x%04x\n", ptr);
+ value = ltoh32(*ptr);
+ bcm_bprintf(b, "\tH2D CTRL: value 0x%04x\n", value);
+ ptr++;
+ value = ltoh32(*ptr);
+ bcm_bprintf(b, "\tH2D RXPOST: value 0x%04x\n", value);
+
+ if (txpush) {
+ ptr++;
+ value = ltoh32(*ptr);
+ bcm_bprintf(b, "\tH2D TXPOST value 0x%04x\n", value);
+ }
+ else {
+ ptr++;
+ bcm_bprintf(b, "RPTR block Flow rings , 0x%04x\n", ptr);
+ for (i = BCMPCIE_H2D_COMMON_MSGRINGS; i < max_h2d_queues; i++) {
+ value = ltoh32(*ptr);
+ bcm_bprintf(b, "\tflowring ID %d: value 0x%04x\n", i, value);
+ ptr++;
+ }
+ }
+
+ OSL_CACHE_INV((void *)dhd->prot->h2d_dma_readindx_buf.va,
+ dhd->prot->h2d_dma_readindx_buf_len);
+
+ ptr = (uint32 *)(dhd->prot->h2d_dma_readindx_buf.va);
+
+ bcm_bprintf(b, "\nWPTR block D2H common rings, 0x%04x\n", ptr);
+ value = ltoh32(*ptr);
+ bcm_bprintf(b, "\tD2H CTRLCPLT: value 0x%04x\n", value);
+ ptr++;
+ value = ltoh32(*ptr);
+ bcm_bprintf(b, "\tD2H TXCPLT: value 0x%04x\n", value);
+ ptr++;
+ value = ltoh32(*ptr);
+ bcm_bprintf(b, "\tD2H RXCPLT: value 0x%04x\n", value);
+
+ return 0;
+}
+
+uint32
+dhd_prot_metadatalen_set(dhd_pub_t *dhd, uint32 val, bool rx)
+{
+ dhd_prot_t *prot = dhd->prot;
+ if (rx)
+ prot->rx_metadata_offset = (uint16)val;
+ else
+ prot->tx_metadata_offset = (uint16)val;
+ return dhd_prot_metadatalen_get(dhd, rx);
+}
+
+uint32
+dhd_prot_metadatalen_get(dhd_pub_t *dhd, bool rx)
+{
+ dhd_prot_t *prot = dhd->prot;
+ if (rx)
+ return prot->rx_metadata_offset;
+ else
+ return prot->tx_metadata_offset;
+}
+
+uint32
+dhd_prot_txp_threshold(dhd_pub_t *dhd, bool set, uint32 val)
+{
+ dhd_prot_t *prot = dhd->prot;
+ if (set)
+ prot->txp_threshold = (uint16)val;
+ val = prot->txp_threshold;
+ return val;
+}
+
+#ifdef DHD_RX_CHAINING
+static INLINE void BCMFASTPATH
+dhd_rxchain_reset(rxchain_info_t *rxchain)
+{
+ rxchain->pkt_count = 0;
+}
+
+static void BCMFASTPATH
+dhd_rxchain_frame(dhd_pub_t *dhd, void *pkt, uint ifidx)
+{
+ uint8 *eh;
+ uint8 prio;
+ dhd_prot_t *prot = dhd->prot;
+ rxchain_info_t *rxchain = &prot->rxchain;
+
+ eh = PKTDATA(dhd->osh, pkt);
+ prio = IP_TOS46(eh + ETHER_HDR_LEN) >> IPV4_TOS_PREC_SHIFT;
+
+ /* For routers, with HNDCTF, link the packets using PKTSETCLINK, */
+ /* so that the chain can be handed off to CTF bridge as is. */
+ if (rxchain->pkt_count == 0) {
+ /* First packet in chain */
+ rxchain->pkthead = rxchain->pkttail = pkt;
+
+ /* Keep a copy of ptr to ether_da, ether_sa and prio */
+ rxchain->h_da = ((struct ether_header *)eh)->ether_dhost;
+ rxchain->h_sa = ((struct ether_header *)eh)->ether_shost;
+ rxchain->h_prio = prio;
+ rxchain->ifidx = ifidx;
+ rxchain->pkt_count++;
+ } else {
+ if (PKT_CTF_CHAINABLE(dhd, ifidx, eh, prio, rxchain->h_sa,
+ rxchain->h_da, rxchain->h_prio)) {
+ /* Same flow - keep chaining */
+ PKTSETCLINK(rxchain->pkttail, pkt);
+ rxchain->pkttail = pkt;
+ rxchain->pkt_count++;
+ } else {
+ /* Different flow - First release the existing chain */
+ dhd_rxchain_commit(dhd);
+
+ /* Create a new chain */
+ rxchain->pkthead = rxchain->pkttail = pkt;
+
+ /* Keep a copy of ptr to ether_da, ether_sa and prio */
+ rxchain->h_da = ((struct ether_header *)eh)->ether_dhost;
+ rxchain->h_sa = ((struct ether_header *)eh)->ether_shost;
+ rxchain->h_prio = prio;
+ rxchain->ifidx = ifidx;
+ rxchain->pkt_count++;
+ }
+ }
+
+ if ((!ETHER_ISMULTI(rxchain->h_da)) &&
+ ((((struct ether_header *)eh)->ether_type == HTON16(ETHER_TYPE_IP)) ||
+ (((struct ether_header *)eh)->ether_type == HTON16(ETHER_TYPE_IPV6)))) {
+ PKTSETCHAINED(dhd->osh, pkt);
+ PKTCINCRCNT(rxchain->pkthead);
+ PKTCADDLEN(rxchain->pkthead, PKTLEN(dhd->osh, pkt));
+ } else {
+ dhd_rxchain_commit(dhd);
+ return;
+ }
+
+ /* If we have hit the max chain length, dispatch the chain and reset */
+ if (rxchain->pkt_count >= DHD_PKT_CTF_MAX_CHAIN_LEN) {
+ dhd_rxchain_commit(dhd);
+ }
+}
+
+static void BCMFASTPATH
+dhd_rxchain_commit(dhd_pub_t *dhd)
+{
+ dhd_prot_t *prot = dhd->prot;
+ rxchain_info_t *rxchain = &prot->rxchain;
+
+ if (rxchain->pkt_count == 0)
+ return;
+
+ /* Release the packets to dhd_linux */
+ dhd_bus_rx_frame(dhd->bus, rxchain->pkthead, rxchain->ifidx, rxchain->pkt_count);
+
+ /* Reset the chain */
+ dhd_rxchain_reset(rxchain);
+}
+#endif /* DHD_RX_CHAINING */
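The chaining policy implemented by dhd_rxchain_frame() above can be sketched as a small host-side model: packets of the same (sa, da, prio) flow are appended to the current chain, a packet from a different flow commits the existing chain and starts a new one, and reaching the maximum chain length forces an immediate commit. The types and names below are hypothetical simplifications, not the driver's real structures.

```c
#include <assert.h>
#include <string.h>

/* Simplified stand-in for DHD_PKT_CTF_MAX_CHAIN_LEN (assumed value). */
#define MAX_CHAIN_LEN 64

struct flow_key { unsigned sa, da, prio; };
struct rxchain { struct flow_key key; int pkt_count; int commits; };

/* Stands in for dhd_rxchain_commit(): dispatch the chain, then reset it. */
static void rxchain_commit(struct rxchain *c)
{
	if (c->pkt_count == 0)
		return;
	c->commits++;		/* models dhd_bus_rx_frame() handing off the chain */
	c->pkt_count = 0;	/* models dhd_rxchain_reset() */
}

/* Models the per-packet decision in dhd_rxchain_frame(). */
static void rxchain_frame(struct rxchain *c, struct flow_key k)
{
	if (c->pkt_count == 0) {
		c->key = k;		/* first packet starts a chain */
		c->pkt_count = 1;
	} else if (k.sa == c->key.sa && k.da == c->key.da &&
	           k.prio == c->key.prio) {
		c->pkt_count++;		/* same flow: keep chaining */
	} else {
		rxchain_commit(c);	/* different flow: flush, then restart */
		c->key = k;
		c->pkt_count = 1;
	}
	if (c->pkt_count >= MAX_CHAIN_LEN)
		rxchain_commit(c);	/* chain full: dispatch now */
}
```

The real driver additionally bails out of chaining for multicast and non-IP frames, committing whatever chain exists at that point.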
+static void
+dhd_prot_ring_clear(msgbuf_ring_t* ring)
+{
+ uint16 size;
+ DHD_TRACE(("%s\n",__FUNCTION__));
+
+ size = ring->ringmem->max_item * ring->ringmem->len_items;
+ OSL_CACHE_INV((void *) ring->ring_base.va, size);
+ bzero(ring->ring_base.va, size);
+ OSL_CACHE_FLUSH((void *) ring->ring_base.va, size);
+
+ bzero(ring->ringstate, sizeof(*ring->ringstate));
+}
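The invalidate/zero/flush sequence in dhd_prot_ring_clear() above is the standard pattern for clearing a DMA-visible buffer: invalidate first so stale dirty cache lines cannot later be written back over the zeros, then zero the backing store, then flush so the device observes the cleared contents. A minimal sketch, with the OSL cache macros modeled as no-ops since cache maintenance is platform-specific:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* On a cache-coherent host build these reduce to no-ops; on a real platform
 * they map to the OSL's dcache invalidate/clean primitives. */
#define OSL_CACHE_INV(va, len)   ((void)(va), (void)(len))
#define OSL_CACHE_FLUSH(va, len) ((void)(va), (void)(len))

/* Zero a DMA-visible ring and make the zeros visible to the device:
 * 1. invalidate, so stale cache lines are discarded rather than written back
 * 2. zero the backing store
 * 3. flush, so the device sees the zeroed contents */
static void ring_clear(void *va, size_t size)
{
	OSL_CACHE_INV(va, size);
	memset(va, 0, size);
	OSL_CACHE_FLUSH(va, size);
}
```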
+
+void
+dhd_prot_clear(dhd_pub_t *dhd)
+{
+ struct dhd_prot *prot = dhd->prot;
+
+ DHD_TRACE(("%s\n",__FUNCTION__));
+
+ if(prot == NULL)
+ return;
+
+ if(prot->h2dring_txp_subn)
+ dhd_prot_ring_clear(prot->h2dring_txp_subn);
+ if(prot->h2dring_rxp_subn)
+ dhd_prot_ring_clear(prot->h2dring_rxp_subn);
+ if(prot->h2dring_ctrl_subn)
+ dhd_prot_ring_clear(prot->h2dring_ctrl_subn);
+ if(prot->d2hring_tx_cpln)
+ dhd_prot_ring_clear(prot->d2hring_tx_cpln);
+ if(prot->d2hring_rx_cpln)
+ dhd_prot_ring_clear(prot->d2hring_rx_cpln);
+ if(prot->d2hring_ctrl_cpln)
+ dhd_prot_ring_clear(prot->d2hring_ctrl_cpln);
+
+
+ if(prot->retbuf.va) {
+ OSL_CACHE_INV((void *) prot->retbuf.va, IOCT_RETBUF_SIZE);
+ bzero(prot->retbuf.va, IOCT_RETBUF_SIZE);
+ OSL_CACHE_FLUSH((void *) prot->retbuf.va, IOCT_RETBUF_SIZE);
+ }
+
+ if(prot->ioctbuf.va) {
+ OSL_CACHE_INV((void *) prot->ioctbuf.va, IOCT_RETBUF_SIZE);
+ bzero(prot->ioctbuf.va, IOCT_RETBUF_SIZE);
+ OSL_CACHE_FLUSH((void *) prot->ioctbuf.va, IOCT_RETBUF_SIZE);
+ }
+
+ if(prot->d2h_dma_scratch_buf.va) {
+ OSL_CACHE_INV((void *)prot->d2h_dma_scratch_buf.va, DMA_D2H_SCRATCH_BUF_LEN);
+ bzero(prot->d2h_dma_scratch_buf.va, DMA_D2H_SCRATCH_BUF_LEN);
+ OSL_CACHE_FLUSH((void *)prot->d2h_dma_scratch_buf.va, DMA_D2H_SCRATCH_BUF_LEN);
+ }
+
+ if (prot->h2d_dma_readindx_buf.va) {
+ OSL_CACHE_INV((void *)prot->h2d_dma_readindx_buf.va,
+ prot->h2d_dma_readindx_buf_len);
+ bzero(prot->h2d_dma_readindx_buf.va,
+ prot->h2d_dma_readindx_buf_len);
+ OSL_CACHE_FLUSH((void *)prot->h2d_dma_readindx_buf.va,
+ prot->h2d_dma_readindx_buf_len);
+ }
+
+ if (prot->h2d_dma_writeindx_buf.va) {
+ OSL_CACHE_INV((void *)prot->h2d_dma_writeindx_buf.va,
+ prot->h2d_dma_writeindx_buf_len);
+ bzero(prot->h2d_dma_writeindx_buf.va, prot->h2d_dma_writeindx_buf_len);
+ OSL_CACHE_FLUSH((void *)prot->h2d_dma_writeindx_buf.va,
+ prot->h2d_dma_writeindx_buf_len);
+ }
+
+ if (prot->d2h_dma_readindx_buf.va) {
+ OSL_CACHE_INV((void *)prot->d2h_dma_readindx_buf.va,
+ prot->d2h_dma_readindx_buf_len);
+ bzero(prot->d2h_dma_readindx_buf.va, prot->d2h_dma_readindx_buf_len);
+ OSL_CACHE_FLUSH((void *)prot->d2h_dma_readindx_buf.va,
+ prot->d2h_dma_readindx_buf_len);
+ }
+
+ if (prot->d2h_dma_writeindx_buf.va) {
+ OSL_CACHE_INV((void *)prot->d2h_dma_writeindx_buf.va,
+ prot->d2h_dma_writeindx_buf_len);
+ bzero(prot->d2h_dma_writeindx_buf.va, prot->d2h_dma_writeindx_buf_len);
+ OSL_CACHE_FLUSH((void *)prot->d2h_dma_writeindx_buf.va,
+ prot->d2h_dma_writeindx_buf_len);
+ }
+
+ prot->rx_metadata_offset = 0;
+ prot->tx_metadata_offset = 0;
+
+ prot->rxbufpost = 0;
+ prot->cur_event_bufs_posted = 0;
+ prot->cur_ioctlresp_bufs_posted = 0;
+
+ prot->active_tx_count = 0;
+ prot->data_seq_no = 0;
+ prot->ioctl_seq_no = 0;
+ prot->pending = 0;
+ prot->lastcmd = 0;
+
+ prot->ioctl_trans_id = 1;
+
+ /* dhd_flow_rings_init() is called from dhd_bus_start(), so the
+  * flowrings must be deleted when the bus is stopped.
+  */
+ dhd_flow_rings_deinit(dhd);
+ NATIVE_TO_PKTID_CLEAR(prot->pktid_map_handle);
+}
diff --git a/drivers/net/wireless/bcmdhd/dhd_pcie.c b/drivers/net/wireless/bcmdhd/dhd_pcie.c
old mode 100755
new mode 100644
index 570c1b7..b84c6ce
--- a/drivers/net/wireless/bcmdhd/dhd_pcie.c
+++ b/drivers/net/wireless/bcmdhd/dhd_pcie.c
@@ -2,13 +2,13 @@
* DHD Bus Module for PCIE
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -16,12 +16,12 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: dhd_pcie.c 452261 2014-01-29 19:30:23Z $
+ * $Id: dhd_pcie.c 477711 2014-05-14 08:45:17Z $
*/
@@ -34,39 +34,57 @@
#include <hndpmu.h>
#include <sbchipc.h>
#if defined(DHD_DEBUG)
-#include <hndrte_armtrap.h>
-#include <hndrte_cons.h>
+#include <hnd_armtrap.h>
+#include <hnd_cons.h>
#endif /* defined(DHD_DEBUG) */
#include <dngl_stats.h>
#include <pcie_core.h>
#include <dhd.h>
#include <dhd_bus.h>
+#include <dhd_flowring.h>
#include <dhd_proto.h>
#include <dhd_dbg.h>
+#include <dhd_debug.h>
#include <dhdioctl.h>
#include <sdiovar.h>
#include <bcmmsgbuf.h>
#include <pcicfg.h>
-#include <circularbuf.h>
#include <dhd_pcie.h>
#include <bcmpcie.h>
+#include <bcmendian.h>
+#ifdef DHDTCPACK_SUPPRESS
+#include <dhd_ip.h>
+#endif /* DHDTCPACK_SUPPRESS */
+#include <proto/bcmevent.h>
+
+#ifdef BCMEMBEDIMAGE
+#include BCMEMBEDIMAGE
+#endif /* BCMEMBEDIMAGE */
#define MEMBLOCK 2048 /* Block size used for downloading of dongle image */
-#define MAX_NVRAMBUF_SIZE 4096 /* max nvram buf size */
+#define MAX_NVRAMBUF_SIZE 6144 /* max nvram buf size */
#define ARMCR4REG_BANKIDX (0x40/sizeof(uint32))
#define ARMCR4REG_BANKPDA (0x4C/sizeof(uint32))
+/* Temporary WAR to fix precommit until the sync issue between the trunk and precommit branches is resolved */
+#define DHD_FLOW_RING(dhdp, flowid) \
+ (flow_ring_node_t *)&(((flow_ring_node_t *)((dhdp)->flow_ring_table))[flowid])
int dhd_dongle_memsize;
int dhd_dongle_ramsize;
#ifdef DHD_DEBUG
+static int dhdpcie_checkdied(dhd_bus_t *bus, char *data, uint size);
static int dhdpcie_bus_readconsole(dhd_bus_t *bus);
#endif
+static int dhdpcie_mem_dump(dhd_bus_t *bus);
+static void dhdpcie_bus_report_pcie_linkdown(dhd_bus_t *bus);
static int dhdpcie_bus_membytes(dhd_bus_t *bus, bool write, ulong address, uint8 *data, uint size);
static int dhdpcie_bus_doiovar(dhd_bus_t *bus, const bcm_iovar_t *vi, uint32 actionid,
const char *name, void *params,
int plen, void *arg, int len, int val_size);
static int dhdpcie_bus_lpback_req(struct dhd_bus *bus, uint32 intval);
+static int dhdpcie_bus_dmaxfer_req(struct dhd_bus *bus,
+ uint32 len, uint32 srcdelay, uint32 destdelay);
static int dhdpcie_bus_download_state(dhd_bus_t *bus, bool enter);
static int _dhdpcie_download_firmware(struct dhd_bus *bus);
static int dhdpcie_download_firmware(dhd_bus_t *bus, osl_t *osh);
@@ -88,12 +106,24 @@
static uint16 dhdpcie_bus_rtcm16(dhd_bus_t *bus, ulong offset);
static void dhdpcie_bus_wtcm32(dhd_bus_t *bus, ulong offset, uint32 data);
static uint32 dhdpcie_bus_rtcm32(dhd_bus_t *bus, ulong offset);
-static void dhdpcie_bus_wreg32(dhd_bus_t *bus, uint reg, uint32 data);
-static uint32 dhdpcie_bus_rreg32(dhd_bus_t *bus, uint reg);
+static void dhdpcie_bus_wtcm64(dhd_bus_t *bus, ulong offset, uint64 data);
+static uint64 dhdpcie_bus_rtcm64(dhd_bus_t *bus, ulong offset);
static void dhdpcie_bus_cfg_set_bar0_win(dhd_bus_t *bus, uint32 data);
+#if defined(CONFIG_ARCH_MSM) && defined(CONFIG_64BIT)
+static void dhdpcie_bus_cfg_set_bar1_win(dhd_bus_t *bus, uint32 data);
+static ulong dhd_bus_cmn_check_offset(dhd_bus_t *bus, ulong offset);
+#endif
static void dhdpcie_bus_reg_unmap(osl_t *osh, ulong addr, int size);
static int dhdpcie_cc_nvmshadow(dhd_bus_t *bus, struct bcmstrbuf *b);
static void dhdpcie_send_mb_data(dhd_bus_t *bus, uint32 h2d_mb_data);
+static void dhd_fillup_ring_sharedptr_info(dhd_bus_t *bus, ring_info_t *ring_info);
+
+#ifdef BCMEMBEDIMAGE
+static int dhdpcie_download_code_array(dhd_bus_t *bus);
+#endif /* BCMEMBEDIMAGE */
+extern void dhd_dpc_kill(dhd_pub_t *dhdp);
+
+
#define PCI_VENDOR_ID_BROADCOM 0x14e4
@@ -111,15 +141,28 @@
IOV_RAMSIZE,
IOV_RAMSTART,
IOV_SLEEP_ALLOWED,
+ IOV_PCIE_DMAXFER,
+ IOV_PCIE_SUSPEND,
IOV_PCIEREG,
IOV_PCIECFGREG,
IOV_PCIECOREREG,
+ IOV_PCIESERDESREG,
+ IOV_BAR0_SECWIN_REG,
IOV_SBREG,
IOV_DONGLEISOLATION,
- IOV_LTRSLEEPON_UNLOOAD
+ IOV_LTRSLEEPON_UNLOOAD,
+ IOV_RX_METADATALEN,
+ IOV_TX_METADATALEN,
+ IOV_TXP_THRESHOLD,
+ IOV_BUZZZ_DUMP,
+ IOV_DUMP_RINGUPD_BLOCK,
+ IOV_DMA_RINGINDICES,
+ IOV_DB1_FOR_MB,
+ IOV_FLOW_PRIO_MAP
};
+
const bcm_iovar_t dhdpcie_iovars[] = {
{"intr", IOV_INTR, 0, IOVT_BOOL, 0 },
{"membytes", IOV_MEMBYTES, 0, IOVT_BUFFER, 2 * sizeof(int) },
@@ -134,10 +177,20 @@
{"pciereg", IOV_PCIEREG, 0, IOVT_BUFFER, 2 * sizeof(int32) },
{"pciecfgreg", IOV_PCIECFGREG, 0, IOVT_BUFFER, 2 * sizeof(int32) },
{"pciecorereg", IOV_PCIECOREREG, 0, IOVT_BUFFER, 2 * sizeof(int32) },
+ {"bar0secwinreg", IOV_BAR0_SECWIN_REG, 0, IOVT_BUFFER, 2 * sizeof(int32) },
{"sbreg", IOV_SBREG, 0, IOVT_BUFFER, sizeof(sdreg_t) },
+ {"pcie_dmaxfer", IOV_PCIE_DMAXFER, 0, IOVT_BUFFER, 3 * sizeof(int32) },
+ {"pcie_suspend", IOV_PCIE_SUSPEND, 0, IOVT_UINT32, 0 },
{"sleep_allowed", IOV_SLEEP_ALLOWED, 0, IOVT_BOOL, 0 },
{"dngl_isolation", IOV_DONGLEISOLATION, 0, IOVT_UINT32, 0 },
{"ltrsleep_on_unload", IOV_LTRSLEEPON_UNLOOAD, 0, IOVT_UINT32, 0 },
+ {"dump_ringupdblk", IOV_DUMP_RINGUPD_BLOCK, 0, IOVT_BUFFER, 0 },
+ {"dma_ring_indices", IOV_DMA_RINGINDICES, 0, IOVT_UINT32, 0},
+ {"rx_metadata_len", IOV_RX_METADATALEN, 0, IOVT_UINT32, 0 },
+ {"tx_metadata_len", IOV_TX_METADATALEN, 0, IOVT_UINT32, 0 },
+ {"txp_thresh", IOV_TXP_THRESHOLD, 0, IOVT_UINT32, 0 },
+ {"buzzz_dump", IOV_BUZZZ_DUMP, 0, IOVT_UINT32, 0 },
+ {"flow_prio_map", IOV_FLOW_PRIO_MAP, 0, IOVT_UINT32, 0 },
{NULL, 0, 0, 0, 0 }
};
@@ -180,14 +233,18 @@
return;
}
-/** 'tcm' is the *host* virtual address at which tcm is mapped */
+/**
+ * 'regs' is the host virtual address that maps to the start of the PCIe BAR0 window. The first 4096
+ * bytes in this window are mapped to the backplane address in the PCIEBAR0Window register. The
+ * precondition is that the PCIEBAR0Window register 'points' at the PCIe core.
+ *
+ * 'tcm' is the *host* virtual address at which tcm is mapped.
+ */
dhd_bus_t* dhdpcie_bus_attach(osl_t *osh, volatile char* regs, volatile char* tcm)
{
dhd_bus_t *bus;
- int ret = 0;
-
- DHD_TRACE(("%s: ENTER\n", __FUNCTION__));
+ DHD_ERROR(("%s: ENTER\n", __FUNCTION__));
do {
if (!(bus = MALLOC(osh, sizeof(dhd_bus_t)))) {
@@ -199,6 +256,8 @@
bus->tcm = tcm;
bus->osh = osh;
+ dll_init(&bus->const_flowring);
+
/* Attach pcie shared structure */
bus->pcie_sh = MALLOC(osh, sizeof(pciedev_shared_t));
@@ -216,14 +275,9 @@
break;
}
bus->dhd->busstate = DHD_BUS_DOWN;
+ bus->db1_for_mb = TRUE;
+ bus->dhd->hang_report = TRUE;
- /* Attach to the OS network interface */
- DHD_TRACE(("%s(): Calling dhd_register_if() \n", __FUNCTION__));
- ret = dhd_register_if(bus->dhd, 0, TRUE);
- if (ret) {
- DHD_ERROR(("%s(): ERROR.. dhd_register_if() failed\n", __FUNCTION__));
- break;
- }
DHD_TRACE(("%s: EXIT SUCCESS\n",
__FUNCTION__));
@@ -268,6 +322,27 @@
return &bus->txq;
}
+/* Get Chip ID version */
+uint dhd_bus_chip_id(dhd_pub_t *dhdp)
+{
+ dhd_bus_t *bus = dhdp->bus;
+ return bus->sih->chip;
+}
+
+/* Get Chip Rev ID version */
+uint dhd_bus_chiprev_id(dhd_pub_t *dhdp)
+{
+ dhd_bus_t *bus = dhdp->bus;
+ return bus->sih->chiprev;
+}
+
+/* Get Chip Pkg ID version */
+uint dhd_bus_chippkg_id(dhd_pub_t *dhdp)
+{
+ dhd_bus_t *bus = dhdp->bus;
+ return bus->sih->chippkg;
+}
+
/*
@@ -301,16 +376,18 @@
}
if (bus->dhd->busstate == DHD_BUS_DOWN) {
- DHD_ERROR(("%s : bus is down. we have nothing to do\n",
+ DHD_INFO(("%s : bus is down. we have nothing to do\n",
__FUNCTION__));
break;
}
+ /* Overall operation:
+ * - Mask further interrupts
+ * - Read/ack intstatus
+ * - Take action based on bits and state
+ * - Reenable interrupts (as per state)
+ */
-#ifdef DHD_ALLIRQ
- /* Lock here covers SMP */
- dhd_os_sdisrlock(bus->dhd);
-#endif
/* Count the interrupt call */
bus->intrcount++;
@@ -319,7 +396,7 @@
dhdpcie_bus_intr_disable(bus); /* Disable interrupt!! */
bus->intdis = TRUE;
-#if defined(DHD_ALLIRQ) || defined(PCIE_ISR_THREAD)
+#if defined(PCIE_ISR_THREAD)
DHD_TRACE(("Calling dhd_bus_dpc() from %s\n", __FUNCTION__));
DHD_OS_WAKE_LOCK(bus->dhd);
@@ -328,11 +405,8 @@
#else
bus->dpc_sched = TRUE;
dhd_sched_dpc(bus->dhd); /* queue DPC now!! */
-#endif /* defined(DHD_ALLIRQ) || defined(SDIO_ISR_THREAD) */
+#endif /* defined(PCIE_ISR_THREAD) */
-#ifdef DHD_ALLIRQ
- dhd_os_sdisrunlock(bus->dhd);
-#endif
DHD_TRACE(("%s: Exit Success DPC Queued\n", __FUNCTION__));
return TRUE;
@@ -350,6 +424,7 @@
void *regsva = (void*)bus->regs;
uint16 devid = bus->cl_devid;
uint32 val;
+ sbpcieregs_t *sbpcieregs;
DHD_TRACE(("%s: ENTER\n",
__FUNCTION__));
@@ -360,6 +435,12 @@
/* Set bar0 window to si_enum_base */
dhdpcie_bus_cfg_set_bar0_win(bus, SI_ENUM_BASE);
+#if defined(CONFIG_ARCH_MSM) && defined(CONFIG_64BIT)
+ /* Read bar1 window */
+ bus->bar1_win_base = OSL_PCI_READ_CONFIG(bus->osh, PCI_BAR1_WIN, 4);
+ DHD_ERROR(("%s: PCI_BAR1_WIN = %x\n", __FUNCTION__, bus->bar1_win_base));
+#endif
+
/* si_attach() will provide an SI handle and scan the backplane */
if (!(bus->sih = si_attach((uint)devid, osh, regsva, PCI_BUS, bus,
&bus->vars, &bus->varsz))) {
@@ -367,13 +448,21 @@
goto fail;
}
- si_setcore(bus->sih, PCIE2_CORE_ID, 0);
- dhdpcie_bus_wreg32(bus, OFFSETOF(sbpcieregs_t, configaddr), 0x4e0);
- val = dhdpcie_bus_rreg32(bus, OFFSETOF(sbpcieregs_t, configdata));
- dhdpcie_bus_wreg32(bus, OFFSETOF(sbpcieregs_t, configdata), val);
+ si_setcore(bus->sih, PCIE2_CORE_ID, 0);
+ sbpcieregs = (sbpcieregs_t*)(bus->regs);
+
+ /* WAR where the BAR1 window may not be sized properly */
+ W_REG(osh, &sbpcieregs->configaddr, 0x4e0);
+ val = R_REG(osh, &sbpcieregs->configdata);
+#if defined(CONFIG_ARCH_MSM) && defined(CONFIG_64BIT)
+ bus->bar1_win_mask = 0xffffffff - (bus->tcm_size - 1);
+ DHD_ERROR(("%s: BAR1 window val=%d mask=%x\n", __FUNCTION__, val, bus->bar1_win_mask));
+#endif
+ W_REG(osh, &sbpcieregs->configdata, val);
/* Get info on the ARM and SOCRAM cores... */
+ /* Should really be qualified by device id */
if ((si_setcore(bus->sih, ARM7S_CORE_ID, 0)) ||
(si_setcore(bus->sih, ARMCM3_CORE_ID, 0)) ||
(si_setcore(bus->sih, ARMCR4_CORE_ID, 0))) {
@@ -396,11 +485,16 @@
}
/* also populate base address */
switch ((uint16)bus->sih->chip) {
+ case BCM4339_CHIP_ID:
case BCM4335_CHIP_ID:
bus->dongle_ram_base = CR4_4335_RAM_BASE;
break;
+ case BCM4358_CHIP_ID:
+ case BCM4356_CHIP_ID:
case BCM4354_CHIP_ID:
+ case BCM43569_CHIP_ID:
case BCM4350_CHIP_ID:
+ case BCM43570_CHIP_ID:
bus->dongle_ram_base = CR4_4350_RAM_BASE;
break;
case BCM4360_CHIP_ID:
@@ -412,6 +506,9 @@
case BCM43602_CHIP_ID:
bus->dongle_ram_base = CR4_43602_RAM_BASE;
break;
+ case BCM4349_CHIP_GRPID:
+ bus->dongle_ram_base = CR4_4349_RAM_BASE;
+ break;
default:
bus->dongle_ram_base = 0;
DHD_ERROR(("%s: WARNING: Using default ram base at 0x%x\n",
@@ -433,6 +530,8 @@
/* Set the poll and/or interrupt flags */
bus->intr = (bool)dhd_intr;
+ bus->wait_for_d3_ack = 1;
+ bus->suspended = FALSE;
DHD_TRACE(("%s: EXIT: SUCCESS\n",
__FUNCTION__));
return 0;
@@ -509,12 +608,15 @@
if (bus->dhd) {
dongle_isolation = bus->dhd->dongle_isolation;
- dhd_detach(bus->dhd);
if (bus->intr) {
- dhdpcie_bus_intr_disable(bus);
+ if (bus->dhd->dongle_reset == FALSE)
+ dhdpcie_bus_intr_disable(bus);
dhdpcie_free_irq(bus);
}
+ /* Kill the DPC tasklet: an already-scheduled tasklet could still run even though the dongle has been released */
+ dhd_dpc_kill(bus->dhd);
+ dhd_detach(bus->dhd);
dhdpcie_bus_release_dongle(bus, osh, dongle_isolation, TRUE);
dhd_free(bus->dhd);
bus->dhd = NULL;
@@ -568,25 +670,9 @@
if (bus->sih) {
+ if (!dongle_isolation)
+ pcie_watchdog_reset(bus->osh, bus->sih, (sbpcieregs_t *)(bus->regs));
- if (!dongle_isolation) {
- uint32 val, i;
- uint16 cfg_offset[] = {0x4, 0x4C, 0x58, 0x5C, 0x60, 0x64, 0xDC,
- 0x228, 0x248, 0x4e0, 0x4f4};
- si_corereg(bus->sih, SI_CC_IDX, OFFSETOF(chipcregs_t, watchdog), ~0, 4);
- /* apply the WAR: need to restore the config space snoop bus values */
- OSL_DELAY(100000);
-
- for (i = 0; i < ARRAYSIZE(cfg_offset); i++) {
- dhdpcie_bus_wreg32(bus, OFFSETOF(sbpcieregs_t, configaddr),
- cfg_offset[i]);
- val = dhdpcie_bus_rreg32(bus,
- OFFSETOF(sbpcieregs_t, configdata));
- DHD_INFO(("SNOOP_BUS_UPDATE: config offset 0x%04x, value 0x%04x\n",
- cfg_offset[i], val));
- dhdpcie_bus_wreg32(bus, OFFSETOF(sbpcieregs_t, configdata), val);
- }
- }
if (bus->ltrsleep_on_unload) {
si_corereg(bus->sih, bus->sih->buscoreidx,
OFFSETOF(sbpcieregs_t, u.pcie2.ltr_state), ~0, 0);
@@ -620,23 +706,13 @@
OSL_PCI_WRITE_CONFIG(bus->osh, PCI_BAR0_WIN, 4, data);
}
-/* 32 bit pio write to device TCM */
+#if defined(CONFIG_ARCH_MSM) && defined(CONFIG_64BIT)
void
-dhdpcie_bus_wreg32(dhd_bus_t *bus, uint reg, uint32 data)
+dhdpcie_bus_cfg_set_bar1_win(dhd_bus_t *bus, uint32 data)
{
- *(volatile uint32 *)(bus->regs + reg) = (uint32)data;
-
+ OSL_PCI_WRITE_CONFIG(bus->osh, PCI_BAR1_WIN, 4, data);
}
-
-uint32
-dhdpcie_bus_rreg32(dhd_bus_t *bus, uint reg)
-{
- uint32 data;
-
- data = *(volatile uint32 *)(bus->regs + reg);
- return data;
-}
-
+#endif
void
dhdpcie_bus_dongle_setmemsize(struct dhd_bus *bus, int mem_size)
@@ -678,20 +754,21 @@
if (!bus->dhd)
return;
- if (enforce_mutex)
- dhd_os_sdlock(bus->dhd);
-
+ if(bus->dhd->busstate == DHD_BUS_DOWN) {
+ DHD_ERROR(("%s: already down by net_dev_reset\n",__FUNCTION__));
+ goto done;
+ }
bus->dhd->busstate = DHD_BUS_DOWN;
dhdpcie_bus_intr_disable(bus);
status = dhdpcie_bus_cfg_read_dword(bus, PCIIntstatus, 4);
dhdpcie_bus_cfg_write_dword(bus, PCIIntstatus, 4, status);
+ if (!dhd_download_fw_on_driverload)
+ dhd_dpc_kill(bus->dhd);
/* Clear rx control and wake any waiters */
bus->rxlen = 0;
dhd_os_ioctl_resp_wake(bus->dhd);
-
- if (enforce_mutex)
- dhd_os_sdunlock(bus->dhd);
+done:
return;
}
@@ -759,6 +836,9 @@
DHD_ERROR(("%s: download firmware %s\n", __FUNCTION__, pfw_path));
+ /* Opening the image should succeed if a valid path was given through the
+  * registry entry or the module parameter.
+  */
image = dhd_os_open_image(pfw_path);
if (image == NULL)
goto err;
@@ -845,14 +925,19 @@
len = dhd_os_get_image_block(memblock, MAX_NVRAMBUF_SIZE, image);
}
else {
- len = strlen(bus->nvram_params);
+
+ /* The nvram data is a sequence of NUL-terminated strings, so strlen() cannot give its length */
+ len = bus->nvram_params_len;
ASSERT(len <= MAX_NVRAMBUF_SIZE);
memcpy(memblock, bus->nvram_params, len);
}
if (len > 0 && len < MAX_NVRAMBUF_SIZE) {
bufp = (char *)memblock;
bufp[len] = 0;
- len = process_nvram_vars(bufp, len);
+
+ if (nvram_file_exists)
+ len = process_nvram_vars(bufp, len);
+
if (len % 4) {
len += 4 - (len % 4);
}
@@ -882,11 +967,126 @@
}
+#ifdef BCMEMBEDIMAGE
+int
+dhdpcie_download_code_array(struct dhd_bus *bus)
+{
+ int bcmerror = -1;
+ int offset = 0;
+ unsigned char *p_dlarray = NULL;
+ unsigned int dlarray_size = 0;
+ unsigned int downloaded_len, remaining_len, len;
+ char *p_dlimagename, *p_dlimagever, *p_dlimagedate;
+ uint8 *memblock = NULL, *memptr;
+
+ downloaded_len = 0;
+ remaining_len = 0;
+ len = 0;
+
+ p_dlarray = dlarray;
+ dlarray_size = sizeof(dlarray);
+ p_dlimagename = dlimagename;
+ p_dlimagever = dlimagever;
+ p_dlimagedate = dlimagedate;
+
+ if ((p_dlarray == 0) || (dlarray_size == 0) || (dlarray_size > bus->ramsize) ||
+ (p_dlimagename == 0) || (p_dlimagever == 0) || (p_dlimagedate == 0))
+ goto err;
+
+ memptr = memblock = MALLOC(bus->dhd->osh, MEMBLOCK + DHD_SDALIGN);
+ if (memblock == NULL) {
+ DHD_ERROR(("%s: Failed to allocate memory %d bytes\n", __FUNCTION__, MEMBLOCK));
+ goto err;
+ }
+ if ((uint32)(uintptr)memblock % DHD_SDALIGN)
+ memptr += (DHD_SDALIGN - ((uint32)(uintptr)memblock % DHD_SDALIGN));
+
+ while (downloaded_len < dlarray_size) {
+ remaining_len = dlarray_size - downloaded_len;
+ if (remaining_len >= MEMBLOCK)
+ len = MEMBLOCK;
+ else
+ len = remaining_len;
+
+ memcpy(memptr, (p_dlarray + downloaded_len), len);
+ /* check if CR4 */
+ if (si_setcore(bus->sih, ARMCR4_CORE_ID, 0)) {
+ /* If the offset is 0, save the reset instruction to be written at address 0 */
+ if (offset == 0) {
+ bus->resetinstr = *(((uint32*)memptr));
+ /* Add start of RAM address to the address given by user */
+ offset += bus->dongle_ram_base;
+ }
+ }
+ bcmerror = dhdpcie_bus_membytes(bus, TRUE, offset, (uint8 *)memptr, len);
+ downloaded_len += len;
+ if (bcmerror) {
+ DHD_ERROR(("%s: error %d on writing %d membytes at 0x%08x\n",
+ __FUNCTION__, bcmerror, MEMBLOCK, offset));
+ goto err;
+ }
+ offset += MEMBLOCK;
+ }
+
+#ifdef DHD_DEBUG
+ /* Upload and compare the downloaded code */
+ {
+ unsigned char *ularray = NULL;
+ unsigned int uploaded_len;
+ uploaded_len = 0;
+ bcmerror = -1;
+ ularray = MALLOC(bus->dhd->osh, dlarray_size);
+ if (ularray == NULL)
+ goto upload_err;
+ /* Upload image to verify downloaded contents. */
+ offset = bus->dongle_ram_base;
+ memset(ularray, 0xaa, dlarray_size);
+ while (uploaded_len < dlarray_size) {
+ remaining_len = dlarray_size - uploaded_len;
+ if (remaining_len >= MEMBLOCK)
+ len = MEMBLOCK;
+ else
+ len = remaining_len;
+ bcmerror = dhdpcie_bus_membytes(bus, FALSE, offset,
+ (uint8 *)(ularray + uploaded_len), len);
+ if (bcmerror) {
+ DHD_ERROR(("%s: error %d on reading %d membytes at 0x%08x\n",
+ __FUNCTION__, bcmerror, MEMBLOCK, offset));
+ goto upload_err;
+ }
+
+ uploaded_len += len;
+ offset += MEMBLOCK;
+ }
+
+ if (memcmp(p_dlarray, ularray, dlarray_size)) {
+ DHD_ERROR(("%s: Downloaded image is corrupted (%s, %s, %s).\n",
+ __FUNCTION__, p_dlimagename, p_dlimagever, p_dlimagedate));
+ goto upload_err;
+
+ } else
+ DHD_ERROR(("%s: Download, Upload and compare succeeded (%s, %s, %s).\n",
+ __FUNCTION__, p_dlimagename, p_dlimagever, p_dlimagedate));
+upload_err:
+ if (ularray)
+ MFREE(bus->dhd->osh, ularray, dlarray_size);
+ }
+#endif /* DHD_DEBUG */
+err:
+
+ if (memblock)
+ MFREE(bus->dhd->osh, memblock, MEMBLOCK + DHD_SDALIGN);
+
+ return bcmerror;
+}
+#endif /* BCMEMBEDIMAGE */
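The download path above over-allocates its work buffer by DHD_SDALIGN and bumps the working pointer up to the next aligned boundary using the modulo form. That round-up can be sketched in isolation as follows; the DHD_SDALIGN value here is an assumption for illustration, and the driver's `(uint32)(uintptr)` cast is collapsed to a plain `uintptr_t`.

```c
#include <assert.h>
#include <stdint.h>

/* Assumed alignment for illustration; the driver defines its own DHD_SDALIGN. */
#define DHD_SDALIGN 32

/* Round a pointer up to the next DHD_SDALIGN boundary, mirroring:
 *   if (addr % DHD_SDALIGN)
 *       ptr += DHD_SDALIGN - (addr % DHD_SDALIGN);
 * Already-aligned pointers are returned unchanged. */
static uint8_t *align_up(uint8_t *p)
{
	uintptr_t a = (uintptr_t)p;

	if (a % DHD_SDALIGN)
		a += DHD_SDALIGN - (a % DHD_SDALIGN);
	return (uint8_t *)a;
}
```

Because the buffer is MEMBLOCK + DHD_SDALIGN bytes, the aligned pointer is always guaranteed to have MEMBLOCK usable bytes behind it.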
+
+
static int
_dhdpcie_download_firmware(struct dhd_bus *bus)
{
int bcmerror = -1;
-
+ dhd_pub_t *dhd = bus->dhd;
bool embed = FALSE; /* download embedded firmware */
bool dlok = FALSE; /* download firmware succeeded */
@@ -944,6 +1144,7 @@
/* If a valid nvram_arry is specified as above, it can be passed down to dongle */
/* dhd_bus_set_nvram_params(bus, (char *)&nvram_array); */
+
/* External nvram takes precedence if specified */
if (dhdpcie_download_nvram(bus)) {
DHD_ERROR(("%s: dongle nvram file download failed\n", __FUNCTION__));
@@ -955,9 +1156,15 @@
DHD_ERROR(("%s: error getting out of ARM core reset\n", __FUNCTION__));
goto err;
}
-
+ if (dhd) {
+ if (!dhd->soc_ram) {
+ dhd->soc_ram = MALLOC(dhd->osh, bus->ramsize);
+ dhd->soc_ram_length = bus->ramsize;
+ } else {
+ memset(dhd->soc_ram, 0, dhd->soc_ram_length);
+ }
+ }
bcmerror = 0;
-
err:
return bcmerror;
}
@@ -975,18 +1182,20 @@
/* Wait until control frame is available */
timeleft = dhd_os_ioctl_resp_wait(bus->dhd, &bus->rxlen, &pending);
- dhd_os_sdlock(bus->dhd);
+ if (timeleft == 0) {
+ DHD_ERROR(("%s: resumed on timeout\n", __FUNCTION__));
+ bus->ioct_resp.cmn_hdr.request_id = 0;
+ bus->ioct_resp.compl_hdr.status = 0xffff;
+ bus->rxlen = 0;
+ }
rxlen = bus->rxlen;
- bcopy(&bus->ioct_resp, msg, sizeof(ioct_resp_hdr_t));
+ bcopy(&bus->ioct_resp, msg, sizeof(ioctl_comp_resp_msg_t));
bus->rxlen = 0;
- dhd_os_sdunlock(bus->dhd);
if (rxlen) {
DHD_CTL(("%s: resumed on rxctl frame, got %d\n", __FUNCTION__, rxlen));
} else if (timeleft == 0) {
DHD_ERROR(("%s: resumed on timeout\n", __FUNCTION__));
- bus->ioct_resp.pkt_id = 0;
- bus->ioct_resp.status = 0xffff;
} else if (pending == TRUE) {
DHD_CTL(("%s: canceled\n", __FUNCTION__));
return -ERESTARTSYS;
@@ -1005,9 +1214,13 @@
else
bus->dhd->rx_ctlerrs++;
- if (bus->dhd->rxcnt_timeout >= MAX_CNTL_TX_TIMEOUT)
+ if (bus->dhd->rxcnt_timeout >= MAX_CNTL_TX_TIMEOUT) {
+#ifdef MSM_PCIE_LINKDOWN_RECOVERY
+ bus->islinkdown = TRUE;
+ DHD_ERROR(("PCIe link down\n"));
+#endif /* MSM_PCIE_LINKDOWN_RECOVERY */
return -ETIMEDOUT;
-
+ }
if (bus->dhd->dongle_trap_occured)
return -EREMOTEIO;
@@ -1031,7 +1244,7 @@
return -1;
/* Read console log struct */
- addr = bus->console_addr + OFFSETOF(hndrte_cons_t, log);
+ addr = bus->console_addr + OFFSETOF(hnd_cons_t, log);
if ((rv = dhdpcie_bus_membytes(bus, FALSE, addr, (uint8 *)&c->log, sizeof(c->log))) < 0)
return rv;
@@ -1087,7 +1300,248 @@
return BCME_OK;
}
+
+static int
+dhdpcie_checkdied(dhd_bus_t *bus, char *data, uint size)
+{
+ int bcmerror = 0;
+ uint msize = 512;
+ char *mbuffer = NULL;
+ char *console_buffer = NULL;
+ uint maxstrlen = 256;
+ char *str = NULL;
+ trap_t tr;
+ pciedev_shared_t *pciedev_shared = bus->pcie_sh;
+ struct bcmstrbuf strbuf;
+ uint32 console_ptr, console_size, console_index;
+ uint8 line[CONSOLE_LINE_MAX], ch;
+ uint32 n, i, addr;
+ int rv;
+
+ DHD_TRACE(("%s: Enter\n", __FUNCTION__));
+
+ if (DHD_NOCHECKDIED_ON())
+ return 0;
+
+ if (data == NULL) {
+ /*
+ * Called after a rx ctrl timeout. "data" is NULL.
+ * allocate memory to trace the trap or assert.
+ */
+ size = msize;
+ mbuffer = data = MALLOC(bus->dhd->osh, msize);
+
+ if (mbuffer == NULL) {
+ DHD_ERROR(("%s: MALLOC(%d) failed \n", __FUNCTION__, msize));
+ bcmerror = BCME_NOMEM;
+ goto done;
+ }
+ }
+
+ if ((str = MALLOC(bus->dhd->osh, maxstrlen)) == NULL) {
+ DHD_ERROR(("%s: MALLOC(%d) failed \n", __FUNCTION__, maxstrlen));
+ bcmerror = BCME_NOMEM;
+ goto done;
+ }
+
+ if ((bcmerror = dhdpcie_readshared(bus)) < 0)
+ goto done;
+
+ bcm_binit(&strbuf, data, size);
+
+ bcm_bprintf(&strbuf, "msgtrace address : 0x%08X\nconsole address : 0x%08X\n",
+ pciedev_shared->msgtrace_addr, pciedev_shared->console_addr);
+
+ if ((pciedev_shared->flags & PCIE_SHARED_ASSERT_BUILT) == 0) {
+ /* NOTE: Misspelled assert is intentional - DO NOT FIX.
+ * (Avoids conflict with real asserts for programmatic parsing of output.)
+ */
+ bcm_bprintf(&strbuf, "Assrt not built in dongle\n");
+ }
+
+ if ((bus->pcie_sh->flags & (PCIE_SHARED_ASSERT|PCIE_SHARED_TRAP)) == 0) {
+ /* NOTE: Misspelled assert is intentional - DO NOT FIX.
+ * (Avoids conflict with real asserts for programmatic parsing of output.)
+ */
+ bcm_bprintf(&strbuf, "No trap%s in dongle",
+ (bus->pcie_sh->flags & PCIE_SHARED_ASSERT_BUILT)
+ ?"/assrt" :"");
+ } else {
+ if (bus->pcie_sh->flags & PCIE_SHARED_ASSERT) {
+ /* Download assert */
+ bcm_bprintf(&strbuf, "Dongle assert");
+ if (bus->pcie_sh->assert_exp_addr != 0) {
+ str[0] = '\0';
+ if ((bcmerror = dhdpcie_bus_membytes(bus, FALSE,
+ bus->pcie_sh->assert_exp_addr,
+ (uint8 *)str, maxstrlen)) < 0)
+ goto done;
+
+ str[maxstrlen - 1] = '\0';
+ bcm_bprintf(&strbuf, " expr \"%s\"", str);
+ }
+
+ if (bus->pcie_sh->assert_file_addr != 0) {
+ str[0] = '\0';
+ if ((bcmerror = dhdpcie_bus_membytes(bus, FALSE,
+ bus->pcie_sh->assert_file_addr,
+ (uint8 *)str, maxstrlen)) < 0)
+ goto done;
+
+ str[maxstrlen - 1] = '\0';
+ bcm_bprintf(&strbuf, " file \"%s\"", str);
+ }
+
+ bcm_bprintf(&strbuf, " line %d ", bus->pcie_sh->assert_line);
+ }
+
+ if (bus->pcie_sh->flags & PCIE_SHARED_TRAP) {
+ bus->dhd->dongle_trap_occured = TRUE;
+ if ((bcmerror = dhdpcie_bus_membytes(bus, FALSE,
+ bus->pcie_sh->trap_addr,
+ (uint8*)&tr, sizeof(trap_t))) < 0)
+ goto done;
+
+ bcm_bprintf(&strbuf,
+ "Dongle trap type 0x%x @ epc 0x%x, cpsr 0x%x, spsr 0x%x, sp 0x%x,"
+ "lp 0x%x, rpc 0x%x Trap offset 0x%x, "
+ "r0 0x%x, r1 0x%x, r2 0x%x, r3 0x%x, "
+ "r4 0x%x, r5 0x%x, r6 0x%x, r7 0x%x\n\n",
+ ltoh32(tr.type), ltoh32(tr.epc), ltoh32(tr.cpsr), ltoh32(tr.spsr),
+ ltoh32(tr.r13), ltoh32(tr.r14), ltoh32(tr.pc),
+ ltoh32(bus->pcie_sh->trap_addr),
+ ltoh32(tr.r0), ltoh32(tr.r1), ltoh32(tr.r2), ltoh32(tr.r3),
+ ltoh32(tr.r4), ltoh32(tr.r5), ltoh32(tr.r6), ltoh32(tr.r7));
+
+ addr = bus->pcie_sh->console_addr + OFFSETOF(hnd_cons_t, log);
+ if ((rv = dhdpcie_bus_membytes(bus, FALSE, addr,
+ (uint8 *)&console_ptr, sizeof(console_ptr))) < 0)
+ goto printbuf;
+
+ addr = bus->pcie_sh->console_addr + OFFSETOF(hnd_cons_t, log.buf_size);
+ if ((rv = dhdpcie_bus_membytes(bus, FALSE, addr,
+ (uint8 *)&console_size, sizeof(console_size))) < 0)
+ goto printbuf;
+
+ addr = bus->pcie_sh->console_addr + OFFSETOF(hnd_cons_t, log.idx);
+ if ((rv = dhdpcie_bus_membytes(bus, FALSE, addr,
+ (uint8 *)&console_index, sizeof(console_index))) < 0)
+ goto printbuf;
+
+ console_ptr = ltoh32(console_ptr);
+ console_size = ltoh32(console_size);
+ console_index = ltoh32(console_index);
+
+ if (console_size > CONSOLE_BUFFER_MAX ||
+ !(console_buffer = MALLOC(bus->dhd->osh, console_size)))
+ goto printbuf;
+
+ if ((rv = dhdpcie_bus_membytes(bus, FALSE, console_ptr,
+ (uint8 *)console_buffer, console_size)) < 0)
+ goto printbuf;
+
+ for (i = 0, n = 0; i < console_size; i += n + 1) {
+ for (n = 0; n < CONSOLE_LINE_MAX - 2; n++) {
+ ch = console_buffer[(console_index + i + n) % console_size];
+ if (ch == '\n')
+ break;
+ line[n] = ch;
+ }
+
+ if (n > 0) {
+ if (line[n - 1] == '\r')
+ n--;
+ line[n] = 0;
+ /* Don't use DHD_ERROR macro since we print
+ * a lot of information quickly. The macro
+ * will truncate a lot of the printfs
+ */
+
+ if (dhd_msg_level & DHD_ERROR_VAL)
+ printf("CONSOLE: %s\n", line);
+ }
+ }
+ }
+ }
+
+printbuf:
+ if (bus->pcie_sh->flags & (PCIE_SHARED_ASSERT | PCIE_SHARED_TRAP)) {
+ DHD_ERROR(("%s: %s\n", __FUNCTION__, strbuf.origbuf));
+ }
+ /* save core dump or write to a file */
+ if (bus->dhd->memdump_enabled) {
+ dhdpcie_mem_dump(bus);
+ dhd_dbg_send_urgent_evt(bus->dhd, NULL, 0);
+ }
+
+done:
+ if (mbuffer)
+ MFREE(bus->dhd->osh, mbuffer, msize);
+ if (str)
+ MFREE(bus->dhd->osh, str, maxstrlen);
+
+ if (console_buffer)
+ MFREE(bus->dhd->osh, console_buffer, console_size);
+
+ return bcmerror;
+}
#endif /* DHD_DEBUG */
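The console-dump loop inside dhdpcie_checkdied() above walks a circular log buffer starting at the dongle's current write index (i.e. the oldest byte), splits on '\n', and trims a trailing '\r'. A self-contained sketch of that extraction, with lines collected into an array instead of printed (names and the line-length limit are simplifications):

```c
#include <assert.h>
#include <string.h>

#define LINE_MAX_LEN 64	/* stands in for CONSOLE_LINE_MAX */

/* Recover lines from a circular console buffer of `size` bytes whose oldest
 * byte sits at `index`. Each '\n'-terminated run is copied into `out`, with a
 * trailing '\r' dropped, mirroring the dump loop in dhdpcie_checkdied().
 * Returns the number of lines recovered. */
static int dump_console(const char *buf, unsigned size, unsigned index,
			char out[][LINE_MAX_LEN], int max_lines)
{
	char line[LINE_MAX_LEN];
	unsigned i, n;
	int lines = 0;

	for (i = 0; i < size && lines < max_lines; i += n + 1) {
		for (n = 0; n < LINE_MAX_LEN - 2; n++) {
			char ch = buf[(index + i + n) % size];
			if (ch == '\n')
				break;
			line[n] = ch;
		}
		if (n > 0) {
			if (line[n - 1] == '\r')
				n--;	/* drop CR from CRLF endings */
			line[n] = '\0';
			strcpy(out[lines], line);
			lines++;
		}
	}
	return lines;
}
```

The modulo indexing is what lets the walk wrap past the physical end of the buffer back to its start.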
+static void
+dhdpcie_bus_report_pcie_linkdown(dhd_bus_t *bus)
+{
+ if (bus == NULL)
+ return;
+#ifdef MSM_PCIE_LINKDOWN_RECOVERY
+ bus->islinkdown = TRUE;
+ DHD_ERROR(("PCIe link down, Device ID and Vendor ID are 0x%x\n",
+ dhdpcie_bus_cfg_read_dword(bus, PCI_VENDOR_ID, 4)));
+ dhd_os_send_hang_message(bus->dhd);
+#endif /* MSM_PCIE_LINKDOWN_RECOVERY */
+}
+static int
+dhdpcie_mem_dump(dhd_bus_t *bus)
+{
+ int ret = BCME_OK;
+ int size; /* Full mem size */
+ int start = bus->dongle_ram_base; /* Start address */
+ int read_size = 0; /* Read size of each iteration */
+ uint8 *databuf = NULL;
+ dhd_pub_t *dhd = bus->dhd;
+ if (!dhd->soc_ram) {
+ DHD_ERROR(("%s : dhd->soc_ram is NULL\n", __FUNCTION__));
+ return -1;
+ }
+ size = dhd->soc_ram_length = bus->ramsize;
+
+ /* Read mem content */
+ databuf = dhd->soc_ram;
+ while (size) {
+ read_size = MIN(MEMBLOCK, size);
+ if ((ret = dhdpcie_bus_membytes(bus, FALSE, start, databuf, read_size))) {
+ DHD_ERROR(("%s: Error membytes %d\n", __FUNCTION__, ret));
+ return BCME_ERROR;
+ }
+ DHD_TRACE(("."));
+
+ /* Decrement size and increment start address */
+ size -= read_size;
+ start += read_size;
+ databuf += read_size;
+ }
+
+ dhd_save_fwdump(bus->dhd, dhd->soc_ram, dhd->soc_ram_length);
+ dhd_schedule_memdump(bus->dhd, dhd->soc_ram, dhd->soc_ram_length);
+
+ return ret;
+}
+
+int
+dhd_socram_dump(dhd_bus_t *bus)
+{
+ return (dhdpcie_mem_dump(bus));
+}
+
/**
* Transfers bytes from host to dongle using pio mode.
@@ -1098,20 +1552,50 @@
{
int bcmerror = 0;
uint dsize;
- uint i = 0;
+ int detect_endian_flag = 0x01;
+ bool little_endian;
+#if defined(CONFIG_ARCH_MSM) && defined(CONFIG_64BIT)
+ bool is_64bit_unaligned;
+#endif
+
+ /* Detect endianness. */
+ little_endian = *(char *)&detect_endian_flag;
+
+#if defined(CONFIG_ARCH_MSM) && defined(CONFIG_64BIT)
+ /* Check 64bit aligned or not. */
+ is_64bit_unaligned = (address & 0x7);
+#endif
/* In remap mode, adjust address beyond socram and redirect
* to devram at SOCDEVRAM_BP_ADDR since remap address > orig_ramsize
* is not backplane accessible
*/
-
/* Determine initial transfer parameters */
- dsize = sizeof(uint8);
+ dsize = sizeof(uint64);
/* Do the transfer(s) */
if (write) {
while (size) {
- dhdpcie_bus_wtcm8(bus, address, *data);
+ if (size >= sizeof(uint64) && little_endian) {
+#if defined(CONFIG_ARCH_MSM) && defined(CONFIG_64BIT)
+ if (is_64bit_unaligned) {
+ DHD_INFO(("%s: write unaligned %lx\n",
+ __FUNCTION__, address));
+ dhdpcie_bus_wtcm32(bus, address, *((uint32 *)data));
+ data += 4;
+ size -= 4;
+ address += 4;
+ is_64bit_unaligned = (address & 0x7);
+ continue;
+ }
+ else
+#endif
+ dhdpcie_bus_wtcm64(bus, address, *((uint64 *)data));
+ } else {
+ dsize = sizeof(uint8);
+ dhdpcie_bus_wtcm8(bus, address, *data);
+ }
+
/* Adjust for next transfer (if any) */
if ((size -= dsize)) {
data += dsize;
@@ -1120,10 +1604,29 @@
}
} else {
while (size) {
- data[i] = dhdpcie_bus_rtcm8(bus, address);
+ if (size >= sizeof(uint64) && little_endian) {
+#if defined(CONFIG_ARCH_MSM) && defined(CONFIG_64BIT)
+ if (is_64bit_unaligned) {
+ DHD_INFO(("%s: read unaligned %lx\n",
+ __FUNCTION__, address));
+ *(uint32 *)data = dhdpcie_bus_rtcm32(bus, address);
+ data += 4;
+ size -= 4;
+ address += 4;
+ is_64bit_unaligned = (address & 0x7);
+ continue;
+ }
+ else
+#endif
+ *(uint64 *)data = dhdpcie_bus_rtcm64(bus, address);
+ } else {
+ dsize = sizeof(uint8);
+ *data = dhdpcie_bus_rtcm8(bus, address);
+ }
+
/* Adjust for next transfer (if any) */
- if ((size -= dsize)) {
- i++;
+ if ((size -= dsize) > 0) {
+ data += dsize;
address += dsize;
}
}
@@ -1131,13 +1634,144 @@
return bcmerror;
}
+int BCMFASTPATH
+dhd_bus_schedule_queue(struct dhd_bus *bus, uint16 flow_id, bool txs)
+{
+ flow_ring_node_t *flow_ring_node;
+ int ret = BCME_OK;
+
+ DHD_INFO(("%s: flow_id is %d\n", __FUNCTION__, flow_id));
+ /* ASSERT on flow_id */
+ if (flow_id >= bus->max_sub_queues) {
+ DHD_ERROR(("%s: flow_id is invalid %d, max %d\n", __FUNCTION__,
+ flow_id, bus->max_sub_queues));
+ return 0;
+ }
+
+ flow_ring_node = DHD_FLOW_RING(bus->dhd, flow_id);
+
+ {
+ unsigned long flags;
+ void *txp = NULL;
+ flow_queue_t *queue;
+
+ queue = &flow_ring_node->queue; /* queue associated with flow ring */
+
+ DHD_FLOWRING_LOCK(flow_ring_node->lock, flags);
+
+ if (flow_ring_node->status != FLOW_RING_STATUS_OPEN) {
+ DHD_FLOWRING_UNLOCK(flow_ring_node->lock, flags);
+ return BCME_NOTREADY;
+ }
+
+ while ((txp = dhd_flow_queue_dequeue(bus->dhd, queue)) != NULL) {
+#ifdef DHDTCPACK_SUPPRESS
+ dhd_tcpack_check_xmit(bus->dhd, txp);
+#endif /* DHDTCPACK_SUPPRESS */
+ /* Attempt to transfer packet over flow ring */
+
+ ret = dhd_prot_txdata(bus->dhd, txp, flow_ring_node->flow_info.ifindex);
+ if (ret != BCME_OK) { /* may not have resources in flow ring */
+ DHD_INFO(("%s: Reinsert %d\n", __FUNCTION__, ret));
+ dhd_prot_txdata_write_flush(bus->dhd, flow_id, FALSE);
+ /* reinsert at head */
+ dhd_flow_queue_reinsert(bus->dhd, queue, txp);
+ DHD_FLOWRING_UNLOCK(flow_ring_node->lock, flags);
+
+ /* If we are able to requeue back, return success */
+ return BCME_OK;
+ }
+ }
+
+ dhd_prot_txdata_write_flush(bus->dhd, flow_id, FALSE);
+
+ DHD_FLOWRING_UNLOCK(flow_ring_node->lock, flags);
+ }
+
+ return ret;
+}
+
/* Send a data frame to the dongle. Callee disposes of txp. */
int BCMFASTPATH
dhd_bus_txdata(struct dhd_bus *bus, void *txp, uint8 ifidx)
{
- return dhd_prot_txdata(bus->dhd, txp, ifidx);
+ unsigned long flags;
+ int ret = BCME_OK;
+ void *txp_pend = NULL;
+ if (!bus->txmode_push) {
+ uint16 flowid;
+ flow_queue_t *queue;
+ flow_ring_node_t *flow_ring_node;
+ if (!bus->dhd->flowid_allocator) {
+ DHD_ERROR(("%s: Flow ring not initialized yet\n", __FUNCTION__));
+ goto toss;
+ }
+
+ flowid = DHD_PKTTAG_FLOWID((dhd_pkttag_fr_t*)PKTTAG(txp));
+
+ flow_ring_node = DHD_FLOW_RING(bus->dhd, flowid);
+
+ DHD_TRACE(("%s: pkt flowid %d, status %d active %d\n",
+ __FUNCTION__, flowid, flow_ring_node->status,
+ flow_ring_node->active));
+
+ if ((flowid >= bus->dhd->num_flow_rings) ||
+ (!flow_ring_node->active) ||
+ (flow_ring_node->status == FLOW_RING_STATUS_DELETE_PENDING)) {
+ DHD_INFO(("%s: Dropping pkt flowid %d, status %d active %d\n",
+ __FUNCTION__, flowid, flow_ring_node->status,
+ flow_ring_node->active));
+ ret = BCME_ERROR;
+ goto toss;
+ }
+
+ queue = &flow_ring_node->queue; /* queue associated with flow ring */
+
+ DHD_FLOWRING_LOCK(flow_ring_node->lock, flags);
+
+ if ((ret = dhd_flow_queue_enqueue(bus->dhd, queue, txp)) != BCME_OK)
+ txp_pend = txp;
+
+ DHD_FLOWRING_UNLOCK(flow_ring_node->lock, flags);
+
+ if (flow_ring_node->status) {
+ DHD_INFO(("%s: Enq pkt flowid %d, status %d active %d\n",
+ __FUNCTION__, flowid, flow_ring_node->status,
+ flow_ring_node->active));
+ if (txp_pend) {
+ txp = txp_pend;
+ goto toss;
+ }
+ return BCME_OK;
+ }
+ ret = dhd_bus_schedule_queue(bus, flowid, FALSE);
+
+ /* If we have anything pending, try to push into q */
+ if (txp_pend) {
+ DHD_FLOWRING_LOCK(flow_ring_node->lock, flags);
+
+ if ((ret = dhd_flow_queue_enqueue(bus->dhd, queue, txp_pend)) != BCME_OK) {
+ DHD_FLOWRING_UNLOCK(flow_ring_node->lock, flags);
+ txp = txp_pend;
+ goto toss;
+ }
+
+ DHD_FLOWRING_UNLOCK(flow_ring_node->lock, flags);
+ }
+
+ return ret;
+
+ } else { /* bus->txmode_push */
+ return dhd_prot_txdata(bus->dhd, txp, ifidx);
+ }
+
+toss:
+ DHD_INFO(("%s: Toss %d\n", __FUNCTION__, ret));
+ PKTCFREE(bus->dhd->osh, txp, TRUE);
+ return ret;
}
+
void
dhd_bus_stop_queue(struct dhd_bus *bus)
{
@@ -1153,13 +1787,13 @@
}
void
-dhd_bus_update_retlen(dhd_bus_t *bus, uint32 retlen, uint32 pkt_id, uint32 status,
- uint32 inline_data)
+dhd_bus_update_retlen(dhd_bus_t *bus, uint32 retlen, uint32 pkt_id, uint16 status,
+ uint32 resp_len)
{
bus->rxlen = retlen;
- bus->ioct_resp.pkt_id = pkt_id;
- bus->ioct_resp.status = status;
- bus->ioct_resp.inline_data = inline_data;
+ bus->ioct_resp.cmn_hdr.request_id = pkt_id;
+ bus->ioct_resp.compl_hdr.status = status;
+ bus->ioct_resp.resp_len = (uint16)resp_len;
}
#if defined(DHD_DEBUG)
@@ -1173,9 +1807,6 @@
if (bus->console_addr == 0)
return BCME_UNSUPPORTED;
- /* Exclusive bus access */
- dhd_os_sdlock(bus->dhd);
-
/* Don't allow input if dongle is in reset */
if (bus->dhd->dongle_reset) {
dhd_os_sdunlock(bus->dhd);
@@ -1183,138 +1814,237 @@
}
/* Zero cbuf_index */
- addr = bus->console_addr + OFFSETOF(hndrte_cons_t, cbuf_idx);
+ addr = bus->console_addr + OFFSETOF(hnd_cons_t, cbuf_idx);
val = htol32(0);
if ((rv = dhdpcie_bus_membytes(bus, TRUE, addr, (uint8 *)&val, sizeof(val))) < 0)
goto done;
/* Write message into cbuf */
- addr = bus->console_addr + OFFSETOF(hndrte_cons_t, cbuf);
+ addr = bus->console_addr + OFFSETOF(hnd_cons_t, cbuf);
if ((rv = dhdpcie_bus_membytes(bus, TRUE, addr, (uint8 *)msg, msglen)) < 0)
goto done;
/* Write length into vcons_in */
- addr = bus->console_addr + OFFSETOF(hndrte_cons_t, vcons_in);
+ addr = bus->console_addr + OFFSETOF(hnd_cons_t, vcons_in);
val = htol32(msglen);
if ((rv = dhdpcie_bus_membytes(bus, TRUE, addr, (uint8 *)&val, sizeof(val))) < 0)
goto done;
- dhd_post_dummy_msg(bus->dhd);
+ /* generate an interrupt to the dongle to indicate that it needs to process the cons command */
+ dhdpcie_send_mb_data(bus, H2D_HOST_CONS_INT);
done:
-
- dhd_os_sdunlock(bus->dhd);
-
return rv;
}
#endif /* defined(DHD_DEBUG) */
/* Process rx frame , Send up the layer to netif */
-void
-dhd_bus_rx_frame(struct dhd_bus *bus, void* pkt, int ifidx, uint pkt_count)
+void BCMFASTPATH
+dhd_bus_rx_frame(struct dhd_bus *bus, void* pkt, int ifidx, uint pkt_count, int pkt_wake)
{
- dhd_os_sdunlock(bus->dhd);
- dhd_rx_frame(bus->dhd, ifidx, pkt, pkt_count, 0);
- dhd_os_sdlock(bus->dhd);
+ dhd_rx_frame(bus->dhd, ifidx, pkt, pkt_count, 0, pkt_wake, &bus->wake_counts);
}
+#if defined(CONFIG_ARCH_MSM) && defined(CONFIG_64BIT)
+static ulong dhd_bus_cmn_check_offset(dhd_bus_t *bus, ulong offset)
+{
+ uint new_bar1_wbase = 0;
+ ulong address = 0;
+
+ new_bar1_wbase = (uint)offset & bus->bar1_win_mask;
+ if (bus->bar1_win_base != new_bar1_wbase) {
+ bus->bar1_win_base = new_bar1_wbase;
+ dhdpcie_bus_cfg_set_bar1_win(bus, bus->bar1_win_base);
+ DHD_ERROR(("%s: offset=%lx, switch bar1_win_base to %x\n",
+ __FUNCTION__, offset, bus->bar1_win_base));
+ }
+
+ address = offset - bus->bar1_win_base;
+
+ return address;
+}
+#else
+#define dhd_bus_cmn_check_offset(x, y) y
+#endif /* defined(CONFIG_ARCH_MSM) && defined(CONFIG_64BIT) */
+
/** 'offset' is a backplane address */
void
dhdpcie_bus_wtcm8(dhd_bus_t *bus, ulong offset, uint8 data)
{
- *(volatile uint8 *)(bus->tcm + offset) = (uint8)data;
+ *(volatile uint8 *)(bus->tcm + dhd_bus_cmn_check_offset(bus, offset)) = (uint8)data;
}
uint8
dhdpcie_bus_rtcm8(dhd_bus_t *bus, ulong offset)
{
- volatile uint8 data = *(volatile uint8 *)(bus->tcm + offset);
+ volatile uint8 data;
+#ifdef BCM47XX_ACP_WAR
+ data = R_REG(bus->dhd->osh,
+ (volatile uint8 *)(bus->tcm + dhd_bus_cmn_check_offset(bus, offset)));
+#else
+ data = *(volatile uint8 *)(bus->tcm + dhd_bus_cmn_check_offset(bus, offset));
+#endif
return data;
}
void
dhdpcie_bus_wtcm32(dhd_bus_t *bus, ulong offset, uint32 data)
{
- *(volatile uint32 *)(bus->tcm + offset) = (uint32)data;
+ *(volatile uint32 *)(bus->tcm + dhd_bus_cmn_check_offset(bus, offset)) = (uint32)data;
}
void
dhdpcie_bus_wtcm16(dhd_bus_t *bus, ulong offset, uint16 data)
{
- *(volatile uint16 *)(bus->tcm + offset) = (uint16)data;
+ *(volatile uint16 *)(bus->tcm + dhd_bus_cmn_check_offset(bus, offset)) = (uint16)data;
+}
+void
+dhdpcie_bus_wtcm64(dhd_bus_t *bus, ulong offset, uint64 data)
+{
+ *(volatile uint64 *)(bus->tcm + dhd_bus_cmn_check_offset(bus, offset)) = (uint64)data;
}
uint16
dhdpcie_bus_rtcm16(dhd_bus_t *bus, ulong offset)
{
- volatile uint16 data = *(volatile uint16 *)(bus->tcm + offset);
+ volatile uint16 data;
+#ifdef BCM47XX_ACP_WAR
+ data = R_REG(bus->dhd->osh,
+ (volatile uint16 *)(bus->tcm + dhd_bus_cmn_check_offset(bus, offset)));
+#else
+ data = *(volatile uint16 *)(bus->tcm + dhd_bus_cmn_check_offset(bus, offset));
+#endif
return data;
}
uint32
dhdpcie_bus_rtcm32(dhd_bus_t *bus, ulong offset)
{
- volatile uint32 data = *(volatile uint32 *)(bus->tcm + offset);
+ volatile uint32 data;
+#ifdef BCM47XX_ACP_WAR
+ data = R_REG(bus->dhd->osh,
+ (volatile uint32 *)(bus->tcm + dhd_bus_cmn_check_offset(bus, offset)));
+#else
+ data = *(volatile uint32 *)(bus->tcm + dhd_bus_cmn_check_offset(bus, offset));
+#endif
+ return data;
+}
+
+uint64
+dhdpcie_bus_rtcm64(dhd_bus_t *bus, ulong offset)
+{
+ volatile uint64 data;
+#ifdef BCM47XX_ACP_WAR
+ data = R_REG(bus->dhd->osh,
+ (volatile uint64 *)(bus->tcm + dhd_bus_cmn_check_offset(bus, offset)));
+#else
+ data = *(volatile uint64 *)(bus->tcm + dhd_bus_cmn_check_offset(bus, offset));
+#endif
return data;
}
void
-dhd_bus_cmn_writeshared(dhd_bus_t *bus, void * data, uint32 len, uint8 type)
+dhd_bus_cmn_writeshared(dhd_bus_t *bus, void * data, uint32 len, uint8 type, uint16 ringid)
{
uint64 long_data;
ulong tcm_offset;
+ pciedev_shared_t *sh;
+ pciedev_shared_t *shmem = NULL;
+
+ sh = (pciedev_shared_t*)bus->shared_addr;
DHD_INFO(("%s: writing to msgbuf type %d, len %d\n", __FUNCTION__, type, len));
switch (type) {
- case DNGL_TO_HOST_BUF_ADDR :
+ case DNGL_TO_HOST_DMA_SCRATCH_BUFFER:
long_data = HTOL64(*(uint64 *)data);
- tcm_offset = bus->d2h_data_ring_mem_addr;
- tcm_offset += OFFSETOF(ring_mem_t, base_addr);
+ tcm_offset = (ulong)&(sh->host_dma_scratch_buffer);
dhdpcie_bus_membytes(bus, TRUE, tcm_offset, (uint8*) &long_data, len);
prhex(__FUNCTION__, data, len);
break;
- case HOST_TO_DNGL_BUF_ADDR :
+
+ case DNGL_TO_HOST_DMA_SCRATCH_BUFFER_LEN :
+ tcm_offset = (ulong)&(sh->host_dma_scratch_buffer_len);
+ dhdpcie_bus_wtcm32(bus, tcm_offset, (uint32) HTOL32(*(uint32 *)data));
+ prhex(__FUNCTION__, data, len);
+ break;
+
+ case HOST_TO_DNGL_DMA_WRITEINDX_BUFFER:
+ /* ring_info_ptr stored in pcie_sh */
+ shmem = (pciedev_shared_t *)bus->pcie_sh;
+
long_data = HTOL64(*(uint64 *)data);
- tcm_offset = bus->h2d_data_ring_mem_addr;
- tcm_offset += OFFSETOF(ring_mem_t, base_addr);
+ tcm_offset = (ulong)shmem->rings_info_ptr;
+ tcm_offset += OFFSETOF(ring_info_t, h2d_w_idx_hostaddr);
dhdpcie_bus_membytes(bus, TRUE, tcm_offset, (uint8*) &long_data, len);
prhex(__FUNCTION__, data, len);
break;
- case HOST_TO_DNGL_WPTR :
- tcm_offset = bus->h2d_data_ring_state_addr;
- tcm_offset += OFFSETOF(ring_state_t, w_offset);
- dhdpcie_bus_wtcm32(bus, tcm_offset, (uint32) HTOL32(*(uint32 *)data));
+
+ case HOST_TO_DNGL_DMA_READINDX_BUFFER:
+ /* ring_info_ptr stored in pcie_sh */
+ shmem = (pciedev_shared_t *)bus->pcie_sh;
+
+ long_data = HTOL64(*(uint64 *)data);
+ tcm_offset = (ulong)shmem->rings_info_ptr;
+ tcm_offset += OFFSETOF(ring_info_t, h2d_r_idx_hostaddr);
+ dhdpcie_bus_membytes(bus, TRUE, tcm_offset, (uint8*) &long_data, len);
+ prhex(__FUNCTION__, data, len);
break;
- case DNGL_TO_HOST_RPTR :
- tcm_offset = bus->d2h_data_ring_state_addr;
- tcm_offset += OFFSETOF(ring_state_t, r_offset);
+
+ case DNGL_TO_HOST_DMA_WRITEINDX_BUFFER:
+ /* ring_info_ptr stored in pcie_sh */
+ shmem = (pciedev_shared_t *)bus->pcie_sh;
+
+ long_data = HTOL64(*(uint64 *)data);
+ tcm_offset = (ulong)shmem->rings_info_ptr;
+ tcm_offset += OFFSETOF(ring_info_t, d2h_w_idx_hostaddr);
+ dhdpcie_bus_membytes(bus, TRUE, tcm_offset, (uint8*) &long_data, len);
+ prhex(__FUNCTION__, data, len);
+ break;
+
+ case DNGL_TO_HOST_DMA_READINDX_BUFFER:
+ /* ring_info_ptr stored in pcie_sh */
+ shmem = (pciedev_shared_t *)bus->pcie_sh;
+
+ long_data = HTOL64(*(uint64 *)data);
+ tcm_offset = (ulong)shmem->rings_info_ptr;
+ tcm_offset += OFFSETOF(ring_info_t, d2h_r_idx_hostaddr);
+ dhdpcie_bus_membytes(bus, TRUE, tcm_offset, (uint8*) &long_data, len);
+ prhex(__FUNCTION__, data, len);
+ break;
+
+ case RING_LEN_ITEMS :
+ tcm_offset = bus->ring_sh[ringid].ring_mem_addr;
+ tcm_offset += OFFSETOF(ring_mem_t, len_items);
dhdpcie_bus_wtcm16(bus, tcm_offset, (uint16) HTOL16(*(uint16 *)data));
break;
- case HOST_TO_DNGL_CTRLBUF_ADDR:
- long_data = HTOL64(*(uint64 *)data);
- tcm_offset = bus->h2d_ctrl_ring_mem_addr;
- tcm_offset += OFFSETOF(ring_mem_t, base_addr);
- dhdpcie_bus_membytes(bus, TRUE, tcm_offset, (uint8 *) &long_data, len);
- break;
- case DNGL_TO_HOST_CTRLBUF_ADDR:
- long_data = HTOL64(*(uint64 *)data);
- tcm_offset = bus->d2h_ctrl_ring_mem_addr;
- tcm_offset += OFFSETOF(ring_mem_t, base_addr);
- dhdpcie_bus_membytes(bus, TRUE, tcm_offset, (uint8 *) &long_data, len);
- break;
- case HTOD_CTRL_WPTR:
- tcm_offset = bus->h2d_ctrl_ring_state_addr;
- tcm_offset += OFFSETOF(ring_state_t, w_offset);
- dhdpcie_bus_wtcm32(bus, tcm_offset, (uint32) HTOL32(*(uint32 *)data));
- break;
- case DTOH_CTRL_RPTR:
- tcm_offset = bus->d2h_ctrl_ring_state_addr;
- tcm_offset += OFFSETOF(ring_state_t, r_offset);
+
+ case RING_MAX_ITEM :
+ tcm_offset = bus->ring_sh[ringid].ring_mem_addr;
+ tcm_offset += OFFSETOF(ring_mem_t, max_item);
dhdpcie_bus_wtcm16(bus, tcm_offset, (uint16) HTOL16(*(uint16 *)data));
break;
+
+ case RING_BUF_ADDR :
+ long_data = HTOL64(*(uint64 *)data);
+ tcm_offset = bus->ring_sh[ringid].ring_mem_addr;
+ tcm_offset += OFFSETOF(ring_mem_t, base_addr);
+ dhdpcie_bus_membytes(bus, TRUE, tcm_offset, (uint8 *) &long_data, len);
+ prhex(__FUNCTION__, data, len);
+ break;
+
+ case RING_WRITE_PTR :
+ tcm_offset = bus->ring_sh[ringid].ring_state_w;
+ dhdpcie_bus_wtcm16(bus, tcm_offset, (uint16) HTOL16(*(uint16 *)data));
+ break;
+ case RING_READ_PTR :
+ tcm_offset = bus->ring_sh[ringid].ring_state_r;
+ dhdpcie_bus_wtcm16(bus, tcm_offset, (uint16) HTOL16(*(uint16 *)data));
+ break;
+
case DTOH_MB_DATA:
dhdpcie_bus_wtcm32(bus, bus->d2h_mb_data_ptr_addr,
(uint32) HTOL32(*(uint32 *)data));
break;
+
case HTOD_MB_DATA:
dhdpcie_bus_wtcm32(bus, bus->h2d_mb_data_ptr_addr,
(uint32) HTOL32(*(uint32 *)data));
@@ -1326,7 +2056,7 @@
void
-dhd_bus_cmn_readshared(dhd_bus_t *bus, void* data, uint8 type)
+dhd_bus_cmn_readshared(dhd_bus_t *bus, void* data, uint8 type, uint16 ringid)
{
pciedev_shared_t *sh;
ulong tcm_offset;
@@ -1334,30 +2064,18 @@
sh = (pciedev_shared_t*)bus->shared_addr;
switch (type) {
- case HOST_TO_DNGL_RPTR :
- tcm_offset = bus->h2d_data_ring_state_addr;
- tcm_offset += OFFSETOF(ring_state_t, r_offset);
+ case RING_WRITE_PTR :
+ tcm_offset = bus->ring_sh[ringid].ring_state_w;
*(uint16*)data = LTOH16(dhdpcie_bus_rtcm16(bus, tcm_offset));
break;
- case DNGL_TO_HOST_WPTR :
- tcm_offset = bus->d2h_data_ring_state_addr;
- tcm_offset += OFFSETOF(ring_state_t, w_offset);
- *(uint32*)data = LTOH32(dhdpcie_bus_rtcm32(bus, tcm_offset));
+ case RING_READ_PTR :
+ tcm_offset = bus->ring_sh[ringid].ring_state_r;
+ *(uint16*)data = LTOH16(dhdpcie_bus_rtcm16(bus, tcm_offset));
break;
case TOTAL_LFRAG_PACKET_CNT :
*(uint16*)data = LTOH16(dhdpcie_bus_rtcm16(bus,
(ulong) &sh->total_lfrag_pkt_cnt));
break;
- case HTOD_CTRL_RPTR:
- tcm_offset = bus->h2d_ctrl_ring_state_addr;
- tcm_offset += OFFSETOF(ring_state_t, r_offset);
- *(uint16*)data = LTOH16(dhdpcie_bus_rtcm16(bus, tcm_offset));
- break;
- case DTOH_CTRL_WPTR:
- tcm_offset = bus->d2h_ctrl_ring_state_addr;
- tcm_offset += OFFSETOF(ring_state_t, w_offset);
- *(uint32*)data = LTOH32(dhdpcie_bus_rtcm32(bus, tcm_offset));
- break;
case HTOD_MB_DATA:
*(uint32*)data = LTOH32(dhdpcie_bus_rtcm32(bus, bus->h2d_mb_data_ptr_addr));
break;
@@ -1436,6 +2154,420 @@
return bcmerror;
}
+#ifdef BCM_BUZZZ
+#include <bcm_buzzz.h>
+
+int dhd_buzzz_dump_cntrs3(char *p, uint32 *core, uint32 * ovhd, uint32 *log)
+{
+ int bytes = 0;
+ uint32 ctr, curr[3], prev[3], delta[3];
+
+ /* Compute elapsed counter values per counter event type */
+ for (ctr = 0U; ctr < 3; ctr++) {
+ prev[ctr] = core[ctr];
+ curr[ctr] = *log++;
+ core[ctr] = curr[ctr]; /* saved for next log */
+
+ if (curr[ctr] < prev[ctr])
+ delta[ctr] = curr[ctr] + (~0U - prev[ctr]);
+ else
+ delta[ctr] = (curr[ctr] - prev[ctr]);
+
+ /* Adjust for instrumentation overhead */
+ if (delta[ctr] >= ovhd[ctr])
+ delta[ctr] -= ovhd[ctr];
+ else
+ delta[ctr] = 0;
+
+ bytes += sprintf(p + bytes, "%12u ", delta[ctr]);
+ }
+
+ return bytes;
+}
+
+typedef union cm3_cnts { /* export this in bcm_buzzz.h */
+ uint32 u32;
+ uint8 u8[4];
+ struct {
+ uint8 cpicnt;
+ uint8 exccnt;
+ uint8 sleepcnt;
+ uint8 lsucnt;
+ };
+} cm3_cnts_t;
+
+int dhd_buzzz_dump_cntrs6(char *p, uint32 *core, uint32 * ovhd, uint32 *log)
+{
+ int bytes = 0;
+
+ uint32 cyccnt, instrcnt;
+ cm3_cnts_t cm3_cnts;
+ uint8 foldcnt;
+
+ { /* 32bit cyccnt */
+ uint32 curr, prev, delta;
+ prev = core[0]; curr = *log++; core[0] = curr;
+ if (curr < prev)
+ delta = curr + (~0U - prev);
+ else
+ delta = (curr - prev);
+ if (delta >= ovhd[0])
+ delta -= ovhd[0];
+ else
+ delta = 0;
+
+ bytes += sprintf(p + bytes, "%12u ", delta);
+ cyccnt = delta;
+ }
+
+ { /* Extract the 4 cnts: cpi, exc, sleep and lsu */
+ int i;
+ uint8 max8 = ~0;
+ cm3_cnts_t curr, prev, delta;
+ prev.u32 = core[1]; curr.u32 = * log++; core[1] = curr.u32;
+ for (i = 0; i < 4; i++) {
+ if (curr.u8[i] < prev.u8[i])
+ delta.u8[i] = curr.u8[i] + (max8 - prev.u8[i]);
+ else
+ delta.u8[i] = (curr.u8[i] - prev.u8[i]);
+ if (delta.u8[i] >= ovhd[i + 1])
+ delta.u8[i] -= ovhd[i + 1];
+ else
+ delta.u8[i] = 0;
+ bytes += sprintf(p + bytes, "%4u ", delta.u8[i]);
+ }
+ cm3_cnts.u32 = delta.u32;
+ }
+
+ { /* Extract the foldcnt from arg0 */
+ uint8 curr, prev, delta, max8 = ~0;
+ buzzz_arg0_t arg0; arg0.u32 = *log;
+ prev = core[2]; curr = arg0.klog.cnt; core[2] = curr;
+ if (curr < prev)
+ delta = curr + (max8 - prev);
+ else
+ delta = (curr - prev);
+ if (delta >= ovhd[5])
+ delta -= ovhd[5];
+ else
+ delta = 0;
+ bytes += sprintf(p + bytes, "%4u ", delta);
+ foldcnt = delta;
+ }
+
+ instrcnt = cyccnt - (cm3_cnts.u8[0] + cm3_cnts.u8[1] + cm3_cnts.u8[2]
+ + cm3_cnts.u8[3]) + foldcnt;
+ if (instrcnt > 0xFFFFFF00)
+ bytes += sprintf(p + bytes, "[%10s] ", "~");
+ else
+ bytes += sprintf(p + bytes, "[%10u] ", instrcnt);
+ return bytes;
+}
+
+int dhd_buzzz_dump_log(char * p, uint32 * core, uint32 * log, buzzz_t * buzzz)
+{
+ int bytes = 0;
+ buzzz_arg0_t arg0;
+ static uint8 * fmt[] = BUZZZ_FMT_STRINGS;
+
+ if (buzzz->counters == 6) {
+ bytes += dhd_buzzz_dump_cntrs6(p, core, buzzz->ovhd, log);
+ log += 2; /* 32bit cyccnt + (4 x 8bit) CM3 */
+ } else {
+ bytes += dhd_buzzz_dump_cntrs3(p, core, buzzz->ovhd, log);
+ log += 3; /* (3 x 32bit) CR4 */
+ }
+
+ /* Dump the logged arguments using the registered formats */
+ arg0.u32 = *log++;
+
+ switch (arg0.klog.args) {
+ case 0:
+ bytes += sprintf(p + bytes, fmt[arg0.klog.id]);
+ break;
+ case 1:
+ {
+ uint32 arg1 = *log++;
+ bytes += sprintf(p + bytes, fmt[arg0.klog.id], arg1);
+ break;
+ }
+ default:
+ printf("Maximum one argument supported\n");
+ break;
+ }
+ bytes += sprintf(p + bytes, "\n");
+
+ return bytes;
+}
+
+void dhd_buzzz_dump(buzzz_t * buzzz_p, void * buffer_p, char * p)
+{
+ int i;
+ uint32 total, part1, part2, log_sz, core[BUZZZ_COUNTERS_MAX];
+ void * log;
+
+ for (i = 0; i < BUZZZ_COUNTERS_MAX; i++)
+ core[i] = 0;
+
+ log_sz = buzzz_p->log_sz;
+
+ part1 = ((uint32)buzzz_p->cur - (uint32)buzzz_p->log) / log_sz;
+
+ if (buzzz_p->wrap == TRUE) {
+ part2 = ((uint32)buzzz_p->end - (uint32)buzzz_p->cur) / log_sz;
+ total = (buzzz_p->buffer_sz - BUZZZ_LOGENTRY_MAXSZ) / log_sz;
+ } else {
+ part2 = 0U;
+ total = buzzz_p->count;
+ }
+
+ if (total == 0U) {
+ printf("buzzz_dump total<%u> done\n", total);
+ return;
+ } else {
+ printf("buzzz_dump total<%u> : part2<%u> + part1<%u>\n",
+ total, part2, part1);
+ }
+
+ if (part2) { /* with wrap */
+ log = (void*)((size_t)buffer_p + (buzzz_p->cur - buzzz_p->log));
+ while (part2--) { /* from cur to end : part2 */
+ p[0] = '\0';
+ dhd_buzzz_dump_log(p, core, (uint32 *)log, buzzz_p);
+ printf("%s", p);
+ log = (void*)((size_t)log + buzzz_p->log_sz);
+ }
+ }
+
+ log = (void*)buffer_p;
+ while (part1--) {
+ p[0] = '\0';
+ dhd_buzzz_dump_log(p, core, (uint32 *)log, buzzz_p);
+ printf("%s", p);
+ log = (void*)((size_t)log + buzzz_p->log_sz);
+ }
+
+ printf("buzzz_dump done.\n");
+}
+
+int dhd_buzzz_dump_dngl(dhd_bus_t *bus)
+{
+ buzzz_t * buzzz_p = NULL;
+ void * buffer_p = NULL;
+ char * page_p = NULL;
+ pciedev_shared_t *sh;
+ int ret = 0;
+
+ if (bus->dhd->busstate != DHD_BUS_DATA) {
+ return BCME_UNSUPPORTED;
+ }
+ if ((page_p = (char *)MALLOC(bus->dhd->osh, 4096)) == NULL) {
+ printf("Page memory allocation failure\n");
+ goto done;
+ }
+ if ((buzzz_p = MALLOC(bus->dhd->osh, sizeof(buzzz_t))) == NULL) {
+ printf("Buzzz memory allocation failure\n");
+ goto done;
+ }
+
+ ret = dhdpcie_readshared(bus);
+ if (ret < 0) {
+ DHD_ERROR(("%s :Shared area read failed \n", __FUNCTION__));
+ goto done;
+ }
+
+ sh = bus->pcie_sh;
+
+ DHD_INFO(("%s buzzz:%08x\n", __FUNCTION__, sh->buzzz));
+
+ if (sh->buzzz != 0U) { /* Fetch and display dongle BUZZZ Trace */
+ dhdpcie_bus_membytes(bus, FALSE, (ulong)sh->buzzz,
+ (uint8 *)buzzz_p, sizeof(buzzz_t));
+ if (buzzz_p->count == 0) {
+ printf("Empty dongle BUZZZ trace\n\n");
+ goto done;
+ }
+ if (buzzz_p->counters != 3) { /* 3 counters for CR4 */
+ printf("Counters<%u> mismatch\n", buzzz_p->counters);
+ goto done;
+ }
+ /* Allocate memory for trace buffer and format strings */
+ buffer_p = MALLOC(bus->dhd->osh, buzzz_p->buffer_sz);
+ if (buffer_p == NULL) {
+ printf("Buffer memory allocation failure\n");
+ goto done;
+ }
+ /* Fetch the trace and format strings */
+ dhdpcie_bus_membytes(bus, FALSE, (uint32)buzzz_p->log, /* Trace */
+ (uint8 *)buffer_p, buzzz_p->buffer_sz);
+ /* Process and display the trace using formatted output */
+ printf("<#cycle> <#instruction> <#ctr3> <event information>\n");
+ dhd_buzzz_dump(buzzz_p, buffer_p, page_p);
+ printf("----- End of dongle BUZZZ Trace -----\n\n");
+ MFREE(bus->dhd->osh, buffer_p, buzzz_p->buffer_sz); buffer_p = NULL;
+ }
+
+done:
+
+ if (page_p) MFREE(bus->dhd->osh, page_p, 4096);
+ /* free the trace buffer before buzzz_p, since its size lives in buzzz_p */
+ if (buffer_p) MFREE(bus->dhd->osh, buffer_p, buzzz_p->buffer_sz);
+ if (buzzz_p) MFREE(bus->dhd->osh, buzzz_p, sizeof(buzzz_t));
+
+ return BCME_OK;
+}
+#endif /* BCM_BUZZZ */
+int
+dhd_bus_devreset(dhd_pub_t *dhdp, uint8 flag)
+{
+ dhd_bus_t *bus = dhdp->bus;
+ int ret = 0;
+#ifdef CONFIG_ARCH_MSM
+ int retry = POWERUP_MAX_RETRY;
+#endif /* CONFIG_ARCH_MSM */
+
+ if (dhd_download_fw_on_driverload) {
+ ret = dhd_bus_start(dhdp);
+ } else {
+ if (flag == TRUE) {
+ /* Turn off WLAN */
+ DHD_ERROR(("%s: == Power OFF ==\n", __FUNCTION__));
+ bus->dhd->up = FALSE;
+ if (bus->dhd->busstate != DHD_BUS_DOWN) {
+ if (bus->intr) {
+ dhdpcie_bus_intr_disable(bus);
+ dhdpcie_free_irq(bus);
+ }
+
+ dhd_os_wd_timer(dhdp, 0);
+ dhd_dbg_start(dhdp, 0);
+ dhd_bus_stop(bus, TRUE);
+ dhd_prot_clear(dhdp);
+ dhd_clear(dhdp);
+ dhd_bus_release_dongle(bus);
+ dhdpcie_bus_free_resource(bus);
+ ret = dhdpcie_bus_disable_device(bus);
+ if (ret) {
+ DHD_ERROR(("%s: dhdpcie_bus_disable_device: %d\n",
+ __FUNCTION__, ret));
+ goto done;
+ }
+
+#ifdef CONFIG_ARCH_MSM
+ ret = dhdpcie_bus_clock_stop(bus);
+ if (ret) {
+ DHD_ERROR(("%s: host clock stop failed: %d\n",
+ __FUNCTION__, ret));
+ goto done;
+ }
+#endif /* CONFIG_ARCH_MSM */
+ bus->dhd->busstate = DHD_BUS_DOWN;
+ } else {
+ if (bus->intr) {
+ dhdpcie_bus_intr_disable(bus);
+ dhdpcie_free_irq(bus);
+ }
+
+ dhd_prot_clear(dhdp);
+ dhd_clear(dhdp);
+ dhd_bus_release_dongle(bus);
+ dhdpcie_bus_free_resource(bus);
+ ret = dhdpcie_bus_disable_device(bus);
+ if (ret) {
+ DHD_ERROR(("%s: dhdpcie_bus_disable_device: %d\n",
+ __FUNCTION__, ret));
+ goto done;
+ }
+#ifdef CONFIG_ARCH_MSM
+ ret = dhdpcie_bus_clock_stop(bus);
+ if (ret) {
+ DHD_ERROR(("%s: host clock stop failed: %d\n",
+ __FUNCTION__, ret));
+ goto done;
+ }
+#endif /* CONFIG_ARCH_MSM */
+ }
+
+ bus->dhd->dongle_reset = TRUE;
+ DHD_ERROR(("%s: WLAN OFF Done\n", __FUNCTION__));
+
+ } else {
+ if (bus->dhd->busstate == DHD_BUS_DOWN) {
+ /* Turn on WLAN */
+ DHD_ERROR(("%s: == Power ON ==\n", __FUNCTION__));
+#ifdef CONFIG_ARCH_MSM
+ while (retry--) {
+ ret = dhdpcie_bus_clock_start(bus);
+ if (!ret) {
+ DHD_ERROR(("%s: dhdpcie_bus_clock_start OK\n",
+ __FUNCTION__));
+ break;
+ }
+ else
+ OSL_SLEEP(10);
+ }
+
+ if (ret && !retry) {
+ DHD_ERROR(("%s: host pcie clock enable failed: %d\n",
+ __FUNCTION__, ret));
+ goto done;
+ }
+#endif /* CONFIG_ARCH_MSM */
+ ret = dhdpcie_bus_enable_device(bus);
+ if (ret) {
+ DHD_ERROR(("%s: host configuration restore failed: %d\n",
+ __FUNCTION__, ret));
+ goto done;
+ }
+
+ ret = dhdpcie_bus_alloc_resource(bus);
+ if (ret) {
+ DHD_ERROR(("%s: dhdpcie_bus_resource_alloc failed: %d\n",
+ __FUNCTION__, ret));
+ goto done;
+ }
+
+ ret = dhdpcie_bus_dongle_attach(bus);
+ if (ret) {
+ DHD_ERROR(("%s: dhdpcie_bus_dongle_attach failed: %d\n",
+ __FUNCTION__, ret));
+ goto done;
+ }
+
+ ret = dhd_bus_request_irq(bus);
+ if (ret) {
+ DHD_ERROR(("%s: dhd_bus_request_irq failed: %d\n",
+ __FUNCTION__, ret));
+ goto done;
+ }
+
+ bus->dhd->dongle_reset = FALSE;
+
+ ret = dhd_bus_start(dhdp);
+ if (ret) {
+ DHD_ERROR(("%s: dhd_bus_start: %d\n",
+ __FUNCTION__, ret));
+ goto done;
+ }
+ ret = dhd_dbg_start(dhdp, 1);
+ if (ret) {
+ DHD_ERROR(("%s: dhd_dbg_start: %d\n",
+ __FUNCTION__, ret));
+ goto done;
+ }
+ bus->dhd->up = TRUE;
+ DHD_ERROR(("%s: WLAN Power On Done\n", __FUNCTION__));
+ } else {
+ DHD_ERROR(("%s: what should we do here\n", __FUNCTION__));
+ goto done;
+ }
+ }
+ }
+done:
+ if (ret)
+ bus->dhd->busstate = DHD_BUS_DOWN;
+
+ return ret;
+}
static int
dhdpcie_bus_doiovar(dhd_bus_t *bus, const bcm_iovar_t *vi, uint32 actionid, const char *name,
@@ -1464,9 +2596,6 @@
bool_val = (int_val != 0) ? TRUE : FALSE;
- /* Some ioctls use the bus */
- dhd_os_sdlock(bus->dhd);
-
/* Check if dongle is in reset. If so, only allow DEVRESET iovars */
if (bus->dhd->dongle_reset && !(actionid == IOV_SVAL(IOV_DEVRESET) ||
actionid == IOV_GVAL(IOV_DEVRESET))) {
@@ -1493,13 +2622,47 @@
int_val);
int_val = si_corereg(bus->sih, bus->sih->buscoreidx,
OFFSETOF(sbpcieregs_t, configdata), 0, 0);
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
+ break;
+
+ case IOV_GVAL(IOV_BAR0_SECWIN_REG):
+ {
+ uint32 cur_base, base;
+ uchar *bar0;
+ volatile uint32 *offset;
+ /* set the bar0 secondary window to this */
+ /* write the register value */
+ cur_base = dhdpcie_bus_cfg_read_dword(bus, PCIE2_BAR0_CORE2_WIN, sizeof(uint));
+ base = int_val & 0xFFFFF000;
+ dhdpcie_bus_cfg_write_dword(bus, PCIE2_BAR0_CORE2_WIN, sizeof(uint32), base);
+ bar0 = (uchar *)bus->regs;
+ offset = (uint32 *)(bar0 + 0x4000 + (int_val & 0xFFF));
+ int_val = *offset;
+ bcopy(&int_val, arg, sizeof(int_val));
+ dhdpcie_bus_cfg_write_dword(bus, PCIE2_BAR0_CORE2_WIN, sizeof(uint32), cur_base);
+ }
+ break;
+ case IOV_SVAL(IOV_BAR0_SECWIN_REG):
+ {
+ uint32 cur_base, base;
+ uchar *bar0;
+ volatile uint32 *offset;
+ /* set the bar0 secondary window to this */
+ /* write the register value */
+ cur_base = dhdpcie_bus_cfg_read_dword(bus, PCIE2_BAR0_CORE2_WIN, sizeof(uint));
+ base = int_val & 0xFFFFF000;
+ dhdpcie_bus_cfg_write_dword(bus, PCIE2_BAR0_CORE2_WIN, sizeof(uint32), base);
+ bar0 = (uchar *)bus->regs;
+ offset = (uint32 *)(bar0 + 0x4000 + (int_val & 0xFFF));
+ *offset = int_val2;
+ bcopy(&int_val2, arg, val_size);
+ dhdpcie_bus_cfg_write_dword(bus, PCIE2_BAR0_CORE2_WIN, sizeof(uint32), cur_base);
+ }
break;
case IOV_SVAL(IOV_PCIECOREREG):
si_corereg(bus->sih, bus->sih->buscoreidx, int_val, ~0, int_val2);
break;
-
case IOV_GVAL(IOV_SBREG):
{
sdreg_t sdreg;
@@ -1533,7 +2696,7 @@
case IOV_GVAL(IOV_PCIECOREREG):
int_val = si_corereg(bus->sih, bus->sih->buscoreidx, int_val, 0, 0);
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_PCIECFGREG):
@@ -1542,16 +2705,29 @@
case IOV_GVAL(IOV_PCIECFGREG):
int_val = OSL_PCI_READ_CONFIG(bus->osh, int_val, 4);
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_PCIE_LPBK):
bcmerror = dhdpcie_bus_lpback_req(bus, int_val);
break;
+ case IOV_SVAL(IOV_PCIE_DMAXFER):
+ bcmerror = dhdpcie_bus_dmaxfer_req(bus, int_val, int_val2, int_val3);
+ break;
+
+ case IOV_GVAL(IOV_PCIE_SUSPEND):
+ int_val = (bus->dhd->busstate == DHD_BUS_SUSPEND) ? 1 : 0;
+ bcopy(&int_val, arg, sizeof(int_val));
+ break;
+
+ case IOV_SVAL(IOV_PCIE_SUSPEND):
+ dhdpcie_bus_suspend(bus, bool_val);
+ break;
+
case IOV_GVAL(IOV_MEMSIZE):
int_val = (int32)bus->ramsize;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_MEMBYTES):
case IOV_GVAL(IOV_MEMBYTES):
@@ -1639,18 +2815,24 @@
break;
}
+#ifdef BCM_BUZZZ
+ case IOV_GVAL(IOV_BUZZZ_DUMP):
+ bcmerror = dhd_buzzz_dump_dngl(bus);
+ break;
+#endif /* BCM_BUZZZ */
+
case IOV_SVAL(IOV_SET_DOWNLOAD_STATE):
bcmerror = dhdpcie_bus_download_state(bus, bool_val);
break;
case IOV_GVAL(IOV_RAMSIZE):
int_val = (int32)bus->ramsize;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_GVAL(IOV_RAMSTART):
int_val = (int32)bus->dongle_ram_base;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_GVAL(IOV_CC_NVMSHADOW):
@@ -1673,7 +2855,7 @@
case IOV_GVAL(IOV_DONGLEISOLATION):
int_val = bus->dhd->dongle_isolation;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_DONGLEISOLATION):
@@ -1682,23 +2864,112 @@
case IOV_GVAL(IOV_LTRSLEEPON_UNLOOAD):
int_val = bus->ltrsleep_on_unload;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_LTRSLEEPON_UNLOOAD):
bus->ltrsleep_on_unload = bool_val;
break;
+ case IOV_GVAL(IOV_DUMP_RINGUPD_BLOCK):
+ {
+ struct bcmstrbuf dump_b;
+ bcm_binit(&dump_b, arg, len);
+ bcmerror = dhd_prot_ringupd_dump(bus->dhd, &dump_b);
+ break;
+ }
+ case IOV_GVAL(IOV_DMA_RINGINDICES):
+ { int h2d_support, d2h_support;
+
+ d2h_support = DMA_INDX_ENAB(bus->dhd->dma_d2h_ring_upd_support) ? 1 : 0;
+ h2d_support = DMA_INDX_ENAB(bus->dhd->dma_h2d_ring_upd_support) ? 1 : 0;
+ int_val = d2h_support | (h2d_support << 1);
+ bcopy(&int_val, arg, sizeof(int_val));
+ break;
+ }
+ case IOV_SVAL(IOV_DMA_RINGINDICES):
+ /* Can change it only during initialization/FW download */
+ if (bus->dhd->busstate == DHD_BUS_DOWN) {
+ if ((int_val > 3) || (int_val < 0)) {
+ DHD_ERROR(("Bad argument. Possible values: 0, 1, 2 & 3\n"));
+ bcmerror = BCME_BADARG;
+ } else {
+ bus->dhd->dma_d2h_ring_upd_support = (int_val & 1) ? TRUE : FALSE;
+ bus->dhd->dma_h2d_ring_upd_support = (int_val & 2) ? TRUE : FALSE;
+ }
+ } else {
+ DHD_ERROR(("%s: Can change only when bus down (before FW download)\n",
+ __FUNCTION__));
+ bcmerror = BCME_NOTDOWN;
+ }
+ break;
+
+ case IOV_GVAL(IOV_RX_METADATALEN):
+ int_val = dhd_prot_metadatalen_get(bus->dhd, TRUE);
+ bcopy(&int_val, arg, sizeof(int_val));
+ break;
+
+ case IOV_SVAL(IOV_RX_METADATALEN):
+ if (int_val > 64) {
+ bcmerror = BCME_BUFTOOLONG;
+ break;
+ }
+ dhd_prot_metadatalen_set(bus->dhd, int_val, TRUE);
+ break;
+
+ case IOV_SVAL(IOV_TXP_THRESHOLD):
+ dhd_prot_txp_threshold(bus->dhd, TRUE, int_val);
+ break;
+
+ case IOV_GVAL(IOV_TXP_THRESHOLD):
+ int_val = dhd_prot_txp_threshold(bus->dhd, FALSE, int_val);
+ bcopy(&int_val, arg, sizeof(int_val));
+ break;
+
+ case IOV_SVAL(IOV_DB1_FOR_MB):
+ if (int_val)
+ bus->db1_for_mb = TRUE;
+ else
+ bus->db1_for_mb = FALSE;
+ break;
+
+ case IOV_GVAL(IOV_DB1_FOR_MB):
+ if (bus->db1_for_mb)
+ int_val = 1;
+ else
+ int_val = 0;
+ bcopy(&int_val, arg, sizeof(int_val));
+ break;
+
+ case IOV_GVAL(IOV_TX_METADATALEN):
+ int_val = dhd_prot_metadatalen_get(bus->dhd, FALSE);
+ bcopy(&int_val, arg, sizeof(int_val));
+ break;
+
+ case IOV_SVAL(IOV_TX_METADATALEN):
+ if (int_val > 64) {
+ bcmerror = BCME_BUFTOOLONG;
+ break;
+ }
+ dhd_prot_metadatalen_set(bus->dhd, int_val, FALSE);
+ break;
+
+ case IOV_GVAL(IOV_FLOW_PRIO_MAP):
+ int_val = bus->dhd->flow_prio_map_type;
+ bcopy(&int_val, arg, sizeof(int_val));
+ break;
+
+ case IOV_SVAL(IOV_FLOW_PRIO_MAP):
+ int_val = (int32)dhd_update_flow_prio_map(bus->dhd, (uint8)int_val);
+ bcopy(&int_val, arg, sizeof(int_val));
+ break;
+
default:
bcmerror = BCME_UNSUPPORTED;
break;
}
exit:
-
-
- dhd_os_sdunlock(bus->dhd);
-
return bcmerror;
}
/* Transfers bytes from host to dongle using pio mode */
@@ -1721,6 +2992,127 @@
return 0;
}
+void
+dhd_bus_set_suspend_resume(dhd_pub_t *dhdp, bool state)
+{
+ struct dhd_bus *bus = dhdp->bus;
+ if (bus) {
+ dhdpcie_bus_suspend(bus, state);
+ }
+}
+
+int
+dhdpcie_bus_suspend(struct dhd_bus *bus, bool state)
+{
+
+ int timeleft;
+ bool pending;
+ int rc = 0;
+	DHD_INFO(("%s: Enter with state %d\n", __FUNCTION__, state));
+
+ if (bus->dhd == NULL) {
+ DHD_ERROR(("bus not inited\n"));
+ return BCME_ERROR;
+ }
+ if (bus->dhd->prot == NULL) {
+ DHD_ERROR(("prot is not inited\n"));
+ return BCME_ERROR;
+ }
+ if (bus->dhd->busstate != DHD_BUS_DATA && bus->dhd->busstate != DHD_BUS_SUSPEND) {
+		DHD_ERROR(("bus is not in a ready state\n"));
+ return BCME_ERROR;
+ }
+ if (bus->dhd->dongle_reset)
+ return -EIO;
+
+
+ if (bus->suspended == state) /* Set to same state */
+ return BCME_OK;
+
+ if (state) {
+ bus->wait_for_d3_ack = 0;
+ bus->suspended = TRUE;
+ bus->dhd->busstate = DHD_BUS_SUSPEND;
+ DHD_OS_WAKE_LOCK_WAIVE(bus->dhd);
+ dhd_os_set_ioctl_resp_timeout(DEFAULT_IOCTL_RESP_TIMEOUT);
+ dhdpcie_send_mb_data(bus, H2D_HOST_D3_INFORM);
+ timeleft = dhd_os_d3ack_wait(bus->dhd, &bus->wait_for_d3_ack, &pending);
+ dhd_os_set_ioctl_resp_timeout(IOCTL_RESP_TIMEOUT);
+ DHD_OS_WAKE_LOCK_RESTORE(bus->dhd);
+ if (bus->wait_for_d3_ack == 1) {
+ /* Got D3 Ack. Suspend the bus */
+ if (dhd_os_check_wakelock_all(bus->dhd)) {
+ DHD_ERROR(("Suspend failed because of wakelock\n"));
+ bus->dev->current_state = PCI_D3hot;
+ pci_set_master(bus->dev);
+ rc = pci_set_power_state(bus->dev, PCI_D0);
+ if (rc) {
+ DHD_ERROR(("%s: pci_set_power_state failed:"
+ " current_state[%d], ret[%d]\n",
+ __FUNCTION__, bus->dev->current_state, rc));
+ }
+ bus->suspended = FALSE;
+ bus->dhd->busstate = DHD_BUS_DATA;
+ rc = BCME_ERROR;
+ } else {
+ dhdpcie_bus_intr_disable(bus);
+ rc = dhdpcie_pci_suspend_resume(bus->dev, state);
+ }
+ } else if (timeleft == 0) {
+ DHD_ERROR(("%s: resumed on timeout\n", __FUNCTION__));
+ bus->suspended = FALSE;
+ bus->dhd->busstate = DHD_BUS_DOWN;
+ rc = -ETIMEDOUT;
+ } else if (bus->wait_for_d3_ack == DHD_INVALID) {
+			DHD_ERROR(("PCIe link down during suspend\n"));
+ bus->suspended = FALSE;
+ bus->dhd->busstate = DHD_BUS_DOWN;
+ rc = -ETIMEDOUT;
+ dhdpcie_bus_report_pcie_linkdown(bus);
+ }
+ bus->wait_for_d3_ack = 1;
+ } else {
+ /* Resume */
+ DHD_INFO(("dhdpcie_bus_suspend resume\n"));
+ rc = dhdpcie_pci_suspend_resume(bus->dev, state);
+ bus->suspended = FALSE;
+ if (dhdpcie_bus_cfg_read_dword(bus, PCI_VENDOR_ID, 4) == PCIE_LINK_DOWN) {
+ DHD_ERROR(("PCIe link down during resume"));
+ rc = -ETIMEDOUT;
+ bus->dhd->busstate = DHD_BUS_DOWN;
+ dhdpcie_bus_report_pcie_linkdown(bus);
+ } else {
+ bus->dhd->busstate = DHD_BUS_DATA;
+ dhdpcie_bus_intr_enable(bus);
+ }
+ }
+ return rc;
+}
+
+/* Transfers bytes from host to dongle and to host again using DMA */
+static int
+dhdpcie_bus_dmaxfer_req(struct dhd_bus *bus, uint32 len, uint32 srcdelay, uint32 destdelay)
+{
+ if (bus->dhd == NULL) {
+ DHD_ERROR(("bus not inited\n"));
+ return BCME_ERROR;
+ }
+ if (bus->dhd->prot == NULL) {
+ DHD_ERROR(("prot is not inited\n"));
+ return BCME_ERROR;
+ }
+ if (bus->dhd->busstate != DHD_BUS_DATA) {
+		DHD_ERROR(("bus is not in a ready state\n"));
+ return BCME_ERROR;
+ }
+
+ if (len < 5 || len > 4194296) {
+ DHD_ERROR(("len is too small or too large\n"));
+ return BCME_ERROR;
+ }
+ return dhdmsgbuf_dmaxfer_req(bus->dhd, len, srcdelay, destdelay);
+}
+
static int
@@ -1898,6 +3290,7 @@
bcopy(bus->vars, vbuffer, bus->varsz);
/* Write the vars list */
bcmerror = dhdpcie_bus_membytes(bus, TRUE, varaddr, vbuffer, varsize);
+
/* Implement read back and verify later */
#ifdef DHD_DEBUG
/* Verify NVRAM bytes */
@@ -1915,6 +3308,7 @@
DHD_ERROR(("%s: error %d on reading %d nvram bytes at 0x%08x\n",
__FUNCTION__, bcmerror, varsize, varaddr));
}
+
/* Compare the org NVRAM with the one read from RAM */
if (memcmp(vbuffer, nvram_ularray, varsize)) {
DHD_ERROR(("%s: Downloaded NVRAM image is corrupted.\n", __FUNCTION__));
@@ -1999,7 +3393,58 @@
/* Add bus dump output to a buffer */
void dhd_bus_dump(dhd_pub_t *dhdp, struct bcmstrbuf *strbuf)
{
+ uint16 flowid;
+ flow_ring_node_t *flow_ring_node;
+#ifdef DHD_WAKE_STATUS
+ bcm_bprintf(strbuf, "wake %u rxwake %u readctrlwake %u\n",
+ bcmpcie_get_total_wake(dhdp->bus), dhdp->bus->wake_counts.rxwake,
+ dhdp->bus->wake_counts.rcwake);
+#ifdef DHD_WAKE_RX_STATUS
+	bcm_bprintf(strbuf, " unicast %u multicast %u broadcast %u arp %u\n",
+ dhdp->bus->wake_counts.rx_ucast, dhdp->bus->wake_counts.rx_mcast,
+ dhdp->bus->wake_counts.rx_bcast, dhdp->bus->wake_counts.rx_arp);
+ bcm_bprintf(strbuf, " multi4 %u multi6 %u icmp6 %u multiother %u\n",
+ dhdp->bus->wake_counts.rx_multi_ipv4, dhdp->bus->wake_counts.rx_multi_ipv6,
+ dhdp->bus->wake_counts.rx_icmpv6, dhdp->bus->wake_counts.rx_multi_other);
+ bcm_bprintf(strbuf, " icmp6_ra %u, icmp6_na %u, icmp6_ns %u\n",
+ dhdp->bus->wake_counts.rx_icmpv6_ra, dhdp->bus->wake_counts.rx_icmpv6_na,
+ dhdp->bus->wake_counts.rx_icmpv6_ns);
+#endif
+#ifdef DHD_WAKE_EVENT_STATUS
+ for (flowid = 0; flowid < WLC_E_LAST; flowid++)
+ if (dhdp->bus->wake_counts.rc_event[flowid] != 0)
+ bcm_bprintf(strbuf, " %s = %u\n", bcmevent_get_name(flowid),
+ dhdp->bus->wake_counts.rc_event[flowid]);
+ bcm_bprintf(strbuf, "\n");
+#endif
+#endif
+ dhd_prot_print_info(dhdp, strbuf);
+ for (flowid = 0; flowid < dhdp->num_flow_rings; flowid++) {
+ flow_ring_node = DHD_FLOW_RING(dhdp, flowid);
+ if (flow_ring_node->active) {
+ bcm_bprintf(strbuf, "Flow:%d IF %d Prio %d Qlen %d ",
+ flow_ring_node->flowid, flow_ring_node->flow_info.ifindex,
+ flow_ring_node->flow_info.tid, flow_ring_node->queue.len);
+ dhd_prot_print_flow_ring(dhdp, flow_ring_node->prot_info, strbuf);
+ }
+ }
+}
+
+static void
+dhd_update_txflowrings(dhd_pub_t *dhd)
+{
+ dll_t *item, *next;
+ flow_ring_node_t *flow_ring_node;
+ struct dhd_bus *bus = dhd->bus;
+
+ for (item = dll_head_p(&bus->const_flowring);
+ !dll_end(&bus->const_flowring, item); item = next) {
+ next = dll_next_p(item);
+
+ flow_ring_node = dhd_constlist_to_flowring(item);
+ dhd_prot_update_txflowring(dhd, flow_ring_node->flowid, flow_ring_node->prot_info);
+ }
}
/* Mailbox ringbell Function */
@@ -2011,9 +3456,18 @@
DHD_ERROR(("mailbox communication not supported\n"));
return;
}
- /* this is a pcie core register, not the config regsiter */
- DHD_INFO(("writing a mail box interrupt to the device, through config space\n"));
- dhdpcie_bus_cfg_write_dword(bus, PCISBMbx, 4, (1 << 0));
+ if (bus->db1_for_mb) {
+		/* this is a pcie core register, not the config register */
+		/* XXX: make sure we are on PCIe */
+ DHD_INFO(("writing a mail box interrupt to the device, through doorbell 1\n"));
+ si_corereg(bus->sih, bus->sih->buscoreidx, PCIH2D_DB1, ~0, 0x12345678);
+ }
+ else {
+ DHD_INFO(("writing a mail box interrupt to the device, through config space\n"));
+ dhdpcie_bus_cfg_write_dword(bus, PCISBMbx, 4, (1 << 0));
+ /* XXX CRWLPCIEGEN2-182 requires double write */
+ dhdpcie_bus_cfg_write_dword(bus, PCISBMbx, 4, (1 << 0));
+ }
}
/* doorbell ring Function */
@@ -2081,9 +3535,6 @@
return 0;
}
-#ifndef DHD_ALLIRQ
- dhd_os_sdlock(bus->dhd);
-#endif /* DHD_ALLIRQ */
intstatus = bus->intstatus;
if ((bus->sih->buscorerev == 6) || (bus->sih->buscorerev == 4) ||
@@ -2108,9 +3559,6 @@
}
dhdpcie_bus_intr_enable(bus);
-#ifndef DHD_ALLIRQ
- dhd_os_sdunlock(bus->dhd);
-#endif /* DHD_ALLIRQ */
return resched;
}
@@ -2121,20 +3569,20 @@
{
uint32 cur_h2d_mb_data = 0;
- dhd_bus_cmn_readshared(bus, &cur_h2d_mb_data, HTOD_MB_DATA);
+ dhd_bus_cmn_readshared(bus, &cur_h2d_mb_data, HTOD_MB_DATA, 0);
if (cur_h2d_mb_data != 0) {
uint32 i = 0;
DHD_INFO(("GRRRRRRR: MB transaction is already pending 0x%04x\n", cur_h2d_mb_data));
while ((i++ < 100) && cur_h2d_mb_data) {
OSL_DELAY(10);
- dhd_bus_cmn_readshared(bus, &cur_h2d_mb_data, HTOD_MB_DATA);
+ dhd_bus_cmn_readshared(bus, &cur_h2d_mb_data, HTOD_MB_DATA, 0);
}
if (i >= 100)
DHD_ERROR(("waited 1ms for the dngl to ack the previous mb transaction\n"));
}
- dhd_bus_cmn_writeshared(bus, &h2d_mb_data, sizeof(uint32), HTOD_MB_DATA);
+ dhd_bus_cmn_writeshared(bus, &h2d_mb_data, sizeof(uint32), HTOD_MB_DATA, 0);
dhd_bus_gen_devmb_intr(bus);
}
@@ -2143,13 +3591,16 @@
{
uint32 d2h_mb_data = 0;
uint32 zero = 0;
-
- dhd_bus_cmn_readshared(bus, &d2h_mb_data, DTOH_MB_DATA);
+ dhd_bus_cmn_readshared(bus, &d2h_mb_data, DTOH_MB_DATA, 0);
if (!d2h_mb_data)
return;
- dhd_bus_cmn_writeshared(bus, &zero, sizeof(uint32), DTOH_MB_DATA);
-
+ dhd_bus_cmn_writeshared(bus, &zero, sizeof(uint32), DTOH_MB_DATA, 0);
+ if (d2h_mb_data == PCIE_LINK_DOWN) {
+ DHD_ERROR(("%s pcie linkdown, 0x%08x\n", __FUNCTION__, d2h_mb_data));
+ bus->wait_for_d3_ack = DHD_INVALID;
+ dhd_os_d3ack_wake(bus->dhd);
+ }
DHD_INFO(("D2H_MB_DATA: 0x%04x\n", d2h_mb_data));
if (d2h_mb_data & D2H_DEV_DS_ENTER_REQ) {
/* what should we do */
@@ -2163,7 +3614,18 @@
}
if (d2h_mb_data & D2H_DEV_D3_ACK) {
/* what should we do */
- DHD_INFO(("D2H_MB_DATA: D3 ACK\n"));
+ DHD_ERROR(("D2H_MB_DATA: D3 ACK\n"));
+ if (!bus->wait_for_d3_ack) {
+ bus->wait_for_d3_ack = 1;
+ dhd_os_d3ack_wake(bus->dhd);
+ }
+ }
+ if (d2h_mb_data & D2H_DEV_FWHALT) {
+ DHD_INFO(("FW trap has happened\n"));
+#ifdef DHD_DEBUG
+ dhdpcie_checkdied(bus, NULL, 0);
+#endif
+ bus->dhd->busstate = DHD_BUS_DOWN;
}
}
@@ -2183,10 +3645,15 @@
else {
if (intstatus & (PCIE_MB_TOPCIE_FN0_0 | PCIE_MB_TOPCIE_FN0_1))
dhdpcie_handle_mb_data(bus);
- if (intstatus & PCIE_MB_D2H_MB_MASK)
- dhdpci_bus_read_frames(bus);
- }
+ if (bus->dhd->busstate == DHD_BUS_SUSPEND) {
+ return;
+ }
+
+ if (intstatus & PCIE_MB_D2H_MB_MASK) {
+ dhdpci_bus_read_frames(bus);
+ }
+ }
}
/* Decode dongle to host message stream */
@@ -2194,16 +3661,25 @@
dhdpci_bus_read_frames(dhd_bus_t *bus)
{
/* There may be frames in both ctrl buf and data buf; check ctrl buf first */
- if (dhd_prot_dtohsplit(bus->dhd))
- dhd_prot_process_ctrlbuf(bus->dhd);
- dhd_prot_process_msgbuf(bus->dhd);
+ DHD_PERIM_LOCK(bus->dhd); /* Take the perimeter lock */
+
+ dhd_prot_process_ctrlbuf(bus->dhd);
+
+ /* update the flow ring cpls */
+ dhd_update_txflowrings(bus->dhd);
+
+ dhd_prot_process_msgbuf_txcpl(bus->dhd);
+
+ dhd_prot_process_msgbuf_rxcpl(bus->dhd);
+
+ DHD_PERIM_UNLOCK(bus->dhd); /* Release the perimeter lock */
}
static int
dhdpcie_readshared(dhd_bus_t *bus)
{
uint32 addr = 0;
- int rv;
+ int rv, w_init, r_init;
uint32 shaddr = 0;
pciedev_shared_t *sh = bus->pcie_sh;
dhd_timeout_t tmo;
@@ -2214,13 +3690,11 @@
while (((addr == 0) || (addr == bus->nvram_csm)) && !dhd_timeout_expired(&tmo)) {
/* Read last word in memory to determine address of sdpcm_shared structure */
- if ((rv = dhdpcie_bus_membytes(bus, FALSE, shaddr, (uint8 *)&addr, 4)) < 0)
- return rv;
-
- addr = ltoh32(addr);
+ addr = LTOH32(dhdpcie_bus_rtcm32(bus, shaddr));
}
- if ((addr == 0) || (addr == bus->nvram_csm)) {
+ if ((addr == 0) || (addr == bus->nvram_csm) || (addr < bus->dongle_ram_base) ||
+ (addr > shaddr)) {
DHD_ERROR(("%s: address (0x%08x) of pciedev_shared invalid\n",
__FUNCTION__, addr));
DHD_ERROR(("Waited %u usec, dongle is not ready\n", tmo.elapsed));
@@ -2268,65 +3742,97 @@
sh->flags & PCIE_SHARED_VERSION_MASK));
return BCME_ERROR;
}
+ if ((sh->flags & PCIE_SHARED_VERSION_MASK) >= 4) {
+ if (sh->flags & PCIE_SHARED_TXPUSH_SPRT) {
+#ifdef DHDTCPACK_SUPPRESS
+ /* Do not use tcpack suppress as packets don't stay in queue */
+ dhd_tcpack_suppress_set(bus->dhd, TCPACK_SUP_OFF);
+#endif
+ bus->txmode_push = TRUE;
+ } else
+ bus->txmode_push = FALSE;
+ }
+ DHD_ERROR(("bus->txmode_push is set to %d\n", bus->txmode_push));
+
+ /* Does the FW support DMA'ing r/w indices */
+ if (sh->flags & PCIE_SHARED_DMA_INDEX) {
+
+		DHD_ERROR(("%s: Host supports DMA'ing indices: H2D:%d - D2H:%d. FW supports it\n",
+ __FUNCTION__,
+ (DMA_INDX_ENAB(bus->dhd->dma_h2d_ring_upd_support) ? 1 : 0),
+ (DMA_INDX_ENAB(bus->dhd->dma_d2h_ring_upd_support) ? 1 : 0)));
+
+ } else if (DMA_INDX_ENAB(bus->dhd->dma_d2h_ring_upd_support) ||
+ DMA_INDX_ENAB(bus->dhd->dma_h2d_ring_upd_support)) {
+
+#ifdef BCM_INDX_DMA
+ DHD_ERROR(("%s: Incompatible FW. FW does not support DMAing indices\n",
+ __FUNCTION__));
+ return BCME_ERROR;
+#endif
+ DHD_ERROR(("%s: Host supports DMAing indices but FW does not\n",
+ __FUNCTION__));
+ bus->dhd->dma_d2h_ring_upd_support = FALSE;
+ bus->dhd->dma_h2d_ring_upd_support = FALSE;
+ }
+
+
/* get ring_info, ring_state and mb data ptrs and store the addresses in bus structure */
{
ring_info_t ring_info;
- uint32 tcm_rmem_loc;
- uint32 tcm_rstate_loc;
if ((rv = dhdpcie_bus_membytes(bus, FALSE, sh->rings_info_ptr,
(uint8 *)&ring_info, sizeof(ring_info_t))) < 0)
return rv;
- bus->h2d_ring_count = ring_info.h2d_ring_count;
- bus->d2h_ring_count = ring_info.d2h_ring_count;
bus->h2d_mb_data_ptr_addr = ltoh32(sh->h2d_mb_data_ptr);
bus->d2h_mb_data_ptr_addr = ltoh32(sh->d2h_mb_data_ptr);
- bus->ringmem_ptr = ltoh32(ring_info.ringmem_ptr);
- bus->ring_state_ptr = ltoh32(ring_info.ring_state_ptr);
+
+ bus->max_sub_queues = ltoh16(ring_info.max_sub_queues);
+
+ /* If both FW and Host support DMA'ing indices, allocate memory and notify FW
+ * The max_sub_queues is read from FW initialized ring_info
+ */
+ if (DMA_INDX_ENAB(bus->dhd->dma_h2d_ring_upd_support)) {
+ w_init = dhd_prot_init_index_dma_block(bus->dhd,
+ HOST_TO_DNGL_DMA_WRITEINDX_BUFFER,
+ bus->max_sub_queues);
+ r_init = dhd_prot_init_index_dma_block(bus->dhd,
+ DNGL_TO_HOST_DMA_READINDX_BUFFER,
+ BCMPCIE_D2H_COMMON_MSGRINGS);
+
+ if ((w_init != BCME_OK) || (r_init != BCME_OK)) {
+				DHD_ERROR(("%s: Failed to allocate memory for dma'ing h2d indices. "
+ "Host will use w/r indices in TCM\n",
+ __FUNCTION__));
+ bus->dhd->dma_h2d_ring_upd_support = FALSE;
+ }
+ }
+
+ if (DMA_INDX_ENAB(bus->dhd->dma_d2h_ring_upd_support)) {
+ w_init = dhd_prot_init_index_dma_block(bus->dhd,
+ DNGL_TO_HOST_DMA_WRITEINDX_BUFFER,
+ BCMPCIE_D2H_COMMON_MSGRINGS);
+ r_init = dhd_prot_init_index_dma_block(bus->dhd,
+ HOST_TO_DNGL_DMA_READINDX_BUFFER,
+ bus->max_sub_queues);
+
+ if ((w_init != BCME_OK) || (r_init != BCME_OK)) {
+				DHD_ERROR(("%s: Failed to allocate memory for dma'ing d2h indices. "
+ "Host will use w/r indices in TCM\n",
+ __FUNCTION__));
+ bus->dhd->dma_d2h_ring_upd_support = FALSE;
+ }
+ }
+
+ /* read ringmem and ringstate ptrs from shared area and store in host variables */
+ dhd_fillup_ring_sharedptr_info(bus, &ring_info);
bcm_print_bytes("ring_info_raw", (uchar *)&ring_info, sizeof(ring_info_t));
DHD_INFO(("ring_info\n"));
- DHD_INFO(("h2d_ring_count %d\n", bus->h2d_ring_count));
- DHD_INFO(("d2h_ring_count %d\n", bus->d2h_ring_count));
- DHD_INFO(("ringmem_ptr 0x%04x\n", bus->ringmem_ptr));
- DHD_INFO(("ringstate_ptr 0x%04x\n", bus->ring_state_ptr));
- tcm_rmem_loc = bus->ringmem_ptr;
- tcm_rstate_loc = bus->ring_state_ptr;
-
- if (bus->h2d_ring_count > 1) {
- bus->h2d_ctrl_ring_mem_addr = tcm_rmem_loc;
- tcm_rmem_loc += sizeof(ring_mem_t);
- bus->h2d_ctrl_ring_state_addr = tcm_rstate_loc;
- tcm_rstate_loc += sizeof(ring_state_t);
- }
- bus->h2d_data_ring_mem_addr = tcm_rmem_loc;
- tcm_rmem_loc += sizeof(ring_mem_t);
- bus->h2d_data_ring_state_addr = tcm_rstate_loc;
- tcm_rstate_loc += sizeof(ring_state_t);
-
- if (bus->d2h_ring_count > 1) {
- bus->d2h_ctrl_ring_mem_addr = tcm_rmem_loc;
- tcm_rmem_loc += sizeof(ring_mem_t);
- bus->d2h_ctrl_ring_state_addr = tcm_rstate_loc;
- tcm_rstate_loc += sizeof(ring_state_t);
- }
- bus->d2h_data_ring_mem_addr = tcm_rmem_loc;
- bus->d2h_data_ring_state_addr = tcm_rstate_loc;
-
- DHD_INFO(("ring_mem\n"));
- DHD_INFO(("h2d_data_ring_mem 0x%04x\n", bus->h2d_data_ring_mem_addr));
- DHD_INFO(("h2d_ctrl_ring_mem 0x%04x\n", bus->h2d_ctrl_ring_mem_addr));
- DHD_INFO(("d2h_data_ring_mem 0x%04x\n", bus->d2h_data_ring_mem_addr));
- DHD_INFO(("d2h_ctrl_ring_mem 0x%04x\n", bus->d2h_ctrl_ring_mem_addr));
-
- DHD_INFO(("ring_state\n"));
- DHD_INFO(("h2d_data_ring_state 0x%04x\n", bus->h2d_data_ring_state_addr));
- DHD_INFO(("h2d_ctrl_ring_state 0x%04x\n", bus->h2d_ctrl_ring_state_addr));
- DHD_INFO(("d2h_data_ring_state 0x%04x\n", bus->d2h_data_ring_state_addr));
- DHD_INFO(("d2h_ctrl_ring_state 0x%04x\n", bus->d2h_ctrl_ring_state_addr));
+ DHD_ERROR(("max H2D queues %d\n", ltoh16(ring_info.max_sub_queues)));
DHD_INFO(("mail box address\n"));
DHD_INFO(("h2d_mb_data_ptr_addr 0x%04x\n", bus->h2d_mb_data_ptr_addr));
@@ -2334,8 +3840,102 @@
}
return BCME_OK;
}
+/* Read ring mem and ring state ptr info from the shared area in TCM */
+static void
+dhd_fillup_ring_sharedptr_info(dhd_bus_t *bus, ring_info_t *ring_info)
+{
+ uint16 i = 0;
+ uint16 j = 0;
+ uint32 tcm_memloc;
+ uint32 d2h_w_idx_ptr, d2h_r_idx_ptr, h2d_w_idx_ptr, h2d_r_idx_ptr;
+ /* Ring mem ptr info */
+	/* Allocated in the order
+ H2D_MSGRING_CONTROL_SUBMIT 0
+ H2D_MSGRING_RXPOST_SUBMIT 1
+ D2H_MSGRING_CONTROL_COMPLETE 2
+ D2H_MSGRING_TX_COMPLETE 3
+ D2H_MSGRING_RX_COMPLETE 4
+ TX_FLOW_RING 5
+ */
+ {
+ /* ringmemptr holds start of the mem block address space */
+ tcm_memloc = ltoh32(ring_info->ringmem_ptr);
+
+		/* Find out the ringmem ptr for each common ring */
+ for (i = 0; i <= BCMPCIE_COMMON_MSGRING_MAX_ID; i++) {
+ bus->ring_sh[i].ring_mem_addr = tcm_memloc;
+ /* Update mem block */
+ tcm_memloc = tcm_memloc + sizeof(ring_mem_t);
+ DHD_INFO(("ring id %d ring mem addr 0x%04x \n",
+ i, bus->ring_sh[i].ring_mem_addr));
+ }
+
+ /* Tx flow Ring */
+ if (bus->txmode_push) {
+ bus->ring_sh[i].ring_mem_addr = tcm_memloc;
+			DHD_INFO(("TX ring id %d ring mem addr 0x%04x \n",
+ i, bus->ring_sh[i].ring_mem_addr));
+ }
+ }
+
+ /* Ring state mem ptr info */
+ {
+ d2h_w_idx_ptr = ltoh32(ring_info->d2h_w_idx_ptr);
+ d2h_r_idx_ptr = ltoh32(ring_info->d2h_r_idx_ptr);
+ h2d_w_idx_ptr = ltoh32(ring_info->h2d_w_idx_ptr);
+ h2d_r_idx_ptr = ltoh32(ring_info->h2d_r_idx_ptr);
+ /* Store h2d common ring write/read pointers */
+ for (i = 0; i < BCMPCIE_H2D_COMMON_MSGRINGS; i++) {
+ bus->ring_sh[i].ring_state_w = h2d_w_idx_ptr;
+ bus->ring_sh[i].ring_state_r = h2d_r_idx_ptr;
+
+ /* update mem block */
+ h2d_w_idx_ptr = h2d_w_idx_ptr + sizeof(uint32);
+ h2d_r_idx_ptr = h2d_r_idx_ptr + sizeof(uint32);
+
+ DHD_INFO(("h2d w/r : idx %d write %x read %x \n", i,
+ bus->ring_sh[i].ring_state_w, bus->ring_sh[i].ring_state_r));
+ }
+ /* Store d2h common ring write/read pointers */
+ for (j = 0; j < BCMPCIE_D2H_COMMON_MSGRINGS; j++, i++) {
+ bus->ring_sh[i].ring_state_w = d2h_w_idx_ptr;
+ bus->ring_sh[i].ring_state_r = d2h_r_idx_ptr;
+
+ /* update mem block */
+ d2h_w_idx_ptr = d2h_w_idx_ptr + sizeof(uint32);
+ d2h_r_idx_ptr = d2h_r_idx_ptr + sizeof(uint32);
+
+ DHD_INFO(("d2h w/r : idx %d write %x read %x \n", i,
+ bus->ring_sh[i].ring_state_w, bus->ring_sh[i].ring_state_r));
+ }
+
+ /* Store txflow ring write/read pointers */
+ if (bus->txmode_push) {
+ bus->ring_sh[i].ring_state_w = h2d_w_idx_ptr;
+ bus->ring_sh[i].ring_state_r = h2d_r_idx_ptr;
+
+ DHD_INFO(("txflow : idx %d write %x read %x \n", i,
+ bus->ring_sh[i].ring_state_w, bus->ring_sh[i].ring_state_r));
+ } else {
+ for (j = 0; j < (bus->max_sub_queues - BCMPCIE_H2D_COMMON_MSGRINGS);
+ i++, j++)
+ {
+ bus->ring_sh[i].ring_state_w = h2d_w_idx_ptr;
+ bus->ring_sh[i].ring_state_r = h2d_r_idx_ptr;
+
+ /* update mem block */
+ h2d_w_idx_ptr = h2d_w_idx_ptr + sizeof(uint32);
+ h2d_r_idx_ptr = h2d_r_idx_ptr + sizeof(uint32);
+
+ DHD_INFO(("FLOW Rings h2d w/r : idx %d write %x read %x \n", i,
+ bus->ring_sh[i].ring_state_w,
+ bus->ring_sh[i].ring_state_r));
+ }
+ }
+ }
+}
/* Initialize bus module: prepare for communication w/dongle */
int dhd_bus_init(dhd_pub_t *dhdp, bool enforce_mutex)
{
@@ -2348,9 +3948,6 @@
if (!bus->dhd)
return 0;
- if (enforce_mutex)
- dhd_os_sdlock(bus->dhd);
-
/* Make sure we're talking to the core. */
bus->reg = si_setcore(bus->sih, PCIE2_CORE_ID, 0);
ASSERT(bus->reg != NULL);
@@ -2375,9 +3972,6 @@
/* bcmsdh_intr_unmask(bus->sdh); */
- if (enforce_mutex)
- dhd_os_sdunlock(bus->dhd);
-
return ret;
}
@@ -2410,6 +4004,10 @@
(device == BCM4354_D11AC5G_ID) || (device == BCM4354_CHIP_ID))
return 0;
+ if ((device == BCM4356_D11AC_ID) || (device == BCM4356_D11AC2G_ID) ||
+ (device == BCM4356_D11AC5G_ID) || (device == BCM4356_CHIP_ID))
+ return 0;
+
if ((device == BCM4345_D11AC_ID) || (device == BCM4345_D11AC2G_ID) ||
(device == BCM4345_D11AC5G_ID) || (device == BCM4345_CHIP_ID))
return 0;
@@ -2422,6 +4020,24 @@
(device == BCM43602_D11AC5G_ID) || (device == BCM43602_CHIP_ID))
return 0;
+ if ((device == BCM43569_D11AC_ID) || (device == BCM43569_D11AC2G_ID) ||
+ (device == BCM43569_D11AC5G_ID) || (device == BCM43569_CHIP_ID))
+ return 0;
+
+ if ((device == BCM4358_D11AC_ID) || (device == BCM4358_D11AC2G_ID) ||
+ (device == BCM4358_D11AC5G_ID) || (device == BCM4358_CHIP_ID))
+ return 0;
+
+ if ((device == BCM4349_D11AC_ID) || (device == BCM4349_D11AC2G_ID) ||
+ (device == BCM4349_D11AC5G_ID) || (device == BCM4349_CHIP_ID))
+ return 0;
+ if ((device == BCM4355_D11AC_ID) || (device == BCM4355_D11AC2G_ID) ||
+ (device == BCM4355_D11AC5G_ID) || (device == BCM4355_CHIP_ID))
+ return 0;
+ if ((device == BCM4359_D11AC_ID) || (device == BCM4359_D11AC2G_ID) ||
+ (device == BCM4359_D11AC5G_ID) || (device == BCM4359_CHIP_ID))
+ return 0;
+
DHD_ERROR(("%s: Unsupported vendor %x device %x\n", __FUNCTION__, vendor, device));
return (-ENODEV);
@@ -2562,3 +4178,300 @@
return BCME_OK;
}
+
+
+uint8 BCMFASTPATH
+dhd_bus_is_txmode_push(dhd_bus_t *bus)
+{
+ return bus->txmode_push;
+}
+
+void dhd_bus_clean_flow_ring(dhd_bus_t *bus, void *node)
+{
+ void *pkt;
+ flow_queue_t *queue;
+ flow_ring_node_t *flow_ring_node = (flow_ring_node_t *)node;
+ unsigned long flags;
+
+ queue = &flow_ring_node->queue;
+
+#ifdef DHDTCPACK_SUPPRESS
+ /* Clean tcp_ack_info_tbl in order to prevent access to flushed pkt,
+ * when there is a newly coming packet from network stack.
+ */
+ dhd_tcpack_info_tbl_clean(bus->dhd);
+#endif /* DHDTCPACK_SUPPRESS */
+
+ /* clean up BUS level info */
+ DHD_FLOWRING_LOCK(flow_ring_node->lock, flags);
+
+ /* Flush all pending packets in the queue, if any */
+ while ((pkt = dhd_flow_queue_dequeue(bus->dhd, queue)) != NULL) {
+ PKTFREE(bus->dhd->osh, pkt, TRUE);
+ }
+ ASSERT(flow_queue_empty(queue));
+
+ flow_ring_node->status = FLOW_RING_STATUS_CLOSED;
+
+ flow_ring_node->active = FALSE;
+
+ dll_delete(&flow_ring_node->list);
+
+ DHD_FLOWRING_UNLOCK(flow_ring_node->lock, flags);
+
+ /* Call Flow ring clean up */
+ dhd_prot_clean_flow_ring(bus->dhd, flow_ring_node->prot_info);
+ dhd_flowid_free(bus->dhd, flow_ring_node->flow_info.ifindex,
+ flow_ring_node->flowid);
+
+}
+
+/*
+ * Allocate a Flow ring buffer,
+ * Init Ring buffer,
+ * Send Msg to device about flow ring creation
+ */
+int
+dhd_bus_flow_ring_create_request(dhd_bus_t *bus, void *arg)
+{
+ flow_ring_node_t *flow_ring_node = (flow_ring_node_t *)arg;
+
+ DHD_INFO(("%s :Flow create\n", __FUNCTION__));
+
+ /* Send Msg to device about flow ring creation */
+ if (dhd_prot_flow_ring_create(bus->dhd, flow_ring_node) != BCME_OK)
+ return BCME_NOMEM;
+
+ return BCME_OK;
+}
+
+void
+dhd_bus_flow_ring_create_response(dhd_bus_t *bus, uint16 flowid, int32 status)
+{
+ flow_ring_node_t *flow_ring_node;
+ unsigned long flags;
+
+ DHD_INFO(("%s :Flow Response %d \n", __FUNCTION__, flowid));
+
+ flow_ring_node = DHD_FLOW_RING(bus->dhd, flowid);
+ ASSERT(flow_ring_node->flowid == flowid);
+
+ if (status != BCME_OK) {
+ DHD_ERROR(("%s Flow create Response failure error status = %d \n",
+ __FUNCTION__, status));
+ /* Call Flow clean up */
+ dhd_bus_clean_flow_ring(bus, flow_ring_node);
+ return;
+ }
+
+ DHD_FLOWRING_LOCK(flow_ring_node->lock, flags);
+ flow_ring_node->status = FLOW_RING_STATUS_OPEN;
+ DHD_FLOWRING_UNLOCK(flow_ring_node->lock, flags);
+
+ dhd_bus_schedule_queue(bus, flowid, FALSE);
+
+ return;
+}
+
+int
+dhd_bus_flow_ring_delete_request(dhd_bus_t *bus, void *arg)
+{
+ void * pkt;
+ flow_queue_t *queue;
+ flow_ring_node_t *flow_ring_node;
+ unsigned long flags;
+
+ DHD_INFO(("%s :Flow Delete\n", __FUNCTION__));
+
+ flow_ring_node = (flow_ring_node_t *)arg;
+
+ DHD_FLOWRING_LOCK(flow_ring_node->lock, flags);
+ if (flow_ring_node->status & FLOW_RING_STATUS_DELETE_PENDING) {
+ DHD_FLOWRING_UNLOCK(flow_ring_node->lock, flags);
+ DHD_ERROR(("%s :Delete Pending\n", __FUNCTION__));
+ return BCME_ERROR;
+ }
+ flow_ring_node->status = FLOW_RING_STATUS_DELETE_PENDING;
+
+ queue = &flow_ring_node->queue; /* queue associated with flow ring */
+
+#ifdef DHDTCPACK_SUPPRESS
+ /* Clean tcp_ack_info_tbl in order to prevent access to flushed pkt,
+ * when there is a newly coming packet from network stack.
+ */
+ dhd_tcpack_info_tbl_clean(bus->dhd);
+#endif /* DHDTCPACK_SUPPRESS */
+ /* Flush all pending packets in the queue, if any */
+ while ((pkt = dhd_flow_queue_dequeue(bus->dhd, queue)) != NULL) {
+ PKTFREE(bus->dhd->osh, pkt, TRUE);
+ }
+ ASSERT(flow_queue_empty(queue));
+
+ DHD_FLOWRING_UNLOCK(flow_ring_node->lock, flags);
+
+ /* Send Msg to device about flow ring deletion */
+ dhd_prot_flow_ring_delete(bus->dhd, flow_ring_node);
+
+ return BCME_OK;
+}
+
+void
+dhd_bus_flow_ring_delete_response(dhd_bus_t *bus, uint16 flowid, uint32 status)
+{
+ flow_ring_node_t *flow_ring_node;
+
+ DHD_ERROR(("%s :Flow Delete Response %d \n", __FUNCTION__, flowid));
+
+ flow_ring_node = DHD_FLOW_RING(bus->dhd, flowid);
+ ASSERT(flow_ring_node->flowid == flowid);
+
+ if (status != BCME_OK) {
+ DHD_ERROR(("%s Flow Delete Response failure error status = %d \n",
+ __FUNCTION__, status));
+ return;
+ }
+ /* Call Flow clean up */
+ dhd_bus_clean_flow_ring(bus, flow_ring_node);
+
+ return;
+
+}
+
+int dhd_bus_flow_ring_flush_request(dhd_bus_t *bus, void *arg)
+{
+ void *pkt;
+ flow_queue_t *queue;
+ flow_ring_node_t *flow_ring_node;
+ unsigned long flags;
+
+	DHD_INFO(("%s :Flow Flush\n", __FUNCTION__));
+
+ flow_ring_node = (flow_ring_node_t *)arg;
+ queue = &flow_ring_node->queue; /* queue associated with flow ring */
+
+ DHD_FLOWRING_LOCK(flow_ring_node->lock, flags);
+
+#ifdef DHDTCPACK_SUPPRESS
+ /* Clean tcp_ack_info_tbl in order to prevent access to flushed pkt,
+ * when there is a newly coming packet from network stack.
+ */
+ dhd_tcpack_info_tbl_clean(bus->dhd);
+#endif /* DHDTCPACK_SUPPRESS */
+ /* Flush all pending packets in the queue, if any */
+ while ((pkt = dhd_flow_queue_dequeue(bus->dhd, queue)) != NULL) {
+ PKTFREE(bus->dhd->osh, pkt, TRUE);
+ }
+ ASSERT(flow_queue_empty(queue));
+
+ DHD_FLOWRING_UNLOCK(flow_ring_node->lock, flags);
+
+ /* Send Msg to device about flow ring flush */
+ dhd_prot_flow_ring_flush(bus->dhd, flow_ring_node);
+
+ DHD_FLOWRING_LOCK(flow_ring_node->lock, flags);
+ flow_ring_node->status = FLOW_RING_STATUS_FLUSH_PENDING;
+ DHD_FLOWRING_UNLOCK(flow_ring_node->lock, flags);
+
+ return BCME_OK;
+}
+
+void
+dhd_bus_flow_ring_flush_response(dhd_bus_t *bus, uint16 flowid, uint32 status)
+{
+ flow_ring_node_t *flow_ring_node;
+ unsigned long flags;
+
+ if (status != BCME_OK) {
+ DHD_ERROR(("%s Flow flush Response failure error status = %d \n",
+ __FUNCTION__, status));
+ return;
+ }
+
+ flow_ring_node = DHD_FLOW_RING(bus->dhd, flowid);
+ ASSERT(flow_ring_node->flowid == flowid);
+
+ DHD_FLOWRING_LOCK(flow_ring_node->lock, flags);
+ flow_ring_node->status = FLOW_RING_STATUS_OPEN;
+ DHD_FLOWRING_UNLOCK(flow_ring_node->lock, flags);
+
+ return;
+}
+
+uint32
+dhd_bus_max_h2d_queues(struct dhd_bus *bus, uint8 *txpush)
+{
+ if (bus->txmode_push)
+ *txpush = 1;
+ else
+ *txpush = 0;
+ return bus->max_sub_queues;
+}
+
+int
+dhdpcie_bus_clock_start(struct dhd_bus *bus)
+{
+ return dhdpcie_start_host_pcieclock(bus);
+}
+
+int
+dhdpcie_bus_clock_stop(struct dhd_bus *bus)
+{
+ return dhdpcie_stop_host_pcieclock(bus);
+}
+
+int
+dhdpcie_bus_disable_device(struct dhd_bus *bus)
+{
+ return dhdpcie_disable_device(bus);
+}
+
+int
+dhdpcie_bus_enable_device(struct dhd_bus *bus)
+{
+ return dhdpcie_enable_device(bus);
+}
+
+int
+dhdpcie_bus_alloc_resource(struct dhd_bus *bus)
+{
+ return dhdpcie_alloc_resource(bus);
+}
+
+void
+dhdpcie_bus_free_resource(struct dhd_bus *bus)
+{
+ dhdpcie_free_resource(bus);
+}
+
+int
+dhd_bus_request_irq(struct dhd_bus *bus)
+{
+ return dhdpcie_bus_request_irq(bus);
+}
+
+bool
+dhdpcie_bus_dongle_attach(struct dhd_bus *bus)
+{
+ return dhdpcie_dongle_attach(bus);
+}
+
+int
+dhd_bus_release_dongle(struct dhd_bus *bus)
+{
+ bool dongle_isolation;
+ osl_t *osh;
+
+ DHD_TRACE(("%s: Enter\n", __FUNCTION__));
+
+ if (bus) {
+ osh = bus->osh;
+ ASSERT(osh);
+
+ if (bus->dhd) {
+ dongle_isolation = bus->dhd->dongle_isolation;
+ dhdpcie_bus_release_dongle(bus, osh, dongle_isolation, TRUE);
+ }
+ }
+
+ return 0;
+}
diff --git a/drivers/net/wireless/bcmdhd/dhd_pcie.h b/drivers/net/wireless/bcmdhd/dhd_pcie.h
old mode 100755
new mode 100644
index 9b3f718..febcab4
--- a/drivers/net/wireless/bcmdhd/dhd_pcie.h
+++ b/drivers/net/wireless/bcmdhd/dhd_pcie.h
@@ -2,13 +2,13 @@
* Linux DHD Bus Module for PCIE
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -16,12 +16,12 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: dhd_pcie.h 452261 2014-01-29 19:30:23Z $
+ * $Id: dhd_pcie.h 473468 2014-04-29 07:30:27Z $
*/
@@ -29,10 +29,20 @@
#define dhd_pcie_h
#include <bcmpcie.h>
+#include <hnd_cons.h>
+#ifdef MSM_PCIE_LINKDOWN_RECOVERY
+#if defined(CONFIG_ARCH_MSM)
+#if defined(CONFIG_64BIT)
+#include <linux/msm_pcie.h>
+#else
+#include <mach/msm_pcie.h>
+#endif
+#endif
+#endif /* MSM_PCIE_LINKDOWN_RECOVERY */
/* defines */
-#define PCMSGBUF_HDRLEN 20
+#define PCMSGBUF_HDRLEN 0
#define DONGLE_REG_MAP_SIZE (32 * 1024)
#define DONGLE_TCM_MAP_SIZE (4096 * 1024)
#define DONGLE_MIN_MEMSIZE (128 *1024)
@@ -43,7 +53,9 @@
#define REMAP_ENAB(bus) ((bus)->remap)
#define REMAP_ISADDR(bus, a) (((a) >= ((bus)->orig_ramsize)) && ((a) < ((bus)->ramsize)))
-
+#define MAX_DHD_TX_FLOWS 256
+#define PCIE_LINK_DOWN 0xFFFFFFFF
+#define DHD_INVALID -1
/* user defined data structures */
#ifdef DHD_DEBUG
/* Device console log buffer state */
@@ -54,16 +66,23 @@
typedef struct dhd_console {
uint count; /* Poll interval msec counter */
uint log_addr; /* Log struct address (fixed) */
- hndrte_log_t log; /* Log struct (host copy) */
+ hnd_log_t log; /* Log struct (host copy) */
uint bufsize; /* Size of log buffer */
uint8 *buf; /* Log buffer (host copy) */
uint last; /* Last buffer read index */
} dhd_console_t;
#endif /* DHD_DEBUG */
+typedef struct ring_sh_info {
+ uint32 ring_mem_addr;
+ uint32 ring_state_w;
+ uint32 ring_state_r;
+} ring_sh_info_t;
typedef struct dhd_bus {
dhd_pub_t *dhd;
struct pci_dev *dev; /* pci device handle */
+ dll_t const_flowring; /* constructed list of tx flowring queues */
+
si_t *sih; /* Handle for SI calls */
char *vars; /* Variables (from CIS and/or other) */
uint varsz; /* Size of variables buffer */
@@ -84,7 +103,8 @@
uint16 cl_devid; /* cached devid for dhdsdio_probe_attach() */
char *fw_path; /* module_param: path to firmware image */
char *nv_path; /* module_param: path to nvram vars file */
- const char *nvram_params; /* user specified nvram params. */
+ char *nvram_params; /* user specified nvram params. */
+ int nvram_params_len;
struct pktq txq; /* Queue length used for flow-control */
@@ -114,10 +134,15 @@
ulong shared_addr;
pciedev_shared_t *pcie_sh;
bool bus_flowctrl;
- ioct_resp_hdr_t ioct_resp;
+ ioctl_comp_resp_msg_t ioct_resp;
uint32 dma_rxoffset;
volatile char *regs; /* pci device memory va */
volatile char *tcm; /* pci device memory va */
+ uint32 tcm_size;
+#if defined(CONFIG_ARCH_MSM) && defined(CONFIG_64BIT)
+ uint32 bar1_win_base;
+ uint32 bar1_win_mask;
+#endif
osl_t *osh;
uint32 nvram_csm; /* Nvram checksum */
uint16 pollrate;
@@ -127,21 +152,16 @@
void *pcie_mb_intr_osh;
bool sleep_allowed;
+ wake_counts_t wake_counts;
+
/* version 3 shared struct related info start */
+ ring_sh_info_t ring_sh[BCMPCIE_COMMON_MSGRINGS + MAX_DHD_TX_FLOWS];
uint8 h2d_ring_count;
uint8 d2h_ring_count;
uint32 ringmem_ptr;
uint32 ring_state_ptr;
- uint32 h2d_data_ring_mem_addr;
- uint32 h2d_ctrl_ring_mem_addr;
- uint32 h2d_data_ring_state_addr;
- uint32 h2d_ctrl_ring_state_addr;
-
- uint32 d2h_data_ring_mem_addr;
- uint32 d2h_ctrl_ring_mem_addr;
- uint32 d2h_data_ring_state_addr;
- uint32 d2h_ctrl_ring_state_addr;
+ uint32 d2h_dma_scratch_buffer_mem_addr;
uint32 h2d_mb_data_ptr_addr;
uint32 d2h_mb_data_ptr_addr;
@@ -149,7 +169,15 @@
uint32 def_intmask;
bool ltrsleep_on_unload;
-
+ uint wait_for_d3_ack;
+ uint8 txmode_push;
+ uint32 max_sub_queues;
+ bool db1_for_mb;
+ bool suspended;
+#ifdef MSM_PCIE_LINKDOWN_RECOVERY
+ struct msm_pcie_register_event pcie_event;
+ bool islinkdown;
+#endif /* MSM_PCIE_LINKDOWN_RECOVERY */
} dhd_bus_t;
/* function declarations */
@@ -166,4 +194,19 @@
extern void dhdpcie_bus_release(struct dhd_bus *bus);
extern int32 dhdpcie_bus_isr(struct dhd_bus *bus);
extern void dhdpcie_free_irq(dhd_bus_t *bus);
+extern int dhdpcie_bus_suspend(struct dhd_bus *bus, bool state);
+extern int dhdpcie_pci_suspend_resume(struct pci_dev *dev, bool state);
+extern int dhdpcie_start_host_pcieclock(dhd_bus_t *bus);
+extern int dhdpcie_stop_host_pcieclock(dhd_bus_t *bus);
+extern int dhdpcie_disable_device(dhd_bus_t *bus);
+extern int dhdpcie_enable_device(dhd_bus_t *bus);
+extern int dhdpcie_alloc_resource(dhd_bus_t *bus);
+extern void dhdpcie_free_resource(dhd_bus_t *bus);
+extern int dhdpcie_bus_request_irq(struct dhd_bus *bus);
+extern int dhd_buzzz_dump_dngl(dhd_bus_t *bus);
+#ifdef DHD_WAKE_STATUS
+int bcmpcie_get_total_wake(struct dhd_bus *bus);
+int bcmpcie_set_get_wake(struct dhd_bus *bus, int flag);
+#endif
+
#endif /* dhd_pcie_h */
diff --git a/drivers/net/wireless/bcmdhd/dhd_pcie_linux.c b/drivers/net/wireless/bcmdhd/dhd_pcie_linux.c
old mode 100755
new mode 100644
index 5a58320..ac89e33
--- a/drivers/net/wireless/bcmdhd/dhd_pcie_linux.c
+++ b/drivers/net/wireless/bcmdhd/dhd_pcie_linux.c
@@ -2,13 +2,13 @@
* Linux DHD Bus Module for PCIE
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -16,12 +16,12 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: dhd_pcie_linux.c 452261 2014-01-29 19:30:23Z $
+ * $Id: dhd_pcie_linux.c 477713 2014-05-14 08:59:12Z $
*/
@@ -34,8 +34,8 @@
#include <hndpmu.h>
#include <sbchipc.h>
#if defined(DHD_DEBUG)
-#include <hndrte_armtrap.h>
-#include <hndrte_cons.h>
+#include <hnd_armtrap.h>
+#include <hnd_cons.h>
#endif /* defined(DHD_DEBUG) */
#include <dngl_stats.h>
#include <pcie_core.h>
@@ -46,9 +46,18 @@
#include <dhdioctl.h>
#include <bcmmsgbuf.h>
#include <pcicfg.h>
-#include <circularbuf.h>
#include <dhd_pcie.h>
-
+#include <dhd_linux.h>
+#ifdef DHD_WAKE_STATUS
+#include <linux/wakeup_reason.h>
+#endif
+#if defined(CONFIG_ARCH_MSM)
+#ifdef CONFIG_64BIT
+#include <linux/msm_pcie.h>
+#else
+#include <mach/msm_pcie.h>
+#endif
+#endif
#define PCI_CFG_RETRY 10
#define OS_HANDLE_MAGIC 0x1234abcd /* Magic # to recognize osh */
@@ -87,7 +96,16 @@
struct pcos_info *pcos_info;
uint16 last_intrstatus; /* to cache intrstatus */
int irq;
-
+ char pciname[32];
+ struct pci_saved_state *default_state;
+ struct pci_saved_state *state;
+ wifi_adapter_info_t *adapter;
+#ifdef DHD_WAKE_STATUS
+ spinlock_t pcie_lock;
+ unsigned int total_wake_count;
+ int pkt_wake;
+ int wake_irq;
+#endif
} dhdpcie_info_t;
@@ -109,8 +127,13 @@
dhdpcie_pci_remove(struct pci_dev *pdev);
static int dhdpcie_init(struct pci_dev *pdev);
static irqreturn_t dhdpcie_isr(int irq, void *arg);
-static int dhdpcie_pci_suspend(struct pci_dev *dev);
+/* OS Routine functions for PCI suspend/resume */
+
+static int dhdpcie_pci_suspend(struct pci_dev *dev, pm_message_t state);
+static int dhdpcie_set_suspend_resume(struct pci_dev *dev, bool state);
static int dhdpcie_pci_resume(struct pci_dev *dev);
+static int dhdpcie_resume_dev(struct pci_dev *dev);
+static int dhdpcie_suspend_dev(struct pci_dev *dev);
static struct pci_device_id dhdpcie_pci_devid[] __devinitdata = {
{ vendor: 0x14e4,
device: PCI_ANY_ID,
@@ -133,24 +156,124 @@
#if (LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 0))
save_state: NULL,
#endif
- suspend: NULL,
- resume: NULL,
+ suspend: dhdpcie_pci_suspend,
+ resume: dhdpcie_pci_resume,
};
-static int dhdpcie_pci_suspend(struct pci_dev *dev)
+int dhdpcie_init_succeeded = FALSE;
+
+static void dhdpcie_pme_active(struct pci_dev *pdev, bool enable)
{
- int ret;
- pci_save_state(dev);
- pci_enable_wake(dev, PCI_D0, TRUE);
- pci_disable_device(dev);
- ret = pci_set_power_state(dev, PCI_D3hot);
+ uint16 pmcsr;
+
+ pci_read_config_word(pdev, pdev->pm_cap + PCI_PM_CTRL, &pmcsr);
+ /* Clear PME Status by writing 1 to it and enable PME# */
+ pmcsr |= PCI_PM_CTRL_PME_STATUS | PCI_PM_CTRL_PME_ENABLE;
+ if (!enable)
+ pmcsr &= ~PCI_PM_CTRL_PME_ENABLE;
+
+ pci_write_config_word(pdev, pdev->pm_cap + PCI_PM_CTRL, pmcsr);
+}
+
+static int dhdpcie_set_suspend_resume(struct pci_dev *pdev, bool state)
+{
+ int ret = 0;
+ dhdpcie_info_t *pch = pci_get_drvdata(pdev);
+ dhd_bus_t *bus = NULL;
+ DHD_INFO(("%s Enter with state :%x\n", __FUNCTION__, state));
+ if (pch) {
+ bus = pch->bus;
+ }
+
+ /* When firmware is not loaded, do only the PCI bus suspend/resume */
+ if (bus && (bus->dhd->busstate == DHD_BUS_DOWN) &&
+ !bus->dhd->dongle_reset) {
+ ret = dhdpcie_pci_suspend_resume(bus->dev, state);
+ return ret;
+ }
+
+ if (bus && ((bus->dhd->busstate == DHD_BUS_SUSPEND)||
+ (bus->dhd->busstate == DHD_BUS_DATA)) &&
+ (bus->suspended != state)) {
+
+ ret = dhdpcie_bus_suspend(bus, state);
+ }
+ DHD_INFO(("%s Exit, ret=%d\n", __FUNCTION__, ret));
return ret;
}
-static int dhdpcie_pci_resume(struct pci_dev *dev)
+static int dhdpcie_pci_suspend(struct pci_dev *pdev, pm_message_t state)
+{
+ BCM_REFERENCE(state);
+ DHD_INFO(("%s Enter with event %x\n", __FUNCTION__, state.event));
+ return dhdpcie_set_suspend_resume(pdev, TRUE);
+}
+
+static int dhdpcie_pci_resume(struct pci_dev *pdev)
+{
+ DHD_INFO(("%s Enter\n", __FUNCTION__));
+ return dhdpcie_set_suspend_resume(pdev, FALSE);
+}
+
+int dhd_os_get_wake_irq(dhd_pub_t *pub);
+
+static int dhdpcie_suspend_dev(struct pci_dev *dev)
+{
+ int ret;
+ dhdpcie_info_t *pch = pci_get_drvdata(dev);
+ dhdpcie_pme_active(dev, TRUE);
+ pci_save_state(dev);
+ pch->state = pci_store_saved_state(dev);
+ pci_enable_wake(dev, PCI_D0, TRUE);
+ if (pci_is_enabled(dev))
+ pci_disable_device(dev);
+ ret = pci_set_power_state(dev, PCI_D3hot);
+#ifdef CONFIG_PARTIALRESUME
+ wifi_process_partial_resume(pch->adapter, WIFI_PR_INIT);
+#endif
+ return ret;
+}
+
+#ifdef DHD_WAKE_STATUS
+int bcmpcie_get_total_wake(struct dhd_bus *bus)
+{
+ dhdpcie_info_t *pch = pci_get_drvdata(bus->dev);
+
+ return pch->total_wake_count;
+}
+
+int bcmpcie_set_get_wake(struct dhd_bus *bus, int flag)
+{
+ dhdpcie_info_t *pch = pci_get_drvdata(bus->dev);
+ unsigned long flags;
+ int ret;
+
+ spin_lock_irqsave(&pch->pcie_lock, flags);
+
+ ret = pch->pkt_wake;
+ pch->total_wake_count += flag;
+ pch->pkt_wake = flag;
+
+ spin_unlock_irqrestore(&pch->pcie_lock, flags);
+ return ret;
+}
+#endif
+
+static int dhdpcie_resume_dev(struct pci_dev *dev)
{
int err = 0;
- uint32 val;
+ dhdpcie_info_t *pch = pci_get_drvdata(dev);
+
+#ifdef DHD_WAKE_STATUS
+ if (check_wakeup_reason(pch->wake_irq)) {
+#ifdef CONFIG_PARTIALRESUME
+ wifi_process_partial_resume(pch->adapter, WIFI_PR_NOTIFY_RESUME);
+#endif
+ bcmpcie_set_get_wake(pch->bus, 1);
+ }
+#endif
+ pci_load_and_free_saved_state(dev, &pch->state);
pci_restore_state(dev);
err = pci_enable_device(dev);
if (err) {
@@ -169,9 +292,7 @@
printf("%s:pci_set_power_state error %d \n", __FUNCTION__, err);
return err;
}
- pci_read_config_dword(dev, 0x40, &val);
- if ((val & 0x0000ff00) != 0)
- pci_write_config_dword(dev, 0x40, val & 0xffff00ff);
+ dhdpcie_pme_active(dev, FALSE);
return err;
}
@@ -180,11 +301,32 @@
int rc;
if (state)
- rc = dhdpcie_pci_suspend(dev);
+ rc = dhdpcie_suspend_dev(dev);
else
- rc = dhdpcie_pci_resume(dev);
+ rc = dhdpcie_resume_dev(dev);
return rc;
}
+
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 0))
+static int dhdpcie_device_scan(struct device *dev, void *data)
+{
+ struct pci_dev *pcidev;
+ int *cnt = data;
+
+ pcidev = container_of(dev, struct pci_dev, dev);
+ if (pcidev->vendor != 0x14e4)
+ return 0;
+
+ DHD_INFO(("Found Broadcom PCI device 0x%04x\n", pcidev->device));
+ *cnt += 1;
+ if (pcidev->driver && strcmp(pcidev->driver->name, dhdpcie_driver.name))
+ DHD_ERROR(("Broadcom PCI device 0x%04x is already claimed by driver %s\n",
+ pcidev->device, pcidev->driver->name));
+
+ return 0;
+}
+#endif /* LINUX_VERSION >= 2.6.0 */
+
int
dhdpcie_bus_register(void)
{
@@ -194,12 +336,23 @@
#if (LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 0))
if (!(error = pci_module_init(&dhdpcie_driver)))
return 0;
-#else
- if (!(error = pci_register_driver(&dhdpcie_driver)))
- return 0;
-#endif
DHD_ERROR(("%s: pci_module_init failed 0x%x\n", __FUNCTION__, error));
+#else
+ if (!(error = pci_register_driver(&dhdpcie_driver))) {
+ bus_for_each_dev(dhdpcie_driver.driver.bus, NULL, &error, dhdpcie_device_scan);
+ if (!error) {
+ DHD_ERROR(("No Broadcom PCI device enumerated!\n"));
+ } else if (!dhdpcie_init_succeeded) {
+ DHD_ERROR(("%s: dhdpcie initialize failed.\n", __FUNCTION__));
+ } else {
+ return 0;
+ }
+
+ pci_unregister_driver(&dhdpcie_driver);
+ error = BCME_ERROR;
+ }
+#endif /* LINUX_VERSION < 2.6.0 */
return error;
}
@@ -227,7 +380,8 @@
DHD_ERROR(("%s: PCIe Enumeration failed\n", __FUNCTION__));
return -ENODEV;
}
-
+ /* disable async suspend */
+ device_disable_async_suspend(&pdev->dev);
DHD_TRACE(("%s: PCIe Enumeration done!!\n", __FUNCTION__));
return 0;
}
@@ -237,6 +391,10 @@
{
osl_t *osh = pch->osh;
if (pch) {
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 0, 0))
+ if (!dhd_download_fw_on_driverload)
+ pci_load_and_free_saved_state(pch->dev, &pch->default_state);
+#endif /* LINUX_VERSION_CODE >= KERNEL_VERSION(3, 0, 0) */
MFREE(osh, pch, sizeof(dhdpcie_info_t));
}
return 0;
@@ -246,7 +404,6 @@
void __devexit
dhdpcie_pci_remove(struct pci_dev *pdev)
{
-
osl_t *osh = NULL;
dhdpcie_info_t *pch = NULL;
dhd_bus_t *bus = NULL;
@@ -254,6 +411,7 @@
DHD_TRACE(("%s Enter\n", __FUNCTION__));
pch = pci_get_drvdata(pdev);
bus = pch->bus;
+ osh = pch->osh;
dhdpcie_bus_release(bus);
pci_disable_device(pdev);
@@ -262,6 +420,7 @@
/* osl detach */
osl_detach(osh);
+ dhdpcie_init_succeeded = FALSE;
DHD_TRACE(("%s Exit\n", __FUNCTION__));
@@ -275,10 +434,16 @@
dhd_bus_t *bus = dhdpcie_info->bus;
struct pci_dev *pdev = dhdpcie_info->bus->dev;
- if (request_irq(pdev->irq, dhdpcie_isr, IRQF_SHARED, "dhdpcie", bus) < 0) {
+ snprintf(dhdpcie_info->pciname, sizeof(dhdpcie_info->pciname),
+ "dhdpcie:%s", pci_name(pdev));
+ if (request_irq(pdev->irq, dhdpcie_isr, IRQF_SHARED,
+ dhdpcie_info->pciname, bus) < 0) {
DHD_ERROR(("%s: request_irq() failed\n", __FUNCTION__));
return -1;
- }
+ }
+
+ DHD_TRACE(("%s %s\n", __FUNCTION__, dhdpcie_info->pciname));
+
return 0; /* SUCCESS */
}
@@ -340,6 +505,26 @@
DHD_ERROR(("%s:ioremap() failed\n", __FUNCTION__));
break;
}
+
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 0, 0))
+ if (!dhd_download_fw_on_driverload) {
+ /* Back up the PCIe configuration so it can be restored
+ * across the Wi-Fi on/off cycle when the driver is built in
+ */
+ pci_save_state(pdev);
+ dhdpcie_info->default_state = pci_store_saved_state(pdev);
+
+ if (dhdpcie_info->default_state == NULL) {
+ DHD_ERROR(("%s pci_store_saved_state returns NULL\n",
+ __FUNCTION__));
+ REG_UNMAP(dhdpcie_info->regs);
+ REG_UNMAP(dhdpcie_info->tcm);
+ pci_disable_device(pdev);
+ break;
+ }
+ }
+#endif /* LINUX_VERSION_CODE >= KERNEL_VERSION(3, 0, 0) */
+
DHD_TRACE(("%s:Phys addr : reg space = %p base addr 0x"PRINTF_RESOURCE" \n",
__FUNCTION__, dhdpcie_info->regs, bar0_addr));
DHD_TRACE(("%s:Phys addr : tcm_space = %p base addr 0x"PRINTF_RESOURCE" \n",
@@ -374,20 +559,47 @@
return -1; /* FAILURE */
}
-
+#ifdef MSM_PCIE_LINKDOWN_RECOVERY
+void dhdpcie_linkdown_cb(struct msm_pcie_notify *noti)
+{
+ struct pci_dev *pdev = (struct pci_dev *)noti->user;
+ dhdpcie_info_t *pch;
+ dhd_bus_t *bus;
+ dhd_pub_t *dhd;
+ if (pdev && (pch = pci_get_drvdata(pdev))) {
+ if ((bus = pch->bus) && (dhd = bus->dhd)) {
+ DHD_ERROR(("%s: Event HANG send up "
+ "due to PCIe linkdown\n", __FUNCTION__));
+ bus->islinkdown = TRUE;
+ dhd->busstate = DHD_BUS_DOWN;
+ DHD_OS_WAKE_LOCK_CTRL_TIMEOUT_ENABLE(dhd, DHD_EVENT_TIMEOUT_MS);
+ dhd_os_check_hang(dhd, 0, -ETIMEDOUT);
+ }
+ }
+}
+#endif /* MSM_PCIE_LINKDOWN_RECOVERY */
int dhdpcie_init(struct pci_dev *pdev)
{
osl_t *osh = NULL;
dhd_bus_t *bus = NULL;
dhdpcie_info_t *dhdpcie_info = NULL;
-
+ wifi_adapter_info_t *adapter = NULL;
+ DHD_ERROR(("%s enter\n", __FUNCTION__));
do {
/* osl attach */
if (!(osh = osl_attach(pdev, PCI_BUS, FALSE))) {
DHD_ERROR(("%s: osl_attach failed\n", __FUNCTION__));
break;
}
+ /* initialize static buffer */
+ adapter = dhd_wifi_platform_get_adapter(PCI_BUS, pdev->bus->number,
+ PCI_SLOT(pdev->devfn));
+ if (adapter != NULL)
+ DHD_ERROR(("%s: found adapter info '%s'\n", __FUNCTION__, adapter->name));
+ else
+ DHD_ERROR(("%s: can't find adapter info for this chip\n", __FUNCTION__));
+ osl_static_mem_init(osh, adapter);
/* allocate Linux-specific PCIe structure here */
if (!(dhdpcie_info = MALLOC(osh, sizeof(dhdpcie_info_t)))) {
@@ -397,6 +609,7 @@
bzero(dhdpcie_info, sizeof(dhdpcie_info_t));
dhdpcie_info->osh = osh;
dhdpcie_info->dev = pdev;
+ dhdpcie_info->adapter = adapter;
/* Find the PCI resources, verify the */
/* vendor and device ID, map BAR regions and irq, update in structures */
@@ -415,6 +628,15 @@
dhdpcie_info->bus = bus;
dhdpcie_info->bus->dev = pdev;
+#ifdef MSM_PCIE_LINKDOWN_RECOVERY
+ bus->islinkdown = FALSE;
+ bus->pcie_event.events = MSM_PCIE_EVENT_LINKDOWN;
+ bus->pcie_event.user = pdev;
+ bus->pcie_event.mode = MSM_PCIE_TRIGGER_CALLBACK;
+ bus->pcie_event.callback = dhdpcie_linkdown_cb;
+ bus->pcie_event.options = MSM_PCIE_CONFIG_NO_RECOVERY;
+ msm_pcie_register_event(&bus->pcie_event);
+#endif /* MSM_PCIE_LINKDOWN_RECOVERY */
if (bus->intr) {
/* Register interrupt callback, but mask it (not operational yet). */
@@ -431,14 +653,30 @@
"due to polling mode\n", __FUNCTION__));
}
- if (dhd_download_fw_on_driverload)
- if (dhd_bus_start(bus->dhd))
- break;
/* set private data for pci_dev */
pci_set_drvdata(pdev, dhdpcie_info);
+ /* Attach to the OS network interface */
+ DHD_TRACE(("%s(): Calling dhd_register_if() \n", __FUNCTION__));
+ if (dhd_register_if(bus->dhd, 0, TRUE)) {
+ DHD_ERROR(("%s(): ERROR.. dhd_register_if() failed\n", __FUNCTION__));
+ break;
+ }
+ if (dhd_download_fw_on_driverload) {
+ if (dhd_bus_start(bus->dhd)) {
+ DHD_ERROR(("%s: dhd_bus_start() failed\n", __FUNCTION__));
+ break;
+ }
+ }
+#ifdef DHD_WAKE_STATUS
+ spin_lock_init(&dhdpcie_info->pcie_lock);
+ dhdpcie_info->wake_irq = dhd_os_get_wake_irq(bus->dhd);
+ if (dhdpcie_info->wake_irq == -1)
+ dhdpcie_info->wake_irq = pdev->irq;
+#endif
+ dhdpcie_init_succeeded = TRUE;
- DHD_TRACE(("%s:Exit - SUCCESS \n", __FUNCTION__));
+ DHD_ERROR(("%s:Exit - SUCCESS \n", __FUNCTION__));
return 0; /* return SUCCESS */
} while (0);
@@ -453,6 +691,8 @@
if (osh)
osl_detach(osh);
+ dhdpcie_init_succeeded = FALSE;
+
DHD_TRACE(("%s:Exit - FAILURE \n", __FUNCTION__));
return -1; /* return FAILURE */
@@ -501,3 +741,257 @@
else
return FALSE;
}
+
+int
+dhdpcie_start_host_pcieclock(dhd_bus_t *bus)
+{
+ int ret = 0;
+ int options = 0;
+ DHD_TRACE(("%s Enter:\n", __FUNCTION__));
+
+ if (bus == NULL)
+ return BCME_ERROR;
+
+ if (bus->dev == NULL)
+ return BCME_ERROR;
+
+#ifdef CONFIG_ARCH_MSM
+#ifdef MSM_PCIE_LINKDOWN_RECOVERY
+ if (bus->islinkdown)
+ options = MSM_PCIE_CONFIG_NO_CFG_RESTORE;
+#endif /* MSM_PCIE_LINKDOWN_RECOVERY */
+
+ ret = msm_pcie_pm_control(MSM_PCIE_RESUME, bus->dev->bus->number,
+ bus->dev, NULL, options);
+#ifdef MSM_PCIE_LINKDOWN_RECOVERY
+ if (bus->islinkdown && !ret) {
+ msm_pcie_recover_config(bus->dev);
+ if (bus->dhd)
+ DHD_OS_WAKE_UNLOCK(bus->dhd);
+ bus->islinkdown = FALSE;
+ }
+#endif /* MSM_PCIE_LINKDOWN_RECOVERY */
+
+ if (ret) {
+ DHD_ERROR(("%s Failed to bring up PCIe link\n", __FUNCTION__));
+ }
+#endif /* CONFIG_ARCH_MSM */
+ DHD_TRACE(("%s Exit:\n", __FUNCTION__));
+ return ret;
+}
+
+int
+dhdpcie_stop_host_pcieclock(dhd_bus_t *bus)
+{
+ int ret = 0;
+ int options = 0;
+ DHD_TRACE(("%s Enter:\n", __FUNCTION__));
+
+ if (bus == NULL)
+ return BCME_ERROR;
+
+ if (bus->dev == NULL)
+ return BCME_ERROR;
+
+#ifdef CONFIG_ARCH_MSM
+#ifdef MSM_PCIE_LINKDOWN_RECOVERY
+ if (bus->islinkdown)
+ options = MSM_PCIE_CONFIG_NO_CFG_RESTORE | MSM_PCIE_CONFIG_LINKDOWN;
+#endif /* MSM_PCIE_LINKDOWN_RECOVERY */
+
+ ret = msm_pcie_pm_control(MSM_PCIE_SUSPEND, bus->dev->bus->number,
+ bus->dev, NULL, options);
+
+ if (ret) {
+ DHD_ERROR(("%s Failed to stop PCIe link\n", __FUNCTION__));
+ }
+#endif /* CONFIG_ARCH_MSM */
+ DHD_TRACE(("%s Exit:\n", __FUNCTION__));
+ return ret;
+}
+
+int
+dhdpcie_disable_device(dhd_bus_t *bus)
+{
+ if (bus == NULL)
+ return BCME_ERROR;
+
+ if (bus->dev == NULL)
+ return BCME_ERROR;
+
+ pci_disable_device(bus->dev);
+
+ return 0;
+}
+
+int
+dhdpcie_enable_device(dhd_bus_t *bus)
+{
+ int ret = BCME_ERROR;
+ dhdpcie_info_t *pch;
+
+ DHD_TRACE(("%s Enter:\n", __FUNCTION__));
+
+ if (bus == NULL)
+ return BCME_ERROR;
+
+ if (bus->dev == NULL)
+ return BCME_ERROR;
+
+ pch = pci_get_drvdata(bus->dev);
+ if (pch == NULL)
+ return BCME_ERROR;
+
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 0, 0))
+ if (pci_load_saved_state(bus->dev, pch->default_state))
+ pci_disable_device(bus->dev);
+ else {
+#endif /* LINUX_VERSION_CODE >= KERNEL_VERSION(3, 0, 0) */
+ pci_restore_state(bus->dev);
+ ret = pci_enable_device(bus->dev);
+ if (!ret)
+ pci_set_master(bus->dev);
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 0, 0))
+ }
+#endif /* LINUX_VERSION_CODE >= KERNEL_VERSION(3, 0, 0) */
+
+ if (ret)
+ pci_disable_device(bus->dev);
+
+ return ret;
+}
+int
+dhdpcie_alloc_resource(dhd_bus_t *bus)
+{
+ dhdpcie_info_t *dhdpcie_info;
+ phys_addr_t bar0_addr, bar1_addr;
+ ulong bar1_size;
+
+ do {
+ if (bus == NULL) {
+ DHD_ERROR(("%s: bus is NULL\n", __FUNCTION__));
+ break;
+ }
+
+ if (bus->dev == NULL) {
+ DHD_ERROR(("%s: bus->dev is NULL\n", __FUNCTION__));
+ break;
+ }
+
+ dhdpcie_info = pci_get_drvdata(bus->dev);
+ if (dhdpcie_info == NULL) {
+ DHD_ERROR(("%s: dhdpcie_info is NULL\n", __FUNCTION__));
+ break;
+ }
+
+ bar0_addr = pci_resource_start(bus->dev, 0); /* Bar-0 mapped address */
+ bar1_addr = pci_resource_start(bus->dev, 2); /* Bar-1 mapped address */
+
+ /* read Bar-1 mapped memory range */
+ bar1_size = pci_resource_len(bus->dev, 2);
+
+ if ((bar1_size == 0) || (bar1_addr == 0)) {
+ printf("%s: BAR1 not enabled for this device: size(%ld),"
+ " addr(0x"PRINTF_RESOURCE")\n",
+ __FUNCTION__, bar1_size, bar1_addr);
+ break;
+ }
+
+ dhdpcie_info->regs = (volatile char *) REG_MAP(bar0_addr, DONGLE_REG_MAP_SIZE);
+ if (!dhdpcie_info->regs) {
+ DHD_ERROR(("%s: ioremap() for regs failed\n", __FUNCTION__));
+ break;
+ }
+
+ bus->regs = dhdpcie_info->regs;
+ dhdpcie_info->tcm = (volatile char *) REG_MAP(bar1_addr, DONGLE_TCM_MAP_SIZE);
+ dhdpcie_info->tcm_size = DONGLE_TCM_MAP_SIZE;
+ if (!dhdpcie_info->tcm) {
+ DHD_ERROR(("%s: ioremap() for tcm failed\n", __FUNCTION__));
+ REG_UNMAP(dhdpcie_info->regs);
+ bus->regs = NULL;
+ break;
+ }
+
+ bus->tcm = dhdpcie_info->tcm;
+
+ DHD_TRACE(("%s:Phys addr : reg space = %p base addr 0x"PRINTF_RESOURCE" \n",
+ __FUNCTION__, dhdpcie_info->regs, bar0_addr));
+ DHD_TRACE(("%s:Phys addr : tcm_space = %p base addr 0x"PRINTF_RESOURCE" \n",
+ __FUNCTION__, dhdpcie_info->tcm, bar1_addr));
+
+ return 0;
+ } while (0);
+
+ return BCME_ERROR;
+}
+
+void
+dhdpcie_free_resource(dhd_bus_t *bus)
+{
+ dhdpcie_info_t *dhdpcie_info;
+
+ if (bus == NULL) {
+ DHD_ERROR(("%s: bus is NULL\n", __FUNCTION__));
+ return;
+ }
+
+ if (bus->dev == NULL) {
+ DHD_ERROR(("%s: bus->dev is NULL\n", __FUNCTION__));
+ return;
+ }
+
+ dhdpcie_info = pci_get_drvdata(bus->dev);
+ if (dhdpcie_info == NULL) {
+ DHD_ERROR(("%s: dhdpcie_info is NULL\n", __FUNCTION__));
+ return;
+ }
+
+ if (bus->regs) {
+ REG_UNMAP(dhdpcie_info->regs);
+ bus->regs = NULL;
+ }
+
+ if (bus->tcm) {
+ REG_UNMAP(dhdpcie_info->tcm);
+ bus->tcm = NULL;
+ }
+}
+
+int
+dhdpcie_bus_request_irq(struct dhd_bus *bus)
+{
+ dhdpcie_info_t *dhdpcie_info;
+ int ret = 0;
+
+ if (bus == NULL) {
+ DHD_ERROR(("%s: bus is NULL\n", __FUNCTION__));
+ return BCME_ERROR;
+ }
+
+ if (bus->dev == NULL) {
+ DHD_ERROR(("%s: bus->dev is NULL\n", __FUNCTION__));
+ return BCME_ERROR;
+ }
+
+ dhdpcie_info = pci_get_drvdata(bus->dev);
+ if (dhdpcie_info == NULL) {
+ DHD_ERROR(("%s: dhdpcie_info is NULL\n", __FUNCTION__));
+ return BCME_ERROR;
+ }
+
+ if (bus->intr) {
+ /* Register interrupt callback, but mask it (not operational yet). */
+ DHD_INTR(("%s: Registering and masking interrupts\n", __FUNCTION__));
+ dhdpcie_bus_intr_disable(bus);
+ ret = dhdpcie_request_irq(dhdpcie_info);
+ if (ret) {
+ DHD_ERROR(("%s: request_irq() failed, ret=%d\n",
+ __FUNCTION__, ret));
+ return ret;
+ }
+ }
+
+ return ret;
+}
+
diff --git a/drivers/net/wireless/bcmdhd/dhd_pno.c b/drivers/net/wireless/bcmdhd/dhd_pno.c
old mode 100755
new mode 100644
index 8c96f6b..d896472
--- a/drivers/net/wireless/bcmdhd/dhd_pno.c
+++ b/drivers/net/wireless/bcmdhd/dhd_pno.c
@@ -3,13 +3,13 @@
* Preferred Network Offload and Wi-Fi Location Service (WLS) code.
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -17,7 +17,7 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
@@ -44,6 +44,9 @@
#include <dhd.h>
#include <dhd_pno.h>
#include <dhd_dbg.h>
+#ifdef GSCAN_SUPPORT
+#include <linux/gcd.h>
+#endif /* GSCAN_SUPPORT */
#ifdef __BIG_ENDIAN
#include <bcmendian.h>
@@ -71,23 +74,35 @@
} \
} while (0)
#define PNO_GET_PNOSTATE(dhd) ((dhd_pno_status_info_t *)dhd->pno_state)
-#define PNO_BESTNET_LEN 1024
+#define PNO_BESTNET_LEN 2048
#define PNO_ON 1
#define PNO_OFF 0
#define CHANNEL_2G_MAX 14
+#define CHANNEL_5G_MAX 165
#define MAX_NODE_CNT 5
#define WLS_SUPPORTED(pno_state) (pno_state->wls_supported == TRUE)
#define TIME_DIFF(timestamp1, timestamp2) (abs((uint32)(timestamp1/1000) \
- (uint32)(timestamp2/1000)))
+#define TIME_DIFF_MS(timestamp1, timestamp2) (abs((uint32)(timestamp1) \
+ - (uint32)(timestamp2)))
+#define TIMESPEC_TO_US(ts) (((uint64)(ts).tv_sec * USEC_PER_SEC) + \
+ (ts).tv_nsec / NSEC_PER_USEC)
#define ENTRY_OVERHEAD strlen("bssid=\nssid=\nfreq=\nlevel=\nage=\ndist=\ndistSd=\n====")
#define TIME_MIN_DIFF 5
+static wlc_ssid_ext_t *dhd_pno_get_legacy_pno_ssid(dhd_pub_t *dhd,
+ dhd_pno_status_info_t *pno_state);
+#ifdef GSCAN_SUPPORT
+static wl_pfn_gscan_ch_bucket_cfg_t *
+dhd_pno_gscan_create_channel_list(dhd_pub_t *dhd, dhd_pno_status_info_t *pno_state,
+uint16 *chan_list, uint32 *num_buckets, uint32 *num_buckets_to_fw);
+#endif /* GSCAN_SUPPORT */
static inline bool
is_dfs(uint16 channel)
{
if (channel >= 52 && channel <= 64) /* class 2 */
return TRUE;
- else if (channel >= 100 && channel <= 140) /* class 4 */
+ else if (channel >= 100 && channel <= 144) /* class 4 */
return TRUE;
else
return FALSE;
@@ -119,6 +134,108 @@
return err;
}
+bool dhd_is_pno_supported(dhd_pub_t *dhd)
+{
+ dhd_pno_status_info_t *_pno_state;
+
+ if (!dhd || !dhd->pno_state) {
+ DHD_ERROR(("NULL POINTER : %s\n",
+ __FUNCTION__));
+ return FALSE;
+ }
+ _pno_state = PNO_GET_PNOSTATE(dhd);
+ return WLS_SUPPORTED(_pno_state);
+}
+
+bool dhd_is_legacy_pno_enabled(dhd_pub_t *dhd)
+{
+ dhd_pno_status_info_t *_pno_state;
+
+ if (!dhd || !dhd->pno_state) {
+ DHD_ERROR(("NULL POINTER : %s\n",
+ __FUNCTION__));
+ return FALSE;
+ }
+ _pno_state = PNO_GET_PNOSTATE(dhd);
+ return ((_pno_state->pno_mode & DHD_PNO_LEGACY_MODE) != 0);
+}
+
+#ifdef GSCAN_SUPPORT
+static uint64 convert_fw_rel_time_to_systime(uint32 fw_ts_ms)
+{
+ struct timespec ts;
+
+ get_monotonic_boottime(&ts);
+ return ((uint64)(TIMESPEC_TO_US(ts)) - (uint64)(fw_ts_ms * 1000));
+}
+
+static void
+dhd_pno_idx_to_ssid(struct dhd_pno_gscan_params *gscan_params,
+ dhd_epno_results_t *res, uint32 idx)
+{
+ dhd_epno_params_t *iter, *next;
+
+ if (gscan_params->num_epno_ssid > 0) {
+ list_for_each_entry_safe(iter, next,
+ &gscan_params->epno_ssid_list, list) {
+ if (iter->index == idx) {
+ memcpy(res->ssid, iter->ssid, iter->ssid_len);
+ res->ssid_len = iter->ssid_len;
+ return;
+ }
+ }
+ }
+ /* If we are here then there was no match */
+ res->ssid[0] = '\0';
+ res->ssid_len = 0;
+ return;
+}
+
+/* Cleanup all results */
+static void
+dhd_gscan_clear_all_batch_results(dhd_pub_t *dhd)
+{
+ struct dhd_pno_gscan_params *gscan_params;
+ dhd_pno_status_info_t *_pno_state;
+ gscan_results_cache_t *iter;
+
+ _pno_state = PNO_GET_PNOSTATE(dhd);
+ gscan_params = &_pno_state->pno_params_arr[INDEX_OF_GSCAN_PARAMS].params_gscan;
+ iter = gscan_params->gscan_batch_cache;
+ /* Mark everything as consumed */
+ while (iter) {
+ iter->tot_consumed = iter->tot_count;
+ iter = iter->next;
+ }
+ dhd_gscan_batch_cache_cleanup(dhd);
+ return;
+}
+
+static int
+_dhd_pno_gscan_cfg(dhd_pub_t *dhd, wl_pfn_gscan_cfg_t *pfncfg_gscan_param, int size)
+{
+ int err = BCME_OK;
+ NULL_CHECK(dhd, "dhd is NULL", err);
+
+ DHD_PNO(("%s enter\n", __FUNCTION__));
+
+ err = dhd_iovar(dhd, 0, "pfn_gscan_cfg", (char *)pfncfg_gscan_param, size, 1);
+ if (err < 0) {
+ DHD_ERROR(("%s : failed to execute pfncfg_gscan_param\n", __FUNCTION__));
+ goto exit;
+ }
+exit:
+ return err;
+}
+
+static bool
+is_batch_retrieval_complete(struct dhd_pno_gscan_params *gscan_params)
+{
+ smp_rmb();
+ return (gscan_params->get_batch_flag == GSCAN_BATCH_RETRIEVAL_COMPLETE);
+}
+#endif /* GSCAN_SUPPORT */
+
static int
_dhd_pno_suspend(dhd_pub_t *dhd)
{
@@ -172,7 +289,7 @@
/* Enable/Disable PNO */
err = dhd_iovar(dhd, 0, "pfn", (char *)&enable, sizeof(enable), 1);
if (err < 0) {
- DHD_ERROR(("%s : failed to execute pfn_set\n", __FUNCTION__));
+ DHD_ERROR(("%s : failed to execute pfn_set - %d\n", __FUNCTION__, err));
goto exit;
}
_pno_state->pno_status = (enable)?
@@ -226,6 +343,12 @@
mode |= DHD_PNO_HOTLIST_MODE;
combined_scan = TRUE;
}
+#ifdef GSCAN_SUPPORT
+ else if (_pno_state->pno_mode & DHD_PNO_GSCAN_MODE) {
+ DHD_PNO(("will enable combined scan with GSCAN SCAN MODE\n"));
+ mode |= DHD_PNO_GSCAN_MODE;
+ }
+#endif /* GSCAN_SUPPORT */
}
if (mode & (DHD_PNO_BATCH_MODE | DHD_PNO_HOTLIST_MODE)) {
/* Scan frequency of 30 sec */
@@ -233,7 +356,7 @@
/* slow adapt scan is off by default */
pfn_param.slow_freq = htod32(0);
/* RSSI margin of 30 dBm */
- pfn_param.rssi_margin = htod16(30);
+ pfn_param.rssi_margin = htod16(PNO_RSSI_MARGIN_DBM);
/* Network timeout 60 sec */
pfn_param.lost_network_timeout = htod32(60);
/* best n = 2 by default */
@@ -290,14 +413,73 @@
}
}
}
- if (pfn_param.scan_freq < htod32(PNO_SCAN_MIN_FW_SEC) ||
- pfn_param.scan_freq > htod32(PNO_SCAN_MAX_FW_SEC)) {
- DHD_ERROR(("%s pno freq(%d sec) is not valid \n",
- __FUNCTION__, PNO_SCAN_MIN_FW_SEC));
- err = BCME_BADARG;
+#ifdef GSCAN_SUPPORT
+ if (mode & DHD_PNO_GSCAN_MODE) {
+ uint32 lost_network_timeout;
+
+ pfn_param.scan_freq = htod32(pno_params->params_gscan.scan_fr);
+ if (pno_params->params_gscan.mscan) {
+ pfn_param.bestn = pno_params->params_gscan.bestn;
+ pfn_param.mscan = pno_params->params_gscan.mscan;
+ pfn_param.flags |= (ENABLE << ENABLE_BD_SCAN_BIT);
+ }
+ /* RSSI margin of 30 dBm */
+ pfn_param.rssi_margin = htod16(PNO_RSSI_MARGIN_DBM);
+ pfn_param.repeat = 0;
+ pfn_param.exp = 0;
+ pfn_param.slow_freq = 0;
+ pfn_param.flags |= htod16(ENABLE << ENABLE_ADAPTSCAN_BIT);
+
+ if (_pno_state->pno_mode & DHD_PNO_LEGACY_MODE) {
+ dhd_pno_status_info_t *_pno_state = PNO_GET_PNOSTATE(dhd);
+ dhd_pno_params_t *_params;
+
+ _params = &(_pno_state->pno_params_arr[INDEX_OF_LEGACY_PARAMS]);
+
+ pfn_param.scan_freq = gcd(pno_params->params_gscan.scan_fr,
+ _params->params_legacy.scan_fr);
+
+ if ((_params->params_legacy.pno_repeat != 0) ||
+ (_params->params_legacy.pno_freq_expo_max != 0)) {
+ pfn_param.repeat = (uchar) (_params->params_legacy.pno_repeat);
+ pfn_param.exp = (uchar) (_params->params_legacy.pno_freq_expo_max);
+ }
+ }
+
+ lost_network_timeout = (pno_params->params_gscan.max_ch_bucket_freq *
+ pfn_param.scan_freq *
+ pno_params->params_gscan.lost_ap_window);
+ if (lost_network_timeout) {
+ pfn_param.lost_network_timeout = htod32(MIN(lost_network_timeout,
+ GSCAN_MIN_BSSID_TIMEOUT));
+ } else {
+ pfn_param.lost_network_timeout = htod32(GSCAN_MIN_BSSID_TIMEOUT);
+ }
+ } else
+#endif /* GSCAN_SUPPORT */
+ {
+ if (pfn_param.scan_freq < htod32(PNO_SCAN_MIN_FW_SEC) ||
+ pfn_param.scan_freq > htod32(PNO_SCAN_MAX_FW_SEC)) {
+ DHD_ERROR(("%s pno freq(%d sec) is not valid\n",
+ __FUNCTION__, dtoh32(pfn_param.scan_freq)));
+ err = BCME_BADARG;
+ goto exit;
+ }
+ }
+
+ err = dhd_set_rand_mac_oui(dhd);
+ /* Ignore if chip doesn't support the feature */
+ if (err < 0 && err != BCME_UNSUPPORTED) {
+ DHD_ERROR(("%s : failed to set random mac for PNO scan, %d\n", __FUNCTION__, err));
goto exit;
}
+
+#ifdef GSCAN_SUPPORT
+ if (mode == DHD_PNO_BATCH_MODE ||
+ ((mode & DHD_PNO_GSCAN_MODE) && pno_params->params_gscan.mscan)) {
+#else
if (mode == DHD_PNO_BATCH_MODE) {
+#endif /* GSCAN_SUPPORT */
int _tmp = pfn_param.bestn;
/* set bestn to calculate the max mscan which firmware supports */
err = dhd_iovar(dhd, 0, "pfnmem", (char *)&_tmp, sizeof(_tmp), 1);
@@ -311,12 +493,13 @@
DHD_ERROR(("%s : failed to get pfnmem\n", __FUNCTION__));
goto exit;
}
- DHD_PNO((" returned mscan : %d, set bestn : %d\n", _tmp, pfn_param.bestn));
pfn_param.mscan = MIN(pfn_param.mscan, _tmp);
+ DHD_PNO((" returned mscan : %d, set bestn : %d mscan %d\n", _tmp, pfn_param.bestn,
+ pfn_param.mscan));
}
err = dhd_iovar(dhd, 0, "pfn_set", (char *)&pfn_param, sizeof(pfn_param), 1);
if (err < 0) {
- DHD_ERROR(("%s : failed to execute pfn_set\n", __FUNCTION__));
+ DHD_ERROR(("%s : failed to execute pfn_set %d\n", __FUNCTION__, err));
goto exit;
}
/* need to return mscan if this is for batch scan instead of err */
@@ -325,11 +508,12 @@
return err;
}
static int
-_dhd_pno_add_ssid(dhd_pub_t *dhd, wlc_ssid_t* ssids_list, int nssid)
+_dhd_pno_add_ssid(dhd_pub_t *dhd, wlc_ssid_ext_t* ssids_list, int nssid)
{
int err = BCME_OK;
int i = 0;
wl_pfn_t pfn_element;
+
NULL_CHECK(dhd, "dhd is NULL", err);
if (nssid) {
NULL_CHECK(ssids_list, "ssid list is NULL", err);
@@ -338,8 +522,9 @@
{
int j;
for (j = 0; j < nssid; j++) {
- DHD_PNO(("%d: scan for %s size = %d\n", j,
- ssids_list[j].SSID, ssids_list[j].SSID_len));
+ DHD_PNO(("%s size = %d hidden = %d flags = %x rssi_thresh %d\n",
+ ssids_list[j].SSID, ssids_list[j].SSID_len, ssids_list[j].hidden,
+ ssids_list[j].flags, ssids_list[j].rssi_thresh));
}
}
/* Check for broadcast ssid */
@@ -357,7 +542,17 @@
pfn_element.wpa_auth = htod32(WPA_AUTH_PFN_ANY);
pfn_element.wsec = htod32(0);
pfn_element.infra = htod32(1);
- pfn_element.flags = htod32(ENABLE << WL_PFN_HIDDEN_BIT);
+ if (ssids_list[i].hidden)
+ pfn_element.flags = htod32(ENABLE << WL_PFN_HIDDEN_BIT);
+ else
+ pfn_element.flags = 0;
+ pfn_element.flags |= htod32(ssids_list[i].flags);
+ /* If a single RSSI threshold is defined, use that */
+#ifdef PNO_MIN_RSSI_TRIGGER
+ pfn_element.flags |= ((PNO_MIN_RSSI_TRIGGER & 0xFF) << WL_PFN_RSSI_SHIFT);
+#else
+ pfn_element.flags |= ((ssids_list[i].rssi_thresh & 0xFF) << WL_PFN_RSSI_SHIFT);
+#endif /* PNO_MIN_RSSI_TRIGGER */
memcpy((char *)pfn_element.ssid.SSID, ssids_list[i].SSID,
ssids_list[i].SSID_len);
pfn_element.ssid.SSID_len = ssids_list[i].SSID_len;
@@ -441,11 +636,20 @@
if (skip_dfs && is_dfs(dtoh32(list->element[i])))
continue;
+ } else if (band == WLC_BAND_AUTO) {
+ if (skip_dfs || !is_dfs(dtoh32(list->element[i])))
+ continue;
+
} else { /* All channels */
if (skip_dfs && is_dfs(dtoh32(list->element[i])))
continue;
}
- d_chan_list[j++] = dtoh32(list->element[i]);
+ if (dtoh32(list->element[i]) <= CHANNEL_5G_MAX) {
+ d_chan_list[j++] = (uint16) dtoh32(list->element[i]);
+ } else {
+ err = BCME_BADCHAN;
+ goto exit;
+ }
}
*nchan = j;
exit:
@@ -632,13 +836,16 @@
if (nchan) {
NULL_CHECK(channel_list, "nchan is NULL", err);
}
+ if (nchan > WL_NUMCHANNELS) {
+ return BCME_RANGE;
+ }
DHD_PNO(("%s enter : nchan : %d\n", __FUNCTION__, nchan));
memset(&pfncfg_param, 0, sizeof(wl_pfn_cfg_t));
/* Setup default values */
pfncfg_param.reporttype = htod32(WL_PFN_REPORT_ALLNET);
pfncfg_param.channel_num = htod32(0);
- for (i = 0; i < nchan && nchan < WL_NUMCHANNELS; i++)
+ for (i = 0; i < nchan; i++)
pfncfg_param.channel_list[i] = channel_list[i];
pfncfg_param.channel_num = htod32(nchan);
@@ -736,7 +943,7 @@
if (nbssid) {
NULL_CHECK(p_pfn_bssid, "bssid list is NULL", err);
}
- err = dhd_iovar(dhd, 0, "pfn_add_bssid", (char *)&p_pfn_bssid,
+ err = dhd_iovar(dhd, 0, "pfn_add_bssid", (char *)p_pfn_bssid,
sizeof(wl_pfn_bssid_t) * nbssid, 1);
if (err < 0) {
DHD_ERROR(("%s : failed to execute pfn_cfg\n", __FUNCTION__));
@@ -745,6 +952,33 @@
exit:
return err;
}
+
+#ifdef GSCAN_SUPPORT
+static int
+_dhd_pno_add_significant_bssid(dhd_pub_t *dhd,
+ wl_pfn_significant_bssid_t *p_pfn_significant_bssid, int nbssid)
+{
+ int err = BCME_OK;
+ NULL_CHECK(dhd, "dhd is NULL", err);
+
+ if (!nbssid) {
+ err = BCME_ERROR;
+ goto exit;
+ }
+
+ NULL_CHECK(p_pfn_significant_bssid, "bssid list is NULL", err);
+
+ err = dhd_iovar(dhd, 0, "pfn_add_swc_bssid", (char *)p_pfn_significant_bssid,
+ sizeof(wl_pfn_significant_bssid_t) * nbssid, 1);
+ if (err < 0) {
+ DHD_ERROR(("%s : failed to execute pfn_significant_bssid %d\n", __FUNCTION__, err));
+ goto exit;
+ }
+exit:
+ return err;
+}
+#endif /* GSCAN_SUPPORT */
+
int
dhd_pno_stop_for_ssid(dhd_pub_t *dhd)
{
@@ -752,7 +986,7 @@
uint32 mode = 0;
dhd_pno_status_info_t *_pno_state;
dhd_pno_params_t *_params;
- wl_pfn_bssid_t *p_pfn_bssid;
+ wl_pfn_bssid_t *p_pfn_bssid = NULL;
NULL_CHECK(dhd, "dev is NULL", err);
NULL_CHECK(dhd->pno_state, "pno_state is NULL", err);
_pno_state = PNO_GET_PNOSTATE(dhd);
@@ -762,6 +996,36 @@
}
DHD_PNO(("%s enter\n", __FUNCTION__));
_pno_state->pno_mode &= ~DHD_PNO_LEGACY_MODE;
+#ifdef GSCAN_SUPPORT
+ if (_pno_state->pno_mode & DHD_PNO_GSCAN_MODE) {
+ struct dhd_pno_gscan_params *gscan_params;
+
+ _params = &_pno_state->pno_params_arr[INDEX_OF_GSCAN_PARAMS];
+ gscan_params = &_params->params_gscan;
+ if (gscan_params->mscan) {
+ /* retrieve the batching data from firmware into host */
+ err = dhd_wait_batch_results_complete(dhd);
+ if (err != BCME_OK)
+ goto exit;
+ }
+ /* save current pno_mode before calling dhd_pno_clean */
+ mutex_lock(&_pno_state->pno_mutex);
+ mode = _pno_state->pno_mode;
+ err = dhd_pno_clean(dhd);
+ if (err < 0) {
+ DHD_ERROR(("%s : failed to call dhd_pno_clean (err: %d)\n",
+ __FUNCTION__, err));
+ mutex_unlock(&_pno_state->pno_mutex);
+ goto exit;
+ }
+ /* restore previous pno_mode */
+ _pno_state->pno_mode = mode;
+ mutex_unlock(&_pno_state->pno_mutex);
+ /* Restart gscan */
+ err = dhd_pno_initiate_gscan_request(dhd, 1, 0);
+ goto exit;
+ }
+#endif /* GSCAN_SUPPORT */
/* restart Batch mode if the batch mode is on */
if (_pno_state->pno_mode & (DHD_PNO_BATCH_MODE | DHD_PNO_HOTLIST_MODE)) {
/* retrieve the batching data from firmware into host */
@@ -820,6 +1084,7 @@
}
}
exit:
+ kfree(p_pfn_bssid);
return err;
}
@@ -832,11 +1097,172 @@
return (_dhd_pno_enable(dhd, enable));
}
+static wlc_ssid_ext_t * dhd_pno_get_legacy_pno_ssid(dhd_pub_t *dhd,
+ dhd_pno_status_info_t *pno_state)
+{
+ int err = BCME_OK;
+ int i;
+ struct dhd_pno_ssid *iter, *next;
+ dhd_pno_params_t *_params1 = &pno_state->pno_params_arr[INDEX_OF_LEGACY_PARAMS];
+ wlc_ssid_ext_t *p_ssid_list;
+
+ p_ssid_list = kzalloc(sizeof(wlc_ssid_ext_t) *
+ _params1->params_legacy.nssid, GFP_KERNEL);
+ if (p_ssid_list == NULL) {
+ DHD_ERROR(("%s : failed to allocate wlc_ssid_ext_t array (count: %d)\n",
+ __FUNCTION__, _params1->params_legacy.nssid));
+ err = BCME_ERROR;
+ pno_state->pno_mode &= ~DHD_PNO_LEGACY_MODE;
+ goto exit;
+ }
+ i = 0;
+ /* convert dhd_pno_ssid to wlc_ssid_ext_t */
+ list_for_each_entry_safe(iter, next, &_params1->params_legacy.ssid_list, list) {
+ p_ssid_list[i].SSID_len = iter->SSID_len;
+ p_ssid_list[i].hidden = iter->hidden;
+ p_ssid_list[i].rssi_thresh = iter->rssi_thresh;
+ memcpy(p_ssid_list[i].SSID, iter->SSID, p_ssid_list[i].SSID_len);
+ i++;
+ }
+exit:
+ return p_ssid_list;
+}
+
+#ifdef GSCAN_SUPPORT
+static int dhd_epno_set_ssid(dhd_pub_t *dhd,
+ dhd_pno_status_info_t *pno_state)
+{
+ int err = BCME_OK;
+ dhd_epno_params_t *iter, *next;
+ dhd_pno_params_t *_params1 = &pno_state->pno_params_arr[INDEX_OF_GSCAN_PARAMS];
+ struct dhd_pno_gscan_params *gscan_params;
+ wlc_ssid_ext_t ssid_elem;
+ wl_pfn_ext_list_t *p_ssid_ext_elem = NULL;
+ uint32 mem_needed = 0, i = 0;
+ uint16 num_visible_epno_ssid;
+ uint8 flags;
+
+ gscan_params = &_params1->params_gscan;
+ num_visible_epno_ssid = gscan_params->num_visible_epno_ssid;
+
+ if (num_visible_epno_ssid) {
+ mem_needed = sizeof(wl_pfn_ext_list_t) + (sizeof(wl_pfn_ext_t) *
+ (num_visible_epno_ssid - 1));
+ p_ssid_ext_elem = kzalloc(mem_needed, GFP_KERNEL);
+ if (p_ssid_ext_elem == NULL) {
+ DHD_ERROR(("%s : failed to allocate memory %u\n",
+ __FUNCTION__, mem_needed));
+ err = BCME_NOMEM;
+ goto exit;
+ }
+ p_ssid_ext_elem->version = PFN_SSID_EXT_VERSION;
+ p_ssid_ext_elem->count = num_visible_epno_ssid;
+ }
+
+ DHD_PNO(("Total ssids %d, visible SSIDs %d\n", gscan_params->num_epno_ssid,
+ num_visible_epno_ssid));
+
+ /* convert dhd_pno_ssid to wlc_ssid_ext_t */
+ list_for_each_entry_safe(iter, next, &gscan_params->epno_ssid_list, list) {
+ if (iter->flags & DHD_PNO_USE_SSID) {
+ memset(&ssid_elem, 0, sizeof(ssid_elem));
+ ssid_elem.SSID_len = iter->ssid_len;
+ ssid_elem.hidden = TRUE;
+ flags = (iter->flags & DHD_EPNO_A_BAND_TRIG) ?
+ WL_PFN_SSID_A_BAND_TRIG: 0;
+ flags |= (iter->flags & DHD_EPNO_BG_BAND_TRIG) ?
+ WL_PFN_SSID_BG_BAND_TRIG: 0;
+ ssid_elem.flags = flags;
+ ssid_elem.rssi_thresh = iter->rssi_thresh;
+ memcpy(ssid_elem.SSID, iter->ssid, iter->ssid_len);
+ if ((err = _dhd_pno_add_ssid(dhd, &ssid_elem, 1)) < 0) {
+ DHD_ERROR(("failed to add ssid list (err %d) in firmware\n", err));
+ goto exit;
+ }
+ } else if (i < num_visible_epno_ssid) {
+ p_ssid_ext_elem->pfn_ext[i].rssi_thresh = iter->rssi_thresh;
+ switch (iter->auth) {
+ case DHD_PNO_AUTH_CODE_OPEN:
+ p_ssid_ext_elem->pfn_ext[i].wpa_auth = WPA_AUTH_DISABLED;
+ break;
+ case DHD_PNO_AUTH_CODE_PSK:
+ p_ssid_ext_elem->pfn_ext[i].wpa_auth =
+ (WPA2_AUTH_PSK | WPA_AUTH_PSK);
+ break;
+ case DHD_PNO_AUTH_CODE_EAPOL:
+ p_ssid_ext_elem->pfn_ext[i].wpa_auth =
+ (uint16)WPA_AUTH_PFN_ANY;
+ break;
+ default:
+ p_ssid_ext_elem->pfn_ext[i].wpa_auth =
+ (uint16)WPA_AUTH_PFN_ANY;
+ break;
+ }
+ memcpy(p_ssid_ext_elem->pfn_ext[i].ssid, iter->ssid, iter->ssid_len);
+ p_ssid_ext_elem->pfn_ext[i].ssid_len = iter->ssid_len;
+ iter->index = gscan_params->ssid_ext_last_used_index++;
+ flags = (iter->flags & DHD_EPNO_A_BAND_TRIG) ?
+ WL_PFN_SSID_A_BAND_TRIG: 0;
+ flags |= (iter->flags & DHD_EPNO_BG_BAND_TRIG) ?
+ WL_PFN_SSID_BG_BAND_TRIG: 0;
+ p_ssid_ext_elem->pfn_ext[i].flags = flags;
+ DHD_PNO(("SSID %s idx %d rssi thresh %d flags %x\n", iter->ssid,
+ iter->index, iter->rssi_thresh, flags));
+ i++;
+ }
+ }
+ if (num_visible_epno_ssid) {
+ err = dhd_iovar(dhd, 0, "pfn_add_ssid_ext", (char *)p_ssid_ext_elem,
+ mem_needed, 1);
+ if (err < 0) {
+ DHD_ERROR(("%s : failed to execute pfn_add_pno_ext_ssid %d\n", __FUNCTION__,
+ err));
+ }
+ }
+exit:
+ kfree(p_ssid_ext_elem);
+ return err;
+}
+#endif /* GSCAN_SUPPORT */
+
+static int
+dhd_pno_add_to_ssid_list(dhd_pno_params_t *params, wlc_ssid_ext_t *ssid_list,
+ int nssid)
+{
+ int ret = 0;
+ int i;
+ struct dhd_pno_ssid *_pno_ssid;
+
+ for (i = 0; i < nssid; i++) {
+ if (ssid_list[i].SSID_len > DOT11_MAX_SSID_LEN) {
+ DHD_ERROR(("%s : Invalid SSID length %d\n",
+ __FUNCTION__, ssid_list[i].SSID_len));
+ ret = BCME_ERROR;
+ goto exit;
+ }
+ _pno_ssid = kzalloc(sizeof(struct dhd_pno_ssid), GFP_KERNEL);
+ if (_pno_ssid == NULL) {
+ DHD_ERROR(("%s : failed to allocate struct dhd_pno_ssid\n",
+ __FUNCTION__));
+ ret = BCME_ERROR;
+ goto exit;
+ }
+ _pno_ssid->SSID_len = ssid_list[i].SSID_len;
+ _pno_ssid->hidden = ssid_list[i].hidden;
+ _pno_ssid->rssi_thresh = ssid_list[i].rssi_thresh;
+ memcpy(_pno_ssid->SSID, ssid_list[i].SSID, _pno_ssid->SSID_len);
+ list_add_tail(&_pno_ssid->list, &params->params_legacy.ssid_list);
+ params->params_legacy.nssid++;
+ }
+
+exit:
+ return ret;
+}
+
int
-dhd_pno_set_for_ssid(dhd_pub_t *dhd, wlc_ssid_t* ssid_list, int nssid,
+dhd_pno_set_for_ssid(dhd_pub_t *dhd, wlc_ssid_ext_t* ssid_list, int nssid,
uint16 scan_fr, int pno_repeat, int pno_freq_expo_max, uint16 *channel_list, int nchan)
{
- struct dhd_pno_ssid *_pno_ssid;
dhd_pno_params_t *_params;
dhd_pno_params_t *_params2;
dhd_pno_status_info_t *_pno_state;
@@ -851,21 +1277,27 @@
if (!dhd_support_sta_mode(dhd)) {
err = BCME_BADOPTION;
- goto exit;
+ goto exit_no_clear;
}
DHD_PNO(("%s enter : scan_fr :%d, pno_repeat :%d,"
"pno_freq_expo_max: %d, nchan :%d\n", __FUNCTION__,
scan_fr, pno_repeat, pno_freq_expo_max, nchan));
_params = &(_pno_state->pno_params_arr[INDEX_OF_LEGACY_PARAMS]);
+ /* If GSCAN is also ON will handle this down below */
+#ifdef GSCAN_SUPPORT
+ if (_pno_state->pno_mode & DHD_PNO_LEGACY_MODE &&
+ !(_pno_state->pno_mode & DHD_PNO_GSCAN_MODE)) {
+#else
if (_pno_state->pno_mode & DHD_PNO_LEGACY_MODE) {
+#endif /* GSCAN_SUPPORT */
DHD_ERROR(("%s : Legacy PNO mode was already started, "
"will disable previous one to start new one\n", __FUNCTION__));
err = dhd_pno_stop_for_ssid(dhd);
if (err < 0) {
DHD_ERROR(("%s : failed to stop legacy PNO (err %d)\n",
__FUNCTION__, err));
- goto exit;
+ goto exit_no_clear;
}
}
_pno_state->pno_mode |= DHD_PNO_LEGACY_MODE;
@@ -873,14 +1305,29 @@
if (err < 0) {
DHD_ERROR(("%s : failed to reinitialize profile (err %d)\n",
__FUNCTION__, err));
- goto exit;
+ goto exit_no_clear;
}
memset(_chan_list, 0, sizeof(_chan_list));
- tot_nchan = nchan;
+ tot_nchan = MIN(nchan, WL_NUMCHANNELS);
if (tot_nchan > 0 && channel_list) {
- for (i = 0; i < nchan; i++)
+ for (i = 0; i < tot_nchan; i++)
_params->params_legacy.chan_list[i] = _chan_list[i] = channel_list[i];
}
+#ifdef GSCAN_SUPPORT
+ else {
+ tot_nchan = WL_NUMCHANNELS;
+ err = _dhd_pno_get_channels(dhd, _chan_list, &tot_nchan,
+ (WLC_BAND_2G | WLC_BAND_5G), FALSE);
+ if (err < 0) {
+ tot_nchan = 0;
+ DHD_PNO(("Could not get channel list for PNO SSID\n"));
+ } else {
+ for (i = 0; i < tot_nchan; i++)
+ _params->params_legacy.chan_list[i] = _chan_list[i];
+ }
+ }
+#endif /* GSCAN_SUPPORT */
+
if (_pno_state->pno_mode & (DHD_PNO_BATCH_MODE | DHD_PNO_HOTLIST_MODE)) {
DHD_PNO(("BATCH SCAN is on progress in firmware\n"));
/* retrieve the batching data from firmware into host */
@@ -890,23 +1337,23 @@
err = _dhd_pno_enable(dhd, PNO_OFF);
if (err < 0) {
DHD_ERROR(("%s : failed to disable PNO\n", __FUNCTION__));
- goto exit;
+ goto exit_no_clear;
}
/* restore the previous mode */
_pno_state->pno_mode = mode;
/* use superset of channel list between two mode */
if (_pno_state->pno_mode & DHD_PNO_BATCH_MODE) {
_params2 = &(_pno_state->pno_params_arr[INDEX_OF_BATCH_PARAMS]);
- if (_params2->params_batch.nchan > 0 && nchan > 0) {
+ if (_params2->params_batch.nchan > 0 && tot_nchan > 0) {
err = _dhd_pno_chan_merge(_chan_list, &tot_nchan,
&_params2->params_batch.chan_list[0],
_params2->params_batch.nchan,
- &channel_list[0], nchan);
+ &channel_list[0], tot_nchan);
if (err < 0) {
DHD_ERROR(("%s : failed to merge channel list"
" between legacy and batch\n",
__FUNCTION__));
- goto exit;
+ goto exit_no_clear;
}
} else {
DHD_PNO(("superset channel will use"
@@ -914,16 +1361,16 @@
}
} else if (_pno_state->pno_mode & DHD_PNO_HOTLIST_MODE) {
_params2 = &(_pno_state->pno_params_arr[INDEX_OF_HOTLIST_PARAMS]);
- if (_params2->params_hotlist.nchan > 0 && nchan > 0) {
+ if (_params2->params_hotlist.nchan > 0 && tot_nchan > 0) {
err = _dhd_pno_chan_merge(_chan_list, &tot_nchan,
&_params2->params_hotlist.chan_list[0],
_params2->params_hotlist.nchan,
- &channel_list[0], nchan);
+ &channel_list[0], tot_nchan);
if (err < 0) {
DHD_ERROR(("%s : failed to merge channel list"
" between legacy and hotlist\n",
__FUNCTION__));
- goto exit;
+ goto exit_no_clear;
}
}
}
@@ -931,9 +1378,21 @@
_params->params_legacy.scan_fr = scan_fr;
_params->params_legacy.pno_repeat = pno_repeat;
_params->params_legacy.pno_freq_expo_max = pno_freq_expo_max;
- _params->params_legacy.nchan = nchan;
- _params->params_legacy.nssid = nssid;
+ _params->params_legacy.nchan = tot_nchan;
+ _params->params_legacy.nssid = 0;
INIT_LIST_HEAD(&_params->params_legacy.ssid_list);
+#ifdef GSCAN_SUPPORT
+ /* dhd_pno_initiate_gscan_request will handle simultaneous Legacy PNO and GSCAN */
+ if (_pno_state->pno_mode & DHD_PNO_GSCAN_MODE) {
+ if (dhd_pno_add_to_ssid_list(_params, ssid_list, nssid) < 0) {
+ err = BCME_ERROR;
+ goto exit;
+ }
+ DHD_PNO(("GSCAN mode is ON! Will restart GSCAN+Legacy PNO\n"));
+ err = dhd_pno_initiate_gscan_request(dhd, 1, 0);
+ goto exit;
+ }
+#endif /* GSCAN_SUPPORT */
if ((err = _dhd_pno_set(dhd, _params, DHD_PNO_LEGACY_MODE)) < 0) {
DHD_ERROR(("failed to set call pno_set (err %d) in firmware\n", err));
goto exit;
@@ -942,17 +1401,9 @@
DHD_ERROR(("failed to add ssid list(err %d), %d in firmware\n", err, nssid));
goto exit;
}
- for (i = 0; i < nssid; i++) {
- _pno_ssid = kzalloc(sizeof(struct dhd_pno_ssid), GFP_KERNEL);
- if (_pno_ssid == NULL) {
- DHD_ERROR(("%s : failed to allocate struct dhd_pno_ssid\n",
- __FUNCTION__));
- goto exit;
- }
- _pno_ssid->SSID_len = ssid_list[i].SSID_len;
- memcpy(_pno_ssid->SSID, ssid_list[i].SSID, _pno_ssid->SSID_len);
- list_add_tail(&_pno_ssid->list, &_params->params_legacy.ssid_list);
-
+ if (dhd_pno_add_to_ssid_list(_params, ssid_list, nssid) < 0) {
+ err = BCME_ERROR;
+ goto exit;
}
if (tot_nchan > 0) {
if ((err = _dhd_pno_cfg(dhd, _chan_list, tot_nchan)) < 0) {
@@ -966,9 +1417,20 @@
DHD_ERROR(("%s : failed to enable PNO\n", __FUNCTION__));
}
exit:
- /* clear mode in case of error */
if (err < 0)
- _pno_state->pno_mode &= ~DHD_PNO_LEGACY_MODE;
+ _dhd_pno_reinitialize_prof(dhd, _params, DHD_PNO_LEGACY_MODE);
+exit_no_clear:
+ /* clear mode in case of error */
+ if (err < 0) {
+ int ret = dhd_pno_clean(dhd);
+
+ if (ret < 0) {
+ DHD_ERROR(("%s : dhd_pno_clean failure (err: %d)\n",
+ __FUNCTION__, ret));
+ } else {
+ _pno_state->pno_mode &= ~DHD_PNO_LEGACY_MODE;
+ }
+ }
return err;
}
int
@@ -978,11 +1440,10 @@
uint16 _chan_list[WL_NUMCHANNELS];
int rem_nchan = 0, tot_nchan = 0;
int mode = 0, mscan = 0;
- int i = 0;
dhd_pno_params_t *_params;
dhd_pno_params_t *_params2;
dhd_pno_status_info_t *_pno_state;
- wlc_ssid_t *p_ssid_list = NULL;
+ wlc_ssid_ext_t *p_ssid_list = NULL;
NULL_CHECK(dhd, "dhd is NULL", err);
NULL_CHECK(dhd->pno_state, "pno_state is NULL", err);
NULL_CHECK(batch_params, "batch_params is NULL", err);
@@ -1052,7 +1513,6 @@
tot_nchan = _params->params_batch.nchan;
}
if (_pno_state->pno_mode & DHD_PNO_LEGACY_MODE) {
- struct dhd_pno_ssid *iter, *next;
DHD_PNO(("PNO SSID is on progress in firmware\n"));
/* store current pno_mode before disabling pno */
mode = _pno_state->pno_mode;
@@ -1079,22 +1539,12 @@
} else {
DHD_PNO(("superset channel will use all channels in firmware\n"));
}
- p_ssid_list = kzalloc(sizeof(wlc_ssid_t) *
- _params2->params_legacy.nssid, GFP_KERNEL);
- if (p_ssid_list == NULL) {
- DHD_ERROR(("%s : failed to allocate wlc_ssid_t array (count: %d)",
- __FUNCTION__, _params2->params_legacy.nssid));
- err = BCME_ERROR;
- _pno_state->pno_mode &= ~DHD_PNO_LEGACY_MODE;
+ p_ssid_list = dhd_pno_get_legacy_pno_ssid(dhd, _pno_state);
+ if (!p_ssid_list) {
+ err = BCME_NOMEM;
+ DHD_ERROR(("failed to get Legacy PNO SSID list\n"));
goto exit;
}
- i = 0;
- /* convert dhd_pno_ssid to dhd_pno_ssid */
- list_for_each_entry_safe(iter, next, &_params2->params_legacy.ssid_list, list) {
- p_ssid_list[i].SSID_len = iter->SSID_len;
- memcpy(p_ssid_list->SSID, iter->SSID, p_ssid_list[i].SSID_len);
- i++;
- }
if ((err = _dhd_pno_add_ssid(dhd, p_ssid_list,
_params2->params_legacy.nssid)) < 0) {
DHD_ERROR(("failed to add ssid list (err %d) in firmware\n", err));
@@ -1128,11 +1578,1279 @@
/* return #max scan firmware can do */
err = mscan;
}
- if (p_ssid_list)
- kfree(p_ssid_list);
+ kfree(p_ssid_list);
return err;
}
+
+#ifdef GSCAN_SUPPORT
+static void dhd_pno_reset_cfg_gscan(dhd_pno_params_t *_params,
+ dhd_pno_status_info_t *_pno_state, uint8 flags)
+{
+ DHD_PNO(("%s enter\n", __FUNCTION__));
+
+ if (flags & GSCAN_FLUSH_SCAN_CFG) {
+ _params->params_gscan.bestn = 0;
+ _params->params_gscan.mscan = 0;
+ _params->params_gscan.buffer_threshold = GSCAN_BATCH_NO_THR_SET;
+ _params->params_gscan.scan_fr = 0;
+ _params->params_gscan.send_all_results_flag = 0;
+ memset(_params->params_gscan.channel_bucket, 0,
+ _params->params_gscan.nchannel_buckets *
+ sizeof(struct dhd_pno_gscan_channel_bucket));
+ _params->params_gscan.nchannel_buckets = 0;
+ DHD_PNO(("Flush Scan config\n"));
+ }
+ if (flags & GSCAN_FLUSH_HOTLIST_CFG) {
+ struct dhd_pno_bssid *iter, *next;
+ if (_params->params_gscan.nbssid_hotlist > 0) {
+ list_for_each_entry_safe(iter, next,
+ &_params->params_gscan.hotlist_bssid_list, list) {
+ list_del(&iter->list);
+ kfree(iter);
+ }
+ }
+ _params->params_gscan.nbssid_hotlist = 0;
+ DHD_PNO(("Flush Hotlist Config\n"));
+ }
+ if (flags & GSCAN_FLUSH_SIGNIFICANT_CFG) {
+ dhd_pno_significant_bssid_t *iter, *next;
+
+ if (_params->params_gscan.nbssid_significant_change > 0) {
+ list_for_each_entry_safe(iter, next,
+ &_params->params_gscan.significant_bssid_list, list) {
+ list_del(&iter->list);
+ kfree(iter);
+ }
+ }
+ _params->params_gscan.nbssid_significant_change = 0;
+ DHD_PNO(("Flush Significant Change Config\n"));
+ }
+ if (flags & GSCAN_FLUSH_EPNO_CFG) {
+ dhd_epno_params_t *iter, *next;
+
+ if (_params->params_gscan.num_epno_ssid > 0) {
+ list_for_each_entry_safe(iter, next,
+ &_params->params_gscan.epno_ssid_list, list) {
+ list_del(&iter->list);
+ kfree(iter);
+ }
+ }
+ _params->params_gscan.num_epno_ssid = 0;
+ _params->params_gscan.num_visible_epno_ssid = 0;
+ _params->params_gscan.ssid_ext_last_used_index = 0;
+ DHD_PNO(("Flushed ePNO Config\n"));
+ }
+
+ return;
+}
+
+int dhd_pno_lock_batch_results(dhd_pub_t *dhd)
+{
+ dhd_pno_status_info_t *_pno_state;
+ int err = BCME_OK;
+
+ NULL_CHECK(dhd, "dhd is NULL", err);
+ NULL_CHECK(dhd->pno_state, "pno_state is NULL", err);
+ _pno_state = PNO_GET_PNOSTATE(dhd);
+ mutex_lock(&_pno_state->pno_mutex);
+ return err;
+}
+
+void dhd_pno_unlock_batch_results(dhd_pub_t *dhd)
+{
+ dhd_pno_status_info_t *_pno_state;
+ _pno_state = PNO_GET_PNOSTATE(dhd);
+ mutex_unlock(&_pno_state->pno_mutex);
+ return;
+}
+
+int dhd_wait_batch_results_complete(dhd_pub_t *dhd)
+{
+ dhd_pno_status_info_t *_pno_state;
+ dhd_pno_params_t *_params;
+ int err = BCME_OK;
+
+ NULL_CHECK(dhd, "dhd is NULL", err);
+ NULL_CHECK(dhd->pno_state, "pno_state is NULL", err);
+ _pno_state = PNO_GET_PNOSTATE(dhd);
+ _params = &_pno_state->pno_params_arr[INDEX_OF_GSCAN_PARAMS];
+
+ /* Has the workqueue finished its job already?? */
+ if (_params->params_gscan.get_batch_flag == GSCAN_BATCH_RETRIEVAL_IN_PROGRESS) {
+ DHD_PNO(("%s: Waiting to complete retrieval..\n", __FUNCTION__));
+ wait_event_interruptible_timeout(_pno_state->batch_get_wait,
+ is_batch_retrieval_complete(&_params->params_gscan),
+ msecs_to_jiffies(GSCAN_BATCH_GET_MAX_WAIT));
+ } else { /* GSCAN_BATCH_RETRIEVAL_COMPLETE */
+ gscan_results_cache_t *iter;
+ uint16 num_results = 0;
+
+ mutex_lock(&_pno_state->pno_mutex);
+ iter = _params->params_gscan.gscan_batch_cache;
+ while (iter) {
+ num_results += iter->tot_count - iter->tot_consumed;
+ iter = iter->next;
+ }
+ mutex_unlock(&_pno_state->pno_mutex);
+
+ /* All results consumed/No results cached??
+ * Get fresh results from FW
+ */
+ if ((_pno_state->pno_mode & DHD_PNO_GSCAN_MODE) && !num_results) {
+ DHD_PNO(("%s: No results cached, getting from FW..\n", __FUNCTION__));
+ err = dhd_retreive_batch_scan_results(dhd);
+ if (err == BCME_OK) {
+ wait_event_interruptible_timeout(_pno_state->batch_get_wait,
+ is_batch_retrieval_complete(&_params->params_gscan),
+ msecs_to_jiffies(GSCAN_BATCH_GET_MAX_WAIT));
+ }
+ }
+ }
+ DHD_PNO(("%s: Wait complete\n", __FUNCTION__));
+ return err;
+}
+
+static void *dhd_get_gscan_batch_results(dhd_pub_t *dhd, uint32 *len)
+{
+ gscan_results_cache_t *iter, *results;
+ dhd_pno_status_info_t *_pno_state;
+ dhd_pno_params_t *_params;
+ uint16 num_scan_ids = 0, num_results = 0;
+
+ _pno_state = PNO_GET_PNOSTATE(dhd);
+ _params = &_pno_state->pno_params_arr[INDEX_OF_GSCAN_PARAMS];
+
+ iter = results = _params->params_gscan.gscan_batch_cache;
+ while (iter) {
+ num_results += iter->tot_count - iter->tot_consumed;
+ num_scan_ids++;
+ iter = iter->next;
+ }
+
+ *len = ((num_results << 16) | (num_scan_ids));
+ return results;
+}
+
+void * dhd_pno_get_gscan(dhd_pub_t *dhd, dhd_pno_gscan_cmd_cfg_t type,
+ void *info, uint32 *len)
+{
+ void *ret = NULL;
+ dhd_pno_gscan_capabilities_t *ptr;
+ dhd_epno_params_t *epno_params;
+ dhd_pno_params_t *_params;
+ dhd_pno_status_info_t *_pno_state;
+
+ if (!dhd || !dhd->pno_state) {
+ DHD_ERROR(("NULL POINTER : %s\n", __FUNCTION__));
+ return NULL;
+ }
+ _pno_state = PNO_GET_PNOSTATE(dhd);
+ _params = &_pno_state->pno_params_arr[INDEX_OF_GSCAN_PARAMS];
+ if (!len) {
+ DHD_ERROR(("%s: len is NULL\n", __FUNCTION__));
+ return NULL;
+ }
+
+ switch (type) {
+ case DHD_PNO_GET_CAPABILITIES:
+ ptr = (dhd_pno_gscan_capabilities_t *)
+ kmalloc(sizeof(dhd_pno_gscan_capabilities_t), GFP_KERNEL);
+ if (!ptr)
+ break;
+ /* Hardcoding these values for now, need to get
+ * these values from FW, will change in a later check-in
+ */
+ ptr->max_scan_cache_size = GSCAN_MAX_AP_CACHE;
+ ptr->max_scan_buckets = GSCAN_MAX_CH_BUCKETS;
+ ptr->max_ap_cache_per_scan = GSCAN_MAX_AP_CACHE_PER_SCAN;
+ ptr->max_rssi_sample_size = PFN_SWC_RSSI_WINDOW_MAX;
+ ptr->max_scan_reporting_threshold = 100;
+ ptr->max_hotlist_aps = PFN_HOTLIST_MAX_NUM_APS;
+ ptr->max_significant_wifi_change_aps = PFN_SWC_MAX_NUM_APS;
+ ptr->max_epno_ssid_crc32 = MAX_EPNO_SSID_NUM;
+ ptr->max_epno_hidden_ssid = MAX_EPNO_HIDDEN_SSID;
+ ptr->max_white_list_ssid = MAX_WHITELIST_SSID;
+ ret = (void *)ptr;
+ *len = sizeof(dhd_pno_gscan_capabilities_t);
+ break;
+
+ case DHD_PNO_GET_BATCH_RESULTS:
+ ret = dhd_get_gscan_batch_results(dhd, len);
+ break;
+ case DHD_PNO_GET_CHANNEL_LIST:
+ if (info) {
+ uint16 ch_list[WL_NUMCHANNELS];
+ uint32 *ptr, mem_needed, i;
+ int32 err, nchan = WL_NUMCHANNELS;
+ uint32 *gscan_band = (uint32 *) info;
+ uint8 band = 0;
+
+ /* No band specified?, nothing to do */
+ if ((*gscan_band & GSCAN_BAND_MASK) == 0) {
+ DHD_PNO(("No band specified\n"));
+ *len = 0;
+ break;
+ }
+
+ /* HAL and DHD use different bits for 2.4G and
+ * 5G in bitmap. Hence translating it here...
+ */
+ if (*gscan_band & GSCAN_BG_BAND_MASK)
+ band |= WLC_BAND_2G;
+ if (*gscan_band & GSCAN_A_BAND_MASK)
+ band |= WLC_BAND_5G;
+
+ err = _dhd_pno_get_channels(dhd, ch_list, &nchan,
+ (band & GSCAN_ABG_BAND_MASK),
+ !(*gscan_band & GSCAN_DFS_MASK));
+
+ if (err < 0) {
+ DHD_ERROR(("%s: failed to get valid channel list\n",
+ __FUNCTION__));
+ *len = 0;
+ } else {
+ mem_needed = sizeof(uint32) * nchan;
+ ptr = (uint32 *) kmalloc(mem_needed, GFP_KERNEL);
+ if (!ptr) {
+ DHD_ERROR(("%s: Unable to malloc %d bytes\n",
+ __FUNCTION__, mem_needed));
+ break;
+ }
+ for (i = 0; i < nchan; i++) {
+ ptr[i] = wf_channel2mhz(ch_list[i],
+ (ch_list[i] <= CH_MAX_2G_CHANNEL?
+ WF_CHAN_FACTOR_2_4_G : WF_CHAN_FACTOR_5_G));
+ }
+ ret = ptr;
+ *len = mem_needed;
+ }
+ } else {
+ *len = 0;
+ DHD_ERROR(("%s: info buffer is NULL\n", __FUNCTION__));
+ }
+ break;
+ case DHD_PNO_GET_EPNO_SSID_ELEM:
+ if (_params->params_gscan.num_epno_ssid >=
+ (MAX_EPNO_SSID_NUM + MAX_EPNO_HIDDEN_SSID)) {
+ DHD_ERROR(("Excessive number of ePNO SSIDs programmed %d\n",
+ _params->params_gscan.num_epno_ssid));
+ return NULL;
+ }
+
+ if (!_params->params_gscan.num_epno_ssid)
+ INIT_LIST_HEAD(&_params->params_gscan.epno_ssid_list);
+
+ epno_params = kzalloc(sizeof(dhd_epno_params_t), GFP_KERNEL);
+ if (!epno_params) {
+ DHD_ERROR(("EPNO ssid: cannot alloc %zd bytes",
+ sizeof(dhd_epno_params_t)));
+ return NULL;
+ }
+ _params->params_gscan.num_epno_ssid++;
+ epno_params->index = DHD_EPNO_DEFAULT_INDEX;
+ list_add_tail(&epno_params->list, &_params->params_gscan.epno_ssid_list);
+ ret = epno_params;
+ break;
+ default:
+ break;
+ }
+
+ return ret;
+
+}
+
+int dhd_pno_set_cfg_gscan(dhd_pub_t *dhd, dhd_pno_gscan_cmd_cfg_t type,
+ void *buf, uint8 flush)
+{
+ int err = BCME_OK;
+ dhd_pno_params_t *_params;
+ int i;
+ dhd_pno_status_info_t *_pno_state;
+
+ NULL_CHECK(dhd, "dhd is NULL", err);
+ NULL_CHECK(dhd->pno_state, "pno_state is NULL", err);
+
+ DHD_PNO(("%s enter\n", __FUNCTION__));
+
+ _pno_state = PNO_GET_PNOSTATE(dhd);
+ _params = &_pno_state->pno_params_arr[INDEX_OF_GSCAN_PARAMS];
+ mutex_lock(&_pno_state->pno_mutex);
+
+ switch (type) {
+ case DHD_PNO_BATCH_SCAN_CFG_ID:
+ {
+ gscan_batch_params_t *ptr = (gscan_batch_params_t *)buf;
+ _params->params_gscan.bestn = ptr->bestn;
+ _params->params_gscan.mscan = ptr->mscan;
+ _params->params_gscan.buffer_threshold = ptr->buffer_threshold;
+ }
+ break;
+ case DHD_PNO_GEOFENCE_SCAN_CFG_ID:
+ {
+ gscan_hotlist_scan_params_t *ptr = (gscan_hotlist_scan_params_t *)buf;
+ struct dhd_pno_bssid *_pno_bssid;
+ struct bssid_t *bssid_ptr;
+ int8 flags;
+
+ if (flush) {
+ dhd_pno_reset_cfg_gscan(_params, _pno_state,
+ GSCAN_FLUSH_HOTLIST_CFG);
+ }
+
+ if (!ptr->nbssid)
+ break;
+
+ if (!_params->params_gscan.nbssid_hotlist)
+ INIT_LIST_HEAD(&_params->params_gscan.hotlist_bssid_list);
+
+ if ((_params->params_gscan.nbssid_hotlist +
+ ptr->nbssid) > PFN_SWC_MAX_NUM_APS) {
+ DHD_ERROR(("Excessive number of hotlist APs programmed %d\n",
+ (_params->params_gscan.nbssid_hotlist +
+ ptr->nbssid)));
+ err = BCME_RANGE;
+ goto exit;
+ }
+
+ for (i = 0, bssid_ptr = ptr->bssid; i < ptr->nbssid; i++, bssid_ptr++) {
+ _pno_bssid = kzalloc(sizeof(struct dhd_pno_bssid), GFP_KERNEL);
+
+ if (!_pno_bssid) {
+ DHD_ERROR(("_pno_bssid is NULL, cannot kzalloc %zd bytes",
+ sizeof(struct dhd_pno_bssid)));
+ err = BCME_NOMEM;
+ goto exit;
+ }
+ memcpy(&_pno_bssid->macaddr, &bssid_ptr->macaddr, ETHER_ADDR_LEN);
+
+ flags = (int8) bssid_ptr->rssi_reporting_threshold;
+ _pno_bssid->flags = flags << WL_PFN_RSSI_SHIFT;
+ list_add_tail(&_pno_bssid->list,
+ &_params->params_gscan.hotlist_bssid_list);
+ }
+
+ _params->params_gscan.nbssid_hotlist += ptr->nbssid;
+ _params->params_gscan.lost_ap_window = ptr->lost_ap_window;
+ }
+ break;
+ case DHD_PNO_SIGNIFICANT_SCAN_CFG_ID:
+ {
+ gscan_swc_params_t *ptr = (gscan_swc_params_t *)buf;
+ dhd_pno_significant_bssid_t *_pno_significant_change_bssid;
+ wl_pfn_significant_bssid_t *significant_bssid_ptr;
+
+ if (flush) {
+ dhd_pno_reset_cfg_gscan(_params, _pno_state,
+ GSCAN_FLUSH_SIGNIFICANT_CFG);
+ }
+
+ if (!ptr->nbssid)
+ break;
+
+ if (!_params->params_gscan.nbssid_significant_change)
+ INIT_LIST_HEAD(&_params->params_gscan.significant_bssid_list);
+
+ if ((_params->params_gscan.nbssid_significant_change +
+ ptr->nbssid) > PFN_SWC_MAX_NUM_APS) {
+ DHD_ERROR(("Excessive number of SWC APs programmed %d\n",
+ (_params->params_gscan.nbssid_significant_change +
+ ptr->nbssid)));
+ err = BCME_RANGE;
+ goto exit;
+ }
+
+ for (i = 0, significant_bssid_ptr = ptr->bssid_elem_list;
+ i < ptr->nbssid; i++, significant_bssid_ptr++) {
+ _pno_significant_change_bssid =
+ kzalloc(sizeof(dhd_pno_significant_bssid_t),
+ GFP_KERNEL);
+
+ if (!_pno_significant_change_bssid) {
+ DHD_ERROR(("SWC bssidptr is NULL, cannot kzalloc %zd bytes",
+ sizeof(dhd_pno_significant_bssid_t)));
+ err = BCME_NOMEM;
+ goto exit;
+ }
+ memcpy(&_pno_significant_change_bssid->BSSID,
+ &significant_bssid_ptr->macaddr, ETHER_ADDR_LEN);
+ _pno_significant_change_bssid->rssi_low_threshold =
+ significant_bssid_ptr->rssi_low_threshold;
+ _pno_significant_change_bssid->rssi_high_threshold =
+ significant_bssid_ptr->rssi_high_threshold;
+ list_add_tail(&_pno_significant_change_bssid->list,
+ &_params->params_gscan.significant_bssid_list);
+ }
+
+ _params->params_gscan.swc_nbssid_threshold = ptr->swc_threshold;
+ _params->params_gscan.swc_rssi_window_size = ptr->rssi_window;
+ _params->params_gscan.lost_ap_window = ptr->lost_ap_window;
+ _params->params_gscan.nbssid_significant_change += ptr->nbssid;
+
+ }
+ break;
+ case DHD_PNO_SCAN_CFG_ID:
+ {
+ int i, k;
+ uint16 band;
+ gscan_scan_params_t *ptr = (gscan_scan_params_t *)buf;
+ struct dhd_pno_gscan_channel_bucket *ch_bucket;
+
+ if (ptr->nchannel_buckets <= GSCAN_MAX_CH_BUCKETS) {
+ _params->params_gscan.nchannel_buckets = ptr->nchannel_buckets;
+
+ memcpy(_params->params_gscan.channel_bucket, ptr->channel_bucket,
+ _params->params_gscan.nchannel_buckets *
+ sizeof(struct dhd_pno_gscan_channel_bucket));
+ ch_bucket = _params->params_gscan.channel_bucket;
+
+ for (i = 0; i < ptr->nchannel_buckets; i++) {
+ band = ch_bucket[i].band;
+ for (k = 0; k < ptr->channel_bucket[i].num_channels; k++) {
+ ch_bucket[i].chan_list[k] =
+ wf_mhz2channel(ptr->channel_bucket[i].chan_list[k],
+ 0);
+ }
+ ch_bucket[i].band = 0;
+ /* HAL and DHD use different bits for 2.4G and
+ * 5G in bitmap. Hence translating it here...
+ */
+ if (band & GSCAN_BG_BAND_MASK)
+ ch_bucket[i].band |= WLC_BAND_2G;
+ if (band & GSCAN_A_BAND_MASK)
+ ch_bucket[i].band |= WLC_BAND_5G;
+ if (band & GSCAN_DFS_MASK)
+ ch_bucket[i].band |= GSCAN_DFS_MASK;
+
+ DHD_PNO(("band %d report_flag %d\n", ch_bucket[i].band,
+ ch_bucket[i].report_flag));
+ }
+
+ for (i = 0; i < ptr->nchannel_buckets; i++) {
+ ch_bucket[i].bucket_freq_multiple =
+ ch_bucket[i].bucket_freq_multiple/ptr->scan_fr;
+ ch_bucket[i].bucket_max_multiple =
+ ch_bucket[i].bucket_max_multiple/ptr->scan_fr;
+ DHD_PNO(("mult %d max_mult %d\n", ch_bucket[i].bucket_freq_multiple,
+ ch_bucket[i].bucket_max_multiple));
+ }
+ _params->params_gscan.scan_fr = ptr->scan_fr;
+
+ DHD_PNO(("num_buckets %d scan_fr %d\n", ptr->nchannel_buckets,
+ _params->params_gscan.scan_fr));
+ } else {
+ err = BCME_BADARG;
+ }
+ }
+ break;
+ case DHD_PNO_EPNO_CFG_ID:
+ if (flush) {
+ dhd_pno_reset_cfg_gscan(_params, _pno_state,
+ GSCAN_FLUSH_EPNO_CFG);
+ } else {
+ _params->params_gscan.num_visible_epno_ssid += *((uint16 *)buf);
+ }
+ break;
+ default:
+ err = BCME_BADARG;
+ break;
+ }
+exit:
+ mutex_unlock(&_pno_state->pno_mutex);
+ return err;
+
+}
+
+
+static bool
+validate_gscan_params(struct dhd_pno_gscan_params *gscan_params)
+{
+ unsigned int i, k;
+
+ if (!gscan_params->scan_fr || !gscan_params->nchannel_buckets) {
+ DHD_ERROR(("%s : Scan freq (%d) or number of channel buckets (%d) is zero\n",
+ __FUNCTION__, gscan_params->scan_fr, gscan_params->nchannel_buckets));
+ return false;
+ }
+
+ for (i = 0; i < gscan_params->nchannel_buckets; i++) {
+ if (!gscan_params->channel_bucket[i].band) {
+ for (k = 0; k < gscan_params->channel_bucket[i].num_channels; k++) {
+ if (gscan_params->channel_bucket[i].chan_list[k] > CHANNEL_5G_MAX) {
+ DHD_ERROR(("%s : Unknown channel %d\n", __FUNCTION__,
+ gscan_params->channel_bucket[i].chan_list[k]));
+ return false;
+ }
+ }
+ }
+ }
+
+ return true;
+}
+
+static int
+dhd_pno_set_for_gscan(dhd_pub_t *dhd, struct dhd_pno_gscan_params *gscan_params)
+{
+ int err = BCME_OK;
+ int mode, i = 0, k;
+ uint16 _chan_list[WL_NUMCHANNELS];
+ int tot_nchan = 0;
+ int num_buckets_to_fw, tot_num_buckets, gscan_param_size;
+ dhd_pno_status_info_t *_pno_state = PNO_GET_PNOSTATE(dhd);
+ wl_pfn_gscan_ch_bucket_cfg_t *ch_bucket = NULL;
+ wl_pfn_gscan_cfg_t *pfn_gscan_cfg_t = NULL;
+ wl_pfn_significant_bssid_t *p_pfn_significant_bssid = NULL;
+ wl_pfn_bssid_t *p_pfn_bssid = NULL;
+ wlc_ssid_ext_t *pssid_list = NULL;
+ dhd_pno_params_t *params_legacy;
+ dhd_pno_params_t *_params;
+
+ params_legacy = &_pno_state->pno_params_arr[INDEX_OF_LEGACY_PARAMS];
+ _params = &_pno_state->pno_params_arr[INDEX_OF_GSCAN_PARAMS];
+
+ NULL_CHECK(dhd->pno_state, "pno_state is NULL", err);
+ NULL_CHECK(gscan_params, "gscan_params is NULL", err);
+
+ DHD_PNO(("%s enter\n", __FUNCTION__));
+
+ if (!dhd_support_sta_mode(dhd)) {
+ err = BCME_BADOPTION;
+ goto exit;
+ }
+ if (!WLS_SUPPORTED(_pno_state)) {
+ DHD_ERROR(("%s : wifi location service is not supported\n", __FUNCTION__));
+ err = BCME_UNSUPPORTED;
+ goto exit;
+ }
+
+ if (!validate_gscan_params(gscan_params)) {
+ DHD_ERROR(("%s : Cannot start gscan - bad params\n", __FUNCTION__));
+ err = BCME_BADARG;
+ goto exit;
+ }
+
+ if (!(ch_bucket = dhd_pno_gscan_create_channel_list(dhd, _pno_state,
+ _chan_list, &tot_num_buckets, &num_buckets_to_fw)))
+ goto exit;
+
+ mutex_lock(&_pno_state->pno_mutex);
+ /* Clear any pre-existing results in our cache
+ * not consumed by framework
+ */
+ dhd_gscan_clear_all_batch_results(dhd);
+ if (_pno_state->pno_mode & (DHD_PNO_GSCAN_MODE | DHD_PNO_LEGACY_MODE)) {
+ /* store current pno_mode before disabling pno */
+ mode = _pno_state->pno_mode;
+ err = dhd_pno_clean(dhd);
+ if (err < 0) {
+ DHD_ERROR(("%s : failed to disable PNO\n", __FUNCTION__));
+ mutex_unlock(&_pno_state->pno_mutex);
+ goto exit;
+ }
+ /* restore the previous mode */
+ _pno_state->pno_mode = mode;
+ }
+ _pno_state->pno_mode |= DHD_PNO_GSCAN_MODE;
+ mutex_unlock(&_pno_state->pno_mutex);
+
+ if (_pno_state->pno_mode & DHD_PNO_LEGACY_MODE) {
+ pssid_list = dhd_pno_get_legacy_pno_ssid(dhd, _pno_state);
+
+ if (!pssid_list) {
+ err = BCME_NOMEM;
+ DHD_ERROR(("failed to get Legacy PNO SSID list\n"));
+ goto exit;
+ }
+
+ if ((err = _dhd_pno_add_ssid(dhd, pssid_list,
+ params_legacy->params_legacy.nssid)) < 0) {
+ DHD_ERROR(("failed to add ssid list (err %d) in firmware\n", err));
+ goto exit;
+ }
+ }
+
+ if ((err = _dhd_pno_set(dhd, _params, DHD_PNO_GSCAN_MODE)) < 0) {
+ DHD_ERROR(("failed to call pno_set (err %d) in firmware\n", err));
+ goto exit;
+ }
+
+ gscan_param_size = sizeof(wl_pfn_gscan_cfg_t) +
+ (num_buckets_to_fw - 1) * sizeof(wl_pfn_gscan_ch_bucket_cfg_t);
+ pfn_gscan_cfg_t = (wl_pfn_gscan_cfg_t *) MALLOC(dhd->osh, gscan_param_size);
+
+ if (!pfn_gscan_cfg_t) {
+ DHD_ERROR(("%s: failed to malloc memory of size %d\n",
+ __FUNCTION__, gscan_param_size));
+ err = BCME_NOMEM;
+ goto exit;
+ }
+
+ pfn_gscan_cfg_t->version = WL_GSCAN_CFG_VERSION;
+ if (gscan_params->mscan)
+ pfn_gscan_cfg_t->buffer_threshold = gscan_params->buffer_threshold;
+ else
+ pfn_gscan_cfg_t->buffer_threshold = GSCAN_BATCH_NO_THR_SET;
+
+ if (gscan_params->nbssid_significant_change) {
+ pfn_gscan_cfg_t->swc_nbssid_threshold = gscan_params->swc_nbssid_threshold;
+ pfn_gscan_cfg_t->swc_rssi_window_size = gscan_params->swc_rssi_window_size;
+ pfn_gscan_cfg_t->lost_ap_window = gscan_params->lost_ap_window;
+ } else {
+ pfn_gscan_cfg_t->swc_nbssid_threshold = 0;
+ pfn_gscan_cfg_t->swc_rssi_window_size = 0;
+ pfn_gscan_cfg_t->lost_ap_window = 0;
+ }
+
+ pfn_gscan_cfg_t->flags =
+ (gscan_params->send_all_results_flag & GSCAN_SEND_ALL_RESULTS_MASK);
+ pfn_gscan_cfg_t->count_of_channel_buckets = num_buckets_to_fw;
+ pfn_gscan_cfg_t->retry_threshold = GSCAN_RETRY_THRESHOLD;
+
+ for (i = 0, k = 0; i < tot_num_buckets; i++) {
+ if (ch_bucket[i].bucket_end_index != CHANNEL_BUCKET_EMPTY_INDEX) {
+ pfn_gscan_cfg_t->channel_bucket[k].bucket_end_index =
+ ch_bucket[i].bucket_end_index;
+ pfn_gscan_cfg_t->channel_bucket[k].bucket_freq_multiple =
+ ch_bucket[i].bucket_freq_multiple;
+ pfn_gscan_cfg_t->channel_bucket[k].max_freq_multiple =
+ ch_bucket[i].max_freq_multiple;
+ pfn_gscan_cfg_t->channel_bucket[k].repeat =
+ ch_bucket[i].repeat;
+ pfn_gscan_cfg_t->channel_bucket[k].flag =
+ ch_bucket[i].flag;
+ k++;
+ }
+ }
+
+ tot_nchan = pfn_gscan_cfg_t->channel_bucket[num_buckets_to_fw - 1].bucket_end_index + 1;
+ DHD_PNO(("Total channel num %d total ch_buckets %d ch_buckets_to_fw %d \n", tot_nchan,
+ tot_num_buckets, num_buckets_to_fw));
+
+ if ((err = _dhd_pno_cfg(dhd, _chan_list, tot_nchan)) < 0) {
+ DHD_ERROR(("%s : failed to call pno_cfg (err %d) in firmware\n",
+ __FUNCTION__, err));
+ goto exit;
+ }
+
+ if ((err = _dhd_pno_gscan_cfg(dhd, pfn_gscan_cfg_t, gscan_param_size)) < 0) {
+ DHD_ERROR(("%s : failed to call pno_gscan_cfg (err %d) in firmware\n",
+ __FUNCTION__, err));
+ goto exit;
+ }
+ if (gscan_params->nbssid_significant_change) {
+ dhd_pno_significant_bssid_t *iter, *next;
+
+ p_pfn_significant_bssid = kzalloc(sizeof(wl_pfn_significant_bssid_t) *
+ gscan_params->nbssid_significant_change, GFP_KERNEL);
+ if (p_pfn_significant_bssid == NULL) {
+ DHD_ERROR(("%s : failed to allocate memory %zd\n",
+ __FUNCTION__,
+ sizeof(wl_pfn_significant_bssid_t) *
+ gscan_params->nbssid_significant_change));
+ err = BCME_NOMEM;
+ goto exit;
+ }
+ i = 0;
+ /* convert dhd_pno_significant_bssid_t to wl_pfn_significant_bssid_t */
+ list_for_each_entry_safe(iter, next, &gscan_params->significant_bssid_list, list) {
+ p_pfn_significant_bssid[i].rssi_low_threshold = iter->rssi_low_threshold;
+ p_pfn_significant_bssid[i].rssi_high_threshold = iter->rssi_high_threshold;
+ memcpy(&p_pfn_significant_bssid[i].macaddr, &iter->BSSID, ETHER_ADDR_LEN);
+ i++;
+ }
+
+ DHD_PNO(("nbssid_significant_change %d \n",
+ gscan_params->nbssid_significant_change));
+ err = _dhd_pno_add_significant_bssid(dhd, p_pfn_significant_bssid,
+ gscan_params->nbssid_significant_change);
+ if (err < 0) {
+ DHD_ERROR(("%s : failed to call _dhd_pno_add_significant_bssid(err :%d)\n",
+ __FUNCTION__, err));
+ goto exit;
+ }
+ }
+
+ if (gscan_params->nbssid_hotlist) {
+ struct dhd_pno_bssid *iter, *next;
+ wl_pfn_bssid_t *ptr;
+ p_pfn_bssid = (wl_pfn_bssid_t *)kzalloc(sizeof(wl_pfn_bssid_t) *
+ gscan_params->nbssid_hotlist, GFP_KERNEL);
+ if (p_pfn_bssid == NULL) {
+ DHD_ERROR(("%s : failed to allocate wl_pfn_bssid_t array"
+ " (count: %d)",
+ __FUNCTION__, gscan_params->nbssid_hotlist));
+ err = BCME_ERROR;
+ _pno_state->pno_mode &= ~DHD_PNO_HOTLIST_MODE;
+ goto exit;
+ }
+ ptr = p_pfn_bssid;
+ /* convert dhd_pno_bssid to wl_pfn_bssid */
+ DHD_PNO(("nhotlist %d\n", gscan_params->nbssid_hotlist));
+ list_for_each_entry_safe(iter, next,
+ &gscan_params->hotlist_bssid_list, list) {
+ memcpy(&ptr->macaddr,
+ &iter->macaddr, ETHER_ADDR_LEN);
+ ptr->flags = iter->flags;
+ ptr++;
+ }
+
+ err = _dhd_pno_add_bssid(dhd, p_pfn_bssid, gscan_params->nbssid_hotlist);
+ if (err < 0) {
+ DHD_ERROR(("%s : failed to call _dhd_pno_add_bssid(err :%d)\n",
+ __FUNCTION__, err));
+ goto exit;
+ }
+ }
+
+ if (gscan_params->num_epno_ssid > 0) {
+ DHD_PNO(("num_epno_ssid %d\n", gscan_params->num_epno_ssid));
+ err = dhd_epno_set_ssid(dhd, _pno_state);
+ if (err < 0) {
+ DHD_ERROR(("failed to add ssid list (err %d) in firmware\n", err));
+ goto exit;
+ }
+ }
+
+ if ((err = _dhd_pno_enable(dhd, PNO_ON)) < 0)
+ DHD_ERROR(("%s : failed to enable PNO err %d\n", __FUNCTION__, err));
+
+exit:
+ /* clear mode in case of error */
+ if (err < 0) {
+ int ret = dhd_pno_clean(dhd);
+
+ if (ret < 0) {
+ DHD_ERROR(("%s : failed to call dhd_pno_clean (err: %d)\n",
+ __FUNCTION__, ret));
+ } else {
+ _pno_state->pno_mode &= ~DHD_PNO_GSCAN_MODE;
+ }
+ }
+ kfree(pssid_list);
+ kfree(p_pfn_significant_bssid);
+ kfree(p_pfn_bssid);
+ if (pfn_gscan_cfg_t)
+ MFREE(dhd->osh, pfn_gscan_cfg_t, gscan_param_size);
+ if (ch_bucket)
+ MFREE(dhd->osh, ch_bucket,
+ (tot_num_buckets * sizeof(wl_pfn_gscan_ch_bucket_cfg_t)));
+ return err;
+
+}
+
+static wl_pfn_gscan_ch_bucket_cfg_t *
+dhd_pno_gscan_create_channel_list(dhd_pub_t *dhd,
+ dhd_pno_status_info_t *_pno_state,
+ uint16 *chan_list,
+ uint32 *num_buckets,
+ uint32 *num_buckets_to_fw)
+{
+ int i, num_channels, err, nchan = WL_NUMCHANNELS, ch_cnt;
+ uint16 *ptr = chan_list, max;
+ wl_pfn_gscan_ch_bucket_cfg_t *ch_bucket;
+ dhd_pno_params_t *_params = &_pno_state->pno_params_arr[INDEX_OF_GSCAN_PARAMS];
+ bool is_pno_legacy_running = _pno_state->pno_mode & DHD_PNO_LEGACY_MODE;
+ dhd_pno_gscan_channel_bucket_t *gscan_buckets = _params->params_gscan.channel_bucket;
+
+ if (is_pno_legacy_running)
+ *num_buckets = _params->params_gscan.nchannel_buckets + 1;
+ else
+ *num_buckets = _params->params_gscan.nchannel_buckets;
+
+ *num_buckets_to_fw = 0;
+
+ ch_bucket = (wl_pfn_gscan_ch_bucket_cfg_t *) MALLOC(dhd->osh,
+ ((*num_buckets) * sizeof(wl_pfn_gscan_ch_bucket_cfg_t)));
+
+ if (!ch_bucket) {
+ DHD_ERROR(("%s: failed to malloc memory of size %zd\n",
+ __FUNCTION__, (*num_buckets) * sizeof(wl_pfn_gscan_ch_bucket_cfg_t)));
+ *num_buckets_to_fw = *num_buckets = 0;
+ return NULL;
+ }
+
+ max = gscan_buckets[0].bucket_freq_multiple;
+ num_channels = 0;
+ /* nchan is the remaining space left in the chan_list buffer,
+ * so any channels that overflow it are ignored
+ */
+ for (i = 0; i < _params->params_gscan.nchannel_buckets && nchan; i++) {
+ if (!gscan_buckets[i].band) {
+ ch_cnt = MIN(gscan_buckets[i].num_channels, (uint8)nchan);
+ num_channels += ch_cnt;
+ memcpy(ptr, gscan_buckets[i].chan_list,
+ ch_cnt * sizeof(uint16));
+ ptr = ptr + ch_cnt;
+ } else {
+ /* get a valid channel list based on band B or A */
+ err = _dhd_pno_get_channels(dhd, ptr,
+ &nchan, (gscan_buckets[i].band & GSCAN_ABG_BAND_MASK),
+ !(gscan_buckets[i].band & GSCAN_DFS_MASK));
+
+ if (err < 0) {
+ DHD_ERROR(("%s: failed to get valid channel list(band : %d)\n",
+ __FUNCTION__, gscan_buckets[i].band));
+ MFREE(dhd->osh, ch_bucket,
+ ((*num_buckets) * sizeof(wl_pfn_gscan_ch_bucket_cfg_t)));
+ *num_buckets_to_fw = *num_buckets = 0;
+ return NULL;
+ }
+
+ num_channels += nchan;
+ ptr = ptr + nchan;
+ }
+
+ ch_bucket[i].bucket_end_index = num_channels - 1;
+ ch_bucket[i].bucket_freq_multiple = gscan_buckets[i].bucket_freq_multiple;
+ ch_bucket[i].repeat = gscan_buckets[i].repeat;
+ ch_bucket[i].max_freq_multiple = gscan_buckets[i].bucket_max_multiple;
+ ch_bucket[i].flag = gscan_buckets[i].report_flag;
+ /* HAL and FW interpretations are opposite for this bit */
+ ch_bucket[i].flag ^= DHD_PNO_REPORT_NO_BATCH;
+ if (max < gscan_buckets[i].bucket_freq_multiple)
+ max = gscan_buckets[i].bucket_freq_multiple;
+ nchan = WL_NUMCHANNELS - num_channels;
+ *num_buckets_to_fw = *num_buckets_to_fw + 1;
+ DHD_PNO(("end_idx %d freq_mult - %d\n",
+ ch_bucket[i].bucket_end_index, ch_bucket[i].bucket_freq_multiple));
+ }
+
+ _params->params_gscan.max_ch_bucket_freq = max;
+ /* Legacy PNO may be running, which means we need to create a legacy PNO bucket.
+ * Get the GCD of the Legacy PNO and Gscan scan frequencies
+ */
+ if (is_pno_legacy_running) {
+ dhd_pno_params_t *_params1 = &_pno_state->pno_params_arr[INDEX_OF_LEGACY_PARAMS];
+ uint16 *legacy_chan_list = _params1->params_legacy.chan_list;
+ uint16 common_freq;
+ uint32 legacy_bucket_idx = _params->params_gscan.nchannel_buckets;
+ /* If no space is left then only gscan buckets will be sent to FW */
+ if (nchan) {
+ common_freq = gcd(_params->params_gscan.scan_fr,
+ _params1->params_legacy.scan_fr);
+ max = gscan_buckets[0].bucket_freq_multiple;
+ /* GSCAN buckets */
+ for (i = 0; i < _params->params_gscan.nchannel_buckets; i++) {
+ ch_bucket[i].bucket_freq_multiple *= _params->params_gscan.scan_fr;
+ ch_bucket[i].bucket_freq_multiple /= common_freq;
+ if (max < gscan_buckets[i].bucket_freq_multiple)
+ max = gscan_buckets[i].bucket_freq_multiple;
+ }
+ /* Legacy PNO bucket */
+ ch_bucket[legacy_bucket_idx].bucket_freq_multiple =
+ _params1->params_legacy.scan_fr;
+ ch_bucket[legacy_bucket_idx].bucket_freq_multiple /=
+ common_freq;
+ _params->params_gscan.max_ch_bucket_freq = MAX(max,
+ ch_bucket[legacy_bucket_idx].bucket_freq_multiple);
+ ch_bucket[legacy_bucket_idx].flag = CH_BUCKET_REPORT_REGULAR;
+ /* Now add channels to the legacy scan bucket */
+ for (i = 0; i < _params1->params_legacy.nchan && nchan; i++, nchan--) {
+ ptr[i] = legacy_chan_list[i];
+ num_channels++;
+ }
+ ch_bucket[legacy_bucket_idx].bucket_end_index = num_channels - 1;
+ *num_buckets_to_fw = *num_buckets_to_fw + 1;
+ DHD_PNO(("end_idx %d freq_mult - %d\n",
+ ch_bucket[legacy_bucket_idx].bucket_end_index,
+ ch_bucket[legacy_bucket_idx].bucket_freq_multiple));
+ }
+ }
+ return ch_bucket;
+}
+
+static int dhd_pno_stop_for_gscan(dhd_pub_t *dhd)
+{
+ int err = BCME_OK;
+ int mode;
+ dhd_pno_status_info_t *_pno_state;
+ wlc_ssid_ext_t *pssid_list = NULL;
+
+ _pno_state = PNO_GET_PNOSTATE(dhd);
+ DHD_PNO(("%s enter\n", __FUNCTION__));
+
+ if (!dhd_support_sta_mode(dhd)) {
+ err = BCME_BADOPTION;
+ goto exit;
+ }
+ if (!WLS_SUPPORTED(_pno_state)) {
+ DHD_ERROR(("%s : wifi location service is not supported\n",
+ __FUNCTION__));
+ err = BCME_UNSUPPORTED;
+ goto exit;
+ }
+
+ if (!(_pno_state->pno_mode & DHD_PNO_GSCAN_MODE)) {
+ DHD_ERROR(("%s : GSCAN is not enabled\n", __FUNCTION__));
+ goto exit;
+ }
+ if (_pno_state->pno_params_arr[INDEX_OF_GSCAN_PARAMS].params_gscan.mscan) {
+ /* retrieve the batching data from firmware into host */
+ err = dhd_wait_batch_results_complete(dhd);
+ if (err != BCME_OK)
+ goto exit;
+ }
+ mutex_lock(&_pno_state->pno_mutex);
+ mode = _pno_state->pno_mode & ~DHD_PNO_GSCAN_MODE;
+ err = dhd_pno_clean(dhd);
+ if (err < 0) {
+ DHD_ERROR(("%s : failed to call dhd_pno_clean (err: %d)\n",
+ __FUNCTION__, err));
+ mutex_unlock(&_pno_state->pno_mutex);
+ return err;
+ }
+ _pno_state->pno_mode = mode;
+ mutex_unlock(&_pno_state->pno_mutex);
+ _pno_state->pno_params_arr[INDEX_OF_GSCAN_PARAMS].params_gscan.ssid_ext_last_used_index = 0;
+
+ /* Reprogram Legacy PNO if it was running */
+ if (_pno_state->pno_mode & DHD_PNO_LEGACY_MODE) {
+ struct dhd_pno_legacy_params *params_legacy;
+ uint16 chan_list[WL_NUMCHANNELS];
+
+ params_legacy = &(_pno_state->pno_params_arr[INDEX_OF_LEGACY_PARAMS].params_legacy);
+ _pno_state->pno_mode &= ~DHD_PNO_LEGACY_MODE;
+ pssid_list = dhd_pno_get_legacy_pno_ssid(dhd, _pno_state);
+ if (!pssid_list) {
+ err = BCME_NOMEM;
+ DHD_ERROR(("failed to get Legacy PNO SSID list\n"));
+ goto exit;
+ }
+
+ DHD_PNO(("Restarting Legacy PNO SSID scan...\n"));
+ memcpy(chan_list, params_legacy->chan_list,
+ (params_legacy->nchan * sizeof(uint16)));
+ err = dhd_pno_set_for_ssid(dhd, pssid_list, params_legacy->nssid,
+ params_legacy->scan_fr, params_legacy->pno_repeat,
+ params_legacy->pno_freq_expo_max, chan_list,
+ params_legacy->nchan);
+ if (err < 0) {
+ DHD_ERROR(("%s : failed to restart legacy PNO scan(err: %d)\n",
+ __FUNCTION__, err));
+ goto exit;
+ }
+
+ }
+
+exit:
+ kfree(pssid_list);
+ return err;
+}
+
+int
+dhd_pno_initiate_gscan_request(dhd_pub_t *dhd, bool run, bool flush)
+{
+ int err = BCME_OK;
+ dhd_pno_params_t *params;
+ dhd_pno_status_info_t *_pno_state;
+ struct dhd_pno_gscan_params *gscan_params;
+
+ NULL_CHECK(dhd, "dhd is NULL\n", err);
+ NULL_CHECK(dhd->pno_state, "pno_state is NULL", err);
+ _pno_state = PNO_GET_PNOSTATE(dhd);
+
+ DHD_ERROR(("%s enter - run %d flush %d\n", __FUNCTION__, run, flush));
+
+ params = &_pno_state->pno_params_arr[INDEX_OF_GSCAN_PARAMS];
+ gscan_params = &params->params_gscan;
+
+ if (run) {
+ err = dhd_pno_set_for_gscan(dhd, gscan_params);
+ } else {
+ if (flush) {
+ mutex_lock(&_pno_state->pno_mutex);
+ dhd_pno_reset_cfg_gscan(params, _pno_state, GSCAN_FLUSH_ALL_CFG);
+ mutex_unlock(&_pno_state->pno_mutex);
+ }
+ /* Need to stop all gscan */
+ err = dhd_pno_stop_for_gscan(dhd);
+ }
+
+ return err;
+}
+
+int dhd_pno_enable_full_scan_result(dhd_pub_t *dhd, bool real_time_flag)
+{
+ int err = BCME_OK;
+ dhd_pno_params_t *params;
+ dhd_pno_status_info_t *_pno_state;
+ struct dhd_pno_gscan_params *gscan_params;
+ uint8 old_flag;
+
+ NULL_CHECK(dhd, "dhd is NULL\n", err);
+ NULL_CHECK(dhd->pno_state, "pno_state is NULL", err);
+ _pno_state = PNO_GET_PNOSTATE(dhd);
+
+ DHD_PNO(("%s enter\n", __FUNCTION__));
+
+ if (!WLS_SUPPORTED(_pno_state)) {
+ DHD_ERROR(("%s : wifi location service is not supported\n", __FUNCTION__));
+ err = BCME_UNSUPPORTED;
+ goto exit;
+ }
+
+ params = &_pno_state->pno_params_arr[INDEX_OF_GSCAN_PARAMS];
+ gscan_params = &params->params_gscan;
+
+ mutex_lock(&_pno_state->pno_mutex);
+
+ old_flag = gscan_params->send_all_results_flag;
+ gscan_params->send_all_results_flag = (uint8) real_time_flag;
+ if (_pno_state->pno_mode & DHD_PNO_GSCAN_MODE) {
+ if (old_flag != gscan_params->send_all_results_flag) {
+ wl_pfn_gscan_cfg_t gscan_cfg;
+
+ gscan_cfg.version = WL_GSCAN_CFG_VERSION;
+ gscan_cfg.flags = (gscan_params->send_all_results_flag &
+ GSCAN_SEND_ALL_RESULTS_MASK);
+ gscan_cfg.flags |= GSCAN_CFG_FLAGS_ONLY_MASK;
+
+ if ((err = _dhd_pno_gscan_cfg(dhd, &gscan_cfg,
+ sizeof(wl_pfn_gscan_cfg_t))) < 0) {
+ DHD_ERROR(("%s : pno_gscan_cfg failed (err %d) in firmware\n",
+ __FUNCTION__, err));
+ goto exit_mutex_unlock;
+ }
+ } else {
+ DHD_PNO(("No change in flag - %d\n", old_flag));
+ }
+ } else {
+ DHD_PNO(("Gscan not started\n"));
+ }
+exit_mutex_unlock:
+ mutex_unlock(&_pno_state->pno_mutex);
+exit:
+ return err;
+}
+
+/* Cleanup any consumed results
+ * Return TRUE if all results consumed else FALSE
+ */
+int dhd_gscan_batch_cache_cleanup(dhd_pub_t *dhd)
+{
+ int ret = 0;
+ dhd_pno_params_t *params;
+ struct dhd_pno_gscan_params *gscan_params;
+ dhd_pno_status_info_t *_pno_state;
+ gscan_results_cache_t *iter, *tmp;
+
+ _pno_state = PNO_GET_PNOSTATE(dhd);
+ params = &_pno_state->pno_params_arr[INDEX_OF_GSCAN_PARAMS];
+ gscan_params = &params->params_gscan;
+ iter = gscan_params->gscan_batch_cache;
+
+ while (iter) {
+ if (iter->tot_consumed == iter->tot_count) {
+ tmp = iter->next;
+ kfree(iter);
+ iter = tmp;
+ } else
+ break;
+ }
+ gscan_params->gscan_batch_cache = iter;
+ ret = (iter == NULL);
+ return ret;
+}
+
+static int _dhd_pno_get_gscan_batch_from_fw(dhd_pub_t *dhd)
+{
+ int err = BCME_OK;
+ uint32 timestamp = 0, ts = 0, i, j, timediff;
+ dhd_pno_params_t *params;
+ dhd_pno_status_info_t *_pno_state;
+ wl_pfn_lnet_info_t *plnetinfo;
+ struct dhd_pno_gscan_params *gscan_params;
+ wl_pfn_lscanresults_t *plbestnet = NULL;
+ gscan_results_cache_t *iter, *tail;
+ wifi_gscan_result_t *result;
+ uint8 *nAPs_per_scan = NULL;
+ uint8 num_scans_in_cur_iter;
+ uint16 count;
+
+ NULL_CHECK(dhd, "dhd is NULL\n", err);
+ NULL_CHECK(dhd->pno_state, "pno_state is NULL", err);
+
+ _pno_state = PNO_GET_PNOSTATE(dhd);
+ params = &_pno_state->pno_params_arr[INDEX_OF_GSCAN_PARAMS];
+ DHD_PNO(("%s enter\n", __FUNCTION__));
+
+ if (!WLS_SUPPORTED(_pno_state)) {
+ DHD_ERROR(("%s : wifi location service is not supported\n", __FUNCTION__));
+ err = BCME_UNSUPPORTED;
+ goto exit;
+ }
+ if (!(_pno_state->pno_mode & DHD_PNO_GSCAN_MODE)) {
+ DHD_ERROR(("%s: GSCAN is not enabled\n", __FUNCTION__));
+ goto exit;
+ }
+ gscan_params = &params->params_gscan;
+ nAPs_per_scan = (uint8 *) MALLOC(dhd->osh, gscan_params->mscan);
+
+ if (!nAPs_per_scan) {
+ DHD_ERROR(("%s : Out of memory!! Can't malloc %d bytes\n", __FUNCTION__,
+ gscan_params->mscan));
+ err = BCME_NOMEM;
+ goto exit;
+ }
+
+ plbestnet = (wl_pfn_lscanresults_t *)MALLOC(dhd->osh, PNO_BESTNET_LEN);
+
+ if (!plbestnet) {
+ DHD_ERROR(("%s : Out of memory!! Can't malloc %d bytes\n", __FUNCTION__,
+ PNO_BESTNET_LEN));
+ err = BCME_NOMEM;
+ goto exit;
+ }
+
+ mutex_lock(&_pno_state->pno_mutex);
+
+ dhd_gscan_clear_all_batch_results(dhd);
+
+ if (!(_pno_state->pno_mode & DHD_PNO_GSCAN_MODE)) {
+ DHD_ERROR(("%s : GSCAN is not enabled\n", __FUNCTION__));
+ goto exit_mutex_unlock;
+ }
+
+ timediff = gscan_params->scan_fr * 1000;
+ timediff = timediff >> 1;
+
+ /* Ok, now lets start getting results from the FW */
+ plbestnet->status = PFN_INCOMPLETE;
+ tail = gscan_params->gscan_batch_cache;
+ while (plbestnet->status != PFN_COMPLETE) {
+ memset(plbestnet, 0, PNO_BESTNET_LEN);
+ err = dhd_iovar(dhd, 0, "pfnlbest", (char *)plbestnet, PNO_BESTNET_LEN, 0);
+ if (err < 0) {
+ DHD_ERROR(("%s : Cannot get all the batch results, err :%d\n",
+ __FUNCTION__, err));
+ goto exit_mutex_unlock;
+ }
+ DHD_PNO(("ver %d, status : %d, count %d\n", plbestnet->version,
+ plbestnet->status, plbestnet->count));
+ if (plbestnet->version != PFN_SCANRESULT_VERSION) {
+ err = BCME_VERSION;
+ DHD_ERROR(("bestnet version (%d) does not match driver version (%d)\n",
+ plbestnet->version, PFN_SCANRESULT_VERSION));
+ goto exit_mutex_unlock;
+ }
+ if (plbestnet->count == 0) {
+ DHD_PNO(("No more batch results\n"));
+ goto exit_mutex_unlock;
+ }
+ num_scans_in_cur_iter = 0;
+ timestamp = plbestnet->netinfo[0].timestamp;
+ /* find out how many scans' results did we get in this batch of FW results */
+ for (i = 0, count = 0; i < plbestnet->count; i++, count++) {
+ plnetinfo = &plbestnet->netinfo[i];
+ /* Unlikely to happen, but just in case the results from
+ * FW don't make sense, assume they are part of one single scan
+ */
+ if (num_scans_in_cur_iter > gscan_params->mscan) {
+ num_scans_in_cur_iter = 0;
+ count = plbestnet->count;
+ break;
+ }
+ if (TIME_DIFF_MS(timestamp, plnetinfo->timestamp) > timediff) {
+ nAPs_per_scan[num_scans_in_cur_iter] = count;
+ count = 0;
+ num_scans_in_cur_iter++;
+ }
+ timestamp = plnetinfo->timestamp;
+ }
+ nAPs_per_scan[num_scans_in_cur_iter] = count;
+ num_scans_in_cur_iter++;
+
+ DHD_PNO(("num_scans_in_cur_iter %d\n", num_scans_in_cur_iter));
+ plnetinfo = &plbestnet->netinfo[0];
+
+ for (i = 0; i < num_scans_in_cur_iter; i++) {
+ iter = (gscan_results_cache_t *)
+ kmalloc(((nAPs_per_scan[i] - 1) * sizeof(wifi_gscan_result_t)) +
+ sizeof(gscan_results_cache_t),
+ GFP_KERNEL);
+ if (!iter) {
+ DHD_ERROR(("%s : Out of memory!! Can't kmalloc %zd bytes\n",
+ __FUNCTION__, ((nAPs_per_scan[i] - 1) *
+ sizeof(wifi_gscan_result_t)) + sizeof(gscan_results_cache_t)));
+ err = BCME_NOMEM;
+ goto exit_mutex_unlock;
+ }
+ /* Need this check because the new set of results from FW
+ * maybe a continuation of previous sets' scan results
+ */
+ if (TIME_DIFF_MS(ts, plnetinfo->timestamp) > timediff)
+ iter->scan_id = ++gscan_params->scan_id;
+ else
+ iter->scan_id = gscan_params->scan_id;
+
+ DHD_PNO(("scan_id %d tot_count %d\n", gscan_params->scan_id, nAPs_per_scan[i]));
+ iter->tot_count = nAPs_per_scan[i];
+ iter->tot_consumed = 0;
+ iter->flag = 0;
+ if (plnetinfo->flags & PFN_PARTIAL_SCAN_MASK) {
+ DHD_PNO(("This scan is aborted\n"));
+ iter->flag = (ENABLE << PNO_STATUS_ABORT);
+ } else if (gscan_params->reason) {
+ iter->flag = (ENABLE << gscan_params->reason);
+ }
+
+ if (!tail)
+ gscan_params->gscan_batch_cache = iter;
+ else
+ tail->next = iter;
+
+ tail = iter;
+ iter->next = NULL;
+ for (j = 0; j < nAPs_per_scan[i]; j++, plnetinfo++) {
+ result = &iter->results[j];
+
+ result->channel = wf_channel2mhz(plnetinfo->pfnsubnet.channel,
+ (plnetinfo->pfnsubnet.channel <= CH_MAX_2G_CHANNEL?
+ WF_CHAN_FACTOR_2_4_G : WF_CHAN_FACTOR_5_G));
+ result->rssi = (int32) plnetinfo->RSSI;
+ /* Info not available & not expected */
+ result->beacon_period = 0;
+ result->capability = 0;
+ result->ie_length = 0;
+ result->rtt = (uint64) plnetinfo->rtt0;
+ result->rtt_sd = (uint64) plnetinfo->rtt1;
+ result->ts = convert_fw_rel_time_to_systime(plnetinfo->timestamp);
+ ts = plnetinfo->timestamp;
+ if (plnetinfo->pfnsubnet.SSID_len > DOT11_MAX_SSID_LEN) {
+ DHD_ERROR(("%s: Invalid SSID length %d\n",
+ __FUNCTION__, plnetinfo->pfnsubnet.SSID_len));
+ plnetinfo->pfnsubnet.SSID_len = DOT11_MAX_SSID_LEN;
+ }
+ memcpy(result->ssid, plnetinfo->pfnsubnet.SSID,
+ plnetinfo->pfnsubnet.SSID_len);
+ result->ssid[plnetinfo->pfnsubnet.SSID_len] = '\0';
+ memcpy(&result->macaddr, &plnetinfo->pfnsubnet.BSSID,
+ ETHER_ADDR_LEN);
+
+ DHD_PNO(("\tSSID : "));
+ DHD_PNO(("\n"));
+ DHD_PNO(("\tBSSID: %02x:%02x:%02x:%02x:%02x:%02x\n",
+ result->macaddr.octet[0],
+ result->macaddr.octet[1],
+ result->macaddr.octet[2],
+ result->macaddr.octet[3],
+ result->macaddr.octet[4],
+ result->macaddr.octet[5]));
+ DHD_PNO(("\tchannel: %d, RSSI: %d, timestamp: %d ms\n",
+ plnetinfo->pfnsubnet.channel,
+ plnetinfo->RSSI, plnetinfo->timestamp));
+ DHD_PNO(("\tRTT0 : %d, RTT1: %d\n",
+ plnetinfo->rtt0, plnetinfo->rtt1));
+
+ }
+ }
+ }
+exit_mutex_unlock:
+ mutex_unlock(&_pno_state->pno_mutex);
+exit:
+ params->params_gscan.get_batch_flag = GSCAN_BATCH_RETRIEVAL_COMPLETE;
+ smp_wmb();
+ wake_up_interruptible(&_pno_state->batch_get_wait);
+ if (nAPs_per_scan)
+ MFREE(dhd->osh, nAPs_per_scan, gscan_params->mscan * sizeof(uint8));
+ if (plbestnet)
+ MFREE(dhd->osh, plbestnet, PNO_BESTNET_LEN);
+ DHD_PNO(("Batch retrieval done!\n"));
+ return err;
+}
+#endif /* GSCAN_SUPPORT */
+
static int
_dhd_pno_get_for_batch(dhd_pub_t *dhd, char *buf, int bufsize, int reason)
{
@@ -1151,7 +2869,7 @@
NULL_CHECK(dhd->pno_state, "pno_state is NULL", err);
if (!dhd_support_sta_mode(dhd)) {
err = BCME_BADOPTION;
- goto exit;
+ goto exit_no_unlock;
}
DHD_PNO(("%s enter\n", __FUNCTION__));
_pno_state = PNO_GET_PNOSTATE(dhd);
@@ -1159,11 +2877,12 @@
if (!WLS_SUPPORTED(_pno_state)) {
DHD_ERROR(("%s : wifi location service is not supported\n", __FUNCTION__));
err = BCME_UNSUPPORTED;
- goto exit;
+ goto exit_no_unlock;
}
+
if (!(_pno_state->pno_mode & DHD_PNO_BATCH_MODE)) {
DHD_ERROR(("%s: Batching SCAN mode is not enabled\n", __FUNCTION__));
- goto exit;
+ goto exit_no_unlock;
}
mutex_lock(&_pno_state->pno_mutex);
_params = &_pno_state->pno_params_arr[INDEX_OF_BATCH_PARAMS];
@@ -1301,6 +3020,11 @@
pbestnet_entry->rtt0 = plnetinfo->rtt0;
pbestnet_entry->rtt1 = plnetinfo->rtt1;
pbestnet_entry->timestamp = plnetinfo->timestamp;
+ if (plnetinfo->pfnsubnet.SSID_len > DOT11_MAX_SSID_LEN) {
+ DHD_ERROR(("%s: Invalid SSID length %d: trimming it to max\n",
+ __FUNCTION__, plnetinfo->pfnsubnet.SSID_len));
+ plnetinfo->pfnsubnet.SSID_len = DOT11_MAX_SSID_LEN;
+ }
pbestnet_entry->SSID_len = plnetinfo->pfnsubnet.SSID_len;
memcpy(pbestnet_entry->SSID, plnetinfo->pfnsubnet.SSID,
pbestnet_entry->SSID_len);
@@ -1372,6 +3096,7 @@
_params->params_batch.get_batch.bytes_written = err;
}
mutex_unlock(&_pno_state->pno_mutex);
+exit_no_unlock:
if (waitqueue_active(&_pno_state->get_batch_done.wait))
complete(&_pno_state->get_batch_done);
return err;
@@ -1389,10 +3114,15 @@
DHD_ERROR(("%s : dhd is NULL\n", __FUNCTION__));
return;
}
- params_batch = &_pno_state->pno_params_arr[INDEX_OF_BATCH_PARAMS].params_batch;
- _dhd_pno_get_for_batch(dhd, params_batch->get_batch.buf,
- params_batch->get_batch.bufsize, params_batch->get_batch.reason);
+#ifdef GSCAN_SUPPORT
+ _dhd_pno_get_gscan_batch_from_fw(dhd);
+#endif /* GSCAN_SUPPORT */
+ if (_pno_state->pno_mode & DHD_PNO_BATCH_MODE) {
+ params_batch = &_pno_state->pno_params_arr[INDEX_OF_BATCH_PARAMS].params_batch;
+ _dhd_pno_get_for_batch(dhd, params_batch->get_batch.buf,
+ params_batch->get_batch.bufsize, params_batch->get_batch.reason);
+ }
}
int
@@ -1417,20 +3147,39 @@
goto exit;
}
params_batch = &_pno_state->pno_params_arr[INDEX_OF_BATCH_PARAMS].params_batch;
- if (!(_pno_state->pno_mode & DHD_PNO_BATCH_MODE)) {
- DHD_ERROR(("%s: Batching SCAN mode is not enabled\n", __FUNCTION__));
- memset(pbuf, 0, bufsize);
- pbuf += sprintf(pbuf, "scancount=%d\n", 0);
- sprintf(pbuf, "%s", RESULTS_END_MARKER);
- err = strlen(buf);
- goto exit;
+#ifdef GSCAN_SUPPORT
+ if (_pno_state->pno_mode & DHD_PNO_GSCAN_MODE) {
+ struct dhd_pno_gscan_params *gscan_params;
+ gscan_params = &_pno_state->pno_params_arr[INDEX_OF_GSCAN_PARAMS].params_gscan;
+ gscan_params->reason = reason;
+ err = dhd_retreive_batch_scan_results(dhd);
+ if (err == BCME_OK) {
+ wait_event_interruptible_timeout(_pno_state->batch_get_wait,
+ is_batch_retrieval_complete(gscan_params),
+ msecs_to_jiffies(GSCAN_BATCH_GET_MAX_WAIT));
+ }
+ } else
+#endif
+ {
+ if (!(_pno_state->pno_mode & DHD_PNO_BATCH_MODE)) {
+ DHD_ERROR(("%s: Batching SCAN mode is not enabled\n", __FUNCTION__));
+ memset(pbuf, 0, bufsize);
+ pbuf += sprintf(pbuf, "scancount=%d\n", 0);
+ sprintf(pbuf, "%s", RESULTS_END_MARKER);
+ err = strlen(buf);
+ goto exit;
+ }
+ params_batch->get_batch.buf = buf;
+ params_batch->get_batch.bufsize = bufsize;
+ params_batch->get_batch.reason = reason;
+ params_batch->get_batch.bytes_written = 0;
+ schedule_work(&_pno_state->work);
+ wait_for_completion(&_pno_state->get_batch_done);
}
- params_batch->get_batch.buf = buf;
- params_batch->get_batch.bufsize = bufsize;
- params_batch->get_batch.reason = reason;
- params_batch->get_batch.bytes_written = 0;
- schedule_work(&_pno_state->work);
- wait_for_completion(&_pno_state->get_batch_done);
+
+#ifdef GSCAN_SUPPORT
+ if (!(_pno_state->pno_mode & DHD_PNO_GSCAN_MODE))
+#endif
err = params_batch->get_batch.bytes_written;
exit:
return err;
@@ -1444,8 +3193,8 @@
int i = 0;
dhd_pno_status_info_t *_pno_state;
dhd_pno_params_t *_params;
- wl_pfn_bssid_t *p_pfn_bssid;
- wlc_ssid_t *p_ssid_list = NULL;
+ wl_pfn_bssid_t *p_pfn_bssid = NULL;
+ wlc_ssid_ext_t *p_ssid_list = NULL;
NULL_CHECK(dhd, "dhd is NULL", err);
NULL_CHECK(dhd->pno_state, "pno_state is NULL", err);
_pno_state = PNO_GET_PNOSTATE(dhd);
@@ -1460,6 +3209,14 @@
err = BCME_UNSUPPORTED;
goto exit;
}
+
+#ifdef GSCAN_SUPPORT
+ if (_pno_state->pno_mode & DHD_PNO_GSCAN_MODE) {
+ DHD_PNO(("Gscan is ongoing, nothing to stop here\n"));
+ return err;
+ }
+#endif
+
if (!(_pno_state->pno_mode & DHD_PNO_BATCH_MODE)) {
DHD_ERROR(("%s : PNO BATCH MODE is not enabled\n", __FUNCTION__));
goto exit;
@@ -1472,31 +3229,19 @@
/* restart Legacy PNO if the Legacy PNO is on */
if (_pno_state->pno_mode & DHD_PNO_LEGACY_MODE) {
struct dhd_pno_legacy_params *_params_legacy;
- struct dhd_pno_ssid *iter, *next;
_params_legacy =
&(_pno_state->pno_params_arr[INDEX_OF_LEGACY_PARAMS].params_legacy);
- p_ssid_list = kzalloc(sizeof(wlc_ssid_t) *
- _params_legacy->nssid, GFP_KERNEL);
- if (p_ssid_list == NULL) {
- DHD_ERROR(("%s : failed to allocate wlc_ssid_t array (count: %d)",
- __FUNCTION__, _params_legacy->nssid));
- err = BCME_ERROR;
- _pno_state->pno_mode &= ~DHD_PNO_LEGACY_MODE;
+ p_ssid_list = dhd_pno_get_legacy_pno_ssid(dhd, _pno_state);
+ if (!p_ssid_list) {
+ err = BCME_NOMEM;
+ DHD_ERROR(("failed to get Legacy PNO SSID list\n"));
goto exit;
}
- i = 0;
- /* convert dhd_pno_ssid to dhd_pno_ssid */
- list_for_each_entry_safe(iter, next, &_params_legacy->ssid_list, list) {
- p_ssid_list[i].SSID_len = iter->SSID_len;
- memcpy(p_ssid_list[i].SSID, iter->SSID, p_ssid_list[i].SSID_len);
- i++;
- }
err = dhd_pno_set_for_ssid(dhd, p_ssid_list, _params_legacy->nssid,
_params_legacy->scan_fr, _params_legacy->pno_repeat,
_params_legacy->pno_freq_expo_max, _params_legacy->chan_list,
_params_legacy->nchan);
if (err < 0) {
- _pno_state->pno_mode &= ~DHD_PNO_LEGACY_MODE;
DHD_ERROR(("%s : failed to restart legacy PNO scan(err: %d)\n",
__FUNCTION__, err));
goto exit;
@@ -1541,8 +3286,8 @@
exit:
_params = &_pno_state->pno_params_arr[INDEX_OF_BATCH_PARAMS];
_dhd_pno_reinitialize_prof(dhd, _params, DHD_PNO_BATCH_MODE);
- if (p_ssid_list)
- kfree(p_ssid_list);
+ kfree(p_ssid_list);
+ kfree(p_pfn_bssid);
return err;
}
@@ -1702,7 +3447,7 @@
uint32 mode = 0;
dhd_pno_status_info_t *_pno_state;
dhd_pno_params_t *_params;
- wlc_ssid_t *p_ssid_list;
+ wlc_ssid_ext_t *p_ssid_list = NULL;
NULL_CHECK(dhd, "dhd is NULL", err);
NULL_CHECK(dhd->pno_state, "pno_state is NULL", err);
_pno_state = PNO_GET_PNOSTATE(dhd);
@@ -1737,30 +3482,19 @@
if (_pno_state->pno_mode & DHD_PNO_LEGACY_MODE) {
/* restart Legacy PNO Scan */
struct dhd_pno_legacy_params *_params_legacy;
- struct dhd_pno_ssid *iter, *next;
_params_legacy =
&(_pno_state->pno_params_arr[INDEX_OF_LEGACY_PARAMS].params_legacy);
- p_ssid_list =
- kzalloc(sizeof(wlc_ssid_t) * _params_legacy->nssid, GFP_KERNEL);
- if (p_ssid_list == NULL) {
- DHD_ERROR(("%s : failed to allocate wlc_ssid_t array (count: %d)",
- __FUNCTION__, _params_legacy->nssid));
- err = BCME_ERROR;
- _pno_state->pno_mode &= ~DHD_PNO_LEGACY_MODE;
+ p_ssid_list = dhd_pno_get_legacy_pno_ssid(dhd, _pno_state);
+ if (!p_ssid_list) {
+ err = BCME_NOMEM;
+ DHD_ERROR(("failed to get Legacy PNO SSID list\n"));
goto exit;
}
- /* convert dhd_pno_ssid to dhd_pno_ssid */
- list_for_each_entry_safe(iter, next, &_params_legacy->ssid_list, list) {
- p_ssid_list->SSID_len = iter->SSID_len;
- memcpy(p_ssid_list->SSID, iter->SSID, p_ssid_list->SSID_len);
- p_ssid_list++;
- }
err = dhd_pno_set_for_ssid(dhd, p_ssid_list, _params_legacy->nssid,
_params_legacy->scan_fr, _params_legacy->pno_repeat,
_params_legacy->pno_freq_expo_max, _params_legacy->chan_list,
_params_legacy->nchan);
if (err < 0) {
- _pno_state->pno_mode &= ~DHD_PNO_LEGACY_MODE;
DHD_ERROR(("%s : failed to restart legacy PNO scan(err: %d)\n",
__FUNCTION__, err));
goto exit;
@@ -1786,9 +3520,445 @@
}
}
exit:
+ kfree(p_ssid_list);
return err;
}
+#ifdef GSCAN_SUPPORT
+int dhd_retreive_batch_scan_results(dhd_pub_t *dhd)
+{
+ int err = BCME_OK;
+ dhd_pno_status_info_t *_pno_state;
+ dhd_pno_params_t *_params;
+ struct dhd_pno_batch_params *params_batch;
+
+ NULL_CHECK(dhd, "dhd is NULL", err);
+ NULL_CHECK(dhd->pno_state, "pno_state is NULL", err);
+ _pno_state = PNO_GET_PNOSTATE(dhd);
+ _params = &_pno_state->pno_params_arr[INDEX_OF_GSCAN_PARAMS];
+
+ params_batch = &_pno_state->pno_params_arr[INDEX_OF_BATCH_PARAMS].params_batch;
+ if (_params->params_gscan.get_batch_flag == GSCAN_BATCH_RETRIEVAL_COMPLETE) {
+ DHD_PNO(("Retrieve batch results\n"));
+ params_batch->get_batch.buf = NULL;
+ params_batch->get_batch.bufsize = 0;
+ params_batch->get_batch.reason = PNO_STATUS_EVENT;
+ _params->params_gscan.get_batch_flag = GSCAN_BATCH_RETRIEVAL_IN_PROGRESS;
+ smp_wmb();
+ schedule_work(&_pno_state->work);
+ } else {
+ DHD_PNO(("%s : WLC_E_PFN_BEST_BATCHING retrieval "
+ "already in progress, will skip\n", __FUNCTION__));
+ err = BCME_ERROR;
+ }
+
+ return err;
+}
+
+/* Handle Significant WiFi Change (SWC) event from FW
+ * Send event to HAL when all results arrive from FW
+ */
+void * dhd_handle_swc_evt(dhd_pub_t *dhd, const void *event_data, int *send_evt_bytes)
+{
+ void *ptr = NULL;
+ dhd_pno_status_info_t *_pno_state = PNO_GET_PNOSTATE(dhd);
+ struct dhd_pno_gscan_params *gscan_params;
+ struct dhd_pno_swc_evt_param *params;
+ wl_pfn_swc_results_t *results = (wl_pfn_swc_results_t *)event_data;
+ wl_pfn_significant_net_t *change_array;
+ int i;
+
+ gscan_params = &(_pno_state->pno_params_arr[INDEX_OF_GSCAN_PARAMS].params_gscan);
+ params = &(gscan_params->param_significant);
+
+ if (!results->total_count) {
+ *send_evt_bytes = 0;
+ return ptr;
+ }
+
+ if (!params->results_rxed_so_far) {
+ if (!params->change_array) {
+ params->change_array = (wl_pfn_significant_net_t *)
+ kmalloc(sizeof(wl_pfn_significant_net_t) * results->total_count,
+ GFP_KERNEL);
+
+ if (!params->change_array) {
+ DHD_ERROR(("%s Cannot Malloc %zd bytes!!\n", __FUNCTION__,
+ sizeof(wl_pfn_significant_net_t) * results->total_count));
+ *send_evt_bytes = 0;
+ return ptr;
+ }
+ } else {
+ DHD_ERROR(("RX'ed WLC_E_PFN_SWC evt from FW, previous evt not complete!!"));
+ *send_evt_bytes = 0;
+ return ptr;
+ }
+
+ }
+
+ DHD_PNO(("%s: pkt_count %d total_count %d\n", __FUNCTION__,
+ results->pkt_count, results->total_count));
+
+ for (i = 0; i < results->pkt_count; i++) {
+ DHD_PNO(("\t %02x:%02x:%02x:%02x:%02x:%02x\n",
+ results->list[i].BSSID.octet[0],
+ results->list[i].BSSID.octet[1],
+ results->list[i].BSSID.octet[2],
+ results->list[i].BSSID.octet[3],
+ results->list[i].BSSID.octet[4],
+ results->list[i].BSSID.octet[5]));
+ }
+
+ change_array = &params->change_array[params->results_rxed_so_far];
+ memcpy(change_array, results->list, sizeof(wl_pfn_significant_net_t) * results->pkt_count);
+ params->results_rxed_so_far += results->pkt_count;
+
+ if (params->results_rxed_so_far == results->total_count) {
+ params->results_rxed_so_far = 0;
+ *send_evt_bytes = sizeof(wl_pfn_significant_net_t) * results->total_count;
+ /* Pack up change buffer to send up and reset
+ * results_rxed_so_far, after its done.
+ */
+ ptr = (void *) params->change_array;
+ /* expecting the callee to free this mem chunk */
+ params->change_array = NULL;
+ }
+ else {
+ *send_evt_bytes = 0;
+ }
+
+ return ptr;
+}
+
+void dhd_gscan_hotlist_cache_cleanup(dhd_pub_t *dhd, hotlist_type_t type)
+{
+ dhd_pno_status_info_t *_pno_state = PNO_GET_PNOSTATE(dhd);
+ struct dhd_pno_gscan_params *gscan_params;
+ gscan_results_cache_t *iter, *tmp;
+
+ if (!_pno_state)
+ return;
+ gscan_params = &(_pno_state->pno_params_arr[INDEX_OF_GSCAN_PARAMS].params_gscan);
+
+ if (type == HOTLIST_FOUND) {
+ iter = gscan_params->gscan_hotlist_found;
+ gscan_params->gscan_hotlist_found = NULL;
+ } else {
+ iter = gscan_params->gscan_hotlist_lost;
+ gscan_params->gscan_hotlist_lost = NULL;
+ }
+
+ while (iter) {
+ tmp = iter->next;
+ kfree(iter);
+ iter = tmp;
+ }
+
+ return;
+}
+
+void *
+dhd_process_full_gscan_result(dhd_pub_t *dhd, const void *data, int *size)
+{
+ wl_bss_info_t *bi = NULL;
+ wl_gscan_result_t *gscan_result;
+ wifi_gscan_result_t *result = NULL;
+ u32 bi_length = 0;
+ uint8 channel;
+ uint32 mem_needed;
+ struct timespec ts;
+
+ *size = 0;
+
+ gscan_result = (wl_gscan_result_t *)data;
+
+ if (!gscan_result) {
+ DHD_ERROR(("Invalid gscan result (NULL pointer)\n"));
+ goto exit;
+ }
+ if (!gscan_result->bss_info) {
+ DHD_ERROR(("Invalid gscan bss info (NULL pointer)\n"));
+ goto exit;
+ }
+ bi = &gscan_result->bss_info[0].info;
+ bi_length = dtoh32(bi->length);
+ if (bi_length != (dtoh32(gscan_result->buflen) -
+ WL_GSCAN_RESULTS_FIXED_SIZE - WL_GSCAN_INFO_FIXED_FIELD_SIZE)) {
+ DHD_ERROR(("Invalid bss_info length %d: ignoring\n", bi_length));
+ goto exit;
+ }
+ if (bi->SSID_len > DOT11_MAX_SSID_LEN) {
+ DHD_ERROR(("Invalid SSID length %d: trimming it to max\n", bi->SSID_len));
+ bi->SSID_len = DOT11_MAX_SSID_LEN;
+ }
+
+ mem_needed = OFFSETOF(wifi_gscan_result_t, ie_data) + bi->ie_length;
+ result = kmalloc(mem_needed, GFP_KERNEL);
+
+ if (!result) {
+ DHD_ERROR(("%s Cannot malloc scan result buffer %d bytes\n",
+ __FUNCTION__, mem_needed));
+ goto exit;
+ }
+
+ memcpy(result->ssid, bi->SSID, bi->SSID_len);
+ result->ssid[bi->SSID_len] = '\0';
+ channel = wf_chspec_ctlchan(bi->chanspec);
+ result->channel = wf_channel2mhz(channel,
+ (channel <= CH_MAX_2G_CHANNEL?
+ WF_CHAN_FACTOR_2_4_G : WF_CHAN_FACTOR_5_G));
+ result->rssi = (int32) bi->RSSI;
+ result->rtt = 0;
+ result->rtt_sd = 0;
+ get_monotonic_boottime(&ts);
+ result->ts = (uint64) TIMESPEC_TO_US(ts);
+ result->beacon_period = dtoh16(bi->beacon_period);
+ result->capability = dtoh16(bi->capability);
+ result->ie_length = dtoh32(bi->ie_length);
+ memcpy(&result->macaddr, &bi->BSSID, ETHER_ADDR_LEN);
+ memcpy(result->ie_data, ((uint8 *)bi + bi->ie_offset), bi->ie_length);
+ *size = mem_needed;
+exit:
+ return result;
+}
+
+void *
+dhd_pno_process_epno_result(dhd_pub_t *dhd, const void *data, uint32 event, int *size)
+{
+ dhd_epno_results_t *results = NULL;
+ dhd_pno_status_info_t *_pno_state = PNO_GET_PNOSTATE(dhd);
+ struct dhd_pno_gscan_params *gscan_params;
+ uint32 count, mem_needed = 0, i;
+ uint8 ssid[DOT11_MAX_SSID_LEN + 1];
+ struct ether_addr *bssid;
+
+ *size = 0;
+ if (!_pno_state)
+ return NULL;
+ gscan_params = &(_pno_state->pno_params_arr[INDEX_OF_GSCAN_PARAMS].params_gscan);
+
+ if (event == WLC_E_PFN_SSID_EXT) {
+ wl_pfn_ssid_ext_result_t *evt_data;
+ evt_data = (wl_pfn_ssid_ext_result_t *) data;
+
+ if (evt_data->version != PFN_SSID_EXT_VERSION) {
+ DHD_PNO(("ePNO event: Incorrect version %d %d\n", evt_data->version,
+ PFN_SSID_EXT_VERSION));
+ return NULL;
+ }
+ count = evt_data->count;
+ mem_needed = sizeof(dhd_epno_results_t) * count;
+ results = (dhd_epno_results_t *) kmalloc(mem_needed, GFP_KERNEL);
+ if (!results) {
+ DHD_ERROR(("%s: Can't malloc %d bytes for results\n", __FUNCTION__,
+ mem_needed));
+ return NULL;
+ }
+ DHD_ERROR(("Rx'ed WLC_E_PFN_SSID_EXT event: %d results\n", count));
+ for (i = 0; i < count; i++) {
+ results[i].rssi = evt_data->net[i].rssi;
+ results[i].channel = wf_channel2mhz(evt_data->net[i].channel,
+ (evt_data->net[i].channel <= CH_MAX_2G_CHANNEL ?
+ WF_CHAN_FACTOR_2_4_G : WF_CHAN_FACTOR_5_G));
+ results[i].flags = evt_data->net[i].flags;
+ dhd_pno_idx_to_ssid(gscan_params, &results[i],
+ evt_data->net[i].index);
+ memcpy(ssid, results[i].ssid, results[i].ssid_len);
+ bssid = &results[i].bssid;
+ memcpy(bssid, &evt_data->net[i].bssid, ETHER_ADDR_LEN);
+ ssid[results[i].ssid_len] = '\0';
+ DHD_PNO(("ssid - %s bssid %02x:%02x:%02x:%02x:%02x:%02x "
+ "idx %d ch %d rssi %d flags %d\n", ssid,
+ bssid->octet[0], bssid->octet[1],
+ bssid->octet[2], bssid->octet[3],
+ bssid->octet[4], bssid->octet[5],
+ evt_data->net[i].index, results[i].channel,
+ results[i].rssi, results[i].flags));
+ }
+ } else if (event == WLC_E_PFN_NET_FOUND || event == WLC_E_PFN_NET_LOST) {
+ wl_pfn_scanresults_t *pfn_result = (wl_pfn_scanresults_t *)data;
+ wl_pfn_net_info_t *net;
+
+ if (pfn_result->version != PFN_SCANRESULT_VERSION) {
+ DHD_ERROR(("%s event %d: Incorrect version %d %d\n", __FUNCTION__, event,
+ pfn_result->version, PFN_SCANRESULT_VERSION));
+ return NULL;
+ }
+ count = pfn_result->count;
+ mem_needed = sizeof(dhd_epno_results_t) * count;
+ results = (dhd_epno_results_t *) kmalloc(mem_needed, GFP_KERNEL);
+ if (!results) {
+ DHD_ERROR(("%s: Can't malloc %d bytes for results\n", __FUNCTION__,
+ mem_needed));
+ return NULL;
+ }
+ for (i = 0; i < count; i++) {
+ net = &pfn_result->netinfo[i];
+ results[i].rssi = net->RSSI;
+ results[i].channel = wf_channel2mhz(net->pfnsubnet.channel,
+ (net->pfnsubnet.channel <= CH_MAX_2G_CHANNEL ?
+ WF_CHAN_FACTOR_2_4_G : WF_CHAN_FACTOR_5_G));
+ results[i].flags = (event == WLC_E_PFN_NET_FOUND) ?
+ WL_PFN_SSID_EXT_FOUND: WL_PFN_SSID_EXT_LOST;
+ results[i].ssid_len = min(net->pfnsubnet.SSID_len,
+ (uint8)DOT11_MAX_SSID_LEN);
+ bssid = &results[i].bssid;
+ memcpy(bssid, &net->pfnsubnet.BSSID, ETHER_ADDR_LEN);
+ memcpy(results[i].ssid, net->pfnsubnet.SSID, results[i].ssid_len);
+ memcpy(ssid, results[i].ssid, results[i].ssid_len);
+ ssid[results[i].ssid_len] = '\0';
+ DHD_PNO(("ssid - %s bssid %02x:%02x:%02x:%02x:%02x:%02x "
+ "ch %d rssi %d flags %d\n", ssid,
+ bssid->octet[0], bssid->octet[1],
+ bssid->octet[2], bssid->octet[3],
+ bssid->octet[4], bssid->octet[5],
+ results[i].channel, results[i].rssi, results[i].flags));
+ }
+ }
+ *size = mem_needed;
+ return results;
+}
+
+void *
+dhd_pno_process_anqpo_result(dhd_pub_t *dhd, const void *data, uint32 event, int *size)
+{
+ wl_bss_info_t *bi = (wl_bss_info_t *)data;
+ wifi_gscan_result_t *result = NULL;
+ wl_event_gas_t *gas_data = (wl_event_gas_t *)((uint8 *)data +
+ OFFSETOF(wifi_gscan_result_t, ie_data) + bi->ie_length);
+ uint8 channel;
+ uint32 mem_needed;
+ struct timespec ts;
+
+ if (event == WLC_E_PFN_NET_FOUND) {
+ mem_needed = OFFSETOF(wifi_gscan_result_t, ie_data) + bi->ie_length +
+ OFFSETOF(wl_event_gas_t, data) + gas_data->data_len +
+ sizeof(int);
+ result = (wifi_gscan_result_t *) kmalloc(mem_needed, GFP_KERNEL);
+ if (NULL == result) {
+ DHD_ERROR(("%s Cannot Malloc %d bytes!!\n", __FUNCTION__, mem_needed));
+ return NULL;
+ }
+
+ memcpy(result->ssid, bi->SSID, bi->SSID_len);
+ result->ssid[bi->SSID_len] = '\0';
+ channel = wf_chspec_ctlchan(bi->chanspec);
+ result->channel = wf_channel2mhz(channel,
+ (channel <= CH_MAX_2G_CHANNEL?
+ WF_CHAN_FACTOR_2_4_G : WF_CHAN_FACTOR_5_G));
+ result->rssi = (int32) bi->RSSI;
+ result->rtt = 0;
+ result->rtt_sd = 0;
+ get_monotonic_boottime(&ts);
+ result->ts = (uint64) TIMESPEC_TO_US(ts);
+ result->beacon_period = dtoh16(bi->beacon_period);
+ result->capability = dtoh16(bi->capability);
+ result->ie_length = dtoh32(bi->ie_length);
+ memcpy(&result->macaddr, &bi->BSSID, ETHER_ADDR_LEN);
+ memcpy(result->ie_data, ((uint8 *)bi + bi->ie_offset), bi->ie_length);
+ /* append ANQP data to end of scan result */
+ memcpy((uint8 *)result+OFFSETOF(wifi_gscan_result_t, ie_data)+bi->ie_length,
+ gas_data, OFFSETOF(wl_event_gas_t, data)+gas_data->data_len);
+ /* append network id to end of result */
+ memcpy((uint8 *)result+mem_needed-sizeof(int),
+ (uint8 *)data+(*size)-sizeof(int), sizeof(int));
+ *size = mem_needed;
+ } else {
+ DHD_ERROR(("%s unknown event: %d!!\n", __FUNCTION__, event));
+ }
+
+ return result;
+}
+
+
+void *dhd_handle_hotlist_scan_evt(dhd_pub_t *dhd, const void *event_data, int *send_evt_bytes,
+ hotlist_type_t type)
+{
+ void *ptr = NULL;
+ dhd_pno_status_info_t *_pno_state = PNO_GET_PNOSTATE(dhd);
+ struct dhd_pno_gscan_params *gscan_params;
+ wl_pfn_scanresults_t *results = (wl_pfn_scanresults_t *)event_data;
+ wifi_gscan_result_t *hotlist_found_array;
+ wl_pfn_net_info_t *plnetinfo;
+ gscan_results_cache_t *gscan_hotlist_cache;
+ int malloc_size = 0, i, total = 0;
+
+ gscan_params = &(_pno_state->pno_params_arr[INDEX_OF_GSCAN_PARAMS].params_gscan);
+
+ if (!results->count) {
+ *send_evt_bytes = 0;
+ return ptr;
+ }
+
+ malloc_size = sizeof(gscan_results_cache_t) +
+ ((results->count - 1) * sizeof(wifi_gscan_result_t));
+ gscan_hotlist_cache = (gscan_results_cache_t *) kmalloc(malloc_size, GFP_KERNEL);
+
+ if (!gscan_hotlist_cache) {
+ DHD_ERROR(("%s Cannot Malloc %d bytes!!\n", __FUNCTION__, malloc_size));
+ *send_evt_bytes = 0;
+ return ptr;
+ }
+
+ if (type == HOTLIST_FOUND) {
+ gscan_hotlist_cache->next = gscan_params->gscan_hotlist_found;
+ gscan_params->gscan_hotlist_found = gscan_hotlist_cache;
+ DHD_PNO(("%s enter, FOUND results count %d\n", __FUNCTION__, results->count));
+ } else {
+ gscan_hotlist_cache->next = gscan_params->gscan_hotlist_lost;
+ gscan_params->gscan_hotlist_lost = gscan_hotlist_cache;
+ DHD_PNO(("%s enter, LOST results count %d\n", __FUNCTION__, results->count));
+ }
+
+ gscan_hotlist_cache->tot_count = results->count;
+ gscan_hotlist_cache->tot_consumed = 0;
+ plnetinfo = results->netinfo;
+
+ for (i = 0; i < results->count; i++, plnetinfo++) {
+ hotlist_found_array = &gscan_hotlist_cache->results[i];
+ hotlist_found_array->channel = wf_channel2mhz(plnetinfo->pfnsubnet.channel,
+ (plnetinfo->pfnsubnet.channel <= CH_MAX_2G_CHANNEL?
+ WF_CHAN_FACTOR_2_4_G : WF_CHAN_FACTOR_5_G));
+ hotlist_found_array->rssi = (int32) plnetinfo->RSSI;
+ /* Info not available & not expected */
+ hotlist_found_array->beacon_period = 0;
+ hotlist_found_array->capability = 0;
+ hotlist_found_array->ie_length = 0;
+
+ hotlist_found_array->ts = convert_fw_rel_time_to_systime(plnetinfo->timestamp);
+ if (plnetinfo->pfnsubnet.SSID_len > DOT11_MAX_SSID_LEN) {
+ DHD_ERROR(("Invalid SSID length %d: trimming it to max\n",
+ plnetinfo->pfnsubnet.SSID_len));
+ plnetinfo->pfnsubnet.SSID_len = DOT11_MAX_SSID_LEN;
+ }
+ memcpy(hotlist_found_array->ssid, plnetinfo->pfnsubnet.SSID,
+ plnetinfo->pfnsubnet.SSID_len);
+ hotlist_found_array->ssid[plnetinfo->pfnsubnet.SSID_len] = '\0';
+
+ memcpy(&hotlist_found_array->macaddr, &plnetinfo->pfnsubnet.BSSID, ETHER_ADDR_LEN);
+ DHD_PNO(("\t%s %02x:%02x:%02x:%02x:%02x:%02x rssi %d\n", hotlist_found_array->ssid,
+ hotlist_found_array->macaddr.octet[0],
+ hotlist_found_array->macaddr.octet[1],
+ hotlist_found_array->macaddr.octet[2],
+ hotlist_found_array->macaddr.octet[3],
+ hotlist_found_array->macaddr.octet[4],
+ hotlist_found_array->macaddr.octet[5],
+ hotlist_found_array->rssi));
+ }
+
+
+ if (results->status == PFN_COMPLETE) {
+ ptr = (void *) gscan_hotlist_cache;
+ while (gscan_hotlist_cache) {
+ total += gscan_hotlist_cache->tot_count;
+ gscan_hotlist_cache = gscan_hotlist_cache->next;
+ }
+ *send_evt_bytes = total * sizeof(wifi_gscan_result_t);
+ }
+
+ return ptr;
+}
+#endif /* GSCAN_SUPPORT */
int
dhd_pno_event_handler(dhd_pub_t *dhd, wl_event_msg_t *event, void *event_data)
{
@@ -1814,6 +3984,7 @@
/* TODO : need to implement event logic using generic netlink */
break;
case WLC_E_PFN_BEST_BATCHING:
+#ifndef GSCAN_SUPPORT
{
struct dhd_pno_batch_params *params_batch;
params_batch = &_pno_state->pno_params_arr[INDEX_OF_BATCH_PARAMS].params_batch;
@@ -1828,6 +3999,9 @@
"will skip this event\n", __FUNCTION__));
break;
}
+#else
+ break;
+#endif /* !GSCAN_SUPPORT */
default:
DHD_ERROR(("unknown event : %d\n", event_type));
}
@@ -1854,6 +4028,9 @@
mutex_init(&_pno_state->pno_mutex);
INIT_WORK(&_pno_state->work, _dhd_pno_get_batch_handler);
init_completion(&_pno_state->get_batch_done);
+#ifdef GSCAN_SUPPORT
+ init_waitqueue_head(&_pno_state->batch_get_wait);
+#endif /* GSCAN_SUPPORT */
err = dhd_iovar(dhd, 0, "pfnlbest", NULL, 0, 0);
if (err == BCME_UNSUPPORTED) {
_pno_state->wls_supported = FALSE;
@@ -1879,6 +4056,15 @@
_dhd_pno_reinitialize_prof(dhd, _params, DHD_PNO_LEGACY_MODE);
}
+#ifdef GSCAN_SUPPORT
+ if (_pno_state->pno_mode & DHD_PNO_GSCAN_MODE) {
+ _params = &_pno_state->pno_params_arr[INDEX_OF_GSCAN_PARAMS];
+ mutex_lock(&_pno_state->pno_mutex);
+ dhd_pno_reset_cfg_gscan(_params, _pno_state, GSCAN_FLUSH_ALL_CFG);
+ mutex_unlock(&_pno_state->pno_mutex);
+ }
+#endif /* GSCAN_SUPPORT */
+
if (_pno_state->pno_mode & DHD_PNO_BATCH_MODE) {
_params = &_pno_state->pno_params_arr[INDEX_OF_BATCH_PARAMS];
/* clear resource if the BATCH MODE is on */
diff --git a/drivers/net/wireless/bcmdhd/dhd_pno.h b/drivers/net/wireless/bcmdhd/dhd_pno.h
old mode 100755
new mode 100644
index e7d594c..349cee8
--- a/drivers/net/wireless/bcmdhd/dhd_pno.h
+++ b/drivers/net/wireless/bcmdhd/dhd_pno.h
@@ -2,13 +2,13 @@
* Header file of Broadcom Dongle Host Driver (DHD)
* Prefered Network Offload code and Wi-Fi Location Service(WLS) code.
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -16,7 +16,7 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
@@ -59,6 +59,47 @@
#define RESULTS_END_MARKER "----\n"
#define SCAN_END_MARKER "####\n"
#define AP_END_MARKER "====\n"
+#define PNO_RSSI_MARGIN_DBM 30
+
+#ifdef GSCAN_SUPPORT
+
+#define GSCAN_MAX_CH_BUCKETS 8
+#define GSCAN_MAX_CHANNELS_IN_BUCKET 32
+#define GSCAN_MAX_AP_CACHE_PER_SCAN 32
+#define GSCAN_MAX_AP_CACHE 320
+#define GSCAN_BG_BAND_MASK (1 << 0)
+#define GSCAN_A_BAND_MASK (1 << 1)
+#define GSCAN_DFS_MASK (1 << 2)
+#define GSCAN_ABG_BAND_MASK (GSCAN_A_BAND_MASK | GSCAN_BG_BAND_MASK)
+#define GSCAN_BAND_MASK (GSCAN_ABG_BAND_MASK | GSCAN_DFS_MASK)
+
+#define GSCAN_FLUSH_HOTLIST_CFG (1 << 0)
+#define GSCAN_FLUSH_SIGNIFICANT_CFG (1 << 1)
+#define GSCAN_FLUSH_SCAN_CFG (1 << 2)
+#define GSCAN_FLUSH_EPNO_CFG (1 << 3)
+#define GSCAN_FLUSH_ALL_CFG (GSCAN_FLUSH_SCAN_CFG | \
+ GSCAN_FLUSH_SIGNIFICANT_CFG | \
+ GSCAN_FLUSH_HOTLIST_CFG | \
+ GSCAN_FLUSH_EPNO_CFG)
+#define DHD_EPNO_HIDDEN_SSID (1 << 0)
+#define DHD_EPNO_A_BAND_TRIG (1 << 1)
+#define DHD_EPNO_BG_BAND_TRIG (1 << 2)
+#define DHD_EPNO_STRICT_MATCH (1 << 3)
+#define DHD_PNO_USE_SSID (DHD_EPNO_HIDDEN_SSID | DHD_EPNO_STRICT_MATCH)
+
+/* Do not change GSCAN_BATCH_RETRIEVAL_COMPLETE */
+#define GSCAN_BATCH_RETRIEVAL_COMPLETE 0
+#define GSCAN_BATCH_RETRIEVAL_IN_PROGRESS 1
+#define GSCAN_BATCH_NO_THR_SET 101
+#define GSCAN_LOST_AP_WINDOW_DEFAULT 4
+#define GSCAN_MIN_BSSID_TIMEOUT 90
+#define GSCAN_BATCH_GET_MAX_WAIT 500
+
+#define CHANNEL_BUCKET_EMPTY_INDEX 0xFFFF
+#define GSCAN_RETRY_THRESHOLD 3
+#define MAX_EPNO_SSID_NUM 32
+
+#endif /* GSCAN_SUPPORT */
enum scan_status {
/* SCAN ABORT by other scan */
@@ -82,6 +123,12 @@
INDEX_OF_LEGACY_PARAMS,
INDEX_OF_BATCH_PARAMS,
INDEX_OF_HOTLIST_PARAMS,
+ /* GSCAN includes hotlist scan and they do not run
+ * independent of each other
+ */
+#ifdef GSCAN_SUPPORT
+ INDEX_OF_GSCAN_PARAMS = INDEX_OF_HOTLIST_PARAMS,
+#endif /* GSCAN_SUPPORT */
INDEX_MODE_MAX
};
enum dhd_pno_status {
@@ -95,17 +142,62 @@
char subtype;
char reserved;
} cmd_tlv_t;
+#ifdef GSCAN_SUPPORT
+typedef enum {
+ WIFI_BAND_UNSPECIFIED,
+ WIFI_BAND_BG = 1, /* 2.4 GHz */
+ WIFI_BAND_A = 2, /* 5 GHz without DFS */
+ WIFI_BAND_A_DFS = 4, /* 5 GHz DFS only */
+ WIFI_BAND_A_WITH_DFS = 6, /* 5 GHz with DFS */
+ WIFI_BAND_ABG = 3, /* 2.4 GHz + 5 GHz; no DFS */
+ WIFI_BAND_ABG_WITH_DFS = 7, /* 2.4 GHz + 5 GHz with DFS */
+} gscan_wifi_band_t;
+
+typedef enum {
+ HOTLIST_LOST,
+ HOTLIST_FOUND
+} hotlist_type_t;
+
+typedef enum dhd_pno_gscan_cmd_cfg {
+ DHD_PNO_BATCH_SCAN_CFG_ID,
+ DHD_PNO_GEOFENCE_SCAN_CFG_ID,
+ DHD_PNO_SIGNIFICANT_SCAN_CFG_ID,
+ DHD_PNO_SCAN_CFG_ID,
+ DHD_PNO_GET_CAPABILITIES,
+ DHD_PNO_GET_BATCH_RESULTS,
+ DHD_PNO_GET_CHANNEL_LIST,
+ DHD_PNO_GET_EPNO_SSID_ELEM,
+ DHD_PNO_EPNO_CFG_ID,
+ DHD_PNO_GET_AUTOJOIN_CAPABILITIES
+} dhd_pno_gscan_cmd_cfg_t;
+
typedef enum dhd_pno_mode {
/* Wi-Fi Legacy PNO Mode */
- DHD_PNO_NONE_MODE = 0,
+ DHD_PNO_NONE_MODE = 0,
+ DHD_PNO_LEGACY_MODE = (1 << (0)),
+ /* Wi-Fi Android BATCH SCAN Mode */
+ DHD_PNO_BATCH_MODE = (1 << (1)),
+ /* Wi-Fi Android Hotlist SCAN Mode */
+ DHD_PNO_HOTLIST_MODE = (1 << (2)),
+ /* Wi-Fi Google Android SCAN Mode */
+ DHD_PNO_GSCAN_MODE = (1 << (3))
+} dhd_pno_mode_t;
+#else
+typedef enum dhd_pno_mode {
+ /* Wi-Fi Legacy PNO Mode */
+ DHD_PNO_NONE_MODE = 0,
DHD_PNO_LEGACY_MODE = (1 << (0)),
/* Wi-Fi Android BATCH SCAN Mode */
DHD_PNO_BATCH_MODE = (1 << (1)),
/* Wi-Fi Android Hotlist SCAN Mode */
DHD_PNO_HOTLIST_MODE = (1 << (2))
} dhd_pno_mode_t;
+#endif /* GSCAN_SUPPORT */
struct dhd_pno_ssid {
- uint32 SSID_len;
+ bool hidden;
+ int8 rssi_thresh;
+ uint8 dummy;
+ uint16 SSID_len;
uchar SSID[DOT11_MAX_SSID_LEN];
struct list_head list;
};
@@ -185,15 +277,190 @@
uint16 nbssid;
struct list_head bssid_list;
};
+#ifdef GSCAN_SUPPORT
+#define DHD_PNO_REPORT_NO_BATCH (1 << 2)
+
+typedef struct dhd_pno_gscan_channel_bucket {
+ uint16 bucket_freq_multiple;
+ /* band = 1 All bg band channels,
+ * band = 2 All a band channels,
+ * band = 0 chan_list channels
+ */
+ uint16 band;
+ uint8 report_flag;
+ uint8 num_channels;
+ uint16 repeat;
+ uint16 bucket_max_multiple;
+ uint16 chan_list[GSCAN_MAX_CHANNELS_IN_BUCKET];
+} dhd_pno_gscan_channel_bucket_t;
+
+
+#define DHD_PNO_AUTH_CODE_OPEN 1 /* Open */
+#define DHD_PNO_AUTH_CODE_PSK 2 /* WPA_PSK or WPA2PSK */
+#define DHD_PNO_AUTH_CODE_EAPOL 4 /* any EAPOL */
+
+#define DHD_EPNO_DEFAULT_INDEX 0xFFFFFFFF
+
+typedef struct dhd_epno_params {
+ uint8 ssid[DOT11_MAX_SSID_LEN];
+ uint8 ssid_len;
+ int8 rssi_thresh;
+ uint8 flags;
+ uint8 auth;
+ /* index required only for visible ssid */
+ uint32 index;
+ struct list_head list;
+} dhd_epno_params_t;
+
+typedef struct dhd_epno_results {
+ uint8 ssid[DOT11_MAX_SSID_LEN];
+ uint8 ssid_len;
+ int8 rssi;
+ uint16 channel;
+ uint16 flags;
+ struct ether_addr bssid;
+} dhd_epno_results_t;
+
+struct dhd_pno_swc_evt_param {
+ uint16 results_rxed_so_far;
+ wl_pfn_significant_net_t *change_array;
+};
+
+typedef struct wifi_gscan_result {
+ uint64 ts; /* Time of discovery */
+ char ssid[DOT11_MAX_SSID_LEN+1]; /* null terminated */
+ struct ether_addr macaddr; /* BSSID */
+ uint32 channel; /* channel frequency in MHz */
+ int32 rssi; /* in db */
+ uint64 rtt; /* in nanoseconds */
+ uint64 rtt_sd; /* standard deviation in rtt */
+ uint16 beacon_period; /* units are Kusec */
+ uint16 capability; /* Capability information */
+ uint32 ie_length; /* byte length of Information Elements */
+ char ie_data[1]; /* IE data to follow */
+} wifi_gscan_result_t;
+
+typedef struct gscan_results_cache {
+ struct gscan_results_cache *next;
+ uint8 scan_id;
+ uint8 flag;
+ uint8 tot_count;
+ uint8 tot_consumed;
+ wifi_gscan_result_t results[1];
+} gscan_results_cache_t;
+
+typedef struct {
+ int id; /* identifier of this network block, report this in event */
+ char realm[256]; /* null terminated UTF8 encoded realm, 0 if unspecified */
+ int64_t roamingConsortiumIds[16]; /* roaming consortium ids to match, 0s if unspecified */
+ uint8 plmn[3]; /* mcc/mnc combination as per rules, 0s if unspecified */
+} wifi_passpoint_network;
+
+typedef struct dhd_pno_gscan_capabilities {
+ int max_scan_cache_size;
+ int max_scan_buckets;
+ int max_ap_cache_per_scan;
+ int max_rssi_sample_size;
+ int max_scan_reporting_threshold;
+ int max_hotlist_aps;
+ int max_significant_wifi_change_aps;
+ int max_epno_ssid_crc32;
+ int max_epno_hidden_ssid;
+ int max_white_list_ssid;
+} dhd_pno_gscan_capabilities_t;
+
+struct dhd_pno_gscan_params {
+ int32 scan_fr;
+ uint8 bestn;
+ uint8 mscan;
+ uint8 buffer_threshold;
+ uint8 swc_nbssid_threshold;
+ uint8 swc_rssi_window_size;
+ uint8 lost_ap_window;
+ uint8 nchannel_buckets;
+ uint8 reason;
+ uint8 get_batch_flag;
+ uint8 send_all_results_flag;
+ uint16 max_ch_bucket_freq;
+ gscan_results_cache_t *gscan_batch_cache;
+ gscan_results_cache_t *gscan_hotlist_found;
+ gscan_results_cache_t *gscan_hotlist_lost;
+ uint16 nbssid_significant_change;
+ uint16 nbssid_hotlist;
+ uint16 num_epno_ssid;
+ uint8 num_visible_epno_ssid;
+ /* To keep track of visible ssid index
+ * across multiple FW configs i.e. config
+ * w/o clear in between
+ */
+ uint8 ssid_ext_last_used_index;
+ struct dhd_pno_swc_evt_param param_significant;
+ struct dhd_pno_gscan_channel_bucket channel_bucket[GSCAN_MAX_CH_BUCKETS];
+ struct list_head hotlist_bssid_list;
+ struct list_head significant_bssid_list;
+ struct list_head epno_ssid_list;
+ uint32 scan_id;
+};
+
+typedef struct gscan_scan_params {
+ int32 scan_fr;
+ uint16 nchannel_buckets;
+ struct dhd_pno_gscan_channel_bucket channel_bucket[GSCAN_MAX_CH_BUCKETS];
+} gscan_scan_params_t;
+
+typedef struct gscan_batch_params {
+ uint8 bestn;
+ uint8 mscan;
+ uint8 buffer_threshold;
+} gscan_batch_params_t;
+
+struct bssid_t {
+ struct ether_addr macaddr;
+ int16 rssi_reporting_threshold; /* 0 -> no reporting threshold */
+};
+
+typedef struct gscan_hotlist_scan_params {
+ uint16 lost_ap_window; /* number of scans to declare LOST */
+ uint16 nbssid; /* number of bssids */
+ struct bssid_t bssid[1]; /* n bssids to follow */
+} gscan_hotlist_scan_params_t;
+
+/* SWC (Significant WiFi Change) params */
+typedef struct gscan_swc_params {
+ /* Rssi averaging window size */
+ uint8 rssi_window;
+ /* Number of scans that the AP has to be absent before
+ * being declared LOST
+ */
+ uint8 lost_ap_window;
+ /* if x Aps have a significant change generate an event. */
+ uint8 swc_threshold;
+ uint8 nbssid;
+ wl_pfn_significant_bssid_t bssid_elem_list[1];
+} gscan_swc_params_t;
+
+typedef struct dhd_pno_significant_bssid {
+ struct ether_addr BSSID;
+ int8 rssi_low_threshold;
+ int8 rssi_high_threshold;
+ struct list_head list;
+} dhd_pno_significant_bssid_t;
+#endif /* GSCAN_SUPPORT */
typedef union dhd_pno_params {
struct dhd_pno_legacy_params params_legacy;
struct dhd_pno_batch_params params_batch;
struct dhd_pno_hotlist_params params_hotlist;
+#ifdef GSCAN_SUPPORT
+ struct dhd_pno_gscan_params params_gscan;
+#endif /* GSCAN_SUPPORT */
} dhd_pno_params_t;
typedef struct dhd_pno_status_info {
dhd_pub_t *dhd;
struct work_struct work;
struct mutex pno_mutex;
+#ifdef GSCAN_SUPPORT
+ wait_queue_head_t batch_get_wait;
+#endif /* GSCAN_SUPPORT */
struct completion get_batch_done;
bool wls_supported; /* wifi location service supported or not */
enum dhd_pno_status pno_status;
@@ -210,7 +477,7 @@
dhd_dev_pno_stop_for_ssid(struct net_device *dev);
extern int
-dhd_dev_pno_set_for_ssid(struct net_device *dev, wlc_ssid_t* ssids_local, int nssid,
+dhd_dev_pno_set_for_ssid(struct net_device *dev, wlc_ssid_ext_t* ssids_local, int nssid,
uint16 scan_fr, int pno_repeat, int pno_freq_expo_max, uint16 *channel_list, int nchan);
extern int
@@ -226,11 +493,37 @@
extern int
dhd_dev_pno_set_for_hotlist(struct net_device *dev, wl_pfn_bssid_t *p_pfn_bssid,
struct dhd_pno_hotlist_params *hotlist_params);
-
+extern bool dhd_dev_is_legacy_pno_enabled(struct net_device *dev);
+#ifdef GSCAN_SUPPORT
+extern int
+dhd_dev_pno_set_cfg_gscan(struct net_device *dev, dhd_pno_gscan_cmd_cfg_t type,
+ void *buf, uint8 flush);
+extern void *
+dhd_dev_pno_get_gscan(struct net_device *dev, dhd_pno_gscan_cmd_cfg_t type, void *info,
+ uint32 *len);
+int dhd_dev_pno_lock_access_batch_results(struct net_device *dev);
+void dhd_dev_pno_unlock_access_batch_results(struct net_device *dev);
+extern int dhd_dev_pno_run_gscan(struct net_device *dev, bool run, bool flush);
+extern int dhd_dev_pno_enable_full_scan_result(struct net_device *dev, bool real_time);
+extern void * dhd_dev_swc_scan_event(struct net_device *dev, const void *data,
+ int *send_evt_bytes);
+int dhd_retreive_batch_scan_results(dhd_pub_t *dhd);
+extern void * dhd_dev_hotlist_scan_event(struct net_device *dev,
+ const void *data, int *send_evt_bytes, hotlist_type_t type);
+void * dhd_dev_process_full_gscan_result(struct net_device *dev,
+ const void *data, int *send_evt_bytes);
+extern int dhd_dev_gscan_batch_cache_cleanup(struct net_device *dev);
+extern void dhd_dev_gscan_hotlist_cache_cleanup(struct net_device *dev, hotlist_type_t type);
+extern int dhd_dev_wait_batch_results_complete(struct net_device *dev);
+extern void * dhd_dev_process_epno_result(struct net_device *dev,
+ const void *data, uint32 event, int *send_evt_bytes);
+extern void * dhd_dev_process_anqpo_result(struct net_device *dev,
+ const void *data, uint32 event, int *send_evt_bytes);
+#endif /* GSCAN_SUPPORT */
 /* dhd pno functions */
extern int dhd_pno_stop_for_ssid(dhd_pub_t *dhd);
extern int dhd_pno_enable(dhd_pub_t *dhd, int enable);
-extern int dhd_pno_set_for_ssid(dhd_pub_t *dhd, wlc_ssid_t* ssid_list, int nssid,
+extern int dhd_pno_set_for_ssid(dhd_pub_t *dhd, wlc_ssid_ext_t* ssid_list, int nssid,
uint16 scan_fr, int pno_repeat, int pno_freq_expo_max, uint16 *channel_list, int nchan);
extern int dhd_pno_set_for_batch(dhd_pub_t *dhd, struct dhd_pno_batch_params *batch_params);
@@ -248,6 +541,32 @@
extern int dhd_pno_event_handler(dhd_pub_t *dhd, wl_event_msg_t *event, void *event_data);
extern int dhd_pno_init(dhd_pub_t *dhd);
extern int dhd_pno_deinit(dhd_pub_t *dhd);
-#endif
+extern bool dhd_is_pno_supported(dhd_pub_t *dhd);
+extern int dhd_pno_set_mac_oui(dhd_pub_t *dhd, uint8 *oui);
+extern bool dhd_is_legacy_pno_enabled(dhd_pub_t *dhd);
+#ifdef GSCAN_SUPPORT
+extern int dhd_pno_set_cfg_gscan(dhd_pub_t *dhd, dhd_pno_gscan_cmd_cfg_t type,
+ void *buf, uint8 flush);
+extern void * dhd_pno_get_gscan(dhd_pub_t *dhd, dhd_pno_gscan_cmd_cfg_t type, void *info,
+ uint32 *len);
+extern int dhd_pno_lock_batch_results(dhd_pub_t *dhd);
+extern void dhd_pno_unlock_batch_results(dhd_pub_t *dhd);
+extern int dhd_pno_initiate_gscan_request(dhd_pub_t *dhd, bool run, bool flush);
+extern int dhd_pno_enable_full_scan_result(dhd_pub_t *dhd, bool real_time_flag);
+extern int dhd_pno_cfg_gscan(dhd_pub_t *dhd, dhd_pno_gscan_cmd_cfg_t type, void *buf);
+extern int dhd_dev_retrieve_batch_scan(struct net_device *dev);
+extern void *dhd_handle_swc_evt(dhd_pub_t *dhd, const void *event_data, int *send_evt_bytes);
+extern void *dhd_handle_hotlist_scan_evt(dhd_pub_t *dhd, const void *event_data,
+ int *send_evt_bytes, hotlist_type_t type);
+extern void *dhd_process_full_gscan_result(dhd_pub_t *dhd, const void *event_data,
+ int *send_evt_bytes);
+extern int dhd_gscan_batch_cache_cleanup(dhd_pub_t *dhd);
+extern void dhd_gscan_hotlist_cache_cleanup(dhd_pub_t *dhd, hotlist_type_t type);
+extern int dhd_wait_batch_results_complete(dhd_pub_t *dhd);
+extern void * dhd_pno_process_epno_result(dhd_pub_t *dhd, const void *data,
+ uint32 event, int *size);
+extern void * dhd_pno_process_anqpo_result(dhd_pub_t *dhd, const void *data, uint32 event, int *size);
+#endif /* GSCAN_SUPPORT */
+#endif /* PNO_SUPPORT */
#endif /* __DHD_PNO_H__ */
diff --git a/drivers/net/wireless/bcmdhd/dhd_proto.h b/drivers/net/wireless/bcmdhd/dhd_proto.h
old mode 100755
new mode 100644
index 4b794a4..87e0c83
--- a/drivers/net/wireless/bcmdhd/dhd_proto.h
+++ b/drivers/net/wireless/bcmdhd/dhd_proto.h
@@ -24,7 +24,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: dhd_proto.h 455951 2014-02-17 10:52:22Z $
+ * $Id: dhd_proto.h 472193 2014-04-23 06:27:38Z $
*/
#ifndef _dhd_proto_h_
@@ -32,9 +32,18 @@
#include <dhdioctl.h>
#include <wlioctl.h>
+#ifdef BCMPCIE
+#include <dhd_flowring.h>
+#endif
+#define DEFAULT_IOCTL_RESP_TIMEOUT 2000
#ifndef IOCTL_RESP_TIMEOUT
-#define IOCTL_RESP_TIMEOUT 2000 /* In milli second default value for Production FW */
+#ifdef BCMQT
+#define IOCTL_RESP_TIMEOUT 30000 /* In milliseconds */
+#else
+/* Default value in milliseconds for production FW */
+#define IOCTL_RESP_TIMEOUT DEFAULT_IOCTL_RESP_TIMEOUT
+#endif /* BCMQT */
#endif /* IOCTL_RESP_TIMEOUT */
#ifndef MFG_IOCTL_RESP_TIMEOUT
@@ -48,13 +57,19 @@
/* Linkage, sets prot link and updates hdrlen in pub */
extern int dhd_prot_attach(dhd_pub_t *dhdp);
+/* Initializes the index block for DMA'ing indices */
+extern int dhd_prot_init_index_dma_block(dhd_pub_t *dhdp, uint8 type, uint32 length);
+
/* Unlink, frees allocated protocol memory (including dhd_prot) */
extern void dhd_prot_detach(dhd_pub_t *dhdp);
/* Initialize protocol: sync w/dongle state.
* Sets dongle media info (iswl, drv_version, mac address).
*/
-extern int dhd_prot_init(dhd_pub_t *dhdp);
+extern int dhd_sync_with_dongle(dhd_pub_t *dhdp);
+
+/* Protocol initialization needed for IOCTL/IOVAR path */
+extern int dhd_prot_init(dhd_pub_t *dhd);
/* Stop protocol: sync w/dongle state. */
extern void dhd_prot_stop(dhd_pub_t *dhdp);
@@ -63,6 +78,7 @@
* Caller must reserve prot_hdrlen prepend space.
*/
extern void dhd_prot_hdrpush(dhd_pub_t *, int ifidx, void *txp);
+extern uint dhd_prot_hdrlen(dhd_pub_t *, void *txp);
/* Remove any protocol-specific data header. */
extern int dhd_prot_hdrpull(dhd_pub_t *, int *ifidx, void *rxp, uchar *buf, uint *len);
@@ -91,7 +107,8 @@
uint reorder_info_len, void **pkt, uint32 *free_buf_count);
#ifdef BCMPCIE
-extern int dhd_prot_process_msgbuf(dhd_pub_t *dhd);
+extern int dhd_prot_process_msgbuf_txcpl(dhd_pub_t *dhd);
+extern int dhd_prot_process_msgbuf_rxcpl(dhd_pub_t *dhd);
extern int dhd_prot_process_ctrlbuf(dhd_pub_t * dhd);
extern bool dhd_prot_dtohsplit(dhd_pub_t * dhd);
extern int dhd_post_dummy_msg(dhd_pub_t *dhd);
@@ -99,7 +116,25 @@
extern void dhd_prot_rx_dataoffset(dhd_pub_t *dhd, uint32 offset);
extern int dhd_prot_txdata(dhd_pub_t *dhd, void *p, uint8 ifidx);
extern int dhdmsgbuf_dmaxfer_req(dhd_pub_t *dhd, uint len, uint srcdelay, uint destdelay);
-#endif
+
+extern int dhd_prot_flow_ring_create(dhd_pub_t *dhd, flow_ring_node_t *flow_ring_node);
+extern void dhd_prot_clean_flow_ring(dhd_pub_t *dhd, void *msgbuf_flow_info);
+extern int dhd_post_tx_ring_item(dhd_pub_t *dhd, void *PKTBUF, uint8 ifindex);
+extern int dhd_prot_flow_ring_delete(dhd_pub_t *dhd, flow_ring_node_t *flow_ring_node);
+extern int dhd_prot_flow_ring_flush(dhd_pub_t *dhd, flow_ring_node_t *flow_ring_node);
+extern int dhd_prot_ringupd_dump(dhd_pub_t *dhd, struct bcmstrbuf *b);
+extern uint32 dhd_prot_metadatalen_set(dhd_pub_t *dhd, uint32 val, bool rx);
+extern uint32 dhd_prot_metadatalen_get(dhd_pub_t *dhd, bool rx);
+extern void dhd_prot_print_flow_ring(dhd_pub_t *dhd, void *msgbuf_flow_info,
+ struct bcmstrbuf *strbuf);
+extern void dhd_prot_print_info(dhd_pub_t *dhd, struct bcmstrbuf *strbuf);
+extern void dhd_prot_update_txflowring(dhd_pub_t *dhdp, uint16 flow_id, void *msgring_info);
+extern void dhd_prot_txdata_write_flush(dhd_pub_t *dhd, uint16 flow_id, bool in_lock);
+extern uint32 dhd_prot_txp_threshold(dhd_pub_t *dhd, bool set, uint32 val);
+extern void dhd_prot_clear(dhd_pub_t *dhd);
+
+
+#endif /* BCMPCIE */
/********************************
* For version-string expansion *
diff --git a/drivers/net/wireless/bcmdhd/dhd_qmon.c b/drivers/net/wireless/bcmdhd/dhd_qmon.c
deleted file mode 100755
index b93cecd..0000000
--- a/drivers/net/wireless/bcmdhd/dhd_qmon.c
+++ /dev/null
@@ -1,169 +0,0 @@
-/*
- * Queue monitoring.
- *
- * The feature allows monitoring the DHD queue utilization to get the percentage of a time period
- * where the number of pending packets is above a configurable theshold.
- * Right now, this is used by a server application, interfacing a Miracast Video Encoder, and
- * doing IOVAR "qtime_percent" at regular interval. Based on IOVAR "qtime_percent" results,
- * the server indicates to the Video Encoder if its bitrate can be increased or must be decreased.
- * Currently, this works only with P2P interfaces and with PROP_TXSTATUS. There is no need to handle
- * concurrent access to the fieds because the existing concurrent accesses are protected
- * by the PROP_TXSTATUS's lock.
- *
- * Copyright (C) 1999-2014, Broadcom Corporation
- *
- * Unless you and Broadcom execute a separate written software license
- * agreement governing use of this software, this software is licensed to you
- * under the terms of the GNU General Public License version 2 (the "GPL"),
- * available at http://www.broadcom.com/licenses/GPLv2.php, with the
- * following added to such license:
- *
- * As a special exception, the copyright holders of this software give you
- * permission to link this software with independent modules, and to copy and
- * distribute the resulting executable under terms of your choice, provided that
- * you also meet, for each linked independent module, the terms and conditions of
- * the license of that module. An independent module is a module which is not
- * derived from this software. The special exception does not apply to any
- * modifications of the software.
- *
- * Notwithstanding the above, under no circumstances may you combine this
- * software in any way with any other Broadcom software provided under a license
- * other than the GPL, without Broadcom's express prior written consent.
- *
- * $Id: dhd_qmon.c 309265 2012-01-19 02:50:46Z $
- *
- */
-#include <osl.h>
-#include <bcmutils.h>
-#include <bcmendian.h>
-#include <dngl_stats.h>
-#include <wlioctl.h>
-#include <dhd.h>
-#include <dhd_qmon.h>
-#ifndef PROP_TXSTATUS
-#error "PROP_TXSTATUS must be build to build dhd_qmon.c"
-#endif
-#include <wlfc_proto.h>
-#include <dhd_wlfc.h>
-
-#if defined(BCMDRIVER)
-#define QMON_SYSUPTIME() ((uint64)(jiffies_to_usecs(jiffies)))
-#else
- #error "target not yet supported"
-#endif
-
-static dhd_qmon_t *
-dhd_qmon_p2p_entry(dhd_pub_t *dhdp)
-{
- wlfc_mac_descriptor_t* interfaces = NULL;
- wlfc_mac_descriptor_t* nodes = NULL;
- uint8 i;
-
- if (dhdp->wlfc_state == NULL)
- return NULL;
-
- interfaces = ((athost_wl_status_info_t*)dhdp->wlfc_state)->destination_entries.interfaces;
- nodes = ((athost_wl_status_info_t*)dhdp->wlfc_state)->destination_entries.nodes;
-
- ASSERT(interfaces != NULL);
- ASSERT(nodes != NULL);
-
- for (i = 0; i < WLFC_MAC_DESC_TABLE_SIZE; i++) {
- if (nodes[i].occupied &&
- ((nodes[i].iftype == WLC_E_IF_ROLE_P2P_CLIENT) ||
- (nodes[i].iftype == WLC_E_IF_ROLE_P2P_GO)))
- return &nodes[i].qmon;
- }
-
- for (i = 0; i < WLFC_MAX_IFNUM; i++) {
- if (interfaces[i].occupied &&
- ((interfaces[i].iftype == WLC_E_IF_ROLE_P2P_CLIENT) ||
- (interfaces[i].iftype == WLC_E_IF_ROLE_P2P_GO)))
- return &nodes[i].qmon;
- }
-
- return NULL;
-}
-
-void
-dhd_qmon_reset(dhd_qmon_t* qmon)
-{
- qmon->transitq_count = 0;
- qmon->queued_time_cumul = 0;
- qmon->queued_time_cumul_last = 0;
- qmon->queued_time_last = 0;
- qmon->queued_time_last_io = 0;
-}
-
-void
-dhd_qmon_tx(dhd_qmon_t* qmon)
-{
- if ((++qmon->transitq_count > qmon->queued_time_thres) &&
- (qmon->queued_time_last == 0)) {
- /* Set timestamp when transit packet above a threshold */
- qmon->queued_time_last = QMON_SYSUPTIME();
- }
-}
-
-void
-dhd_qmon_txcomplete(dhd_qmon_t* qmon)
-{
- uint64 now = QMON_SYSUPTIME();
-
- qmon->transitq_count--;
- if ((qmon->transitq_count <= qmon->queued_time_thres) &&
- (qmon->queued_time_last != 0)) {
- /* Set timestamp when transit packet above a threshold */
- qmon->queued_time_cumul += now - qmon->queued_time_last;
- qmon->queued_time_last = 0;
- }
-}
-
-int
-dhd_qmon_thres(dhd_pub_t *dhdp, int set, int setval)
-{
- int val = 0;
- dhd_qmon_t* qmon = dhd_qmon_p2p_entry(dhdp);
-
- if (qmon == NULL)
- return 0;
-
- if (set)
- qmon->queued_time_thres = setval;
- else
- val = qmon->queued_time_thres;
-
- return val;
-}
-
-
-int
-dhd_qmon_getpercent(dhd_pub_t *dhdp)
-{
- int percent = 0;
- uint64 time_cumul_adjust = 0;
- uint64 now = QMON_SYSUPTIME();
- dhd_qmon_t* qmon = dhd_qmon_p2p_entry(dhdp);
- uint64 queued_time_cumul = 0;
- uint64 queued_time_last = 0;
-
- if (qmon == NULL)
- return 0;
-
- queued_time_cumul = qmon->queued_time_cumul;
- queued_time_last = qmon->queued_time_last;
-
- if (queued_time_last)
- time_cumul_adjust = now - queued_time_last;
-
- if ((now - qmon->queued_time_last_io) > 0) {
- percent = (uint32)((time_cumul_adjust + queued_time_cumul
- - qmon->queued_time_cumul_last) * 100) /
- (uint32)(now - qmon->queued_time_last_io);
- }
-
- qmon->queued_time_cumul_last = queued_time_cumul + time_cumul_adjust;
- qmon->queued_time_last_io = now;
-
- return percent;
-}
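For context on what is being removed: dhd_qmon tracked how long the transmit queue stayed above a depth threshold and reported that as a percentage of the elapsed polling interval. A minimal standalone sketch of that bookkeeping (names are illustrative, not the driver's API):

```c
#include <stdint.h>

/* Accumulate time spent with queue depth above a threshold, then report it
 * as a percentage of an observation window. */
typedef struct {
    uint64_t busy_cumul;  /* total time spent above threshold */
    uint64_t busy_since;  /* timestamp when depth last exceeded threshold, 0 if below */
    uint32_t depth;       /* current number of in-transit packets */
    uint32_t thres;       /* configurable depth threshold */
} qmon_sketch_t;

void qmon_tx(qmon_sketch_t *q, uint64_t now)
{
    /* Start the busy timer when depth first crosses the threshold. */
    if (++q->depth > q->thres && q->busy_since == 0)
        q->busy_since = now;
}

void qmon_txcomplete(qmon_sketch_t *q, uint64_t now)
{
    /* Stop the busy timer and bank the elapsed busy time. */
    if (--q->depth <= q->thres && q->busy_since != 0) {
        q->busy_cumul += now - q->busy_since;
        q->busy_since = 0;
    }
}

/* Percentage of [t0, now] spent above the threshold. */
unsigned qmon_percent(const qmon_sketch_t *q, uint64_t t0, uint64_t now)
{
    uint64_t busy = q->busy_cumul + (q->busy_since ? now - q->busy_since : 0);
    return now > t0 ? (unsigned)(busy * 100 / (now - t0)) : 0;
}
```

The real code additionally remembered the cumulative value at each IOVAR read so successive "qtime_percent" queries measured only the interval since the previous read.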
diff --git a/drivers/net/wireless/bcmdhd/dhd_qmon.h b/drivers/net/wireless/bcmdhd/dhd_qmon.h
deleted file mode 100755
index 27f6df4..0000000
--- a/drivers/net/wireless/bcmdhd/dhd_qmon.h
+++ /dev/null
@@ -1,57 +0,0 @@
-/*
- * Queue monitoring.
- *
- * The feature allows monitoring the DHD queue utilization to get the percentage of a time period
- * where the number of pending packets is above a configurable theshold.
- * Right now, this is used by a server application, interfacing a Miracast Video Encoder, and
- * doing IOVAR "qtime_percent" at regular interval. Based on IOVAR "qtime_percent" results,
- * the server indicates to the Video Encoder if its bitrate can be increased or must be decreased.
- * Currently, this works only with P2P interfaces and with PROP_TXSTATUS. There is no need to handle
- * concurrent access to the fieds because the existing concurrent accesses are protected
- * by the PROP_TXSTATUS's lock.
- *
- * Copyright (C) 1999-2014, Broadcom Corporation
- *
- * Unless you and Broadcom execute a separate written software license
- * agreement governing use of this software, this software is licensed to you
- * under the terms of the GNU General Public License version 2 (the "GPL"),
- * available at http://www.broadcom.com/licenses/GPLv2.php, with the
- * following added to such license:
- *
- * As a special exception, the copyright holders of this software give you
- * permission to link this software with independent modules, and to copy and
- * distribute the resulting executable under terms of your choice, provided that
- * you also meet, for each linked independent module, the terms and conditions of
- * the license of that module. An independent module is a module which is not
- * derived from this software. The special exception does not apply to any
- * modifications of the software.
- *
- * Notwithstanding the above, under no circumstances may you combine this
- * software in any way with any other Broadcom software provided under a license
- * other than the GPL, without Broadcom's express prior written consent.
- *
- * $Id: dhd_qmon.h 309265 2012-01-19 02:50:46Z $
- *
- */
-#ifndef _dhd_qmon_h_
-#define _dhd_qmon_h_
-
-
-typedef struct dhd_qmon_s {
- uint32 transitq_count;
- uint32 queued_time_thres;
- uint64 queued_time_cumul;
- uint64 queued_time_cumul_last;
- uint64 queued_time_last;
- uint64 queued_time_last_io;
-} dhd_qmon_t;
-
-
-extern void dhd_qmon_reset(dhd_qmon_t* entry);
-extern void dhd_qmon_tx(dhd_qmon_t* entry);
-extern void dhd_qmon_txcomplete(dhd_qmon_t* entry);
-extern int dhd_qmon_getpercent(dhd_pub_t *dhdp);
-extern int dhd_qmon_thres(dhd_pub_t *dhdp, int set, int setval);
-
-
-#endif /* _dhd_qmon_h_ */
diff --git a/drivers/net/wireless/bcmdhd/dhd_rtt.c b/drivers/net/wireless/bcmdhd/dhd_rtt.c
new file mode 100644
index 0000000..0a1f30d
--- /dev/null
+++ b/drivers/net/wireless/bcmdhd/dhd_rtt.c
@@ -0,0 +1,2058 @@
+/*
+ * Broadcom Dongle Host Driver (DHD), RTT
+ * Copyright (C) 1999-2014, Broadcom Corporation
+ *
+ * Unless you and Broadcom execute a separate written software license
+ * agreement governing use of this software, this software is licensed to you
+ * under the terms of the GNU General Public License version 2 (the "GPL"),
+ * available at http://www.broadcom.com/licenses/GPLv2.php, with the
+ * following added to such license:
+ *
+ * As a special exception, the copyright holders of this software give you
+ * permission to link this software with independent modules, and to copy and
+ * distribute the resulting executable under terms of your choice, provided that
+ * you also meet, for each linked independent module, the terms and conditions of
+ * the license of that module. An independent module is a module which is not
+ * derived from this software. The special exception does not apply to any
+ * modifications of the software.
+ *
+ * Notwithstanding the above, under no circumstances may you combine this
+ * software in any way with any other Broadcom software provided under a license
+ * other than the GPL, without Broadcom's express prior written consent.
+ *
+ * $Id: dhd_rtt.c 423669 2014-07-01 13:01:55Z $
+ */
+#include <typedefs.h>
+#include <osl.h>
+
+#include <epivers.h>
+#include <bcmutils.h>
+
+#include <bcmendian.h>
+#include <linuxver.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/sort.h>
+#include <dngl_stats.h>
+#include <wlioctl.h>
+
+#include <proto/bcmevent.h>
+#include <dhd.h>
+#include <dhd_rtt.h>
+#include <dhd_dbg.h>
+#define GET_RTTSTATE(dhd) ((rtt_status_info_t *)dhd->rtt_state)
+static DEFINE_SPINLOCK(noti_list_lock);
+#define NULL_CHECK(p, s, err) \
+ do { \
+ if (!(p)) { \
+ printf("NULL POINTER (%s) : %s\n", __FUNCTION__, (s)); \
+ err = BCME_ERROR; \
+ return err; \
+ } \
+ } while (0)
+
+#define RTT_IS_ENABLED(rtt_status) (rtt_status->status == RTT_ENABLED)
+#define RTT_IS_STOPPED(rtt_status) (rtt_status->status == RTT_STOPPED)
+#define TIMESPEC_TO_US(ts) (((uint64)(ts).tv_sec * USEC_PER_SEC) + \
+ (ts).tv_nsec / NSEC_PER_USEC)
+
+#define FTM_IOC_BUFSZ  2048 /* ioctl buffer size for this module (> BCM_XTLV_HDR_SIZE) */
+#define FTM_AVAIL_MAX_SLOTS 32
+#define FTM_MAX_CONFIGS 10
+#define FTM_MAX_PARAMS 10
+#define FTM_DEFAULT_SESSION 1
+#define FTM_BURST_TIMEOUT_UNIT 250 /* 250 ns */
+#define FTM_INVALID -1
+#define FTM_DEFAULT_CNT_20M 12
+#define FTM_DEFAULT_CNT_40M 10
+#define FTM_DEFAULT_CNT_80M 5
+
+/* convenience macros */
+#define FTM_TU2MICRO(_tu) ((uint64)(_tu) << 10)
+#define FTM_MICRO2TU(_tu) ((uint64)(_tu) >> 10)
+#define FTM_TU2MILLI(_tu) ((uint32)FTM_TU2MICRO(_tu) / 1000)
+#define FTM_MICRO2MILLI(_x) ((uint32)(_x) / 1000)
+#define FTM_MICRO2SEC(_x) ((uint32)(_x) / 1000000)
+#define FTM_INTVL2NSEC(_intvl) ((uint32)ftm_intvl2nsec(_intvl))
+#define FTM_INTVL2USEC(_intvl) ((uint32)ftm_intvl2usec(_intvl))
+#define FTM_INTVL2MSEC(_intvl) (FTM_INTVL2USEC(_intvl) / 1000)
+#define FTM_INTVL2SEC(_intvl) (FTM_INTVL2USEC(_intvl) / 1000000)
+#define FTM_USECIN100MILLI(_usec) ((_usec) / 100000)
+
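+
The interval macros above rely on the 802.11 time unit (1 TU = 1024 µs), so TU↔microsecond conversion is a plain 10-bit shift. A standalone sketch of the same arithmetic (function names here are illustrative, not part of the driver):

```c
#include <stdint.h>

/* Mirror of the FTM helpers: 1 TU = 1024 microseconds, so the conversions
 * are shifts by 10 bits; millisecond conversion goes via microseconds. */
#define FTM_TU2MICRO(_tu) ((uint64_t)(_tu) << 10)
#define FTM_MICRO2TU(_us) ((uint64_t)(_us) >> 10)
#define FTM_TU2MILLI(_tu) ((uint32_t)(FTM_TU2MICRO(_tu) / 1000))

uint64_t tu_to_us(uint64_t tu) { return FTM_TU2MICRO(tu); }
uint64_t us_to_tu(uint64_t us) { return FTM_MICRO2TU(us); }
uint32_t tu_to_ms(uint64_t tu) { return FTM_TU2MILLI(tu); }
```

Note the millisecond form truncates: 100 TU is 102400 µs, reported as 102 ms.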
+/* Broadcom-specific setting for more accurate data */
+#define ENABLE_VHT_ACK
+
+struct rtt_noti_callback {
+ struct list_head list;
+ void *ctx;
+ dhd_rtt_compl_noti_fn noti_fn;
+};
+
+typedef struct rtt_status_info {
+ dhd_pub_t *dhd;
+ int8 status; /* current status for the current entry */
+ int8 txchain; /* current device tx chain */
+ int8 mpc; /* indicate we change mpc mode */
+ int8 cur_idx; /* current entry to do RTT */
+ bool all_cancel; /* cancel all requests once we get a cancel request */
+ struct capability {
+ int32 proto :8;
+ int32 feature :8;
+ int32 preamble :8;
+ int32 bw :8;
+ } rtt_capa; /* rtt capability */
+ struct mutex rtt_mutex;
+ rtt_config_params_t rtt_config;
+ struct work_struct work;
+ struct list_head noti_fn_list;
+ struct list_head rtt_results_cache; /* store results for RTT */
+} rtt_status_info_t;
+
+/* bitmask indicating which command groups a subcommand belongs to */
+typedef enum {
+ FTM_SUBCMD_FLAG_METHOD = 0x01, /* FTM method command */
+ FTM_SUBCMD_FLAG_SESSION = 0x02, /* FTM session command */
+ FTM_SUBCMD_FLAG_ALL = FTM_SUBCMD_FLAG_METHOD | FTM_SUBCMD_FLAG_SESSION
+} ftm_subcmd_flag_t;
+
+/* proxd ftm config-category definition */
+typedef enum {
+ FTM_CONFIG_CAT_GENERAL = 1, /* general configuration */
+ FTM_CONFIG_CAT_OPTIONS = 2, /* 'config options' */
+ FTM_CONFIG_CAT_AVAIL = 3, /* 'config avail' */
+} ftm_config_category_t;
+
+
+typedef struct ftm_subcmd_info {
+ int16 version; /* FTM version (optional) */
+ char *name; /* cmd-name string as cmdline input */
+ wl_proxd_cmd_t cmdid; /* cmd-id */
+ bcm_xtlv_unpack_cbfn_t *handler; /* cmd response handler (optional) */
+ ftm_subcmd_flag_t cmdflag; /* CMD flag (optional) */
+} ftm_subcmd_info_t;
+
+
+typedef struct ftm_config_options_info {
+ uint32 flags; /* wl_proxd_flags_t/wl_proxd_session_flags_t */
+ bool enable;
+} ftm_config_options_info_t;
+
+typedef struct ftm_config_param_info {
+ uint16 tlvid; /* mapping TLV id for the item */
+ union {
+ uint32 chanspec;
+ struct ether_addr mac_addr;
+ wl_proxd_intvl_t data_intvl;
+ uint32 data32;
+ uint16 data16;
+ uint8 data8;
+ };
+} ftm_config_param_info_t;
+
+/*
+* definition for id-string mapping.
+* This is used to map an id (can be cmd-id, tlv-id, ....) to a text-string
+* for debug-display or cmd-log-display
+*/
+typedef struct ftm_strmap_entry {
+ int32 id;
+ char *text;
+} ftm_strmap_entry_t;
+
+
+typedef struct ftm_status_map_host_entry {
+ wl_proxd_status_t proxd_status;
+ rtt_reason_t rtt_reason;
+} ftm_status_map_host_entry_t;
+
+static int
+dhd_rtt_convert_results_to_host(rtt_report_t *rtt_report, uint8 *p_data, uint16 tlvid, uint16 len);
+
+static wifi_rate_t
+dhd_rtt_convert_rate_to_host(uint32 ratespec);
+
+static int
+dhd_rtt_start(dhd_pub_t *dhd);
+static const int burst_duration_idx[] = {0, 0, 1, 2, 4, 8, 16, 32, 64, 128, 0, 0};
+
+/* ftm status mapping to host status */
+static const ftm_status_map_host_entry_t ftm_status_map_info[] = {
+ {WL_PROXD_E_INCOMPLETE, RTT_REASON_FAILURE},
+ {WL_PROXD_E_OVERRIDDEN, RTT_REASON_FAILURE},
+ {WL_PROXD_E_ASAP_FAILED, RTT_REASON_FAILURE},
+ {WL_PROXD_E_NOTSTARTED, RTT_REASON_FAIL_NOT_SCHEDULED_YET},
+ {WL_PROXD_E_INVALIDAVB, RTT_REASON_FAIL_INVALID_TS},
+ {WL_PROXD_E_INCAPABLE, RTT_REASON_FAIL_NO_CAPABILITY},
+ {WL_PROXD_E_MISMATCH, RTT_REASON_FAILURE},
+ {WL_PROXD_E_DUP_SESSION, RTT_REASON_FAILURE},
+ {WL_PROXD_E_REMOTE_FAIL, RTT_REASON_FAILURE},
+ {WL_PROXD_E_REMOTE_INCAPABLE, RTT_REASON_FAILURE},
+ {WL_PROXD_E_SCHED_FAIL, RTT_REASON_FAIL_SCHEDULE},
+ {WL_PROXD_E_PROTO, RTT_REASON_FAIL_PROTOCOL},
+ {WL_PROXD_E_EXPIRED, RTT_REASON_FAILURE},
+ {WL_PROXD_E_TIMEOUT, RTT_REASON_FAIL_TM_TIMEOUT},
+ {WL_PROXD_E_NOACK, RTT_REASON_FAIL_NO_RSP},
+ {WL_PROXD_E_DEFERRED, RTT_REASON_FAILURE},
+ {WL_PROXD_E_INVALID_SID, RTT_REASON_FAILURE},
+ {WL_PROXD_E_REMOTE_CANCEL, RTT_REASON_FAILURE},
+ {WL_PROXD_E_CANCELED, RTT_REASON_ABORTED},
+ {WL_PROXD_E_INVALID_SESSION, RTT_REASON_FAILURE},
+ {WL_PROXD_E_BAD_STATE, RTT_REASON_FAILURE},
+ {WL_PROXD_E_ERROR, RTT_REASON_FAILURE},
+ {WL_PROXD_E_OK, RTT_REASON_SUCCESS}
+};
+
+/* ftm tlv-id mapping */
+static const ftm_strmap_entry_t ftm_tlvid_loginfo[] = {
+ /* { WL_PROXD_TLV_ID_xxx, "text for WL_PROXD_TLV_ID_xxx" }, */
+ { WL_PROXD_TLV_ID_NONE, "none" },
+ { WL_PROXD_TLV_ID_METHOD, "method" },
+ { WL_PROXD_TLV_ID_FLAGS, "flags" },
+ { WL_PROXD_TLV_ID_CHANSPEC, "chanspec" },
+ { WL_PROXD_TLV_ID_TX_POWER, "tx power" },
+ { WL_PROXD_TLV_ID_RATESPEC, "ratespec" },
+ { WL_PROXD_TLV_ID_BURST_DURATION, "burst duration" },
+ { WL_PROXD_TLV_ID_BURST_PERIOD, "burst period" },
+ { WL_PROXD_TLV_ID_BURST_FTM_SEP, "burst ftm sep" },
+ { WL_PROXD_TLV_ID_BURST_NUM_FTM, "burst num ftm" },
+ { WL_PROXD_TLV_ID_NUM_BURST, "num burst" },
+ { WL_PROXD_TLV_ID_FTM_RETRIES, "ftm retries" },
+ { WL_PROXD_TLV_ID_BSS_INDEX, "BSS index" },
+ { WL_PROXD_TLV_ID_BSSID, "bssid" },
+ { WL_PROXD_TLV_ID_INIT_DELAY, "burst init delay" },
+ { WL_PROXD_TLV_ID_BURST_TIMEOUT, "burst timeout" },
+ { WL_PROXD_TLV_ID_EVENT_MASK, "event mask" },
+ { WL_PROXD_TLV_ID_FLAGS_MASK, "flags mask" },
+ { WL_PROXD_TLV_ID_PEER_MAC, "peer addr" },
+ { WL_PROXD_TLV_ID_FTM_REQ, "ftm req" },
+ { WL_PROXD_TLV_ID_LCI_REQ, "lci req" },
+ { WL_PROXD_TLV_ID_LCI, "lci" },
+ { WL_PROXD_TLV_ID_CIVIC_REQ, "civic req" },
+ { WL_PROXD_TLV_ID_CIVIC, "civic" },
+ { WL_PROXD_TLV_ID_AVAIL, "availability" },
+ { WL_PROXD_TLV_ID_SESSION_FLAGS, "session flags" },
+ { WL_PROXD_TLV_ID_SESSION_FLAGS_MASK, "session flags mask" },
+ { WL_PROXD_TLV_ID_RX_MAX_BURST, "rx max bursts" },
+ { WL_PROXD_TLV_ID_RANGING_INFO, "ranging info" },
+ { WL_PROXD_TLV_ID_RANGING_FLAGS, "ranging flags" },
+ { WL_PROXD_TLV_ID_RANGING_FLAGS_MASK, "ranging flags mask" },
+ /* output - 512 + x */
+ { WL_PROXD_TLV_ID_STATUS, "status" },
+ { WL_PROXD_TLV_ID_COUNTERS, "counters" },
+ { WL_PROXD_TLV_ID_INFO, "info" },
+ { WL_PROXD_TLV_ID_RTT_RESULT, "rtt result" },
+ { WL_PROXD_TLV_ID_AOA_RESULT, "aoa result" },
+ { WL_PROXD_TLV_ID_SESSION_INFO, "session info" },
+ { WL_PROXD_TLV_ID_SESSION_STATUS, "session status" },
+ { WL_PROXD_TLV_ID_SESSION_ID_LIST, "session ids" },
+ /* debug tlvs can be added starting 1024 */
+ { WL_PROXD_TLV_ID_DEBUG_MASK, "debug mask" },
+ { WL_PROXD_TLV_ID_COLLECT, "collect" },
+ { WL_PROXD_TLV_ID_STRBUF, "result" }
+};
+
+static const ftm_strmap_entry_t ftm_event_type_loginfo[] = {
+ /* wl_proxd_event_type_t, text-string */
+ { WL_PROXD_EVENT_NONE, "none" },
+ { WL_PROXD_EVENT_SESSION_CREATE, "session create" },
+ { WL_PROXD_EVENT_SESSION_START, "session start" },
+ { WL_PROXD_EVENT_FTM_REQ, "FTM req" },
+ { WL_PROXD_EVENT_BURST_START, "burst start" },
+ { WL_PROXD_EVENT_BURST_END, "burst end" },
+ { WL_PROXD_EVENT_SESSION_END, "session end" },
+ { WL_PROXD_EVENT_SESSION_RESTART, "session restart" },
+ { WL_PROXD_EVENT_BURST_RESCHED, "burst rescheduled" },
+ { WL_PROXD_EVENT_SESSION_DESTROY, "session destroy" },
+ { WL_PROXD_EVENT_RANGE_REQ, "range request" },
+ { WL_PROXD_EVENT_FTM_FRAME, "FTM frame" },
+ { WL_PROXD_EVENT_DELAY, "delay" },
+ { WL_PROXD_EVENT_VS_INITIATOR_RPT, "initiator-report " }, /* rx initiator-rpt */
+ { WL_PROXD_EVENT_RANGING, "ranging " },
+};
+
+/*
+* session-state --> text string mapping
+*/
+static const ftm_strmap_entry_t ftm_session_state_value_loginfo[] = {
+ /* wl_proxd_session_state_t, text string */
+ { WL_PROXD_SESSION_STATE_CREATED, "created" },
+ { WL_PROXD_SESSION_STATE_CONFIGURED, "configured" },
+ { WL_PROXD_SESSION_STATE_STARTED, "started" },
+ { WL_PROXD_SESSION_STATE_DELAY, "delay" },
+ { WL_PROXD_SESSION_STATE_USER_WAIT, "user-wait" },
+ { WL_PROXD_SESSION_STATE_SCHED_WAIT, "sched-wait" },
+ { WL_PROXD_SESSION_STATE_BURST, "burst" },
+ { WL_PROXD_SESSION_STATE_STOPPING, "stopping" },
+ { WL_PROXD_SESSION_STATE_ENDED, "ended" },
+ { WL_PROXD_SESSION_STATE_DESTROYING, "destroying" },
+ { WL_PROXD_SESSION_STATE_NONE, "none" }
+};
+
+/*
+* ranging-state --> text string mapping
+*/
+static const ftm_strmap_entry_t ftm_ranging_state_value_loginfo [] = {
+ /* wl_proxd_ranging_state_t, text string */
+ { WL_PROXD_RANGING_STATE_NONE, "none" },
+ { WL_PROXD_RANGING_STATE_NOTSTARTED, "notstarted" },
+ { WL_PROXD_RANGING_STATE_INPROGRESS, "inprogress" },
+ { WL_PROXD_RANGING_STATE_DONE, "done" },
+};
+
+/*
+* status --> text string mapping
+*/
+static const ftm_strmap_entry_t ftm_status_value_loginfo[] = {
+ /* wl_proxd_status_t, text-string */
+ { WL_PROXD_E_OVERRIDDEN, "overridden" },
+ { WL_PROXD_E_ASAP_FAILED, "ASAP failed" },
+ { WL_PROXD_E_NOTSTARTED, "not started" },
+ { WL_PROXD_E_INVALIDAVB, "invalid AVB" },
+ { WL_PROXD_E_INCAPABLE, "incapable" },
+ { WL_PROXD_E_MISMATCH, "mismatch"},
+ { WL_PROXD_E_DUP_SESSION, "dup session" },
+ { WL_PROXD_E_REMOTE_FAIL, "remote fail" },
+ { WL_PROXD_E_REMOTE_INCAPABLE, "remote incapable" },
+ { WL_PROXD_E_SCHED_FAIL, "sched failure" },
+ { WL_PROXD_E_PROTO, "protocol error" },
+ { WL_PROXD_E_EXPIRED, "expired" },
+ { WL_PROXD_E_TIMEOUT, "timeout" },
+ { WL_PROXD_E_NOACK, "no ack" },
+ { WL_PROXD_E_DEFERRED, "deferred" },
+ { WL_PROXD_E_INVALID_SID, "invalid session id" },
+ { WL_PROXD_E_REMOTE_CANCEL, "remote cancel" },
+ { WL_PROXD_E_CANCELED, "canceled" },
+ { WL_PROXD_E_INVALID_SESSION, "invalid session" },
+ { WL_PROXD_E_BAD_STATE, "bad state" },
+ { WL_PROXD_E_ERROR, "error" },
+ { WL_PROXD_E_OK, "OK" }
+};
+
+/*
+* time interval unit --> text string mapping
+*/
+static const ftm_strmap_entry_t ftm_tmu_value_loginfo[] = {
+ /* wl_proxd_tmu_t, text-string */
+ { WL_PROXD_TMU_TU, "TU" },
+ { WL_PROXD_TMU_SEC, "sec" },
+ { WL_PROXD_TMU_MILLI_SEC, "ms" },
+ { WL_PROXD_TMU_MICRO_SEC, "us" },
+ { WL_PROXD_TMU_NANO_SEC, "ns" },
+ { WL_PROXD_TMU_PICO_SEC, "ps" }
+};
+
+#define RSPEC_BW(rspec) ((rspec) & WL_RSPEC_BW_MASK)
+#define RSPEC_IS20MHZ(rspec) (RSPEC_BW(rspec) == WL_RSPEC_BW_20MHZ)
+#define RSPEC_IS40MHZ(rspec) (RSPEC_BW(rspec) == WL_RSPEC_BW_40MHZ)
+#define RSPEC_IS80MHZ(rspec) (RSPEC_BW(rspec) == WL_RSPEC_BW_80MHZ)
+#define RSPEC_IS160MHZ(rspec) (RSPEC_BW(rspec) == WL_RSPEC_BW_160MHZ)
+
+#define IS_MCS(rspec) (((rspec) & WL_RSPEC_ENCODING_MASK) != WL_RSPEC_ENCODE_RATE)
+#define IS_STBC(rspec) (((((rspec) & WL_RSPEC_ENCODING_MASK) == WL_RSPEC_ENCODE_HT) || \
+ (((rspec) & WL_RSPEC_ENCODING_MASK) == WL_RSPEC_ENCODE_VHT)) && \
+ (((rspec) & WL_RSPEC_STBC) == WL_RSPEC_STBC))
+#define RSPEC_ISSGI(rspec) (((rspec) & WL_RSPEC_SGI) != 0)
+#define RSPEC_ISLDPC(rspec) (((rspec) & WL_RSPEC_LDPC) != 0)
+#define RSPEC_ISSTBC(rspec) (((rspec) & WL_RSPEC_STBC) != 0)
+#define RSPEC_ISTXBF(rspec) (((rspec) & WL_RSPEC_TXBF) != 0)
+#define RSPEC_ISVHT(rspec) (((rspec) & WL_RSPEC_ENCODING_MASK) == WL_RSPEC_ENCODE_VHT)
+#define RSPEC_ISHT(rspec) (((rspec) & WL_RSPEC_ENCODING_MASK) == WL_RSPEC_ENCODE_HT)
+#define RSPEC_ISLEGACY(rspec) (((rspec) & WL_RSPEC_ENCODING_MASK) == WL_RSPEC_ENCODE_RATE)
+#define RSPEC2RATE(rspec) (RSPEC_ISLEGACY(rspec) ? \
+ ((rspec) & RSPEC_RATE_MASK) : rate_rspec2rate(rspec))
+/* return rate in unit of 500Kbps -- for internal use in wlc_rate_sel.c */
+#define RSPEC2KBPS(rspec) rate_rspec2rate(rspec)
+
+struct ieee_80211_mcs_rate_info {
+ uint8 constellation_bits;
+ uint8 coding_q;
+ uint8 coding_d;
+};
+
+static const struct ieee_80211_mcs_rate_info wl_mcs_info[] = {
+ { 1, 1, 2 }, /* MCS 0: MOD: BPSK, CR 1/2 */
+ { 2, 1, 2 }, /* MCS 1: MOD: QPSK, CR 1/2 */
+ { 2, 3, 4 }, /* MCS 2: MOD: QPSK, CR 3/4 */
+ { 4, 1, 2 }, /* MCS 3: MOD: 16QAM, CR 1/2 */
+ { 4, 3, 4 }, /* MCS 4: MOD: 16QAM, CR 3/4 */
+ { 6, 2, 3 }, /* MCS 5: MOD: 64QAM, CR 2/3 */
+ { 6, 3, 4 }, /* MCS 6: MOD: 64QAM, CR 3/4 */
+ { 6, 5, 6 }, /* MCS 7: MOD: 64QAM, CR 5/6 */
+ { 8, 3, 4 }, /* MCS 8: MOD: 256QAM, CR 3/4 */
+ { 8, 5, 6 } /* MCS 9: MOD: 256QAM, CR 5/6 */
+};
+
+/**
+ * Returns the rate in [Kbps] units for a caller supplied MCS/bandwidth/Nss/Sgi combination.
+ * 'mcs' : a *single* spatial stream MCS (11n or 11ac)
+ */
+uint
+rate_mcs2rate(uint mcs, uint nss, uint bw, int sgi)
+{
+ const int ksps = 250; /* kilo symbols per sec, 4 us sym */
+ const int Nsd_20MHz = 52;
+ const int Nsd_40MHz = 108;
+ const int Nsd_80MHz = 234;
+ const int Nsd_160MHz = 468;
+ uint rate;
+
+ if (mcs == 32) {
+ /* just return fixed values for mcs32 instead of trying to parametrize */
+ rate = (sgi == 0) ? 6000 : 6778;
+ } else if (mcs <= 9) {
+ /* This calculation works for 11n HT and 11ac VHT if the HT mcs values
+ * are decomposed into a base MCS = MCS % 8, and Nss = 1 + MCS / 8.
+ * That is, HT MCS 23 is a base MCS = 7, Nss = 3
+ */
+
+ /* find the number of complex numbers per symbol */
+ if (RSPEC_IS20MHZ(bw)) {
+ /* XXX 4360 TODO: eliminate Phy const in rspec bw, then just compare
+ * as in 80 and 160 case below instead of RSPEC_IS20MHZ(bw)
+ */
+ rate = Nsd_20MHz;
+ } else if (RSPEC_IS40MHZ(bw)) {
+ /* XXX 4360 TODO: eliminate Phy const in rspec bw, then just compare
+ * as in 80 and 160 case below instead of RSPEC_IS40MHZ(bw)
+ */
+ rate = Nsd_40MHz;
+ } else if (bw == WL_RSPEC_BW_80MHZ) {
+ rate = Nsd_80MHz;
+ } else if (bw == WL_RSPEC_BW_160MHZ) {
+ rate = Nsd_160MHz;
+ } else {
+ rate = 0;
+ }
+
+ /* multiply by bits per number from the constellation in use */
+ rate = rate * wl_mcs_info[mcs].constellation_bits;
+
+ /* adjust for the number of spatial streams */
+ rate = rate * nss;
+
+ /* adjust for the coding rate given as a quotient and divisor */
+ rate = (rate * wl_mcs_info[mcs].coding_q) / wl_mcs_info[mcs].coding_d;
+
+ /* multiply by Kilo symbols per sec to get Kbps */
+ rate = rate * ksps;
+
+ /* adjust the symbols per sec for SGI
+ * symbol duration is 4 us without SGI, and 3.6 us with SGI,
+ * so ratio is 10 / 9
+ */
+ if (sgi) {
+ /* add 4 for rounding of division by 9 */
+ rate = ((rate * 10) + 4) / 9;
+ }
+ } else {
+ rate = 0;
+ }
+
+ return rate;
+} /* rate_mcs2rate */
+
+/** take a well formed ratespec_t arg and return phy rate in [Kbps] units */
+int
+rate_rspec2rate(uint32 rspec)
+{
+ int rate = -1;
+
+ if (RSPEC_ISLEGACY(rspec)) {
+ rate = 500 * (rspec & WL_RSPEC_RATE_MASK);
+ } else if (RSPEC_ISHT(rspec)) {
+ uint mcs = (rspec & WL_RSPEC_RATE_MASK);
+
+ if (mcs == 32) {
+ rate = rate_mcs2rate(mcs, 1, WL_RSPEC_BW_40MHZ, RSPEC_ISSGI(rspec));
+ } else {
+ uint nss = 1 + (mcs / 8);
+ mcs = mcs % 8;
+ rate = rate_mcs2rate(mcs, nss, RSPEC_BW(rspec), RSPEC_ISSGI(rspec));
+ }
+ } else if (RSPEC_ISVHT(rspec)) {
+ uint mcs = (rspec & WL_RSPEC_VHT_MCS_MASK);
+ uint nss = (rspec & WL_RSPEC_VHT_NSS_MASK) >> WL_RSPEC_VHT_NSS_SHIFT;
+
+ ASSERT(mcs <= 9);
+ ASSERT(nss <= 8);
+
+ rate = rate_mcs2rate(mcs, nss, RSPEC_BW(rspec), RSPEC_ISSGI(rspec));
+ } else {
+ ASSERT(0);
+ }
+
+ return (rate == 0) ? -1 : rate;
+}
+
+char resp_buf[WLC_IOCTL_SMLEN];
+
+static uint64
+ftm_intvl2nsec(const wl_proxd_intvl_t *intvl)
+{
+ uint64 ret;
+ ret = intvl->intvl;
+ switch (intvl->tmu) {
+ case WL_PROXD_TMU_TU: ret = FTM_TU2MICRO(ret) * 1000; break;
+ case WL_PROXD_TMU_SEC: ret *= 1000000000; break;
+ case WL_PROXD_TMU_MILLI_SEC: ret *= 1000000; break;
+ case WL_PROXD_TMU_MICRO_SEC: ret *= 1000; break;
+ case WL_PROXD_TMU_PICO_SEC: ret = intvl->intvl / 1000; break;
+ case WL_PROXD_TMU_NANO_SEC: /* fall through */
+ default: break;
+ }
+ return ret;
+}
+uint64
+ftm_intvl2usec(const wl_proxd_intvl_t *intvl)
+{
+ uint64 ret;
+ ret = intvl->intvl;
+ switch (intvl->tmu) {
+ case WL_PROXD_TMU_TU: ret = FTM_TU2MICRO(ret); break;
+ case WL_PROXD_TMU_SEC: ret *= 1000000; break;
+ case WL_PROXD_TMU_NANO_SEC: ret = intvl->intvl / 1000; break;
+ case WL_PROXD_TMU_PICO_SEC: ret = intvl->intvl / 1000000; break;
+ case WL_PROXD_TMU_MILLI_SEC: ret *= 1000; break;
+ case WL_PROXD_TMU_MICRO_SEC: /* fall through */
+ default: break;
+ }
+ return ret;
+}
+
+/*
+* look up 'id' (as a key) in a fw-status-to-host map table;
+* if found, return the corresponding reason code
+*/
+
+static rtt_reason_t
+ftm_get_statusmap_info(wl_proxd_status_t id, const ftm_status_map_host_entry_t *p_table,
+ uint32 num_entries)
+{
+ int i;
+ const ftm_status_map_host_entry_t *p_entry;
+ /* scan thru the table till end */
+ p_entry = p_table;
+ for (i = 0; i < (int) num_entries; i++)
+ {
+ if (p_entry->proxd_status == id) {
+ return p_entry->rtt_reason;
+ }
+ p_entry++; /* next entry */
+ }
+ return RTT_REASON_FAILURE; /* not found */
+}
+/*
+* look up 'id' (as a key) in a table;
+* if found, return the entry pointer, otherwise return NULL
+*/
+static const ftm_strmap_entry_t*
+ftm_get_strmap_info(int32 id, const ftm_strmap_entry_t *p_table, uint32 num_entries)
+{
+ int i;
+ const ftm_strmap_entry_t *p_entry;
+
+ /* scan thru the table till end */
+ p_entry = p_table;
+ for (i = 0; i < (int) num_entries; i++)
+ {
+ if (p_entry->id == id)
+ return p_entry;
+ p_entry++; /* next entry */
+ }
+ return NULL; /* not found */
+}
+
+/*
+* map an enum to a text-string for display; this function is called by:
+* For debug/trace:
+* ftm_[cmdid|tlvid]_to_str()
+* For TLV-output log for 'get' commands:
+* ftm_[method|tmu|caps|status|state]_value_to_logstr()
+* Input:
+* p_table -- points to an 'enum to string' table.
+*/
+static const char *
+ftm_map_id_to_str(int32 id, const ftm_strmap_entry_t *p_table, uint32 num_entries)
+{
+ const ftm_strmap_entry_t *p_entry = ftm_get_strmap_info(id, p_table, num_entries);
+ if (p_entry)
+ return (p_entry->text);
+
+ return "invalid";
+}
+
+
+#ifdef RTT_DEBUG
+
+/* define entry, e.g. { WL_PROXD_CMD_xxx, "WL_PROXD_CMD_xxx" } */
+#define DEF_STRMAP_ENTRY(id) { (id), #id }
+
+/* ftm cmd-id mapping */
+static const ftm_strmap_entry_t ftm_cmdid_map[] = {
+ /* {wl_proxd_cmd_t(WL_PROXD_CMD_xxx), "WL_PROXD_CMD_xxx" }, */
+ DEF_STRMAP_ENTRY(WL_PROXD_CMD_NONE),
+ DEF_STRMAP_ENTRY(WL_PROXD_CMD_GET_VERSION),
+ DEF_STRMAP_ENTRY(WL_PROXD_CMD_ENABLE),
+ DEF_STRMAP_ENTRY(WL_PROXD_CMD_DISABLE),
+ DEF_STRMAP_ENTRY(WL_PROXD_CMD_CONFIG),
+ DEF_STRMAP_ENTRY(WL_PROXD_CMD_START_SESSION),
+ DEF_STRMAP_ENTRY(WL_PROXD_CMD_BURST_REQUEST),
+ DEF_STRMAP_ENTRY(WL_PROXD_CMD_STOP_SESSION),
+ DEF_STRMAP_ENTRY(WL_PROXD_CMD_DELETE_SESSION),
+ DEF_STRMAP_ENTRY(WL_PROXD_CMD_GET_RESULT),
+ DEF_STRMAP_ENTRY(WL_PROXD_CMD_GET_INFO),
+ DEF_STRMAP_ENTRY(WL_PROXD_CMD_GET_STATUS),
+ DEF_STRMAP_ENTRY(WL_PROXD_CMD_GET_SESSIONS),
+ DEF_STRMAP_ENTRY(WL_PROXD_CMD_GET_COUNTERS),
+ DEF_STRMAP_ENTRY(WL_PROXD_CMD_CLEAR_COUNTERS),
+ DEF_STRMAP_ENTRY(WL_PROXD_CMD_COLLECT),
+ DEF_STRMAP_ENTRY(WL_PROXD_CMD_TUNE),
+ DEF_STRMAP_ENTRY(WL_PROXD_CMD_DUMP),
+ DEF_STRMAP_ENTRY(WL_PROXD_CMD_START_RANGING),
+ DEF_STRMAP_ENTRY(WL_PROXD_CMD_STOP_RANGING),
+ DEF_STRMAP_ENTRY(WL_PROXD_CMD_GET_RANGING_INFO),
+};
+
+/*
+* map a ftm cmd-id to a text-string for display
+*/
+static const char *
+ftm_cmdid_to_str(uint16 cmdid)
+{
+ return ftm_map_id_to_str((int32) cmdid, &ftm_cmdid_map[0], ARRAYSIZE(ftm_cmdid_map));
+}
+#endif /* RTT_DEBUG */
+
+
+/*
+* convert BCME_xxx error codes into related error strings
+* note: bcmerrorstr() defined in bcmutils is for BCMDRIVER only;
+* this duplicate copy is for WL access and may need to be cleaned up later
+*/
+static const char *ftm_bcmerrorstrtable[] = BCMERRSTRINGTABLE;
+static const char *
+ftm_status_value_to_logstr(wl_proxd_status_t status)
+{
+ static char ftm_msgbuf_status_undef[32];
+ const ftm_strmap_entry_t *p_loginfo;
+ int bcmerror;
+
+ /* check if within BCME_xxx error range */
+ bcmerror = (int) status;
+ if (VALID_BCMERROR(bcmerror))
+ return ftm_bcmerrorstrtable[-bcmerror];
+
+ /* otherwise, look for 'proxd ftm status' range */
+ p_loginfo = ftm_get_strmap_info((int32) status,
+ &ftm_status_value_loginfo[0], ARRAYSIZE(ftm_status_value_loginfo));
+ if (p_loginfo)
+ return p_loginfo->text;
+
+ /* report for 'out of range' FTM-status error code */
+ memset(ftm_msgbuf_status_undef, 0, sizeof(ftm_msgbuf_status_undef));
+ snprintf(ftm_msgbuf_status_undef, sizeof(ftm_msgbuf_status_undef),
+ "Undefined status %d", status);
+ return &ftm_msgbuf_status_undef[0];
+}
+
+static const char *
+ftm_tmu_value_to_logstr(wl_proxd_tmu_t tmu)
+{
+ return ftm_map_id_to_str((int32)tmu,
+ &ftm_tmu_value_loginfo[0], ARRAYSIZE(ftm_tmu_value_loginfo));
+}
+
+static const ftm_strmap_entry_t*
+ftm_get_event_type_loginfo(wl_proxd_event_type_t event_type)
+{
+ /* look up 'event-type' from a predefined table */
+ return ftm_get_strmap_info((int32) event_type,
+ ftm_event_type_loginfo, ARRAYSIZE(ftm_event_type_loginfo));
+}
+
+static const char *
+ftm_session_state_value_to_logstr(wl_proxd_session_state_t state)
+{
+ return ftm_map_id_to_str((int32)state, &ftm_session_state_value_loginfo[0],
+ ARRAYSIZE(ftm_session_state_value_loginfo));
+}
+
+
+/*
+* send 'proxd' iovar for all ftm get-related commands
+*/
+static int
+rtt_do_get_ioctl(dhd_pub_t *dhd, wl_proxd_iov_t *p_proxd_iov, uint16 proxd_iovsize,
+ ftm_subcmd_info_t *p_subcmd_info)
+{
+
+ wl_proxd_iov_t *p_iovresp = (wl_proxd_iov_t *)resp_buf;
+ int status;
+ int tlvs_len;
+ /* send getbuf proxd iovar */
+ status = dhd_getiovar(dhd, 0, "proxd", (char *)p_proxd_iov,
+ proxd_iovsize, (char **)&p_iovresp, WLC_IOCTL_SMLEN);
+ if (status != BCME_OK) {
+ DHD_ERROR(("%s: failed to send getbuf proxd iovar (CMD ID : %d), status=%d\n",
+ __FUNCTION__, p_subcmd_info->cmdid, status));
+ return status;
+ }
+ if (p_subcmd_info->cmdid == WL_PROXD_CMD_GET_VERSION) {
+ p_subcmd_info->version = ltoh16(p_iovresp->version);
+ DHD_RTT(("ftm version: 0x%x\n", ltoh16(p_iovresp->version)));
+ goto exit;
+ }
+
+ tlvs_len = ltoh16(p_iovresp->len) - WL_PROXD_IOV_HDR_SIZE;
+ if (tlvs_len < 0) {
+ DHD_ERROR(("%s: alert, p_iovresp->len(%d) should not be smaller than %d\n",
+ __FUNCTION__, ltoh16(p_iovresp->len), (int) WL_PROXD_IOV_HDR_SIZE));
+ tlvs_len = 0;
+ }
+
+ if (tlvs_len > 0 && p_subcmd_info->handler) {
+ /* unpack TLVs and invoke the cbfn for processing */
+ status = bcm_unpack_xtlv_buf(p_proxd_iov, (uint8 *)p_iovresp->tlvs,
+ tlvs_len, BCM_XTLV_OPTION_ALIGN32, p_subcmd_info->handler);
+ }
+exit:
+ return status;
+}
+
+
+static wl_proxd_iov_t *
+rtt_alloc_getset_buf(wl_proxd_method_t method, wl_proxd_session_id_t session_id,
+ wl_proxd_cmd_t cmdid, uint16 tlvs_bufsize, uint16 *p_out_bufsize)
+{
+ uint16 proxd_iovsize;
+ uint16 kflags;
+ wl_proxd_tlv_t *p_tlv;
+ wl_proxd_iov_t *p_proxd_iov = (wl_proxd_iov_t *) NULL;
+
+ *p_out_bufsize = 0; /* init */
+ kflags = in_atomic() ? GFP_ATOMIC : GFP_KERNEL;
+ /* calculate the whole buffer size, including one reserve-tlv entry in the header */
+ proxd_iovsize = sizeof(wl_proxd_iov_t) + tlvs_bufsize;
+
+ p_proxd_iov = kzalloc(proxd_iovsize, kflags);
+ if (p_proxd_iov == NULL) {
+ DHD_ERROR(("error: failed to allocate %d bytes of memory\n", proxd_iovsize));
+ return NULL;
+ }
+
+ /* setup proxd-FTM-method iovar header */
+ p_proxd_iov->version = htol16(WL_PROXD_API_VERSION);
+ p_proxd_iov->len = htol16(proxd_iovsize); /* caller may adjust it based on #of TLVs */
+ p_proxd_iov->cmd = htol16(cmdid);
+ p_proxd_iov->method = htol16(method);
+ p_proxd_iov->sid = htol16(session_id);
+
+ /* initialize the reserved/dummy-TLV in iovar header */
+ p_tlv = p_proxd_iov->tlvs;
+ p_tlv->id = htol16(WL_PROXD_TLV_ID_NONE);
+ p_tlv->len = htol16(0);
+
+ *p_out_bufsize = proxd_iovsize; /* for caller's reference */
+
+ return p_proxd_iov;
+}
+
+
+static int
+dhd_rtt_common_get_handler(dhd_pub_t *dhd, ftm_subcmd_info_t *p_subcmd_info,
+ wl_proxd_method_t method,
+ wl_proxd_session_id_t session_id)
+{
+ int status = BCME_OK;
+ uint16 proxd_iovsize = 0;
+ wl_proxd_iov_t *p_proxd_iov;
+#ifdef RTT_DEBUG
+ DHD_RTT(("enter %s: method=%d, session_id=%d, cmdid=%d(%s)\n",
+ __FUNCTION__, method, session_id, p_subcmd_info->cmdid,
+ ftm_cmdid_to_str(p_subcmd_info->cmdid)));
+#endif
+ /* alloc mem for ioctl header + reserved 0 bufsize for tlvs (initialized to zero) */
+ p_proxd_iov = rtt_alloc_getset_buf(method, session_id, p_subcmd_info->cmdid,
+ 0, &proxd_iovsize);
+
+ if (p_proxd_iov == NULL)
+ return BCME_NOMEM;
+
+ status = rtt_do_get_ioctl(dhd, p_proxd_iov, proxd_iovsize, p_subcmd_info);
+
+ if (status != BCME_OK) {
+ DHD_RTT(("%s failed: status=%d\n", __FUNCTION__, status));
+ }
+ kfree(p_proxd_iov);
+ return status;
+}
+
+/*
+* common handler for set-related proxd method commands which require no TLV as input
+* wl proxd ftm [session-id] <set-subcmd>
+* e.g.
+* wl proxd ftm enable -- to enable ftm
+* wl proxd ftm disable -- to disable ftm
+* wl proxd ftm <session-id> start -- to start a specified session
+* wl proxd ftm <session-id> stop -- to cancel a specified session;
+* state is maintained till the session is deleted.
+* wl proxd ftm <session-id> delete -- to delete a specified session
+* wl proxd ftm [<session-id>] clear-counters -- to clear counters
+* wl proxd ftm <session-id> burst-request -- on initiator: to send burst request;
+* on target: send FTM frame
+* wl proxd ftm <session-id> collect
+* wl proxd ftm tune (TBD)
+*/
+static int
+dhd_rtt_common_set_handler(dhd_pub_t *dhd, const ftm_subcmd_info_t *p_subcmd_info,
+ wl_proxd_method_t method, wl_proxd_session_id_t session_id)
+{
+ uint16 proxd_iovsize;
+ wl_proxd_iov_t *p_proxd_iov;
+ int ret;
+
+#ifdef RTT_DEBUG
+ DHD_RTT(("enter %s: method=%d, session_id=%d, cmdid=%d(%s)\n",
+ __FUNCTION__, method, session_id, p_subcmd_info->cmdid,
+ ftm_cmdid_to_str(p_subcmd_info->cmdid)));
+#endif
+
+ /* allocate and initialize a temp buffer for 'set proxd' iovar */
+ proxd_iovsize = 0;
+ p_proxd_iov = rtt_alloc_getset_buf(method, session_id, p_subcmd_info->cmdid,
+ 0, &proxd_iovsize); /* no TLV */
+ if (p_proxd_iov == NULL)
+ return BCME_NOMEM;
+
+ /* no TLV to pack, simply issue a set-proxd iovar */
+ ret = dhd_iovar(dhd, 0, "proxd", (void *) p_proxd_iov, proxd_iovsize, 1);
+#ifdef RTT_DEBUG
+ if (ret != BCME_OK) {
+ DHD_RTT(("error: IOVAR failed, status=%d\n", ret));
+ }
+#endif
+ /* clean up */
+ kfree(p_proxd_iov);
+
+ return ret;
+}
+
+static int
+rtt_unpack_xtlv_cbfn(void *ctx, uint8 *p_data, uint16 tlvid, uint16 len)
+{
+ int ret = BCME_OK;
+ wl_proxd_ftm_session_status_t *p_data_info;
+ switch (tlvid) {
+ case WL_PROXD_TLV_ID_RTT_RESULT:
+ ret = dhd_rtt_convert_results_to_host((rtt_report_t *)ctx,
+ p_data, tlvid, len);
+ break;
+ case WL_PROXD_TLV_ID_SESSION_STATUS:
+ memcpy(ctx, p_data, sizeof(wl_proxd_ftm_session_status_t));
+ p_data_info = (wl_proxd_ftm_session_status_t *)ctx;
+ p_data_info->state = ltoh16_ua(&p_data_info->state);
+ p_data_info->status = ltoh32_ua(&p_data_info->status);
+ break;
+ default:
+ DHD_ERROR(("> Unsupported TLV ID %d\n", tlvid));
+ ret = BCME_ERROR;
+ break;
+ }
+
+ return ret;
+}
+static int
+rtt_handle_config_options(wl_proxd_session_id_t session_id, wl_proxd_tlv_t **p_tlv,
+ uint16 *p_buf_space_left, ftm_config_options_info_t *ftm_configs, int ftm_cfg_cnt)
+{
+ int ret = BCME_OK;
+ int cfg_idx = 0;
+ uint32 flags = WL_PROXD_FLAG_NONE;
+ uint32 flags_mask = WL_PROXD_FLAG_NONE;
+ uint32 new_mask; /* cmdline input */
+ ftm_config_options_info_t *p_option_info;
+ uint16 type = (session_id == WL_PROXD_SESSION_ID_GLOBAL) ?
+ WL_PROXD_TLV_ID_FLAGS_MASK : WL_PROXD_TLV_ID_SESSION_FLAGS_MASK;
+ for (cfg_idx = 0; cfg_idx < ftm_cfg_cnt; cfg_idx++) {
+ p_option_info = (ftm_configs + cfg_idx);
+ if (p_option_info != NULL) {
+ new_mask = p_option_info->flags;
+ /* update flags mask */
+ flags_mask |= new_mask;
+ if (p_option_info->enable) {
+ flags |= new_mask; /* set the bit on */
+ } else {
+ flags &= ~new_mask; /* set the bit off */
+ }
+ }
+ }
+ flags = htol32(flags);
+ flags_mask = htol32(flags_mask);
+ /* setup flags_mask TLV */
+ ret = bcm_pack_xtlv_entry((uint8 **)p_tlv, p_buf_space_left,
+ type, sizeof(uint32), &flags_mask, BCM_XTLV_OPTION_ALIGN32);
+ if (ret != BCME_OK) {
+ DHD_ERROR(("%s : bcm_pack_xtlv_entry() for mask flags failed, status=%d\n",
+ __FUNCTION__, ret));
+ goto exit;
+ }
+
+ type = (session_id == WL_PROXD_SESSION_ID_GLOBAL)?
+ WL_PROXD_TLV_ID_FLAGS : WL_PROXD_TLV_ID_SESSION_FLAGS;
+ /* setup flags TLV */
+ ret = bcm_pack_xtlv_entry((uint8 **)p_tlv, p_buf_space_left,
+ type, sizeof(uint32), &flags, BCM_XTLV_OPTION_ALIGN32);
+ if (ret != BCME_OK) {
+#ifdef RTT_DEBUG
+ DHD_RTT(("%s: bcm_pack_xtlv_entry() for flags failed, status=%d\n",
+ __FUNCTION__, ret));
+#endif
+ }
+exit:
+ return ret;
+}
+
+static int
+rtt_handle_config_general(wl_proxd_session_id_t session_id, wl_proxd_tlv_t **p_tlv,
+ uint16 *p_buf_space_left, ftm_config_param_info_t *ftm_configs, int ftm_cfg_cnt)
+{
+ int ret = BCME_OK;
+ int cfg_idx = 0;
+ uint32 chanspec;
+ ftm_config_param_info_t *p_config_param_info;
+ void *p_src_data;
+ uint16 src_data_size; /* size of data pointed by p_src_data as 'source' */
+ for (cfg_idx = 0; cfg_idx < ftm_cfg_cnt; cfg_idx++) {
+ p_config_param_info = (ftm_configs + cfg_idx);
+ if (p_config_param_info != NULL) {
+ switch (p_config_param_info->tlvid) {
+ case WL_PROXD_TLV_ID_BSS_INDEX:
+ case WL_PROXD_TLV_ID_FTM_RETRIES:
+ case WL_PROXD_TLV_ID_FTM_REQ_RETRIES:
+ p_src_data = &p_config_param_info->data8;
+ src_data_size = sizeof(uint8);
+ break;
+ case WL_PROXD_TLV_ID_BURST_NUM_FTM: /* uint16 */
+ case WL_PROXD_TLV_ID_NUM_BURST:
+ case WL_PROXD_TLV_ID_RX_MAX_BURST:
+ p_src_data = &p_config_param_info->data16;
+ src_data_size = sizeof(uint16);
+ break;
+ case WL_PROXD_TLV_ID_TX_POWER: /* uint32 */
+ case WL_PROXD_TLV_ID_RATESPEC:
+ case WL_PROXD_TLV_ID_EVENT_MASK: /* wl_proxd_event_mask_t/uint32 */
+ case WL_PROXD_TLV_ID_DEBUG_MASK:
+ p_src_data = &p_config_param_info->data32;
+ src_data_size = sizeof(uint32);
+ break;
+ case WL_PROXD_TLV_ID_CHANSPEC: /* chanspec_t --> 32bit */
+ chanspec = p_config_param_info->chanspec;
+ p_src_data = (void *) &chanspec;
+ src_data_size = sizeof(uint32);
+ break;
+ case WL_PROXD_TLV_ID_BSSID: /* mac address */
+ case WL_PROXD_TLV_ID_PEER_MAC:
+ p_src_data = &p_config_param_info->mac_addr;
+ src_data_size = sizeof(struct ether_addr);
+ break;
+ case WL_PROXD_TLV_ID_BURST_DURATION: /* wl_proxd_intvl_t */
+ case WL_PROXD_TLV_ID_BURST_PERIOD:
+ case WL_PROXD_TLV_ID_BURST_FTM_SEP:
+ case WL_PROXD_TLV_ID_BURST_TIMEOUT:
+ case WL_PROXD_TLV_ID_INIT_DELAY:
+ p_src_data = &p_config_param_info->data_intvl;
+ src_data_size = sizeof(wl_proxd_intvl_t);
+ break;
+ default:
+ ret = BCME_BADARG;
+ break;
+ }
+ if (ret != BCME_OK) {
+ DHD_ERROR(("%s bad TLV ID : %d\n",
+ __FUNCTION__, p_config_param_info->tlvid));
+ break;
+ }
+
+ ret = bcm_pack_xtlv_entry((uint8 **) p_tlv, p_buf_space_left,
+ p_config_param_info->tlvid, src_data_size, p_src_data,
+ BCM_XTLV_OPTION_ALIGN32);
+ if (ret != BCME_OK) {
+ DHD_ERROR(("%s: bcm_pack_xtlv_entry() failed,"
+ " status=%d\n", __FUNCTION__, ret));
+ break;
+ }
+
+ }
+ }
+ return ret;
+}
+
+static int
+dhd_rtt_get_version(dhd_pub_t *dhd, int *out_version)
+{
+ int ret;
+ ftm_subcmd_info_t subcmd_info;
+ subcmd_info.name = "ver";
+ subcmd_info.cmdid = WL_PROXD_CMD_GET_VERSION;
+ subcmd_info.handler = NULL;
+ ret = dhd_rtt_common_get_handler(dhd, &subcmd_info,
+ WL_PROXD_METHOD_FTM, WL_PROXD_SESSION_ID_GLOBAL);
+ *out_version = (ret == BCME_OK) ? subcmd_info.version : 0;
+ return ret;
+}
+
+static int
+dhd_rtt_ftm_enable(dhd_pub_t *dhd, bool enable)
+{
+ ftm_subcmd_info_t subcmd_info;
+ subcmd_info.name = (enable)? "enable" : "disable";
+ subcmd_info.cmdid = (enable)? WL_PROXD_CMD_ENABLE: WL_PROXD_CMD_DISABLE;
+ subcmd_info.handler = NULL;
+ return dhd_rtt_common_set_handler(dhd, &subcmd_info,
+ WL_PROXD_METHOD_FTM, WL_PROXD_SESSION_ID_GLOBAL);
+}
+
+static int
+dhd_rtt_start_session(dhd_pub_t *dhd, wl_proxd_session_id_t session_id, bool start)
+{
+ ftm_subcmd_info_t subcmd_info;
+ subcmd_info.name = (start)? "start session" : "stop session";
+ subcmd_info.cmdid = (start)? WL_PROXD_CMD_START_SESSION: WL_PROXD_CMD_STOP_SESSION;
+ subcmd_info.handler = NULL;
+ return dhd_rtt_common_set_handler(dhd, &subcmd_info,
+ WL_PROXD_METHOD_FTM, session_id);
+}
+
+static int
+dhd_rtt_delete_session(dhd_pub_t *dhd, wl_proxd_session_id_t session_id)
+{
+ ftm_subcmd_info_t subcmd_info;
+ subcmd_info.name = "delete session";
+ subcmd_info.cmdid = WL_PROXD_CMD_DELETE_SESSION;
+ subcmd_info.handler = NULL;
+ return dhd_rtt_common_set_handler(dhd, &subcmd_info,
+ WL_PROXD_METHOD_FTM, session_id);
+}
+
+static int
+dhd_rtt_ftm_config(dhd_pub_t *dhd, wl_proxd_session_id_t session_id,
+ ftm_config_category_t catagory, void *ftm_configs, int ftm_cfg_cnt)
+{
+ ftm_subcmd_info_t subcmd_info;
+ wl_proxd_tlv_t *p_tlv;
+ /* alloc mem for ioctl header + reserved 0 bufsize for tlvs (initialized to zero) */
+ wl_proxd_iov_t *p_proxd_iov;
+ uint16 proxd_iovsize = 0;
+ uint16 bufsize;
+ uint16 buf_space_left;
+ uint16 all_tlvsize;
+ int ret = BCME_OK;
+
+ subcmd_info.name = "config";
+ subcmd_info.cmdid = WL_PROXD_CMD_CONFIG;
+
+ p_proxd_iov = rtt_alloc_getset_buf(WL_PROXD_METHOD_FTM, session_id, subcmd_info.cmdid,
+ FTM_IOC_BUFSZ, &proxd_iovsize);
+
+ if (p_proxd_iov == NULL) {
+ DHD_ERROR(("%s : failed to allocate the iovar (size :%d)\n",
+ __FUNCTION__, FTM_IOC_BUFSZ));
+ return BCME_NOMEM;
+ }
+ /* setup TLVs */
+ bufsize = proxd_iovsize - WL_PROXD_IOV_HDR_SIZE; /* adjust available size for TLVs */
+ p_tlv = &p_proxd_iov->tlvs[0];
+ /* TLV buffer starts with a full size, will decrement for each packed TLV */
+ buf_space_left = bufsize;
+ if (catagory == FTM_CONFIG_CAT_OPTIONS) {
+ ret = rtt_handle_config_options(session_id, &p_tlv, &buf_space_left,
+ (ftm_config_options_info_t *)ftm_configs, ftm_cfg_cnt);
+ } else if (catagory == FTM_CONFIG_CAT_GENERAL) {
+ ret = rtt_handle_config_general(session_id, &p_tlv, &buf_space_left,
+ (ftm_config_param_info_t *)ftm_configs, ftm_cfg_cnt);
+ }
+ if (ret == BCME_OK) {
+ /* update the iov header, set len to include all TLVs + header */
+ all_tlvsize = (bufsize - buf_space_left);
+ p_proxd_iov->len = htol16(all_tlvsize + WL_PROXD_IOV_HDR_SIZE);
+ ret = dhd_iovar(dhd, 0, "proxd", (char *)p_proxd_iov,
+ all_tlvsize + WL_PROXD_IOV_HDR_SIZE, 1);
+ if (ret != BCME_OK) {
+ DHD_ERROR(("%s : failed to set config\n", __FUNCTION__));
+ }
+ }
+ /* clean up */
+ kfree(p_proxd_iov);
+ return ret;
+}
+
+chanspec_t
+dhd_rtt_convert_to_chspec(wifi_channel_info_t channel)
+{
+ int bw;
+ chanspec_t chanspec = 0;
+ uint8 center_chan;
+ uint8 primary_chan;
+ /* set width to 20MHz for the 2.4GHz band */
+ if (channel.center_freq >= 2400 && channel.center_freq <= 2500) {
+ channel.width = WIFI_CHAN_WIDTH_20;
+ }
+ switch (channel.width) {
+ case WIFI_CHAN_WIDTH_20:
+ bw = WL_CHANSPEC_BW_20;
+ primary_chan = wf_mhz2channel(channel.center_freq, 0);
+ chanspec = wf_channel2chspec(primary_chan, bw);
+ break;
+ case WIFI_CHAN_WIDTH_40:
+ bw = WL_CHANSPEC_BW_40;
+ primary_chan = wf_mhz2channel(channel.center_freq, 0);
+ chanspec = wf_channel2chspec(primary_chan, bw);
+ break;
+ case WIFI_CHAN_WIDTH_80:
+ bw = WL_CHANSPEC_BW_80;
+ primary_chan = wf_mhz2channel(channel.center_freq, 0);
+ center_chan = wf_mhz2channel(channel.center_freq0, 0);
+ chanspec = wf_chspec_80(center_chan, primary_chan);
+ break;
+ default:
+ DHD_ERROR(("doesn't support this bandwidth : %d", channel.width));
+ bw = -1;
+ break;
+ }
+ return chanspec;
+}
+
+int
+dhd_rtt_idx_to_burst_duration(uint idx)
+{
+ if (idx >= ARRAY_SIZE(burst_duration_idx)) {
+ return -1;
+ }
+ return burst_duration_idx[idx];
+}
+
+int
+dhd_rtt_set_cfg(dhd_pub_t *dhd, rtt_config_params_t *params)
+{
+ int err = BCME_OK;
+ int idx;
+ rtt_status_info_t *rtt_status;
+ NULL_CHECK(params, "params is NULL", err);
+
+ NULL_CHECK(dhd, "dhd is NULL", err);
+ rtt_status = GET_RTTSTATE(dhd);
+ NULL_CHECK(rtt_status, "rtt_status is NULL", err);
+ if (!HAS_11MC_CAP(rtt_status->rtt_capa.proto)) {
+ DHD_ERROR(("doesn't support RTT\n"));
+ return BCME_ERROR;
+ }
+ if (rtt_status->status != RTT_STOPPED) {
+ DHD_ERROR(("rtt is already started\n"));
+ return BCME_BUSY;
+ }
+ DHD_RTT(("%s enter\n", __FUNCTION__));
+
+ memset(rtt_status->rtt_config.target_info, 0, TARGET_INFO_SIZE(RTT_MAX_TARGET_CNT));
+ rtt_status->rtt_config.rtt_target_cnt = params->rtt_target_cnt;
+ memcpy(rtt_status->rtt_config.target_info,
+ params->target_info, TARGET_INFO_SIZE(params->rtt_target_cnt));
+ rtt_status->status = RTT_STARTED;
+ /* start to measure RTT from first device */
+ /* find next target to trigger RTT */
+ for (idx = rtt_status->cur_idx; idx < rtt_status->rtt_config.rtt_target_cnt; idx++) {
+ /* skip the disabled device */
+ if (rtt_status->rtt_config.target_info[idx].disable) {
+ continue;
+ } else {
+ /* set the idx to cur_idx */
+ rtt_status->cur_idx = idx;
+ break;
+ }
+ }
+ if (idx < rtt_status->rtt_config.rtt_target_cnt) {
+ DHD_RTT(("rtt_status->cur_idx : %d\n", rtt_status->cur_idx));
+ schedule_work(&rtt_status->work);
+ }
+ return err;
+}
+
+int
+dhd_rtt_stop(dhd_pub_t *dhd, struct ether_addr *mac_list, int mac_cnt)
+{
+ int err = BCME_OK;
+ int i = 0, j = 0;
+ rtt_status_info_t *rtt_status;
+ rtt_results_header_t *entry, *next;
+ rtt_result_t *rtt_result, *next2;
+ struct rtt_noti_callback *iter;
+
+ NULL_CHECK(dhd, "dhd is NULL", err);
+ rtt_status = GET_RTTSTATE(dhd);
+ NULL_CHECK(rtt_status, "rtt_status is NULL", err);
+ if (rtt_status->status == RTT_STOPPED) {
+ DHD_ERROR(("rtt is not started\n"));
+ return BCME_OK;
+ }
+ DHD_RTT(("%s enter\n", __FUNCTION__));
+ mutex_lock(&rtt_status->rtt_mutex);
+ for (i = 0; i < mac_cnt; i++) {
+ for (j = 0; j < rtt_status->rtt_config.rtt_target_cnt; j++) {
+ if (!bcmp(&mac_list[i], &rtt_status->rtt_config.target_info[j].addr,
+ ETHER_ADDR_LEN)) {
+ rtt_status->rtt_config.target_info[j].disable = TRUE;
+ }
+ }
+ }
+ if (rtt_status->all_cancel) {
+ /* cancel all requests */
+ rtt_status->status = RTT_STOPPED;
+ DHD_RTT(("current RTT process is cancelled\n"));
+ /* remove the rtt results in cache */
+ if (!list_empty(&rtt_status->rtt_results_cache)) {
+ /* Iterate rtt_results_header list */
+ list_for_each_entry_safe(entry, next,
+ &rtt_status->rtt_results_cache, list) {
+ list_del(&entry->list);
+ /* Iterate rtt_result list */
+ list_for_each_entry_safe(rtt_result, next2,
+ &entry->result_list, list) {
+ list_del(&rtt_result->list);
+ kfree(rtt_result);
+ }
+ kfree(entry);
+ }
+ }
+ /* send the rtt complete event to wake up the user process */
+ list_for_each_entry(iter, &rtt_status->noti_fn_list, list) {
+ iter->noti_fn(iter->ctx, &rtt_status->rtt_results_cache);
+ }
+
+ /* reinitialize the HEAD */
+ INIT_LIST_HEAD(&rtt_status->rtt_results_cache);
+ /* clear information for rtt_config */
+ rtt_status->rtt_config.rtt_target_cnt = 0;
+ memset(rtt_status->rtt_config.target_info, 0,
+ TARGET_INFO_SIZE(RTT_MAX_TARGET_CNT));
+ rtt_status->cur_idx = 0;
+ dhd_rtt_delete_session(dhd, FTM_DEFAULT_SESSION);
+ dhd_rtt_ftm_enable(dhd, FALSE);
+ }
+ mutex_unlock(&rtt_status->rtt_mutex);
+ return err;
+}
+
+
+static int
+dhd_rtt_start(dhd_pub_t *dhd)
+{
+ int err = BCME_OK;
+ char eabuf[ETHER_ADDR_STR_LEN];
+ char chanbuf[CHANSPEC_STR_LEN];
+ int mpc = 0;
+ int ftm_cfg_cnt = 0;
+ int ftm_param_cnt = 0;
+ uint32 rspec = 0;
+ ftm_config_options_info_t ftm_configs[FTM_MAX_CONFIGS];
+ ftm_config_param_info_t ftm_params[FTM_MAX_PARAMS];
+ rtt_target_info_t *rtt_target;
+ rtt_status_info_t *rtt_status;
+ NULL_CHECK(dhd, "dhd is NULL", err);
+
+ rtt_status = GET_RTTSTATE(dhd);
+ NULL_CHECK(rtt_status, "rtt_status is NULL", err);
+
+ if (rtt_status->cur_idx >= rtt_status->rtt_config.rtt_target_cnt) {
+ err = BCME_RANGE;
+ DHD_RTT(("%s : idx %d is out of range\n", __FUNCTION__, rtt_status->cur_idx));
+ goto exit;
+ }
+ if (RTT_IS_STOPPED(rtt_status)) {
+ DHD_RTT(("RTT is stopped\n"));
+ goto exit;
+ }
+ /* turn off mpc when not associated */
+ if (!dhd_is_associated(dhd, NULL, NULL)) {
+ err = dhd_iovar(dhd, 0, "mpc", (char *)&mpc, sizeof(mpc), 1);
+ if (err) {
+ DHD_ERROR(("%s : failed to set mpc\n", __FUNCTION__));
+ goto exit;
+ }
+ rtt_status->mpc = 1; /* Either failure or complete, we need to enable mpc */
+ }
+
+ mutex_lock(&rtt_status->rtt_mutex);
+ /* Get the target information */
+ rtt_target = &rtt_status->rtt_config.target_info[rtt_status->cur_idx];
+ mutex_unlock(&rtt_status->rtt_mutex);
+ DHD_RTT(("%s enter\n", __FUNCTION__));
+ if (!RTT_IS_ENABLED(rtt_status)) {
+ /* enable ftm */
+ err = dhd_rtt_ftm_enable(dhd, TRUE);
+ if (err) {
+ DHD_ERROR(("failed to enable FTM (%d)\n", err));
+ goto exit;
+ }
+ }
+
+ /* delete the default session */
+ err = dhd_rtt_delete_session(dhd, FTM_DEFAULT_SESSION);
+ if (err < 0 && err != BCME_NOTFOUND) {
+ DHD_ERROR(("failed to delete session of FTM (%d)\n", err));
+ goto exit;
+ }
+ rtt_status->status = RTT_ENABLED;
+ memset(ftm_configs, 0, sizeof(ftm_configs));
+ memset(ftm_params, 0, sizeof(ftm_params));
+
+ /* configure the session 1 as initiator */
+ ftm_configs[ftm_cfg_cnt].enable = TRUE;
+ ftm_configs[ftm_cfg_cnt++].flags = WL_PROXD_SESSION_FLAG_INITIATOR;
+ dhd_rtt_ftm_config(dhd, FTM_DEFAULT_SESSION, FTM_CONFIG_CAT_OPTIONS,
+ ftm_configs, ftm_cfg_cnt);
+ /* target's mac address */
+ if (!ETHER_ISNULLADDR(rtt_target->addr.octet)) {
+ ftm_params[ftm_param_cnt].mac_addr = rtt_target->addr;
+ ftm_params[ftm_param_cnt++].tlvid = WL_PROXD_TLV_ID_PEER_MAC;
+ DHD_RTT((">\t target %s\n", bcm_ether_ntoa(&rtt_target->addr, eabuf)));
+ }
+ /* target's chanspec */
+ if (rtt_target->chanspec) {
+ ftm_params[ftm_param_cnt].chanspec = htol32((uint32)rtt_target->chanspec);
+ ftm_params[ftm_param_cnt++].tlvid = WL_PROXD_TLV_ID_CHANSPEC;
+ DHD_RTT((">\t chanspec : %s\n", wf_chspec_ntoa(rtt_target->chanspec, chanbuf)));
+ }
+ /* num-burst */
+ if (rtt_target->num_burst) {
+ ftm_params[ftm_param_cnt].data16 = htol16(rtt_target->num_burst);
+ ftm_params[ftm_param_cnt++].tlvid = WL_PROXD_TLV_ID_NUM_BURST;
+ DHD_RTT((">\t num of burst : %d\n", rtt_target->num_burst));
+ }
+ /* number of frame per burst */
+ if (rtt_target->num_frames_per_burst == 0) {
+ rtt_target->num_frames_per_burst =
+ CHSPEC_IS20(rtt_target->chanspec) ? FTM_DEFAULT_CNT_20M :
+ CHSPEC_IS40(rtt_target->chanspec) ? FTM_DEFAULT_CNT_40M :
+ FTM_DEFAULT_CNT_80M;
+ }
+ ftm_params[ftm_param_cnt].data16 = htol16(rtt_target->num_frames_per_burst);
+ ftm_params[ftm_param_cnt++].tlvid = WL_PROXD_TLV_ID_BURST_NUM_FTM;
+ DHD_RTT((">\t number of frame per burst : %d\n", rtt_target->num_frames_per_burst));
+ /* FTM retry count */
+ if (rtt_target->num_retries_per_ftm) {
+ ftm_params[ftm_param_cnt].data8 = rtt_target->num_retries_per_ftm;
+ ftm_params[ftm_param_cnt++].tlvid = WL_PROXD_TLV_ID_FTM_RETRIES;
+ DHD_RTT((">\t retry count of FTM : %d\n", rtt_target->num_retries_per_ftm));
+ }
+ /* FTM Request retry count */
+ if (rtt_target->num_retries_per_ftmr) {
+ ftm_params[ftm_param_cnt].data8 = rtt_target->num_retries_per_ftmr;
+ ftm_params[ftm_param_cnt++].tlvid = WL_PROXD_TLV_ID_FTM_REQ_RETRIES;
+ DHD_RTT((">\t retry count of FTM Req : %d\n", rtt_target->num_retries_per_ftmr));
+ }
+ /* burst-period */
+ if (rtt_target->burst_period) {
+ ftm_params[ftm_param_cnt].data_intvl.intvl =
+ htol32(rtt_target->burst_period); /* ms */
+ ftm_params[ftm_param_cnt].data_intvl.tmu = WL_PROXD_TMU_MILLI_SEC;
+ ftm_params[ftm_param_cnt++].tlvid = WL_PROXD_TLV_ID_BURST_PERIOD;
+ DHD_RTT((">\t burst period : %d ms\n", rtt_target->burst_period));
+ }
+ /* burst-duration */
+ if (rtt_target->burst_duration) {
+ ftm_params[ftm_param_cnt].data_intvl.intvl =
+ htol32(rtt_target->burst_duration); /* ms */
+ ftm_params[ftm_param_cnt].data_intvl.tmu = WL_PROXD_TMU_MILLI_SEC;
+ ftm_params[ftm_param_cnt++].tlvid = WL_PROXD_TLV_ID_BURST_DURATION;
+ DHD_RTT((">\t burst duration : %d ms\n",
+ rtt_target->burst_duration));
+ }
+ if (rtt_target->bw && rtt_target->preamble) {
+ bool use_default = FALSE;
+ int nss;
+ int mcs;
+ switch (rtt_target->preamble) {
+ case RTT_PREAMBLE_LEGACY:
+ rspec |= WL_RSPEC_ENCODE_RATE; /* 11abg */
+ rspec |= WL_RATE_6M;
+ break;
+ case RTT_PREAMBLE_HT:
+ rspec |= WL_RSPEC_ENCODE_HT; /* 11n HT */
+ mcs = 0; /* default MCS 0 */
+ rspec |= mcs;
+ break;
+ case RTT_PREAMBLE_VHT:
+ rspec |= WL_RSPEC_ENCODE_VHT; /* 11ac VHT */
+ mcs = 0; /* default MCS 0 */
+ nss = 1; /* default Nss = 1 */
+ rspec |= (nss << WL_RSPEC_VHT_NSS_SHIFT) | mcs;
+ break;
+ default:
+ DHD_RTT(("doesn't support this preamble : %d\n", rtt_target->preamble));
+ use_default = TRUE;
+ break;
+ }
+ switch (rtt_target->bw) {
+ case RTT_BW_20:
+ rspec |= WL_RSPEC_BW_20MHZ;
+ break;
+ case RTT_BW_40:
+ rspec |= WL_RSPEC_BW_40MHZ;
+ break;
+ case RTT_BW_80:
+ rspec |= WL_RSPEC_BW_80MHZ;
+ break;
+ default:
+ DHD_RTT(("doesn't support this BW : %d\n", rtt_target->bw));
+ use_default = TRUE;
+ break;
+ }
+ if (!use_default) {
+ ftm_params[ftm_param_cnt].data32 = htol32(rspec);
+ ftm_params[ftm_param_cnt++].tlvid = WL_PROXD_TLV_ID_RATESPEC;
+ DHD_RTT((">\t ratespec : %d\n", rspec));
+ }
+
+ }
+ dhd_rtt_ftm_config(dhd, FTM_DEFAULT_SESSION, FTM_CONFIG_CAT_GENERAL,
+ ftm_params, ftm_param_cnt);
+
+ err = dhd_rtt_start_session(dhd, FTM_DEFAULT_SESSION, TRUE);
+ if (err) {
+ DHD_ERROR(("failed to start session of FTM : error %d\n", err));
+ }
+exit:
+ if (err) {
+ rtt_status->status = RTT_STOPPED;
+ /* disable FTM */
+ dhd_rtt_ftm_enable(dhd, FALSE);
+ if (rtt_status->mpc) {
+ /* enable mpc again in case of error */
+ mpc = 1;
+ rtt_status->mpc = 0;
+ err = dhd_iovar(dhd, 0, "mpc", (char *)&mpc, sizeof(mpc), 1);
+ }
+ }
+ return err;
+}
+
+int
+dhd_rtt_register_noti_callback(dhd_pub_t *dhd, void *ctx, dhd_rtt_compl_noti_fn noti_fn)
+{
+ int err = BCME_OK;
+ struct rtt_noti_callback *cb = NULL, *iter;
+ rtt_status_info_t *rtt_status;
+ NULL_CHECK(dhd, "dhd is NULL", err);
+ NULL_CHECK(noti_fn, "noti_fn is NULL", err);
+
+ rtt_status = GET_RTTSTATE(dhd);
+ NULL_CHECK(rtt_status, "rtt_status is NULL", err);
+ spin_lock_bh(&noti_list_lock);
+ list_for_each_entry(iter, &rtt_status->noti_fn_list, list) {
+ if (iter->noti_fn == noti_fn) {
+ goto exit;
+ }
+ }
+ cb = kmalloc(sizeof(struct rtt_noti_callback), GFP_ATOMIC);
+ if (!cb) {
+ err = -ENOMEM;
+ goto exit;
+ }
+ cb->noti_fn = noti_fn;
+ cb->ctx = ctx;
+ list_add(&cb->list, &rtt_status->noti_fn_list);
+exit:
+ spin_unlock_bh(&noti_list_lock);
+ return err;
+}
+
+int
+dhd_rtt_unregister_noti_callback(dhd_pub_t *dhd, dhd_rtt_compl_noti_fn noti_fn)
+{
+ int err = BCME_OK;
+ struct rtt_noti_callback *cb = NULL, *iter;
+ rtt_status_info_t *rtt_status;
+ NULL_CHECK(dhd, "dhd is NULL", err);
+ NULL_CHECK(noti_fn, "noti_fn is NULL", err);
+ rtt_status = GET_RTTSTATE(dhd);
+ NULL_CHECK(rtt_status, "rtt_status is NULL", err);
+ spin_lock_bh(&noti_list_lock);
+ list_for_each_entry(iter, &rtt_status->noti_fn_list, list) {
+ if (iter->noti_fn == noti_fn) {
+ cb = iter;
+ list_del(&cb->list);
+ break;
+ }
+ }
+ spin_unlock_bh(&noti_list_lock);
+ if (cb) {
+ kfree(cb);
+ }
+ return err;
+}
+
+static wifi_rate_t
+dhd_rtt_convert_rate_to_host(uint32 rspec)
+{
+ wifi_rate_t host_rate;
+ memset(&host_rate, 0, sizeof(wifi_rate_t));
+ if ((rspec & WL_RSPEC_ENCODING_MASK) == WL_RSPEC_ENCODE_RATE) {
+ host_rate.preamble = 0;
+ } else if ((rspec & WL_RSPEC_ENCODING_MASK) == WL_RSPEC_ENCODE_HT) {
+ host_rate.preamble = 2;
+ host_rate.rateMcsIdx = rspec & WL_RSPEC_RATE_MASK;
+ } else if ((rspec & WL_RSPEC_ENCODING_MASK) == WL_RSPEC_ENCODE_VHT) {
+ host_rate.preamble = 3;
+ host_rate.rateMcsIdx = rspec & WL_RSPEC_VHT_MCS_MASK;
+ host_rate.nss = (rspec & WL_RSPEC_VHT_NSS_MASK) >> WL_RSPEC_VHT_NSS_SHIFT;
+ }
+ host_rate.bw = (rspec & WL_RSPEC_BW_MASK) - 1;
+ host_rate.bitrate = rate_rspec2rate(rspec) / 100; /* 100kbps */
+ DHD_RTT(("bit rate : %d\n", host_rate.bitrate));
+ return host_rate;
+}
+
+
+static int
+dhd_rtt_convert_results_to_host(rtt_report_t *rtt_report, uint8 *p_data, uint16 tlvid, uint16 len)
+{
+ int err = BCME_OK;
+ char eabuf[ETHER_ADDR_STR_LEN];
+ wl_proxd_rtt_result_t *p_data_info;
+ wl_proxd_result_flags_t flags;
+ wl_proxd_session_state_t session_state;
+ wl_proxd_status_t proxd_status;
+ struct timespec ts;
+ uint32 ratespec;
+ uint32 avg_dist;
+ wl_proxd_rtt_sample_t *p_sample;
+ wl_proxd_intvl_t rtt;
+ wl_proxd_intvl_t p_time;
+
+ NULL_CHECK(rtt_report, "rtt_report is NULL", err);
+ NULL_CHECK(p_data, "p_data is NULL", err);
+ DHD_RTT(("%s enter\n", __FUNCTION__));
+ p_data_info = (wl_proxd_rtt_result_t *) p_data;
+ /* unpack and format 'flags' for display */
+ flags = ltoh16_ua(&p_data_info->flags);
+
+ /* session state and status */
+ session_state = ltoh16_ua(&p_data_info->state);
+ proxd_status = ltoh32_ua(&p_data_info->status);
+ DHD_RTT((">\tTarget(%s) session state=%d(%s), status=%d(%s)\n",
+ bcm_ether_ntoa((&(p_data_info->peer)), eabuf),
+ session_state,
+ ftm_session_state_value_to_logstr(session_state),
+ proxd_status,
+ ftm_status_value_to_logstr(proxd_status)));
+
+ /* show avg_dist (1/256m units), burst_num */
+ avg_dist = ltoh32_ua(&p_data_info->avg_dist);
+ if (avg_dist == 0xffffffff) { /* report 'failure' case */
+ DHD_RTT((">\tavg_dist=-1m, burst_num=%d, valid_measure_cnt=%d\n",
+ ltoh16_ua(&p_data_info->burst_num),
+ p_data_info->num_valid_rtt)); /* in a session */
+ avg_dist = FTM_INVALID;
+ }
+ else {
+ DHD_RTT((">\tavg_dist=%d.%04dm, burst_num=%d, valid_measure_cnt=%d num_ftm=%d\n",
+ avg_dist >> 8, /* 1/256m units */
+ ((avg_dist & 0xff) * 625) >> 4,
+ ltoh16_ua(&p_data_info->burst_num),
+ p_data_info->num_valid_rtt,
+ p_data_info->num_ftm)); /* in a session */
+ }
+ /* show 'avg_rtt' sample */
+ p_sample = &p_data_info->avg_rtt;
+ DHD_RTT((">\tavg_rtt sample: rssi=%d rtt=%d%s std_deviation =%d.%d ratespec=0x%08x\n",
+ (int16) ltoh16_ua(&p_sample->rssi),
+ ltoh32_ua(&p_sample->rtt.intvl),
+ ftm_tmu_value_to_logstr(ltoh16_ua(&p_sample->rtt.tmu)),
+ ltoh16_ua(&p_data_info->sd_rtt)/10, ltoh16_ua(&p_data_info->sd_rtt)%10,
+ ltoh32_ua(&p_sample->ratespec)));
+
+ /* set peer address */
+ rtt_report->addr = p_data_info->peer;
+ /* burst num */
+ rtt_report->burst_num = ltoh16_ua(&p_data_info->burst_num);
+ /* success num */
+ rtt_report->success_num = p_data_info->num_valid_rtt;
+ /* actual number of FTM supported by peer */
+ rtt_report->num_per_burst_peer = p_data_info->num_ftm;
+ rtt_report->negotiated_burst_num = p_data_info->num_ftm;
+ /* status */
+ rtt_report->status = ftm_get_statusmap_info(proxd_status,
+ &ftm_status_map_info[0], ARRAYSIZE(ftm_status_map_info));
+ /* rssi (0.5db) */
+ rtt_report->rssi = ABS(ltoh16_ua(&p_data_info->avg_rtt.rssi)) * 2;
+ /* rx rate */
+ ratespec = ltoh32_ua(&p_data_info->avg_rtt.ratespec);
+ rtt_report->rx_rate = dhd_rtt_convert_rate_to_host(ratespec);
+ /* tx rate */
+ if (flags & WL_PROXD_RESULT_FLAG_VHTACK) {
+ rtt_report->tx_rate = dhd_rtt_convert_rate_to_host(0x2010010);
+ } else {
+ rtt_report->tx_rate = dhd_rtt_convert_rate_to_host(0xc);
+ }
+ /* rtt_sd */
+ rtt.tmu = ltoh16_ua(&p_data_info->avg_rtt.rtt.tmu);
+ rtt.intvl = ltoh32_ua(&p_data_info->avg_rtt.rtt.intvl);
+ rtt_report->rtt = FTM_INTVL2NSEC(&rtt) * 10; /* nano -> 0.1 nano */
+ rtt_report->rtt_sd = ltoh16_ua(&p_data_info->sd_rtt); /* nano -> 0.1 nano */
+ DHD_RTT(("rtt_report->rtt : %llu\n", rtt_report->rtt));
+ DHD_RTT(("rtt_report->rssi : %d (0.5db)\n", rtt_report->rssi));
+
+ /* average distance */
+ if (avg_dist != FTM_INVALID) {
+ rtt_report->distance = (avg_dist >> 8) * 100; /* meter -> cm */
+ rtt_report->distance += (avg_dist & 0xff) * 100 / 256;
+ } else {
+ rtt_report->distance = FTM_INVALID;
+ }
+ /* time stamp */
+ /* get the time elapsed from boot time */
+ get_monotonic_boottime(&ts);
+ rtt_report->ts = (uint64)TIMESPEC_TO_US(ts);
+
+ if (proxd_status == WL_PROXD_E_REMOTE_FAIL) {
+ /* retry time after failure */
+ p_time.intvl = ltoh32_ua(&p_data_info->u.retry_after.intvl);
+ p_time.tmu = ltoh16_ua(&p_data_info->u.retry_after.tmu);
+ rtt_report->retry_after_duration = FTM_INTVL2SEC(&p_time); /* s -> s */
+ DHD_RTT((">\tretry_after: %d%s\n",
+ ltoh32_ua(&p_data_info->u.retry_after.intvl),
+ ftm_tmu_value_to_logstr(ltoh16_ua(&p_data_info->u.retry_after.tmu))));
+ } else {
+ /* burst duration */
+ p_time.intvl = ltoh32_ua(&p_data_info->u.burst_duration.intvl);
+ p_time.tmu = ltoh16_ua(&p_data_info->u.burst_duration.tmu);
+ rtt_report->burst_duration = FTM_INTVL2MSEC(&p_time); /* s -> ms */
+ DHD_RTT((">\tburst_duration: %d%s\n",
+ ltoh32_ua(&p_data_info->u.burst_duration.intvl),
+ ftm_tmu_value_to_logstr(ltoh16_ua(&p_data_info->u.burst_duration.tmu))));
+ DHD_RTT(("rtt_report->burst_duration : %d\n", rtt_report->burst_duration));
+ }
+ return err;
+}
+
+int
+dhd_rtt_event_handler(dhd_pub_t *dhd, wl_event_msg_t *event, void *event_data)
+{
+ int ret = BCME_OK;
+ int tlvs_len;
+ int idx;
+ uint16 version;
+ wl_proxd_event_t *p_event;
+ wl_proxd_event_type_t event_type;
+ wl_proxd_ftm_session_status_t session_status;
+ const ftm_strmap_entry_t *p_loginfo;
+ rtt_status_info_t *rtt_status;
+ rtt_target_info_t *rtt_target_info;
+ struct rtt_noti_callback *iter;
+ rtt_results_header_t *entry, *next, *rtt_results_header = NULL;
+ rtt_result_t *rtt_result, *next2;
+ gfp_t kflags;
+ bool is_new = TRUE;
+
+ NULL_CHECK(dhd, "dhd is NULL", ret);
+ rtt_status = GET_RTTSTATE(dhd);
+ NULL_CHECK(rtt_status, "rtt_status is NULL", ret);
+
+ event_type = ntoh32_ua((void *)&event->event_type);
+
+ if (event_type != WLC_E_PROXD) {
+ return ret;
+ }
+ if (RTT_IS_STOPPED(rtt_status)) {
+ /* Ignore the Proxd event */
+ return ret;
+ }
+ p_event = (wl_proxd_event_t *) event_data;
+ version = ltoh16(p_event->version);
+ if (version < WL_PROXD_API_VERSION) {
+ DHD_ERROR(("ignore non-ftm event version = 0x%0x < WL_PROXD_API_VERSION (0x%x)\n",
+ version, WL_PROXD_API_VERSION));
+ return ret;
+ }
+ if (!in_atomic()) {
+ mutex_lock(&rtt_status->rtt_mutex);
+ }
+ event_type = (wl_proxd_event_type_t) ltoh16(p_event->type);
+
+ kflags = in_softirq()? GFP_ATOMIC : GFP_KERNEL;
+
+ DHD_RTT(("event_type=0x%x, ntoh16()=0x%x, ltoh16()=0x%x\n",
+ p_event->type, ntoh16(p_event->type), ltoh16(p_event->type)));
+ p_loginfo = ftm_get_event_type_loginfo(event_type);
+ if (p_loginfo == NULL) {
+ DHD_ERROR(("receive an invalid FTM event %d\n", event_type));
+ goto exit; /* ignore this event */
+ }
+ /* get TLVs len, skip over event header */
+ tlvs_len = ltoh16(p_event->len) - OFFSETOF(wl_proxd_event_t, tlvs);
+ DHD_RTT(("receive '%s' event: version=0x%x len=%d method=%d sid=%d tlvs_len=%d\n",
+ p_loginfo->text,
+ version,
+ ltoh16(p_event->len),
+ ltoh16(p_event->method),
+ ltoh16(p_event->sid),
+ tlvs_len));
+ rtt_target_info = &rtt_status->rtt_config.target_info[rtt_status->cur_idx];
+ /* find a rtt_report_header for this mac address */
+ list_for_each_entry(entry, &rtt_status->rtt_results_cache, list) {
+ if (!memcmp(&entry->peer_mac, &event->addr, ETHER_ADDR_LEN)) {
+ /* found a rtt_report_header for peer_mac in the list */
+ is_new = FALSE;
+ rtt_results_header = entry;
+ break;
+ }
+ }
+
+ switch (event_type) {
+ case WL_PROXD_EVENT_SESSION_CREATE:
+ DHD_RTT(("WL_PROXD_EVENT_SESSION_CREATE\n"));
+ break;
+ case WL_PROXD_EVENT_SESSION_START:
+ DHD_RTT(("WL_PROXD_EVENT_SESSION_START\n"));
+ break;
+ case WL_PROXD_EVENT_BURST_START:
+ DHD_RTT(("WL_PROXD_EVENT_BURST_START\n"));
+ break;
+ case WL_PROXD_EVENT_BURST_END:
+ DHD_RTT(("WL_PROXD_EVENT_BURST_END\n"));
+ if (is_new) {
+ /* allocate new header for rtt_results */
+ rtt_results_header = kzalloc(sizeof(rtt_results_header_t), GFP_KERNEL);
+ if (!rtt_results_header) {
+ ret = -ENOMEM;
+ goto exit;
+ }
+ /* Initialize the head of list for rtt result */
+ INIT_LIST_HEAD(&rtt_results_header->result_list);
+ rtt_results_header->peer_mac = event->addr;
+ list_add_tail(&rtt_results_header->list, &rtt_status->rtt_results_cache);
+ }
+ if (tlvs_len > 0) {
+ /* allocate rtt_results for new results */
+ rtt_result = kzalloc(sizeof(rtt_result_t), kflags);
+ if (!rtt_result) {
+ ret = -ENOMEM;
+ goto exit;
+ }
+ /* unpack TLVs and invokes the cbfn to print the event content TLVs */
+ ret = bcm_unpack_xtlv_buf((void *) &(rtt_result->report),
+ (uint8 *)&p_event->tlvs[0], tlvs_len,
+ BCM_XTLV_OPTION_ALIGN32, rtt_unpack_xtlv_cbfn);
+ if (ret != BCME_OK) {
+ DHD_ERROR(("%s : Failed to unpack xtlv for an event\n",
+ __FUNCTION__));
+ goto exit;
+ }
+ /* fill out the results from the configuration param */
+ rtt_result->report.ftm_num = rtt_target_info->num_frames_per_burst;
+ rtt_result->report.type = RTT_TWO_WAY;
+ DHD_RTT(("report->ftm_num : %d\n", rtt_result->report.ftm_num));
+ rtt_result->report_len = RTT_REPORT_SIZE;
+ /* XXX TODO : implement code to get LCR or LCI information in rtt_result */
+
+ list_add_tail(&rtt_result->list, &rtt_results_header->result_list);
+ rtt_results_header->result_cnt++;
+ rtt_results_header->result_tot_len += rtt_result->report_len;
+ }
+ break;
+ case WL_PROXD_EVENT_SESSION_END:
+ DHD_RTT(("WL_PROXD_EVENT_SESSION_END\n"));
+ if (!RTT_IS_ENABLED(rtt_status)) {
+ DHD_RTT(("Ignore the session end evt\n"));
+ goto exit;
+ }
+ if (tlvs_len > 0) {
+ /* unpack TLVs and invokes the cbfn to print the event content TLVs */
+ ret = bcm_unpack_xtlv_buf((void *) &session_status,
+ (uint8 *)&p_event->tlvs[0], tlvs_len,
+ BCM_XTLV_OPTION_ALIGN32, rtt_unpack_xtlv_cbfn);
+ if (ret != BCME_OK) {
+ DHD_ERROR(("%s : Failed to unpack xtlv for an event\n",
+ __FUNCTION__));
+ goto exit;
+ }
+ }
+ /* In case of no result for the peer device, make fake result for error case */
+ if (is_new) {
+ /* allocate new header for rtt_results */
+ rtt_results_header = kzalloc(sizeof(rtt_results_header_t), GFP_KERNEL);
+ if (!rtt_results_header) {
+ ret = -ENOMEM;
+ goto exit;
+ }
+ /* Initialize the head of list for rtt result */
+ INIT_LIST_HEAD(&rtt_results_header->result_list);
+ rtt_results_header->peer_mac = event->addr;
+ list_add_tail(&rtt_results_header->list, &rtt_status->rtt_results_cache);
+
+ /* allocate rtt_results for new results */
+ rtt_result = kzalloc(sizeof(rtt_result_t), kflags);
+ if (!rtt_result) {
+ ret = -ENOMEM;
+ /* the header was already queued above; unlink it before freeing */
+ list_del(&rtt_results_header->list);
+ kfree(rtt_results_header);
+ goto exit;
+ }
+ /* fill out the results from the configuration param */
+ rtt_result->report.ftm_num = rtt_target_info->num_frames_per_burst;
+ rtt_result->report.type = RTT_TWO_WAY;
+ DHD_RTT(("report->ftm_num : %d\n", rtt_result->report.ftm_num));
+ rtt_result->report_len = RTT_REPORT_SIZE;
+ rtt_result->report.status = RTT_REASON_FAIL_NO_RSP;
+ rtt_result->report.addr = rtt_target_info->addr;
+ rtt_result->report.distance = FTM_INVALID;
+ list_add_tail(&rtt_result->list, &rtt_results_header->result_list);
+ rtt_results_header->result_cnt++;
+ rtt_results_header->result_tot_len += rtt_result->report_len;
+ }
+ /* find next target to trigger RTT */
+ for (idx = (rtt_status->cur_idx + 1);
+ idx < rtt_status->rtt_config.rtt_target_cnt; idx++) {
+ /* skip the disabled device */
+ if (rtt_status->rtt_config.target_info[idx].disable) {
+ continue;
+ } else {
+ /* set the idx to cur_idx */
+ rtt_status->cur_idx = idx;
+ break;
+ }
+ }
+ if (idx < rtt_status->rtt_config.rtt_target_cnt) {
+ /* restart to measure RTT from next device */
+ schedule_work(&rtt_status->work);
+ } else {
+ DHD_RTT(("RTT_STOPPED\n"));
+ rtt_status->status = RTT_STOPPED;
+ /* to turn on mpc mode */
+ schedule_work(&rtt_status->work);
+ /* notify the completed information to others */
+ list_for_each_entry(iter, &rtt_status->noti_fn_list, list) {
+ iter->noti_fn(iter->ctx, &rtt_status->rtt_results_cache);
+ }
+ /* remove the rtt results in cache */
+ if (!list_empty(&rtt_status->rtt_results_cache)) {
+ /* Iterate rtt_results_header list */
+ list_for_each_entry_safe(entry, next,
+ &rtt_status->rtt_results_cache, list) {
+ list_del(&entry->list);
+ /* Iterate rtt_result list */
+ list_for_each_entry_safe(rtt_result, next2,
+ &entry->result_list, list) {
+ list_del(&rtt_result->list);
+ kfree(rtt_result);
+ }
+ kfree(entry);
+ }
+ }
+
+ /* reinitialize the HEAD */
+ INIT_LIST_HEAD(&rtt_status->rtt_results_cache);
+ /* clear information for rtt_config */
+ rtt_status->rtt_config.rtt_target_cnt = 0;
+ memset(rtt_status->rtt_config.target_info, 0,
+ TARGET_INFO_SIZE(RTT_MAX_TARGET_CNT));
+ rtt_status->cur_idx = 0;
+ }
+ break;
+ case WL_PROXD_EVENT_SESSION_RESTART:
+ DHD_RTT(("WL_PROXD_EVENT_SESSION_RESTART\n"));
+ break;
+ case WL_PROXD_EVENT_BURST_RESCHED:
+ DHD_RTT(("WL_PROXD_EVENT_BURST_RESCHED\n"));
+ break;
+ case WL_PROXD_EVENT_SESSION_DESTROY:
+ DHD_RTT(("WL_PROXD_EVENT_SESSION_DESTROY\n"));
+ break;
+ case WL_PROXD_EVENT_FTM_FRAME:
+ DHD_RTT(("WL_PROXD_EVENT_FTM_FRAME\n"));
+ break;
+ case WL_PROXD_EVENT_DELAY:
+ DHD_RTT(("WL_PROXD_EVENT_DELAY\n"));
+ break;
+ case WL_PROXD_EVENT_VS_INITIATOR_RPT:
+ DHD_RTT(("WL_PROXD_EVENT_VS_INITIATOR_RPT\n "));
+ break;
+ case WL_PROXD_EVENT_RANGING:
+ DHD_RTT(("WL_PROXD_EVENT_RANGING\n"));
+ break;
+
+ default:
+ DHD_ERROR(("WLC_E_PROXD: not supported EVENT Type:%d\n", event_type));
+ break;
+ }
+exit:
+ if (!in_atomic()) {
+ mutex_unlock(&rtt_status->rtt_mutex);
+ }
+
+ return ret;
+}
+
+static void
+dhd_rtt_work(struct work_struct *work)
+{
+ rtt_status_info_t *rtt_status;
+ dhd_pub_t *dhd;
+ rtt_status = container_of(work, rtt_status_info_t, work);
+ if (rtt_status == NULL) {
+ DHD_ERROR(("%s : rtt_status is NULL\n", __FUNCTION__));
+ return;
+ }
+ dhd = rtt_status->dhd;
+ if (dhd == NULL) {
+ DHD_ERROR(("%s : dhd is NULL\n", __FUNCTION__));
+ return;
+ }
+ (void) dhd_rtt_start(dhd);
+}
+
+int
+dhd_rtt_capability(dhd_pub_t *dhd, rtt_capabilities_t *capa)
+{
+ rtt_status_info_t *rtt_status;
+ int err = BCME_OK;
+ NULL_CHECK(dhd, "dhd is NULL", err);
+ rtt_status = GET_RTTSTATE(dhd);
+ NULL_CHECK(rtt_status, "rtt_status is NULL", err);
+ NULL_CHECK(capa, "capa is NULL", err);
+ bzero(capa, sizeof(rtt_capabilities_t));
+ switch (rtt_status->rtt_capa.proto) {
+ case RTT_CAP_ONE_WAY:
+ capa->rtt_one_sided_supported = 1;
+ break;
+ case RTT_CAP_FTM_WAY:
+ capa->rtt_ftm_supported = 1;
+ break;
+ }
+
+ switch (rtt_status->rtt_capa.feature) {
+ case RTT_FEATURE_LCI:
+ capa->lci_support = 1;
+ break;
+ case RTT_FEATURE_LCR:
+ capa->lcr_support = 1;
+ break;
+ case RTT_FEATURE_PREAMBLE:
+ capa->preamble_support = 1;
+ break;
+ case RTT_FEATURE_BW:
+ capa->bw_support = 1;
+ break;
+ }
+ /* bit mask */
+ capa->preamble_support = rtt_status->rtt_capa.preamble;
+ capa->bw_support = rtt_status->rtt_capa.bw;
+
+ return err;
+}
+
+int
+dhd_rtt_init(dhd_pub_t *dhd)
+{
+ int err = BCME_OK, ret;
+ int32 up = 1;
+ int32 version;
+ rtt_status_info_t *rtt_status;
+ NULL_CHECK(dhd, "dhd is NULL", err);
+ if (dhd->rtt_state) {
+ return err;
+ }
+ dhd->rtt_state = kzalloc(sizeof(rtt_status_info_t), GFP_KERNEL);
+ if (dhd->rtt_state == NULL) {
+ err = BCME_NOMEM;
+ DHD_ERROR(("%s : failed to create rtt_state\n", __FUNCTION__));
+ return err;
+ }
+ bzero(dhd->rtt_state, sizeof(rtt_status_info_t));
+ rtt_status = GET_RTTSTATE(dhd);
+ rtt_status->rtt_config.target_info =
+ kzalloc(TARGET_INFO_SIZE(RTT_MAX_TARGET_CNT), GFP_KERNEL);
+ if (rtt_status->rtt_config.target_info == NULL) {
+ DHD_ERROR(("%s failed to allocate the target info for %d\n",
+ __FUNCTION__, RTT_MAX_TARGET_CNT));
+ err = BCME_NOMEM;
+ goto exit;
+ }
+ rtt_status->dhd = dhd;
+ /* need to do WLC_UP */
+ dhd_wl_ioctl_cmd(dhd, WLC_UP, (char *)&up, sizeof(int32), TRUE, 0);
+
+ ret = dhd_rtt_get_version(dhd, &version);
+ if (ret == BCME_OK && (version == WL_PROXD_API_VERSION)) {
+ DHD_ERROR(("%s : FTM is supported\n", __FUNCTION__));
+ /* XXX : TODO : need to find a way to check rtt capability */
+ /* rtt_status->rtt_capa.proto |= RTT_CAP_ONE_WAY; */
+ rtt_status->rtt_capa.proto |= RTT_CAP_FTM_WAY;
+
+ /* indicate to set tx rate */
+ rtt_status->rtt_capa.feature |= RTT_FEATURE_PREAMBLE;
+ rtt_status->rtt_capa.preamble |= RTT_PREAMBLE_VHT;
+ rtt_status->rtt_capa.preamble |= RTT_PREAMBLE_HT;
+
+ /* indicate to set bandwidth */
+ rtt_status->rtt_capa.feature |= RTT_FEATURE_BW;
+ rtt_status->rtt_capa.bw |= RTT_BW_20;
+ rtt_status->rtt_capa.bw |= RTT_BW_40;
+ rtt_status->rtt_capa.bw |= RTT_BW_80;
+ } else {
+ if ((ret != BCME_OK) || (version == 0)) {
+ DHD_ERROR(("%s : FTM is not supported\n", __FUNCTION__));
+ } else {
+ DHD_ERROR(("%s : FTM version mismatch between HOST (%d) and FW (%d)\n",
+ __FUNCTION__, WL_PROXD_API_VERSION, version));
+ }
+ }
+ /* cancel all of RTT request once we got the cancel request */
+ rtt_status->all_cancel = TRUE;
+ mutex_init(&rtt_status->rtt_mutex);
+ INIT_LIST_HEAD(&rtt_status->noti_fn_list);
+ INIT_LIST_HEAD(&rtt_status->rtt_results_cache);
+ INIT_WORK(&rtt_status->work, dhd_rtt_work);
+exit:
+ if (err < 0) {
+ kfree(rtt_status->rtt_config.target_info);
+ kfree(dhd->rtt_state);
+ dhd->rtt_state = NULL;
+ }
+ return err;
+}
+
+int
+dhd_rtt_deinit(dhd_pub_t *dhd)
+{
+ int err = BCME_OK;
+ rtt_status_info_t *rtt_status;
+ rtt_results_header_t *rtt_header, *next;
+ rtt_result_t *rtt_result, *next2;
+ struct rtt_noti_callback *iter, *iter2;
+ NULL_CHECK(dhd, "dhd is NULL", err);
+ rtt_status = GET_RTTSTATE(dhd);
+ NULL_CHECK(rtt_status, "rtt_status is NULL", err);
+ rtt_status->status = RTT_STOPPED;
+ /* clear evt callback list */
+ if (!list_empty(&rtt_status->noti_fn_list)) {
+ list_for_each_entry_safe(iter, iter2, &rtt_status->noti_fn_list, list) {
+ list_del(&iter->list);
+ kfree(iter);
+ }
+ }
+ /* remove the rtt results */
+ if (!list_empty(&rtt_status->rtt_results_cache)) {
+ list_for_each_entry_safe(rtt_header, next, &rtt_status->rtt_results_cache, list) {
+ list_del(&rtt_header->list);
+ list_for_each_entry_safe(rtt_result, next2,
+ &rtt_header->result_list, list) {
+ list_del(&rtt_result->list);
+ kfree(rtt_result);
+ }
+ kfree(rtt_header);
+ }
+ }
+ kfree(rtt_status->rtt_config.target_info);
+ kfree(dhd->rtt_state);
+ dhd->rtt_state = NULL;
+ return err;
+}
diff --git a/drivers/net/wireless/bcmdhd/dhd_rtt.h b/drivers/net/wireless/bcmdhd/dhd_rtt.h
new file mode 100644
index 0000000..1d548fd
--- /dev/null
+++ b/drivers/net/wireless/bcmdhd/dhd_rtt.h
@@ -0,0 +1,332 @@
+/*
+ * Header file of Broadcom Dongle Host Driver (DHD)
+ * Copyright (C) 1999-2014, Broadcom Corporation
+ *
+ * Unless you and Broadcom execute a separate written software license
+ * agreement governing use of this software, this software is licensed to you
+ * under the terms of the GNU General Public License version 2 (the "GPL"),
+ * available at http://www.broadcom.com/licenses/GPLv2.php, with the
+ * following added to such license:
+ *
+ * As a special exception, the copyright holders of this software give you
+ * permission to link this software with independent modules, and to copy and
+ * distribute the resulting executable under terms of your choice, provided that
+ * you also meet, for each linked independent module, the terms and conditions of
+ * the license of that module. An independent module is a module which is not
+ * derived from this software. The special exception does not apply to any
+ * modifications of the software.
+ *
+ * Notwithstanding the above, under no circumstances may you combine this
+ * software in any way with any other Broadcom software provided under a license
+ * other than the GPL, without Broadcom's express prior written consent.
+ *
+ * $Id: dhd_rtt.h 423669 2014-07-01 13:01:56Z $
+ */
+#ifndef __DHD_RTT_H__
+#define __DHD_RTT_H__
+
+#include "dngl_stats.h"
+
+#define RTT_MAX_TARGET_CNT 50
+#define RTT_MAX_FRAME_CNT 25
+#define RTT_MAX_RETRY_CNT 10
+#define DEFAULT_FTM_CNT 6
+#define DEFAULT_RETRY_CNT 6
+#define TARGET_INFO_SIZE(count) (sizeof(rtt_target_info_t) * (count))
+
+#define TARGET_TYPE(target) (target->type)
+
+#ifndef BIT
+#define BIT(x) (1 << (x))
+#endif
+
+/* DSSS, CCK and 802.11n rates in [500kbps] units */
+#define WL_MAXRATE 108 /* in 500kbps units */
+#define WL_RATE_1M 2 /* in 500kbps units */
+#define WL_RATE_2M 4 /* in 500kbps units */
+#define WL_RATE_5M5 11 /* in 500kbps units */
+#define WL_RATE_11M 22 /* in 500kbps units */
+#define WL_RATE_6M 12 /* in 500kbps units */
+#define WL_RATE_9M 18 /* in 500kbps units */
+#define WL_RATE_12M 24 /* in 500kbps units */
+#define WL_RATE_18M 36 /* in 500kbps units */
+#define WL_RATE_24M 48 /* in 500kbps units */
+#define WL_RATE_36M 72 /* in 500kbps units */
+#define WL_RATE_48M 96 /* in 500kbps units */
+#define WL_RATE_54M 108 /* in 500kbps units */
+
+
+enum rtt_role {
+ RTT_INITIATOR = 0,
+ RTT_TARGET = 1
+};
+enum rtt_status {
+ RTT_STOPPED = 0,
+ RTT_STARTED = 1,
+ RTT_ENABLED = 2
+};
+typedef int64_t wifi_timestamp; /* In microseconds (us) */
+typedef int64_t wifi_timespan;
+typedef int32 wifi_rssi;
+
+typedef enum {
+ RTT_INVALID,
+ RTT_ONE_WAY,
+ RTT_TWO_WAY,
+ RTT_AUTO
+} rtt_type_t;
+
+typedef enum {
+ RTT_PEER_STA,
+ RTT_PEER_AP,
+ RTT_PEER_P2P,
+ RTT_PEER_NAN,
+ RTT_PEER_INVALID
+} rtt_peer_type_t;
+
+typedef enum rtt_reason {
+ RTT_REASON_SUCCESS,
+ RTT_REASON_FAILURE,
+ RTT_REASON_FAIL_NO_RSP,
+ RTT_REASON_FAIL_INVALID_TS, /* Invalid timestamp */
+ RTT_REASON_FAIL_PROTOCOL, /* 11mc protocol failed */
+ RTT_REASON_FAIL_REJECTED,
+ RTT_REASON_FAIL_NOT_SCHEDULED_YET,
+ RTT_REASON_FAIL_SCHEDULE, /* schedule failed */
+ RTT_REASON_FAIL_TM_TIMEOUT,
+ RTT_REASON_FAIL_AP_ON_DIFF_CHANNEL,
+ RTT_REASON_FAIL_NO_CAPABILITY,
+ RTT_REASON_FAIL_BUSY_TRY_LATER,
+ RTT_REASON_ABORTED
+} rtt_reason_t;
+
+enum {
+ RTT_CAP_ONE_WAY = BIT(0),
+ /* IEEE802.11mc */
+ RTT_CAP_FTM_WAY = BIT(1)
+};
+
+enum {
+ RTT_FEATURE_LCI = BIT(0),
+ RTT_FEATURE_LCR = BIT(1),
+ RTT_FEATURE_PREAMBLE = BIT(2),
+ RTT_FEATURE_BW = BIT(3)
+};
+
+enum {
+ RTT_PREAMBLE_LEGACY = BIT(0),
+ RTT_PREAMBLE_HT = BIT(1),
+ RTT_PREAMBLE_VHT = BIT(2)
+};
+
+
+enum {
+ RTT_BW_5 = BIT(0),
+ RTT_BW_10 = BIT(1),
+ RTT_BW_20 = BIT(2),
+ RTT_BW_40 = BIT(3),
+ RTT_BW_80 = BIT(4),
+ RTT_BW_160 = BIT(5)
+};
+#define FTM_MAX_NUM_BURST_EXP 14
+#define HAS_11MC_CAP(cap) (cap & RTT_CAP_FTM_WAY)
+#define HAS_ONEWAY_CAP(cap) (cap & RTT_CAP_ONE_WAY)
+#define HAS_RTT_CAP(cap) (HAS_ONEWAY_CAP(cap) || HAS_11MC_CAP(cap))
+
+typedef struct wifi_channel_info {
+ wifi_channel_width_t width;
+ wifi_channel center_freq; /* primary 20 MHz channel */
+ wifi_channel center_freq0; /* center freq (MHz) first segment */
+ wifi_channel center_freq1; /* center freq (MHz) second segment valid for 80 + 80 */
+} wifi_channel_info_t;
+
+typedef struct wifi_rate {
+ uint32 preamble :3; /* 0: OFDM, 1: CCK, 2 : HT, 3: VHT, 4..7 reserved */
+ uint32 nss :2; /* 1 : 1x1, 2: 2x2, 3: 3x3, 4: 4x4 */
+ uint32 bw :3; /* 0: 20Mhz, 1: 40Mhz, 2: 80Mhz, 3: 160Mhz */
+ /* OFDM/CCK rate code would be as per IEEE std in the unit of 0.5 mb
+ * HT/VHT it would be mcs index
+ */
+ uint32 rateMcsIdx :8;
+ uint32 reserved :16; /* reserved */
+ uint32 bitrate; /* unit of 100 Kbps */
+} wifi_rate_t;
+
+typedef struct rtt_target_info {
+ struct ether_addr addr;
+ rtt_type_t type; /* rtt_type */
+ rtt_peer_type_t peer; /* peer type */
+ wifi_channel_info_t channel; /* channel information */
+ chanspec_t chanspec; /* chanspec for channel */
+ bool disable; /* disable for RTT measurement */
+ /*
+ * Time interval between bursts (units: 100 ms).
+ * Applies to 1-sided and 2-sided RTT multi-burst requests.
+ * Range: 0-31, 0: no preference by initiator (2-sided RTT)
+ */
+ uint32 burst_period;
+ /*
+ * Total number of RTT bursts to be executed. It will be
+ * specified in the same way as the parameter "Number of
+ * Burst Exponent" found in the FTM frame format. It
+ * applies to both: 1-sided RTT and 2-sided RTT. Valid
+ * values are 0 to 15 as defined in 802.11mc std.
+ * 0 means single shot
+ * The implication of this parameter on the maximum
+ * number of RTT results is the following:
+ * for 1-sided RTT: max num of RTT results = (2^num_burst)*(num_frames_per_burst)
+ * for 2-sided RTT: max num of RTT results = (2^num_burst)*(num_frames_per_burst - 1)
+ */
+ uint16 num_burst;
+ /*
+ * num of frames per burst.
+ * Minimum value = 1, Maximum value = 31
+ * For 2-sided this equals the number of FTM frames
+ * to be attempted in a single burst. This also
+ * equals the number of FTM frames that the
+ * initiator will request that the responder send
+ * in a single frame
+ */
+ uint32 num_frames_per_burst;
+ /* num of frames in each RTT burst
+ * for single side, measurement result num = frame number
+ * for 2 side RTT, measurement result num = frame number - 1
+ */
+ uint32 num_retries_per_ftm; /* retry count for RTT measurement frame */
+ /* following fields are only valid for 2 side RTT */
+ uint32 num_retries_per_ftmr;
+ uint8 LCI_request;
+ uint8 LCR_request;
+ /*
+ * Applies to 1-sided and 2-sided RTT. Valid values will
+ * be 2-11 and 15 as specified by the 802.11mc std for
+ * the FTM parameter burst duration. In a multi-burst
+ * request, if responder overrides with larger value,
+ * the initiator will return failure. In a single-burst
+ * request if responder overrides with larger value,
+ * the initiator will send TMR_STOP to terminate RTT
+ * at the end of the burst_duration it requested.
+ */
+ uint32 burst_duration;
+ uint8 preamble; /* 1 - Legacy, 2 - HT, 4 - VHT */
+ uint8 bw; /* 5, 10, 20, 40, 80, 160 */
+} rtt_target_info_t;
+
+
+typedef struct rtt_report {
+ struct ether_addr addr;
+ unsigned int burst_num; /* # of bursts inside a multi-burst request */
+ unsigned int ftm_num; /* total RTT measurement frames attempted */
+ unsigned int success_num; /* total successful RTT measurement frames */
+ uint8 num_per_burst_peer; /* max number of FTMs per burst the peer supports */
+ rtt_reason_t status; /* ranging status */
+ /* in s, 11mc only, only for RTT_REASON_FAIL_BUSY_TRY_LATER, 1- 31s */
+ uint8 retry_after_duration;
+ rtt_type_t type; /* rtt type */
+ wifi_rssi rssi; /* average rssi in 0.5 dB steps e.g. 143 implies -71.5 dB */
+ wifi_rssi rssi_spread; /* rssi spread in 0.5 db steps e.g. 5 implies 2.5 spread */
+ /*
+ * 1-sided RTT: TX rate of RTT frame.
+ * 2-sided RTT: TX rate of initiator's Ack in response to FTM frame.
+ */
+ wifi_rate_t tx_rate;
+ /*
+ * 1-sided RTT: TX rate of Ack from other side.
+ * 2-sided RTT: TX rate of FTM frame coming from responder.
+ */
+ wifi_rate_t rx_rate;
+ wifi_timespan rtt; /* round trip time in 0.1 nanoseconds */
+ wifi_timespan rtt_sd; /* rtt standard deviation in 0.1 nanoseconds */
+ wifi_timespan rtt_spread; /* difference between max and min rtt times recorded */
+ int distance; /* distance in cm (optional) */
+ int distance_sd; /* standard deviation in cm (optional) */
+ int distance_spread; /* difference between max and min distance recorded (optional) */
+ wifi_timestamp ts; /* time of the measurement (in microseconds since boot) */
+ int burst_duration; /* in ms, how long the FW takes to finish one burst measurement */
+ int negotiated_burst_num; /* Number of bursts allowed by the responder */
+ bcm_tlv_t *LCI; /* LCI Report */
+ bcm_tlv_t *LCR; /* Location Civic Report */
+} rtt_report_t;
+#define RTT_REPORT_SIZE (sizeof(rtt_report_t))
+
+/* rtt_results_header to maintain rtt result list per mac address */
+typedef struct rtt_results_header {
+ struct ether_addr peer_mac;
+ uint32 result_cnt;
+ uint32 result_tot_len; /* sum of report_len of rtt_result */
+ struct list_head list;
+ struct list_head result_list;
+} rtt_results_header_t;
+
+/* rtt_result to link all of rtt_report */
+typedef struct rtt_result {
+ struct list_head list;
+ struct rtt_report report;
+ int32 report_len; /* total length of rtt_report */
+} rtt_result_t;
+
+/* RTT Capabilities */
+typedef struct rtt_capabilities {
+ uint8 rtt_one_sided_supported; /* if 1-sided rtt data collection is supported */
+ uint8 rtt_ftm_supported; /* if ftm rtt data collection is supported */
+ uint8 lci_support; /* location configuration information */
+ uint8 lcr_support; /* Civic Location */
+ uint8 preamble_support; /* bit mask indicate what preamble is supported */
+ uint8 bw_support; /* bit mask indicate what BW is supported */
+} rtt_capabilities_t;
+
+typedef struct rtt_config_params {
+ int8 rtt_target_cnt;
+ rtt_target_info_t *target_info;
+} rtt_config_params_t;
+
+typedef void (*dhd_rtt_compl_noti_fn)(void *ctx, void *rtt_data);
+/* Linux wrapper to call common dhd_rtt_set_cfg */
+int
+dhd_dev_rtt_set_cfg(struct net_device *dev, void *buf);
+
+int
+dhd_dev_rtt_cancel_cfg(struct net_device *dev, struct ether_addr *mac_list, int mac_cnt);
+
+int
+dhd_dev_rtt_register_noti_callback(struct net_device *dev, void *ctx,
+ dhd_rtt_compl_noti_fn noti_fn);
+
+int
+dhd_dev_rtt_unregister_noti_callback(struct net_device *dev, dhd_rtt_compl_noti_fn noti_fn);
+
+int
+dhd_dev_rtt_capability(struct net_device *dev, rtt_capabilities_t *capa);
+
+/* export to upper layer */
+chanspec_t
+dhd_rtt_convert_to_chspec(wifi_channel_info_t channel);
+
+int
+dhd_rtt_idx_to_burst_duration(uint idx);
+
+int
+dhd_rtt_set_cfg(dhd_pub_t *dhd, rtt_config_params_t *params);
+
+int
+dhd_rtt_stop(dhd_pub_t *dhd, struct ether_addr *mac_list, int mac_cnt);
+
+
+int
+dhd_rtt_register_noti_callback(dhd_pub_t *dhd, void *ctx, dhd_rtt_compl_noti_fn noti_fn);
+
+int
+dhd_rtt_unregister_noti_callback(dhd_pub_t *dhd, dhd_rtt_compl_noti_fn noti_fn);
+
+int
+dhd_rtt_event_handler(dhd_pub_t *dhd, wl_event_msg_t *event, void *event_data);
+
+int
+dhd_rtt_capability(dhd_pub_t *dhd, rtt_capabilities_t *capa);
+
+int
+dhd_rtt_init(dhd_pub_t *dhd);
+
+int
+dhd_rtt_deinit(dhd_pub_t *dhd);
+#endif /* __DHD_RTT_H__ */
diff --git a/drivers/net/wireless/bcmdhd/dhd_sdio.c b/drivers/net/wireless/bcmdhd/dhd_sdio.c
old mode 100755
new mode 100644
index 2d609c2..65e4784
--- a/drivers/net/wireless/bcmdhd/dhd_sdio.c
+++ b/drivers/net/wireless/bcmdhd/dhd_sdio.c
@@ -2,13 +2,13 @@
* DHD Bus Module for SDIO
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -16,12 +16,12 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: dhd_sdio.c 457966 2014-02-25 10:10:35Z $
+ * $Id: dhd_sdio.c 476991 2014-05-12 06:21:02Z $
*/
#include <typedefs.h>
@@ -42,8 +42,8 @@
#include <hndsoc.h>
#include <bcmsdpcm.h>
#if defined(DHD_DEBUG)
-#include <hndrte_armtrap.h>
-#include <hndrte_cons.h>
+#include <hnd_armtrap.h>
+#include <hnd_cons.h>
#endif /* defined(DHD_DEBUG) */
#include <sbchipc.h>
#include <sbhnddma.h>
@@ -63,16 +63,17 @@
#include <dhd_bus.h>
#include <dhd_proto.h>
#include <dhd_dbg.h>
+#include <dhd_debug.h>
#include <dhdioctl.h>
#include <sdiovar.h>
#ifdef PROP_TXSTATUS
#include <dhd_wlfc.h>
#endif
-
#ifdef DHDTCPACK_SUPPRESS
#include <dhd_ip.h>
-#endif
+#endif /* DHDTCPACK_SUPPRESS */
+#include <proto/bcmevent.h>
bool dhd_mp_halting(dhd_pub_t *dhdp);
extern void bcmsdh_waitfor_iodrain(void *sdh);
@@ -146,9 +147,9 @@
#endif
/* hooks for limiting threshold custom tx num in rx processing */
-#define DEFAULT_TXINRX_THRES 0
-#ifndef CUSTOM_TXINRX_THRES
-#define CUSTOM_TXINRX_THRES DEFAULT_TXINRX_THRES
+#define DEFAULT_TXINRX_THRES 0
+#ifndef CUSTOM_TXINRX_THRES
+#define CUSTOM_TXINRX_THRES DEFAULT_TXINRX_THRES
#endif
/* Value for ChipClockCSR during initial setup */
@@ -173,7 +174,7 @@
typedef struct dhd_console {
uint count; /* Poll interval msec counter */
uint log_addr; /* Log struct address (fixed) */
- hndrte_log_t log; /* Log struct (host copy) */
+ hnd_log_t log; /* Log struct (host copy) */
uint bufsize; /* Size of log buffer */
uint8 *buf; /* Log buffer (host copy) */
uint last; /* Last buffer read index */
@@ -288,6 +289,7 @@
int32 sd_rxchain; /* If bcmsdh api accepts PKT chains */
bool use_rxchain; /* If dhd should use PKT chains */
bool sleeping; /* Is SDIO bus sleeping? */
+ wait_queue_head_t bus_sleep;
uint rxflow_mode; /* Rx flow control mode */
bool rxflow; /* Is rx flow control on */
uint prev_rxlim_hit; /* Is prev rx limit exceeded (per dpc schedule) */
@@ -350,10 +352,11 @@
uint f2rxdata; /* Number of frame data reads */
uint f2txdata; /* Number of f2 frame writes */
uint f1regdata; /* Number of f1 register accesses */
+ wake_counts_t wake_counts;
#ifdef DHDENABLE_TAILPAD
uint tx_tailpad_chain; /* Number of tail padding by chaining pad_pkt */
uint tx_tailpad_pktget; /* Number of tail padding by new PKTGET */
-#endif
+#endif /* DHDENABLE_TAILPAD */
uint8 *ctrl_frame_buf;
uint32 ctrl_frame_len;
bool ctrl_frame_stat;
@@ -376,7 +379,9 @@
uint32 txglom_total_len; /* Total length of pkts in glom array */
bool txglom_enable; /* Flag to indicate whether tx glom is enabled/disabled */
uint32 txglomsize; /* Glom size limitation */
+#ifdef DHDENABLE_TAILPAD
void *pad_pkt;
+#endif /* DHDENABLE_TAILPAD */
} dhd_bus_t;
/* clkstate */
@@ -554,6 +559,8 @@
static int dhd_serialconsole(dhd_bus_t *bus, bool get, bool enable, int *bcmerror);
#endif /* DHD_DEBUG */
+static int dhdsdio_mem_dump(dhd_bus_t *bus);
+
static int dhdsdio_devcap_set(dhd_bus_t *bus, uint8 cap);
static int dhdsdio_download_state(dhd_bus_t *bus, bool enter);
@@ -703,6 +710,14 @@
bool cap = FALSE;
uint32 core_capext, addr, data;
+ if (BCM4349_CHIP(bus->sih->chip)) {
+ /* SR capability is not exercised for now */
+ return cap;
+ }
+ if (bus->sih->chip == BCM43430_CHIP_ID) {
+ /* SR capability is not exercised for now */
+ return cap;
+ }
if (bus->sih->chip == BCM4324_CHIP_ID) {
addr = SI_ENUM_BASE + OFFSETOF(chipcregs_t, chipcontrol_addr);
data = SI_ENUM_BASE + OFFSETOF(chipcregs_t, chipcontrol_data);
@@ -715,6 +730,8 @@
(bus->sih->chip == BCM43349_CHIP_ID) ||
(bus->sih->chip == BCM4345_CHIP_ID) ||
(bus->sih->chip == BCM4354_CHIP_ID) ||
+ (bus->sih->chip == BCM4356_CHIP_ID) ||
+ (bus->sih->chip == BCM4358_CHIP_ID) ||
(bus->sih->chip == BCM4350_CHIP_ID)) {
core_capext = TRUE;
} else {
@@ -732,6 +749,8 @@
(bus->sih->chip == BCM43349_CHIP_ID) ||
(bus->sih->chip == BCM4345_CHIP_ID) ||
(bus->sih->chip == BCM4354_CHIP_ID) ||
+ (bus->sih->chip == BCM4356_CHIP_ID) ||
+ (bus->sih->chip == BCM4358_CHIP_ID) ||
(bus->sih->chip == BCM4350_CHIP_ID)) {
uint32 enabval = 0;
addr = SI_ENUM_BASE + OFFSETOF(chipcregs_t, chipcontrol_addr);
@@ -741,7 +760,9 @@
if ((bus->sih->chip == BCM4350_CHIP_ID) ||
(bus->sih->chip == BCM4345_CHIP_ID) ||
- (bus->sih->chip == BCM4354_CHIP_ID))
+ (bus->sih->chip == BCM4354_CHIP_ID) ||
+ (bus->sih->chip == BCM4356_CHIP_ID) ||
+ (bus->sih->chip == BCM4358_CHIP_ID))
enabval &= CC_CHIPCTRL3_SR_ENG_ENABLE;
if (enabval)
@@ -850,7 +871,7 @@
cmp_val = SBSDIO_FUNC1_SLEEPCSR_KSO_MASK | SBSDIO_FUNC1_SLEEPCSR_DEVON_MASK;
bmask = cmp_val;
- OSL_SLEEP(5);
+ OSL_SLEEP(3);
} else {
/* Put device to sleep, turn off KSO */
cmp_val = 0;
@@ -1461,7 +1482,7 @@
/* Change state */
bus->sleeping = TRUE;
-
+ wake_up(&bus->bus_sleep);
} else {
/* Waking up: bus power up is ok, set local state */
@@ -1534,7 +1555,7 @@
dhdsdio_clkctl(bus, CLK_SDONLY, FALSE);
#endif /* !defined(HW_OOB) */
}
-#endif
+#endif
int
dhd_bus_txdata(struct dhd_bus *bus, void *pkt)
@@ -2019,11 +2040,13 @@
}
+#ifdef DHDENABLE_TAILPAD
+ /* if a padding packet is needed, insert it at the end of the linked list */
if (pad_pkt_len) {
PKTSETLEN(osh, bus->pad_pkt, pad_pkt_len);
PKTSETNEXT(osh, pkt, bus->pad_pkt);
}
+#endif /* DHDENABLE_TAILPAD */
/* dhd_bcmsdh_send_buf ignores the buffer pointer if he packet
* parameter is not NULL, for non packet chian we pass NULL pkt pointer
@@ -2110,6 +2133,13 @@
num_pkt = MIN(num_pkt, pktq_mlen(&bus->txq, tx_prec_map));
for (i = 0; i < num_pkt; i++) {
pkts[i] = pktq_mdeq(&bus->txq, ~bus->flowcontrol, &prec_out);
+ if (!pkts[i]) {
+ DHD_ERROR(("%s: pktq_mlen non-zero when no pkt\n",
+ __FUNCTION__));
+ ASSERT(0);
+ break;
+ }
+ PKTORPHAN(pkts[i]);
datalen += PKTLEN(osh, pkts[i]);
}
dhd_os_sdunlock_txq(bus->dhd);
@@ -2539,7 +2569,9 @@
dhd_bus_dump(dhd_pub_t *dhdp, struct bcmstrbuf *strbuf)
{
dhd_bus_t *bus = dhdp->bus;
-
+#if defined(DHD_WAKE_STATUS) && defined(DHD_WAKE_EVENT_STATUS)
+ int i;
+#endif
bcm_bprintf(strbuf, "Bus SDIO structure:\n");
bcm_bprintf(strbuf, "hostintmask 0x%08x intstatus 0x%08x sdpcm_ver %d\n",
bus->hostintmask, bus->intstatus, bus->sdpcm_ver);
@@ -2548,6 +2580,29 @@
bus->rxlen, bus->rx_seq);
bcm_bprintf(strbuf, "intr %d intrcount %u lastintrs %u spurious %u\n",
bus->intr, bus->intrcount, bus->lastintrs, bus->spurious);
+#ifdef DHD_WAKE_STATUS
+ bcm_bprintf(strbuf, "wake %u rxwake %u readctrlwake %u\n",
+ bcmsdh_get_total_wake(bus->sdh), bus->wake_counts.rxwake,
+ bus->wake_counts.rcwake);
+#ifdef DHD_WAKE_RX_STATUS
+ bcm_bprintf(strbuf, " unicast %u multicast %u broadcast %u arp %u\n",
+ bus->wake_counts.rx_ucast, bus->wake_counts.rx_mcast,
+ bus->wake_counts.rx_bcast, bus->wake_counts.rx_arp);
+ bcm_bprintf(strbuf, " multi4 %u multi6 %u icmp6 %u multiother %u\n",
+ bus->wake_counts.rx_multi_ipv4, bus->wake_counts.rx_multi_ipv6,
+ bus->wake_counts.rx_icmpv6, bus->wake_counts.rx_multi_other);
+ bcm_bprintf(strbuf, " icmp6_ra %u, icmp6_na %u, icmp6_ns %u\n",
+ bus->wake_counts.rx_icmpv6_ra, bus->wake_counts.rx_icmpv6_na,
+ bus->wake_counts.rx_icmpv6_ns);
+#endif
+#ifdef DHD_WAKE_EVENT_STATUS
+ for (i = 0; i < WLC_E_LAST; i++)
+ if (bus->wake_counts.rc_event[i] != 0)
+ bcm_bprintf(strbuf, " %s = %u\n", bcmevent_get_name(i),
+ bus->wake_counts.rc_event[i]);
+ bcm_bprintf(strbuf, "\n");
+#endif
+#endif
bcm_bprintf(strbuf, "pollrate %u pollcnt %u regfails %u\n",
bus->pollrate, bus->pollcnt, bus->regfails);
@@ -2555,7 +2610,7 @@
#ifdef DHDENABLE_TAILPAD
bcm_bprintf(strbuf, "tx_tailpad_chain %u tx_tailpad_pktget %u\n",
bus->tx_tailpad_chain, bus->tx_tailpad_pktget);
-#endif
+#endif /* DHDENABLE_TAILPAD */
bcm_bprintf(strbuf, "tx_sderrs %u fcqueued %u rxrtx %u rx_toolong %u rxc_errors %u\n",
bus->tx_sderrs, bus->fcqueued, bus->rxrtx, bus->rx_toolong,
bus->rxc_errors);
@@ -2631,7 +2686,7 @@
bus->rx_hdrfail = bus->rx_badhdr = bus->rx_badseq = 0;
#ifdef DHDENABLE_TAILPAD
bus->tx_tailpad_chain = bus->tx_tailpad_pktget = 0;
-#endif
+#endif /* DHDENABLE_TAILPAD */
bus->tx_sderrs = bus->fc_rcvd = bus->fc_xoff = bus->fc_xon = 0;
bus->rxglomfail = bus->rxglomframes = bus->rxglompkts = 0;
bus->f2rxhdrs = bus->f2rxdata = bus->f2txdata = bus->f1regdata = 0;
@@ -2851,7 +2906,7 @@
return 0;
/* Read console log struct */
- addr = bus->console_addr + OFFSETOF(hndrte_cons_t, log);
+ addr = bus->console_addr + OFFSETOF(hnd_cons_t, log);
if ((rv = dhdsdio_membytes(bus, FALSE, addr, (uint8 *)&c->log, sizeof(c->log))) < 0)
return rv;
@@ -3022,17 +3077,17 @@
ltoh32(tr.r0), ltoh32(tr.r1), ltoh32(tr.r2), ltoh32(tr.r3),
ltoh32(tr.r4), ltoh32(tr.r5), ltoh32(tr.r6), ltoh32(tr.r7));
- addr = sdpcm_shared.console_addr + OFFSETOF(hndrte_cons_t, log);
+ addr = sdpcm_shared.console_addr + OFFSETOF(hnd_cons_t, log);
if ((rv = dhdsdio_membytes(bus, FALSE, addr,
(uint8 *)&console_ptr, sizeof(console_ptr))) < 0)
goto printbuf;
- addr = sdpcm_shared.console_addr + OFFSETOF(hndrte_cons_t, log.buf_size);
+ addr = sdpcm_shared.console_addr + OFFSETOF(hnd_cons_t, log.buf_size);
if ((rv = dhdsdio_membytes(bus, FALSE, addr,
(uint8 *)&console_size, sizeof(console_size))) < 0)
goto printbuf;
- addr = sdpcm_shared.console_addr + OFFSETOF(hndrte_cons_t, log.idx);
+ addr = sdpcm_shared.console_addr + OFFSETOF(hnd_cons_t, log.idx);
if ((rv = dhdsdio_membytes(bus, FALSE, addr,
(uint8 *)&console_index, sizeof(console_index))) < 0)
goto printbuf;
@@ -3078,7 +3133,11 @@
if (sdpcm_shared.flags & (SDPCM_SHARED_ASSERT | SDPCM_SHARED_TRAP)) {
DHD_ERROR(("%s: %s\n", __FUNCTION__, strbuf.origbuf));
}
-
+ /* save core dump or write to a file */
+ if (bus->dhd->memdump_enabled) {
+ dhdsdio_mem_dump(bus);
+ dhd_dbg_send_urgent_evt(bus->dhd, NULL, 0);
+ }
done:
if (mbuffer)
@@ -3091,7 +3150,58 @@
return bcmerror;
}
#endif /* #ifdef DHD_DEBUG */
+static int
+dhdsdio_mem_dump(dhd_bus_t *bus)
+{
+ int ret = 0;
+ int size; /* Full mem size */
+ uint32 start = bus->dongle_ram_base; /* Start address */
+ int read_size = 0; /* Read size of each iteration */
+ uint8 *databuf = NULL;
+ dhd_pub_t *dhd = bus->dhd;
+ if (!dhd->soc_ram) {
+ DHD_ERROR(("%s : dhd->soc_ram is NULL\n", __FUNCTION__));
+ return -1;
+ }
+ dhd_os_sdlock(bus->dhd);
+ BUS_WAKE(bus);
+ dhdsdio_clkctl(bus, CLK_AVAIL, FALSE);
+
+ size = dhd->soc_ram_length = bus->ramsize;
+ /* Read mem content */
+ DHD_ERROR(("Dump dongle memory\n"));
+ databuf = dhd->soc_ram;
+ while (size)
+ {
+ read_size = MIN(MEMBLOCK, size);
+ if ((ret = dhdsdio_membytes(bus, FALSE, start, databuf, read_size)))
+ {
+ DHD_ERROR(("%s: Error membytes %d\n", __FUNCTION__, ret));
+ break;
+ }
+ /* Decrement size and increment start address */
+ size -= read_size;
+ start += read_size;
+ databuf += read_size;
+ }
+ DHD_ERROR(("Done\n"));
+
+ if ((bus->idletime == DHD_IDLE_IMMEDIATE) && !bus->dpc_sched) {
+ bus->activity = FALSE;
+ dhdsdio_clkctl(bus, CLK_NONE, TRUE);
+ }
+ dhd_os_sdunlock(bus->dhd);
+ if (!ret)
+ dhd_save_fwdump(bus->dhd, dhd->soc_ram, dhd->soc_ram_length);
+ return ret;
+}
+
+int
+dhd_socram_dump(dhd_bus_t *bus)
+{
+ return (dhdsdio_mem_dump(bus));
+}
int
dhdsdio_downloadvars(dhd_bus_t *bus, void *arg, int len)
@@ -3200,7 +3310,7 @@
return (int_val & uart_enab);
}
-#endif
+#endif
static int
dhdsdio_doiovar(dhd_bus_t *bus, const bcm_iovar_t *vi, uint32 actionid, const char *name,
@@ -3259,7 +3369,7 @@
bcmerror = dhdsdio_bussleep(bus, bool_val);
} else {
int_val = (int32)bus->sleeping;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
}
goto exit;
}
@@ -3273,7 +3383,7 @@
switch (actionid) {
case IOV_GVAL(IOV_INTR):
int_val = (int32)bus->intr;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_INTR):
@@ -3292,7 +3402,7 @@
case IOV_GVAL(IOV_POLLRATE):
int_val = (int32)bus->pollrate;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_POLLRATE):
@@ -3302,7 +3412,7 @@
case IOV_GVAL(IOV_IDLETIME):
int_val = bus->idletime;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_IDLETIME):
@@ -3315,7 +3425,7 @@
case IOV_GVAL(IOV_IDLECLOCK):
int_val = (int32)bus->idleclock;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_IDLECLOCK):
@@ -3324,7 +3434,7 @@
case IOV_GVAL(IOV_SD1IDLE):
int_val = (int32)sd1idle;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_SD1IDLE):
@@ -3423,17 +3533,17 @@
case IOV_GVAL(IOV_RAMSIZE):
int_val = (int32)bus->ramsize;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_GVAL(IOV_RAMSTART):
int_val = (int32)bus->dongle_ram_base;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_GVAL(IOV_SDIOD_DRIVE):
int_val = (int32)dhd_sdiod_drive_strength;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_SDIOD_DRIVE):
@@ -3455,7 +3565,7 @@
case IOV_GVAL(IOV_READAHEAD):
int_val = (int32)dhd_readahead;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_READAHEAD):
@@ -3466,7 +3576,7 @@
case IOV_GVAL(IOV_SDRXCHAIN):
int_val = (int32)bus->use_rxchain;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_SDRXCHAIN):
@@ -3477,7 +3587,7 @@
break;
case IOV_GVAL(IOV_ALIGNCTL):
int_val = (int32)dhd_alignctl;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_ALIGNCTL):
@@ -3486,7 +3596,7 @@
case IOV_GVAL(IOV_SDALIGN):
int_val = DHD_SDALIGN;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
#ifdef DHD_DEBUG
@@ -3506,7 +3616,7 @@
sd_ptr = (sdreg_t *)params;
- addr = (ulong)bus->regs + sd_ptr->offset;
+ addr = (uint32)((ulong)bus->regs + sd_ptr->offset);
size = sd_ptr->func;
int_val = (int32)bcmsdh_reg_read(bus->sdh, addr, size);
if (bcmsdh_regfail(bus->sdh))
@@ -3522,7 +3632,7 @@
sd_ptr = (sdreg_t *)params;
- addr = (ulong)bus->regs + sd_ptr->offset;
+ addr = (uint32)((ulong)bus->regs + sd_ptr->offset);
size = sd_ptr->func;
bcmsdh_reg_write(bus->sdh, addr, size, sd_ptr->value);
if (bcmsdh_regfail(bus->sdh))
@@ -3577,7 +3687,7 @@
case IOV_GVAL(IOV_FORCEEVEN):
int_val = (int32)forcealign;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_FORCEEVEN):
@@ -3586,7 +3696,7 @@
case IOV_GVAL(IOV_TXBOUND):
int_val = (int32)dhd_txbound;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_TXBOUND):
@@ -3595,7 +3705,7 @@
case IOV_GVAL(IOV_RXBOUND):
int_val = (int32)dhd_rxbound;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_RXBOUND):
@@ -3604,7 +3714,7 @@
case IOV_GVAL(IOV_TXMINMAX):
int_val = (int32)dhd_txminmax;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_TXMINMAX):
@@ -3616,7 +3726,7 @@
if (bcmerror != 0)
break;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_SERIALCONS):
@@ -3630,7 +3740,7 @@
#ifdef SDTEST
case IOV_GVAL(IOV_EXTLOOP):
int_val = (int32)bus->ext_loop;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_EXTLOOP):
@@ -3649,7 +3759,7 @@
#if defined(USE_SDIOFIFO_IOVAR)
case IOV_GVAL(IOV_WATERMARK):
int_val = (int32)watermark;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_WATERMARK):
@@ -3661,7 +3771,7 @@
case IOV_GVAL(IOV_MESBUSYCTRL):
int_val = (int32)mesbusyctrl;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_MESBUSYCTRL):
@@ -3672,12 +3782,12 @@
bcmsdh_cfg_write(bus->sdh, SDIO_FUNC_1, SBSDIO_FUNC1_MESBUSYCTRL,
((uint8)mesbusyctrl | 0x80), NULL);
break;
-#endif
+#endif
case IOV_GVAL(IOV_DONGLEISOLATION):
int_val = bus->dhd->dongle_isolation;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_DONGLEISOLATION):
@@ -3704,18 +3814,18 @@
/* Get its status */
int_val = (bool) bus->dhd->dongle_reset;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_GVAL(IOV_KSO):
int_val = dhdsdio_sleepcsr_get(bus);
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_GVAL(IOV_DEVCAP):
int_val = dhdsdio_devcap_get(bus);
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_DEVCAP):
@@ -3723,7 +3833,7 @@
break;
case IOV_GVAL(IOV_TXGLOMSIZE):
int_val = (int32)bus->txglomsize;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_TXGLOMSIZE):
@@ -3740,12 +3850,12 @@
case IOV_GVAL(IOV_HANGREPORT):
int_val = (int32)bus->dhd->hang_report;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_GVAL(IOV_TXINRX_THRES):
int_val = bus->txinrx_thres;
- bcopy(&int_val, arg, val_size);
+ bcopy(&int_val, arg, sizeof(int_val));
break;
case IOV_SVAL(IOV_TXINRX_THRES):
if (int_val < 0) {
@@ -4586,7 +4696,7 @@
void **pkt, uint32 *pkt_count);
static uint8
-dhdsdio_rxglom(dhd_bus_t *bus, uint8 rxseq)
+dhdsdio_rxglom(dhd_bus_t *bus, uint8 rxseq, int pkt_wake)
{
uint16 dlen, totlen;
uint8 *dptr, num = 0;
@@ -4999,7 +5109,8 @@
} while (temp);
if (cnt) {
dhd_os_sdunlock(bus->dhd);
- dhd_rx_frame(bus->dhd, idx, list_head[idx], cnt, 0);
+ dhd_rx_frame(bus->dhd, idx, list_head[idx], cnt, 0, pkt_wake, &bus->wake_counts);
+ pkt_wake = 0;
dhd_os_sdlock(bus->dhd);
}
}
@@ -5064,13 +5175,17 @@
uchar reorder_info_buf[WLHOST_REORDERDATA_TOTLEN];
uint reorder_info_len;
uint pkt_count;
-
+ int pkt_wake = 0;
#if defined(DHD_DEBUG) || defined(SDTEST)
bool sdtest = FALSE; /* To limit message spew from test mode */
#endif
DHD_TRACE(("%s: Enter\n", __FUNCTION__));
+#ifdef DHD_WAKE_STATUS
+ pkt_wake = bcmsdh_set_get_wake(bus->sdh, 0);
+#endif
+
bus->readframes = TRUE;
if (!KSO_ENAB(bus)) {
@@ -5131,7 +5246,8 @@
uint8 cnt;
DHD_GLOM(("%s: calling rxglom: glomd %p, glom %p\n",
__FUNCTION__, bus->glomd, bus->glom));
- cnt = dhdsdio_rxglom(bus, rxseq);
+ cnt = dhdsdio_rxglom(bus, rxseq, pkt_wake);
+ pkt_wake = 0;
DHD_GLOM(("%s: rxglom returned %d\n", __FUNCTION__, cnt));
rxseq += cnt - 1;
rxleft = (rxleft > cnt) ? (rxleft - cnt) : 1;
@@ -5659,7 +5775,8 @@
/* Unlock during rx call */
dhd_os_sdunlock(bus->dhd);
- dhd_rx_frame(bus->dhd, ifidx, pkt, pkt_count, chan);
+ dhd_rx_frame(bus->dhd, ifidx, pkt, pkt_count, chan, pkt_wake, &bus->wake_counts);
+ pkt_wake = 0;
dhd_os_sdlock(bus->dhd);
}
rxcount = maxframes - rxleft;
@@ -6489,6 +6606,8 @@
if (dhdp->busstate == DHD_BUS_DOWN)
return FALSE;
+ dhd_os_sdlock(bus->dhd);
+
/* Poll period: check device if appropriate. */
if (!SLPAUTO_ENAB(bus) && (bus->poll && (++bus->polltick >= bus->pollrate))) {
uint32 intstatus = 0;
@@ -6589,6 +6708,8 @@
}
#endif /* DHD_USE_IDLECOUNT */
+ dhd_os_sdunlock(bus->dhd);
+
return bus->ipend;
}
@@ -6620,18 +6741,18 @@
dhdsdio_clkctl(bus, CLK_AVAIL, FALSE);
/* Zero cbuf_index */
- addr = bus->console_addr + OFFSETOF(hndrte_cons_t, cbuf_idx);
+ addr = bus->console_addr + OFFSETOF(hnd_cons_t, cbuf_idx);
val = htol32(0);
if ((rv = dhdsdio_membytes(bus, TRUE, addr, (uint8 *)&val, sizeof(val))) < 0)
goto done;
/* Write message into cbuf */
- addr = bus->console_addr + OFFSETOF(hndrte_cons_t, cbuf);
+ addr = bus->console_addr + OFFSETOF(hnd_cons_t, cbuf);
if ((rv = dhdsdio_membytes(bus, TRUE, addr, (uint8 *)msg, msglen)) < 0)
goto done;
/* Write length into vcons_in */
- addr = bus->console_addr + OFFSETOF(hndrte_cons_t, vcons_in);
+ addr = bus->console_addr + OFFSETOF(hnd_cons_t, vcons_in);
val = htol32(msglen);
if ((rv = dhdsdio_membytes(bus, TRUE, addr, (uint8 *)&val, sizeof(val))) < 0)
goto done;
@@ -6733,6 +6854,14 @@
return TRUE;
if (chipid == BCM4354_CHIP_ID)
return TRUE;
+ if (chipid == BCM4356_CHIP_ID)
+ return TRUE;
+ if (chipid == BCM4358_CHIP_ID)
+ return TRUE;
+ if (chipid == BCM43430_CHIP_ID)
+ return TRUE;
+ if (BCM4349_CHIP(chipid))
+ return TRUE;
return FALSE;
}
@@ -6891,6 +7020,9 @@
}
+
+ init_waitqueue_head(&bus->bus_sleep);
+
return bus;
fail:
@@ -6916,6 +7048,11 @@
DHD_ERROR(("%s: FAILED to return to SI_ENUM_BASE\n", __FUNCTION__));
}
+#if defined(DHD_DEBUG)
+ DHD_ERROR(("F1 signature read @0x18000000=0x%4x\n",
+ bcmsdh_reg_read(bus->sdh, SI_ENUM_BASE, 4)));
+#endif
+
/* Force PLL off until si_attach() programs PLL control regs */
@@ -7040,6 +7177,8 @@
break;
case BCM4350_CHIP_ID:
case BCM4354_CHIP_ID:
+ case BCM4356_CHIP_ID:
+ case BCM4358_CHIP_ID:
bus->dongle_ram_base = CR4_4350_RAM_BASE;
break;
case BCM4360_CHIP_ID:
@@ -7048,6 +7187,9 @@
case BCM4345_CHIP_ID:
bus->dongle_ram_base = CR4_4345_RAM_BASE;
break;
+ case BCM4349_CHIP_GRPID:
+ bus->dongle_ram_base = CR4_4349_RAM_BASE;
+ break;
default:
bus->dongle_ram_base = 0;
DHD_ERROR(("%s: WARNING: Using default ram base at 0x%x\n",
@@ -7064,6 +7206,17 @@
bus->srmemsize = si_socram_srmem_size(bus->sih);
}
+ /* HACK: Fix HW problem with baseband chip in coex */
+ if (((uint16)bus->sih->chip) == BCM4354_CHIP_ID) {
+ bcmsdh_reg_write(bus->sdh, 0x18000c40, 4, 0);
+ if (bcmsdh_regfail(bus->sdh))
+ DHD_ERROR(("DHD: RECONFIG_GPIO3 step 1 failed.\n"));
+
+ bcmsdh_reg_write(bus->sdh, 0x18000e00, 4, 0x00001038);
+ if (bcmsdh_regfail(bus->sdh))
+ DHD_ERROR(("DHD: RECONFIG_GPIO3 step 2 failed.\n"));
+ }
+
/* ...but normally deal with the SDPCMDEV core */
if (!(bus->regs = si_setcore(bus->sih, PCMCIA_CORE_ID, 0)) &&
!(bus->regs = si_setcore(bus->sih, SDIOD_CORE_ID, 0))) {
@@ -7209,6 +7362,7 @@
}
bus->roundup = MIN(max_roundup, bus->blocksize);
+#ifdef DHDENABLE_TAILPAD
if (bus->pad_pkt)
PKTFREE(osh, bus->pad_pkt, FALSE);
bus->pad_pkt = PKTGET(osh, SDIO_MAX_BLOCK_SIZE, FALSE);
@@ -7221,6 +7375,7 @@
PKTPUSH(osh, bus->pad_pkt, alignment_offset);
PKTSETNEXT(osh, bus->pad_pkt, NULL);
}
+#endif /* DHDENABLE_TAILPAD */
/* Query if bus module supports packet chaining, default to use if supported */
if (bcmsdh_iovar_op(sdh, "sd_rxchain", NULL, 0,
@@ -7306,8 +7461,10 @@
MFREE(osh, bus->console.buf, bus->console.bufsize);
#endif
+#ifdef DHDENABLE_TAILPAD
if (bus->pad_pkt)
PKTFREE(osh, bus->pad_pkt, FALSE);
+#endif /* DHDENABLE_TAILPAD */
MFREE(osh, bus, sizeof(dhd_bus_t));
}
@@ -7401,7 +7558,19 @@
int ret = 0;
dhd_bus_t *bus = (dhd_bus_t*)context;
+ int wait_time = 0;
+ if (bus->idletime > 0) {
+ wait_time = msecs_to_jiffies(bus->idletime * dhd_watchdog_ms);
+ }
+
ret = dhd_os_check_wakelock(bus->dhd);
+ if ((!ret) && (bus->dhd->up)) {
+ if (wait_event_timeout(bus->bus_sleep, bus->sleeping, wait_time) == 0) {
+ if (!bus->sleeping) {
+ return 1;
+ }
+ }
+ }
return ret;
}
@@ -7413,7 +7582,7 @@
if (dhd_os_check_if_up(bus->dhd))
bcmsdh_oob_intr_set(bus->sdh, TRUE);
-#endif
+#endif
return 0;
}
@@ -7705,7 +7874,7 @@
_dhdsdio_download_firmware(struct dhd_bus *bus)
{
int bcmerror = -1;
-
+ dhd_pub_t *dhd = bus->dhd;
bool embed = FALSE; /* download embedded firmware */
bool dlok = FALSE; /* download firmware succeeded */
@@ -7773,6 +7942,14 @@
DHD_ERROR(("%s: error getting out of ARM core reset\n", __FUNCTION__));
goto err;
}
+ if (dhd) {
+ if (!dhd->soc_ram) {
+ dhd->soc_ram = MALLOC(dhd->osh, bus->ramsize);
+ dhd->soc_ram_length = bus->ramsize;
+ } else {
+ memset(dhd->soc_ram, 0, dhd->soc_ram_length);
+ }
+ }
bcmerror = 0;
@@ -7805,6 +7982,11 @@
int retries = 0;
bcmsdh_info_t *sdh;
+ if (!KSO_ENAB(bus)) {
+ DHD_ERROR(("%s: Device asleep\n", __FUNCTION__));
+ return BCME_NODEVICE;
+ }
+
sdh = bus->sdh;
do {
ret = bcmsdh_send_buf(bus->sdh, addr, fn, flags, buf, nbytes,
@@ -7874,6 +8056,12 @@
return &bus->txq;
}
+uint
+dhd_bus_hdrlen(struct dhd_bus *bus)
+{
+ return (bus->txglom_enable) ? SDPCM_HDRLEN_TXGLOM : SDPCM_HDRLEN;
+}
+
void
dhd_bus_set_dotxinrx(struct dhd_bus *bus, bool val)
{
@@ -7892,6 +8080,7 @@
if (!bus->dhd->dongle_reset) {
dhd_os_sdlock(dhdp);
dhd_os_wd_timer(dhdp, 0);
+ dhd_dbg_start(dhdp, 0);
#if !defined(IGNORE_ETH0_DOWN)
/* Force flow control as protection when stop come before ifconfig_down */
dhd_txflowcontrol(bus->dhd, ALL_INTERFACES, ON);
@@ -7905,7 +8094,7 @@
dhd_enable_oob_intr(bus, FALSE);
bcmsdh_oob_intr_set(bus->sdh, FALSE);
bcmsdh_oob_intr_unregister(bus->sdh);
-#endif
+#endif
/* Clean tx/rx buffer pointers, detach from the dongle */
dhdsdio_release_dongle(bus, bus->dhd->osh, TRUE, TRUE);
@@ -7946,7 +8135,7 @@
bcmsdh_oob_intr_register(bus->sdh,
dhdsdio_isr, bus);
bcmsdh_oob_intr_set(bus->sdh, TRUE);
-#endif
+#endif
bus->dhd->dongle_reset = FALSE;
bus->dhd->up = TRUE;
@@ -7954,9 +8143,9 @@
#if !defined(IGNORE_ETH0_DOWN)
/* Restore flow control */
dhd_txflowcontrol(bus->dhd, ALL_INTERFACES, OFF);
-#endif
+#endif
dhd_os_wd_timer(dhdp, dhd_watchdog_ms);
-
+ dhd_dbg_start(dhdp, 1);
DHD_TRACE(("%s: WLAN ON DONE\n", __FUNCTION__));
} else {
dhd_bus_stop(bus, FALSE);
diff --git a/drivers/net/wireless/bcmdhd/dhd_wlfc.c b/drivers/net/wireless/bcmdhd/dhd_wlfc.c
old mode 100755
new mode 100644
index 198ff08..cc75cc6
--- a/drivers/net/wireless/bcmdhd/dhd_wlfc.c
+++ b/drivers/net/wireless/bcmdhd/dhd_wlfc.c
@@ -2,13 +2,13 @@
* DHD PROP_TXSTATUS Module.
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -16,12 +16,12 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: dhd_wlfc.c 457888 2014-02-25 03:34:39Z $
+ * $Id: dhd_wlfc.c 490028 2014-07-09 05:58:25Z $
*
*/
@@ -41,9 +41,7 @@
#include <wlfc_proto.h>
#include <dhd_wlfc.h>
#endif
-#ifdef DHDTCPACK_SUPPRESS
#include <dhd_ip.h>
-#endif /* DHDTCPACK_SUPPRESS */
/*
@@ -58,11 +56,7 @@
#ifdef PROP_TXSTATUS
-#ifdef QMONITOR
-#define DHD_WLFC_QMON_COMPLETE(entry) dhd_qmon_txcomplete(&entry->qmon)
-#else
#define DHD_WLFC_QMON_COMPLETE(entry)
-#endif /* QMONITOR */
#define LIMIT_BORROW
@@ -98,7 +92,8 @@
ASSERT(prec >= 0 && prec < pq->num_prec);
- ASSERT(PKTLINK(p) == NULL); /* queueing chains not allowed */
+ /* queueing chains not allowed */
+ ASSERT(!((PKTLINK(p) != NULL) && (PKTLINK(p) != p)));
ASSERT(!pktq_full(pq));
ASSERT(!pktq_pfull(pq, prec));
@@ -265,6 +260,8 @@
if (h->items[slot_id].state == WLFC_HANGER_ITEM_STATE_FREE) {
h->items[slot_id].state = WLFC_HANGER_ITEM_STATE_INUSE;
h->items[slot_id].pkt = pkt;
+ h->items[slot_id].pkt_state = 0;
+ h->items[slot_id].pkt_txstatus = 0;
h->pushed++;
}
else {
@@ -278,7 +275,7 @@
}
static int
-_dhd_wlfc_hanger_poppkt(void* hanger, uint32 slot_id, void** pktout, int remove_from_hanger)
+_dhd_wlfc_hanger_poppkt(void* hanger, uint32 slot_id, void** pktout, bool remove_from_hanger)
{
int rc = BCME_OK;
wlfc_hanger_t* h = (wlfc_hanger_t*)hanger;
@@ -332,25 +329,6 @@
return rc;
}
-/* return true if the slot is only waiting for clean */
-static bool
-_dhd_wlfc_hanger_wait_clean(void* hanger, uint32 hslot)
-{
- wlfc_hanger_t* h = (wlfc_hanger_t*)hanger;
-
- if ((hslot < (uint32) h->max_items) &&
- (h->items[hslot].state == WLFC_HANGER_ITEM_STATE_WAIT_CLEAN)) {
- /* the packet should be already freed by _dhd_wlfc_cleanup */
- h->items[hslot].state = WLFC_HANGER_ITEM_STATE_FREE;
- h->items[hslot].pkt = NULL;
- h->items[hslot].gen = 0xff;
- h->items[hslot].identifier = 0;
- return TRUE;
- }
-
- return FALSE;
-}
-
/* remove reference of specific packet in hanger */
static bool
_dhd_wlfc_hanger_remove_reference(wlfc_hanger_t* h, void* pkt)
@@ -631,14 +609,14 @@
void *pout = NULL;
ASSERT(dhdp && p);
- ASSERT(prec >= 0 && prec <= WLFC_PSQ_PREC_COUNT);
+ ASSERT(prec >= 0 && prec < WLFC_PSQ_PREC_COUNT);
ctx = (athost_wl_status_info_t*)dhdp->wlfc_state;
if (!WLFC_GET_AFQ(dhdp->wlfc_mode) && (prec & 1)) {
/* suppressed queue, need pop from hanger */
_dhd_wlfc_hanger_poppkt(ctx->hanger, WL_TXSTATUS_GET_HSLOT(DHD_PKTTAG_H2DTAG
- (PKTTAG(p))), &pout, 1);
+ (PKTTAG(p))), &pout, TRUE);
ASSERT(p == pout);
}
@@ -672,11 +650,13 @@
ctx->pkt_cnt_per_ac[prec>>1]--;
}
- dhd_txcomplete(dhdp, p, FALSE);
- PKTFREE(ctx->osh, p, TRUE);
+ ctx->pkt_cnt_in_drv[DHD_PKTTAG_IF(PKTTAG(p))][DHD_PKTTAG_FIFO(PKTTAG(p))]--;
ctx->stats.pktout++;
ctx->stats.drop_pkts[prec]++;
+ dhd_txcomplete(dhdp, p, FALSE);
+ PKTFREE(ctx->osh, p, TRUE);
+
return 0;
}
@@ -806,7 +786,7 @@
int prec, ac_traffic = WLFC_NO_TRAFFIC;
for (prec = 0; prec < AC_COUNT; prec++) {
- if (ctx->pkt_cnt_in_q[ifid][prec] > 0) {
+ if (ctx->pkt_cnt_in_drv[ifid][prec] > 0) {
if (ac_traffic == WLFC_NO_TRAFFIC)
ac_traffic = prec + 1;
else if (ac_traffic != (prec + 1))
@@ -1014,7 +994,7 @@
bool send_tim_update = FALSE;
uint32 htod = 0;
uint16 htodseq = 0;
- uint8 free_ctr;
+ uint8 free_ctr, flags = 0;
int gen = 0xff;
dhd_pub_t *dhdp = (dhd_pub_t *)ctx->dhdp;
@@ -1068,22 +1048,25 @@
return BCME_ERROR;
}
- WL_TXSTATUS_SET_FREERUNCTR(htod, free_ctr);
- WL_TXSTATUS_SET_HSLOT(htod, hslot);
- WL_TXSTATUS_SET_FIFO(htod, DHD_PKTTAG_FIFO(PKTTAG(p)));
- WL_TXSTATUS_SET_FLAGS(htod, WLFC_PKTFLAG_PKTFROMHOST);
- WL_TXSTATUS_SET_GENERATION(htod, gen);
- DHD_PKTTAG_SETPKTDIR(PKTTAG(p), 1);
-
+ flags = WLFC_PKTFLAG_PKTFROMHOST;
if (!DHD_PKTTAG_CREDITCHECK(PKTTAG(p))) {
/*
Indicate that this packet is being sent in response to an
explicit request from the firmware side.
*/
- WLFC_PKTFLAG_SET_PKTREQUESTED(htod);
- } else {
- WLFC_PKTFLAG_CLR_PKTREQUESTED(htod);
+ flags |= WLFC_PKTFLAG_PKT_REQUESTED;
}
+ if (pkt_is_dhcp(ctx->osh, p)) {
+ flags |= WLFC_PKTFLAG_PKT_FORCELOWRATE;
+ }
+
+ WL_TXSTATUS_SET_FREERUNCTR(htod, free_ctr);
+ WL_TXSTATUS_SET_HSLOT(htod, hslot);
+ WL_TXSTATUS_SET_FIFO(htod, DHD_PKTTAG_FIFO(PKTTAG(p)));
+ WL_TXSTATUS_SET_FLAGS(htod, flags);
+ WL_TXSTATUS_SET_GENERATION(htod, gen);
+ DHD_PKTTAG_SETPKTDIR(PKTTAG(p), 1);
+
rc = _dhd_wlfc_pushheader(ctx, p, send_tim_update,
entry->traffic_lastreported_bmp, entry->mac_handle, htod, htodseq, FALSE);
@@ -1256,9 +1239,6 @@
return BCME_ERROR;
}
-#ifdef QMONITOR
- dhd_qmon_tx(&entry->qmon);
-#endif
/*
A packet has been pushed, update traffic availability bitmap,
@@ -1319,6 +1299,73 @@
}
static void
+_dhd_wlfc_hanger_free_pkt(athost_wl_status_info_t* wlfc, uint32 slot_id, uint8 pkt_state,
+ int pkt_txstatus)
+{
+ wlfc_hanger_t* hanger;
+ wlfc_hanger_item_t* item;
+
+ if (!wlfc)
+ return;
+
+ hanger = (wlfc_hanger_t*)wlfc->hanger;
+ if (!hanger)
+ return;
+
+ if (slot_id == WLFC_HANGER_MAXITEMS)
+ return;
+
+ item = &hanger->items[slot_id];
+ item->pkt_state |= pkt_state;
+ if (pkt_txstatus != -1) {
+ item->pkt_txstatus = pkt_txstatus;
+ }
+
+ if (item->pkt) {
+ if ((item->pkt_state & WLFC_HANGER_PKT_STATE_TXCOMPLETE) &&
+ (item->pkt_state & (WLFC_HANGER_PKT_STATE_TXSTATUS |
+ WLFC_HANGER_PKT_STATE_CLEANUP))) {
+ void *p = NULL;
+ void *pkt = item->pkt;
+ uint8 old_state = item->state;
+ int ret = _dhd_wlfc_hanger_poppkt(wlfc->hanger, slot_id, &p, TRUE);
+ BCM_REFERENCE(ret);
+ BCM_REFERENCE(pkt);
+ ASSERT((ret == BCME_OK) && p && (pkt == p));
+
+ /* free packet */
+ if (!(item->pkt_state & WLFC_HANGER_PKT_STATE_TXSTATUS)) {
+ /* cleanup case */
+ wlfc_mac_descriptor_t *entry = _dhd_wlfc_find_table_entry(wlfc, p);
+
+ ASSERT(entry);
+ entry->transit_count--;
+ if (entry->suppressed &&
+ (--entry->suppr_transit_count == 0)) {
+ entry->suppressed = FALSE;
+ }
+ _dhd_wlfc_return_implied_credit(wlfc, p);
+ wlfc->stats.cleanup_fw_cnt++;
+ /* slot not freeable yet */
+ item->state = old_state;
+ }
+
+ wlfc->pkt_cnt_in_drv[DHD_PKTTAG_IF(PKTTAG(p))]
+ [DHD_PKTTAG_FIFO(PKTTAG(p))]--;
+ wlfc->stats.pktout++;
+ dhd_txcomplete((dhd_pub_t *)wlfc->dhdp, p, item->pkt_txstatus);
+ PKTFREE(wlfc->osh, p, TRUE);
+ }
+ } else {
+ if (item->pkt_state & WLFC_HANGER_PKT_STATE_TXSTATUS) {
+ /* free slot */
+ ASSERT(item->state != WLFC_HANGER_ITEM_STATE_FREE);
+ item->state = WLFC_HANGER_ITEM_STATE_FREE;
+ }
+ }
+}
+
+static void
_dhd_wlfc_pktq_flush(athost_wl_status_info_t* ctx, struct pktq *pq,
bool dir, f_processpkt_t fn, void *arg, q_type_t q_type)
{
@@ -1383,11 +1430,13 @@
ctx->stats.cleanup_fw_cnt++;
}
PKTSETLINK(p, NULL);
- dhd_txcomplete(dhdp, p, FALSE);
- PKTFREE(ctx->osh, p, dir);
if (dir) {
+ ctx->pkt_cnt_in_drv[DHD_PKTTAG_IF(PKTTAG(p))][prec>>1]--;
ctx->stats.pktout++;
+ dhd_txcomplete(dhdp, p, FALSE);
}
+ PKTFREE(ctx->osh, p, dir);
+
q->len--;
pq->len--;
p = (head ? q->head : PKTLINK(prev));
@@ -1498,11 +1547,11 @@
entry->suppressed = FALSE;
}
_dhd_wlfc_return_implied_credit(wlfc, pkt);
- dhd_txcomplete(dhd, pkt, FALSE);
- PKTFREE(wlfc->osh, pkt, TRUE);
+ wlfc->pkt_cnt_in_drv[DHD_PKTTAG_IF(PKTTAG(pkt))][DHD_PKTTAG_FIFO(PKTTAG(pkt))]--;
wlfc->stats.pktout++;
wlfc->stats.cleanup_txq_cnt++;
-
+ dhd_txcomplete(dhd, pkt, FALSE);
+ PKTFREE(wlfc->osh, pkt, TRUE);
}
}
@@ -1567,18 +1616,8 @@
if ((h->items[i].state == WLFC_HANGER_ITEM_STATE_INUSE) ||
(h->items[i].state == WLFC_HANGER_ITEM_STATE_INUSE_SUPPRESSED)) {
if (fn == NULL || (*fn)(h->items[i].pkt, arg)) {
- table = _dhd_wlfc_find_table_entry(wlfc, h->items[i].pkt);
- table->transit_count--;
- if (table->suppressed &&
- (--table->suppr_transit_count == 0)) {
- table->suppressed = FALSE;
- }
- _dhd_wlfc_return_implied_credit(wlfc, h->items[i].pkt);
- dhd_txcomplete(dhd, h->items[i].pkt, FALSE);
- PKTFREE(wlfc->osh, h->items[i].pkt, TRUE);
- wlfc->stats.pktout++;
- wlfc->stats.cleanup_fw_cnt++;
- h->items[i].state = WLFC_HANGER_ITEM_STATE_WAIT_CLEAN;
+ _dhd_wlfc_hanger_free_pkt(wlfc, i,
+ WLFC_HANGER_PKT_STATE_CLEANUP, FALSE);
}
}
}
@@ -1594,9 +1633,6 @@
{
int rc = BCME_OK;
-#ifdef QMONITOR
- dhd_qmon_reset(&entry->qmon);
-#endif
if ((action == eWLFC_MAC_ENTRY_ACTION_ADD) || (action == eWLFC_MAC_ENTRY_ACTION_UPDATE)) {
entry->occupied = 1;
@@ -1674,9 +1710,10 @@
#ifdef LIMIT_BORROW
static int
-_dhd_wlfc_borrow_credit(athost_wl_status_info_t* ctx, int highest_lender_ac, int borrower_ac)
+_dhd_wlfc_borrow_credit(athost_wl_status_info_t* ctx, int highest_lender_ac, int borrower_ac,
+ bool bBorrowAll)
{
- int lender_ac;
+ int lender_ac, borrow_limit = 0;
int rc = -1;
if (ctx == NULL) {
@@ -1686,7 +1723,13 @@
/* Borrow from lowest priority available AC (including BC/MC credits) */
for (lender_ac = 0; lender_ac <= highest_lender_ac; lender_ac++) {
- if (ctx->FIFO_credit[lender_ac] > 0) {
+ if (!bBorrowAll) {
+ borrow_limit = ctx->Init_FIFO_credit[lender_ac]/WLFC_BORROW_LIMIT_RATIO;
+ } else {
+ borrow_limit = 0;
+ }
+
+ if (ctx->FIFO_credit[lender_ac] > borrow_limit) {
ctx->credits_borrowed[borrower_ac][lender_ac]++;
ctx->FIFO_credit[lender_ac]--;
rc = lender_ac;
@@ -1777,6 +1820,8 @@
{
uint32 hslot;
int rc;
+ dhd_pub_t *dhdp = (dhd_pub_t *)(ctx->dhdp);
+
/*
if ac_fifo_credit_spent = 0
@@ -1795,12 +1840,10 @@
commit_info->needs_hdr, &hslot);
if (rc == BCME_OK) {
- DHD_PKTTAG_WLFCPKT_SET(PKTTAG(commit_info->p), 1);
rc = fcommit(commit_ctx, commit_info->p);
if (rc == BCME_OK) {
uint8 gen = WL_TXSTATUS_GET_GENERATION(
DHD_PKTTAG_H2DTAG(PKTTAG(commit_info->p)));
-
ctx->stats.pkt2bus++;
if (commit_info->ac_fifo_credit_spent || (ac == AC_COUNT)) {
ctx->stats.send_pkts[ac]++;
@@ -1816,13 +1859,11 @@
}
commit_info->mac_entry->transit_count++;
} else if (commit_info->needs_hdr) {
- dhd_pub_t *dhdp = (dhd_pub_t *)(ctx->dhdp);
-
if (!WLFC_GET_AFQ(dhdp->wlfc_mode)) {
void *pout = NULL;
/* pop hanger for delayed packet */
_dhd_wlfc_hanger_poppkt(ctx->hanger, WL_TXSTATUS_GET_HSLOT(
- DHD_PKTTAG_H2DTAG(PKTTAG(commit_info->p))), &pout, 1);
+ DHD_PKTTAG_H2DTAG(PKTTAG(commit_info->p))), &pout, TRUE);
ASSERT(commit_info->p == pout);
}
}
@@ -1923,12 +1964,12 @@
if (WLFC_GET_AFQ(dhd->wlfc_mode)) {
ret = _dhd_wlfc_deque_afq(wlfc, hslot, hcnt, fifo_id, &pktbuf);
} else {
- if (_dhd_wlfc_hanger_wait_clean(wlfc->hanger, hslot)) {
+ ret = _dhd_wlfc_hanger_poppkt(wlfc->hanger, hslot, &pktbuf, FALSE);
+ if (!pktbuf) {
+ _dhd_wlfc_hanger_free_pkt(wlfc, hslot,
+ WLFC_HANGER_PKT_STATE_TXSTATUS, -1);
goto cont;
}
-
- ret = _dhd_wlfc_hanger_poppkt(wlfc->hanger, hslot,
- &pktbuf, remove_from_hanger);
}
if ((ret != BCME_OK) || !pktbuf) {
@@ -2020,12 +2061,20 @@
}
}
} else {
- dhd_txcomplete(dhd, pktbuf, TRUE);
DHD_WLFC_QMON_COMPLETE(entry);
- /* free the packet */
- PKTFREE(wlfc->osh, pktbuf, TRUE);
- wlfc->stats.pktout++;
+
+ if (!WLFC_GET_AFQ(dhd->wlfc_mode)) {
+ _dhd_wlfc_hanger_free_pkt(wlfc, hslot,
+ WLFC_HANGER_PKT_STATE_TXSTATUS, TRUE);
+ } else {
+ dhd_txcomplete(dhd, pktbuf, TRUE);
+ wlfc->pkt_cnt_in_drv[DHD_PKTTAG_IF(PKTTAG(pktbuf))]
+ [DHD_PKTTAG_FIFO(PKTTAG(pktbuf))]--;
+ wlfc->stats.pktout++;
+ /* free the packet */
+ PKTFREE(wlfc->osh, pktbuf, TRUE);
+ }
}
/* pkt back from firmware side */
entry->transit_count--;
@@ -2307,12 +2356,12 @@
table = wlfc->destination_entries.nodes;
desc = &table[WLFC_MAC_DESC_GET_LOOKUP_INDEX(mac_handle)];
if (desc->occupied) {
- /* a fresh PS mode should wipe old ps credits? */
- desc->requested_credit = 0;
if (type == WLFC_CTL_TYPE_MAC_OPEN) {
desc->state = WLFC_STATE_OPEN;
desc->ac_bitmap = 0xff;
DHD_WLFC_CTRINC_MAC_OPEN(desc);
+ desc->requested_credit = 0;
+ desc->requested_packet = 0;
_dhd_wlfc_remove_requested_entry(wlfc, desc);
}
else {
@@ -2466,6 +2515,9 @@
}
/* allocate space to track txstatus propagated from firmware */
+#ifdef WLFC_STATE_PREALLOC
+ if (!dhd->wlfc_state)
+#endif
dhd->wlfc_state = MALLOC(dhd->osh, sizeof(athost_wl_status_info_t));
if (dhd->wlfc_state == NULL) {
rc = BCME_NOMEM;
@@ -2483,14 +2535,23 @@
if (!WLFC_GET_AFQ(dhd->wlfc_mode)) {
wlfc->hanger = _dhd_wlfc_hanger_create(dhd->osh, WLFC_HANGER_MAXITEMS);
if (wlfc->hanger == NULL) {
+#ifndef WLFC_STATE_PREALLOC
MFREE(dhd->osh, dhd->wlfc_state, sizeof(athost_wl_status_info_t));
dhd->wlfc_state = NULL;
+#endif
rc = BCME_NOMEM;
goto exit;
}
}
dhd->proptxstatus_mode = WLFC_FCMODE_EXPLICIT_CREDIT;
+ /* default to check rx pkt */
+ if (dhd->op_mode & DHD_FLAG_IBSS_MODE) {
+ dhd->wlfc_rxpkt_chk = FALSE;
+ } else {
+ dhd->wlfc_rxpkt_chk = TRUE;
+ }
+
/* initialize all interfaces to accept traffic */
for (i = 0; i < WLFC_MAX_IFNUM; i++) {
@@ -2510,6 +2571,67 @@
return rc;
}
+#ifdef SUPPORT_P2P_GO_PS
+int
+dhd_wlfc_suspend(dhd_pub_t *dhd)
+{
+
+ uint32 iovbuf[4]; /* Room for "tlv" + '\0' + parameter */
+ uint32 tlv = 0;
+
+ DHD_TRACE(("%s: masking wlfc events\n", __FUNCTION__));
+ if (!dhd->wlfc_enabled)
+ return -1;
+
+ bcm_mkiovar("tlv", NULL, 0, (char*)iovbuf, sizeof(iovbuf));
+ if (dhd_wl_ioctl_cmd(dhd, WLC_GET_VAR, iovbuf, sizeof(iovbuf), FALSE, 0) < 0) {
+ DHD_ERROR(("%s: failed to get bdcv2 tlv signaling\n", __FUNCTION__));
+ return -1;
+ }
+ tlv = iovbuf[0];
+ if ((tlv & (WLFC_FLAGS_RSSI_SIGNALS | WLFC_FLAGS_XONXOFF_SIGNALS)) == 0)
+ return 0;
+ tlv &= ~(WLFC_FLAGS_RSSI_SIGNALS | WLFC_FLAGS_XONXOFF_SIGNALS);
+ bcm_mkiovar("tlv", (char *)&tlv, 4, (char*)iovbuf, sizeof(iovbuf));
+ if (dhd_wl_ioctl_cmd(dhd, WLC_SET_VAR, iovbuf, sizeof(iovbuf), TRUE, 0) < 0) {
+ DHD_ERROR(("%s: failed to set bdcv2 tlv signaling to 0x%x\n",
+ __FUNCTION__, tlv));
+ return -1;
+ }
+
+ return 0;
+}
+
+ int
+dhd_wlfc_resume(dhd_pub_t *dhd)
+{
+ uint32 iovbuf[4]; /* Room for "tlv" + '\0' + parameter */
+ uint32 tlv = 0;
+
+ DHD_TRACE(("%s: unmasking wlfc events\n", __FUNCTION__));
+ if (!dhd->wlfc_enabled)
+ return -1;
+
+ bcm_mkiovar("tlv", NULL, 0, (char*)iovbuf, sizeof(iovbuf));
+ if (dhd_wl_ioctl_cmd(dhd, WLC_GET_VAR, iovbuf, sizeof(iovbuf), FALSE, 0) < 0) {
+ DHD_ERROR(("%s: failed to get bdcv2 tlv signaling\n", __FUNCTION__));
+ return -1;
+ }
+ tlv = iovbuf[0];
+ if ((tlv & (WLFC_FLAGS_RSSI_SIGNALS | WLFC_FLAGS_XONXOFF_SIGNALS)) ==
+ (WLFC_FLAGS_RSSI_SIGNALS | WLFC_FLAGS_XONXOFF_SIGNALS))
+ return 0;
+ tlv |= (WLFC_FLAGS_RSSI_SIGNALS | WLFC_FLAGS_XONXOFF_SIGNALS);
+ bcm_mkiovar("tlv", (char *)&tlv, 4, (char*)iovbuf, sizeof(iovbuf));
+ if (dhd_wl_ioctl_cmd(dhd, WLC_SET_VAR, (char*)iovbuf, sizeof(iovbuf), TRUE, 0) < 0) {
+ DHD_ERROR(("%s: failed to set bdcv2 tlv signaling to 0x%x\n",
+ __FUNCTION__, tlv));
+ return -1;
+ }
+
+ return 0;
+}
+#endif /* SUPPORT_P2P_GO_PS */
int
dhd_wlfc_parse_header_info(dhd_pub_t *dhd, void* pktbuf, int tlv_hdr_len, uchar *reorder_info_buf,
@@ -2644,13 +2766,12 @@
athost_wl_status_info_t* ctx;
int bus_retry_count = 0;
- uint8 traffic_map = 0; /* packets (send + in queue), Bitmask for 4 ACs + BC/MC */
+ uint8 tx_map = 0; /* packets (send + in queue), Bitmask for 4 ACs + BC/MC */
+ uint8 rx_map = 0; /* received packets, Bitmask for 4 ACs + BC/MC */
uint8 packets_map = 0; /* packets in queue, Bitmask for 4 ACs + BC/MC */
bool no_credit = FALSE;
-#ifdef LIMIT_BORROW
int lender;
-#endif
if ((dhdp == NULL) || (fcommit == NULL)) {
DHD_ERROR(("Error: %s():%d\n", __FUNCTION__, __LINE__));
@@ -2675,7 +2796,7 @@
uint32 htod = 0;
WL_TXSTATUS_SET_FLAGS(htod, WLFC_PKTFLAG_PKTFROMHOST);
_dhd_wlfc_pushheader(ctx, pktbuf, FALSE, 0, 0, htod, 0, FALSE);
- if (!fcommit(commit_ctx, pktbuf))
+ if (fcommit(commit_ctx, pktbuf))
PKTFREE(ctx->osh, pktbuf, TRUE);
rc = BCME_OK;
}
@@ -2698,20 +2819,33 @@
*/
if (pktbuf) {
+ DHD_PKTTAG_WLFCPKT_SET(PKTTAG(pktbuf), 1);
ac = DHD_PKTTAG_FIFO(PKTTAG(pktbuf));
/* en-queue the packets to respective queue. */
rc = _dhd_wlfc_enque_delayq(ctx, pktbuf, ac);
- if (rc)
+ if (rc) {
_dhd_wlfc_prec_drop(ctx->dhdp, (ac << 1), pktbuf, FALSE);
- else
+ } else {
ctx->stats.pktin++;
+ ctx->pkt_cnt_in_drv[DHD_PKTTAG_IF(PKTTAG(pktbuf))][ac]++;
+ }
}
for (ac = AC_COUNT; ac >= 0; ac--) {
+ if (dhdp->wlfc_rxpkt_chk) {
+ /* check rx packet */
+ uint32 curr_t = OSL_SYSUPTIME(), delta;
+
+ delta = curr_t - ctx->rx_timestamp[ac];
+ if (delta < WLFC_RX_DETECTION_THRESHOLD_MS) {
+ rx_map |= (1 << ac);
+ }
+ }
+
if (ctx->pkt_cnt_per_ac[ac] == 0) {
continue;
}
- traffic_map |= (1 << ac);
+ tx_map |= (1 << ac);
single_ac = ac + 1;
while (FALSE == dhdp->proptxstatus_txoff) {
/* packets from delayQ with less priority are fresh and
@@ -2722,6 +2856,17 @@
((ac == AC_COUNT) && !ctx->bcmc_credit_supported)) {
no_credit = FALSE;
}
+
+ lender = -1;
+#ifdef LIMIT_BORROW
+ if (no_credit && (ac < AC_COUNT) && (tx_map >= rx_map)) {
+ /* try borrow from lower priority */
+ lender = _dhd_wlfc_borrow_credit(ctx, ac - 1, ac, FALSE);
+ if (lender != -1) {
+ no_credit = FALSE;
+ }
+ }
+#endif
commit_info.needs_hdr = 1;
commit_info.mac_entry = NULL;
commit_info.p = _dhd_wlfc_deque_delayedq(ctx, ac,
@@ -2733,10 +2878,15 @@
eWLFC_PKTTYPE_SUPPRESSED;
if (commit_info.p == NULL) {
+#ifdef LIMIT_BORROW
+ if (lender != -1) {
+ _dhd_wlfc_return_credit(ctx, lender, ac);
+ }
+#endif
break;
}
- if (!dhdp->proptxstatus_credit_ignore) {
+ if (!dhdp->proptxstatus_credit_ignore && (lender == -1)) {
ASSERT(ctx->FIFO_credit[ac] >= commit_info.ac_fifo_credit_spent);
}
/* here we can ensure have credit or no credit needed */
@@ -2745,9 +2895,20 @@
/* Bus commits may fail (e.g. flow control); abort after retries */
if (rc == BCME_OK) {
- if (commit_info.ac_fifo_credit_spent)
+ if (commit_info.ac_fifo_credit_spent && (lender == -1)) {
ctx->FIFO_credit[ac]--;
+ }
+#ifdef LIMIT_BORROW
+ else if (!commit_info.ac_fifo_credit_spent && (lender != -1)) {
+ _dhd_wlfc_return_credit(ctx, lender, ac);
+ }
+#endif
} else {
+#ifdef LIMIT_BORROW
+ if (lender != -1) {
+ _dhd_wlfc_return_credit(ctx, lender, ac);
+ }
+#endif
bus_retry_count++;
if (bus_retry_count >= BUS_RETRIES) {
DHD_ERROR(("%s: bus error %d\n", __FUNCTION__, rc));
@@ -2761,14 +2922,14 @@
}
}
- if ((traffic_map == 0) || dhdp->proptxstatus_credit_ignore) {
+ if ((tx_map == 0) || dhdp->proptxstatus_credit_ignore) {
/* nothing send out or remain in queue */
rc = BCME_OK;
goto exit;
}
- if ((traffic_map & (traffic_map - 1)) == 0) {
- /* only one ac exist */
+ if (((tx_map & (tx_map - 1)) == 0) && (tx_map >= rx_map)) {
+ /* only one tx ac exist and no higher rx ac */
if ((single_ac == ctx->single_ac) && ctx->allow_credit_borrow) {
ac = single_ac - 1;
} else {
@@ -2784,10 +2945,7 @@
goto exit;
}
/* same ac traffic, check if it lasts enough time */
- if (curr_t > ctx->single_ac_timestamp)
- delta = curr_t - ctx->single_ac_timestamp;
- else
- delta = (~(uint32)0) - ctx->single_ac_timestamp + curr_t;
+ delta = curr_t - ctx->single_ac_timestamp;
if (delta >= WLFC_BORROW_DEFER_PERIOD_MS) {
/* wait enough time, can borrow now */
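The hunk above replaces the explicit wraparound branch with a plain unsigned subtraction. That is correct because C unsigned arithmetic is modulo 2^32, so `now - then` yields the true elapsed interval even when the 32-bit timestamp counter has rolled over (as long as the real interval is under 2^32 ms). A tiny demonstration:

```c
#include <assert.h>
#include <stdint.h>

/* Elapsed ms between two uint32 timestamps; unsigned subtraction wraps
 * modulo 2^32, so the result stays correct across a counter rollover. */
static uint32_t elapsed_ms(uint32_t now, uint32_t then)
{
    return now - then;
}
```

For example, a timestamp taken 16 ms before rollover and read 5 ms after it still yields 21 ms.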
@@ -2816,7 +2974,7 @@
/* At this point, borrow all credits only for ac */
while (FALSE == dhdp->proptxstatus_txoff) {
#ifdef LIMIT_BORROW
- if ((lender = _dhd_wlfc_borrow_credit(ctx, AC_COUNT, ac)) == -1) {
+ if ((lender = _dhd_wlfc_borrow_credit(ctx, AC_COUNT, ac, TRUE)) == -1) {
break;
}
#endif
@@ -2878,7 +3036,7 @@
{
athost_wl_status_info_t* wlfc;
void* pout = NULL;
-
+ int rtn = BCME_OK;
if ((dhd == NULL) || (txp == NULL)) {
DHD_ERROR(("Error: %s():%d\n", __FUNCTION__, __LINE__));
return BCME_BADARG;
@@ -2887,8 +3045,8 @@
dhd_os_wlfc_block(dhd);
if (!dhd->wlfc_state || (dhd->proptxstatus_mode == WLFC_FCMODE_NONE)) {
- dhd_os_wlfc_unblock(dhd);
- return WLFC_UNSUPPORTED;
+ rtn = WLFC_UNSUPPORTED;
+ goto EXIT;
}
wlfc = (athost_wl_status_info_t*)dhd->wlfc_state;
@@ -2899,8 +3057,7 @@
/* is this a signal-only packet? */
_dhd_wlfc_pullheader(wlfc, txp);
PKTFREE(wlfc->osh, txp, TRUE);
- dhd_os_wlfc_unblock(dhd);
- return BCME_OK;
+ goto EXIT;
}
if (!success || dhd->proptxstatus_txstatus_ignore) {
@@ -2910,7 +3067,7 @@
__FUNCTION__, __LINE__, txp, DHD_PKTTAG_H2DTAG(PKTTAG(txp))));
if (!WLFC_GET_AFQ(dhd->wlfc_mode)) {
_dhd_wlfc_hanger_poppkt(wlfc->hanger, WL_TXSTATUS_GET_HSLOT(
- DHD_PKTTAG_H2DTAG(PKTTAG(txp))), &pout, 1);
+ DHD_PKTTAG_H2DTAG(PKTTAG(txp))), &pout, TRUE);
ASSERT(txp == pout);
}
@@ -2924,18 +3081,23 @@
if (entry->suppressed && (--entry->suppr_transit_count == 0)) {
entry->suppressed = FALSE;
}
-
- PKTFREE(wlfc->osh, txp, TRUE);
+ wlfc->pkt_cnt_in_drv[DHD_PKTTAG_IF(PKTTAG(txp))][DHD_PKTTAG_FIFO(PKTTAG(txp))]--;
wlfc->stats.pktout++;
+ PKTFREE(wlfc->osh, txp, TRUE);
} else {
/* bus confirmed pkt went to firmware side */
if (WLFC_GET_AFQ(dhd->wlfc_mode)) {
_dhd_wlfc_enque_afq(wlfc, txp);
+ } else {
+ int hslot = WL_TXSTATUS_GET_HSLOT(DHD_PKTTAG_H2DTAG(PKTTAG(txp)));
+ _dhd_wlfc_hanger_free_pkt(wlfc, hslot,
+ WLFC_HANGER_PKT_STATE_TXCOMPLETE, -1);
}
}
+EXIT:
dhd_os_wlfc_unblock(dhd);
- return BCME_OK;
+ return rtn;
}
int
@@ -3070,66 +3232,6 @@
}
int
-dhd_wlfc_suspend(dhd_pub_t *dhd)
-{
-
- uint32 iovbuf[4]; /* Room for "tlv" + '\0' + parameter */
- uint32 tlv = 0;
-
- DHD_TRACE(("%s: masking wlfc events\n", __FUNCTION__));
- if (!dhd->wlfc_enabled)
- return -1;
-
- bcm_mkiovar("tlv", NULL, 0, (char*)iovbuf, sizeof(iovbuf));
- if (dhd_wl_ioctl_cmd(dhd, WLC_GET_VAR, iovbuf, sizeof(iovbuf), FALSE, 0) < 0) {
- DHD_ERROR(("%s: failed to get bdcv2 tlv signaling\n", __FUNCTION__));
- return -1;
- }
- tlv = iovbuf[0];
- if ((tlv & (WLFC_FLAGS_RSSI_SIGNALS | WLFC_FLAGS_XONXOFF_SIGNALS)) == 0)
- return 0;
- tlv &= ~(WLFC_FLAGS_RSSI_SIGNALS | WLFC_FLAGS_XONXOFF_SIGNALS);
- bcm_mkiovar("tlv", (char *)&tlv, 4, (char*)iovbuf, sizeof(iovbuf));
- if (dhd_wl_ioctl_cmd(dhd, WLC_SET_VAR, iovbuf, sizeof(iovbuf), TRUE, 0) < 0) {
- DHD_ERROR(("%s: failed to set bdcv2 tlv signaling to 0x%x\n",
- __FUNCTION__, tlv));
- return -1;
- }
-
- return 0;
-}
-
- int
-dhd_wlfc_resume(dhd_pub_t *dhd)
-{
- uint32 iovbuf[4]; /* Room for "tlv" + '\0' + parameter */
- uint32 tlv = 0;
-
- DHD_TRACE(("%s: unmasking wlfc events\n", __FUNCTION__));
- if (!dhd->wlfc_enabled)
- return -1;
-
- bcm_mkiovar("tlv", NULL, 0, (char*)iovbuf, sizeof(iovbuf));
- if (dhd_wl_ioctl_cmd(dhd, WLC_GET_VAR, iovbuf, sizeof(iovbuf), FALSE, 0) < 0) {
- DHD_ERROR(("%s: failed to get bdcv2 tlv signaling\n", __FUNCTION__));
- return -1;
- }
- tlv = iovbuf[0];
- if ((tlv & (WLFC_FLAGS_RSSI_SIGNALS | WLFC_FLAGS_XONXOFF_SIGNALS)) ==
- (WLFC_FLAGS_RSSI_SIGNALS | WLFC_FLAGS_XONXOFF_SIGNALS))
- return 0;
- tlv |= (WLFC_FLAGS_RSSI_SIGNALS | WLFC_FLAGS_XONXOFF_SIGNALS);
- bcm_mkiovar("tlv", (char *)&tlv, 4, (char*)iovbuf, sizeof(iovbuf));
- if (dhd_wl_ioctl_cmd(dhd, WLC_SET_VAR, (char*)iovbuf, sizeof(iovbuf), TRUE, 0) < 0) {
- DHD_ERROR(("%s: failed to set bdcv2 tlv signaling to 0x%x\n",
- __FUNCTION__, tlv));
- return -1;
- }
-
- return 0;
-}
-
-int
dhd_wlfc_cleanup_txq(dhd_pub_t *dhd, f_processpkt_t fn, void *arg)
{
if (dhd == NULL) {
@@ -3264,8 +3366,10 @@
/* free top structure */
+#ifndef WLFC_STATE_PREALLOC
MFREE(dhd->osh, dhd->wlfc_state, sizeof(athost_wl_status_info_t));
dhd->wlfc_state = NULL;
+#endif
dhd->proptxstatus_mode = hostreorder ?
WLFC_ONLY_AMPDU_HOSTREORDER : WLFC_FCMODE_NONE;
@@ -3728,7 +3832,6 @@
dhd_os_wlfc_block(dhd);
- /* two locks for write variable, then read can use any one lock */
if (dhd->wlfc_state) {
dhd->proptxstatus_mode = val & 0xff;
}
@@ -3797,6 +3900,38 @@
return BCME_OK;
}
+int dhd_wlfc_save_rxpath_ac_time(dhd_pub_t * dhd, uint8 prio)
+{
+ athost_wl_status_info_t* wlfc;
+ int rx_path_ac = -1;
+
+ if ((dhd == NULL) || (prio >= NUMPRIO)) {
+ DHD_ERROR(("Error: %s():%d\n", __FUNCTION__, __LINE__));
+ return BCME_BADARG;
+ }
+
+ dhd_os_wlfc_block(dhd);
+
+ if (!dhd->wlfc_rxpkt_chk) {
+ dhd_os_wlfc_unblock(dhd);
+ return BCME_OK;
+ }
+
+ if (!dhd->wlfc_state || (dhd->proptxstatus_mode == WLFC_FCMODE_NONE)) {
+ dhd_os_wlfc_unblock(dhd);
+ return WLFC_UNSUPPORTED;
+ }
+
+ wlfc = (athost_wl_status_info_t*)dhd->wlfc_state;
+
+ rx_path_ac = prio2fifo[prio];
+ wlfc->rx_timestamp[rx_path_ac] = OSL_SYSUPTIME();
+
+ dhd_os_wlfc_unblock(dhd);
+
+ return BCME_OK;
+}
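`dhd_wlfc_save_rxpath_ac_time()` above maps the received packet's 802.1D priority to an AC via the driver's `prio2fifo` table before stamping `rx_timestamp`. That table is defined elsewhere in the driver; the sketch below uses the conventional WMM priority-to-AC mapping as an assumption, purely to illustrate the indexing:

```c
#include <assert.h>

/* Illustrative 802.1D priority -> WMM AC map. The driver's actual prio2fifo
 * table is defined elsewhere; this is the standard WMM mapping, assumed here
 * for illustration only. */
enum { AC_BK = 0, AC_BE = 1, AC_VI = 2, AC_VO = 3 };

static const int prio2ac[8] = {
    AC_BE, AC_BK, AC_BK, AC_BE,   /* priorities 0-3 */
    AC_VI, AC_VI, AC_VO, AC_VO    /* priorities 4-7 */
};
```

Under this mapping, priorities 1 and 2 (background) land in AC_BK while 6 and 7 (voice) land in AC_VO.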
+
int dhd_wlfc_get_module_ignore(dhd_pub_t *dhd, int *val)
{
if (!dhd || !val) {
@@ -3827,7 +3962,6 @@
dhd_os_wlfc_block(dhd);
if ((bool)val != dhd->proptxstatus_module_ignore) {
- /* two locks for write variable, then read can use any one lock */
dhd->proptxstatus_module_ignore = (val != 0);
/* force txstatus_ignore sync with proptxstatus_module_ignore */
dhd->proptxstatus_txstatus_ignore = dhd->proptxstatus_module_ignore;
@@ -3884,7 +4018,6 @@
dhd_os_wlfc_block(dhd);
- /* two locks for write variable, then read can use any one lock */
dhd->proptxstatus_credit_ignore = (val != 0);
dhd_os_wlfc_unblock(dhd);
@@ -3917,7 +4050,6 @@
dhd_os_wlfc_block(dhd);
- /* two locks for write variable, then read can use any one lock */
dhd->proptxstatus_txstatus_ignore = (val != 0);
dhd_os_wlfc_unblock(dhd);
@@ -3925,4 +4057,35 @@
return BCME_OK;
}
+int dhd_wlfc_get_rxpkt_chk(dhd_pub_t *dhd, int *val)
+{
+ if (!dhd || !val) {
+ DHD_ERROR(("Error: %s():%d\n", __FUNCTION__, __LINE__));
+ return BCME_BADARG;
+ }
+
+ dhd_os_wlfc_block(dhd);
+
+ *val = dhd->wlfc_rxpkt_chk;
+
+ dhd_os_wlfc_unblock(dhd);
+
+ return BCME_OK;
+}
+
+int dhd_wlfc_set_rxpkt_chk(dhd_pub_t *dhd, int val)
+{
+ if (!dhd) {
+ DHD_ERROR(("Error: %s():%d\n", __FUNCTION__, __LINE__));
+ return BCME_BADARG;
+ }
+
+ dhd_os_wlfc_block(dhd);
+
+ dhd->wlfc_rxpkt_chk = (val != 0);
+
+ dhd_os_wlfc_unblock(dhd);
+
+ return BCME_OK;
+}
#endif /* PROP_TXSTATUS */
diff --git a/drivers/net/wireless/bcmdhd/dhd_wlfc.h b/drivers/net/wireless/bcmdhd/dhd_wlfc.h
old mode 100755
new mode 100644
index 2453a2d..1ac120c
--- a/drivers/net/wireless/bcmdhd/dhd_wlfc.h
+++ b/drivers/net/wireless/bcmdhd/dhd_wlfc.h
@@ -1,12 +1,12 @@
/*
* Copyright (C) 1999-2014, Broadcom Corporation
-*
+*
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
-*
+*
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -14,19 +14,16 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
-*
+*
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
-* $Id: dhd_wlfc.h 453829 2014-02-06 12:28:45Z $
+* $Id: dhd_wlfc.h 490028 2014-07-09 05:58:25Z $
*
*/
#ifndef __wlfc_host_driver_definitions_h__
#define __wlfc_host_driver_definitions_h__
-#ifdef QMONITOR
-#include <dhd_qmon.h>
-#endif
/* #define OOO_DEBUG */
@@ -43,7 +40,10 @@
#define WLFC_HANGER_ITEM_STATE_FREE 1
#define WLFC_HANGER_ITEM_STATE_INUSE 2
#define WLFC_HANGER_ITEM_STATE_INUSE_SUPPRESSED 3
-#define WLFC_HANGER_ITEM_STATE_WAIT_CLEAN 4
+
+#define WLFC_HANGER_PKT_STATE_TXSTATUS 1
+#define WLFC_HANGER_PKT_STATE_TXCOMPLETE 2
+#define WLFC_HANGER_PKT_STATE_CLEANUP 4
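These three flags replace the old `WAIT_CLEAN` hanger state: `_dhd_wlfc_hanger_free_pkt()` ORs events into `pkt_state` and frees the packet only once the bus has reported TX complete AND either a firmware txstatus or a cleanup event has been recorded. A standalone sketch of that predicate, mirroring the condition in the new function:

```c
#include <assert.h>

#define PKT_STATE_TXSTATUS   1   /* firmware txstatus arrived */
#define PKT_STATE_TXCOMPLETE 2   /* bus confirmed TX complete */
#define PKT_STATE_CLEANUP    4   /* cleanup path hit the slot */

/* A hanger slot's packet may be freed only after TXCOMPLETE has been seen
 * together with at least one of TXSTATUS or CLEANUP. */
static int can_free(unsigned state)
{
    return (state & PKT_STATE_TXCOMPLETE) &&
           (state & (PKT_STATE_TXSTATUS | PKT_STATE_CLEANUP));
}
```

Accumulating flags this way lets the txstatus and bus-complete events arrive in either order without double-freeing the packet.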
typedef enum {
Q_TYPE_PSQ,
@@ -67,7 +67,8 @@
typedef struct wlfc_hanger_item {
uint8 state;
uint8 gen;
- uint8 pad[2];
+ uint8 pkt_state;
+ uint8 pkt_txstatus;
uint32 identifier;
void* pkt;
#ifdef PROP_TXSTATUS_DEBUG
@@ -135,9 +136,6 @@
/* flag. TRUE when in suppress state */
uint8 suppressed;
-#ifdef QMONITOR
- dhd_qmon_t qmon;
-#endif /* QMONITOR */
#ifdef PROP_TXSTATUS_DEBUG
uint32 dstncredit_sent_packets;
@@ -234,12 +232,18 @@
#define WLFC_FCMODE_EXPLICIT_CREDIT 2
#define WLFC_ONLY_AMPDU_HOSTREORDER 3
+/* Reserved credits ratio when borrowed by higher priority */
+#define WLFC_BORROW_LIMIT_RATIO 4
+
/* How long to defer borrowing in milliseconds */
#define WLFC_BORROW_DEFER_PERIOD_MS 100
/* How long to defer flow control in milliseconds */
#define WLFC_FC_DEFER_PERIOD_MS 200
+/* How long to detect occurrence per AC in milliseconds */
+#define WLFC_RX_DETECTION_THRESHOLD_MS 100
+
/* Mask to represent available ACs (note: BC/MC is ignored) */
#define WLFC_AC_MASK 0xF
@@ -283,8 +287,10 @@
/* pkt counts for each interface and ac */
int pkt_cnt_in_q[WLFC_MAX_IFNUM][AC_COUNT+1];
int pkt_cnt_per_ac[AC_COUNT+1];
+ int pkt_cnt_in_drv[WLFC_MAX_IFNUM][AC_COUNT+1];
uint8 allow_fc;
uint32 fc_defer_timestamp;
+ uint32 rx_timestamp[AC_COUNT+1];
/* ON/OFF state for flow control to the host network interface */
uint8 hostif_flow_state[WLFC_MAX_IFNUM];
uint8 host_ifidx;
@@ -477,9 +483,11 @@
void* commit_ctx, void *pktbuf, bool need_toggle_host_if);
int dhd_wlfc_txcomplete(dhd_pub_t *dhd, void *txp, bool success);
int dhd_wlfc_init(dhd_pub_t *dhd);
-int dhd_wlfc_hostreorder_init(dhd_pub_t *dhd);
+#ifdef SUPPORT_P2P_GO_PS
int dhd_wlfc_suspend(dhd_pub_t *dhd);
int dhd_wlfc_resume(dhd_pub_t *dhd);
+#endif /* SUPPORT_P2P_GO_PS */
+int dhd_wlfc_hostreorder_init(dhd_pub_t *dhd);
int dhd_wlfc_cleanup_txq(dhd_pub_t *dhd, f_processpkt_t fn, void *arg);
int dhd_wlfc_cleanup(dhd_pub_t *dhd, f_processpkt_t fn, void* arg);
int dhd_wlfc_deinit(dhd_pub_t *dhd);
@@ -495,6 +503,7 @@
bool dhd_wlfc_is_supported(dhd_pub_t *dhd);
bool dhd_wlfc_is_header_only_pkt(dhd_pub_t * dhd, void *pktbuf);
int dhd_wlfc_flowcontrol(dhd_pub_t *dhdp, bool state, bool bAcquireLock);
+int dhd_wlfc_save_rxpath_ac_time(dhd_pub_t * dhd, uint8 prio);
int dhd_wlfc_get_module_ignore(dhd_pub_t *dhd, int *val);
int dhd_wlfc_set_module_ignore(dhd_pub_t *dhd, int val);
@@ -502,4 +511,7 @@
int dhd_wlfc_set_credit_ignore(dhd_pub_t *dhd, int val);
int dhd_wlfc_get_txstatus_ignore(dhd_pub_t *dhd, int *val);
int dhd_wlfc_set_txstatus_ignore(dhd_pub_t *dhd, int val);
+
+int dhd_wlfc_get_rxpkt_chk(dhd_pub_t *dhd, int *val);
+int dhd_wlfc_set_rxpkt_chk(dhd_pub_t *dhd, int val);
#endif /* __wlfc_host_driver_definitions_h__ */
diff --git a/drivers/net/wireless/bcmdhd/dngl_stats.h b/drivers/net/wireless/bcmdhd/dngl_stats.h
old mode 100755
new mode 100644
index ac84522..af598e3
--- a/drivers/net/wireless/bcmdhd/dngl_stats.h
+++ b/drivers/net/wireless/bcmdhd/dngl_stats.h
@@ -22,7 +22,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: dngl_stats.h 241182 2011-02-17 21:50:03Z $
+ * $Id: dngl_stats.h 464743 2014-03-25 21:04:32Z $
*/
#ifndef _dngl_stats_h_
@@ -40,4 +40,192 @@
unsigned long multicast; /* multicast packets received */
} dngl_stats_t;
+typedef int wifi_radio;
+typedef int wifi_channel;
+typedef int wifi_rssi;
+
+typedef enum wifi_channel_width {
+ WIFI_CHAN_WIDTH_20 = 0,
+ WIFI_CHAN_WIDTH_40 = 1,
+ WIFI_CHAN_WIDTH_80 = 2,
+ WIFI_CHAN_WIDTH_160 = 3,
+ WIFI_CHAN_WIDTH_80P80 = 4,
+ WIFI_CHAN_WIDTH_5 = 5,
+ WIFI_CHAN_WIDTH_10 = 6,
+ WIFI_CHAN_WIDTH_INVALID = -1
+} wifi_channel_width_t;
+
+typedef enum {
+ WIFI_DISCONNECTED = 0,
+ WIFI_AUTHENTICATING = 1,
+ WIFI_ASSOCIATING = 2,
+ WIFI_ASSOCIATED = 3,
+ WIFI_EAPOL_STARTED = 4, // if done by firmware/driver
+ WIFI_EAPOL_COMPLETED = 5, // if done by firmware/driver
+} wifi_connection_state;
+
+typedef enum {
+ WIFI_ROAMING_IDLE = 0,
+ WIFI_ROAMING_ACTIVE = 1,
+} wifi_roam_state;
+
+typedef enum {
+ WIFI_INTERFACE_STA = 0,
+ WIFI_INTERFACE_SOFTAP = 1,
+ WIFI_INTERFACE_IBSS = 2,
+ WIFI_INTERFACE_P2P_CLIENT = 3,
+ WIFI_INTERFACE_P2P_GO = 4,
+ WIFI_INTERFACE_NAN = 5,
+ WIFI_INTERFACE_MESH = 6,
+ } wifi_interface_mode;
+
+#define WIFI_CAPABILITY_QOS 0x00000001 // set for QOS association
+#define WIFI_CAPABILITY_PROTECTED 0x00000002 // set for protected association (802.11 beacon frame control protected bit set)
+#define WIFI_CAPABILITY_INTERWORKING 0x00000004 // set if 802.11 Extended Capabilities element interworking bit is set
+#define WIFI_CAPABILITY_HS20 0x00000008 // set for HS20 association
+#define WIFI_CAPABILITY_SSID_UTF8 0x00000010 // set if 802.11 Extended Capabilities element UTF-8 SSID bit is set
+#define WIFI_CAPABILITY_COUNTRY 0x00000020 // set if 802.11 Country Element is present
+
+typedef struct {
+ wifi_interface_mode mode; // interface mode
+ u8 mac_addr[6]; // interface mac address (self)
+ wifi_connection_state state; // connection state (valid for STA, CLI only)
+ wifi_roam_state roaming; // roaming state
+ u32 capabilities; // WIFI_CAPABILITY_XXX (self)
+ u8 ssid[33]; // null terminated SSID
+ u8 bssid[6]; // bssid
+ u8 ap_country_str[3]; // country string advertised by AP
+ u8 country_str[3]; // country string for this association
+} wifi_interface_info;
+
+typedef wifi_interface_info *wifi_interface_handle;
+
+/* channel information */
+typedef struct {
+ wifi_channel_width_t width; // channel width (20, 40, 80, 80+80, 160)
+ wifi_channel center_freq; // primary 20 MHz channel
+ wifi_channel center_freq0; // center frequency (MHz) first segment
+ wifi_channel center_freq1; // center frequency (MHz) second segment
+} wifi_channel_info;
+
+/* wifi rate */
+typedef struct {
+ u32 preamble :3; // 0: OFDM, 1:CCK, 2:HT 3:VHT 4..7 reserved
+ u32 nss :2; // 0:1x1, 1:2x2, 3:3x3, 4:4x4
+ u32 bw :3; // 0:20MHz, 1:40Mhz, 2:80Mhz, 3:160Mhz
+ u32 rateMcsIdx :8; // OFDM/CCK rate code would be as per ieee std in the units of 0.5mbps
+ // HT/VHT it would be mcs index
+ u32 reserved :16; // reserved
+ u32 bitrate; // units of 100 Kbps
+} wifi_rate;
+
+/* channel statistics */
+typedef struct {
+ wifi_channel_info channel; // channel
+ u32 on_time; // msecs the radio is awake (32 bits number accruing over time)
+ u32 cca_busy_time; // msecs the CCA register is busy (32 bits number accruing over time)
+} wifi_channel_stat;
+
+/* radio statistics */
+typedef struct {
+ wifi_radio radio; // wifi radio (if multiple radio supported)
+ u32 on_time; // msecs the radio is awake (32 bits number accruing over time)
+ u32 tx_time; // msecs the radio is transmitting (32 bits number accruing over time)
+ u32 rx_time; // msecs the radio is in active receive (32 bits number accruing over time)
+ u32 on_time_scan; // msecs the radio is awake due to all scan (32 bits number accruing over time)
+ u32 on_time_nbd; // msecs the radio is awake due to NAN (32 bits number accruing over time)
+ u32 on_time_gscan; // msecs the radio is awake due to G-scan (32 bits number accruing over time)
+ u32 on_time_roam_scan; // msecs the radio is awake due to roam scan (32 bits number accruing over time)
+ u32 on_time_pno_scan; // msecs the radio is awake due to PNO scan (32 bits number accruing over time)
+ u32 on_time_hs20; // msecs the radio is awake due to HS2.0 scans and GAS exchange (32 bits number accruing over time)
+ u32 num_channels; // number of channels
+ wifi_channel_stat channels[]; // channel statistics
+} wifi_radio_stat;
+
+/* per rate statistics */
+typedef struct {
+ wifi_rate rate; // rate information
+ u32 tx_mpdu; // number of successfully transmitted data pkts (ACK rcvd)
+ u32 rx_mpdu; // number of received data pkts
+ u32 mpdu_lost; // number of data packet losses (no ACK)
+ u32 retries; // total number of data pkt retries
+ u32 retries_short; // number of short data pkt retries
+ u32 retries_long; // number of long data pkt retries
+} wifi_rate_stat;
+
+/* access categories */
+typedef enum {
+ WIFI_AC_VO = 0,
+ WIFI_AC_VI = 1,
+ WIFI_AC_BE = 2,
+ WIFI_AC_BK = 3,
+ WIFI_AC_MAX = 4,
+} wifi_traffic_ac;
+
+/* wifi peer type */
+typedef enum
+{
+ WIFI_PEER_STA,
+ WIFI_PEER_AP,
+ WIFI_PEER_P2P_GO,
+ WIFI_PEER_P2P_CLIENT,
+ WIFI_PEER_NAN,
+ WIFI_PEER_TDLS,
+ WIFI_PEER_INVALID,
+} wifi_peer_type;
+
+/* per peer statistics */
+typedef struct {
+ wifi_peer_type type; // peer type (AP, TDLS, GO etc.)
+ u8 peer_mac_address[6]; // mac address
+ u32 capabilities; // peer WIFI_CAPABILITY_XXX
+ u32 num_rate; // number of rates
+ wifi_rate_stat rate_stats[]; // per rate statistics, number of entries = num_rate
+} wifi_peer_info;
+
+/* per access category statistics */
+typedef struct {
+ wifi_traffic_ac ac; // access category (VI, VO, BE, BK)
+ u32 tx_mpdu; // number of successfully transmitted unicast data pkts (ACK rcvd)
+ u32 rx_mpdu; // number of received unicast mpdus
+ u32 tx_mcast; // number of successfully transmitted multicast data packets
+ // STA case: implies ACK received from AP for the unicast packet in which mcast pkt was sent
+ u32 rx_mcast; // number of received multicast data packets
+ u32 rx_ampdu; // number of received unicast a-mpdus
+ u32 tx_ampdu; // number of transmitted unicast a-mpdus
+ u32 mpdu_lost; // number of data pkt losses (no ACK)
+ u32 retries; // total number of data pkt retries
+ u32 retries_short; // number of short data pkt retries
+ u32 retries_long; // number of long data pkt retries
+ u32 contention_time_min; // data pkt min contention time (usecs)
+ u32 contention_time_max; // data pkt max contention time (usecs)
+ u32 contention_time_avg; // data pkt avg contention time (usecs)
+ u32 contention_num_samples; // num of data pkts used for contention statistics
+} wifi_wmm_ac_stat;
+
+/* interface statistics */
+typedef struct {
+ wifi_interface_handle iface; // wifi interface
+ wifi_interface_info info; // current state of the interface
+ u32 beacon_rx; // access point beacon received count from connected AP
+ u64 average_tsf_offset; // average beacon offset encountered (beacon_TSF - TBTT)
+ // The average_tsf_offset field is used to calculate the
+ // typical beacon contention time on the channel, and may also be
+ // used to debug beacon synchronization and related power consumption issues
+ u32 leaky_ap_detected; // indicates that this AP typically leaks packets beyond the driver guard time
+ u32 leaky_ap_avg_num_frames_leaked; // average number of frames leaked by the AP after a frame with the PM bit set was ACK'ed by the AP
+ u32 leaky_ap_guard_time; // guard time currently in force (when implementing IEEE power management based on
+ // the frame control PM bit): how long the driver waits, after receiving an ACK
+ // for a data frame with the PM bit set, before shutting down the radio
+ u32 mgmt_rx; // access point mgmt frames received count from connected AP (including Beacon)
+ u32 mgmt_action_rx; // action frames received count
+ u32 mgmt_action_tx; // action frames transmit count
+ wifi_rssi rssi_mgmt; // access Point Beacon and Management frames RSSI (averaged)
+ wifi_rssi rssi_data; // access Point Data Frames RSSI (averaged) from connected AP
+ wifi_rssi rssi_ack; // access Point ACK RSSI (averaged) from connected AP
+ wifi_wmm_ac_stat ac[WIFI_AC_MAX]; // per ac data packet statistics
+ u32 num_peers; // number of peers
+ wifi_peer_info peer_info[]; // per peer statistics
+} wifi_iface_stat;
+
#endif /* _dngl_stats_h_ */
diff --git a/drivers/net/wireless/bcmdhd/dngl_wlhdr.h b/drivers/net/wireless/bcmdhd/dngl_wlhdr.h
old mode 100755
new mode 100644
index 2c1366a..fbd3209
--- a/drivers/net/wireless/bcmdhd/dngl_wlhdr.h
+++ b/drivers/net/wireless/bcmdhd/dngl_wlhdr.h
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: dngl_wlhdr.h 241182 2011-02-17 21:50:03Z $
+ * $Id: dngl_wlhdr.h 464743 2014-03-25 21:04:32Z $
*/
#ifndef _dngl_wlhdr_h_
diff --git a/drivers/net/wireless/bcmdhd/hnd_pktpool.c b/drivers/net/wireless/bcmdhd/hnd_pktpool.c
new file mode 100644
index 0000000..bf48b6d
--- /dev/null
+++ b/drivers/net/wireless/bcmdhd/hnd_pktpool.c
@@ -0,0 +1,751 @@
+/*
+ * HND generic packet pool operation primitives
+ *
+ * Copyright (C) 1999-2014, Broadcom Corporation
+ *
+ * Unless you and Broadcom execute a separate written software license
+ * agreement governing use of this software, this software is licensed to you
+ * under the terms of the GNU General Public License version 2 (the "GPL"),
+ * available at http://www.broadcom.com/licenses/GPLv2.php, with the
+ * following added to such license:
+ *
+ * As a special exception, the copyright holders of this software give you
+ * permission to link this software with independent modules, and to copy and
+ * distribute the resulting executable under terms of your choice, provided that
+ * you also meet, for each linked independent module, the terms and conditions of
+ * the license of that module. An independent module is a module which is not
+ * derived from this software. The special exception does not apply to any
+ * modifications of the software.
+ *
+ * Notwithstanding the above, under no circumstances may you combine this
+ * software in any way with any other Broadcom software provided under a license
+ * other than the GPL, without Broadcom's express prior written consent.
+ *
+ * $Id: $
+ */
+
+#include <typedefs.h>
+#include <osl.h>
+#include <bcmutils.h>
+#include <hnd_pktpool.h>
+
+/* Registry size is one larger than max pools, as slot #0 is reserved */
+#define PKTPOOLREG_RSVD_ID (0U)
+#define PKTPOOLREG_RSVD_PTR (POOLPTR(0xdeaddead))
+#define PKTPOOLREG_FREE_PTR (POOLPTR(NULL))
+
+#define PKTPOOL_REGISTRY_SET(id, pp) (pktpool_registry_set((id), (pp)))
+#define PKTPOOL_REGISTRY_CMP(id, pp) (pktpool_registry_cmp((id), (pp)))
+
+/* Tag a registry entry as free for use */
+#define PKTPOOL_REGISTRY_CLR(id) \
+ PKTPOOL_REGISTRY_SET((id), PKTPOOLREG_FREE_PTR)
+#define PKTPOOL_REGISTRY_ISCLR(id) \
+ (PKTPOOL_REGISTRY_CMP((id), PKTPOOLREG_FREE_PTR))
+
+/* Tag registry entry 0 as reserved */
+#define PKTPOOL_REGISTRY_RSV() \
+ PKTPOOL_REGISTRY_SET(PKTPOOLREG_RSVD_ID, PKTPOOLREG_RSVD_PTR)
+#define PKTPOOL_REGISTRY_ISRSVD() \
+ (PKTPOOL_REGISTRY_CMP(PKTPOOLREG_RSVD_ID, PKTPOOLREG_RSVD_PTR))
+
+/* Walk all un-reserved entries in registry */
+#define PKTPOOL_REGISTRY_FOREACH(id) \
+ for ((id) = 1U; (id) <= pktpools_max; (id)++)
+
+uint32 pktpools_max = 0U; /* maximum number of pools that may be initialized */
+pktpool_t *pktpools_registry[PKTPOOL_MAXIMUM_ID + 1]; /* Pktpool registry */
+
+/* Register/Deregister a pktpool with registry during pktpool_init/deinit */
+static int pktpool_register(pktpool_t * poolptr);
+static int pktpool_deregister(pktpool_t * poolptr);
+
+/** accessor functions required when ROMming this file, forced into RAM */
+static void
+BCMRAMFN(pktpool_registry_set)(int id, pktpool_t *pp)
+{
+ pktpools_registry[id] = pp;
+}
+
+static bool
+BCMRAMFN(pktpool_registry_cmp)(int id, pktpool_t *pp)
+{
+ return pktpools_registry[id] == pp;
+}
+
+int /* Construct a pool registry to serve a maximum of total_pools */
+pktpool_attach(osl_t *osh, uint32 total_pools)
+{
+ uint32 poolid;
+
+ if (pktpools_max != 0U) {
+ return BCME_ERROR;
+ }
+
+ ASSERT(total_pools <= PKTPOOL_MAXIMUM_ID);
+
+ /* Initialize registry: reserve slot#0 and tag others as free */
+ PKTPOOL_REGISTRY_RSV(); /* reserve slot#0 */
+
+ PKTPOOL_REGISTRY_FOREACH(poolid) { /* tag all unreserved entries as free */
+ PKTPOOL_REGISTRY_CLR(poolid);
+ }
+
+ pktpools_max = total_pools;
+
+ return (int)pktpools_max;
+}
+
+int /* Destruct the pool registry. Ascertain all pools were first de-inited */
+pktpool_dettach(osl_t *osh)
+{
+ uint32 poolid;
+
+ if (pktpools_max == 0U) {
+ return BCME_OK;
+ }
+
+ /* Ascertain that no pools are still registered */
+ ASSERT(PKTPOOL_REGISTRY_ISRSVD()); /* assert reserved slot */
+
+ PKTPOOL_REGISTRY_FOREACH(poolid) { /* ascertain all others are free */
+ ASSERT(PKTPOOL_REGISTRY_ISCLR(poolid));
+ }
+
+ pktpools_max = 0U; /* restore boot state */
+
+ return BCME_OK;
+}
+
+static int /* Register a pool in a free slot; return the registry slot index */
+pktpool_register(pktpool_t * poolptr)
+{
+ uint32 poolid;
+
+ if (pktpools_max == 0U) {
+ return PKTPOOL_INVALID_ID; /* registry has not yet been constructed */
+ }
+
+ ASSERT(pktpools_max != 0U);
+
+ /* find an empty slot in pktpools_registry */
+ PKTPOOL_REGISTRY_FOREACH(poolid) {
+ if (PKTPOOL_REGISTRY_ISCLR(poolid)) {
+ PKTPOOL_REGISTRY_SET(poolid, POOLPTR(poolptr)); /* register pool */
+ return (int)poolid; /* return pool ID */
+ }
+ } /* FOREACH */
+
+ return PKTPOOL_INVALID_ID; /* error: registry is full */
+}
+
+static int /* Deregister a pktpool, given the pool pointer; tag slot as free */
+pktpool_deregister(pktpool_t * poolptr)
+{
+ uint32 poolid;
+
+ ASSERT(POOLPTR(poolptr) != POOLPTR(NULL));
+
+ poolid = POOLID(poolptr);
+ ASSERT(poolid <= pktpools_max);
+
+ /* Ascertain that a previously registered poolptr is being de-registered */
+ if (PKTPOOL_REGISTRY_CMP(poolid, POOLPTR(poolptr))) {
+ PKTPOOL_REGISTRY_CLR(poolid); /* mark as free */
+ } else {
+ ASSERT(0);
+ return BCME_ERROR; /* mismatch in registry */
+ }
+
+ return BCME_OK;
+}
+
+
+/*
+ * pktpool_init:
+ * User provides a pktpool_t structure and specifies the number of packets to
+ * be pre-filled into the pool (pplen). The size of all packets in a pool must
+ * be the same and is specified by plen.
+ * pktpool_init first attempts to register the pool and fetch a unique poolid.
+ * If registration fails, BCME_ERROR is returned, caused either by the registry
+ * not having been pre-created (pktpool_attach) or by the registry being full.
+ * If registration succeeds, the requested number of packets will be filled
+ * into the pool as part of initialization. If there is not enough memory
+ * available to service the request, BCME_NOMEM will be returned along with
+ * the count of how many packets were successfully allocated.
+ * In dongle builds, prior to memory reclamation, one should limit the number
+ * of packets allocated during pktpool_init and fill the pool up after the
+ * reclaim stage.
+ */
+int
+pktpool_init(osl_t *osh, pktpool_t *pktp, int *pplen, int plen, bool istx, uint8 type)
+{
+ int i, err = BCME_OK;
+ int pktplen;
+ uint8 pktp_id;
+
+ ASSERT(pktp != NULL);
+ ASSERT(osh != NULL);
+ ASSERT(pplen != NULL);
+
+ pktplen = *pplen;
+
+ bzero(pktp, sizeof(pktpool_t));
+
+ /* assign a unique pktpool id */
+ if ((pktp_id = (uint8) pktpool_register(pktp)) == PKTPOOL_INVALID_ID) {
+ return BCME_ERROR;
+ }
+ POOLSETID(pktp, pktp_id);
+
+ pktp->inited = TRUE;
+ pktp->istx = istx ? TRUE : FALSE;
+ pktp->plen = (uint16)plen;
+ pktp->type = type;
+
+ pktp->maxlen = PKTPOOL_LEN_MAX;
+ pktplen = LIMIT_TO_MAX(pktplen, pktp->maxlen);
+
+ for (i = 0; i < pktplen; i++) {
+ void *p;
+ p = PKTGET(osh, plen, TRUE);
+
+ if (p == NULL) {
+ /* Not able to allocate all requested pkts
+ * so just return what was actually allocated
+ * We can add to the pool later
+ */
+ if (pktp->freelist == NULL) /* pktpool free list is empty */
+ err = BCME_NOMEM;
+
+ goto exit;
+ }
+
+ PKTSETPOOL(osh, p, TRUE, pktp); /* Tag packet with pool ID */
+
+ PKTSETFREELIST(p, pktp->freelist); /* insert p at head of free list */
+ pktp->freelist = p;
+
+ pktp->avail++;
+
+#ifdef BCMDBG_POOL
+ pktp->dbg_q[pktp->dbg_qlen++].p = p;
+#endif
+ }
+
+exit:
+ pktp->len = pktp->avail;
+
+ *pplen = pktp->len;
+ return err;
+}
+
+/*
+ * pktpool_deinit:
+ * Prior to freeing a pktpool, all packets must first be freed back into the
+ * pktpool. Upon pktpool_deinit, all packets in the free pool will be freed to
+ * the heap. An assert is in place to ensure that there are no packets still
+ * lingering around. Packets freed to a pool after deinit will cause memory
+ * corruption, as the pktpool_t structure no longer exists.
+ */
+int
+pktpool_deinit(osl_t *osh, pktpool_t *pktp)
+{
+ uint16 freed = 0;
+
+ ASSERT(osh != NULL);
+ ASSERT(pktp != NULL);
+
+#ifdef BCMDBG_POOL
+ {
+ int i;
+ for (i = 0; i <= pktp->len; i++) {
+ pktp->dbg_q[i].p = NULL;
+ }
+ }
+#endif
+
+ while (pktp->freelist != NULL) {
+ void * p = pktp->freelist;
+
+ pktp->freelist = PKTFREELIST(p); /* unlink head packet from free list */
+ PKTSETFREELIST(p, NULL);
+
+ PKTSETPOOL(osh, p, FALSE, NULL); /* clear pool ID tag in pkt */
+
+ PKTFREE(osh, p, pktp->istx); /* free the packet */
+
+ freed++;
+ ASSERT(freed <= pktp->len);
+ }
+
+ pktp->avail -= freed;
+ ASSERT(pktp->avail == 0);
+
+ pktp->len -= freed;
+
+ pktpool_deregister(pktp); /* release previously acquired unique pool id */
+ POOLSETID(pktp, PKTPOOL_INVALID_ID);
+
+ pktp->inited = FALSE;
+
+ /* Are there still pending pkts? */
+ ASSERT(pktp->len == 0);
+
+ return 0;
+}
+
+int
+pktpool_fill(osl_t *osh, pktpool_t *pktp, bool minimal)
+{
+ void *p;
+ int err = 0;
+ int len, psize, maxlen;
+
+ ASSERT(pktp->plen != 0);
+
+ maxlen = pktp->maxlen;
+ psize = minimal ? (maxlen >> 2) : maxlen;
+ for (len = (int)pktp->len; len < psize; len++) {
+
+ p = PKTGET(osh, pktp->plen, TRUE); /* allocate a packet of the pool's fixed size */
+
+ if (p == NULL) {
+ err = BCME_NOMEM;
+ break;
+ }
+
+ if (pktpool_add(pktp, p) != BCME_OK) {
+ PKTFREE(osh, p, FALSE);
+ err = BCME_ERROR;
+ break;
+ }
+ }
+
+ return err;
+}
+
+static void *
+pktpool_deq(pktpool_t *pktp)
+{
+ void *p;
+
+ if (pktp->avail == 0)
+ return NULL;
+
+ ASSERT(pktp->freelist != NULL);
+
+ p = pktp->freelist; /* dequeue packet from head of pktpool free list */
+ pktp->freelist = PKTFREELIST(p); /* free list points to next packet */
+ PKTSETFREELIST(p, NULL);
+
+ pktp->avail--;
+
+ return p;
+}
+
+static void
+pktpool_enq(pktpool_t *pktp, void *p)
+{
+ ASSERT(p != NULL);
+
+ PKTSETFREELIST(p, pktp->freelist); /* insert at head of pktpool free list */
+ pktp->freelist = p; /* free list points to newly inserted packet */
+
+ pktp->avail++;
+ ASSERT(pktp->avail <= pktp->len);
+}
+
+/* utility for registering host addr fill function called from pciedev */
+int
+/* BCMATTACHFN */
+(pktpool_hostaddr_fill_register)(pktpool_t *pktp, pktpool_cb_extn_t cb, void *arg)
+{
+
+ ASSERT(cb != NULL);
+
+ ASSERT(pktp->cbext.cb == NULL);
+ pktp->cbext.cb = cb;
+ pktp->cbext.arg = arg;
+ return 0;
+}
+
+int
+pktpool_rxcplid_fill_register(pktpool_t *pktp, pktpool_cb_extn_t cb, void *arg)
+{
+
+ ASSERT(cb != NULL);
+
+ ASSERT(pktp->rxcplidfn.cb == NULL);
+ pktp->rxcplidfn.cb = cb;
+ pktp->rxcplidfn.arg = arg;
+ return 0;
+}
+/* Callback functions for split rx modes */
+ /* whenever the host posts an rx buffer, invoke dma_rxfill from the pciedev layer */
+void
+pktpool_invoke_dmarxfill(pktpool_t *pktp)
+{
+ ASSERT(pktp->dmarxfill.cb);
+ ASSERT(pktp->dmarxfill.arg);
+
+ if (pktp->dmarxfill.cb)
+ pktp->dmarxfill.cb(pktp, pktp->dmarxfill.arg);
+}
+int
+pkpool_haddr_avail_register_cb(pktpool_t *pktp, pktpool_cb_t cb, void *arg)
+{
+
+ ASSERT(cb != NULL);
+
+ pktp->dmarxfill.cb = cb;
+ pktp->dmarxfill.arg = arg;
+
+ return 0;
+}
+/* No BCMATTACHFN as it is used in xdc_enable_ep which is not an attach function */
+int
+pktpool_avail_register(pktpool_t *pktp, pktpool_cb_t cb, void *arg)
+{
+ int i;
+
+ ASSERT(cb != NULL);
+
+ i = pktp->cbcnt;
+ if (i == PKTPOOL_CB_MAX)
+ return BCME_ERROR;
+
+ ASSERT(pktp->cbs[i].cb == NULL);
+ pktp->cbs[i].cb = cb;
+ pktp->cbs[i].arg = arg;
+ pktp->cbcnt++;
+
+ return 0;
+}
+
+int
+pktpool_empty_register(pktpool_t *pktp, pktpool_cb_t cb, void *arg)
+{
+ int i;
+
+ ASSERT(cb != NULL);
+
+ i = pktp->ecbcnt;
+ if (i == PKTPOOL_CB_MAX)
+ return BCME_ERROR;
+
+ ASSERT(pktp->ecbs[i].cb == NULL);
+ pktp->ecbs[i].cb = cb;
+ pktp->ecbs[i].arg = arg;
+ pktp->ecbcnt++;
+
+ return 0;
+}
+
+static int
+pktpool_empty_notify(pktpool_t *pktp)
+{
+ int i;
+
+ pktp->empty = TRUE;
+ for (i = 0; i < pktp->ecbcnt; i++) {
+ ASSERT(pktp->ecbs[i].cb != NULL);
+ pktp->ecbs[i].cb(pktp, pktp->ecbs[i].arg);
+ }
+ pktp->empty = FALSE;
+
+ return 0;
+}
+
+#ifdef BCMDBG_POOL
+int
+pktpool_dbg_register(pktpool_t *pktp, pktpool_cb_t cb, void *arg)
+{
+ int i;
+
+ ASSERT(cb);
+
+ i = pktp->dbg_cbcnt;
+ if (i == PKTPOOL_CB_MAX)
+ return BCME_ERROR;
+
+ ASSERT(pktp->dbg_cbs[i].cb == NULL);
+ pktp->dbg_cbs[i].cb = cb;
+ pktp->dbg_cbs[i].arg = arg;
+ pktp->dbg_cbcnt++;
+
+ return 0;
+}
+
+int pktpool_dbg_notify(pktpool_t *pktp);
+
+int
+pktpool_dbg_notify(pktpool_t *pktp)
+{
+ int i;
+
+ for (i = 0; i < pktp->dbg_cbcnt; i++) {
+ ASSERT(pktp->dbg_cbs[i].cb);
+ pktp->dbg_cbs[i].cb(pktp, pktp->dbg_cbs[i].arg);
+ }
+
+ return 0;
+}
+
+int
+pktpool_dbg_dump(pktpool_t *pktp)
+{
+ int i;
+
+ printf("pool len=%d maxlen=%d\n", pktp->dbg_qlen, pktp->maxlen);
+ for (i = 0; i < pktp->dbg_qlen; i++) {
+ ASSERT(pktp->dbg_q[i].p);
+ printf("%d, p: %p dur:%lu us state:%d\n", i,
+ pktp->dbg_q[i].p, pktp->dbg_q[i].dur/100, PKTPOOLSTATE(pktp->dbg_q[i].p));
+ }
+
+ return 0;
+}
+
+int
+pktpool_stats_dump(pktpool_t *pktp, pktpool_stats_t *stats)
+{
+ int i;
+ int state;
+
+ bzero(stats, sizeof(pktpool_stats_t));
+ for (i = 0; i < pktp->dbg_qlen; i++) {
+ ASSERT(pktp->dbg_q[i].p != NULL);
+
+ state = PKTPOOLSTATE(pktp->dbg_q[i].p);
+ switch (state) {
+ case POOL_TXENQ:
+ stats->enq++; break;
+ case POOL_TXDH:
+ stats->txdh++; break;
+ case POOL_TXD11:
+ stats->txd11++; break;
+ case POOL_RXDH:
+ stats->rxdh++; break;
+ case POOL_RXD11:
+ stats->rxd11++; break;
+ case POOL_RXFILL:
+ stats->rxfill++; break;
+ case POOL_IDLE:
+ stats->idle++; break;
+ }
+ }
+
+ return 0;
+}
+
+int
+pktpool_start_trigger(pktpool_t *pktp, void *p)
+{
+ uint32 cycles, i;
+
+ if (!PKTPOOL(OSH_NULL, p))
+ return 0;
+
+ OSL_GETCYCLES(cycles);
+
+ for (i = 0; i < pktp->dbg_qlen; i++) {
+ ASSERT(pktp->dbg_q[i].p != NULL);
+
+ if (pktp->dbg_q[i].p == p) {
+ pktp->dbg_q[i].cycles = cycles;
+ break;
+ }
+ }
+
+ return 0;
+}
+
+int pktpool_stop_trigger(pktpool_t *pktp, void *p);
+int
+pktpool_stop_trigger(pktpool_t *pktp, void *p)
+{
+ uint32 cycles, i;
+
+ if (!PKTPOOL(OSH_NULL, p))
+ return 0;
+
+ OSL_GETCYCLES(cycles);
+
+ for (i = 0; i < pktp->dbg_qlen; i++) {
+ ASSERT(pktp->dbg_q[i].p != NULL);
+
+ if (pktp->dbg_q[i].p == p) {
+ if (pktp->dbg_q[i].cycles == 0)
+ break;
+
+ if (cycles >= pktp->dbg_q[i].cycles)
+ pktp->dbg_q[i].dur = cycles - pktp->dbg_q[i].cycles;
+ else
+ pktp->dbg_q[i].dur =
+ (((uint32)-1) - pktp->dbg_q[i].cycles) + cycles + 1;
+
+ pktp->dbg_q[i].cycles = 0;
+ break;
+ }
+ }
+
+ return 0;
+}
+#endif /* BCMDBG_POOL */
+
+int
+pktpool_avail_notify_normal(osl_t *osh, pktpool_t *pktp)
+{
+ ASSERT(pktp);
+ pktp->availcb_excl = NULL;
+ return 0;
+}
+
+int
+pktpool_avail_notify_exclusive(osl_t *osh, pktpool_t *pktp, pktpool_cb_t cb)
+{
+ int i;
+
+ ASSERT(pktp);
+ ASSERT(pktp->availcb_excl == NULL);
+ for (i = 0; i < pktp->cbcnt; i++) {
+ if (cb == pktp->cbs[i].cb) {
+ pktp->availcb_excl = &pktp->cbs[i];
+ break;
+ }
+ }
+
+ if (pktp->availcb_excl == NULL)
+ return BCME_ERROR;
+ else
+ return 0;
+}
+
+static int
+pktpool_avail_notify(pktpool_t *pktp)
+{
+ int i, k, idx;
+ int avail;
+
+ ASSERT(pktp);
+ if (pktp->availcb_excl != NULL) {
+ pktp->availcb_excl->cb(pktp, pktp->availcb_excl->arg);
+ return 0;
+ }
+
+ k = pktp->cbcnt - 1;
+ for (i = 0; i < pktp->cbcnt; i++) {
+ avail = pktp->avail;
+
+ if (avail) {
+ if (pktp->cbtoggle)
+ idx = i;
+ else
+ idx = k--;
+
+ ASSERT(pktp->cbs[idx].cb != NULL);
+ pktp->cbs[idx].cb(pktp, pktp->cbs[idx].arg);
+ }
+ }
+
+ /* Alternate between filling from head or tail
+ */
+ pktp->cbtoggle ^= 1;
+
+ return 0;
+}
+
+void *
+pktpool_get(pktpool_t *pktp)
+{
+ void *p;
+
+ p = pktpool_deq(pktp);
+
+ if (p == NULL) {
+ /* Notify and try to reclaim tx pkts */
+ if (pktp->ecbcnt)
+ pktpool_empty_notify(pktp);
+
+ p = pktpool_deq(pktp);
+ if (p == NULL)
+ return NULL;
+ }
+
+ return p;
+}
+
+void
+pktpool_free(pktpool_t *pktp, void *p)
+{
+ ASSERT(p != NULL);
+#ifdef BCMDBG_POOL
+ /* pktpool_stop_trigger(pktp, p); */
+#endif
+
+ pktpool_enq(pktp, p);
+
+ if (pktp->emptycb_disable)
+ return;
+
+ if (pktp->cbcnt) {
+ if (pktp->empty == FALSE)
+ pktpool_avail_notify(pktp);
+ }
+}
+
+int
+pktpool_add(pktpool_t *pktp, void *p)
+{
+ ASSERT(p != NULL);
+
+ if (pktp->len == pktp->maxlen)
+ return BCME_RANGE;
+
+ /* pkts in pool have same length */
+ ASSERT(pktp->plen == PKTLEN(OSH_NULL, p));
+ PKTSETPOOL(OSH_NULL, p, TRUE, pktp);
+
+ pktp->len++;
+ pktpool_enq(pktp, p);
+
+#ifdef BCMDBG_POOL
+ pktp->dbg_q[pktp->dbg_qlen++].p = p;
+#endif
+
+ return 0;
+}
+
+/* Force pktpool_setmaxlen () into RAM as it uses a constant
+ * (PKTPOOL_LEN_MAX) that may be changed post tapeout for ROM-based chips.
+ */
+int
+BCMRAMFN(pktpool_setmaxlen)(pktpool_t *pktp, uint16 maxlen)
+{
+ if (maxlen > PKTPOOL_LEN_MAX)
+ maxlen = PKTPOOL_LEN_MAX;
+
+ /* if pool is already beyond maxlen, then just cap it
+ * since we currently do not reduce the pool len
+ * already allocated
+ */
+ pktp->maxlen = (pktp->len > maxlen) ? pktp->len : maxlen;
+
+ return pktp->maxlen;
+}
+
+void
+pktpool_emptycb_disable(pktpool_t *pktp, bool disable)
+{
+ ASSERT(pktp);
+
+ pktp->emptycb_disable = disable;
+}
+
+bool
+pktpool_emptycb_disabled(pktpool_t *pktp)
+{
+ ASSERT(pktp);
+ return pktp->emptycb_disable;
+}
diff --git a/drivers/net/wireless/bcmdhd/hnd_pktq.c b/drivers/net/wireless/bcmdhd/hnd_pktq.c
new file mode 100644
index 0000000..c478fab
--- /dev/null
+++ b/drivers/net/wireless/bcmdhd/hnd_pktq.c
@@ -0,0 +1,603 @@
+/*
+ * HND generic pktq operation primitives
+ *
+ * Copyright (C) 1999-2014, Broadcom Corporation
+ *
+ * Unless you and Broadcom execute a separate written software license
+ * agreement governing use of this software, this software is licensed to you
+ * under the terms of the GNU General Public License version 2 (the "GPL"),
+ * available at http://www.broadcom.com/licenses/GPLv2.php, with the
+ * following added to such license:
+ *
+ * As a special exception, the copyright holders of this software give you
+ * permission to link this software with independent modules, and to copy and
+ * distribute the resulting executable under terms of your choice, provided that
+ * you also meet, for each linked independent module, the terms and conditions of
+ * the license of that module. An independent module is a module which is not
+ * derived from this software. The special exception does not apply to any
+ * modifications of the software.
+ *
+ * Notwithstanding the above, under no circumstances may you combine this
+ * software in any way with any other Broadcom software provided under a license
+ * other than the GPL, without Broadcom's express prior written consent.
+ *
+ * $Id: $
+ */
+
+#include <typedefs.h>
+#include <osl.h>
+#include <bcmutils.h>
+#include <hnd_pktq.h>
+
+/*
+ * osl multiple-precedence packet queue
+ * hi_prec is always >= the number of the highest non-empty precedence
+ */
+void * BCMFASTPATH
+pktq_penq(struct pktq *pq, int prec, void *p)
+{
+ struct pktq_prec *q;
+
+ ASSERT(prec >= 0 && prec < pq->num_prec);
+ /* queueing chains not allowed */
+ ASSERT(!((PKTLINK(p) != NULL) && (PKTLINK(p) != p)));
+ ASSERT(!pktq_full(pq));
+ ASSERT(!pktq_pfull(pq, prec));
+ PKTSETLINK(p, NULL);
+
+ q = &pq->q[prec];
+
+ if (q->head)
+ PKTSETLINK(q->tail, p);
+ else
+ q->head = p;
+
+ q->tail = p;
+ q->len++;
+
+ pq->len++;
+
+ if (pq->hi_prec < prec)
+ pq->hi_prec = (uint8)prec;
+
+ return p;
+}
+
+void * BCMFASTPATH
+pktq_penq_head(struct pktq *pq, int prec, void *p)
+{
+ struct pktq_prec *q;
+
+ ASSERT(prec >= 0 && prec < pq->num_prec);
+ /* queueing chains not allowed */
+ ASSERT(!((PKTLINK(p) != NULL) && (PKTLINK(p) != p)));
+ ASSERT(!pktq_full(pq));
+ ASSERT(!pktq_pfull(pq, prec));
+ PKTSETLINK(p, NULL);
+ q = &pq->q[prec];
+
+ if (q->head == NULL)
+ q->tail = p;
+
+ PKTSETLINK(p, q->head);
+ q->head = p;
+ q->len++;
+
+ pq->len++;
+
+ if (pq->hi_prec < prec)
+ pq->hi_prec = (uint8)prec;
+
+ return p;
+}
+
+/*
+ * Append spktq 'list' to the tail of pktq 'pq'
+ */
+void BCMFASTPATH
+pktq_append(struct pktq *pq, int prec, struct spktq *list)
+{
+ struct pktq_prec *q;
+ struct pktq_prec *list_q;
+
+ list_q = &list->q[0];
+
+ /* empty list check */
+ if (list_q->head == NULL)
+ return;
+
+ ASSERT(prec >= 0 && prec < pq->num_prec);
+ ASSERT(PKTLINK(list_q->tail) == NULL); /* terminated list */
+
+ ASSERT(!pktq_full(pq));
+ ASSERT(!pktq_pfull(pq, prec));
+
+ q = &pq->q[prec];
+
+ if (q->head)
+ PKTSETLINK(q->tail, list_q->head);
+ else
+ q->head = list_q->head;
+
+ q->tail = list_q->tail;
+ q->len += list_q->len;
+ pq->len += list_q->len;
+
+ if (pq->hi_prec < prec)
+ pq->hi_prec = (uint8)prec;
+
+ list_q->head = NULL;
+ list_q->tail = NULL;
+ list_q->len = 0;
+ list->len = 0;
+}
+
+/*
+ * Prepend spktq 'list' to the head of pktq 'pq'
+ */
+void BCMFASTPATH
+pktq_prepend(struct pktq *pq, int prec, struct spktq *list)
+{
+ struct pktq_prec *q;
+ struct pktq_prec *list_q;
+
+ list_q = &list->q[0];
+
+ /* empty list check */
+ if (list_q->head == NULL)
+ return;
+
+ ASSERT(prec >= 0 && prec < pq->num_prec);
+ ASSERT(PKTLINK(list_q->tail) == NULL); /* terminated list */
+
+ ASSERT(!pktq_full(pq));
+ ASSERT(!pktq_pfull(pq, prec));
+
+ q = &pq->q[prec];
+
+ /* set the tail packet of list to point at the former pq head */
+ PKTSETLINK(list_q->tail, q->head);
+ /* the new q head is the head of list */
+ q->head = list_q->head;
+
+ /* If the q tail was non-null, then it stays as is.
+ * If the q tail was null, it is now the tail of list
+ */
+ if (q->tail == NULL) {
+ q->tail = list_q->tail;
+ }
+
+ q->len += list_q->len;
+ pq->len += list_q->len;
+
+ if (pq->hi_prec < prec)
+ pq->hi_prec = (uint8)prec;
+
+ list_q->head = NULL;
+ list_q->tail = NULL;
+ list_q->len = 0;
+ list->len = 0;
+}
+
+void * BCMFASTPATH
+pktq_pdeq(struct pktq *pq, int prec)
+{
+ struct pktq_prec *q;
+ void *p;
+
+ ASSERT(prec >= 0 && prec < pq->num_prec);
+
+ q = &pq->q[prec];
+
+ if ((p = q->head) == NULL)
+ return NULL;
+
+ if ((q->head = PKTLINK(p)) == NULL)
+ q->tail = NULL;
+
+ q->len--;
+
+ pq->len--;
+
+ PKTSETLINK(p, NULL);
+
+ return p;
+}
+
+void * BCMFASTPATH
+pktq_pdeq_prev(struct pktq *pq, int prec, void *prev_p)
+{
+ struct pktq_prec *q;
+ void *p;
+
+ ASSERT(prec >= 0 && prec < pq->num_prec);
+
+ q = &pq->q[prec];
+
+ if (prev_p == NULL)
+ return NULL;
+
+ if ((p = PKTLINK(prev_p)) == NULL)
+ return NULL;
+
+ q->len--;
+
+ pq->len--;
+
+ PKTSETLINK(prev_p, PKTLINK(p));
+ PKTSETLINK(p, NULL);
+
+ return p;
+}
+
+void * BCMFASTPATH
+pktq_pdeq_with_fn(struct pktq *pq, int prec, ifpkt_cb_t fn, int arg)
+{
+ struct pktq_prec *q;
+ void *p, *prev = NULL;
+
+ ASSERT(prec >= 0 && prec < pq->num_prec);
+
+ q = &pq->q[prec];
+ p = q->head;
+
+ while (p) {
+ if (fn == NULL || (*fn)(p, arg)) {
+ break;
+ } else {
+ prev = p;
+ p = PKTLINK(p);
+ }
+ }
+ if (p == NULL)
+ return NULL;
+
+ if (prev == NULL) {
+ if ((q->head = PKTLINK(p)) == NULL) {
+ q->tail = NULL;
+ }
+ } else {
+ PKTSETLINK(prev, PKTLINK(p));
+ if (q->tail == p) {
+ q->tail = prev;
+ }
+ }
+
+ q->len--;
+
+ pq->len--;
+
+ PKTSETLINK(p, NULL);
+
+ return p;
+}
+
+void * BCMFASTPATH
+pktq_pdeq_tail(struct pktq *pq, int prec)
+{
+ struct pktq_prec *q;
+ void *p, *prev;
+
+ ASSERT(prec >= 0 && prec < pq->num_prec);
+
+ q = &pq->q[prec];
+
+ if ((p = q->head) == NULL)
+ return NULL;
+
+ for (prev = NULL; p != q->tail; p = PKTLINK(p))
+ prev = p;
+
+ if (prev)
+ PKTSETLINK(prev, NULL);
+ else
+ q->head = NULL;
+
+ q->tail = prev;
+ q->len--;
+
+ pq->len--;
+
+ return p;
+}
+
+void
+pktq_pflush(osl_t *osh, struct pktq *pq, int prec, bool dir, ifpkt_cb_t fn, int arg)
+{
+ struct pktq_prec *q;
+ void *p, *prev = NULL;
+
+ q = &pq->q[prec];
+ p = q->head;
+ while (p) {
+ if (fn == NULL || (*fn)(p, arg)) {
+ bool head = (p == q->head);
+ if (head)
+ q->head = PKTLINK(p);
+ else
+ PKTSETLINK(prev, PKTLINK(p));
+ PKTSETLINK(p, NULL);
+ PKTFREE(osh, p, dir);
+ q->len--;
+ pq->len--;
+ p = (head ? q->head : PKTLINK(prev));
+ } else {
+ prev = p;
+ p = PKTLINK(p);
+ }
+ }
+
+ if (q->head == NULL) {
+ ASSERT(q->len == 0);
+ q->tail = NULL;
+ }
+}
+
+bool BCMFASTPATH
+pktq_pdel(struct pktq *pq, void *pktbuf, int prec)
+{
+ struct pktq_prec *q;
+ void *p;
+
+ ASSERT(prec >= 0 && prec < pq->num_prec);
+
+ /* Should this just assert pktbuf? */
+ if (!pktbuf)
+ return FALSE;
+
+ q = &pq->q[prec];
+
+ if (q->head == pktbuf) {
+ if ((q->head = PKTLINK(pktbuf)) == NULL)
+ q->tail = NULL;
+ } else {
+ for (p = q->head; p && PKTLINK(p) != pktbuf; p = PKTLINK(p))
+ ;
+ if (p == NULL)
+ return FALSE;
+
+ PKTSETLINK(p, PKTLINK(pktbuf));
+ if (q->tail == pktbuf)
+ q->tail = p;
+ }
+
+ q->len--;
+ pq->len--;
+ PKTSETLINK(pktbuf, NULL);
+ return TRUE;
+}
+
+void
+pktq_init(struct pktq *pq, int num_prec, int max_len)
+{
+ int prec;
+
+ ASSERT(num_prec > 0 && num_prec <= PKTQ_MAX_PREC);
+
+ /* pq is variable size; only zero out what's requested */
+ bzero(pq, OFFSETOF(struct pktq, q) + (sizeof(struct pktq_prec) * num_prec));
+
+ pq->num_prec = (uint16)num_prec;
+
+ pq->max = (uint16)max_len;
+
+ for (prec = 0; prec < num_prec; prec++)
+ pq->q[prec].max = pq->max;
+}
+
+void
+pktq_set_max_plen(struct pktq *pq, int prec, int max_len)
+{
+ ASSERT(prec >= 0 && prec < pq->num_prec);
+
+ if (prec < pq->num_prec)
+ pq->q[prec].max = (uint16)max_len;
+}
+
+void * BCMFASTPATH
+pktq_deq(struct pktq *pq, int *prec_out)
+{
+ struct pktq_prec *q;
+ void *p;
+ int prec;
+
+ if (pq->len == 0)
+ return NULL;
+
+ while ((prec = pq->hi_prec) > 0 && pq->q[prec].head == NULL)
+ pq->hi_prec--;
+
+ q = &pq->q[prec];
+
+ if ((p = q->head) == NULL)
+ return NULL;
+
+ if ((q->head = PKTLINK(p)) == NULL)
+ q->tail = NULL;
+
+ q->len--;
+
+ pq->len--;
+
+ if (prec_out)
+ *prec_out = prec;
+
+ PKTSETLINK(p, NULL);
+
+ return p;
+}
+
+void * BCMFASTPATH
+pktq_deq_tail(struct pktq *pq, int *prec_out)
+{
+ struct pktq_prec *q;
+ void *p, *prev;
+ int prec;
+
+ if (pq->len == 0)
+ return NULL;
+
+ for (prec = 0; prec < pq->hi_prec; prec++)
+ if (pq->q[prec].head)
+ break;
+
+ q = &pq->q[prec];
+
+ if ((p = q->head) == NULL)
+ return NULL;
+
+ for (prev = NULL; p != q->tail; p = PKTLINK(p))
+ prev = p;
+
+ if (prev)
+ PKTSETLINK(prev, NULL);
+ else
+ q->head = NULL;
+
+ q->tail = prev;
+ q->len--;
+
+ pq->len--;
+
+ if (prec_out)
+ *prec_out = prec;
+
+ PKTSETLINK(p, NULL);
+
+ return p;
+}
+
+void *
+pktq_peek(struct pktq *pq, int *prec_out)
+{
+ int prec;
+
+ if (pq->len == 0)
+ return NULL;
+
+ while ((prec = pq->hi_prec) > 0 && pq->q[prec].head == NULL)
+ pq->hi_prec--;
+
+ if (prec_out)
+ *prec_out = prec;
+
+ return (pq->q[prec].head);
+}
+
+void *
+pktq_peek_tail(struct pktq *pq, int *prec_out)
+{
+ int prec;
+
+ if (pq->len == 0)
+ return NULL;
+
+ for (prec = 0; prec < pq->hi_prec; prec++)
+ if (pq->q[prec].head)
+ break;
+
+ if (prec_out)
+ *prec_out = prec;
+
+ return (pq->q[prec].tail);
+}
+
+void
+pktq_flush(osl_t *osh, struct pktq *pq, bool dir, ifpkt_cb_t fn, int arg)
+{
+ int prec;
+
+ /* Optimize flush, if pktq len = 0, just return.
+ * pktq len of 0 means pktq's prec q's are all empty.
+ */
+ if (pq->len == 0) {
+ return;
+ }
+
+ for (prec = 0; prec < pq->num_prec; prec++)
+ pktq_pflush(osh, pq, prec, dir, fn, arg);
+ if (fn == NULL)
+ ASSERT(pq->len == 0);
+}
+
+/* Return sum of lengths of a specific set of precedences */
+int
+pktq_mlen(struct pktq *pq, uint prec_bmp)
+{
+ int prec, len;
+
+ len = 0;
+
+ for (prec = 0; prec <= pq->hi_prec; prec++)
+ if (prec_bmp & (1 << prec))
+ len += pq->q[prec].len;
+
+ return len;
+}
+
+/* Priority peek from a specific set of precedences */
+void * BCMFASTPATH
+pktq_mpeek(struct pktq *pq, uint prec_bmp, int *prec_out)
+{
+ struct pktq_prec *q;
+ void *p;
+ int prec;
+
+ if (pq->len == 0)
+ {
+ return NULL;
+ }
+ while ((prec = pq->hi_prec) > 0 && pq->q[prec].head == NULL)
+ pq->hi_prec--;
+
+ while ((prec_bmp & (1 << prec)) == 0 || pq->q[prec].head == NULL)
+ if (prec-- == 0)
+ return NULL;
+
+ q = &pq->q[prec];
+
+ if ((p = q->head) == NULL)
+ return NULL;
+
+ if (prec_out)
+ *prec_out = prec;
+
+ return p;
+}
+/* Priority dequeue from a specific set of precedences */
+void * BCMFASTPATH
+pktq_mdeq(struct pktq *pq, uint prec_bmp, int *prec_out)
+{
+ struct pktq_prec *q;
+ void *p;
+ int prec;
+
+ if (pq->len == 0)
+ return NULL;
+
+ while ((prec = pq->hi_prec) > 0 && pq->q[prec].head == NULL)
+ pq->hi_prec--;
+
+ while ((pq->q[prec].head == NULL) || ((prec_bmp & (1 << prec)) == 0))
+ if (prec-- == 0)
+ return NULL;
+
+ q = &pq->q[prec];
+
+ if ((p = q->head) == NULL)
+ return NULL;
+
+ if ((q->head = PKTLINK(p)) == NULL)
+ q->tail = NULL;
+
+ q->len--;
+
+ if (prec_out)
+ *prec_out = prec;
+
+ pq->len--;
+
+ PKTSETLINK(p, NULL);
+
+ return p;
+}
diff --git a/drivers/net/wireless/bcmdhd/hndpmu.c b/drivers/net/wireless/bcmdhd/hndpmu.c
old mode 100755
new mode 100644
index 70d383e..f0a2d9c
--- a/drivers/net/wireless/bcmdhd/hndpmu.c
+++ b/drivers/net/wireless/bcmdhd/hndpmu.c
@@ -22,7 +22,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: hndpmu.c 433378 2013-10-31 17:19:39Z $
+ * $Id: hndpmu.c 475037 2014-05-02 23:55:49Z $
*/
@@ -178,8 +178,6 @@
void
si_sdiod_drive_strength_init(si_t *sih, osl_t *osh, uint32 drivestrength)
{
- chipcregs_t *cc;
- uint origidx, intr_val = 0;
sdiod_drive_str_t *str_tab = NULL;
uint32 str_mask = 0; /* only alter desired bits in PMU chipcontrol 1 register */
uint32 str_shift = 0;
@@ -190,9 +188,6 @@
return;
}
- /* Remember original core before switch to chipc */
- cc = (chipcregs_t *) si_switch_core(sih, CC_CORE_ID, &origidx, &intr_val);
-
switch (SDIOD_DRVSTR_KEY(sih->chip, sih->pmurev)) {
case SDIOD_DRVSTR_KEY(BCM4325_CHIP_ID, 1):
str_tab = (sdiod_drive_str_t *)&sdiod_drive_strength_tab1;
@@ -251,7 +246,7 @@
break;
}
- if (str_tab != NULL && cc != NULL) {
+ if (str_tab != NULL) {
uint32 cc_data_temp;
int i;
@@ -264,19 +259,16 @@
if (i > 0 && drivestrength > str_tab[i].strength)
i--;
- W_REG(osh, &cc->chipcontrol_addr, PMU_CHIPCTL1);
- cc_data_temp = R_REG(osh, &cc->chipcontrol_data);
+ W_REG(osh, PMUREG(sih, chipcontrol_addr), PMU_CHIPCTL1);
+ cc_data_temp = R_REG(osh, PMUREG(sih, chipcontrol_data));
cc_data_temp &= ~str_mask;
cc_data_temp |= str_tab[i].sel << str_shift;
- W_REG(osh, &cc->chipcontrol_data, cc_data_temp);
+ W_REG(osh, PMUREG(sih, chipcontrol_data), cc_data_temp);
if (str_ovr_pmuval) { /* enables the selected drive strength */
- W_REG(osh, &cc->chipcontrol_addr, str_ovr_pmuctl);
- OR_REG(osh, &cc->chipcontrol_data, str_ovr_pmuval);
+ W_REG(osh, PMUREG(sih, chipcontrol_addr), str_ovr_pmuctl);
+ OR_REG(osh, PMUREG(sih, chipcontrol_data), str_ovr_pmuval);
}
PMU_MSG(("SDIO: %dmA drive strength requested; set to %dmA\n",
drivestrength, str_tab[i].strength));
}
-
- /* Return to original core */
- si_restore_core(sih, origidx, intr_val);
} /* si_sdiod_drive_strength_init */
diff --git a/drivers/net/wireless/bcmdhd/include/Makefile b/drivers/net/wireless/bcmdhd/include/Makefile
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/aidmp.h b/drivers/net/wireless/bcmdhd/include/aidmp.h
old mode 100755
new mode 100644
index 519d8be..4e07525
--- a/drivers/net/wireless/bcmdhd/include/aidmp.h
+++ b/drivers/net/wireless/bcmdhd/include/aidmp.h
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: aidmp.h 404499 2013-05-28 01:06:37Z $
+ * $Id: aidmp.h 456346 2014-02-18 16:48:52Z $
*/
#ifndef _AIDMP_H
@@ -111,7 +111,7 @@
#define SD_SZ_ALIGN 0x00000fff
-#ifndef _LANGUAGE_ASSEMBLY
+#if !defined(_LANGUAGE_ASSEMBLY) && !defined(__ASSEMBLY__)
typedef volatile struct _aidmp {
uint32 oobselina30; /* 0x000 */
@@ -231,7 +231,7 @@
uint32 componentid3; /* 0xffc */
} aidmp_t;
-#endif /* _LANGUAGE_ASSEMBLY */
+#endif /* !_LANGUAGE_ASSEMBLY && !__ASSEMBLY__ */
/* Out-of-band Router registers */
#define OOB_BUSCONFIG 0x020
diff --git a/drivers/net/wireless/bcmdhd/include/bcm_cfg.h b/drivers/net/wireless/bcmdhd/include/bcm_cfg.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/bcm_mpool_pub.h b/drivers/net/wireless/bcmdhd/include/bcm_mpool_pub.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/bcmcdc.h b/drivers/net/wireless/bcmdhd/include/bcmcdc.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/bcmdefs.h b/drivers/net/wireless/bcmdhd/include/bcmdefs.h
old mode 100755
new mode 100644
index adfceb8..755b853
--- a/drivers/net/wireless/bcmdhd/include/bcmdefs.h
+++ b/drivers/net/wireless/bcmdhd/include/bcmdefs.h
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: bcmdefs.h 433011 2013-10-30 09:19:54Z $
+ * $Id: bcmdefs.h 474209 2014-04-30 12:16:47Z $
*/
#ifndef _bcmdefs_h_
@@ -93,18 +93,7 @@
*/
#define BCMRAMFN(_fn) _fn
-
-
-/* Put some library data/code into ROM to reduce RAM requirements */
-#define _data _data
-#define BCMROMDAT_NAME(_data) _data
-#define _fn _fn
-#define _fn _fn
#define STATIC static
-#define BCMROMDAT_ARYSIZ(data) ARRAYSIZE(data)
-#define BCMROMDAT_SIZEOF(data) sizeof(data)
-#define BCMROMDAT_APATCH(data)
-#define BCMROMDAT_SPATCH(data)
/* Bus types */
#define SI_BUS 0 /* SOC Interconnect */
@@ -156,8 +145,10 @@
/* Defines for DMA Address Width - Shared between OSL and HNDDMA */
#define DMADDR_MASK_32 0x0 /* Address mask for 32-bits */
#define DMADDR_MASK_30 0xc0000000 /* Address mask for 30-bits */
+#define DMADDR_MASK_26 0xFC000000 /* Address mask for 26-bits */
#define DMADDR_MASK_0 0xffffffff /* Address mask for 0-bits (hi-part) */
+#define DMADDRWIDTH_26 26 /* 26-bit addressing capability */
#define DMADDRWIDTH_30 30 /* 30-bit addressing capability */
#define DMADDRWIDTH_32 32 /* 32-bit addressing capability */
#define DMADDRWIDTH_63 63 /* 64-bit addressing capability */
@@ -196,6 +187,7 @@
(_pa) = (_val); \
} while (0)
#endif /* BCMDMA64OSL */
+#define PHYSADDRISZERO(_pa) (PHYSADDRLO(_pa) == 0 && PHYSADDRHI(_pa) == 0)
/* One physical DMA segment */
typedef struct {
@@ -340,9 +332,7 @@
#define NVRAM_ARRAY_MAXSIZE MAXSZ_NVRAM_VARS
#endif /* DL_NVRAM */
-#ifdef BCMUSBDEV_ENABLED
extern uint32 gFWID;
-#endif
#endif /* _bcmdefs_h_ */
diff --git a/drivers/net/wireless/bcmdhd/include/bcmdevs.h b/drivers/net/wireless/bcmdhd/include/bcmdevs.h
old mode 100755
new mode 100644
index d700a7e..e673ab0
--- a/drivers/net/wireless/bcmdhd/include/bcmdevs.h
+++ b/drivers/net/wireless/bcmdhd/include/bcmdevs.h
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: bcmdevs.h 456607 2014-02-19 09:26:42Z $
+ * $Id: bcmdevs.h 474307 2014-04-30 20:58:03Z $
*/
#ifndef _BCMDEVS_H
@@ -72,6 +72,8 @@
#define BCM_DNGL_BL_PID_4345 0xbd24
#define BCM_DNGL_BL_PID_4349 0xbd25
#define BCM_DNGL_BL_PID_4354 0xbd26
+#define BCM_DNGL_BL_PID_43569 0xbd27
+#define BCM_DNGL_BL_PID_43909 0xbd28
#define BCM_DNGL_BDC_PID 0x0bdc
#define BCM_DNGL_JTAG_PID 0x4a44
@@ -182,6 +184,15 @@
#define BCM43602_D11AC_ID 0x43ba /* ac dualband PCI devid SPROM programmed */
#define BCM43602_D11AC2G_ID 0x43bb /* 43602 802.11ac 2.4G device */
#define BCM43602_D11AC5G_ID 0x43bc /* 43602 802.11ac 5G device */
+#define BCM4349_D11AC_ID 0x4349 /* 4349 802.11ac dualband device */
+#define BCM4349_D11AC2G_ID 0x43dd /* 4349 802.11ac 2.4G device */
+#define BCM4349_D11AC5G_ID 0x43de /* 4349 802.11ac 5G device */
+#define BCM4355_D11AC_ID 0x43d3 /* 4355 802.11ac dualband device */
+#define BCM4355_D11AC2G_ID 0x43d4 /* 4355 802.11ac 2.4G device */
+#define BCM4355_D11AC5G_ID 0x43d5 /* 4355 802.11ac 5G device */
+#define BCM4359_D11AC_ID 0x43d6 /* 4359 802.11ac dualband device */
+#define BCM4359_D11AC2G_ID 0x43d7 /* 4359 802.11ac 2.4G device */
+#define BCM4359_D11AC5G_ID 0x43d8 /* 4359 802.11ac 5G device */
/* PCI Subsystem ID */
#define BCM943228HMB_SSID_VEN1 0x0607
@@ -219,15 +230,28 @@
#define BCM43569_D11AC2G_ID 0x43da
#define BCM43569_D11AC5G_ID 0x43db
+#define BCM43570_D11AC_ID 0x43d9
+#define BCM43570_D11AC2G_ID 0x43da
+#define BCM43570_D11AC5G_ID 0x43db
+
#define BCM4354_D11AC_ID 0x43df /* 4354 802.11ac dualband device */
#define BCM4354_D11AC2G_ID 0x43e0 /* 4354 802.11ac 2.4G device */
#define BCM4354_D11AC5G_ID 0x43e1 /* 4354 802.11ac 5G device */
+#define BCM43430_D11N2G_ID 0x43e2 /* 43430 802.11n 2.4G device */
#define BCM43349_D11N_ID 0x43e6 /* 43349 802.11n dualband id */
#define BCM43349_D11N2G_ID 0x43e7 /* 43349 802.11n 2.4Ghz band id */
#define BCM43349_D11N5G_ID 0x43e8 /* 43349 802.11n 5Ghz band id */
+#define BCM4358_D11AC_ID 0x43e9 /* 4358 802.11ac dualband device */
+#define BCM4358_D11AC2G_ID 0x43ea /* 4358 802.11ac 2.4G device */
+#define BCM4358_D11AC5G_ID 0x43eb /* 4358 802.11ac 5G device */
+
+#define BCM4356_D11AC_ID 0x43ec /* 4356 802.11ac dualband device */
+#define BCM4356_D11AC2G_ID 0x43ed /* 4356 802.11ac 2.4G device */
+#define BCM4356_D11AC5G_ID 0x43ee /* 4356 802.11ac 5G device */
+
#define BCMGPRS_UART_ID 0x4333 /* Uart id used by 4306/gprs card */
#define BCMGPRS2_UART_ID 0x4344 /* Uart id used by 4306/gprs card */
#define FPGA_JTAGM_ID 0x43f0 /* FPGA jtagm device id */
@@ -337,21 +361,40 @@
#define BCM43342_CHIP_ID 43342 /* 43342 chipcommon chipid */
#define BCM4350_CHIP_ID 0x4350 /* 4350 chipcommon chipid */
#define BCM4354_CHIP_ID 0x4354 /* 4354 chipcommon chipid */
+#define BCM4356_CHIP_ID 0x4356 /* 4356 chipcommon chipid */
#define BCM43556_CHIP_ID 0xAA24 /* 43556 chipcommon chipid */
#define BCM43558_CHIP_ID 0xAA26 /* 43558 chipcommon chipid */
+#define BCM43562_CHIP_ID 0xAA2A /* 43562 chipcommon chipid */
#define BCM43566_CHIP_ID 0xAA2E /* 43566 chipcommon chipid */
#define BCM43568_CHIP_ID 0xAA30 /* 43568 chipcommon chipid */
#define BCM43569_CHIP_ID 0xAA31 /* 43569 chipcommon chipid */
+#define BCM43570_CHIP_ID 0xAA32 /* 43570 chipcommon chipid */
+#define BCM4358_CHIP_ID 0x4358 /* 4358 chipcommon chipid */
#define BCM4350_CHIP(chipid) ((CHIPID(chipid) == BCM4350_CHIP_ID) || \
(CHIPID(chipid) == BCM4354_CHIP_ID) || \
+ (CHIPID(chipid) == BCM4356_CHIP_ID) || \
(CHIPID(chipid) == BCM43556_CHIP_ID) || \
(CHIPID(chipid) == BCM43558_CHIP_ID) || \
+ (CHIPID(chipid) == BCM43562_CHIP_ID) || \
(CHIPID(chipid) == BCM43566_CHIP_ID) || \
(CHIPID(chipid) == BCM43568_CHIP_ID) || \
- (CHIPID(chipid) == BCM43569_CHIP_ID)) /* 4350 variations */
+ (CHIPID(chipid) == BCM43569_CHIP_ID) || \
+ (CHIPID(chipid) == BCM43570_CHIP_ID) || \
+ (CHIPID(chipid) == BCM4358_CHIP_ID)) /* 4350 variations */
#define BCM4345_CHIP_ID 0x4345 /* 4345 chipcommon chipid */
+#define BCM43430_CHIP_ID 43430 /* 43430 chipcommon chipid */
+#define BCM4349_CHIP_ID 0x4349 /* 4349 chipcommon chipid */
+#define BCM4355_CHIP_ID 0x4355 /* 4355 chipcommon chipid */
+#define BCM4359_CHIP_ID 0x4359 /* 4359 chipcommon chipid */
+#define BCM4349_CHIP(chipid) ((CHIPID(chipid) == BCM4349_CHIP_ID) || \
+ (CHIPID(chipid) == BCM4355_CHIP_ID) || \
+ (CHIPID(chipid) == BCM4359_CHIP_ID))
+#define BCM4349_CHIP_GRPID BCM4349_CHIP_ID: \
+ case BCM4355_CHIP_ID: \
+ case BCM4359_CHIP_ID
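The BCM4349_CHIP_GRPID macro above relies on expanding into a run of switch-case labels, so one macro covers the 4349/4355/4359 family in a single `case`. A minimal compile-and-run check — the chip IDs are copied from the header, while is_4349_group() is an illustrative helper, not driver code:

```c
#include <assert.h>

#define BCM4349_CHIP_ID 0x4349
#define BCM4355_CHIP_ID 0x4355
#define BCM4359_CHIP_ID 0x4359

/* after a "case" keyword this expands to three case labels */
#define BCM4349_CHIP_GRPID BCM4349_CHIP_ID: \
	case BCM4355_CHIP_ID: \
	case BCM4359_CHIP_ID

static int is_4349_group(unsigned int chipid)
{
	switch (chipid) {
	case BCM4349_CHIP_GRPID:
		return 1;
	default:
		return 0;
	}
}
```

The same pattern lets switch statements elsewhere in the driver stay in sync as family members are added, at the cost of the macro only being usable in a case-label position.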
#define BCM43602_CHIP_ID 0xaa52 /* 43602 chipcommon chipid */
+#define BCM43462_CHIP_ID 0xa9c6 /* 43462 chipcommon chipid */
#define BCM4342_CHIP_ID 4342 /* 4342 chipcommon chipid (OTP, RBBU) */
#define BCM4402_CHIP_ID 0x4402 /* 4402 chipid */
@@ -442,11 +485,12 @@
#define BFL_ADCDIV 0x00000008 /* Board has the rssi ADC divider */
#define BFL_DIS_256QAM 0x00000008
#define BFL_ENETROBO 0x00000010 /* Board has robo switch or core */
+#define BFL_TSSIAVG 0x00000010 /* TSSI averaging for ACPHY chips */
#define BFL_NOPLLDOWN 0x00000020 /* Not ok to power down the chip pll and oscillator */
#define BFL_CCKHIPWR 0x00000040 /* Can do high-power CCK transmission */
#define BFL_ENETADM 0x00000080 /* Board has ADMtek switch */
#define BFL_ENETVLAN 0x00000100 /* Board has VLAN capability */
-#define BFL_UNUSED 0x00000200
+#define BFL_LTECOEX 0x00000200 /* LTE Coex enabled */
#define BFL_NOPCI 0x00000400 /* Board leaves PCI floating */
#define BFL_FEM 0x00000800 /* Board supports the Front End Module */
#define BFL_EXTLNA 0x00001000 /* Board has an external LNA in 2.4GHz band */
@@ -497,6 +541,7 @@
#define BFL2_FCC_BANDEDGE_WAR 0x00008000 /* Activates WAR to improve FCC bandedge performance */
#define BFL2_DAC_SPUR_IMPROVEMENT 0x00008000 /* Reducing DAC Spurs */
#define BFL2_GPLL_WAR2 0x00010000 /* Flag to widen G-band PLL loop b/w */
+#define BFL2_REDUCED_PA_TURNONTIME 0x00010000 /* Flag to reduce PA turn on Time */
#define BFL2_IPALVLSHIFT_3P3 0x00020000
#define BFL2_INTERNDET_TXIQCAL 0x00040000 /* Use internal envelope detector for TX IQCAL */
#define BFL2_XTALBUFOUTEN 0x00080000 /* Keep the buffered Xtal output from radio on */
@@ -529,6 +574,8 @@
#define BFL_SROM11_BTCOEX 0x00000001 /* Board supports BTCOEX */
#define BFL_SROM11_WLAN_BT_SH_XTL 0x00000002 /* bluetooth and wlan share same crystal */
#define BFL_SROM11_EXTLNA 0x00001000 /* Board has an external LNA in 2.4GHz band */
+#define BFL_SROM11_EPA_TURNON_TIME 0x00018000 /* 2 bits for different PA turn on times */
+#define BFL_SROM11_EPA_TURNON_TIME_SHIFT 15
#define BFL_SROM11_EXTLNA_5GHz 0x10000000 /* Board has an external LNA in 5GHz band */
#define BFL_SROM11_GAINBOOSTA01 0x20000000 /* 5g Gainboost for core0 and core1 */
#define BFL2_SROM11_APLL_WAR 0x00000002 /* Flag to implement alternative A-band PLL settings */
@@ -562,6 +609,8 @@
/* acphy, to use backed off gaintbl for lte-coex */
#define BFL3_LTECOEX_GAINTBL_EN_SHIFT 17
#define BFL3_5G_SPUR_WAR 0x00080000 /* acphy, enable spur WAR in 5G band */
+#define BFL3_1X1_RSDB_ANT 0x01000000 /* to find if 2-ant RSDB board or 1-ant RSDB board */
+#define BFL3_1X1_RSDB_ANT_SHIFT 24
/* acphy: lpmode2g and lpmode_5g related boardflags */
#define BFL3_ACPHY_LPMODE_2G 0x00300000 /* bits 20:21 for lpmode_2g choice */
@@ -630,6 +679,8 @@
/* 43602 Boards, unclear yet what boards will be created. */
#define BCM943602RSVD1_SSID 0x06a5
#define BCM943602RSVD2_SSID 0x06a6
+#define BCM943602X87 0X0133
+#define BCM943602X238 0X0132
/* # of GPIO pins */
#define GPIO_NUMPINS 32
diff --git a/drivers/net/wireless/bcmdhd/include/bcmendian.h b/drivers/net/wireless/bcmdhd/include/bcmendian.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/bcmmsgbuf.h b/drivers/net/wireless/bcmdhd/include/bcmmsgbuf.h
old mode 100755
new mode 100644
index 3e7e961..e4281b3
--- a/drivers/net/wireless/bcmdhd/include/bcmmsgbuf.h
+++ b/drivers/net/wireless/bcmdhd/include/bcmmsgbuf.h
@@ -5,13 +5,13 @@
* Definitions subject to change without notice.
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -19,81 +19,559 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: bcmmsgbuf.h 452261 2014-01-29 19:30:23Z $
+ * $Id: bcmmsgbuf.h 472643 2014-04-24 21:19:22Z $
*/
#ifndef _bcmmsgbuf_h_
#define _bcmmsgbuf_h_
#include <proto/ethernet.h>
#include <wlioctl.h>
#include <bcmpcie.h>
-#define MSGBUF_MAX_MSG_SIZE ETHER_MAX_LEN
-#define DNGL_TO_HOST_MSGBUF_SZ (8 * 1024) /* Host side ring */
-#define HOST_TO_DNGL_MSGBUF_SZ (8 * 1024) /* Host side ring */
-#define DTOH_LOCAL_MSGBUF_SZ (8 * 1024) /* dongle side ring */
-#define HTOD_LOCAL_MSGBUF_SZ (8 * 1024) /* dongle side ring */
-#define HTOD_LOCAL_CTRLRING_SZ (1 * 1024) /* H2D control ring dongle side */
-#define DTOH_LOCAL_CTRLRING_SZ (1 * 1024) /* D2H control ring dongle side */
-#define HOST_TO_DNGL_CTRLRING_SZ (1 * 1024) /* Host to Device ctrl ring on host */
-#define DNGL_TO_HOST_CTRLRING_SZ (1 * 1024) /* Device to host ctrl ring on host */
+#define MSGBUF_MAX_MSG_SIZE ETHER_MAX_LEN
+
+#define D2H_EPOCH_MODULO 253 /* sequence number wrap */
+#define D2H_EPOCH_INIT_VAL (D2H_EPOCH_MODULO + 1)
+
+#define H2DRING_TXPOST_ITEMSIZE 48
+#define H2DRING_RXPOST_ITEMSIZE 32
+#define H2DRING_CTRL_SUB_ITEMSIZE 40
+#define D2HRING_TXCMPLT_ITEMSIZE 16
+#define D2HRING_RXCMPLT_ITEMSIZE 32
+#define D2HRING_CTRL_CMPLT_ITEMSIZE 24
+
+#define H2DRING_TXPOST_MAX_ITEM 512
+#define H2DRING_RXPOST_MAX_ITEM 256
+#define H2DRING_CTRL_SUB_MAX_ITEM 20
+#define D2HRING_TXCMPLT_MAX_ITEM 1024
+#define D2HRING_RXCMPLT_MAX_ITEM 256
+#define D2HRING_CTRL_CMPLT_MAX_ITEM 20
enum {
DNGL_TO_HOST_MSGBUF,
HOST_TO_DNGL_MSGBUF
};
enum {
- MSG_TYPE_IOCTL_REQ = 0x1,
- MSG_TYPE_IOCTLPTR_REQ,
- MSG_TYPE_IOCTL_CMPLT,
- MSG_TYPE_WL_EVENT,
- MSG_TYPE_TX_POST,
- MSG_TYPE_RXBUF_POST,
- MSG_TYPE_RX_CMPLT,
- MSG_TYPE_TX_STATUS,
- MSG_TYPE_EVENT_PYLD,
- MSG_TYPE_IOCT_PYLD, /* used only internally inside dongle */
- MSG_TYPE_RX_PYLD, /* used only internally inside dongle */
- MSG_TYPE_TX_PYLD, /* To be removed once split header is implemented */
- MSG_TYPE_HOST_EVNT,
- MSG_TYPE_LOOPBACK = 15, /* dongle loops the message back to host */
- MSG_TYPE_LPBK_DMAXFER = 16, /* dongle DMA loopback */
- MSG_TYPE_TX_BATCH_POST = 17
-};
-
-enum {
- HOST_TO_DNGL_DATA,
+ HOST_TO_DNGL_TXP_DATA,
+ HOST_TO_DNGL_RXP_DATA,
HOST_TO_DNGL_CTRL,
DNGL_TO_HOST_DATA,
DNGL_TO_HOST_CTRL
};
-#define MESSAGE_PAYLOAD(a) (((a) == MSG_TYPE_IOCT_PYLD) | ((a) == MSG_TYPE_RX_PYLD) |\
- ((a) == MSG_TYPE_EVENT_PYLD) | ((a) == MSG_TYPE_TX_PYLD))
-#define MESSAGE_CTRLPATH(a) (((a) == MSG_TYPE_IOCTL_REQ) | ((a) == MSG_TYPE_IOCTLPTR_REQ) |\
- ((a) == MSG_TYPE_IOCTL_CMPLT) | ((a) == MSG_TYPE_HOST_EVNT) |\
- ((a) == MSG_TYPE_LOOPBACK) | ((a) == MSG_TYPE_WL_EVENT))
+#define MESSAGE_PAYLOAD(a) ((((a) & MSG_TYPE_INTERNAL_USE_START) != 0) ? TRUE : FALSE)
+
+#ifdef PCIE_API_REV1
+
+#define BCMMSGBUF_DUMMY_REF(a, b) do {BCM_REFERENCE((a));BCM_REFERENCE((b));} while (0)
+
+#define BCMMSGBUF_API_IFIDX(a) 0
+#define BCMMSGBUF_API_SEQNUM(a) 0
+#define BCMMSGBUF_IOCTL_XTID(a) 0
+#define BCMMSGBUF_IOCTL_PKTID(a) ((a)->cmd_id)
+
+#define BCMMSGBUF_SET_API_IFIDX(a, b) BCMMSGBUF_DUMMY_REF(a, b)
+#define BCMMSGBUF_SET_API_SEQNUM(a, b) BCMMSGBUF_DUMMY_REF(a, b)
+#define BCMMSGBUF_IOCTL_SET_PKTID(a, b) (BCMMSGBUF_IOCTL_PKTID(a) = (b))
+#define BCMMSGBUF_IOCTL_SET_XTID(a, b) BCMMSGBUF_DUMMY_REF(a, b)
+
+#else /* PCIE_API_REV1 */
+
+#define BCMMSGBUF_API_IFIDX(a) ((a)->if_id)
+#define BCMMSGBUF_IOCTL_PKTID(a) ((a)->pkt_id)
+#define BCMMSGBUF_API_SEQNUM(a) ((a)->u.seq.seq_no)
+#define BCMMSGBUF_IOCTL_XTID(a) ((a)->xt_id)
+
+#define BCMMSGBUF_SET_API_IFIDX(a, b) (BCMMSGBUF_API_IFIDX((a)) = (b))
+#define BCMMSGBUF_SET_API_SEQNUM(a, b) (BCMMSGBUF_API_SEQNUM((a)) = (b))
+#define BCMMSGBUF_IOCTL_SET_PKTID(a, b) (BCMMSGBUF_IOCTL_PKTID((a)) = (b))
+#define BCMMSGBUF_IOCTL_SET_XTID(a, b) (BCMMSGBUF_IOCTL_XTID((a)) = (b))
+
+#endif /* PCIE_API_REV1 */
+
+/* utility data structures */
+union addr64 {
+ struct {
+ uint32 low;
+ uint32 high;
+ };
+ struct {
+ uint32 low_addr;
+ uint32 high_addr;
+ };
+ uint64 u64;
+} DECLSPEC_ALIGN(8);
+
+typedef union addr64 addr64_t;
/* IOCTL req Hdr */
/* cmn Msg Hdr */
typedef struct cmn_msg_hdr {
- uint16 msglen;
- uint8 msgtype;
- uint8 ifidx;
- union seqn {
- uint32 seq_id;
- struct sequence {
- uint16 seq_no;
- uint8 ring_id;
- uint8 rsvd;
- } seq;
- } u;
+ /* message type */
+ uint8 msg_type;
+ /* interface index this is valid for */
+ uint8 if_id;
+ /* flags */
+ uint8 flags;
+ /* sequence number */
+ uint8 epoch;
+ /* packet Identifier for the associated host buffer */
+ uint32 request_id;
} cmn_msg_hdr_t;
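The new cmn_msg_hdr_t is meant to occupy exactly 8 bytes — four single-byte fields followed by the 32-bit request_id — so ring work items stay naturally aligned. A quick size/offset check, with stdint typedefs standing in for the driver's uint8/uint32 and a model_ prefix to avoid clashing with the real type:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct model_cmn_msg_hdr {
	uint8_t  msg_type;	/* message type */
	uint8_t  if_id;		/* interface index this is valid for */
	uint8_t  flags;
	uint8_t  epoch;		/* sequence number */
	uint32_t request_id;	/* packet identifier for the host buffer */
} model_cmn_msg_hdr_t;
```

On common ABIs the compiler inserts no padding here, so request_id lands at offset 4 and the struct totals 8 bytes.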
+/* message type */
+typedef enum bcmpcie_msgtype {
+ MSG_TYPE_GEN_STATUS = 0x1,
+ MSG_TYPE_RING_STATUS = 0x2,
+ MSG_TYPE_FLOW_RING_CREATE = 0x3,
+ MSG_TYPE_FLOW_RING_CREATE_CMPLT = 0x4,
+ MSG_TYPE_FLOW_RING_DELETE = 0x5,
+ MSG_TYPE_FLOW_RING_DELETE_CMPLT = 0x6,
+ MSG_TYPE_FLOW_RING_FLUSH = 0x7,
+ MSG_TYPE_FLOW_RING_FLUSH_CMPLT = 0x8,
+ MSG_TYPE_IOCTLPTR_REQ = 0x9,
+ MSG_TYPE_IOCTLPTR_REQ_ACK = 0xA,
+ MSG_TYPE_IOCTLRESP_BUF_POST = 0xB,
+ MSG_TYPE_IOCTL_CMPLT = 0xC,
+ MSG_TYPE_EVENT_BUF_POST = 0xD,
+ MSG_TYPE_WL_EVENT = 0xE,
+ MSG_TYPE_TX_POST = 0xF,
+ MSG_TYPE_TX_STATUS = 0x10,
+ MSG_TYPE_RXBUF_POST = 0x11,
+ MSG_TYPE_RX_CMPLT = 0x12,
+ MSG_TYPE_LPBK_DMAXFER = 0x13,
+ MSG_TYPE_LPBK_DMAXFER_CMPLT = 0x14,
+ MSG_TYPE_API_MAX_RSVD = 0x3F
+} bcmpcie_msg_type_t;
+
+typedef enum bcmpcie_msgtype_int {
+ MSG_TYPE_INTERNAL_USE_START = 0x40,
+ MSG_TYPE_EVENT_PYLD = 0x41,
+ MSG_TYPE_IOCT_PYLD = 0x42,
+ MSG_TYPE_RX_PYLD = 0x43,
+ MSG_TYPE_HOST_FETCH = 0x44,
+ MSG_TYPE_LPBK_DMAXFER_PYLD = 0x45,
+ MSG_TYPE_TXMETADATA_PYLD = 0x46,
+ MSG_TYPE_HOSTDMA_PTRS = 0x47
+} bcmpcie_msgtype_int_t;
+
+typedef enum bcmpcie_msgtype_u {
+ MSG_TYPE_TX_BATCH_POST = 0x80,
+ MSG_TYPE_IOCTL_REQ = 0x81,
+ MSG_TYPE_HOST_EVNT = 0x82,
+ MSG_TYPE_LOOPBACK = 0x83
+} bcmpcie_msgtype_u_t;
+
+
+/* if_id */
+#define BCMPCIE_CMNHDR_IFIDX_PHYINTF_SHFT 5
+#define BCMPCIE_CMNHDR_IFIDX_PHYINTF_MAX 0x7
+#define BCMPCIE_CMNHDR_IFIDX_PHYINTF_MASK \
+ (BCMPCIE_CMNHDR_IFIDX_PHYINTF_MAX << BCMPCIE_CMNHDR_IFIDX_PHYINTF_SHFT)
+#define BCMPCIE_CMNHDR_IFIDX_VIRTINTF_SHFT 0
+#define BCMPCIE_CMNHDR_IFIDX_VIRTINTF_MAX 0x1F
+#define BCMPCIE_CMNHDR_IFIDX_VIRTINTF_MASK \
+ (BCMPCIE_CMNHDR_IFIDX_VIRTINTF_MAX << BCMPCIE_CMNHDR_IFIDX_VIRTINTF_SHFT)
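The *_SHFT/*_MAX defines above split the one-byte if_id field into a 3-bit physical-interface index (bits 7:5) and a 5-bit virtual-interface index (bits 4:0). A small packing/unpacking sketch, using shortened illustrative names rather than the BCMPCIE_CMNHDR_IFIDX_* originals:

```c
#include <assert.h>

#define PHYINTF_SHFT	5
#define PHYINTF_MAX	0x7
#define VIRTINTF_SHFT	0
#define VIRTINTF_MAX	0x1F

static unsigned char pack_if_id(unsigned int phy, unsigned int virt)
{
	return (unsigned char)(((phy & PHYINTF_MAX) << PHYINTF_SHFT) |
			       ((virt & VIRTINTF_MAX) << VIRTINTF_SHFT));
}

static unsigned int phyintf(unsigned char if_id)
{
	return (if_id >> PHYINTF_SHFT) & PHYINTF_MAX;
}

static unsigned int virtintf(unsigned char if_id)
{
	return (if_id >> VIRTINTF_SHFT) & VIRTINTF_MAX;
}
```

For example, physical interface 2 with virtual interface 9 packs to 0x49, and the two extractors recover the original indices.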
+
+/* flags */
+#define BCMPCIE_CMNHDR_FLAGS_DMA_R_IDX 0x1
+#define BCMPCIE_CMNHDR_FLAGS_DMA_R_IDX_INTR 0x2
+#define BCMPCIE_CMNHDR_FLAGS_PHASE_BIT 0x80
+
+
+/* IOCTL request message */
+typedef struct ioctl_req_msg {
+ /* common message header */
+ cmn_msg_hdr_t cmn_hdr;
+
+ /* ioctl command type */
+ uint32 cmd;
+ /* ioctl transaction ID, to pair with an ioctl response */
+ uint16 trans_id;
+ /* input arguments buffer len */
+ uint16 input_buf_len;
+ /* expected output len */
+ uint16 output_buf_len;
+ /* to align the host address on 8 byte boundary */
+ uint16 rsvd[3];
+ /* always align on 8 byte boundary */
+ addr64_t host_input_buf_addr;
+ /* rsvd */
+ uint32 rsvd1[2];
+} ioctl_req_msg_t;
+
+/* buffer post messages for device to use to return IOCTL responses, Events */
+typedef struct ioctl_resp_evt_buf_post_msg {
+ /* common message header */
+ cmn_msg_hdr_t cmn_hdr;
+ /* length of the host buffer supplied */
+ uint16 host_buf_len;
+ /* to align the host address on 8 byte boundary */
+ uint16 reserved[3];
+ /* always align on 8 byte boundary */
+ addr64_t host_buf_addr;
+ uint32 rsvd[4];
+} ioctl_resp_evt_buf_post_msg_t;
+
+
+typedef struct pcie_dma_xfer_params {
+ /* common message header */
+ cmn_msg_hdr_t cmn_hdr;
+
+ /* always align on 8 byte boundary */
+ addr64_t host_input_buf_addr;
+
+ /* always align on 8 byte boundary */
+ addr64_t host_ouput_buf_addr;
+
+ /* length of transfer */
+ uint32 xfer_len;
+ /* delay before doing the src txfer */
+ uint32 srcdelay;
+ /* delay before doing the dest txfer */
+ uint32 destdelay;
+ uint32 rsvd;
+} pcie_dma_xfer_params_t;
+
+/* Complete msgbuf hdr for flow ring update from host to dongle */
+typedef struct tx_flowring_create_request {
+ cmn_msg_hdr_t msg;
+ uint8 da[ETHER_ADDR_LEN];
+ uint8 sa[ETHER_ADDR_LEN];
+ uint8 tid;
+ uint8 if_flags;
+ uint16 flow_ring_id;
+ uint8 tc;
+ uint8 priority;
+ uint16 int_vector;
+ uint16 max_items;
+ uint16 len_item;
+ addr64_t flow_ring_ptr;
+} tx_flowring_create_request_t;
+
+typedef struct tx_flowring_delete_request {
+ cmn_msg_hdr_t msg;
+ uint16 flow_ring_id;
+ uint16 reason;
+ uint32 rsvd[7];
+} tx_flowring_delete_request_t;
+
+typedef struct tx_flowring_flush_request {
+ cmn_msg_hdr_t msg;
+ uint16 flow_ring_id;
+ uint16 reason;
+ uint32 rsvd[7];
+} tx_flowring_flush_request_t;
+
+typedef union ctrl_submit_item {
+ ioctl_req_msg_t ioctl_req;
+ ioctl_resp_evt_buf_post_msg_t resp_buf_post;
+ pcie_dma_xfer_params_t dma_xfer;
+ tx_flowring_create_request_t flow_create;
+ tx_flowring_delete_request_t flow_delete;
+ tx_flowring_flush_request_t flow_flush;
+ unsigned char check[H2DRING_CTRL_SUB_ITEMSIZE];
+} ctrl_submit_item_t;
+
+/* Control Completion messages (20 bytes) */
+typedef struct compl_msg_hdr {
+ /* status for the completion */
+ int16 status;
+ /* submission flow ring id which generated this status */
+ uint16 flow_ring_id;
+} compl_msg_hdr_t;
+
+/* XOR checksum or a magic number to audit DMA done */
+typedef uint32 dma_done_t;
+
+/* completion header status codes */
+#define BCMPCIE_SUCCESS 0
+#define BCMPCIE_NOTFOUND 1
+#define BCMPCIE_NOMEM 2
+#define BCMPCIE_BADOPTION 3
+#define BCMPCIE_RING_IN_USE 4
+#define BCMPCIE_RING_ID_INVALID 5
+#define BCMPCIE_PKT_FLUSH 6
+#define BCMPCIE_NO_EVENT_BUF 7
+#define BCMPCIE_NO_RX_BUF 8
+#define BCMPCIE_NO_IOCTLRESP_BUF 9
+#define BCMPCIE_MAX_IOCTLRESP_BUF 10
+#define BCMPCIE_MAX_EVENT_BUF 11
+
+/* IOCTL completion response */
+typedef struct ioctl_compl_resp_msg {
+ /* common message header */
+ cmn_msg_hdr_t cmn_hdr;
+ /* completion message header */
+ compl_msg_hdr_t compl_hdr;
+ /* response buffer len where a host buffer is involved */
+ uint16 resp_len;
+ /* transaction id to pair with a request */
+ uint16 trans_id;
+ /* cmd id */
+ uint32 cmd;
+ /* XOR checksum or a magic number to audit DMA done */
+ dma_done_t marker;
+} ioctl_comp_resp_msg_t;
+
+/* IOCTL request acknowledgement */
+typedef struct ioctl_req_ack_msg {
+ /* common message header */
+ cmn_msg_hdr_t cmn_hdr;
+ /* completion message header */
+ compl_msg_hdr_t compl_hdr;
+ /* cmd id */
+ uint32 cmd;
+ uint32 rsvd[1];
+ /* XOR checksum or a magic number to audit DMA done */
+ dma_done_t marker;
+} ioctl_req_ack_msg_t;
+
+/* WL event message: send from device to host */
+typedef struct wlevent_req_msg {
+ /* common message header */
+ cmn_msg_hdr_t cmn_hdr;
+ /* completion message header */
+ compl_msg_hdr_t compl_hdr;
+ /* event data len valid with the event buffer */
+ uint16 event_data_len;
+ /* sequence number */
+ uint16 seqnum;
+ /* rsvd */
+ uint32 rsvd;
+ /* XOR checksum or a magic number to audit DMA done */
+ dma_done_t marker;
+} wlevent_req_msg_t;
+
+/* dma xfer complete message */
+typedef struct pcie_dmaxfer_cmplt {
+ /* common message header */
+ cmn_msg_hdr_t cmn_hdr;
+ /* completion message header */
+ compl_msg_hdr_t compl_hdr;
+ uint32 rsvd[2];
+ /* XOR checksum or a magic number to audit DMA done */
+ dma_done_t marker;
+} pcie_dmaxfer_cmplt_t;
+
+/* general status message */
+typedef struct pcie_gen_status {
+ /* common message header */
+ cmn_msg_hdr_t cmn_hdr;
+ /* completion message header */
+ compl_msg_hdr_t compl_hdr;
+ uint32 rsvd[2];
+ /* XOR checksum or a magic number to audit DMA done */
+ dma_done_t marker;
+} pcie_gen_status_t;
+
+/* ring status message */
+typedef struct pcie_ring_status {
+ /* common message header */
+ cmn_msg_hdr_t cmn_hdr;
+ /* completion message header */
+ compl_msg_hdr_t compl_hdr;
+ /* message which firmware couldn't decode */
+ uint16 write_idx;
+ uint16 rsvd[3];
+ /* XOR checksum or a magic number to audit DMA done */
+ dma_done_t marker;
+} pcie_ring_status_t;
+
+typedef struct tx_flowring_create_response {
+ cmn_msg_hdr_t msg;
+ compl_msg_hdr_t cmplt;
+ uint32 rsvd[2];
+ /* XOR checksum or a magic number to audit DMA done */
+ dma_done_t marker;
+} tx_flowring_create_response_t;
+typedef struct tx_flowring_delete_response {
+ cmn_msg_hdr_t msg;
+ compl_msg_hdr_t cmplt;
+ uint32 rsvd[2];
+ /* XOR checksum or a magic number to audit DMA done */
+ dma_done_t marker;
+} tx_flowring_delete_response_t;
+
+typedef struct tx_flowring_flush_response {
+ cmn_msg_hdr_t msg;
+ compl_msg_hdr_t cmplt;
+ uint32 rsvd[2];
+ /* XOR checksum or a magic number to audit DMA done */
+ dma_done_t marker;
+} tx_flowring_flush_response_t;
+
+/* Common layout of all d2h control messages */
+typedef struct ctrl_compl_msg {
+ /* common message header */
+ cmn_msg_hdr_t cmn_hdr;
+ /* completion message header */
+ compl_msg_hdr_t compl_hdr;
+ uint32 rsvd[2];
+ /* XOR checksum or a magic number to audit DMA done */
+ dma_done_t marker;
+} ctrl_compl_msg_t;
+
+typedef union ctrl_completion_item {
+ ioctl_comp_resp_msg_t ioctl_resp;
+ wlevent_req_msg_t event;
+ ioctl_req_ack_msg_t ioct_ack;
+ pcie_dmaxfer_cmplt_t pcie_xfer_cmplt;
+ pcie_gen_status_t pcie_gen_status;
+ pcie_ring_status_t pcie_ring_status;
+ tx_flowring_create_response_t txfl_create_resp;
+ tx_flowring_delete_response_t txfl_delete_resp;
+ tx_flowring_flush_response_t txfl_flush_resp;
+ ctrl_compl_msg_t ctrl_compl;
+ unsigned char check[D2HRING_CTRL_CMPLT_ITEMSIZE];
+} ctrl_completion_item_t;
+
+/* H2D Rxpost ring work items */
+typedef struct host_rxbuf_post {
+ /* common message header */
+ cmn_msg_hdr_t cmn_hdr;
+ /* provided meta data buffer len */
+ uint16 metadata_buf_len;
+ /* provided data buffer len to receive data */
+ uint16 data_buf_len;
+ /* alignment to make the host buffers start on 8 byte boundary */
+ uint32 rsvd;
+ /* provided meta data buffer */
+ addr64_t metadata_buf_addr;
+ /* provided data buffer to receive data */
+ addr64_t data_buf_addr;
+} host_rxbuf_post_t;
+
+typedef union rxbuf_submit_item {
+ host_rxbuf_post_t rxpost;
+ unsigned char check[H2DRING_RXPOST_ITEMSIZE];
+} rxbuf_submit_item_t;
+
+
+/* D2H Rxcompletion ring work items */
+typedef struct host_rxbuf_cmpl {
+ /* common message header */
+ cmn_msg_hdr_t cmn_hdr;
+ /* completion message header */
+ compl_msg_hdr_t compl_hdr;
+ /* filled up meta data len */
+ uint16 metadata_len;
+ /* filled up buffer len to receive data */
+ uint16 data_len;
+ /* offset in the host rx buffer where the data starts */
+ uint16 data_offset;
+ /* flags for the received buffer */
+ uint16 flags;
+ /* rx status */
+ uint32 rx_status_0;
+ uint32 rx_status_1;
+ /* XOR checksum or a magic number to audit DMA done */
+ dma_done_t marker;
+} host_rxbuf_cmpl_t;
+
+typedef union rxbuf_complete_item {
+ host_rxbuf_cmpl_t rxcmpl;
+ unsigned char check[D2HRING_RXCMPLT_ITEMSIZE];
+} rxbuf_complete_item_t;
+
+
+typedef struct host_txbuf_post {
+ /* common message header */
+ cmn_msg_hdr_t cmn_hdr;
+ /* eth header */
+ uint8 txhdr[ETHER_HDR_LEN];
+ /* flags */
+ uint8 flags;
+ /* number of segments */
+ uint8 seg_cnt;
+
+ /* provided meta data buffer for txstatus */
+ addr64_t metadata_buf_addr;
+ /* provided data buffer to receive data */
+ addr64_t data_buf_addr;
+ /* provided meta data buffer len */
+ uint16 metadata_buf_len;
+ /* provided data buffer len to receive data */
+ uint16 data_len;
+ uint32 flag2;
+} host_txbuf_post_t;
+
+#define BCMPCIE_PKT_FLAGS_FRAME_802_3 0x01
+#define BCMPCIE_PKT_FLAGS_FRAME_802_11 0x02
+
+#define BCMPCIE_PKT_FLAGS_FRAME_EXEMPT_MASK 0x03 /* Exempt uses 2 bits */
+#define BCMPCIE_PKT_FLAGS_FRAME_EXEMPT_SHIFT 0x02 /* needs to be shifted past other bits */
+
+
+#define BCMPCIE_PKT_FLAGS_PRIO_SHIFT 5
+#define BCMPCIE_PKT_FLAGS_PRIO_MASK (7 << BCMPCIE_PKT_FLAGS_PRIO_SHIFT)
+
+/* These are added to fix up the compile issues */
+#define BCMPCIE_TXPOST_FLAGS_FRAME_802_3 BCMPCIE_PKT_FLAGS_FRAME_802_3
+#define BCMPCIE_TXPOST_FLAGS_FRAME_802_11 BCMPCIE_PKT_FLAGS_FRAME_802_11
+#define BCMPCIE_TXPOST_FLAGS_PRIO_SHIFT BCMPCIE_PKT_FLAGS_PRIO_SHIFT
+#define BCMPCIE_TXPOST_FLAGS_PRIO_MASK BCMPCIE_PKT_FLAGS_PRIO_MASK
+
+#define BCMPCIE_PKT_FLAGS2_FORCELOWRATE_MASK 0x01
+#define BCMPCIE_PKT_FLAGS2_FORCELOWRATE_SHIFT 0
+
+/* H2D Txpost ring work items */
+typedef union txbuf_submit_item {
+ host_txbuf_post_t txpost;
+ unsigned char check[H2DRING_TXPOST_ITEMSIZE];
+} txbuf_submit_item_t;
+
+/* D2H Txcompletion ring work items */
+typedef struct host_txbuf_cmpl {
+ /* common message header */
+ cmn_msg_hdr_t cmn_hdr;
+ /* completion message header */
+ compl_msg_hdr_t compl_hdr;
+ union {
+ struct {
+ /* provided meta data len */
+ uint16 metadata_len;
+ /* WLAN side txstatus */
+ uint16 tx_status;
+ };
+ /* XOR checksum or a magic number to audit DMA done */
+ dma_done_t marker;
+ };
+} host_txbuf_cmpl_t;
+
+typedef union txbuf_complete_item {
+ host_txbuf_cmpl_t txcmpl;
+ unsigned char check[D2HRING_TXCMPLT_ITEMSIZE];
+} txbuf_complete_item_t;
+
+#define BCMPCIE_D2H_METADATA_HDRLEN 4
+#define BCMPCIE_D2H_METADATA_MINLEN (BCMPCIE_D2H_METADATA_HDRLEN + 4)
+
+/* ret buf struct */
+typedef struct ret_buf_ptr {
+ uint32 low_addr;
+ uint32 high_addr;
+} ret_buf_t;
+
+#ifdef PCIE_API_REV1
+/* ioctl specific hdr */
+typedef struct ioctl_hdr {
+ uint16 cmd;
+ uint16 retbuf_len;
+ uint32 cmd_id;
+} ioctl_hdr_t;
+typedef struct ioctlptr_hdr {
+ uint16 cmd;
+ uint16 retbuf_len;
+ uint16 buflen;
+ uint16 rsvd;
+ uint32 cmd_id;
+} ioctlptr_hdr_t;
+#else /* PCIE_API_REV1 */
typedef struct ioctl_req_hdr {
uint32 pkt_id; /* Packet ID */
uint32 cmd; /* IOCTL ID */
@@ -102,23 +580,26 @@
uint16 xt_id; /* transaction ID */
uint16 rsvd[1];
} ioctl_req_hdr_t;
+#endif /* PCIE_API_REV1 */
-/* ret buf struct */
-typedef struct ret_buf_ptr {
- uint32 low_addr;
- uint32 high_addr;
-} ret_buf_t;
/* Complete msgbuf hdr for ioctl from host to dongle */
typedef struct ioct_reqst_hdr {
cmn_msg_hdr_t msg;
+#ifdef PCIE_API_REV1
+ ioctl_hdr_t ioct_hdr;
+#else
ioctl_req_hdr_t ioct_hdr;
+#endif
ret_buf_t ret_buf;
} ioct_reqst_hdr_t;
-
typedef struct ioctptr_reqst_hdr {
cmn_msg_hdr_t msg;
+#ifdef PCIE_API_REV1
+ ioctlptr_hdr_t ioct_hdr;
+#else
ioctl_req_hdr_t ioct_hdr;
+#endif
ret_buf_t ret_buf;
ret_buf_t ioct_buf;
} ioctptr_reqst_hdr_t;
@@ -126,12 +607,19 @@
/* ioctl response header */
typedef struct ioct_resp_hdr {
cmn_msg_hdr_t msg;
+#ifdef PCIE_API_REV1
+ uint32 cmd_id;
+#else
uint32 pkt_id;
+#endif
uint32 status;
uint32 ret_len;
uint32 inline_data;
+#ifdef PCIE_API_REV1
+#else
uint16 xt_id; /* transaction ID */
uint16 rsvd[1];
+#endif
} ioct_resp_hdr_t;
/* ioct resp header used in dongle */
diff --git a/drivers/net/wireless/bcmdhd/include/bcmnvram.h b/drivers/net/wireless/bcmdhd/include/bcmnvram.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/bcmpcie.h b/drivers/net/wireless/bcmdhd/include/bcmpcie.h
old mode 100755
new mode 100644
index 3a5c671..530e235
--- a/drivers/net/wireless/bcmdhd/include/bcmpcie.h
+++ b/drivers/net/wireless/bcmdhd/include/bcmpcie.h
@@ -22,13 +22,13 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: bcmpcie.h 452261 2014-01-29 19:30:23Z $
+ * $Id: bcmpcie.h 472405 2014-04-23 23:46:55Z $
*/
#ifndef _bcmpcie_h_
#define _bcmpcie_h_
-#include <circularbuf.h>
+#include <bcmutils.h>
#define ADDR_64(x) (x.addr)
#define HIGH_ADDR_32(x) ((uint32) (((sh_addr_t) x).high_addr))
@@ -39,22 +39,64 @@
uint32 high_addr;
} sh_addr_t;
-#define PCIE_SHARED_VERSION 0x0003
-#define PCIE_SHARED_VERSION_MASK 0x00FF
-#define PCIE_SHARED_ASSERT_BUILT 0x0100
-#define PCIE_SHARED_ASSERT 0x0200
-#define PCIE_SHARED_TRAP 0x0400
-#define PCIE_SHARED_IN_BRPT 0x0800
-#define PCIE_SHARED_SET_BRPT 0x1000
-#define PCIE_SHARED_PENDING_BRPT 0x2000
-#define PCIE_SHARED_HTOD_SPLIT 0x4000
-#define PCIE_SHARED_DTOH_SPLIT 0x8000
+
+
+#ifdef BCMPCIE_SUPPORT_TX_PUSH_RING
+#define BCMPCIE_PUSH_TX_RING 1
+#else
+#define BCMPCIE_PUSH_TX_RING 0
+#endif /* BCMPCIE_SUPPORT_TX_PUSH_RING */
+
+/* May be overridden by 43xxxxx-roml.mk */
+#if !defined(BCMPCIE_MAX_TX_FLOWS)
+#define BCMPCIE_MAX_TX_FLOWS 40
+#endif /* ! BCMPCIE_MAX_TX_FLOWS */
+
+#define PCIE_SHARED_VERSION 0x00005
+#define PCIE_SHARED_VERSION_MASK 0x000FF
+#define PCIE_SHARED_ASSERT_BUILT 0x00100
+#define PCIE_SHARED_ASSERT 0x00200
+#define PCIE_SHARED_TRAP 0x00400
+#define PCIE_SHARED_IN_BRPT 0x00800
+#define PCIE_SHARED_SET_BRPT 0x01000
+#define PCIE_SHARED_PENDING_BRPT 0x02000
+#define PCIE_SHARED_TXPUSH_SPRT 0x04000
+#define PCIE_SHARED_EVT_SEQNUM 0x08000
+#define PCIE_SHARED_DMA_INDEX 0x10000
+
+#define BCMPCIE_H2D_MSGRING_CONTROL_SUBMIT 0
+#define BCMPCIE_H2D_MSGRING_RXPOST_SUBMIT 1
+#define BCMPCIE_D2H_MSGRING_CONTROL_COMPLETE 2
+#define BCMPCIE_D2H_MSGRING_TX_COMPLETE 3
+#define BCMPCIE_D2H_MSGRING_RX_COMPLETE 4
+#define BCMPCIE_COMMON_MSGRING_MAX_ID 4
+
+/* Added only for single tx ring */
+#define BCMPCIE_H2D_TXFLOWRINGID 5
+
+#define BCMPCIE_H2D_COMMON_MSGRINGS 2
+#define BCMPCIE_D2H_COMMON_MSGRINGS 3
+#define BCMPCIE_COMMON_MSGRINGS 5
+
+enum h2dring_idx {
+ BCMPCIE_H2D_MSGRING_CONTROL_SUBMIT_IDX = 0,
+ BCMPCIE_H2D_MSGRING_RXPOST_SUBMIT_IDX = 1,
+ BCMPCIE_H2D_MSGRING_TXFLOW_IDX_START = 2
+};
+
+enum d2hring_idx {
+ BCMPCIE_D2H_MSGRING_CONTROL_COMPLETE_IDX = 0,
+ BCMPCIE_D2H_MSGRING_TX_COMPLETE_IDX = 1,
+ BCMPCIE_D2H_MSGRING_RX_COMPLETE_IDX = 2
+};
typedef struct ring_mem {
- uint8 idx;
- uint8 rsvd;
- uint16 size;
- sh_addr_t base_addr;
+ uint16 idx;
+ uint8 type;
+ uint8 rsvd;
+ uint16 max_item;
+ uint16 len_items;
+ sh_addr_t base_addr;
} ring_mem_t;
#define RINGSTATE_INITED 1
@@ -68,13 +110,23 @@
} ring_state_t;
+
typedef struct ring_info {
- uint8 h2d_ring_count;
- uint8 d2h_ring_count;
- uint8 rsvd[2];
/* locations in the TCM where the ringmem is and ringstate are defined */
- uint32 ringmem_ptr; /* h2d_ring_count + d2h_ring_count */
- uint32 ring_state_ptr; /* h2d_ring_count + d2h_ring_count */
+ uint32 ringmem_ptr; /* ring mem location in TCM */
+ uint32 h2d_w_idx_ptr;
+
+ uint32 h2d_r_idx_ptr;
+ uint32 d2h_w_idx_ptr;
+
+ uint32 d2h_r_idx_ptr;
+ /* host locations where the DMA of read/write indices are */
+ sh_addr_t h2d_w_idx_hostaddr;
+ sh_addr_t h2d_r_idx_hostaddr;
+ sh_addr_t d2h_w_idx_hostaddr;
+ sh_addr_t d2h_r_idx_hostaddr;
+ uint16 max_sub_queues;
+ uint16 rsvd;
} ring_info_t;
typedef struct {
@@ -85,16 +137,17 @@
uint32 assert_exp_addr;
uint32 assert_file_addr;
uint32 assert_line;
- uint32 console_addr; /* Address of hndrte_cons_t */
+ uint32 console_addr; /* Address of hnd_cons_t */
+
uint32 msgtrace_addr;
+
uint32 fwid;
/* Used for debug/flow control */
uint16 total_lfrag_pkt_cnt;
- uint16 max_host_rxbufs;
- uint32 rsvd1;
+ uint16 max_host_rxbufs; /* rsvd in spec */
- uint32 dma_rxoffset;
+ uint32 dma_rxoffset; /* rsvd in spec */
/* these will be used for sleep request/ack, d3 req/ack */
uint32 h2d_mb_data_ptr;
@@ -104,21 +157,29 @@
/* location in the TCM memory which has the ring_info */
uint32 rings_info_ptr;
- /* block of host memory for the dongle to push the status into */
- sh_addr_t device_rings_stsblk;
- uint32 device_rings_stsblk_len;
+ /* block of host memory for the scratch buffer */
+ uint32 host_dma_scratch_buffer_len;
+ sh_addr_t host_dma_scratch_buffer;
+ /* block of host memory for the dongle to push the status into */
+ uint32 device_rings_stsblk_len;
+ sh_addr_t device_rings_stsblk;
+#ifdef BCM_BUZZZ
+ uint32 buzzz; /* BUZZZ state format strings and trace buffer */
+#endif
} pciedev_shared_t;
/* H2D mail box Data */
#define H2D_HOST_D3_INFORM 0x00000001
#define H2D_HOST_DS_ACK 0x00000002
+#define H2D_HOST_CONS_INT 0x80000000 /* h2d int for console cmds */
/* D2H mail box Data */
#define D2H_DEV_D3_ACK 0x00000001
#define D2H_DEV_DS_ENTER_REQ 0x00000002
#define D2H_DEV_DS_EXIT_NOTE 0x00000004
+#define D2H_DEV_FWHALT 0x10000000
extern pciedev_shared_t pciedev_shared;
@@ -126,4 +187,29 @@
#define NTXPACTIVE(r, w, d) (((r) <= (w)) ? ((w)-(r)) : ((d)-(r)+(w)))
#define NTXPAVAIL(r, w, d) (((d) - NTXPACTIVE((r), (w), (d))) > 1)
+/* Msgbuf ring index and pointer helper macros */
+#define READ_AVAIL_SPACE(w, r, d)		\
+			(((w) >= (r)) ? ((w) - (r)) : ((d) - (r)))
+
+#define WRT_PEND(x) ((x)->wr_pending)
+#define DNGL_RING_WPTR(msgbuf) (*((msgbuf)->tcm_rs_w_ptr))
+#define BCMMSGBUF_RING_SET_W_PTR(msgbuf, a) (DNGL_RING_WPTR(msgbuf) = (a))
+
+#define DNGL_RING_RPTR(msgbuf) (*((msgbuf)->tcm_rs_r_ptr))
+#define BCMMSGBUF_RING_SET_R_PTR(msgbuf, a) (DNGL_RING_RPTR(msgbuf) = (a))
+
+#define RING_READ_PTR(x) ((x)->ringstate->r_offset)
+#define RING_WRITE_PTR(x) ((x)->ringstate->w_offset)
+#define RING_START_PTR(x) ((x)->ringmem->base_addr.low_addr)
+#define RING_MAX_ITEM(x) ((x)->ringmem->max_item)
+#define RING_LEN_ITEMS(x) ((x)->ringmem->len_items)
+#define HOST_RING_BASE(x) ((x)->ring_base.va)
+#define HOST_RING_END(x) ((uint8 *)HOST_RING_BASE((x)) + \
+ ((RING_MAX_ITEM((x))-1)*RING_LEN_ITEMS((x))))
+
+#define WRITE_SPACE_AVAIL_CONTINUOUS(r, w, d)		(((w) >= (r)) ? ((d) - (w)) : ((r) - (w)))
+#define WRITE_SPACE_AVAIL(r, w, d)	((d) - NTXPACTIVE((r), (w), (d)) - 1)
+#define CHECK_WRITE_SPACE(r, w, d) \
+ MIN(WRITE_SPACE_AVAIL(r, w, d), WRITE_SPACE_AVAIL_CONTINUOUS(r, w, d))
+
#endif /* _bcmpcie_h_ */
diff --git a/drivers/net/wireless/bcmdhd/include/bcmpcispi.h b/drivers/net/wireless/bcmdhd/include/bcmpcispi.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/bcmperf.h b/drivers/net/wireless/bcmdhd/include/bcmperf.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/bcmsdbus.h b/drivers/net/wireless/bcmdhd/include/bcmsdbus.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/bcmsdh.h b/drivers/net/wireless/bcmdhd/include/bcmsdh.h
old mode 100755
new mode 100644
index df65028..5520aa8
--- a/drivers/net/wireless/bcmdhd/include/bcmsdh.h
+++ b/drivers/net/wireless/bcmdhd/include/bcmsdh.h
@@ -23,7 +23,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: bcmsdh.h 455573 2014-02-14 17:49:31Z $
+ * $Id: bcmsdh.h 450676 2014-01-22 22:45:13Z $
*/
/**
@@ -62,6 +62,11 @@
bool regfail; /* Save status of last reg_read/reg_write call */
uint32 sbwad; /* Save backplane window address */
void *os_cxt; /* Pointer to per-OS private data */
+#ifdef DHD_WAKE_STATUS
+ unsigned int total_wake_count;
+ int pkt_wake;
+ int wake_irq;
+#endif
};
/* Detach - freeup resources allocated in attach */
@@ -85,6 +90,11 @@
extern bool bcmsdh_intr_pending(void *sdh);
#endif
+#ifdef DHD_WAKE_STATUS
+int bcmsdh_get_total_wake(bcmsdh_info_t *bcmsdh);
+int bcmsdh_set_get_wake(bcmsdh_info_t *bcmsdh, int flag);
+#endif
+
/* Register a callback to be called if and when bcmsdh detects
* device removal. No-op in the case of non-removable/hardwired devices.
*/
diff --git a/drivers/net/wireless/bcmdhd/include/bcmsdh_sdmmc.h b/drivers/net/wireless/bcmdhd/include/bcmsdh_sdmmc.h
old mode 100755
new mode 100644
index e637ae4..af265df
--- a/drivers/net/wireless/bcmdhd/include/bcmsdh_sdmmc.h
+++ b/drivers/net/wireless/bcmdhd/include/bcmsdh_sdmmc.h
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: bcmsdh_sdmmc.h 444019 2013-12-18 08:36:54Z $
+ * $Id: bcmsdh_sdmmc.h 408158 2013-06-17 22:15:35Z $
*/
#ifndef __BCMSDH_SDMMC_H__
diff --git a/drivers/net/wireless/bcmdhd/include/bcmsdpcm.h b/drivers/net/wireless/bcmdhd/include/bcmsdpcm.h
old mode 100755
new mode 100644
index 273a4d7..e80cdc2
--- a/drivers/net/wireless/bcmdhd/include/bcmsdpcm.h
+++ b/drivers/net/wireless/bcmdhd/include/bcmsdpcm.h
@@ -22,7 +22,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: bcmsdpcm.h 414378 2013-07-24 15:58:50Z $
+ * $Id: bcmsdpcm.h 472405 2014-04-23 23:46:55Z $
*/
#ifndef _bcmsdpcm_h_
@@ -268,14 +268,11 @@
uint32 assert_exp_addr;
uint32 assert_file_addr;
uint32 assert_line;
- uint32 console_addr; /* Address of hndrte_cons_t */
+ uint32 console_addr; /* Address of hnd_cons_t */
uint32 msgtrace_addr;
uint32 fwid;
} sdpcm_shared_t;
extern sdpcm_shared_t sdpcm_shared;
-/* Function can be used to notify host of FW halt */
-extern void sdpcmd_fwhalt(void);
-
#endif /* _bcmsdpcm_h_ */
diff --git a/drivers/net/wireless/bcmdhd/include/bcmsdspi.h b/drivers/net/wireless/bcmdhd/include/bcmsdspi.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/bcmsdstd.h b/drivers/net/wireless/bcmdhd/include/bcmsdstd.h
old mode 100755
new mode 100644
index 97c2a5a..4607879
--- a/drivers/net/wireless/bcmdhd/include/bcmsdstd.h
+++ b/drivers/net/wireless/bcmdhd/include/bcmsdstd.h
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: bcmsdstd.h 455427 2014-02-14 00:11:19Z $
+ * $Id: bcmsdstd.h 455390 2014-02-13 22:14:56Z $
*/
#ifndef _BCM_SD_STD_H
#define _BCM_SD_STD_H
@@ -237,8 +237,8 @@
*/
/* Register mapping routines */
-extern uint32 *sdstd_reg_map(osl_t *osh, int32 addr, int size);
-extern void sdstd_reg_unmap(osl_t *osh, int32 addr, int size);
+extern uint32 *sdstd_reg_map(osl_t *osh, ulong addr, int size);
+extern void sdstd_reg_unmap(osl_t *osh, ulong addr, int size);
/* Interrupt (de)registration routines */
extern int sdstd_register_irq(sdioh_info_t *sd, uint irq);
diff --git a/drivers/net/wireless/bcmdhd/include/bcmspi.h b/drivers/net/wireless/bcmdhd/include/bcmspi.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/bcmspibrcm.h b/drivers/net/wireless/bcmdhd/include/bcmspibrcm.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/bcmsrom_fmt.h b/drivers/net/wireless/bcmdhd/include/bcmsrom_fmt.h
old mode 100755
new mode 100644
index 7d247bf..82eba65
--- a/drivers/net/wireless/bcmdhd/include/bcmsrom_fmt.h
+++ b/drivers/net/wireless/bcmdhd/include/bcmsrom_fmt.h
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: bcmsrom_fmt.h 427005 2013-10-02 00:15:10Z $
+ * $Id: bcmsrom_fmt.h 473704 2014-04-29 15:49:57Z $
*/
#ifndef _bcmsrom_fmt_h_
@@ -29,8 +29,8 @@
#define	SROM_MAXREV		11	/* max revision supported by driver */
-/* Maximum srom: 6 Kilobits == 768 bytes */
-#define SROM_MAX 768
+/* Maximum srom: 12 Kilobits == 1536 bytes */
+#define SROM_MAX 1536
#define SROM_MAXW 384
#define VARS_MAX 4096
diff --git a/drivers/net/wireless/bcmdhd/include/bcmsrom_tbl.h b/drivers/net/wireless/bcmdhd/include/bcmsrom_tbl.h
old mode 100755
new mode 100644
index 503fc28..6de9d3c
--- a/drivers/net/wireless/bcmdhd/include/bcmsrom_tbl.h
+++ b/drivers/net/wireless/bcmdhd/include/bcmsrom_tbl.h
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: bcmsrom_tbl.h 427005 2013-10-02 00:15:10Z $
+ * $Id: bcmsrom_tbl.h 471127 2014-04-17 23:24:23Z $
*/
#ifndef _bcmsrom_tbl_h_
@@ -29,6 +29,7 @@
#include "sbpcmcia.h"
#include "wlioctl.h"
+#include <bcmsrom_fmt.h>
typedef struct {
const char *name;
@@ -888,7 +889,8 @@
{HNBU_AA, 0xffffffff, 3, "1aa2g 1aa5g"},
{HNBU_AA, 0xffffffff, 3, "1aa0 1aa1"}, /* backward compatibility */
{HNBU_AG, 0xffffffff, 5, "1ag0 1ag1 1ag2 1ag3"},
- {HNBU_BOARDFLAGS, 0xffffffff, 13, "4boardflags 4boardflags2 4boardflags3"},
+ {HNBU_BOARDFLAGS, 0xffffffff, 21, "4boardflags 4boardflags2 4boardflags3 "
+ "4boardflags4 4boardflags5 "},
{HNBU_LEDS, 0xffffffff, 17, "1ledbh0 1ledbh1 1ledbh2 1ledbh3 1ledbh4 1ledbh5 "
"1ledbh6 1ledbh7 1ledbh8 1ledbh9 1ledbh10 1ledbh11 1ledbh12 1ledbh13 1ledbh14 1ledbh15"},
{HNBU_CCODE, 0xffffffff, 4, "2ccode 1cctl"},
@@ -947,17 +949,17 @@
{OTP_RAW, 0xffffffff, 0, ""}, /* special case */
{HNBU_OFDMPO5G, 0xffffffff, 13, "4ofdm5gpo 4ofdm5glpo 4ofdm5ghpo"},
{HNBU_USBEPNUM, 0xffffffff, 3, "2usbepnum"},
- {HNBU_CCKBW202GPO, 0xffffffff, 5, "2cckbw202gpo 2cckbw20ul2gpo"},
+ {HNBU_CCKBW202GPO, 0xffffffff, 7, "2cckbw202gpo 2cckbw20ul2gpo 2cckbw20in802gpo"},
{HNBU_LEGOFDMBW202GPO, 0xffffffff, 9, "4legofdmbw202gpo 4legofdmbw20ul2gpo"},
{HNBU_LEGOFDMBW205GPO, 0xffffffff, 25, "4legofdmbw205glpo 4legofdmbw20ul5glpo "
"4legofdmbw205gmpo 4legofdmbw20ul5gmpo 4legofdmbw205ghpo 4legofdmbw20ul5ghpo"},
- {HNBU_MCS2GPO, 0xffffffff, 13, "4mcsbw202gpo 4mcsbw20ul2gpo 4mcsbw402gpo"},
+ {HNBU_MCS2GPO, 0xffffffff, 17, "4mcsbw202gpo 4mcsbw20ul2gpo 4mcsbw402gpo 4mcsbw802gpo"},
{HNBU_MCS5GLPO, 0xffffffff, 13, "4mcsbw205glpo 4mcsbw20ul5glpo 4mcsbw405glpo"},
{HNBU_MCS5GMPO, 0xffffffff, 13, "4mcsbw205gmpo 4mcsbw20ul5gmpo 4mcsbw405gmpo"},
{HNBU_MCS5GHPO, 0xffffffff, 13, "4mcsbw205ghpo 4mcsbw20ul5ghpo 4mcsbw405ghpo"},
{HNBU_MCS32PO, 0xffffffff, 3, "2mcs32po"},
- {HNBU_LEG40DUPPO, 0xffffffff, 3, "2legofdm40duppo"},
- {HNBU_TEMPTHRESH, 0xffffffff, 7, "1tempthresh 0temps_period 0temps_hysteresis "
+ {HNBU_LEG40DUPPO, 0xffffffff, 3, "2legofdm40duppo"},
+ {HNBU_TEMPTHRESH, 0xffffffff, 7, "1tempthresh 0temps_period 0temps_hysteresis "
"1tempoffset 1tempsense_slope 0tempcorrx 0tempsense_option "
"1phycal_tempdelta"}, /* special case */
{HNBU_MUXENAB, 0xffffffff, 2, "1muxenab"},
@@ -971,19 +973,32 @@
{HNBU_MEAS_PWR, 0xfffff800, 5, "1measpower 1measpower1 1measpower2 2rawtempsense"},
{HNBU_PDOFF, 0xfffff800, 13, "2pdoffset40ma0 2pdoffset40ma1 2pdoffset40ma2 "
"2pdoffset80ma0 2pdoffset80ma1 2pdoffset80ma2"},
- {HNBU_ACPPR_2GPO, 0xfffff800, 5, "2dot11agofdmhrbw202gpo 2ofdmlrbw202gpo"},
- {HNBU_ACPPR_5GPO, 0xfffff800, 31, "4mcsbw805glpo 4mcsbw1605glpo 4mcsbw805gmpo "
- "4mcsbw1605gmpo 4mcsbw805ghpo 4mcsbw1605ghpo 2mcslr5glpo 2mcslr5gmpo 2mcslr5ghpo"},
- {HNBU_ACPPR_SBPO, 0xfffff800, 33, "2sb20in40hrpo 2sb20in80and160hr5glpo "
+ {HNBU_ACPPR_2GPO, 0xfffff800, 13, "2dot11agofdmhrbw202gpo 2ofdmlrbw202gpo "
+ "2sb20in40dot11agofdm2gpo 2sb20in80dot11agofdm2gpo 2sb20in40ofdmlrbw202gpo "
+ "2sb20in80ofdmlrbw202gpo"},
+ {HNBU_ACPPR_5GPO, 0xfffff800, 59, "4mcsbw805glpo 4mcsbw1605glpo 4mcsbw805gmpo "
+ "4mcsbw1605gmpo 4mcsbw805ghpo 4mcsbw1605ghpo 2mcslr5glpo 2mcslr5gmpo 2mcslr5ghpo "
+ "4mcsbw80p805glpo 4mcsbw80p805gmpo 4mcsbw80p805ghpo 4mcsbw80p805gx1po 2mcslr5gx1po "
+ "2mcslr5g80p80po 4mcsbw805gx1po 4mcsbw1605gx1po"},
+ {HNBU_MCS5Gx1PO, 0xfffff800, 9, "4mcsbw205gx1po 4mcsbw405gx1po"},
+ {HNBU_ACPPR_SBPO, 0xfffff800, 49, "2sb20in40hrpo 2sb20in80and160hr5glpo "
"2sb40and80hr5glpo 2sb20in80and160hr5gmpo 2sb40and80hr5gmpo 2sb20in80and160hr5ghpo "
"2sb40and80hr5ghpo 2sb20in40lrpo 2sb20in80and160lr5glpo 2sb40and80lr5glpo "
"2sb20in80and160lr5gmpo 2sb40and80lr5gmpo 2sb20in80and160lr5ghpo 2sb40and80lr5ghpo "
- "2dot11agduphrpo 2dot11agduplrpo"},
+ "4dot11agduphrpo 4dot11agduplrpo 2sb20in40and80hrpo 2sb20in40and80lrpo "
+ "2sb20in80and160hr5gx1po 2sb20in80and160lr5gx1po 2sb40and80hr5gx1po 2sb40and80lr5gx1po "
+ },
+ {HNBU_ACPPR_SB8080_PO, 0xfffff800, 23, "2sb2040and80in80p80hr5glpo "
+ "2sb2040and80in80p80lr5glpo 2sb2040and80in80p80hr5gmpo "
+ "2sb2040and80in80p80lr5gmpo 2sb2040and80in80p80hr5ghpo 2sb2040and80in80p80lr5ghpo "
+ "2sb2040and80in80p80hr5gx1po 2sb2040and80in80p80lr5gx1po 2sb20in80p80hr5gpo "
+ "2sb20in80p80lr5gpo 2dot11agduppo"},
{HNBU_NOISELVL, 0xfffff800, 16, "1noiselvl2ga0 1noiselvl2ga1 1noiselvl2ga2 "
"1*4noiselvl5ga0 1*4noiselvl5ga1 1*4noiselvl5ga2"},
{HNBU_RXGAIN_ERR, 0xfffff800, 16, "1rxgainerr2ga0 1rxgainerr2ga1 1rxgainerr2ga2 "
"1*4rxgainerr5ga0 1*4rxgainerr5ga1 1*4rxgainerr5ga2"},
{HNBU_AGBGA, 0xfffff800, 7, "1agbg0 1agbg1 1agbg2 1aga0 1aga1 1aga2"},
+ {HNBU_USBDESC_COMPOSITE, 0xffffffff, 3, "2usbdesc_composite"},
{HNBU_UUID, 0xffffffff, 17, "16uuid"},
{HNBU_WOWLGPIO, 0xffffffff, 2, "1wowl_gpio"},
{HNBU_ACRXGAINS_C0, 0xfffff800, 5, "0rxgains5gtrelnabypa0 0rxgains5gtrisoa0 "
diff --git a/drivers/net/wireless/bcmdhd/include/bcmutils.h b/drivers/net/wireless/bcmdhd/include/bcmutils.h
old mode 100755
new mode 100644
index 516b956..ef29f9c
--- a/drivers/net/wireless/bcmdhd/include/bcmutils.h
+++ b/drivers/net/wireless/bcmdhd/include/bcmutils.h
@@ -2,13 +2,13 @@
* Misc useful os-independent macros and functions.
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -16,12 +16,12 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: bcmutils.h 457888 2014-02-25 03:34:39Z $
+ * $Id: bcmutils.h 469595 2014-04-10 21:19:06Z $
*/
#ifndef _bcmutils_h_
@@ -66,6 +66,8 @@
#define bcm_tolower(c) (bcm_isupper((c)) ? ((c) + 'a' - 'A') : (c))
#define bcm_toupper(c) (bcm_islower((c)) ? ((c) + 'A' - 'a') : (c))
+#define CIRCULAR_ARRAY_FULL(rd_idx, wr_idx, max) ((((wr_idx) + 1) % (max)) == (rd_idx))
+
/* Buffer structure for collecting string-formatted data
* using bcm_bprintf() API.
* Use bcm_binit() to initialize before use
@@ -81,6 +83,8 @@
/* ** driver-only section ** */
#ifdef BCMDRIVER
#include <osl.h>
+#include <hnd_pktq.h>
+#include <hnd_pktpool.h>
#define GPIO_PIN_NOTDEFINED 0x20 /* Pin not defined */
@@ -101,253 +105,6 @@
} \
}
-/* osl multi-precedence packet queue */
-#define PKTQ_LEN_MAX 0xFFFF /* Max uint16 65535 packets */
-#ifndef PKTQ_LEN_DEFAULT
-#define PKTQ_LEN_DEFAULT 128 /* Max 128 packets */
-#endif
-#ifndef PKTQ_MAX_PREC
-#define PKTQ_MAX_PREC 16 /* Maximum precedence levels */
-#endif
-
-typedef struct pktq_prec {
- void *head; /* first packet to dequeue */
- void *tail; /* last packet to dequeue */
- uint16 len; /* number of queued packets */
- uint16 max; /* maximum number of queued packets */
-} pktq_prec_t;
-
-#ifdef PKTQ_LOG
-typedef struct {
- uint32 requested; /* packets requested to be stored */
- uint32 stored; /* packets stored */
- uint32 saved; /* packets saved,
- because a lowest priority queue has given away one packet
- */
- uint32 selfsaved; /* packets saved,
- because an older packet from the same queue has been dropped
- */
- uint32 full_dropped; /* packets dropped,
- because pktq is full with higher precedence packets
- */
- uint32 dropped; /* packets dropped because pktq per that precedence is full */
- uint32 sacrificed; /* packets dropped,
- in order to save one from a queue of a highest priority
- */
- uint32 busy; /* packets droped because of hardware/transmission error */
- uint32 retry; /* packets re-sent because they were not received */
- uint32 ps_retry; /* packets retried again prior to moving power save mode */
- uint32 suppress; /* packets which were suppressed and not transmitted */
- uint32 retry_drop; /* packets finally dropped after retry limit */
- uint32 max_avail; /* the high-water mark of the queue capacity for packets -
- goes to zero as queue fills
- */
- uint32 max_used; /* the high-water mark of the queue utilisation for packets -
- increases with use ('inverse' of max_avail)
- */
- uint32 queue_capacity; /* the maximum capacity of the queue */
- uint32 rtsfail; /* count of rts attempts that failed to receive cts */
- uint32 acked; /* count of packets sent (acked) successfully */
- uint32 txrate_succ; /* running total of phy rate of packets sent successfully */
- uint32 txrate_main; /* running totoal of primary phy rate of all packets */
- uint32 throughput; /* actual data transferred successfully */
- uint32 airtime; /* cumulative total medium access delay in useconds */
- uint32 _logtime; /* timestamp of last counter clear */
-} pktq_counters_t;
-
-typedef struct {
- uint32 _prec_log;
- pktq_counters_t* _prec_cnt[PKTQ_MAX_PREC]; /* Counters per queue */
-} pktq_log_t;
-#endif /* PKTQ_LOG */
-
-
-#define PKTQ_COMMON \
- uint16 num_prec; /* number of precedences in use */ \
- uint16 hi_prec; /* rapid dequeue hint (>= highest non-empty prec) */ \
- uint16 max; /* total max packets */ \
- uint16 len; /* total number of packets */
-
-/* multi-priority pkt queue */
-struct pktq {
- PKTQ_COMMON
- /* q array must be last since # of elements can be either PKTQ_MAX_PREC or 1 */
- struct pktq_prec q[PKTQ_MAX_PREC];
-#ifdef PKTQ_LOG
- pktq_log_t* pktqlog;
-#endif
-};
-
-/* simple, non-priority pkt queue */
-struct spktq {
- PKTQ_COMMON
- /* q array must be last since # of elements can be either PKTQ_MAX_PREC or 1 */
- struct pktq_prec q[1];
-};
-
-#define PKTQ_PREC_ITER(pq, prec) for (prec = (pq)->num_prec - 1; prec >= 0; prec--)
-
-/* fn(pkt, arg). return true if pkt belongs to if */
-typedef bool (*ifpkt_cb_t)(void*, int);
-
-#ifdef BCMPKTPOOL
-#define POOL_ENAB(pool) ((pool) && (pool)->inited)
-#define SHARED_POOL (pktpool_shared)
-#else /* BCMPKTPOOL */
-#define POOL_ENAB(bus) 0
-#define SHARED_POOL ((struct pktpool *)NULL)
-#endif /* BCMPKTPOOL */
-
-#ifdef BCMFRAGPOOL
-#define SHARED_FRAG_POOL (pktpool_shared_lfrag)
-#endif
-#define SHARED_RXFRAG_POOL (pktpool_shared_rxlfrag)
-
-
-#ifndef PKTPOOL_LEN_MAX
-#define PKTPOOL_LEN_MAX 40
-#endif /* PKTPOOL_LEN_MAX */
-#define PKTPOOL_CB_MAX 3
-
-struct pktpool;
-typedef void (*pktpool_cb_t)(struct pktpool *pool, void *arg);
-typedef struct {
- pktpool_cb_t cb;
- void *arg;
-} pktpool_cbinfo_t;
-/* call back fn extension to populate host address in pool pkt */
-typedef int (*pktpool_cb_extn_t)(struct pktpool *pool, void *arg, void* pkt);
-typedef struct {
- pktpool_cb_extn_t cb;
- void *arg;
-} pktpool_cbextn_info_t;
-
-
-#ifdef BCMDBG_POOL
-/* pkt pool debug states */
-#define POOL_IDLE 0
-#define POOL_RXFILL 1
-#define POOL_RXDH 2
-#define POOL_RXD11 3
-#define POOL_TXDH 4
-#define POOL_TXD11 5
-#define POOL_AMPDU 6
-#define POOL_TXENQ 7
-
-typedef struct {
- void *p;
- uint32 cycles;
- uint32 dur;
-} pktpool_dbg_t;
-
-typedef struct {
- uint8 txdh; /* tx to host */
- uint8 txd11; /* tx to d11 */
- uint8 enq; /* waiting in q */
- uint8 rxdh; /* rx from host */
- uint8 rxd11; /* rx from d11 */
- uint8 rxfill; /* dma_rxfill */
- uint8 idle; /* avail in pool */
-} pktpool_stats_t;
-#endif /* BCMDBG_POOL */
-
-typedef struct pktpool {
- bool inited; /* pktpool_init was successful */
- uint8 type; /* type of lbuf: basic, frag, etc */
- uint8 id; /* pktpool ID: index in registry */
- bool istx; /* direction: transmit or receive data path */
-
- void * freelist; /* free list: see PKTNEXTFREE(), PKTSETNEXTFREE() */
- uint16 avail; /* number of packets in pool's free list */
- uint16 len; /* number of packets managed by pool */
- uint16 maxlen; /* maximum size of pool <= PKTPOOL_LEN_MAX */
- uint16 plen; /* size of pkt buffer, excluding lbuf|lbuf_frag */
-
- bool empty;
- uint8 cbtoggle;
- uint8 cbcnt;
- uint8 ecbcnt;
- bool emptycb_disable;
- pktpool_cbinfo_t *availcb_excl;
- pktpool_cbinfo_t cbs[PKTPOOL_CB_MAX];
- pktpool_cbinfo_t ecbs[PKTPOOL_CB_MAX];
- pktpool_cbextn_info_t cbext;
-#ifdef BCMDBG_POOL
- uint8 dbg_cbcnt;
- pktpool_cbinfo_t dbg_cbs[PKTPOOL_CB_MAX];
- uint16 dbg_qlen;
- pktpool_dbg_t dbg_q[PKTPOOL_LEN_MAX + 1];
-#endif
-} pktpool_t;
-
-extern pktpool_t *pktpool_shared;
-#ifdef BCMFRAGPOOL
-extern pktpool_t *pktpool_shared_lfrag;
-#endif
-extern pktpool_t *pktpool_shared_rxlfrag;
-
-/* Incarnate a pktpool registry. On success returns total_pools. */
-extern int pktpool_attach(osl_t *osh, uint32 total_pools);
-extern int pktpool_dettach(osl_t *osh); /* Relinquish registry */
-
-extern int pktpool_init(osl_t *osh, pktpool_t *pktp, int *pktplen, int plen, bool istx, uint8 type);
-extern int pktpool_deinit(osl_t *osh, pktpool_t *pktp);
-extern int pktpool_fill(osl_t *osh, pktpool_t *pktp, bool minimal);
-extern void* pktpool_get(pktpool_t *pktp);
-extern void pktpool_free(pktpool_t *pktp, void *p);
-extern int pktpool_add(pktpool_t *pktp, void *p);
-extern int pktpool_avail_notify_normal(osl_t *osh, pktpool_t *pktp);
-extern int pktpool_avail_notify_exclusive(osl_t *osh, pktpool_t *pktp, pktpool_cb_t cb);
-extern int pktpool_avail_register(pktpool_t *pktp, pktpool_cb_t cb, void *arg);
-extern int pktpool_empty_register(pktpool_t *pktp, pktpool_cb_t cb, void *arg);
-extern int pktpool_setmaxlen(pktpool_t *pktp, uint16 maxlen);
-extern int pktpool_setmaxlen_strict(osl_t *osh, pktpool_t *pktp, uint16 maxlen);
-extern void pktpool_emptycb_disable(pktpool_t *pktp, bool disable);
-extern bool pktpool_emptycb_disabled(pktpool_t *pktp);
-int pktpool_hostaddr_fill_register(pktpool_t *pktp, pktpool_cb_extn_t cb, void *arg);
-#define POOLPTR(pp) ((pktpool_t *)(pp))
-#define POOLID(pp) (POOLPTR(pp)->id)
-
-#define POOLSETID(pp, ppid) (POOLPTR(pp)->id = (ppid))
-
-#define pktpool_len(pp) (POOLPTR(pp)->len)
-#define pktpool_avail(pp) (POOLPTR(pp)->avail)
-#define pktpool_plen(pp) (POOLPTR(pp)->plen)
-#define pktpool_maxlen(pp) (POOLPTR(pp)->maxlen)
-
-
-/*
- * ----------------------------------------------------------------------------
- * A pool ID is assigned with a pkt pool during pool initialization. This is
- * done by maintaining a registry of all initialized pools, and the registry
- * index at which the pool is registered is used as the pool's unique ID.
- * ID 0 is reserved and is used to signify an invalid pool ID.
- * All packets henceforth allocated from a pool will be tagged with the pool's
- * unique ID. Packets allocated from the heap will use the reserved ID = 0.
- * Packets with non-zero pool id signify that they were allocated from a pool.
- * A maximum of 15 pools are supported, allowing a 4bit pool ID to be used
- * in place of a 32bit pool pointer in each packet.
- * ----------------------------------------------------------------------------
- */
-#define PKTPOOL_INVALID_ID (0)
-#define PKTPOOL_MAXIMUM_ID (15)
-
-/* Registry of pktpool(s) */
-extern pktpool_t *pktpools_registry[PKTPOOL_MAXIMUM_ID + 1];
-
-/* Pool ID to/from Pool Pointer converters */
-#define PKTPOOL_ID2PTR(id) (pktpools_registry[id])
-#define PKTPOOL_PTR2ID(pp) (POOLID(pp))
-
-
-#ifdef BCMDBG_POOL
-extern int pktpool_dbg_register(pktpool_t *pktp, pktpool_cb_t cb, void *arg);
-extern int pktpool_start_trigger(pktpool_t *pktp, void *p);
-extern int pktpool_dbg_dump(pktpool_t *pktp);
-extern int pktpool_dbg_notify(pktpool_t *pktp);
-extern int pktpool_stats_dump(pktpool_t *pktp, pktpool_stats_t *stats);
-#endif /* BCMDBG_POOL */
-
/* forward definition of ether_addr structure used by some function prototypes */
struct ether_addr;
@@ -355,60 +112,74 @@
extern int ether_isbcast(const void *ea);
extern int ether_isnulladdr(const void *ea);
-/* operations on a specific precedence in packet queue */
+#define BCM_MAC_RXCPL_IDX_BITS 12
+#define BCM_MAX_RXCPL_IDX_INVALID 0
+#define BCM_MAC_RXCPL_IFIDX_BITS 3
+#define BCM_MAC_RXCPL_DOT11_BITS 1
+#define BCM_MAX_RXCPL_IFIDX ((1 << BCM_MAC_RXCPL_IFIDX_BITS) - 1)
+#define BCM_MAC_RXCPL_FLAG_BITS 4
+#define BCM_RXCPL_FLAGS_IN_TRANSIT 0x1
+#define BCM_RXCPL_FLAGS_FIRST_IN_FLUSHLIST 0x2
+#define BCM_RXCPL_FLAGS_RXCPLVALID 0x4
+#define BCM_RXCPL_FLAGS_RSVD 0x8
-#define pktq_psetmax(pq, prec, _max) ((pq)->q[prec].max = (_max))
-#define pktq_pmax(pq, prec) ((pq)->q[prec].max)
-#define pktq_plen(pq, prec) ((pq)->q[prec].len)
-#define pktq_pavail(pq, prec) ((pq)->q[prec].max - (pq)->q[prec].len)
-#define pktq_pfull(pq, prec) ((pq)->q[prec].len >= (pq)->q[prec].max)
-#define pktq_pempty(pq, prec) ((pq)->q[prec].len == 0)
+#define BCM_RXCPL_SET_IN_TRANSIT(a) ((a)->rxcpl_id.flags |= BCM_RXCPL_FLAGS_IN_TRANSIT)
+#define BCM_RXCPL_CLR_IN_TRANSIT(a) ((a)->rxcpl_id.flags &= ~BCM_RXCPL_FLAGS_IN_TRANSIT)
+#define BCM_RXCPL_IN_TRANSIT(a) ((a)->rxcpl_id.flags & BCM_RXCPL_FLAGS_IN_TRANSIT)
-#define pktq_ppeek(pq, prec) ((pq)->q[prec].head)
-#define pktq_ppeek_tail(pq, prec) ((pq)->q[prec].tail)
+#define BCM_RXCPL_SET_FRST_IN_FLUSH(a) ((a)->rxcpl_id.flags |= BCM_RXCPL_FLAGS_FIRST_IN_FLUSHLIST)
+#define BCM_RXCPL_CLR_FRST_IN_FLUSH(a) ((a)->rxcpl_id.flags &= ~BCM_RXCPL_FLAGS_FIRST_IN_FLUSHLIST)
+#define BCM_RXCPL_FRST_IN_FLUSH(a) ((a)->rxcpl_id.flags & BCM_RXCPL_FLAGS_FIRST_IN_FLUSHLIST)
-extern void *pktq_penq(struct pktq *pq, int prec, void *p);
-extern void *pktq_penq_head(struct pktq *pq, int prec, void *p);
-extern void *pktq_pdeq(struct pktq *pq, int prec);
-extern void *pktq_pdeq_prev(struct pktq *pq, int prec, void *prev_p);
-extern void *pktq_pdeq_with_fn(struct pktq *pq, int prec, ifpkt_cb_t fn, int arg);
-extern void *pktq_pdeq_tail(struct pktq *pq, int prec);
-/* Empty the queue at particular precedence level */
-extern void pktq_pflush(osl_t *osh, struct pktq *pq, int prec, bool dir,
- ifpkt_cb_t fn, int arg);
-/* Remove a specified packet from its queue */
-extern bool pktq_pdel(struct pktq *pq, void *p, int prec);
+#define BCM_RXCPL_SET_VALID_INFO(a) ((a)->rxcpl_id.flags |= BCM_RXCPL_FLAGS_RXCPLVALID)
+#define BCM_RXCPL_CLR_VALID_INFO(a) ((a)->rxcpl_id.flags &= ~BCM_RXCPL_FLAGS_RXCPLVALID)
+#define BCM_RXCPL_VALID_INFO(a) (((a)->rxcpl_id.flags & BCM_RXCPL_FLAGS_RXCPLVALID) ? TRUE : FALSE)
-/* operations on a set of precedences in packet queue */
-extern int pktq_mlen(struct pktq *pq, uint prec_bmp);
-extern void *pktq_mdeq(struct pktq *pq, uint prec_bmp, int *prec_out);
-extern void *pktq_mpeek(struct pktq *pq, uint prec_bmp, int *prec_out);
+struct reorder_rxcpl_id_list {
+ uint16 head;
+ uint16 tail;
+ uint32 cnt;
+};
-/* operations on packet queue as a whole */
+typedef struct rxcpl_id {
+ uint32 idx : BCM_MAC_RXCPL_IDX_BITS;
+ uint32 next_idx : BCM_MAC_RXCPL_IDX_BITS;
+ uint32 ifidx : BCM_MAC_RXCPL_IFIDX_BITS;
+ uint32 dot11 : BCM_MAC_RXCPL_DOT11_BITS;
+ uint32 flags : BCM_MAC_RXCPL_FLAG_BITS;
+} rxcpl_idx_id_t;
-#define pktq_len(pq) ((int)(pq)->len)
-#define pktq_max(pq) ((int)(pq)->max)
-#define pktq_avail(pq) ((int)((pq)->max - (pq)->len))
-#define pktq_full(pq) ((pq)->len >= (pq)->max)
-#define pktq_empty(pq) ((pq)->len == 0)
+typedef struct rxcpl_data_len {
+ uint32 metadata_len_w : 6;
+ uint32 dataoffset: 10;
+ uint32 datalen : 16;
+} rxcpl_data_len_t;
-/* operations for single precedence queues */
-#define pktenq(pq, p) pktq_penq(((struct pktq *)(void *)pq), 0, (p))
-#define pktenq_head(pq, p) pktq_penq_head(((struct pktq *)(void *)pq), 0, (p))
-#define pktdeq(pq) pktq_pdeq(((struct pktq *)(void *)pq), 0)
-#define pktdeq_tail(pq) pktq_pdeq_tail(((struct pktq *)(void *)pq), 0)
-#define pktqinit(pq, len) pktq_init(((struct pktq *)(void *)pq), 1, len)
+typedef struct rxcpl_info {
+ rxcpl_idx_id_t rxcpl_id;
+ uint32 host_pktref;
+ union {
+ rxcpl_data_len_t rxcpl_len;
+ struct rxcpl_info *free_next;
+ };
+} rxcpl_info_t;
-extern void pktq_init(struct pktq *pq, int num_prec, int max_len);
-extern void pktq_set_max_plen(struct pktq *pq, int prec, int max_len);
+/* rx completion list */
+typedef struct bcm_rxcplid_list {
+ uint32 max;
+ uint32 avail;
+ rxcpl_info_t *rxcpl_ptr;
+ rxcpl_info_t *free_list;
+} bcm_rxcplid_list_t;
-/* prec_out may be NULL if caller is not interested in return value */
-extern void *pktq_deq(struct pktq *pq, int *prec_out);
-extern void *pktq_deq_tail(struct pktq *pq, int *prec_out);
-extern void *pktq_peek(struct pktq *pq, int *prec_out);
-extern void *pktq_peek_tail(struct pktq *pq, int *prec_out);
-extern void pktq_flush(osl_t *osh, struct pktq *pq, bool dir, ifpkt_cb_t fn, int arg);
+extern bool bcm_alloc_rxcplid_list(osl_t *osh, uint32 max);
+extern rxcpl_info_t * bcm_alloc_rxcplinfo(void);
+extern void bcm_free_rxcplinfo(rxcpl_info_t *ptr);
+extern void bcm_chain_rxcplid(uint16 first, uint16 next);
+extern rxcpl_info_t *bcm_id2rxcplinfo(uint16 id);
+extern uint16 bcm_rxcplinfo2id(rxcpl_info_t *ptr);
+extern rxcpl_info_t *bcm_rxcpllist_end(rxcpl_info_t *ptr, uint32 *count);
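The BCM_RXCPL_* macros above simply set, clear, and test bits inside the 4-bit `flags` field of the completion id. A minimal host-side sketch of the same pattern (all `_sketch` names are illustrative stand-ins, not the driver's types):

```c
#include <assert.h>
#include <stdint.h>

/* stand-in for rxcpl_idx_id_t: 12+12+3+1+4 = 32 bits */
struct rxcpl_id_sketch {
    uint32_t idx      : 12;
    uint32_t next_idx : 12;
    uint32_t ifidx    : 3;
    uint32_t dot11    : 1;
    uint32_t flags    : 4;
};

#define FLAGS_IN_TRANSIT 0x1

/* same shape as BCM_RXCPL_SET/CLR/IN_TRANSIT */
static void set_in_transit(struct rxcpl_id_sketch *id)   { id->flags |= FLAGS_IN_TRANSIT; }
static void clr_in_transit(struct rxcpl_id_sketch *id)   { id->flags &= (uint32_t)~FLAGS_IN_TRANSIT; }
static int  in_transit(const struct rxcpl_id_sketch *id) { return (id->flags & FLAGS_IN_TRANSIT) != 0; }
```

Setting or clearing one flag leaves the other bitfield members untouched, which is why the driver can keep id, interface index, and flags packed into one 32-bit word.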
/* externs */
/* packet */
@@ -444,11 +215,13 @@
#define DSCP_EF 0x2E
extern uint pktsetprio(void *pkt, bool update_vtag);
+extern bool pktgetdscp(uint8 *pktdata, uint pktlen, uint8 *dscp);
/* string */
extern int bcm_atoi(const char *s);
extern ulong bcm_strtoul(const char *cp, char **endp, uint base);
extern char *bcmstrstr(const char *haystack, const char *needle);
+extern char *bcmstrnstr(const char *s, uint s_len, const char *substr, uint substr_len);
extern char *bcmstrcat(char *dest, const char *src);
extern char *bcmstrncat(char *dest, const char *src, uint size);
extern ulong wchar2ascii(char *abuf, ushort *wbuf, ushort wbuflen, ulong abuflen);
@@ -484,6 +257,8 @@
#define bcmdumplogent(buf, idx) -1
#define TSF_TICKS_PER_MS 1000
+#define TS_ENTER 0xdeadbeef /* Timestamp profiling enter */
+#define TS_EXIT 0xbeefcafe /* Timestamp profiling exit */
#define bcmtslog(tstamp, fmt, a1, a2)
#define bcmprinttslogs()
@@ -528,7 +303,7 @@
#if defined(WLTINYDUMP) || defined(WLMSG_INFORM) || defined(WLMSG_ASSOC) || \
defined(WLMSG_PRPKT) || defined(WLMSG_WSEC)
extern int bcm_format_ssid(char* buf, const uchar ssid[], uint ssid_len);
-#endif
+#endif
#endif /* BCMDRIVER */
/* Base type definitions */
@@ -784,6 +559,7 @@
#define isclr(a, i) ((((const uint8 *)a)[(i) / NBBY] & (1 << ((i) % NBBY))) == 0)
#endif
#endif /* setbit */
+extern void set_bitrange(void *array, uint start, uint end, uint maxbit);
#define isbitset(a, i) (((a) & (1 << (i))) != 0)
@@ -821,9 +597,9 @@
return ((*a >> pos) & MSK); \
}
-DECLARE_MAP_API(2, 4, 1, 15U, 0x0003) /* setbit2() and getbit2() */
-DECLARE_MAP_API(4, 3, 2, 7U, 0x000F) /* setbit4() and getbit4() */
-
+DECLARE_MAP_API(2, 4, 1, 15U, 0x0003) /* setbit2() and getbit2() */
+DECLARE_MAP_API(4, 3, 2, 7U, 0x000F) /* setbit4() and getbit4() */
+DECLARE_MAP_API(8, 2, 3, 3U, 0x00FF) /* setbit8() and getbit8() */
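DECLARE_MAP_API generates setbitN()/getbitN() helpers that pack N-bit values into a byte array (four 2-bit slots per byte for the setbit2 variant). A standalone sketch of what the 2-bit variant is assumed to do; the macro-generated code may differ in detail:

```c
#include <assert.h>
#include <stdint.h>

/* store a 2-bit value at logical slot 'pos' in byte array a (4 slots/byte) */
static void setbit2_sketch(uint8_t *a, unsigned pos, uint8_t val) {
    unsigned byte  = pos >> 2;          /* which byte holds the slot */
    unsigned shift = (pos & 3u) << 1;   /* bit offset within that byte */
    a[byte] = (uint8_t)((a[byte] & ~(0x3u << shift)) | ((val & 0x3u) << shift));
}

static uint8_t getbit2_sketch(const uint8_t *a, unsigned pos) {
    return (uint8_t)((a[pos >> 2] >> ((pos & 3u) << 1)) & 0x3u);
}
```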
/* basic mux operation - can be optimized on several architectures */
#define MUX(pred, true, false) ((pred) ? (true) : (false))
@@ -944,24 +720,68 @@
/* IE parsing */
+/* packing is required if struct is passed across the bus */
+#include <packed_section_start.h>
+
/* tag_ID/length/value_buffer tuple */
-typedef struct bcm_tlv {
+typedef BWL_PRE_PACKED_STRUCT struct bcm_tlv {
uint8 id;
uint8 len;
uint8 data[1];
-} bcm_tlv_t;
+} BWL_POST_PACKED_STRUCT bcm_tlv_t;
+
+/* bcm tlv w/ 16 bit id/len */
+typedef BWL_PRE_PACKED_STRUCT struct bcm_xtlv {
+ uint16 id;
+ uint16 len;
+ uint8 data[1];
+} BWL_POST_PACKED_STRUCT bcm_xtlv_t;
+
+/* no default structure packing */
+#include <packed_section_end.h>
+
+
+/* descriptor of xtlv data src or dst */
+typedef struct {
+ uint16 type;
+ uint16 len;
+ void *ptr; /* ptr to memory location */
+} xtlv_desc_t;
+
+/* xtlv options */
+#define BCM_XTLV_OPTION_NONE 0x0000
+#define BCM_XTLV_OPTION_ALIGN32 0x0001
+
+typedef uint16 bcm_xtlv_opts_t;
+struct bcm_xtlvbuf {
+ bcm_xtlv_opts_t opts;
+ uint16 size;
+ uint8 *head; /* point to head of buffer */
+ uint8 *buf; /* current position of buffer */
+ /* allocated buffer may follow, but not necessarily */
+};
+typedef struct bcm_xtlvbuf bcm_xtlvbuf_t;
#define BCM_TLV_MAX_DATA_SIZE (255)
-
+#define BCM_XTLV_MAX_DATA_SIZE (65535)
#define BCM_TLV_HDR_SIZE (OFFSETOF(bcm_tlv_t, data))
+#define BCM_XTLV_HDR_SIZE (OFFSETOF(bcm_xtlv_t, data))
+/* LEN only stores the value's length without padding */
+#define BCM_XTLV_LEN(elt) ltoh16_ua(&(elt->len))
+#define BCM_XTLV_ID(elt) ltoh16_ua(&(elt->id))
+/* entire size of the XTLV including header, data, and optional padding */
+#define BCM_XTLV_SIZE(elt, opts) bcm_xtlv_size(elt, opts)
+#define bcm_valid_xtlv(elt, buflen, opts) (elt && ((int)(buflen) >= (int)BCM_XTLV_SIZE(elt, opts)))
+
/* Check that bcm_tlv_t fits into the given buflen */
#define bcm_valid_tlv(elt, buflen) (\
((int)(buflen) >= (int)BCM_TLV_HDR_SIZE) && \
((int)(buflen) >= (int)(BCM_TLV_HDR_SIZE + (elt)->len)))
-
extern bcm_tlv_t *bcm_next_tlv(bcm_tlv_t *elt, int *buflen);
extern bcm_tlv_t *bcm_parse_tlvs(void *buf, int buflen, uint key);
+extern bcm_tlv_t *bcm_parse_tlvs_min_bodylen(void *buf, int buflen, uint key, int min_bodylen);
+
extern bcm_tlv_t *bcm_parse_ordered_tlvs(void *buf, int buflen, uint key);
extern bcm_tlv_t *bcm_find_vendor_ie(void *tlvs, int tlvs_len, const char *voui, uint8 *type,
@@ -974,6 +794,62 @@
extern uint8 *bcm_copy_tlv(const void *src, uint8 *dst);
extern uint8 *bcm_copy_tlv_safe(const void *src, uint8 *dst, int dst_maxlen);
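bcm_parse_tlvs() and friends walk a stream of 8-bit id / 8-bit len / value triples, stopping when a truncated element would fail the bcm_valid_tlv() check. A self-contained sketch of that walk over a flat byte view (find_tlv_sketch is a hypothetical helper, not the driver function):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* flat view of a bcm_tlv_t stream: id(1) len(1) data(len) ... */
static const uint8_t *find_tlv_sketch(const uint8_t *buf, size_t buflen, uint8_t key) {
    while (buflen >= 2) {
        uint8_t id = buf[0], len = buf[1];
        if ((size_t)len + 2 > buflen)
            return NULL;              /* truncated element: stop, as bcm_valid_tlv() would */
        if (id == key)
            return buf;               /* points at the matching element's id byte */
        buf    += (size_t)len + 2;
        buflen -= (size_t)len + 2;
    }
    return NULL;
}
```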
+/* xtlv */
+
+/* return the next xtlv element and update the remaining buffer length. The
+ * length consumed includes padding as specified by options
+ */
+extern bcm_xtlv_t *bcm_next_xtlv(bcm_xtlv_t *elt, int *buflen, bcm_xtlv_opts_t opts);
+
+/* initialize an xtlv buffer. Use options specified for packing/unpacking using
+ * the buffer. Caller is responsible for allocating both buffers.
+ */
+extern int bcm_xtlv_buf_init(bcm_xtlvbuf_t *tlv_buf, uint8 *buf, uint16 len,
+ bcm_xtlv_opts_t opts);
+
+extern uint16 bcm_xtlv_buf_len(struct bcm_xtlvbuf *tbuf);
+extern uint16 bcm_xtlv_buf_rlen(struct bcm_xtlvbuf *tbuf);
+extern uint8 *bcm_xtlv_buf(struct bcm_xtlvbuf *tbuf);
+extern uint8 *bcm_xtlv_head(struct bcm_xtlvbuf *tbuf);
+extern int bcm_xtlv_put_data(bcm_xtlvbuf_t *tbuf, uint16 type, const void *data, uint16 dlen);
+extern int bcm_xtlv_put_8(bcm_xtlvbuf_t *tbuf, uint16 type, const int8 data);
+extern int bcm_xtlv_put_16(bcm_xtlvbuf_t *tbuf, uint16 type, const int16 data);
+extern int bcm_xtlv_put_32(bcm_xtlvbuf_t *tbuf, uint16 type, const int32 data);
+extern int bcm_unpack_xtlv_entry(uint8 **buf, uint16 xpct_type, uint16 xpct_len,
+ void *dst, bcm_xtlv_opts_t opts);
+extern int bcm_pack_xtlv_entry(uint8 **buf, uint16 *buflen, uint16 type, uint16 len,
+ void *src, bcm_xtlv_opts_t opts);
+extern int bcm_xtlv_size(const bcm_xtlv_t *elt, bcm_xtlv_opts_t opts);
+
+/* callback for unpacking xtlv from a buffer into context. */
+typedef int (bcm_xtlv_unpack_cbfn_t)(void *ctx, uint8 *buf, uint16 type, uint16 len);
+
+/* unpack a tlv buffer using buffer, options, and callback */
+extern int bcm_unpack_xtlv_buf(void *ctx, uint8 *buf, uint16 buflen,
+ bcm_xtlv_opts_t opts, bcm_xtlv_unpack_cbfn_t *cbfn);
+
+/* unpack a set of tlvs from the buffer using provided xtlv desc */
+extern int bcm_unpack_xtlv_buf_to_mem(void *buf, int *buflen, xtlv_desc_t *items,
+ bcm_xtlv_opts_t opts);
+
+/* pack a set of tlvs into buffer using provided xtlv desc */
+extern int bcm_pack_xtlv_buf_from_mem(void **buf, uint16 *buflen, xtlv_desc_t *items,
+ bcm_xtlv_opts_t opts);
+
+/* callback to return the next tlv id and len to pack, whether more tlvs are
+ * to come, and options such as alignment
+ */
+typedef bool (*bcm_pack_xtlv_next_info_cbfn_t)(void *ctx, uint16 *tlv_id, uint16 *tlv_len);
+
+/* callback to pack the tlv into length validated buffer */
+typedef void (*bcm_pack_xtlv_pack_next_cbfn_t)(void *ctx,
+ uint16 tlv_id, uint16 tlv_len, uint8* buf);
+
+/* pack a set of tlvs into buffer using get_next to iterate */
+int bcm_pack_xtlv_buf(void *ctx, void *tlv_buf, uint16 buflen,
+ bcm_xtlv_opts_t opts, bcm_pack_xtlv_next_info_cbfn_t get_next,
+ bcm_pack_xtlv_pack_next_cbfn_t pack_next, int *outlen);
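With BCM_XTLV_OPTION_ALIGN32, BCM_XTLV_SIZE() is described above as covering header, data, and optional padding, while BCM_XTLV_LEN() reads the unpadded little-endian length. A sketch of that size computation, assuming 32-bit rounding of header-plus-payload (an assumption based on the comments, not on the bcm_xtlv_size() source):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* total size of one 16-bit id/len xtlv element viewed as raw bytes:
 * 4-byte header + payload, optionally rounded up to a 4-byte boundary */
static size_t xtlv_size_sketch(const uint8_t *elt, int align32) {
    uint16_t len = (uint16_t)(elt[2] | (elt[3] << 8)); /* little-endian, like ltoh16_ua */
    size_t sz = 4u + len;
    if (align32)
        sz = (sz + 3u) & ~(size_t)3u;
    return sz;
}
```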
+
/* bcmerror */
extern const char *bcmerrorstr(int bcmerror);
@@ -1114,6 +990,35 @@
extern void bcm_mwbmap_audit(struct bcm_mwbmap * mwbmap_hdl);
/* End - Multiword bitmap based small Id allocator. */
+
+
+/* INTERFACE: Simple unique 16bit Id Allocator using a stack implementation. */
+
+#define ID16_INVALID ((uint16)(~0))
+
+/*
+ * Construct a 16bit id allocator, managing 16bit ids in the range:
+ * [start_val16 .. start_val16+total_ids)
+ * Note: start_val16 is inclusive.
+ * Returns an opaque handle to the 16bit id allocator.
+ */
+extern void * id16_map_init(osl_t *osh, uint16 total_ids, uint16 start_val16);
+extern void * id16_map_fini(osl_t *osh, void * id16_map_hndl);
+extern void id16_map_clear(void * id16_map_hndl, uint16 total_ids, uint16 start_val16);
+
+/* Allocate a unique 16bit id */
+extern uint16 id16_map_alloc(void * id16_map_hndl);
+
+/* Free a 16bit id value into the id16 allocator */
+extern void id16_map_free(void * id16_map_hndl, uint16 val16);
+
+/* Get the number of failures encountered during id allocation. */
+extern uint32 id16_map_failures(void * id16_map_hndl);
+
+/* Audit the 16bit id allocator state. */
+extern bool id16_map_audit(void * id16_map_hndl);
+/* End - Simple 16bit Id Allocator. */
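The comments describe id16_map as a stack-backed allocator over [start_val16 .. start_val16+total_ids). A minimal self-contained sketch of such an allocator (the real implementation's internals are not shown in this header; all `_sketch` names are illustrative):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define ID16_INVALID_SKETCH ((uint16_t)~0u)

typedef struct {
    uint16_t *stack;   /* free ids, top of stack at stack[top-1] */
    uint16_t  top;
    uint16_t  total;
} id16_sketch_t;

static id16_sketch_t *id16_sketch_init(uint16_t total, uint16_t start) {
    id16_sketch_t *m = malloc(sizeof(*m));
    if (!m) return NULL;
    m->stack = malloc((size_t)total * sizeof(uint16_t));
    if (!m->stack) { free(m); return NULL; }
    /* push ids in reverse so the lowest id is allocated first */
    for (uint16_t i = 0; i < total; i++)
        m->stack[i] = (uint16_t)(start + total - 1 - i);
    m->top = total;
    m->total = total;
    return m;
}

static uint16_t id16_sketch_alloc(id16_sketch_t *m) {
    return m->top ? m->stack[--m->top] : ID16_INVALID_SKETCH;
}

static void id16_sketch_free(id16_sketch_t *m, uint16_t id) {
    m->stack[m->top++] = id;   /* caller must not double-free an id */
}
```

Allocation and release are both O(1), which is why a stack is a natural fit for unique-id pools in a driver fast path.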
+
#endif /* BCMDRIVER */
extern void bcm_uint64_right_shift(uint32* r, uint32 a_high, uint32 a_low, uint32 b);
@@ -1121,10 +1026,168 @@
void bcm_add_64(uint32* r_hi, uint32* r_lo, uint32 offset);
void bcm_sub_64(uint32* r_hi, uint32* r_lo, uint32 offset);
+/* calculate checksum for ip header, tcp / udp header / data */
+uint16 bcm_ip_cksum(uint8 *buf, uint32 len, uint32 sum);
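bcm_ip_cksum() computes the standard one's-complement internet checksum; the third argument presumably lets a caller seed the sum, e.g. with a TCP/UDP pseudo-header. A self-contained sketch of that algorithm in the RFC 1071 style (the driver's byte-order handling may differ):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* one's-complement checksum over buf, seeded with 'sum' */
static uint16_t ip_cksum_sketch(const uint8_t *buf, size_t len, uint32_t sum) {
    while (len > 1) {
        sum += (uint32_t)(buf[0] << 8) | buf[1]; /* 16-bit words, network order */
        buf += 2;
        len -= 2;
    }
    if (len)
        sum += (uint32_t)buf[0] << 8;            /* odd trailing byte, zero-padded */
    while (sum >> 16)
        sum = (sum & 0xFFFFu) + (sum >> 16);     /* fold carries back in */
    return (uint16_t)~sum;
}
```

A useful property of this checksum: recomputing it over a packet that already contains a correct checksum field yields zero, which is how receivers verify headers.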
+
+#ifndef _dll_t_
+#define _dll_t_
+/*
+ * -----------------------------------------------------------------------------
+ * Double Linked List Macros
+ * -----------------------------------------------------------------------------
+ *
+ * All dll operations must be performed on a pre-initialized node.
+ * Inserting an uninitialized node into a list effectively initializes it.
+ *
+ * When a node is deleted from a list, re-initialize it to guard against the
+ * corruption caused by a double deletion. Initialization may be skipped if the
+ * node is immediately inserted into another list.
+ *
+ * By placing a dll_t element at the start of a struct, you may cast a dll_t *
+ * to the struct or vice versa.
+ *
+ * Example of declaring and initializing someList, then inserting nodeA, nodeB, nodeC
+ *
+ * typedef struct item {
+ * dll_t node;
+ * int someData;
+ * } Item_t;
+ * Item_t nodeA, nodeB, nodeC;
+ * nodeA.someData = 11111, nodeB.someData = 22222, nodeC.someData = 33333;
+ *
+ * dll_t someList;
+ * dll_init(&someList);
+ *
+ * dll_append(&someList, (dll_t *) &nodeA);
+ * dll_prepend(&someList, &nodeB.node);
+ * dll_insert((dll_t *)&nodeC, &nodeA.node);
+ *
+ * dll_delete((dll_t *) &nodeB);
+ *
+ * Example of a for loop walking someList via item_p / next_p
+ *
+ * extern void mydisplay(Item_t * item_p);
+ *
+ * dll_t * item_p, * next_p;
+ * for (item_p = dll_head_p(&someList); ! dll_end(&someList, item_p);
+ * item_p = next_p)
+ * {
+ * next_p = dll_next_p(item_p);
+ * ... use item_p at will, including removing it from list ...
+ * mydisplay((Item_t *)item_p);
+ * }
+ *
+ * -----------------------------------------------------------------------------
+ */
+typedef struct dll {
+ struct dll * next_p;
+ struct dll * prev_p;
+} dll_t;
+
+static INLINE void
+dll_init(dll_t *node_p)
+{
+ node_p->next_p = node_p;
+ node_p->prev_p = node_p;
+}
+/* dll accessors returning a pointer to dll_t */
+
+static INLINE dll_t *
+dll_head_p(dll_t *list_p)
+{
+ return list_p->next_p;
+}
+
+
+static INLINE dll_t *
+dll_tail_p(dll_t *list_p)
+{
+ return (list_p)->prev_p;
+}
+
+
+static INLINE dll_t *
+dll_next_p(dll_t *node_p)
+{
+ return (node_p)->next_p;
+}
+
+
+static INLINE dll_t *
+dll_prev_p(dll_t *node_p)
+{
+	return (node_p)->prev_p;
+}
+
+
+static INLINE bool
+dll_empty(dll_t *list_p)
+{
+ return ((list_p)->next_p == (list_p));
+}
+
+
+static INLINE bool
+dll_end(dll_t *list_p, dll_t * node_p)
+{
+ return (list_p == node_p);
+}
+
+
+/* inserts the node new_p "after" the node at_p */
+static INLINE void
+dll_insert(dll_t *new_p, dll_t * at_p)
+{
+ new_p->next_p = at_p->next_p;
+ new_p->prev_p = at_p;
+ at_p->next_p = new_p;
+ (new_p->next_p)->prev_p = new_p;
+}
+
+static INLINE void
+dll_append(dll_t *list_p, dll_t *node_p)
+{
+ dll_insert(node_p, dll_tail_p(list_p));
+}
+
+static INLINE void
+dll_prepend(dll_t *list_p, dll_t *node_p)
+{
+ dll_insert(node_p, list_p);
+}
+
+
+/* deletes a node from whatever list it may be in, if any */
+static INLINE void
+dll_delete(dll_t *node_p)
+{
+ node_p->prev_p->next_p = node_p->next_p;
+ node_p->next_p->prev_p = node_p->prev_p;
+}
+#endif /* ! defined(_dll_t_) */
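The inline dll_t primitives above are simple enough to exercise on a host build. A runnable copy mirroring the usage example in the header comment (INLINE replaced by plain static, `_s` suffixes marking the local copies):

```c
#include <assert.h>
#include <stddef.h>

/* self-contained copy of the circular doubly linked list primitives */
typedef struct dll { struct dll *next_p, *prev_p; } dll_t;

static void dll_init_s(dll_t *n) { n->next_p = n->prev_p = n; }

static void dll_insert_s(dll_t *new_p, dll_t *at_p) {
    new_p->next_p = at_p->next_p;
    new_p->prev_p = at_p;
    at_p->next_p = new_p;
    new_p->next_p->prev_p = new_p;
}

static void dll_append_s(dll_t *list, dll_t *n)  { dll_insert_s(n, list->prev_p); }

static void dll_delete_s(dll_t *n) {
    n->prev_p->next_p = n->next_p;
    n->next_p->prev_p = n->prev_p;
}

/* embedding dll_t first lets a node pointer double as the item pointer */
typedef struct item { dll_t node; int val; } item_t;
```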
+
+/* Elements managed in a double linked list */
+
+typedef struct dll_pool {
+ dll_t free_list;
+ uint16 free_count;
+ uint16 elems_max;
+ uint16 elem_size;
+ dll_t elements[1];
+} dll_pool_t;
+
+dll_pool_t * dll_pool_init(void * osh, uint16 elems_max, uint16 elem_size);
+void * dll_pool_alloc(dll_pool_t * dll_pool_p);
+void dll_pool_free(dll_pool_t * dll_pool_p, void * elem_p);
+void dll_pool_free_tail(dll_pool_t * dll_pool_p, void * elem_p);
+typedef void (* dll_elem_dump)(void * elem_p);
+void dll_pool_detach(void * osh, dll_pool_t * pool, uint16 elems_max, uint16 elem_size);
+
#ifdef __cplusplus
}
#endif
+/* #define DEBUG_COUNTER */
#ifdef DEBUG_COUNTER
#define CNTR_TBL_MAX 10
typedef struct _counter_tbl_t {
diff --git a/drivers/net/wireless/bcmdhd/include/brcm_nl80211.h b/drivers/net/wireless/bcmdhd/include/brcm_nl80211.h
old mode 100755
new mode 100644
index dbf1d23..95712c9
--- a/drivers/net/wireless/bcmdhd/include/brcm_nl80211.h
+++ b/drivers/net/wireless/bcmdhd/include/brcm_nl80211.h
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: brcm_nl80211.h 454792 2014-02-11 20:40:19Z $
+ * $Id: brcm_nl80211.h 438755 2013-11-22 23:20:40Z $
*
*/
diff --git a/drivers/net/wireless/bcmdhd/include/circularbuf.h b/drivers/net/wireless/bcmdhd/include/circularbuf.h
deleted file mode 100755
index e73fab8..0000000
--- a/drivers/net/wireless/bcmdhd/include/circularbuf.h
+++ /dev/null
@@ -1,115 +0,0 @@
-/*
- * Initialization and support routines for self-booting compressed image.
- *
- * Copyright (C) 1999-2014, Broadcom Corporation
- *
- * Unless you and Broadcom execute a separate written software license
- * agreement governing use of this software, this software is licensed to you
- * under the terms of the GNU General Public License version 2 (the "GPL"),
- * available at http://www.broadcom.com/licenses/GPLv2.php, with the
- * following added to such license:
- *
- * As a special exception, the copyright holders of this software give you
- * permission to link this software with independent modules, and to copy and
- * distribute the resulting executable under terms of your choice, provided that
- * you also meet, for each linked independent module, the terms and conditions of
- * the license of that module. An independent module is a module which is not
- * derived from this software. The special exception does not apply to any
- * modifications of the software.
- *
- * Notwithstanding the above, under no circumstances may you combine this
- * software in any way with any other Broadcom software provided under a license
- * other than the GPL, without Broadcom's express prior written consent.
- *
- * $Id: circularbuf.h 452261 2014-01-29 19:30:23Z $
- */
-
-#ifndef __CIRCULARBUF_H_INCLUDED__
-#define __CIRCULARBUF_H_INCLUDED__
-
-#include <osl.h>
-#include <typedefs.h>
-#include <bcmendian.h>
-
-/* Enumerations of return values provided by MsgBuf implementation */
-typedef enum {
- CIRCULARBUF_FAILURE = -1,
- CIRCULARBUF_SUCCESS
-} circularbuf_ret_t;
-
-/* Core circularbuf circular buffer structure */
-typedef struct circularbuf_s
-{
- uint16 depth; /* Depth of circular buffer */
- uint16 r_ptr; /* Read Ptr */
- uint16 w_ptr; /* Write Ptr */
- uint16 e_ptr; /* End Ptr */
- uint16 wp_ptr; /* wp_ptr/pending - scheduled for DMA. But, not yet complete. */
- uint16 rp_ptr; /* rp_ptr/pending - scheduled for DMA. But, not yet complete. */
-
- uint8 *buf_addr;
- void *mb_ctx;
- void (*mb_ring_bell)(void *ctx);
-} circularbuf_t;
-
-#define CBUF_ERROR_VAL 0x00000001 /* Error level tracing */
-#define CBUF_TRACE_VAL 0x00000002 /* Function level tracing */
-#define CBUF_INFORM_VAL 0x00000004 /* debug level tracing */
-
-extern int cbuf_msg_level;
-
-#define CBUF_ERROR(args) do {if (cbuf_msg_level & CBUF_ERROR_VAL) printf args;} while (0)
-#define CBUF_TRACE(args) do {if (cbuf_msg_level & CBUF_TRACE_VAL) printf args;} while (0)
-#define CBUF_INFO(args) do {if (cbuf_msg_level & CBUF_INFORM_VAL) printf args;} while (0)
-
-#define CIRCULARBUF_START(x) ((x)->buf_addr)
-#define CIRCULARBUF_WRITE_PTR(x) ((x)->w_ptr)
-#define CIRCULARBUF_READ_PTR(x) ((x)->r_ptr)
-#define CIRCULARBUF_END_PTR(x) ((x)->e_ptr)
-
-#define circularbuf_debug_print(handle) \
- CBUF_INFO(("%s:%d:\t%p rp=%4d r=%4d wp=%4d w=%4d e=%4d\n", \
- __FUNCTION__, __LINE__, \
- (void *) CIRCULARBUF_START(handle), \
- (int) (handle)->rp_ptr, (int) (handle)->r_ptr, \
- (int) (handle)->wp_ptr, (int) (handle)->w_ptr, \
- (int) (handle)->e_ptr));
-
-
-/* Callback registered by application/mail-box with the circularbuf implementation.
- * This will be invoked by the circularbuf implementation when write is complete and
- * ready for informing the peer
- */
-typedef void (*mb_ring_t)(void *ctx);
-
-
-/* Public Functions exposed by circularbuf */
-void
-circularbuf_init(circularbuf_t *handle, void *buf_base_addr, uint16 total_buf_len);
-void
-circularbuf_register_cb(circularbuf_t *handle, mb_ring_t mb_ring_func, void *ctx);
-
-/* Write Functions */
-void *
-circularbuf_reserve_for_write(circularbuf_t *handle, uint16 size);
-void
-circularbuf_write_complete(circularbuf_t *handle, uint16 bytes_written);
-
-/* Read Functions */
-void *
-circularbuf_get_read_ptr(circularbuf_t *handle, uint16 *avail_len);
-circularbuf_ret_t
-circularbuf_read_complete(circularbuf_t *handle, uint16 bytes_read);
-
-/*
- * circularbuf_get_read_ptr() updates rp_ptr by the amount that the consumer
- * is supposed to read. The consumer may not read the entire amount.
- * In such a case, circularbuf_revert_rp_ptr() call follows a corresponding
- * circularbuf_get_read_ptr() call to revert the rp_ptr back to
- * the point till which data has actually been processed.
- * It is not valid if it is preceded by multiple get_read_ptr() calls
- */
-circularbuf_ret_t
-circularbuf_revert_rp_ptr(circularbuf_t *handle, uint16 bytes);
-
-#endif /* __CIRCULARBUF_H_INCLUDED__ */
diff --git a/drivers/net/wireless/bcmdhd/include/dbus.h b/drivers/net/wireless/bcmdhd/include/dbus.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/devctrl_if/wlioctl_defs.h b/drivers/net/wireless/bcmdhd/include/devctrl_if/wlioctl_defs.h
old mode 100755
new mode 100644
index 09f7ca1..5468634
--- a/drivers/net/wireless/bcmdhd/include/devctrl_if/wlioctl_defs.h
+++ b/drivers/net/wireless/bcmdhd/include/devctrl_if/wlioctl_defs.h
@@ -85,6 +85,12 @@
#define HIGHEST_SINGLE_STREAM_MCS 7 /* MCS values greater than this enable multiple streams */
+/* given a proprietary MCS, get number of spatial streams */
+#define GET_PROPRIETARY_11N_MCS_NSS(mcs) (1 + ((mcs) - 85) / 8)
+
+#define GET_11N_MCS_NSS(mcs) ((mcs) < 32 ? (1 + ((mcs) / 8)) \
+ : ((mcs) == 32 ? 1 : GET_PROPRIETARY_11N_MCS_NSS(mcs)))
+
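The NSS macros above map an HT MCS index to a spatial-stream count: eight MCS values per stream for indices 0-31, MCS 32 is single-stream, and indices from 85 up are treated as proprietary. Mirroring the two macros for a quick sanity check (local copies, renamed to avoid clashing with the header's names):

```c
#include <assert.h>

/* local copies of the diff's GET_*_NSS macros */
#define PROP_11N_NSS(mcs) (1 + ((mcs) - 85) / 8)
#define MCS_NSS(mcs) ((mcs) < 32 ? (1 + ((mcs) / 8)) \
	: ((mcs) == 32 ? 1 : PROP_11N_NSS(mcs)))
```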
#define MAX_CCA_CHANNELS 38 /* Max number of 20 Mhz wide channels */
#define MAX_CCA_SECS 60 /* CCA keeps this many seconds history */
@@ -196,6 +202,7 @@
#define WL_SCANFLAGS_PROHIBITED 0x04 /* allow scanning prohibited channels */
#define WL_SCANFLAGS_OFFCHAN 0x08 /* allow scanning/reporting off-channel APs */
#define WL_SCANFLAGS_HOTSPOT 0x10 /* automatic ANQP to hotspot APs */
+#define WL_SCANFLAGS_SWTCHAN 0x20 /* Force channel switch for different bandwidth */
/* wl_iscan_results status values */
#define WL_SCAN_RESULTS_SUCCESS 0
@@ -271,6 +278,7 @@
#define WL_RM_TYPE_BASIC 1
#define WL_RM_TYPE_CCA 2
#define WL_RM_TYPE_RPI 3
+#define WL_RM_TYPE_ABORT -1 /* ABORT any in-progress RM request */
#define WL_RM_FLAG_PARALLEL (1<<0)
@@ -332,6 +340,8 @@
#define WL_BSS_FLAGS_HS20 0x08 /* hotspot 2.0 capable */
#define WL_BSS_FLAGS_RSSI_INVALID 0x10 /* BSS contains invalid RSSI */
#define WL_BSS_FLAGS_RSSI_INACCURATE 0x20 /* BSS contains inaccurate RSSI */
+#define WL_BSS_FLAGS_SNR_INVALID 0x40 /* BSS contains invalid SNR */
+#define WL_BSS_FLAGS_NF_INVALID 0x80 /* BSS contains invalid noise floor */
/* bssinfo flag for nbss_cap */
#define VHT_BI_SGI_80MHZ 0x00000100
@@ -434,11 +444,16 @@
/* pmkid */
#define MAXPMKID 16
+#ifdef SROM12
+#define WLC_IOCTL_MAXLEN 10000 /* max length ioctl buffer required */
+#else
#define WLC_IOCTL_MAXLEN 8192 /* max length ioctl buffer required */
+#endif /* SROM12 */
+
#define WLC_IOCTL_SMLEN 256 /* "small" length ioctl buffer required */
#define WLC_IOCTL_MEDLEN 1536 /* "med" length ioctl buffer required */
#if defined(LCNCONF) || defined(LCN40CONF)
-#define WLC_SAMPLECOLLECT_MAXLEN 8192 /* Max Sample Collect buffer */
+#define WLC_SAMPLECOLLECT_MAXLEN 1024 /* Max Sample Collect buffer */
#else
#define WLC_SAMPLECOLLECT_MAXLEN 10240 /* Max Sample Collect buffer for two cores */
#endif
@@ -773,8 +788,9 @@
#define WLC_SET_TXBF_RATESET 319
#define WLC_SCAN_CQ 320
#define WLC_GET_RSSI_QDB 321 /* qdB portion of the RSSI */
-
-#define WLC_LAST 322
+#define WLC_DUMP_RATESET 322
+#define WLC_ECHO 323
+#define WLC_LAST 324
#ifndef EPICTRL_COOKIE
#define EPICTRL_COOKIE 0xABADCEDE
#endif
@@ -890,8 +906,27 @@
#define WL_CHAN_FREQ_RANGE_5G_BAND2 3
#define WL_CHAN_FREQ_RANGE_5G_BAND3 4
-#define WL_CHAN_FREQ_RANGE_5G_4BAND 5
+#ifdef SROM12
+#define WL_CHAN_FREQ_RANGE_5G_BAND4 5
+#define WL_CHAN_FREQ_RANGE_2G_40 6
+#define WL_CHAN_FREQ_RANGE_5G_BAND0_40 7
+#define WL_CHAN_FREQ_RANGE_5G_BAND1_40 8
+#define WL_CHAN_FREQ_RANGE_5G_BAND2_40 9
+#define WL_CHAN_FREQ_RANGE_5G_BAND3_40 10
+#define WL_CHAN_FREQ_RANGE_5G_BAND4_40 11
+#define WL_CHAN_FREQ_RANGE_5G_BAND0_80 12
+#define WL_CHAN_FREQ_RANGE_5G_BAND1_80 13
+#define WL_CHAN_FREQ_RANGE_5G_BAND2_80 14
+#define WL_CHAN_FREQ_RANGE_5G_BAND3_80 15
+#define WL_CHAN_FREQ_RANGE_5G_BAND4_80 16
+#define WL_CHAN_FREQ_RANGE_5G_4BAND 17
+#define WL_CHAN_FREQ_RANGE_5G_5BAND 18
+#define WL_CHAN_FREQ_RANGE_5G_5BAND_40 19
+#define WL_CHAN_FREQ_RANGE_5G_5BAND_80 20
+#else
+#define WL_CHAN_FREQ_RANGE_5G_4BAND 5
+#endif /* SROM12 */
/* MAC list modes */
#define WLC_MACMODE_DISABLED 0 /* MAC list disabled */
#define WLC_MACMODE_DENY 1 /* Deny specified (i.e. allow unspecified) */
@@ -1009,7 +1044,8 @@
#define ACPHY_ACI_HWACI_PKTGAINLMT 2 /* bit 1 */
#define ACPHY_ACI_W2NB_PKTGAINLMT 4 /* bit 2 */
#define ACPHY_ACI_PREEMPTION 8 /* bit 3 */
-#define ACPHY_ACI_MAX_MODE 15
+#define ACPHY_HWACI_MITIGATION 16 /* bit 4 */
+#define ACPHY_ACI_MAX_MODE 31
/* AP environment */
#define AP_ENV_DETECT_NOT_USED 0 /* We aren't using AP environment detection */
@@ -1078,6 +1114,7 @@
#define WL_BW_40MHZ 1
#define WL_BW_80MHZ 2
#define WL_BW_160MHZ 3
+#define WL_BW_8080MHZ 4
/* tx_power_t.flags bits */
#define WL_TX_POWER_F_ENABLED 1
@@ -1101,6 +1138,7 @@
#define WL_PRUSR_VAL 0x00000200
#define WL_PS_VAL 0x00000400
#define WL_TXPWR_VAL 0x00000800 /* retired in TOT on 6/10/2009 */
+#define WL_MODE_SWITCH_VAL 0x00000800 /* Using retired TXPWR val */
#define WL_PORT_VAL 0x00001000
#define WL_DUAL_VAL 0x00002000
#define WL_WSEC_VAL 0x00004000
@@ -1109,13 +1147,14 @@
#define WL_NRSSI_VAL 0x00020000 /* retired in TOT on 6/10/2009 */
#define WL_LOFT_VAL 0x00040000 /* retired in TOT on 6/10/2009 */
#define WL_REGULATORY_VAL 0x00080000
-#define WL_PHYCAL_VAL 0x00100000 /* retired in TOT on 6/10/2009 */
+#define WL_TAF_VAL 0x00100000
#define WL_RADAR_VAL 0x00200000 /* retired in TOT on 6/10/2009 */
#define WL_MPC_VAL 0x00400000
#define WL_APSTA_VAL 0x00800000
#define WL_DFS_VAL 0x01000000
#define WL_BA_VAL 0x02000000 /* retired in TOT on 6/14/2010 */
#define WL_ACI_VAL 0x04000000
+#define WL_PRMAC_VAL 0x04000000
#define WL_MBSS_VAL 0x04000000
#define WL_CAC_VAL 0x08000000
#define WL_AMSDU_VAL 0x10000000
@@ -1148,14 +1187,12 @@
#define WL_TXBF_VAL 0x00100000
#define WL_P2PO_VAL 0x00200000
#define WL_TBTT_VAL 0x00400000
-#define WL_NIC_VAL 0x00800000
#define WL_MQ_VAL 0x01000000
/* This level is currently used in Phoenix2 only */
#define WL_SRSCAN_VAL 0x02000000
#define WL_WNM_VAL 0x04000000
-#define WL_AWDL_VAL 0x08000000
#define WL_PWRSEL_VAL 0x10000000
#define WL_NET_DETECT_VAL 0x20000000
#define WL_PCIE_VAL 0x40000000
@@ -1278,7 +1315,12 @@
#define WL_NUMCHANNELS 64
/* max number of chanspecs (used by the iovar to calc. buf space) */
+#ifdef WL11AC_80P80
+#define WL_NUMCHANSPECS 206
+#else
#define WL_NUMCHANSPECS 110
+#endif
+
/* WDS link local endpoint WPA role */
#define WL_WDS_WPA_ROLE_AUTH 0 /* authenticator */
@@ -1375,6 +1417,8 @@
#define WL_WOWL_FW_HALT (1 << 21) /* Firmware died in wowl mode */
#define WL_WOWL_ENAB_HWRADIO (1 << 22) /* Enable detection of radio button changes */
#define WL_WOWL_MIC_FAIL (1 << 23) /* Offloads detected MIC failure(s) */
+#define WL_WOWL_UNASSOC (1 << 24) /* Wakeup in Unassociated state (Net/Magic Pattern) */
+#define WL_WOWL_SECURE (1 << 25) /* Wakeup if received matched secured pattern */
#define WL_WOWL_LINKDOWN (1 << 31) /* Link Down indication in WoWL mode */
#define WL_WOWL_TCPKEEP (1 << 20) /* temp copy to satisfy automerger */
@@ -1752,6 +1796,9 @@
#define IMMEDIATE_EVENT_BIT 8
#define SUPPRESS_SSID_BIT 9
#define ENABLE_NET_OFFLOAD_BIT 10
+/* report found/lost events for SSID and BSSID networks separately */
+#define REPORT_SEPERATELY_BIT 11
+#define BESTN_BSSID_ONLY_BIT 12
#define SORT_CRITERIA_MASK 0x0001
#define AUTO_NET_SWITCH_MASK 0x0002
@@ -1764,6 +1811,9 @@
#define IMMEDIATE_EVENT_MASK 0x0100
#define SUPPRESS_SSID_MASK 0x0200
#define ENABLE_NET_OFFLOAD_MASK 0x0400
+/* report found/lost events for SSID and BSSID networks separately */
+#define REPORT_SEPERATELY_MASK 0x0800
+#define BESTN_BSSID_ONLY_MASK 0x1000
#define PFN_VERSION 2
#define PFN_SCANRESULT_VERSION 1
@@ -1777,23 +1827,37 @@
#define DEFAULT_REPEAT 10
#define DEFAULT_EXP 2
+#define PFN_PARTIAL_SCAN_BIT 0
+#define PFN_PARTIAL_SCAN_MASK 1
+
#define WL_PFN_SUPPRESSFOUND_MASK 0x08
#define WL_PFN_SUPPRESSLOST_MASK 0x10
-#define WL_PFN_RSSI_MASK 0xff00
-#define WL_PFN_RSSI_SHIFT 8
+#define WL_PFN_RSSI_MASK 0xff00
+#define WL_PFN_RSSI_SHIFT 8
#define WL_PFN_REPORT_ALLNET 0
#define WL_PFN_REPORT_SSIDNET 1
#define WL_PFN_REPORT_BSSIDNET 2
#define WL_PFN_CFG_FLAGS_PROHIBITED 0x00000001 /* Accept and use prohibited channels */
-#define WL_PFN_CFG_FLAGS_RESERVED 0xfffffffe /* Remaining reserved for future use */
+#define WL_PFN_CFG_FLAGS_HISTORY_OFF 0x00000002 /* Scan history suppressed */
#define WL_PFN_HIDDEN_BIT 2
#define PNO_SCAN_MAX_FW 508*1000 /* max time scan time in msec */
#define PNO_SCAN_MAX_FW_SEC PNO_SCAN_MAX_FW/1000 /* max time scan time in SEC */
#define PNO_SCAN_MIN_FW_SEC 10 /* min time scan time in SEC */
#define WL_PFN_HIDDEN_MASK 0x4
+#define MAX_SSID_WHITELIST_NUM 4
+#define MAX_BSSID_PREF_LIST_NUM 32
+#define MAX_BSSID_BLACKLIST_NUM 32
+
+#ifndef BESTN_MAX
+#define BESTN_MAX 8
+#endif
+
+#ifndef MSCAN_MAX
+#define MSCAN_MAX 32
+#endif
/* TCP Checksum Offload error injection for testing */
#define TOE_ERRTEST_TX_CSUM 0x00000001
@@ -1814,27 +1878,6 @@
#define ND_MULTIHOMING_MAX 10 /* Maximum local host IP addresses */
#define ND_REQUEST_MAX 5 /* Max set of offload params */
-/* AWDL AF flags for awdl_oob_af iovar */
-#define AWDL_OOB_AF_FILL_TSF_PARAMS 0x00000001
-#define AWDL_OOB_AF_FILL_SYNC_PARAMS 0x00000002
-#define AWDL_OOB_AF_FILL_ELECT_PARAMS 0x00000004
-#define AWDL_OOB_AF_PARAMS_SIZE 38
-
-#define AWDL_OPMODE_AUTO 0
-#define AWDL_OPMODE_FIXED 1
-
-#define AWDL_PEER_STATE_OPEN 0
-#define AWDL_PEER_STATE_CLOSE 1
-
-#define SYNC_ROLE_SLAVE 0
-#define SYNC_ROLE_NE_MASTER 1 /* Non-election master */
-#define SYNC_ROLE_MASTER 2
-
-/* peer opcode */
-#define AWDL_PEER_OP_ADD 0
-#define AWDL_PEER_OP_DEL 1
-#define AWDL_PEER_OP_INFO 2
-#define AWDL_PEER_OP_UPD 3
/* AOAC wake event flag */
#define WAKE_EVENT_NLO_DISCOVERY_BIT 1
@@ -1842,7 +1885,9 @@
#define WAKE_EVENT_GTK_HANDSHAKE_ERROR_BIT 4
#define WAKE_EVENT_4WAY_HANDSHAKE_REQUEST_BIT 8
-#define MAX_NUM_WOL_PATTERN 16 /* LOGO requirements min 16 */
+
+#define MAX_NUM_WOL_PATTERN 22 /* LOGO requirements min 22 */
+
/* Packet filter operation mode */
/* True: 1; False: 0 */
@@ -2003,4 +2048,19 @@
#define TOE_TX_CSUM_OL 0x00000001
#define TOE_RX_CSUM_OL 0x00000002
+/* Wi-Fi Display Services (WFDS) */
+#define WL_P2P_SOCIAL_CHANNELS_MAX WL_NUMCHANNELS
+#define MAX_WFDS_SEEK_SVC 4 /* Max # of wfds services to seek */
+#define MAX_WFDS_ADVERT_SVC 4 /* Max # of wfds services to advertise */
+#define MAX_WFDS_SVC_NAME_LEN 200 /* maximum service_name length */
+#define MAX_WFDS_ADV_SVC_INFO_LEN 65000 /* maximum adv service_info length */
+#define P2P_WFDS_HASH_LEN 6 /* Length of a WFDS service hash */
+#define MAX_WFDS_SEEK_SVC_INFO_LEN 255 /* maximum seek service_info req length */
+#define MAX_WFDS_SEEK_SVC_NAME_LEN 200 /* maximum service_name length */
+
+/* ap_isolate bitmaps */
+#define AP_ISOLATE_DISABLED 0x0
+#define AP_ISOLATE_SENDUP_ALL 0x01
+#define AP_ISOLATE_SENDUP_MCAST 0x02
+
#endif /* wlioctl_defs_h */
diff --git a/drivers/net/wireless/bcmdhd/include/dhdioctl.h b/drivers/net/wireless/bcmdhd/include/dhdioctl.h
old mode 100755
new mode 100644
index efca014..b953add
--- a/drivers/net/wireless/bcmdhd/include/dhdioctl.h
+++ b/drivers/net/wireless/bcmdhd/include/dhdioctl.h
@@ -6,13 +6,13 @@
* Definitions subject to change without notice.
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -20,12 +20,12 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: dhdioctl.h 454792 2014-02-11 20:40:19Z $
+ * $Id: dhdioctl.h 438755 2013-11-22 23:20:40Z $
*/
#ifndef _dhdioctl_h_
@@ -40,18 +40,6 @@
/* Linux network driver ioctl encoding */
-#ifdef CONFIG_COMPAT
-typedef struct dhd_ioctl_compat {
- uint cmd; /* common ioctl definition */
- u32 buf; /* pointer to user buffer */
- uint len; /* length of user buffer */
- bool set; /* get or set request (optional) */
- uint used; /* bytes read or written (optional) */
- uint needed; /* bytes needed (optional) */
- uint driver; /* to identify target driver */
-} dhd_ioctl_compat_t;
-#endif
-
typedef struct dhd_ioctl {
uint cmd; /* common ioctl definition */
void *buf; /* pointer to user buffer */
@@ -98,13 +86,14 @@
#define DHD_GLOM_VAL 0x0400
#define DHD_EVENT_VAL 0x0800
#define DHD_BTA_VAL 0x1000
-#define DHD_ISCAN_VAL 0x2000
+#define DHD_RING_VAL 0x2000
#define DHD_ARPOE_VAL 0x4000
#define DHD_REORDER_VAL 0x8000
#define DHD_WL_VAL 0x10000
#define DHD_NOCHECKDIED_VAL 0x20000 /* UTF WAR */
#define DHD_WL_VAL2 0x40000
#define DHD_PNO_VAL 0x80000
+#define DHD_RTT_VAL 0x100000
#ifdef SDTEST
/* For pktgen iovar */
diff --git a/drivers/net/wireless/bcmdhd/include/epivers.h b/drivers/net/wireless/bcmdhd/include/epivers.h
old mode 100755
new mode 100644
index b54a582..ad50e1a
--- a/drivers/net/wireless/bcmdhd/include/epivers.h
+++ b/drivers/net/wireless/bcmdhd/include/epivers.h
@@ -28,21 +28,21 @@
#define EPI_MAJOR_VERSION 1
-#define EPI_MINOR_VERSION 141
+#define EPI_MINOR_VERSION 201
-#define EPI_RC_NUMBER 46
+#define EPI_RC_NUMBER 31
#define EPI_INCREMENTAL_NUMBER 0
#define EPI_BUILD_NUMBER 0
-#define EPI_VERSION 1, 141, 46, 0
+#define EPI_VERSION 1, 201, 31, 0
-#define EPI_VERSION_NUM 0x018d2e00
+#define EPI_VERSION_NUM 0x01c91f00
-#define EPI_VERSION_DEV 1.141.46
+#define EPI_VERSION_DEV 1.201.31
/* Driver Version String, ASCII, 32 chars max */
-#define EPI_VERSION_STR "1.141.46 (r)"
+#define EPI_VERSION_STR "1.201.31 (r)"
#endif /* _epivers_h_ */
diff --git a/drivers/net/wireless/bcmdhd/include/event_log.h b/drivers/net/wireless/bcmdhd/include/event_log.h
new file mode 100644
index 0000000..6f0bbc4
--- /dev/null
+++ b/drivers/net/wireless/bcmdhd/include/event_log.h
@@ -0,0 +1,191 @@
+/*
+ * EVENT_LOG system definitions
+ * Broadcom 802.11abg Networking Device Driver
+ *
+ * Definitions subject to change without notice.
+ *
+ * Copyright (C) 1999-2014, Broadcom Corporation
+ *
+ * Unless you and Broadcom execute a separate written software license
+ * agreement governing use of this software, this software is licensed to you
+ * under the terms of the GNU General Public License version 2 (the "GPL"),
+ * available at http://www.broadcom.com/licenses/GPLv2.php, with the
+ * following added to such license:
+ *
+ * As a special exception, the copyright holders of this software give you
+ * permission to link this software with independent modules, and to copy and
+ * distribute the resulting executable under terms of your choice, provided that
+ * you also meet, for each linked independent module, the terms and conditions of
+ * the license of that module. An independent module is a module which is not
+ * derived from this software. The special exception does not apply to any
+ * modifications of the software.
+ *
+ * Notwithstanding the above, under no circumstances may you combine this
+ * software in any way with any other Broadcom software provided under a license
+ * other than the GPL, without Broadcom's express prior written consent.
+ *
+ * $Id: event_log.h 241182 2011-02-17 21:50:03Z$
+ */
+
+#ifndef _EVENT_LOG_H_
+#define _EVENT_LOG_H_
+
+
+/* Set a maximum number of sets here. It is not dynamic for
+ * efficiency of the EVENT_LOG calls.
+ */
+#define NUM_EVENT_LOG_SETS 4
+#define EVENT_LOG_SET_BUS 0
+#define EVENT_LOG_SET_WL 1
+#define EVENT_LOG_SET_PSM 2
+#define EVENT_LOG_SET_DBG 3
+
+/* Define new event log tags here */
+#define EVENT_LOG_TAG_NULL 0 /* Special null tag */
+#define EVENT_LOG_TAG_TS 1 /* Special timestamp tag */
+#define EVENT_LOG_TAG_BUS_OOB 2
+#define EVENT_LOG_TAG_BUS_STATE 3
+#define EVENT_LOG_TAG_BUS_PROTO 4
+#define EVENT_LOG_TAG_BUS_CTL 5
+#define EVENT_LOG_TAG_BUS_EVENT 6
+#define EVENT_LOG_TAG_BUS_PKT 7
+#define EVENT_LOG_TAG_BUS_FRAME 8
+#define EVENT_LOG_TAG_BUS_DESC 9
+#define EVENT_LOG_TAG_BUS_SETUP 10
+#define EVENT_LOG_TAG_BUS_MISC 11
+#define EVENT_LOG_TAG_AWDL_ERR 12
+#define EVENT_LOG_TAG_AWDL_WARN 13
+#define EVENT_LOG_TAG_AWDL_INFO 14
+#define EVENT_LOG_TAG_AWDL_DEBUG 15
+#define EVENT_LOG_TAG_AWDL_TRACE_TIMER 16
+#define EVENT_LOG_TAG_AWDL_TRACE_SYNC 17
+#define EVENT_LOG_TAG_AWDL_TRACE_CHAN 18
+#define EVENT_LOG_TAG_AWDL_TRACE_DP 19
+#define EVENT_LOG_TAG_AWDL_TRACE_MISC 20
+#define EVENT_LOG_TAG_AWDL_TEST 21
+#define EVENT_LOG_TAG_SRSCAN 22
+#define EVENT_LOG_TAG_PWRSTATS_INFO 23
+#define EVENT_LOG_TAG_AWDL_TRACE_CHANSW 24
+#define EVENT_LOG_TAG_AWDL_TRACE_PEER_OPENCLOSE 25
+#define EVENT_LOG_TAG_UCODE_WATCHDOG 26
+#define EVENT_LOG_TAG_UCODE_FIFO 27
+#define EVENT_LOG_TAG_SCAN_TRACE_LOW 28
+#define EVENT_LOG_TAG_SCAN_TRACE_HIGH 29
+#define EVENT_LOG_TAG_SCAN_ERROR 30
+#define EVENT_LOG_TAG_SCAN_WARN 31
+#define EVENT_LOG_TAG_MPF_ERR 32
+#define EVENT_LOG_TAG_MPF_WARN 33
+#define EVENT_LOG_TAG_MPF_INFO 34
+#define EVENT_LOG_TAG_MPF_DEBUG 35
+#define EVENT_LOG_TAG_EVENT_INFO 36
+#define EVENT_LOG_TAG_EVENT_ERR 37
+#define EVENT_LOG_TAG_PWRSTATS_ERROR 38
+#define EVENT_LOG_TAG_EXCESS_PM_ERROR 39
+#define EVENT_LOG_TAG_IOCTL_LOG 40
+#define EVENT_LOG_TAG_PFN_ERR 41
+#define EVENT_LOG_TAG_PFN_WARN 42
+#define EVENT_LOG_TAG_PFN_INFO 43
+#define EVENT_LOG_TAG_PFN_DEBUG 44
+#define EVENT_LOG_TAG_BEACON_LOG 45
+#define EVENT_LOG_TAG_WNM_BSSTRANS_INFO 46
+#define EVENT_LOG_TAG_TRACE_CHANSW 47
+#define EVENT_LOG_TAG_PCI_ERROR 48
+#define EVENT_LOG_TAG_PCI_TRACE 49
+#define EVENT_LOG_TAG_PCI_WARN 50
+#define EVENT_LOG_TAG_PCI_INFO 51
+#define EVENT_LOG_TAG_PCI_DBG 52
+#define EVENT_LOG_TAG_PCI_DATA 53
+#define EVENT_LOG_TAG_PCI_RING 54
+#define EVENT_LOG_TAG_AWDL_TRACE_RANGING 55
+#define EVENT_LOG_TAG_WL_ERROR 56
+#define EVENT_LOG_TAG_PHY_ERROR 57
+#define EVENT_LOG_TAG_OTP_ERROR 58
+#define EVENT_LOG_TAG_NOTIF_ERROR 59
+#define EVENT_LOG_TAG_MPOOL_ERROR 60
+#define EVENT_LOG_TAG_OBJR_ERROR 61
+#define EVENT_LOG_TAG_DMA_ERROR 62
+#define EVENT_LOG_TAG_PMU_ERROR 63
+#define EVENT_LOG_TAG_BSROM_ERROR 64
+#define EVENT_LOG_TAG_SI_ERROR 65
+#define EVENT_LOG_TAG_ROM_PRINTF 66
+#define EVENT_LOG_TAG_RATE_CNT 67
+#define EVENT_LOG_TAG_CTL_MGT_CNT 68
+#define EVENT_LOG_TAG_AMPDU_DUMP 69
+#define EVENT_LOG_TAG_MEM_ALLOC_SUCC 70
+#define EVENT_LOG_TAG_MEM_ALLOC_FAIL 71
+#define EVENT_LOG_TAG_MEM_FREE 72
+#define EVENT_LOG_TAG_WL_ASSOC_LOG 73
+#define EVENT_LOG_TAG_WL_PS_LOG 74
+#define EVENT_LOG_TAG_WL_ROAM_LOG 75
+#define EVENT_LOG_TAG_WL_MPC_LOG 76
+#define EVENT_LOG_TAG_WL_WSEC_LOG 77
+#define EVENT_LOG_TAG_WL_WSEC_DUMP 78
+#define EVENT_LOG_TAG_WL_MCNX_LOG 79
+#define EVENT_LOG_TAG_HEALTH_CHECK_ERROR 80
+#define EVENT_LOG_TAG_HNDRTE_EVENT_ERROR 81
+#define EVENT_LOG_TAG_ECOUNTERS_ERROR 82
+#define EVENT_LOG_TAG_WL_COUNTERS 83
+#define EVENT_LOG_TAG_ECOUNTERS_IPCSTATS 84
+#define EVENT_LOG_TAG_TRACE_WL_INFO 85
+#define EVENT_LOG_TAG_TRACE_BTCOEX_INFO 86
+#define EVENT_LOG_TAG_MAX 86 /* Set to the same value as the last tag, not last tag + 1 */
+/* Note: New events should be added/reserved in trunk before being added to branches */
+
+/* Flags for tag control */
+#define EVENT_LOG_TAG_FLAG_NONE 0
+#define EVENT_LOG_TAG_FLAG_LOG 0x80
+#define EVENT_LOG_TAG_FLAG_PRINT 0x40
+#define EVENT_LOG_TAG_FLAG_MASK 0x3f
+
+/* logstrs header */
+#define LOGSTRS_MAGIC 0x4C4F4753
+#define LOGSTRS_VERSION 0x1
+
+
+/*
+ * There are multiple levels of objects defined here:
+ * event_log_set - a set of buffers
+ * event log groups - every event log call is part of just one. All
+ * event log calls in a group are handled the
+ * same way. Each event log group is associated
+ * with an event log set or is off.
+ */
+
+#ifndef __ASSEMBLER__
+
+/* On the external system where the dumper runs, we need to make sure
+ * that these types are the same size as they are on the ARM that
+ * produced them.
+ */
+
+
+/* Each event log entry has a type. The type is the LAST word of the
+ * event log. The printing code walks the event entries in reverse
+ * order to find the first entry.
+ */
+typedef union event_log_hdr {
+ struct {
+ uint8 tag; /* Event_log entry tag */
+ uint8 count; /* Count of 4-byte entries */
+ uint16 fmt_num; /* Format number */
+ };
+ uint32 t; /* Type cheat */
+} event_log_hdr_t;
+
+
+/* Data structure for keeping the header from logstrs.bin */
+typedef struct {
+ uint32 logstrs_size; /* Size of the file */
+ uint32 rom_lognums_offset; /* Offset to the ROM lognum */
+ uint32 ram_lognums_offset; /* Offset to the RAM lognum */
+ uint32 rom_logstrs_offset; /* Offset to the ROM logstr */
+ uint32 ram_logstrs_offset; /* Offset to the RAM logstr */
+ /* Keep version and magic last since "header" is appended to the end of logstrs file. */
+ uint32 version; /* Header version */
+ uint32 log_magic; /* MAGIC number for verification 'LOGS' */
+} logstr_header_t;
+
+
+#endif /* __ASSEMBLER__ */
+
+#endif /* _EVENT_LOG_H_ */
diff --git a/drivers/net/wireless/bcmdhd/include/event_trace.h b/drivers/net/wireless/bcmdhd/include/event_trace.h
new file mode 100644
index 0000000..4ee4b59
--- /dev/null
+++ b/drivers/net/wireless/bcmdhd/include/event_trace.h
@@ -0,0 +1,139 @@
+/*
+ * Trace log blocks sent over HBUS
+ * Broadcom 802.11abg Networking Device Driver
+ *
+ * Definitions subject to change without notice.
+ *
+ * Copyright (C) 1999-2014, Broadcom Corporation
+ *
+ * Unless you and Broadcom execute a separate written software license
+ * agreement governing use of this software, this software is licensed to you
+ * under the terms of the GNU General Public License version 2 (the "GPL"),
+ * available at http://www.broadcom.com/licenses/GPLv2.php, with the
+ * following added to such license:
+ *
+ * As a special exception, the copyright holders of this software give you
+ * permission to link this software with independent modules, and to copy and
+ * distribute the resulting executable under terms of your choice, provided that
+ * you also meet, for each linked independent module, the terms and conditions of
+ * the license of that module. An independent module is a module which is not
+ * derived from this software. The special exception does not apply to any
+ * modifications of the software.
+ *
+ * Notwithstanding the above, under no circumstances may you combine this
+ * software in any way with any other Broadcom software provided under a license
+ * other than the GPL, without Broadcom's express prior written consent.
+ *
+ * $Id: event_log.h 241182 2011-02-17 21:50:03Z$
+ */
+
+
+#ifndef _WL_DIAG_H
+#define _WL_DIAG_H
+
+#define DIAG_MAJOR_VERSION 1 /* 4 bits */
+#define DIAG_MINOR_VERSION 0 /* 4 bits */
+#define DIAG_MICRO_VERSION 0 /* 4 bits */
+
+#define DIAG_VERSION \
+ ((DIAG_MICRO_VERSION&0xF) | (DIAG_MINOR_VERSION&0xF)<<4 | \
+ (DIAG_MAJOR_VERSION&0xF)<<8)
+ /* bit[11:8] major ver */
+ /* bit[7:4] minor ver */
+ /* bit[3:0] micro ver */
+#define ETHER_ADDR_PACK_LOW(addr) (((addr)->octet[3])<<24 | ((addr)->octet[2])<<16 | \
+ ((addr)->octet[1])<<8 | ((addr)->octet[0]))
+#define ETHER_ADDR_PACK_HI(addr) (((addr)->octet[5])<<8 | ((addr)->octet[4]))
+
+#define SSID_PACK(addr) (((uint8)(addr)[0])<<24 | ((uint8)(addr)[1])<<16 | \
+ ((uint8)(addr)[2])<<8 | ((uint8)(addr)[3]))
+
+/* Event IDs for trace purposes only, to avoid conflicts with future new
+ * WLC_E_ events; starting from 0x8000 */
+#define TRACE_FW_AUTH_STARTED 0x8000
+#define TRACE_FW_ASSOC_STARTED 0x8001
+#define TRACE_FW_RE_ASSOC_STARTED 0x8002
+#define TRACE_G_SCAN_STARTED 0x8003
+#define TRACE_ROAM_SCAN_STARTED 0x8004
+#define TRACE_ROAM_SCAN_COMPLETE 0x8005
+#define TRACE_FW_EAPOL_FRAME_TRANSMIT_START 0x8006
+#define TRACE_FW_EAPOL_FRAME_TRANSMIT_STOP 0x8007
+#define TRACE_BLOCK_ACK_NEGOTIATION_COMPLETE 0x8008 /* protocol status */
+#define TRACE_BT_COEX_BT_SCO_START 0x8009
+#define TRACE_BT_COEX_BT_SCO_STOP 0x800a
+#define TRACE_BT_COEX_BT_SCAN_START 0x800b
+#define TRACE_BT_COEX_BT_SCAN_STOP 0x800c
+#define TRACE_BT_COEX_BT_HID_START 0x800d
+#define TRACE_BT_COEX_BT_HID_STOP 0x800e
+#define TRACE_ROAM_AUTH_STARTED 0x800f
+
+/* Parameters of wifi logger events are TLVs */
+/* Event parameters tags are defined as: */
+#define TRACE_TAG_VENDOR_SPECIFIC 0 /* takes a byte stream as parameter */
+#define TRACE_TAG_BSSID 1 /* takes a 6-byte MAC address as parameter */
+#define TRACE_TAG_ADDR 2 /* takes a 6-byte MAC address as parameter */
+#define TRACE_TAG_SSID 3 /* takes a 32-byte SSID as parameter */
+#define TRACE_TAG_STATUS 4 /* takes an integer as parameter */
+#define TRACE_TAG_CHANNEL_SPEC 5 /* takes one or more wifi_channel_spec as */
+ /* parameter */
+#define TRACE_TAG_WAKE_LOCK_EVENT 6 /* takes a wake_lock_event struct as parameter */
+#define TRACE_TAG_ADDR1 7 /* takes a 6-byte MAC address as parameter */
+#define TRACE_TAG_ADDR2 8 /* takes a 6-byte MAC address as parameter */
+#define TRACE_TAG_ADDR3 9 /* takes a 6-byte MAC address as parameter */
+#define TRACE_TAG_ADDR4 10 /* takes a 6-byte MAC address as parameter */
+#define TRACE_TAG_TSF 11 /* takes a 64-bit TSF value as parameter */
+#define TRACE_TAG_IE 12 /* takes one or more specific 802.11 IEs as */
+ /* parameters; IEs are in turn indicated in */
+ /* TLV format as per the 802.11 spec */
+#define TRACE_TAG_INTERFACE 13 /* takes the interface name as parameter */
+#define TRACE_TAG_REASON_CODE 14 /* takes a reason code as per 802.11 */
+ /* as parameter */
+#define TRACE_TAG_RATE_MBPS 15 /* takes a Wi-Fi rate in 0.5 Mbps units */
+
+/* for each event id with logging data, define its logging data structure */
+
+typedef union {
+ struct {
+ uint16 event: 16;
+ uint16 version: 16;
+ };
+ uint32 t;
+} wl_event_log_id_t;
+
+typedef union {
+ struct {
+ uint16 status: 16;
+ uint16 paraset: 16;
+ };
+ uint32 t;
+} wl_event_log_blk_ack_t;
+
+typedef union {
+ struct {
+ uint8 mode: 8;
+ uint8 count: 8;
+ uint16 ch: 16;
+ };
+ uint32 t;
+} wl_event_log_csa_t;
+
+typedef union {
+ struct {
+ uint8 status: 1;
+ uint8 eapol_idx: 2;
+ uint32 notused: 13;
+ uint16 rate0: 16;
+ };
+ uint32 t;
+} wl_event_log_eapol_tx_t;
+
+#ifdef EVENT_LOG_COMPILE
+#define WL_EVENT_LOG(tag, event, ...) \
+ do { \
+ wl_event_log_id_t entry = {{event, DIAG_VERSION}}; \
+ EVENT_LOG(tag, "WL event", entry.t , ## __VA_ARGS__); \
+ } while (0)
+#else
+#define WL_EVENT_LOG(tag, event, ...)
+#endif /* EVENT_LOG_COMPILE */
+#endif /* _WL_DIAG_H */
diff --git a/drivers/net/wireless/bcmdhd/include/hndrte_armtrap.h b/drivers/net/wireless/bcmdhd/include/hnd_armtrap.h
old mode 100755
new mode 100644
similarity index 93%
rename from drivers/net/wireless/bcmdhd/include/hndrte_armtrap.h
rename to drivers/net/wireless/bcmdhd/include/hnd_armtrap.h
index 70cfa91..93f353e
--- a/drivers/net/wireless/bcmdhd/include/hndrte_armtrap.h
+++ b/drivers/net/wireless/bcmdhd/include/hnd_armtrap.h
@@ -1,5 +1,5 @@
/*
- * HNDRTE arm trap handling.
+ * HND arm trap handling.
*
* Copyright (C) 1999-2014, Broadcom Corporation
*
@@ -21,11 +21,11 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: hndrte_armtrap.h 261365 2011-05-24 20:42:23Z $
+ * $Id: hnd_armtrap.h 470663 2014-04-16 00:24:43Z $
*/
-#ifndef _hndrte_armtrap_h
-#define _hndrte_armtrap_h
+#ifndef _hnd_armtrap_h_
+#define _hnd_armtrap_h_
/* ARM trap handling */
@@ -85,4 +85,4 @@
#endif /* !_LANGUAGE_ASSEMBLY */
-#endif /* _hndrte_armtrap_h */
+#endif /* _hnd_armtrap_h_ */
diff --git a/drivers/net/wireless/bcmdhd/include/hndrte_cons.h b/drivers/net/wireless/bcmdhd/include/hnd_cons.h
old mode 100755
new mode 100644
similarity index 86%
rename from drivers/net/wireless/bcmdhd/include/hndrte_cons.h
rename to drivers/net/wireless/bcmdhd/include/hnd_cons.h
index 6cc846f..0b48ef8
--- a/drivers/net/wireless/bcmdhd/include/hndrte_cons.h
+++ b/drivers/net/wireless/bcmdhd/include/hnd_cons.h
@@ -1,5 +1,5 @@
/*
- * Console support for hndrte.
+ * Console support for RTE - for host use only.
*
* Copyright (C) 1999-2014, Broadcom Corporation
*
@@ -21,12 +21,13 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: hndrte_cons.h 427140 2013-10-02 18:07:07Z $
+ * $Id: hnd_cons.h 473343 2014-04-29 01:45:22Z $
*/
-#ifndef _HNDRTE_CONS_H
-#define _HNDRTE_CONS_H
+#ifndef _hnd_cons_h_
+#define _hnd_cons_h_
#include <typedefs.h>
+#include <siutils.h>
#if defined(RWL_DONGLE) || defined(UART_REFLECTOR)
/* For Dongle uart tranport max cmd len is 256 bytes + header length (16 bytes)
@@ -41,12 +42,21 @@
#define LOG_BUF_LEN 1024
+#ifdef BOOTLOADER_CONSOLE_OUTPUT
+#undef RWL_MAX_DATA_LEN
+#undef CBUF_LEN
+#undef LOG_BUF_LEN
+#define RWL_MAX_DATA_LEN (4 * 1024 + 8)
+#define CBUF_LEN (RWL_MAX_DATA_LEN + 64)
+#define LOG_BUF_LEN (16 * 1024)
+#endif
+
typedef struct {
uint32 buf; /* Can't be pointer on (64-bit) hosts */
uint buf_size;
uint idx;
uint out_idx; /* output index */
-} hndrte_log_t;
+} hnd_log_t;
typedef struct {
/* Virtual UART
@@ -63,7 +73,7 @@
* The host may read the output when it sees log_idx advance.
* Output will be lost if the output wraps around faster than the host polls.
*/
- hndrte_log_t log;
+ hnd_log_t log;
/* Console input line buffer
* Characters are read one at a time into cbuf until <CR> is received, then
@@ -71,8 +81,6 @@
*/
uint cbuf_idx;
char cbuf[CBUF_LEN];
-} hndrte_cons_t;
+} hnd_cons_t;
-hndrte_cons_t *hndrte_get_active_cons_state(void);
-
-#endif /* _HNDRTE_CONS_H */
+#endif /* _hnd_cons_h_ */
diff --git a/drivers/net/wireless/bcmdhd/include/hnd_pktpool.h b/drivers/net/wireless/bcmdhd/include/hnd_pktpool.h
new file mode 100644
index 0000000..4b78a21
--- /dev/null
+++ b/drivers/net/wireless/bcmdhd/include/hnd_pktpool.h
@@ -0,0 +1,204 @@
+/*
+ * HND generic packet pool operation primitives
+ *
+ * Copyright (C) 1999-2014, Broadcom Corporation
+ *
+ * Unless you and Broadcom execute a separate written software license
+ * agreement governing use of this software, this software is licensed to you
+ * under the terms of the GNU General Public License version 2 (the "GPL"),
+ * available at http://www.broadcom.com/licenses/GPLv2.php, with the
+ * following added to such license:
+ *
+ * As a special exception, the copyright holders of this software give you
+ * permission to link this software with independent modules, and to copy and
+ * distribute the resulting executable under terms of your choice, provided that
+ * you also meet, for each linked independent module, the terms and conditions of
+ * the license of that module. An independent module is a module which is not
+ * derived from this software. The special exception does not apply to any
+ * modifications of the software.
+ *
+ * Notwithstanding the above, under no circumstances may you combine this
+ * software in any way with any other Broadcom software provided under a license
+ * other than the GPL, without Broadcom's express prior written consent.
+ *
+ * $Id: $
+ */
+
+#ifndef _hnd_pktpool_h_
+#define _hnd_pktpool_h_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#ifdef BCMPKTPOOL
+#define POOL_ENAB(pool) ((pool) && (pool)->inited)
+#define SHARED_POOL (pktpool_shared)
+#else /* BCMPKTPOOL */
+#define POOL_ENAB(bus) 0
+#define SHARED_POOL ((struct pktpool *)NULL)
+#endif /* BCMPKTPOOL */
+
+#ifdef BCMFRAGPOOL
+#define SHARED_FRAG_POOL (pktpool_shared_lfrag)
+#endif
+#define SHARED_RXFRAG_POOL (pktpool_shared_rxlfrag)
+
+
+#ifndef PKTPOOL_LEN_MAX
+#define PKTPOOL_LEN_MAX 40
+#endif /* PKTPOOL_LEN_MAX */
+#define PKTPOOL_CB_MAX 3
+
+/* forward declaration */
+struct pktpool;
+
+typedef void (*pktpool_cb_t)(struct pktpool *pool, void *arg);
+typedef struct {
+ pktpool_cb_t cb;
+ void *arg;
+} pktpool_cbinfo_t;
+/* call back fn extension to populate host address in pool pkt */
+typedef int (*pktpool_cb_extn_t)(struct pktpool *pool, void *arg1, void* pkt, bool arg2);
+typedef struct {
+ pktpool_cb_extn_t cb;
+ void *arg;
+} pktpool_cbextn_info_t;
+
+
+#ifdef BCMDBG_POOL
+/* pkt pool debug states */
+#define POOL_IDLE 0
+#define POOL_RXFILL 1
+#define POOL_RXDH 2
+#define POOL_RXD11 3
+#define POOL_TXDH 4
+#define POOL_TXD11 5
+#define POOL_AMPDU 6
+#define POOL_TXENQ 7
+
+typedef struct {
+ void *p;
+ uint32 cycles;
+ uint32 dur;
+} pktpool_dbg_t;
+
+typedef struct {
+ uint8 txdh; /* tx to host */
+ uint8 txd11; /* tx to d11 */
+ uint8 enq; /* waiting in q */
+ uint8 rxdh; /* rx from host */
+ uint8 rxd11; /* rx from d11 */
+ uint8 rxfill; /* dma_rxfill */
+ uint8 idle; /* avail in pool */
+} pktpool_stats_t;
+#endif /* BCMDBG_POOL */
+
+typedef struct pktpool {
+ bool inited; /* pktpool_init was successful */
+ uint8 type; /* type of lbuf: basic, frag, etc */
+ uint8 id; /* pktpool ID: index in registry */
+ bool istx; /* direction: transmit or receive data path */
+
+ void * freelist; /* free list: see PKTNEXTFREE(), PKTSETNEXTFREE() */
+ uint16 avail; /* number of packets in pool's free list */
+ uint16 len; /* number of packets managed by pool */
+ uint16 maxlen; /* maximum size of pool <= PKTPOOL_LEN_MAX */
+ uint16 plen; /* size of pkt buffer, excluding lbuf|lbuf_frag */
+
+ bool empty;
+ uint8 cbtoggle;
+ uint8 cbcnt;
+ uint8 ecbcnt;
+ bool emptycb_disable;
+ pktpool_cbinfo_t *availcb_excl;
+ pktpool_cbinfo_t cbs[PKTPOOL_CB_MAX];
+ pktpool_cbinfo_t ecbs[PKTPOOL_CB_MAX];
+ pktpool_cbextn_info_t cbext;
+ pktpool_cbextn_info_t rxcplidfn;
+#ifdef BCMDBG_POOL
+ uint8 dbg_cbcnt;
+ pktpool_cbinfo_t dbg_cbs[PKTPOOL_CB_MAX];
+ uint16 dbg_qlen;
+ pktpool_dbg_t dbg_q[PKTPOOL_LEN_MAX + 1];
+#endif
+ pktpool_cbinfo_t dmarxfill;
+} pktpool_t;
+
+extern pktpool_t *pktpool_shared;
+#ifdef BCMFRAGPOOL
+extern pktpool_t *pktpool_shared_lfrag;
+#endif
+extern pktpool_t *pktpool_shared_rxlfrag;
+
+/* Incarnate a pktpool registry. On success returns total_pools. */
+extern int pktpool_attach(osl_t *osh, uint32 total_pools);
+extern int pktpool_dettach(osl_t *osh); /* Relinquish registry */
+
+extern int pktpool_init(osl_t *osh, pktpool_t *pktp, int *pktplen, int plen, bool istx, uint8 type);
+extern int pktpool_deinit(osl_t *osh, pktpool_t *pktp);
+extern int pktpool_fill(osl_t *osh, pktpool_t *pktp, bool minimal);
+extern void* pktpool_get(pktpool_t *pktp);
+extern void pktpool_free(pktpool_t *pktp, void *p);
+extern int pktpool_add(pktpool_t *pktp, void *p);
+extern int pktpool_avail_notify_normal(osl_t *osh, pktpool_t *pktp);
+extern int pktpool_avail_notify_exclusive(osl_t *osh, pktpool_t *pktp, pktpool_cb_t cb);
+extern int pktpool_avail_register(pktpool_t *pktp, pktpool_cb_t cb, void *arg);
+extern int pktpool_empty_register(pktpool_t *pktp, pktpool_cb_t cb, void *arg);
+extern int pktpool_setmaxlen(pktpool_t *pktp, uint16 maxlen);
+extern int pktpool_setmaxlen_strict(osl_t *osh, pktpool_t *pktp, uint16 maxlen);
+extern void pktpool_emptycb_disable(pktpool_t *pktp, bool disable);
+extern bool pktpool_emptycb_disabled(pktpool_t *pktp);
+extern int pktpool_hostaddr_fill_register(pktpool_t *pktp, pktpool_cb_extn_t cb, void *arg1);
+extern int pktpool_rxcplid_fill_register(pktpool_t *pktp, pktpool_cb_extn_t cb, void *arg);
+extern void pktpool_invoke_dmarxfill(pktpool_t *pktp);
+extern int pkpool_haddr_avail_register_cb(pktpool_t *pktp, pktpool_cb_t cb, void *arg);
+
+#define POOLPTR(pp) ((pktpool_t *)(pp))
+#define POOLID(pp) (POOLPTR(pp)->id)
+
+#define POOLSETID(pp, ppid) (POOLPTR(pp)->id = (ppid))
+
+#define pktpool_len(pp) (POOLPTR(pp)->len)
+#define pktpool_avail(pp) (POOLPTR(pp)->avail)
+#define pktpool_plen(pp) (POOLPTR(pp)->plen)
+#define pktpool_maxlen(pp) (POOLPTR(pp)->maxlen)
+
+
+/*
+ * ----------------------------------------------------------------------------
+ * A pool ID is assigned to a pkt pool during pool initialization. This is
+ * done by maintaining a registry of all initialized pools, and the registry
+ * index at which the pool is registered is used as the pool's unique ID.
+ * ID 0 is reserved and is used to signify an invalid pool ID.
+ * All packets henceforth allocated from a pool will be tagged with the pool's
+ * unique ID. Packets allocated from the heap will use the reserved ID = 0.
+ * Packets with non-zero pool id signify that they were allocated from a pool.
+ * A maximum of 15 pools are supported, allowing a 4-bit pool ID to be used
+ * in place of a 32-bit pool pointer in each packet.
+ * ----------------------------------------------------------------------------
+ */
+#define PKTPOOL_INVALID_ID (0)
+#define PKTPOOL_MAXIMUM_ID (15)
+
+/* Registry of pktpool(s) */
+extern pktpool_t *pktpools_registry[PKTPOOL_MAXIMUM_ID + 1];
+
+/* Pool ID to/from Pool Pointer converters */
+#define PKTPOOL_ID2PTR(id) (pktpools_registry[id])
+#define PKTPOOL_PTR2ID(pp) (POOLID(pp))
+
+
+#ifdef BCMDBG_POOL
+extern int pktpool_dbg_register(pktpool_t *pktp, pktpool_cb_t cb, void *arg);
+extern int pktpool_start_trigger(pktpool_t *pktp, void *p);
+extern int pktpool_dbg_dump(pktpool_t *pktp);
+extern int pktpool_dbg_notify(pktpool_t *pktp);
+extern int pktpool_stats_dump(pktpool_t *pktp, pktpool_stats_t *stats);
+#endif /* BCMDBG_POOL */
+
+#ifdef __cplusplus
+ }
+#endif
+
+#endif /* _hnd_pktpool_h_ */
diff --git a/drivers/net/wireless/bcmdhd/include/hnd_pktq.h b/drivers/net/wireless/bcmdhd/include/hnd_pktq.h
new file mode 100644
index 0000000..ef3d4c8
--- /dev/null
+++ b/drivers/net/wireless/bcmdhd/include/hnd_pktq.h
@@ -0,0 +1,186 @@
+/*
+ * HND generic pktq operation primitives
+ *
+ * Copyright (C) 1999-2014, Broadcom Corporation
+ *
+ * Unless you and Broadcom execute a separate written software license
+ * agreement governing use of this software, this software is licensed to you
+ * under the terms of the GNU General Public License version 2 (the "GPL"),
+ * available at http://www.broadcom.com/licenses/GPLv2.php, with the
+ * following added to such license:
+ *
+ * As a special exception, the copyright holders of this software give you
+ * permission to link this software with independent modules, and to copy and
+ * distribute the resulting executable under terms of your choice, provided that
+ * you also meet, for each linked independent module, the terms and conditions of
+ * the license of that module. An independent module is a module which is not
+ * derived from this software. The special exception does not apply to any
+ * modifications of the software.
+ *
+ * Notwithstanding the above, under no circumstances may you combine this
+ * software in any way with any other Broadcom software provided under a license
+ * other than the GPL, without Broadcom's express prior written consent.
+ *
+ * $Id: $
+ */
+
+#ifndef _hnd_pktq_h_
+#define _hnd_pktq_h_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* osl multi-precedence packet queue */
+#define PKTQ_LEN_MAX 0xFFFF /* Max uint16 65535 packets */
+#ifndef PKTQ_LEN_DEFAULT
+#define PKTQ_LEN_DEFAULT 128 /* Max 128 packets */
+#endif
+#ifndef PKTQ_MAX_PREC
+#define PKTQ_MAX_PREC 16 /* Maximum precedence levels */
+#endif
+
+typedef struct pktq_prec {
+ void *head; /* first packet to dequeue */
+ void *tail; /* last packet to dequeue */
+ uint16 len; /* number of queued packets */
+ uint16 max; /* maximum number of queued packets */
+} pktq_prec_t;
+
+#ifdef PKTQ_LOG
+typedef struct {
+ uint32 requested; /* packets requested to be stored */
+ uint32 stored; /* packets stored */
+ uint32 saved; /* packets saved,
+ because a lowest priority queue has given away one packet
+ */
+ uint32 selfsaved; /* packets saved,
+ because an older packet from the same queue has been dropped
+ */
+ uint32 full_dropped; /* packets dropped,
+ because pktq is full with higher precedence packets
+ */
+ uint32 dropped; /* packets dropped because pktq per that precedence is full */
+ uint32 sacrificed; /* packets dropped,
+ in order to save one from a queue of a highest priority
+ */
+ uint32 busy; /* packets dropped because of hardware/transmission error */
+ uint32 retry; /* packets re-sent because they were not received */
+ uint32 ps_retry; /* packets retried again prior to moving to power save mode */
+ uint32 suppress; /* packets which were suppressed and not transmitted */
+ uint32 retry_drop; /* packets finally dropped after retry limit */
+ uint32 max_avail; /* the high-water mark of the queue capacity for packets -
+ goes to zero as queue fills
+ */
+ uint32 max_used; /* the high-water mark of the queue utilisation for packets -
+ increases with use ('inverse' of max_avail)
+ */
+ uint32 queue_capacity; /* the maximum capacity of the queue */
+ uint32 rtsfail; /* count of rts attempts that failed to receive cts */
+ uint32 acked; /* count of packets sent (acked) successfully */
+ uint32 txrate_succ; /* running total of phy rate of packets sent successfully */
+ uint32 txrate_main; /* running total of primary phy rate of all packets */
+ uint32 throughput; /* actual data transferred successfully */
+ uint32 airtime; /* cumulative total medium access delay in useconds */
+ uint32 _logtime; /* timestamp of last counter clear */
+} pktq_counters_t;
+
+typedef struct {
+ uint32 _prec_log;
+ pktq_counters_t* _prec_cnt[PKTQ_MAX_PREC]; /* Counters per queue */
+} pktq_log_t;
+#endif /* PKTQ_LOG */
+
+
+#define PKTQ_COMMON \
+ uint16 num_prec; /* number of precedences in use */ \
+ uint16 hi_prec; /* rapid dequeue hint (>= highest non-empty prec) */ \
+ uint16 max; /* total max packets */ \
+ uint16 len; /* total number of packets */
+
+/* multi-priority pkt queue */
+struct pktq {
+ PKTQ_COMMON
+ /* q array must be last since # of elements can be either PKTQ_MAX_PREC or 1 */
+ struct pktq_prec q[PKTQ_MAX_PREC];
+#ifdef PKTQ_LOG
+ pktq_log_t* pktqlog;
+#endif
+};
+
+/* simple, non-priority pkt queue */
+struct spktq {
+ PKTQ_COMMON
+ /* q array must be last since # of elements can be either PKTQ_MAX_PREC or 1 */
+ struct pktq_prec q[1];
+};
+
+#define PKTQ_PREC_ITER(pq, prec) for (prec = (pq)->num_prec - 1; prec >= 0; prec--)
+
+/* fn(pkt, arg), returns true if pkt belongs to the interface */
+typedef bool (*ifpkt_cb_t)(void*, int);
+
+/* operations on a specific precedence in packet queue */
+
+#define pktq_psetmax(pq, prec, _max) ((pq)->q[prec].max = (_max))
+#define pktq_pmax(pq, prec) ((pq)->q[prec].max)
+#define pktq_plen(pq, prec) ((pq)->q[prec].len)
+#define pktq_pavail(pq, prec) ((pq)->q[prec].max - (pq)->q[prec].len)
+#define pktq_pfull(pq, prec) ((pq)->q[prec].len >= (pq)->q[prec].max)
+#define pktq_pempty(pq, prec) ((pq)->q[prec].len == 0)
+
+#define pktq_ppeek(pq, prec) ((pq)->q[prec].head)
+#define pktq_ppeek_tail(pq, prec) ((pq)->q[prec].tail)
+
+extern void pktq_append(struct pktq *pq, int prec, struct spktq *list);
+extern void pktq_prepend(struct pktq *pq, int prec, struct spktq *list);
+
+extern void *pktq_penq(struct pktq *pq, int prec, void *p);
+extern void *pktq_penq_head(struct pktq *pq, int prec, void *p);
+extern void *pktq_pdeq(struct pktq *pq, int prec);
+extern void *pktq_pdeq_prev(struct pktq *pq, int prec, void *prev_p);
+extern void *pktq_pdeq_with_fn(struct pktq *pq, int prec, ifpkt_cb_t fn, int arg);
+extern void *pktq_pdeq_tail(struct pktq *pq, int prec);
+/* Empty the queue at particular precedence level */
+extern void pktq_pflush(osl_t *osh, struct pktq *pq, int prec, bool dir,
+ ifpkt_cb_t fn, int arg);
+/* Remove a specified packet from its queue */
+extern bool pktq_pdel(struct pktq *pq, void *p, int prec);
+
+/* operations on a set of precedences in packet queue */
+
+extern int pktq_mlen(struct pktq *pq, uint prec_bmp);
+extern void *pktq_mdeq(struct pktq *pq, uint prec_bmp, int *prec_out);
+extern void *pktq_mpeek(struct pktq *pq, uint prec_bmp, int *prec_out);
+
+/* operations on packet queue as a whole */
+
+#define pktq_len(pq) ((int)(pq)->len)
+#define pktq_max(pq) ((int)(pq)->max)
+#define pktq_avail(pq) ((int)((pq)->max - (pq)->len))
+#define pktq_full(pq) ((pq)->len >= (pq)->max)
+#define pktq_empty(pq) ((pq)->len == 0)
+
+/* operations for single precedence queues */
+#define pktenq(pq, p) pktq_penq(((struct pktq *)(void *)pq), 0, (p))
+#define pktenq_head(pq, p) pktq_penq_head(((struct pktq *)(void *)pq), 0, (p))
+#define pktdeq(pq) pktq_pdeq(((struct pktq *)(void *)pq), 0)
+#define pktdeq_tail(pq) pktq_pdeq_tail(((struct pktq *)(void *)pq), 0)
+#define pktqflush(osh, pq) pktq_flush(osh, ((struct pktq *)(void *)pq), TRUE, NULL, 0)
+#define pktqinit(pq, len) pktq_init(((struct pktq *)(void *)pq), 1, len)
+
+extern void pktq_init(struct pktq *pq, int num_prec, int max_len);
+extern void pktq_set_max_plen(struct pktq *pq, int prec, int max_len);
+
+/* prec_out may be NULL if caller is not interested in return value */
+extern void *pktq_deq(struct pktq *pq, int *prec_out);
+extern void *pktq_deq_tail(struct pktq *pq, int *prec_out);
+extern void *pktq_peek(struct pktq *pq, int *prec_out);
+extern void *pktq_peek_tail(struct pktq *pq, int *prec_out);
+extern void pktq_flush(osl_t *osh, struct pktq *pq, bool dir, ifpkt_cb_t fn, int arg);
+
+#ifdef __cplusplus
+ }
+#endif
+
+#endif /* _hnd_pktq_h_ */
diff --git a/drivers/net/wireless/bcmdhd/include/hndpmu.h b/drivers/net/wireless/bcmdhd/include/hndpmu.h
old mode 100755
new mode 100644
index f760e62..9a31663
--- a/drivers/net/wireless/bcmdhd/include/hndpmu.h
+++ b/drivers/net/wireless/bcmdhd/include/hndpmu.h
@@ -21,16 +21,21 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: hndpmu.h 431134 2013-10-22 18:25:42Z $
+ * $Id: hndpmu.h 471127 2014-04-17 23:24:23Z $
*/
#ifndef _hndpmu_h_
#define _hndpmu_h_
+#include <typedefs.h>
+#include <osl_decl.h>
+#include <siutils.h>
+
extern void si_pmu_otp_power(si_t *sih, osl_t *osh, bool on, uint32* min_res_mask);
extern void si_sdiod_drive_strength_init(si_t *sih, osl_t *osh, uint32 drivestrength);
extern void si_pmu_minresmask_htavail_set(si_t *sih, osl_t *osh, bool set_clear);
+extern void si_pmu_slow_clk_reinit(si_t *sih, osl_t *osh);
#endif /* _hndpmu_h_ */
diff --git a/drivers/net/wireless/bcmdhd/include/hndsoc.h b/drivers/net/wireless/bcmdhd/include/hndsoc.h
old mode 100755
new mode 100644
index 7726a8a..947db00
--- a/drivers/net/wireless/bcmdhd/include/hndsoc.h
+++ b/drivers/net/wireless/bcmdhd/include/hndsoc.h
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: hndsoc.h 432420 2013-10-28 14:14:02Z $
+ * $Id: hndsoc.h 473238 2014-04-28 19:14:56Z $
*/
#ifndef _HNDSOC_H
@@ -73,6 +73,7 @@
#define SI_ARM_FLASH1 0xffff0000 /* ARM Flash Region 1 */
#define SI_ARM_FLASH1_SZ 0x00010000 /* ARM Size of Flash Region 1 */
+#define SI_SFLASH 0x14000000
#define SI_PCI_DMA 0x40000000 /* Client Mode sb2pcitranslation2 (1 GB) */
#define SI_PCI_DMA2 0x80000000 /* Client Mode sb2pcitranslation2 (1 GB) */
#define SI_PCI_DMA_SZ 0x40000000 /* Client Mode sb2pcitranslation2 size in bytes */
@@ -145,6 +146,7 @@
#define USB30D_CORE_ID 0x83d /* usb 3.0 device core */
#define ARMCR4_CORE_ID 0x83e /* ARM CR4 CPU */
#define GCI_CORE_ID 0x840 /* GCI Core */
+#define M2MDMA_CORE_ID 0x844 /* memory to memory dma */
#define APB_BRIDGE_CORE_ID 0x135 /* APB bridge core ID */
#define AXI_CORE_ID 0x301 /* AXI/GPV core ID */
#define EROM_CORE_ID 0x366 /* EROM core ID */
@@ -183,6 +185,7 @@
#define CC_4706B0_CORE_REV 0x8000001f /* chipcommon core */
#define SOCRAM_4706B0_CORE_REV 0x80000005 /* internal memory core */
#define GMAC_4706B0_CORE_REV 0x80000000 /* Gigabit MAC core */
+#define NS_PCIEG2_CORE_REV_B0 0x7 /* NS-B0 PCIE Gen 2 core rev */
/* There are TWO constants on all HND chips: SI_ENUM_BASE above,
* and chipcommon being the first core:
@@ -233,6 +236,7 @@
#define CCS_USBCLKREQ 0x00000100 /* USB Clock Req */
#define CCS_SECICLKREQ 0x00000100 /* SECI Clock Req */
#define CCS_ARMFASTCLOCKREQ 0x00000100 /* ARM CR4 fast clock request */
+#define CCS_AVBCLKREQ 0x00000400 /* AVB Clock enable request */
#define CCS_ERSRC_REQ_MASK 0x00000700 /* external resource requests */
#define CCS_ERSRC_REQ_SHIFT 8
#define CCS_ALPAVAIL 0x00010000 /* ALP is available */
@@ -274,9 +278,9 @@
#define SOC_KNLDEV_NORFLASH 0x00000002
#define SOC_KNLDEV_NANDFLASH 0x00000004
-#ifndef _LANGUAGE_ASSEMBLY
+#if !defined(_LANGUAGE_ASSEMBLY) && !defined(__ASSEMBLY__)
int soc_boot_dev(void *sih);
int soc_knl_dev(void *sih);
-#endif /* _LANGUAGE_ASSEMBLY */
+#endif /* !defined(_LANGUAGE_ASSEMBLY) && !defined(__ASSEMBLY__) */
#endif /* _HNDSOC_H */
diff --git a/drivers/net/wireless/bcmdhd/include/linux_osl.h b/drivers/net/wireless/bcmdhd/include/linux_osl.h
old mode 100755
new mode 100644
index 7c614ba..a7dca28
--- a/drivers/net/wireless/bcmdhd/include/linux_osl.h
+++ b/drivers/net/wireless/bcmdhd/include/linux_osl.h
@@ -2,13 +2,13 @@
* Linux OS Independent Layer
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -16,18 +16,19 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: linux_osl.h 432719 2013-10-29 12:04:59Z $
+ * $Id: linux_osl.h 474317 2014-04-30 21:49:42Z $
*/
#ifndef _linux_osl_h_
#define _linux_osl_h_
#include <typedefs.h>
+#define DECLSPEC_ALIGN(x) __attribute__ ((aligned(x)))
/* Linux Kernel: File Operations: start */
extern void * osl_os_open_image(char * filename);
@@ -70,7 +71,20 @@
#define ASSERT(exp)
#endif /* GCC_VERSION > 30100 */
#endif /* __GNUC__ */
-#endif
+#endif
+
+/* bcm_prefetch_32B */
+static inline void bcm_prefetch_32B(const uint8 *addr, const int cachelines_32B)
+{
+#if defined(BCM47XX_CA9) && (__LINUX_ARM_ARCH__ >= 5)
+ switch (cachelines_32B) {
+ case 4: __asm__ __volatile__("pld\t%a0" :: "p"(addr + 96) : "cc");
+ case 3: __asm__ __volatile__("pld\t%a0" :: "p"(addr + 64) : "cc");
+ case 2: __asm__ __volatile__("pld\t%a0" :: "p"(addr + 32) : "cc");
+ case 1: __asm__ __volatile__("pld\t%a0" :: "p"(addr + 0) : "cc");
+ }
+#endif
+}
/* microsecond delay */
#define OSL_DELAY(usec) osl_delay(usec)
@@ -97,10 +111,15 @@
/* PCI device bus # and slot # */
#define OSL_PCI_BUS(osh) osl_pci_bus(osh)
#define OSL_PCI_SLOT(osh) osl_pci_slot(osh)
+#define OSL_PCIE_DOMAIN(osh) osl_pcie_domain(osh)
+#define OSL_PCIE_BUS(osh) osl_pcie_bus(osh)
extern uint osl_pci_bus(osl_t *osh);
extern uint osl_pci_slot(osl_t *osh);
+extern uint osl_pcie_domain(osl_t *osh);
+extern uint osl_pcie_bus(osl_t *osh);
extern struct pci_dev *osl_pci_device(osl_t *osh);
+
/* Pkttag flag should be part of public information */
typedef struct {
bool pkttag;
@@ -110,6 +129,9 @@
void *unused[3];
} osl_pubinfo_t;
+extern void osl_flag_set(osl_t *osh, uint32 mask);
+extern bool osl_is_flag_set(osl_t *osh, uint32 mask);
+
#define PKTFREESETCB(osh, _tx_fn, _tx_ctx) \
do { \
((osl_pubinfo_t*)osh)->tx_fn = _tx_fn; \
@@ -147,8 +169,8 @@
osl_dma_free_consistent((osh), (void*)(va), (size), (pa))
extern uint osl_dma_consistent_align(void);
-extern void *
-osl_dma_alloc_consistent(osl_t *osh, uint size, uint16 align, uint *tot, dmaaddr_t *pap);
+extern void *osl_dma_alloc_consistent(osl_t *osh, uint size, uint16 align,
+ uint *tot, dmaaddr_t *pap);
extern void osl_dma_free_consistent(osl_t *osh, void *va, uint size, dmaaddr_t pa);
/* map/unmap direction */
@@ -165,18 +187,26 @@
/* API for DMA addressing capability */
#define OSL_DMADDRWIDTH(osh, addrwidth) ({BCM_REFERENCE(osh); BCM_REFERENCE(addrwidth);})
-#if defined(__ARM_ARCH_7A__)
+#if defined(__mips__) || (defined(BCM47XX_CA9) && defined(__ARM_ARCH_7A__))
extern void osl_cache_flush(void *va, uint size);
extern void osl_cache_inv(void *va, uint size);
extern void osl_prefetch(const void *ptr);
#define OSL_CACHE_FLUSH(va, len) osl_cache_flush((void *) va, len)
#define OSL_CACHE_INV(va, len) osl_cache_inv((void *) va, len)
#define OSL_PREFETCH(ptr) osl_prefetch(ptr)
+#ifdef __ARM_ARCH_7A__
+ extern int osl_arch_is_coherent(void);
+ #define OSL_ARCH_IS_COHERENT() osl_arch_is_coherent()
+#else
+ #define OSL_ARCH_IS_COHERENT() NULL
+#endif /* __ARM_ARCH_7A__ */
#else
#define OSL_CACHE_FLUSH(va, len) BCM_REFERENCE(va)
#define OSL_CACHE_INV(va, len) BCM_REFERENCE(va)
- #define OSL_PREFETCH(ptr) prefetch(ptr)
-#endif
+ #define OSL_PREFETCH(ptr) BCM_REFERENCE(ptr)
+
+ #define OSL_ARCH_IS_COHERENT() NULL
+#endif
/* register access macros */
#if defined(BCMSDIO)
@@ -185,7 +215,21 @@
(uintptr)(r), sizeof(*(r)), (v)))
#define OSL_READ_REG(osh, r) (bcmsdh_reg_read(osl_get_bus_handle(osh), \
(uintptr)(r), sizeof(*(r))))
-#endif
+#elif defined(BCM47XX_ACP_WAR)
+extern void osl_pcie_rreg(osl_t *osh, ulong addr, void *v, uint size);
+
+#define OSL_READ_REG(osh, r) \
+ ({\
+ __typeof(*(r)) __osl_v; \
+ osl_pcie_rreg(osh, (uintptr)(r), (void *)&__osl_v, sizeof(*(r))); \
+ __osl_v; \
+ })
+#endif
+
+#if defined(BCM47XX_ACP_WAR)
+ #define SELECT_BUS_WRITE(osh, mmap_op, bus_op) ({BCM_REFERENCE(osh); mmap_op;})
+ #define SELECT_BUS_READ(osh, mmap_op, bus_op) ({BCM_REFERENCE(osh); bus_op;})
+#else
#if defined(BCMSDIO)
#define SELECT_BUS_WRITE(osh, mmap_op, bus_op) if (((osl_pubinfo_t*)(osh))->mmbus) \
@@ -195,7 +239,8 @@
#else
#define SELECT_BUS_WRITE(osh, mmap_op, bus_op) ({BCM_REFERENCE(osh); mmap_op;})
#define SELECT_BUS_READ(osh, mmap_op, bus_op) ({BCM_REFERENCE(osh); mmap_op;})
-#endif
+#endif
+#endif /* BCM47XX_ACP_WAR */
#define OSL_ERROR(bcmerror) osl_error(bcmerror)
extern int osl_error(int bcmerror);
@@ -274,7 +319,7 @@
#define OSL_GETCYCLES(x) rdtscl((x))
#else
#define OSL_GETCYCLES(x) ((x) = 0)
-#endif
+#endif
/* dereference an address that may cause a bus exception */
#define BUSPROBE(val, addr) ({ (val) = R_REG(NULL, (addr)); 0; })
@@ -363,6 +408,12 @@
#define PKTID(skb) ({BCM_REFERENCE(skb); 0;})
#define PKTSETID(skb, id) ({BCM_REFERENCE(skb); BCM_REFERENCE(id);})
#define PKTSHRINK(osh, m) ({BCM_REFERENCE(osh); m;})
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 6, 0)
+#define PKTORPHAN(skb) skb_orphan(skb)
+#else
+#define PKTORPHAN(skb) ({BCM_REFERENCE(skb); 0;})
+#endif /* LINUX VERSION >= 3.6 */
+
#ifdef BCMDBG_CTRACE
#define DEL_CTRACE(zosh, zskb) { \
@@ -597,6 +648,152 @@
#define CTF_MARK(m) ({BCM_REFERENCE(m); 0;})
#endif /* HNDCTF */
+#if defined(BCM_GMAC3)
+
+/** pktalloced accounting in devices using GMAC Bulk Forwarding to DHD */
+
+/* Account for packets delivered to downstream forwarder by GMAC interface. */
+extern void osl_pkt_tofwder(osl_t *osh, void *skbs, int skb_cnt);
+#define PKTTOFWDER(osh, skbs, skb_cnt) \
+ osl_pkt_tofwder(((osl_t *)osh), (void *)(skbs), (skb_cnt))
+
+/* Account for packets received from downstream forwarder. */
+#if defined(BCMDBG_CTRACE) /* pkt logging */
+extern void osl_pkt_frmfwder(osl_t *osh, void *skbs, int skb_cnt,
+ int line, char *file);
+#define PKTFRMFWDER(osh, skbs, skb_cnt) \
+ osl_pkt_frmfwder(((osl_t *)osh), (void *)(skbs), (skb_cnt), \
+ __LINE__, __FILE__)
+#else /* ! BCMDBG_CTRACE */
+extern void osl_pkt_frmfwder(osl_t *osh, void *skbs, int skb_cnt);
+#define PKTFRMFWDER(osh, skbs, skb_cnt) \
+ osl_pkt_frmfwder(((osl_t *)osh), (void *)(skbs), (skb_cnt))
+#endif
+
+
+/** GMAC Forwarded packet tagging for reduced cache flush/invalidate.
+ * In a FWDERBUF-tagged packet, only FWDER_PKTMAPSZ bytes would have
+ * been accessed in the GMAC forwarder. This may be used to limit the number of
+ * cachelines that need to be flushed or invalidated.
+ * Packets sent to the DHD from a GMAC forwarder will be tagged w/ FWDERBUF.
+ * DHD may clear the FWDERBUF tag, if more than FWDER_PKTMAPSZ was accessed.
+ * Likewise, a debug print of a packet payload in say the ethernet driver needs
+ * to be accompanied with a clear of the FWDERBUF tag.
+ */
+
+/** Forwarded packets, have a HWRXOFF sized rx header (etc.h) */
+#define FWDER_HWRXOFF (30)
+
+/** Maximum amount of packet data that a downstream forwarder (GMAC) may have
+ * read into the L1 cache (not dirty). This may be used in reduced cache ops.
+ *
+ * Max 56: ET HWRXOFF[30] + BRCMHdr[4] + EtherHdr[14] + VlanHdr[4] + IP[4]
+ */
+#define FWDER_PKTMAPSZ (FWDER_HWRXOFF + 4 + 14 + 4 + 4)
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 36)
+
+#define FWDERBUF (1 << 4)
+#define PKTSETFWDERBUF(osh, skb) \
+ ({ \
+ BCM_REFERENCE(osh); \
+ (((struct sk_buff*)(skb))->pktc_flags |= FWDERBUF); \
+ })
+#define PKTCLRFWDERBUF(osh, skb) \
+ ({ \
+ BCM_REFERENCE(osh); \
+ (((struct sk_buff*)(skb))->pktc_flags &= (~FWDERBUF)); \
+ })
+#define PKTISFWDERBUF(osh, skb) \
+ ({ \
+ BCM_REFERENCE(osh); \
+ (((struct sk_buff*)(skb))->pktc_flags & FWDERBUF); \
+ })
+
+#elif LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 22)
+
+#define FWDERBUF (1 << 20)
+#define PKTSETFWDERBUF(osh, skb) \
+ ({ \
+ BCM_REFERENCE(osh); \
+ (((struct sk_buff*)(skb))->mac_len |= FWDERBUF); \
+ })
+#define PKTCLRFWDERBUF(osh, skb) \
+ ({ \
+ BCM_REFERENCE(osh); \
+ (((struct sk_buff*)(skb))->mac_len &= (~FWDERBUF)); \
+ })
+#define PKTISFWDERBUF(osh, skb) \
+ ({ \
+ BCM_REFERENCE(osh); \
+ (((struct sk_buff*)(skb))->mac_len & FWDERBUF); \
+ })
+
+#else /* 2.6.22 */
+
+#define FWDERBUF (1 << 4)
+#define PKTSETFWDERBUF(osh, skb) \
+ ({ \
+ BCM_REFERENCE(osh); \
+ (((struct sk_buff*)(skb))->__unused |= FWDERBUF); \
+ })
+#define PKTCLRFWDERBUF(osh, skb) \
+ ({ \
+ BCM_REFERENCE(osh); \
+ (((struct sk_buff*)(skb))->__unused &= (~FWDERBUF)); \
+ })
+#define PKTISFWDERBUF(osh, skb) \
+ ({ \
+ BCM_REFERENCE(osh); \
+ (((struct sk_buff*)(skb))->__unused & FWDERBUF); \
+ })
+
+#endif /* 2.6.22 */
+
+#else /* ! BCM_GMAC3 */
+
+#define PKTSETFWDERBUF(osh, skb) ({ BCM_REFERENCE(osh); BCM_REFERENCE(skb); })
+#define PKTCLRFWDERBUF(osh, skb) ({ BCM_REFERENCE(osh); BCM_REFERENCE(skb); })
+#define PKTISFWDERBUF(osh, skb) ({ BCM_REFERENCE(osh); BCM_REFERENCE(skb); FALSE;})
+
+#endif /* ! BCM_GMAC3 */
+
+
+#ifdef HNDCTF
+/* For broadstream iqos */
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 36)
+#define TOBR (1 << 5)
+#define PKTSETTOBR(osh, skb) \
+ ({ \
+ BCM_REFERENCE(osh); \
+ (((struct sk_buff*)(skb))->pktc_flags |= TOBR); \
+ })
+#define PKTCLRTOBR(osh, skb) \
+ ({ \
+ BCM_REFERENCE(osh); \
+ (((struct sk_buff*)(skb))->pktc_flags &= (~TOBR)); \
+ })
+#define PKTISTOBR(skb) (((struct sk_buff*)(skb))->pktc_flags & TOBR)
+#define PKTSETCTFIPCTXIF(skb, ifp) (((struct sk_buff*)(skb))->ctf_ipc_txif = ifp)
+#elif LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 22)
+#define PKTSETTOBR(osh, skb) ({BCM_REFERENCE(osh); BCM_REFERENCE(skb);})
+#define PKTCLRTOBR(osh, skb) ({BCM_REFERENCE(osh); BCM_REFERENCE(skb);})
+#define PKTISTOBR(skb) ({BCM_REFERENCE(skb); FALSE;})
+#define PKTSETCTFIPCTXIF(skb, ifp) ({BCM_REFERENCE(skb); BCM_REFERENCE(ifp);})
+#else /* 2.6.22 */
+#define PKTSETTOBR(osh, skb) ({BCM_REFERENCE(osh); BCM_REFERENCE(skb);})
+#define PKTCLRTOBR(osh, skb) ({BCM_REFERENCE(osh); BCM_REFERENCE(skb);})
+#define PKTISTOBR(skb) ({BCM_REFERENCE(skb); FALSE;})
+#define PKTSETCTFIPCTXIF(skb, ifp) ({BCM_REFERENCE(skb); BCM_REFERENCE(ifp);})
+#endif /* 2.6.22 */
+#else /* HNDCTF */
+#define PKTSETTOBR(osh, skb) ({BCM_REFERENCE(osh); BCM_REFERENCE(skb);})
+#define PKTCLRTOBR(osh, skb) ({BCM_REFERENCE(osh); BCM_REFERENCE(skb);})
+#define PKTISTOBR(skb) ({BCM_REFERENCE(skb); FALSE;})
+#endif /* HNDCTF */
+
+
#ifdef BCMFA
#ifdef BCMFA_HW_HASH
#define PKTSETFAHIDX(skb, idx) (((struct sk_buff*)(skb))->napt_idx = idx)
@@ -629,6 +826,7 @@
extern void osl_pktfree(osl_t *osh, void *skb, bool send);
extern void *osl_pktget_static(osl_t *osh, uint len);
extern void osl_pktfree_static(osl_t *osh, void *skb, bool send);
+extern void osl_pktclone(osl_t *osh, void **pkt);
#ifdef BCMDBG_CTRACE
#define PKT_CTRACE_DUMP(osh, b) osl_ctrace_dump((osh), (b))
@@ -679,6 +877,9 @@
#define PKTALLOCED(osh) osl_pktalloced(osh)
extern uint osl_pktalloced(osl_t *osh);
+#define OSL_RAND() osl_rand()
+extern uint32 osl_rand(void);
+
#define DMA_MAP(osh, va, size, direction, p, dmah) \
osl_dma_map((osh), (va), (size), (direction), (p), (dmah))
diff --git a/drivers/net/wireless/bcmdhd/include/linuxver.h b/drivers/net/wireless/bcmdhd/include/linuxver.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/miniopt.h b/drivers/net/wireless/bcmdhd/include/miniopt.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/msgtrace.h b/drivers/net/wireless/bcmdhd/include/msgtrace.h
old mode 100755
new mode 100644
index c01676f..228c045
--- a/drivers/net/wireless/bcmdhd/include/msgtrace.h
+++ b/drivers/net/wireless/bcmdhd/include/msgtrace.h
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: msgtrace.h 369735 2012-11-19 22:50:22Z $
+ * $Id: msgtrace.h 439681 2013-11-27 15:39:50Z $
*/
#ifndef _MSGTRACE_H
@@ -34,7 +34,8 @@
/* This marks the start of a packed structure section. */
#include <packed_section_start.h>
-
+/* for osl_t */
+#include <osl_decl.h>
#define MSGTRACE_VERSION 1
/* Message trace header */
diff --git a/drivers/net/wireless/bcmdhd/include/osl.h b/drivers/net/wireless/bcmdhd/include/osl.h
old mode 100755
new mode 100644
index cdfb107..1e0455a
--- a/drivers/net/wireless/bcmdhd/include/osl.h
+++ b/drivers/net/wireless/bcmdhd/include/osl.h
@@ -21,15 +21,13 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: osl.h 424562 2013-09-18 10:57:30Z $
+ * $Id: osl.h 474639 2014-05-01 23:52:31Z $
*/
#ifndef _osl_h_
#define _osl_h_
-/* osl handle type forward declaration */
-typedef struct osl_info osl_t;
-typedef struct osl_dmainfo osldma_t;
+#include <osl_decl.h>
#define OSL_PKTTAG_SZ 32 /* Size of PktTag */
@@ -41,6 +39,7 @@
typedef void (*osl_wreg_fn_t)(void *ctx, volatile void *reg, unsigned int val, unsigned int size);
+
#include <linux_osl.h>
#ifndef PKTDBG_TRACE
@@ -113,8 +112,9 @@
#define PKTSETFRAGTOTNUM(osh, lb, tot) BCM_REFERENCE(osh)
#define PKTFRAGTOTLEN(osh, lb) (0)
#define PKTSETFRAGTOTLEN(osh, lb, len) BCM_REFERENCE(osh)
-#define PKTFRAGIFINDEX(osh, lb) (0)
-#define PKTSETFRAGIFINDEX(osh, lb, idx) BCM_REFERENCE(osh)
+#define PKTIFINDEX(osh, lb) (0)
+#define PKTSETIFINDEX(osh, lb, idx) BCM_REFERENCE(osh)
+#define PKTGETLF(osh, len, send, lbuf_type) (0)
/* in rx path, reuse totlen as used len */
#define PKTFRAGUSEDLEN(osh, lb) (0)
@@ -133,10 +133,17 @@
#define PKTRESETRXFRAG(osh, lb) BCM_REFERENCE(osh)
/* TX FRAG */
-#define PKTISTXFRAG(osh, lb) (0)
+#define PKTISTXFRAG(osh, lb) (0)
#define PKTSETTXFRAG(osh, lb) BCM_REFERENCE(osh)
+/* Need Rx completion used for AMPDU reordering */
+#define PKTNEEDRXCPL(osh, lb) (TRUE)
+#define PKTSETNORXCPL(osh, lb) BCM_REFERENCE(osh)
+#define PKTRESETNORXCPL(osh, lb) BCM_REFERENCE(osh)
+
#define PKTISFRAG(osh, lb) (0)
#define PKTFRAGISCHAINED(osh, i) (0)
+/* TRIM Tail bytes from lfrag */
+#define PKTFRAG_TRIM_TAILBYTES(osh, p, len) PKTSETLEN(osh, p, PKTLEN(osh, p) - len)
#endif /* _osl_h_ */
diff --git a/drivers/net/wireless/bcmdhd/include/osl_decl.h b/drivers/net/wireless/bcmdhd/include/osl_decl.h
new file mode 100644
index 0000000..aafad10
--- /dev/null
+++ b/drivers/net/wireless/bcmdhd/include/osl_decl.h
@@ -0,0 +1,34 @@
+/*
+ * osl forward declarations
+ *
+ * Copyright (C) 1999-2014, Broadcom Corporation
+ *
+ * Unless you and Broadcom execute a separate written software license
+ * agreement governing use of this software, this software is licensed to you
+ * under the terms of the GNU General Public License version 2 (the "GPL"),
+ * available at http://www.broadcom.com/licenses/GPLv2.php, with the
+ * following added to such license:
+ *
+ * As a special exception, the copyright holders of this software give you
+ * permission to link this software with independent modules, and to copy and
+ * distribute the resulting executable under terms of your choice, provided that
+ * you also meet, for each linked independent module, the terms and conditions of
+ * the license of that module. An independent module is a module which is not
+ * derived from this software. The special exception does not apply to any
+ * modifications of the software.
+ *
+ * Notwithstanding the above, under no circumstances may you combine this
+ * software in any way with any other Broadcom software provided under a license
+ * other than the GPL, without Broadcom's express prior written consent.
+ *
+ * $Id$
+ */
+
+#ifndef _osl_decl_h_
+#define _osl_decl_h_
+
+/* osl handle type forward declaration */
+typedef struct osl_info osl_t;
+typedef struct osl_dmainfo osldma_t;
+
+#endif
diff --git a/drivers/net/wireless/bcmdhd/include/packed_section_end.h b/drivers/net/wireless/bcmdhd/include/packed_section_end.h
old mode 100755
new mode 100644
index a7133c2..08c2d56
--- a/drivers/net/wireless/bcmdhd/include/packed_section_end.h
+++ b/drivers/net/wireless/bcmdhd/include/packed_section_end.h
@@ -34,7 +34,7 @@
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
- * $Id: packed_section_end.h 241182 2011-02-17 21:50:03Z $
+ * $Id: packed_section_end.h 437241 2013-11-18 07:39:24Z $
*/
diff --git a/drivers/net/wireless/bcmdhd/include/packed_section_start.h b/drivers/net/wireless/bcmdhd/include/packed_section_start.h
old mode 100755
new mode 100644
index ed5045c..52dec03
--- a/drivers/net/wireless/bcmdhd/include/packed_section_start.h
+++ b/drivers/net/wireless/bcmdhd/include/packed_section_start.h
@@ -34,7 +34,7 @@
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
- * $Id: packed_section_start.h 286783 2011-09-29 06:18:57Z $
+ * $Id: packed_section_start.h 437241 2013-11-18 07:39:24Z $
*/
diff --git a/drivers/net/wireless/bcmdhd/include/pcicfg.h b/drivers/net/wireless/bcmdhd/include/pcicfg.h
old mode 100755
new mode 100644
index 2d28dde..3390e77
--- a/drivers/net/wireless/bcmdhd/include/pcicfg.h
+++ b/drivers/net/wireless/bcmdhd/include/pcicfg.h
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: pcicfg.h 413666 2013-07-20 01:16:40Z $
+ * $Id: pcicfg.h 465082 2014-03-26 17:37:28Z $
*/
#ifndef _h_pcicfg_
@@ -73,6 +73,26 @@
#define PCI_GPIO_IN 0xb0 /* pci config space gpio input (>=rev3) */
#define PCI_GPIO_OUT 0xb4 /* pci config space gpio output (>=rev3) */
#define PCI_GPIO_OUTEN 0xb8 /* pci config space gpio output enable (>=rev3) */
+#define PCI_L1SS_CTRL2 0x24c /* The L1 PM Substates Control register */
+
+/* Private Registers */
+#define PCI_STAT_CTRL 0xa80
+#define PCI_L0_EVENTCNT 0xa84
+#define PCI_L0_STATETMR 0xa88
+#define PCI_L1_EVENTCNT 0xa8c
+#define PCI_L1_STATETMR 0xa90
+#define PCI_L1_1_EVENTCNT 0xa94
+#define PCI_L1_1_STATETMR 0xa98
+#define PCI_L1_2_EVENTCNT 0xa9c
+#define PCI_L1_2_STATETMR 0xaa0
+#define PCI_L2_EVENTCNT 0xaa4
+#define PCI_L2_STATETMR 0xaa8
+
+#define PCI_PMCR_REFUP 0x1814 /* Trefup time */
+#define PCI_PMCR_REFUP_EXT 0x1818 /* Trefup extend Max */
+#define PCI_TPOWER_SCALE_MASK 0x3
+#define PCI_TPOWER_SCALE_SHIFT 3 /* 0:1 is scale and 2 is rsvd */
+
#define PCI_BAR0_SHADOW_OFFSET (2 * 1024) /* bar0 + 2K accesses sprom shadow (in pci core) */
#define PCI_BAR0_SPROM_OFFSET (4 * 1024) /* bar0 + 4K accesses external sprom */
diff --git a/drivers/net/wireless/bcmdhd/include/pcie_core.h b/drivers/net/wireless/bcmdhd/include/pcie_core.h
old mode 100755
new mode 100644
index 678fe9c..242a9a2
--- a/drivers/net/wireless/bcmdhd/include/pcie_core.h
+++ b/drivers/net/wireless/bcmdhd/include/pcie_core.h
@@ -21,11 +21,14 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: pcie_core.h 430913 2013-10-21 21:46:10Z $
+ * $Id: pcie_core.h 468449 2014-04-07 21:50:10Z $
*/
#ifndef _PCIE_CORE_H
#define _PCIE_CORE_H
+
#include <sbhnddma.h>
+#include <siutils.h>
+
/* cpp contortions to concatenate w/arg prescan */
#ifndef PAD
#define _PADLINE(line) pad ## line
@@ -59,18 +62,22 @@
/* dma regs to control the flow between host2dev and dev2host */
typedef struct pcie_devdmaregs {
dma64regs_t tx;
- uint32 PAD[2];
+ uint32 PAD[2];
dma64regs_t rx;
- uint32 PAD[2];
+ uint32 PAD[2];
} pcie_devdmaregs_t;
+#define PCIE_DB_HOST2DEV_0 0x1
+#define PCIE_DB_HOST2DEV_1 0x2
+#define PCIE_DB_DEV2HOST_0 0x3
+#define PCIE_DB_DEV2HOST_1 0x4
/* door bell register sets */
typedef struct pcie_doorbell {
- uint32 host2dev_0;
- uint32 host2dev_1;
- uint32 dev2host_0;
- uint32 dev2host_1;
+ uint32 host2dev_0;
+ uint32 host2dev_1;
+ uint32 dev2host_0;
+ uint32 dev2host_1;
} pcie_doorbell_t;
/* SB side: PCIE core and host control registers */
@@ -155,7 +162,9 @@
#define PCIE_RST 0x02 /* Value driven out to pin */
#define PCIE_SPERST 0x04 /* SurvivePeRst */
#define PCIE_DISABLE_L1CLK_GATING 0x10
+#define PCIE_DLYPERST 0x100 /* Delay PeRst to CoE Core */
#define PCIE_DISSPROMLD 0x200 /* DisableSpromLoadOnPerst */
+#define PCIE_WakeModeL2 0x1000 /* Wake on L2 */
#define PCIE_CFGADDR 0x120 /* offsetof(configaddr) */
#define PCIE_CFGDATA 0x124 /* offsetof(configdata) */
@@ -459,12 +468,13 @@
#define PCIE_CAP_DEVCTRL2_OBFF_ENAB_MASK 0x6000 /* Enable OBFF mechanism, select signaling method */
/* LTR registers in PCIE Cap */
-#define PCIE_CAP_LTR0_REG_OFFSET 0x798 /* ltr0_reg offset in pcie cap */
-#define PCIE_CAP_LTR1_REG_OFFSET 0x79C /* ltr1_reg offset in pcie cap */
-#define PCIE_CAP_LTR2_REG_OFFSET 0x7A0 /* ltr2_reg offset in pcie cap */
-#define PCIE_CAP_LTR0_REG 0 /* ltr0_reg */
-#define PCIE_CAP_LTR1_REG 1 /* ltr1_reg */
-#define PCIE_CAP_LTR2_REG 2 /* ltr2_reg */
+#define PCIE_LTR0_REG_OFFSET 0x844 /* ltr0_reg offset in pcie cap */
+#define PCIE_LTR1_REG_OFFSET 0x848 /* ltr1_reg offset in pcie cap */
+#define PCIE_LTR2_REG_OFFSET 0x84c /* ltr2_reg offset in pcie cap */
+#define PCIE_LTR0_REG_DEFAULT_60 0x883c883c /* active latency default to 60usec */
+#define PCIE_LTR0_REG_DEFAULT_150 0x88968896 /* active latency default to 150usec */
+#define PCIE_LTR1_REG_DEFAULT 0x88648864 /* idle latency default to 100usec */
+#define PCIE_LTR2_REG_DEFAULT 0x90039003 /* sleep latency default to 3msec */
/* Status reg PCIE_PLP_STATUSREG */
#define PCIE_PLP_POLARITYINV_STAT 0x10
@@ -499,14 +509,28 @@
/* definition of configuration space registers of PCIe gen2
* http://hwnbu-twiki.sj.broadcom.com/twiki/pub/Mwgroup/CurrentPcieGen2ProgramGuide/pcie_ep.htm
*/
-#define PCIECFGREG_PML1_SUB_CTRL1 0x248
-#define PCI_PM_L1_2_ENA_MASK 0x00000001 /* PCI-PM L1.2 Enabled */
-#define PCI_PM_L1_1_ENA_MASK 0x00000002 /* PCI-PM L1.1 Enabled */
-#define ASPM_L1_2_ENA_MASK 0x00000004 /* ASPM L1.2 Enabled */
-#define ASPM_L1_1_ENA_MASK 0x00000008 /* ASPM L1.1 Enabled */
+#define PCIECFGREG_STATUS_CMD 0x4
+#define PCIECFGREG_PM_CSR 0x4C
+#define PCIECFGREG_MSI_CAP 0x58
+#define PCIECFGREG_MSI_ADDR_L 0x5C
+#define PCIECFGREG_MSI_ADDR_H 0x60
+#define PCIECFGREG_MSI_DATA 0x64
+#define PCIECFGREG_LINK_STATUS_CTRL 0xBC
+#define PCIECFGREG_LINK_STATUS_CTRL2 0xDC
+#define PCIECFGREG_RBAR_CTRL 0x228
+#define PCIECFGREG_PML1_SUB_CTRL1 0x248
+#define PCIECFGREG_REG_BAR2_CONFIG 0x4E0
+#define PCIECFGREG_REG_BAR3_CONFIG 0x4F4
+#define PCIECFGREG_PDL_CTRL1 0x1004
+#define PCIECFGREG_PDL_IDDQ 0x1814
+#define PCIECFGREG_REG_PHY_CTL7 0x181c
-#define PCIECFGREG_PDL_CTRL1 0x1004
-#define PCIECFGREG_REG_PHY_CTL7 0x181c
+/* PCIECFGREG_PML1_SUB_CTRL1 Bit Definition */
+#define PCI_PM_L1_2_ENA_MASK 0x00000001 /* PCI-PM L1.2 Enabled */
+#define PCI_PM_L1_1_ENA_MASK 0x00000002 /* PCI-PM L1.1 Enabled */
+#define ASPM_L1_2_ENA_MASK 0x00000004 /* ASPM L1.2 Enabled */
+#define ASPM_L1_1_ENA_MASK 0x00000008 /* ASPM L1.1 Enabled */
+
/* PCIe gen2 mailbox interrupt masks */
#define I_MB 0x3
#define I_BIT0 0x1
@@ -519,6 +543,7 @@
/* enumeration Core regs */
#define PCIH2D_MailBox 0x140
+#define PCIH2D_DB1 0x144
#define PCID2H_MailBox 0x148
#define PCIMailBoxInt 0x48
#define PCIMailBoxMask 0x4C
@@ -547,9 +572,10 @@
* Sleep is most tolerant
*/
#define LTR_ACTIVE 2
-#define LTR_ACTIVE_IDLE 1
+#define LTR_ACTIVE_IDLE 1
#define LTR_SLEEP 0
-
+#define LTR_FINAL_MASK 0x300
+#define LTR_FINAL_SHIFT 8
/* pwrinstatus, pwrintmask regs */
#define PCIEGEN2_PWRINT_D0_STATE_SHIFT 0
@@ -578,4 +604,33 @@
#define SBTOPCIE_MB_FUNC2_SHIFT 12
#define SBTOPCIE_MB_FUNC3_SHIFT 14
+/* pcieiocstatus */
+#define PCIEGEN2_IOC_D0_STATE_SHIFT 8
+#define PCIEGEN2_IOC_D1_STATE_SHIFT 9
+#define PCIEGEN2_IOC_D2_STATE_SHIFT 10
+#define PCIEGEN2_IOC_D3_STATE_SHIFT 11
+#define PCIEGEN2_IOC_L0_LINK_SHIFT 12
+#define PCIEGEN2_IOC_L1_LINK_SHIFT 13
+#define PCIEGEN2_IOC_L1L2_LINK_SHIFT 14
+#define PCIEGEN2_IOC_L2_L3_LINK_SHIFT 15
+
+#define PCIEGEN2_IOC_D0_STATE_MASK (1 << PCIEGEN2_IOC_D0_STATE_SHIFT)
+#define PCIEGEN2_IOC_D1_STATE_MASK (1 << PCIEGEN2_IOC_D1_STATE_SHIFT)
+#define PCIEGEN2_IOC_D2_STATE_MASK (1 << PCIEGEN2_IOC_D2_STATE_SHIFT)
+#define PCIEGEN2_IOC_D3_STATE_MASK (1 << PCIEGEN2_IOC_D3_STATE_SHIFT)
+#define PCIEGEN2_IOC_L0_LINK_MASK (1 << PCIEGEN2_IOC_L0_LINK_SHIFT)
+#define PCIEGEN2_IOC_L1_LINK_MASK (1 << PCIEGEN2_IOC_L1_LINK_SHIFT)
+#define PCIEGEN2_IOC_L1L2_LINK_MASK (1 << PCIEGEN2_IOC_L1L2_LINK_SHIFT)
+#define PCIEGEN2_IOC_L2_L3_LINK_MASK (1 << PCIEGEN2_IOC_L2_L3_LINK_SHIFT)
+
+/* stat_ctrl */
+#define PCIE_STAT_CTRL_RESET 0x1
+#define PCIE_STAT_CTRL_ENABLE 0x2
+#define PCIE_STAT_CTRL_INTENABLE 0x4
+#define PCIE_STAT_CTRL_INTSTATUS 0x8
+
+#ifdef BCMDRIVER
+void pcie_watchdog_reset(osl_t *osh, si_t *sih, sbpcieregs_t *sbpcieregs);
+#endif /* BCMDRIVER */
+
#endif /* _PCIE_CORE_H */
diff --git a/drivers/net/wireless/bcmdhd/include/proto/802.11.h b/drivers/net/wireless/bcmdhd/include/proto/802.11.h
old mode 100755
new mode 100644
index cad9e22..7a584f4
--- a/drivers/net/wireless/bcmdhd/include/proto/802.11.h
+++ b/drivers/net/wireless/bcmdhd/include/proto/802.11.h
@@ -21,7 +21,7 @@
*
* Fundamental types and constants relating to 802.11
*
- * $Id: 802.11.h 444070 2013-12-18 13:20:12Z $
+ * $Id: 802.11.h 469158 2014-04-09 21:31:31Z $
*/
#ifndef _802_11_H_
@@ -91,7 +91,7 @@
#define DOT11_MIN_DTIM_PERIOD 1 /* d11 min DTIM period */
#define DOT11_MAX_DTIM_PERIOD 0xFF /* d11 max DTIM period */
-/* 802.2 LLC/SNAP header used by 802.11 per 802.1H */
+/** 802.2 LLC/SNAP header used by 802.11 per 802.1H */
#define DOT11_LLC_SNAP_HDR_LEN 8 /* d11 LLC/SNAP header length */
#define DOT11_OUI_LEN 3 /* d11 OUI length */
BWL_PRE_PACKED_STRUCT struct dot11_llc_snap_header {
@@ -108,7 +108,7 @@
#define RFC1042_HDR_LEN (ETHER_HDR_LEN + DOT11_LLC_SNAP_HDR_LEN) /* RCF1042 header length */
/* Generic 802.11 MAC header */
-/*
+/**
* N.B.: This struct reflects the full 4 address 802.11 MAC header.
* The fields are defined such that the shorter 1, 2, and 3
* address headers just use the first k fields.
@@ -163,9 +163,10 @@
} BWL_POST_PACKED_STRUCT;
#define DOT11_CS_END_LEN 16 /* d11 CF-END frame length */
-/* RWL wifi protocol: The Vendor Specific Action frame is defined for vendor-specific signaling
-* category+OUI+vendor specific content ( this can be variable)
-*/
+/**
+ * RWL wifi protocol: The Vendor Specific Action frame is defined for vendor-specific signaling
+ * category+OUI+vendor specific content ( this can be variable)
+ */
BWL_PRE_PACKED_STRUCT struct dot11_action_wifi_vendor_specific {
uint8 category;
uint8 OUI[3];
@@ -175,7 +176,7 @@
} BWL_POST_PACKED_STRUCT;
typedef struct dot11_action_wifi_vendor_specific dot11_action_wifi_vendor_specific_t;
-/* generic vender specific action frame with variable length */
+/** generic vendor specific action frame with variable length */
BWL_PRE_PACKED_STRUCT struct dot11_action_vs_frmhdr {
uint8 category;
uint8 OUI[3];
@@ -205,7 +206,7 @@
#define DOT11_BA_CTL_TID_MASK 0xF000 /* tid mask */
#define DOT11_BA_CTL_TID_SHIFT 12 /* tid shift */
-/* control frame header (BA/BAR) */
+/** control frame header (BA/BAR) */
BWL_PRE_PACKED_STRUCT struct dot11_ctl_header {
uint16 fc; /* frame control */
uint16 durid; /* duration/ID */
@@ -214,7 +215,7 @@
} BWL_POST_PACKED_STRUCT;
#define DOT11_CTL_HDR_LEN 16 /* control frame hdr len */
-/* BAR frame payload */
+/** BAR frame payload */
BWL_PRE_PACKED_STRUCT struct dot11_bar {
uint16 bar_control; /* BAR Control */
uint16 seqnum; /* Starting Sequence control */
@@ -223,7 +224,7 @@
#define DOT11_BA_BITMAP_LEN 128 /* bitmap length */
#define DOT11_BA_CMP_BITMAP_LEN 8 /* compressed bitmap length */
-/* BA frame payload */
+/** BA frame payload */
BWL_PRE_PACKED_STRUCT struct dot11_ba {
uint16 ba_control; /* BA Control */
uint16 seqnum; /* Starting Sequence control */
@@ -231,7 +232,7 @@
} BWL_POST_PACKED_STRUCT;
#define DOT11_BA_LEN 4 /* BA frame payload len (wo bitmap) */
-/* Management frame header */
+/** Management frame header */
BWL_PRE_PACKED_STRUCT struct dot11_management_header {
uint16 fc; /* frame control */
uint16 durid; /* duration/ID */
@@ -323,8 +324,8 @@
typedef struct dot11_power_cnst dot11_power_cnst_t;
BWL_PRE_PACKED_STRUCT struct dot11_power_cap {
- uint8 min;
- uint8 max;
+ int8 min;
+ int8 max;
} BWL_POST_PACKED_STRUCT;
typedef struct dot11_power_cap dot11_power_cap_t;
@@ -345,7 +346,8 @@
} BWL_POST_PACKED_STRUCT;
typedef struct dot11_supp_channels dot11_supp_channels_t;
-/* Extension Channel Offset IE: 802.11n-D1.0 spec. added sideband
+/**
+ * Extension Channel Offset IE: 802.11n-D1.0 spec. added sideband
* offset for 40MHz operation. The possible 3 values are:
* 1 = above control channel
* 3 = below control channel
@@ -362,7 +364,7 @@
uint8 id; /* IE ID, 221, DOT11_MNG_PROPR_ID */
uint8 len; /* IE length */
uint8 oui[3];
- uint8 type; /* type inidicates what follows */
+ uint8 type; /* type indicates what follows */
uint8 extch;
} BWL_POST_PACKED_STRUCT;
typedef struct dot11_brcm_extch dot11_brcm_extch_ie_t;
@@ -382,7 +384,7 @@
} BWL_POST_PACKED_STRUCT;
#define DOT11_ACTION_FRMHDR_LEN 2
-/* CSA IE data structure */
+/** CSA IE data structure */
BWL_PRE_PACKED_STRUCT struct dot11_channel_switch {
uint8 id; /* id DOT11_MNG_CHANNEL_SWITCH_ID */
uint8 len; /* length of IE */
@@ -411,7 +413,7 @@
uint8 count; /* number of beacons before switching */
} BWL_POST_PACKED_STRUCT;
-/* 11n Extended Channel Switch IE data structure */
+/** 11n Extended Channel Switch IE data structure */
BWL_PRE_PACKED_STRUCT struct dot11_ext_csa {
uint8 id; /* id DOT11_MNG_EXT_CHANNEL_SWITCH_ID */
uint8 len; /* length of IE */
@@ -432,7 +434,7 @@
struct dot11_csa_body b; /* body of the ie */
} BWL_POST_PACKED_STRUCT;
-/* Wide Bandwidth Channel Switch IE data structure */
+/** Wide Bandwidth Channel Switch IE data structure */
BWL_PRE_PACKED_STRUCT struct dot11_wide_bw_channel_switch {
uint8 id; /* id DOT11_MNG_WIDE_BW_CHANNEL_SWITCH_ID */
uint8 len; /* length of IE */
@@ -444,7 +446,7 @@
#define DOT11_WIDE_BW_SWITCH_IE_LEN 3 /* length of IE data, not including 2 byte header */
-/* Channel Switch Wrapper IE data structure */
+/** Channel Switch Wrapper IE data structure */
BWL_PRE_PACKED_STRUCT struct dot11_channel_switch_wrapper {
uint8 id; /* id DOT11_MNG_WIDE_BW_CHANNEL_SWITCH_ID */
uint8 len; /* length of IE */
@@ -452,7 +454,7 @@
} BWL_POST_PACKED_STRUCT;
typedef struct dot11_channel_switch_wrapper dot11_chan_switch_wrapper_ie_t;
-/* VHT Transmit Power Envelope IE data structure */
+/** VHT Transmit Power Envelope IE data structure */
BWL_PRE_PACKED_STRUCT struct dot11_vht_transmit_power_envelope {
uint8 id; /* id DOT11_MNG_WIDE_BW_CHANNEL_SWITCH_ID */
uint8 len; /* length of IE */
@@ -658,7 +660,7 @@
#define AC_BITMAP_SET(ab, ac) (((ab) |= (1 << (ac))))
#define AC_BITMAP_RESET(ab, ac) (((ab) &= ~(1 << (ac))))
-/* WME Information Element (IE) */
+/** WME Information Element (IE) */
BWL_PRE_PACKED_STRUCT struct wme_ie {
uint8 oui[3];
uint8 type;
@@ -676,7 +678,7 @@
} BWL_POST_PACKED_STRUCT;
typedef struct edcf_acparam edcf_acparam_t;
-/* WME Parameter Element (PE) */
+/** WME Parameter Element (PE) */
BWL_PRE_PACKED_STRUCT struct wme_param_ie {
uint8 oui[3];
uint8 type;
@@ -762,7 +764,7 @@
#define EDCF_AC_VO_ECW_AP 0x32 /* AP ECW value for audio AC */
#define EDCF_AC_VO_TXOP_AP 0x002f /* AP TXOP value for audio AC */
-/* EDCA Parameter IE */
+/** EDCA Parameter IE */
BWL_PRE_PACKED_STRUCT struct edca_param_ie {
uint8 qosinfo;
uint8 rsvd;
@@ -771,7 +773,7 @@
typedef struct edca_param_ie edca_param_ie_t;
#define EDCA_PARAM_IE_LEN 18 /* EDCA Parameter IE length */
-/* QoS Capability IE */
+/** QoS Capability IE */
BWL_PRE_PACKED_STRUCT struct qos_cap_ie {
uint8 qosinfo;
} BWL_POST_PACKED_STRUCT;
@@ -787,6 +789,8 @@
typedef struct dot11_qbss_load_ie dot11_qbss_load_ie_t;
#define BSS_LOAD_IE_SIZE 7 /* BSS load IE size */
+#define WLC_QBSS_LOAD_CHAN_FREE_MAX 0xff /* max for channel free score */
+
/* nom_msdu_size */
#define FIXED_MSDU_SIZE 0x8000 /* MSDU size is fixed */
#define MSDU_SIZE_MASK 0x7fff /* (Nominal or fixed) MSDU size */
@@ -796,7 +800,7 @@
#define INTEGER_SHIFT 13 /* integer shift */
#define FRACTION_MASK 0x1FFF /* fraction mask */
-/* Management Notification Frame */
+/** Management Notification Frame */
BWL_PRE_PACKED_STRUCT struct dot11_management_notification {
uint8 category; /* DOT11_ACTION_NOTIFICATION */
uint8 action;
@@ -806,7 +810,7 @@
} BWL_POST_PACKED_STRUCT;
#define DOT11_MGMT_NOTIFICATION_LEN 4 /* Fixed length */
-/* Timeout Interval IE */
+/** Timeout Interval IE */
BWL_PRE_PACKED_STRUCT struct ti_ie {
uint8 ti_type;
uint32 ti_val;
@@ -937,6 +941,7 @@
#define FC_PROBE_REQ FC_KIND(FC_TYPE_MNG, FC_SUBTYPE_PROBE_REQ) /* probe request */
#define FC_PROBE_RESP FC_KIND(FC_TYPE_MNG, FC_SUBTYPE_PROBE_RESP) /* probe response */
#define FC_BEACON FC_KIND(FC_TYPE_MNG, FC_SUBTYPE_BEACON) /* beacon */
+#define FC_ATIM FC_KIND(FC_TYPE_MNG, FC_SUBTYPE_ATIM) /* ATIM */
#define FC_DISASSOC FC_KIND(FC_TYPE_MNG, FC_SUBTYPE_DISASSOC) /* disassoc */
#define FC_AUTH FC_KIND(FC_TYPE_MNG, FC_SUBTYPE_AUTH) /* authentication */
#define FC_DEAUTH FC_KIND(FC_TYPE_MNG, FC_SUBTYPE_DEAUTH) /* deauthentication */
@@ -1376,6 +1381,8 @@
#define DOT11_EXT_CAP_DMS 26
/* Interworking support bit position */
#define DOT11_EXT_CAP_IW 31
+/* QoS map support bit position */
+#define DOT11_EXT_CAP_QOS_MAP 32
/* service Interval granularity bit position and mask */
#define DOT11_EXT_CAP_SI 41
#define DOT11_EXT_CAP_SI_MASK 0x0E
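The new `DOT11_EXT_CAP_QOS_MAP` value (32) is a bit position within the Extended Capabilities bitmap, which spans multiple octets. A hedged sketch of testing such a position in a byte-array bitmap (`ext_cap_isset` is a hypothetical helper; the driver's actual accessor may differ):

```c
#include <assert.h>

/* Bit positions from the hunk above. */
#define DOT11_EXT_CAP_IW      31
#define DOT11_EXT_CAP_QOS_MAP 32

/* Hypothetical helper: bit N of the Extended Capabilities bitmap
 * lives in octet N/8, at bit offset N%8 within that octet. */
static int ext_cap_isset(const unsigned char *cap, unsigned int bit)
{
	return (cap[bit / 8] >> (bit % 8)) & 1;
}
```

Position 32 therefore falls in the fifth octet (index 4), bit 0, which is why the QoS map capability requires an Extended Capabilities element of at least five octets.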
@@ -1472,6 +1479,13 @@
#define DOT11_SM_ACTION_CHANNEL_SWITCH 4 /* d11 action channel switch */
#define DOT11_SM_ACTION_EXT_CSA 5 /* d11 extened CSA for 11n */
+/* QoS action ids */
+#define DOT11_QOS_ACTION_ADDTS_REQ 0 /* d11 action ADDTS request */
+#define DOT11_QOS_ACTION_ADDTS_RESP 1 /* d11 action ADDTS response */
+#define DOT11_QOS_ACTION_DELTS 2 /* d11 action DELTS */
+#define DOT11_QOS_ACTION_SCHEDULE 3 /* d11 action schedule */
+#define DOT11_QOS_ACTION_QOS_MAP 4 /* d11 action QOS map */
+
/* HT action ids */
#define DOT11_ACTION_ID_HT_CH_WIDTH 0 /* notify channel width action id */
#define DOT11_ACTION_ID_HT_MIMO_PS 1 /* mimo ps action id */
@@ -1552,7 +1566,7 @@
#define DOT11_VHT_ACTION_GID_MGMT 1 /* Group ID Management */
#define DOT11_VHT_ACTION_OPER_MODE_NOTIF 2 /* Operating mode notif'n */
-/* DLS Request frame header */
+/** DLS Request frame header */
BWL_PRE_PACKED_STRUCT struct dot11_dls_req {
uint8 category; /* category of action frame (2) */
uint8 action; /* DLS action: req (0) */
@@ -1565,7 +1579,7 @@
typedef struct dot11_dls_req dot11_dls_req_t;
#define DOT11_DLS_REQ_LEN 18 /* Fixed length */
-/* DLS response frame header */
+/** DLS response frame header */
BWL_PRE_PACKED_STRUCT struct dot11_dls_resp {
uint8 category; /* category of action frame (2) */
uint8 action; /* DLS action: req (0) */
@@ -1580,7 +1594,7 @@
/* ************* 802.11v related definitions. ************* */
-/* BSS Management Transition Query frame header */
+/** BSS Management Transition Query frame header */
BWL_PRE_PACKED_STRUCT struct dot11_bsstrans_query {
uint8 category; /* category of action frame (10) */
uint8 action; /* WNM action: trans_query (6) */
@@ -1591,7 +1605,7 @@
typedef struct dot11_bsstrans_query dot11_bsstrans_query_t;
#define DOT11_BSSTRANS_QUERY_LEN 4 /* Fixed length */
-/* BSS Management Transition Request frame header */
+/** BSS Management Transition Request frame header */
BWL_PRE_PACKED_STRUCT struct dot11_bsstrans_req {
uint8 category; /* category of action frame (10) */
uint8 action; /* WNM action: trans_req (7) */
@@ -1612,7 +1626,7 @@
#define DOT11_BSSTRANS_REQMODE_BSS_TERM_INCL 0x08
#define DOT11_BSSTRANS_REQMODE_ESS_DISASSOC_IMNT 0x10
-/* BSS Management transition response frame header */
+/** BSS Management transition response frame header */
BWL_PRE_PACKED_STRUCT struct dot11_bsstrans_resp {
uint8 category; /* category of action frame (10) */
uint8 action; /* WNM action: trans_resp (8) */
@@ -1636,7 +1650,7 @@
#define DOT11_BSSTRANS_RESP_STATUS_REJ_LEAVING_ESS 8
-/* BSS Max Idle Period element */
+/** BSS Max Idle Period element */
BWL_PRE_PACKED_STRUCT struct dot11_bss_max_idle_period_ie {
uint8 id; /* 90, DOT11_MNG_BSS_MAX_IDLE_PERIOD_ID */
uint8 len;
@@ -1647,7 +1661,7 @@
#define DOT11_BSS_MAX_IDLE_PERIOD_IE_LEN 3 /* bss max idle period IE size */
#define DOT11_BSS_MAX_IDLE_PERIOD_OPT_PROTECTED 1 /* BSS max idle option */
-/* TIM Broadcast request element */
+/** TIM Broadcast request element */
BWL_PRE_PACKED_STRUCT struct dot11_timbc_req_ie {
uint8 id; /* 94, DOT11_MNG_TIMBC_REQ_ID */
uint8 len;
@@ -1656,7 +1670,7 @@
typedef struct dot11_timbc_req_ie dot11_timbc_req_ie_t;
#define DOT11_TIMBC_REQ_IE_LEN 1 /* Fixed length */
-/* TIM Broadcast request frame header */
+/** TIM Broadcast request frame header */
BWL_PRE_PACKED_STRUCT struct dot11_timbc_req {
uint8 category; /* category of action frame (10) */
uint8 action; /* WNM action: DOT11_WNM_ACTION_TIMBC_REQ(18) */
@@ -1666,7 +1680,7 @@
typedef struct dot11_timbc_req dot11_timbc_req_t;
#define DOT11_TIMBC_REQ_LEN 3 /* Fixed length */
-/* TIM Broadcast response element */
+/** TIM Broadcast response element */
BWL_PRE_PACKED_STRUCT struct dot11_timbc_resp_ie {
uint8 id; /* 95, DOT11_MNG_TIM_BROADCAST_RESP_ID */
uint8 len;
@@ -1686,7 +1700,7 @@
#define DOT11_TIMBC_STATUS_OVERRIDDEN 3
#define DOT11_TIMBC_STATUS_RESERVED 4
-/* TIM Broadcast request frame header */
+/** TIM Broadcast request frame header */
BWL_PRE_PACKED_STRUCT struct dot11_timbc_resp {
uint8 category; /* category of action frame (10) */
uint8 action; /* action: DOT11_WNM_ACTION_TIMBC_RESP(19) */
@@ -1696,7 +1710,7 @@
typedef struct dot11_timbc_resp dot11_timbc_resp_t;
#define DOT11_TIMBC_RESP_LEN 3 /* Fixed length */
-/* TIM element */
+/** TIM element */
BWL_PRE_PACKED_STRUCT struct dot11_tim_ie {
uint8 id; /* 5, DOT11_MNG_TIM_ID */
uint8 len; /* 4 - 255 */
@@ -1709,7 +1723,7 @@
#define DOT11_TIM_IE_FIXED_LEN 3 /* Fixed length, without id and len */
#define DOT11_TIM_IE_FIXED_TOTAL_LEN 5 /* Fixed length, with id and len */
-/* TIM Broadcast frame header */
+/** TIM Broadcast frame header */
BWL_PRE_PACKED_STRUCT struct dot11_timbc {
uint8 category; /* category of action frame (11) */
uint8 action; /* action: TIM (0) */
@@ -1722,7 +1736,7 @@
#define DOT11_TIMBC_FIXED_LEN (sizeof(dot11_timbc_t) - 1) /* Fixed length */
#define DOT11_TIMBC_LEN 11 /* Fixed length */
-/* TCLAS frame classifier type */
+/** TCLAS frame classifier type */
BWL_PRE_PACKED_STRUCT struct dot11_tclas_fc_hdr {
uint8 type;
uint8 mask;
@@ -1747,7 +1761,7 @@
#define DOT11_TCLAS_FC_4_IP_HIGHER 4
#define DOT11_TCLAS_FC_5_8021D 5
-/* TCLAS frame classifier type 0 parameters for Ethernet */
+/** TCLAS frame classifier type 0 parameters for Ethernet */
BWL_PRE_PACKED_STRUCT struct dot11_tclas_fc_0_eth {
uint8 type;
uint8 mask;
@@ -1758,7 +1772,7 @@
typedef struct dot11_tclas_fc_0_eth dot11_tclas_fc_0_eth_t;
#define DOT11_TCLAS_FC_0_ETH_LEN 16
-/* TCLAS frame classifier type 1 parameters for IPV4 */
+/** TCLAS frame classifier type 1 parameters for IPV4 */
BWL_PRE_PACKED_STRUCT struct dot11_tclas_fc_1_ipv4 {
uint8 type;
uint8 mask;
@@ -1774,7 +1788,7 @@
typedef struct dot11_tclas_fc_1_ipv4 dot11_tclas_fc_1_ipv4_t;
#define DOT11_TCLAS_FC_1_IPV4_LEN 18
-/* TCLAS frame classifier type 2 parameters for 802.1Q */
+/** TCLAS frame classifier type 2 parameters for 802.1Q */
BWL_PRE_PACKED_STRUCT struct dot11_tclas_fc_2_8021q {
uint8 type;
uint8 mask;
@@ -1783,7 +1797,7 @@
typedef struct dot11_tclas_fc_2_8021q dot11_tclas_fc_2_8021q_t;
#define DOT11_TCLAS_FC_2_8021Q_LEN 4
-/* TCLAS frame classifier type 3 parameters for filter offset */
+/** TCLAS frame classifier type 3 parameters for filter offset */
BWL_PRE_PACKED_STRUCT struct dot11_tclas_fc_3_filter {
uint8 type;
uint8 mask;
@@ -1793,11 +1807,11 @@
typedef struct dot11_tclas_fc_3_filter dot11_tclas_fc_3_filter_t;
#define DOT11_TCLAS_FC_3_FILTER_LEN 4
-/* TCLAS frame classifier type 4 parameters for IPV4 is the same as TCLAS type 1 */
+/** TCLAS frame classifier type 4 parameters for IPV4 is the same as TCLAS type 1 */
typedef struct dot11_tclas_fc_1_ipv4 dot11_tclas_fc_4_ipv4_t;
#define DOT11_TCLAS_FC_4_IPV4_LEN DOT11_TCLAS_FC_1_IPV4_LEN
-/* TCLAS frame classifier type 4 parameters for IPV6 */
+/** TCLAS frame classifier type 4 parameters for IPV6 */
BWL_PRE_PACKED_STRUCT struct dot11_tclas_fc_4_ipv6 {
uint8 type;
uint8 mask;
@@ -1813,7 +1827,7 @@
typedef struct dot11_tclas_fc_4_ipv6 dot11_tclas_fc_4_ipv6_t;
#define DOT11_TCLAS_FC_4_IPV6_LEN 44
-/* TCLAS frame classifier type 5 parameters for 802.1D */
+/** TCLAS frame classifier type 5 parameters for 802.1D */
BWL_PRE_PACKED_STRUCT struct dot11_tclas_fc_5_8021d {
uint8 type;
uint8 mask;
@@ -1824,7 +1838,7 @@
typedef struct dot11_tclas_fc_5_8021d dot11_tclas_fc_5_8021d_t;
#define DOT11_TCLAS_FC_5_8021D_LEN 6
-/* TCLAS frame classifier type parameters */
+/** TCLAS frame classifier type parameters */
BWL_PRE_PACKED_STRUCT union dot11_tclas_fc {
uint8 data[1];
dot11_tclas_fc_hdr_t hdr;
@@ -1841,7 +1855,7 @@
#define DOT11_TCLAS_FC_MIN_LEN 4 /* Classifier Type 2 has the min size */
#define DOT11_TCLAS_FC_MAX_LEN 254
-/* TCLAS element */
+/** TCLAS element */
BWL_PRE_PACKED_STRUCT struct dot11_tclas_ie {
uint8 id; /* 14, DOT11_MNG_TCLAS_ID */
uint8 len;
@@ -1851,7 +1865,7 @@
typedef struct dot11_tclas_ie dot11_tclas_ie_t;
#define DOT11_TCLAS_IE_LEN 3 /* Fixed length, include id and len */
-/* TCLAS processing element */
+/** TCLAS processing element */
BWL_PRE_PACKED_STRUCT struct dot11_tclas_proc_ie {
uint8 id; /* 44, DOT11_MNG_TCLAS_PROC_ID */
uint8 len;
@@ -1868,7 +1882,7 @@
/* TSPEC element defined in 802.11 std section 8.4.2.32 - Not supported */
#define DOT11_TSPEC_IE_LEN 57 /* Fixed length */
-/* TFS request element */
+/** TFS request element */
BWL_PRE_PACKED_STRUCT struct dot11_tfs_req_ie {
uint8 id; /* 91, DOT11_MNG_TFS_REQUEST_ID */
uint8 len;
@@ -1879,15 +1893,15 @@
typedef struct dot11_tfs_req_ie dot11_tfs_req_ie_t;
#define DOT11_TFS_REQ_IE_LEN 2 /* Fixed length, without id and len */
-/* TFS request action codes (bitfield) */
+/** TFS request action codes (bitfield) */
#define DOT11_TFS_ACTCODE_DELETE 1
#define DOT11_TFS_ACTCODE_NOTIFY 2
-/* TFS request subelement IDs */
+/** TFS request subelement IDs */
#define DOT11_TFS_REQ_TFS_SE_ID 1
#define DOT11_TFS_REQ_VENDOR_SE_ID 221
-/* TFS subelement */
+/** TFS subelement */
BWL_PRE_PACKED_STRUCT struct dot11_tfs_se {
uint8 sub_id;
uint8 len;
@@ -1896,7 +1910,7 @@
typedef struct dot11_tfs_se dot11_tfs_se_t;
-/* TFS response element */
+/** TFS response element */
BWL_PRE_PACKED_STRUCT struct dot11_tfs_resp_ie {
uint8 id; /* 92, DOT11_MNG_TFS_RESPONSE_ID */
uint8 len;
@@ -1906,12 +1920,12 @@
typedef struct dot11_tfs_resp_ie dot11_tfs_resp_ie_t;
#define DOT11_TFS_RESP_IE_LEN 1 /* Fixed length, without id and len */
-/* TFS response subelement IDs (same subelments, but different IDs than in TFS request */
+/** TFS response subelement IDs (same subelements, but different IDs than in TFS request) */
#define DOT11_TFS_RESP_TFS_STATUS_SE_ID 1
#define DOT11_TFS_RESP_TFS_SE_ID 2
#define DOT11_TFS_RESP_VENDOR_SE_ID 221
-/* TFS status subelement */
+/** TFS status subelement */
BWL_PRE_PACKED_STRUCT struct dot11_tfs_status_se {
uint8 sub_id; /* 92, DOT11_MNG_TFS_RESPONSE_ID */
uint8 len;
@@ -1948,7 +1962,7 @@
#define DOT11_FMS_TFS_STATUS_ALT_CHANGE_MDI 13
#define DOT11_FMS_TFS_STATUS_ALT_TCLAS_UNSUPP 14
-/* TFS Management Request frame header */
+/** TFS Management Request frame header */
BWL_PRE_PACKED_STRUCT struct dot11_tfs_req {
uint8 category; /* category of action frame (10) */
uint8 action; /* WNM action: TFS request (13) */
@@ -1958,7 +1972,7 @@
typedef struct dot11_tfs_req dot11_tfs_req_t;
#define DOT11_TFS_REQ_LEN 3 /* Fixed length */
-/* TFS Management Response frame header */
+/** TFS Management Response frame header */
BWL_PRE_PACKED_STRUCT struct dot11_tfs_resp {
uint8 category; /* category of action frame (10) */
uint8 action; /* WNM action: TFS request (14) */
@@ -1968,7 +1982,7 @@
typedef struct dot11_tfs_resp dot11_tfs_resp_t;
#define DOT11_TFS_RESP_LEN 3 /* Fixed length */
-/* TFS Management Notify frame request header */
+/** TFS Management Notify frame request header */
BWL_PRE_PACKED_STRUCT struct dot11_tfs_notify_req {
uint8 category; /* category of action frame (10) */
uint8 action; /* WNM action: TFS notify request (15) */
@@ -1978,7 +1992,7 @@
typedef struct dot11_tfs_notify_req dot11_tfs_notify_req_t;
#define DOT11_TFS_NOTIFY_REQ_LEN 3 /* Fixed length */
-/* TFS Management Notify frame response header */
+/** TFS Management Notify frame response header */
BWL_PRE_PACKED_STRUCT struct dot11_tfs_notify_resp {
uint8 category; /* category of action frame (10) */
uint8 action; /* WNM action: TFS notify response (28) */
@@ -1989,7 +2003,7 @@
#define DOT11_TFS_NOTIFY_RESP_LEN 3 /* Fixed length */
-/* WNM-Sleep Management Request frame header */
+/** WNM-Sleep Management Request frame header */
BWL_PRE_PACKED_STRUCT struct dot11_wnm_sleep_req {
uint8 category; /* category of action frame (10) */
uint8 action; /* WNM action: wnm-sleep request (16) */
@@ -1999,7 +2013,7 @@
typedef struct dot11_wnm_sleep_req dot11_wnm_sleep_req_t;
#define DOT11_WNM_SLEEP_REQ_LEN 3 /* Fixed length */
-/* WNM-Sleep Management Response frame header */
+/** WNM-Sleep Management Response frame header */
BWL_PRE_PACKED_STRUCT struct dot11_wnm_sleep_resp {
uint8 category; /* category of action frame (10) */
uint8 action; /* WNM action: wnm-sleep request (17) */
@@ -2056,7 +2070,7 @@
#define DOT11_WNM_SLEEP_RESP_DENY_INUSE 5
#define DOT11_WNM_SLEEP_RESP_LAST 6
-/* DMS Management Request frame header */
+/** DMS Management Request frame header */
BWL_PRE_PACKED_STRUCT struct dot11_dms_req {
uint8 category; /* category of action frame (10) */
uint8 action; /* WNM action: dms request (23) */
@@ -2066,7 +2080,7 @@
typedef struct dot11_dms_req dot11_dms_req_t;
#define DOT11_DMS_REQ_LEN 3 /* Fixed length */
-/* DMS Management Response frame header */
+/** DMS Management Response frame header */
BWL_PRE_PACKED_STRUCT struct dot11_dms_resp {
uint8 category; /* category of action frame (10) */
uint8 action; /* WNM action: dms request (24) */
@@ -2076,7 +2090,7 @@
typedef struct dot11_dms_resp dot11_dms_resp_t;
#define DOT11_DMS_RESP_LEN 3 /* Fixed length */
-/* DMS request element */
+/** DMS request element */
BWL_PRE_PACKED_STRUCT struct dot11_dms_req_ie {
uint8 id; /* 99, DOT11_MNG_DMS_REQUEST_ID */
uint8 len;
@@ -2085,7 +2099,7 @@
typedef struct dot11_dms_req_ie dot11_dms_req_ie_t;
#define DOT11_DMS_REQ_IE_LEN 2 /* Fixed length */
-/* DMS response element */
+/** DMS response element */
BWL_PRE_PACKED_STRUCT struct dot11_dms_resp_ie {
uint8 id; /* 100, DOT11_MNG_DMS_RESPONSE_ID */
uint8 len;
@@ -2094,7 +2108,7 @@
typedef struct dot11_dms_resp_ie dot11_dms_resp_ie_t;
#define DOT11_DMS_RESP_IE_LEN 2 /* Fixed length */
-/* DMS request descriptor */
+/** DMS request descriptor */
BWL_PRE_PACKED_STRUCT struct dot11_dms_req_desc {
uint8 dms_id;
uint8 len;
@@ -2108,7 +2122,7 @@
#define DOT11_DMS_REQ_TYPE_REMOVE 1
#define DOT11_DMS_REQ_TYPE_CHANGE 2
-/* DMS response status */
+/** DMS response status */
BWL_PRE_PACKED_STRUCT struct dot11_dms_resp_st {
uint8 dms_id;
uint8 len;
@@ -2125,7 +2139,7 @@
#define DOT11_DMS_RESP_LSC_UNSUPPORTED 0xFFFF
-/* FMS Management Request frame header */
+/** FMS Management Request frame header */
BWL_PRE_PACKED_STRUCT struct dot11_fms_req {
uint8 category; /* category of action frame (10) */
uint8 action; /* WNM action: fms request (9) */
@@ -2135,7 +2149,7 @@
typedef struct dot11_fms_req dot11_fms_req_t;
#define DOT11_FMS_REQ_LEN 3 /* Fixed length */
-/* FMS Management Response frame header */
+/** FMS Management Response frame header */
BWL_PRE_PACKED_STRUCT struct dot11_fms_resp {
uint8 category; /* category of action frame (10) */
uint8 action; /* WNM action: fms request (10) */
@@ -2145,7 +2159,7 @@
typedef struct dot11_fms_resp dot11_fms_resp_t;
#define DOT11_FMS_RESP_LEN 3 /* Fixed length */
-/* FMS Descriptor element */
+/** FMS Descriptor element */
BWL_PRE_PACKED_STRUCT struct dot11_fms_desc {
uint8 id;
uint8 len;
@@ -2161,7 +2175,7 @@
#define DOT11_FMS_CNTR_COUNT_MASK 0xf1
#define DOT11_FMS_CNTR_SHIFT 0x3
-/* FMS request element */
+/** FMS request element */
BWL_PRE_PACKED_STRUCT struct dot11_fms_req_ie {
uint8 id;
uint8 len;
@@ -2183,7 +2197,7 @@
#define DOT11_RATE_ID_FIELD_RATETYPE_OFFSET 3
#define DOT11_RATE_ID_FIELD_LEN sizeof(dot11_rate_id_field_t)
-/* FMS request subelements */
+/** FMS request subelements */
BWL_PRE_PACKED_STRUCT struct dot11_fms_se {
uint8 sub_id;
uint8 len;
@@ -2198,7 +2212,7 @@
#define DOT11_FMS_REQ_SE_ID_FMS 1 /* FMS subelement */
#define DOT11_FMS_REQ_SE_ID_VS 221 /* Vendor Specific subelement */
-/* FMS response element */
+/** FMS response element */
BWL_PRE_PACKED_STRUCT struct dot11_fms_resp_ie {
uint8 id;
uint8 len;
@@ -2213,7 +2227,7 @@
#define DOT11_FMS_STATUS_SE_ID_TCLAS 2 /* TCLAS Status */
#define DOT11_FMS_STATUS_SE_ID_VS 221 /* Vendor Specific subelement */
-/* FMS status subelement */
+/** FMS status subelement */
BWL_PRE_PACKED_STRUCT struct dot11_fms_status_se {
uint8 sub_id;
uint8 len;
@@ -2228,7 +2242,7 @@
typedef struct dot11_fms_status_se dot11_fms_status_se_t;
#define DOT11_FMS_STATUS_SE_LEN 15 /* Fixed length */
-/* TCLAS status subelement */
+/** TCLAS status subelement */
BWL_PRE_PACKED_STRUCT struct dot11_tclas_status_se {
uint8 sub_id;
uint8 len;
@@ -2281,7 +2295,7 @@
/* ************* 802.11r related definitions. ************* */
-/* Over-the-DS Fast Transition Request frame header */
+/** Over-the-DS Fast Transition Request frame header */
BWL_PRE_PACKED_STRUCT struct dot11_ft_req {
uint8 category; /* category of action frame (6) */
uint8 action; /* action: ft req */
@@ -2292,7 +2306,7 @@
typedef struct dot11_ft_req dot11_ft_req_t;
#define DOT11_FT_REQ_FIXED_LEN 14
-/* Over-the-DS Fast Transition Response frame header */
+/** Over-the-DS Fast Transition Response frame header */
BWL_PRE_PACKED_STRUCT struct dot11_ft_res {
uint8 category; /* category of action frame (6) */
uint8 action; /* action: ft resp */
@@ -2304,7 +2318,7 @@
typedef struct dot11_ft_res dot11_ft_res_t;
#define DOT11_FT_RES_FIXED_LEN 16
-/* RDE RIC Data Element. */
+/** RDE RIC Data Element. */
BWL_PRE_PACKED_STRUCT struct dot11_rde_ie {
uint8 id; /* 11r, DOT11_MNG_RDE_ID */
uint8 length;
@@ -2376,7 +2390,7 @@
#define DOT11_RM_ACTION_NR_REQ 4 /* Neighbor report request */
#define DOT11_RM_ACTION_NR_REP 5 /* Neighbor report response */
-/* Generic radio measurement action frame header */
+/** Generic radio measurement action frame header */
BWL_PRE_PACKED_STRUCT struct dot11_rm_action {
uint8 category; /* category of action frame (5) */
uint8 action; /* radio measurement action */
@@ -2472,7 +2486,7 @@
/* Sub-element IDs for Frame Report */
#define DOT11_RMREP_FRAME_COUNT_REPORT 1
-/* Channel load request */
+/** Channel load request */
BWL_PRE_PACKED_STRUCT struct dot11_rmreq_chanload {
uint8 id;
uint8 len;
@@ -2487,7 +2501,7 @@
typedef struct dot11_rmreq_chanload dot11_rmreq_chanload_t;
#define DOT11_RMREQ_CHANLOAD_LEN 11
-/* Channel load report */
+/** Channel load report */
BWL_PRE_PACKED_STRUCT struct dot11_rmrep_chanload {
uint8 reg;
uint8 channel;
@@ -2498,7 +2512,7 @@
typedef struct dot11_rmrep_chanload dot11_rmrep_chanload_t;
#define DOT11_RMREP_CHANLOAD_LEN 13
-/* Noise histogram request */
+/** Noise histogram request */
BWL_PRE_PACKED_STRUCT struct dot11_rmreq_noise {
uint8 id;
uint8 len;
@@ -2513,7 +2527,7 @@
typedef struct dot11_rmreq_noise dot11_rmreq_noise_t;
#define DOT11_RMREQ_NOISE_LEN 11
-/* Noise histogram report */
+/** Noise histogram report */
BWL_PRE_PACKED_STRUCT struct dot11_rmrep_noise {
uint8 reg;
uint8 channel;
@@ -2536,7 +2550,7 @@
typedef struct dot11_rmrep_noise dot11_rmrep_noise_t;
#define DOT11_RMREP_NOISE_LEN 25
-/* Frame request */
+/** Frame request */
BWL_PRE_PACKED_STRUCT struct dot11_rmreq_frame {
uint8 id;
uint8 len;
@@ -2553,7 +2567,7 @@
typedef struct dot11_rmreq_frame dot11_rmreq_frame_t;
#define DOT11_RMREQ_FRAME_LEN 18
-/* Frame report */
+/** Frame report */
BWL_PRE_PACKED_STRUCT struct dot11_rmrep_frame {
uint8 reg;
uint8 channel;
@@ -2563,7 +2577,7 @@
typedef struct dot11_rmrep_frame dot11_rmrep_frame_t;
#define DOT11_RMREP_FRAME_LEN 12
-/* Frame report entry */
+/** Frame report entry */
BWL_PRE_PACKED_STRUCT struct dot11_rmrep_frmentry {
struct ether_addr ta;
struct ether_addr bssid;
@@ -2577,7 +2591,7 @@
typedef struct dot11_rmrep_frmentry dot11_rmrep_frmentry_t;
#define DOT11_RMREP_FRMENTRY_LEN 19
-/* STA statistics request */
+/** STA statistics request */
BWL_PRE_PACKED_STRUCT struct dot11_rmreq_stat {
uint8 id;
uint8 len;
@@ -2592,14 +2606,14 @@
typedef struct dot11_rmreq_stat dot11_rmreq_stat_t;
#define DOT11_RMREQ_STAT_LEN 16
-/* STA statistics report */
+/** STA statistics report */
BWL_PRE_PACKED_STRUCT struct dot11_rmrep_stat {
uint16 duration;
uint8 group_id;
} BWL_POST_PACKED_STRUCT;
typedef struct dot11_rmrep_stat dot11_rmrep_stat_t;
-/* Transmit stream/category measurement request */
+/** Transmit stream/category measurement request */
BWL_PRE_PACKED_STRUCT struct dot11_rmreq_tx_stream {
uint8 id;
uint8 len;
@@ -2614,7 +2628,7 @@
} BWL_POST_PACKED_STRUCT;
typedef struct dot11_rmreq_tx_stream dot11_rmreq_tx_stream_t;
-/* Transmit stream/category measurement report */
+/** Transmit stream/category measurement report */
BWL_PRE_PACKED_STRUCT struct dot11_rmrep_tx_stream {
uint32 starttime[2];
uint16 duration;
@@ -2638,7 +2652,7 @@
} BWL_POST_PACKED_STRUCT;
typedef struct dot11_rmrep_tx_stream dot11_rmrep_tx_stream_t;
-/* Measurement pause request */
+/** Measurement pause request */
BWL_PRE_PACKED_STRUCT struct dot11_rmreq_pause_time {
uint8 id;
uint8 len;
@@ -2657,7 +2671,7 @@
#define DOT11_NGBR_BSS_TERM_DUR_SE_ID 4
#define DOT11_NGBR_BEARING_SE_ID 5
-/* Neighbor Report, BSS Transition Candidate Preference subelement */
+/** Neighbor Report, BSS Transition Candidate Preference subelement */
BWL_PRE_PACKED_STRUCT struct dot11_ngbr_bsstrans_pref_se {
uint8 sub_id;
uint8 len;
@@ -2666,7 +2680,7 @@
typedef struct dot11_ngbr_bsstrans_pref_se dot11_ngbr_bsstrans_pref_se_t;
#define DOT11_NGBR_BSSTRANS_PREF_SE_LEN 1
-/* Neighbor Report, BSS Termination Duration subelement */
+/** Neighbor Report, BSS Termination Duration subelement */
BWL_PRE_PACKED_STRUCT struct dot11_ngbr_bss_term_dur_se {
uint8 sub_id;
uint8 len;
@@ -2691,7 +2705,7 @@
#define DOT11_NGBR_BI_MOBILITY 0x0400
#define DOT11_NGBR_BI_HT 0x0800
-/* Neighbor Report element (11k & 11v) */
+/** Neighbor Report element (11k & 11v) */
BWL_PRE_PACKED_STRUCT struct dot11_neighbor_rep_ie {
uint8 id;
uint8 len;
@@ -2713,7 +2727,7 @@
#define DOT11_SCANTYPE_ACTIVE 0 /* d11 scan active */
#define DOT11_SCANTYPE_PASSIVE 1 /* d11 scan passive */
-/* Link Measurement */
+/** Link Measurement */
BWL_PRE_PACKED_STRUCT struct dot11_lmreq {
uint8 category; /* category of action frame (5) */
uint8 action; /* radio measurement action */
@@ -2863,7 +2877,7 @@
#define VHT_N_TAIL 6 /* tail bits per BCC encoder */
-/* dot11Counters Table - 802.11 spec., Annex D */
+/** dot11Counters Table - 802.11 spec., Annex D */
typedef struct d11cnt {
uint32 txfrag; /* dot11TransmittedFragmentCount */
uint32 txmulti; /* dot11MulticastTransmittedFrameCount */
@@ -2899,7 +2913,7 @@
#define BRCM_OUI "\x00\x10\x18" /* Broadcom OUI */
-/* BRCM info element */
+/** BRCM info element */
BWL_PRE_PACKED_STRUCT struct brcm_ie {
uint8 id; /* IE ID, 221, DOT11_MNG_PROPR_ID */
uint8 len; /* IE length */
@@ -2916,8 +2930,19 @@
#define BRCM_IE_LEGACY_AES_VER 1 /* BRCM IE legacy AES version */
/* brcm_ie flags */
+#define BRF_ABCAP 0x1 /* afterburner is obsolete, defined for backward compat */
+#define BRF_ABRQRD 0x2 /* afterburner is obsolete, defined for backward compat */
#define BRF_LZWDS 0x4 /* lazy wds enabled */
#define BRF_BLOCKACK 0x8 /* BlockACK capable */
+#define BRF_ABCOUNTER_MASK 0xf0 /* afterburner is obsolete, defined for backward compat */
+#define BRF_PROP_11N_MCS 0x10 /* re-use afterburner bit */
+
+/**
+ * Support for Broadcom proprietary HT MCS rates. Re-uses afterburner bits since afterburner is not
+ * used anymore. Checks for BRF_ABCAP to stay compliant with 'old' images in the field.
+ */
+#define GET_BRF_PROP_11N_MCS(brcm_ie) \
+ (!((brcm_ie)->flags & BRF_ABCAP) && ((brcm_ie)->flags & BRF_PROP_11N_MCS))
/* brcm_ie flags1 */
#define BRF1_AMSDU 0x1 /* A-MSDU capable */
@@ -2928,7 +2953,7 @@
#define BRF1_SOFTAP 0x40 /* Configure as Broadcom SOFTAP */
#define BRF1_DWDS 0x80 /* DWDS capable */
-/* Vendor IE structure */
+/** Vendor IE structure */
BWL_PRE_PACKED_STRUCT struct vndr_ie {
uchar id;
uchar len;
@@ -2943,12 +2968,12 @@
#define VNDR_IE_MAX_LEN 255 /* vendor IE max length, without ID and len */
-/* BRCM PROP DEVICE PRIMARY MAC ADDRESS IE */
+/** BRCM PROP DEVICE PRIMARY MAC ADDRESS IE */
BWL_PRE_PACKED_STRUCT struct member_of_brcm_prop_ie {
uchar id;
uchar len;
uchar oui[3];
- uint8 type; /* type inidicates what follows */
+ uint8 type; /* type indicates what follows */
struct ether_addr ea; /* Device Primary MAC Address */
} BWL_POST_PACKED_STRUCT;
typedef struct member_of_brcm_prop_ie member_of_brcm_prop_ie_t;
@@ -2957,12 +2982,12 @@
#define MEMBER_OF_BRCM_PROP_IE_HDRLEN (sizeof(member_of_brcm_prop_ie_t))
#define MEMBER_OF_BRCM_PROP_IE_TYPE 54
-/* BRCM Reliable Multicast IE */
+/** BRCM Reliable Multicast IE */
BWL_PRE_PACKED_STRUCT struct relmcast_brcm_prop_ie {
uint8 id;
uint8 len;
uint8 oui[3];
- uint8 type; /* type inidicates what follows */
+ uint8 type; /* type indicates what follows */
struct ether_addr ea; /* The ack sender's MAC Address */
struct ether_addr mcast_ea; /* The multicast MAC address */
uint8 updtmo; /* time interval (seconds) for client to send null packet to report its rssi */
@@ -3002,7 +3027,7 @@
uint8 id; /* IE ID, 221, DOT11_MNG_PROPR_ID */
uint8 len; /* IE length */
uint8 oui[3];
- uint8 type; /* type inidicates what follows */
+ uint8 type; /* type indicates what follows */
ht_cap_ie_t cap_ie;
} BWL_POST_PACKED_STRUCT;
typedef struct ht_prop_cap_ie ht_prop_cap_ie_t;
@@ -3039,8 +3064,8 @@
#define HT_CAP_TXBF_CAP_IMPLICIT_TXBF_RX 0x1
-#define HT_CAP_TXBF_CAP_NDP_TX 0x8
-#define HT_CAP_TXBF_CAP_NDP_RX 0x10
+#define HT_CAP_TXBF_CAP_NDP_RX 0x8
+#define HT_CAP_TXBF_CAP_NDP_TX 0x10
#define HT_CAP_TXBF_CAP_EXPLICIT_CSI 0x100
#define HT_CAP_TXBF_CAP_EXPLICIT_NC_STEERING 0x200
#define HT_CAP_TXBF_CAP_EXPLICIT_C_STEERING 0x400
@@ -3115,6 +3140,7 @@
#define HT_CAP_EXT_HTC 0x0400
#define HT_CAP_EXT_RD_RESP 0x0800
+/** 'ht_add' is called 'HT Operation' information element in the 802.11 standard */
BWL_PRE_PACKED_STRUCT struct ht_add_ie {
uint8 ctl_ch; /* control channel number */
uint8 byte1; /* ext ch,rec. ch. width, RIFS support */
@@ -3229,7 +3255,7 @@
/* ************* VHT definitions. ************* */
-/*
+/**
 * VHT Capabilities IE (sec 8.4.2.160)
*/
@@ -3314,14 +3340,14 @@
(mcs_map == 0x1ff) ? VHT_CAP_MCS_MAP_0_8 : \
(mcs_map == 0x3ff) ? VHT_CAP_MCS_MAP_0_9 : VHT_CAP_MCS_MAP_NONE)
-/* VHT Capabilities Supported Channel Width */
+/** VHT Capabilities Supported Channel Width */
typedef enum vht_cap_chan_width {
VHT_CAP_CHAN_WIDTH_SUPPORT_MANDATORY = 0x00,
VHT_CAP_CHAN_WIDTH_SUPPORT_160 = 0x04,
VHT_CAP_CHAN_WIDTH_SUPPORT_160_8080 = 0x08
} vht_cap_chan_width_t;
-/* VHT Capabilities Supported max MPDU LEN (sec 8.4.2.160.2) */
+/** VHT Capabilities Supported max MPDU LEN (sec 8.4.2.160.2) */
typedef enum vht_cap_max_mpdu_len {
VHT_CAP_MPDU_MAX_4K = 0x00,
VHT_CAP_MPDU_MAX_8K = 0x01,
@@ -3334,7 +3360,7 @@
#define VHT_MPDU_LIMIT_11K 11454
-/*
+/**
* VHT Operation IE (sec 8.4.2.161)
*/
@@ -3358,7 +3384,7 @@
/* AID length */
#define AID_IE_LEN 2
-/*
+/**
* BRCM vht features IE header
 * The header is the fixed part of the IE
* On the 5GHz band this is the entire IE,
@@ -3432,6 +3458,8 @@
#define WFA_OUI_TYPE_WFD 10
#endif /* WTDLS */
#define WFA_OUI_TYPE_HS20 0x10
+#define WFA_OUI_TYPE_OSEN 0x12
+#define WFA_OUI_TYPE_NAN 0x13
/* RSN authenticated key management suite */
#define RSN_AKM_NONE 0 /* None (IBSS) */
@@ -3443,6 +3471,9 @@
#define RSN_AKM_MFP_PSK 6 /* SHA256 key derivation, using Pre-shared Key */
#define RSN_AKM_TPK 7 /* TPK(TDLS Peer Key) handshake */
+/* OSEN authenticated key management suite */
+#define OSEN_AKM_UNSPECIFIED RSN_AKM_UNSPECIFIED /* Over 802.1x */
+
/* Key related defines */
#define DOT11_MAX_DEFAULT_KEYS 4 /* number of default keys */
#define DOT11_MAX_IGTK_KEYS 2
@@ -3482,7 +3513,7 @@
/* 802.11r protocol definitions */
-/* Mobility Domain IE */
+/** Mobility Domain IE */
BWL_PRE_PACKED_STRUCT struct dot11_mdid_ie {
uint8 id;
uint8 len;
@@ -3494,7 +3525,7 @@
#define FBT_MDID_CAP_OVERDS 0x01 /* Fast Bss transition over the DS support */
#define FBT_MDID_CAP_RRP 0x02 /* Resource request protocol support */
-/* Fast Bss Transition IE */
+/** Fast Bss Transition IE */
BWL_PRE_PACKED_STRUCT struct dot11_ft_ie {
uint8 id;
uint8 len;
@@ -3517,7 +3548,7 @@
} BWL_POST_PACKED_STRUCT;
typedef struct dot11_timeout_ie dot11_timeout_ie_t;
-/* GTK ie */
+/** GTK ie */
BWL_PRE_PACKED_STRUCT struct dot11_gtk_ie {
uint8 id;
uint8 len;
@@ -3528,7 +3559,7 @@
} BWL_POST_PACKED_STRUCT;
typedef struct dot11_gtk_ie dot11_gtk_ie_t;
-/* Management MIC ie */
+/** Management MIC ie */
BWL_PRE_PACKED_STRUCT struct mmic_ie {
uint8 id; /* IE ID: DOT11_MNG_MMIE_ID */
uint8 len; /* IE length */
@@ -3553,7 +3584,7 @@
#define WMM_OUI_SUBTYPE_PARAMETER 1
#define WMM_PARAMETER_IE_LEN 24
-/* Link Identifier Element */
+/** Link Identifier Element */
BWL_PRE_PACKED_STRUCT struct link_id_ie {
uint8 id;
uint8 len;
@@ -3564,7 +3595,7 @@
typedef struct link_id_ie link_id_ie_t;
#define TDLS_LINK_ID_IE_LEN 18
-/* Link Wakeup Schedule Element */
+/** Link Wakeup Schedule Element */
BWL_PRE_PACKED_STRUCT struct wakeup_sch_ie {
uint8 id;
uint8 len;
@@ -3577,7 +3608,7 @@
typedef struct wakeup_sch_ie wakeup_sch_ie_t;
#define TDLS_WAKEUP_SCH_IE_LEN 18
-/* Channel Switch Timing Element */
+/** Channel Switch Timing Element */
BWL_PRE_PACKED_STRUCT struct channel_switch_timing_ie {
uint8 id;
uint8 len;
@@ -3587,7 +3618,7 @@
typedef struct channel_switch_timing_ie channel_switch_timing_ie_t;
#define TDLS_CHANNEL_SWITCH_TIMING_IE_LEN 4
-/* PTI Control Element */
+/** PTI Control Element */
BWL_PRE_PACKED_STRUCT struct pti_control_ie {
uint8 id;
uint8 len;
@@ -3597,7 +3628,7 @@
typedef struct pti_control_ie pti_control_ie_t;
#define TDLS_PTI_CONTROL_IE_LEN 3
-/* PU Buffer Status Element */
+/** PU Buffer Status Element */
BWL_PRE_PACKED_STRUCT struct pu_buffer_status_ie {
uint8 id;
uint8 len;
@@ -3702,6 +3733,7 @@
#define VENUE_OUTDOOR 11
/* 802.11u network authentication type indicator */
+#define NATI_UNSPECIFIED -1
#define NATI_ACCEPTANCE_OF_TERMS_CONDITIONS 0
#define NATI_ONLINE_ENROLLMENT_SUPPORTED 1
#define NATI_HTTP_HTTPS_REDIRECTION 2
@@ -3732,9 +3764,12 @@
/* 802.11u IANA EAP method type numbers */
#define REALM_EAP_TLS 13
+#define REALM_EAP_LEAP 17
#define REALM_EAP_SIM 18
#define REALM_EAP_TTLS 21
#define REALM_EAP_AKA 23
+#define REALM_EAP_PEAP 25
+#define REALM_EAP_FAST 43
#define REALM_EAP_PSK 47
#define REALM_EAP_AKAP 50
#define REALM_EAP_EXPANDED 254
@@ -3749,6 +3784,7 @@
#define REALM_VENDOR_SPECIFIC_EAP 221
/* 802.11u non-EAP inner authentication type */
+#define REALM_RESERVED_AUTH 0
#define REALM_PAP 1
#define REALM_CHAP 2
#define REALM_MSCHAP 3
@@ -3763,12 +3799,14 @@
#define REALM_CERTIFICATE 6
#define REALM_USERNAME_PASSWORD 7
#define REALM_SERVER_SIDE 8
+#define REALM_RESERVED_CRED 9
+#define REALM_VENDOR_SPECIFIC_CRED 10
/* 802.11u 3GPP PLMN */
#define G3PP_GUD_VERSION 0
#define G3PP_PLMN_LIST_IE 0
-/* hotspot2.0 indication element (vendor specific) */
+/** hotspot2.0 indication element (vendor specific) */
BWL_PRE_PACKED_STRUCT struct hs20_ie {
uint8 oui[3];
uint8 type;
@@ -3777,7 +3815,7 @@
typedef struct hs20_ie hs20_ie_t;
#define HS20_IE_LEN 5 /* HS20 IE length */
-/* IEEE 802.11 Annex E */
+/** IEEE 802.11 Annex E */
typedef enum {
DOT11_2GHZ_20MHZ_CLASS_12 = 81, /* Ch 1-11 */
DOT11_5GHZ_20MHZ_CLASS_1 = 115, /* Ch 36-48 */
@@ -3797,6 +3835,12 @@
DOT11_2GHZ_40MHZ_CLASS_33 = 84, /* Ch 5-11, upper */
} dot11_op_class_t;
+/* QoS map */
+#define QOS_MAP_FIXED_LENGTH (8 * 2) /* DSCP ranges fixed with 8 entries */
+
+/* BCM proprietary IE type for AIBSS */
+#define BCM_AIBSS_IE_TYPE 56
+
/* This marks the end of a packed structure section. */
#include <packed_section_end.h>
diff --git a/drivers/net/wireless/bcmdhd/include/proto/802.11_bta.h b/drivers/net/wireless/bcmdhd/include/proto/802.11_bta.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/proto/802.11e.h b/drivers/net/wireless/bcmdhd/include/proto/802.11e.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/proto/802.1d.h b/drivers/net/wireless/bcmdhd/include/proto/802.1d.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/proto/802.3.h b/drivers/net/wireless/bcmdhd/include/proto/802.3.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/proto/bcmdhcp.h b/drivers/net/wireless/bcmdhd/include/proto/bcmdhcp.h
new file mode 100644
index 0000000..5a7695e
--- /dev/null
+++ b/drivers/net/wireless/bcmdhd/include/proto/bcmdhcp.h
@@ -0,0 +1,77 @@
+/*
+ * Copyright (C) 2014, Broadcom Corporation
+ * All Rights Reserved.
+ *
+ * This is UNPUBLISHED PROPRIETARY SOURCE CODE of Broadcom Corporation;
+ * the contents of this file may not be disclosed to third parties, copied
+ * or duplicated in any form, in whole or in part, without the prior
+ * written permission of Broadcom Corporation.
+ *
+ * Fundamental constants relating to DHCP Protocol
+ *
+ * $Id: bcmdhcp.h 382883 2013-02-04 23:26:09Z $
+ */
+
+#ifndef _bcmdhcp_h_
+#define _bcmdhcp_h_
+
+/* DHCP params */
+#define DHCP_TYPE_OFFSET 0 /* DHCP type (request|reply) offset */
+#define DHCP_TID_OFFSET 4 /* DHCP transaction id offset */
+#define DHCP_FLAGS_OFFSET 10 /* DHCP flags offset */
+#define DHCP_CIADDR_OFFSET 12 /* DHCP client IP address offset */
+#define DHCP_YIADDR_OFFSET 16 /* DHCP your IP address offset */
+#define DHCP_GIADDR_OFFSET 24 /* DHCP relay agent IP address offset */
+#define DHCP_CHADDR_OFFSET 28 /* DHCP client h/w address offset */
+#define DHCP_OPT_OFFSET 236 /* DHCP options offset */
+
+#define DHCP_OPT_MSGTYPE 53 /* DHCP message type */
+#define DHCP_OPT_MSGTYPE_REQ 3
+#define DHCP_OPT_MSGTYPE_ACK 5 /* DHCP message type - ACK */
+
+#define DHCP_OPT_CODE_OFFSET 0 /* Option identifier */
+#define DHCP_OPT_LEN_OFFSET 1 /* Option data length */
+#define DHCP_OPT_DATA_OFFSET 2 /* Option data */
+
+#define DHCP_OPT_CODE_CLIENTID 61 /* Option identifier */
+
+#define DHCP_TYPE_REQUEST 1 /* DHCP request (discover|request) */
+#define DHCP_TYPE_REPLY 2 /* DHCP reply (offer|ack) */
+
+#define DHCP_PORT_SERVER 67 /* DHCP server UDP port */
+#define DHCP_PORT_CLIENT 68 /* DHCP client UDP port */
+
+#define DHCP_FLAG_BCAST 0x8000 /* DHCP broadcast flag */
+
+#define DHCP_FLAGS_LEN 2 /* DHCP flags field length */
+
+#define DHCP6_TYPE_SOLICIT 1 /* DHCP6 solicit */
+#define DHCP6_TYPE_ADVERTISE 2 /* DHCP6 advertise */
+#define DHCP6_TYPE_REQUEST 3 /* DHCP6 request */
+#define DHCP6_TYPE_CONFIRM 4 /* DHCP6 confirm */
+#define DHCP6_TYPE_RENEW 5 /* DHCP6 renew */
+#define DHCP6_TYPE_REBIND 6 /* DHCP6 rebind */
+#define DHCP6_TYPE_REPLY 7 /* DHCP6 reply */
+#define DHCP6_TYPE_RELEASE 8 /* DHCP6 release */
+#define DHCP6_TYPE_DECLINE 9 /* DHCP6 decline */
+#define DHCP6_TYPE_RECONFIGURE 10 /* DHCP6 reconfigure */
+#define DHCP6_TYPE_INFOREQ 11 /* DHCP6 information request */
+#define DHCP6_TYPE_RELAYFWD 12 /* DHCP6 relay forward */
+#define DHCP6_TYPE_RELAYREPLY 13 /* DHCP6 relay reply */
+
+#define DHCP6_TYPE_OFFSET 0 /* DHCP6 type offset */
+
+#define DHCP6_MSG_OPT_OFFSET 4 /* Offset of options in client server messages */
+#define DHCP6_RELAY_OPT_OFFSET 34 /* Offset of options in relay messages */
+
+#define DHCP6_OPT_CODE_OFFSET 0 /* Option identifier */
+#define DHCP6_OPT_LEN_OFFSET 2 /* Option data length */
+#define DHCP6_OPT_DATA_OFFSET 4 /* Option data */
+
+#define DHCP6_OPT_CODE_CLIENTID 1 /* DHCP6 CLIENTID option */
+#define DHCP6_OPT_CODE_SERVERID 2 /* DHCP6 SERVERID option */
+
+#define DHCP6_PORT_SERVER 547 /* DHCP6 server UDP port */
+#define DHCP6_PORT_CLIENT 546 /* DHCP6 client UDP port */
+
+#endif /* #ifndef _bcmdhcp_h_ */
diff --git a/drivers/net/wireless/bcmdhd/include/proto/bcmeth.h b/drivers/net/wireless/bcmdhd/include/proto/bcmeth.h
index 50d9bdc..41c1b57 100755
--- a/drivers/net/wireless/bcmdhd/include/proto/bcmeth.h
+++ b/drivers/net/wireless/bcmdhd/include/proto/bcmeth.h
@@ -2,13 +2,13 @@
* Broadcom Ethernettype protocol definitions
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -16,12 +16,12 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: bcmeth.h 382882 2013-02-04 23:24:31Z $
+ * $Id: bcmeth.h 445746 2013-12-30 12:57:26Z $
*/
/*
@@ -89,8 +89,8 @@
* within BCMILCP_BCM_SUBTYPE_EVENT type messages
*/
/* #define BCMILCP_BCM_SUBTYPE_EAPOL 3 */
-#define BCMILCP_BCM_SUBTYPE_DPT 4
-
+#define BCMILCP_BCM_SUBTYPE_DPT 4
+#define BCMILCP_BCM_SUBTYPE_DNGLEVENT 5
#define BCMILCP_BCM_SUBTYPEHDR_MINLENGTH 8
#define BCMILCP_BCM_SUBTYPEHDR_VERSION 0
diff --git a/drivers/net/wireless/bcmdhd/include/proto/bcmevent.h b/drivers/net/wireless/bcmdhd/include/proto/bcmevent.h
old mode 100755
new mode 100644
index a459cad..56cf83b
--- a/drivers/net/wireless/bcmdhd/include/proto/bcmevent.h
+++ b/drivers/net/wireless/bcmdhd/include/proto/bcmevent.h
@@ -2,13 +2,13 @@
* Broadcom Event protocol definitions
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -16,14 +16,14 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
* Dependencies: proto/bcmeth.h
*
- * $Id: bcmevent.h 433217 2013-10-31 00:39:54Z $
+ * $Id: bcmevent.h 474305 2014-04-30 20:54:29Z $
*
*/
@@ -151,11 +151,7 @@
#define WLC_E_IF 54 /* I/F change (for dongle host notification) */
#define WLC_E_P2P_DISC_LISTEN_COMPLETE 55 /* listen state expires */
#define WLC_E_RSSI 56 /* indicate RSSI change based on configured levels */
-/* PFN best network batching event, conflict/share with WLC_E_PFN_SCAN_COMPLETE */
-#define WLC_E_PFN_BEST_BATCHING 57
-#define WLC_E_PFN_SCAN_COMPLETE 57 /* PFN completed scan of network list */
-/* PFN best network batching event, conflict/share with WLC_E_PFN_SCAN_COMPLETE */
-#define WLC_E_PFN_BEST_BATCHING 57
+#define WLC_E_PFN_BEST_BATCHING 57 /* PFN best network batching event */
#define WLC_E_EXTLOG_MSG 58
#define WLC_E_ACTION_FRAME 59 /* Action frame Rx */
#define WLC_E_ACTION_FRAME_COMPLETE 60 /* Action frame Tx complete */
@@ -200,11 +196,8 @@
#define WLC_E_SPEEDY_RECREATE_FAIL 93 /* fast assoc recreation failed */
#define WLC_E_NATIVE 94 /* port-specific event and payload (e.g. NDIS) */
#define WLC_E_PKTDELAY_IND 95 /* event when tx pkt delay suddenly jumps */
-#define WLC_E_AWDL_AW 96 /* AWDL AW period starts */
-#define WLC_E_AWDL_ROLE 97 /* AWDL Master/Slave/NE master role event */
-#define WLC_E_AWDL_EVENT 98 /* Generic AWDL event */
#define WLC_E_PSTA_PRIMARY_INTF_IND 99 /* psta primary interface indication */
-#define WLC_E_EVENT_100 100
+#define WLC_E_NAN 100 /* NAN event */
#define WLC_E_BEACON_FRAME_RX 101
#define WLC_E_SERVICE_FOUND 102 /* desired service found */
#define WLC_E_GAS_FRAGMENT_RX 103 /* GAS fragment received */
@@ -218,16 +211,7 @@
#define WLC_E_PROXD 109 /* Proximity Detection event */
#define WLC_E_IBSS_COALESCE 110 /* IBSS Coalescing */
#define WLC_E_AIBSS_TXFAIL 110 /* TXFAIL event for AIBSS, re using event 110 */
-#define WLC_E_AWDL_RX_PRB_RESP 111 /* AWDL RX Probe response */
-#define WLC_E_AWDL_RX_ACT_FRAME 112 /* AWDL RX Action Frames */
-#define WLC_E_AWDL_WOWL_NULLPKT 113 /* AWDL Wowl nulls */
-#define WLC_E_AWDL_PHYCAL_STATUS 114 /* AWDL Phycal status */
-#define WLC_E_AWDL_OOB_AF_STATUS 115 /* AWDL OOB AF status */
-#define WLC_E_AWDL_SCAN_STATUS 116 /* Interleaved Scan status */
-#define WLC_E_AWDL_AW_START 117 /* AWDL AW Start */
-#define WLC_E_AWDL_AW_END 118 /* AWDL AW End */
-#define WLC_E_AWDL_AW_EXT 119 /* AWDL AW Extensions */
-#define WLC_E_AWDL_PEER_CACHE_CONTROL 120
+#define WLC_E_BSS_LOAD 114 /* Inform host of beacon bss load */
#define WLC_E_CSA_START_IND 121
#define WLC_E_CSA_DONE_IND 122
#define WLC_E_CSA_FAILURE_IND 123
@@ -235,21 +219,22 @@
#define WLC_E_BSSID 125 /* to report change in BSSID while roaming */
#define WLC_E_TX_STAT_ERROR 126 /* tx error indication */
#define WLC_E_BCMC_CREDIT_SUPPORT 127 /* credit check for BCMC supported */
-#define WLC_E_LAST 128 /* highest val + 1 for range checking */
+#define WLC_E_BT_WIFI_HANDOVER_REQ 130 /* Handover Request Initiated */
+#define WLC_E_SPW_TXINHIBIT 131 /* Southpaw TxInhibit notification */
+#define WLC_E_FBT_AUTH_REQ_IND 132 /* FBT Authentication Request Indication */
+#define WLC_E_RSSI_LQM 133 /* Enhancement addition for WLC_E_RSSI */
+#define WLC_E_PFN_GSCAN_FULL_RESULT 134 /* Full probe/beacon (IEs etc) results */
+#define WLC_E_PFN_SWC 135 /* Significant change in rssi of bssids being tracked */
+#define WLC_E_PFN_SCAN_COMPLETE 138 /* PFN completed scan of network list */
+#define WLC_E_RMC_EVENT 139 /* RMC event */
+#define WLC_E_PFN_SSID_EXT 142 /* SSID EXT event */
+#define WLC_E_ROAM_EXP_EVENT 143 /* Expanded roam event */
+#define WLC_E_LAST 144 /* highest val + 1 for range checking */
-#if (WLC_E_LAST > 128)
-#error "WLC_E_LAST: Invalid value for last event; must be <= 128."
-#endif /* WLC_E_LAST */
+/* define an API for getting the string name of an event */
+extern const char *bcmevent_get_name(uint event_type);
-/* Table of event name strings for UIs and debugging dumps */
-typedef struct {
- uint event;
- const char *name;
-} bcmevent_name_t;
-
-extern const bcmevent_name_t bcmevent_names[];
-extern const int bcmevent_names_size;
/* Event status codes */
#define WLC_E_STATUS_SUCCESS 0 /* operation was successful */
@@ -268,6 +253,8 @@
#define WLC_E_STATUS_NOCHANS 13 /* no allowable channels to scan */
#define WLC_E_STATUS_CS_ABORT 15 /* abort channel select */
#define WLC_E_STATUS_ERROR 16 /* request failed due to error */
+#define WLC_E_STATUS_INVALID 0xff /* Invalid status code to init variables. */
+
/* roam reason codes */
#define WLC_E_REASON_INITIAL_ASSOC 0 /* initial assoc */
@@ -283,7 +270,9 @@
#define WLC_E_REASON_BETTER_AP 8 /* roamed due to finding better AP */
#define WLC_E_REASON_MINTXRATE 9 /* roamed because at mintxrate for too long */
#define WLC_E_REASON_TXFAIL 10 /* We can hear AP, but AP can't hear us */
-#define WLC_E_REASON_REQUESTED_ROAM 11 /* roamed due to BSS Mgmt Transition REQ by AP */
+/* retained to avoid precommit auto-merge errors; remove once all branches are synced */
+#define WLC_E_REASON_REQUESTED_ROAM 11
+#define WLC_E_REASON_BSSTRANS_REQ 11 /* roamed due to BSS Transition request by AP */
/* prune reason codes */
#define WLC_E_PRUNE_ENCR_MISMATCH 1 /* encryption mismatch */
@@ -325,12 +314,6 @@
* WLC_E_P2P_PROBREQ_MSG
* WLC_E_ACTION_FRAME_RX
*/
-#ifdef WLAWDL
-#define WLC_E_AWDL_SCAN_START 1 /* Scan start indication to host */
-#define WLC_E_AWDL_SCAN_DONE 0 /* Scan Done indication to host */
-
-
-#endif
typedef BWL_PRE_PACKED_STRUCT struct wl_event_rx_frame_data {
uint16 version;
uint16 channel; /* Matches chanspec_t format from bcmwifi_channels.h */
@@ -387,15 +370,11 @@
#define WLC_E_TDLS_PEER_CONNECTED 1
#define WLC_E_TDLS_PEER_DISCONNECTED 2
-#ifdef WLAWDL
-/* WLC_E_AWDL_EVENT subtypes */
+/* reason codes for WLC_E_RMC_EVENT event */
+#define WLC_E_REASON_RMC_NONE 0
+#define WLC_E_REASON_RMC_AR_LOST 1
+#define WLC_E_REASON_RMC_AR_NO_ACK 2
-/* WLC_E_AWDL_SCAN_STATUS status values */
-#define WLC_E_AWDL_SCAN_START 1 /* Scan start indication to host */
-#define WLC_E_AWDL_SCAN_DONE 0 /* Scan Done indication to host */
-#define WLC_E_AWDL_PHYCAL_START 1 /* Phy calibration start indication to host */
-#define WLC_E_AWDL_PHYCAL_DONE 0 /* Phy calibration done indication to host */
-#endif
/* GAS event data */
typedef BWL_PRE_PACKED_STRUCT struct wl_event_gas {
@@ -424,31 +403,53 @@
} BWL_POST_PACKED_STRUCT wl_event_sd_t;
/* Reason codes for WLC_E_PROXD */
-#define WLC_E_PROXD_FOUND 1 /* Found a proximity device */
-#define WLC_E_PROXD_GONE 2 /* Lost a proximity device */
+#define WLC_E_PROXD_FOUND 1 /* Found a proximity device */
+#define WLC_E_PROXD_GONE 2 /* Lost a proximity device */
+#define WLC_E_PROXD_START 3 /* used by: target */
+#define WLC_E_PROXD_STOP 4 /* used by: target */
+#define WLC_E_PROXD_COMPLETED 5 /* used by: initiator completed */
+#define WLC_E_PROXD_ERROR 6 /* used by both initiator and target */
+#define WLC_E_PROXD_COLLECT_START 7 /* used by: target & initiator */
+#define WLC_E_PROXD_COLLECT_STOP 8 /* used by: target */
+#define WLC_E_PROXD_COLLECT_COMPLETED 9 /* used by: initiator completed */
+#define WLC_E_PROXD_COLLECT_ERROR 10 /* used by both initiator and target */
+#define WLC_E_PROXD_NAN_EVENT 11 /* used by both initiator and target */
-/* WLC_E_AWDL_AW event data */
-typedef BWL_PRE_PACKED_STRUCT struct awdl_aws_event_data {
- uint32 fw_time; /* firmware PMU time */
- struct ether_addr current_master; /* Current master Mac addr */
- uint16 aw_counter; /* AW seq# */
- uint8 aw_ext_count; /* AW extension count */
- uint8 aw_role; /* AW role */
- uint8 flags; /* AW event flag */
- uint16 aw_chan;
- uint8 infra_rssi; /* rssi on the infra channel */
- uint32 infra_rxbcn_count; /* number of beacons received */
-} BWL_POST_PACKED_STRUCT awdl_aws_event_data_t;
+/* proxd_event data */
+typedef struct ftm_sample {
+ uint32 value; /* RTT in ns */
+ int8 rssi; /* RSSI */
+} ftm_sample_t;
-/* For awdl_aws_event_data_t.flags */
-#define AWDL_AW_LAST_EXT 0x01
+typedef BWL_PRE_PACKED_STRUCT struct proxd_event_data {
+ uint16 ver; /* version */
+ uint16 mode; /* mode: target/initiator */
+ uint16 method; /* method: rssi/TOF/AOA */
+ uint8 err_code; /* error classification */
+ uint8 TOF_type; /* one way or two way TOF */
+ uint8 OFDM_frame_type; /* legacy or VHT */
+ uint8 bandwidth; /* bandwidth: 20, 40, or 80 MHz */
+ struct ether_addr peer_mac; /* (e.g. for target: the initiator's MAC) */
+ uint32 distance; /* distance to target, in meters */
+ uint32 meanrtt; /* mean delta */
+ uint32 modertt; /* mode delta */
+ uint32 medianrtt; /* median RTT */
+ uint32 sdrtt; /* Standard deviation of RTT */
+ int gdcalcresult; /* software or hardware; somewhat redundant, but if */
+ /* frame type is VHT, then we should do it in hardware */
+ int16 avg_rssi; /* avg rssi across the ftm frames */
+ int16 validfrmcnt; /* Firmware's valid frame counts */
+ char *peer_router_info; /* Peer router information if available in TLV, */
+ /* We will add this field later */
+ int32 var1; /* average of group delay */
+ int32 var2; /* average of threshold crossing */
+ int32 var3; /* difference between group delay and threshold crossing */
+ /* raw Fine Time Measurements (ftm) data */
+ uint16 ftm_unit; /* ftm cnt resolution in picoseconds, 6250 ps default */
+ uint16 ftm_cnt; /* num of rtt measurements/length in the ftm buffer */
+ ftm_sample_t ftm_buff[1]; /* 1 ... ftm_cnt */
+} BWL_POST_PACKED_STRUCT wl_proxd_event_data_t;
-/* WLC_E_AWDL_OOB_AF_STATUS event data */
-typedef BWL_PRE_PACKED_STRUCT struct awdl_oob_af_status_data {
- uint32 tx_time_diff;
- uint16 pkt_tag;
- uint8 tx_chan;
-} BWL_POST_PACKED_STRUCT awdl_oob_af_status_data_t;
/* Video Traffic Interference Monitor Event */
#define INTFER_EVENT_VERSION 1
@@ -466,6 +467,27 @@
struct ether_addr prim_ea; /* primary intf ether addr */
} wl_psta_primary_intf_event_t;
+
+/* ********** NAN protocol events/subevents ********** */
+#define NAN_EVENT_BUFFER_SIZE 512 /* max size */
+/* nan application events to the host driver */
+enum nan_app_events {
+ WL_NAN_EVENT_START = 1, /* NAN cluster started */
+ WL_NAN_EVENT_JOIN = 2, /* Joined to a NAN cluster */
+ WL_NAN_EVENT_ROLE = 3, /* Role or State changed */
+ WL_NAN_EVENT_SCAN_COMPLETE = 4,
+ WL_NAN_EVENT_DISCOVERY_RESULT = 5,
+ WL_NAN_EVENT_REPLIED = 6,
+ WL_NAN_EVENT_TERMINATED = 7, /* the instance ID will be present in the ev data */
+ WL_NAN_EVENT_RECEIVE = 8,
+ WL_NAN_EVENT_STATUS_CHG = 9, /* generated on any change in nan_mac status */
+ WL_NAN_EVENT_MERGE = 10, /* Merged to a NAN cluster */
+ WL_NAN_EVENT_STOP = 11, /* NAN stopped */
+ WL_NAN_EVENT_INVALID = 12, /* delimiter for max value */
+};
+#define IS_NAN_EVT_ON(var, evt) ((var & (1 << (evt-1))) != 0)
+/* ******************* end of NAN section *************** */
+
/* This marks the end of a packed structure section. */
#include <packed_section_end.h>
diff --git a/drivers/net/wireless/bcmdhd/include/proto/bcmip.h b/drivers/net/wireless/bcmdhd/include/proto/bcmip.h
old mode 100755
new mode 100644
index 7549235..05813e0
--- a/drivers/net/wireless/bcmdhd/include/proto/bcmip.h
+++ b/drivers/net/wireless/bcmdhd/include/proto/bcmip.h
@@ -21,7 +21,7 @@
*
* Fundamental constants relating to IP Protocol
*
- * $Id: bcmip.h 457888 2014-02-25 03:34:39Z $
+ * $Id: bcmip.h 458522 2014-02-27 02:26:15Z $
*/
#ifndef _bcmip_h_
@@ -90,6 +90,15 @@
#define IPV4_TOS_THROUGHPUT 0x8 /* Best throughput requested */
#define IPV4_TOS_RELIABILITY 0x4 /* Most reliable delivery requested */
+#define IPV4_TOS_ROUTINE 0
+#define IPV4_TOS_PRIORITY 1
+#define IPV4_TOS_IMMEDIATE 2
+#define IPV4_TOS_FLASH 3
+#define IPV4_TOS_FLASHOVERRIDE 4
+#define IPV4_TOS_CRITICAL 5
+#define IPV4_TOS_INETWORK_CTRL 6
+#define IPV4_TOS_NETWORK_CTRL 7
+
#define IPV4_PROT(ipv4_body) (((uint8 *)(ipv4_body))[IPV4_PROT_OFFSET])
#define IPV4_FRAG_RESV 0x8000 /* Reserved */
@@ -152,6 +161,11 @@
#define IP_DSCP46(ip_body) (IP_TOS46(ip_body) >> IPV4_TOS_DSCP_SHIFT);
+/* IPV4 or IPV6 Protocol Classifier or 0 */
+#define IP_PROT46(ip_body) \
+ (IP_VER(ip_body) == IP_VER_4 ? IPV4_PROT(ip_body) : \
+ IP_VER(ip_body) == IP_VER_6 ? IPV6_PROT(ip_body) : 0)
+
/* IPV6 extension headers (options) */
#define IPV6_EXTHDR_HOP 0
#define IPV6_EXTHDR_ROUTING 43
diff --git a/drivers/net/wireless/bcmdhd/include/proto/bcmipv6.h b/drivers/net/wireless/bcmdhd/include/proto/bcmipv6.h
old mode 100755
new mode 100644
index fd2d6fa..e3351da
--- a/drivers/net/wireless/bcmdhd/include/proto/bcmipv6.h
+++ b/drivers/net/wireless/bcmdhd/include/proto/bcmipv6.h
@@ -21,7 +21,7 @@
*
* Fundamental constants relating to Neighbor Discovery Protocol
*
- * $Id: bcmipv6.h 399482 2013-04-30 09:24:37Z $
+ * $Id: bcmipv6.h 439574 2013-11-27 06:37:37Z $
*/
#ifndef _bcmipv6_h_
@@ -56,7 +56,8 @@
#define IPV6_FRAG_OFFS_SHIFT 3
/* For icmpv6 */
-#define ICMPV6_HEADER_TYPE 0x3A
+#define ICMPV6_HEADER_TYPE 0x3A
+#define ICMPV6_PKT_TYPE_RA 134
#define ICMPV6_PKT_TYPE_NS 135
#define ICMPV6_PKT_TYPE_NA 136
diff --git a/drivers/net/wireless/bcmdhd/include/proto/bcmtcp.h b/drivers/net/wireless/bcmdhd/include/proto/bcmtcp.h
old mode 100755
new mode 100644
index 6754f36..84ab805
--- a/drivers/net/wireless/bcmdhd/include/proto/bcmtcp.h
+++ b/drivers/net/wireless/bcmdhd/include/proto/bcmtcp.h
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: bcmtcp.h 457888 2014-02-25 03:34:39Z $
+ * $Id: bcmtcp.h 458522 2014-02-27 02:26:15Z $
*/
#ifndef _bcmtcp_h_
diff --git a/drivers/net/wireless/bcmdhd/include/proto/bcmudp.h b/drivers/net/wireless/bcmdhd/include/proto/bcmudp.h
new file mode 100644
index 0000000..32407f3
--- /dev/null
+++ b/drivers/net/wireless/bcmdhd/include/proto/bcmudp.h
@@ -0,0 +1,46 @@
+/*
+ * Copyright (C) 2014, Broadcom Corporation
+ * All Rights Reserved.
+ *
+ * This is UNPUBLISHED PROPRIETARY SOURCE CODE of Broadcom Corporation;
+ * the contents of this file may not be disclosed to third parties, copied
+ * or duplicated in any form, in whole or in part, without the prior
+ * written permission of Broadcom Corporation.
+ *
+ * Fundamental constants relating to UDP Protocol
+ *
+ * $Id: bcmudp.h 382882 2013-02-04 23:24:31Z $
+ */
+
+#ifndef _bcmudp_h_
+#define _bcmudp_h_
+
+#ifndef _TYPEDEFS_H_
+#include <typedefs.h>
+#endif
+
+/* This marks the start of a packed structure section. */
+#include <packed_section_start.h>
+
+
+/* UDP header */
+#define UDP_DEST_PORT_OFFSET 2 /* UDP dest port offset */
+#define UDP_LEN_OFFSET 4 /* UDP length offset */
+#define UDP_CHKSUM_OFFSET 6 /* UDP body checksum offset */
+
+#define UDP_HDR_LEN 8 /* UDP header length */
+#define UDP_PORT_LEN 2 /* UDP port length */
+
+/* These fields are stored in network order */
+BWL_PRE_PACKED_STRUCT struct bcmudp_hdr
+{
+ uint16 src_port; /* Source Port Address */
+ uint16 dst_port; /* Destination Port Address */
+ uint16 len; /* Number of bytes in datagram including header */
+ uint16 chksum; /* entire datagram checksum with pseudoheader */
+} BWL_POST_PACKED_STRUCT;
+
+/* This marks the end of a packed structure section. */
+#include <packed_section_end.h>
+
+#endif /* #ifndef _bcmudp_h_ */
diff --git a/drivers/net/wireless/bcmdhd/include/proto/bt_amp_hci.h b/drivers/net/wireless/bcmdhd/include/proto/bt_amp_hci.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/proto/dnglevent.h b/drivers/net/wireless/bcmdhd/include/proto/dnglevent.h
new file mode 100755
index 0000000..584e9d2
--- /dev/null
+++ b/drivers/net/wireless/bcmdhd/include/proto/dnglevent.h
@@ -0,0 +1,80 @@
+/*
+ * Broadcom Event protocol definitions
+ *
+ * $Copyright Open Broadcom Corporation$
+ *
+ * Dependencies: proto/bcmeth.h
+ *
+ * $Id: dnglevent.h $
+ *
+ */
+
+/*
+ * Broadcom dngl Ethernet Events protocol defines
+ *
+ */
+
+#ifndef _DNGLEVENT_H_
+#define _DNGLEVENT_H_
+
+#ifndef _TYPEDEFS_H_
+#include <typedefs.h>
+#endif
+#include <proto/bcmeth.h>
+
+/* This marks the start of a packed structure section. */
+#include <packed_section_start.h>
+#define BCM_DNGL_EVENT_MSG_VERSION 1
+#define DNGL_E_SOCRAM_IND 0x2
+typedef BWL_PRE_PACKED_STRUCT struct
+{
+ uint16 version; /* Current version is 1 */
+ uint16 reserved; /* reserved for any future extension */
+ uint16 event_type; /* DNGL_E_SOCRAM_IND */
+ uint16 datalen; /* Length of the event payload */
+} BWL_POST_PACKED_STRUCT bcm_dngl_event_msg_t;
+
+typedef BWL_PRE_PACKED_STRUCT struct bcm_dngl_event {
+ struct ether_header eth;
+ bcmeth_hdr_t bcm_hdr;
+ bcm_dngl_event_msg_t dngl_event;
+ /* data portion follows */
+} BWL_POST_PACKED_STRUCT bcm_dngl_event_t;
+
+
+/* SOCRAM_IND type tags */
+#define SOCRAM_IND_ASSRT_TAG 0x1
+#define SOCRAM_IND_TAG_HEALTH_CHECK 0x2
+typedef BWL_PRE_PACKED_STRUCT struct bcm_dngl_socramind {
+ uint16 tag; /* data tag */
+ uint16 length; /* data length */
+ uint8 value[1]; /* data value with variable length specified by length */
+} BWL_POST_PACKED_STRUCT bcm_dngl_socramind_t;
+
+/* Health check top level module tags */
+#define HEALTH_CHECK_TOP_LEVEL_MODULE_PCIEDEV_RTE 1
+typedef BWL_PRE_PACKED_STRUCT struct bcm_dngl_healthcheck {
+ uint16 top_module_tag; /* top level module tag */
+ uint16 top_module_len; /* Type of PCIE issue indication */
+ uint8 value[1]; /* data value with variable length specified by length */
+} BWL_POST_PACKED_STRUCT bcm_dngl_healthcheck_t;
+
+#define HEALTH_CHECK_PCIEDEV_VERSION 1
+#define HEALTH_CHECK_PCIEDEV_FLAG_IN_D3_SHIFT 0
+#define HEALTH_CHECK_PCIEDEV_FLAG_IN_D3_FLAG	(1 << HEALTH_CHECK_PCIEDEV_FLAG_IN_D3_SHIFT)
+/* PCIE Module TAGs */
+#define HEALTH_CHECK_PCIEDEV_INDUCED_IND 0x1
+#define HEALTH_CHECK_PCIEDEV_H2D_DMA_IND 0x2
+#define HEALTH_CHECK_PCIEDEV_D2H_DMA_IND 0x3
+typedef BWL_PRE_PACKED_STRUCT struct bcm_dngl_pcie_hc {
+ uint16 version; /* HEALTH_CHECK_PCIEDEV_VERSION */
+ uint16 reserved;
+ uint16 pcie_err_ind_type; /* PCIE Module TAGs */
+ uint16 pcie_flag;
+ uint32 pcie_control_reg;
+} BWL_POST_PACKED_STRUCT bcm_dngl_pcie_hc_t;
+
+/* This marks the end of a packed structure section. */
+#include <packed_section_end.h>
+
+#endif /* _DNGLEVENT_H_ */
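The SOCRAM indication payload above is a sequence of tag/length/value records (`bcm_dngl_socramind_t`). A minimal host-side sketch of walking such a TLV list follows; the types and the `count_socramind_records` helper are local stand-ins for illustration, not the driver's actual parser:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Local stand-in for the packed bcm_dngl_socramind_t header. */
struct socramind_hdr {
	uint16_t tag;    /* e.g. SOCRAM_IND_ASSRT_TAG (0x1) */
	uint16_t length; /* length of the value bytes that follow */
};

/* Count well-formed TLV records in buf[0..len); -1 if a record is
 * truncated. memcpy is used to avoid unaligned 16-bit loads. */
static int count_socramind_records(const uint8_t *buf, size_t len)
{
	int n = 0;
	size_t off = 0;

	while (off + sizeof(struct socramind_hdr) <= len) {
		struct socramind_hdr h;

		memcpy(&h, buf + off, sizeof(h));
		off += sizeof(h);
		if (h.length > len - off)
			return -1; /* truncated record */
		off += h.length;
		n++;
	}
	return n;
}
```

A real parser would additionally dispatch on `tag` (assert dump vs. health check) before interpreting the value bytes.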
diff --git a/drivers/net/wireless/bcmdhd/include/proto/eapol.h b/drivers/net/wireless/bcmdhd/include/proto/eapol.h
old mode 100755
new mode 100644
index 3f283a6..d3bff33
--- a/drivers/net/wireless/bcmdhd/include/proto/eapol.h
+++ b/drivers/net/wireless/bcmdhd/include/proto/eapol.h
@@ -7,7 +7,7 @@
*
* Copyright Open Broadcom Corporation
*
- * $Id: eapol.h 452678 2014-01-31 19:16:29Z $
+ * $Id: eapol.h 452703 2014-01-31 20:33:06Z $
*/
#ifndef _eapol_h_
@@ -113,6 +113,7 @@
#define EAPOL_WPA_KEY_LEN 95
/* WPA/802.11i/WPA2 KEY KEY_INFO bits */
+#define WPA_KEY_DESC_OSEN 0x0
#define WPA_KEY_DESC_V1 0x01
#define WPA_KEY_DESC_V2 0x02
#define WPA_KEY_DESC_V3 0x03
diff --git a/drivers/net/wireless/bcmdhd/include/proto/ethernet.h b/drivers/net/wireless/bcmdhd/include/proto/ethernet.h
old mode 100755
new mode 100644
index 0760302..d3ef8c5
--- a/drivers/net/wireless/bcmdhd/include/proto/ethernet.h
+++ b/drivers/net/wireless/bcmdhd/include/proto/ethernet.h
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: ethernet.h 403353 2013-05-20 14:05:33Z $
+ * $Id: ethernet.h 473238 2014-04-28 19:14:56Z $
*/
#ifndef _NET_ETHERNET_H_ /* use native BSD ethernet.h when available */
@@ -181,6 +181,14 @@
((uint16 *)(d))[0] = ((uint16 *)(s))[0]; \
} while (0)
+/* Copy 14B ethernet header: 32bit aligned source and destination. */
+#define ehcopy32(s, d) \
+do { \
+ ((uint32 *)(d))[0] = ((const uint32 *)(s))[0]; \
+ ((uint32 *)(d))[1] = ((const uint32 *)(s))[1]; \
+ ((uint32 *)(d))[2] = ((const uint32 *)(s))[2]; \
+ ((uint16 *)(d))[6] = ((const uint16 *)(s))[6]; \
+} while (0)
static const struct ether_addr ether_bcast = {{255, 255, 255, 255, 255, 255}};
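The `ehcopy32()` macro added above copies the 14-byte Ethernet header as three 32-bit stores (bytes 0-11) plus one 16-bit store (bytes 12-13), which only works when source and destination are 32-bit aligned. A standalone restatement of the macro (using `<stdint.h>` names in place of the driver's `uint32`/`uint16` typedefs) that can be checked against `memcpy`:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Local restatement of ehcopy32() for illustration. Both s and d
 * must be 32-bit aligned, or this is undefined behaviour on
 * strict-alignment CPUs. */
#define ehcopy32(s, d) \
do { \
	((uint32_t *)(d))[0] = ((const uint32_t *)(s))[0]; \
	((uint32_t *)(d))[1] = ((const uint32_t *)(s))[1]; \
	((uint32_t *)(d))[2] = ((const uint32_t *)(s))[2]; \
	((uint16_t *)(d))[6] = ((const uint16_t *)(s))[6]; \
} while (0)
```

Declaring the buffers as `uint32_t` arrays (as in the check below) guarantees the alignment the macro assumes.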
diff --git a/drivers/net/wireless/bcmdhd/include/proto/p2p.h b/drivers/net/wireless/bcmdhd/include/proto/p2p.h
old mode 100755
new mode 100644
index 27c4794..be73c8b
--- a/drivers/net/wireless/bcmdhd/include/proto/p2p.h
+++ b/drivers/net/wireless/bcmdhd/include/proto/p2p.h
@@ -21,7 +21,7 @@
*
* Fundamental types and constants relating to WFA P2P (aka WiFi Direct)
*
- * $Id: p2p.h 444066 2013-12-18 12:49:24Z $
+ * $Id: p2p.h 457033 2014-02-20 19:39:45Z $
*/
#ifndef _P2P_H_
@@ -63,6 +63,9 @@
#define P2P_ATTR_LEN_LEN 2 /* length field length */
#define P2P_ATTR_HDR_LEN 3 /* ID + 2-byte length field spec 1.02 */
+#define P2P_WFDS_HASH_LEN 6
+#define P2P_WFDS_MAX_SVC_NAME_LEN 32
+
/* P2P IE Subelement IDs from WiFi P2P Technical Spec 1.00 */
#define P2P_SEID_STATUS 0 /* Status */
#define P2P_SEID_MINOR_RC 1 /* Minor Reason Code */
@@ -83,6 +86,15 @@
#define P2P_SEID_P2P_IF 16 /* P2P Interface */
#define P2P_SEID_OP_CHANNEL 17 /* Operating Channel */
#define P2P_SEID_INVITE_FLAGS 18 /* Invitation Flags */
+#define P2P_SEID_SERVICE_HASH 21 /* Service hash */
+#define P2P_SEID_SESSION 22 /* Session information */
+#define P2P_SEID_CONNECT_CAP 23 /* Connection capability */
+#define P2P_SEID_ADVERTISE_ID 24 /* Advertisement ID */
+#define P2P_SEID_ADVERTISE_SERVICE 25 /* Advertised service */
+#define P2P_SEID_SESSION_ID 26 /* Session ID */
+#define P2P_SEID_FEATURE_CAP 27 /* Feature capability */
+#define P2P_SEID_PERSISTENT_GROUP 28 /* Persistent group */
+#define P2P_SEID_SESSION_INFO_RESP 29 /* Session Information Response */
#define P2P_SEID_VNDR 221 /* Vendor-specific subelement */
#define P2P_SE_VS_ID_SERVICES 0x1b
@@ -204,6 +216,8 @@
/* Failed, incompatible provisioning method */
#define P2P_STATSE_FAIL_USER_REJECT 11
/* Failed, rejected by user */
+#define P2P_STATSE_SUCCESS_USER_ACCEPT 12
+ /* Success, accepted by user */
/* WiFi P2P IE attribute: Extended Listen Timing */
BWL_PRE_PACKED_STRUCT struct wifi_p2p_ext_se_s {
@@ -357,6 +371,93 @@
} BWL_POST_PACKED_STRUCT;
typedef struct wifi_p2p_invite_flags_se_s wifi_p2p_invite_flags_se_t;
+/* WiFi P2P IE subelement: Service Hash */
+BWL_PRE_PACKED_STRUCT struct wifi_p2p_serv_hash_se_s {
+ uint8 eltId; /* SE ID: P2P_SEID_SERVICE_HASH */
+	uint8	len[2];			/* SE length not including eltId, len fields;
+					 * a multiple of 6 bytes
+					 */
+	uint8	hash[1];		/* Variable length - SHA256 hashes of
+					 * service names (may carry more than one hash)
+					 */
+} BWL_POST_PACKED_STRUCT;
+typedef struct wifi_p2p_serv_hash_se_s wifi_p2p_serv_hash_se_t;
+
+/* WiFi P2P IE subelement: Service Instance Data */
+BWL_PRE_PACKED_STRUCT struct wifi_p2p_serv_inst_data_se_s {
+ uint8 eltId; /* SE ID: P2P_SEID_SESSION */
+ uint8 len[2]; /* SE length not including eltId, len */
+ uint8 ssn_info[1]; /* Variable length - Session information as specified by
+ * the service layer, type matches serv. name
+ */
+} BWL_POST_PACKED_STRUCT;
+typedef struct wifi_p2p_serv_inst_data_se_s wifi_p2p_serv_inst_data_se_t;
+
+
+/* WiFi P2P IE subelement: Connection capability */
+BWL_PRE_PACKED_STRUCT struct wifi_p2p_conn_cap_data_se_s {
+ uint8 eltId; /* SE ID: P2P_SEID_CONNECT_CAP */
+ uint8 len[2]; /* SE length not including eltId, len */
+ uint8 conn_cap; /* 1byte capability as specified by the
+ * service layer, valid bitmask/values
+ */
+} BWL_POST_PACKED_STRUCT;
+typedef struct wifi_p2p_conn_cap_data_se_s wifi_p2p_conn_cap_data_se_t;
+
+
+/* WiFi P2P IE subelement: Advertisement ID */
+BWL_PRE_PACKED_STRUCT struct wifi_p2p_advt_id_se_s {
+ uint8 eltId; /* SE ID: P2P_SEID_ADVERTISE_ID */
+ uint8 len[2]; /* SE length not including eltId, len fixed 4 Bytes */
+ uint8 advt_id[4]; /* 4byte Advertisement ID of the peer device sent in
+ * PROV Disc in Network byte order
+ */
+ uint8 advt_mac[6]; /* P2P device address of the service advertiser */
+} BWL_POST_PACKED_STRUCT;
+typedef struct wifi_p2p_advt_id_se_s wifi_p2p_advt_id_se_t;
+
+
+/* WiFi P2P IE subelement: Advertised Service Info */
+BWL_PRE_PACKED_STRUCT struct wifi_p2p_adv_serv_info_s {
+ uint8 advt_id[4]; /* SE Advertise ID for the service */
+ uint16 nw_cfg_method; /* SE Network Config method for the service */
+ uint8 serv_name_len; /* SE length of the service name */
+ uint8 serv_name[1]; /* Variable length service name field */
+} BWL_POST_PACKED_STRUCT;
+typedef struct wifi_p2p_adv_serv_info_s wifi_p2p_adv_serv_info_t;
+
+
+/* WiFi P2P IE subelement: Advertised Service */
+BWL_PRE_PACKED_STRUCT struct wifi_p2p_advt_serv_se_s {
+ uint8 eltId; /* SE ID: P2P_SEID_ADVERTISE_SERVICE */
+	uint8	len[2];			/* SE length not including eltId, len fields;
+					 * a multiple of the wifi_p2p_adv_serv_info_t entry length
+					 */
+	wifi_p2p_adv_serv_info_t	p_advt_serv_info[1]; /* Variable length -
+					 * multiple instances of
+					 * the advertised service info
+					 */
+} BWL_POST_PACKED_STRUCT;
+typedef struct wifi_p2p_advt_serv_se_s wifi_p2p_advt_serv_se_t;
+
+
+/* WiFi P2P IE subelement: Session ID */
+BWL_PRE_PACKED_STRUCT struct wifi_p2p_ssn_id_se_s {
+ uint8 eltId; /* SE ID: P2P_SEID_SESSION_ID */
+ uint8 len[2]; /* SE length not including eltId, len fixed 4 Bytes */
+ uint8 ssn_id[4]; /* 4byte Session ID of the peer device sent in
+ * PROV Disc in Network byte order
+ */
+ uint8 ssn_mac[6]; /* P2P device address of the seeker - session mac */
+} BWL_POST_PACKED_STRUCT;
+typedef struct wifi_p2p_ssn_id_se_s wifi_p2p_ssn_id_se_t;
+
+
+#define P2P_ADVT_SERV_SE_FIXED_LEN 3 /* Includes only the element ID and len */
+#define P2P_ADVT_SERV_INFO_FIXED_LEN	7	/* Per ADV Service Instance: advt_id +
+						 * nw_cfg_method + serv_name_len
+						 */
+
/* WiFi P2P Action Frame */
BWL_PRE_PACKED_STRUCT struct wifi_p2p_action_frame {
uint8 category; /* P2P_AF_CATEGORY */
@@ -483,6 +584,7 @@
SVC_RPOTYPE_BONJOUR = 1,
SVC_RPOTYPE_UPNP = 2,
SVC_RPOTYPE_WSD = 3,
+ SVC_RPOTYPE_WFDS = 11,
SVC_RPOTYPE_VENDOR = 255
} p2psd_svc_protype_t;
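The advertised-service subelement above packs a variable number of `wifi_p2p_adv_serv_info_t` entries back to back: each entry has a 7-byte fixed prefix (`P2P_ADVT_SERV_INFO_FIXED_LEN`: 4-byte `advt_id`, 2-byte `nw_cfg_method`, 1-byte `serv_name_len`) followed by `serv_name_len` name bytes. A minimal sketch of walking such a list (the helper name and local macro are hypothetical, not part of the header):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Mirrors P2P_ADVT_SERV_INFO_FIXED_LEN: advt_id(4) + nw_cfg_method(2)
 * + serv_name_len(1). */
#define ADV_SERV_INFO_FIXED_LEN 7

/* Count advertised-service entries packed in buf[0..len);
 * -1 if the list is malformed or truncated. */
static int count_adv_serv_entries(const uint8_t *buf, size_t len)
{
	int n = 0;
	size_t off = 0;

	while (off < len) {
		uint8_t name_len;

		if (len - off < ADV_SERV_INFO_FIXED_LEN)
			return -1;
		name_len = buf[off + 6]; /* serv_name_len byte */
		off += ADV_SERV_INFO_FIXED_LEN;
		if (name_len > len - off)
			return -1;
		off += (size_t)name_len;
		n++;
	}
	return n;
}
```

Bounds-checking each entry against the remaining subelement length before advancing is the key point: `serv_name_len` comes off the air and cannot be trusted.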
diff --git a/drivers/net/wireless/bcmdhd/include/proto/sdspi.h b/drivers/net/wireless/bcmdhd/include/proto/sdspi.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/proto/vlan.h b/drivers/net/wireless/bcmdhd/include/proto/vlan.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/proto/wpa.h b/drivers/net/wireless/bcmdhd/include/proto/wpa.h
old mode 100755
new mode 100644
index 6c39820..26fdb26
--- a/drivers/net/wireless/bcmdhd/include/proto/wpa.h
+++ b/drivers/net/wireless/bcmdhd/include/proto/wpa.h
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: wpa.h 384536 2013-02-12 04:13:09Z $
+ * $Id: wpa.h 450928 2014-01-23 14:13:38Z $
*/
#ifndef _proto_wpa_h_
@@ -81,6 +81,8 @@
#define WPA_RSN_IE_TAG_FIXED_LEN 2
typedef uint8 wpa_pmkid_t[WPA2_PMKID_LEN];
+#define WFA_OSEN_IE_FIXED_LEN 6
+
/* WPA suite/multicast suite */
typedef BWL_PRE_PACKED_STRUCT struct
{
diff --git a/drivers/net/wireless/bcmdhd/include/proto/wps.h b/drivers/net/wireless/bcmdhd/include/proto/wps.h
old mode 100755
new mode 100644
index 306d554..41424fa
--- a/drivers/net/wireless/bcmdhd/include/proto/wps.h
+++ b/drivers/net/wireless/bcmdhd/include/proto/wps.h
@@ -126,6 +126,7 @@
#define WPS_WFA_SUBID_NW_KEY_SHAREABLE 0x02
#define WPS_WFA_SUBID_REQ_TO_ENROLL 0x03
#define WPS_WFA_SUBID_SETTINGS_DELAY_TIME 0x04
+#define WPS_WFA_SUBID_REG_CFG_METHODS 0x05
/* WCN-NET Windows Rally Vertical Pairing Vendor Extensions */
@@ -174,6 +175,7 @@
#define WPS_WFA_SUBID_NW_KEY_SHAREABLE_S 1
#define WPS_WFA_SUBID_REQ_TO_ENROLL_S 1
#define WPS_WFA_SUBID_SETTINGS_DELAY_TIME_S 1
+#define WPS_WFA_SUBID_REG_CFG_METHODS_S 2
/* Association states */
#define WPS_ASSOC_NOT_ASSOCIATED 0
@@ -226,6 +228,8 @@
#define WPS_ERROR_MSG_TIMEOUT 16 /* Deprecated in WSC 2.0 */
#define WPS_ERROR_REG_SESSION_TIMEOUT 17 /* Deprecated in WSC 2.0 */
#define WPS_ERROR_DEV_PWD_AUTH_FAIL 18
+#define WPS_ERROR_60GHZ_NOT_SUPPORT 19
+#define WPS_ERROR_PKH_MISMATCH 20 /* Public Key Hash Mismatch */
/* Connection types */
#define WPS_CONNTYPE_ESS 0x01
@@ -238,6 +242,9 @@
#define WPS_DEVICEPWDID_REKEY 0x0003
#define WPS_DEVICEPWDID_PUSH_BTN 0x0004
#define WPS_DEVICEPWDID_REG_SPEC 0x0005
+#define WPS_DEVICEPWDID_IBSS 0x0006
+#define WPS_DEVICEPWDID_NFC_CHO 0x0007 /* NFC-Connection-Handover */
+#define WPS_DEVICEPWDID_WFDS 0x0008 /* Wi-Fi Direct Services Specification */
/* Encryption type */
#define WPS_ENCRTYPE_NONE 0x0001
diff --git a/drivers/net/wireless/bcmdhd/include/sbchipc.h b/drivers/net/wireless/bcmdhd/include/sbchipc.h
old mode 100755
new mode 100644
index c27df98..1fbeced
--- a/drivers/net/wireless/bcmdhd/include/sbchipc.h
+++ b/drivers/net/wireless/bcmdhd/include/sbchipc.h
@@ -5,7 +5,7 @@
* JTAG, 0/1/2 UARTs, clock frequency control, a watchdog interrupt timer,
* GPIO interface, extbus, and support for serial and parallel flashes.
*
- * $Id: sbchipc.h 433333 2013-10-31 10:34:27Z $
+ * $Id: sbchipc.h 474281 2014-04-30 18:24:55Z $
*
* Copyright (C) 1999-2014, Broadcom Corporation
*
@@ -31,7 +31,7 @@
#ifndef _SBCHIPC_H
#define _SBCHIPC_H
-#ifndef _LANGUAGE_ASSEMBLY
+#if !defined(_LANGUAGE_ASSEMBLY) && !defined(__ASSEMBLY__)
/* cpp contortions to concatenate w/arg prescan */
#ifndef PAD
@@ -40,6 +40,57 @@
#define PAD _XSTR(__LINE__)
#endif /* PAD */
+/**
+ * In chipcommon rev 49 the PMU registers have been moved from chipc to the pmu core if the
+ * 'AOBPresent' bit of 'CoreCapabilitiesExt' is set. When this bit is set, the traditional chipc-to-
+ * [pmu|gci|sreng] register interface is deprecated and removed; each of these register blocks is
+ * instead assigned its own dedicated address space and connected to the Always On
+ * Backplane via the APB interface.
+ */
+typedef volatile struct {
+ uint32 PAD[384];
+ uint32 pmucontrol; /* 0x600 */
+ uint32 pmucapabilities;
+ uint32 pmustatus;
+ uint32 res_state;
+ uint32 res_pending;
+ uint32 pmutimer;
+ uint32 min_res_mask;
+ uint32 max_res_mask;
+ uint32 res_table_sel;
+ uint32 res_dep_mask;
+ uint32 res_updn_timer;
+ uint32 res_timer;
+ uint32 clkstretch;
+ uint32 pmuwatchdog;
+ uint32 gpiosel; /* 0x638, rev >= 1 */
+ uint32 gpioenable; /* 0x63c, rev >= 1 */
+ uint32 res_req_timer_sel;
+ uint32 res_req_timer;
+ uint32 res_req_mask;
+ uint32 PAD;
+ uint32 chipcontrol_addr; /* 0x650 */
+ uint32 chipcontrol_data; /* 0x654 */
+ uint32 regcontrol_addr;
+ uint32 regcontrol_data;
+ uint32 pllcontrol_addr;
+ uint32 pllcontrol_data;
+ uint32 pmustrapopt; /* 0x668, corerev >= 28 */
+ uint32 pmu_xtalfreq; /* 0x66C, pmurev >= 10 */
+ uint32 retention_ctl; /* 0x670 */
+ uint32 PAD[3];
+ uint32 retention_grpidx; /* 0x680 */
+ uint32 retention_grpctl; /* 0x684 */
+ uint32 PAD[20];
+ uint32 pmucontrol_ext; /* 0x6d8 */
+ uint32 slowclkperiod; /* 0x6dc */
+ uint32 PAD[8];
+ uint32 pmuintmask0; /* 0x700 */
+ uint32 pmuintmask1; /* 0x704 */
+ uint32 PAD[14];
+ uint32 pmuintstatus; /* 0x740 */
+} pmuregs_t;
+
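The `pmuregs_t` layout above encodes absolute register offsets purely through `PAD` words: 384 leading pad words occupy 384 * 4 = 0x600 bytes, which is exactly what places `pmucontrol` at offset 0x600. A host-side sketch of the head of the struct (a local stand-in, not the driver's type) whose offsets can be verified with `offsetof`:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Local mirror of the start of pmuregs_t, for offset checking only. */
typedef struct {
	uint32_t pad[384];        /* 0x000 - 0x5ff */
	uint32_t pmucontrol;      /* 0x600 */
	uint32_t pmucapabilities; /* 0x604 */
	uint32_t pmustatus;       /* 0x608 */
	uint32_t res_state;       /* 0x60c */
} pmuregs_head_t;
```

The same pad-word arithmetic accounts for the later gaps, e.g. `PAD[20]` between `retention_grpctl` (0x684) and `pmucontrol_ext` (0x6d8).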
typedef struct eci_prerev35 {
uint32 eci_output;
uint32 eci_control;
@@ -217,13 +268,13 @@
uint32 sromdata;
uint32 PAD[1]; /* 0x19C */
/* NAND flash registers for BCM4706 (corerev = 31) */
- uint32 nflashctrl; /* 0x1a0 */
- uint32 nflashconf;
- uint32 nflashcoladdr;
- uint32 nflashrowaddr;
- uint32 nflashdata;
- uint32 nflashwaitcnt0; /* 0x1b4 */
- uint32 PAD[2];
+ uint32 nflashctrl; /* 0x1a0 */
+ uint32 nflashconf;
+ uint32 nflashcoladdr;
+ uint32 nflashrowaddr;
+ uint32 nflashdata;
+ uint32 nflashwaitcnt0; /* 0x1b4 */
+ uint32 PAD[2];
uint32 seci_uart_data; /* 0x1C0 */
uint32 seci_uart_bauddiv;
@@ -302,7 +353,15 @@
uint32 PAD[3];
uint32 retention_grpidx; /* 0x680 */
uint32 retention_grpctl; /* 0x684 */
- uint32 PAD[94];
+ uint32 PAD[20];
+ uint32 pmucontrol_ext; /* 0x6d8 */
+ uint32 slowclkperiod; /* 0x6dc */
+ uint32 PAD[8];
+ uint32 pmuintmask0; /* 0x700 */
+ uint32 pmuintmask1; /* 0x704 */
+ uint32 PAD[14];
+ uint32 pmuintstatus; /* 0x740 */
+ uint32 PAD[47];
uint16 sromotp[512]; /* 0x800 */
#ifdef NFLASH_SUPPORT
/* Nand flash MLC controller registers (corerev >= 38) */
@@ -392,7 +451,7 @@
uint32 gci_output[4]; /* D60 */
uint32 gci_control_0; /* 0xD70 */
uint32 gci_control_1; /* 0xD74 */
- uint32 gci_level_polreg; /* 0xD78 */
+ uint32 gci_intpolreg; /* 0xD78 */
uint32 gci_levelintmask; /* 0xD7C */
uint32 gci_eventintmask; /* 0xD80 */
uint32 PAD[3];
@@ -408,21 +467,39 @@
uint32 gci_secif0tx_offset; /* 0xDB8 */
uint32 gci_secif0rx_offset; /* 0xDBC */
uint32 gci_secif1tx_offset; /* 0xDC0 */
- uint32 PAD[3];
+ uint32 gci_rxfifo_common_ctrl; /* 0xDC4 */
+ uint32 gci_rxfifoctrl; /* 0xDC8 */
+ uint32 gci_uartreadid; /* DCC */
uint32 gci_uartescval; /* DD0 */
- uint32 PAD[3];
+ uint32 PAD;
+ uint32 gci_secififolevel; /* DD8 */
+ uint32 gci_seciuartdata; /* DDC */
uint32 gci_secibauddiv; /* DE0 */
uint32 gci_secifcr; /* DE4 */
uint32 gci_secilcr; /* DE8 */
uint32 gci_secimcr; /* DEC */
- uint32 PAD[2];
+ uint32 gci_secilsr; /* DF0 */
+ uint32 gci_secimsr; /* DF4 */
uint32 gci_baudadj; /* DF8 */
uint32 PAD;
uint32 gci_chipctrl; /* 0xE00 */
uint32 gci_chipsts; /* 0xE04 */
+ uint32 gci_gpioout; /* 0xE08 */
+ uint32 gci_gpioout_read; /* 0xE0C */
+ uint32 gci_mpwaketx; /* 0xE10 */
+ uint32 gci_mpwakedetect; /* 0xE14 */
+ uint32 gci_seciin_ctrl; /* 0xE18 */
+ uint32 gci_seciout_ctrl; /* 0xE1C */
+ uint32 gci_seciin_auxfifo_en; /* 0xE20 */
+ uint32 gci_seciout_txen_txbr; /* 0xE24 */
+ uint32 gci_seciin_rxbrstatus; /* 0xE28 */
+ uint32 gci_seciin_rxerrstatus; /* 0xE2C */
+ uint32 gci_seciin_fcstatus; /* 0xE30 */
+ uint32 gci_seciout_txstatus; /* 0xE34 */
+ uint32 gci_seciout_txbrstatus; /* 0xE38 */
} chipcregs_t;
-#endif /* _LANGUAGE_ASSEMBLY */
+#endif /* !_LANGUAGE_ASSEMBLY && !__ASSEMBLY__ */
#define CC_CHIPID 0
@@ -430,7 +507,9 @@
#define CC_CHIPST 0x2c
#define CC_EROMPTR 0xfc
-#define CC_OTPST 0x10
+#define CC_OTPST 0x10
+#define CC_INTSTATUS 0x20
+#define CC_INTMASK 0x24
#define CC_JTAGCMD 0x30
#define CC_JTAGIR 0x34
#define CC_JTAGDR 0x38
@@ -443,7 +522,10 @@
#define CC_GPIOCTRL 0x6c
#define CC_GPIOPOL 0x70
#define CC_GPIOINTM 0x74
+#define CC_GPIOEVENT 0x78
+#define CC_GPIOEVENTMASK 0x7c
#define CC_WATCHDOG 0x80
+#define CC_GPIOEVENTPOL 0x84
#define CC_CLKC_N 0x90
#define CC_CLKC_M0 0x94
#define CC_CLKC_M1 0x98
@@ -456,6 +538,7 @@
#define PMU_CAP 0x604
#define PMU_ST 0x608
#define PMU_RES_STATE 0x60c
+#define PMU_RES_PENDING 0x610
#define PMU_TIMER 0x614
#define PMU_MIN_RES_MASK 0x618
#define PMU_MAX_RES_MASK 0x61c
@@ -471,6 +554,10 @@
#define CC_GCI_CHIP_CTRL_REG 0xE00
#define CC_GCI_CC_OFFSET_2 2
#define CC_GCI_CC_OFFSET_5 5
+#define CC_SWD_CTRL 0x380
+#define CC_SWD_REQACK 0x384
+#define CC_SWD_DATA 0x388
+
#define CHIPCTRLREG0 0x0
#define CHIPCTRLREG1 0x1
@@ -482,9 +569,6 @@
#define REGCTRLREG4 0x4
#define REGCTRLREG5 0x5
#define REGCTRLREG6 0x6
-#define PMU_RES_STATE 0x60c
-#define PMU_RES_PENDING 0x610
-#define PMU_TIMER 0x614
#define MINRESMASKREG 0x618
#define MAXRESMASKREG 0x61c
#define CHIPCTRLADDR 0x650
@@ -507,6 +591,17 @@
#define REGCTRL6_PWM_AUTO_CTRL_MASK 0x3fff0000
#define REGCTRL6_PWM_AUTO_CTRL_SHIFT 16
+#ifdef SR_DEBUG
+#define SUBCORE_POWER_ON 0x0001
+#define PHY_POWER_ON 0x0010
+#define VDDM_POWER_ON 0x0100
+#define MEMLPLDO_POWER_ON 0x1000
+#define SUBCORE_POWER_ON_CHK 0x00040000
+#define PHY_POWER_ON_CHK 0x00080000
+#define VDDM_POWER_ON_CHK 0x00100000
+#define MEMLPLDO_POWER_ON_CHK 0x00200000
+#endif /* SR_DEBUG */
+
#ifdef NFLASH_SUPPORT
/* NAND flash support */
#define CC_NAND_REVISION 0xC00
@@ -562,7 +657,9 @@
/* capabilities extension */
#define CC_CAP_EXT_SECI_PRESENT 0x00000001 /* SECI present */
+#define CC_CAP_EXT_GSIO_PRESENT 0x00000002 /* GSIO present */
#define CC_CAP_EXT_GCI_PRESENT 0x00000004 /* GCI present */
+#define CC_CAP_EXT_AOB_PRESENT 0x00000040 /* AOB present */
/* WL Channel Info to BT via GCI - bits 40 - 47 */
#define GCI_WL_CHN_INFO_MASK (0xFF00)
@@ -669,11 +766,14 @@
#define OTPC1_TM_WR 0x84
#define OTPC1_TM_V1X 0x84
#define OTPC1_TM_R1X 0x4
+#define OTPC1_CLK_EN_MASK 0x00020000
+#define OTPC1_CLK_DIV_MASK 0x00FC0000
/* Fields in otpprog in rev >= 21 and HND OTP */
#define OTPP_COL_MASK 0x000000ff
#define OTPP_COL_SHIFT 0
#define OTPP_ROW_MASK 0x0000ff00
+#define OTPP_ROW_MASK9 0x0001ff00 /* for ccrev >= 49 */
#define OTPP_ROW_SHIFT 8
#define OTPP_OC_MASK 0x0f000000
#define OTPP_OC_SHIFT 24
@@ -771,6 +871,8 @@
#define JCTRL_EXT_EN 2 /* Enable external targets */
#define JCTRL_EN 1 /* Enable Jtag master */
+#define JCTRL_TAPSEL_BIT 0x00000008 /* JtagMasterCtrl tap_sel bit */
+
/* Fields in clkdiv */
#define CLKD_SFLASH 0x0f000000
#define CLKD_SFLASH_SHIFT 24
@@ -1177,6 +1279,7 @@
#define UART_IER_ERBFI 1 /* enable data available interrupt */
/* pmustatus */
+#define PST_SLOW_WR_PENDING 0x0400
#define PST_EXTLPOAVAIL 0x0100
#define PST_WDRESET 0x0080
#define PST_INTPEND 0x0040
@@ -1216,6 +1319,9 @@
#define PRRT_HT_REQ 0x2000
#define PRRT_HQ_REQ 0x4000
+/* bit 0 of the PMU interrupt vector is asserted if this mask is enabled */
+#define RSRC_INTR_MASK_TIMER_INT_0 1
+
/* PMU resource bit position */
#define PMURES_BIT(bit) (1 << (bit))
@@ -1258,12 +1364,6 @@
/* PMU chip control3 register */
#define PMU_CHIPCTL3 3
-
-/* PMU chip control6 register */
-#define PMU_CHIPCTL6 6
-#define PMU_CC6_ENABLE_CLKREQ_WAKEUP (1 << 4)
-#define PMU_CC6_ENABLE_PMU_WAKEUP_ALP (1 << 6)
-
#define PMU_CC3_ENABLE_SDIO_WAKEUP_SHIFT 19
#define PMU_CC3_ENABLE_RF_SHIFT 22
#define PMU_CC3_RF_DISABLE_IVALUE_SHIFT 23
@@ -1276,6 +1376,12 @@
#define PMU_CC6_ENABLE_CLKREQ_WAKEUP (1 << 4)
#define PMU_CC6_ENABLE_PMU_WAKEUP_ALP (1 << 6)
+/* PMU chip control7 register */
+#define PMU_CHIPCTL7 7
+#define PMU_CC7_ENABLE_L2REFCLKPAD_PWRDWN (1 << 25)
+#define PMU_CC7_ENABLE_MDIO_RESET_WAR (1 << 27)
+
+
/* PMU corerev and chip specific PLL controls.
* PMU<rev>_PLL<num>_XX where <rev> is PMU corerev and <num> is an arbitrary number
* to differentiate different PLLs controlled by the same PMU rev.
@@ -1333,6 +1439,7 @@
#define PMU1_PLL0_PC1_M4DIV_BY_9 9
#define PMU1_PLL0_PC1_M4DIV_BY_18 0x12
#define PMU1_PLL0_PC1_M4DIV_BY_36 0x24
+#define PMU1_PLL0_PC1_M4DIV_BY_60 0x3C
#define DOT11MAC_880MHZ_CLK_DIVISOR_SHIFT 8
#define DOT11MAC_880MHZ_CLK_DIVISOR_MASK (0xFF << DOT11MAC_880MHZ_CLK_DIVISOR_SHIFT)
@@ -1372,6 +1479,9 @@
#define PMU1_PLL0_PLLCTL6 6
#define PMU1_PLL0_PLLCTL7 7
+#define PMU1_PLL0_PLLCTL8 8
+#define PMU1_PLLCTL8_OPENLOOP_MASK 0x2
+
/* PMU rev 2 control words */
#define PMU2_PHY_PLL_PLLCTL 4
#define PMU2_SI_PLL_PLLCTL 10
@@ -2218,6 +2328,16 @@
#define PMU_VREG4_LPLDO1_0p95V 6
#define PMU_VREG4_LPLDO1_0p90V 7
+/* 4350/4345 VREG4 settings */
+#define PMU4350_VREG4_LPLDO1_1p10V 0
+#define PMU4350_VREG4_LPLDO1_1p15V 1
+#define PMU4350_VREG4_LPLDO1_1p21V 2
+#define PMU4350_VREG4_LPLDO1_1p24V 3
+#define PMU4350_VREG4_LPLDO1_0p90V 4
+#define PMU4350_VREG4_LPLDO1_0p96V 5
+#define PMU4350_VREG4_LPLDO1_1p01V 6
+#define PMU4350_VREG4_LPLDO1_1p04V 7
+
#define PMU_VREG4_LPLDO2_LVM_SHIFT 18
#define PMU_VREG4_LPLDO2_LVM_MASK 0x7
#define PMU_VREG4_LPLDO2_HVM_SHIFT 21
@@ -2513,6 +2633,8 @@
CST4360_RSRC_INIT_MODE_SHIFT)
#define CCTRL_4360_UART_SEL 0x2
+#define CST4360_RSRC_INIT_MODE(cs) ((cs & CST4360_RSRC_INIT_MODE_MASK) >> \
+ CST4360_RSRC_INIT_MODE_SHIFT)
/* 43602 PMU resources based on pmu_params.xls version v0.95 */
@@ -2550,8 +2672,15 @@
#define CST43602_BBPLL_LOCK (1<<11)
#define CST43602_RF_LDO_OUT_OK (1<<15) /* RF LDO output OK */
-#define PMU43602_CC2_FORCE_EXT_LPO (1 << 19) /* 1=ext LPO clock is the final LPO clock */
-#define PMU43602_CC2_XTAL32_SEL (1 << 30) /* 0=ext_clock, 1=xtal */
+#define PMU43602_CC1_GPIO12_OVRD (1<<28) /* GPIO12 override */
+
+#define PMU43602_CC2_PCIE_CLKREQ_L_WAKE_EN (1<<1) /* creates gated_pcie_wake, pmu_wakeup logic */
+#define PMU43602_CC2_PCIE_PERST_L_WAKE_EN (1<<2) /* creates gated_pcie_wake, pmu_wakeup logic */
+#define PMU43602_CC2_ENABLE_L2REFCLKPAD_PWRDWN (1<<3)
+#define PMU43602_CC2_PMU_WAKE_ALP_AVAIL_EN (1<<5) /* enable pmu_wakeup to request for ALP_AVAIL */
+#define PMU43602_CC2_PERST_L_EXTEND_EN (1<<9) /* extend perst_l until rsc PERST_OVR comes up */
+#define PMU43602_CC2_FORCE_EXT_LPO (1<<19) /* 1=ext LPO clock is the final LPO clock */
+#define PMU43602_CC2_XTAL32_SEL (1<<30) /* 0=ext_clock, 1=xtal */
#define CC_SR1_43602_SR_ASM_ADDR (0x0)
@@ -2561,7 +2690,96 @@
#define PMU43602_CC3_ARMCR4_DBG_CLK (1 << 29)
+/* 4349 related */
+#define RES4349_LPLDO_PU 0
+#define RES4349_BG_PU 1
+#define RES4349_PMU_SLEEP 2
+#define RES4349_PALDO3P3_PU 3
+#define RES4349_CBUCK_LPOM_PU 4
+#define RES4349_CBUCK_PFM_PU 5
+#define RES4349_COLD_START_WAIT 6
+#define RES4349_RSVD_7 7
+#define RES4349_LNLDO_PU 8
+#define RES4349_XTALLDO_PU 9
+#define RES4349_LDO3P3_PU 10
+#define RES4349_OTP_PU 11
+#define RES4349_XTAL_PU 12
+#define RES4349_SR_CLK_START 13
+#define RES4349_LQ_AVAIL 14
+#define RES4349_LQ_START 15
+#define RES4349_PERST_OVR 16
+#define RES4349_WL_CORE_RDY 17
+#define RES4349_ILP_REQ 18
+#define RES4349_ALP_AVAIL 19
+#define RES4349_MINI_PMU 20
+#define RES4349_RADIO_PU 21
+#define RES4349_SR_CLK_STABLE 22
+#define RES4349_SR_SAVE_RESTORE 23
+#define RES4349_SR_PHY_PWRSW 24
+#define RES4349_SR_VDDM_PWRSW 25
+#define RES4349_SR_SUBCORE_PWRSW 26
+#define RES4349_SR_SLEEP 27
+#define RES4349_HT_START 28
+#define RES4349_HT_AVAIL 29
+#define RES4349_MACPHY_CLKAVAIL 30
+#define CR4_4349_RAM_BASE (0x180000)
+#define CC4_4349_SR_ASM_ADDR (0x48)
+
+#define CST4349_CHIPMODE_SDIOD(cs) (((cs) & (1 << 6)) != 0) /* SDIO */
+#define CST4349_CHIPMODE_PCIE(cs) (((cs) & (1 << 7)) != 0) /* PCIE */
+
+#define CST4349_SPROM_PRESENT 0x00000010
+
+
+/* 43430 PMU resources based on pmu_params.xls */
+#define RES43430_LPLDO_PU 0
+#define RES43430_BG_PU 1
+#define RES43430_PMU_SLEEP 2
+#define RES43430_RSVD_3 3
+#define RES43430_CBUCK_LPOM_PU 4
+#define RES43430_CBUCK_PFM_PU 5
+#define RES43430_COLD_START_WAIT 6
+#define RES43430_RSVD_7 7
+#define RES43430_LNLDO_PU 8
+#define RES43430_RSVD_9 9
+#define RES43430_LDO3P3_PU 10
+#define RES43430_OTP_PU 11
+#define RES43430_XTAL_PU 12
+#define RES43430_SR_CLK_START 13
+#define RES43430_LQ_AVAIL 14
+#define RES43430_LQ_START 15
+#define RES43430_RSVD_16 16
+#define RES43430_WL_CORE_RDY 17
+#define RES43430_ILP_REQ 18
+#define RES43430_ALP_AVAIL 19
+#define RES43430_MINI_PMU 20
+#define RES43430_RADIO_PU 21
+#define RES43430_SR_CLK_STABLE 22
+#define RES43430_SR_SAVE_RESTORE 23
+#define RES43430_SR_PHY_PWRSW 24
+#define RES43430_SR_VDDM_PWRSW 25
+#define RES43430_SR_SUBCORE_PWRSW 26
+#define RES43430_SR_SLEEP 27
+#define RES43430_HT_START 28
+#define RES43430_HT_AVAIL 29
+#define RES43430_MACPHY_CLK_AVAIL 30
+
+/* 43430 chip status bits */
+#define CST43430_SDIO_MODE 0x00000001
+#define CST43430_GSPI_MODE 0x00000002
+#define CST43430_RSRC_INIT_MODE_0 0x00000080
+#define CST43430_RSRC_INIT_MODE_1 0x00000100
+#define CST43430_SEL0_SDIO 0x00000200
+#define CST43430_SEL1_SDIO 0x00000400
+#define CST43430_SEL2_SDIO 0x00000800
+#define CST43430_BBPLL_LOCKED 0x00001000
+#define CST43430_DBG_INST_DETECT 0x00004000
+#define CST43430_CLB2WL_BT_READY 0x00020000
+#define CST43430_JTAG_MODE 0x00100000
+#define CST43430_HOST_IFACE 0x00400000
+#define CST43430_TRIM_EN 0x00800000
+#define CST43430_DIN_PACKAGE_OPTION 0x10000000
/* defines to detect active host interface in use */
#define CHIP_HOSTIF_PCIEMODE 0x1
@@ -2619,6 +2837,9 @@
#define CCTRL1_4335_GPIO_SEL (1 << 0) /* 1=select GPIOs to be muxed out */
#define CCTRL1_4335_SDIO_HOST_WAKE (1 << 2) /* SDIO: 1=configure GPIO0 for host wake */
+/* 4335 Chip specific ChipControl2 register bits */
+#define CCTRL2_4335_AOSBLOCK (1 << 30)
+#define CCTRL2_4335_PMUWAKE (1 << 31)
#define PATCHTBL_SIZE (0x800)
#define CR4_4335_RAM_BASE (0x180000)
#define CR4_4345_RAM_BASE (0x1b0000)
@@ -2756,14 +2977,6 @@
#define CST4350_IFC_MODE(cs) ((cs & CST4350_HOST_IFC_MASK) >> CST4350_HOST_IFC_SHIFT)
-#define CST4350_CHIPMODE_SDIOD(cs) (CST4350_IFC_MODE(cs) == (CST4350_IFC_MODE_SDIOD))
-#define CST4350_CHIPMODE_USB20D(cs) ((CST4350_IFC_MODE(cs)) == (CST4350_IFC_MODE_USB20D))
-#define CST4350_CHIPMODE_HSIC20D(cs) (CST4350_IFC_MODE(cs) == (CST4350_IFC_MODE_HSIC20D))
-#define CST4350_CHIPMODE_HSIC30D(cs) (CST4350_IFC_MODE(cs) == (CST4350_IFC_MODE_HSIC30D))
-#define CST4350_CHIPMODE_USB30D(cs) (CST4350_IFC_MODE(cs) == (CST4350_IFC_MODE_USB30D))
-#define CST4350_CHIPMODE_USB30D_WL(cs) (CST4350_IFC_MODE(cs) == (CST4350_IFC_MODE_USB30D_WL))
-#define CST4350_CHIPMODE_PCIE(cs) (CST4350_IFC_MODE(cs) == (CST4350_IFC_MODE_PCIE))
-
/* 4350 PMU resources */
#define RES4350_LPLDO_PU 0
#define RES4350_PMU_BG_PU 1
@@ -2781,7 +2994,7 @@
#define RES4350_SR_CLK_START 13
#define RES4350_LQ_AVAIL 14
#define RES4350_LQ_START 15
-#define RES4350_RSVD_16 16
+#define RES4350_PERST_OVR 16
#define RES4350_WL_CORE_RDY 17
#define RES4350_ILP_REQ 18
#define RES4350_ALP_AVAIL 19
@@ -2840,6 +3053,8 @@
#define CC4350_PIN_GPIO_14 (14)
#define CC4350_PIN_GPIO_15 (15)
+#define CC4350_RSVD_16_SHIFT 16
+
#define CC2_4350_PHY_PWRSW_UPTIME_MASK (0xf << 0)
#define CC2_4350_PHY_PWRSW_UPTIME_SHIFT (0)
#define CC2_4350_VDDM_PWRSW_UPDELAY_MASK (0xf << 4)
@@ -2906,6 +3121,7 @@
/* Applies to 4335/4350/4345 */
#define CC4_SR_INIT_ADDR_MASK (0x3FF0000)
#define CC4_4350_SR_ASM_ADDR (0x30)
+#define CC4_4350_C0_SR_ASM_ADDR (0x0)
#define CC4_4335_SR_ASM_ADDR (0x48)
#define CC4_4345_SR_ASM_ADDR (0x48)
#define CC4_SR_INIT_ADDR_SHIFT (16)
@@ -2918,10 +3134,38 @@
#define VREG4_4350_MEMLPDO_PU_MASK (1 << 31)
#define VREG4_4350_MEMLPDO_PU_SHIFT 31
+#define VREG6_4350_SR_EXT_CLKDIR_MASK (1 << 20)
+#define VREG6_4350_SR_EXT_CLKDIR_SHIFT 20
+#define VREG6_4350_SR_EXT_CLKDIV_MASK (0x3 << 21)
+#define VREG6_4350_SR_EXT_CLKDIV_SHIFT 21
+#define VREG6_4350_SR_EXT_CLKEN_MASK (1 << 23)
+#define VREG6_4350_SR_EXT_CLKEN_SHIFT 23
+
+#define CC5_4350_PMU_EN_ASSERT_MASK (1 << 13)
+#define CC5_4350_PMU_EN_ASSERT_SHIFT (13)
+
#define CC6_4350_PCIE_CLKREQ_WAKEUP_MASK (1 << 4)
#define CC6_4350_PCIE_CLKREQ_WAKEUP_SHIFT (4)
#define CC6_4350_PMU_WAKEUP_ALPAVAIL_MASK (1 << 6)
#define CC6_4350_PMU_WAKEUP_ALPAVAIL_SHIFT (6)
+#define CC6_4350_PMU_EN_EXT_PERST_MASK (1 << 17)
+#define CC6_4350_PMU_EN_EXT_PERST_SHIFT (17)
+#define CC6_4350_PMU_EN_WAKEUP_MASK (1 << 18)
+#define CC6_4350_PMU_EN_WAKEUP_SHIFT (18)
+
+#define CC7_4350_PMU_EN_ASSERT_L2_MASK (1 << 26)
+#define CC7_4350_PMU_EN_ASSERT_L2_SHIFT (26)
+#define CC7_4350_PMU_EN_MDIO_MASK (1 << 27)
+#define CC7_4350_PMU_EN_MDIO_SHIFT (27)
+
+#define CC6_4345_PMU_EN_PERST_DEASSERT_MASK (1 << 13)
+#define CC6_4345_PMU_EN_PERST_DEASSERT_SHIFT	(13)
+#define CC6_4345_PMU_EN_L2_DEASSERT_MASK (1 << 14)
+#define CC6_4345_PMU_EN_L2_DEASSERT_SHIFT	(14)
+#define CC6_4345_PMU_EN_ASSERT_L2_MASK (1 << 15)
+#define CC6_4345_PMU_EN_ASSERT_L2_SHIFT (15)
+#define CC6_4345_PMU_EN_MDIO_MASK (1 << 24)
+#define CC6_4345_PMU_EN_MDIO_SHIFT (24)
/* GCI chipcontrol register indices */
#define CC_GCI_CHIPCTRL_00 (0)
@@ -3002,6 +3246,8 @@
#define CC4335_PIN_RF_SW_CTRL_7 (23)
#define CC4335_PIN_RF_SW_CTRL_8 (24)
#define CC4335_PIN_RF_SW_CTRL_9 (25)
+/* Last GPIO Pad */
+#define CC4335_PIN_GPIO_LAST (31)
/* 4335 GCI function sel values
*/
@@ -3022,6 +3268,15 @@
#define CC4335_FNSEL_PUP (14)
#define CC4335_FNSEL_TRI (15)
+/* GCI Core Control Reg */
+#define GCI_CORECTRL_SR_MASK (1 << 0) /* SECI block Reset */
+#define GCI_CORECTRL_RSL_MASK (1 << 1) /* ResetSECILogic */
+#define GCI_CORECTRL_ES_MASK (1 << 2) /* EnableSECI */
+#define GCI_CORECTRL_FSL_MASK (1 << 3) /* Force SECI Out Low */
+#define GCI_CORECTRL_SOM_MASK (7 << 4) /* SECI Op Mode */
+#define GCI_CORECTRL_US_MASK (1 << 7) /* Update SECI */
+#define GCI_CORECTRL_BOS_MASK (1 << 8) /* Break On Sleep */
+
/* 4345 pins
* note: only the values set as default/used are added here.
*/
@@ -3083,6 +3338,16 @@
#define MUXENAB4345_HOSTWAKE_MASK (0x000000f0)
#define MUXENAB4345_HOSTWAKE_SHIFT 4
+/* 4349 Group (4349, 4355, 4359) GCI AVS function sel values */
+#define CC4349_GRP_GCI_AVS_CTRL_MASK (0xffe00000)
+#define CC4349_GRP_GCI_AVS_CTRL_SHIFT (21)
+#define CC4349_GRP_GCI_AVS_CTRL_ENAB (1 << 5)
+
+/* 4345 GCI AVS function sel values */
+#define CC4345_GCI_AVS_CTRL_MASK (0xfc)
+#define CC4345_GCI_AVS_CTRL_SHIFT (2)
+#define CC4345_GCI_AVS_CTRL_ENAB (1 << 5)
+
/* GCI GPIO for function sel GCI-0/GCI-1 */
#define CC_GCI_GPIO_0 (0)
#define CC_GCI_GPIO_1 (1)
@@ -3092,6 +3357,15 @@
#define CC_GCI_GPIO_5 (5)
#define CC_GCI_GPIO_6 (6)
#define CC_GCI_GPIO_7 (7)
+#define CC_GCI_GPIO_8 (8)
+#define CC_GCI_GPIO_9 (9)
+#define CC_GCI_GPIO_10 (10)
+#define CC_GCI_GPIO_11 (11)
+#define CC_GCI_GPIO_12 (12)
+#define CC_GCI_GPIO_13 (13)
+#define CC_GCI_GPIO_14 (14)
+#define CC_GCI_GPIO_15 (15)
+
/* indicates Invalid GPIO, e.g. when PAD GPIO doesn't map to GCI GPIO */
#define CC_GCI_GPIO_INVALID 0xFF
@@ -3119,13 +3393,57 @@
#define GCIGETNBL_4B(val, pos) ((val >> pos) & 0xF)
-#define GCI_INTSTATUS_GPIOINT (1 << 25)
-#define GCI_INTSTATUS_GPIOWAKE (1 << 26)
-#define GCI_INTMASK_GPIOINT (1 << 25)
-#define GCI_INTMASK_GPIOWAKE (1 << 26)
-#define GCI_WAKEMASK_GPIOINT (1 << 25)
-#define GCI_WAKEMASK_GPIOWAKE (1 << 26)
+/* 4335 GCI Intstatus(Mask)/WakeMask Register bits. */
+#define GCI_INTSTATUS_RBI (1 << 0) /* Rx Break Interrupt */
+#define GCI_INTSTATUS_UB (1 << 1) /* UART Break Interrupt */
+#define GCI_INTSTATUS_SPE (1 << 2) /* SECI Parity Error Interrupt */
+#define GCI_INTSTATUS_SFE (1 << 3) /* SECI Framing Error Interrupt */
+#define GCI_INTSTATUS_SRITI (1 << 9) /* SECI Rx Idle Timer Interrupt */
+#define GCI_INTSTATUS_STFF (1 << 10) /* SECI Tx FIFO Full Interrupt */
+#define GCI_INTSTATUS_STFAE (1 << 11) /* SECI Tx FIFO Almost Empty Intr */
+#define GCI_INTSTATUS_SRFAF (1 << 12) /* SECI Rx FIFO Almost Full */
+#define GCI_INTSTATUS_SRFNE (1 << 14) /* SECI Rx FIFO Not Empty */
+#define GCI_INTSTATUS_SRFOF (1 << 15) /* SECI Rx FIFO Overflow */
+#define GCI_INTSTATUS_GPIOINT (1 << 25) /* GCIGpioInt */
+#define GCI_INTSTATUS_GPIOWAKE (1 << 26) /* GCIGpioWake */
+/* 4335 GCI IntMask Register bits. */
+#define GCI_INTMASK_RBI (1 << 0) /* Rx Break Interrupt */
+#define GCI_INTMASK_UB (1 << 1) /* UART Break Interrupt */
+#define GCI_INTMASK_SPE (1 << 2) /* SECI Parity Error Interrupt */
+#define GCI_INTMASK_SFE (1 << 3) /* SECI Framing Error Interrupt */
+#define GCI_INTMASK_SRITI (1 << 9) /* SECI Rx Idle Timer Interrupt */
+#define GCI_INTMASK_STFF (1 << 10) /* SECI Tx FIFO Full Interrupt */
+#define GCI_INTMASK_STFAE (1 << 11) /* SECI Tx FIFO Almost Empty Intr */
+#define GCI_INTMASK_SRFAF (1 << 12) /* SECI Rx FIFO Almost Full */
+#define GCI_INTMASK_SRFNE (1 << 14) /* SECI Rx FIFO Not Empty */
+#define GCI_INTMASK_SRFOF (1 << 15) /* SECI Rx FIFO Overflow */
+#define GCI_INTMASK_GPIOINT (1 << 25) /* GCIGpioInt */
+#define GCI_INTMASK_GPIOWAKE (1 << 26) /* GCIGpioWake */
+
+/* 4335 GCI WakeMask Register bits. */
+#define GCI_WAKEMASK_RBI (1 << 0) /* Rx Break Interrupt */
+#define GCI_WAKEMASK_UB (1 << 1) /* UART Break Interrupt */
+#define GCI_WAKEMASK_SPE (1 << 2) /* SECI Parity Error Interrupt */
+#define GCI_WAKEMASK_SFE (1 << 3) /* SECI Framing Error Interrupt */
+#define GCI_WAKE_SRITI (1 << 9) /* SECI Rx Idle Timer Interrupt */
+#define GCI_WAKEMASK_STFF (1 << 10) /* SECI Tx FIFO Full Interrupt */
+#define GCI_WAKEMASK_STFAE (1 << 11) /* SECI Tx FIFO Almost Empty Intr */
+#define GCI_WAKEMASK_SRFAF (1 << 12) /* SECI Rx FIFO Almost Full */
+#define GCI_WAKEMASK_SRFNE (1 << 14) /* SECI Rx FIFO Not Empty */
+#define GCI_WAKEMASK_SRFOF (1 << 15) /* SECI Rx FIFO Overflow */
+#define GCI_WAKEMASK_GPIOINT (1 << 25) /* GCIGpioInt */
+#define GCI_WAKEMASK_GPIOWAKE (1 << 26) /* GCIGpioWake */
+
+#define GCI_WAKE_ON_GCI_GPIO1 1
+#define GCI_WAKE_ON_GCI_GPIO2 2
+#define GCI_WAKE_ON_GCI_GPIO3 3
+#define GCI_WAKE_ON_GCI_GPIO4 4
+#define GCI_WAKE_ON_GCI_GPIO5 5
+#define GCI_WAKE_ON_GCI_GPIO6 6
+#define GCI_WAKE_ON_GCI_GPIO7 7
+#define GCI_WAKE_ON_GCI_GPIO8 8
+#define GCI_WAKE_ON_GCI_SECI_IN 9
/* 4335 MUX options. Each nibble belongs to a setting. A non-zero value specifies a logic
 * level; for now only UART for the bootloader.
@@ -3163,8 +3481,10 @@
#define SECI_MODE_SHIFT 4 /* (bits 5, 6, 7) */
#define SECI_UPD_SECI (1 << 7)
-#define SECI_SIGNOFF_0 0xDB
+#define SECI_SLIP_ESC_CHAR 0xDB
+#define SECI_SIGNOFF_0 SECI_SLIP_ESC_CHAR
#define SECI_SIGNOFF_1 0
+#define SECI_REFRESH_REQ 0xDA
/* seci clk_ctl_st bits */
#define CLKCTL_STS_SECI_CLK_REQ (1 << 8)
@@ -3175,7 +3495,28 @@
#define SECI_UART_SECI_IN_STATE (1 << 2)
#define SECI_UART_SECI_IN2_STATE (1 << 3)
-/* SECI UART LCR/MCR register bits */
+/* GCI RX FIFO Control Register */
+#define GCI_RXF_LVL_MASK (0xFF << 0)
+#define GCI_RXF_TIMEOUT_MASK (0xFF << 8)
+
+/* GCI UART Registers' Bit definitions */
+/* Seci Fifo Level Register */
+#define SECI_TXF_LVL_MASK (0x3F << 8)
+#define TXF_AE_LVL_DEFAULT 0x4
+#define SECI_RXF_LVL_FC_MASK (0x3F << 16)
+
+/* SeciUARTFCR Bit definitions */
+#define SECI_UART_FCR_RFR (1 << 0)
+#define SECI_UART_FCR_TFR (1 << 1)
+#define SECI_UART_FCR_SR (1 << 2)
+#define SECI_UART_FCR_THP (1 << 3)
+#define SECI_UART_FCR_AB (1 << 4)
+#define SECI_UART_FCR_ATOE (1 << 5)
+#define SECI_UART_FCR_ARTSOE (1 << 6)
+#define SECI_UART_FCR_ABV (1 << 7)
+#define SECI_UART_FCR_ALM (1 << 8)
+
+/* SECI UART LCR register bits */
#define SECI_UART_LCR_STOP_BITS (1 << 0) /* 0 - 1bit, 1 - 2bits */
#define SECI_UART_LCR_PARITY_EN (1 << 1)
#define SECI_UART_LCR_PARITY (1 << 2) /* 0 - odd, 1 - even */
@@ -3188,6 +3529,7 @@
#define SECI_UART_LCR_TXCRC_INV (1 << 9)
#define SECI_UART_LCR_TXCRC_LSBF (1 << 10)
#define SECI_UART_LCR_TXCRC_EN (1 << 11)
+#define SECI_UART_LCR_RXSYNC_EN (1 << 12)
#define SECI_UART_MCR_TX_EN (1 << 0)
#define SECI_UART_MCR_PRTS (1 << 1)
@@ -3199,6 +3541,52 @@
#define SECI_UART_MCR_BAUD_ADJ_EN (1 << 7)
#define SECI_UART_MCR_XONOFF_RPT (1 << 9)
+/* SeciUARTLSR Bit Mask */
+#define SECI_UART_LSR_RXOVR_MASK (1 << 0)
+#define SECI_UART_LSR_RFF_MASK (1 << 1)
+#define SECI_UART_LSR_TFNE_MASK (1 << 2)
+#define SECI_UART_LSR_TI_MASK (1 << 3)
+#define SECI_UART_LSR_TPR_MASK (1 << 4)
+#define SECI_UART_LSR_TXHALT_MASK (1 << 5)
+
+/* SeciUARTMSR Bit Mask */
+#define SECI_UART_MSR_CTSS_MASK (1 << 0)
+#define SECI_UART_MSR_RTSS_MASK (1 << 1)
+#define SECI_UART_MSR_SIS_MASK (1 << 2)
+#define SECI_UART_MSR_SIS2_MASK (1 << 3)
+
+/* SeciUARTData Bits */
+#define SECI_UART_DATA_RF_NOT_EMPTY_BIT (1 << 12)
+#define SECI_UART_DATA_RF_FULL_BIT (1 << 13)
+#define SECI_UART_DATA_RF_OVRFLOW_BIT (1 << 14)
+#define SECI_UART_DATA_FIFO_PTR_MASK 0xFF
+#define SECI_UART_DATA_RF_RD_PTR_SHIFT 16
+#define SECI_UART_DATA_RF_WR_PTR_SHIFT 24
+
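The SeciUARTData bits above pack FIFO status and both FIFO pointers into one 32-bit read. A sketch of how a reader might decode it — the decoder names are hypothetical, and the macros are copied locally so the snippet stands alone:

```c
#include <assert.h>
#include <stdint.h>

/* Copied from the header above. */
#define SECI_UART_DATA_RF_NOT_EMPTY_BIT	(1 << 12)
#define SECI_UART_DATA_FIFO_PTR_MASK	0xFF
#define SECI_UART_DATA_RF_RD_PTR_SHIFT	16
#define SECI_UART_DATA_RF_WR_PTR_SHIFT	24

/* Hypothetical decoders for a SeciUARTData register value. */
static unsigned seci_rx_rd_ptr(uint32_t data)
{
	return (data >> SECI_UART_DATA_RF_RD_PTR_SHIFT) & SECI_UART_DATA_FIFO_PTR_MASK;
}

static unsigned seci_rx_wr_ptr(uint32_t data)
{
	return (data >> SECI_UART_DATA_RF_WR_PTR_SHIFT) & SECI_UART_DATA_FIFO_PTR_MASK;
}

static int seci_rx_not_empty(uint32_t data)
{
	return (data & SECI_UART_DATA_RF_NOT_EMPTY_BIT) != 0;
}
```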
+/* LTECX: ltecxmux */
+#define LTECX_EXTRACT_MUX(val, idx) (getbit4(&(val), (idx)))
+
+/* LTECX: ltecxmux MODE */
+#define LTECX_MUX_MODE_IDX 0
+#define LTECX_MUX_MODE_WCI2 0x0
+#define LTECX_MUX_MODE_GPIO 0x1
+
+
+/* LTECX GPIO Information Index */
+#define LTECX_NVRAM_FSYNC_IDX 0
+#define LTECX_NVRAM_LTERX_IDX 1
+#define LTECX_NVRAM_LTETX_IDX 2
+#define LTECX_NVRAM_WLPRIO_IDX 3
+
+/* LTECX WCI2 Information Index */
+#define LTECX_NVRAM_WCI2IN_IDX 0
+#define LTECX_NVRAM_WCI2OUT_IDX 1
+
+/* LTECX: Macros to get GPIO/FNSEL/GCIGPIO */
+#define LTECX_EXTRACT_PADNUM(val, idx) (getbit8(&(val), (idx)))
+#define LTECX_EXTRACT_FNSEL(val, idx) (getbit4(&(val), (idx)))
+#define LTECX_EXTRACT_GCIGPIO(val, idx) (getbit4(&(val), (idx)))
+
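The `LTECX_EXTRACT_*` macros above index nibbles and bytes out of packed NVRAM words via `getbit4()`/`getbit8()` (defined elsewhere in the driver). A self-contained sketch under the assumption that those helpers return the idx'th nibble/byte of a 32-bit word — the local helper names are mine:

```c
#include <assert.h>
#include <stdint.h>

/* Local stand-ins for getbit4()/getbit8(); the assumption is that they
 * return the idx'th nibble/byte of a word, matching how the
 * LTECX_EXTRACT_* macros index their NVRAM words. */
static uint32_t nib4(const uint32_t *val, unsigned idx)
{
	return (*val >> (4 * idx)) & 0xFu;
}

static uint32_t byte8(const uint32_t *val, unsigned idx)
{
	return (*val >> (8 * idx)) & 0xFFu;
}

/* Copied from the header above. */
#define LTECX_MUX_MODE_IDX	0
#define LTECX_MUX_MODE_GPIO	0x1
```

So `LTECX_EXTRACT_MUX(val, LTECX_MUX_MODE_IDX)` reads the low nibble to pick WCI2 vs GPIO mode, and `LTECX_EXTRACT_PADNUM` reads a whole byte per GPIO-information index.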
/* WLAN channel numbers - used from wifi.h */
/* WLAN BW */
@@ -3230,6 +3618,9 @@
#define CC_SR_CTL0_MAX_SR_LQ_CLK_CNT_SHIFT 25
#define CC_SR_CTL0_EN_MEM_DISABLE_FOR_SLEEP 30
+#define CC_SR_CTL1_SR_INIT_MASK 0x3FF
+#define CC_SR_CTL1_SR_INIT_SHIFT 0
+
#define ECI_INLO_PKTDUR_MASK 0x000000f0 /* [7:4] - 4 bits */
#define ECI_INLO_PKTDUR_SHIFT 4
diff --git a/drivers/net/wireless/bcmdhd/include/sbconfig.h b/drivers/net/wireless/bcmdhd/include/sbconfig.h
old mode 100755
new mode 100644
index 83f7d66..812e325
--- a/drivers/net/wireless/bcmdhd/include/sbconfig.h
+++ b/drivers/net/wireless/bcmdhd/include/sbconfig.h
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: sbconfig.h 241182 2011-02-17 21:50:03Z $
+ * $Id: sbconfig.h 456346 2014-02-18 16:48:52Z $
*/
#ifndef _SBCONFIG_H
@@ -81,7 +81,7 @@
#define SBTMPORTCONNID0 0xed8
#define SBTMPORTLOCK0 0xef8
-#ifndef _LANGUAGE_ASSEMBLY
+#if !defined(_LANGUAGE_ASSEMBLY) && !defined(__ASSEMBLY__)
typedef volatile struct _sbconfig {
uint32 PAD[2];
@@ -123,7 +123,7 @@
uint32 sbidhigh; /* identification */
} sbconfig_t;
-#endif /* _LANGUAGE_ASSEMBLY */
+#endif /* !_LANGUAGE_ASSEMBLY && !__ASSEMBLY__ */
/* sbipsflag */
#define SBIPS_INT1_MASK 0x3f /* which sbflags get routed to mips interrupt 1 */
diff --git a/drivers/net/wireless/bcmdhd/include/sbhnddma.h b/drivers/net/wireless/bcmdhd/include/sbhnddma.h
old mode 100755
new mode 100644
index 9db0fa1..cbd9f0a
--- a/drivers/net/wireless/bcmdhd/include/sbhnddma.h
+++ b/drivers/net/wireless/bcmdhd/include/sbhnddma.h
@@ -22,7 +22,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: sbhnddma.h 424099 2013-09-16 07:44:34Z $
+ * $Id: sbhnddma.h 452424 2014-01-30 09:43:39Z $
*/
#ifndef _sbhnddma_h_
@@ -346,6 +346,7 @@
#define DMA_CTRL_USB_BOUNDRY4KB_WAR (1 << 4)
#define DMA_CTRL_DMA_AVOIDANCE_WAR (1 << 5) /* DMA avoidance WAR for 4331 */
#define DMA_CTRL_RXSINGLE (1 << 6) /* always single buffer */
+#define DMA_CTRL_SDIO_RXGLOM (1 << 7) /* DMA Rx glom is enabled */
/* receive descriptor table pointer */
#define D64_RP_LD_MASK 0x00001fff /* last valid descriptor */
diff --git a/drivers/net/wireless/bcmdhd/include/sbpcmcia.h b/drivers/net/wireless/bcmdhd/include/sbpcmcia.h
old mode 100755
new mode 100644
index f746ddc..f34fc18
--- a/drivers/net/wireless/bcmdhd/include/sbpcmcia.h
+++ b/drivers/net/wireless/bcmdhd/include/sbpcmcia.h
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: sbpcmcia.h 427964 2013-10-07 07:13:33Z $
+ * $Id: sbpcmcia.h 446298 2014-01-03 11:30:17Z $
*/
#ifndef _SBPCMCIA_H
diff --git a/drivers/net/wireless/bcmdhd/include/sbsdio.h b/drivers/net/wireless/bcmdhd/include/sbsdio.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/sbsdpcmdev.h b/drivers/net/wireless/bcmdhd/include/sbsdpcmdev.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/sbsocram.h b/drivers/net/wireless/bcmdhd/include/sbsocram.h
old mode 100755
new mode 100644
index 790e3f1..33442f8
--- a/drivers/net/wireless/bcmdhd/include/sbsocram.h
+++ b/drivers/net/wireless/bcmdhd/include/sbsocram.h
@@ -57,7 +57,8 @@
uint32 cambankmaskreg;
uint32 PAD[1];
uint32 bankinfo; /* corev 8 */
- uint32 PAD[15];
+ uint32 bankpda;
+ uint32 PAD[14];
uint32 extmemconfig;
uint32 extmemparitycsr;
uint32 extmemparityerrdata;
diff --git a/drivers/net/wireless/bcmdhd/include/sdio.h b/drivers/net/wireless/bcmdhd/include/sdio.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/sdioh.h b/drivers/net/wireless/bcmdhd/include/sdioh.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/sdiovar.h b/drivers/net/wireless/bcmdhd/include/sdiovar.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/siutils.h b/drivers/net/wireless/bcmdhd/include/siutils.h
old mode 100755
new mode 100644
index 1c4d457..bf51f8f
--- a/drivers/net/wireless/bcmdhd/include/siutils.h
+++ b/drivers/net/wireless/bcmdhd/include/siutils.h
@@ -22,14 +22,17 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: siutils.h 433599 2013-11-01 18:31:27Z $
+ * $Id: siutils.h 474902 2014-05-02 18:31:33Z $
*/
#ifndef _siutils_h_
#define _siutils_h_
+#ifdef SR_DEBUG
+#include "wlioctl.h"
+#endif /* SR_DEBUG */
-#include <bcmutils.h>
+
/*
* Data structure to export all chip specific common variables
* public (read-only) portion of siutils handle returned by si_attach()/si_kattach()
@@ -66,7 +69,6 @@
*/
typedef const struct si_pub si_t;
-
/*
* Many of the routines below take an 'sih' handle as their first arg.
* Allocate this by calling si_attach(). Free it by calling si_detach().
@@ -105,8 +107,12 @@
/* SI routine enumeration: to be used by update function with multiple hooks */
#define SI_DOATTACH 1
-#define SI_PCIDOWN 2
-#define SI_PCIUP 3
+#define SI_PCIDOWN 2 /* wireless interface is down */
+#define SI_PCIUP 3 /* wireless interface is up */
+
+#ifdef SR_DEBUG
+#define PMU_RES 31
+#endif /* SR_DEBUG */
#define ISSIM_ENAB(sih) FALSE
@@ -117,6 +123,9 @@
#define PMUCTL_ENAB(sih) ((sih)->cccaps & CC_CAP_PMU)
#endif
+#define AOB_ENAB(sih) ((sih)->ccrev >= 35 ? \
+ ((sih)->cccaps_ext & CC_CAP_EXT_AOB_PRESENT) : 0)
+
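`AOB_ENAB()` decides whether the PMU registers live in their own always-on core (chipcommon rev >= 35 parts that advertise the capability) rather than inside chipcommon; `PMUREG()`/`pmu_corereg()` later in this file dispatch on it. A toy model of that predicate — the struct and the capability bit value here are invented, since the real `si_pub` and `CC_CAP_EXT_AOB_PRESENT` are defined elsewhere:

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the si_pub fields AOB_ENAB() consults. */
struct si_pub_toy {
	unsigned ccrev;		/* chipcommon core rev */
	uint32_t cccaps_ext;	/* chipcommon extended capabilities */
};

#define CC_CAP_EXT_AOB_PRESENT_TOY	0x00000040u	/* placeholder bit */

/* Mirrors the AOB_ENAB() logic above: PMU registers sit in a separate
 * always-on core only on ccrev >= 35 parts that advertise it. */
static int aob_enab(const struct si_pub_toy *sih)
{
	return sih->ccrev >= 35 ?
	       (sih->cccaps_ext & CC_CAP_EXT_AOB_PRESENT_TOY) != 0 : 0;
}
```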
/* chipcommon clock/power control (exclusive with PMU's) */
#if defined(BCMPMUCTL) && BCMPMUCTL
#define CCCTL_ENAB(sih) (0)
@@ -153,13 +162,15 @@
#define ARMCR4_BSZ_MASK 0x3f
#define ARMCR4_BSZ_MULT 8192
+#include <osl_decl.h>
/* === exported functions === */
extern si_t *si_attach(uint pcidev, osl_t *osh, void *regs, uint bustype,
void *sdh, char **vars, uint *varsz);
extern si_t *si_kattach(osl_t *osh);
extern void si_detach(si_t *sih);
extern bool si_pci_war16165(si_t *sih);
-
+extern void *
+si_d11_switch_addrbase(si_t *sih, uint coreunit);
extern uint si_corelist(si_t *sih, uint coreid[]);
extern uint si_coreid(si_t *sih);
extern uint si_flag(si_t *sih);
@@ -172,6 +183,7 @@
extern void *si_osh(si_t *sih);
extern void si_setosh(si_t *sih, osl_t *osh);
extern uint si_corereg(si_t *sih, uint coreidx, uint regoff, uint mask, uint val);
+extern uint si_pmu_corereg(si_t *sih, uint32 idx, uint regoff, uint mask, uint val);
extern uint32 *si_corereg_addr(si_t *sih, uint coreidx, uint regoff);
extern void *si_coreregs(si_t *sih);
extern uint si_wrapperreg(si_t *sih, uint32 offset, uint32 mask, uint32 val);
@@ -182,6 +194,7 @@
extern uint32 si_core_sflags(si_t *sih, uint32 mask, uint32 val);
extern bool si_iscoreup(si_t *sih);
extern uint si_numcoreunits(si_t *sih, uint coreid);
+extern uint si_numd11coreunits(si_t *sih);
extern uint si_findcoreidx(si_t *sih, uint coreid, uint coreunit);
extern void *si_setcoreidx(si_t *sih, uint coreidx);
extern void *si_setcore(si_t *sih, uint coreid, uint coreunit);
@@ -198,8 +211,8 @@
extern uint si_chip_hostif(si_t *sih);
extern bool si_read_pmu_autopll(si_t *sih);
extern uint32 si_clock(si_t *sih);
-extern uint32 si_alp_clock(si_t *sih);
-extern uint32 si_ilp_clock(si_t *sih);
+extern uint32 si_alp_clock(si_t *sih); /* returns [Hz] units */
+extern uint32 si_ilp_clock(si_t *sih); /* returns [Hz] units */
extern void si_pci_setup(si_t *sih, uint coremask);
extern void si_pcmcia_init(si_t *sih);
extern void si_setint(si_t *sih, int siflag);
@@ -238,6 +251,7 @@
extern uint32 si_gpiopull(si_t *sih, bool updown, uint32 mask, uint32 val);
extern uint32 si_gpioevent(si_t *sih, uint regtype, uint32 mask, uint32 val);
extern uint32 si_gpio_int_enable(si_t *sih, bool enable);
+extern void si_gci_uart_init(si_t *sih, osl_t *osh, uint8 seci_mode);
extern void si_gci_enable_gpio(si_t *sih, uint8 gpio, uint32 mask, uint32 value);
extern uint8 si_gci_host_wake_gpio_init(si_t *sih);
extern void si_gci_host_wake_gpio_enable(si_t *sih, uint8 gpio, bool state);
@@ -254,10 +268,10 @@
extern void *si_gci_gpioint_handler_register(si_t *sih, uint8 gpio, uint8 sts,
gci_gpio_handler_t cb, void *arg);
extern void si_gci_gpioint_handler_unregister(si_t *sih, void* gci_i);
+extern uint8 si_gci_gpio_status(si_t *sih, uint8 gci_gpio, uint8 mask, uint8 value);
/* Wake-on-wireless-LAN (WOWL) */
extern bool si_pci_pmecap(si_t *sih);
-struct osl_info;
extern bool si_pci_fastpmecap(struct osl_info *osh);
extern bool si_pci_pmestat(si_t *sih);
extern void si_pci_pmeclr(si_t *sih);
@@ -309,6 +323,7 @@
extern int si_otp_fabid(si_t *sih, uint16 *fabid, bool rw);
extern uint16 si_fabid(si_t *sih);
+extern uint16 si_chipid(si_t *sih);
/*
* Build device path. Path size must be >= SI_DEVPATH_BUFSZ.
@@ -316,6 +331,7 @@
* Return 0 on success, nonzero otherwise.
*/
extern int si_devpath(si_t *sih, char *path, int size);
+extern int si_devpath_pcie(si_t *sih, char *path, int size);
/* Read variable with prepending the devpath to the name */
extern char *si_getdevpathvar(si_t *sih, const char *name);
extern int si_getdevpathintvar(si_t *sih, const char *name);
@@ -362,9 +378,19 @@
extern bool si_taclear(si_t *sih, bool details);
+#if defined(BCMDBG_PHYDUMP)
+extern void si_dumpregs(si_t *sih, struct bcmstrbuf *b);
+#endif
extern uint32 si_ccreg(si_t *sih, uint32 offset, uint32 mask, uint32 val);
extern uint32 si_pciereg(si_t *sih, uint32 offset, uint32 mask, uint32 val, uint type);
+#ifdef SR_DEBUG
+extern void si_dump_pmu(si_t *sih, void *pmu_var);
+extern void si_pmu_keep_on(si_t *sih, int32 int_val);
+extern uint32 si_pmu_keep_on_get(si_t *sih);
+extern uint32 si_power_island_set(si_t *sih, uint32 int_val);
+extern uint32 si_power_island_get(si_t *sih);
+#endif /* SR_DEBUG */
extern uint32 si_pcieserdesreg(si_t *sih, uint32 mdioslave, uint32 offset, uint32 mask, uint32 val);
extern void si_pcie_set_request_size(si_t *sih, uint16 size);
extern uint16 si_pcie_get_request_size(si_t *sih);
@@ -391,10 +417,16 @@
extern uint32 si_gci_input(si_t *sih, uint reg);
extern uint32 si_gci_int_enable(si_t *sih, bool enable);
extern void si_gci_reset(si_t *sih);
-extern void si_ercx_init(si_t *sih);
-extern void si_wci2_init(si_t *sih, uint baudrate);
+#ifdef BCMLTECOEX
extern void si_gci_seci_init(si_t *sih);
+extern void si_ercx_init(si_t *sih, uint32 ltecx_mux, uint32 ltecx_padnum,
+ uint32 ltecx_fnsel, uint32 ltecx_gcigpio);
+extern void si_wci2_init(si_t *sih, uint8 baudrate, uint32 ltecx_mux, uint32 ltecx_padnum,
+ uint32 ltecx_fnsel, uint32 ltecx_gcigpio);
+#endif /* BCMLTECOEX */
extern void si_gci_set_functionsel(si_t *sih, uint32 pin, uint8 fnsel);
+extern uint32 si_gci_get_functionsel(si_t *sih, uint32 pin);
+extern void si_gci_clear_functionsel(si_t *sih, uint8 fnsel);
extern uint8 si_gci_get_chipctrlreg_idx(uint32 pin, uint32 *regidx, uint32 *pos);
extern uint32 si_gci_chipcontrol(si_t *sih, uint reg, uint32 mask, uint32 val);
extern uint32 si_gci_chipstatus(si_t *sih, uint reg);
@@ -403,6 +435,7 @@
extern uint32 si_cc_set_reg32(uint32 reg_offs, uint32 val);
extern uint32 si_gci_preinit_upd_indirect(uint32 regidx, uint32 setval, uint32 mask);
extern uint8 si_enable_device_wake(si_t *sih, uint8 *wake_status, uint8 *cur_status);
+extern void si_swdenable(si_t *sih, uint32 swdflag);
#define CHIPCTRLREG1 0x1
#define CHIPCTRLREG2 0x2
@@ -422,7 +455,19 @@
extern uint32 si_pmu_res_req_timer_clr(si_t *sih);
extern void si_pmu_rfldo(si_t *sih, bool on);
extern void si_survive_perst_war(si_t *sih, bool reset, uint32 sperst_mask, uint32 spert_val);
+extern uint32 si_pcie_set_ctrlreg(si_t *sih, uint32 sperst_mask, uint32 spert_val);
extern void si_pcie_ltr_war(si_t *sih);
+extern void si_pcie_hw_LTR_war(si_t *sih);
+extern void si_pcie_hw_L1SS_war(si_t *sih);
+extern void si_pciedev_crwlpciegen2(si_t *sih);
+extern void si_pcie_prep_D3(si_t *sih, bool enter_D3);
+extern void si_pciedev_reg_pm_clk_period(si_t *sih);
+
+#ifdef WLRSDB
+extern void si_d11rsdb_core_disable(si_t *sih, uint32 bits);
+extern void si_d11rsdb_core_reset(si_t *sih, uint32 bits, uint32 resetbits);
+#endif
+
/* Macro to enable clock gating changes in different cores */
#define MEM_CLK_GATE_BIT 5
@@ -438,4 +483,106 @@
#define PLL_DIV2_MASK (0x37 << PLL_DIV2_BIT_START)
#define PLL_DIV2_DIS_OP (0x37 << PLL_DIV2_BIT_START)
+#define PMUREG(si, member) \
+ (AOB_ENAB(si) ? \
+ si_corereg_addr(si, si_findcoreidx(si, PMU_CORE_ID, 0), \
+ OFFSETOF(pmuregs_t, member)): \
+ si_corereg_addr(si, SI_CC_IDX, OFFSETOF(chipcregs_t, member)))
+
+#define pmu_corereg(si, cc_idx, member, mask, val) \
+ (AOB_ENAB(si) ? \
+ si_pmu_corereg(si, si_findcoreidx(si, PMU_CORE_ID, 0), \
+ OFFSETOF(pmuregs_t, member), mask, val): \
+ si_pmu_corereg(si, cc_idx, OFFSETOF(chipcregs_t, member), mask, val))
+
+/* GCI Macros */
+#define ALLONES_32 0xFFFFFFFF
+#define GCI_CCTL_SECIRST_OFFSET 0 /* SeciReset */
+#define GCI_CCTL_RSTSL_OFFSET 1 /* ResetSeciLogic */
+#define GCI_CCTL_SECIEN_OFFSET 2 /* EnableSeci */
+#define GCI_CCTL_FSL_OFFSET 3 /* ForceSeciOutLow */
+#define GCI_CCTL_SMODE_OFFSET 4 /* SeciOpMode, 6:4 */
+#define GCI_CCTL_US_OFFSET 7 /* UpdateSeci */
+#define GCI_CCTL_BRKONSLP_OFFSET 8 /* BreakOnSleep */
+#define GCI_CCTL_SILOWTOUT_OFFSET 9 /* SeciInLowTimeout, 10:9 */
+#define GCI_CCTL_RSTOCC_OFFSET 11 /* ResetOffChipCoex */
+#define GCI_CCTL_ARESEND_OFFSET 12 /* AutoBTSigResend */
+#define GCI_CCTL_FGCR_OFFSET 16 /* ForceGciClkReq */
+#define GCI_CCTL_FHCRO_OFFSET 17 /* ForceHWClockReqOff */
+#define GCI_CCTL_FREGCLK_OFFSET 18 /* ForceRegClk */
+#define GCI_CCTL_FSECICLK_OFFSET 19 /* ForceSeciClk */
+#define GCI_CCTL_FGCA_OFFSET 20 /* ForceGciClkAvail */
+#define GCI_CCTL_FGCAV_OFFSET 21 /* ForceGciClkAvailValue */
+#define GCI_CCTL_SCS_OFFSET 24 /* SeciClkStretch, 31:24 */
+
+#define GCI_MODE_UART 0x0
+#define GCI_MODE_SECI 0x1
+#define GCI_MODE_BTSIG 0x2
+#define GCI_MODE_GPIO 0x3
+#define GCI_MODE_MASK 0x7
+
+#define GCI_CCTL_LOWTOUT_DIS 0x0
+#define GCI_CCTL_LOWTOUT_10BIT 0x1
+#define GCI_CCTL_LOWTOUT_20BIT 0x2
+#define GCI_CCTL_LOWTOUT_30BIT 0x3
+#define GCI_CCTL_LOWTOUT_MASK 0x3
+
+#define GCI_CCTL_SCS_DEF 0x19
+#define GCI_CCTL_SCS_MASK 0xFF
+
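The `GCI_CCTL_*_OFFSET` values above are bit positions in the GCI corecontrol register. A sketch of composing a control word from them — the composing function is hypothetical, and only the macros it needs are copied locally:

```c
#include <assert.h>
#include <stdint.h>

/* Copied from the header above. */
#define GCI_CCTL_SECIEN_OFFSET	2	/* EnableSeci */
#define GCI_CCTL_SMODE_OFFSET	4	/* SeciOpMode, 6:4 */
#define GCI_CCTL_US_OFFSET	7	/* UpdateSeci */
#define GCI_MODE_SECI		0x1
#define GCI_MODE_MASK		0x7

/* Hypothetical sketch: build a corecontrol word that enables SECI,
 * selects SECI op mode, and latches the change via UpdateSeci. */
static uint32_t gci_cctl_seci_enable(void)
{
	return (1u << GCI_CCTL_SECIEN_OFFSET) |
	       ((GCI_MODE_SECI & GCI_MODE_MASK) << GCI_CCTL_SMODE_OFFSET) |
	       (1u << GCI_CCTL_US_OFFSET);
}
```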
+#define GCI_SECIIN_MODE_OFFSET 0
+#define GCI_SECIIN_GCIGPIO_OFFSET 4
+#define GCI_SECIIN_RXID2IP_OFFSET 8
+
+#define GCI_SECIOUT_MODE_OFFSET 0
+#define GCI_SECIOUT_GCIGPIO_OFFSET 4
+#define GCI_SECIOUT_SECIINRELATED_OFFSET 16
+
+#define GCI_SECIAUX_RXENABLE_OFFSET 0
+#define GCI_SECIFIFO_RXENABLE_OFFSET 16
+
+#define GCI_SECITX_ENABLE_OFFSET 0
+
+#define GCI_GPIOCTL_INEN_OFFSET 0
+#define GCI_GPIOCTL_OUTEN_OFFSET 1
+#define GCI_GPIOCTL_PDN_OFFSET 4
+
+#define GCI_GPIOIDX_OFFSET 16
+
+#define GCI_LTECX_SECI_ID 0 /* SECI port for LTECX */
+
+/* To access per GCI bit registers */
+#define GCI_REG_WIDTH 32
+
+/* GCI bit positions */
+/* GCI [127:000] = WLAN [127:0] */
+#define GCI_WLAN_IP_ID 0
+#define GCI_WLAN_BEGIN 0
+#define GCI_WLAN_PRIO_POS (GCI_WLAN_BEGIN + 4)
+
+/* GCI [639:512] = LTE [127:0] */
+#define GCI_LTE_IP_ID 4
+#define GCI_LTE_BEGIN 512
+#define GCI_LTE_FRAMESYNC_POS (GCI_LTE_BEGIN + 0)
+#define GCI_LTE_RX_POS (GCI_LTE_BEGIN + 1)
+#define GCI_LTE_TX_POS (GCI_LTE_BEGIN + 2)
+#define GCI_LTE_AUXRXDVALID_POS (GCI_LTE_BEGIN + 56)
+
+/* Reg Index corresponding to ECI bit no x of ECI space */
+#define GCI_REGIDX(x) ((x)/GCI_REG_WIDTH)
+/* Bit offset of ECI bit no x in 32-bit words */
+#define GCI_BITOFFSET(x) ((x)%GCI_REG_WIDTH)
+
+/* End - GCI Macros */
+
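`GCI_REGIDX()`/`GCI_BITOFFSET()` above translate a flat GCI/ECI bit position (e.g. the LTE positions starting at 512) into a 32-bit register index plus a bit offset within that register. A self-contained check of that mapping, with the relevant macros copied locally:

```c
#include <assert.h>

/* Copied from the header above. */
#define GCI_REG_WIDTH		32
#define GCI_LTE_BEGIN		512
#define GCI_LTE_FRAMESYNC_POS	(GCI_LTE_BEGIN + 0)
#define GCI_LTE_AUXRXDVALID_POS	(GCI_LTE_BEGIN + 56)

/* Reg index corresponding to ECI bit no x, and its offset in that word. */
#define GCI_REGIDX(x)		((x) / GCI_REG_WIDTH)
#define GCI_BITOFFSET(x)	((x) % GCI_REG_WIDTH)
```

So LTE frame-sync (bit 512) lands at register 16, bit 0, and aux-RX-dvalid (bit 568) at register 17, bit 24.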
+#ifdef REROUTE_OOBINT
+#define CC_OOB 0x0
+#define M2MDMA_OOB 0x1
+#define PMU_OOB 0x2
+#define D11_OOB 0x3
+#define SDIOD_OOB 0x4
+#define PMU_OOB_BIT (0x10 | PMU_OOB)
+#endif /* REROUTE_OOBINT */
+
+
#endif /* _siutils_h_ */
diff --git a/drivers/net/wireless/bcmdhd/include/spid.h b/drivers/net/wireless/bcmdhd/include/spid.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/trxhdr.h b/drivers/net/wireless/bcmdhd/include/trxhdr.h
old mode 100755
new mode 100644
diff --git a/drivers/net/wireless/bcmdhd/include/typedefs.h b/drivers/net/wireless/bcmdhd/include/typedefs.h
old mode 100755
new mode 100644
index 473ed9e..ce593f3
--- a/drivers/net/wireless/bcmdhd/include/typedefs.h
+++ b/drivers/net/wireless/bcmdhd/include/typedefs.h
@@ -18,7 +18,7 @@
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
- * $Id: typedefs.h 453696 2014-02-06 01:10:20Z $
+ * $Id: typedefs.h 473326 2014-04-29 00:37:35Z $
*/
#ifndef _TYPEDEFS_H_
@@ -86,7 +86,6 @@
#define TYPEDEF_ULONG
#endif
-
/*
* If this is either a Linux hybrid build or the per-port code of a hybrid build
* then use the Linux header files to get some of the typedefs. Otherwise, define
@@ -116,8 +115,6 @@
#endif /* !defined(LINUX_HYBRID) || defined(LINUX_PORT) */
-
-
/* Do not support the (u)int64 types with strict ansi for GNU C */
#if defined(__GNUC__) && defined(__STRICT_ANSI__)
#define TYPEDEF_INT64
@@ -149,7 +146,6 @@
#else
-
#include <sys/types.h>
#endif /* linux && __KERNEL__ */
@@ -157,7 +153,6 @@
#endif
-
/* use the default typedefs in the next section of this file */
#define USE_TYPEDEF_DEFAULTS
diff --git a/drivers/net/wireless/bcmdhd/include/wlfc_proto.h b/drivers/net/wireless/bcmdhd/include/wlfc_proto.h
old mode 100755
new mode 100644
index 95edeec..937b86d
--- a/drivers/net/wireless/bcmdhd/include/wlfc_proto.h
+++ b/drivers/net/wireless/bcmdhd/include/wlfc_proto.h
@@ -1,12 +1,12 @@
/*
* Copyright (C) 1999-2014, Broadcom Corporation
-*
+*
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
-*
+*
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -14,11 +14,11 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
-*
+*
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
-* $Id: wlfc_proto.h 431159 2013-10-22 19:40:51Z $
+* $Id: wlfc_proto.h 455301 2014-02-13 12:42:13Z $
*
*/
#ifndef __wlfc_proto_definitions_h__
@@ -99,6 +99,9 @@
#define WLFC_CTL_TYPE_TRANS_ID 18
#define WLFC_CTL_TYPE_COMP_TXSTATUS 19
+#define WLFC_CTL_TYPE_TID_OPEN 20
+#define WLFC_CTL_TYPE_TID_CLOSE 21
+
#define WLFC_CTL_TYPE_FILLER 255
@@ -124,6 +127,7 @@
#define WLFC_PKTFLAG_PKTFROMHOST 0x01
#define WLFC_PKTFLAG_PKT_REQUESTED 0x02
+#define WLFC_PKTFLAG_PKT_FORCELOWRATE 0x04 /* force low rate for this packet */
#define WL_TXSTATUS_STATUS_MASK 0xff /* allow 8 bits */
#define WL_TXSTATUS_STATUS_SHIFT 24
@@ -214,12 +218,6 @@
/* b[7:5] -reuse guard, b[4:0] -value */
#define WLFC_MAC_DESC_GET_LOOKUP_INDEX(x) ((x) & 0x1f)
-#define WLFC_PKTFLAG_SET_PKTREQUESTED(x) (x) |= \
- (WLFC_PKTFLAG_PKT_REQUESTED << WL_TXSTATUS_FLAGS_SHIFT)
-
-#define WLFC_PKTFLAG_CLR_PKTREQUESTED(x) (x) &= \
- ~(WLFC_PKTFLAG_PKT_REQUESTED << WL_TXSTATUS_FLAGS_SHIFT)
-
#define WLFC_MAX_PENDING_DATALEN 120
diff --git a/drivers/net/wireless/bcmdhd/include/wlioctl.h b/drivers/net/wireless/bcmdhd/include/wlioctl.h
old mode 100755
new mode 100644
index 3fa1058..33da2ea
--- a/drivers/net/wireless/bcmdhd/include/wlioctl.h
+++ b/drivers/net/wireless/bcmdhd/include/wlioctl.h
@@ -5,13 +5,13 @@
* Definitions subject to change without notice.
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -19,12 +19,12 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: wlioctl.h 450748 2014-01-23 00:49:46Z $
+ * $Id: wlioctl.h 490639 2014-07-11 08:31:53Z $
*/
#ifndef _wlioctl_h_
@@ -42,12 +42,15 @@
#include <bcmwifi_rates.h>
#include <devctrl_if/wlioctl_defs.h>
+
#include <bcm_mpool_pub.h>
#include <bcmcdc.h>
+
+
#ifndef INTF_NAME_SIZ
#define INTF_NAME_SIZ 16
#endif
@@ -65,6 +68,20 @@
chanspec_t list[1];
} chanspec_list_t;
+/* DFS Forced param */
+typedef struct wl_dfs_forced_params {
+ chanspec_t chspec;
+ uint16 version;
+ chanspec_list_t chspec_list;
+} wl_dfs_forced_t;
+
+#define DFS_PREFCHANLIST_VER 0x01
+#define WL_CHSPEC_LIST_FIXED_SIZE OFFSETOF(chanspec_list_t, list)
+#define WL_DFS_FORCED_PARAMS_FIXED_SIZE \
+ (WL_CHSPEC_LIST_FIXED_SIZE + OFFSETOF(wl_dfs_forced_t, chspec_list))
+#define WL_DFS_FORCED_PARAMS_MAX_SIZE \
+ (WL_DFS_FORCED_PARAMS_FIXED_SIZE + (WL_NUMCHANNELS * sizeof(chanspec_t)))
+
/* association decision information */
typedef struct {
bool assoc_approved; /* (re)association approved */
@@ -113,6 +130,49 @@
#include <packed_section_start.h>
+/* Flags for OBSS IOVAR Parameters */
+#define WL_OBSS_DYN_BWSW_FLAG_ACTIVITY_PERIOD (0x01)
+#define WL_OBSS_DYN_BWSW_FLAG_NOACTIVITY_PERIOD (0x02)
+#define WL_OBSS_DYN_BWSW_FLAG_NOACTIVITY_INCR_PERIOD (0x04)
+#define WL_OBSS_DYN_BWSW_FLAG_PSEUDO_SENSE_PERIOD (0x08)
+#define WL_OBSS_DYN_BWSW_FLAG_RX_CRS_PERIOD (0x10)
+#define WL_OBSS_DYN_BWSW_FLAG_DUR_THRESHOLD (0x20)
+#define WL_OBSS_DYN_BWSW_FLAG_TXOP_PERIOD (0x40)
+
+/* OBSS IOVAR Version information */
+#define WL_PROT_OBSS_CONFIG_PARAMS_VERSION 1
+typedef BWL_PRE_PACKED_STRUCT struct {
+ uint8 obss_bwsw_activity_cfm_count_cfg; /* configurable count in
+ * seconds before we confirm that OBSS is present and
+ * dynamically activate dynamic bwswitch.
+ */
+ uint8 obss_bwsw_no_activity_cfm_count_cfg; /* configurable count in
+ * seconds before we confirm that OBSS is GONE and
+ * dynamically start pseudo upgrade. If in pseudo sense time, we
+ * will see OBSS, [means that, we false detected that OBSS-is-gone
+ * in watchdog] this count will be incremented in steps of
+ * obss_bwsw_no_activity_cfm_count_incr_cfg for confirming OBSS
+ * detection again. Note that, at present, max 30seconds is
+ * allowed like this. [OBSS_BWSW_NO_ACTIVITY_MAX_INCR_DEFAULT]
+ */
+ uint8 obss_bwsw_no_activity_cfm_count_incr_cfg; /* see above
+ */
+ uint16 obss_bwsw_pseudo_sense_count_cfg; /* number of msecs/cnt to be in
+ * pseudo state. This is used to sense/measure the stats from lq.
+ */
+ uint8 obss_bwsw_rx_crs_threshold_cfg; /* RX CRS default threshold */
+ uint8 obss_bwsw_dur_thres; /* OBSS dyn bwsw trigger/RX CRS Sec */
+ uint8 obss_bwsw_txop_threshold_cfg; /* TXOP default threshold */
+} BWL_POST_PACKED_STRUCT wlc_prot_dynbwsw_config_t;
+
+typedef BWL_PRE_PACKED_STRUCT struct {
+ uint32 version; /* version field */
+ uint32 config_mask;
+ uint32 reset_mask;
+ wlc_prot_dynbwsw_config_t config_params;
+} BWL_POST_PACKED_STRUCT obss_config_params_t;
+
+
/* Legacy structure to help keep backward compatible wl tool and tray app */
@@ -229,6 +289,15 @@
/* variable length Information Elements */
} wl_bss_info_t;
+#define WL_GSCAN_BSS_INFO_VERSION 1 /* current version of wl_gscan_bss_info struct */
+#define WL_GSCAN_INFO_FIXED_FIELD_SIZE (sizeof(wl_gscan_bss_info_t) - sizeof(wl_bss_info_t))
+
+typedef struct wl_gscan_bss_info {
+ uint32 timestamp[2];
+ wl_bss_info_t info;
+ /* variable length Information Elements */
+} wl_gscan_bss_info_t;
+
typedef struct wl_bsscfg {
uint32 bsscfg_idx;
@@ -314,6 +383,14 @@
uchar SSID[DOT11_MAX_SSID_LEN];
} wlc_ssid_t;
+typedef struct wlc_ssid_ext {
+ bool hidden;
+ uint16 flags;
+ uint8 SSID_len;
+ int8 rssi_thresh;
+ uchar SSID[DOT11_MAX_SSID_LEN];
+} wlc_ssid_ext_t;
+
#define MAX_PREFERRED_AP_NUM 5
typedef struct wlc_fastssidinfo {
@@ -394,6 +471,7 @@
/* size of wl_scan_params not including variable length array */
#define WL_SCAN_PARAMS_FIXED_SIZE 64
+#define WL_MAX_ROAMSCAN_DATSZ (WL_SCAN_PARAMS_FIXED_SIZE + (WL_NUMCHANNELS * sizeof(uint16)))
#define ISCAN_REQ_VERSION 1
@@ -418,9 +496,6 @@
/* size of wl_scan_results not including variable length array */
#define WL_SCAN_RESULTS_FIXED_SIZE (sizeof(wl_scan_results_t) - sizeof(wl_bss_info_t))
-/* Used in EXT_STA */
-#define DNGL_RXCTXT_SIZE 45
-
#define ESCAN_REQ_VERSION 1
@@ -443,6 +518,14 @@
#define WL_ESCAN_RESULTS_FIXED_SIZE (sizeof(wl_escan_result_t) - sizeof(wl_bss_info_t))
+typedef struct wl_gscan_result {
+ uint32 buflen;
+ uint32 version;
+ wl_gscan_bss_info_t bss_info[1];
+} wl_gscan_result_t;
+
+#define WL_GSCAN_RESULTS_FIXED_SIZE (sizeof(wl_gscan_result_t) - sizeof(wl_gscan_bss_info_t))
+
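`WL_GSCAN_RESULTS_FIXED_SIZE` follows the `*_FIXED_SIZE` idiom used throughout wlioctl.h: the result struct ends in a one-element array standing in for a variable-length tail, so the fixed prefix is `sizeof(struct)` minus `sizeof(one element)`. A toy illustration with invented struct names (the real `wl_gscan_result_t` carries more fields):

```c
#include <assert.h>
#include <stdint.h>

/* Invented stand-ins for wl_gscan_bss_info_t / wl_gscan_result_t. */
typedef struct toy_bss_info {
	uint32_t version;
	uint32_t length;
} toy_bss_info_t;

typedef struct toy_scan_result {
	uint32_t buflen;
	uint32_t version;
	toy_bss_info_t bss_info[1];	/* variable length in practice */
} toy_scan_result_t;

/* Fixed header size = whole struct minus the one-element placeholder. */
#define TOY_SCAN_RESULTS_FIXED_SIZE \
	(sizeof(toy_scan_result_t) - sizeof(toy_bss_info_t))
```

Callers then treat a buffer as the fixed header followed by `count` packed entries.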
/* incremental scan results struct */
typedef struct wl_iscan_results {
uint32 status;
@@ -566,6 +649,62 @@
*/
} wl_join_params_t;
+typedef struct wlc_roam_exp_params {
+ int8 a_band_boost_threshold;
+ int8 a_band_penalty_threshold;
+ uint8 a_band_boost_factor;
+ uint8 a_band_penalty_factor;
+ uint8 cur_bssid_boost;
+ int8 alert_roam_trigger_threshold;
+ uint16 a_band_max_boost;
+} wlc_roam_exp_params_t;
+
+#define ROAM_EXP_CFG_VERSION 1
+#define ROAM_EXP_ENABLE_FLAG (1 << 0)
+#define ROAM_EXP_CFG_PRESENT (1 << 1)
+typedef struct wl_roam_exp_cfg {
+ uint8 version;
+ uint8 flags;
+ uint16 reserved;
+ wlc_roam_exp_params_t params;
+} wl_roam_exp_cfg_t;
+
+typedef struct wl_bssid_pref_list {
+ struct ether_addr bssid;
+ /* Add this to modify rssi */
+ int8 rssi_factor;
+ int8 flags;
+} wl_bssid_pref_list_t;
+
+#define BSSID_PREF_LIST_VERSION 1
+#define ROAM_EXP_CLEAR_BSSID_PREF (1 << 0)
+typedef struct wl_bssid_pref_cfg {
+ uint8 version;
+ uint8 flags;
+ uint16 count;
+ wl_bssid_pref_list_t bssids[1];
+} wl_bssid_pref_cfg_t;
+
+#define SSID_WHITELIST_VERSION 1
+#define ROAM_EXP_CLEAR_SSID_WHITELIST (1 << 0)
+/* Roam SSID whitelist, ssids in this list are ok to */
+/* be considered as targets to join when considering a roam */
+typedef struct wl_ssid_whitelist {
+ uint8 version;
+ uint8 flags;
+ uint8 ssid_count;
+ uint8 reserved;
+ wlc_ssid_t ssids[1];
+} wl_ssid_whitelist_t;
+
+#define ROAM_EXP_EVENT_VERSION 1
+typedef struct wl_roam_exp_event {
+ uint8 version;
+ uint8 flags;
+ uint16 reserved;
+ wlc_ssid_t cur_ssid;
+} wl_roam_exp_event_t;
+
#define WL_JOIN_PARAMS_FIXED_SIZE (OFFSETOF(wl_join_params_t, params) + \
WL_ASSOC_PARAMS_FIXED_SIZE)
/* scan params for extended join */
@@ -617,6 +756,7 @@
cca_congest_t secs[1]; /* Data */
} cca_congest_channel_req_t;
+
/* interference sources */
enum interference_source {
ITFR_NONE = 0, /* interference */
@@ -820,9 +960,11 @@
uint16 buf[1];
} srom_rw_t;
+#define CISH_FLAG_PCIECIS (1 << 15) /* write CIS format bit for PCIe CIS */
/* similar cis (srom or otp) struct [iovar: may not be aligned] */
typedef struct {
- uint32 source; /* cis source */
+ uint16 source; /* cis source */
+ uint16 flags; /* flags */
uint32 byteoff; /* byte offset */
uint32 nbytes; /* number of bytes */
/* data follows here */
@@ -882,6 +1024,15 @@
} link_val_t;
+#define WL_PM_MUTE_TX_VER 1
+
+typedef struct wl_pm_mute_tx {
+ uint16 version; /* version */
+ uint16 len; /* length */
+ uint16 deadline; /* deadline timer (in milliseconds) */
+ uint8 enable; /* set to 1 to enable mode; set to 0 to disable it */
+} wl_pm_mute_tx_t;
+
typedef struct {
uint16 ver; /* version of this struct */
@@ -893,15 +1044,15 @@
wl_rateset_t rateset; /* rateset in use */
uint32 in; /* seconds elapsed since associated */
uint32 listen_interval_inms; /* Min Listen interval in ms for this STA */
- uint32 tx_pkts; /* # of packets transmitted */
- uint32 tx_failures; /* # of packets failed */
+ uint32 tx_pkts; /* # of user packets transmitted (unicast) */
+ uint32 tx_failures; /* # of user packets failed */
uint32 rx_ucast_pkts; /* # of unicast packets received */
uint32 rx_mcast_pkts; /* # of multicast packets received */
- uint32 tx_rate; /* Rate of last successful tx frame */
+ uint32 tx_rate; /* Rate used by last tx frame */
uint32 rx_rate; /* Rate of last successful rx frame */
uint32 rx_decrypt_succeeds; /* # of packet decrypted successfully */
uint32 rx_decrypt_failures; /* # of packet decrypted unsuccessfully */
- uint32 tx_tot_pkts; /* # of tx pkts (ucast + mcast) */
+ uint32 tx_tot_pkts; /* # of user tx pkts (ucast + mcast) */
uint32 rx_tot_pkts; /* # of data packets recvd (uni + mcast) */
uint32 tx_mcast_pkts; /* # of mcast pkts txed */
uint64 tx_tot_bytes; /* data bytes txed (ucast + mcast) */
@@ -917,13 +1068,28 @@
uint16 aid; /* association ID */
uint16 ht_capabilities; /* advertised ht caps */
uint16 vht_flags; /* converted vht flags */
- uint32 tx_pkts_retried; /* # of frames where a retry was necessary */
- uint32 tx_pkts_retry_exhausted; /* # of frames where a retry was
- * exhausted
- */
+ uint32 tx_pkts_retried; /* # of frames where a retry was
+ * necessary
+ */
+ uint32 tx_pkts_retry_exhausted; /* # of user frames where a retry
+ * was exhausted
+ */
int8 rx_lastpkt_rssi[WL_STA_ANT_MAX]; /* Per antenna RSSI of last
- * received data frame.
- */
+ * received data frame.
+ */
+ /* TX WLAN retry/failure statistics:
+ * Separated for host requested frames and WLAN locally generated frames.
+ * Include unicast frame only where the retries/failures can be counted.
+ */
+ uint32 tx_pkts_total; /* # user frames sent successfully */
+ uint32 tx_pkts_retries; /* # user frames retries */
+ uint32 tx_pkts_fw_total; /* # FW generated sent successfully */
+ uint32 tx_pkts_fw_retries; /* # retries for FW generated frames */
+ uint32 tx_pkts_fw_retry_exhausted; /* # FW generated where a retry
+ * was exhausted
+ */
+ uint32 rx_pkts_retried; /* # rx with retry bit set */
+ uint32 tx_rate_fallback; /* lowest fallback TX rate */
} sta_info_t;
#define WL_OLD_STAINFO_SIZE OFFSETOF(sta_info_t, tx_tot_pkts)
@@ -960,10 +1126,10 @@
} channel_info_t;
/* For ioctls that take a list of MAC addresses */
-struct maclist {
+typedef struct maclist {
uint count; /* number of MAC addresses */
struct ether_addr ea[1]; /* variable length array of MAC addresses */
-};
+} maclist_t;
/* get pkt count struct passed through ioctl */
typedef struct get_pktcnt {
@@ -1031,6 +1197,24 @@
uint needed; /* bytes needed (optional) */
} wl_ioctl_t;
+#ifdef CONFIG_COMPAT
+typedef struct compat_wl_ioctl {
+ uint cmd; /* common ioctl definition */
+ uint32 buf; /* pointer to user buffer */
+ uint len; /* length of user buffer */
+ uint8 set; /* 1=set IOCTL; 0=query IOCTL */
+ uint used; /* bytes read or written (optional) */
+ uint needed; /* bytes needed (optional) */
+} compat_wl_ioctl_t;
+#endif /* CONFIG_COMPAT */
+
+#define WL_NUM_RATES_CCK 4 /* 1, 2, 5.5, 11 Mbps */
+#define WL_NUM_RATES_OFDM 8 /* 6, 9, 12, 18, 24, 36, 48, 54 Mbps SISO/CDD */
+#define WL_NUM_RATES_MCS_1STREAM 8 /* MCS 0-7 1-stream rates - SISO/CDD/STBC/MCS */
+#define WL_NUM_RATES_EXTRA_VHT 2 /* Additional VHT 11AC rates */
+#define WL_NUM_RATES_VHT 10
+#define WL_NUM_RATES_MCS32 1
+
/*
* Structure for passing hardware and software
@@ -1435,13 +1619,6 @@
} tx_power_legacy2_t;
/* TX Power index defines */
-#define WL_NUM_RATES_CCK 4 /* 1, 2, 5.5, 11 Mbps */
-#define WL_NUM_RATES_OFDM 8 /* 6, 9, 12, 18, 24, 36, 48, 54 Mbps SISO/CDD */
-#define WL_NUM_RATES_MCS_1STREAM 8 /* MCS 0-7 1-stream rates - SISO/CDD/STBC/MCS */
-#define WL_NUM_RATES_EXTRA_VHT 2 /* Additional VHT 11AC rates */
-#define WL_NUM_RATES_VHT 10
-#define WL_NUM_RATES_MCS32 1
-
#define WLC_NUM_RATES_CCK WL_NUM_RATES_CCK
#define WLC_NUM_RATES_OFDM WL_NUM_RATES_OFDM
#define WLC_NUM_RATES_MCS_1_STREAM WL_NUM_RATES_MCS_1STREAM
@@ -1468,8 +1645,25 @@
#define WL_TXPPR_VERSION 1
#define WL_TXPPR_LENGTH (sizeof(wl_txppr_t))
-#define TX_POWER_T_VERSION 44
+#define TX_POWER_T_VERSION 45
+/* number of ppr serialization buffers, it should be reg, board and target */
+#define WL_TXPPR_SER_BUF_NUM (3)
+typedef struct chanspec_txpwr_max {
+ chanspec_t chanspec; /* chanspec */
+ uint8 txpwr_max; /* max txpwr in all the rates */
+ uint8 padding;
+} chanspec_txpwr_max_t;
+
+typedef struct wl_chanspec_txpwr_max {
+ uint16 ver; /* version of this struct */
+ uint16 len; /* length in bytes of this structure */
+ uint32 count; /* number of elements of (chanspec, txpwr_max) pair */
+ chanspec_txpwr_max_t txpwr[1]; /* array of (chanspec, max_txpwr) pair */
+} wl_chanspec_txpwr_max_t;
+
+#define WL_CHANSPEC_TXPWR_MAX_VER 1
+#define WL_CHANSPEC_TXPWR_MAX_LEN (sizeof(wl_chanspec_txpwr_max_t))
typedef struct tx_inst_power {
uint8 txpwr_est_Pout[2]; /* Latest estimate for 2.4 and 5 Ghz */
@@ -1528,6 +1722,48 @@
uint8 octets[3];
};
+#define RATE_CCK_1MBPS 0
+#define RATE_CCK_2MBPS 1
+#define RATE_CCK_5_5MBPS 2
+#define RATE_CCK_11MBPS 3
+
+#define RATE_LEGACY_OFDM_6MBPS 0
+#define RATE_LEGACY_OFDM_9MBPS 1
+#define RATE_LEGACY_OFDM_12MBPS 2
+#define RATE_LEGACY_OFDM_18MBPS 3
+#define RATE_LEGACY_OFDM_24MBPS 4
+#define RATE_LEGACY_OFDM_36MBPS 5
+#define RATE_LEGACY_OFDM_48MBPS 6
+#define RATE_LEGACY_OFDM_54MBPS 7
+
+#define WL_BSSTRANS_RSSI_RATE_MAP_VERSION 1
+
+typedef struct wl_bsstrans_rssi {
+ int8 rssi_2g; /* RSSI in dbm for 2.4 G */
+ int8 rssi_5g; /* RSSI in dbm for 5G, unused for cck */
+} wl_bsstrans_rssi_t;
+
+#define RSSI_RATE_MAP_MAX_STREAMS 4 /* max streams supported */
+
+/* RSSI to rate mapping, all 20Mhz, no SGI */
+typedef struct wl_bsstrans_rssi_rate_map {
+ uint16 ver;
+ uint16 len; /* length of entire structure */
+ wl_bsstrans_rssi_t cck[WL_NUM_RATES_CCK]; /* 2.4G only */
+ wl_bsstrans_rssi_t ofdm[WL_NUM_RATES_OFDM]; /* 6 to 54mbps */
+ wl_bsstrans_rssi_t phy_n[RSSI_RATE_MAP_MAX_STREAMS][WL_NUM_RATES_MCS_1STREAM]; /* MCS0-7 */
+ wl_bsstrans_rssi_t phy_ac[RSSI_RATE_MAP_MAX_STREAMS][WL_NUM_RATES_VHT]; /* MCS0-9 */
+} wl_bsstrans_rssi_rate_map_t;
+
+#define WL_BSSTRANS_ROAMTHROTTLE_VERSION 1
+
+/* Configure number of scans allowed per throttle period */
+typedef struct wl_bsstrans_roamthrottle {
+ uint16 ver;
+ uint16 period;
+ uint16 scans_allowed;
+} wl_bsstrans_roamthrottle_t;
+
#define NFIFO 6 /* # tx/rx fifopairs */
#define NREINITREASONCOUNT 8
#define REINITREASONIDX(_x) (((_x) < NREINITREASONCOUNT) ? (_x) : 0)
@@ -2070,22 +2306,6 @@
} wl_delta_stats_t;
-/* structure to store per-rate rx statistics */
-typedef struct wl_scb_rx_rate_stats {
- uint32 rx1mbps[2]; /* packets rx at 1Mbps */
- uint32 rx2mbps[2]; /* packets rx at 2Mbps */
- uint32 rx5mbps5[2]; /* packets rx at 5.5Mbps */
- uint32 rx6mbps[2]; /* packets rx at 6Mbps */
- uint32 rx9mbps[2]; /* packets rx at 9Mbps */
- uint32 rx11mbps[2]; /* packets rx at 11Mbps */
- uint32 rx12mbps[2]; /* packets rx at 12Mbps */
- uint32 rx18mbps[2]; /* packets rx at 18Mbps */
- uint32 rx24mbps[2]; /* packets rx at 24Mbps */
- uint32 rx36mbps[2]; /* packets rx at 36Mbps */
- uint32 rx48mbps[2]; /* packets rx at 48Mbps */
- uint32 rx54mbps[2]; /* packets rx at 54Mbps */
-} wl_scb_rx_rate_stats_t;
-
typedef struct {
uint32 packets;
uint32 bytes;
@@ -2205,16 +2425,18 @@
uint8 enable; /* enable/disable */
};
-/* struct for per-tid, per-mode ampdu control */
-struct ampdu_tid_control_mode {
- struct ampdu_tid_control control[NUMPRIO]; /* tid will be 0xff for not used element */
- char mode_name[8]; /* supported mode : AIBSS */
+/* struct for ampdu tx/rx aggregation control */
+struct ampdu_aggr {
+	int8 aggr_override;	/* aggr overridden by dongle. Not to be set by host. */
+ uint16 conf_TID_bmap; /* bitmap of TIDs to configure */
+ uint16 enab_TID_bmap; /* enable/disable per TID */
};
/* structure for identifying ea/tid for sending addba/delba */
struct ampdu_ea_tid {
struct ether_addr ea; /* Station address */
uint8 tid; /* tid */
+ uint8 initiator; /* 0 is recipient, 1 is originator */
};
/* structure for identifying retry/tid for retry_limit_tid/rr_retry_limit_tid */
struct ampdu_retry_tid {
@@ -2222,35 +2444,6 @@
uint8 retry; /* retry value */
};
-/* structure for dpt iovars */
-typedef struct dpt_iovar {
- struct ether_addr ea; /* Station address */
- uint8 mode; /* mode: depends on iovar */
- uint32 pad; /* future */
-} dpt_iovar_t;
-
-#define DPT_FNAME_LEN 48 /* Max length of friendly name */
-
-typedef struct dpt_status {
- uint8 status; /* flags to indicate status */
- uint8 fnlen; /* length of friendly name */
- uchar name[DPT_FNAME_LEN]; /* friendly name */
- uint32 rssi; /* RSSI of the link */
- sta_info_t sta; /* sta info */
-} dpt_status_t;
-
-/* structure for dpt list */
-typedef struct dpt_list {
- uint32 num; /* number of entries in struct */
- dpt_status_t status[1]; /* per station info */
-} dpt_list_t;
-
-/* structure for dpt friendly name */
-typedef struct dpt_fname {
- uint8 len; /* length of friendly name */
- uchar name[DPT_FNAME_LEN]; /* friendly name */
-} dpt_fname_t;
-
#define BDD_FNAME_LEN 32 /* Max length of friendly name */
typedef struct bdd_fname {
uint8 len; /* length of friendly name */
@@ -2405,6 +2598,13 @@
#define PFN_PARTIAL_SCAN_BIT 0
#define PFN_PARTIAL_SCAN_MASK 1
+#define PFN_SWC_RSSI_WINDOW_MAX 8
+#define PFN_SWC_MAX_NUM_APS 16
+#define PFN_HOTLIST_MAX_NUM_APS 64
+
+#define MAX_EPNO_HIDDEN_SSID 8
+#define MAX_WHITELIST_SSID 2
+
/* PFN network info structure */
typedef struct wl_pfn_subnet_info {
struct ether_addr BSSID;
@@ -2435,6 +2635,7 @@
wl_pfn_lnet_info_t netinfo[1];
} wl_pfn_lscanresults_t;
+/* this is used to report on 1 or more pfn scan results */
typedef struct wl_pfn_scanresults {
uint32 version;
uint32 status;
@@ -2442,6 +2643,30 @@
wl_pfn_net_info_t netinfo[1];
} wl_pfn_scanresults_t;
+typedef struct wl_pfn_significant_net {
+ uint16 flags;
+ uint16 channel;
+ struct ether_addr BSSID;
+ int8 rssi[PFN_SWC_RSSI_WINDOW_MAX];
+} wl_pfn_significant_net_t;
+
+typedef struct wl_pfn_swc_results {
+ uint32 version;
+ uint32 pkt_count;
+ uint32 total_count;
+ wl_pfn_significant_net_t list[1];
+} wl_pfn_swc_results_t;
+
+/* used to report exactly one scan result */
+/* plus reports detailed scan info in bss_info */
+typedef struct wl_pfn_scanresult {
+ uint32 version;
+ uint32 status;
+ uint32 count;
+ wl_pfn_net_info_t netinfo;
+ wl_bss_info_t bss_info;
+} wl_pfn_scanresult_t;
+
/* PFN data structure */
typedef struct wl_pfn_param {
int32 version; /* PNO parameters version */
@@ -2470,6 +2695,13 @@
/* Bit4: suppress_lost, Bit3: suppress_found */
uint16 flags;
} wl_pfn_bssid_t;
+
+typedef struct wl_pfn_significant_bssid {
+ struct ether_addr macaddr;
+ int8 rssi_low_threshold;
+ int8 rssi_high_threshold;
+} wl_pfn_significant_bssid_t;
+
#define WL_PFN_SUPPRESSFOUND_MASK 0x08
#define WL_PFN_SUPPRESSLOST_MASK 0x10
#define WL_PFN_RSSI_MASK 0xff00
@@ -2481,13 +2713,54 @@
uint16 channel_list[WL_NUMCHANNELS];
uint32 flags;
} wl_pfn_cfg_t;
+
+#define CH_BUCKET_REPORT_REGULAR 0
+#define CH_BUCKET_REPORT_FULL_RESULT 2
+#define CH_BUCKET_GSCAN 4
+
+typedef struct wl_pfn_gscan_ch_bucket_cfg {
+ uint8 bucket_end_index;
+ uint8 bucket_freq_multiple;
+ uint8 flag;
+ uint8 reserved;
+ uint16 repeat;
+ uint16 max_freq_multiple;
+} wl_pfn_gscan_ch_bucket_cfg_t;
+
+#define GSCAN_SEND_ALL_RESULTS_MASK (1 << 0)
+#define GSCAN_CFG_FLAGS_ONLY_MASK (1 << 7)
+#define WL_GSCAN_CFG_VERSION 2
+typedef struct wl_pfn_gscan_cfg {
+ uint16 version;
+ /* BIT0 1 = send probes/beacons to HOST
+ * BIT1 Reserved
+ * BIT2 Reserved
+ * Add any future flags here
+ * BIT7 1 = no other useful cfg sent
+ */
+ uint8 flags;
+ /* Buffer filled threshold in % to generate an event */
+ uint8 buffer_threshold;
+	/* No. of BSSIDs with a "change" needed to generate an event;
+	 * a change means RSSI crossed a threshold or the AP was lost
+	 */
+ uint8 swc_nbssid_threshold;
+ /* Max=8 (for now) Size of rssi cache buffer */
+ uint8 swc_rssi_window_size;
+ uint8 count_of_channel_buckets;
+ uint8 retry_threshold;
+ uint16 lost_ap_window;
+ wl_pfn_gscan_ch_bucket_cfg_t channel_bucket[1];
+} wl_pfn_gscan_cfg_t;
+
#define WL_PFN_REPORT_ALLNET 0
#define WL_PFN_REPORT_SSIDNET 1
#define WL_PFN_REPORT_BSSIDNET 2
#define WL_PFN_CFG_FLAGS_PROHIBITED 0x00000001 /* Accept and use prohibited channels */
#define WL_PFN_CFG_FLAGS_RESERVED 0xfffffffe /* Remaining reserved for future use */
-
+#define WL_PFN_SSID_A_BAND_TRIG 0x20
+#define WL_PFN_SSID_BG_BAND_TRIG 0x40
typedef struct wl_pfn {
wlc_ssid_t ssid; /* ssid name and its length */
int32 flags; /* bit2: hidden */
@@ -2504,6 +2777,55 @@
wl_pfn_t pfn[1];
} wl_pfn_list_t;
+#define PFN_SSID_EXT_VERSION 2
+
+typedef struct wl_pfn_ext {
+ uint8 flags;
+ int8 rssi_thresh; /* RSSI threshold, track only if RSSI > threshold */
+ uint16 wpa_auth; /* Match the wpa auth type defined in wlioctl_defs.h */
+ uint8 ssid[DOT11_MAX_SSID_LEN];
+ uint8 ssid_len;
+ uint8 pad;
+} wl_pfn_ext_t;
+
+typedef struct wl_pfn_ext_list {
+ uint16 version;
+ uint16 count;
+ wl_pfn_ext_t pfn_ext[1];
+} wl_pfn_ext_list_t;
+
+#define WL_PFN_SSID_EXT_FOUND 0x1
+#define WL_PFN_SSID_EXT_LOST 0x2
+typedef struct wl_pfn_result_ssid {
+ uint8 flags;
+ int8 rssi;
+ /* channel number */
+ uint16 channel;
+ /* Assume idx in order of cfg */
+ uint16 index;
+ struct ether_addr bssid;
+} wl_pfn_result_ssid_crc32_t;
+
+typedef struct wl_pfn_ssid_ext_result {
+ uint16 version;
+ uint16 count;
+ wl_pfn_result_ssid_crc32_t net[1];
+} wl_pfn_ssid_ext_result_t;
+
+#define PFN_EXT_AUTH_CODE_OPEN 1 /* open */
+#define PFN_EXT_AUTH_CODE_PSK 2 /* WPA_PSK or WPA2PSK */
+#define PFN_EXT_AUTH_CODE_EAPOL 4 /* any EAPOL */
+
+#define WL_PFN_MAC_OUI_ONLY_MASK 1
+#define WL_PFN_SET_MAC_UNASSOC_MASK 2
+/* To configure pfn_macaddr */
+typedef struct wl_pfn_macaddr_cfg {
+ uint8 version;
+ uint8 flags;
+ struct ether_addr macaddr;
+} wl_pfn_macaddr_cfg_t;
+#define WL_PFN_MACADDR_CFG_VER 1
+
typedef BWL_PRE_PACKED_STRUCT struct pfn_olmsg_params_t {
wlc_ssid_t ssid;
uint32 cipher_type;
@@ -2536,6 +2858,98 @@
uint16 interval; /* extended listen interval */
} wl_p2po_listen_t;
+/* GAS state machine tunable parameters. Structure field values of 0 means use the default. */
+typedef struct wl_gas_config {
+ uint16 max_retransmit; /* Max # of firmware/driver retransmits on no Ack
+ * from peer (on top of the ucode retries).
+ */
+ uint16 response_timeout; /* Max time to wait for a GAS-level response
+ * after sending a packet.
+ */
+ uint16 max_comeback_delay; /* Max GAS response comeback delay.
+ * Exceeding this fails the GAS exchange.
+ */
+ uint16 max_retries; /* Max # of GAS state machine retries on failure
+ * of a GAS frame exchange.
+ */
+} wl_gas_config_t;
+
+/* P2P Find Offload parameters */
+typedef BWL_PRE_PACKED_STRUCT struct wl_p2po_find_config {
+ uint16 version; /* Version of this struct */
+ uint16 length; /* sizeof(wl_p2po_find_config_t) */
+ int32 search_home_time; /* P2P search state home time when concurrent
+ * connection exists. -1 for default.
+ */
+ uint8 num_social_channels;
+ /* Number of social channels up to WL_P2P_SOCIAL_CHANNELS_MAX.
+ * 0 means use default social channels.
+ */
+ uint8 flags;
+ uint16 social_channels[1]; /* Variable length array of social channels */
+} BWL_POST_PACKED_STRUCT wl_p2po_find_config_t;
+#define WL_P2PO_FIND_CONFIG_VERSION 2 /* value for version field */
+
+/* wl_p2po_find_config_t flags */
+#define P2PO_FIND_FLAG_SCAN_ALL_APS 0x01 /* Whether to scan for all APs in the p2po_find
+ * periodic scans of all channels.
+ * 0 means scan for only P2P devices.
+ * 1 means scan for P2P devices plus non-P2P APs.
+ */
+
+
+/* For adding a WFDS service to seek */
+typedef BWL_PRE_PACKED_STRUCT struct {
+ uint32 seek_hdl; /* unique id chosen by host */
+ uint8 addr[6]; /* Seek service from a specific device with this
+ * MAC address, all 1's for any device.
+ */
+ uint8 service_hash[P2P_WFDS_HASH_LEN];
+ uint8 service_name_len;
+ uint8 service_name[MAX_WFDS_SEEK_SVC_NAME_LEN];
+ /* Service name to seek, not null terminated */
+ uint8 service_info_req_len;
+ uint8 service_info_req[1]; /* Service info request, not null terminated.
+ * Variable length specified by service_info_req_len.
+ * Maximum length is MAX_WFDS_SEEK_SVC_INFO_LEN.
+ */
+} BWL_POST_PACKED_STRUCT wl_p2po_wfds_seek_add_t;
+
+/* For deleting a WFDS service to seek */
+typedef BWL_PRE_PACKED_STRUCT struct {
+ uint32 seek_hdl; /* delete service specified by id */
+} BWL_POST_PACKED_STRUCT wl_p2po_wfds_seek_del_t;
+
+
+/* For adding a WFDS service to advertise */
+typedef BWL_PRE_PACKED_STRUCT struct {
+ uint32 advertise_hdl; /* unique id chosen by host */
+ uint8 service_hash[P2P_WFDS_HASH_LEN];
+ uint32 advertisement_id;
+ uint16 service_config_method;
+ uint8 service_name_len;
+ uint8 service_name[MAX_WFDS_SVC_NAME_LEN];
+ /* Service name , not null terminated */
+ uint8 service_status;
+ uint16 service_info_len;
+ uint8 service_info[1]; /* Service info, not null terminated.
+ * Variable length specified by service_info_len.
+ * Maximum length is MAX_WFDS_ADV_SVC_INFO_LEN.
+ */
+} BWL_POST_PACKED_STRUCT wl_p2po_wfds_advertise_add_t;
+
+/* For deleting a WFDS service to advertise */
+typedef BWL_PRE_PACKED_STRUCT struct {
+ uint32 advertise_hdl; /* delete service specified by hdl */
+} BWL_POST_PACKED_STRUCT wl_p2po_wfds_advertise_del_t;
+
+/* P2P Offload discovery mode for the p2po_state iovar */
+typedef enum {
+ WL_P2PO_DISC_STOP,
+ WL_P2PO_DISC_LISTEN,
+ WL_P2PO_DISC_DISCOVERY
+} disc_mode_t;
+
/* ANQP offload */
#define ANQPO_MAX_QUERY_SIZE 256
@@ -2573,6 +2987,50 @@
struct ether_addr bssid[1]; /* max ANQPO_MAX_IGNORE_BSSID */
} wl_anqpo_ignore_bssid_list_t;
+#define ANQPO_MAX_PFN_HS 16
+#define ANQPO_MAX_OI_LENGTH 8
+typedef struct
+{
+ uint8 length;
+ uint8 data[ANQPO_MAX_OI_LENGTH];
+} wl_anqpo_oi_t;
+
+#define ANQPO_MAX_OI 16
+typedef struct
+{
+ uint32 numOi;
+ wl_anqpo_oi_t oi[ANQPO_MAX_OI];
+} wl_anqpo_roaming_consortium_t;
+
+#define ANQPO_MAX_REALM_LENGTH 255
+typedef struct
+{
+ uint8 length;
+ uint8 data[ANQPO_MAX_REALM_LENGTH + 1]; /* null terminated */
+} wl_anqpo_realm_data_t;
+
+#define ANQPO_MCC_LENGTH 3
+#define ANQPO_MNC_LENGTH 3
+typedef struct
+{
+ char mcc[ANQPO_MCC_LENGTH + 1];
+ char mnc[ANQPO_MNC_LENGTH + 1];
+} wl_anqpo_plmn_t;
+
+typedef struct {
+ uint32 version;
+ uint32 id;
+ wl_anqpo_plmn_t plmn;
+ wl_anqpo_realm_data_t realm;
+ wl_anqpo_roaming_consortium_t rc;
+} wl_anqpo_pfn_hs_t;
+
+typedef struct {
+ bool is_clear; /* set to clear list (not used on GET) */
+	uint16 count;	/* number of preferred hotspots in list */
+ wl_anqpo_pfn_hs_t hs[]; /* max ANQPO_MAX_PFN_HS */
+} wl_anqpo_pfn_hs_list_t;
+
struct toe_ol_stats_t {
/* Num of tx packets that don't need to be checksummed */
@@ -2652,308 +3110,6 @@
#define WL_KEEP_ALIVE_FIXED_LEN OFFSETOF(wl_keep_alive_pkt_t, data)
-typedef struct awdl_config_params {
- uint32 version;
- uint8 awdl_chan; /* awdl channel */
- uint8 guard_time; /* Guard Time */
- uint16 aw_period; /* AW interval period */
- uint16 aw_cmn_length; /* Radio on Time AW */
- uint16 action_frame_period; /* awdl action frame period */
- uint16 awdl_pktlifetime; /* max packet life time in msec for awdl action frames */
- uint16 awdl_maxnomaster; /* max master missing time */
- uint16 awdl_extcount; /* Max extended period count for traffic */
- uint16 aw_ext_length; /* AW ext period */
- uint16 awdl_nmode; /* Operation mode of awdl interface; * 0 - Legacy mode
- * 1 - 11n rate only * 2 - 11n + ampdu rx/tx
- */
- struct ether_addr ea; /* destination bcast/mcast address to which action frame
- * need to be sent
- */
-} awdl_config_params_t;
-
-typedef struct wl_awdl_action_frame {
- uint16 len_bytes;
- uint8 awdl_action_frame_data[1];
-} wl_awdl_action_frame_t;
-
-#define WL_AWDL_ACTION_FRAME_FIXED_LEN OFFSETOF(wl_awdl_action_frame_t, awdl_sync_frame)
-
-typedef struct awdl_peer_node {
- uint32 type_state; /* Master, slave , etc.. */
- uint16 aw_counter; /* avail window counter */
- int8 rssi; /* rssi last af was received at */
- int8 last_rssi; /* rssi in the last AF */
- uint16 tx_counter;
- uint16 tx_delay; /* ts_hw - ts_fw */
- uint16 period_tu;
- uint16 aw_period;
- uint16 aw_cmn_length;
- uint16 aw_ext_length;
- uint32 self_metrics; /* Election Metric */
- uint32 top_master_metrics; /* Top Master Metric */
- struct ether_addr addr;
- struct ether_addr top_master;
- uint8 dist_top; /* Distance from Top */
-} awdl_peer_node_t;
-
-typedef struct awdl_peer_table {
- uint16 version;
- uint16 len;
- uint8 peer_nodes[1];
-} awdl_peer_table_t;
-
-typedef struct awdl_af_hdr {
- struct ether_addr dst_mac;
- uint8 action_hdr[4]; /* Category + OUI[3] */
-} awdl_af_hdr_t;
-
-typedef struct awdl_oui {
- uint8 oui[3]; /* default: 0x00 0x17 0xf2 */
- uint8 oui_type; /* AWDL: 0x08 */
-} awdl_oui_t;
-
-typedef BWL_PRE_PACKED_STRUCT struct awdl_hdr {
- uint8 type; /* 0x08 AWDL */
- uint8 version;
- uint8 sub_type; /* Sub type */
- uint8 rsvd; /* Reserved */
- uint32 phy_timestamp; /* PHY Tx time */
- uint32 fw_timestamp; /* Target Tx time */
-} BWL_POST_PACKED_STRUCT awdl_hdr_t;
-
-typedef BWL_PRE_PACKED_STRUCT struct awdl_oob_af_params {
- struct ether_addr bssid;
- struct ether_addr dst_mac;
- uint32 channel;
- uint32 dwell_time;
- uint32 flags;
- uint32 pkt_lifetime;
- uint32 tx_rate;
- uint32 max_retries; /* for unicast frames only */
- uint16 payload_len;
- uint8 payload[1]; /* complete AF payload */
-} BWL_POST_PACKED_STRUCT awdl_oob_af_params_t;
-
-typedef BWL_PRE_PACKED_STRUCT struct awdl_sync_params {
- uint8 type; /* Type */
- uint16 param_len; /* sync param length */
- uint8 tx_chan; /* tx channel */
- uint16 tx_counter; /* tx down counter */
- uint8 master_chan; /* master home channel */
- uint8 guard_time; /* Gaurd Time */
- uint16 aw_period; /* AW period */
- uint16 action_frame_period; /* awdl action frame period */
- uint16 awdl_flags; /* AWDL Flags */
- uint16 aw_ext_length; /* AW extention len */
- uint16 aw_cmn_length; /* AW common len */
- uint16 aw_remaining; /* Remaining AW length */
- uint8 min_ext; /* Minimum Extention count */
- uint8 max_ext_multi; /* Max multicast Extention count */
- uint8 max_ext_uni; /* Max unicast Extention count */
- uint8 max_ext_af; /* Max af Extention count */
- struct ether_addr current_master; /* Current Master mac addr */
- uint8 presence_mode; /* Presence mode */
- uint8 reserved;
- uint16 aw_counter; /* AW seq# */
- uint16 ap_bcn_alignment_delta; /* AP Beacon alignment delta */
-} BWL_POST_PACKED_STRUCT awdl_sync_params_t;
-
-typedef BWL_PRE_PACKED_STRUCT struct awdl_channel_sequence {
- uint8 aw_seq_len; /* AW seq length */
- uint8 aw_seq_enc; /* AW seq encoding */
- uint8 aw_seq_duplicate_cnt; /* AW seq dupilcate count */
- uint8 seq_step_cnt; /* Seq spet count */
- uint16 seq_fill_chan; /* channel to fill in; 0xffff repeat current channel */
- uint8 chan_sequence[1]; /* Variable list of channel Sequence */
-} BWL_POST_PACKED_STRUCT awdl_channel_sequence_t;
-#define WL_AWDL_CHAN_SEQ_FIXED_LEN OFFSETOF(awdl_channel_sequence_t, chan_sequence)
-
-typedef BWL_PRE_PACKED_STRUCT struct awdl_election_info {
- uint8 election_flags; /* Election Flags */
- uint16 election_ID; /* Election ID */
- uint32 self_metrics;
-} BWL_POST_PACKED_STRUCT awdl_election_info_t;
-
-typedef BWL_PRE_PACKED_STRUCT struct awdl_election_tree_info {
- uint8 election_flags; /* Election Flags */
- uint16 election_ID; /* Election ID */
- uint32 self_metrics;
- int8 master_sync_rssi_thld;
- int8 slave_sync_rssi_thld;
- int8 edge_sync_rssi_thld;
- int8 close_range_rssi_thld;
- int8 mid_range_rssi_thld;
- uint8 max_higher_masters_close_range;
- uint8 max_higher_masters_mid_range;
- uint8 max_tree_depth;
- /* read only */
- struct ether_addr top_master; /* top Master mac addr */
- uint32 top_master_self_metric;
- uint8 current_tree_depth;
-} BWL_POST_PACKED_STRUCT awdl_election_tree_info_t;
-
-typedef BWL_PRE_PACKED_STRUCT struct awdl_election_params_tlv {
- uint8 type; /* Type */
- uint16 param_len; /* Election param length */
- uint8 election_flags; /* Election Flags */
- uint16 election_ID; /* Election ID */
- uint8 dist_top; /* Distance from Top */
- uint8 rsvd; /* Reserved */
- struct ether_addr top_master; /* Top Master mac addr */
- uint32 top_master_metrics;
- uint32 self_metrics;
- uint8 pad[2]; /* Padding */
-} BWL_POST_PACKED_STRUCT awdl_election_params_tlv_t;
-
-typedef struct awdl_payload {
- uint32 len; /* Payload length */
- uint8 payload[1]; /* Payload */
-} awdl_payload_t;
-
-typedef struct awdl_long_payload {
- uint8 long_psf_period; /* transmit every long_psf_perios AWs */
- uint8 long_psf_tx_offset; /* delay from aw_start */
- uint16 len; /* Payload length */
- uint8 payload[1]; /* Payload */
-} BWL_POST_PACKED_STRUCT awdl_long_payload_t;
-
-typedef BWL_PRE_PACKED_STRUCT struct awdl_opmode {
- uint8 mode; /* 0 - Auto; 1 - Fixed */
- uint8 role; /* 0 - slave; 1 - non-elect master; 2 - master */
- uint16 bcast_tu; /* Bcasting period(TU) for non-elect master */
- struct ether_addr master; /* Address of master to sync to */
- uint16 cur_bcast_tu; /* Current Bcasting Period(TU) */
-} BWL_PRE_PACKED_STRUCT awdl_opmode_t;
-
-typedef BWL_PRE_PACKED_STRUCT struct awdl_extcount {
- uint8 minExt; /* Min extension count */
- uint8 maxExtMulti; /* Max extension count for mcast packets */
- uint8 maxExtUni; /* Max extension count for unicast packets */
- uint8 maxAfExt; /* Max extension count */
-} BWL_PRE_PACKED_STRUCT awdl_extcount_t;
-
-/* peer add/del operation */
-typedef struct awdl_peer_op {
- uint8 version;
- uint8 opcode; /* see opcode definition */
- struct ether_addr addr;
- uint8 mode;
-} awdl_peer_op_t;
-
-/* peer op table */
-typedef struct awdl_peer_op_tbl {
- uint16 len; /* length */
- uint8 tbl[1]; /* Peer table */
-} awdl_peer_op_tbl_t;
-
-typedef BWL_PRE_PACKED_STRUCT struct awdl_peer_op_node {
- struct ether_addr addr;
- uint32 flags; /* Flags to indicate various states */
-} BWL_POST_PACKED_STRUCT awdl_peer_op_node_t;
-
-#define AWDL_PEER_OP_CUR_VER 0
-
-/* AWDL related statistics */
-typedef BWL_PRE_PACKED_STRUCT struct awdl_stats {
- uint32 afrx;
- uint32 aftx;
- uint32 datatx;
- uint32 datarx;
- uint32 txdrop;
- uint32 rxdrop;
- uint32 monrx;
- uint32 lostmaster;
- uint32 misalign;
- uint32 aws;
- uint32 aw_dur;
- uint32 debug;
- uint32 txsupr;
- uint32 afrxdrop;
- uint32 awdrop;
- uint32 noawchansw;
- uint32 rx80211;
- uint32 peeropdrop;
-} BWL_POST_PACKED_STRUCT awdl_stats_t;
-
-typedef BWL_PRE_PACKED_STRUCT struct awdl_uct_stats {
- uint32 aw_proc_in_aw_sched;
- uint32 aw_upd_in_pre_aw_proc;
- uint32 pre_aw_proc_in_aw_set;
- uint32 ignore_pre_aw_proc;
- uint32 miss_pre_aw_intr;
- uint32 aw_dur_zero;
- uint32 aw_sched;
- uint32 aw_proc;
- uint32 pre_aw_proc;
- uint32 not_init;
- uint32 null_awdl;
-} BWL_POST_PACKED_STRUCT awdl_uct_stats_t;
-
-typedef struct awdl_pw_opmode {
- struct ether_addr top_master; /* Peer mac addr */
- uint8 mode; /* 0 - normal; 1 - fast mode */
-} awdl_pw_opmode_t;
-
-/* i/f request */
-typedef struct wl_awdl_if {
- int32 cfg_idx;
- int32 up;
- struct ether_addr if_addr;
- struct ether_addr bssid;
-} wl_awdl_if_t;
-
-typedef struct _aw_start {
- uint8 role;
- struct ether_addr master;
- uint8 aw_seq_num;
-} aw_start_t;
-
-typedef struct _aw_extension_start {
- uint8 aw_ext_num;
-} aw_extension_start_t;
-
-typedef struct _awdl_peer_state {
- struct ether_addr peer;
- uint8 state;
-} awdl_peer_state_t;
-
-typedef struct _awdl_sync_state_changed {
- uint8 new_role;
- struct ether_addr master;
-} awdl_sync_state_changed_t;
-
-typedef struct _awdl_sync_state {
- uint8 role;
- struct ether_addr master;
- uint32 continuous_election_enable;
-} awdl_sync_state_t;
-
-typedef struct _awdl_aw_ap_alignment {
- uint32 enabled;
- int32 offset;
- uint32 align_on_dtim;
-} awdl_aw_ap_alignment_t;
-
-typedef struct _awdl_peer_stats {
- uint32 version;
- struct ether_addr address;
- uint8 clear;
- int8 rssi;
- int8 avg_rssi;
- uint8 txRate;
- uint8 rxRate;
- uint32 numTx;
- uint32 numTxRetries;
- uint32 numTxFailures;
-} awdl_peer_stats_t;
-
-#define MAX_NUM_AWDL_KEYS 4
-typedef struct _awdl_aes_key {
- uint32 version;
- int32 enable;
- struct ether_addr awdl_peer;
- uint8 keys[MAX_NUM_AWDL_KEYS][16];
-} awdl_aes_key_t;
/*
* Dongle pattern matching filter.
@@ -2978,7 +3134,8 @@
typedef enum wl_pkt_filter_type {
WL_PKT_FILTER_TYPE_PATTERN_MATCH=0, /* Pattern matching filter */
WL_PKT_FILTER_TYPE_MAGIC_PATTERN_MATCH=1, /* Magic packet match */
- WL_PKT_FILTER_TYPE_PATTERN_LIST_MATCH=2 /* A pattern list (match all to match filter) */
+ WL_PKT_FILTER_TYPE_PATTERN_LIST_MATCH=2, /* A pattern list (match all to match filter) */
+ WL_PKT_FILTER_TYPE_ENCRYPTED_PATTERN_MATCH=3, /* SECURE WOWL magic / net pattern match */
} wl_pkt_filter_type_t;
#define WL_PKT_FILTER_TYPE wl_pkt_filter_type_t
@@ -2989,14 +3146,22 @@
{ "MAGIC", WL_PKT_FILTER_TYPE_MAGIC_PATTERN_MATCH }, \
{ "PATLIST", WL_PKT_FILTER_TYPE_PATTERN_LIST_MATCH }
+/* Secure WOWL packets arrive encrypted and must be decrypted before filter matching */
+typedef struct wl_pkt_decrypter {
+ uint8* (*dec_cb)(void* dec_ctx, const void *sdu, int sending);
+ void* dec_ctx;
+} wl_pkt_decrypter_t;
+
/* Pattern matching filter. Specifies an offset within received packets to
* start matching, the pattern to match, the size of the pattern, and a bitmask
* that indicates which bits within the pattern should be matched.
*/
typedef struct wl_pkt_filter_pattern {
- uint32 offset; /* Offset within received packet to start pattern matching.
+ union {
+ uint32 offset; /* Offset within received packet to start pattern matching.
* Offset '0' is the first byte of the ethernet header.
*/
+ };
uint32 size_bytes; /* Size of the pattern. Bitmask must be the same size. */
uint8 mask_and_pattern[1]; /* Variable length mask and pattern data. mask starts
* at offset 0. Pattern immediately follows mask.
@@ -3122,6 +3287,13 @@
uint8 rssi_qdb; /* qdB portion of the computed rssi */
} wl_pkteng_stats_t;
+typedef struct wl_txcal_params {
+ wl_pkteng_t pkteng;
+ uint8 gidx_start;
+ int8 gidx_step;
+ uint8 gidx_stop;
+} wl_txcal_params_t;
+
typedef enum {
wowl_pattern_type_bitmap = 0,
@@ -3188,6 +3360,21 @@
*/
} wl_rssi_event_t;
+#define RSSI_MONITOR_VERSION 1
+#define RSSI_MONITOR_STOP (1 << 0)
+typedef struct wl_rssi_monitor_cfg {
+ uint8 version;
+ uint8 flags;
+ int8 max_rssi;
+ int8 min_rssi;
+} wl_rssi_monitor_cfg_t;
+
+typedef struct wl_rssi_monitor_evt {
+ uint8 version;
+ int8 cur_rssi;
+ uint16 pad;
+} wl_rssi_monitor_evt_t;
+
typedef struct wl_action_obss_coex_req {
uint8 info;
uint8 num;
@@ -3200,16 +3387,16 @@
#define WL_IOV_PKTQ_LOG_PRECS 16
-typedef struct {
+typedef BWL_PRE_PACKED_STRUCT struct {
uint32 num_addrs;
char addr_type[WL_IOV_MAC_PARAM_LEN];
struct ether_addr ea[WL_IOV_MAC_PARAM_LEN];
-} wl_iov_mac_params_t;
+} BWL_POST_PACKED_STRUCT wl_iov_mac_params_t;
/* This is extra info that follows wl_iov_mac_params_t */
-typedef struct {
+typedef BWL_PRE_PACKED_STRUCT struct {
uint32 addr_info[WL_IOV_MAC_PARAM_LEN];
-} wl_iov_mac_extra_params_t;
+} BWL_POST_PACKED_STRUCT wl_iov_mac_extra_params_t;
/* Combined structure */
typedef struct {
@@ -3442,6 +3629,45 @@
uint8 pad;
} nbr_element_t;
+
+typedef enum event_msgs_ext_command {
+ EVENTMSGS_NONE = 0,
+ EVENTMSGS_SET_BIT = 1,
+ EVENTMSGS_RESET_BIT = 2,
+ EVENTMSGS_SET_MASK = 3
+} event_msgs_ext_command_t;
+
+#define EVENTMSGS_VER 1
+#define EVENTMSGS_EXT_STRUCT_SIZE OFFSETOF(eventmsgs_ext_t, mask[0])
+
+/* len - for SET: the mask size from the application to the firmware;
+ *       for GET: the actual firmware mask size.
+ * maxgetsize - only used for GET; indicates the max mask size that the
+ *       application can read from the firmware.
+ */
+typedef struct eventmsgs_ext
+{
+ uint8 ver;
+ uint8 command;
+ uint8 len;
+ uint8 maxgetsize;
+ uint8 mask[1];
+} eventmsgs_ext_t;
+
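The `len`/`maxgetsize` semantics above determine how much storage a caller must reserve: the fixed header is `OFFSETOF(eventmsgs_ext_t, mask[0])` bytes, followed by the mask itself. A minimal standalone sketch of building a SET_MASK request (mirroring the struct with stdint types; this helper is illustrative, not the driver's actual allocation code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Standalone mirror of the header's eventmsgs_ext layout */
typedef struct eventmsgs_ext {
	uint8_t ver;
	uint8_t command;
	uint8_t len;        /* SET: mask size supplied; GET: actual fw mask size */
	uint8_t maxgetsize; /* GET only: max mask size the app can accept */
	uint8_t mask[1];    /* variable-length event bitmask */
} eventmsgs_ext_t;

#define EVENTMSGS_VER 1
#define EVENTMSGS_SET_MASK 3
#define EVENTMSGS_EXT_STRUCT_SIZE offsetof(eventmsgs_ext_t, mask)

/* Allocate and fill a SET_MASK request carrying mask_len bytes of mask */
static eventmsgs_ext_t *eventmsgs_alloc(const uint8_t *mask, uint8_t mask_len)
{
	eventmsgs_ext_t *e = malloc(EVENTMSGS_EXT_STRUCT_SIZE + mask_len);
	if (!e)
		return NULL;
	e->ver = EVENTMSGS_VER;
	e->command = EVENTMSGS_SET_MASK;
	e->len = mask_len;
	e->maxgetsize = 0;
	memcpy(e->mask, mask, mask_len);
	return e;
}
```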
+typedef BWL_PRE_PACKED_STRUCT struct pcie_bus_tput_params {
+ /* number of host DMA descriptors programmed by the firmware before a commit */
+ uint16 max_dma_descriptors;
+
+ uint16 host_buf_len; /* length of host buffer */
+ dmaaddr_t host_buf_addr; /* physical address for bus_throughput_buf */
+} BWL_POST_PACKED_STRUCT pcie_bus_tput_params_t;
+typedef BWL_PRE_PACKED_STRUCT struct pcie_bus_tput_stats {
+ uint16 time_taken; /* number of seconds the test ran */
+ uint16 nbytes_per_descriptor; /* number of bytes of data DMA'ed per descriptor */
+
+ /* number of descriptors for which DMA completed successfully within the test time */
+ uint32 count;
+} BWL_POST_PACKED_STRUCT pcie_bus_tput_stats_t;
+
/* no default structure packing */
#include <packed_section_end.h>
@@ -3458,6 +3684,293 @@
/* require strict packing */
#include <packed_section_start.h>
+/* ##### Power Stats section ##### */
+
+#define WL_PWRSTATS_VERSION 2
+
+/* Input structure for pwrstats IOVAR */
+typedef BWL_PRE_PACKED_STRUCT struct wl_pwrstats_query {
+ uint16 length; /* Number of entries in type array. */
+ uint16 type[1]; /* Types (tags) to retrieve.
+ * Length 0 (no types) means get all.
+ */
+} BWL_POST_PACKED_STRUCT wl_pwrstats_query_t;
+
+/* This structure is for version 2; version 1 will be deprecated by FW */
+typedef BWL_PRE_PACKED_STRUCT struct wl_pwrstats {
+ uint16 version; /* Version = 2 is TLV format */
+ uint16 length; /* Length of entire structure */
+ uint8 data[1]; /* TLV data, a series of structures,
+ * each starting with type and length.
+ *
+ * Padded as necessary so each section
+ * starts on a 4-byte boundary.
+ *
+ * Both type and len are uint16, but the
+ * upper nibble of length is reserved so
+ * valid len values are 0-4095.
+ */
+} BWL_POST_PACKED_STRUCT wl_pwrstats_t;
+#define WL_PWR_STATS_HDRLEN OFFSETOF(wl_pwrstats_t, data)
+
+/* Type values for the data section */
+#define WL_PWRSTATS_TYPE_PHY 0 /* struct wl_pwr_phy_stats */
+#define WL_PWRSTATS_TYPE_SCAN 1 /* struct wl_pwr_scan_stats */
+#define WL_PWRSTATS_TYPE_USB_HSIC 2 /* struct wl_pwr_usb_hsic_stats */
+#define WL_PWRSTATS_TYPE_PM_AWAKE 3 /* struct wl_pwr_pm_awake_stats */
+#define WL_PWRSTATS_TYPE_CONNECTION 4 /* struct wl_pwr_connect_stats; assoc and key-exch time */
+#define WL_PWRSTATS_TYPE_PCIE 6 /* struct wl_pwr_pcie_stats */
+
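Per the comments in `wl_pwrstats_t`, the `data[]` area is a series of records, each led by a uint16 type and a uint16 length (upper nibble reserved, so valid lengths are 0-4095) and padded so the next record starts on a 4-byte boundary. A hedged standalone sketch of walking that TLV series, assuming `len` counts the whole record including its 4-byte header:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define WL_PWRSTATS_LEN_MASK 0x0fff /* upper nibble of len is reserved */

/* Walk the TLV records in a pwrstats data[] buffer; on a match for 'want',
 * point *payload past the 4-byte header and return the payload length.
 * Returns -1 if the type is absent or a record is truncated. */
static int pwrstats_find(const uint8_t *data, size_t datalen,
                         uint16_t want, const uint8_t **payload)
{
	size_t off = 0;
	while (off + 4 <= datalen) {
		uint16_t type, len;
		memcpy(&type, data + off, 2);
		memcpy(&len, data + off + 2, 2);
		len &= WL_PWRSTATS_LEN_MASK;
		if (len < 4 || off + len > datalen)
			break;                      /* malformed/truncated record */
		if (type == want) {
			*payload = data + off + 4;
			return (int)(len - 4);
		}
		off += (len + 3) & ~(size_t)3;  /* records are 4-byte aligned */
	}
	return -1;
}
```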
+/* Bits for wake reasons */
+#define WLC_PMD_WAKE_SET 0x1
+#define WLC_PMD_PM_AWAKE_BCN 0x2
+#define WLC_PMD_BTA_ACTIVE 0x4
+#define WLC_PMD_SCAN_IN_PROGRESS 0x8
+#define WLC_PMD_RM_IN_PROGRESS 0x10
+#define WLC_PMD_AS_IN_PROGRESS 0x20
+#define WLC_PMD_PM_PEND 0x40
+#define WLC_PMD_PS_POLL 0x80
+#define WLC_PMD_CHK_UNALIGN_TBTT 0x100
+#define WLC_PMD_APSD_STA_UP 0x200
+#define WLC_PMD_TX_PEND_WAR 0x400
+#define WLC_PMD_GPTIMER_STAY_AWAKE 0x800
+#define WLC_PMD_PM2_RADIO_SOFF_PEND 0x2000
+#define WLC_PMD_NON_PRIM_STA_UP 0x4000
+#define WLC_PMD_AP_UP 0x8000
+
+typedef BWL_PRE_PACKED_STRUCT struct wlc_pm_debug {
+ uint32 timestamp; /* timestamp in millisecond */
+ uint32 reason; /* reason(s) for staying awake */
+} BWL_POST_PACKED_STRUCT wlc_pm_debug_t;
+
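Each `wlc_pm_debug_t` entry carries a bitwise OR of the `WLC_PMD_*` wake reasons defined above. A small illustrative decoder (standalone; it redefines and covers only a subset of the bits):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Subset of the WLC_PMD_* wake-reason bits from the header */
#define WLC_PMD_WAKE_SET          0x1
#define WLC_PMD_PM_AWAKE_BCN      0x2
#define WLC_PMD_SCAN_IN_PROGRESS  0x8
#define WLC_PMD_AP_UP             0x8000

/* Render the set reason bits as a space-separated list into buf */
static void pmd_reasons_str(uint32_t reason, char *buf, size_t buflen)
{
	static const struct { uint32_t bit; const char *name; } tab[] = {
		{ WLC_PMD_WAKE_SET,         "WAKE_SET" },
		{ WLC_PMD_PM_AWAKE_BCN,     "PM_AWAKE_BCN" },
		{ WLC_PMD_SCAN_IN_PROGRESS, "SCAN_IN_PROGRESS" },
		{ WLC_PMD_AP_UP,            "AP_UP" },
	};
	size_t i, off = 0;
	buf[0] = '\0';
	for (i = 0; i < sizeof(tab) / sizeof(tab[0]); i++) {
		if ((reason & tab[i].bit) && off < buflen)
			off += snprintf(buf + off, buflen - off, "%s%s",
			                off ? " " : "", tab[i].name);
	}
}
```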
+/* Data sent as part of pwrstats IOVAR */
+typedef BWL_PRE_PACKED_STRUCT struct pm_awake_data {
+ uint32 curr_time; /* ms */
+ uint32 hw_macc; /* HW maccontrol */
+ uint32 sw_macc; /* SW maccontrol */
+ uint32 pm_dur; /* Total sleep time in PM, usecs */
+ uint32 mpc_dur; /* Total sleep time in MPC, usecs */
+
+ /* int32 drifts = remote - local; +ve drift => local-clk slow */
+ int32 last_drift; /* Most recent TSF drift from beacon */
+ int32 min_drift; /* Min TSF drift from beacon in magnitude */
+ int32 max_drift; /* Max TSF drift from beacon in magnitude */
+
+ uint32 avg_drift; /* Avg TSF drift from beacon */
+
+ /* Wake history tracking */
+
+ /* pmstate array (type wlc_pm_debug_t) start offset */
+ uint16 pm_state_offset;
+ /* pmstate number of array entries */
+ uint16 pm_state_len;
+
+ /* array (type uint32) start offset */
+ uint16 pmd_event_wake_dur_offset;
+ /* pmd_event_wake_dur number of array entries */
+ uint16 pmd_event_wake_dur_len;
+
+ uint32 drift_cnt; /* Count of drift readings over which avg_drift was computed */
+ uint8 pmwake_idx; /* for stepping through pm_state */
+ uint8 pad[3];
+ uint32 frts_time; /* Cumulative ms spent in frts since driver load */
+ uint32 frts_end_cnt; /* No of times frts ended since driver load */
+} BWL_POST_PACKED_STRUCT pm_awake_data_t;
+
+typedef BWL_PRE_PACKED_STRUCT struct wl_pwr_pm_awake_stats {
+ uint16 type; /* WL_PWRSTATS_TYPE_PM_AWAKE */
+ uint16 len; /* Up to 4K-1, top 4 bits are reserved */
+
+ pm_awake_data_t awake_data;
+} BWL_POST_PACKED_STRUCT wl_pwr_pm_awake_stats_t;
+
+/* Original bus structure is for HSIC */
+typedef BWL_PRE_PACKED_STRUCT struct bus_metrics {
+ uint32 suspend_ct; /* suspend count */
+ uint32 resume_ct; /* resume count */
+ uint32 disconnect_ct; /* disconnect count */
+ uint32 reconnect_ct; /* reconnect count */
+ uint32 active_dur; /* msecs in bus, usecs for user */
+ uint32 suspend_dur; /* msecs in bus, usecs for user */
+ uint32 disconnect_dur; /* msecs in bus, usecs for user */
+} BWL_POST_PACKED_STRUCT bus_metrics_t;
+
+/* Bus interface info for USB/HSIC */
+typedef BWL_PRE_PACKED_STRUCT struct wl_pwr_usb_hsic_stats {
+ uint16 type; /* WL_PWRSTATS_TYPE_USB_HSIC */
+ uint16 len; /* Up to 4K-1, top 4 bits are reserved */
+
+ bus_metrics_t hsic; /* stats from hsic bus driver */
+} BWL_POST_PACKED_STRUCT wl_pwr_usb_hsic_stats_t;
+
+typedef BWL_PRE_PACKED_STRUCT struct pcie_bus_metrics {
+ uint32 d3_suspend_ct; /* suspend count */
+ uint32 d0_resume_ct; /* resume count */
+ uint32 perst_assrt_ct; /* PERST# assert count */
+ uint32 perst_deassrt_ct; /* PERST# de-assert count */
+ uint32 active_dur; /* msecs */
+ uint32 d3_suspend_dur; /* msecs */
+ uint32 perst_dur; /* msecs */
+ uint32 l0_cnt; /* L0 entry count */
+ uint32 l0_usecs; /* L0 duration in usecs */
+ uint32 l1_cnt; /* L1 entry count */
+ uint32 l1_usecs; /* L1 duration in usecs */
+ uint32 l1_1_cnt; /* L1_1ss entry count */
+ uint32 l1_1_usecs; /* L1_1ss duration in usecs */
+ uint32 l1_2_cnt; /* L1_2ss entry count */
+ uint32 l1_2_usecs; /* L1_2ss duration in usecs */
+ uint32 l2_cnt; /* L2 entry count */
+ uint32 l2_usecs; /* L2 duration in usecs */
+} BWL_POST_PACKED_STRUCT pcie_bus_metrics_t;
+
+/* Bus interface info for PCIE */
+typedef BWL_PRE_PACKED_STRUCT struct wl_pwr_pcie_stats {
+ uint16 type; /* WL_PWRSTATS_TYPE_PCIE */
+ uint16 len; /* Up to 4K-1, top 4 bits are reserved */
+ pcie_bus_metrics_t pcie; /* stats from pcie bus driver */
+} BWL_POST_PACKED_STRUCT wl_pwr_pcie_stats_t;
+
+/* Scan information history per category */
+typedef BWL_PRE_PACKED_STRUCT struct scan_data {
+ uint32 count; /* Number of scans performed */
+ uint32 dur; /* Total time (in us) used */
+} BWL_POST_PACKED_STRUCT scan_data_t;
+
+typedef BWL_PRE_PACKED_STRUCT struct wl_pwr_scan_stats {
+ uint16 type; /* WL_PWRSTATS_TYPE_SCAN */
+ uint16 len; /* Up to 4K-1, top 4 bits are reserved */
+
+ /* Scan history */
+ scan_data_t user_scans; /* User-requested scans: (i/e/p)scan */
+ scan_data_t assoc_scans; /* Scans initiated by association requests */
+ scan_data_t roam_scans; /* Scans initiated by the roam engine */
+ scan_data_t pno_scans[8]; /* For future PNO bucketing (BSSID, SSID, etc) */
+ scan_data_t other_scans; /* Scan engine usage not assigned to the above */
+} BWL_POST_PACKED_STRUCT wl_pwr_scan_stats_t;
+
+typedef BWL_PRE_PACKED_STRUCT struct wl_pwr_connect_stats {
+ uint16 type; /* WL_PWRSTATS_TYPE_CONNECTION */
+ uint16 len; /* Up to 4K-1, top 4 bits are reserved */
+
+ /* Connection (Association + Key exchange) data */
+ uint32 count; /* Number of connections performed */
+ uint32 dur; /* Total time (in ms) used */
+} BWL_POST_PACKED_STRUCT wl_pwr_connect_stats_t;
+
+typedef BWL_PRE_PACKED_STRUCT struct wl_pwr_phy_stats {
+ uint16 type; /* WL_PWRSTATS_TYPE_PHY */
+ uint16 len; /* Up to 4K-1, top 4 bits are reserved */
+ uint32 tx_dur; /* TX Active duration in us */
+ uint32 rx_dur; /* RX Active duration in us */
+} BWL_POST_PACKED_STRUCT wl_pwr_phy_stats_t;
+
+
+/* ##### End of Power Stats section ##### */
+
+/* IPV4 Arp offloads for ndis context */
+BWL_PRE_PACKED_STRUCT struct hostip_id {
+ struct ipv4_addr ipa;
+ uint8 id;
+} BWL_POST_PACKED_STRUCT;
+
+
+typedef BWL_PRE_PACKED_STRUCT struct wl_pfn_roam_thresh {
+ uint32 pfn_alert_thresh; /* time in ms */
+ uint32 roam_alert_thresh; /* time in ms */
+} BWL_POST_PACKED_STRUCT wl_pfn_roam_thresh_t;
+
+
+/* Reasons for wl_pmalert_t */
+#define PM_DUR_EXCEEDED (1<<0)
+#define MPC_DUR_EXCEEDED (1<<1)
+#define ROAM_ALERT_THRESH_EXCEEDED (1<<2)
+#define PFN_ALERT_THRESH_EXCEEDED (1<<3)
+#define CONST_AWAKE_DUR_ALERT (1<<4)
+#define CONST_AWAKE_DUR_RECOVERY (1<<5)
+
+#define MIN_PM_ALERT_LEN 9
+
+/* Data sent in EXCESS_PM_WAKE event */
+#define WL_PM_ALERT_VERSION 3
+
+#define MAX_P2P_BSS_DTIM_PRD 4
+
+/* This structure is for version 3; version 2 will be deprecated by FW */
+typedef BWL_PRE_PACKED_STRUCT struct wl_pmalert {
+ uint16 version; /* Version = 3 is TLV format */
+ uint16 length; /* Length of entire structure */
+ uint32 reasons; /* reason(s) for pm_alert */
+ uint8 data[1]; /* TLV data, a series of structures,
+ * each starting with type and length.
+ *
+ * Padded as necessary so each section
+ * starts on a 4-byte boundary.
+ *
+ * Both type and len are uint16, but the
+ * upper nibble of length is reserved so
+ * valid len values are 0-4095.
+ */
+} BWL_POST_PACKED_STRUCT wl_pmalert_t;
+
+/* Type values for the data section */
+#define WL_PMALERT_FIXED 0 /* struct wl_pmalert_fixed_t, fixed fields */
+#define WL_PMALERT_PMSTATE 1 /* struct wl_pmalert_pmstate_t, variable */
+#define WL_PMALERT_EVENT_DUR 2 /* struct wl_pmalert_event_dur_t, variable */
+#define WL_PMALERT_UCODE_DBG 3 /* struct wl_pmalert_ucode_dbg_t, variable */
+
+typedef BWL_PRE_PACKED_STRUCT struct wl_pmalert_fixed {
+ uint16 type; /* WL_PMALERT_FIXED */
+ uint16 len; /* Up to 4K-1, top 4 bits are reserved */
+ uint32 prev_stats_time; /* msecs */
+ uint32 curr_time; /* ms */
+ uint32 prev_pm_dur; /* usecs */
+ uint32 pm_dur; /* Total sleep time in PM, usecs */
+ uint32 prev_mpc_dur; /* usecs */
+ uint32 mpc_dur; /* Total sleep time in MPC, usecs */
+ uint32 hw_macc; /* HW maccontrol */
+ uint32 sw_macc; /* SW maccontrol */
+
+ /* int32 drifts = remote - local; +ve drift -> local-clk slow */
+ int32 last_drift; /* Most recent TSF drift from beacon */
+ int32 min_drift; /* Min TSF drift from beacon in magnitude */
+ int32 max_drift; /* Max TSF drift from beacon in magnitude */
+
+ uint32 avg_drift; /* Avg TSF drift from beacon */
+ uint32 drift_cnt; /* Count of drift readings over which avg_drift was computed */
+ uint32 frts_time; /* Cumulative ms spent in frts since driver load */
+ uint32 frts_end_cnt; /* No of times frts ended since driver load */
+} BWL_POST_PACKED_STRUCT wl_pmalert_fixed_t;
+
+typedef BWL_PRE_PACKED_STRUCT struct wl_pmalert_pmstate {
+ uint16 type; /* WL_PMALERT_PMSTATE */
+ uint16 len; /* Up to 4K-1, top 4 bits are reserved */
+
+ uint8 pmwake_idx; /* for stepping through pm_state */
+ uint8 pad[3];
+ /* Array of pmstate; len of array is based on tlv len */
+ wlc_pm_debug_t pmstate[1];
+} BWL_POST_PACKED_STRUCT wl_pmalert_pmstate_t;
+
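Since `pmstate[]` is sized by the record's TLV `len`, the entry count follows from subtracting the fixed part. A standalone sketch, assuming the packed fixed part is 8 bytes (type + len + pmwake_idx + 3 pad bytes) and each entry is the 8-byte `wlc_pm_debug_t`:

```c
#include <assert.h>
#include <stdint.h>

/* Standalone mirror of the header's per-entry layout */
typedef struct wlc_pm_debug {
	uint32_t timestamp; /* ms */
	uint32_t reason;    /* OR of WLC_PMD_* bits */
} wlc_pm_debug_t;

/* Packed fixed part of wl_pmalert_pmstate before pmstate[]:
 * type(2) + len(2) + pmwake_idx(1) + pad(3) = 8 bytes (assumed). */
#define PMSTATE_FIXED_LEN 8

/* Entries in the pmstate[] array implied by a record's TLV len */
static int pmstate_count(uint16_t tlv_len)
{
	if (tlv_len < PMSTATE_FIXED_LEN)
		return 0;
	return (int)((tlv_len - PMSTATE_FIXED_LEN) / sizeof(wlc_pm_debug_t));
}
```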
+typedef BWL_PRE_PACKED_STRUCT struct wl_pmalert_event_dur {
+ uint16 type; /* WL_PMALERT_EVENT_DUR */
+ uint16 len; /* Up to 4K-1, top 4 bits are reserved */
+
+ /* Array of event_dur, len of array is based on tlv len */
+ uint32 event_dur[1];
+} BWL_POST_PACKED_STRUCT wl_pmalert_event_dur_t;
+
+typedef BWL_PRE_PACKED_STRUCT struct wl_pmalert_ucode_dbg {
+ uint16 type; /* WL_PMALERT_UCODE_DBG */
+ uint16 len; /* Up to 4K-1, top 4 bits are reserved */
+ uint32 macctrl;
+ uint16 m_p2p_hps;
+ uint32 psm_brc;
+ uint32 ifsstat;
+ uint16 m_p2p_bss_dtim_prd[MAX_P2P_BSS_DTIM_PRD];
+ uint32 psmdebug[20];
+ uint32 phydebug[20];
+} BWL_POST_PACKED_STRUCT wl_pmalert_ucode_dbg_t;
+
/* Structures and constants used for "vndr_ie" IOVar interface */
#define VNDR_IE_CMD_LEN 4 /* length of the set command string:
@@ -3551,9 +4064,6 @@
uint8 est_Pout_cck; /* Latest CCK tx power out estimate */
uint8 tx_power_max[4]; /* Maximum target power among all rates */
uint tx_power_max_rate_ind[4]; /* Index of the rate with the max target power */
- int8 clm_limits[WL_NUMRATES]; /* regulatory limits - 20, 40 or 80MHz */
- int8 clm_limits_subchan1[WL_NUMRATES]; /* regulatory limits - 20in40 or 40in80 */
- int8 clm_limits_subchan2[WL_NUMRATES]; /* regulatory limits - 20in80MHz */
int8 sar; /* SAR limit for display by wl executable */
int8 channel_bandwidth; /* 20, 40 or 80 MHz bandwidth? */
uint8 version; /* Version of the data format wlu <--> driver */
@@ -3561,8 +4071,7 @@
int8 target_offsets[4]; /* Target power offsets for current rate per core */
uint32 last_tx_ratespec; /* Ratespec for last transmission */
uint user_target; /* user limit */
- uint32 board_limit_len; /* length of board limit buffer */
- uint32 target_len; /* length of target power buffer */
+ uint32 ppr_len; /* length of each ppr serialization buffer */
int8 SARLIMIT[MAX_STREAMS_SUPPORTED];
uint8 pprdata[1]; /* ppr serialization buffer */
} BWL_POST_PACKED_STRUCT tx_pwr_rpt_t;
@@ -3637,6 +4146,27 @@
uint32 max_tx_retry; /* no of consecutive no acks to send txfail event */
} BWL_POST_PACKED_STRUCT aibss_txfail_config_t;
+typedef BWL_PRE_PACKED_STRUCT struct wl_aibss_if {
+ uint16 version;
+ uint16 len;
+ uint32 flags;
+ struct ether_addr addr;
+ chanspec_t chspec;
+} BWL_POST_PACKED_STRUCT wl_aibss_if_t;
+
+typedef BWL_PRE_PACKED_STRUCT struct wlc_ipfo_route_entry {
+ struct ipv4_addr ip_addr;
+ struct ether_addr nexthop;
+} BWL_POST_PACKED_STRUCT wlc_ipfo_route_entry_t;
+
+typedef BWL_PRE_PACKED_STRUCT struct wlc_ipfo_route_tbl {
+ uint32 num_entry;
+ wlc_ipfo_route_entry_t route_entry[1];
+} BWL_POST_PACKED_STRUCT wlc_ipfo_route_tbl_t;
+
+#define WL_IPFO_ROUTE_TBL_FIXED_LEN 4
+#define WL_MAX_IPFO_ROUTE_TBL_ENTRY 64
+
/* no strict structure packing */
#include <packed_section_end.h>
@@ -3847,6 +4377,14 @@
wl_p2p_sched_desc_t desc[1];
} wl_p2p_sched_t;
+typedef struct wl_p2p_wfds_hash {
+ uint32 advt_id;
+ uint16 nw_cfg_method;
+ uint8 wfds_hash[6];
+ uint8 name_len;
+ uint8 service_name[MAX_WFDS_SVC_NAME_LEN];
+} wl_p2p_wfds_hash_t;
+
typedef struct wl_bcmdcs_data {
uint reason;
chanspec_t chspec;
@@ -3916,6 +4454,24 @@
uint8 band5g[WLC_SUBBAND_MAX][WLC_TXCORE_MAX];
} sar_limit_t;
+#define WLC_TXCAL_CORE_MAX 2 /* max number of txcore supports for txcal */
+#define MAX_NUM_TXCAL_MEAS 128
+
+typedef struct wl_txcal_meas {
+ uint8 tssi[WLC_TXCAL_CORE_MAX][MAX_NUM_TXCAL_MEAS];
+ int16 pwr[WLC_TXCAL_CORE_MAX][MAX_NUM_TXCAL_MEAS];
+ uint8 valid_cnt;
+} wl_txcal_meas_t;
+
+typedef struct wl_txcal_power_tssi {
+ uint8 set_core;
+ uint8 channel;
+ int16 pwr_start[WLC_TXCAL_CORE_MAX];
+ uint8 num_entries[WLC_TXCAL_CORE_MAX];
+ uint8 tssi[WLC_TXCAL_CORE_MAX][MAX_NUM_TXCAL_MEAS];
+ bool gen_tbl;
+} wl_txcal_power_tssi_t;
+
/* IOVAR "mempool" parameter. Used to retrieve a list of memory pool statistics. */
typedef struct wl_mempool_stats {
int num; /* Number of memory pools */
@@ -4210,12 +4766,18 @@
struct ether_addr target;
} wl_bsstrans_resp_t;
-/* "wnm_bsstrans_resp" argument programming behavior after BSSTRANS Req reception */
+/* "wnm_bsstrans_policy" argument programs behavior after BSSTRANS Req reception.
+ * BSS-Transition feature is used by multiple programs such as NPS-PF, VE-PF,
+ * Band-steering, Hotspot 2.0 and customer requirements. Each PF and its test plan
+ * mandates different behavior on receiving BSS-transition request. To accomodate
+ * such divergent behaviors these policies have been created.
+ */
enum {
- WL_BSSTRANS_RESP_ROAM_ALWAYS = 0, /* Roam (or disassociate) in all cases */
- WL_BSSTRANS_RESP_ROAM_IF_MODE = 1, /* Roam only if requested by Request Mode field */
- WL_BSSTRANS_RESP_ROAM_IF_PREF = 2, /* Roam only if Preferred BSS provided */
- WL_BSSTRANS_RESP_WAIT = 3 /* Wait for deauth and send Accepted status */
+ WL_BSSTRANS_POLICY_ROAM_ALWAYS = 0, /* Roam (or disassociate) in all cases */
+ WL_BSSTRANS_POLICY_ROAM_IF_MODE = 1, /* Roam only if requested by Request Mode field */
+ WL_BSSTRANS_POLICY_ROAM_IF_PREF = 2, /* Roam only if Preferred BSS provided */
+ WL_BSSTRANS_POLICY_WAIT = 3, /* Wait for deauth and send Accepted status */
+ WL_BSSTRANS_POLICY_PRODUCT = 4, /* Policy for real product use cases (non-pf) */
};
/* Definitions for WNM/NPS TIM Broadcast */
@@ -4287,6 +4849,7 @@
/* Definitions for Reliable Multicast */
#define WL_RMC_CNT_VERSION 1
+#define WL_RMC_TR_VERSION 1
#define WL_RMC_MAX_CLIENT 32
#define WL_RMC_FLAG_INBLACKLIST 1
#define WL_RMC_FLAG_ACTIVEACKER 2
@@ -4302,6 +4865,7 @@
#define WL_RMC_ACK_MCAST_ALL 0x01
#define WL_RMC_ACTF_TIME_MIN 300 /* time in ms */
#define WL_RMC_ACTF_TIME_MAX 20000 /* time in ms */
+#define WL_RMC_MAX_NUM_TRS 32 /* maximum transmitters allowed */
#define WL_RMC_ARTMO_MIN 350 /* time in ms */
#define WL_RMC_ARTMO_MAX 40000 /* time in ms */
@@ -4337,10 +4901,12 @@
uint16 null_tx_err; /* error count for rmc null frame transmit */
uint16 af_unicast_tx_err; /* error count for rmc unicast frame transmit */
uint16 mc_no_amt_slot; /* No mcast AMT entry available */
+ /* Unused. Keep for rom compatibility */
uint16 mc_no_glb_slot; /* No mcast entry available in global table */
uint16 mc_not_mirrored; /* mcast group is not mirrored */
uint16 mc_existing_tr; /* mcast group is already taken by transmitter */
uint16 mc_exist_in_amt; /* mcast group is already programmed in amt */
+ /* Unused. Keep for rom compatibility */
uint16 mc_not_exist_in_gbl; /* mcast group is not in global table */
uint16 mc_not_exist_in_amt; /* mcast group is not in AMT table */
uint16 mc_utilized; /* mcast addressed is already taken */
@@ -4351,6 +4917,8 @@
uint32 mc_ar_role_selected; /* no. of times took AR role */
uint32 mc_ar_role_deleted; /* no. of times AR role cancelled */
uint32 mc_noacktimer_expired; /* no. of times noack timer expired */
+ uint16 mc_no_wl_clk; /* no wl clk detected when trying to access amt */
+ uint16 mc_tr_cnt_exceeded; /* No of transmitters in the network exceeded */
} wl_rmc_cnts_t;
/* RMC Status */
@@ -4378,33 +4946,20 @@
wl_rmc_entry_t entry[WL_RMC_MAX_TABLE_ENTRY];
} wl_rmc_entry_table_t;
-/* Transmitter Info */
-typedef struct wl_rmc_trans_info {
- struct ether_addr addr; /* transmitter mac */
- uint32 time_val; /* timer val in case aging of entry is required */
- uint16 seq; /* last seq number of packet received from transmitter */
- uint16 artmo;
-} wl_rmc_trans_info_t;
+typedef struct wl_rmc_trans_elem {
+ struct ether_addr tr_mac; /* transmitter mac */
+ struct ether_addr ar_mac; /* ar mac */
+ uint16 artmo; /* AR timeout */
+ uint8 amt_idx; /* amt table entry */
+ uint16 flag; /* entry will be acked, not acked, programmed, full etc */
+} wl_rmc_trans_elem_t;
-/* Multicast Group */
-typedef struct wl_rmc_grp_entry {
- struct ether_addr mcaddr; /* multi-cast group mac */
- struct ether_addr ar; /* active receiver for the group */
- wl_rmc_trans_info_t tr_info[WL_RMC_MAX_TRS_PER_GROUP];
-} wl_rmc_grp_entry_t;
-
-/* RMC ACKALL Table */
-typedef struct wl_rmc_ackall_entry {
- struct ether_addr ar; /* active receiver for the entry */
- wl_rmc_trans_info_t tr_info[WL_RMC_NUM_OF_MC_STREAMS];
-} wl_rmc_ackall_entry_t;
-
-/* RMC Peers Table */
-typedef struct wl_rmc_gbl_table {
- uint8 activeMask; /* mask to denote the entry(s) that are active */
- wl_rmc_ackall_entry_t ackAll; /* structure to keep info related to ACK all */
- wl_rmc_grp_entry_t mc_entry[WL_RMC_NUM_OF_MC_STREAMS];
-} wl_rmc_gbl_table_t;
+/* RMC transmitters */
+typedef struct wl_rmc_trans_in_network {
+ uint8 ver; /* version of RMC */
+ uint8 num_tr; /* number of transmitters in the network */
+ wl_rmc_trans_elem_t trs[WL_RMC_MAX_NUM_TRS];
+} wl_rmc_trans_in_network_t;
/* To update vendor specific ie for RMC */
typedef struct wl_rmc_vsie {
@@ -4412,62 +4967,690 @@
uint16 payload; /* IE Data Payload */
} wl_rmc_vsie_t;
+
+/* structures & defines for proximity detection */
+enum proxd_method {
+ PROXD_UNDEFINED_METHOD = 0,
+ PROXD_RSSI_METHOD = 1,
+ PROXD_TOF_METHOD = 2
+};
+
+/* structures for proximity detection device role */
+#define WL_PROXD_MODE_DISABLE 0
+#define WL_PROXD_MODE_NEUTRAL 1
+#define WL_PROXD_MODE_INITIATOR 2
+#define WL_PROXD_MODE_TARGET 3
+
+#define WL_PROXD_ACTION_STOP 0
+#define WL_PROXD_ACTION_START 1
+
+#define WL_PROXD_FLAG_TARGET_REPORT 0x1
+#define WL_PROXD_FLAG_REPORT_FAILURE 0x2
+#define WL_PROXD_FLAG_INITIATOR_REPORT 0x4
+#define WL_PROXD_FLAG_NOCHANSWT 0x8
+#define WL_PROXD_FLAG_NETRUAL 0x10
+#define WL_PROXD_FLAG_INITIATOR_RPTRTT 0x20
+#define WL_PROXD_FLAG_ONEWAY 0x40
+#define WL_PROXD_FLAG_SEQ_EN 0x80
+
+#define WL_PROXD_RANDOM_WAKEUP 0x8000
+
typedef struct wl_proxd_iovar {
uint16 method; /* Proximity Detection method */
uint16 mode; /* Mode (neutral, initiator, target) */
} wl_proxd_iovar_t;
-/* structures for proximity detection parameters */
-typedef struct wl_proxd_params_rssi_method {
- chanspec_t chanspec; /* chanspec for home channel */
+/*
+ * structures for proximity detection parameters
+ * consists of two parts, common and method specific params
+ * common params should be placed at the beginning
+ */
+
+/* require strict packing */
+#include <packed_section_start.h>
+
+typedef BWL_PRE_PACKED_STRUCT struct wl_proxd_params_common {
+ chanspec_t chanspec; /* channel spec */
+ int16 tx_power; /* tx power of Proximity Detection(PD) frames (in dBm) */
+ uint16 tx_rate; /* tx rate of PD frames (in 500kbps units) */
+ uint16 timeout; /* timeout value */
uint16 interval; /* interval between neighbor finding attempts (in TU) */
uint16 duration; /* duration of neighbor finding attempts (in ms) */
- int16 rssi_thresh; /* RSSI threshold (in dBm) */
+} BWL_POST_PACKED_STRUCT wl_proxd_params_common_t;
+
+typedef BWL_PRE_PACKED_STRUCT struct wl_proxd_params_rssi_method {
+ chanspec_t chanspec; /* chanspec for home channel */
int16 tx_power; /* tx power of Proximity Detection frames (in dBm) */
- uint16 tx_rate; /* tx rate of Proximity Detection frames
- * (in 500kbps units)
- */
+ uint16 tx_rate; /* tx rate of PD frames, 500kbps units */
uint16 timeout; /* state machine wait timeout of the frames (in ms) */
+ uint16 interval; /* interval between neighbor finding attempts (in TU) */
+ uint16 duration; /* duration of neighbor finding attempts (in ms) */
+ /* method specific ones go after this line */
+ int16 rssi_thresh; /* RSSI threshold (in dBm) */
uint16 maxconvergtmo; /* max wait converge timeout (in ms) */
} wl_proxd_params_rssi_method_t;
+#define Q1_NS 25 /* Q1 time units */
+
+#define TOF_BW_NUM 3 /* number of bandwidth that the TOF can support */
+#define TOF_BW_SEQ_NUM (TOF_BW_NUM+2) /* number of total index */
+enum tof_bw_index {
+ TOF_BW_20MHZ_INDEX = 0,
+ TOF_BW_40MHZ_INDEX = 1,
+ TOF_BW_80MHZ_INDEX = 2,
+ TOF_BW_SEQTX_INDEX = 3,
+ TOF_BW_SEQRX_INDEX = 4
+};
+
+#define BANDWIDTH_BASE 20 /* base value of bandwidth */
+#define TOF_BW_20MHZ (BANDWIDTH_BASE << TOF_BW_20MHZ_INDEX)
+#define TOF_BW_40MHZ (BANDWIDTH_BASE << TOF_BW_40MHZ_INDEX)
+#define TOF_BW_80MHZ (BANDWIDTH_BASE << TOF_BW_80MHZ_INDEX)
+#define TOF_BW_10MHZ 10
+
+#define NFFT_BASE 64 /* base size of fft */
+#define TOF_NFFT_20MHZ (NFFT_BASE << TOF_BW_20MHZ_INDEX)
+#define TOF_NFFT_40MHZ (NFFT_BASE << TOF_BW_40MHZ_INDEX)
+#define TOF_NFFT_80MHZ (NFFT_BASE << TOF_BW_80MHZ_INDEX)
+
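The bandwidth and FFT-size macros are both left shifts of a base value by the `tof_bw_index`, so the 20/40/80 MHz bandwidths map to 64/128/256-point FFTs. A trivial standalone restatement of that relationship:

```c
#include <assert.h>

/* Standalone mirrors of the header's TOF index/base definitions */
enum tof_bw_index {
	TOF_BW_20MHZ_INDEX = 0,
	TOF_BW_40MHZ_INDEX = 1,
	TOF_BW_80MHZ_INDEX = 2
};

#define BANDWIDTH_BASE 20 /* base value of bandwidth (MHz) */
#define NFFT_BASE      64 /* base size of fft */

/* Bandwidth in MHz for a given index */
static int tof_bw_mhz(int idx)
{
	return BANDWIDTH_BASE << idx;
}

/* FFT size for a given index */
static int tof_nfft(int idx)
{
	return NFFT_BASE << idx;
}
```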
+typedef BWL_PRE_PACKED_STRUCT struct wl_proxd_params_tof_method {
+ chanspec_t chanspec; /* chanspec for home channel */
+ int16 tx_power; /* tx power of Proximity Detection(PD) frames (in dBm) */
+ uint16 tx_rate; /* tx rate of PD frames (in 500kbps units) */
+ uint16 timeout; /* state machine wait timeout of the frames (in ms) */
+ uint16 interval; /* interval between neighbor finding attempts (in TU) */
+ uint16 duration; /* duration of neighbor finding attempts (in ms) */
+ /* specific for the method go after this line */
+ struct ether_addr tgt_mac; /* target mac addr for TOF method */
+ uint16 ftm_cnt; /* number of the frames txed by initiator */
+ uint16 retry_cnt; /* number of retransmit attempts for ftm frames */
+ int16 vht_rate; /* ht or vht rate */
+ /* more params required for other methods can be added here */
+} BWL_POST_PACKED_STRUCT wl_proxd_params_tof_method_t;
+
+typedef BWL_PRE_PACKED_STRUCT struct wl_proxd_params_tof_tune {
+ uint32 Ki; /* h/w delay K factor for initiator */
+ uint32 Kt; /* h/w delay K factor for target */
+ int16 vhtack; /* enable/disable VHT ACK */
+ int16 N_log2[TOF_BW_SEQ_NUM]; /* simple threshold crossing */
+ int16 w_offset[TOF_BW_NUM]; /* offset of threshold crossing window(per BW) */
+ int16 w_len[TOF_BW_NUM]; /* length of threshold crossing window(per BW) */
+ int32 maxDT; /* max time difference of T4/T1 or T3/T2 */
+ int32 minDT; /* min time difference of T4/T1 or T3/T2 */
+ uint8 totalfrmcnt; /* total count of transferred measurement frames */
+ uint16 rsv_media; /* reserve media value for TOF */
+ uint32 flags; /* flags */
+ uint8 core; /* core to use for tx */
+ uint8 force_K; /* set to force value of K */
+ int16 N_scale[TOF_BW_SEQ_NUM]; /* simple threshold crossing */
+ uint8 sw_adj; /* enable sw assisted timestamp adjustment */
+ uint8 hw_adj; /* enable hw assisted timestamp adjustment */
+ uint8 seq_en; /* enable ranging sequence */
+ uint8 ftm_cnt[TOF_BW_SEQ_NUM]; /* number of ftm frames based on bandwidth */
+ int16 N_log2_2g; /* simple threshold crossing for 2g channel */
+ int16 N_scale_2g; /* simple threshold crossing for 2g channel */
+} BWL_POST_PACKED_STRUCT wl_proxd_params_tof_tune_t;
+
typedef struct wl_proxd_params_iovar {
uint16 method; /* Proximity Detection method */
union {
- wl_proxd_params_rssi_method_t rssi_params;
+ /* common params for pdsvc */
+ wl_proxd_params_common_t cmn_params; /* common parameters */
+ /* method specific */
+ wl_proxd_params_rssi_method_t rssi_params; /* RSSI method parameters */
+ wl_proxd_params_tof_method_t tof_params; /* TOF method parameters */
+ /* tune parameters */
+ wl_proxd_params_tof_tune_t tof_tune; /* TOF tune parameters */
} u; /* Method specific optional parameters */
} wl_proxd_params_iovar_t;
-enum {
- RSSI_REASON_UNKNOW,
- RSSI_REASON_LOWRSSI,
- RSSI_REASON_NSYC,
- RSSI_REASON_TIMEOUT
+#define PROXD_COLLECT_GET_STATUS 0
+#define PROXD_COLLECT_SET_STATUS 1
+#define PROXD_COLLECT_QUERY_HEADER 2
+#define PROXD_COLLECT_QUERY_DATA 3
+#define PROXD_COLLECT_QUERY_DEBUG 4
+#define PROXD_COLLECT_REMOTE_REQUEST 5
+
+typedef BWL_PRE_PACKED_STRUCT struct wl_proxd_collect_query {
+ uint32 method; /* method */
+ uint8 request; /* Query request. */
+ uint8 status; /* 0 -- disable, 1 -- enable collection, */
+ /* 2 -- enable collection & debug */
+ uint16 index; /* The current frame index [0 to total_frames - 1]. */
+ uint16 mode; /* Initiator or Target */
+ bool busy; /* tof sm is busy */
+ bool remote; /* Remote collect data */
+} BWL_POST_PACKED_STRUCT wl_proxd_collect_query_t;
+
+typedef BWL_PRE_PACKED_STRUCT struct wl_proxd_collect_header {
+ uint16 total_frames; /* The total frames for this collect. */
+ uint16 nfft; /* nfft value */
+ uint16 bandwidth; /* bandwidth */
+ uint16 channel; /* channel number */
+ uint32 chanspec; /* channel spec */
+ uint32 fpfactor; /* avb timer value factor */
+ uint16 fpfactor_shift; /* avb timer value shift bits */
+ int32 distance; /* distance calculated by fw */
+ uint32 meanrtt; /* mean of RTTs */
+ uint32 modertt; /* mode of RTTs */
+ uint32 medianrtt; /* median of RTTs */
+ uint32 sdrtt; /* standard deviation of RTTs */
+ uint32 clkdivisor; /* clock divisor */
+ uint16 chipnum; /* chip type */
+ uint8 chiprev; /* chip revision */
+ uint8 phyver; /* phy version */
+ struct ether_addr loaclMacAddr; /* local mac address */
+ struct ether_addr remoteMacAddr; /* remote mac address */
+ wl_proxd_params_tof_tune_t params;
+} BWL_POST_PACKED_STRUCT wl_proxd_collect_header_t;
+
+
+/* ********************** NAN wl interface struct types and defs ******************** */
+
+#define WL_NAN_IOCTL_VERSION 0x1
+
+/* wl_nan_sub_cmd may also be used in dhd */
+typedef struct wl_nan_sub_cmd wl_nan_sub_cmd_t;
+typedef int (cmd_handler_t)(void *wl, const wl_nan_sub_cmd_t *cmd, char **argv);
+/* nan cmd list entry */
+struct wl_nan_sub_cmd {
+ char *name;
+ uint8 version; /* cmd version */
+ uint16 id; /* id for the dongle f/w switch/case */
+ uint16 type; /* base type of argument */
+ cmd_handler_t *handler; /* cmd handler */
};
-enum {
- RSSI_STATE_POLL,
- RSSI_STATE_TPAIRING,
- RSSI_STATE_IPAIRING,
- RSSI_STATE_THANDSHAKE,
- RSSI_STATE_IHANDSHAKE,
- RSSI_STATE_CONFIRMED,
- RSSI_STATE_PIPELINE,
- RSSI_STATE_NEGMODE,
- RSSI_STATE_MONITOR,
- RSSI_STATE_LAST
+/* container for nan ioctl cmds & events */
+typedef BWL_PRE_PACKED_STRUCT struct wl_nan_ioc {
+ uint16 version; /* interface command or event version */
+ uint16 id; /* nan ioctl cmd ID */
+ uint16 len; /* total length of all tlv records in data[] */
+ uint8 data [1]; /* var len payload of bcm_xtlv_t type */
+} BWL_POST_PACKED_STRUCT wl_nan_ioc_t;
+
+typedef struct wl_nan_status {
+ uint8 inited;
+ uint8 joined;
+ uint8 role;
+ uint8 hop_count;
+ uint32 chspec;
+ uint8 amr[8]; /* Anchor Master Rank */
+ uint32 cnt_pend_txfrm; /* pending TX frames */
+ uint32 cnt_bcn_tx; /* TX disc/sync beacon count */
+ uint32 cnt_bcn_rx; /* RX disc/sync beacon count */
+ uint32 cnt_svc_disc_tx; /* TX svc disc frame count */
+ uint32 cnt_svc_disc_rx; /* RX svc disc frame count */
+ struct ether_addr cid;
+} wl_nan_status_t;
+
+/* various params and ctl switches for the nan_debug instance */
+typedef struct nan_debug_params {
+ uint8 enabled; /* runtime debugging enabled */
+ uint8 collect; /* enables debug svc sdf monitor mode */
+ uint16 cmd; /* debug cmd to perform a debug action */
+ uint32 msglevel; /* msg level if enabled */
+ uint16 status;
+} nan_debug_params_t;
+
+
+/* nan passive scan params */
+#define NAN_SCAN_MAX_CHCNT 8
+typedef BWL_PRE_PACKED_STRUCT struct nan_scan_params {
+ uint16 scan_time;
+ uint16 home_time;
+ uint16 chspec_num;
+ chanspec_t chspec_list[NAN_SCAN_MAX_CHCNT]; /* 3 actually used, 5 reserved for future use */
+} BWL_POST_PACKED_STRUCT nan_scan_params_t;
+
+enum wl_nan_role {
+ WL_NAN_ROLE_AUTO = 0,
+ WL_NAN_ROLE_NON_MASTER_NON_SYNC = 1,
+ WL_NAN_ROLE_NON_MASTER_SYNC = 2,
+ WL_NAN_ROLE_MASTER = 3,
+ WL_NAN_ROLE_ANCHOR_MASTER = 4
+};
+#define NAN_MASTER_RANK_LEN 8
+/* nan cmd IDs */
+enum wl_nan_cmds {
+ /* nan cfg /disc & dbg ioctls */
+ WL_NAN_CMD_ENABLE = 1,
+ WL_NAN_CMD_ATTR = 2,
+ WL_NAN_CMD_NAN_JOIN = 3,
+ WL_NAN_CMD_LEAVE = 4,
+ WL_NAN_CMD_MERGE = 5,
+ WL_NAN_CMD_STATUS = 6,
+ /* discovery engine commands */
+ WL_NAN_CMD_PUBLISH = 20,
+ WL_NAN_CMD_SUBSCRIBE = 21,
+ WL_NAN_CMD_CANCEL_PUBLISH = 22,
+ WL_NAN_CMD_CANCEL_SUBSCRIBE = 23,
+ WL_NAN_CMD_TRANSMIT = 24,
+ WL_NAN_CMD_CONNECTION = 25,
+ WL_NAN_CMD_SHOW = 26,
+ WL_NAN_CMD_STOP = 27, /* stop nan for a given cluster ID */
+ /* nan debug iovars & cmds */
+ WL_NAN_CMD_SCAN_PARAMS = 46,
+ WL_NAN_CMD_SCAN = 47,
+ WL_NAN_CMD_SCAN_RESULTS = 48,
+ WL_NAN_CMD_EVENT_MASK = 49,
+ WL_NAN_CMD_EVENT_CHECK = 50,
+
+ WL_NAN_CMD_DEBUG = 60,
+ WL_NAN_CMD_TEST1 = 61,
+ WL_NAN_CMD_TEST2 = 62,
+ WL_NAN_CMD_TEST3 = 63
+};
+
+/*
+ * tlv IDs uniquely identify cmd parameters
+ * packed into the wl_nan_ioc_t container
+ */
+enum wl_nan_cmd_xtlv_id {
+ /* 0x00 ~ 0xFF: standard TLV ID whose data format is the same as NAN attribute TLV */
+ WL_NAN_XTLV_ZERO = 0, /* used as tlv buf end marker */
+#ifdef NAN_STD_TLV /* rfu, don't use yet */
+ WL_NAN_XTLV_MASTER_IND = 1, /* == NAN_ATTR_MASTER_IND, */
+ WL_NAN_XTLV_CLUSTER = 2, /* == NAN_ATTR_CLUSTER, */
+ WL_NAN_XTLV_VENDOR = 221, /* == NAN_ATTR_VENDOR, */
+#endif
+ /* 0x02 ~ 0xFF: reserved for use with the same data format as NAN attribute TLVs */
+ /* 0x100 ~ : private TLV ID defined just for NAN command */
+ /* common types */
+ WL_NAN_XTLV_BUFFER = 0x101, /* generic type, function depends on cmd context */
+ WL_NAN_XTLV_MAC_ADDR = 0x102, /* used in various cmds */
+ WL_NAN_XTLV_REASON = 0x103,
+ WL_NAN_XTLV_ENABLE = 0x104,
+ /* explicit types, primarily for discovery engine iovars */
+ WL_NAN_XTLV_SVC_PARAMS = 0x120, /* Contains required params: wl_nan_disc_params_t */
+ WL_NAN_XTLV_MATCH_RX = 0x121, /* Matching filter to evaluate on receive */
+ WL_NAN_XTLV_MATCH_TX = 0x122, /* Matching filter to send */
+ WL_NAN_XTLV_SVC_INFO = 0x123, /* Service specific info */
+ WL_NAN_XTLV_SVC_NAME = 0x124, /* Optional UTF-8 service name, for debugging. */
+ WL_NAN_XTLV_INSTANCE_ID = 0x125, /* Identifies unique publish or subscribe instance */
+ WL_NAN_XTLV_PRIORITY = 0x126, /* used in transmit cmd context */
+ WL_NAN_XTLV_REQUESTOR_ID = 0x127, /* Requestor instance ID */
+ WL_NAN_XTLV_VNDR = 0x128, /* Vendor specific attribute */
+ /* explicit types, primarily for NAN MAC iovars */
+ WL_NAN_XTLV_DW_LEN = 0x140, /* discovery win length */
+ WL_NAN_XTLV_BCN_INTERVAL = 0x141, /* beacon interval, both sync and discovery bcns? */
+ WL_NAN_XTLV_CLUSTER_ID = 0x142,
+ WL_NAN_XTLV_IF_ADDR = 0x143,
+ WL_NAN_XTLV_MC_ADDR = 0x144,
+ WL_NAN_XTLV_ROLE = 0x145,
+ WL_NAN_XTLV_START = 0x146,
+
+ WL_NAN_XTLV_MASTER_PREF = 0x147,
+ WL_NAN_XTLV_DW_INTERVAL = 0x148,
+ WL_NAN_XTLV_PTBTT_OVERRIDE = 0x149,
+ /* nan status command xtlvs */
+ WL_NAN_XTLV_MAC_INITED = 0x14a,
+ WL_NAN_XTLV_MAC_ENABLED = 0x14b,
+ WL_NAN_XTLV_MAC_CHANSPEC = 0x14c,
+ WL_NAN_XTLV_MAC_AMR = 0x14d, /* anchormaster rank u8 amr[8] */
+ WL_NAN_XTLV_MAC_HOPCNT = 0x14e,
+ WL_NAN_XTLV_MAC_AMBTT = 0x14f,
+ WL_NAN_XTLV_MAC_TXRATE = 0x150,
+ WL_NAN_XTLV_MAC_STATUS = 0x151, /* xtlv payload is nan_status_t */
+ WL_NAN_XTLV_NAN_SCANPARAMS = 0x152, /* payload is nan_scan_params_t */
+ WL_NAN_XTLV_DEBUGPARAMS = 0x153, /* payload is nan_debug_params_t */
+ WL_NAN_XTLV_SUBSCR_ID = 0x154, /* subscriber id */
+ WL_NAN_XTLV_PUBLR_ID = 0x155, /* publisher id */
+ WL_NAN_XTLV_EVENT_MASK = 0x156,
+ WL_NAN_XTLV_MERGE = 0x157
+};
+
+/* Flag bits for Publish and Subscribe (wl_nan_disc_params_t flags) */
+#define WL_NAN_RANGE_LIMITED 0x0040
+/* Bits specific to Publish */
+/* Unsolicited transmissions */
+#define WL_NAN_PUB_UNSOLICIT 0x1000
+/* Solicited transmissions */
+#define WL_NAN_PUB_SOLICIT 0x2000
+#define WL_NAN_PUB_BOTH 0x3000
+/* Set for broadcast solicited transmission
+ * Do not set for unicast solicited transmission
+ */
+#define WL_NAN_PUB_BCAST 0x4000
+/* Generate event on each solicited transmission */
+#define WL_NAN_PUB_EVENT 0x8000
+/* Used for one-time solicited Publish functions to indicate transmission occurred */
+#define WL_NAN_PUB_SOLICIT_PENDING 0x10000
+/* Follow-up frames */
+#define WL_NAN_FOLLOWUP 0x20000
+/* Bits specific to Subscribe */
+/* Active subscribe mode (Leave unset for passive) */
+#define WL_NAN_SUB_ACTIVE 0x1000
+
+/* Special values for time to live (ttl) parameter */
+#define WL_NAN_TTL_UNTIL_CANCEL 0xFFFFFFFF
+/* Publish - runs until first transmission
+ * Subscribe - runs until first DiscoveryResult event
+ */
+#define WL_NAN_TTL_FIRST 0
+
+/* The service hash (service id) is exactly this many bytes. */
+#define WL_NAN_SVC_HASH_LEN 6
+
+/* Instance ID type (unique identifier) */
+typedef uint8 wl_nan_instance_id_t;
+
+/* Mandatory parameters for publish/subscribe iovars - NAN_TLV_SVC_PARAMS */
+typedef struct wl_nan_disc_params_s {
+ /* Periodicity of unsolicited/query transmissions, in DWs */
+ uint32 period;
+ /* Time to live in DWs */
+ uint32 ttl;
+ /* Flag bits */
+ uint32 flags;
+ /* Publish or subscribe service id, i.e. hash of the service name */
+ uint8 svc_hash[WL_NAN_SVC_HASH_LEN];
+ /* Publish or subscribe id */
+ wl_nan_instance_id_t instance_id;
+} wl_nan_disc_params_t;
+
+/*
+ * discovery interface event structures
+ */
+
+/* NAN Ranging */
+
+/* Bit defines for global flags */
+#define WL_NAN_RANGING_ENABLE 1 /* enable RTT */
+#define WL_NAN_RANGING_RANGED 2 /* Report to host if ranged as target */
+typedef struct nan_ranging_config {
+ uint32 chanspec; /* Ranging chanspec */
+ uint16 timeslot; /* NAN RTT start time slot 1-511 */
+ uint16 duration; /* NAN RTT duration in ms */
+ struct ether_addr allow_mac; /* peer initiated ranging: the allowed peer mac
+ * address, a unicast (for one peer) or
+ * a broadcast for all. Setting it to all zeros
+ * means responding to none, same as not setting
+ * the flag bit NAN_RANGING_RESPOND
+ */
+ uint16 flags;
+} wl_nan_ranging_config_t;
+
+/* list of peers for self initiated ranging */
+/* Bit defines for per peer flags */
+#define WL_NAN_RANGING_REPORT (1<<0) /* Enable reporting range to target */
+typedef struct nan_ranging_peer {
+ uint32 chanspec; /* desired chanspec for this peer */
+ uint32 abitmap; /* available bitmap */
+ struct ether_addr ea; /* peer MAC address */
+ uint8 frmcnt; /* frame count */
+ uint8 retrycnt; /* retry count */
+ uint16 flags; /* per peer flags, report or not */
+} wl_nan_ranging_peer_t;
+typedef struct nan_ranging_list {
+ uint8 count; /* number of MAC addresses */
+ uint8 num_peers_done; /* host set to 0, when read, shows number of peers
+ * completed, success or fail
+ */
+ uint8 num_dws; /* time period to do the ranging, specified in dws */
+ uint8 reserve; /* reserved field */
+ wl_nan_ranging_peer_t rp[1]; /* variable length array of peers */
+} wl_nan_ranging_list_t;
+
+/* ranging results, a list for self initiated ranging and one for peer initiated ranging */
+/* There will be one structure for each peer */
+#define WL_NAN_RANGING_STATUS_SUCCESS 1
+#define WL_NAN_RANGING_STATUS_FAIL 2
+#define WL_NAN_RANGING_STATUS_TIMEOUT 3
+#define WL_NAN_RANGING_STATUS_ABORT 4 /* with partial results if sounding count > 0 */
+typedef struct nan_ranging_result {
+ uint8 status; /* 1: success, 2: fail, 3: timeout, 4: aborted */
+ uint8 sounding_count; /* number of measurements completed (0 = failure) */
+ struct ether_addr ea; /* initiator MAC address */
+ uint32 chanspec; /* Chanspec where the ranging was done */
+ uint32 timestamp; /* 32bits of the TSF timestamp ranging was completed at */
+ uint32 distance; /* mean distance in meters expressed as Q4 number.
+ * Only valid when sounding_count > 0. Examples:
+ * 0x08 = 0.5m
+ * 0x10 = 1m
+ * 0x18 = 1.5m
+ * set to 0xffffffff to indicate invalid number
+ */
+ int32 rtt_var; /* standard deviation in 10th of ns of RTTs measured.
+ * Only valid when sounding_count > 0
+ */
+ struct ether_addr tgtea; /* target MAC address */
+} wl_nan_ranging_result_t;
+typedef struct nan_ranging_event_data {
+ uint8 mode; /* 1: Result of host initiated ranging */
+ /* 2: Result of peer initiated ranging */
+ uint8 reserved;
+ uint8 success_count; /* number of peers completed successfully */
+ uint8 count; /* number of peers in the list */
+ wl_nan_ranging_result_t rr[1]; /* variable array of ranging peers */
+} wl_nan_ranging_event_data_t;
+
+/* ********************* end of NAN section ******************************** */
+
+
+#define RSSI_THRESHOLD_SIZE 16
+#define MAX_IMP_RESP_SIZE 256
+
+typedef BWL_PRE_PACKED_STRUCT struct wl_proxd_rssi_bias {
+ int32 version; /* version */
+ int32 threshold[RSSI_THRESHOLD_SIZE]; /* threshold */
+ int32 peak_offset; /* peak offset */
+ int32 bias; /* rssi bias */
+ int32 gd_delta; /* GD - GD_ADJ */
+ int32 imp_resp[MAX_IMP_RESP_SIZE]; /* (Hi*Hi)+(Hr*Hr) */
+} BWL_POST_PACKED_STRUCT wl_proxd_rssi_bias_t;
+
+typedef BWL_PRE_PACKED_STRUCT struct wl_proxd_rssi_bias_avg {
+ int32 avg_threshold[RSSI_THRESHOLD_SIZE]; /* avg threshold */
+ int32 avg_peak_offset; /* avg peak offset */
+ int32 avg_rssi; /* avg rssi */
+ int32 avg_bias; /* avg bias */
+} BWL_POST_PACKED_STRUCT wl_proxd_rssi_bias_avg_t;
+
+typedef BWL_PRE_PACKED_STRUCT struct wl_proxd_collect_info {
+ uint16 type; /* type: 0 = channel table, 1 = channel smoothing table, 2 and 3 = seq */
+ uint16 index; /* The current frame index, from 1 to total_frames. */
+ uint16 tof_cmd; /* M_TOF_CMD */
+ uint16 tof_rsp; /* M_TOF_RSP */
+ uint16 tof_avb_rxl; /* M_TOF_AVB_RX_L */
+ uint16 tof_avb_rxh; /* M_TOF_AVB_RX_H */
+ uint16 tof_avb_txl; /* M_TOF_AVB_TX_L */
+ uint16 tof_avb_txh; /* M_TOF_AVB_TX_H */
+ uint16 tof_id; /* M_TOF_ID */
+ uint8 tof_frame_type;
+ uint8 tof_frame_bw;
+ int8 tof_rssi;
+ int32 tof_cfo;
+ int32 gd_adj_ns; /* group delay */
+ int32 gd_h_adj_ns; /* group delay + threshold crossing */
+#ifdef RSSI_REFINE
+ wl_proxd_rssi_bias_t rssi_bias; /* RSSI refinement info */
+#endif
+ int16 nfft; /* number of samples stored in H */
+
+} BWL_POST_PACKED_STRUCT wl_proxd_collect_info_t;
+
+#define k_tof_collect_H_pad 1
+#define k_tof_collect_H_size (256+16+k_tof_collect_H_pad)
+#define k_tof_collect_Hraw_size (2*k_tof_collect_H_size)
+typedef BWL_PRE_PACKED_STRUCT struct wl_proxd_collect_data {
+ wl_proxd_collect_info_t info;
+ uint32 H[k_tof_collect_H_size]; /* raw data read from phy used to adjust timestamps */
+
+} BWL_POST_PACKED_STRUCT wl_proxd_collect_data_t;
+
+typedef BWL_PRE_PACKED_STRUCT struct wl_proxd_debug_data {
+ uint8 count; /* number of packets */
+ uint8 stage; /* state machine stage */
+ uint8 received; /* received or txed */
+ uint8 paket_type; /* packet type */
+ uint8 category; /* category field */
+ uint8 action; /* action field */
+ uint8 token; /* token number */
+ uint8 follow_token; /* following token number */
+ uint16 index; /* index of the packet */
+ uint16 tof_cmd; /* M_TOF_CMD */
+ uint16 tof_rsp; /* M_TOF_RSP */
+ uint16 tof_avb_rxl; /* M_TOF_AVB_RX_L */
+ uint16 tof_avb_rxh; /* M_TOF_AVB_RX_H */
+ uint16 tof_avb_txl; /* M_TOF_AVB_TX_L */
+ uint16 tof_avb_txh; /* M_TOF_AVB_TX_H */
+ uint16 tof_id; /* M_TOF_ID */
+ uint16 tof_status0; /* M_TOF_STATUS_0 */
+ uint16 tof_status2; /* M_TOF_STATUS_2 */
+ uint16 tof_chsm0; /* M_TOF_CHNSM_0 */
+ uint16 tof_phyctl0; /* M_TOF_PHYCTL0 */
+ uint16 tof_phyctl1; /* M_TOF_PHYCTL1 */
+ uint16 tof_phyctl2; /* M_TOF_PHYCTL2 */
+ uint16 tof_lsig; /* M_TOF_LSIG */
+ uint16 tof_vhta0; /* M_TOF_VHTA0 */
+ uint16 tof_vhta1; /* M_TOF_VHTA1 */
+ uint16 tof_vhta2; /* M_TOF_VHTA2 */
+ uint16 tof_vhtb0; /* M_TOF_VHTB0 */
+ uint16 tof_vhtb1; /* M_TOF_VHTB1 */
+ uint16 tof_apmductl; /* M_TOF_AMPDU_CTL */
+ uint16 tof_apmdudlim; /* M_TOF_AMPDU_DLIM */
+ uint16 tof_apmdulen; /* M_TOF_AMPDU_LEN */
+} BWL_POST_PACKED_STRUCT wl_proxd_debug_data_t;
+
+/* version of the wl_wsec_info structure */
+#define WL_WSEC_INFO_VERSION 0x01
+
+/* start enum value for BSS properties */
+#define WL_WSEC_INFO_BSS_BASE 0x0100
+
+/* size of len and type fields of wl_wsec_info_tlv_t struct */
+#define WL_WSEC_INFO_TLV_HDR_LEN OFFSETOF(wl_wsec_info_tlv_t, data)
+
+/* Allowed wl_wsec_info properties; not all of them may be supported. */
+typedef enum {
+ WL_WSEC_INFO_NONE = 0,
+ WL_WSEC_INFO_MAX_KEYS = 1,
+ WL_WSEC_INFO_NUM_KEYS = 2,
+ WL_WSEC_INFO_NUM_HW_KEYS = 3,
+ WL_WSEC_INFO_MAX_KEY_IDX = 4,
+ WL_WSEC_INFO_NUM_REPLAY_CNTRS = 5,
+ WL_WSEC_INFO_SUPPORTED_ALGOS = 6,
+ WL_WSEC_INFO_MAX_KEY_LEN = 7,
+ WL_WSEC_INFO_FLAGS = 8,
+ /* add global/per-wlc properties above */
+ WL_WSEC_INFO_BSS_FLAGS = (WL_WSEC_INFO_BSS_BASE + 1),
+ WL_WSEC_INFO_BSS_WSEC = (WL_WSEC_INFO_BSS_BASE + 2),
+ WL_WSEC_INFO_BSS_TX_KEY_ID = (WL_WSEC_INFO_BSS_BASE + 3),
+ WL_WSEC_INFO_BSS_ALGO = (WL_WSEC_INFO_BSS_BASE + 4),
+ WL_WSEC_INFO_BSS_KEY_LEN = (WL_WSEC_INFO_BSS_BASE + 5),
+ /* add per-BSS properties above */
+ WL_WSEC_INFO_MAX = 0xffff
+} wl_wsec_info_type_t;
+
+/* tlv used to return wl_wsec_info properties */
+typedef struct {
+ uint16 type;
+ uint16 len; /* data length */
+ uint8 data[1]; /* data follows */
+} wl_wsec_info_tlv_t;
+
+/* input/output data type for wsec_info iovar */
+typedef struct wl_wsec_info {
+ uint8 version; /* structure version */
+ uint8 pad[2];
+ uint8 num_tlvs;
+ wl_wsec_info_tlv_t tlvs[1]; /* tlv data follows */
+} wl_wsec_info_t;
+
+/* no default structure packing */
+#include <packed_section_end.h>
+
+enum rssi_reason {
+ RSSI_REASON_UNKNOW = 0,
+ RSSI_REASON_LOWRSSI = 1,
+ RSSI_REASON_NSYC = 2,
+ RSSI_REASON_TIMEOUT = 3
+};
+
+enum tof_reason {
+ TOF_REASON_OK = 0,
+ TOF_REASON_REQEND = 1,
+ TOF_REASON_TIMEOUT = 2,
+ TOF_REASON_NOACK = 3,
+ TOF_REASON_INVALIDAVB = 4,
+ TOF_REASON_INITIAL = 5,
+ TOF_REASON_ABORT = 6
+};
+
+enum rssi_state {
+ RSSI_STATE_POLL = 0,
+ RSSI_STATE_TPAIRING = 1,
+ RSSI_STATE_IPAIRING = 2,
+ RSSI_STATE_THANDSHAKE = 3,
+ RSSI_STATE_IHANDSHAKE = 4,
+ RSSI_STATE_CONFIRMED = 5,
+ RSSI_STATE_PIPELINE = 6,
+ RSSI_STATE_NEGMODE = 7,
+ RSSI_STATE_MONITOR = 8,
+ RSSI_STATE_LAST = 9
+};
+
+enum tof_state {
+ TOF_STATE_IDLE = 0,
+ TOF_STATE_IWAITM = 1,
+ TOF_STATE_TWAITM = 2,
+ TOF_STATE_ILEGACY = 3,
+ TOF_STATE_IWAITCL = 4,
+ TOF_STATE_TWAITCL = 5,
+ TOF_STATE_ICONFIRM = 6,
+ TOF_STATE_IREPORT = 7
+};
+
+enum tof_mode_type {
+ TOF_LEGACY_UNKNOWN = 0,
+ TOF_LEGACY_AP = 1,
+ TOF_NONLEGACY_AP = 2
+};
+
+enum tof_way_type {
+ TOF_TYPE_ONE_WAY = 0,
+ TOF_TYPE_TWO_WAY = 1,
+ TOF_TYPE_REPORT = 2
+};
+
+enum tof_rate_type {
+ TOF_FRAME_RATE_VHT = 0,
+ TOF_FRAME_RATE_LEGACY = 1
+};
+
+#define TOF_ADJ_TYPE_NUM 4 /* number of assisted timestamp adjustment types */
+enum tof_adj_mode {
+ TOF_ADJ_SOFTWARE = 0,
+ TOF_ADJ_HARDWARE = 1,
+ TOF_ADJ_SEQ = 2,
+ TOF_ADJ_NONE = 3
+};
+
+#define FRAME_TYPE_NUM 4 /* number of frame types */
+enum frame_type {
+ FRAME_TYPE_CCK = 0,
+ FRAME_TYPE_OFDM = 1,
+ FRAME_TYPE_11N = 2,
+ FRAME_TYPE_11AC = 3
};
typedef struct wl_proxd_status_iovar {
- uint8 mode;
- uint8 peermode;
- uint8 state;
- uint8 reason;
- uint32 txcnt;
- uint32 rxcnt;
- struct ether_addr peer;
- int16 hi_rssi;
- int16 low_rssi;
+ uint16 method; /* method */
+ uint8 mode; /* mode */
+ uint8 peermode; /* peer mode */
+ uint8 state; /* state */
+ uint8 reason; /* reason code */
+ uint32 distance; /* distance */
+ uint32 txcnt; /* tx pkt counter */
+ uint32 rxcnt; /* rx pkt counter */
+ struct ether_addr peer; /* peer mac address */
+ int8 avg_rssi; /* average rssi */
+ int8 hi_rssi; /* highest rssi */
+ int8 low_rssi; /* lowest rssi */
+ uint32 dbgstatus; /* debug status */
+ uint16 frame_type_cnt[FRAME_TYPE_NUM]; /* frame types */
+ uint8 adj_type_cnt[TOF_ADJ_TYPE_NUM]; /* adj types HW/SW */
} wl_proxd_status_iovar_t;
#ifdef NET_DETECT
@@ -4557,6 +5740,20 @@
uint16 reps;
} statreq_t;
+#define WL_RRM_RPT_VER 0
+#define WL_RRM_RPT_MAX_PAYLOAD 64
+#define WL_RRM_RPT_MIN_PAYLOAD 7
+#define WL_RRM_RPT_FALG_ERR 0
+#define WL_RRM_RPT_FALG_OK 1
+typedef struct {
+ uint16 ver; /* version */
+ struct ether_addr addr; /* STA MAC addr */
+ uint32 timestamp; /* timestamp of the report */
+ uint16 flag; /* flag */
+ uint16 len; /* length of payload data */
+ unsigned char data[WL_RRM_RPT_MAX_PAYLOAD];
+} statrpt_t;
+
typedef struct wlc_l2keepalive_ol_params {
uint8 flags;
uint8 prio;
@@ -4605,6 +5802,33 @@
struct ether_addr ea;
} wlc_stamon_sta_config_t;
+#ifdef SR_DEBUG
+typedef struct /* pmu_reg */{
+ uint32 pmu_control;
+ uint32 pmu_capabilities;
+ uint32 pmu_status;
+ uint32 res_state;
+ uint32 res_pending;
+ uint32 pmu_timer1;
+ uint32 min_res_mask;
+ uint32 max_res_mask;
+ uint32 pmu_chipcontrol1[4];
+ uint32 pmu_regcontrol[5];
+ uint32 pmu_pllcontrol[5];
+ uint32 pmu_rsrc_up_down_timer[31];
+ uint32 rsrc_dep_mask[31];
+} pmu_reg_t;
+#endif /* SR_DEBUG */
+
+typedef struct wl_taf_define {
+ struct ether_addr ea; /* STA MAC or 0xFF... */
+ uint16 version; /* version */
+ uint32 sch; /* method index */
+ uint32 prio; /* priority */
+ uint32 misc; /* used for return value */
+ char text[1]; /* used to pass and return ascii text */
+} wl_taf_define_t;
+
/* Received Beacons lengths information */
#define WL_LAST_BCNS_INFO_FIXED_LEN OFFSETOF(wlc_bcn_len_hist_t, bcnlen_ring)
typedef struct wlc_bcn_len_hist {
@@ -4629,4 +5853,738 @@
} wl_bssload_static_t;
+/* LTE coex info */
+/* Analogue of HCI Set MWS Signaling cmd */
+typedef struct {
+ uint16 mws_rx_assert_offset;
+ uint16 mws_rx_assert_jitter;
+ uint16 mws_rx_deassert_offset;
+ uint16 mws_rx_deassert_jitter;
+ uint16 mws_tx_assert_offset;
+ uint16 mws_tx_assert_jitter;
+ uint16 mws_tx_deassert_offset;
+ uint16 mws_tx_deassert_jitter;
+ uint16 mws_pattern_assert_offset;
+ uint16 mws_pattern_assert_jitter;
+ uint16 mws_inact_dur_assert_offset;
+ uint16 mws_inact_dur_assert_jitter;
+ uint16 mws_scan_freq_assert_offset;
+ uint16 mws_scan_freq_assert_jitter;
+ uint16 mws_prio_assert_offset_req;
+} wci2_config_t;
+
+/* Analogue of HCI MWS Channel Params */
+typedef struct {
+ uint16 mws_rx_center_freq; /* MHz */
+ uint16 mws_tx_center_freq;
+ uint16 mws_rx_channel_bw; /* KHz */
+ uint16 mws_tx_channel_bw;
+ uint8 mws_channel_en;
+ uint8 mws_channel_type; /* Don't care for WLAN? */
+} mws_params_t;
+
+/* MWS wci2 message */
+typedef struct {
+ uint8 mws_wci2_data; /* BT-SIG msg */
+ uint16 mws_wci2_interval; /* Interval in us */
+ uint16 mws_wci2_repeat; /* No of msgs to send */
+} mws_wci2_msg_t;
+
+typedef struct {
+ uint32 config; /* MODE: AUTO (-1), Disable (0), Enable (1) */
+ uint32 status; /* Current state: Disabled (0), Enabled (1) */
+} wl_config_t;
+
+#define WLC_RSDB_MODE_AUTO_MASK 0x80
+#define WLC_RSDB_EXTRACT_MODE(val) ((int8)((val) & (~(WLC_RSDB_MODE_AUTO_MASK))))
+
+#define WL_IF_STATS_T_VERSION 1 /* current version of wl_if_stats structure */
+
+/* per interface counters */
+typedef struct wl_if_stats {
+ uint16 version; /* version of the structure */
+ uint16 length; /* length of the entire structure */
+ uint32 PAD; /* padding */
+
+ /* transmit stat counters */
+ uint64 txframe; /* tx data frames */
+ uint64 txbyte; /* tx data bytes */
+ uint64 txerror; /* tx data errors (derived: sum of others) */
+ uint64 txnobuf; /* tx out of buffer errors */
+ uint64 txrunt; /* tx runt frames */
+ uint64 txfail; /* tx failed frames */
+ uint64 txretry; /* tx retry frames */
+ uint64 txretrie; /* tx multiple retry frames */
+ uint64 txfrmsnt; /* tx sent frames */
+ uint64 txmulti; /* tx multicast sent frames */
+ uint64 txfrag; /* tx fragments sent */
+
+ /* receive stat counters */
+ uint64 rxframe; /* rx data frames */
+ uint64 rxbyte; /* rx data bytes */
+ uint64 rxerror; /* rx data errors (derived: sum of others) */
+ uint64 rxnobuf; /* rx out of buffer errors */
+ uint64 rxrunt; /* rx runt frames */
+ uint64 rxfragerr; /* rx fragment errors */
+ uint64 rxmulti; /* rx multicast frames */
+}
+wl_if_stats_t;
+
+typedef struct wl_band {
+ uint16 bandtype; /* WL_BAND_2G, WL_BAND_5G */
+ uint16 bandunit; /* bandstate[] index */
+ uint16 phytype; /* phytype */
+ uint16 phyrev;
+}
+wl_band_t;
+
+#define WL_WLC_VERSION_T_VERSION 1 /* current version of wlc_version structure */
+
+/* wlc interface version */
+typedef struct wl_wlc_version {
+ uint16 version; /* version of the structure */
+ uint16 length; /* length of the entire structure */
+
+ /* epi version numbers */
+ uint16 epi_ver_major; /* epi major version number */
+ uint16 epi_ver_minor; /* epi minor version number */
+ uint16 epi_rc_num; /* epi RC number */
+ uint16 epi_incr_num; /* epi increment number */
+
+ /* wlc interface version numbers */
+ uint16 wlc_ver_major; /* wlc interface major version number */
+ uint16 wlc_ver_minor; /* wlc interface minor version number */
+}
+wl_wlc_version_t;
+
+/* Version of WLC interface to be returned as a part of wl_wlc_version structure.
+ * For the discussion related to versions update policy refer to
+ * http://hwnbu-twiki.broadcom.com/bin/view/Mwgroup/WlShimAbstractionLayer
+ * For now the policy is to increment WLC_VERSION_MAJOR each time
+ * there is a change that involves both WLC layer and per-port layer.
+ * WLC_VERSION_MINOR is currently not in use.
+ */
+#define WLC_VERSION_MAJOR 2
+#define WLC_VERSION_MINOR 0
+
+/* current version of WLC interface supported by WL layer */
+#define WL_SUPPORTED_WLC_VER_MAJOR 3
+#define WL_SUPPORTED_WLC_VER_MINOR 0
+
+/* require strict packing */
+#include <packed_section_start.h>
+
+#define WL_PROXD_API_VERSION 0x0300 /* version 3.0 */
+
+/* proximity detection methods */
+enum {
+ WL_PROXD_METHOD_NONE = 0,
+ WL_PROXD_METHOD_RSVD1 = 1, /* backward compatibility - RSSI, not supported */
+ WL_PROXD_METHOD_TOF = 2, /* 11v+BCM proprietary */
+ WL_PROXD_METHOD_RSVD2 = 3, /* 11v only - if needed */
+ WL_PROXD_METHOD_FTM = 4, /* IEEE rev mc/2014 */
+ WL_PROXD_METHOD_MAX
+};
+typedef int16 wl_proxd_method_t;
+
+/* global and method configuration flags */
+enum {
+ WL_PROXD_FLAG_NONE = 0x00000000,
+ WL_PROXD_FLAG_RX_ENABLED = 0x00000001, /* respond to requests */
+ WL_PROXD_FLAG_RX_RANGE_REQ = 0x00000002, /* 11mc range requests enabled */
+ WL_PROXD_FLAG_TX_LCI = 0x00000004, /* transmit location, if available */
+ WL_PROXD_FLAG_TX_CIVIC = 0x00000008, /* tx civic loc, if available */
+ WL_PROXD_FLAG_RX_AUTO_BURST = 0x00000010, /* respond to requests w/o host action */
+ WL_PROXD_FLAG_TX_AUTO_BURST = 0x00000020, /* continue requests w/o host action */
+ WL_PROXD_FLAG_ALL = 0xffffffff
+};
+typedef uint32 wl_proxd_flags_t;
+
+/* session flags */
+enum {
+ WL_PROXD_SESSION_FLAG_NONE = 0x00000000, /* no flags */
+ WL_PROXD_SESSION_FLAG_INITIATOR = 0x00000001, /* local device is initiator */
+ WL_PROXD_SESSION_FLAG_TARGET = 0x00000002, /* local device is target */
+ WL_PROXD_SESSION_FLAG_ONE_WAY = 0x00000004, /* (initiated) 1-way rtt */
+ WL_PROXD_SESSION_FLAG_AUTO_BURST = 0x00000008, /* created w/ rx_auto_burst */
+ WL_PROXD_SESSION_FLAG_PERSIST = 0x00000010, /* good until cancelled */
+ WL_PROXD_SESSION_FLAG_RTT_DETAIL = 0x00000020, /* rtt detail in results */
+ WL_PROXD_SESSION_FLAG_TOF_COMPAT = 0x00000040, /* TOF compatibility - TBD */
+ WL_PROXD_SESSION_FLAG_AOA = 0x00000080, /* AOA along w/ RTT */
+ WL_PROXD_SESSION_FLAG_RX_AUTO_BURST = 0x00000100, /* Same as proxd flags above */
+ WL_PROXD_SESSION_FLAG_TX_AUTO_BURST = 0x00000200, /* Same as proxd flags above */
+ WL_PROXD_SESSION_FLAG_NAN_BSS = 0x00000400, /* Use NAN BSS, if applicable */
+ WL_PROXD_SESSION_FLAG_TS1 = 0x00000800, /* e.g. FTM1 - cap or rx */
+ WL_PROXD_SESSION_FLAG_REPORT_FAILURE = 0x00002000, /* report failure to target */
+ WL_PROXD_SESSION_FLAG_INITIATOR_RPT = 0x00004000, /* report distance to target */
+ WL_PROXD_SESSION_FLAG_NOCHANSWT = 0x00008000, /* No channel switching */
+ WL_PROXD_SESSION_FLAG_NETRUAL = 0x00010000, /* neutral mode */
+ WL_PROXD_SESSION_FLAG_SEQ_EN = 0x00020000, /* Toast */
+ WL_PROXD_SESSION_FLAG_NO_PARAM_OVRD = 0x00040000, /* no param override from target */
+ WL_PROXD_SESSION_FLAG_ASAP = 0x00080000, /* ASAP session */
+ WL_PROXD_SESSION_FLAG_REQ_LCI = 0x00100000, /* transmit LCI req */
+ WL_PROXD_SESSION_FLAG_REQ_CIV = 0x00200000, /* transmit civic loc req */
+ WL_PROXD_SESSION_FLAG_COLLECT = 0x80000000, /* debug - collect */
+ WL_PROXD_SESSION_FLAG_ALL = 0xffffffff
+};
+typedef uint32 wl_proxd_session_flags_t;
+
+/* time units - mc supports up to 0.1ns resolution */
+enum {
+ WL_PROXD_TMU_TU = 0, /* 1024us */
+ WL_PROXD_TMU_SEC = 1,
+ WL_PROXD_TMU_MILLI_SEC = 2,
+ WL_PROXD_TMU_MICRO_SEC = 3,
+ WL_PROXD_TMU_NANO_SEC = 4,
+ WL_PROXD_TMU_PICO_SEC = 5
+};
+typedef int16 wl_proxd_tmu_t;
+
+/* time interval e.g. 10ns */
+typedef struct wl_proxd_intvl {
+ uint32 intvl;
+ wl_proxd_tmu_t tmu;
+ uint8 pad[2];
+} wl_proxd_intvl_t;
+
+/* commands that can apply to proxd, method or a session */
+enum {
+ WL_PROXD_CMD_NONE = 0,
+ WL_PROXD_CMD_GET_VERSION = 1,
+ WL_PROXD_CMD_ENABLE = 2,
+ WL_PROXD_CMD_DISABLE = 3,
+ WL_PROXD_CMD_CONFIG = 4,
+ WL_PROXD_CMD_START_SESSION = 5,
+ WL_PROXD_CMD_BURST_REQUEST = 6,
+ WL_PROXD_CMD_STOP_SESSION = 7,
+ WL_PROXD_CMD_DELETE_SESSION = 8,
+ WL_PROXD_CMD_GET_RESULT = 9,
+ WL_PROXD_CMD_GET_INFO = 10,
+ WL_PROXD_CMD_GET_STATUS = 11,
+ WL_PROXD_CMD_GET_SESSIONS = 12,
+ WL_PROXD_CMD_GET_COUNTERS = 13,
+ WL_PROXD_CMD_CLEAR_COUNTERS = 14,
+ WL_PROXD_CMD_COLLECT = 15,
+ WL_PROXD_CMD_TUNE = 16,
+ WL_PROXD_CMD_DUMP = 17,
+ WL_PROXD_CMD_START_RANGING = 18,
+ WL_PROXD_CMD_STOP_RANGING = 19,
+ WL_PROXD_CMD_GET_RANGING_INFO = 20,
+ WL_PROXD_CMD_MAX
+};
+typedef int16 wl_proxd_cmd_t;
+
+/* session ids:
+ * id 0 is reserved
+ * ids 1..0x7fff - allocated by host/app
+ * 0x8000-0xffff - allocated by firmware, used for auto/rx
+ */
+enum {
+ WL_PROXD_SESSION_ID_GLOBAL = 0
+};
+
+#define WL_PROXD_SID_HOST_MAX 0x7fff
+#define WL_PROXD_SID_HOST_ALLOC(_sid) ((_sid) > 0 && (_sid) <= WL_PROXD_SID_HOST_MAX)
+
+/* maximum number of sessions that can be allocated; may be less if tunable */
+#define WL_PROXD_MAX_SESSIONS 16
+
+typedef uint16 wl_proxd_session_id_t;
+
+/* status - TBD BCME_ vs proxd status - range reserved for BCME_ */
+enum {
+ WL_PROXD_E_INCOMPLETE = -1044,
+ WL_PROXD_E_OVERRIDDEN = -1043,
+ WL_PROXD_E_ASAP_FAILED = -1042,
+ WL_PROXD_E_NOTSTARTED = -1041,
+ WL_PROXD_E_INVALIDAVB = -1040,
+ WL_PROXD_E_INCAPABLE = -1039,
+ WL_PROXD_E_MISMATCH = -1038,
+ WL_PROXD_E_DUP_SESSION = -1037,
+ WL_PROXD_E_REMOTE_FAIL = -1036,
+ WL_PROXD_E_REMOTE_INCAPABLE = -1035,
+ WL_PROXD_E_SCHED_FAIL = -1034,
+ WL_PROXD_E_PROTO = -1033,
+ WL_PROXD_E_EXPIRED = -1032,
+ WL_PROXD_E_TIMEOUT = -1031,
+ WL_PROXD_E_NOACK = -1030,
+ WL_PROXD_E_DEFERRED = -1029,
+ WL_PROXD_E_INVALID_SID = -1028,
+ WL_PROXD_E_REMOTE_CANCEL = -1027,
+ WL_PROXD_E_CANCELED = -1026, /* local */
+ WL_PROXD_E_INVALID_SESSION = -1025,
+ WL_PROXD_E_BAD_STATE = -1024,
+ WL_PROXD_E_ERROR = -1,
+ WL_PROXD_E_OK = 0
+};
+typedef int32 wl_proxd_status_t;
+
+/* session states */
+enum {
+ WL_PROXD_SESSION_STATE_NONE = 0,
+ WL_PROXD_SESSION_STATE_CREATED = 1,
+ WL_PROXD_SESSION_STATE_CONFIGURED = 2,
+ WL_PROXD_SESSION_STATE_STARTED = 3,
+ WL_PROXD_SESSION_STATE_DELAY = 4,
+ WL_PROXD_SESSION_STATE_USER_WAIT = 5,
+ WL_PROXD_SESSION_STATE_SCHED_WAIT = 6,
+ WL_PROXD_SESSION_STATE_BURST = 7,
+ WL_PROXD_SESSION_STATE_STOPPING = 8,
+ WL_PROXD_SESSION_STATE_ENDED = 9,
+ WL_PROXD_SESSION_STATE_DESTROYING = -1
+};
+typedef int16 wl_proxd_session_state_t;
+
+/* RTT sample flags */
+enum {
+ WL_PROXD_RTT_SAMPLE_NONE = 0x00,
+ WL_PROXD_RTT_SAMPLE_DISCARD = 0x01
+};
+typedef uint8 wl_proxd_rtt_sample_flags_t;
+
+typedef struct wl_proxd_rtt_sample {
+ uint8 id; /* id for the sample - non-zero */
+ wl_proxd_rtt_sample_flags_t flags;
+ int16 rssi;
+ wl_proxd_intvl_t rtt; /* round trip time */
+ uint32 ratespec;
+} wl_proxd_rtt_sample_t;
+
+/* result flags */
+enum {
+ WL_PRXOD_RESULT_FLAG_NONE = 0x0000,
+ WL_PROXD_RESULT_FLAG_NLOS = 0x0001, /* NLOS - if available */
+ WL_PROXD_RESULT_FLAG_LOS = 0x0002, /* LOS - if available */
+ WL_PROXD_RESULT_FLAG_FATAL = 0x0004, /* Fatal error during burst */
+ WL_PROXD_RESULT_FLAG_VHTACK = 0x0008, /* VHTACK or Legacy ACK used */
+ WL_PROXD_RESULT_FLAG_ALL = 0xffff
+};
+typedef int16 wl_proxd_result_flags_t;
+
+/* rtt measurement result */
+typedef struct wl_proxd_rtt_result {
+ wl_proxd_session_id_t sid;
+ wl_proxd_result_flags_t flags;
+ wl_proxd_status_t status;
+ struct ether_addr peer;
+ wl_proxd_session_state_t state; /* current state */
+ union {
+ wl_proxd_intvl_t retry_after; /* hint for errors */
+ wl_proxd_intvl_t burst_duration; /* burst duration */
+ } u;
+ wl_proxd_rtt_sample_t avg_rtt;
+ uint32 avg_dist; /* 1/256m units */
+ uint16 sd_rtt; /* RTT standard deviation */
+ uint8 num_valid_rtt; /* valid rtt cnt */
+ uint8 num_ftm; /* actual num of ftm cnt */
+ uint16 burst_num; /* in a session */
+ uint16 num_rtt; /* 0 if no detail */
+ wl_proxd_rtt_sample_t rtt[1]; /* variable */
+} wl_proxd_rtt_result_t;
+
+/* aoa measurement result */
+typedef struct wl_proxd_aoa_result {
+ wl_proxd_session_id_t sid;
+ wl_proxd_result_flags_t flags;
+ wl_proxd_status_t status;
+ struct ether_addr peer;
+ wl_proxd_session_state_t state;
+ uint16 burst_num;
+ uint8 pad[2];
+ /* wl_proxd_aoa_sample_t sample_avg; TBD */
+} BWL_POST_PACKED_STRUCT wl_proxd_aoa_result_t;
+
+/* global stats */
+typedef struct wl_proxd_counters {
+ uint32 tx; /* tx frame count */
+ uint32 rx; /* rx frame count */
+ uint32 burst; /* total number of burst */
+ uint32 sessions; /* total number of sessions */
+ uint32 max_sessions; /* max concurrency */
+ uint32 sched_fail; /* scheduling failures */
+ uint32 timeouts; /* timeouts */
+ uint32 protoerr; /* protocol errors */
+ uint32 noack; /* tx w/o ack */
+ uint32 txfail; /* any tx failure */
+ uint32 lci_req_tx; /* tx LCI requests */
+ uint32 lci_req_rx; /* rx LCI requests */
+ uint32 lci_rep_tx; /* tx LCI reports */
+ uint32 lci_rep_rx; /* rx LCI reports */
+ uint32 civic_req_tx; /* tx civic requests */
+ uint32 civic_req_rx; /* rx civic requests */
+ uint32 civic_rep_tx; /* tx civic reports */
+ uint32 civic_rep_rx; /* rx civic reports */
+ uint32 rctx; /* ranging contexts created */
+ uint32 rctx_done; /* count of ranging done */
+} wl_proxd_counters_t;
+
+typedef struct wl_proxd_counters wl_proxd_session_counters_t;
+
+enum {
+ WL_PROXD_CAP_NONE = 0x0000,
+ WL_PROXD_CAP_ALL = 0xffff
+};
+typedef int16 wl_proxd_caps_t;
+
+/* method capabilities */
+enum {
+ WL_PROXD_FTM_CAP_NONE = 0x0000,
+ WL_PROXD_FTM_CAP_FTM1 = 0x0001
+};
+typedef uint16 wl_proxd_ftm_caps_t;
+
+typedef struct BWL_PRE_PACKED_STRUCT wl_proxd_tlv_id_list {
+ uint16 num_ids;
+ uint16 ids[1];
+} BWL_POST_PACKED_STRUCT wl_proxd_tlv_id_list_t;
+
+typedef struct wl_proxd_session_id_list {
+ uint16 num_ids;
+ wl_proxd_session_id_t ids[1];
+} wl_proxd_session_id_list_t;
+
+/* tlvs returned for get_info on ftm method
+ * configuration:
+ * proxd flags
+ * event mask
+ * debug mask
+ * session defaults (session tlvs)
+ * status tlv - not supported for ftm method
+ * info tlv
+ */
+typedef struct wl_proxd_ftm_info {
+ wl_proxd_ftm_caps_t caps;
+ uint16 max_sessions;
+ uint16 num_sessions;
+ uint16 rx_max_burst;
+} wl_proxd_ftm_info_t;
+
+/* tlvs returned for get_info on session
+ * session config (tlvs)
+ * session info tlv
+ */
+typedef struct wl_proxd_ftm_session_info {
+ uint16 sid;
+ uint8 bss_index;
+ uint8 pad;
+ struct ether_addr bssid;
+ wl_proxd_session_state_t state;
+ wl_proxd_status_t status;
+ uint16 burst_num;
+} wl_proxd_ftm_session_info_t;
+
+typedef struct wl_proxd_ftm_session_status {
+ uint16 sid;
+ wl_proxd_session_state_t state;
+ wl_proxd_status_t status;
+ uint16 burst_num;
+} wl_proxd_ftm_session_status_t;
+
+/* rrm range request */
+typedef struct wl_proxd_range_req {
+ uint16 num_repeat;
+ uint16 init_delay_range; /* in TUs */
+ uint8 pad;
+ uint8 num_nbr; /* number of (possible) neighbors */
+ nbr_element_t nbr[1];
+} wl_proxd_range_req_t;
+
+#define WL_PROXD_LCI_LAT_OFF 0
+#define WL_PROXD_LCI_LONG_OFF 5
+#define WL_PROXD_LCI_ALT_OFF 10
+
+#define WL_PROXD_LCI_GET_LAT(_lci, _lat, _lat_err) { \
+ unsigned _off = WL_PROXD_LCI_LAT_OFF; \
+ _lat_err = (_lci)->data[(_off)] & 0x3f; \
+ _lat = (_lci)->data[(_off)+1]; \
+ _lat |= (_lci)->data[(_off)+2] << 8; \
+ _lat |= (_lci)->data[(_off)+3] << 16; \
+ _lat |= (_lci)->data[(_off)+4] << 24; \
+ _lat <<= 2; \
+ _lat |= (_lci)->data[(_off)] >> 6; \
+}
+
+#define WL_PROXD_LCI_GET_LONG(_lci, _lcilong, _long_err) { \
+ unsigned _off = WL_PROXD_LCI_LONG_OFF; \
+ _long_err = (_lci)->data[(_off)] & 0x3f; \
+ _lcilong = (_lci)->data[(_off)+1]; \
+ _lcilong |= (_lci)->data[(_off)+2] << 8; \
+ _lcilong |= (_lci)->data[(_off)+3] << 16; \
+ _lcilong |= (_lci)->data[(_off)+4] << 24; \
+ _lcilong <<= 2; \
+ _lcilong |= (_lci)->data[(_off)] >> 6; \
+}
+
+#define WL_PROXD_LCI_GET_ALT(_lci, _alt_type, _alt, _alt_err) { \
+ unsigned _off = WL_PROXD_LCI_ALT_OFF; \
+ _alt_type = (_lci)->data[_off] & 0x0f; \
+ _alt_err = (_lci)->data[(_off)] >> 4; \
+ _alt_err |= ((_lci)->data[(_off)+1] & 0x03) << 4; \
+ _alt = (_lci)->data[(_off)+2]; \
+ _alt |= (_lci)->data[(_off)+3] << 8; \
+ _alt |= (_lci)->data[(_off)+4] << 16; \
+ _alt <<= 6; \
+ _alt |= (_lci)->data[(_off) + 1] >> 2; \
+}
+
+#define WL_PROXD_LCI_VERSION(_lci) ((_lci)->data[15] >> 6)
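The accessor macros above unpack the packed LCI fields byte by byte. As a standalone illustration of the latitude layout, here is a hedged C sketch (the function name and the `uint64_t` widening are mine, not driver API): the low 6 bits of the first byte carry the uncertainty, its top 2 bits are the least-significant latitude bits, and the next four bytes hold the remaining bits little-endian.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper mirroring WL_PROXD_LCI_GET_LAT with the offset
 * fixed at WL_PROXD_LCI_LAT_OFF (0). Widening to uint64_t keeps the
 * two extra bits that the macro's final "<<= 2 / |=" steps append.
 */
static void lci_get_lat(const uint8_t *data, uint64_t *lat, uint8_t *lat_err)
{
    unsigned off = 0;                       /* WL_PROXD_LCI_LAT_OFF */

    *lat_err = data[off] & 0x3f;            /* low 6 bits: uncertainty */
    *lat  = (uint64_t)data[off + 1];
    *lat |= (uint64_t)data[off + 2] << 8;
    *lat |= (uint64_t)data[off + 3] << 16;
    *lat |= (uint64_t)data[off + 4] << 24;
    *lat <<= 2;                             /* make room for ...        */
    *lat |= data[off] >> 6;                 /* ... the 2 LSBs in byte 0 */
}
```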
+
+/* availability. advertising mechanism bss specific */
+/* availability flags */
+enum {
+ WL_PROXD_AVAIL_NONE = 0,
+ WL_PROXD_AVAIL_NAN_PUBLISHED = 0x0001,
+ WL_PROXD_AVAIL_SCHEDULED = 0x0002 /* scheduled by proxd */
+};
+typedef int16 wl_proxd_avail_flags_t;
+
+/* time reference */
+enum {
+ WL_PROXD_TREF_NONE = 0,
+ WL_PROXD_TREF_DEV_TSF = 1,
+ WL_PROXD_TREF_NAN_DW = 2,
+ WL_PROXD_TREF_TBTT = 3,
+ WL_PROXD_TREF_MAX /* last entry */
+};
+typedef int16 wl_proxd_time_ref_t;
+
+/* proxd channel-time slot */
+typedef struct {
+ wl_proxd_intvl_t start; /* from ref */
+ wl_proxd_intvl_t duration; /* from start */
+ uint32 chanspec;
+} wl_proxd_time_slot_t;
+
+/* availability. advertising mechanism bss specific */
+typedef struct wl_proxd_avail {
+ wl_proxd_avail_flags_t flags; /* for query only */
+ wl_proxd_time_ref_t time_ref;
+ uint16 max_slots; /* for query only */
+ uint16 num_slots;
+ wl_proxd_time_slot_t slots[1];
+} wl_proxd_avail_t;
+
+/* collect support TBD */
+
+/* debugging */
+enum {
+ WL_PROXD_DEBUG_NONE = 0x00000000,
+ WL_PROXD_DEBUG_LOG = 0x00000001,
+ WL_PROXD_DEBUG_IOV = 0x00000002,
+ WL_PROXD_DEBUG_EVENT = 0x00000004,
+ WL_PROXD_DEBUG_SESSION = 0x00000008,
+ WL_PROXD_DEBUG_PROTO = 0x00000010,
+ WL_PROXD_DEBUG_SCHED = 0x00000020,
+ WL_PROXD_DEBUG_RANGING = 0x00000040,
+ WL_PROXD_DEBUG_ALL = 0xffffffff
+};
+typedef uint32 wl_proxd_debug_mask_t;
+
+/* tlv IDs - data length 4 bytes unless overridden by type, alignment 32 bits */
+enum {
+ WL_PROXD_TLV_ID_NONE = 0,
+ WL_PROXD_TLV_ID_METHOD = 1,
+ WL_PROXD_TLV_ID_FLAGS = 2,
+ WL_PROXD_TLV_ID_CHANSPEC = 3, /* note: uint32 */
+ WL_PROXD_TLV_ID_TX_POWER = 4,
+ WL_PROXD_TLV_ID_RATESPEC = 5,
+ WL_PROXD_TLV_ID_BURST_DURATION = 6, /* intvl - length of burst */
+ WL_PROXD_TLV_ID_BURST_PERIOD = 7, /* intvl - between bursts */
+ WL_PROXD_TLV_ID_BURST_FTM_SEP = 8, /* intvl - between FTMs */
+ WL_PROXD_TLV_ID_BURST_NUM_FTM = 9, /* uint16 - per burst */
+ WL_PROXD_TLV_ID_NUM_BURST = 10, /* uint16 */
+ WL_PROXD_TLV_ID_FTM_RETRIES = 11, /* uint16 at FTM level */
+ WL_PROXD_TLV_ID_BSS_INDEX = 12, /* uint8 */
+ WL_PROXD_TLV_ID_BSSID = 13,
+ WL_PROXD_TLV_ID_INIT_DELAY = 14, /* intvl - optional, non-standalone only */
+ WL_PROXD_TLV_ID_BURST_TIMEOUT = 15, /* expect response within - intvl */
+ WL_PROXD_TLV_ID_EVENT_MASK = 16, /* interested events - in/out */
+ WL_PROXD_TLV_ID_FLAGS_MASK = 17, /* interested flags - in only */
+ WL_PROXD_TLV_ID_PEER_MAC = 18, /* mac address of peer */
+ WL_PROXD_TLV_ID_FTM_REQ = 19, /* dot11_ftm_req */
+ WL_PROXD_TLV_ID_LCI_REQ = 20,
+ WL_PROXD_TLV_ID_LCI = 21,
+ WL_PROXD_TLV_ID_CIVIC_REQ = 22,
+ WL_PROXD_TLV_ID_CIVIC = 23,
+ WL_PROXD_TLV_ID_AVAIL = 24,
+ WL_PROXD_TLV_ID_SESSION_FLAGS = 25,
+ WL_PROXD_TLV_ID_SESSION_FLAGS_MASK = 26, /* in only */
+ WL_PROXD_TLV_ID_RX_MAX_BURST = 27, /* uint16 - limit bursts per session */
+ WL_PROXD_TLV_ID_RANGING_INFO = 28, /* ranging info */
+ WL_PROXD_TLV_ID_RANGING_FLAGS = 29, /* uint16 */
+ WL_PROXD_TLV_ID_RANGING_FLAGS_MASK = 30, /* uint16, in only */
+ /* 31 - 34 reserved for other feature */
+ WL_PROXD_TLV_ID_FTM_REQ_RETRIES = 35, /* uint16 FTM request retries */
+
+ /* output - 512 + x */
+ WL_PROXD_TLV_ID_STATUS = 512,
+ WL_PROXD_TLV_ID_COUNTERS = 513,
+ WL_PROXD_TLV_ID_INFO = 514,
+ WL_PROXD_TLV_ID_RTT_RESULT = 515,
+ WL_PROXD_TLV_ID_AOA_RESULT = 516,
+ WL_PROXD_TLV_ID_SESSION_INFO = 517,
+ WL_PROXD_TLV_ID_SESSION_STATUS = 518,
+ WL_PROXD_TLV_ID_SESSION_ID_LIST = 519,
+
+ /* debug tlvs can be added starting 1024 */
+ WL_PROXD_TLV_ID_DEBUG_MASK = 1024,
+ WL_PROXD_TLV_ID_COLLECT = 1025, /* output only */
+ WL_PROXD_TLV_ID_STRBUF = 1026,
+
+ WL_PROXD_TLV_ID_MAX
+};
+
+typedef struct wl_proxd_tlv {
+ uint16 id;
+ uint16 len;
+ uint8 data[1];
+} wl_proxd_tlv_t;
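wl_proxd_tlv_t is a classic id/len/data record; per the comment above the ID list, payloads default to 4 bytes and records are 32-bit aligned. A minimal sketch of walking such a buffer follows (the helper name and the alignment handling are assumptions for illustration, not driver API):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for wl_proxd_tlv_t's fixed header. */
typedef struct {
    uint16_t id;
    uint16_t len;
} tlv_hdr_t;

/* Count records with a matching id, assuming each record's payload is
 * padded to a 32-bit boundary. A real parser must also validate that
 * len does not run past the end of the buffer.
 */
static int tlv_count_id(const uint8_t *buf, size_t buflen, uint16_t want)
{
    size_t off = 0;
    int n = 0;

    while (off + sizeof(tlv_hdr_t) <= buflen) {
        tlv_hdr_t h;

        memcpy(&h, buf + off, sizeof(h));         /* avoid unaligned reads */
        if (h.id == want)
            n++;
        off += sizeof(h) + ((h.len + 3u) & ~3u);  /* 32-bit stride */
    }
    return n;
}
```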
+
+/* proxd iovar - applies to proxd, method or session */
+typedef struct wl_proxd_iov {
+ uint16 version;
+ uint16 len;
+ wl_proxd_cmd_t cmd;
+ wl_proxd_method_t method;
+ wl_proxd_session_id_t sid;
+ uint8 pad[2];
+ wl_proxd_tlv_t tlvs[1]; /* variable */
+} wl_proxd_iov_t;
+
+#define WL_PROXD_IOV_HDR_SIZE OFFSETOF(wl_proxd_iov_t, tlvs)
+
+/* The following event definitions may move to bcmevent.h, but sharing proxd
+ * types across headers needs more invasive changes unrelated to proxd
+ */
+enum {
+ WL_PROXD_EVENT_NONE = 0, /* not an event, reserved */
+ WL_PROXD_EVENT_SESSION_CREATE = 1,
+ WL_PROXD_EVENT_SESSION_START = 2,
+ WL_PROXD_EVENT_FTM_REQ = 3,
+ WL_PROXD_EVENT_BURST_START = 4,
+ WL_PROXD_EVENT_BURST_END = 5,
+ WL_PROXD_EVENT_SESSION_END = 6,
+ WL_PROXD_EVENT_SESSION_RESTART = 7,
+ WL_PROXD_EVENT_BURST_RESCHED = 8, /* burst rescheduled - e.g. partial TSF */
+ WL_PROXD_EVENT_SESSION_DESTROY = 9,
+ WL_PROXD_EVENT_RANGE_REQ = 10,
+ WL_PROXD_EVENT_FTM_FRAME = 11,
+ WL_PROXD_EVENT_DELAY = 12,
+ WL_PROXD_EVENT_VS_INITIATOR_RPT = 13, /* (target) rx initiator-report */
+ WL_PROXD_EVENT_RANGING = 14,
+
+ WL_PROXD_EVENT_MAX
+};
+typedef int16 wl_proxd_event_type_t;
+
+/* proxd event mask - up to 32 events for now */
+typedef uint32 wl_proxd_event_mask_t;
+
+#define WL_PROXD_EVENT_MASK_ALL 0xfffffffe
+#define WL_PROXD_EVENT_MASK_EVENT(_event_type) (1 << (_event_type))
+#define WL_PROXD_EVENT_ENABLED(_mask, _event_type) (\
+ ((_mask) & WL_PROXD_EVENT_MASK_EVENT(_event_type)) != 0)
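The mask macros above also explain why WL_PROXD_EVENT_MASK_ALL is 0xfffffffe rather than 0xffffffff: bit 0 corresponds to WL_PROXD_EVENT_NONE, which is reserved and never delivered. A small self-contained rendition (local names, same bit arithmetic):

```c
#include <assert.h>
#include <stdint.h>

/* Local copies of the event-mask helpers; the enum mirrors a few
 * entries of the wl_proxd event type enum for illustration.
 */
#define EVENT_MASK_EVENT(t)  (1u << (t))
#define EVENT_ENABLED(m, t)  (((m) & EVENT_MASK_EVENT(t)) != 0)

enum { EV_NONE = 0, EV_SESSION_CREATE = 1, EV_BURST_END = 5 };
```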
+
+/* proxd event - applies to proxd, method or session */
+typedef struct wl_proxd_event {
+ uint16 version;
+ uint16 len;
+ wl_proxd_event_type_t type;
+ wl_proxd_method_t method;
+ wl_proxd_session_id_t sid;
+ uint8 pad[2];
+ wl_proxd_tlv_t tlvs[1]; /* variable */
+} wl_proxd_event_t;
+
+enum {
+ WL_PROXD_RANGING_STATE_NONE = 0,
+ WL_PROXD_RANGING_STATE_NOTSTARTED = 1,
+ WL_PROXD_RANGING_STATE_INPROGRESS = 2,
+ WL_PROXD_RANGING_STATE_DONE = 3
+};
+typedef int16 wl_proxd_ranging_state_t;
+
+/* proxd ranging flags */
+enum {
+ WL_PROXD_RANGING_FLAG_NONE = 0x0000, /* no flags */
+ WL_PROXD_RANGING_FLAG_DEL_SESSIONS_ON_STOP = 0x0001,
+ WL_PROXD_RANGING_FLAG_ALL = 0xffff
+};
+typedef uint16 wl_proxd_ranging_flags_t;
+
+struct wl_proxd_ranging_info {
+ wl_proxd_status_t status;
+ wl_proxd_ranging_state_t state;
+ wl_proxd_ranging_flags_t flags;
+ uint16 num_sids;
+ uint16 num_done;
+};
+typedef struct wl_proxd_ranging_info wl_proxd_ranging_info_t;
+
+/* end proxd definitions */
+
+
+/* Data structures for Interface Create/Remove */
+
+#define WL_INTERFACE_CREATE_VER (0)
+
+/*
+ * The flags field of the wl_interface_create is designed to be
+ * a bit mask. As of now only Bit 0 and Bit 1 are used as mentioned below.
+ * The rest of the bits can be used in case we have to provide
+ * more information to the dongle.
+ */
+#define MAX_BSSLOAD_LEVELS 8
+#define MAX_BSSLOAD_RANGES (MAX_BSSLOAD_LEVELS + 1)
+
+/* BSS Load event notification configuration. */
+typedef struct wl_bssload_cfg {
+ uint32 rate_limit_msec; /* # of events posted to application will be limited to
+ * one per specified period (0 to disable rate limit).
+ */
+ uint8 num_util_levels; /* Number of entries in util_levels[] below */
+ uint8 util_levels[MAX_BSSLOAD_LEVELS];
+ /* Variable number of BSS Load utilization levels in
+ * low to high order. An event will be posted each time
+ * a received beacon's BSS Load IE channel utilization
+ * value crosses a level.
+ */
+} wl_bssload_cfg_t;
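wl_bssload_cfg_t divides channel utilization into MAX_BSSLOAD_RANGES (= levels + 1) bands; an event is posted when a beacon's BSS Load utilization moves from one band to another. A hedged sketch of the band lookup (the helper name is mine, not part of the driver):

```c
#include <assert.h>
#include <stdint.h>

/* Given num thresholds sorted low to high (util_levels[]), return which
 * of the num + 1 ranges a channel-utilization value falls into. An
 * event would fire whenever successive beacons land in different ranges.
 */
static int bssload_range(const uint8_t *levels, int num, uint8_t util)
{
    int i;

    for (i = 0; i < num; i++)
        if (util < levels[i])
            return i;
    return num;  /* above the highest threshold */
}
```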
+
+/* Multiple roaming profile support */
+#define WL_MAX_ROAM_PROF_BRACKETS 4
+
+#define WL_MAX_ROAM_PROF_VER 0
+
+#define WL_ROAM_PROF_NONE (0 << 0)
+#define WL_ROAM_PROF_LAZY (1 << 0)
+#define WL_ROAM_PROF_NO_CI (1 << 1)
+#define WL_ROAM_PROF_SUSPEND (1 << 2)
+#define WL_ROAM_PROF_SYNC_DTIM (1 << 6)
+#define WL_ROAM_PROF_DEFAULT (1 << 7) /* backward compatible single default profile */
+
+typedef struct wl_roam_prof {
+ int8 roam_flags; /* bit flags */
+ int8 roam_trigger; /* RSSI trigger level per profile/RSSI bracket */
+ int8 rssi_lower;
+ int8 roam_delta;
+ int8 rssi_boost_thresh; /* Min RSSI to qualify for RSSI boost */
+ int8 rssi_boost_delta; /* RSSI boost for AP in the other band */
+ uint16 nfscan; /* number of full scans to start with */
+ uint16 fullscan_period;
+ uint16 init_scan_period;
+ uint16 backoff_multiplier;
+ uint16 max_scan_period;
+} wl_roam_prof_t;
+
+typedef struct wl_roam_prof_band {
+ uint32 band; /* Must be just one band */
+ uint16 ver; /* version of this struct */
+ uint16 len; /* length in bytes of this structure */
+ wl_roam_prof_t roam_prof[WL_MAX_ROAM_PROF_BRACKETS];
+} wl_roam_prof_band_t;
+
+/* no default structure packing */
+#include <packed_section_end.h>
+
#endif /* _wlioctl_h_ */
diff --git a/drivers/net/wireless/bcmdhd/linux_osl.c b/drivers/net/wireless/bcmdhd/linux_osl.c
old mode 100755
new mode 100644
index 5d9e862..5949ac1
--- a/drivers/net/wireless/bcmdhd/linux_osl.c
+++ b/drivers/net/wireless/bcmdhd/linux_osl.c
@@ -2,13 +2,13 @@
* Linux OS Independent Layer
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -16,12 +16,12 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: linux_osl.c 451649 2014-01-27 17:23:38Z $
+ * $Id: linux_osl.c 474402 2014-05-01 03:50:41Z $
*/
#define LINUX_PORT
@@ -31,9 +31,11 @@
#include <linuxver.h>
#include <bcmdefs.h>
-#ifdef __ARM_ARCH_7A__
+#if defined(USE_KMALLOC_FOR_FLOW_RING) && defined(__ARM_ARCH_7A__)
#include <asm/cacheflush.h>
-#endif /* __ARM_ARCH_7A__ */
+#endif
+
+#include <linux/random.h>
#include <osl.h>
#include <bcmutils.h>
@@ -44,6 +46,11 @@
#include <linux/fs.h>
+
+#ifdef BCMPCIE
+#include <bcmpcie.h>
+#endif /* BCMPCIE */
+
#define PCI_CFG_RETRY 10
#define OS_HANDLE_MAGIC 0x1234abcd /* Magic # to recognize osh */
@@ -51,10 +58,9 @@
#define DUMPBUFSZ 1024
#ifdef CONFIG_DHD_USE_STATIC_BUF
-#define DHD_SKB_HDRSIZE 336
-#define DHD_SKB_1PAGE_BUFSIZE ((PAGE_SIZE*1)-DHD_SKB_HDRSIZE)
-#define DHD_SKB_2PAGE_BUFSIZE ((PAGE_SIZE*2)-DHD_SKB_HDRSIZE)
-#define DHD_SKB_4PAGE_BUFSIZE ((PAGE_SIZE*4)-DHD_SKB_HDRSIZE)
+#define DHD_SKB_1PAGE_BUFSIZE (PAGE_SIZE*1)
+#define DHD_SKB_2PAGE_BUFSIZE (PAGE_SIZE*2)
+#define DHD_SKB_4PAGE_BUFSIZE (PAGE_SIZE*4)
#define STATIC_BUF_MAX_NUM 16
#define STATIC_BUF_SIZE (PAGE_SIZE*2)
@@ -68,27 +74,48 @@
static bcm_static_buf_t *bcm_static_buf = 0;
-#define STATIC_PKT_MAX_NUM 8
-#if defined(ENHANCED_STATIC_BUF)
+#if defined(BCMPCIE)
+#define STATIC_PKT_4PAGE_NUM 0
+#define DHD_SKB_MAX_BUFSIZE DHD_SKB_2PAGE_BUFSIZE
+#elif defined(ENHANCED_STATIC_BUF)
#define STATIC_PKT_4PAGE_NUM 1
#define DHD_SKB_MAX_BUFSIZE DHD_SKB_4PAGE_BUFSIZE
#else
#define STATIC_PKT_4PAGE_NUM 0
#define DHD_SKB_MAX_BUFSIZE DHD_SKB_2PAGE_BUFSIZE
-#endif /* ENHANCED_STATIC_BUF */
+#endif /* BCMPCIE */
+
+#ifdef BCMPCIE
+#define STATIC_PKT_1PAGE_NUM 0
+#define STATIC_PKT_2PAGE_NUM 16
+#else
+#define STATIC_PKT_1PAGE_NUM 8
+#define STATIC_PKT_2PAGE_NUM 8
+#endif /* BCMPCIE */
+
+#define STATIC_PKT_1_2PAGE_NUM \
+ ((STATIC_PKT_1PAGE_NUM) + (STATIC_PKT_2PAGE_NUM))
+#define STATIC_PKT_MAX_NUM \
+ ((STATIC_PKT_1_2PAGE_NUM) + (STATIC_PKT_4PAGE_NUM))
typedef struct bcm_static_pkt {
- struct sk_buff *skb_4k[STATIC_PKT_MAX_NUM];
- struct sk_buff *skb_8k[STATIC_PKT_MAX_NUM];
+ struct sk_buff *skb_4k[STATIC_PKT_1PAGE_NUM+1];
+ struct sk_buff *skb_8k[STATIC_PKT_2PAGE_NUM];
+#if !defined(BCMPCIE)
#ifdef ENHANCED_STATIC_BUF
struct sk_buff *skb_16k;
-#endif
+#endif /* ENHANCED_STATIC_BUF */
struct semaphore osl_pkt_sem;
- unsigned char pkt_use[STATIC_PKT_MAX_NUM * 2 + STATIC_PKT_4PAGE_NUM];
+#else
+ spinlock_t osl_pkt_lock;
+#endif /* !BCMPCIE */
+ unsigned char pkt_use[STATIC_PKT_MAX_NUM];
} bcm_static_pkt_t;
static bcm_static_pkt_t *bcm_static_skb = 0;
+
+
void* wifi_platform_prealloc(void *adapter, int section, unsigned long size);
#endif /* CONFIG_DHD_USE_STATIC_BUF */
@@ -123,11 +150,7 @@
osl_cmn_t *cmn; /* Common OSL related data shred between two OSH's */
void *bus_handle;
-#ifdef BCMDBG_CTRACE
- spinlock_t ctrace_lock;
- struct list_head ctrace_list;
- int ctrace_num;
-#endif /* BCMDBG_CTRACE */
+ uint32 flags; /* If specific cases to be handled in the OSL */
};
#define OSL_PKTTAG_CLEAR(p) \
@@ -143,7 +166,8 @@
/* PCMCIA attribute space access macros */
/* Global ASSERT type flag */
-uint32 g_assert_type = FALSE;
+uint32 g_assert_type = 0;
+module_param(g_assert_type, int, 0);
static int16 linuxbcmerrormap[] =
{ 0, /* 0 */
@@ -289,11 +313,6 @@
break;
}
-#ifdef BCMDBG_CTRACE
- spin_lock_init(&osh->ctrace_lock);
- INIT_LIST_HEAD(&osh->ctrace_list);
- osh->ctrace_num = 0;
-#endif /* BCMDBG_CTRACE */
return osh;
@@ -301,46 +320,49 @@
int osl_static_mem_init(osl_t *osh, void *adapter)
{
-#if defined(CONFIG_DHD_USE_STATIC_BUF)
- if (!bcm_static_buf && adapter) {
- if (!(bcm_static_buf = (bcm_static_buf_t *)wifi_platform_prealloc(adapter,
- 3, STATIC_BUF_SIZE + STATIC_BUF_TOTAL_LEN))) {
- printk("can not alloc static buf!\n");
- bcm_static_skb = NULL;
- ASSERT(osh->magic == OS_HANDLE_MAGIC);
- kfree(osh);
- return -ENOMEM;
- }
- else
- printk("alloc static buf at %x!\n", (unsigned int)bcm_static_buf);
+#ifdef CONFIG_DHD_USE_STATIC_BUF
+ if (!bcm_static_buf && adapter) {
+ if (!(bcm_static_buf = (bcm_static_buf_t *)wifi_platform_prealloc(adapter,
+ 3, STATIC_BUF_SIZE + STATIC_BUF_TOTAL_LEN))) {
+ printk("can not alloc static buf!\n");
+ bcm_static_skb = NULL;
+ ASSERT(osh->magic == OS_HANDLE_MAGIC);
+ return -ENOMEM;
+ }
+ else
+ printk("alloc static buf at %p!\n", bcm_static_buf);
- sema_init(&bcm_static_buf->static_sem, 1);
+ sema_init(&bcm_static_buf->static_sem, 1);
- bcm_static_buf->buf_ptr = (unsigned char *)bcm_static_buf + STATIC_BUF_SIZE;
+ bcm_static_buf->buf_ptr = (unsigned char *)bcm_static_buf + STATIC_BUF_SIZE;
+ }
+
+ if (!bcm_static_skb && adapter) {
+ int i;
+ void *skb_buff_ptr = 0;
+ bcm_static_skb = (bcm_static_pkt_t *)((char *)bcm_static_buf + 2048);
+ skb_buff_ptr = wifi_platform_prealloc(adapter, 4, 0);
+ if (!skb_buff_ptr) {
+ printk("cannot alloc static buf!\n");
+ bcm_static_buf = NULL;
+ bcm_static_skb = NULL;
+ ASSERT(osh->magic == OS_HANDLE_MAGIC);
+ return -ENOMEM;
}
- if (!bcm_static_skb && adapter) {
- int i;
- void *skb_buff_ptr = 0;
- bcm_static_skb = (bcm_static_pkt_t *)((char *)bcm_static_buf + 2048);
- skb_buff_ptr = wifi_platform_prealloc(adapter, 4, 0);
- if (!skb_buff_ptr) {
- printk("cannot alloc static buf!\n");
- bcm_static_buf = NULL;
- bcm_static_skb = NULL;
- ASSERT(osh->magic == OS_HANDLE_MAGIC);
- kfree(osh);
- return -ENOMEM;
- }
+ bcopy(skb_buff_ptr, bcm_static_skb, sizeof(struct sk_buff *) *
+ (STATIC_PKT_MAX_NUM));
+ for (i = 0; i < STATIC_PKT_MAX_NUM; i++)
+ bcm_static_skb->pkt_use[i] = 0;
- bcopy(skb_buff_ptr, bcm_static_skb, sizeof(struct sk_buff *) *
- (STATIC_PKT_MAX_NUM * 2 + STATIC_PKT_4PAGE_NUM));
- for (i = 0; i < STATIC_PKT_MAX_NUM * 2 + STATIC_PKT_4PAGE_NUM; i++)
- bcm_static_skb->pkt_use[i] = 0;
+#if defined(BCMPCIE)
+ spin_lock_init(&bcm_static_skb->osl_pkt_lock);
+#else
+ sema_init(&bcm_static_skb->osl_pkt_sem, 1);
+#endif /* BCMPCIE */
+ }
- sema_init(&bcm_static_skb->osl_pkt_sem, 1);
- }
#endif /* CONFIG_DHD_USE_STATIC_BUF */
return 0;
@@ -379,10 +401,11 @@
if (bcm_static_skb) {
bcm_static_skb = 0;
}
-#endif
+#endif /* CONFIG_DHD_USE_STATIC_BUF */
return 0;
}
+
static struct sk_buff *osl_alloc_skb(osl_t *osh, unsigned int len)
{
struct sk_buff *skb;
@@ -391,6 +414,7 @@
#if defined(CONFIG_SPARSEMEM) && defined(CONFIG_ZONE_DMA)
flags |= GFP_ATOMIC;
#endif
+
skb = __dev_alloc_skb(len, flags);
#else
skb = dev_alloc_skb(len);
@@ -621,6 +645,31 @@
return skb;
}
#endif /* CTFPOOL */
+
+#if defined(BCM_GMAC3)
+/* Account for a packet delivered to downstream forwarder.
+ * Decrement a GMAC forwarder interface's pktalloced count.
+ */
+void BCMFASTPATH
+osl_pkt_tofwder(osl_t *osh, void *skbs, int skb_cnt)
+{
+
+ atomic_sub(skb_cnt, &osh->cmn->pktalloced);
+}
+
+/* Account for a downstream forwarder delivered packet to a WL/DHD driver.
+ * Increment a GMAC forwarder interface's pktalloced count.
+ */
+void BCMFASTPATH
+osl_pkt_frmfwder(osl_t *osh, void *skbs, int skb_cnt)
+{
+
+
+ atomic_add(skb_cnt, &osh->cmn->pktalloced);
+}
+
+#endif /* BCM_GMAC3 */
+
/* Convert a driver packet to native(OS) packet
* In the process, packettag is zeroed out before sending up
* IP code depends on skb->cb to be setup correctly with various options
@@ -630,9 +679,6 @@
osl_pkt_tonative(osl_t *osh, void *pkt)
{
struct sk_buff *nskb;
-#ifdef BCMDBG_CTRACE
- struct sk_buff *nskb1, *nskb2;
-#endif
if (osh->pub.pkttag)
OSL_PKTTAG_CLEAR(pkt);
@@ -641,17 +687,6 @@
for (nskb = (struct sk_buff *)pkt; nskb; nskb = nskb->next) {
atomic_sub(PKTISCHAINED(nskb) ? PKTCCNT(nskb) : 1, &osh->cmn->pktalloced);
-#ifdef BCMDBG_CTRACE
- for (nskb1 = nskb; nskb1 != NULL; nskb1 = nskb2) {
- if (PKTISCHAINED(nskb1)) {
- nskb2 = PKTCLINK(nskb1);
- }
- else
- nskb2 = NULL;
-
- DEL_CTRACE(osh, nskb1);
- }
-#endif /* BCMDBG_CTRACE */
}
return (struct sk_buff *)pkt;
}
@@ -660,18 +695,10 @@
* In the process, native packet is destroyed, there is no copying
* Also, a packettag is zeroed out
*/
-#ifdef BCMDBG_CTRACE
-void * BCMFASTPATH
-osl_pkt_frmnative(osl_t *osh, void *pkt, int line, char *file)
-#else
void * BCMFASTPATH
osl_pkt_frmnative(osl_t *osh, void *pkt)
-#endif /* BCMDBG_CTRACE */
{
struct sk_buff *nskb;
-#ifdef BCMDBG_CTRACE
- struct sk_buff *nskb1, *nskb2;
-#endif
if (osh->pub.pkttag)
OSL_PKTTAG_CLEAR(pkt);
@@ -680,29 +707,13 @@
for (nskb = (struct sk_buff *)pkt; nskb; nskb = nskb->next) {
atomic_add(PKTISCHAINED(nskb) ? PKTCCNT(nskb) : 1, &osh->cmn->pktalloced);
-#ifdef BCMDBG_CTRACE
- for (nskb1 = nskb; nskb1 != NULL; nskb1 = nskb2) {
- if (PKTISCHAINED(nskb1)) {
- nskb2 = PKTCLINK(nskb1);
- }
- else
- nskb2 = NULL;
-
- ADD_CTRACE(osh, nskb1, file, line);
- }
-#endif /* BCMDBG_CTRACE */
}
return (void *)pkt;
}
/* Return a new packet. zero out pkttag */
-#ifdef BCMDBG_CTRACE
-void * BCMFASTPATH
-osl_pktget(osl_t *osh, uint len, int line, char *file)
-#else
void * BCMFASTPATH
osl_pktget(osl_t *osh, uint len)
-#endif /* BCMDBG_CTRACE */
{
struct sk_buff *skb;
@@ -717,12 +728,8 @@
skb->len += len;
skb->priority = 0;
-#ifdef BCMDBG_CTRACE
- ADD_CTRACE(osh, skb, file, line);
-#endif
atomic_inc(&osh->cmn->pktalloced);
}
-
return ((void*) skb);
}
@@ -791,9 +798,6 @@
nskb = skb->next;
skb->next = NULL;
-#ifdef BCMDBG_CTRACE
- DEL_CTRACE(osh, skb);
-#endif
#ifdef CTFPOOL
@@ -806,16 +810,7 @@
} else
#endif
{
- if (skb->destructor)
- /* cannot kfree_skb() on hard IRQ (net/core/skbuff.c) if
- * destructor exists
- */
- dev_kfree_skb_any(skb);
- else
- /* can free immediately (even in_irq()) if destructor
- * does not exist
- */
- dev_kfree_skb(skb);
+ dev_kfree_skb_any(skb);
}
#ifdef CTFPOOL
next_skb:
@@ -831,64 +826,117 @@
{
int i = 0;
struct sk_buff *skb;
+#if defined(BCMPCIE)
+ unsigned long flags;
+#endif /* BCMPCIE */
+
+ if (!bcm_static_skb)
+ return osl_pktget(osh, len);
if (len > DHD_SKB_MAX_BUFSIZE) {
printk("%s: attempt to allocate huge packet (0x%x)\n", __FUNCTION__, len);
return osl_pktget(osh, len);
}
+#if defined(BCMPCIE)
+ spin_lock_irqsave(&bcm_static_skb->osl_pkt_lock, flags);
+#else
down(&bcm_static_skb->osl_pkt_sem);
+#endif /* BCMPCIE */
if (len <= DHD_SKB_1PAGE_BUFSIZE) {
- for (i = 0; i < STATIC_PKT_MAX_NUM; i++) {
- if (bcm_static_skb->pkt_use[i] == 0)
+ for (i = 0; i < STATIC_PKT_1PAGE_NUM; i++)
+ {
+ if (bcm_static_skb->pkt_use[i] == 0) {
break;
+ }
}
- if (i != STATIC_PKT_MAX_NUM) {
+ if (i != STATIC_PKT_1PAGE_NUM)
+ {
bcm_static_skb->pkt_use[i] = 1;
skb = bcm_static_skb->skb_4k[i];
- skb->tail = skb->data + len;
skb->len = len;
+#if defined(BCMPCIE)
+#if defined(__ARM_ARCH_7A__)
+ skb->data = skb->head + NET_SKB_PAD;
+ skb->tail = skb->head + NET_SKB_PAD;
+#else
+ skb->data = skb->head + NET_SKB_PAD;
+#ifdef NET_SKBUFF_DATA_USES_OFFSET
+ skb_set_tail_pointer(skb, len);
+#else
+ skb->tail = skb->data + len;
+#endif
+
+#endif /* __ARM_ARCH_7A__ */
+ skb->cloned = 0;
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 14)
+ skb->list = NULL;
+#endif /* LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 14) */
+ spin_unlock_irqrestore(&bcm_static_skb->osl_pkt_lock, flags);
+#else
+#ifdef NET_SKBUFF_DATA_USES_OFFSET
+ skb_set_tail_pointer(skb, len);
+#else
+ skb->tail = skb->data + len;
+#endif
up(&bcm_static_skb->osl_pkt_sem);
+#endif /* BCMPCIE */
return skb;
}
}
if (len <= DHD_SKB_2PAGE_BUFSIZE) {
- for (i = 0; i < STATIC_PKT_MAX_NUM; i++) {
- if (bcm_static_skb->pkt_use[i + STATIC_PKT_MAX_NUM]
- == 0)
+ for (i = STATIC_PKT_1PAGE_NUM; i < STATIC_PKT_1_2PAGE_NUM; i++) {
+ if (bcm_static_skb->pkt_use[i] == 0)
break;
}
- if (i != STATIC_PKT_MAX_NUM) {
- bcm_static_skb->pkt_use[i + STATIC_PKT_MAX_NUM] = 1;
- skb = bcm_static_skb->skb_8k[i];
+ if ((i >= STATIC_PKT_1PAGE_NUM) && (i < STATIC_PKT_1_2PAGE_NUM)) {
+ bcm_static_skb->pkt_use[i] = 1;
+ skb = bcm_static_skb->skb_8k[i - STATIC_PKT_1PAGE_NUM];
+#ifdef NET_SKBUFF_DATA_USES_OFFSET
+ skb_set_tail_pointer(skb, len);
+#else
skb->tail = skb->data + len;
+#endif
skb->len = len;
-
+#if defined(BCMPCIE)
+ spin_unlock_irqrestore(&bcm_static_skb->osl_pkt_lock, flags);
+#else
up(&bcm_static_skb->osl_pkt_sem);
+#endif /* BCMPCIE */
return skb;
}
}
+#if !defined(BCMPCIE)
#if defined(ENHANCED_STATIC_BUF)
- if (bcm_static_skb->pkt_use[STATIC_PKT_MAX_NUM * 2] == 0) {
- bcm_static_skb->pkt_use[STATIC_PKT_MAX_NUM * 2] = 1;
+ if (bcm_static_skb->pkt_use[STATIC_PKT_MAX_NUM - 1] == 0) {
+ bcm_static_skb->pkt_use[STATIC_PKT_MAX_NUM - 1] = 1;
skb = bcm_static_skb->skb_16k;
+#ifdef NET_SKBUFF_DATA_USES_OFFSET
+ skb_set_tail_pointer(skb, len);
+#else
skb->tail = skb->data + len;
+#endif
skb->len = len;
up(&bcm_static_skb->osl_pkt_sem);
return skb;
}
-#endif
+#endif /* ENHANCED_STATIC_BUF */
+#endif /* !BCMPCIE */
+#if defined(BCMPCIE)
+ spin_unlock_irqrestore(&bcm_static_skb->osl_pkt_lock, flags);
+#else
up(&bcm_static_skb->osl_pkt_sem);
+#endif /* BCMPCIE */
printk("%s: all static pkt in use!\n", __FUNCTION__);
return osl_pktget(osh, len);
}
@@ -897,37 +945,92 @@
osl_pktfree_static(osl_t *osh, void *p, bool send)
{
int i;
+#if defined(BCMPCIE)
+ unsigned long flags;
+#endif /* BCMPCIE */
+
if (!bcm_static_skb) {
osl_pktfree(osh, p, send);
return;
}
+#if defined(BCMPCIE)
+ spin_lock_irqsave(&bcm_static_skb->osl_pkt_lock, flags);
+#else
down(&bcm_static_skb->osl_pkt_sem);
- for (i = 0; i < STATIC_PKT_MAX_NUM; i++) {
+#endif /* BCMPCIE */
+
+ for (i = 0; i < STATIC_PKT_1PAGE_NUM; i++) {
if (p == bcm_static_skb->skb_4k[i]) {
bcm_static_skb->pkt_use[i] = 0;
+#if defined(BCMPCIE)
+ spin_unlock_irqrestore(&bcm_static_skb->osl_pkt_lock, flags);
+#else
up(&bcm_static_skb->osl_pkt_sem);
+#endif /* BCMPCIE */
return;
}
}
- for (i = 0; i < STATIC_PKT_MAX_NUM; i++) {
- if (p == bcm_static_skb->skb_8k[i]) {
- bcm_static_skb->pkt_use[i + STATIC_PKT_MAX_NUM] = 0;
+ for (i = STATIC_PKT_1PAGE_NUM; i < STATIC_PKT_1_2PAGE_NUM; i++) {
+ if (p == bcm_static_skb->skb_8k[i - STATIC_PKT_1PAGE_NUM]) {
+ bcm_static_skb->pkt_use[i] = 0;
+#if defined(BCMPCIE)
+ spin_unlock_irqrestore(&bcm_static_skb->osl_pkt_lock, flags);
+#else
up(&bcm_static_skb->osl_pkt_sem);
+#endif /* BCMPCIE */
return;
}
}
+#if !defined(BCMPCIE)
#ifdef ENHANCED_STATIC_BUF
if (p == bcm_static_skb->skb_16k) {
- bcm_static_skb->pkt_use[STATIC_PKT_MAX_NUM * 2] = 0;
+ bcm_static_skb->pkt_use[STATIC_PKT_MAX_NUM - 1] = 0;
up(&bcm_static_skb->osl_pkt_sem);
return;
}
-#endif
+#endif /* ENHANCED_STATIC_BUF */
+#endif /* !BCMPCIE */
+
+#if defined(BCMPCIE)
+ spin_unlock_irqrestore(&bcm_static_skb->osl_pkt_lock, flags);
+#else
up(&bcm_static_skb->osl_pkt_sem);
+#endif /* BCMPCIE */
osl_pktfree(osh, p, send);
}
+
+void
+osl_pktclear_static(osl_t *osh)
+{
+ int i;
+#if defined(BCMPCIE)
+ unsigned long flags;
+#endif /* BCMPCIE */
+
+ if (!bcm_static_skb) {
+ printk("%s: bcm_static_skb is NULL\n", __FUNCTION__);
+ return;
+ }
+
+#if defined(BCMPCIE)
+ spin_lock_irqsave(&bcm_static_skb->osl_pkt_lock, flags);
+#else
+ down(&bcm_static_skb->osl_pkt_sem);
+#endif /* BCMPCIE */
+ for (i = 0; i < STATIC_PKT_MAX_NUM; i++) {
+ if (bcm_static_skb->pkt_use[i]) {
+ bcm_static_skb->pkt_use[i] = 0;
+ }
+ }
+
+#if defined(BCMPCIE)
+ spin_unlock_irqrestore(&bcm_static_skb->osl_pkt_lock, flags);
+#else
+ up(&bcm_static_skb->osl_pkt_sem);
+#endif /* BCMPCIE */
+}
#endif /* CONFIG_DHD_USE_STATIC_BUF */
uint32
@@ -977,7 +1080,11 @@
{
ASSERT(osh && (osh->magic == OS_HANDLE_MAGIC) && osh->pdev);
+#if defined(__ARM_ARCH_7A__) && LINUX_VERSION_CODE > KERNEL_VERSION(2, 6, 35)
+ return pci_domain_nr(((struct pci_dev *)osh->pdev)->bus);
+#else
return ((struct pci_dev *)osh->pdev)->bus->number;
+#endif
}
/* return slot # for the pci device pointed by osh->pdev */
@@ -993,6 +1100,24 @@
#endif
}
+/* return domain # for the pci device pointed by osh->pdev */
+uint
+osl_pcie_domain(osl_t *osh)
+{
+ ASSERT(osh && (osh->magic == OS_HANDLE_MAGIC) && osh->pdev);
+
+ return pci_domain_nr(((struct pci_dev *)osh->pdev)->bus);
+}
+
+/* return bus # for the pci device pointed by osh->pdev */
+uint
+osl_pcie_bus(osl_t *osh)
+{
+ ASSERT(osh && (osh->magic == OS_HANDLE_MAGIC) && osh->pdev);
+
+ return ((struct pci_dev *)osh->pdev)->bus->number;
+}
+
/* return the pci device pointed by osh->pdev */
struct pci_dev *
osl_pci_device(osl_t *osh)
@@ -1137,7 +1262,7 @@
osl_malloced(osl_t *osh)
{
ASSERT((osh && (osh->magic == OS_HANDLE_MAGIC)));
- return (atomic_read(&osh->cmn->malloced));
+ return (atomic_read(&osh->cmn->malloced));
}
uint
@@ -1165,7 +1290,7 @@
size += align;
*alloced = size;
-#ifdef __ARM_ARCH_7A__
+#if defined(USE_KMALLOC_FOR_FLOW_RING) && defined(__ARM_ARCH_7A__)
va = kmalloc(size, GFP_ATOMIC | __GFP_ZERO);
if (va)
*pap = (ulong)__virt_to_phys((ulong)va);
@@ -1184,7 +1309,7 @@
{
ASSERT((osh && (osh->magic == OS_HANDLE_MAGIC)));
-#ifdef __ARM_ARCH_7A__
+#if defined(USE_KMALLOC_FOR_FLOW_RING) && defined(__ARM_ARCH_7A__)
kfree(va);
#else
pci_free_consistent(osh->pdev, size, va, (dma_addr_t)pa);
@@ -1244,12 +1369,13 @@
}
-#if defined(__ARM_ARCH_7A__)
+#if defined(USE_KMALLOC_FOR_FLOW_RING) && defined(__ARM_ARCH_7A__)
inline void BCMFASTPATH
osl_cache_flush(void *va, uint size)
{
- dma_sync_single_for_device(OSH_NULL, virt_to_dma(OSH_NULL, va), size, DMA_TX);
+ if (size > 0)
+ dma_sync_single_for_device(OSH_NULL, virt_to_dma(OSH_NULL, va), size, DMA_TX);
}
inline void BCMFASTPATH
@@ -1267,7 +1393,16 @@
: "o" (*(char *)ptr)
: "cc");
}
-#endif
+
+int osl_arch_is_coherent(void)
+{
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 7, 0)
+ return 0;
+#else
+ return arch_is_coherent();
+#endif
+}
+#endif
#if defined(BCMASSERT_LOG)
void
@@ -1292,7 +1427,7 @@
}
-#endif
+#endif
void
osl_delay(uint usec)
@@ -1322,13 +1457,8 @@
/* Clone a packet.
* The pkttag contents are NOT cloned.
*/
-#ifdef BCMDBG_CTRACE
-void *
-osl_pktdup(osl_t *osh, void *skb, int line, char *file)
-#else
void *
osl_pktdup(osl_t *osh, void *skb)
-#endif /* BCMDBG_CTRACE */
{
void * p;
@@ -1376,70 +1506,9 @@
/* Increment the packet counter */
atomic_inc(&osh->cmn->pktalloced);
-#ifdef BCMDBG_CTRACE
- ADD_CTRACE(osh, (struct sk_buff *)p, file, line);
-#endif
return (p);
}
-#ifdef BCMDBG_CTRACE
-int osl_pkt_is_frmnative(osl_t *osh, struct sk_buff *pkt)
-{
- unsigned long flags;
- struct sk_buff *skb;
- int ck = FALSE;
-
- spin_lock_irqsave(&osh->ctrace_lock, flags);
-
- list_for_each_entry(skb, &osh->ctrace_list, ctrace_list) {
- if (pkt == skb) {
- ck = TRUE;
- break;
- }
- }
-
- spin_unlock_irqrestore(&osh->ctrace_lock, flags);
- return ck;
-}
-
-void osl_ctrace_dump(osl_t *osh, struct bcmstrbuf *b)
-{
- unsigned long flags;
- struct sk_buff *skb;
- int idx = 0;
- int i, j;
-
- spin_lock_irqsave(&osh->ctrace_lock, flags);
-
- if (b != NULL)
- bcm_bprintf(b, " Total %d sbk not free\n", osh->ctrace_num);
- else
- printk(" Total %d sbk not free\n", osh->ctrace_num);
-
- list_for_each_entry(skb, &osh->ctrace_list, ctrace_list) {
- if (b != NULL)
- bcm_bprintf(b, "[%d] skb %p:\n", ++idx, skb);
- else
- printk("[%d] skb %p:\n", ++idx, skb);
-
- for (i = 0; i < skb->ctrace_count; i++) {
- j = (skb->ctrace_start + i) % CTRACE_NUM;
- if (b != NULL)
- bcm_bprintf(b, " [%s(%d)]\n", skb->func[j], skb->line[j]);
- else
- printk(" [%s(%d)]\n", skb->func[j], skb->line[j]);
- }
- if (b != NULL)
- bcm_bprintf(b, "\n");
- else
- printk("\n");
- }
-
- spin_unlock_irqrestore(&osh->ctrace_lock, flags);
-
- return;
-}
-#endif /* BCMDBG_CTRACE */
/*
@@ -1459,6 +1528,16 @@
return 0;
}
+uint32
+osl_rand(void)
+{
+ uint32 rand;
+
+ get_random_bytes(&rand, sizeof(rand));
+
+ return rand;
+}
+
/* Linux Kernel: File Operations: start */
void *
osl_os_open_image(char *filename)
@@ -1518,3 +1597,17 @@
}
/* Linux Kernel: File Operations: end */
+
+
+/* APIs to set/get specific quirks in OSL layer */
+void
+osl_flag_set(osl_t *osh, uint32 mask)
+{
+ osh->flags |= mask;
+}
+
+bool
+osl_is_flag_set(osl_t *osh, uint32 mask)
+{
+ return (osh->flags & mask);
+}
diff --git a/drivers/net/wireless/bcmdhd/pcie_core.c b/drivers/net/wireless/bcmdhd/pcie_core.c
new file mode 100644
index 0000000..1eaedf5
--- /dev/null
+++ b/drivers/net/wireless/bcmdhd/pcie_core.c
@@ -0,0 +1,83 @@
+/** @file pcie_core.c
+ *
+ * Contains PCIe related functions that are shared between different driver models (e.g. firmware
+ * builds, DHD builds, BMAC builds), in order to avoid code duplication.
+ *
+ * Copyright (C) 1999-2014, Broadcom Corporation
+ *
+ * Unless you and Broadcom execute a separate written software license
+ * agreement governing use of this software, this software is licensed to you
+ * under the terms of the GNU General Public License version 2 (the "GPL"),
+ * available at http://www.broadcom.com/licenses/GPLv2.php, with the
+ * following added to such license:
+ *
+ * As a special exception, the copyright holders of this software give you
+ * permission to link this software with independent modules, and to copy and
+ * distribute the resulting executable under terms of your choice, provided that
+ * you also meet, for each linked independent module, the terms and conditions of
+ * the license of that module. An independent module is a module which is not
+ * derived from this software. The special exception does not apply to any
+ * modifications of the software.
+ *
+ * Notwithstanding the above, under no circumstances may you combine this
+ * software in any way with any other Broadcom software provided under a license
+ * other than the GPL, without Broadcom's express prior written consent.
+ *
+ * $Id: pcie_core.c 444841 2013-12-21 04:32:29Z $
+ */
+
+#include <bcm_cfg.h>
+#include <typedefs.h>
+#include <bcmutils.h>
+#include <bcmdefs.h>
+#include <osl.h>
+#include <siutils.h>
+#include <hndsoc.h>
+#include <sbchipc.h>
+
+#include "pcie_core.h"
+
+/* local prototypes */
+
+/* local variables */
+
+/* function definitions */
+
+#ifdef BCMDRIVER
+
+void pcie_watchdog_reset(osl_t *osh, si_t *sih, sbpcieregs_t *sbpcieregs)
+{
+ uint32 val, i, lsc;
+ uint16 cfg_offset[] = {PCIECFGREG_STATUS_CMD, PCIECFGREG_PM_CSR,
+ PCIECFGREG_MSI_CAP, PCIECFGREG_MSI_ADDR_L,
+ PCIECFGREG_MSI_ADDR_H, PCIECFGREG_MSI_DATA,
+ PCIECFGREG_LINK_STATUS_CTRL2, PCIECFGREG_RBAR_CTRL,
+ PCIECFGREG_PML1_SUB_CTRL1, PCIECFGREG_REG_BAR2_CONFIG,
+ PCIECFGREG_REG_BAR3_CONFIG};
+ uint32 origidx = si_coreidx(sih);
+
+ /* Disable/restore ASPM Control to protect the watchdog reset */
+ W_REG(osh, &sbpcieregs->configaddr, PCIECFGREG_LINK_STATUS_CTRL);
+ lsc = R_REG(osh, &sbpcieregs->configdata);
+ val = lsc & (~PCIE_ASPM_ENAB);
+ W_REG(osh, &sbpcieregs->configdata, val);
+
+ si_setcore(sih, PCIE2_CORE_ID, 0);
+ si_corereg(sih, SI_CC_IDX, OFFSETOF(chipcregs_t, watchdog), ~0, 4);
+ OSL_DELAY(100000);
+
+ W_REG(osh, &sbpcieregs->configaddr, PCIECFGREG_LINK_STATUS_CTRL);
+ W_REG(osh, &sbpcieregs->configdata, lsc);
+
+ /* Write the configuration registers back to the shadow registers,
+ * because the shadow registers are cleared out after a watchdog reset.
+ */
+ for (i = 0; i < ARRAYSIZE(cfg_offset); i++) {
+ W_REG(osh, &sbpcieregs->configaddr, cfg_offset[i]);
+ val = R_REG(osh, &sbpcieregs->configdata);
+ W_REG(osh, &sbpcieregs->configdata, val);
+ }
+ si_setcoreidx(sih, origidx);
+}
+
+#endif /* BCMDRIVER */
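
`pcie_watchdog_reset()` above accesses PCIe config space through an index/data register pair (`configaddr`/`configdata`), and its restore loop simply reads each shadow register and writes the value back to re-arm it after the watchdog clears it. A toy model of that access pattern (all names here are invented for illustration, not driver API) is:

```c
#include <stdint.h>

/* Toy model of an index/data register window (configaddr/configdata):
 * writing the index selects which backing register configdata accesses. */
typedef struct {
	uint32_t index;
	uint32_t backing[16];   /* pretend config space */
} cfgwin_t;

static void cfg_write_addr(cfgwin_t *w, uint32_t off) { w->index = off; }
static uint32_t cfg_read_data(const cfgwin_t *w) { return w->backing[w->index]; }
static void cfg_write_data(cfgwin_t *w, uint32_t v) { w->backing[w->index] = v; }

/* Read-modify-write each listed offset, as the restore loop in
 * pcie_watchdog_reset() does to refresh shadow registers. */
static void refresh_offsets(cfgwin_t *w, const uint32_t *offs, int n)
{
	for (int i = 0; i < n; i++) {
		cfg_write_addr(w, offs[i]);
		uint32_t v = cfg_read_data(w);
		cfg_write_data(w, v);   /* write-back re-arms the shadow copy */
	}
}
```

In real hardware the write-back has a side effect (refreshing the cleared shadow copy) even though the value is unchanged, which is why the loop writes back what it just read.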
diff --git a/drivers/net/wireless/bcmdhd/sbutils.c b/drivers/net/wireless/bcmdhd/sbutils.c
old mode 100755
new mode 100644
index f95b171..12c4559
--- a/drivers/net/wireless/bcmdhd/sbutils.c
+++ b/drivers/net/wireless/bcmdhd/sbutils.c
@@ -22,7 +22,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: sbutils.c 431423 2013-10-23 16:07:35Z $
+ * $Id: sbutils.c 467150 2014-04-02 17:30:43Z $
*/
#include <bcm_cfg.h>
@@ -1067,3 +1067,39 @@
return (size);
}
+
+#if defined(BCMDBG_PHYDUMP)
+/* print interesting sbconfig registers */
+void
+sb_dumpregs(si_t *sih, struct bcmstrbuf *b)
+{
+ sbconfig_t *sb;
+ uint origidx, i, intr_val = 0;
+ si_info_t *sii = SI_INFO(sih);
+ si_cores_info_t *cores_info = (si_cores_info_t *)sii->cores_info;
+
+ origidx = sii->curidx;
+
+ INTR_OFF(sii, intr_val);
+
+ for (i = 0; i < sii->numcores; i++) {
+ sb = REGS2SB(sb_setcoreidx(sih, i));
+
+ bcm_bprintf(b, "core 0x%x: \n", cores_info->coreid[i]);
+
+ if (sii->pub.socirev > SONICS_2_2)
+ bcm_bprintf(b, "sbimerrlog 0x%x sbimerrloga 0x%x\n",
+ sb_corereg(sih, si_coreidx(&sii->pub), SBIMERRLOG, 0, 0),
+ sb_corereg(sih, si_coreidx(&sii->pub), SBIMERRLOGA, 0, 0));
+
+ bcm_bprintf(b, "sbtmstatelow 0x%x sbtmstatehigh 0x%x sbidhigh 0x%x "
+ "sbimstate 0x%x\n sbimconfiglow 0x%x sbimconfighigh 0x%x\n",
+ R_SBREG(sii, &sb->sbtmstatelow), R_SBREG(sii, &sb->sbtmstatehigh),
+ R_SBREG(sii, &sb->sbidhigh), R_SBREG(sii, &sb->sbimstate),
+ R_SBREG(sii, &sb->sbimconfiglow), R_SBREG(sii, &sb->sbimconfighigh));
+ }
+
+ sb_setcoreidx(sih, origidx);
+ INTR_RESTORE(sii, intr_val);
+}
+#endif
diff --git a/drivers/net/wireless/bcmdhd/siutils.c b/drivers/net/wireless/bcmdhd/siutils.c
old mode 100755
new mode 100644
index 7dc390b..90a5dee
--- a/drivers/net/wireless/bcmdhd/siutils.c
+++ b/drivers/net/wireless/bcmdhd/siutils.c
@@ -22,7 +22,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: siutils.c 434466 2013-11-06 12:34:26Z $
+ * $Id: siutils.c 474902 2014-05-02 18:31:33Z $
*/
#include <bcm_cfg.h>
@@ -50,9 +50,27 @@
#ifdef BCM_SDRBL
#include <hndcpu.h>
#endif /* BCM_SDRBL */
+#ifdef HNDGCI
+#include <hndgci.h>
+#endif /* HNDGCI */
#include "siutils_priv.h"
+/**
+ * A set of PMU registers is clocked in the ILP domain, which has an implication on register write
+ * behavior: if such a register is written, it takes multiple ILP clocks for the PMU block to absorb
+ * the write. During that time the 'SlowWritePending' bit in the PMUStatus register is set.
+ */
+#define PMUREGS_ILP_SENSITIVE(regoff) \
+ ((regoff) == OFFSETOF(pmuregs_t, pmutimer) || \
+ (regoff) == OFFSETOF(pmuregs_t, pmuwatchdog) || \
+ (regoff) == OFFSETOF(pmuregs_t, res_req_timer))
+
+#define CHIPCREGS_ILP_SENSITIVE(regoff) \
+ ((regoff) == OFFSETOF(chipcregs_t, pmutimer) || \
+ (regoff) == OFFSETOF(chipcregs_t, pmuwatchdog) || \
+ (regoff) == OFFSETOF(chipcregs_t, res_req_timer))
+
/* local prototypes */
static si_info_t *si_doattach(si_info_t *sii, uint devid, osl_t *osh, void *regs,
uint bustype, void *sdh, char **vars, uint *varsz);
@@ -61,11 +79,26 @@
uint *origidx, void *regs);
+static bool si_pmu_is_ilp_sensitive(uint32 idx, uint regoff);
+
+#ifdef BCMLTECOEX
+static void si_config_gcigpio(si_t *sih, uint32 gci_pos, uint8 gcigpio,
+ uint8 gpioctl_mask, uint8 gpioctl_val);
+#endif /* BCMLTECOEX */
+
/* global variable to indicate reservation/release of gpio's */
static uint32 si_gpioreservation = 0;
/* global flag to prevent shared resources from being initialized multiple times in si_attach() */
+#ifdef SR_DEBUG
+static const uint32 si_power_island_test_array[] = {
+ 0x0000, 0x0001, 0x0010, 0x0011,
+ 0x0100, 0x0101, 0x0110, 0x0111,
+ 0x1000, 0x1001, 0x1010, 0x1011,
+ 0x1100, 0x1101, 0x1110, 0x1111
+};
+#endif /* SR_DEBUG */
int do_4360_pcie2_war = 0;
@@ -233,7 +266,16 @@
/* get pmu rev and caps */
if (sii->pub.cccaps & CC_CAP_PMU) {
- sii->pub.pmucaps = R_REG(sii->osh, &cc->pmucapabilities);
+ if (AOB_ENAB(&sii->pub)) {
+ uint pmucoreidx;
+ pmuregs_t *pmu;
+ pmucoreidx = si_findcoreidx(&sii->pub, PMU_CORE_ID, 0);
+ pmu = si_setcoreidx(&sii->pub, pmucoreidx);
+ sii->pub.pmucaps = R_REG(sii->osh, &pmu->pmucapabilities);
+ si_setcoreidx(&sii->pub, SI_CC_IDX);
+ } else
+ sii->pub.pmucaps = R_REG(sii->osh, &cc->pmucapabilities);
+
sii->pub.pmurev = sii->pub.pmucaps & PCAP_REV_MASK;
}
@@ -355,6 +397,37 @@
+uint16
+si_chipid(si_t *sih)
+{
+ si_info_t *sii = SI_INFO(sih);
+
+ return (sii->chipnew) ? sii->chipnew : sih->chip;
+}
+
+static void
+si_chipid_fixup(si_t *sih)
+{
+ si_info_t *sii = SI_INFO(sih);
+
+ ASSERT(sii->chipnew == 0);
+ switch (sih->chip) {
+ case BCM43570_CHIP_ID:
+ case BCM4358_CHIP_ID:
+ case BCM43562_CHIP_ID:
+ sii->chipnew = sih->chip; /* save it */
+ sii->pub.chip = BCM43569_CHIP_ID; /* chip class */
+ break;
+ case BCM4356_CHIP_ID:
+ sii->chipnew = sih->chip; /* save it */
+ sii->pub.chip = BCM4354_CHIP_ID; /* chip class */
+ break;
+ default:
+ ASSERT(0);
+ break;
+ }
+}
+
/**
* Allocate an si handle. This function may be called multiple times.
*
@@ -423,10 +496,10 @@
}
/* ChipID recognition.
- * We assume we can read chipid at offset 0 from the regs arg.
- * If we add other chiptypes (or if we need to support old sdio hosts w/o chipcommon),
- * some way of recognizing them needs to be added here.
- */
+ * We assume we can read chipid at offset 0 from the regs arg.
+ * If we add other chiptypes (or if we need to support old sdio hosts w/o chipcommon),
+ * some way of recognizing them needs to be added here.
+ */
if (!cc) {
SI_ERROR(("%s: chipcommon register space is null \n", __FUNCTION__));
return NULL;
@@ -439,6 +512,13 @@
sih->chiprev = (w & CID_REV_MASK) >> CID_REV_SHIFT;
sih->chippkg = (w & CID_PKG_MASK) >> CID_PKG_SHIFT;
+ if ((sih->chip == BCM4358_CHIP_ID) ||
+ (sih->chip == BCM43570_CHIP_ID) ||
+ (sih->chip == BCM43562_CHIP_ID) ||
+ (sih->chip == BCM4356_CHIP_ID)) {
+ si_chipid_fixup(sih);
+ }
+
if ((CHIPID(sih->chip) == BCM4329_CHIP_ID) && (sih->chiprev == 0) &&
(sih->chippkg != BCM4329_289PIN_PKG_ID)) {
sih->chippkg = BCM4329_182PIN_PKG_ID;
@@ -555,6 +635,11 @@
ASSERT(!si_taclear(sih, FALSE));
+#ifdef BOOTLOADER_CONSOLE_OUTPUT
+ /* Enable console prints */
+ si_muxenab(sii, 3);
+#endif
+
return (sii);
exit:
@@ -636,6 +721,8 @@
sii = SI_INFO(sih);
sii->intrsoff_fn = NULL;
+ sii->intrsrestore_fn = NULL;
+ sii->intrsenabled_fn = NULL;
}
uint
@@ -711,6 +798,12 @@
return sii->curidx;
}
+void *
+si_d11_switch_addrbase(si_t *sih, uint coreunit)
+{
+ return si_setcore(sih, D11_CORE_ID, coreunit);
+}
+
/** return the core-type instantiation # of the current core */
uint
si_coreunit(si_t *sih)
@@ -773,7 +866,8 @@
}
}
-/** return index of coreid or BADIDX if not found */
+
+/* return index of coreid or BADIDX if not found */
uint
si_findcoreidx(si_t *sih, uint coreid, uint coreunit)
{
@@ -801,17 +895,35 @@
{
si_info_t *sii = SI_INFO(sih);
si_cores_info_t *cores_info = (si_cores_info_t *)sii->cores_info;
- uint found;
+ uint found = 0;
uint i;
- found = 0;
-
- for (i = 0; i < sii->numcores; i++)
+ for (i = 0; i < sii->numcores; i++) {
if (cores_info->coreid[i] == coreid) {
found++;
}
+ }
- return (found == 0? 0:found);
+ return found;
+}
+
+/** return total D11 coreunits */
+uint
+BCMRAMFN(si_numd11coreunits)(si_t *sih)
+{
+ uint found = 0;
+
+ found = si_numcoreunits(sih, D11_CORE_ID);
+
+#if defined(WLRSDB) && defined(WLRSDB_DISABLED)
+ /* If RSDB functionality is compiled out,
+ * ignore any D11 cores beyond the first.
+ * Used in no-RSDB dongle build variants for RSDB chips.
+ */
+ found = 1;
+#endif /* defined(WLRSDB) && defined(WLRSDB_DISABLED) */
+
+ return found;
}
/** return list of found cores */
@@ -1068,6 +1180,38 @@
}
}
+/** ILP sensitive register access needs special treatment to avoid backplane stalls */
+bool si_pmu_is_ilp_sensitive(uint32 idx, uint regoff)
+{
+ if (idx == SI_CC_IDX) {
+ if (CHIPCREGS_ILP_SENSITIVE(regoff))
+ return TRUE;
+ } else if (PMUREGS_ILP_SENSITIVE(regoff)) {
+ return TRUE;
+ }
+
+ return FALSE;
+}
+
+/** 'idx' should refer either to the chipcommon core or the PMU core */
+uint
+si_pmu_corereg(si_t *sih, uint32 idx, uint regoff, uint mask, uint val)
+{
+ int pmustatus_offset;
+
+ /* prevent backplane stall on double write to 'ILP domain' registers in the PMU */
+ if (mask != 0 && sih->pmurev >= 22 &&
+ si_pmu_is_ilp_sensitive(idx, regoff)) {
+ pmustatus_offset = AOB_ENAB(sih) ? OFFSETOF(pmuregs_t, pmustatus) :
+ OFFSETOF(chipcregs_t, pmustatus);
+
+ while (si_corereg(sih, idx, pmustatus_offset, 0, 0) & PST_SLOW_WR_PENDING)
+ {};
+ }
+
+ return si_corereg(sih, idx, regoff, mask, val);
+}
+
/*
* If there is no need for fiddling with interrupts or core switches (typically silicon
* back plane registers, pci registers and chipcommon registers), this function
@@ -1279,14 +1423,23 @@
hosti = CHIP_HOSTIF_PCIEMODE;
break;
+ case BCM4349_CHIP_GRPID:
+ if (CST4349_CHIPMODE_SDIOD(sih->chipst))
+ hosti = CHIP_HOSTIF_SDIOMODE;
+ else if (CST4349_CHIPMODE_PCIE(sih->chipst))
+ hosti = CHIP_HOSTIF_PCIEMODE;
+ break;
case BCM4350_CHIP_ID:
case BCM4354_CHIP_ID:
+ case BCM4356_CHIP_ID:
case BCM43556_CHIP_ID:
case BCM43558_CHIP_ID:
case BCM43566_CHIP_ID:
case BCM43568_CHIP_ID:
case BCM43569_CHIP_ID:
+ case BCM43570_CHIP_ID:
+ case BCM4358_CHIP_ID:
if (CST4350_CHIPMODE_USB20D(sih->chipst) ||
CST4350_CHIPMODE_HSIC20D(sih->chipst) ||
CST4350_CHIPMODE_USB30D(sih->chipst) ||
@@ -1339,7 +1492,7 @@
else if (ticks > maxt)
ticks = maxt;
- si_corereg(sih, SI_CC_IDX, OFFSETOF(chipcregs_t, pmuwatchdog), ~0, ticks);
+ pmu_corereg(sih, SI_CC_IDX, pmuwatchdog, ~0, ticks);
} else {
maxt = (1 << 28) - 1;
if (ticks > maxt)
@@ -1802,7 +1955,7 @@
uint32 polarity = (h->level ? levelp : edgep) & h->event;
/* polarity bitval is opposite of status bitval */
- if (status ^ polarity)
+ if ((h->level && (status ^ polarity)) || (!h->level && status))
h->handler(status, h->arg);
}
}
@@ -2628,13 +2781,19 @@
case BCM4345_CHIP_ID:
return ((sih->chipst & CST4335_SPROM_MASK) &&
!(sih->chipst & CST4335_SFLASH_MASK));
+ case BCM4349_CHIP_GRPID:
+ return (sih->chipst & CST4349_SPROM_PRESENT) != 0;
+ break;
case BCM4350_CHIP_ID:
case BCM4354_CHIP_ID:
+ case BCM4356_CHIP_ID:
case BCM43556_CHIP_ID:
case BCM43558_CHIP_ID:
case BCM43566_CHIP_ID:
case BCM43568_CHIP_ID:
case BCM43569_CHIP_ID:
+ case BCM43570_CHIP_ID:
+ case BCM4358_CHIP_ID:
return (sih->chipst & CST4350_SPROM_PRESENT) != 0;
case BCM43602_CHIP_ID:
return (sih->chipst & CST43602_SPROM_PRESENT) != 0;
@@ -2691,22 +2850,26 @@
uint
si_core_wrapperreg(si_t *sih, uint32 coreidx, uint32 offset, uint32 mask, uint32 val)
{
- uint origidx;
+ uint origidx, intr_val = 0;
uint ret_val;
+ si_info_t *sii = SI_INFO(sih);
+ si_cores_info_t *cores_info = (si_cores_info_t *)sii->cores_info;
origidx = si_coreidx(sih);
+ INTR_OFF(sii, intr_val);
si_setcoreidx(sih, coreidx);
ret_val = si_wrapperreg(sih, offset, mask, val);
/* return to the original core */
si_setcoreidx(sih, origidx);
+ INTR_RESTORE(sii, intr_val);
return ret_val;
}
-/* cleanup the hndrte timer from the host when ARM is been halted
+/* cleanup the timer from the host when the ARM has been halted
* without a chance for ARM cleanup its resources
* If left not cleanup, Intr from a software timer can still
* request HT clk when ARM is halted.
@@ -2716,13 +2879,13 @@
{
uint32 mask;
- mask = PRRT_REQ_ACTIVE | PRRT_INTEN;
+ mask = PRRT_REQ_ACTIVE | PRRT_INTEN | PRRT_HT_REQ;
if (CHIPID(sih->chip) != BCM4328_CHIP_ID)
mask <<= 14;
/* clear mask bits */
- si_corereg(sih, SI_CC_IDX, OFFSETOF(chipcregs_t, res_req_timer), mask, 0);
+ pmu_corereg(sih, SI_CC_IDX, res_req_timer, mask, 0);
/* readback to ensure write completes */
- return si_corereg(sih, SI_CC_IDX, OFFSETOF(chipcregs_t, res_req_timer), 0, 0);
+ return pmu_corereg(sih, SI_CC_IDX, res_req_timer, 0, 0);
}
/** turn on/off rfldo */
@@ -2731,6 +2894,7 @@
{
}
+
#ifdef SURVIVE_PERST_ENAB
static uint32
si_pcie_survive_perst(si_t *sih, uint32 mask, uint32 val)
@@ -2749,18 +2913,14 @@
si_watchdog_reset(si_t *sih)
{
si_info_t *sii = SI_INFO(sih);
- chipcregs_t *cc;
- uint32 origidx, i;
+ uint32 i;
- origidx = si_coreidx(sih);
- cc = (chipcregs_t *)si_setcore(sih, CC_CORE_ID, 0);
/* issue a watchdog reset */
- W_REG(sii->osh, &cc->pmuwatchdog, 2);
+ pmu_corereg(sih, SI_CC_IDX, pmuwatchdog, 2, 2);
/* do busy wait for 20ms */
for (i = 0; i < 2000; i++) {
OSL_DELAY(10);
}
- si_setcoreidx(sih, origidx);
}
#endif /* SURVIVE_PERST_ENAB */
@@ -2802,3 +2962,23 @@
si_pcie_ltr_war(si_t *sih)
{
}
+
+void
+si_pcie_hw_LTR_war(si_t *sih)
+{
+}
+
+void
+si_pciedev_reg_pm_clk_period(si_t *sih)
+{
+}
+
+void
+si_pciedev_crwlpciegen2(si_t *sih)
+{
+}
+
+void
+si_pcie_prep_D3(si_t *sih, bool enter_D3)
+{
+}
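
The `si_pmu_corereg()` guard added in this file polls the `SlowWritePending` status bit before touching an ILP-domain register, because the PMU needs several ILP clocks to absorb the previous write. A standalone sketch of that poll-before-write pattern (the toy PMU and bit name here are invented for illustration; the real loop polls `PST_SLOW_WR_PENDING` in `pmustatus`) is:

```c
#include <stdint.h>

#define SLOW_WR_PENDING_BIT (1u << 0)  /* stand-in for PST_SLOW_WR_PENDING */

/* Toy PMU: a previous write stays "pending" for a few status polls. */
typedef struct {
	int pending_polls;   /* polls remaining until the write is absorbed */
	uint32_t reg;
} toy_pmu_t;

static uint32_t toy_read_status(toy_pmu_t *p)
{
	if (p->pending_polls > 0) {
		p->pending_polls--;
		return SLOW_WR_PENDING_BIT;
	}
	return 0;
}

/* Mirrors the si_pmu_corereg() guard: spin until no slow write is
 * pending, then perform the masked write. */
static void ilp_safe_write(toy_pmu_t *p, uint32_t mask, uint32_t val)
{
	while (toy_read_status(p) & SLOW_WR_PENDING_BIT)
		;  /* a production driver might also bound or delay this loop */
	p->reg = (p->reg & ~mask) | (val & mask);
	p->pending_polls = 3;  /* the new write now needs ILP clocks to land */
}
```

Note the driver's busy-wait is unbounded; it relies on the hardware guarantee that the pending bit always clears within a fixed number of ILP clocks.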
diff --git a/drivers/net/wireless/bcmdhd/siutils_priv.h b/drivers/net/wireless/bcmdhd/siutils_priv.h
old mode 100755
new mode 100644
index a7d8ffc..38c60a8
--- a/drivers/net/wireless/bcmdhd/siutils_priv.h
+++ b/drivers/net/wireless/bcmdhd/siutils_priv.h
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: siutils_priv.h 431423 2013-10-23 16:07:35Z $
+ * $Id: siutils_priv.h 474902 2014-05-02 18:31:33Z $
*/
#ifndef _siutils_priv_h_
@@ -113,6 +113,7 @@
void *cores_info;
gci_gpio_item_t *gci_gpio_head; /* gci gpio interrupts head */
+ uint chipnew; /* new chip number */
} si_info_t;
@@ -211,6 +212,9 @@
extern bool sb_taclear(si_t *sih, bool details);
+#if defined(BCMDBG_PHYDUMP)
+extern void sb_dumpregs(si_t *sih, struct bcmstrbuf *b);
+#endif
/* Wake-on-wireless-LAN (WOWL) */
extern bool sb_pci_pmecap(si_t *sih);
@@ -240,13 +244,20 @@
extern uint32 ai_core_sflags(si_t *sih, uint32 mask, uint32 val);
extern uint ai_corereg(si_t *sih, uint coreidx, uint regoff, uint mask, uint val);
extern void ai_core_reset(si_t *sih, uint32 bits, uint32 resetbits);
+extern void ai_d11rsdb_core_reset(si_t *sih, uint32 bits,
+ uint32 resetbits, void *p, void *s);
extern void ai_core_disable(si_t *sih, uint32 bits);
+extern void ai_d11rsdb_core_disable(const si_info_t *sii, uint32 bits,
+ aidmp_t *pmacai, aidmp_t *smacai);
extern int ai_numaddrspaces(si_t *sih);
extern uint32 ai_addrspace(si_t *sih, uint asidx);
extern uint32 ai_addrspacesize(si_t *sih, uint asidx);
extern void ai_coreaddrspaceX(si_t *sih, uint asidx, uint32 *addr, uint32 *size);
extern uint ai_wrap_reg(si_t *sih, uint32 offset, uint32 mask, uint32 val);
+#if defined(BCMDBG_PHYDUMP)
+extern void ai_dumpregs(si_t *sih, struct bcmstrbuf *b);
+#endif
#define ub_scan(a, b, c) do {} while (0)
diff --git a/drivers/net/wireless/bcmdhd/uamp_api.h b/drivers/net/wireless/bcmdhd/uamp_api.h
old mode 100755
new mode 100644
index dde4e1c..2bd0629
--- a/drivers/net/wireless/bcmdhd/uamp_api.h
+++ b/drivers/net/wireless/bcmdhd/uamp_api.h
@@ -23,9 +23,11 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: uamp_api.h 294267 2011-11-04 23:41:52Z $
+ * $Id: uamp_api.h 467328 2014-04-03 01:23:40Z $
*
*/
+
+
#ifndef UAMP_API_H
#define UAMP_API_H
diff --git a/drivers/net/wireless/bcmdhd/wl_android.c b/drivers/net/wireless/bcmdhd/wl_android.c
old mode 100755
new mode 100644
index 21610bb..f2bed33
--- a/drivers/net/wireless/bcmdhd/wl_android.c
+++ b/drivers/net/wireless/bcmdhd/wl_android.c
@@ -2,13 +2,13 @@
* Linux cfg80211 driver - Android related functions
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -16,18 +16,20 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: wl_android.c 457855 2014-02-25 01:27:41Z $
+ * $Id: wl_android.c 470703 2014-04-16 02:25:28Z $
*/
#include <linux/module.h>
#include <linux/netdevice.h>
-#include <linux/compat.h>
#include <net/netlink.h>
+#ifdef CONFIG_COMPAT
+#include <linux/compat.h>
+#endif
#include <wl_android.h>
#include <wldev_common.h>
@@ -90,11 +92,6 @@
#define CMD_KEEP_ALIVE "KEEPALIVE"
-#define CMD_SETMIRACAST "SETMIRACAST"
-#define CMD_ASSOCRESPIE "ASSOCRESPIE"
-#define CMD_MAXLINKSPEED "MAXLINKSPEED"
-#define CMD_RXRATESTATS "RXRATESTATS"
-
/* CCX Private Commands */
#ifdef PNO_SUPPORT
@@ -109,18 +106,6 @@
#define CMD_OKC_ENABLE "OKC_ENABLE"
#define CMD_HAPD_MAC_FILTER "HAPD_MAC_FILTER"
-/* hostap mac mode */
-#define MACLIST_MODE_DISABLED 0
-#define MACLIST_MODE_DENY 1
-#define MACLIST_MODE_ALLOW 2
-
-/* max number of assoc list */
-#define MAX_NUM_OF_ASSOCLIST 64
-
-/* max number of mac filter list
- * restrict max number to 10 as maximum cmd string size is 255
- */
-#define MAX_NUM_MAC_FILT 10
@@ -155,20 +140,20 @@
struct list_head list;
};
-#ifdef CONFIG_COMPAT
-typedef struct android_wifi_priv_cmd_compat {
- u32 bufaddr;
- int used_len;
- int total_len;
-} android_wifi_priv_cmd_compat;
-#endif
-
-typedef struct android_wifi_priv_cmd {
- char *bufaddr;
+typedef struct _android_wifi_priv_cmd {
+ char *buf;
int used_len;
int total_len;
} android_wifi_priv_cmd;
+#ifdef CONFIG_COMPAT
+typedef struct _compat_android_wifi_priv_cmd {
+ compat_caddr_t buf;
+ int used_len;
+ int total_len;
+} compat_android_wifi_priv_cmd;
+#endif /* CONFIG_COMPAT */
+
#if defined(BCMFW_ROAM_ENABLE)
#define CMD_SET_ROAMPREF "SET_ROAMPREF"
@@ -213,7 +198,7 @@
extern bool ap_fw_loaded;
#if defined(CUSTOMER_HW2)
extern char iface_name[IFNAMSIZ];
-#endif
+#endif
/**
* Local (static) functions and variables
@@ -329,6 +314,7 @@
#ifdef PNO_SUPPORT
#define PNO_PARAM_SIZE 50
#define VALUE_SIZE 50
+#define LIMIT_STR_FMT ("%50s %50s")
static int
wls_parse_batching_cmd(struct net_device *dev, char *command, int total_len)
{
@@ -359,17 +345,17 @@
if (delim != NULL)
*delim = ' ';
- tokens = sscanf(token, "%s %s", param, value);
- if (!strncmp(param, PNO_PARAM_SCANFREQ, strlen(PNO_PARAM_MSCAN))) {
+ tokens = sscanf(token, LIMIT_STR_FMT, param, value);
+ if (!strncmp(param, PNO_PARAM_SCANFREQ, strlen(PNO_PARAM_SCANFREQ))) {
batch_params.scan_fr = simple_strtol(value, NULL, 0);
DHD_PNO(("scan_freq : %d\n", batch_params.scan_fr));
- } else if (!strncmp(param, PNO_PARAM_BESTN, strlen(PNO_PARAM_MSCAN))) {
+ } else if (!strncmp(param, PNO_PARAM_BESTN, strlen(PNO_PARAM_BESTN))) {
batch_params.bestn = simple_strtol(value, NULL, 0);
DHD_PNO(("bestn : %d\n", batch_params.bestn));
} else if (!strncmp(param, PNO_PARAM_MSCAN, strlen(PNO_PARAM_MSCAN))) {
batch_params.mscan = simple_strtol(value, NULL, 0);
DHD_PNO(("mscan : %d\n", batch_params.mscan));
- } else if (!strncmp(param, PNO_PARAM_CHANNEL, strlen(PNO_PARAM_MSCAN))) {
+ } else if (!strncmp(param, PNO_PARAM_CHANNEL, strlen(PNO_PARAM_CHANNEL))) {
i = 0;
pos2 = value;
tokens = sscanf(value, "<%s>", value);
@@ -398,7 +384,7 @@
batch_params.chan_list[i-1]));
}
}
- } else if (!strncmp(param, PNO_PARAM_RTT, strlen(PNO_PARAM_MSCAN))) {
+ } else if (!strncmp(param, PNO_PARAM_RTT, strlen(PNO_PARAM_RTT))) {
batch_params.rtt = simple_strtol(value, NULL, 0);
DHD_PNO(("rtt : %d\n", batch_params.rtt));
} else {
@@ -440,7 +426,7 @@
#ifndef WL_SCHED_SCAN
static int wl_android_set_pno_setup(struct net_device *dev, char *command, int total_len)
{
- wlc_ssid_t ssids_local[MAX_PFN_LIST_COUNT];
+ wlc_ssid_ext_t ssids_local[MAX_PFN_LIST_COUNT];
int res = -1;
int nssid = 0;
cmd_tlv_t *cmd_tlv_temp;
@@ -553,7 +539,7 @@
}
-static int
+int
wl_android_set_ap_mac_list(struct net_device *dev, int macmode, struct maclist *maclist)
{
int i, j, match;
@@ -710,14 +696,19 @@
DHD_ERROR(("\nfailed to power up wifi chip, max retry reached **\n\n"));
goto exit;
}
-#ifdef BCMSDIO
+#if defined(BCMSDIO) || defined(BCMPCIE)
ret = dhd_net_bus_devreset(dev, FALSE);
+#ifdef BCMSDIO
dhd_net_bus_resume(dev, 1);
#endif
+#endif /* BCMSDIO || BCMPCIE */
+#ifndef BCMPCIE
if (!ret) {
if (dhd_dev_init_ioctl(dev) < 0)
ret = -EFAULT;
}
+#endif
+ if (!ret)
g_wifi_on = TRUE;
}
@@ -727,7 +718,7 @@
return ret;
}
-int wl_android_wifi_off(struct net_device *dev)
+int wl_android_wifi_off(struct net_device *dev, bool on_failure)
{
int ret = 0;
@@ -738,11 +729,13 @@
}
dhd_net_if_lock(dev);
- if (g_wifi_on) {
-#ifdef BCMSDIO
+ if (g_wifi_on || on_failure) {
+#if defined(BCMSDIO) || defined(BCMPCIE)
ret = dhd_net_bus_devreset(dev, TRUE);
+#ifdef BCMSDIO
dhd_net_bus_suspend(dev);
#endif
+#endif /* BCMSDIO || BCMPCIE */
dhd_net_wifi_platform_set_power(dev, FALSE, WIFI_TURNOFF_DELAY);
g_wifi_on = FALSE;
}
@@ -1284,12 +1277,8 @@
#define PRIVATE_COMMAND_MAX_LEN 8192
int ret = 0;
char *command = NULL;
- char *buf = NULL;
int bytes_written = 0;
android_wifi_priv_cmd priv_cmd;
-#ifdef CONFIG_COMPAT
- android_wifi_priv_cmd_compat priv_cmd_compat;
-#endif
net_os_wake_lock(net);
@@ -1297,29 +1286,28 @@
ret = -EINVAL;
goto exit;
}
+
#ifdef CONFIG_COMPAT
if (is_compat_task()) {
- if (copy_from_user(&priv_cmd_compat, ifr->ifr_data, sizeof(android_wifi_priv_cmd_compat))) {
+ compat_android_wifi_priv_cmd compat_priv_cmd;
+ if (copy_from_user(&compat_priv_cmd, ifr->ifr_data,
+ sizeof(compat_android_wifi_priv_cmd))) {
ret = -EFAULT;
goto exit;
}
- priv_cmd.bufaddr = (char *)(uintptr_t) priv_cmd_compat.bufaddr;
- priv_cmd.used_len = priv_cmd_compat.used_len;
- priv_cmd.total_len = priv_cmd_compat.total_len;
- } else {
+ priv_cmd.buf = compat_ptr(compat_priv_cmd.buf);
+ priv_cmd.used_len = compat_priv_cmd.used_len;
+ priv_cmd.total_len = compat_priv_cmd.total_len;
+ } else
+#endif /* CONFIG_COMPAT */
+ {
if (copy_from_user(&priv_cmd, ifr->ifr_data, sizeof(android_wifi_priv_cmd))) {
ret = -EFAULT;
goto exit;
}
}
-#else
- if (copy_from_user(&priv_cmd, ifr->ifr_data, sizeof(android_wifi_priv_cmd))) {
- ret = -EFAULT;
- goto exit;
- }
-#endif
- if (priv_cmd.total_len > PRIVATE_COMMAND_MAX_LEN)
- {
+ if ((priv_cmd.total_len > PRIVATE_COMMAND_MAX_LEN) || (priv_cmd.total_len < 0)) {
DHD_ERROR(("%s: too long priavte command\n", __FUNCTION__));
ret = -EINVAL;
goto exit;
@@ -1331,8 +1319,7 @@
ret = -ENOMEM;
goto exit;
}
- buf = (char *)priv_cmd.bufaddr;
- if (copy_from_user(command, buf, priv_cmd.total_len)) {
+ if (copy_from_user(command, priv_cmd.buf, priv_cmd.total_len)) {
ret = -EFAULT;
goto exit;
}
@@ -1356,7 +1343,7 @@
}
if (strnicmp(command, CMD_STOP, strlen(CMD_STOP)) == 0) {
- bytes_written = wl_android_wifi_off(net);
+ bytes_written = wl_android_wifi_off(net, FALSE);
}
else if (strnicmp(command, CMD_SCAN_ACTIVE, strlen(CMD_SCAN_ACTIVE)) == 0) {
/* TBD: SCAN-ACTIVE */
@@ -1503,18 +1490,11 @@
int skip = strlen(CMD_KEEP_ALIVE) + 1;
bytes_written = wl_keep_alive_set(net, command + skip, priv_cmd.total_len - skip);
}
- else if (strnicmp(command, CMD_SETMIRACAST, strlen(CMD_SETMIRACAST)) == 0)
- bytes_written = wldev_miracast_tuning(net, command, priv_cmd.total_len);
- else if (strnicmp(command, CMD_ASSOCRESPIE, strlen(CMD_ASSOCRESPIE)) == 0)
- bytes_written = wldev_get_assoc_resp_ie(net, command, priv_cmd.total_len);
- else if (strnicmp(command, CMD_MAXLINKSPEED, strlen(CMD_MAXLINKSPEED))== 0)
- bytes_written = wldev_get_max_linkspeed(net, command, priv_cmd.total_len);
- else if (strnicmp(command, CMD_RXRATESTATS, strlen(CMD_RXRATESTATS)) == 0)
- bytes_written = wldev_get_rx_rate_stats(net, command, priv_cmd.total_len);
else if (strnicmp(command, CMD_ROAM_OFFLOAD, strlen(CMD_ROAM_OFFLOAD)) == 0) {
int enable = *(command + strlen(CMD_ROAM_OFFLOAD) + 1) - '0';
bytes_written = wl_cfg80211_enable_roam_offload(net, enable);
- } else {
+ }
+ else {
DHD_ERROR(("Unknown PRIVATE command %s - ignored\n", command));
snprintf(command, 3, "OK");
bytes_written = strlen("OK");
@@ -1530,7 +1510,7 @@
bytes_written++;
}
priv_cmd.used_len = bytes_written;
- if (copy_to_user(buf, command, bytes_written)) {
+ if (copy_to_user(priv_cmd.buf, command, bytes_written)) {
DHD_ERROR(("%s: failed to copy data to user buffer\n", __FUNCTION__));
ret = -EFAULT;
}
@@ -1560,7 +1540,7 @@
memset(iface_name, 0, IFNAMSIZ);
bcm_strncpy_s(iface_name, IFNAMSIZ, "wlan", IFNAMSIZ);
}
-#endif
+#endif
return ret;
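
The parser hardening in this file replaces unbounded `%s` conversions with width-limited ones (`LIMIT_STR_FMT`), so a malformed command cannot overrun `param`/`value`. A small standalone illustration (the `parse_kv` helper is hypothetical, not driver code) of how a field width bounds `sscanf`:

```c
#include <stdio.h>

/* Parse "key value" with bounded widths; returns the number of fields.
 * "%5s" reads at most 5 characters, so each buffer must hold
 * width + 1 bytes for the terminating NUL. */
static int parse_kv(const char *line, char key[6], char val[6])
{
	return sscanf(line, "%5s %5s", key, val);
}
```

One caveat worth knowing: when a token exceeds the width, `sscanf` truncates it and the remainder becomes the next field (e.g. `"SCANFREQ 30"` with `%5s %5s` yields `"SCANF"` and `"REQ"`), which is why the driver sizes the widths to the full buffer capacity rather than something smaller.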
diff --git a/drivers/net/wireless/bcmdhd/wl_android.h b/drivers/net/wireless/bcmdhd/wl_android.h
old mode 100755
new mode 100644
index 53be81e..2827132
--- a/drivers/net/wireless/bcmdhd/wl_android.h
+++ b/drivers/net/wireless/bcmdhd/wl_android.h
@@ -2,13 +2,13 @@
* Linux cfg80211 driver - Android related functions
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -16,12 +16,12 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: wl_android.h 454971 2014-02-12 14:56:41Z $
+ * $Id: wl_android.h 467328 2014-04-03 01:23:40Z $
*/
#include <linux/module.h>
@@ -31,6 +31,9 @@
/* If any feature uses the Generic Netlink Interface, put it here to enable WL_GENL
* automatically
*/
+#if defined(BT_WIFI_HANDOVER)
+#define WL_GENL
+#endif
@@ -48,5 +51,21 @@
int wl_android_exit(void);
void wl_android_post_init(void);
int wl_android_wifi_on(struct net_device *dev);
-int wl_android_wifi_off(struct net_device *dev);
+int wl_android_wifi_off(struct net_device *dev, bool on_failure);
int wl_android_priv_cmd(struct net_device *net, struct ifreq *ifr, int cmd);
+
+
+/* hostap mac mode */
+#define MACLIST_MODE_DISABLED 0
+#define MACLIST_MODE_DENY 1
+#define MACLIST_MODE_ALLOW 2
+
+/* max number of assoc list */
+#define MAX_NUM_OF_ASSOCLIST 64
+
+/* max number of mac filter list
+ * restrict max number to 10 as maximum cmd string size is 255
+ */
+#define MAX_NUM_MAC_FILT 10
+
+int wl_android_set_ap_mac_list(struct net_device *dev, int macmode, struct maclist *maclist);
diff --git a/drivers/net/wireless/bcmdhd/wl_cfg80211.c b/drivers/net/wireless/bcmdhd/wl_cfg80211.c
old mode 100755
new mode 100644
index ef49256..1a52f89
--- a/drivers/net/wireless/bcmdhd/wl_cfg80211.c
+++ b/drivers/net/wireless/bcmdhd/wl_cfg80211.c
@@ -2,13 +2,13 @@
* Linux cfg80211 driver
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -16,12 +16,12 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: wl_cfg80211.c 456934 2014-02-20 11:43:56Z $
+ * $Id: wl_cfg80211.c 477711 2014-05-14 08:45:17Z $
*/
/* */
#include <typedefs.h>
@@ -46,7 +46,7 @@
#ifdef PNO_SUPPORT
#include <dhd_pno.h>
#endif /* PNO_SUPPORT */
-
+#include <dhd_debug.h>
#include <proto/ethernet.h>
#include <linux/kernel.h>
#include <linux/kthread.h>
@@ -89,6 +89,9 @@
#define MAX_WAIT_TIME 1500
+#define CHAN_INFO_LEN 128
+#define IBSS_IF_NAME "ibss%d"
+
#ifdef VSDB
/* sleep time to keep STA's connecting or connection for continuous af tx or finding a peer */
#define DEFAULT_SLEEP_TIME_VSDB 120
@@ -115,6 +118,8 @@
#else
#define WL_DRV_STATUS_SENDING_AF_FRM_EXT(cfg) wl_get_drv_status_all(cfg, SENDING_ACT_FRM)
#endif /* WL_CFG80211_SYNC_GON */
+#define WL_IS_P2P_DEV_EVENT(e) ((e->emsg.ifidx == 0) && \
+ (e->emsg.bsscfgidx == P2PAPI_BSSCFG_DEVICE))
#define DNGL_FUNC(func, parameters) func parameters
#define COEX_DHCP
@@ -200,11 +205,7 @@
static const struct ieee80211_iface_combination
common_iface_combinations[] = {
{
-#ifdef DHD_ENABLE_MCC
- .num_different_channels = 2,
-#else
- .num_different_channels = 1,
-#endif
+ .num_different_channels = NUM_DIFF_CHANNELS,
.max_interfaces = 4,
.limits = common_if_limits,
.n_limits = ARRAY_SIZE(common_if_limits),
@@ -248,6 +249,10 @@
#define PM_BLOCK 1
#define PM_ENABLE 0
+#ifdef MFP
+#define WL_AKM_SUITE_MFP_1X 0x000FAC05
+#define WL_AKM_SUITE_MFP_PSK 0x000FAC06
+#endif /* MFP */
#ifndef IBSS_COALESCE_ALLOWED
@@ -257,6 +262,8 @@
#ifndef IBSS_INITIAL_SCAN_ALLOWED
#define IBSS_INITIAL_SCAN_ALLOWED 0
#endif
+
+#define CUSTOM_RETRY_MASK 0xff000000 /* Mask for retry counter of custom dwell time */
/*
* cfg80211_ops api/callback list
*/
@@ -275,13 +282,18 @@
struct cfg80211_scan_request *request);
#endif /* WL_CFG80211_P2P_DEV_IF */
static s32 wl_cfg80211_set_wiphy_params(struct wiphy *wiphy, u32 changed);
+static bcm_struct_cfgdev* bcm_cfg80211_add_ibss_if(struct wiphy *wiphy, char *name);
+static s32 bcm_cfg80211_del_ibss_if(struct wiphy *wiphy, bcm_struct_cfgdev *cfgdev);
static s32 wl_cfg80211_join_ibss(struct wiphy *wiphy, struct net_device *dev,
struct cfg80211_ibss_params *params);
static s32 wl_cfg80211_leave_ibss(struct wiphy *wiphy,
struct net_device *dev);
static s32 wl_cfg80211_get_station(struct wiphy *wiphy,
- struct net_device *dev, u8 *mac,
- struct station_info *sinfo);
+ struct net_device *dev,
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 16, 0))
+ const
+#endif
+ u8 *mac, struct station_info *sinfo);
static s32 wl_cfg80211_set_power_mgmt(struct wiphy *wiphy,
struct net_device *dev, bool enabled,
s32 timeout);
@@ -324,9 +336,17 @@
static s32 wl_cfg80211_mgmt_tx_cancel_wait(struct wiphy *wiphy,
bcm_struct_cfgdev *cfgdev, u64 cookie);
static s32 wl_cfg80211_del_station(struct wiphy *wiphy,
- struct net_device *ndev, u8* mac_addr);
+ struct net_device *ndev,
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 16, 0))
+ const
+#endif
+ u8 *mac_addr);
static s32 wl_cfg80211_change_station(struct wiphy *wiphy,
- struct net_device *dev, u8 *mac, struct station_parameters *params);
+ struct net_device *dev,
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 16, 0))
+ const
+#endif
+ u8 *mac, struct station_parameters *params);
#endif /* WL_SUPPORT_BACKPORTED_KPATCHES || KERNEL_VER >= KERNEL_VERSION(3, 2, 0)) */
#if (LINUX_VERSION_CODE > KERNEL_VERSION(2, 6, 39))
static s32 wl_cfg80211_suspend(struct wiphy *wiphy, struct cfg80211_wowlan *wow);
@@ -344,11 +364,21 @@
struct net_device *ndev, bool aborted, bool fw_abort);
#if (LINUX_VERSION_CODE > KERNEL_VERSION(3, 2, 0))
static s32 wl_cfg80211_tdls_oper(struct wiphy *wiphy, struct net_device *dev,
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 16, 0))
+ const
+#endif
u8 *peer, enum nl80211_tdls_operation oper);
-#endif
+#endif
#ifdef WL_SCHED_SCAN
static int wl_cfg80211_sched_scan_stop(struct wiphy *wiphy, struct net_device *dev);
#endif
+#if defined(DUAL_STA) || defined(DUAL_STA_STATIC_IF)
+bcm_struct_cfgdev*
+wl_cfg80211_create_iface(struct wiphy *wiphy, enum nl80211_iftype
+ iface_type, u8 *mac_addr, const char *name);
+s32
+wl_cfg80211_del_iface(struct wiphy *wiphy, bcm_struct_cfgdev *cfgdev);
+#endif /* defined(DUAL_STA) || defined(DUAL_STA_STATIC_IF) */
/*
* event & event Q handlers for cfg80211 interfaces
@@ -381,6 +411,10 @@
const wl_event_msg_t *e, void *data);
static s32 wl_notify_mic_status(struct bcm_cfg80211 *cfg, bcm_struct_cfgdev *cfgdev,
const wl_event_msg_t *e, void *data);
+#ifdef BT_WIFI_HANDOVER
+static s32 wl_notify_bt_wifi_handover_req(struct bcm_cfg80211 *cfg,
+ bcm_struct_cfgdev *cfgdev, const wl_event_msg_t *e, void *data);
+#endif /* BT_WIFI_HANDOVER */
#ifdef WL_SCHED_SCAN
static s32
wl_notify_sched_scan_results(struct bcm_cfg80211 *cfg, struct net_device *ndev,
@@ -390,6 +424,14 @@
static s32 wl_notify_pfn_status(struct bcm_cfg80211 *cfg, bcm_struct_cfgdev *cfgdev,
const wl_event_msg_t *e, void *data);
#endif /* PNO_SUPPORT */
+#ifdef GSCAN_SUPPORT
+static s32 wl_notify_gscan_event(struct bcm_cfg80211 *wl, bcm_struct_cfgdev *cfgdev,
+ const wl_event_msg_t *e, void *data);
+static s32 wl_handle_roam_exp_event(struct bcm_cfg80211 *wl, bcm_struct_cfgdev *cfgdev,
+ const wl_event_msg_t *e, void *data);
+#endif /* GSCAN_SUPPORT */
+static s32 wl_handle_rssi_monitor_event(struct bcm_cfg80211 *wl, bcm_struct_cfgdev *cfgdev,
+ const wl_event_msg_t *e, void *data);
static s32 wl_notifier_change_state(struct bcm_cfg80211 *cfg, struct net_info *_net_info,
enum wl_status state, bool set);
@@ -443,10 +485,13 @@
*/
static void wl_rst_ie(struct bcm_cfg80211 *cfg);
static __used s32 wl_add_ie(struct bcm_cfg80211 *cfg, u8 t, u8 l, u8 *v);
-static void wl_update_hidden_ap_ie(struct wl_bss_info *bi, u8 *ie_stream, u32 *ie_size);
+static void wl_update_hidden_ap_ie(struct wl_bss_info *bi, u8 *ie_stream, u32 *ie_size, bool roam);
static s32 wl_mrg_ie(struct bcm_cfg80211 *cfg, u8 *ie_stream, u16 ie_size);
static s32 wl_cp_ie(struct bcm_cfg80211 *cfg, u8 *dst, u16 dst_size);
static u32 wl_get_ielen(struct bcm_cfg80211 *cfg);
+#ifdef MFP
+static int wl_cfg80211_get_rsn_capa(bcm_tlv_t *wpa2ie, u8* capa);
+#endif
#ifdef WL11U
bcm_tlv_t *
@@ -459,13 +504,13 @@
static s32 wl_setup_wiphy(struct wireless_dev *wdev, struct device *dev, void *data);
static void wl_free_wdev(struct bcm_cfg80211 *cfg);
#ifdef CONFIG_CFG80211_INTERNAL_REGDB
-static int
+static void
wl_cfg80211_reg_notifier(struct wiphy *wiphy, struct regulatory_request *request);
#endif /* CONFIG_CFG80211_INTERNAL_REGDB */
static s32 wl_inform_bss(struct bcm_cfg80211 *cfg);
-static s32 wl_inform_single_bss(struct bcm_cfg80211 *cfg, struct wl_bss_info *bi);
-static s32 wl_update_bss_info(struct bcm_cfg80211 *cfg, struct net_device *ndev);
+static s32 wl_inform_single_bss(struct bcm_cfg80211 *cfg, struct wl_bss_info *bi, bool roam);
+static s32 wl_update_bss_info(struct bcm_cfg80211 *cfg, struct net_device *ndev, bool roam);
static chanspec_t wl_cfg80211_get_shared_freq(struct wiphy *wiphy);
s32 wl_cfg80211_channel_to_freq(u32 channel);
@@ -531,6 +576,14 @@
int nprobes, int *out_params_size);
static bool check_dev_role_integrity(struct bcm_cfg80211 *cfg, u32 dev_role);
+#ifdef WL_CFG80211_ACL
+/* ACL */
+static int wl_cfg80211_set_mac_acl(struct wiphy *wiphy, struct net_device *cfgdev,
+ const struct cfg80211_acl_data *acl);
+#endif /* WL_CFG80211_ACL */
+
+static void wl_send_event(struct net_device *dev, uint32 event_type, uint32 status, uint32 reason);
+
/*
* Some external functions, TODO: move them to dhd_linux.h
*/
@@ -540,16 +593,29 @@
int dhd_monitor_uninit(void);
int dhd_start_xmit(struct sk_buff *skb, struct net_device *net);
+#ifdef ROAM_CHANNEL_CACHE
+void init_roam(int ioctl_ver);
+void reset_roam_cache(void);
+void add_roam_cache(wl_bss_info_t *bi);
+int get_roam_channel_list(int target_chan,
+ chanspec_t *channels, const wlc_ssid_t *ssid, int ioctl_ver);
+void print_roam_cache(void);
+void set_roam_band(int band);
+void update_roam_cache(struct bcm_cfg80211 *cfg, int ioctl_ver);
+#define MAX_ROAM_CACHE_NUM 100
+#endif /* ROAM_CHANNEL_CACHE */
static int wl_cfg80211_delayed_roam(struct bcm_cfg80211 *cfg, struct net_device *ndev,
const struct ether_addr *bssid);
+static int bw2cap[] = { 0, 0, WLC_BW_CAP_20MHZ, WLC_BW_CAP_40MHZ, WLC_BW_CAP_80MHZ,
+ WLC_BW_CAP_160MHZ, WLC_BW_CAP_160MHZ };
#define RETURN_EIO_IF_NOT_UP(wlpriv) \
do { \
struct net_device *checkSysUpNDev = bcmcfg_to_prmry_ndev(wlpriv); \
if (unlikely(!wl_get_drv_status(wlpriv, READY, checkSysUpNDev))) { \
- WL_INFO(("device is not ready\n")); \
+ WL_INFORM(("device is not ready\n")); \
return -EIO; \
} \
} while (0)
@@ -680,9 +746,10 @@
CHAN5G(116, 0), CHAN5G(120, 0),
CHAN5G(124, 0), CHAN5G(128, 0),
CHAN5G(132, 0), CHAN5G(136, 0),
- CHAN5G(140, 0), CHAN5G(149, 0),
- CHAN5G(153, 0), CHAN5G(157, 0),
- CHAN5G(161, 0), CHAN5G(165, 0)
+ CHAN5G(140, 0), CHAN5G(144, 0),
+ CHAN5G(149, 0), CHAN5G(153, 0),
+ CHAN5G(157, 0), CHAN5G(161, 0),
+ CHAN5G(165, 0)
};
static struct ieee80211_supported_band __wl_band_2ghz = {
@@ -709,6 +776,22 @@
WLAN_CIPHER_SUITE_AES_CMAC,
};
+#ifdef WL_SUPPORT_ACS
+/*
+ * The firmware code required for this feature to work is currently under
+ * BCMINTERNAL flag. In future if this is to enabled we need to bring the
+ * required firmware code out of the BCMINTERNAL flag.
+ */
+struct wl_dump_survey {
+ u32 obss;
+ u32 ibss;
+ u32 no_ctg;
+ u32 no_pckt;
+ u32 tx;
+ u32 idle;
+};
+#endif /* WL_SUPPORT_ACS */
+
#if defined(USE_DYNAMIC_MAXPKT_RXGLOM)
static int maxrxpktglom = 0;
@@ -905,6 +988,65 @@
return chanspec;
}
+/*
+ * convert ASCII string to MAC address (colon-delimited format)
+ * eg: 00:11:22:33:44:55
+ */
+int
+wl_cfg80211_ether_atoe(const char *a, struct ether_addr *n)
+{
+ char *c = NULL;
+ int count = 0;
+
+ memset(n, 0, ETHER_ADDR_LEN);
+ for (;;) {
+ n->octet[count++] = (uint8)simple_strtoul(a, &c, 16);
+ if (!*c++ || count == ETHER_ADDR_LEN)
+ break;
+ a = c;
+ }
+ return (count == ETHER_ADDR_LEN);
+}
+
+/* convert hex string buffer to binary */
+int
+wl_cfg80211_hex_str_to_bin(unsigned char *data, int dlen, char *str)
+{
+ int count, slen;
+ int hvalue;
+ char tmp[3] = {0};
+ char *ptr = str, *endp = NULL;
+
+ if (!data || !str || !dlen) {
+ WL_DBG((" passed buffer is empty \n"));
+ return 0;
+ }
+
+ slen = strlen(str);
+ if (dlen * 2 < slen) {
+ WL_DBG((" destination buffer too short \n"));
+ return 0;
+ }
+
+ if (slen % 2) {
+ WL_DBG((" source buffer is of odd length \n"));
+ return 0;
+ }
+
+ for (count = 0; count < slen; count += 2) {
+ memcpy(tmp, ptr, 2);
+ hvalue = simple_strtol(tmp, &endp, 16);
+ if (*endp != '\0') {
+ WL_DBG((" non hexadecimal character encountered \n"));
+ return 0;
+ }
+ *data++ = (unsigned char)hvalue;
+ ptr += 2;
+ }
+
+ return (slen / 2);
+}
+
/* There isn't a lot of sense in it, but you can transmit anything you like */
static const struct ieee80211_txrx_stypes
wl_cfg80211_default_mgmt_stypes[NUM_NL80211_IFTYPES] = {
@@ -984,8 +1126,7 @@
key->iv_initialized = dtoh32(key->iv_initialized);
}
-#if (LINUX_VERSION_CODE < KERNEL_VERSION(3, 4, 0))
-/* For debug: Dump the contents of the encoded wps ie buffe */
+/* Dump the contents of the encoded wps ie buffer and get pbc value */
static void
wl_validate_wps_ie(char *wps_ie, s32 wps_ie_len, bool *pbc)
{
@@ -1068,7 +1209,6 @@
subel += subelt_len;
}
}
-#endif
s32 wl_set_tx_power(struct net_device *dev,
enum nl80211_tx_power_setting type, s32 dbm)
@@ -1076,6 +1216,7 @@
s32 err = 0;
s32 disable = 0;
s32 txpwrqdbm;
+ struct bcm_cfg80211 *cfg = g_bcm_cfg;
/* Make sure radio is off or on as far as software is concerned */
disable = WL_RADIO_SW_DISABLE << 16;
@@ -1089,7 +1230,9 @@
if (dbm > 0xffff)
dbm = 0xffff;
txpwrqdbm = dbm * 4;
- err = wldev_iovar_setint(dev, "qtxpower", txpwrqdbm);
+ err = wldev_iovar_setbuf_bsscfg(dev, "qtxpower", (void *)&txpwrqdbm,
+ sizeof(txpwrqdbm), cfg->ioctl_buf, WLC_IOCTL_SMLEN, 0,
+ &cfg->ioctl_buf_sync);
if (unlikely(err))
WL_ERR(("qtxpower error (%d)\n", err));
else
@@ -1102,15 +1245,21 @@
{
s32 err = 0;
s32 txpwrdbm;
+ struct bcm_cfg80211 *cfg = g_bcm_cfg;
- err = wldev_iovar_getint(dev, "qtxpower", &txpwrdbm);
+ err = wldev_iovar_getbuf_bsscfg(dev, "qtxpower",
+ NULL, 0, cfg->ioctl_buf, WLC_IOCTL_SMLEN, 0, &cfg->ioctl_buf_sync);
if (unlikely(err)) {
WL_ERR(("error (%d)\n", err));
return err;
}
+ memcpy(&txpwrdbm, cfg->ioctl_buf, sizeof(txpwrdbm));
+ txpwrdbm = dtoh32(txpwrdbm);
*dbm = (txpwrdbm & ~WL_TXPWR_OVERRIDE) / 4;
+ WL_INFORM(("dBm=%d, txpwrdbm=0x%x\n", *dbm, txpwrdbm));
+
return err;
}
@@ -1152,13 +1301,13 @@
wl_cfg80211_add_monitor_if(char *name)
{
#if defined(WL_ENABLE_P2P_IF) || defined(WL_CFG80211_P2P_DEV_IF)
- WL_INFO(("wl_cfg80211_add_monitor_if: No more support monitor interface\n"));
+ WL_INFORM(("wl_cfg80211_add_monitor_if: No more support monitor interface\n"));
return ERR_PTR(-EOPNOTSUPP);
#else
struct net_device* ndev = NULL;
dhd_add_monitor(name, &ndev);
- WL_INFO(("wl_cfg80211_add_monitor_if net device returned: 0x%p\n", ndev));
+ WL_INFORM(("wl_cfg80211_add_monitor_if net device returned: 0x%p\n", ndev));
return ndev_to_cfgdev(ndev);
#endif /* WL_ENABLE_P2P_IF || WL_CFG80211_P2P_DEV_IF */
}
@@ -1189,7 +1338,7 @@
s32 up = 1;
dhd_pub_t *dhd;
bool enabled;
-#endif
+#endif
#endif /* PROP_TXSTATUS_VSDB */
if (!cfg)
@@ -1198,7 +1347,7 @@
#ifdef PROP_TXSTATUS_VSDB
#if defined(BCMSDIO)
dhd = (dhd_pub_t *)(cfg->pub);
-#endif
+#endif
#endif /* PROP_TXSTATUS_VSDB */
@@ -1213,6 +1362,7 @@
WL_DBG(("if name: %s, type: %d\n", name, type));
switch (type) {
case NL80211_IFTYPE_ADHOC:
+ return bcm_cfg80211_add_ibss_if(wiphy, (char *)name);
case NL80211_IFTYPE_AP_VLAN:
case NL80211_IFTYPE_WDS:
case NL80211_IFTYPE_MESH_POINT:
@@ -1225,8 +1375,21 @@
case NL80211_IFTYPE_P2P_DEVICE:
return wl_cfgp2p_add_p2p_disc_if(cfg);
#endif /* WL_CFG80211_P2P_DEV_IF */
- case NL80211_IFTYPE_P2P_CLIENT:
case NL80211_IFTYPE_STATION:
+#ifdef DUAL_STA
+ if (cfg->ibss_cfgdev) {
 WL_ERR(("AIBSS is already operational. "
 "AIBSS & DUAL_STA can't be used together\n"));
+ return NULL;
+ }
+ if (!name) {
+ WL_ERR(("Interface name not provided \n"));
+ return NULL;
+ }
+ return wl_cfg80211_create_iface(cfg->wdev->wiphy,
+ NL80211_IFTYPE_STATION, NULL, name);
+#endif /* DUAL_STA */
+ case NL80211_IFTYPE_P2P_CLIENT:
wlif_type = WL_P2P_IF_CLIENT;
mode = WL_MODE_BSS;
break;
@@ -1252,7 +1415,7 @@
#if defined(BCMSDIO)
if (!dhd)
return ERR_PTR(-ENODEV);
-#endif
+#endif
#endif /* PROP_TXSTATUS_VSDB */
if (!cfg->p2p)
return ERR_PTR(-ENODEV);
@@ -1283,7 +1446,7 @@
}
cfg->wlfc_on = true;
}
-#endif
+#endif
#endif /* PROP_TXSTATUS_VSDB */
/* In concurrency case, STA may be already associated in a particular channel.
@@ -1333,7 +1496,7 @@
goto fail;
}
vwdev->wiphy = cfg->wdev->wiphy;
- WL_INFO(("virtual interface(%s) is created\n", cfg->p2p->vir_ifname));
+ WL_INFORM(("virtual interface(%s) is created\n", cfg->p2p->vir_ifname));
vwdev->iftype = type;
vwdev->netdev = new_ndev;
new_ndev->ieee80211_ptr = vwdev;
@@ -1364,8 +1527,11 @@
dhd_mode = DHD_FLAG_P2P_GO_MODE;
DNGL_FUNC(dhd_cfg80211_set_p2p_info, (cfg, dhd_mode));
/* reinitialize completion to clear previous count */
+#if (LINUX_VERSION_CODE < KERNEL_VERSION(3, 13, 0))
INIT_COMPLETION(cfg->iface_disable);
-
+#else
+ init_completion(&cfg->iface_disable);
+#endif
return ndev_to_cfgdev(new_ndev);
} else {
wl_clr_p2p_status(cfg, IF_ADDING);
@@ -1380,7 +1546,7 @@
dhd_wlfc_deinit(dhd);
cfg->wlfc_on = false;
}
-#endif
+#endif
#endif /* PROP_TXSTATUS_VSDB */
}
}
@@ -1417,6 +1583,14 @@
#endif /* WL_CFG80211_P2P_DEV_IF */
dev = cfgdev_to_wlc_ndev(cfgdev, cfg);
+ if (cfgdev == cfg->ibss_cfgdev)
+ return bcm_cfg80211_del_ibss_if(wiphy, cfgdev);
+
+#ifdef DUAL_STA
+ if (cfgdev == cfg->bss_cfgdev)
+ return wl_cfg80211_del_iface(wiphy, cfgdev);
+#endif /* DUAL_STA */
+
if (wl_cfgp2p_find_idx(cfg, dev, &index) != BCME_OK) {
WL_ERR(("Find p2p index from ndev(%p) failed\n", dev));
return BCME_ERROR;
@@ -1479,7 +1653,7 @@
ret, ndev->name));
#if defined(BCMDONGLEHOST) && defined(OEM_ANDROID)
net_os_send_hang_message(ndev);
- #endif
+ #endif
} else {
/* Wait for IF_DEL operation to be finished */
timeout = wait_event_interruptible_timeout(cfg->netif_change_event,
@@ -1518,6 +1692,7 @@
chanspec_t chspec;
struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
dhd_pub_t *dhd = (dhd_pub_t *)(cfg->pub);
+
WL_DBG(("Enter type %d\n", type));
switch (type) {
case NL80211_IFTYPE_MONITOR:
@@ -1610,6 +1785,7 @@
s32
wl_cfg80211_notify_ifadd(int ifidx, char *name, uint8 *mac, uint8 bssidx)
{
+ bool ifadd_expected = FALSE;
struct bcm_cfg80211 *cfg = g_bcm_cfg;
/* P2P may send WLC_E_IF_ADD and/or WLC_E_IF_CHANGE during IF updating ("p2p_ifupd")
@@ -1620,6 +1796,14 @@
/* Okay, we are expecting IF_ADD (as IF_ADDING is true) */
if (wl_get_p2p_status(cfg, IF_ADDING)) {
+ ifadd_expected = TRUE;
+ wl_clr_p2p_status(cfg, IF_ADDING);
+ } else if (cfg->bss_pending_op) {
+ ifadd_expected = TRUE;
+ cfg->bss_pending_op = FALSE;
+ }
+
+ if (ifadd_expected) {
wl_if_event_info *if_event_info = &cfg->if_event_info;
if_event_info->valid = TRUE;
@@ -1629,8 +1813,6 @@
if_event_info->name[IFNAMSIZ] = '\0';
if (mac)
memcpy(if_event_info->mac, mac, ETHER_ADDR_LEN);
-
- wl_clr_p2p_status(cfg, IF_ADDING);
wake_up_interruptible(&cfg->netif_change_event);
return BCME_OK;
}
@@ -1641,14 +1823,22 @@
s32
wl_cfg80211_notify_ifdel(int ifidx, char *name, uint8 *mac, uint8 bssidx)
{
+ bool ifdel_expected = FALSE;
struct bcm_cfg80211 *cfg = g_bcm_cfg;
wl_if_event_info *if_event_info = &cfg->if_event_info;
if (wl_get_p2p_status(cfg, IF_DELETING)) {
+ ifdel_expected = TRUE;
+ wl_clr_p2p_status(cfg, IF_DELETING);
+ } else if (cfg->bss_pending_op) {
+ ifdel_expected = TRUE;
+ cfg->bss_pending_op = FALSE;
+ }
+
+ if (ifdel_expected) {
if_event_info->valid = TRUE;
if_event_info->ifidx = ifidx;
if_event_info->bssidx = bssidx;
- wl_clr_p2p_status(cfg, IF_DELETING);
wake_up_interruptible(&cfg->netif_change_event);
return BCME_OK;
}
@@ -1679,7 +1869,7 @@
#if defined(BCMSDIO)
dhd_pub_t *dhd = (dhd_pub_t *)(cfg->pub);
bool enabled;
-#endif
+#endif
#endif /* PROP_TXSTATUS_VSDB */
bssidx = if_event_info->bssidx;
@@ -1715,13 +1905,11 @@
dhd_wlfc_deinit(dhd);
cfg->wlfc_on = false;
}
-#endif
+#endif
#endif /* PROP_TXSTATUS_VSDB */
}
-#if (LINUX_VERSION_CODE < KERNEL_VERSION(3, 6, 0))
wl_cfg80211_remove_if(cfg, if_event_info->ifidx, ndev);
-#endif /* (LINUX_VERSION_CODE < KERNEL_VERSION(3, 6, 0)) */
return BCME_OK;
}
@@ -1830,7 +2018,11 @@
/* SKIP DFS channels for Secondary interface */
if ((cfg->escan_info.ndev != bcmcfg_to_prmry_ndev(cfg)) &&
(request->channels[i]->flags &
+#if (LINUX_VERSION_CODE < KERNEL_VERSION(3, 14, 0))
(IEEE80211_CHAN_RADAR | IEEE80211_CHAN_PASSIVE_SCAN)))
+#else
+ (IEEE80211_CHAN_RADAR | IEEE80211_CHAN_NO_IR)))
+#endif /* LINUX_VERSION_CODE < KERNEL_VERSION(3, 14, 0) */
continue;
if (request->channels[i]->band == IEEE80211_BAND_2GHZ) {
@@ -1928,6 +2120,7 @@
u16 *default_chan_list = NULL;
wl_uint32_list_t *list;
struct net_device *dev = NULL;
+ dhd_pub_t *dhd = (dhd_pub_t *)(cfg->pub);
#if defined(USE_INITIAL_2G_SCAN) || defined(USE_INITIAL_SHORT_DWELL_TIME)
bool is_first_init_2g_scan = false;
#endif /* USE_INITIAL_2G_SCAN || USE_INITIAL_SHORT_DWELL_TIME */
@@ -2038,6 +2231,8 @@
WL_DBG((" Escan not permitted at this time (%d)\n", err));
else
WL_ERR((" Escan set error (%d)\n", err));
+ } else {
+ DBG_EVENT_LOG(dhd, WIFI_EVENT_DRIVER_SCAN_REQUESTED);
}
kfree(params);
}
@@ -2065,8 +2260,13 @@
/* ignore DFS channels */
if (request->channels[i]->flags &
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 14, 0))
+ (IEEE80211_CHAN_NO_IR
+ | IEEE80211_CHAN_RADAR))
+#else
(IEEE80211_CHAN_RADAR
| IEEE80211_CHAN_PASSIVE_SCAN))
+#endif
continue;
for (j = 0; j < n_valid_chan; j++) {
@@ -2087,11 +2287,11 @@
/* SOCIAL CHANNELS 1, 6, 11 */
search_state = WL_P2P_DISC_ST_SEARCH;
p2p_scan_purpose = P2P_SCAN_SOCIAL_CHANNEL;
- WL_INFO(("P2P SEARCH PHASE START \n"));
+ WL_INFORM(("P2P SEARCH PHASE START \n"));
} else if ((dev = wl_to_p2p_bss_ndev(cfg, P2PAPI_BSSCFG_CONNECTION)) &&
(wl_get_mode_by_netdev(cfg, dev) == WL_MODE_AP)) {
/* If you are already a GO, then do SEARCH only */
- WL_INFO(("Already a GO. Do SEARCH Only"));
+ WL_INFORM(("Already a GO. Do SEARCH Only"));
search_state = WL_P2P_DISC_ST_SEARCH;
num_chans = n_nodfs;
p2p_scan_purpose = P2P_SCAN_NORMAL;
@@ -2104,7 +2304,7 @@
*/
p2p_scan_purpose = P2P_SCAN_SOCIAL_CHANNEL;
} else {
- WL_INFO(("P2P SCAN STATE START \n"));
+ WL_INFORM(("P2P SCAN STATE START \n"));
num_chans = n_nodfs;
p2p_scan_purpose = P2P_SCAN_NORMAL;
}
@@ -2191,9 +2391,18 @@
dhd_pub_t *dhd;
dhd = (dhd_pub_t *)(cfg->pub);
+ /*
+ * Hostapd triggers a scan before starting automatic channel selection,
+ * and the dump-stats IOVAR also scans each channel, hence we return here.
+ */
if (dhd->op_mode & DHD_FLAG_HOSTAP_MODE) {
+#ifdef WL_SUPPORT_ACS
+ WL_INFORM(("Scan Command at SoftAP mode\n"));
+ return 0;
+#else
WL_ERR(("Invalid Scan Command at SoftAP mode\n"));
return -EINVAL;
+#endif /* WL_SUPPORT_ACS */
}
ndev = ndev_to_wlc_ndev(ndev, cfg);
@@ -2323,10 +2532,6 @@
ssids = this_ssid;
}
- if (request && !p2p_scan(cfg)) {
- WL_TRACE_HW4(("START SCAN\n"));
- }
-
cfg->scan_request = request;
wl_set_drv_status(cfg, SCANNING, ndev);
@@ -2522,7 +2727,7 @@
chanspec_t c = 0, ret_c = 0;
int bw = 0, tmp_bw = 0;
int i;
- u32 tmp_c, sb;
+ u32 tmp_c;
u16 kflags = in_atomic() ? GFP_ATOMIC : GFP_KERNEL;
#define LOCAL_BUF_SIZE 1024
buf = (u8 *) kzalloc(LOCAL_BUF_SIZE, kflags);
@@ -2549,34 +2754,8 @@
goto exit;
}
}
- if (CHSPEC_IS20(c)) {
- tmp_c = CHSPEC_CHANNEL(c);
- tmp_bw = WLC_BW_CAP_20MHZ;
- }
- else if (CHSPEC_IS40(c)) {
- tmp_c = CHSPEC_CHANNEL(c);
- if (CHSPEC_SB_UPPER(c)) {
- tmp_c += CH_10MHZ_APART;
- } else {
- tmp_c -= CH_10MHZ_APART;
- }
- tmp_bw = WLC_BW_CAP_40MHZ;
- }
- else {
- tmp_c = CHSPEC_CHANNEL(c);
- sb = c & WL_CHANSPEC_CTL_SB_MASK;
- if (sb == WL_CHANSPEC_CTL_SB_LL) {
- tmp_c -= (CH_10MHZ_APART + CH_20MHZ_APART);
- } else if (sb == WL_CHANSPEC_CTL_SB_LU) {
- tmp_c -= CH_10MHZ_APART;
- } else if (sb == WL_CHANSPEC_CTL_SB_UL) {
- tmp_c += CH_10MHZ_APART;
- } else {
- /* WL_CHANSPEC_CTL_SB_UU */
- tmp_c += (CH_10MHZ_APART + CH_20MHZ_APART);
- }
- tmp_bw = WLC_BW_CAP_80MHZ;
- }
+ tmp_c = wf_chspec_ctlchan(c);
+ tmp_bw = bw2cap[CHSPEC_BW(c) >> WL_CHANSPEC_BW_SHIFT];
if (tmp_c != channel)
continue;
@@ -2591,7 +2770,7 @@
if (buf)
kfree(buf);
#undef LOCAL_BUF_SIZE
- WL_INFO(("return chanspec %x %d\n", ret_c, bw));
+ WL_INFORM(("return chanspec %x %d\n", ret_c, bw));
return ret_c;
}
@@ -2658,6 +2837,375 @@
return ret;
}
+static bcm_struct_cfgdev*
+bcm_cfg80211_add_ibss_if(struct wiphy *wiphy, char *name)
+{
+ int err = 0;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ struct wireless_dev* wdev = NULL;
+ struct net_device *new_ndev = NULL;
+ struct net_device *primary_ndev = NULL;
+ s32 timeout;
+ wl_aibss_if_t aibss_if;
+ wl_if_event_info *event = NULL;
+
+ if (cfg->ibss_cfgdev != NULL) {
+ WL_ERR(("IBSS interface %s already exists\n", name));
+ return NULL;
+ }
+
+ WL_ERR(("Try to create IBSS interface %s\n", name));
+ primary_ndev = bcmcfg_to_prmry_ndev(cfg);
+ /* generate a new MAC address for the IBSS interface */
+ get_primary_mac(cfg, &cfg->ibss_if_addr);
+ cfg->ibss_if_addr.octet[4] ^= 0x40;
+ memset(&aibss_if, 0, sizeof(aibss_if));
+ memcpy(&aibss_if.addr, &cfg->ibss_if_addr, sizeof(aibss_if.addr));
+ aibss_if.chspec = 0;
+ aibss_if.len = sizeof(aibss_if);
+
+ cfg->bss_pending_op = TRUE;
+ memset(&cfg->if_event_info, 0, sizeof(cfg->if_event_info));
+ err = wldev_iovar_setbuf(primary_ndev, "aibss_ifadd", &aibss_if,
+ sizeof(aibss_if), cfg->ioctl_buf, WLC_IOCTL_MAXLEN, NULL);
+ if (err) {
+ WL_ERR(("IOVAR aibss_ifadd failed with error %d\n", err));
+ goto fail;
+ }
+ timeout = wait_event_interruptible_timeout(cfg->netif_change_event,
+ !cfg->bss_pending_op, msecs_to_jiffies(MAX_WAIT_TIME));
+ if (timeout <= 0 || cfg->bss_pending_op)
+ goto fail;
+
+ event = &cfg->if_event_info;
+ strncpy(event->name, name, IFNAMSIZ - 1);
+ /* By calling wl_cfg80211_allocate_if (and eventually dhd_allocate_if) we hand
+ * control of this net_device interface over to dhd_linux; the interface is then
+ * managed by dhd_linux and will be freed by dhd_detach unless it gets
+ * unregistered before that. The wireless_dev instance new_ndev->ieee80211_ptr
+ * associated with this net_device will be freed by wl_dealloc_netinfo.
+ */
+ new_ndev = wl_cfg80211_allocate_if(cfg, event->ifidx, event->name,
+ event->mac, event->bssidx);
+ if (new_ndev == NULL)
+ goto fail;
+ wdev = kzalloc(sizeof(*wdev), GFP_KERNEL);
+ if (wdev == NULL)
+ goto fail;
+ wdev->wiphy = wiphy;
+ wdev->iftype = NL80211_IFTYPE_ADHOC;
+ wdev->netdev = new_ndev;
+ new_ndev->ieee80211_ptr = wdev;
+ SET_NETDEV_DEV(new_ndev, wiphy_dev(wdev->wiphy));
+
+ /* The rtnl lock must have been acquired. If this is not the case,
+ * wl_cfg80211_register_if needs to be modified to take one parameter
+ * (bool need_rtnl_lock).
+ */
+ ASSERT_RTNL();
+ if (wl_cfg80211_register_if(cfg, event->ifidx, new_ndev) != BCME_OK)
+ goto fail;
+
+ wl_alloc_netinfo(cfg, new_ndev, wdev, WL_MODE_IBSS, PM_ENABLE);
+ cfg->ibss_cfgdev = ndev_to_cfgdev(new_ndev);
+ WL_ERR(("IBSS interface %s created\n", new_ndev->name));
+ return cfg->ibss_cfgdev;
+
+fail:
+ WL_ERR(("failed to create IBSS interface %s \n", name));
+ cfg->bss_pending_op = FALSE;
+ if (new_ndev)
+ wl_cfg80211_remove_if(cfg, event->ifidx, new_ndev);
+ if (wdev)
+ kfree(wdev);
+ return NULL;
+}
+
+#if defined(DUAL_STA) || defined(DUAL_STA_STATIC_IF)
+s32
+wl_cfg80211_add_del_bss(struct bcm_cfg80211 *cfg,
+ struct net_device *ndev, s32 bsscfg_idx,
+ enum nl80211_iftype iface_type, s32 del, u8 *addr)
+{
+ s32 ret = BCME_OK;
+ s32 val = 0;
+
+ struct {
+ s32 cfg;
+ s32 val;
+ struct ether_addr ea;
+ } bss_setbuf;
+
+ WL_INFORM(("iface_type:%d del:%d \n", iface_type, del));
+
+ bzero(&bss_setbuf, sizeof(bss_setbuf));
+
+ /* AP=3, STA=2, up=1, down=0, val=-1 */
+ if (del) {
+ val = -1;
+ } else if (iface_type == NL80211_IFTYPE_AP) {
+ /* AP Interface */
+ WL_DBG(("Adding AP Interface \n"));
+ val = 3;
+ } else if (iface_type == NL80211_IFTYPE_STATION) {
+ WL_DBG(("Adding STA Interface \n"));
+ val = 2;
+ } else {
+ WL_ERR((" add_del_bss NOT supported for IFACE type:0x%x", iface_type));
+ return -EINVAL;
+ }
+
+ bss_setbuf.cfg = htod32(bsscfg_idx);
+ bss_setbuf.val = htod32(val);
+
+ if (addr) {
+ memcpy(&bss_setbuf.ea.octet, addr, ETH_ALEN);
+ }
+
+ ret = wldev_iovar_setbuf(ndev, "bss", &bss_setbuf, sizeof(bss_setbuf),
+ cfg->ioctl_buf, WLC_IOCTL_MAXLEN, &cfg->ioctl_buf_sync);
+ if (ret != 0)
+ WL_ERR(("'bss %d' failed with %d\n", val, ret));
+
+ return ret;
+}
+
+/* Create a Generic Network Interface and initialize it depending up on
+ * the interface type
+ */
+bcm_struct_cfgdev*
+wl_cfg80211_create_iface(struct wiphy *wiphy,
+ enum nl80211_iftype iface_type,
+ u8 *mac_addr, const char *name)
+{
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ struct net_device *new_ndev = NULL;
+ struct net_device *primary_ndev = NULL;
+ s32 ret = BCME_OK;
+ s32 bsscfg_idx = 1;
+ u32 timeout;
+ wl_if_event_info *event = NULL;
+ struct wireless_dev *wdev = NULL;
+ u8 addr[ETH_ALEN];
+
+ WL_DBG(("Enter\n"));
+
+ if (!name) {
+ WL_ERR(("Interface name not provided\n"));
+ return NULL;
+ }
+
+ primary_ndev = bcmcfg_to_prmry_ndev(cfg);
+
+ if (likely(!mac_addr)) {
+ /* Use primary MAC with the locally administered bit for the Secondary STA I/F */
+ memcpy(addr, primary_ndev->dev_addr, ETH_ALEN);
+ addr[0] |= 0x02;
+ } else {
+ /* Use the application provided mac address (if any) */
+ memcpy(addr, mac_addr, ETH_ALEN);
+ }
+
+ if ((iface_type != NL80211_IFTYPE_STATION) && (iface_type != NL80211_IFTYPE_AP)) {
+ WL_ERR(("IFACE type:%d not supported. Only STA "
+ "or AP IFACE is supported\n", iface_type));
+ return NULL;
+ }
+
+ cfg->bss_pending_op = TRUE;
+ memset(&cfg->if_event_info, 0, sizeof(cfg->if_event_info));
+
+ /* De-initialize the p2p discovery interface, if operational */
+ if (p2p_is_on(cfg)) {
+ WL_DBG(("Disabling P2P Discovery Interface \n"));
+#ifdef WL_CFG80211_P2P_DEV_IF
+ ret = wl_cfg80211_scan_stop(bcmcfg_to_p2p_wdev(cfg));
+#else
+ ret = wl_cfg80211_scan_stop(cfg->p2p_net);
+#endif
+ if (unlikely(ret < 0)) {
+ CFGP2P_ERR(("P2P scan stop failed, ret=%d\n", ret));
+ }
+
+ wl_cfgp2p_disable_discovery(cfg);
+ wl_to_p2p_bss_bssidx(cfg, P2PAPI_BSSCFG_DEVICE) = 0;
+ p2p_on(cfg) = false;
+ }
+
+ /*
+ * Initialize the firmware I/F.
+ */
+ if ((ret = wl_cfg80211_add_del_bss(cfg, primary_ndev,
+ bsscfg_idx, iface_type, 0, addr)) < 0) {
+ return NULL;
+ }
+
+ /*
+ * Wait till the firmware send a confirmation event back.
+ */
+ WL_DBG(("Wait for the FW I/F Event\n"));
+ timeout = wait_event_interruptible_timeout(cfg->netif_change_event,
+ !cfg->bss_pending_op, msecs_to_jiffies(MAX_WAIT_TIME));
+ if (timeout <= 0 || cfg->bss_pending_op) {
+ WL_ERR(("ADD_IF event didn't arrive, returning\n"));
+ goto fail;
+ }
+
+ /*
+ * Since the FW operation was successful, we can go ahead with the
+ * host interface creation.
+ */
+ event = &cfg->if_event_info;
+ strncpy(event->name, name, IFNAMSIZ - 1);
+ new_ndev = wl_cfg80211_allocate_if(cfg, event->ifidx,
+ event->name, addr, event->bssidx);
+ if (!new_ndev) {
+ WL_ERR(("I/F allocation failed! \n"));
+ goto fail;
+ } else
+ WL_DBG(("I/F allocation succeeded! ifidx:0x%x bssidx:0x%x \n",
+ event->ifidx, event->bssidx));
+
+ wdev = kzalloc(sizeof(*wdev), GFP_KERNEL);
+ if (!wdev) {
+ WL_ERR(("wireless_dev alloc failed! \n"));
+ goto fail;
+ }
+
+ wdev->wiphy = wiphy;
+ wdev->iftype = iface_type;
+ new_ndev->ieee80211_ptr = wdev;
+ SET_NETDEV_DEV(new_ndev, wiphy_dev(wdev->wiphy));
+
+ /* RTNL lock must have been acquired. */
+ ASSERT_RTNL();
+
+ /* Set the locally administered MAC address, if not applied already */
+ if (memcmp(addr, event->mac, ETH_ALEN) != 0) {
+ ret = wldev_iovar_setbuf_bsscfg(primary_ndev, "cur_etheraddr", addr, ETH_ALEN,
+ cfg->ioctl_buf, WLC_IOCTL_MAXLEN, event->bssidx, &cfg->ioctl_buf_sync);
+ if (unlikely(ret)) {
+ WL_ERR(("set cur_etheraddr Error (%d)\n", ret));
+ goto fail;
+ }
+ memcpy(new_ndev->dev_addr, addr, ETH_ALEN);
+ }
+
+ if (wl_cfg80211_register_if(cfg, event->ifidx, new_ndev) != BCME_OK) {
+ WL_ERR(("IFACE register failed \n"));
+ goto fail;
+ }
+
+ /* Initialize with the station mode params */
+ wl_alloc_netinfo(cfg, new_ndev, wdev,
+ (iface_type == NL80211_IFTYPE_STATION) ?
+ WL_MODE_BSS : WL_MODE_AP, PM_ENABLE);
+ cfg->bss_cfgdev = ndev_to_cfgdev(new_ndev);
+ cfg->cfgdev_bssidx = event->bssidx;
+
+ WL_DBG(("Host Network Interface for Secondary I/F created"));
+
+ return cfg->bss_cfgdev;
+
+fail:
+ cfg->bss_pending_op = FALSE;
+ if (new_ndev)
+ wl_cfg80211_remove_if(cfg, event->ifidx, new_ndev);
+ if (wdev)
+ kfree(wdev);
+
+ return NULL;
+}
+
+s32
+wl_cfg80211_del_iface(struct wiphy *wiphy, bcm_struct_cfgdev *cfgdev)
+{
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ struct net_device *ndev = NULL;
+ struct net_device *primary_ndev = NULL;
+ s32 ret = BCME_OK;
+ s32 bsscfg_idx = 1;
+ u32 timeout;
+ enum nl80211_iftype iface_type = NL80211_IFTYPE_STATION;
+
+ WL_DBG(("Enter\n"));
+
+ if (!cfg->bss_cfgdev)
+ return 0;
+
+ /* If any scan is going on, abort it */
+ if (wl_get_drv_status_all(cfg, SCANNING)) {
+ WL_DBG(("Scan in progress. Aborting the scan!\n"));
+ wl_notify_escan_complete(cfg, cfg->escan_info.ndev, true, true);
+ }
+
+ ndev = cfgdev_to_ndev(cfg->bss_cfgdev);
+ primary_ndev = bcmcfg_to_prmry_ndev(cfg);
+
+ cfg->bss_pending_op = TRUE;
+ memset(&cfg->if_event_info, 0, sizeof(cfg->if_event_info));
+
+ /* Delete the firmware interface */
+ if ((ret = wl_cfg80211_add_del_bss(cfg, ndev,
+ bsscfg_idx, iface_type, true, NULL)) < 0) {
+ WL_ERR(("DEL bss failed ret:%d \n", ret));
+ return ret;
+ }
+
+ timeout = wait_event_interruptible_timeout(cfg->netif_change_event,
+ !cfg->bss_pending_op, msecs_to_jiffies(MAX_WAIT_TIME));
+ if (timeout <= 0 || cfg->bss_pending_op) {
 WL_ERR(("timeout waiting for IF_DEL event\n"));
+ }
+
+ wl_cfg80211_remove_if(cfg, cfg->if_event_info.ifidx, ndev);
+ cfg->bss_cfgdev = NULL;
+ cfg->cfgdev_bssidx = -1;
+ cfg->bss_pending_op = FALSE;
+
+ WL_DBG(("IF_DEL Done.\n"));
+
+ return ret;
+}
+#endif /* defined(DUAL_STA) || defined(DUAL_STA_STATIC_IF) */
+
+static s32
+bcm_cfg80211_del_ibss_if(struct wiphy *wiphy, bcm_struct_cfgdev *cfgdev)
+{
+ int err = 0;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ struct net_device *ndev = NULL;
+ struct net_device *primary_ndev = NULL;
+ s32 timeout;
+
+ if (!cfgdev || cfg->ibss_cfgdev != cfgdev || ETHER_ISNULLADDR(&cfg->ibss_if_addr.octet))
+ return -EINVAL;
+ ndev = cfgdev_to_ndev(cfg->ibss_cfgdev);
+ primary_ndev = bcmcfg_to_prmry_ndev(cfg);
+
+ cfg->bss_pending_op = TRUE;
+ memset(&cfg->if_event_info, 0, sizeof(cfg->if_event_info));
+ err = wldev_iovar_setbuf(primary_ndev, "aibss_ifdel", &cfg->ibss_if_addr,
+ sizeof(cfg->ibss_if_addr), cfg->ioctl_buf, WLC_IOCTL_MAXLEN, NULL);
+ if (err) {
+ WL_ERR(("IOVAR aibss_ifdel failed with error %d\n", err));
+ goto fail;
+ }
+ timeout = wait_event_interruptible_timeout(cfg->netif_change_event,
+ !cfg->bss_pending_op, msecs_to_jiffies(MAX_WAIT_TIME));
+ if (timeout <= 0 || cfg->bss_pending_op) {
+ WL_ERR(("timeout waiting for IF_DEL event\n"));
+ goto fail;
+ }
+
+ wl_cfg80211_remove_if(cfg, cfg->if_event_info.ifidx, ndev);
+ cfg->ibss_cfgdev = NULL;
+ return 0;
+
+fail:
+ cfg->bss_pending_op = FALSE;
+ return -1;
+}
+
static s32
wl_cfg80211_join_ibss(struct wiphy *wiphy, struct net_device *dev,
struct cfg80211_ibss_params *params)
@@ -2677,7 +3225,7 @@
WL_TRACE(("In\n"));
RETURN_EIO_IF_NOT_UP(cfg);
- WL_INFO(("JOIN BSSID:" MACDBG "\n", MAC2STRDBG(params->bssid)));
+ WL_INFORM(("JOIN BSSID:" MACDBG "\n", MAC2STRDBG(params->bssid)));
if (!params->ssid || params->ssid_len <= 0) {
WL_ERR(("Invalid parameter\n"));
return -EINVAL;
@@ -2845,6 +3393,43 @@
return err;
}
+#ifdef MFP
+static int wl_cfg80211_get_rsn_capa(bcm_tlv_t *wpa2ie, u8* capa)
+{
+ u16 suite_count;
+ wpa_suite_mcast_t *mcast;
+ wpa_suite_ucast_t *ucast;
+ s16 len; /* signed so the decrementing length checks below can detect underflow */
+ wpa_suite_auth_key_mgmt_t *mgmt;
+
+ if (!wpa2ie)
+ return -1;
+
+ len = wpa2ie->len;
+ mcast = (wpa_suite_mcast_t *)&wpa2ie->data[WPA2_VERSION_LEN];
+ if ((len -= WPA_SUITE_LEN) <= 0)
+ return BCME_BADLEN;
+ ucast = (wpa_suite_ucast_t *)&mcast[1];
+ suite_count = ltoh16_ua(&ucast->count);
+ if ((suite_count > NL80211_MAX_NR_CIPHER_SUITES) ||
+ (len -= (WPA_IE_SUITE_COUNT_LEN +
+ (WPA_SUITE_LEN * suite_count))) <= 0)
+ return BCME_BADLEN;
+
+ mgmt = (wpa_suite_auth_key_mgmt_t *)&ucast->list[suite_count];
+ suite_count = ltoh16_ua(&mgmt->count);
+
+ if ((suite_count > NL80211_MAX_NR_CIPHER_SUITES) ||
+ (len -= (WPA_IE_SUITE_COUNT_LEN +
+ (WPA_SUITE_LEN * suite_count))) < RSN_CAP_LEN)
+ return BCME_BADLEN;
+
+ capa[0] = *(u8 *)&mgmt->list[suite_count];
+ capa[1] = *((u8 *)&mgmt->list[suite_count] + 1);
+
+ return 0;
+}
+#endif /* MFP */
static s32
@@ -2935,6 +3520,11 @@
s32 gval = 0;
s32 err = 0;
s32 wsec_val = 0;
+#ifdef MFP
+ s32 mfp = 0;
+ bcm_tlv_t *wpa2_ie;
+ u8 rsn_cap[2];
+#endif /* MFP */
s32 bssidx;
if (wl_cfgp2p_find_idx(cfg, dev, &bssidx) != BCME_OK) {
@@ -2994,6 +3584,41 @@
} else {
WL_DBG((" NO, is_wps_conn, Set pval | gval to WSEC"));
wsec_val = pval | gval;
+#ifdef MFP
+ if (pval == AES_ENABLED) {
+ if (((wpa2_ie = bcm_parse_tlvs((u8 *)sme->ie, sme->ie_len,
+ DOT11_MNG_RSN_ID)) != NULL) &&
+ (wl_cfg80211_get_rsn_capa(wpa2_ie, rsn_cap) == 0)) {
+
+ if (rsn_cap[0] & RSN_CAP_MFPC) {
+ /* MFP Capability advertised by supplicant. Check
+ * whether MFP is supported in the firmware
+ */
+ if ((err = wldev_iovar_getint_bsscfg(dev,
+ "mfp", &mfp, bssidx)) < 0) {
+ WL_ERR(("Get MFP failed! "
+ "Check MFP support in FW \n"));
+ return -1;
+ }
+
+ if ((sme->crypto.n_akm_suites == 1) &&
+ ((sme->crypto.akm_suites[0] ==
+ WL_AKM_SUITE_MFP_PSK) ||
+ (sme->crypto.akm_suites[0] ==
+ WL_AKM_SUITE_MFP_1X))) {
+ wsec_val |= MFP_SHA256;
+ } else if (sme->crypto.n_akm_suites > 1) {
+ WL_ERR(("Multiple AKM Specified \n"));
+ return -EINVAL;
+ }
+
+ wsec_val |= MFP_CAPABLE;
+ if (rsn_cap[0] & RSN_CAP_MFPR)
+ wsec_val |= MFP_REQUIRED;
+ }
+ }
+ }
+#endif /* MFP */
WL_DBG((" Set WSEC to fW 0x%x \n", wsec_val));
err = wldev_iovar_setint_bsscfg(dev, "wsec",
@@ -3050,6 +3675,14 @@
case WLAN_AKM_SUITE_8021X:
val = WPA2_AUTH_UNSPECIFIED;
break;
+#ifdef MFP
+ case WL_AKM_SUITE_MFP_1X:
+ val = WPA2_AUTH_UNSPECIFIED;
+ break;
+ case WL_AKM_SUITE_MFP_PSK:
+ val = WPA2_AUTH_PSK;
+ break;
+#endif
case WLAN_AKM_SUITE_PSK:
val = WPA2_AUTH_PSK;
break;
@@ -3173,24 +3806,28 @@
{
struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
struct ieee80211_channel *chan = sme->channel;
- wl_extjoin_params_t *ext_join_params;
struct wl_join_params join_params;
+ struct ether_addr bssid;
+ wl_extjoin_params_t *ext_join_params;
size_t join_params_size;
#if defined(ROAM_ENABLE) && defined(ROAM_AP_ENV_DETECTION)
dhd_pub_t *dhd = (dhd_pub_t *)(cfg->pub);
s32 roam_trigger[2] = {0, 0};
#endif /* ROAM_AP_ENV_DETECTION */
+ u8* wpaie = 0;
+ u8 chan_info[CHAN_INFO_LEN] = {0}, *chan_ptr;
+ u32 wpaie_len = 0;
+ u32 timeout;
+ u32 chan_cnt = 0, i, w_count = 0;
+ s32 wait_cnt;
+ s32 bssidx;
s32 err = 0;
+#ifdef ROAM_CHANNEL_CACHE
+ chanspec_t chanspec_list[MAX_ROAM_CACHE_NUM];
+#endif /* ROAM_CHANNEL_CACHE */
wpa_ie_fixed_t *wpa_ie;
bcm_tlv_t *wpa2_ie;
- u8* wpaie = 0;
- u32 wpaie_len = 0;
- u32 chan_cnt = 0;
- struct ether_addr bssid;
- s32 bssidx;
- int ret;
- int wait_cnt;
-
+ bool use_chan_cache = FALSE;
WL_DBG(("In\n"));
if (unlikely(!sme->ssid)) {
@@ -3206,14 +3843,13 @@
RETURN_EIO_IF_NOT_UP(cfg);
+ chan_ptr = chan_info;
/*
* Cancel ongoing scan to sync up with sme state machine of cfg80211.
*/
-#if !defined(ESCAN_RESULT_PATCH)
if (cfg->scan_request) {
wl_notify_escan_complete(cfg, dev, true, true);
}
-#endif
#ifdef WL_SCHED_SCAN
if (cfg->sched_scan_req) {
wl_cfg80211_sched_scan_stop(wiphy, bcmcfg_to_prmry_ndev(cfg));
@@ -3231,7 +3867,7 @@
#endif
bzero(&bssid, sizeof(bssid));
if (!wl_get_drv_status(cfg, CONNECTED, dev)&&
- (ret = wldev_ioctl(dev, WLC_GET_BSSID, &bssid, ETHER_ADDR_LEN, false)) == 0) {
+ (err = wldev_ioctl(dev, WLC_GET_BSSID, &bssid, ETHER_ADDR_LEN, false)) == 0) {
if (!ETHER_ISNULLADDR(&bssid)) {
scb_val_t scbval;
wl_set_drv_status(cfg, DISCONNECTING, dev);
@@ -3257,14 +3893,14 @@
}
} else
WL_DBG(("Currently not associated!\n"));
- } else {
- /* if status is DISCONNECTING, wait for disconnection terminated max 500 ms */
- wait_cnt = 500/10;
- while (wl_get_drv_status(cfg, DISCONNECTING, dev) && wait_cnt) {
- WL_DBG(("Waiting for disconnection terminated, wait_cnt: %d\n", wait_cnt));
- wait_cnt--;
- OSL_SLEEP(10);
+ } else if (wl_get_drv_status(cfg, DISCONNECTING, dev)) {
+ timeout = wait_event_interruptible_timeout(cfg->event_sync_wq,
+ !wl_get_drv_status(cfg, DISCONNECTING, dev),
+ msecs_to_jiffies(MAX_WAIT_TIME/3));
+ if (timeout <= 0 || wl_get_drv_status(cfg, DISCONNECTING, dev)) {
+ WL_ERR(("timed out waiting for disconnect event\n"));
}
+ wl_clr_drv_status(cfg, DISCONNECTING, dev);
}
/* Clean BSSID */
@@ -3325,17 +3961,34 @@
if (unlikely(err)) {
WL_ERR((" failed to restore roam_trigger for auto env"
" detection\n"));
+ }
}
}
- }
#endif /* ROAM_ENABLE && ROAM_AP_ENV_DETECTION */
if (chan) {
cfg->channel = ieee80211_frequency_to_channel(chan->center_freq);
chan_cnt = 1;
WL_DBG(("channel (%d), center_req (%d), %d channels\n", cfg->channel,
chan->center_freq, chan_cnt));
- } else
+ } else {
+#ifdef ROAM_CHANNEL_CACHE
+ wlc_ssid_t ssid;
+ int band;
+ use_chan_cache = TRUE;
+ err = wldev_get_band(dev, &band);
+ if (!err) {
+ set_roam_band(band);
+ }
+
cfg->channel = 0;
+ memcpy(ssid.SSID, sme->ssid, sme->ssid_len);
+ ssid.SSID_len = sme->ssid_len;
+ chan_cnt = get_roam_channel_list(cfg->channel, chanspec_list, &ssid, ioctl_version);
+#else
+ cfg->channel = 0;
+#endif /* ROAM_CHANNEL_CACHE */
+
+ }
WL_DBG(("ie (%p), ie_len (%zd)\n", sme->ie, sme->ie_len));
WL_DBG(("3. set wapi version \n"));
err = wl_set_wpa_version(dev, sme);
@@ -3400,22 +4053,37 @@
memcpy(&ext_join_params->assoc.bssid, ðer_bcast, ETH_ALEN);
ext_join_params->assoc.chanspec_num = chan_cnt;
if (chan_cnt) {
- u16 channel, band, bw, ctl_sb;
- chanspec_t chspec;
- channel = cfg->channel;
- band = (channel <= CH_MAX_2G_CHANNEL) ? WL_CHANSPEC_BAND_2G
- : WL_CHANSPEC_BAND_5G;
- bw = WL_CHANSPEC_BW_20;
- ctl_sb = WL_CHANSPEC_CTL_SB_NONE;
- chspec = (channel | band | bw | ctl_sb);
- ext_join_params->assoc.chanspec_list[0] &= WL_CHANSPEC_CHAN_MASK;
- ext_join_params->assoc.chanspec_list[0] |= chspec;
- ext_join_params->assoc.chanspec_list[0] =
- wl_chspec_host_to_driver(ext_join_params->assoc.chanspec_list[0]);
+ if (use_chan_cache) {
+ memcpy(ext_join_params->assoc.chanspec_list, chanspec_list,
+ sizeof(chanspec_t) * chan_cnt);
+ for (i = 0; i < chan_cnt; i++) {
+ /* snprintf() returns the would-be length, so stop once the buffer is full */
+ if (w_count >= sizeof(chan_info))
+ break;
+ w_count += snprintf(chan_ptr + w_count, sizeof(chan_info) - w_count, "%d",
+ wf_chspec_ctlchan(chanspec_list[i]));
+ if ((i != chan_cnt - 1) && (w_count < sizeof(chan_info))) {
+ w_count += snprintf(chan_ptr + w_count, sizeof(chan_info) - w_count, ", ");
+ }
+ }
+ } else {
+ u16 channel, band, bw, ctl_sb;
+ chanspec_t chspec;
+ channel = cfg->channel;
+ band = (channel <= CH_MAX_2G_CHANNEL) ? WL_CHANSPEC_BAND_2G
+ : WL_CHANSPEC_BAND_5G;
+ bw = WL_CHANSPEC_BW_20;
+ ctl_sb = WL_CHANSPEC_CTL_SB_NONE;
+ chspec = (channel | band | bw | ctl_sb);
+ ext_join_params->assoc.chanspec_list[0] &= WL_CHANSPEC_CHAN_MASK;
+ ext_join_params->assoc.chanspec_list[0] |= chspec;
+ ext_join_params->assoc.chanspec_list[0] =
+ wl_chspec_host_to_driver(ext_join_params->assoc.chanspec_list[0]);
+ snprintf(chan_ptr, sizeof(chan_info), "%d", channel);
+ }
+ } else {
+ snprintf(chan_ptr, sizeof(chan_info), "0");
}
ext_join_params->assoc.chanspec_num = htod32(ext_join_params->assoc.chanspec_num);
if (ext_join_params->ssid.SSID_len < IEEE80211_MAX_SSID_LEN) {
- WL_INFO(("ssid \"%s\", len (%d)\n", ext_join_params->ssid.SSID,
+ WL_INFORM(("ssid \"%s\", len (%d)\n", ext_join_params->ssid.SSID,
ext_join_params->ssid.SSID_len));
}
wl_set_drv_status(cfg, CONNECTING, dev);
@@ -3427,9 +4095,9 @@
err = wldev_iovar_setbuf_bsscfg(dev, "join", ext_join_params, join_params_size,
cfg->ioctl_buf, WLC_IOCTL_MAXLEN, bssidx, &cfg->ioctl_buf_sync);
- WL_ERR(("Connectting with" MACDBG " channel (%d) ssid \"%s\", len (%d)\n\n",
- MAC2STRDBG((u8*)(&ext_join_params->assoc.bssid)), cfg->channel,
- ext_join_params->ssid.SSID, ext_join_params->ssid.SSID_len));
+ WL_ERR(("Connecting to " MACDBG " with channel (%s) ssid %s\n",
+ MAC2STRDBG((u8*)(&ext_join_params->assoc.bssid)),
+ chan_info, ext_join_params->ssid.SSID));
kfree(ext_join_params);
if (err) {
@@ -3461,7 +4129,7 @@
WL_DBG(("join_param_size %zu\n", join_params_size));
if (join_params.ssid.SSID_len < IEEE80211_MAX_SSID_LEN) {
- WL_INFO(("ssid \"%s\", len (%d)\n", join_params.ssid.SSID,
+ WL_INFORM(("ssid \"%s\", len (%d)\n", join_params.ssid.SSID,
join_params.ssid.SSID_len));
}
wl_set_drv_status(cfg, CONNECTING, dev);
@@ -3490,26 +4158,32 @@
RETURN_EIO_IF_NOT_UP(cfg);
act = *(bool *) wl_read_prof(cfg, dev, WL_PROF_ACT);
curbssid = wl_read_prof(cfg, dev, WL_PROF_BSSID);
- if (act) {
+
+ if (act || wl_get_drv_status(cfg, CONNECTING, dev)) {
/*
* Cancel ongoing scan to sync up with sme state machine of cfg80211.
*/
-#if !defined(ESCAN_RESULT_PATCH)
/* Let scan aborted by F/W */
if (cfg->scan_request) {
wl_notify_escan_complete(cfg, dev, true, true);
}
-#endif /* ESCAN_RESULT_PATCH */
wl_set_drv_status(cfg, DISCONNECTING, dev);
- scbval.val = reason_code;
- memcpy(&scbval.ea, curbssid, ETHER_ADDR_LEN);
- scbval.val = htod32(scbval.val);
- err = wldev_ioctl(dev, WLC_DISASSOC, &scbval,
- sizeof(scb_val_t), true);
- if (unlikely(err)) {
- wl_clr_drv_status(cfg, DISCONNECTING, dev);
- WL_ERR(("error (%d)\n", err));
- return err;
+ if (wl_get_drv_status(cfg, CONNECTING, dev)) {
+ /* in case of associating status, this will abort assoc procedure */
+ wl_notify_escan_complete(cfg, dev, false, true);
+ /* send pseudo connection failure event */
+ wl_send_event(dev, WLC_E_SET_SSID, WLC_E_STATUS_ABORT, 0);
+ } else {
+ scbval.val = reason_code;
+ memcpy(&scbval.ea, curbssid, ETHER_ADDR_LEN);
+ scbval.val = htod32(scbval.val);
+ err = wldev_ioctl(dev, WLC_DISASSOC, &scbval,
+ sizeof(scb_val_t), true);
+ if (unlikely(err)) {
+ wl_clr_drv_status(cfg, DISCONNECTING, dev);
+ WL_ERR(("error (%d)\n", err));
+ return err;
+ }
}
}
#ifdef CUSTOM_SET_CPUCORE
@@ -3711,8 +4385,7 @@
}
swap_key_from_BE(&key);
/* need to guarantee EAPOL 4/4 send out before set key */
- if (mode != WL_MODE_AP)
- dhd_wait_pend8021x(dev);
+ dhd_wait_pend8021x(dev);
err = wldev_iovar_setbuf_bsscfg(dev, "wsec_key", &key, sizeof(key),
cfg->ioctl_buf, WLC_IOCTL_MAXLEN, bssidx, &cfg->ioctl_buf_sync);
if (unlikely(err)) {
@@ -3914,6 +4587,10 @@
s32 wsec;
s32 err = 0;
s32 bssidx;
+ union {
+ int32 index;
+ uint8 tsc[DOT11_WPA_KEY_RSC_LEN];
+ } u;
if (wl_cfgp2p_find_idx(cfg, dev, &bssidx) != BCME_OK) {
WL_ERR(("Find p2p index from dev(%p) failed\n", dev));
return BCME_ERROR;
@@ -3922,11 +4599,21 @@
RETURN_EIO_IF_NOT_UP(cfg);
memset(&key, 0, sizeof(key));
key.index = key_idx;
+ swap_key_from_BE(&key);
+ if ((err = wldev_ioctl(dev, WLC_GET_KEY, &key, sizeof(key), false))) {
+ return err;
+ }
swap_key_to_BE(&key);
memset(¶ms, 0, sizeof(params));
params.key_len = (u8) min_t(u8, DOT11_MAX_KEY_SIZE, key.len);
- memcpy(params.key, key.data, params.key_len);
+ params.key = key.data;
+ u.index = key.index;
+ if ((err = wldev_ioctl(dev, WLC_GET_KEY_SEQ, &u, sizeof(u), false))) {
+ return err;
+ }
+ params.seq = u.tsc;
+ params.seq_len = DOT11_WPA_KEY_RSC_LEN;
err = wldev_iovar_getint_bsscfg(dev, "wsec", &wsec, bssidx);
if (unlikely(err)) {
WL_ERR(("WLC_GET_WSEC error (%d)\n", err));
@@ -3964,12 +4651,15 @@
wl_cfg80211_config_default_mgmt_key(struct wiphy *wiphy,
struct net_device *dev, u8 key_idx)
{
- WL_INFO(("Not supported\n"));
+ WL_INFORM(("Not supported\n"));
return -EOPNOTSUPP;
}
static s32
wl_cfg80211_get_station(struct wiphy *wiphy, struct net_device *dev,
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 16, 0))
+ const
+#endif
u8 *mac, struct station_info *sinfo)
{
struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
@@ -4003,7 +4693,7 @@
sinfo->filled |= STATION_INFO_CONNECTED_TIME;
sinfo->connected_time = sta->in;
}
- WL_INFO(("STA %s : idle time : %d sec, connected time :%d ms\n",
+ WL_INFORM(("STA %s : idle time : %d sec, connected time :%d ms\n",
bcm_ether_ntoa((const struct ether_addr *)mac, eabuf), sinfo->inactive_time,
sta->idle * 1000));
#endif
@@ -4195,7 +4885,7 @@
s32 err = 0;
if (unlikely(!wl_get_drv_status(cfg, READY, ndev))) {
- WL_INFO(("device is not ready\n"));
+ WL_INFORM(("device is not ready\n"));
return 0;
}
@@ -4214,7 +4904,7 @@
struct net_device *ndev = bcmcfg_to_prmry_ndev(cfg);
unsigned long flags;
if (unlikely(!wl_get_drv_status(cfg, READY, ndev))) {
- WL_INFO(("device is not ready : status (%d)\n",
+ WL_INFORM(("device is not ready : status (%d)\n",
(int)cfg->status));
return 0;
}
@@ -4255,7 +4945,7 @@
* Refer code wlc_bsscfg.c->wlc_bsscfg_sta_init
*/
if (primary_dev != dev) {
- WL_INFO(("Not supporting Flushing pmklist on virtual"
+ WL_INFORM(("Not supporting Flushing pmklist on virtual"
" interfaces than primary interface\n"));
return err;
}
@@ -4531,7 +5221,7 @@
exit:
if (err == BCME_OK) {
- WL_INFO(("Success\n"));
+ WL_INFORM(("Success\n"));
#if defined(WL_CFG80211_P2P_DEV_IF)
cfg80211_ready_on_channel(cfgdev, *cookie, channel,
duration, GFP_KERNEL);
@@ -4780,6 +5470,7 @@
struct wl_bss_info *bi = NULL;
bool result = false;
s32 i;
+ chanspec_t chanspec;
/* If DFS channel is 52~148, check to block it or not */
if (af_params &&
@@ -4788,8 +5479,9 @@
bss_list = cfg->bss_list;
bi = next_bss(bss_list, bi);
for_each_bss(bss_list, bi, i) {
- if (CHSPEC_IS5G(bi->chanspec) &&
- ((bi->ctl_ch ? bi->ctl_ch : CHSPEC_CHANNEL(bi->chanspec))
+ chanspec = wl_chspec_driver_to_host(bi->chanspec);
+ if (CHSPEC_IS5G(chanspec) &&
+ ((bi->ctl_ch ? bi->ctl_ch : CHSPEC_CHANNEL(chanspec))
== af_params->channel)) {
result = true; /* do not block the action frame */
break;
@@ -4824,6 +5516,14 @@
ulong off_chan_started_jiffies = 0;
#endif
dhd_pub_t *dhd = (dhd_pub_t *)(cfg->pub);
+
+
+ /* Add the default dwell time
+ * Dwell time to stay off-channel to wait for a response action frame
+ * after transmitting a GO Negotiation action frame
+ */
+ af_params->dwell_time = WL_DWELL_TIME;
+
#ifdef WL11U
#if defined(WL_CFG80211_P2P_DEV_IF)
ndev = dev;
@@ -5038,7 +5738,7 @@
if (cfg->afx_hdl->pending_tx_act_frm)
cfg->afx_hdl->pending_tx_act_frm = NULL;
- WL_INFO(("-- sending Action Frame is %s, listen chan: %d\n",
+ WL_INFORM(("-- sending Action Frame is %s, listen chan: %d\n",
(ack) ? "Succeeded!!":"Failed!!", cfg->afx_hdl->my_listen_chan));
@@ -5051,19 +5751,19 @@
}
#define MAX_NUM_OF_ASSOCIATED_DEV 64
-#if defined(WL_CFG80211_P2P_DEV_IF)
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 14, 0))
static s32
wl_cfg80211_mgmt_tx(struct wiphy *wiphy, bcm_struct_cfgdev *cfgdev,
- struct ieee80211_channel *channel, bool offchan,
- unsigned int wait, const u8* buf, size_t len, bool no_cck,
- bool dont_wait_for_ack, u64 *cookie)
+ struct cfg80211_mgmt_tx_params *params, u64 *cookie)
#else
static s32
wl_cfg80211_mgmt_tx(struct wiphy *wiphy, bcm_struct_cfgdev *cfgdev,
struct ieee80211_channel *channel, bool offchan,
+#if (LINUX_VERSION_CODE <= KERNEL_VERSION(3, 7, 0))
enum nl80211_channel_type channel_type,
- bool channel_type_valid, unsigned int wait,
- const u8* buf, size_t len,
+ bool channel_type_valid,
+#endif /* LINUX_VERSION_CODE <= KERNEL_VERSION(3, 7, 0) */
+ unsigned int wait, const u8* buf, size_t len,
#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 2, 0))
bool no_cck,
#endif
@@ -5071,11 +5771,16 @@
bool dont_wait_for_ack,
#endif
u64 *cookie)
-#endif /* WL_CFG80211_P2P_DEV_IF */
+#endif /* LINUX_VERSION_CODE >= KERNEL_VERSION(3, 14, 0) */
{
wl_action_frame_t *action_frame;
wl_af_params_t *af_params;
scb_val_t scb_val;
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 14, 0))
+ struct ieee80211_channel *channel = params->chan;
+ const u8 *buf = params->buf;
+ size_t len = params->len;
+#endif
const struct ieee80211_mgmt *mgmt;
struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
struct net_device *dev = NULL;
@@ -5121,7 +5826,7 @@
if (ieee80211_is_probe_resp(mgmt->frame_control)) {
s32 ie_offset = DOT11_MGMT_HDR_LEN + DOT11_BCN_PRB_FIXED_LEN;
s32 ie_len = len - ie_offset;
- if (dev == bcmcfg_to_prmry_ndev(cfg))
+ if ((dev == bcmcfg_to_prmry_ndev(cfg)) && cfg->p2p)
bssidx = wl_to_p2p_bss_bssidx(cfg, P2PAPI_BSSCFG_DEVICE);
wl_cfgp2p_set_management_ie(cfg, dev, bssidx,
VNDR_IE_PRBRSP_FLAG, (u8 *)(buf + ie_offset), ie_len);
@@ -5204,16 +5909,15 @@
/* Add the channel */
af_params->channel =
ieee80211_frequency_to_channel(channel->center_freq);
-
/* Save listen_chan for searching common channel */
cfg->afx_hdl->peer_listen_chan = af_params->channel;
WL_DBG(("channel from upper layer %d\n", cfg->afx_hdl->peer_listen_chan));
- /* Add the default dwell time
- * Dwell time to stay off-channel to wait for a response action frame
- * after transmitting an GO Negotiation action frame
- */
- af_params->dwell_time = WL_DWELL_TIME;
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 14, 0))
+ af_params->dwell_time = params->wait;
+#else
+ af_params->dwell_time = wait;
+#endif
memcpy(action_frame->data, &buf[DOT11_MGMT_HDR_LEN], action_frame->len);
@@ -5376,7 +6080,7 @@
dhd->chan_isvht80 |= DHD_FLAG_P2P_MODE;
dhd_set_cpucore(dhd, TRUE);
}
-#endif
+#endif /* CUSTOM_SET_CPUCORE */
return err;
}
@@ -5576,7 +6280,7 @@
len -= WPA_IE_TAG_FIXED_LEN;
/* check for multicast cipher suite */
if (len < WPA_SUITE_LEN) {
- WL_INFO(("no multicast cipher suite\n"));
+ WL_INFORM(("no multicast cipher suite\n"));
goto exit;
}
@@ -5608,7 +6312,7 @@
}
/* Check for unicast suite(s) */
if (len < WPA_IE_SUITE_COUNT_LEN) {
- WL_INFO(("no unicast suite\n"));
+ WL_INFORM(("no unicast suite\n"));
goto exit;
}
/* walk thru unicast cipher list and pick up what we recognize */
@@ -5644,7 +6348,7 @@
len -= (count - i) * WPA_SUITE_LEN;
/* Check for auth key management suite(s) */
if (len < WPA_IE_SUITE_COUNT_LEN) {
- WL_INFO((" no auth key mgmt suite\n"));
+ WL_INFORM((" no auth key mgmt suite\n"));
goto exit;
}
/* walk thru auth management suite list and pick up what we recognize */
@@ -5752,7 +6456,6 @@
ies->wpa2_ie->len + WPA_RSN_IE_TAG_FIXED_LEN,
GFP_KERNEL);
}
-
if (!ies->wpa2_ie && !ies->wpa_ie) {
wl_validate_opensecurity(dev, bssidx);
cfg->ap_info->security_mode = false;
@@ -5819,7 +6522,7 @@
return err;
}
-#endif
+#endif
static s32
wl_cfg80211_parse_ies(u8 *ptr, u32 len, struct parsed_ies *ies)
@@ -5865,6 +6568,12 @@
s32 infra = 1;
s32 join_params_size = 0;
s32 ap = 1;
+#ifdef DISABLE_11H_SOFTAP
+ s32 spect = 0;
+#endif /* DISABLE_11H_SOFTAP */
+#ifdef MAX_GO_CLIENT_CNT
+ s32 bss_maxassoc = MAX_GO_CLIENT_CNT;
+#endif
s32 err = BCME_OK;
WL_DBG(("Enter dev_role: %d\n", dev_role));
@@ -5897,6 +6606,13 @@
WL_ERR(("GO Bring up error %d\n", err));
goto exit;
}
+#ifdef MAX_GO_CLIENT_CNT
+ err = wldev_iovar_setint_bsscfg(dev, "bss_maxassoc", bss_maxassoc, bssidx);
+ if (unlikely(err)) {
+ WL_ERR(("bss_maxassoc error (%d)\n", err));
+ goto exit;
+ }
+#endif
} else
WL_DBG(("Bss is already up\n"));
} else if ((dev_role == NL80211_IFTYPE_AP) &&
@@ -5916,6 +6632,13 @@
WL_ERR(("setting AP mode failed %d \n", err));
goto exit;
}
+#ifdef DISABLE_11H_SOFTAP
+ err = wldev_ioctl(dev, WLC_SET_SPECT_MANAGMENT, &spect, sizeof(s32), true);
+ if (err < 0) {
+ WL_ERR(("SET SPECT_MANAGMENT error %d\n", err));
+ goto exit;
+ }
+#endif /* DISABLE_11H_SOFTAP */
err = wldev_ioctl(dev, WLC_UP, &ap, sizeof(s32), true);
if (unlikely(err)) {
@@ -6036,7 +6759,7 @@
return err;
}
-#endif
+#endif
static s32 wl_cfg80211_hostapd_sec(
struct net_device *dev,
@@ -6113,7 +6836,10 @@
wl_cfg80211_del_station(
struct wiphy *wiphy,
struct net_device *ndev,
- u8* mac_addr)
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 16, 0))
+ const
+#endif
+ u8 *mac_addr)
{
struct net_device *dev;
struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
@@ -6171,25 +6897,32 @@
wl_cfg80211_change_station(
struct wiphy *wiphy,
struct net_device *dev,
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 16, 0))
+ const
+#endif
u8 *mac,
struct station_parameters *params)
{
int err;
- struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
- struct net_device *primary_ndev = bcmcfg_to_prmry_ndev(cfg);
+
+ WL_DBG(("SCB_AUTHORIZE mac_addr:"MACDBG" sta_flags_mask:0x%x "
+ "sta_flags_set:0x%x iface:%s \n", MAC2STRDBG(mac),
+ params->sta_flags_mask, params->sta_flags_set, dev->name));
/* Processing only authorize/de-authorize flag for now */
- if (!(params->sta_flags_mask & BIT(NL80211_STA_FLAG_AUTHORIZED)))
+ if (!(params->sta_flags_mask & BIT(NL80211_STA_FLAG_AUTHORIZED))) {
+ WL_ERR(("WLC_SCB_AUTHORIZE sta_flags_mask not set \n"));
return -ENOTSUPP;
+ }
if (!(params->sta_flags_set & BIT(NL80211_STA_FLAG_AUTHORIZED))) {
- err = wldev_ioctl(primary_ndev, WLC_SCB_DEAUTHORIZE, mac, ETH_ALEN, true);
+ err = wldev_ioctl(dev, WLC_SCB_DEAUTHORIZE, (void *)mac, ETH_ALEN, true);
if (err)
WL_ERR(("WLC_SCB_DEAUTHORIZE error (%d)\n", err));
return err;
}
- err = wldev_ioctl(primary_ndev, WLC_SCB_AUTHORIZE, mac, ETH_ALEN, true);
+ err = wldev_ioctl(dev, WLC_SCB_AUTHORIZE, (void *)mac, ETH_ALEN, true);
if (err)
WL_ERR(("WLC_SCB_AUTHORIZE error (%d)\n", err));
return err;
@@ -6213,6 +6946,15 @@
if (dev == bcmcfg_to_prmry_ndev(cfg)) {
WL_DBG(("Start AP req on primary iface: Softap\n"));
dev_role = NL80211_IFTYPE_AP;
+ if (!cfg->ap_info) {
+ if ((cfg->ap_info = kzalloc(sizeof(struct ap_info), GFP_KERNEL))) {
+ WL_ERR(("%s: struct ap_info re-allocated\n", __FUNCTION__));
+ } else {
+ WL_ERR(("%s: struct ap_info re-allocation failed\n", __FUNCTION__));
+ err = -ENOMEM;
+ goto fail;
+ }
+ }
}
#if defined(WL_ENABLE_P2P_IF)
else if (dev == cfg->p2p_net) {
@@ -6243,7 +6985,7 @@
WL_ERR(("Set channel failed \n"));
goto fail;
}
-#endif
+#endif
if ((err = wl_cfg80211_bcn_set_params(info, dev,
dev_role, bssidx)) < 0) {
@@ -6272,10 +7014,27 @@
WL_DBG(("** AP/GO Created **\n"));
+#ifdef WL_CFG80211_ACL
+ /* Enforce Admission Control. */
+ if ((err = wl_cfg80211_set_mac_acl(wiphy, dev, info->acl)) < 0) {
+ WL_ERR(("Set ACL failed\n"));
+ }
+#endif /* WL_CFG80211_ACL */
+
/* Set IEs to FW */
if ((err = wl_cfg80211_set_ies(dev, &info->beacon, bssidx)) < 0)
WL_ERR(("Set IEs failed \n"));
+ /* Enable Probe Req filter, WPS-AP certification 4.2.13 */
+ if ((dev_role == NL80211_IFTYPE_AP) && (ies.wps_ie != NULL)) {
+ bool pbc = 0;
+ wl_validate_wps_ie((char *) ies.wps_ie, ies.wps_ie_len, &pbc);
+ if (pbc) {
+ WL_DBG(("set WLC_E_PROBREQ_MSG\n"));
+ wl_add_remove_eventmsg(dev, WLC_E_PROBREQ_MSG, true);
+ }
+ }
+
fail:
if (err) {
WL_ERR(("ADD/SET beacon failed\n"));
@@ -6376,6 +7135,7 @@
struct parsed_ies ies;
u32 dev_role = 0;
s32 bssidx = 0;
+ bool pbc = 0;
WL_DBG(("Enter \n"));
@@ -6402,6 +7162,12 @@
if (!check_dev_role_integrity(cfg, dev_role))
goto fail;
+ if ((dev_role == NL80211_IFTYPE_P2P_GO) && (cfg->p2p_wdev == NULL)) {
+ WL_ERR(("P2P already down status!\n"));
+ err = BCME_ERROR;
+ goto fail;
+ }
+
/* Parse IEs */
if ((err = wl_cfg80211_parse_ap_ies(dev, info, &ies)) < 0) {
WL_ERR(("Parse IEs failed \n"));
@@ -6420,6 +7186,15 @@
err = -EINVAL;
goto fail;
}
+ /* Enable Probe Req filter, WPS-AP certification 4.2.13 */
+ if ((dev_role == NL80211_IFTYPE_AP) && (ies.wps_ie != NULL)) {
+ wl_validate_wps_ie((char *) ies.wps_ie, ies.wps_ie_len, &pbc);
+ WL_DBG((" WPS AP, wps_ie exists, pbc=%d\n", pbc));
+ if (pbc)
+ wl_add_remove_eventmsg(dev, WLC_E_PROBREQ_MSG, true);
+ else
+ wl_add_remove_eventmsg(dev, WLC_E_PROBREQ_MSG, false);
+ }
}
fail:
@@ -6464,6 +7239,12 @@
if (!check_dev_role_integrity(cfg, dev_role))
goto fail;
+ if ((dev_role == NL80211_IFTYPE_P2P_GO) && (cfg->p2p_wdev == NULL)) {
+ WL_ERR(("P2P already down status!\n"));
+ err = BCME_ERROR;
+ goto fail;
+ }
+
ie_offset = DOT11_MGMT_HDR_LEN + DOT11_BCN_PRB_FIXED_LEN;
/* find the SSID */
if ((ssid_ie = bcm_parse_tlvs((u8 *)&info->head[ie_offset],
@@ -6497,6 +7278,18 @@
} else {
WL_DBG(("Applied Vndr IEs for Beacon \n"));
}
+
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 2, 0))
+ if (wl_cfgp2p_set_management_ie(cfg, dev, bssidx,
+ VNDR_IE_PRBRSP_FLAG, (u8 *)info->proberesp_ies,
+ info->proberesp_ies_len) < 0) {
+ WL_ERR(("ProbeRsp set IEs failed \n"));
+ goto fail;
+ } else {
+ WL_DBG(("Applied Vndr IEs for ProbeRsp \n"));
+ }
+#endif
+
if (!wl_cfgp2p_bss_isup(dev, bssidx) &&
(wl_cfg80211_bcn_validate_sec(dev, &ies, dev_role, bssidx) < 0))
{
@@ -6552,12 +7345,29 @@
return err;
}
-#endif
+#endif
#ifdef WL_SCHED_SCAN
#define PNO_TIME 30
#define PNO_REPEAT 4
#define PNO_FREQ_EXPO_MAX 2
+static bool
+is_ssid_in_list(struct cfg80211_ssid *ssid, struct cfg80211_ssid *ssid_list, int count)
+{
+ int i;
+
+ if (!ssid || !ssid_list)
+ return FALSE;
+
+ for (i = 0; i < count; i++) {
+ if (ssid->ssid_len == ssid_list[i].ssid_len) {
+ if (strncmp(ssid->ssid, ssid_list[i].ssid, ssid->ssid_len) == 0)
+ return TRUE;
+ }
+ }
+ return FALSE;
+}
+
static int
wl_cfg80211_sched_scan_start(struct wiphy *wiphy,
struct net_device *dev,
@@ -6566,15 +7376,16 @@
ushort pno_time = PNO_TIME;
int pno_repeat = PNO_REPEAT;
int pno_freq_expo_max = PNO_FREQ_EXPO_MAX;
- wlc_ssid_t ssids_local[MAX_PFN_LIST_COUNT];
+ wlc_ssid_ext_t ssids_local[MAX_PFN_LIST_COUNT];
struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
struct cfg80211_ssid *ssid = NULL;
- int ssid_count = 0;
+ struct cfg80211_ssid *hidden_ssid_list = NULL;
+ int ssid_cnt = 0;
int i;
int ret = 0;
WL_DBG(("Enter \n"));
- WL_PNO((">>> SCHED SCAN START\n"));
+ WL_ERR((">>> SCHED SCAN START\n"));
WL_PNO(("Enter n_match_sets:%d n_ssids:%d \n",
request->n_match_sets, request->n_ssids));
WL_PNO(("ssids:%d pno_time:%d pno_repeat:%d pno_freq:%d \n",
@@ -6588,30 +7399,33 @@
memset(&ssids_local, 0, sizeof(ssids_local));
- if (request->n_match_sets > 0) {
- for (i = 0; i < request->n_match_sets; i++) {
- ssid = &request->match_sets[i].ssid;
- memcpy(ssids_local[i].SSID, ssid->ssid, ssid->ssid_len);
- ssids_local[i].SSID_len = ssid->ssid_len;
- WL_PNO((">>> PNO filter set for ssid (%s) \n", ssid->ssid));
- ssid_count++;
+ if (request->n_ssids > 0)
+ hidden_ssid_list = request->ssids;
+
+ for (i = 0; i < request->n_match_sets && ssid_cnt < MAX_PFN_LIST_COUNT; i++) {
+ ssid = &request->match_sets[i].ssid;
+ /* No need to include null ssid */
+ if (ssid->ssid_len) {
+ memcpy(ssids_local[ssid_cnt].SSID, ssid->ssid, ssid->ssid_len);
+ ssids_local[ssid_cnt].SSID_len = ssid->ssid_len;
+ if (is_ssid_in_list(ssid, hidden_ssid_list, request->n_ssids)) {
+ ssids_local[ssid_cnt].hidden = TRUE;
+ WL_PNO((">>> PNO hidden SSID (%s) \n", ssid->ssid));
+ } else {
+ ssids_local[ssid_cnt].hidden = FALSE;
+ WL_PNO((">>> PNO non-hidden SSID (%s) \n", ssid->ssid));
+ }
+ if (request->match_sets[i].rssi_thold != NL80211_SCAN_RSSI_THOLD_OFF) {
+ ssids_local[ssid_cnt].rssi_thresh =
+ (int8)request->match_sets[i].rssi_thold;
+ }
+ ssid_cnt++;
}
}
- if (request->n_ssids > 0) {
- for (i = 0; i < request->n_ssids; i++) {
- /* Active scan req for ssids */
- WL_PNO((">>> Active scan req for ssid (%s) \n", request->ssids[i].ssid));
-
- /* match_set ssids is a supert set of n_ssid list, so we need
- * not add these set seperately
- */
- }
- }
-
- if (ssid_count) {
- if ((ret = dhd_dev_pno_set_for_ssid(dev, ssids_local, request->n_match_sets,
- pno_time, pno_repeat, pno_freq_expo_max, NULL, 0)) < 0) {
+ if (ssid_cnt) {
+ if ((ret = dhd_dev_pno_set_for_ssid(dev, ssids_local, ssid_cnt, pno_time,
+ pno_repeat, pno_freq_expo_max, NULL, 0)) < 0) {
WL_ERR(("PNO setup failed!! ret=%d \n", ret));
return -EINVAL;
}
@@ -6629,7 +7443,7 @@
struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
WL_DBG(("Enter \n"));
- WL_PNO((">>> SCHED SCAN STOP\n"));
+ WL_ERR((">>> SCHED SCAN STOP\n"));
if (dhd_dev_pno_stop_for_ssid(dev) < 0)
WL_ERR(("PNO Stop for SSID failed"));
@@ -6646,6 +7460,217 @@
}
#endif /* WL_SCHED_SCAN */
+#ifdef WL_SUPPORT_ACS
+/*
+ * Currently the dump_obss IOVAR returns its output as a string, so we need to
+ * parse the output buffer in an unoptimized way. If the IOVAR output becomes
+ * available in binary format, this method can be optimized.
+ */
+static int wl_parse_dump_obss(char *buf, struct wl_dump_survey *survey)
+{
+ int i;
+ char *token;
+ char delim[] = " \n";
+
+ token = strsep(&buf, delim);
+ while (token != NULL) {
+ if (!strcmp(token, "OBSS")) {
+ for (i = 0; i < OBSS_TOKEN_IDX; i++)
+ token = strsep(&buf, delim);
+ /* strsep() returns NULL if the output was truncated */
+ if (!token)
+ break;
+ survey->obss = simple_strtoul(token, NULL, 10);
+ }
+
+ if (!strcmp(token, "IBSS")) {
+ for (i = 0; i < IBSS_TOKEN_IDX; i++)
+ token = strsep(&buf, delim);
+ if (!token)
+ break;
+ survey->ibss = simple_strtoul(token, NULL, 10);
+ }
+
+ if (!strcmp(token, "TXDur")) {
+ for (i = 0; i < TX_TOKEN_IDX; i++)
+ token = strsep(&buf, delim);
+ if (!token)
+ break;
+ survey->tx = simple_strtoul(token, NULL, 10);
+ }
+
+ if (!strcmp(token, "Category")) {
+ for (i = 0; i < CTG_TOKEN_IDX; i++)
+ token = strsep(&buf, delim);
+ if (!token)
+ break;
+ survey->no_ctg = simple_strtoul(token, NULL, 10);
+ }
+
+ if (!strcmp(token, "Packet")) {
+ for (i = 0; i < PKT_TOKEN_IDX; i++)
+ token = strsep(&buf, delim);
+ if (!token)
+ break;
+ survey->no_pckt = simple_strtoul(token, NULL, 10);
+ }
+
+ if (!strcmp(token, "Opp(time):")) {
+ for (i = 0; i < IDLE_TOKEN_IDX; i++)
+ token = strsep(&buf, delim);
+ if (!token)
+ break;
+ survey->idle = simple_strtoul(token, NULL, 10);
+ }
+
+ token = strsep(&buf, delim);
+ }
+
+ return 0;
+}
+
+static int wl_dump_obss(struct net_device *ndev, cca_msrmnt_query req,
+ struct wl_dump_survey *survey)
+{
+ cca_stats_n_flags *results;
+ char *buf;
+ int retry, err;
+
+ buf = kzalloc(sizeof(char) * WLC_IOCTL_MAXLEN, GFP_KERNEL);
+ if (unlikely(!buf)) {
+ WL_ERR(("%s: buf alloc failed\n", __func__));
+ return -ENOMEM;
+ }
+
+ retry = IOCTL_RETRY_COUNT;
+ while (retry--) {
+ err = wldev_iovar_getbuf(ndev, "dump_obss", &req, sizeof(req),
+ buf, WLC_IOCTL_MAXLEN, NULL);
+ if (err >= 0) {
+ break;
+ }
+ WL_DBG(("attempt = %d, err = %d, \n",
+ (IOCTL_RETRY_COUNT - retry), err));
+ }
+
+ if (err < 0) {
+ WL_ERR(("dump_obss IOVAR failed, error = %d\n", err));
+ goto exit;
+ }
+
+ results = (cca_stats_n_flags *)(buf);
+ wl_parse_dump_obss(results->buf, survey);
+ kfree(buf);
+
+ return 0;
+exit:
+ kfree(buf);
+ return err;
+}
+
+static int wl_cfg80211_dump_survey(struct wiphy *wiphy, struct net_device *ndev,
+ int idx, struct survey_info *info)
+{
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ struct wl_dump_survey *survey;
+ struct ieee80211_supported_band *band;
+ struct ieee80211_channel *chan;
+ cca_msrmnt_query req;
+ int val, err, noise, retry;
+
+ dhd_pub_t *dhd = (dhd_pub_t *)(cfg->pub);
+ if (!(dhd->op_mode & DHD_FLAG_HOSTAP_MODE)) {
+ return -ENOENT;
+ }
+ band = wiphy->bands[IEEE80211_BAND_2GHZ];
+ if (band && idx >= band->n_channels) {
+ idx -= band->n_channels;
+ band = NULL;
+ }
+
+ if (!band || idx >= band->n_channels) {
+ /* Move to 5G band */
+ band = wiphy->bands[IEEE80211_BAND_5GHZ];
+ if (idx >= band->n_channels) {
+ return -ENOENT;
+ }
+ }
+
+ chan = &band->channels[idx];
+ /* Setting current channel to the requested channel */
+ if ((err = wl_cfg80211_set_channel(wiphy, ndev, chan,
+ NL80211_CHAN_HT20)) < 0) {
+ WL_ERR(("Set channel failed \n"));
+ }
+
+ if (!idx) {
+ /* Disable mpc */
+ val = 0;
+ err = wldev_iovar_setbuf_bsscfg(ndev, "mpc", (void *)&val,
+ sizeof(val), cfg->ioctl_buf, WLC_IOCTL_SMLEN, 0,
+ &cfg->ioctl_buf_sync);
+ if (err < 0) {
+ WL_ERR(("set 'mpc' failed, error = %d\n", err));
+ }
+
+ /* Set interface up, explicitly. */
+ val = 1;
+ err = wldev_ioctl(ndev, WLC_UP, (void *)&val, sizeof(val), true);
+ if (err < 0) {
+ WL_ERR(("set interface up failed, error = %d\n", err));
+ }
+ }
+
+ /* Get noise value */
+ retry = IOCTL_RETRY_COUNT;
+ while (retry--) {
+ err = wldev_ioctl(ndev, WLC_GET_PHY_NOISE, &noise,
+ sizeof(noise), false);
+ if (err >= 0) {
+ break;
+ }
+ WL_DBG(("attempt = %d, err = %d\n",
+ (IOCTL_RETRY_COUNT - retry), err));
+ }
+
+ if (err < 0) {
+ WL_ERR(("Get Phy Noise failed, error = %d\n", err));
+ noise = CHAN_NOISE_DUMMY;
+ }
+
+ survey = kzalloc(sizeof(*survey), GFP_KERNEL);
+ if (unlikely(!survey)) {
+ WL_ERR(("%s: alloc failed\n", __func__));
+ return -ENOMEM;
+ }
+
+ /* Start Measurement for obss stats on current channel */
+ req.msrmnt_query = 0;
+ req.time_req = ACS_MSRMNT_DELAY;
+ if ((err = wl_dump_obss(ndev, req, survey)) < 0) {
+ goto exit;
+ }
+
+ /*
+ * Wait for the measurement to complete, adding a 10 ms buffer to
+ * account for any delay in IOVAR completion.
+ */
+ msleep(ACS_MSRMNT_DELAY + 10);
+
+ /* Issue IOVAR to collect measurement results */
+ req.msrmnt_query = 1;
+ if ((err = wl_dump_obss(ndev, req, survey)) < 0) {
+ goto exit;
+ }
+
+ info->channel = chan;
+ info->noise = noise;
+ info->channel_time = ACS_MSRMNT_DELAY;
+ info->channel_time_busy = ACS_MSRMNT_DELAY - survey->idle;
+ info->channel_time_rx = survey->obss + survey->ibss + survey->no_ctg +
+ survey->no_pckt;
+ info->channel_time_tx = survey->tx;
+ info->filled = SURVEY_INFO_NOISE_DBM | SURVEY_INFO_CHANNEL_TIME |
+ SURVEY_INFO_CHANNEL_TIME_BUSY | SURVEY_INFO_CHANNEL_TIME_RX |
+ SURVEY_INFO_CHANNEL_TIME_TX;
+ kfree(survey);
+
+ return 0;
+exit:
+ kfree(survey);
+ return err;
+}
+#endif /* WL_SUPPORT_ACS */
+
static struct cfg80211_ops wl_cfg80211_ops = {
.add_virtual_intf = wl_cfg80211_add_virtual_iface,
.del_virtual_intf = wl_cfg80211_del_virtual_iface,
@@ -6681,7 +7706,7 @@
.change_bss = wl_cfg80211_change_bss,
#if (LINUX_VERSION_CODE < KERNEL_VERSION(3, 6, 0))
.set_channel = wl_cfg80211_set_channel,
-#endif
+#endif
#if (LINUX_VERSION_CODE < KERNEL_VERSION(3, 4, 0))
.set_beacon = wl_cfg80211_add_set_beacon,
.add_beacon = wl_cfg80211_add_set_beacon,
@@ -6689,7 +7714,7 @@
.change_beacon = wl_cfg80211_change_beacon,
.start_ap = wl_cfg80211_start_ap,
.stop_ap = wl_cfg80211_stop_ap,
-#endif
+#endif
#ifdef WL_SCHED_SCAN
.sched_scan_start = wl_cfg80211_sched_scan_start,
.sched_scan_stop = wl_cfg80211_sched_scan_stop,
@@ -6702,8 +7727,13 @@
#endif /* WL_SUPPORT_BACKPORTED_KPATCHES || KERNEL_VERSION >= (3,2,0) */
#if (LINUX_VERSION_CODE > KERNEL_VERSION(3, 2, 0))
.tdls_oper = wl_cfg80211_tdls_oper,
-#endif
- CFG80211_TESTMODE_CMD(dhd_cfg80211_testmode_cmd)
+#endif
+#ifdef WL_SUPPORT_ACS
+ .dump_survey = wl_cfg80211_dump_survey,
+#endif /* WL_SUPPORT_ACS */
+#ifdef WL_CFG80211_ACL
+ .set_mac_acl = wl_cfg80211_set_mac_acl,
+#endif /* WL_CFG80211_ACL */
};
s32 wl_mode_to_nl80211_iftype(s32 mode)
@@ -6725,7 +7755,7 @@
}
#ifdef CONFIG_CFG80211_INTERNAL_REGDB
-static int
+static void
wl_cfg80211_reg_notifier(
struct wiphy *wiphy,
struct regulatory_request *request)
@@ -6735,7 +7765,7 @@
if (!request || !cfg) {
WL_ERR(("Invalid arg\n"));
- return -EINVAL;
+ return;
}
WL_DBG(("ccode: %c%c Initiator: %d\n",
@@ -6760,10 +7790,18 @@
WL_ERR(("set country Failed :%d\n", ret));
}
- return ret;
+ return;
}
#endif /* CONFIG_CFG80211_INTERNAL_REGDB */
+#ifdef CONFIG_PM
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 11, 0))
+static const struct wiphy_wowlan_support brcm_wowlan_support = {
+ .flags = WIPHY_WOWLAN_ANY,
+};
+#endif /* LINUX_VERSION_CODE >= KERNEL_VERSION(3, 11, 0) */
+#endif /* CONFIG_PM */
+
static s32 wl_setup_wiphy(struct wireless_dev *wdev, struct device *sdiofunc_dev, void *context)
{
s32 err = 0;
@@ -6776,7 +7814,7 @@
err = -ENODEV;
return err;
}
-#endif
+#endif
wdev->wiphy =
wiphy_new(&wl_cfg80211_ops, sizeof(struct bcm_cfg80211));
@@ -6799,9 +7837,9 @@
wdev->wiphy->interface_modes =
BIT(NL80211_IFTYPE_STATION)
| BIT(NL80211_IFTYPE_ADHOC)
-#if !defined(WL_ENABLE_P2P_IF)
+#if !defined(WL_ENABLE_P2P_IF) && !defined(WL_CFG80211_P2P_DEV_IF)
| BIT(NL80211_IFTYPE_MONITOR)
-#endif /* !WL_ENABLE_P2P_IF */
+#endif /* !WL_ENABLE_P2P_IF && !WL_CFG80211_P2P_DEV_IF */
#if defined(WL_IFACE_COMB_NUM_CHANNELS) || defined(WL_CFG80211_P2P_DEV_IF)
| BIT(NL80211_IFTYPE_P2P_CLIENT)
| BIT(NL80211_IFTYPE_P2P_GO)
@@ -6847,8 +7885,8 @@
* to allow bssid & freq to be sent down to driver even if
* FW ROAM is advertised.
*/
- /* wdev->wiphy->flags |= WIPHY_FLAG_SUPPORTS_FW_ROAM; */
-#endif
+ wdev->wiphy->flags |= WIPHY_FLAG_SUPPORTS_FW_ROAM;
+#endif
#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 3, 0))
wdev->wiphy->flags |= WIPHY_FLAG_HAS_REMAIN_ON_CHANNEL |
WIPHY_FLAG_OFFCHAN_TX;
@@ -6859,6 +7897,12 @@
* to remove the patch from supplicant
*/
wdev->wiphy->flags |= WIPHY_FLAG_HAVE_AP_SME;
+
+#ifdef WL_CFG80211_ACL
+ /* Configure ACL capabilities. */
+ wdev->wiphy->max_acl_mac_addrs = MAX_NUM_MAC_FILT;
+#endif
+
#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 4, 0))
/* Supplicant distinguish between the SoftAP mode and other
* modes (e.g. P2P, WPS, HS2.0) when it builds the probe
@@ -6871,7 +7915,7 @@
wdev->wiphy->flags |= WIPHY_FLAG_AP_PROBE_RESP_OFFLOAD;
wdev->wiphy->probe_resp_offload = 0;
}
-#endif
+#endif
#endif /* WL_SUPPORT_BACKPORTED_KPATCHES) || (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 4, 0)) */
#ifdef CONFIG_CFG80211_INTERNAL_REGDB
@@ -6888,16 +7932,25 @@
* disconnection of connected network before suspend. So a dummy wowlan
* filter is configured for kernels linux-3.8 and above.
*/
+
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 11, 0))
+ wdev->wiphy->wowlan = &brcm_wowlan_support;
+#else
wdev->wiphy->wowlan.flags = WIPHY_WOWLAN_ANY;
+#endif /* LINUX_VERSION_CODE >= KERNEL_VERSION(3, 11, 0) */
#endif /* CONFIG_PM && WL_CFG80211_P2P_DEV_IF */
WL_DBG(("Registering custom regulatory\n"));
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 14, 0))
+ wdev->wiphy->regulatory_flags |= REGULATORY_CUSTOM_REG;
+#else
wdev->wiphy->flags |= WIPHY_FLAG_CUSTOM_REGULATORY;
+#endif
wiphy_apply_custom_regulatory(wdev->wiphy, &brcm_regdom);
#if (LINUX_VERSION_CODE > KERNEL_VERSION(3, 13, 0)) || defined(WL_VENDOR_EXT_SUPPORT)
- WL_ERR(("Registering Vendor80211)\n"));
- err = wl_cfgvendor_attach(wdev->wiphy);
+ WL_ERR(("Registering Vendor80211\n"));
+ err = wl_cfgvendor_attach(wdev->wiphy, dhd);
if (unlikely(err < 0)) {
WL_ERR(("Could not attach vendor commands (%d)\n", err));
}
@@ -6952,16 +8005,26 @@
bss_list = cfg->bss_list;
WL_DBG(("scanned AP count (%d)\n", bss_list->count));
+#ifdef ROAM_CHANNEL_CACHE
+ reset_roam_cache();
+#endif /* ROAM_CHANNEL_CACHE */
bi = next_bss(bss_list, bi);
for_each_bss(bss_list, bi, i) {
- err = wl_inform_single_bss(cfg, bi);
+#ifdef ROAM_CHANNEL_CACHE
+ add_roam_cache(bi);
+#endif /* ROAM_CHANNEL_CACHE */
+ err = wl_inform_single_bss(cfg, bi, false);
if (unlikely(err))
break;
}
+#ifdef ROAM_CHANNEL_CACHE
+ /* print_roam_cache(); */
+ update_roam_cache(cfg, ioctl_version);
+#endif /* ROAM_CHANNEL_CACHE */
return err;
}
-static s32 wl_inform_single_bss(struct bcm_cfg80211 *cfg, struct wl_bss_info *bi)
+static s32 wl_inform_single_bss(struct bcm_cfg80211 *cfg, struct wl_bss_info *bi, bool roam)
{
struct wiphy *wiphy = bcmcfg_to_wiphy(cfg);
struct ieee80211_mgmt *mgmt;
@@ -6990,7 +8053,7 @@
}
mgmt = (struct ieee80211_mgmt *)notif_bss_info->frame_buf;
notif_bss_info->channel =
- bi->ctl_ch ? bi->ctl_ch : CHSPEC_CHANNEL(wl_chspec_driver_to_host(bi->chanspec));
+ wf_chspec_ctlchan(wl_chspec_driver_to_host(bi->chanspec));
if (notif_bss_info->channel <= CH_MAX_2G_CHANNEL)
band = wiphy->bands[IEEE80211_BAND_2GHZ];
@@ -7015,7 +8078,7 @@
beacon_proberesp->beacon_int = cpu_to_le16(bi->beacon_period);
beacon_proberesp->capab_info = cpu_to_le16(bi->capability);
wl_rst_ie(cfg);
- wl_update_hidden_ap_ie(bi, ((u8 *) bi) + bi->ie_offset, &bi->ie_length);
+ wl_update_hidden_ap_ie(bi, ((u8 *) bi) + bi->ie_offset, &bi->ie_length, roam);
wl_mrg_ie(cfg, ((u8 *) bi) + bi->ie_offset, bi->ie_length);
wl_cp_ie(cfg, beacon_proberesp->variable, WL_BSS_INFO_MAX -
offsetof(struct wl_cfg80211_bss_info, frame_buf));
@@ -7108,7 +8171,8 @@
event == WLC_E_DISASSOC ||
event == WLC_E_DEAUTH) {
#if (WL_DBG_LEVEL > 0)
- WL_ERR(("Link down Reason : WLC_E_%s\n", wl_dbg_estr[event]));
+ WL_ERR(("Link down Reason : WLC_E_%s reason = %d status = %d\n", wl_dbg_estr[event],
+ ntoh32(e->reason), ntoh32(e->status)));
#endif /* (WL_DBG_LEVEL > 0) */
return true;
} else if (event == WLC_E_LINK) {
@@ -7166,21 +8230,21 @@
channel_info_t ci;
#else
struct station_info sinfo;
-#endif
+#endif
WL_DBG(("event %d status %d reason %d\n", event, ntoh32(e->status), reason));
/* if link down, bsscfg is disabled. */
if (event == WLC_E_LINK && reason == WLC_E_LINK_BSSCFG_DIS &&
wl_get_p2p_status(cfg, IF_DELETING) && (ndev != bcmcfg_to_prmry_ndev(cfg))) {
wl_add_remove_eventmsg(ndev, WLC_E_PROBREQ_MSG, false);
- WL_INFO(("AP mode link down !! \n"));
+ WL_INFORM(("AP mode link down !! \n"));
complete(&cfg->iface_disable);
return 0;
}
if (event == WLC_E_DISASSOC_IND || event == WLC_E_DEAUTH_IND || event == WLC_E_DEAUTH) {
WL_ERR(("event %s(%d) status %d reason %d\n",
- bcmevent_names[event].name, event, ntoh32(e->status), reason));
+ bcmevent_get_name(event), event, ntoh32(e->status), reason));
}
#if (LINUX_VERSION_CODE < KERNEL_VERSION(3, 2, 0)) && !defined(WL_CFG80211_STA_EVENT)
@@ -7260,23 +8324,35 @@
isfree = true;
if (event == WLC_E_ASSOC_IND && reason == DOT11_SC_SUCCESS) {
-#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 4, 0))
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 18, 0))
+ cfg80211_rx_mgmt(ndev, freq, 0, mgmt_frame, len, 0);
+#elif (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 14, 0))
+ cfg80211_rx_mgmt(ndev, freq, 0, mgmt_frame, len, 0, GFP_ATOMIC);
+#elif (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 4, 0))
cfg80211_rx_mgmt(ndev, freq, 0, mgmt_frame, len, GFP_ATOMIC);
#else
cfg80211_rx_mgmt(ndev, freq, mgmt_frame, len, GFP_ATOMIC);
-#endif
+#endif
} else if (event == WLC_E_DISASSOC_IND) {
-#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 4, 0))
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 18, 0))
+ cfg80211_rx_mgmt(ndev, freq, 0, mgmt_frame, len, 0);
+#elif (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 14, 0))
+ cfg80211_rx_mgmt(ndev, freq, 0, mgmt_frame, len, 0, GFP_ATOMIC);
+#elif (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 4, 0))
cfg80211_rx_mgmt(ndev, freq, 0, mgmt_frame, len, GFP_ATOMIC);
#else
cfg80211_rx_mgmt(ndev, freq, mgmt_frame, len, GFP_ATOMIC);
-#endif
+#endif
} else if ((event == WLC_E_DEAUTH_IND) || (event == WLC_E_DEAUTH)) {
-#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 4, 0))
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 18, 0))
+ cfg80211_rx_mgmt(ndev, freq, 0, mgmt_frame, len, 0);
+#elif (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 14, 0))
+ cfg80211_rx_mgmt(ndev, freq, 0, mgmt_frame, len, 0, GFP_ATOMIC);
+#elif (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 4, 0))
cfg80211_rx_mgmt(ndev, freq, 0, mgmt_frame, len, GFP_ATOMIC);
#else
cfg80211_rx_mgmt(ndev, freq, mgmt_frame, len, GFP_ATOMIC);
-#endif
+#endif
}
exit:
@@ -7301,7 +8377,7 @@
} else if ((event == WLC_E_DEAUTH_IND) || (event == WLC_E_DEAUTH)) {
cfg80211_del_sta(ndev, e->addr.octet, GFP_ATOMIC);
}
-#endif
+#endif
return err;
}
@@ -7335,6 +8411,9 @@
u16 flags = ntoh16(e->flags);
u32 status = ntoh32(e->status);
bool active;
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 18, 0))
+ struct ieee80211_channel *chan;
+#endif
if (event == WLC_E_JOIN) {
WL_DBG(("joined in IBSS network\n"));
@@ -7344,6 +8423,9 @@
}
if (event == WLC_E_JOIN || event == WLC_E_START ||
(event == WLC_E_LINK && (flags == WLC_EVENT_MSG_LINK))) {
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 18, 0))
+ chan = ieee80211_get_channel(bcmcfg_to_wiphy(cfg), cfg->channel);
+#endif
if (wl_get_drv_status(cfg, CONNECTED, ndev)) {
/* ROAM or Redundant */
u8 *cur_bssid = wl_read_prof(cfg, ndev, WL_PROF_BSSID);
@@ -7352,21 +8434,29 @@
MACDBG "), ignore it\n", MAC2STRDBG(cur_bssid)));
return err;
}
- WL_INFO(("IBSS BSSID is changed from " MACDBG " to " MACDBG "\n",
+ WL_INFORM(("IBSS BSSID is changed from " MACDBG " to " MACDBG "\n",
MAC2STRDBG(cur_bssid), MAC2STRDBG((u8 *)&e->addr)));
wl_get_assoc_ies(cfg, ndev);
wl_update_prof(cfg, ndev, NULL, (void *)&e->addr, WL_PROF_BSSID);
- wl_update_bss_info(cfg, ndev);
+ wl_update_bss_info(cfg, ndev, false);
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 18, 0))
+ cfg80211_ibss_joined(ndev, (s8 *)&e->addr, chan, GFP_KERNEL);
+#else
cfg80211_ibss_joined(ndev, (s8 *)&e->addr, GFP_KERNEL);
+#endif
}
else {
/* New connection */
- WL_INFO(("IBSS connected to " MACDBG "\n", MAC2STRDBG((u8 *)&e->addr)));
+ WL_INFORM(("IBSS connected to " MACDBG "\n", MAC2STRDBG((u8 *)&e->addr)));
wl_link_up(cfg);
wl_get_assoc_ies(cfg, ndev);
wl_update_prof(cfg, ndev, NULL, (void *)&e->addr, WL_PROF_BSSID);
- wl_update_bss_info(cfg, ndev);
+ wl_update_bss_info(cfg, ndev, false);
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 18, 0))
+ cfg80211_ibss_joined(ndev, (s8 *)&e->addr, chan, GFP_KERNEL);
+#else
cfg80211_ibss_joined(ndev, (s8 *)&e->addr, GFP_KERNEL);
+#endif
wl_set_drv_status(cfg, CONNECTED, ndev);
active = true;
wl_update_prof(cfg, ndev, NULL, (void *)&active, WL_PROF_ACT);
@@ -7412,16 +8502,15 @@
wl_link_up(cfg);
act = true;
if (!wl_get_drv_status(cfg, DISCONNECTING, ndev)) {
- printk("wl_bss_connect_done succeeded with " MACDBG "\n",
- MAC2STRDBG((u8*)(&e->addr)));
- wl_bss_connect_done(cfg, ndev, e, data, true);
- WL_DBG(("joined in BSS network \"%s\"\n",
- ((struct wlc_ssid *)
- wl_read_prof(cfg, ndev, WL_PROF_SSID))->SSID));
- }
+ printk("wl_bss_connect_done succeeded with " MACDBG "\n",
+ MAC2STRDBG((u8*)(&e->addr)));
+ wl_bss_connect_done(cfg, ndev, e, data, true);
+ WL_DBG(("joined in BSS network \"%s\"\n",
+ ((struct wlc_ssid *)
+ wl_read_prof(cfg, ndev, WL_PROF_SSID))->SSID));
+ }
wl_update_prof(cfg, ndev, e, &act, WL_PROF_ACT);
wl_update_prof(cfg, ndev, NULL, (void *)&e->addr, WL_PROF_BSSID);
-
} else if (wl_is_linkdown(cfg, e)) {
if (cfg->scan_request)
wl_notify_escan_complete(cfg, ndev, true, true);
@@ -7434,10 +8523,10 @@
/* WLAN_REASON_UNSPECIFIED is used for hang up event in Android */
reason = (reason == WLAN_REASON_UNSPECIFIED)? 0 : reason;
- printk("link down if %s may call cfg80211_disconnected. "
+ WL_ERR(("link down if %s may call cfg80211_disconnected. "
"event : %d, reason=%d from " MACDBG "\n",
ndev->name, event, ntoh32(e->reason),
- MAC2STRDBG((u8*)(&e->addr)));
+ MAC2STRDBG((u8*)(&e->addr))));
if (!cfg->roam_offload &&
memcmp(curbssid, &e->addr, ETHER_ADDR_LEN) != 0) {
WL_ERR(("BSSID of event is not the connected BSSID"
@@ -7446,7 +8535,7 @@
return 0;
}
wl_clr_drv_status(cfg, CONNECTED, ndev);
- if (! wl_get_drv_status(cfg, DISCONNECTING, ndev)) {
+ if (!wl_get_drv_status(cfg, DISCONNECTING, ndev)) {
/* To make sure of disconnection, explicitly send a disassoc
* for BSSID 00:00:00:00:00:00 issue
*/
@@ -7463,17 +8552,19 @@
cfg80211_disconnected(ndev, reason, NULL, 0, GFP_KERNEL);
wl_link_down(cfg);
wl_init_prof(cfg, ndev);
+ } else {
+ wl_clr_drv_status(cfg, DISCONNECTING, ndev);
}
- }
- else if (wl_get_drv_status(cfg, CONNECTING, ndev)) {
- printk("link down, during connecting\n");
+ } else if (wl_get_drv_status(cfg, CONNECTING, ndev)) {
+ WL_ERR(("link down, during connecting\n"));
#ifdef ESCAN_RESULT_PATCH
if ((memcmp(connect_req_bssid, broad_bssid, ETHER_ADDR_LEN) == 0) ||
(memcmp(&e->addr, broad_bssid, ETHER_ADDR_LEN) == 0) ||
(memcmp(&e->addr, connect_req_bssid, ETHER_ADDR_LEN) == 0))
/* In case this event comes while associating another AP */
#endif /* ESCAN_RESULT_PATCH */
- wl_bss_connect_done(cfg, ndev, e, data, false);
+ if (!wl_get_drv_status(cfg, DISCONNECTING, ndev))
+ wl_bss_connect_done(cfg, ndev, e, data, false);
}
wl_clr_drv_status(cfg, DISCONNECTING, ndev);
@@ -7482,13 +8573,16 @@
complete(&cfg->iface_disable);
} else if (wl_is_nonetwork(cfg, e)) {
- printk("connect failed event=%d e->status %d e->reason %d \n",
- event, (int)ntoh32(e->status), (int)ntoh32(e->reason));
+ WL_ERR(("connect failed event=%d e->status %d e->reason %d\n",
+ event, (int)ntoh32(e->status), (int)ntoh32(e->reason)));
/* Clean up any pending scan request */
if (cfg->scan_request)
wl_notify_escan_complete(cfg, ndev, true, true);
- if (wl_get_drv_status(cfg, CONNECTING, ndev))
+ if (wl_get_drv_status(cfg, CONNECTING, ndev) &&
+ !wl_get_drv_status(cfg, DISCONNECTING, ndev))
wl_bss_connect_done(cfg, ndev, e, data, false);
+ wl_clr_drv_status(cfg, DISCONNECTING, ndev);
+ wl_clr_drv_status(cfg, CONNECTING, ndev);
} else {
WL_DBG(("%s nothing\n", __FUNCTION__));
}
@@ -7499,6 +8593,63 @@
return err;
}
+#ifdef GSCAN_SUPPORT
+static s32
+wl_handle_roam_exp_event(struct bcm_cfg80211 *cfg, bcm_struct_cfgdev *cfgdev,
+ const wl_event_msg_t *e, void *data)
+{
+ struct net_device *ndev = NULL;
+ u32 datalen = be32_to_cpu(e->datalen);
+
+ if (datalen) {
+ wl_roam_exp_event_t *evt_data = (wl_roam_exp_event_t *)data;
+ if (evt_data->version == ROAM_EXP_EVENT_VERSION) {
+ wlc_ssid_t *ssid = &evt_data->cur_ssid;
+ struct wireless_dev *wdev;
+ ndev = cfgdev_to_wlc_ndev(cfgdev, cfg);
+ if (ndev) {
+ wdev = ndev->ieee80211_ptr;
+ wdev->ssid_len = min(ssid->SSID_len, (uint32)DOT11_MAX_SSID_LEN);
+ memcpy(wdev->ssid, ssid->SSID, wdev->ssid_len);
+ WL_ERR(("SSID is %s\n", ssid->SSID));
+ wl_update_prof(cfg, ndev, NULL, ssid, WL_PROF_SSID);
+ } else {
+ WL_ERR(("NULL ndev!\n"));
+ }
+ } else {
+ WL_ERR(("Version mismatch %d, expected %d\n", evt_data->version,
+ ROAM_EXP_EVENT_VERSION));
+ }
+ }
+ return BCME_OK;
+}
+#endif /* GSCAN_SUPPORT */
+
+static s32 wl_handle_rssi_monitor_event(struct bcm_cfg80211 *cfg, bcm_struct_cfgdev *cfgdev,
+ const wl_event_msg_t *e, void *data)
+{
+ u32 datalen = be32_to_cpu(e->datalen);
+ struct net_device *ndev = cfgdev_to_wlc_ndev(cfgdev, cfg);
+ struct wiphy *wiphy = bcmcfg_to_wiphy(cfg);
+
+ if (datalen) {
+ wl_rssi_monitor_evt_t *evt_data = (wl_rssi_monitor_evt_t *)data;
+ if (evt_data->version == RSSI_MONITOR_VERSION) {
+ dhd_rssi_monitor_evt_t monitor_data;
+ monitor_data.version = DHD_RSSI_MONITOR_EVT_VERSION;
+ monitor_data.cur_rssi = evt_data->cur_rssi;
+ memcpy(&monitor_data.BSSID, &e->addr, ETHER_ADDR_LEN);
+ wl_cfgvendor_send_async_event(wiphy, ndev,
+ GOOGLE_RSSI_MONITOR_EVENT,
+ &monitor_data, sizeof(monitor_data));
+ } else {
+ WL_ERR(("Version mismatch %d, expected %d\n", evt_data->version,
+ RSSI_MONITOR_VERSION));
+ }
+ }
+ return BCME_OK;
+}
+
static s32
wl_notify_roaming_status(struct bcm_cfg80211 *cfg, bcm_struct_cfgdev *cfgdev,
@@ -7606,8 +8757,20 @@
static void wl_ch_to_chanspec(int ch, struct wl_join_params *join_params,
size_t *join_params_size)
{
+#ifndef ROAM_CHANNEL_CACHE
chanspec_t chanspec = 0;
+#endif
+
if (ch != 0) {
+#ifdef ROAM_CHANNEL_CACHE
+ int n_channels;
+
+ n_channels = get_roam_channel_list(ch, join_params->params.chanspec_list,
+ &join_params->ssid, ioctl_version);
+ join_params->params.chanspec_num = htod32(n_channels);
+ *join_params_size += WL_ASSOC_PARAMS_FIXED_SIZE +
+ join_params->params.chanspec_num * sizeof(chanspec_t);
+#else
join_params->params.chanspec_num = 1;
join_params->params.chanspec_list[0] = ch;
@@ -7629,13 +8792,14 @@
join_params->params.chanspec_num =
htod32(join_params->params.chanspec_num);
+#endif /* ROAM_CHANNEL_CACHE */
WL_DBG(("join_params->params.chanspec_list[0]= %X, %d channels\n",
join_params->params.chanspec_list[0],
join_params->params.chanspec_num));
}
}
-static s32 wl_update_bss_info(struct bcm_cfg80211 *cfg, struct net_device *ndev)
+static s32 wl_update_bss_info(struct bcm_cfg80211 *cfg, struct net_device *ndev, bool roam)
{
struct cfg80211_bss *bss;
struct wl_bss_info *bi;
@@ -7649,6 +8813,10 @@
s32 err = 0;
struct wiphy *wiphy;
u32 channel;
+#ifdef ROAM_CHANNEL_CACHE
+ struct ieee80211_channel *cur_channel;
+ u32 freq, band;
+#endif /* ROAM_CHANNEL_CACHE */
wiphy = bcmcfg_to_wiphy(cfg);
@@ -7668,8 +8836,7 @@
goto update_bss_info_out;
}
bi = (struct wl_bss_info *)(cfg->extra_buf + 4);
- channel = bi->ctl_ch ? bi->ctl_ch :
- CHSPEC_CHANNEL(wl_chspec_driver_to_host(bi->chanspec));
+ channel = wf_chspec_ctlchan(wl_chspec_driver_to_host(bi->chanspec));
wl_update_prof(cfg, ndev, NULL, &channel, WL_PROF_CHAN);
if (!bss) {
@@ -7679,7 +8846,7 @@
err = -EIO;
goto update_bss_info_out;
}
- err = wl_inform_single_bss(cfg, bi);
+ err = wl_inform_single_bss(cfg, bi, roam);
if (unlikely(err))
goto update_bss_info_out;
@@ -7688,6 +8855,16 @@
beacon_interval = cpu_to_le16(bi->beacon_period);
} else {
WL_DBG(("Found the AP in the list - BSSID %pM\n", bss->bssid));
+#ifdef ROAM_CHANNEL_CACHE
+#if LINUX_VERSION_CODE == KERNEL_VERSION(2, 6, 38) && !defined(WL_COMPAT_WIRELESS)
+ freq = ieee80211_channel_to_frequency(channel);
+#else
+ band = (channel <= CH_MAX_2G_CHANNEL) ? IEEE80211_BAND_2GHZ : IEEE80211_BAND_5GHZ;
+ freq = ieee80211_channel_to_frequency(channel, band);
+#endif
+ cur_channel = ieee80211_get_channel(wiphy, freq);
+ bss->channel = cur_channel;
+#endif /* ROAM_CHANNEL_CACHE */
#if defined(WL_CFG80211_P2P_DEV_IF)
ie = (u8 *)bss->ies->data;
ie_len = bss->ies->len;
@@ -7744,12 +8921,13 @@
struct ieee80211_channel *notify_channel = NULL;
u32 *channel;
u32 freq;
-#endif
+#endif
+
wl_get_assoc_ies(cfg, ndev);
wl_update_prof(cfg, ndev, NULL, (void *)(e->addr.octet), WL_PROF_BSSID);
curbssid = wl_read_prof(cfg, ndev, WL_PROF_BSSID);
- wl_update_bss_info(cfg, ndev);
+ wl_update_bss_info(cfg, ndev, true);
wl_update_pmklist(ndev, cfg->pmk_list, err);
#if (LINUX_VERSION_CODE > KERNEL_VERSION(2, 6, 39))
@@ -7761,20 +8939,20 @@
band = wiphy->bands[IEEE80211_BAND_5GHZ];
freq = ieee80211_channel_to_frequency(*channel, band->band);
notify_channel = ieee80211_get_channel(wiphy, freq);
-#endif
+#endif
printk("wl_bss_roaming_done succeeded to " MACDBG "\n",
MAC2STRDBG((u8*)(&e->addr)));
+#ifdef PCIE_FULL_DONGLE
+ wl_roam_flowring_cleanup(cfg);
+#endif /* PCIE_FULL_DONGLE */
- if (memcmp(curbssid, connect_req_bssid, ETHER_ADDR_LEN) != 0) {
- WL_DBG(("BSSID Mismatch, so indicate roam to cfg80211\n"));
- cfg80211_roamed(ndev,
+ cfg80211_roamed(ndev,
#if (LINUX_VERSION_CODE > KERNEL_VERSION(2, 6, 39))
- notify_channel,
+ notify_channel,
#endif
- curbssid,
- conn_info->req_ie, conn_info->req_ie_len,
- conn_info->resp_ie, conn_info->resp_ie_len, GFP_KERNEL);
- }
+ curbssid,
+ conn_info->req_ie, conn_info->req_ie_len,
+ conn_info->resp_ie, conn_info->resp_ie_len, GFP_KERNEL);
WL_DBG(("Report roaming result\n"));
wl_set_drv_status(cfg, CONNECTED, ndev);
@@ -7825,7 +9003,7 @@
wl_get_assoc_ies(cfg, ndev);
wl_update_prof(cfg, ndev, NULL, (void *)(e->addr.octet), WL_PROF_BSSID);
curbssid = wl_read_prof(cfg, ndev, WL_PROF_BSSID);
- wl_update_bss_info(cfg, ndev);
+ wl_update_bss_info(cfg, ndev, false);
wl_update_pmklist(ndev, cfg->pmk_list, err);
wl_set_drv_status(cfg, CONNECTED, ndev);
#if defined(ROAM_ENABLE) && defined(ROAM_AP_ENV_DETECTION)
@@ -7834,8 +9012,12 @@
AP_ENV_INDETERMINATE);
#endif /* ROAM_AP_ENV_DETECTION */
if (ndev != bcmcfg_to_prmry_ndev(cfg)) {
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 13, 0)
+ init_completion(&cfg->iface_disable);
+#else
/* reinitialize completion to clear previous count */
INIT_COMPLETION(cfg->iface_disable);
+#endif
}
#ifdef CUSTOM_SET_CPUCORE
if (wl_get_chan_isvht80(ndev, dhd)) {
@@ -7860,7 +9042,7 @@
WLAN_STATUS_UNSPECIFIED_FAILURE,
GFP_KERNEL);
if (completed)
- WL_INFO(("Report connect result - connection succeeded\n"));
+ WL_INFORM(("Report connect result - connection succeeded\n"));
else
WL_ERR(("Report connect result - connection failed\n"));
}
@@ -7897,17 +9079,52 @@
return 0;
}
+#ifdef BT_WIFI_HANDOVER
+static s32
+wl_notify_bt_wifi_handover_req(struct bcm_cfg80211 *cfg, bcm_struct_cfgdev *cfgdev,
+ const wl_event_msg_t *e, void *data)
+{
+ struct net_device *ndev = NULL;
+ u32 event = ntoh32(e->event_type);
+ u32 datalen = ntoh32(e->datalen);
+ s32 err;
+
+ WL_ERR(("wl_notify_bt_wifi_handover_req: event_type : %d, datalen : %d\n", event, datalen));
+ ndev = cfgdev_to_wlc_ndev(cfgdev, cfg);
+ err = wl_genl_send_msg(ndev, event, data, (u16)datalen, 0, 0);
+
+ return err;
+}
+#endif /* BT_WIFI_HANDOVER */
+
#ifdef PNO_SUPPORT
static s32
wl_notify_pfn_status(struct bcm_cfg80211 *cfg, bcm_struct_cfgdev *cfgdev,
const wl_event_msg_t *e, void *data)
{
struct net_device *ndev = NULL;
+#ifdef GSCAN_SUPPORT
+ void *ptr;
+ int send_evt_bytes = 0;
+ u32 event = be32_to_cpu(e->event_type);
+ struct wiphy *wiphy = bcmcfg_to_wiphy(cfg);
+#endif /* GSCAN_SUPPORT */
WL_ERR((">>> PNO Event\n"));
ndev = cfgdev_to_wlc_ndev(cfgdev, cfg);
+#ifdef GSCAN_SUPPORT
+ ptr = dhd_dev_process_epno_result(ndev, data, event, &send_evt_bytes);
+ if (ptr) {
+ wl_cfgvendor_send_async_event(wiphy, ndev,
+ GOOGLE_SCAN_EPNO_EVENT, ptr, send_evt_bytes);
+ kfree(ptr);
+ }
+ if (!dhd_dev_is_legacy_pno_enabled(ndev))
+ return 0;
+#endif /* GSCAN_SUPPORT */
+
#ifndef WL_SCHED_SCAN
mutex_lock(&cfg->usr_sync);
/* TODO: Use cfg80211_sched_scan_results(wiphy); */
@@ -7923,6 +9140,106 @@
}
#endif /* PNO_SUPPORT */
+#ifdef GSCAN_SUPPORT
+static s32
+wl_notify_gscan_event(struct bcm_cfg80211 *cfg, bcm_struct_cfgdev *cfgdev,
+ const wl_event_msg_t *e, void *data)
+{
+ s32 err = 0;
+ u32 event = be32_to_cpu(e->event_type);
+ void *ptr;
+ int send_evt_bytes = 0;
+ int batch_event_result_dummy = 0;
+ struct net_device *ndev = cfgdev_to_wlc_ndev(cfgdev, cfg);
+ struct wiphy *wiphy = bcmcfg_to_wiphy(cfg);
+ u32 len = ntoh32(e->datalen);
+
+ switch (event) {
+ case WLC_E_PFN_SWC:
+ ptr = dhd_dev_swc_scan_event(ndev, data, &send_evt_bytes);
+ if (send_evt_bytes) {
+ wl_cfgvendor_send_async_event(wiphy, ndev,
+ GOOGLE_GSCAN_SIGNIFICANT_EVENT, ptr, send_evt_bytes);
+ kfree(ptr);
+ }
+ break;
+ case WLC_E_PFN_BEST_BATCHING:
+ err = dhd_dev_retrieve_batch_scan(ndev);
+ if (err < 0) {
+ WL_ERR(("Batch retrieval already in progress %d\n", err));
+ } else {
+ wl_cfgvendor_send_async_event(wiphy, ndev,
+ GOOGLE_GSCAN_BATCH_SCAN_EVENT,
+ &batch_event_result_dummy, sizeof(int));
+ }
+ break;
+ case WLC_E_PFN_SCAN_COMPLETE:
+ batch_event_result_dummy = WIFI_SCAN_COMPLETE;
+ wl_cfgvendor_send_async_event(wiphy, ndev,
+ GOOGLE_SCAN_COMPLETE_EVENT,
+ &batch_event_result_dummy, sizeof(int));
+ break;
+ case WLC_E_PFN_BSSID_NET_FOUND:
+ ptr = dhd_dev_hotlist_scan_event(ndev, data, &send_evt_bytes,
+ HOTLIST_FOUND);
+ if (ptr) {
+ wl_cfgvendor_send_hotlist_event(wiphy, ndev,
+ ptr, send_evt_bytes, GOOGLE_GSCAN_GEOFENCE_FOUND_EVENT);
+ dhd_dev_gscan_hotlist_cache_cleanup(ndev, HOTLIST_FOUND);
+ } else
+ err = -ENOMEM;
+ break;
+ case WLC_E_PFN_BSSID_NET_LOST:
+ /* WLC_E_PFN_BSSID_NET_LOST shares its event code with WLC_E_PFN_SCAN_ALLGONE.
+ * We currently do not use WLC_E_PFN_SCAN_ALLGONE, so ignore it if received.
+ */
+ if (len) {
+ ptr = dhd_dev_hotlist_scan_event(ndev, data, &send_evt_bytes,
+ HOTLIST_LOST);
+ if (ptr) {
+ wl_cfgvendor_send_hotlist_event(wiphy, ndev,
+ ptr, send_evt_bytes, GOOGLE_GSCAN_GEOFENCE_LOST_EVENT);
+ dhd_dev_gscan_hotlist_cache_cleanup(ndev, HOTLIST_LOST);
+ } else
+ err = -ENOMEM;
+ } else
+ err = -EINVAL;
+ break;
+ case WLC_E_PFN_GSCAN_FULL_RESULT:
+ ptr = dhd_dev_process_full_gscan_result(ndev, data, &send_evt_bytes);
+ if (ptr) {
+ wl_cfgvendor_send_async_event(wiphy, ndev,
+ GOOGLE_SCAN_FULL_RESULTS_EVENT, ptr, send_evt_bytes);
+ kfree(ptr);
+ } else
+ err = -ENOMEM;
+ break;
+ case WLC_E_PFN_SSID_EXT:
+ ptr = dhd_dev_process_epno_result(ndev, data, event, &send_evt_bytes);
+ if (ptr) {
+ wl_cfgvendor_send_async_event(wiphy, ndev,
+ GOOGLE_SCAN_EPNO_EVENT, ptr, send_evt_bytes);
+ kfree(ptr);
+ } else
+ err = -ENOMEM;
+ break;
+ case WLC_E_PFN_NET_FOUND:
+ ptr = dhd_dev_process_anqpo_result(ndev, data, event, &len);
+ if (ptr) {
+ wl_cfgvendor_send_async_event(wiphy, ndev,
+ GOOGLE_PNO_HOTSPOT_FOUND_EVENT, ptr, len);
+ kfree(ptr);
+ } else
+ err = -ENOMEM;
+ break;
+ default:
+ WL_ERR(("Unknown event %d\n", event));
+ break;
+ }
+ return err;
+}
+#endif /* GSCAN_SUPPORT */
+
static s32
wl_notify_scan_status(struct bcm_cfg80211 *cfg, bcm_struct_cfgdev *cfgdev,
const wl_event_msg_t *e, void *data)
@@ -8155,14 +9472,6 @@
}
(void) sd_act_frm;
} else {
- /*
- * if we got normal action frame and ndev is p2p0,
- * we have to change ndev from p2p0 to wlan0
- */
-
- /* use primary device instead of p2p's */
- if (discover_cfgdev(cfgdev, cfg))
- cfgdev = bcmcfg_to_prmry_cfgdev(cfgdev, cfg);
if (cfg->next_af_subtype != P2P_PAF_SUBTYPE_INVALID) {
u8 action = 0;
@@ -8211,6 +9520,35 @@
WL_DBG(("P2P: GO_NEG_PHASE status cleared \n"));
wl_clr_p2p_status(cfg, GO_NEG_PHASE);
}
+ } else if (event == WLC_E_PROBREQ_MSG) {
+
+ /* Handle probe request frames
+ * WPS-AP certification 4.2.13
+ */
+ struct parsed_ies prbreq_ies;
+ u32 prbreq_ie_len = 0;
+ bool pbc = 0;
+
+ WL_DBG((" Event WLC_E_PROBREQ_MSG received\n"));
+ mgmt_frame = (u8 *)(data);
+ mgmt_frame_len = ntoh32(e->datalen);
+
+ prbreq_ie_len = mgmt_frame_len - DOT11_MGMT_HDR_LEN;
+
+ /* Parse probe request IEs */
+ if (wl_cfg80211_parse_ies(&mgmt_frame[DOT11_MGMT_HDR_LEN],
+ prbreq_ie_len, &prbreq_ies) < 0) {
+ WL_ERR(("Probe req get IEs failed\n"));
+ return 0;
+ }
+ if (prbreq_ies.wps_ie != NULL) {
+ wl_validate_wps_ie((char *)prbreq_ies.wps_ie, prbreq_ies.wps_ie_len, &pbc);
+ WL_DBG((" wps_ie exist pbc = %d\n", pbc));
+ /* if pbc method, send prob_req mgmt frame to upper layer */
+ if (!pbc)
+ return 0;
+ } else
+ return 0;
} else {
mgmt_frame = (u8 *)((wl_event_rx_frame_data_t *)rxframe + 1);
@@ -8236,11 +9574,17 @@
}
}
-#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 4, 0))
+
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 18, 0))
+ cfg80211_rx_mgmt(cfgdev, freq, 0, mgmt_frame, mgmt_frame_len, 0);
+#elif (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 14, 0))
+ cfg80211_rx_mgmt(cfgdev, freq, 0, mgmt_frame, mgmt_frame_len, 0, GFP_ATOMIC);
+#elif (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 4, 0)) || \
+ defined(WL_COMPAT_WIRELESS)
cfg80211_rx_mgmt(cfgdev, freq, 0, mgmt_frame, mgmt_frame_len, GFP_ATOMIC);
#else
cfg80211_rx_mgmt(cfgdev, freq, mgmt_frame, mgmt_frame_len, GFP_ATOMIC);
-#endif
+#endif /* LINUX_VERSION >= VERSION(3, 18, 0) */
WL_DBG(("mgmt_frame_len (%d) , e->datalen (%d), channel (%d), freq (%d)\n",
mgmt_frame_len, ntoh32(e->datalen), channel, freq));
@@ -8308,8 +9652,8 @@
err = -EINVAL;
goto out_err;
}
- WL_PNO((">>> SSID:%s Channel:%d \n",
- netinfo->pfnsubnet.SSID, netinfo->pfnsubnet.channel));
+ printk(">>> SSID:%s Channel:%d \n",
+ netinfo->pfnsubnet.SSID, netinfo->pfnsubnet.channel);
/* PFN result doesn't have all the info which are required by the supplicant
* (For e.g IEs) Do a target Escan so that sched scan results are reported
* via wl_inform_single_bss in the required format. Escan does require the
@@ -8351,6 +9695,9 @@
}
wl_set_drv_status(cfg, SCANNING, ndev);
+#ifdef CUSTOM_SET_SHORT_DWELL_TIME
+ net_set_short_dwell_time(ndev, FALSE);
+#endif
#if FULL_ESCAN_ON_PFN_NET_FOUND
WL_PNO((">>> Doing Full ESCAN on PNO event\n"));
err = wl_do_escan(cfg, wiphy, ndev, NULL);
@@ -8423,17 +9770,33 @@
#ifdef PNO_SUPPORT
cfg->evt_handler[WLC_E_PFN_NET_FOUND] = wl_notify_pfn_status;
#endif /* PNO_SUPPORT */
+#ifdef GSCAN_SUPPORT
+ cfg->evt_handler[WLC_E_PFN_BEST_BATCHING] = wl_notify_gscan_event;
+ cfg->evt_handler[WLC_E_PFN_SCAN_COMPLETE] = wl_notify_gscan_event;
+ cfg->evt_handler[WLC_E_PFN_GSCAN_FULL_RESULT] = wl_notify_gscan_event;
+ cfg->evt_handler[WLC_E_PFN_SWC] = wl_notify_gscan_event;
+ cfg->evt_handler[WLC_E_PFN_BSSID_NET_FOUND] = wl_notify_gscan_event;
+ cfg->evt_handler[WLC_E_PFN_BSSID_NET_LOST] = wl_notify_gscan_event;
+ cfg->evt_handler[WLC_E_PFN_SSID_EXT] = wl_notify_gscan_event;
+ cfg->evt_handler[WLC_E_GAS_FRAGMENT_RX] = wl_notify_gscan_event;
+ cfg->evt_handler[WLC_E_ROAM_EXP_EVENT] = wl_handle_roam_exp_event;
+#endif /* GSCAN_SUPPORT */
+ cfg->evt_handler[WLC_E_RSSI_LQM] = wl_handle_rssi_monitor_event;
#ifdef WLTDLS
cfg->evt_handler[WLC_E_TDLS_PEER_EVENT] = wl_tdls_event_handler;
#endif /* WLTDLS */
cfg->evt_handler[WLC_E_BSSID] = wl_notify_roaming_status;
+#ifdef BT_WIFI_HANDOVER
+ cfg->evt_handler[WLC_E_BT_WIFI_HANDOVER_REQ] = wl_notify_bt_wifi_handover_req;
+#endif
}
#if defined(STATIC_WL_PRIV_STRUCT)
static void
wl_init_escan_result_buf(struct bcm_cfg80211 *cfg)
{
- cfg->escan_info.escan_buf = DHD_OS_PREALLOC(cfg->pub, DHD_PREALLOC_WIPHY_ESCAN0, 0);
+ cfg->escan_info.escan_buf = DHD_OS_PREALLOC(cfg->pub,
+ DHD_PREALLOC_WIPHY_ESCAN0, ESCAN_BUF_SIZE);
bzero(cfg->escan_info.escan_buf, ESCAN_BUF_SIZE);
}
@@ -8598,6 +9961,16 @@
wl_cfg80211_event(bcmcfg_to_prmry_ndev(cfg), &msg, NULL);
}
+static void wl_send_event(struct net_device *dev, uint32 event_type,
+ uint32 status, uint32 reason)
+{
+ wl_event_msg_t msg;
+ bzero(&msg, sizeof(wl_event_msg_t));
+ msg.event_type = hton32(event_type);
+ msg.status = hton32(status);
+ msg.reason = hton32(reason);
+ wl_cfg80211_event(dev, &msg, NULL);
+}
static s32
wl_cfg80211_netdev_notifier_call(struct notifier_block * nb,
unsigned long state,
@@ -8606,7 +9979,6 @@
struct net_device *dev = ndev;
struct wireless_dev *wdev = dev->ieee80211_ptr;
struct bcm_cfg80211 *cfg = g_bcm_cfg;
- int refcnt = 0;
WL_DBG(("Enter \n"));
@@ -8616,8 +9988,10 @@
switch (state) {
case NETDEV_DOWN:
{
+#if (LINUX_VERSION_CODE < KERNEL_VERSION(3, 11, 0))
int max_wait_timeout = 2;
int max_wait_count = 100;
+ int refcnt = 0;
unsigned long limit = jiffies + max_wait_timeout * HZ;
while (work_pending(&wdev->cleanup_work)) {
if (refcnt%5 == 0) {
@@ -8640,10 +10014,11 @@
break;
}
set_current_state(TASK_INTERRUPTIBLE);
- schedule_timeout(100);
+ (void)schedule_timeout(100);
set_current_state(TASK_RUNNING);
refcnt++;
}
+#endif /* LINUX_VERSION_CODE < KERNEL_VERSION(3, 11, 0) */
break;
}
@@ -8799,7 +10174,7 @@
escan_result = (wl_escan_result_t *)data;
if (status == WLC_E_STATUS_PARTIAL) {
- WL_INFO(("WLC_E_STATUS_PARTIAL \n"));
+ WL_INFORM(("WLC_E_STATUS_PARTIAL \n"));
if (!escan_result) {
WL_ERR(("Invalid escan result (NULL pointer)\n"));
goto exit;
@@ -8964,13 +10339,13 @@
escan_result->sync_id);
if (wl_get_drv_status_all(cfg, FINDING_COMMON_CHANNEL)) {
- WL_INFO(("ACTION FRAME SCAN DONE\n"));
+ WL_INFORM(("ACTION FRAME SCAN DONE\n"));
wl_clr_p2p_status(cfg, SCANNING);
wl_clr_drv_status(cfg, SCANNING, cfg->afx_hdl->dev);
if (cfg->afx_hdl->peer_chan == WL_INVALID)
complete(&cfg->act_frm_scan);
} else if ((likely(cfg->scan_request)) || (cfg->sched_scan_running)) {
- WL_INFO(("ESCAN COMPLETED\n"));
+ WL_INFORM(("ESCAN COMPLETED\n"));
cfg->bss_list = wl_escan_get_buf(cfg, FALSE);
if (!scan_req_match(cfg)) {
WL_TRACE_HW4(("SCAN COMPLETED: scanned AP count=%d\n",
@@ -8981,18 +10356,27 @@
}
wl_escan_increment_sync_id(cfg, SCAN_BUF_NEXT);
}
+#ifdef GSCAN_SUPPORT
+ else if ((status == WLC_E_STATUS_ABORT) || (status == WLC_E_STATUS_NEWSCAN)) {
+ if (status == WLC_E_STATUS_NEWSCAN) {
+ WL_ERR(("WLC_E_STATUS_NEWSCAN : scan_request[%p]\n", cfg->scan_request));
+ WL_ERR(("sync_id[%d], bss_count[%d]\n", escan_result->sync_id,
+ escan_result->bss_count));
+ }
+#else
else if (status == WLC_E_STATUS_ABORT) {
+#endif /* GSCAN_SUPPORT */
cfg->escan_info.escan_state = WL_ESCAN_STATE_IDLE;
wl_escan_print_sync_id(status, escan_result->sync_id,
cfg->escan_info.cur_sync_id);
if (wl_get_drv_status_all(cfg, FINDING_COMMON_CHANNEL)) {
- WL_INFO(("ACTION FRAME SCAN DONE\n"));
+ WL_INFORM(("ACTION FRAME SCAN DONE\n"));
wl_clr_drv_status(cfg, SCANNING, cfg->afx_hdl->dev);
wl_clr_p2p_status(cfg, SCANNING);
if (cfg->afx_hdl->peer_chan == WL_INVALID)
complete(&cfg->act_frm_scan);
} else if ((likely(cfg->scan_request)) || (cfg->sched_scan_running)) {
- WL_INFO(("ESCAN ABORTED\n"));
+ WL_INFORM(("ESCAN ABORTED\n"));
cfg->bss_list = wl_escan_get_buf(cfg, TRUE);
if (!scan_req_match(cfg)) {
WL_TRACE_HW4(("SCAN ABORTED: scanned AP count=%d\n",
@@ -9018,7 +10402,7 @@
wl_escan_print_sync_id(status, escan_result->sync_id,
cfg->escan_info.cur_sync_id);
if (wl_get_drv_status_all(cfg, FINDING_COMMON_CHANNEL)) {
- WL_INFO(("ACTION FRAME SCAN DONE\n"));
+ WL_INFORM(("ACTION FRAME SCAN DONE\n"));
wl_clr_p2p_status(cfg, SCANNING);
wl_clr_drv_status(cfg, SCANNING, cfg->afx_hdl->dev);
if (cfg->afx_hdl->peer_chan == WL_INVALID)
@@ -9122,111 +10506,140 @@
s32 err = BCME_OK;
u32 mode;
u32 chan = 0;
+ u32 frameburst;
struct net_info *iter, *next;
struct net_device *primary_dev = bcmcfg_to_prmry_ndev(cfg);
WL_DBG(("Enter state %d set %d _net_info->pm_restore %d iface %s\n",
state, set, _net_info->pm_restore, _net_info->ndev->name));
- if (state != WL_STATUS_CONNECTED)
- return 0;
mode = wl_get_mode_by_netdev(cfg, _net_info->ndev);
if (set) {
- wl_cfg80211_concurrent_roam(cfg, 1);
+ if (state == WL_STATUS_CONNECTED) {
+ wl_cfg80211_concurrent_roam(cfg, 1);
- if (mode == WL_MODE_AP) {
-
- if (wl_add_remove_eventmsg(primary_dev, WLC_E_P2P_PROBREQ_MSG, false))
- WL_ERR((" failed to unset WLC_E_P2P_PROPREQ_MSG\n"));
- }
- wl_cfg80211_determine_vsdb_mode(cfg);
- if (cfg->vsdb_mode || _net_info->pm_block) {
- /* Delete pm_enable_work */
- wl_add_remove_pm_enable_work(cfg, FALSE, WL_HANDLER_MAINTAIN);
- /* save PM_FAST in _net_info to restore this
- * if _net_info->pm_block is false
- */
- if (!_net_info->pm_block && (mode == WL_MODE_BSS)) {
- _net_info->pm = PM_FAST;
- _net_info->pm_restore = true;
+ if (mode == WL_MODE_AP) {
+ if (wl_add_remove_eventmsg(primary_dev, WLC_E_P2P_PROBREQ_MSG, false))
+ WL_ERR((" failed to unset WLC_E_P2P_PROPREQ_MSG\n"));
}
- pm = PM_OFF;
- for_each_ndev(cfg, iter, next) {
- if (iter->pm_restore)
- continue;
- /* Save the current power mode */
- err = wldev_ioctl(iter->ndev, WLC_GET_PM, &iter->pm,
- sizeof(iter->pm), false);
- WL_DBG(("%s:power save %s\n", iter->ndev->name,
- iter->pm ? "enabled" : "disabled"));
- if (!err && iter->pm) {
- iter->pm_restore = true;
+ wl_cfg80211_determine_vsdb_mode(cfg);
+ if (cfg->vsdb_mode || _net_info->pm_block) {
+ /* Delete pm_enable_work */
+ wl_add_remove_pm_enable_work(cfg, FALSE, WL_HANDLER_MAINTAIN);
+ /* save PM_FAST in _net_info to restore this
+ * if _net_info->pm_block is false
+ */
+ if (!_net_info->pm_block && (mode == WL_MODE_BSS)) {
+ _net_info->pm = PM_FAST;
+ _net_info->pm_restore = true;
+ }
+ pm = PM_OFF;
+ for_each_ndev(cfg, iter, next) {
+ if (iter->pm_restore)
+ continue;
+ /* Save the current power mode */
+ err = wldev_ioctl(iter->ndev, WLC_GET_PM, &iter->pm,
+ sizeof(iter->pm), false);
+ WL_DBG(("%s:power save %s\n", iter->ndev->name,
+ iter->pm ? "enabled" : "disabled"));
+ if (!err && iter->pm) {
+ iter->pm_restore = true;
+ }
+ }
+ for_each_ndev(cfg, iter, next) {
+ if ((err = wldev_ioctl(iter->ndev, WLC_SET_PM, &pm,
+ sizeof(pm), true)) != 0) {
+ if (err == -ENODEV)
+ WL_DBG(("%s:netdev not ready\n", iter->ndev->name));
+ else
+ WL_ERR(("%s:error (%d)\n", iter->ndev->name, err));
+ } else {
+ wl_cfg80211_update_power_mode(iter->ndev);
+ }
+ }
+ } else {
+ /*
+ * Re-enable PM2 mode for static IP and roaming event
+ */
+ pm = PM_FAST;
+
+ for_each_ndev(cfg, iter, next) {
+ if ((err = wldev_ioctl(iter->ndev, WLC_SET_PM, &pm,
+ sizeof(pm), true)) != 0) {
+ if (err == -ENODEV)
+ WL_DBG(("%s:netdev not ready\n", iter->ndev->name));
+ else
+ WL_ERR(("%s:error (%d)\n", iter->ndev->name, err));
+ }
}
+ if (cfg->pm_enable_work_on) {
+ wl_add_remove_pm_enable_work(cfg, FALSE, WL_HANDLER_DEL);
+ }
}
+#if defined(WLTDLS)
+#if defined(DISABLE_TDLS_IN_P2P)
+ if (cfg->vsdb_mode || p2p_is_on(cfg))
+#else
+ if (cfg->vsdb_mode)
+#endif /* defined(DISABLE_TDLS_IN_P2P) */
+ {
+
+ err = wldev_iovar_setint(primary_dev, "tdls_enable", 0);
+ }
+#endif /* defined(WLTDLS) */
+ if (cfg->vsdb_mode) {
+ /* disable frameburst on multichannel */
+ frameburst = 0;
+ if (wldev_ioctl(primary_dev, WLC_SET_FAKEFRAG, &frameburst,
+ sizeof(frameburst), true) != 0) {
+ WL_DBG(("frameburst set 0 error\n"));
+ } else {
+ WL_DBG(("Frameburst Disabled\n"));
+ }
+ }
+ }
+ } else { /* clear */
+ if (state == WL_STATUS_CONNECTED) {
+ chan = 0;
+ /* clear chan information when the net device is disconnected */
+ wl_update_prof(cfg, _net_info->ndev, NULL, &chan, WL_PROF_CHAN);
+ wl_cfg80211_determine_vsdb_mode(cfg);
for_each_ndev(cfg, iter, next) {
- if ((err = wldev_ioctl(iter->ndev, WLC_SET_PM, &pm,
- sizeof(pm), true)) != 0) {
- if (err == -ENODEV)
- WL_DBG(("%s:netdev not ready\n", iter->ndev->name));
- else
- WL_ERR(("%s:error (%d)\n", iter->ndev->name, err));
+ if (iter->pm_restore && iter->pm) {
+ WL_DBG(("%s:restoring power save %s\n",
+ iter->ndev->name, (iter->pm ? "enabled" : "disabled")));
+ err = wldev_ioctl(iter->ndev, WLC_SET_PM, &iter->pm,
+ sizeof(iter->pm), true);
+ if (unlikely(err)) {
+ if (err == -ENODEV)
+ WL_DBG(("%s:netdev not ready\n", iter->ndev->name));
+ else
+ WL_ERR(("%s:error(%d)\n", iter->ndev->name, err));
+ break;
+ }
+ iter->pm_restore = 0;
wl_cfg80211_update_power_mode(iter->ndev);
}
}
- } else {
- /* add PM Enable timer to go to power save mode
- * if supplicant control pm mode, it will be cleared or
- * updated by wl_cfg80211_set_power_mgmt() if not - for static IP & HW4 P2P,
- * PM will be configured when timer expired
- */
+ wl_cfg80211_concurrent_roam(cfg, 0);
- /*
- * before calling pm_enable_timer, we need to set PM -1 for all ndev
- */
- pm = PM_OFF;
-
- for_each_ndev(cfg, iter, next) {
- if ((err = wldev_ioctl(iter->ndev, WLC_SET_PM, &pm,
- sizeof(pm), true)) != 0) {
- if (err == -ENODEV)
- WL_DBG(("%s:netdev not ready\n", iter->ndev->name));
- else
- WL_ERR(("%s:error (%d)\n", iter->ndev->name, err));
+ if (!cfg->vsdb_mode) {
+#if defined(WLTDLS)
+ err = wldev_iovar_setint(primary_dev, "tdls_enable", 1);
+#endif /* defined(WLTDLS) */
+ /* enable frameburst on single channel */
+ frameburst = 1;
+ if (wldev_ioctl(primary_dev, WLC_SET_FAKEFRAG, &frameburst,
+ sizeof(frameburst), true) != 0) {
+ WL_DBG(("frameburst set 1 error\n"));
+ } else {
+ WL_DBG(("Frameburst Enabled\n"));
}
}
-
- if (cfg->pm_enable_work_on) {
- wl_add_remove_pm_enable_work(cfg, FALSE, WL_HANDLER_DEL);
- }
-
- cfg->pm_enable_work_on = true;
- wl_add_remove_pm_enable_work(cfg, TRUE, WL_HANDLER_NOTUSE);
+ } else if (state == WL_STATUS_DISCONNECTING) {
+ wake_up_interruptible(&cfg->event_sync_wq);
}
}
- else { /* clear */
- chan = 0;
- /* clear chan information when the net device is disconnected */
- wl_update_prof(cfg, _net_info->ndev, NULL, &chan, WL_PROF_CHAN);
- wl_cfg80211_determine_vsdb_mode(cfg);
- for_each_ndev(cfg, iter, next) {
- if (iter->pm_restore && iter->pm) {
- WL_DBG(("%s:restoring power save %s\n",
- iter->ndev->name, (iter->pm ? "enabled" : "disabled")));
- err = wldev_ioctl(iter->ndev,
- WLC_SET_PM, &iter->pm, sizeof(iter->pm), true);
- if (unlikely(err)) {
- if (err == -ENODEV)
- WL_DBG(("%s:netdev not ready\n", iter->ndev->name));
- else
- WL_ERR(("%s:error(%d)\n", iter->ndev->name, err));
- break;
- }
- iter->pm_restore = 0;
- wl_cfg80211_update_power_mode(iter->ndev);
- }
- }
- wl_cfg80211_concurrent_roam(cfg, 0);
- }
return err;
}
static s32 wl_init_scan(struct bcm_cfg80211 *cfg)
@@ -9259,14 +10672,16 @@
cfg->vsdb_mode = false;
#if defined(BCMSDIO)
cfg->wlfc_on = false;
-#endif
+#endif
cfg->roamoff_on_concurrent = true;
cfg->disable_roam_event = false;
/* register interested state */
set_bit(WL_STATUS_CONNECTED, &cfg->interrested_state);
+ set_bit(WL_STATUS_DISCONNECTING, &cfg->interrested_state);
spin_lock_init(&cfg->cfgdrv_lock);
mutex_init(&cfg->ioctl_buf_sync);
init_waitqueue_head(&cfg->netif_change_event);
+ init_waitqueue_head(&cfg->event_sync_wq);
init_completion(&cfg->send_af_done);
init_completion(&cfg->iface_disable);
wl_init_eq(cfg);
@@ -9344,7 +10759,7 @@
return 0;
}
-#endif
+#endif
s32 wl_cfg80211_attach_post(struct net_device *ndev)
{
@@ -9473,7 +10888,7 @@
cfg->btcoex_info = wl_cfg80211_btcoex_init(cfg->wdev->netdev);
if (!cfg->btcoex_info)
goto cfg80211_attach_out;
-#endif
+#endif
g_bcm_cfg = cfg;
@@ -9481,7 +10896,7 @@
err = wl_cfg80211_attach_p2p();
if (err)
goto cfg80211_attach_out;
-#endif
+#endif
return err;
@@ -9505,7 +10920,7 @@
#if defined(COEX_DHCP)
wl_cfg80211_btcoex_deinit();
cfg->btcoex_info = NULL;
-#endif
+#endif
wl_setup_rfkill(cfg, FALSE);
#ifdef DEBUGFS_CFG80211
@@ -9525,7 +10940,7 @@
#endif /* WL_CFG80211_P2P_DEV_IF */
#if defined(WL_ENABLE_P2P_IF)
wl_cfg80211_detach_p2p();
-#endif
+#endif
wl_cfg80211_ibss_vsie_free(cfg);
wl_deinit_priv(cfg);
@@ -9546,44 +10961,6 @@
}
}
-#if (defined(WL_CFG80211_P2P_DEV_IF) || defined(WL_ENABLE_P2P_IF))
-static int wl_is_p2p_event(struct wl_event_q *e)
-{
- switch (e->etype) {
- /* We have to seperate out the P2P events received
- * on primary interface so that it can be send up
- * via p2p0 interface.
- */
- case WLC_E_P2P_PROBREQ_MSG:
- case WLC_E_P2P_DISC_LISTEN_COMPLETE:
- case WLC_E_ACTION_FRAME_RX:
- case WLC_E_ACTION_FRAME_OFF_CHAN_COMPLETE:
- case WLC_E_ACTION_FRAME_COMPLETE:
-
- if (e->emsg.ifidx != 0) {
- WL_TRACE(("P2P event(%d) on virtual interface(ifidx:%d)\n",
- e->etype, e->emsg.ifidx));
- /* We are only bothered about the P2P events received
- * on primary interface. For rest of them return false
- * so that it is sent over the interface corresponding
- * to the ifidx.
- */
- return FALSE;
- } else {
- WL_TRACE(("P2P event(%d) on interface(ifidx:%d)\n",
- e->etype, e->emsg.ifidx));
- return TRUE;
- }
- break;
-
- default:
- WL_TRACE(("NON-P2P event(%d) on interface(ifidx:%d)\n",
- e->etype, e->emsg.ifidx));
- return FALSE;
- }
-}
-#endif /* BCMDONGLEHOST && (WL_CFG80211_P2P_DEV_IF || WL_ENABLE_P2P_IF) */
-
static s32 wl_event_handler(void *data)
{
struct bcm_cfg80211 *cfg = NULL;
@@ -9606,7 +10983,7 @@
* interface.
*/
#if defined(WL_CFG80211_P2P_DEV_IF)
- if ((wl_is_p2p_event(e) == TRUE) && (cfg->p2p_wdev)) {
+ if (WL_IS_P2P_DEV_EVENT(e) && (cfg->p2p_wdev)) {
cfgdev = bcmcfg_to_p2p_wdev(cfg);
} else {
struct net_device *ndev = NULL;
@@ -9616,7 +10993,7 @@
cfgdev = ndev_to_wdev(ndev);
}
#elif defined(WL_ENABLE_P2P_IF)
- if ((wl_is_p2p_event(e) == TRUE) && (cfg->p2p_net)) {
+ if (WL_IS_P2P_DEV_EVENT(e) && (cfg->p2p_net)) {
cfgdev = cfg->p2p_net;
} else {
cfgdev = dhd_idx2net((struct dhd_pub *)(cfg->pub),
@@ -9951,14 +11328,10 @@
ht40_allowed = false;
c = (chanspec_t)dtoh32(list->element[i]);
c = wl_chspec_driver_to_host(c);
- channel = CHSPEC_CHANNEL(c);
- if (CHSPEC_IS40(c)) {
- if (CHSPEC_SB_UPPER(c))
- channel += CH_10MHZ_APART;
- else
- channel -= CH_10MHZ_APART;
- } else if (CHSPEC_IS80(c)) {
- WL_DBG(("HT80 center channel : %d\n", channel));
+ channel = wf_chspec_ctlchan(c);
+
+ if (!CHSPEC_IS40(c) && !CHSPEC_IS20(c)) {
+ WL_DBG(("HT80/160/80p80 center channel : %d\n", channel));
continue;
}
if (CHSPEC_IS2G(c) && (channel >= CH_MIN_2G_CHANNEL) &&
@@ -10030,13 +11403,25 @@
channel = wl_chspec_host_to_driver(channel);
err = wldev_iovar_getint(dev, "per_chan_info", &channel);
if (!err) {
- if (channel & WL_CHAN_RADAR)
+ if (channel & WL_CHAN_RADAR) {
+#if (LINUX_VERSION_CODE < KERNEL_VERSION(3, 14, 0))
band_chan_arr[index].flags |=
- (IEEE80211_CHAN_RADAR |
- IEEE80211_CHAN_NO_IBSS);
+ (IEEE80211_CHAN_RADAR
+ | IEEE80211_CHAN_NO_IBSS);
+#else
+ band_chan_arr[index].flags |=
+ IEEE80211_CHAN_RADAR;
+#endif
+ }
+
if (channel & WL_CHAN_PASSIVE)
+#if (LINUX_VERSION_CODE < KERNEL_VERSION(3, 14, 0))
band_chan_arr[index].flags |=
IEEE80211_CHAN_PASSIVE_SCAN;
+#else
+ band_chan_arr[index].flags |=
+ IEEE80211_CHAN_NO_IR;
+#endif
} else if (err == BCME_UNSUPPORTED) {
dfs_radar_disabled = TRUE;
WL_ERR(("does not support per_chan_info\n"));
@@ -10203,12 +11588,12 @@
struct net_device *ndev = bcmcfg_to_prmry_ndev(cfg);
#if defined(WL_CFG80211) && defined(WL_ENABLE_P2P_IF)
struct net_device *p2p_net = cfg->p2p_net;
-#endif
+#endif
u32 bssidx = 0;
#ifdef PROP_TXSTATUS_VSDB
#if defined(BCMSDIO)
dhd_pub_t *dhd = (dhd_pub_t *)(cfg->pub);
-#endif
+#endif
#endif /* PROP_TXSTATUS_VSDB */
WL_DBG(("In\n"));
/* Delete pm_enable_work */
@@ -10227,7 +11612,7 @@
cfg->wlfc_on = false;
}
}
-#endif
+#endif
#endif /* PROP_TXSTATUS_VSDB */
}
@@ -10268,14 +11653,28 @@
#if defined(WL_CFG80211) && defined(WL_ENABLE_P2P_IF)
if (p2p_net)
dev_close(p2p_net);
-#endif
- DNGL_FUNC(dhd_cfg80211_down, (cfg));
+#endif
wl_flush_eq(cfg);
wl_link_down(cfg);
if (cfg->p2p_supported)
wl_cfgp2p_down(cfg);
+ if (cfg->ap_info) {
+ kfree(cfg->ap_info->wpa_ie);
+ kfree(cfg->ap_info->rsn_ie);
+ kfree(cfg->ap_info->wps_ie);
+ kfree(cfg->ap_info);
+ cfg->ap_info = NULL;
+ }
dhd_monitor_uninit();
+#if defined(DUAL_STA) || defined(DUAL_STA_STATIC_IF)
+ /* Clean up if not removed already */
+ if (cfg->bss_cfgdev)
+ wl_cfg80211_del_iface(cfg->wdev->wiphy, cfg->bss_cfgdev);
+#endif /* defined (DUAL_STA) || defined (DUAL_STA_STATIC_IF) */
+
+ DNGL_FUNC(dhd_cfg80211_down, (cfg));
+
return err;
}
@@ -10314,7 +11713,20 @@
err = __wl_cfg80211_up(cfg);
if (unlikely(err))
WL_ERR(("__wl_cfg80211_up failed\n"));
+#ifdef ROAM_CHANNEL_CACHE
+ init_roam(ioctl_version);
+#endif
mutex_unlock(&cfg->usr_sync);
+
+
+#ifdef DUAL_STA_STATIC_IF
+#ifdef DUAL_STA
+#error "Both DUAL_STA and DUAL_STA_STATIC_IF can't be enabled together"
+#endif
+ /* Static interface support is currently limited to STA-only builds (without P2P) */
+ wl_cfg80211_create_iface(cfg->wdev->wiphy, NL80211_IFTYPE_STATION, NULL, "wlan%d");
+#endif /* DUAL_STA_STATIC_IF */
+
return err;
}
@@ -10478,7 +11890,7 @@
return err;
}
-static void wl_update_hidden_ap_ie(struct wl_bss_info *bi, u8 *ie_stream, u32 *ie_size)
+static void wl_update_hidden_ap_ie(struct wl_bss_info *bi, u8 *ie_stream, u32 *ie_size, bool roam)
{
u8 *ssidie;
ssidie = (u8 *)cfg80211_find_ie(WLAN_EID_SSID, ie_stream, *ie_size);
@@ -10488,12 +11900,16 @@
if (ssidie[1]) {
WL_ERR(("%s: Wrong SSID len: %d != %d\n",
__FUNCTION__, ssidie[1], bi->SSID_len));
- return;
}
- memmove(ssidie + bi->SSID_len + 2, ssidie + 2, *ie_size - (ssidie + 2 - ie_stream));
- memcpy(ssidie + 2, bi->SSID, bi->SSID_len);
- *ie_size = *ie_size + bi->SSID_len;
- ssidie[1] = bi->SSID_len;
+ if (roam) {
+ WL_ERR(("Changing the SSID Info.\n"));
+ memmove(ssidie + bi->SSID_len + 2,
+ (ssidie + 2) + ssidie[1],
+ *ie_size - (ssidie + 2 + ssidie[1] - ie_stream));
+ memcpy(ssidie + 2, bi->SSID, bi->SSID_len);
+ *ie_size = *ie_size + bi->SSID_len - ssidie[1];
+ ssidie[1] = bi->SSID_len;
+ }
return;
}
if (*(ssidie + 2) == '\0')
@@ -10658,9 +12074,15 @@
msg = " TDLS PEER DISCOVERD ";
break;
case WLC_E_TDLS_PEER_CONNECTED :
+#ifdef PCIE_FULL_DONGLE
+ dhd_tdls_update_peer_info(ndev, TRUE, (uint8 *)&e->addr.octet[0]);
+#endif /* PCIE_FULL_DONGLE */
msg = " TDLS PEER CONNECTED ";
break;
case WLC_E_TDLS_PEER_DISCONNECTED :
+#ifdef PCIE_FULL_DONGLE
+ dhd_tdls_update_peer_info(ndev, FALSE, (uint8 *)&e->addr.octet[0]);
+#endif /* PCIE_FULL_DONGLE */
msg = "TDLS PEER DISCONNECTED ";
break;
}
@@ -10676,6 +12098,9 @@
#if (LINUX_VERSION_CODE > KERNEL_VERSION(3, 2, 0))
static s32
wl_cfg80211_tdls_oper(struct wiphy *wiphy, struct net_device *dev,
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 16, 0))
+ const
+#endif
u8 *peer, enum nl80211_tdls_operation oper)
{
s32 ret = 0;
@@ -10722,7 +12147,7 @@
#endif /* WLTDLS */
return ret;
}
-#endif
+#endif
s32 wl_cfg80211_set_wps_p2p_ie(struct net_device *net, char *buf, int len,
enum wl_management_type type)
@@ -10837,6 +12262,7 @@
wl_cfg80211_valid_chanspec_p2p(chanspec_t chanspec)
{
bool valid = false;
+ char chanbuf[CHANSPEC_STR_LEN];
/* channel 1 to 14 */
if ((chanspec >= 0x2b01) && (chanspec <= 0x2b0e)) {
@@ -10852,8 +12278,8 @@
}
else {
valid = false;
- WL_INFO(("invalid P2P chanspec, channel = %d, chanspec = %04x\n",
- CHSPEC_CHANNEL(chanspec), chanspec));
+ WL_INFORM(("invalid P2P chanspec, chanspec = %s\n",
+ wf_chspec_ntoa_ex(chanspec, chanbuf)));
}
return valid;
@@ -10969,10 +12395,10 @@
false);
if ((ret == 0) && (dtoh32(chosen) != 0)) {
*channel = (u16)(chosen & 0x00FF);
- WL_INFO(("selected channel = %d\n", *channel));
+ WL_INFORM(("selected channel = %d\n", *channel));
break;
}
- WL_INFO(("attempt = %d, ret = %d, chosen = %d\n",
+ WL_INFORM(("attempt = %d, ret = %d, chosen = %d\n",
(CHAN_SEL_RETRY_COUNT - retry), ret, dtoh32(chosen)));
}
@@ -11546,7 +12972,7 @@
if (frame_len < DOT11_ACTION_HDR_LEN)
return DOT11_ACTION_CAT_ERR_MASK;
category = ptr[DOT11_ACTION_CAT_OFF];
- WL_INFO(("Action Category: %d\n", category));
+ WL_INFORM(("Action Category: %d\n", category));
return category;
}
@@ -11561,10 +12987,11 @@
if (DOT11_ACTION_CAT_PUBLIC != wl_get_action_category(frame, frame_len))
return BCME_ERROR;
*ret_action = ptr[DOT11_ACTION_ACT_OFF];
- WL_INFO(("Public Action : %d\n", *ret_action));
+ WL_INFORM(("Public Action : %d\n", *ret_action));
return BCME_OK;
}
+
static int
wl_cfg80211_delayed_roam(struct bcm_cfg80211 *cfg, struct net_device *ndev,
const struct ether_addr *bssid)
@@ -11580,3 +13007,60 @@
return err;
}
+
+#ifdef WL_CFG80211_ACL
+static int
+wl_cfg80211_set_mac_acl(struct wiphy *wiphy, struct net_device *cfgdev,
+ const struct cfg80211_acl_data *acl)
+{
+ int i;
+ int ret = 0;
+ int macnum = 0;
+ int macmode = MACLIST_MODE_DISABLED;
+ struct maclist *list;
+
+ /* get the MAC filter mode */
+ if (acl && acl->acl_policy == NL80211_ACL_POLICY_DENY_UNLESS_LISTED) {
+ macmode = MACLIST_MODE_ALLOW;
+ } else if (acl && acl->acl_policy == NL80211_ACL_POLICY_ACCEPT_UNLESS_LISTED &&
+ acl->n_acl_entries) {
+ macmode = MACLIST_MODE_DENY;
+ }
+
+ /* if acl == NULL, macmode is still disabled.. */
+ if (macmode == MACLIST_MODE_DISABLED) {
+ if ((ret = wl_android_set_ap_mac_list(cfgdev, macmode, NULL)) != 0)
+ WL_ERR(("%s : Setting MAC list failed error=%d\n", __FUNCTION__, ret));
+
+ return ret;
+ }
+
+ macnum = acl->n_acl_entries;
+ if (macnum < 0 || macnum > MAX_NUM_MAC_FILT) {
+ WL_ERR(("%s : invalid number of MAC address entries %d\n",
+ __FUNCTION__, macnum));
+ return -1;
+ }
+
+ /* allocate memory for the MAC list */
+ list = (struct maclist*)kmalloc(sizeof(int) +
+ sizeof(struct ether_addr) * macnum, GFP_KERNEL);
+ if (!list) {
+ WL_ERR(("%s : failed to allocate memory\n", __FUNCTION__));
+ return -1;
+ }
+
+ /* prepare the MAC list */
+ list->count = htod32(macnum);
+ for (i = 0; i < macnum; i++) {
+ memcpy(&list->ea[i], &acl->mac_addrs[i], ETHER_ADDR_LEN);
+ }
+ /* set the list */
+ if ((ret = wl_android_set_ap_mac_list(cfgdev, macmode, list)) != 0)
+ WL_ERR(("%s : Setting MAC list failed error=%d\n", __FUNCTION__, ret));
+
+ kfree(list);
+
+ return ret;
+}
+#endif /* WL_CFG80211_ACL */
diff --git a/drivers/net/wireless/bcmdhd/wl_cfg80211.h b/drivers/net/wireless/bcmdhd/wl_cfg80211.h
old mode 100755
new mode 100644
index 67cdcad..1d58c82
--- a/drivers/net/wireless/bcmdhd/wl_cfg80211.h
+++ b/drivers/net/wireless/bcmdhd/wl_cfg80211.h
@@ -2,13 +2,13 @@
* Linux cfg80211 driver
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -16,12 +16,16 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: wl_cfg80211.h 457855 2014-02-25 01:27:41Z $
+ * $Id: wl_cfg80211.h 472818 2014-04-25 08:07:56Z $
+ */
+
+/**
+ * Older Linux versions support the 'iw' interface, more recent ones the 'cfg80211' interface.
*/
#ifndef _wl_cfg80211_h_
@@ -82,16 +86,19 @@
} while (0)
#endif /* defined(DHD_DEBUG) */
-#ifdef WL_INFO
-#undef WL_INFO
+#ifdef WL_INFORM
+#undef WL_INFORM
#endif
-#define WL_INFO(args) \
+
+#define WL_INFORM(args) \
do { \
if (wl_dbg_level & WL_DBG_INFO) { \
printk(KERN_INFO "CFG80211-INFO) %s : ", __func__); \
printk args; \
} \
} while (0)
+
+
#ifdef WL_SCAN
#undef WL_SCAN
#endif
@@ -403,8 +410,10 @@
/* Structure to hold WPS, WPA IEs for a AP */
u8 probe_res_ie[VNDR_IES_MAX_BUF_LEN];
u8 beacon_ie[VNDR_IES_MAX_BUF_LEN];
+ u8 assoc_res_ie[VNDR_IES_MAX_BUF_LEN];
u32 probe_res_ie_len;
u32 beacon_ie_len;
+ u32 assoc_res_ie_len;
u8 *wpa_ie;
u8 *rsn_ie;
u8 *wps_ie;
@@ -524,9 +533,9 @@
bool pwr_save;
bool roam_on; /* on/off switch for self-roaming */
bool scan_tried; /* indicates if first scan attempted */
-#if defined(BCMSDIO)
+#if defined(BCMSDIO) || defined(BCMPCIE)
bool wlfc_on;
-#endif
+#endif
bool vsdb_mode;
bool roamoff_on_concurrent;
u8 *ioctl_buf; /* ioctl buffer */
@@ -541,6 +550,7 @@
u64 send_action_id;
u64 last_roc_id;
wait_queue_head_t netif_change_event;
+ wait_queue_head_t event_sync_wq;
wl_if_event_info if_event_info;
struct completion send_af_done;
struct afx_hdl *afx_hdl;
@@ -572,6 +582,11 @@
struct delayed_work pm_enable_work;
vndr_ie_setbuf_t *ibss_vsie; /* keep the VSIE for IBSS */
int ibss_vsie_len;
+ struct ether_addr ibss_if_addr;
+ bcm_struct_cfgdev *ibss_cfgdev; /* For AIBSS */
+ bcm_struct_cfgdev *bss_cfgdev; /* For DUAL STA/STA+AP */
+ s32 cfgdev_bssidx;
+ bool bss_pending_op; /* indicates whether there is a pending IF operation */
bool roam_offload;
};
@@ -614,10 +629,6 @@
if (ndev && (_net_info->ndev == ndev)) {
list_del(&_net_info->list);
cfg->iface_cnt--;
- if (_net_info->wdev) {
- kfree(_net_info->wdev);
- ndev->ieee80211_ptr = NULL;
- }
kfree(_net_info);
}
}
@@ -792,9 +803,11 @@
#if defined(WL_CFG80211_P2P_DEV_IF)
#define ndev_to_cfgdev(ndev) ndev_to_wdev(ndev)
+#define cfgdev_to_ndev(cfgdev) (cfgdev->netdev)
#define discover_cfgdev(cfgdev, cfg) (cfgdev->iftype == NL80211_IFTYPE_P2P_DEVICE)
#else
#define ndev_to_cfgdev(ndev) (ndev)
+#define cfgdev_to_ndev(cfgdev) (cfgdev)
#define discover_cfgdev(cfgdev, cfg) (cfgdev == cfg->p2p_net)
#endif /* WL_CFG80211_P2P_DEV_IF */
@@ -920,7 +933,6 @@
#define wl_escan_print_sync_id(a, b, c)
#define wl_escan_increment_sync_id(a, b)
#define wl_escan_init_sync_id(a)
-
extern void wl_cfg80211_ibss_vsie_set_buffer(vndr_ie_setbuf_t *ibss_vsie, int ibss_vsie_len);
extern s32 wl_cfg80211_ibss_vsie_delete(struct net_device *dev);
@@ -928,12 +940,22 @@
extern u8 wl_get_action_category(void *frame, u32 frame_len);
extern int wl_get_public_action(void *frame, u32 frame_len, u8 *ret_action);
-extern int wl_cfg80211_enable_roam_offload(struct net_device *dev, bool enable);
-
#ifdef WL_CFG80211_VSDB_PRIORITIZE_SCAN_REQUEST
struct net_device *wl_cfg80211_get_remain_on_channel_ndev(struct bcm_cfg80211 *cfg);
#endif /* WL_CFG80211_VSDB_PRIORITIZE_SCAN_REQUEST */
-extern int wl_cfg80211_get_ioctl_version(void);
+#ifdef WL_SUPPORT_ACS
+#define ACS_MSRMNT_DELAY 1000 /* dump_obss delay in ms */
+#define IOCTL_RETRY_COUNT 5
+#define CHAN_NOISE_DUMMY -80
+#define OBSS_TOKEN_IDX 15
+#define IBSS_TOKEN_IDX 15
+#define TX_TOKEN_IDX 14
+#define CTG_TOKEN_IDX 13
+#define PKT_TOKEN_IDX 15
+#define IDLE_TOKEN_IDX 12
+#endif /* WL_SUPPORT_ACS */
+extern int wl_cfg80211_get_ioctl_version(void);
+extern int wl_cfg80211_enable_roam_offload(struct net_device *dev, bool enable);
#endif /* _wl_cfg80211_h_ */
diff --git a/drivers/net/wireless/bcmdhd/wl_cfg_btcoex.c b/drivers/net/wireless/bcmdhd/wl_cfg_btcoex.c
old mode 100755
new mode 100644
index a0b0bf4..0220da1
--- a/drivers/net/wireless/bcmdhd/wl_cfg_btcoex.c
+++ b/drivers/net/wireless/bcmdhd/wl_cfg_btcoex.c
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: wl_cfg_btcoex.c 427707 2013-10-04 10:28:29Z $
+ * $Id: wl_cfg_btcoex.c 467328 2014-04-03 01:23:40Z $
*/
#include <net/rtnetlink.h>
diff --git a/drivers/net/wireless/bcmdhd/wl_cfgp2p.c b/drivers/net/wireless/bcmdhd/wl_cfgp2p.c
old mode 100755
new mode 100644
index 992d606..793a173
--- a/drivers/net/wireless/bcmdhd/wl_cfgp2p.c
+++ b/drivers/net/wireless/bcmdhd/wl_cfgp2p.c
@@ -2,13 +2,13 @@
* Linux cfgp2p driver
*
* Copyright (C) 1999-2014, Broadcom Corporation
- *
+ *
* Unless you and Broadcom execute a separate written software license
* agreement governing use of this software, this software is licensed to you
* under the terms of the GNU General Public License version 2 (the "GPL"),
* available at http://www.broadcom.com/licenses/GPLv2.php, with the
* following added to such license:
- *
+ *
* As a special exception, the copyright holders of this software give you
* permission to link this software with independent modules, and to copy and
* distribute the resulting executable under terms of your choice, provided that
@@ -16,12 +16,12 @@
* the license of that module. An independent module is a module which is not
* derived from this software. The special exception does not apply to any
* modifications of the software.
- *
+ *
* Notwithstanding the above, under no circumstances may you combine this
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: wl_cfgp2p.c 454364 2014-02-10 09:20:25Z $
+ * $Id: wl_cfgp2p.c 472818 2014-04-25 08:07:56Z $
*
*/
#include <typedefs.h>
@@ -41,6 +41,7 @@
#include <bcmendian.h>
#include <proto/ethernet.h>
#include <proto/802.11.h>
+#include <net/rtnetlink.h>
#include <wl_cfg80211.h>
#include <wl_cfgp2p.h>
@@ -72,7 +73,6 @@
};
#endif /* WL_ENABLE_P2P_IF */
-
bool wl_cfgp2p_is_pub_action(void *frame, u32 frame_len)
{
wifi_p2p_pub_act_frame_t *pact_frm;
@@ -158,22 +158,6 @@
if (sd_act_frm->category != P2PSD_ACTION_CATEGORY)
return false;
-#ifdef WL11U
- if (sd_act_frm->action == P2PSD_ACTION_ID_GAS_IRESP)
- return wl_cfgp2p_find_gas_subtype(P2PSD_GAS_OUI_SUBTYPE,
- (u8 *)sd_act_frm->query_data + GAS_RESP_OFFSET,
- frame_len);
-
- else if (sd_act_frm->action == P2PSD_ACTION_ID_GAS_CRESP)
- return wl_cfgp2p_find_gas_subtype(P2PSD_GAS_OUI_SUBTYPE,
- (u8 *)sd_act_frm->query_data + GAS_CRESP_OFFSET,
- frame_len);
- else if (sd_act_frm->action == P2PSD_ACTION_ID_GAS_IREQ ||
- sd_act_frm->action == P2PSD_ACTION_ID_GAS_CREQ)
- return true;
- else
- return false;
-#else
if (sd_act_frm->action == P2PSD_ACTION_ID_GAS_IREQ ||
sd_act_frm->action == P2PSD_ACTION_ID_GAS_IRESP ||
sd_act_frm->action == P2PSD_ACTION_ID_GAS_CREQ ||
@@ -181,7 +165,6 @@
return true;
else
return false;
-#endif /* WL11U */
}
void wl_cfgp2p_print_actframe(bool tx, void *frame, u32 frame_len, u32 channel)
{
@@ -614,7 +597,7 @@
CFGP2P_DBG(("enter\n"));
- if (wl_to_p2p_bss_bssidx(cfg, P2PAPI_BSSCFG_DEVICE) != 0) {
+ if (wl_to_p2p_bss_bssidx(cfg, P2PAPI_BSSCFG_DEVICE) > 0) {
CFGP2P_ERR(("do nothing, already initialized\n"));
return ret;
}
@@ -936,7 +919,7 @@
p2p_scan_purpose_t p2p_scan_purpose = P2P_SCAN_AFX_PEER_NORMAL;
if (!p2p_is_on(cfg) || ndev == NULL || bssidx == WL_INVALID)
return -BCME_ERROR;
- CFGP2P_ERR((" Enter\n"));
+ WL_TRACE_HW4((" Enter\n"));
if (bssidx == wl_to_p2p_bss_bssidx(cfg, P2PAPI_BSSCFG_PRIMARY))
bssidx = wl_to_p2p_bss_bssidx(cfg, P2PAPI_BSSCFG_DEVICE);
if (channel)
@@ -998,7 +981,7 @@
remained_len = (s32)len;
memset(vndr_ies, 0, sizeof(*vndr_ies));
- WL_INFO(("---> len %d\n", len));
+ WL_INFORM(("---> len %d\n", len));
ie = (bcm_tlv_t *) parse;
if (!bcm_valid_tlv(ie, remained_len))
ie = NULL;
@@ -1076,7 +1059,12 @@
memset(g_mgmt_ie_buf, 0, sizeof(g_mgmt_ie_buf));
curr_ie_buf = g_mgmt_ie_buf;
CFGP2P_DBG((" bssidx %d, pktflag : 0x%02X\n", bssidx, pktflag));
+
+#ifdef DUAL_STA
+ if ((cfg->p2p != NULL) && (bssidx != cfg->cfgdev_bssidx)) {
+#else
if (cfg->p2p != NULL) {
+#endif
if (wl_cfgp2p_find_type(cfg, bssidx, &type)) {
CFGP2P_ERR(("cannot find type from bssidx : %d\n", bssidx));
return BCME_ERROR;
@@ -1126,6 +1114,12 @@
mgmt_ie_len = &cfg->ap_info->beacon_ie_len;
mgmt_ie_buf_len = sizeof(cfg->ap_info->beacon_ie);
break;
+ case VNDR_IE_ASSOCRSP_FLAG :
+ /* WPS-AP WSC2.0 assoc res includes wps_ie */
+ mgmt_ie_buf = cfg->ap_info->assoc_res_ie;
+ mgmt_ie_len = &cfg->ap_info->assoc_res_ie_len;
+ mgmt_ie_buf_len = sizeof(cfg->ap_info->assoc_res_ie);
+ break;
default:
mgmt_ie_buf = NULL;
mgmt_ie_len = NULL;
@@ -1474,7 +1468,17 @@
return BCME_OK;
}
}
+
+#ifdef DUAL_STA
+ if (cfg->bss_cfgdev && (cfg->bss_cfgdev == ndev_to_cfgdev(ndev))) {
+ CFGP2P_INFO(("cfgdev is present, return the bssidx"));
+ *bssidx = cfg->cfgdev_bssidx;
+ return BCME_OK;
+ }
+#endif
+
return BCME_BADARG;
+
}
struct net_device *
wl_cfgp2p_find_ndev(struct bcm_cfg80211 *cfg, s32 bssidx)
@@ -1521,6 +1525,14 @@
}
}
+#ifdef DUAL_STA
+ if (bssidx == cfg->cfgdev_bssidx) {
+ CFGP2P_DBG(("bssidx matches the virtual I/F\n"));
+ *type = 1;
+ return BCME_OK;
+ }
+#endif
+
exit:
return BCME_BADARG;
}
@@ -1579,11 +1591,13 @@
#endif /* WL_CFG80211_VSDB_PRIORITIZE_SCAN_REQUEST */
if (ndev && (ndev->ieee80211_ptr != NULL)) {
#if defined(WL_CFG80211_P2P_DEV_IF)
- cfg80211_remain_on_channel_expired(cfgdev, cfg->last_roc_id,
- &cfg->remain_on_chan, GFP_KERNEL);
+ if (bcmcfg_to_p2p_wdev(cfg))
+ cfg80211_remain_on_channel_expired(bcmcfg_to_p2p_wdev(cfg), cfg->last_roc_id,
+ &cfg->remain_on_chan, GFP_KERNEL);
#else
- cfg80211_remain_on_channel_expired(cfgdev, cfg->last_roc_id,
- &cfg->remain_on_chan, cfg->remain_on_chan_type, GFP_KERNEL);
+ if (cfgdev)
+ cfg80211_remain_on_channel_expired(cfgdev, cfg->last_roc_id,
+ &cfg->remain_on_chan, cfg->remain_on_chan_type, GFP_KERNEL);
#endif /* WL_CFG80211_P2P_DEV_IF */
}
}
@@ -1632,16 +1646,17 @@
*/
if (timer_pending(&cfg->p2p->listen_timer)) {
del_timer_sync(&cfg->p2p->listen_timer);
- if (notify)
- if (ndev && ndev->ieee80211_ptr) {
+ if (notify) {
#if defined(WL_CFG80211_P2P_DEV_IF)
- cfg80211_remain_on_channel_expired(wdev, cfg->last_roc_id,
+ if (bcmcfg_to_p2p_wdev(cfg))
+ cfg80211_remain_on_channel_expired(bcmcfg_to_p2p_wdev(cfg), cfg->last_roc_id,
&cfg->remain_on_chan, GFP_KERNEL);
#else
+ if (ndev && ndev->ieee80211_ptr)
cfg80211_remain_on_channel_expired(ndev, cfg->last_roc_id,
&cfg->remain_on_chan, cfg->remain_on_chan_type, GFP_KERNEL);
#endif /* WL_CFG80211_P2P_DEV_IF */
- }
+ }
}
return 0;
}
@@ -1828,6 +1843,9 @@
if (timeout >= 0 && wl_get_p2p_status(cfg, ACTION_TX_COMPLETED)) {
CFGP2P_INFO(("tx action frame operation is completed\n"));
ret = BCME_OK;
+ } else if (ETHER_ISBCAST(&cfg->afx_hdl->tx_dst_addr)) {
+ CFGP2P_INFO(("bcast tx action frame operation is completed\n"));
+ ret = BCME_OK;
} else {
ret = BCME_ERROR;
CFGP2P_INFO(("tx action frame operation is failed\n"));
@@ -1857,12 +1875,6 @@
wl_cfgp2p_generate_bss_mac(struct ether_addr *primary_addr,
struct ether_addr *out_dev_addr, struct ether_addr *out_int_addr)
{
-
- if ((out_dev_addr == NULL) || (out_int_addr == NULL)) {
- WL_ERR(("Invalid input data\n"));
- return;
- }
-
memset(out_dev_addr, 0, sizeof(*out_dev_addr));
memset(out_int_addr, 0, sizeof(*out_int_addr));
@@ -2035,9 +2047,6 @@
if (index != WL_INVALID)
wl_cfgp2p_clear_management_ie(cfg, index);
}
-#if defined(WL_CFG80211_P2P_DEV_IF)
- wl_cfgp2p_del_p2p_disc_if(wdev, cfg);
-#endif /* WL_CFG80211_P2P_DEV_IF */
wl_cfgp2p_deinit_priv(cfg);
return 0;
}
@@ -2069,13 +2078,7 @@
if (duration != -1)
cfg->p2p->noa.desc[0].duration = duration;
- if (cfg->p2p->noa.desc[0].count < 255 && cfg->p2p->noa.desc[0].count > 1) {
- cfg->p2p->noa.desc[0].start = 0;
- dongle_noa.type = WL_P2P_SCHED_TYPE_ABS;
- dongle_noa.action = WL_P2P_SCHED_ACTION_NONE;
- dongle_noa.option = WL_P2P_SCHED_OPTION_TSFOFS;
- }
- else if (cfg->p2p->noa.desc[0].count == 1) {
+ if (cfg->p2p->noa.desc[0].count != 255 && cfg->p2p->noa.desc[0].count != 0) {
cfg->p2p->noa.desc[0].start = 200;
dongle_noa.type = WL_P2P_SCHED_TYPE_REQ_ABS;
dongle_noa.action = WL_P2P_SCHED_ACTION_GOOFF;
@@ -2083,11 +2086,13 @@
}
else if (cfg->p2p->noa.desc[0].count == 0) {
cfg->p2p->noa.desc[0].start = 0;
+ dongle_noa.type = WL_P2P_SCHED_TYPE_ABS;
+ dongle_noa.option = WL_P2P_SCHED_OPTION_NORMAL;
dongle_noa.action = WL_P2P_SCHED_ACTION_RESET;
}
else {
/* Continuous NoA interval. */
- dongle_noa.action = WL_P2P_SCHED_ACTION_NONE;
+ dongle_noa.action = WL_P2P_SCHED_ACTION_DOZE;
dongle_noa.type = WL_P2P_SCHED_TYPE_ABS;
if ((cfg->p2p->noa.desc[0].interval == 102) ||
(cfg->p2p->noa.desc[0].interval == 100)) {
@@ -2438,7 +2443,6 @@
return 0;
}
-
static int wl_cfgp2p_start_xmit(struct sk_buff *skb, struct net_device *ndev)
{
@@ -2473,7 +2477,7 @@
return ret;
}
-#endif
+#endif
#if defined(WL_ENABLE_P2P_IF)
static int wl_cfgp2p_if_open(struct net_device *net)
@@ -2534,8 +2538,8 @@
WL_TRACE(("Enter\n"));
if (cfg->p2p_wdev) {
- CFGP2P_ERR(("p2p_wdev defined already.\n"));
- return ERR_PTR(-ENFILE);
+ wl_cfgp2p_del_p2p_disc_if(cfg->p2p_wdev, cfg);
+ CFGP2P_ERR(("p2p_wdev deleted.\n"));
}
wdev = kzalloc(sizeof(*wdev), GFP_KERNEL);
@@ -2547,7 +2551,7 @@
memset(&primary_mac, 0, sizeof(primary_mac));
get_primary_mac(cfg, &primary_mac);
wl_cfgp2p_generate_bss_mac(&primary_mac,
- &cfg->p2p->dev_addr, &cfg->p2p->int_addr);
+ &cfg->p2p->dev_addr, &cfg->p2p->int_addr);
wdev->wiphy = cfg->wdev->wiphy;
wdev->iftype = NL80211_IFTYPE_P2P_DEVICE;
@@ -2609,6 +2613,9 @@
CFGP2P_ERR(("P2P scan stop failed, ret=%d\n", ret));
}
+ if (!cfg->p2p)
+ return;
+
ret = wl_cfgp2p_disable_discovery(cfg);
if (unlikely(ret < 0)) {
CFGP2P_ERR(("P2P disable discovery failed, ret=%d\n", ret));
@@ -2624,13 +2631,23 @@
int
wl_cfgp2p_del_p2p_disc_if(struct wireless_dev *wdev, struct bcm_cfg80211 *cfg)
{
+ bool rollback_lock = false;
+
if (!wdev)
return -EINVAL;
WL_TRACE(("Enter\n"));
+ if (!rtnl_is_locked()) {
+ rtnl_lock();
+ rollback_lock = true;
+ }
+
cfg80211_unregister_wdev(wdev);
+ if (rollback_lock)
+ rtnl_unlock();
+
kfree(wdev);
if (cfg)
diff --git a/drivers/net/wireless/bcmdhd/wl_cfgp2p.h b/drivers/net/wireless/bcmdhd/wl_cfgp2p.h
old mode 100755
new mode 100644
index 9b8beed..f4c7c4f
--- a/drivers/net/wireless/bcmdhd/wl_cfgp2p.h
+++ b/drivers/net/wireless/bcmdhd/wl_cfgp2p.h
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: wl_cfgp2p.h 444054 2013-12-18 11:33:42Z $
+ * $Id: wl_cfgp2p.h 472818 2014-04-25 08:07:56Z $
*/
#ifndef _wl_cfgp2p_h_
#define _wl_cfgp2p_h_
@@ -74,7 +74,7 @@
};
struct p2p_bss {
- u32 bssidx;
+ s32 bssidx;
struct net_device *dev;
struct p2p_saved_ie saved_ie;
void *private_data;
@@ -187,7 +187,7 @@
add_timer(timer); \
} while (0);
-#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 8, 0))
+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 8, 0)) && !defined(WL_CFG80211_P2P_DEV_IF)
#define WL_CFG80211_P2P_DEV_IF
#ifdef WL_ENABLE_P2P_IF
@@ -197,6 +197,13 @@
#ifdef WL_SUPPORT_BACKPORTED_KPATCHES
#undef WL_SUPPORT_BACKPORTED_KPATCHES
#endif
+#else
+#ifdef WLP2P
+#ifndef WL_ENABLE_P2P_IF
+/* Enable P2P network Interface if P2P support is enabled */
+#define WL_ENABLE_P2P_IF
+#endif /* WL_ENABLE_P2P_IF */
+#endif /* WLP2P */
#endif /* (LINUX_VERSION >= VERSION(3, 8, 0)) */
#ifndef WL_CFG80211_P2P_DEV_IF
diff --git a/drivers/net/wireless/bcmdhd/wl_cfgvendor.c b/drivers/net/wireless/bcmdhd/wl_cfgvendor.c
new file mode 100644
index 0000000..8236763
--- /dev/null
+++ b/drivers/net/wireless/bcmdhd/wl_cfgvendor.c
@@ -0,0 +1,2628 @@
+/*
+ * Linux cfg80211 Vendor Extension Code
+ *
+ * Copyright (C) 1999-2014, Broadcom Corporation
+ *
+ * Unless you and Broadcom execute a separate written software license
+ * agreement governing use of this software, this software is licensed to you
+ * under the terms of the GNU General Public License version 2 (the "GPL"),
+ * available at http://www.broadcom.com/licenses/GPLv2.php, with the
+ * following added to such license:
+ *
+ * As a special exception, the copyright holders of this software give you
+ * permission to link this software with independent modules, and to copy and
+ * distribute the resulting executable under terms of your choice, provided that
+ * you also meet, for each linked independent module, the terms and conditions of
+ * the license of that module. An independent module is a module which is not
+ * derived from this software. The special exception does not apply to any
+ * modifications of the software.
+ *
+ * Notwithstanding the above, under no circumstances may you combine this
+ * software in any way with any other Broadcom software provided under a license
+ * other than the GPL, without Broadcom's express prior written consent.
+ *
+ * $Id: wl_cfgvendor.c 473890 2014-04-30 01:55:06Z $
+*/
+
+/*
+ * New vendor interface addition to nl80211/cfg80211 to allow vendors
+ * to implement proprietary features over the cfg80211 stack.
+*/
+
+#include <typedefs.h>
+#include <linuxver.h>
+#include <osl.h>
+#include <linux/kernel.h>
+
+#include <bcmutils.h>
+#include <bcmwifi_channels.h>
+#include <bcmendian.h>
+#include <proto/ethernet.h>
+#include <proto/802.11.h>
+#include <linux/if_arp.h>
+#include <asm/uaccess.h>
+
+
+#include <dngl_stats.h>
+#include <dhd.h>
+#include <dhdioctl.h>
+#include <wlioctl.h>
+#include <dhd_cfg80211.h>
+#ifdef PNO_SUPPORT
+#include <dhd_pno.h>
+#endif /* PNO_SUPPORT */
+#ifdef RTT_SUPPORT
+#include <dhd_rtt.h>
+#endif /* RTT_SUPPORT */
+#include <dhd_debug.h>
+#include <linux/kthread.h>
+#include <linux/netdevice.h>
+#include <linux/sched.h>
+#include <linux/etherdevice.h>
+#include <linux/wireless.h>
+#include <linux/ieee80211.h>
+#include <linux/wait.h>
+#include <linux/vmalloc.h>
+#include <net/cfg80211.h>
+#include <net/rtnetlink.h>
+
+#include <wldev_common.h>
+#include <wl_cfg80211.h>
+#include <wl_cfgp2p.h>
+#include <wl_android.h>
+#include <wl_cfgvendor.h>
+#ifdef PROP_TXSTATUS
+#include <dhd_wlfc.h>
+#endif
+
+#if (LINUX_VERSION_CODE > KERNEL_VERSION(3, 13, 0)) || defined(WL_VENDOR_EXT_SUPPORT)
+
+/*
+ * This API is to be used for asynchronous vendor events. This
+ * shouldn't be used in response to a vendor command from its
+ * do_it handler context (instead wl_cfgvendor_send_cmd_reply should
+ * be used).
+ */
+int wl_cfgvendor_send_async_event(struct wiphy *wiphy,
+ struct net_device *dev, int event_id, const void *data, int len)
+{
+ u16 kflags;
+ struct sk_buff *skb;
+
+ kflags = in_atomic() ? GFP_ATOMIC : GFP_KERNEL;
+
+ /* Alloc the SKB for vendor_event */
+ skb = cfg80211_vendor_event_alloc(wiphy, len, event_id, kflags);
+ if (!skb) {
+ WL_ERR(("skb alloc failed"));
+ return -ENOMEM;
+ }
+
+ /* Push the data to the skb */
+ nla_put_nohdr(skb, len, data);
+
+ cfg80211_vendor_event(skb, kflags);
+
+ return 0;
+}
+
+static int wl_cfgvendor_send_cmd_reply(struct wiphy *wiphy,
+ struct net_device *dev, const void *data, int len)
+{
+ struct sk_buff *skb;
+
+ /* Alloc the SKB for the vendor command reply */
+ skb = cfg80211_vendor_cmd_alloc_reply_skb(wiphy, len);
+ if (unlikely(!skb)) {
+ WL_ERR(("skb alloc failed"));
+ return -ENOMEM;
+ }
+
+ /* Push the data to the skb */
+ nla_put_nohdr(skb, len, data);
+
+ return cfg80211_vendor_cmd_reply(skb);
+}
+
+static int wl_cfgvendor_get_feature_set(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int err = 0;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ int reply;
+
+ reply = dhd_dev_get_feature_set(bcmcfg_to_prmry_ndev(cfg));
+
+ err = wl_cfgvendor_send_cmd_reply(wiphy, bcmcfg_to_prmry_ndev(cfg),
+ &reply, sizeof(int));
+
+ if (unlikely(err))
+ WL_ERR(("Vendor Command reply failed ret:%d \n", err));
+
+ return err;
+}
+
+static int wl_cfgvendor_get_feature_set_matrix(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int err = 0;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ struct sk_buff *skb;
+ int *reply;
+ int num, mem_needed, i;
+
+ reply = dhd_dev_get_feature_set_matrix(bcmcfg_to_prmry_ndev(cfg), &num);
+
+ if (!reply) {
+ WL_ERR(("Could not get feature list matrix\n"));
+ err = -EINVAL;
+ return err;
+ }
+
+ mem_needed = VENDOR_REPLY_OVERHEAD + (ATTRIBUTE_U32_LEN * num) +
+ ATTRIBUTE_U32_LEN;
+
+ /* Alloc the SKB for the vendor command reply */
+ skb = cfg80211_vendor_cmd_alloc_reply_skb(wiphy, mem_needed);
+ if (unlikely(!skb)) {
+ WL_ERR(("skb alloc failed"));
+ err = -ENOMEM;
+ goto exit;
+ }
+
+ nla_put_u32(skb, ANDR_WIFI_ATTRIBUTE_NUM_FEATURE_SET, num);
+ for (i = 0; i < num; i++) {
+ nla_put_u32(skb, ANDR_WIFI_ATTRIBUTE_FEATURE_SET, reply[i]);
+ }
+
+ err = cfg80211_vendor_cmd_reply(skb);
+
+ if (unlikely(err))
+ WL_ERR(("Vendor Command reply failed ret:%d \n", err));
+exit:
+ kfree(reply);
+ return err;
+}
+
+static int wl_cfgvendor_set_rand_mac_oui(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int err = 0;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ int type;
+ uint8 random_mac_oui[DOT11_OUI_LEN];
+
+ type = nla_type(data);
+
+ if (type == ANDR_WIFI_ATTRIBUTE_RANDOM_MAC_OUI) {
+ memcpy(random_mac_oui, nla_data(data), DOT11_OUI_LEN);
+
+ err = dhd_dev_cfg_rand_mac_oui(bcmcfg_to_prmry_ndev(cfg), random_mac_oui);
+
+ if (unlikely(err))
+ WL_ERR(("Bad OUI, could not set:%d \n", err));
+
+ } else {
+ err = -1;
+ }
+
+ return err;
+}
+
+static int wl_cfgvendor_set_nodfs_flag(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int err = 0;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ int type;
+ u32 nodfs;
+
+ type = nla_type(data);
+ if (type == ANDR_WIFI_ATTRIBUTE_NODFS_SET) {
+ nodfs = nla_get_u32(data);
+ err = dhd_dev_set_nodfs(bcmcfg_to_prmry_ndev(cfg), nodfs);
+ } else {
+ err = -1;
+ }
+ return err;
+}
+
+#ifdef GSCAN_SUPPORT
+int wl_cfgvendor_send_hotlist_event(struct wiphy *wiphy,
+ struct net_device *dev, void *data, int len, wl_vendor_event_t event)
+{
+ u16 kflags;
+ const void *ptr;
+ struct sk_buff *skb;
+ int malloc_len, total, iter_cnt_to_send, cnt;
+ gscan_results_cache_t *cache = (gscan_results_cache_t *)data;
+
+ total = len/sizeof(wifi_gscan_result_t);
+ while (total > 0) {
+ malloc_len = (total * sizeof(wifi_gscan_result_t)) + VENDOR_DATA_OVERHEAD;
+ if (malloc_len > NLMSG_DEFAULT_SIZE) {
+ malloc_len = NLMSG_DEFAULT_SIZE;
+ }
+ iter_cnt_to_send =
+ (malloc_len - VENDOR_DATA_OVERHEAD)/sizeof(wifi_gscan_result_t);
+ total = total - iter_cnt_to_send;
+
+ kflags = in_atomic() ? GFP_ATOMIC : GFP_KERNEL;
+
+ /* Alloc the SKB for vendor_event */
+ skb = cfg80211_vendor_event_alloc(wiphy, malloc_len, event, kflags);
+ if (!skb) {
+ WL_ERR(("skb alloc failed"));
+ return -ENOMEM;
+ }
+
+ while (cache && iter_cnt_to_send) {
+ ptr = (const void *) &cache->results[cache->tot_consumed];
+
+ if (iter_cnt_to_send < (cache->tot_count - cache->tot_consumed))
+ cnt = iter_cnt_to_send;
+ else
+ cnt = (cache->tot_count - cache->tot_consumed);
+
+ iter_cnt_to_send -= cnt;
+ cache->tot_consumed += cnt;
+ /* Push the data to the skb */
+ nla_append(skb, cnt * sizeof(wifi_gscan_result_t), ptr);
+ if (cache->tot_consumed == cache->tot_count)
+ cache = cache->next;
+
+ }
+
+ cfg80211_vendor_event(skb, kflags);
+ }
+
+ return 0;
+}
+
+static int wl_cfgvendor_gscan_get_capabilities(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int err = 0;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ dhd_pno_gscan_capabilities_t *reply = NULL;
+ uint32 reply_len = 0;
+
+
+ reply = dhd_dev_pno_get_gscan(bcmcfg_to_prmry_ndev(cfg),
+ DHD_PNO_GET_CAPABILITIES, NULL, &reply_len);
+ if (!reply) {
+ WL_ERR(("Could not get capabilities\n"));
+ err = -EINVAL;
+ return err;
+ }
+
+ err = wl_cfgvendor_send_cmd_reply(wiphy, bcmcfg_to_prmry_ndev(cfg),
+ reply, reply_len);
+
+ if (unlikely(err))
+ WL_ERR(("Vendor Command reply failed ret:%d \n", err));
+
+ kfree(reply);
+ return err;
+}
+
+static int wl_cfgvendor_gscan_get_channel_list(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int err = 0, type, band;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ uint16 *reply = NULL;
+ uint32 reply_len = 0, num_channels, mem_needed;
+ struct sk_buff *skb;
+
+ type = nla_type(data);
+
+ if (type == GSCAN_ATTRIBUTE_BAND) {
+ band = nla_get_u32(data);
+ } else {
+ return -1;
+ }
+
+ reply = dhd_dev_pno_get_gscan(bcmcfg_to_prmry_ndev(cfg),
+ DHD_PNO_GET_CHANNEL_LIST, &band, &reply_len);
+
+ if (!reply) {
+ WL_ERR(("Could not get channel list\n"));
+ err = -EINVAL;
+ return err;
+ }
+ num_channels = reply_len/ sizeof(uint32);
+ mem_needed = reply_len + VENDOR_REPLY_OVERHEAD + (ATTRIBUTE_U32_LEN * 2);
+
+ /* Alloc the SKB for the vendor command reply */
+ skb = cfg80211_vendor_cmd_alloc_reply_skb(wiphy, mem_needed);
+ if (unlikely(!skb)) {
+ WL_ERR(("skb alloc failed"));
+ err = -ENOMEM;
+ goto exit;
+ }
+
+ nla_put_u32(skb, GSCAN_ATTRIBUTE_NUM_CHANNELS, num_channels);
+ nla_put(skb, GSCAN_ATTRIBUTE_CHANNEL_LIST, reply_len, reply);
+
+ err = cfg80211_vendor_cmd_reply(skb);
+
+ if (unlikely(err))
+ WL_ERR(("Vendor Command reply failed ret:%d \n", err));
+exit:
+ kfree(reply);
+ return err;
+}
+
+static int wl_cfgvendor_gscan_get_batch_results(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int err = 0;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ gscan_results_cache_t *results, *iter;
+ uint32 reply_len, complete = 1;
+ int32 mem_needed, num_results_iter;
+ wifi_gscan_result_t *ptr;
+ uint16 num_scan_ids, num_results;
+ struct sk_buff *skb;
+ struct nlattr *scan_hdr, *complete_flag;
+
+ err = dhd_dev_wait_batch_results_complete(bcmcfg_to_prmry_ndev(cfg));
+ if (err != BCME_OK)
+ return -EBUSY;
+
+ err = dhd_dev_pno_lock_access_batch_results(bcmcfg_to_prmry_ndev(cfg));
+ if (err != BCME_OK) {
+ WL_ERR(("Can't obtain lock to access batch results %d\n", err));
+ return -EBUSY;
+ }
+ results = dhd_dev_pno_get_gscan(bcmcfg_to_prmry_ndev(cfg),
+ DHD_PNO_GET_BATCH_RESULTS, NULL, &reply_len);
+
+ if (!results) {
+ WL_ERR(("No results to send %d\n", err));
+ err = wl_cfgvendor_send_cmd_reply(wiphy, bcmcfg_to_prmry_ndev(cfg),
+ results, 0);
+
+ if (unlikely(err))
+ WL_ERR(("Vendor Command reply failed ret:%d \n", err));
+ dhd_dev_pno_unlock_access_batch_results(bcmcfg_to_prmry_ndev(cfg));
+ return err;
+ }
+ num_scan_ids = reply_len & 0xFFFF;
+ num_results = (reply_len & 0xFFFF0000) >> 16;
+ mem_needed = (num_results * sizeof(wifi_gscan_result_t)) +
+ (num_scan_ids * GSCAN_BATCH_RESULT_HDR_LEN) +
+ VENDOR_REPLY_OVERHEAD + SCAN_RESULTS_COMPLETE_FLAG_LEN;
+
+ if (mem_needed > (int32)NLMSG_DEFAULT_SIZE) {
+ mem_needed = (int32)NLMSG_DEFAULT_SIZE;
+ }
+
+ WL_TRACE(("complete %d mem_needed %d max_mem %d\n", complete, mem_needed,
+ (int)NLMSG_DEFAULT_SIZE));
+ /* Alloc the SKB for the vendor command reply */
+ skb = cfg80211_vendor_cmd_alloc_reply_skb(wiphy, mem_needed);
+ if (unlikely(!skb)) {
+ WL_ERR(("skb alloc failed"));
+ dhd_dev_pno_unlock_access_batch_results(bcmcfg_to_prmry_ndev(cfg));
+ return -ENOMEM;
+ }
+ iter = results;
+ complete_flag = nla_reserve(skb, GSCAN_ATTRIBUTE_SCAN_RESULTS_COMPLETE,
+ sizeof(complete));
+ mem_needed = mem_needed - (SCAN_RESULTS_COMPLETE_FLAG_LEN + VENDOR_REPLY_OVERHEAD);
+
+ while (iter) {
+ num_results_iter =
+ (mem_needed - (int32)GSCAN_BATCH_RESULT_HDR_LEN)/(int32)sizeof(wifi_gscan_result_t);
+ if (num_results_iter <= 0 ||
+ ((iter->tot_count - iter->tot_consumed) > num_results_iter)) {
+ break;
+ }
+ scan_hdr = nla_nest_start(skb, GSCAN_ATTRIBUTE_SCAN_RESULTS);
+ /* no more room? we are done then (for now) */
+ if (scan_hdr == NULL) {
+ complete = 0;
+ break;
+ }
+ nla_put_u32(skb, GSCAN_ATTRIBUTE_SCAN_ID, iter->scan_id);
+ nla_put_u8(skb, GSCAN_ATTRIBUTE_SCAN_FLAGS, iter->flag);
+ num_results_iter = iter->tot_count - iter->tot_consumed;
+
+ nla_put_u32(skb, GSCAN_ATTRIBUTE_NUM_OF_RESULTS, num_results_iter);
+ if (num_results_iter) {
+ ptr = &iter->results[iter->tot_consumed];
+ iter->tot_consumed += num_results_iter;
+ nla_put(skb, GSCAN_ATTRIBUTE_SCAN_RESULTS,
+ num_results_iter * sizeof(wifi_gscan_result_t), ptr);
+ }
+ nla_nest_end(skb, scan_hdr);
+ mem_needed -= GSCAN_BATCH_RESULT_HDR_LEN +
+ (num_results_iter * sizeof(wifi_gscan_result_t));
+ iter = iter->next;
+ }
+ /* Returns TRUE if all result consumed */
+ complete = dhd_dev_gscan_batch_cache_cleanup(bcmcfg_to_prmry_ndev(cfg));
+ memcpy(nla_data(complete_flag), &complete, sizeof(complete));
+ dhd_dev_pno_unlock_access_batch_results(bcmcfg_to_prmry_ndev(cfg));
+ return cfg80211_vendor_cmd_reply(skb);
+}
+
+static int wl_cfgvendor_initiate_gscan(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int err = 0;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ int type, tmp = len;
+ int run = 0xFF;
+ int flush = 0;
+ const struct nlattr *iter;
+
+ nla_for_each_attr(iter, data, len, tmp) {
+ type = nla_type(iter);
+ if (type == GSCAN_ATTRIBUTE_ENABLE_FEATURE)
+ run = nla_get_u32(iter);
+ else if (type == GSCAN_ATTRIBUTE_FLUSH_FEATURE)
+ flush = nla_get_u32(iter);
+ }
+
+ if (run != 0xFF) {
+ err = dhd_dev_pno_run_gscan(bcmcfg_to_prmry_ndev(cfg), run, flush);
+
+ if (unlikely(err))
+ WL_ERR(("Could not run gscan:%d \n", err));
+ return err;
+ } else {
+ return -1;
+ }
+
+
+}
+
+static int wl_cfgvendor_enable_full_scan_result(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int err = 0;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ int type;
+ bool real_time = FALSE;
+
+ type = nla_type(data);
+
+ if (type == GSCAN_ATTRIBUTE_ENABLE_FULL_SCAN_RESULTS) {
+ real_time = nla_get_u32(data);
+
+ err = dhd_dev_pno_enable_full_scan_result(bcmcfg_to_prmry_ndev(cfg), real_time);
+
+ if (unlikely(err))
+ WL_ERR(("Could not run gscan:%d \n", err));
+
+ } else {
+ err = -1;
+ }
+
+ return err;
+}
+
+static int wl_cfgvendor_set_scan_cfg(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int err = 0;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ gscan_scan_params_t *scan_param;
+ int j = 0;
+ int type, tmp, tmp1, tmp2, k = 0;
+ const struct nlattr *iter, *iter1, *iter2;
+ struct dhd_pno_gscan_channel_bucket *ch_bucket;
+
+ scan_param = kzalloc(sizeof(gscan_scan_params_t), GFP_KERNEL);
+ if (!scan_param) {
+ WL_ERR(("Could not set GSCAN scan cfg, mem alloc failure\n"));
+ err = -EINVAL;
+ return err;
+
+ }
+
+ scan_param->scan_fr = PNO_SCAN_MIN_FW_SEC;
+ nla_for_each_attr(iter, data, len, tmp) {
+ type = nla_type(iter);
+
+ if (j >= GSCAN_MAX_CH_BUCKETS)
+ break;
+
+ switch (type) {
+ case GSCAN_ATTRIBUTE_BASE_PERIOD:
+ scan_param->scan_fr = nla_get_u32(iter)/1000;
+ break;
+ case GSCAN_ATTRIBUTE_NUM_BUCKETS:
+ scan_param->nchannel_buckets = nla_get_u32(iter);
+ break;
+ case GSCAN_ATTRIBUTE_CH_BUCKET_1:
+ case GSCAN_ATTRIBUTE_CH_BUCKET_2:
+ case GSCAN_ATTRIBUTE_CH_BUCKET_3:
+ case GSCAN_ATTRIBUTE_CH_BUCKET_4:
+ case GSCAN_ATTRIBUTE_CH_BUCKET_5:
+ case GSCAN_ATTRIBUTE_CH_BUCKET_6:
+ case GSCAN_ATTRIBUTE_CH_BUCKET_7:
+ nla_for_each_nested(iter1, iter, tmp1) {
+ type = nla_type(iter1);
+ ch_bucket =
+ scan_param->channel_bucket;
+
+ switch (type) {
+ case GSCAN_ATTRIBUTE_BUCKET_ID:
+ break;
+ case GSCAN_ATTRIBUTE_BUCKET_PERIOD:
+ ch_bucket[j].bucket_freq_multiple =
+ nla_get_u32(iter1)/1000;
+ break;
+ case GSCAN_ATTRIBUTE_BUCKET_NUM_CHANNELS:
+ ch_bucket[j].num_channels =
+ nla_get_u32(iter1);
+ break;
+ case GSCAN_ATTRIBUTE_BUCKET_CHANNELS:
+ nla_for_each_nested(iter2, iter1, tmp2) {
+ if (k >= GSCAN_MAX_CHANNELS_IN_BUCKET)
+ break;
+ ch_bucket[j].chan_list[k] =
+ nla_get_u32(iter2);
+ k++;
+ }
+ k = 0;
+ break;
+ case GSCAN_ATTRIBUTE_BUCKETS_BAND:
+ ch_bucket[j].band = (uint16)
+ nla_get_u32(iter1);
+ break;
+ case GSCAN_ATTRIBUTE_REPORT_EVENTS:
+ ch_bucket[j].report_flag = (uint8)
+ nla_get_u32(iter1);
+ break;
+ case GSCAN_ATTRIBUTE_BUCKET_STEP_COUNT:
+ ch_bucket[j].repeat = (uint16)
+ nla_get_u32(iter1);
+ break;
+ case GSCAN_ATTRIBUTE_BUCKET_MAX_PERIOD:
+ ch_bucket[j].bucket_max_multiple =
+ nla_get_u32(iter1)/1000;
+ break;
+ }
+ }
+ j++;
+ break;
+ }
+ }
+
+ if (dhd_dev_pno_set_cfg_gscan(bcmcfg_to_prmry_ndev(cfg),
+ DHD_PNO_SCAN_CFG_ID, scan_param, 0) < 0) {
+ WL_ERR(("Could not set GSCAN scan cfg\n"));
+ err = -EINVAL;
+ }
+
+ kfree(scan_param);
+ return err;
+
+}
+
+static int wl_cfgvendor_hotlist_cfg(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int err = 0;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ gscan_hotlist_scan_params_t *hotlist_params;
+ int tmp, tmp1, tmp2, type, j = 0, dummy;
+ const struct nlattr *outer, *inner, *iter;
+ uint8 flush = 0;
+ struct bssid_t *pbssid;
+
+ hotlist_params = (gscan_hotlist_scan_params_t *)kzalloc(len, GFP_KERNEL);
+ if (!hotlist_params) {
+ WL_ERR(("Cannot allocate memory to parse config commands, size - %d bytes\n", len));
+ return -1;
+ }
+
+ hotlist_params->lost_ap_window = GSCAN_LOST_AP_WINDOW_DEFAULT;
+
+ nla_for_each_attr(iter, data, len, tmp2) {
+ type = nla_type(iter);
+ switch (type) {
+ case GSCAN_ATTRIBUTE_HOTLIST_BSSIDS:
+ pbssid = hotlist_params->bssid;
+ nla_for_each_nested(outer, iter, tmp) {
+ nla_for_each_nested(inner, outer, tmp1) {
+ type = nla_type(inner);
+
+ switch (type) {
+ case GSCAN_ATTRIBUTE_BSSID:
+ memcpy(&(pbssid[j].macaddr),
+ nla_data(inner), ETHER_ADDR_LEN);
+ break;
+ case GSCAN_ATTRIBUTE_RSSI_LOW:
+ pbssid[j].rssi_reporting_threshold =
+ (int8) nla_get_u8(inner);
+ break;
+ case GSCAN_ATTRIBUTE_RSSI_HIGH:
+ dummy = (int8) nla_get_u8(inner);
+ break;
+ }
+ }
+ j++;
+ }
+ hotlist_params->nbssid = j;
+ break;
+ case GSCAN_ATTRIBUTE_HOTLIST_FLUSH:
+ flush = nla_get_u8(iter);
+ break;
+ case GSCAN_ATTRIBUTE_LOST_AP_SAMPLE_SIZE:
+ hotlist_params->lost_ap_window = nla_get_u32(iter);
+ break;
+ }
+
+ }
+
+ if (dhd_dev_pno_set_cfg_gscan(bcmcfg_to_prmry_ndev(cfg),
+ DHD_PNO_GEOFENCE_SCAN_CFG_ID, hotlist_params, flush) < 0) {
+ WL_ERR(("Could not set GSCAN HOTLIST cfg\n"));
+ err = -EINVAL;
+ goto exit;
+ }
+exit:
+ kfree(hotlist_params);
+ return err;
+}
+
+static int wl_cfgvendor_epno_cfg(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int err = 0;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ dhd_epno_params_t *epno_params;
+ int tmp, tmp1, tmp2, type, num = 0;
+ const struct nlattr *outer, *inner, *iter;
+ uint8 flush = 0, i = 0;
+ uint16 num_visible_ssid = 0;
+
+ nla_for_each_attr(iter, data, len, tmp2) {
+ type = nla_type(iter);
+ switch (type) {
+ case GSCAN_ATTRIBUTE_EPNO_SSID_LIST:
+ nla_for_each_nested(outer, iter, tmp) {
+ epno_params = (dhd_epno_params_t *)
+ dhd_dev_pno_get_gscan(bcmcfg_to_prmry_ndev(cfg),
+ DHD_PNO_GET_EPNO_SSID_ELEM, NULL, &num);
+ if (!epno_params) {
+ WL_ERR(("Failed to get SSID LIST buffer\n"));
+ err = -ENOMEM;
+ goto exit;
+ }
+ i++;
+ nla_for_each_nested(inner, outer, tmp1) {
+ type = nla_type(inner);
+
+ switch (type) {
+ case GSCAN_ATTRIBUTE_EPNO_SSID:
+ memcpy(epno_params->ssid,
+ nla_data(inner),
+ DOT11_MAX_SSID_LEN);
+ break;
+ case GSCAN_ATTRIBUTE_EPNO_SSID_LEN:
+ len = nla_get_u8(inner);
+ if (len < DOT11_MAX_SSID_LEN) {
+ epno_params->ssid_len = len;
+ } else {
+ WL_ERR(("SSID too long %d\n", len));
+ err = -EINVAL;
+ goto exit;
+ }
+ break;
+ case GSCAN_ATTRIBUTE_EPNO_RSSI:
+ epno_params->rssi_thresh =
+ (int8) nla_get_u32(inner);
+ break;
+ case GSCAN_ATTRIBUTE_EPNO_FLAGS:
+ epno_params->flags =
+ nla_get_u8(inner);
+ if (!(epno_params->flags &
+ DHD_PNO_USE_SSID))
+ num_visible_ssid++;
+ break;
+ case GSCAN_ATTRIBUTE_EPNO_AUTH:
+ epno_params->auth =
+ nla_get_u8(inner);
+ break;
+ }
+ }
+ }
+ break;
+ case GSCAN_ATTRIBUTE_EPNO_SSID_NUM:
+ num = nla_get_u8(iter);
+ break;
+ case GSCAN_ATTRIBUTE_EPNO_FLUSH:
+ flush = nla_get_u8(iter);
+ dhd_dev_pno_set_cfg_gscan(bcmcfg_to_prmry_ndev(cfg),
+ DHD_PNO_EPNO_CFG_ID, NULL, flush);
+ break;
+ default:
+ WL_ERR(("%s: No such attribute %d\n", __FUNCTION__, type));
+ err = -EINVAL;
+ goto exit;
+ }
+
+ }
+ if (i != num) {
+ WL_ERR(("%s: num_ssid %d does not match ssids sent %d\n", __FUNCTION__,
+ num, i));
+ err = -EINVAL;
+ }
+exit:
+ /* Flush all configs if error condition */
+ flush = (err < 0) ? TRUE: FALSE;
+ dhd_dev_pno_set_cfg_gscan(bcmcfg_to_prmry_ndev(cfg),
+ DHD_PNO_EPNO_CFG_ID, &num_visible_ssid, flush);
+ return err;
+}
+
+static int wl_cfgvendor_gscan_anqpo_config(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int err = BCME_ERROR, rem, type, hs_list_size = 0, malloc_size, i = 0, j, k, num_oi, oi_len;
+ wifi_passpoint_network *hs_list = NULL, *src_hs;
+ wl_anqpo_pfn_hs_list_t *anqpo_hs_list;
+ wl_anqpo_pfn_hs_t *dst_hs;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ int tmp, tmp1;
+ const struct nlattr *outer, *inner, *iter;
+ static char iovar_buf[WLC_IOCTL_MAXLEN];
+ char *rcid;
+
+ nla_for_each_attr(iter, data, len, rem) {
+ type = nla_type(iter);
+ switch (type) {
+ case GSCAN_ATTRIBUTE_ANQPO_HS_LIST:
+ if (hs_list_size > 0) {
+ hs_list = kmalloc(hs_list_size*sizeof(wifi_passpoint_network), GFP_KERNEL);
+ if (hs_list == NULL) {
+ WL_ERR(("failed to allocate hs_list\n"));
+ return -ENOMEM;
+ }
+ }
+ nla_for_each_nested(outer, iter, tmp) {
+ nla_for_each_nested(inner, outer, tmp1) {
+ type = nla_type(inner);
+
+ switch (type) {
+ case GSCAN_ATTRIBUTE_ANQPO_HS_NETWORK_ID:
+ hs_list[i].id = nla_get_u32(inner);
+ WL_ERR(("%s: net id: %d\n", __func__, hs_list[i].id));
+ break;
+ case GSCAN_ATTRIBUTE_ANQPO_HS_NAI_REALM:
+ memcpy(hs_list[i].realm,
+ nla_data(inner), 256);
+ WL_ERR(("%s: realm: %s\n", __func__, hs_list[i].realm));
+ break;
+ case GSCAN_ATTRIBUTE_ANQPO_HS_ROAM_CONSORTIUM_ID:
+ memcpy(hs_list[i].roamingConsortiumIds,
+ nla_data(inner), 128);
+ break;
+ case GSCAN_ATTRIBUTE_ANQPO_HS_PLMN:
+ memcpy(hs_list[i].plmn,
+ nla_data(inner), 3);
+ WL_ERR(("%s: plmn: %c %c %c\n", __func__, hs_list[i].plmn[0], hs_list[i].plmn[1], hs_list[i].plmn[2]));
+ break;
+ }
+ }
+ i++;
+ }
+ break;
+ case GSCAN_ATTRIBUTE_ANQPO_HS_LIST_SIZE:
+ hs_list_size = nla_get_u32(iter);
+ WL_ERR(("%s: ANQPO: %d\n", __func__, hs_list_size));
+ break;
+ default:
+ WL_ERR(("Unknown type: %d\n", type));
+ return err;
+ }
+ }
+
+ malloc_size = OFFSETOF(wl_anqpo_pfn_hs_list_t, hs) +
+ (hs_list_size * (sizeof(wl_anqpo_pfn_hs_t)));
+ anqpo_hs_list = (wl_anqpo_pfn_hs_list_t *)kmalloc(malloc_size, GFP_KERNEL);
+ if (anqpo_hs_list == NULL) {
+ WL_ERR(("failed to allocate anqpo_hs_list\n"));
+ err = -ENOMEM;
+ goto exit;
+ }
+ anqpo_hs_list->count = hs_list_size;
+ anqpo_hs_list->is_clear = (hs_list_size == 0) ? TRUE : FALSE;
+
+ if ((hs_list_size > 0) && (NULL != hs_list)) {
+ src_hs = hs_list;
+ dst_hs = &anqpo_hs_list->hs[0];
+ for (i = 0; i < hs_list_size; i++, src_hs++, dst_hs++) {
+ num_oi = 0;
+ dst_hs->id = src_hs->id;
+ dst_hs->realm.length = strlen(src_hs->realm)+1;
+ memcpy(dst_hs->realm.data, src_hs->realm, dst_hs->realm.length);
+ memcpy(dst_hs->plmn.mcc, src_hs->plmn, ANQPO_MCC_LENGTH);
+ memcpy(dst_hs->plmn.mnc, src_hs->plmn, ANQPO_MCC_LENGTH);
+ for (j = 0; j < ANQPO_MAX_PFN_HS; j++) {
+ oi_len = 0;
+ if (0 != src_hs->roamingConsortiumIds[j]) {
+ num_oi++;
+ rcid = (char *)&src_hs->roamingConsortiumIds[j];
+ for (k = 0; k < ANQPO_MAX_OI_LENGTH; k++)
+ if (0 != rcid[k]) oi_len++;
+
+ dst_hs->rc.oi[j].length = oi_len;
+ memcpy(dst_hs->rc.oi[j].data, rcid, oi_len);
+ }
+ }
+ dst_hs->rc.numOi = num_oi;
+ }
+ }
+
+ err = wldev_iovar_setbuf(bcmcfg_to_prmry_ndev(cfg), "anqpo_pfn_hs_list",
+ anqpo_hs_list, malloc_size, iovar_buf, WLC_IOCTL_MAXLEN, NULL);
+
+ kfree(anqpo_hs_list);
+exit:
+ kfree(hs_list);
+ return err;
+}
+
+static int wl_cfgvendor_set_batch_scan_cfg(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int err = 0, tmp, type;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ gscan_batch_params_t batch_param;
+ const struct nlattr *iter;
+
+ batch_param.mscan = batch_param.bestn = 0;
+ batch_param.buffer_threshold = GSCAN_BATCH_NO_THR_SET;
+
+ nla_for_each_attr(iter, data, len, tmp) {
+ type = nla_type(iter);
+
+ switch (type) {
+ case GSCAN_ATTRIBUTE_NUM_AP_PER_SCAN:
+ batch_param.bestn = nla_get_u32(iter);
+ break;
+ case GSCAN_ATTRIBUTE_NUM_SCANS_TO_CACHE:
+ batch_param.mscan = nla_get_u32(iter);
+ break;
+ case GSCAN_ATTRIBUTE_REPORT_THRESHOLD:
+ batch_param.buffer_threshold = nla_get_u32(iter);
+ break;
+ }
+ }
+
+ if (dhd_dev_pno_set_cfg_gscan(bcmcfg_to_prmry_ndev(cfg),
+ DHD_PNO_BATCH_SCAN_CFG_ID, &batch_param, 0) < 0) {
+ WL_ERR(("Could not set batch cfg\n"));
+ err = -EINVAL;
+ return err;
+ }
+
+ return err;
+}
+
+static int wl_cfgvendor_significant_change_cfg(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int err = 0;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ gscan_swc_params_t *significant_params;
+ int tmp, tmp1, tmp2, type, j = 0;
+ const struct nlattr *outer, *inner, *iter;
+ uint8 flush = 0;
+ wl_pfn_significant_bssid_t *pbssid;
+
+ significant_params = (gscan_swc_params_t *) kzalloc(len, GFP_KERNEL);
+ if (!significant_params) {
+ WL_ERR(("Cannot malloc %d bytes to parse config commands\n", len));
+ return -ENOMEM;
+ }
+
+ nla_for_each_attr(iter, data, len, tmp2) {
+ type = nla_type(iter);
+
+ switch (type) {
+ case GSCAN_ATTRIBUTE_SIGNIFICANT_CHANGE_FLUSH:
+ flush = nla_get_u8(iter);
+ break;
+ case GSCAN_ATTRIBUTE_RSSI_SAMPLE_SIZE:
+ significant_params->rssi_window = nla_get_u16(iter);
+ break;
+ case GSCAN_ATTRIBUTE_LOST_AP_SAMPLE_SIZE:
+ significant_params->lost_ap_window = nla_get_u16(iter);
+ break;
+ case GSCAN_ATTRIBUTE_MIN_BREACHING:
+ significant_params->swc_threshold = nla_get_u16(iter);
+ break;
+ case GSCAN_ATTRIBUTE_SIGNIFICANT_CHANGE_BSSIDS:
+ pbssid = significant_params->bssid_elem_list;
+ nla_for_each_nested(outer, iter, tmp) {
+ nla_for_each_nested(inner, outer, tmp1) {
+ switch (nla_type(inner)) {
+ case GSCAN_ATTRIBUTE_BSSID:
+ memcpy(&(pbssid[j].macaddr),
+ nla_data(inner),
+ ETHER_ADDR_LEN);
+ break;
+ case GSCAN_ATTRIBUTE_RSSI_HIGH:
+ pbssid[j].rssi_high_threshold =
+ (int8) nla_get_u8(inner);
+ break;
+ case GSCAN_ATTRIBUTE_RSSI_LOW:
+ pbssid[j].rssi_low_threshold =
+ (int8) nla_get_u8(inner);
+ break;
+ }
+ }
+ j++;
+ }
+ break;
+ }
+ }
+ significant_params->nbssid = j;
+
+ if (dhd_dev_pno_set_cfg_gscan(bcmcfg_to_prmry_ndev(cfg),
+ DHD_PNO_SIGNIFICANT_SCAN_CFG_ID, significant_params, flush) < 0) {
+ WL_ERR(("Could not set GSCAN significant cfg\n"));
+ err = -EINVAL;
+ goto exit;
+ }
+exit:
+ kfree(significant_params);
+ return err;
+}
+
+static int wl_cfgvendor_enable_lazy_roam(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int err = -EINVAL;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ int type;
+ uint32 lazy_roam_enable_flag;
+
+ type = nla_type(data);
+
+ if (type == GSCAN_ATTRIBUTE_LAZY_ROAM_ENABLE) {
+ lazy_roam_enable_flag = nla_get_u32(data);
+
+ err = dhd_dev_lazy_roam_enable(bcmcfg_to_prmry_ndev(cfg),
+ lazy_roam_enable_flag);
+
+ if (unlikely(err))
+ WL_ERR(("Could not enable lazy roam:%d \n", err));
+
+ }
+ return err;
+}
+
+static int wl_cfgvendor_set_lazy_roam_cfg(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int err = 0, tmp, type;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ wlc_roam_exp_params_t roam_param;
+ const struct nlattr *iter;
+
+ memset(&roam_param, 0, sizeof(roam_param));
+
+ nla_for_each_attr(iter, data, len, tmp) {
+ type = nla_type(iter);
+
+ switch (type) {
+ case GSCAN_ATTRIBUTE_A_BAND_BOOST_THRESHOLD:
+ roam_param.a_band_boost_threshold = nla_get_u32(iter);
+ break;
+ case GSCAN_ATTRIBUTE_A_BAND_PENALTY_THRESHOLD:
+ roam_param.a_band_penalty_threshold = nla_get_u32(iter);
+ break;
+ case GSCAN_ATTRIBUTE_A_BAND_BOOST_FACTOR:
+ roam_param.a_band_boost_factor = nla_get_u32(iter);
+ break;
+ case GSCAN_ATTRIBUTE_A_BAND_PENALTY_FACTOR:
+ roam_param.a_band_penalty_factor = nla_get_u32(iter);
+ break;
+ case GSCAN_ATTRIBUTE_A_BAND_MAX_BOOST:
+ roam_param.a_band_max_boost = nla_get_u32(iter);
+ break;
+ case GSCAN_ATTRIBUTE_LAZY_ROAM_HYSTERESIS:
+ roam_param.cur_bssid_boost = nla_get_u32(iter);
+ break;
+ case GSCAN_ATTRIBUTE_ALERT_ROAM_RSSI_TRIGGER:
+ roam_param.alert_roam_trigger_threshold = nla_get_u32(iter);
+ break;
+ }
+ }
+
+ if (dhd_dev_set_lazy_roam_cfg(bcmcfg_to_prmry_ndev(cfg), &roam_param) < 0) {
+ WL_ERR(("Could not set lazy roam cfg\n"));
+ err = -EINVAL;
+ }
+ return err;
+}
+
+/* small helper function */
+static wl_bssid_pref_cfg_t *create_bssid_pref_cfg(uint32 num)
+{
+ uint32 mem_needed;
+ wl_bssid_pref_cfg_t *bssid_pref;
+
+ mem_needed = sizeof(wl_bssid_pref_cfg_t);
+ if (num)
+ mem_needed += (num - 1) * sizeof(wl_bssid_pref_list_t);
+ bssid_pref = (wl_bssid_pref_cfg_t *) kmalloc(mem_needed, GFP_KERNEL);
+ return bssid_pref;
+}
+
+static int wl_cfgvendor_set_bssid_pref(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int err = 0;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ wl_bssid_pref_cfg_t *bssid_pref = NULL;
+ wl_bssid_pref_list_t *bssids;
+ int tmp, tmp1, tmp2, type;
+ const struct nlattr *outer, *inner, *iter;
+ uint32 flush = 0, i = 0, num = 0;
+
+ /* Assumption: NUM attribute must come first */
+ nla_for_each_attr(iter, data, len, tmp2) {
+ type = nla_type(iter);
+ switch (type) {
+ case GSCAN_ATTRIBUTE_NUM_BSSID:
+ num = nla_get_u32(iter);
+ if (num > MAX_BSSID_PREF_LIST_NUM) {
+ WL_ERR(("Too many Preferred BSSIDs!\n"));
+ err = -EINVAL;
+ goto exit;
+ }
+ break;
+ case GSCAN_ATTRIBUTE_BSSID_PREF_FLUSH:
+ flush = nla_get_u32(iter);
+ break;
+ case GSCAN_ATTRIBUTE_BSSID_PREF_LIST:
+ if (!num)
+ return -EINVAL;
+ if ((bssid_pref = create_bssid_pref_cfg(num)) == NULL) {
+ WL_ERR(("%s: Can't malloc memory\n", __FUNCTION__));
+ err = -ENOMEM;
+ goto exit;
+ }
+ bssid_pref->count = num;
+ bssids = bssid_pref->bssids;
+ nla_for_each_nested(outer, iter, tmp) {
+ if (i >= num) {
+ WL_ERR(("CFGs don't seem right!\n"));
+ err = -EINVAL;
+ goto exit;
+ }
+ nla_for_each_nested(inner, outer, tmp1) {
+ type = nla_type(inner);
+ switch (type) {
+ case GSCAN_ATTRIBUTE_BSSID_PREF:
+ memcpy(&(bssids[i].bssid),
+ nla_data(inner), ETHER_ADDR_LEN);
+ /* not used for now */
+ bssids[i].flags = 0;
+ break;
+ case GSCAN_ATTRIBUTE_RSSI_MODIFIER:
+ bssids[i].rssi_factor =
+ (int8) nla_get_u32(inner);
+ break;
+ }
+ }
+ i++;
+ }
+ break;
+ default:
+ WL_ERR(("%s: No such attribute %d\n", __FUNCTION__, type));
+ break;
+ }
+ }
+
+ if (!bssid_pref) {
+ /* What if only flush is desired? */
+ if (flush) {
+ if ((bssid_pref = create_bssid_pref_cfg(0)) == NULL) {
+ WL_ERR(("%s: Can't malloc memory\n", __FUNCTION__));
+ err = -ENOMEM;
+ goto exit;
+ }
+ bssid_pref->count = 0;
+ } else {
+ err = -EINVAL;
+ goto exit;
+ }
+ }
+ err = dhd_dev_set_lazy_roam_bssid_pref(bcmcfg_to_prmry_ndev(cfg),
+ bssid_pref, flush);
+exit:
+ kfree(bssid_pref);
+ return err;
+}
+
+
+static int wl_cfgvendor_set_bssid_blacklist(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ maclist_t *blacklist = NULL;
+ int err = 0;
+ int type, tmp;
+ const struct nlattr *iter;
+ uint32 mem_needed = 0, flush = 0, i = 0, num = 0;
+
+ /* Assumption: NUM attribute must come first */
+ nla_for_each_attr(iter, data, len, tmp) {
+ type = nla_type(iter);
+ switch (type) {
+ case GSCAN_ATTRIBUTE_NUM_BSSID:
+ num = nla_get_u32(iter);
+ if (num > MAX_BSSID_BLACKLIST_NUM) {
+ WL_ERR(("Too many Blacklist BSSIDs!\n"));
+ err = -EINVAL;
+ goto exit;
+ }
+ break;
+ case GSCAN_ATTRIBUTE_BSSID_BLACKLIST_FLUSH:
+ flush = nla_get_u32(iter);
+ break;
+ case GSCAN_ATTRIBUTE_BLACKLIST_BSSID:
+ if (num) {
+ if (!blacklist) {
+ mem_needed = sizeof(maclist_t) +
+ sizeof(struct ether_addr) * (num - 1);
+ blacklist = (maclist_t *)
+ kmalloc(mem_needed, GFP_KERNEL);
+ if (!blacklist) {
+ WL_ERR(("%s: Can't malloc %d bytes\n",
+ __FUNCTION__, mem_needed));
+ err = -ENOMEM;
+ goto exit;
+ }
+ blacklist->count = num;
+ }
+ if (i >= num) {
+ WL_ERR(("CFGs don't seem right!\n"));
+ err = -EINVAL;
+ goto exit;
+ }
+ memcpy(&(blacklist->ea[i]),
+ nla_data(iter), ETHER_ADDR_LEN);
+ i++;
+ }
+ break;
+ default:
+ WL_ERR(("%s: No such attribute %d\n", __FUNCTION__, type));
+ break;
+ }
+ }
+ err = dhd_dev_set_blacklist_bssid(bcmcfg_to_prmry_ndev(cfg),
+ blacklist, mem_needed, flush);
+exit:
+ kfree(blacklist);
+ return err;
+}
+
+static int wl_cfgvendor_set_ssid_whitelist(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int err = 0;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ wl_ssid_whitelist_t *ssid_whitelist = NULL;
+ wlc_ssid_t *ssid_elem;
+ int tmp, tmp2, mem_needed = 0, type;
+ const struct nlattr *inner, *iter;
+ uint32 flush = 0, i = 0, num = 0;
+
+ /* Assumption: NUM attribute must come first */
+ nla_for_each_attr(iter, data, len, tmp2) {
+ type = nla_type(iter);
+ switch (type) {
+ case GSCAN_ATTRIBUTE_NUM_WL_SSID:
+ num = nla_get_u32(iter);
+ if (num > MAX_SSID_WHITELIST_NUM) {
+ WL_ERR(("Too many WL SSIDs!\n"));
+ err = -EINVAL;
+ goto exit;
+ }
+ mem_needed = sizeof(wl_ssid_whitelist_t);
+ if (num)
+ mem_needed += (num - 1) * sizeof(ssid_info_t);
+ ssid_whitelist = (wl_ssid_whitelist_t *)
+ kzalloc(mem_needed, GFP_KERNEL);
+ if (ssid_whitelist == NULL) {
+ WL_ERR(("%s: Can't malloc %d bytes\n",
+ __FUNCTION__, mem_needed));
+ err = -ENOMEM;
+ goto exit;
+ }
+ ssid_whitelist->ssid_count = num;
+ break;
+ case GSCAN_ATTRIBUTE_WL_SSID_FLUSH:
+ flush = nla_get_u32(iter);
+ break;
+ case GSCAN_ATTRIBUTE_WHITELIST_SSID_ELEM:
+ if (!num || !ssid_whitelist) {
+ WL_ERR(("num ssid is not set!\n"));
+ return -EINVAL;
+ }
+ if (i >= num) {
+ WL_ERR(("CFGs don't seem right!\n"));
+ err = -EINVAL;
+ goto exit;
+ }
+ ssid_elem = &ssid_whitelist->ssids[i];
+ nla_for_each_nested(inner, iter, tmp) {
+ type = nla_type(inner);
+ switch (type) {
+ case GSCAN_ATTRIBUTE_WHITELIST_SSID:
+ memcpy(ssid_elem->SSID,
+ nla_data(inner),
+ DOT11_MAX_SSID_LEN);
+ break;
+ case GSCAN_ATTRIBUTE_WL_SSID_LEN:
+ ssid_elem->SSID_len = (uint8)
+ nla_get_u32(inner);
+ break;
+ }
+ }
+ i++;
+ break;
+ default:
+ WL_ERR(("%s: No such attribute %d\n", __FUNCTION__, type));
+ break;
+ }
+ }
+
+ err = dhd_dev_set_whitelist_ssid(bcmcfg_to_prmry_ndev(cfg),
+ ssid_whitelist, mem_needed, flush);
+exit:
+ kfree(ssid_whitelist);
+ return err;
+}
+#endif /* GSCAN_SUPPORT */
+
+static int wl_cfgvendor_set_rssi_monitor(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int err = 0, tmp, type, start = 0;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ int8 max_rssi = 0, min_rssi = 0;
+ const struct nlattr *iter;
+
+ nla_for_each_attr(iter, data, len, tmp) {
+ type = nla_type(iter);
+ switch (type) {
+ case RSSI_MONITOR_ATTRIBUTE_MAX_RSSI:
+ max_rssi = (int8) nla_get_u32(iter);
+ break;
+ case RSSI_MONITOR_ATTRIBUTE_MIN_RSSI:
+ min_rssi = (int8) nla_get_u32(iter);
+ break;
+ case RSSI_MONITOR_ATTRIBUTE_START:
+ start = nla_get_u32(iter);
+ break;
+ }
+ }
+
+ if (dhd_dev_set_rssi_monitor_cfg(bcmcfg_to_prmry_ndev(cfg),
+ start, max_rssi, min_rssi) < 0) {
+ WL_ERR(("Could not set rssi monitor cfg\n"));
+ err = -EINVAL;
+ }
+ return err;
+}
+
+#ifdef RTT_SUPPORT
+void
+wl_cfgvendor_rtt_evt(void *ctx, void *rtt_data)
+{
+ struct wireless_dev *wdev = (struct wireless_dev *)ctx;
+ struct wiphy *wiphy;
+ struct sk_buff *skb;
+ uint32 complete = 0;
+ gfp_t kflags;
+ rtt_result_t *rtt_result;
+ rtt_results_header_t *rtt_header;
+ struct list_head *rtt_cache_list;
+ struct nlattr *rtt_nl_hdr;
+ wiphy = wdev->wiphy;
+
+ WL_DBG(("In\n"));
+ /* Push the data to the skb */
+ if (!rtt_data) {
+ WL_ERR(("rtt_data is NULL\n"));
+ return;
+ }
+ rtt_cache_list = (struct list_head *)rtt_data;
+ kflags = in_atomic() ? GFP_ATOMIC : GFP_KERNEL;
+ if (list_empty(rtt_cache_list)) {
+ skb = cfg80211_vendor_event_alloc(wiphy, 100, GOOGLE_RTT_COMPLETE_EVENT, kflags);
+ if (!skb) {
+ WL_ERR(("skb alloc failed"));
+ return;
+ }
+ complete = 1;
+ nla_put_u32(skb, RTT_ATTRIBUTE_RESULTS_COMPLETE, complete);
+ cfg80211_vendor_event(skb, kflags);
+ return;
+ }
+ list_for_each_entry(rtt_header, rtt_cache_list, list) {
+ /* Alloc the SKB for vendor_event */
+ skb = cfg80211_vendor_event_alloc(wiphy, rtt_header->result_tot_len + 100,
+ GOOGLE_RTT_COMPLETE_EVENT, kflags);
+ if (!skb) {
+ WL_ERR(("skb alloc failed"));
+ return;
+ }
+ if (list_is_last(&rtt_header->list, rtt_cache_list)) {
+ complete = 1;
+ }
+ nla_put_u32(skb, RTT_ATTRIBUTE_RESULTS_COMPLETE, complete);
+ rtt_nl_hdr = nla_nest_start(skb, RTT_ATTRIBUTE_RESULTS_PER_TARGET);
+ if (!rtt_nl_hdr) {
+ WL_ERR(("rtt_nl_hdr is NULL\n"));
+ break;
+ }
+ nla_put(skb, RTT_ATTRIBUTE_TARGET_MAC, ETHER_ADDR_LEN, &rtt_header->peer_mac);
+ nla_put_u32(skb, RTT_ATTRIBUTE_RESULT_CNT, rtt_header->result_cnt);
+ list_for_each_entry(rtt_result, &rtt_header->result_list, list) {
+ nla_put(skb, RTT_ATTRIBUTE_RESULT,
+ rtt_result->report_len, &rtt_result->report);
+ }
+ nla_nest_end(skb, rtt_nl_hdr);
+ cfg80211_vendor_event(skb, kflags);
+ }
+}
+
+static int
+wl_cfgvendor_rtt_set_config(struct wiphy *wiphy, struct wireless_dev *wdev,
+ const void *data, int len) {
+ int err = 0, rem, rem1, rem2, type;
+ int target_cnt = 0;
+ rtt_config_params_t rtt_param;
+ rtt_target_info_t* rtt_target = NULL;
+ const struct nlattr *iter, *iter1, *iter2;
+ int8 eabuf[ETHER_ADDR_STR_LEN];
+ int8 chanbuf[CHANSPEC_STR_LEN];
+ int32 feature_set = 0;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ rtt_capabilities_t capability;
+ feature_set = dhd_dev_get_feature_set(bcmcfg_to_prmry_ndev(cfg));
+
+ WL_DBG(("In\n"));
+ err = dhd_dev_rtt_register_noti_callback(wdev->netdev, wdev, wl_cfgvendor_rtt_evt);
+ if (err < 0) {
+ WL_ERR(("failed to register rtt_noti_callback\n"));
+ goto exit;
+ }
+ err = dhd_dev_rtt_capability(bcmcfg_to_prmry_ndev(cfg), &capability);
+ if (err < 0) {
+ WL_ERR(("failed to get the capability\n"));
+ goto exit;
+ }
+
+ memset(&rtt_param, 0, sizeof(rtt_param));
+ nla_for_each_attr(iter, data, len, rem) {
+ type = nla_type(iter);
+ switch (type) {
+ case RTT_ATTRIBUTE_TARGET_CNT:
+ target_cnt = nla_get_u8(iter);
+ if (target_cnt > RTT_MAX_TARGET_CNT) {
+ WL_ERR(("exceed max target count : %d\n",
+ target_cnt));
+ err = BCME_RANGE;
+ goto exit;
+ }
+ rtt_param.rtt_target_cnt = target_cnt;
+ rtt_param.target_info = kzalloc(TARGET_INFO_SIZE(target_cnt), GFP_KERNEL);
+ if (rtt_param.target_info == NULL) {
+ WL_ERR(("failed to allocate target info for (%d)\n", target_cnt));
+ err = BCME_NOMEM;
+ goto exit;
+ }
+ break;
+ case RTT_ATTRIBUTE_TARGET_INFO:
+ if (rtt_param.target_info == NULL) {
+ WL_ERR(("TARGET_CNT attribute must come first\n"));
+ err = -EINVAL;
+ goto exit;
+ }
+ rtt_target = rtt_param.target_info;
+ nla_for_each_nested(iter1, iter, rem1) {
+ nla_for_each_nested(iter2, iter1, rem2) {
+ type = nla_type(iter2);
+ switch (type) {
+ case RTT_ATTRIBUTE_TARGET_MAC:
+ memcpy(&rtt_target->addr, nla_data(iter2),
+ ETHER_ADDR_LEN);
+ break;
+ case RTT_ATTRIBUTE_TARGET_TYPE:
+ rtt_target->type = nla_get_u8(iter2);
+ if (rtt_target->type == RTT_INVALID ||
+ (rtt_target->type == RTT_ONE_WAY &&
+ !capability.rtt_one_sided_supported)) {
+ WL_ERR(("doesn't support RTT type"
+ " : %d\n",
+ rtt_target->type));
+ err = -EINVAL;
+ goto exit;
+ }
+ break;
+ case RTT_ATTRIBUTE_TARGET_PEER:
+ rtt_target->peer = nla_get_u8(iter2);
+ break;
+ case RTT_ATTRIBUTE_TARGET_CHAN:
+ memcpy(&rtt_target->channel, nla_data(iter2),
+ sizeof(rtt_target->channel));
+ break;
+ case RTT_ATTRIBUTE_TARGET_PERIOD:
+ rtt_target->burst_period = nla_get_u32(iter2);
+ if (rtt_target->burst_period < 32) {
+ rtt_target->burst_period *= 100; /* 100 ms unit */
+ } else {
+ WL_ERR(("%d: burst period must be in (0-31)\n",
+ rtt_target->burst_period));
+ err = -EINVAL;
+ goto exit;
+ }
+ break;
+ case RTT_ATTRIBUTE_TARGET_NUM_BURST:
+ rtt_target->num_burst = nla_get_u32(iter2);
+ if (rtt_target->num_burst > 15) {
+ WL_ERR(("%d: num_burst must be in (0-15)\n",
+ rtt_target->num_burst));
+ err = -EINVAL;
+ goto exit;
+ }
+ rtt_target->num_burst = BIT(rtt_target->num_burst);
+ break;
+ case RTT_ATTRIBUTE_TARGET_NUM_FTM_BURST:
+ rtt_target->num_frames_per_burst =
+ nla_get_u32(iter2);
+ break;
+ case RTT_ATTRIBUTE_TARGET_NUM_RETRY_FTM:
+ rtt_target->num_retries_per_ftm =
+ nla_get_u32(iter2);
+ break;
+ case RTT_ATTRIBUTE_TARGET_NUM_RETRY_FTMR:
+ rtt_target->num_retries_per_ftmr =
+ nla_get_u32(iter2);
+ if (rtt_target->num_retries_per_ftmr > 3) {
+ WL_ERR(("%d: retries per FTMR must be in (0-3)\n",
+ rtt_target->num_retries_per_ftmr));
+ err = -EINVAL;
+ goto exit;
+ }
+ break;
+ case RTT_ATTRIBUTE_TARGET_LCI:
+ rtt_target->LCI_request = nla_get_u8(iter2);
+ break;
+ case RTT_ATTRIBUTE_TARGET_LCR:
+ rtt_target->LCR_request = nla_get_u8(iter2);
+ break;
+ case RTT_ATTRIBUTE_TARGET_BURST_DURATION:
+ if (nla_get_u32(iter2) > 1 &&
+ nla_get_u32(iter2) < 12) {
+ rtt_target->burst_duration =
+ dhd_rtt_idx_to_burst_duration(nla_get_u32(iter2));
+ } else if (nla_get_u32(iter2) == 15) {
+ /* use default value */
+ rtt_target->burst_duration = 0;
+ } else {
+ WL_ERR(("%d: burst duration index must be in (2-11) or 15\n",
+ nla_get_u32(iter2)));
+ err = -EINVAL;
+ goto exit;
+ }
+ break;
+ case RTT_ATTRIBUTE_TARGET_BW:
+ rtt_target->bw = nla_get_u8(iter2);
+ break;
+ case RTT_ATTRIBUTE_TARGET_PREAMBLE:
+ rtt_target->preamble = nla_get_u8(iter2);
+ break;
+ }
+ }
+ /* convert to chanspec value */
+ rtt_target->chanspec =
+ dhd_rtt_convert_to_chspec(rtt_target->channel);
+ if (rtt_target->chanspec == 0) {
+ WL_ERR(("Channel is not valid \n"));
+ err = -EINVAL;
+ goto exit;
+ }
+ WL_INFORM(("Target addr %s, Channel : %s for RTT \n",
+ bcm_ether_ntoa((const struct ether_addr *)&rtt_target->addr,
+ eabuf),
+ wf_chspec_ntoa(rtt_target->chanspec, chanbuf)));
+ rtt_target++;
+ }
+ break;
+ }
+ }
+ WL_DBG(("leave: target_cnt : %d\n", rtt_param.rtt_target_cnt));
+ if (dhd_dev_rtt_set_cfg(bcmcfg_to_prmry_ndev(cfg), &rtt_param) < 0) {
+ WL_ERR(("Could not set RTT configuration\n"));
+ err = -EINVAL;
+ }
+exit:
+ /* free the target info list */
+ kfree(rtt_param.target_info);
+ return err;
+}
+
+static int
+wl_cfgvendor_rtt_cancel_config(struct wiphy *wiphy, struct wireless_dev *wdev,
+ const void *data, int len)
+{
+ int err = 0, rem, type, target_cnt = 0;
+ int target_cnt_chk = 0;
+ const struct nlattr *iter;
+ struct ether_addr *mac_list = NULL, *mac_addr = NULL;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+
+ nla_for_each_attr(iter, data, len, rem) {
+ type = nla_type(iter);
+ switch (type) {
+ case RTT_ATTRIBUTE_TARGET_CNT:
+ if (mac_list != NULL) {
+ WL_ERR(("mac_list is not NULL\n"));
+ err = -EINVAL;
+ goto exit;
+ }
+ target_cnt = nla_get_u8(iter);
+ if (target_cnt > 0) {
+ mac_list = (struct ether_addr *)kzalloc(target_cnt * ETHER_ADDR_LEN,
+ GFP_KERNEL);
+ if (mac_list == NULL) {
+ WL_ERR(("failed to allocate mem for mac list\n"));
+ err = -ENOMEM;
+ goto exit;
+ }
+ mac_addr = &mac_list[0];
+ } else {
+ /* cancel the current whole RTT process */
+ goto cancel;
+ }
+ break;
+ case RTT_ATTRIBUTE_TARGET_MAC:
+ if (!mac_addr) {
+ WL_ERR(("mac_list is NULL\n"));
+ err = -EINVAL;
+ goto exit;
+ }
+ /* bounds check before writing into mac_list */
+ if (target_cnt_chk >= target_cnt) {
+ WL_ERR(("over target count\n"));
+ err = -EINVAL;
+ goto exit;
+ }
+ memcpy(mac_addr++, nla_data(iter), ETHER_ADDR_LEN);
+ target_cnt_chk++;
+ break;
+ }
+ }
+cancel:
+ if (dhd_dev_rtt_cancel_cfg(bcmcfg_to_prmry_ndev(cfg), mac_list, target_cnt) < 0) {
+ WL_ERR(("Could not cancel RTT configuration\n"));
+ err = -EINVAL;
+ }
+
+exit:
+ if (mac_list) {
+ kfree(mac_list);
+ }
+ return err;
+}
+
+static int
+wl_cfgvendor_rtt_get_capability(struct wiphy *wiphy, struct wireless_dev *wdev,
+ const void *data, int len)
+{
+ int err = 0;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ rtt_capabilities_t capability;
+
+ err = dhd_dev_rtt_capability(bcmcfg_to_prmry_ndev(cfg), &capability);
+ if (unlikely(err)) {
+ WL_ERR(("Vendor Command reply failed ret:%d \n", err));
+ goto exit;
+ }
+ err = wl_cfgvendor_send_cmd_reply(wiphy, bcmcfg_to_prmry_ndev(cfg),
+ &capability, sizeof(capability));
+
+ if (unlikely(err)) {
+ WL_ERR(("Vendor Command reply failed ret:%d \n", err));
+ }
+exit:
+ return err;
+}
+
+#endif /* RTT_SUPPORT */
+static int wl_cfgvendor_priv_string_handler(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ int err = 0;
+ int data_len = 0;
+
+ bzero(cfg->ioctl_buf, WLC_IOCTL_MAXLEN);
+
+ if (strncmp((char *)data, BRCM_VENDOR_SCMD_CAPA, strlen(BRCM_VENDOR_SCMD_CAPA)) == 0) {
+ err = wldev_iovar_getbuf(bcmcfg_to_prmry_ndev(cfg), "cap", NULL, 0,
+ cfg->ioctl_buf, WLC_IOCTL_MAXLEN, &cfg->ioctl_buf_sync);
+ if (unlikely(err)) {
+ WL_ERR(("error (%d)\n", err));
+ return err;
+ }
+ data_len = strlen(cfg->ioctl_buf);
+ cfg->ioctl_buf[data_len] = '\0';
+ }
+
+ err = wl_cfgvendor_send_cmd_reply(wiphy, bcmcfg_to_prmry_ndev(cfg),
+ cfg->ioctl_buf, data_len+1);
+ if (unlikely(err))
+ WL_ERR(("Vendor Command reply failed ret:%d \n", err));
+ else
+ WL_INFORM(("Vendor Command reply sent successfully!\n"));
+
+ return err;
+}
+
+#ifdef LINKSTAT_SUPPORT
+#define NUM_RATE 32
+#define NUM_PEER 1
+#define NUM_CHAN 11
+static int wl_cfgvendor_lstats_get_info(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ static char iovar_buf[WLC_IOCTL_MAXLEN];
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ int err = 0;
+ wifi_iface_stat *iface;
+ wifi_radio_stat *radio;
+ wl_wme_cnt_t *wl_wme_cnt;
+ wl_cnt_t *wl_cnt;
+ char *output;
+
+ WL_INFORM(("%s: Enter \n", __func__));
+
+ bzero(cfg->ioctl_buf, WLC_IOCTL_MAXLEN);
+ bzero(iovar_buf, WLC_IOCTL_MAXLEN);
+
+ output = cfg->ioctl_buf;
+ radio = (wifi_radio_stat *)output;
+
+ err = wldev_iovar_getbuf(bcmcfg_to_prmry_ndev(cfg), "radiostat", NULL, 0,
+ iovar_buf, WLC_IOCTL_MAXLEN, NULL);
+ if (unlikely(err)) {
+ WL_ERR(("error (%d) - size = %zu\n", err, sizeof(wifi_radio_stat)));
+ return err;
+ }
+ memcpy(output, iovar_buf, sizeof(wifi_radio_stat));
+
+ radio->num_channels = NUM_CHAN;
+ output += sizeof(wifi_radio_stat);
+ output += (NUM_CHAN*sizeof(wifi_channel_stat));
+
+ err = wldev_iovar_getbuf(bcmcfg_to_prmry_ndev(cfg), "wme_counters", NULL, 0,
+ iovar_buf, WLC_IOCTL_MAXLEN, NULL);
+ if (unlikely(err)) {
+ WL_ERR(("error (%d)\n", err));
+ return err;
+ }
+ wl_wme_cnt = (wl_wme_cnt_t *)iovar_buf;
+ iface = (wifi_iface_stat *)output;
+
+ iface->ac[WIFI_AC_VO].ac = WIFI_AC_VO;
+ iface->ac[WIFI_AC_VO].tx_mpdu = wl_wme_cnt->tx[AC_VO].packets;
+ iface->ac[WIFI_AC_VO].rx_mpdu = wl_wme_cnt->rx[AC_VO].packets;
+ iface->ac[WIFI_AC_VO].mpdu_lost = wl_wme_cnt->tx_failed[WIFI_AC_VO].packets;
+
+ iface->ac[WIFI_AC_VI].ac = WIFI_AC_VI;
+ iface->ac[WIFI_AC_VI].tx_mpdu = wl_wme_cnt->tx[AC_VI].packets;
+ iface->ac[WIFI_AC_VI].rx_mpdu = wl_wme_cnt->rx[AC_VI].packets;
+ iface->ac[WIFI_AC_VI].mpdu_lost = wl_wme_cnt->tx_failed[WIFI_AC_VI].packets;
+
+ iface->ac[WIFI_AC_BE].ac = WIFI_AC_BE;
+ iface->ac[WIFI_AC_BE].tx_mpdu = wl_wme_cnt->tx[AC_BE].packets;
+ iface->ac[WIFI_AC_BE].rx_mpdu = wl_wme_cnt->rx[AC_BE].packets;
+ iface->ac[WIFI_AC_BE].mpdu_lost = wl_wme_cnt->tx_failed[WIFI_AC_BE].packets;
+
+ iface->ac[WIFI_AC_BK].ac = WIFI_AC_BK;
+ iface->ac[WIFI_AC_BK].tx_mpdu = wl_wme_cnt->tx[AC_BK].packets;
+ iface->ac[WIFI_AC_BK].rx_mpdu = wl_wme_cnt->rx[AC_BK].packets;
+ iface->ac[WIFI_AC_BK].mpdu_lost = wl_wme_cnt->tx_failed[WIFI_AC_BK].packets;
+ bzero(iovar_buf, WLC_IOCTL_MAXLEN);
+
+ err = wldev_iovar_getbuf(bcmcfg_to_prmry_ndev(cfg), "counters", NULL, 0,
+ iovar_buf, WLC_IOCTL_MAXLEN, NULL);
+ if (unlikely(err)) {
+ WL_ERR(("error (%d) - size = %zu\n", err, sizeof(wl_cnt_t)));
+ return err;
+ }
+ wl_cnt = (wl_cnt_t *)iovar_buf;
+ iface->ac[WIFI_AC_BE].retries = wl_cnt->txretry;
+ iface->beacon_rx = wl_cnt->rxbeaconmbss;
+
+ err = wldev_get_rssi(bcmcfg_to_prmry_ndev(cfg), &iface->rssi_mgmt);
+ if (unlikely(err)) {
+ WL_ERR(("get_rssi error (%d)\n", err));
+ return err;
+ }
+
+ iface->num_peers = NUM_PEER;
+ iface->peer_info->num_rate = NUM_RATE;
+
+ bzero(iovar_buf, WLC_IOCTL_MAXLEN);
+ output = (char *)iface + sizeof(wifi_iface_stat) + NUM_PEER*sizeof(wifi_peer_info);
+
+ err = wldev_iovar_getbuf(bcmcfg_to_prmry_ndev(cfg), "ratestat", NULL, 0,
+ iovar_buf, WLC_IOCTL_MAXLEN, NULL);
+ if (unlikely(err)) {
+ WL_ERR(("error (%d) - size = %zu\n", err, NUM_RATE*sizeof(wifi_rate_stat)));
+ return err;
+ }
+ memcpy(output, iovar_buf, NUM_RATE*sizeof(wifi_rate_stat));
+
+ err = wl_cfgvendor_send_cmd_reply(wiphy, bcmcfg_to_prmry_ndev(cfg),
+ cfg->ioctl_buf, sizeof(wifi_radio_stat)+NUM_CHAN*sizeof(wifi_channel_stat)+
+ sizeof(wifi_iface_stat)+NUM_PEER*sizeof(wifi_peer_info)+
+ NUM_RATE*sizeof(wifi_rate_stat));
+ if (unlikely(err))
+ WL_ERR(("Vendor Command reply failed ret:%d \n", err));
+
+ return err;
+}
+#endif /* LINKSTAT_SUPPORT */
+
+static int wl_cfgvendor_set_country(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int err = BCME_ERROR, rem, type;
+ char country_code[WLC_CNTRY_BUF_SZ] = {0};
+ const struct nlattr *iter;
+
+ nla_for_each_attr(iter, data, len, rem) {
+ type = nla_type(iter);
+ switch (type) {
+ case ANDR_WIFI_ATTRIBUTE_COUNTRY:
+ memcpy(country_code, nla_data(iter),
+ MIN(nla_len(iter), WLC_CNTRY_BUF_SZ));
+ break;
+ default:
+ WL_ERR(("Unknown type: %d\n", type));
+ return err;
+ }
+ }
+
+ err = wldev_set_country(wdev->netdev, country_code, true, true);
+ if (err < 0) {
+ WL_ERR(("Set country failed ret:%d\n", err));
+ }
+
+ return err;
+}
+
+static int wl_cfgvendor_dbg_start_logging(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int ret = BCME_OK, rem, type;
+ char ring_name[DBGRING_NAME_MAX] = {0};
+ int log_level = 0, flags = 0, time_intval = 0, threshold = 0;
+ const struct nlattr *iter;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ dhd_pub_t *dhd_pub = cfg->pub;
+ nla_for_each_attr(iter, data, len, rem) {
+ type = nla_type(iter);
+ switch (type) {
+ case DEBUG_ATTRIBUTE_RING_NAME:
+ strncpy(ring_name, nla_data(iter),
+ MIN(sizeof(ring_name) -1, nla_len(iter)));
+ break;
+ case DEBUG_ATTRIBUTE_LOG_LEVEL:
+ log_level = nla_get_u32(iter);
+ break;
+ case DEBUG_ATTRIBUTE_RING_FLAGS:
+ flags = nla_get_u32(iter);
+ break;
+ case DEBUG_ATTRIBUTE_LOG_TIME_INTVAL:
+ time_intval = nla_get_u32(iter);
+ break;
+ case DEBUG_ATTRIBUTE_LOG_MIN_DATA_SIZE:
+ threshold = nla_get_u32(iter);
+ break;
+ default:
+ WL_ERR(("Unknown type: %d\n", type));
+ ret = BCME_BADADDR;
+ goto exit;
+ }
+ }
+
+ ret = dhd_os_start_logging(dhd_pub, ring_name, log_level, flags, time_intval, threshold);
+ if (ret < 0) {
+ WL_ERR(("start_logging is failed ret: %d\n", ret));
+ }
+exit:
+ return ret;
+}
+
+static int wl_cfgvendor_dbg_reset_logging(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int ret = BCME_OK;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ dhd_pub_t *dhd_pub = cfg->pub;
+
+ ret = dhd_os_reset_logging(dhd_pub);
+ if (ret < 0) {
+ WL_ERR(("reset logging is failed ret: %d\n", ret));
+ }
+
+ return ret;
+}
+
+static int wl_cfgvendor_dbg_trigger_mem_dump(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int ret = BCME_OK;
+ uint32 alloc_len;
+ struct sk_buff *skb;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+
+ ret = dhd_os_socram_dump(bcmcfg_to_prmry_ndev(cfg), &alloc_len);
+ if (ret) {
+ WL_ERR(("failed to call dhd_os_socram_dump : %d\n", ret));
+ goto exit;
+ }
+ /* Alloc the SKB for vendor_event */
+ skb = cfg80211_vendor_cmd_alloc_reply_skb(wiphy, 100);
+ if (!skb) {
+ WL_ERR(("skb allocation is failed\n"));
+ ret = BCME_NOMEM;
+ goto exit;
+ }
+ nla_put_u32(skb, DEBUG_ATTRIBUTE_FW_DUMP_LEN, alloc_len);
+
+ ret = cfg80211_vendor_cmd_reply(skb);
+
+ if (ret) {
+ WL_ERR(("Vendor Command reply failed ret:%d \n", ret));
+ }
+
+exit:
+ return ret;
+}
+
+static int wl_cfgvendor_dbg_get_mem_dump(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int ret = BCME_OK, rem, type;
+ int buf_len = 0;
+ void __user *user_buf = NULL;
+ const struct nlattr *iter;
+ char *mem_buf = NULL; /* vfree(NULL) is a no-op if no dump is copied */
+ struct sk_buff *skb;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+
+ nla_for_each_attr(iter, data, len, rem) {
+ type = nla_type(iter);
+ switch (type) {
+ case DEBUG_ATTRIBUTE_FW_DUMP_LEN:
+ buf_len = nla_get_u32(iter);
+ break;
+ case DEBUG_ATTRIBUTE_FW_DUMP_DATA:
+ user_buf = (void __user *)(unsigned long) nla_get_u64(iter);
+ break;
+ default:
+ WL_ERR(("Unknown type: %d\n", type));
+ ret = BCME_ERROR;
+ goto exit;
+ }
+ }
+ if (buf_len > 0 && user_buf) {
+ mem_buf = vmalloc(buf_len);
+ if (!mem_buf) {
+ WL_ERR(("failed to allocate mem_buf with size : %d\n", buf_len));
+ ret = BCME_NOMEM;
+ goto exit;
+ }
+ ret = dhd_os_get_socram_dump(bcmcfg_to_prmry_ndev(cfg), &mem_buf, &buf_len);
+ if (ret) {
+ WL_ERR(("failed to get_socram_dump : %d\n", ret));
+ goto free_mem;
+ }
+ ret = copy_to_user(user_buf, mem_buf, buf_len);
+ if (ret) {
+ WL_ERR(("failed to copy memdump into user buffer : %d\n", ret));
+ ret = -EFAULT;
+ goto free_mem;
+ }
+ /* Alloc the SKB for vendor_event */
+ skb = cfg80211_vendor_cmd_alloc_reply_skb(wiphy, 100);
+ if (!skb) {
+ WL_ERR(("skb allocation is failed\n"));
+ ret = BCME_NOMEM;
+ goto free_mem;
+ }
+ /* Indicate the memdump is successfully copied */
+ nla_put(skb, DEBUG_ATTRIBUTE_FW_DUMP_DATA, sizeof(ret), &ret);
+
+ ret = cfg80211_vendor_cmd_reply(skb);
+
+ if (ret) {
+ WL_ERR(("Vendor Command reply failed ret:%d \n", ret));
+ }
+ }
+
+free_mem:
+ vfree(mem_buf);
+exit:
+ return ret;
+}
+
+static int wl_cfgvendor_dbg_get_version(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int ret = BCME_OK, rem, type;
+ int buf_len = 1024;
+ bool dhd_ver = FALSE;
+ char *buf_ptr;
+ const struct nlattr *iter;
+ gfp_t kflags;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ kflags = in_atomic() ? GFP_ATOMIC : GFP_KERNEL;
+ buf_ptr = kzalloc(buf_len, kflags);
+ if (!buf_ptr) {
+ WL_ERR(("failed to allocate the buffer for version\n"));
+ ret = BCME_NOMEM;
+ goto exit;
+ }
+ nla_for_each_attr(iter, data, len, rem) {
+ type = nla_type(iter);
+ switch (type) {
+ case DEBUG_ATTRIBUTE_GET_DRIVER:
+ dhd_ver = TRUE;
+ break;
+ case DEBUG_ATTRIBUTE_GET_FW:
+ dhd_ver = FALSE;
+ break;
+ default:
+ WL_ERR(("Unknown type: %d\n", type));
+ ret = BCME_ERROR;
+ goto exit;
+ }
+ }
+ ret = dhd_os_get_version(bcmcfg_to_prmry_ndev(cfg), dhd_ver, &buf_ptr, buf_len);
+ if (ret < 0) {
+ WL_ERR(("failed to get the version %d\n", ret));
+ goto exit;
+ }
+ ret = wl_cfgvendor_send_cmd_reply(wiphy, bcmcfg_to_prmry_ndev(cfg),
+ buf_ptr, strlen(buf_ptr));
+exit:
+ kfree(buf_ptr);
+ return ret;
+}
+
+static int wl_cfgvendor_dbg_get_ring_status(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int ret = BCME_OK;
+ int ring_id, i;
+ int ring_cnt;
+ struct sk_buff *skb;
+ dhd_dbg_ring_status_t dbg_ring_status[DEBUG_RING_ID_MAX];
+ dhd_dbg_ring_status_t ring_status;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ dhd_pub_t *dhd_pub = cfg->pub;
+ memset(dbg_ring_status, 0, DBG_RING_STATUS_SIZE * DEBUG_RING_ID_MAX);
+ ring_cnt = 0;
+ for (ring_id = DEBUG_RING_ID_INVALID + 1; ring_id < DEBUG_RING_ID_MAX; ring_id++) {
+ ret = dhd_os_get_ring_status(dhd_pub, ring_id, &ring_status);
+ if (ret == BCME_NOTFOUND) {
+ WL_DBG(("The ring (%d) is not found \n", ring_id));
+ } else if (ret == BCME_OK) {
+ dbg_ring_status[ring_cnt++] = ring_status;
+ }
+ }
+ /* Alloc the SKB for vendor_event */
+ skb = cfg80211_vendor_cmd_alloc_reply_skb(wiphy,
+ (DBG_RING_STATUS_SIZE * ring_cnt) + 100);
+ if (!skb) {
+ WL_ERR(("skb allocation is failed\n"));
+ ret = BCME_NOMEM;
+ goto exit;
+ }
+
+ nla_put_u32(skb, DEBUG_ATTRIBUTE_RING_NUM, ring_cnt);
+ for (i = 0; i < ring_cnt; i++) {
+ nla_put(skb, DEBUG_ATTRIBUTE_RING_STATUS, DBG_RING_STATUS_SIZE,
+ &dbg_ring_status[i]);
+ }
+ ret = cfg80211_vendor_cmd_reply(skb);
+
+ if (ret) {
+ WL_ERR(("Vendor Command reply failed ret:%d \n", ret));
+ }
+exit:
+ return ret;
+}
+
+static int wl_cfgvendor_dbg_get_ring_data(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int ret = BCME_OK, rem, type;
+ char ring_name[DBGRING_NAME_MAX] = {0};
+ const struct nlattr *iter;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ dhd_pub_t *dhd_pub = cfg->pub;
+
+ nla_for_each_attr(iter, data, len, rem) {
+ type = nla_type(iter);
+ switch (type) {
+ case DEBUG_ATTRIBUTE_RING_NAME:
+ strncpy(ring_name, nla_data(iter),
+ MIN(sizeof(ring_name) -1, nla_len(iter)));
+ break;
+ default:
+ WL_ERR(("Unknown type: %d\n", type));
+ return ret;
+ }
+ }
+
+ ret = dhd_os_trigger_get_ring_data(dhd_pub, ring_name);
+ if (ret < 0) {
+ WL_ERR(("trigger_get_data failed ret:%d\n", ret));
+ }
+
+ return ret;
+}
+
+static int wl_cfgvendor_dbg_get_feature(struct wiphy *wiphy,
+ struct wireless_dev *wdev, const void *data, int len)
+{
+ int ret = BCME_OK;
+ u32 supported_features;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ dhd_pub_t *dhd_pub = cfg->pub;
+
+ ret = dhd_os_dbg_get_feature(dhd_pub, &supported_features);
+ if (ret < 0) {
+ WL_ERR(("dbg_get_feature failed ret:%d\n", ret));
+ goto exit;
+ }
+ ret = wl_cfgvendor_send_cmd_reply(wiphy, bcmcfg_to_prmry_ndev(cfg),
+ &supported_features, sizeof(supported_features));
+exit:
+ return ret;
+}
+
+static void wl_cfgvendor_dbg_ring_send_evt(void *ctx,
+ const int ring_id, const void *data, const uint32 len,
+ const dhd_dbg_ring_status_t ring_status)
+{
+ struct net_device *ndev = ctx;
+ struct wiphy *wiphy;
+ gfp_t kflags;
+ struct sk_buff *skb;
+ if (!ndev) {
+ WL_ERR(("ndev is NULL\n"));
+ return;
+ }
+ kflags = in_atomic() ? GFP_ATOMIC : GFP_KERNEL;
+ wiphy = ndev->ieee80211_ptr->wiphy;
+ /* Alloc the SKB for vendor_event */
+ skb = cfg80211_vendor_event_alloc(wiphy, len + 100,
+ GOOGLE_DEBUG_RING_EVENT, kflags);
+ if (!skb) {
+ WL_ERR(("skb alloc failed\n"));
+ return;
+ }
+ nla_put(skb, DEBUG_ATTRIBUTE_RING_STATUS, sizeof(ring_status), &ring_status);
+ nla_put(skb, DEBUG_ATTRIBUTE_RING_DATA, len, data);
+ cfg80211_vendor_event(skb, kflags);
+}
+
+
+static void wl_cfgvendor_dbg_send_urgent_evt(void *ctx, const void *data,
+ const uint32 len, const uint32 fw_len)
+{
+ struct net_device *ndev = ctx;
+ struct wiphy *wiphy;
+ gfp_t kflags;
+ struct sk_buff *skb;
+ if (!ndev) {
+ WL_ERR(("ndev is NULL\n"));
+ return;
+ }
+ kflags = in_atomic() ? GFP_ATOMIC : GFP_KERNEL;
+ wiphy = ndev->ieee80211_ptr->wiphy;
+ /* Alloc the SKB for vendor_event */
+ skb = cfg80211_vendor_event_alloc(wiphy, len + 100,
+ GOOGLE_FW_DUMP_EVENT, kflags);
+ if (!skb) {
+ WL_ERR(("skb alloc failed\n"));
+ return;
+ }
+ nla_put_u32(skb, DEBUG_ATTRIBUTE_FW_DUMP_LEN, fw_len);
+ nla_put(skb, DEBUG_ATTRIBUTE_RING_DATA, len, data);
+ cfg80211_vendor_event(skb, kflags);
+}
+
+
+#if defined(KEEP_ALIVE)
+static int wl_cfgvendor_start_mkeep_alive(struct wiphy *wiphy, struct wireless_dev *wdev,
+ const void *data, int len)
+{
+ /* max size of IP packet for keep alive */
+ const int MKEEP_ALIVE_IP_PKT_MAX = 256;
+
+ int ret = BCME_OK, rem, type;
+ u8 mkeep_alive_id = 0;
+ u8 *ip_pkt = NULL;
+ u16 ip_pkt_len = 0;
+ u8 src_mac[ETHER_ADDR_LEN];
+ u8 dst_mac[ETHER_ADDR_LEN];
+ u32 period_msec = 0;
+ const struct nlattr *iter;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ dhd_pub_t *dhd_pub = cfg->pub;
+ gfp_t kflags = in_atomic() ? GFP_ATOMIC : GFP_KERNEL;
+
+ nla_for_each_attr(iter, data, len, rem) {
+ type = nla_type(iter);
+ switch (type) {
+ case MKEEP_ALIVE_ATTRIBUTE_ID:
+ mkeep_alive_id = nla_get_u8(iter);
+ break;
+ case MKEEP_ALIVE_ATTRIBUTE_IP_PKT_LEN:
+ ip_pkt_len = nla_get_u16(iter);
+ if (ip_pkt_len > MKEEP_ALIVE_IP_PKT_MAX) {
+ ret = BCME_BADARG;
+ goto exit;
+ }
+ break;
+ case MKEEP_ALIVE_ATTRIBUTE_IP_PKT:
+ if (ip_pkt_len == 0 || nla_len(iter) < ip_pkt_len) {
+ WL_ERR(("Invalid ip packet length: %d\n", ip_pkt_len));
+ ret = BCME_BADARG;
+ goto exit;
+ }
+ ip_pkt = (u8 *)kzalloc(ip_pkt_len, kflags);
+ if (ip_pkt == NULL) {
+ ret = BCME_NOMEM;
+ WL_ERR(("Failed to allocate mem for ip packet\n"));
+ goto exit;
+ }
+ memcpy(ip_pkt, (u8 *)nla_data(iter), ip_pkt_len);
+ break;
+ case MKEEP_ALIVE_ATTRIBUTE_SRC_MAC_ADDR:
+ memcpy(src_mac, nla_data(iter), ETHER_ADDR_LEN);
+ break;
+ case MKEEP_ALIVE_ATTRIBUTE_DST_MAC_ADDR:
+ memcpy(dst_mac, nla_data(iter), ETHER_ADDR_LEN);
+ break;
+ case MKEEP_ALIVE_ATTRIBUTE_PERIOD_MSEC:
+ period_msec = nla_get_u32(iter);
+ break;
+ default:
+ WL_ERR(("Unknown type: %d\n", type));
+ ret = BCME_BADARG;
+ goto exit;
+ }
+ }
+
+ ret = dhd_dev_start_mkeep_alive(dhd_pub, mkeep_alive_id, ip_pkt, ip_pkt_len, src_mac,
+ dst_mac, period_msec);
+ if (ret < 0) {
+ WL_ERR(("start_mkeep_alive failed, ret: %d\n", ret));
+ }
+
+exit:
+ if (ip_pkt) {
+ kfree(ip_pkt);
+ }
+
+ return ret;
+}
+
+static int wl_cfgvendor_stop_mkeep_alive(struct wiphy *wiphy, struct wireless_dev *wdev,
+ const void *data, int len)
+{
+ int ret = BCME_OK, rem, type;
+ u8 mkeep_alive_id = 0;
+ const struct nlattr *iter;
+ struct bcm_cfg80211 *cfg = wiphy_priv(wiphy);
+ dhd_pub_t *dhd_pub = cfg->pub;
+
+ nla_for_each_attr(iter, data, len, rem) {
+ type = nla_type(iter);
+ switch (type) {
+ case MKEEP_ALIVE_ATTRIBUTE_ID:
+ mkeep_alive_id = nla_get_u8(iter);
+ break;
+ default:
+ WL_ERR(("Unknown type: %d\n", type));
+ ret = BCME_BADARG;
+ break;
+ }
+ }
+
+ ret = dhd_dev_stop_mkeep_alive(dhd_pub, mkeep_alive_id);
+ if (ret < 0) {
+ WL_ERR(("stop_mkeep_alive failed, ret: %d\n", ret));
+ }
+
+ return ret;
+}
+#endif /* defined(KEEP_ALIVE) */
+
+
+static const struct wiphy_vendor_command wl_vendor_cmds [] = {
+ {
+ {
+ .vendor_id = OUI_BRCM,
+ .subcmd = BRCM_VENDOR_SCMD_PRIV_STR
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_priv_string_handler
+ },
+#ifdef GSCAN_SUPPORT
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = GSCAN_SUBCMD_GET_CAPABILITIES
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_gscan_get_capabilities
+ },
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = GSCAN_SUBCMD_SET_CONFIG
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_set_scan_cfg
+ },
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = GSCAN_SUBCMD_SET_SCAN_CONFIG
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_set_batch_scan_cfg
+ },
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = GSCAN_SUBCMD_ENABLE_GSCAN
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_initiate_gscan
+ },
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = GSCAN_SUBCMD_ENABLE_FULL_SCAN_RESULTS
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_enable_full_scan_result
+ },
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = GSCAN_SUBCMD_SET_HOTLIST
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_hotlist_cfg
+ },
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = GSCAN_SUBCMD_SET_SIGNIFICANT_CHANGE_CONFIG
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_significant_change_cfg
+ },
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = GSCAN_SUBCMD_GET_SCAN_RESULTS
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_gscan_get_batch_results
+ },
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = GSCAN_SUBCMD_GET_CHANNEL_LIST
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_gscan_get_channel_list
+ },
+#endif /* GSCAN_SUPPORT */
+#ifdef RTT_SUPPORT
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = RTT_SUBCMD_SET_CONFIG
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_rtt_set_config
+ },
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = RTT_SUBCMD_CANCEL_CONFIG
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_rtt_cancel_config
+ },
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = RTT_SUBCMD_GETCAPABILITY
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_rtt_get_capability
+ },
+#endif /* RTT_SUPPORT */
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = ANDR_WIFI_SUBCMD_GET_FEATURE_SET
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_get_feature_set
+ },
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = ANDR_WIFI_SUBCMD_GET_FEATURE_SET_MATRIX
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_get_feature_set_matrix
+ },
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = ANDR_WIFI_RANDOM_MAC_OUI
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_set_rand_mac_oui
+ },
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = ANDR_WIFI_NODFS_CHANNELS
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_set_nodfs_flag
+
+ },
+#ifdef LINKSTAT_SUPPORT
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = LSTATS_SUBCMD_GET_INFO
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_lstats_get_info
+ },
+#endif /* LINKSTAT_SUPPORT */
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = ANDR_WIFI_SET_COUNTRY
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_set_country
+ },
+#ifdef GSCAN_SUPPORT
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = GSCAN_SUBCMD_SET_EPNO_SSID
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_epno_cfg
+
+ },
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = WIFI_SUBCMD_SET_SSID_WHITELIST
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_set_ssid_whitelist
+
+ },
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = WIFI_SUBCMD_SET_LAZY_ROAM_PARAMS
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_set_lazy_roam_cfg
+
+ },
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = WIFI_SUBCMD_ENABLE_LAZY_ROAM
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_enable_lazy_roam
+
+ },
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = WIFI_SUBCMD_SET_BSSID_PREF
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_set_bssid_pref
+
+ },
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = WIFI_SUBCMD_SET_BSSID_BLACKLIST
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_set_bssid_blacklist
+ },
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = GSCAN_SUBCMD_ANQPO_CONFIG
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_gscan_anqpo_config
+ },
+#endif /* GSCAN_SUPPORT */
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = DEBUG_START_LOGGING
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_dbg_start_logging
+ },
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = DEBUG_RESET_LOGGING
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_dbg_reset_logging
+ },
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = DEBUG_TRIGGER_MEM_DUMP
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_dbg_trigger_mem_dump
+ },
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = DEBUG_GET_MEM_DUMP
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_dbg_get_mem_dump
+ },
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = DEBUG_GET_VER
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_dbg_get_version
+ },
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = DEBUG_GET_RING_STATUS
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_dbg_get_ring_status
+ },
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = DEBUG_GET_RING_DATA
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_dbg_get_ring_data
+ },
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = DEBUG_GET_FEATURE
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_dbg_get_feature
+ },
+#ifdef KEEP_ALIVE
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = WIFI_OFFLOAD_SUBCMD_START_MKEEP_ALIVE
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_start_mkeep_alive
+ },
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = WIFI_OFFLOAD_SUBCMD_STOP_MKEEP_ALIVE
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_stop_mkeep_alive
+ },
+#endif /* KEEP_ALIVE */
+ {
+ {
+ .vendor_id = OUI_GOOGLE,
+ .subcmd = WIFI_SUBCMD_SET_RSSI_MONITOR
+ },
+ .flags = WIPHY_VENDOR_CMD_NEED_WDEV | WIPHY_VENDOR_CMD_NEED_NETDEV,
+ .doit = wl_cfgvendor_set_rssi_monitor
+ }
+};
+
+static const struct nl80211_vendor_cmd_info wl_vendor_events [] = {
+ { OUI_BRCM, BRCM_VENDOR_EVENT_UNSPEC },
+ { OUI_BRCM, BRCM_VENDOR_EVENT_PRIV_STR },
+#ifdef GSCAN_SUPPORT
+ { OUI_GOOGLE, GOOGLE_GSCAN_SIGNIFICANT_EVENT },
+ { OUI_GOOGLE, GOOGLE_GSCAN_GEOFENCE_FOUND_EVENT },
+ { OUI_GOOGLE, GOOGLE_GSCAN_BATCH_SCAN_EVENT },
+ { OUI_GOOGLE, GOOGLE_SCAN_FULL_RESULTS_EVENT },
+#endif /* GSCAN_SUPPORT */
+#ifdef RTT_SUPPORT
+ { OUI_GOOGLE, GOOGLE_RTT_COMPLETE_EVENT },
+#endif /* RTT_SUPPORT */
+#ifdef GSCAN_SUPPORT
+ { OUI_GOOGLE, GOOGLE_SCAN_COMPLETE_EVENT },
+ { OUI_GOOGLE, GOOGLE_GSCAN_GEOFENCE_LOST_EVENT },
+ { OUI_GOOGLE, GOOGLE_SCAN_EPNO_EVENT },
+#endif /* GSCAN_SUPPORT */
+ { OUI_GOOGLE, GOOGLE_DEBUG_RING_EVENT },
+ { OUI_GOOGLE, GOOGLE_FW_DUMP_EVENT },
+#ifdef GSCAN_SUPPORT
+ { OUI_GOOGLE, GOOGLE_PNO_HOTSPOT_FOUND_EVENT },
+#endif /* GSCAN_SUPPORT */
+#ifdef KEEP_ALIVE
+ { OUI_GOOGLE, GOOGLE_MKEEP_ALIVE_EVENT },
+#endif /* KEEP_ALIVE */
+ { OUI_GOOGLE, GOOGLE_RSSI_MONITOR_EVENT }
+};
+
+int wl_cfgvendor_attach(struct wiphy *wiphy, dhd_pub_t *dhd)
+{
+ WL_INFORM(("Vendor: Register BRCM cfg80211 vendor cmd(0x%x) interface \n",
+ NL80211_CMD_VENDOR));
+
+ wiphy->vendor_commands = wl_vendor_cmds;
+ wiphy->n_vendor_commands = ARRAY_SIZE(wl_vendor_cmds);
+ wiphy->vendor_events = wl_vendor_events;
+ wiphy->n_vendor_events = ARRAY_SIZE(wl_vendor_events);
+ dhd_os_dbg_register_callback(FW_VERBOSE_RING_ID, wl_cfgvendor_dbg_ring_send_evt);
+ dhd_os_dbg_register_callback(FW_EVENT_RING_ID, wl_cfgvendor_dbg_ring_send_evt);
+ dhd_os_dbg_register_callback(DHD_EVENT_RING_ID, wl_cfgvendor_dbg_ring_send_evt);
+ dhd_os_dbg_register_urgent_notifier(dhd, wl_cfgvendor_dbg_send_urgent_evt);
+ return 0;
+}
+
+int wl_cfgvendor_detach(struct wiphy *wiphy)
+{
+ WL_INFORM(("Vendor: Unregister BRCM cfg80211 vendor interface \n"));
+
+ wiphy->vendor_commands = NULL;
+ wiphy->vendor_events = NULL;
+ wiphy->n_vendor_commands = 0;
+ wiphy->n_vendor_events = 0;
+
+ return 0;
+}
+#endif /* (LINUX_VERSION_CODE > KERNEL_VERSION(3, 13, 0)) || defined(WL_VENDOR_EXT_SUPPORT) */
diff --git a/drivers/net/wireless/bcmdhd/wl_cfgvendor.h b/drivers/net/wireless/bcmdhd/wl_cfgvendor.h
new file mode 100644
index 0000000..58077b3
--- /dev/null
+++ b/drivers/net/wireless/bcmdhd/wl_cfgvendor.h
@@ -0,0 +1,383 @@
+/*
+ * Linux cfg80211 Vendor Extension Code
+ *
+ * Copyright (C) 1999-2014, Broadcom Corporation
+ *
+ * Unless you and Broadcom execute a separate written software license
+ * agreement governing use of this software, this software is licensed to you
+ * under the terms of the GNU General Public License version 2 (the "GPL"),
+ * available at http://www.broadcom.com/licenses/GPLv2.php, with the
+ * following added to such license:
+ *
+ * As a special exception, the copyright holders of this software give you
+ * permission to link this software with independent modules, and to copy and
+ * distribute the resulting executable under terms of your choice, provided that
+ * you also meet, for each linked independent module, the terms and conditions of
+ * the license of that module. An independent module is a module which is not
+ * derived from this software. The special exception does not apply to any
+ * modifications of the software.
+ *
+ * Notwithstanding the above, under no circumstances may you combine this
+ * software in any way with any other Broadcom software provided under a license
+ * other than the GPL, without Broadcom's express prior written consent.
+ *
+ * $Id: wl_cfgvendor.h 473890 2014-04-30 01:55:06Z $
+ */
+
+/*
+ * New vendor interface addition to nl80211/cfg80211 to allow vendors
+ * to implement proprietary features over the cfg80211 stack.
+ */
+
+#ifndef _wl_cfgvendor_h_
+#define _wl_cfgvendor_h_
+
+#define OUI_BRCM 0x001018
+#define OUI_GOOGLE 0x001A11
+#define BRCM_VENDOR_SUBCMD_PRIV_STR 1
+#define ATTRIBUTE_U32_LEN (NLA_HDRLEN + 4)
+#define VENDOR_ID_OVERHEAD ATTRIBUTE_U32_LEN
+#define VENDOR_SUBCMD_OVERHEAD ATTRIBUTE_U32_LEN
+#define VENDOR_DATA_OVERHEAD (NLA_HDRLEN)
+
+#define SCAN_RESULTS_COMPLETE_FLAG_LEN ATTRIBUTE_U32_LEN
+#define SCAN_INDEX_HDR_LEN (NLA_HDRLEN)
+#define SCAN_ID_HDR_LEN ATTRIBUTE_U32_LEN
+#define SCAN_FLAGS_HDR_LEN ATTRIBUTE_U32_LEN
+#define GSCAN_NUM_RESULTS_HDR_LEN ATTRIBUTE_U32_LEN
+#define GSCAN_RESULTS_HDR_LEN (NLA_HDRLEN)
+#define GSCAN_BATCH_RESULT_HDR_LEN (SCAN_INDEX_HDR_LEN + SCAN_ID_HDR_LEN + \
+ SCAN_FLAGS_HDR_LEN + \
+ GSCAN_NUM_RESULTS_HDR_LEN + \
+ GSCAN_RESULTS_HDR_LEN)
+
+#define VENDOR_REPLY_OVERHEAD (VENDOR_ID_OVERHEAD + \
+ VENDOR_SUBCMD_OVERHEAD + \
+ VENDOR_DATA_OVERHEAD)
+typedef enum {
+ /* don't use 0 as a valid subcommand */
+ VENDOR_NL80211_SUBCMD_UNSPECIFIED,
+
+ /* define all vendor startup commands between 0x0 and 0x0FFF */
+ VENDOR_NL80211_SUBCMD_RANGE_START = 0x0001,
+ VENDOR_NL80211_SUBCMD_RANGE_END = 0x0FFF,
+
+ /* define all GScan related commands between 0x1000 and 0x10FF */
+ ANDROID_NL80211_SUBCMD_GSCAN_RANGE_START = 0x1000,
+ ANDROID_NL80211_SUBCMD_GSCAN_RANGE_END = 0x10FF,
+
+
+ /* define all RTT related commands between 0x1100 and 0x11FF */
+ ANDROID_NL80211_SUBCMD_RTT_RANGE_START = 0x1100,
+ ANDROID_NL80211_SUBCMD_RTT_RANGE_END = 0x11FF,
+
+ ANDROID_NL80211_SUBCMD_LSTATS_RANGE_START = 0x1200,
+ ANDROID_NL80211_SUBCMD_LSTATS_RANGE_END = 0x12FF,
+
+ ANDROID_NL80211_SUBCMD_TDLS_RANGE_START = 0x1300,
+ ANDROID_NL80211_SUBCMD_TDLS_RANGE_END = 0x13FF,
+
+ ANDROID_NL80211_SUBCMD_DEBUG_RANGE_START = 0x1400,
+ ANDROID_NL80211_SUBCMD_DEBUG_RANGE_END = 0x14FF,
+
+ /* define all NearbyDiscovery related commands between 0x1500 and 0x15FF */
+ ANDROID_NL80211_SUBCMD_NBD_RANGE_START = 0x1500,
+ ANDROID_NL80211_SUBCMD_NBD_RANGE_END = 0x15FF,
+
+ /* define all wifi calling related commands between 0x1600 and 0x16FF */
+ ANDROID_NL80211_SUBCMD_WIFI_OFFLOAD_RANGE_START = 0x1600,
+ ANDROID_NL80211_SUBCMD_WIFI_OFFLOAD_RANGE_END = 0x16FF,
+
+ /* This is reserved for future usage */
+
+} ANDROID_VENDOR_SUB_COMMAND;
+
+enum wl_vendor_subcmd {
+ BRCM_VENDOR_SCMD_UNSPEC,
+ BRCM_VENDOR_SCMD_PRIV_STR,
+ GSCAN_SUBCMD_GET_CAPABILITIES = ANDROID_NL80211_SUBCMD_GSCAN_RANGE_START,
+ GSCAN_SUBCMD_SET_CONFIG,
+ GSCAN_SUBCMD_SET_SCAN_CONFIG,
+ GSCAN_SUBCMD_ENABLE_GSCAN,
+ GSCAN_SUBCMD_GET_SCAN_RESULTS,
+ GSCAN_SUBCMD_SCAN_RESULTS,
+ GSCAN_SUBCMD_SET_HOTLIST,
+ GSCAN_SUBCMD_SET_SIGNIFICANT_CHANGE_CONFIG,
+ GSCAN_SUBCMD_ENABLE_FULL_SCAN_RESULTS,
+ GSCAN_SUBCMD_GET_CHANNEL_LIST,
+ ANDR_WIFI_SUBCMD_GET_FEATURE_SET,
+ ANDR_WIFI_SUBCMD_GET_FEATURE_SET_MATRIX,
+ ANDR_WIFI_RANDOM_MAC_OUI,
+ ANDR_WIFI_NODFS_CHANNELS,
+ ANDR_WIFI_SET_COUNTRY,
+ GSCAN_SUBCMD_SET_EPNO_SSID,
+ WIFI_SUBCMD_SET_SSID_WHITELIST,
+ WIFI_SUBCMD_SET_LAZY_ROAM_PARAMS,
+ WIFI_SUBCMD_ENABLE_LAZY_ROAM,
+ WIFI_SUBCMD_SET_BSSID_PREF,
+ WIFI_SUBCMD_SET_BSSID_BLACKLIST,
+ GSCAN_SUBCMD_ANQPO_CONFIG,
+ WIFI_SUBCMD_SET_RSSI_MONITOR,
+ RTT_SUBCMD_SET_CONFIG = ANDROID_NL80211_SUBCMD_RTT_RANGE_START,
+ RTT_SUBCMD_CANCEL_CONFIG,
+ RTT_SUBCMD_GETCAPABILITY,
+ LSTATS_SUBCMD_GET_INFO = ANDROID_NL80211_SUBCMD_LSTATS_RANGE_START,
+ DEBUG_START_LOGGING = ANDROID_NL80211_SUBCMD_DEBUG_RANGE_START,
+ DEBUG_TRIGGER_MEM_DUMP,
+ DEBUG_GET_MEM_DUMP,
+ DEBUG_GET_VER,
+ DEBUG_GET_RING_STATUS,
+ DEBUG_GET_RING_DATA,
+ DEBUG_GET_FEATURE,
+ DEBUG_RESET_LOGGING,
+ WIFI_OFFLOAD_SUBCMD_START_MKEEP_ALIVE = ANDROID_NL80211_SUBCMD_WIFI_OFFLOAD_RANGE_START,
+ WIFI_OFFLOAD_SUBCMD_STOP_MKEEP_ALIVE,
+ /* Add more sub commands here */
+ VENDOR_SUBCMD_MAX
+};
+
+enum gscan_attributes {
+ GSCAN_ATTRIBUTE_NUM_BUCKETS = 10,
+ GSCAN_ATTRIBUTE_BASE_PERIOD,
+ GSCAN_ATTRIBUTE_BUCKETS_BAND,
+ GSCAN_ATTRIBUTE_BUCKET_ID,
+ GSCAN_ATTRIBUTE_BUCKET_PERIOD,
+ GSCAN_ATTRIBUTE_BUCKET_NUM_CHANNELS,
+ GSCAN_ATTRIBUTE_BUCKET_CHANNELS,
+ GSCAN_ATTRIBUTE_NUM_AP_PER_SCAN,
+ GSCAN_ATTRIBUTE_REPORT_THRESHOLD,
+ GSCAN_ATTRIBUTE_NUM_SCANS_TO_CACHE,
+ GSCAN_ATTRIBUTE_BAND = GSCAN_ATTRIBUTE_BUCKETS_BAND,
+
+ GSCAN_ATTRIBUTE_ENABLE_FEATURE = 20,
+ GSCAN_ATTRIBUTE_SCAN_RESULTS_COMPLETE,
+ GSCAN_ATTRIBUTE_FLUSH_FEATURE,
+ GSCAN_ATTRIBUTE_ENABLE_FULL_SCAN_RESULTS,
+ GSCAN_ATTRIBUTE_REPORT_EVENTS,
+ /* remaining reserved for additional attributes */
+ GSCAN_ATTRIBUTE_NUM_OF_RESULTS = 30,
+ GSCAN_ATTRIBUTE_FLUSH_RESULTS,
+ GSCAN_ATTRIBUTE_SCAN_RESULTS, /* flat array of wifi_scan_result */
+ GSCAN_ATTRIBUTE_SCAN_ID, /* indicates scan number */
+ GSCAN_ATTRIBUTE_SCAN_FLAGS, /* indicates if scan was aborted */
+ GSCAN_ATTRIBUTE_AP_FLAGS, /* flags on significant change event */
+ GSCAN_ATTRIBUTE_NUM_CHANNELS,
+ GSCAN_ATTRIBUTE_CHANNEL_LIST,
+
+ /* remaining reserved for additional attributes */
+
+ GSCAN_ATTRIBUTE_SSID = 40,
+ GSCAN_ATTRIBUTE_BSSID,
+ GSCAN_ATTRIBUTE_CHANNEL,
+ GSCAN_ATTRIBUTE_RSSI,
+ GSCAN_ATTRIBUTE_TIMESTAMP,
+ GSCAN_ATTRIBUTE_RTT,
+ GSCAN_ATTRIBUTE_RTTSD,
+
+ /* remaining reserved for additional attributes */
+
+ GSCAN_ATTRIBUTE_HOTLIST_BSSIDS = 50,
+ GSCAN_ATTRIBUTE_RSSI_LOW,
+ GSCAN_ATTRIBUTE_RSSI_HIGH,
+ GSCAN_ATTRIBUTE_HOSTLIST_BSSID_ELEM,
+ GSCAN_ATTRIBUTE_HOTLIST_FLUSH,
+
+ /* remaining reserved for additional attributes */
+ GSCAN_ATTRIBUTE_RSSI_SAMPLE_SIZE = 60,
+ GSCAN_ATTRIBUTE_LOST_AP_SAMPLE_SIZE,
+ GSCAN_ATTRIBUTE_MIN_BREACHING,
+ GSCAN_ATTRIBUTE_SIGNIFICANT_CHANGE_BSSIDS,
+ GSCAN_ATTRIBUTE_SIGNIFICANT_CHANGE_FLUSH,
+
+ /* EPNO */
+ GSCAN_ATTRIBUTE_EPNO_SSID_LIST = 70,
+ GSCAN_ATTRIBUTE_EPNO_SSID,
+ GSCAN_ATTRIBUTE_EPNO_SSID_LEN,
+ GSCAN_ATTRIBUTE_EPNO_RSSI,
+ GSCAN_ATTRIBUTE_EPNO_FLAGS,
+ GSCAN_ATTRIBUTE_EPNO_AUTH,
+ GSCAN_ATTRIBUTE_EPNO_SSID_NUM,
+ GSCAN_ATTRIBUTE_EPNO_FLUSH,
+
+ /* Roam SSID Whitelist and BSSID pref */
+ GSCAN_ATTRIBUTE_WHITELIST_SSID = 80,
+ GSCAN_ATTRIBUTE_NUM_WL_SSID,
+ GSCAN_ATTRIBUTE_WL_SSID_LEN,
+ GSCAN_ATTRIBUTE_WL_SSID_FLUSH,
+ GSCAN_ATTRIBUTE_WHITELIST_SSID_ELEM,
+ GSCAN_ATTRIBUTE_NUM_BSSID,
+ GSCAN_ATTRIBUTE_BSSID_PREF_LIST,
+ GSCAN_ATTRIBUTE_BSSID_PREF_FLUSH,
+ GSCAN_ATTRIBUTE_BSSID_PREF,
+ GSCAN_ATTRIBUTE_RSSI_MODIFIER,
+
+
+ /* Roam cfg */
+ GSCAN_ATTRIBUTE_A_BAND_BOOST_THRESHOLD = 90,
+ GSCAN_ATTRIBUTE_A_BAND_PENALTY_THRESHOLD,
+ GSCAN_ATTRIBUTE_A_BAND_BOOST_FACTOR,
+ GSCAN_ATTRIBUTE_A_BAND_PENALTY_FACTOR,
+ GSCAN_ATTRIBUTE_A_BAND_MAX_BOOST,
+ GSCAN_ATTRIBUTE_LAZY_ROAM_HYSTERESIS,
+ GSCAN_ATTRIBUTE_ALERT_ROAM_RSSI_TRIGGER,
+ GSCAN_ATTRIBUTE_LAZY_ROAM_ENABLE,
+
+ /* BSSID blacklist */
+ GSCAN_ATTRIBUTE_BSSID_BLACKLIST_FLUSH = 100,
+ GSCAN_ATTRIBUTE_BLACKLIST_BSSID,
+
+ GSCAN_ATTRIBUTE_ANQPO_HS_LIST = 110,
+ GSCAN_ATTRIBUTE_ANQPO_HS_LIST_SIZE,
+ GSCAN_ATTRIBUTE_ANQPO_HS_NETWORK_ID,
+ GSCAN_ATTRIBUTE_ANQPO_HS_NAI_REALM,
+ GSCAN_ATTRIBUTE_ANQPO_HS_ROAM_CONSORTIUM_ID,
+ GSCAN_ATTRIBUTE_ANQPO_HS_PLMN,
+
+ /* Adaptive scan attributes */
+ GSCAN_ATTRIBUTE_BUCKET_STEP_COUNT = 120,
+ GSCAN_ATTRIBUTE_BUCKET_MAX_PERIOD,
+
+ GSCAN_ATTRIBUTE_MAX
+};
+
+enum gscan_bucket_attributes {
+ GSCAN_ATTRIBUTE_CH_BUCKET_1,
+ GSCAN_ATTRIBUTE_CH_BUCKET_2,
+ GSCAN_ATTRIBUTE_CH_BUCKET_3,
+ GSCAN_ATTRIBUTE_CH_BUCKET_4,
+ GSCAN_ATTRIBUTE_CH_BUCKET_5,
+ GSCAN_ATTRIBUTE_CH_BUCKET_6,
+ GSCAN_ATTRIBUTE_CH_BUCKET_7
+};
+
+enum gscan_ch_attributes {
+ GSCAN_ATTRIBUTE_CH_ID_1,
+ GSCAN_ATTRIBUTE_CH_ID_2,
+ GSCAN_ATTRIBUTE_CH_ID_3,
+ GSCAN_ATTRIBUTE_CH_ID_4,
+ GSCAN_ATTRIBUTE_CH_ID_5,
+ GSCAN_ATTRIBUTE_CH_ID_6,
+ GSCAN_ATTRIBUTE_CH_ID_7
+};
+
+enum rtt_attributes {
+ RTT_ATTRIBUTE_TARGET_CNT,
+ RTT_ATTRIBUTE_TARGET_INFO,
+ RTT_ATTRIBUTE_TARGET_MAC,
+ RTT_ATTRIBUTE_TARGET_TYPE,
+ RTT_ATTRIBUTE_TARGET_PEER,
+ RTT_ATTRIBUTE_TARGET_CHAN,
+ RTT_ATTRIBUTE_TARGET_PERIOD,
+ RTT_ATTRIBUTE_TARGET_NUM_BURST,
+ RTT_ATTRIBUTE_TARGET_NUM_FTM_BURST,
+ RTT_ATTRIBUTE_TARGET_NUM_RETRY_FTM,
+ RTT_ATTRIBUTE_TARGET_NUM_RETRY_FTMR,
+ RTT_ATTRIBUTE_TARGET_LCI,
+ RTT_ATTRIBUTE_TARGET_LCR,
+ RTT_ATTRIBUTE_TARGET_BURST_DURATION,
+ RTT_ATTRIBUTE_TARGET_PREAMBLE,
+ RTT_ATTRIBUTE_TARGET_BW,
+ RTT_ATTRIBUTE_RESULTS_COMPLETE = 30,
+ RTT_ATTRIBUTE_RESULTS_PER_TARGET,
+ RTT_ATTRIBUTE_RESULT_CNT,
+ RTT_ATTRIBUTE_RESULT
+};
+
+enum debug_attributes {
+ DEBUG_ATTRIBUTE_GET_DRIVER,
+ DEBUG_ATTRIBUTE_GET_FW,
+ DEBUG_ATTRIBUTE_RING_ID,
+ DEBUG_ATTRIBUTE_RING_NAME,
+ DEBUG_ATTRIBUTE_RING_FLAGS,
+ DEBUG_ATTRIBUTE_LOG_LEVEL,
+ DEBUG_ATTRIBUTE_LOG_TIME_INTVAL,
+ DEBUG_ATTRIBUTE_LOG_MIN_DATA_SIZE,
+ DEBUG_ATTRIBUTE_FW_DUMP_LEN,
+ DEBUG_ATTRIBUTE_FW_DUMP_DATA,
+ DEBUG_ATTRIBUTE_RING_DATA,
+ DEBUG_ATTRIBUTE_RING_STATUS,
+ DEBUG_ATTRIBUTE_RING_NUM
+};
+
+enum mkeep_alive_attributes {
+ MKEEP_ALIVE_ATTRIBUTE_ID,
+ MKEEP_ALIVE_ATTRIBUTE_IP_PKT,
+ MKEEP_ALIVE_ATTRIBUTE_IP_PKT_LEN,
+ MKEEP_ALIVE_ATTRIBUTE_SRC_MAC_ADDR,
+ MKEEP_ALIVE_ATTRIBUTE_DST_MAC_ADDR,
+ MKEEP_ALIVE_ATTRIBUTE_PERIOD_MSEC
+};
+
+enum wifi_rssi_monitor_attr {
+ RSSI_MONITOR_ATTRIBUTE_MAX_RSSI,
+ RSSI_MONITOR_ATTRIBUTE_MIN_RSSI,
+ RSSI_MONITOR_ATTRIBUTE_START,
+};
+
+typedef enum wl_vendor_event {
+ BRCM_VENDOR_EVENT_UNSPEC,
+ BRCM_VENDOR_EVENT_PRIV_STR,
+ GOOGLE_GSCAN_SIGNIFICANT_EVENT,
+ GOOGLE_GSCAN_GEOFENCE_FOUND_EVENT,
+ GOOGLE_GSCAN_BATCH_SCAN_EVENT,
+ GOOGLE_SCAN_FULL_RESULTS_EVENT,
+ GOOGLE_RTT_COMPLETE_EVENT,
+ GOOGLE_SCAN_COMPLETE_EVENT,
+ GOOGLE_GSCAN_GEOFENCE_LOST_EVENT,
+ GOOGLE_SCAN_EPNO_EVENT,
+ GOOGLE_DEBUG_RING_EVENT,
+ GOOGLE_FW_DUMP_EVENT,
+ GOOGLE_PNO_HOTSPOT_FOUND_EVENT,
+ GOOGLE_RSSI_MONITOR_EVENT,
+ GOOGLE_MKEEP_ALIVE_EVENT
+} wl_vendor_event_t;
+
+enum andr_wifi_attr {
+ ANDR_WIFI_ATTRIBUTE_NUM_FEATURE_SET,
+ ANDR_WIFI_ATTRIBUTE_FEATURE_SET,
+ ANDR_WIFI_ATTRIBUTE_RANDOM_MAC_OUI,
+ ANDR_WIFI_ATTRIBUTE_NODFS_SET,
+ ANDR_WIFI_ATTRIBUTE_COUNTRY
+};
+
+typedef enum wl_vendor_gscan_attribute {
+ ATTR_START_GSCAN,
+ ATTR_STOP_GSCAN,
+ ATTR_SET_SCAN_BATCH_CFG_ID, /* set batch scan params */
+ ATTR_SET_SCAN_GEOFENCE_CFG_ID, /* set list of bssids to track */
+ ATTR_SET_SCAN_SIGNIFICANT_CFG_ID, /* set list of bssids, rssi threshold etc.. */
+ ATTR_SET_SCAN_CFG_ID, /* set common scan config params here */
+ ATTR_GET_GSCAN_CAPABILITIES_ID,
+ /* Add more sub commands here */
+ ATTR_GSCAN_MAX
+} wl_vendor_gscan_attribute_t;
+
+typedef enum gscan_batch_attribute {
+ ATTR_GSCAN_BATCH_BESTN,
+ ATTR_GSCAN_BATCH_MSCAN,
+ ATTR_GSCAN_BATCH_BUFFER_THRESHOLD
+} gscan_batch_attribute_t;
+
+typedef enum gscan_geofence_attribute {
+ ATTR_GSCAN_NUM_HOTLIST_BSSID,
+ ATTR_GSCAN_HOTLIST_BSSID
+} gscan_geofence_attribute_t;
+
+typedef enum gscan_complete_event {
+ WIFI_SCAN_BUFFER_FULL,
+ WIFI_SCAN_COMPLETE
+} gscan_complete_event_t;
+
+/* Capture the BRCM_VENDOR_SUBCMD_PRIV_STRINGS* here */
+#define BRCM_VENDOR_SCMD_CAPA "cap"
+
+#if (LINUX_VERSION_CODE > KERNEL_VERSION(3, 13, 0)) || defined(WL_VENDOR_EXT_SUPPORT)
+extern int wl_cfgvendor_attach(struct wiphy *wiphy, dhd_pub_t *dhd);
+extern int wl_cfgvendor_detach(struct wiphy *wiphy);
+extern int wl_cfgvendor_send_async_event(struct wiphy *wiphy,
+ struct net_device *dev, int event_id, const void *data, int len);
+extern int wl_cfgvendor_send_hotlist_event(struct wiphy *wiphy,
+ struct net_device *dev, void *data, int len, wl_vendor_event_t event);
+#endif /* (LINUX_VERSION_CODE > KERNEL_VERSION(3, 13, 0)) || defined(WL_VENDOR_EXT_SUPPORT) */
+
+#endif /* _wl_cfgvendor_h_ */
diff --git a/drivers/net/wireless/bcmdhd/wl_dbg.h b/drivers/net/wireless/bcmdhd/wl_dbg.h
old mode 100755
new mode 100644
index 67349a13..083d0c3
--- a/drivers/net/wireless/bcmdhd/wl_dbg.h
+++ b/drivers/net/wireless/bcmdhd/wl_dbg.h
@@ -22,7 +22,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: wl_dbg.h 430628 2013-10-19 04:07:25Z $
+ * $Id: wl_dbg.h 472390 2014-04-23 23:32:01Z $
*/
@@ -44,6 +44,144 @@
#define WL_SRSCAN(args)
#endif
+#if defined(BCMCONDITIONAL_LOGGING)
+
+/* Ideally this should be some include file that vendors can include to conditionalize logging */
+
+/* DBGONLY() macro to reduce ifdefs in code for statements that are only needed when
+ * BCMDBG is defined.
+ */
+#define DBGONLY(x)
+
+/* To disable a message completely ... until you need it again */
+#define WL_NONE(args)
+#define WL_ERROR(args) do {if (wl_msg_level & WL_ERROR_VAL) WL_PRINT(args);} while (0)
+#define WL_TRACE(args)
+#define WL_PRHDRS_MSG(args)
+#define WL_PRHDRS(i, p, f, t, r, l)
+#define WL_PRPKT(m, b, n)
+#define WL_INFORM(args)
+#define WL_TMP(args)
+#define WL_OID(args)
+#define WL_RATE(args) do {if (wl_msg_level & WL_RATE_VAL) WL_PRINT(args);} while (0)
+#define WL_ASSOC(args) do {if (wl_msg_level & WL_ASSOC_VAL) WL_PRINT(args);} while (0)
+#define WL_PRUSR(m, b, n)
+#define WL_PS(args) do {if (wl_msg_level & WL_PS_VAL) WL_PRINT(args);} while (0)
+
+#define WL_PORT(args)
+#define WL_DUAL(args)
+#define WL_REGULATORY(args) do {if (wl_msg_level & WL_REGULATORY_VAL) WL_PRINT(args);} while (0)
+
+#define WL_MPC(args)
+#define WL_APSTA(args)
+#define WL_APSTA_BCN(args)
+#define WL_APSTA_TX(args)
+#define WL_APSTA_TSF(args)
+#define WL_APSTA_BSSID(args)
+#define WL_BA(args)
+#define WL_MBSS(args)
+#define WL_PROTO(args)
+
+#define WL_CAC(args) do {if (wl_msg_level & WL_CAC_VAL) WL_PRINT(args);} while (0)
+#define WL_AMSDU(args)
+#define WL_AMPDU(args)
+#define WL_FFPLD(args)
+#define WL_MCHAN(args)
+
+#define WL_DFS(args)
+#define WL_WOWL(args)
+#define WL_DPT(args)
+#define WL_ASSOC_OR_DPT(args)
+#define WL_SCAN(args) do {if (wl_msg_level2 & WL_SCAN_VAL) WL_PRINT(args);} while (0)
+#define WL_COEX(args)
+#define WL_RTDC(w, s, i, j)
+#define WL_RTDC2(w, s, i, j)
+#define WL_CHANINT(args)
+#define WL_BTA(args)
+#define WL_P2P(args)
+#define WL_ITFR(args)
+#define WL_TDLS(args)
+#define WL_MCNX(args)
+#define WL_PROT(args)
+#define WL_PSTA(args)
+#define WL_TRF_MGMT(args)
+#define WL_L2FILTER(args)
+#define WL_MQ(args)
+#define WL_TXBF(args)
+#define WL_P2PO(args)
+#define WL_NET_DETECT(args)
+#define WL_ROAM(args)
+#define WL_WNM(args)
+
+
+#define WL_AMPDU_UPDN(args)
+#define WL_AMPDU_RX(args)
+#define WL_AMPDU_ERR(args)
+#define WL_AMPDU_TX(args)
+#define WL_AMPDU_CTL(args)
+#define WL_AMPDU_HW(args)
+#define WL_AMPDU_HWTXS(args)
+#define WL_AMPDU_HWDBG(args)
+#define WL_AMPDU_STAT(args)
+#define WL_AMPDU_ERR_ON() 0
+#define WL_AMPDU_HW_ON() 0
+#define WL_AMPDU_HWTXS_ON() 0
+
+#define WL_APSTA_UPDN(args)
+#define WL_APSTA_RX(args)
+#define WL_WSEC(args)
+#define WL_WSEC_DUMP(args)
+#define WL_PCIE(args)
+#define WL_CHANLOG(w, s, i, j)
+
+#define WL_ERROR_ON() (wl_msg_level & WL_ERROR_VAL)
+#define WL_TRACE_ON() 0
+#define WL_PRHDRS_ON() 0
+#define WL_PRPKT_ON() 0
+#define WL_INFORM_ON() 0
+#define WL_TMP_ON() 0
+#define WL_OID_ON() 0
+#define WL_RATE_ON() (wl_msg_level & WL_RATE_VAL)
+#define WL_ASSOC_ON() (wl_msg_level & WL_ASSOC_VAL)
+#define WL_PRUSR_ON() 0
+#define WL_PS_ON() (wl_msg_level & WL_PS_VAL)
+#define WL_PORT_ON() 0
+#define WL_WSEC_ON() 0
+#define WL_WSEC_DUMP_ON() 0
+#define WL_MPC_ON() 0
+#define WL_REGULATORY_ON() (wl_msg_level & WL_REGULATORY_VAL)
+#define WL_APSTA_ON() 0
+#define WL_DFS_ON() 0
+#define WL_MBSS_ON() 0
+#define WL_CAC_ON() (wl_msg_level & WL_CAC_VAL)
+#define WL_AMPDU_ON() 0
+#define WL_DPT_ON() 0
+#define WL_WOWL_ON() 0
+#define WL_SCAN_ON() (wl_msg_level2 & WL_SCAN_VAL)
+#define WL_BTA_ON() 0
+#define WL_P2P_ON() 0
+#define WL_ITFR_ON() 0
+#define WL_MCHAN_ON() 0
+#define WL_TDLS_ON() 0
+#define WL_MCNX_ON() 0
+#define WL_PROT_ON() 0
+#define WL_PSTA_ON() 0
+#define WL_TRF_MGMT_ON() 0
+#define WL_LPC_ON() 0
+#define WL_L2FILTER_ON() 0
+#define WL_TXBF_ON() 0
+#define WL_P2PO_ON() 0
+#define WL_CHANLOG_ON() 0
+#define WL_NET_DETECT_ON() 0
+#define WL_WNM_ON() 0
+#define WL_PCIE_ON() 0
+
+#else /* !BCMCONDITIONAL_LOGGING */
+
+/* DBGONLY() macro to reduce ifdefs in code for statements that are only needed when
+ * BCMDBG is defined.
+ */
+#define DBGONLY(x)
/* To disable a message completely ... until you need it again */
#define WL_NONE(args)
@@ -61,6 +199,7 @@
#endif
#define WL_PCIE(args) do {if (wl_msg_level2 & WL_PCIE_VAL) WL_PRINT(args);} while (0)
#define WL_PCIE_ON() (wl_msg_level2 & WL_PCIE_VAL)
+#endif /* BCMCONDITIONAL_LOGGING */
extern uint32 wl_msg_level;
extern uint32 wl_msg_level2;
diff --git a/drivers/net/wireless/bcmdhd/wl_iw.c b/drivers/net/wireless/bcmdhd/wl_iw.c
old mode 100755
new mode 100644
index fd42924..6a0b676
--- a/drivers/net/wireless/bcmdhd/wl_iw.c
+++ b/drivers/net/wireless/bcmdhd/wl_iw.c
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: wl_iw.c 425990 2013-09-26 07:28:16Z $
+ * $Id: wl_iw.c 467328 2014-04-03 01:23:40Z $
*/
#if defined(USE_IW)
@@ -1330,6 +1330,17 @@
uint8 *ptr = ((uint8 *)bi) + sizeof(wl_bss_info_t);
int ptr_len = bi->ie_length;
+ /* OSEN IE */
+ if ((ie = bcm_parse_tlvs(ptr, ptr_len, DOT11_MNG_VS_ID)) &&
+ ie->len > WFA_OUI_LEN + 1 &&
+ !bcmp((const void *)&ie->data[0], (const void *)WFA_OUI, WFA_OUI_LEN) &&
+ ie->data[WFA_OUI_LEN] == WFA_OUI_TYPE_OSEN) {
+ iwe.cmd = IWEVGENIE;
+ iwe.u.data.length = ie->len + 2;
+ event = IWE_STREAM_ADD_POINT(info, event, end, &iwe, (char *)ie);
+ }
+ ptr = ((uint8 *)bi) + sizeof(wl_bss_info_t);
+
if ((ie = bcm_parse_tlvs(ptr, ptr_len, DOT11_MNG_RSN_ID))) {
iwe.cmd = IWEVGENIE;
iwe.u.data.length = ie->len + 2;
@@ -1444,8 +1455,9 @@
/* Channel */
iwe.cmd = SIOCGIWFREQ;
+
iwe.u.freq.m = wf_channel2mhz(CHSPEC_CHANNEL(bi->chanspec),
- CHSPEC_CHANNEL(bi->chanspec) <= CH_MAX_2G_CHANNEL ?
+ (CHSPEC_IS2G(bi->chanspec)) ?
WF_CHAN_FACTOR_2_4_G : WF_CHAN_FACTOR_5_G);
iwe.u.freq.e = 6;
event = IWE_STREAM_ADD_EVENT(info, event, end, &iwe, IW_EV_FREQ_LEN);
@@ -1567,8 +1579,9 @@
/* Channel */
iwe.cmd = SIOCGIWFREQ;
+
iwe.u.freq.m = wf_channel2mhz(CHSPEC_CHANNEL(bi->chanspec),
- CHSPEC_CHANNEL(bi->chanspec) <= CH_MAX_2G_CHANNEL ?
+ (CHSPEC_IS2G(bi->chanspec)) ?
WF_CHAN_FACTOR_2_4_G : WF_CHAN_FACTOR_5_G);
iwe.u.freq.e = 6;
event = IWE_STREAM_ADD_EVENT(info, event, end, &iwe, IW_EV_FREQ_LEN);
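The two hunks above replace a channel-number comparison (`CHSPEC_CHANNEL(...) <= CH_MAX_2G_CHANNEL`) with a band test on the chanspec itself (`CHSPEC_IS2G`), which stays correct even when the channel field alone is ambiguous. A minimal sketch of why the MHz conversion must be driven by the band factor; the constants here follow IEEE 802.11 channelization and are simplified stand-ins, not the driver's actual `WF_CHAN_FACTOR_*` encoding:

```c
/* Simplified sketch of a wf_channel2mhz()-style conversion.
 * Base frequencies are illustrative; the driver encodes its
 * WF_CHAN_FACTOR_* values differently.
 */
#define CHAN_FACTOR_2_4_G 2407  /* base MHz for 2.4 GHz channels 1-13 */
#define CHAN_FACTOR_5_G   5000  /* base MHz for 5 GHz channels */

static int channel2mhz(int ch, int base_mhz)
{
	if (base_mhz == CHAN_FACTOR_2_4_G && ch == 14)
		return 2484;            /* channel 14 is a special case */
	return base_mhz + 5 * ch;   /* 5 MHz channel spacing otherwise */
}
```

Passing the wrong base (e.g. treating a 5 GHz chanspec as 2.4 GHz because its channel number happened to be small) yields a bogus frequency, which is what the `CHSPEC_IS2G` test guards against.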
diff --git a/drivers/net/wireless/bcmdhd/wl_iw.h b/drivers/net/wireless/bcmdhd/wl_iw.h
old mode 100755
new mode 100644
index 35d710a..95b2abd
--- a/drivers/net/wireless/bcmdhd/wl_iw.h
+++ b/drivers/net/wireless/bcmdhd/wl_iw.h
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: wl_iw.h 291086 2011-10-21 01:17:24Z $
+ * $Id: wl_iw.h 467328 2014-04-03 01:23:40Z $
*/
#ifndef _wl_iw_h_
diff --git a/drivers/net/wireless/bcmdhd/wl_linux_mon.c b/drivers/net/wireless/bcmdhd/wl_linux_mon.c
old mode 100755
new mode 100644
index 210d171..2dc6aeb
--- a/drivers/net/wireless/bcmdhd/wl_linux_mon.c
+++ b/drivers/net/wireless/bcmdhd/wl_linux_mon.c
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: wl_linux_mon.c 425343 2013-09-23 23:04:47Z $
+ * $Id: wl_linux_mon.c 467328 2014-04-03 01:23:40Z $
*/
#include <osl.h>
diff --git a/drivers/net/wireless/bcmdhd/wl_roam.c b/drivers/net/wireless/bcmdhd/wl_roam.c
new file mode 100644
index 0000000..3fc9e76
--- /dev/null
+++ b/drivers/net/wireless/bcmdhd/wl_roam.c
@@ -0,0 +1,308 @@
+/*
+ * Linux cfg80211 driver
+ *
+ * Copyright (C) 1999-2014, Broadcom Corporation
+ *
+ * Unless you and Broadcom execute a separate written software license
+ * agreement governing use of this software, this software is licensed to you
+ * under the terms of the GNU General Public License version 2 (the "GPL"),
+ * available at http://www.broadcom.com/licenses/GPLv2.php, with the
+ * following added to such license:
+ *
+ * As a special exception, the copyright holders of this software give you
+ * permission to link this software with independent modules, and to copy and
+ * distribute the resulting executable under terms of your choice, provided that
+ * you also meet, for each linked independent module, the terms and conditions of
+ * the license of that module. An independent module is a module which is not
+ * derived from this software. The special exception does not apply to any
+ * modifications of the software.
+ *
+ * Notwithstanding the above, under no circumstances may you combine this
+ * software in any way with any other Broadcom software provided under a license
+ * other than the GPL, without Broadcom's express prior written consent.
+ *
+ * $Id: wl_roam.c 477711 2014-05-14 08:45:17Z $
+ */
+
+
+#include <typedefs.h>
+#include <osl.h>
+#include <bcmwifi_channels.h>
+#include <wlioctl.h>
+#include <bcmutils.h>
+#include <wl_cfg80211.h>
+#include <wldev_common.h>
+
+#define MAX_ROAM_CACHE 100
+#define MAX_CHANNEL_LIST 20
+#define MAX_SSID_BUFSIZE 36
+
+#define ROAMSCAN_MODE_NORMAL 0
+#define ROAMSCAN_MODE_WES 1
+
+typedef struct {
+ chanspec_t chanspec;
+ int ssid_len;
+ char ssid[DOT11_MAX_SSID_LEN];
+} roam_channel_cache;
+
+typedef struct {
+ int n;
+ chanspec_t channels[MAX_CHANNEL_LIST];
+} channel_list_t;
+
+static int n_roam_cache = 0;
+static int roam_band = WLC_BAND_AUTO;
+static roam_channel_cache roam_cache[MAX_ROAM_CACHE];
+static uint band2G, band5G, band_bw;
+
+void init_roam(int ioctl_ver)
+{
+#ifdef D11AC_IOTYPES
+ if (ioctl_ver == 1) {
+ /* legacy chanspec */
+ band2G = WL_LCHANSPEC_BAND_2G;
+ band5G = WL_LCHANSPEC_BAND_5G;
+ band_bw = WL_LCHANSPEC_BW_20 | WL_LCHANSPEC_CTL_SB_NONE;
+ } else {
+ band2G = WL_CHANSPEC_BAND_2G;
+ band5G = WL_CHANSPEC_BAND_5G;
+ band_bw = WL_CHANSPEC_BW_20;
+ }
+#else
+ band2G = WL_CHANSPEC_BAND_2G;
+ band5G = WL_CHANSPEC_BAND_5G;
+ band_bw = WL_CHANSPEC_BW_20 | WL_CHANSPEC_CTL_SB_NONE;
+#endif /* D11AC_IOTYPES */
+
+ n_roam_cache = 0;
+ roam_band = WLC_BAND_AUTO;
+
+}
+
+
+void set_roam_band(int band)
+{
+ roam_band = band;
+}
+
+void reset_roam_cache(void)
+{
+ n_roam_cache = 0;
+}
+
+void add_roam_cache(wl_bss_info_t *bi)
+{
+ int i;
+ uint8 channel;
+ char chanbuf[CHANSPEC_STR_LEN];
+
+
+ if (n_roam_cache >= MAX_ROAM_CACHE)
+ return;
+
+ if (bi->SSID_len > DOT11_MAX_SSID_LEN)
+ return;
+
+ for (i = 0; i < n_roam_cache; i++) {
+ if ((roam_cache[i].ssid_len == bi->SSID_len) &&
+ (roam_cache[i].chanspec == bi->chanspec) &&
+ (memcmp(roam_cache[i].ssid, bi->SSID, bi->SSID_len) == 0)) {
+ /* identical one found, just return */
+ return;
+ }
+ }
+
+ roam_cache[n_roam_cache].ssid_len = bi->SSID_len;
+ channel = wf_chspec_ctlchan(bi->chanspec);
+ WL_DBG(("CHSPEC = %s, CTL %d\n", wf_chspec_ntoa_ex(bi->chanspec, chanbuf), channel));
+ roam_cache[n_roam_cache].chanspec =
+ (channel <= CH_MAX_2G_CHANNEL ? band2G : band5G) | band_bw | channel;
+ memcpy(roam_cache[n_roam_cache].ssid, bi->SSID, bi->SSID_len);
+
+ n_roam_cache++;
+}
+
+static bool is_duplicated_channel(const chanspec_t *channels,
+ int n_channels, chanspec_t new)
+{
+ int i;
+
+ for (i = 0; i < n_channels; i++) {
+ if (channels[i] == new)
+ return TRUE;
+ }
+
+ return FALSE;
+}
+
+int get_roam_channel_list(int target_chan, chanspec_t *channels,
+ const wlc_ssid_t *ssid, int ioctl_ver)
+{
+ int i, n = 0;
+ char chanbuf[CHANSPEC_STR_LEN];
+ if (target_chan) {
+ /* first index is filled with the given target channel */
+ channels[n++] = (target_chan & WL_CHANSPEC_CHAN_MASK) |
+ (target_chan <= CH_MAX_2G_CHANNEL ? band2G : band5G) | band_bw;
+ WL_DBG((" %s: %03d 0x%04X\n", __FUNCTION__, target_chan, channels[0]));
+ }
+
+ for (i = 0; i < n_roam_cache; i++) {
+ chanspec_t ch = roam_cache[i].chanspec;
+ bool is_2G = ioctl_ver == 1 ? LCHSPEC_IS2G(ch) : CHSPEC_IS2G(ch);
+ bool is_5G = ioctl_ver == 1 ? LCHSPEC_IS5G(ch) : CHSPEC_IS5G(ch);
+ bool band_match = ((roam_band == WLC_BAND_AUTO) ||
+ ((roam_band == WLC_BAND_2G) && is_2G) ||
+ ((roam_band == WLC_BAND_5G) && is_5G));
+
+ /* XXX: JIRA:SW4349-173 : 80p80 Support Required */
+ ch = CHSPEC_CHANNEL(ch) | (is_2G ? band2G : band5G) | band_bw;
+ if ((roam_cache[i].ssid_len == ssid->SSID_len) &&
+ band_match && !is_duplicated_channel(channels, n, ch) &&
+ (memcmp(roam_cache[i].ssid, ssid->SSID, ssid->SSID_len) == 0)) {
+ /* match found, add it */
+ WL_DBG(("%s: channel = %s\n", __FUNCTION__,
+ wf_chspec_ntoa_ex(ch, chanbuf)));
+ channels[n++] = ch;
+ }
+ }
+
+ return n;
+}
+
+
+void print_roam_cache(void)
+{
+ int i;
+
+ WL_DBG((" %d cache\n", n_roam_cache));
+
+ for (i = 0; i < n_roam_cache; i++) {
+ roam_cache[i].ssid[roam_cache[i].ssid_len] = 0;
+ WL_DBG(("0x%02X %02d %s\n", roam_cache[i].chanspec,
+ roam_cache[i].ssid_len, roam_cache[i].ssid));
+ }
+}
+
+static void add_roamcache_channel(channel_list_t *channels, chanspec_t ch)
+{
+ int i;
+
+ if (channels->n >= MAX_CHANNEL_LIST) /* buffer full */
+ return;
+
+ for (i = 0; i < channels->n; i++) {
+ if (channels->channels[i] == ch) /* already in the list */
+ return;
+ }
+
+ channels->channels[i] = ch;
+ channels->n++;
+
+ WL_DBG((" RCC: %02d 0x%04X\n",
+ ch & WL_CHANSPEC_CHAN_MASK, ch));
+}
+
+void update_roam_cache(struct bcm_cfg80211 *cfg, int ioctl_ver)
+{
+ int error, i, prev_channels;
+ channel_list_t channel_list;
+ char iobuf[WLC_IOCTL_SMLEN];
+ struct net_device *dev = bcmcfg_to_prmry_ndev(cfg);
+ wlc_ssid_t ssid;
+
+ if (!wl_get_drv_status(cfg, CONNECTED, dev)) {
+ WL_DBG(("Not associated\n"));
+ return;
+ }
+
+ /* need to read out the current cache list,
+ * as the firmware may change it dynamically
+ */
+ error = wldev_iovar_getbuf(dev, "roamscan_channels", 0, 0,
+ (void *)&channel_list, sizeof(channel_list), NULL);
+
+ WL_DBG(("%d AP, %d cache item(s), err=%d\n", n_roam_cache, channel_list.n, error));
+
+ error = wldev_get_ssid(dev, &ssid);
+ if (error) {
+ WL_ERR(("Failed to get SSID, err=%d\n", error));
+ return;
+ }
+
+ prev_channels = channel_list.n;
+ for (i = 0; i < n_roam_cache; i++) {
+ chanspec_t ch = roam_cache[i].chanspec;
+ bool is_2G = ioctl_ver == 1 ? LCHSPEC_IS2G(ch) : CHSPEC_IS2G(ch);
+ bool is_5G = ioctl_ver == 1 ? LCHSPEC_IS5G(ch) : CHSPEC_IS5G(ch);
+ bool band_match = ((roam_band == WLC_BAND_AUTO) ||
+ ((roam_band == WLC_BAND_2G) && is_2G) ||
+ ((roam_band == WLC_BAND_5G) && is_5G));
+
+ if ((roam_cache[i].ssid_len == ssid.SSID_len) &&
+ band_match && (memcmp(roam_cache[i].ssid, ssid.SSID, ssid.SSID_len) == 0)) {
+ /* match found, add it */
+ /* XXX: JIRA:SW4349-173 : 80p80 Support Required */
+ ch = CHSPEC_CHANNEL(ch) | (is_2G ? band2G : band5G) | band_bw;
+ add_roamcache_channel(&channel_list, ch);
+ }
+ }
+ if (prev_channels != channel_list.n) {
+ /* channel list updated */
+ error = wldev_iovar_setbuf(dev, "roamscan_channels", &channel_list,
+ sizeof(channel_list), iobuf, sizeof(iobuf), NULL);
+ if (error) {
+ WL_ERR(("Failed to update roamscan channels, error = %d\n", error));
+ }
+ }
+}
+
+void wl_update_roamscan_cache_by_band(struct net_device *dev, int band)
+{
+ int i, error, ioctl_ver, wes_mode;
+ channel_list_t chanlist_before, chanlist_after;
+ char iobuf[WLC_IOCTL_SMLEN];
+
+ roam_band = band;
+ if (band == WLC_BAND_AUTO)
+ return;
+
+ error = wldev_iovar_getint(dev, "roamscan_mode", &wes_mode);
+ if (error) {
+ WL_ERR(("Failed to get roamscan mode, error = %d\n", error));
+ return;
+ }
+ /* in WES mode, skip the update */
+ if (wes_mode)
+ return;
+
+ error = wldev_iovar_getbuf(dev, "roamscan_channels", 0, 0,
+ (void *)&chanlist_before, sizeof(channel_list_t), NULL);
+ if (error) {
+ WL_ERR(("Failed to get roamscan channels, error = %d\n", error));
+ return;
+ }
+ ioctl_ver = wl_cfg80211_get_ioctl_version();
+ chanlist_after.n = 0;
+ /* filtering by the given band */
+ for (i = 0; i < chanlist_before.n; i++) {
+ chanspec_t chspec = chanlist_before.channels[i];
+ bool is_2G = ioctl_ver == 1 ? LCHSPEC_IS2G(chspec) : CHSPEC_IS2G(chspec);
+ bool is_5G = ioctl_ver == 1 ? LCHSPEC_IS5G(chspec) : CHSPEC_IS5G(chspec);
+ bool band_match = ((band == WLC_BAND_2G) && is_2G) ||
+ ((band == WLC_BAND_5G) && is_5G);
+ if (band_match) {
+ chanlist_after.channels[chanlist_after.n++] = chspec;
+ }
+ }
+
+ if (chanlist_before.n == chanlist_after.n)
+ return;
+
+ error = wldev_iovar_setbuf(dev, "roamscan_channels", &chanlist_after,
+ sizeof(channel_list_t), iobuf, sizeof(iobuf), NULL);
+ if (error) {
+ WL_ERR(("Failed to update roamscan channels, error = %d\n", error));
+ }
+}
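The new wl_roam.c above maintains a fixed-size roam cache keyed by (SSID, chanspec), with `add_roam_cache()` doing a linear dedup scan before appending. A self-contained sketch of that pattern; the types, sizes, and return convention here are illustrative, not the driver's:

```c
/* Minimal sketch of the fixed-size roam-cache dedup used by
 * add_roam_cache(): linear scan, append only if no identical entry.
 * Returns the new entry count on append, 0 on duplicate, -1 on error.
 */
#include <string.h>

#define CACHE_MAX 100
#define SSID_MAX  32

struct cache_entry {
	unsigned short chanspec;
	int ssid_len;
	char ssid[SSID_MAX];
};

static struct cache_entry cache[CACHE_MAX];
static int n_cached;

static int cache_add(unsigned short chanspec, const char *ssid, int ssid_len)
{
	int i;

	if (n_cached >= CACHE_MAX || ssid_len > SSID_MAX)
		return -1;                  /* full, or oversized SSID */

	for (i = 0; i < n_cached; i++) {
		if (cache[i].ssid_len == ssid_len &&
		    cache[i].chanspec == chanspec &&
		    memcmp(cache[i].ssid, ssid, ssid_len) == 0)
			return 0;               /* identical entry, skip */
	}

	cache[n_cached].chanspec = chanspec;
	cache[n_cached].ssid_len = ssid_len;
	memcpy(cache[n_cached].ssid, ssid, ssid_len);
	return ++n_cached;
}
```

With at most 100 entries the O(n) scan is cheap, which is presumably why the driver avoids any hashing here.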
diff --git a/drivers/net/wireless/bcmdhd/wldev_common.c b/drivers/net/wireless/bcmdhd/wldev_common.c
old mode 100755
new mode 100644
index 6eda096..11ffa5c
--- a/drivers/net/wireless/bcmdhd/wldev_common.c
+++ b/drivers/net/wireless/bcmdhd/wldev_common.c
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: wldev_common.c 432642 2013-10-29 04:23:40Z $
+ * $Id: wldev_common.c 467328 2014-04-03 01:23:40Z $
*/
#include <osl.h>
@@ -353,7 +353,7 @@
return error;
}
- if ((error < 0) ||
+ if ((error < 0) || dhd_force_country_change(dev) ||
(strncmp(country_code, cspec.country_abbrev, WLC_CNTRY_BUF_SZ) != 0)) {
if (user_enforced) {
@@ -383,347 +383,3 @@
}
return 0;
}
-
-/* tuning performance for miracast */
-int wldev_miracast_tuning(
- struct net_device *dev, char *command, int total_len)
-{
- int error = 0;
- int mode = 0;
- int ampdu_mpdu;
- int roam_off;
- int ampdu_rx_tid = -1;
-#ifdef VSDB_BW_ALLOCATE_ENABLE
- int mchan_algo;
- int mchan_bw;
-#endif /* VSDB_BW_ALLOCATE_ENABLE */
-
- if (sscanf(command, "%*s %d", &mode) != 1) {
- WLDEV_ERROR(("Failed to get mode\n"));
- return -1;
- }
-
-set_mode:
-
- WLDEV_ERROR(("mode: %d\n", mode));
-
- if (mode == 0) {
- /* Normal mode: restore everything to default */
- ampdu_mpdu = -1; /* FW default */
-#if defined(ROAM_ENABLE)
- roam_off = 0; /* roam enable */
-#elif defined(DISABLE_BUILTIN_ROAM)
- roam_off = 1; /* roam disable */
-#endif
-#ifdef VSDB_BW_ALLOCATE_ENABLE
- mchan_algo = 0; /* Default */
- mchan_bw = 50; /* 50:50 */
-#endif /* VSDB_BW_ALLOCATE_ENABLE */
- }
- else if (mode == 1) {
- /* Miracast source mode */
- ampdu_mpdu = 8; /* for tx latency */
-#if defined(ROAM_ENABLE) || defined(DISABLE_BUILTIN_ROAM)
- roam_off = 1; /* roam disable */
-#endif
-#ifdef VSDB_BW_ALLOCATE_ENABLE
- mchan_algo = 1; /* BW based */
- mchan_bw = 25; /* 25:75 */
-#endif /* VSDB_BW_ALLOCATE_ENABLE */
- }
- else if (mode == 2) {
- /* Miracast sink/PC Gaming mode */
- ampdu_mpdu = 8; /* FW default */
-#if defined(ROAM_ENABLE) || defined(DISABLE_BUILTIN_ROAM)
- roam_off = 1; /* roam disable */
-#endif
-#ifdef VSDB_BW_ALLOCATE_ENABLE
- mchan_algo = 0; /* Default */
- mchan_bw = 50; /* 50:50 */
-#endif /* VSDB_BW_ALLOCATE_ENABLE */
- } else if (mode == 3) {
- ampdu_rx_tid = 0;
- mode = 2;
- goto set_mode;
- } else if (mode == 4) {
- ampdu_rx_tid = 0x7f;
- mode = 0;
- goto set_mode;
- }
- else {
- WLDEV_ERROR(("Unknown mode: %d\n", mode));
- return -1;
- }
-
- /* Update ampdu_mpdu */
- error = wldev_iovar_setint(dev, "ampdu_mpdu", ampdu_mpdu);
- if (error) {
- WLDEV_ERROR(("Failed to set ampdu_mpdu: mode:%d, error:%d\n",
- mode, error));
- return -1;
- }
-
-#if defined(ROAM_ENABLE) || defined(DISABLE_BUILTIN_ROAM)
- error = wldev_iovar_setint(dev, "roam_off", roam_off);
- if (error) {
- WLDEV_ERROR(("Failed to set roam_off: mode:%d, error:%d\n",
- mode, error));
- return -1;
- }
-#endif /* ROAM_ENABLE || DISABLE_BUILTIN_ROAM */
-
-#ifdef VSDB_BW_ALLOCATE_ENABLE
- error = wldev_iovar_setint(dev, "mchan_algo", mchan_algo);
- if (error) {
- WLDEV_ERROR(("Failed to set mchan_algo: mode:%d, error:%d\n",
- mode, error));
- return -1;
- }
-
- error = wldev_iovar_setint(dev, "mchan_bw", mchan_bw);
- if (error) {
- WLDEV_ERROR(("Failed to set mchan_bw: mode:%d, error:%d\n",
- mode, error));
- return -1;
- }
-#endif /* VSDB_BW_ALLOCATE_ENABLE */
-
- if (ampdu_rx_tid != -1)
- dhd_set_ampdu_rx_tid(dev, ampdu_rx_tid);
-
- return error;
-}
-
-int wldev_get_assoc_resp_ie(
- struct net_device *dev, char *command, int total_len)
-{
- wl_assoc_info_t *assoc_info;
- char smbuf[WLC_IOCTL_SMLEN];
- char bssid[6], null_bssid[6];
- int resp_ies_len = 0;
- int bytes_written = 0;
- int error, i;
-
- bzero(bssid, 6);
- bzero(null_bssid, 6);
-
- /* Check Association */
- error = wldev_ioctl(dev, WLC_GET_BSSID, &bssid, sizeof(bssid), 0);
- if (error == BCME_NOTASSOCIATED) {
- /* Not associated */
- bytes_written += snprintf(&command[bytes_written], total_len, "NA");
- goto done;
- }
- else if (error < 0) {
- WLDEV_ERROR(("WLC_GET_BSSID failed = %d\n", error));
- return -1;
- }
- else if (memcmp(bssid, null_bssid, ETHER_ADDR_LEN) == 0) {
- /* Zero BSSID: Not associated */
- bytes_written += snprintf(&command[bytes_written], total_len, "NA");
- goto done;
- }
-
- /* Get assoc_info */
- bzero(smbuf, sizeof(smbuf));
- error = wldev_iovar_getbuf(dev, "assoc_info", NULL, 0, smbuf, sizeof(smbuf), NULL);
- if (error < 0) {
- WLDEV_ERROR(("get assoc_info failed = %d\n", error));
- return -1;
- }
-
- assoc_info = (wl_assoc_info_t *)smbuf;
- resp_ies_len = dtoh32(assoc_info->resp_len) - sizeof(struct dot11_assoc_resp);
-
- /* Retrieve assoc resp IEs */
- if (resp_ies_len) {
- error = wldev_iovar_getbuf(dev, "assoc_resp_ies",
- NULL, 0, smbuf, sizeof(smbuf), NULL);
- if (error < 0) {
- WLDEV_ERROR(("get assoc_resp_ies failed = %d\n", error));
- return -1;
- }
-
- /* Length */
- bytes_written += snprintf(&command[bytes_written], total_len, "%d,", resp_ies_len);
-
- /* IEs */
- if ((total_len - bytes_written) > resp_ies_len) {
- for (i = 0; i < resp_ies_len; i++) {
- bytes_written += sprintf(&command[bytes_written], "%02x", smbuf[i]);
- }
- } else {
- WLDEV_ERROR(("Not enough buffer\n"));
- return -1;
- }
- } else {
- WLDEV_ERROR(("Zero Length assoc resp ies = %d\n", resp_ies_len));
- return -1;
- }
-
-done:
-
- return bytes_written;
-}
-
-int wldev_get_max_linkspeed(
- struct net_device *dev, char *command, int total_len)
-{
- wl_assoc_info_t *assoc_info;
- char smbuf[WLC_IOCTL_SMLEN];
- char bssid[6], null_bssid[6];
- int resp_ies_len = 0;
- int bytes_written = 0;
- int error, i;
-
- bzero(bssid, 6);
- bzero(null_bssid, 6);
-
- /* Check Association */
- error = wldev_ioctl(dev, WLC_GET_BSSID, &bssid, sizeof(bssid), 0);
- if (error == BCME_NOTASSOCIATED) {
- /* Not associated */
- bytes_written += snprintf(&command[bytes_written],
- total_len, "-1");
- goto done;
- } else if (error < 0) {
- WLDEV_ERROR(("WLC_GET_BSSID failed = %d\n", error));
- return -1;
- } else if (memcmp(bssid, null_bssid, ETHER_ADDR_LEN) == 0) {
- /* Zero BSSID: Not associated */
- bytes_written += snprintf(&command[bytes_written],
- total_len, "-1");
- goto done;
- }
- /* Get assoc_info */
- bzero(smbuf, sizeof(smbuf));
- error = wldev_iovar_getbuf(dev, "assoc_info", NULL, 0, smbuf,
- sizeof(smbuf), NULL);
- if (error < 0) {
- WLDEV_ERROR(("get assoc_info failed = %d\n", error));
- return -1;
- }
-
- assoc_info = (wl_assoc_info_t *)smbuf;
- resp_ies_len = dtoh32(assoc_info->resp_len) -
- sizeof(struct dot11_assoc_resp);
-
- /* Retrieve assoc resp IEs */
- if (resp_ies_len) {
- error = wldev_iovar_getbuf(dev, "assoc_resp_ies", NULL, 0,
- smbuf, sizeof(smbuf), NULL);
- if (error < 0) {
- WLDEV_ERROR(("get assoc_resp_ies failed = %d\n",
- error));
- return -1;
- }
-
- {
- int maxRate = 0;
- struct dot11IE {
- unsigned char ie;
- unsigned char len;
- unsigned char data[0];
- } *dot11IE = (struct dot11IE *)smbuf;
- int remaining = resp_ies_len;
-
- while (1) {
- if (remaining < 2)
- break;
- if (remaining < dot11IE->len + 2)
- break;
- switch (dot11IE->ie) {
- case 0x01: /* supported rates */
- case 0x32: /* extended supported rates */
- for (i = 0; i < dot11IE->len; i++) {
- int rate = ((dot11IE->data[i] &
- 0x7f) / 2);
- if (rate > maxRate)
- maxRate = rate;
- }
- break;
- case 0x2d: /* HT capabilities */
- case 0x3d: /* HT operation */
- /* 11n supported */
- maxRate = 150; /* Just return an 11n
- rate for now. Could implement detailed
- parser later. */
- break;
- default:
- break;
- }
-
- /* next IE */
- dot11IE = (struct dot11IE *)
- ((unsigned char *)dot11IE + dot11IE->len + 2);
- remaining -= (dot11IE->len + 2);
- }
- bytes_written += snprintf(&command[bytes_written],
- total_len, "MaxLinkSpeed %d",
- maxRate);
- goto done;
- }
- } else {
- WLDEV_ERROR(("Zero Length assoc resp ies = %d\n",
- resp_ies_len));
- return -1;
- }
-
-done:
-
- return bytes_written;
-
-}
-
-int wldev_get_rx_rate_stats(
- struct net_device *dev, char *command, int total_len)
-{
- wl_scb_rx_rate_stats_t *rstats;
- struct ether_addr ea;
- char smbuf[WLC_IOCTL_SMLEN];
- char eabuf[18] = {0, };
- int bytes_written = 0;
- int error;
-
- memcpy(eabuf, command+strlen("RXRATESTATS")+1, 17);
-
- if (!bcm_ether_atoe(eabuf, &ea)) {
- WLDEV_ERROR(("Invalid MAC Address\n"));
- return -1;
- }
-
- error = wldev_iovar_getbuf(dev, "rx_rate_stats",
- &ea, ETHER_ADDR_LEN, smbuf, sizeof(smbuf), NULL);
- if (error < 0) {
- WLDEV_ERROR(("get rx_rate_stats failed = %d\n", error));
- return -1;
- }
-
- rstats = (wl_scb_rx_rate_stats_t *)smbuf;
- bytes_written = sprintf(command, "1/%d/%d,",
- dtoh32(rstats->rx1mbps[0]), dtoh32(rstats->rx1mbps[1]));
- bytes_written += sprintf(command+bytes_written, "2/%d/%d,",
- dtoh32(rstats->rx2mbps[0]), dtoh32(rstats->rx2mbps[1]));
- bytes_written += sprintf(command+bytes_written, "5.5/%d/%d,",
- dtoh32(rstats->rx5mbps5[0]), dtoh32(rstats->rx5mbps5[1]));
- bytes_written += sprintf(command+bytes_written, "6/%d/%d,",
- dtoh32(rstats->rx6mbps[0]), dtoh32(rstats->rx6mbps[1]));
- bytes_written += sprintf(command+bytes_written, "9/%d/%d,",
- dtoh32(rstats->rx9mbps[0]), dtoh32(rstats->rx9mbps[1]));
- bytes_written += sprintf(command+bytes_written, "11/%d/%d,",
- dtoh32(rstats->rx11mbps[0]), dtoh32(rstats->rx11mbps[1]));
- bytes_written += sprintf(command+bytes_written, "12/%d/%d,",
- dtoh32(rstats->rx12mbps[0]), dtoh32(rstats->rx12mbps[1]));
- bytes_written += sprintf(command+bytes_written, "18/%d/%d,",
- dtoh32(rstats->rx18mbps[0]), dtoh32(rstats->rx18mbps[1]));
- bytes_written += sprintf(command+bytes_written, "24/%d/%d,",
- dtoh32(rstats->rx24mbps[0]), dtoh32(rstats->rx24mbps[1]));
- bytes_written += sprintf(command+bytes_written, "36/%d/%d,",
- dtoh32(rstats->rx36mbps[0]), dtoh32(rstats->rx36mbps[1]));
- bytes_written += sprintf(command+bytes_written, "48/%d/%d,",
- dtoh32(rstats->rx48mbps[0]), dtoh32(rstats->rx48mbps[1]));
- bytes_written += sprintf(command+bytes_written, "54/%d/%d",
- dtoh32(rstats->rx54mbps[0]), dtoh32(rstats->rx54mbps[1]));
-
- return bytes_written;
-}
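The deleted `wldev_get_max_linkspeed()` above walked the assoc-response IEs with a hand-rolled TLV loop that advanced the element pointer *before* decrementing `remaining`, so it subtracted the next element's length instead of the current one's. A corrected sketch of that walk, with hypothetical simplified types (the driver's real parsing goes through `bcm_parse_tlvs()`):

```c
/* Corrected TLV (type-length-value) walk over an IE buffer.
 * The step size is computed once, before advancing, so the same
 * value is used for both the pointer bump and the byte count.
 */
struct tlv {
	unsigned char id;
	unsigned char len;
	unsigned char data[];
};

static int count_ies(const unsigned char *buf, int remaining)
{
	const struct tlv *ie = (const struct tlv *)buf;
	int n = 0;

	while (remaining >= 2 && remaining >= ie->len + 2) {
		int step = ie->len + 2;     /* header + payload, fixed first */

		n++;
		ie = (const struct tlv *)((const unsigned char *)ie + step);
		remaining -= step;
	}
	return n;
}
```

Computing `step` before the pointer moves is the whole fix; the original ordering read `dot11IE->len` from the element it had just stepped onto.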
diff --git a/drivers/net/wireless/bcmdhd/wldev_common.h b/drivers/net/wireless/bcmdhd/wldev_common.h
old mode 100755
new mode 100644
index 101dc91..7944ef6
--- a/drivers/net/wireless/bcmdhd/wldev_common.h
+++ b/drivers/net/wireless/bcmdhd/wldev_common.h
@@ -21,7 +21,7 @@
* software in any way with any other Broadcom software provided under a license
* other than the GPL, without Broadcom's express prior written consent.
*
- * $Id: wldev_common.h 434085 2013-11-05 06:09:49Z $
+ * $Id: wldev_common.h 467328 2014-04-03 01:23:40Z $
*/
#ifndef __WLDEV_COMMON_H__
#define __WLDEV_COMMON_H__
@@ -91,6 +91,7 @@
extern void dhd_get_customized_country_code(struct net_device *dev, char *country_iso_code,
wl_country_t *cspec);
extern void dhd_bus_country_set(struct net_device *dev, wl_country_t *cspec, bool notify);
+extern bool dhd_force_country_change(struct net_device *dev);
extern void dhd_bus_band_set(struct net_device *dev, uint band);
extern int wldev_set_country(struct net_device *dev, char *country_code, bool notify,
bool user_enforced);
@@ -101,7 +102,7 @@
extern int net_os_set_dtim_skip(struct net_device *dev, int val);
extern int net_os_set_suspend_disable(struct net_device *dev, int val);
extern int net_os_set_suspend(struct net_device *dev, int val, int force);
-extern int wl_iw_parse_ssid_list_tlv(char** list_str, wlc_ssid_t* ssid,
+extern int wl_iw_parse_ssid_list_tlv(char** list_str, wlc_ssid_ext_t* ssid,
int max, int *bytes_left);
/* Get the link speed from dongle, speed is in kpbs */
@@ -115,10 +116,4 @@
int wldev_set_band(struct net_device *dev, uint band);
-
-int wldev_miracast_tuning(struct net_device *dev, char *command, int total_len);
-int wldev_get_assoc_resp_ie(struct net_device *dev, char *command, int total_len);
-int wldev_get_rx_rate_stats(struct net_device *dev, char *command, int total_len);
-int wldev_get_max_linkspeed(struct net_device *dev, char *command, int total_len);
-extern void dhd_set_ampdu_rx_tid(struct net_device *dev, int ampdu_rx_tid);
#endif /* __WLDEV_COMMON_H__ */
diff --git a/drivers/nfc/bcm2079x-i2c.c b/drivers/nfc/bcm2079x-i2c.c
index a8e04af..c6832f4 100644
--- a/drivers/nfc/bcm2079x-i2c.c
+++ b/drivers/nfc/bcm2079x-i2c.c
@@ -124,7 +124,7 @@
else
client->flags &= ~I2C_CLIENT_TEN;
- dev_info(&client->dev,
+ dev_printk(KERN_INFO, &client->dev,
"set_client_addr changed to (0x%04X) flag = %04x\n",
client->addr, client->flags);
}
@@ -151,15 +151,15 @@
for (i = 1; i < sizeof(addr_data) - 1; ++i)
ret += addr_data[i];
addr_data[sizeof(addr_data) - 1] = (ret & 0xFF);
- dev_info(&client->dev,
+ dev_printk(KERN_INFO, &client->dev,
"change_client_addr from (0x%04X) flag = "\
- "%04x, addr_data[%d] = %02x\n",
+ "%04x, addr_data[%zu] = %02x\n",
client->addr, client->flags, sizeof(addr_data) - 1,
addr_data[sizeof(addr_data) - 1]);
mutex_lock(&bcm2079x_dev->read_mutex);
if (bcm2079x_dev && (bcm2079x_dev->shutdown_complete == true)) {
- dev_info(&client->dev, "%s: discarding as " \
+ dev_printk(KERN_INFO, &client->dev, "%s: discarding as " \
"NFC in shutdown state\n", __func__);
mutex_unlock(&bcm2079x_dev->read_mutex);
return;
@@ -169,7 +169,7 @@
mutex_unlock(&bcm2079x_dev->read_mutex);
client->addr = addr_data[5];
- dev_info(&client->dev,
+ dev_printk(KERN_INFO, &client->dev,
"change_client_addr to (0x%04X) flag = %04x, ret = %d\n",
client->addr, client->flags, ret);
}
@@ -223,7 +223,7 @@
/*Check for shutdown condition*/
if (bcm2079x_dev && (bcm2079x_dev->shutdown_complete == true)) {
- dev_info(&bcm2079x_dev->client->dev, "%s: discarding read " \
+ dev_printk(KERN_INFO, &bcm2079x_dev->client->dev, "%s: discarding read " \
"as NFC in shutdown state\n", __func__);
mutex_unlock(&bcm2079x_dev->read_mutex);
return -ENODEV;
@@ -305,7 +305,7 @@
/*Check for shutdown condition*/
if (bcm2079x_dev && (bcm2079x_dev->shutdown_complete == true)) {
- dev_info(&bcm2079x_dev->client->dev, "%s: discarding write " \
+ dev_printk(KERN_INFO, &bcm2079x_dev->client->dev, "%s: discarding write " \
"as NFC in shutdown state\n", __func__);
mutex_unlock(&bcm2079x_dev->read_mutex);
return -ENODEV;
@@ -335,7 +335,7 @@
filp->private_data = bcm2079x_dev;
bcm2079x_init_stat(bcm2079x_dev);
bcm2079x_enable_irq(bcm2079x_dev);
- dev_info(&bcm2079x_dev->client->dev,
+ dev_printk(KERN_INFO, &bcm2079x_dev->client->dev,
"device node major=%d, minor=%d\n",
imajor(inode), iminor(inode));
@@ -353,13 +353,13 @@
case BCMNFC_READ_MULTI_PACKETS:
break;
case BCMNFC_CHANGE_ADDR:
- dev_info(&bcm2079x_dev->client->dev,
+ dev_printk(KERN_INFO, &bcm2079x_dev->client->dev,
"%s, BCMNFC_CHANGE_ADDR (%x, %lx):\n", __func__, cmd,
arg);
change_client_addr(bcm2079x_dev, arg);
break;
case BCMNFC_POWER_CTL:
- dev_info(&bcm2079x_dev->client->dev,
+ dev_printk(KERN_INFO, &bcm2079x_dev->client->dev,
"%s, BCMNFC_POWER_CTL (%x, %lx):\n",
__func__, cmd, arg);
if (arg == 1) {
@@ -369,7 +369,7 @@
gpio_set_value(bcm2079x_dev->en_gpio, arg);
break;
case BCMNFC_WAKE_CTL:
- dev_info(&bcm2079x_dev->client->dev,
+ dev_printk(KERN_INFO, &bcm2079x_dev->client->dev,
"%s, BCMNFC_WAKE_CTL (%x, %lx):\n",
__func__, cmd, arg);
gpio_set_value(bcm2079x_dev->wake_gpio, arg);
@@ -391,7 +391,8 @@
.read = bcm2079x_dev_read,
.write = bcm2079x_dev_write,
.open = bcm2079x_dev_open,
- .unlocked_ioctl = bcm2079x_dev_unlocked_ioctl
+ .unlocked_ioctl = bcm2079x_dev_unlocked_ioctl,
+ .compat_ioctl = bcm2079x_dev_unlocked_ioctl /* just primitives in/out */
};
static int bcm2079x_probe(struct i2c_client *client,
@@ -403,7 +404,7 @@
platform_data = client->dev.platform_data;
- dev_info(&client->dev, "%s, pro bcm2079x driver flags = %x\n",
+ dev_printk(KERN_INFO, &client->dev, "%s, pro bcm2079x driver flags = %x\n",
__func__, client->flags);
if (platform_data == NULL) {
dev_err(&client->dev, "nfc probe fail\n");
@@ -471,7 +472,7 @@
dev_err(&client->dev, "misc_register failed\n");
goto err_misc_register;
}
- dev_info(&client->dev,
+ dev_printk(KERN_INFO, &client->dev,
"%s, saving address 0x%02x\n",
__func__, client->addr);
bcm2079x_dev->original_address = client->addr;
@@ -479,7 +480,7 @@
/* request irq. the irq is set whenever the chip has data available
* for reading. it is cleared when all data has been read.
*/
- dev_info(&client->dev, "requesting IRQ %d with IRQF_NO_SUSPEND\n",
+ dev_printk(KERN_INFO, &client->dev, "requesting IRQ %d with IRQF_NO_SUSPEND\n",
client->irq);
bcm2079x_dev->irq_enabled = true;
ret = request_irq(client->irq, bcm2079x_dev_irq_handler,
@@ -491,7 +492,7 @@
}
bcm2079x_disable_irq(bcm2079x_dev);
i2c_set_clientdata(client, bcm2079x_dev);
- dev_info(&client->dev,
+ dev_printk(KERN_INFO, &client->dev,
"%s, probing bcm2079x driver exited successfully\n",
__func__);
return 0;
@@ -536,7 +537,7 @@
bcm2079x_data->shutdown_complete = true;
mutex_unlock(&bcm2079x_data->read_mutex);
- dev_info(&bcm2079x_data->client->dev,
+ dev_printk(KERN_INFO, &bcm2079x_data->client->dev,
"%s: NFC shutting down\n", __func__);
}
diff --git a/drivers/platform/tegra/tegra_usb_pad_ctrl.c b/drivers/platform/tegra/tegra_usb_pad_ctrl.c
index 0b49d76..f424f70 100644
--- a/drivers/platform/tegra/tegra_usb_pad_ctrl.c
+++ b/drivers/platform/tegra/tegra_usb_pad_ctrl.c
@@ -233,9 +233,13 @@
spin_lock_irqsave(&utmip_pad_lock, flags);
utmip_pad_count++;
+ val = readl(pad_base + UTMIP_SPARE_CFG0);
+ val &= ~(FUSE_HS_SQUELCH_LEVEL | FUSE_HS_IREF_CAP_CFG);
+ writel(val, pad_base + UTMIP_SPARE_CFG0);
+
val = readl(pad_base + UTMIP_BIAS_CFG0);
- val &= ~(UTMIP_OTGPD | UTMIP_BIASPD);
- val |= UTMIP_HSSQUELCH_LEVEL(0x2) | UTMIP_HSDISCON_LEVEL(0x3) |
+ val &= ~(UTMIP_HSSQUELCH_LEVEL(~0x0) | UTMIP_OTGPD | UTMIP_BIASPD);
+ val |= UTMIP_HSSQUELCH_LEVEL(0x1) | UTMIP_HSDISCON_LEVEL(0x3) |
UTMIP_HSDISCON_LEVEL_MSB;
writel(val, pad_base + UTMIP_BIAS_CFG0);
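The UTMIP hunk above is the standard clear-then-set read-modify-write pattern for a multi-bit register field: `~UTMIP_HSSQUELCH_LEVEL(~0x0)` clears every bit of the squelch-level field before the new value is OR'd in, so stale bits from the previous level (0x2) cannot survive under the new one (0x1). A generic sketch of the idiom, with hypothetical field macros:

```c
/* Generic clear-then-set update of a multi-bit register field.
 * FIELD(v) places v in the field; FIELD(~0u) yields the full
 * field mask, mirroring UTMIP_HSSQUELCH_LEVEL(~0x0) above.
 */
#define FIELD_SHIFT 4
#define FIELD(v)    (((v) & 0x3u) << FIELD_SHIFT)

static unsigned int update_field(unsigned int reg, unsigned int val)
{
	reg &= ~FIELD(~0u);   /* clear every bit of the field */
	reg |= FIELD(val);    /* then set the new value */
	return reg;
}
```

OR-ing the new value without the clear would merge old and new bits (0x2 | 0x1 = 0x3), which is exactly the bug this pattern prevents.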
diff --git a/drivers/power/Kconfig b/drivers/power/Kconfig
index 0cadf6c..8ad82bd 100644
--- a/drivers/power/Kconfig
+++ b/drivers/power/Kconfig
@@ -47,6 +47,20 @@
Say Y here to enable driver support for TI BQ24190/
BQ24192/BQ24192i/BQ24193 Charger.
+config CHARGER_BQ2419X_HTC
+ tristate "BQ24190/BQ24192/BQ24192i/BQ24193 HTC Charger driver support"
+ depends on I2C && HTC_BATTERY_BQ2419X
+ help
+ Say Y here to enable driver support for TI BQ24190/
+ BQ24192/BQ24192i/BQ24193 Charger with HTC policy.
+
+config HTC_BATTERY_BQ2419X
+ tristate "HTC Battery policy for BQ2419X charging control"
+ select BATTERY_CHARGER_GAUGE_COMM
+ select CABLE_VBUS_MONITOR
+ help
+ This module is to control charging through BQ2419X driver.
+
config CHARGER_BQ2471X
tristate "BQ24715/BQ24717 Charger driver support"
depends on I2C
@@ -312,6 +326,19 @@
to operate with a single lithium cell, and MAX17049 for two lithium
cells.
+config GAUGE_MAX17050
+ tristate "Maxim MAX17050 Fuel Gauge"
+ depends on I2C
+ help
+ MAX17050 is fuel-gauge systems for lithium-ion (Li+) batteries
+ in handheld and portable equipment. The MAX17050 is configured
+ to operate with a single lithium cell.
+
+config FLOUNDER_BATTERY
+ tristate "Battery handling for flounder boards"
+ help
+ This module adds battery handling specific to flounder boards.
+
config BATTERY_BQ27441
tristate "TI's BQ27441 Fuel Gauge"
depends on I2C
diff --git a/drivers/power/Makefile b/drivers/power/Makefile
index e9fca36..63ad417 100644
--- a/drivers/power/Makefile
+++ b/drivers/power/Makefile
@@ -32,6 +32,8 @@
obj-$(CONFIG_BATTERY_SBS) += sbs-battery.o
obj-$(CONFIG_BATTERY_LC709203F) += lc709203f_battery.o
obj-$(CONFIG_CHARGER_BQ2419X) += bq2419x-charger.o
+obj-$(CONFIG_CHARGER_BQ2419X_HTC) += bq2419x-charger-htc.o
+obj-$(CONFIG_HTC_BATTERY_BQ2419X) += htc_battery_bq2419x.o
obj-$(CONFIG_CHARGER_BQ2471X) += bq2471x-charger.o
obj-$(CONFIG_CHARGER_BQ2477X) += bq2477x-charger.o
obj-$(CONFIG_BATTERY_BQ27x00) += bq27x00_battery.o
@@ -46,6 +48,8 @@
obj-$(CONFIG_BATTERY_MAX17040) += max17040_battery.o
obj-$(CONFIG_BATTERY_MAX17042) += max17042_battery.o
obj-$(CONFIG_BATTERY_MAX17048) += max17048_battery.o
+obj-$(CONFIG_GAUGE_MAX17050) += max17050_gauge.o
+obj-$(CONFIG_FLOUNDER_BATTERY) += flounder_battery.o
obj-$(CONFIG_BATTERY_BQ27441) += bq27441_battery.o
obj-$(CONFIG_BATTERY_PALMAS) += palmas_battery.o
obj-$(CONFIG_BATTERY_Z2) += z2_battery.o
diff --git a/drivers/power/battery-charger-gauge-comm.c b/drivers/power/battery-charger-gauge-comm.c
index 764fde8..2f73dd0 100644
--- a/drivers/power/battery-charger-gauge-comm.c
+++ b/drivers/power/battery-charger-gauge-comm.c
@@ -41,18 +41,48 @@
#include <linux/iio/consumer.h>
#include <linux/iio/types.h>
#include <linux/iio/iio.h>
+#include <linux/power_supply.h>
-#define JETI_TEMP_COLD 0
-#define JETI_TEMP_COOL 10
-#define JETI_TEMP_WARM 45
-#define JETI_TEMP_HOT 60
+#define JETI_TEMP_COLD (0)
+#define JETI_TEMP_COOL (100)
+#define JETI_TEMP_WARM (450)
+#define JETI_TEMP_HOT (600)
-#define MAX_STR_PRINT 50
+#define DEFAULT_BATTERY_REGULATION_VOLTAGE (4250)
+#define DEFAULT_BATTERY_THERMAL_VOLTAGE (4100)
+#define DEFAULT_CHARGE_DONE_CURRENT (670)
+#define DEFAULT_CHARGE_DONE_LOW_CURRENT (400)
+
+#define MAX_STR_PRINT (50)
+
+#define BATT_INFO_NO_VALUE (-1)
+
+#define CHARGING_CURRENT_0_TORRENCE (-10)
+#define CHARGING_FULL_DONE_CURRENT_THRESHOLD_TORRENCE (200)
+
+#define CHARGING_FULL_DONE_CHECK_TIMES (3)
+#define CHARGING_FULL_DONE_TIMEOUT_S (5400)
+#define CHARGING_FULL_DONE_LEVEL_MIN (96)
+
+#define UNKNOWN_BATTERY_ID_CHECK_COUNT (5)
+#define UNKNOWN_BATTERY_ID_CHECK_DELAY (3*HZ)
+
+#define DEFAULT_INPUT_VMIN_MV (4200)
+
+enum battery_monitor_state {
+ MONITOR_WAIT = 0,
+ MONITOR_THERMAL,
+ MONITOR_BATT_STATUS,
+};
static DEFINE_MUTEX(charger_gauge_list_mutex);
static LIST_HEAD(charger_list);
static LIST_HEAD(gauge_list);
+struct battery_info_ops {
+ int (*get_battery_temp)(void);
+};
+
struct battery_charger_dev {
int cell_id;
char *tz_name;
@@ -62,13 +92,33 @@
void *drv_data;
struct delayed_work restart_charging_wq;
struct delayed_work poll_temp_monitor_wq;
+ struct delayed_work poll_batt_status_monitor_wq;
int polling_time_sec;
struct thermal_zone_device *battery_tz;
- bool start_monitoring;
+ enum battery_monitor_state start_monitoring;
struct wake_lock charger_wake_lock;
bool locked;
struct rtc_device *rtc;
bool enable_thermal_monitor;
+ bool enable_batt_status_monitor;
+ ktime_t batt_status_last_check_time;
+ struct battery_thermal_prop thermal_prop;
+ enum charge_thermal_state thermal_state;
+ struct battery_info_ops batt_info_ops;
+ bool chg_full_done;
+ bool chg_full_stop;
+ int chg_full_done_prev_check_count;
+ ktime_t chg_full_stop_expire_time;
+ int in_current_limit;
+ struct charge_full_threshold full_thr;
+ struct mutex mutex;
+ const char *batt_id_channel_name;
+ struct charge_input_switch input_switch;
+ int unknown_batt_id_min;
+ struct delayed_work unknown_batt_id_work;
+ int unknown_batt_id_check_count;
+ const char *gauge_psy_name;
+ struct power_supply *psy;
};
struct battery_gauge_dev {
@@ -80,8 +130,10 @@
void *drv_data;
struct thermal_zone_device *battery_tz;
int battery_voltage;
+ int battery_current;
int battery_capacity;
int battery_snapshot_voltage;
+ int battery_snapshot_current;
int battery_snapshot_capacity;
const char *bat_curr_channel_name;
struct iio_channel *bat_current_iio_channel;
@@ -89,6 +141,21 @@
struct battery_gauge_dev *bg_temp;
+static inline int psy_get_property(struct power_supply *psy,
+ enum power_supply_property psp, int *val)
+{
+ union power_supply_propval pv;
+
+ if (!psy || !val)
+ return -EINVAL;
+
+ if (psy->get_property(psy, psp, &pv))
+ return -EFAULT;
+
+ *val = pv.intval;
+ return 0;
+}
+
static void battery_charger_restart_charging_wq(struct work_struct *work)
{
struct battery_charger_dev *bc_dev;
@@ -103,53 +170,120 @@
bc_dev->ops->restart_charging(bc_dev);
}
-static void battery_charger_thermal_monitor_wq(struct work_struct *work)
+static int battery_charger_thermal_monitor_func(
+ struct battery_charger_dev *bc_dev)
{
- struct battery_charger_dev *bc_dev;
struct device *dev;
long temperature;
bool charger_enable_state;
bool charger_current_half;
int battery_thersold_voltage;
+ int temp;
+ enum charge_thermal_state thermal_state_new;
int ret;
- bc_dev = container_of(work, struct battery_charger_dev,
- poll_temp_monitor_wq.work);
- if (!bc_dev->tz_name)
- return;
-
dev = bc_dev->parent_dev;
- if (!bc_dev->battery_tz) {
- bc_dev->battery_tz = thermal_zone_device_find_by_name(
+
+ if (bc_dev->tz_name) {
+ if (!bc_dev->battery_tz) {
+ bc_dev->battery_tz = thermal_zone_device_find_by_name(
bc_dev->tz_name);
- if (!bc_dev->battery_tz) {
- dev_info(dev,
- "Battery thermal zone %s is not registered yet\n",
- bc_dev->tz_name);
- schedule_delayed_work(&bc_dev->poll_temp_monitor_wq,
- msecs_to_jiffies(bc_dev->polling_time_sec * HZ));
- return;
+ if (!bc_dev->battery_tz) {
+ dev_info(dev,
+ "Battery thermal zone %s is not registered yet\n",
+ bc_dev->tz_name);
+ schedule_delayed_work(
+ &bc_dev->poll_temp_monitor_wq,
+ msecs_to_jiffies(
+ bc_dev->polling_time_sec * 1000));
+ return -EINVAL;
+ }
}
- }
- ret = thermal_zone_get_temp(bc_dev->battery_tz, &temperature);
- if (ret < 0) {
- dev_err(dev, "Temperature read failed: %d\n ", ret);
- goto exit;
- }
- temperature = temperature / 1000;
+ ret = thermal_zone_get_temp(bc_dev->battery_tz, &temperature);
+ if (ret < 0) {
+ dev_err(dev, "Temperature read failed: %d\n", ret);
+ return 0;
+ }
+ temperature = temperature / 100;
+ } else if (bc_dev->batt_info_ops.get_battery_temp)
+ temperature = bc_dev->batt_info_ops.get_battery_temp();
+ else if (bc_dev->psy) {
+ ret = psy_get_property(bc_dev->psy, POWER_SUPPLY_PROP_TEMP,
+ &temp);
+ if (ret) {
+ dev_err(dev,
+ "POWER_SUPPLY_PROP_TEMP read failed: %d\n",
+ ret);
+ return ret;
+ }
+ temperature = temp;
+ } else
+ return -EINVAL;
- charger_enable_state = true;
- charger_current_half = false;
- battery_thersold_voltage = 4250;
- if (temperature <= JETI_TEMP_COLD || temperature >= JETI_TEMP_HOT) {
+ if (temperature <= bc_dev->thermal_prop.temp_cold_dc)
+ thermal_state_new = CHARGE_THERMAL_COLD_STOP;
+ else if (temperature <= bc_dev->thermal_prop.temp_cool_dc) {
+ if (bc_dev->thermal_state == CHARGE_THERMAL_COLD_STOP &&
+ temperature < bc_dev->thermal_prop.temp_cold_dc
+ + bc_dev->thermal_prop.temp_hysteresis_dc)
+ thermal_state_new = CHARGE_THERMAL_COOL_STOP;
+ else
+ thermal_state_new = CHARGE_THERMAL_COOL;
+ } else if (temperature < bc_dev->thermal_prop.temp_warm_dc)
+ thermal_state_new = CHARGE_THERMAL_NORMAL;
+ else if (temperature < bc_dev->thermal_prop.temp_hot_dc) {
+ if (bc_dev->thermal_state == CHARGE_THERMAL_HOT_STOP &&
+ temperature >= bc_dev->thermal_prop.temp_hot_dc
+ - bc_dev->thermal_prop.temp_hysteresis_dc)
+ thermal_state_new = CHARGE_THERMAL_WARM_STOP;
+ else
+ thermal_state_new = CHARGE_THERMAL_WARM;
+ } else
+ thermal_state_new = CHARGE_THERMAL_HOT_STOP;
+
+ mutex_lock(&bc_dev->mutex);
+ if (bc_dev->thermal_state != thermal_state_new)
+ dev_info(bc_dev->parent_dev,
+ "Battery charging state changed (%d -> %d)\n",
+ bc_dev->thermal_state, thermal_state_new);
+
+ bc_dev->thermal_state = thermal_state_new;
+ mutex_unlock(&bc_dev->mutex);
+
+ switch (thermal_state_new) {
+ case CHARGE_THERMAL_COLD_STOP:
+ case CHARGE_THERMAL_COOL_STOP:
charger_enable_state = false;
- } else if (temperature <= JETI_TEMP_COOL ||
- temperature >= JETI_TEMP_WARM) {
- charger_current_half = true;
- battery_thersold_voltage = 4100;
+ charger_current_half = false;
+ battery_thersold_voltage = bc_dev->thermal_prop.cool_voltage_mv;
+ break;
+ case CHARGE_THERMAL_COOL:
+ charger_enable_state = true;
+ charger_current_half =
+ !bc_dev->thermal_prop.disable_cool_current_half;
+ battery_thersold_voltage = bc_dev->thermal_prop.cool_voltage_mv;
+ break;
+ case CHARGE_THERMAL_WARM:
+ charger_enable_state = true;
+ charger_current_half =
+ !bc_dev->thermal_prop.disable_warm_current_half;
+ battery_thersold_voltage = bc_dev->thermal_prop.warm_voltage_mv;
+ break;
+ case CHARGE_THERMAL_WARM_STOP:
+ case CHARGE_THERMAL_HOT_STOP:
+ charger_enable_state = false;
+ charger_current_half = false;
+ battery_thersold_voltage = bc_dev->thermal_prop.warm_voltage_mv;
+ break;
+ case CHARGE_THERMAL_NORMAL:
+ default:
+ charger_enable_state = true;
+ charger_current_half = false;
+ battery_thersold_voltage =
+ bc_dev->thermal_prop.regulation_voltage_mv;
}
if (bc_dev->ops->thermal_configure)
@@ -157,11 +291,242 @@
charger_enable_state, charger_current_half,
battery_thersold_voltage);
-exit:
- if (bc_dev->start_monitoring)
+ return 0;
+}
+
+static void battery_charger_thermal_monitor_wq(struct work_struct *work)
+{
+ struct battery_charger_dev *bc_dev;
+ int ret;
+
+ bc_dev = container_of(work, struct battery_charger_dev,
+ poll_temp_monitor_wq.work);
+
+ ret = battery_charger_thermal_monitor_func(bc_dev);
+
+ mutex_lock(&bc_dev->mutex);
+ if (!ret && bc_dev->start_monitoring == MONITOR_THERMAL)
schedule_delayed_work(&bc_dev->poll_temp_monitor_wq,
- msecs_to_jiffies(bc_dev->polling_time_sec * HZ));
- return;
+ msecs_to_jiffies(bc_dev->polling_time_sec * 1000));
+ mutex_unlock(&bc_dev->mutex);
+}
+
+static inline bool is_charge_thermal_normal(struct battery_charger_dev *bc_dev)
+{
+ switch (bc_dev->thermal_state) {
+ case CHARGE_THERMAL_COLD_STOP:
+ case CHARGE_THERMAL_COOL_STOP:
+ case CHARGE_THERMAL_WARM_STOP:
+ case CHARGE_THERMAL_HOT_STOP:
+ return false;
+ case CHARGE_THERMAL_COOL:
+ if (bc_dev->thermal_prop.cool_voltage_mv !=
+ bc_dev->thermal_prop.regulation_voltage_mv)
+ return false;
+ break;
+ case CHARGE_THERMAL_WARM:
+ if (bc_dev->thermal_prop.warm_voltage_mv !=
+ bc_dev->thermal_prop.regulation_voltage_mv)
+ return false;
+ break;
+ case CHARGE_THERMAL_START:
+ case CHARGE_THERMAL_NORMAL:
+ default:
+ /* normal condition, do nothing */
+ break;
+ }
+
+ return true;
+}
+
+static int battery_charger_batt_status_monitor_func(
+ struct battery_charger_dev *bc_dev)
+{
+ bool chg_full_done_check_match = false;
+ bool chg_full_done, chg_full_stop;
+ bool is_thermal_normal;
+ ktime_t timeout, cur_boottime;
+ int volt, curr, level;
+ int ret = 0;
+
+ mutex_lock(&bc_dev->mutex);
+ ret = psy_get_property(bc_dev->psy, POWER_SUPPLY_PROP_VOLTAGE_NOW,
+ &volt);
+ if (ret) {
+ dev_err(bc_dev->parent_dev,
+ "POWER_SUPPLY_PROP_VOLTAGE_NOW read failed: %d\n",
+ ret);
+ goto error;
+ }
+
+ ret = psy_get_property(bc_dev->psy, POWER_SUPPLY_PROP_CURRENT_NOW,
+ &curr);
+ if (ret) {
+ dev_err(bc_dev->parent_dev,
+ "POWER_SUPPLY_PROP_CURRENT_NOW read failed: %d\n",
+ ret);
+ goto error;
+ }
+
+ ret = psy_get_property(bc_dev->psy, POWER_SUPPLY_PROP_CAPACITY,
+ &level);
+ if (ret) {
+ dev_err(bc_dev->parent_dev,
+ "POWER_SUPPLY_PROP_CAPACITY read failed: %d\n",
+ ret);
+ goto error;
+ }
+
+ volt /= 1000;
+ curr /= 1000;
+
+ cur_boottime = ktime_get_boottime();
+ is_thermal_normal = is_charge_thermal_normal(bc_dev);
+ if (level < CHARGING_FULL_DONE_LEVEL_MIN ||
+ (!bc_dev->chg_full_done && !is_thermal_normal)) {
+ bc_dev->chg_full_done_prev_check_count = 0;
+ bc_dev->chg_full_done = false;
+ bc_dev->chg_full_stop = false;
+ goto done;
+ }
+
+ if (!bc_dev->chg_full_done &&
+ volt >= bc_dev->full_thr.chg_done_voltage_min_mv &&
+ curr >= CHARGING_CURRENT_0_TORRENCE &&
+ level > CHARGING_FULL_DONE_LEVEL_MIN) {
+ if (bc_dev->in_current_limit <
+ bc_dev->full_thr.chg_done_current_min_ma
+ + CHARGING_FULL_DONE_CURRENT_THRESHOLD_TORRENCE) {
+ if (curr <
+ bc_dev->full_thr.chg_done_low_current_min_ma)
+ chg_full_done_check_match = true;
+ } else {
+ if (curr <
+ bc_dev->full_thr.chg_done_current_min_ma)
+ chg_full_done_check_match = true;
+ }
+
+ if (!chg_full_done_check_match)
+ bc_dev->chg_full_done_prev_check_count = 0;
+ else {
+ bc_dev->chg_full_done_prev_check_count++;
+ if (bc_dev->chg_full_done_prev_check_count >=
+ CHARGING_FULL_DONE_CHECK_TIMES) {
+ bc_dev->chg_full_done = true;
+ timeout = ktime_set(
+ CHARGING_FULL_DONE_TIMEOUT_S, 0);
+ bc_dev->chg_full_stop_expire_time =
+ ktime_add(cur_boottime, timeout);
+ }
+ }
+ }
+
+ if (!bc_dev->chg_full_stop) {
+ if (bc_dev->chg_full_done &&
+ volt >= bc_dev->full_thr.recharge_voltage_min_mv) {
+ if (ktime_compare(bc_dev->chg_full_stop_expire_time,
+ cur_boottime) <= 0) {
+ dev_info(bc_dev->parent_dev,
+ "Charging full stop timeout\n");
+ bc_dev->chg_full_stop = true;
+ }
+ }
+ } else {
+ if (volt < bc_dev->full_thr.recharge_voltage_min_mv) {
+ dev_info(bc_dev->parent_dev,
+ "Charging full recharging\n");
+ bc_dev->chg_full_stop = false;
+ timeout = ktime_set(
+ CHARGING_FULL_DONE_TIMEOUT_S, 0);
+ bc_dev->chg_full_stop_expire_time =
+ ktime_add(cur_boottime, timeout);
+ }
+ }
+
+done:
+ chg_full_done = bc_dev->chg_full_done;
+ chg_full_stop = bc_dev->chg_full_stop;
+ bc_dev->batt_status_last_check_time = cur_boottime;
+ mutex_unlock(&bc_dev->mutex);
+
+ if (bc_dev->ops->charging_full_configure)
+ bc_dev->ops->charging_full_configure(bc_dev,
+ chg_full_done, chg_full_stop);
+
+ return 0;
+error:
+ mutex_unlock(&bc_dev->mutex);
+ return ret;
+}
+
+static int battery_charger_input_voltage_adjust_func(
+ struct battery_charger_dev *bc_dev)
+{
+ int ret;
+ int batt_volt, input_volt_min;
+
+ mutex_lock(&bc_dev->mutex);
+ ret = psy_get_property(bc_dev->psy, POWER_SUPPLY_PROP_VOLTAGE_NOW,
+ &batt_volt);
+ if (ret) {
+ dev_err(bc_dev->parent_dev,
+ "POWER_SUPPLY_PROP_VOLTAGE_NOW read failed: %d\n",
+ ret);
+ mutex_unlock(&bc_dev->mutex);
+ return -EINVAL;
+ }
+ batt_volt /= 1000;
+
+ if (batt_volt > bc_dev->input_switch.input_switch_threshold_mv)
+ input_volt_min = bc_dev->input_switch.input_vmin_high_mv;
+ else
+ input_volt_min = bc_dev->input_switch.input_vmin_low_mv;
+ mutex_unlock(&bc_dev->mutex);
+
+ if (bc_dev->ops->input_voltage_configure)
+ bc_dev->ops->input_voltage_configure(bc_dev, input_volt_min);
+
+ return 0;
+}
+
+static void battery_charger_batt_status_monitor_wq(struct work_struct *work)
+{
+ struct battery_charger_dev *bc_dev;
+ int ret;
+ bool keep_monitor = false;
+
+ bc_dev = container_of(work, struct battery_charger_dev,
+ poll_batt_status_monitor_wq.work);
+
+ if (!bc_dev->psy) {
+ bc_dev->psy = power_supply_get_by_name(bc_dev->gauge_psy_name);
+
+ if (!bc_dev->psy) {
+ dev_warn(bc_dev->parent_dev, "Cannot get power_supply:%s\n",
+ bc_dev->gauge_psy_name);
+ keep_monitor = true;
+ goto retry;
+ }
+ }
+
+ ret = battery_charger_thermal_monitor_func(bc_dev);
+ if (!ret)
+ keep_monitor = true;
+
+ ret = battery_charger_input_voltage_adjust_func(bc_dev);
+ if (!ret)
+ keep_monitor = true;
+
+ ret = battery_charger_batt_status_monitor_func(bc_dev);
+ if (!ret)
+ keep_monitor = true;
+
+retry:
+ mutex_lock(&bc_dev->mutex);
+ if (keep_monitor && bc_dev->start_monitoring == MONITOR_BATT_STATUS)
+ schedule_delayed_work(&bc_dev->poll_batt_status_monitor_wq,
+ msecs_to_jiffies(bc_dev->polling_time_sec * 1000));
+ mutex_unlock(&bc_dev->mutex);
}
int battery_charger_set_current_broadcast(struct battery_charger_dev *bc_dev)
@@ -190,12 +555,19 @@
int battery_charger_thermal_start_monitoring(
struct battery_charger_dev *bc_dev)
{
- if (!bc_dev || !bc_dev->polling_time_sec || !bc_dev->tz_name)
+ if (!bc_dev || !bc_dev->polling_time_sec || (!bc_dev->tz_name
+ && !bc_dev->batt_info_ops.get_battery_temp))
return -EINVAL;
- bc_dev->start_monitoring = true;
- schedule_delayed_work(&bc_dev->poll_temp_monitor_wq,
- msecs_to_jiffies(bc_dev->polling_time_sec * HZ));
+ mutex_lock(&bc_dev->mutex);
+ if (bc_dev->start_monitoring == MONITOR_WAIT) {
+ bc_dev->start_monitoring = MONITOR_THERMAL;
+ bc_dev->thermal_state = CHARGE_THERMAL_START;
+ bc_dev->batt_status_last_check_time = ktime_get_boottime();
+ schedule_delayed_work(&bc_dev->poll_temp_monitor_wq,
+ msecs_to_jiffies(1000));
+ }
+ mutex_unlock(&bc_dev->mutex);
return 0;
}
EXPORT_SYMBOL_GPL(battery_charger_thermal_start_monitoring);
@@ -203,15 +575,111 @@
int battery_charger_thermal_stop_monitoring(
struct battery_charger_dev *bc_dev)
{
- if (!bc_dev || !bc_dev->polling_time_sec || !bc_dev->tz_name)
+ if (!bc_dev || !bc_dev->polling_time_sec || (!bc_dev->tz_name
+ && !bc_dev->batt_info_ops.get_battery_temp))
return -EINVAL;
- bc_dev->start_monitoring = false;
- cancel_delayed_work(&bc_dev->poll_temp_monitor_wq);
+ mutex_lock(&bc_dev->mutex);
+ if (bc_dev->start_monitoring == MONITOR_THERMAL) {
+ bc_dev->start_monitoring = MONITOR_WAIT;
+ cancel_delayed_work(&bc_dev->poll_temp_monitor_wq);
+ }
+ mutex_unlock(&bc_dev->mutex);
return 0;
}
EXPORT_SYMBOL_GPL(battery_charger_thermal_stop_monitoring);
+int battery_charger_batt_status_start_monitoring(
+ struct battery_charger_dev *bc_dev,
+ int in_current_limit)
+{
+ if (!bc_dev || !bc_dev->polling_time_sec || (!bc_dev->tz_name
+ && !bc_dev->batt_info_ops.get_battery_temp
+ && !bc_dev->gauge_psy_name))
+ return -EINVAL;
+
+ mutex_lock(&bc_dev->mutex);
+ if (bc_dev->start_monitoring == MONITOR_WAIT) {
+ bc_dev->chg_full_stop = false;
+ bc_dev->chg_full_done = false;
+ bc_dev->chg_full_done_prev_check_count = 0;
+ bc_dev->in_current_limit = in_current_limit;
+ bc_dev->start_monitoring = MONITOR_BATT_STATUS;
+ bc_dev->thermal_state = CHARGE_THERMAL_START;
+ schedule_delayed_work(&bc_dev->poll_batt_status_monitor_wq,
+ msecs_to_jiffies(1000));
+ }
+ mutex_unlock(&bc_dev->mutex);
+ return 0;
+}
+EXPORT_SYMBOL_GPL(battery_charger_batt_status_start_monitoring);
+
+int battery_charger_batt_status_stop_monitoring(
+ struct battery_charger_dev *bc_dev)
+{
+ if (!bc_dev || !bc_dev->polling_time_sec || (!bc_dev->tz_name
+ && !bc_dev->batt_info_ops.get_battery_temp
+ && !bc_dev->gauge_psy_name))
+ return -EINVAL;
+
+ mutex_lock(&bc_dev->mutex);
+ if (bc_dev->start_monitoring == MONITOR_BATT_STATUS) {
+ bc_dev->start_monitoring = MONITOR_WAIT;
+ cancel_delayed_work(&bc_dev->poll_batt_status_monitor_wq);
+ }
+ mutex_unlock(&bc_dev->mutex);
+ return 0;
+}
+EXPORT_SYMBOL_GPL(battery_charger_batt_status_stop_monitoring);
+
+int battery_charger_batt_status_force_check(
+ struct battery_charger_dev *bc_dev)
+{
+ if (!bc_dev)
+ return -EINVAL;
+
+ mutex_lock(&bc_dev->mutex);
+ if (bc_dev->start_monitoring == MONITOR_BATT_STATUS) {
+ cancel_delayed_work(&bc_dev->poll_batt_status_monitor_wq);
+ schedule_delayed_work(&bc_dev->poll_batt_status_monitor_wq, 0);
+ }
+ mutex_unlock(&bc_dev->mutex);
+ return 0;
+}
+EXPORT_SYMBOL_GPL(battery_charger_batt_status_force_check);
+
+int battery_charger_get_batt_status_no_update_time_ms(
+ struct battery_charger_dev *bc_dev, s64 *time)
+{
+ int ret = 0;
+ ktime_t cur_boottime;
+ s64 delta;
+
+ if (!bc_dev || !time)
+ return -EINVAL;
+
+ mutex_lock(&bc_dev->mutex);
+ if (bc_dev->start_monitoring != MONITOR_BATT_STATUS)
+ ret = -ENODEV;
+ else {
+ cur_boottime = ktime_get_boottime();
+ if (ktime_compare(cur_boottime,
+ bc_dev->batt_status_last_check_time) <= 0)
+ *time = 0;
+ else {
+ delta = ktime_us_delta(
+ cur_boottime,
+ bc_dev->batt_status_last_check_time);
+ do_div(delta, 1000);
+ *time = delta;
+ }
+ }
+ mutex_unlock(&bc_dev->mutex);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(battery_charger_get_batt_status_no_update_time_ms);
+
int battery_charger_acquire_wake_lock(struct battery_charger_dev *bc_dev)
{
if (!bc_dev->locked) {
@@ -255,6 +723,16 @@
bg_dev->battery_snapshot_voltage);
}
+static ssize_t battery_show_snapshot_current(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ struct battery_gauge_dev *bg_dev = bg_temp;
+
+ return snprintf(buf, MAX_STR_PRINT, "%d\n",
+ bg_dev->battery_snapshot_current);
+}
+
static ssize_t battery_show_snapshot_capacity(struct device *dev,
struct device_attribute *attr,
char *buf)
@@ -294,6 +772,9 @@
static DEVICE_ATTR(battery_snapshot_voltage, S_IRUGO,
battery_show_snapshot_voltage, NULL);
+static DEVICE_ATTR(battery_snapshot_current, S_IRUGO,
+ battery_show_snapshot_current, NULL);
+
static DEVICE_ATTR(battery_snapshot_capacity, S_IRUGO,
battery_show_snapshot_capacity, NULL);
@@ -302,6 +783,7 @@
static struct attribute *battery_snapshot_attributes[] = {
&dev_attr_battery_snapshot_voltage.attr,
+ &dev_attr_battery_snapshot_current.attr,
&dev_attr_battery_snapshot_capacity.attr,
&dev_attr_battery_max_capacity.attr,
NULL
@@ -323,6 +805,18 @@
}
EXPORT_SYMBOL_GPL(battery_gauge_record_voltage_value);
+int battery_gauge_record_current_value(struct battery_gauge_dev *bg_dev,
+ int battery_current)
+{
+ if (!bg_dev)
+ return -EINVAL;
+
+ bg_dev->battery_current = battery_current;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(battery_gauge_record_current_value);
+
int battery_gauge_record_capacity_value(struct battery_gauge_dev *bg_dev,
int capacity)
{
@@ -343,6 +837,7 @@
return -EINVAL;
bg_dev->battery_snapshot_voltage = bg_dev->battery_voltage;
+ bg_dev->battery_snapshot_current = bg_dev->battery_current;
bg_dev->battery_snapshot_capacity = bg_dev->battery_capacity;
return 0;
@@ -463,10 +958,142 @@
}
EXPORT_SYMBOL_GPL(battery_charging_system_power_on_usb_event);
+static void battery_charger_unknown_batt_id_work(struct work_struct *work)
+{
+ struct battery_charger_dev *bc_dev;
+ int batt_id = 0;
+ struct iio_channel *batt_id_channel;
+ int ret;
+
+ bc_dev = container_of(work, struct battery_charger_dev,
+ unknown_batt_id_work.work);
+
+ batt_id_channel = iio_channel_get(NULL, bc_dev->batt_id_channel_name);
+ if (IS_ERR(batt_id_channel)) {
+ if (bc_dev->unknown_batt_id_check_count > 0) {
+ bc_dev->unknown_batt_id_check_count--;
+ schedule_delayed_work(&bc_dev->unknown_batt_id_work,
+ UNKNOWN_BATTERY_ID_CHECK_DELAY);
+ } else
+ dev_err(bc_dev->parent_dev,
+ "Failed to get iio channel %s, %ld\n",
+ bc_dev->batt_id_channel_name,
+ PTR_ERR(batt_id_channel));
+ } else {
+ ret = iio_read_channel_processed(batt_id_channel, &batt_id);
+ if (ret < 0)
+ ret = iio_read_channel_raw(batt_id_channel, &batt_id);
+
+ if (ret < 0) {
+ dev_err(bc_dev->parent_dev,
+ "Failed to read batt id, ret=%d\n",
+ ret);
+ return;
+ }
+
+ dev_info(bc_dev->parent_dev,
+ "Battery id adc value is %d\n", batt_id);
+ if (batt_id > bc_dev->unknown_batt_id_min) {
+ dev_info(bc_dev->parent_dev,
+ "Unknown battery detected(%d), no charging!\n",
+ batt_id);
+
+ if (bc_dev->ops->unknown_battery_handle)
+ bc_dev->ops->unknown_battery_handle(bc_dev);
+ }
+ }
+}
+
+static void battery_charger_thermal_prop_init(
+ struct battery_charger_dev *bc_dev,
+ struct battery_thermal_prop thermal_prop)
+{
+ if (!bc_dev)
+ return;
+
+ if (thermal_prop.temp_hot_dc >= thermal_prop.temp_warm_dc
+ && thermal_prop.temp_warm_dc > thermal_prop.temp_cool_dc
+ && thermal_prop.temp_cool_dc >= thermal_prop.temp_cold_dc) {
+ bc_dev->thermal_prop.temp_hot_dc = thermal_prop.temp_hot_dc;
+ bc_dev->thermal_prop.temp_cold_dc = thermal_prop.temp_cold_dc;
+ bc_dev->thermal_prop.temp_warm_dc = thermal_prop.temp_warm_dc;
+ bc_dev->thermal_prop.temp_cool_dc = thermal_prop.temp_cool_dc;
+ bc_dev->thermal_prop.temp_hysteresis_dc =
+ thermal_prop.temp_hysteresis_dc;
+ } else {
+ bc_dev->thermal_prop.temp_hot_dc = JETI_TEMP_HOT;
+ bc_dev->thermal_prop.temp_cold_dc = JETI_TEMP_COLD;
+ bc_dev->thermal_prop.temp_warm_dc = JETI_TEMP_WARM;
+ bc_dev->thermal_prop.temp_cool_dc = JETI_TEMP_COOL;
+ bc_dev->thermal_prop.temp_hysteresis_dc = 0;
+ }
+
+ bc_dev->thermal_prop.regulation_voltage_mv =
+ thermal_prop.regulation_voltage_mv ?:
+ DEFAULT_BATTERY_REGULATION_VOLTAGE;
+
+ bc_dev->thermal_prop.warm_voltage_mv =
+ thermal_prop.warm_voltage_mv ?:
+ DEFAULT_BATTERY_THERMAL_VOLTAGE;
+
+ bc_dev->thermal_prop.cool_voltage_mv =
+ thermal_prop.cool_voltage_mv ?:
+ DEFAULT_BATTERY_THERMAL_VOLTAGE;
+
+ bc_dev->thermal_prop.disable_warm_current_half =
+ thermal_prop.disable_warm_current_half;
+
+ bc_dev->thermal_prop.disable_cool_current_half =
+ thermal_prop.disable_cool_current_half;
+}
+
+static void battery_charger_charge_full_threshold_init(
+ struct battery_charger_dev *bc_dev,
+ struct charge_full_threshold full_thr)
+{
+ if (!bc_dev)
+ return;
+
+ bc_dev->full_thr.chg_done_voltage_min_mv =
+ full_thr.chg_done_voltage_min_mv ?:
+ DEFAULT_BATTERY_REGULATION_VOLTAGE - 102;
+
+ bc_dev->full_thr.chg_done_current_min_ma =
+ full_thr.chg_done_current_min_ma ?:
+ DEFAULT_CHARGE_DONE_CURRENT;
+
+ bc_dev->full_thr.chg_done_low_current_min_ma =
+ full_thr.chg_done_low_current_min_ma ?:
+ DEFAULT_CHARGE_DONE_LOW_CURRENT;
+
+ bc_dev->full_thr.recharge_voltage_min_mv =
+ full_thr.recharge_voltage_min_mv ?:
+ DEFAULT_BATTERY_REGULATION_VOLTAGE - 48;
+}
+
+static void battery_charger_charge_input_switch_init(
+ struct battery_charger_dev *bc_dev,
+ struct charge_input_switch input_switch)
+{
+ if (!bc_dev)
+ return;
+
+ bc_dev->input_switch.input_vmin_high_mv =
+ input_switch.input_vmin_high_mv ?:
+ DEFAULT_INPUT_VMIN_MV;
+ bc_dev->input_switch.input_vmin_low_mv =
+ input_switch.input_vmin_low_mv ?:
+ DEFAULT_INPUT_VMIN_MV;
+ bc_dev->input_switch.input_switch_threshold_mv =
+ input_switch.input_switch_threshold_mv;
+}
+
struct battery_charger_dev *battery_charger_register(struct device *dev,
struct battery_charger_info *bci, void *drv_data)
{
struct battery_charger_dev *bc_dev;
+ struct battery_gauge_dev *bg_dev;
dev_info(dev, "Registering battery charger driver\n");
@@ -489,15 +1116,40 @@
bc_dev->parent_dev = dev;
bc_dev->drv_data = drv_data;
+ mutex_init(&bc_dev->mutex);
+
/* Thermal monitoring */
- if (bci->tz_name) {
+ if (bci->tz_name)
bc_dev->tz_name = kstrdup(bci->tz_name, GFP_KERNEL);
- bc_dev->polling_time_sec = bci->polling_time_sec;
- bc_dev->enable_thermal_monitor = true;
- INIT_DELAYED_WORK(&bc_dev->poll_temp_monitor_wq,
- battery_charger_thermal_monitor_wq);
+
+ list_for_each_entry(bg_dev, &gauge_list, list) {
+ if (bg_dev->cell_id != bc_dev->cell_id)
+ continue;
+ if (bg_dev->ops && bg_dev->ops->get_battery_temp) {
+ bc_dev->batt_info_ops.get_battery_temp =
+ bg_dev->ops->get_battery_temp;
+ break;
+ }
}
+ if (bci->tz_name || bci->enable_thermal_monitor
+ || bci->enable_batt_status_monitor) {
+ bc_dev->polling_time_sec = bci->polling_time_sec;
+ if (!bci->enable_batt_status_monitor) {
+ bc_dev->enable_thermal_monitor = true;
+ INIT_DELAYED_WORK(&bc_dev->poll_temp_monitor_wq,
+ battery_charger_thermal_monitor_wq);
+ } else {
+ bc_dev->enable_batt_status_monitor = true;
+ INIT_DELAYED_WORK(&bc_dev->poll_batt_status_monitor_wq,
+ battery_charger_batt_status_monitor_wq);
+ }
+ }
+
+ battery_charger_thermal_prop_init(bc_dev, bci->thermal_prop);
+ battery_charger_charge_full_threshold_init(bc_dev, bci->full_thr);
+ battery_charger_charge_input_switch_init(bc_dev, bci->input_switch);
+
INIT_DELAYED_WORK(&bc_dev->restart_charging_wq,
battery_charger_restart_charging_wq);
@@ -505,6 +1157,26 @@
"charger-suspend-lock");
list_add(&bc_dev->list, &charger_list);
mutex_unlock(&charger_gauge_list_mutex);
+
+ if (bci->batt_id_channel_name && bci->unknown_batt_id_min > 0) {
+ bc_dev->batt_id_channel_name =
+ kstrdup(bci->batt_id_channel_name, GFP_KERNEL);
+ if (bc_dev->batt_id_channel_name) {
+ bc_dev->unknown_batt_id_min = bci->unknown_batt_id_min;
+ bc_dev->unknown_batt_id_check_count =
+ UNKNOWN_BATTERY_ID_CHECK_COUNT;
+ INIT_DEFERRABLE_WORK(&bc_dev->unknown_batt_id_work,
+ battery_charger_unknown_batt_id_work);
+ schedule_delayed_work(&bc_dev->unknown_batt_id_work, 0);
+ } else
+ dev_err(dev,
+ "Failed to duplicate batt id channel name\n");
+ }
+
+ bc_dev->gauge_psy_name = kstrdup(bci->gauge_psy_name, GFP_KERNEL);
+ if (!bc_dev->gauge_psy_name)
+ dev_err(dev, "Failed to duplicate gauge power_supply name\n");
+
return bc_dev;
}
EXPORT_SYMBOL_GPL(battery_charger_register);
@@ -513,8 +1185,12 @@
{
mutex_lock(&charger_gauge_list_mutex);
list_del(&bc_dev->list);
- if (bc_dev->polling_time_sec)
+ if (bc_dev->enable_thermal_monitor)
cancel_delayed_work(&bc_dev->poll_temp_monitor_wq);
+ if (bc_dev->enable_batt_status_monitor)
+ cancel_delayed_work(&bc_dev->poll_batt_status_monitor_wq);
+ if (bc_dev->batt_id_channel_name && bc_dev->unknown_batt_id_min > 0)
+ cancel_delayed_work(&bc_dev->unknown_batt_id_work);
cancel_delayed_work(&bc_dev->restart_charging_wq);
wake_lock_destroy(&bc_dev->charger_wake_lock);
mutex_unlock(&charger_gauge_list_mutex);
@@ -531,22 +1207,28 @@
if (!bg_dev || !bg_dev->tz_name)
return -EINVAL;
- if (!bg_dev->battery_tz)
- bg_dev->battery_tz =
- thermal_zone_device_find_by_name(bg_dev->tz_name);
+ if (bg_dev->tz_name) {
+ if (!bg_dev->battery_tz)
+ bg_dev->battery_tz =
+ thermal_zone_device_find_by_name(
+ bg_dev->tz_name);
- if (!bg_dev->battery_tz) {
- dev_info(bg_dev->parent_dev,
- "Battery thermal zone %s is not registered yet\n",
- bg_dev->tz_name);
- return -ENODEV;
+ if (!bg_dev->battery_tz) {
+ dev_info(bg_dev->parent_dev,
+ "Battery thermal zone %s is not registered yet\n",
+ bg_dev->tz_name);
+ return -ENODEV;
+ }
+
+ ret = thermal_zone_get_temp(bg_dev->battery_tz, &temperature);
+ if (ret < 0)
+ return ret;
+
+ *temp = temperature / 1000;
+ } else if (bg_dev->ops->get_battery_temp) {
+ temperature = bg_dev->ops->get_battery_temp();
+ *temp = temperature / 10;
}
-
- ret = thermal_zone_get_temp(bg_dev->battery_tz, &temperature);
- if (ret < 0)
- return ret;
-
- *temp = temperature / 1000;
return 0;
}
EXPORT_SYMBOL_GPL(battery_gauge_get_battery_temperature);
@@ -585,6 +1267,7 @@
struct battery_gauge_info *bgi, void *drv_data)
{
struct battery_gauge_dev *bg_dev;
+ struct battery_charger_dev *bc_dev;
int ret;
dev_info(dev, "Registering battery gauge driver\n");
@@ -626,6 +1309,17 @@
bg_dev->tz_name);
}
+ list_for_each_entry(bc_dev, &charger_list, list) {
+ if (bg_dev->cell_id != bc_dev->cell_id)
+ continue;
+ if (bg_dev->ops && bg_dev->ops->get_battery_temp
+ && !bc_dev->batt_info_ops.get_battery_temp) {
+ bc_dev->batt_info_ops.get_battery_temp =
+ bg_dev->ops->get_battery_temp;
+ break;
+ }
+ }
+
list_add(&bg_dev->list, &gauge_list);
mutex_unlock(&charger_gauge_list_mutex);
bg_temp = bg_dev;
diff --git a/drivers/power/bq2419x-charger-htc.c b/drivers/power/bq2419x-charger-htc.c
new file mode 100644
index 0000000..992867c
--- /dev/null
+++ b/drivers/power/bq2419x-charger-htc.c
@@ -0,0 +1,1393 @@
+/*
+ * bq2419x-charger-htc.c -- BQ24190/BQ24192/BQ24192i/BQ24193 Charger driver
+ *
+ * Copyright (c) 2014, HTC CORPORATION. All rights reserved.
+ * Copyright (c) 2013-2014, NVIDIA CORPORATION. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation version 2.
+ *
+ * This program is distributed "as is" WITHOUT ANY WARRANTY of any kind,
+ * whether express or implied; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
+ * 02111-1307, USA
+ */
+#include <linux/delay.h>
+#include <linux/debugfs.h>
+#include <linux/err.h>
+#include <linux/i2c.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/kthread.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/power/bq2419x-charger-htc.h>
+#include <linux/htc_battery_bq2419x.h>
+#include <linux/regmap.h>
+#include <linux/slab.h>
+#include <linux/workqueue.h>
+#include <linux/regulator/driver.h>
+#include <linux/regulator/machine.h>
+#include <linux/regulator/of_regulator.h>
+#include <linux/gpio.h>
+#include <linux/of_gpio.h>
+
+#define MAX_STR_PRINT 50
+
+#define bq_chg_err(bq, fmt, ...) \
+ dev_err(bq->dev, "Charging Fault: " fmt, ##__VA_ARGS__)
+
+#define BQ2419X_INPUT_VINDPM_OFFSET 3880
+#define BQ2419X_CHARGE_ICHG_OFFSET 512
+#define BQ2419X_PRE_CHG_IPRECHG_OFFSET 128
+#define BQ2419X_PRE_CHG_TERM_OFFSET 128
+#define BQ2419X_CHARGE_VOLTAGE_OFFSET 3504
+
+#define BQ2419X_I2C_RETRY_MAX_TIMES (10)
+#define BQ2419X_I2C_RETRY_DELAY_MS (100)
+
+/* input current limit */
+static const unsigned int iinlim[] = {
+ 100, 150, 500, 900, 1200, 1500, 2000, 3000,
+};
+
+static const struct regmap_config bq2419x_regmap_config = {
+ .reg_bits = 8,
+ .val_bits = 8,
+ .max_register = BQ2419X_MAX_REGS,
+};
+
+struct bq2419x_reg_info {
+ u8 mask;
+ u8 val;
+};
+
+struct bq2419x_chip {
+ struct device *dev;
+ struct regmap *regmap;
+ int irq;
+ struct mutex mutex;
+
+ int gpio_otg_iusb;
+ struct regulator_dev *vbus_rdev;
+ struct regulator_desc vbus_reg_desc;
+ struct regulator_init_data vbus_reg_init_data;
+
+ struct bq2419x_reg_info ir_comp_therm;
+ struct bq2419x_vbus_platform_data *vbus_pdata;
+ struct bq2419x_charger_platform_data *charger_pdata;
+ bool emulate_input_disconnected;
+ bool is_charge_enable;
+ bool is_boost_enable;
+ struct bq2419x_irq_notifier *irq_notifier;
+ struct dentry *dentry;
+};
+struct bq2419x_chip *the_chip;
+
+static int current_to_reg(const unsigned int *tbl,
+ size_t size, unsigned int val)
+{
+ size_t i;
+
+ for (i = 0; i < size; i++)
+ if (val < tbl[i])
+ break;
+ return i > 0 ? i - 1 : -EINVAL;
+}
+
+/*
+ * Convert a physical value (mA or mV) into an nbits-wide register field:
+ * code = (val - offset) / div, clamped to [0, 2^nbits - 1]. For example,
+ * 2048 mA with a 512 mA offset and 64 mA step encodes as (2048 - 512) / 64 = 24.
+ */
+static int bq2419x_val_to_reg(int val, int offset, int div, int nbits,
+ bool roundup)
+{
+ int max_val = offset + (BIT(nbits) - 1) * div;
+
+ if (val <= offset)
+ return 0;
+
+ if (val >= max_val)
+ return BIT(nbits) - 1;
+
+ if (roundup)
+ return DIV_ROUND_UP(val - offset, div);
+ else
+ return (val - offset) / div;
+}
+
+static int bq2419x_update_bits(struct bq2419x_chip *bq2419x, unsigned int reg,
+ unsigned int mask, unsigned int bits)
+{
+ int ret = 0;
+ unsigned int retry;
+
+ for (retry = 0; retry < BQ2419X_I2C_RETRY_MAX_TIMES; retry++) {
+ ret = regmap_update_bits(bq2419x->regmap, reg, mask, bits);
+ if (!ret)
+ break;
+ msleep(BQ2419X_I2C_RETRY_DELAY_MS);
+ }
+
+ if (ret)
+ dev_err(bq2419x->dev,
+ "i2c update failed after %u retries\n", retry);
+ return ret;
+}
+
+static int bq2419x_read(struct bq2419x_chip *bq2419x, unsigned int reg,
+ unsigned int *bits)
+{
+ int ret = 0;
+ unsigned int retry;
+
+ for (retry = 0; retry < BQ2419X_I2C_RETRY_MAX_TIMES; retry++) {
+ ret = regmap_read(bq2419x->regmap, reg, bits);
+ if (!ret)
+ break;
+ msleep(BQ2419X_I2C_RETRY_DELAY_MS);
+ }
+
+ if (ret)
+ dev_err(bq2419x->dev,
+ "i2c read failed after %u retries\n", retry);
+ return ret;
+}
+
+#if 0 /* TODO: enable when needed; currently unused */
+static int bq2419x_write(struct bq2419x_chip *bq2419x, unsigned int reg,
+ unsigned int bits)
+{
+ int ret = 0;
+ unsigned int retry;
+
+ for (retry = 0; retry < BQ2419X_I2C_RETRY_MAX_TIMES; retry++) {
+ ret = regmap_write(bq2419x->regmap, reg, bits);
+ if (!ret)
+ break;
+ msleep(BQ2419X_I2C_RETRY_DELAY_MS);
+ }
+
+ if (ret)
+ dev_err(bq2419x->dev,
+ "i2c write failed after %u retries\n", retry);
+ return ret;
+}
+#endif
+
+static int bq2419x_set_charger_enable_config(struct bq2419x_chip *bq2419x,
+ bool enable, bool is_otg)
+{
+ int ret;
+ unsigned int bits;
+
+ dev_info(bq2419x->dev, "charger configure, enable=%d is_otg=%d\n",
+ enable, is_otg);
+
+ if (!enable)
+ bits = BQ2419X_DISABLE_CHARGE;
+ else if (!is_otg)
+ bits = BQ2419X_ENABLE_CHARGE;
+ else
+ bits = BQ2419X_ENABLE_VBUS;
+
+ ret = bq2419x_update_bits(bq2419x, BQ2419X_PWR_ON_REG,
+ BQ2419X_ENABLE_CHARGE_MASK, bits);
+ if (ret < 0)
+ dev_err(bq2419x->dev, "PWR_ON_REG update failed %d\n", ret);
+
+ return ret;
+}
+
+static int bq2419x_set_charger_enable(bool enable, void *data)
+{
+ int ret;
+ struct bq2419x_chip *bq2419x;
+
+ if (!data)
+ return -EINVAL;
+
+ bq2419x = data;
+
+ mutex_lock(&bq2419x->mutex);
+ bq2419x->is_charge_enable = enable;
+ bq2419x->is_boost_enable = false;
+
+ ret = bq2419x_set_charger_enable_config(bq2419x, enable, false);
+ mutex_unlock(&bq2419x->mutex);
+
+ return ret;
+}
+
+static int bq2419x_set_charger_hiz(bool is_hiz, void *data)
+{
+ int ret;
+ unsigned int bits;
+ struct bq2419x_chip *bq2419x;
+
+ if (!data)
+ return -EINVAL;
+
+ bq2419x = data;
+
+ if (bq2419x->emulate_input_disconnected || is_hiz)
+ bits = BQ2419X_EN_HIZ;
+ else
+ bits = 0;
+
+ ret = bq2419x_update_bits(bq2419x, BQ2419X_INPUT_SRC_REG,
+ BQ2419X_EN_HIZ, bits);
+ if (ret < 0)
+ dev_err(bq2419x->dev,
+ "INPUT_SRC_REG update failed %d\n", ret);
+
+ return ret;
+}
+
+static int bq2419x_set_fastcharge_current(unsigned int current_ma, void *data)
+{
+ int ret;
+ unsigned int bits;
+ int ichg;
+ struct bq2419x_chip *bq2419x;
+
+ if (!data)
+ return -EINVAL;
+
+ bq2419x = data;
+
+ ichg = bq2419x_val_to_reg(current_ma,
+ BQ2419X_CHARGE_ICHG_OFFSET, 64, 6, 0);
+ bits = ichg << 2;
+
+ ret = bq2419x_update_bits(bq2419x, BQ2419X_CHRG_CTRL_REG,
+ BQ2419X_CHRG_CTRL_ICHG_MASK, bits);
+ if (ret < 0)
+ dev_err(bq2419x->dev, "CHRG_CTRL_REG write failed %d\n", ret);
+
+ return ret;
+}
+
+static int bq2419x_set_charge_voltage(unsigned int voltage_mv, void *data)
+{
+ int ret;
+ unsigned int bits;
+ int vreg;
+ struct bq2419x_chip *bq2419x;
+
+ if (!data)
+ return -EINVAL;
+
+ bq2419x = data;
+
+ vreg = bq2419x_val_to_reg(voltage_mv,
+ BQ2419X_CHARGE_VOLTAGE_OFFSET, 16, 6, 1);
+ bits = vreg << 2;
+
+ ret = bq2419x_update_bits(bq2419x, BQ2419X_VOLT_CTRL_REG,
+ BQ2419x_VOLTAGE_CTRL_MASK, bits);
+
+ if (ret < 0)
+ dev_err(bq2419x->dev,
+ "VOLT_CTRL_REG update failed %d\n", ret);
+
+ return ret;
+}
+
+static int bq2419x_set_precharge_current(unsigned int current_ma, void *data)
+{
+ int ret;
+ unsigned int bits;
+ int iprechg;
+ struct bq2419x_chip *bq2419x;
+
+ if (!data)
+ return -EINVAL;
+
+ bq2419x = data;
+
+ iprechg = bq2419x_val_to_reg(current_ma,
+ BQ2419X_PRE_CHG_IPRECHG_OFFSET, 128, 4, 0);
+ bits = iprechg << 4;
+
+ ret = bq2419x_update_bits(bq2419x, BQ2419X_CHRG_TERM_REG,
+ BQ2419X_CHRG_TERM_PRECHG_MASK, bits);
+ if (ret < 0)
+ dev_err(bq2419x->dev, "CHRG_TERM_REG write failed %d\n", ret);
+
+ return ret;
+}
+
+static int bq2419x_set_termination_current(unsigned int current_ma, void *data)
+{
+ int ret;
+ unsigned int bits;
+ int iterm;
+ struct bq2419x_chip *bq2419x;
+
+ if (!data)
+ return -EINVAL;
+
+ bq2419x = data;
+
+ if (current_ma == 0) {
+ ret = bq2419x_update_bits(bq2419x, BQ2419X_TIME_CTRL_REG,
+ BQ2419X_EN_TERM, 0);
+ if (ret < 0)
+ dev_err(bq2419x->dev,
+ "TIME_CTRL_REG update failed %d\n", ret);
+ } else {
+ iterm = bq2419x_val_to_reg(current_ma,
+ BQ2419X_PRE_CHG_TERM_OFFSET, 128, 4, 0);
+ bits = iterm;
+ ret = bq2419x_update_bits(bq2419x, BQ2419X_CHRG_TERM_REG,
+ BQ2419X_CHRG_TERM_TERM_MASK, bits);
+ if (ret < 0)
+ dev_err(bq2419x->dev,
+ "CHRG_TERM_REG update failed %d\n", ret);
+ }
+
+ return ret;
+}
+
+static int bq2419x_set_input_current(unsigned int current_ma, void *data)
+{
+ int ret = 0;
+ int val;
+ int floor = 0;
+ struct bq2419x_chip *bq2419x;
+
+ if (!data)
+ return -EINVAL;
+
+ bq2419x = data;
+
+ val = current_to_reg(iinlim, ARRAY_SIZE(iinlim), current_ma);
+ if (val < 0)
+ return -EINVAL;
+
+ floor = current_to_reg(iinlim, ARRAY_SIZE(iinlim), 500);
+ if (floor < val && floor >= 0) {
+ for (; floor <= val; floor++) {
+ ret = bq2419x_update_bits(bq2419x,
+ BQ2419X_INPUT_SRC_REG,
+ BQ2419x_CONFIG_MASK, floor);
+ if (ret < 0)
+ dev_err(bq2419x->dev,
+ "INPUT_SRC_REG update failed: %d\n",
+ ret);
+ udelay(BQ2419x_CHARGING_CURRENT_STEP_DELAY_US);
+ }
+ } else {
+ ret = bq2419x_update_bits(bq2419x, BQ2419X_INPUT_SRC_REG,
+ BQ2419x_CONFIG_MASK, val);
+ if (ret < 0)
+ dev_err(bq2419x->dev,
+ "INPUT_SRC_REG update failed: %d\n", ret);
+ }
+
+ return ret;
+}
+
+static int bq2419x_set_dpm_input_voltage(unsigned int voltage_mv, void *data)
+{
+ int ret;
+ unsigned int bits;
+ int vindpm;
+ struct bq2419x_chip *bq2419x;
+
+ if (!data)
+ return -EINVAL;
+
+ bq2419x = data;
+
+ vindpm = bq2419x_val_to_reg(voltage_mv,
+ BQ2419X_INPUT_VINDPM_OFFSET, 80, 4, 0);
+ bits = vindpm << 3;
+
+ ret = bq2419x_update_bits(bq2419x, BQ2419X_INPUT_SRC_REG,
+ BQ2419X_INPUT_VINDPM_MASK, bits);
+ if (ret < 0)
+ dev_err(bq2419x->dev, "INPUT_SRC_REG write failed %d\n",
+ ret);
+
+ return ret;
+}
+
+static int bq2419x_set_safety_timer_enable(bool enable, void *data)
+{
+ int ret;
+ struct bq2419x_chip *bq2419x;
+
+ if (!data)
+ return -EINVAL;
+
+ bq2419x = data;
+
+ ret = bq2419x_update_bits(bq2419x, BQ2419X_TIME_CTRL_REG,
+ BQ2419X_EN_SFT_TIMER_MASK, 0);
+ if (ret < 0) {
+ dev_err(bq2419x->dev, "TIME_CTRL_REG update failed: %d\n",
+ ret);
+ return ret;
+ }
+
+ if (!enable)
+ return 0;
+
+ /* reset the safety timer by clearing and then setting its enable bit */
+ ret = bq2419x_update_bits(bq2419x, BQ2419X_TIME_CTRL_REG,
+ BQ2419X_EN_SFT_TIMER_MASK, BQ2419X_EN_SFT_TIMER_MASK);
+ if (ret < 0) {
+ dev_err(bq2419x->dev, "TIME_CTRL_REG update failed: %d\n",
+ ret);
+ return ret;
+ }
+
+ mutex_lock(&bq2419x->mutex);
+ if (bq2419x->is_charge_enable && !bq2419x->is_boost_enable) {
+ /* toggle the charge-enable bit 1 -> 0 -> 1 to restart charging */
+ ret = bq2419x_update_bits(bq2419x, BQ2419X_PWR_ON_REG,
+ BQ2419X_ENABLE_CHARGE_MASK, 0);
+ if (ret < 0) {
+ dev_err(bq2419x->dev, "PWR_ON_REG update failed %d\n",
+ ret);
+ goto done;
+ }
+
+ ret = bq2419x_update_bits(bq2419x, BQ2419X_PWR_ON_REG,
+ BQ2419X_ENABLE_CHARGE_MASK,
+ BQ2419X_ENABLE_CHARGE);
+ if (ret < 0) {
+ dev_err(bq2419x->dev, "PWR_ON_REG update failed %d\n",
+ ret);
+ goto done;
+ }
+ }
+done:
+ mutex_unlock(&bq2419x->mutex);
+
+ return ret;
+}
+
+static int bq2419x_get_charger_state(unsigned int *state, void *data)
+{
+ int ret;
+ unsigned int val;
+ unsigned int now_state = 0;
+ struct bq2419x_chip *bq2419x;
+
+ if (!data || !state)
+ return -EINVAL;
+
+ bq2419x = data;
+
+ ret = bq2419x_read(bq2419x, BQ2419X_SYS_STAT_REG, &val);
+ if (ret < 0) {
+ dev_err(bq2419x->dev, "SYS_STAT_REG read failed: %d\n", ret);
+ goto error;
+ }
+
+ if (((val & BQ2419x_VSYS_STAT_MASK) == BQ2419x_VSYS_STAT_BATT_LOW) ||
+ ((val & BQ2419x_THERM_STAT_MASK) ==
+ BQ2419x_IN_THERM_REGULATION))
+ now_state |= HTC_BATTERY_BQ2419X_IN_REGULATION;
+ if ((val & BQ2419x_DPM_STAT_MASK) == BQ2419x_DPM_MODE)
+ now_state |= HTC_BATTERY_BQ2419X_DPM_MODE;
+ if ((val & BQ2419x_PG_STAT_MASK) == BQ2419x_POWER_GOOD)
+ now_state |= HTC_BATTERY_BQ2419X_POWER_GOOD;
+ if ((val & BQ2419x_CHRG_STATE_MASK) != BQ2419x_CHRG_STATE_NOTCHARGING)
+ now_state |= HTC_BATTERY_BQ2419X_CHARGING;
+ if ((val & BQ2419x_VBUS_STAT) != BQ2419x_VBUS_UNKNOWN)
+ now_state |= HTC_BATTERY_BQ2419X_KNOWN_VBUS;
+
+ *state = now_state;
+error:
+ return ret;
+}
+
+static int bq2419x_get_input_current(unsigned int *current_ma, void *data)
+{
+ int ret;
+ unsigned int reg_val;
+ struct bq2419x_chip *bq2419x;
+
+ if (!data)
+ return -EINVAL;
+
+ bq2419x = data;
+
+ if (!current_ma)
+ return -EINVAL;
+
+ ret = bq2419x_read(bq2419x, BQ2419X_INPUT_SRC_REG, &reg_val);
+ if (ret < 0)
+ dev_err(bq2419x->dev, "INPUT_SRC read failed: %d\n", ret);
+ else
+ *current_ma = iinlim[BQ2419x_CONFIG_MASK & reg_val];
+
+ return ret;
+}
+
+static struct htc_battery_bq2419x_ops bq2419x_ops = {
+ .set_charger_enable = bq2419x_set_charger_enable,
+ .set_charger_hiz = bq2419x_set_charger_hiz,
+ .set_fastcharge_current = bq2419x_set_fastcharge_current,
+ .set_charge_voltage = bq2419x_set_charge_voltage,
+ .set_precharge_current = bq2419x_set_precharge_current,
+ .set_termination_current = bq2419x_set_termination_current,
+ .set_input_current = bq2419x_set_input_current,
+ .set_dpm_input_voltage = bq2419x_set_dpm_input_voltage,
+ .set_safety_timer_enable = bq2419x_set_safety_timer_enable,
+ .get_charger_state = bq2419x_get_charger_state,
+ .get_input_current = bq2419x_get_input_current,
+};
+
+static int bq2419x_vbus_enable(struct regulator_dev *rdev)
+{
+ struct bq2419x_chip *bq2419x = rdev_get_drvdata(rdev);
+ int ret;
+
+ dev_info(bq2419x->dev, "VBUS enabled, charging disabled\n");
+
+ mutex_lock(&bq2419x->mutex);
+ bq2419x->is_boost_enable = true;
+ ret = bq2419x_set_charger_enable_config(bq2419x, true, true);
+ if (ret < 0)
+ dev_err(bq2419x->dev, "VBUS boost enable failed %d\n", ret);
+ mutex_unlock(&bq2419x->mutex);
+
+ return ret;
+}
+
+static int bq2419x_vbus_disable(struct regulator_dev *rdev)
+{
+ struct bq2419x_chip *bq2419x = rdev_get_drvdata(rdev);
+ int ret;
+
+ dev_info(bq2419x->dev, "VBUS disabled, charging enabled\n");
+
+ mutex_lock(&bq2419x->mutex);
+ bq2419x->is_boost_enable = false;
+ ret = bq2419x_set_charger_enable_config(bq2419x,
+ bq2419x->is_charge_enable, false);
+ if (ret < 0)
+ dev_err(bq2419x->dev, "Charger enable failed %d\n", ret);
+ mutex_unlock(&bq2419x->mutex);
+
+ return ret;
+}
+
+static int bq2419x_vbus_is_enabled(struct regulator_dev *rdev)
+{
+ struct bq2419x_chip *bq2419x = rdev_get_drvdata(rdev);
+ int ret = 0;
+ unsigned int val;
+
+ ret = bq2419x_read(bq2419x, BQ2419X_PWR_ON_REG, &val);
+ if (ret < 0) {
+ dev_err(bq2419x->dev, "PWR_ON_REG read failed %d\n", ret);
+ ret = 0;
+ } else
+ ret = ((val & BQ2419X_ENABLE_CHARGE_MASK) ==
+ BQ2419X_ENABLE_VBUS);
+
+ return ret;
+}
+
+static struct regulator_ops bq2419x_vbus_ops = {
+ .enable = bq2419x_vbus_enable,
+ .disable = bq2419x_vbus_disable,
+ .is_enabled = bq2419x_vbus_is_enabled,
+};
+
+static int bq2419x_charger_init(struct bq2419x_chip *bq2419x)
+{
+ int ret;
+
+ ret = bq2419x_update_bits(bq2419x, BQ2419X_THERM_REG,
+ bq2419x->ir_comp_therm.mask, bq2419x->ir_comp_therm.val);
+ if (ret < 0)
+ dev_err(bq2419x->dev, "THERM_REG write failed: %d\n", ret);
+
+ ret = bq2419x_update_bits(bq2419x, BQ2419X_TIME_CTRL_REG,
+ BQ2419X_TIME_JEITA_ISET, 0);
+ if (ret < 0)
+ dev_err(bq2419x->dev, "TIME_CTRL update failed: %d\n", ret);
+
+ return ret;
+}
+
+static int bq2419x_init_vbus_regulator(struct bq2419x_chip *bq2419x,
+ struct bq2419x_platform_data *pdata)
+{
+ int ret = 0;
+ struct regulator_config rconfig = { };
+
+ if (!pdata->vbus_pdata) {
+ dev_info(bq2419x->dev, "No vbus platform data\n");
+ return 0;
+ }
+
+ bq2419x->gpio_otg_iusb = pdata->vbus_pdata->gpio_otg_iusb;
+ bq2419x->vbus_reg_desc.name = "data-vbus";
+ bq2419x->vbus_reg_desc.ops = &bq2419x_vbus_ops;
+ bq2419x->vbus_reg_desc.type = REGULATOR_VOLTAGE;
+ bq2419x->vbus_reg_desc.owner = THIS_MODULE;
+ bq2419x->vbus_reg_desc.enable_time = 8000;
+
+ bq2419x->vbus_reg_init_data.supply_regulator = NULL;
+ bq2419x->vbus_reg_init_data.regulator_init = NULL;
+ bq2419x->vbus_reg_init_data.num_consumer_supplies =
+ pdata->vbus_pdata->num_consumer_supplies;
+ bq2419x->vbus_reg_init_data.consumer_supplies =
+ pdata->vbus_pdata->consumer_supplies;
+ bq2419x->vbus_reg_init_data.driver_data = bq2419x;
+
+ bq2419x->vbus_reg_init_data.constraints.name = "data-vbus";
+ bq2419x->vbus_reg_init_data.constraints.min_uV = 0;
+ bq2419x->vbus_reg_init_data.constraints.max_uV = 5000000;
+ bq2419x->vbus_reg_init_data.constraints.valid_modes_mask =
+ REGULATOR_MODE_NORMAL |
+ REGULATOR_MODE_STANDBY;
+ bq2419x->vbus_reg_init_data.constraints.valid_ops_mask =
+ REGULATOR_CHANGE_MODE |
+ REGULATOR_CHANGE_STATUS |
+ REGULATOR_CHANGE_VOLTAGE;
+
+ if (gpio_is_valid(bq2419x->gpio_otg_iusb)) {
+ ret = gpio_request_one(bq2419x->gpio_otg_iusb,
+ GPIOF_OUT_INIT_HIGH, dev_name(bq2419x->dev));
+ if (ret < 0) {
+ dev_err(bq2419x->dev, "gpio request failed %d\n", ret);
+ return ret;
+ }
+ }
+
+ /* Register the regulators */
+ rconfig.dev = bq2419x->dev;
+ rconfig.of_node = NULL;
+ rconfig.init_data = &bq2419x->vbus_reg_init_data;
+ rconfig.driver_data = bq2419x;
+ bq2419x->vbus_rdev = devm_regulator_register(bq2419x->dev,
+ &bq2419x->vbus_reg_desc, &rconfig);
+ if (IS_ERR(bq2419x->vbus_rdev)) {
+ ret = PTR_ERR(bq2419x->vbus_rdev);
+ dev_err(bq2419x->dev,
+ "VBUS regulator register failed %d\n", ret);
+ goto scrub;
+ }
+
+ return ret;
+
+scrub:
+ if (gpio_is_valid(bq2419x->gpio_otg_iusb))
+ gpio_free(bq2419x->gpio_otg_iusb);
+ return ret;
+}
+
+static int bq2419x_fault_clear_sts(struct bq2419x_chip *bq2419x,
+ unsigned int *reg09_val)
+{
+ int ret;
+ unsigned int reg09_1, reg09_2;
+
+ ret = bq2419x_read(bq2419x, BQ2419X_FAULT_REG, &reg09_1);
+ if (ret < 0) {
+ dev_err(bq2419x->dev, "FAULT_REG read failed: %d\n", ret);
+ return ret;
+ }
+
+ ret = bq2419x_read(bq2419x, BQ2419X_FAULT_REG, &reg09_2);
+ if (ret < 0)
+ dev_err(bq2419x->dev, "FAULT_REG read failed: %d\n", ret);
+
+ if (reg09_val) {
+ unsigned int reg09 = 0;
+
+ if ((reg09_1 | reg09_2) & BQ2419x_FAULT_WATCHDOG_FAULT)
+ reg09 |= BQ2419x_FAULT_WATCHDOG_FAULT;
+ if ((reg09_1 | reg09_2) & BQ2419x_FAULT_BOOST_FAULT)
+ reg09 |= BQ2419x_FAULT_BOOST_FAULT;
+ if ((reg09_1 | reg09_2) & BQ2419x_FAULT_BAT_FAULT)
+ reg09 |= BQ2419x_FAULT_BAT_FAULT;
+ if (((reg09_1 & BQ2419x_FAULT_CHRG_FAULT_MASK) ==
+ BQ2419x_FAULT_CHRG_SAFTY) ||
+ ((reg09_2 & BQ2419x_FAULT_CHRG_FAULT_MASK) ==
+ BQ2419x_FAULT_CHRG_SAFTY))
+ reg09 |= BQ2419x_FAULT_CHRG_SAFTY;
+ else if (((reg09_1 & BQ2419x_FAULT_CHRG_FAULT_MASK) ==
+ BQ2419x_FAULT_CHRG_INPUT) ||
+ ((reg09_2 & BQ2419x_FAULT_CHRG_FAULT_MASK) ==
+ BQ2419x_FAULT_CHRG_INPUT))
+ reg09 |= BQ2419x_FAULT_CHRG_INPUT;
+ else if (((reg09_1 & BQ2419x_FAULT_CHRG_FAULT_MASK) ==
+ BQ2419x_FAULT_CHRG_THERMAL) ||
+ ((reg09_2 & BQ2419x_FAULT_CHRG_FAULT_MASK) ==
+ BQ2419x_FAULT_CHRG_THERMAL))
+ reg09 |= BQ2419x_FAULT_CHRG_THERMAL;
+
+ reg09 |= reg09_2 & BQ2419x_FAULT_NTC_FAULT;
+ *reg09_val = reg09;
+ }
+ return ret;
+}
+
+static int bq2419x_watchdog_init_disable(struct bq2419x_chip *bq2419x)
+{
+ int ret;
+
+ /* TODO: support watchdog enable if needed */
+ ret = bq2419x_update_bits(bq2419x, BQ2419X_TIME_CTRL_REG,
+ BQ2419X_WD_MASK, 0);
+ if (ret < 0)
+ dev_err(bq2419x->dev, "TIME_CTRL_REG update failed: %d\n",
+ ret);
+
+ return ret;
+}
+
+static irqreturn_t bq2419x_irq(int irq, void *data)
+{
+ struct bq2419x_chip *bq2419x = data;
+ int ret;
+ unsigned int val;
+
+ ret = bq2419x_fault_clear_sts(bq2419x, &val);
+ if (ret < 0) {
+ dev_err(bq2419x->dev, "fault clear status failed %d\n", ret);
+ val = 0;
+ }
+
+ dev_info(bq2419x->dev, "%s() Irq %d status 0x%02x\n",
+ __func__, irq, val);
+
+ if (val & BQ2419x_FAULT_BOOST_FAULT)
+ bq_chg_err(bq2419x, "VBUS Overloaded\n");
+
+ mutex_lock(&bq2419x->mutex);
+ if (!bq2419x->is_charge_enable) {
+ bq2419x_set_charger_enable_config(bq2419x, false, false);
+ mutex_unlock(&bq2419x->mutex);
+ return IRQ_HANDLED;
+ }
+ mutex_unlock(&bq2419x->mutex);
+
+ if (val & BQ2419x_FAULT_WATCHDOG_FAULT)
+ bq_chg_err(bq2419x, "WatchDog Expired\n");
+
+ switch (val & BQ2419x_FAULT_CHRG_FAULT_MASK) {
+ case BQ2419x_FAULT_CHRG_INPUT:
+ bq_chg_err(bq2419x,
+ "input fault (VBUS OVP OR VBAT<VBUS<3.8V)\n");
+ break;
+ case BQ2419x_FAULT_CHRG_THERMAL:
+ bq_chg_err(bq2419x, "Thermal shutdown\n");
+ break;
+ case BQ2419x_FAULT_CHRG_SAFTY:
+ bq_chg_err(bq2419x, "Safety timer expiration\n");
+ htc_battery_bq2419x_notify(
+ HTC_BATTERY_BQ2419X_SAFETY_TIMER_TIMEOUT);
+ break;
+ default:
+ break;
+ }
+
+ if (val & BQ2419x_FAULT_NTC_FAULT)
+ bq_chg_err(bq2419x, "NTC fault %d\n",
+ val & BQ2419x_FAULT_NTC_FAULT);
+
+ ret = bq2419x_read(bq2419x, BQ2419X_SYS_STAT_REG, &val);
+ if (ret < 0) {
+ dev_err(bq2419x->dev, "SYS_STAT_REG read failed %d\n", ret);
+ val = 0;
+ }
+
+ if ((val & BQ2419x_CHRG_STATE_MASK) == BQ2419x_CHRG_STATE_CHARGE_DONE)
+ dev_info(bq2419x->dev,
+ "charging done interrupt found\n");
+
+ if ((val & BQ2419x_VSYS_STAT_MASK) == BQ2419x_VSYS_STAT_BATT_LOW)
+ dev_info(bq2419x->dev,
+ "in VSYSMIN regulation, battery is too low\n");
+
+ return IRQ_HANDLED;
+}
+
+static int bq2419x_show_chip_version(struct bq2419x_chip *bq2419x)
+{
+ int ret;
+ unsigned int val;
+
+ ret = bq2419x_read(bq2419x, BQ2419X_REVISION_REG, &val);
+ if (ret < 0) {
+ dev_err(bq2419x->dev, "REVISION_REG read failed: %d\n", ret);
+ return ret;
+ }
+
+ if ((val & BQ24190_IC_VER) == BQ24190_IC_VER)
+ dev_info(bq2419x->dev, "chip type BQ24190 detected\n");
+ else if ((val & BQ24192_IC_VER) == BQ24192_IC_VER)
+ dev_info(bq2419x->dev, "chip type BQ24192/3 detected\n");
+ else if ((val & BQ24192i_IC_VER) == BQ24192i_IC_VER)
+ dev_info(bq2419x->dev, "chip type BQ24192i detected\n");
+ return 0;
+}
+
+static ssize_t bq2419x_show_input_charging_current(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct bq2419x_chip *bq2419x = i2c_get_clientdata(client);
+ unsigned int reg_val;
+ int ret;
+
+ ret = bq2419x_read(bq2419x, BQ2419X_INPUT_SRC_REG, &reg_val);
+ if (ret < 0) {
+ dev_err(bq2419x->dev, "INPUT_SRC read failed: %d\n", ret);
+ return ret;
+ }
+ ret = iinlim[BQ2419x_CONFIG_MASK & reg_val];
+ return snprintf(buf, MAX_STR_PRINT, "%d mA\n", ret);
+}
+
+static ssize_t bq2419x_set_input_charging_current(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ int ret;
+ int in_current_limit;
+ char *p = (char *)buf;
+ struct i2c_client *client = to_i2c_client(dev);
+ struct bq2419x_chip *bq2419x = i2c_get_clientdata(client);
+
+ in_current_limit = memparse(p, &p);
+ ret = bq2419x_set_input_current(in_current_limit, bq2419x);
+ if (ret < 0) {
+ dev_err(dev, "Current %d mA configuration failed: %d\n",
+ in_current_limit, ret);
+ return ret;
+ }
+ return count;
+}
+
+static ssize_t bq2419x_show_charging_state(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct bq2419x_chip *bq2419x = i2c_get_clientdata(client);
+ unsigned int reg_val;
+ int ret;
+
+ ret = bq2419x_read(bq2419x, BQ2419X_PWR_ON_REG, &reg_val);
+ if (ret < 0) {
+ dev_err(dev, "BQ2419X_PWR_ON register read failed: %d\n", ret);
+ return ret;
+ }
+
+ if ((reg_val & BQ2419X_ENABLE_CHARGE_MASK) == BQ2419X_ENABLE_CHARGE)
+ return snprintf(buf, MAX_STR_PRINT, "enabled\n");
+
+ return snprintf(buf, MAX_STR_PRINT, "disabled\n");
+}
+
+static ssize_t bq2419x_set_charging_state(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct bq2419x_chip *bq2419x = i2c_get_clientdata(client);
+ bool enabled;
+ int ret;
+
+ if ((*buf == 'E') || (*buf == 'e'))
+ enabled = true;
+ else if ((*buf == 'D') || (*buf == 'd'))
+ enabled = false;
+ else
+ return -EINVAL;
+
+ ret = bq2419x_set_charger_enable(enabled, bq2419x);
+ if (ret < 0) {
+ dev_err(bq2419x->dev, "user charging enable failed, %d\n", ret);
+ return ret;
+ }
+ return count;
+}
+
+static ssize_t bq2419x_show_input_cable_state(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct bq2419x_chip *bq2419x = i2c_get_clientdata(client);
+ unsigned int reg_val;
+ int ret;
+
+ ret = bq2419x_read(bq2419x, BQ2419X_INPUT_SRC_REG, &reg_val);
+ if (ret < 0) {
+ dev_err(dev, "BQ2419X_PWR_ON register read failed: %d\n", ret);
+ return ret;
+ }
+
+ if ((reg_val & BQ2419X_EN_HIZ) == BQ2419X_EN_HIZ)
+ return snprintf(buf, MAX_STR_PRINT, "Disconnected\n");
+ else
+ return snprintf(buf, MAX_STR_PRINT, "Connected\n");
+}
+
+static ssize_t bq2419x_set_input_cable_state(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct bq2419x_chip *bq2419x = i2c_get_clientdata(client);
+ bool connect;
+ int ret;
+
+ if ((*buf == 'C') || (*buf == 'c'))
+ connect = true;
+ else if ((*buf == 'D') || (*buf == 'd'))
+ connect = false;
+ else
+ return -EINVAL;
+
+ /* 'C'onnect clears the disconnect emulation; 'D'isconnect sets it */
+ bq2419x->emulate_input_disconnected = !connect;
+
+ ret = bq2419x_set_charger_hiz(!connect, bq2419x);
+ if (ret < 0) {
+ dev_err(bq2419x->dev, "register update failed, %d\n", ret);
+ return ret;
+ }
+ if (connect)
+ dev_info(bq2419x->dev,
+ "Emulation of charger cable disconnect disabled\n");
+ else
+ dev_info(bq2419x->dev,
+ "Emulated as charger cable Disconnected\n");
+ return count;
+}
+
+static ssize_t bq2419x_show_output_charging_current(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct bq2419x_chip *bq2419x = i2c_get_clientdata(client);
+ int ret;
+ unsigned int data;
+
+ ret = bq2419x_read(bq2419x, BQ2419X_CHRG_CTRL_REG, &data);
+ if (ret < 0) {
+ dev_err(bq2419x->dev, "CHRG_CTRL read failed %d\n", ret);
+ return ret;
+ }
+ data >>= 2;
+ data = data * 64 + BQ2419X_CHARGE_ICHG_OFFSET;
+ return snprintf(buf, MAX_STR_PRINT, "%u mA\n", data);
+}
+
+static ssize_t bq2419x_set_output_charging_current(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct bq2419x_chip *bq2419x = i2c_get_clientdata(client);
+ unsigned int curr_val;
+ int ret;
+ int ichg;
+
+ if (kstrtouint(buf, 0, &curr_val)) {
+ dev_err(dev, "%s: invalid charging current\n", __func__);
+ return -EINVAL;
+ }
+
+ ichg = bq2419x_val_to_reg(curr_val, BQ2419X_CHARGE_ICHG_OFFSET,
+ 64, 6, 0);
+ ret = bq2419x_update_bits(bq2419x, BQ2419X_CHRG_CTRL_REG,
+ BQ2419X_CHRG_CTRL_ICHG_MASK, ichg << 2);
+ if (ret < 0)
+ return ret;
+
+ return count;
+}
+
+static ssize_t bq2419x_show_output_charging_current_values(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ int i, ret = 0;
+
+ for (i = 0; i <= 63; i++)
+ ret += snprintf(buf + strlen(buf), MAX_STR_PRINT,
+ "%d mA\n", i * 64 + BQ2419X_CHARGE_ICHG_OFFSET);
+
+ return ret;
+}
+
+static DEVICE_ATTR(output_charging_current, (S_IRUGO | (S_IWUSR | S_IWGRP)),
+ bq2419x_show_output_charging_current,
+ bq2419x_set_output_charging_current);
+
+static DEVICE_ATTR(output_current_allowed_values, S_IRUGO,
+ bq2419x_show_output_charging_current_values, NULL);
+
+static DEVICE_ATTR(input_charging_current_ma, (S_IRUGO | (S_IWUSR | S_IWGRP)),
+ bq2419x_show_input_charging_current,
+ bq2419x_set_input_charging_current);
+
+static DEVICE_ATTR(charging_state, (S_IRUGO | (S_IWUSR | S_IWGRP)),
+ bq2419x_show_charging_state, bq2419x_set_charging_state);
+
+static DEVICE_ATTR(input_cable_state, (S_IRUGO | (S_IWUSR | S_IWGRP)),
+ bq2419x_show_input_cable_state, bq2419x_set_input_cable_state);
+
+static struct attribute *bq2419x_attributes[] = {
+ &dev_attr_output_charging_current.attr,
+ &dev_attr_output_current_allowed_values.attr,
+ &dev_attr_input_charging_current_ma.attr,
+ &dev_attr_charging_state.attr,
+ &dev_attr_input_cable_state.attr,
+ NULL
+};
+
+static const struct attribute_group bq2419x_attr_group = {
+ .attrs = bq2419x_attributes,
+};
+
+static int bq2419x_debugfs_show(struct seq_file *s, void *unused)
+{
+ struct bq2419x_chip *bq2419x = s->private;
+ int ret;
+ u8 reg;
+ unsigned int data;
+
+ for (reg = BQ2419X_INPUT_SRC_REG; reg <= BQ2419X_REVISION_REG; reg++) {
+ ret = bq2419x_read(bq2419x, reg, &data);
+ if (ret < 0)
+ dev_err(bq2419x->dev, "reg %u read failed %d\n", reg, ret);
+ else
+ seq_printf(s, "0x%02x:\t0x%02x\n", reg, data);
+ }
+
+ return 0;
+}
+
+static int bq2419x_debugfs_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, bq2419x_debugfs_show, inode->i_private);
+}
+
+static const struct file_operations bq2419x_debugfs_fops = {
+ .open = bq2419x_debugfs_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
+static struct bq2419x_platform_data *bq2419x_dt_parse(struct i2c_client *client)
+{
+ struct device_node *np = client->dev.of_node;
+ struct bq2419x_platform_data *pdata;
+ struct device_node *batt_reg_node;
+ struct device_node *vbus_reg_node;
+ int ret;
+
+ pdata = devm_kzalloc(&client->dev, sizeof(*pdata), GFP_KERNEL);
+ if (!pdata)
+ return ERR_PTR(-ENOMEM);
+
+ batt_reg_node = of_find_node_by_name(np, "charger");
+ if (batt_reg_node) {
+ const char *status_str;
+ u32 pval;
+ struct bq2419x_charger_platform_data *charger_pdata;
+
+ status_str = of_get_property(batt_reg_node, "status", NULL);
+ if (status_str && strcmp(status_str, "okay")) {
+ dev_info(&client->dev,
+ "charger node status is disabled\n");
+ goto vbus_node;
+ }
+
+ pdata->bcharger_pdata = devm_kzalloc(&client->dev,
+ sizeof(*(pdata->bcharger_pdata)), GFP_KERNEL);
+ if (!pdata->bcharger_pdata)
+ return ERR_PTR(-ENOMEM);
+
+ charger_pdata = pdata->bcharger_pdata;
+
+ ret = of_property_read_u32(batt_reg_node,
+ "ti,ir-comp-resister-ohm", &pval);
+ if (!ret)
+ charger_pdata->ir_compensation_resister_ohm = pval;
+
+ ret = of_property_read_u32(batt_reg_node,
+ "ti,ir-comp-voltage-millivolt", &pval);
+ if (!ret)
+ charger_pdata->ir_compensation_voltage_mv = pval;
+
+ ret = of_property_read_u32(batt_reg_node,
+ "ti,thermal-regulation-threshold-degc",
+ &pval);
+ if (!ret)
+ charger_pdata->thermal_regulation_threshold_degc =
+ pval;
+ }
+
+vbus_node:
+ vbus_reg_node = of_find_node_by_name(np, "vbus");
+ if (vbus_reg_node) {
+ struct regulator_init_data *vbus_init_data;
+
+ pdata->vbus_pdata = devm_kzalloc(&client->dev,
+ sizeof(*(pdata->vbus_pdata)), GFP_KERNEL);
+ if (!pdata->vbus_pdata)
+ return ERR_PTR(-ENOMEM);
+
+ vbus_init_data = of_get_regulator_init_data(
+ &client->dev, vbus_reg_node);
+ if (!vbus_init_data)
+ return ERR_PTR(-EINVAL);
+
+ pdata->vbus_pdata->consumer_supplies =
+ vbus_init_data->consumer_supplies;
+ pdata->vbus_pdata->num_consumer_supplies =
+ vbus_init_data->num_consumer_supplies;
+ pdata->vbus_pdata->gpio_otg_iusb =
+ of_get_named_gpio(vbus_reg_node,
+ "ti,otg-iusb-gpio", 0);
+ }
+
+ return pdata;
+}
+
+static int bq2419x_process_charger_plat_data(struct bq2419x_chip *bq2419x,
+ struct bq2419x_charger_platform_data *chg_pdata)
+{
+ int ir_compensation_resistor;
+ int ir_compensation_voltage;
+ int thermal_regulation_threshold;
+ int bat_comp, vclamp, treg;
+
+ if (chg_pdata) {
+ ir_compensation_resistor =
+ chg_pdata->ir_compensation_resister_ohm ?: 70;
+ ir_compensation_voltage =
+ chg_pdata->ir_compensation_voltage_mv ?: 112;
+ thermal_regulation_threshold =
+ chg_pdata->thermal_regulation_threshold_degc ?: 100;
+ } else {
+ ir_compensation_resistor = 70;
+ ir_compensation_voltage = 112;
+ thermal_regulation_threshold = 100;
+ }
+
+ bat_comp = ir_compensation_resistor / 10;
+ bq2419x->ir_comp_therm.mask = BQ2419X_THERM_BAT_COMP_MASK;
+ bq2419x->ir_comp_therm.val = bat_comp << 5;
+ vclamp = ir_compensation_voltage / 16;
+ bq2419x->ir_comp_therm.mask |= BQ2419X_THERM_VCLAMP_MASK;
+ bq2419x->ir_comp_therm.val |= vclamp << 2;
+ bq2419x->ir_comp_therm.mask |= BQ2419X_THERM_TREG_MASK;
+ if (thermal_regulation_threshold <= 60)
+ treg = 0;
+ else if (thermal_regulation_threshold <= 80)
+ treg = 1;
+ else if (thermal_regulation_threshold <= 100)
+ treg = 2;
+ else
+ treg = 3;
+ bq2419x->ir_comp_therm.val |= treg;
+
+ return 0;
+}
+
+static int bq2419x_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ struct bq2419x_chip *bq2419x;
+ struct bq2419x_platform_data *pdata = NULL;
+ int ret = 0;
+
+ if (client->dev.platform_data)
+ pdata = client->dev.platform_data;
+
+ if (!pdata && client->dev.of_node) {
+ pdata = bq2419x_dt_parse(client);
+ if (IS_ERR(pdata)) {
+ ret = PTR_ERR(pdata);
+ dev_err(&client->dev, "Parsing of node failed, %d\n",
+ ret);
+ return ret;
+ }
+ }
+
+ if (!pdata) {
+ dev_err(&client->dev, "No platform data\n");
+ return -EINVAL;
+ }
+
+ bq2419x = devm_kzalloc(&client->dev, sizeof(*bq2419x), GFP_KERNEL);
+ if (!bq2419x) {
+ dev_err(&client->dev, "Memory allocation failed\n");
+ return -ENOMEM;
+ }
+
+ bq2419x->regmap = devm_regmap_init_i2c(client, &bq2419x_regmap_config);
+ if (IS_ERR(bq2419x->regmap)) {
+ ret = PTR_ERR(bq2419x->regmap);
+ dev_err(&client->dev, "regmap init failed with err %d\n", ret);
+ return ret;
+ }
+
+ bq2419x->charger_pdata = pdata->bcharger_pdata;
+ bq2419x->vbus_pdata = pdata->vbus_pdata;
+
+ bq2419x->dev = &client->dev;
+ i2c_set_clientdata(client, bq2419x);
+ bq2419x->irq = client->irq;
+
+ ret = bq2419x_show_chip_version(bq2419x);
+ if (ret < 0) {
+ dev_err(&client->dev, "version read failed %d\n", ret);
+ return ret;
+ }
+
+ ret = sysfs_create_group(&client->dev.kobj, &bq2419x_attr_group);
+ if (ret < 0) {
+ dev_err(&client->dev, "sysfs create failed %d\n", ret);
+ return ret;
+ }
+
+ mutex_init(&bq2419x->mutex);
+
+ bq2419x_process_charger_plat_data(bq2419x, pdata->bcharger_pdata);
+
+ ret = bq2419x_charger_init(bq2419x);
+ if (ret < 0) {
+ dev_err(bq2419x->dev, "Charger init failed: %d\n", ret);
+ goto scrub_mutex;
+ }
+
+ ret = bq2419x_watchdog_init_disable(bq2419x);
+ if (ret < 0) {
+ dev_err(bq2419x->dev, "WDT init failed %d\n", ret);
+ goto scrub_mutex;
+ }
+
+ ret = bq2419x_fault_clear_sts(bq2419x, NULL);
+ if (ret < 0) {
+ dev_err(bq2419x->dev, "fault clear status failed %d\n", ret);
+ goto scrub_mutex;
+ }
+
+ ret = devm_request_threaded_irq(bq2419x->dev, bq2419x->irq, NULL,
+ bq2419x_irq, IRQF_ONESHOT | IRQF_TRIGGER_FALLING,
+ dev_name(bq2419x->dev), bq2419x);
+ if (ret < 0) {
+ dev_warn(bq2419x->dev, "request IRQ %d fail, err = %d\n",
+ bq2419x->irq, ret);
+ dev_info(bq2419x->dev,
+ "Supporting bq driver without interrupt\n");
+ }
+
+ the_chip = bq2419x;
+
+ ret = htc_battery_bq2419x_charger_register(&bq2419x_ops, bq2419x);
+ if (ret < 0) {
+ dev_err(&client->dev,
+ "htc_battery_bq2419x register failed %d\n", ret);
+ goto scrub_mutex;
+ }
+
+ ret = bq2419x_init_vbus_regulator(bq2419x, pdata);
+ if (ret < 0) {
+ dev_err(&client->dev, "VBUS regulator init failed %d\n", ret);
+ goto scrub_policy;
+ }
+
+ bq2419x->dentry = debugfs_create_file("bq2419x-regs", S_IRUSR, NULL,
+ bq2419x, &bq2419x_debugfs_fops);
+ return 0;
+
+scrub_policy:
+ htc_battery_bq2419x_charger_unregister(bq2419x);
+
+scrub_mutex:
+ mutex_destroy(&bq2419x->mutex);
+ return ret;
+}
+
+static int bq2419x_remove(struct i2c_client *client)
+{
+ struct bq2419x_chip *bq2419x = i2c_get_clientdata(client);
+
+ htc_battery_bq2419x_charger_unregister(bq2419x);
+ debugfs_remove(bq2419x->dentry);
+ mutex_destroy(&bq2419x->mutex);
+ return 0;
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int bq2419x_resume(struct device *dev)
+{
+ int ret = 0;
+ struct bq2419x_chip *bq2419x = dev_get_drvdata(dev);
+ unsigned int val;
+
+ ret = bq2419x_fault_clear_sts(bq2419x, &val);
+ if (ret < 0) {
+ dev_err(bq2419x->dev, "fault clear status failed %d\n", ret);
+ return ret;
+ }
+
+ if (val & BQ2419x_FAULT_WATCHDOG_FAULT)
+ bq_chg_err(bq2419x, "Watchdog Timer Expired\n");
+
+ if (val & BQ2419x_FAULT_CHRG_SAFTY) {
+ bq_chg_err(bq2419x,
+ "Safety timer Expiration\n");
+ htc_battery_bq2419x_notify(
+ HTC_BATTERY_BQ2419X_SAFETY_TIMER_TIMEOUT);
+ }
+
+ return 0;
+}
+
+static const struct dev_pm_ops bq2419x_pm_ops = {
+ SET_SYSTEM_SLEEP_PM_OPS(NULL, bq2419x_resume)
+};
+#endif
+
+static const struct i2c_device_id bq2419x_id[] = {
+ {.name = "bq2419x",},
+ {},
+};
+MODULE_DEVICE_TABLE(i2c, bq2419x_id);
+
+static struct i2c_driver bq2419x_i2c_driver = {
+ .driver = {
+ .name = "bq2419x",
+ .owner = THIS_MODULE,
+#ifdef CONFIG_PM_SLEEP
+ .pm = &bq2419x_pm_ops,
+#endif
+ },
+ .probe = bq2419x_probe,
+ .remove = bq2419x_remove,
+ .id_table = bq2419x_id,
+};
+
+static int __init bq2419x_module_init(void)
+{
+ return i2c_add_driver(&bq2419x_i2c_driver);
+}
+subsys_initcall_sync(bq2419x_module_init);
+
+static void __exit bq2419x_cleanup(void)
+{
+ i2c_del_driver(&bq2419x_i2c_driver);
+}
+module_exit(bq2419x_cleanup);
+
+MODULE_DESCRIPTION("BQ24190/BQ24192/BQ24192i/BQ24193 battery charger driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/power/bq2419x-charger.c b/drivers/power/bq2419x-charger.c
index df555cd..8b81016 100644
--- a/drivers/power/bq2419x-charger.c
+++ b/drivers/power/bq2419x-charger.c
@@ -51,12 +51,26 @@
#define bq_chg_err(bq, fmt, ...) \
dev_err(bq->dev, "Charging Fault: " fmt, ##__VA_ARGS__)
+enum charging_states {
+ STATE_INIT = 0,
+ ENABLED_HALF_IBAT,
+ ENABLED_FULL_IBAT,
+ DISABLED,
+};
+
+#define CHG_DISABLE_REASON_THERMAL BIT(0)
+#define CHG_DISABLE_REASON_USER BIT(1)
+#define CHG_DISABLE_REASON_UNKNOWN_BATTERY BIT(2)
+#define CHG_DISABLE_REASON_CHG_FULL_STOP BIT(3)
+
#define BQ2419X_INPUT_VINDPM_OFFSET 3880
#define BQ2419X_CHARGE_ICHG_OFFSET 512
#define BQ2419X_PRE_CHG_IPRECHG_OFFSET 128
#define BQ2419X_PRE_CHG_TERM_OFFSET 128
#define BQ2419X_CHARGE_VOLTAGE_OFFSET 3504
+#define BQ2419X_SAFETY_TIMER_ENABLE_CURRENT_MIN (500)
+
/* input current limit */
static const unsigned int iinlim[] = {
100, 150, 500, 900, 1200, 1500, 2000, 3000,
@@ -80,6 +94,7 @@
int gpio_otg_iusb;
int wdt_refresh_timeout;
int wdt_time_sec;
+ int auto_recharge_time_power_off;
bool emulate_input_disconnected;
struct mutex mutex;
@@ -104,6 +119,10 @@
int last_charging_current;
bool disable_suspend_during_charging;
int last_temp;
+ u32 auto_recharge_time_supend;
+ int charging_state;
+ struct bq2419x_reg_info last_chg_voltage;
+ struct bq2419x_reg_info last_input_src;
struct bq2419x_reg_info input_src;
struct bq2419x_reg_info chg_current_control;
struct bq2419x_reg_info prechg_term_control;
@@ -111,6 +130,16 @@
struct bq2419x_reg_info chg_voltage_control;
struct bq2419x_vbus_platform_data *vbus_pdata;
struct bq2419x_charger_platform_data *charger_pdata;
+ bool otp_control_no_thermister;
+ struct bq2419x_reg_info otp_output_current;
+ int charge_suspend_polling_time;
+ int charge_polling_time;
+ unsigned int charging_disabled_reason;
+ bool enable_batt_status_monitor;
+ bool chg_full_done;
+ bool chg_full_stop;
+ bool safety_timeout_happen;
+ bool safety_timer_reset_disable;
struct dentry *dentry;
};
@@ -125,32 +154,91 @@
return i > 0 ? i - 1 : -EINVAL;
}
-static int bq2419x_charger_enable(struct bq2419x_chip *bq2419x)
+static int __bq2419x_charger_enable_locked(struct bq2419x_chip *bq2419x,
+ unsigned int disable_reason, bool enable)
+{
+ int ret;
+
+ dev_info(bq2419x->dev, "Charging %s with reason 0x%x\n",
+ enable ? "enable" : "disable",
+ disable_reason);
+ if (enable)
+ bq2419x->charging_disabled_reason &= ~disable_reason;
+ else
+ bq2419x->charging_disabled_reason |= disable_reason;
+
+ if (!bq2419x->charging_disabled_reason)
+ ret = regmap_update_bits(bq2419x->regmap, BQ2419X_PWR_ON_REG,
+ BQ2419X_ENABLE_CHARGE_MASK,
+ BQ2419X_ENABLE_CHARGE);
+ else {
+ dev_info(bq2419x->dev, "Charging disabled reason 0x%x\n",
+ bq2419x->charging_disabled_reason);
+ ret = regmap_update_bits(bq2419x->regmap, BQ2419X_PWR_ON_REG,
+ BQ2419X_ENABLE_CHARGE_MASK,
+ BQ2419X_DISABLE_CHARGE);
+ }
+ if (ret < 0)
+ dev_err(bq2419x->dev, "register update failed, err %d\n", ret);
+ return ret;
+}
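The new `__bq2419x_charger_enable_locked()` above replaces a plain on/off toggle with a reason bitmask, so independent callers (thermal, user, unknown-battery, charge-full-stop) can each veto charging without clobbering one another: charging is re-enabled only when every reason bit has been cleared. A stand-alone sketch of that bookkeeping (a user-space model for illustration, not the kernel code; the macro names mirror the driver, the function is simplified):

```c
/* Model of the driver's disable-reason bookkeeping: each caller
 * owns one reason bit; charging is allowed only when no reason
 * bits remain set. Illustrative only. */
#define CHG_DISABLE_REASON_THERMAL          (1u << 0)
#define CHG_DISABLE_REASON_USER             (1u << 1)
#define CHG_DISABLE_REASON_UNKNOWN_BATTERY  (1u << 2)
#define CHG_DISABLE_REASON_CHG_FULL_STOP    (1u << 3)

static unsigned int charging_disabled_reason;

/* Set (enable == 0) or clear (enable == 1) one reason bit.
 * Returns 1 if the charger may run after the update, else 0. */
static int charger_enable_update(unsigned int reason, int enable)
{
	if (enable)
		charging_disabled_reason &= ~reason;
	else
		charging_disabled_reason |= reason;
	return charging_disabled_reason == 0;
}
```

The effect is reference-count-like: disabling for "user" and "thermal" then re-enabling only "user" still leaves the charger off until the thermal bit is also cleared.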
+
+static int bq2419x_charger_enable_locked(struct bq2419x_chip *bq2419x,
+ unsigned int reason)
{
int ret;
if (bq2419x->battery_presense) {
- dev_info(bq2419x->dev, "Charging enabled\n");
- /* set default Charge regulation voltage */
+ /* set default/overtemp Charge regulation voltage */
ret = regmap_update_bits(bq2419x->regmap, BQ2419X_VOLT_CTRL_REG,
- bq2419x->chg_voltage_control.mask,
- bq2419x->chg_voltage_control.val);
+ bq2419x->last_chg_voltage.mask,
+ bq2419x->last_chg_voltage.val);
if (ret < 0) {
dev_err(bq2419x->dev,
"VOLT_CTRL_REG update failed %d\n", ret);
return ret;
}
- ret = regmap_update_bits(bq2419x->regmap, BQ2419X_PWR_ON_REG,
- BQ2419X_ENABLE_CHARGE_MASK,
- BQ2419X_ENABLE_CHARGE);
+
+ reason |= CHG_DISABLE_REASON_UNKNOWN_BATTERY;
+ ret = __bq2419x_charger_enable_locked(bq2419x,
+ reason, true);
} else {
- dev_info(bq2419x->dev, "Charging disabled\n");
- ret = regmap_update_bits(bq2419x->regmap, BQ2419X_PWR_ON_REG,
- BQ2419X_ENABLE_CHARGE_MASK,
- BQ2419X_DISABLE_CHARGE);
+ ret = __bq2419x_charger_enable_locked(bq2419x,
+ CHG_DISABLE_REASON_UNKNOWN_BATTERY, false);
+ if (ret < 0) {
+ dev_err(bq2419x->dev,
+ "charger enable failed %d\n", ret);
+ return ret;
+ }
+
+ reason &= ~CHG_DISABLE_REASON_UNKNOWN_BATTERY;
+ ret = __bq2419x_charger_enable_locked(bq2419x,
+ reason, true);
}
- if (ret < 0)
- dev_err(bq2419x->dev, "register update failed, err %d\n", ret);
+
+ return ret;
+}
+
+static inline int bq2419x_charger_enable(struct bq2419x_chip *bq2419x)
+{
+ int ret;
+
+ mutex_lock(&bq2419x->mutex);
+ ret = bq2419x_charger_enable_locked(bq2419x,
+ CHG_DISABLE_REASON_CHG_FULL_STOP);
+ mutex_unlock(&bq2419x->mutex);
+
+ return ret;
+}
+
+static inline int bq2419x_charger_enable_suspend(struct bq2419x_chip *bq2419x)
+{
+ int ret;
+
+ mutex_lock(&bq2419x->mutex);
+ ret = bq2419x_charger_enable_locked(bq2419x, 0);
+ mutex_unlock(&bq2419x->mutex);
+
return ret;
}
@@ -229,6 +317,7 @@
int thermal_regulation_threshold;
int charge_voltage_limit;
int vindpm, ichg, iprechg, iterm, bat_comp, vclamp, treg, vreg;
+ int otp_output_current;
if (chg_pdata) {
voltage_input = chg_pdata->input_voltage_limit_mV ?: 4200;
@@ -246,6 +335,8 @@
chg_pdata->thermal_regulation_threshold_degC ?: 100;
charge_voltage_limit =
chg_pdata->charge_voltage_limit_mV ?: 4208;
+ otp_output_current =
+ chg_pdata->thermal_prop.otp_output_current_ma ?: 1344;
} else {
voltage_input = 4200;
fast_charge_current = 4544;
@@ -255,14 +346,13 @@
ir_compensation_voltage = 112;
thermal_regulation_threshold = 100;
charge_voltage_limit = 4208;
+ otp_output_current = 1344;
}
vindpm = bq2419x_val_to_reg(voltage_input,
BQ2419X_INPUT_VINDPM_OFFSET, 80, 4, 0);
bq2419x->input_src.mask = BQ2419X_INPUT_VINDPM_MASK;
bq2419x->input_src.val = vindpm << 3;
- bq2419x->input_src.mask |= BQ2419X_INPUT_IINLIM_MASK;
- bq2419x->input_src.val |= 0x2;
ichg = bq2419x_val_to_reg(fast_charge_current,
BQ2419X_CHARGE_ICHG_OFFSET, 64, 6, 0);
@@ -299,16 +389,28 @@
BQ2419X_CHARGE_VOLTAGE_OFFSET, 16, 6, 1);
bq2419x->chg_voltage_control.mask = BQ2419X_CHG_VOLT_LIMIT_MASK;
bq2419x->chg_voltage_control.val = vreg << 2;
+
+ ichg = bq2419x_val_to_reg(otp_output_current,
+ BQ2419X_CHARGE_ICHG_OFFSET, 64, 6, 0);
+ bq2419x->otp_output_current.mask = BQ2419X_CHRG_CTRL_ICHG_MASK;
+ bq2419x->otp_output_current.val = ichg << 2;
+
return 0;
}
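The `vindpm`, `ichg`, `iprechg`, and `vreg` computations above all go through `bq2419x_val_to_reg()`, whose body lies outside this hunk. A plausible reconstruction, assuming the usual offset/step/clamp encoding of BQ2419x register fields (physical value in mV or mA, fixed offset, fixed step, n-bit field):

```c
/* Sketch of the bq2419x_val_to_reg() helper used by the hunks
 * above (reconstructed for illustration, not the driver's
 * verbatim code): map a physical value onto an n-bit register
 * field with a fixed offset and step, clamping at both ends. */
static int bq2419x_val_to_reg(int val, int offset, int div, int nbits,
			      int roundup)
{
	int max_val = offset + ((1 << nbits) - 1) * div;

	if (val <= offset)
		return 0;
	if (val >= max_val)
		return (1 << nbits) - 1;
	if (roundup)
		return (val - offset + div - 1) / div; /* DIV_ROUND_UP */
	return (val - offset) / div;
}
```

For example, a 4200 mV input limit with the 3880 mV VINDPM offset and 80 mV steps encodes as (4200 - 3880) / 80 = 4, which the code above then shifts into place with `vindpm << 3`; the 4544 mA fast-charge default saturates the 6-bit ICHG field at 63.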
static int bq2419x_charger_init(struct bq2419x_chip *bq2419x)
{
int ret;
+ struct bq2419x_reg_info reg;
- ret = regmap_update_bits(bq2419x->regmap, BQ2419X_CHRG_CTRL_REG,
- bq2419x->chg_current_control.mask,
- bq2419x->chg_current_control.val);
+ if (bq2419x->charging_state != ENABLED_HALF_IBAT)
+ ret = regmap_update_bits(bq2419x->regmap, BQ2419X_CHRG_CTRL_REG,
+ bq2419x->chg_current_control.mask,
+ bq2419x->chg_current_control.val);
+ else
+ ret = regmap_update_bits(bq2419x->regmap, BQ2419X_CHRG_CTRL_REG,
+ bq2419x->otp_output_current.mask,
+ bq2419x->otp_output_current.val);
if (ret < 0) {
dev_err(bq2419x->dev, "CHRG_CTRL_REG write failed %d\n", ret);
return ret;
@@ -322,8 +424,12 @@
return ret;
}
+ reg.mask = bq2419x->last_input_src.mask;
+ reg.val = bq2419x->last_input_src.val;
+ reg.mask |= BQ2419X_INPUT_IINLIM_MASK;
+ reg.val |= 0x2;
ret = regmap_update_bits(bq2419x->regmap, BQ2419X_INPUT_SRC_REG,
- bq2419x->input_src.mask, bq2419x->input_src.val);
+ reg.mask, reg.val);
if (ret < 0)
dev_err(bq2419x->dev, "INPUT_SRC_REG write failed %d\n", ret);
@@ -333,8 +439,8 @@
dev_err(bq2419x->dev, "THERM_REG write failed: %d\n", ret);
ret = regmap_update_bits(bq2419x->regmap, BQ2419X_VOLT_CTRL_REG,
- bq2419x->chg_voltage_control.mask,
- bq2419x->chg_voltage_control.val);
+ bq2419x->last_chg_voltage.mask,
+ bq2419x->last_chg_voltage.val);
if (ret < 0)
dev_err(bq2419x->dev, "VOLT_CTRL update failed: %d\n", ret);
@@ -343,6 +449,17 @@
if (ret < 0)
dev_err(bq2419x->dev, "TIME_CTRL update failed: %d\n", ret);
+ if (bq2419x->enable_batt_status_monitor) {
+ ret = regmap_update_bits(bq2419x->regmap,
+ BQ2419X_TIME_CTRL_REG,
+ BQ2419X_EN_TERM, 0);
+ if (ret < 0) {
+ dev_err(bq2419x->dev, "TIME_CTRL_REG update failed: %d\n",
+ ret);
+ return ret;
+ }
+ }
+
return ret;
}
@@ -382,55 +499,149 @@
return ret;
}
-static int bq2419x_set_charging_current(struct regulator_dev *rdev,
+int bq2419x_full_current_enable(struct bq2419x_chip *bq2419x)
+{
+ int ret;
+
+ ret = regmap_update_bits(bq2419x->regmap, BQ2419X_CHRG_CTRL_REG,
+ bq2419x->chg_current_control.mask,
+ bq2419x->chg_current_control.val);
+ if (ret < 0) {
+ dev_err(bq2419x->dev,
+ "Failed to write CHRG_CTRL_REG %d\n", ret);
+ return ret;
+ }
+
+ bq2419x->charging_state = ENABLED_FULL_IBAT;
+
+ return 0;
+}
+
+int bq2419x_half_current_enable(struct bq2419x_chip *bq2419x)
+{
+ int ret;
+
+ if (bq2419x->chg_current_control.val
+ > bq2419x->otp_output_current.val) {
+ ret = regmap_update_bits(bq2419x->regmap, BQ2419X_CHRG_CTRL_REG,
+ bq2419x->otp_output_current.mask,
+ bq2419x->otp_output_current.val);
+ if (ret < 0) {
+ dev_err(bq2419x->dev,
+ "Failed to write CHRG_CTRL_REG %d\n", ret);
+ return ret;
+ }
+ }
+ bq2419x->charging_state = ENABLED_HALF_IBAT;
+
+ return 0;
+}
+
+static void bq2419x_monitor_work_control(struct bq2419x_chip *bq2419x,
+ bool start)
+{
+ if (!bq2419x->battery_presense)
+ return;
+
+ if (start) {
+ if (bq2419x->enable_batt_status_monitor)
+ battery_charger_batt_status_start_monitoring(
+ bq2419x->bc_dev,
+ bq2419x->last_charging_current / 1000);
+ else
+ battery_charger_thermal_start_monitoring(
+ bq2419x->bc_dev);
+ } else {
+ if (bq2419x->enable_batt_status_monitor)
+ battery_charger_batt_status_stop_monitoring(
+ bq2419x->bc_dev);
+ else
+ battery_charger_thermal_stop_monitoring(
+ bq2419x->bc_dev);
+ }
+}
+
+static int bq2419x_set_charging_current_locked(struct bq2419x_chip *bq2419x,
int min_uA, int max_uA)
{
- struct bq2419x_chip *bq2419x = rdev_get_drvdata(rdev);
int in_current_limit;
int old_current_limit;
int ret = 0;
int val;
+ bool check_charge_done;
- dev_info(bq2419x->dev, "Setting charging current %d\n", max_uA/1000);
- msleep(200);
bq2419x->chg_status = BATTERY_DISCHARGING;
+ bq2419x->charging_state = STATE_INIT;
+ bq2419x->chg_full_stop = false;
+ bq2419x->chg_full_done = false;
+ bq2419x->last_chg_voltage.mask = bq2419x->chg_voltage_control.mask;
+ bq2419x->last_chg_voltage.val = bq2419x->chg_voltage_control.val;
- ret = bq2419x_charger_enable(bq2419x);
- if (ret < 0) {
- dev_err(bq2419x->dev, "Charger enable failed %d", ret);
- return ret;
- }
+ ret = bq2419x_charger_enable_locked(bq2419x,
+ CHG_DISABLE_REASON_CHG_FULL_STOP);
+ if (ret < 0)
+ goto error;
ret = regmap_read(bq2419x->regmap, BQ2419X_SYS_STAT_REG, &val);
if (ret < 0)
dev_err(bq2419x->dev, "SYS_STAT_REG read failed: %d\n", ret);
if (max_uA == 0 && val != 0)
- return ret;
+ goto done;
old_current_limit = bq2419x->in_current_limit;
bq2419x->last_charging_current = max_uA;
+ check_charge_done = !bq2419x->enable_batt_status_monitor &&
+ (!bq2419x->otp_control_no_thermister ||
+ bq2419x->last_chg_voltage.val ==
+ bq2419x->chg_voltage_control.val);
if ((val & BQ2419x_VBUS_STAT) == BQ2419x_VBUS_UNKNOWN) {
battery_charging_restart_cancel(bq2419x->bc_dev);
in_current_limit = 500;
bq2419x->cable_connected = 0;
bq2419x->chg_status = BATTERY_DISCHARGING;
- battery_charger_thermal_stop_monitoring(
- bq2419x->bc_dev);
+ bq2419x_monitor_work_control(bq2419x, false);
+ } else if (((val & BQ2419x_CHRG_STATE_MASK) ==
+ BQ2419x_CHRG_STATE_CHARGE_DONE) &&
+ check_charge_done) {
+ dev_info(bq2419x->dev, "Charging completed\n");
+ bq2419x->chg_status = BATTERY_CHARGING_DONE;
+ bq2419x->cable_connected = 1;
+ in_current_limit = max_uA/1000;
+ battery_charging_restart(bq2419x->bc_dev,
+ bq2419x->chg_restart_time);
+ bq2419x_monitor_work_control(bq2419x, false);
} else {
in_current_limit = max_uA/1000;
bq2419x->cable_connected = 1;
bq2419x->chg_status = BATTERY_CHARGING;
- battery_charger_thermal_start_monitoring(
- bq2419x->bc_dev);
+ bq2419x_monitor_work_control(bq2419x, true);
}
ret = bq2419x_configure_charging_current(bq2419x, in_current_limit);
if (ret < 0)
goto error;
+ if (!bq2419x->battery_presense)
+ bq2419x->chg_status = BATTERY_UNKNOWN;
+
+ if (bq2419x->safety_timer_reset_disable) {
+ if (in_current_limit > BQ2419X_SAFETY_TIMER_ENABLE_CURRENT_MIN)
+ val = BQ2419X_EN_SFT_TIMER_MASK;
+ else
+ val = 0;
+ ret = regmap_update_bits(bq2419x->regmap, BQ2419X_TIME_CTRL_REG,
+ BQ2419X_EN_SFT_TIMER_MASK, val);
+ if (ret < 0)
+ dev_err(bq2419x->dev,
+ "TIME_CTRL_REG update failed: %d\n", ret);
+ }
+
battery_charging_status_update(bq2419x->bc_dev, bq2419x->chg_status);
- if (bq2419x->disable_suspend_during_charging) {
- if (bq2419x->cable_connected && in_current_limit > 500)
+ if (bq2419x->disable_suspend_during_charging &&
+ bq2419x->battery_presense) {
+ if (bq2419x->cable_connected && in_current_limit > 500
+ && (bq2419x->chg_status != BATTERY_CHARGING_DONE ||
+ !check_charge_done))
battery_charger_acquire_wake_lock(bq2419x->bc_dev);
else if (!bq2419x->cable_connected && old_current_limit > 500)
battery_charger_release_wake_lock(bq2419x->bc_dev);
@@ -439,6 +650,24 @@
return 0;
error:
dev_err(bq2419x->dev, "Charger enable failed, err = %d\n", ret);
+done:
+ return ret;
+}
+
+static int bq2419x_set_charging_current(struct regulator_dev *rdev,
+ int min_uA, int max_uA)
+{
+ struct bq2419x_chip *bq2419x = rdev_get_drvdata(rdev);
+ int ret = 0;
+
+ dev_info(bq2419x->dev, "Setting charging current %d\n", max_uA/1000);
+ msleep(200);
+
+ mutex_lock(&bq2419x->mutex);
+ bq2419x->safety_timeout_happen = false;
+ ret = bq2419x_set_charging_current_locked(bq2419x, min_uA, max_uA);
+ mutex_unlock(&bq2419x->mutex);
+
return ret;
}
@@ -449,13 +678,13 @@
static int bq2419x_set_charging_current_suspend(struct bq2419x_chip *bq2419x,
int in_current_limit)
{
- int ret;
+ int ret = 0;
int val;
dev_info(bq2419x->dev, "Setting charging current %d mA\n",
in_current_limit);
- ret = bq2419x_charger_enable(bq2419x);
+ ret = bq2419x_charger_enable_suspend(bq2419x);
if (ret < 0) {
dev_err(bq2419x->dev, "Charger enable failed %d", ret);
return ret;
@@ -482,7 +711,12 @@
int timeout;
mutex_lock(&bq2419x->mutex);
- dev_info(bq2419x->dev, "%s() from %s()\n", __func__, from);
+ if (!bq2419x->battery_presense || !bq2419x->wdt_refresh_timeout) {
+ mutex_unlock(&bq2419x->mutex);
+ return ret;
+ }
+
+ dev_dbg(bq2419x->dev, "%s() from %s()\n", __func__, from);
/* Clear EN_HIZ */
if (bq2419x->emulate_input_disconnected)
@@ -527,21 +761,50 @@
return ret;
}
-static int bq2419x_fault_clear_sts(struct bq2419x_chip *bq2419x)
+static int bq2419x_fault_clear_sts(struct bq2419x_chip *bq2419x,
+ unsigned int *reg09_val)
{
int ret;
- unsigned int reg09;
+ unsigned int reg09_1, reg09_2;
- ret = regmap_read(bq2419x->regmap, BQ2419X_FAULT_REG, &reg09);
+ ret = regmap_read(bq2419x->regmap, BQ2419X_FAULT_REG, &reg09_1);
if (ret < 0) {
dev_err(bq2419x->dev, "FAULT_REG read failed: %d\n", ret);
return ret;
}
- ret = regmap_read(bq2419x->regmap, BQ2419X_FAULT_REG, &reg09);
+ ret = regmap_read(bq2419x->regmap, BQ2419X_FAULT_REG, &reg09_2);
if (ret < 0)
dev_err(bq2419x->dev, "FAULT_REG read failed: %d\n", ret);
+ if (reg09_val) {
+ unsigned int reg09 = 0;
+
+ if ((reg09_1 | reg09_2) & BQ2419x_FAULT_WATCHDOG_FAULT)
+ reg09 |= BQ2419x_FAULT_WATCHDOG_FAULT;
+ if ((reg09_1 | reg09_2) & BQ2419x_FAULT_BOOST_FAULT)
+ reg09 |= BQ2419x_FAULT_BOOST_FAULT;
+ if ((reg09_1 | reg09_2) & BQ2419x_FAULT_BAT_FAULT)
+ reg09 |= BQ2419x_FAULT_BAT_FAULT;
+ if (((reg09_1 & BQ2419x_FAULT_CHRG_FAULT_MASK) ==
+ BQ2419x_FAULT_CHRG_SAFTY) ||
+ ((reg09_2 & BQ2419x_FAULT_CHRG_FAULT_MASK) ==
+ BQ2419x_FAULT_CHRG_SAFTY))
+ reg09 |= BQ2419x_FAULT_CHRG_SAFTY;
+ else if (((reg09_1 & BQ2419x_FAULT_CHRG_FAULT_MASK) ==
+ BQ2419x_FAULT_CHRG_INPUT) ||
+ ((reg09_2 & BQ2419x_FAULT_CHRG_FAULT_MASK) ==
+ BQ2419x_FAULT_CHRG_INPUT))
+ reg09 |= BQ2419x_FAULT_CHRG_INPUT;
+ else if (((reg09_1 & BQ2419x_FAULT_CHRG_FAULT_MASK) ==
+ BQ2419x_FAULT_CHRG_THERMAL) ||
+ ((reg09_2 & BQ2419x_FAULT_CHRG_FAULT_MASK) ==
+ BQ2419x_FAULT_CHRG_THERMAL))
+ reg09 |= BQ2419x_FAULT_CHRG_THERMAL;
+
+ reg09 |= reg09_2 & BQ2419x_FAULT_NTC_FAULT;
+ *reg09_val = reg09;
+ }
return ret;
}
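`bq2419x_fault_clear_sts()` reads FAULT_REG twice because the register is latched and clear-on-read: the first read returns faults recorded since the last read, the second the live state. The merge logic added above can be modeled stand-alone (bit layout follows the BQ24190 register map as I understand it; an illustration, not the driver code):

```c
/* Model of the two-read FAULT_REG merge: sticky single-bit faults
 * are OR-ed from both reads; the 2-bit charge-fault field keeps
 * the highest-priority value seen (safety > input > thermal);
 * the NTC field is taken from the latest read only. */
#define FAULT_WATCHDOG     (1u << 7)
#define FAULT_BOOST        (1u << 6)
#define FAULT_CHRG_MASK    (3u << 4)
#define FAULT_CHRG_INPUT   (1u << 4)
#define FAULT_CHRG_THERMAL (2u << 4)
#define FAULT_CHRG_SAFETY  (3u << 4)
#define FAULT_BAT          (1u << 3)
#define FAULT_NTC_MASK     (7u << 0)

static unsigned int merge_fault_reads(unsigned int r1, unsigned int r2)
{
	unsigned int out = (r1 | r2) &
			   (FAULT_WATCHDOG | FAULT_BOOST | FAULT_BAT);

	if (((r1 & FAULT_CHRG_MASK) == FAULT_CHRG_SAFETY) ||
	    ((r2 & FAULT_CHRG_MASK) == FAULT_CHRG_SAFETY))
		out |= FAULT_CHRG_SAFETY;
	else if (((r1 & FAULT_CHRG_MASK) == FAULT_CHRG_INPUT) ||
		 ((r2 & FAULT_CHRG_MASK) == FAULT_CHRG_INPUT))
		out |= FAULT_CHRG_INPUT;
	else if (((r1 & FAULT_CHRG_MASK) == FAULT_CHRG_THERMAL) ||
		 ((r2 & FAULT_CHRG_MASK) == FAULT_CHRG_THERMAL))
		out |= FAULT_CHRG_THERMAL;

	out |= r2 & FAULT_NTC_MASK; /* NTC state from the latest read */
	return out;
}
```

This is why the IRQ handler and the resume path can now share one helper: callers that pass a non-NULL `reg09_val` get a single merged snapshot instead of racing two raw reads themselves.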
@@ -611,25 +874,89 @@
dev_err(bq2419x->dev, "bq2419x_reset_wdt failed: %d\n", ret);
}
-
-static int bq2419x_reset_safety_timer(struct bq2419x_chip *bq2419x)
+static int bq2419x_reconfigure_charger_param(struct bq2419x_chip *bq2419x,
+ const char *from)
{
int ret;
+ dev_info(bq2419x->dev, "Reconfiguring charging param from %s\n", from);
+ ret = bq2419x_watchdog_init(bq2419x, bq2419x->wdt_time_sec, from);
+ if (ret < 0) {
+ dev_err(bq2419x->dev, "BQWDT init failed %d\n", ret);
+ return ret;
+ }
+
+ ret = bq2419x_charger_init(bq2419x);
+ if (ret < 0) {
+ dev_err(bq2419x->dev, "Charger init failed: %d\n", ret);
+ return ret;
+ }
+
+ ret = bq2419x_configure_charging_current(bq2419x,
+ bq2419x->in_current_limit);
+ if (ret < 0) {
+ dev_err(bq2419x->dev, "Current config failed: %d\n", ret);
+ return ret;
+ }
+ return ret;
+}
+
+static int bq2419x_handle_safety_timer_expire(struct bq2419x_chip *bq2419x)
+{
+ struct device *dev = bq2419x->dev;
+ int ret;
+
+ /* Reset the safety timer by writing 0 and then 1 to the enable bit */
ret = regmap_update_bits(bq2419x->regmap, BQ2419X_TIME_CTRL_REG,
BQ2419X_EN_SFT_TIMER_MASK, 0);
if (ret < 0) {
- dev_err(bq2419x->dev, "TIME_CTRL_REG update failed: %d\n", ret);
+ dev_err(dev, "TIME_CTRL_REG update failed: %d\n", ret);
return ret;
}
ret = regmap_update_bits(bq2419x->regmap, BQ2419X_TIME_CTRL_REG,
BQ2419X_EN_SFT_TIMER_MASK, BQ2419X_EN_SFT_TIMER_MASK);
- if (ret < 0)
- dev_err(bq2419x->dev, "TIME_CTRL_REG update failed: %d\n", ret);
+ if (ret < 0) {
+ dev_err(dev, "TIME_CTRL_REG update failed: %d\n", ret);
+ return ret;
+ }
+
+ /* Need to toggle the charge-enable bit from 1 to 0 to 1 */
+ ret = regmap_update_bits(bq2419x->regmap, BQ2419X_PWR_ON_REG,
+ BQ2419X_ENABLE_CHARGE_MASK, 0);
+ if (ret < 0) {
+ dev_err(dev, "PWR_ON_REG update failed %d\n", ret);
+ return ret;
+ }
+ ret = regmap_update_bits(bq2419x->regmap, BQ2419X_PWR_ON_REG,
+ BQ2419X_ENABLE_CHARGE_MASK, BQ2419X_ENABLE_CHARGE);
+ if (ret < 0) {
+ dev_err(dev, "PWR_ON_REG update failed %d\n", ret);
+ return ret;
+ }
+
+ ret = bq2419x_reconfigure_charger_param(bq2419x, "SAFETY-TIMER_EXPIRE");
+ if (ret < 0) {
+ dev_err(dev, "Reconfig of BQ param failed: %d\n", ret);
+ return ret;
+ }
return ret;
}
+static inline void bq2419x_charging_done_update_locked(
+ struct bq2419x_chip *bq2419x)
+{
+ if (!bq2419x->otp_control_no_thermister
+ || bq2419x->last_chg_voltage.val ==
+ bq2419x->chg_voltage_control.val) {
+ dev_info(bq2419x->dev, "Charging completed\n");
+ bq2419x->chg_status = BATTERY_CHARGING_DONE;
+ battery_charging_status_update(bq2419x->bc_dev,
+ bq2419x->chg_status);
+ } else
+ dev_info(bq2419x->dev, "OTP charging completed\n");
+}
+
static irqreturn_t bq2419x_irq(int irq, void *data)
{
struct bq2419x_chip *bq2419x = data;
@@ -637,10 +964,10 @@
unsigned int val;
int check_chg_state = 0;
- ret = regmap_read(bq2419x->regmap, BQ2419X_FAULT_REG, &val);
+ ret = bq2419x_fault_clear_sts(bq2419x, &val);
if (ret < 0) {
- dev_err(bq2419x->dev, "FAULT_REG read failed %d\n", ret);
- return ret;
+ dev_err(bq2419x->dev, "fault clear status failed %d\n", ret);
+ val = 0;
}
dev_info(bq2419x->dev, "%s() Irq %d status 0x%02x\n",
@@ -649,30 +976,18 @@
if (val & BQ2419x_FAULT_BOOST_FAULT)
bq_chg_err(bq2419x, "VBUS Overloaded\n");
- if (!bq2419x->battery_presense)
- return IRQ_HANDLED;
+ mutex_lock(&bq2419x->mutex);
+ if (!bq2419x->battery_presense) {
+ bq2419x_charger_enable_locked(bq2419x, 0);
+ goto done;
+ }
+ mutex_unlock(&bq2419x->mutex);
if (val & BQ2419x_FAULT_WATCHDOG_FAULT) {
bq_chg_err(bq2419x, "WatchDog Expired\n");
- ret = bq2419x_watchdog_init(bq2419x,
- bq2419x->wdt_time_sec, "ISR");
- if (ret < 0) {
- dev_err(bq2419x->dev, "BQWDT init failed %d\n", ret);
- return ret;
- }
-
- ret = bq2419x_charger_init(bq2419x);
- if (ret < 0) {
- dev_err(bq2419x->dev, "Charger init failed: %d\n", ret);
- return ret;
- }
-
- ret = bq2419x_configure_charging_current(bq2419x,
- bq2419x->in_current_limit);
- if (ret < 0) {
- dev_err(bq2419x->dev, "bq2419x init failed: %d\n", ret);
- return ret;
- }
+ ret = bq2419x_reconfigure_charger_param(bq2419x, "WDT-EXP-ISR");
+ if (ret < 0)
+ dev_err(bq2419x->dev, "BQ reconfig failed %d\n", ret);
}
switch (val & BQ2419x_FAULT_CHRG_FAULT_MASK) {
@@ -685,12 +1000,21 @@
check_chg_state = 1;
break;
case BQ2419x_FAULT_CHRG_SAFTY:
- bq_chg_err(bq2419x, "Safety timer expiration\n");
- ret = bq2419x_reset_safety_timer(bq2419x);
- if (ret < 0) {
- dev_err(bq2419x->dev, "Reset safety timer failed %d\n",
- ret);
- return ret;
+ if (!bq2419x->safety_timer_reset_disable) {
+ bq_chg_err(bq2419x, "Safety timer expiration\n");
+ ret = bq2419x_handle_safety_timer_expire(bq2419x);
+ if (ret < 0)
+ dev_err(bq2419x->dev,
+ "Handling of safety timer expiry failed: %d\n",
+ ret);
+ } else {
+ bq_chg_err(bq2419x,
+ "Safety timer expiration, stop charging\n");
+ if (bq2419x->disable_suspend_during_charging)
+ battery_charger_release_wake_lock(
+ bq2419x->bc_dev);
+ bq2419x_monitor_work_control(bq2419x, false);
+ bq2419x->safety_timeout_happen = true;
}
check_chg_state = 1;
break;
@@ -704,29 +1028,27 @@
check_chg_state = 1;
}
- ret = bq2419x_fault_clear_sts(bq2419x);
- if (ret < 0) {
- dev_err(bq2419x->dev, "fault clear status failed %d\n", ret);
- return ret;
- }
-
ret = regmap_read(bq2419x->regmap, BQ2419X_SYS_STAT_REG, &val);
if (ret < 0) {
dev_err(bq2419x->dev, "SYS_STAT_REG read failed %d\n", ret);
- return ret;
+ val = 0;
}
- if ((val & BQ2419x_CHRG_STATE_MASK) == BQ2419x_CHRG_STATE_CHARGE_DONE) {
- dev_info(bq2419x->dev, "Charging completed\n");
- bq2419x->chg_status = BATTERY_CHARGING_DONE;
- battery_charging_status_update(bq2419x->bc_dev,
- bq2419x->chg_status);
- battery_charging_restart(bq2419x->bc_dev,
+ mutex_lock(&bq2419x->mutex);
+ if ((val & BQ2419x_CHRG_STATE_MASK) == BQ2419x_CHRG_STATE_CHARGE_DONE &&
+ bq2419x->battery_presense) {
+ bq2419x_charging_done_update_locked(bq2419x);
+ if ((!bq2419x->otp_control_no_thermister
+ || bq2419x->last_chg_voltage.val ==
+ bq2419x->chg_voltage_control.val)
+ && !bq2419x->enable_batt_status_monitor) {
+ battery_charging_restart(bq2419x->bc_dev,
bq2419x->chg_restart_time);
- if (bq2419x->disable_suspend_during_charging)
- battery_charger_release_wake_lock(bq2419x->bc_dev);
- battery_charger_thermal_stop_monitoring(
- bq2419x->bc_dev);
+ if (bq2419x->disable_suspend_during_charging)
+ battery_charger_release_wake_lock(
+ bq2419x->bc_dev);
+ bq2419x_monitor_work_control(bq2419x, false);
+ }
}
if ((val & BQ2419x_VSYS_STAT_MASK) == BQ2419x_VSYS_STAT_BATT_LOW)
@@ -735,7 +1057,8 @@
/* Update Charging status based on STAT register */
if (check_chg_state &&
- ((val & BQ2419x_CHRG_STATE_MASK) == BQ2419x_CHRG_STATE_NOTCHARGING)) {
+ ((val & BQ2419x_CHRG_STATE_MASK) == BQ2419x_CHRG_STATE_NOTCHARGING) &&
+ bq2419x->battery_presense) {
bq2419x->chg_status = BATTERY_DISCHARGING;
battery_charging_status_update(bq2419x->bc_dev,
bq2419x->chg_status);
@@ -745,6 +1068,12 @@
battery_charger_release_wake_lock(bq2419x->bc_dev);
}
+ if (!bq2419x->battery_presense)
+ bq2419x_charger_enable_locked(bq2419x, 0);
+
+done:
+ mutex_unlock(&bq2419x->mutex);
+
return IRQ_HANDLED;
}
@@ -967,14 +1296,16 @@
else
return -EINVAL;
+ mutex_lock(&bq2419x->mutex);
if (enabled)
- ret = regmap_update_bits(bq2419x->regmap, BQ2419X_PWR_ON_REG,
- BQ2419X_ENABLE_CHARGE_MASK, BQ2419X_ENABLE_CHARGE);
+ ret = __bq2419x_charger_enable_locked(bq2419x,
+ CHG_DISABLE_REASON_USER, true);
else
- ret = regmap_update_bits(bq2419x->regmap, BQ2419X_PWR_ON_REG,
- BQ2419X_ENABLE_CHARGE_MASK, BQ2419X_DISABLE_CHARGE);
+ ret = __bq2419x_charger_enable_locked(bq2419x,
+ CHG_DISABLE_REASON_USER, false);
+ mutex_unlock(&bq2419x->mutex);
if (ret < 0) {
- dev_err(bq2419x->dev, "register update failed, %d\n", ret);
+ dev_err(bq2419x->dev, "user charging enable failed, %d\n", ret);
return ret;
}
return count;
@@ -1143,7 +1474,8 @@
int curr_ichg, vreg;
chg_pdata = bq2419x->charger_pdata;
- if (!bq2419x->cable_connected || !chg_pdata->n_temp_profile)
+ if (!bq2419x->cable_connected || !chg_pdata->n_temp_profile ||
+ !bq2419x->battery_presense)
return 0;
if (bq2419x->last_temp == temp)
@@ -1204,16 +1536,92 @@
return 0;
}
+static int bq2419x_charger_thermal_configure_no_thermister(
+ struct battery_charger_dev *bc_dev,
+ int temp, bool enable_charger, bool enable_charg_half_current,
+ int battery_voltage)
+{
+ struct bq2419x_chip *bq2419x = battery_charger_get_drvdata(bc_dev);
+ int ret = 0;
+ int vreg_val;
+
+ mutex_lock(&bq2419x->mutex);
+ if (!bq2419x->cable_connected)
+ goto done;
+
+ if (bq2419x->last_temp == temp && bq2419x->charging_state != STATE_INIT)
+ goto done;
+
+ bq2419x->last_temp = temp;
+
+ dev_info(bq2419x->dev, "Battery temp %d\n", temp);
+ if (enable_charger) {
+
+ vreg_val = bq2419x_val_to_reg(battery_voltage,
+ BQ2419X_CHARGE_VOLTAGE_OFFSET, 16, 6, 1) << 2;
+ if (vreg_val > 0
+ && vreg_val <= bq2419x->chg_voltage_control.val
+ && vreg_val != bq2419x->last_chg_voltage.val) {
+ /* Set charge voltage */
+ ret = regmap_update_bits(bq2419x->regmap,
+ BQ2419X_VOLT_CTRL_REG,
+ BQ2419x_VOLTAGE_CTRL_MASK,
+ vreg_val);
+ if (ret < 0)
+ goto error;
+ bq2419x->last_chg_voltage.val = vreg_val;
+ }
+
+ if (bq2419x->charging_state == DISABLED) {
+ ret = __bq2419x_charger_enable_locked(bq2419x,
+ CHG_DISABLE_REASON_THERMAL, true);
+ if (ret < 0)
+ goto error;
+ }
+
+ if (!enable_charg_half_current &&
+ bq2419x->charging_state != ENABLED_FULL_IBAT) {
+ bq2419x_full_current_enable(bq2419x);
+ battery_charging_status_update(bq2419x->bc_dev,
+ BATTERY_CHARGING);
+ } else if (enable_charg_half_current &&
+ bq2419x->charging_state != ENABLED_HALF_IBAT) {
+ bq2419x_half_current_enable(bq2419x);
+ battery_charging_status_update(bq2419x->bc_dev,
+ BATTERY_CHARGING);
+ }
+ } else {
+ if (bq2419x->charging_state != DISABLED) {
+ ret = __bq2419x_charger_enable_locked(bq2419x,
+ CHG_DISABLE_REASON_THERMAL, false);
+
+ if (ret < 0)
+ goto error;
+ bq2419x->charging_state = DISABLED;
+ battery_charging_status_update(bq2419x->bc_dev,
+ BATTERY_DISCHARGING);
+ }
+ }
+
+error:
+done:
+ mutex_unlock(&bq2419x->mutex);
+ return ret;
+}
+
static int bq2419x_charging_restart(struct battery_charger_dev *bc_dev)
{
struct bq2419x_chip *bq2419x = battery_charger_get_drvdata(bc_dev);
- int ret;
+ int ret = 0;
- if (!bq2419x->cable_connected)
- return 0;
+ mutex_lock(&bq2419x->mutex);
+ if (!bq2419x->cable_connected || !bq2419x->battery_presense
+ || (bq2419x->safety_timer_reset_disable
+ && bq2419x->safety_timeout_happen))
+ goto done;
dev_info(bq2419x->dev, "Restarting the charging\n");
- ret = bq2419x_set_charging_current(bq2419x->chg_rdev,
+ ret = bq2419x_set_charging_current_locked(bq2419x,
bq2419x->last_charging_current,
bq2419x->last_charging_current);
if (ret < 0) {
@@ -1222,14 +1630,162 @@
battery_charging_restart(bq2419x->bc_dev,
bq2419x->chg_restart_time);
}
+
+done:
+ mutex_unlock(&bq2419x->mutex);
return ret;
}
+static int bq2419x_charger_charging_full_configure(
+ struct battery_charger_dev *bc_dev,
+ bool charge_full_done, bool charge_full_stop)
+{
+ struct bq2419x_chip *bq2419x = battery_charger_get_drvdata(bc_dev);
+
+ mutex_lock(&bq2419x->mutex);
+
+ if (!bq2419x->enable_batt_status_monitor ||
+ !bq2419x->battery_presense)
+ goto done;
+
+ if (!bq2419x->cable_connected)
+ goto done;
+
+ if (charge_full_done != bq2419x->chg_full_done) {
+ bq2419x->chg_full_done = charge_full_done;
+ if (charge_full_done)
+ bq2419x_charging_done_update_locked(bq2419x);
+ else {
+ bq2419x->chg_status = BATTERY_CHARGING;
+ battery_charging_status_update(bq2419x->bc_dev,
+ bq2419x->chg_status);
+ }
+ }
+
+ if (charge_full_stop != bq2419x->chg_full_stop) {
+ bq2419x->chg_full_stop = charge_full_stop;
+ if (charge_full_stop) {
+ __bq2419x_charger_enable_locked(bq2419x,
+ CHG_DISABLE_REASON_CHG_FULL_STOP,
+ false);
+ if (bq2419x->disable_suspend_during_charging)
+ battery_charger_release_wake_lock(
+ bq2419x->bc_dev);
+ } else {
+ __bq2419x_charger_enable_locked(bq2419x,
+ CHG_DISABLE_REASON_CHG_FULL_STOP,
+ true);
+ if (bq2419x->disable_suspend_during_charging)
+ battery_charger_acquire_wake_lock(
+ bq2419x->bc_dev);
+ }
+ }
+done:
+ mutex_unlock(&bq2419x->mutex);
+ return 0;
+}
+
+static int bq2419x_input_control(struct battery_charger_dev *bc_dev,
+ int voltage_min)
+{
+ struct bq2419x_chip *bq2419x = battery_charger_get_drvdata(bc_dev);
+ int vindpm, mask;
+ unsigned int val, data;
+ int ret;
+
+ mutex_lock(&bq2419x->mutex);
+
+ if (!bq2419x->battery_presense)
+ goto done;
+
+ if (!bq2419x->cable_connected)
+ goto done;
+
+ /* input source checking */
+ vindpm = bq2419x_val_to_reg(voltage_min,
+ BQ2419X_INPUT_VINDPM_OFFSET, 80, 4, 0);
+ vindpm <<= 3;
+ mask = BQ2419X_INPUT_VINDPM_MASK;
+ if (vindpm != bq2419x->last_input_src.val) {
+ bq2419x->last_input_src.val = vindpm;
+ ret = regmap_update_bits(bq2419x->regmap,
+ BQ2419X_INPUT_SRC_REG,
+ mask, vindpm);
+ if (ret < 0)
+ dev_err(bq2419x->dev,
+ "INPUT_SRC_REG update failed: %d\n",
+ ret);
+ }
+
+ /* Check input current limit if reset */
+ val = current_to_reg(iinlim, ARRAY_SIZE(iinlim),
+ bq2419x->in_current_limit);
+ if (val > 0) {
+ ret = regmap_read(bq2419x->regmap,
+ BQ2419X_INPUT_SRC_REG, &data);
+ if (ret < 0)
+ dev_err(bq2419x->dev,
+ "INPUT_SRC_REG read failed %d\n", ret);
+ else if ((data & BQ2419x_CONFIG_MASK) != val) {
+ ret = regmap_update_bits(bq2419x->regmap,
+ BQ2419X_INPUT_SRC_REG,
+ BQ2419x_CONFIG_MASK,
+ val);
+ if (ret < 0)
+ dev_err(bq2419x->dev,
+ "INPUT_SRC_REG update failed: %d\n",
+ ret);
+ }
+ }
+
+done:
+ mutex_unlock(&bq2419x->mutex);
+
+ return 0;
+}
+
+static int bq2419x_unknown_battery_handle(struct battery_charger_dev *bc_dev)
+{
+ int ret = 0;
+ struct bq2419x_chip *bq2419x = battery_charger_get_drvdata(bc_dev);
+
+ mutex_lock(&bq2419x->mutex);
+ bq2419x->battery_presense = false;
+ bq2419x->chg_status = BATTERY_UNKNOWN;
+ bq2419x->charging_state = STATE_INIT;
+
+ ret = __bq2419x_charger_enable_locked(bq2419x,
+ CHG_DISABLE_REASON_UNKNOWN_BATTERY, false);
+ if (ret < 0) {
+ dev_err(bq2419x->dev,
+ "charger enable failed %d\n", ret);
+ goto error;
+ }
+
+ cancel_delayed_work(&bq2419x->wdt_restart_wq);
+ battery_charging_restart_cancel(bq2419x->bc_dev);
+ battery_charger_batt_status_stop_monitoring(
+ bq2419x->bc_dev);
+
+ battery_charging_status_update(bq2419x->bc_dev, bq2419x->chg_status);
+
+ if (bq2419x->disable_suspend_during_charging
+ && bq2419x->last_charging_current > 500)
+ battery_charger_release_wake_lock(bq2419x->bc_dev);
+
+ bq2419x->last_charging_current = 0;
+error:
+ mutex_unlock(&bq2419x->mutex);
+ return ret;
+}
static struct battery_charging_ops bq2419x_charger_bci_ops = {
.get_charging_status = bq2419x_charger_get_status,
.restart_charging = bq2419x_charging_restart,
.thermal_configure = bq2419x_charger_thermal_configure,
+ .charging_full_configure = bq2419x_charger_charging_full_configure,
+ .input_voltage_configure = bq2419x_input_control,
+ .unknown_battery_handle = bq2419x_unknown_battery_handle,
};
static struct battery_charger_info bq2419x_charger_bci = {
@@ -1243,6 +1799,7 @@
struct bq2419x_platform_data *pdata;
struct device_node *batt_reg_node;
struct device_node *vbus_reg_node;
+ struct device_node *charge_policy_node;
int ret;
pdata = devm_kzalloc(&client->dev, sizeof(*pdata), GFP_KERNEL);
@@ -1252,9 +1809,15 @@
batt_reg_node = of_find_node_by_name(np, "charger");
if (batt_reg_node) {
int temp_range_len, chg_current_lim_len, chg_voltage_lim_len;
+ int count;
int wdt_timeout;
int chg_restart_time;
+ int suspend_polling_time;
+ int auto_recharge_time_power_off;
int temp_polling_time;
+ int thermal_temp;
+ unsigned int thermal_volt;
+ unsigned int otp_output_current;
struct regulator_init_data *batt_init_data;
struct bq2419x_charger_platform_data *chg_pdata;
const char *status_str;
@@ -1326,18 +1889,49 @@
of_property_read_bool(batt_reg_node,
"ti,disbale-suspend-during-charging");
+ pdata->bcharger_pdata->safety_timer_reset_disable =
+ of_property_read_bool(batt_reg_node,
+ "ti,safety-timer-reset-disable");
+
ret = of_property_read_u32(batt_reg_node,
"ti,watchdog-timeout", &wdt_timeout);
if (!ret)
pdata->bcharger_pdata->wdt_timeout = wdt_timeout;
ret = of_property_read_u32(batt_reg_node,
+ "ti,auto-recharge-time-power-off",
+ &auto_recharge_time_power_off);
+ if (!ret)
+ pdata->bcharger_pdata->auto_recharge_time_power_off =
+ auto_recharge_time_power_off;
+ else
+ pdata->bcharger_pdata->auto_recharge_time_power_off =
+ 3600;
+
+ ret = of_property_read_u32(batt_reg_node,
"ti,auto-recharge-time", &chg_restart_time);
if (!ret)
pdata->bcharger_pdata->chg_restart_time =
chg_restart_time;
ret = of_property_read_u32(batt_reg_node,
+ "ti,charge-suspend-polling-time-sec",
+ &suspend_polling_time);
+ if (!ret)
+ bcharger_pdata->charge_suspend_polling_time_sec =
+ suspend_polling_time;
+
+ ret = of_property_read_u32(batt_reg_node,
+ "ti,auto-recharge-time-suspend",
+ &chg_restart_time);
+ if (!ret)
+ pdata->bcharger_pdata->auto_recharge_time_supend =
+ chg_restart_time;
+ else
+ pdata->bcharger_pdata->auto_recharge_time_supend =
+ 3600;
+
+ ret = of_property_read_u32(batt_reg_node,
"ti,temp-polling-time-sec", &temp_polling_time);
if (!ret)
bcharger_pdata->temp_polling_time_sec =
@@ -1346,27 +1940,32 @@
chg_pdata->tz_name = of_get_property(batt_reg_node,
"ti,thermal-zone", NULL);
- temp_range_len = of_property_count_u32(batt_reg_node,
- "ti,temp-range");
- chg_current_lim_len = of_property_count_u32(batt_reg_node,
+ count = of_property_count_u32(batt_reg_node, "ti,temp-range");
+ temp_range_len = (count > 0) ? count : 0;
+
+ count = of_property_count_u32(batt_reg_node,
"ti,charge-current-limit");
- if (!chg_current_lim_len)
- chg_current_lim_len = of_property_count_u32(batt_reg_node,
+ if (count <= 0)
+ count = of_property_count_u32(batt_reg_node,
"ti,charge-thermal-current-limit");
- chg_voltage_lim_len = of_property_count_u32(batt_reg_node,
+ chg_current_lim_len = (count > 0) ? count : 0;
+
+ count = of_property_count_u32(batt_reg_node,
"ti,charge-thermal-voltage-limit");
- if (temp_range_len < 0)
+ chg_voltage_lim_len = (count > 0) ? count : 0;
+
+ if (!temp_range_len)
goto skip_therm_profile;
if (temp_range_len != chg_current_lim_len) {
dev_info(&client->dev,
- "thermal profile data is not correct\n");
+ "current thermal profile is not correct\n");
goto skip_therm_profile;
}
if (chg_voltage_lim_len && (temp_range_len != chg_voltage_lim_len)) {
dev_info(&client->dev,
- "thermal profile data is not correct\n");
+ "voltage thermal profile is not correct\n");
goto skip_therm_profile;
}
@@ -1424,6 +2023,67 @@
batt_init_data->num_consumer_supplies;
pdata->bcharger_pdata->max_charge_current_mA =
batt_init_data->constraints.max_uA / 1000;
+
+ bcharger_pdata->otp_control_no_thermister =
+ of_property_read_bool(batt_reg_node,
+ "thermal-overtemp-control-no-thermister");
+
+ ret = of_property_read_u32(batt_reg_node,
+ "thermal-temperature-hot-deciC", &thermal_temp);
+ if (!ret)
+ bcharger_pdata->thermal_prop.temp_hot_dc =
+ thermal_temp;
+
+ ret = of_property_read_u32(batt_reg_node,
+ "thermal-temperature-cold-deciC", &thermal_temp);
+ if (!ret)
+ bcharger_pdata->thermal_prop.temp_cold_dc =
+ thermal_temp;
+
+ ret = of_property_read_u32(batt_reg_node,
+ "thermal-temperature-warm-deciC", &thermal_temp);
+ if (!ret)
+ bcharger_pdata->thermal_prop.temp_warm_dc =
+ thermal_temp;
+
+ ret = of_property_read_u32(batt_reg_node,
+ "thermal-temperature-cool-deciC", &thermal_temp);
+ if (!ret)
+ bcharger_pdata->thermal_prop.temp_cool_dc =
+ thermal_temp;
+
+ ret = of_property_read_u32(batt_reg_node,
+ "thermal-temperature-hysteresis-deciC", &thermal_temp);
+ if (!ret)
+ bcharger_pdata->thermal_prop.temp_hysteresis_dc =
+ thermal_temp;
+
+ ret = of_property_read_u32(batt_reg_node,
+ "thermal-warm-voltage-millivolt", &thermal_volt);
+ if (!ret)
+ bcharger_pdata->thermal_prop.warm_voltage_mv =
+ thermal_volt;
+
+ ret = of_property_read_u32(batt_reg_node,
+ "thermal-cool-voltage-millivolt", &thermal_volt);
+ if (!ret)
+ bcharger_pdata->thermal_prop.cool_voltage_mv =
+ thermal_volt;
+
+ bcharger_pdata->thermal_prop.disable_warm_current_half =
+ of_property_read_bool(batt_reg_node,
+ "thermal-disable-warm-current-half");
+
+ bcharger_pdata->thermal_prop.disable_cool_current_half =
+ of_property_read_bool(batt_reg_node,
+ "thermal-disable-cool-current-half");
+
+ ret = of_property_read_u32(batt_reg_node,
+ "thermal-overtemp-output-current-milliamp",
+ &otp_output_current);
+ if (!ret)
+ bcharger_pdata->thermal_prop.otp_output_current_ma =
+ otp_output_current;
}
vbus_node:
@@ -1450,9 +2110,157 @@
"ti,otg-iusb-gpio", 0);
}
+	/* TODO: separate product policy from the chip driver */
+ charge_policy_node = of_find_node_by_name(np, "policy");
+ if (charge_policy_node) {
+ struct bq2419x_charge_policy_platform_data *policy_pdata;
+ unsigned int full_thr_val;
+ unsigned int input_switch_val;
+ int unknown_batt_id_min;
+
+ pdata->cpolicy_pdata = devm_kzalloc(&client->dev,
+ sizeof(*(pdata->cpolicy_pdata)), GFP_KERNEL);
+ if (!pdata->cpolicy_pdata)
+ return ERR_PTR(-ENOMEM);
+
+ policy_pdata = pdata->cpolicy_pdata;
+
+ policy_pdata->enable_battery_status_monitor =
+ of_property_read_bool(charge_policy_node,
+ "enable-battery-status-monitor");
+
+ if (!policy_pdata->enable_battery_status_monitor)
+ goto unknown_battery_id;
+
+ ret = of_property_read_u32(charge_policy_node,
+ "charge-full-done-voltage-min-millivolt",
+ &full_thr_val);
+ if (!ret)
+ policy_pdata->full_thr.chg_done_voltage_min_mv =
+ full_thr_val;
+
+ ret = of_property_read_u32(charge_policy_node,
+ "charge-full-done-current-min-milliamp",
+ &full_thr_val);
+ if (!ret)
+ policy_pdata->full_thr.chg_done_current_min_ma =
+ full_thr_val;
+
+ ret = of_property_read_u32(charge_policy_node,
+ "charge-full-done-low-current-min-milliamp",
+ &full_thr_val);
+ if (!ret)
+ policy_pdata->full_thr.chg_done_low_current_min_ma =
+ full_thr_val;
+
+ ret = of_property_read_u32(charge_policy_node,
+ "charge-full-recharge-voltage-min-millivolt",
+ &full_thr_val);
+ if (!ret)
+ policy_pdata->full_thr.recharge_voltage_min_mv =
+ full_thr_val;
+
+ ret = of_property_read_u32(charge_policy_node,
+ "input-voltage-min-high-battery-millivolt",
+ &input_switch_val);
+ if (!ret)
+ policy_pdata->input_switch.input_vmin_high_mv =
+ input_switch_val;
+
+ ret = of_property_read_u32(charge_policy_node,
+ "input-voltage-min-low-battery-millivolt",
+ &input_switch_val);
+ if (!ret)
+ policy_pdata->input_switch.input_vmin_low_mv =
+ input_switch_val;
+
+ ret = of_property_read_u32(charge_policy_node,
+ "input-voltage-switch-millivolt",
+ &input_switch_val);
+ if (!ret)
+ policy_pdata->input_switch.input_switch_threshold_mv =
+ input_switch_val;
+
+unknown_battery_id:
+ policy_pdata->batt_id_channel_name =
+ of_get_property(charge_policy_node,
+ "battery-id-channel-name", NULL);
+
+ ret = of_property_read_u32(charge_policy_node,
+ "unknown-battery-id-minimum",
+ &unknown_batt_id_min);
+ if (!ret)
+ policy_pdata->unknown_batt_id_min =
+ unknown_batt_id_min;
+ }
+
return pdata;
}
+static inline void bq2419x_battery_charger_info_init(
+ struct bq2419x_charger_platform_data *bcharger_pdata,
+ struct bq2419x_charge_policy_platform_data *cpolicy_pdata)
+{
+ if (bcharger_pdata->otp_control_no_thermister)
+ bq2419x_charger_bci.bc_ops->thermal_configure =
+ bq2419x_charger_thermal_configure_no_thermister;
+
+ bq2419x_charger_bci.polling_time_sec =
+ bcharger_pdata->temp_polling_time_sec;
+ bq2419x_charger_bci.tz_name = bcharger_pdata->tz_name;
+
+ bq2419x_charger_bci.thermal_prop.temp_hot_dc =
+ bcharger_pdata->thermal_prop.temp_hot_dc;
+ bq2419x_charger_bci.thermal_prop.temp_cold_dc =
+ bcharger_pdata->thermal_prop.temp_cold_dc;
+ bq2419x_charger_bci.thermal_prop.temp_warm_dc =
+ bcharger_pdata->thermal_prop.temp_warm_dc;
+ bq2419x_charger_bci.thermal_prop.temp_cool_dc =
+ bcharger_pdata->thermal_prop.temp_cool_dc;
+ bq2419x_charger_bci.thermal_prop.temp_hysteresis_dc =
+ bcharger_pdata->thermal_prop.temp_hysteresis_dc;
+ bq2419x_charger_bci.thermal_prop.regulation_voltage_mv =
+ bcharger_pdata->charge_voltage_limit_mV;
+ bq2419x_charger_bci.thermal_prop.warm_voltage_mv =
+ bcharger_pdata->thermal_prop.warm_voltage_mv;
+ bq2419x_charger_bci.thermal_prop.cool_voltage_mv =
+ bcharger_pdata->thermal_prop.cool_voltage_mv;
+ bq2419x_charger_bci.thermal_prop.disable_warm_current_half =
+ bcharger_pdata->thermal_prop.disable_warm_current_half;
+ bq2419x_charger_bci.thermal_prop.disable_cool_current_half =
+ bcharger_pdata->thermal_prop.disable_cool_current_half;
+
+	/* cpolicy_pdata is NULL when the "policy" DT node is absent */
+	if (cpolicy_pdata) {
+		bq2419x_charger_bci.full_thr.chg_done_voltage_min_mv =
+			cpolicy_pdata->full_thr.chg_done_voltage_min_mv ?:
+			(bcharger_pdata->charge_voltage_limit_mV ?
+			bcharger_pdata->charge_voltage_limit_mV - 102 : 0);
+		bq2419x_charger_bci.full_thr.chg_done_current_min_ma =
+			cpolicy_pdata->full_thr.chg_done_current_min_ma;
+		bq2419x_charger_bci.full_thr.chg_done_low_current_min_ma =
+			cpolicy_pdata->full_thr.chg_done_low_current_min_ma;
+		bq2419x_charger_bci.full_thr.recharge_voltage_min_mv =
+			cpolicy_pdata->full_thr.recharge_voltage_min_mv ?:
+			(bcharger_pdata->charge_voltage_limit_mV ?
+			bcharger_pdata->charge_voltage_limit_mV - 48 : 0);
+
+		bq2419x_charger_bci.input_switch.input_vmin_high_mv =
+			cpolicy_pdata->input_switch.input_vmin_high_mv;
+		bq2419x_charger_bci.input_switch.input_vmin_low_mv =
+			cpolicy_pdata->input_switch.input_vmin_low_mv;
+		bq2419x_charger_bci.input_switch.input_switch_threshold_mv =
+			cpolicy_pdata->input_switch.input_switch_threshold_mv;
+
+		bq2419x_charger_bci.batt_id_channel_name =
+			cpolicy_pdata->batt_id_channel_name;
+		bq2419x_charger_bci.unknown_batt_id_min =
+			cpolicy_pdata->unknown_batt_id_min;
+	}
+
+	if (cpolicy_pdata && cpolicy_pdata->enable_battery_status_monitor)
+		bq2419x_charger_bci.enable_batt_status_monitor = true;
+	else
+		bq2419x_charger_bci.enable_thermal_monitor = true;
+}
+
static int bq2419x_debugfs_show(struct seq_file *s, void *unused)
{
struct bq2419x_chip *bq2419x = s->private;
@@ -1549,7 +2357,7 @@
goto scrub_mutex;
}
- ret = bq2419x_fault_clear_sts(bq2419x);
+ ret = bq2419x_fault_clear_sts(bq2419x, NULL);
if (ret < 0) {
dev_err(bq2419x->dev, "fault clear status failed %d\n", ret);
goto scrub_mutex;
@@ -1557,15 +2365,33 @@
goto skip_bcharger_init;
}
+ bq2419x->auto_recharge_time_power_off =
+ pdata->bcharger_pdata->auto_recharge_time_power_off;
bq2419x->wdt_time_sec = pdata->bcharger_pdata->wdt_timeout;
bq2419x->chg_restart_time = pdata->bcharger_pdata->chg_restart_time;
bq2419x->battery_presense = true;
bq2419x->last_temp = -1000;
bq2419x->disable_suspend_during_charging =
pdata->bcharger_pdata->disable_suspend_during_charging;
+ bq2419x->safety_timer_reset_disable =
+ pdata->bcharger_pdata->safety_timer_reset_disable;
+ bq2419x->charge_suspend_polling_time =
+ pdata->bcharger_pdata->charge_suspend_polling_time_sec;
+ bq2419x->charge_polling_time =
+ pdata->bcharger_pdata->temp_polling_time_sec;
+ bq2419x->auto_recharge_time_supend =
+ pdata->bcharger_pdata->auto_recharge_time_supend;
bq2419x_process_charger_plat_data(bq2419x, pdata->bcharger_pdata);
+	/* cpolicy_pdata is NULL when the "policy" DT node is absent */
+	bq2419x->enable_batt_status_monitor = pdata->cpolicy_pdata ?
+		pdata->cpolicy_pdata->enable_battery_status_monitor : false;
+
+ bq2419x->last_chg_voltage.mask = bq2419x->chg_voltage_control.mask;
+ bq2419x->last_chg_voltage.val = bq2419x->chg_voltage_control.val;
+ bq2419x->last_input_src.mask = bq2419x->input_src.mask;
+ bq2419x->last_input_src.val = bq2419x->input_src.val;
+
ret = bq2419x_charger_init(bq2419x);
if (ret < 0) {
dev_err(bq2419x->dev, "Charger init failed: %d\n", ret);
@@ -1579,9 +2405,12 @@
goto scrub_mutex;
}
- bq2419x_charger_bci.polling_time_sec =
- pdata->bcharger_pdata->temp_polling_time_sec;
- bq2419x_charger_bci.tz_name = pdata->bcharger_pdata->tz_name;
+ bq2419x->otp_control_no_thermister =
+ pdata->bcharger_pdata->otp_control_no_thermister;
+
+ bq2419x_battery_charger_info_init(pdata->bcharger_pdata,
+ pdata->cpolicy_pdata);
+
bq2419x->bc_dev = battery_charger_register(bq2419x->dev,
&bq2419x_charger_bci, bq2419x);
if (IS_ERR(bq2419x->bc_dev)) {
@@ -1598,7 +2427,7 @@
goto scrub_wq;
}
- ret = bq2419x_fault_clear_sts(bq2419x);
+ ret = bq2419x_fault_clear_sts(bq2419x, NULL);
if (ret < 0) {
dev_err(bq2419x->dev, "fault clear status failed %d\n", ret);
goto scrub_wq;
@@ -1656,25 +2485,48 @@
static void bq2419x_shutdown(struct i2c_client *client)
{
struct bq2419x_chip *bq2419x = i2c_get_clientdata(client);
+ struct device *dev = &client->dev;
int ret;
+ int next_poweron_time = 0;
if (!bq2419x->battery_presense)
return;
+ if (!bq2419x->cable_connected)
+ goto end;
+
ret = bq2419x_reset_wdt(bq2419x, "SHUTDOWN");
if (ret < 0)
dev_err(bq2419x->dev, "Reset WDT failed: %d\n", ret);
- if (bq2419x->cable_connected && bq2419x->in_current_limit > 500 &&
- bq2419x->wdt_refresh_timeout) {
- ret = battery_charging_system_reset_after(bq2419x->bc_dev,
- bq2419x->wdt_refresh_timeout);
- if (ret < 0)
- dev_err(bq2419x->dev,
- "System reset after %d config failed %d\n",
- bq2419x->wdt_refresh_timeout, ret);
+ if (bq2419x->chg_status == BATTERY_CHARGING_DONE) {
+ dev_info(bq2419x->dev, "Battery charging done\n");
+ goto end;
}
+ if (bq2419x->in_current_limit <= 500) {
+ dev_info(bq2419x->dev, "Battery charging with 500mA\n");
+ next_poweron_time = bq2419x->auto_recharge_time_power_off;
+ } else {
+ dev_info(bq2419x->dev, "Battery charging with high current\n");
+ next_poweron_time = bq2419x->wdt_refresh_timeout;
+ }
+
+ if (!next_poweron_time)
+ goto end;
+
+ ret = battery_charging_system_reset_after(bq2419x->bc_dev,
+ next_poweron_time);
+ if (ret < 0)
+ dev_err(dev, "System poweron after %d config failed %d\n",
+ next_poweron_time, ret);
+end:
+ if (next_poweron_time)
+ dev_info(dev, "System-charger will power-ON after %d sec\n",
+ next_poweron_time);
+ else
+ dev_info(bq2419x->dev, "System-charger will not power-ON\n");
+
battery_charging_system_power_on_usb_event(bq2419x->bc_dev);
}
@@ -1682,6 +2534,7 @@
static int bq2419x_suspend(struct device *dev)
{
struct bq2419x_chip *bq2419x = dev_get_drvdata(dev);
+ int next_wakeup = 0;
int ret;
if (!bq2419x->battery_presense)
@@ -1689,23 +2542,63 @@
battery_charging_restart_cancel(bq2419x->bc_dev);
+ if (!bq2419x->cable_connected)
+ goto end;
+
ret = bq2419x_reset_wdt(bq2419x, "Suspend");
if (ret < 0)
dev_err(bq2419x->dev, "Reset WDT failed: %d\n", ret);
- if (bq2419x->cable_connected &&
- !bq2419x->disable_suspend_during_charging &&
- (bq2419x->in_current_limit > 500)) {
- battery_charging_wakeup(bq2419x->bc_dev,
- bq2419x->wdt_refresh_timeout);
- } else {
- ret = bq2419x_set_charging_current_suspend(bq2419x, 500);
- if (ret < 0)
- dev_err(bq2419x->dev,
- "Configuration of charging failed: %d\n", ret);
+ if (bq2419x->enable_batt_status_monitor) {
+ if (bq2419x->chg_full_stop)
+ next_wakeup =
+ bq2419x->charge_suspend_polling_time;
+ else if (bq2419x->wdt_refresh_timeout)
+ next_wakeup = bq2419x->wdt_refresh_timeout;
+ else
+ next_wakeup = bq2419x->charge_polling_time;
+ } else if (bq2419x->otp_control_no_thermister) {
+ if (bq2419x->chg_status == BATTERY_CHARGING_DONE)
+ next_wakeup =
+ bq2419x->auto_recharge_time_supend;
+ else if (bq2419x->wdt_refresh_timeout)
+ next_wakeup = bq2419x->wdt_refresh_timeout;
+ else
+ next_wakeup = bq2419x->charge_polling_time;
+ } else {
+ if (bq2419x->chg_status == BATTERY_CHARGING_DONE) {
+ dev_info(bq2419x->dev, "Battery charging done\n");
+ goto end;
+ }
+
+ if (bq2419x->in_current_limit <= 500) {
+ dev_info(bq2419x->dev, "Battery charging with 500mA\n");
+ next_wakeup = bq2419x->auto_recharge_time_supend;
+ } else {
+ dev_info(bq2419x->dev,
+ "Battery charging with high current\n");
+ next_wakeup = bq2419x->wdt_refresh_timeout;
+ }
+
}
- return 0;
+ battery_charging_wakeup(bq2419x->bc_dev, next_wakeup);
+end:
+ if (next_wakeup)
+ dev_info(dev, "System-charger will resume after %d sec\n",
+ next_wakeup);
+ else
+		dev_info(dev, "System-charger has no scheduled resume\n");
+
+ if (next_wakeup == bq2419x->wdt_refresh_timeout ||
+ bq2419x->enable_batt_status_monitor ||
+ bq2419x->otp_control_no_thermister)
+ return 0;
+
+ ret = bq2419x_set_charging_current_suspend(bq2419x, 500);
+ if (ret < 0)
+ dev_err(bq2419x->dev, "Config of charging failed: %d\n", ret);
+ return ret;
}
static int bq2419x_resume(struct device *dev)
@@ -1717,13 +2610,7 @@
if (!bq2419x->battery_presense)
return 0;
- ret = regmap_read(bq2419x->regmap, BQ2419X_FAULT_REG, &val);
- if (ret < 0) {
- dev_err(bq2419x->dev, "FAULT_REG read failed %d\n", ret);
- return ret;
- }
-
- ret = bq2419x_fault_clear_sts(bq2419x);
+ ret = bq2419x_fault_clear_sts(bq2419x, &val);
if (ret < 0) {
dev_err(bq2419x->dev, "fault clear status failed %d\n", ret);
return ret;
@@ -1732,25 +2619,10 @@
if (val & BQ2419x_FAULT_WATCHDOG_FAULT) {
bq_chg_err(bq2419x, "Watchdog Timer Expired\n");
- ret = bq2419x_watchdog_init(bq2419x, bq2419x->wdt_time_sec,
- "RESUME");
+ ret = bq2419x_reconfigure_charger_param(bq2419x,
+ "WDT-EXP-RESUME");
if (ret < 0) {
- dev_err(bq2419x->dev, "BQWDT init failed %d\n", ret);
- return ret;
- }
-
- ret = bq2419x_charger_init(bq2419x);
- if (ret < 0) {
- dev_err(bq2419x->dev, "Charger init failed: %d\n", ret);
- return ret;
- }
-
- ret = bq2419x_set_charging_current(bq2419x->chg_rdev,
- bq2419x->last_charging_current,
- bq2419x->last_charging_current);
- if (ret < 0) {
- dev_err(bq2419x->dev,
- "Set charging current failed: %d\n", ret);
+ dev_err(bq2419x->dev, "BQ reconfig failed %d\n", ret);
return ret;
}
} else {
@@ -1759,6 +2631,24 @@
dev_err(bq2419x->dev, "Reset WDT failed: %d\n", ret);
}
+ if (val & BQ2419x_FAULT_CHRG_SAFTY) {
+ if (!bq2419x->safety_timer_reset_disable) {
+			bq_chg_err(bq2419x, "Safety timer expired\n");
+ ret = bq2419x_handle_safety_timer_expire(bq2419x);
+ if (ret < 0)
+ dev_err(bq2419x->dev,
+					"Handling of safety timer expiry failed: %d\n",
+ ret);
+ } else {
+ bq_chg_err(bq2419x,
+ "Safety timer expiration, stop charging\n");
+ mutex_lock(&bq2419x->mutex);
+ bq2419x_monitor_work_control(bq2419x, false);
+ bq2419x->safety_timeout_happen = true;
+ mutex_unlock(&bq2419x->mutex);
+ }
+ }
+
return 0;
};
#endif
diff --git a/drivers/power/flounder_battery.c b/drivers/power/flounder_battery.c
new file mode 100644
index 0000000..b6ec6dd
--- /dev/null
+++ b/drivers/power/flounder_battery.c
@@ -0,0 +1,116 @@
+/*
+ * flounder_battery.c
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/platform_device.h>
+#include <linux/err.h>
+#include <linux/power_supply.h>
+#include <linux/slab.h>
+#include <linux/of.h>
+
+struct flounder_battery_data {
+ struct power_supply battery;
+ struct device *dev;
+ int present;
+};
+static struct flounder_battery_data *flounder_battery_data;
+
+static enum power_supply_property flounder_battery_prop[] = {
+ POWER_SUPPLY_PROP_PRESENT,
+};
+
+static int flounder_battery_get_property(struct power_supply *psy,
+ enum power_supply_property psp,
+ union power_supply_propval *val)
+{
+ struct flounder_battery_data *data = container_of(psy,
+ struct flounder_battery_data, battery);
+
+ if (psp == POWER_SUPPLY_PROP_PRESENT)
+ val->intval = data->present;
+ else
+ return -EINVAL;
+ return 0;
+}
+
+static int flounder_battery_probe(struct platform_device *pdev)
+{
+ struct flounder_battery_data *data;
+ int ret;
+
+ data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
+ if (!data)
+ return -ENOMEM;
+
+ dev_set_drvdata(&pdev->dev, data);
+
+ data->battery.name = "flounder-battery";
+ data->battery.type = POWER_SUPPLY_TYPE_BATTERY;
+ data->battery.get_property = flounder_battery_get_property;
+ data->battery.properties = flounder_battery_prop;
+ data->battery.num_properties = ARRAY_SIZE(flounder_battery_prop);
+ data->dev = &pdev->dev;
+
+ ret = power_supply_register(data->dev, &data->battery);
+ if (ret) {
+		dev_err(data->dev, "power supply registration failed: %d\n", ret);
+ return ret;
+ }
+
+ data->present = 1;
+ flounder_battery_data = data;
+
+ return 0;
+}
+
+static int flounder_battery_remove(struct platform_device *pdev)
+{
+ struct flounder_battery_data *data = dev_get_drvdata(&pdev->dev);
+
+ data->present = 0;
+ power_supply_unregister(&data->battery);
+
+ return 0;
+}
+
+static const struct of_device_id flounder_battery_dt_match[] = {
+ { .compatible = "htc,max17050_battery" },
+ { },
+};
+MODULE_DEVICE_TABLE(of, flounder_battery_dt_match);
+
+static struct platform_driver flounder_battery_driver = {
+ .driver = {
+ .name = "flounder_battery",
+ .of_match_table = of_match_ptr(flounder_battery_dt_match),
+ .owner = THIS_MODULE,
+ },
+ .probe = flounder_battery_probe,
+ .remove = flounder_battery_remove,
+};
+
+static int __init flounder_battery_init(void)
+{
+ return platform_driver_register(&flounder_battery_driver);
+}
+fs_initcall_sync(flounder_battery_init);
+
+static void __exit flounder_battery_exit(void)
+{
+ platform_driver_unregister(&flounder_battery_driver);
+}
+module_exit(flounder_battery_exit);
+
+MODULE_DESCRIPTION("Flounder battery driver");
+MODULE_LICENSE("GPL");
diff --git a/drivers/power/htc_battery_bq2419x.c b/drivers/power/htc_battery_bq2419x.c
new file mode 100644
index 0000000..baea3c4
--- /dev/null
+++ b/drivers/power/htc_battery_bq2419x.c
@@ -0,0 +1,1899 @@
+/*
+ * htc_battery_bq2419x.c
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+#include <asm/unaligned.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/platform_device.h>
+#include <linux/mutex.h>
+#include <linux/err.h>
+#include <linux/delay.h>
+#include <linux/power_supply.h>
+#include <linux/slab.h>
+#include <linux/regulator/driver.h>
+#include <linux/regulator/machine.h>
+#include <linux/regulator/of_regulator.h>
+#include <linux/htc_battery_bq2419x.h>
+#include <linux/power/battery-charger-gauge-comm.h>
+#include <linux/pm.h>
+#include <linux/jiffies.h>
+#include <linux/wakelock.h>
+#include <linux/of.h>
+#include <linux/ktime.h>
+#include <linux/cable_vbus_monitor.h>
+#include <linux/iio/consumer.h>
+#include <linux/iio/types.h>
+#include <linux/iio/iio.h>
+
+#define MAX_STR_PRINT 50
+
+enum charging_states {
+ STATE_INIT = 0,
+ ENABLED_HALF_IBAT,
+ ENABLED_FULL_IBAT,
+ DISABLED,
+};
+
+enum aicl_phase {
+ AICL_DISABLED,
+ AICL_DETECT_INIT,
+ AICL_DETECT,
+ AICL_MATCH,
+ AICL_NOT_MATCH,
+};
+
+#define CHG_DISABLE_REASON_THERMAL BIT(0)
+#define CHG_DISABLE_REASON_USER BIT(1)
+#define CHG_DISABLE_REASON_UNKNOWN_BATTERY BIT(2)
+#define CHG_DISABLE_REASON_CHG_FULL_STOP BIT(3)
+#define CHG_DISABLE_REASON_CHARGER_NOT_INIT BIT(4)
+
+#define BQ2419X_SLOW_CURRENT_MA (500)
+#define BQ2419X_IN_CURRENT_LIMIT_MIN_MA (100)
+
+#define AICL_STEP_DELAY_MS (200)
+#define AICL_STEP_RETRY_MS (200)
+#define AICL_DETECT_INPUT_VOLTAGE (4200)
+#define AICL_INPUT_CURRENT_TRIGGER_MA (1500)
+#define AICL_INPUT_CURRENT_MIN_MA (900)
+#define AICL_CURRENT_STEP_MA (300)
+#define AICL_INPUT_CURRENT_INIT_MA AICL_INPUT_CURRENT_MIN_MA
+#define AICL_CABLE_OUT_MAX_COUNT (3)
+#define AICL_RECHECK_TIME_S (300)
+#define AICL_MAX_RETRY_COUNT (2)
+#define AICL_DETECT_RESET_CHARGING_S (2)
+/*
+ * If a 1A adaptor triggers OCP at 1.25A, the charger hits DPM and the
+ * vdiff2/vdiff1 drop ratio exceeds 1.5.
+ */
+#define AICL_VDIFF1_MULTIPLIER (3)
+#define AICL_VDIFF2_MULTIPLIER (2)
+#define AICL_VBUS_CHECK_TIMES (3)
+#define AICL_VBUS_CHECK_CURRENT_NUM (3)
+#define AICL_VBUS_CHECK_CURRENT1_MA AICL_INPUT_CURRENT_MIN_MA
+#define AICL_VBUS_CHECK_CURRENT2_MA (1200)
+#define AICL_VBUS_CHECK_CURRENT3_MA (1500)
+
+#define INPUT_ADJUST_RETRY_TIMES_MAX (2)
+#define INPUT_ADJUST_RETRY_DELAY_S (300)
+#define INPUT_ADJUST_CURRENT_MA (900)
+#define INPUT_ADJUST_VBUS_CHECK_MA (4100)
+
+struct htc_battery_bq2419x_data {
+ struct device *dev;
+ int irq;
+
+ struct mutex mutex;
+ int in_current_limit;
+ struct wake_lock charge_change_wake_lock;
+
+ struct power_supply charger;
+
+ struct regulator_dev *chg_rdev;
+ struct regulator_desc chg_reg_desc;
+ struct regulator_init_data chg_reg_init_data;
+
+ struct battery_charger_dev *bc_dev;
+ int chg_status;
+
+ int chg_restart_time;
+ bool charger_presense;
+ bool battery_unknown;
+ bool cable_connected;
+ int last_charging_current;
+ bool disable_suspend_during_charging;
+ int last_temp;
+ int charging_state;
+ unsigned int last_chg_voltage;
+ unsigned int last_input_src;
+ unsigned int input_src;
+ unsigned int chg_current_control;
+ unsigned int prechg_control;
+ unsigned int term_control;
+ unsigned int chg_voltage_control;
+ struct htc_battery_bq2419x_platform_data *pdata;
+ unsigned int otp_output_current;
+ int charge_suspend_polling_time;
+ int charge_polling_time;
+ unsigned int charging_disabled_reason;
+ bool chg_full_done;
+ bool chg_full_stop;
+ bool safety_timeout_happen;
+ bool is_recheck_charge;
+ ktime_t recheck_charge_time;
+ struct htc_battery_bq2419x_ops *ops;
+ void *ops_data;
+
+ struct delayed_work aicl_wq;
+ int aicl_input_current;
+ enum aicl_phase aicl_current_phase;
+ int aicl_target_input;
+ int aicl_cable_out_count_in_detect;
+ int aicl_cable_out_found;
+ struct wake_lock aicl_wake_lock;
+ bool aicl_wake_lock_locked;
+ int aicl_retry_count;
+ bool aicl_disable_after_detect;
+ int aicl_vbus[AICL_VBUS_CHECK_CURRENT_NUM];
+ struct iio_channel *vbus_channel;
+ const char *vbus_channel_name;
+ unsigned int vbus_channel_max_voltage;
+ unsigned int vbus_channel_max_adc;
+
+ int input_adjust_retry_count;
+ ktime_t input_adjust_retry_time;
+ bool input_adjust;
+
+ bool input_reset_check_disable;
+};
+
+static struct htc_battery_bq2419x_data *htc_battery_data;
+
+static int bq2419x_get_vbus(struct htc_battery_bq2419x_data *data)
+{
+ int vbus = -1, ret;
+
+ if (!data->vbus_channel_name || !data->vbus_channel_max_voltage ||
+ !data->vbus_channel_max_adc)
+ return -EINVAL;
+
+ if (!data->vbus_channel || IS_ERR(data->vbus_channel))
+ data->vbus_channel =
+ iio_channel_get(NULL, data->vbus_channel_name);
+
+ if (data->vbus_channel && !IS_ERR(data->vbus_channel)) {
+ ret = iio_read_channel_processed(data->vbus_channel, &vbus);
+ if (ret < 0)
+ ret = iio_read_channel_raw(data->vbus_channel, &vbus);
+
+ if (ret < 0) {
+ dev_err(data->dev,
+ "Failed to read charger vbus, ret=%d\n",
+ ret);
+ vbus = -1;
+ }
+ }
+
+ if (vbus > 0)
+ vbus = data->vbus_channel_max_voltage * vbus /
+ data->vbus_channel_max_adc;
+
+ return vbus;
+}
+
+static int htc_battery_bq2419x_charger_enable(
+ struct htc_battery_bq2419x_data *data,
+ unsigned int reason, bool enable)
+{
+ int ret = 0;
+
+	dev_info(data->dev, "%s charging, reason 0x%x (previous mask 0x%x)\n",
+		enable ? "Enabling" : "Disabling",
+ reason, data->charging_disabled_reason);
+
+ if (enable)
+ data->charging_disabled_reason &= ~reason;
+ else
+ data->charging_disabled_reason |= reason;
+
+ if (!data->ops || !data->ops->set_charger_enable)
+ return -EINVAL;
+
+ ret = data->ops->set_charger_enable(!data->charging_disabled_reason,
+ data->ops_data);
+ if (ret < 0)
+ dev_err(data->dev,
+			"set_charger_enable failed, err %d\n", ret);
+
+ return ret;
+}
+
+static int htc_battery_bq2419x_process_plat_data(
+ struct htc_battery_bq2419x_data *data,
+ struct htc_battery_bq2419x_platform_data *chg_pdata)
+{
+ int voltage_input;
+ int fast_charge_current;
+ int pre_charge_current;
+ int termination_current;
+ int charge_voltage_limit;
+ int otp_output_current;
+
+ if (chg_pdata) {
+ voltage_input = chg_pdata->input_voltage_limit_mv ?: 4200;
+ fast_charge_current =
+ chg_pdata->fast_charge_current_limit_ma ?: 4544;
+ pre_charge_current =
+ chg_pdata->pre_charge_current_limit_ma ?: 256;
+ termination_current =
+ chg_pdata->termination_current_limit_ma;
+ charge_voltage_limit =
+ chg_pdata->charge_voltage_limit_mv ?: 4208;
+ otp_output_current =
+ chg_pdata->thermal_prop.otp_output_current_ma ?: 1344;
+ } else {
+ voltage_input = 4200;
+ fast_charge_current = 4544;
+ pre_charge_current = 256;
+ termination_current = 128;
+ charge_voltage_limit = 4208;
+ otp_output_current = 1344;
+ }
+
+ data->input_src = voltage_input;
+ data->chg_current_control = fast_charge_current;
+ data->prechg_control = pre_charge_current;
+ data->term_control = termination_current;
+ data->chg_voltage_control = charge_voltage_limit;
+ data->otp_output_current = otp_output_current;
+
+ return 0;
+}
+
+static int htc_battery_bq2419x_charger_init(
+ struct htc_battery_bq2419x_data *data)
+{
+	/* default to 0: every ops callback below is optional */
+	int ret = 0;
+
+ if (!data->ops)
+ return -EINVAL;
+
+ if (data->ops->set_fastcharge_current) {
+ ret = data->ops->set_fastcharge_current(
+ data->chg_current_control,
+ data->ops_data);
+ if (ret < 0) {
+ dev_err(data->dev, "set charge current failed %d\n",
+ ret);
+ return ret;
+ }
+ }
+
+ if (data->ops->set_precharge_current) {
+ ret = data->ops->set_precharge_current(data->prechg_control,
+ data->ops_data);
+ if (ret < 0) {
+ dev_err(data->dev, "set precharge current failed %d\n",
+ ret);
+ return ret;
+ }
+ }
+
+ if (data->ops->set_termination_current) {
+ ret = data->ops->set_termination_current(data->term_control,
+ data->ops_data);
+ if (ret < 0) {
+ dev_err(data->dev,
+ "set termination current failed %d\n",
+ ret);
+ return ret;
+ }
+ }
+
+ if (data->ops->set_dpm_input_voltage) {
+ ret = data->ops->set_dpm_input_voltage(data->last_input_src,
+ data->ops_data);
+ if (ret < 0) {
+ dev_err(data->dev, "set input voltage failed: %d\n",
+ ret);
+ return ret;
+ }
+ }
+
+ if (data->ops->set_charge_voltage) {
+ ret = data->ops->set_charge_voltage(data->last_chg_voltage,
+ data->ops_data);
+ if (ret < 0) {
+ dev_err(data->dev, "set charger voltage failed %d\n",
+ ret);
+ return ret;
+ }
+ }
+
+ return ret;
+}
+
+static int htc_battery_bq2419x_configure_charging_current(
+ struct htc_battery_bq2419x_data *data,
+ int in_current_limit)
+{
+ int ret;
+
+ if (!data->ops || !data->ops->set_input_current)
+ return -EINVAL;
+
+ if (data->ops->set_charger_hiz) {
+ ret = data->ops->set_charger_hiz(false, data->ops_data);
+ if (ret < 0)
+ dev_err(data->dev,
+ "clear hiz failed: %d\n", ret);
+ }
+
+ ret = data->ops->set_input_current(in_current_limit, data->ops_data);
+ if (ret < 0)
+ dev_err(data->dev,
+ "set input_current failed: %d\n", ret);
+
+ return ret;
+}
+
+int htc_battery_bq2419x_full_current_enable(
+ struct htc_battery_bq2419x_data *data)
+{
+ int ret;
+
+ if (!data->ops || !data->ops->set_fastcharge_current)
+ return -EINVAL;
+
+ ret = data->ops->set_fastcharge_current(data->chg_current_control,
+ data->ops_data);
+ if (ret < 0) {
+ dev_err(data->dev,
+ "Failed to set charge current %d\n", ret);
+ return ret;
+ }
+
+ data->charging_state = ENABLED_FULL_IBAT;
+
+ return 0;
+}
+
+int htc_battery_bq2419x_half_current_enable(
+ struct htc_battery_bq2419x_data *data)
+{
+ int ret;
+
+ if (!data->ops || !data->ops->set_fastcharge_current)
+ return -EINVAL;
+
+ if (data->chg_current_control > data->otp_output_current) {
+ ret = data->ops->set_fastcharge_current(
+ data->otp_output_current,
+ data->ops_data);
+ if (ret < 0) {
+ dev_err(data->dev,
+ "Failed to set charge current %d\n", ret);
+ return ret;
+ }
+ }
+
+ data->charging_state = ENABLED_HALF_IBAT;
+
+ return 0;
+}
+
+static inline void htc_battery_bq2419x_aicl_wakelock(
+ struct htc_battery_bq2419x_data *data, bool lock)
+{
+ if (lock != data->aicl_wake_lock_locked) {
+ if (lock)
+ wake_lock(&data->aicl_wake_lock);
+ else
+ wake_unlock(&data->aicl_wake_lock);
+ data->aicl_wake_lock_locked = lock;
+ }
+}
+
+/*
+ * AICL (Automatic Input Current Limit) detection worker: starting from
+ * AICL_INPUT_CURRENT_INIT_MA, step the input current limit up towards the
+ * target while sampling VBUS at fixed checkpoint currents. Back off to the
+ * minimum input current when the charger repeatedly latches off or VBUS
+ * droops too steeply, so the highest stable input current is selected.
+ */
+static void htc_battery_bq2419x_aicl_wq(struct work_struct *work)
+{
+ struct htc_battery_bq2419x_data *data;
+ int ret;
+ enum aicl_phase current_phase;
+ int current_input_current;
+ ktime_t timeout, cur_boottime;
+ int vbus, vdiff1, vdiff2;
+ int i;
+
+ data = container_of(work, struct htc_battery_bq2419x_data,
+ aicl_wq.work);
+
+ mutex_lock(&data->mutex);
+
+ current_phase = data->aicl_current_phase;
+ current_input_current = data->aicl_input_current;
+
+ dev_dbg(data->dev, "aicl: phase=%d, retry=%d, input=%d\n",
+ current_phase,
+ data->aicl_retry_count,
+ current_input_current);
+
+ if (current_phase == AICL_DETECT_INIT ||
+ (current_phase == AICL_NOT_MATCH &&
+ data->aicl_retry_count > 0)) {
+ htc_battery_bq2419x_aicl_wakelock(data, true);
+ current_input_current = AICL_INPUT_CURRENT_INIT_MA;
+ current_phase = AICL_DETECT;
+ data->aicl_cable_out_found = 0;
+ data->aicl_vbus[0] = 0;
+ data->aicl_vbus[1] = 0;
+ data->aicl_vbus[2] = 0;
+
+ --data->aicl_retry_count;
+ if (data->ops && data->ops->set_dpm_input_voltage) {
+ ret = data->ops->set_dpm_input_voltage(
+ AICL_DETECT_INPUT_VOLTAGE,
+ data->ops_data);
+ if (ret < 0)
+ dev_err(data->dev,
+ "set input voltage failed: %d\n",
+ ret);
+ else
+ data->last_input_src =
+ AICL_DETECT_INPUT_VOLTAGE;
+ }
+ } else if (current_phase != AICL_DETECT) {
+ htc_battery_bq2419x_aicl_wakelock(data, false);
+ goto no_detect;
+ } else {
+ if (data->aicl_cable_out_found > 0) {
+ /* raising the input current might cause the charger to latch off */
+ data->aicl_cable_out_count_in_detect++;
+ if (data->aicl_cable_out_count_in_detect >=
+ AICL_CABLE_OUT_MAX_COUNT) {
+ current_input_current -= AICL_CURRENT_STEP_MA;
+ current_phase = AICL_NOT_MATCH;
+ }
+ } else {
+ if (current_input_current <
+ data->aicl_target_input)
+ current_input_current += AICL_CURRENT_STEP_MA;
+ else
+ current_phase = AICL_MATCH;
+ }
+ }
+
+ if (current_input_current > data->aicl_target_input)
+ current_input_current = data->aicl_target_input;
+ else if (current_input_current < AICL_INPUT_CURRENT_MIN_MA)
+ current_input_current = AICL_INPUT_CURRENT_MIN_MA;
+
+ if (current_phase == AICL_DETECT || current_phase == AICL_DETECT_INIT) {
+ data->aicl_cable_out_found = 0;
+
+ ret = htc_battery_bq2419x_configure_charging_current(data,
+ current_input_current);
+ if (ret < 0) {
+ dev_err(data->dev,
+ "input current configure fail, ret = %d\n",
+ ret);
+ goto error;
+ }
+
+ if (current_input_current >= AICL_VBUS_CHECK_CURRENT1_MA &&
+ current_input_current <=
+ AICL_VBUS_CHECK_CURRENT3_MA) {
+ vbus = 0;
+ for (i = 0; i < AICL_VBUS_CHECK_TIMES; i++)
+ vbus += bq2419x_get_vbus(data);
+
+ vbus /= AICL_VBUS_CHECK_TIMES;
+
+ if (current_input_current ==
+ AICL_VBUS_CHECK_CURRENT1_MA)
+ data->aicl_vbus[0] = vbus;
+ else if (current_input_current ==
+ AICL_VBUS_CHECK_CURRENT2_MA)
+ data->aicl_vbus[1] = vbus;
+ else
+ data->aicl_vbus[2] = vbus;
+ }
+
+ data->aicl_current_phase = current_phase;
+ data->aicl_input_current = current_input_current;
+ schedule_delayed_work(&data->aicl_wq,
+ msecs_to_jiffies(AICL_STEP_DELAY_MS));
+ } else {
+ if (data->aicl_disable_after_detect) {
+ current_phase = AICL_DISABLED;
+ current_input_current = data->in_current_limit;
+ data->aicl_target_input = -1;
+ dev_info(data->dev,
+ "aicl disabled, aicl_result=(%d, %d)\n",
+ current_phase, current_input_current);
+ } else if (data->in_current_limit != data->aicl_target_input) {
+ data->aicl_target_input = data->in_current_limit;
+ if (data->in_current_limit >=
+ AICL_INPUT_CURRENT_TRIGGER_MA) {
+ current_input_current =
+ AICL_INPUT_CURRENT_MIN_MA;
+ current_phase = AICL_NOT_MATCH;
+ } else {
+ current_input_current = data->in_current_limit;
+ current_phase = AICL_MATCH;
+ }
+ dev_info(data->dev,
+ "input changed, aicl_result=(%d, %d)\n",
+ current_phase, current_input_current);
+ } else
+ dev_info(data->dev, "aicl_result=(%d, %d)\n",
+ current_phase, current_input_current);
+
+ if (current_phase == AICL_NOT_MATCH &&
+ data->aicl_input_current >
+ AICL_INPUT_CURRENT_MIN_MA) {
+ dev_info(data->dev, "limit aicl current to %d\n",
+ AICL_INPUT_CURRENT_MIN_MA);
+ current_input_current = AICL_INPUT_CURRENT_MIN_MA;
+ } else if (current_phase == AICL_MATCH &&
+ data->aicl_vbus[1] &&
+ data->aicl_vbus[2]) {
+ vdiff1 = data->aicl_vbus[0] - data->aicl_vbus[1];
+ vdiff2 = data->aicl_vbus[1] - data->aicl_vbus[2];
+ dev_dbg(data->dev, "vbus1=%d, vbus2=%d, vbus3=%d\n",
+ data->aicl_vbus[0],
+ data->aicl_vbus[1],
+ data->aicl_vbus[2]);
+ if (vdiff1 < 0 || vdiff2 < 0) {
+ if (data->aicl_retry_count > 0) {
+ current_phase = AICL_NOT_MATCH;
+ current_input_current =
+ AICL_INPUT_CURRENT_MIN_MA;
+ schedule_delayed_work(&data->aicl_wq,
+ msecs_to_jiffies(
+ AICL_STEP_RETRY_MS));
+ }
+ } else if (AICL_VDIFF2_MULTIPLIER * vdiff2 >
+ AICL_VDIFF1_MULTIPLIER * vdiff1) {
+ current_phase = AICL_NOT_MATCH;
+ current_input_current =
+ AICL_INPUT_CURRENT_MIN_MA;
+ dev_info(data->dev,
+ "vbus drop significantly, limit aicl current to %d\n",
+ AICL_INPUT_CURRENT_MIN_MA);
+
+ /* set input adjust to trigger redetection */
+ data->input_adjust = true;
+ timeout = ktime_set(INPUT_ADJUST_RETRY_DELAY_S,
+ 0);
+ data->input_adjust_retry_time =
+ ktime_add(ktime_get_boottime(),
+ timeout);
+ }
+ }
+
+ ret = htc_battery_bq2419x_configure_charging_current(data,
+ current_input_current);
+ if (ret < 0) {
+ dev_err(data->dev,
+ "input current configure fail, ret = %d\n",
+ ret);
+ goto error;
+ }
+
+ data->aicl_current_phase = current_phase;
+ data->aicl_input_current = current_input_current;
+
+ htc_battery_bq2419x_aicl_wakelock(data, false);
+ }
+
+no_detect:
+ mutex_unlock(&data->mutex);
+ return;
+error:
+ data->aicl_current_phase = AICL_NOT_MATCH;
+ data->aicl_input_current = AICL_INPUT_CURRENT_INIT_MA;
+ if (!data->is_recheck_charge) {
+ data->is_recheck_charge = true;
+ cur_boottime = ktime_get_boottime();
+ timeout = ktime_set(data->chg_restart_time, 0);
+ data->recheck_charge_time = ktime_add(cur_boottime, timeout);
+ battery_charging_restart(data->bc_dev, data->chg_restart_time);
+ }
+
+ htc_battery_bq2419x_aicl_wakelock(data, false);
+ mutex_unlock(&data->mutex);
+}
+
+static int htc_battery_bq2419x_aicl_enable(
+ struct htc_battery_bq2419x_data *data, bool enable)
+{
+ int ret = 0;
+
+ data->aicl_disable_after_detect = false;
+ if (!enable) {
+ if (data->aicl_current_phase == AICL_DETECT ||
+ data->aicl_current_phase == AICL_DETECT_INIT) {
+ data->aicl_disable_after_detect = true;
+ ret = -EBUSY;
+ } else {
+ cancel_delayed_work_sync(&data->aicl_wq);
+ htc_battery_bq2419x_aicl_wakelock(data, false);
+ data->aicl_current_phase = AICL_DISABLED;
+ }
+ } else {
+ if (data->aicl_current_phase == AICL_DETECT ||
+ data->aicl_current_phase == AICL_DETECT_INIT)
+ ret = -EBUSY;
+ else {
+ data->aicl_retry_count = AICL_MAX_RETRY_COUNT;
+ data->aicl_input_current = -1;
+ data->aicl_target_input = -1;
+ data->aicl_current_phase = AICL_NOT_MATCH;
+ }
+ }
+
+ return ret;
+}
+
+static int htc_battery_bq2419x_aicl_configure_current(
+ struct htc_battery_bq2419x_data *data, int target_input)
+{
+ int ret = 0;
+
+ if (target_input < BQ2419X_IN_CURRENT_LIMIT_MIN_MA)
+ return 0;
+
+ switch (data->aicl_current_phase) {
+ case AICL_DISABLED:
+ data->aicl_target_input = target_input;
+ data->aicl_input_current = target_input;
+ break;
+ case AICL_DETECT:
+ case AICL_DETECT_INIT:
+ if (!data->cable_connected)
+ data->aicl_cable_out_found++;
+ goto detecting;
+ case AICL_MATCH:
+ case AICL_NOT_MATCH:
+ data->aicl_target_input = target_input;
+ if (target_input >= AICL_INPUT_CURRENT_TRIGGER_MA) {
+ htc_battery_bq2419x_aicl_wakelock(data, true);
+ data->aicl_current_phase = AICL_DETECT_INIT;
+ data->aicl_input_current = AICL_INPUT_CURRENT_INIT_MA;
+ cancel_delayed_work_sync(&data->aicl_wq);
+ schedule_delayed_work(&data->aicl_wq, 0);
+ } else {
+ data->aicl_input_current = target_input;
+ data->aicl_current_phase = AICL_MATCH;
+ }
+ break;
+ default:
+ break;
+ }
+
+ ret = htc_battery_bq2419x_configure_charging_current(data,
+ data->aicl_input_current);
+ if (ret < 0)
+ goto error;
+
+detecting:
+ data->in_current_limit = target_input;
+error:
+ return ret;
+}
+
+static int __htc_battery_bq2419x_set_charging_current(
+ struct htc_battery_bq2419x_data *data,
+ int charging_current)
+{
+ int in_current_limit;
+ int old_current_limit;
+ int ret = 0;
+ unsigned int charger_state = 0;
+ bool enable_safety_timer;
+ ktime_t timeout, cur_boottime;
+
+ data->chg_status = BATTERY_DISCHARGING;
+ data->charging_state = STATE_INIT;
+ data->chg_full_stop = false;
+ data->chg_full_done = false;
+ data->last_chg_voltage = data->chg_voltage_control;
+ data->is_recheck_charge = false;
+ data->input_adjust_retry_count = INPUT_ADJUST_RETRY_TIMES_MAX;
+ data->input_adjust = false;
+
+ battery_charging_restart_cancel(data->bc_dev);
+
+ if (data->ops && data->ops->set_charge_voltage) {
+ ret = data->ops->set_charge_voltage(
+ data->last_chg_voltage,
+ data->ops_data);
+ if (ret < 0)
+ goto error;
+ } else {
+ ret = -EINVAL;
+ goto error;
+ }
+
+ ret = htc_battery_bq2419x_charger_enable(data,
+ CHG_DISABLE_REASON_CHG_FULL_STOP, true);
+ if (ret < 0)
+ goto error;
+
+ if (data->ops && data->ops->get_charger_state) {
+ ret = data->ops->get_charger_state(&charger_state,
+ data->ops_data);
+ if (ret < 0)
+ dev_err(data->dev, "charger state get failed: %d\n",
+ ret);
+ } else {
+ ret = -EINVAL;
+ goto error;
+ }
+
+ if (charging_current == 0 && charger_state != 0)
+ goto done;
+
+ old_current_limit = data->in_current_limit;
+ if ((charger_state & HTC_BATTERY_BQ2419X_KNOWN_VBUS) == 0) {
+ in_current_limit = BQ2419X_SLOW_CURRENT_MA;
+ data->cable_connected = 0;
+ data->chg_status = BATTERY_DISCHARGING;
+ battery_charger_batt_status_stop_monitoring(data->bc_dev);
+ htc_battery_bq2419x_aicl_enable(data, false);
+ } else {
+ in_current_limit = charging_current / 1000;
+ data->cable_connected = 1;
+ data->chg_status = BATTERY_CHARGING;
+ if (!data->battery_unknown)
+ battery_charger_batt_status_start_monitoring(
+ data->bc_dev,
+ data->last_charging_current / 1000);
+ htc_battery_bq2419x_aicl_enable(data, true);
+ }
+ ret = htc_battery_bq2419x_aicl_configure_current(data,
+ in_current_limit);
+ if (ret < 0)
+ goto error;
+
+ if (data->battery_unknown)
+ data->chg_status = BATTERY_UNKNOWN;
+
+ if (in_current_limit > BQ2419X_SLOW_CURRENT_MA)
+ enable_safety_timer = true;
+ else
+ enable_safety_timer = false;
+ if (data->ops->set_safety_timer_enable) {
+ ret = data->ops->set_safety_timer_enable(enable_safety_timer,
+ data->ops_data);
+ if (ret < 0)
+ dev_err(data->dev, "safety timer control failed: %d\n",
+ ret);
+ }
+
+ battery_charging_status_update(data->bc_dev, data->chg_status);
+ power_supply_changed(&data->charger);
+
+ if (data->disable_suspend_during_charging && !data->battery_unknown) {
+ if (data->cable_connected &&
+ data->in_current_limit > BQ2419X_SLOW_CURRENT_MA)
+ battery_charger_acquire_wake_lock(data->bc_dev);
+ else if (!data->cable_connected &&
+ old_current_limit > BQ2419X_SLOW_CURRENT_MA)
+ battery_charger_release_wake_lock(data->bc_dev);
+ }
+
+ return 0;
+error:
+ dev_err(data->dev, "Charger enable failed, err = %d\n", ret);
+ data->is_recheck_charge = true;
+ cur_boottime = ktime_get_boottime();
+ timeout = ktime_set(data->chg_restart_time, 0);
+ data->recheck_charge_time = ktime_add(cur_boottime, timeout);
+ battery_charging_restart(data->bc_dev,
+ data->chg_restart_time);
+done:
+ return ret;
+}
+
+static int htc_battery_bq2419x_set_charging_current(struct regulator_dev *rdev,
+ int min_ua, int max_ua)
+{
+ struct htc_battery_bq2419x_data *data = rdev_get_drvdata(rdev);
+ int ret = 0;
+
+ if (!data || !data->charger_presense)
+ return -EINVAL;
+
+ dev_info(data->dev, "Setting charging current %d mA\n", max_ua / 1000);
+ msleep(200);
+
+ mutex_lock(&data->mutex);
+ wake_lock(&data->charge_change_wake_lock);
+
+ data->safety_timeout_happen = false;
+ data->last_charging_current = max_ua;
+
+ ret = __htc_battery_bq2419x_set_charging_current(data, max_ua);
+ if (ret < 0)
+ dev_err(data->dev, "set charging current fail, ret=%d\n", ret);
+ wake_unlock(&data->charge_change_wake_lock);
+ mutex_unlock(&data->mutex);
+
+ return ret;
+}
+
+static struct regulator_ops htc_battery_bq2419x_tegra_regulator_ops = {
+ .set_current_limit = htc_battery_bq2419x_set_charging_current,
+};
+
+static int htc_battery_bq2419x_init_charger_regulator(
+ struct htc_battery_bq2419x_data *data,
+ struct htc_battery_bq2419x_platform_data *pdata)
+{
+ int ret = 0;
+ struct regulator_config rconfig = { };
+
+ if (!pdata) {
+ dev_err(data->dev, "No platform data\n");
+ return 0;
+ }
+
+ data->chg_reg_desc.name = "data-charger";
+ data->chg_reg_desc.ops = &htc_battery_bq2419x_tegra_regulator_ops;
+ data->chg_reg_desc.type = REGULATOR_CURRENT;
+ data->chg_reg_desc.owner = THIS_MODULE;
+
+ data->chg_reg_init_data.supply_regulator = NULL;
+ data->chg_reg_init_data.regulator_init = NULL;
+ data->chg_reg_init_data.num_consumer_supplies =
+ pdata->num_consumer_supplies;
+ data->chg_reg_init_data.consumer_supplies =
+ pdata->consumer_supplies;
+ data->chg_reg_init_data.driver_data = data;
+ data->chg_reg_init_data.constraints.name = "data-charger";
+ data->chg_reg_init_data.constraints.min_uA = 0;
+ data->chg_reg_init_data.constraints.max_uA =
+ pdata->max_charge_current_ma * 1000;
+
+ data->chg_reg_init_data.constraints.ignore_current_constraint_init =
+ true;
+ data->chg_reg_init_data.constraints.valid_modes_mask =
+ REGULATOR_MODE_NORMAL |
+ REGULATOR_MODE_STANDBY;
+
+ data->chg_reg_init_data.constraints.valid_ops_mask =
+ REGULATOR_CHANGE_MODE |
+ REGULATOR_CHANGE_STATUS |
+ REGULATOR_CHANGE_CURRENT;
+
+ rconfig.dev = data->dev;
+ rconfig.of_node = NULL;
+ rconfig.init_data = &data->chg_reg_init_data;
+ rconfig.driver_data = data;
+ data->chg_rdev = devm_regulator_register(data->dev,
+ &data->chg_reg_desc, &rconfig);
+ if (IS_ERR(data->chg_rdev)) {
+ ret = PTR_ERR(data->chg_rdev);
+ dev_err(data->dev,
+ "vbus-charger regulator register failed %d\n", ret);
+ }
+
+ return ret;
+}
+
+/**
+ * htc_battery_bq2419x_notify - handle events from the low-level charger
+ * @event: on HTC_BATTERY_BQ2419X_SAFETY_TIMER_TIMEOUT the charger state is
+ * re-read and, if charging has stopped, battery status monitoring and the
+ * charging wake lock are released.
+ */
+void htc_battery_bq2419x_notify(enum htc_battery_bq2419x_notify_event event)
+{
+ unsigned int charger_state = 0;
+ int ret;
+
+ if (!htc_battery_data || !htc_battery_data->charger_presense)
+ return;
+
+ if (event == HTC_BATTERY_BQ2419X_SAFETY_TIMER_TIMEOUT) {
+ mutex_lock(&htc_battery_data->mutex);
+ htc_battery_data->safety_timeout_happen = true;
+ if (htc_battery_data->ops &&
+ htc_battery_data->ops->get_charger_state) {
+ ret = htc_battery_data->ops->get_charger_state(
+ &charger_state,
+ htc_battery_data->ops_data);
+ if (ret < 0)
+ dev_err(htc_battery_data->dev,
+ "charger state get failed: %d\n",
+ ret);
+ }
+ if ((charger_state & HTC_BATTERY_BQ2419X_CHARGING)
+ != HTC_BATTERY_BQ2419X_CHARGING) {
+ htc_battery_data->chg_status = BATTERY_DISCHARGING;
+
+ battery_charger_batt_status_stop_monitoring(
+ htc_battery_data->bc_dev);
+ battery_charging_status_update(htc_battery_data->bc_dev,
+ htc_battery_data->chg_status);
+ power_supply_changed(&htc_battery_data->charger);
+
+ if (htc_battery_data->disable_suspend_during_charging
+ && htc_battery_data->cable_connected
+ && htc_battery_data->in_current_limit >
+ BQ2419X_SLOW_CURRENT_MA)
+ battery_charger_release_wake_lock(
+ htc_battery_data->bc_dev);
+ }
+ mutex_unlock(&htc_battery_data->mutex);
+ }
+}
+EXPORT_SYMBOL_GPL(htc_battery_bq2419x_notify);
+
+/**
+ * htc_battery_bq2419x_charger_register - register low-level charger callbacks
+ * @ops: charger hardware operations; the enable, fast-charge current,
+ * charge voltage, input current and charger state callbacks are mandatory
+ * @data: opaque cookie passed back to every callback
+ */
+int htc_battery_bq2419x_charger_register(struct htc_battery_bq2419x_ops *ops,
+ void *data)
+{
+ int ret;
+
+ if (!htc_battery_data || !htc_battery_data->charger_presense)
+ return -ENODEV;
+
+ if (!ops || !ops->set_charger_enable || !ops->set_fastcharge_current ||
+ !ops->set_charge_voltage || !ops->set_input_current ||
+ !ops->get_charger_state) {
+ dev_err(htc_battery_data->dev,
+ "Charger operation registration failed!\n");
+ return -EINVAL;
+ }
+
+ mutex_lock(&htc_battery_data->mutex);
+ htc_battery_data->ops = ops;
+ htc_battery_data->ops_data = data;
+
+ ret = htc_battery_bq2419x_charger_init(htc_battery_data);
+ if (ret < 0)
+ dev_err(htc_battery_data->dev, "Charger init failed: %d\n",
+ ret);
+
+ ret = htc_battery_bq2419x_charger_enable(htc_battery_data, 0, true);
+ if (ret < 0)
+ dev_err(htc_battery_data->dev, "Charger enable failed: %d\n",
+ ret);
+ mutex_unlock(&htc_battery_data->mutex);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(htc_battery_bq2419x_charger_register);
+
+int htc_battery_bq2419x_charger_unregister(void *data)
+{
+ int ret = 0;
+
+ if (!htc_battery_data || !htc_battery_data->charger_presense)
+ return -ENODEV;
+
+ mutex_lock(&htc_battery_data->mutex);
+ if (htc_battery_data->ops_data != data) {
+ ret = -EINVAL;
+ goto error;
+ }
+ htc_battery_data->is_recheck_charge = false;
+ battery_charging_restart_cancel(htc_battery_data->bc_dev);
+ cancel_delayed_work_sync(&htc_battery_data->aicl_wq);
+ htc_battery_data->ops = NULL;
+ htc_battery_data->ops_data = NULL;
+error:
+ mutex_unlock(&htc_battery_data->mutex);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(htc_battery_bq2419x_charger_unregister);
+
+static int htc_battery_bq2419x_charger_get_status(
+ struct battery_charger_dev *bc_dev)
+{
+ struct htc_battery_bq2419x_data *data =
+ battery_charger_get_drvdata(bc_dev);
+
+ if (data)
+ return data->chg_status;
+
+ return BATTERY_DISCHARGING;
+}
+
+static int htc_battery_bq2419x_charger_thermal_configure_no_thermister(
+ struct battery_charger_dev *bc_dev,
+ int temp, bool enable_charger, bool enable_charg_half_current,
+ int battery_voltage)
+{
+ struct htc_battery_bq2419x_data *data =
+ battery_charger_get_drvdata(bc_dev);
+ int ret = 0;
+
+ if (!data)
+ return -EINVAL;
+
+ mutex_lock(&data->mutex);
+ wake_lock(&data->charge_change_wake_lock);
+ if (!data->cable_connected)
+ goto done;
+
+ if (data->last_temp == temp && data->charging_state != STATE_INIT)
+ goto done;
+
+ data->last_temp = temp;
+
+ dev_dbg(data->dev, "Battery temp %d\n", temp);
+ if (enable_charger) {
+ if (battery_voltage > 0
+ && battery_voltage <= data->chg_voltage_control
+ && battery_voltage != data->last_chg_voltage) {
+ /* Set charge voltage */
+ if (data->ops && data->ops->set_charge_voltage) {
+ ret = data->ops->set_charge_voltage(
+ battery_voltage,
+ data->ops_data);
+ if (ret < 0)
+ goto error;
+ data->last_chg_voltage = battery_voltage;
+ }
+ }
+
+ if (data->charging_state == DISABLED) {
+ ret = htc_battery_bq2419x_charger_enable(data,
+ CHG_DISABLE_REASON_THERMAL, true);
+ if (ret < 0)
+ goto error;
+ }
+
+ if (!enable_charg_half_current &&
+ data->charging_state != ENABLED_FULL_IBAT) {
+ htc_battery_bq2419x_full_current_enable(data);
+ battery_charging_status_update(data->bc_dev,
+ BATTERY_CHARGING);
+ } else if (enable_charg_half_current &&
+ data->charging_state != ENABLED_HALF_IBAT) {
+ htc_battery_bq2419x_half_current_enable(data);
+ battery_charging_status_update(data->bc_dev,
+ BATTERY_CHARGING);
+ }
+ } else {
+ if (data->charging_state != DISABLED) {
+ ret = htc_battery_bq2419x_charger_enable(data,
+ CHG_DISABLE_REASON_THERMAL, false);
+
+ if (ret < 0)
+ goto error;
+ data->charging_state = DISABLED;
+ battery_charging_status_update(data->bc_dev,
+ BATTERY_DISCHARGING);
+ }
+ }
+
+error:
+done:
+ wake_unlock(&data->charge_change_wake_lock);
+ mutex_unlock(&data->mutex);
+ return ret;
+}
+
+static int htc_battery_bq2419x_charging_restart(
+ struct battery_charger_dev *bc_dev)
+{
+ struct htc_battery_bq2419x_data *data =
+ battery_charger_get_drvdata(bc_dev);
+ int ret = 0;
+
+ if (!data)
+ return -EINVAL;
+
+ mutex_lock(&data->mutex);
+ wake_lock(&data->charge_change_wake_lock);
+ if (!data->is_recheck_charge || data->safety_timeout_happen)
+ goto done;
+
+ dev_info(data->dev, "Restarting the charging\n");
+ ret = __htc_battery_bq2419x_set_charging_current(data,
+ data->last_charging_current);
+ if (ret < 0)
+ dev_err(data->dev,
+ "Restarting of charging failed: %d\n", ret);
+
+done:
+ wake_unlock(&data->charge_change_wake_lock);
+ mutex_unlock(&data->mutex);
+ return ret;
+}
+
+static int htc_battery_bq2419x_charger_charging_full_configure(
+ struct battery_charger_dev *bc_dev,
+ bool charge_full_done, bool charge_full_stop)
+{
+ struct htc_battery_bq2419x_data *data =
+ battery_charger_get_drvdata(bc_dev);
+
+ if (!data)
+ return -EINVAL;
+
+ mutex_lock(&data->mutex);
+ wake_lock(&data->charge_change_wake_lock);
+
+ if (data->battery_unknown)
+ goto done;
+
+ if (!data->cable_connected)
+ goto done;
+
+ if (charge_full_done != data->chg_full_done) {
+ data->chg_full_done = charge_full_done;
+ if (charge_full_done) {
+ if (data->last_chg_voltage ==
+ data->chg_voltage_control) {
+ dev_info(data->dev, "Charging completed\n");
+ data->chg_status = BATTERY_CHARGING_DONE;
+ battery_charging_status_update(data->bc_dev,
+ data->chg_status);
+ power_supply_changed(&data->charger);
+ } else
+ dev_info(data->dev, "OTP charging completed\n");
+ } else {
+ data->chg_status = BATTERY_CHARGING;
+ battery_charging_status_update(data->bc_dev,
+ data->chg_status);
+ power_supply_changed(&data->charger);
+ }
+ }
+
+ if (charge_full_stop != data->chg_full_stop) {
+ data->chg_full_stop = charge_full_stop;
+ if (charge_full_stop) {
+ htc_battery_bq2419x_charger_enable(data,
+ CHG_DISABLE_REASON_CHG_FULL_STOP,
+ false);
+ if (data->disable_suspend_during_charging
+ && data->in_current_limit >
+ BQ2419X_SLOW_CURRENT_MA)
+ battery_charger_release_wake_lock(
+ data->bc_dev);
+ } else {
+ htc_battery_bq2419x_charger_enable(data,
+ CHG_DISABLE_REASON_CHG_FULL_STOP,
+ true);
+ if (data->disable_suspend_during_charging &&
+ data->in_current_limit >
+ BQ2419X_SLOW_CURRENT_MA)
+ battery_charger_acquire_wake_lock(
+ data->bc_dev);
+ }
+ }
+done:
+ wake_unlock(&data->charge_change_wake_lock);
+ mutex_unlock(&data->mutex);
+ return 0;
+}
+
+static int htc_battery_bq2419x_input_control(
+ struct battery_charger_dev *bc_dev, int voltage_min)
+{
+ struct htc_battery_bq2419x_data *data =
+ battery_charger_get_drvdata(bc_dev);
+ unsigned int val;
+ int ret = 0;
+ int now_input, target_input, vbus;
+ ktime_t timeout, cur_boottime;
+
+ if (!data)
+ return -EINVAL;
+
+ mutex_lock(&data->mutex);
+ wake_lock(&data->charge_change_wake_lock);
+ if (!data->ops || !data->ops->set_dpm_input_voltage) {
+ ret = -EINVAL;
+ goto error;
+ }
+
+ if (data->battery_unknown)
+ goto done;
+
+ if (!data->cable_connected)
+ goto done;
+
+ if (data->aicl_current_phase == AICL_DETECT ||
+ data->aicl_current_phase == AICL_DETECT_INIT)
+ goto done;
+
+ /* input source checking */
+ if (voltage_min != data->last_input_src) {
+ ret = data->ops->set_dpm_input_voltage(voltage_min,
+ data->ops_data);
+ if (ret < 0)
+ dev_err(data->dev,
+ "set input voltage failed: %d\n",
+ ret);
+ else
+ data->last_input_src = voltage_min;
+ }
+
+ /* Check input current limit if reset */
+ if (!data->input_reset_check_disable && data->ops->get_input_current) {
+ if (data->aicl_target_input > 0)
+ target_input = data->aicl_target_input;
+ else
+ target_input = data->in_current_limit;
+
+ if (data->input_adjust)
+ now_input = INPUT_ADJUST_CURRENT_MA;
+ else if (data->aicl_input_current > 0)
+ now_input = data->aicl_input_current;
+ else
+ now_input = data->in_current_limit;
+
+ cur_boottime = ktime_get_boottime();
+ if (data->input_adjust && data->input_adjust_retry_count > 0 &&
+ target_input > now_input &&
+ ktime_compare(cur_boottime,
+ data->input_adjust_retry_time) >= 0) {
+ data->input_adjust = false;
+ data->input_adjust_retry_count--;
+ dev_info(data->dev,
+ "retry aicl by current %d\n",
+ target_input);
+ htc_battery_bq2419x_aicl_configure_current(data,
+ target_input);
+ goto done;
+ }
+
+ vbus = bq2419x_get_vbus(data);
+ ret = data->ops->get_input_current(&val, data->ops_data);
+
+ dev_dbg(data->dev, "vbus = %d\n", vbus);
+ if ((!ret && val != now_input) ||
+ (vbus > 0 && vbus < INPUT_ADJUST_VBUS_CHECK_MA)) {
+ if (now_input > INPUT_ADJUST_CURRENT_MA) {
+ data->input_adjust = true;
+ timeout = ktime_set(INPUT_ADJUST_RETRY_DELAY_S,
+ 0);
+ data->input_adjust_retry_time =
+ ktime_add(cur_boottime, timeout);
+ now_input = INPUT_ADJUST_CURRENT_MA;
+ dev_info(data->dev,
+ "adjust input current to %d due to input unstable\n",
+ now_input);
+ }
+ htc_battery_bq2419x_configure_charging_current(data,
+ now_input);
+ }
+ }
+
+error:
+done:
+ wake_unlock(&data->charge_change_wake_lock);
+ mutex_unlock(&data->mutex);
+
+ return ret;
+}
+
+static int htc_battery_bq2419x_unknown_battery_handle(
+ struct battery_charger_dev *bc_dev)
+{
+ int ret = 0;
+ struct htc_battery_bq2419x_data *data =
+ battery_charger_get_drvdata(bc_dev);
+
+ if (!data)
+ return -EINVAL;
+
+ mutex_lock(&data->mutex);
+ wake_lock(&data->charge_change_wake_lock);
+ data->battery_unknown = true;
+ data->chg_status = BATTERY_UNKNOWN;
+ data->charging_state = STATE_INIT;
+
+ ret = htc_battery_bq2419x_charger_enable(data,
+ CHG_DISABLE_REASON_UNKNOWN_BATTERY, false);
+ if (ret < 0) {
+ dev_err(data->dev,
+ "charger enable failed %d\n", ret);
+ goto error;
+ }
+
+ battery_charger_batt_status_stop_monitoring(data->bc_dev);
+
+ battery_charging_status_update(data->bc_dev, data->chg_status);
+ power_supply_changed(&data->charger);
+
+ if (data->disable_suspend_during_charging
+ && data->cable_connected
+ && data->in_current_limit > BQ2419X_SLOW_CURRENT_MA)
+ battery_charger_release_wake_lock(data->bc_dev);
+error:
+ wake_unlock(&data->charge_change_wake_lock);
+ mutex_unlock(&data->mutex);
+ return ret;
+}
+
+static struct battery_charging_ops htc_battery_bq2419x_charger_bci_ops = {
+ .get_charging_status = htc_battery_bq2419x_charger_get_status,
+ .restart_charging = htc_battery_bq2419x_charging_restart,
+ .thermal_configure =
+ htc_battery_bq2419x_charger_thermal_configure_no_thermister,
+ .charging_full_configure =
+ htc_battery_bq2419x_charger_charging_full_configure,
+ .input_voltage_configure = htc_battery_bq2419x_input_control,
+ .unknown_battery_handle = htc_battery_bq2419x_unknown_battery_handle,
+};
+
+static struct battery_charger_info htc_battery_bq2419x_charger_bci = {
+ .cell_id = 0,
+ .bc_ops = &htc_battery_bq2419x_charger_bci_ops,
+};
+
+static ssize_t bq2419x_show_input_reset_check_disable(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct htc_battery_bq2419x_data *data;
+
+ data = dev_get_drvdata(dev);
+
+ return snprintf(buf, MAX_STR_PRINT, "%d\n",
+ data->input_reset_check_disable);
+}
+
+static ssize_t bq2419x_set_input_reset_check_disable(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ unsigned int disable;
+ struct htc_battery_bq2419x_data *data;
+
+ data = dev_get_drvdata(dev);
+ if (sscanf(buf, "%u", &disable) != 1)
+ return -EINVAL;
+
+ data->input_reset_check_disable = !!disable;
+
+ return count;
+}
+
+static DEVICE_ATTR(input_reset_check_disable, (S_IRUGO | S_IWUSR | S_IWGRP),
+ bq2419x_show_input_reset_check_disable,
+ bq2419x_set_input_reset_check_disable);
+
+static struct htc_battery_bq2419x_platform_data
+ *htc_battery_bq2419x_dt_parse(struct platform_device *pdev)
+{
+ struct device_node *np = pdev->dev.of_node;
+ struct htc_battery_bq2419x_platform_data *pdata;
+ int ret;
+ int chg_restart_time;
+ int suspend_polling_time;
+ int temp_polling_time;
+ int thermal_temp;
+ unsigned int thermal_volt;
+ unsigned int otp_output_current;
+ unsigned int unknown_batt_id_min;
+ unsigned int input_switch_val;
+ unsigned int full_thr_val;
+ struct regulator_init_data *batt_init_data;
+ u32 pval;
+
+ pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL);
+ if (!pdata)
+ return ERR_PTR(-ENOMEM);
+
+ batt_init_data = of_get_regulator_init_data(&pdev->dev, np);
+ if (!batt_init_data)
+ return ERR_PTR(-EINVAL);
+
+ ret = of_property_read_u32(np,
+ "input-voltage-limit-millivolt", &pval);
+ if (!ret)
+ pdata->input_voltage_limit_mv = pval;
+
+ ret = of_property_read_u32(np,
+ "fast-charge-current-limit-milliamp", &pval);
+ if (!ret)
+ pdata->fast_charge_current_limit_ma = pval;
+
+ ret = of_property_read_u32(np,
+ "pre-charge-current-limit-milliamp", &pval);
+ if (!ret)
+ pdata->pre_charge_current_limit_ma = pval;
+
+ ret = of_property_read_u32(np,
+ "charge-term-current-limit-milliamp", &pval);
+ if (!ret)
+ pdata->termination_current_limit_ma = pval;
+
+ ret = of_property_read_u32(np,
+ "charge-voltage-limit-millivolt", &pval);
+ if (!ret)
+ pdata->charge_voltage_limit_mv = pval;
+
+ pdata->disable_suspend_during_charging = of_property_read_bool(np,
+ "disable-suspend-during-charging");
+
+ ret = of_property_read_u32(np, "auto-recharge-time",
+ &chg_restart_time);
+ if (!ret)
+ pdata->chg_restart_time = chg_restart_time;
+
+ ret = of_property_read_u32(np, "charge-suspend-polling-time-sec",
+ &suspend_polling_time);
+ if (!ret)
+ pdata->charge_suspend_polling_time_sec = suspend_polling_time;
+
+ ret = of_property_read_u32(np, "temp-polling-time-sec",
+ &temp_polling_time);
+ if (!ret)
+ pdata->temp_polling_time_sec = temp_polling_time;
+
+ pdata->consumer_supplies = batt_init_data->consumer_supplies;
+ pdata->num_consumer_supplies = batt_init_data->num_consumer_supplies;
+ pdata->max_charge_current_ma =
+ batt_init_data->constraints.max_uA / 1000;
+
+ ret = of_property_read_u32(np, "thermal-temperature-hot-deciC",
+ &thermal_temp);
+ if (!ret)
+ pdata->thermal_prop.temp_hot_dc = thermal_temp;
+
+ ret = of_property_read_u32(np, "thermal-temperature-cold-deciC",
+ &thermal_temp);
+ if (!ret)
+ pdata->thermal_prop.temp_cold_dc = thermal_temp;
+
+ ret = of_property_read_u32(np, "thermal-temperature-warm-deciC",
+ &thermal_temp);
+ if (!ret)
+ pdata->thermal_prop.temp_warm_dc = thermal_temp;
+
+ ret = of_property_read_u32(np, "thermal-temperature-cool-deciC",
+ &thermal_temp);
+ if (!ret)
+ pdata->thermal_prop.temp_cool_dc = thermal_temp;
+
+ ret = of_property_read_u32(np, "thermal-temperature-hysteresis-deciC",
+ &thermal_temp);
+ if (!ret)
+ pdata->thermal_prop.temp_hysteresis_dc = thermal_temp;
+
+ ret = of_property_read_u32(np, "thermal-warm-voltage-millivolt",
+ &thermal_volt);
+ if (!ret)
+ pdata->thermal_prop.warm_voltage_mv = thermal_volt;
+
+ ret = of_property_read_u32(np, "thermal-cool-voltage-millivolt",
+ &thermal_volt);
+ if (!ret)
+ pdata->thermal_prop.cool_voltage_mv = thermal_volt;
+
+ pdata->thermal_prop.disable_warm_current_half =
+ of_property_read_bool(np, "thermal-disable-warm-current-half");
+
+ pdata->thermal_prop.disable_cool_current_half =
+ of_property_read_bool(np, "thermal-disable-cool-current-half");
+
+ ret = of_property_read_u32(np,
+ "thermal-overtemp-output-current-milliamp",
+ &otp_output_current);
+ if (!ret)
+ pdata->thermal_prop.otp_output_current_ma = otp_output_current;
+
+ ret = of_property_read_u32(np,
+ "charge-full-done-voltage-min-millivolt",
+ &full_thr_val);
+ if (!ret)
+ pdata->full_thr.chg_done_voltage_min_mv = full_thr_val;
+
+ ret = of_property_read_u32(np,
+ "charge-full-done-current-min-milliamp",
+ &full_thr_val);
+ if (!ret)
+ pdata->full_thr.chg_done_current_min_ma = full_thr_val;
+
+ ret = of_property_read_u32(np,
+ "charge-full-done-low-current-min-milliamp",
+ &full_thr_val);
+ if (!ret)
+ pdata->full_thr.chg_done_low_current_min_ma = full_thr_val;
+
+ ret = of_property_read_u32(np,
+ "charge-full-recharge-voltage-min-millivolt",
+ &full_thr_val);
+ if (!ret)
+ pdata->full_thr.recharge_voltage_min_mv = full_thr_val;
+
+ ret = of_property_read_u32(np,
+ "input-voltage-min-high-battery-millivolt",
+ &input_switch_val);
+ if (!ret)
+ pdata->input_switch.input_vmin_high_mv = input_switch_val;
+
+ ret = of_property_read_u32(np,
+ "input-voltage-min-low-battery-millivolt",
+ &input_switch_val);
+ if (!ret)
+ pdata->input_switch.input_vmin_low_mv = input_switch_val;
+
+ ret = of_property_read_u32(np, "input-voltage-switch-millivolt",
+ &input_switch_val);
+ if (!ret)
+ pdata->input_switch.input_switch_threshold_mv =
+ input_switch_val;
+
+ pdata->batt_id_channel_name =
+ of_get_property(np, "battery-id-channel-name", NULL);
+
+ ret = of_property_read_u32(np, "unknown-battery-id-minimum",
+ &unknown_batt_id_min);
+ if (!ret)
+ pdata->unknown_batt_id_min = unknown_batt_id_min;
+
+ pdata->gauge_psy_name =
+ of_get_property(np, "gauge-power-supply-name", NULL);
+
+ pdata->vbus_channel_name =
+ of_get_property(np, "vbus-channel-name", NULL);
+
+ ret = of_property_read_u32(np,
+ "vbus-channel-max-voltage-mv", &pval);
+ if (!ret)
+ pdata->vbus_channel_max_voltage_mv = pval;
+
+ ret = of_property_read_u32(np,
+ "vbus-channel-max-adc", &pval);
+ if (!ret)
+ pdata->vbus_channel_max_adc = pval;
+
+ return pdata;
+}
+
+static inline void htc_battery_bq2419x_battery_info_init(
+ struct htc_battery_bq2419x_platform_data *pdata)
+{
+ htc_battery_bq2419x_charger_bci.polling_time_sec =
+ pdata->temp_polling_time_sec;
+
+ htc_battery_bq2419x_charger_bci.thermal_prop.temp_hot_dc =
+ pdata->thermal_prop.temp_hot_dc;
+ htc_battery_bq2419x_charger_bci.thermal_prop.temp_cold_dc =
+ pdata->thermal_prop.temp_cold_dc;
+ htc_battery_bq2419x_charger_bci.thermal_prop.temp_warm_dc =
+ pdata->thermal_prop.temp_warm_dc;
+ htc_battery_bq2419x_charger_bci.thermal_prop.temp_cool_dc =
+ pdata->thermal_prop.temp_cool_dc;
+ htc_battery_bq2419x_charger_bci.thermal_prop.temp_hysteresis_dc =
+ pdata->thermal_prop.temp_hysteresis_dc;
+ htc_battery_bq2419x_charger_bci.thermal_prop.regulation_voltage_mv =
+ pdata->charge_voltage_limit_mv;
+ htc_battery_bq2419x_charger_bci.thermal_prop.warm_voltage_mv =
+ pdata->thermal_prop.warm_voltage_mv;
+ htc_battery_bq2419x_charger_bci.thermal_prop.cool_voltage_mv =
+ pdata->thermal_prop.cool_voltage_mv;
+ htc_battery_bq2419x_charger_bci.thermal_prop.disable_warm_current_half =
+ pdata->thermal_prop.disable_warm_current_half;
+ htc_battery_bq2419x_charger_bci.thermal_prop.disable_cool_current_half =
+ pdata->thermal_prop.disable_cool_current_half;
+
+ htc_battery_bq2419x_charger_bci.full_thr.chg_done_voltage_min_mv =
+ pdata->full_thr.chg_done_voltage_min_mv ?:
+ (pdata->charge_voltage_limit_mv ?
+ pdata->charge_voltage_limit_mv - 102 : 0);
+ htc_battery_bq2419x_charger_bci.full_thr.chg_done_current_min_ma =
+ pdata->full_thr.chg_done_current_min_ma;
+ htc_battery_bq2419x_charger_bci.full_thr.chg_done_low_current_min_ma =
+ pdata->full_thr.chg_done_low_current_min_ma;
+ htc_battery_bq2419x_charger_bci.full_thr.recharge_voltage_min_mv =
+ pdata->full_thr.recharge_voltage_min_mv ?:
+ (pdata->charge_voltage_limit_mv ?
+ pdata->charge_voltage_limit_mv - 48 : 0);
+
+ htc_battery_bq2419x_charger_bci.input_switch.input_vmin_high_mv =
+ pdata->input_switch.input_vmin_high_mv;
+ htc_battery_bq2419x_charger_bci.input_switch.input_vmin_low_mv =
+ pdata->input_switch.input_vmin_low_mv;
+ htc_battery_bq2419x_charger_bci.input_switch.input_switch_threshold_mv =
+ pdata->input_switch.input_switch_threshold_mv;
+
+ htc_battery_bq2419x_charger_bci.batt_id_channel_name =
+ pdata->batt_id_channel_name;
+ htc_battery_bq2419x_charger_bci.unknown_batt_id_min =
+ pdata->unknown_batt_id_min;
+
+ htc_battery_bq2419x_charger_bci.gauge_psy_name =
+ pdata->gauge_psy_name;
+
+ htc_battery_bq2419x_charger_bci.enable_batt_status_monitor = true;
+}
+
+static enum power_supply_property charger_bq2419x_properties[] = {
+ POWER_SUPPLY_PROP_STATUS,
+};
+
+static int charger_bq2419x_get_property(struct power_supply *psy,
+ enum power_supply_property psp,
+ union power_supply_propval *val)
+{
+ struct htc_battery_bq2419x_data *data = container_of(psy,
+ struct htc_battery_bq2419x_data, charger);
+
+ if (psp == POWER_SUPPLY_PROP_STATUS) {
+ switch (data->chg_status) {
+ case BATTERY_CHARGING:
+ val->intval = POWER_SUPPLY_STATUS_CHARGING;
+ break;
+ case BATTERY_CHARGING_DONE:
+ val->intval = POWER_SUPPLY_STATUS_FULL;
+ break;
+ case BATTERY_DISCHARGING:
+ case BATTERY_UNKNOWN:
+ default:
+ if (!data->cable_connected)
+ val->intval = POWER_SUPPLY_STATUS_DISCHARGING;
+ else
+ val->intval = POWER_SUPPLY_STATUS_NOT_CHARGING;
+ }
+ } else
+ return -EINVAL;
+
+ return 0;
+}
+
+static int htc_battery_bq2419x_probe(struct platform_device *pdev)
+{
+ struct htc_battery_bq2419x_data *data;
+ struct htc_battery_bq2419x_platform_data *pdata = NULL;
+
+ int ret = 0;
+
+ if (pdev->dev.platform_data)
+ pdata = pdev->dev.platform_data;
+
+ if (!pdata && pdev->dev.of_node) {
+ pdata = htc_battery_bq2419x_dt_parse(pdev);
+ if (IS_ERR(pdata)) {
+ ret = PTR_ERR(pdata);
+ dev_err(&pdev->dev, "Parsing of node failed, %d\n",
+ ret);
+ return ret;
+ }
+ }
+
+ if (!pdata) {
+ dev_err(&pdev->dev, "No platform data\n");
+ return -EINVAL;
+ }
+
+ data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
+ if (!data) {
+ dev_err(&pdev->dev, "Memory allocation failed\n");
+ return -ENOMEM;
+ }
+
+ data->pdata = pdata;
+
+ data->dev = &pdev->dev;
+
+ mutex_init(&data->mutex);
+ wake_lock_init(&data->charge_change_wake_lock, WAKE_LOCK_SUSPEND,
+ "charge-change-lock");
+ wake_lock_init(&data->aicl_wake_lock, WAKE_LOCK_SUSPEND,
+ "charge-aicl-lock");
+ INIT_DELAYED_WORK(&data->aicl_wq, htc_battery_bq2419x_aicl_wq);
+
+ dev_set_drvdata(&pdev->dev, data);
+
+ data->chg_restart_time = pdata->chg_restart_time;
+ data->charger_presense = true;
+ data->battery_unknown = false;
+ data->last_temp = -1000;
+ data->disable_suspend_during_charging =
+ pdata->disable_suspend_during_charging;
+ data->charge_suspend_polling_time =
+ pdata->charge_suspend_polling_time_sec;
+ data->charge_polling_time = pdata->temp_polling_time_sec;
+ data->vbus_channel_name = pdata->vbus_channel_name;
+ data->vbus_channel_max_voltage = pdata->vbus_channel_max_voltage_mv;
+ data->vbus_channel_max_adc = pdata->vbus_channel_max_adc;
+
+ htc_battery_bq2419x_process_plat_data(data, pdata);
+
+ data->last_chg_voltage = data->chg_voltage_control;
+ data->last_input_src = data->input_src;
+
+ ret = htc_battery_bq2419x_init_charger_regulator(data, pdata);
+ if (ret < 0) {
+ dev_err(&pdev->dev,
+ "Charger regulator init failed %d\n", ret);
+ goto scrub_mutex;
+ }
+
+ htc_battery_bq2419x_battery_info_init(pdata);
+
+ data->bc_dev = battery_charger_register(data->dev,
+ &htc_battery_bq2419x_charger_bci, data);
+ if (IS_ERR(data->bc_dev)) {
+ ret = PTR_ERR(data->bc_dev);
+ dev_err(data->dev, "battery charger register failed: %d\n",
+ ret);
+ data->bc_dev = NULL;
+ goto scrub_mutex;
+ }
+
+ data->charger.name = "charger";
+ data->charger.type = POWER_SUPPLY_TYPE_BATTERY;
+ data->charger.get_property = charger_bq2419x_get_property;
+ data->charger.properties = charger_bq2419x_properties;
+ data->charger.num_properties = ARRAY_SIZE(charger_bq2419x_properties);
+ ret = power_supply_register(data->dev, &data->charger);
+ if (ret) {
+ battery_charger_unregister(data->bc_dev);
+ dev_err(data->dev, "failed: power supply register\n");
+ goto scrub_mutex;
+ }
+
+ data->input_reset_check_disable = false;
+ ret = sysfs_create_file(&data->dev->kobj,
+ &dev_attr_input_reset_check_disable.attr);
+ if (ret < 0)
+ dev_err(data->dev, "error creating sysfs file %d\n", ret);
+
+ htc_battery_data = data;
+
+ return 0;
+scrub_mutex:
+ data->charger_presense = false;
+ htc_battery_bq2419x_charger_enable(data,
+ CHG_DISABLE_REASON_CHARGER_NOT_INIT, false);
+ wake_lock_destroy(&data->charge_change_wake_lock);
+ wake_lock_destroy(&data->aicl_wake_lock);
+ mutex_destroy(&data->mutex);
+ return ret;
+}
+
+static int htc_battery_bq2419x_remove(struct platform_device *pdev)
+{
+ struct htc_battery_bq2419x_data *data = dev_get_drvdata(&pdev->dev);
+
+ if (data->charger_presense) {
+ if (data->bc_dev) {
+ battery_charger_unregister(data->bc_dev);
+ data->bc_dev = NULL;
+ }
+ cancel_delayed_work_sync(&data->aicl_wq);
+ wake_lock_destroy(&data->charge_change_wake_lock);
+ wake_lock_destroy(&data->aicl_wake_lock);
+ power_supply_unregister(&data->charger);
+ }
+ mutex_destroy(&data->mutex);
+ return 0;
+}
+
+static void htc_battery_bq2419x_shutdown(struct platform_device *pdev)
+{
+ struct htc_battery_bq2419x_data *data = dev_get_drvdata(&pdev->dev);
+
+ if (!data->charger_presense)
+ return;
+
+ battery_charging_system_power_on_usb_event(data->bc_dev);
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int htc_battery_bq2419x_suspend(struct device *dev)
+{
+ struct htc_battery_bq2419x_data *data = dev_get_drvdata(dev);
+ int next_wakeup = 0;
+
+ battery_charging_restart_cancel(data->bc_dev);
+
+ if (!data->charger_presense || data->battery_unknown)
+ return 0;
+
+ if (!data->cable_connected)
+ goto end;
+
+ if (data->chg_full_stop)
+ next_wakeup = data->charge_suspend_polling_time;
+ else
+ next_wakeup = data->charge_polling_time;
+
+ battery_charging_wakeup(data->bc_dev, next_wakeup);
+end:
+ if (next_wakeup)
+ dev_dbg(dev, "System-charger will resume after %d sec\n",
+ next_wakeup);
+ else
+ dev_dbg(dev, "System-charger has no scheduled resume time\n");
+
+ return 0;
+}
+
+static int htc_battery_bq2419x_resume(struct device *dev)
+{
+ struct htc_battery_bq2419x_data *data = dev_get_drvdata(dev);
+ ktime_t cur_boottime;
+ s64 restart_time_s;
+ s64 batt_status_check_pass_time;
+ int ret;
+
+ if (!data->charger_presense)
+ return 0;
+
+ if (data->is_recheck_charge) {
+ cur_boottime = ktime_get_boottime();
+ if (ktime_compare(cur_boottime, data->recheck_charge_time) >= 0)
+ restart_time_s = 0;
+ else {
+ restart_time_s = ktime_us_delta(
+ data->recheck_charge_time,
+ cur_boottime);
+ do_div(restart_time_s, 1000000);
+ }
+
+ dev_info(dev, "restart charging time is %lld\n",
+ restart_time_s);
+ battery_charging_restart(data->bc_dev, (int) restart_time_s);
+ }
+
+ if (data->cable_connected && !data->battery_unknown) {
+ ret = battery_charger_get_batt_status_no_update_time_ms(
+ data->bc_dev,
+ &batt_status_check_pass_time);
+ if (!ret) {
+ if (batt_status_check_pass_time >
+ data->charge_polling_time * 1000)
+ battery_charger_batt_status_force_check(
+ data->bc_dev);
+ }
+ }
+
+ return 0;
+}
+#endif
+
+static const struct dev_pm_ops htc_battery_bq2419x_pm_ops = {
+ SET_SYSTEM_SLEEP_PM_OPS(htc_battery_bq2419x_suspend,
+ htc_battery_bq2419x_resume)
+};
+
+static const struct of_device_id htc_battery_bq2419x_dt_match[] = {
+ { .compatible = "htc,bq2419x_battery" },
+ { },
+};
+MODULE_DEVICE_TABLE(of, htc_battery_bq2419x_dt_match);
+
+static struct platform_driver htc_battery_bq2419x_driver = {
+ .driver = {
+ .name = "htc_battery_bq2419x",
+ .of_match_table = of_match_ptr(htc_battery_bq2419x_dt_match),
+ .owner = THIS_MODULE,
+#ifdef CONFIG_PM_SLEEP
+ .pm = &htc_battery_bq2419x_pm_ops,
+#endif /* CONFIG_PM_SLEEP */
+ },
+ .probe = htc_battery_bq2419x_probe,
+ .remove = htc_battery_bq2419x_remove,
+ .shutdown = htc_battery_bq2419x_shutdown,
+};
+
+static int __init htc_battery_bq2419x_init(void)
+{
+ return platform_driver_register(&htc_battery_bq2419x_driver);
+}
+subsys_initcall(htc_battery_bq2419x_init);
+
+static void __exit htc_battery_bq2419x_exit(void)
+{
+ platform_driver_unregister(&htc_battery_bq2419x_driver);
+}
+module_exit(htc_battery_bq2419x_exit);
+
+MODULE_DESCRIPTION("HTC battery bq2419x module");
+MODULE_LICENSE("GPL");
diff --git a/drivers/power/max17050_gauge.c b/drivers/power/max17050_gauge.c
new file mode 100644
index 0000000..2eff268
--- /dev/null
+++ b/drivers/power/max17050_gauge.c
@@ -0,0 +1,964 @@
+/*
+ * max17050_gauge.c
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <asm/unaligned.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/mutex.h>
+#include <linux/err.h>
+#include <linux/i2c.h>
+#include <linux/delay.h>
+#include <linux/slab.h>
+#include <linux/htc_battery_max17050.h>
+#include <linux/debugfs.h>
+#include <linux/seq_file.h>
+#include <linux/kernel.h>
+#include <linux/power_supply.h>
+#include <linux/of.h>
+#include <linux/iio/consumer.h>
+#include <linux/iio/types.h>
+#include <linux/iio/iio.h>
+
+#define MAX17050_I2C_RETRY_TIMES (5)
+
+/* Fuel Gauge Maxim MAX17050 Register Definition */
+enum max17050_fg_register {
+ MAX17050_FG_STATUS = 0x00,
+ MAX17050_FG_VALRT_Th = 0x01,
+ MAX17050_FG_TALRT_Th = 0x02,
+ MAX17050_FG_SALRT_Th = 0x03,
+ MAX17050_FG_AtRate = 0x04,
+ MAX17050_FG_RepCap = 0x05,
+ MAX17050_FG_RepSOC = 0x06,
+ MAX17050_FG_Age = 0x07,
+ MAX17050_FG_TEMP = 0x08,
+ MAX17050_FG_VCELL = 0x09,
+ MAX17050_FG_Current = 0x0A,
+ MAX17050_FG_AvgCurrent = 0x0B,
+ MAX17050_FG_Qresidual = 0x0C,
+ MAX17050_FG_SOC = 0x0D,
+ MAX17050_FG_AvSOC = 0x0E,
+ MAX17050_FG_RemCap = 0x0F,
+ MAX17050_FG_FullCAP = 0x10,
+ MAX17050_FG_TTE = 0x11,
+ MAX17050_FG_QRtable00 = 0x12,
+ MAX17050_FG_FullSOCthr = 0x13,
+ MAX17050_FG_RSLOW = 0x14,
+ MAX17050_FG_RFAST = 0x15,
+ MAX17050_FG_AvgTA = 0x16,
+ MAX17050_FG_Cycles = 0x17,
+ MAX17050_FG_DesignCap = 0x18,
+ MAX17050_FG_AvgVCELL = 0x19,
+ MAX17050_FG_MinMaxTemp = 0x1A,
+ MAX17050_FG_MinMaxVolt = 0x1B,
+ MAX17050_FG_MinMaxCurr = 0x1C,
+ MAX17050_FG_CONFIG = 0x1D,
+ MAX17050_FG_ICHGTerm = 0x1E,
+ MAX17050_FG_AvCap = 0x1F,
+ MAX17050_FG_ManName = 0x20,
+ MAX17050_FG_DevName = 0x21,
+ MAX17050_FG_QRtable10 = 0x22,
+ MAX17050_FG_FullCAPNom = 0x23,
+ MAX17050_FG_TempNom = 0x24,
+ MAX17050_FG_TempLim = 0x25,
+ MAX17050_FG_AvgTA0 = 0x26,
+ MAX17050_FG_AIN = 0x27,
+ MAX17050_FG_LearnCFG = 0x28,
+ MAX17050_FG_SHFTCFG = 0x29,
+ MAX17050_FG_RelaxCFG = 0x2A,
+ MAX17050_FG_MiscCFG = 0x2B,
+ MAX17050_FG_TGAIN = 0x2C,
+ MAX17050_FG_TOFF = 0x2D,
+ MAX17050_FG_CGAIN = 0x2E,
+ MAX17050_FG_COFF = 0x2F,
+
+ MAX17050_FG_dV_acc = 0x30,
+ MAX17050_FG_I_acc = 0x31,
+ MAX17050_FG_QRtable20 = 0x32,
+ MAX17050_FG_MaskSOC = 0x33,
+ MAX17050_FG_CHG_CNFG_10 = 0x34,
+ MAX17050_FG_FullCAP0 = 0x35,
+ MAX17050_FG_Iavg_empty = 0x36,
+ MAX17050_FG_FCTC = 0x37,
+ MAX17050_FG_RCOMP0 = 0x38,
+ MAX17050_FG_TempCo = 0x39,
+ MAX17050_FG_V_empty = 0x3A,
+ MAX17050_FG_AvgCurrent0 = 0x3B,
+ MAX17050_FG_TaskPeriod = 0x3C,
+ MAX17050_FG_FSTAT = 0x3D,
+ MAX17050_FG_TIMER = 0x3E,
+ MAX17050_FG_SHDNTIMER = 0x3F,
+
+ MAX17050_FG_AvgCurrentL = 0x40,
+ MAX17050_FG_AvgTAL = 0x41,
+ MAX17050_FG_QRtable30 = 0x42,
+ MAX17050_FG_RepCapL = 0x43,
+ MAX17050_FG_AvgVCELL0 = 0x44,
+ MAX17050_FG_dQacc = 0x45,
+ MAX17050_FG_dp_acc = 0x46,
+ MAX17050_FG_RlxSOC = 0x47,
+ MAX17050_FG_VFSOC0 = 0x48,
+ MAX17050_FG_RemCapL = 0x49,
+ MAX17050_FG_VFRemCap = 0x4A,
+ MAX17050_FG_AvgVCELLL = 0x4B,
+ MAX17050_FG_QH0 = 0x4C,
+ MAX17050_FG_QH = 0x4D,
+ MAX17050_FG_QL = 0x4E,
+ MAX17050_FG_RemCapL0 = 0x4F,
+ MAX17050_FG_LOCK_I = 0x62,
+ MAX17050_FG_LOCK_II = 0x63,
+ MAX17050_FG_OCV = 0x80,
+ MAX17050_FG_VFOCV = 0xFB,
+ MAX17050_FG_VFSOC = 0xFF,
+};
+
+static int debugfs_regs[] = {
+ MAX17050_FG_STATUS, MAX17050_FG_Age, MAX17050_FG_Cycles,
+ MAX17050_FG_SHFTCFG, MAX17050_FG_VALRT_Th, MAX17050_FG_TALRT_Th,
+ MAX17050_FG_SALRT_Th, MAX17050_FG_AvgCurrent, MAX17050_FG_Current,
+ MAX17050_FG_MinMaxCurr, MAX17050_FG_VCELL, MAX17050_FG_AvgVCELL,
+ MAX17050_FG_MinMaxVolt, MAX17050_FG_TEMP, MAX17050_FG_AvgTA,
+ MAX17050_FG_MinMaxTemp, MAX17050_FG_AvSOC, MAX17050_FG_AvCap,
+};
+
+struct max17050_chip {
+ struct i2c_client *client;
+ struct max17050_platform_data *pdata;
+ int shutdown_complete;
+ struct mutex mutex;
+ struct dentry *dentry;
+ struct device *dev;
+ struct power_supply battery;
+ bool adjust_present;
+ bool is_low_temp;
+ int temp_normal2low_thr;
+ int temp_low2normal_thr;
+ unsigned int temp_normal_params[FLOUNDER_BATTERY_PARAMS_SIZE];
+ unsigned int temp_low_params[FLOUNDER_BATTERY_PARAMS_SIZE];
+};
+static struct max17050_chip *max17050_data;
+
+#define MAX17050_T_GAIN_OFF_NUM (5)
+struct max17050_t_gain_off_table {
+ int t[MAX17050_T_GAIN_OFF_NUM];
+ unsigned int tgain[MAX17050_T_GAIN_OFF_NUM];
+ unsigned int toff[MAX17050_T_GAIN_OFF_NUM];
+};
+
+static struct max17050_t_gain_off_table t_gain_off_lut = {
+ .t = { -200, -160, 1, 401, 501},
+ .tgain = { 0xDFB0, 0xDF90, 0xEAC0, 0xDD50, 0xDD30},
+ .toff = { 0x32A5, 0x3370, 0x21E2, 0x2A30, 0x2A5A},
+};
+
+static int max17050_write_word(struct i2c_client *client, int reg, u16 value)
+{
+ struct max17050_chip *chip = i2c_get_clientdata(client);
+ int retry;
+ uint8_t buf[3];
+
+ struct i2c_msg msg[] = {
+ {
+ .addr = client->addr,
+ .flags = 0,
+ .len = 3,
+ .buf = buf,
+ }
+ };
+
+ mutex_lock(&chip->mutex);
+ if (chip->shutdown_complete) {
+ mutex_unlock(&chip->mutex);
+ return -ENODEV;
+ }
+
+ buf[0] = reg & 0xFF;
+ buf[1] = value & 0xFF;
+ buf[2] = (value >> 8) & 0xFF;
+
+ for (retry = 0; retry < MAX17050_I2C_RETRY_TIMES; retry++) {
+ if (i2c_transfer(client->adapter, msg, 1) == 1)
+ break;
+ msleep(20);
+ }
+
+ if (retry == MAX17050_I2C_RETRY_TIMES) {
+ dev_err(&client->dev,
+ "%s(): Failed in writing register 0x%02x after retry %d times\n"
+ , __func__, reg, MAX17050_I2C_RETRY_TIMES);
+ mutex_unlock(&chip->mutex);
+ return -EIO;
+ }
+ mutex_unlock(&chip->mutex);
+
+ return 0;
+}
+
+static int max17050_read_word(struct i2c_client *client, int reg)
+{
+ struct max17050_chip *chip = i2c_get_clientdata(client);
+ int retry;
+ uint8_t vals[2], buf[1];
+
+ struct i2c_msg msg[] = {
+ {
+ .addr = client->addr,
+ .flags = 0,
+ .len = 1,
+ .buf = buf,
+ },
+ {
+ .addr = client->addr,
+ .flags = I2C_M_RD,
+ .len = 2,
+ .buf = vals,
+ }
+ };
+
+ mutex_lock(&chip->mutex);
+ if (chip->shutdown_complete) {
+ mutex_unlock(&chip->mutex);
+ return -ENODEV;
+ }
+
+ buf[0] = reg & 0xFF;
+
+ for (retry = 0; retry < MAX17050_I2C_RETRY_TIMES; retry++) {
+ if (i2c_transfer(client->adapter, msg, 2) == 2)
+ break;
+ msleep(20);
+ }
+
+ if (retry == MAX17050_I2C_RETRY_TIMES) {
+ dev_err(&client->dev,
+ "%s(): Failed in reading register 0x%02x after retry %d times\n"
+ , __func__, reg, MAX17050_I2C_RETRY_TIMES);
+ mutex_unlock(&chip->mutex);
+ return -EIO;
+ }
+
+ mutex_unlock(&chip->mutex);
+ return (vals[1] << 8) | vals[0];
+}
+
+/* Return value in uV */
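+/*
+ * Conversion sketch (assuming the usual VFOCV format: OCV in the upper
+ * 12 bits, 1.25 mV per LSB): raw 0xB400 -> (0xB400 >> 4) = 0xB40 = 2880,
+ * and 2880 * 1250 uV = 3600000 uV, i.e. 3.6 V.
+ */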
+static int max17050_get_ocv(struct i2c_client *client, int *batt_ocv)
+{
+ int reg;
+
+ reg = max17050_read_word(client, MAX17050_FG_VFOCV);
+ if (reg < 0)
+ return reg;
+ *batt_ocv = (reg >> 4) * 1250;
+ return 0;
+}
+
+static int max17050_get_vcell(struct i2c_client *client, int *vcell)
+{
+ int reg;
+ int ret = 0;
+
+ reg = max17050_read_word(client, MAX17050_FG_VCELL);
+ if (reg < 0) {
+ dev_err(&client->dev, "%s: err %d\n", __func__, reg);
+ ret = -EINVAL;
+ } else
+ *vcell = ((uint16_t)reg >> 3) * 625;
+
+ return ret;
+}
+
+static int max17050_get_current(struct i2c_client *client, int *batt_curr)
+{
+ int curr;
+ int ret = 0;
+
+ /*
+ * TODO: Assumes current sense resistor is 10mohms.
+ */
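+ /*
+ * Scaling sketch, assuming the 10 mOhm sense resistor noted above:
+ * the current register LSB is 1.5625 uV / Rsense = 156.25 uA,
+ * i.e. 5000/32 uA; e.g. raw 0xFF38 (int16 -200) gives
+ * -200 * 5000 / 32 = -31250 uA.
+ */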
+
+ curr = max17050_read_word(client, MAX17050_FG_Current);
+ if (curr < 0) {
+ dev_err(&client->dev, "%s: err %d\n", __func__, curr);
+ ret = -EINVAL;
+ } else
+ *batt_curr = ((int16_t) curr) * 5000 / 32;
+
+ return ret;
+}
+
+static int max17050_get_avgcurrent(struct i2c_client *client, int *batt_avg_curr)
+{
+ int avg_curr;
+ int ret = 0;
+
+ /*
+ * TODO: Assumes current sense resistor is 10mohms.
+ */
+
+ avg_curr = max17050_read_word(client, MAX17050_FG_AvgCurrent);
+ if (avg_curr < 0) {
+ dev_err(&client->dev, "%s: err %d\n", __func__, avg_curr);
+ ret = -EINVAL;
+ } else
+ *batt_avg_curr = ((int16_t) avg_curr) * 5000 / 32;
+
+ return ret;
+}
+
+static int max17050_get_charge(struct i2c_client *client, int *batt_charge)
+{
+ int charge;
+ int ret = 0;
+
+ /*
+ * TODO: Assumes current sense resistor is 10mohms.
+ */
+
+ charge = max17050_read_word(client, MAX17050_FG_AvCap);
+ if (charge < 0) {
+ dev_err(&client->dev, "%s: err %d\n", __func__, charge);
+ ret = -EINVAL;
+ } else
+ *batt_charge = ((int16_t) charge) * 500;
+
+ return ret;
+}
+
+static int max17050_get_charge_ext(struct i2c_client *client, int64_t *batt_charge_ext)
+{
+ int charge_msb, charge_lsb;
+ int ret = 0;
+
+ /*
+ * TODO: Assumes current sense resistor is 10mohms.
+ */
+
+ charge_msb = max17050_read_word(client, MAX17050_FG_QH);
+ charge_lsb = max17050_read_word(client, MAX17050_FG_QL);
+ if (charge_msb < 0 || charge_lsb < 0) {
+ dev_err(&client->dev, "%s: err %d\n", __func__, charge_msb);
+ ret = -EINVAL;
+ } else
+ *batt_charge_ext = (int64_t)((int16_t) charge_msb << 16 |
+ (uint16_t) charge_lsb) * 8LL;
+
+ return ret;
+}
+
+static int max17050_set_parameter_by_temp(struct i2c_client *client,
+ bool is_low_temp)
+{
+ int ret = 0;
+ int temp_params[FLOUNDER_BATTERY_PARAMS_SIZE];
+
+ struct max17050_chip *chip = i2c_get_clientdata(client);
+
+ if (is_low_temp)
+ memcpy(temp_params, chip->temp_low_params,
+ sizeof(temp_params));
+ else
+ memcpy(temp_params, chip->temp_normal_params,
+ sizeof(temp_params));
+
+ ret = max17050_write_word(client, MAX17050_FG_V_empty,
+ temp_params[0]);
+ if (ret < 0) {
+ dev_err(&client->dev, "%s: write V_empty fail, err %d\n"
+ , __func__, ret);
+ return ret;
+ }
+
+ ret = max17050_write_word(client, MAX17050_FG_QRtable00,
+ temp_params[1]);
+ if (ret < 0) {
+ dev_err(&client->dev, "%s: write QRtable00 fail, err %d\n"
+ , __func__, ret);
+ return ret;
+ }
+
+ ret = max17050_write_word(client, MAX17050_FG_QRtable10,
+ temp_params[2]);
+ if (ret < 0) {
+ dev_err(&client->dev, "%s: write QRtable10 fail, err %d\n"
+ , __func__, ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+static int __max17050_get_temperature(struct i2c_client *client, int *batt_temp)
+{
+ int temp;
+
+ temp = max17050_read_word(client, MAX17050_FG_TEMP);
+ if (temp < 0) {
+ dev_err(&client->dev, "%s: err %d\n", __func__, temp);
+ return temp;
+ }
+
+ /* The value is a signed 16-bit quantity. */
+ temp = (int16_t)temp;
+ /* The value is converted into deci-centigrade scale */
+ /* Units of LSB = 1 / 256 degree Celsius */
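+ /*
+ * Worked example: raw 0x1A00 = 6656 -> 6656 * 10 / 256 = 260,
+ * i.e. 26.0 degrees Celsius in deci-centigrade.
+ */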
+
+ *batt_temp = temp * 10 / (1 << 8);
+ return 0;
+}
+
+static int max17050_get_temperature(struct i2c_client *client, int *batt_temp)
+{
+ int temp, ret, i;
+ uint16_t tgain, toff;
+
+ ret = __max17050_get_temperature(client, &temp);
+ if (ret < 0)
+ goto error;
+
+ if (temp <= t_gain_off_lut.t[0]) {
+ tgain = t_gain_off_lut.tgain[0];
+ toff = t_gain_off_lut.toff[0];
+ } else {
+ tgain = t_gain_off_lut.tgain[MAX17050_T_GAIN_OFF_NUM - 1];
+ toff = t_gain_off_lut.toff[MAX17050_T_GAIN_OFF_NUM - 1];
+ /* adjust TGAIN and TOFF for battery temperature accuracy*/
+ for (i = 0; i < MAX17050_T_GAIN_OFF_NUM - 1; i++) {
+ if (temp >= t_gain_off_lut.t[i] &&
+ temp < t_gain_off_lut.t[i + 1]) {
+ tgain = t_gain_off_lut.tgain[i];
+ toff = t_gain_off_lut.toff[i];
+ break;
+ }
+ }
+ }
+
+ ret = max17050_write_word(client, MAX17050_FG_TGAIN, tgain);
+ if (ret < 0)
+ goto error;
+
+ ret = max17050_write_word(client, MAX17050_FG_TOFF, toff);
+ if (ret < 0)
+ goto error;
+
+ *batt_temp = temp;
+ if (max17050_data->adjust_present) {
+ if (!max17050_data->is_low_temp &&
+ temp <= max17050_data->temp_normal2low_thr) {
+ max17050_set_parameter_by_temp(client, true);
+ max17050_data->is_low_temp = true;
+ } else if (max17050_data->is_low_temp &&
+ temp >= max17050_data->temp_low2normal_thr) {
+ max17050_set_parameter_by_temp(client, false);
+ max17050_data->is_low_temp = false;
+ }
+ }
+
+ return 0;
+error:
+ dev_err(&client->dev, "%s: temperature reading fail, err %d\n"
+ , __func__, ret);
+ return ret;
+}
+
+static int max17050_get_soc(struct i2c_client *client, int *soc_raw)
+{
+ int soc, soc_adjust;
+ int ret = 0;
+
+ soc = max17050_read_word(client, MAX17050_FG_RepSOC);
+ if (soc < 0) {
+ dev_err(&client->dev, "%s: err %d\n", __func__, soc);
+ ret = -EINVAL;
+ } else {
+ soc_adjust = (uint16_t)soc >> 8;
+ if (soc & 0xFF)
+ soc_adjust += 1;
+ *soc_raw = soc_adjust;
+ }
+
+ return ret;
+}
+
+static int htc_batt_max17050_get_vcell(int *batt_volt)
+{
+ int ret = 0;
+
+ if (!batt_volt)
+ return -EINVAL;
+
+ if (!max17050_data)
+ return -ENODEV;
+
+ ret = max17050_get_vcell(max17050_data->client, batt_volt);
+
+ return ret;
+}
+
+static int htc_batt_max17050_get_current(int *batt_curr)
+{
+ int ret = 0;
+
+ if (!batt_curr)
+ return -EINVAL;
+
+ if (!max17050_data)
+ return -ENODEV;
+
+ ret = max17050_get_current(max17050_data->client, batt_curr);
+
+ return ret;
+}
+
+static int htc_batt_max17050_get_avgcurrent(int *batt_avgcurr)
+{
+ int ret = 0;
+
+ if (!batt_avgcurr)
+ return -EINVAL;
+
+ if (!max17050_data)
+ return -ENODEV;
+
+ ret = max17050_get_avgcurrent(max17050_data->client, batt_avgcurr);
+
+ return ret;
+}
+
+static int htc_batt_max17050_get_charge(int *batt_charge)
+{
+ int ret = 0;
+
+ if (!batt_charge)
+ return -EINVAL;
+
+ if (!max17050_data)
+ return -ENODEV;
+
+ ret = max17050_get_charge(max17050_data->client, batt_charge);
+
+ return ret;
+}
+
+static int htc_batt_max17050_get_charge_ext(int64_t *batt_charge_ext)
+{
+ int ret = 0;
+
+ if (!batt_charge_ext)
+ return -EINVAL;
+
+ if (!max17050_data)
+ return -ENODEV;
+
+ ret = max17050_get_charge_ext(max17050_data->client, batt_charge_ext);
+
+ return ret;
+}
+
+static int htc_batt_max17050_get_temperature(int *batt_temp)
+{
+ int ret = 0;
+
+ if (!batt_temp)
+ return -EINVAL;
+
+ if (!max17050_data)
+ return -ENODEV;
+
+ ret = max17050_get_temperature(max17050_data->client, batt_temp);
+
+ return ret;
+}
+
+static int htc_batt_max17050_get_soc(int *batt_soc)
+{
+ int ret = 0;
+
+ if (!batt_soc)
+ return -EINVAL;
+
+ if (!max17050_data)
+ return -ENODEV;
+
+ ret = max17050_get_soc(max17050_data->client, batt_soc);
+
+ return ret;
+}
+
+static int htc_batt_max17050_get_ocv(int *batt_ocv)
+{
+ int ret = 0;
+
+ if (!batt_ocv)
+ return -EINVAL;
+
+ if (!max17050_data)
+ return -ENODEV;
+
+ ret = max17050_get_ocv(max17050_data->client, batt_ocv);
+
+ return ret;
+}
+
+static int max17050_debugfs_show(struct seq_file *s, void *unused)
+{
+ struct max17050_chip *chip = s->private;
+ int index;
+ u8 reg;
+ int data;
+
+ for (index = 0; index < ARRAY_SIZE(debugfs_regs); index++) {
+ reg = debugfs_regs[index];
+ data = max17050_read_word(chip->client, reg);
+ if (data < 0)
+ dev_err(&chip->client->dev, "%s: err %d\n", __func__,
+ data);
+ else
+ seq_printf(s, "0x%02x:\t0x%04x\n", reg, data);
+ }
+
+ return 0;
+}
+
+static int max17050_debugfs_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, max17050_debugfs_show, inode->i_private);
+}
+
+struct htc_battery_max17050_ops htc_batt_max17050_ops = {
+ .get_vcell = htc_batt_max17050_get_vcell,
+ .get_battery_current = htc_batt_max17050_get_current,
+ .get_battery_avgcurrent = htc_batt_max17050_get_avgcurrent,
+ .get_temperature = htc_batt_max17050_get_temperature,
+ .get_soc = htc_batt_max17050_get_soc,
+ .get_ocv = htc_batt_max17050_get_ocv,
+ .get_battery_charge = htc_batt_max17050_get_charge,
+ .get_battery_charge_ext = htc_batt_max17050_get_charge_ext,
+};
+
+static const struct file_operations max17050_debugfs_fops = {
+ .open = max17050_debugfs_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
+static enum power_supply_property max17050_prop[] = {
+ POWER_SUPPLY_PROP_TECHNOLOGY,
+ POWER_SUPPLY_PROP_VOLTAGE_NOW,
+ POWER_SUPPLY_PROP_CAPACITY,
+ POWER_SUPPLY_PROP_VOLTAGE_OCV,
+ POWER_SUPPLY_PROP_TEMP,
+ POWER_SUPPLY_PROP_CURRENT_NOW,
+ POWER_SUPPLY_PROP_CURRENT_AVG,
+ POWER_SUPPLY_PROP_CHARGE_COUNTER,
+ POWER_SUPPLY_PROP_CHARGE_COUNTER_EXT,
+};
+
+static int max17050_get_property(struct power_supply *psy,
+ enum power_supply_property psp,
+ union power_supply_propval *val)
+{
+ int ret = 0;
+
+ switch (psp) {
+ case POWER_SUPPLY_PROP_TECHNOLOGY:
+ val->intval = POWER_SUPPLY_TECHNOLOGY_LION;
+ break;
+ case POWER_SUPPLY_PROP_VOLTAGE_NOW:
+ ret = htc_batt_max17050_get_vcell(&val->intval);
+ break;
+ case POWER_SUPPLY_PROP_CURRENT_NOW:
+ ret = htc_batt_max17050_get_current(&val->intval);
+ break;
+ case POWER_SUPPLY_PROP_CURRENT_AVG:
+ ret = htc_batt_max17050_get_avgcurrent(&val->intval);
+ break;
+ case POWER_SUPPLY_PROP_CAPACITY:
+ ret = htc_batt_max17050_get_soc(&val->intval);
+ break;
+ case POWER_SUPPLY_PROP_VOLTAGE_OCV:
+ ret = htc_batt_max17050_get_ocv(&val->intval);
+ break;
+ case POWER_SUPPLY_PROP_TEMP:
+ ret = htc_batt_max17050_get_temperature(&val->intval);
+ break;
+ case POWER_SUPPLY_PROP_CHARGE_COUNTER:
+ ret = htc_batt_max17050_get_charge(&val->intval);
+ break;
+ case POWER_SUPPLY_PROP_CHARGE_COUNTER_EXT:
+ ret = htc_batt_max17050_get_charge_ext(&val->int64val);
+ break;
+ default:
+ return -EINVAL;
+ }
+ return ret;
+}
+
+static struct flounder_battery_platform_data
+ *flounder_battery_dt_parse(struct i2c_client *client)
+{
+ struct device_node *np = client->dev.of_node;
+ struct flounder_battery_platform_data *pdata;
+ struct device_node *map_node;
+ struct device_node *child;
+ int params_num = 0;
+ int ret;
+ u32 pval;
+ u32 id_range[FLOUNDER_BATTERY_ID_RANGE_SIZE];
+ u32 params[FLOUNDER_BATTERY_PARAMS_SIZE];
+ struct flounder_battery_adjust_by_id *param;
+
+ pdata = devm_kzalloc(&client->dev, sizeof(*pdata), GFP_KERNEL);
+ if (!pdata)
+ return ERR_PTR(-ENOMEM);
+
+ pdata->batt_id_channel_name =
+ of_get_property(np, "battery-id-channel-name", NULL);
+
+ map_node = of_get_child_by_name(np, "param_adjust_map_by_id");
+ if (!map_node) {
+ dev_warn(&client->dev,
+ "parameter adjust map table not found\n");
+ goto done;
+ }
+
+ for_each_child_of_node(map_node, child) {
+ param = &pdata->batt_params[params_num];
+
+ ret = of_property_read_u32(child, "id-number", &pval);
+ if (!ret)
+ param->id = pval;
+
+ ret = of_property_read_u32_array(child, "id-range", id_range,
+ FLOUNDER_BATTERY_ID_RANGE_SIZE);
+ if (!ret)
+ memcpy(param->id_range, id_range,
+ sizeof(id_range));
+
+ ret = of_property_read_u32(child,
+ "temperature-normal-to-low-threshold",
+ &pval);
+ if (!ret)
+ param->temp_normal2low_thr = pval;
+
+ ret = of_property_read_u32(child,
+ "temperature-low-to-normal-threshold",
+ &pval);
+ if (!ret)
+ param->temp_low2normal_thr = pval;
+
+ ret = of_property_read_u32_array(child,
+ "temperature-normal-parameters",
+ params,
+ FLOUNDER_BATTERY_PARAMS_SIZE);
+ if (!ret)
+ memcpy(param->temp_normal_params, params,
+ sizeof(params));
+
+ ret = of_property_read_u32_array(child,
+ "temperature-low-parameters",
+ params,
+ FLOUNDER_BATTERY_PARAMS_SIZE);
+ if (!ret)
+ memcpy(param->temp_low_params, params,
+ sizeof(params));
+
+ if (++params_num >= FLOUNDER_BATTERY_ID_MAX)
+ break;
+ }
+
+done:
+ pdata->batt_params_num = params_num;
+ return pdata;
+}
+
+static int flounder_battery_id_check(
+ struct flounder_battery_platform_data *pdata,
+ struct max17050_chip *data)
+{
+ int batt_id = 0;
+ struct iio_channel *batt_id_channel;
+ int ret;
+ int i;
+
+ if (!pdata->batt_id_channel_name || pdata->batt_params_num == 0)
+ return -EINVAL;
+
+ batt_id_channel = iio_channel_get(NULL, pdata->batt_id_channel_name);
+ if (IS_ERR(batt_id_channel)) {
+ dev_err(data->dev,
+ "Failed to get iio channel %s, %ld\n",
+ pdata->batt_id_channel_name,
+ PTR_ERR(batt_id_channel));
+ return -EINVAL;
+ }
+
+ ret = iio_read_channel_processed(batt_id_channel, &batt_id);
+ if (ret < 0)
+ ret = iio_read_channel_raw(batt_id_channel, &batt_id);
+
+ if (ret < 0) {
+ dev_err(data->dev,
+ "Failed to read batt id, ret=%d\n",
+ ret);
+ return -EFAULT;
+ }
+
+ dev_dbg(data->dev, "Battery id adc value is %d\n", batt_id);
+
+ for (i = 0; i < pdata->batt_params_num; i++) {
+ if (batt_id >= pdata->batt_params[i].id_range[0] &&
+ batt_id <= pdata->batt_params[i].id_range[1]) {
+ data->temp_normal2low_thr =
+ pdata->batt_params[i].temp_normal2low_thr;
+ data->temp_low2normal_thr =
+ pdata->batt_params[i].temp_low2normal_thr;
+ memcpy(data->temp_normal_params,
+ pdata->batt_params[i].temp_normal_params,
+ sizeof(data->temp_normal_params));
+ memcpy(data->temp_low_params,
+ pdata->batt_params[i].temp_low_params,
+ sizeof(data->temp_low_params));
+
+ data->adjust_present = true;
+ max17050_set_parameter_by_temp(data->client, false);
+ data->is_low_temp = false;
+ return 0;
+ }
+ }
+
+ return -ENODATA;
+}
+
+static int max17050_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ struct max17050_chip *chip;
+ struct flounder_battery_platform_data *pdata = NULL;
+ int ret;
+ bool ignore_param_adjust = false;
+
+ if (client->dev.platform_data)
+ pdata = client->dev.platform_data;
+
+ if (!pdata && client->dev.of_node) {
+ pdata = flounder_battery_dt_parse(client);
+ if (IS_ERR(pdata)) {
+ ret = PTR_ERR(pdata);
+ dev_err(&client->dev, "Parsing of node failed, %d\n",
+ ret);
+ ignore_param_adjust = true;
+ }
+ }
+
+ chip = devm_kzalloc(&client->dev, sizeof(*chip), GFP_KERNEL);
+ if (!chip)
+ return -ENOMEM;
+
+ chip->client = client;
+ mutex_init(&chip->mutex);
+ chip->shutdown_complete = 0;
+ i2c_set_clientdata(client, chip);
+
+ if (pdata && !ignore_param_adjust)
+ flounder_battery_id_check(pdata, chip);
+
+ max17050_data = chip;
+
+ chip->battery.name = "battery";
+ chip->battery.type = POWER_SUPPLY_TYPE_BATTERY;
+ chip->battery.get_property = max17050_get_property;
+ chip->battery.properties = max17050_prop;
+ chip->battery.num_properties = ARRAY_SIZE(max17050_prop);
+ chip->dev = &client->dev;
+
+ ret = power_supply_register(&client->dev, &chip->battery);
+ if (ret) {
+ dev_err(&client->dev, "failed: power supply register\n");
+ return ret;
+ }
+
+ chip->dentry = debugfs_create_file("max17050-regs", S_IRUGO, NULL,
+ chip, &max17050_debugfs_fops);
+
+ return 0;
+}
+
+static int max17050_remove(struct i2c_client *client)
+{
+ struct max17050_chip *chip = i2c_get_clientdata(client);
+
+ debugfs_remove(chip->dentry);
+ power_supply_unregister(&chip->battery);
+ mutex_destroy(&chip->mutex);
+
+ return 0;
+}
+
+static void max17050_shutdown(struct i2c_client *client)
+{
+ struct max17050_chip *chip = i2c_get_clientdata(client);
+
+ mutex_lock(&chip->mutex);
+ chip->shutdown_complete = 1;
+ mutex_unlock(&chip->mutex);
+}
+
+#ifdef CONFIG_OF
+static const struct of_device_id max17050_dt_match[] = {
+ { .compatible = "maxim,max17050" },
+ { },
+};
+MODULE_DEVICE_TABLE(of, max17050_dt_match);
+#endif
+
+static const struct i2c_device_id max17050_id[] = {
+ { "max17050", 0 },
+ { }
+};
+MODULE_DEVICE_TABLE(i2c, max17050_id);
+
+static struct i2c_driver max17050_i2c_driver = {
+ .driver = {
+ .name = "max17050",
+ .of_match_table = of_match_ptr(max17050_dt_match),
+ },
+ .probe = max17050_probe,
+ .remove = max17050_remove,
+ .id_table = max17050_id,
+ .shutdown = max17050_shutdown,
+};
+
+static int __init max17050_init(void)
+{
+ return i2c_add_driver(&max17050_i2c_driver);
+}
+device_initcall(max17050_init);
+
+static void __exit max17050_exit(void)
+{
+ i2c_del_driver(&max17050_i2c_driver);
+}
+module_exit(max17050_exit);
+
+MODULE_DESCRIPTION("MAX17050 Fuel Gauge");
+MODULE_LICENSE("GPL");
diff --git a/drivers/power/reset/palmas-poweroff.c b/drivers/power/reset/palmas-poweroff.c
index 6108d1a..e84d654 100644
--- a/drivers/power/reset/palmas-poweroff.c
+++ b/drivers/power/reset/palmas-poweroff.c
@@ -40,6 +40,7 @@
int int_mask_val[PALMAS_MAX_INTERRUPT_MASK_REG];
bool need_rtc_power_on;
bool need_usb_event_power_on;
+ bool enable_boot_up_at_vbus;
};
static void palmas_auto_power_on(struct palmas_pm *palmas_pm)
@@ -96,6 +97,7 @@
unsigned int val;
int i;
int ret;
+ unsigned int vbus_line_state, ldo_short_status2;
palmas_allow_atomic_xfer(palmas);
@@ -119,7 +121,8 @@
palmas_pm->int_status_reg_add[i], ret);
}
- if (palmas_pm->need_usb_event_power_on) {
+ if (palmas_pm->enable_boot_up_at_vbus ||
+ palmas_pm->need_usb_event_power_on) {
ret = palmas_update_bits(palmas, PALMAS_INTERRUPT_BASE,
PALMAS_INT3_MASK,
PALMAS_INT3_MASK_VBUS | PALMAS_INT3_MASK_VBUS_OTG, 0);
@@ -134,6 +137,57 @@
dev_info(palmas_pm->dev, "Powering off the device\n");
+ palmas_read(palmas, PALMAS_INTERRUPT_BASE,
+ PALMAS_INT3_LINE_STATE, &vbus_line_state);
+
+ if (palmas_pm->enable_boot_up_at_vbus &&
+ (vbus_line_state & PALMAS_INT3_LINE_STATE_VBUS)) {
+ dev_info(palmas_pm->dev, "VBUS present, arming RTC timer interrupt for boot\n");
+ ret = palmas_update_bits(palmas, PALMAS_RTC_BASE,
+ PALMAS_RTC_INTERRUPTS_REG,
+ PALMAS_RTC_INTERRUPTS_REG_IT_TIMER,
+ PALMAS_RTC_INTERRUPTS_REG_IT_TIMER);
+ if (ret < 0) {
+ dev_err(palmas_pm->dev,
+ "RTC_INTERRUPTS update failed: %d\n", ret);
+ goto poweroff_direct;
+ }
+
+ ret = palmas_update_bits(palmas, PALMAS_RTC_BASE,
+ PALMAS_RTC_INTERRUPTS_REG,
+ PALMAS_RTC_INTERRUPTS_REG_EVERY_MASK, 0);
+ if (ret < 0) {
+ dev_err(palmas_pm->dev,
+ "RTC_INTERRUPTS update failed: %d\n", ret);
+ goto poweroff_direct;
+ }
+
+ ret = palmas_update_bits(palmas, PALMAS_RTC_BASE,
+ PALMAS_RTC_CTRL_REG, PALMAS_RTC_CTRL_REG_STOP_RTC,
+ PALMAS_RTC_CTRL_REG_STOP_RTC);
+ if (ret < 0) {
+ dev_err(palmas_pm->dev,
+ "RTC_CTRL_REG update failed: %d\n", ret);
+ goto poweroff_direct;
+ }
+
+ ret = palmas_update_bits(palmas, PALMAS_INTERRUPT_BASE,
+ PALMAS_INT2_MASK,
+ PALMAS_INT2_MASK_RTC_TIMER, 0);
+ if (ret < 0) {
+ dev_err(palmas_pm->dev,
+ "INT2_MASK update failed: %d\n", ret);
+ goto poweroff_direct;
+ }
+ }
+
+poweroff_direct:
+ /* Errata
+ * clear VANA short status before switch-off
+ */
+ palmas_read(palmas, PALMAS_LDO_BASE, PALMAS_LDO_SHORT_STATUS2,
+ &ldo_short_status2);
+
/* Power off the device */
palmas_update_bits(palmas, PALMAS_PMU_CONTROL_BASE,
PALMAS_DEV_CTRL, 1, 0);
@@ -308,15 +362,21 @@
if (pm_pdata) {
config.allow_power_off = pm_pdata->use_power_off;
config.allow_power_reset = pm_pdata->use_power_reset;
+ palmas_pm->enable_boot_up_at_vbus =
+ pm_pdata->use_boot_up_at_vbus;
} else {
if (node) {
config.allow_power_off = of_property_read_bool(node,
"system-pmic-power-off");
config.allow_power_reset = of_property_read_bool(node,
"system-pmic-power-reset");
+ palmas_pm->enable_boot_up_at_vbus =
+ of_property_read_bool(node,
+ "boot-up-at-vbus");
} else {
config.allow_power_off = true;
config.allow_power_reset = false;
+ palmas_pm->enable_boot_up_at_vbus = false;
}
}
diff --git a/drivers/regulator/palmas-regulator.c b/drivers/regulator/palmas-regulator.c
index 8fec532..bb5ce55 100644
--- a/drivers/regulator/palmas-regulator.c
+++ b/drivers/regulator/palmas-regulator.c
@@ -1437,6 +1437,7 @@
}
pdata->ldo6_vibrator = of_property_read_bool(node, "ti,ldo6-vibrator");
+ pdata->disable_smps10_in_suspend =
+ of_property_read_bool(node, "ti,disable-smps10-in-suspend");
}
@@ -1810,6 +1811,9 @@
}
}
+ if (pdata && pdata->disable_smps10_in_suspend)
+ pmic->disable_smps10_in_suspend = true;
+
palmas_dvfs_init(palmas, pdata);
return 0;
@@ -1858,6 +1862,7 @@
struct palmas *palmas = dev_get_drvdata(dev->parent);
struct palmas_pmic *pmic = dev_get_drvdata(dev);
int id;
+ unsigned int ldo_short_status2;
for (id = 0; id < PALMAS_NUM_REGS; id++) {
unsigned int cf = pmic->config_flags[id];
@@ -1875,6 +1880,20 @@
}
palams_rail_pd_control(palmas, id, false);
}
+
+ if (pmic->disable_smps10_in_suspend) {
+ palmas_smps_read(palmas, PALMAS_SMPS10_CTRL, &(pmic->smps10_ctrl_reg));
+ dev_dbg(dev, "%s:Save SMPS10_CTRL register before suspend: 0x%x\n", __func__, pmic->smps10_ctrl_reg);
+
+ /* disable SMPS10 in suspend */
+ palmas_smps_write(palmas, PALMAS_SMPS10_CTRL, 0);
+ }
+
+ /* Errata
+ * clear VANA short status before suspend
+ */
+ palmas_ldo_read(palmas, PALMAS_LDO_SHORT_STATUS2, &ldo_short_status2);
+
return 0;
}
@@ -1883,6 +1902,7 @@
struct palmas *palmas = dev_get_drvdata(dev->parent);
struct palmas_pmic *pmic = dev_get_drvdata(dev);
int id;
+ unsigned int reg;
for (id = 0; id < PALMAS_NUM_REGS; id++) {
unsigned int cf = pmic->config_flags[id];
@@ -1901,7 +1921,15 @@
palams_rail_pd_control(palmas, id,
pmic->disable_active_discharge_idle[id]);
+
}
+
+ if (pmic->disable_smps10_in_suspend) {
+ palmas_smps_write(palmas, PALMAS_SMPS10_CTRL, pmic->smps10_ctrl_reg);
+ palmas_smps_read(palmas, PALMAS_SMPS10_CTRL, &reg);
+ dev_dbg(dev, "%s: Restore SMPS10_CTRL register: 0x%x\n", __func__, reg);
+ }
+
return 0;
}
#endif
diff --git a/drivers/rtc/rtc-palmas.c b/drivers/rtc/rtc-palmas.c
index ae74754..8a53417 100644
--- a/drivers/rtc/rtc-palmas.c
+++ b/drivers/rtc/rtc-palmas.c
@@ -383,7 +383,7 @@
enable_irq_wake(palmas_rtc->irq);
ret = palmas_rtc_read_alarm(dev, &alm);
if (!ret)
- dev_info(dev, "%s() alrm %d time %d %d %d %d %d %d\n",
+ dev_dbg(dev, "%s() alrm %d time %d %d %d %d %d %d\n",
__func__, alm.enabled,
alm.time.tm_year, alm.time.tm_mon,
alm.time.tm_mday, alm.time.tm_hour,
@@ -404,7 +404,7 @@
disable_irq_wake(palmas_rtc->irq);
ret = palmas_rtc_read_time(dev, &tm);
if (!ret)
- dev_info(dev, "%s() %d %d %d %d %d %d\n",
+ dev_dbg(dev, "%s() %d %d %d %d %d %d\n",
__func__, tm.tm_year, tm.tm_mon, tm.tm_mday,
tm.tm_hour, tm.tm_min, tm.tm_sec);
}
diff --git a/drivers/staging/iio/adc/palmas_gpadc.c b/drivers/staging/iio/adc/palmas_gpadc.c
index 8236b0e..da20bef 100644
--- a/drivers/staging/iio/adc/palmas_gpadc.c
+++ b/drivers/staging/iio/adc/palmas_gpadc.c
@@ -1368,7 +1368,7 @@
{
return platform_driver_register(&palmas_gpadc_driver);
}
-module_init(palmas_gpadc_init);
+fs_initcall(palmas_gpadc_init);
static void __exit palmas_gpadc_exit(void)
{
diff --git a/drivers/thermal/Kconfig b/drivers/thermal/Kconfig
index e7dfb56..4b3874b 100644
--- a/drivers/thermal/Kconfig
+++ b/drivers/thermal/Kconfig
@@ -64,6 +64,13 @@
devices based on their 'contribution' to a zone. The
contribution should be provided through platform data.
+config THERMAL_DEFAULT_GOV_ADAPTIVE_SKIN
+ bool
+ prompt "adaptive skin thermal governor"
+ select THERMAL_GOV_ADAPTIVE_SKIN
+ help
+ Use the adaptive skin thermal governor as default.
+
config THERMAL_DEFAULT_GOV_PID
bool
prompt "pid_thermal_gov"
@@ -98,6 +105,13 @@
This governor manages thermals based on output values of
PID controller.
+config THERMAL_GOV_ADAPTIVE_SKIN
+ bool
+ prompt "adaptive skin thermal governor"
+ help
+ This governor throttles clocks based on the skin temperature
+ and its heat sources, such as the CPU and GPU.
+
config THERMAL_GOV_USER_SPACE
bool "User_space thermal governor"
help
diff --git a/drivers/thermal/Makefile b/drivers/thermal/Makefile
index f4bd2d7..c474db3 100644
--- a/drivers/thermal/Makefile
+++ b/drivers/thermal/Makefile
@@ -13,6 +13,7 @@
thermal_sys-$(CONFIG_THERMAL_GOV_FAIR_SHARE) += fair_share.o
thermal_sys-$(CONFIG_THERMAL_GOV_STEP_WISE) += step_wise.o
thermal_sys-$(CONFIG_THERMAL_GOV_PID) += pid_thermal_gov.o
+thermal_sys-$(CONFIG_THERMAL_GOV_ADAPTIVE_SKIN) += adaptive_skin.o
thermal_sys-$(CONFIG_THERMAL_GOV_USER_SPACE) += user_space.o
# cpufreq cooling
diff --git a/drivers/thermal/adaptive_skin.c b/drivers/thermal/adaptive_skin.c
new file mode 100644
index 0000000..fe790b2
--- /dev/null
+++ b/drivers/thermal/adaptive_skin.c
@@ -0,0 +1,1045 @@
+/*
+ * drivers/thermal/adaptive_skin.c
+ *
+ * Copyright (c) 2014, NVIDIA CORPORATION. All rights reserved.
+
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/module.h>
+#include <linux/kobject.h>
+#include <linux/slab.h>
+#include <linux/thermal.h>
+
+#include <linux/adaptive_skin.h>
+#include "thermal_core.h"
+
+#define DRV_NAME "adaptive_skin"
+
+#define DEFAULT_CPU_ZONE_NAME "CPU-therm"
+#define DEFAULT_GPU_ZONE_NAME "GPU-therm"
+
+#define MAX_ERROR_TJ_DELTA 15000
+#define MAX_ERROR_TSKIN_DELTA 5000
+/* Tj delta should be below this value during the transient */
+#define DEFAULT_TJ_TRAN_THRESHOLD 2000
+#define MAX_TJ_TRAN_THRESHOLD MAX_ERROR_TJ_DELTA
+/* Tj delta should be below this value during steady state */
+#define DEFAULT_TJ_STD_THRESHOLD 3000
+#define MAX_TJ_STD_THRESHOLD MAX_ERROR_TJ_DELTA
+/* Tj delta from the previous value should be below this value
+ * after an action in steady state */
+#define DEFAULT_TJ_STD_FORCE_UPDATE_THRESHOLD 5000
+#define MAX_TJ_STD_FORCE_UPDATE_THRESHOLD MAX_ERROR_TJ_DELTA
+#define DEFAULT_TSKIN_TRAN_THRESHOLD 500
+#define MAX_TSKIN_TRAN_THRESHOLD MAX_ERROR_TSKIN_DELTA
+/* Tskin delta from trip temp should be below this value */
+#define DEFAULT_TSKIN_STD_THRESHOLD 1000
+#define MAX_TSKIN_STD_THRESHOLD MAX_ERROR_TSKIN_DELTA
+#define DEFAULT_TSKIN_STD_OFFSET 500
+#define FORCE_DROP_THRESHOLD 3000
+
+/* number of pollings */
+#define DEFAULT_FORCE_UPDATE_PERIOD 6
+#define MAX_FORCE_UPDATE_PERIOD 10
+#define DEFAULT_TARGET_STATE_TDP 10
+#define MAX_ALLOWED_POLL 4U
+#define POLL_PER_STATE_SHIFT 16
+#define FORCE_UPDATE_MOVEBACK 2
+
+#define STEADY_CHECK_PERIOD_THRESHOLD 4
+
+/* trend calculation */
+#define TC1 10
+#define TC2 1
+
+enum ast_action {
+ AST_ACT_STAY = 0x00,
+ AST_ACT_TRAN_RAISE = 0x01,
+ AST_ACT_TRAN_DROP = 0x02,
+ AST_ACT_STD_RAISE = 0x04,
+ AST_ACT_STD_DROP = 0x08,
+ AST_ACT_TRAN = AST_ACT_TRAN_RAISE | AST_ACT_TRAN_DROP,
+ AST_ACT_STD = AST_ACT_STD_RAISE | AST_ACT_STD_DROP,
+ AST_ACT_RAISE = AST_ACT_TRAN_RAISE | AST_ACT_STD_RAISE,
+ AST_ACT_DROP = AST_ACT_TRAN_DROP | AST_ACT_STD_DROP,
+};
+
+#define is_action_transient_raise(action) ((action) & AST_ACT_TRAN_RAISE)
+#define is_action_transient_drop(action) ((action) & AST_ACT_TRAN_DROP)
+#define is_action_steady_raise(action) ((action) & AST_ACT_STD_RAISE)
+#define is_action_steady_drop(action) ((action) & AST_ACT_STD_DROP)
+#define is_action_raise(action) ((action) & AST_ACT_RAISE)
+#define is_action_drop(action) ((action) & AST_ACT_DROP)
+#define is_action_transient(action) ((action) & AST_ACT_TRAN)
+#define is_action_steady(action) ((action) & AST_ACT_STD)
+
+struct astg_ctx {
+ struct kobject kobj;
+ long temp_last; /* skin temperature at last action */
+
+ /* normal update */
+ int raise_cnt; /* num of pollings elapsed after temp > trip */
+ int drop_cnt; /* num of pollings elapsed after temp < trip */
+
+ /* force update */
+ int fup_period; /* allowed period staying away from trip temp */
+ int fup_raise_cnt; /* state will raise if raise_cnt > period */
+ int fup_drop_cnt; /* state will drop if drop_cnt > period */
+
+ /* steady state detection */
+ int std_cnt; /* num of pollings while device is in steady */
+ int std_offset; /* steady offset to check if it is in steady */
+
+ /* thresholds for Tskin */
+ int tskin_tran_threshold;
+ int tskin_std_threshold;
+
+ /* gains */
+ int poll_raise_gain;
+ int poll_drop_gain;
+ int target_state_tdp;
+
+ /* heat sources */
+ struct thermal_zone_device *cpu_zone;
+ struct thermal_zone_device *gpu_zone;
+ long tcpu_last; /* cpu temperature at last action */
+ long tgpu_last; /* gpu temperature at last action */
+ /* thresholds for heat source */
+ int tj_tran_threshold;
+ int tj_std_threshold;
+ int tj_std_fup_threshold;
+
+ enum ast_action prev_action; /* previous action */
+};
+
+struct astg_attribute {
+ struct attribute attr;
+ ssize_t (*show)(struct kobject *kobj, struct attribute *attr,
+ char *buf);
+ ssize_t (*store)(struct kobject *kobj, struct attribute *attr,
+ const char *buf, size_t count);
+};
+
+#define tz_to_gov(t) \
+ (t->governor_data)
+
+#define kobj_to_gov(k) \
+ container_of(k, struct astg_ctx, kobj)
+
+#define attr_to_gov_attr(a) \
+ container_of(a, struct astg_attribute, attr)
+
+static ssize_t target_state_tdp_show(struct kobject *kobj,
+ struct attribute *attr,
+ char *buf)
+{
+ struct astg_ctx *gov = kobj_to_gov(kobj);
+
+ if (!gov)
+ return -ENODEV;
+
+ return sprintf(buf, "%d\n", gov->target_state_tdp);
+}
+
+static ssize_t target_state_tdp_store(struct kobject *kobj,
+ struct attribute *attr,
+ const char *buf,
+ size_t count)
+{
+ struct astg_ctx *gov = kobj_to_gov(kobj);
+ int val, max_poll;
+
+ if (!gov)
+ return -ENODEV;
+
+ if (!sscanf(buf, "%d\n", &val))
+ return -EINVAL;
+
+ if (val <= 0)
+ return -EINVAL;
+
+ max_poll = MAX_ALLOWED_POLL << POLL_PER_STATE_SHIFT;
+
+ gov->target_state_tdp = val;
+ gov->poll_raise_gain = max_poll / (gov->target_state_tdp + 1);
+ gov->poll_drop_gain = gov->poll_raise_gain;
+ return count;
+}
+
+static struct astg_attribute target_state_tdp_attr =
+ __ATTR(target_state_tdp, 0644, target_state_tdp_show,
+ target_state_tdp_store);
+
+static ssize_t fup_period_show(struct kobject *kobj,
+ struct attribute *attr,
+ char *buf)
+{
+ struct astg_ctx *gov = kobj_to_gov(kobj);
+
+ if (!gov)
+ return -ENODEV;
+
+ return sprintf(buf, "%d\n", gov->fup_period);
+}
+
+static ssize_t fup_period_store(struct kobject *kobj,
+ struct attribute *attr,
+ const char *buf,
+ size_t count)
+{
+ struct astg_ctx *gov = kobj_to_gov(kobj);
+ int val;
+
+ if (!gov)
+ return -ENODEV;
+
+ if (!sscanf(buf, "%d\n", &val))
+ return -EINVAL;
+
+ if (val > MAX_FORCE_UPDATE_PERIOD || val <= 0)
+ return -EINVAL;
+
+ gov->fup_period = val;
+ return count;
+}
+
+static struct astg_attribute fup_period_attr =
+ __ATTR(fup_period, 0644, fup_period_show,
+ fup_period_store);
+
+static ssize_t tj_tran_threshold_show(struct kobject *kobj,
+ struct attribute *attr,
+ char *buf)
+{
+ struct astg_ctx *gov = kobj_to_gov(kobj);
+
+ if (!gov)
+ return -ENODEV;
+
+ return sprintf(buf, "%d\n", gov->tj_tran_threshold);
+}
+
+static ssize_t tj_tran_threshold_store(struct kobject *kobj,
+ struct attribute *attr,
+ const char *buf,
+ size_t count)
+{
+ struct astg_ctx *gov = kobj_to_gov(kobj);
+ int val;
+
+ if (!gov)
+ return -ENODEV;
+
+ if (!sscanf(buf, "%d\n", &val))
+ return -EINVAL;
+
+ if (val > MAX_TJ_TRAN_THRESHOLD || val <= 0)
+ return -EINVAL;
+
+ gov->tj_tran_threshold = val;
+ return count;
+}
+
+static struct astg_attribute tj_tran_threshold_attr =
+ __ATTR(tj_tran_threshold, 0644, tj_tran_threshold_show,
+ tj_tran_threshold_store);
+
+static ssize_t tj_std_threshold_show(struct kobject *kobj,
+ struct attribute *attr,
+ char *buf)
+{
+ struct astg_ctx *gov = kobj_to_gov(kobj);
+
+ if (!gov)
+ return -ENODEV;
+
+ return sprintf(buf, "%d\n", gov->tj_std_threshold);
+}
+
+static ssize_t tj_std_threshold_store(struct kobject *kobj,
+ struct attribute *attr,
+ const char *buf,
+ size_t count)
+{
+ struct astg_ctx *gov = kobj_to_gov(kobj);
+ int val;
+
+ if (!gov)
+ return -ENODEV;
+
+ if (!sscanf(buf, "%d\n", &val))
+ return -EINVAL;
+
+ if (val > MAX_TJ_STD_THRESHOLD || val <= 0)
+ return -EINVAL;
+
+ gov->tj_std_threshold = val;
+ return count;
+}
+
+static struct astg_attribute tj_std_threshold_attr =
+ __ATTR(tj_std_threshold, 0644, tj_std_threshold_show,
+ tj_std_threshold_store);
+
+static ssize_t tj_std_fup_threshold_show(struct kobject *kobj,
+ struct attribute *attr,
+ char *buf)
+{
+ struct astg_ctx *gov = kobj_to_gov(kobj);
+
+ if (!gov)
+ return -ENODEV;
+
+ return sprintf(buf, "%d\n", gov->tj_std_fup_threshold);
+}
+
+static ssize_t tj_std_fup_threshold_store(struct kobject *kobj,
+ struct attribute *attr,
+ const char *buf,
+ size_t count)
+{
+ struct astg_ctx *gov = kobj_to_gov(kobj);
+ int val;
+
+ if (!gov)
+ return -ENODEV;
+
+ if (!sscanf(buf, "%d\n", &val))
+ return -EINVAL;
+
+ if (val > MAX_TJ_STD_FORCE_UPDATE_THRESHOLD || val <= 0)
+ return -EINVAL;
+
+ gov->tj_std_fup_threshold = val;
+ return count;
+}
+
+static struct astg_attribute tj_std_fup_threshold_attr =
+ __ATTR(tj_std_fup_threshold, 0644, tj_std_fup_threshold_show,
+ tj_std_fup_threshold_store);
+
+static ssize_t tskin_tran_threshold_show(struct kobject *kobj,
+ struct attribute *attr,
+ char *buf)
+{
+ struct astg_ctx *gov = kobj_to_gov(kobj);
+
+ if (!gov)
+ return -ENODEV;
+
+ return sprintf(buf, "%d\n", gov->tskin_tran_threshold);
+}
+
+static ssize_t tskin_tran_threshold_store(struct kobject *kobj,
+ struct attribute *attr,
+ const char *buf,
+ size_t count)
+{
+ struct astg_ctx *gov = kobj_to_gov(kobj);
+ int val;
+
+ if (!gov)
+ return -ENODEV;
+
+ if (!sscanf(buf, "%d\n", &val))
+ return -EINVAL;
+
+ if (val > MAX_TSKIN_TRAN_THRESHOLD || val <= 0)
+ return -EINVAL;
+
+ gov->tskin_tran_threshold = val;
+ return count;
+}
+
+static struct astg_attribute tskin_tran_threshold_attr =
+ __ATTR(tskin_tran_threshold, 0644, tskin_tran_threshold_show,
+ tskin_tran_threshold_store);
+
+static ssize_t tskin_std_threshold_show(struct kobject *kobj,
+ struct attribute *attr,
+ char *buf)
+{
+ struct astg_ctx *gov = kobj_to_gov(kobj);
+
+ if (!gov)
+ return -ENODEV;
+
+ return sprintf(buf, "%d\n", gov->tskin_std_threshold);
+}
+
+static ssize_t tskin_std_threshold_store(struct kobject *kobj,
+ struct attribute *attr,
+ const char *buf,
+ size_t count)
+{
+ struct astg_ctx *gov = kobj_to_gov(kobj);
+ int val;
+
+ if (!gov)
+ return -ENODEV;
+
+ if (!sscanf(buf, "%d\n", &val))
+ return -EINVAL;
+
+ if (val > MAX_TSKIN_STD_THRESHOLD || val <= 0)
+ return -EINVAL;
+
+ gov->tskin_std_threshold = val;
+ return count;
+}
+
+static struct astg_attribute tskin_std_threshold_attr =
+ __ATTR(tskin_std_threshold, 0644, tskin_std_threshold_show,
+ tskin_std_threshold_store);
+
+static struct attribute *astg_default_attrs[] = {
+ &target_state_tdp_attr.attr,
+ &fup_period_attr.attr,
+ &tj_tran_threshold_attr.attr,
+ &tj_std_threshold_attr.attr,
+ &tj_std_fup_threshold_attr.attr,
+ &tskin_tran_threshold_attr.attr,
+ &tskin_std_threshold_attr.attr,
+ NULL,
+};
+
+static ssize_t astg_show(struct kobject *kobj,
+ struct attribute *attr, char *buf)
+{
+ struct astg_attribute *gov_attr = attr_to_gov_attr(attr);
+
+ if (!gov_attr->show)
+ return -EIO;
+
+ return gov_attr->show(kobj, attr, buf);
+}
+
+static ssize_t astg_store(struct kobject *kobj,
+ struct attribute *attr, const char *buf,
+ size_t len)
+{
+ struct astg_attribute *gov_attr = attr_to_gov_attr(attr);
+
+ if (!gov_attr->store)
+ return -EIO;
+
+ return gov_attr->store(kobj, attr, buf, len);
+}
+
+static const struct sysfs_ops astg_sysfs_ops = {
+ .show = astg_show,
+ .store = astg_store,
+};
+
+static struct kobj_type astg_ktype = {
+ .default_attrs = astg_default_attrs,
+ .sysfs_ops = &astg_sysfs_ops,
+};
+
+static int astg_zone_match(struct thermal_zone_device *thz, void *data)
+{
+ return strcmp((char *)data, thz->type) == 0;
+}
+
+static int astg_start(struct thermal_zone_device *tz)
+{
+ struct astg_ctx *gov;
+ struct adaptive_skin_thermal_gov_params *params;
+ int ret, max_poll, target_state_tdp;
+
+ gov = kzalloc(sizeof(*gov), GFP_KERNEL);
+ if (!gov) {
+ dev_err(&tz->device, "%s: Can't alloc governor data\n",
+ DRV_NAME);
+ return -ENOMEM;
+ }
+
+ ret = kobject_init_and_add(&gov->kobj, &astg_ktype,
+ &tz->device.kobj, DRV_NAME);
+ if (ret) {
+ dev_err(&tz->device, "%s: Can't init kobject\n", DRV_NAME);
+ kobject_put(&gov->kobj);
+ kfree(gov);
+ return ret;
+ }
+
+ if (tz->tzp->governor_params) {
+ params = (struct adaptive_skin_thermal_gov_params *)
+ tz->tzp->governor_params;
+
+ gov->tj_tran_threshold = params->tj_tran_threshold;
+ gov->tj_std_threshold = params->tj_std_threshold;
+ gov->tj_std_fup_threshold = params->tj_std_fup_threshold;
+ gov->tskin_tran_threshold = params->tskin_tran_threshold;
+ gov->tskin_std_threshold = params->tskin_std_threshold;
+ target_state_tdp = params->target_state_tdp;
+ } else {
+ gov->tj_tran_threshold = DEFAULT_TJ_TRAN_THRESHOLD;
+ gov->tj_std_threshold = DEFAULT_TJ_STD_THRESHOLD;
+ gov->tj_std_fup_threshold =
+ DEFAULT_TJ_STD_FORCE_UPDATE_THRESHOLD;
+ gov->tskin_tran_threshold = DEFAULT_TSKIN_TRAN_THRESHOLD;
+ gov->tskin_std_threshold = DEFAULT_TSKIN_STD_THRESHOLD;
+ target_state_tdp = DEFAULT_TARGET_STATE_TDP;
+ }
+
+ max_poll = MAX_ALLOWED_POLL << POLL_PER_STATE_SHIFT;
+ gov->poll_raise_gain = max_poll / (target_state_tdp + 1);
+ gov->poll_drop_gain = gov->poll_raise_gain;
+ gov->target_state_tdp = target_state_tdp;
+ gov->fup_period = DEFAULT_FORCE_UPDATE_PERIOD;
+ gov->std_offset = DEFAULT_TSKIN_STD_OFFSET;
+
+ gov->cpu_zone = thermal_zone_device_find(DEFAULT_CPU_ZONE_NAME,
+ astg_zone_match);
+ gov->gpu_zone = thermal_zone_device_find(DEFAULT_GPU_ZONE_NAME,
+ astg_zone_match);
+ tz_to_gov(tz) = gov;
+
+ return 0;
+}
+
+static void astg_stop(struct thermal_zone_device *tz)
+{
+ struct astg_ctx *gov = tz_to_gov(tz);
+
+ if (!gov)
+ return;
+
+ kobject_put(&gov->kobj);
+ kfree(gov);
+}
+
+static void astg_update_target_state(struct thermal_instance *instance,
+ enum ast_action action,
+ int over_trip)
+{
+ struct thermal_cooling_device *cdev = instance->cdev;
+ unsigned long cur_state, lower;
+
+ lower = over_trip >= 0 ? instance->lower : THERMAL_NO_TARGET;
+
+ cdev->ops->get_cur_state(cdev, &cur_state);
+
+ if (is_action_raise(action))
+ cur_state = cur_state < instance->upper ?
+ (cur_state + 1) : instance->upper;
+ else if (is_action_drop(action))
+ cur_state = cur_state > instance->lower ?
+ (cur_state - 1) : lower;
+
+ pr_debug("astg state : %lu -> %lu\n", instance->target, cur_state);
+ instance->target = cur_state;
+}
+
+static void astg_update_non_passive_instance(struct thermal_zone_device *tz,
+ struct thermal_instance *instance,
+ long trip_temp)
+{
+ if (tz->temperature >= trip_temp)
+ astg_update_target_state(instance, AST_ACT_TRAN_RAISE, 1);
+ else
+ astg_update_target_state(instance, AST_ACT_TRAN_DROP, -1);
+}
+
+static int get_target_poll_raise(struct astg_ctx *gov,
+ long cur_state,
+ int is_steady)
+{
+ unsigned target;
+
+ cur_state++;
+ target = (cur_state * gov->poll_raise_gain) >> POLL_PER_STATE_SHIFT;
+
+ if (is_steady)
+ target += 1 + ((unsigned)cur_state >> 3);
+
+ return (int)min(target, MAX_ALLOWED_POLL);
+}
+
+static int get_target_poll_drop(struct astg_ctx *gov,
+ long cur_state,
+ int is_steady)
+{
+ unsigned target;
+
+ target = MAX_ALLOWED_POLL - ((unsigned)(cur_state * gov->poll_drop_gain)
+ >> POLL_PER_STATE_SHIFT);
+
+ if (is_steady)
+ return (int)MAX_ALLOWED_POLL;
+
+ if (target > MAX_ALLOWED_POLL)
+ return 0;
+
+ return (int)target;
+}
+
+/*
+ * Return 1 if skin temp has stayed within threshold (gov->std_offset)
+ * for a long time.
+ */
+static int check_steady(struct astg_ctx *gov, long tz_temp, long trip_temp)
+{
+ int delta;
+
+ delta = tz_temp - trip_temp;
+ delta = abs(delta);
+
+ pr_debug(">> delta : %d\n", delta);
+
+ if (gov->std_offset > delta) {
+ gov->std_cnt++;
+ if (gov->std_cnt > STEADY_CHECK_PERIOD_THRESHOLD)
+ return 1;
+ } else
+ gov->std_cnt = 0;
+
+ return 0;
+}
+
+static void astg_init_gov_context(struct astg_ctx *gov, long trip_temp)
+{
+ gov->temp_last = trip_temp;
+ gov->cpu_zone->ops->get_temp(gov->cpu_zone, &gov->tcpu_last);
+ gov->gpu_zone->ops->get_temp(gov->gpu_zone, &gov->tgpu_last);
+ gov->std_cnt = 0;
+ gov->fup_raise_cnt = 0;
+ gov->fup_drop_cnt = 0;
+ gov->raise_cnt = 0;
+ gov->drop_cnt = 0;
+ gov->prev_action = AST_ACT_STAY;
+}
+
+
+static enum ast_action astg_get_target_action(struct thermal_zone_device *tz,
+ struct thermal_cooling_device *cdev,
+ long trip_temp,
+ enum thermal_trend trend)
+{
+ struct astg_ctx *gov = tz_to_gov(tz);
+ unsigned long cur_state;
+ long tz_temp;
+ long temp_last;
+ long tcpu, tcpu_last, tcpu_std_upperlimit, tcpu_std_lowerlimit;
+ long tcpu_std_fup_upperlimit, tcpu_std_fup_lowerlimit;
+ long tcpu_tran_lowerlimit, tcpu_tran_upperlimit;
+ long tgpu, tgpu_last, tgpu_std_upperlimit, tgpu_std_lowerlimit;
+ long tgpu_std_fup_upperlimit, tgpu_std_fup_lowerlimit;
+ long tgpu_tran_lowerlimit, tgpu_tran_upperlimit;
+ long tskin_std_lowerlimit, tskin_std_upperlimit;
+ long tskin_tran_boundary;
+ int over_trip;
+ int raise_cnt;
+ int drop_cnt;
+ int raise_period;
+ int drop_period;
+ int fup_raise_cnt;
+ int fup_drop_cnt;
+ int is_steady = 0;
+ enum ast_action action = AST_ACT_STAY;
+ enum ast_action prev_action;
+
+ tz_temp = tz->temperature;
+ raise_cnt = gov->raise_cnt;
+ drop_cnt = gov->drop_cnt;
+ temp_last = gov->temp_last;
+ tcpu_last = gov->tcpu_last;
+ tgpu_last = gov->tgpu_last;
+ fup_raise_cnt = gov->fup_raise_cnt;
+ fup_drop_cnt = gov->fup_drop_cnt;
+ prev_action = gov->prev_action;
+ gov->cpu_zone->ops->get_temp(gov->cpu_zone, &tcpu);
+ gov->gpu_zone->ops->get_temp(gov->gpu_zone, &tgpu);
+ over_trip = tz_temp - trip_temp;
+
+ /* get current throttling index of the cooling device */
+ cdev->ops->get_cur_state(cdev, &cur_state);
+
+ /* is tz_temp within threshold for long time ? */
+ is_steady = check_steady(gov, tz_temp, trip_temp);
+
+ /* when to decide new target is differ from current throttling index */
+ raise_period = get_target_poll_raise(gov, cur_state, is_steady);
+ drop_period = get_target_poll_drop(gov, cur_state, is_steady);
+
+ pr_debug(">> tcur:%ld, tlast:%ld, tr:%d\n", tz_temp, temp_last, trend);
+ pr_debug(">> c_last : %ld, c_cur : %ld\n", tcpu_last, tcpu);
+ pr_debug(">> g_last : %ld, g_cur : %ld\n", tgpu_last, tgpu);
+ pr_debug(">> ucp : %d, udp : %d, fru : %d, fdu : %d\n", raise_period,
+ drop_period, fup_raise_cnt, fup_drop_cnt);
+ pr_debug(">> is_steady : %d, uc:%d, udc:%d, lt:%ld, pa : %d\n",
+ is_steady, raise_cnt, drop_cnt, temp_last, prev_action);
+
+ if (is_action_transient_raise(prev_action) && tcpu > tcpu_last) {
+ gov->tcpu_last = tcpu;
+ gov->tgpu_last = tgpu;
+ }
+
+ switch (trend) {
+ /* skin temperature is raising */
+ case THERMAL_TREND_RAISING:
+ /*
+ tz_temp is over trip temp and also it is even higher than
+ after the latest action(state change) on cooling device
+ */
+ if (over_trip > 0 && tz_temp >= temp_last) {
+ /*
+ raise throttling index if tz_temp stayed over
+ trip temp longer than the time allowed with
+ the current throttling index.
+ */
+ if (raise_cnt > raise_period) {
+ action = AST_ACT_TRAN_RAISE;
+
+ /*
+ force update : if tz_temp stayed over trip temp
+ longer than the time allowed to device,
+ then raises throttling index
+ */
+ } else if ((tcpu >= tcpu_last || tgpu >= tgpu_last) &&
+ !is_action_raise(prev_action)
+ && fup_raise_cnt >= gov->fup_period) {
+ action = AST_ACT_TRAN_RAISE;
+ }
+ }
+
+ /* see if there's any big move in heat sources
+ while device is in steady range */
+ if (action == AST_ACT_STAY) {
+ tcpu_std_upperlimit = tcpu_last + gov->tj_std_threshold;
+ tgpu_std_upperlimit = tgpu_last + gov->tj_std_threshold;
+ tcpu_std_fup_upperlimit =
+ tcpu_last + gov->tj_std_fup_threshold;
+ tgpu_std_fup_upperlimit =
+ tgpu_last + gov->tj_std_fup_threshold;
+
+ tskin_std_lowerlimit =
+ trip_temp - gov->tskin_std_threshold;
+ /*
+ tz_temp is over lower limit of steady state but
+ temperature of any heat source is increased more than
+ its update threshold, then raises throttling index
+ */
+ if (tz_temp >= tskin_std_lowerlimit &&
+ (tcpu > tcpu_std_fup_upperlimit ||
+ tgpu > tgpu_std_fup_upperlimit ||
+ ((tcpu >= tcpu_std_upperlimit ||
+ tgpu >= tgpu_std_upperlimit) &&
+ !is_action_drop(prev_action)))) {
+ action = AST_ACT_STD_RAISE;
+ }
+ }
+ break;
+
+ /* skin temperature is dropping */
+ case THERMAL_TREND_DROPPING:
+ /*
+ tz_temp is under trip temp and it is even lower than
+ after the latest action(index change) on cooling device and it
+ stayed under tz_temp longer than the allowed for the current
+ throttling index
+ */
+ if (over_trip < 0 && drop_cnt > drop_period &&
+ tz_temp <= temp_last) {
+ tskin_tran_boundary =
+ trip_temp - gov->tskin_tran_threshold;
+
+ tcpu_tran_lowerlimit =
+ tcpu_last - gov->tj_tran_threshold;
+ tgpu_tran_lowerlimit =
+ tgpu_last - gov->tj_tran_threshold;
+ tcpu_tran_upperlimit =
+ tcpu_last + gov->tj_tran_threshold;
+ tgpu_tran_upperlimit =
+ tgpu_last + gov->tj_tran_threshold;
+ /*
+ * if tz_temp is only a little lower than the trip temp but
+ * the temperature of both heat sources decreased DRASTICALLY,
+ * drop the throttling index
+ */
+ if (tz_temp > tskin_tran_boundary &&
+ (tcpu <= tcpu_tran_lowerlimit &&
+ tgpu <= tgpu_tran_lowerlimit)) {
+ action = AST_ACT_TRAN_DROP;
+ /*
+ * if tz_temp is significantly lower than the trip temp and
+ * there is no DRASTIC INCREASE in either heat source, drop
+ * the throttling index
+ */
+ } else if (tz_temp < tskin_tran_boundary &&
+ (tcpu <= tcpu_tran_upperlimit &&
+ tgpu <= tgpu_tran_upperlimit)) {
+ action = AST_ACT_TRAN_DROP;
+ }
+ }
+
+ /*
+ * see if there's any big move in the heat sources while the
+ * device is in the steady range
+ */
+ if (action == AST_ACT_STAY) {
+ tskin_std_upperlimit =
+ trip_temp + gov->tskin_std_threshold;
+
+ tcpu_std_lowerlimit = tcpu_last - gov->tj_std_threshold;
+ tgpu_std_lowerlimit = tgpu_last - gov->tj_std_threshold;
+ tcpu_std_fup_lowerlimit =
+ tcpu_last - gov->tj_std_fup_threshold;
+ tgpu_std_fup_lowerlimit =
+ tgpu_last - gov->tj_std_fup_threshold;
+
+ if (tz_temp < tskin_std_upperlimit &&
+ (tcpu < tcpu_std_fup_lowerlimit ||
+ tgpu < tgpu_std_fup_lowerlimit ||
+ ((tcpu <= tcpu_std_lowerlimit ||
+ tgpu <= tgpu_std_lowerlimit) &&
+ !is_action_raise(prev_action)))) {
+ action = AST_ACT_STD_DROP;
+ }
+ }
+
+ /*
+ * force update: if tz_temp stayed under the trip temp longer
+ * than the time allowed for the device, drop the throttling
+ * index
+ */
+ if (action == AST_ACT_STAY && over_trip < 0 &&
+ tz_temp < temp_last &&
+ ((tcpu < tcpu_last || tgpu < tgpu_last) &&
+ !is_action_drop(prev_action) &&
+ fup_drop_cnt >= gov->fup_period)) {
+ action = AST_ACT_TRAN_DROP;
+ }
+ break;
+ default:
+ break;
+ }
+
+ /* decrease the throttling index if skin temp is well below its trip temp */
+ if (tz_temp < trip_temp - FORCE_DROP_THRESHOLD)
+ action = AST_ACT_TRAN_DROP;
+
+ if (is_action_steady(action)) {
+ gov->tcpu_last = tcpu;
+ gov->tgpu_last = tgpu;
+ }
+
+ return action;
+}
+
+static void astg_appy_action(struct thermal_zone_device *tz,
+ enum ast_action action,
+ struct thermal_instance *instance,
+ long trip_temp)
+{
+ struct astg_ctx *gov = tz_to_gov(tz);
+ long tz_temp;
+ int over_trip;
+ int raise_cnt;
+ int drop_cnt;
+ int fup_raise_cnt;
+ int fup_drop_cnt;
+
+ tz_temp = tz->temperature;
+ raise_cnt = gov->raise_cnt;
+ drop_cnt = gov->drop_cnt;
+ fup_raise_cnt = gov->fup_raise_cnt;
+ fup_drop_cnt = gov->fup_drop_cnt;
+ over_trip = tz_temp - trip_temp;
+
+ /* DO ACTION */
+ if (action != AST_ACT_STAY) {
+ astg_update_target_state(instance, action, over_trip);
+
+ if (over_trip > 0) {
+ gov->temp_last = min(tz_temp, trip_temp +
+ DEFAULT_TSKIN_STD_OFFSET);
+ } else {
+ gov->temp_last = max(tz_temp, trip_temp -
+ DEFAULT_TSKIN_STD_OFFSET);
+ }
+
+ if (is_action_transient_raise(action) &&
+ fup_raise_cnt >= gov->fup_period) {
+ fup_raise_cnt = 0;
+ } else if (is_action_raise(action)) {
+ if (fup_raise_cnt >= gov->fup_period) {
+ fup_raise_cnt = gov->fup_period -
+ FORCE_UPDATE_MOVEBACK;
+ } else {
+ fup_raise_cnt = max(0, fup_raise_cnt -
+ FORCE_UPDATE_MOVEBACK);
+ }
+ } else if (is_action_transient_drop(action) &&
+ fup_drop_cnt >= gov->fup_period) {
+ fup_drop_cnt = 0;
+ } else if (is_action_drop(action)) {
+ if (fup_drop_cnt >= gov->fup_period) {
+ fup_drop_cnt = gov->fup_period -
+ FORCE_UPDATE_MOVEBACK;
+ } else {
+ fup_drop_cnt = max(0, fup_drop_cnt -
+ FORCE_UPDATE_MOVEBACK);
+ }
+ }
+
+ raise_cnt = 0;
+ drop_cnt = 0;
+ } else {
+ if (over_trip > 0) {
+ raise_cnt++;
+ drop_cnt = 0;
+ fup_raise_cnt++;
+ fup_drop_cnt = 0;
+ } else {
+ raise_cnt = 0;
+ drop_cnt++;
+ fup_raise_cnt = 0;
+ fup_drop_cnt++;
+ }
+ }
+
+ gov->raise_cnt = raise_cnt;
+ gov->drop_cnt = drop_cnt;
+ gov->fup_raise_cnt = fup_raise_cnt;
+ gov->fup_drop_cnt = fup_drop_cnt;
+}
+
+static void astg_update_passive_instance(struct thermal_zone_device *tz,
+ struct thermal_instance *instance,
+ long trip_temp,
+ enum thermal_trend trend)
+{
+ struct astg_ctx *gov = tz_to_gov(tz);
+ struct thermal_cooling_device *cdev = instance->cdev;
+ enum ast_action action = AST_ACT_STAY;
+
+ /*
+ * For now there should be only one THERMAL_TRIP_PASSIVE trip
+ * point, for Tskin. The governor context for managing Tskin is
+ * initialized when the instance's target is THERMAL_NO_TARGET.
+ */
+ if (instance->target == THERMAL_NO_TARGET) {
+ astg_init_gov_context(gov, trip_temp);
+ action = AST_ACT_TRAN_RAISE;
+ } else {
+ action = astg_get_target_action(tz, cdev, trip_temp, trend);
+ }
+ astg_appy_action(tz, action, instance, trip_temp);
+
+ gov->prev_action = action;
+}
+
+static enum thermal_trend get_trend(struct thermal_zone_device *thz,
+ int trip)
+{
+ int temperature, last_temperature, tr;
+ long trip_temp;
+ enum thermal_trend new_trend = THERMAL_TREND_STABLE;
+
+ thz->ops->get_trip_temp(thz, trip, &trip_temp);
+
+ temperature = thz->temperature;
+ last_temperature = thz->last_temperature;
+ if (temperature < trip_temp + DEFAULT_TSKIN_STD_OFFSET &&
+ temperature > trip_temp - DEFAULT_TSKIN_STD_OFFSET) {
+ if (temperature > last_temperature + 50)
+ new_trend = THERMAL_TREND_RAISING;
+ else if (temperature < last_temperature - 50)
+ new_trend = THERMAL_TREND_DROPPING;
+ else
+ new_trend = THERMAL_TREND_STABLE;
+ } else {
+ tr = (TC1 * (temperature - last_temperature)) +
+ (TC2 * (temperature - trip_temp));
+ if (tr > 0)
+ new_trend = THERMAL_TREND_RAISING;
+ else if (tr < 0)
+ new_trend = THERMAL_TREND_DROPPING;
+ else
+ new_trend = THERMAL_TREND_STABLE;
+ }
+
+ return new_trend;
+}
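The two-regime trend heuristic in get_trend() can be sketched as a plain userspace function: inside the hysteresis band around the trip point it applies a 50 m°C deadband against the last sample, and outside the band it uses a PD-style weighted sum. This is a minimal sketch; the `band` parameter stands in for DEFAULT_TSKIN_STD_OFFSET, and the tc1/tc2 coefficient values used in testing are illustrative, not the driver's TC1/TC2.

```c
#include <assert.h>

enum trend { TREND_STABLE, TREND_RAISING, TREND_DROPPING };

/* Sketch of the governor's trend heuristic. 'band' plays the role of
 * DEFAULT_TSKIN_STD_OFFSET; tc1/tc2 mirror TC1/TC2 (values here are
 * caller-supplied, illustrative only). Temperatures in m°C. */
static enum trend compute_trend(long temp, long last_temp, long trip_temp,
                                long band, long tc1, long tc2)
{
    long tr;

    if (temp < trip_temp + band && temp > trip_temp - band) {
        /* near the trip point: 50 m°C deadband on the last sample */
        if (temp > last_temp + 50)
            return TREND_RAISING;
        if (temp < last_temp - 50)
            return TREND_DROPPING;
        return TREND_STABLE;
    }

    /* far from the trip point: PD-style weighted sum of the
     * temperature delta and the distance from the trip point */
    tr = tc1 * (temp - last_temp) + tc2 * (temp - trip_temp);
    if (tr > 0)
        return TREND_RAISING;
    if (tr < 0)
        return TREND_DROPPING;
    return TREND_STABLE;
}
```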
+
+static int astg_throttle(struct thermal_zone_device *tz, int trip)
+{
+ struct astg_ctx *gov = tz_to_gov(tz);
+ struct thermal_instance *instance;
+ long trip_temp;
+ enum thermal_trip_type trip_type;
+ enum thermal_trend trend;
+ unsigned long old_target;
+
+ gov->cpu_zone = thermal_zone_device_find(DEFAULT_CPU_ZONE_NAME,
+ astg_zone_match);
+ gov->gpu_zone = thermal_zone_device_find(DEFAULT_GPU_ZONE_NAME,
+ astg_zone_match);
+
+ if (gov->cpu_zone == NULL || gov->gpu_zone == NULL)
+ return -ENODEV;
+
+ tz->ops->get_trip_type(tz, trip, &trip_type);
+ tz->ops->get_trip_temp(tz, trip, &trip_temp);
+
+ mutex_lock(&tz->lock);
+
+ list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
+ if ((instance->trip != trip) ||
+ ((tz->temperature < trip_temp) &&
+ (instance->target == THERMAL_NO_TARGET)))
+ continue;
+
+ if (trip_type != THERMAL_TRIP_PASSIVE) {
+ astg_update_non_passive_instance(tz, instance,
+ trip_temp);
+ } else {
+ trend = get_trend(tz, trip);
+ old_target = instance->target;
+ astg_update_passive_instance(tz, instance, trip_temp,
+ trend);
+
+ if ((old_target == THERMAL_NO_TARGET) &&
+ (instance->target != THERMAL_NO_TARGET))
+ tz->passive++;
+ else if ((old_target != THERMAL_NO_TARGET) &&
+ (instance->target == THERMAL_NO_TARGET))
+ tz->passive--;
+ }
+
+ instance->cdev->updated = false;
+ }
+
+ list_for_each_entry(instance, &tz->thermal_instances, tz_node)
+ thermal_cdev_update(instance->cdev);
+
+ mutex_unlock(&tz->lock);
+
+ return 0;
+}
+
+static struct thermal_governor adaptive_skin_thermal_gov = {
+ .name = DRV_NAME,
+ .start = astg_start,
+ .stop = astg_stop,
+ .throttle = astg_throttle,
+};
+
+int thermal_gov_adaptive_skin_register(void)
+{
+ return thermal_register_governor(&adaptive_skin_thermal_gov);
+}
+
+void thermal_gov_adaptive_skin_unregister(void)
+{
+ thermal_unregister_governor(&adaptive_skin_thermal_gov);
+}
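When the governor decides AST_ACT_STAY, the apply-action path only advances its sample counters: the side of the trip point the zone is on increments its own raise/drop and force-update counters and zeroes the opposite ones. That bookkeeping reduces to the small helper below (struct and names are hypothetical mirrors of the gov fields, for illustration only).

```c
#include <assert.h>

/* Hypothetical mirror of the governor's counter fields. */
struct astg_counters {
    int raise_cnt, drop_cnt;
    int fup_raise_cnt, fup_drop_cnt;
};

/* One AST_ACT_STAY sample: being over the trip point advances the
 * raise-side counters and resets the drop side, and vice versa. */
static void count_stay_sample(struct astg_counters *c, int over_trip)
{
    if (over_trip > 0) {
        c->raise_cnt++;
        c->drop_cnt = 0;
        c->fup_raise_cnt++;
        c->fup_drop_cnt = 0;
    } else {
        c->raise_cnt = 0;
        c->drop_cnt++;
        c->fup_raise_cnt = 0;
        c->fup_drop_cnt++;
    }
}
```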
diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
index ffb4b9c..d9830b6 100644
--- a/drivers/thermal/thermal_core.c
+++ b/drivers/thermal/thermal_core.c
@@ -1964,6 +1964,10 @@
if (result)
return result;
+ result = thermal_gov_adaptive_skin_register();
+ if (result)
+ return result;
+
return thermal_gov_user_space_register();
}
@@ -1973,6 +1977,7 @@
thermal_gov_fair_share_unregister();
pid_thermal_gov_unregister();
thermal_gov_user_space_unregister();
+ thermal_gov_adaptive_skin_unregister();
}
static int __init thermal_init(void)
diff --git a/drivers/thermal/thermal_core.h b/drivers/thermal/thermal_core.h
index df4cdda..d4734be 100644
--- a/drivers/thermal/thermal_core.h
+++ b/drivers/thermal/thermal_core.h
@@ -77,6 +77,15 @@
static inline void pid_thermal_gov_unregister(void) {}
#endif /* CONFIG_THERMAL_GOV_PID */
+#ifdef CONFIG_THERMAL_GOV_ADAPTIVE_SKIN
+int thermal_gov_adaptive_skin_register(void);
+void thermal_gov_adaptive_skin_unregister(void);
+#else
+static inline int thermal_gov_adaptive_skin_register(void) { return 0; }
+static inline void thermal_gov_adaptive_skin_unregister(void) {}
+#endif /* CONFIG_THERMAL_GOV_ADAPTIVE_SKIN */
+
+
#ifdef CONFIG_THERMAL_GOV_USER_SPACE
int thermal_gov_user_space_register(void);
void thermal_gov_user_space_unregister(void);
diff --git a/drivers/usb/gadget/Kconfig b/drivers/usb/gadget/Kconfig
index d7bc45b..df28399 100644
--- a/drivers/usb/gadget/Kconfig
+++ b/drivers/usb/gadget/Kconfig
@@ -518,6 +518,12 @@
config USB_U_SERIAL
tristate
+config USB_U_CTRL_HSIC
+ tristate
+
+config USB_U_DATA_HSIC
+ tristate
+
config USB_F_SERIAL
tristate
@@ -837,6 +843,9 @@
select USB_F_ACM
select USB_LIBCOMPOSITE
select USB_U_SERIAL
+ select USB_F_SERIAL
+ select USB_U_CTRL_HSIC
+ select USB_U_DATA_HSIC
help
The Android Composite Gadget supports multiple USB
functions: adb, acm, mass storage, mtp, accessory
@@ -1004,4 +1013,17 @@
endchoice
+config QCT_USB_MODEM_SUPPORT
+ boolean "modem support in generic serial function driver"
+ depends on USB_G_ANDROID && USB_F_SERIAL
+ default y
+ help
+ This feature enables the modem functionality in the
+ generic serial function driver. It adds interrupt endpoint
+ support to send modem notifications to the host, CDC
+ descriptors to enumerate the generic serial function as a
+ MODEM, and CDC class requests to configure MODEM line
+ settings.
+ Say "y" to enable MODEM support in the generic serial
+ function driver.
+
endif # USB_GADGET
diff --git a/drivers/usb/gadget/Makefile b/drivers/usb/gadget/Makefile
index 1f8297c..e8b7e7c 100644
--- a/drivers/usb/gadget/Makefile
+++ b/drivers/usb/gadget/Makefile
@@ -46,6 +46,8 @@
usb_f_ss_lb-y := f_loopback.o f_sourcesink.o
obj-$(CONFIG_USB_F_SS_LB) += usb_f_ss_lb.o
obj-$(CONFIG_USB_U_SERIAL) += u_serial.o
+obj-$(CONFIG_USB_U_CTRL_HSIC) += u_ctrl_hsic.o
+obj-$(CONFIG_USB_U_DATA_HSIC) += u_data_hsic.o
usb_f_serial-y := f_serial.o
obj-$(CONFIG_USB_F_SERIAL) += usb_f_serial.o
usb_f_obex-y := f_obex.o
diff --git a/drivers/usb/gadget/android.c b/drivers/usb/gadget/android.c
index f95c21d..6eab5ef 100644
--- a/drivers/usb/gadget/android.c
+++ b/drivers/usb/gadget/android.c
@@ -29,6 +29,7 @@
#include <linux/usb/composite.h>
#include <linux/usb/gadget.h>
#include <linux/of_platform.h>
+#include <mach/usb_gadget_xport.h>
#include "gadget_chips.h"
@@ -37,6 +38,10 @@
#include "f_audio_source.c"
#include "f_midi.c"
#include "f_mass_storage.c"
+#ifdef CONFIG_DIAG_CHAR
+#include "f_diag.c"
+#endif
+#include "f_rmnet.c"
#include "f_mtp.c"
#include "f_accessory.c"
#define USB_ETH_RNDIS y
@@ -113,6 +118,31 @@
unsigned short ffs_string_ids;
};
+#define GSERIAL_NO_PORTS 8
+static unsigned int no_tty_ports;
+static unsigned int no_hsic_sports;
+static unsigned int nr_ports;
+
+static struct port_info {
+ enum transport_type transport;
+ enum fserial_func_type func_type;
+ unsigned port_num;
+ unsigned client_port_num;
+} gserial_ports[GSERIAL_NO_PORTS];
+
+static enum fserial_func_type serial_str_to_func_type(const char *name)
+{
+ if (!name)
+ return USB_FSER_FUNC_NONE;
+
+ if (!strcasecmp("MODEM", name))
+ return USB_FSER_FUNC_MODEM;
+ if (!strcasecmp("SERIAL", name))
+ return USB_FSER_FUNC_SERIAL;
+
+ return USB_FSER_FUNC_NONE;
+}
+
static struct class *android_class;
static struct android_dev *_android_dev;
static int android_bind_config(struct usb_configuration *c);
@@ -996,6 +1026,396 @@
.bind_config = nvusb_function_bind_config,
};
+/* Serial */
+static char serial_transports[64]; /*enabled FSERIAL ports - "tty[,sdio]"*/
+#define MAX_SERIAL_INSTANCES 5
+static struct serial_function_config {
+ int instances;
+ int instances_on;
+ struct usb_function *f_serial[MAX_SERIAL_INSTANCES];
+ struct usb_function_instance *f_serial_inst[MAX_SERIAL_INSTANCES];
+ struct usb_function *f_serial_modem[MAX_SERIAL_INSTANCES];
+ struct usb_function_instance *f_serial_modem_inst[MAX_SERIAL_INSTANCES];
+} *serial_modem_config;
+
+static int gserial_init_port(int port_num, const char *name, char *serial_type)
+{
+ enum transport_type transport;
+ enum fserial_func_type func_type;
+
+ if (port_num >= GSERIAL_NO_PORTS)
+ return -ENODEV;
+
+ transport = str_to_xport(name);
+ func_type = serial_str_to_func_type(serial_type);
+
+ pr_info("%s, port:%d, transport:%s, type:%d\n", __func__,
+ port_num, xport_to_str(transport), func_type);
+
+ gserial_ports[port_num].transport = transport;
+ gserial_ports[port_num].func_type = func_type;
+ gserial_ports[port_num].port_num = port_num;
+
+ switch (transport) {
+ case USB_GADGET_XPORT_TTY:
+ gserial_ports[port_num].client_port_num = no_tty_ports;
+ no_tty_ports++;
+ break;
+ case USB_GADGET_XPORT_HSIC:
+ gserial_ports[port_num].client_port_num = no_hsic_sports;
+ no_hsic_sports++;
+ break;
+ default:
+ pr_err("%s: unsupported transport: %u\n",
+ __func__, gserial_ports[port_num].transport);
+ return -ENODEV;
+ }
+
+ nr_ports++;
+
+ return 0;
+}
+
+static int
+serial_function_init(struct android_usb_function *f,
+ struct usb_composite_dev *cdev)
+{
+ int i, ports = 0;
+ int ret;
+ struct serial_function_config *config;
+ char *name, *str[2];
+ char buf[80], *b;
+
+ serial_modem_config = config = kzalloc(sizeof(*config), GFP_KERNEL);
+ if (!config)
+ return -ENOMEM;
+ f->config = config;
+
+ strcpy(serial_transports, "hsic:modem,tty,tty,tty:serial");
+ strlcpy(buf, serial_transports, sizeof(buf));
+ pr_info("%s: init string: %s\n", __func__, buf);
+
+ b = strim(buf);
+
+ while (b) {
+ str[0] = str[1] = 0;
+ name = strsep(&b, ",");
+ if (name) {
+ str[0] = strsep(&name, ":");
+ if (str[0])
+ str[1] = strsep(&name, ":");
+ }
+ ret = gserial_init_port(ports, str[0], str[1]);
+ if (ret) {
+ pr_err("serial: Cannot open port '%s'\n", str[0]);
+ goto out;
+ }
+ ports++;
+ }
+
+ for (i = 0; i < no_tty_ports; i++) {
+ config->f_serial_inst[i] = usb_get_function_instance("gser");
+ if (IS_ERR(config->f_serial_inst[i])) {
+ ret = PTR_ERR(config->f_serial_inst[i]);
+ goto err_usb_get_function_instance;
+ }
+ config->f_serial[i] = usb_get_function(config->f_serial_inst[i]);
+ if (IS_ERR(config->f_serial[i])) {
+ ret = PTR_ERR(config->f_serial[i]);
+ goto err_usb_get_function;
+ }
+ }
+
+ for (i = 0; i < no_hsic_sports; i++) {
+ config->f_serial_modem_inst[i] = usb_get_function_instance("modem");
+ if (IS_ERR(config->f_serial_modem_inst[i])) {
+ ret = PTR_ERR(config->f_serial_modem_inst[i]);
+ goto err_modem_get_function_instance;
+ }
+ config->f_serial_modem[i] = usb_get_function(config->f_serial_modem_inst[i]);
+ if (IS_ERR(config->f_serial_modem[i])) {
+ ret = PTR_ERR(config->f_serial_modem[i]);
+ goto err_modem_get_function;
+ }
+ }
+
+ return 0;
+
+/* unwind the modem instances first, then fall through to the tty ones */
+err_modem_get_function_instance:
+ while (--i >= 0) {
+ usb_put_function(config->f_serial_modem[i]);
+err_modem_get_function:
+ usb_put_function_instance(config->f_serial_modem_inst[i]);
+ }
+ i = no_tty_ports;
+err_usb_get_function_instance:
+ while (--i >= 0) {
+ usb_put_function(config->f_serial[i]);
+err_usb_get_function:
+ usb_put_function_instance(config->f_serial_inst[i]);
+ }
+out:
+ kfree(config);
+ f->config = NULL;
+ return ret;
+}
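The init string walked above ("hsic:modem,tty,tty,tty:serial") is a comma list of "xport[:type]" tokens split with strsep(). A userspace analogue of that walk is sketched below; parse_ports() and struct port_cfg are hypothetical names, and strsep() is the glibc/BSD extension the kernel also provides.

```c
#define _DEFAULT_SOURCE        /* expose strsep() in glibc userspace */
#include <assert.h>
#include <string.h>

/* Hypothetical stand-in for one gserial_ports[] entry. */
struct port_cfg {
    char xport[16];    /* e.g. "tty", "hsic" */
    char type[16];     /* e.g. "modem", "serial", or empty */
};

/* Split "xport[:type],..." the way serial_function_init() does and
 * fill 'out'; returns the number of ports parsed. */
static int parse_ports(const char *spec, struct port_cfg *out, int max)
{
    char buf[80], *b, *tok, *name;
    int n = 0;

    strncpy(buf, spec, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';

    for (b = buf; b && n < max; ) {
        tok = strsep(&b, ",");
        if (!tok || !*tok)
            continue;
        name = strsep(&tok, ":");    /* tok becomes NULL if no ':' */
        strncpy(out[n].xport, name, sizeof(out[n].xport) - 1);
        out[n].xport[sizeof(out[n].xport) - 1] = '\0';
        out[n].type[0] = '\0';
        if (tok) {
            strncpy(out[n].type, tok, sizeof(out[n].type) - 1);
            out[n].type[sizeof(out[n].type) - 1] = '\0';
        }
        n++;
    }
    return n;
}
```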
+
+static void serial_function_cleanup(struct android_usb_function *f)
+{
+ int i;
+ struct serial_function_config *config = f->config;
+
+ for (i = 0; i < no_tty_ports; i++) {
+ usb_put_function(config->f_serial[i]);
+ usb_put_function_instance(config->f_serial_inst[i]);
+ }
+ kfree(f->config);
+ f->config = NULL;
+}
+
+static int
+serial_function_bind_config(struct android_usb_function *f,
+ struct usb_configuration *c)
+{
+ int i;
+ int ret = 0;
+ struct serial_function_config *config = f->config;
+
+ for (i = 0; i < nr_ports; i++) {
+ if (gserial_ports[i].func_type == USB_FSER_FUNC_SERIAL)
+ ret = usb_add_function(c, config->f_serial[gserial_ports[i].client_port_num]);
+ if (ret) {
+ pr_err("Could not bind serial%u config\n", i);
+ goto err_usb_add_function;
+ }
+ }
+
+ return 0;
+
+err_usb_add_function:
+ while (--i >= 0)
+ usb_remove_function(c, config->f_serial[gserial_ports[i].client_port_num]);
+ return ret;
+}
+
+/* rmnet transport string format (per port): "ctrl0,data0,ctrl1,data1..." */
+#define MAX_XPORT_STR_LEN 50
+static char rmnet_transports[MAX_XPORT_STR_LEN];
+static int rmnet_nports;
+
+static void rmnet_function_cleanup(struct android_usb_function *f)
+{
+ frmnet_cleanup();
+}
+
+static int rmnet_function_bind_config(struct android_usb_function *f,
+ struct usb_configuration *c)
+{
+ int i, err = 0;
+
+ for (i = 0; i < rmnet_nports; i++) {
+ err = frmnet_bind_config(c, i);
+ if (err) {
+ pr_err("Could not bind rmnet%u config\n", i);
+ break;
+ }
+ }
+
+ return err;
+}
+
+static int rmnet_function_init(struct android_usb_function *f,
+ struct usb_composite_dev *cdev)
+{
+ int err = 0;
+ char *name;
+ char *ctrl_name;
+ char *data_name;
+ char buf[MAX_XPORT_STR_LEN], *b;
+
+ strcpy(rmnet_transports, "HSIC:HSIC");
+ strlcpy(buf, rmnet_transports, sizeof(buf));
+ b = strim(buf);
+
+ while (b) {
+ ctrl_name = data_name = 0;
+ name = strsep(&b, ",");
+ if (name) {
+ ctrl_name = strsep(&name, ":");
+ if (ctrl_name)
+ data_name = strsep(&name, ":");
+ }
+ if (ctrl_name && data_name) {
+ err = frmnet_init_port(ctrl_name, data_name);
+ if (err) {
+ pr_err("rmnet: Cannot open ctrl port:"
+ "'%s' data port:'%s'\n",
+ ctrl_name, data_name);
+ goto out;
+ }
+ rmnet_nports++;
+ }
+ }
+
+ err = rmnet_gport_setup();
+ if (err) {
+ pr_err("rmnet: Cannot setup transports\n");
+ goto out;
+ }
+out:
+ return err;
+}
+
+static ssize_t rmnet_transports_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ return snprintf(buf, PAGE_SIZE, "%s\n", rmnet_transports);
+}
+
+static ssize_t rmnet_transports_store(
+ struct device *device, struct device_attribute *attr,
+ const char *buff, size_t size)
+{
+ strlcpy(rmnet_transports, buff, sizeof(rmnet_transports));
+
+ return size;
+}
+
+static struct device_attribute dev_attr_rmnet_transports =
+ __ATTR(transports, S_IRUGO | S_IWUSR,
+ rmnet_transports_show,
+ rmnet_transports_store);
+static struct device_attribute *rmnet_function_attributes[] = {
+ &dev_attr_rmnet_transports,
+ NULL };
+
+static struct android_usb_function rmnet_function = {
+ .name = "rmnet",
+ .cleanup = rmnet_function_cleanup,
+ .bind_config = rmnet_function_bind_config,
+ .attributes = rmnet_function_attributes,
+ .init = rmnet_function_init,
+};
+
+/* Diag */
+#ifdef CONFIG_DIAG_CHAR
+static char diag_clients[32]; /*enabled DIAG clients- "diag[,diag_mdm]" */
+static ssize_t clients_store(struct device *device, struct device_attribute *attr, const char *buff, size_t size)
+{
+ strlcpy(diag_clients, buff, sizeof(diag_clients));
+
+ return size;
+}
+
+static DEVICE_ATTR(clients, S_IWUSR, NULL, clients_store);
+static struct device_attribute *diag_function_attributes[] = { &dev_attr_clients, NULL };
+
+static int diag_function_init(struct android_usb_function *f, struct usb_composite_dev *cdev)
+{
+ return diag_setup();
+}
+
+static void diag_function_cleanup(struct android_usb_function *f)
+{
+ diag_cleanup();
+}
+
+static int diag_function_bind_config(struct android_usb_function *f, struct usb_configuration *c)
+{
+#if 1
+ int err;
+ int (*notify) (uint32_t, const char *);
+
+ /* notify = _android_dev->pdata->update_pid_and_serial_num; */
+ notify = NULL;
+
+ err = diag_function_add(c, DIAG_MDM, notify); /* Note:DIAG_LEGACY->DIAG_MDM */
+ if (err)
+ pr_err("diag: Cannot open channel '%s'", DIAG_MDM); /* Note:DIAG_LEGACY->DIAG_MDM */
+
+#else
+ char *name;
+ char buf[32], *b;
+ int once = 0, err = -1;
+ int (*notify) (uint32_t, const char *);
+
+ strlcpy(buf, diag_clients, sizeof(buf));
+ b = strim(buf);
+
+ while (b) {
+ notify = NULL;
+ name = strsep(&b, ",");
+ /* Allow only first diag channel to update pid and serial no */
+ if (_android_dev->pdata && !once++)
+ notify = _android_dev->pdata->update_pid_and_serial_num;
+
+ if (name) {
+ if (strcmp(name, f->name) != 0)
+ continue;
+ err = diag_function_add(c, name, notify);
+ if (err)
+ pr_err("diag: Cannot open channel '%s'", name);
+ }
+ }
+#endif
+ return err;
+}
+
+static struct android_usb_function diag_function = {
+ .name = DIAG_MDM,
+ .init = diag_function_init,
+ .cleanup = diag_function_cleanup,
+ .bind_config = diag_function_bind_config,
+ .attributes = diag_function_attributes,
+};
+#endif
+
+static int
+modem_function_bind_config(struct android_usb_function *f,
+ struct usb_configuration *c)
+{
+ int i;
+ int ret = 0;
+ struct serial_function_config *config = serial_modem_config;
+
+ for (i = 0; i < nr_ports; i++) {
+ pr_info("[USB] %s: client num = %d\n",__func__,gserial_ports[i].client_port_num);
+ if (gserial_ports[i].func_type == USB_FSER_FUNC_MODEM)
+ ret = usb_add_function(c, config->f_serial_modem[gserial_ports[i].client_port_num]);
+ if (ret) {
+ pr_err("Could not bind serial%u config\n", i);
+ goto err_usb_add_function;
+ }
+ }
+
+ return 0;
+
+err_usb_add_function:
+ while (i-- > 0)
+ usb_remove_function(c, config->f_serial_modem[gserial_ports[i].client_port_num]);
+ return ret;
+}
+
+static void modem_function_cleanup(struct android_usb_function *f)
+{
+ int i;
+ /*
+ * The modem ports share serial_function's config (f->config is
+ * never set for this function); serial_function_cleanup() owns
+ * and frees the struct.
+ */
+ struct serial_function_config *config = serial_modem_config;
+
+ if (!config)
+ return;
+
+ for (i = 0; i < no_hsic_sports; i++) {
+ usb_put_function(config->f_serial_modem[i]);
+ usb_put_function_instance(config->f_serial_modem_inst[i]);
+ }
+}
+
+static struct android_usb_function serial_function = {
+ .name = "serial",
+ .init = serial_function_init,
+ .cleanup = serial_function_cleanup,
+ .bind_config = serial_function_bind_config,
+};
+
+static struct android_usb_function modem_function = {
+ .name = "modem",
+ .cleanup = modem_function_cleanup,
+ .bind_config = modem_function_bind_config,
+};
+
static int midi_function_init(struct android_usb_function *f,
struct usb_composite_dev *cdev)
{
@@ -1052,7 +1472,6 @@
static struct android_usb_function *supported_functions[] = {
&ffs_function,
- &acm_function,
&mtp_function,
&ptp_function,
&rndis_function,
@@ -1060,6 +1479,13 @@
&accessory_function,
&audio_source_function,
&nvusb_function,
+ &serial_function,
+ &modem_function,
+ &acm_function,
+#ifdef CONFIG_DIAG_CHAR
+ &diag_function,
+#endif
+ &rmnet_function,
&midi_function,
NULL
};
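The supported_functions list above is the usual NULL-terminated table of function descriptors that android.c walks by name when a composition string is enabled. A minimal userspace model of that lookup follows; struct gadget_fn and find_fn() are simplified stand-ins, not the driver's actual types.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for struct android_usb_function. */
struct gadget_fn {
    const char *name;
};

static struct gadget_fn serial_fn = { "serial" };
static struct gadget_fn modem_fn  = { "modem" };
static struct gadget_fn rmnet_fn  = { "rmnet" };

/* NULL-terminated, like supported_functions[]. */
static struct gadget_fn *fn_table[] = {
    &serial_fn, &modem_fn, &rmnet_fn, NULL
};

/* Walk the table and return the entry whose name matches, or NULL. */
static struct gadget_fn *find_fn(struct gadget_fn **tbl, const char *name)
{
    for (; *tbl; tbl++)
        if (strcmp((*tbl)->name, name) == 0)
            return *tbl;
    return NULL;
}
```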
diff --git a/drivers/usb/gadget/f_diag.c b/drivers/usb/gadget/f_diag.c
new file mode 100644
index 0000000..8160a37
--- /dev/null
+++ b/drivers/usb/gadget/f_diag.c
@@ -0,0 +1,865 @@
+/* drivers/usb/gadget/f_diag.c
+ * Diag Function Device - Route ARM9 and ARM11 DIAG messages
+ * between HOST and DEVICE.
+ * Copyright (C) 2007 Google, Inc.
+ * Copyright (c) 2008-2013, The Linux Foundation. All rights reserved.
+ * Author: Brian Swetland <swetland@google.com>
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/platform_device.h>
+#include <linux/ratelimit.h>
+
+#include <mach/usbdiag.h>
+
+#include <linux/usb/composite.h>
+#include <linux/usb/gadget.h>
+#include <linux/workqueue.h>
+#include <linux/debugfs.h>
+
+static DEFINE_SPINLOCK(ch_lock);
+static LIST_HEAD(usb_diag_ch_list);
+
+static struct usb_interface_descriptor intf_desc = {
+ .bLength = sizeof intf_desc,
+ .bDescriptorType = USB_DT_INTERFACE,
+ .bNumEndpoints = 2,
+ .bInterfaceClass = 0xFF,
+ .bInterfaceSubClass = 0xFF,
+ .bInterfaceProtocol = 0xFF,
+};
+
+static struct usb_endpoint_descriptor hs_bulk_in_desc = {
+ .bLength = USB_DT_ENDPOINT_SIZE,
+ .bDescriptorType = USB_DT_ENDPOINT,
+ .bEndpointAddress = USB_DIR_IN,
+ .bmAttributes = USB_ENDPOINT_XFER_BULK,
+ .wMaxPacketSize = __constant_cpu_to_le16(512),
+ .bInterval = 0,
+};
+
+static struct usb_endpoint_descriptor fs_bulk_in_desc = {
+ .bLength = USB_DT_ENDPOINT_SIZE,
+ .bDescriptorType = USB_DT_ENDPOINT,
+ .bEndpointAddress = USB_DIR_IN,
+ .bmAttributes = USB_ENDPOINT_XFER_BULK,
+ .wMaxPacketSize = __constant_cpu_to_le16(64),
+ .bInterval = 0,
+};
+
+static struct usb_endpoint_descriptor hs_bulk_out_desc = {
+ .bLength = USB_DT_ENDPOINT_SIZE,
+ .bDescriptorType = USB_DT_ENDPOINT,
+ .bEndpointAddress = USB_DIR_OUT,
+ .bmAttributes = USB_ENDPOINT_XFER_BULK,
+ .wMaxPacketSize = __constant_cpu_to_le16(512),
+ .bInterval = 0,
+};
+
+static struct usb_endpoint_descriptor fs_bulk_out_desc = {
+ .bLength = USB_DT_ENDPOINT_SIZE,
+ .bDescriptorType = USB_DT_ENDPOINT,
+ .bEndpointAddress = USB_DIR_OUT,
+ .bmAttributes = USB_ENDPOINT_XFER_BULK,
+ .wMaxPacketSize = __constant_cpu_to_le16(64),
+ .bInterval = 0,
+};
+
+static struct usb_endpoint_descriptor ss_bulk_in_desc = {
+ .bLength = USB_DT_ENDPOINT_SIZE,
+ .bDescriptorType = USB_DT_ENDPOINT,
+ .bEndpointAddress = USB_DIR_IN,
+ .bmAttributes = USB_ENDPOINT_XFER_BULK,
+ .wMaxPacketSize = __constant_cpu_to_le16(1024),
+};
+
+static struct usb_ss_ep_comp_descriptor ss_bulk_in_comp_desc = {
+ .bLength = sizeof ss_bulk_in_comp_desc,
+ .bDescriptorType = USB_DT_SS_ENDPOINT_COMP,
+
+ /* the following 2 values can be tweaked if necessary */
+ /* .bMaxBurst = 0, */
+ /* .bmAttributes = 0, */
+};
+
+static struct usb_endpoint_descriptor ss_bulk_out_desc = {
+ .bLength = USB_DT_ENDPOINT_SIZE,
+ .bDescriptorType = USB_DT_ENDPOINT,
+ .bEndpointAddress = USB_DIR_OUT,
+ .bmAttributes = USB_ENDPOINT_XFER_BULK,
+ .wMaxPacketSize = __constant_cpu_to_le16(1024),
+};
+
+static struct usb_ss_ep_comp_descriptor ss_bulk_out_comp_desc = {
+ .bLength = sizeof ss_bulk_out_comp_desc,
+ .bDescriptorType = USB_DT_SS_ENDPOINT_COMP,
+
+ /* the following 2 values can be tweaked if necessary */
+ /* .bMaxBurst = 0, */
+ /* .bmAttributes = 0, */
+};
+
+static struct usb_descriptor_header *fs_diag_desc[] = {
+ (struct usb_descriptor_header *)&intf_desc,
+ (struct usb_descriptor_header *)&fs_bulk_in_desc,
+ (struct usb_descriptor_header *)&fs_bulk_out_desc,
+ NULL,
+};
+
+static struct usb_descriptor_header *hs_diag_desc[] = {
+ (struct usb_descriptor_header *)&intf_desc,
+ (struct usb_descriptor_header *)&hs_bulk_in_desc,
+ (struct usb_descriptor_header *)&hs_bulk_out_desc,
+ NULL,
+};
+
+static struct usb_descriptor_header *ss_diag_desc[] = {
+ (struct usb_descriptor_header *)&intf_desc,
+ (struct usb_descriptor_header *)&ss_bulk_in_desc,
+ (struct usb_descriptor_header *)&ss_bulk_in_comp_desc,
+ (struct usb_descriptor_header *)&ss_bulk_out_desc,
+ (struct usb_descriptor_header *)&ss_bulk_out_comp_desc,
+ NULL,
+};
+
+/**
+ * struct diag_context - USB diag function driver private structure
+ * @function: function structure for USB interface
+ * @out: USB OUT endpoint struct
+ * @in: USB IN endpoint struct
+ * @read_pool: List of requests used for Rx (OUT ep)
+ * @write_pool: List of requests used for Tx (IN ep)
+ * @lock: Spinlock to protect the read_pool and write_pool lists
+ * @cdev: USB composite device struct
+ * @ch: USB diag channel
+ *
+ */
+struct diag_context {
+ struct usb_function function;
+ struct usb_ep *out;
+ struct usb_ep *in;
+ struct list_head read_pool;
+ struct list_head write_pool;
+ spinlock_t lock;
+ unsigned configured;
+ struct usb_composite_dev *cdev;
+ int (*update_pid_and_serial_num) (uint32_t, const char *);
+ struct usb_diag_ch *ch;
+
+ /* pkt counters */
+ unsigned long dpkts_tolaptop;
+ unsigned long dpkts_tomodem;
+ unsigned dpkts_tolaptop_pending;
+
+ /* A list node inside the diag_dev_list */
+ struct list_head list_item;
+};
+
+static struct list_head diag_dev_list;
+
+static inline struct diag_context *func_to_diag(struct usb_function *f)
+{
+ return container_of(f, struct diag_context, function);
+}
+
+static void diag_update_pid_and_serial_num(struct diag_context *ctxt)
+{
+ struct usb_composite_dev *cdev = ctxt->cdev;
+ struct usb_gadget_strings *table;
+ struct usb_string *s;
+
+ if (!ctxt->update_pid_and_serial_num)
+ return;
+
+ /*
+ * update the pid and serial number for dload only if the diag
+ * interface is the zeroth interface.
+ */
+ if (intf_desc.bInterfaceNumber)
+ return;
+
+ /* pass on product id and serial number to dload */
+ if (!cdev->desc.iSerialNumber) {
+ ctxt->update_pid_and_serial_num(cdev->desc.idProduct, 0);
+ return;
+ }
+
+ /*
+ * The serial number is filled in by the composite driver, so
+ * it is fair to assume it will always be found in the first
+ * table of strings.
+ */
+ table = *(cdev->driver->strings);
+ for (s = table->strings; s && s->s; s++)
+ if (s->id == cdev->desc.iSerialNumber) {
+ ctxt->update_pid_and_serial_num(cdev->desc.idProduct, s->s);
+ break;
+ }
+}
+
+static void diag_write_complete(struct usb_ep *ep, struct usb_request *req)
+{
+ struct diag_context *ctxt = ep->driver_data;
+ struct diag_request *d_req = req->context;
+ unsigned long flags;
+
+ ctxt->dpkts_tolaptop_pending--;
+
+ if (!req->status) {
+ if ((req->length >= ep->maxpacket) && ((req->length % ep->maxpacket) == 0)) {
+ ctxt->dpkts_tolaptop_pending++;
+ req->length = 0;
+ d_req->actual = req->actual;
+ d_req->status = req->status;
+ /* Queue zero length packet */
+ if (!usb_ep_queue(ctxt->in, req, GFP_ATOMIC))
+ return;
+ }
+ }
+
+ spin_lock_irqsave(&ctxt->lock, flags);
+ list_add_tail(&req->list, &ctxt->write_pool);
+ if (req->length != 0) {
+ d_req->actual = req->actual;
+ d_req->status = req->status;
+ }
+ spin_unlock_irqrestore(&ctxt->lock, flags);
+
+ if (ctxt->ch && ctxt->ch->notify)
+ ctxt->ch->notify(ctxt->ch->priv, USB_DIAG_WRITE_DONE, d_req);
+}
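diag_write_complete() above re-queues a zero-length packet whenever a completed IN transfer is a non-zero exact multiple of the endpoint's maxpacket size, so the host can tell the bulk transfer has ended. The predicate it tests is simply:

```c
#include <assert.h>

/* A bulk IN transfer that fills an exact number of max-size packets
 * needs a trailing zero-length packet (ZLP) to mark its end; a short
 * (or empty) final packet already terminates the transfer. */
static int needs_zlp(unsigned int length, unsigned int maxpacket)
{
    return length >= maxpacket && (length % maxpacket) == 0;
}
```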
+
+static void diag_read_complete(struct usb_ep *ep, struct usb_request *req)
+{
+ struct diag_context *ctxt = ep->driver_data;
+ struct diag_request *d_req = req->context;
+ unsigned long flags;
+
+ d_req->actual = req->actual;
+ d_req->status = req->status;
+
+ spin_lock_irqsave(&ctxt->lock, flags);
+ list_add_tail(&req->list, &ctxt->read_pool);
+ spin_unlock_irqrestore(&ctxt->lock, flags);
+
+ ctxt->dpkts_tomodem++;
+
+ if (ctxt->ch && ctxt->ch->notify)
+ ctxt->ch->notify(ctxt->ch->priv, USB_DIAG_READ_DONE, d_req);
+}
+
+/**
+ * usb_diag_open() - Open a diag channel over USB
+ * @name: Name of the channel
+ * @priv: Private structure pointer which will be passed in notify()
+ * @notify: Callback function to receive notifications
+ *
+ * This function iterates over the available channels and returns
+ * the channel handler if the name matches. The notify callback is called
+ * for CONNECT, DISCONNECT, READ_DONE and WRITE_DONE events.
+ *
+ */
+struct usb_diag_ch *usb_diag_open(const char *name, void *priv,
+ void (*notify)(void *, unsigned, struct diag_request *))
+{
+ struct usb_diag_ch *ch;
+ unsigned long flags;
+ int found = 0;
+
+ pr_info("[USB] %s: name: %s\n", __func__, name);
+ spin_lock_irqsave(&ch_lock, flags);
+ /* Check if we already have a channel with this name */
+ list_for_each_entry(ch, &usb_diag_ch_list, list) {
+ if (!strcmp(name, ch->name)) {
+ found = 1;
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&ch_lock, flags);
+
+ /* Only a newly allocated channel may be added to the list; re-adding
+ * an already-listed channel would corrupt usb_diag_ch_list. */
+ if (!found) {
+ ch = kzalloc(sizeof(*ch), GFP_KERNEL);
+ if (!ch)
+ return ERR_PTR(-ENOMEM);
+
+ ch->name = name;
+ spin_lock_irqsave(&ch_lock, flags);
+ list_add_tail(&ch->list, &usb_diag_ch_list);
+ spin_unlock_irqrestore(&ch_lock, flags);
+ }
+
+ ch->priv = priv;
+ ch->notify = notify;
+
+ return ch;
+}
+
+EXPORT_SYMBOL(usb_diag_open);
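The lookup-or-allocate flow of usb_diag_open() can be sketched in plain C. This is a user-space approximation with a singly linked list and no locking; `ch_open` and the `struct ch` layout are made-up illustrations, not the kernel API:

```c
#include <stdlib.h>
#include <string.h>

struct ch {
	const char *name;
	void *priv;
	struct ch *next;
};

static struct ch *ch_list;

/* Look up a channel by name; allocate and link a new one only when no
 * match exists, mirroring the !found path in usb_diag_open(). */
static struct ch *ch_open(const char *name, void *priv)
{
	struct ch *c;

	for (c = ch_list; c; c = c->next)
		if (!strcmp(c->name, name))
			break;

	if (!c) {
		c = calloc(1, sizeof(*c));
		if (!c)
			return NULL;
		/* link only the newly created entry */
		c->next = ch_list;
		ch_list = c;
	}
	c->name = name;
	c->priv = priv;
	return c;
}
```

Opening the same name twice returns the same entry with refreshed callbacks, which is why the real code must not re-link an already listed channel.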
+
+/**
+ * usb_diag_close() - Close a diag channel over USB
+ * @ch: Channel handler
+ *
+ * This function closes the diag channel.
+ *
+ */
+void usb_diag_close(struct usb_diag_ch *ch)
+{
+ struct diag_context *dev = NULL;
+ unsigned long flags;
+
+ spin_lock_irqsave(&ch_lock, flags);
+ ch->priv = NULL;
+ ch->notify = NULL;
+ /* Free up the resources since the channel is no longer active */
+ list_del(&ch->list);
+ list_for_each_entry(dev, &diag_dev_list, list_item)
+ if (dev->ch == ch)
+ dev->ch = NULL;
+ kfree(ch);
+
+ spin_unlock_irqrestore(&ch_lock, flags);
+}
+
+EXPORT_SYMBOL(usb_diag_close);
+
+static void free_reqs(struct diag_context *ctxt)
+{
+ struct list_head *act, *tmp;
+ struct usb_request *req;
+
+ list_for_each_safe(act, tmp, &ctxt->write_pool) {
+ req = list_entry(act, struct usb_request, list);
+ list_del(&req->list);
+ usb_ep_free_request(ctxt->in, req);
+ }
+
+ list_for_each_safe(act, tmp, &ctxt->read_pool) {
+ req = list_entry(act, struct usb_request, list);
+ list_del(&req->list);
+ usb_ep_free_request(ctxt->out, req);
+ }
+}
+
+/**
+ * usb_diag_free_req() - Free USB requests
+ * @ch: Channel handler
+ *
+ * This function frees the read and write USB requests for the interface
+ * associated with this channel.
+ *
+ */
+void usb_diag_free_req(struct usb_diag_ch *ch)
+{
+ struct diag_context *ctxt = ch->priv_usb;
+ unsigned long flags;
+
+ if (ctxt) {
+ spin_lock_irqsave(&ctxt->lock, flags);
+ free_reqs(ctxt);
+ spin_unlock_irqrestore(&ctxt->lock, flags);
+ }
+
+}
+
+EXPORT_SYMBOL(usb_diag_free_req);
+
+/**
+ * usb_diag_alloc_req() - Allocate USB requests
+ * @ch: Channel handler
+ * @n_write: Number of requests for Tx
+ * @n_read: Number of requests for Rx
+ *
+ * This function allocates read and write USB requests for the interface
+ * associated with this channel. The actual buffers are not allocated
+ * here; they are supplied by the diag char driver.
+ *
+ */
+int usb_diag_alloc_req(struct usb_diag_ch *ch, int n_write, int n_read)
+{
+ struct diag_context *ctxt = ch->priv_usb;
+ struct usb_request *req;
+ int i;
+ unsigned long flags;
+
+ if (!ctxt)
+ return -ENODEV;
+
+ spin_lock_irqsave(&ctxt->lock, flags);
+ /* Free previous session's stale requests */
+ free_reqs(ctxt);
+ for (i = 0; i < n_write; i++) {
+ req = usb_ep_alloc_request(ctxt->in, GFP_ATOMIC);
+ if (!req)
+ goto fail;
+ req->complete = diag_write_complete;
+ list_add_tail(&req->list, &ctxt->write_pool);
+ }
+
+ for (i = 0; i < n_read; i++) {
+ req = usb_ep_alloc_request(ctxt->out, GFP_ATOMIC);
+ if (!req)
+ goto fail;
+ req->complete = diag_read_complete;
+ list_add_tail(&req->list, &ctxt->read_pool);
+ }
+ spin_unlock_irqrestore(&ctxt->lock, flags);
+ return 0;
+fail:
+ free_reqs(ctxt);
+ spin_unlock_irqrestore(&ctxt->lock, flags);
+ return -ENOMEM;
+
+}
+
+EXPORT_SYMBOL(usb_diag_alloc_req);
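The all-or-nothing allocation in usb_diag_alloc_req() (populate the pools, and on any failure drain everything via the fail: label) can be shown as a small user-space sketch. `struct pool`, `pool_fill` and `pool_drain` are invented names for illustration, not driver symbols:

```c
#include <stdlib.h>

struct req {
	struct req *next;
};

struct pool {
	struct req *head;
	int count;
};

/* Release every request in the pool (the free_reqs() analogue). */
static void pool_drain(struct pool *p)
{
	while (p->head) {
		struct req *r = p->head;

		p->head = r->next;
		free(r);
		p->count--;
	}
}

/* Preallocate n requests; on any failure drain what was already
 * allocated so the caller never sees a half-filled pool, like the
 * fail: path of usb_diag_alloc_req(). */
static int pool_fill(struct pool *p, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		struct req *r = malloc(sizeof(*r));

		if (!r) {
			pool_drain(p);
			return -1;
		}
		r->next = p->head;
		p->head = r;
		p->count++;
	}
	return 0;
}
```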
+
+/**
+ * usb_diag_read() - Read data from USB diag channel
+ * @ch: Channel handler
+ * @d_req: Diag request struct
+ *
+ * Enqueue a request on the OUT endpoint of the interface corresponding
+ * to this channel. This function returns an error code if the interface
+ * is not in the configured state, no Rx requests are available, or the
+ * endpoint queue operation fails.
+ *
+ * This function operates asynchronously. The READ_DONE event is notified
+ * after completion of the OUT request.
+ *
+ */
+int usb_diag_read(struct usb_diag_ch *ch, struct diag_request *d_req)
+{
+ struct diag_context *ctxt = ch->priv_usb;
+ unsigned long flags;
+ struct usb_request *req;
+ static DEFINE_RATELIMIT_STATE(rl, 10 * HZ, 1);
+
+ if (!ctxt)
+ return -ENODEV;
+
+ spin_lock_irqsave(&ctxt->lock, flags);
+
+ if (!ctxt->configured) {
+ spin_unlock_irqrestore(&ctxt->lock, flags);
+ return -EIO;
+ }
+
+ if (list_empty(&ctxt->read_pool)) {
+ spin_unlock_irqrestore(&ctxt->lock, flags);
+ ERROR(ctxt->cdev, "%s: no requests available\n", __func__);
+ return -EAGAIN;
+ }
+
+ req = list_first_entry(&ctxt->read_pool, struct usb_request, list);
+ list_del(&req->list);
+ spin_unlock_irqrestore(&ctxt->lock, flags);
+
+ req->buf = d_req->buf;
+ req->length = d_req->length;
+ req->context = d_req;
+ if (usb_ep_queue(ctxt->out, req, GFP_ATOMIC)) {
+ /* On error, return the request to the read pool */
+ spin_lock_irqsave(&ctxt->lock, flags);
+ list_add_tail(&req->list, &ctxt->read_pool);
+ spin_unlock_irqrestore(&ctxt->lock, flags);
+ /* at most one error message every 10 seconds */
+ if (__ratelimit(&rl))
+ ERROR(ctxt->cdev, "%s: cannot queue read request\n", __func__);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+EXPORT_SYMBOL(usb_diag_read);
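The dequeue-then-requeue-on-failure pattern used by usb_diag_read() and usb_diag_write() (pop a request from the pool, and push it back if usb_ep_queue() fails so it is never leaked) can be sketched without the USB stack. `pool_get`, `pool_put` and `submit` are hypothetical names standing in for the list operations and endpoint queueing:

```c
#include <stddef.h>

struct req {
	struct req *next;
};

struct pool {
	struct req *head;
};

static struct req *pool_get(struct pool *p)
{
	struct req *r = p->head;

	if (r)
		p->head = r->next;
	return r;
}

static void pool_put(struct pool *p, struct req *r)
{
	r->next = p->head;
	p->head = r;
}

/* Stand-ins for usb_ep_queue() outcomes. */
static int queue_ok(struct req *r)   { (void)r; return 0; }
static int queue_fail(struct req *r) { (void)r; return 1; }

/* Submit one request: fail fast when the pool is empty, and return the
 * request to the pool when queuing fails, as usb_diag_read() does. */
static int submit(struct pool *p, int (*queue)(struct req *))
{
	struct req *r = pool_get(p);

	if (!r)
		return -1;	/* -EAGAIN in the driver */
	if (queue(r)) {
		pool_put(p, r);	/* requeue so the request is not leaked */
		return -2;	/* -EIO in the driver */
	}
	return 0;
}
```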
+
+/**
+ * usb_diag_write() - Write data to USB diag channel
+ * @ch: Channel handler
+ * @d_req: Diag request struct
+ *
+ * Enqueue a request on the IN endpoint of the interface corresponding
+ * to this channel. This function returns an error code if the interface
+ * is not in the configured state, no Tx requests are available, or the
+ * endpoint queue operation fails.
+ *
+ * This function operates asynchronously. The WRITE_DONE event is notified
+ * after completion of the IN request.
+ *
+ */
+int usb_diag_write(struct usb_diag_ch *ch, struct diag_request *d_req)
+{
+ struct diag_context *ctxt = ch->priv_usb;
+ unsigned long flags;
+ struct usb_request *req = NULL;
+ static DEFINE_RATELIMIT_STATE(rl, 10 * HZ, 1);
+
+ if (!ctxt)
+ return -ENODEV;
+
+ spin_lock_irqsave(&ctxt->lock, flags);
+
+ if (!ctxt->configured) {
+ spin_unlock_irqrestore(&ctxt->lock, flags);
+ return -EIO;
+ }
+
+ if (list_empty(&ctxt->write_pool)) {
+ spin_unlock_irqrestore(&ctxt->lock, flags);
+ ERROR(ctxt->cdev, "%s: no requests available\n", __func__);
+ return -EAGAIN;
+ }
+
+ req = list_first_entry(&ctxt->write_pool, struct usb_request, list);
+ list_del(&req->list);
+ spin_unlock_irqrestore(&ctxt->lock, flags);
+
+ req->buf = d_req->buf;
+ req->length = d_req->length;
+ req->context = d_req;
+ if (usb_ep_queue(ctxt->in, req, GFP_ATOMIC)) {
+ /* On error, return the request to the write pool */
+ spin_lock_irqsave(&ctxt->lock, flags);
+ list_add_tail(&req->list, &ctxt->write_pool);
+ spin_unlock_irqrestore(&ctxt->lock, flags);
+ /* at most one error message every 10 seconds */
+ if (__ratelimit(&rl))
+ ERROR(ctxt->cdev, "%s: cannot queue write request\n", __func__);
+ return -EIO;
+ }
+
+ ctxt->dpkts_tolaptop++;
+ ctxt->dpkts_tolaptop_pending++;
+
+ return 0;
+}
+
+EXPORT_SYMBOL(usb_diag_write);
+
+static void diag_function_disable(struct usb_function *f)
+{
+ struct diag_context *dev = func_to_diag(f);
+ unsigned long flags;
+
+ DBG(dev->cdev, "diag_function_disable\n");
+
+ spin_lock_irqsave(&dev->lock, flags);
+ dev->configured = 0;
+ spin_unlock_irqrestore(&dev->lock, flags);
+
+ if (dev->ch && dev->ch->notify)
+ dev->ch->notify(dev->ch->priv, USB_DIAG_DISCONNECT, NULL);
+
+ usb_ep_disable(dev->in);
+ dev->in->driver_data = NULL;
+
+ usb_ep_disable(dev->out);
+ dev->out->driver_data = NULL;
+ if (dev->ch)
+ dev->ch->priv_usb = NULL;
+}
+
+static int diag_function_set_alt(struct usb_function *f, unsigned intf, unsigned alt)
+{
+ struct diag_context *dev = func_to_diag(f);
+ struct usb_composite_dev *cdev = f->config->cdev;
+ unsigned long flags;
+ int rc = 0;
+
+ if (config_ep_by_speed(cdev->gadget, f, dev->in) || config_ep_by_speed(cdev->gadget, f, dev->out)) {
+ dev->in->desc = NULL;
+ dev->out->desc = NULL;
+ return -EINVAL;
+ }
+
+ if (!dev->ch)
+ return -ENODEV;
+
+ /*
+ * Indicate to the diag channel that the active diag device is dev,
+ * since several diag devices can point to the same channel.
+ */
+ dev->ch->priv_usb = dev;
+
+ dev->in->driver_data = dev;
+ rc = usb_ep_enable(dev->in);
+ if (rc) {
+ ERROR(dev->cdev, "can't enable %s, result %d\n", dev->in->name, rc);
+ return rc;
+ }
+ dev->out->driver_data = dev;
+ rc = usb_ep_enable(dev->out);
+ if (rc) {
+ ERROR(dev->cdev, "can't enable %s, result %d\n", dev->out->name, rc);
+ usb_ep_disable(dev->in);
+ return rc;
+ }
+
+ dev->dpkts_tolaptop = 0;
+ dev->dpkts_tomodem = 0;
+ dev->dpkts_tolaptop_pending = 0;
+
+ spin_lock_irqsave(&dev->lock, flags);
+ dev->configured = 1;
+ spin_unlock_irqrestore(&dev->lock, flags);
+
+ if (dev->ch->notify)
+ dev->ch->notify(dev->ch->priv, USB_DIAG_CONNECT, NULL);
+
+ return rc;
+}
+
+static void diag_function_unbind(struct usb_configuration *c, struct usb_function *f)
+{
+ struct diag_context *ctxt = func_to_diag(f);
+ unsigned long flags;
+
+ if (gadget_is_superspeed(c->cdev->gadget))
+ usb_free_descriptors(f->ss_descriptors);
+ if (gadget_is_dualspeed(c->cdev->gadget))
+ usb_free_descriptors(f->hs_descriptors);
+
+ usb_free_descriptors(f->fs_descriptors);
+
+ /*
+ * The channel's priv_usb may point to another diag function.
+ * Clear priv_usb only if the channel is used by the diag
+ * device we unbind here.
+ */
+ if (ctxt->ch && ctxt->ch->priv_usb == ctxt)
+ ctxt->ch->priv_usb = NULL;
+ list_del(&ctxt->list_item);
+ /* Free any pending USB requests from last session */
+ spin_lock_irqsave(&ctxt->lock, flags);
+ free_reqs(ctxt);
+ spin_unlock_irqrestore(&ctxt->lock, flags);
+ kfree(ctxt);
+}
+
+static int diag_function_bind(struct usb_configuration *c, struct usb_function *f)
+{
+ struct usb_composite_dev *cdev = c->cdev;
+ struct diag_context *ctxt = func_to_diag(f);
+ struct usb_ep *ep;
+ int status = -ENODEV;
+
+ intf_desc.bInterfaceNumber = usb_interface_id(c, f);
+
+ ep = usb_ep_autoconfig(cdev->gadget, &fs_bulk_in_desc);
+ if (!ep)
+ goto fail;
+ ctxt->in = ep;
+ ep->driver_data = ctxt;
+
+ ep = usb_ep_autoconfig(cdev->gadget, &fs_bulk_out_desc);
+ if (!ep)
+ goto fail;
+ ctxt->out = ep;
+ ep->driver_data = ctxt;
+
+ status = -ENOMEM;
+ /* copy descriptors, and track endpoint copies */
+ f->fs_descriptors = usb_copy_descriptors(fs_diag_desc);
+ if (!f->fs_descriptors)
+ goto fail;
+
+ if (gadget_is_dualspeed(c->cdev->gadget)) {
+ hs_bulk_in_desc.bEndpointAddress = fs_bulk_in_desc.bEndpointAddress;
+ hs_bulk_out_desc.bEndpointAddress = fs_bulk_out_desc.bEndpointAddress;
+
+ /* copy descriptors, and track endpoint copies */
+ f->hs_descriptors = usb_copy_descriptors(hs_diag_desc);
+ if (!f->hs_descriptors)
+ goto fail;
+ }
+
+ if (gadget_is_superspeed(c->cdev->gadget)) {
+ ss_bulk_in_desc.bEndpointAddress = fs_bulk_in_desc.bEndpointAddress;
+ ss_bulk_out_desc.bEndpointAddress = fs_bulk_out_desc.bEndpointAddress;
+
+ /* copy descriptors, and track endpoint copies */
+ f->ss_descriptors = usb_copy_descriptors(ss_diag_desc);
+ if (!f->ss_descriptors)
+ goto fail;
+ }
+ diag_update_pid_and_serial_num(ctxt);
+ return 0;
+fail:
+ if (f->ss_descriptors)
+ usb_free_descriptors(f->ss_descriptors);
+ if (f->hs_descriptors)
+ usb_free_descriptors(f->hs_descriptors);
+ if (f->fs_descriptors)
+ usb_free_descriptors(f->fs_descriptors);
+ if (ctxt->out)
+ ctxt->out->driver_data = NULL;
+ if (ctxt->in)
+ ctxt->in->driver_data = NULL;
+ return status;
+
+}
+
+int diag_function_add(struct usb_configuration *c, const char *name, int (*update_pid) (uint32_t, const char *))
+{
+ struct diag_context *dev;
+ struct usb_diag_ch *_ch;
+ int found = 0, ret;
+
+ DBG(c->cdev, "diag_function_add\n");
+
+ list_for_each_entry(_ch, &usb_diag_ch_list, list) {
+ if (!strcmp(name, _ch->name)) {
+ found = 1;
+ break;
+ }
+ }
+ if (!found) {
+ ERROR(c->cdev, "unable to get diag usb channel\n");
+ return -ENODEV;
+ }
+
+ dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+ if (!dev)
+ return -ENOMEM;
+
+ list_add_tail(&dev->list_item, &diag_dev_list);
+
+ /*
+ * Several diag devices may point to the same channel when they
+ * belong to different configurations; however, only the active
+ * diag device claims the channel by setting ch->priv_usb
+ * (see diag_function_set_alt).
+ */
+ dev->ch = _ch;
+
+ dev->update_pid_and_serial_num = update_pid;
+ dev->cdev = c->cdev;
+ dev->function.name = _ch->name;
+ dev->function.fs_descriptors = fs_diag_desc;
+ dev->function.hs_descriptors = hs_diag_desc;
+ dev->function.bind = diag_function_bind;
+ dev->function.unbind = diag_function_unbind;
+ dev->function.set_alt = diag_function_set_alt;
+ dev->function.disable = diag_function_disable;
+ spin_lock_init(&dev->lock);
+ INIT_LIST_HEAD(&dev->read_pool);
+ INIT_LIST_HEAD(&dev->write_pool);
+
+ ret = usb_add_function(c, &dev->function);
+ if (ret) {
+ INFO(c->cdev, "usb_add_function failed\n");
+ list_del(&dev->list_item);
+ kfree(dev);
+ }
+
+ return ret;
+}
+
+#if defined(CONFIG_DEBUG_FS)
+static char debug_buffer[PAGE_SIZE];
+
+static ssize_t debug_read_stats(struct file *file, char __user * ubuf, size_t count, loff_t * ppos)
+{
+ char *buf = debug_buffer;
+ int temp = 0;
+ struct usb_diag_ch *ch;
+
+ list_for_each_entry(ch, &usb_diag_ch_list, list) {
+ struct diag_context *ctxt = ch->priv_usb;
+
+ if (ctxt)
+ temp += scnprintf(buf + temp, PAGE_SIZE - temp,
+ "---Name: %s---\n"
+ "endpoints: %s, %s\n"
+ "dpkts_tolaptop: %lu\n"
+ "dpkts_tomodem: %lu\n"
+ "pkts_tolaptop_pending: %u\n",
+ ch->name, ctxt->in->name, ctxt->out->name, ctxt->dpkts_tolaptop, ctxt->dpkts_tomodem, ctxt->dpkts_tolaptop_pending);
+ }
+
+ return simple_read_from_buffer(ubuf, count, ppos, buf, temp);
+}
+
+static ssize_t debug_reset_stats(struct file *file, const char __user * buf, size_t count, loff_t * ppos)
+{
+ struct usb_diag_ch *ch;
+
+ list_for_each_entry(ch, &usb_diag_ch_list, list) {
+ struct diag_context *ctxt = ch->priv_usb;
+
+ if (ctxt) {
+ ctxt->dpkts_tolaptop = 0;
+ ctxt->dpkts_tomodem = 0;
+ ctxt->dpkts_tolaptop_pending = 0;
+ }
+ }
+
+ return count;
+}
+
+static int debug_open(struct inode *inode, struct file *file)
+{
+ return 0;
+}
+
+static const struct file_operations debug_fdiag_ops = {
+ .open = debug_open,
+ .read = debug_read_stats,
+ .write = debug_reset_stats,
+};
+
+struct dentry *dent_diag;
+static void fdiag_debugfs_init(void)
+{
+ dent_diag = debugfs_create_dir("usb_diag", 0);
+ if (IS_ERR(dent_diag))
+ return;
+
+ debugfs_create_file("status", 0444, dent_diag, 0, &debug_fdiag_ops);
+}
+#else
+static void fdiag_debugfs_init(void)
+{
+ return;
+}
+#endif
+
+static void diag_cleanup(void)
+{
+ struct list_head *act, *tmp;
+ struct usb_diag_ch *_ch;
+ unsigned long flags;
+
+ debugfs_remove_recursive(dent_diag);
+
+ list_for_each_safe(act, tmp, &usb_diag_ch_list) {
+ _ch = list_entry(act, struct usb_diag_ch, list);
+
+ spin_lock_irqsave(&ch_lock, flags);
+ /* Free if diagchar is not using the channel anymore */
+ if (!_ch->priv) {
+ list_del(&_ch->list);
+ kfree(_ch);
+ }
+ spin_unlock_irqrestore(&ch_lock, flags);
+ }
+}
+
+static int diag_setup(void)
+{
+ INIT_LIST_HEAD(&diag_dev_list);
+
+ fdiag_debugfs_init();
+
+ return 0;
+}
diff --git a/drivers/usb/gadget/f_diag.h b/drivers/usb/gadget/f_diag.h
new file mode 100644
index 0000000..02244d5
--- /dev/null
+++ b/drivers/usb/gadget/f_diag.h
@@ -0,0 +1,23 @@
+/* drivers/usb/gadget/f_diag.h
+ *
+ * Diag Function Device - Route DIAG frames between SMD and USB
+ *
+ * Copyright (C) 2008-2009 Google, Inc.
+ * Copyright (c) 2009, The Linux Foundation. All rights reserved.
+ * Author: Brian Swetland <swetland@google.com>
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+#ifndef __F_DIAG_H
+#define __F_DIAG_H
+
+int diag_function_add(struct usb_configuration *c, const char *name, int (*update_pid)(uint32_t, const char *));
+
+#endif /* __F_DIAG_H */
diff --git a/drivers/usb/gadget/f_rmnet.c b/drivers/usb/gadget/f_rmnet.c
new file mode 100644
index 0000000..3503c29
--- /dev/null
+++ b/drivers/usb/gadget/f_rmnet.c
@@ -0,0 +1,1042 @@
+/*
+ * Copyright (c) 2011-2012, Code Aurora Forum. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/slab.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/usb/android_composite.h>
+#include <linux/spinlock.h>
+
+#include <mach/usb_gadget_xport.h>
+
+#include "u_rmnet.h"
+#include "gadget_chips.h"
+
+#define RMNET_NOTIFY_INTERVAL 5
+#define RMNET_MAX_NOTIFY_SIZE sizeof(struct usb_cdc_notification)
+
+
+#define ACM_CTRL_DTR (1 << 0)
+
+/* TODO: use separate structures for data and
+ * control paths
+ */
+struct f_rmnet {
+ struct grmnet port;
+ int ifc_id;
+ u8 port_num;
+ atomic_t online;
+ atomic_t ctrl_online;
+ struct usb_composite_dev *cdev;
+
+ spinlock_t lock;
+
+ /* usb eps*/
+ struct usb_ep *notify;
+ struct usb_request *notify_req;
+
+ /* control info */
+ struct list_head cpkt_resp_q;
+ atomic_t notify_count;
+ unsigned long cpkts_len;
+};
+
+#define NR_RMNET_PORTS 3
+static unsigned int nr_rmnet_ports;
+static unsigned int no_ctrl_smd_ports;
+static unsigned int no_ctrl_hsic_ports;
+static unsigned int no_ctrl_hsuart_ports;
+static unsigned int no_data_bam_ports;
+static unsigned int no_data_bam2bam_ports;
+static unsigned int no_data_hsic_ports;
+static unsigned int no_data_hsuart_ports;
+static struct rmnet_ports {
+ enum transport_type data_xport;
+ enum transport_type ctrl_xport;
+ unsigned data_xport_num;
+ unsigned ctrl_xport_num;
+ unsigned port_num;
+ struct f_rmnet *port;
+} rmnet_ports[NR_RMNET_PORTS];
+
+static struct usb_interface_descriptor rmnet_interface_desc = {
+ .bLength = USB_DT_INTERFACE_SIZE,
+ .bDescriptorType = USB_DT_INTERFACE,
+ .bNumEndpoints = 3,
+ .bInterfaceClass = USB_CLASS_VENDOR_SPEC,
+ .bInterfaceSubClass = USB_CLASS_VENDOR_SPEC,
+ .bInterfaceProtocol = USB_CLASS_VENDOR_SPEC,
+ /* .iInterface = DYNAMIC */
+};
+
+/* Full speed support */
+static struct usb_endpoint_descriptor rmnet_fs_notify_desc = {
+ .bLength = USB_DT_ENDPOINT_SIZE,
+ .bDescriptorType = USB_DT_ENDPOINT,
+ .bEndpointAddress = USB_DIR_IN,
+ .bmAttributes = USB_ENDPOINT_XFER_INT,
+ .wMaxPacketSize = __constant_cpu_to_le16(RMNET_MAX_NOTIFY_SIZE),
+ .bInterval = 1 << RMNET_NOTIFY_INTERVAL,
+};
+
+static struct usb_endpoint_descriptor rmnet_fs_in_desc = {
+ .bLength = USB_DT_ENDPOINT_SIZE,
+ .bDescriptorType = USB_DT_ENDPOINT,
+ .bEndpointAddress = USB_DIR_IN,
+ .bmAttributes = USB_ENDPOINT_XFER_BULK,
+ .wMaxPacketSize = __constant_cpu_to_le16(64),
+};
+
+static struct usb_endpoint_descriptor rmnet_fs_out_desc = {
+ .bLength = USB_DT_ENDPOINT_SIZE,
+ .bDescriptorType = USB_DT_ENDPOINT,
+ .bEndpointAddress = USB_DIR_OUT,
+ .bmAttributes = USB_ENDPOINT_XFER_BULK,
+ .wMaxPacketSize = __constant_cpu_to_le16(64),
+};
+
+static struct usb_descriptor_header *rmnet_fs_function[] = {
+ (struct usb_descriptor_header *) &rmnet_interface_desc,
+ (struct usb_descriptor_header *) &rmnet_fs_notify_desc,
+ (struct usb_descriptor_header *) &rmnet_fs_in_desc,
+ (struct usb_descriptor_header *) &rmnet_fs_out_desc,
+ NULL,
+};
+
+/* High speed support */
+static struct usb_endpoint_descriptor rmnet_hs_notify_desc = {
+ .bLength = USB_DT_ENDPOINT_SIZE,
+ .bDescriptorType = USB_DT_ENDPOINT,
+ .bEndpointAddress = USB_DIR_IN,
+ .bmAttributes = USB_ENDPOINT_XFER_INT,
+ .wMaxPacketSize = __constant_cpu_to_le16(RMNET_MAX_NOTIFY_SIZE),
+ .bInterval = RMNET_NOTIFY_INTERVAL + 4,
+};
+
+static struct usb_endpoint_descriptor rmnet_hs_in_desc = {
+ .bLength = USB_DT_ENDPOINT_SIZE,
+ .bDescriptorType = USB_DT_ENDPOINT,
+ .bEndpointAddress = USB_DIR_IN,
+ .bmAttributes = USB_ENDPOINT_XFER_BULK,
+ .wMaxPacketSize = __constant_cpu_to_le16(512),
+};
+
+static struct usb_endpoint_descriptor rmnet_hs_out_desc = {
+ .bLength = USB_DT_ENDPOINT_SIZE,
+ .bDescriptorType = USB_DT_ENDPOINT,
+ .bEndpointAddress = USB_DIR_OUT,
+ .bmAttributes = USB_ENDPOINT_XFER_BULK,
+ .wMaxPacketSize = __constant_cpu_to_le16(512),
+};
+
+static struct usb_descriptor_header *rmnet_hs_function[] = {
+ (struct usb_descriptor_header *) &rmnet_interface_desc,
+ (struct usb_descriptor_header *) &rmnet_hs_notify_desc,
+ (struct usb_descriptor_header *) &rmnet_hs_in_desc,
+ (struct usb_descriptor_header *) &rmnet_hs_out_desc,
+ NULL,
+};
+
+/* String descriptors */
+
+static struct usb_string rmnet_string_defs[] = {
+ [0].s = "RmNet",
+ { } /* end of list */
+};
+
+static struct usb_gadget_strings rmnet_string_table = {
+ .language = 0x0409, /* en-us */
+ .strings = rmnet_string_defs,
+};
+
+static struct usb_gadget_strings *rmnet_strings[] = {
+ &rmnet_string_table,
+ NULL,
+};
+
+static void frmnet_ctrl_response_available(struct f_rmnet *dev);
+
+/* ------- misc functions --------------------*/
+
+static inline struct f_rmnet *func_to_rmnet(struct usb_function *f)
+{
+ return container_of(f, struct f_rmnet, port.func);
+}
+
+static inline struct f_rmnet *port_to_rmnet(struct grmnet *r)
+{
+ return container_of(r, struct f_rmnet, port);
+}
+
+static struct usb_request *
+frmnet_alloc_req(struct usb_ep *ep, unsigned len, gfp_t flags)
+{
+ struct usb_request *req;
+
+ req = usb_ep_alloc_request(ep, flags);
+ if (!req)
+ return ERR_PTR(-ENOMEM);
+
+ req->buf = kmalloc(len, flags);
+ if (!req->buf) {
+ usb_ep_free_request(ep, req);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ req->length = len;
+
+ return req;
+}
+
+void frmnet_free_req(struct usb_ep *ep, struct usb_request *req)
+{
+ kfree(req->buf);
+ usb_ep_free_request(ep, req);
+}
+
+static struct rmnet_ctrl_pkt *rmnet_alloc_ctrl_pkt(unsigned len, gfp_t flags)
+{
+ struct rmnet_ctrl_pkt *pkt;
+
+ pkt = kzalloc(sizeof(struct rmnet_ctrl_pkt), flags);
+ if (!pkt)
+ return ERR_PTR(-ENOMEM);
+
+ pkt->buf = kmalloc(len, flags);
+ if (!pkt->buf) {
+ kfree(pkt);
+ return ERR_PTR(-ENOMEM);
+ }
+ pkt->len = len;
+
+ return pkt;
+}
+
+static void rmnet_free_ctrl_pkt(struct rmnet_ctrl_pkt *pkt)
+{
+ kfree(pkt->buf);
+ kfree(pkt);
+}
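rmnet_alloc_ctrl_pkt() above reports failure through the pointer itself via ERR_PTR(), which callers test with IS_ERR() (see frmnet_send_cpkt_response()). A minimal user-space sketch of that encoding, assuming the kernel's convention that the top 4095 addresses are never valid pointers (`err_ptr`/`is_err`/`ptr_err` are lowercase stand-ins for the real macros):

```c
#include <stdint.h>

#define MAX_ERRNO 4095

/* Encode a negative errno in a pointer: addresses in the top 4095
 * bytes of the address space are never valid, so they can carry the
 * error code instead. Sketch of ERR_PTR()/IS_ERR()/PTR_ERR(). */
static void *err_ptr(long error)
{
	return (void *)error;
}

static long ptr_err(const void *ptr)
{
	return (long)ptr;
}

static int is_err(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}
```

This is why rmnet_alloc_ctrl_pkt() can return ERR_PTR(-ENOMEM) and the caller distinguishes it from a real packet with a single IS_ERR() check.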
+
+/* -------------------------------------------*/
+
+static int rmnet_gport_setup(void)
+{
+ int port_idx;
+ int i;
+
+ pr_debug("%s: bam ports: %u bam2bam ports: %u data hsic ports: %u data hsuart ports: %u"
+ " smd ports: %u ctrl hsic ports: %u ctrl hsuart ports: %u"
+ " nr_rmnet_ports: %u\n",
+ __func__, no_data_bam_ports, no_data_bam2bam_ports,
+ no_data_hsic_ports, no_data_hsuart_ports, no_ctrl_smd_ports,
+ no_ctrl_hsic_ports, no_ctrl_hsuart_ports, nr_rmnet_ports);
+
+ if (no_data_hsic_ports) {
+ port_idx = ghsic_data_setup(no_data_hsic_ports,
+ USB_GADGET_RMNET);
+ if (port_idx < 0)
+ return port_idx;
+ for (i = 0; i < nr_rmnet_ports; i++) {
+ if (rmnet_ports[i].data_xport ==
+ USB_GADGET_XPORT_HSIC) {
+ rmnet_ports[i].data_xport_num = port_idx;
+ port_idx++;
+ }
+ }
+ }
+
+ if (no_ctrl_hsic_ports) {
+ port_idx = ghsic_ctrl_setup(no_ctrl_hsic_ports,
+ USB_GADGET_RMNET);
+ if (port_idx < 0)
+ return port_idx;
+ for (i = 0; i < nr_rmnet_ports; i++) {
+ if (rmnet_ports[i].ctrl_xport ==
+ USB_GADGET_XPORT_HSIC) {
+ rmnet_ports[i].ctrl_xport_num = port_idx;
+ port_idx++;
+ }
+ }
+ }
+
+ return 0;
+}
+
+static int gport_rmnet_connect(struct f_rmnet *dev)
+{
+ int ret;
+ unsigned port_num;
+ enum transport_type cxport = rmnet_ports[dev->port_num].ctrl_xport;
+ enum transport_type dxport = rmnet_ports[dev->port_num].data_xport;
+
+ pr_debug("%s: ctrl xport: %s data xport: %s dev: %p portno: %d\n",
+ __func__, xport_to_str(cxport), xport_to_str(dxport),
+ dev, dev->port_num);
+
+ port_num = rmnet_ports[dev->port_num].ctrl_xport_num;
+ switch (cxport) {
+ case USB_GADGET_XPORT_HSIC:
+ ret = ghsic_ctrl_connect(&dev->port, port_num);
+ if (ret) {
+ pr_err("%s: ghsic_ctrl_connect failed: err:%d\n",
+ __func__, ret);
+ return ret;
+ }
+ break;
+ case USB_GADGET_XPORT_NONE:
+ break;
+ default:
+ pr_err("%s: Un-supported transport: %s\n", __func__,
+ xport_to_str(cxport));
+ return -ENODEV;
+ }
+
+ port_num = rmnet_ports[dev->port_num].data_xport_num;
+ switch (dxport) {
+ case USB_GADGET_XPORT_HSIC:
+ ret = ghsic_data_connect(&dev->port, port_num);
+ if (ret) {
+ pr_err("%s: ghsic_data_connect failed: err:%d\n",
+ __func__, ret);
+ ghsic_ctrl_disconnect(&dev->port, port_num);
+ return ret;
+ }
+ break;
+ case USB_GADGET_XPORT_NONE:
+ break;
+ default:
+ pr_err("%s: Un-supported transport: %s\n", __func__,
+ xport_to_str(dxport));
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
+static int gport_rmnet_disconnect(struct f_rmnet *dev)
+{
+ unsigned port_num;
+ enum transport_type cxport = rmnet_ports[dev->port_num].ctrl_xport;
+ enum transport_type dxport = rmnet_ports[dev->port_num].data_xport;
+
+ pr_debug("%s: ctrl xport: %s data xport: %s dev: %p portno: %d\n",
+ __func__, xport_to_str(cxport), xport_to_str(dxport),
+ dev, dev->port_num);
+
+ port_num = rmnet_ports[dev->port_num].ctrl_xport_num;
+ switch (cxport) {
+ case USB_GADGET_XPORT_HSIC:
+ ghsic_ctrl_disconnect(&dev->port, port_num);
+ break;
+ case USB_GADGET_XPORT_NONE:
+ break;
+ default:
+ pr_err("%s: Un-supported transport: %s\n", __func__,
+ xport_to_str(cxport));
+ return -ENODEV;
+ }
+
+ port_num = rmnet_ports[dev->port_num].data_xport_num;
+ switch (dxport) {
+ case USB_GADGET_XPORT_HSIC:
+ ghsic_data_disconnect(&dev->port, port_num);
+ break;
+ case USB_GADGET_XPORT_NONE:
+ break;
+ default:
+ pr_err("%s: Un-supported transport: %s\n", __func__,
+ xport_to_str(dxport));
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
+static void frmnet_unbind(struct usb_configuration *c, struct usb_function *f)
+{
+ struct f_rmnet *dev = func_to_rmnet(f);
+
+ pr_debug("%s: portno:%d\n", __func__, dev->port_num);
+
+ if (gadget_is_dualspeed(c->cdev->gadget))
+ usb_free_descriptors(f->hs_descriptors);
+ usb_free_descriptors(f->fs_descriptors);
+
+ frmnet_free_req(dev->notify, dev->notify_req);
+
+ kfree(f->name);
+}
+
+static void frmnet_suspend(struct usb_function *f)
+{
+ struct f_rmnet *dev = func_to_rmnet(f);
+ unsigned port_num;
+ enum transport_type dxport = rmnet_ports[dev->port_num].data_xport;
+
+ pr_debug("%s: data xport: %s dev: %p portno: %d\n",
+ __func__, xport_to_str(dxport),
+ dev, dev->port_num);
+
+ port_num = rmnet_ports[dev->port_num].data_xport_num;
+ switch (dxport) {
+ case USB_GADGET_XPORT_HSIC:
+ break;
+ case USB_GADGET_XPORT_NONE:
+ break;
+ default:
+ pr_err("%s: Un-supported transport: %s\n", __func__,
+ xport_to_str(dxport));
+ }
+}
+
+static void frmnet_resume(struct usb_function *f)
+{
+ struct f_rmnet *dev = func_to_rmnet(f);
+ unsigned port_num;
+ enum transport_type dxport = rmnet_ports[dev->port_num].data_xport;
+
+ pr_debug("%s: data xport: %s dev: %p portno: %d\n",
+ __func__, xport_to_str(dxport),
+ dev, dev->port_num);
+
+ port_num = rmnet_ports[dev->port_num].data_xport_num;
+ switch (dxport) {
+ case USB_GADGET_XPORT_HSIC:
+ break;
+ case USB_GADGET_XPORT_NONE:
+ break;
+ default:
+ pr_err("%s: Un-supported transport: %s\n", __func__,
+ xport_to_str(dxport));
+ }
+}
+
+static void frmnet_disable(struct usb_function *f)
+{
+ struct f_rmnet *dev = func_to_rmnet(f);
+ unsigned long flags;
+ struct rmnet_ctrl_pkt *cpkt;
+
+ pr_debug("%s: port#%d\n", __func__, dev->port_num);
+
+ usb_ep_disable(dev->notify);
+ dev->notify->driver_data = NULL;
+
+ atomic_set(&dev->online, 0);
+
+ spin_lock_irqsave(&dev->lock, flags);
+ while (!list_empty(&dev->cpkt_resp_q)) {
+ cpkt = list_first_entry(&dev->cpkt_resp_q,
+ struct rmnet_ctrl_pkt, list);
+
+ list_del(&cpkt->list);
+ rmnet_free_ctrl_pkt(cpkt);
+ }
+ atomic_set(&dev->notify_count, 0);
+ spin_unlock_irqrestore(&dev->lock, flags);
+
+ gport_rmnet_disconnect(dev);
+}
+
+static int
+frmnet_set_alt(struct usb_function *f, unsigned intf, unsigned alt)
+{
+ struct f_rmnet *dev = func_to_rmnet(f);
+ struct usb_composite_dev *cdev = dev->cdev;
+ int ret;
+ struct list_head *cpkt;
+
+ pr_debug("%s:dev:%p port#%d\n", __func__, dev, dev->port_num);
+
+ if (dev->notify->driver_data) {
+ pr_debug("%s: reset port:%d\n", __func__, dev->port_num);
+ usb_ep_disable(dev->notify);
+ }
+
+ ret = config_ep_by_speed(cdev->gadget, f, dev->notify);
+ if (ret) {
+ dev->notify->desc = NULL;
+ ERROR(cdev, "config_ep_by_speed fails for ep %s, result %d\n",
+ dev->notify->name, ret);
+ return ret;
+ }
+ ret = usb_ep_enable(dev->notify);
+
+ if (ret) {
+ pr_err("%s: usb ep#%s enable failed, err#%d\n",
+ __func__, dev->notify->name, ret);
+ return ret;
+ }
+ dev->notify->driver_data = dev;
+
+ if (!dev->port.in->desc || !dev->port.out->desc) {
+ if (config_ep_by_speed(cdev->gadget, f, dev->port.in) ||
+ config_ep_by_speed(cdev->gadget, f, dev->port.out)) {
+ dev->port.in->desc = NULL;
+ dev->port.out->desc = NULL;
+ return -EINVAL;
+ }
+ ret = gport_rmnet_connect(dev);
+ }
+
+ atomic_set(&dev->online, 1);
+
+ /*
+ * In case notifications were aborted, but there are pending control
+ * packets in the response queue, re-add the notifications.
+ */
+ list_for_each(cpkt, &dev->cpkt_resp_q)
+ frmnet_ctrl_response_available(dev);
+
+ return ret;
+}
+
+static void frmnet_ctrl_response_available(struct f_rmnet *dev)
+{
+ struct usb_request *req = dev->notify_req;
+ struct usb_cdc_notification *event;
+ unsigned long flags;
+ int ret;
+
+ pr_debug("%s:dev:%p portno#%d\n", __func__, dev, dev->port_num);
+
+ spin_lock_irqsave(&dev->lock, flags);
+ if (!atomic_read(&dev->online) || !req || !req->buf) {
+ spin_unlock_irqrestore(&dev->lock, flags);
+ return;
+ }
+
+ if (atomic_inc_return(&dev->notify_count) != 1) {
+ spin_unlock_irqrestore(&dev->lock, flags);
+ return;
+ }
+
+ event = req->buf;
+ event->bmRequestType = USB_DIR_IN | USB_TYPE_CLASS
+ | USB_RECIP_INTERFACE;
+ event->bNotificationType = USB_CDC_NOTIFY_RESPONSE_AVAILABLE;
+ event->wValue = cpu_to_le16(0);
+ event->wIndex = cpu_to_le16(dev->ifc_id);
+ event->wLength = cpu_to_le16(0);
+ spin_unlock_irqrestore(&dev->lock, flags);
+
+ ret = usb_ep_queue(dev->notify, dev->notify_req, GFP_ATOMIC);
+ if (ret) {
+ atomic_dec(&dev->notify_count);
+ pr_debug("ep enqueue error %d\n", ret);
+ }
+}
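The notify_count handling above coalesces notifications: only the 0 -> 1 transition queues an interrupt request, while later callers just raise the count and rely on the completion handler to re-queue. A user-space sketch of that pairing, using C11 atomics in place of the kernel's atomic_t and a `queued` counter standing in for usb_ep_queue():

```c
#include <stdatomic.h>

static atomic_int notify_count;
static int queued;

/* Only the 0 -> 1 transition actually queues a notification, as in
 * frmnet_ctrl_response_available(). */
static void response_available(void)
{
	/* atomic_fetch_add returns the old value; +1 mimics inc_return */
	if (atomic_fetch_add(&notify_count, 1) + 1 != 1)
		return;
	queued++;	/* stands in for usb_ep_queue(dev->notify, ...) */
}

/* Completion side: keep re-queuing while more responses arrived,
 * as in frmnet_notify_complete(). */
static void notify_complete(void)
{
	if (atomic_fetch_sub(&notify_count, 1) - 1 > 0)
		queued++;
}
```

The effect is that a burst of pending responses produces exactly one in-flight notification at a time, with the completion handler draining the remainder.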
+
+static void frmnet_connect(struct grmnet *gr)
+{
+ struct f_rmnet *dev;
+
+ if (!gr) {
+ pr_err("%s: Invalid grmnet:%p\n", __func__, gr);
+ return;
+ }
+
+ dev = port_to_rmnet(gr);
+
+ atomic_set(&dev->ctrl_online, 1);
+}
+
+static void frmnet_disconnect(struct grmnet *gr)
+{
+ struct f_rmnet *dev;
+ unsigned long flags;
+ struct usb_cdc_notification *event;
+ int status;
+ struct rmnet_ctrl_pkt *cpkt;
+
+ if (!gr) {
+ pr_err("%s: Invalid grmnet:%p\n", __func__, gr);
+ return;
+ }
+
+ dev = port_to_rmnet(gr);
+
+ atomic_set(&dev->ctrl_online, 0);
+
+ if (!atomic_read(&dev->online)) {
+ pr_debug("%s: nothing to do\n", __func__);
+ return;
+ }
+
+ usb_ep_fifo_flush(dev->notify);
+
+ event = dev->notify_req->buf;
+ event->bmRequestType = USB_DIR_IN | USB_TYPE_CLASS
+ | USB_RECIP_INTERFACE;
+ event->bNotificationType = USB_CDC_NOTIFY_NETWORK_CONNECTION;
+ event->wValue = cpu_to_le16(0);
+ event->wIndex = cpu_to_le16(dev->ifc_id);
+ event->wLength = cpu_to_le16(0);
+
+ status = usb_ep_queue(dev->notify, dev->notify_req, GFP_ATOMIC);
+ if (status < 0) {
+ if (!atomic_read(&dev->online))
+ return;
+ pr_err("%s: rmnet notify ep enqueue error %d\n",
+ __func__, status);
+ }
+
+ spin_lock_irqsave(&dev->lock, flags);
+ while (!list_empty(&dev->cpkt_resp_q)) {
+ cpkt = list_first_entry(&dev->cpkt_resp_q,
+ struct rmnet_ctrl_pkt, list);
+
+ list_del(&cpkt->list);
+ rmnet_free_ctrl_pkt(cpkt);
+ }
+ atomic_set(&dev->notify_count, 0);
+ spin_unlock_irqrestore(&dev->lock, flags);
+
+}
+
+static int
+frmnet_send_cpkt_response(void *gr, void *buf, size_t len)
+{
+ struct f_rmnet *dev;
+ struct rmnet_ctrl_pkt *cpkt;
+ unsigned long flags;
+
+ if (!gr || !buf) {
+ pr_err("%s: Invalid grmnet/buf, grmnet:%p buf:%p\n",
+ __func__, gr, buf);
+ return -ENODEV;
+ }
+ cpkt = rmnet_alloc_ctrl_pkt(len, GFP_ATOMIC);
+ if (IS_ERR(cpkt)) {
+ pr_err("%s: Unable to allocate ctrl pkt\n", __func__);
+ return -ENOMEM;
+ }
+ memcpy(cpkt->buf, buf, len);
+ cpkt->len = len;
+
+ dev = port_to_rmnet(gr);
+
+ pr_debug("%s: dev:%p port#%d\n", __func__, dev, dev->port_num);
+
+ if (!atomic_read(&dev->online) || !atomic_read(&dev->ctrl_online)) {
+ rmnet_free_ctrl_pkt(cpkt);
+ return 0;
+ }
+
+ spin_lock_irqsave(&dev->lock, flags);
+ list_add_tail(&cpkt->list, &dev->cpkt_resp_q);
+ spin_unlock_irqrestore(&dev->lock, flags);
+
+ frmnet_ctrl_response_available(dev);
+
+ return 0;
+}
+
+static void
+frmnet_cmd_complete(struct usb_ep *ep, struct usb_request *req)
+{
+ struct f_rmnet *dev = req->context;
+ struct usb_composite_dev *cdev;
+ unsigned port_num;
+
+ if (!dev) {
+ pr_err("%s: rmnet dev is null\n", __func__);
+ return;
+ }
+
+ pr_debug("%s: dev:%p port#%d\n", __func__, dev, dev->port_num);
+
+ cdev = dev->cdev;
+
+ if (dev->port.send_encap_cmd) {
+ port_num = rmnet_ports[dev->port_num].ctrl_xport_num;
+ dev->port.send_encap_cmd(port_num, req->buf, req->actual);
+ }
+}
+
+static void frmnet_notify_complete(struct usb_ep *ep, struct usb_request *req)
+{
+ struct f_rmnet *dev = req->context;
+ int status = req->status;
+
+ pr_debug("%s: dev:%p port#%d\n", __func__, dev, dev->port_num);
+
+ switch (status) {
+ case -ECONNRESET:
+ case -ESHUTDOWN:
+ /* connection gone */
+ atomic_set(&dev->notify_count, 0);
+ break;
+ default:
+ pr_err("rmnet notify ep error %d\n", status);
+ /* FALLTHROUGH */
+ case 0:
+ if (!atomic_read(&dev->ctrl_online))
+ break;
+
+		/* done once notify_count drops to zero; otherwise requeue
+		 * the request for the remaining notifications
+		 */
+		if (atomic_dec_and_test(&dev->notify_count))
+			break;
+
+ status = usb_ep_queue(dev->notify, req, GFP_ATOMIC);
+ if (status) {
+ atomic_dec(&dev->notify_count);
+ pr_debug("ep enqueue error %d\n", status);
+ }
+ break;
+ }
+}
+
+static int
+frmnet_setup(struct usb_function *f, const struct usb_ctrlrequest *ctrl)
+{
+ struct f_rmnet *dev = func_to_rmnet(f);
+ struct usb_composite_dev *cdev = dev->cdev;
+ struct usb_request *req = cdev->req;
+ unsigned port_num;
+ u16 w_index = le16_to_cpu(ctrl->wIndex);
+ u16 w_value = le16_to_cpu(ctrl->wValue);
+ u16 w_length = le16_to_cpu(ctrl->wLength);
+ int ret = -EOPNOTSUPP;
+
+ pr_debug("%s:dev:%p port#%d\n", __func__, dev, dev->port_num);
+
+ if (!atomic_read(&dev->online)) {
+ pr_debug("%s: usb cable is not connected\n", __func__);
+ return -ENOTCONN;
+ }
+
+ switch ((ctrl->bRequestType << 8) | ctrl->bRequest) {
+
+ case ((USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8)
+ | USB_CDC_SEND_ENCAPSULATED_COMMAND:
+ ret = w_length;
+ req->complete = frmnet_cmd_complete;
+ req->context = dev;
+ break;
+
+ case ((USB_DIR_IN | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8)
+ | USB_CDC_GET_ENCAPSULATED_RESPONSE:
+ if (w_value)
+ goto invalid;
+ else {
+ unsigned len;
+ struct rmnet_ctrl_pkt *cpkt;
+
+ spin_lock(&dev->lock);
+ if (list_empty(&dev->cpkt_resp_q)) {
+ pr_err("ctrl resp queue empty"
+ " req%02x.%02x v%04x i%04x l%d\n",
+ ctrl->bRequestType, ctrl->bRequest,
+ w_value, w_index, w_length);
+ spin_unlock(&dev->lock);
+ goto invalid;
+ }
+
+ cpkt = list_first_entry(&dev->cpkt_resp_q,
+ struct rmnet_ctrl_pkt, list);
+ list_del(&cpkt->list);
+ spin_unlock(&dev->lock);
+
+ len = min_t(unsigned, w_length, cpkt->len);
+ memcpy(req->buf, cpkt->buf, len);
+ ret = len;
+
+ rmnet_free_ctrl_pkt(cpkt);
+ }
+ break;
+ case ((USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8)
+ | USB_CDC_REQ_SET_CONTROL_LINE_STATE:
+ if (dev->port.notify_modem) {
+ port_num = rmnet_ports[dev->port_num].ctrl_xport_num;
+ dev->port.notify_modem(&dev->port, port_num, w_value);
+ }
+ ret = 0;
+
+ break;
+ default:
+
+invalid:
+ DBG(cdev, "invalid control req%02x.%02x v%04x i%04x l%d\n",
+ ctrl->bRequestType, ctrl->bRequest,
+ w_value, w_index, w_length);
+ }
+
+ /* respond with data transfer or status phase? */
+ if (ret >= 0) {
+ VDBG(cdev, "rmnet req%02x.%02x v%04x i%04x l%d\n",
+ ctrl->bRequestType, ctrl->bRequest,
+ w_value, w_index, w_length);
+ req->zero = (ret < w_length);
+ req->length = ret;
+ ret = usb_ep_queue(cdev->gadget->ep0, req, GFP_ATOMIC);
+ if (ret < 0)
+ ERROR(cdev, "rmnet ep0 enqueue err %d\n", ret);
+ }
+
+ return ret;
+}
+
+static int frmnet_bind(struct usb_configuration *c, struct usb_function *f)
+{
+ struct f_rmnet *dev = func_to_rmnet(f);
+ struct usb_ep *ep;
+ struct usb_composite_dev *cdev = c->cdev;
+ int ret = -ENODEV;
+
+ dev->ifc_id = usb_interface_id(c, f);
+ if (dev->ifc_id < 0) {
+ pr_err("%s: unable to allocate ifc id, err:%d\n",
+ __func__, dev->ifc_id);
+ return dev->ifc_id;
+ }
+ rmnet_interface_desc.bInterfaceNumber = dev->ifc_id;
+
+ ep = usb_ep_autoconfig(cdev->gadget, &rmnet_fs_in_desc);
+ if (!ep) {
+ pr_err("%s: usb epin autoconfig failed\n", __func__);
+ return -ENODEV;
+ }
+ dev->port.in = ep;
+ ep->driver_data = cdev;
+
+ ep = usb_ep_autoconfig(cdev->gadget, &rmnet_fs_out_desc);
+ if (!ep) {
+ pr_err("%s: usb epout autoconfig failed\n", __func__);
+ ret = -ENODEV;
+ goto ep_auto_out_fail;
+ }
+ dev->port.out = ep;
+ ep->driver_data = cdev;
+
+ ep = usb_ep_autoconfig(cdev->gadget, &rmnet_fs_notify_desc);
+ if (!ep) {
+ pr_err("%s: usb epnotify autoconfig failed\n", __func__);
+ ret = -ENODEV;
+ goto ep_auto_notify_fail;
+ }
+ dev->notify = ep;
+ ep->driver_data = cdev;
+
+ dev->notify_req = frmnet_alloc_req(ep,
+ sizeof(struct usb_cdc_notification),
+ GFP_KERNEL);
+ if (IS_ERR(dev->notify_req)) {
+ pr_err("%s: unable to allocate memory for notify req\n",
+ __func__);
+ ret = -ENOMEM;
+ goto ep_notify_alloc_fail;
+ }
+
+ dev->notify_req->complete = frmnet_notify_complete;
+ dev->notify_req->context = dev;
+
+ f->fs_descriptors = usb_copy_descriptors(rmnet_fs_function);
+
+ if (!f->fs_descriptors)
+ goto fail;
+
+ if (gadget_is_dualspeed(cdev->gadget)) {
+ rmnet_hs_in_desc.bEndpointAddress =
+ rmnet_fs_in_desc.bEndpointAddress;
+ rmnet_hs_out_desc.bEndpointAddress =
+ rmnet_fs_out_desc.bEndpointAddress;
+ rmnet_hs_notify_desc.bEndpointAddress =
+ rmnet_fs_notify_desc.bEndpointAddress;
+
+ /* copy descriptors, and track endpoint copies */
+ f->hs_descriptors = usb_copy_descriptors(rmnet_hs_function);
+
+ if (!f->hs_descriptors)
+ goto fail;
+ }
+
+ pr_debug("%s: RmNet(%d) %s Speed, IN:%s OUT:%s\n",
+ __func__, dev->port_num,
+ gadget_is_dualspeed(cdev->gadget) ? "dual" : "full",
+ dev->port.in->name, dev->port.out->name);
+
+ return 0;
+
+fail:
+ if (f->fs_descriptors)
+ usb_free_descriptors(f->fs_descriptors);
+ep_notify_alloc_fail:
+ dev->notify->driver_data = NULL;
+ dev->notify = NULL;
+ep_auto_notify_fail:
+ dev->port.out->driver_data = NULL;
+ dev->port.out = NULL;
+ep_auto_out_fail:
+ dev->port.in->driver_data = NULL;
+ dev->port.in = NULL;
+
+ return ret;
+}
+
+static int frmnet_bind_config(struct usb_configuration *c, unsigned portno)
+{
+ int status;
+ struct f_rmnet *dev;
+ struct usb_function *f;
+ unsigned long flags;
+
+ pr_debug("%s: usb config:%p\n", __func__, c);
+
+ if (portno >= nr_rmnet_ports) {
+ pr_err("%s: invalid port_id:%u, only %u ports supported\n",
+ __func__, portno, nr_rmnet_ports);
+ return -ENODEV;
+ }
+
+ if (rmnet_string_defs[0].id == 0) {
+ status = usb_string_id(c->cdev);
+ if (status < 0) {
+ pr_err("%s: failed to get string id, err:%d\n",
+ __func__, status);
+ return status;
+ }
+ rmnet_string_defs[0].id = status;
+ }
+
+ dev = rmnet_ports[portno].port;
+
+ spin_lock_irqsave(&dev->lock, flags);
+ dev->cdev = c->cdev;
+ f = &dev->port.func;
+ f->name = kasprintf(GFP_ATOMIC, "rmnet%d", portno);
+ spin_unlock_irqrestore(&dev->lock, flags);
+ if (!f->name) {
+ pr_err("%s: cannot allocate memory for name\n", __func__);
+ return -ENOMEM;
+ }
+
+ f->strings = rmnet_strings;
+ f->bind = frmnet_bind;
+ f->unbind = frmnet_unbind;
+ f->disable = frmnet_disable;
+ f->set_alt = frmnet_set_alt;
+ f->setup = frmnet_setup;
+ f->suspend = frmnet_suspend;
+ f->resume = frmnet_resume;
+ dev->port.send_cpkt_response = frmnet_send_cpkt_response;
+ dev->port.disconnect = frmnet_disconnect;
+ dev->port.connect = frmnet_connect;
+
+ rmnet_interface_desc.iInterface = rmnet_string_defs[0].id;
+ status = usb_add_function(c, f);
+ if (status) {
+ pr_err("%s: usb add function failed: %d\n",
+ __func__, status);
+ kfree(f->name);
+ return status;
+ }
+
+ pr_debug("%s: complete\n", __func__);
+
+ return status;
+}
+
+static void frmnet_cleanup(void)
+{
+ int i;
+
+ for (i = 0; i < nr_rmnet_ports; i++)
+ kfree(rmnet_ports[i].port);
+
+ nr_rmnet_ports = 0;
+ no_ctrl_smd_ports = 0;
+ no_data_bam_ports = 0;
+ no_data_bam2bam_ports = 0;
+ no_ctrl_hsic_ports = 0;
+ no_data_hsic_ports = 0;
+ no_ctrl_hsuart_ports = 0;
+ no_data_hsuart_ports = 0;
+}
+
+static int frmnet_init_port(const char *ctrl_name, const char *data_name)
+{
+ struct f_rmnet *dev;
+ struct rmnet_ports *rmnet_port;
+ int ret;
+ int i;
+
+ if (nr_rmnet_ports >= NR_RMNET_PORTS) {
+ pr_err("%s: max %d instances supported\n",
+ __func__, NR_RMNET_PORTS);
+ return -EINVAL;
+ }
+
+ pr_debug("%s: port#:%d, ctrl port: %s data port: %s\n",
+ __func__, nr_rmnet_ports, ctrl_name, data_name);
+
+ dev = kzalloc(sizeof(struct f_rmnet), GFP_KERNEL);
+ if (!dev) {
+ pr_err("%s: Unable to allocate rmnet device\n", __func__);
+ return -ENOMEM;
+ }
+
+ dev->port_num = nr_rmnet_ports;
+ spin_lock_init(&dev->lock);
+ INIT_LIST_HEAD(&dev->cpkt_resp_q);
+
+ rmnet_port = &rmnet_ports[nr_rmnet_ports];
+ rmnet_port->port = dev;
+ rmnet_port->port_num = nr_rmnet_ports;
+ rmnet_port->ctrl_xport = str_to_xport(ctrl_name);
+ rmnet_port->data_xport = str_to_xport(data_name);
+
+ switch (rmnet_port->ctrl_xport) {
+ case USB_GADGET_XPORT_HSIC:
+ rmnet_port->ctrl_xport_num = no_ctrl_hsic_ports;
+ no_ctrl_hsic_ports++;
+ break;
+ case USB_GADGET_XPORT_NONE:
+ break;
+ default:
+ pr_err("%s: unsupported transport: %u\n", __func__,
+ rmnet_port->ctrl_xport);
+ ret = -ENODEV;
+ goto fail_probe;
+ }
+
+ switch (rmnet_port->data_xport) {
+ case USB_GADGET_XPORT_HSIC:
+ rmnet_port->data_xport_num = no_data_hsic_ports;
+ no_data_hsic_ports++;
+ break;
+ case USB_GADGET_XPORT_NONE:
+ break;
+ default:
+ pr_err("%s: unsupported transport: %u\n", __func__,
+ rmnet_port->data_xport);
+ ret = -ENODEV;
+ goto fail_probe;
+ }
+ nr_rmnet_ports++;
+
+ return 0;
+
+fail_probe:
+ for (i = 0; i < nr_rmnet_ports; i++)
+ kfree(rmnet_ports[i].port);
+
+ nr_rmnet_ports = 0;
+ no_ctrl_smd_ports = 0;
+ no_data_bam_ports = 0;
+ no_ctrl_hsic_ports = 0;
+ no_data_hsic_ports = 0;
+ no_ctrl_hsuart_ports = 0;
+ no_data_hsuart_ports = 0;
+
+ return ret;
+}
diff --git a/drivers/usb/gadget/f_serial.c b/drivers/usb/gadget/f_serial.c
index 981113c..84d4125 100644
--- a/drivers/usb/gadget/f_serial.c
+++ b/drivers/usb/gadget/f_serial.c
@@ -16,6 +16,8 @@
#include <linux/device.h>
#include "u_serial.h"
+#include "u_data_hsic.h"
+#include "u_ctrl_hsic.h"
#include "gadget_chips.h"
@@ -32,6 +34,30 @@
struct gserial port;
u8 data_id;
u8 port_num;
+
+ u8 online;
+
+#ifdef CONFIG_QCT_USB_MODEM_SUPPORT
+ u8 pending;
+ spinlock_t lock;
+ struct usb_ep *notify;
+ struct usb_request *notify_req;
+
+ struct usb_cdc_line_coding port_line_coding;
+ u16 port_handshake_bits;
+#define ACM_CTRL_RTS (1 << 1) /* unused with full duplex */
+#define ACM_CTRL_DTR (1 << 0) /* host is ready for data r/w */
+
+ /* SerialState notification */
+ u16 serial_state;
+#define ACM_CTRL_OVERRUN (1 << 6)
+#define ACM_CTRL_PARITY (1 << 5)
+#define ACM_CTRL_FRAMING (1 << 4)
+#define ACM_CTRL_RI (1 << 3)
+#define ACM_CTRL_BRK (1 << 2)
+#define ACM_CTRL_DSR (1 << 1)
+#define ACM_CTRL_DCD (1 << 0)
+#endif
};
static inline struct f_gser *func_to_gser(struct usb_function *f)
@@ -39,6 +65,14 @@
return container_of(f, struct f_gser, port.func);
}
+#ifdef CONFIG_QCT_USB_MODEM_SUPPORT
+static inline struct f_gser *port_to_gser(struct gserial *p)
+{
+ return container_of(p, struct f_gser, port);
+}
+#define GS_LOG2_NOTIFY_INTERVAL 5 /* 1 << 5 == 32 msec */
+#define GS_NOTIFY_MAXPACKET 10 /* notification + 2 bytes */
+#endif
/*-------------------------------------------------------------------------*/
/* interface descriptor: */
@@ -47,14 +81,60 @@
.bLength = USB_DT_INTERFACE_SIZE,
.bDescriptorType = USB_DT_INTERFACE,
/* .bInterfaceNumber = DYNAMIC */
+#ifdef CONFIG_QCT_USB_MODEM_SUPPORT
+ .bNumEndpoints = 3,
+#else
.bNumEndpoints = 2,
+#endif
.bInterfaceClass = USB_CLASS_VENDOR_SPEC,
.bInterfaceSubClass = 0,
.bInterfaceProtocol = 0,
/* .iInterface = DYNAMIC */
};
+#ifdef CONFIG_QCT_USB_MODEM_SUPPORT
+static struct usb_cdc_header_desc gser_header_desc = {
+ .bLength = sizeof(gser_header_desc),
+ .bDescriptorType = USB_DT_CS_INTERFACE,
+ .bDescriptorSubType = USB_CDC_HEADER_TYPE,
+ .bcdCDC = __constant_cpu_to_le16(0x0110),
+};
+
+static struct usb_cdc_call_mgmt_descriptor
+gser_call_mgmt_descriptor = {
+ .bLength = sizeof(gser_call_mgmt_descriptor),
+ .bDescriptorType = USB_DT_CS_INTERFACE,
+ .bDescriptorSubType = USB_CDC_CALL_MANAGEMENT_TYPE,
+ .bmCapabilities = 0,
+ /* .bDataInterface = DYNAMIC */
+};
+
+static struct usb_cdc_acm_descriptor gser_descriptor = {
+ .bLength = sizeof(gser_descriptor),
+ .bDescriptorType = USB_DT_CS_INTERFACE,
+ .bDescriptorSubType = USB_CDC_ACM_TYPE,
+ .bmCapabilities = USB_CDC_CAP_LINE,
+};
+
+static struct usb_cdc_union_desc gser_union_desc = {
+ .bLength = sizeof(gser_union_desc),
+ .bDescriptorType = USB_DT_CS_INTERFACE,
+ .bDescriptorSubType = USB_CDC_UNION_TYPE,
+ /* .bMasterInterface0 = DYNAMIC */
+ /* .bSlaveInterface0 = DYNAMIC */
+};
+#endif
/* full speed support: */
+#ifdef CONFIG_QCT_USB_MODEM_SUPPORT
+static struct usb_endpoint_descriptor gser_fs_notify_desc = {
+ .bLength = USB_DT_ENDPOINT_SIZE,
+ .bDescriptorType = USB_DT_ENDPOINT,
+ .bEndpointAddress = USB_DIR_IN,
+ .bmAttributes = USB_ENDPOINT_XFER_INT,
+ .wMaxPacketSize = __constant_cpu_to_le16(GS_NOTIFY_MAXPACKET),
+ .bInterval = 1 << GS_LOG2_NOTIFY_INTERVAL,
+};
+#endif
static struct usb_endpoint_descriptor gser_fs_in_desc = {
.bLength = USB_DT_ENDPOINT_SIZE,
@@ -72,29 +152,53 @@
static struct usb_descriptor_header *gser_fs_function[] = {
(struct usb_descriptor_header *) &gser_interface_desc,
+#ifdef CONFIG_QCT_USB_MODEM_SUPPORT
+ (struct usb_descriptor_header *) &gser_header_desc,
+ (struct usb_descriptor_header *) &gser_call_mgmt_descriptor,
+ (struct usb_descriptor_header *) &gser_descriptor,
+ (struct usb_descriptor_header *) &gser_union_desc,
+ (struct usb_descriptor_header *) &gser_fs_notify_desc,
+#endif
(struct usb_descriptor_header *) &gser_fs_in_desc,
(struct usb_descriptor_header *) &gser_fs_out_desc,
NULL,
};
/* high speed support: */
+#ifdef CONFIG_QCT_USB_MODEM_SUPPORT
+static struct usb_endpoint_descriptor gser_hs_notify_desc = {
+ .bLength = USB_DT_ENDPOINT_SIZE,
+ .bDescriptorType = USB_DT_ENDPOINT,
+ .bEndpointAddress = USB_DIR_IN,
+ .bmAttributes = USB_ENDPOINT_XFER_INT,
+ .wMaxPacketSize = __constant_cpu_to_le16(GS_NOTIFY_MAXPACKET),
+ .bInterval = GS_LOG2_NOTIFY_INTERVAL + 4,
+};
+#endif
static struct usb_endpoint_descriptor gser_hs_in_desc = {
.bLength = USB_DT_ENDPOINT_SIZE,
.bDescriptorType = USB_DT_ENDPOINT,
.bmAttributes = USB_ENDPOINT_XFER_BULK,
- .wMaxPacketSize = cpu_to_le16(512),
+ .wMaxPacketSize = __constant_cpu_to_le16(512),
};
static struct usb_endpoint_descriptor gser_hs_out_desc = {
.bLength = USB_DT_ENDPOINT_SIZE,
.bDescriptorType = USB_DT_ENDPOINT,
.bmAttributes = USB_ENDPOINT_XFER_BULK,
- .wMaxPacketSize = cpu_to_le16(512),
+ .wMaxPacketSize = __constant_cpu_to_le16(512),
};
static struct usb_descriptor_header *gser_hs_function[] = {
(struct usb_descriptor_header *) &gser_interface_desc,
+#ifdef CONFIG_QCT_USB_MODEM_SUPPORT
+ (struct usb_descriptor_header *) &gser_header_desc,
+ (struct usb_descriptor_header *) &gser_call_mgmt_descriptor,
+ (struct usb_descriptor_header *) &gser_descriptor,
+ (struct usb_descriptor_header *) &gser_union_desc,
+ (struct usb_descriptor_header *) &gser_hs_notify_desc,
+#endif
(struct usb_descriptor_header *) &gser_hs_in_desc,
(struct usb_descriptor_header *) &gser_hs_out_desc,
NULL,
@@ -119,8 +223,37 @@
.bDescriptorType = USB_DT_SS_ENDPOINT_COMP,
};
+#ifdef CONFIG_QCT_USB_MODEM_SUPPORT
+static struct usb_endpoint_descriptor gser_ss_notify_desc = {
+ .bLength = USB_DT_ENDPOINT_SIZE,
+ .bDescriptorType = USB_DT_ENDPOINT,
+ .bEndpointAddress = USB_DIR_IN,
+ .bmAttributes = USB_ENDPOINT_XFER_INT,
+ .wMaxPacketSize = __constant_cpu_to_le16(GS_NOTIFY_MAXPACKET),
+ .bInterval = GS_LOG2_NOTIFY_INTERVAL + 4,
+};
+
+static struct usb_ss_ep_comp_descriptor gser_ss_notify_comp_desc = {
+ .bLength = sizeof gser_ss_notify_comp_desc,
+ .bDescriptorType = USB_DT_SS_ENDPOINT_COMP,
+
+ /* the following 2 values can be tweaked if necessary */
+ /* .bMaxBurst = 0, */
+ /* .bmAttributes = 0, */
+ .wBytesPerInterval = cpu_to_le16(GS_NOTIFY_MAXPACKET),
+};
+#endif
+
static struct usb_descriptor_header *gser_ss_function[] = {
(struct usb_descriptor_header *) &gser_interface_desc,
+#ifdef CONFIG_QCT_USB_MODEM_SUPPORT
+ (struct usb_descriptor_header *) &gser_header_desc,
+ (struct usb_descriptor_header *) &gser_call_mgmt_descriptor,
+ (struct usb_descriptor_header *) &gser_descriptor,
+ (struct usb_descriptor_header *) &gser_union_desc,
+ (struct usb_descriptor_header *) &gser_ss_notify_desc,
+ (struct usb_descriptor_header *) &gser_ss_notify_comp_desc,
+#endif
(struct usb_descriptor_header *) &gser_ss_in_desc,
(struct usb_descriptor_header *) &gser_ss_bulk_comp_desc,
(struct usb_descriptor_header *) &gser_ss_out_desc,
@@ -145,8 +278,121 @@
NULL,
};
+static struct usb_string modem_string_defs[] = {
+ [0].s = "HTC Modem",
+ [1].s = "HTC 9k Modem",
+ { } /* end of list */
+};
+
+static struct usb_gadget_strings modem_string_table = {
+ .language = 0x0409, /* en-us */
+ .strings = modem_string_defs,
+};
+
+static struct usb_gadget_strings *modem_strings[] = {
+ &modem_string_table,
+ NULL,
+};
+
+#ifdef CONFIG_QCT_USB_MODEM_SUPPORT
+static void gser_complete_set_line_coding(struct usb_ep *ep,
+ struct usb_request *req)
+{
+ struct f_gser *gser = ep->driver_data;
+ struct usb_composite_dev *cdev = gser->port.func.config->cdev;
+
+ if (req->status != 0) {
+ DBG(cdev, "gser ttyGS%d completion, err %d\n",
+ gser->port_num, req->status);
+ return;
+ }
+
+ /* normal completion */
+ if (req->actual != sizeof(gser->port_line_coding)) {
+ DBG(cdev, "gser ttyGS%d short resp, len %d\n",
+ gser->port_num, req->actual);
+ usb_ep_set_halt(ep);
+ } else {
+ struct usb_cdc_line_coding *value = req->buf;
+ gser->port_line_coding = *value;
+ }
+}
/*-------------------------------------------------------------------------*/
+static int
+gser_setup(struct usb_function *f, const struct usb_ctrlrequest *ctrl)
+{
+ struct f_gser *gser = func_to_gser(f);
+ struct usb_composite_dev *cdev = f->config->cdev;
+ struct usb_request *req = cdev->req;
+ int value = -EOPNOTSUPP;
+ u16 w_index = le16_to_cpu(ctrl->wIndex);
+ u16 w_value = le16_to_cpu(ctrl->wValue);
+ u16 w_length = le16_to_cpu(ctrl->wLength);
+
+ switch ((ctrl->bRequestType << 8) | ctrl->bRequest) {
+
+ /* SET_LINE_CODING ... just read and save what the host sends */
+ case ((USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8)
+ | USB_CDC_REQ_SET_LINE_CODING:
+ if (w_length != sizeof(struct usb_cdc_line_coding))
+ goto invalid;
+
+ value = w_length;
+ cdev->gadget->ep0->driver_data = gser;
+ req->complete = gser_complete_set_line_coding;
+ break;
+
+ /* GET_LINE_CODING ... return what host sent, or initial value */
+ case ((USB_DIR_IN | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8)
+ | USB_CDC_REQ_GET_LINE_CODING:
+ value = min_t(unsigned, w_length,
+ sizeof(struct usb_cdc_line_coding));
+ memcpy(req->buf, &gser->port_line_coding, value);
+ break;
+
+ /* SET_CONTROL_LINE_STATE ... save what the host sent */
+ case ((USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8)
+ | USB_CDC_REQ_SET_CONTROL_LINE_STATE:
+
+ value = 0;
+ gser->port_handshake_bits = w_value;
+ if (gser->port.notify_modem) {
+ unsigned port_num = 0;
+/*
+ unsigned port_num =
+ gserial_ports[gser->port_num].client_port_num;
+*/
+ gser->port.notify_modem(&gser->port,
+ port_num, w_value);
+ }
+ break;
+
+ default:
+invalid:
+ DBG(cdev, "invalid control req%02x.%02x v%04x i%04x l%d\n",
+ ctrl->bRequestType, ctrl->bRequest,
+ w_value, w_index, w_length);
+ }
+
+ /* respond with data transfer or status phase? */
+ if (value >= 0) {
+ DBG(cdev, "gser ttyGS%d req%02x.%02x v%04x i%04x l%d\n",
+ gser->port_num, ctrl->bRequestType, ctrl->bRequest,
+ w_value, w_index, w_length);
+ req->zero = 0;
+ req->length = value;
+ value = usb_ep_queue(cdev->gadget->ep0, req, GFP_ATOMIC);
+ if (value < 0)
+ ERROR(cdev, "gser response on ttyGS%d, err %d\n",
+ gser->port_num, value);
+ }
+
+ /* device either stalls (value < 0) or reports success */
+ return value;
+}
+#endif
+
static int gser_set_alt(struct usb_function *f, unsigned intf, unsigned alt)
{
struct f_gser *gser = func_to_gser(f);
@@ -171,6 +417,66 @@
return 0;
}
+static int gser_mdm_set_alt(struct usb_function *f, unsigned intf, unsigned alt)
+{
+ struct f_gser *gser = func_to_gser(f);
+ struct usb_composite_dev *cdev = f->config->cdev;
+ int ret = 0;
+
+#ifdef CONFIG_QCT_USB_MODEM_SUPPORT
+ if (gser->notify->driver_data) {
+ DBG(cdev, "reset generic ctl ttyGS%d\n", gser->port_num);
+ usb_ep_disable(gser->notify);
+ }
+
+ if (!gser->notify->desc) {
+ if (config_ep_by_speed(cdev->gadget, f, gser->notify)) {
+ gser->notify->desc = NULL;
+ return -EINVAL;
+ }
+ }
+ ret = usb_ep_enable(gser->notify);
+
+ if (ret) {
+ ERROR(cdev, "can't enable %s, result %d\n",
+ gser->notify->name, ret);
+ return ret;
+ }
+ gser->notify->driver_data = gser;
+#endif
+
+ /* we know alt == 0, so this is an activation or a reset */
+
+ if (gser->port.in->driver_data) {
+ DBG(cdev, "reset generic ttyGS%d\n", gser->port_num);
+ ghsic_ctrl_disconnect(&gser->port, gser->port_num);
+ ghsic_data_disconnect(&gser->port, gser->port_num);
+ }
+ if (!gser->port.in->desc || !gser->port.out->desc) {
+ DBG(cdev, "activate generic ttyGS%d\n", gser->port_num);
+ if (config_ep_by_speed(cdev->gadget, f, gser->port.in) ||
+ config_ep_by_speed(cdev->gadget, f, gser->port.out)) {
+ gser->port.in->desc = NULL;
+ gser->port.out->desc = NULL;
+ return -EINVAL;
+ }
+ }
+ ret = ghsic_ctrl_connect(&gser->port, gser->port_num);
+ if (ret) {
+ pr_err("%s: ghsic_ctrl_connect failed: err:%d\n",
+ __func__, ret);
+ return ret;
+ }
+ ret = ghsic_data_connect(&gser->port, gser->port_num);
+ if (ret) {
+ pr_err("%s: ghsic_data_connect failed: err:%d\n",
+ __func__, ret);
+ ghsic_ctrl_disconnect(&gser->port, gser->port_num);
+ return ret;
+ }
+ return ret;
+}
+
static void gser_disable(struct usb_function *f)
{
struct f_gser *gser = func_to_gser(f);
@@ -180,6 +486,187 @@
gserial_disconnect(&gser->port);
}
+static void gser_mdm_disable(struct usb_function *f)
+{
+ struct f_gser *gser = func_to_gser(f);
+ struct usb_composite_dev *cdev = f->config->cdev;
+
+ DBG(cdev, "generic ttyGS%d deactivated\n", gser->port_num);
+ ghsic_ctrl_disconnect(&gser->port, gser->port_num);
+ ghsic_data_disconnect(&gser->port, gser->port_num);
+
+#ifdef CONFIG_QCT_USB_MODEM_SUPPORT
+ usb_ep_fifo_flush(gser->notify);
+ usb_ep_disable(gser->notify);
+#endif
+ gser->notify->driver_data = NULL;
+}
+
+#ifdef CONFIG_QCT_USB_MODEM_SUPPORT
+static int gser_notify(struct f_gser *gser, u8 type, u16 value,
+ void *data, unsigned length)
+{
+ struct usb_ep *ep = gser->notify;
+ struct usb_request *req;
+ struct usb_cdc_notification *notify;
+ const unsigned len = sizeof(*notify) + length;
+ void *buf;
+ int status;
+ struct usb_composite_dev *cdev = gser->port.func.config->cdev;
+
+ req = gser->notify_req;
+ gser->notify_req = NULL;
+ gser->pending = false;
+
+ req->length = len;
+ notify = req->buf;
+ buf = notify + 1;
+
+ notify->bmRequestType = USB_DIR_IN | USB_TYPE_CLASS
+ | USB_RECIP_INTERFACE;
+ notify->bNotificationType = type;
+ notify->wValue = cpu_to_le16(value);
+ notify->wIndex = cpu_to_le16(gser->data_id);
+ notify->wLength = cpu_to_le16(length);
+ memcpy(buf, data, length);
+
+ status = usb_ep_queue(ep, req, GFP_ATOMIC);
+ if (status < 0) {
+ ERROR(cdev, "gser ttyGS%d can't notify serial state, %d\n",
+ gser->port_num, status);
+ gser->notify_req = req;
+ }
+
+ return status;
+}
+
+static int gser_notify_serial_state(struct f_gser *gser)
+{
+ int status;
+ unsigned long flags;
+ struct usb_composite_dev *cdev = gser->port.func.config->cdev;
+
+ spin_lock_irqsave(&gser->lock, flags);
+ if (gser->notify_req) {
+ DBG(cdev, "gser ttyGS%d serial state %04x\n",
+ gser->port_num, gser->serial_state);
+ status = gser_notify(gser, USB_CDC_NOTIFY_SERIAL_STATE,
+ 0, &gser->serial_state,
+ sizeof(gser->serial_state));
+ } else {
+ gser->pending = true;
+ status = 0;
+ }
+ spin_unlock_irqrestore(&gser->lock, flags);
+ return status;
+}
+
+static void gser_notify_complete(struct usb_ep *ep, struct usb_request *req)
+{
+ struct f_gser *gser = req->context;
+ u8 doit = false;
+ unsigned long flags;
+
+ /* on this call path we do NOT hold the port spinlock,
+ * which is why ACM needs its own spinlock
+ */
+ spin_lock_irqsave(&gser->lock, flags);
+ if (req->status != -ESHUTDOWN)
+ doit = gser->pending;
+ gser->notify_req = req;
+ spin_unlock_irqrestore(&gser->lock, flags);
+
+ if (doit && gser->online)
+ gser_notify_serial_state(gser);
+}
+static void gser_connect(struct gserial *port)
+{
+ struct f_gser *gser = port_to_gser(port);
+
+ gser->serial_state |= ACM_CTRL_DSR | ACM_CTRL_DCD;
+ gser_notify_serial_state(gser);
+}
+
+unsigned int gser_get_dtr(struct gserial *port)
+{
+ struct f_gser *gser = port_to_gser(port);
+
+ if (gser->port_handshake_bits & ACM_CTRL_DTR)
+ return 1;
+ else
+ return 0;
+}
+
+unsigned int gser_get_rts(struct gserial *port)
+{
+ struct f_gser *gser = port_to_gser(port);
+
+ if (gser->port_handshake_bits & ACM_CTRL_RTS)
+ return 1;
+ else
+ return 0;
+}
+
+unsigned int gser_send_carrier_detect(struct gserial *port, unsigned int yes)
+{
+ struct f_gser *gser = port_to_gser(port);
+ u16 state;
+
+ state = gser->serial_state;
+ state &= ~ACM_CTRL_DCD;
+ if (yes)
+ state |= ACM_CTRL_DCD;
+
+ gser->serial_state = state;
+ return gser_notify_serial_state(gser);
+}
+
+unsigned int gser_send_ring_indicator(struct gserial *port, unsigned int yes)
+{
+ struct f_gser *gser = port_to_gser(port);
+ u16 state;
+
+ state = gser->serial_state;
+ state &= ~ACM_CTRL_RI;
+ if (yes)
+ state |= ACM_CTRL_RI;
+
+ gser->serial_state = state;
+ return gser_notify_serial_state(gser);
+}
+static void gser_disconnect(struct gserial *port)
+{
+ struct f_gser *gser = port_to_gser(port);
+
+ gser->serial_state &= ~(ACM_CTRL_DSR | ACM_CTRL_DCD);
+ gser_notify_serial_state(gser);
+}
+
+static int gser_send_break(struct gserial *port, int duration)
+{
+ struct f_gser *gser = port_to_gser(port);
+ u16 state;
+
+ state = gser->serial_state;
+ state &= ~ACM_CTRL_BRK;
+ if (duration)
+ state |= ACM_CTRL_BRK;
+
+ gser->serial_state = state;
+ return gser_notify_serial_state(gser);
+}
+
+static int gser_send_modem_ctrl_bits(struct gserial *port, int ctrl_bits)
+{
+ struct f_gser *gser = port_to_gser(port);
+
+ gser->serial_state = ctrl_bits;
+
+ return gser_notify_serial_state(gser);
+}
+#endif
/*-------------------------------------------------------------------------*/
/* serial function driver setup/binding */
@@ -225,6 +712,23 @@
gser->port.out = ep;
ep->driver_data = cdev; /* claim */
+#ifdef CONFIG_QCT_USB_MODEM_SUPPORT
+ ep = usb_ep_autoconfig(cdev->gadget, &gser_fs_notify_desc);
+ if (!ep)
+ goto fail;
+ gser->notify = ep;
+ ep->driver_data = cdev; /* claim */
+ /* allocate notification */
+ gser->notify_req = gs_alloc_req(ep,
+ sizeof(struct usb_cdc_notification) + 2,
+ GFP_KERNEL);
+ if (!gser->notify_req)
+ goto fail;
+
+ gser->notify_req->complete = gser_notify_complete;
+ gser->notify_req->context = gser;
+#endif
+
/* support all relevant hardware speeds... we expect that when
* hardware is dual speed, all bulk-capable endpoints work at
* both speeds
@@ -235,6 +739,11 @@
gser_ss_in_desc.bEndpointAddress = gser_fs_in_desc.bEndpointAddress;
gser_ss_out_desc.bEndpointAddress = gser_fs_out_desc.bEndpointAddress;
+#ifdef CONFIG_QCT_USB_MODEM_SUPPORT
+ gser_hs_notify_desc.bEndpointAddress = gser_fs_notify_desc.bEndpointAddress;
+ gser_ss_notify_desc.bEndpointAddress = gser_fs_notify_desc.bEndpointAddress;
+#endif
+
status = usb_assign_descriptors(f, gser_fs_function, gser_hs_function,
gser_ss_function);
if (status)
@@ -247,6 +756,15 @@
return 0;
fail:
+#ifdef CONFIG_QCT_USB_MODEM_SUPPORT
+ if (gser->notify_req)
+ gs_free_req(gser->notify, gser->notify_req);
+
+ /* we might as well release our claims on endpoints */
+ if (gser->notify)
+ gser->notify->driver_data = NULL;
+#endif
+
/* we might as well release our claims on endpoints */
if (gser->port.out)
gser->port.out->driver_data = NULL;
@@ -341,6 +859,37 @@
return &opts->func_inst;
}
+static struct usb_function_instance *modem_alloc_inst(void)
+{
+ struct f_serial_opts *opts;
+ int ret;
+
+ opts = kzalloc(sizeof(*opts), GFP_KERNEL);
+ if (!opts)
+ return ERR_PTR(-ENOMEM);
+
+ opts->func_inst.free_func_inst = gser_free_inst;
+
+ ret = ghsic_data_setup(1, USB_GADGET_SERIAL);
+ if (ret) {
+ kfree(opts);
+ return ERR_PTR(ret);
+ }
+
+ ret = ghsic_ctrl_setup(1, USB_GADGET_SERIAL);
+ if (ret) {
+ kfree(opts);
+ return ERR_PTR(ret);
+ }
+
+ opts->port_num = 0;
+
+ config_group_init_type_name(&opts->func_inst.group, "",
+ &serial_func_type);
+
+ return &opts->func_inst;
+}
+
static void gser_free(struct usb_function *f)
{
struct f_gser *serial;
@@ -351,7 +900,14 @@
static void gser_unbind(struct usb_configuration *c, struct usb_function *f)
{
+#ifdef CONFIG_QCT_USB_MODEM_SUPPORT
+ struct f_gser *gser = func_to_gser(f);
+#endif
+
usb_free_all_descriptors(f);
+#ifdef CONFIG_QCT_USB_MODEM_SUPPORT
+ gs_free_req(gser->notify, gser->notify_req);
+#endif
}
struct usb_function *gser_alloc(struct usb_function_instance *fi)
@@ -375,10 +931,53 @@
gser->port.func.set_alt = gser_set_alt;
gser->port.func.disable = gser_disable;
gser->port.func.free_func = gser_free;
+ gser_interface_desc.iInterface = gser_string_defs[0].id;
return &gser->port.func;
}
+struct usb_function *modem_alloc(struct usb_function_instance *fi)
+{
+ struct f_gser *gser;
+ struct f_serial_opts *opts;
+
+ /* allocate and initialize one new instance */
+ gser = kzalloc(sizeof(*gser), GFP_KERNEL);
+ if (!gser)
+ return ERR_PTR(-ENOMEM);
+
+ opts = container_of(fi, struct f_serial_opts, func_inst);
+
+#ifdef CONFIG_QCT_USB_MODEM_SUPPORT
+ spin_lock_init(&gser->lock);
+#endif
+
+ gser->port_num = opts->port_num;
+
+ gser->port.func.name = "modem";
+ gser->port.func.strings = modem_strings;
+ gser->port.func.bind = gser_bind;
+ gser->port.func.unbind = gser_unbind;
+ gser->port.func.set_alt = gser_mdm_set_alt;
+ gser->port.func.disable = gser_mdm_disable;
+ gser->port.func.free_func = gser_free;
+ gser_interface_desc.iInterface = modem_string_defs[0].id;
+
+#ifdef CONFIG_QCT_USB_MODEM_SUPPORT
+ gser->port.func.setup = gser_setup;
+ gser->port.connect = gser_connect;
+ gser->port.get_dtr = gser_get_dtr;
+ gser->port.get_rts = gser_get_rts;
+ gser->port.send_carrier_detect = gser_send_carrier_detect;
+ gser->port.send_ring_indicator = gser_send_ring_indicator;
+ gser->port.send_modem_ctrl_bits = gser_send_modem_ctrl_bits;
+ gser->port.disconnect = gser_disconnect;
+ gser->port.send_break = gser_send_break;
+#endif
+ return &gser->port.func;
+}
+
+DECLARE_USB_FUNCTION_INIT(modem, modem_alloc_inst, modem_alloc);
DECLARE_USB_FUNCTION_INIT(gser, gser_alloc_inst, gser_alloc);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Al Borchers");
diff --git a/drivers/usb/gadget/tegra_udc.c b/drivers/usb/gadget/tegra_udc.c
index 2ad8acb..019a2d2 100644
--- a/drivers/usb/gadget/tegra_udc.c
+++ b/drivers/usb/gadget/tegra_udc.c
@@ -64,6 +64,11 @@
#define AHB_PREFETCH_BUFFER SZ_128
+/* the tegra_udc driver will pull D+ low during the USB drive strength test */
+static int USB_drive_strength_test;
+module_param(USB_drive_strength_test, int, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(USB_drive_strength_test, "USB drive strength test flag");
+
#define get_ep_by_pipe(udc, pipe) ((pipe == 1) ? &udc->eps[0] : \
&udc->eps[pipe])
#define get_pipe_by_windex(windex) ((windex & USB_ENDPOINT_NUMBER_MASK) \
@@ -128,6 +133,23 @@
module_param_cb(boost_enable, &boost_enable_ops, &boost_enable, 0644);
#endif
+static ssize_t set_current_limit(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct tegra_udc *udc = platform_get_drvdata(pdev);
+ int current_ma;
+
+	if (sscanf(buf, "%d", &current_ma) != 1)
+		return -EINVAL;
+
+	regulator_set_current_limit(udc->vbus_reg, 0, current_ma * 1000);
+
+ return count;
+}
+
+static DEVICE_ATTR(set_current, 0644, NULL, set_current_limit);
+
static char *const tegra_udc_extcon_cable[] = {
[CONNECT_TYPE_NONE] = "",
[CONNECT_TYPE_SDP] = "USB",
@@ -378,7 +400,7 @@
return;
if ((udc->connect_type == CONNECT_TYPE_SDP) ||
- (udc->connect_type == CONNECT_TYPE_SDP))
+ (udc->connect_type == CONNECT_TYPE_CDP))
pm_stay_awake(&udc->pdev->dev);
/* Clear stopped bit */
@@ -1403,16 +1425,14 @@
case CONNECT_TYPE_NONE:
dev_info(dev, "USB cable/charger disconnected\n");
max_ua = 0;
- /* Notify if HOST(SDP/CDP) is connected */
- if ((udc->prev_connect_type == CONNECT_TYPE_SDP) ||
- (udc->prev_connect_type == CONNECT_TYPE_CDP))
- tegra_udc_notify_event(udc, USB_EVENT_NONE);
+ tegra_udc_notify_event(udc, USB_EVENT_NONE);
break;
case CONNECT_TYPE_SDP:
if (udc->current_limit > 2)
dev_info(dev, "connected to SDP\n");
max_ua = min(udc->current_limit * 1000,
USB_CHARGING_SDP_CURRENT_LIMIT_UA);
+
if (udc->charging_supported && !USB_drive_strength_test && charger_detect)
schedule_delayed_work(&udc->non_std_charger_work,
msecs_to_jiffies(NON_STD_CHARGER_DET_TIME_MS));
@@ -1435,15 +1455,7 @@
break;
case CONNECT_TYPE_CDP:
dev_info(dev, "connected to CDP(1.5A)\n");
- /*
- * if current is more than VBUS suspend current, we draw CDP
- * allowed maximum current (override SDP max current which is
- * set by the upper level driver).
- */
- if (udc->current_limit > 2)
- max_ua = USB_CHARGING_CDP_CURRENT_LIMIT_UA;
- else
- max_ua = udc->current_limit * 1000;
+ max_ua = USB_CHARGING_CDP_CURRENT_LIMIT_UA;
tegra_udc_notify_event(udc, USB_EVENT_VBUS);
break;
case CONNECT_TYPE_NV_CHARGER:
@@ -1549,6 +1561,7 @@
* USB host(CDP/SDP), we also start charging now. Upper gadget driver
* won't decide the current while androidboot.mode=charger.
*/
+
tegra_usb_set_charging_current(udc, true);
return 0;
@@ -1656,6 +1669,10 @@
OTG_STATE_B_PERIPHERAL)
return 0;
+	/* set the charger type to SDP if it was previously detected as a non-standard charger */
+ if (udc->connect_type == CONNECT_TYPE_NON_STANDARD_CHARGER)
+ tegra_udc_set_charger_type(udc, CONNECT_TYPE_SDP);
+
if (udc->stopped && can_pullup(udc))
dr_controller_run(udc);
@@ -1675,7 +1692,7 @@
* enumeration started.
*/
if (udc->charging_supported &&
- (udc->connect_type == CONNECT_TYPE_SDP))
+ (udc->connect_type == CONNECT_TYPE_SDP) && !USB_drive_strength_test)
schedule_delayed_work(&udc->non_std_charger_work,
msecs_to_jiffies(NON_STD_CHARGER_DET_TIME_MS));
} else {
@@ -2971,6 +2988,9 @@
goto err_phy;
}
+	err = device_create_file(&pdev->dev, &dev_attr_set_current);
+	if (err)
+		dev_warn(&pdev->dev, "failed to create set_current sysfs file\n");
+
err = usb_add_gadget_udc_release(&pdev->dev, &udc->gadget,
tegra_udc_release);
if (err)
@@ -3137,8 +3157,11 @@
udc->connect_type_lp0 = udc->connect_type;
/* If the controller is in otg mode, return */
- if (udc->transceiver)
+ if (udc->transceiver) {
+ if (udc->vbus_active)
+ tegra_usb_phy_power_off(udc->phy);
return 0;
+ }
if (udc->irq) {
err = enable_irq_wake(udc->irq);
@@ -3197,8 +3220,11 @@
}
}
- if (udc->transceiver)
+ if (udc->transceiver) {
+ if (udc->vbus_active)
+ tegra_usb_phy_power_on(udc->phy);
return 0;
+ }
if (udc->irq) {
err = disable_irq_wake(udc->irq);
diff --git a/drivers/usb/gadget/u_ctrl_hsic.c b/drivers/usb/gadget/u_ctrl_hsic.c
new file mode 100644
index 0000000..ff3fbf3
--- /dev/null
+++ b/drivers/usb/gadget/u_ctrl_hsic.c
@@ -0,0 +1,651 @@
+/* Copyright (c) 2011, Code Aurora Forum. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/fs.h>
+#include <linux/delay.h>
+#include <linux/kernel.h>
+#include <linux/utsname.h>
+#include <linux/platform_device.h>
+
+#include <linux/usb/ch9.h>
+#include <linux/usb/composite.h>
+#include <linux/usb/gadget.h>
+#include <mach/board_htc.h>
+#include <mach/usb_gadget_xport.h>
+
+#include <linux/interrupt.h>
+#include <linux/device.h>
+#include <linux/slab.h>
+#include <linux/termios.h>
+#include <linux/debugfs.h>
+#include <linux/bitops.h>
+#include <mach/usb_bridge.h>
+#include "u_serial.h"
+#include "u_rmnet.h"
+
+/* from cdc-acm.h */
+#define ACM_CTRL_RTS (1 << 1) /* unused with full duplex */
+#define ACM_CTRL_DTR (1 << 0) /* host is ready for data r/w */
+#define ACM_CTRL_OVERRUN (1 << 6)
+#define ACM_CTRL_PARITY (1 << 5)
+#define ACM_CTRL_FRAMING (1 << 4)
+#define ACM_CTRL_RI (1 << 3)
+#define ACM_CTRL_BRK (1 << 2)
+#define ACM_CTRL_DSR (1 << 1)
+#define ACM_CTRL_DCD (1 << 0)
+
+static unsigned int no_ctrl_ports;
+
+static const char *ctrl_bridge_names[] = {
+ "serial_hsic_ctrl",
+ "rmnet_hsic_ctrl"
+};
+
+#define CTRL_BRIDGE_NAME_MAX_LEN 20
+#define READ_BUF_LEN 1024
+
+#define CH_OPENED 0
+#define CH_READY 1
+
+struct gctrl_port {
+ /* port */
+ unsigned port_num;
+
+ /* gadget */
+ spinlock_t port_lock;
+ void *port_usb;
+
+ /* work queue*/
+ struct workqueue_struct *wq;
+ struct work_struct connect_w;
+ struct work_struct disconnect_w;
+
+ enum gadget_type gtype;
+
+ /*ctrl pkt response cb*/
+ int (*send_cpkt_response)(void *g, void *buf, size_t len);
+
+ struct bridge brdg;
+
+ /* bridge status */
+ unsigned long bridge_sts;
+
+ /* control bits */
+ unsigned cbits_tomodem;
+ unsigned cbits_tohost;
+
+ /* counters */
+ unsigned long to_modem;
+ unsigned long to_host;
+ unsigned long drp_cpkt_cnt;
+};
+
+static struct {
+ struct gctrl_port *port;
+ struct platform_driver pdrv;
+} gctrl_ports[NUM_PORTS];
+
+static int ghsic_ctrl_receive(void *dev, void *buf, size_t actual)
+{
+ struct gctrl_port *port = dev;
+ int retval = 0;
+
+	pr_debug_ratelimited("%s: read complete bytes read: %zu\n",
+			__func__, actual);
+
+ /* send it to USB here */
+ if (port && port->send_cpkt_response) {
+ retval = port->send_cpkt_response(port->port_usb, buf, actual);
+ port->to_host++;
+ }
+
+ return retval;
+}
+
+static int
+ghsic_send_cpkt_tomodem(u8 portno, void *buf, size_t len)
+{
+ void *cbuf;
+ struct gctrl_port *port;
+
+ if (portno >= no_ctrl_ports) {
+ pr_err("%s: Invalid portno#%d\n", __func__, portno);
+ return -ENODEV;
+ }
+
+ port = gctrl_ports[portno].port;
+ if (!port) {
+ pr_err("%s: port is null\n", __func__);
+ return -ENODEV;
+ }
+
+ cbuf = kmalloc(len, GFP_ATOMIC);
+ if (!cbuf)
+ return -ENOMEM;
+
+ memcpy(cbuf, buf, len);
+
+ /* drop cpkt if ch is not open */
+ if (!test_bit(CH_OPENED, &port->bridge_sts)) {
+ port->drp_cpkt_cnt++;
+ kfree(cbuf);
+ return 0;
+ }
+
+	pr_debug("%s: ctrl_pkt:%zu bytes\n", __func__, len);
+
+ ctrl_bridge_write(port->brdg.ch_id, cbuf, len);
+
+ port->to_modem++;
+
+ return 0;
+}
+
+static void
+ghsic_send_cbits_tomodem(void *gptr, u8 portno, int cbits)
+{
+ struct gctrl_port *port;
+
+ if (portno >= no_ctrl_ports || !gptr) {
+ pr_err("%s: Invalid portno#%d\n", __func__, portno);
+ return;
+ }
+
+ port = gctrl_ports[portno].port;
+ if (!port) {
+ pr_err("%s: port is null\n", __func__);
+ return;
+ }
+
+ if (cbits == port->cbits_tomodem)
+ return;
+
+ port->cbits_tomodem = cbits;
+
+ if (!test_bit(CH_OPENED, &port->bridge_sts))
+ return;
+
+ pr_debug("%s: ctrl_tomodem:%d\n", __func__, cbits);
+
+ ctrl_bridge_set_cbits(port->brdg.ch_id, cbits);
+}
+
+static void ghsic_ctrl_connect_w(struct work_struct *w)
+{
+ struct gserial *gser = NULL;
+ struct grmnet *gr = NULL;
+ struct gctrl_port *port =
+ container_of(w, struct gctrl_port, connect_w);
+ unsigned long flags;
+ int retval;
+ unsigned cbits;
+
+ if (!port || !test_bit(CH_READY, &port->bridge_sts))
+ return;
+
+ pr_debug("%s: port:%p\n", __func__, port);
+
+ retval = ctrl_bridge_open(&port->brdg);
+ if (retval) {
+ pr_err("%s: ctrl bridge open failed :%d\n", __func__, retval);
+ return;
+ }
+
+ spin_lock_irqsave(&port->port_lock, flags);
+ if (!port->port_usb) {
+ ctrl_bridge_close(port->brdg.ch_id);
+ spin_unlock_irqrestore(&port->port_lock, flags);
+ return;
+ }
+ set_bit(CH_OPENED, &port->bridge_sts);
+ spin_unlock_irqrestore(&port->port_lock, flags);
+
+ cbits = ctrl_bridge_get_cbits_tohost(port->brdg.ch_id);
+
+ if (port->gtype == USB_GADGET_SERIAL && (cbits & ACM_CTRL_DCD)) {
+ gser = port->port_usb;
+ if (gser && gser->connect)
+ gser->connect(gser);
+ return;
+ }
+
+ if (port->gtype == USB_GADGET_RMNET) {
+ gr = port->port_usb;
+ if (gr && gr->connect)
+ gr->connect(gr);
+ }
+}
+
+int ghsic_ctrl_connect(void *gptr, int port_num)
+{
+ struct gctrl_port *port;
+ struct gserial *gser;
+ struct grmnet *gr;
+ unsigned long flags;
+
+ pr_debug("%s: port#%d\n", __func__, port_num);
+
+	if (port_num >= no_ctrl_ports || !gptr) {
+ pr_err("%s: invalid portno#%d\n", __func__, port_num);
+ return -ENODEV;
+ }
+
+ port = gctrl_ports[port_num].port;
+ if (!port) {
+ pr_err("%s: port is null\n", __func__);
+ return -ENODEV;
+ }
+
+ spin_lock_irqsave(&port->port_lock, flags);
+ if (port->gtype == USB_GADGET_SERIAL) {
+ gser = gptr;
+ gser->notify_modem = ghsic_send_cbits_tomodem;
+ }
+
+ if (port->gtype == USB_GADGET_RMNET) {
+ gr = gptr;
+ port->send_cpkt_response = gr->send_cpkt_response;
+ gr->send_encap_cmd = ghsic_send_cpkt_tomodem;
+ gr->notify_modem = ghsic_send_cbits_tomodem;
+ }
+
+ port->port_usb = gptr;
+ port->to_host = 0;
+ port->to_modem = 0;
+ port->drp_cpkt_cnt = 0;
+ spin_unlock_irqrestore(&port->port_lock, flags);
+
+ queue_work(port->wq, &port->connect_w);
+
+ return 0;
+}
+
+static void gctrl_disconnect_w(struct work_struct *w)
+{
+ struct gctrl_port *port =
+ container_of(w, struct gctrl_port, disconnect_w);
+
+ if (!test_bit(CH_OPENED, &port->bridge_sts))
+ return;
+
+ /* send the dtr zero */
+ ctrl_bridge_close(port->brdg.ch_id);
+ clear_bit(CH_OPENED, &port->bridge_sts);
+}
+
+void ghsic_ctrl_disconnect(void *gptr, int port_num)
+{
+ struct gctrl_port *port;
+ struct gserial *gser = NULL;
+ struct grmnet *gr = NULL;
+ unsigned long flags;
+
+ pr_debug("%s: port#%d\n", __func__, port_num);
+
+	if (port_num >= no_ctrl_ports) {
+		pr_err("%s: invalid portno#%d\n", __func__, port_num);
+		return;
+	}
+
+	port = gctrl_ports[port_num].port;
+
+	if (!gptr || !port) {
+		pr_err("%s: grmnet port is null\n", __func__);
+		return;
+	}
+
+ if (port->gtype == USB_GADGET_SERIAL)
+ gser = gptr;
+ else
+ gr = gptr;
+
+ spin_lock_irqsave(&port->port_lock, flags);
+ if (gr) {
+ gr->send_encap_cmd = 0;
+ gr->notify_modem = 0;
+ }
+
+ if (gser)
+ gser->notify_modem = 0;
+ port->cbits_tomodem = 0;
+ port->port_usb = 0;
+ port->send_cpkt_response = 0;
+ spin_unlock_irqrestore(&port->port_lock, flags);
+
+ queue_work(port->wq, &port->disconnect_w);
+}
+
+static void ghsic_ctrl_status(void *ctxt, unsigned int ctrl_bits)
+{
+ struct gctrl_port *port = ctxt;
+ struct gserial *gser;
+
+ pr_debug("%s - input control lines: dcd%c dsr%c break%c "
+ "ring%c framing%c parity%c overrun%c\n", __func__,
+ ctrl_bits & ACM_CTRL_DCD ? '+' : '-',
+ ctrl_bits & ACM_CTRL_DSR ? '+' : '-',
+ ctrl_bits & ACM_CTRL_BRK ? '+' : '-',
+ ctrl_bits & ACM_CTRL_RI ? '+' : '-',
+ ctrl_bits & ACM_CTRL_FRAMING ? '+' : '-',
+ ctrl_bits & ACM_CTRL_PARITY ? '+' : '-',
+ ctrl_bits & ACM_CTRL_OVERRUN ? '+' : '-');
+
+ port->cbits_tohost = ctrl_bits;
+ gser = port->port_usb;
+ if (gser && gser->send_modem_ctrl_bits)
+ gser->send_modem_ctrl_bits(gser, ctrl_bits);
+}
+
+static int ghsic_ctrl_get_port_id(const char *pdev_name)
+{
+ struct gctrl_port *port;
+ int i;
+
+ for (i = 0; i < no_ctrl_ports; i++) {
+ port = gctrl_ports[i].port;
+ if (!strncmp(port->brdg.name, pdev_name, BRIDGE_NAME_MAX_LEN))
+ return i;
+ }
+
+ return -EINVAL;
+}
+
+static int ghsic_ctrl_probe(struct platform_device *pdev)
+{
+ struct gctrl_port *port;
+ unsigned long flags;
+ int id;
+
+ pr_debug("%s: name:%s\n", __func__, pdev->name);
+
+ id = ghsic_ctrl_get_port_id(pdev->name);
+ if (id < 0 || id >= no_ctrl_ports) {
+ pr_err("%s: invalid port: %d\n", __func__, id);
+ return -EINVAL;
+ }
+
+ port = gctrl_ports[id].port;
+ set_bit(CH_READY, &port->bridge_sts);
+
+ /* if usb is online, start read */
+ spin_lock_irqsave(&port->port_lock, flags);
+ if (port->port_usb)
+ queue_work(port->wq, &port->connect_w);
+ spin_unlock_irqrestore(&port->port_lock, flags);
+
+ return 0;
+}
+
+static int ghsic_ctrl_remove(struct platform_device *pdev)
+{
+ struct gctrl_port *port;
+ struct gserial *gser = NULL;
+ struct grmnet *gr = NULL;
+ unsigned long flags;
+ int id;
+
+ pr_debug("%s: name:%s\n", __func__, pdev->name);
+
+ id = ghsic_ctrl_get_port_id(pdev->name);
+ if (id < 0 || id >= no_ctrl_ports) {
+ pr_err("%s: invalid port: %d\n", __func__, id);
+ return -EINVAL;
+ }
+
+ port = gctrl_ports[id].port;
+
+ spin_lock_irqsave(&port->port_lock, flags);
+ if (!port->port_usb) {
+ spin_unlock_irqrestore(&port->port_lock, flags);
+ goto not_ready;
+ }
+
+ if (port->gtype == USB_GADGET_SERIAL)
+ gser = port->port_usb;
+ else
+ gr = port->port_usb;
+
+ port->cbits_tohost = 0;
+ spin_unlock_irqrestore(&port->port_lock, flags);
+
+ if (gr && gr->disconnect)
+ gr->disconnect(gr);
+
+ if (gser && gser->disconnect)
+ gser->disconnect(gser);
+
+ ctrl_bridge_close(port->brdg.ch_id);
+
+ clear_bit(CH_OPENED, &port->bridge_sts);
+not_ready:
+ clear_bit(CH_READY, &port->bridge_sts);
+
+ return 0;
+}
+
+static void ghsic_ctrl_port_free(int portno)
+{
+ struct gctrl_port *port = gctrl_ports[portno].port;
+ struct platform_driver *pdrv = &gctrl_ports[portno].pdrv;
+
+ destroy_workqueue(port->wq);
+ kfree(port);
+
+ if (pdrv)
+ platform_driver_unregister(pdrv);
+}
+
+static int gctrl_port_alloc(int portno, enum gadget_type gtype)
+{
+ struct gctrl_port *port;
+ struct platform_driver *pdrv;
+
+ port = kzalloc(sizeof(struct gctrl_port), GFP_KERNEL);
+ if (!port)
+ return -ENOMEM;
+
+	port->wq = create_singlethread_workqueue(ctrl_bridge_names[portno]);
+	if (!port->wq) {
+		pr_err("%s: Unable to create workqueue:%s\n",
+				__func__, ctrl_bridge_names[portno]);
+		kfree(port);
+		return -ENOMEM;
+	}
+
+ port->port_num = portno;
+ port->gtype = gtype;
+
+ spin_lock_init(&port->port_lock);
+
+ INIT_WORK(&port->connect_w, ghsic_ctrl_connect_w);
+ INIT_WORK(&port->disconnect_w, gctrl_disconnect_w);
+
+	port->brdg.name = (char *)ctrl_bridge_names[portno];
+ port->brdg.ctx = port;
+ port->brdg.ops.send_pkt = ghsic_ctrl_receive;
+ if (port->gtype == USB_GADGET_SERIAL)
+ port->brdg.ops.send_cbits = ghsic_ctrl_status;
+ gctrl_ports[portno].port = port;
+
+ pdrv = &gctrl_ports[portno].pdrv;
+ pdrv->probe = ghsic_ctrl_probe;
+ pdrv->remove = ghsic_ctrl_remove;
+ pdrv->driver.name = ctrl_bridge_names[portno];
+ pdrv->driver.owner = THIS_MODULE;
+
+ platform_driver_register(pdrv);
+
+ pr_debug("%s: port:%p portno:%d\n", __func__, port, portno);
+
+ return 0;
+}
+
+int ghsic_ctrl_setup(unsigned int num_ports, enum gadget_type gtype)
+{
+ int first_port_id = no_ctrl_ports;
+ int total_num_ports = num_ports + no_ctrl_ports;
+ int i;
+ int ret = 0;
+
+ if (!num_ports || total_num_ports > NUM_PORTS) {
+ pr_err("%s: Invalid num of ports count:%d\n",
+ __func__, num_ports);
+ return -EINVAL;
+ }
+
+ pr_debug("%s: requested ports:%d\n", __func__, num_ports);
+
+ for (i = first_port_id; i < (first_port_id + num_ports); i++) {
+
+		/* probe can be called during port_alloc, so update no_ctrl_ports */
+ no_ctrl_ports++;
+ ret = gctrl_port_alloc(i, gtype);
+ if (ret) {
+ no_ctrl_ports--;
+ pr_err("%s: Unable to alloc port:%d\n", __func__, i);
+ goto free_ports;
+ }
+ }
+
+ return first_port_id;
+
+free_ports:
+ for (i = first_port_id; i < no_ctrl_ports; i++)
+ ghsic_ctrl_port_free(i);
+ no_ctrl_ports = first_port_id;
+ return ret;
+}
+
+#if defined(CONFIG_DEBUG_FS)
+#define DEBUG_BUF_SIZE 1024
+static ssize_t gctrl_read_stats(struct file *file, char __user *ubuf,
+ size_t count, loff_t *ppos)
+{
+ struct gctrl_port *port;
+ struct platform_driver *pdrv;
+ char *buf;
+ unsigned long flags;
+ int ret;
+ int i;
+ int temp = 0;
+
+	buf = kzalloc(DEBUG_BUF_SIZE, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
+ for (i = 0; i < no_ctrl_ports; i++) {
+ port = gctrl_ports[i].port;
+ if (!port)
+ continue;
+ pdrv = &gctrl_ports[i].pdrv;
+ spin_lock_irqsave(&port->port_lock, flags);
+
+ temp += scnprintf(buf + temp, DEBUG_BUF_SIZE - temp,
+ "\nName: %s\n"
+ "#PORT:%d port: %p\n"
+ "to_usbhost: %lu\n"
+ "to_modem: %lu\n"
+ "cpkt_drp_cnt: %lu\n"
+ "DTR: %s\n"
+ "ch_open: %d\n"
+ "ch_ready: %d\n",
+ pdrv->driver.name,
+ i, port,
+ port->to_host, port->to_modem,
+ port->drp_cpkt_cnt,
+ port->cbits_tomodem ? "HIGH" : "LOW",
+ test_bit(CH_OPENED, &port->bridge_sts),
+ test_bit(CH_READY, &port->bridge_sts));
+
+ spin_unlock_irqrestore(&port->port_lock, flags);
+ }
+
+ ret = simple_read_from_buffer(ubuf, count, ppos, buf, temp);
+
+ kfree(buf);
+
+ return ret;
+}
+
+static ssize_t gctrl_reset_stats(struct file *file,
+ const char __user *buf, size_t count, loff_t *ppos)
+{
+ struct gctrl_port *port;
+ int i;
+ unsigned long flags;
+
+ for (i = 0; i < no_ctrl_ports; i++) {
+ port = gctrl_ports[i].port;
+ if (!port)
+ continue;
+
+ spin_lock_irqsave(&port->port_lock, flags);
+ port->to_host = 0;
+ port->to_modem = 0;
+ port->drp_cpkt_cnt = 0;
+ spin_unlock_irqrestore(&port->port_lock, flags);
+ }
+ return count;
+}
+
+static const struct file_operations gctrl_stats_ops = {
+ .read = gctrl_read_stats,
+ .write = gctrl_reset_stats,
+};
+
+static struct dentry *gctrl_dent;
+static struct dentry *gctrl_dfile;
+static void gctrl_debugfs_init(void)
+{
+	gctrl_dent = debugfs_create_dir("ghsic_ctrl_xport", NULL);
+	if (IS_ERR(gctrl_dent))
+		return;
+
+	gctrl_dfile =
+		debugfs_create_file("status", 0444, gctrl_dent, NULL,
+			&gctrl_stats_ops);
+ if (!gctrl_dfile || IS_ERR(gctrl_dfile))
+ debugfs_remove(gctrl_dent);
+}
+
+static void gctrl_debugfs_exit(void)
+{
+ debugfs_remove(gctrl_dfile);
+ debugfs_remove(gctrl_dent);
+}
+
+#else
+static void gctrl_debugfs_init(void) { }
+static void gctrl_debugfs_exit(void) { }
+#endif
+
+static int __init gctrl_init(void)
+{
+ gctrl_debugfs_init();
+
+ return 0;
+}
+module_init(gctrl_init);
+
+static void __exit gctrl_exit(void)
+{
+ gctrl_debugfs_exit();
+}
+module_exit(gctrl_exit);
+MODULE_DESCRIPTION("hsic control xport driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/usb/gadget/u_ctrl_hsic.h b/drivers/usb/gadget/u_ctrl_hsic.h
new file mode 100644
index 0000000..b98b53c
--- /dev/null
+++ b/drivers/usb/gadget/u_ctrl_hsic.h
@@ -0,0 +1,11 @@
+
+#ifndef __U_CTRL_HSIC_H
+#define __U_CTRL_HSIC_H
+
+#include <linux/usb/composite.h>
+#include <linux/usb/cdc.h>
+#include <mach/usb_gadget_xport.h>
+
+int ghsic_ctrl_setup(unsigned int num_ports, enum gadget_type gtype);
+
+#endif /* __U_CTRL_HSIC_H */
diff --git a/drivers/usb/gadget/u_data_hsic.c b/drivers/usb/gadget/u_data_hsic.c
new file mode 100644
index 0000000..ec13488
--- /dev/null
+++ b/drivers/usb/gadget/u_data_hsic.c
@@ -0,0 +1,1193 @@
+/* Copyright (c) 2011-2012, Code Aurora Forum. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/fs.h>
+#include <linux/delay.h>
+#include <linux/kernel.h>
+#include <linux/utsname.h>
+#include <linux/platform_device.h>
+
+#include <linux/usb/ch9.h>
+#include <linux/usb/composite.h>
+#include <linux/usb/gadget.h>
+#include <mach/board_htc.h>
+#include <mach/usb_gadget_xport.h>
+
+#include <linux/interrupt.h>
+#include <linux/device.h>
+#include <linux/slab.h>
+#include <linux/termios.h>
+#include <linux/netdevice.h>
+#include <linux/debugfs.h>
+#include <linux/bitops.h>
+#include <mach/usb_bridge.h>
+#include "u_serial.h"
+#include "u_rmnet.h"
+
+static unsigned int no_data_ports;
+
+static const char *data_bridge_names[] = {
+ "serial_hsic_data",
+ "rmnet_hsic_data"
+};
+
+#define DATA_BRIDGE_NAME_MAX_LEN 20
+
+#define GHSIC_DATA_RMNET_RX_Q_SIZE 50
+#define GHSIC_DATA_RMNET_TX_Q_SIZE 300
+#define GHSIC_DATA_SERIAL_RX_Q_SIZE 10
+#define GHSIC_DATA_SERIAL_TX_Q_SIZE 20
+#define GHSIC_DATA_RX_REQ_SIZE 2048
+#define GHSIC_DATA_TX_INTR_THRESHOLD 20
+
+static unsigned int ghsic_data_rmnet_tx_q_size = GHSIC_DATA_RMNET_TX_Q_SIZE;
+module_param(ghsic_data_rmnet_tx_q_size, uint, S_IRUGO | S_IWUSR);
+
+static unsigned int ghsic_data_rmnet_rx_q_size = GHSIC_DATA_RMNET_RX_Q_SIZE;
+module_param(ghsic_data_rmnet_rx_q_size, uint, S_IRUGO | S_IWUSR);
+
+static unsigned int ghsic_data_serial_tx_q_size = GHSIC_DATA_SERIAL_TX_Q_SIZE;
+module_param(ghsic_data_serial_tx_q_size, uint, S_IRUGO | S_IWUSR);
+
+static unsigned int ghsic_data_serial_rx_q_size = GHSIC_DATA_SERIAL_RX_Q_SIZE;
+module_param(ghsic_data_serial_rx_q_size, uint, S_IRUGO | S_IWUSR);
+
+static unsigned int ghsic_data_rx_req_size = GHSIC_DATA_RX_REQ_SIZE;
+module_param(ghsic_data_rx_req_size, uint, S_IRUGO | S_IWUSR);
+
+unsigned int ghsic_data_tx_intr_thld = GHSIC_DATA_TX_INTR_THRESHOLD;
+module_param(ghsic_data_tx_intr_thld, uint, S_IRUGO | S_IWUSR);
+
+/*flow ctrl*/
+#define GHSIC_DATA_FLOW_CTRL_EN_THRESHOLD 500
+#define GHSIC_DATA_FLOW_CTRL_DISABLE 300
+#define GHSIC_DATA_FLOW_CTRL_SUPPORT 1
+#define GHSIC_DATA_PENDLIMIT_WITH_BRIDGE 500
+
+static unsigned int ghsic_data_fctrl_support = GHSIC_DATA_FLOW_CTRL_SUPPORT;
+module_param(ghsic_data_fctrl_support, uint, S_IRUGO | S_IWUSR);
+
+static unsigned int ghsic_data_fctrl_en_thld =
+ GHSIC_DATA_FLOW_CTRL_EN_THRESHOLD;
+module_param(ghsic_data_fctrl_en_thld, uint, S_IRUGO | S_IWUSR);
+
+static unsigned int ghsic_data_fctrl_dis_thld = GHSIC_DATA_FLOW_CTRL_DISABLE;
+module_param(ghsic_data_fctrl_dis_thld, uint, S_IRUGO | S_IWUSR);
+
+static unsigned int ghsic_data_pend_limit_with_bridge =
+ GHSIC_DATA_PENDLIMIT_WITH_BRIDGE;
+module_param(ghsic_data_pend_limit_with_bridge, uint, S_IRUGO | S_IWUSR);
+
+#define CH_OPENED 0
+#define CH_READY 1
+
+struct gdata_port {
+ /* port */
+ unsigned port_num;
+
+ /* gadget */
+ atomic_t connected;
+ struct usb_ep *in;
+ struct usb_ep *out;
+
+ enum gadget_type gtype;
+
+ /* data transfer queues */
+ unsigned int tx_q_size;
+ struct list_head tx_idle;
+ struct sk_buff_head tx_skb_q;
+ spinlock_t tx_lock;
+
+ unsigned int rx_q_size;
+ struct list_head rx_idle;
+ struct sk_buff_head rx_skb_q;
+ spinlock_t rx_lock;
+
+ /* work */
+ struct workqueue_struct *wq;
+ struct work_struct connect_w;
+ struct work_struct disconnect_w;
+ struct work_struct write_tomdm_w;
+ struct work_struct write_tohost_w;
+
+ struct bridge brdg;
+
+ /*bridge status*/
+ unsigned long bridge_sts;
+
+ unsigned int n_tx_req_queued;
+
+ /*counters*/
+ unsigned long to_modem;
+ unsigned long to_host;
+ unsigned int rx_throttled_cnt;
+ unsigned int rx_unthrottled_cnt;
+ unsigned int tx_throttled_cnt;
+ unsigned int tx_unthrottled_cnt;
+ unsigned int tomodem_drp_cnt;
+ unsigned int unthrottled_pnd_skbs;
+};
+
+static struct {
+ struct gdata_port *port;
+ struct platform_driver pdrv;
+} gdata_ports[NUM_PORTS];
+
+static unsigned int get_timestamp(void);
+static void dbg_timestamp(char *, struct sk_buff *);
+static void ghsic_data_start_rx(struct gdata_port *port);
+
+static void ghsic_data_free_requests(struct usb_ep *ep, struct list_head *head)
+{
+ struct usb_request *req;
+
+ while (!list_empty(head)) {
+ req = list_entry(head->next, struct usb_request, list);
+ list_del(&req->list);
+ usb_ep_free_request(ep, req);
+ }
+}
+
+static int ghsic_data_alloc_requests(struct usb_ep *ep, struct list_head *head,
+ int num,
+ void (*cb)(struct usb_ep *ep, struct usb_request *),
+ spinlock_t *lock)
+{
+ int i;
+ struct usb_request *req;
+ unsigned long flags;
+
+	pr_debug("%s: ep:%s head:%p num:%d cb:%p\n", __func__,
+			ep->name, head, num, cb);
+
+ for (i = 0; i < num; i++) {
+ req = usb_ep_alloc_request(ep, GFP_KERNEL);
+ if (!req) {
+ pr_debug("%s: req allocated:%d\n", __func__, i);
+ return list_empty(head) ? -ENOMEM : 0;
+ }
+ req->complete = cb;
+ spin_lock_irqsave(lock, flags);
+ list_add(&req->list, head);
+ spin_unlock_irqrestore(lock, flags);
+ }
+
+ return 0;
+}
+
+static void ghsic_data_unthrottle_tx(void *ctx)
+{
+ struct gdata_port *port = ctx;
+ unsigned long flags;
+
+ if (!port || !atomic_read(&port->connected))
+ return;
+
+ spin_lock_irqsave(&port->rx_lock, flags);
+ port->tx_unthrottled_cnt++;
+ spin_unlock_irqrestore(&port->rx_lock, flags);
+
+ queue_work(port->wq, &port->write_tomdm_w);
+ pr_debug("%s: port num =%d unthrottled\n", __func__,
+ port->port_num);
+}
+
+static void ghsic_data_write_tohost(struct work_struct *w)
+{
+ unsigned long flags;
+ struct sk_buff *skb;
+ int ret;
+ struct usb_request *req;
+ struct usb_ep *ep;
+ struct gdata_port *port;
+ struct timestamp_info *info;
+
+ port = container_of(w, struct gdata_port, write_tohost_w);
+
+ if (!port)
+ return;
+
+ spin_lock_irqsave(&port->tx_lock, flags);
+ ep = port->in;
+ if (!ep) {
+ spin_unlock_irqrestore(&port->tx_lock, flags);
+ return;
+ }
+
+ while (!list_empty(&port->tx_idle)) {
+ skb = __skb_dequeue(&port->tx_skb_q);
+ if (!skb)
+ break;
+
+ req = list_first_entry(&port->tx_idle, struct usb_request,
+ list);
+ req->context = skb;
+ req->buf = skb->data;
+ req->length = skb->len;
+
+ port->n_tx_req_queued++;
+ if (port->n_tx_req_queued == ghsic_data_tx_intr_thld) {
+ req->no_interrupt = 0;
+ port->n_tx_req_queued = 0;
+ } else {
+ req->no_interrupt = 1;
+ }
+ /* Send ZLP in case packet length is multiple of maxpacketsize */
+ req->zero = 1;
+
+ list_del(&req->list);
+
+ info = (struct timestamp_info *)skb->cb;
+ info->tx_queued = get_timestamp();
+ spin_unlock_irqrestore(&port->tx_lock, flags);
+ ret = usb_ep_queue(ep, req, GFP_KERNEL);
+ spin_lock_irqsave(&port->tx_lock, flags);
+ if (ret) {
+ pr_err("%s: usb epIn failed\n", __func__);
+ list_add(&req->list, &port->tx_idle);
+ dev_kfree_skb_any(skb);
+ break;
+ }
+ port->to_host++;
+ if (ghsic_data_fctrl_support &&
+ port->tx_skb_q.qlen <= ghsic_data_fctrl_dis_thld &&
+ test_and_clear_bit(RX_THROTTLED, &port->brdg.flags)) {
+ port->rx_unthrottled_cnt++;
+ port->unthrottled_pnd_skbs = port->tx_skb_q.qlen;
+ pr_debug_ratelimited("%s: disable flow ctrl:"
+ " tx skbq len: %u\n",
+ __func__, port->tx_skb_q.qlen);
+ data_bridge_unthrottle_rx(port->brdg.ch_id);
+ }
+ }
+ spin_unlock_irqrestore(&port->tx_lock, flags);
+}
+
+static int ghsic_data_receive(void *p, void *data, size_t len)
+{
+ struct gdata_port *port = p;
+ unsigned long flags;
+ struct sk_buff *skb = data;
+
+ if (!port || !atomic_read(&port->connected)) {
+ dev_kfree_skb_any(skb);
+ return -ENOTCONN;
+ }
+
+ pr_debug("%s: p:%p#%d skb_len:%d\n", __func__,
+ port, port->port_num, skb->len);
+
+ spin_lock_irqsave(&port->tx_lock, flags);
+ __skb_queue_tail(&port->tx_skb_q, skb);
+
+ if (ghsic_data_fctrl_support &&
+ port->tx_skb_q.qlen >= ghsic_data_fctrl_en_thld) {
+ set_bit(RX_THROTTLED, &port->brdg.flags);
+ port->rx_throttled_cnt++;
+ pr_debug_ratelimited("%s: flow ctrl enabled: tx skbq len: %u\n",
+ __func__, port->tx_skb_q.qlen);
+ spin_unlock_irqrestore(&port->tx_lock, flags);
+ queue_work(port->wq, &port->write_tohost_w);
+ return -EBUSY;
+ }
+
+ spin_unlock_irqrestore(&port->tx_lock, flags);
+
+ queue_work(port->wq, &port->write_tohost_w);
+
+ return 0;
+}
+
+static void ghsic_data_write_tomdm(struct work_struct *w)
+{
+ struct gdata_port *port;
+ struct sk_buff *skb;
+ struct timestamp_info *info;
+ unsigned long flags;
+ int ret;
+
+ port = container_of(w, struct gdata_port, write_tomdm_w);
+
+ if (!port || !atomic_read(&port->connected))
+ return;
+
+ spin_lock_irqsave(&port->rx_lock, flags);
+ if (test_bit(TX_THROTTLED, &port->brdg.flags)) {
+ spin_unlock_irqrestore(&port->rx_lock, flags);
+ goto start_rx;
+ }
+
+ while ((skb = __skb_dequeue(&port->rx_skb_q))) {
+ pr_debug("%s: port:%p tom:%lu pno:%d\n", __func__,
+ port, port->to_modem, port->port_num);
+
+ info = (struct timestamp_info *)skb->cb;
+ info->rx_done_sent = get_timestamp();
+ spin_unlock_irqrestore(&port->rx_lock, flags);
+ ret = data_bridge_write(port->brdg.ch_id, skb);
+ spin_lock_irqsave(&port->rx_lock, flags);
+ if (ret < 0) {
+ if (ret == -EBUSY) {
+ /*flow control*/
+ port->tx_throttled_cnt++;
+ break;
+ }
+ pr_err_ratelimited("%s: write error:%d\n",
+ __func__, ret);
+ port->tomodem_drp_cnt++;
+ dev_kfree_skb_any(skb);
+ break;
+ }
+ port->to_modem++;
+ }
+ spin_unlock_irqrestore(&port->rx_lock, flags);
+start_rx:
+ ghsic_data_start_rx(port);
+}
+
+static void ghsic_data_epin_complete(struct usb_ep *ep, struct usb_request *req)
+{
+ struct gdata_port *port = ep->driver_data;
+ struct sk_buff *skb = req->context;
+ int status = req->status;
+
+ switch (status) {
+ case 0:
+ /* successful completion */
+ dbg_timestamp("DL", skb);
+ break;
+ case -ECONNRESET:
+ case -ESHUTDOWN:
+ /* connection gone */
+ dev_kfree_skb_any(skb);
+ req->buf = 0;
+ usb_ep_free_request(ep, req);
+ return;
+ default:
+ pr_err("%s: data tx ep error %d\n", __func__, status);
+ break;
+ }
+
+ dev_kfree_skb_any(skb);
+
+ spin_lock(&port->tx_lock);
+ list_add_tail(&req->list, &port->tx_idle);
+ spin_unlock(&port->tx_lock);
+
+ queue_work(port->wq, &port->write_tohost_w);
+}
+
+static void
+ghsic_data_epout_complete(struct usb_ep *ep, struct usb_request *req)
+{
+ struct gdata_port *port = ep->driver_data;
+ struct sk_buff *skb = req->context;
+ struct timestamp_info *info = (struct timestamp_info *)skb->cb;
+ int status = req->status;
+ int queue = 0;
+
+ switch (status) {
+ case 0:
+ skb_put(skb, req->actual);
+ queue = 1;
+ break;
+ case -ECONNRESET:
+ case -ESHUTDOWN:
+ /* cable disconnection */
+ dev_kfree_skb_any(skb);
+ req->buf = 0;
+ usb_ep_free_request(ep, req);
+ return;
+ default:
+ pr_err_ratelimited("%s: %s response error %d, %d/%d\n",
+ __func__, ep->name, status,
+ req->actual, req->length);
+ dev_kfree_skb_any(skb);
+ break;
+ }
+
+ spin_lock(&port->rx_lock);
+ if (queue) {
+ info->rx_done = get_timestamp();
+ __skb_queue_tail(&port->rx_skb_q, skb);
+ list_add_tail(&req->list, &port->rx_idle);
+ queue_work(port->wq, &port->write_tomdm_w);
+ }
+ spin_unlock(&port->rx_lock);
+}
+
+static void ghsic_data_start_rx(struct gdata_port *port)
+{
+ struct usb_request *req;
+ struct usb_ep *ep;
+ unsigned long flags;
+ int ret;
+ struct sk_buff *skb;
+ struct timestamp_info *info;
+ unsigned int created;
+
+ pr_debug("%s: port:%p\n", __func__, port);
+ if (!port)
+ return;
+
+ spin_lock_irqsave(&port->rx_lock, flags);
+ ep = port->out;
+ if (!ep) {
+ spin_unlock_irqrestore(&port->rx_lock, flags);
+ return;
+ }
+
+ while (atomic_read(&port->connected) && !list_empty(&port->rx_idle)) {
+ if (port->rx_skb_q.qlen > ghsic_data_pend_limit_with_bridge)
+ break;
+
+ req = list_first_entry(&port->rx_idle,
+ struct usb_request, list);
+ list_del(&req->list);
+ spin_unlock_irqrestore(&port->rx_lock, flags);
+
+ created = get_timestamp();
+ skb = alloc_skb(ghsic_data_rx_req_size, GFP_KERNEL);
+ if (!skb) {
+ spin_lock_irqsave(&port->rx_lock, flags);
+ list_add(&req->list, &port->rx_idle);
+ break;
+ }
+ info = (struct timestamp_info *)skb->cb;
+ info->created = created;
+ req->buf = skb->data;
+ req->length = ghsic_data_rx_req_size;
+ req->context = skb;
+
+ info->rx_queued = get_timestamp();
+ ret = usb_ep_queue(ep, req, GFP_KERNEL);
+ spin_lock_irqsave(&port->rx_lock, flags);
+ if (ret) {
+ dev_kfree_skb_any(skb);
+
+ pr_err_ratelimited("%s: rx queue failed\n", __func__);
+
+ if (atomic_read(&port->connected))
+ list_add(&req->list, &port->rx_idle);
+ else
+ usb_ep_free_request(ep, req);
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&port->rx_lock, flags);
+}
+
+static void ghsic_data_start_io(struct gdata_port *port)
+{
+ unsigned long flags;
+ struct usb_ep *ep_out, *ep_in;
+ int ret;
+
+ pr_debug("%s: port:%p\n", __func__, port);
+
+ if (!port)
+ return;
+
+ spin_lock_irqsave(&port->rx_lock, flags);
+ ep_out = port->out;
+ spin_unlock_irqrestore(&port->rx_lock, flags);
+ if (!ep_out)
+ return;
+
+ ret = ghsic_data_alloc_requests(ep_out, &port->rx_idle,
+ port->rx_q_size, ghsic_data_epout_complete, &port->rx_lock);
+ if (ret) {
+ pr_err("%s: rx req allocation failed\n", __func__);
+ return;
+ }
+
+ spin_lock_irqsave(&port->tx_lock, flags);
+ ep_in = port->in;
+ spin_unlock_irqrestore(&port->tx_lock, flags);
+ if (!ep_in) {
+ spin_lock_irqsave(&port->rx_lock, flags);
+ ghsic_data_free_requests(ep_out, &port->rx_idle);
+ spin_unlock_irqrestore(&port->rx_lock, flags);
+ return;
+ }
+
+ ret = ghsic_data_alloc_requests(ep_in, &port->tx_idle,
+ port->tx_q_size, ghsic_data_epin_complete, &port->tx_lock);
+ if (ret) {
+ pr_err("%s: tx req allocation failed\n", __func__);
+ spin_lock_irqsave(&port->rx_lock, flags);
+ ghsic_data_free_requests(ep_out, &port->rx_idle);
+ spin_unlock_irqrestore(&port->rx_lock, flags);
+ return;
+ }
+
+ /* queue out requests */
+ ghsic_data_start_rx(port);
+}
+
+static void ghsic_data_connect_w(struct work_struct *w)
+{
+ struct gdata_port *port =
+ container_of(w, struct gdata_port, connect_w);
+ int ret;
+ pr_debug("%s: connected=%d, CH_READY=%d, port=%p\n",
+ __func__, atomic_read(&port->connected),
+ test_bit(CH_READY, &port->bridge_sts), port);
+ if (!port || !atomic_read(&port->connected) ||
+ !test_bit(CH_READY, &port->bridge_sts)) {
+ pr_debug("%s: return\n", __func__);
+ return;
+ }
+
+ pr_debug("%s: port:%p\n", __func__, port);
+
+ ret = data_bridge_open(&port->brdg);
+ if (ret) {
+ pr_err("%s: unable to open bridge ch:%d err:%d\n",
+ __func__, port->brdg.ch_id, ret);
+ return;
+ }
+
+ set_bit(CH_OPENED, &port->bridge_sts);
+
+ ghsic_data_start_io(port);
+}
+
+static void ghsic_data_disconnect_w(struct work_struct *w)
+{
+ struct gdata_port *port =
+ container_of(w, struct gdata_port, disconnect_w);
+
+ if (!test_bit(CH_OPENED, &port->bridge_sts))
+ return;
+
+ data_bridge_close(port->brdg.ch_id);
+ clear_bit(CH_OPENED, &port->bridge_sts);
+}
+
+static void ghsic_data_free_buffers(struct gdata_port *port)
+{
+ struct sk_buff *skb;
+ unsigned long flags;
+
+ if (!port)
+ return;
+
+ spin_lock_irqsave(&port->tx_lock, flags);
+ if (!port->in) {
+ spin_unlock_irqrestore(&port->tx_lock, flags);
+ return;
+ }
+
+ ghsic_data_free_requests(port->in, &port->tx_idle);
+
+ while ((skb = __skb_dequeue(&port->tx_skb_q)))
+ dev_kfree_skb_any(skb);
+ spin_unlock_irqrestore(&port->tx_lock, flags);
+
+ spin_lock_irqsave(&port->rx_lock, flags);
+ if (!port->out) {
+ spin_unlock_irqrestore(&port->rx_lock, flags);
+ return;
+ }
+
+ ghsic_data_free_requests(port->out, &port->rx_idle);
+
+ while ((skb = __skb_dequeue(&port->rx_skb_q)))
+ dev_kfree_skb_any(skb);
+ spin_unlock_irqrestore(&port->rx_lock, flags);
+}
+
+static int ghsic_data_get_port_id(const char *pdev_name)
+{
+ struct gdata_port *port;
+ int i;
+
+ for (i = 0; i < no_data_ports; i++) {
+ port = gdata_ports[i].port;
+ if (!strncmp(port->brdg.name, pdev_name, BRIDGE_NAME_MAX_LEN))
+ return i;
+ }
+
+ return -EINVAL;
+}
+
+static int ghsic_data_probe(struct platform_device *pdev)
+{
+ struct gdata_port *port;
+ int id;
+
+ pr_debug("%s: name:%s no_data_ports= %d\n", __func__, pdev->name,
+ no_data_ports);
+
+ id = ghsic_data_get_port_id(pdev->name);
+ if (id < 0 || id >= no_data_ports) {
+ pr_err("%s: invalid port: %d\n", __func__, id);
+ return -EINVAL;
+ }
+
+ port = gdata_ports[id].port;
+ set_bit(CH_READY, &port->bridge_sts);
+
+ /* if usb is online, try opening bridge */
+ if (atomic_read(&port->connected))
+ queue_work(port->wq, &port->connect_w);
+
+ return 0;
+}
+
+/* mdm disconnect */
+static int ghsic_data_remove(struct platform_device *pdev)
+{
+ struct gdata_port *port;
+ struct usb_ep *ep_in;
+ struct usb_ep *ep_out;
+ int id;
+
+ pr_debug("%s: name:%s\n", __func__, pdev->name);
+
+ id = ghsic_data_get_port_id(pdev->name);
+ if (id < 0 || id >= no_data_ports) {
+ pr_err("%s: invalid port: %d\n", __func__, id);
+ return -EINVAL;
+ }
+
+ port = gdata_ports[id].port;
+
+ ep_in = port->in;
+ if (ep_in)
+ usb_ep_fifo_flush(ep_in);
+
+ ep_out = port->out;
+ if (ep_out)
+ usb_ep_fifo_flush(ep_out);
+
+ ghsic_data_free_buffers(port);
+
+ data_bridge_close(port->brdg.ch_id);
+
+ clear_bit(CH_READY, &port->bridge_sts);
+ clear_bit(CH_OPENED, &port->bridge_sts);
+
+ return 0;
+}
+
+static void ghsic_data_port_free(int portno)
+{
+ struct gdata_port *port = gdata_ports[portno].port;
+ struct platform_driver *pdrv = &gdata_ports[portno].pdrv;
+
+ destroy_workqueue(port->wq);
+ kfree(port);
+
+ /* pdrv points into gdata_ports[], so it is never NULL here */
+ platform_driver_unregister(pdrv);
+}
+
+static int ghsic_data_port_alloc(unsigned port_num, enum gadget_type gtype)
+{
+ struct gdata_port *port;
+ struct platform_driver *pdrv;
+ port = kzalloc(sizeof(struct gdata_port), GFP_KERNEL);
+ if (!port)
+ return -ENOMEM;
+
+ port->wq = create_singlethread_workqueue(data_bridge_names[port_num]);
+ if (!port->wq) {
+ pr_err("%s: Unable to create workqueue:%s\n",
+ __func__, data_bridge_names[port_num]);
+ kfree(port);
+ return -ENOMEM;
+ }
+ port->port_num = port_num;
+
+ /* port initialization */
+ spin_lock_init(&port->rx_lock);
+ spin_lock_init(&port->tx_lock);
+
+ INIT_WORK(&port->connect_w, ghsic_data_connect_w);
+ INIT_WORK(&port->disconnect_w, ghsic_data_disconnect_w);
+ INIT_WORK(&port->write_tohost_w, ghsic_data_write_tohost);
+ INIT_WORK(&port->write_tomdm_w, ghsic_data_write_tomdm);
+
+ INIT_LIST_HEAD(&port->tx_idle);
+ INIT_LIST_HEAD(&port->rx_idle);
+
+ skb_queue_head_init(&port->tx_skb_q);
+ skb_queue_head_init(&port->rx_skb_q);
+
+ port->gtype = gtype;
+ port->brdg.name = (char *)data_bridge_names[port_num];
+ port->brdg.ctx = port;
+ port->brdg.ops.send_pkt = ghsic_data_receive;
+ port->brdg.ops.unthrottle_tx = ghsic_data_unthrottle_tx;
+ gdata_ports[port_num].port = port;
+
+ pdrv = &gdata_ports[port_num].pdrv;
+ pdrv->probe = ghsic_data_probe;
+ pdrv->remove = ghsic_data_remove;
+ pdrv->driver.name = data_bridge_names[port_num];
+ pdrv->driver.owner = THIS_MODULE;
+
+ platform_driver_register(pdrv);
+
+ pr_debug("%s: port:%p portno:%d\n", __func__, port, port_num);
+
+ return 0;
+}
+
+void ghsic_data_disconnect(void *gptr, int port_num)
+{
+ struct gdata_port *port;
+ unsigned long flags;
+
+ pr_debug("%s: port#%d\n", __func__, port_num);
+
+ if (port_num >= no_data_ports) {
+ pr_err("%s: invalid portno#%d\n", __func__, port_num);
+ return;
+ }
+
+ port = gdata_ports[port_num].port;
+
+ if (!gptr || !port) {
+ pr_err("%s: port is null\n", __func__);
+ return;
+ }
+
+ ghsic_data_free_buffers(port);
+
+ /* disable endpoints */
+ if (port->in) {
+ usb_ep_disable(port->in);
+ port->in->driver_data = NULL;
+ }
+
+ if (port->out) {
+ usb_ep_disable(port->out);
+ port->out->driver_data = NULL;
+ }
+
+ atomic_set(&port->connected, 0);
+
+ spin_lock_irqsave(&port->tx_lock, flags);
+ port->in = NULL;
+ port->n_tx_req_queued = 0;
+ clear_bit(RX_THROTTLED, &port->brdg.flags);
+ spin_unlock_irqrestore(&port->tx_lock, flags);
+
+ spin_lock_irqsave(&port->rx_lock, flags);
+ port->out = NULL;
+ clear_bit(TX_THROTTLED, &port->brdg.flags);
+ spin_unlock_irqrestore(&port->rx_lock, flags);
+
+ queue_work(port->wq, &port->disconnect_w);
+}
+
+int ghsic_data_connect(void *gptr, int port_num)
+{
+ struct gdata_port *port;
+ struct gserial *gser;
+ struct grmnet *gr;
+ unsigned long flags;
+ int ret = 0;
+
+ pr_debug("%s: port#%d\n", __func__, port_num);
+
+ if (port_num >= no_data_ports) {
+ pr_err("%s: invalid portno#%d\n", __func__, port_num);
+ return -ENODEV;
+ }
+
+ port = gdata_ports[port_num].port;
+
+ if (!gptr || !port) {
+ pr_err("%s: port is null\n", __func__);
+ return -ENODEV;
+ }
+
+ if (port->gtype == USB_GADGET_SERIAL) {
+ gser = gptr;
+
+ spin_lock_irqsave(&port->tx_lock, flags);
+ port->in = gser->in;
+ spin_unlock_irqrestore(&port->tx_lock, flags);
+
+ spin_lock_irqsave(&port->rx_lock, flags);
+ port->out = gser->out;
+ spin_unlock_irqrestore(&port->rx_lock, flags);
+
+ port->tx_q_size = ghsic_data_serial_tx_q_size;
+ port->rx_q_size = ghsic_data_serial_rx_q_size;
+ gser->in->driver_data = port;
+ gser->out->driver_data = port;
+ } else {
+ gr = gptr;
+
+ spin_lock_irqsave(&port->tx_lock, flags);
+ port->in = gr->in;
+ spin_unlock_irqrestore(&port->tx_lock, flags);
+
+ spin_lock_irqsave(&port->rx_lock, flags);
+ port->out = gr->out;
+ spin_unlock_irqrestore(&port->rx_lock, flags);
+
+ port->tx_q_size = ghsic_data_rmnet_tx_q_size;
+ port->rx_q_size = ghsic_data_rmnet_rx_q_size;
+ gr->in->driver_data = port;
+ gr->out->driver_data = port;
+ }
+
+ ret = usb_ep_enable(port->in);
+ if (ret) {
+ pr_err("%s: usb_ep_enable failed eptype:IN ep:%p",
+ __func__, port->in);
+ goto fail;
+ }
+
+ ret = usb_ep_enable(port->out);
+ if (ret) {
+ pr_err("%s: usb_ep_enable failed eptype:OUT ep:%p",
+ __func__, port->out);
+ usb_ep_disable(port->in);
+ goto fail;
+ }
+
+ atomic_set(&port->connected, 1);
+
+ spin_lock_irqsave(&port->tx_lock, flags);
+ port->to_host = 0;
+ port->rx_throttled_cnt = 0;
+ port->rx_unthrottled_cnt = 0;
+ port->unthrottled_pnd_skbs = 0;
+ spin_unlock_irqrestore(&port->tx_lock, flags);
+
+ spin_lock_irqsave(&port->rx_lock, flags);
+ port->to_modem = 0;
+ port->tomodem_drp_cnt = 0;
+ port->tx_throttled_cnt = 0;
+ port->tx_unthrottled_cnt = 0;
+ spin_unlock_irqrestore(&port->rx_lock, flags);
+
+ queue_work(port->wq, &port->connect_w);
+fail:
+ return ret;
+}
+
+#if defined(CONFIG_DEBUG_FS)
+#define DEBUG_BUF_SIZE 1024
+
+static unsigned int record_timestamp;
+module_param(record_timestamp, uint, S_IRUGO | S_IWUSR);
+
+static struct timestamp_buf dbg_data = {
+ .idx = 0,
+ .lck = __RW_LOCK_UNLOCKED(lck)
+};
+
+/* get_timestamp - returns time of day in us */
+static unsigned int get_timestamp(void)
+{
+ struct timeval tval;
+ unsigned int stamp;
+
+ if (!record_timestamp)
+ return 0;
+
+ do_gettimeofday(&tval);
+ /* 2^32 = 4294967296. Limit to 4096s. */
+ stamp = tval.tv_sec & 0xFFF;
+ stamp = stamp * 1000000 + tval.tv_usec;
+ return stamp;
+}
+
+static void dbg_inc(unsigned *idx)
+{
+ *idx = (*idx + 1) & (DBG_DATA_MAX-1);
+}
+
+/**
+ * dbg_timestamp - stores the timestamp values of an skb's life cycle
+ * in the debug buffer
+ * @event: "DL": downlink data
+ * @skb: skb whose timestamp values are logged
+ */
+static void dbg_timestamp(char *event, struct sk_buff *skb)
+{
+ unsigned long flags;
+ struct timestamp_info *info = (struct timestamp_info *)skb->cb;
+
+ if (!record_timestamp)
+ return;
+
+ write_lock_irqsave(&dbg_data.lck, flags);
+
+ scnprintf(dbg_data.buf[dbg_data.idx], DBG_DATA_MSG,
+ "%p %u[%s] %u %u %u %u %u %u\n",
+ skb, skb->len, event, info->created, info->rx_queued,
+ info->rx_done, info->rx_done_sent, info->tx_queued,
+ get_timestamp());
+
+ dbg_inc(&dbg_data.idx);
+
+ write_unlock_irqrestore(&dbg_data.lck, flags);
+}
+
+/* show_timestamp: displays the timestamp buffer */
+static ssize_t show_timestamp(struct file *file, char __user *ubuf,
+ size_t count, loff_t *ppos)
+{
+ unsigned long flags;
+ unsigned i;
+ unsigned j = 0;
+ char *buf;
+ int ret = 0;
+
+ if (!record_timestamp)
+ return 0;
+
+ buf = kzalloc(4 * DEBUG_BUF_SIZE, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
+ read_lock_irqsave(&dbg_data.lck, flags);
+
+ i = dbg_data.idx;
+ for (dbg_inc(&i); i != dbg_data.idx; dbg_inc(&i)) {
+ if (!strnlen(dbg_data.buf[i], DBG_DATA_MSG))
+ continue;
+ j += scnprintf(buf + j, (4 * DEBUG_BUF_SIZE) - j,
+ "%s\n", dbg_data.buf[i]);
+ }
+
+ read_unlock_irqrestore(&dbg_data.lck, flags);
+
+ ret = simple_read_from_buffer(ubuf, count, ppos, buf, j);
+
+ kfree(buf);
+
+ return ret;
+}
+
+const struct file_operations gdata_timestamp_ops = {
+ .read = show_timestamp,
+};
+
+static ssize_t ghsic_data_read_stats(struct file *file,
+ char __user *ubuf, size_t count, loff_t *ppos)
+{
+ struct gdata_port *port;
+ struct platform_driver *pdrv;
+ char *buf;
+ unsigned long flags;
+ int ret;
+ int i;
+ int temp = 0;
+
+ buf = kzalloc(DEBUG_BUF_SIZE, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
+ for (i = 0; i < no_data_ports; i++) {
+ port = gdata_ports[i].port;
+ if (!port)
+ continue;
+ pdrv = &gdata_ports[i].pdrv;
+
+ spin_lock_irqsave(&port->rx_lock, flags);
+ temp += scnprintf(buf + temp, DEBUG_BUF_SIZE - temp,
+ "\nName: %s\n"
+ "#PORT:%d port#: %p\n"
+ "data_ch_open: %d\n"
+ "data_ch_ready: %d\n"
+ "\n******UL INFO*****\n\n"
+ "dpkts_to_modem: %lu\n"
+ "tomodem_drp_cnt: %u\n"
+ "rx_buf_len: %u\n"
+ "tx thld cnt %u\n"
+ "tx unthld cnt %u\n"
+ "TX_THROTTLED %d\n",
+ pdrv->driver.name,
+ i, port,
+ test_bit(CH_OPENED, &port->bridge_sts),
+ test_bit(CH_READY, &port->bridge_sts),
+ port->to_modem,
+ port->tomodem_drp_cnt,
+ port->rx_skb_q.qlen,
+ port->tx_throttled_cnt,
+ port->tx_unthrottled_cnt,
+ test_bit(TX_THROTTLED, &port->brdg.flags));
+ spin_unlock_irqrestore(&port->rx_lock, flags);
+
+ spin_lock_irqsave(&port->tx_lock, flags);
+ temp += scnprintf(buf + temp, DEBUG_BUF_SIZE - temp,
+ "\n******DL INFO******\n\n"
+ "dpkts_to_usbhost: %lu\n"
+ "tx_buf_len: %u\n"
+ "rx thld cnt %u\n"
+ "rx unthld cnt %u\n"
+ "uthld pnd skbs %u\n"
+ "RX_THROTTLED %d\n",
+ port->to_host,
+ port->tx_skb_q.qlen,
+ port->rx_throttled_cnt,
+ port->rx_unthrottled_cnt,
+ port->unthrottled_pnd_skbs,
+ test_bit(RX_THROTTLED, &port->brdg.flags));
+ spin_unlock_irqrestore(&port->tx_lock, flags);
+
+ }
+
+ ret = simple_read_from_buffer(ubuf, count, ppos, buf, temp);
+
+ kfree(buf);
+
+ return ret;
+}
+
+static ssize_t ghsic_data_reset_stats(struct file *file,
+ const char __user *buf, size_t count, loff_t *ppos)
+{
+ struct gdata_port *port;
+ int i;
+ unsigned long flags;
+
+ for (i = 0; i < no_data_ports; i++) {
+ port = gdata_ports[i].port;
+ if (!port)
+ continue;
+
+ spin_lock_irqsave(&port->rx_lock, flags);
+ port->to_modem = 0;
+ port->tomodem_drp_cnt = 0;
+ port->tx_throttled_cnt = 0;
+ port->tx_unthrottled_cnt = 0;
+ spin_unlock_irqrestore(&port->rx_lock, flags);
+
+ spin_lock_irqsave(&port->tx_lock, flags);
+ port->to_host = 0;
+ port->rx_throttled_cnt = 0;
+ port->rx_unthrottled_cnt = 0;
+ port->unthrottled_pnd_skbs = 0;
+ spin_unlock_irqrestore(&port->tx_lock, flags);
+ }
+ return count;
+}
+
+const struct file_operations ghsic_stats_ops = {
+ .read = ghsic_data_read_stats,
+ .write = ghsic_data_reset_stats,
+};
+
+static struct dentry *gdata_dent;
+static struct dentry *gdata_dfile_stats;
+static struct dentry *gdata_dfile_tstamp;
+
+static void ghsic_data_debugfs_init(void)
+{
+ gdata_dent = debugfs_create_dir("ghsic_data_xport", NULL);
+ if (IS_ERR(gdata_dent))
+ return;
+
+ gdata_dfile_stats = debugfs_create_file("status", 0444, gdata_dent,
+ NULL, &ghsic_stats_ops);
+ if (!gdata_dfile_stats || IS_ERR(gdata_dfile_stats)) {
+ debugfs_remove(gdata_dent);
+ return;
+ }
+
+ gdata_dfile_tstamp = debugfs_create_file("timestamp", 0644, gdata_dent,
+ NULL, &gdata_timestamp_ops);
+ if (!gdata_dfile_tstamp || IS_ERR(gdata_dfile_tstamp))
+ debugfs_remove(gdata_dent);
+}
+
+static void ghsic_data_debugfs_exit(void)
+{
+ debugfs_remove(gdata_dfile_stats);
+ debugfs_remove(gdata_dfile_tstamp);
+ debugfs_remove(gdata_dent);
+}
+
+#else
+static void ghsic_data_debugfs_init(void) { }
+static void ghsic_data_debugfs_exit(void) { }
+static void dbg_timestamp(char *event, struct sk_buff *skb)
+{
+ return;
+}
+static unsigned int get_timestamp(void)
+{
+ return 0;
+}
+
+#endif
+
+int ghsic_data_setup(unsigned num_ports, enum gadget_type gtype)
+{
+ int first_port_id = no_data_ports;
+ int total_num_ports = num_ports + no_data_ports;
+ int ret = 0;
+ int i;
+
+ if (!num_ports || total_num_ports > NUM_PORTS) {
+ pr_err("%s: Invalid num of ports count:%d\n",
+ __func__, num_ports);
+ return -EINVAL;
+ }
+ pr_debug("%s: count: %d\n", __func__, num_ports);
+
+ for (i = first_port_id; i < (num_ports + first_port_id); i++) {
+
+ /* probe can be called during port_alloc, so update no_data_ports first */
+ no_data_ports++;
+ ret = ghsic_data_port_alloc(i, gtype);
+ if (ret) {
+ no_data_ports--;
+ pr_err("%s: Unable to alloc port:%d\n", __func__, i);
+ goto free_ports;
+ }
+ }
+
+ /* return the starting index */
+ return first_port_id;
+
+free_ports:
+ for (i = first_port_id; i < no_data_ports; i++)
+ ghsic_data_port_free(i);
+ no_data_ports = first_port_id;
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(ghsic_data_setup);
+
+static int __init ghsic_data_init(void)
+{
+ ghsic_data_debugfs_init();
+
+ return 0;
+}
+module_init(ghsic_data_init);
+
+static void __exit ghsic_data_exit(void)
+{
+ ghsic_data_debugfs_exit();
+}
+module_exit(ghsic_data_exit);
+MODULE_DESCRIPTION("hsic data xport driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/usb/gadget/u_data_hsic.h b/drivers/usb/gadget/u_data_hsic.h
new file mode 100644
index 0000000..1d95aa1
--- /dev/null
+++ b/drivers/usb/gadget/u_data_hsic.h
@@ -0,0 +1,11 @@
+
+#ifndef __U_DATA_HSIC_H
+#define __U_DATA_HSIC_H
+
+#include <linux/usb/composite.h>
+#include <linux/usb/cdc.h>
+#include <mach/usb_gadget_xport.h>
+
+int ghsic_data_setup(unsigned num_ports, enum gadget_type gtype);
+
+#endif /* __U_DATA_HSIC_H */
diff --git a/drivers/usb/gadget/u_rmnet.h b/drivers/usb/gadget/u_rmnet.h
new file mode 100644
index 0000000..0f7c4fb
--- /dev/null
+++ b/drivers/usb/gadget/u_rmnet.h
@@ -0,0 +1,59 @@
+/* Copyright (c) 2011-2012, Code Aurora Forum. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __U_RMNET_H
+#define __U_RMNET_H
+
+#include <linux/usb/composite.h>
+#include <linux/usb/cdc.h>
+#include <linux/wait.h>
+#include <linux/workqueue.h>
+
+struct rmnet_ctrl_pkt {
+ void *buf;
+ int len;
+ struct list_head list;
+};
+
+struct grmnet {
+ struct usb_function func;
+
+ struct usb_ep *in;
+ struct usb_ep *out;
+
+ /* to the USB host (e.g. a laptop or Windows PC); to be
+ * filled in by the USB driver of the rmnet function
+ */
+ int (*send_cpkt_response)(void *g, void *buf, size_t len);
+
+ /* to the modem; to be filled in by the driver implementing
+ * the control function
+ */
+ int (*send_encap_cmd)(u8 port_num, void *buf, size_t len);
+
+ void (*notify_modem)(void *g, u8 port_num, int cbits);
+
+ void (*disconnect)(struct grmnet *g);
+ void (*connect)(struct grmnet *g);
+};
+
+int gbam_setup(unsigned int no_bam_port, unsigned int no_bam2bam_port);
+int gbam_connect(struct grmnet *gr, u8 port_num,
+ enum transport_type trans, u8 connection_idx);
+void gbam_disconnect(struct grmnet *gr, u8 port_num, enum transport_type trans);
+void gbam_suspend(struct grmnet *gr, u8 port_num, enum transport_type trans);
+void gbam_resume(struct grmnet *gr, u8 port_num, enum transport_type trans);
+int gsmd_ctrl_connect(struct grmnet *gr, int port_num);
+void gsmd_ctrl_disconnect(struct grmnet *gr, u8 port_num);
+int gsmd_ctrl_setup(unsigned int count);
+
+#endif /* __U_RMNET_H */
diff --git a/drivers/usb/gadget/u_serial.c b/drivers/usb/gadget/u_serial.c
index 7206808..237c34a 100644
--- a/drivers/usb/gadget/u_serial.c
+++ b/drivers/usb/gadget/u_serial.c
@@ -1301,7 +1301,8 @@
gs_tty_driver->type = TTY_DRIVER_TYPE_SERIAL;
gs_tty_driver->subtype = SERIAL_TYPE_NORMAL;
- gs_tty_driver->flags = TTY_DRIVER_REAL_RAW | TTY_DRIVER_DYNAMIC_DEV;
+ gs_tty_driver->flags = TTY_DRIVER_REAL_RAW | TTY_DRIVER_DYNAMIC_DEV
+ | TTY_DRIVER_RESET_TERMIOS;
gs_tty_driver->init_termios = tty_std_termios;
/* 9600-8-N-1 ... matches defaults expected by "usbser.sys" on
@@ -1313,6 +1314,10 @@
gs_tty_driver->init_termios.c_ispeed = 9600;
gs_tty_driver->init_termios.c_ospeed = 9600;
+ gs_tty_driver->init_termios.c_lflag = 0;
+ gs_tty_driver->init_termios.c_iflag = 0;
+ gs_tty_driver->init_termios.c_oflag = 0;
+
tty_set_operations(gs_tty_driver, &gs_tty_ops);
for (i = 0; i < MAX_U_SERIAL_PORTS; i++)
mutex_init(&ports[i].lock);
diff --git a/drivers/usb/gadget/u_serial.h b/drivers/usb/gadget/u_serial.h
index c20210c..b68ac98 100644
--- a/drivers/usb/gadget/u_serial.h
+++ b/drivers/usb/gadget/u_serial.h
@@ -15,7 +15,7 @@
#include <linux/usb/composite.h>
#include <linux/usb/cdc.h>
-#define MAX_U_SERIAL_PORTS 4
+#define MAX_U_SERIAL_PORTS 8
struct f_serial_opts {
struct usb_function_instance func_inst;
@@ -45,11 +45,22 @@
/* REVISIT avoid this CDC-ACM support harder ... */
struct usb_cdc_line_coding port_line_coding; /* 9600-8-N-1 etc */
+ u16 serial_state;
+
+ /* control signal callbacks */
+ unsigned int (*get_dtr)(struct gserial *p);
+ unsigned int (*get_rts)(struct gserial *p);
/* notification callbacks */
void (*connect)(struct gserial *p);
void (*disconnect)(struct gserial *p);
int (*send_break)(struct gserial *p, int duration);
+ unsigned int (*send_carrier_detect)(struct gserial *p, unsigned int);
+ unsigned int (*send_ring_indicator)(struct gserial *p, unsigned int);
+ int (*send_modem_ctrl_bits)(struct gserial *p, int ctrl_bits);
+
+ /* notification changes to modem */
+ void (*notify_modem)(void *gser, u8 portno, int ctrl_bits);
};
/* utilities to allocate/free request and buffer */
diff --git a/drivers/usb/host/ehci-tegra.c b/drivers/usb/host/ehci-tegra.c
index 94a4782..a009a1b 100644
--- a/drivers/usb/host/ehci-tegra.c
+++ b/drivers/usb/host/ehci-tegra.c
@@ -108,9 +108,11 @@
static void tegra_ehci_notify_event(struct tegra_ehci_hcd *tegra, int event)
{
- tegra->transceiver->last_event = event;
- atomic_notifier_call_chain(&tegra->transceiver->notifier, event,
- tegra->transceiver->otg->gadget);
+ if (!IS_ERR_OR_NULL(tegra->transceiver)) {
+ tegra->transceiver->last_event = event;
+ atomic_notifier_call_chain(&tegra->transceiver->notifier, event,
+ tegra->transceiver->otg->gadget);
+ }
}
static void free_align_buffer(struct urb *urb, struct usb_hcd *hcd)
diff --git a/drivers/usb/host/xhci-tegra.c b/drivers/usb/host/xhci-tegra.c
index c6abf32..178cd6a 100644
--- a/drivers/usb/host/xhci-tegra.c
+++ b/drivers/usb/host/xhci-tegra.c
@@ -3737,6 +3737,7 @@
regulator_disable(tegra->xusb_s1p8v_reg);
regulator_disable(tegra->xusb_s1p05v_reg);
+ regulator_disable(tegra->xusb_s3p3v_reg);
tegra_usb2_clocks_deinit(tegra);
return ret;
@@ -3768,6 +3769,7 @@
regulator_enable(tegra->xusb_s1p05v_reg);
regulator_enable(tegra->xusb_s1p8v_reg);
+ regulator_enable(tegra->xusb_s3p3v_reg);
tegra_usb2_clocks_init(tegra);
return 0;
diff --git a/drivers/usb/misc/Kconfig b/drivers/usb/misc/Kconfig
index 8eb9916..c487fe1 100644
--- a/drivers/usb/misc/Kconfig
+++ b/drivers/usb/misc/Kconfig
@@ -253,3 +253,36 @@
help
This driver communicates with the microcontroller on Nvidia Shield
to control the center button LED.
+
+config USB_QCOM_DIAG_BRIDGE
+ tristate "USB Qualcomm diagnostic bridge driver"
+ depends on USB
+ help
+ Say Y here if you have a Qualcomm modem device connected via USB that
+ will be bridged in kernel space. This driver communicates with the
+ diagnostic interface and allows for bridging with the diag forwarding
+ driver.
+
+ To compile this driver as a module, choose M here: the
+ module will be called diag_bridge. If unsure, choose N.
+
+config USB_QCOM_MDM_BRIDGE
+ tristate "USB Qualcomm modem bridge driver for DUN and RMNET"
+ depends on USB
+ help
+ Say Y here if you have a Qualcomm modem device connected via USB that
+ will be bridged in kernel space. This driver works as a bridge to pass
+ control and data packets between the modem and the peripheral USB
+ gadget driver for dial-up networking and RMNET.
+ To compile this driver as a module, choose M here: the module
+ will be called mdm_bridge. If unsure, choose N.
+
+config USB_QCOM_KS_BRIDGE
+ tristate "USB Qualcomm kick start bridge"
+ depends on USB
+ help
+ Say Y here if you have a Qualcomm modem device connected via USB that
+ will be bridged in kernel space. This driver works as a bridge to pass
+ boot images, RAM dumps, and EFS sync.
+ To compile this driver as a module, choose M here: the module
+ will be called ks_bridge. If unsure, choose N.
diff --git a/drivers/usb/misc/Makefile b/drivers/usb/misc/Makefile
index 608bc47..47b1bf6 100644
--- a/drivers/usb/misc/Makefile
+++ b/drivers/usb/misc/Makefile
@@ -31,3 +31,8 @@
obj-$(CONFIG_USB_SISUSBVGA) += sisusbvga/
obj-$(CONFIG_USB_NV_SHIELD_LED) += usb_nvshieldled.o
+
+obj-$(CONFIG_USB_QCOM_DIAG_BRIDGE) += diag_bridge.o
+mdm_bridge-y := mdm_ctrl_bridge.o mdm_data_bridge.o
+obj-$(CONFIG_USB_QCOM_MDM_BRIDGE) += mdm_bridge.o
+obj-$(CONFIG_USB_QCOM_KS_BRIDGE) += ks_bridge.o
diff --git a/drivers/usb/misc/diag_bridge.c b/drivers/usb/misc/diag_bridge.c
new file mode 100644
index 0000000..cbdf55a
--- /dev/null
+++ b/drivers/usb/misc/diag_bridge.c
@@ -0,0 +1,682 @@
+/* Copyright (c) 2011-2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/* add additional information to our printk's */
+#define pr_fmt(fmt) "%s: " fmt "\n", __func__
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/kref.h>
+#include <linux/mutex.h>
+#include <linux/platform_device.h>
+#include <linux/ratelimit.h>
+#include <linux/uaccess.h>
+#include <linux/usb.h>
+#include <linux/debugfs.h>
+#include <mach/diag_bridge.h>
+
+#ifdef CONFIG_QCT_9K_MODEM
+#include <mach/board_htc.h>
+#endif
+
+#define DRIVER_DESC "USB host diag bridge driver"
+#define DRIVER_VERSION "1.0"
+
+#define MAX_DIAG_BRIDGE_DEVS 2
+#define AUTOSUSP_DELAY_WITH_USB 1000
+
+struct diag_bridge {
+ struct usb_device *udev;
+ struct usb_interface *ifc;
+ struct usb_anchor submitted;
+ __u8 in_epAddr;
+ __u8 out_epAddr;
+ int err;
+ struct kref kref;
+ struct mutex ifc_mutex;
+ struct diag_bridge_ops *ops;
+ struct platform_device *pdev;
+ unsigned default_autosusp_delay;
+ int id;
+
+ /* Support INT IN instead of BULK IN */
+ bool use_int_in_pipe;
+ unsigned int period;
+
+ /* debugging counters */
+ unsigned long bytes_to_host;
+ unsigned long bytes_to_mdm;
+ unsigned pending_reads;
+ unsigned pending_writes;
+ unsigned drop_count;
+};
+struct diag_bridge *__dev[MAX_DIAG_BRIDGE_DEVS];
+
+int diag_bridge_open(int id, struct diag_bridge_ops *ops)
+{
+ struct diag_bridge *dev;
+
+ if (id < 0 || id >= MAX_DIAG_BRIDGE_DEVS) {
+ pr_err("Invalid device ID");
+ return -ENODEV;
+ }
+
+ dev = __dev[id];
+ if (!dev) {
+ pr_err("dev is null");
+ return -ENODEV;
+ }
+
+ if (dev->ops) {
+ pr_err("bridge already opened");
+ return -EALREADY;
+ }
+
+ dev->ops = ops;
+ dev->err = 0;
+
+#ifdef CONFIG_PM_RUNTIME
+ dev->default_autosusp_delay = dev->udev->dev.power.autosuspend_delay;
+#endif
+ pm_runtime_set_autosuspend_delay(&dev->udev->dev,
+ AUTOSUSP_DELAY_WITH_USB);
+
+ kref_get(&dev->kref);
+
+ return 0;
+}
+EXPORT_SYMBOL(diag_bridge_open);
+
+static void diag_bridge_delete(struct kref *kref)
+{
+ struct diag_bridge *dev = container_of(kref, struct diag_bridge, kref);
+ int id = dev->id;
+
+ usb_put_dev(dev->udev);
+ __dev[id] = NULL;
+ kfree(dev);
+}
+
+void diag_bridge_close(int id)
+{
+ struct diag_bridge *dev;
+
+ if (id < 0 || id >= MAX_DIAG_BRIDGE_DEVS) {
+ pr_err("Invalid device ID");
+ return;
+ }
+
+ dev = __dev[id];
+ if (!dev) {
+ pr_err("dev is null");
+ return;
+ }
+
+ if (!dev->ops) {
+ pr_err("can't close bridge that was not open");
+ return;
+ }
+
+ dev_dbg(&dev->ifc->dev, "%s:\n", __func__);
+
+ usb_kill_anchored_urbs(&dev->submitted);
+ dev->ops = NULL;
+
+ pm_runtime_set_autosuspend_delay(&dev->udev->dev,
+ dev->default_autosusp_delay);
+
+ kref_put(&dev->kref, diag_bridge_delete);
+}
+EXPORT_SYMBOL(diag_bridge_close);
+
+static void diag_bridge_read_cb(struct urb *urb)
+{
+ struct diag_bridge *dev = urb->context;
+ struct diag_bridge_ops *cbs = dev->ops;
+
+ dev_dbg(&dev->ifc->dev, "%s: status:%d actual:%d\n", __func__,
+ urb->status, urb->actual_length);
+
+ /* save error so that subsequent read/write returns ENODEV */
+ if (urb->status == -EPROTO)
+ dev->err = urb->status;
+
+ if (cbs && cbs->read_complete_cb)
+ cbs->read_complete_cb(cbs->ctxt,
+ urb->transfer_buffer,
+ urb->transfer_buffer_length,
+ urb->status < 0 ? urb->status : urb->actual_length);
+
+ dev->bytes_to_host += urb->actual_length;
+ dev->pending_reads--;
+ kref_put(&dev->kref, diag_bridge_delete);
+}
+
+int diag_bridge_read(int id, char *data, int size)
+{
+ struct urb *urb = NULL;
+ unsigned int pipe;
+ struct diag_bridge *dev;
+ int ret;
+
+ if (id < 0 || id >= MAX_DIAG_BRIDGE_DEVS) {
+ pr_err("Invalid device ID");
+ return -ENODEV;
+ }
+
+ pr_debug("reading %d bytes", size);
+
+ dev = __dev[id];
+ if (!dev) {
+ pr_err("device is disconnected");
+ return -ENODEV;
+ }
+
+ mutex_lock(&dev->ifc_mutex);
+ if (!dev->ifc) {
+ ret = -ENODEV;
+ goto error;
+ }
+
+ if (!dev->ops) {
+ pr_err("bridge is not open");
+ ret = -ENODEV;
+ goto error;
+ }
+
+ if (!size) {
+ dev_dbg(&dev->ifc->dev, "invalid size:%d\n", size);
+ dev->drop_count++;
+ ret = -EINVAL;
+ goto error;
+ }
+
+ /* if there was a previous unrecoverable error, just quit */
+ if (dev->err) {
+ ret = -ENODEV;
+ goto error;
+ }
+
+ kref_get(&dev->kref);
+
+ urb = usb_alloc_urb(0, GFP_KERNEL);
+ if (!urb) {
+ dev_dbg(&dev->ifc->dev, "unable to allocate urb\n");
+ ret = -ENOMEM;
+ goto put_error;
+ }
+
+ ret = usb_autopm_get_interface(dev->ifc);
+ if (ret < 0 && ret != -EAGAIN && ret != -EACCES) {
+ pr_err_ratelimited("read: autopm_get failed:%d", ret);
+ goto free_error;
+ }
+
+ if (dev->use_int_in_pipe) {
+ pipe = usb_rcvintpipe(dev->udev, dev->in_epAddr);
+ usb_fill_int_urb(urb, dev->udev, pipe, data, size,
+ diag_bridge_read_cb, dev, dev->period);
+ } else {
+ pipe = usb_rcvbulkpipe(dev->udev, dev->in_epAddr);
+ usb_fill_bulk_urb(urb, dev->udev, pipe, data, size,
+ diag_bridge_read_cb, dev);
+ }
+
+ usb_anchor_urb(urb, &dev->submitted);
+ dev->pending_reads++;
+
+ ret = usb_submit_urb(urb, GFP_KERNEL);
+ if (ret) {
+ pr_err_ratelimited("submitting urb failed err:%d", ret);
+ dev->pending_reads--;
+ usb_unanchor_urb(urb);
+ }
+ usb_autopm_put_interface(dev->ifc);
+
+free_error:
+ usb_free_urb(urb);
+put_error:
+ if (ret) /* otherwise this is done in the completion handler */
+ kref_put(&dev->kref, diag_bridge_delete);
+error:
+ mutex_unlock(&dev->ifc_mutex);
+ return ret;
+}
+EXPORT_SYMBOL(diag_bridge_read);
+
+static void diag_bridge_write_cb(struct urb *urb)
+{
+ struct diag_bridge *dev = urb->context;
+ struct diag_bridge_ops *cbs = dev->ops;
+
+ dev_dbg(&dev->ifc->dev, "%s:\n", __func__);
+
+ usb_autopm_put_interface_async(dev->ifc);
+
+ /* save error so that subsequent read/write returns ENODEV */
+ if (urb->status == -EPROTO)
+ dev->err = urb->status;
+
+ if (cbs && cbs->write_complete_cb)
+ cbs->write_complete_cb(cbs->ctxt,
+ urb->transfer_buffer,
+ urb->transfer_buffer_length,
+ urb->status < 0 ? urb->status : urb->actual_length);
+
+ dev->bytes_to_mdm += urb->actual_length;
+ dev->pending_writes--;
+ kref_put(&dev->kref, diag_bridge_delete);
+}
+
+int diag_bridge_write(int id, char *data, int size)
+{
+ struct urb *urb = NULL;
+ unsigned int pipe;
+ struct diag_bridge *dev;
+ int ret;
+
+ if (id < 0 || id >= MAX_DIAG_BRIDGE_DEVS) {
+ pr_err("Invalid device ID");
+ return -ENODEV;
+ }
+
+ pr_debug("writing %d bytes", size);
+
+ dev = __dev[id];
+ if (!dev) {
+ pr_err("device is disconnected");
+ return -ENODEV;
+ }
+
+ mutex_lock(&dev->ifc_mutex);
+ if (!dev->ifc) {
+ ret = -ENODEV;
+ goto error;
+ }
+
+ if (!dev->ops) {
+ pr_err("bridge is not open");
+ ret = -ENODEV;
+ goto error;
+ }
+
+ if (!size) {
+ dev_err(&dev->ifc->dev, "invalid size:%d\n", size);
+ ret = -EINVAL;
+ goto error;
+ }
+
+ /* if there was a previous unrecoverable error, just quit */
+ if (dev->err) {
+ ret = -ENODEV;
+ goto error;
+ }
+
+ kref_get(&dev->kref);
+
+ urb = usb_alloc_urb(0, GFP_KERNEL);
+ if (!urb) {
+ dev_err(&dev->ifc->dev, "unable to allocate urb\n");
+ ret = -ENOMEM;
+ goto put_error;
+ }
+
+ ret = usb_autopm_get_interface(dev->ifc);
+ if (ret < 0 && ret != -EAGAIN && ret != -EACCES) {
+ pr_err_ratelimited("write: autopm_get failed:%d", ret);
+ goto free_error;
+ }
+
+ pipe = usb_sndbulkpipe(dev->udev, dev->out_epAddr);
+ usb_fill_bulk_urb(urb, dev->udev, pipe, data, size,
+ diag_bridge_write_cb, dev);
+ urb->transfer_flags |= URB_ZERO_PACKET;
+ usb_anchor_urb(urb, &dev->submitted);
+ dev->pending_writes++;
+
+ ret = usb_submit_urb(urb, GFP_KERNEL);
+ if (ret) {
+ pr_err_ratelimited("submitting urb failed err:%d", ret);
+ dev->pending_writes--;
+ usb_unanchor_urb(urb);
+ usb_autopm_put_interface(dev->ifc);
+ goto free_error;
+ }
+
+free_error:
+ usb_free_urb(urb);
+put_error:
+ if (ret) /* otherwise this is done in the completion handler */
+ kref_put(&dev->kref, diag_bridge_delete);
+error:
+ mutex_unlock(&dev->ifc_mutex);
+ return ret;
+}
+EXPORT_SYMBOL(diag_bridge_write);
+
+#if defined(CONFIG_DEBUG_FS)
+#define DEBUG_BUF_SIZE 512
+static ssize_t diag_read_stats(struct file *file, char __user *ubuf,
+ size_t count, loff_t *ppos)
+{
+ char *buf;
+ int i, ret = 0;
+
+	buf = kzalloc(DEBUG_BUF_SIZE, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
+ for (i = 0; i < MAX_DIAG_BRIDGE_DEVS; i++) {
+ struct diag_bridge *dev = __dev[i];
+ if (!dev)
+ continue;
+
+		ret += scnprintf(buf + ret, DEBUG_BUF_SIZE - ret,
+ "epin:%d, epout:%d\n"
+ "bytes to host: %lu\n"
+ "bytes to mdm: %lu\n"
+ "pending reads: %u\n"
+ "pending writes: %u\n"
+ "drop count:%u\n"
+ "last error: %d\n",
+ dev->in_epAddr, dev->out_epAddr,
+ dev->bytes_to_host, dev->bytes_to_mdm,
+ dev->pending_reads, dev->pending_writes,
+ dev->drop_count,
+ dev->err);
+ }
+
+ ret = simple_read_from_buffer(ubuf, count, ppos, buf, ret);
+ kfree(buf);
+ return ret;
+}
+
+static ssize_t diag_reset_stats(struct file *file, const char __user *buf,
+ size_t count, loff_t *ppos)
+{
+ int i;
+
+ for (i = 0; i < MAX_DIAG_BRIDGE_DEVS; i++) {
+ struct diag_bridge *dev = __dev[i];
+ if (dev) {
+ dev->bytes_to_host = dev->bytes_to_mdm = 0;
+ dev->pending_reads = dev->pending_writes = 0;
+ dev->drop_count = 0;
+ }
+ }
+
+ return count;
+}
+
+static const struct file_operations diag_stats_ops = {
+ .read = diag_read_stats,
+ .write = diag_reset_stats,
+};
+
+static struct dentry *dent;
+
+static void diag_bridge_debugfs_init(void)
+{
+ struct dentry *dfile;
+
+	dent = debugfs_create_dir("diag_bridge", NULL);
+ if (IS_ERR(dent))
+ return;
+
+	dfile = debugfs_create_file("status", 0444, dent, NULL, &diag_stats_ops);
+ if (!dfile || IS_ERR(dfile))
+ debugfs_remove(dent);
+}
+
+static void diag_bridge_debugfs_cleanup(void)
+{
+ if (dent) {
+ debugfs_remove_recursive(dent);
+ dent = NULL;
+ }
+}
+#else
+static inline void diag_bridge_debugfs_init(void) { }
+static inline void diag_bridge_debugfs_cleanup(void) { }
+#endif
+
+static int
+diag_bridge_probe(struct usb_interface *ifc, const struct usb_device_id *id)
+{
+ struct diag_bridge *dev;
+ struct usb_host_interface *ifc_desc;
+ struct usb_endpoint_descriptor *ep_desc;
+ int i, devid, ret = -ENOMEM;
+
+ pr_debug("id:%lu", id->driver_info);
+#ifdef CONFIG_QCT_9K_MODEM
+ pr_info("diag_bridge_probe\n");
+#endif
+
+ devid = id->driver_info & 0xFF;
+ if (devid < 0 || devid >= MAX_DIAG_BRIDGE_DEVS)
+ return -ENODEV;
+
+ /* already probed? */
+ if (__dev[devid]) {
+ pr_err("Diag device already probed");
+ return -ENODEV;
+ }
+
+#ifdef CONFIG_QCT_9K_MODEM
+	pr_info("diag_bridge_probe: interface=(%d), epnum=(%d)\n",
+			ifc->cur_altsetting->desc.bInterfaceNumber,
+			ifc->cur_altsetting->desc.bNumEndpoints);
+#endif
+
+ dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+ if (!dev) {
+ pr_err("unable to allocate dev");
+ return -ENOMEM;
+ }
+
+ __dev[devid] = dev;
+ dev->id = devid;
+
+ dev->udev = usb_get_dev(interface_to_usbdev(ifc));
+ dev->ifc = ifc;
+ kref_init(&dev->kref);
+ mutex_init(&dev->ifc_mutex);
+ init_usb_anchor(&dev->submitted);
+
+ ifc_desc = ifc->cur_altsetting;
+ for (i = 0; i < ifc_desc->desc.bNumEndpoints; i++) {
+ ep_desc = &ifc_desc->endpoint[i].desc;
+ if (!dev->in_epAddr && (usb_endpoint_is_bulk_in(ep_desc) ||
+ usb_endpoint_is_int_in(ep_desc))) {
+ dev->in_epAddr = ep_desc->bEndpointAddress;
+ if (usb_endpoint_is_int_in(ep_desc)) {
+ dev->use_int_in_pipe = 1;
+ dev->period = ep_desc->bInterval;
+ }
+ }
+ if (!dev->out_epAddr && usb_endpoint_is_bulk_out(ep_desc))
+ dev->out_epAddr = ep_desc->bEndpointAddress;
+ }
+
+ if (!(dev->in_epAddr && dev->out_epAddr)) {
+ pr_err("could not find bulk in and bulk out endpoints");
+ ret = -ENODEV;
+ goto error;
+ }
+
+ usb_set_intfdata(ifc, dev);
+ diag_bridge_debugfs_init();
+ dev->pdev = platform_device_register_simple("diag_bridge", devid,
+ NULL, 0);
+ if (IS_ERR(dev->pdev)) {
+ pr_err("unable to allocate platform device");
+ ret = PTR_ERR(dev->pdev);
+ goto error;
+ }
+
+ dev_dbg(&dev->ifc->dev, "%s: complete\n", __func__);
+
+#ifdef CONFIG_QCT_9K_MODEM
+ pr_info("[DIAG_BRIDGE]%s: complete\n", __func__);
+#endif
+ return 0;
+
+error:
+ if (dev)
+ kref_put(&dev->kref, diag_bridge_delete);
+
+ return ret;
+}
+
+static void diag_bridge_disconnect(struct usb_interface *ifc)
+{
+ struct diag_bridge *dev = usb_get_intfdata(ifc);
+
+ dev_dbg(&dev->ifc->dev, "%s:\n", __func__);
+
+#ifdef CONFIG_QCT_9K_MODEM
+ pr_info("[DIAG_BRIDGE]%s:\n", __func__);
+#endif
+
+ platform_device_unregister(dev->pdev);
+ mutex_lock(&dev->ifc_mutex);
+ dev->ifc = NULL;
+ mutex_unlock(&dev->ifc_mutex);
+ diag_bridge_debugfs_cleanup();
+ kref_put(&dev->kref, diag_bridge_delete);
+ usb_set_intfdata(ifc, NULL);
+}
+
+static int diag_bridge_suspend(struct usb_interface *ifc, pm_message_t message)
+{
+ struct diag_bridge *dev = usb_get_intfdata(ifc);
+ struct diag_bridge_ops *cbs = dev->ops;
+ int ret = 0;
+
+ if (cbs && cbs->suspend) {
+ ret = cbs->suspend(cbs->ctxt);
+ if (ret) {
+ dev_dbg(&dev->ifc->dev,
+ "%s: diag veto'd suspend\n", __func__);
+ return ret;
+ }
+
+ usb_kill_anchored_urbs(&dev->submitted);
+ }
+
+ return ret;
+}
+
+static int diag_bridge_resume(struct usb_interface *ifc)
+{
+ struct diag_bridge *dev = usb_get_intfdata(ifc);
+ struct diag_bridge_ops *cbs = dev->ops;
+
+ if (cbs && cbs->resume)
+ cbs->resume(cbs->ctxt);
+
+ return 0;
+}
+
+#define DEV_ID(n) (n)
+
+static const struct usb_device_id diag_bridge_ids[] = {
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x9001, 0),
+ .driver_info = DEV_ID(0), },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x9034, 0),
+ .driver_info = DEV_ID(0), },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x9048, 0),
+ .driver_info = DEV_ID(0), },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x904C, 0),
+ .driver_info = DEV_ID(0), },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x9075, 0),
+ .driver_info = DEV_ID(0), },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x9079, 0),
+ .driver_info = DEV_ID(1), },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x908A, 0),
+ .driver_info = DEV_ID(0), },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x908E, 0),
+ .driver_info = DEV_ID(0), },
+ /* 908E, ifc#1 refers to diag client interface */
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x908E, 1),
+ .driver_info = DEV_ID(1), },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x909C, 0),
+ .driver_info = DEV_ID(0), },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x909D, 0),
+ .driver_info = DEV_ID(0), },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x909E, 0),
+ .driver_info = DEV_ID(0), },
+ /* 909E, ifc#1 refers to diag client interface */
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x909E, 1),
+ .driver_info = DEV_ID(1), },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x90A0, 0),
+ .driver_info = DEV_ID(0), },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x90A4, 0),
+ .driver_info = DEV_ID(0), },
+	/* 90A4, ifc#1 refers to diag client interface */
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x90A4, 1),
+ .driver_info = DEV_ID(1), },
+
+ {} /* terminating entry */
+};
+MODULE_DEVICE_TABLE(usb, diag_bridge_ids);
+
+static struct usb_driver diag_bridge_driver = {
+ .name = "diag_bridge",
+ .probe = diag_bridge_probe,
+ .disconnect = diag_bridge_disconnect,
+ .suspend = diag_bridge_suspend,
+ .resume = diag_bridge_resume,
+ .reset_resume = diag_bridge_resume,
+ .id_table = diag_bridge_ids,
+ .supports_autosuspend = 1,
+};
+
+static int __init diag_bridge_init(void)
+{
+ int ret;
+
+#ifdef CONFIG_QCT_9K_MODEM
+ if (!is_mdm_modem())
+ return 0;
+#endif
+
+ pr_info("diag_bridge_init\n");
+
+ ret = usb_register(&diag_bridge_driver);
+ if (ret) {
+ pr_err("unable to register diag driver");
+ return ret;
+ }
+
+ return 0;
+}
+
+static void __exit diag_bridge_exit(void)
+{
+#ifdef CONFIG_QCT_9K_MODEM
+ if (!is_mdm_modem())
+ return;
+#endif
+
+ usb_deregister(&diag_bridge_driver);
+}
+
+module_init(diag_bridge_init);
+module_exit(diag_bridge_exit);
+
+MODULE_DESCRIPTION(DRIVER_DESC);
+MODULE_VERSION(DRIVER_VERSION);
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/usb/misc/ks_bridge.c b/drivers/usb/misc/ks_bridge.c
new file mode 100644
index 0000000..c4e24ab
--- /dev/null
+++ b/drivers/usb/misc/ks_bridge.c
@@ -0,0 +1,1108 @@
+/*
+ * Copyright (c) 2012-2014, Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/* add additional information to our printk's */
+#define pr_fmt(fmt) "%s: " fmt "\n", __func__
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/kref.h>
+#include <linux/platform_device.h>
+#include <linux/ratelimit.h>
+#include <linux/uaccess.h>
+#include <linux/usb.h>
+#include <linux/debugfs.h>
+#include <linux/seq_file.h>
+#include <linux/cdev.h>
+#include <linux/list.h>
+#include <linux/wait.h>
+#include <linux/poll.h>
+#include <linux/wakelock.h>
+
+#ifdef CONFIG_QCT_9K_MODEM
+#include <mach/board_htc.h>
+#endif
+
+#define DRIVER_DESC "USB host ks bridge driver"
+#define DRIVER_VERSION "1.0"
+
+enum bus_id {
+ BUS_HSIC,
+ BUS_USB,
+ BUS_UNDEF,
+};
+
+#define BUSNAME_LEN 20
+
+static enum bus_id str_to_busid(const char *name)
+{
+#ifdef CONFIG_QCT_9K_MODEM
+ if (!strncasecmp("tegra-ehci.1", name, BUSNAME_LEN))
+ return BUS_HSIC;
+#endif
+ if (!strncasecmp("msm_hsic_host", name, BUSNAME_LEN))
+ return BUS_HSIC;
+ if (!strncasecmp("msm_ehci_host.0", name, BUSNAME_LEN))
+ return BUS_USB;
+
+ return BUS_UNDEF;
+}
+
+struct data_pkt {
+ int n_read;
+ char *buf;
+ size_t len;
+ struct list_head list;
+ void *ctxt;
+};
+
+#define FILE_OPENED BIT(0)
+#define USB_DEV_CONNECTED BIT(1)
+#define NO_RX_REQS 10
+#define NO_BRIDGE_INSTANCES 4
+#define EFS_HSIC_BRIDGE_INDEX 2
+#define EFS_USB_BRIDGE_INDEX 3
+#define MAX_DATA_PKT_SIZE 16384
+#define PENDING_URB_TIMEOUT 10
+
+struct ksb_dev_info {
+ const char *name;
+};
+
+struct ks_bridge {
+ char *name;
+ spinlock_t lock;
+ struct workqueue_struct *wq;
+ struct work_struct to_mdm_work;
+ struct work_struct start_rx_work;
+ struct list_head to_mdm_list;
+ struct list_head to_ks_list;
+ wait_queue_head_t ks_wait_q;
+ wait_queue_head_t pending_urb_wait;
+ atomic_t tx_pending_cnt;
+ atomic_t rx_pending_cnt;
+
+ struct ksb_dev_info id_info;
+
+ /* cdev interface */
+ dev_t cdev_start_no;
+ struct cdev cdev;
+ struct class *class;
+ struct device *device;
+
+ /* usb specific */
+ struct usb_device *udev;
+ struct usb_interface *ifc;
+ __u8 in_epAddr;
+ __u8 out_epAddr;
+ unsigned int in_pipe;
+ unsigned int out_pipe;
+ struct usb_anchor submitted;
+
+ unsigned long flags;
+
+ /* to handle INT IN ep */
+ unsigned int period;
+
+ /* wake lock */
+ struct wake_lock ks_wake_lock;
+
+#define DBG_MSG_LEN 40
+#define DBG_MAX_MSG 500
+ unsigned int dbg_idx;
+ rwlock_t dbg_lock;
+ char (dbgbuf[DBG_MAX_MSG])[DBG_MSG_LEN]; /* buffer */
+};
+struct ks_bridge *__ksb[NO_BRIDGE_INSTANCES];
+
+/* by default debugging is enabled */
+static unsigned int enable_dbg = 1;
+module_param(enable_dbg, uint, S_IRUGO | S_IWUSR);
+
+static void
+dbg_log_event(struct ks_bridge *ksb, char *event, int d1, int d2)
+{
+ unsigned long flags;
+ unsigned long long t;
+ unsigned long nanosec;
+
+ if (!enable_dbg)
+ return;
+
+ write_lock_irqsave(&ksb->dbg_lock, flags);
+ t = cpu_clock(smp_processor_id());
+	nanosec = do_div(t, 1000000000) / 1000;
+ scnprintf(ksb->dbgbuf[ksb->dbg_idx], DBG_MSG_LEN, "%5lu.%06lu:%s:%x:%x",
+ (unsigned long)t, nanosec, event, d1, d2);
+
+ ksb->dbg_idx++;
+ ksb->dbg_idx = ksb->dbg_idx % DBG_MAX_MSG;
+ write_unlock_irqrestore(&ksb->dbg_lock, flags);
+}
+
+static
+struct data_pkt *ksb_alloc_data_pkt(size_t count, gfp_t flags, void *ctxt)
+{
+ struct data_pkt *pkt;
+
+ pkt = kzalloc(sizeof(struct data_pkt), flags);
+ if (!pkt) {
+ pr_err("failed to allocate data packet\n");
+ return ERR_PTR(-ENOMEM);
+ }
+
+ pkt->buf = kmalloc(count, flags);
+ if (!pkt->buf) {
+ pr_err("failed to allocate data buffer\n");
+ kfree(pkt);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ pkt->len = count;
+ INIT_LIST_HEAD(&pkt->list);
+ pkt->ctxt = ctxt;
+
+ return pkt;
+}
+
+static void ksb_free_data_pkt(struct data_pkt *pkt)
+{
+ kfree(pkt->buf);
+ kfree(pkt);
+}
+
+static void
+submit_one_urb(struct ks_bridge *ksb, gfp_t flags, struct data_pkt *pkt);
+static ssize_t ksb_fs_read(struct file *fp, char __user *buf,
+ size_t count, loff_t *pos)
+{
+ int ret;
+ unsigned long flags;
+ struct ks_bridge *ksb = fp->private_data;
+ struct data_pkt *pkt = NULL;
+ size_t space, copied;
+
+read_start:
+ if (!test_bit(USB_DEV_CONNECTED, &ksb->flags))
+ return -ENODEV;
+
+ spin_lock_irqsave(&ksb->lock, flags);
+ if (list_empty(&ksb->to_ks_list)) {
+ spin_unlock_irqrestore(&ksb->lock, flags);
+ ret = wait_event_interruptible(ksb->ks_wait_q,
+ !list_empty(&ksb->to_ks_list) ||
+ !test_bit(USB_DEV_CONNECTED, &ksb->flags));
+ if (ret < 0)
+ return ret;
+
+ goto read_start;
+ }
+
+ space = count;
+ copied = 0;
+ while (!list_empty(&ksb->to_ks_list) && space &&
+ test_bit(USB_DEV_CONNECTED, &ksb->flags)) {
+ size_t len;
+
+ pkt = list_first_entry(&ksb->to_ks_list, struct data_pkt, list);
+ list_del_init(&pkt->list);
+ len = min_t(size_t, space, pkt->len - pkt->n_read);
+ spin_unlock_irqrestore(&ksb->lock, flags);
+
+ ret = copy_to_user(buf + copied, pkt->buf + pkt->n_read, len);
+ if (ret) {
+ dev_err(ksb->device,
+ "copy_to_user failed err:%d\n", ret);
+ ksb_free_data_pkt(pkt);
+ return -EFAULT;
+ }
+
+ pkt->n_read += len;
+ space -= len;
+ copied += len;
+
+ if (pkt->n_read == pkt->len) {
+ /*
+ * re-init the packet and queue it
+ * for more data.
+ */
+ pkt->n_read = 0;
+ pkt->len = MAX_DATA_PKT_SIZE;
+ submit_one_urb(ksb, GFP_KERNEL, pkt);
+ pkt = NULL;
+ }
+ spin_lock_irqsave(&ksb->lock, flags);
+ }
+
+ /* put the partial packet back in the list */
+ if (!space && pkt && pkt->n_read != pkt->len) {
+ if (test_bit(USB_DEV_CONNECTED, &ksb->flags))
+ list_add(&pkt->list, &ksb->to_ks_list);
+ else
+ ksb_free_data_pkt(pkt);
+ }
+ spin_unlock_irqrestore(&ksb->lock, flags);
+
+ dbg_log_event(ksb, "KS_READ", copied, 0);
+
+	dev_dbg(ksb->device, "count:%zu space:%zu copied:%zu", count,
+ space, copied);
+
+ return copied;
+}
+
+static void ksb_tx_cb(struct urb *urb)
+{
+ struct data_pkt *pkt = urb->context;
+ struct ks_bridge *ksb = pkt->ctxt;
+
+ dbg_log_event(ksb, "C TX_URB", urb->status, 0);
+ dev_dbg(&ksb->udev->dev, "status:%d", urb->status);
+
+ if (test_bit(USB_DEV_CONNECTED, &ksb->flags))
+ usb_autopm_put_interface_async(ksb->ifc);
+
+ if (urb->status < 0)
+ pr_err_ratelimited("%s: urb failed with err:%d",
+ ksb->id_info.name, urb->status);
+
+ ksb_free_data_pkt(pkt);
+
+ atomic_dec(&ksb->tx_pending_cnt);
+ wake_up(&ksb->pending_urb_wait);
+}
+
+static void ksb_tomdm_work(struct work_struct *w)
+{
+ struct ks_bridge *ksb = container_of(w, struct ks_bridge, to_mdm_work);
+ struct data_pkt *pkt;
+ unsigned long flags;
+ struct urb *urb;
+ int ret;
+
+ spin_lock_irqsave(&ksb->lock, flags);
+ while (!list_empty(&ksb->to_mdm_list)
+ && test_bit(USB_DEV_CONNECTED, &ksb->flags)) {
+ pkt = list_first_entry(&ksb->to_mdm_list,
+ struct data_pkt, list);
+ list_del_init(&pkt->list);
+ spin_unlock_irqrestore(&ksb->lock, flags);
+
+ urb = usb_alloc_urb(0, GFP_KERNEL);
+ if (!urb) {
+ dbg_log_event(ksb, "TX_URB_MEM_FAIL", -ENOMEM, 0);
+ pr_err_ratelimited("%s: unable to allocate urb",
+ ksb->id_info.name);
+ ksb_free_data_pkt(pkt);
+ return;
+ }
+
+ ret = usb_autopm_get_interface(ksb->ifc);
+ if (ret < 0 && ret != -EAGAIN && ret != -EACCES) {
+ dbg_log_event(ksb, "TX_URB_AUTOPM_FAIL", ret, 0);
+ pr_err_ratelimited("%s: autopm_get failed:%d",
+ ksb->id_info.name, ret);
+ usb_free_urb(urb);
+ ksb_free_data_pkt(pkt);
+ return;
+ }
+ usb_fill_bulk_urb(urb, ksb->udev, ksb->out_pipe,
+ pkt->buf, pkt->len, ksb_tx_cb, pkt);
+ usb_anchor_urb(urb, &ksb->submitted);
+
+ dbg_log_event(ksb, "S TX_URB", pkt->len, 0);
+
+ atomic_inc(&ksb->tx_pending_cnt);
+ ret = usb_submit_urb(urb, GFP_KERNEL);
+ if (ret) {
+ dev_err(&ksb->udev->dev, "out urb submission failed");
+ usb_unanchor_urb(urb);
+ usb_free_urb(urb);
+ ksb_free_data_pkt(pkt);
+ usb_autopm_put_interface(ksb->ifc);
+ atomic_dec(&ksb->tx_pending_cnt);
+ wake_up(&ksb->pending_urb_wait);
+ return;
+ }
+
+ usb_free_urb(urb);
+
+ spin_lock_irqsave(&ksb->lock, flags);
+ }
+ spin_unlock_irqrestore(&ksb->lock, flags);
+}
+
+static ssize_t ksb_fs_write(struct file *fp, const char __user *buf,
+ size_t count, loff_t *pos)
+{
+ int ret;
+ struct data_pkt *pkt;
+ unsigned long flags;
+ struct ks_bridge *ksb = fp->private_data;
+
+ if (!test_bit(USB_DEV_CONNECTED, &ksb->flags))
+ return -ENODEV;
+
+ if (count > MAX_DATA_PKT_SIZE)
+ count = MAX_DATA_PKT_SIZE;
+
+ pkt = ksb_alloc_data_pkt(count, GFP_KERNEL, ksb);
+ if (IS_ERR(pkt)) {
+ dev_err(ksb->device,
+ "unable to allocate data packet");
+ return PTR_ERR(pkt);
+ }
+
+ ret = copy_from_user(pkt->buf, buf, count);
+ if (ret) {
+		dev_err(ksb->device,
+			"copy_from_user failed, %d bytes not copied", ret);
+		ksb_free_data_pkt(pkt);
+		return -EFAULT;
+ }
+
+ spin_lock_irqsave(&ksb->lock, flags);
+ list_add_tail(&pkt->list, &ksb->to_mdm_list);
+ spin_unlock_irqrestore(&ksb->lock, flags);
+
+ queue_work(ksb->wq, &ksb->to_mdm_work);
+
+ dbg_log_event(ksb, "KS_WRITE", count, 0);
+
+ return count;
+}
+
+static int ksb_fs_open(struct inode *ip, struct file *fp)
+{
+ struct ks_bridge *ksb =
+ container_of(ip->i_cdev, struct ks_bridge, cdev);
+
+ if (IS_ERR(ksb)) {
+ pr_err("ksb device not found");
+ return -ENODEV;
+ }
+
+ dev_dbg(ksb->device, ":%s", ksb->id_info.name);
+ dbg_log_event(ksb, "FS-OPEN", 0, 0);
+
+ fp->private_data = ksb;
+ set_bit(FILE_OPENED, &ksb->flags);
+
+ if (test_bit(USB_DEV_CONNECTED, &ksb->flags))
+ queue_work(ksb->wq, &ksb->start_rx_work);
+
+ return 0;
+}
+
+static unsigned int ksb_fs_poll(struct file *file, poll_table *wait)
+{
+ struct ks_bridge *ksb = file->private_data;
+ unsigned long flags;
+ int ret = 0;
+
+ if (!test_bit(USB_DEV_CONNECTED, &ksb->flags))
+ return POLLERR;
+
+ poll_wait(file, &ksb->ks_wait_q, wait);
+ if (!test_bit(USB_DEV_CONNECTED, &ksb->flags))
+ return POLLERR;
+
+ spin_lock_irqsave(&ksb->lock, flags);
+ if (!list_empty(&ksb->to_ks_list))
+ ret = POLLIN | POLLRDNORM;
+ spin_unlock_irqrestore(&ksb->lock, flags);
+
+ return ret;
+}
+
+static int ksb_fs_release(struct inode *ip, struct file *fp)
+{
+ struct ks_bridge *ksb = fp->private_data;
+
+ dev_dbg(ksb->device, ":%s", ksb->id_info.name);
+ dbg_log_event(ksb, "FS-RELEASE", 0, 0);
+
+ clear_bit(FILE_OPENED, &ksb->flags);
+ fp->private_data = NULL;
+
+ return 0;
+}
+
+static const struct file_operations ksb_fops = {
+ .owner = THIS_MODULE,
+ .read = ksb_fs_read,
+ .write = ksb_fs_write,
+ .open = ksb_fs_open,
+ .release = ksb_fs_release,
+ .poll = ksb_fs_poll,
+};
+
+static struct ksb_dev_info ksb_fboot_dev[] = {
+ {
+ .name = "ks_hsic_bridge",
+ },
+ {
+ .name = "ks_usb_bridge",
+ },
+};
+
+static struct ksb_dev_info ksb_efs_hsic_dev = {
+ .name = "efs_hsic_bridge",
+};
+
+static struct ksb_dev_info ksb_efs_usb_dev = {
+ .name = "efs_usb_bridge",
+};
+
+static const struct usb_device_id ksb_usb_ids[] = {
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x9008, 0),
+ .driver_info = (unsigned long)&ksb_fboot_dev, },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x9048, 2),
+ .driver_info = (unsigned long)&ksb_efs_hsic_dev, },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x904C, 2),
+ .driver_info = (unsigned long)&ksb_efs_hsic_dev, },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x9075, 2),
+ .driver_info = (unsigned long)&ksb_efs_hsic_dev, },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x9079, 2),
+ .driver_info = (unsigned long)&ksb_efs_usb_dev, },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x908A, 2),
+ .driver_info = (unsigned long)&ksb_efs_hsic_dev, },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x908E, 3),
+ .driver_info = (unsigned long)&ksb_efs_hsic_dev, },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x909C, 2),
+ .driver_info = (unsigned long)&ksb_efs_hsic_dev, },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x909D, 2),
+ .driver_info = (unsigned long)&ksb_efs_hsic_dev, },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x909E, 3),
+ .driver_info = (unsigned long)&ksb_efs_hsic_dev, },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x909F, 2),
+ .driver_info = (unsigned long)&ksb_efs_hsic_dev, },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x90A0, 2),
+ .driver_info = (unsigned long)&ksb_efs_hsic_dev, },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x90A4, 3),
+ .driver_info = (unsigned long)&ksb_efs_hsic_dev, },
+
+ {} /* terminating entry */
+};
+MODULE_DEVICE_TABLE(usb, ksb_usb_ids);
+
+static void ksb_rx_cb(struct urb *urb);
+static void
+submit_one_urb(struct ks_bridge *ksb, gfp_t flags, struct data_pkt *pkt)
+{
+ struct urb *urb;
+ int ret;
+
+ urb = usb_alloc_urb(0, flags);
+ if (!urb) {
+ dev_err(&ksb->udev->dev, "unable to allocate urb");
+ ksb_free_data_pkt(pkt);
+ return;
+ }
+
+ if (ksb->period)
+ usb_fill_int_urb(urb, ksb->udev, ksb->in_pipe,
+ pkt->buf, pkt->len,
+ ksb_rx_cb, pkt, ksb->period);
+ else
+ usb_fill_bulk_urb(urb, ksb->udev, ksb->in_pipe,
+ pkt->buf, pkt->len,
+ ksb_rx_cb, pkt);
+
+ usb_anchor_urb(urb, &ksb->submitted);
+
+ if (!test_bit(USB_DEV_CONNECTED, &ksb->flags)) {
+ usb_unanchor_urb(urb);
+ usb_free_urb(urb);
+ ksb_free_data_pkt(pkt);
+ return;
+ }
+
+ atomic_inc(&ksb->rx_pending_cnt);
+ ret = usb_submit_urb(urb, flags);
+ if (ret) {
+ dev_err(&ksb->udev->dev, "in urb submission failed");
+ usb_unanchor_urb(urb);
+ usb_free_urb(urb);
+ ksb_free_data_pkt(pkt);
+ atomic_dec(&ksb->rx_pending_cnt);
+ wake_up(&ksb->pending_urb_wait);
+ return;
+ }
+
+ dbg_log_event(ksb, "S RX_URB", pkt->len, 0);
+
+ usb_free_urb(urb);
+}
+
+static void ksb_rx_cb(struct urb *urb)
+{
+ struct data_pkt *pkt = urb->context;
+ struct ks_bridge *ksb = pkt->ctxt;
+ bool wakeup = true;
+
+ dbg_log_event(ksb, "C RX_URB", urb->status, urb->actual_length);
+
+ dev_dbg(&ksb->udev->dev, "status:%d actual:%d", urb->status,
+ urb->actual_length);
+
+	/* non-zero length of data received while unlinking urb */
+ if (urb->status == -ENOENT && (urb->actual_length > 0)) {
+ /*
+ * If we wakeup the reader process now, it may
+ * queue the URB before its reject flag gets
+ * cleared.
+ */
+ wakeup = false;
+ goto add_to_list;
+ }
+
+ if (urb->status < 0) {
+ if (urb->status != -ESHUTDOWN && urb->status != -ENOENT
+ && urb->status != -EPROTO)
+ pr_err_ratelimited("%s: urb failed with err:%d",
+ ksb->id_info.name, urb->status);
+
+ if (!urb->actual_length) {
+ ksb_free_data_pkt(pkt);
+ goto done;
+ }
+ }
+
+ usb_mark_last_busy(ksb->udev);
+
+ if (urb->actual_length == 0) {
+ submit_one_urb(ksb, GFP_ATOMIC, pkt);
+ goto done;
+ }
+
+add_to_list:
+ spin_lock(&ksb->lock);
+ pkt->len = urb->actual_length;
+ list_add_tail(&pkt->list, &ksb->to_ks_list);
+ spin_unlock(&ksb->lock);
+ /* wake up read thread */
+	if (wakeup) {
+		/*
+		 * Hold a wakelock briefly so the bridge data reaches the
+		 * userspace reader before PM freezes user processes.
+		 */
+		wake_lock_timeout(&ksb->ks_wake_lock, HZ);
+		wake_up(&ksb->ks_wait_q);
+	}
+done:
+ atomic_dec(&ksb->rx_pending_cnt);
+ wake_up(&ksb->pending_urb_wait);
+}
+
+static void ksb_start_rx_work(struct work_struct *w)
+{
+ struct ks_bridge *ksb =
+ container_of(w, struct ks_bridge, start_rx_work);
+ struct data_pkt *pkt;
+ struct urb *urb;
+ int i = 0;
+ int ret;
+ bool put = true;
+
+ ret = usb_autopm_get_interface(ksb->ifc);
+ if (ret < 0) {
+ if (ret != -EAGAIN && ret != -EACCES) {
+ pr_err_ratelimited("%s: autopm_get failed:%d",
+ ksb->id_info.name, ret);
+ return;
+ }
+ put = false;
+ }
+ for (i = 0; i < NO_RX_REQS; i++) {
+
+ if (!test_bit(USB_DEV_CONNECTED, &ksb->flags))
+ break;
+
+ pkt = ksb_alloc_data_pkt(MAX_DATA_PKT_SIZE, GFP_KERNEL, ksb);
+ if (IS_ERR(pkt)) {
+ dev_err(&ksb->udev->dev, "unable to allocate data pkt");
+ break;
+ }
+
+ urb = usb_alloc_urb(0, GFP_KERNEL);
+ if (!urb) {
+ dev_err(&ksb->udev->dev, "unable to allocate urb");
+ ksb_free_data_pkt(pkt);
+ break;
+ }
+
+ if (ksb->period)
+ usb_fill_int_urb(urb, ksb->udev, ksb->in_pipe,
+ pkt->buf, pkt->len,
+ ksb_rx_cb, pkt, ksb->period);
+ else
+ usb_fill_bulk_urb(urb, ksb->udev, ksb->in_pipe,
+ pkt->buf, pkt->len,
+ ksb_rx_cb, pkt);
+
+ usb_anchor_urb(urb, &ksb->submitted);
+
+ dbg_log_event(ksb, "S RX_URB", pkt->len, 0);
+
+ atomic_inc(&ksb->rx_pending_cnt);
+ ret = usb_submit_urb(urb, GFP_KERNEL);
+ if (ret) {
+ dev_err(&ksb->udev->dev, "in urb submission failed");
+ usb_unanchor_urb(urb);
+ usb_free_urb(urb);
+ ksb_free_data_pkt(pkt);
+ atomic_dec(&ksb->rx_pending_cnt);
+ wake_up(&ksb->pending_urb_wait);
+ break;
+ }
+
+ usb_free_urb(urb);
+ }
+ if (put)
+ usb_autopm_put_interface_async(ksb->ifc);
+}
+
+static int
+ksb_usb_probe(struct usb_interface *ifc, const struct usb_device_id *id)
+{
+ __u8 ifc_num;
+ struct usb_host_interface *ifc_desc;
+ struct usb_endpoint_descriptor *ep_desc;
+ int i;
+ struct ks_bridge *ksb;
+ unsigned long flags;
+ struct data_pkt *pkt;
+ struct ksb_dev_info *mdev, *fbdev;
+ struct usb_device *udev;
+ unsigned int bus_id;
+ int ret;
+
+ ifc_num = ifc->cur_altsetting->desc.bInterfaceNumber;
+
+ udev = interface_to_usbdev(ifc);
+ fbdev = mdev = (struct ksb_dev_info *)id->driver_info;
+
+#ifdef CONFIG_QCT_9K_MODEM
+ pr_info("%s: ifc_num=%d\n", __func__, ifc_num);
+#endif
+
+ bus_id = str_to_busid(udev->bus->bus_name);
+ if (bus_id == BUS_UNDEF) {
+ dev_err(&udev->dev, "unknown usb bus %s, probe failed\n",
+ udev->bus->bus_name);
+ return -ENODEV;
+ }
+
+ switch (id->idProduct) {
+ case 0x9008:
+#ifdef CONFIG_QCT_9K_MODEM
+ /* During 1st enumeration, disable auto-suspend */
+ pr_info("%s: disable auto-suspend for DL mode\n", __func__);
+ usb_disable_autosuspend(udev);
+#endif
+ ksb = __ksb[bus_id];
+ mdev = &fbdev[bus_id];
+ break;
+ case 0x9048:
+ case 0x904C:
+ case 0x9075:
+ case 0x908A:
+ case 0x908E:
+ case 0x90A0:
+ case 0x909C:
+ case 0x909D:
+ case 0x909E:
+ case 0x909F:
+ case 0x90A4:
+ ksb = __ksb[EFS_HSIC_BRIDGE_INDEX];
+ break;
+ case 0x9079:
+ if (ifc_num != 2)
+ return -ENODEV;
+ ksb = __ksb[EFS_USB_BRIDGE_INDEX];
+ break;
+ default:
+ return -ENODEV;
+ }
+
+ if (!ksb) {
+ pr_err("ksb is not initialized");
+ return -ENODEV;
+ }
+
+ ksb->udev = usb_get_dev(interface_to_usbdev(ifc));
+ ksb->ifc = ifc;
+ ifc_desc = ifc->cur_altsetting;
+ ksb->id_info = *mdev;
+
+ for (i = 0; i < ifc_desc->desc.bNumEndpoints; i++) {
+ ep_desc = &ifc_desc->endpoint[i].desc;
+
+ if (!ksb->in_epAddr && (usb_endpoint_is_bulk_in(ep_desc))) {
+ ksb->in_epAddr = ep_desc->bEndpointAddress;
+ ksb->period = 0;
+ }
+
+ if (!ksb->in_epAddr && (usb_endpoint_is_int_in(ep_desc))) {
+ ksb->in_epAddr = ep_desc->bEndpointAddress;
+ ksb->period = ep_desc->bInterval;
+ }
+
+ if (!ksb->out_epAddr && usb_endpoint_is_bulk_out(ep_desc))
+ ksb->out_epAddr = ep_desc->bEndpointAddress;
+ }
+
+ if (!(ksb->in_epAddr && ksb->out_epAddr)) {
+ dev_err(&udev->dev,
+ "could not find bulk in and bulk out endpoints");
+ usb_put_dev(ksb->udev);
+ ksb->ifc = NULL;
+ return -ENODEV;
+ }
+
+ ksb->in_pipe = ksb->period ?
+ usb_rcvintpipe(ksb->udev, ksb->in_epAddr) :
+ usb_rcvbulkpipe(ksb->udev, ksb->in_epAddr);
+
+ ksb->out_pipe = usb_sndbulkpipe(ksb->udev, ksb->out_epAddr);
+
+ usb_set_intfdata(ifc, ksb);
+ set_bit(USB_DEV_CONNECTED, &ksb->flags);
+ atomic_set(&ksb->tx_pending_cnt, 0);
+ atomic_set(&ksb->rx_pending_cnt, 0);
+
+ dbg_log_event(ksb, "PID-ATT", id->idProduct, 0);
+
+	/* free up stale buffers, if any, from a previous disconnect */
+ spin_lock_irqsave(&ksb->lock, flags);
+ while (!list_empty(&ksb->to_ks_list)) {
+ pkt = list_first_entry(&ksb->to_ks_list,
+ struct data_pkt, list);
+ list_del_init(&pkt->list);
+ ksb_free_data_pkt(pkt);
+ }
+ while (!list_empty(&ksb->to_mdm_list)) {
+ pkt = list_first_entry(&ksb->to_mdm_list,
+ struct data_pkt, list);
+ list_del_init(&pkt->list);
+ ksb_free_data_pkt(pkt);
+ }
+ spin_unlock_irqrestore(&ksb->lock, flags);
+
+ ret = alloc_chrdev_region(&ksb->cdev_start_no, 0, 1, mdev->name);
+ if (ret < 0) {
+ dbg_log_event(ksb, "chr reg failed", ret, 0);
+ goto fail_chrdev_region;
+ }
+
+ ksb->class = class_create(THIS_MODULE, mdev->name);
+ if (IS_ERR(ksb->class)) {
+ dbg_log_event(ksb, "clscr failed", PTR_ERR(ksb->class), 0);
+ goto fail_class_create;
+ }
+
+ cdev_init(&ksb->cdev, &ksb_fops);
+ ksb->cdev.owner = THIS_MODULE;
+
+ ret = cdev_add(&ksb->cdev, ksb->cdev_start_no, 1);
+ if (ret < 0) {
+ dbg_log_event(ksb, "cdev_add failed", ret, 0);
+ goto fail_class_create;
+ }
+
+ ksb->device = device_create(ksb->class, NULL, ksb->cdev_start_no,
+ NULL, mdev->name);
+ if (IS_ERR(ksb->device)) {
+ dbg_log_event(ksb, "devcrfailed", PTR_ERR(ksb->device), 0);
+ goto fail_device_create;
+ }
+
+ if (device_can_wakeup(&ksb->udev->dev)) {
+ ifc->needs_remote_wakeup = 1;
+ usb_enable_autosuspend(ksb->udev);
+ }
+
+ dev_dbg(&udev->dev, "usb dev connected");
+
+ return 0;
+
+fail_device_create:
+ cdev_del(&ksb->cdev);
+fail_class_create:
+ unregister_chrdev_region(ksb->cdev_start_no, 1);
+fail_chrdev_region:
+ usb_set_intfdata(ifc, NULL);
+ clear_bit(USB_DEV_CONNECTED, &ksb->flags);
+
+ return -ENODEV;
+
+}
+
+static int ksb_usb_suspend(struct usb_interface *ifc, pm_message_t message)
+{
+ struct ks_bridge *ksb = usb_get_intfdata(ifc);
+ unsigned long flags;
+
+ dbg_log_event(ksb, "SUSPEND", 0, 0);
+
+ if (pm_runtime_autosuspend_expiration(&ksb->udev->dev)) {
+ dbg_log_event(ksb, "SUSP ABORT-TimeCheck", 0, 0);
+ return -EBUSY;
+ }
+
+ usb_kill_anchored_urbs(&ksb->submitted);
+
+ spin_lock_irqsave(&ksb->lock, flags);
+ if (!list_empty(&ksb->to_ks_list)) {
+ spin_unlock_irqrestore(&ksb->lock, flags);
+ dbg_log_event(ksb, "SUSPEND ABORT", 0, 0);
+ /*
+ * Now wakeup the reader process and queue
+ * Rx URBs for more data.
+ */
+ wake_up(&ksb->ks_wait_q);
+ queue_work(ksb->wq, &ksb->start_rx_work);
+ return -EBUSY;
+ }
+ spin_unlock_irqrestore(&ksb->lock, flags);
+
+ return 0;
+}
+
+static int ksb_usb_resume(struct usb_interface *ifc)
+{
+ struct ks_bridge *ksb = usb_get_intfdata(ifc);
+
+ dbg_log_event(ksb, "RESUME", 0, 0);
+
+ if (test_bit(FILE_OPENED, &ksb->flags))
+ queue_work(ksb->wq, &ksb->start_rx_work);
+
+ return 0;
+}
+
+static void ksb_usb_disconnect(struct usb_interface *ifc)
+{
+ struct ks_bridge *ksb = usb_get_intfdata(ifc);
+ unsigned long flags;
+ struct data_pkt *pkt;
+
+ dbg_log_event(ksb, "PID-DETACH", 0, 0);
+
+ clear_bit(USB_DEV_CONNECTED, &ksb->flags);
+ wake_up(&ksb->ks_wait_q);
+ cancel_work_sync(&ksb->to_mdm_work);
+ cancel_work_sync(&ksb->start_rx_work);
+
+ device_destroy(ksb->class, ksb->cdev_start_no);
+ cdev_del(&ksb->cdev);
+ class_destroy(ksb->class);
+ unregister_chrdev_region(ksb->cdev_start_no, 1);
+
+ usb_kill_anchored_urbs(&ksb->submitted);
+
+ wait_event_interruptible_timeout(
+ ksb->pending_urb_wait,
+ !atomic_read(&ksb->tx_pending_cnt) &&
+ !atomic_read(&ksb->rx_pending_cnt),
+ msecs_to_jiffies(PENDING_URB_TIMEOUT));
+
+ spin_lock_irqsave(&ksb->lock, flags);
+ while (!list_empty(&ksb->to_ks_list)) {
+ pkt = list_first_entry(&ksb->to_ks_list,
+ struct data_pkt, list);
+ list_del_init(&pkt->list);
+ ksb_free_data_pkt(pkt);
+ }
+ while (!list_empty(&ksb->to_mdm_list)) {
+ pkt = list_first_entry(&ksb->to_mdm_list,
+ struct data_pkt, list);
+ list_del_init(&pkt->list);
+ ksb_free_data_pkt(pkt);
+ }
+ spin_unlock_irqrestore(&ksb->lock, flags);
+
+ ifc->needs_remote_wakeup = 0;
+ usb_put_dev(ksb->udev);
+ ksb->ifc = NULL;
+ usb_set_intfdata(ifc, NULL);
+
+}
+
+static struct usb_driver ksb_usb_driver = {
+ .name = "ks_bridge",
+ .probe = ksb_usb_probe,
+ .disconnect = ksb_usb_disconnect,
+ .suspend = ksb_usb_suspend,
+ .resume = ksb_usb_resume,
+ .reset_resume = ksb_usb_resume,
+ .id_table = ksb_usb_ids,
+ .supports_autosuspend = 1,
+};
+
+static int ksb_debug_show(struct seq_file *s, void *unused)
+{
+ unsigned long flags;
+ struct ks_bridge *ksb = s->private;
+ int i;
+
+ read_lock_irqsave(&ksb->dbg_lock, flags);
+ for (i = 0; i < DBG_MAX_MSG; i++) {
+ if (i == (ksb->dbg_idx - 1))
+ seq_printf(s, "-->%s\n", ksb->dbgbuf[i]);
+ else
+ seq_printf(s, "%s\n", ksb->dbgbuf[i]);
+ }
+ read_unlock_irqrestore(&ksb->dbg_lock, flags);
+
+ return 0;
+}
+
+static int ksb_debug_open(struct inode *ip, struct file *fp)
+{
+ return single_open(fp, ksb_debug_show, ip->i_private);
+}
+
+static const struct file_operations dbg_fops = {
+ .open = ksb_debug_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
+static struct dentry *dbg_dir;
+
+static int __init ksb_init(void)
+{
+ struct ks_bridge *ksb;
+ int num_instances = 0;
+ int ret = 0;
+ int i;
+
+#ifdef CONFIG_QCT_9K_MODEM
+ if (!is_mdm_modem())
+ return 0;
+#endif
+
+ dbg_dir = debugfs_create_dir("ks_bridge", NULL);
+ if (IS_ERR(dbg_dir))
+ pr_err("unable to create debug dir\n");
+
+ for (i = 0; i < NO_BRIDGE_INSTANCES; i++) {
+ ksb = kzalloc(sizeof(struct ks_bridge), GFP_KERNEL);
+ if (!ksb) {
+ pr_err("unable to allocate memory for ks_bridge\n");
+ ret = -ENOMEM;
+ goto dev_free;
+ }
+ __ksb[i] = ksb;
+
+ ksb->name = kasprintf(GFP_KERNEL, "ks_bridge:%i", i + 1);
+ if (!ksb->name) {
+ pr_err("unable to allocate name\n");
+ kfree(ksb);
+ ret = -ENOMEM;
+ goto dev_free;
+ }
+
+ spin_lock_init(&ksb->lock);
+ INIT_LIST_HEAD(&ksb->to_mdm_list);
+ INIT_LIST_HEAD(&ksb->to_ks_list);
+ init_waitqueue_head(&ksb->ks_wait_q);
+ init_waitqueue_head(&ksb->pending_urb_wait);
+ ksb->wq = create_singlethread_workqueue(ksb->name);
+ if (!ksb->wq) {
+ pr_err("unable to allocate workqueue\n");
+ kfree(ksb->name);
+ kfree(ksb);
+ ret = -ENOMEM;
+ goto dev_free;
+ }
+
+ INIT_WORK(&ksb->to_mdm_work, ksb_tomdm_work);
+ INIT_WORK(&ksb->start_rx_work, ksb_start_rx_work);
+ init_usb_anchor(&ksb->submitted);
+
+ ksb->dbg_idx = 0;
+ ksb->dbg_lock = __RW_LOCK_UNLOCKED(lck);
+
+ wake_lock_init(&ksb->ks_wake_lock, WAKE_LOCK_SUSPEND, "ks_bridge_lock");
+
+ if (!IS_ERR(dbg_dir))
+ debugfs_create_file(ksb->name, S_IRUGO, dbg_dir,
+ ksb, &dbg_fops);
+
+ num_instances++;
+ }
+
+ ret = usb_register(&ksb_usb_driver);
+ if (ret) {
+ pr_err("unable to register ks bridge driver\n");
+ goto dev_free;
+ }
+
+ pr_info("init done\n");
+
+ return 0;
+
+dev_free:
+ if (!IS_ERR(dbg_dir))
+ debugfs_remove_recursive(dbg_dir);
+
+ for (i = 0; i < num_instances; i++) {
+ ksb = __ksb[i];
+
+ wake_lock_destroy(&ksb->ks_wake_lock);
+ destroy_workqueue(ksb->wq);
+ kfree(ksb->name);
+ kfree(ksb);
+ }
+
+ return ret;
+
+}
+
+static void __exit ksb_exit(void)
+{
+ struct ks_bridge *ksb;
+ int i;
+
+#ifdef CONFIG_QCT_9K_MODEM
+ if (!is_mdm_modem())
+ return;
+#endif
+
+ if (!IS_ERR(dbg_dir))
+ debugfs_remove_recursive(dbg_dir);
+
+ usb_deregister(&ksb_usb_driver);
+
+ for (i = 0; i < NO_BRIDGE_INSTANCES; i++) {
+ ksb = __ksb[i];
+
+ /* destroy the per-instance ks wake lock */
+ wake_lock_destroy(&ksb->ks_wake_lock);
+ destroy_workqueue(ksb->wq);
+ kfree(ksb->name);
+ kfree(ksb);
+ }
+}
+
+module_init(ksb_init);
+module_exit(ksb_exit);
+
+MODULE_DESCRIPTION(DRIVER_DESC);
+MODULE_VERSION(DRIVER_VERSION);
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/usb/misc/mdm_ctrl_bridge.c b/drivers/usb/misc/mdm_ctrl_bridge.c
new file mode 100644
index 0000000..a00e7bd
--- /dev/null
+++ b/drivers/usb/misc/mdm_ctrl_bridge.c
@@ -0,0 +1,884 @@
+/* Copyright (c) 2011-2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/kref.h>
+#include <linux/debugfs.h>
+#include <linux/platform_device.h>
+#include <linux/uaccess.h>
+#include <linux/ratelimit.h>
+#include <linux/usb/ch9.h>
+#include <linux/usb/cdc.h>
+#include <linux/termios.h>
+#include <asm/unaligned.h>
+#include <mach/usb_bridge.h>
+
+#define ACM_CTRL_DTR (1 << 0)
+#define DEFAULT_READ_URB_LENGTH 4096
+
+#define SUSPENDED BIT(0)
+
+enum ctrl_bridge_rx_state {
+ RX_IDLE, /* inturb is not queued */
+ RX_WAIT, /* inturb is queued and waiting for data */
+ RX_BUSY, /* inturb is completed. processing RX */
+};
+
+struct ctrl_bridge {
+ struct usb_device *udev;
+ struct usb_interface *intf;
+
+ char *name;
+
+ unsigned int int_pipe;
+ struct urb *inturb;
+ void *intbuf;
+
+ struct urb *readurb;
+ void *readbuf;
+
+ struct usb_anchor tx_submitted;
+ struct usb_anchor tx_deferred;
+ struct usb_ctrlrequest *in_ctlreq;
+
+ struct bridge *brdg;
+ struct platform_device *pdev;
+
+ unsigned long flags;
+
+ /* input control lines (DSR, CTS, CD, RI) */
+ unsigned int cbits_tohost;
+
+ /* output control lines (DTR, RTS) */
+ unsigned int cbits_tomdm;
+
+ spinlock_t lock;
+ enum ctrl_bridge_rx_state rx_state;
+
+ /* counters */
+ unsigned int snd_encap_cmd;
+ unsigned int get_encap_res;
+ unsigned int resp_avail;
+ unsigned int set_ctrl_line_sts;
+ unsigned int notify_ser_state;
+};
+
+static struct ctrl_bridge *__dev[MAX_BRIDGE_DEVICES];
+
+static int get_ctrl_bridge_chid(char *xport_name)
+{
+ struct ctrl_bridge *dev;
+ int i;
+
+ for (i = 0; i < MAX_BRIDGE_DEVICES; i++) {
+ dev = __dev[i];
+ if (!strncmp(dev->name, xport_name, BRIDGE_NAME_MAX_LEN))
+ return i;
+ }
+
+ return -ENODEV;
+}
+
+unsigned int ctrl_bridge_get_cbits_tohost(unsigned int id)
+{
+ struct ctrl_bridge *dev;
+
+ if (id >= MAX_BRIDGE_DEVICES)
+ return -EINVAL;
+
+ dev = __dev[id];
+ if (!dev)
+ return -ENODEV;
+
+ return dev->cbits_tohost;
+}
+EXPORT_SYMBOL(ctrl_bridge_get_cbits_tohost);
+
+int ctrl_bridge_set_cbits(unsigned int id, unsigned int cbits)
+{
+ struct ctrl_bridge *dev;
+ struct bridge *brdg;
+ int retval;
+
+ if (id >= MAX_BRIDGE_DEVICES)
+ return -EINVAL;
+
+ dev = __dev[id];
+ if (!dev)
+ return -ENODEV;
+
+ pr_debug("%s: dev[id] =%u cbits : %u\n", __func__, id, cbits);
+
+ brdg = dev->brdg;
+ if (!brdg)
+ return -ENODEV;
+
+ dev->cbits_tomdm = cbits;
+
+ retval = ctrl_bridge_write(id, NULL, 0);
+
+ /* if DTR is high, update latest modem info to host */
+ if (brdg && (cbits & ACM_CTRL_DTR) && brdg->ops.send_cbits)
+ brdg->ops.send_cbits(brdg->ctx, dev->cbits_tohost);
+
+ return retval;
+}
+EXPORT_SYMBOL(ctrl_bridge_set_cbits);
+
+static int ctrl_bridge_start_read(struct ctrl_bridge *dev, gfp_t gfp_flags,
+ bool resubmit_inturb)
+{
+ int retval = 0;
+ unsigned long flags;
+
+ if (!dev->inturb) {
+ dev_err(&dev->intf->dev, "%s: inturb is NULL\n", __func__);
+ return -ENODEV;
+ }
+
+ /*
+ * FIXME: workaround to prevent inturb from being submitted
+ * more than once
+ */
+ if (!atomic_read(&dev->inturb->use_count) || resubmit_inturb) {
+ retval = usb_submit_urb(dev->inturb, gfp_flags);
+ if (retval < 0 && retval != -EPERM)
+ dev_err(&dev->intf->dev,
+ "%s error submitting int urb %d\n",
+ __func__, retval);
+ } else {
+ pr_warn("%s: inturb already submitted, dumping stack\n",
+ __func__);
+ dump_stack();
+ }
+
+ spin_lock_irqsave(&dev->lock, flags);
+ if (retval)
+ dev->rx_state = RX_IDLE;
+ else
+ dev->rx_state = RX_WAIT;
+ spin_unlock_irqrestore(&dev->lock, flags);
+
+ return retval;
+}
+
+static void resp_avail_cb(struct urb *urb)
+{
+ struct ctrl_bridge *dev = urb->context;
+ int resubmit_urb = 1;
+ struct bridge *brdg = dev->brdg;
+ unsigned long flags;
+
+ /*usb device disconnect*/
+ if (urb->dev->state == USB_STATE_NOTATTACHED)
+ return;
+
+ switch (urb->status) {
+ case 0:
+ /*success*/
+ dev->get_encap_res++;
+ if (brdg && brdg->ops.send_pkt)
+ brdg->ops.send_pkt(brdg->ctx, urb->transfer_buffer,
+ urb->actual_length);
+ break;
+
+ /* do not resubmit */
+ case -ESHUTDOWN:
+ case -ENOENT:
+ case -ECONNRESET: /* unplug */
+ case -EPROTO: /* babble error */
+ resubmit_urb = 0;
+ /* FALLTHROUGH */
+ case -EOVERFLOW: /* resubmit */
+ default:
+ dev_dbg(&dev->intf->dev, "%s: non zero urb status = %d\n",
+ __func__, urb->status);
+ }
+
+ if (resubmit_urb) {
+ /*re- submit int urb to check response available*/
+ ctrl_bridge_start_read(dev, GFP_ATOMIC, false);
+ } else {
+ spin_lock_irqsave(&dev->lock, flags);
+ dev->rx_state = RX_IDLE;
+ spin_unlock_irqrestore(&dev->lock, flags);
+ }
+
+ usb_autopm_put_interface_async(dev->intf);
+}
+
+static void notification_available_cb(struct urb *urb)
+{
+ int status;
+ struct usb_cdc_notification *ctrl;
+ struct ctrl_bridge *dev = urb->context;
+ struct bridge *brdg = dev->brdg;
+ unsigned int ctrl_bits;
+ unsigned char *data;
+ unsigned long flags;
+
+ /*usb device disconnect*/
+ if (urb->dev->state == USB_STATE_NOTATTACHED)
+ return;
+
+ spin_lock_irqsave(&dev->lock, flags);
+ dev->rx_state = RX_IDLE;
+ spin_unlock_irqrestore(&dev->lock, flags);
+
+ switch (urb->status) {
+ case 0:
+ /*success*/
+ break;
+ case -ESHUTDOWN:
+ case -ENOENT:
+ case -ECONNRESET:
+ case -EPROTO:
+ /* unplug */
+ return;
+ case -EPIPE:
+ dev_err(&dev->intf->dev,
+ "%s: stall on int endpoint\n", __func__);
+ /* TBD: halt to be cleared in work */
+ /* FALLTHROUGH */
+ case -EOVERFLOW:
+ default:
+ pr_debug_ratelimited("%s: non zero urb status = %d\n",
+ __func__, urb->status);
+ goto resubmit_int_urb;
+ }
+
+ ctrl = (struct usb_cdc_notification *)urb->transfer_buffer;
+ data = (unsigned char *)(ctrl + 1);
+
+ switch (ctrl->bNotificationType) {
+ case USB_CDC_NOTIFY_RESPONSE_AVAILABLE:
+ spin_lock_irqsave(&dev->lock, flags);
+ dev->rx_state = RX_BUSY;
+ spin_unlock_irqrestore(&dev->lock, flags);
+ dev->resp_avail++;
+ usb_autopm_get_interface_no_resume(dev->intf);
+ usb_fill_control_urb(dev->readurb, dev->udev,
+ usb_rcvctrlpipe(dev->udev, 0),
+ (unsigned char *)dev->in_ctlreq,
+ dev->readbuf,
+ DEFAULT_READ_URB_LENGTH,
+ resp_avail_cb, dev);
+
+ status = usb_submit_urb(dev->readurb, GFP_ATOMIC);
+ if (status) {
+ dev_err(&dev->intf->dev,
+ "%s: Error submitting Read URB %d\n",
+ __func__, status);
+ usb_autopm_put_interface_async(dev->intf);
+ goto resubmit_int_urb;
+ }
+ return;
+ case USB_CDC_NOTIFY_NETWORK_CONNECTION:
+ dev_dbg(&dev->intf->dev, "%s network\n", ctrl->wValue ?
+ "connected to" : "disconnected from");
+ break;
+ case USB_CDC_NOTIFY_SERIAL_STATE:
+ dev->notify_ser_state++;
+ ctrl_bits = get_unaligned_le16(data);
+ dev_dbg(&dev->intf->dev, "serial state: %d\n", ctrl_bits);
+ dev->cbits_tohost = ctrl_bits;
+ if (brdg && brdg->ops.send_cbits)
+ brdg->ops.send_cbits(brdg->ctx, ctrl_bits);
+ break;
+ default:
+ dev_err(&dev->intf->dev,
+ "%s: unknown notification %d received: index %d len %d data0 %d data1 %d\n",
+ __func__, ctrl->bNotificationType, ctrl->wIndex,
+ ctrl->wLength, data[0], data[1]);
+ }
+
+resubmit_int_urb:
+ ctrl_bridge_start_read(dev, GFP_ATOMIC, true);
+}
+
+int ctrl_bridge_open(struct bridge *brdg)
+{
+ struct ctrl_bridge *dev;
+ int ch_id;
+
+ if (!brdg) {
+ pr_err("bridge is null\n");
+ return -EINVAL;
+ }
+
+ ch_id = get_ctrl_bridge_chid(brdg->name);
+ if (ch_id < 0 || ch_id >= MAX_BRIDGE_DEVICES) {
+ pr_err("%s: %s dev not found\n", __func__, brdg->name);
+ return ch_id;
+ }
+
+ brdg->ch_id = ch_id;
+
+ dev = __dev[ch_id];
+ dev->brdg = brdg;
+ dev->snd_encap_cmd = 0;
+ dev->get_encap_res = 0;
+ dev->resp_avail = 0;
+ dev->set_ctrl_line_sts = 0;
+ dev->notify_ser_state = 0;
+
+ if (brdg->ops.send_cbits)
+ brdg->ops.send_cbits(brdg->ctx, dev->cbits_tohost);
+
+ return 0;
+}
+EXPORT_SYMBOL(ctrl_bridge_open);
+
+void ctrl_bridge_close(unsigned int id)
+{
+ struct ctrl_bridge *dev;
+
+ if (id >= MAX_BRIDGE_DEVICES)
+ return;
+
+ dev = __dev[id];
+ if (!dev || !dev->brdg)
+ return;
+
+ dev_dbg(&dev->intf->dev, "%s:\n", __func__);
+
+ ctrl_bridge_set_cbits(dev->brdg->ch_id, 0);
+
+ dev->brdg = NULL;
+}
+EXPORT_SYMBOL(ctrl_bridge_close);
+
+static void ctrl_write_callback(struct urb *urb)
+{
+ struct ctrl_bridge *dev = urb->context;
+
+ if (urb->status) {
+ pr_debug("Write status/size %d/%d\n",
+ urb->status, urb->actual_length);
+ }
+
+ kfree(urb->transfer_buffer);
+ kfree(urb->setup_packet);
+ usb_free_urb(urb);
+
+ /* if we are here after device disconnect
+ * usb_unbind_interface() takes care of
+ * residual pm_autopm_get_interface_* calls
+ */
+ if (urb->dev->state != USB_STATE_NOTATTACHED)
+ usb_autopm_put_interface_async(dev->intf);
+}
+
+int ctrl_bridge_write(unsigned int id, char *data, size_t size)
+{
+ int result;
+ struct urb *writeurb;
+ struct usb_ctrlrequest *out_ctlreq;
+ struct ctrl_bridge *dev;
+ unsigned long flags;
+
+ if (id >= MAX_BRIDGE_DEVICES) {
+ result = -EINVAL;
+ goto free_data;
+ }
+
+ dev = __dev[id];
+
+ if (!dev) {
+ result = -ENODEV;
+ goto free_data;
+ }
+
+ dev_dbg(&dev->intf->dev, "%s:[id]:%u: write (%zu bytes)\n",
+ __func__, id, size);
+
+ writeurb = usb_alloc_urb(0, GFP_ATOMIC);
+ if (!writeurb) {
+ dev_err(&dev->intf->dev, "%s: error allocating write urb\n",
+ __func__);
+ result = -ENOMEM;
+ goto free_data;
+ }
+
+ out_ctlreq = kmalloc(sizeof(*out_ctlreq), GFP_ATOMIC);
+ if (!out_ctlreq) {
+ dev_err(&dev->intf->dev,
+ "%s: error allocating setup packet buffer\n",
+ __func__);
+ result = -ENOMEM;
+ goto free_urb;
+ }
+
+ /* CDC Send Encapsulated Request packet */
+ out_ctlreq->bRequestType = (USB_DIR_OUT | USB_TYPE_CLASS |
+ USB_RECIP_INTERFACE);
+ if (!data && !size) {
+ out_ctlreq->bRequest = USB_CDC_REQ_SET_CONTROL_LINE_STATE;
+ out_ctlreq->wValue = cpu_to_le16(dev->cbits_tomdm);
+ dev->set_ctrl_line_sts++;
+ } else {
+ out_ctlreq->bRequest = USB_CDC_SEND_ENCAPSULATED_COMMAND;
+ out_ctlreq->wValue = 0;
+ dev->snd_encap_cmd++;
+ }
+ out_ctlreq->wIndex =
+ cpu_to_le16(dev->intf->cur_altsetting->desc.bInterfaceNumber);
+ out_ctlreq->wLength = cpu_to_le16(size);
+
+ usb_fill_control_urb(writeurb, dev->udev,
+ usb_sndctrlpipe(dev->udev, 0),
+ (unsigned char *)out_ctlreq,
+ (void *)data, size,
+ ctrl_write_callback, dev);
+
+ result = usb_autopm_get_interface_async(dev->intf);
+ if (result < 0) {
+ dev_dbg(&dev->intf->dev, "%s: unable to resume interface: %d\n",
+ __func__, result);
+
+ /*
+ * Revisit: if (result == -EPERM)
+ * bridge_suspend(dev->intf, PMSG_SUSPEND);
+ */
+
+ goto free_ctrlreq;
+ }
+
+ spin_lock_irqsave(&dev->lock, flags);
+ if (test_bit(SUSPENDED, &dev->flags)) {
+ usb_anchor_urb(writeurb, &dev->tx_deferred);
+ spin_unlock_irqrestore(&dev->lock, flags);
+ goto deferred;
+ }
+
+ usb_anchor_urb(writeurb, &dev->tx_submitted);
+ spin_unlock_irqrestore(&dev->lock, flags);
+ result = usb_submit_urb(writeurb, GFP_ATOMIC);
+ if (result < 0) {
+ dev_err(&dev->intf->dev, "%s: submit URB error %d\n",
+ __func__, result);
+ usb_autopm_put_interface_async(dev->intf);
+ goto unanchor_urb;
+ }
+deferred:
+ return size;
+
+unanchor_urb:
+ usb_unanchor_urb(writeurb);
+free_ctrlreq:
+ kfree(out_ctlreq);
+free_urb:
+ usb_free_urb(writeurb);
+free_data:
+ kfree(data);
+
+ return result;
+}
+EXPORT_SYMBOL(ctrl_bridge_write);
+
+int ctrl_bridge_suspend(unsigned int id)
+{
+ struct ctrl_bridge *dev;
+ unsigned long flags;
+
+ if (id >= MAX_BRIDGE_DEVICES)
+ return -EINVAL;
+
+ dev = __dev[id];
+ if (!dev)
+ return -ENODEV;
+ if (!dev->int_pipe)
+ return 0;
+
+ spin_lock_irqsave(&dev->lock, flags);
+ if (!usb_anchor_empty(&dev->tx_submitted) || dev->rx_state == RX_BUSY) {
+ spin_unlock_irqrestore(&dev->lock, flags);
+ return -EBUSY;
+ }
+ spin_unlock_irqrestore(&dev->lock, flags);
+
+ usb_kill_urb(dev->inturb);
+
+ spin_lock_irqsave(&dev->lock, flags);
+ if (dev->rx_state != RX_IDLE) {
+ spin_unlock_irqrestore(&dev->lock, flags);
+ return -EBUSY;
+ }
+ if (!usb_anchor_empty(&dev->tx_submitted)) {
+ spin_unlock_irqrestore(&dev->lock, flags);
+ ctrl_bridge_start_read(dev, GFP_KERNEL, false);
+ return -EBUSY;
+ }
+ set_bit(SUSPENDED, &dev->flags);
+ spin_unlock_irqrestore(&dev->lock, flags);
+
+ return 0;
+}
+
+int ctrl_bridge_resume(unsigned int id)
+{
+ struct ctrl_bridge *dev;
+ struct urb *urb;
+ unsigned long flags;
+ int ret;
+
+ if (id >= MAX_BRIDGE_DEVICES)
+ return -EINVAL;
+
+ dev = __dev[id];
+ if (!dev)
+ return -ENODEV;
+ if (!dev->int_pipe)
+ return 0;
+
+ spin_lock_irqsave(&dev->lock, flags);
+ if (!test_bit(SUSPENDED, &dev->flags)) {
+ spin_unlock_irqrestore(&dev->lock, flags);
+ return 0;
+ }
+
+ /* submit pending write requests */
+ while ((urb = usb_get_from_anchor(&dev->tx_deferred))) {
+ spin_unlock_irqrestore(&dev->lock, flags);
+ /*
+ * usb_get_from_anchor() does not drop the
+ * ref count incremented by the usb_anchor_urb()
+ * called in the Tx submission path. Drop it here.
+ */
+ usb_put_urb(urb);
+ usb_anchor_urb(urb, &dev->tx_submitted);
+ ret = usb_submit_urb(urb, GFP_ATOMIC);
+ if (ret < 0) {
+ usb_unanchor_urb(urb);
+ kfree(urb->setup_packet);
+ kfree(urb->transfer_buffer);
+ usb_free_urb(urb);
+ usb_autopm_put_interface_async(dev->intf);
+ }
+ spin_lock_irqsave(&dev->lock, flags);
+ }
+ clear_bit(SUSPENDED, &dev->flags);
+ spin_unlock_irqrestore(&dev->lock, flags);
+
+ return ctrl_bridge_start_read(dev, GFP_KERNEL, false);
+}
+
+#if defined(CONFIG_DEBUG_FS)
+#define DEBUG_BUF_SIZE 1024
+static ssize_t ctrl_bridge_read_stats(struct file *file, char __user *ubuf,
+ size_t count, loff_t *ppos)
+{
+ struct ctrl_bridge *dev;
+ char *buf;
+ int ret;
+ int i;
+ int temp = 0;
+
+ buf = kzalloc(DEBUG_BUF_SIZE, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
+ for (i = 0; i < MAX_BRIDGE_DEVICES; i++) {
+ dev = __dev[i];
+ if (!dev)
+ continue;
+
+ temp += scnprintf(buf + temp, DEBUG_BUF_SIZE - temp,
+ "\nName#%s dev %p\n"
+ "snd encap cmd cnt: %u\n"
+ "get encap res cnt: %u\n"
+ "res available cnt: %u\n"
+ "set ctrlline sts cnt: %u\n"
+ "notify ser state cnt: %u\n"
+ "cbits_tomdm: %d\n"
+ "cbits_tohost: %d\n"
+ "suspended: %d\n",
+ dev->name, dev,
+ dev->snd_encap_cmd,
+ dev->get_encap_res,
+ dev->resp_avail,
+ dev->set_ctrl_line_sts,
+ dev->notify_ser_state,
+ dev->cbits_tomdm,
+ dev->cbits_tohost,
+ test_bit(SUSPENDED, &dev->flags));
+ }
+
+ ret = simple_read_from_buffer(ubuf, count, ppos, buf, temp);
+
+ kfree(buf);
+
+ return ret;
+}
+
+static ssize_t ctrl_bridge_reset_stats(struct file *file,
+ const char __user *buf, size_t count, loff_t *ppos)
+{
+ struct ctrl_bridge *dev;
+ int i;
+
+ for (i = 0; i < MAX_BRIDGE_DEVICES; i++) {
+ dev = __dev[i];
+ if (!dev)
+ continue;
+
+ dev->snd_encap_cmd = 0;
+ dev->get_encap_res = 0;
+ dev->resp_avail = 0;
+ dev->set_ctrl_line_sts = 0;
+ dev->notify_ser_state = 0;
+ }
+ return count;
+}
+
+static const struct file_operations ctrl_stats_ops = {
+ .read = ctrl_bridge_read_stats,
+ .write = ctrl_bridge_reset_stats,
+};
+
+static struct dentry *ctrl_dent;
+static struct dentry *ctrl_dfile;
+static void ctrl_bridge_debugfs_init(void)
+{
+ ctrl_dent = debugfs_create_dir("ctrl_hsic_bridge", NULL);
+ if (IS_ERR(ctrl_dent))
+ return;
+
+ ctrl_dfile = debugfs_create_file("status", 0644, ctrl_dent, NULL,
+ &ctrl_stats_ops);
+ if (!ctrl_dfile || IS_ERR(ctrl_dfile))
+ debugfs_remove(ctrl_dent);
+}
+
+static void ctrl_bridge_debugfs_exit(void)
+{
+ debugfs_remove(ctrl_dfile);
+ debugfs_remove(ctrl_dent);
+}
+
+#else
+static void ctrl_bridge_debugfs_init(void) { }
+static void ctrl_bridge_debugfs_exit(void) { }
+#endif
+
+int
+ctrl_bridge_probe(struct usb_interface *ifc, struct usb_host_endpoint *int_in,
+ char *name, int id)
+{
+ struct ctrl_bridge *dev;
+ struct usb_device *udev;
+ struct usb_endpoint_descriptor *ep;
+ u16 wMaxPacketSize;
+ int retval = 0;
+ int interval;
+
+ udev = interface_to_usbdev(ifc);
+
+ dev = __dev[id];
+ if (!dev) {
+ pr_err("%s:device not found\n", __func__);
+ return -ENODEV;
+ }
+ if (!int_in)
+ return 0;
+ dev->name = name;
+
+ dev->pdev = platform_device_alloc(name, -1);
+ if (!dev->pdev) {
+ retval = -ENOMEM;
+ dev_err(&ifc->dev, "%s: unable to allocate platform device\n",
+ __func__);
+ goto free_name;
+ }
+
+ dev->flags = 0;
+ dev->udev = udev;
+ dev->int_pipe = usb_rcvintpipe(udev,
+ int_in->desc.bEndpointAddress & USB_ENDPOINT_NUMBER_MASK);
+ dev->intf = ifc;
+
+ /*use max pkt size from ep desc*/
+ ep = &dev->intf->cur_altsetting->endpoint[0].desc;
+
+ dev->inturb = usb_alloc_urb(0, GFP_KERNEL);
+ if (!dev->inturb) {
+ dev_err(&ifc->dev, "%s: error allocating int urb\n", __func__);
+ retval = -ENOMEM;
+ goto pdev_put;
+ }
+
+ wMaxPacketSize = le16_to_cpu(ep->wMaxPacketSize);
+
+ dev->intbuf = kmalloc(wMaxPacketSize, GFP_KERNEL);
+ if (!dev->intbuf) {
+ dev_err(&ifc->dev, "%s: error allocating int buffer\n",
+ __func__);
+ retval = -ENOMEM;
+ goto free_inturb;
+ }
+
+ interval = int_in->desc.bInterval;
+
+ usb_fill_int_urb(dev->inturb, udev, dev->int_pipe,
+ dev->intbuf, wMaxPacketSize,
+ notification_available_cb, dev, interval);
+
+ dev->readurb = usb_alloc_urb(0, GFP_KERNEL);
+ if (!dev->readurb) {
+ dev_err(&ifc->dev, "%s: error allocating read urb\n",
+ __func__);
+ retval = -ENOMEM;
+ goto free_intbuf;
+ }
+
+ dev->readbuf = kmalloc(DEFAULT_READ_URB_LENGTH, GFP_KERNEL);
+ if (!dev->readbuf) {
+ dev_err(&ifc->dev, "%s: error allocating read buffer\n",
+ __func__);
+ retval = -ENOMEM;
+ goto free_rurb;
+ }
+
+ dev->in_ctlreq = kmalloc(sizeof(*dev->in_ctlreq), GFP_KERNEL);
+ if (!dev->in_ctlreq) {
+ dev_err(&ifc->dev, "%s:error allocating setup packet buffer\n",
+ __func__);
+ retval = -ENOMEM;
+ goto free_rbuf;
+ }
+
+ dev->in_ctlreq->bRequestType =
+ (USB_DIR_IN | USB_TYPE_CLASS | USB_RECIP_INTERFACE);
+ dev->in_ctlreq->bRequest = USB_CDC_GET_ENCAPSULATED_RESPONSE;
+ dev->in_ctlreq->wValue = 0;
+ dev->in_ctlreq->wIndex =
+ cpu_to_le16(dev->intf->cur_altsetting->desc.bInterfaceNumber);
+ dev->in_ctlreq->wLength = cpu_to_le16(DEFAULT_READ_URB_LENGTH);
+
+ retval = platform_device_add(dev->pdev);
+ if (retval) {
+ dev_err(&ifc->dev, "%s:fail to add pdev\n", __func__);
+ goto free_ctrlreq;
+ }
+
+ retval = ctrl_bridge_start_read(dev, GFP_KERNEL, false);
+ if (retval) {
+ dev_err(&ifc->dev, "%s:fail to start reading\n", __func__);
+ goto pdev_del;
+ }
+
+ return 0;
+
+pdev_del:
+ platform_device_del(dev->pdev);
+free_ctrlreq:
+ kfree(dev->in_ctlreq);
+free_rbuf:
+ kfree(dev->readbuf);
+free_rurb:
+ usb_free_urb(dev->readurb);
+free_intbuf:
+ kfree(dev->intbuf);
+free_inturb:
+ usb_free_urb(dev->inturb);
+pdev_put:
+ platform_device_put(dev->pdev);
+free_name:
+ dev->name = "none";
+
+ return retval;
+}
+
+void ctrl_bridge_disconnect(unsigned int id)
+{
+ struct ctrl_bridge *dev = __dev[id];
+
+ if (!dev->int_pipe)
+ return;
+ dev_dbg(&dev->intf->dev, "%s:\n", __func__);
+
+ /*set device name to none to get correct channel id
+ * at the time of bridge open
+ */
+ dev->name = "none";
+
+ platform_device_unregister(dev->pdev);
+
+ usb_scuttle_anchored_urbs(&dev->tx_deferred);
+ usb_kill_anchored_urbs(&dev->tx_submitted);
+
+ usb_kill_urb(dev->inturb);
+ usb_kill_urb(dev->readurb);
+
+ kfree(dev->in_ctlreq);
+ kfree(dev->readbuf);
+ kfree(dev->intbuf);
+
+ usb_free_urb(dev->readurb);
+ usb_free_urb(dev->inturb);
+}
+
+int ctrl_bridge_init(void)
+{
+ struct ctrl_bridge *dev;
+ int i;
+ int retval = 0;
+
+ for (i = 0; i < MAX_BRIDGE_DEVICES; i++) {
+
+ dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+ if (!dev) {
+ pr_err("%s: unable to allocate dev\n", __func__);
+ retval = -ENOMEM;
+ goto error;
+ }
+
+ /*transport name will be set during probe*/
+ dev->name = "none";
+
+ spin_lock_init(&dev->lock);
+ init_usb_anchor(&dev->tx_submitted);
+ init_usb_anchor(&dev->tx_deferred);
+
+ __dev[i] = dev;
+ }
+
+ ctrl_bridge_debugfs_init();
+
+ return 0;
+
+error:
+ while (--i >= 0) {
+ kfree(__dev[i]);
+ __dev[i] = NULL;
+ }
+
+ return retval;
+}
+
+void ctrl_bridge_exit(void)
+{
+ int i;
+
+ ctrl_bridge_debugfs_exit();
+
+ for (i = 0; i < MAX_BRIDGE_DEVICES; i++) {
+ kfree(__dev[i]);
+ __dev[i] = NULL;
+ }
+}
diff --git a/drivers/usb/misc/mdm_data_bridge.c b/drivers/usb/misc/mdm_data_bridge.c
new file mode 100644
index 0000000..03c3f0c
--- /dev/null
+++ b/drivers/usb/misc/mdm_data_bridge.c
@@ -0,0 +1,1304 @@
+/* Copyright (c) 2011-2014, Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/debugfs.h>
+#include <linux/platform_device.h>
+#include <linux/uaccess.h>
+#include <linux/ratelimit.h>
+#include <mach/usb_bridge.h>
+
+#ifdef CONFIG_QCT_9K_MODEM
+#include <mach/board_htc.h>
+#endif
+
+#define MAX_RX_URBS 100
+#define RMNET_RX_BUFSIZE 2048
+
+#define STOP_SUBMIT_URB_LIMIT 500
+#define FLOW_CTRL_EN_THRESHOLD 500
+#define FLOW_CTRL_DISABLE 300
+#define FLOW_CTRL_SUPPORT 1
+
+#define BRIDGE_DATA_IDX 0
+#define BRIDGE_CTRL_IDX 1
+
+/*for xport : HSIC*/
+static const char * const serial_hsic_bridge_names[] = {
+ "serial_hsic_data",
+ "serial_hsic_ctrl",
+};
+
+static const char * const rmnet_hsic_bridge_names[] = {
+ "rmnet_hsic_data",
+ "rmnet_hsic_ctrl",
+};
+
+static const char * const qdss_hsic_bridge_names[] = {
+ "qdss_hsic_data",
+};
+
+/*for xport : HSUSB*/
+static const char * const serial_hsusb_bridge_names[] = {
+ "serial_hsusb_data",
+ "serial_hsusb_ctrl",
+};
+
+static const char * const rmnet_hsusb_bridge_names[] = {
+ "rmnet_hsusb_data",
+ "rmnet_hsusb_ctrl",
+};
+
+/* since driver supports multiple instances, on smp systems
+ * probe might get called from multiple cores, hence use lock
+ * to identify unclaimed bridge device instance
+ */
+static DEFINE_MUTEX(brdg_claim_lock);
+
+static struct workqueue_struct *bridge_wq;
+
+static unsigned int fctrl_support = FLOW_CTRL_SUPPORT;
+module_param(fctrl_support, uint, S_IRUGO | S_IWUSR);
+
+static unsigned int fctrl_en_thld = FLOW_CTRL_EN_THRESHOLD;
+module_param(fctrl_en_thld, uint, S_IRUGO | S_IWUSR);
+
+static unsigned int fctrl_dis_thld = FLOW_CTRL_DISABLE;
+module_param(fctrl_dis_thld, uint, S_IRUGO | S_IWUSR);
+
+unsigned int max_rx_urbs = MAX_RX_URBS;
+module_param(max_rx_urbs, uint, S_IRUGO | S_IWUSR);
+
+unsigned int stop_submit_urb_limit = STOP_SUBMIT_URB_LIMIT;
+module_param(stop_submit_urb_limit, uint, S_IRUGO | S_IWUSR);
+
+static unsigned tx_urb_mult = 20;
+module_param(tx_urb_mult, uint, S_IRUGO|S_IWUSR);
+
+static unsigned int rx_rmnet_buffer_size = RMNET_RX_BUFSIZE;
+module_param(rx_rmnet_buffer_size, uint, S_IRUGO | S_IWUSR);
+
+#define TX_HALT 0
+#define RX_HALT 1
+#define SUSPENDED 2
+#define CLAIMED 3
+
+struct data_bridge {
+ struct usb_interface *intf;
+ struct usb_device *udev;
+ int id;
+ char *name;
+
+ unsigned int bulk_in;
+ unsigned int bulk_out;
+ int err;
+
+ /* Support INT IN instead of BULK IN */
+ bool use_int_in_pipe;
+ unsigned int period;
+
+ /* keep track of in-flight URBs */
+ struct usb_anchor tx_active;
+ struct usb_anchor rx_active;
+
+ struct list_head rx_idle;
+ struct sk_buff_head rx_done;
+
+ struct workqueue_struct *wq;
+ struct work_struct process_rx_w;
+
+ struct bridge *brdg;
+
+ /* work queue function for handling halt conditions */
+ struct work_struct kevent;
+
+ unsigned long flags;
+
+ struct platform_device *pdev;
+
+ /* counters */
+ atomic_t pending_txurbs;
+ unsigned int txurb_drp_cnt;
+ unsigned long to_host;
+ unsigned long to_modem;
+ unsigned int tx_throttled_cnt;
+ unsigned int tx_unthrottled_cnt;
+ unsigned int rx_throttled_cnt;
+ unsigned int rx_unthrottled_cnt;
+};
+
+static struct data_bridge *__dev[MAX_BRIDGE_DEVICES];
+
+static unsigned int get_timestamp(void);
+static void dbg_timestamp(char *, struct sk_buff *);
+static int submit_rx_urb(struct data_bridge *dev, struct urb *urb,
+ gfp_t flags);
+
+/* Find an unclaimed bridge device instance */
+static int get_bridge_dev_idx(void)
+{
+ struct data_bridge *dev;
+ int i;
+
+ mutex_lock(&brdg_claim_lock);
+ for (i = 0; i < MAX_BRIDGE_DEVICES; i++) {
+ dev = __dev[i];
+ if (!test_bit(CLAIMED, &dev->flags)) {
+ set_bit(CLAIMED, &dev->flags);
+ mutex_unlock(&brdg_claim_lock);
+ return i;
+ }
+ }
+ mutex_unlock(&brdg_claim_lock);
+
+ return -ENODEV;
+}
+
+static int get_data_bridge_chid(char *xport_name)
+{
+ struct data_bridge *dev;
+ int i;
+
+ for (i = 0; i < MAX_BRIDGE_DEVICES; i++) {
+ dev = __dev[i];
+ if (!strncmp(dev->name, xport_name, BRIDGE_NAME_MAX_LEN))
+ return i;
+ }
+
+ return -ENODEV;
+}
+
+static inline bool rx_halted(struct data_bridge *dev)
+{
+ return test_bit(RX_HALT, &dev->flags);
+}
+
+static inline bool rx_throttled(struct bridge *brdg)
+{
+ return test_bit(RX_THROTTLED, &brdg->flags);
+}
+
+static void free_rx_urbs(struct data_bridge *dev)
+{
+ struct list_head *head;
+ struct urb *rx_urb;
+ unsigned long flags;
+
+ head = &dev->rx_idle;
+ spin_lock_irqsave(&dev->rx_done.lock, flags);
+ while (!list_empty(head)) {
+ rx_urb = list_entry(head->next, struct urb, urb_list);
+ list_del(&rx_urb->urb_list);
+ usb_free_urb(rx_urb);
+ }
+ spin_unlock_irqrestore(&dev->rx_done.lock, flags);
+}
+
+int data_bridge_unthrottle_rx(unsigned int id)
+{
+ struct data_bridge *dev;
+
+ if (id >= MAX_BRIDGE_DEVICES)
+ return -EINVAL;
+
+ dev = __dev[id];
+ if (!dev || !dev->brdg)
+ return -ENODEV;
+
+ dev->rx_unthrottled_cnt++;
+ queue_work(dev->wq, &dev->process_rx_w);
+
+ return 0;
+}
+EXPORT_SYMBOL(data_bridge_unthrottle_rx);
+
+static void data_bridge_process_rx(struct work_struct *work)
+{
+ int retval;
+ unsigned long flags;
+ struct urb *rx_idle;
+ struct sk_buff *skb;
+ struct timestamp_info *info;
+ struct data_bridge *dev =
+ container_of(work, struct data_bridge, process_rx_w);
+
+ struct bridge *brdg = dev->brdg;
+
+ if (!brdg || !brdg->ops.send_pkt || rx_halted(dev))
+ return;
+
+ while (!rx_throttled(brdg) && (skb = skb_dequeue(&dev->rx_done))) {
+ dev->to_host++;
+ info = (struct timestamp_info *)skb->cb;
+ info->rx_done_sent = get_timestamp();
+ /* hand off sk_buff to client; they'll need to free it */
+ retval = brdg->ops.send_pkt(brdg->ctx, skb, skb->len);
+ if (retval == -ENOTCONN || retval == -EINVAL) {
+ return;
+ } else if (retval == -EBUSY) {
+ dev->rx_throttled_cnt++;
+ break;
+ }
+ }
+
+ spin_lock_irqsave(&dev->rx_done.lock, flags);
+ while (!list_empty(&dev->rx_idle)) {
+ if (dev->rx_done.qlen > stop_submit_urb_limit)
+ break;
+
+ rx_idle = list_first_entry(&dev->rx_idle, struct urb, urb_list);
+ list_del(&rx_idle->urb_list);
+ spin_unlock_irqrestore(&dev->rx_done.lock, flags);
+ retval = submit_rx_urb(dev, rx_idle, GFP_KERNEL);
+ spin_lock_irqsave(&dev->rx_done.lock, flags);
+ if (retval) {
+ list_add_tail(&rx_idle->urb_list, &dev->rx_idle);
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&dev->rx_done.lock, flags);
+}
+
+static void data_bridge_read_cb(struct urb *urb)
+{
+ struct bridge *brdg;
+ struct sk_buff *skb = urb->context;
+ struct timestamp_info *info = (struct timestamp_info *)skb->cb;
+ struct data_bridge *dev = info->dev;
+ bool queue = false;
+
+ /* USB device disconnect */
+ if (urb->dev->state == USB_STATE_NOTATTACHED)
+ urb->status = -ECONNRESET;
+
+ brdg = dev->brdg;
+ skb_put(skb, urb->actual_length);
+
+ switch (urb->status) {
+ case 0: /* success */
+ queue = true;
+ info->rx_done = get_timestamp();
+ spin_lock(&dev->rx_done.lock);
+ __skb_queue_tail(&dev->rx_done, skb);
+ spin_unlock(&dev->rx_done.lock);
+ break;
+
+ /* do not resubmit */
+ case -EPIPE:
+ set_bit(RX_HALT, &dev->flags);
+ dev_err(&dev->intf->dev, "%s: epin halted\n", __func__);
+ schedule_work(&dev->kevent);
+ /* FALLTHROUGH */
+ case -ESHUTDOWN:
+ case -ENOENT: /* suspended */
+ case -ECONNRESET: /* unplug */
+ case -EPROTO:
+ dev_kfree_skb_any(skb);
+ break;
+
+ /* resubmit */
+ case -EOVERFLOW: /* babble error */
+ default:
+ queue = true;
+ dev_kfree_skb_any(skb);
+ pr_debug_ratelimited("%s: non zero urb status = %d\n",
+ __func__, urb->status);
+ break;
+ }
+
+ spin_lock(&dev->rx_done.lock);
+ list_add_tail(&urb->urb_list, &dev->rx_idle);
+ spin_unlock(&dev->rx_done.lock);
+
+ if (queue)
+ queue_work(dev->wq, &dev->process_rx_w);
+}
+
+static int submit_rx_urb(struct data_bridge *dev, struct urb *rx_urb,
+ gfp_t flags)
+{
+ struct sk_buff *skb;
+ struct timestamp_info *info;
+ int retval = -EINVAL;
+ unsigned int created;
+
+ created = get_timestamp();
+ skb = alloc_skb(rx_rmnet_buffer_size, flags);
+ if (!skb)
+ return -ENOMEM;
+
+ info = (struct timestamp_info *)skb->cb;
+ info->dev = dev;
+ info->created = created;
+
+ if (dev->use_int_in_pipe)
+ usb_fill_int_urb(rx_urb, dev->udev, dev->bulk_in,
+ skb->data, rx_rmnet_buffer_size,
+ data_bridge_read_cb, skb, dev->period);
+ else
+ usb_fill_bulk_urb(rx_urb, dev->udev, dev->bulk_in,
+ skb->data, rx_rmnet_buffer_size,
+ data_bridge_read_cb, skb);
+
+ if (test_bit(SUSPENDED, &dev->flags))
+ goto suspended;
+
+ usb_anchor_urb(rx_urb, &dev->rx_active);
+ info->rx_queued = get_timestamp();
+ retval = usb_submit_urb(rx_urb, flags);
+ if (retval)
+ goto fail;
+
+ usb_mark_last_busy(dev->udev);
+ return 0;
+fail:
+ usb_unanchor_urb(rx_urb);
+suspended:
+ dev_kfree_skb_any(skb);
+
+ return retval;
+}
+
+static int data_bridge_prepare_rx(struct data_bridge *dev)
+{
+ int i;
+ struct urb *rx_urb;
+ int retval = 0;
+
+ for (i = 0; i < max_rx_urbs; i++) {
+ rx_urb = usb_alloc_urb(0, GFP_KERNEL);
+ if (!rx_urb) {
+ retval = -ENOMEM;
+ goto free_urbs;
+ }
+
+ list_add_tail(&rx_urb->urb_list, &dev->rx_idle);
+ }
+
+ return 0;
+
+free_urbs:
+ free_rx_urbs(dev);
+ return retval;
+}
+
+int data_bridge_open(struct bridge *brdg)
+{
+ struct data_bridge *dev;
+ int ch_id;
+
+ if (!brdg) {
+ pr_err("bridge is null\n");
+ return -EINVAL;
+ }
+
+ ch_id = get_data_bridge_chid(brdg->name);
+ if (ch_id < 0 || ch_id >= MAX_BRIDGE_DEVICES) {
+ pr_err("%s: %s dev not found\n", __func__, brdg->name);
+ return ch_id < 0 ? ch_id : -ENODEV;
+ }
+
+ brdg->ch_id = ch_id;
+
+ dev = __dev[ch_id];
+
+ dev_dbg(&dev->intf->dev, "%s: dev:%p\n", __func__, dev);
+
+ dev->brdg = brdg;
+ dev->err = 0;
+ atomic_set(&dev->pending_txurbs, 0);
+ dev->to_host = 0;
+ dev->to_modem = 0;
+ dev->txurb_drp_cnt = 0;
+ dev->tx_throttled_cnt = 0;
+ dev->tx_unthrottled_cnt = 0;
+ dev->rx_throttled_cnt = 0;
+ dev->rx_unthrottled_cnt = 0;
+
+ queue_work(dev->wq, &dev->process_rx_w);
+
+ return 0;
+}
+EXPORT_SYMBOL(data_bridge_open);
+
+void data_bridge_close(unsigned int id)
+{
+ struct data_bridge *dev;
+ struct sk_buff *skb;
+ unsigned long flags;
+
+ if (id >= MAX_BRIDGE_DEVICES)
+ return;
+
+ dev = __dev[id];
+ if (!dev || !dev->brdg)
+ return;
+
+ dev_dbg(&dev->intf->dev, "%s:\n", __func__);
+
+ cancel_work_sync(&dev->kevent);
+ cancel_work_sync(&dev->process_rx_w);
+
+ usb_kill_anchored_urbs(&dev->tx_active);
+ usb_kill_anchored_urbs(&dev->rx_active);
+
+ spin_lock_irqsave(&dev->rx_done.lock, flags);
+ while ((skb = __skb_dequeue(&dev->rx_done)))
+ dev_kfree_skb_any(skb);
+ spin_unlock_irqrestore(&dev->rx_done.lock, flags);
+
+ dev->brdg = NULL;
+}
+EXPORT_SYMBOL(data_bridge_close);
+
+static void defer_kevent(struct work_struct *work)
+{
+ int status;
+ struct data_bridge *dev =
+ container_of(work, struct data_bridge, kevent);
+
+ if (!dev)
+ return;
+
+ if (test_bit(TX_HALT, &dev->flags)) {
+ usb_unlink_anchored_urbs(&dev->tx_active);
+
+ status = usb_autopm_get_interface(dev->intf);
+ if (status < 0) {
+ dev_dbg(&dev->intf->dev,
+ "can't acquire interface, status %d\n", status);
+ return;
+ }
+
+ status = usb_clear_halt(dev->udev, dev->bulk_out);
+ usb_autopm_put_interface(dev->intf);
+ if (status < 0 && status != -EPIPE && status != -ESHUTDOWN)
+ dev_err(&dev->intf->dev,
+ "can't clear tx halt, status %d\n", status);
+ else
+ clear_bit(TX_HALT, &dev->flags);
+ }
+
+ if (test_bit(RX_HALT, &dev->flags)) {
+ usb_unlink_anchored_urbs(&dev->rx_active);
+
+ status = usb_autopm_get_interface(dev->intf);
+ if (status < 0) {
+ dev_dbg(&dev->intf->dev,
+ "can't acquire interface, status %d\n", status);
+ return;
+ }
+
+ status = usb_clear_halt(dev->udev, dev->bulk_in);
+ usb_autopm_put_interface(dev->intf);
+ if (status < 0 && status != -EPIPE && status != -ESHUTDOWN)
+ dev_err(&dev->intf->dev,
+ "can't clear rx halt, status %d\n", status);
+ else {
+ clear_bit(RX_HALT, &dev->flags);
+ if (dev->brdg)
+ queue_work(dev->wq, &dev->process_rx_w);
+ }
+ }
+}
+
+static void data_bridge_write_cb(struct urb *urb)
+{
+ struct sk_buff *skb = urb->context;
+ struct timestamp_info *info = (struct timestamp_info *)skb->cb;
+ struct data_bridge *dev = info->dev;
+ struct bridge *brdg = dev->brdg;
+ int pending;
+
+ pr_debug("%s: dev:%p\n", __func__, dev);
+
+ switch (urb->status) {
+ case 0: /* success */
+ dbg_timestamp("UL", skb);
+ break;
+ case -EPROTO:
+ dev->err = -EPROTO;
+ break;
+ case -EPIPE:
+ set_bit(TX_HALT, &dev->flags);
+ dev_err(&dev->intf->dev, "%s: epout halted\n", __func__);
+ schedule_work(&dev->kevent);
+ /* FALLTHROUGH */
+ case -ESHUTDOWN:
+ case -ENOENT: /* suspended */
+ case -ECONNRESET: /* unplug */
+ case -EOVERFLOW: /* babble error */
+ /* FALLTHROUGH */
+ default:
+ pr_debug_ratelimited("%s: non zero urb status = %d\n",
+ __func__, urb->status);
+ }
+
+ usb_free_urb(urb);
+ dev_kfree_skb_any(skb);
+
+ pending = atomic_dec_return(&dev->pending_txurbs);
+
+ /* flow control */
+ if (brdg && fctrl_support && pending <= fctrl_dis_thld &&
+ test_and_clear_bit(TX_THROTTLED, &brdg->flags)) {
+ pr_debug_ratelimited("%s: disable flow ctrl: pend urbs:%u\n",
+ __func__, pending);
+ dev->tx_unthrottled_cnt++;
+ if (brdg->ops.unthrottle_tx)
+ brdg->ops.unthrottle_tx(brdg->ctx);
+ }
+
+ /* if we are here after device disconnect,
+ * usb_unbind_interface() takes care of
+ * residual usb_autopm_get_interface_* calls
+ */
+ if (urb->dev->state != USB_STATE_NOTATTACHED)
+ usb_autopm_put_interface_async(dev->intf);
+}
+
+int data_bridge_write(unsigned int id, struct sk_buff *skb)
+{
+ int result;
+ int size = skb->len;
+ int pending;
+ struct urb *txurb;
+ struct timestamp_info *info = (struct timestamp_info *)skb->cb;
+ struct data_bridge *dev;
+ struct bridge *brdg;
+
+ if (id >= MAX_BRIDGE_DEVICES)
+ return -EINVAL;
+
+ dev = __dev[id];
+ if (!dev || !dev->brdg || dev->err || !usb_get_intfdata(dev->intf))
+ return -ENODEV;
+
+ brdg = dev->brdg;
+ if (!brdg)
+ return -ENODEV;
+
+ dev_dbg(&dev->intf->dev, "%s: write (%d bytes)\n", __func__, skb->len);
+
+ result = usb_autopm_get_interface(dev->intf);
+ if (result < 0) {
+ dev_dbg(&dev->intf->dev, "%s: resume failure\n", __func__);
+ goto pm_error;
+ }
+
+ txurb = usb_alloc_urb(0, GFP_KERNEL);
+ if (!txurb) {
+ dev_err(&dev->intf->dev, "%s: error allocating write urb\n",
+ __func__);
+ result = -ENOMEM;
+ goto error;
+ }
+
+ /* store dev pointer in skb */
+ info->dev = dev;
+ info->tx_queued = get_timestamp();
+
+ usb_fill_bulk_urb(txurb, dev->udev, dev->bulk_out,
+ skb->data, skb->len, data_bridge_write_cb, skb);
+
+ txurb->transfer_flags |= URB_ZERO_PACKET;
+
+ pending = atomic_inc_return(&dev->pending_txurbs);
+ usb_anchor_urb(txurb, &dev->tx_active);
+
+ if (atomic_read(&dev->pending_txurbs) % tx_urb_mult)
+ txurb->transfer_flags |= URB_NO_INTERRUPT;
+
+ result = usb_submit_urb(txurb, GFP_KERNEL);
+ if (result < 0) {
+ usb_unanchor_urb(txurb);
+ atomic_dec(&dev->pending_txurbs);
+ dev_err(&dev->intf->dev, "%s: submit URB error %d\n",
+ __func__, result);
+ goto free_urb;
+ }
+
+ dev->to_modem++;
+ dev_dbg(&dev->intf->dev, "%s: pending_txurbs: %u\n", __func__, pending);
+
+ /* flow control: the urb was submitted, but ask the client to throttle by returning -EBUSY */
+ if (fctrl_support && pending > fctrl_en_thld) {
+ set_bit(TX_THROTTLED, &brdg->flags);
+ dev->tx_throttled_cnt++;
+ pr_debug_ratelimited("%s: enable flow ctrl pend txurbs:%u\n",
+ __func__, pending);
+ return -EBUSY;
+ }
+
+ return size;
+
+free_urb:
+ usb_free_urb(txurb);
+error:
+ dev->txurb_drp_cnt++;
+ usb_autopm_put_interface(dev->intf);
+pm_error:
+ return result;
+}
+EXPORT_SYMBOL(data_bridge_write);
+
+static int bridge_resume(struct usb_interface *iface)
+{
+ int retval = 0;
+ struct data_bridge *dev = usb_get_intfdata(iface);
+
+ clear_bit(SUSPENDED, &dev->flags);
+
+ if (dev->brdg)
+ queue_work(dev->wq, &dev->process_rx_w);
+
+ retval = ctrl_bridge_resume(dev->id);
+
+ return retval;
+}
+
+static int bridge_suspend(struct usb_interface *intf, pm_message_t message)
+{
+ int retval;
+ struct data_bridge *dev = usb_get_intfdata(intf);
+
+ if (atomic_read(&dev->pending_txurbs))
+ return -EBUSY;
+
+ retval = ctrl_bridge_suspend(dev->id);
+ if (retval)
+ return retval;
+
+ set_bit(SUSPENDED, &dev->flags);
+ usb_kill_anchored_urbs(&dev->rx_active);
+
+ return 0;
+}
+
+static int data_bridge_probe(struct usb_interface *iface,
+ struct usb_host_endpoint *bulk_in,
+ struct usb_host_endpoint *bulk_out, char *name, int id)
+{
+ struct data_bridge *dev;
+ int retval;
+
+ dev = __dev[id];
+ if (!dev) {
+ pr_err("%s: device not found\n", __func__);
+ return -ENODEV;
+ }
+
+ dev->pdev = platform_device_alloc(name, -1);
+ if (!dev->pdev) {
+ pr_err("%s: unable to allocate platform device\n", __func__);
+ return -ENOMEM;
+ }
+
+ /* clear all bits except the claimed bit */
+ clear_bit(RX_HALT, &dev->flags);
+ clear_bit(TX_HALT, &dev->flags);
+ clear_bit(SUSPENDED, &dev->flags);
+
+ dev->id = id;
+ dev->name = name;
+ dev->udev = interface_to_usbdev(iface);
+ dev->intf = iface;
+
+ if (dev->use_int_in_pipe)
+ dev->bulk_in = usb_rcvintpipe(dev->udev,
+ bulk_in->desc.bEndpointAddress &
+ USB_ENDPOINT_NUMBER_MASK);
+ else
+ dev->bulk_in = usb_rcvbulkpipe(dev->udev,
+ bulk_in->desc.bEndpointAddress &
+ USB_ENDPOINT_NUMBER_MASK);
+
+ if (bulk_out)
+ dev->bulk_out = usb_sndbulkpipe(dev->udev,
+ bulk_out->desc.bEndpointAddress & USB_ENDPOINT_NUMBER_MASK);
+
+ usb_set_intfdata(iface, dev);
+
+ /* allocate the list of rx urbs */
+ retval = data_bridge_prepare_rx(dev);
+ if (retval) {
+ platform_device_put(dev->pdev);
+ return retval;
+ }
+
+ platform_device_add(dev->pdev);
+
+ return 0;
+}
+
+#if defined(CONFIG_DEBUG_FS)
+#define DEBUG_BUF_SIZE 4096
+
+static unsigned int record_timestamp;
+module_param(record_timestamp, uint, S_IRUGO | S_IWUSR);
+
+static struct timestamp_buf dbg_data = {
+ .idx = 0,
+ .lck = __RW_LOCK_UNLOCKED(lck)
+};
+
+/* get_timestamp - returns time of day in us */
+static unsigned int get_timestamp(void)
+{
+ struct timeval tval;
+ unsigned int stamp;
+
+ if (!record_timestamp)
+ return 0;
+
+ do_gettimeofday(&tval);
+ /* 2^32 = 4294967296. Limit to 4096s. */
+ stamp = tval.tv_sec & 0xFFF;
+ stamp = stamp * 1000000 + tval.tv_usec;
+ return stamp;
+}
+
+static void dbg_inc(unsigned *idx)
+{
+ *idx = (*idx + 1) & (DBG_DATA_MAX-1);
+}
+
+/**
+ * dbg_timestamp - stores timestamp values of an skb's life cycle
+ * in the debug buffer
+ * @event: "UL": uplink data
+ * @skb: skb whose timestamp values are stored in the debug buffer
+ */
+static void dbg_timestamp(char *event, struct sk_buff *skb)
+{
+ unsigned long flags;
+ struct timestamp_info *info = (struct timestamp_info *)skb->cb;
+
+ if (!record_timestamp)
+ return;
+
+ write_lock_irqsave(&dbg_data.lck, flags);
+
+ scnprintf(dbg_data.buf[dbg_data.idx], DBG_DATA_MSG,
+ "%p %u[%s] %u %u %u %u %u %u\n",
+ skb, skb->len, event, info->created, info->rx_queued,
+ info->rx_done, info->rx_done_sent, info->tx_queued,
+ get_timestamp());
+
+ dbg_inc(&dbg_data.idx);
+
+ write_unlock_irqrestore(&dbg_data.lck, flags);
+}
+
+/* show_timestamp: displays the timestamp buffer */
+static ssize_t show_timestamp(struct file *file, char __user *ubuf,
+ size_t count, loff_t *ppos)
+{
+ unsigned long flags;
+ unsigned i;
+ unsigned j = 0;
+ char *buf;
+ int ret = 0;
+
+ if (!record_timestamp)
+ return 0;
+
+ buf = kzalloc(DEBUG_BUF_SIZE, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
+ read_lock_irqsave(&dbg_data.lck, flags);
+
+ i = dbg_data.idx;
+ for (dbg_inc(&i); i != dbg_data.idx; dbg_inc(&i)) {
+ if (!strnlen(dbg_data.buf[i], DBG_DATA_MSG))
+ continue;
+ j += scnprintf(buf + j, DEBUG_BUF_SIZE - j,
+ "%s\n", dbg_data.buf[i]);
+ }
+
+ read_unlock_irqrestore(&dbg_data.lck, flags);
+
+ ret = simple_read_from_buffer(ubuf, count, ppos, buf, j);
+
+ kfree(buf);
+
+ return ret;
+}
+
+const struct file_operations data_timestamp_ops = {
+ .read = show_timestamp,
+};
+
+static ssize_t data_bridge_read_stats(struct file *file, char __user *ubuf,
+ size_t count, loff_t *ppos)
+{
+ struct data_bridge *dev;
+ char *buf;
+ int ret;
+ int i;
+ int temp = 0;
+
+ buf = kzalloc(DEBUG_BUF_SIZE, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
+ for (i = 0; i < MAX_BRIDGE_DEVICES; i++) {
+ dev = __dev[i];
+ if (!dev)
+ continue;
+
+ temp += scnprintf(buf + temp, DEBUG_BUF_SIZE - temp,
+ "\nName#%s dev %p\n"
+ "pending tx urbs: %u\n"
+ "tx urb drp cnt: %u\n"
+ "to host: %lu\n"
+ "to mdm: %lu\n"
+ "tx throttled cnt: %u\n"
+ "tx unthrottled cnt: %u\n"
+ "rx throttled cnt: %u\n"
+ "rx unthrottled cnt: %u\n"
+ "rx done skb qlen: %u\n"
+ "dev err: %d\n"
+ "suspended: %d\n"
+ "TX_HALT: %d\n"
+ "RX_HALT: %d\n",
+ dev->name, dev,
+ atomic_read(&dev->pending_txurbs),
+ dev->txurb_drp_cnt,
+ dev->to_host,
+ dev->to_modem,
+ dev->tx_throttled_cnt,
+ dev->tx_unthrottled_cnt,
+ dev->rx_throttled_cnt,
+ dev->rx_unthrottled_cnt,
+ dev->rx_done.qlen,
+ dev->err,
+ test_bit(SUSPENDED, &dev->flags),
+ test_bit(TX_HALT, &dev->flags),
+ test_bit(RX_HALT, &dev->flags));
+
+ }
+
+ ret = simple_read_from_buffer(ubuf, count, ppos, buf, temp);
+
+ kfree(buf);
+
+ return ret;
+}
+
+static ssize_t data_bridge_reset_stats(struct file *file,
+ const char __user *buf, size_t count, loff_t *ppos)
+{
+ struct data_bridge *dev;
+ int i;
+
+ for (i = 0; i < MAX_BRIDGE_DEVICES; i++) {
+ dev = __dev[i];
+ if (!dev)
+ continue;
+
+ dev->to_host = 0;
+ dev->to_modem = 0;
+ dev->txurb_drp_cnt = 0;
+ dev->tx_throttled_cnt = 0;
+ dev->tx_unthrottled_cnt = 0;
+ dev->rx_throttled_cnt = 0;
+ dev->rx_unthrottled_cnt = 0;
+ }
+ return count;
+}
+
+const struct file_operations data_stats_ops = {
+ .read = data_bridge_read_stats,
+ .write = data_bridge_reset_stats,
+};
+
+static struct dentry *data_dent;
+static struct dentry *data_dfile_stats;
+static struct dentry *data_dfile_tstamp;
+
+static void data_bridge_debugfs_init(void)
+{
+ data_dent = debugfs_create_dir("data_hsic_bridge", NULL);
+ if (IS_ERR(data_dent))
+ return;
+
+ data_dfile_stats = debugfs_create_file("status", 0644, data_dent,
+ NULL, &data_stats_ops);
+ if (!data_dfile_stats || IS_ERR(data_dfile_stats)) {
+ debugfs_remove(data_dent);
+ return;
+ }
+
+ data_dfile_tstamp = debugfs_create_file("timestamp", 0644, data_dent,
+ NULL, &data_timestamp_ops);
+ if (!data_dfile_tstamp || IS_ERR(data_dfile_tstamp))
+ debugfs_remove(data_dent);
+}
+
+static void data_bridge_debugfs_exit(void)
+{
+ debugfs_remove(data_dfile_stats);
+ debugfs_remove(data_dfile_tstamp);
+ debugfs_remove(data_dent);
+}
+
+#else
+static void data_bridge_debugfs_init(void) { }
+static void data_bridge_debugfs_exit(void) { }
+static void dbg_timestamp(char *event, struct sk_buff *skb) { }
+
+static unsigned int get_timestamp(void)
+{
+ return 0;
+}
+
+#endif
+
+static int
+bridge_probe(struct usb_interface *iface, const struct usb_device_id *id)
+{
+ struct usb_host_endpoint *endpoint = NULL;
+ struct usb_host_endpoint *bulk_in = NULL;
+ struct usb_host_endpoint *bulk_out = NULL;
+ struct usb_host_endpoint *int_in = NULL;
+ struct usb_host_endpoint *data_int_in = NULL;
+ struct usb_device *udev;
+ int i;
+ int status = 0;
+ int numends;
+ int ch_id;
+ char **bname = (char **)id->driver_info;
+
+ if (iface->num_altsetting != 1) {
+ pr_err("%s invalid num_altsetting %u\n",
+ __func__, iface->num_altsetting);
+ return -EINVAL;
+ }
+
+ udev = interface_to_usbdev(iface);
+ usb_get_dev(udev);
+
+#ifdef CONFIG_QCT_9K_MODEM
+#define DUN_IFC_NUM 3
+
+ {
+ unsigned int iface_num;
+
+ iface_num = iface->cur_altsetting->desc.bInterfaceNumber;
+ pr_info("%s: iface number is %d\n", __func__, iface_num);
+
+ if (!(get_radio_flag() & RADIO_FLAG_DIAG_ENABLE) &&
+ iface_num == DUN_IFC_NUM &&
+ board_mfg_mode() == BOARD_MFG_MODE_MFGKERNEL) {
+ pr_info("%s: DUN channel not enumerated as a bridge interface; it may be switched to a TTY interface\n",
+ __func__);
+ status = -ENODEV;
+ goto out;
+ }
+ }
+#endif
+
+ numends = iface->cur_altsetting->desc.bNumEndpoints;
+ for (i = 0; i < numends; i++) {
+ endpoint = iface->cur_altsetting->endpoint + i;
+ if (!endpoint) {
+ dev_err(&iface->dev, "%s: invalid endpoint %u\n",
+ __func__, i);
+ status = -EINVAL;
+ goto out;
+ }
+
+ if (usb_endpoint_is_bulk_in(&endpoint->desc))
+ bulk_in = endpoint;
+ else if (usb_endpoint_is_bulk_out(&endpoint->desc))
+ bulk_out = endpoint;
+ else if (usb_endpoint_is_int_in(&endpoint->desc)) {
+ if (int_in)
+ data_int_in = endpoint;
+ else
+ int_in = endpoint;
+ }
+ }
+ if (((numends == 3)
+ && ((!bulk_in && !data_int_in) || !bulk_out || !int_in))
+ || ((numends == 1) && !bulk_in)) {
+ dev_err(&iface->dev, "%s: invalid endpoints\n", __func__);
+ status = -EINVAL;
+ goto out;
+ }
+
+ ch_id = get_bridge_dev_idx();
+ if (ch_id < 0) {
+ pr_err("%s: all bridge channels claimed, probe failed\n",
+ __func__);
+ status = -ENODEV;
+ goto out;
+ }
+ if (data_int_in) {
+ __dev[ch_id]->use_int_in_pipe = true;
+ __dev[ch_id]->period = data_int_in->desc.bInterval;
+ status = data_bridge_probe(iface, data_int_in, bulk_out,
+ bname[BRIDGE_DATA_IDX], ch_id);
+ } else {
+ status = data_bridge_probe(iface, bulk_in, bulk_out,
+ bname[BRIDGE_DATA_IDX], ch_id);
+ }
+ if (status < 0) {
+ dev_err(&iface->dev, "data_bridge_probe failed %d\n", status);
+ goto out;
+ }
+
+ status = ctrl_bridge_probe(iface,
+ int_in,
+ bname[BRIDGE_CTRL_IDX],
+ ch_id);
+ if (status < 0) {
+ dev_err(&iface->dev, "ctrl_bridge_probe failed %d\n",
+ status);
+ goto error;
+ }
+ return 0;
+
+error:
+ platform_device_unregister(__dev[ch_id]->pdev);
+ free_rx_urbs(__dev[ch_id]);
+ usb_set_intfdata(iface, NULL);
+out:
+ usb_put_dev(udev);
+
+ return status;
+}
+
+static void bridge_disconnect(struct usb_interface *intf)
+{
+ struct data_bridge *dev = usb_get_intfdata(intf);
+
+ if (!dev) {
+ pr_err("%s: data device not found\n", __func__);
+ return;
+ }
+
+ /* set the device name to "none" so that the correct channel
+ * id is found at the time of bridge open
+ */
+ dev->name = "none";
+
+ ctrl_bridge_disconnect(dev->id);
+ platform_device_unregister(dev->pdev);
+ usb_set_intfdata(intf, NULL);
+
+ free_rx_urbs(dev);
+
+ usb_put_dev(dev->udev);
+
+ clear_bit(CLAIMED, &dev->flags);
+}
+
+/* driver_info stores the data/ctrl bridge names used to match the bridge xport name */
+static const struct usb_device_id bridge_ids[] = {
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x9001, 2),
+ .driver_info = (unsigned long)serial_hsic_bridge_names,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x9001, 3),
+ .driver_info = (unsigned long)rmnet_hsic_bridge_names,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x9034, 2),
+ .driver_info = (unsigned long)serial_hsic_bridge_names,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x9034, 3),
+ .driver_info = (unsigned long)rmnet_hsic_bridge_names,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x9048, 3),
+ .driver_info = (unsigned long)serial_hsic_bridge_names,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x9048, 4),
+ .driver_info = (unsigned long)rmnet_hsic_bridge_names,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x904c, 3),
+ .driver_info = (unsigned long)serial_hsic_bridge_names,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x904c, 5),
+ .driver_info = (unsigned long)rmnet_hsic_bridge_names,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x9075, 3),
+ .driver_info = (unsigned long)serial_hsic_bridge_names,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x9075, 5),
+ .driver_info = (unsigned long)rmnet_hsic_bridge_names,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x9079, 3),
+ .driver_info = (unsigned long)serial_hsusb_bridge_names,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x9079, 4),
+ .driver_info = (unsigned long)rmnet_hsusb_bridge_names,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x908A, 3),
+ .driver_info = (unsigned long)serial_hsic_bridge_names,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x908A, 5),
+ .driver_info = (unsigned long)rmnet_hsic_bridge_names,
+ },
+ /* this PID supports QDSS-MDM trace */
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x908E, 4),
+ .driver_info = (unsigned long)qdss_hsic_bridge_names,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x908E, 5),
+ .driver_info = (unsigned long)serial_hsic_bridge_names,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x908E, 7),
+ .driver_info = (unsigned long)rmnet_hsic_bridge_names,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x909C, 3),
+ .driver_info = (unsigned long)serial_hsic_bridge_names,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x909D, 3),
+ .driver_info = (unsigned long)serial_hsic_bridge_names,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x909E, 5),
+ .driver_info = (unsigned long)serial_hsic_bridge_names,
+ },
+ /* this PID supports QDSS-MDM trace */
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x909E, 4),
+ .driver_info = (unsigned long)qdss_hsic_bridge_names,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x90A0, 3),
+ .driver_info = (unsigned long)serial_hsic_bridge_names,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x90A0, 5),
+ .driver_info = (unsigned long)rmnet_hsic_bridge_names,
+ },
+ /* this PID supports QDSS-MDM trace */
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x90A4, 4),
+ .driver_info = (unsigned long)qdss_hsic_bridge_names,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x90A4, 5),
+ .driver_info = (unsigned long)serial_hsic_bridge_names,
+ },
+ { USB_DEVICE_INTERFACE_NUMBER(0x5c6, 0x90A4, 7),
+ .driver_info = (unsigned long)rmnet_hsic_bridge_names,
+ },
+
+
+ { } /* Terminating entry */
+};
+MODULE_DEVICE_TABLE(usb, bridge_ids);
+
+static struct usb_driver bridge_driver = {
+ .name = "mdm_bridge",
+ .probe = bridge_probe,
+ .disconnect = bridge_disconnect,
+ .id_table = bridge_ids,
+ .suspend = bridge_suspend,
+ .resume = bridge_resume,
+ .reset_resume = bridge_resume,
+ .supports_autosuspend = 1,
+};
+
+static int __init bridge_init(void)
+{
+ struct data_bridge *dev;
+ int ret;
+ int i = 0;
+
+#ifdef CONFIG_QCT_9K_MODEM
+ if (!is_mdm_modem())
+ return 0;
+#endif
+
+ ret = ctrl_bridge_init();
+ if (ret)
+ return ret;
+
+ bridge_wq = create_singlethread_workqueue("mdm_bridge");
+ if (!bridge_wq) {
+ pr_err("%s: Unable to create workqueue:bridge\n", __func__);
+ ret = -ENOMEM;
+ goto free_ctrl;
+ }
+
+ for (i = 0; i < MAX_BRIDGE_DEVICES; i++) {
+
+ dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+ if (!dev) {
+ pr_err("%s: unable to allocate dev\n", __func__);
+ ret = -ENOMEM;
+ goto error;
+ }
+
+ dev->wq = bridge_wq;
+
+ /* transport name will be set during probe */
+ dev->name = "none";
+
+ init_usb_anchor(&dev->tx_active);
+ init_usb_anchor(&dev->rx_active);
+
+ INIT_LIST_HEAD(&dev->rx_idle);
+
+ skb_queue_head_init(&dev->rx_done);
+
+ INIT_WORK(&dev->kevent, defer_kevent);
+ INIT_WORK(&dev->process_rx_w, data_bridge_process_rx);
+
+ __dev[i] = dev;
+ }
+
+ ret = usb_register(&bridge_driver);
+ if (ret) {
+ pr_err("%s: unable to register mdm_bridge driver\n", __func__);
+ goto error;
+ }
+
+ data_bridge_debugfs_init();
+
+ return 0;
+
+error:
+ while (--i >= 0) {
+ kfree(__dev[i]);
+ __dev[i] = NULL;
+ }
+ destroy_workqueue(bridge_wq);
+free_ctrl:
+ ctrl_bridge_exit();
+ return ret;
+}
+
+static void __exit bridge_exit(void)
+{
+ int i;
+
+#ifdef CONFIG_QCT_9K_MODEM
+ if (!is_mdm_modem())
+ return;
+#endif
+
+ usb_deregister(&bridge_driver);
+ data_bridge_debugfs_exit();
+ destroy_workqueue(bridge_wq);
+
+ for (i = 0; i < MAX_BRIDGE_DEVICES; i++) {
+ kfree(__dev[i]);
+ __dev[i] = NULL;
+ }
+
+ ctrl_bridge_exit();
+}
+
+module_init(bridge_init);
+module_exit(bridge_exit);
+
+MODULE_DESCRIPTION("Qualcomm modem data bridge driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/usb/phy/tegra-otg.c b/drivers/usb/phy/tegra-otg.c
index 0a59e22..32738ab 100644
--- a/drivers/usb/phy/tegra-otg.c
+++ b/drivers/usb/phy/tegra-otg.c
@@ -60,7 +60,7 @@
#define DBG(stuff...) do {} while (0)
#endif
-#define YCABLE_CHARGING_CURRENT_UA 500000u
+#define YCABLE_CHARGING_CURRENT_UA 1200000u
struct tegra_otg {
struct platform_device *pdev;
@@ -1025,10 +1025,6 @@
clk_disable_unprepare(tegra->clk);
pm_runtime_put_sync(dev);
- /* suspend peripheral mode, host mode is taken care by host driver */
- if (from == OTG_STATE_B_PERIPHERAL)
- tegra_change_otg_state(tegra, OTG_STATE_A_SUSPEND);
-
if (from == OTG_STATE_A_HOST && tegra->turn_off_vbus_on_lp0)
tegra_otg_vbus_enable(tegra->vbus_reg, 0);
diff --git a/drivers/usb/phy/tegra11x_usb_phy.c b/drivers/usb/phy/tegra11x_usb_phy.c
index 33fb6bb..f381516 100644
--- a/drivers/usb/phy/tegra11x_usb_phy.c
+++ b/drivers/usb/phy/tegra11x_usb_phy.c
@@ -29,6 +29,7 @@
#include <linux/clk/tegra.h>
#include <linux/tegra-soc.h>
#include <linux/tegra-fuse.h>
+#include <linux/moduleparam.h>
#include <mach/pinmux.h>
#include <mach/tegra_usb_pmc.h>
#include <mach/tegra_usb_pad_ctrl.h>
@@ -112,7 +113,7 @@
#define UTMIP_XCVR_SETUP_MSB(x) (((x) & 0x7) << 22)
#define UTMIP_XCVR_HSSLEW_MSB(x) (((x) & 0x7f) << 25)
#define UTMIP_XCVR_HSSLEW_LSB(x) (((x) & 0x3) << 4)
-#define UTMIP_XCVR_MAX_OFFSET 2
+#define UTMIP_XCVR_MAX_OFFSET 3
#define UTMIP_XCVR_SETUP_MAX_VALUE 0x7f
#define UTMIP_XCVR_SETUP_MIN_VALUE 0
#define XCVR_SETUP_MSB_CALIB(x) ((x) >> 4)
@@ -293,6 +294,10 @@
#define HSIC_ELASTIC_UNDERRUN_LIMIT 16
#define HSIC_ELASTIC_OVERRUN_LIMIT 16
+static int dynamic_utmi_xcvr_setup = -1;
+module_param(dynamic_utmi_xcvr_setup, int, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(dynamic_utmi_xcvr_setup, "dynamically set the UTMI XCVR setup value");
+
struct tegra_usb_pmc_data pmc_data[3];
static struct tegra_xtal_freq utmip_freq_table[] = {
@@ -1084,11 +1089,19 @@
utmi_phy_pad_enable();
+ val = readl(base + UTMIP_SPARE_CFG0);
+ val &= ~FUSE_SETUP_SEL;
+ val |= FUSE_ATERM_SEL;
+ writel(val, base + UTMIP_SPARE_CFG0);
+
val = readl(base + UTMIP_XCVR_CFG0);
val &= ~(UTMIP_XCVR_LSBIAS_SEL | UTMIP_FORCE_PD_POWERDOWN |
UTMIP_FORCE_PD2_POWERDOWN | UTMIP_FORCE_PDZI_POWERDOWN |
UTMIP_XCVR_SETUP(~0) | UTMIP_XCVR_LSFSLEW(~0) |
UTMIP_XCVR_LSRSLEW(~0) | UTMIP_XCVR_HSSLEW_MSB(~0));
+ /* valid utmi_xcvr_setup range is 0 to 127 */
+ if (dynamic_utmi_xcvr_setup >= 0 && dynamic_utmi_xcvr_setup < 128)
+ phy->utmi_xcvr_setup = dynamic_utmi_xcvr_setup;
val |= UTMIP_XCVR_SETUP(phy->utmi_xcvr_setup);
val |= UTMIP_XCVR_SETUP_MSB(XCVR_SETUP_MSB_CALIB(phy->utmi_xcvr_setup));
val |= UTMIP_XCVR_LSFSLEW(config->xcvr_lsfslew);
@@ -1101,6 +1114,8 @@
}
if (config->xcvr_hsslew_lsb)
val |= UTMIP_XCVR_HSSLEW_LSB(config->xcvr_hsslew_lsb);
+ pr_info("[USB] %s UTMIP_XCVR_CFG0:%lx xcvr_use_fuses:%d utmi_xcvr_setup:%lx\n",
+ __func__, val, config->xcvr_use_fuses, phy->utmi_xcvr_setup);
writel(val, base + UTMIP_XCVR_CFG0);
val = readl(base + UTMIP_XCVR_CFG1);
@@ -1116,11 +1131,6 @@
writel(val, base + UTMIP_BIAS_CFG1);
}
- val = readl(base + UTMIP_SPARE_CFG0);
- val &= ~FUSE_SETUP_SEL;
- val |= FUSE_ATERM_SEL;
- writel(val, base + UTMIP_SPARE_CFG0);
-
val = readl(base + USB_SUSP_CTRL);
val |= UTMIP_PHY_ENABLE;
writel(val, base + USB_SUSP_CTRL);
diff --git a/drivers/video/adf/adf_fbdev.c b/drivers/video/adf/adf_fbdev.c
index a5b53bc..e428132 100644
--- a/drivers/video/adf/adf_fbdev.c
+++ b/drivers/video/adf/adf_fbdev.c
@@ -348,6 +348,8 @@
kfree(modelist);
}
+static bool fbdev_opened_once;
+
/**
* adf_fbdev_open - default implementation of fbdev open op
*/
@@ -384,11 +386,15 @@
adf_fbdev_fill_modelist(fbdev);
}
- ret = adf_fbdev_post(fbdev);
- if (ret < 0) {
- if (!fbdev->refcount)
- adf_fb_destroy(fbdev);
- goto done;
+ if (!fbdev_opened_once) {
+ fbdev_opened_once = true;
+ } else {
+ ret = adf_fbdev_post(fbdev);
+ if (ret < 0) {
+ if (!fbdev->refcount)
+ adf_fb_destroy(fbdev);
+ goto done;
+ }
}
fbdev->refcount++;
diff --git a/drivers/video/backlight/tegra_dsi_bl.c b/drivers/video/backlight/tegra_dsi_bl.c
index 2b57269..10494bb 100644
--- a/drivers/video/backlight/tegra_dsi_bl.c
+++ b/drivers/video/backlight/tegra_dsi_bl.c
@@ -68,9 +68,12 @@
struct tegra_dsi_bl_data *tbl = dev_get_drvdata(&bl->dev);
int brightness = bl->props.brightness;
- if (tbl)
+ if (tbl) {
+ if (tbl->notify)
+ brightness = tbl->notify(tbl->dev, brightness);
+
return send_backlight_cmd(tbl, brightness);
- else
+ } else
return dev_err(&bl->dev,
"tegra display controller not available\n");
}
@@ -146,17 +149,5 @@
};
module_platform_driver(tegra_dsi_bl_driver);
-static int __init tegra_dsi_bl_init(void)
-{
- return platform_driver_register(&tegra_dsi_bl_driver);
-}
-late_initcall(tegra_dsi_bl_init);
-
-static void __exit tegra_dsi_bl_exit(void)
-{
- platform_driver_unregister(&tegra_dsi_bl_driver);
-}
-module_exit(tegra_dsi_bl_exit);
-
MODULE_DESCRIPTION("Tegra DSI Backlight Driver");
MODULE_LICENSE("GPL");
diff --git a/drivers/video/tegra/dc/dc.c b/drivers/video/tegra/dc/dc.c
index 1a8e12d..62bee4d 100644
--- a/drivers/video/tegra/dc/dc.c
+++ b/drivers/video/tegra/dc/dc.c
@@ -1664,6 +1664,43 @@
return ret;
}
+static int _tegra_dc_config_frame_end_intr(struct tegra_dc *dc, bool enable)
+{
+ tegra_dc_io_start(dc);
+ if (enable) {
+ atomic_inc(&dc->frame_end_ref);
+ tegra_dc_unmask_interrupt(dc, FRAME_END_INT);
+ } else if (!atomic_dec_return(&dc->frame_end_ref))
+ tegra_dc_mask_interrupt(dc, FRAME_END_INT);
+ tegra_dc_io_end(dc);
+
+ return 0;
+}
+
+int _tegra_dc_wait_for_frame_end(struct tegra_dc *dc,
+ u32 timeout_ms)
+{
+ int ret;
+
+ INIT_COMPLETION(dc->frame_end_complete);
+
+ tegra_dc_get(dc);
+
+ tegra_dc_flush_interrupt(dc, FRAME_END_INT);
+ /* unmask frame end interrupt */
+ _tegra_dc_config_frame_end_intr(dc, true);
+
+ ret = wait_for_completion_interruptible_timeout(
+ &dc->frame_end_complete,
+ msecs_to_jiffies(timeout_ms));
+
+ _tegra_dc_config_frame_end_intr(dc, false);
+
+ tegra_dc_put(dc);
+
+ return ret;
+}
+
static void tegra_dc_prism_update_backlight(struct tegra_dc *dc)
{
/* Do the actual brightness update outside of the mutex dc->lock */
@@ -1905,6 +1942,17 @@
}
}
+int tegra_dc_config_frame_end_intr(struct tegra_dc *dc, bool enable)
+{
+ int ret;
+
+ mutex_lock(&dc->lock);
+ ret = _tegra_dc_config_frame_end_intr(dc, enable);
+ mutex_unlock(&dc->lock);
+
+ return ret;
+}
+
static void tegra_dc_one_shot_irq(struct tegra_dc *dc, unsigned long status,
ktime_t timestamp)
{
diff --git a/drivers/video/tegra/dc/dc_priv.h b/drivers/video/tegra/dc/dc_priv.h
index db1f6f6..36fc518 100644
--- a/drivers/video/tegra/dc/dc_priv.h
+++ b/drivers/video/tegra/dc/dc_priv.h
@@ -402,6 +402,13 @@
u32 reg, u32 mask, u32 exp_val, u32 poll_interval_us,
u32 timeout_ms);
+/* defined in dc.c, used in ext/dev.c */
+int tegra_dc_config_frame_end_intr(struct tegra_dc *dc, bool enable);
+
+/* defined in dc.c, used in dsi.c */
+int _tegra_dc_wait_for_frame_end(struct tegra_dc *dc,
+ u32 timeout_ms);
+
/* defined in bandwidth.c, used in dc.c */
void tegra_dc_clear_bandwidth(struct tegra_dc *dc);
void tegra_dc_program_bandwidth(struct tegra_dc *dc, bool use_new);
diff --git a/drivers/video/tegra/dc/dsi.c b/drivers/video/tegra/dc/dsi.c
index 0800486..17ff413 100644
--- a/drivers/video/tegra/dc/dsi.c
+++ b/drivers/video/tegra/dc/dsi.c
@@ -1729,7 +1729,6 @@
struct tegra_dc_dsi_data *dsi,
u32 timeout_n_frames)
{
- int val;
long timeout;
u32 frame_period = DIV_ROUND_UP(S_TO_MS(1), dsi->info.refresh_rate);
struct tegra_dc_mode mode = dc->mode;
@@ -1742,26 +1741,12 @@
dev_WARN(&dc->ndev->dev,
"dsi: to stop at next frame give at least 2 frame delay\n");
- INIT_COMPLETION(dc->frame_end_complete);
-
- tegra_dc_get(dc);
-
- tegra_dc_flush_interrupt(dc, FRAME_END_INT);
- /* unmask frame end interrupt */
- val = tegra_dc_unmask_interrupt(dc, FRAME_END_INT);
-
- timeout = wait_for_completion_interruptible_timeout(
- &dc->frame_end_complete,
- msecs_to_jiffies(timeout_n_frames * frame_period));
-
- /* reinstate interrupt mask */
- tegra_dc_writel(dc, val, DC_CMD_INT_MASK);
+ timeout = _tegra_dc_wait_for_frame_end(dc, timeout_n_frames *
+ frame_period);
/* wait for v_ref_to_sync no. of lines after frame end interrupt */
udelay(mode.v_ref_to_sync * line_period);
- tegra_dc_put(dc);
-
return timeout;
}
@@ -2645,8 +2630,7 @@
err = regulator_enable(dsi->avdd_dsi_csi);
if (WARN(err, "unable to enable regulator"))
return err;
- /* stablization delay */
- mdelay(50);
+
/* Enable DSI clocks */
tegra_dsi_clk_enable(dsi);
tegra_dsi_set_dsi_clk(dc, dsi, dsi->target_lp_clk_khz);
@@ -2655,7 +2639,8 @@
* to avoid visible glitches on panel during transition
* from bootloader to kernel driver
*/
- tegra_dsi_stop_dc_stream_at_frame_end(dc, dsi, 2);
+ if (dsi->status.dc_stream == DSI_DC_STREAM_ENABLE)
+ tegra_dsi_stop_dc_stream_at_frame_end(dc, dsi, 2);
tegra_dsi_writel(dsi,
DSI_POWER_CONTROL_LEG_DSI_ENABLE(TEGRA_DSI_DISABLE),
diff --git a/drivers/video/tegra/dc/window.c b/drivers/video/tegra/dc/window.c
index ef8dd91..c7b44b1 100644
--- a/drivers/video/tegra/dc/window.c
+++ b/drivers/video/tegra/dc/window.c
@@ -49,21 +49,6 @@
return true;
}
-int tegra_dc_config_frame_end_intr(struct tegra_dc *dc, bool enable)
-{
-
- mutex_lock(&dc->lock);
- tegra_dc_io_start(dc);
- if (enable) {
- atomic_inc(&dc->frame_end_ref);
- tegra_dc_unmask_interrupt(dc, FRAME_END_INT);
- } else if (!atomic_dec_return(&dc->frame_end_ref))
- tegra_dc_mask_interrupt(dc, FRAME_END_INT);
- tegra_dc_io_end(dc);
- mutex_unlock(&dc->lock);
- return 0;
-}
-
static int get_topmost_window(u32 *depths, unsigned long *wins, int win_num)
{
int idx, best = -1;
diff --git a/drivers/video/tegra/host/bus_client.c b/drivers/video/tegra/host/bus_client.c
index 819f0ce..6343843 100644
--- a/drivers/video/tegra/host/bus_client.c
+++ b/drivers/video/tegra/host/bus_client.c
@@ -202,7 +202,7 @@
nvhost_job_put(priv->job);
mutex_unlock(&channel_lock);
- nvhost_putchannel(priv->ch);
+ nvhost_putchannel(priv->ch, 1);
kfree(priv);
return 0;
}
@@ -220,7 +220,8 @@
struct nvhost_device_data, cdev);
ret = nvhost_channel_map(pdata, &ch);
if (ret) {
- pr_err("%s: failed to map channel\n", __func__);
+ pr_err("%s: failed to map channel, error: %d\n",
+ __func__, ret);
return ret;
}
} else {
@@ -244,7 +245,7 @@
priv = kzalloc(sizeof(*priv), GFP_KERNEL);
if (!priv) {
- nvhost_putchannel(ch);
+ nvhost_putchannel(ch, 1);
goto fail;
}
filp->private_data = priv;
diff --git a/drivers/video/tegra/host/debug.c b/drivers/video/tegra/host/debug.c
index 900d30a..eb33946 100644
--- a/drivers/video/tegra/host/debug.c
+++ b/drivers/video/tegra/host/debug.c
@@ -56,7 +56,7 @@
struct output *o = data;
struct nvhost_master *m;
struct nvhost_device_data *pdata;
- int index;
+ int index, locked;
if (pdev == NULL)
return 0;
@@ -64,6 +64,13 @@
m = nvhost_get_host(pdev);
pdata = platform_get_drvdata(pdev);
+ /* acquire lock to prevent channel modifications */
+ locked = mutex_trylock(&m->chlist_mutex);
+ if (!locked) {
+ nvhost_debug_output(o, "unable to lock channel list\n");
+ return 0;
+ }
+
for (index = 0; index < pdata->num_channels; index++) {
ch = pdata->channels[index];
if (!ch || !ch->dev) {
@@ -71,19 +78,27 @@
index + 1, pdev->name);
continue;
}
- nvhost_getchannel(ch);
- if (ch->chid != locked_id)
- mutex_lock(&ch->cdma.lock);
+
+ /* take the cdma lock unless the caller already holds it */
+ locked = mutex_trylock(&ch->cdma.lock);
+ if (!(locked || ch->chid == locked_id)) {
+ nvhost_debug_output(o, "failed to lock channel %d cdma\n",
+ ch->chid);
+ continue;
+ }
+
if (fifo)
nvhost_get_chip_ops()->debug.show_channel_fifo(
m, ch, o, ch->chid);
nvhost_get_chip_ops()->debug.show_channel_cdma(
m, ch, o, ch->chid);
+
if (ch->chid != locked_id)
mutex_unlock(&ch->cdma.lock);
- nvhost_putchannel(ch);
}
+ mutex_unlock(&m->chlist_mutex);
+
return 0;
}
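The debug dump above switches from `mutex_lock()` to `mutex_trylock()` so a wedged channel can no longer hang the dump: if the cdma lock cannot be taken, the channel is skipped and reported. A userspace sketch of the same skip-if-busy pattern, using `pthread_mutex_trylock` in place of the kernel mutex (names are illustrative):

```c
#include <assert.h>
#include <pthread.h>
#include <stdio.h>

/* Sketch of the trylock-based debug dump: rather than blocking on a
 * busy channel, skip it and report the failure. */
static int dump_channel(pthread_mutex_t *cdma_lock, int chid)
{
    if (pthread_mutex_trylock(cdma_lock) != 0) {
        printf("failed to lock channel %d cdma\n", chid);
        return -1;                  /* skip rather than deadlock */
    }
    printf("channel %d state ...\n", chid);
    pthread_mutex_unlock(cdma_lock);
    return 0;
}
```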
diff --git a/drivers/video/tegra/host/flcn/flcn.c b/drivers/video/tegra/host/flcn/flcn.c
index 9943ecf..2495003 100644
--- a/drivers/video/tegra/host/flcn/flcn.c
+++ b/drivers/video/tegra/host/flcn/flcn.c
@@ -397,9 +397,6 @@
err = nvhost_flcn_boot(dev);
nvhost_module_idle(dev);
- if (pdata->scaling_init)
- nvhost_scale_hw_init(dev);
-
return 0;
clean_up:
@@ -410,7 +407,6 @@
void nvhost_flcn_deinit(struct platform_device *dev)
{
struct flcn *v = get_flcn(dev);
- struct nvhost_device_data *pdata = nvhost_get_devdata(dev);
DEFINE_DMA_ATTRS(attrs);
dma_set_attr(DMA_ATTR_READ_ONLY, &attrs);
@@ -418,9 +414,6 @@
if (!v)
return;
- if (pdata->scaling_init)
- nvhost_scale_hw_deinit(dev);
-
if (v->mapped) {
dma_free_attrs(&dev->dev,
v->size, v->mapped,
@@ -575,6 +568,9 @@
int nvhost_vic_finalize_poweron(struct platform_device *pdev)
{
+ struct nvhost_device_data *pdata = nvhost_get_devdata(pdev);
+ int err;
+
nvhost_dbg_fn("");
host1x_writel(pdev, flcn_slcg_override_high_a_r(), 0);
@@ -582,7 +578,19 @@
flcn_cg_idle_cg_dly_cnt_f(4) |
flcn_cg_idle_cg_en_f(1) |
flcn_cg_wakeup_dly_cnt_f(4));
- return nvhost_flcn_boot(pdev);
+
+ err = nvhost_flcn_boot(pdev);
+ if (err)
+ return err;
+
+ if (pdata->scaling_init) {
+ err = nvhost_scale_hw_init(pdev);
+ if (err)
+ dev_warn(&pdev->dev, "failed to initialize scaling (%d)\n",
+ err);
+ }
+
+ return 0;
}
int nvhost_vic_prepare_poweroff(struct platform_device *dev)
@@ -593,6 +601,9 @@
nvhost_dbg_fn("");
+ if (pdata->scaling_deinit)
+ nvhost_scale_hw_deinit(dev);
+
if (ch && ch->dev) {
mutex_lock(&ch->submitlock);
ch->cur_ctx = NULL;
diff --git a/drivers/video/tegra/host/host1x/host1x.h b/drivers/video/tegra/host/host1x/host1x.h
index d779116..13b974a 100644
--- a/drivers/video/tegra/host/host1x/host1x.h
+++ b/drivers/video/tegra/host/host1x/host1x.h
@@ -56,11 +56,10 @@
struct nvhost_capability_node *caps_nodes;
struct mutex timeout_mutex;
- struct nvhost_channel chlist; /* channel list */
+ struct nvhost_channel **chlist; /* channel list */
struct mutex chlist_mutex; /* mutex for channel list */
unsigned long allocated_channels;
unsigned long next_free_ch;
- int cnt_alloc_channels;
};
extern struct nvhost_master *nvhost;
diff --git a/drivers/video/tegra/host/host1x/host1x_channel.c b/drivers/video/tegra/host/host1x/host1x_channel.c
index 46f319e..011b523 100644
--- a/drivers/video/tegra/host/host1x/host1x_channel.c
+++ b/drivers/video/tegra/host/host1x/host1x_channel.c
@@ -318,7 +318,7 @@
err = mutex_lock_interruptible(&ch->submitlock);
if (err) {
nvhost_module_idle_mult(ch->dev, job->num_syncpts);
- nvhost_putchannel_mult(ch, job->num_syncpts);
+ nvhost_putchannel(ch, job->num_syncpts);
goto error;
}
@@ -326,7 +326,7 @@
completed_waiters[i] = nvhost_intr_alloc_waiter();
if (!completed_waiters[i]) {
nvhost_module_idle_mult(ch->dev, job->num_syncpts);
- nvhost_putchannel_mult(ch, job->num_syncpts);
+ nvhost_putchannel(ch, job->num_syncpts);
mutex_unlock(&ch->submitlock);
err = -ENOMEM;
goto error;
@@ -342,7 +342,7 @@
err = nvhost_cdma_begin(&ch->cdma, job);
if (err) {
nvhost_module_idle_mult(ch->dev, job->num_syncpts);
- nvhost_putchannel_mult(ch, job->num_syncpts);
+ nvhost_putchannel(ch, job->num_syncpts);
mutex_unlock(&ch->submitlock);
goto error;
}
@@ -526,7 +526,6 @@
static int host1x_channel_init(struct nvhost_channel *ch,
struct nvhost_master *dev)
{
- mutex_init(&ch->reflock);
mutex_init(&ch->submitlock);
ch->aperture = host1x_channel_aperture(dev->aperture, ch->chid);
diff --git a/drivers/video/tegra/host/nvhost_acm.c b/drivers/video/tegra/host/nvhost_acm.c
index 2695268..512d416 100644
--- a/drivers/video/tegra/host/nvhost_acm.c
+++ b/drivers/video/tegra/host/nvhost_acm.c
@@ -764,13 +764,17 @@
{
int index = 0;
struct nvhost_device_data *pdata;
+ struct nvhost_master *host = NULL;
pdata = dev_get_drvdata(dev);
if (!pdata)
return -EINVAL;
+ host = nvhost_get_host(pdata->pdev);
+
for (index = 0; index < pdata->num_channels; index++)
- if (pdata->channels[index])
+ if (pdata->channels[index] &&
+ test_bit(pdata->channels[index]->chid, &host->allocated_channels))
nvhost_channel_suspend(pdata->channels[index]);
for (index = 0; index < pdata->num_clks; index++)
diff --git a/drivers/video/tegra/host/nvhost_channel.c b/drivers/video/tegra/host/nvhost_channel.c
index 683865a..2bf9a57 100644
--- a/drivers/video/tegra/host/nvhost_channel.c
+++ b/drivers/video/tegra/host/nvhost_channel.c
@@ -35,14 +35,18 @@
/* Constructor for the host1x device list */
int nvhost_channel_list_init(struct nvhost_master *host)
{
- INIT_LIST_HEAD(&host->chlist.list);
- mutex_init(&host->chlist_mutex);
-
if (host->info.nb_channels > BITS_PER_LONG) {
WARN(1, "host1x hardware has more channels than supported\n");
return -ENOSYS;
}
+ host->chlist = kzalloc(host->info.nb_channels *
+ sizeof(struct nvhost_channel *), GFP_KERNEL);
+ if (host->chlist == NULL)
+ return -ENOMEM;
+
+ mutex_init(&host->chlist_mutex);
+
return 0;
}
@@ -50,44 +54,32 @@
int nvhost_alloc_channels(struct nvhost_master *host)
{
int max_channels = host->info.nb_channels;
- int i;
+ int i, err = 0;
struct nvhost_channel *ch;
- nvhost_channel_list_init(host);
- mutex_lock(&host->chlist_mutex);
+ err = nvhost_channel_list_init(host);
+ if (err) {
+ dev_err(&host->dev->dev, "failed to init channel list\n");
+ return err;
+ }
+ mutex_lock(&host->chlist_mutex);
for (i = 0; i < max_channels; i++) {
- ch = nvhost_alloc_channel_internal(i, max_channels,
- &host->cnt_alloc_channels);
+ ch = nvhost_alloc_channel_internal(i, max_channels);
if (!ch) {
+ dev_err(&host->dev->dev, "failed to alloc channels\n");
mutex_unlock(&host->chlist_mutex);
return -ENOMEM;
}
+ host->chlist[i] = ch;
ch->dev = NULL;
ch->chid = NVHOST_INVALID_CHANNEL;
-
- list_add_tail(&ch->list, &host->chlist.list);
}
mutex_unlock(&host->chlist_mutex);
return 0;
}
-/* Return N'th channel from list */
-struct nvhost_channel *nvhost_return_node(struct nvhost_master *host,
- int index)
-{
- int i = 0;
- struct nvhost_channel *ch = NULL;
-
- list_for_each_entry(ch, &host->chlist.list, list) {
- if (i == index)
- return ch;
- i++;
- }
- return NULL;
-}
-
/* return any one of assigned channel from device
* This API can be used to check if any channel assigned to device
*/
@@ -133,12 +125,12 @@
for (i = 0; i < pdata->num_channels; i++) {
ch = pdata->channels[i];
if (ch && ch->dev)
- nvhost_putchannel(ch);
+ nvhost_putchannel(ch, 1);
}
return 0;
}
/* Unmap channel from device and free all resources, deinit device */
-int nvhost_channel_unmap(struct nvhost_channel *ch)
+static int nvhost_channel_unmap_locked(struct nvhost_channel *ch)
{
struct nvhost_device_data *pdata;
struct nvhost_master *host;
@@ -153,12 +145,10 @@
pdata = platform_get_drvdata(ch->dev);
host = nvhost_get_host(pdata->pdev);
- mutex_lock(&host->chlist_mutex);
max_channels = host->info.nb_channels;
if (ch->chid == NVHOST_INVALID_CHANNEL) {
dev_err(&host->dev->dev, "Freeing un-mapped channel\n");
- mutex_unlock(&host->chlist_mutex);
return 0;
}
if (ch->error_notifier_ref)
@@ -203,10 +193,9 @@
ch->ctxhandler = NULL;
ch->cur_ctx = NULL;
ch->aperture = NULL;
+ ch->refcount = 0;
pdata->channels[ch->dev_chid] = NULL;
- mutex_unlock(&host->chlist_mutex);
-
return 0;
}
@@ -238,7 +227,7 @@
}
ch = nvhost_check_channel(pdata);
if (ch)
- nvhost_getchannel(ch);
+ ch->refcount++;
mutex_unlock(&host->chlist_mutex);
*channel = ch;
return 0;
@@ -261,32 +250,28 @@
}
/* Get channel from list and map to device */
- ch = nvhost_return_node(host, index);
- if (!ch) {
- dev_err(&host->dev->dev, "%s: No channel is free\n", __func__);
- mutex_unlock(&host->chlist_mutex);
- return -EBUSY;
- }
- if (ch->chid == NVHOST_INVALID_CHANNEL) {
- ch->dev = pdata->pdev;
- ch->chid = index;
- nvhost_channel_assign(pdata, ch);
- nvhost_set_chanops(ch);
- } else {
+ ch = host->chlist[index];
+ if (!ch || (ch->chid != NVHOST_INVALID_CHANNEL)) {
dev_err(&host->dev->dev, "%s: wrong channel map\n", __func__);
mutex_unlock(&host->chlist_mutex);
return -EINVAL;
}
+ ch->dev = pdata->pdev;
+ ch->chid = index;
+ nvhost_channel_assign(pdata, ch);
+ nvhost_set_chanops(ch);
+ ch->refcount = 1;
+
/* Initialize channel */
err = nvhost_channel_init(ch, host);
if (err) {
dev_err(&ch->dev->dev, "%s: channel init failed\n", __func__);
+ nvhost_channel_unmap_locked(ch);
mutex_unlock(&host->chlist_mutex);
- nvhost_channel_unmap(ch);
return err;
}
- nvhost_getchannel(ch);
+
set_bit(ch->chid, &host->allocated_channels);
/* set next free channel */
@@ -299,8 +284,8 @@
err = pdata->init(ch->dev);
if (err) {
dev_err(&ch->dev->dev, "device init failed\n");
+ nvhost_channel_unmap_locked(ch);
mutex_unlock(&host->chlist_mutex);
- nvhost_channel_unmap(ch);
return err;
}
}
@@ -319,14 +304,13 @@
/* Free channel memory and list */
int nvhost_channel_list_free(struct nvhost_master *host)
{
- struct nvhost_channel *ch = NULL;
+ int i;
- list_for_each_entry(ch, &host->chlist.list, list) {
- list_del(&ch->list);
- kfree(ch);
- }
+ for (i = 0; i < host->info.nb_channels; i++)
+ kfree(host->chlist[i]);
dev_info(&host->dev->dev, "channel list free'd\n");
+
return 0;
}
@@ -383,22 +367,28 @@
void nvhost_getchannel(struct nvhost_channel *ch)
{
- atomic_inc(&ch->refcount);
+ struct nvhost_device_data *pdata = platform_get_drvdata(ch->dev);
+ struct nvhost_master *host = nvhost_get_host(pdata->pdev);
+
+ mutex_lock(&host->chlist_mutex);
+ ch->refcount++;
+ mutex_unlock(&host->chlist_mutex);
}
-void nvhost_putchannel(struct nvhost_channel *ch)
+void nvhost_putchannel(struct nvhost_channel *ch, int cnt)
{
- if (!atomic_dec_if_positive(&ch->refcount))
- nvhost_channel_unmap(ch);
-}
+ struct nvhost_device_data *pdata = platform_get_drvdata(ch->dev);
+ struct nvhost_master *host = nvhost_get_host(pdata->pdev);
+ mutex_lock(&host->chlist_mutex);
+ ch->refcount -= cnt;
-void nvhost_putchannel_mult(struct nvhost_channel *ch, int cnt)
-{
- int i;
-
- for (i = 0; i < cnt; i++)
- nvhost_putchannel(ch);
+ /* unmap the channel when the refcount drops to zero; WARN if negative */
+ if (!ch->refcount)
+ nvhost_channel_unmap_locked(ch);
+ else if (ch->refcount < 0)
+ WARN_ON(1);
+ mutex_unlock(&host->chlist_mutex);
}
int nvhost_channel_suspend(struct nvhost_channel *ch)
@@ -412,30 +402,15 @@
}
struct nvhost_channel *nvhost_alloc_channel_internal(int chindex,
- int max_channels, int *current_channel_count)
+ int max_channels)
{
struct nvhost_channel *ch = NULL;
- if ( (chindex > max_channels) ||
- ( (*current_channel_count + 1) > max_channels) )
- return NULL;
- else {
- ch = kzalloc(sizeof(*ch), GFP_KERNEL);
- if (ch == NULL)
- return NULL;
- else {
- ch->chid = *current_channel_count;
- (*current_channel_count)++;
- return ch;
- }
- }
-}
+ ch = kzalloc(sizeof(*ch), GFP_KERNEL);
+ if (ch)
+ ch->chid = chindex;
-void nvhost_free_channel_internal(struct nvhost_channel *ch,
- int *current_channel_count)
-{
- kfree(ch);
- (*current_channel_count)--;
+ return ch;
}
int nvhost_channel_save_context(struct nvhost_channel *ch)
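The channel refcount moves from an `atomic_t` to a plain `int` guarded by the host's `chlist_mutex`, and `nvhost_putchannel()` now takes a count so the separate `_mult` variant can go away; the channel is unmapped under the same lock when the count hits zero. A userspace sketch of that scheme, with a pthread mutex standing in for `chlist_mutex` and illustrative types:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Sketch of the mutex-guarded counted get/put: a plain int refcount,
 * a counted put, and unmap-at-zero under the list lock. */
struct fake_channel {
    int refcount;
    bool mapped;
};

static pthread_mutex_t chlist_mutex = PTHREAD_MUTEX_INITIALIZER;

static void getchannel(struct fake_channel *ch)
{
    pthread_mutex_lock(&chlist_mutex);
    ch->refcount++;
    pthread_mutex_unlock(&chlist_mutex);
}

static void putchannel(struct fake_channel *ch, int cnt)
{
    pthread_mutex_lock(&chlist_mutex);
    ch->refcount -= cnt;
    if (ch->refcount == 0)
        ch->mapped = false;          /* stands in for unmap_locked() */
    assert(ch->refcount >= 0);       /* a negative count is a bug */
    pthread_mutex_unlock(&chlist_mutex);
}
```

A counted put of N balances N gets in one lock round-trip, which is what lets the old `nvhost_putchannel_mult()` loop be deleted.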
diff --git a/drivers/video/tegra/host/nvhost_channel.h b/drivers/video/tegra/host/nvhost_channel.h
index baed282..026e756 100644
--- a/drivers/video/tegra/host/nvhost_channel.h
+++ b/drivers/video/tegra/host/nvhost_channel.h
@@ -47,11 +47,9 @@
struct nvhost_channel {
struct nvhost_channel_ops ops;
- atomic_t refcount;
+ int refcount;
int chid;
int dev_chid;
- u32 syncpt_id;
- struct mutex reflock;
struct mutex submitlock;
void __iomem *aperture;
struct nvhost_hwctx *cur_ctx;
@@ -64,8 +62,6 @@
* now just keep it here */
struct nvhost_as *as;
- struct list_head list;
-
* error notifiers used for channel submit timeout */
struct dma_buf *error_notifier_ref;
struct nvhost_notification *error_notifier;
@@ -88,8 +84,7 @@
void nvhost_free_error_notifiers(struct nvhost_channel *ch);
void nvhost_getchannel(struct nvhost_channel *ch);
-void nvhost_putchannel(struct nvhost_channel *ch);
-void nvhost_putchannel_mult(struct nvhost_channel *ch, int cnt);
+void nvhost_putchannel(struct nvhost_channel *ch, int cnt);
int nvhost_channel_suspend(struct nvhost_channel *ch);
int nvhost_channel_read_reg(struct nvhost_channel *channel,
@@ -97,10 +92,7 @@
u32 offset, u32 *value);
struct nvhost_channel *nvhost_alloc_channel_internal(int chindex,
- int max_channels, int *current_channel_count);
-
-void nvhost_free_channel_internal(struct nvhost_channel *ch,
- int *current_channel_count);
+ int max_channels);
int nvhost_channel_save_context(struct nvhost_channel *ch);
void nvhost_channel_init_gather_filter(struct nvhost_channel *ch);
diff --git a/drivers/video/tegra/host/nvhost_intr.c b/drivers/video/tegra/host/nvhost_intr.c
index 18b714c..b856f26 100644
--- a/drivers/video/tegra/host/nvhost_intr.c
+++ b/drivers/video/tegra/host/nvhost_intr.c
@@ -175,7 +175,7 @@
channel->cdma.med_prio_count,
channel->cdma.low_prio_count);
- nvhost_putchannel_mult(channel, nr_completed);
+ nvhost_putchannel(channel, nr_completed);
}
diff --git a/drivers/video/tegra/host/nvhost_scale.c b/drivers/video/tegra/host/nvhost_scale.c
index 51e9425..fc50938 100644
--- a/drivers/video/tegra/host/nvhost_scale.c
+++ b/drivers/video/tegra/host/nvhost_scale.c
@@ -404,7 +404,7 @@
struct nvhost_device_profile *profile = pdata->power_profile;
if (profile && profile->actmon)
- actmon_op().init(profile->actmon);
+ return actmon_op().init(profile->actmon);
return 0;
}
diff --git a/drivers/video/tegra/host/pod_scaling.c b/drivers/video/tegra/host/pod_scaling.c
index 6489895..872718b 100644
--- a/drivers/video/tegra/host/pod_scaling.c
+++ b/drivers/video/tegra/host/pod_scaling.c
@@ -840,13 +840,13 @@
if (!strcmp(d->name, "vic03.0")) {
podgov->p_load_max = 990;
- podgov->p_load_target = 800;
+ podgov->p_load_target = 400;
podgov->p_bias = 80;
podgov->p_hint_lo_limit = 500;
podgov->p_hint_hi_limit = 997;
podgov->p_scaleup_limit = 1100;
podgov->p_scaledown_limit = 1300;
- podgov->p_smooth = 10;
+ podgov->p_smooth = 30;
podgov->p_damp = 7;
} else {
switch (cid) {
diff --git a/drivers/video/tegra/nvmap/nvmap_dev.c b/drivers/video/tegra/nvmap/nvmap_dev.c
index 6720269..fae6d3b 100644
--- a/drivers/video/tegra/nvmap/nvmap_dev.c
+++ b/drivers/video/tegra/nvmap/nvmap_dev.c
@@ -613,14 +613,14 @@
switch (cmd) {
case NVMAP_IOC_CREATE:
- case NVMAP_IOC_FROM_ID:
case NVMAP_IOC_FROM_FD:
err = nvmap_ioctl_create(filp, cmd, uarg);
break;
+ case NVMAP_IOC_FROM_ID:
case NVMAP_IOC_GET_ID:
- err = nvmap_ioctl_getid(filp, uarg);
- break;
+ pr_warn_once("nvmap: unsupported FROM_ID/GET_ID IOCTLs used.\n");
+ return -ENOTTY;
case NVMAP_IOC_GET_FD:
err = nvmap_ioctl_getfd(filp, uarg);
diff --git a/drivers/video/tegra/nvmap/nvmap_dmabuf.c b/drivers/video/tegra/nvmap/nvmap_dmabuf.c
index 162e434..e0ade75 100644
--- a/drivers/video/tegra/nvmap/nvmap_dmabuf.c
+++ b/drivers/video/tegra/nvmap/nvmap_dmabuf.c
@@ -684,6 +684,11 @@
/*
* Returns the nvmap handle ID associated with the passed dma_buf's fd. This
* does not affect the ref count of the dma_buf.
+ * NOTE: Callers must invoke nvmap_handle_put() on the returned nvmap_handle
+ * once they are done with it. This function takes a reference via
+ * nvmap_handle_get() so the handle cannot be freed concurrently while the
+ * caller is still using it.
*/
struct nvmap_handle *nvmap_get_id_from_dmabuf_fd(struct nvmap_client *client,
int fd)
@@ -698,6 +703,8 @@
if (dmabuf->ops == &nvmap_dma_buf_ops) {
info = dmabuf->priv;
handle = info->handle;
+ if (!nvmap_handle_get(handle))
+ handle = ERR_PTR(-EINVAL);
}
dma_buf_put(dmabuf);
return handle;
@@ -714,11 +721,12 @@
if (copy_from_user(&op, (void __user *)arg, sizeof(op)))
return -EFAULT;
- handle = unmarshal_user_id(op.id);
+ handle = unmarshal_user_handle(op.id);
if (!handle)
return -EINVAL;
op.fd = nvmap_get_dmabuf_fd(client, handle);
+ nvmap_handle_put(handle);
if (op.fd < 0)
return op.fd;
diff --git a/drivers/video/tegra/nvmap/nvmap_handle.c b/drivers/video/tegra/nvmap/nvmap_handle.c
index 8774865..f7646ae 100644
--- a/drivers/video/tegra/nvmap/nvmap_handle.c
+++ b/drivers/video/tegra/nvmap/nvmap_handle.c
@@ -428,7 +428,11 @@
void nvmap_free_handle_user_id(struct nvmap_client *client,
unsigned long user_id)
{
- nvmap_free_handle(client, unmarshal_user_id(user_id));
+ struct nvmap_handle *handle = unmarshal_user_handle(user_id);
+ if (handle) {
+ nvmap_free_handle(client, handle);
+ nvmap_handle_put(handle);
+ }
}
static void add_handle_ref(struct nvmap_client *client,
@@ -599,6 +603,7 @@
if (IS_ERR(handle))
return ERR_CAST(handle);
ref = nvmap_duplicate_handle(client, handle, 1);
+ nvmap_handle_put(handle);
return ref;
}
diff --git a/drivers/video/tegra/nvmap/nvmap_ioctl.c b/drivers/video/tegra/nvmap/nvmap_ioctl.c
index 33b7c3f..1963d51 100644
--- a/drivers/video/tegra/nvmap/nvmap_ioctl.c
+++ b/drivers/video/tegra/nvmap/nvmap_ioctl.c
@@ -48,54 +48,19 @@
unsigned long sys_stride, unsigned long elem_size,
unsigned long count);
-static struct nvmap_handle *fd_to_handle_id(int handle)
+/* NOTE: Callers of this utility function must invoke nvmap_handle_put after
+ * using the returned nvmap_handle.
+ */
+struct nvmap_handle *unmarshal_user_handle(__u32 handle)
{
struct nvmap_handle *h;
- h = nvmap_get_id_from_dmabuf_fd(NULL, handle);
+ h = nvmap_get_id_from_dmabuf_fd(NULL, (int)handle);
if (!IS_ERR(h))
return h;
return 0;
}
-static struct nvmap_handle *unmarshal_user_handle(__u32 handle)
-{
- return fd_to_handle_id((int)handle);
-}
-
-struct nvmap_handle *unmarshal_user_id(u32 id)
-{
- return unmarshal_user_handle(id);
-}
-
-/*
- * marshal_id/unmarshal_id are for get_id/handle_from_id.
- * These are added to support using Fd's for handle.
- */
-#ifdef CONFIG_ARM64
-static __u32 marshal_id(struct nvmap_handle *handle)
-{
- return (__u32)((uintptr_t)handle >> 2);
-}
-
-static struct nvmap_handle *unmarshal_id(__u32 id)
-{
- uintptr_t h = ((id << 2) | PAGE_OFFSET);
-
- return (struct nvmap_handle *)h;
-}
-#else
-static __u32 marshal_id(struct nvmap_handle *handle)
-{
- return (uintptr_t)handle;
-}
-
-static struct nvmap_handle *unmarshal_id(__u32 id)
-{
- return (struct nvmap_handle *)id;
-}
-#endif
-
struct nvmap_handle *__nvmap_ref_to_id(struct nvmap_handle_ref *ref)
{
if (!virt_addr_valid(ref))
@@ -115,8 +80,8 @@
struct nvmap_handle *on_stack[16];
struct nvmap_handle **refs;
unsigned long __user *output = NULL;
- unsigned int i;
int err = 0;
+ u32 i, n_unmarshal_handles = 0;
#ifdef CONFIG_COMPAT
if (is32) {
@@ -159,6 +124,7 @@
err = -EINVAL;
goto out;
}
+ n_unmarshal_handles++;
}
} else {
refs = on_stack;
@@ -168,6 +134,7 @@
err = -EINVAL;
goto out;
}
+ n_unmarshal_handles++;
}
trace_nvmap_ioctl_pinop(filp->private_data, is_pin, op.count, refs);
@@ -230,35 +197,15 @@
nvmap_unpin_ids(filp->private_data, op.count, refs);
out:
+ for (i = 0; i < n_unmarshal_handles; i++)
+ nvmap_handle_put(refs[i]);
+
if (refs != on_stack)
kfree(refs);
return err;
}
-int nvmap_ioctl_getid(struct file *filp, void __user *arg)
-{
- struct nvmap_create_handle op;
- struct nvmap_handle *h = NULL;
-
- if (copy_from_user(&op, arg, sizeof(op)))
- return -EFAULT;
-
- h = unmarshal_user_handle(op.handle);
- if (!h)
- return -EINVAL;
-
- h = nvmap_handle_get(h);
-
- if (!h)
- return -EPERM;
-
- op.id = marshal_id(h);
- nvmap_handle_put(h);
-
- return copy_to_user(arg, &op, sizeof(op)) ? -EFAULT : 0;
-}
-
static int nvmap_share_release(struct inode *inode, struct file *file)
{
struct nvmap_handle *h = file->private_data;
@@ -294,6 +241,7 @@
return -EINVAL;
op.fd = nvmap_get_dmabuf_fd(client, handle);
+ nvmap_handle_put(handle);
if (op.fd < 0)
return op.fd;
@@ -309,24 +257,27 @@
struct nvmap_alloc_handle op;
struct nvmap_client *client = filp->private_data;
struct nvmap_handle *handle;
+ int err;
if (copy_from_user(&op, arg, sizeof(op)))
return -EFAULT;
- handle = unmarshal_user_handle(op.handle);
- if (!handle)
+ if (op.align & (op.align - 1))
return -EINVAL;
- if (op.align & (op.align - 1))
+ handle = unmarshal_user_handle(op.handle);
+ if (!handle)
return -EINVAL;
/* user-space handles are aligned to page boundaries, to prevent
* data leakage. */
op.align = max_t(size_t, op.align, PAGE_SIZE);
- return nvmap_alloc_handle(client, handle, op.heap_mask, op.align,
+ err = nvmap_alloc_handle(client, handle, op.heap_mask, op.align,
0, /* no kind */
op.flags & (~NVMAP_HANDLE_KIND_SPECIFIED));
+ nvmap_handle_put(handle);
+ return err;
}
int nvmap_ioctl_alloc_kind(struct file *filp, void __user *arg)
@@ -334,26 +285,29 @@
struct nvmap_alloc_kind_handle op;
struct nvmap_client *client = filp->private_data;
struct nvmap_handle *handle;
+ int err;
if (copy_from_user(&op, arg, sizeof(op)))
return -EFAULT;
- handle = unmarshal_user_handle(op.handle);
- if (!handle)
+ if (op.align & (op.align - 1))
return -EINVAL;
- if (op.align & (op.align - 1))
+ handle = unmarshal_user_handle(op.handle);
+ if (!handle)
return -EINVAL;
/* user-space handles are aligned to page boundaries, to prevent
* data leakage. */
op.align = max_t(size_t, op.align, PAGE_SIZE);
- return nvmap_alloc_handle(client, handle,
+ err = nvmap_alloc_handle(client, handle,
op.heap_mask,
op.align,
op.kind,
op.flags);
+ nvmap_handle_put(handle);
+ return err;
}
int nvmap_create_fd(struct nvmap_client *client, struct nvmap_handle *h)
@@ -392,8 +346,6 @@
ref = nvmap_create_handle(client, PAGE_ALIGN(op.size));
if (!IS_ERR(ref))
ref->handle->orig_size = op.size;
- } else if (cmd == NVMAP_IOC_FROM_ID) {
- ref = nvmap_duplicate_handle(client, unmarshal_id(op.id), 0);
} else if (cmd == NVMAP_IOC_FROM_FD) {
ref = nvmap_create_handle_from_fd(client, op.fd);
} else {
@@ -446,15 +398,9 @@
return -EFAULT;
h = unmarshal_user_handle(op.handle);
-
if (!h)
return -EINVAL;
- h = nvmap_handle_get(h);
-
- if (!h)
- return -EPERM;
-
if(!h->alloc) {
nvmap_handle_put(h);
return -EFAULT;
@@ -547,10 +493,6 @@
if (!h)
return -EINVAL;
- h = nvmap_handle_get(h);
- if (!h)
- return -EINVAL;
-
nvmap_ref_lock(client);
ref = __nvmap_validate_locked(client, h);
if (IS_ERR_OR_NULL(ref)) {
@@ -603,13 +545,12 @@
if (copy_from_user(&op, arg, sizeof(op)))
return -EFAULT;
- h = unmarshal_user_handle(op.handle);
- if (!h || !op.addr || !op.count || !op.elem_size)
+ if (!op.addr || !op.count || !op.elem_size)
return -EINVAL;
- h = nvmap_handle_get(h);
+ h = unmarshal_user_handle(op.handle);
if (!h)
- return -EPERM;
+ return -EINVAL;
trace_nvmap_ioctl_rw_handle(client, h, is_read, op.offset,
op.addr, op.hmem_stride,
@@ -646,11 +587,14 @@
unsigned long end;
int err = 0;
- handle = unmarshal_user_handle(op->handle);
- if (!handle || !op->addr || op->op < NVMAP_CACHE_OP_WB ||
+ if (!op->addr || op->op < NVMAP_CACHE_OP_WB ||
op->op > NVMAP_CACHE_OP_WB_INV)
return -EINVAL;
+ handle = unmarshal_user_handle(op->handle);
+ if (!handle)
+ return -EINVAL;
+
down_read(&current->mm->mmap_sem);
vma = find_vma(current->active_mm, (unsigned long)op->addr);
@@ -677,6 +621,7 @@
false);
out:
up_read(&current->mm->mmap_sem);
+ nvmap_handle_put(handle);
return err;
}
@@ -1137,7 +1082,8 @@
u32 *offset_ptr;
u32 *size_ptr;
struct nvmap_handle **refs;
- int i, err = 0;
+ int err = 0;
+ u32 i, n_unmarshal_handles = 0;
if (copy_from_user(&op, arg, sizeof(op)))
return -EFAULT;
@@ -1179,6 +1125,7 @@
err = -EINVAL;
goto free_mem;
}
+ n_unmarshal_handles++;
}
if (is_reserve_ioctl)
@@ -1189,6 +1136,8 @@
op.op, op.nr);
free_mem:
+ for (i = 0; i < n_unmarshal_handles; i++)
+ nvmap_handle_put(refs[i]);
kfree(refs);
return err;
}
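The nvmap changes above establish a single discipline: every successful `unmarshal_user_handle()` returns a handle with an extra reference, so every exit path of an ioctl must drop it exactly once with `nvmap_handle_put()`. A userspace sketch of that balanced get/put contract, with hypothetical names in place of the real handle API:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the unmarshal discipline: a successful lookup pins the
 * handle, and every exit path drops exactly one reference. */
struct fake_handle { int refs; };

static struct fake_handle *unmarshal(struct fake_handle *h)
{
    if (!h)
        return NULL;
    h->refs++;                /* reference taken on behalf of the caller */
    return h;
}

static int ioctl_op(struct fake_handle *input)
{
    struct fake_handle *h = unmarshal(input);
    if (!h)
        return -1;            /* -EINVAL in the driver */
    /* ... use the handle ... */
    h->refs--;                /* nvmap_handle_put() on every exit path */
    return 0;
}
```

This is also why the pin and reserve ioctls count `n_unmarshal_handles`: on error midway through a batch, only the handles actually unmarshaled so far are put back.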
diff --git a/drivers/video/tegra/nvmap/nvmap_ioctl.h b/drivers/video/tegra/nvmap/nvmap_ioctl.h
index a0379b0..6c051fe 100644
--- a/drivers/video/tegra/nvmap/nvmap_ioctl.h
+++ b/drivers/video/tegra/nvmap/nvmap_ioctl.h
@@ -30,8 +30,6 @@
int nvmap_ioctl_get_param(struct file *filp, void __user *arg, bool is32);
-int nvmap_ioctl_getid(struct file *filp, void __user *arg);
-
int nvmap_ioctl_getfd(struct file *filp, void __user *arg);
int nvmap_ioctl_alloc(struct file *filp, void __user *arg);
diff --git a/drivers/video/tegra/nvmap/nvmap_mm.c b/drivers/video/tegra/nvmap/nvmap_mm.c
index 942394c..24f69f4 100644
--- a/drivers/video/tegra/nvmap/nvmap_mm.c
+++ b/drivers/video/tegra/nvmap/nvmap_mm.c
@@ -24,12 +24,28 @@
#include "nvmap_priv.h"
+static inline void nvmap_flush_dcache_all(void *dummy)
+{
+#if defined(CONFIG_DENVER_CPU)
+ u64 id_afr0;
+ asm volatile ("mrs %0, ID_AFR0_EL1" : "=r"(id_afr0));
+ if (likely((id_afr0 & 0xf00) == 0x100)) {
+ asm volatile ("msr s3_0_c15_c13_0, %0" : : "r" (0));
+ asm volatile ("dsb sy");
+ } else {
+ __flush_dcache_all(NULL);
+ }
+#else
+ __flush_dcache_all(NULL);
+#endif
+}
+
void inner_flush_cache_all(void)
{
#if defined(CONFIG_ARM64) && defined(CONFIG_NVMAP_CACHE_MAINT_BY_SET_WAYS_ON_ONE_CPU)
- __flush_dcache_all(NULL);
+ nvmap_flush_dcache_all(NULL);
#elif defined(CONFIG_ARM64)
- on_each_cpu(__flush_dcache_all, NULL, 1);
+ on_each_cpu(nvmap_flush_dcache_all, NULL, 1);
#elif defined(CONFIG_NVMAP_CACHE_MAINT_BY_SET_WAYS_ON_ONE_CPU)
v7_flush_kern_cache_all();
#else
diff --git a/drivers/video/tegra/nvmap/nvmap_priv.h b/drivers/video/tegra/nvmap/nvmap_priv.h
index 271b6f7..45738cd 100644
--- a/drivers/video/tegra/nvmap/nvmap_priv.h
+++ b/drivers/video/tegra/nvmap/nvmap_priv.h
@@ -371,7 +371,7 @@
void nvmap_client_put(struct nvmap_client *c);
-struct nvmap_handle *unmarshal_user_id(u32 id);
+struct nvmap_handle *unmarshal_user_handle(__u32 handle);
/* MM definitions. */
extern size_t cache_maint_inner_threshold;
diff --git a/drivers/video/tegra/tegra_adf.c b/drivers/video/tegra/tegra_adf.c
index 6d7dbc5..7935010 100644
--- a/drivers/video/tegra/tegra_adf.c
+++ b/drivers/video/tegra/tegra_adf.c
@@ -1152,9 +1152,15 @@
event.base.type = TEGRA_ADF_EVENT_BANDWIDTH_RENEGOTIATE;
event.base.length = sizeof(event);
- event.total_bw = bw->total_bw;
- event.avail_bw = bw->avail_bw;
- event.resvd_bw = bw->resvd_bw;
+ if (bw == NULL) {
+ event.total_bw = 0;
+ event.avail_bw = 0;
+ event.resvd_bw = 0;
+ } else {
+ event.total_bw = bw->total_bw;
+ event.avail_bw = bw->avail_bw;
+ event.resvd_bw = bw->resvd_bw;
+ }
return adf_event_notify(&adf_info->base.base, &event.base);
}
diff --git a/fs/autofs4/root.c b/fs/autofs4/root.c
index 085da86..ca8e555 100644
--- a/fs/autofs4/root.c
+++ b/fs/autofs4/root.c
@@ -41,7 +41,7 @@
.open = dcache_dir_open,
.release = dcache_dir_close,
.read = generic_read_dir,
- .readdir = dcache_readdir,
+ .iterate = dcache_readdir,
.llseek = dcache_dir_lseek,
.unlocked_ioctl = autofs4_root_ioctl,
#ifdef CONFIG_COMPAT
@@ -53,7 +53,7 @@
.open = autofs4_dir_open,
.release = dcache_dir_close,
.read = generic_read_dir,
- .readdir = dcache_readdir,
+ .iterate = dcache_readdir,
.llseek = dcache_dir_lseek,
};
diff --git a/fs/buffer.c b/fs/buffer.c
index 62b470c..82e5775 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -975,7 +975,7 @@
int ret = 0; /* Will call free_more_memory() */
page = find_or_create_page(inode->i_mapping, index,
- (mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS)|__GFP_MOVABLE);
+ (mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS));
if (!page)
return ret;
diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
index c014858..a62e662 100644
--- a/fs/cramfs/inode.c
+++ b/fs/cramfs/inode.c
@@ -350,18 +350,17 @@
/*
* Read a cramfs directory entry.
*/
-static int cramfs_readdir(struct file *filp, void *dirent, filldir_t filldir)
+static int cramfs_readdir(struct file *file, struct dir_context *ctx)
{
- struct inode *inode = file_inode(filp);
+ struct inode *inode = file_inode(file);
struct super_block *sb = inode->i_sb;
char *buf;
unsigned int offset;
- int copied;
/* Offset within the thing. */
- offset = filp->f_pos;
- if (offset >= inode->i_size)
+ if (ctx->pos >= inode->i_size)
return 0;
+ offset = ctx->pos;
/* Directory entries are always 4-byte aligned */
if (offset & 3)
return -EINVAL;
@@ -370,14 +369,13 @@
if (!buf)
return -ENOMEM;
- copied = 0;
while (offset < inode->i_size) {
struct cramfs_inode *de;
unsigned long nextoffset;
char *name;
ino_t ino;
umode_t mode;
- int namelen, error;
+ int namelen;
mutex_lock(&read_mutex);
de = cramfs_read(sb, OFFSET(inode) + offset, sizeof(*de)+CRAMFS_MAXPATHLEN);
@@ -403,13 +401,10 @@
break;
namelen--;
}
- error = filldir(dirent, buf, namelen, offset, ino, mode >> 12);
- if (error)
+ if (!dir_emit(ctx, buf, namelen, ino, mode >> 12))
break;
- offset = nextoffset;
- filp->f_pos = offset;
- copied++;
+ ctx->pos = offset = nextoffset;
}
kfree(buf);
return 0;
@@ -548,7 +543,7 @@
static const struct file_operations cramfs_directory_operations = {
.llseek = generic_file_llseek,
.read = generic_read_dir,
- .readdir = cramfs_readdir,
+ .iterate = cramfs_readdir,
};
static const struct inode_operations cramfs_dir_inode_operations = {
diff --git a/fs/dcache.c b/fs/dcache.c
index 9a59653..7a271b9 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -2984,6 +2984,22 @@
goto again;
}
+void d_tmpfile(struct dentry *dentry, struct inode *inode)
+{
+ inode_dec_link_count(inode);
+ BUG_ON(dentry->d_name.name != dentry->d_iname ||
+ !hlist_unhashed(&dentry->d_alias) ||
+ !d_unlinked(dentry));
+ spin_lock(&dentry->d_parent->d_lock);
+ spin_lock_nested(&dentry->d_lock, DENTRY_D_LOCK_NESTED);
+ dentry->d_name.len = sprintf(dentry->d_iname, "#%llu",
+ (unsigned long long)inode->i_ino);
+ spin_unlock(&dentry->d_lock);
+ spin_unlock(&dentry->d_parent->d_lock);
+ d_instantiate(dentry, inode);
+}
+EXPORT_SYMBOL(d_tmpfile);
+
/**
* find_inode_number - check for dentry with name
* @dir: directory to check
diff --git a/fs/exportfs/expfs.c b/fs/exportfs/expfs.c
index 43b448d..68acdfd 100644
--- a/fs/exportfs/expfs.c
+++ b/fs/exportfs/expfs.c
@@ -280,6 +280,7 @@
goto out_close;
buffer.sequence = 0;
+ buffer.ctx.actor = filldir_one;
while (1) {
int old_seq = buffer.sequence;
diff --git a/fs/ext2/namei.c b/fs/ext2/namei.c
index 73b0d95..256dd5f 100644
--- a/fs/ext2/namei.c
+++ b/fs/ext2/namei.c
@@ -119,6 +119,29 @@
return ext2_add_nondir(dentry, inode);
}
+static int ext2_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mode)
+{
+ struct inode *inode = ext2_new_inode(dir, mode, NULL);
+ if (IS_ERR(inode))
+ return PTR_ERR(inode);
+
+ inode->i_op = &ext2_file_inode_operations;
+ if (ext2_use_xip(inode->i_sb)) {
+ inode->i_mapping->a_ops = &ext2_aops_xip;
+ inode->i_fop = &ext2_xip_file_operations;
+ } else if (test_opt(inode->i_sb, NOBH)) {
+ inode->i_mapping->a_ops = &ext2_nobh_aops;
+ inode->i_fop = &ext2_file_operations;
+ } else {
+ inode->i_mapping->a_ops = &ext2_aops;
+ inode->i_fop = &ext2_file_operations;
+ }
+ mark_inode_dirty(inode);
+ d_tmpfile(dentry, inode);
+ unlock_new_inode(inode);
+ return 0;
+}
+
static int ext2_mknod (struct inode * dir, struct dentry *dentry, umode_t mode, dev_t rdev)
{
struct inode * inode;
@@ -398,6 +421,7 @@
#endif
.setattr = ext2_setattr,
.get_acl = ext2_get_acl,
+ .tmpfile = ext2_tmpfile,
};
const struct inode_operations ext2_special_inode_operations = {
diff --git a/fs/f2fs/Kconfig b/fs/f2fs/Kconfig
index fd27e7e..214fe10 100644
--- a/fs/f2fs/Kconfig
+++ b/fs/f2fs/Kconfig
@@ -51,3 +51,23 @@
Linux website <http://acl.bestbits.at/>.
If you don't know what Access Control Lists are, say N
+
+config F2FS_FS_SECURITY
+ bool "F2FS Security Labels"
+ depends on F2FS_FS_XATTR
+ help
+ Security labels provide an access control facility to support Linux
+ Security Models (LSMs) accepted by AppArmor, SELinux, Smack and TOMOYO
+ Linux. This option enables an extended attribute handler for file
+ security labels in the f2fs filesystem, so the extended attribute
+ support must be enabled in advance.
+
+ If you are not using a security module, say N.
+
+config F2FS_CHECK_FS
+ bool "F2FS consistency checking feature"
+ depends on F2FS_FS
+ help
+ Enables BUG_ONs which check the file system consistency at runtime.
+
+ If you want to improve the performance, say N.
diff --git a/fs/f2fs/Makefile b/fs/f2fs/Makefile
index 27a0820..2e35da1 100644
--- a/fs/f2fs/Makefile
+++ b/fs/f2fs/Makefile
@@ -1,6 +1,6 @@
obj-$(CONFIG_F2FS_FS) += f2fs.o
-f2fs-y := dir.o file.o inode.o namei.o hash.o super.o
+f2fs-y := dir.o file.o inode.o namei.o hash.o super.o inline.o
f2fs-y += checkpoint.o gc.o data.o node.o segment.o recovery.o
f2fs-$(CONFIG_F2FS_STAT_FS) += debug.o
f2fs-$(CONFIG_F2FS_FS_XATTR) += xattr.o
diff --git a/fs/f2fs/acl.c b/fs/f2fs/acl.c
index 44abc2f..fdab759 100644
--- a/fs/f2fs/acl.c
+++ b/fs/f2fs/acl.c
@@ -185,7 +185,7 @@
retval = f2fs_getxattr(inode, name_index, "", NULL, 0);
if (retval > 0) {
- value = kmalloc(retval, GFP_KERNEL);
+ value = kmalloc(retval, GFP_F2FS_ZERO);
if (!value)
return ERR_PTR(-ENOMEM);
retval = f2fs_getxattr(inode, name_index, "", value, retval);
@@ -205,7 +205,8 @@
return acl;
}
-static int f2fs_set_acl(struct inode *inode, int type, struct posix_acl *acl)
+static int f2fs_set_acl(struct inode *inode, int type,
+ struct posix_acl *acl, struct page *ipage)
{
struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
struct f2fs_inode_info *fi = F2FS_I(inode);
@@ -250,7 +251,7 @@
}
}
- error = f2fs_setxattr(inode, name_index, "", value, size);
+ error = f2fs_setxattr(inode, name_index, "", value, size, ipage, 0);
kfree(value);
if (!error)
@@ -260,10 +261,10 @@
return error;
}
-int f2fs_init_acl(struct inode *inode, struct inode *dir)
+int f2fs_init_acl(struct inode *inode, struct inode *dir, struct page *ipage)
{
- struct posix_acl *acl = NULL;
struct f2fs_sb_info *sbi = F2FS_SB(dir->i_sb);
+ struct posix_acl *acl = NULL;
int error = 0;
if (!S_ISLNK(inode->i_mode)) {
@@ -272,23 +273,24 @@
if (IS_ERR(acl))
return PTR_ERR(acl);
}
- if (!acl)
+ if (!acl && !(test_opt(sbi, ANDROID_EMU) &&
+ F2FS_I(inode)->i_advise & FADVISE_ANDROID_EMU))
inode->i_mode &= ~current_umask();
}
- if (test_opt(sbi, POSIX_ACL) && acl) {
+ if (!test_opt(sbi, POSIX_ACL) || !acl)
+ goto cleanup;
- if (S_ISDIR(inode->i_mode)) {
- error = f2fs_set_acl(inode, ACL_TYPE_DEFAULT, acl);
- if (error)
- goto cleanup;
- }
- error = posix_acl_create(&acl, GFP_KERNEL, &inode->i_mode);
- if (error < 0)
- return error;
- if (error > 0)
- error = f2fs_set_acl(inode, ACL_TYPE_ACCESS, acl);
+ if (S_ISDIR(inode->i_mode)) {
+ error = f2fs_set_acl(inode, ACL_TYPE_DEFAULT, acl, ipage);
+ if (error)
+ goto cleanup;
}
+ error = posix_acl_create(&acl, GFP_KERNEL, &inode->i_mode);
+ if (error < 0)
+ return error;
+ if (error > 0)
+ error = f2fs_set_acl(inode, ACL_TYPE_ACCESS, acl, ipage);
cleanup:
posix_acl_release(acl);
return error;
@@ -313,11 +315,38 @@
error = posix_acl_chmod(&acl, GFP_KERNEL, mode);
if (error)
return error;
- error = f2fs_set_acl(inode, ACL_TYPE_ACCESS, acl);
+
+ error = f2fs_set_acl(inode, ACL_TYPE_ACCESS, acl, NULL);
posix_acl_release(acl);
return error;
}
+int f2fs_android_emu(struct f2fs_sb_info *sbi, struct inode *inode,
+ u32 *uid, u32 *gid, umode_t *mode)
+{
+ F2FS_I(inode)->i_advise |= FADVISE_ANDROID_EMU;
+
+ if (uid)
+ *uid = sbi->android_emu_uid;
+ if (gid)
+ *gid = sbi->android_emu_gid;
+ if (mode) {
+ *mode = (*mode & ~S_IRWXUGO) | sbi->android_emu_mode;
+ if (F2FS_I(inode)->i_advise & FADVISE_ANDROID_EMU_ROOT)
+ *mode &= ~S_IRWXO;
+ if (S_ISDIR(*mode)) {
+ if (*mode & S_IRUSR)
+ *mode |= S_IXUSR;
+ if (*mode & S_IRGRP)
+ *mode |= S_IXGRP;
+ if (*mode & S_IROTH)
+ *mode |= S_IXOTH;
+ }
+ }
+
+ return 0;
+}
+
static size_t f2fs_xattr_list_acl(struct dentry *dentry, char *list,
size_t list_size, const char *name, size_t name_len, int type)
{
@@ -388,7 +417,7 @@
acl = NULL;
}
- error = f2fs_set_acl(inode, type, acl);
+ error = f2fs_set_acl(inode, type, acl, NULL);
release_and_out:
posix_acl_release(acl);
diff --git a/fs/f2fs/acl.h b/fs/f2fs/acl.h
index 80f4306..4963313 100644
--- a/fs/f2fs/acl.h
+++ b/fs/f2fs/acl.h
@@ -36,9 +36,9 @@
#ifdef CONFIG_F2FS_FS_POSIX_ACL
-extern struct posix_acl *f2fs_get_acl(struct inode *inode, int type);
-extern int f2fs_acl_chmod(struct inode *inode);
-extern int f2fs_init_acl(struct inode *inode, struct inode *dir);
+extern struct posix_acl *f2fs_get_acl(struct inode *, int);
+extern int f2fs_acl_chmod(struct inode *);
+extern int f2fs_init_acl(struct inode *, struct inode *, struct page *);
#else
#define f2fs_check_acl NULL
#define f2fs_get_acl NULL
@@ -49,7 +49,8 @@
return 0;
}
-static inline int f2fs_init_acl(struct inode *inode, struct inode *dir)
+static inline int f2fs_init_acl(struct inode *inode, struct inode *dir,
+ struct page *page)
{
return 0;
}
diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
index b1de01d..81f6288 100644
--- a/fs/f2fs/checkpoint.c
+++ b/fs/f2fs/checkpoint.c
@@ -22,7 +22,7 @@
#include "segment.h"
#include <trace/events/f2fs.h>
-static struct kmem_cache *orphan_entry_slab;
+static struct kmem_cache *ino_entry_slab;
static struct kmem_cache *inode_entry_slab;
/*
@@ -30,7 +30,7 @@
*/
struct page *grab_meta_page(struct f2fs_sb_info *sbi, pgoff_t index)
{
- struct address_space *mapping = sbi->meta_inode->i_mapping;
+ struct address_space *mapping = META_MAPPING(sbi);
struct page *page = NULL;
repeat:
page = grab_cache_page(mapping, index);
@@ -38,9 +38,7 @@
cond_resched();
goto repeat;
}
-
- /* We wait writeback only inside grab_meta_page() */
- wait_on_page_writeback(page);
+ f2fs_wait_on_page_writeback(page, META);
SetPageUptodate(page);
return page;
}
@@ -50,7 +48,7 @@
*/
struct page *get_meta_page(struct f2fs_sb_info *sbi, pgoff_t index)
{
- struct address_space *mapping = sbi->meta_inode->i_mapping;
+ struct address_space *mapping = META_MAPPING(sbi);
struct page *page;
repeat:
page = grab_cache_page(mapping, index);
@@ -61,11 +59,12 @@
if (PageUptodate(page))
goto out;
- if (f2fs_readpage(sbi, page, index, READ_SYNC))
+ if (f2fs_submit_page_bio(sbi, page, index,
+ READ_SYNC | REQ_META | REQ_PRIO))
goto repeat;
lock_page(page);
- if (page->mapping != mapping) {
+ if (unlikely(page->mapping != mapping)) {
f2fs_put_page(page, 1);
goto repeat;
}
@@ -74,54 +73,160 @@
return page;
}
+struct page *get_meta_page_ra(struct f2fs_sb_info *sbi, pgoff_t index)
+{
+ bool readahead = false;
+ struct page *page;
+
+ page = find_get_page(META_MAPPING(sbi), index);
+ if (!page || !PageUptodate(page))
+ readahead = true;
+ f2fs_put_page(page, 0);
+
+ if (readahead)
+ ra_meta_pages(sbi, index, MAX_BIO_BLOCKS(sbi), META_POR);
+ return get_meta_page(sbi, index);
+}
+
+static inline block_t get_max_meta_blks(struct f2fs_sb_info *sbi, int type)
+{
+ switch (type) {
+ case META_NAT:
+ return NM_I(sbi)->max_nid / NAT_ENTRY_PER_BLOCK;
+ case META_SIT:
+ return SIT_BLK_CNT(sbi);
+ case META_SSA:
+ case META_CP:
+ return 0;
+ case META_POR:
+ return MAX_BLKADDR(sbi);
+ default:
+ BUG();
+ }
+}
+
+/*
+ * Readahead CP/NAT/SIT/SSA pages
+ */
+int ra_meta_pages(struct f2fs_sb_info *sbi, block_t start, int nrpages, int type)
+{
+ block_t prev_blk_addr = 0;
+ struct page *page;
+ block_t blkno = start;
+ block_t max_blks = get_max_meta_blks(sbi, type);
+
+ struct f2fs_io_info fio = {
+ .type = META,
+ .rw = READ_SYNC | REQ_META | REQ_PRIO
+ };
+
+ for (; nrpages-- > 0; blkno++) {
+ block_t blk_addr;
+
+ switch (type) {
+ case META_NAT:
+ /* get nat block addr */
+ if (unlikely(blkno >= max_blks))
+ blkno = 0;
+ blk_addr = current_nat_addr(sbi,
+ blkno * NAT_ENTRY_PER_BLOCK);
+ break;
+ case META_SIT:
+ /* get sit block addr */
+ if (unlikely(blkno >= max_blks))
+ goto out;
+ blk_addr = current_sit_addr(sbi,
+ blkno * SIT_ENTRY_PER_BLOCK);
+ if (blkno != start && prev_blk_addr + 1 != blk_addr)
+ goto out;
+ prev_blk_addr = blk_addr;
+ break;
+ case META_SSA:
+ case META_CP:
+ case META_POR:
+ if (unlikely(blkno >= max_blks))
+ goto out;
+ if (unlikely(blkno < SEG0_BLKADDR(sbi)))
+ goto out;
+ blk_addr = blkno;
+ break;
+ default:
+ BUG();
+ }
+
+ page = grab_cache_page(META_MAPPING(sbi), blk_addr);
+ if (!page)
+ continue;
+ if (PageUptodate(page)) {
+ mark_page_accessed(page);
+ f2fs_put_page(page, 1);
+ continue;
+ }
+
+ f2fs_submit_page_mbio(sbi, page, blk_addr, &fio);
+ mark_page_accessed(page);
+ f2fs_put_page(page, 0);
+ }
+out:
+ f2fs_submit_merged_bio(sbi, META, READ);
+ return blkno - start;
+}
+
static int f2fs_write_meta_page(struct page *page,
struct writeback_control *wbc)
{
- struct inode *inode = page->mapping->host;
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
+ struct f2fs_sb_info *sbi = F2FS_P_SB(page);
- /* Should not write any meta pages, if any IO error was occurred */
- if (wbc->for_reclaim ||
- is_set_ckpt_flags(F2FS_CKPT(sbi), CP_ERROR_FLAG)) {
- dec_page_count(sbi, F2FS_DIRTY_META);
- wbc->pages_skipped++;
- set_page_dirty(page);
- return AOP_WRITEPAGE_ACTIVATE;
- }
+ trace_f2fs_writepage(page, META);
- wait_on_page_writeback(page);
+ if (unlikely(sbi->por_doing))
+ goto redirty_out;
+ if (wbc->for_reclaim)
+ goto redirty_out;
+ if (unlikely(f2fs_cp_error(sbi)))
+ goto redirty_out;
+ f2fs_wait_on_page_writeback(page, META);
write_meta_page(sbi, page);
dec_page_count(sbi, F2FS_DIRTY_META);
unlock_page(page);
return 0;
+
+redirty_out:
+ redirty_page_for_writepage(wbc, page);
+ return AOP_WRITEPAGE_ACTIVATE;
}
static int f2fs_write_meta_pages(struct address_space *mapping,
struct writeback_control *wbc)
{
- struct f2fs_sb_info *sbi = F2FS_SB(mapping->host->i_sb);
- struct block_device *bdev = sbi->sb->s_bdev;
- long written;
+ struct f2fs_sb_info *sbi = F2FS_M_SB(mapping);
+ long diff, written;
- if (wbc->for_kupdate)
- return 0;
+ trace_f2fs_writepages(mapping->host, wbc, META);
- if (get_pages(sbi, F2FS_DIRTY_META) == 0)
- return 0;
+ /* collect a number of dirty meta pages and write together */
+ if (wbc->for_kupdate ||
+ get_pages(sbi, F2FS_DIRTY_META) < nr_pages_to_skip(sbi, META))
+ goto skip_write;
/* if mounting failed, skip writing node pages */
mutex_lock(&sbi->cp_mutex);
- written = sync_meta_pages(sbi, META, bio_get_nr_vecs(bdev));
+ diff = nr_pages_to_write(sbi, META, wbc);
+ written = sync_meta_pages(sbi, META, wbc->nr_to_write);
mutex_unlock(&sbi->cp_mutex);
- wbc->nr_to_write -= written;
+ wbc->nr_to_write = max((long)0, wbc->nr_to_write - written - diff);
+ return 0;
+
+skip_write:
+ wbc->pages_skipped += get_pages(sbi, F2FS_DIRTY_META);
return 0;
}
long sync_meta_pages(struct f2fs_sb_info *sbi, enum page_type type,
long nr_to_write)
{
- struct address_space *mapping = sbi->meta_inode->i_mapping;
+ struct address_space *mapping = META_MAPPING(sbi);
pgoff_t index = 0, end = LONG_MAX;
struct pagevec pvec;
long nwritten = 0;
@@ -136,20 +241,33 @@
nr_pages = pagevec_lookup_tag(&pvec, mapping, &index,
PAGECACHE_TAG_DIRTY,
min(end - index, (pgoff_t)PAGEVEC_SIZE-1) + 1);
- if (nr_pages == 0)
+ if (unlikely(nr_pages == 0))
break;
for (i = 0; i < nr_pages; i++) {
struct page *page = pvec.pages[i];
+
lock_page(page);
- BUG_ON(page->mapping != mapping);
- BUG_ON(!PageDirty(page));
- clear_page_dirty_for_io(page);
+
+ if (unlikely(page->mapping != mapping)) {
+continue_unlock:
+ unlock_page(page);
+ continue;
+ }
+ if (!PageDirty(page)) {
+ /* someone wrote it for us */
+ goto continue_unlock;
+ }
+
+ if (!clear_page_dirty_for_io(page))
+ goto continue_unlock;
+
if (f2fs_write_meta_page(page, &wbc)) {
unlock_page(page);
break;
}
- if (nwritten++ >= nr_to_write)
+ nwritten++;
+ if (unlikely(nwritten >= nr_to_write))
break;
}
pagevec_release(&pvec);
@@ -157,20 +275,19 @@
}
if (nwritten)
- f2fs_submit_bio(sbi, type, nr_to_write == LONG_MAX);
+ f2fs_submit_merged_bio(sbi, type, WRITE);
return nwritten;
}
static int f2fs_set_meta_page_dirty(struct page *page)
{
- struct address_space *mapping = page->mapping;
- struct f2fs_sb_info *sbi = F2FS_SB(mapping->host->i_sb);
+ trace_f2fs_set_page_dirty(page, META);
SetPageUptodate(page);
if (!PageDirty(page)) {
__set_page_dirty_nobuffers(page);
- inc_page_count(sbi, F2FS_DIRTY_META);
+ inc_page_count(F2FS_P_SB(page), F2FS_DIRTY_META);
return 1;
}
return 0;
@@ -182,99 +299,156 @@
.set_page_dirty = f2fs_set_meta_page_dirty,
};
-int check_orphan_space(struct f2fs_sb_info *sbi)
+static void __add_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type)
{
- unsigned int max_orphans;
+ struct ino_entry *e;
+retry:
+ spin_lock(&sbi->ino_lock[type]);
+
+ e = radix_tree_lookup(&sbi->ino_root[type], ino);
+ if (!e) {
+ e = kmem_cache_alloc(ino_entry_slab, GFP_ATOMIC);
+ if (!e) {
+ spin_unlock(&sbi->ino_lock[type]);
+ goto retry;
+ }
+ if (radix_tree_insert(&sbi->ino_root[type], ino, e)) {
+ spin_unlock(&sbi->ino_lock[type]);
+ kmem_cache_free(ino_entry_slab, e);
+ goto retry;
+ }
+ memset(e, 0, sizeof(struct ino_entry));
+ e->ino = ino;
+
+ list_add_tail(&e->list, &sbi->ino_list[type]);
+ }
+ spin_unlock(&sbi->ino_lock[type]);
+}
+
+static void __remove_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type)
+{
+ struct ino_entry *e;
+
+ spin_lock(&sbi->ino_lock[type]);
+ e = radix_tree_lookup(&sbi->ino_root[type], ino);
+ if (e) {
+ list_del(&e->list);
+ radix_tree_delete(&sbi->ino_root[type], ino);
+ if (type == ORPHAN_INO)
+ sbi->n_orphans--;
+ spin_unlock(&sbi->ino_lock[type]);
+ kmem_cache_free(ino_entry_slab, e);
+ return;
+ }
+ spin_unlock(&sbi->ino_lock[type]);
+}
+
+void add_dirty_inode(struct f2fs_sb_info *sbi, nid_t ino, int type)
+{
+ /* add new dirty ino entry into list */
+ __add_ino_entry(sbi, ino, type);
+}
+
+void remove_dirty_inode(struct f2fs_sb_info *sbi, nid_t ino, int type)
+{
+ /* remove dirty ino entry from list */
+ __remove_ino_entry(sbi, ino, type);
+}
+
+/* mode should be APPEND_INO or UPDATE_INO */
+bool exist_written_data(struct f2fs_sb_info *sbi, nid_t ino, int mode)
+{
+ struct ino_entry *e;
+ spin_lock(&sbi->ino_lock[mode]);
+ e = radix_tree_lookup(&sbi->ino_root[mode], ino);
+ spin_unlock(&sbi->ino_lock[mode]);
+ return e ? true : false;
+}
+
+void release_dirty_inode(struct f2fs_sb_info *sbi)
+{
+ struct ino_entry *e, *tmp;
+ int i;
+
+ for (i = APPEND_INO; i <= UPDATE_INO; i++) {
+ spin_lock(&sbi->ino_lock[i]);
+ list_for_each_entry_safe(e, tmp, &sbi->ino_list[i], list) {
+ list_del(&e->list);
+ radix_tree_delete(&sbi->ino_root[i], e->ino);
+ kmem_cache_free(ino_entry_slab, e);
+ }
+ spin_unlock(&sbi->ino_lock[i]);
+ }
+}
+
+int acquire_orphan_inode(struct f2fs_sb_info *sbi)
+{
int err = 0;
- /*
- * considering 512 blocks in a segment 5 blocks are needed for cp
- * and log segment summaries. Remaining blocks are used to keep
- * orphan entries with the limitation one reserved segment
- * for cp pack we can have max 1020*507 orphan entries
- */
- max_orphans = (sbi->blocks_per_seg - 5) * F2FS_ORPHANS_PER_BLOCK;
- mutex_lock(&sbi->orphan_inode_mutex);
- if (sbi->n_orphans >= max_orphans)
+ spin_lock(&sbi->ino_lock[ORPHAN_INO]);
+ if (unlikely(sbi->n_orphans >= sbi->max_orphans))
err = -ENOSPC;
- mutex_unlock(&sbi->orphan_inode_mutex);
+ else
+ sbi->n_orphans++;
+ spin_unlock(&sbi->ino_lock[ORPHAN_INO]);
+
return err;
}
+void release_orphan_inode(struct f2fs_sb_info *sbi)
+{
+ spin_lock(&sbi->ino_lock[ORPHAN_INO]);
+ if (sbi->n_orphans == 0) {
+ f2fs_msg(sbi->sb, KERN_ERR, "releasing "
+ "unacquired orphan inode");
+ f2fs_handle_error(sbi);
+ } else
+ sbi->n_orphans--;
+ spin_unlock(&sbi->ino_lock[ORPHAN_INO]);
+}
+
void add_orphan_inode(struct f2fs_sb_info *sbi, nid_t ino)
{
- struct list_head *head, *this;
- struct orphan_inode_entry *new = NULL, *orphan = NULL;
-
- mutex_lock(&sbi->orphan_inode_mutex);
- head = &sbi->orphan_inode_list;
- list_for_each(this, head) {
- orphan = list_entry(this, struct orphan_inode_entry, list);
- if (orphan->ino == ino)
- goto out;
- if (orphan->ino > ino)
- break;
- orphan = NULL;
- }
-retry:
- new = kmem_cache_alloc(orphan_entry_slab, GFP_ATOMIC);
- if (!new) {
- cond_resched();
- goto retry;
- }
- new->ino = ino;
-
- /* add new_oentry into list which is sorted by inode number */
- if (orphan)
- list_add(&new->list, this->prev);
- else
- list_add_tail(&new->list, head);
-
- sbi->n_orphans++;
-out:
- mutex_unlock(&sbi->orphan_inode_mutex);
+ /* add new orphan ino entry into list */
+ __add_ino_entry(sbi, ino, ORPHAN_INO);
}
void remove_orphan_inode(struct f2fs_sb_info *sbi, nid_t ino)
{
- struct list_head *this, *next, *head;
- struct orphan_inode_entry *orphan;
-
- mutex_lock(&sbi->orphan_inode_mutex);
- head = &sbi->orphan_inode_list;
- list_for_each_safe(this, next, head) {
- orphan = list_entry(this, struct orphan_inode_entry, list);
- if (orphan->ino == ino) {
- list_del(&orphan->list);
- kmem_cache_free(orphan_entry_slab, orphan);
- sbi->n_orphans--;
- break;
- }
- }
- mutex_unlock(&sbi->orphan_inode_mutex);
+ /* remove orphan entry from orphan list */
+ __remove_ino_entry(sbi, ino, ORPHAN_INO);
}
static void recover_orphan_inode(struct f2fs_sb_info *sbi, nid_t ino)
{
struct inode *inode = f2fs_iget(sbi->sb, ino);
- BUG_ON(IS_ERR(inode));
+ if (IS_ERR(inode)) {
+ f2fs_msg(sbi->sb, KERN_ERR, "unable to recover orphan inode %d",
+ ino);
+ f2fs_handle_error(sbi);
+ return;
+ }
clear_nlink(inode);
/* truncate all the data during iput */
iput(inode);
}
-int recover_orphan_inodes(struct f2fs_sb_info *sbi)
+void recover_orphan_inodes(struct f2fs_sb_info *sbi)
{
block_t start_blk, orphan_blkaddr, i, j;
if (!is_set_ckpt_flags(F2FS_CKPT(sbi), CP_ORPHAN_PRESENT_FLAG))
- return 0;
+ return;
- sbi->por_doing = 1;
- start_blk = __start_cp_addr(sbi) + 1;
+ sbi->por_doing = true;
+
+ start_blk = __start_cp_addr(sbi) + 1 +
+ le32_to_cpu(F2FS_RAW_SUPER(sbi)->cp_payload);
orphan_blkaddr = __start_sum_addr(sbi) - 1;
+ ra_meta_pages(sbi, start_blk, orphan_blkaddr, META_CP);
+
for (i = 0; i < orphan_blkaddr; i++) {
struct page *page = get_meta_page(sbi, start_blk + i);
struct f2fs_orphan_block *orphan_blk;
@@ -288,30 +462,40 @@
}
/* clear Orphan Flag */
clear_ckpt_flags(F2FS_CKPT(sbi), CP_ORPHAN_PRESENT_FLAG);
- sbi->por_doing = 0;
- return 0;
+ sbi->por_doing = false;
+ return;
}
static void write_orphan_inodes(struct f2fs_sb_info *sbi, block_t start_blk)
{
- struct list_head *head, *this, *next;
+ struct list_head *head;
struct f2fs_orphan_block *orphan_blk = NULL;
- struct page *page = NULL;
unsigned int nentries = 0;
- unsigned short index = 1;
- unsigned short orphan_blocks;
+ unsigned short index;
+ unsigned short orphan_blocks =
+ (unsigned short)GET_ORPHAN_BLOCKS(sbi->n_orphans);
+ struct page *page = NULL;
+ struct ino_entry *orphan = NULL;
- orphan_blocks = (unsigned short)((sbi->n_orphans +
- (F2FS_ORPHANS_PER_BLOCK - 1)) / F2FS_ORPHANS_PER_BLOCK);
+ for (index = 0; index < orphan_blocks; index++)
+ grab_meta_page(sbi, start_blk + index);
- mutex_lock(&sbi->orphan_inode_mutex);
- head = &sbi->orphan_inode_list;
+ index = 1;
+ spin_lock(&sbi->ino_lock[ORPHAN_INO]);
+ head = &sbi->ino_list[ORPHAN_INO];
/* loop for each orphan inode entry and write them in journal block */
- list_for_each_safe(this, next, head) {
- struct orphan_inode_entry *orphan;
+ list_for_each_entry(orphan, head, list) {
+ if (!page) {
+ page = find_get_page(META_MAPPING(sbi), start_blk++);
+ f2fs_bug_on(sbi, !page);
+ orphan_blk =
+ (struct f2fs_orphan_block *)page_address(page);
+ memset(orphan_blk, 0, sizeof(*orphan_blk));
+ f2fs_put_page(page, 0);
+ }
- orphan = list_entry(this, struct orphan_inode_entry, list);
+ orphan_blk->ino[nentries++] = cpu_to_le32(orphan->ino);
if (nentries == F2FS_ORPHANS_PER_BLOCK) {
/*
@@ -325,29 +509,20 @@
set_page_dirty(page);
f2fs_put_page(page, 1);
index++;
- start_blk++;
nentries = 0;
page = NULL;
}
- if (page)
- goto page_exist;
-
- page = grab_meta_page(sbi, start_blk);
- orphan_blk = (struct f2fs_orphan_block *)page_address(page);
- memset(orphan_blk, 0, sizeof(*orphan_blk));
-page_exist:
- orphan_blk->ino[nentries++] = cpu_to_le32(orphan->ino);
}
- if (!page)
- goto end;
- orphan_blk->blk_addr = cpu_to_le16(index);
- orphan_blk->blk_count = cpu_to_le16(orphan_blocks);
- orphan_blk->entry_count = cpu_to_le32(nentries);
- set_page_dirty(page);
- f2fs_put_page(page, 1);
-end:
- mutex_unlock(&sbi->orphan_inode_mutex);
+ if (page) {
+ orphan_blk->blk_addr = cpu_to_le16(index);
+ orphan_blk->blk_count = cpu_to_le16(orphan_blocks);
+ orphan_blk->entry_count = cpu_to_le32(nentries);
+ set_page_dirty(page);
+ f2fs_put_page(page, 1);
+ }
+
+ spin_unlock(&sbi->ino_lock[ORPHAN_INO]);
}
static struct page *validate_checkpoint(struct f2fs_sb_info *sbi,
@@ -357,8 +532,8 @@
unsigned long blk_size = sbi->blocksize;
struct f2fs_checkpoint *cp_block;
unsigned long long cur_version = 0, pre_version = 0;
- unsigned int crc = 0;
size_t crc_offset;
+ __u32 crc = 0;
/* Read the 1st cp block in this CP pack */
cp_page_1 = get_meta_page(sbi, cp_addr);
@@ -369,11 +544,11 @@
if (crc_offset >= blk_size)
goto invalid_cp1;
- crc = *(unsigned int *)((unsigned char *)cp_block + crc_offset);
+ crc = le32_to_cpu(*((__u32 *)((unsigned char *)cp_block + crc_offset)));
if (!f2fs_crc_valid(crc, cp_block, crc_offset))
goto invalid_cp1;
- pre_version = le64_to_cpu(cp_block->checkpoint_ver);
+ pre_version = cur_cp_version(cp_block);
/* Read the 2nd cp block in this CP pack */
cp_addr += le32_to_cpu(cp_block->cp_pack_total_block_count) - 1;
@@ -384,11 +559,11 @@
if (crc_offset >= blk_size)
goto invalid_cp2;
- crc = *(unsigned int *)((unsigned char *)cp_block + crc_offset);
+ crc = le32_to_cpu(*((__u32 *)((unsigned char *)cp_block + crc_offset)));
if (!f2fs_crc_valid(crc, cp_block, crc_offset))
goto invalid_cp2;
- cur_version = le64_to_cpu(cp_block->checkpoint_ver);
+ cur_version = cur_cp_version(cp_block);
if (cur_version == pre_version) {
*version = cur_version;
@@ -410,8 +585,11 @@
unsigned long blk_size = sbi->blocksize;
unsigned long long cp1_version = 0, cp2_version = 0;
unsigned long long cp_start_blk_no;
+ unsigned int cp_blks = 1 + le32_to_cpu(F2FS_RAW_SUPER(sbi)->cp_payload);
+ block_t cp_blk_no;
+ int i;
- sbi->ckpt = kzalloc(blk_size, GFP_KERNEL);
+ sbi->ckpt = kzalloc(cp_blks * blk_size, GFP_KERNEL);
if (!sbi->ckpt)
return -ENOMEM;
/*
@@ -422,7 +600,8 @@
cp1 = validate_checkpoint(sbi, cp_start_blk_no, &cp1_version);
/* The second checkpoint pack should start at the next segment */
- cp_start_blk_no += 1 << le32_to_cpu(fsb->log_blocks_per_seg);
+ cp_start_blk_no += ((unsigned long long)1) <<
+ le32_to_cpu(fsb->log_blocks_per_seg);
cp2 = validate_checkpoint(sbi, cp_start_blk_no, &cp2_version);
if (cp1 && cp2) {
@@ -441,6 +620,23 @@
cp_block = (struct f2fs_checkpoint *)page_address(cur_page);
memcpy(sbi->ckpt, cp_block, blk_size);
+ if (cp_blks <= 1)
+ goto done;
+
+ cp_blk_no = le32_to_cpu(fsb->cp_blkaddr);
+ if (cur_page == cp2)
+ cp_blk_no += 1 << le32_to_cpu(fsb->log_blocks_per_seg);
+
+ for (i = 1; i < cp_blks; i++) {
+ void *sit_bitmap_ptr;
+ unsigned char *ckpt = (unsigned char *)sbi->ckpt;
+
+ cur_page = get_meta_page(sbi, cp_blk_no + i);
+ sit_bitmap_ptr = page_address(cur_page);
+ memcpy(ckpt + i * blk_size, sit_bitmap_ptr, blk_size);
+ f2fs_put_page(cur_page, 1);
+ }
+done:
f2fs_put_page(cp1, 1);
f2fs_put_page(cp2, 1);
return 0;
@@ -450,79 +646,106 @@
return -EINVAL;
}
-void set_dirty_dir_page(struct inode *inode, struct page *page)
+static int __add_dirty_inode(struct inode *inode, struct dir_inode_entry *new)
{
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
- struct list_head *head = &sbi->dir_inode_list;
- struct dir_inode_entry *new;
- struct list_head *this;
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
- if (!S_ISDIR(inode->i_mode))
+ if (is_inode_flag_set(F2FS_I(inode), FI_DIRTY_DIR))
+ return -EEXIST;
+
+ set_inode_flag(F2FS_I(inode), FI_DIRTY_DIR);
+ F2FS_I(inode)->dirty_dir = new;
+ list_add_tail(&new->list, &sbi->dir_inode_list);
+ stat_inc_dirty_dir(sbi);
+ return 0;
+}
+
+void update_dirty_page(struct inode *inode, struct page *page)
+{
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ struct dir_inode_entry *new;
+ int ret = 0;
+
+ if (!S_ISDIR(inode->i_mode) && !S_ISREG(inode->i_mode))
return;
-retry:
- new = kmem_cache_alloc(inode_entry_slab, GFP_NOFS);
- if (!new) {
- cond_resched();
- goto retry;
+
+ if (!S_ISDIR(inode->i_mode)) {
+ inode_inc_dirty_pages(inode);
+ goto out;
}
+
+ new = f2fs_kmem_cache_alloc(inode_entry_slab, GFP_NOFS);
new->inode = inode;
INIT_LIST_HEAD(&new->list);
spin_lock(&sbi->dir_inode_lock);
- list_for_each(this, head) {
- struct dir_inode_entry *entry;
- entry = list_entry(this, struct dir_inode_entry, list);
- if (entry->inode == inode) {
- kmem_cache_free(inode_entry_slab, new);
- goto out;
- }
- }
- list_add_tail(&new->list, head);
- sbi->n_dirty_dirs++;
-
- BUG_ON(!S_ISDIR(inode->i_mode));
-out:
- inc_page_count(sbi, F2FS_DIRTY_DENTS);
- inode_inc_dirty_dents(inode);
- SetPagePrivate(page);
-
+ ret = __add_dirty_inode(inode, new);
+ inode_inc_dirty_pages(inode);
spin_unlock(&sbi->dir_inode_lock);
+
+ if (ret)
+ kmem_cache_free(inode_entry_slab, new);
+out:
+ SetPagePrivate(page);
+}
+
+void add_dirty_dir_inode(struct inode *inode)
+{
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ struct dir_inode_entry *new =
+ f2fs_kmem_cache_alloc(inode_entry_slab, GFP_NOFS);
+ int ret = 0;
+
+ new->inode = inode;
+ INIT_LIST_HEAD(&new->list);
+
+ spin_lock(&sbi->dir_inode_lock);
+ ret = __add_dirty_inode(inode, new);
+ spin_unlock(&sbi->dir_inode_lock);
+
+ if (ret)
+ kmem_cache_free(inode_entry_slab, new);
}
void remove_dirty_dir_inode(struct inode *inode)
{
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
- struct list_head *head = &sbi->dir_inode_list;
- struct list_head *this;
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ struct dir_inode_entry *entry;
if (!S_ISDIR(inode->i_mode))
return;
spin_lock(&sbi->dir_inode_lock);
- if (atomic_read(&F2FS_I(inode)->dirty_dents))
- goto out;
-
- list_for_each(this, head) {
- struct dir_inode_entry *entry;
- entry = list_entry(this, struct dir_inode_entry, list);
- if (entry->inode == inode) {
- list_del(&entry->list);
- kmem_cache_free(inode_entry_slab, entry);
- sbi->n_dirty_dirs--;
- break;
- }
+ if (get_dirty_pages(inode) ||
+ !is_inode_flag_set(F2FS_I(inode), FI_DIRTY_DIR)) {
+ spin_unlock(&sbi->dir_inode_lock);
+ return;
}
-out:
+
+ entry = F2FS_I(inode)->dirty_dir;
+ list_del(&entry->list);
+ F2FS_I(inode)->dirty_dir = NULL;
+ clear_inode_flag(F2FS_I(inode), FI_DIRTY_DIR);
+ stat_dec_dirty_dir(sbi);
spin_unlock(&sbi->dir_inode_lock);
+ kmem_cache_free(inode_entry_slab, entry);
+
+ /* Only from the recovery routine */
+ if (is_inode_flag_set(F2FS_I(inode), FI_DELAY_IPUT)) {
+ clear_inode_flag(F2FS_I(inode), FI_DELAY_IPUT);
+ iput(inode);
+ }
}
void sync_dirty_dir_inodes(struct f2fs_sb_info *sbi)
{
- struct list_head *head = &sbi->dir_inode_list;
+ struct list_head *head;
struct dir_inode_entry *entry;
struct inode *inode;
retry:
spin_lock(&sbi->dir_inode_lock);
+
+ head = &sbi->dir_inode_list;
if (list_empty(head)) {
spin_unlock(&sbi->dir_inode_lock);
return;
@@ -531,14 +754,14 @@
inode = igrab(entry->inode);
spin_unlock(&sbi->dir_inode_lock);
if (inode) {
- filemap_flush(inode->i_mapping);
+ filemap_fdatawrite(inode->i_mapping);
iput(inode);
} else {
/*
* We should submit bio, since several dentry pages under
* writeback may still exist in the freeing inode.
*/
- f2fs_submit_bio(sbi, DATA, true);
+ f2fs_submit_merged_bio(sbi, DATA, WRITE);
}
goto retry;
}
@@ -546,7 +769,7 @@
/*
* Freeze all the FS-operations for checkpoint.
*/
-static void block_operations(struct f2fs_sb_info *sbi)
+static int block_operations(struct f2fs_sb_info *sbi)
{
struct writeback_control wbc = {
.sync_mode = WB_SYNC_ALL,
@@ -554,16 +777,20 @@
.for_reclaim = 0,
};
struct blk_plug plug;
+ int err = 0;
blk_start_plug(&plug);
retry_flush_dents:
- mutex_lock_all(sbi);
-
+ f2fs_lock_all(sbi);
/* write all the dirty dentry pages */
if (get_pages(sbi, F2FS_DIRTY_DENTS)) {
- mutex_unlock_all(sbi);
+ f2fs_unlock_all(sbi);
sync_dirty_dir_inodes(sbi);
+ if (unlikely(f2fs_cp_error(sbi))) {
+ err = -EIO;
+ goto out;
+ }
goto retry_flush_dents;
}
@@ -572,36 +799,70 @@
* until finishing nat/sit flush.
*/
retry_flush_nodes:
- mutex_lock(&sbi->node_write);
+ down_write(&sbi->node_write);
if (get_pages(sbi, F2FS_DIRTY_NODES)) {
- mutex_unlock(&sbi->node_write);
+ up_write(&sbi->node_write);
sync_node_pages(sbi, 0, &wbc);
+ if (unlikely(f2fs_cp_error(sbi))) {
+ f2fs_unlock_all(sbi);
+ err = -EIO;
+ goto out;
+ }
goto retry_flush_nodes;
}
+out:
blk_finish_plug(&plug);
+ return err;
}
static void unblock_operations(struct f2fs_sb_info *sbi)
{
- mutex_unlock(&sbi->node_write);
- mutex_unlock_all(sbi);
+ up_write(&sbi->node_write);
+ f2fs_unlock_all(sbi);
}
-static void do_checkpoint(struct f2fs_sb_info *sbi, bool is_umount)
+static void wait_on_all_pages_writeback(struct f2fs_sb_info *sbi)
+{
+ DEFINE_WAIT(wait);
+
+ for (;;) {
+ prepare_to_wait(&sbi->cp_wait, &wait, TASK_UNINTERRUPTIBLE);
+
+ if (!get_pages(sbi, F2FS_WRITEBACK))
+ break;
+
+ io_schedule();
+ }
+ finish_wait(&sbi->cp_wait, &wait);
+}
+
+static void do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
{
struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
- nid_t last_nid = 0;
+ struct curseg_info *curseg = CURSEG_I(sbi, CURSEG_WARM_NODE);
+ struct f2fs_nm_info *nm_i = NM_I(sbi);
+ nid_t last_nid = nm_i->next_scan_nid;
block_t start_blk;
struct page *cp_page;
unsigned int data_sum_blocks, orphan_blocks;
- unsigned int crc32 = 0;
+ __u32 crc32 = 0;
void *kaddr;
int i;
+ int cp_payload_blks = le32_to_cpu(F2FS_RAW_SUPER(sbi)->cp_payload);
+
+ /*
+ * This avoids conducting wrong roll-forward operations and uses
+ * meta pages, so it should be called prior to sync_meta_pages below.
+ */
+ discard_next_dnode(sbi, NEXT_FREE_BLKADDR(sbi, curseg));
/* Flush all the NAT/SIT pages */
- while (get_pages(sbi, F2FS_DIRTY_META))
+ while (get_pages(sbi, F2FS_DIRTY_META)) {
sync_meta_pages(sbi, META, LONG_MAX);
+ if (unlikely(f2fs_cp_error(sbi)))
+ return;
+ }
next_free_nid(sbi, &last_nid);
@@ -612,7 +873,7 @@
ckpt->elapsed_time = cpu_to_le64(get_mtime(sbi));
ckpt->valid_block_count = cpu_to_le64(valid_user_blocks(sbi));
ckpt->free_segment_count = cpu_to_le32(free_segments(sbi));
- for (i = 0; i < 3; i++) {
+ for (i = 0; i < NR_CURSEG_NODE_TYPE; i++) {
ckpt->cur_node_segno[i] =
cpu_to_le32(curseg_segno(sbi, i + CURSEG_HOT_NODE));
ckpt->cur_node_blkoff[i] =
@@ -620,7 +881,7 @@
ckpt->alloc_type[i + CURSEG_HOT_NODE] =
curseg_alloc_type(sbi, i + CURSEG_HOT_NODE);
}
- for (i = 0; i < 3; i++) {
+ for (i = 0; i < NR_CURSEG_DATA_TYPE; i++) {
ckpt->cur_data_segno[i] =
cpu_to_le32(curseg_segno(sbi, i + CURSEG_HOT_DATA));
ckpt->cur_data_blkoff[i] =
@@ -635,23 +896,25 @@
/* 2 cp + n data seg summary + orphan inode blocks */
data_sum_blocks = npages_for_summary_flush(sbi);
- if (data_sum_blocks < 3)
+ if (data_sum_blocks < NR_CURSEG_DATA_TYPE)
set_ckpt_flags(ckpt, CP_COMPACT_SUM_FLAG);
else
clear_ckpt_flags(ckpt, CP_COMPACT_SUM_FLAG);
- orphan_blocks = (sbi->n_orphans + F2FS_ORPHANS_PER_BLOCK - 1)
- / F2FS_ORPHANS_PER_BLOCK;
- ckpt->cp_pack_start_sum = cpu_to_le32(1 + orphan_blocks);
+ orphan_blocks = GET_ORPHAN_BLOCKS(sbi->n_orphans);
+ ckpt->cp_pack_start_sum = cpu_to_le32(1 + cp_payload_blks +
+ orphan_blocks);
- if (is_umount) {
+ if (cpc->reason == CP_UMOUNT) {
set_ckpt_flags(ckpt, CP_UMOUNT_FLAG);
- ckpt->cp_pack_total_block_count = cpu_to_le32(2 +
- data_sum_blocks + orphan_blocks + NR_CURSEG_NODE_TYPE);
+ ckpt->cp_pack_total_block_count = cpu_to_le32(F2FS_CP_PACKS +
+ cp_payload_blks + data_sum_blocks +
+ orphan_blocks + NR_CURSEG_NODE_TYPE);
} else {
clear_ckpt_flags(ckpt, CP_UMOUNT_FLAG);
- ckpt->cp_pack_total_block_count = cpu_to_le32(2 +
- data_sum_blocks + orphan_blocks);
+ ckpt->cp_pack_total_block_count = cpu_to_le32(F2FS_CP_PACKS +
+ cp_payload_blks + data_sum_blocks +
+ orphan_blocks);
}
if (sbi->n_orphans)
@@ -659,13 +922,16 @@
else
clear_ckpt_flags(ckpt, CP_ORPHAN_PRESENT_FLAG);
+ if (sbi->need_fsck)
+ set_ckpt_flags(ckpt, CP_FSCK_FLAG);
+
/* update SIT/NAT bitmap */
get_sit_bitmap(sbi, __bitmap_ptr(sbi, SIT_BITMAP));
get_nat_bitmap(sbi, __bitmap_ptr(sbi, NAT_BITMAP));
crc32 = f2fs_crc32(ckpt, le32_to_cpu(ckpt->checksum_offset));
- *(__le32 *)((unsigned char *)ckpt +
- le32_to_cpu(ckpt->checksum_offset))
+ *((__le32 *)((unsigned char *)ckpt +
+ le32_to_cpu(ckpt->checksum_offset)))
= cpu_to_le32(crc32);
start_blk = __start_cp_addr(sbi);
@@ -677,6 +943,15 @@
set_page_dirty(cp_page);
f2fs_put_page(cp_page, 1);
+ for (i = 1; i < 1 + cp_payload_blks; i++) {
+ cp_page = grab_meta_page(sbi, start_blk++);
+ kaddr = page_address(cp_page);
+ memcpy(kaddr, (char *)ckpt + i * F2FS_BLKSIZE,
+ (1 << sbi->log_blocksize));
+ set_page_dirty(cp_page);
+ f2fs_put_page(cp_page, 1);
+ }
+
if (sbi->n_orphans) {
write_orphan_inodes(sbi, start_blk);
start_blk += orphan_blocks;
@@ -684,7 +959,7 @@
write_data_summaries(sbi, start_blk);
start_blk += data_sum_blocks;
- if (is_umount) {
+ if (cpc->reason == CP_UMOUNT) {
write_node_summaries(sbi, start_blk);
start_blk += NR_CURSEG_NODE_TYPE;
}
@@ -697,11 +972,13 @@
f2fs_put_page(cp_page, 1);
/* wait for previous submitted node/meta pages writeback */
- while (get_pages(sbi, F2FS_WRITEBACK))
- congestion_wait(BLK_RW_ASYNC, HZ / 50);
+ wait_on_all_pages_writeback(sbi);
- filemap_fdatawait_range(sbi->node_inode->i_mapping, 0, LONG_MAX);
- filemap_fdatawait_range(sbi->meta_inode->i_mapping, 0, LONG_MAX);
+ if (unlikely(f2fs_cp_error(sbi)))
+ return;
+
+ filemap_fdatawait_range(NODE_MAPPING(sbi), 0, LONG_MAX);
+ filemap_fdatawait_range(META_MAPPING(sbi), 0, LONG_MAX);
/* update user_block_counts */
sbi->last_valid_block_count = sbi->total_valid_block_count;
@@ -710,69 +987,93 @@
/* Here, we only have one bio having CP pack */
sync_meta_pages(sbi, META_FLUSH, LONG_MAX);
- if (!is_set_ckpt_flags(ckpt, CP_ERROR_FLAG)) {
- clear_prefree_segments(sbi);
- F2FS_RESET_SB_DIRT(sbi);
- }
+ release_dirty_inode(sbi);
+
+ if (unlikely(f2fs_cp_error(sbi)))
+ return;
+
+ clear_prefree_segments(sbi);
+ F2FS_RESET_SB_DIRT(sbi);
}
/*
* We guarantee that this checkpoint procedure should not fail.
*/
-void write_checkpoint(struct f2fs_sb_info *sbi, bool is_umount)
+void write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
{
struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
unsigned long long ckpt_ver;
- trace_f2fs_write_checkpoint(sbi->sb, is_umount, "start block_ops");
+ trace_f2fs_write_checkpoint(sbi->sb, cpc->reason, "start block_ops");
mutex_lock(&sbi->cp_mutex);
- block_operations(sbi);
- trace_f2fs_write_checkpoint(sbi->sb, is_umount, "finish block_ops");
+ if (!sbi->s_dirty && cpc->reason != CP_DISCARD)
+ goto out;
+ if (unlikely(f2fs_cp_error(sbi)))
+ goto out;
+ if (block_operations(sbi))
+ goto out;
- f2fs_submit_bio(sbi, DATA, true);
- f2fs_submit_bio(sbi, NODE, true);
- f2fs_submit_bio(sbi, META, true);
+ trace_f2fs_write_checkpoint(sbi->sb, cpc->reason, "finish block_ops");
+
+ f2fs_submit_merged_bio(sbi, DATA, WRITE);
+ f2fs_submit_merged_bio(sbi, NODE, WRITE);
+ f2fs_submit_merged_bio(sbi, META, WRITE);
/*
* update checkpoint pack index
* Increase the version number so that
* SIT entries and seg summaries are written at correct place
*/
- ckpt_ver = le64_to_cpu(ckpt->checkpoint_ver);
+ ckpt_ver = cur_cp_version(ckpt);
ckpt->checkpoint_ver = cpu_to_le64(++ckpt_ver);
/* write cached NAT/SIT entries to NAT/SIT area */
flush_nat_entries(sbi);
- flush_sit_entries(sbi);
+ flush_sit_entries(sbi, cpc);
/* unlock all the fs_lock[] in do_checkpoint() */
- do_checkpoint(sbi, is_umount);
+ do_checkpoint(sbi, cpc);
unblock_operations(sbi);
+ stat_inc_cp_count(sbi->stat_info);
+out:
mutex_unlock(&sbi->cp_mutex);
-
- trace_f2fs_write_checkpoint(sbi->sb, is_umount, "finish checkpoint");
+ trace_f2fs_write_checkpoint(sbi->sb, cpc->reason, "finish checkpoint");
}
-void init_orphan_info(struct f2fs_sb_info *sbi)
+void init_ino_entry_info(struct f2fs_sb_info *sbi)
{
- mutex_init(&sbi->orphan_inode_mutex);
- INIT_LIST_HEAD(&sbi->orphan_inode_list);
+ int i;
+
+ for (i = 0; i < MAX_INO_ENTRY; i++) {
+ INIT_RADIX_TREE(&sbi->ino_root[i], GFP_ATOMIC);
+ spin_lock_init(&sbi->ino_lock[i]);
+ INIT_LIST_HEAD(&sbi->ino_list[i]);
+ }
+
+ /*
+ * Considering 512 blocks in a segment, 8 blocks are needed for the cp
+ * pack and log segment summaries. The remaining blocks are used to
+ * keep orphan entries. With one segment reserved for the cp pack, we
+ * can have at most 1020 * 504 orphan entries.
+ */
sbi->n_orphans = 0;
+ sbi->max_orphans = (sbi->blocks_per_seg - F2FS_CP_PACKS -
+ NR_CURSEG_TYPE) * F2FS_ORPHANS_PER_BLOCK;
}
int __init create_checkpoint_caches(void)
{
- orphan_entry_slab = f2fs_kmem_cache_create("f2fs_orphan_entry",
- sizeof(struct orphan_inode_entry), NULL);
- if (unlikely(!orphan_entry_slab))
+ ino_entry_slab = f2fs_kmem_cache_create("f2fs_ino_entry",
+ sizeof(struct ino_entry));
+ if (!ino_entry_slab)
return -ENOMEM;
inode_entry_slab = f2fs_kmem_cache_create("f2fs_dirty_dir_entry",
- sizeof(struct dir_inode_entry), NULL);
- if (unlikely(!inode_entry_slab)) {
- kmem_cache_destroy(orphan_entry_slab);
+ sizeof(struct dir_inode_entry));
+ if (!inode_entry_slab) {
+ kmem_cache_destroy(ino_entry_slab);
return -ENOMEM;
}
return 0;
@@ -780,6 +1081,6 @@
void destroy_checkpoint_caches(void)
{
- kmem_cache_destroy(orphan_entry_slab);
+ kmem_cache_destroy(ino_entry_slab);
kmem_cache_destroy(inode_entry_slab);
}
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index ce11d9a..5e50ab4 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -14,6 +14,7 @@
#include <linux/mpage.h>
#include <linux/aio.h>
#include <linux/writeback.h>
+#include <linux/mount.h>
#include <linux/backing-dev.h>
#include <linux/blkdev.h>
#include <linux/bio.h>
@@ -24,6 +25,198 @@
#include "segment.h"
#include <trace/events/f2fs.h>
+static void f2fs_read_end_io(struct bio *bio, int err)
+{
+ struct bio_vec *bvec = bio->bi_io_vec + bio->bi_vcnt - 1;
+
+ do {
+ struct page *page = bvec->bv_page;
+
+ if (--bvec >= bio->bi_io_vec)
+ prefetchw(&bvec->bv_page->flags);
+
+ if (!err) {
+ SetPageUptodate(page);
+ } else {
+ ClearPageUptodate(page);
+ SetPageError(page);
+ }
+ unlock_page(page);
+ } while (bvec >= bio->bi_io_vec);
+
+ bio_put(bio);
+}
+
+static void f2fs_write_end_io(struct bio *bio, int err)
+{
+ struct bio_vec *bvec = bio->bi_io_vec + bio->bi_vcnt - 1;
+ struct f2fs_sb_info *sbi = bio->bi_private;
+
+ do {
+ struct page *page = bvec->bv_page;
+
+ if (--bvec >= bio->bi_io_vec)
+ prefetchw(&bvec->bv_page->flags);
+
+ if (unlikely(err)) {
+ set_page_dirty(page);
+ set_bit(AS_EIO, &page->mapping->flags);
+ f2fs_stop_checkpoint(sbi);
+ }
+ end_page_writeback(page);
+ dec_page_count(sbi, F2FS_WRITEBACK);
+ } while (bvec >= bio->bi_io_vec);
+
+ if (sbi->wait_io) {
+ complete(sbi->wait_io);
+ sbi->wait_io = NULL;
+ }
+
+ if (!get_pages(sbi, F2FS_WRITEBACK) &&
+ !list_empty(&sbi->cp_wait.task_list))
+ wake_up(&sbi->cp_wait);
+
+ bio_put(bio);
+}
+
+/*
+ * Low-level block read/write IO operations.
+ */
+static struct bio *__bio_alloc(struct f2fs_sb_info *sbi, block_t blk_addr,
+ int npages, bool is_read)
+{
+ struct bio *bio;
+
+ /* No failure on bio allocation */
+ bio = bio_alloc(GFP_NOIO, npages);
+
+ bio->bi_bdev = sbi->sb->s_bdev;
+ bio->bi_sector = SECTOR_FROM_BLOCK(blk_addr);
+ bio->bi_end_io = is_read ? f2fs_read_end_io : f2fs_write_end_io;
+ bio->bi_private = sbi;
+
+ return bio;
+}
+
+static void __submit_merged_bio(struct f2fs_bio_info *io)
+{
+ struct f2fs_io_info *fio = &io->fio;
+ int rw;
+
+ if (!io->bio)
+ return;
+
+ rw = fio->rw;
+
+ if (is_read_io(rw)) {
+ trace_f2fs_submit_read_bio(io->sbi->sb, rw,
+ fio->type, io->bio);
+ submit_bio(rw, io->bio);
+ } else {
+ trace_f2fs_submit_write_bio(io->sbi->sb, rw,
+ fio->type, io->bio);
+ /*
+ * META_FLUSH is issued only by the checkpoint procedure, and we
+ * should wait on this metadata bio for FS consistency.
+ */
+ if (fio->type == META_FLUSH) {
+ DECLARE_COMPLETION_ONSTACK(wait);
+ io->sbi->wait_io = &wait;
+ submit_bio(rw, io->bio);
+ wait_for_completion(&wait);
+ } else {
+ submit_bio(rw, io->bio);
+ }
+ }
+
+ io->bio = NULL;
+}
+
+void f2fs_submit_merged_bio(struct f2fs_sb_info *sbi,
+ enum page_type type, int rw)
+{
+ enum page_type btype = PAGE_TYPE_OF_BIO(type);
+ struct f2fs_bio_info *io;
+
+ io = is_read_io(rw) ? &sbi->read_io : &sbi->write_io[btype];
+
+ down_write(&io->io_rwsem);
+
+ /* change META to META_FLUSH in the checkpoint procedure */
+ if (type >= META_FLUSH) {
+ io->fio.type = META_FLUSH;
+ if (test_opt(sbi, NOBARRIER))
+ io->fio.rw = WRITE_FLUSH | REQ_META | REQ_PRIO;
+ else
+ io->fio.rw = WRITE_FLUSH_FUA | REQ_META | REQ_PRIO;
+ }
+ __submit_merged_bio(io);
+ up_write(&io->io_rwsem);
+}
+
+/*
+ * Fill the locked page with data located in the block address.
+ * Return unlocked page.
+ */
+int f2fs_submit_page_bio(struct f2fs_sb_info *sbi, struct page *page,
+ block_t blk_addr, int rw)
+{
+ struct bio *bio;
+
+ trace_f2fs_submit_page_bio(page, blk_addr, rw);
+
+ /* Allocate a new bio */
+ bio = __bio_alloc(sbi, blk_addr, 1, is_read_io(rw));
+
+ if (bio_add_page(bio, page, PAGE_CACHE_SIZE, 0) < PAGE_CACHE_SIZE) {
+ bio_put(bio);
+ f2fs_put_page(page, 1);
+ return -EFAULT;
+ }
+
+ submit_bio(rw, bio);
+ return 0;
+}
+
+void f2fs_submit_page_mbio(struct f2fs_sb_info *sbi, struct page *page,
+ block_t blk_addr, struct f2fs_io_info *fio)
+{
+ enum page_type btype = PAGE_TYPE_OF_BIO(fio->type);
+ struct f2fs_bio_info *io;
+ bool is_read = is_read_io(fio->rw);
+
+ io = is_read ? &sbi->read_io : &sbi->write_io[btype];
+
+ verify_block_addr(sbi, blk_addr);
+
+ down_write(&io->io_rwsem);
+
+ if (!is_read)
+ inc_page_count(sbi, F2FS_WRITEBACK);
+
+ if (io->bio && (io->last_block_in_bio != blk_addr - 1 ||
+ io->fio.rw != fio->rw))
+ __submit_merged_bio(io);
+alloc_new:
+ if (io->bio == NULL) {
+ int bio_blocks = MAX_BIO_BLOCKS(sbi);
+
+ io->bio = __bio_alloc(sbi, blk_addr, bio_blocks, is_read);
+ io->fio = *fio;
+ }
+
+ if (bio_add_page(io->bio, page, PAGE_CACHE_SIZE, 0) <
+ PAGE_CACHE_SIZE) {
+ __submit_merged_bio(io);
+ goto alloc_new;
+ }
+
+ io->last_block_in_bio = blk_addr;
+
+ up_write(&io->io_rwsem);
+ trace_f2fs_submit_page_mbio(page, fio->rw, fio->type, blk_addr);
+}
+
/*
* Lock ordering for the change of data block address:
* ->data_page
@@ -37,9 +230,9 @@
struct page *node_page = dn->node_page;
unsigned int ofs_in_node = dn->ofs_in_node;
- wait_on_page_writeback(node_page);
+ f2fs_wait_on_page_writeback(node_page, NODE);
- rn = (struct f2fs_node *)page_address(node_page);
+ rn = F2FS_NODE(node_page);
/* Get physical address of data block */
addr_array = blkaddr_in_node(rn);
@@ -49,36 +242,59 @@
int reserve_new_block(struct dnode_of_data *dn)
{
- struct f2fs_sb_info *sbi = F2FS_SB(dn->inode->i_sb);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
- if (is_inode_flag_set(F2FS_I(dn->inode), FI_NO_ALLOC))
+ if (unlikely(is_inode_flag_set(F2FS_I(dn->inode), FI_NO_ALLOC)))
return -EPERM;
- if (!inc_valid_block_count(sbi, dn->inode, 1))
+ if (unlikely(!inc_valid_block_count(sbi, dn->inode, 1)))
return -ENOSPC;
trace_f2fs_reserve_new_block(dn->inode, dn->nid, dn->ofs_in_node);
__set_data_blkaddr(dn, NEW_ADDR);
dn->data_blkaddr = NEW_ADDR;
+ mark_inode_dirty(dn->inode);
sync_inode_page(dn);
return 0;
}
+int f2fs_reserve_block(struct dnode_of_data *dn, pgoff_t index)
+{
+ bool need_put = dn->inode_page ? false : true;
+ int err;
+
+ /* if inode_page exists, index should be zero */
+ f2fs_bug_on(F2FS_I_SB(dn->inode), !need_put && index);
+
+ err = get_dnode_of_data(dn, index, ALLOC_NODE);
+ if (err)
+ return err;
+
+ if (dn->data_blkaddr == NULL_ADDR)
+ err = reserve_new_block(dn);
+ if (err || need_put)
+ f2fs_put_dnode(dn);
+ return err;
+}
+
static int check_extent_cache(struct inode *inode, pgoff_t pgofs,
struct buffer_head *bh_result)
{
struct f2fs_inode_info *fi = F2FS_I(inode);
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
pgoff_t start_fofs, end_fofs;
block_t start_blkaddr;
+ if (is_inode_flag_set(fi, FI_NO_EXTENT))
+ return 0;
+
read_lock(&fi->ext.ext_lock);
if (fi->ext.len == 0) {
read_unlock(&fi->ext.ext_lock);
return 0;
}
- sbi->total_hit_ext++;
+ stat_inc_total_hit(inode->i_sb);
+
start_fofs = fi->ext.fofs;
end_fofs = fi->ext.fofs + fi->ext.len - 1;
start_blkaddr = fi->ext.blk_addr;
@@ -96,7 +312,7 @@
else
bh_result->b_size = UINT_MAX;
- sbi->read_hit_ext++;
+ stat_inc_read_hit(inode->i_sb);
read_unlock(&fi->ext.ext_lock);
return 1;
}
@@ -109,13 +325,18 @@
struct f2fs_inode_info *fi = F2FS_I(dn->inode);
pgoff_t fofs, start_fofs, end_fofs;
block_t start_blkaddr, end_blkaddr;
+ int need_update = true;
- BUG_ON(blk_addr == NEW_ADDR);
- fofs = start_bidx_of_node(ofs_of_node(dn->node_page)) + dn->ofs_in_node;
+ f2fs_bug_on(F2FS_I_SB(dn->inode), blk_addr == NEW_ADDR);
+ fofs = start_bidx_of_node(ofs_of_node(dn->node_page), fi) +
+ dn->ofs_in_node;
/* Update the page address in the parent node */
__set_data_blkaddr(dn, blk_addr);
+ if (is_inode_flag_set(fi, FI_NO_EXTENT))
+ return;
+
write_lock(&fi->ext.ext_lock);
start_fofs = fi->ext.fofs;
@@ -162,20 +383,25 @@
fofs - start_fofs + 1;
fi->ext.len -= fofs - start_fofs + 1;
}
- goto end_update;
+ } else {
+ need_update = false;
}
- write_unlock(&fi->ext.ext_lock);
- return;
+ /* Finally, if the extent is very fragmented, let's drop the cache. */
+ if (fi->ext.len < F2FS_MIN_EXTENT_LEN) {
+ fi->ext.len = 0;
+ set_inode_flag(fi, FI_NO_EXTENT);
+ need_update = true;
+ }
end_update:
write_unlock(&fi->ext.ext_lock);
- sync_inode_page(dn);
+ if (need_update)
+ sync_inode_page(dn);
return;
}
struct page *find_data_page(struct inode *inode, pgoff_t index, bool sync)
{
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
struct address_space *mapping = inode->i_mapping;
struct dnode_of_data dn;
struct page *page;
@@ -196,7 +422,7 @@
return ERR_PTR(-ENOENT);
/* By fallocate(), there is no cached page, but with NEW_ADDR */
- if (dn.data_blkaddr == NEW_ADDR)
+ if (unlikely(dn.data_blkaddr == NEW_ADDR))
return ERR_PTR(-EINVAL);
page = grab_cache_page(mapping, index);
@@ -208,11 +434,14 @@
return page;
}
- err = f2fs_readpage(sbi, page, dn.data_blkaddr,
+ err = f2fs_submit_page_bio(F2FS_I_SB(inode), page, dn.data_blkaddr,
sync ? READ_SYNC : READA);
+ if (err)
+ return ERR_PTR(err);
+
if (sync) {
wait_on_page_locked(page);
- if (!PageUptodate(page)) {
+ if (unlikely(!PageUptodate(page))) {
f2fs_put_page(page, 0);
return ERR_PTR(-EIO);
}
@@ -227,41 +456,55 @@
*/
struct page *get_lock_data_page(struct inode *inode, pgoff_t index)
{
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
struct address_space *mapping = inode->i_mapping;
struct dnode_of_data dn;
struct page *page;
int err;
- set_new_dnode(&dn, inode, NULL, NULL, 0);
- err = get_dnode_of_data(&dn, index, LOOKUP_NODE);
- if (err)
- return ERR_PTR(err);
- f2fs_put_dnode(&dn);
-
- if (dn.data_blkaddr == NULL_ADDR)
- return ERR_PTR(-ENOENT);
repeat:
page = grab_cache_page(mapping, index);
if (!page)
return ERR_PTR(-ENOMEM);
+ set_new_dnode(&dn, inode, NULL, NULL, 0);
+ err = get_dnode_of_data(&dn, index, LOOKUP_NODE);
+ if (err) {
+ f2fs_put_page(page, 1);
+ return ERR_PTR(err);
+ }
+ f2fs_put_dnode(&dn);
+
+ if (unlikely(dn.data_blkaddr == NULL_ADDR)) {
+ f2fs_put_page(page, 1);
+ return ERR_PTR(-ENOENT);
+ }
+
if (PageUptodate(page))
return page;
- BUG_ON(dn.data_blkaddr == NEW_ADDR);
- BUG_ON(dn.data_blkaddr == NULL_ADDR);
+ /*
+ * A new dentry page was allocated but could not be written, since its
+ * new inode page couldn't be allocated due to -ENOSPC.
+ * In that case, its blkaddr may remain NEW_ADDR.
+ * See f2fs_add_link -> get_new_data_page -> init_inode_metadata.
+ */
+ if (dn.data_blkaddr == NEW_ADDR) {
+ zero_user_segment(page, 0, PAGE_CACHE_SIZE);
+ SetPageUptodate(page);
+ return page;
+ }
- err = f2fs_readpage(sbi, page, dn.data_blkaddr, READ_SYNC);
+ err = f2fs_submit_page_bio(F2FS_I_SB(inode), page,
+ dn.data_blkaddr, READ_SYNC);
if (err)
return ERR_PTR(err);
lock_page(page);
- if (!PageUptodate(page)) {
+ if (unlikely(!PageUptodate(page))) {
f2fs_put_page(page, 1);
return ERR_PTR(-EIO);
}
- if (page->mapping != mapping) {
+ if (unlikely(page->mapping != mapping)) {
f2fs_put_page(page, 1);
goto repeat;
}
@@ -272,34 +515,28 @@
* Caller ensures that this data page is never allocated.
* A new zero-filled data page is allocated in the page cache.
*
- * Also, caller should grab and release a mutex by calling mutex_lock_op() and
- * mutex_unlock_op().
+ * Also, caller should grab and release a rwsem by calling f2fs_lock_op() and
+ * f2fs_unlock_op().
+ * Note that ipage is set only by make_empty_dir.
*/
-struct page *get_new_data_page(struct inode *inode, pgoff_t index,
- bool new_i_size)
+struct page *get_new_data_page(struct inode *inode,
+ struct page *ipage, pgoff_t index, bool new_i_size)
{
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
struct address_space *mapping = inode->i_mapping;
struct page *page;
struct dnode_of_data dn;
int err;
- set_new_dnode(&dn, inode, NULL, NULL, 0);
- err = get_dnode_of_data(&dn, index, ALLOC_NODE);
+ set_new_dnode(&dn, inode, ipage, NULL, 0);
+ err = f2fs_reserve_block(&dn, index);
if (err)
return ERR_PTR(err);
-
- if (dn.data_blkaddr == NULL_ADDR) {
- if (reserve_new_block(&dn)) {
- f2fs_put_dnode(&dn);
- return ERR_PTR(-ENOSPC);
- }
- }
- f2fs_put_dnode(&dn);
repeat:
page = grab_cache_page(mapping, index);
- if (!page)
- return ERR_PTR(-ENOMEM);
+ if (!page) {
+ err = -ENOMEM;
+ goto put_err;
+ }
if (PageUptodate(page))
return page;
@@ -308,15 +545,18 @@
zero_user_segment(page, 0, PAGE_CACHE_SIZE);
SetPageUptodate(page);
} else {
- err = f2fs_readpage(sbi, page, dn.data_blkaddr, READ_SYNC);
+ err = f2fs_submit_page_bio(F2FS_I_SB(inode), page,
+ dn.data_blkaddr, READ_SYNC);
if (err)
- return ERR_PTR(err);
+ goto put_err;
+
lock_page(page);
- if (!PageUptodate(page)) {
+ if (unlikely(!PageUptodate(page))) {
f2fs_put_page(page, 1);
- return ERR_PTR(-EIO);
+ err = -EIO;
+ goto put_err;
}
- if (page->mapping != mapping) {
+ if (unlikely(page->mapping != mapping)) {
f2fs_put_page(page, 1);
goto repeat;
}
@@ -325,142 +565,217 @@
if (new_i_size &&
i_size_read(inode) < ((index + 1) << PAGE_CACHE_SHIFT)) {
i_size_write(inode, ((index + 1) << PAGE_CACHE_SHIFT));
- mark_inode_dirty_sync(inode);
+ /* Only the directory inode sets new_i_size */
+ set_inode_flag(F2FS_I(inode), FI_UPDATE_DIR);
}
return page;
+
+put_err:
+ f2fs_put_dnode(&dn);
+ return ERR_PTR(err);
}
-static void read_end_io(struct bio *bio, int err)
+static int __allocate_data_block(struct dnode_of_data *dn)
{
- const int uptodate = test_bit(BIO_UPTODATE, &bio->bi_flags);
- struct bio_vec *bvec = bio->bi_io_vec + bio->bi_vcnt - 1;
+ struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
+ struct f2fs_inode_info *fi = F2FS_I(dn->inode);
+ struct f2fs_summary sum;
+ block_t new_blkaddr;
+ struct node_info ni;
+ pgoff_t fofs;
+ int type;
- do {
- struct page *page = bvec->bv_page;
+ if (unlikely(is_inode_flag_set(F2FS_I(dn->inode), FI_NO_ALLOC)))
+ return -EPERM;
+ if (unlikely(!inc_valid_block_count(sbi, dn->inode, 1)))
+ return -ENOSPC;
- if (--bvec >= bio->bi_io_vec)
- prefetchw(&bvec->bv_page->flags);
+ __set_data_blkaddr(dn, NEW_ADDR);
+ dn->data_blkaddr = NEW_ADDR;
- if (uptodate) {
- SetPageUptodate(page);
- } else {
- ClearPageUptodate(page);
- SetPageError(page);
- }
- unlock_page(page);
- } while (bvec >= bio->bi_io_vec);
- kfree(bio->bi_private);
- bio_put(bio);
-}
+ get_node_info(sbi, dn->nid, &ni);
+ set_summary(&sum, dn->nid, dn->ofs_in_node, ni.version);
-/*
- * Fill the locked page with data located in the block address.
- * Return unlocked page.
- */
-int f2fs_readpage(struct f2fs_sb_info *sbi, struct page *page,
- block_t blk_addr, int type)
-{
- struct block_device *bdev = sbi->sb->s_bdev;
- struct bio *bio;
+ type = CURSEG_WARM_DATA;
- trace_f2fs_readpage(page, blk_addr, type);
+ allocate_data_block(sbi, NULL, NULL_ADDR, &new_blkaddr, &sum, type);
- down_read(&sbi->bio_sem);
+ /* direct IO doesn't use the extent cache, to maximize performance */
+ set_inode_flag(F2FS_I(dn->inode), FI_NO_EXTENT);
+ update_extent_cache(new_blkaddr, dn);
+ clear_inode_flag(F2FS_I(dn->inode), FI_NO_EXTENT);
- /* Allocate a new bio */
- bio = f2fs_bio_alloc(bdev, 1);
+ /* update i_size */
+ fofs = start_bidx_of_node(ofs_of_node(dn->node_page), fi) +
+ dn->ofs_in_node;
+ if (i_size_read(dn->inode) < ((fofs + 1) << PAGE_CACHE_SHIFT))
+ i_size_write(dn->inode, ((fofs + 1) << PAGE_CACHE_SHIFT));
- /* Initialize the bio */
- bio->bi_sector = SECTOR_FROM_BLOCK(sbi, blk_addr);
- bio->bi_end_io = read_end_io;
-
- if (bio_add_page(bio, page, PAGE_CACHE_SIZE, 0) < PAGE_CACHE_SIZE) {
- kfree(bio->bi_private);
- bio_put(bio);
- up_read(&sbi->bio_sem);
- f2fs_put_page(page, 1);
- return -EFAULT;
- }
-
- submit_bio(type, bio);
- up_read(&sbi->bio_sem);
+ dn->data_blkaddr = new_blkaddr;
return 0;
}
/*
- * This function should be used by the data read flow only where it
- * does not check the "create" flag that indicates block allocation.
- * The reason for this special functionality is to exploit VFS readahead
- * mechanism.
+ * get_data_block() now supported readahead/bmap/rw direct_IO with mapped bh.
+ * If original data blocks are allocated, then give them to blockdev.
+ * Otherwise,
+ * a. preallocate requested block addresses
+ * b. do not use extent cache for better performance
+ * c. give the block addresses to blockdev
*/
-static int get_data_block_ro(struct inode *inode, sector_t iblock,
- struct buffer_head *bh_result, int create)
+static int __get_data_block(struct inode *inode, sector_t iblock,
+ struct buffer_head *bh_result, int create, bool fiemap)
{
unsigned int blkbits = inode->i_sb->s_blocksize_bits;
unsigned maxblocks = bh_result->b_size >> blkbits;
struct dnode_of_data dn;
- pgoff_t pgofs;
- int err;
+ int mode = create ? ALLOC_NODE : LOOKUP_NODE_RA;
+ pgoff_t pgofs, end_offset;
+ int err = 0, ofs = 1;
+ bool allocated = false;
/* Get the page offset from the block offset(iblock) */
pgofs = (pgoff_t)(iblock >> (PAGE_CACHE_SHIFT - blkbits));
- if (check_extent_cache(inode, pgofs, bh_result)) {
- trace_f2fs_get_data_block(inode, iblock, bh_result, 0);
- return 0;
+ if (check_extent_cache(inode, pgofs, bh_result))
+ goto out;
+
+ if (create) {
+ f2fs_balance_fs(F2FS_I_SB(inode));
+ f2fs_lock_op(F2FS_I_SB(inode));
}
/* When reading holes, we need its node page */
set_new_dnode(&dn, inode, NULL, NULL, 0);
- err = get_dnode_of_data(&dn, pgofs, LOOKUP_NODE_RA);
+ err = get_dnode_of_data(&dn, pgofs, mode);
if (err) {
- trace_f2fs_get_data_block(inode, iblock, bh_result, err);
- return (err == -ENOENT) ? 0 : err;
+ if (err == -ENOENT)
+ err = 0;
+ goto unlock_out;
}
+ if (dn.data_blkaddr == NEW_ADDR && !fiemap)
+ goto put_out;
- /* It does not support data allocation */
- BUG_ON(create);
-
- if (dn.data_blkaddr != NEW_ADDR && dn.data_blkaddr != NULL_ADDR) {
- int i;
- unsigned int end_offset;
-
- end_offset = IS_INODE(dn.node_page) ?
- ADDRS_PER_INODE :
- ADDRS_PER_BLOCK;
-
- clear_buffer_new(bh_result);
-
- /* Give more consecutive addresses for the read ahead */
- for (i = 0; i < end_offset - dn.ofs_in_node; i++)
- if (((datablock_addr(dn.node_page,
- dn.ofs_in_node + i))
- != (dn.data_blkaddr + i)) || maxblocks == i)
- break;
+ if (dn.data_blkaddr != NULL_ADDR) {
map_bh(bh_result, inode->i_sb, dn.data_blkaddr);
- bh_result->b_size = (i << blkbits);
+ } else if (create) {
+ err = __allocate_data_block(&dn);
+ if (err)
+ goto put_out;
+ allocated = true;
+ map_bh(bh_result, inode->i_sb, dn.data_blkaddr);
+ } else {
+ goto put_out;
}
+
+ end_offset = ADDRS_PER_PAGE(dn.node_page, F2FS_I(inode));
+ bh_result->b_size = (((size_t)1) << blkbits);
+ dn.ofs_in_node++;
+ pgofs++;
+
+get_next:
+ if (dn.ofs_in_node >= end_offset) {
+ if (allocated)
+ sync_inode_page(&dn);
+ allocated = false;
+ f2fs_put_dnode(&dn);
+
+ set_new_dnode(&dn, inode, NULL, NULL, 0);
+ err = get_dnode_of_data(&dn, pgofs, mode);
+ if (err) {
+ if (err == -ENOENT)
+ err = 0;
+ goto unlock_out;
+ }
+ if (dn.data_blkaddr == NEW_ADDR && !fiemap)
+ goto put_out;
+
+ end_offset = ADDRS_PER_PAGE(dn.node_page, F2FS_I(inode));
+ }
+
+ if (maxblocks > (bh_result->b_size >> blkbits)) {
+ block_t blkaddr = datablock_addr(dn.node_page, dn.ofs_in_node);
+ if (blkaddr == NULL_ADDR && create) {
+ err = __allocate_data_block(&dn);
+ if (err)
+ goto sync_out;
+ allocated = true;
+ blkaddr = dn.data_blkaddr;
+ }
+ /* Give more consecutive addresses for the read ahead */
+ if (blkaddr == (bh_result->b_blocknr + ofs)) {
+ ofs++;
+ dn.ofs_in_node++;
+ pgofs++;
+ bh_result->b_size += (((size_t)1) << blkbits);
+ goto get_next;
+ }
+ }
+sync_out:
+ if (allocated)
+ sync_inode_page(&dn);
+put_out:
f2fs_put_dnode(&dn);
- trace_f2fs_get_data_block(inode, iblock, bh_result, 0);
- return 0;
+unlock_out:
+ if (create)
+ f2fs_unlock_op(F2FS_I_SB(inode));
+out:
+ trace_f2fs_get_data_block(inode, iblock, bh_result, err);
+ return err;
+}
+
+static int get_data_block(struct inode *inode, sector_t iblock,
+ struct buffer_head *bh_result, int create)
+{
+ return __get_data_block(inode, iblock, bh_result, create, false);
+}
+
+static int get_data_block_fiemap(struct inode *inode, sector_t iblock,
+ struct buffer_head *bh_result, int create)
+{
+ return __get_data_block(inode, iblock, bh_result, create, true);
+}
+
+int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ u64 start, u64 len)
+{
+ return generic_block_fiemap(inode, fieinfo,
+ start, len, get_data_block_fiemap);
}
static int f2fs_read_data_page(struct file *file, struct page *page)
{
- return mpage_readpage(page, get_data_block_ro);
+ struct inode *inode = page->mapping->host;
+ int ret;
+
+ trace_f2fs_readpage(page, DATA);
+
+ /* If the file has inline data, try to read it directly */
+ if (f2fs_has_inline_data(inode))
+ ret = f2fs_read_inline_data(inode, page);
+ else
+ ret = mpage_readpage(page, get_data_block);
+
+ return ret;
}
static int f2fs_read_data_pages(struct file *file,
struct address_space *mapping,
struct list_head *pages, unsigned nr_pages)
{
- return mpage_readpages(mapping, pages, nr_pages, get_data_block_ro);
+ struct inode *inode = file->f_mapping->host;
+
+ /* If the file has inline data, skip readpages */
+ if (f2fs_has_inline_data(inode))
+ return 0;
+
+ return mpage_readpages(mapping, pages, nr_pages, get_data_block);
}
-int do_write_data_page(struct page *page)
+int do_write_data_page(struct page *page, struct f2fs_io_info *fio)
{
struct inode *inode = page->mapping->host;
- block_t old_blk_addr, new_blk_addr;
+ block_t old_blkaddr, new_blkaddr;
struct dnode_of_data dn;
int err = 0;
@@ -469,10 +784,10 @@
if (err)
return err;
- old_blk_addr = dn.data_blkaddr;
+ old_blkaddr = dn.data_blkaddr;
/* This page is already truncated */
- if (old_blk_addr == NULL_ADDR)
+ if (old_blkaddr == NULL_ADDR)
goto out_writepage;
set_page_writeback(page);
@@ -481,14 +796,15 @@
* If current allocation needs SSR,
* it had better in-place writes for updated data.
*/
- if (old_blk_addr != NEW_ADDR && !is_cold_data(page) &&
- need_inplace_update(inode)) {
- rewrite_data_page(F2FS_SB(inode->i_sb), page,
- old_blk_addr);
+ if (unlikely(old_blkaddr != NEW_ADDR &&
+ !is_cold_data(page) &&
+ need_inplace_update(inode))) {
+ rewrite_data_page(page, old_blkaddr, fio);
+ set_inode_flag(F2FS_I(inode), FI_UPDATE_WRITE);
} else {
- write_data_page(inode, page, &dn,
- old_blk_addr, &new_blk_addr);
- update_extent_cache(new_blk_addr, &dn);
+ write_data_page(page, &dn, &new_blkaddr, fio);
+ update_extent_cache(new_blkaddr, &dn);
+ set_inode_flag(F2FS_I(inode), FI_APPEND_WRITE);
}
out_writepage:
f2fs_put_dnode(&dn);
@@ -499,13 +815,19 @@
struct writeback_control *wbc)
{
struct inode *inode = page->mapping->host;
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
loff_t i_size = i_size_read(inode);
const pgoff_t end_index = ((unsigned long long) i_size)
>> PAGE_CACHE_SHIFT;
- unsigned offset;
+ unsigned offset = 0;
bool need_balance_fs = false;
int err = 0;
+ struct f2fs_io_info fio = {
+ .type = DATA,
+ .rw = (wbc->sync_mode == WB_SYNC_ALL) ? WRITE_SYNC : WRITE,
+ };
+
+ trace_f2fs_writepage(page, DATA);
if (page->index < end_index)
goto write;
@@ -515,55 +837,59 @@
* this page does not have to be written to disk.
*/
offset = i_size & (PAGE_CACHE_SIZE - 1);
- if ((page->index >= end_index + 1) || !offset) {
- if (S_ISDIR(inode->i_mode)) {
- dec_page_count(sbi, F2FS_DIRTY_DENTS);
- inode_dec_dirty_dents(inode);
- }
+ if ((page->index >= end_index + 1) || !offset)
goto out;
- }
zero_user_segment(page, offset, PAGE_CACHE_SIZE);
write:
- if (sbi->por_doing) {
- err = AOP_WRITEPAGE_ACTIVATE;
+ if (unlikely(sbi->por_doing))
goto redirty_out;
- }
/* Dentry blocks are controlled by checkpoint */
if (S_ISDIR(inode->i_mode)) {
- dec_page_count(sbi, F2FS_DIRTY_DENTS);
- inode_dec_dirty_dents(inode);
- err = do_write_data_page(page);
- } else {
- int ilock = mutex_lock_op(sbi);
- err = do_write_data_page(page);
- mutex_unlock_op(sbi, ilock);
- need_balance_fs = true;
+ if (unlikely(f2fs_cp_error(sbi)))
+ goto redirty_out;
+ err = do_write_data_page(page, &fio);
+ goto done;
}
- if (err == -ENOENT)
+
+ /* we should bypass data pages so that the kworker jobs can proceed */
+ if (unlikely(f2fs_cp_error(sbi))) {
+ SetPageError(page);
+ unlock_page(page);
goto out;
- else if (err)
+ }
+
+ if (!wbc->for_reclaim)
+ need_balance_fs = true;
+ else if (has_not_enough_free_secs(sbi, 0))
goto redirty_out;
- if (wbc->for_reclaim)
- f2fs_submit_bio(sbi, DATA, true);
+ f2fs_lock_op(sbi);
+ if (f2fs_has_inline_data(inode) || f2fs_may_inline(inode))
+ err = f2fs_write_inline_data(inode, page, offset);
+ else
+ err = do_write_data_page(page, &fio);
+ f2fs_unlock_op(sbi);
+done:
+ if (err && err != -ENOENT)
+ goto redirty_out;
clear_cold_data(page);
out:
+ inode_dec_dirty_pages(inode);
unlock_page(page);
if (need_balance_fs)
f2fs_balance_fs(sbi);
+ if (wbc->for_reclaim)
+ f2fs_submit_merged_bio(sbi, DATA, WRITE);
return 0;
redirty_out:
- wbc->pages_skipped++;
- set_page_dirty(page);
- return err;
+ redirty_page_for_writepage(wbc, page);
+ return AOP_WRITEPAGE_ACTIVATE;
}
-#define MAX_DESIRED_PAGES_WP 4096
-
static int __f2fs_writepage(struct page *page, struct writeback_control *wbc,
void *data)
{
@@ -577,20 +903,23 @@
struct writeback_control *wbc)
{
struct inode *inode = mapping->host;
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
bool locked = false;
int ret;
- long excess_nrtw = 0, desired_nrtw;
+ long diff;
+
+ trace_f2fs_writepages(mapping->host, wbc, DATA);
/* deal with chardevs and other special file */
if (!mapping->a_ops->writepage)
return 0;
- if (wbc->nr_to_write < MAX_DESIRED_PAGES_WP) {
- desired_nrtw = MAX_DESIRED_PAGES_WP;
- excess_nrtw = desired_nrtw - wbc->nr_to_write;
- wbc->nr_to_write = desired_nrtw;
- }
+ if (S_ISDIR(inode->i_mode) && wbc->sync_mode == WB_SYNC_NONE &&
+ get_dirty_pages(inode) < nr_pages_to_skip(sbi, DATA) &&
+ available_free_memory(sbi, DIRTY_DENTS))
+ goto skip_write;
+
+ diff = nr_pages_to_write(sbi, DATA, wbc);
if (!S_ISDIR(inode->i_mode)) {
mutex_lock(&sbi->writepages);
@@ -599,12 +928,26 @@
ret = write_cache_pages(mapping, wbc, __f2fs_writepage, mapping);
if (locked)
mutex_unlock(&sbi->writepages);
- f2fs_submit_bio(sbi, DATA, (wbc->sync_mode == WB_SYNC_ALL));
+
+ f2fs_submit_merged_bio(sbi, DATA, WRITE);
remove_dirty_dir_inode(inode);
- wbc->nr_to_write -= excess_nrtw;
+ wbc->nr_to_write = max((long)0, wbc->nr_to_write - diff);
return ret;
+
+skip_write:
+ wbc->pages_skipped += get_dirty_pages(inode);
+ return 0;
+}
+static void f2fs_write_failed(struct address_space *mapping, loff_t to)
+{
+ struct inode *inode = mapping->host;
+
+ if (to > inode->i_size) {
+ truncate_pagecache(inode, 0, inode->i_size);
+ truncate_blocks(inode, inode->i_size, true);
+ }
}
static int f2fs_write_begin(struct file *file, struct address_space *mapping,
@@ -612,38 +955,50 @@
struct page **pagep, void **fsdata)
{
struct inode *inode = mapping->host;
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
struct page *page;
pgoff_t index = ((unsigned long long) pos) >> PAGE_CACHE_SHIFT;
struct dnode_of_data dn;
int err = 0;
- int ilock;
- /* for nobh_write_end */
- *fsdata = NULL;
+ trace_f2fs_write_begin(inode, pos, len, flags);
f2fs_balance_fs(sbi);
repeat:
+ err = f2fs_convert_inline_data(inode, pos + len, NULL);
+ if (err)
+ goto fail;
+
page = grab_cache_page_write_begin(mapping, index, flags);
- if (!page)
- return -ENOMEM;
+ if (!page) {
+ err = -ENOMEM;
+ goto fail;
+ }
+
+ /* to avoid latency during memory pressure */
+ unlock_page(page);
+
*pagep = page;
- ilock = mutex_lock_op(sbi);
+ if (f2fs_has_inline_data(inode) && (pos + len) <= MAX_INLINE_DATA)
+ goto inline_data;
+ f2fs_lock_op(sbi);
set_new_dnode(&dn, inode, NULL, NULL, 0);
- err = get_dnode_of_data(&dn, index, ALLOC_NODE);
- if (err)
- goto err;
+ err = f2fs_reserve_block(&dn, index);
+ f2fs_unlock_op(sbi);
+ if (err) {
+ f2fs_put_page(page, 0);
+ goto fail;
+ }
+inline_data:
+ lock_page(page);
+ if (unlikely(page->mapping != mapping)) {
+ f2fs_put_page(page, 1);
+ goto repeat;
+ }
- if (dn.data_blkaddr == NULL_ADDR)
- err = reserve_new_block(&dn);
-
- f2fs_put_dnode(&dn);
- if (err)
- goto err;
-
- mutex_unlock_op(sbi, ilock);
+ f2fs_wait_on_page_writeback(page, DATA);
if ((len == PAGE_CACHE_SIZE) || PageUptodate(page))
return 0;
@@ -660,15 +1015,26 @@
if (dn.data_blkaddr == NEW_ADDR) {
zero_user_segment(page, 0, PAGE_CACHE_SIZE);
} else {
- err = f2fs_readpage(sbi, page, dn.data_blkaddr, READ_SYNC);
- if (err)
- return err;
- lock_page(page);
- if (!PageUptodate(page)) {
- f2fs_put_page(page, 1);
- return -EIO;
+ if (f2fs_has_inline_data(inode)) {
+ err = f2fs_read_inline_data(inode, page);
+ if (err) {
+ page_cache_release(page);
+ goto fail;
+ }
+ } else {
+ err = f2fs_submit_page_bio(sbi, page, dn.data_blkaddr,
+ READ_SYNC);
+ if (err)
+ goto fail;
}
- if (page->mapping != mapping) {
+
+ lock_page(page);
+ if (unlikely(!PageUptodate(page))) {
+ f2fs_put_page(page, 1);
+ err = -EIO;
+ goto fail;
+ }
+ if (unlikely(page->mapping != mapping)) {
f2fs_put_page(page, 1);
goto repeat;
}
@@ -677,36 +1043,92 @@
SetPageUptodate(page);
clear_cold_data(page);
return 0;
-
-err:
- mutex_unlock_op(sbi, ilock);
- f2fs_put_page(page, 1);
+fail:
+ f2fs_write_failed(mapping, pos + len);
return err;
}
-static ssize_t f2fs_direct_IO(int rw, struct kiocb *iocb,
+static int f2fs_write_end(struct file *file,
+ struct address_space *mapping,
+ loff_t pos, unsigned len, unsigned copied,
+ struct page *page, void *fsdata)
+{
+ struct inode *inode = page->mapping->host;
+
+ trace_f2fs_write_end(inode, pos, len, copied);
+
+ if (is_inode_flag_set(F2FS_I(inode), FI_ATOMIC_FILE))
+ get_page(page);
+ else
+ set_page_dirty(page);
+
+ if (pos + copied > i_size_read(inode)) {
+ i_size_write(inode, pos + copied);
+ mark_inode_dirty(inode);
+ update_inode_page(inode);
+ }
+
+ f2fs_put_page(page, 1);
+ return copied;
+}
+
+static int check_direct_IO(struct inode *inode, int rw,
const struct iovec *iov, loff_t offset, unsigned long nr_segs)
{
- struct file *file = iocb->ki_filp;
- struct inode *inode = file->f_mapping->host;
+ unsigned blocksize_mask = inode->i_sb->s_blocksize - 1;
+ int i;
- if (rw == WRITE)
+ if (rw == READ)
return 0;
- /* Needs synchronization with the cleaner */
- return blockdev_direct_IO(rw, iocb, inode, iov, offset, nr_segs,
- get_data_block_ro);
+ if (offset & blocksize_mask)
+ return -EINVAL;
+
+ for (i = 0; i < nr_segs; i++)
+ if (iov[i].iov_len & blocksize_mask)
+ return -EINVAL;
+ return 0;
+}
+
+static ssize_t f2fs_direct_IO(int rw, struct kiocb *iocb,
+ const struct iovec *iov, loff_t offset,
+ unsigned long nr_segs)
+{
+ struct file *file = iocb->ki_filp;
+ struct address_space *mapping = file->f_mapping;
+ struct inode *inode = mapping->host;
+ size_t count = iov_length(iov, nr_segs);
+ int err;
+
+ /* Let buffer I/O handle the inline data case. */
+ if (f2fs_has_inline_data(inode))
+ return 0;
+
+ if (check_direct_IO(inode, rw, iov, offset, nr_segs))
+ return 0;
+
+ trace_f2fs_direct_IO_enter(inode, offset, count, rw);
+
+ err = blockdev_direct_IO(rw, iocb, inode, iov, offset, nr_segs,
+ get_data_block);
+ if (err < 0 && (rw & WRITE))
+ f2fs_write_failed(mapping, offset + count);
+
+ trace_f2fs_direct_IO_exit(inode, offset, count, rw, err);
+
+ return err;
}
static void f2fs_invalidate_data_page(struct page *page, unsigned int offset,
unsigned int length)
{
struct inode *inode = page->mapping->host;
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
- if (S_ISDIR(inode->i_mode) && PageDirty(page)) {
- dec_page_count(sbi, F2FS_DIRTY_DENTS);
- inode_dec_dirty_dents(inode);
- }
+
+ if (offset % PAGE_CACHE_SIZE)
+ return;
+
+ if (PageDirty(page))
+ inode_dec_dirty_pages(inode);
ClearPagePrivate(page);
}
@@ -721,10 +1143,14 @@
struct address_space *mapping = page->mapping;
struct inode *inode = mapping->host;
+ trace_f2fs_set_page_dirty(page, DATA);
+
SetPageUptodate(page);
+ mark_inode_dirty(inode);
+
if (!PageDirty(page)) {
__set_page_dirty_nobuffers(page);
- set_dirty_dir_page(inode, page);
+ update_dirty_page(inode, page);
return 1;
}
return 0;
@@ -732,7 +1158,12 @@
static sector_t f2fs_bmap(struct address_space *mapping, sector_t block)
{
- return generic_block_bmap(mapping, block, get_data_block_ro);
+ struct inode *inode = mapping->host;
+
+ if (f2fs_has_inline_data(inode))
+ return 0;
+
+ return generic_block_bmap(mapping, block, get_data_block);
}
const struct address_space_operations f2fs_dblock_aops = {
@@ -741,7 +1172,7 @@
.writepage = f2fs_write_data_page,
.writepages = f2fs_write_data_pages,
.write_begin = f2fs_write_begin,
- .write_end = nobh_write_end,
+ .write_end = f2fs_write_end,
.set_page_dirty = f2fs_set_data_page_dirty,
.invalidatepage = f2fs_invalidate_data_page,
.releasepage = f2fs_release_data_page,
diff --git a/fs/f2fs/debug.c b/fs/f2fs/debug.c
index 8d99437..2204b46 100644
--- a/fs/f2fs/debug.c
+++ b/fs/f2fs/debug.c
@@ -24,12 +24,12 @@
#include "gc.h"
static LIST_HEAD(f2fs_stat_list);
-static struct dentry *debugfs_root;
+static struct dentry *f2fs_debugfs_root;
static DEFINE_MUTEX(f2fs_stat_mutex);
static void update_general_status(struct f2fs_sb_info *sbi)
{
- struct f2fs_stat_info *si = sbi->stat_info;
+ struct f2fs_stat_info *si = F2FS_STAT(sbi);
int i;
/* valid check of the segment numbers */
@@ -45,14 +45,15 @@
si->valid_count = valid_user_blocks(sbi);
si->valid_node_count = valid_node_count(sbi);
si->valid_inode_count = valid_inode_count(sbi);
+ si->inline_inode = sbi->inline_inode;
si->utilization = utilization(sbi);
si->free_segs = free_segments(sbi);
si->free_secs = free_sections(sbi);
si->prefree_count = prefree_segments(sbi);
si->dirty_count = dirty_segments(sbi);
- si->node_pages = sbi->node_inode->i_mapping->nrpages;
- si->meta_pages = sbi->meta_inode->i_mapping->nrpages;
+ si->node_pages = NODE_MAPPING(sbi)->nrpages;
+ si->meta_pages = META_MAPPING(sbi)->nrpages;
si->nats = NM_I(sbi)->nat_cnt;
si->sits = SIT_I(sbi)->dirty_sentries;
si->fnids = NM_I(sbi)->fcnt;
@@ -83,9 +84,8 @@
*/
static void update_sit_info(struct f2fs_sb_info *sbi)
{
- struct f2fs_stat_info *si = sbi->stat_info;
+ struct f2fs_stat_info *si = F2FS_STAT(sbi);
unsigned int blks_per_sec, hblks_per_sec, total_vblocks, bimodal, dist;
- struct sit_info *sit_i = SIT_I(sbi);
unsigned int segno, vblocks;
int ndirty = 0;
@@ -93,8 +93,7 @@
total_vblocks = 0;
blks_per_sec = sbi->segs_per_sec * (1 << sbi->log_blocks_per_seg);
hblks_per_sec = blks_per_sec / 2;
- mutex_lock(&sit_i->sentry_lock);
- for (segno = 0; segno < TOTAL_SEGS(sbi); segno += sbi->segs_per_sec) {
+ for (segno = 0; segno < MAIN_SEGS(sbi); segno += sbi->segs_per_sec) {
vblocks = get_valid_blocks(sbi, segno, sbi->segs_per_sec);
dist = abs(vblocks - hblks_per_sec);
bimodal += dist * dist;
@@ -104,8 +103,7 @@
ndirty++;
}
}
- mutex_unlock(&sit_i->sentry_lock);
- dist = TOTAL_SECS(sbi) * hblks_per_sec * hblks_per_sec / 100;
+ dist = MAIN_SECS(sbi) * hblks_per_sec * hblks_per_sec / 100;
si->bimodal = bimodal / dist;
if (si->dirty_count)
si->avg_vblocks = total_vblocks / ndirty;
@@ -118,7 +116,7 @@
*/
static void update_mem_info(struct f2fs_sb_info *sbi)
{
- struct f2fs_stat_info *si = sbi->stat_info;
+ struct f2fs_stat_info *si = F2FS_STAT(sbi);
unsigned npages;
if (si->base_mem)
@@ -133,17 +131,17 @@
/* build sit */
si->base_mem += sizeof(struct sit_info);
- si->base_mem += TOTAL_SEGS(sbi) * sizeof(struct seg_entry);
- si->base_mem += f2fs_bitmap_size(TOTAL_SEGS(sbi));
- si->base_mem += 2 * SIT_VBLOCK_MAP_SIZE * TOTAL_SEGS(sbi);
+ si->base_mem += MAIN_SEGS(sbi) * sizeof(struct seg_entry);
+ si->base_mem += f2fs_bitmap_size(MAIN_SEGS(sbi));
+ si->base_mem += 2 * SIT_VBLOCK_MAP_SIZE * MAIN_SEGS(sbi);
if (sbi->segs_per_sec > 1)
- si->base_mem += TOTAL_SECS(sbi) * sizeof(struct sec_entry);
+ si->base_mem += MAIN_SECS(sbi) * sizeof(struct sec_entry);
si->base_mem += __bitmap_size(sbi, SIT_BITMAP);
/* build free segmap */
si->base_mem += sizeof(struct free_segmap_info);
- si->base_mem += f2fs_bitmap_size(TOTAL_SEGS(sbi));
- si->base_mem += f2fs_bitmap_size(TOTAL_SECS(sbi));
+ si->base_mem += f2fs_bitmap_size(MAIN_SEGS(sbi));
+ si->base_mem += f2fs_bitmap_size(MAIN_SECS(sbi));
/* build curseg */
si->base_mem += sizeof(struct curseg_info) * NR_CURSEG_TYPE;
@@ -151,8 +149,8 @@
/* build dirty segmap */
si->base_mem += sizeof(struct dirty_seglist_info);
- si->base_mem += NR_DIRTY_TYPE * f2fs_bitmap_size(TOTAL_SEGS(sbi));
- si->base_mem += f2fs_bitmap_size(TOTAL_SECS(sbi));
+ si->base_mem += NR_DIRTY_TYPE * f2fs_bitmap_size(MAIN_SEGS(sbi));
+ si->base_mem += f2fs_bitmap_size(MAIN_SECS(sbi));
/* build nm */
si->base_mem += sizeof(struct f2fs_nm_info);
@@ -165,22 +163,22 @@
/* free nids */
si->cache_mem = NM_I(sbi)->fcnt;
si->cache_mem += NM_I(sbi)->nat_cnt;
- npages = sbi->node_inode->i_mapping->nrpages;
+ npages = NODE_MAPPING(sbi)->nrpages;
si->cache_mem += npages << PAGE_CACHE_SHIFT;
- npages = sbi->meta_inode->i_mapping->nrpages;
+ npages = META_MAPPING(sbi)->nrpages;
si->cache_mem += npages << PAGE_CACHE_SHIFT;
- si->cache_mem += sbi->n_orphans * sizeof(struct orphan_inode_entry);
+ si->cache_mem += sbi->n_orphans * sizeof(struct ino_entry);
si->cache_mem += sbi->n_dirty_dirs * sizeof(struct dir_inode_entry);
}
static int stat_show(struct seq_file *s, void *v)
{
- struct f2fs_stat_info *si, *next;
+ struct f2fs_stat_info *si;
int i = 0;
int j;
mutex_lock(&f2fs_stat_mutex);
- list_for_each_entry_safe(si, next, &f2fs_stat_list, stat_list) {
+ list_for_each_entry(si, &f2fs_stat_list, stat_list) {
char devname[BDEVNAME_SIZE];
update_general_status(si->sbi);
@@ -200,6 +198,8 @@
seq_printf(s, "Other: %u)\n - Data: %u\n",
si->valid_node_count - si->valid_inode_count,
si->valid_count - si->valid_node_count);
+ seq_printf(s, " - Inline_data Inode: %u\n",
+ si->inline_inode);
seq_printf(s, "\nMain area: %d segs, %d secs %d zones\n",
si->main_area_segs, si->main_area_sections,
si->main_area_zones);
@@ -233,6 +233,7 @@
si->dirty_count);
seq_printf(s, " - Prefree: %d\n - Free: %d (%d)\n\n",
si->prefree_count, si->free_segs, si->free_secs);
+ seq_printf(s, "CP calls: %d\n", si->cp_count);
seq_printf(s, "GC calls: %d (BG: %d)\n",
si->call_count, si->bg_gc);
seq_printf(s, " - data segments : %d\n", si->data_segs);
@@ -242,32 +243,32 @@
seq_printf(s, " - node blocks : %d\n", si->node_blks);
seq_printf(s, "\nExtent Hit Ratio: %d / %d\n",
si->hit_ext, si->total_ext);
- seq_printf(s, "\nBalancing F2FS Async:\n");
- seq_printf(s, " - nodes %4d in %4d\n",
+ seq_puts(s, "\nBalancing F2FS Async:\n");
+ seq_printf(s, " - nodes: %4d in %4d\n",
si->ndirty_node, si->node_pages);
- seq_printf(s, " - dents %4d in dirs:%4d\n",
+ seq_printf(s, " - dents: %4d in dirs:%4d\n",
si->ndirty_dent, si->ndirty_dirs);
- seq_printf(s, " - meta %4d in %4d\n",
+ seq_printf(s, " - meta: %4d in %4d\n",
si->ndirty_meta, si->meta_pages);
- seq_printf(s, " - NATs %5d > %lu\n",
- si->nats, NM_WOUT_THRESHOLD);
- seq_printf(s, " - SITs: %5d\n - free_nids: %5d\n",
- si->sits, si->fnids);
- seq_printf(s, "\nDistribution of User Blocks:");
- seq_printf(s, " [ valid | invalid | free ]\n");
- seq_printf(s, " [");
+ seq_printf(s, " - NATs: %9d\n - SITs: %9d\n",
+ si->nats, si->sits);
+ seq_printf(s, " - free_nids: %9d\n",
+ si->fnids);
+ seq_puts(s, "\nDistribution of User Blocks:");
+ seq_puts(s, " [ valid | invalid | free ]\n");
+ seq_puts(s, " [");
for (j = 0; j < si->util_valid; j++)
- seq_printf(s, "-");
- seq_printf(s, "|");
+ seq_putc(s, '-');
+ seq_putc(s, '|');
for (j = 0; j < si->util_invalid; j++)
- seq_printf(s, "-");
- seq_printf(s, "|");
+ seq_putc(s, '-');
+ seq_putc(s, '|');
for (j = 0; j < si->util_free; j++)
- seq_printf(s, "-");
- seq_printf(s, "]\n\n");
+ seq_putc(s, '-');
+ seq_puts(s, "]\n\n");
seq_printf(s, "SSR: %u blocks in %u segments\n",
si->block_count[SSR], si->segment_count[SSR]);
seq_printf(s, "LFS: %u blocks in %u segments\n",
@@ -305,11 +306,10 @@
struct f2fs_super_block *raw_super = F2FS_RAW_SUPER(sbi);
struct f2fs_stat_info *si;
- sbi->stat_info = kzalloc(sizeof(struct f2fs_stat_info), GFP_KERNEL);
- if (!sbi->stat_info)
+ si = kzalloc(sizeof(struct f2fs_stat_info), GFP_KERNEL);
+ if (!si)
return -ENOMEM;
- si = sbi->stat_info;
si->all_area_segs = le32_to_cpu(raw_super->segment_count);
si->sit_area_segs = le32_to_cpu(raw_super->segment_count_sit);
si->nat_area_segs = le32_to_cpu(raw_super->segment_count_nat);
@@ -319,6 +319,7 @@
si->main_area_zones = si->main_area_sections /
le32_to_cpu(raw_super->secs_per_zone);
si->sbi = sbi;
+ sbi->stat_info = si;
mutex_lock(&f2fs_stat_mutex);
list_add_tail(&si->stat_list, &f2fs_stat_list);
@@ -329,25 +330,36 @@
void f2fs_destroy_stats(struct f2fs_sb_info *sbi)
{
- struct f2fs_stat_info *si = sbi->stat_info;
+ struct f2fs_stat_info *si = F2FS_STAT(sbi);
mutex_lock(&f2fs_stat_mutex);
list_del(&si->stat_list);
mutex_unlock(&f2fs_stat_mutex);
- kfree(sbi->stat_info);
+ kfree(si);
}
void __init f2fs_create_root_stats(void)
{
- debugfs_root = debugfs_create_dir("f2fs", NULL);
- if (debugfs_root)
- debugfs_create_file("status", S_IRUGO, debugfs_root,
- NULL, &stat_fops);
+ struct dentry *file;
+
+ f2fs_debugfs_root = debugfs_create_dir("f2fs", NULL);
+ if (!f2fs_debugfs_root)
+ return;
+
+ file = debugfs_create_file("status", S_IRUGO, f2fs_debugfs_root,
+ NULL, &stat_fops);
+ if (!file) {
+ debugfs_remove(f2fs_debugfs_root);
+ f2fs_debugfs_root = NULL;
+ }
}
void f2fs_destroy_root_stats(void)
{
- debugfs_remove_recursive(debugfs_root);
- debugfs_root = NULL;
+ if (!f2fs_debugfs_root)
+ return;
+
+ debugfs_remove_recursive(f2fs_debugfs_root);
+ f2fs_debugfs_root = NULL;
}
diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
index 1ac6b93..16e2f96 100644
--- a/fs/f2fs/dir.c
+++ b/fs/f2fs/dir.c
@@ -13,6 +13,7 @@
#include "f2fs.h"
#include "node.h"
#include "acl.h"
+#include "xattr.h"
static unsigned long dir_blocks(struct inode *inode)
{
@@ -20,12 +21,12 @@
>> PAGE_CACHE_SHIFT;
}
-static unsigned int dir_buckets(unsigned int level)
+static unsigned int dir_buckets(unsigned int level, int dir_level)
{
- if (level < MAX_DIR_HASH_DEPTH / 2)
- return 1 << level;
+ if (level + dir_level < MAX_DIR_HASH_DEPTH / 2)
+ return 1 << (level + dir_level);
else
- return 1 << ((MAX_DIR_HASH_DEPTH / 2) - 1);
+ return MAX_DIR_BUCKETS;
}
static unsigned int bucket_blocks(unsigned int level)
@@ -64,19 +65,20 @@
de->file_type = f2fs_type_by_mode[(mode & S_IFMT) >> S_SHIFT];
}
-static unsigned long dir_block_index(unsigned int level, unsigned int idx)
+static unsigned long dir_block_index(unsigned int level,
+ int dir_level, unsigned int idx)
{
unsigned long i;
unsigned long bidx = 0;
for (i = 0; i < level; i++)
- bidx += dir_buckets(i) * bucket_blocks(i);
+ bidx += dir_buckets(i, dir_level) * bucket_blocks(i);
bidx += idx * bucket_blocks(level);
return bidx;
}
-static bool early_match_name(const char *name, size_t namelen,
- f2fs_hash_t namehash, struct f2fs_dir_entry *de)
+static bool early_match_name(size_t namelen, f2fs_hash_t namehash,
+ struct f2fs_dir_entry *de)
{
if (le16_to_cpu(de->name_len) != namelen)
return false;
@@ -88,65 +90,89 @@
}
static struct f2fs_dir_entry *find_in_block(struct page *dentry_page,
- const char *name, size_t namelen, int *max_slots,
- f2fs_hash_t namehash, struct page **res_page)
+ struct qstr *name, int *max_slots,
+ f2fs_hash_t namehash, struct page **res_page,
+ bool nocase)
{
struct f2fs_dir_entry *de;
- unsigned long bit_pos, end_pos, next_pos;
+ unsigned long bit_pos = 0;
struct f2fs_dentry_block *dentry_blk = kmap(dentry_page);
- int slots;
+ const void *dentry_bits = &dentry_blk->dentry_bitmap;
+ int max_len = 0;
- bit_pos = find_next_bit_le(&dentry_blk->dentry_bitmap,
- NR_DENTRY_IN_BLOCK, 0);
while (bit_pos < NR_DENTRY_IN_BLOCK) {
- de = &dentry_blk->dentry[bit_pos];
- slots = GET_DENTRY_SLOTS(le16_to_cpu(de->name_len));
+ if (!test_bit_le(bit_pos, dentry_bits)) {
+ if (bit_pos == 0)
+ max_len = 1;
+ else if (!test_bit_le(bit_pos - 1, dentry_bits))
+ max_len++;
+ bit_pos++;
+ continue;
+ }
- if (early_match_name(name, namelen, namehash, de)) {
+ de = &dentry_blk->dentry[bit_pos];
+ if (nocase) {
+ if ((le16_to_cpu(de->name_len) == name->len) &&
+ !strncasecmp(dentry_blk->filename[bit_pos],
+ name->name, name->len)) {
+ *res_page = dentry_page;
+ goto found;
+ }
+ } else if (early_match_name(name->len, namehash, de)) {
if (!memcmp(dentry_blk->filename[bit_pos],
- name, namelen)) {
+ name->name,
+ name->len)) {
*res_page = dentry_page;
goto found;
}
}
- next_pos = bit_pos + slots;
- bit_pos = find_next_bit_le(&dentry_blk->dentry_bitmap,
- NR_DENTRY_IN_BLOCK, next_pos);
- if (bit_pos >= NR_DENTRY_IN_BLOCK)
- end_pos = NR_DENTRY_IN_BLOCK;
- else
- end_pos = bit_pos;
- if (*max_slots < end_pos - next_pos)
- *max_slots = end_pos - next_pos;
+ if (max_len > *max_slots) {
+ *max_slots = max_len;
+ max_len = 0;
+ }
+
+ /*
+ * For the most part, a zero name_len indicates a bug.
+ * We stop here to figure out where the bug occurred.
+ */
+ f2fs_bug_on(F2FS_P_SB(dentry_page), !de->name_len);
+
+ bit_pos += GET_DENTRY_SLOTS(le16_to_cpu(de->name_len));
}
de = NULL;
kunmap(dentry_page);
found:
+ if (max_len > *max_slots)
+ *max_slots = max_len;
return de;
}
static struct f2fs_dir_entry *find_in_level(struct inode *dir,
- unsigned int level, const char *name, size_t namelen,
+ unsigned int level, struct qstr *name,
f2fs_hash_t namehash, struct page **res_page)
{
- int s = GET_DENTRY_SLOTS(namelen);
+ int s = GET_DENTRY_SLOTS(name->len);
unsigned int nbucket, nblock;
unsigned int bidx, end_block;
struct page *dentry_page;
struct f2fs_dir_entry *de = NULL;
+ struct f2fs_sb_info *sbi = F2FS_SB(dir->i_sb);
bool room = false;
int max_slots = 0;
- BUG_ON(level > MAX_DIR_HASH_DEPTH);
+ f2fs_bug_on(F2FS_I_SB(dir), level > MAX_DIR_HASH_DEPTH);
- nbucket = dir_buckets(level);
+ nbucket = dir_buckets(level, F2FS_I(dir)->i_dir_level);
nblock = bucket_blocks(level);
- bidx = dir_block_index(level, le32_to_cpu(namehash) % nbucket);
+ bidx = dir_block_index(level, F2FS_I(dir)->i_dir_level,
+ le32_to_cpu(namehash) % nbucket);
end_block = bidx + nblock;
for (; bidx < end_block; bidx++) {
+ bool nocase = false;
+
/* no need to allocate new dentry pages to all the indices */
dentry_page = find_data_page(dir, bidx, true);
if (IS_ERR(dentry_page)) {
@@ -154,8 +180,13 @@
continue;
}
- de = find_in_block(dentry_page, name, namelen,
- &max_slots, namehash, res_page);
+ if (test_opt(sbi, ANDROID_EMU) &&
+ (sbi->android_emu_flags & F2FS_ANDROID_EMU_NOCASE) &&
+ F2FS_I(dir)->i_advise & FADVISE_ANDROID_EMU)
+ nocase = true;
+
+ de = find_in_block(dentry_page, name, &max_slots,
+ namehash, res_page, nocase);
if (de)
break;
@@ -181,28 +212,22 @@
struct f2fs_dir_entry *f2fs_find_entry(struct inode *dir,
struct qstr *child, struct page **res_page)
{
- const char *name = child->name;
- size_t namelen = child->len;
unsigned long npages = dir_blocks(dir);
struct f2fs_dir_entry *de = NULL;
f2fs_hash_t name_hash;
unsigned int max_depth;
unsigned int level;
- if (namelen > F2FS_NAME_LEN)
- return NULL;
-
if (npages == 0)
return NULL;
*res_page = NULL;
- name_hash = f2fs_dentry_hash(name, namelen);
+ name_hash = f2fs_dentry_hash(child);
max_depth = F2FS_I(dir)->i_current_depth;
for (level = 0; level < max_depth; level++) {
- de = find_in_level(dir, level, name,
- namelen, name_hash, res_page);
+ de = find_in_level(dir, level, child, name_hash, res_page);
if (de)
break;
}
@@ -215,9 +240,9 @@
struct f2fs_dir_entry *f2fs_parent_dir(struct inode *dir, struct page **p)
{
- struct page *page = NULL;
- struct f2fs_dir_entry *de = NULL;
- struct f2fs_dentry_block *dentry_blk = NULL;
+ struct page *page;
+ struct f2fs_dir_entry *de;
+ struct f2fs_dentry_block *dentry_blk;
page = get_lock_data_page(dir, 0);
if (IS_ERR(page))
@@ -250,7 +275,7 @@
struct page *page, struct inode *inode)
{
lock_page(page);
- wait_on_page_writeback(page);
+ f2fs_wait_on_page_writeback(page, DATA);
de->ino = cpu_to_le32(inode->i_ino);
set_de_type(de, inode);
kunmap(page);
@@ -258,41 +283,49 @@
dir->i_mtime = dir->i_ctime = CURRENT_TIME;
mark_inode_dirty(dir);
- /* update parent inode number before releasing dentry page */
- F2FS_I(inode)->i_pino = dir->i_ino;
-
f2fs_put_page(page, 1);
}
-void init_dent_inode(const struct qstr *name, struct page *ipage)
+static void init_dent_inode(const struct qstr *name, struct page *ipage)
{
- struct f2fs_node *rn;
+ struct f2fs_inode *ri;
- if (IS_ERR(ipage))
- return;
-
- wait_on_page_writeback(ipage);
+ f2fs_wait_on_page_writeback(ipage, NODE);
/* copy name info. to this inode page */
- rn = (struct f2fs_node *)page_address(ipage);
- rn->i.i_namelen = cpu_to_le32(name->len);
- memcpy(rn->i.i_name, name->name, name->len);
+ ri = F2FS_INODE(ipage);
+ ri->i_namelen = cpu_to_le32(name->len);
+ memcpy(ri->i_name, name->name, name->len);
set_page_dirty(ipage);
}
-static int make_empty_dir(struct inode *inode, struct inode *parent)
+int update_dent_inode(struct inode *inode, const struct qstr *name)
+{
+ struct page *page;
+
+ page = get_node_page(F2FS_I_SB(inode), inode->i_ino);
+ if (IS_ERR(page))
+ return PTR_ERR(page);
+
+ init_dent_inode(name, page);
+ f2fs_put_page(page, 1);
+
+ return 0;
+}
+
+static int make_empty_dir(struct inode *inode,
+ struct inode *parent, struct page *page)
{
struct page *dentry_page;
struct f2fs_dentry_block *dentry_blk;
struct f2fs_dir_entry *de;
- void *kaddr;
- dentry_page = get_new_data_page(inode, 0, true);
+ dentry_page = get_new_data_page(inode, page, 0, true);
if (IS_ERR(dentry_page))
return PTR_ERR(dentry_page);
- kaddr = kmap_atomic(dentry_page);
- dentry_blk = (struct f2fs_dentry_block *)kaddr;
+
+ dentry_blk = kmap_atomic(dentry_page);
de = &dentry_blk->dentry[0];
de->name_len = cpu_to_le16(1);
@@ -310,74 +343,93 @@
test_and_set_bit_le(0, &dentry_blk->dentry_bitmap);
test_and_set_bit_le(1, &dentry_blk->dentry_bitmap);
- kunmap_atomic(kaddr);
+ kunmap_atomic(dentry_blk);
set_page_dirty(dentry_page);
f2fs_put_page(dentry_page, 1);
return 0;
}
-static int init_inode_metadata(struct inode *inode,
+static struct page *init_inode_metadata(struct inode *inode,
struct inode *dir, const struct qstr *name)
{
+ struct page *page;
+ int err;
+
if (is_inode_flag_set(F2FS_I(inode), FI_NEW_INODE)) {
- int err;
- err = new_inode_page(inode, name);
- if (err)
- return err;
+ page = new_inode_page(inode);
+ if (IS_ERR(page))
+ return page;
if (S_ISDIR(inode->i_mode)) {
- err = make_empty_dir(inode, dir);
- if (err) {
- remove_inode_page(inode);
- return err;
- }
+ err = make_empty_dir(inode, dir, page);
+ if (err)
+ goto error;
}
- err = f2fs_init_acl(inode, dir);
- if (err) {
- remove_inode_page(inode);
- return err;
- }
+ err = f2fs_init_acl(inode, dir, page);
+ if (err)
+ goto put_error;
+
+ err = f2fs_init_security(inode, dir, name, page);
+ if (err)
+ goto put_error;
} else {
- struct page *ipage;
- ipage = get_node_page(F2FS_SB(dir->i_sb), inode->i_ino);
- if (IS_ERR(ipage))
- return PTR_ERR(ipage);
- set_cold_node(inode, ipage);
- init_dent_inode(name, ipage);
- f2fs_put_page(ipage, 1);
+ page = get_node_page(F2FS_I_SB(dir), inode->i_ino);
+ if (IS_ERR(page))
+ return page;
+
+ set_cold_node(inode, page);
}
+
+ if (name)
+ init_dent_inode(name, page);
+
+ /*
+ * This file should be checkpointed during fsync.
+ * We lose i_pino from now on.
+ */
if (is_inode_flag_set(F2FS_I(inode), FI_INC_LINK)) {
+ file_lost_pino(inode);
+ /*
+ * If the tmpfile was linked to an alias through the linkat path,
+ * we should remove this inode from the orphan list.
+ */
+ if (inode->i_nlink == 0)
+ remove_orphan_inode(F2FS_I_SB(dir), inode->i_ino);
inc_nlink(inode);
- update_inode_page(inode);
}
- return 0;
+ return page;
+
+put_error:
+ f2fs_put_page(page, 1);
+error:
+ /* once the failed inode becomes a bad inode, i_mode is S_IFREG */
+ truncate_inode_pages(&inode->i_data, 0);
+ truncate_blocks(inode, 0, false);
+ remove_dirty_dir_inode(inode);
+ remove_inode_page(inode);
+ return ERR_PTR(err);
}
static void update_parent_metadata(struct inode *dir, struct inode *inode,
unsigned int current_depth)
{
- bool need_dir_update = false;
-
if (is_inode_flag_set(F2FS_I(inode), FI_NEW_INODE)) {
if (S_ISDIR(inode->i_mode)) {
inc_nlink(dir);
- need_dir_update = true;
+ set_inode_flag(F2FS_I(dir), FI_UPDATE_DIR);
}
clear_inode_flag(F2FS_I(inode), FI_NEW_INODE);
}
dir->i_mtime = dir->i_ctime = CURRENT_TIME;
+ mark_inode_dirty(dir);
+
if (F2FS_I(dir)->i_current_depth != current_depth) {
F2FS_I(dir)->i_current_depth = current_depth;
- need_dir_update = true;
+ set_inode_flag(F2FS_I(dir), FI_UPDATE_DIR);
}
- if (need_dir_update)
- update_inode_page(dir);
- else
- mark_inode_dirty(dir);
-
if (is_inode_flag_set(F2FS_I(inode), FI_INC_LINK))
clear_inode_flag(F2FS_I(inode), FI_INC_LINK);
}
@@ -407,10 +459,11 @@
}
/*
- * Caller should grab and release a mutex by calling mutex_lock_op() and
- * mutex_unlock_op().
+ * Caller should grab and release a rwsem by calling f2fs_lock_op() and
+ * f2fs_unlock_op().
*/
-int __f2fs_add_link(struct inode *dir, const struct qstr *name, struct inode *inode)
+int __f2fs_add_link(struct inode *dir, const struct qstr *name,
+ struct inode *inode)
{
unsigned int bit_pos;
unsigned int level;
@@ -423,10 +476,11 @@
struct page *dentry_page = NULL;
struct f2fs_dentry_block *dentry_blk = NULL;
int slots = GET_DENTRY_SLOTS(namelen);
+ struct page *page;
int err = 0;
int i;
- dentry_hash = f2fs_dentry_hash(name->name, name->len);
+ dentry_hash = f2fs_dentry_hash(name);
level = 0;
current_depth = F2FS_I(dir)->i_current_depth;
if (F2FS_I(dir)->chash == dentry_hash) {
@@ -435,20 +489,21 @@
}
start:
- if (current_depth == MAX_DIR_HASH_DEPTH)
+ if (unlikely(current_depth == MAX_DIR_HASH_DEPTH))
return -ENOSPC;
/* Increase the depth, if required */
if (level == current_depth)
++current_depth;
- nbucket = dir_buckets(level);
+ nbucket = dir_buckets(level, F2FS_I(dir)->i_dir_level);
nblock = bucket_blocks(level);
- bidx = dir_block_index(level, (le32_to_cpu(dentry_hash) % nbucket));
+ bidx = dir_block_index(level, F2FS_I(dir)->i_dir_level,
+ (le32_to_cpu(dentry_hash) % nbucket));
for (block = bidx; block <= (bidx + nblock - 1); block++) {
- dentry_page = get_new_data_page(dir, block, true);
+ dentry_page = get_new_data_page(dir, NULL, block, true);
if (IS_ERR(dentry_page))
return PTR_ERR(dentry_page);
@@ -465,12 +520,14 @@
++level;
goto start;
add_dentry:
- err = init_inode_metadata(inode, dir, name);
- if (err)
+ f2fs_wait_on_page_writeback(dentry_page, DATA);
+
+ down_write(&F2FS_I(inode)->i_sem);
+ page = init_inode_metadata(inode, dir, name);
+ if (IS_ERR(page)) {
+ err = PTR_ERR(page);
goto fail;
-
- wait_on_page_writeback(dentry_page);
-
+ }
de = &dentry_blk->dentry[bit_pos];
de->hash_code = dentry_hash;
de->name_len = cpu_to_le16(namelen);
@@ -481,16 +538,45 @@
test_and_set_bit_le(bit_pos + i, &dentry_blk->dentry_bitmap);
set_page_dirty(dentry_page);
- update_parent_metadata(dir, inode, current_depth);
-
- /* update parent inode number before releasing dentry page */
+ /* we don't need to mark_inode_dirty now */
F2FS_I(inode)->i_pino = dir->i_ino;
+ update_inode(inode, page);
+ f2fs_put_page(page, 1);
+
+ update_parent_metadata(dir, inode, current_depth);
fail:
+ up_write(&F2FS_I(inode)->i_sem);
+
+ if (is_inode_flag_set(F2FS_I(dir), FI_UPDATE_DIR)) {
+ update_inode_page(dir);
+ clear_inode_flag(F2FS_I(dir), FI_UPDATE_DIR);
+ }
kunmap(dentry_page);
f2fs_put_page(dentry_page, 1);
return err;
}
+int f2fs_do_tmpfile(struct inode *inode, struct inode *dir)
+{
+ struct page *page;
+ int err = 0;
+
+ down_write(&F2FS_I(inode)->i_sem);
+ page = init_inode_metadata(inode, dir, NULL);
+ if (IS_ERR(page)) {
+ err = PTR_ERR(page);
+ goto fail;
+ }
+ /* we don't need to mark_inode_dirty now */
+ update_inode(inode, page);
+ f2fs_put_page(page, 1);
+
+ clear_inode_flag(F2FS_I(inode), FI_NEW_INODE);
+fail:
+ up_write(&F2FS_I(inode)->i_sem);
+ return err;
+}
+
/*
* It only removes the dentry from the dentry page; the corresponding name
* entry in the name page does not need to be touched during deletion.
@@ -500,18 +586,15 @@
{
struct f2fs_dentry_block *dentry_blk;
unsigned int bit_pos;
- struct address_space *mapping = page->mapping;
- struct inode *dir = mapping->host;
- struct f2fs_sb_info *sbi = F2FS_SB(dir->i_sb);
+ struct inode *dir = page->mapping->host;
int slots = GET_DENTRY_SLOTS(le16_to_cpu(dentry->name_len));
- void *kaddr = page_address(page);
int i;
lock_page(page);
- wait_on_page_writeback(page);
+ f2fs_wait_on_page_writeback(page, DATA);
- dentry_blk = (struct f2fs_dentry_block *)kaddr;
- bit_pos = dentry - (struct f2fs_dir_entry *)dentry_blk->dentry;
+ dentry_blk = page_address(page);
+ bit_pos = dentry - dentry_blk->dentry;
for (i = 0; i < slots; i++)
test_and_clear_bit_le(bit_pos + i, &dentry_blk->dentry_bitmap);
@@ -524,32 +607,35 @@
dir->i_ctime = dir->i_mtime = CURRENT_TIME;
- if (inode && S_ISDIR(inode->i_mode)) {
- drop_nlink(dir);
- update_inode_page(dir);
- } else {
- mark_inode_dirty(dir);
- }
-
if (inode) {
+ struct f2fs_sb_info *sbi = F2FS_I_SB(dir);
+
+ down_write(&F2FS_I(inode)->i_sem);
+
+ if (S_ISDIR(inode->i_mode)) {
+ drop_nlink(dir);
+ update_inode_page(dir);
+ }
inode->i_ctime = CURRENT_TIME;
drop_nlink(inode);
if (S_ISDIR(inode->i_mode)) {
drop_nlink(inode);
i_size_write(inode, 0);
}
+ up_write(&F2FS_I(inode)->i_sem);
update_inode_page(inode);
if (inode->i_nlink == 0)
add_orphan_inode(sbi, inode->i_ino);
+ else
+ release_orphan_inode(sbi);
}
if (bit_pos == NR_DENTRY_IN_BLOCK) {
truncate_hole(dir, page->index, page->index + 1);
clear_page_dirty_for_io(page);
ClearPageUptodate(page);
- dec_page_count(sbi, F2FS_DIRTY_DENTS);
- inode_dec_dirty_dents(dir);
+ inode_dec_dirty_pages(dir);
}
f2fs_put_page(page, 1);
}
@@ -563,7 +649,6 @@
unsigned long nblock = dir_blocks(dir);
for (bidx = 0; bidx < nblock; bidx++) {
- void *kaddr;
dentry_page = get_lock_data_page(dir, bidx);
if (IS_ERR(dentry_page)) {
if (PTR_ERR(dentry_page) == -ENOENT)
@@ -572,8 +657,8 @@
return false;
}
- kaddr = kmap_atomic(dentry_page);
- dentry_blk = (struct f2fs_dentry_block *)kaddr;
+
+ dentry_blk = kmap_atomic(dentry_page);
if (bidx == 0)
bit_pos = 2;
else
@@ -581,7 +666,7 @@
bit_pos = find_next_bit_le(&dentry_blk->dentry_bitmap,
NR_DENTRY_IN_BLOCK,
bit_pos);
- kunmap_atomic(kaddr);
+ kunmap_atomic(dentry_blk);
f2fs_put_page(dentry_page, 1);
@@ -594,14 +679,15 @@
static int f2fs_readdir(struct file *file, void *dirent, filldir_t filldir)
{
unsigned long pos = file->f_pos;
- struct inode *inode = file_inode(file);
- unsigned long npages = dir_blocks(inode);
unsigned char *types = NULL;
unsigned int bit_pos = 0, start_bit_pos = 0;
int over = 0;
+ struct inode *inode = file_inode(file);
+ unsigned long npages = dir_blocks(inode);
struct f2fs_dentry_block *dentry_blk = NULL;
struct f2fs_dir_entry *de = NULL;
struct page *dentry_page = NULL;
+ struct file_ra_state *ra = &file->f_ra;
unsigned int n = 0;
unsigned char d_type = DT_UNKNOWN;
int slots;
@@ -610,7 +696,12 @@
bit_pos = (pos % NR_DENTRY_IN_BLOCK);
n = (pos / NR_DENTRY_IN_BLOCK);
- for ( ; n < npages; n++) {
+ /* readahead for multi pages of dir */
+ if (npages - n > 1 && !ra_has_index(ra, n))
+ page_cache_sync_readahead(inode->i_mapping, ra, file, n,
+ min(npages - n, (pgoff_t)MAX_DIR_RA_PAGES));
+
+ for (; n < npages; n++) {
dentry_page = get_lock_data_page(inode, n);
if (IS_ERR(dentry_page))
continue;
@@ -636,18 +727,18 @@
le32_to_cpu(de->ino), d_type);
if (over) {
file->f_pos += bit_pos - start_bit_pos;
- goto success;
+ goto stop;
}
slots = GET_DENTRY_SLOTS(le16_to_cpu(de->name_len));
bit_pos += slots;
}
bit_pos = 0;
- file->f_pos = (n + 1) * NR_DENTRY_IN_BLOCK;
kunmap(dentry_page);
+ file->f_pos = (n + 1) * NR_DENTRY_IN_BLOCK;
f2fs_put_page(dentry_page, 1);
dentry_page = NULL;
}
-success:
+stop:
if (dentry_page && !IS_ERR(dentry_page)) {
kunmap(dentry_page);
f2fs_put_page(dentry_page, 1);
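Reviewer note: the dir.c hunks above thread the new `i_dir_level` through `dir_buckets()` and `dir_block_index()` so large directories can start with more buckets per level. A rough sketch of that bucket layout follows; the depth cap, bucket sizes, and `MAX_DIR_HASH_DEPTH` value are reconstructed from the general f2fs design, not taken from this patch:

```python
MAX_DIR_HASH_DEPTH = 64  # assumed on-disk limit

def dir_buckets(level, dir_level):
    # buckets double per level (shifted up by dir_level), capped at depth/2
    if level + dir_level < MAX_DIR_HASH_DEPTH // 2:
        return 1 << (level + dir_level)
    return 1 << (MAX_DIR_HASH_DEPTH // 2 - 1)

def bucket_blocks(level):
    # shallow levels use 2-block buckets, deep levels 4-block buckets
    return 2 if level < MAX_DIR_HASH_DEPTH // 2 else 4

def dir_block_index(level, dir_level, bucket_idx):
    # block offset of a bucket: all blocks of shallower levels,
    # plus the buckets preceding it at this level
    bidx = sum(dir_buckets(l, dir_level) * bucket_blocks(l)
               for l in range(level))
    return bidx + bucket_idx * bucket_blocks(level)
```

With these assumptions, the hash is reduced modulo `dir_buckets(level, dir_level)` and the resulting bucket index is mapped to a dentry-block range, exactly as the `bidx = dir_block_index(...)` call in the hunk does.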
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 20aab02..4997584 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -17,6 +17,22 @@
#include <linux/slab.h>
#include <linux/crc32.h>
#include <linux/magic.h>
+#include <linux/kobject.h>
+#include <linux/sched.h>
+
+#ifdef CONFIG_F2FS_CHECK_FS
+#define f2fs_bug_on(sbi, condition) BUG_ON(condition)
+#define f2fs_down_write(x, y) down_write_nest_lock(x, y)
+#else
+#define f2fs_bug_on(sbi, condition) \
+ do { \
+ if (unlikely(condition)) { \
+ WARN_ON(1); \
+ sbi->need_fsck = true; \
+ } \
+ } while (0)
+#define f2fs_down_write(x, y) down_write(x)
+#endif
/*
* For mount options
@@ -28,6 +44,13 @@
#define F2FS_MOUNT_XATTR_USER 0x00000010
#define F2FS_MOUNT_POSIX_ACL 0x00000020
#define F2FS_MOUNT_DISABLE_EXT_IDENTIFY 0x00000040
+#define F2FS_MOUNT_INLINE_XATTR 0x00000080
+#define F2FS_MOUNT_INLINE_DATA 0x00000100
+#define F2FS_MOUNT_FLUSH_MERGE 0x00000200
+#define F2FS_MOUNT_NOBARRIER 0x00000400
+#define F2FS_MOUNT_ANDROID_EMU 0x00001000
+#define F2FS_MOUNT_ERRORS_PANIC 0x00002000
+#define F2FS_MOUNT_ERRORS_RECOVER 0x00004000
#define clear_opt(sbi, option) (sbi->mount_opt.opt &= ~F2FS_MOUNT_##option)
#define set_opt(sbi, option) (sbi->mount_opt.opt |= F2FS_MOUNT_##option)
@@ -37,21 +60,35 @@
typecheck(unsigned long long, b) && \
((long long)((a) - (b)) > 0))
-typedef u64 block_t;
+typedef u32 block_t; /*
+ * should not change u32, since it is the on-disk block
+ * address format, __le32.
+ */
typedef u32 nid_t;
struct f2fs_mount_info {
unsigned int opt;
};
-static inline __u32 f2fs_crc32(void *buff, size_t len)
+#define CRCPOLY_LE 0xedb88320
+
+static inline __u32 f2fs_crc32(void *buf, size_t len)
{
- return crc32_le(F2FS_SUPER_MAGIC, buff, len);
+ unsigned char *p = (unsigned char *)buf;
+ __u32 crc = F2FS_SUPER_MAGIC;
+ int i;
+
+ while (len--) {
+ crc ^= *p++;
+ for (i = 0; i < 8; i++)
+ crc = (crc >> 1) ^ ((crc & 1) ? CRCPOLY_LE : 0);
+ }
+ return crc;
}
-static inline bool f2fs_crc_valid(__u32 blk_crc, void *buff, size_t buff_size)
+static inline bool f2fs_crc_valid(__u32 blk_crc, void *buf, size_t buf_size)
{
- return f2fs_crc32(buff, buff_size) == blk_crc;
+ return f2fs_crc32(buf, buf_size) == blk_crc;
}
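Reviewer note: the open-coded loop above is the standard bit-reflected CRC-32 seeded with the f2fs superblock magic and with no pre/post inversion, i.e. the same "raw" form as the kernel's `crc32_le()` it replaces. A quick cross-check sketch (the magic value 0xF2F52010 is assumed here):

```python
import zlib

F2FS_SUPER_MAGIC = 0xF2F52010
CRCPOLY_LE = 0xEDB88320  # bit-reflected CRC-32 polynomial

def f2fs_crc32(buf: bytes) -> int:
    """Raw CRC-32 (no pre/post inversion), mirroring the patch's loop."""
    crc = F2FS_SUPER_MAGIC
    for byte in buf:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (CRCPOLY_LE if crc & 1 else 0)
    return crc

def f2fs_crc_valid(blk_crc: int, buf: bytes) -> bool:
    return f2fs_crc32(buf) == blk_crc
```

Because zlib applies a pre/post XOR of 0xFFFFFFFF around the same shift register, the raw form relates to it as `raw(seed, d) == zlib.crc32(d, seed ^ 0xFFFFFFFF) ^ 0xFFFFFFFF`, which gives an independent way to validate the loop.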
/*
@@ -62,8 +99,40 @@
SIT_BITMAP
};
-/* for the list of orphan inodes */
-struct orphan_inode_entry {
+enum {
+ CP_UMOUNT,
+ CP_SYNC,
+ CP_DISCARD,
+};
+
+struct cp_control {
+ int reason;
+ __u64 trim_start;
+ __u64 trim_end;
+ __u64 trim_minlen;
+ __u64 trimmed;
+};
+
+/*
+ * For CP/NAT/SIT/SSA readahead
+ */
+enum {
+ META_CP,
+ META_NAT,
+ META_SIT,
+ META_SSA,
+ META_POR,
+};
+
+/* for the list of ino */
+enum {
+ ORPHAN_INO, /* for orphan ino list */
+ APPEND_INO, /* for append ino list */
+ UPDATE_INO, /* for update ino list */
+ MAX_INO_ENTRY, /* max. list */
+};
+
+struct ino_entry {
struct list_head list; /* list head */
nid_t ino; /* inode number */
};
@@ -74,11 +143,20 @@
struct inode *inode; /* vfs inode pointer */
};
+/* for the list of blockaddresses to be discarded */
+struct discard_entry {
+ struct list_head list; /* list head */
+ block_t blkaddr; /* block address to be discarded */
+ int len; /* # of consecutive blocks of the discard */
+};
+
/* for the list of fsync inodes, used only during recovery */
struct fsync_inode_entry {
struct list_head list; /* list head */
struct inode *inode; /* vfs inode pointer */
- block_t blkaddr; /* block address locating the last inode */
+ block_t blkaddr; /* block address locating the last fsync */
+ block_t last_dentry; /* block address locating the last dentry */
+ block_t last_inode; /* block address locating the last inode */
};
#define nats_in_cursum(sum) (le16_to_cpu(sum->n_nats))
@@ -89,6 +167,9 @@
#define sit_in_journal(sum, i) (sum->sit_j.entries[i].se)
#define segno_in_journal(sum, i) (sum->sit_j.entries[i].segno)
+#define MAX_NAT_JENTRIES(sum) (NAT_JOURNAL_ENTRIES - nats_in_cursum(sum))
+#define MAX_SIT_JENTRIES(sum) (SIT_JOURNAL_ENTRIES - sits_in_cursum(sum))
+
static inline int update_nats_in_cursum(struct f2fs_summary_block *rs, int i)
{
int before = nats_in_cursum(rs);
@@ -103,11 +184,30 @@
return before;
}
+static inline bool __has_cursum_space(struct f2fs_summary_block *sum, int size,
+ int type)
+{
+ if (type == NAT_JOURNAL)
+ return size <= MAX_NAT_JENTRIES(sum);
+ return size <= MAX_SIT_JENTRIES(sum);
+}
+
/*
* ioctl commands
*/
-#define F2FS_IOC_GETFLAGS FS_IOC_GETFLAGS
-#define F2FS_IOC_SETFLAGS FS_IOC_SETFLAGS
+#define F2FS_IOC_GETFLAGS FS_IOC_GETFLAGS
+#define F2FS_IOC_SETFLAGS FS_IOC_SETFLAGS
+
+#define F2FS_IOCTL_MAGIC 0xf5
+#define F2FS_IOC_ATOMIC_WRITE _IOW(F2FS_IOCTL_MAGIC, 1, struct atomic_w)
+#define F2FS_IOC_ATOMIC_COMMIT _IOW(F2FS_IOCTL_MAGIC, 2, u64)
+
+struct atomic_w {
+ u64 aid; /* atomic write id */
+ const char __user *buf; /* user data */
+ u64 count; /* size to update */
+ u64 pos; /* file offset */
+};
#if defined(__KERNEL__) && defined(CONFIG_COMPAT)
/*
@@ -120,23 +220,29 @@
/*
* For INODE and NODE manager
*/
-#define XATTR_NODE_OFFSET (-1) /*
- * store xattrs to one node block per
- * file keeping -1 as its node offset to
- * distinguish from index node blocks.
- */
+/*
+ * XATTR_NODE_OFFSET stores xattrs to one node block per file keeping -1
+ * as its node offset to distinguish from index node blocks.
+ * But some bits are used to mark the node block.
+ */
+#define XATTR_NODE_OFFSET ((((unsigned int)-1) << OFFSET_BIT_SHIFT) \
+ >> OFFSET_BIT_SHIFT)
enum {
ALLOC_NODE, /* allocate a new node page if needed */
LOOKUP_NODE, /* look up a node without readahead */
LOOKUP_NODE_RA, /*
* look up a node with readahead called
- * by get_datablock_ro.
+ * by get_data_block.
*/
};
#define F2FS_LINK_MAX 32000 /* maximum link count per file */
+#define MAX_DIR_RA_PAGES 4 /* maximum ra pages of dir */
+
/* for in-memory extent cache entry */
+#define F2FS_MIN_EXTENT_LEN 16 /* minimum extent length */
+
struct extent_info {
rwlock_t ext_lock; /* rwlock for consistency */
unsigned int fofs; /* start offset in a file */
@@ -148,23 +254,33 @@
* i_advise uses FADVISE_XXX_BIT. We can add additional hints later.
*/
#define FADVISE_COLD_BIT 0x01
-#define FADVISE_CP_BIT 0x02
+#define FADVISE_LOST_PINO_BIT 0x02
+#define FADVISE_ANDROID_EMU 0x10
+#define FADVISE_ANDROID_EMU_ROOT 0x20
+
+#define DEF_DIR_LEVEL 0
struct f2fs_inode_info {
struct inode vfs_inode; /* serve a vfs inode */
unsigned long i_flags; /* keep an inode flags for ioctl */
unsigned char i_advise; /* use to give file attribute hints */
+ unsigned char i_dir_level; /* use for dentry level for large dir */
unsigned int i_current_depth; /* use only in directory structure */
unsigned int i_pino; /* parent inode number */
umode_t i_acl_mode; /* keep file acl mode temporarily */
/* Use below internally in f2fs */
unsigned long flags; /* use to pass per-file flags */
- atomic_t dirty_dents; /* # of dirty dentry pages */
+ struct rw_semaphore i_sem; /* protect fi info */
+ atomic_t dirty_pages; /* # of dirty pages */
f2fs_hash_t chash; /* hash value of given file name */
unsigned int clevel; /* maximum level of given file name */
nid_t i_xattr_nid; /* node id that contains xattrs */
+ unsigned long long xattr_ver; /* cp version of xattr modification */
struct extent_info ext; /* in-memory extent cache entry */
+ struct dir_inode_entry *dirty_dir; /* the pointer of dirty dir */
+
+ struct list_head atomic_pages; /* atomic page indexes */
};
static inline void get_extent_info(struct extent_info *ext,
@@ -190,16 +306,20 @@
struct f2fs_nm_info {
block_t nat_blkaddr; /* base disk address of NAT */
nid_t max_nid; /* maximum possible node ids */
+ nid_t available_nids; /* maximum available node ids */
nid_t next_scan_nid; /* the next nid to be scanned */
+ unsigned int ram_thresh; /* control the memory footprint */
/* NAT cache management */
struct radix_tree_root nat_root;/* root of the nat entry cache */
+ struct radix_tree_root nat_set_root;/* root of the nat set cache */
rwlock_t nat_tree_lock; /* protect nat_tree_lock */
- unsigned int nat_cnt; /* the # of cached nat entries */
struct list_head nat_entries; /* cached nat entry list (clean) */
- struct list_head dirty_nat_entries; /* cached nat entry list (dirty) */
+ unsigned int nat_cnt; /* the # of cached nat entries */
+ unsigned int dirty_nat_cnt; /* total num of nat entries in set */
/* free node ids management */
+ struct radix_tree_root free_nid_root;/* root of the free_nid cache */
struct list_head free_nid_list; /* a list for free nids */
spinlock_t free_nid_list_lock; /* protect free nid list */
unsigned int fcnt; /* the number of free node id */
@@ -262,15 +382,25 @@
NO_CHECK_TYPE
};
+struct flush_cmd {
+ struct completion wait;
+ struct llist_node llnode;
+ int ret;
+};
+
+struct flush_cmd_control {
+ struct task_struct *f2fs_issue_flush; /* flush thread */
+ wait_queue_head_t flush_wait_queue; /* waiting queue for wake-up */
+ struct llist_head issue_list; /* list for command issue */
+ struct llist_node *dispatch_list; /* list for command dispatch */
+};
+
struct f2fs_sm_info {
struct sit_info *sit_info; /* whole segment information */
struct free_segmap_info *free_info; /* free segment information */
struct dirty_seglist_info *dirty_info; /* dirty segment information */
struct curseg_info *curseg_array; /* active segment information */
- struct list_head wblist_head; /* list of under-writeback pages */
- spinlock_t wblist_lock; /* lock for checkpoint */
-
block_t seg0_blkaddr; /* block address of 0'th segment */
block_t main_blkaddr; /* start block address of main area */
block_t ssa_blkaddr; /* start block address of SSA area */
@@ -279,16 +409,25 @@
unsigned int main_segments; /* # of segments in main area */
unsigned int reserved_segments; /* # of reserved segments */
unsigned int ovp_segments; /* # of overprovision segments */
-};
-/*
- * For directory operation
- */
-#define NODE_DIR1_BLOCK (ADDRS_PER_INODE + 1)
-#define NODE_DIR2_BLOCK (ADDRS_PER_INODE + 2)
-#define NODE_IND1_BLOCK (ADDRS_PER_INODE + 3)
-#define NODE_IND2_BLOCK (ADDRS_PER_INODE + 4)
-#define NODE_DIND_BLOCK (ADDRS_PER_INODE + 5)
+ /* a threshold to reclaim prefree segments */
+ unsigned int rec_prefree_segments;
+
+ /* for small discard management */
+ struct list_head discard_list; /* 4KB discard list */
+ int nr_discards; /* # of discards in the list */
+ int max_discards; /* max. discards to be issued */
+
+ struct list_head sit_entry_set; /* sit entry set list */
+
+ unsigned int ipu_policy; /* in-place-update policy */
+ unsigned int min_ipu_util; /* in-place-update threshold */
+ unsigned int min_fsync_blocks; /* threshold for fsync */
+
+ /* for flush command control */
+ struct flush_cmd_control *cmd_control_info;
+
+};
/*
* For superblock
@@ -308,14 +447,6 @@
};
/*
- * Uses as sbi->fs_lock[NR_GLOBAL_LOCKS].
- * The checkpoint procedure blocks all the locks in this fs_lock array.
- * Some FS operations grab free locks, and if there is no free lock,
- * then wait to grab a lock in a round-robin manner.
- */
-#define NR_GLOBAL_LOCKS 8
-
-/*
* The below are the page types of bios used in submit_bio().
* The available types are:
* DATA User data pages. It operates as async mode.
@@ -326,6 +457,7 @@
* with waiting the bio's completion
* ... Only can be used with META.
*/
+#define PAGE_TYPE_OF_BIO(type) ((type) > META ? META : (type))
enum page_type {
DATA,
NODE,
@@ -334,11 +466,32 @@
META_FLUSH,
};
+/*
+ * Android sdcard emulation flags
+ */
+#define F2FS_ANDROID_EMU_NOCASE 0x00000001
+
+struct f2fs_io_info {
+ enum page_type type; /* contains DATA/NODE/META/META_FLUSH */
+ int rw; /* contains R/RS/W/WS with REQ_META/REQ_PRIO */
+};
+
+#define is_read_io(rw) (((rw) & 1) == READ)
+struct f2fs_bio_info {
+ struct f2fs_sb_info *sbi; /* f2fs superblock */
+ struct bio *bio; /* bios to merge */
+ sector_t last_block_in_bio; /* last block number */
+ struct f2fs_io_info fio; /* store buffered io info. */
+ struct rw_semaphore io_rwsem; /* blocking op for bio */
+};
+
struct f2fs_sb_info {
struct super_block *sb; /* pointer to VFS super block */
+ struct proc_dir_entry *s_proc; /* proc entry */
struct buffer_head *raw_super_buf; /* buffer head of raw sb */
struct f2fs_super_block *raw_super; /* raw super block pointer */
int s_dirty; /* dirty flag for checkpoint */
+ bool need_fsck; /* need fsck.f2fs to fix */
/* for node-related operations */
struct f2fs_nm_info *nm_info; /* node manager */
@@ -346,30 +499,34 @@
/* for segment-related operations */
struct f2fs_sm_info *sm_info; /* segment manager */
- struct bio *bio[NR_PAGE_TYPE]; /* bios to merge */
- sector_t last_block_in_bio[NR_PAGE_TYPE]; /* last block number */
- struct rw_semaphore bio_sem; /* IO semaphore */
+
+ /* for bio operations */
+ struct f2fs_bio_info read_io; /* for read bios */
+ struct f2fs_bio_info write_io[NR_PAGE_TYPE]; /* for write bios */
+ struct completion *wait_io; /* for completion bios */
/* for checkpoint */
struct f2fs_checkpoint *ckpt; /* raw checkpoint pointer */
struct inode *meta_inode; /* cache meta blocks */
struct mutex cp_mutex; /* checkpoint procedure lock */
- struct mutex fs_lock[NR_GLOBAL_LOCKS]; /* blocking FS operations */
- struct mutex node_write; /* locking node writes */
+ struct rw_semaphore cp_rwsem; /* blocking FS operations */
+ struct rw_semaphore node_write; /* locking node writes */
struct mutex writepages; /* mutex for writepages() */
- unsigned char next_lock_num; /* round-robin global locks */
- int por_doing; /* recovery is doing or not */
- int on_build_free_nids; /* build_free_nids is doing */
+ bool por_doing; /* recovery is doing or not */
+ wait_queue_head_t cp_wait;
- /* for orphan inode management */
- struct list_head orphan_inode_list; /* orphan inode list */
- struct mutex orphan_inode_mutex; /* for orphan inode list */
+ /* for inode management */
+ struct radix_tree_root ino_root[MAX_INO_ENTRY]; /* ino entry array */
+ spinlock_t ino_lock[MAX_INO_ENTRY]; /* for ino entry lock */
+ struct list_head ino_list[MAX_INO_ENTRY]; /* inode list head */
+
+ /* for orphan inode, use 0'th array */
unsigned int n_orphans; /* # of orphan inodes */
+ unsigned int max_orphans; /* max orphan inodes */
/* for directory inode management */
struct list_head dir_inode_list; /* dir inode list */
spinlock_t dir_inode_lock; /* for dir inode list lock */
- unsigned int n_dirty_dirs; /* # of dir inodes */
/* basic file system units */
unsigned int log_sectors_per_block; /* log2 sectors per block */
@@ -387,6 +544,7 @@
unsigned int total_valid_node_count; /* valid node block count */
unsigned int total_valid_inode_count; /* valid inode count */
int active_logs; /* # of active logs */
+ int dir_level; /* directory level */
block_t user_block_count; /* # of user blocks */
block_t total_valid_block_count; /* # of valid blocks */
@@ -402,17 +560,34 @@
struct f2fs_gc_kthread *gc_thread; /* GC thread */
unsigned int cur_victim_sec; /* current victim section num */
+ /* maximum # of trials to find a victim segment for SSR and GC */
+ unsigned int max_victim_search;
+
/*
* for stat information.
* one is for the LFS mode, and the other is for the SSR mode.
*/
+#ifdef CONFIG_F2FS_STAT_FS
struct f2fs_stat_info *stat_info; /* FS status information */
unsigned int segment_count[2]; /* # of allocated segments */
unsigned int block_count[2]; /* # of allocated blocks */
- unsigned int last_victim[2]; /* last victim segment # */
int total_hit_ext, read_hit_ext; /* extent cache hit ratio */
+ int inline_inode; /* # of inline_data inodes */
int bg_gc; /* background gc calls */
+ unsigned int n_dirty_dirs; /* # of dir inodes */
+#endif
+ unsigned int last_victim[2]; /* last victim segment # */
spinlock_t stat_lock; /* lock for stat operations */
+
+ /* For sysfs support */
+ struct kobject s_kobj;
+ struct completion s_kobj_unregister;
+
+ /* For Android sdcard emulation */
+ u32 android_emu_uid;
+ u32 android_emu_gid;
+ umode_t android_emu_mode;
+ int android_emu_flags;
};
/*
@@ -428,6 +603,21 @@
return sb->s_fs_info;
}
+static inline struct f2fs_sb_info *F2FS_I_SB(struct inode *inode)
+{
+ return F2FS_SB(inode->i_sb);
+}
+
+static inline struct f2fs_sb_info *F2FS_M_SB(struct address_space *mapping)
+{
+ return F2FS_I_SB(mapping->host);
+}
+
+static inline struct f2fs_sb_info *F2FS_P_SB(struct page *page)
+{
+ return F2FS_M_SB(page->mapping);
+}
+
static inline struct f2fs_super_block *F2FS_RAW_SUPER(struct f2fs_sb_info *sbi)
{
return (struct f2fs_super_block *)(sbi->raw_super);
@@ -438,6 +628,16 @@
return (struct f2fs_checkpoint *)(sbi->ckpt);
}
+static inline struct f2fs_node *F2FS_NODE(struct page *page)
+{
+ return (struct f2fs_node *)page_address(page);
+}
+
+static inline struct f2fs_inode *F2FS_INODE(struct page *page)
+{
+ return &((struct f2fs_node *)page_address(page))->i;
+}
+
static inline struct f2fs_nm_info *NM_I(struct f2fs_sb_info *sbi)
{
return (struct f2fs_nm_info *)(sbi->nm_info);
@@ -463,6 +663,16 @@
return (struct dirty_seglist_info *)(SM_I(sbi)->dirty_info);
}
+static inline struct address_space *META_MAPPING(struct f2fs_sb_info *sbi)
+{
+ return sbi->meta_inode->i_mapping;
+}
+
+static inline struct address_space *NODE_MAPPING(struct f2fs_sb_info *sbi)
+{
+ return sbi->node_inode->i_mapping;
+}
+
static inline void F2FS_SET_SB_DIRT(struct f2fs_sb_info *sbi)
{
sbi->s_dirty = 1;
@@ -473,6 +683,11 @@
sbi->s_dirty = 0;
}
+static inline unsigned long long cur_cp_version(struct f2fs_checkpoint *cp)
+{
+ return le64_to_cpu(cp->checkpoint_ver);
+}
+
static inline bool is_set_ckpt_flags(struct f2fs_checkpoint *cp, unsigned int f)
{
unsigned int ckpt_flags = le32_to_cpu(cp->ckpt_flags);
@@ -493,40 +708,24 @@
cp->ckpt_flags = cpu_to_le32(ckpt_flags);
}
-static inline void mutex_lock_all(struct f2fs_sb_info *sbi)
+static inline void f2fs_lock_op(struct f2fs_sb_info *sbi)
{
- int i = 0;
- for (; i < NR_GLOBAL_LOCKS; i++)
- mutex_lock(&sbi->fs_lock[i]);
+ down_read(&sbi->cp_rwsem);
}
-static inline void mutex_unlock_all(struct f2fs_sb_info *sbi)
+static inline void f2fs_unlock_op(struct f2fs_sb_info *sbi)
{
- int i = 0;
- for (; i < NR_GLOBAL_LOCKS; i++)
- mutex_unlock(&sbi->fs_lock[i]);
+ up_read(&sbi->cp_rwsem);
}
-static inline int mutex_lock_op(struct f2fs_sb_info *sbi)
+static inline void f2fs_lock_all(struct f2fs_sb_info *sbi)
{
- unsigned char next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
- int i = 0;
-
- for (; i < NR_GLOBAL_LOCKS; i++)
- if (mutex_trylock(&sbi->fs_lock[i]))
- return i;
-
- mutex_lock(&sbi->fs_lock[next_lock]);
- sbi->next_lock_num++;
- return next_lock;
+ f2fs_down_write(&sbi->cp_rwsem, &sbi->cp_mutex);
}
-static inline void mutex_unlock_op(struct f2fs_sb_info *sbi, int ilock)
+static inline void f2fs_unlock_all(struct f2fs_sb_info *sbi)
{
- if (ilock < 0)
- return;
- BUG_ON(ilock >= NR_GLOBAL_LOCKS);
- mutex_unlock(&sbi->fs_lock[ilock]);
+ up_write(&sbi->cp_rwsem);
}
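Reviewer note: the hunk above replaces the old `fs_lock[NR_GLOBAL_LOCKS]` round-robin mutex array with a single reader-writer semaphore: every FS operation takes the read side (`f2fs_lock_op`) and checkpoint takes the write side (`f2fs_lock_all`), draining in-flight operations. A minimal sketch of that pattern, using a hypothetical condition-variable rwlock (Python's stdlib has none):

```python
import threading

class CPRWSem:
    """Many concurrent ops (readers) vs. one checkpoint (writer)."""
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def down_read(self):      # f2fs_lock_op()
        with self._cond:
            while self._writer:
                self._cond.wait()
            self._readers += 1

    def up_read(self):        # f2fs_unlock_op()
        with self._cond:
            self._readers -= 1
            self._cond.notify_all()

    def down_write(self):     # f2fs_lock_all(): wait for ops to drain
        with self._cond:
            while self._writer or self._readers:
                self._cond.wait()
            self._writer = True

    def up_write(self):       # f2fs_unlock_all()
        with self._cond:
            self._writer = False
            self._cond.notify_all()
```

Compared with the removed `mutex_lock_op()` scheme, this keeps operations from blocking each other while still giving checkpoint exclusive access.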
/*
@@ -534,8 +733,9 @@
*/
static inline int check_nid_range(struct f2fs_sb_info *sbi, nid_t nid)
{
- WARN_ON((nid >= NM_I(sbi)->max_nid));
- if (nid >= NM_I(sbi)->max_nid)
+ if (unlikely(nid < F2FS_ROOT_INO(sbi)))
+ return -EINVAL;
+ if (unlikely(nid >= NM_I(sbi)->max_nid))
return -EINVAL;
return 0;
}
@@ -548,9 +748,24 @@
static inline int F2FS_HAS_BLOCKS(struct inode *inode)
{
if (F2FS_I(inode)->i_xattr_nid)
- return (inode->i_blocks > F2FS_DEFAULT_ALLOCATED_BLOCKS + 1);
+ return inode->i_blocks > F2FS_DEFAULT_ALLOCATED_BLOCKS + 1;
else
- return (inode->i_blocks > F2FS_DEFAULT_ALLOCATED_BLOCKS);
+ return inode->i_blocks > F2FS_DEFAULT_ALLOCATED_BLOCKS;
+}
+
+static inline int f2fs_handle_error(struct f2fs_sb_info *sbi)
+{
+ if (test_opt(sbi, ERRORS_PANIC))
+ BUG();
+ if (test_opt(sbi, ERRORS_RECOVER)) {
+ sbi->need_fsck = true;
+ return 1;
+ }
+ return 0;
+}
+
+static inline bool f2fs_has_xattr_block(unsigned int ofs)
+{
+ return ofs == XATTR_NODE_OFFSET;
}
static inline bool inc_valid_block_count(struct f2fs_sb_info *sbi,
@@ -561,7 +776,7 @@
spin_lock(&sbi->stat_lock);
valid_block_count =
sbi->total_valid_block_count + (block_t)count;
- if (valid_block_count > sbi->user_block_count) {
+ if (unlikely(valid_block_count > sbi->user_block_count)) {
spin_unlock(&sbi->stat_lock);
return false;
}
@@ -572,17 +787,30 @@
return true;
}
-static inline int dec_valid_block_count(struct f2fs_sb_info *sbi,
+static inline void dec_valid_block_count(struct f2fs_sb_info *sbi,
struct inode *inode,
blkcnt_t count)
{
spin_lock(&sbi->stat_lock);
- BUG_ON(sbi->total_valid_block_count < (block_t) count);
- BUG_ON(inode->i_blocks < count);
+
+ if (sbi->total_valid_block_count < (block_t)count) {
+ pr_crit("F2FS-fs (%s): block accounting error: %u < %llu\n",
+ sbi->sb->s_id, sbi->total_valid_block_count,
+ (unsigned long long)count);
+ f2fs_handle_error(sbi);
+ sbi->total_valid_block_count = count;
+ }
+ if (inode->i_blocks < count) {
+ pr_crit("F2FS-fs (%s): inode accounting error: %llu < %llu\n",
+ sbi->sb->s_id, (unsigned long long)inode->i_blocks,
+ (unsigned long long)count);
+ f2fs_handle_error(sbi);
+ inode->i_blocks = count;
+ }
+
inode->i_blocks -= count;
sbi->total_valid_block_count -= (block_t)count;
spin_unlock(&sbi->stat_lock);
- return 0;
}
static inline void inc_page_count(struct f2fs_sb_info *sbi, int count_type)
@@ -591,9 +819,11 @@
F2FS_SET_SB_DIRT(sbi);
}
-static inline void inode_inc_dirty_dents(struct inode *inode)
+static inline void inode_inc_dirty_pages(struct inode *inode)
{
- atomic_inc(&F2FS_I(inode)->dirty_dents);
+ atomic_inc(&F2FS_I(inode)->dirty_pages);
+ if (S_ISDIR(inode->i_mode))
+ inc_page_count(F2FS_I_SB(inode), F2FS_DIRTY_DENTS);
}
static inline void dec_page_count(struct f2fs_sb_info *sbi, int count_type)
@@ -601,9 +831,15 @@
atomic_dec(&sbi->nr_pages[count_type]);
}
-static inline void inode_dec_dirty_dents(struct inode *inode)
+static inline void inode_dec_dirty_pages(struct inode *inode)
{
- atomic_dec(&F2FS_I(inode)->dirty_dents);
+ if (!S_ISDIR(inode->i_mode) && !S_ISREG(inode->i_mode))
+ return;
+
+ atomic_dec(&F2FS_I(inode)->dirty_pages);
+
+ if (S_ISDIR(inode->i_mode))
+ dec_page_count(F2FS_I_SB(inode), F2FS_DIRTY_DENTS);
}
static inline int get_pages(struct f2fs_sb_info *sbi, int count_type)
@@ -611,6 +847,11 @@
return atomic_read(&sbi->nr_pages[count_type]);
}
+static inline int get_dirty_pages(struct inode *inode)
+{
+ return atomic_read(&F2FS_I(inode)->dirty_pages);
+}
+
static inline int get_blocktype_secs(struct f2fs_sb_info *sbi, int block_type)
{
unsigned int pages_per_sec = sbi->segs_per_sec *
@@ -621,11 +862,7 @@
static inline block_t valid_user_blocks(struct f2fs_sb_info *sbi)
{
- block_t ret;
- spin_lock(&sbi->stat_lock);
- ret = sbi->total_valid_block_count;
- spin_unlock(&sbi->stat_lock);
- return ret;
+ return sbi->total_valid_block_count;
}
static inline unsigned long __bitmap_size(struct f2fs_sb_info *sbi, int flag)
@@ -644,16 +881,25 @@
static inline void *__bitmap_ptr(struct f2fs_sb_info *sbi, int flag)
{
struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
- int offset = (flag == NAT_BITMAP) ?
+ int offset;
+
+ if (le32_to_cpu(F2FS_RAW_SUPER(sbi)->cp_payload) > 0) {
+ if (flag == NAT_BITMAP)
+ return &ckpt->sit_nat_version_bitmap;
+ else
+ return (unsigned char *)ckpt + F2FS_BLKSIZE;
+ } else {
+ offset = (flag == NAT_BITMAP) ?
le32_to_cpu(ckpt->sit_ver_bitmap_bytesize) : 0;
- return &ckpt->sit_nat_version_bitmap + offset;
+ return &ckpt->sit_nat_version_bitmap + offset;
+ }
}
static inline block_t __start_cp_addr(struct f2fs_sb_info *sbi)
{
block_t start_addr;
struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
- unsigned long long ckpt_version = le64_to_cpu(ckpt->checkpoint_ver);
+ unsigned long long ckpt_version = cur_cp_version(ckpt);
start_addr = le32_to_cpu(F2FS_RAW_SUPER(sbi)->cp_blkaddr);
@@ -673,96 +919,101 @@
}
static inline bool inc_valid_node_count(struct f2fs_sb_info *sbi,
- struct inode *inode,
- unsigned int count)
+ struct inode *inode)
{
block_t valid_block_count;
unsigned int valid_node_count;
spin_lock(&sbi->stat_lock);
- valid_block_count = sbi->total_valid_block_count + (block_t)count;
- sbi->alloc_valid_block_count += (block_t)count;
- valid_node_count = sbi->total_valid_node_count + count;
-
- if (valid_block_count > sbi->user_block_count) {
+ valid_block_count = sbi->total_valid_block_count + 1;
+ if (unlikely(valid_block_count > sbi->user_block_count)) {
spin_unlock(&sbi->stat_lock);
return false;
}
- if (valid_node_count > sbi->total_node_count) {
+ valid_node_count = sbi->total_valid_node_count + 1;
+ if (unlikely(valid_node_count > sbi->total_node_count)) {
spin_unlock(&sbi->stat_lock);
return false;
}
if (inode)
- inode->i_blocks += count;
- sbi->total_valid_node_count = valid_node_count;
- sbi->total_valid_block_count = valid_block_count;
+ inode->i_blocks++;
+
+ sbi->alloc_valid_block_count++;
+ sbi->total_valid_node_count++;
+ sbi->total_valid_block_count++;
spin_unlock(&sbi->stat_lock);
return true;
}
static inline void dec_valid_node_count(struct f2fs_sb_info *sbi,
- struct inode *inode,
- unsigned int count)
+ struct inode *inode)
{
spin_lock(&sbi->stat_lock);
- BUG_ON(sbi->total_valid_block_count < count);
- BUG_ON(sbi->total_valid_node_count < count);
- BUG_ON(inode->i_blocks < count);
+ if (sbi->total_valid_block_count < 1) {
+ pr_crit("F2FS-fs (%s): block accounting error: %llu < 1\n",
+ sbi->sb->s_id,
+ (unsigned long long)sbi->total_valid_block_count);
+ f2fs_handle_error(sbi);
+ sbi->total_valid_block_count = 1;
+ }
+ if (sbi->total_valid_node_count < 1) {
+ pr_crit("F2FS-fs (%s): node accounting error: %u < 1\n",
+ sbi->sb->s_id, sbi->total_valid_node_count);
+ f2fs_handle_error(sbi);
+ sbi->total_valid_node_count = 1;
+ }
+ if (inode->i_blocks < 1) {
+ pr_crit("F2FS-fs (%s): inode accounting error: %llu < 1\n",
+ sbi->sb->s_id, (unsigned long long)inode->i_blocks);
+ f2fs_handle_error(sbi);
+ inode->i_blocks = 1;
+ }
- inode->i_blocks -= count;
- sbi->total_valid_node_count -= count;
- sbi->total_valid_block_count -= (block_t)count;
+ inode->i_blocks--;
+ sbi->total_valid_node_count--;
+ sbi->total_valid_block_count--;
spin_unlock(&sbi->stat_lock);
}
static inline unsigned int valid_node_count(struct f2fs_sb_info *sbi)
{
- unsigned int ret;
- spin_lock(&sbi->stat_lock);
- ret = sbi->total_valid_node_count;
- spin_unlock(&sbi->stat_lock);
- return ret;
+ return sbi->total_valid_node_count;
}
static inline void inc_valid_inode_count(struct f2fs_sb_info *sbi)
{
spin_lock(&sbi->stat_lock);
- BUG_ON(sbi->total_valid_inode_count == sbi->total_node_count);
+ f2fs_bug_on(sbi, sbi->total_valid_inode_count == sbi->total_node_count);
sbi->total_valid_inode_count++;
spin_unlock(&sbi->stat_lock);
}
-static inline int dec_valid_inode_count(struct f2fs_sb_info *sbi)
+static inline void dec_valid_inode_count(struct f2fs_sb_info *sbi)
{
spin_lock(&sbi->stat_lock);
- BUG_ON(!sbi->total_valid_inode_count);
+ f2fs_bug_on(sbi, !sbi->total_valid_inode_count);
sbi->total_valid_inode_count--;
spin_unlock(&sbi->stat_lock);
- return 0;
}
static inline unsigned int valid_inode_count(struct f2fs_sb_info *sbi)
{
- unsigned int ret;
- spin_lock(&sbi->stat_lock);
- ret = sbi->total_valid_inode_count;
- spin_unlock(&sbi->stat_lock);
- return ret;
+ return sbi->total_valid_inode_count;
}
static inline void f2fs_put_page(struct page *page, int unlock)
{
- if (!page || IS_ERR(page))
+ if (!page)
return;
if (unlock) {
- BUG_ON(!PageLocked(page));
+ f2fs_bug_on(F2FS_P_SB(page), !PageLocked(page));
unlock_page(page);
}
page_cache_release(page);
@@ -779,16 +1030,30 @@
}
static inline struct kmem_cache *f2fs_kmem_cache_create(const char *name,
- size_t size, void (*ctor)(void *))
+ size_t size)
{
- return kmem_cache_create(name, size, 0, SLAB_RECLAIM_ACCOUNT, ctor);
+ return kmem_cache_create(name, size, 0, SLAB_RECLAIM_ACCOUNT, NULL);
+}
+
+static inline void *f2fs_kmem_cache_alloc(struct kmem_cache *cachep,
+ gfp_t flags)
+{
+ void *entry;
+retry:
+ entry = kmem_cache_alloc(cachep, flags);
+ if (!entry) {
+ cond_resched();
+ goto retry;
+ }
+
+ return entry;
}
#define RAW_IS_INODE(p) ((p)->footer.nid == (p)->footer.ino)
static inline bool IS_INODE(struct page *page)
{
- struct f2fs_node *p = (struct f2fs_node *)page_address(page);
+ struct f2fs_node *p = F2FS_NODE(page);
return RAW_IS_INODE(p);
}
@@ -802,7 +1067,7 @@
{
struct f2fs_node *raw_node;
__le32 *addr_array;
- raw_node = (struct f2fs_node *)page_address(node_page);
+ raw_node = F2FS_NODE(node_page);
addr_array = blkaddr_in_node(raw_node);
return le32_to_cpu(addr_array[offset]);
}
@@ -843,14 +1108,26 @@
/* used for f2fs_inode_info->flags */
enum {
FI_NEW_INODE, /* indicate newly allocated inode */
+ FI_DIRTY_INODE, /* indicate whether inode is dirty */
+ FI_DIRTY_DIR, /* indicate directory has dirty pages */
FI_INC_LINK, /* need to increment i_nlink */
FI_ACL_MODE, /* indicate acl mode */
FI_NO_ALLOC, /* should not allocate any blocks */
+ FI_UPDATE_DIR, /* should update inode block for consistency */
+ FI_DELAY_IPUT, /* used for recovery */
+ FI_NO_EXTENT, /* not to use the extent cache */
+ FI_INLINE_XATTR, /* used for inline xattr */
+ FI_INLINE_DATA, /* used for inline data */
+ FI_APPEND_WRITE, /* inode has appended data */
+ FI_UPDATE_WRITE, /* inode has in-place-update data */
+ FI_NEED_IPU, /* used for ipu for fdatasync */
+ FI_ATOMIC_FILE, /* used for atomic writes support */
};
static inline void set_inode_flag(struct f2fs_inode_info *fi, int flag)
{
- set_bit(flag, &fi->flags);
+ if (!test_bit(flag, &fi->flags))
+ set_bit(flag, &fi->flags);
}
static inline int is_inode_flag_set(struct f2fs_inode_info *fi, int flag)
@@ -860,7 +1137,8 @@
static inline void clear_inode_flag(struct f2fs_inode_info *fi, int flag)
{
- clear_bit(flag, &fi->flags);
+ if (test_bit(flag, &fi->flags))
+ clear_bit(flag, &fi->flags);
}
static inline void set_acl_inode(struct f2fs_inode_info *fi, umode_t mode)
@@ -878,14 +1156,110 @@
return 0;
}
+int f2fs_android_emu(struct f2fs_sb_info *, struct inode *, u32 *, u32 *,
+ umode_t *);
+
+#define IS_ANDROID_EMU(sbi, fi, pfi) \
+ (test_opt((sbi), ANDROID_EMU) && \
+ (((fi)->i_advise & FADVISE_ANDROID_EMU) || \
+ ((pfi)->i_advise & FADVISE_ANDROID_EMU)))
+
+static inline void get_inline_info(struct f2fs_inode_info *fi,
+ struct f2fs_inode *ri)
+{
+ if (ri->i_inline & F2FS_INLINE_XATTR)
+ set_inode_flag(fi, FI_INLINE_XATTR);
+ if (ri->i_inline & F2FS_INLINE_DATA)
+ set_inode_flag(fi, FI_INLINE_DATA);
+}
+
+static inline void set_raw_inline(struct f2fs_inode_info *fi,
+ struct f2fs_inode *ri)
+{
+ ri->i_inline = 0;
+
+ if (is_inode_flag_set(fi, FI_INLINE_XATTR))
+ ri->i_inline |= F2FS_INLINE_XATTR;
+ if (is_inode_flag_set(fi, FI_INLINE_DATA))
+ ri->i_inline |= F2FS_INLINE_DATA;
+}
+
+static inline int f2fs_has_inline_xattr(struct inode *inode)
+{
+ return is_inode_flag_set(F2FS_I(inode), FI_INLINE_XATTR);
+}
+
+static inline unsigned int addrs_per_inode(struct f2fs_inode_info *fi)
+{
+ if (f2fs_has_inline_xattr(&fi->vfs_inode))
+ return DEF_ADDRS_PER_INODE - F2FS_INLINE_XATTR_ADDRS;
+ return DEF_ADDRS_PER_INODE;
+}
+
+static inline void *inline_xattr_addr(struct page *page)
+{
+ struct f2fs_inode *ri = F2FS_INODE(page);
+ return (void *)&(ri->i_addr[DEF_ADDRS_PER_INODE -
+ F2FS_INLINE_XATTR_ADDRS]);
+}
+
+static inline int inline_xattr_size(struct inode *inode)
+{
+ if (f2fs_has_inline_xattr(inode))
+ return F2FS_INLINE_XATTR_ADDRS << 2;
+ else
+ return 0;
+}
+
+static inline int f2fs_has_inline_data(struct inode *inode)
+{
+ return is_inode_flag_set(F2FS_I(inode), FI_INLINE_DATA);
+}
+
+static inline void *inline_data_addr(struct page *page)
+{
+ struct f2fs_inode *ri = F2FS_INODE(page);
+ return (void *)&(ri->i_addr[1]);
+}
+
+static inline int f2fs_readonly(struct super_block *sb)
+{
+ return sb->s_flags & MS_RDONLY;
+}
+
+static inline bool f2fs_cp_error(struct f2fs_sb_info *sbi)
+{
+ return is_set_ckpt_flags(sbi->ckpt, CP_ERROR_FLAG);
+}
+
+static inline void f2fs_stop_checkpoint(struct f2fs_sb_info *sbi)
+{
+ set_ckpt_flags(sbi->ckpt, CP_ERROR_FLAG);
+ if (f2fs_handle_error(sbi))
+ sbi->sb->s_flags |= MS_RDONLY;
+}
+
+#define get_inode_mode(i) \
+ ((is_inode_flag_set(F2FS_I(i), FI_ACL_MODE)) ? \
+ (F2FS_I(i)->i_acl_mode) : ((i)->i_mode))
+
+/* get offset of first page in next direct node */
+#define PGOFS_OF_NEXT_DNODE(pgofs, fi) \
+ ((pgofs < ADDRS_PER_INODE(fi)) ? ADDRS_PER_INODE(fi) : \
+ (pgofs - ADDRS_PER_INODE(fi) + ADDRS_PER_BLOCK) / \
+ ADDRS_PER_BLOCK * ADDRS_PER_BLOCK + ADDRS_PER_INODE(fi))
+
/*
* file.c
*/
int f2fs_sync_file(struct file *, loff_t, loff_t, int);
void truncate_data_blocks(struct dnode_of_data *);
+int truncate_blocks(struct inode *, u64, bool);
void f2fs_truncate(struct inode *);
+int f2fs_getattr(struct vfsmount *, struct dentry *, struct kstat *);
int f2fs_setattr(struct dentry *, struct iattr *);
int truncate_hole(struct inode *, pgoff_t, pgoff_t);
+int truncate_data_blocks_range(struct dnode_of_data *, int);
long f2fs_ioctl(struct file *, unsigned int, unsigned long);
long f2fs_compat_ioctl(struct file *, unsigned int, unsigned long);
@@ -894,10 +1268,12 @@
*/
void f2fs_set_inode_flags(struct inode *);
struct inode *f2fs_iget(struct super_block *, unsigned long);
+int try_to_free_nats(struct f2fs_sb_info *, int);
void update_inode(struct inode *, struct page *);
-int update_inode_page(struct inode *);
+void update_inode_page(struct inode *);
int f2fs_write_inode(struct inode *, struct writeback_control *);
void f2fs_evict_inode(struct inode *);
+void handle_failed_inode(struct inode *);
/*
* namei.c
@@ -913,9 +1289,10 @@
ino_t f2fs_inode_by_name(struct inode *, struct qstr *);
void f2fs_set_link(struct inode *, struct f2fs_dir_entry *,
struct page *, struct inode *);
-void init_dent_inode(const struct qstr *, struct page *);
+int update_dent_inode(struct inode *, const struct qstr *);
int __f2fs_add_link(struct inode *, const struct qstr *, struct inode *);
void f2fs_delete_entry(struct f2fs_dir_entry *, struct page *, struct inode *);
+int f2fs_do_tmpfile(struct inode *, struct inode *);
int f2fs_make_empty(struct inode *, struct inode *);
bool f2fs_empty_dir(struct inode *);
@@ -935,7 +1312,7 @@
/*
* hash.c
*/
-f2fs_hash_t f2fs_dentry_hash(const char *, size_t);
+f2fs_hash_t f2fs_dentry_hash(const struct qstr *);
/*
* node.c
@@ -943,13 +1320,18 @@
struct dnode_of_data;
struct node_info;
-int is_checkpointed_node(struct f2fs_sb_info *, nid_t);
+bool available_free_memory(struct f2fs_sb_info *, int);
+bool is_checkpointed_node(struct f2fs_sb_info *, nid_t);
+bool has_fsynced_inode(struct f2fs_sb_info *, nid_t);
+bool need_inode_block_update(struct f2fs_sb_info *, nid_t);
void get_node_info(struct f2fs_sb_info *, nid_t, struct node_info *);
int get_dnode_of_data(struct dnode_of_data *, pgoff_t, int);
int truncate_inode_blocks(struct inode *, pgoff_t);
-int remove_inode_page(struct inode *);
-int new_inode_page(struct inode *, const struct qstr *);
-struct page *new_node_page(struct dnode_of_data *, unsigned int);
+int truncate_xattr_node(struct inode *, struct page *);
+int wait_on_node_pages_writeback(struct f2fs_sb_info *, nid_t);
+void remove_inode_page(struct inode *);
+struct page *new_inode_page(struct inode *);
+struct page *new_node_page(struct dnode_of_data *, unsigned int, struct page *);
void ra_node_page(struct f2fs_sb_info *, nid_t);
struct page *get_node_page(struct f2fs_sb_info *, pgoff_t);
struct page *get_node_page_ra(struct page *, int);
@@ -958,8 +1340,8 @@
bool alloc_nid(struct f2fs_sb_info *, nid_t *);
void alloc_nid_done(struct f2fs_sb_info *, nid_t);
void alloc_nid_failed(struct f2fs_sb_info *, nid_t);
-void recover_node_page(struct f2fs_sb_info *, struct page *,
- struct f2fs_summary *, struct node_info *, block_t);
+void recover_inline_xattr(struct inode *, struct page *);
+void recover_xattr_data(struct inode *, struct page *, block_t);
int recover_inode_page(struct f2fs_sb_info *, struct page *);
int restore_node_summary(struct f2fs_sb_info *, unsigned int,
struct f2fs_summary_block *);
@@ -972,69 +1354,93 @@
/*
* segment.c
*/
+void prepare_atomic_pages(struct inode *, struct atomic_w *);
+void commit_atomic_pages(struct inode *, u64, bool);
void f2fs_balance_fs(struct f2fs_sb_info *);
+void f2fs_balance_fs_bg(struct f2fs_sb_info *);
+int f2fs_issue_flush(struct f2fs_sb_info *);
+int create_flush_cmd_control(struct f2fs_sb_info *);
+void destroy_flush_cmd_control(struct f2fs_sb_info *);
void invalidate_blocks(struct f2fs_sb_info *, block_t);
-void locate_dirty_segment(struct f2fs_sb_info *, unsigned int);
+void refresh_sit_entry(struct f2fs_sb_info *, block_t, block_t);
void clear_prefree_segments(struct f2fs_sb_info *);
+void release_discard_addrs(struct f2fs_sb_info *);
+void discard_next_dnode(struct f2fs_sb_info *, block_t);
int npages_for_summary_flush(struct f2fs_sb_info *);
void allocate_new_segments(struct f2fs_sb_info *);
+int f2fs_trim_fs(struct f2fs_sb_info *, struct fstrim_range *);
struct page *get_sum_page(struct f2fs_sb_info *, unsigned int);
-struct bio *f2fs_bio_alloc(struct block_device *, int);
-void f2fs_submit_bio(struct f2fs_sb_info *, enum page_type, bool sync);
void write_meta_page(struct f2fs_sb_info *, struct page *);
-void write_node_page(struct f2fs_sb_info *, struct page *, unsigned int,
- block_t, block_t *);
-void write_data_page(struct inode *, struct page *, struct dnode_of_data*,
- block_t, block_t *);
-void rewrite_data_page(struct f2fs_sb_info *, struct page *, block_t);
+void write_node_page(struct f2fs_sb_info *, struct page *,
+ struct f2fs_io_info *, unsigned int, block_t, block_t *);
+void write_data_page(struct page *, struct dnode_of_data *, block_t *,
+ struct f2fs_io_info *);
+void rewrite_data_page(struct page *, block_t, struct f2fs_io_info *);
void recover_data_page(struct f2fs_sb_info *, struct page *,
struct f2fs_summary *, block_t, block_t);
-void rewrite_node_page(struct f2fs_sb_info *, struct page *,
- struct f2fs_summary *, block_t, block_t);
+void allocate_data_block(struct f2fs_sb_info *, struct page *,
+ block_t, block_t *, struct f2fs_summary *, int);
+void f2fs_wait_on_page_writeback(struct page *, enum page_type);
void write_data_summaries(struct f2fs_sb_info *, block_t);
void write_node_summaries(struct f2fs_sb_info *, block_t);
int lookup_journal_in_cursum(struct f2fs_summary_block *,
int, unsigned int, int);
-void flush_sit_entries(struct f2fs_sb_info *);
+void flush_sit_entries(struct f2fs_sb_info *, struct cp_control *);
int build_segment_manager(struct f2fs_sb_info *);
void destroy_segment_manager(struct f2fs_sb_info *);
+int __init create_segment_manager_caches(void);
+void destroy_segment_manager_caches(void);
/*
* checkpoint.c
*/
struct page *grab_meta_page(struct f2fs_sb_info *, pgoff_t);
struct page *get_meta_page(struct f2fs_sb_info *, pgoff_t);
+struct page *get_meta_page_ra(struct f2fs_sb_info *, pgoff_t);
+int ra_meta_pages(struct f2fs_sb_info *, block_t, int, int);
long sync_meta_pages(struct f2fs_sb_info *, enum page_type, long);
-int check_orphan_space(struct f2fs_sb_info *);
+void add_dirty_inode(struct f2fs_sb_info *, nid_t, int type);
+void remove_dirty_inode(struct f2fs_sb_info *, nid_t, int type);
+void release_dirty_inode(struct f2fs_sb_info *);
+bool exist_written_data(struct f2fs_sb_info *, nid_t, int);
+int acquire_orphan_inode(struct f2fs_sb_info *);
+void release_orphan_inode(struct f2fs_sb_info *);
void add_orphan_inode(struct f2fs_sb_info *, nid_t);
void remove_orphan_inode(struct f2fs_sb_info *, nid_t);
-int recover_orphan_inodes(struct f2fs_sb_info *);
+void recover_orphan_inodes(struct f2fs_sb_info *);
int get_valid_checkpoint(struct f2fs_sb_info *);
-void set_dirty_dir_page(struct inode *, struct page *);
+void update_dirty_page(struct inode *, struct page *);
+void add_dirty_dir_inode(struct inode *);
void remove_dirty_dir_inode(struct inode *);
void sync_dirty_dir_inodes(struct f2fs_sb_info *);
-void write_checkpoint(struct f2fs_sb_info *, bool);
-void init_orphan_info(struct f2fs_sb_info *);
+void write_checkpoint(struct f2fs_sb_info *, struct cp_control *);
+void init_ino_entry_info(struct f2fs_sb_info *);
int __init create_checkpoint_caches(void);
void destroy_checkpoint_caches(void);
/*
* data.c
*/
+void f2fs_submit_merged_bio(struct f2fs_sb_info *, enum page_type, int);
+int f2fs_submit_page_bio(struct f2fs_sb_info *, struct page *, block_t, int);
+void f2fs_submit_page_mbio(struct f2fs_sb_info *, struct page *, block_t,
+ struct f2fs_io_info *);
int reserve_new_block(struct dnode_of_data *);
+int f2fs_reserve_block(struct dnode_of_data *, pgoff_t);
void update_extent_cache(block_t, struct dnode_of_data *);
struct page *find_data_page(struct inode *, pgoff_t, bool);
struct page *get_lock_data_page(struct inode *, pgoff_t);
-struct page *get_new_data_page(struct inode *, pgoff_t, bool);
-int f2fs_readpage(struct f2fs_sb_info *, struct page *, block_t, int);
-int do_write_data_page(struct page *);
+struct page *get_new_data_page(struct inode *, struct page *, pgoff_t, bool);
+int do_write_data_page(struct page *, struct f2fs_io_info *);
+int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *, u64, u64);
/*
* gc.c
*/
+void move_data_page(struct inode *, struct page *, int);
int start_gc_thread(struct f2fs_sb_info *);
void stop_gc_thread(struct f2fs_sb_info *);
-block_t start_bidx_of_node(unsigned int);
+block_t start_bidx_of_node(unsigned int, struct f2fs_inode_info *);
int f2fs_gc(struct f2fs_sb_info *);
void build_gc_manager(struct f2fs_sb_info *);
int __init create_gc_caches(void);
@@ -1053,20 +1459,19 @@
struct f2fs_stat_info {
struct list_head stat_list;
struct f2fs_sb_info *sbi;
- struct mutex stat_lock;
int all_area_segs, sit_area_segs, nat_area_segs, ssa_area_segs;
int main_area_segs, main_area_sections, main_area_zones;
int hit_ext, total_ext;
int ndirty_node, ndirty_dent, ndirty_dirs, ndirty_meta;
int nats, sits, fnids;
int total_count, utilization;
- int bg_gc;
+ int bg_gc, inline_inode;
unsigned int valid_count, valid_node_count, valid_inode_count;
unsigned int bimodal, avg_vblocks;
int util_free, util_valid, util_invalid;
int rsvd_segs, overp_segs;
int dirty_count, node_pages, meta_pages;
- int prefree_count, call_count;
+ int prefree_count, call_count, cp_count;
int tot_segs, node_segs, data_segs, free_segs, free_secs;
int tot_blks, data_blks, node_blks;
int curseg[NR_CURSEG_TYPE];
@@ -1078,11 +1483,37 @@
unsigned base_mem, cache_mem;
};
-#define stat_inc_call_count(si) ((si)->call_count++)
+static inline struct f2fs_stat_info *F2FS_STAT(struct f2fs_sb_info *sbi)
+{
+ return (struct f2fs_stat_info *)sbi->stat_info;
+}
+
+#define stat_inc_cp_count(si) ((si)->cp_count++)
+#define stat_inc_call_count(si) ((si)->call_count++)
+#define stat_inc_bggc_count(sbi) ((sbi)->bg_gc++)
+#define stat_inc_dirty_dir(sbi) ((sbi)->n_dirty_dirs++)
+#define stat_dec_dirty_dir(sbi) ((sbi)->n_dirty_dirs--)
+#define stat_inc_total_hit(sb) ((F2FS_SB(sb))->total_hit_ext++)
+#define stat_inc_read_hit(sb) ((F2FS_SB(sb))->read_hit_ext++)
+#define stat_inc_inline_inode(inode) \
+ do { \
+ if (f2fs_has_inline_data(inode)) \
+ ((F2FS_I_SB(inode))->inline_inode++); \
+ } while (0)
+#define stat_dec_inline_inode(inode) \
+ do { \
+ if (f2fs_has_inline_data(inode)) \
+ ((F2FS_I_SB(inode))->inline_inode--); \
+ } while (0)
+
+#define stat_inc_seg_type(sbi, curseg) \
+ ((sbi)->segment_count[(curseg)->alloc_type]++)
+#define stat_inc_block_count(sbi, curseg) \
+ ((sbi)->block_count[(curseg)->alloc_type]++)
#define stat_inc_seg_count(sbi, type) \
do { \
- struct f2fs_stat_info *si = sbi->stat_info; \
+ struct f2fs_stat_info *si = F2FS_STAT(sbi); \
(si)->tot_segs++; \
if (type == SUM_TYPE_DATA) \
si->data_segs++; \
@@ -1095,14 +1526,14 @@
#define stat_inc_data_blk_count(sbi, blks) \
do { \
- struct f2fs_stat_info *si = sbi->stat_info; \
+ struct f2fs_stat_info *si = F2FS_STAT(sbi); \
stat_inc_tot_blk_count(si, blks); \
si->data_blks += (blks); \
} while (0)
#define stat_inc_node_blk_count(sbi, blks) \
do { \
- struct f2fs_stat_info *si = sbi->stat_info; \
+ struct f2fs_stat_info *si = F2FS_STAT(sbi); \
stat_inc_tot_blk_count(si, blks); \
si->node_blks += (blks); \
} while (0)
@@ -1112,7 +1543,17 @@
void __init f2fs_create_root_stats(void);
void f2fs_destroy_root_stats(void);
#else
+#define stat_inc_cp_count(si)
#define stat_inc_call_count(si)
+#define stat_inc_bggc_count(si)
+#define stat_inc_dirty_dir(sbi)
+#define stat_dec_dirty_dir(sbi)
+#define stat_inc_total_hit(sb)
+#define stat_inc_read_hit(sb)
+#define stat_inc_inline_inode(inode)
+#define stat_dec_inline_inode(inode)
+#define stat_inc_seg_type(sbi, curseg)
+#define stat_inc_block_count(sbi, curseg)
#define stat_inc_seg_count(si, type)
#define stat_inc_tot_blk_count(si, blks)
#define stat_inc_data_blk_count(si, blks)
@@ -1133,4 +1574,14 @@
extern const struct inode_operations f2fs_dir_inode_operations;
extern const struct inode_operations f2fs_symlink_inode_operations;
extern const struct inode_operations f2fs_special_inode_operations;
+
+/*
+ * inline.c
+ */
+bool f2fs_may_inline(struct inode *);
+int f2fs_read_inline_data(struct inode *, struct page *);
+int f2fs_convert_inline_data(struct inode *, pgoff_t, struct page *);
+int f2fs_write_inline_data(struct inode *, struct page *, unsigned int);
+void truncate_inline_data(struct inode *, u64);
+bool recover_inline_data(struct inode *, struct page *);
#endif
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index 1cae864..784590a 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -19,6 +19,7 @@
#include <linux/compat.h>
#include <linux/uaccess.h>
#include <linux/mount.h>
+#include <linux/pagevec.h>
#include "f2fs.h"
#include "node.h"
@@ -32,41 +33,32 @@
{
struct page *page = vmf->page;
struct inode *inode = file_inode(vma->vm_file);
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
- block_t old_blk_addr;
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
struct dnode_of_data dn;
- int err, ilock;
+ int err;
f2fs_balance_fs(sbi);
sb_start_pagefault(inode->i_sb);
- /* block allocation */
- ilock = mutex_lock_op(sbi);
- set_new_dnode(&dn, inode, NULL, NULL, 0);
- err = get_dnode_of_data(&dn, page->index, ALLOC_NODE);
- if (err) {
- mutex_unlock_op(sbi, ilock);
+ /* force conversion to normal data indices */
+ err = f2fs_convert_inline_data(inode, MAX_INLINE_DATA + 1, page);
+ if (err)
goto out;
- }
- old_blk_addr = dn.data_blkaddr;
+ /* block allocation */
+ f2fs_lock_op(sbi);
+ set_new_dnode(&dn, inode, NULL, NULL, 0);
+ err = f2fs_reserve_block(&dn, page->index);
+ f2fs_unlock_op(sbi);
+ if (err)
+ goto out;
- if (old_blk_addr == NULL_ADDR) {
- err = reserve_new_block(&dn);
- if (err) {
- f2fs_put_dnode(&dn);
- mutex_unlock_op(sbi, ilock);
- goto out;
- }
- }
- f2fs_put_dnode(&dn);
- mutex_unlock_op(sbi, ilock);
-
+ file_update_time(vma->vm_file);
lock_page(page);
- if (page->mapping != inode->i_mapping ||
- page_offset(page) >= i_size_read(inode) ||
- !PageUptodate(page)) {
+ if (unlikely(page->mapping != inode->i_mapping ||
+ page_offset(page) > i_size_read(inode) ||
+ !PageUptodate(page))) {
unlock_page(page);
err = -EFAULT;
goto out;
@@ -76,10 +68,7 @@
* check to see if the page is mapped already (no holes)
*/
if (PageMappedToDisk(page))
- goto out;
-
- /* fill the page */
- wait_on_page_writeback(page);
+ goto mapped;
/* page is wholly or partially inside EOF */
if (((page->index + 1) << PAGE_CACHE_SHIFT) > i_size_read(inode)) {
@@ -90,7 +79,10 @@
set_page_dirty(page);
SetPageUptodate(page);
- file_update_time(vma->vm_file);
+ trace_f2fs_vm_page_mkwrite(page, DATA);
+mapped:
+ /* fill the page */
+ f2fs_wait_on_page_writeback(page, DATA);
out:
sb_end_pagefault(inode->i_sb);
return block_page_mkwrite_return(err);
@@ -99,13 +91,53 @@
static const struct vm_operations_struct f2fs_file_vm_ops = {
.fault = filemap_fault,
.page_mkwrite = f2fs_vm_page_mkwrite,
- .remap_pages = generic_file_remap_pages,
};
+static int get_parent_ino(struct inode *inode, nid_t *pino)
+{
+ struct dentry *dentry;
+
+ inode = igrab(inode);
+ dentry = d_find_any_alias(inode);
+ iput(inode);
+ if (!dentry)
+ return 0;
+
+ if (update_dent_inode(inode, &dentry->d_name)) {
+ dput(dentry);
+ return 0;
+ }
+
+ *pino = parent_ino(dentry);
+ dput(dentry);
+ return 1;
+}
+
+static inline bool need_do_checkpoint(struct inode *inode)
+{
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ bool need_cp = false;
+
+ if (!S_ISREG(inode->i_mode) || inode->i_nlink != 1)
+ need_cp = true;
+ else if (file_wrong_pino(inode))
+ need_cp = true;
+ else if (!space_for_roll_forward(sbi))
+ need_cp = true;
+ else if (!is_checkpointed_node(sbi, F2FS_I(inode)->i_pino))
+ need_cp = true;
+ else if (F2FS_I(inode)->xattr_ver == cur_cp_version(F2FS_CKPT(sbi)))
+ need_cp = true;
+
+ return need_cp;
+}
+
int f2fs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
{
struct inode *inode = file->f_mapping->host;
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
+ struct f2fs_inode_info *fi = F2FS_I(inode);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ nid_t ino = inode->i_ino;
int ret = 0;
bool need_cp = false;
struct writeback_control wbc = {
@@ -114,53 +146,249 @@
.for_reclaim = 0,
};
- if (inode->i_sb->s_flags & MS_RDONLY)
+ if (unlikely(f2fs_readonly(inode->i_sb)))
return 0;
trace_f2fs_sync_file_enter(inode);
+
+ /* if fdatasync is triggered, let's do in-place-update */
+ if (get_dirty_pages(inode) <= SM_I(sbi)->min_fsync_blocks)
+ set_inode_flag(fi, FI_NEED_IPU);
ret = filemap_write_and_wait_range(inode->i_mapping, start, end);
+ clear_inode_flag(fi, FI_NEED_IPU);
+
if (ret) {
trace_f2fs_sync_file_exit(inode, need_cp, datasync, ret);
return ret;
}
+ /*
+ * if there is no written data, don't waste time writing recovery info.
+ */
+ if (!is_inode_flag_set(fi, FI_APPEND_WRITE) &&
+ !exist_written_data(sbi, ino, APPEND_INO)) {
+ struct page *i = find_get_page(NODE_MAPPING(sbi), ino);
+
+ /* but we must not skip the sync if there are pending inode updates */
+ if ((i && PageDirty(i)) || need_inode_block_update(sbi, ino)) {
+ f2fs_put_page(i, 0);
+ goto go_write;
+ }
+ f2fs_put_page(i, 0);
+
+ if (is_inode_flag_set(fi, FI_UPDATE_WRITE) ||
+ exist_written_data(sbi, ino, UPDATE_INO))
+ goto flush_out;
+ goto out;
+ }
+go_write:
/* guarantee free sections for fsync */
f2fs_balance_fs(sbi);
- mutex_lock(&inode->i_mutex);
-
- if (datasync && !(inode->i_state & I_DIRTY_DATASYNC))
- goto out;
-
- if (!S_ISREG(inode->i_mode) || inode->i_nlink != 1)
- need_cp = true;
- else if (is_cp_file(inode))
- need_cp = true;
- else if (!space_for_roll_forward(sbi))
- need_cp = true;
- else if (!is_checkpointed_node(sbi, F2FS_I(inode)->i_pino))
- need_cp = true;
+ /*
+ * Both fdatasync() and fsync() should be recoverable after a
+ * sudden power-off.
+ */
+ down_read(&fi->i_sem);
+ need_cp = need_do_checkpoint(inode);
+ up_read(&fi->i_sem);
if (need_cp) {
+ nid_t pino;
+
/* all the dirty node pages should be flushed for POR */
ret = f2fs_sync_fs(inode->i_sb, 1);
- } else {
- /* if there is no written node page, write its inode page */
- while (!sync_node_pages(sbi, inode->i_ino, &wbc)) {
+
+ down_write(&fi->i_sem);
+ F2FS_I(inode)->xattr_ver = 0;
+ if (file_wrong_pino(inode) && inode->i_nlink == 1 &&
+ get_parent_ino(inode, &pino)) {
+ F2FS_I(inode)->i_pino = pino;
+ file_got_pino(inode);
+ up_write(&fi->i_sem);
+ mark_inode_dirty_sync(inode);
ret = f2fs_write_inode(inode, NULL);
if (ret)
goto out;
+ } else {
+ up_write(&fi->i_sem);
}
- filemap_fdatawait_range(sbi->node_inode->i_mapping,
- 0, LONG_MAX);
- ret = blkdev_issue_flush(inode->i_sb->s_bdev, GFP_KERNEL, NULL);
+ } else {
+sync_nodes:
+ sync_node_pages(sbi, ino, &wbc);
+
+ if (need_inode_block_update(sbi, ino)) {
+ mark_inode_dirty_sync(inode);
+ ret = f2fs_write_inode(inode, NULL);
+ if (ret)
+ goto out;
+ goto sync_nodes;
+ }
+
+ ret = wait_on_node_pages_writeback(sbi, ino);
+ if (ret)
+ goto out;
+
+ /* once recovery info is written, don't need to track this */
+ remove_dirty_inode(sbi, ino, APPEND_INO);
+ clear_inode_flag(fi, FI_APPEND_WRITE);
+flush_out:
+ remove_dirty_inode(sbi, ino, UPDATE_INO);
+ clear_inode_flag(fi, FI_UPDATE_WRITE);
+ ret = f2fs_issue_flush(F2FS_I_SB(inode));
}
out:
- mutex_unlock(&inode->i_mutex);
trace_f2fs_sync_file_exit(inode, need_cp, datasync, ret);
return ret;
}
+static pgoff_t __get_first_dirty_index(struct address_space *mapping,
+ pgoff_t pgofs, int whence)
+{
+ struct pagevec pvec;
+ int nr_pages;
+
+ if (whence != SEEK_DATA)
+ return 0;
+
+ /* find first dirty page index */
+ pagevec_init(&pvec, 0);
+ nr_pages = pagevec_lookup_tag(&pvec, mapping, &pgofs,
+ PAGECACHE_TAG_DIRTY, 1);
+ pgofs = nr_pages ? pvec.pages[0]->index : LONG_MAX;
+ pagevec_release(&pvec);
+ return pgofs;
+}
+
+static bool __found_offset(block_t blkaddr, pgoff_t dirty, pgoff_t pgofs,
+ int whence)
+{
+ switch (whence) {
+ case SEEK_DATA:
+ if ((blkaddr == NEW_ADDR && dirty == pgofs) ||
+ (blkaddr != NEW_ADDR && blkaddr != NULL_ADDR))
+ return true;
+ break;
+ case SEEK_HOLE:
+ if (blkaddr == NULL_ADDR)
+ return true;
+ break;
+ }
+ return false;
+}
+
+static inline int unsigned_offsets(struct file *file)
+{
+ return file->f_mode & FMODE_UNSIGNED_OFFSET;
+}
+
+static loff_t vfs_setpos(struct file *file, loff_t offset, loff_t maxsize)
+{
+ if (offset < 0 && !unsigned_offsets(file))
+ return -EINVAL;
+ if (offset > maxsize)
+ return -EINVAL;
+
+ if (offset != file->f_pos) {
+ file->f_pos = offset;
+ file->f_version = 0;
+ }
+ return offset;
+}
+
+static loff_t f2fs_seek_block(struct file *file, loff_t offset, int whence)
+{
+ struct inode *inode = file->f_mapping->host;
+ loff_t maxbytes = inode->i_sb->s_maxbytes;
+ struct dnode_of_data dn;
+ pgoff_t pgofs, end_offset, dirty;
+ loff_t data_ofs = offset;
+ loff_t isize;
+ int err = 0;
+
+ mutex_lock(&inode->i_mutex);
+
+ isize = i_size_read(inode);
+ if (offset >= isize)
+ goto fail;
+
+ /* handle inline data case */
+ if (f2fs_has_inline_data(inode)) {
+ if (whence == SEEK_HOLE)
+ data_ofs = isize;
+ goto found;
+ }
+
+ pgofs = (pgoff_t)(offset >> PAGE_CACHE_SHIFT);
+
+ dirty = __get_first_dirty_index(inode->i_mapping, pgofs, whence);
+
+ for (; data_ofs < isize; data_ofs = pgofs << PAGE_CACHE_SHIFT) {
+ set_new_dnode(&dn, inode, NULL, NULL, 0);
+ err = get_dnode_of_data(&dn, pgofs, LOOKUP_NODE_RA);
+ if (err && err != -ENOENT) {
+ goto fail;
+ } else if (err == -ENOENT) {
+ /* direct node does not exist */
+ if (whence == SEEK_DATA) {
+ pgofs = PGOFS_OF_NEXT_DNODE(pgofs,
+ F2FS_I(inode));
+ continue;
+ } else {
+ goto found;
+ }
+ }
+
+ end_offset = ADDRS_PER_PAGE(dn.node_page, F2FS_I(inode));
+
+ /* find data/hole in dnode block */
+ for (; dn.ofs_in_node < end_offset;
+ dn.ofs_in_node++, pgofs++,
+ data_ofs = pgofs << PAGE_CACHE_SHIFT) {
+ block_t blkaddr;
+ blkaddr = datablock_addr(dn.node_page, dn.ofs_in_node);
+
+ if (__found_offset(blkaddr, dirty, pgofs, whence)) {
+ f2fs_put_dnode(&dn);
+ goto found;
+ }
+ }
+ f2fs_put_dnode(&dn);
+ }
+
+ if (whence == SEEK_DATA)
+ goto fail;
+found:
+ if (whence == SEEK_HOLE && data_ofs > isize)
+ data_ofs = isize;
+ mutex_unlock(&inode->i_mutex);
+ return vfs_setpos(file, data_ofs, maxbytes);
+fail:
+ mutex_unlock(&inode->i_mutex);
+ return -ENXIO;
+}
+
+static loff_t f2fs_llseek(struct file *file, loff_t offset, int whence)
+{
+ struct inode *inode = file->f_mapping->host;
+ loff_t maxbytes = inode->i_sb->s_maxbytes;
+
+ switch (whence) {
+ case SEEK_SET:
+ case SEEK_CUR:
+ case SEEK_END:
+ return generic_file_llseek_size(file, offset, whence,
+ maxbytes, i_size_read(inode));
+ case SEEK_DATA:
+ case SEEK_HOLE:
+ if (offset < 0)
+ return -ENXIO;
+ return f2fs_seek_block(file, offset, whence);
+ }
+
+ return -EINVAL;
+}
+
static int f2fs_file_mmap(struct file *file, struct vm_area_struct *vma)
{
file_accessed(file);
@@ -168,27 +396,27 @@
return 0;
}
-static int truncate_data_blocks_range(struct dnode_of_data *dn, int count)
+int truncate_data_blocks_range(struct dnode_of_data *dn, int count)
{
int nr_free = 0, ofs = dn->ofs_in_node;
- struct f2fs_sb_info *sbi = F2FS_SB(dn->inode->i_sb);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
struct f2fs_node *raw_node;
__le32 *addr;
- raw_node = page_address(dn->node_page);
+ raw_node = F2FS_NODE(dn->node_page);
addr = blkaddr_in_node(raw_node) + ofs;
- for ( ; count > 0; count--, addr++, dn->ofs_in_node++) {
+ for (; count > 0; count--, addr++, dn->ofs_in_node++) {
block_t blkaddr = le32_to_cpu(*addr);
if (blkaddr == NULL_ADDR)
continue;
update_extent_cache(NULL_ADDR, dn);
invalidate_blocks(sbi, blkaddr);
- dec_valid_block_count(sbi, dn->inode, 1);
nr_free++;
}
if (nr_free) {
+ dec_valid_block_count(sbi, dn->inode, nr_free);
set_page_dirty(dn->node_page);
sync_inode_page(dn);
}
@@ -209,6 +437,9 @@
unsigned offset = from & (PAGE_CACHE_SIZE - 1);
struct page *page;
+ if (f2fs_has_inline_data(inode))
+ return truncate_inline_data(inode, from);
+
if (!offset)
return;
@@ -217,48 +448,52 @@
return;
lock_page(page);
- if (page->mapping != inode->i_mapping) {
- f2fs_put_page(page, 1);
- return;
- }
- wait_on_page_writeback(page);
+ if (unlikely(!PageUptodate(page) ||
+ page->mapping != inode->i_mapping))
+ goto out;
+
+ f2fs_wait_on_page_writeback(page, DATA);
zero_user(page, offset, PAGE_CACHE_SIZE - offset);
set_page_dirty(page);
+
+out:
f2fs_put_page(page, 1);
}
-static int truncate_blocks(struct inode *inode, u64 from)
+int truncate_blocks(struct inode *inode, u64 from, bool lock)
{
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
unsigned int blocksize = inode->i_sb->s_blocksize;
struct dnode_of_data dn;
pgoff_t free_from;
- int count = 0, ilock = -1;
- int err;
+ int count = 0, err = 0;
trace_f2fs_truncate_blocks_enter(inode, from);
+ if (f2fs_has_inline_data(inode))
+ goto done;
+
free_from = (pgoff_t)
((from + blocksize - 1) >> (sbi->log_blocksize));
- ilock = mutex_lock_op(sbi);
+ if (lock)
+ f2fs_lock_op(sbi);
+
set_new_dnode(&dn, inode, NULL, NULL, 0);
err = get_dnode_of_data(&dn, free_from, LOOKUP_NODE);
if (err) {
if (err == -ENOENT)
goto free_next;
- mutex_unlock_op(sbi, ilock);
+ if (lock)
+ f2fs_unlock_op(sbi);
trace_f2fs_truncate_blocks_exit(inode, err);
return err;
}
- if (IS_INODE(dn.node_page))
- count = ADDRS_PER_INODE;
- else
- count = ADDRS_PER_BLOCK;
+ count = ADDRS_PER_PAGE(dn.node_page, F2FS_I(inode));
count -= dn.ofs_in_node;
- BUG_ON(count < 0);
+ f2fs_bug_on(sbi, count < 0);
if (dn.ofs_in_node || IS_INODE(dn.node_page)) {
truncate_data_blocks_range(&dn, count);
@@ -268,8 +503,9 @@
f2fs_put_dnode(&dn);
free_next:
err = truncate_inode_blocks(inode, free_from);
- mutex_unlock_op(sbi, ilock);
-
+ if (lock)
+ f2fs_unlock_op(sbi);
+done:
/* lastly zero out the first data page */
truncate_partial_data_page(inode, from);
@@ -279,19 +515,26 @@
void f2fs_truncate(struct inode *inode)
{
+ int err;
+
if (!(S_ISREG(inode->i_mode) || S_ISDIR(inode->i_mode) ||
S_ISLNK(inode->i_mode)))
return;
trace_f2fs_truncate(inode);
- if (!truncate_blocks(inode, i_size_read(inode))) {
+ err = truncate_blocks(inode, i_size_read(inode), true);
+ if (err) {
+ f2fs_msg(inode->i_sb, KERN_ERR, "truncate failed with %d",
+ err);
+ f2fs_handle_error(F2FS_SB(inode->i_sb));
+ } else {
inode->i_mtime = inode->i_ctime = CURRENT_TIME;
mark_inode_dirty(inode);
}
}
-static int f2fs_getattr(struct vfsmount *mnt,
+int f2fs_getattr(struct vfsmount *mnt,
struct dentry *dentry, struct kstat *stat)
{
struct inode *inode = dentry->d_inode;
@@ -335,17 +578,34 @@
{
struct inode *inode = dentry->d_inode;
struct f2fs_inode_info *fi = F2FS_I(inode);
+ struct f2fs_inode_info *pfi = F2FS_I(dentry->d_parent->d_inode);
+ struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
int err;
err = inode_change_ok(inode, attr);
if (err)
return err;
- if ((attr->ia_valid & ATTR_SIZE) &&
- attr->ia_size != i_size_read(inode)) {
- truncate_setsize(inode, attr->ia_size);
- f2fs_truncate(inode);
- f2fs_balance_fs(F2FS_SB(inode->i_sb));
+ if (IS_ANDROID_EMU(sbi, fi, pfi))
+ f2fs_android_emu(sbi, inode, &attr->ia_uid, &attr->ia_gid,
+ &attr->ia_mode);
+
+ if (attr->ia_valid & ATTR_SIZE) {
+ err = f2fs_convert_inline_data(inode, attr->ia_size, NULL);
+ if (err)
+ return err;
+
+ if (attr->ia_size != i_size_read(inode)) {
+ truncate_setsize(inode, attr->ia_size);
+ f2fs_truncate(inode);
+ f2fs_balance_fs(F2FS_I_SB(inode));
+ } else {
+			/*
+			 * give a chance to truncate blocks past EOF which
+			 * were fallocated with FALLOC_FL_KEEP_SIZE.
+			 */
+ f2fs_truncate(inode);
+ }
}
__setattr_copy(inode, attr);
@@ -372,26 +632,26 @@
.listxattr = f2fs_listxattr,
.removexattr = generic_removexattr,
#endif
+ .fiemap = f2fs_fiemap,
};
static void fill_zero(struct inode *inode, pgoff_t index,
loff_t start, loff_t len)
{
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
struct page *page;
- int ilock;
if (!len)
return;
f2fs_balance_fs(sbi);
- ilock = mutex_lock_op(sbi);
- page = get_new_data_page(inode, index, false);
- mutex_unlock_op(sbi, ilock);
+ f2fs_lock_op(sbi);
+ page = get_new_data_page(inode, NULL, index, false);
+ f2fs_unlock_op(sbi);
if (!IS_ERR(page)) {
- wait_on_page_writeback(page);
+ f2fs_wait_on_page_writeback(page, DATA);
zero_user(page, start, len);
set_page_dirty(page);
f2fs_put_page(page, 1);
@@ -421,12 +681,23 @@
return 0;
}
-static int punch_hole(struct inode *inode, loff_t offset, loff_t len, int mode)
+static int punch_hole(struct inode *inode, loff_t offset, loff_t len)
{
pgoff_t pg_start, pg_end;
loff_t off_start, off_end;
int ret = 0;
+ if (!S_ISREG(inode->i_mode))
+ return -EOPNOTSUPP;
+
+ /* skip punching hole beyond i_size */
+ if (offset >= inode->i_size)
+ return ret;
+
+ ret = f2fs_convert_inline_data(inode, MAX_INLINE_DATA + 1, NULL);
+ if (ret)
+ return ret;
+
pg_start = ((unsigned long long) offset) >> PAGE_CACHE_SHIFT;
pg_end = ((unsigned long long) offset + len) >> PAGE_CACHE_SHIFT;
@@ -446,8 +717,7 @@
if (pg_start < pg_end) {
struct address_space *mapping = inode->i_mapping;
loff_t blk_start, blk_end;
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
- int ilock;
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
f2fs_balance_fs(sbi);
@@ -456,63 +726,53 @@
truncate_inode_pages_range(mapping, blk_start,
blk_end - 1);
- ilock = mutex_lock_op(sbi);
+ f2fs_lock_op(sbi);
ret = truncate_hole(inode, pg_start, pg_end);
- mutex_unlock_op(sbi, ilock);
+ f2fs_unlock_op(sbi);
}
}
- if (!(mode & FALLOC_FL_KEEP_SIZE) &&
- i_size_read(inode) <= (offset + len)) {
- i_size_write(inode, offset);
- mark_inode_dirty(inode);
- }
-
return ret;
}
static int expand_inode_data(struct inode *inode, loff_t offset,
loff_t len, int mode)
{
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
pgoff_t index, pg_start, pg_end;
loff_t new_size = i_size_read(inode);
loff_t off_start, off_end;
int ret = 0;
+ f2fs_balance_fs(sbi);
+
ret = inode_newsize_ok(inode, (len + offset));
if (ret)
return ret;
+ ret = f2fs_convert_inline_data(inode, offset + len, NULL);
+ if (ret)
+ return ret;
+
pg_start = ((unsigned long long) offset) >> PAGE_CACHE_SHIFT;
pg_end = ((unsigned long long) offset + len) >> PAGE_CACHE_SHIFT;
off_start = offset & (PAGE_CACHE_SIZE - 1);
off_end = (offset + len) & (PAGE_CACHE_SIZE - 1);
+ f2fs_lock_op(sbi);
+
for (index = pg_start; index <= pg_end; index++) {
struct dnode_of_data dn;
- int ilock;
- ilock = mutex_lock_op(sbi);
+ if (index == pg_end && !off_end)
+ goto noalloc;
+
set_new_dnode(&dn, inode, NULL, NULL, 0);
- ret = get_dnode_of_data(&dn, index, ALLOC_NODE);
- if (ret) {
- mutex_unlock_op(sbi, ilock);
+ ret = f2fs_reserve_block(&dn, index);
+ if (ret)
break;
- }
-
- if (dn.data_blkaddr == NULL_ADDR) {
- ret = reserve_new_block(&dn);
- if (ret) {
- f2fs_put_dnode(&dn);
- mutex_unlock_op(sbi, ilock);
- break;
- }
- }
- f2fs_put_dnode(&dn);
- mutex_unlock_op(sbi, ilock);
-
+noalloc:
if (pg_start == pg_end)
new_size = offset + len;
else if (index == pg_start && off_start)
@@ -527,7 +787,9 @@
i_size_read(inode) < new_size) {
i_size_write(inode, new_size);
mark_inode_dirty(inode);
+ update_inode_page(inode);
}
+ f2fs_unlock_op(sbi);
return ret;
}
@@ -541,8 +803,10 @@
if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE))
return -EOPNOTSUPP;
+ mutex_lock(&inode->i_mutex);
+
if (mode & FALLOC_FL_PUNCH_HOLE)
- ret = punch_hole(inode, offset, len, mode);
+ ret = punch_hole(inode, offset, len);
else
ret = expand_inode_data(inode, offset, len, mode);
@@ -550,6 +814,9 @@
inode->i_mtime = inode->i_ctime = CURRENT_TIME;
mark_inode_dirty(inode);
}
+
+ mutex_unlock(&inode->i_mutex);
+
trace_f2fs_fallocate(inode, mode, offset, len, ret);
return ret;
}
@@ -567,61 +834,157 @@
return flags & F2FS_OTHER_FLMASK;
}
-long f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+static int f2fs_ioc_getflags(struct file *filp, unsigned long arg)
{
struct inode *inode = file_inode(filp);
struct f2fs_inode_info *fi = F2FS_I(inode);
- unsigned int flags;
+ unsigned int flags = fi->i_flags & FS_FL_USER_VISIBLE;
+ return put_user(flags, (int __user *)arg);
+}
+
+static int f2fs_ioc_setflags(struct file *filp, unsigned long arg)
+{
+ struct inode *inode = file_inode(filp);
+ struct f2fs_inode_info *fi = F2FS_I(inode);
+ unsigned int flags = fi->i_flags & FS_FL_USER_VISIBLE;
+ unsigned int oldflags;
int ret;
- switch (cmd) {
- case FS_IOC_GETFLAGS:
- flags = fi->i_flags & FS_FL_USER_VISIBLE;
- return put_user(flags, (int __user *) arg);
- case FS_IOC_SETFLAGS:
- {
- unsigned int oldflags;
-
- ret = mnt_want_write_file(filp);
- if (ret)
- return ret;
-
- if (!inode_owner_or_capable(inode)) {
- ret = -EACCES;
- goto out;
- }
-
- if (get_user(flags, (int __user *) arg)) {
- ret = -EFAULT;
- goto out;
- }
-
- flags = f2fs_mask_flags(inode->i_mode, flags);
-
- mutex_lock(&inode->i_mutex);
-
- oldflags = fi->i_flags;
-
- if ((flags ^ oldflags) & (FS_APPEND_FL | FS_IMMUTABLE_FL)) {
- if (!capable(CAP_LINUX_IMMUTABLE)) {
- mutex_unlock(&inode->i_mutex);
- ret = -EPERM;
- goto out;
- }
- }
-
- flags = flags & FS_FL_USER_MODIFIABLE;
- flags |= oldflags & ~FS_FL_USER_MODIFIABLE;
- fi->i_flags = flags;
- mutex_unlock(&inode->i_mutex);
-
- f2fs_set_inode_flags(inode);
- inode->i_ctime = CURRENT_TIME;
- mark_inode_dirty(inode);
-out:
- mnt_drop_write_file(filp);
+ ret = mnt_want_write_file(filp);
+ if (ret)
return ret;
+
+ if (!inode_owner_or_capable(inode)) {
+ ret = -EACCES;
+ goto out;
}
+
+ if (get_user(flags, (int __user *)arg)) {
+ ret = -EFAULT;
+ goto out;
+ }
+
+ flags = f2fs_mask_flags(inode->i_mode, flags);
+
+ mutex_lock(&inode->i_mutex);
+
+ oldflags = fi->i_flags;
+
+ if ((flags ^ oldflags) & (FS_APPEND_FL | FS_IMMUTABLE_FL)) {
+ if (!capable(CAP_LINUX_IMMUTABLE)) {
+ mutex_unlock(&inode->i_mutex);
+ ret = -EPERM;
+ goto out;
+ }
+ }
+
+ flags = flags & FS_FL_USER_MODIFIABLE;
+ flags |= oldflags & ~FS_FL_USER_MODIFIABLE;
+ fi->i_flags = flags;
+ mutex_unlock(&inode->i_mutex);
+
+ f2fs_set_inode_flags(inode);
+ inode->i_ctime = CURRENT_TIME;
+ mark_inode_dirty(inode);
+out:
+ mnt_drop_write_file(filp);
+ return ret;
+}
+
+static int f2fs_ioc_fitrim(struct file *filp, unsigned long arg)
+{
+ struct inode *inode = file_inode(filp);
+ struct super_block *sb = inode->i_sb;
+ struct request_queue *q = bdev_get_queue(sb->s_bdev);
+ struct fstrim_range range;
+ int ret;
+
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+
+ if (!blk_queue_discard(q))
+ return -EOPNOTSUPP;
+
+ if (copy_from_user(&range, (struct fstrim_range __user *)arg,
+ sizeof(range)))
+ return -EFAULT;
+
+ range.minlen = max((unsigned int)range.minlen,
+ q->limits.discard_granularity);
+ ret = f2fs_trim_fs(F2FS_SB(sb), &range);
+ if (ret < 0)
+ return ret;
+
+ if (copy_to_user((struct fstrim_range __user *)arg, &range,
+ sizeof(range)))
+ return -EFAULT;
+ return 0;
+}
+
+static int f2fs_ioc_atomic_write(struct file *filp, unsigned long arg)
+{
+ struct inode *inode = file_inode(filp);
+ struct atomic_w aw;
+ loff_t pos;
+ int ret;
+
+ if (!inode_owner_or_capable(inode))
+ return -EACCES;
+
+ if (copy_from_user(&aw, (struct atomic_w __user *)arg, sizeof(aw)))
+ return -EFAULT;
+
+ ret = mnt_want_write_file(filp);
+ if (ret)
+ return ret;
+
+ pos = aw.pos;
+ set_inode_flag(F2FS_I(inode), FI_ATOMIC_FILE);
+ ret = vfs_write(filp, aw.buf, aw.count, &pos);
+ if (ret >= 0)
+ prepare_atomic_pages(inode, &aw);
+ else
+ clear_inode_flag(F2FS_I(inode), FI_ATOMIC_FILE);
+
+ mnt_drop_write_file(filp);
+ return ret;
+}
+
+static int f2fs_ioc_atomic_commit(struct file *filp, unsigned long arg)
+{
+ struct inode *inode = file_inode(filp);
+ int ret;
+ u64 aid;
+
+ if (!inode_owner_or_capable(inode))
+ return -EACCES;
+
+ if (copy_from_user(&aid, (u64 __user *)arg, sizeof(u64)))
+ return -EFAULT;
+
+ ret = mnt_want_write_file(filp);
+ if (ret)
+ return ret;
+
+ commit_atomic_pages(inode, aid, false);
+ ret = f2fs_sync_file(filp, 0, LONG_MAX, 0);
+ mnt_drop_write_file(filp);
+ return ret;
+}
+
+long f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+{
+ switch (cmd) {
+ case F2FS_IOC_GETFLAGS:
+ return f2fs_ioc_getflags(filp, arg);
+ case F2FS_IOC_SETFLAGS:
+ return f2fs_ioc_setflags(filp, arg);
+ case F2FS_IOC_ATOMIC_WRITE:
+ return f2fs_ioc_atomic_write(filp, arg);
+ case F2FS_IOC_ATOMIC_COMMIT:
+ return f2fs_ioc_atomic_commit(filp, arg);
+ case FITRIM:
+ return f2fs_ioc_fitrim(filp, arg);
default:
return -ENOTTY;
}
@@ -645,7 +1008,7 @@
#endif
const struct file_operations f2fs_file_operations = {
- .llseek = generic_file_llseek,
+ .llseek = f2fs_llseek,
.read = do_sync_read,
.write = do_sync_write,
.aio_read = generic_file_aio_read,
diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
index 1496159..401364a 100644
--- a/fs/f2fs/gc.c
+++ b/fs/f2fs/gc.c
@@ -29,10 +29,11 @@
static int gc_thread_func(void *data)
{
struct f2fs_sb_info *sbi = data;
+ struct f2fs_gc_kthread *gc_th = sbi->gc_thread;
wait_queue_head_t *wq = &sbi->gc_thread->gc_wait_queue_head;
long wait_ms;
- wait_ms = GC_THREAD_MIN_SLEEP_TIME;
+ wait_ms = gc_th->min_sleep_time;
do {
if (try_to_freeze())
@@ -45,7 +46,7 @@
break;
if (sbi->sb->s_writers.frozen >= SB_FREEZE_WRITE) {
- wait_ms = GC_THREAD_MAX_SLEEP_TIME;
+ wait_ms = gc_th->max_sleep_time;
continue;
}
@@ -66,21 +67,25 @@
continue;
if (!is_idle(sbi)) {
- wait_ms = increase_sleep_time(wait_ms);
+ wait_ms = increase_sleep_time(gc_th, wait_ms);
mutex_unlock(&sbi->gc_mutex);
continue;
}
if (has_enough_invalid_blocks(sbi))
- wait_ms = decrease_sleep_time(wait_ms);
+ wait_ms = decrease_sleep_time(gc_th, wait_ms);
else
- wait_ms = increase_sleep_time(wait_ms);
+ wait_ms = increase_sleep_time(gc_th, wait_ms);
- sbi->bg_gc++;
+ stat_inc_bggc_count(sbi);
/* if return value is not zero, no victim was selected */
if (f2fs_gc(sbi))
- wait_ms = GC_THREAD_NOGC_SLEEP_TIME;
+ wait_ms = gc_th->no_gc_sleep_time;
+
+ /* balancing f2fs's metadata periodically */
+ f2fs_balance_fs_bg(sbi);
+
} while (!kthread_should_stop());
return 0;
}
@@ -89,23 +94,33 @@
{
struct f2fs_gc_kthread *gc_th;
dev_t dev = sbi->sb->s_bdev->bd_dev;
+ int err = 0;
if (!test_opt(sbi, BG_GC))
- return 0;
+ goto out;
gc_th = kmalloc(sizeof(struct f2fs_gc_kthread), GFP_KERNEL);
- if (!gc_th)
- return -ENOMEM;
+ if (!gc_th) {
+ err = -ENOMEM;
+ goto out;
+ }
+
+ gc_th->min_sleep_time = DEF_GC_THREAD_MIN_SLEEP_TIME;
+ gc_th->max_sleep_time = DEF_GC_THREAD_MAX_SLEEP_TIME;
+ gc_th->no_gc_sleep_time = DEF_GC_THREAD_NOGC_SLEEP_TIME;
+
+ gc_th->gc_idle = 0;
sbi->gc_thread = gc_th;
init_waitqueue_head(&sbi->gc_thread->gc_wait_queue_head);
sbi->gc_thread->f2fs_gc_task = kthread_run(gc_thread_func, sbi,
"f2fs_gc-%u:%u", MAJOR(dev), MINOR(dev));
if (IS_ERR(gc_th->f2fs_gc_task)) {
+ err = PTR_ERR(gc_th->f2fs_gc_task);
kfree(gc_th);
sbi->gc_thread = NULL;
- return -ENOMEM;
}
- return 0;
+out:
+ return err;
}
void stop_gc_thread(struct f2fs_sb_info *sbi)
@@ -118,9 +133,17 @@
sbi->gc_thread = NULL;
}
-static int select_gc_type(int gc_type)
+static int select_gc_type(struct f2fs_gc_kthread *gc_th, int gc_type)
{
- return (gc_type == BG_GC) ? GC_CB : GC_GREEDY;
+ int gc_mode = (gc_type == BG_GC) ? GC_CB : GC_GREEDY;
+
+ if (gc_th && gc_th->gc_idle) {
+ if (gc_th->gc_idle == 1)
+ gc_mode = GC_CB;
+ else if (gc_th->gc_idle == 2)
+ gc_mode = GC_GREEDY;
+ }
+ return gc_mode;
}
static void select_policy(struct f2fs_sb_info *sbi, int gc_type,
@@ -131,12 +154,18 @@
if (p->alloc_mode == SSR) {
p->gc_mode = GC_GREEDY;
p->dirty_segmap = dirty_i->dirty_segmap[type];
+ p->max_search = dirty_i->nr_dirty[type];
p->ofs_unit = 1;
} else {
- p->gc_mode = select_gc_type(gc_type);
+ p->gc_mode = select_gc_type(sbi->gc_thread, gc_type);
p->dirty_segmap = dirty_i->dirty_segmap[DIRTY];
+ p->max_search = dirty_i->nr_dirty[DIRTY];
p->ofs_unit = sbi->segs_per_sec;
}
+
+ if (p->max_search > sbi->max_victim_search)
+ p->max_search = sbi->max_victim_search;
+
p->offset = sbi->last_victim[p->gc_mode];
}
@@ -157,7 +186,6 @@
static unsigned int check_bg_victims(struct f2fs_sb_info *sbi)
{
struct dirty_seglist_info *dirty_i = DIRTY_I(sbi);
- unsigned int hint = 0;
unsigned int secno;
/*
@@ -165,11 +193,9 @@
* selected by background GC before.
	 * Those segments are guaranteed to have few valid blocks.
*/
-next:
- secno = find_next_bit(dirty_i->victim_secmap, TOTAL_SECS(sbi), hint++);
- if (secno < TOTAL_SECS(sbi)) {
+ for_each_set_bit(secno, dirty_i->victim_secmap, MAIN_SECS(sbi)) {
if (sec_usage_check(sbi, secno))
- goto next;
+ continue;
clear_bit(secno, dirty_i->victim_secmap);
return secno * sbi->segs_per_sec;
}
@@ -208,8 +234,8 @@
return UINT_MAX - ((100 * (100 - u) * age) / (100 + u));
}
-static unsigned int get_gc_cost(struct f2fs_sb_info *sbi, unsigned int segno,
- struct victim_sel_policy *p)
+static inline unsigned int get_gc_cost(struct f2fs_sb_info *sbi,
+ unsigned int segno, struct victim_sel_policy *p)
{
if (p->alloc_mode == SSR)
return get_seg_entry(sbi, segno)->ckpt_valid_blocks;
@@ -234,16 +260,16 @@
{
struct dirty_seglist_info *dirty_i = DIRTY_I(sbi);
struct victim_sel_policy p;
- unsigned int secno;
+ unsigned int secno, max_cost;
int nsearched = 0;
+ mutex_lock(&dirty_i->seglist_lock);
+
p.alloc_mode = alloc_mode;
select_policy(sbi, gc_type, type, &p);
p.min_segno = NULL_SEGNO;
- p.min_cost = get_max_cost(sbi, &p);
-
- mutex_lock(&dirty_i->seglist_lock);
+ p.min_cost = max_cost = get_max_cost(sbi, &p);
if (p.alloc_mode == LFS && gc_type == FG_GC) {
p.min_segno = check_bg_victims(sbi);
@@ -255,9 +281,8 @@
unsigned long cost;
unsigned int segno;
- segno = find_next_bit(p.dirty_segmap,
- TOTAL_SEGS(sbi), p.offset);
- if (segno >= TOTAL_SEGS(sbi)) {
+ segno = find_next_bit(p.dirty_segmap, MAIN_SEGS(sbi), p.offset);
+ if (segno >= MAIN_SEGS(sbi)) {
if (sbi->last_victim[p.gc_mode]) {
sbi->last_victim[p.gc_mode] = 0;
p.offset = 0;
@@ -265,7 +290,11 @@
}
break;
}
- p.offset = ((segno / p.ofs_unit) * p.ofs_unit) + p.ofs_unit;
+
+ p.offset = segno + p.ofs_unit;
+ if (p.ofs_unit > 1)
+ p.offset -= segno % p.ofs_unit;
+
secno = GET_SECNO(sbi, segno);
if (sec_usage_check(sbi, secno))
@@ -278,18 +307,17 @@
if (p.min_cost > cost) {
p.min_segno = segno;
p.min_cost = cost;
+ } else if (unlikely(cost == max_cost)) {
+ continue;
}
- if (cost == get_max_cost(sbi, &p))
- continue;
-
- if (nsearched++ >= MAX_VICTIM_SEARCH) {
+ if (nsearched++ >= p.max_search) {
sbi->last_victim[p.gc_mode] = segno;
break;
}
}
-got_it:
if (p.min_segno != NULL_SEGNO) {
+got_it:
if (p.alloc_mode == LFS) {
secno = GET_SECNO(sbi, p.min_segno);
if (gc_type == FG_GC)
@@ -314,35 +342,24 @@
static struct inode *find_gc_inode(nid_t ino, struct list_head *ilist)
{
- struct list_head *this;
struct inode_entry *ie;
- list_for_each(this, ilist) {
- ie = list_entry(this, struct inode_entry, list);
+ list_for_each_entry(ie, ilist, list)
if (ie->inode->i_ino == ino)
return ie->inode;
- }
return NULL;
}
static void add_gc_inode(struct inode *inode, struct list_head *ilist)
{
- struct list_head *this;
- struct inode_entry *new_ie, *ie;
+ struct inode_entry *new_ie;
- list_for_each(this, ilist) {
- ie = list_entry(this, struct inode_entry, list);
- if (ie->inode == inode) {
- iput(inode);
- return;
- }
+ if (inode == find_gc_inode(inode->i_ino, ilist)) {
+ iput(inode);
+ return;
}
-repeat:
- new_ie = kmem_cache_alloc(winode_slab, GFP_NOFS);
- if (!new_ie) {
- cond_resched();
- goto repeat;
- }
+
+ new_ie = f2fs_kmem_cache_alloc(winode_slab, GFP_NOFS);
new_ie->inode = inode;
list_add_tail(&new_ie->list, ilist);
}
@@ -405,10 +422,15 @@
if (IS_ERR(node_page))
continue;
+ /* block may become invalid during get_node_page */
+ if (check_valid_map(sbi, segno, off) == 0) {
+ f2fs_put_page(node_page, 1);
+ continue;
+ }
+
/* set page dirty and write it */
if (gc_type == FG_GC) {
- f2fs_submit_bio(sbi, NODE, true);
- wait_on_page_writeback(node_page);
+ f2fs_wait_on_page_writeback(node_page, NODE);
set_page_dirty(node_page);
} else {
if (!PageWriteback(node_page))
@@ -447,7 +469,7 @@
* as indirect or double indirect node blocks, are given, it must be a caller's
* bug.
*/
-block_t start_bidx_of_node(unsigned int node_ofs)
+block_t start_bidx_of_node(unsigned int node_ofs, struct f2fs_inode_info *fi)
{
unsigned int indirect_blks = 2 * NIDS_PER_BLOCK + 4;
unsigned int bidx;
@@ -464,7 +486,7 @@
int dec = (node_ofs - indirect_blks - 3) / (NIDS_PER_BLOCK + 1);
bidx = node_ofs - 5 - dec;
}
- return bidx * ADDRS_PER_BLOCK + ADDRS_PER_INODE;
+ return bidx * ADDRS_PER_BLOCK + ADDRS_PER_INODE(fi);
}
static int check_dnode(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
@@ -498,28 +520,25 @@
return 1;
}
-static void move_data_page(struct inode *inode, struct page *page, int gc_type)
+void move_data_page(struct inode *inode, struct page *page, int gc_type)
{
+ struct f2fs_io_info fio = {
+ .type = DATA,
+ .rw = WRITE_SYNC,
+ };
+
if (gc_type == BG_GC) {
if (PageWriteback(page))
goto out;
set_page_dirty(page);
set_cold_data(page);
} else {
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
+ f2fs_wait_on_page_writeback(page, DATA);
- if (PageWriteback(page)) {
- f2fs_submit_bio(sbi, DATA, true);
- wait_on_page_writeback(page);
- }
-
- if (clear_page_dirty_for_io(page) &&
- S_ISDIR(inode->i_mode)) {
- dec_page_count(sbi, F2FS_DIRTY_DENTS);
- inode_dec_dirty_dents(inode);
- }
+ if (clear_page_dirty_for_io(page))
+ inode_dec_dirty_pages(inode);
set_cold_data(page);
- do_write_data_page(page);
+ do_write_data_page(page, &fio);
clear_cold_data(page);
}
out:
@@ -575,14 +594,15 @@
continue;
}
- start_bidx = start_bidx_of_node(nofs);
ofs_in_node = le16_to_cpu(entry->ofs_in_node);
if (phase == 2) {
inode = f2fs_iget(sb, dni.ino);
- if (IS_ERR(inode))
+ if (IS_ERR(inode) || is_bad_inode(inode))
continue;
+ start_bidx = start_bidx_of_node(nofs, F2FS_I(inode));
+
data_page = find_data_page(inode,
start_bidx + ofs_in_node, false);
if (IS_ERR(data_page))
@@ -593,6 +613,8 @@
} else {
inode = find_gc_inode(dni.ino, ilist);
if (inode) {
+ start_bidx = start_bidx_of_node(nofs,
+ F2FS_I(inode));
data_page = get_lock_data_page(inode,
start_bidx + ofs_in_node);
if (IS_ERR(data_page))
@@ -610,7 +632,7 @@
goto next_step;
if (gc_type == FG_GC) {
- f2fs_submit_bio(sbi, DATA, true);
+ f2fs_submit_merged_bio(sbi, DATA, WRITE);
/*
* In the case of FG_GC, it'd be better to reclaim this victim
@@ -643,8 +665,6 @@
/* read segment summary of victim */
sum_page = get_sum_page(sbi, segno);
- if (IS_ERR(sum_page))
- return;
blk_start_plug(&plug);
@@ -673,21 +693,31 @@
int gc_type = BG_GC;
int nfree = 0;
int ret = -1;
+ struct cp_control cpc = {
+ .reason = CP_SYNC,
+ };
INIT_LIST_HEAD(&ilist);
gc_more:
- if (!(sbi->sb->s_flags & MS_ACTIVE))
+ if (unlikely(!(sbi->sb->s_flags & MS_ACTIVE)))
+ goto stop;
+ if (unlikely(f2fs_cp_error(sbi)))
goto stop;
if (gc_type == BG_GC && has_not_enough_free_secs(sbi, nfree)) {
gc_type = FG_GC;
- write_checkpoint(sbi, false);
+ write_checkpoint(sbi, &cpc);
}
if (!__get_victim(sbi, &segno, gc_type, NO_CHECK_TYPE))
goto stop;
ret = 0;
+	/* readahead multiple SSA blocks with contiguous addresses */
+ if (sbi->segs_per_sec > 1)
+ ra_meta_pages(sbi, GET_SUM_BLOCK(sbi, segno), sbi->segs_per_sec,
+ META_SSA);
+
for (i = 0; i < sbi->segs_per_sec; i++)
do_garbage_collect(sbi, segno + i, &ilist, gc_type);
@@ -701,7 +731,7 @@
goto gc_more;
if (gc_type == FG_GC)
- write_checkpoint(sbi, false);
+ write_checkpoint(sbi, &cpc);
stop:
mutex_unlock(&sbi->gc_mutex);
@@ -717,7 +747,7 @@
int __init create_gc_caches(void)
{
winode_slab = f2fs_kmem_cache_create("f2fs_gc_inodes",
- sizeof(struct inode_entry), NULL);
+ sizeof(struct inode_entry));
if (!winode_slab)
return -ENOMEM;
return 0;
diff --git a/fs/f2fs/gc.h b/fs/f2fs/gc.h
index 2c6a6bd..5d5eb60 100644
--- a/fs/f2fs/gc.h
+++ b/fs/f2fs/gc.h
@@ -13,18 +13,26 @@
* whether IO subsystem is idle
* or not
*/
-#define GC_THREAD_MIN_SLEEP_TIME 30000 /* milliseconds */
-#define GC_THREAD_MAX_SLEEP_TIME 60000
-#define GC_THREAD_NOGC_SLEEP_TIME 300000 /* wait 5 min */
+#define DEF_GC_THREAD_MIN_SLEEP_TIME 30000 /* milliseconds */
+#define DEF_GC_THREAD_MAX_SLEEP_TIME 60000
+#define DEF_GC_THREAD_NOGC_SLEEP_TIME 300000 /* wait 5 min */
#define LIMIT_INVALID_BLOCK 40 /* percentage over total user space */
#define LIMIT_FREE_BLOCK 40 /* percentage over invalid + free space */
/* Search max. number of dirty segments to select a victim segment */
-#define MAX_VICTIM_SEARCH 20
+#define DEF_MAX_VICTIM_SEARCH 4096 /* covers 8GB */
struct f2fs_gc_kthread {
struct task_struct *f2fs_gc_task;
wait_queue_head_t gc_wait_queue_head;
+
+ /* for gc sleep time */
+ unsigned int min_sleep_time;
+ unsigned int max_sleep_time;
+ unsigned int no_gc_sleep_time;
+
+ /* for changing gc mode */
+ unsigned int gc_idle;
};
struct inode_entry {
@@ -56,25 +64,25 @@
return (long)(reclaimable_user_blocks * LIMIT_FREE_BLOCK) / 100;
}
-static inline long increase_sleep_time(long wait)
+static inline long increase_sleep_time(struct f2fs_gc_kthread *gc_th, long wait)
{
- if (wait == GC_THREAD_NOGC_SLEEP_TIME)
+ if (wait == gc_th->no_gc_sleep_time)
return wait;
- wait += GC_THREAD_MIN_SLEEP_TIME;
- if (wait > GC_THREAD_MAX_SLEEP_TIME)
- wait = GC_THREAD_MAX_SLEEP_TIME;
+ wait += gc_th->min_sleep_time;
+ if (wait > gc_th->max_sleep_time)
+ wait = gc_th->max_sleep_time;
return wait;
}
-static inline long decrease_sleep_time(long wait)
+static inline long decrease_sleep_time(struct f2fs_gc_kthread *gc_th, long wait)
{
- if (wait == GC_THREAD_NOGC_SLEEP_TIME)
- wait = GC_THREAD_MAX_SLEEP_TIME;
+ if (wait == gc_th->no_gc_sleep_time)
+ wait = gc_th->max_sleep_time;
- wait -= GC_THREAD_MIN_SLEEP_TIME;
- if (wait <= GC_THREAD_MIN_SLEEP_TIME)
- wait = GC_THREAD_MIN_SLEEP_TIME;
+ wait -= gc_th->min_sleep_time;
+ if (wait <= gc_th->min_sleep_time)
+ wait = gc_th->min_sleep_time;
return wait;
}
diff --git a/fs/f2fs/hash.c b/fs/f2fs/hash.c
index 6eb8d26..a844fcf 100644
--- a/fs/f2fs/hash.c
+++ b/fs/f2fs/hash.c
@@ -42,7 +42,8 @@
buf[1] += b1;
}
-static void str2hashbuf(const char *msg, size_t len, unsigned int *buf, int num)
+static void str2hashbuf(const unsigned char *msg, size_t len,
+ unsigned int *buf, int num)
{
unsigned pad, val;
int i;
@@ -69,12 +70,14 @@
*buf++ = pad;
}
-f2fs_hash_t f2fs_dentry_hash(const char *name, size_t len)
+f2fs_hash_t f2fs_dentry_hash(const struct qstr *name_info)
{
__u32 hash;
f2fs_hash_t f2fs_hash;
- const char *p;
+ const unsigned char *p;
__u32 in[8], buf[4];
+ const unsigned char *name = name_info->name;
+ size_t len = name_info->len;
if ((len <= 2) && (name[0] == '.') &&
(name[1] == '.' || name[1] == '\0'))
diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
new file mode 100644
index 0000000..6aef11d
--- /dev/null
+++ b/fs/f2fs/inline.c
@@ -0,0 +1,256 @@
+/*
+ * fs/f2fs/inline.c
+ * Copyright (c) 2013, Intel Corporation
+ * Authors: Huajun Li <huajun.li@intel.com>
+ *          Haicheng Li <haicheng.li@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/fs.h>
+#include <linux/f2fs_fs.h>
+
+#include "f2fs.h"
+
+bool f2fs_may_inline(struct inode *inode)
+{
+ block_t nr_blocks;
+ loff_t i_size;
+
+ if (!test_opt(F2FS_I_SB(inode), INLINE_DATA))
+ return false;
+
+ nr_blocks = F2FS_I(inode)->i_xattr_nid ? 3 : 2;
+ if (inode->i_blocks > nr_blocks)
+ return false;
+
+ i_size = i_size_read(inode);
+ if (i_size > MAX_INLINE_DATA)
+ return false;
+
+ return true;
+}
+
+int f2fs_read_inline_data(struct inode *inode, struct page *page)
+{
+ struct page *ipage;
+ void *src_addr, *dst_addr;
+
+ if (page->index) {
+ zero_user_segment(page, 0, PAGE_CACHE_SIZE);
+ goto out;
+ }
+
+ ipage = get_node_page(F2FS_I_SB(inode), inode->i_ino);
+ if (IS_ERR(ipage)) {
+ unlock_page(page);
+ return PTR_ERR(ipage);
+ }
+
+ zero_user_segment(page, MAX_INLINE_DATA, PAGE_CACHE_SIZE);
+
+ /* Copy the whole inline data block */
+ src_addr = inline_data_addr(ipage);
+ dst_addr = kmap(page);
+ memcpy(dst_addr, src_addr, MAX_INLINE_DATA);
+ kunmap(page);
+ f2fs_put_page(ipage, 1);
+
+out:
+ SetPageUptodate(page);
+ unlock_page(page);
+
+ return 0;
+}
+
+static int __f2fs_convert_inline_data(struct inode *inode, struct page *page)
+{
+ int err = 0;
+ struct page *ipage;
+ struct dnode_of_data dn;
+ void *src_addr, *dst_addr;
+ block_t new_blk_addr;
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ struct f2fs_io_info fio = {
+ .type = DATA,
+ .rw = WRITE_SYNC | REQ_PRIO,
+ };
+
+ f2fs_lock_op(sbi);
+ ipage = get_node_page(sbi, inode->i_ino);
+ if (IS_ERR(ipage)) {
+ err = PTR_ERR(ipage);
+ goto out;
+ }
+
+ /* someone else converted inline_data already */
+ if (!f2fs_has_inline_data(inode))
+ goto out;
+
+ /*
+	 * i_addr[0] is not used for inline data,
+	 * so reserving a new block will not destroy the inline data
+ */
+ set_new_dnode(&dn, inode, ipage, NULL, 0);
+ err = f2fs_reserve_block(&dn, 0);
+ if (err)
+ goto out;
+
+ f2fs_wait_on_page_writeback(page, DATA);
+ zero_user_segment(page, MAX_INLINE_DATA, PAGE_CACHE_SIZE);
+
+ /* Copy the whole inline data block */
+ src_addr = inline_data_addr(ipage);
+ dst_addr = kmap(page);
+ memcpy(dst_addr, src_addr, MAX_INLINE_DATA);
+ kunmap(page);
+ SetPageUptodate(page);
+
+ /* write data page to try to make data consistent */
+ set_page_writeback(page);
+ write_data_page(page, &dn, &new_blk_addr, &fio);
+ update_extent_cache(new_blk_addr, &dn);
+ f2fs_wait_on_page_writeback(page, DATA);
+
+ /* clear inline data and flag after data writeback */
+ zero_user_segment(ipage, INLINE_DATA_OFFSET,
+ INLINE_DATA_OFFSET + MAX_INLINE_DATA);
+ clear_inode_flag(F2FS_I(inode), FI_INLINE_DATA);
+ stat_dec_inline_inode(inode);
+
+ sync_inode_page(&dn);
+ f2fs_put_dnode(&dn);
+out:
+ f2fs_unlock_op(sbi);
+ return err;
+}
+
+int f2fs_convert_inline_data(struct inode *inode, pgoff_t to_size,
+ struct page *page)
+{
+ struct page *new_page = page;
+ int err;
+
+ if (!f2fs_has_inline_data(inode))
+ return 0;
+ else if (to_size <= MAX_INLINE_DATA)
+ return 0;
+
+ if (!page || page->index != 0) {
+ new_page = grab_cache_page(inode->i_mapping, 0);
+ if (!new_page)
+ return -ENOMEM;
+ }
+
+ err = __f2fs_convert_inline_data(inode, new_page);
+ if (!page || page->index != 0)
+ f2fs_put_page(new_page, 1);
+ return err;
+}
+
+int f2fs_write_inline_data(struct inode *inode,
+ struct page *page, unsigned size)
+{
+ void *src_addr, *dst_addr;
+ struct page *ipage;
+ struct dnode_of_data dn;
+ int err;
+
+ set_new_dnode(&dn, inode, NULL, NULL, 0);
+ err = get_dnode_of_data(&dn, 0, LOOKUP_NODE);
+ if (err)
+ return err;
+ ipage = dn.inode_page;
+
+ f2fs_wait_on_page_writeback(ipage, NODE);
+ zero_user_segment(ipage, INLINE_DATA_OFFSET,
+ INLINE_DATA_OFFSET + MAX_INLINE_DATA);
+ src_addr = kmap(page);
+ dst_addr = inline_data_addr(ipage);
+ memcpy(dst_addr, src_addr, size);
+ kunmap(page);
+
+ /* Release the first data block if it is allocated */
+ if (!f2fs_has_inline_data(inode)) {
+ truncate_data_blocks_range(&dn, 1);
+ set_inode_flag(F2FS_I(inode), FI_INLINE_DATA);
+ stat_inc_inline_inode(inode);
+ }
+
+ set_inode_flag(F2FS_I(inode), FI_APPEND_WRITE);
+ sync_inode_page(&dn);
+ f2fs_put_dnode(&dn);
+
+ return 0;
+}
+
+void truncate_inline_data(struct inode *inode, u64 from)
+{
+ struct page *ipage;
+
+ if (from >= MAX_INLINE_DATA)
+ return;
+
+ ipage = get_node_page(F2FS_I_SB(inode), inode->i_ino);
+ if (IS_ERR(ipage))
+ return;
+
+ f2fs_wait_on_page_writeback(ipage, NODE);
+
+ zero_user_segment(ipage, INLINE_DATA_OFFSET + from,
+ INLINE_DATA_OFFSET + MAX_INLINE_DATA);
+ set_page_dirty(ipage);
+ f2fs_put_page(ipage, 1);
+}
+
+bool recover_inline_data(struct inode *inode, struct page *npage)
+{
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ struct f2fs_inode *ri = NULL;
+ void *src_addr, *dst_addr;
+ struct page *ipage;
+
+ /*
+ * The inline_data recovery policy is as follows.
+ * [prev.] [next] of inline_data flag
+ * o o -> recover inline_data
+ * o x -> remove inline_data, and then recover data blocks
+ * x o -> remove inline_data, and then recover inline_data
+ * x x -> recover data blocks
+ */
+ if (IS_INODE(npage))
+ ri = F2FS_INODE(npage);
+
+ if (f2fs_has_inline_data(inode) &&
+ ri && (ri->i_inline & F2FS_INLINE_DATA)) {
+process_inline:
+ ipage = get_node_page(sbi, inode->i_ino);
+ f2fs_bug_on(sbi, IS_ERR(ipage));
+
+ f2fs_wait_on_page_writeback(ipage, NODE);
+
+ src_addr = inline_data_addr(npage);
+ dst_addr = inline_data_addr(ipage);
+ memcpy(dst_addr, src_addr, MAX_INLINE_DATA);
+ update_inode(inode, ipage);
+ f2fs_put_page(ipage, 1);
+ return true;
+ }
+
+ if (f2fs_has_inline_data(inode)) {
+ ipage = get_node_page(sbi, inode->i_ino);
+ f2fs_bug_on(sbi, IS_ERR(ipage));
+ f2fs_wait_on_page_writeback(ipage, NODE);
+ zero_user_segment(ipage, INLINE_DATA_OFFSET,
+ INLINE_DATA_OFFSET + MAX_INLINE_DATA);
+ clear_inode_flag(F2FS_I(inode), FI_INLINE_DATA);
+ update_inode(inode, ipage);
+ f2fs_put_page(ipage, 1);
+ } else if (ri && (ri->i_inline & F2FS_INLINE_DATA)) {
+ truncate_blocks(inode, 0, false);
+ set_inode_flag(F2FS_I(inode), FI_INLINE_DATA);
+ goto process_inline;
+ }
+ return false;
+}
diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
index 91ac7f9..b957f96 100644
--- a/fs/f2fs/inode.c
+++ b/fs/f2fs/inode.c
@@ -37,18 +37,47 @@
inode->i_flags |= S_DIRSYNC;
}
+static void __get_inode_rdev(struct inode *inode, struct f2fs_inode *ri)
+{
+ if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode) ||
+ S_ISFIFO(inode->i_mode) || S_ISSOCK(inode->i_mode)) {
+ if (ri->i_addr[0])
+ inode->i_rdev =
+ old_decode_dev(le32_to_cpu(ri->i_addr[0]));
+ else
+ inode->i_rdev =
+ new_decode_dev(le32_to_cpu(ri->i_addr[1]));
+ }
+}
+
+static void __set_inode_rdev(struct inode *inode, struct f2fs_inode *ri)
+{
+ if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode)) {
+ if (old_valid_dev(inode->i_rdev)) {
+ ri->i_addr[0] =
+ cpu_to_le32(old_encode_dev(inode->i_rdev));
+ ri->i_addr[1] = 0;
+ } else {
+ ri->i_addr[0] = 0;
+ ri->i_addr[1] =
+ cpu_to_le32(new_encode_dev(inode->i_rdev));
+ ri->i_addr[2] = 0;
+ }
+ }
+}
+
static int do_read_inode(struct inode *inode)
{
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
struct f2fs_inode_info *fi = F2FS_I(inode);
struct page *node_page;
- struct f2fs_node *rn;
struct f2fs_inode *ri;
/* Check if ino is within scope */
if (check_nid_range(sbi, inode->i_ino)) {
f2fs_msg(inode->i_sb, KERN_ERR, "bad inode number: %lu",
(unsigned long) inode->i_ino);
+ WARN_ON(1);
return -EINVAL;
}
@@ -56,8 +85,7 @@
if (IS_ERR(node_page))
return PTR_ERR(node_page);
- rn = page_address(node_page);
- ri = &(rn->i);
+ ri = F2FS_INODE(node_page);
inode->i_mode = le16_to_cpu(ri->i_mode);
i_uid_write(inode, le32_to_cpu(ri->i_uid));
@@ -73,10 +101,6 @@
inode->i_ctime.tv_nsec = le32_to_cpu(ri->i_ctime_nsec);
inode->i_mtime.tv_nsec = le32_to_cpu(ri->i_mtime_nsec);
inode->i_generation = le32_to_cpu(ri->i_generation);
- if (ri->i_addr[0])
- inode->i_rdev = old_decode_dev(le32_to_cpu(ri->i_addr[0]));
- else
- inode->i_rdev = new_decode_dev(le32_to_cpu(ri->i_addr[1]));
fi->i_current_depth = le32_to_cpu(ri->i_current_depth);
fi->i_xattr_nid = le32_to_cpu(ri->i_xattr_nid);
@@ -84,7 +108,14 @@
fi->flags = 0;
fi->i_advise = ri->i_advise;
fi->i_pino = le32_to_cpu(ri->i_pino);
+ fi->i_dir_level = ri->i_dir_level;
+
get_extent_info(&fi->ext, ri->i_ext);
+ get_inline_info(fi, ri);
+
+ /* get rdev by using inline_info */
+ __get_inode_rdev(inode, ri);
+
f2fs_put_page(node_page, 1);
return 0;
}
@@ -109,12 +140,6 @@
ret = do_read_inode(inode);
if (ret)
goto bad_inode;
-
- if (!sbi->por_doing && inode->i_nlink == 0) {
- ret = -ENOENT;
- goto bad_inode;
- }
-
make_now:
if (ino == F2FS_NODE_INO(sbi)) {
inode->i_mapping->a_ops = &f2fs_node_aops;
@@ -130,8 +155,7 @@
inode->i_op = &f2fs_dir_inode_operations;
inode->i_fop = &f2fs_dir_operations;
inode->i_mapping->a_ops = &f2fs_dblock_aops;
- mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER_MOVABLE |
- __GFP_ZERO);
+ mapping_set_gfp_mask(inode->i_mapping, GFP_F2FS_ZERO);
} else if (S_ISLNK(inode->i_mode)) {
inode->i_op = &f2fs_symlink_inode_operations;
inode->i_mapping->a_ops = &f2fs_dblock_aops;
@@ -155,13 +179,11 @@
void update_inode(struct inode *inode, struct page *node_page)
{
- struct f2fs_node *rn;
struct f2fs_inode *ri;
- wait_on_page_writeback(node_page);
+ f2fs_wait_on_page_writeback(node_page, NODE);
- rn = page_address(node_page);
- ri = &(rn->i);
+ ri = F2FS_INODE(node_page);
ri->i_mode = cpu_to_le16(inode->i_mode);
ri->i_advise = F2FS_I(inode)->i_advise;
@@ -171,6 +193,7 @@
ri->i_size = cpu_to_le64(i_size_read(inode));
ri->i_blocks = cpu_to_le64(inode->i_blocks);
set_raw_extent(&F2FS_I(inode)->ext, &ri->i_ext);
+ set_raw_inline(F2FS_I(inode), ri);
ri->i_atime = cpu_to_le64(inode->i_atime.tv_sec);
ri->i_ctime = cpu_to_le64(inode->i_ctime.tv_sec);
@@ -183,58 +206,58 @@
ri->i_flags = cpu_to_le32(F2FS_I(inode)->i_flags);
ri->i_pino = cpu_to_le32(F2FS_I(inode)->i_pino);
ri->i_generation = cpu_to_le32(inode->i_generation);
+ ri->i_dir_level = F2FS_I(inode)->i_dir_level;
- if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode)) {
- if (old_valid_dev(inode->i_rdev)) {
- ri->i_addr[0] =
- cpu_to_le32(old_encode_dev(inode->i_rdev));
- ri->i_addr[1] = 0;
- } else {
- ri->i_addr[0] = 0;
- ri->i_addr[1] =
- cpu_to_le32(new_encode_dev(inode->i_rdev));
- ri->i_addr[2] = 0;
- }
- }
-
+ __set_inode_rdev(inode, ri);
set_cold_node(inode, node_page);
set_page_dirty(node_page);
+
+ clear_inode_flag(F2FS_I(inode), FI_DIRTY_INODE);
}
-int update_inode_page(struct inode *inode)
+void update_inode_page(struct inode *inode)
{
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
struct page *node_page;
-
+retry:
node_page = get_node_page(sbi, inode->i_ino);
- if (IS_ERR(node_page))
- return PTR_ERR(node_page);
-
+ if (IS_ERR(node_page)) {
+ int err = PTR_ERR(node_page);
+ if (err == -ENOMEM) {
+ cond_resched();
+ goto retry;
+ } else if (err != -ENOENT) {
+ f2fs_stop_checkpoint(sbi);
+ }
+ return;
+ }
update_inode(inode, node_page);
f2fs_put_page(node_page, 1);
- return 0;
}
int f2fs_write_inode(struct inode *inode, struct writeback_control *wbc)
{
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
- int ret, ilock;
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
if (inode->i_ino == F2FS_NODE_INO(sbi) ||
inode->i_ino == F2FS_META_INO(sbi))
return 0;
- if (wbc)
- f2fs_balance_fs(sbi);
+ if (!is_inode_flag_set(F2FS_I(inode), FI_DIRTY_INODE))
+ return 0;
/*
 * We need to lock here to prevent producing dirty node pages
 * during the urgent cleaning time when running out of free sections.
*/
- ilock = mutex_lock_op(sbi);
- ret = update_inode_page(inode);
- mutex_unlock_op(sbi, ilock);
- return ret;
+ f2fs_lock_op(sbi);
+ update_inode_page(inode);
+ f2fs_unlock_op(sbi);
+
+ if (wbc)
+ f2fs_balance_fs(sbi);
+
+ return 0;
}
/*
@@ -242,17 +265,21 @@
*/
void f2fs_evict_inode(struct inode *inode)
{
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
- int ilock;
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ nid_t xnid = F2FS_I(inode)->i_xattr_nid;
+
+ /* some remaining atomic pages should be discarded */
+ if (is_inode_flag_set(F2FS_I(inode), FI_ATOMIC_FILE))
+ commit_atomic_pages(inode, 0, true);
trace_f2fs_evict_inode(inode);
truncate_inode_pages(&inode->i_data, 0);
if (inode->i_ino == F2FS_NODE_INO(sbi) ||
inode->i_ino == F2FS_META_INO(sbi))
- goto no_delete;
+ goto out_clear;
- BUG_ON(atomic_read(&F2FS_I(inode)->dirty_dents));
+ f2fs_bug_on(sbi, get_dirty_pages(inode));
remove_dirty_dir_inode(inode);
if (inode->i_nlink || is_bad_inode(inode))
@@ -265,11 +292,43 @@
if (F2FS_HAS_BLOCKS(inode))
f2fs_truncate(inode);
- ilock = mutex_lock_op(sbi);
+ f2fs_lock_op(sbi);
remove_inode_page(inode);
- mutex_unlock_op(sbi, ilock);
+ stat_dec_inline_inode(inode);
+ f2fs_unlock_op(sbi);
sb_end_intwrite(inode->i_sb);
no_delete:
+ invalidate_mapping_pages(NODE_MAPPING(sbi), inode->i_ino, inode->i_ino);
+ if (xnid)
+ invalidate_mapping_pages(NODE_MAPPING(sbi), xnid, xnid);
+ if (is_inode_flag_set(F2FS_I(inode), FI_APPEND_WRITE))
+ add_dirty_inode(sbi, inode->i_ino, APPEND_INO);
+ if (is_inode_flag_set(F2FS_I(inode), FI_UPDATE_WRITE))
+ add_dirty_inode(sbi, inode->i_ino, UPDATE_INO);
+out_clear:
clear_inode(inode);
}
+
+/* caller should call f2fs_lock_op() */
+void handle_failed_inode(struct inode *inode)
+{
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+
+ clear_nlink(inode);
+ make_bad_inode(inode);
+ unlock_new_inode(inode);
+
+ i_size_write(inode, 0);
+ if (F2FS_HAS_BLOCKS(inode))
+ f2fs_truncate(inode);
+
+ remove_inode_page(inode);
+ stat_dec_inline_inode(inode);
+
+ alloc_nid_failed(sbi, inode->i_ino);
+ f2fs_unlock_op(sbi);
+
+ /* iput will drop the inode object */
+ iput(inode);
+}
diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
index 47abc97..625601e 100644
--- a/fs/f2fs/namei.c
+++ b/fs/f2fs/namei.c
@@ -13,6 +13,7 @@
#include <linux/pagemap.h>
#include <linux/sched.h>
#include <linux/ctype.h>
+#include <linux/dcache.h>
#include "f2fs.h"
#include "node.h"
@@ -22,37 +23,34 @@
static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode)
{
- struct super_block *sb = dir->i_sb;
- struct f2fs_sb_info *sbi = F2FS_SB(sb);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(dir);
nid_t ino;
struct inode *inode;
bool nid_free = false;
- int err, ilock;
+ int err;
- inode = new_inode(sb);
+ inode = new_inode(dir->i_sb);
if (!inode)
return ERR_PTR(-ENOMEM);
- ilock = mutex_lock_op(sbi);
+ f2fs_lock_op(sbi);
if (!alloc_nid(sbi, &ino)) {
- mutex_unlock_op(sbi, ilock);
+ f2fs_unlock_op(sbi);
err = -ENOSPC;
goto fail;
}
- mutex_unlock_op(sbi, ilock);
+ f2fs_unlock_op(sbi);
inode->i_uid = current_fsuid();
- if (dir->i_mode & S_ISGID) {
- inode->i_gid = dir->i_gid;
- if (S_ISDIR(mode))
- mode |= S_ISGID;
+ if (IS_ANDROID_EMU(sbi, F2FS_I(dir), F2FS_I(dir))) {
+ f2fs_android_emu(sbi, inode, &inode->i_uid,
+ &inode->i_gid, &mode);
} else {
- inode->i_gid = current_fsgid();
+ inode_init_owner(inode, dir, mode);
}
inode->i_ino = ino;
- inode->i_mode = mode;
inode->i_blocks = 0;
inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME;
inode->i_generation = sbi->s_next_generation++;
@@ -83,21 +81,11 @@
{
size_t slen = strlen(s);
size_t sublen = strlen(sub);
- int ret;
if (sublen > slen)
return 0;
- ret = memcmp(s + slen - sublen, sub, sublen);
- if (ret) { /* compare upper case */
- int i;
- char upper_sub[8];
- for (i = 0; i < sublen && i < sizeof(upper_sub); i++)
- upper_sub[i] = toupper(sub[i]);
- return !memcmp(s + slen - sublen, upper_sub, sublen);
- }
-
- return !ret;
+ return !strncasecmp(s + slen - sublen, sub, sublen);
}
/*
@@ -112,7 +100,7 @@
int count = le32_to_cpu(sbi->raw_super->extension_count);
for (i = 0; i < count; i++) {
if (is_multimedia_file(name, extlist[i])) {
- set_cold_file(inode);
+ file_set_cold(inode);
break;
}
}
@@ -121,11 +109,10 @@
static int f2fs_create(struct inode *dir, struct dentry *dentry, umode_t mode,
bool excl)
{
- struct super_block *sb = dir->i_sb;
- struct f2fs_sb_info *sbi = F2FS_SB(sb);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(dir);
struct inode *inode;
nid_t ino = 0;
- int err, ilock;
+ int err;
f2fs_balance_fs(sbi);
@@ -141,24 +128,19 @@
inode->i_mapping->a_ops = &f2fs_dblock_aops;
ino = inode->i_ino;
- ilock = mutex_lock_op(sbi);
+ f2fs_lock_op(sbi);
err = f2fs_add_link(dentry, inode);
- mutex_unlock_op(sbi, ilock);
if (err)
goto out;
+ f2fs_unlock_op(sbi);
alloc_nid_done(sbi, ino);
- if (!sbi->por_doing)
- d_instantiate(dentry, inode);
+ d_instantiate(dentry, inode);
unlock_new_inode(inode);
return 0;
out:
- clear_nlink(inode);
- unlock_new_inode(inode);
- make_bad_inode(inode);
- iput(inode);
- alloc_nid_failed(sbi, ino);
+ handle_failed_inode(inode);
return err;
}
@@ -166,34 +148,27 @@
struct dentry *dentry)
{
struct inode *inode = old_dentry->d_inode;
- struct super_block *sb = dir->i_sb;
- struct f2fs_sb_info *sbi = F2FS_SB(sb);
- int err, ilock;
+ struct f2fs_sb_info *sbi = F2FS_I_SB(dir);
+ int err;
f2fs_balance_fs(sbi);
inode->i_ctime = CURRENT_TIME;
- atomic_inc(&inode->i_count);
+ ihold(inode);
set_inode_flag(F2FS_I(inode), FI_INC_LINK);
- ilock = mutex_lock_op(sbi);
+ f2fs_lock_op(sbi);
err = f2fs_add_link(dentry, inode);
- mutex_unlock_op(sbi, ilock);
if (err)
goto out;
-
- /*
- * This file should be checkpointed during fsync.
- * We lost i_pino from now on.
- */
- set_cp_file(inode);
+ f2fs_unlock_op(sbi);
d_instantiate(dentry, inode);
return 0;
out:
clear_inode_flag(F2FS_I(inode), FI_INC_LINK);
- make_bad_inode(inode);
iput(inode);
+ f2fs_unlock_op(sbi);
return err;
}
@@ -225,6 +200,8 @@
inode = f2fs_iget(dir->i_sb, ino);
if (IS_ERR(inode))
return ERR_CAST(inode);
+
+ stat_inc_inline_inode(inode);
}
return d_splice_alias(inode, dentry);
@@ -232,13 +209,11 @@
static int f2fs_unlink(struct inode *dir, struct dentry *dentry)
{
- struct super_block *sb = dir->i_sb;
- struct f2fs_sb_info *sbi = F2FS_SB(sb);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(dir);
struct inode *inode = dentry->d_inode;
struct f2fs_dir_entry *de;
struct page *page;
int err = -ENOENT;
- int ilock;
trace_f2fs_unlink_enter(dir, dentry);
f2fs_balance_fs(sbi);
@@ -247,16 +222,16 @@
if (!de)
goto fail;
- err = check_orphan_space(sbi);
+ f2fs_lock_op(sbi);
+ err = acquire_orphan_inode(sbi);
if (err) {
+ f2fs_unlock_op(sbi);
kunmap(page);
f2fs_put_page(page, 0);
goto fail;
}
-
- ilock = mutex_lock_op(sbi);
f2fs_delete_entry(de, page, inode);
- mutex_unlock_op(sbi, ilock);
+ f2fs_unlock_op(sbi);
/* In order to evict this inode, we set it dirty */
mark_inode_dirty(inode);
@@ -268,11 +243,10 @@
static int f2fs_symlink(struct inode *dir, struct dentry *dentry,
const char *symname)
{
- struct super_block *sb = dir->i_sb;
- struct f2fs_sb_info *sbi = F2FS_SB(sb);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(dir);
struct inode *inode;
size_t symlen = strlen(symname) + 1;
- int err, ilock;
+ int err;
f2fs_balance_fs(sbi);
@@ -283,11 +257,11 @@
inode->i_op = &f2fs_symlink_inode_operations;
inode->i_mapping->a_ops = &f2fs_dblock_aops;
- ilock = mutex_lock_op(sbi);
+ f2fs_lock_op(sbi);
err = f2fs_add_link(dentry, inode);
- mutex_unlock_op(sbi, ilock);
if (err)
goto out;
+ f2fs_unlock_op(sbi);
err = page_symlink(inode, symname, symlen);
alloc_nid_done(sbi, inode->i_ino);
@@ -296,19 +270,15 @@
unlock_new_inode(inode);
return err;
out:
- clear_nlink(inode);
- unlock_new_inode(inode);
- make_bad_inode(inode);
- iput(inode);
- alloc_nid_failed(sbi, inode->i_ino);
+ handle_failed_inode(inode);
return err;
}
static int f2fs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
{
- struct f2fs_sb_info *sbi = F2FS_SB(dir->i_sb);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(dir);
struct inode *inode;
- int err, ilock;
+ int err;
f2fs_balance_fs(sbi);
@@ -322,11 +292,11 @@
mapping_set_gfp_mask(inode->i_mapping, GFP_F2FS_ZERO);
set_inode_flag(F2FS_I(inode), FI_INC_LINK);
- ilock = mutex_lock_op(sbi);
+ f2fs_lock_op(sbi);
err = f2fs_add_link(dentry, inode);
- mutex_unlock_op(sbi, ilock);
if (err)
goto out_fail;
+ f2fs_unlock_op(sbi);
alloc_nid_done(sbi, inode->i_ino);
@@ -337,11 +307,7 @@
out_fail:
clear_inode_flag(F2FS_I(inode), FI_INC_LINK);
- clear_nlink(inode);
- unlock_new_inode(inode);
- make_bad_inode(inode);
- iput(inode);
- alloc_nid_failed(sbi, inode->i_ino);
+ handle_failed_inode(inode);
return err;
}
@@ -356,11 +322,9 @@
static int f2fs_mknod(struct inode *dir, struct dentry *dentry,
umode_t mode, dev_t rdev)
{
- struct super_block *sb = dir->i_sb;
- struct f2fs_sb_info *sbi = F2FS_SB(sb);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(dir);
struct inode *inode;
int err = 0;
- int ilock;
if (!new_valid_dev(rdev))
return -EINVAL;
@@ -374,38 +338,33 @@
init_special_inode(inode, inode->i_mode, rdev);
inode->i_op = &f2fs_special_inode_operations;
- ilock = mutex_lock_op(sbi);
+ f2fs_lock_op(sbi);
err = f2fs_add_link(dentry, inode);
- mutex_unlock_op(sbi, ilock);
if (err)
goto out;
+ f2fs_unlock_op(sbi);
alloc_nid_done(sbi, inode->i_ino);
d_instantiate(dentry, inode);
unlock_new_inode(inode);
return 0;
out:
- clear_nlink(inode);
- unlock_new_inode(inode);
- make_bad_inode(inode);
- iput(inode);
- alloc_nid_failed(sbi, inode->i_ino);
+ handle_failed_inode(inode);
return err;
}
static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
struct inode *new_dir, struct dentry *new_dentry)
{
- struct super_block *sb = old_dir->i_sb;
- struct f2fs_sb_info *sbi = F2FS_SB(sb);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(old_dir);
struct inode *old_inode = old_dentry->d_inode;
struct inode *new_inode = new_dentry->d_inode;
struct page *old_dir_page;
- struct page *old_page;
+ struct page *old_page, *new_page;
struct f2fs_dir_entry *old_dir_entry = NULL;
struct f2fs_dir_entry *old_entry;
struct f2fs_dir_entry *new_entry;
- int err = -ENOENT, ilock = -1;
+ int err = -ENOENT;
f2fs_balance_fs(sbi);
@@ -420,10 +379,7 @@
goto out_old;
}
- ilock = mutex_lock_op(sbi);
-
if (new_inode) {
- struct page *new_page;
err = -ENOTEMPTY;
if (old_dir_entry && !f2fs_empty_dir(new_inode))
@@ -435,19 +391,43 @@
if (!new_entry)
goto out_dir;
+ f2fs_lock_op(sbi);
+
+ err = acquire_orphan_inode(sbi);
+ if (err)
+ goto put_out_dir;
+
+ if (update_dent_inode(old_inode, &new_dentry->d_name)) {
+ release_orphan_inode(sbi);
+ goto put_out_dir;
+ }
+
f2fs_set_link(new_dir, new_entry, new_page, old_inode);
new_inode->i_ctime = CURRENT_TIME;
+ down_write(&F2FS_I(new_inode)->i_sem);
if (old_dir_entry)
drop_nlink(new_inode);
drop_nlink(new_inode);
+ up_write(&F2FS_I(new_inode)->i_sem);
+
+ mark_inode_dirty(new_inode);
+
if (!new_inode->i_nlink)
add_orphan_inode(sbi, new_inode->i_ino);
+ else
+ release_orphan_inode(sbi);
+
+ update_inode_page(old_inode);
update_inode_page(new_inode);
} else {
+ f2fs_lock_op(sbi);
+
err = f2fs_add_link(new_dentry, old_inode);
- if (err)
+ if (err) {
+ f2fs_unlock_op(sbi);
goto out_dir;
+ }
if (old_dir_entry) {
inc_nlink(new_dir);
@@ -455,6 +435,10 @@
}
}
+ down_write(&F2FS_I(old_inode)->i_sem);
+ file_lost_pino(old_inode);
+ up_write(&F2FS_I(old_inode)->i_sem);
+
old_inode->i_ctime = CURRENT_TIME;
mark_inode_dirty(old_inode);
@@ -464,23 +448,32 @@
if (old_dir != new_dir) {
f2fs_set_link(old_inode, old_dir_entry,
old_dir_page, new_dir);
+ update_inode_page(old_inode);
} else {
kunmap(old_dir_page);
f2fs_put_page(old_dir_page, 0);
}
drop_nlink(old_dir);
+ mark_inode_dirty(old_dir);
update_inode_page(old_dir);
}
- mutex_unlock_op(sbi, ilock);
+ f2fs_unlock_op(sbi);
return 0;
+put_out_dir:
+ f2fs_unlock_op(sbi);
+ kunmap(new_page);
+ if (PageLocked(new_page)) {
+ f2fs_put_page(new_page, 1);
+ } else {
+ f2fs_put_page(new_page, 0);
+ }
out_dir:
if (old_dir_entry) {
kunmap(old_dir_page);
f2fs_put_page(old_dir_page, 0);
}
- mutex_unlock_op(sbi, ilock);
out_old:
kunmap(old_page);
f2fs_put_page(old_page, 0);
@@ -488,6 +481,48 @@
return err;
}
+static int f2fs_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mode)
+{
+ struct f2fs_sb_info *sbi = F2FS_I_SB(dir);
+ struct inode *inode;
+ int err;
+
+ inode = f2fs_new_inode(dir, mode);
+ if (IS_ERR(inode))
+ return PTR_ERR(inode);
+
+ inode->i_op = &f2fs_file_inode_operations;
+ inode->i_fop = &f2fs_file_operations;
+ inode->i_mapping->a_ops = &f2fs_dblock_aops;
+
+ f2fs_lock_op(sbi);
+ err = acquire_orphan_inode(sbi);
+ if (err)
+ goto out;
+
+ err = f2fs_do_tmpfile(inode, dir);
+ if (err)
+ goto release_out;
+
+ /*
+ * add this non-linked tmpfile to the orphan list; this way we can
+ * remove all unused data of the tmpfile after an abnormal power-off.
+ */
+ add_orphan_inode(sbi, inode->i_ino);
+ f2fs_unlock_op(sbi);
+
+ alloc_nid_done(sbi, inode->i_ino);
+ d_tmpfile(dentry, inode);
+ unlock_new_inode(inode);
+ return 0;
+
+release_out:
+ release_orphan_inode(sbi);
+out:
+ handle_failed_inode(inode);
+ return err;
+}
+
const struct inode_operations f2fs_dir_inode_operations = {
.create = f2fs_create,
.lookup = f2fs_lookup,
@@ -498,6 +533,8 @@
.rmdir = f2fs_rmdir,
.mknod = f2fs_mknod,
.rename = f2fs_rename,
+ .tmpfile = f2fs_tmpfile,
+ .getattr = f2fs_getattr,
.setattr = f2fs_setattr,
.get_acl = f2fs_get_acl,
#ifdef CONFIG_F2FS_FS_XATTR
@@ -512,6 +549,7 @@
.readlink = generic_readlink,
.follow_link = page_follow_link_light,
.put_link = page_put_link,
+ .getattr = f2fs_getattr,
.setattr = f2fs_setattr,
#ifdef CONFIG_F2FS_FS_XATTR
.setxattr = generic_setxattr,
@@ -522,6 +560,7 @@
};
const struct inode_operations f2fs_special_inode_operations = {
+ .getattr = f2fs_getattr,
.setattr = f2fs_setattr,
.get_acl = f2fs_get_acl,
#ifdef CONFIG_F2FS_FS_XATTR
diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index 74f3c7b..19bad24 100644
--- a/fs/f2fs/node.c
+++ b/fs/f2fs/node.c
@@ -21,13 +21,39 @@
#include "segment.h"
#include <trace/events/f2fs.h>
+#define on_build_free_nids(nmi) mutex_is_locked(&nm_i->build_lock)
+
static struct kmem_cache *nat_entry_slab;
static struct kmem_cache *free_nid_slab;
+static struct kmem_cache *nat_entry_set_slab;
+
+bool available_free_memory(struct f2fs_sb_info *sbi, int type)
+{
+ struct f2fs_nm_info *nm_i = NM_I(sbi);
+ struct sysinfo val;
+ unsigned long mem_size = 0;
+ bool res = false;
+
+ si_meminfo(&val);
+ /* give 25%, 25%, 50% of memory to each component respectively */
+ if (type == FREE_NIDS) {
+ mem_size = (nm_i->fcnt * sizeof(struct free_nid)) >> 12;
+ res = mem_size < ((val.totalram * nm_i->ram_thresh / 100) >> 2);
+ } else if (type == NAT_ENTRIES) {
+ mem_size = (nm_i->nat_cnt * sizeof(struct nat_entry)) >> 12;
+ res = mem_size < ((val.totalram * nm_i->ram_thresh / 100) >> 2);
+ } else if (type == DIRTY_DENTS) {
+ if (sbi->sb->s_bdi->dirty_exceeded)
+ return false;
+ mem_size = get_pages(sbi, F2FS_DIRTY_DENTS);
+ res = mem_size < ((val.totalram * nm_i->ram_thresh / 100) >> 1);
+ }
+ return res;
+}
static void clear_node_page_dirty(struct page *page)
{
struct address_space *mapping = page->mapping;
- struct f2fs_sb_info *sbi = F2FS_SB(mapping->host->i_sb);
unsigned int long flags;
if (PageDirty(page)) {
@@ -38,7 +64,7 @@
spin_unlock_irqrestore(&mapping->tree_lock, flags);
clear_page_dirty_for_io(page);
- dec_page_count(sbi, F2FS_DIRTY_NODES);
+ dec_page_count(F2FS_M_SB(mapping), F2FS_DIRTY_NODES);
}
ClearPageUptodate(page);
}
@@ -64,12 +90,8 @@
/* get current nat block page with lock */
src_page = get_meta_page(sbi, src_off);
-
- /* Dirty src_page means that it is already the new target NAT page. */
- if (PageDirty(src_page))
- return src_page;
-
dst_page = grab_meta_page(sbi, dst_off);
+ f2fs_bug_on(sbi, PageDirty(src_page));
src_addr = page_address(src_page);
dst_addr = page_address(dst_page);
@@ -82,40 +104,6 @@
return dst_page;
}
-/*
- * Readahead NAT pages
- */
-static void ra_nat_pages(struct f2fs_sb_info *sbi, int nid)
-{
- struct address_space *mapping = sbi->meta_inode->i_mapping;
- struct f2fs_nm_info *nm_i = NM_I(sbi);
- struct blk_plug plug;
- struct page *page;
- pgoff_t index;
- int i;
-
- blk_start_plug(&plug);
-
- for (i = 0; i < FREE_NID_PAGES; i++, nid += NAT_ENTRY_PER_BLOCK) {
- if (nid >= nm_i->max_nid)
- nid = 0;
- index = current_nat_addr(sbi, nid);
-
- page = grab_cache_page(mapping, index);
- if (!page)
- continue;
- if (PageUptodate(page)) {
- f2fs_put_page(page, 1);
- continue;
- }
- if (f2fs_readpage(sbi, page, index, READ))
- continue;
-
- f2fs_put_page(page, 0);
- }
- blk_finish_plug(&plug);
-}
-
static struct nat_entry *__lookup_nat_cache(struct f2fs_nm_info *nm_i, nid_t n)
{
return radix_tree_lookup(&nm_i->nat_root, n);
@@ -135,20 +123,101 @@
kmem_cache_free(nat_entry_slab, e);
}
-int is_checkpointed_node(struct f2fs_sb_info *sbi, nid_t nid)
+static void __set_nat_cache_dirty(struct f2fs_nm_info *nm_i,
+ struct nat_entry *ne)
+{
+ nid_t set = ne->ni.nid / NAT_ENTRY_PER_BLOCK;
+ struct nat_entry_set *head;
+
+ if (get_nat_flag(ne, IS_DIRTY))
+ return;
+retry:
+ head = radix_tree_lookup(&nm_i->nat_set_root, set);
+ if (!head) {
+ head = f2fs_kmem_cache_alloc(nat_entry_set_slab, GFP_ATOMIC);
+
+ INIT_LIST_HEAD(&head->entry_list);
+ INIT_LIST_HEAD(&head->set_list);
+ head->set = set;
+ head->entry_cnt = 0;
+
+ if (radix_tree_insert(&nm_i->nat_set_root, set, head)) {
+ cond_resched();
+ goto retry;
+ }
+ }
+ list_move_tail(&ne->list, &head->entry_list);
+ nm_i->dirty_nat_cnt++;
+ head->entry_cnt++;
+ set_nat_flag(ne, IS_DIRTY, true);
+}
+
+static void __clear_nat_cache_dirty(struct f2fs_nm_info *nm_i,
+ struct nat_entry *ne)
+{
+ nid_t set = ne->ni.nid / NAT_ENTRY_PER_BLOCK;
+ struct nat_entry_set *head;
+
+ head = radix_tree_lookup(&nm_i->nat_set_root, set);
+ if (head) {
+ list_move_tail(&ne->list, &nm_i->nat_entries);
+ set_nat_flag(ne, IS_DIRTY, false);
+ head->entry_cnt--;
+ nm_i->dirty_nat_cnt--;
+ }
+}
+
+static unsigned int __gang_lookup_nat_set(struct f2fs_nm_info *nm_i,
+ nid_t start, unsigned int nr, struct nat_entry_set **ep)
+{
+ return radix_tree_gang_lookup(&nm_i->nat_set_root, (void **)ep,
+ start, nr);
+}
+
+bool is_checkpointed_node(struct f2fs_sb_info *sbi, nid_t nid)
{
struct f2fs_nm_info *nm_i = NM_I(sbi);
struct nat_entry *e;
- int is_cp = 1;
+ bool is_cp = true;
read_lock(&nm_i->nat_tree_lock);
e = __lookup_nat_cache(nm_i, nid);
- if (e && !e->checkpointed)
- is_cp = 0;
+ if (e && !get_nat_flag(e, IS_CHECKPOINTED))
+ is_cp = false;
read_unlock(&nm_i->nat_tree_lock);
return is_cp;
}
+bool has_fsynced_inode(struct f2fs_sb_info *sbi, nid_t ino)
+{
+ struct f2fs_nm_info *nm_i = NM_I(sbi);
+ struct nat_entry *e;
+ bool fsynced = false;
+
+ read_lock(&nm_i->nat_tree_lock);
+ e = __lookup_nat_cache(nm_i, ino);
+ if (e && get_nat_flag(e, HAS_FSYNCED_INODE))
+ fsynced = true;
+ read_unlock(&nm_i->nat_tree_lock);
+ return fsynced;
+}
+
+bool need_inode_block_update(struct f2fs_sb_info *sbi, nid_t ino)
+{
+ struct f2fs_nm_info *nm_i = NM_I(sbi);
+ struct nat_entry *e;
+ bool need_update = true;
+
+ read_lock(&nm_i->nat_tree_lock);
+ e = __lookup_nat_cache(nm_i, ino);
+ if (e && get_nat_flag(e, HAS_LAST_FSYNC) &&
+ (get_nat_flag(e, IS_CHECKPOINTED) ||
+ get_nat_flag(e, HAS_FSYNCED_INODE)))
+ need_update = false;
+ read_unlock(&nm_i->nat_tree_lock);
+ return need_update;
+}
+
static struct nat_entry *grab_nat_entry(struct f2fs_nm_info *nm_i, nid_t nid)
{
struct nat_entry *new;
@@ -162,6 +231,7 @@
}
memset(new, 0, sizeof(struct nat_entry));
nat_set_nid(new, nid);
+ nat_reset_flag(new);
list_add_tail(&new->list, &nm_i->nat_entries);
nm_i->nat_cnt++;
return new;
@@ -180,16 +250,13 @@
write_unlock(&nm_i->nat_tree_lock);
goto retry;
}
- nat_set_blkaddr(e, le32_to_cpu(ne->block_addr));
- nat_set_ino(e, le32_to_cpu(ne->ino));
- nat_set_version(e, ne->version);
- e->checkpointed = true;
+ node_info_from_raw_nat(&e->ni, ne);
}
write_unlock(&nm_i->nat_tree_lock);
}
-static void set_node_addr(struct f2fs_sb_info *sbi, struct node_info *ni,
- block_t new_blkaddr)
+static int set_node_addr(struct f2fs_sb_info *sbi, struct node_info *ni,
+ block_t new_blkaddr, bool fsync_done)
{
struct f2fs_nm_info *nm_i = NM_I(sbi);
struct nat_entry *e;
@@ -203,8 +270,7 @@
goto retry;
}
e->ni = *ni;
- e->checkpointed = true;
- BUG_ON(ni->blk_addr == NEW_ADDR);
+ f2fs_bug_on(sbi, ni->blk_addr == NEW_ADDR);
} else if (new_blkaddr == NEW_ADDR) {
/*
* when nid is reallocated,
@@ -212,19 +278,23 @@
* So, reinitialize it with new information.
*/
e->ni = *ni;
- BUG_ON(ni->blk_addr != NULL_ADDR);
+ if (ni->blk_addr != NULL_ADDR) {
+ f2fs_msg(sbi->sb, KERN_ERR, "node block address is "
+ "already set: %llu", (unsigned long long)ni->blk_addr);
+ f2fs_handle_error(sbi);
+ /* just give up on this node */
+ write_unlock(&nm_i->nat_tree_lock);
+ return -EIO;
+ }
}
- if (new_blkaddr == NEW_ADDR)
- e->checkpointed = false;
-
/* sanity check */
- BUG_ON(nat_get_blkaddr(e) != ni->blk_addr);
- BUG_ON(nat_get_blkaddr(e) == NULL_ADDR &&
+ f2fs_bug_on(sbi, nat_get_blkaddr(e) != ni->blk_addr);
+ f2fs_bug_on(sbi, nat_get_blkaddr(e) == NULL_ADDR &&
new_blkaddr == NULL_ADDR);
- BUG_ON(nat_get_blkaddr(e) == NEW_ADDR &&
+ f2fs_bug_on(sbi, nat_get_blkaddr(e) == NEW_ADDR &&
new_blkaddr == NEW_ADDR);
- BUG_ON(nat_get_blkaddr(e) != NEW_ADDR &&
+ f2fs_bug_on(sbi, nat_get_blkaddr(e) != NEW_ADDR &&
nat_get_blkaddr(e) != NULL_ADDR &&
new_blkaddr == NEW_ADDR);
@@ -236,15 +306,26 @@
/* change address */
nat_set_blkaddr(e, new_blkaddr);
+ if (new_blkaddr == NEW_ADDR || new_blkaddr == NULL_ADDR)
+ set_nat_flag(e, IS_CHECKPOINTED, false);
__set_nat_cache_dirty(nm_i, e);
+
+ /* update fsync_mark if its inode nat entry is still alive */
+ e = __lookup_nat_cache(nm_i, ni->ino);
+ if (e) {
+ if (fsync_done && ni->nid == ni->ino)
+ set_nat_flag(e, HAS_FSYNCED_INODE, true);
+ set_nat_flag(e, HAS_LAST_FSYNC, fsync_done);
+ }
write_unlock(&nm_i->nat_tree_lock);
+ return 0;
}
-static int try_to_free_nats(struct f2fs_sb_info *sbi, int nr_shrink)
+int try_to_free_nats(struct f2fs_sb_info *sbi, int nr_shrink)
{
struct f2fs_nm_info *nm_i = NM_I(sbi);
- if (nm_i->nat_cnt <= NM_WOUT_THRESHOLD)
+ if (available_free_memory(sbi, NAT_ENTRIES))
return 0;
write_lock(&nm_i->nat_tree_lock);
@@ -315,9 +396,10 @@
* The maximum depth is four.
* Offset[0] will have raw inode offset.
*/
-static int get_node_path(long block, int offset[4], unsigned int noffset[4])
+static int get_node_path(struct f2fs_inode_info *fi, long block,
+ int offset[4], unsigned int noffset[4])
{
- const long direct_index = ADDRS_PER_INODE;
+ const long direct_index = ADDRS_PER_INODE(fi);
const long direct_blks = ADDRS_PER_BLOCK;
const long dptrs_per_blk = NIDS_PER_BLOCK;
const long indirect_blks = ADDRS_PER_BLOCK * NIDS_PER_BLOCK;
@@ -390,13 +472,13 @@
/*
* Caller should call f2fs_put_dnode(dn).
- * Also, it should grab and release a mutex by calling mutex_lock_op() and
- * mutex_unlock_op() only if ro is not set RDONLY_NODE.
+ * Also, it should grab and release a rwsem by calling f2fs_lock_op() and
+ * f2fs_unlock_op() only if ro is not set RDONLY_NODE.
* In the case of RDONLY_NODE, we don't need to care about mutex.
*/
int get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode)
{
- struct f2fs_sb_info *sbi = F2FS_SB(dn->inode->i_sb);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
struct page *npage[4];
struct page *parent;
int offset[4];
@@ -405,13 +487,16 @@
int level, i;
int err = 0;
- level = get_node_path(index, offset, noffset);
+ level = get_node_path(F2FS_I(dn->inode), index, offset, noffset);
nids[0] = dn->inode->i_ino;
- npage[0] = get_node_page(sbi, nids[0]);
- if (IS_ERR(npage[0]))
- return PTR_ERR(npage[0]);
+ npage[0] = dn->inode_page;
+ if (!npage[0]) {
+ npage[0] = get_node_page(sbi, nids[0]);
+ if (IS_ERR(npage[0]))
+ return PTR_ERR(npage[0]);
+ }
parent = npage[0];
if (level != 0)
nids[1] = get_nid(parent, offset[0], true);
@@ -430,7 +515,7 @@
}
dn->nid = nids[i];
- npage[i] = new_node_page(dn, noffset[i]);
+ npage[i] = new_node_page(dn, noffset[i], NULL);
if (IS_ERR(npage[i])) {
alloc_nid_failed(sbi, nids[i]);
err = PTR_ERR(npage[i]);
@@ -486,20 +571,25 @@
static void truncate_node(struct dnode_of_data *dn)
{
- struct f2fs_sb_info *sbi = F2FS_SB(dn->inode->i_sb);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
struct node_info ni;
get_node_info(sbi, dn->nid, &ni);
if (dn->inode->i_blocks == 0) {
- BUG_ON(ni.blk_addr != NULL_ADDR);
+ if (ni.blk_addr != NULL_ADDR) {
+ f2fs_msg(sbi->sb, KERN_ERR,
+ "empty node still has block address %u ",
+ ni.blk_addr);
+ f2fs_handle_error(sbi);
+ }
goto invalidate;
}
- BUG_ON(ni.blk_addr == NULL_ADDR);
+ f2fs_bug_on(sbi, ni.blk_addr == NULL_ADDR);
/* Deallocate node address */
invalidate_blocks(sbi, ni.blk_addr);
- dec_valid_node_count(sbi, dn->inode, 1);
- set_node_addr(sbi, &ni, NULL_ADDR);
+ dec_valid_node_count(sbi, dn->inode);
+ set_node_addr(sbi, &ni, NULL_ADDR, false);
if (dn->nid == dn->inode->i_ino) {
remove_orphan_inode(sbi, dn->nid);
@@ -512,20 +602,23 @@
F2FS_SET_SB_DIRT(sbi);
f2fs_put_page(dn->node_page, 1);
+
+ invalidate_mapping_pages(NODE_MAPPING(sbi),
+ dn->node_page->index, dn->node_page->index);
+
dn->node_page = NULL;
trace_f2fs_truncate_node(dn->inode, dn->nid, ni.blk_addr);
}
static int truncate_dnode(struct dnode_of_data *dn)
{
- struct f2fs_sb_info *sbi = F2FS_SB(dn->inode->i_sb);
struct page *page;
if (dn->nid == 0)
return 1;
/* get direct node */
- page = get_node_page(sbi, dn->nid);
+ page = get_node_page(F2FS_I_SB(dn->inode), dn->nid);
if (IS_ERR(page) && PTR_ERR(page) == -ENOENT)
return 1;
else if (IS_ERR(page))
@@ -542,7 +635,6 @@
static int truncate_nodes(struct dnode_of_data *dn, unsigned int nofs,
int ofs, int depth)
{
- struct f2fs_sb_info *sbi = F2FS_SB(dn->inode->i_sb);
struct dnode_of_data rdn = *dn;
struct page *page;
struct f2fs_node *rn;
@@ -556,13 +648,13 @@
trace_f2fs_truncate_nodes_enter(dn->inode, dn->nid, dn->data_blkaddr);
- page = get_node_page(sbi, dn->nid);
+ page = get_node_page(F2FS_I_SB(dn->inode), dn->nid);
if (IS_ERR(page)) {
trace_f2fs_truncate_nodes_exit(dn->inode, PTR_ERR(page));
return PTR_ERR(page);
}
- rn = (struct f2fs_node *)page_address(page);
+ rn = F2FS_NODE(page);
if (depth < 3) {
for (i = ofs; i < NIDS_PER_BLOCK; i++, freed++) {
child_nid = le32_to_cpu(rn->in.nid[i]);
@@ -614,7 +706,6 @@
static int truncate_partial_nodes(struct dnode_of_data *dn,
struct f2fs_inode *ri, int *offset, int depth)
{
- struct f2fs_sb_info *sbi = F2FS_SB(dn->inode->i_sb);
struct page *pages[2];
nid_t nid[3];
nid_t child_nid;
@@ -627,19 +718,19 @@
return 0;
/* get indirect nodes in the path */
- for (i = 0; i < depth - 1; i++) {
- /* refernece count'll be increased */
- pages[i] = get_node_page(sbi, nid[i]);
+ for (i = 0; i < idx + 1; i++) {
+ /* reference count'll be increased */
+ pages[i] = get_node_page(F2FS_I_SB(dn->inode), nid[i]);
if (IS_ERR(pages[i])) {
- depth = i + 1;
err = PTR_ERR(pages[i]);
+ idx = i - 1;
goto fail;
}
nid[i + 1] = get_nid(pages[i], offset[i + 1], false);
}
/* free direct nodes linked to a partial indirect node */
- for (i = offset[depth - 1]; i < NIDS_PER_BLOCK; i++) {
+ for (i = offset[idx + 1]; i < NIDS_PER_BLOCK; i++) {
child_nid = get_nid(pages[idx], i, false);
if (!child_nid)
continue;
@@ -650,7 +741,7 @@
set_nid(pages[idx], i, 0, false);
}
- if (offset[depth - 1] == 0) {
+ if (offset[idx + 1] == 0) {
dn->node_page = pages[idx];
dn->nid = nid[idx];
truncate_node(dn);
@@ -658,9 +749,10 @@
f2fs_put_page(pages[idx], 1);
}
offset[idx]++;
- offset[depth - 1] = 0;
+ offset[idx + 1] = 0;
+ idx--;
fail:
- for (i = depth - 3; i >= 0; i--)
+ for (i = idx; i >= 0; i--)
f2fs_put_page(pages[i], 1);
trace_f2fs_truncate_partial_nodes(dn->inode, nid, depth, err);
@@ -673,18 +765,17 @@
*/
int truncate_inode_blocks(struct inode *inode, pgoff_t from)
{
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
- struct address_space *node_mapping = sbi->node_inode->i_mapping;
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
int err = 0, cont = 1;
int level, offset[4], noffset[4];
unsigned int nofs = 0;
- struct f2fs_node *rn;
+ struct f2fs_inode *ri;
struct dnode_of_data dn;
struct page *page;
trace_f2fs_truncate_inode_blocks_enter(inode, from);
- level = get_node_path(from, offset, noffset);
+ level = get_node_path(F2FS_I(inode), from, offset, noffset);
restart:
page = get_node_page(sbi, inode->i_ino);
if (IS_ERR(page)) {
@@ -695,7 +786,7 @@
set_new_dnode(&dn, inode, page, NULL, 0);
unlock_page(page);
- rn = page_address(page);
+ ri = F2FS_INODE(page);
switch (level) {
case 0:
case 1:
@@ -705,7 +796,7 @@
nofs = noffset[1];
if (!offset[level - 1])
goto skip_partial;
- err = truncate_partial_nodes(&dn, &rn->i, offset, level);
+ err = truncate_partial_nodes(&dn, ri, offset, level);
if (err < 0 && err != -ENOENT)
goto fail;
nofs += 1 + NIDS_PER_BLOCK;
@@ -714,7 +805,7 @@
nofs = 5 + 2 * NIDS_PER_BLOCK;
if (!offset[level - 1])
goto skip_partial;
- err = truncate_partial_nodes(&dn, &rn->i, offset, level);
+ err = truncate_partial_nodes(&dn, ri, offset, level);
if (err < 0 && err != -ENOENT)
goto fail;
break;
@@ -724,7 +815,7 @@
skip_partial:
while (cont) {
- dn.nid = le32_to_cpu(rn->i.i_nid[offset[0] - NODE_DIR1_BLOCK]);
+ dn.nid = le32_to_cpu(ri->i_nid[offset[0] - NODE_DIR1_BLOCK]);
switch (offset[0]) {
case NODE_DIR1_BLOCK:
case NODE_DIR2_BLOCK:
@@ -747,14 +838,14 @@
if (err < 0 && err != -ENOENT)
goto fail;
if (offset[1] == 0 &&
- rn->i.i_nid[offset[0] - NODE_DIR1_BLOCK]) {
+ ri->i_nid[offset[0] - NODE_DIR1_BLOCK]) {
lock_page(page);
- if (page->mapping != node_mapping) {
+ if (unlikely(page->mapping != NODE_MAPPING(sbi))) {
f2fs_put_page(page, 1);
goto restart;
}
- wait_on_page_writeback(page);
- rn->i.i_nid[offset[0] - NODE_DIR1_BLOCK] = 0;
+ f2fs_wait_on_page_writeback(page, NODE);
+ ri->i_nid[offset[0] - NODE_DIR1_BLOCK] = 0;
set_page_dirty(page);
unlock_page(page);
}
@@ -768,91 +859,119 @@
return err > 0 ? 0 : err;
}
-/*
- * Caller should grab and release a mutex by calling mutex_lock_op() and
- * mutex_unlock_op().
- */
-int remove_inode_page(struct inode *inode)
+int truncate_xattr_node(struct inode *inode, struct page *page)
{
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
- struct page *page;
- nid_t ino = inode->i_ino;
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ nid_t nid = F2FS_I(inode)->i_xattr_nid;
struct dnode_of_data dn;
+ struct page *npage;
- page = get_node_page(sbi, ino);
- if (IS_ERR(page))
- return PTR_ERR(page);
+ if (!nid)
+ return 0;
- if (F2FS_I(inode)->i_xattr_nid) {
- nid_t nid = F2FS_I(inode)->i_xattr_nid;
- struct page *npage = get_node_page(sbi, nid);
+ npage = get_node_page(sbi, nid);
+ if (IS_ERR(npage))
+ return PTR_ERR(npage);
- if (IS_ERR(npage))
- return PTR_ERR(npage);
+ F2FS_I(inode)->i_xattr_nid = 0;
- F2FS_I(inode)->i_xattr_nid = 0;
- set_new_dnode(&dn, inode, page, npage, nid);
- dn.inode_page_locked = 1;
- truncate_node(&dn);
- }
+ /* need to do checkpoint during fsync */
+ F2FS_I(inode)->xattr_ver = cur_cp_version(F2FS_CKPT(sbi));
- /* 0 is possible, after f2fs_new_inode() is failed */
- BUG_ON(inode->i_blocks != 0 && inode->i_blocks != 1);
- set_new_dnode(&dn, inode, page, page, ino);
+ set_new_dnode(&dn, inode, page, npage, nid);
+
+ if (page)
+ dn.inode_page_locked = true;
truncate_node(&dn);
return 0;
}
-int new_inode_page(struct inode *inode, const struct qstr *name)
+/*
+ * Caller should grab and release a rwsem by calling f2fs_lock_op() and
+ * f2fs_unlock_op().
+ */
+void remove_inode_page(struct inode *inode)
{
- struct page *page;
+ struct dnode_of_data dn;
+
+ set_new_dnode(&dn, inode, NULL, NULL, inode->i_ino);
+ if (get_dnode_of_data(&dn, 0, LOOKUP_NODE))
+ return;
+
+ if (truncate_xattr_node(inode, dn.inode_page)) {
+ f2fs_put_dnode(&dn);
+ return;
+ }
+ /* remove potential inline_data blocks */
+ if (S_ISREG(inode->i_mode) || S_ISDIR(inode->i_mode) ||
+ S_ISLNK(inode->i_mode))
+ truncate_data_blocks_range(&dn, 1);
+
+ /* 0 is possible, after f2fs_new_inode() has failed */
+ if (inode->i_blocks != 0 && inode->i_blocks != 1) {
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ f2fs_msg(sbi->sb, KERN_ERR, "inode %lu still has %llu blocks",
+ inode->i_ino, (unsigned long long)inode->i_blocks);
+ f2fs_handle_error(sbi);
+ }
+
+ /* will put inode & node pages */
+ truncate_node(&dn);
+}
+
+struct page *new_inode_page(struct inode *inode)
+{
struct dnode_of_data dn;
/* allocate inode page for new inode */
set_new_dnode(&dn, inode, NULL, NULL, inode->i_ino);
- page = new_node_page(&dn, 0);
- init_dent_inode(name, page);
- if (IS_ERR(page))
- return PTR_ERR(page);
- f2fs_put_page(page, 1);
- return 0;
+
+ /* caller should f2fs_put_page(page, 1); */
+ return new_node_page(&dn, 0, NULL);
}
-struct page *new_node_page(struct dnode_of_data *dn, unsigned int ofs)
+struct page *new_node_page(struct dnode_of_data *dn,
+ unsigned int ofs, struct page *ipage)
{
- struct f2fs_sb_info *sbi = F2FS_SB(dn->inode->i_sb);
- struct address_space *mapping = sbi->node_inode->i_mapping;
+ struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
struct node_info old_ni, new_ni;
struct page *page;
int err;
- if (is_inode_flag_set(F2FS_I(dn->inode), FI_NO_ALLOC))
+ if (unlikely(is_inode_flag_set(F2FS_I(dn->inode), FI_NO_ALLOC)))
return ERR_PTR(-EPERM);
- page = grab_cache_page(mapping, dn->nid);
+ page = grab_cache_page(NODE_MAPPING(sbi), dn->nid);
if (!page)
return ERR_PTR(-ENOMEM);
- get_node_info(sbi, dn->nid, &old_ni);
-
- SetPageUptodate(page);
- fill_node_footer(page, dn->nid, dn->inode->i_ino, ofs, true);
-
- /* Reinitialize old_ni with new node page */
- BUG_ON(old_ni.blk_addr != NULL_ADDR);
- new_ni = old_ni;
- new_ni.ino = dn->inode->i_ino;
-
- if (!inc_valid_node_count(sbi, dn->inode, 1)) {
+ if (unlikely(!inc_valid_node_count(sbi, dn->inode))) {
err = -ENOSPC;
goto fail;
}
- set_node_addr(sbi, &new_ni, NEW_ADDR);
+
+ get_node_info(sbi, dn->nid, &old_ni);
+
+ /* Reinitialize old_ni with new node page */
+ f2fs_bug_on(sbi, old_ni.blk_addr != NULL_ADDR);
+ new_ni = old_ni;
+ new_ni.ino = dn->inode->i_ino;
+ set_node_addr(sbi, &new_ni, NEW_ADDR, false);
+
+ f2fs_wait_on_page_writeback(page, NODE);
+ fill_node_footer(page, dn->nid, dn->inode->i_ino, ofs, true);
set_cold_node(dn->inode, page);
+ SetPageUptodate(page);
+ set_page_dirty(page);
+
+ if (f2fs_has_xattr_block(ofs))
+ F2FS_I(dn->inode)->i_xattr_nid = dn->nid;
dn->node_page = page;
- sync_inode_page(dn);
- set_page_dirty(page);
+ if (ipage)
+ update_inode(dn->inode, ipage);
+ else
+ sync_inode_page(dn);
if (ofs == 0)
inc_valid_inode_count(sbi);
@@ -870,14 +989,14 @@
* LOCKED_PAGE: f2fs_put_page(page, 1)
* error: nothing
*/
-static int read_node_page(struct page *page, int type)
+static int read_node_page(struct page *page, int rw)
{
- struct f2fs_sb_info *sbi = F2FS_SB(page->mapping->host->i_sb);
+ struct f2fs_sb_info *sbi = F2FS_P_SB(page);
struct node_info ni;
get_node_info(sbi, page->index, &ni);
- if (ni.blk_addr == NULL_ADDR) {
+ if (unlikely(ni.blk_addr == NULL_ADDR)) {
f2fs_put_page(page, 1);
return -ENOENT;
}
@@ -885,7 +1004,7 @@
if (PageUptodate(page))
return LOCKED_PAGE;
- return f2fs_readpage(sbi, page, ni.blk_addr, type);
+ return f2fs_submit_page_bio(sbi, page, ni.blk_addr, rw);
}
/*
@@ -893,18 +1012,17 @@
*/
void ra_node_page(struct f2fs_sb_info *sbi, nid_t nid)
{
- struct address_space *mapping = sbi->node_inode->i_mapping;
struct page *apage;
int err;
- apage = find_get_page(mapping, nid);
+ apage = find_get_page(NODE_MAPPING(sbi), nid);
if (apage && PageUptodate(apage)) {
f2fs_put_page(apage, 0);
return;
}
f2fs_put_page(apage, 0);
- apage = grab_cache_page(mapping, nid);
+ apage = grab_cache_page(NODE_MAPPING(sbi), nid);
if (!apage)
return;
@@ -913,16 +1031,14 @@
f2fs_put_page(apage, 0);
else if (err == LOCKED_PAGE)
f2fs_put_page(apage, 1);
- return;
}
struct page *get_node_page(struct f2fs_sb_info *sbi, pgoff_t nid)
{
- struct address_space *mapping = sbi->node_inode->i_mapping;
struct page *page;
int err;
repeat:
- page = grab_cache_page(mapping, nid);
+ page = grab_cache_page(NODE_MAPPING(sbi), nid);
if (!page)
return ERR_PTR(-ENOMEM);
@@ -933,16 +1049,15 @@
goto got_it;
lock_page(page);
- if (!PageUptodate(page)) {
+ if (unlikely(!PageUptodate(page) || nid != nid_of_node(page))) {
f2fs_put_page(page, 1);
return ERR_PTR(-EIO);
}
- if (page->mapping != mapping) {
+ if (unlikely(page->mapping != NODE_MAPPING(sbi))) {
f2fs_put_page(page, 1);
goto repeat;
}
got_it:
- BUG_ON(nid != nid_of_node(page));
mark_page_accessed(page);
return page;
}
@@ -953,8 +1068,7 @@
*/
struct page *get_node_page_ra(struct page *parent, int start)
{
- struct f2fs_sb_info *sbi = F2FS_SB(parent->mapping->host->i_sb);
- struct address_space *mapping = sbi->node_inode->i_mapping;
+ struct f2fs_sb_info *sbi = F2FS_P_SB(parent);
struct blk_plug plug;
struct page *page;
int err, i, end;
@@ -965,7 +1079,7 @@
if (!nid)
return ERR_PTR(-ENOENT);
repeat:
- page = grab_cache_page(mapping, nid);
+ page = grab_cache_page(NODE_MAPPING(sbi), nid);
if (!page)
return ERR_PTR(-ENOMEM);
@@ -990,12 +1104,12 @@
blk_finish_plug(&plug);
lock_page(page);
- if (page->mapping != mapping) {
+ if (unlikely(page->mapping != NODE_MAPPING(sbi))) {
f2fs_put_page(page, 1);
goto repeat;
}
page_hit:
- if (!PageUptodate(page)) {
+ if (unlikely(!PageUptodate(page))) {
f2fs_put_page(page, 1);
return ERR_PTR(-EIO);
}
@@ -1021,7 +1135,6 @@
int sync_node_pages(struct f2fs_sb_info *sbi, nid_t ino,
struct writeback_control *wbc)
{
- struct address_space *mapping = sbi->node_inode->i_mapping;
pgoff_t index, end;
struct pagevec pvec;
int step = ino ? 2 : 0;
@@ -1035,7 +1148,7 @@
while (index <= end) {
int i, nr_pages;
- nr_pages = pagevec_lookup_tag(&pvec, mapping, &index,
+ nr_pages = pagevec_lookup_tag(&pvec, NODE_MAPPING(sbi), &index,
PAGECACHE_TAG_DIRTY,
min(end - index, (pgoff_t)PAGEVEC_SIZE-1) + 1);
if (nr_pages == 0)
@@ -1068,7 +1181,7 @@
else if (!trylock_page(page))
continue;
- if (unlikely(page->mapping != mapping)) {
+ if (unlikely(page->mapping != NODE_MAPPING(sbi))) {
continue_unlock:
unlock_page(page);
continue;
@@ -1086,17 +1199,24 @@
/* called by fsync() */
if (ino && IS_DNODE(page)) {
- int mark = !is_checkpointed_node(sbi, ino);
set_fsync_mark(page, 1);
- if (IS_INODE(page))
- set_dentry_mark(page, mark);
+ if (IS_INODE(page)) {
+ if (!is_checkpointed_node(sbi, ino) &&
+ !has_fsynced_inode(sbi, ino))
+ set_dentry_mark(page, 1);
+ else
+ set_dentry_mark(page, 0);
+ }
nwritten++;
} else {
set_fsync_mark(page, 0);
set_dentry_mark(page, 0);
}
- mapping->a_ops->writepage(page, wbc);
- wrote++;
+
+ if (NODE_MAPPING(sbi)->a_ops->writepage(page, wbc))
+ unlock_page(page);
+ else
+ wrote++;
if (--wbc->nr_to_write == 0)
break;
@@ -1116,89 +1236,137 @@
}
if (wrote)
- f2fs_submit_bio(sbi, NODE, wbc->sync_mode == WB_SYNC_ALL);
-
+ f2fs_submit_merged_bio(sbi, NODE, WRITE);
return nwritten;
}
+int wait_on_node_pages_writeback(struct f2fs_sb_info *sbi, nid_t ino)
+{
+ pgoff_t index = 0, end = LONG_MAX;
+ struct pagevec pvec;
+ int ret2 = 0, ret = 0;
+
+ pagevec_init(&pvec, 0);
+
+ while (index <= end) {
+ int i, nr_pages;
+ nr_pages = pagevec_lookup_tag(&pvec, NODE_MAPPING(sbi), &index,
+ PAGECACHE_TAG_WRITEBACK,
+ min(end - index, (pgoff_t)PAGEVEC_SIZE-1) + 1);
+ if (nr_pages == 0)
+ break;
+
+ for (i = 0; i < nr_pages; i++) {
+ struct page *page = pvec.pages[i];
+
+ /* until radix tree lookup accepts end_index */
+ if (unlikely(page->index > end))
+ continue;
+
+ if (ino && ino_of_node(page) == ino) {
+ f2fs_wait_on_page_writeback(page, NODE);
+ if (TestClearPageError(page))
+ ret = -EIO;
+ }
+ }
+ pagevec_release(&pvec);
+ cond_resched();
+ }
+
+ if (unlikely(test_and_clear_bit(AS_ENOSPC, &NODE_MAPPING(sbi)->flags)))
+ ret2 = -ENOSPC;
+ if (unlikely(test_and_clear_bit(AS_EIO, &NODE_MAPPING(sbi)->flags)))
+ ret2 = -EIO;
+ if (!ret)
+ ret = ret2;
+ return ret;
+}
+
static int f2fs_write_node_page(struct page *page,
struct writeback_control *wbc)
{
- struct f2fs_sb_info *sbi = F2FS_SB(page->mapping->host->i_sb);
+ struct f2fs_sb_info *sbi = F2FS_P_SB(page);
nid_t nid;
block_t new_addr;
struct node_info ni;
+ struct f2fs_io_info fio = {
+ .type = NODE,
+ .rw = (wbc->sync_mode == WB_SYNC_ALL) ? WRITE_SYNC : WRITE,
+ };
- wait_on_page_writeback(page);
+ trace_f2fs_writepage(page, NODE);
+
+ if (unlikely(sbi->por_doing))
+ goto redirty_out;
+ if (unlikely(f2fs_cp_error(sbi)))
+ goto redirty_out;
+
+ f2fs_wait_on_page_writeback(page, NODE);
/* get old block addr of this node page */
nid = nid_of_node(page);
- BUG_ON(page->index != nid);
+ f2fs_bug_on(sbi, page->index != nid);
get_node_info(sbi, nid, &ni);
/* This page is already truncated */
- if (ni.blk_addr == NULL_ADDR) {
+ if (unlikely(ni.blk_addr == NULL_ADDR)) {
dec_page_count(sbi, F2FS_DIRTY_NODES);
unlock_page(page);
return 0;
}
- if (wbc->for_reclaim) {
- dec_page_count(sbi, F2FS_DIRTY_NODES);
- wbc->pages_skipped++;
- set_page_dirty(page);
- return AOP_WRITEPAGE_ACTIVATE;
- }
+ if (wbc->for_reclaim)
+ goto redirty_out;
- mutex_lock(&sbi->node_write);
+ down_read(&sbi->node_write);
set_page_writeback(page);
- write_node_page(sbi, page, nid, ni.blk_addr, &new_addr);
- set_node_addr(sbi, &ni, new_addr);
+ write_node_page(sbi, page, &fio, nid, ni.blk_addr, &new_addr);
+ set_node_addr(sbi, &ni, new_addr, is_fsync_dnode(page));
dec_page_count(sbi, F2FS_DIRTY_NODES);
- mutex_unlock(&sbi->node_write);
+ up_read(&sbi->node_write);
unlock_page(page);
return 0;
+
+redirty_out:
+ redirty_page_for_writepage(wbc, page);
+ return AOP_WRITEPAGE_ACTIVATE;
}
-/*
- * It is very important to gather dirty pages and write at once, so that we can
- * submit a big bio without interfering other data writes.
- * Be default, 512 pages (2MB), a segment size, is quite reasonable.
- */
-#define COLLECT_DIRTY_NODES 512
static int f2fs_write_node_pages(struct address_space *mapping,
struct writeback_control *wbc)
{
- struct f2fs_sb_info *sbi = F2FS_SB(mapping->host->i_sb);
- long nr_to_write = wbc->nr_to_write;
+ struct f2fs_sb_info *sbi = F2FS_M_SB(mapping);
+ long diff;
- /* First check balancing cached NAT entries */
- if (try_to_free_nats(sbi, NAT_ENTRY_PER_BLOCK)) {
- f2fs_sync_fs(sbi->sb, true);
- return 0;
- }
+ trace_f2fs_writepages(mapping->host, wbc, NODE);
+
+ /* balancing f2fs's metadata in background */
+ f2fs_balance_fs_bg(sbi);
/* collect a number of dirty node pages and write together */
- if (get_pages(sbi, F2FS_DIRTY_NODES) < COLLECT_DIRTY_NODES)
- return 0;
+ if (get_pages(sbi, F2FS_DIRTY_NODES) < nr_pages_to_skip(sbi, NODE))
+ goto skip_write;
- /* if mounting is failed, skip writing node pages */
- wbc->nr_to_write = max_hw_blocks(sbi);
+ diff = nr_pages_to_write(sbi, NODE, wbc);
+ wbc->sync_mode = WB_SYNC_NONE;
sync_node_pages(sbi, 0, wbc);
- wbc->nr_to_write = nr_to_write - (max_hw_blocks(sbi) - wbc->nr_to_write);
+ wbc->nr_to_write = max((long)0, wbc->nr_to_write - diff);
+ return 0;
+
+skip_write:
+ wbc->pages_skipped += get_pages(sbi, F2FS_DIRTY_NODES);
return 0;
}
static int f2fs_set_node_page_dirty(struct page *page)
{
- struct address_space *mapping = page->mapping;
- struct f2fs_sb_info *sbi = F2FS_SB(mapping->host->i_sb);
+ trace_f2fs_set_page_dirty(page, NODE);
SetPageUptodate(page);
if (!PageDirty(page)) {
__set_page_dirty_nobuffers(page);
- inc_page_count(sbi, F2FS_DIRTY_NODES);
+ inc_page_count(F2FS_P_SB(page), F2FS_DIRTY_NODES);
SetPagePrivate(page);
return 1;
}
@@ -1209,9 +1377,8 @@
unsigned int length)
{
struct inode *inode = page->mapping->host;
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
if (PageDirty(page))
- dec_page_count(sbi, F2FS_DIRTY_NODES);
+ dec_page_count(F2FS_I_SB(inode), F2FS_DIRTY_NODES);
ClearPagePrivate(page);
}
@@ -1232,59 +1399,52 @@
.releasepage = f2fs_release_node_page,
};
-static struct free_nid *__lookup_free_nid_list(nid_t n, struct list_head *head)
+static struct free_nid *__lookup_free_nid_list(struct f2fs_nm_info *nm_i,
+ nid_t n)
{
- struct list_head *this;
- struct free_nid *i;
- list_for_each(this, head) {
- i = list_entry(this, struct free_nid, list);
- if (i->nid == n)
- return i;
- }
- return NULL;
+ return radix_tree_lookup(&nm_i->free_nid_root, n);
}
-static void __del_from_free_nid_list(struct free_nid *i)
+static void __del_from_free_nid_list(struct f2fs_nm_info *nm_i,
+ struct free_nid *i)
{
list_del(&i->list);
- kmem_cache_free(free_nid_slab, i);
+ radix_tree_delete(&nm_i->free_nid_root, i->nid);
}
-static int add_free_nid(struct f2fs_nm_info *nm_i, nid_t nid, bool build)
+static int add_free_nid(struct f2fs_sb_info *sbi, nid_t nid, bool build)
{
+ struct f2fs_nm_info *nm_i = NM_I(sbi);
struct free_nid *i;
struct nat_entry *ne;
bool allocated = false;
- if (nm_i->fcnt > 2 * MAX_FREE_NIDS)
+ if (!available_free_memory(sbi, FREE_NIDS))
return -1;
/* 0 nid should not be used */
- if (nid == 0)
+ if (unlikely(nid == 0))
return 0;
- if (!build)
- goto retry;
-
- /* do not add allocated nids */
- read_lock(&nm_i->nat_tree_lock);
- ne = __lookup_nat_cache(nm_i, nid);
- if (ne && nat_get_blkaddr(ne) != NULL_ADDR)
- allocated = true;
- read_unlock(&nm_i->nat_tree_lock);
- if (allocated)
- return 0;
-retry:
- i = kmem_cache_alloc(free_nid_slab, GFP_NOFS);
- if (!i) {
- cond_resched();
- goto retry;
+ if (build) {
+ /* do not add allocated nids */
+ read_lock(&nm_i->nat_tree_lock);
+ ne = __lookup_nat_cache(nm_i, nid);
+ if (ne &&
+ (!get_nat_flag(ne, IS_CHECKPOINTED) ||
+ nat_get_blkaddr(ne) != NULL_ADDR))
+ allocated = true;
+ read_unlock(&nm_i->nat_tree_lock);
+ if (allocated)
+ return 0;
}
+
+ i = f2fs_kmem_cache_alloc(free_nid_slab, GFP_NOFS);
i->nid = nid;
i->state = NID_NEW;
spin_lock(&nm_i->free_nid_list_lock);
- if (__lookup_free_nid_list(nid, &nm_i->free_nid_list)) {
+ if (radix_tree_insert(&nm_i->free_nid_root, i->nid, i)) {
spin_unlock(&nm_i->free_nid_list_lock);
kmem_cache_free(free_nid_slab, i);
return 0;
@@ -1298,18 +1458,25 @@
static void remove_free_nid(struct f2fs_nm_info *nm_i, nid_t nid)
{
struct free_nid *i;
+ bool need_free = false;
+
spin_lock(&nm_i->free_nid_list_lock);
- i = __lookup_free_nid_list(nid, &nm_i->free_nid_list);
+ i = __lookup_free_nid_list(nm_i, nid);
if (i && i->state == NID_NEW) {
- __del_from_free_nid_list(i);
+ __del_from_free_nid_list(nm_i, i);
nm_i->fcnt--;
+ need_free = true;
}
spin_unlock(&nm_i->free_nid_list_lock);
+
+ if (need_free)
+ kmem_cache_free(free_nid_slab, i);
}
-static void scan_nat_page(struct f2fs_nm_info *nm_i,
+static void scan_nat_page(struct f2fs_sb_info *sbi,
struct page *nat_page, nid_t start_nid)
{
+ struct f2fs_nm_info *nm_i = NM_I(sbi);
struct f2fs_nat_block *nat_blk = page_address(nat_page);
block_t blk_addr;
int i;
@@ -1318,13 +1485,13 @@
for (; i < NAT_ENTRY_PER_BLOCK; i++, start_nid++) {
- if (start_nid >= nm_i->max_nid)
+ if (unlikely(start_nid >= nm_i->max_nid))
break;
blk_addr = le32_to_cpu(nat_blk->entries[i].block_addr);
- BUG_ON(blk_addr == NEW_ADDR);
+ f2fs_bug_on(sbi, blk_addr == NEW_ADDR);
if (blk_addr == NULL_ADDR) {
- if (add_free_nid(nm_i, start_nid, true) < 0)
+ if (add_free_nid(sbi, start_nid, true) < 0)
break;
}
}
@@ -1343,16 +1510,16 @@
return;
/* readahead nat pages to be scanned */
- ra_nat_pages(sbi, nid);
+ ra_meta_pages(sbi, NAT_BLOCK_OFFSET(nid), FREE_NID_PAGES, META_NAT);
while (1) {
struct page *page = get_current_nat_page(sbi, nid);
- scan_nat_page(nm_i, page, nid);
+ scan_nat_page(sbi, page, nid);
f2fs_put_page(page, 1);
nid += (NAT_ENTRY_PER_BLOCK - (nid % NAT_ENTRY_PER_BLOCK));
- if (nid >= nm_i->max_nid)
+ if (unlikely(nid >= nm_i->max_nid))
nid = 0;
if (i++ == FREE_NID_PAGES)
@@ -1368,7 +1535,7 @@
block_t addr = le32_to_cpu(nat_in_journal(sum, i).block_addr);
nid = le32_to_cpu(nid_in_journal(sum, i));
if (addr == NULL_ADDR)
- add_free_nid(nm_i, nid, true);
+ add_free_nid(sbi, nid, true);
else
remove_free_nid(nm_i, nid);
}
@@ -1384,23 +1551,20 @@
{
struct f2fs_nm_info *nm_i = NM_I(sbi);
struct free_nid *i = NULL;
- struct list_head *this;
retry:
- if (sbi->total_valid_node_count + 1 >= nm_i->max_nid)
+ if (unlikely(sbi->total_valid_node_count + 1 > nm_i->available_nids))
return false;
spin_lock(&nm_i->free_nid_list_lock);
/* We should not use stale free nids created by build_free_nids */
- if (nm_i->fcnt && !sbi->on_build_free_nids) {
- BUG_ON(list_empty(&nm_i->free_nid_list));
- list_for_each(this, &nm_i->free_nid_list) {
- i = list_entry(this, struct free_nid, list);
+ if (nm_i->fcnt && !on_build_free_nids(nm_i)) {
+ f2fs_bug_on(sbi, list_empty(&nm_i->free_nid_list));
+ list_for_each_entry(i, &nm_i->free_nid_list, list)
if (i->state == NID_NEW)
break;
- }
- BUG_ON(i->state != NID_NEW);
+ f2fs_bug_on(sbi, i->state != NID_NEW);
*nid = i->nid;
i->state = NID_ALLOC;
nm_i->fcnt--;
@@ -1411,9 +1575,7 @@
/* Let's scan nat pages and its caches to get free nids */
mutex_lock(&nm_i->build_lock);
- sbi->on_build_free_nids = 1;
build_free_nids(sbi);
- sbi->on_build_free_nids = 0;
mutex_unlock(&nm_i->build_lock);
goto retry;
}
@@ -1426,11 +1588,16 @@
struct f2fs_nm_info *nm_i = NM_I(sbi);
struct free_nid *i;
+ if (!nid)
+ return;
+
spin_lock(&nm_i->free_nid_list_lock);
- i = __lookup_free_nid_list(nid, &nm_i->free_nid_list);
- BUG_ON(!i || i->state != NID_ALLOC);
- __del_from_free_nid_list(i);
+ i = __lookup_free_nid_list(nm_i, nid);
+ f2fs_bug_on(sbi, !i || i->state != NID_ALLOC);
+ __del_from_free_nid_list(nm_i, i);
spin_unlock(&nm_i->free_nid_list_lock);
+
+ kmem_cache_free(free_nid_slab, i);
}
/*
@@ -1440,64 +1607,159 @@
{
struct f2fs_nm_info *nm_i = NM_I(sbi);
struct free_nid *i;
+ bool need_free = false;
spin_lock(&nm_i->free_nid_list_lock);
- i = __lookup_free_nid_list(nid, &nm_i->free_nid_list);
- BUG_ON(!i || i->state != NID_ALLOC);
- if (nm_i->fcnt > 2 * MAX_FREE_NIDS) {
- __del_from_free_nid_list(i);
+ i = __lookup_free_nid_list(nm_i, nid);
+ f2fs_bug_on(sbi, !i || i->state != NID_ALLOC);
+ if (!available_free_memory(sbi, FREE_NIDS)) {
+ __del_from_free_nid_list(nm_i, i);
+ need_free = true;
} else {
i->state = NID_NEW;
nm_i->fcnt++;
}
spin_unlock(&nm_i->free_nid_list_lock);
+
+ if (need_free)
+ kmem_cache_free(free_nid_slab, i);
}
-void recover_node_page(struct f2fs_sb_info *sbi, struct page *page,
- struct f2fs_summary *sum, struct node_info *ni,
- block_t new_blkaddr)
+void recover_inline_xattr(struct inode *inode, struct page *page)
{
- rewrite_node_page(sbi, page, sum, ni->blk_addr, new_blkaddr);
- set_node_addr(sbi, ni, new_blkaddr);
- clear_node_page_dirty(page);
+ void *src_addr, *dst_addr;
+ size_t inline_size;
+ struct page *ipage;
+ struct f2fs_inode *ri;
+
+ ipage = get_node_page(F2FS_I_SB(inode), inode->i_ino);
+ f2fs_bug_on(F2FS_I_SB(inode), IS_ERR(ipage));
+
+ ri = F2FS_INODE(page);
+ if (!(ri->i_inline & F2FS_INLINE_XATTR)) {
+ clear_inode_flag(F2FS_I(inode), FI_INLINE_XATTR);
+ goto update_inode;
+ }
+
+ dst_addr = inline_xattr_addr(ipage);
+ src_addr = inline_xattr_addr(page);
+ inline_size = inline_xattr_size(inode);
+
+ f2fs_wait_on_page_writeback(ipage, NODE);
+ memcpy(dst_addr, src_addr, inline_size);
+update_inode:
+ update_inode(inode, ipage);
+ f2fs_put_page(ipage, 1);
+}
+
+void recover_xattr_data(struct inode *inode, struct page *page, block_t blkaddr)
+{
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ nid_t prev_xnid = F2FS_I(inode)->i_xattr_nid;
+ nid_t new_xnid = nid_of_node(page);
+ struct node_info ni;
+
+ /* 1: invalidate the previous xattr nid */
+ if (!prev_xnid)
+ goto recover_xnid;
+
+ /* Deallocate node address */
+ get_node_info(sbi, prev_xnid, &ni);
+ f2fs_bug_on(sbi, ni.blk_addr == NULL_ADDR);
+ invalidate_blocks(sbi, ni.blk_addr);
+ dec_valid_node_count(sbi, inode);
+ set_node_addr(sbi, &ni, NULL_ADDR, false);
+
+recover_xnid:
+ /* 2: allocate new xattr nid */
+ if (unlikely(!inc_valid_node_count(sbi, inode)))
+ f2fs_bug_on(sbi, 1);
+
+ remove_free_nid(NM_I(sbi), new_xnid);
+ get_node_info(sbi, new_xnid, &ni);
+ ni.ino = inode->i_ino;
+ set_node_addr(sbi, &ni, NEW_ADDR, false);
+ F2FS_I(inode)->i_xattr_nid = new_xnid;
+
+ /* 3: update xattr blkaddr */
+ refresh_sit_entry(sbi, NEW_ADDR, blkaddr);
+ set_node_addr(sbi, &ni, blkaddr, false);
+
+ update_inode_page(inode);
}
int recover_inode_page(struct f2fs_sb_info *sbi, struct page *page)
{
- struct address_space *mapping = sbi->node_inode->i_mapping;
- struct f2fs_node *src, *dst;
+ struct f2fs_inode *src, *dst;
nid_t ino = ino_of_node(page);
struct node_info old_ni, new_ni;
struct page *ipage;
+ int err;
- ipage = grab_cache_page(mapping, ino);
+ get_node_info(sbi, ino, &old_ni);
+
+ if (unlikely(old_ni.blk_addr != NULL_ADDR))
+ return -EINVAL;
+
+ ipage = grab_cache_page(NODE_MAPPING(sbi), ino);
if (!ipage)
return -ENOMEM;
/* Should not use this inode from free nid list */
remove_free_nid(NM_I(sbi), ino);
- get_node_info(sbi, ino, &old_ni);
SetPageUptodate(ipage);
fill_node_footer(ipage, ino, ino, 0, true);
- src = (struct f2fs_node *)page_address(page);
- dst = (struct f2fs_node *)page_address(ipage);
+ src = F2FS_INODE(page);
+ dst = F2FS_INODE(ipage);
- memcpy(dst, src, (unsigned long)&src->i.i_ext - (unsigned long)&src->i);
- dst->i.i_size = 0;
- dst->i.i_blocks = cpu_to_le64(1);
- dst->i.i_links = cpu_to_le32(1);
- dst->i.i_xattr_nid = 0;
+ memcpy(dst, src, (unsigned long)&src->i_ext - (unsigned long)src);
+ dst->i_size = 0;
+ dst->i_blocks = cpu_to_le64(1);
+ dst->i_links = cpu_to_le32(1);
+ dst->i_xattr_nid = 0;
+ dst->i_inline = src->i_inline & F2FS_INLINE_XATTR;
new_ni = old_ni;
new_ni.ino = ino;
- set_node_addr(sbi, &new_ni, NEW_ADDR);
- inc_valid_inode_count(sbi);
-
+ err = set_node_addr(sbi, &new_ni, NEW_ADDR, false);
+ if (!err)
+ if (unlikely(!inc_valid_node_count(sbi, NULL)))
+ err = -ENOSPC;
+ if (!err)
+ inc_valid_inode_count(sbi);
+ set_page_dirty(ipage);
f2fs_put_page(ipage, 1);
- return 0;
+ return err;
+}
+
+/*
+ * ra_sum_pages() merges contiguous pages into one bio and submits it.
+ * These pre-read pages are allocated in bd_inode's mapping tree.
+ */
+static int ra_sum_pages(struct f2fs_sb_info *sbi, struct page **pages,
+ int start, int nrpages)
+{
+ struct inode *inode = sbi->sb->s_bdev->bd_inode;
+ struct address_space *mapping = inode->i_mapping;
+ int i, page_idx = start;
+ struct f2fs_io_info fio = {
+ .type = META,
+ .rw = READ_SYNC | REQ_META | REQ_PRIO
+ };
+
+ for (i = 0; page_idx < start + nrpages; page_idx++, i++) {
+ /* alloc page in bd_inode for reading node summary info */
+ pages[i] = grab_cache_page(mapping, page_idx);
+ if (!pages[i])
+ break;
+ f2fs_submit_page_mbio(sbi, pages[i], page_idx, &fio);
+ }
+
+ f2fs_submit_merged_bio(sbi, META, READ);
+ return i;
}
int restore_node_summary(struct f2fs_sb_info *sbi,
@@ -1505,45 +1767,51 @@
{
struct f2fs_node *rn;
struct f2fs_summary *sum_entry;
- struct page *page;
+ struct inode *inode = sbi->sb->s_bdev->bd_inode;
block_t addr;
- int i, last_offset;
-
- /* alloc temporal page for read node */
- page = alloc_page(GFP_NOFS | __GFP_ZERO);
- if (IS_ERR(page))
- return PTR_ERR(page);
- lock_page(page);
+ int bio_blocks = MAX_BIO_BLOCKS(sbi);
+ struct page *pages[bio_blocks];
+ int i, idx, last_offset, nrpages, err = 0;
/* scan the node segment */
last_offset = sbi->blocks_per_seg;
addr = START_BLOCK(sbi, segno);
sum_entry = &sum->entries[0];
- for (i = 0; i < last_offset; i++, sum_entry++) {
- /*
- * In order to read next node page,
- * we must clear PageUptodate flag.
- */
- ClearPageUptodate(page);
+ for (i = 0; !err && i < last_offset; i += nrpages, addr += nrpages) {
+ nrpages = min(last_offset - i, bio_blocks);
- if (f2fs_readpage(sbi, page, addr, READ_SYNC))
- goto out;
+ /* read ahead node pages */
+ nrpages = ra_sum_pages(sbi, pages, addr, nrpages);
+ if (!nrpages)
+ return -ENOMEM;
- lock_page(page);
- rn = (struct f2fs_node *)page_address(page);
- sum_entry->nid = rn->footer.nid;
- sum_entry->version = 0;
- sum_entry->ofs_in_node = 0;
- addr++;
+ for (idx = 0; idx < nrpages; idx++) {
+ if (err)
+ goto skip;
+
+ lock_page(pages[idx]);
+ if (unlikely(!PageUptodate(pages[idx]))) {
+ err = -EIO;
+ } else {
+ rn = F2FS_NODE(pages[idx]);
+ sum_entry->nid = rn->footer.nid;
+ sum_entry->version = 0;
+ sum_entry->ofs_in_node = 0;
+ sum_entry++;
+ }
+ unlock_page(pages[idx]);
+skip:
+ page_cache_release(pages[idx]);
+ }
+
+ invalidate_mapping_pages(inode->i_mapping, addr,
+ addr + nrpages);
}
- unlock_page(page);
-out:
- __free_pages(page, 0);
- return 0;
+ return err;
}
-static bool flush_nats_in_journal(struct f2fs_sb_info *sbi)
+static void remove_nats_in_journal(struct f2fs_sb_info *sbi)
{
struct f2fs_nm_info *nm_i = NM_I(sbi);
struct curseg_info *curseg = CURSEG_I(sbi, CURSEG_HOT_DATA);
@@ -1551,12 +1819,6 @@
int i;
mutex_lock(&curseg->curseg_mutex);
-
- if (nats_in_cursum(sum) < NAT_JOURNAL_ENTRIES) {
- mutex_unlock(&curseg->curseg_mutex);
- return false;
- }
-
for (i = 0; i < nats_in_cursum(sum); i++) {
struct nat_entry *ne;
struct f2fs_nat_entry raw_ne;
@@ -1566,25 +1828,106 @@
retry:
write_lock(&nm_i->nat_tree_lock);
ne = __lookup_nat_cache(nm_i, nid);
- if (ne) {
- __set_nat_cache_dirty(nm_i, ne);
- write_unlock(&nm_i->nat_tree_lock);
- continue;
- }
+ if (ne)
+ goto found;
+
ne = grab_nat_entry(nm_i, nid);
if (!ne) {
write_unlock(&nm_i->nat_tree_lock);
goto retry;
}
- nat_set_blkaddr(ne, le32_to_cpu(raw_ne.block_addr));
- nat_set_ino(ne, le32_to_cpu(raw_ne.ino));
- nat_set_version(ne, raw_ne.version);
+ node_info_from_raw_nat(&ne->ni, &raw_ne);
+found:
__set_nat_cache_dirty(nm_i, ne);
write_unlock(&nm_i->nat_tree_lock);
}
update_nats_in_cursum(sum, -i);
mutex_unlock(&curseg->curseg_mutex);
- return true;
+}
+
+static void __adjust_nat_entry_set(struct nat_entry_set *nes,
+ struct list_head *head, int max)
+{
+ struct nat_entry_set *cur;
+ nid_t dirty_cnt = 0;
+
+ if (nes->entry_cnt >= max)
+ goto add_out;
+
+ list_for_each_entry(cur, head, set_list) {
+ dirty_cnt += cur->entry_cnt;
+ if (dirty_cnt > max)
+ break;
+ if (cur->entry_cnt >= nes->entry_cnt) {
+ list_add(&nes->set_list, cur->set_list.prev);
+ return;
+ }
+ }
+add_out:
+ list_add_tail(&nes->set_list, head);
+}
+
+static void __flush_nat_entry_set(struct f2fs_sb_info *sbi,
+ struct nat_entry_set *set)
+{
+ struct curseg_info *curseg = CURSEG_I(sbi, CURSEG_HOT_DATA);
+ struct f2fs_summary_block *sum = curseg->sum_blk;
+ nid_t start_nid = set->set * NAT_ENTRY_PER_BLOCK;
+ bool to_journal = true;
+ struct f2fs_nat_block *nat_blk;
+ struct nat_entry *ne, *cur;
+ struct page *page = NULL;
+
+ /*
+ * there are two steps to flush nat entries:
+ * #1, flush nat entries to journal in current hot data summary block.
+ * #2, flush nat entries to nat page.
+ */
+ if (!__has_cursum_space(sum, set->entry_cnt, NAT_JOURNAL))
+ to_journal = false;
+
+ if (to_journal) {
+ mutex_lock(&curseg->curseg_mutex);
+ } else {
+ page = get_next_nat_page(sbi, start_nid);
+ nat_blk = page_address(page);
+ f2fs_bug_on(sbi, !nat_blk);
+ }
+
+ /* flush dirty nats in nat entry set */
+ list_for_each_entry_safe(ne, cur, &set->entry_list, list) {
+ struct f2fs_nat_entry *raw_ne;
+ nid_t nid = nat_get_nid(ne);
+ int offset;
+
+ if (to_journal) {
+ offset = lookup_journal_in_cursum(sum,
+ NAT_JOURNAL, nid, 1);
+ f2fs_bug_on(sbi, offset < 0);
+ raw_ne = &nat_in_journal(sum, offset);
+ nid_in_journal(sum, offset) = cpu_to_le32(nid);
+ } else {
+ raw_ne = &nat_blk->entries[nid - start_nid];
+ }
+ raw_nat_from_node_info(raw_ne, &ne->ni);
+
+ write_lock(&NM_I(sbi)->nat_tree_lock);
+ nat_reset_flag(ne);
+ __clear_nat_cache_dirty(NM_I(sbi), ne);
+ write_unlock(&NM_I(sbi)->nat_tree_lock);
+
+ if (nat_get_blkaddr(ne) == NULL_ADDR)
+ add_free_nid(sbi, nid, false);
+ }
+
+ if (to_journal)
+ mutex_unlock(&curseg->curseg_mutex);
+ else
+ f2fs_put_page(page, 1);
+
+ f2fs_bug_on(sbi, set->entry_cnt);
+ radix_tree_delete(&NM_I(sbi)->nat_set_root, set->set);
+ kmem_cache_free(nat_entry_set_slab, set);
}
/*
@@ -1595,90 +1938,37 @@
struct f2fs_nm_info *nm_i = NM_I(sbi);
struct curseg_info *curseg = CURSEG_I(sbi, CURSEG_HOT_DATA);
struct f2fs_summary_block *sum = curseg->sum_blk;
- struct list_head *cur, *n;
- struct page *page = NULL;
- struct f2fs_nat_block *nat_blk = NULL;
- nid_t start_nid = 0, end_nid = 0;
- bool flushed;
+ struct nat_entry_set *setvec[NATVEC_SIZE];
+ struct nat_entry_set *set, *tmp;
+ unsigned int found;
+ nid_t set_idx = 0;
+ LIST_HEAD(sets);
- flushed = flush_nats_in_journal(sbi);
+ /*
+ * if there is not enough space in the journal to store dirty nat
+ * entries, remove all entries from the journal and merge them
+ * into the nat entry set.
+ */
+ if (!__has_cursum_space(sum, nm_i->dirty_nat_cnt, NAT_JOURNAL))
+ remove_nats_in_journal(sbi);
- if (!flushed)
- mutex_lock(&curseg->curseg_mutex);
+ if (!nm_i->dirty_nat_cnt)
+ return;
- /* 1) flush dirty nat caches */
- list_for_each_safe(cur, n, &nm_i->dirty_nat_entries) {
- struct nat_entry *ne;
- nid_t nid;
- struct f2fs_nat_entry raw_ne;
- int offset = -1;
- block_t new_blkaddr;
-
- ne = list_entry(cur, struct nat_entry, list);
- nid = nat_get_nid(ne);
-
- if (nat_get_blkaddr(ne) == NEW_ADDR)
- continue;
- if (flushed)
- goto to_nat_page;
-
- /* if there is room for nat enries in curseg->sumpage */
- offset = lookup_journal_in_cursum(sum, NAT_JOURNAL, nid, 1);
- if (offset >= 0) {
- raw_ne = nat_in_journal(sum, offset);
- goto flush_now;
- }
-to_nat_page:
- if (!page || (start_nid > nid || nid > end_nid)) {
- if (page) {
- f2fs_put_page(page, 1);
- page = NULL;
- }
- start_nid = START_NID(nid);
- end_nid = start_nid + NAT_ENTRY_PER_BLOCK - 1;
-
- /*
- * get nat block with dirty flag, increased reference
- * count, mapped and lock
- */
- page = get_next_nat_page(sbi, start_nid);
- nat_blk = page_address(page);
- }
-
- BUG_ON(!nat_blk);
- raw_ne = nat_blk->entries[nid - start_nid];
-flush_now:
- new_blkaddr = nat_get_blkaddr(ne);
-
- raw_ne.ino = cpu_to_le32(nat_get_ino(ne));
- raw_ne.block_addr = cpu_to_le32(new_blkaddr);
- raw_ne.version = nat_get_version(ne);
-
- if (offset < 0) {
- nat_blk->entries[nid - start_nid] = raw_ne;
- } else {
- nat_in_journal(sum, offset) = raw_ne;
- nid_in_journal(sum, offset) = cpu_to_le32(nid);
- }
-
- if (nat_get_blkaddr(ne) == NULL_ADDR &&
- add_free_nid(NM_I(sbi), nid, false) <= 0) {
- write_lock(&nm_i->nat_tree_lock);
- __del_from_nat_cache(nm_i, ne);
- write_unlock(&nm_i->nat_tree_lock);
- } else {
- write_lock(&nm_i->nat_tree_lock);
- __clear_nat_cache_dirty(nm_i, ne);
- ne->checkpointed = true;
- write_unlock(&nm_i->nat_tree_lock);
- }
+ while ((found = __gang_lookup_nat_set(nm_i,
+ set_idx, NATVEC_SIZE, setvec))) {
+ unsigned idx;
+ set_idx = setvec[found - 1]->set + 1;
+ for (idx = 0; idx < found; idx++)
+ __adjust_nat_entry_set(setvec[idx], &sets,
+ MAX_NAT_JENTRIES(sum));
}
- if (!flushed)
- mutex_unlock(&curseg->curseg_mutex);
- f2fs_put_page(page, 1);
- /* 2) shrink nat caches if necessary */
- try_to_free_nats(sbi, nm_i->nat_cnt - NM_WOUT_THRESHOLD);
+ /* flush dirty nats in nat entry set */
+ list_for_each_entry_safe(set, tmp, &sets, set_list)
+ __flush_nat_entry_set(sbi, set);
+
+ f2fs_bug_on(sbi, nm_i->dirty_nat_cnt);
}
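The loop above pages through the radix tree of dirty sets in NATVEC_SIZE-sized batches, restarting each pass just past the last set returned. A minimal host-side sketch of the same pagination pattern, with a hypothetical `gang_lookup()` over a sorted array standing in for `__gang_lookup_nat_set()`:

```c
#include <assert.h>
#include <stddef.h>

#define VEC_SIZE 4

/* Hypothetical stand-in for __gang_lookup_nat_set(): copy up to
 * max entries with id >= start from a sorted table into vec,
 * returning how many were found. */
static unsigned gang_lookup(const int *table, size_t n, int start,
			    unsigned max, int *vec)
{
	unsigned found = 0;
	size_t i;

	for (i = 0; i < n && found < max; i++)
		if (table[i] >= start)
			vec[found++] = table[i];
	return found;
}

/* Visit every entry exactly once, VEC_SIZE at a time, advancing the
 * cursor past the last id of each batch -- the same pattern the
 * flush_nat_entries() loop uses with set_idx. */
static unsigned visit_all(const int *table, size_t n)
{
	int vec[VEC_SIZE];
	int cursor = 0;
	unsigned found, total = 0;

	while ((found = gang_lookup(table, n, cursor, VEC_SIZE, vec))) {
		cursor = vec[found - 1] + 1;	/* restart after last hit */
		total += found;
	}
	return total;
}
```

Advancing the cursor from the last returned element (rather than by VEC_SIZE) is what keeps the walk correct even when ids are sparse.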
static int init_node_manager(struct f2fs_sb_info *sbi)
@@ -1693,14 +1983,20 @@
/* segment_count_nat includes pair segment so divide to 2. */
nat_segs = le32_to_cpu(sb_raw->segment_count_nat) >> 1;
nat_blocks = nat_segs << le32_to_cpu(sb_raw->log_blocks_per_seg);
+
nm_i->max_nid = NAT_ENTRY_PER_BLOCK * nat_blocks;
+
+ /* not used nids: 0, node, meta, (and root counted as valid node) */
+ nm_i->available_nids = nm_i->max_nid - F2FS_RESERVED_NODE_NUM;
nm_i->fcnt = 0;
nm_i->nat_cnt = 0;
+ nm_i->ram_thresh = DEF_RAM_THRESHOLD;
+ INIT_RADIX_TREE(&nm_i->free_nid_root, GFP_ATOMIC);
INIT_LIST_HEAD(&nm_i->free_nid_list);
INIT_RADIX_TREE(&nm_i->nat_root, GFP_ATOMIC);
+ INIT_RADIX_TREE(&nm_i->nat_set_root, GFP_ATOMIC);
INIT_LIST_HEAD(&nm_i->nat_entries);
- INIT_LIST_HEAD(&nm_i->dirty_nat_entries);
mutex_init(&nm_i->build_lock);
spin_lock_init(&nm_i->free_nid_list_lock);
@@ -1749,11 +2045,14 @@
/* destroy free nid list */
spin_lock(&nm_i->free_nid_list_lock);
list_for_each_entry_safe(i, next_i, &nm_i->free_nid_list, list) {
- BUG_ON(i->state == NID_ALLOC);
- __del_from_free_nid_list(i);
+ f2fs_bug_on(sbi, i->state == NID_ALLOC);
+ __del_from_free_nid_list(nm_i, i);
nm_i->fcnt--;
+ spin_unlock(&nm_i->free_nid_list_lock);
+ kmem_cache_free(free_nid_slab, i);
+ spin_lock(&nm_i->free_nid_list_lock);
}
- BUG_ON(nm_i->fcnt);
+ f2fs_bug_on(sbi, nm_i->fcnt);
spin_unlock(&nm_i->free_nid_list_lock);
/* destroy nat cache */
@@ -1761,13 +2060,11 @@
while ((found = __gang_lookup_nat_cache(nm_i,
nid, NATVEC_SIZE, natvec))) {
unsigned idx;
- for (idx = 0; idx < found; idx++) {
- struct nat_entry *e = natvec[idx];
- nid = nat_get_nid(e) + 1;
- __del_from_nat_cache(nm_i, e);
- }
+ nid = nat_get_nid(natvec[found - 1]) + 1;
+ for (idx = 0; idx < found; idx++)
+ __del_from_nat_cache(nm_i, natvec[idx]);
}
- BUG_ON(nm_i->nat_cnt);
+ f2fs_bug_on(sbi, nm_i->nat_cnt);
write_unlock(&nm_i->nat_tree_lock);
kfree(nm_i->nat_bitmap);
@@ -1778,21 +2075,32 @@
int __init create_node_manager_caches(void)
{
nat_entry_slab = f2fs_kmem_cache_create("nat_entry",
- sizeof(struct nat_entry), NULL);
+ sizeof(struct nat_entry));
if (!nat_entry_slab)
- return -ENOMEM;
+ goto fail;
free_nid_slab = f2fs_kmem_cache_create("free_nid",
- sizeof(struct free_nid), NULL);
- if (!free_nid_slab) {
- kmem_cache_destroy(nat_entry_slab);
- return -ENOMEM;
- }
+ sizeof(struct free_nid));
+ if (!free_nid_slab)
+ goto destroy_nat_entry;
+
+ nat_entry_set_slab = f2fs_kmem_cache_create("nat_entry_set",
+ sizeof(struct nat_entry_set));
+ if (!nat_entry_set_slab)
+ goto destroy_free_nid;
 return 0;
+
+destroy_free_nid:
+ kmem_cache_destroy(free_nid_slab);
+destroy_nat_entry:
+ kmem_cache_destroy(nat_entry_slab);
+fail:
+ return -ENOMEM;
}
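The error path above is the kernel's staged goto-unwind idiom: each later failure jumps to a label that tears down only what was already set up, in reverse order. A self-contained sketch of the shape, with hypothetical `make_a`/`make_b` helpers in place of the slab-cache constructors:

```c
#include <assert.h>

/* Toy resources tracked by flags so the unwind order is observable. */
static int created_a, created_b;

static int make_a(int fail) { if (fail) return 0; created_a = 1; return 1; }
static int make_b(int fail) { if (fail) return 0; created_b = 1; return 1; }
static void destroy_a(void) { created_a = 0; }
static void destroy_b(void) { created_b = 0; }

/* Staged unwind: a failure in make_b() must free A but not B,
 * exactly as create_node_manager_caches() does with its labels. */
static int setup(int fail_a, int fail_b)
{
	if (!make_a(fail_a))
		goto fail;
	if (!make_b(fail_b))
		goto undo_a;
	return 0;

undo_a:
	destroy_a();
fail:
	return -1;		/* the kernel code returns -ENOMEM */
}
```

The label ordering mirrors construction order reversed, so adding a third resource only requires one new label above the existing ones.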
void destroy_node_manager_caches(void)
{
+ kmem_cache_destroy(nat_entry_set_slab);
kmem_cache_destroy(free_nid_slab);
kmem_cache_destroy(nat_entry_slab);
}
diff --git a/fs/f2fs/node.h b/fs/f2fs/node.h
index 0a2d72f..bd826d9 100644
--- a/fs/f2fs/node.h
+++ b/fs/f2fs/node.h
@@ -17,14 +17,11 @@
/* # of pages to perform readahead before building free nids */
#define FREE_NID_PAGES 4
-/* maximum # of free node ids to produce during build_free_nids */
-#define MAX_FREE_NIDS (NAT_ENTRY_PER_BLOCK * FREE_NID_PAGES)
-
/* maximum readahead size for node during getting data blocks */
#define MAX_RA_NODE 128
-/* maximum cached nat entries to manage memory footprint */
-#define NM_WOUT_THRESHOLD (64 * NAT_ENTRY_PER_BLOCK)
+/* control the memory footprint threshold (10MB per 1GB ram) */
+#define DEF_RAM_THRESHOLD 10
/* vector size for gang look-up from nat cache that consists of radix tree */
#define NATVEC_SIZE 64
@@ -42,9 +39,16 @@
unsigned char version; /* version of the node */
};
+enum {
+ IS_CHECKPOINTED, /* is it checkpointed before? */
+ HAS_FSYNCED_INODE, /* is the inode fsynced before? */
+ HAS_LAST_FSYNC, /* has the latest node fsync mark? */
+ IS_DIRTY, /* is this nat entry dirty? */
+};
+
struct nat_entry {
struct list_head list; /* for clean or dirty nat list */
- bool checkpointed; /* whether it is checkpointed or not */
+ unsigned char flag; /* for node information bits */
struct node_info ni; /* in-memory node information */
};
@@ -57,12 +61,32 @@
#define nat_get_version(nat) (nat->ni.version)
#define nat_set_version(nat, v) (nat->ni.version = v)
-#define __set_nat_cache_dirty(nm_i, ne) \
- list_move_tail(&ne->list, &nm_i->dirty_nat_entries);
-#define __clear_nat_cache_dirty(nm_i, ne) \
- list_move_tail(&ne->list, &nm_i->nat_entries);
#define inc_node_version(version) (++version)
+static inline void set_nat_flag(struct nat_entry *ne,
+ unsigned int type, bool set)
+{
+ unsigned char mask = 0x01 << type;
+ if (set)
+ ne->flag |= mask;
+ else
+ ne->flag &= ~mask;
+}
+
+static inline bool get_nat_flag(struct nat_entry *ne, unsigned int type)
+{
+ unsigned char mask = 0x01 << type;
+ return ne->flag & mask;
+}
+
+static inline void nat_reset_flag(struct nat_entry *ne)
+{
+ /* these states can be set only after checkpoint was done */
+ set_nat_flag(ne, IS_CHECKPOINTED, true);
+ set_nat_flag(ne, HAS_FSYNCED_INODE, false);
+ set_nat_flag(ne, HAS_LAST_FSYNC, true);
+}
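The replacement of the single `checkpointed` bool with a `flag` byte packs one state bit per enum value. The helpers can be exercised in isolation; this is a standalone copy operating on a bare `unsigned char`, so it compiles without the `nat_entry` definition:

```c
#include <assert.h>
#include <stdbool.h>

enum { IS_CHECKPOINTED, HAS_FSYNCED_INODE, HAS_LAST_FSYNC, IS_DIRTY };

/* Same bit arithmetic as set_nat_flag()/get_nat_flag(), detached
 * from struct nat_entry for demonstration. */
static void set_flag(unsigned char *flag, unsigned int type, bool set)
{
	unsigned char mask = 0x01 << type;

	if (set)
		*flag |= mask;
	else
		*flag &= ~mask;
}

static bool get_flag(unsigned char flag, unsigned int type)
{
	return flag & (0x01 << type);
}
```

One byte covers the four states with room for four more, which is why the patch can later add flags without growing `struct nat_entry`.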
+
static inline void node_info_from_raw_nat(struct node_info *ni,
struct f2fs_nat_entry *raw_ne)
{
@@ -71,6 +95,27 @@
ni->version = raw_ne->version;
}
+static inline void raw_nat_from_node_info(struct f2fs_nat_entry *raw_ne,
+ struct node_info *ni)
+{
+ raw_ne->ino = cpu_to_le32(ni->ino);
+ raw_ne->block_addr = cpu_to_le32(ni->blk_addr);
+ raw_ne->version = ni->version;
+}
+
+enum mem_type {
+ FREE_NIDS, /* indicates the free nid list */
+ NAT_ENTRIES, /* indicates the cached nat entry */
+ DIRTY_DENTS /* indicates dirty dentry pages */
+};
+
+struct nat_entry_set {
+ struct list_head set_list; /* link with other nat sets */
+ struct list_head entry_list; /* link with dirty nat entries */
+ nid_t set; /* set number */
+ unsigned int entry_cnt; /* the # of nat entries in set */
+};
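A `nat_entry_set` groups dirty entries by the on-disk NAT block they belong to, so one flush touches one block (or one run of journal slots). A sketch of the grouping key, assuming the 4KB-block value of 455 entries per NAT block (block size over the 9-byte raw entry, what `START_NID()` computes per block):

```c
#include <assert.h>

/* Assumption: 4KB blocks / 9-byte raw nat entries = 455 entries
 * per NAT block; the set number is just the nid's block index. */
#define NAT_ENTRY_PER_BLOCK 455

static unsigned int set_of_nid(unsigned int nid)
{
	return nid / NAT_ENTRY_PER_BLOCK;
}

static unsigned int start_nid_of_set(unsigned int set)
{
	return set * NAT_ENTRY_PER_BLOCK;
}
```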
+
/*
* For free nid management
*/
@@ -90,9 +135,11 @@
struct f2fs_nm_info *nm_i = NM_I(sbi);
struct free_nid *fnid;
- if (nm_i->fcnt <= 0)
- return -1;
spin_lock(&nm_i->free_nid_list_lock);
+ if (nm_i->fcnt <= 0) {
+ spin_unlock(&nm_i->free_nid_list_lock);
+ return -1;
+ }
fnid = list_entry(nm_i->free_nid_list.next, struct free_nid, list);
*nid = fnid->nid;
spin_unlock(&nm_i->free_nid_list_lock);
@@ -155,8 +202,7 @@
static inline void fill_node_footer(struct page *page, nid_t nid,
nid_t ino, unsigned int ofs, bool reset)
{
- void *kaddr = page_address(page);
- struct f2fs_node *rn = (struct f2fs_node *)kaddr;
+ struct f2fs_node *rn = F2FS_NODE(page);
if (reset)
memset(rn, 0, sizeof(*rn));
rn->footer.nid = cpu_to_le32(nid);
@@ -166,56 +212,48 @@
static inline void copy_node_footer(struct page *dst, struct page *src)
{
- void *src_addr = page_address(src);
- void *dst_addr = page_address(dst);
- struct f2fs_node *src_rn = (struct f2fs_node *)src_addr;
- struct f2fs_node *dst_rn = (struct f2fs_node *)dst_addr;
+ struct f2fs_node *src_rn = F2FS_NODE(src);
+ struct f2fs_node *dst_rn = F2FS_NODE(dst);
memcpy(&dst_rn->footer, &src_rn->footer, sizeof(struct node_footer));
}
static inline void fill_node_footer_blkaddr(struct page *page, block_t blkaddr)
{
- struct f2fs_sb_info *sbi = F2FS_SB(page->mapping->host->i_sb);
- struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
- void *kaddr = page_address(page);
- struct f2fs_node *rn = (struct f2fs_node *)kaddr;
+ struct f2fs_checkpoint *ckpt = F2FS_CKPT(F2FS_P_SB(page));
+ struct f2fs_node *rn = F2FS_NODE(page);
+
rn->footer.cp_ver = ckpt->checkpoint_ver;
rn->footer.next_blkaddr = cpu_to_le32(blkaddr);
}
static inline nid_t ino_of_node(struct page *node_page)
{
- void *kaddr = page_address(node_page);
- struct f2fs_node *rn = (struct f2fs_node *)kaddr;
+ struct f2fs_node *rn = F2FS_NODE(node_page);
return le32_to_cpu(rn->footer.ino);
}
static inline nid_t nid_of_node(struct page *node_page)
{
- void *kaddr = page_address(node_page);
- struct f2fs_node *rn = (struct f2fs_node *)kaddr;
+ struct f2fs_node *rn = F2FS_NODE(node_page);
return le32_to_cpu(rn->footer.nid);
}
static inline unsigned int ofs_of_node(struct page *node_page)
{
- void *kaddr = page_address(node_page);
- struct f2fs_node *rn = (struct f2fs_node *)kaddr;
+ struct f2fs_node *rn = F2FS_NODE(node_page);
unsigned flag = le32_to_cpu(rn->footer.flag);
return flag >> OFFSET_BIT_SHIFT;
}
static inline unsigned long long cpver_of_node(struct page *node_page)
{
- void *kaddr = page_address(node_page);
- struct f2fs_node *rn = (struct f2fs_node *)kaddr;
+ struct f2fs_node *rn = F2FS_NODE(node_page);
return le64_to_cpu(rn->footer.cp_ver);
}
static inline block_t next_blkaddr_of_node(struct page *node_page)
{
- void *kaddr = page_address(node_page);
- struct f2fs_node *rn = (struct f2fs_node *)kaddr;
+ struct f2fs_node *rn = F2FS_NODE(node_page);
return le32_to_cpu(rn->footer.next_blkaddr);
}
@@ -232,11 +270,21 @@
* | `- direct node (5 + N => 5 + 2N - 1)
* `- double indirect node (5 + 2N)
* `- indirect node (6 + 2N)
- * `- direct node (x(N + 1))
+ * `- direct node
+ * ......
+ * `- indirect node ((6 + 2N) + x(N + 1))
+ * `- direct node
+ * ......
+ * `- indirect node ((6 + 2N) + (N - 1)(N + 1))
+ * `- direct node
*/
static inline bool IS_DNODE(struct page *node_page)
{
unsigned int ofs = ofs_of_node(node_page);
+
+ if (f2fs_has_xattr_block(ofs))
+ return false;
+
if (ofs == 3 || ofs == 4 + NIDS_PER_BLOCK ||
ofs == 5 + 2 * NIDS_PER_BLOCK)
return false;
@@ -250,9 +298,9 @@
static inline void set_nid(struct page *p, int off, nid_t nid, bool i)
{
- struct f2fs_node *rn = (struct f2fs_node *)page_address(p);
+ struct f2fs_node *rn = F2FS_NODE(p);
- wait_on_page_writeback(p);
+ f2fs_wait_on_page_writeback(p, NODE);
if (i)
rn->i.i_nid[off - NODE_DIR1_BLOCK] = cpu_to_le32(nid);
@@ -263,7 +311,8 @@
static inline nid_t get_nid(struct page *p, int off, bool i)
{
- struct f2fs_node *rn = (struct f2fs_node *)page_address(p);
+ struct f2fs_node *rn = F2FS_NODE(p);
+
if (i)
return le32_to_cpu(rn->i.i_nid[off - NODE_DIR1_BLOCK]);
return le32_to_cpu(rn->in.nid[off]);
@@ -275,25 +324,27 @@
* - Mark cold node blocks in their node footer
* - Mark cold data pages in page cache
*/
-static inline int is_cold_file(struct inode *inode)
+static inline int is_file(struct inode *inode, int type)
{
- return F2FS_I(inode)->i_advise & FADVISE_COLD_BIT;
+ return F2FS_I(inode)->i_advise & type;
}
-static inline void set_cold_file(struct inode *inode)
+static inline void set_file(struct inode *inode, int type)
{
- F2FS_I(inode)->i_advise |= FADVISE_COLD_BIT;
+ F2FS_I(inode)->i_advise |= type;
}
-static inline int is_cp_file(struct inode *inode)
+static inline void clear_file(struct inode *inode, int type)
{
- return F2FS_I(inode)->i_advise & FADVISE_CP_BIT;
+ F2FS_I(inode)->i_advise &= ~type;
}
-static inline void set_cp_file(struct inode *inode)
-{
- F2FS_I(inode)->i_advise |= FADVISE_CP_BIT;
-}
+#define file_is_cold(inode) is_file(inode, FADVISE_COLD_BIT)
+#define file_wrong_pino(inode) is_file(inode, FADVISE_LOST_PINO_BIT)
+#define file_set_cold(inode) set_file(inode, FADVISE_COLD_BIT)
+#define file_lost_pino(inode) set_file(inode, FADVISE_LOST_PINO_BIT)
+#define file_clear_cold(inode) clear_file(inode, FADVISE_COLD_BIT)
+#define file_got_pino(inode) clear_file(inode, FADVISE_LOST_PINO_BIT)
static inline int is_cold_data(struct page *page)
{
@@ -310,33 +361,19 @@
ClearPageChecked(page);
}
-static inline int is_cold_node(struct page *page)
+static inline int is_node(struct page *page, int type)
{
- void *kaddr = page_address(page);
- struct f2fs_node *rn = (struct f2fs_node *)kaddr;
- unsigned int flag = le32_to_cpu(rn->footer.flag);
- return flag & (0x1 << COLD_BIT_SHIFT);
+ struct f2fs_node *rn = F2FS_NODE(page);
+ return le32_to_cpu(rn->footer.flag) & (1 << type);
}
-static inline unsigned char is_fsync_dnode(struct page *page)
-{
- void *kaddr = page_address(page);
- struct f2fs_node *rn = (struct f2fs_node *)kaddr;
- unsigned int flag = le32_to_cpu(rn->footer.flag);
- return flag & (0x1 << FSYNC_BIT_SHIFT);
-}
-
-static inline unsigned char is_dent_dnode(struct page *page)
-{
- void *kaddr = page_address(page);
- struct f2fs_node *rn = (struct f2fs_node *)kaddr;
- unsigned int flag = le32_to_cpu(rn->footer.flag);
- return flag & (0x1 << DENT_BIT_SHIFT);
-}
+#define is_cold_node(page) is_node(page, COLD_BIT_SHIFT)
+#define is_fsync_dnode(page) is_node(page, FSYNC_BIT_SHIFT)
+#define is_dent_dnode(page) is_node(page, DENT_BIT_SHIFT)
static inline void set_cold_node(struct inode *inode, struct page *page)
{
- struct f2fs_node *rn = (struct f2fs_node *)page_address(page);
+ struct f2fs_node *rn = F2FS_NODE(page);
unsigned int flag = le32_to_cpu(rn->footer.flag);
if (S_ISDIR(inode->i_mode))
@@ -346,26 +383,15 @@
rn->footer.flag = cpu_to_le32(flag);
}
-static inline void set_fsync_mark(struct page *page, int mark)
+static inline void set_mark(struct page *page, int mark, int type)
{
- void *kaddr = page_address(page);
- struct f2fs_node *rn = (struct f2fs_node *)kaddr;
+ struct f2fs_node *rn = F2FS_NODE(page);
unsigned int flag = le32_to_cpu(rn->footer.flag);
if (mark)
- flag |= (0x1 << FSYNC_BIT_SHIFT);
+ flag |= (0x1 << type);
else
- flag &= ~(0x1 << FSYNC_BIT_SHIFT);
+ flag &= ~(0x1 << type);
rn->footer.flag = cpu_to_le32(flag);
}
-
-static inline void set_dentry_mark(struct page *page, int mark)
-{
- void *kaddr = page_address(page);
- struct f2fs_node *rn = (struct f2fs_node *)kaddr;
- unsigned int flag = le32_to_cpu(rn->footer.flag);
- if (mark)
- flag |= (0x1 << DENT_BIT_SHIFT);
- else
- flag &= ~(0x1 << DENT_BIT_SHIFT);
- rn->footer.flag = cpu_to_le32(flag);
-}
+#define set_dentry_mark(page, mark) set_mark(page, mark, DENT_BIT_SHIFT)
+#define set_fsync_mark(page, mark) set_mark(page, mark, FSYNC_BIT_SHIFT)
diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
index 60c8a50..c0543a8 100644
--- a/fs/f2fs/recovery.c
+++ b/fs/f2fs/recovery.c
@@ -14,6 +14,37 @@
#include "node.h"
#include "segment.h"
+/*
+ * Roll forward recovery scenarios.
+ *
+ * [Term] F: fsync_mark, D: dentry_mark
+ *
+ * 1. inode(x) | CP | inode(x) | dnode(F)
+ * -> Update the latest inode(x).
+ *
+ * 2. inode(x) | CP | inode(F) | dnode(F)
+ * -> No problem.
+ *
+ * 3. inode(x) | CP | dnode(F) | inode(x)
+ * -> Recover to the latest dnode(F), and drop the last inode(x)
+ *
+ * 4. inode(x) | CP | dnode(F) | inode(F)
+ * -> No problem.
+ *
+ * 5. CP | inode(x) | dnode(F)
+ * -> The inode(DF) was missing. Should drop this dnode(F).
+ *
+ * 6. CP | inode(DF) | dnode(F)
+ * -> No problem.
+ *
+ * 7. CP | dnode(F) | inode(DF)
+ * -> If f2fs_iget fails, then goto next to find inode(DF).
+ *
+ * 8. CP | dnode(F) | inode(x)
+ * -> If f2fs_iget fails, then goto next to find inode(DF).
+ * But it will fail due to no inode(DF).
+ */
+
static struct kmem_cache *fsync_entry_slab;
bool space_for_roll_forward(struct f2fs_sb_info *sbi)
@@ -27,32 +58,30 @@
static struct fsync_inode_entry *get_fsync_inode(struct list_head *head,
nid_t ino)
{
- struct list_head *this;
struct fsync_inode_entry *entry;
- list_for_each(this, head) {
- entry = list_entry(this, struct fsync_inode_entry, list);
+ list_for_each_entry(entry, head, list)
if (entry->inode->i_ino == ino)
return entry;
- }
+
return NULL;
}
-static int recover_dentry(struct page *ipage, struct inode *inode)
+static int recover_dentry(struct inode *inode, struct page *ipage)
{
- struct f2fs_node *raw_node = (struct f2fs_node *)kmap(ipage);
- struct f2fs_inode *raw_inode = &(raw_node->i);
- struct qstr name;
+ struct f2fs_inode *raw_inode = F2FS_INODE(ipage);
+ nid_t pino = le32_to_cpu(raw_inode->i_pino);
struct f2fs_dir_entry *de;
+ struct qstr name;
struct page *page;
- struct inode *dir;
+ struct inode *dir, *einode;
int err = 0;
- if (!is_dent_dnode(ipage))
- goto out;
-
- dir = f2fs_iget(inode->i_sb, le32_to_cpu(raw_inode->i_pino));
+ dir = f2fs_iget(inode->i_sb, pino);
if (IS_ERR(dir)) {
+ f2fs_msg(inode->i_sb, KERN_INFO,
+ "%s: f2fs_iget failed: %ld",
+ __func__, PTR_ERR(dir));
err = PTR_ERR(dir);
goto out;
}
@@ -60,122 +89,160 @@
name.len = le32_to_cpu(raw_inode->i_namelen);
name.name = raw_inode->i_name;
- de = f2fs_find_entry(dir, &name, &page);
- if (de) {
- kunmap(page);
- f2fs_put_page(page, 0);
- } else {
- err = __f2fs_add_link(dir, &name, inode);
+ if (unlikely(name.len > F2FS_NAME_LEN)) {
+ WARN_ON(1);
+ err = -ENAMETOOLONG;
+ goto out_err;
}
+retry:
+ de = f2fs_find_entry(dir, &name, &page);
+ if (de && inode->i_ino == le32_to_cpu(de->ino)) {
+ clear_inode_flag(F2FS_I(inode), FI_INC_LINK);
+ goto out_unmap_put;
+ }
+ if (de) {
+ einode = f2fs_iget(inode->i_sb, le32_to_cpu(de->ino));
+ if (IS_ERR(einode)) {
+ WARN_ON(1);
+ err = PTR_ERR(einode);
+ if (err == -ENOENT)
+ err = -EEXIST;
+ goto out_unmap_put;
+ }
+ err = acquire_orphan_inode(F2FS_I_SB(inode));
+ if (err) {
+ iput(einode);
+ goto out_unmap_put;
+ }
+ f2fs_delete_entry(de, page, einode);
+ iput(einode);
+ goto retry;
+ }
+ err = __f2fs_add_link(dir, &name, inode);
+ if (err)
+ goto out_err;
+
+ if (is_inode_flag_set(F2FS_I(dir), FI_DELAY_IPUT)) {
+ iput(dir);
+ } else {
+ add_dirty_dir_inode(dir);
+ set_inode_flag(F2FS_I(dir), FI_DELAY_IPUT);
+ }
+
+ goto out;
+
+out_unmap_put:
+ kunmap(page);
+ f2fs_put_page(page, 0);
+out_err:
iput(dir);
out:
- kunmap(ipage);
+ f2fs_msg(inode->i_sb, KERN_DEBUG,
+ "%s: ino = %x, name = %s, dir = %lx, err = %d",
+ __func__, ino_of_node(ipage), raw_inode->i_name,
+ IS_ERR(dir) ? 0 : dir->i_ino, err);
return err;
}
-static int recover_inode(struct inode *inode, struct page *node_page)
+static void recover_inode(struct inode *inode, struct page *page)
{
- void *kaddr = page_address(node_page);
- struct f2fs_node *raw_node = (struct f2fs_node *)kaddr;
- struct f2fs_inode *raw_inode = &(raw_node->i);
+ struct f2fs_inode *raw = F2FS_INODE(page);
- inode->i_mode = le16_to_cpu(raw_inode->i_mode);
- i_size_write(inode, le64_to_cpu(raw_inode->i_size));
- inode->i_atime.tv_sec = le64_to_cpu(raw_inode->i_mtime);
- inode->i_ctime.tv_sec = le64_to_cpu(raw_inode->i_ctime);
- inode->i_mtime.tv_sec = le64_to_cpu(raw_inode->i_mtime);
- inode->i_atime.tv_nsec = le32_to_cpu(raw_inode->i_mtime_nsec);
- inode->i_ctime.tv_nsec = le32_to_cpu(raw_inode->i_ctime_nsec);
- inode->i_mtime.tv_nsec = le32_to_cpu(raw_inode->i_mtime_nsec);
+ inode->i_mode = le16_to_cpu(raw->i_mode);
+ i_size_write(inode, le64_to_cpu(raw->i_size));
+ inode->i_atime.tv_sec = le64_to_cpu(raw->i_mtime);
+ inode->i_ctime.tv_sec = le64_to_cpu(raw->i_ctime);
+ inode->i_mtime.tv_sec = le64_to_cpu(raw->i_mtime);
+ inode->i_atime.tv_nsec = le32_to_cpu(raw->i_mtime_nsec);
+ inode->i_ctime.tv_nsec = le32_to_cpu(raw->i_ctime_nsec);
+ inode->i_mtime.tv_nsec = le32_to_cpu(raw->i_mtime_nsec);
- return recover_dentry(node_page, inode);
+ f2fs_msg(inode->i_sb, KERN_DEBUG, "recover_inode: ino = %x, name = %s",
+ ino_of_node(page), F2FS_INODE(page)->i_name);
}
static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head)
{
- unsigned long long cp_ver = le64_to_cpu(sbi->ckpt->checkpoint_ver);
+ unsigned long long cp_ver = cur_cp_version(F2FS_CKPT(sbi));
struct curseg_info *curseg;
- struct page *page;
+ struct page *page = NULL;
block_t blkaddr;
int err = 0;
/* get node pages in the current segment */
curseg = CURSEG_I(sbi, CURSEG_WARM_NODE);
- blkaddr = START_BLOCK(sbi, curseg->segno) + curseg->next_blkoff;
-
- /* read node page */
- page = alloc_page(GFP_F2FS_ZERO);
- if (IS_ERR(page))
- return PTR_ERR(page);
- lock_page(page);
+ blkaddr = NEXT_FREE_BLKADDR(sbi, curseg);
while (1) {
struct fsync_inode_entry *entry;
- err = f2fs_readpage(sbi, page, blkaddr, READ_SYNC);
- if (err)
- goto out;
+ if (blkaddr < MAIN_BLKADDR(sbi) || blkaddr >= MAX_BLKADDR(sbi))
+ return 0;
- lock_page(page);
+ page = get_meta_page_ra(sbi, blkaddr);
if (cp_ver != cpver_of_node(page))
- goto unlock_out;
+ break;
if (!is_fsync_dnode(page))
goto next;
entry = get_fsync_inode(head, ino_of_node(page));
if (entry) {
- entry->blkaddr = blkaddr;
if (IS_INODE(page) && is_dent_dnode(page))
set_inode_flag(F2FS_I(entry->inode),
FI_INC_LINK);
} else {
if (IS_INODE(page) && is_dent_dnode(page)) {
err = recover_inode_page(sbi, page);
- if (err)
- goto unlock_out;
+ if (err) {
+ f2fs_msg(sbi->sb, KERN_INFO,
+ "%s: recover_inode_page failed: %d",
+ __func__, err);
+ break;
+ }
}
/* add this fsync inode to the list */
- entry = kmem_cache_alloc(fsync_entry_slab, GFP_NOFS);
+ entry = kmem_cache_alloc(fsync_entry_slab, GFP_F2FS_ZERO);
if (!entry) {
err = -ENOMEM;
- goto unlock_out;
+ break;
}
-
+ /*
+ * CP | dnode(F) | inode(DF)
+ * For this case, we should not give up now.
+ */
entry->inode = f2fs_iget(sbi->sb, ino_of_node(page));
if (IS_ERR(entry->inode)) {
err = PTR_ERR(entry->inode);
+ f2fs_msg(sbi->sb, KERN_INFO,
+ "%s: f2fs_iget failed: %d",
+ __func__, err);
kmem_cache_free(fsync_entry_slab, entry);
- goto unlock_out;
+ if (err == -ENOENT)
+ goto next;
+ break;
}
-
list_add_tail(&entry->list, head);
- entry->blkaddr = blkaddr;
}
+ entry->blkaddr = blkaddr;
+
if (IS_INODE(page)) {
- err = recover_inode(entry->inode, page);
- if (err == -ENOENT) {
- goto next;
- } else if (err) {
- err = -EINVAL;
- goto unlock_out;
- }
+ entry->last_inode = blkaddr;
+ if (is_dent_dnode(page))
+ entry->last_dentry = blkaddr;
}
next:
/* check next segment */
blkaddr = next_blkaddr_of_node(page);
+ f2fs_put_page(page, 1);
}
-unlock_out:
- unlock_page(page);
-out:
- __free_pages(page, 0);
+ f2fs_put_page(page, 1);
return err;
}
-static void destroy_fsync_dnodes(struct f2fs_sb_info *sbi,
- struct list_head *head)
+static void destroy_fsync_dnodes(struct list_head *head)
{
struct fsync_inode_entry *entry, *tmp;
@@ -186,88 +253,148 @@
}
}
-static void check_index_in_prev_nodes(struct f2fs_sb_info *sbi,
- block_t blkaddr)
+static int check_index_in_prev_nodes(struct f2fs_sb_info *sbi,
+ block_t blkaddr, struct dnode_of_data *dn)
{
struct seg_entry *sentry;
unsigned int segno = GET_SEGNO(sbi, blkaddr);
- unsigned short blkoff = GET_SEGOFF_FROM_SEG0(sbi, blkaddr) &
- (sbi->blocks_per_seg - 1);
+ unsigned short blkoff = GET_BLKOFF_FROM_SEG0(sbi, blkaddr);
+ struct f2fs_summary_block *sum_node;
struct f2fs_summary sum;
- nid_t ino;
- void *kaddr;
+ struct page *sum_page, *node_page;
+ nid_t ino, nid;
struct inode *inode;
- struct page *node_page;
+ unsigned int offset;
block_t bidx;
int i;
+ if (segno >= TOTAL_SEGS(sbi)) {
+ f2fs_msg(sbi->sb, KERN_ERR, "invalid segment number %u", segno);
+ if (f2fs_handle_error(sbi))
+ return -EIO;
+ }
+
sentry = get_seg_entry(sbi, segno);
if (!f2fs_test_bit(blkoff, sentry->cur_valid_map))
- return;
+ return 0;
/* Get the previous summary */
for (i = CURSEG_WARM_DATA; i <= CURSEG_COLD_DATA; i++) {
struct curseg_info *curseg = CURSEG_I(sbi, i);
if (curseg->segno == segno) {
sum = curseg->sum_blk->entries[blkoff];
- break;
+ goto got_it;
}
}
- if (i > CURSEG_COLD_DATA) {
- struct page *sum_page = get_sum_page(sbi, segno);
- struct f2fs_summary_block *sum_node;
- kaddr = page_address(sum_page);
- sum_node = (struct f2fs_summary_block *)kaddr;
- sum = sum_node->entries[blkoff];
- f2fs_put_page(sum_page, 1);
+
+ sum_page = get_sum_page(sbi, segno);
+ sum_node = (struct f2fs_summary_block *)page_address(sum_page);
+ sum = sum_node->entries[blkoff];
+ f2fs_put_page(sum_page, 1);
+got_it:
+ /* Use the locked dnode page and inode */
+ nid = le32_to_cpu(sum.nid);
+ if (dn->inode->i_ino == nid) {
+ struct dnode_of_data tdn = *dn;
+ tdn.nid = nid;
+ tdn.node_page = dn->inode_page;
+ tdn.ofs_in_node = le16_to_cpu(sum.ofs_in_node);
+ truncate_data_blocks_range(&tdn, 1);
+ return 0;
+ } else if (dn->nid == nid) {
+ struct dnode_of_data tdn = *dn;
+ tdn.ofs_in_node = le16_to_cpu(sum.ofs_in_node);
+ truncate_data_blocks_range(&tdn, 1);
+ return 0;
}
/* Get the node page */
- node_page = get_node_page(sbi, le32_to_cpu(sum.nid));
- bidx = start_bidx_of_node(ofs_of_node(node_page)) +
- le16_to_cpu(sum.ofs_in_node);
+ node_page = get_node_page(sbi, nid);
+ if (IS_ERR(node_page))
+ return PTR_ERR(node_page);
+
+ offset = ofs_of_node(node_page);
ino = ino_of_node(node_page);
f2fs_put_page(node_page, 1);
- /* Deallocate previous index in the node page */
- inode = f2fs_iget(sbi->sb, ino);
- if (IS_ERR(inode))
- return;
+ /* Skip nodes with circular references */
+ if (ino == dn->inode->i_ino) {
+ f2fs_msg(sbi->sb, KERN_ERR, "%s: node %x has circular inode %x",
+ __func__, ino, nid);
+ f2fs_handle_error(sbi);
+ return -EDEADLK;
+ }
- truncate_hole(inode, bidx, bidx + 1);
- iput(inode);
+ if (ino != dn->inode->i_ino) {
+ /* Deallocate previous index in the node page */
+ inode = f2fs_iget(sbi->sb, ino);
+ if (IS_ERR(inode))
+ return PTR_ERR(inode);
+ } else {
+ inode = dn->inode;
+ }
+
+ bidx = start_bidx_of_node(offset, F2FS_I(inode)) +
+ le16_to_cpu(sum.ofs_in_node);
+
+ if (ino != dn->inode->i_ino) {
+ truncate_hole(inode, bidx, bidx + 1);
+ iput(inode);
+ } else {
+ struct dnode_of_data tdn;
+ set_new_dnode(&tdn, inode, dn->inode_page, NULL, 0);
+ if (get_dnode_of_data(&tdn, bidx, LOOKUP_NODE))
+ return 0;
+ if (tdn.data_blkaddr != NULL_ADDR)
+ truncate_data_blocks_range(&tdn, 1);
+ f2fs_put_page(tdn.node_page, 1);
+ }
+ return 0;
}
static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode,
struct page *page, block_t blkaddr)
{
+ struct f2fs_inode_info *fi = F2FS_I(inode);
unsigned int start, end;
struct dnode_of_data dn;
struct f2fs_summary sum;
struct node_info ni;
- int err = 0;
- int ilock;
+ int err = 0, recovered = 0;
- start = start_bidx_of_node(ofs_of_node(page));
- if (IS_INODE(page))
- end = start + ADDRS_PER_INODE;
- else
- end = start + ADDRS_PER_BLOCK;
+ /* step 1: recover xattr */
+ if (IS_INODE(page)) {
+ recover_inline_xattr(inode, page);
+ } else if (f2fs_has_xattr_block(ofs_of_node(page))) {
+ recover_xattr_data(inode, page, blkaddr);
+ goto out;
+ }
- ilock = mutex_lock_op(sbi);
+ /* step 2: recover inline data */
+ if (recover_inline_data(inode, page))
+ goto out;
+
+ /* step 3: recover data indices */
+ start = start_bidx_of_node(ofs_of_node(page), fi);
+ end = start + ADDRS_PER_PAGE(page, fi);
+
+ f2fs_lock_op(sbi);
+
set_new_dnode(&dn, inode, NULL, NULL, 0);
err = get_dnode_of_data(&dn, start, ALLOC_NODE);
if (err) {
- mutex_unlock_op(sbi, ilock);
- return err;
+ f2fs_unlock_op(sbi);
+ f2fs_msg(sbi->sb, KERN_INFO,
+ "%s: get_dnode_of_data failed: %d", __func__, err);
+ goto out;
}
- wait_on_page_writeback(dn.node_page);
+ f2fs_wait_on_page_writeback(dn.node_page, NODE);
get_node_info(sbi, dn.nid, &ni);
- BUG_ON(ni.ino != ino_of_node(page));
- BUG_ON(ofs_of_node(dn.node_page) != ofs_of_node(page));
+ f2fs_bug_on(sbi, ni.ino != ino_of_node(page));
+ f2fs_bug_on(sbi, ofs_of_node(dn.node_page) != ofs_of_node(page));
for (; start < end; start++) {
block_t src, dest;
@@ -277,19 +404,26 @@
if (src != dest && dest != NEW_ADDR && dest != NULL_ADDR) {
if (src == NULL_ADDR) {
- int err = reserve_new_block(&dn);
+ err = reserve_new_block(&dn);
/* We should not get -ENOSPC */
- BUG_ON(err);
+ f2fs_bug_on(sbi, err);
+ if (err)
+ f2fs_msg(sbi->sb, KERN_INFO,
+ "%s: reserve_new_block failed: %d",
+ __func__, err);
}
/* Check the previous node page having this index */
- check_index_in_prev_nodes(sbi, dest);
+ err = check_index_in_prev_nodes(sbi, dest, &dn);
+ if (err)
+ goto err;
set_summary(&sum, dn.nid, dn.ofs_in_node, ni.version);
/* write dummy data page */
recover_data_page(sbi, NULL, &sum, src, dest);
update_extent_cache(dest, &dn);
+ recovered++;
}
dn.ofs_in_node++;
}
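The loop above only rewrites a block when the fsynced node carries a real destination address that differs from what is currently mapped. That gate can be expressed as a pure predicate; the sentinel values below mirror the f2fs convention (NULL_ADDR of 0, NEW_ADDR as an all-ones marker) but are stated here as assumptions:

```c
#include <assert.h>

#define NULL_ADDR 0u		/* assumed: unallocated block */
#define NEW_ADDR  0xFFFFFFFFu	/* assumed: preallocated-but-unwritten */

/* Mirror of the loop condition in do_recover_data(): act only when
 * the recorded destination is a concrete address that differs from
 * the current on-disk mapping. */
static int needs_recovery(unsigned src, unsigned dest)
{
	return src != dest && dest != NEW_ADDR && dest != NULL_ADDR;
}
```

Inside the loop, `src == NULL_ADDR` additionally triggers `reserve_new_block()` before the copy, since recovery may need to materialize a mapping that the checkpoint never saw.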
@@ -303,19 +437,23 @@
fill_node_footer(dn.node_page, dn.nid, ni.ino,
ofs_of_node(page), false);
set_page_dirty(dn.node_page);
-
- recover_node_page(sbi, dn.node_page, &sum, &ni, blkaddr);
+err:
f2fs_put_dnode(&dn);
- mutex_unlock_op(sbi, ilock);
- return 0;
+ f2fs_unlock_op(sbi);
+
+out:
+ f2fs_msg(sbi->sb, KERN_DEBUG,
+ "recover_data: ino = %lx, recovered = %d blocks, err = %d",
+ inode->i_ino, recovered, err);
+ return err;
}
static int recover_data(struct f2fs_sb_info *sbi,
struct list_head *head, int type)
{
- unsigned long long cp_ver = le64_to_cpu(sbi->ckpt->checkpoint_ver);
+ unsigned long long cp_ver = cur_cp_version(F2FS_CKPT(sbi));
struct curseg_info *curseg;
- struct page *page;
+ struct page *page = NULL;
int err = 0;
block_t blkaddr;
@@ -323,32 +461,44 @@
curseg = CURSEG_I(sbi, type);
blkaddr = NEXT_FREE_BLKADDR(sbi, curseg);
- /* read node page */
- page = alloc_page(GFP_NOFS | __GFP_ZERO);
- if (IS_ERR(page))
- return -ENOMEM;
-
- lock_page(page);
-
while (1) {
struct fsync_inode_entry *entry;
- err = f2fs_readpage(sbi, page, blkaddr, READ_SYNC);
- if (err)
- goto out;
+ if (blkaddr < MAIN_BLKADDR(sbi) || blkaddr >= MAX_BLKADDR(sbi))
+ break;
- lock_page(page);
+ page = get_meta_page_ra(sbi, blkaddr);
- if (cp_ver != cpver_of_node(page))
- goto unlock_out;
+ if (cp_ver != cpver_of_node(page)) {
+ f2fs_put_page(page, 1);
+ break;
+ }
entry = get_fsync_inode(head, ino_of_node(page));
if (!entry)
goto next;
-
+ /*
+ * inode(x) | CP | inode(x) | dnode(F)
+ * In this case, we can lose the latest inode(x).
+ * So, call recover_inode for the inode update.
+ */
+ if (entry->last_inode == blkaddr)
+ recover_inode(entry->inode, page);
+ if (entry->last_dentry == blkaddr) {
+ err = recover_dentry(entry->inode, page);
+ if (err) {
+ f2fs_put_page(page, 1);
+ break;
+ }
+ }
err = do_recover_data(sbi, entry->inode, page, blkaddr);
- if (err)
- goto out;
+ if (err) {
+ f2fs_put_page(page, 1);
+ f2fs_msg(sbi->sb, KERN_INFO,
+ "%s: do_recover_data failed: %d",
+ __func__, err);
+ break;
+ }
if (entry->blkaddr == blkaddr) {
iput(entry->inode);
@@ -358,12 +508,8 @@
next:
/* check next segment */
blkaddr = next_blkaddr_of_node(page);
+ f2fs_put_page(page, 1);
}
-unlock_out:
- unlock_page(page);
-out:
- __free_pages(page, 0);
-
if (!err)
allocate_new_segments(sbi);
return err;
@@ -371,32 +517,78 @@
int recover_fsync_data(struct f2fs_sb_info *sbi)
{
+ struct curseg_info *curseg = CURSEG_I(sbi, CURSEG_WARM_NODE);
struct list_head inode_list;
+ block_t blkaddr;
int err;
+ bool need_writecp = false;
fsync_entry_slab = f2fs_kmem_cache_create("f2fs_fsync_inode_entry",
- sizeof(struct fsync_inode_entry), NULL);
- if (unlikely(!fsync_entry_slab))
+ sizeof(struct fsync_inode_entry));
+ if (!fsync_entry_slab)
return -ENOMEM;
INIT_LIST_HEAD(&inode_list);
/* step #1: find fsynced inode numbers */
+ sbi->por_doing = true;
+
+ /* prevent checkpoint */
+ mutex_lock(&sbi->cp_mutex);
+
+ blkaddr = NEXT_FREE_BLKADDR(sbi, curseg);
+
err = find_fsync_dnodes(sbi, &inode_list);
- if (err)
+ if (err) {
+ f2fs_msg(sbi->sb, KERN_INFO,
+ "%s: find_fsync_dnodes failed: %d", __func__, err);
goto out;
+ }
if (list_empty(&inode_list))
goto out;
+ need_writecp = true;
+
/* step #2: recover data */
- sbi->por_doing = 1;
err = recover_data(sbi, &inode_list, CURSEG_WARM_NODE);
- sbi->por_doing = 0;
- BUG_ON(!list_empty(&inode_list));
+ if (!err && !list_empty(&inode_list)) {
+ f2fs_handle_error(sbi);
+ err = -EIO;
+ }
out:
- destroy_fsync_dnodes(sbi, &inode_list);
+ destroy_fsync_dnodes(&inode_list);
kmem_cache_destroy(fsync_entry_slab);
- write_checkpoint(sbi, false);
+
+ /* truncate meta pages to be used by the recovery */
+ truncate_inode_pages_range(META_MAPPING(sbi),
+ MAIN_BLKADDR(sbi) << PAGE_CACHE_SHIFT, -1);
+
+ if (err) {
+ truncate_inode_pages(NODE_MAPPING(sbi), 0);
+ truncate_inode_pages(META_MAPPING(sbi), 0);
+ }
+
+ sbi->por_doing = false;
+ if (err) {
+ discard_next_dnode(sbi, blkaddr);
+
+ /* Flush all the NAT/SIT pages */
+ while (get_pages(sbi, F2FS_DIRTY_META))
+ sync_meta_pages(sbi, META, LONG_MAX);
+ set_ckpt_flags(sbi->ckpt, CP_ERROR_FLAG);
+ mutex_unlock(&sbi->cp_mutex);
+ f2fs_msg(sbi->sb, KERN_INFO, "recovery complete");
+ } else if (need_writecp) {
+ struct cp_control cpc = {
+ .reason = CP_SYNC,
+ };
+ mutex_unlock(&sbi->cp_mutex);
+ write_checkpoint(sbi, &cpc);
+ f2fs_msg(sbi->sb, KERN_INFO, "recovery complete");
+ } else {
+ mutex_unlock(&sbi->cp_mutex);
+ f2fs_msg(sbi->sb, KERN_ERR, "recovery did not fully complete");
+ }
return err;
}
diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
index d8e84e4..4a384cb 100644
--- a/fs/f2fs/segment.c
+++ b/fs/f2fs/segment.c
@@ -13,13 +13,231 @@
#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/prefetch.h>
+#include <linux/kthread.h>
#include <linux/vmalloc.h>
+#include <linux/swap.h>
#include "f2fs.h"
#include "segment.h"
#include "node.h"
#include <trace/events/f2fs.h>
+#define __reverse_ffz(x) __reverse_ffs(~(x))
+
+static struct kmem_cache *discard_entry_slab;
+static struct kmem_cache *sit_entry_set_slab;
+static struct kmem_cache *aw_entry_slab;
+
+/*
+ * __reverse_ffs is copied from include/asm-generic/bitops/__ffs.h since
+ * MSB and LSB are reversed in a byte by f2fs_set_bit.
+ */
+static inline unsigned long __reverse_ffs(unsigned long word)
+{
+ int num = 0;
+
+#if BITS_PER_LONG == 64
+ if ((word & 0xffffffff) == 0) {
+ num += 32;
+ word >>= 32;
+ }
+#endif
+ if ((word & 0xffff) == 0) {
+ num += 16;
+ word >>= 16;
+ }
+ if ((word & 0xff) == 0) {
+ num += 8;
+ word >>= 8;
+ }
+ if ((word & 0xf0) == 0)
+ num += 4;
+ else
+ word >>= 4;
+ if ((word & 0xc) == 0)
+ num += 2;
+ else
+ word >>= 2;
+ if ((word & 0x2) == 0)
+ num += 1;
+ return num;
+}
+
+/*
+ * __find_rev_next(_zero)_bit is copied from lib/find_next_bit.c because
+ * f2fs_set_bit makes MSB and LSB reversed in a byte.
+ * Example:
+ * LSB <--> MSB
+ * f2fs_set_bit(0, bitmap) => 0000 0001
+ * f2fs_set_bit(7, bitmap) => 1000 0000
+ */
+static unsigned long __find_rev_next_bit(const unsigned long *addr,
+ unsigned long size, unsigned long offset)
+{
+ const unsigned long *p = addr + BIT_WORD(offset);
+ unsigned long result = offset & ~(BITS_PER_LONG - 1);
+ unsigned long tmp;
+ unsigned long mask, submask;
+ unsigned long quot, rest;
+
+ if (offset >= size)
+ return size;
+
+ size -= result;
+ offset %= BITS_PER_LONG;
+ if (!offset)
+ goto aligned;
+
+ tmp = *(p++);
+ quot = (offset >> 3) << 3;
+ rest = offset & 0x7;
+ mask = ~0UL << quot;
+ submask = (unsigned char)(0xff << rest) >> rest;
+ submask <<= quot;
+ mask &= submask;
+ tmp &= mask;
+ if (size < BITS_PER_LONG)
+ goto found_first;
+ if (tmp)
+ goto found_middle;
+
+ size -= BITS_PER_LONG;
+ result += BITS_PER_LONG;
+aligned:
+ while (size & ~(BITS_PER_LONG-1)) {
+ tmp = *(p++);
+ if (tmp)
+ goto found_middle;
+ result += BITS_PER_LONG;
+ size -= BITS_PER_LONG;
+ }
+ if (!size)
+ return result;
+ tmp = *p;
+found_first:
+ tmp &= (~0UL >> (BITS_PER_LONG - size));
+ if (tmp == 0UL) /* Are any bits set? */
+ return result + size; /* Nope. */
+found_middle:
+ return result + __reverse_ffs(tmp);
+}
+
+static unsigned long __find_rev_next_zero_bit(const unsigned long *addr,
+ unsigned long size, unsigned long offset)
+{
+ const unsigned long *p = addr + BIT_WORD(offset);
+ unsigned long result = offset & ~(BITS_PER_LONG - 1);
+ unsigned long tmp;
+ unsigned long mask, submask;
+ unsigned long quot, rest;
+
+ if (offset >= size)
+ return size;
+
+ size -= result;
+ offset %= BITS_PER_LONG;
+ if (!offset)
+ goto aligned;
+
+ tmp = *(p++);
+ quot = (offset >> 3) << 3;
+ rest = offset & 0x7;
+ mask = ~(~0UL << quot);
+ submask = (unsigned char)~((unsigned char)(0xff << rest) >> rest);
+ submask <<= quot;
+ mask += submask;
+ tmp |= mask;
+ if (size < BITS_PER_LONG)
+ goto found_first;
+ if (~tmp)
+ goto found_middle;
+
+ size -= BITS_PER_LONG;
+ result += BITS_PER_LONG;
+aligned:
+ while (size & ~(BITS_PER_LONG - 1)) {
+ tmp = *(p++);
+ if (~tmp)
+ goto found_middle;
+ result += BITS_PER_LONG;
+ size -= BITS_PER_LONG;
+ }
+ if (!size)
+ return result;
+ tmp = *p;
+
+found_first:
+ tmp |= ~0UL << size;
+ if (tmp == ~0UL) /* Are any bits zero? */
+ return result + size; /* Nope. */
+found_middle:
+ return result + __reverse_ffz(tmp);
+}
+
+/* For atomic write support */
+void prepare_atomic_pages(struct inode *inode, struct atomic_w *aw)
+{
+ pgoff_t start = aw->pos >> PAGE_CACHE_SHIFT;
+ pgoff_t end = (aw->pos + aw->count + PAGE_CACHE_SIZE - 1) >>
+ PAGE_CACHE_SHIFT;
+ struct atomic_range *new;
+
+ new = f2fs_kmem_cache_alloc(aw_entry_slab, GFP_NOFS);
+
+ /* add atomic page indices to the list */
+ new->aid = aw->aid;
+ new->start = start;
+ new->end = end;
+ INIT_LIST_HEAD(&new->list);
+ list_add_tail(&new->list, &F2FS_I(inode)->atomic_pages);
+}
+
+void commit_atomic_pages(struct inode *inode, u64 aid, bool abort)
+{
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ struct atomic_range *cur, *tmp;
+ u64 start;
+ struct page *page;
+
+ if (abort)
+ goto release;
+
+ f2fs_balance_fs(sbi);
+ mutex_lock(&sbi->cp_mutex);
+
+ /* Step #1: write all the pages */
+ list_for_each_entry(cur, &F2FS_I(inode)->atomic_pages, list) {
+ if (cur->aid != aid)
+ continue;
+
+ for (start = cur->start; start < cur->end; start++) {
+ page = grab_cache_page(inode->i_mapping, start);
+ WARN_ON(!page);
+ move_data_page(inode, page, FG_GC);
+ }
+ }
+ f2fs_submit_merged_bio(sbi, DATA, WRITE);
+ mutex_unlock(&sbi->cp_mutex);
+release:
+ /* Step #2: wait for writeback */
+ list_for_each_entry_safe(cur, tmp, &F2FS_I(inode)->atomic_pages, list) {
+ if (cur->aid != aid && !abort)
+ continue;
+
+ for (start = cur->start; start < cur->end; start++) {
+ page = find_get_page(inode->i_mapping, start);
+ WARN_ON(!page);
+ wait_on_page_writeback(page);
+ f2fs_put_page(page, 0);
+
+ /* release reference got by atomic_write operation */
+ f2fs_put_page(page, 0);
+ }
+ list_del(&cur->list);
+ kmem_cache_free(aw_entry_slab, cur);
+ }
+}
+
/*
* This function balances dirty node and dentry pages.
* In addition, it controls garbage collection.
@@ -36,6 +254,108 @@
}
}
+void f2fs_balance_fs_bg(struct f2fs_sb_info *sbi)
+{
+ /* check the # of cached NAT entries and prefree segments */
+ if (try_to_free_nats(sbi, NAT_ENTRY_PER_BLOCK) ||
+ excess_prefree_segs(sbi))
+ f2fs_sync_fs(sbi->sb, true);
+}
+
+static int issue_flush_thread(void *data)
+{
+ struct f2fs_sb_info *sbi = data;
+ struct flush_cmd_control *fcc = SM_I(sbi)->cmd_control_info;
+ wait_queue_head_t *q = &fcc->flush_wait_queue;
+repeat:
+ if (kthread_should_stop())
+ return 0;
+
+ if (!llist_empty(&fcc->issue_list)) {
+ struct bio *bio = bio_alloc(GFP_NOIO, 0);
+ struct flush_cmd *cmd, *next;
+ int ret;
+
+ fcc->dispatch_list = llist_del_all(&fcc->issue_list);
+ fcc->dispatch_list = llist_reverse_order(fcc->dispatch_list);
+
+ bio->bi_bdev = sbi->sb->s_bdev;
+ ret = submit_bio_wait(WRITE_FLUSH, bio);
+
+ llist_for_each_entry_safe(cmd, next,
+ fcc->dispatch_list, llnode) {
+ cmd->ret = ret;
+ complete(&cmd->wait);
+ }
+ bio_put(bio);
+ fcc->dispatch_list = NULL;
+ }
+
+ wait_event_interruptible(*q,
+ kthread_should_stop() || !llist_empty(&fcc->issue_list));
+ goto repeat;
+}
+
+int f2fs_issue_flush(struct f2fs_sb_info *sbi)
+{
+ struct flush_cmd_control *fcc = SM_I(sbi)->cmd_control_info;
+ struct flush_cmd cmd;
+
+ trace_f2fs_issue_flush(sbi->sb, test_opt(sbi, NOBARRIER),
+ test_opt(sbi, FLUSH_MERGE));
+
+ if (test_opt(sbi, NOBARRIER))
+ return 0;
+
+ if (!test_opt(sbi, FLUSH_MERGE))
+ return blkdev_issue_flush(sbi->sb->s_bdev, GFP_KERNEL, NULL);
+
+ init_completion(&cmd.wait);
+
+ llist_add(&cmd.llnode, &fcc->issue_list);
+
+ if (!fcc->dispatch_list)
+ wake_up(&fcc->flush_wait_queue);
+
+ wait_for_completion(&cmd.wait);
+
+ return cmd.ret;
+}
+
+int create_flush_cmd_control(struct f2fs_sb_info *sbi)
+{
+ dev_t dev = sbi->sb->s_bdev->bd_dev;
+ struct flush_cmd_control *fcc;
+ int err = 0;
+
+ fcc = kzalloc(sizeof(struct flush_cmd_control), GFP_KERNEL);
+ if (!fcc)
+ return -ENOMEM;
+ init_waitqueue_head(&fcc->flush_wait_queue);
+ init_llist_head(&fcc->issue_list);
+ SM_I(sbi)->cmd_control_info = fcc;
+ fcc->f2fs_issue_flush = kthread_run(issue_flush_thread, sbi,
+ "f2fs_flush-%u:%u", MAJOR(dev), MINOR(dev));
+ if (IS_ERR(fcc->f2fs_issue_flush)) {
+ err = PTR_ERR(fcc->f2fs_issue_flush);
+ kfree(fcc);
+ SM_I(sbi)->cmd_control_info = NULL;
+ return err;
+ }
+
+ return err;
+}
+
+void destroy_flush_cmd_control(struct f2fs_sb_info *sbi)
+{
+ struct flush_cmd_control *fcc = SM_I(sbi)->cmd_control_info;
+
+ if (fcc && fcc->f2fs_issue_flush)
+ kthread_stop(fcc->f2fs_issue_flush);
+ kfree(fcc);
+ SM_I(sbi)->cmd_control_info = NULL;
+}
+
static void __locate_dirty_segment(struct f2fs_sb_info *sbi, unsigned int segno,
enum dirty_type dirty_type)
{
@@ -50,20 +370,14 @@
if (dirty_type == DIRTY) {
struct seg_entry *sentry = get_seg_entry(sbi, segno);
- enum dirty_type t = DIRTY_HOT_DATA;
+ enum dirty_type t = sentry->type;
- dirty_type = sentry->type;
-
- if (!test_and_set_bit(segno, dirty_i->dirty_segmap[dirty_type]))
- dirty_i->nr_dirty[dirty_type]++;
-
- /* Only one bitmap should be set */
- for (; t <= DIRTY_COLD_NODE; t++) {
- if (t == dirty_type)
- continue;
- if (test_and_clear_bit(segno, dirty_i->dirty_segmap[t]))
- dirty_i->nr_dirty[t]--;
+ if (unlikely(t >= DIRTY)) {
+ f2fs_bug_on(sbi, 1);
+ return;
}
+ if (!test_and_set_bit(segno, dirty_i->dirty_segmap[t]))
+ dirty_i->nr_dirty[t]++;
}
}
@@ -76,12 +390,11 @@
dirty_i->nr_dirty[dirty_type]--;
if (dirty_type == DIRTY) {
- enum dirty_type t = DIRTY_HOT_DATA;
+ struct seg_entry *sentry = get_seg_entry(sbi, segno);
+ enum dirty_type t = sentry->type;
- /* clear all the bitmaps */
- for (; t <= DIRTY_COLD_NODE; t++)
- if (test_and_clear_bit(segno, dirty_i->dirty_segmap[t]))
- dirty_i->nr_dirty[t]--;
+ if (test_and_clear_bit(segno, dirty_i->dirty_segmap[t]))
+ dirty_i->nr_dirty[t]--;
if (get_valid_blocks(sbi, segno, sbi->segs_per_sec) == 0)
clear_bit(GET_SECNO(sbi, segno),
@@ -94,7 +407,7 @@
* Adding dirty entry into seglist is not critical operation.
* If a given segment is one of current working segments, it won't be added.
*/
-void locate_dirty_segment(struct f2fs_sb_info *sbi, unsigned int segno)
+static void locate_dirty_segment(struct f2fs_sb_info *sbi, unsigned int segno)
{
struct dirty_seglist_info *dirty_i = DIRTY_I(sbi);
unsigned short valid_blocks;
@@ -117,7 +430,113 @@
}
mutex_unlock(&dirty_i->seglist_lock);
- return;
+}
+
+static int f2fs_issue_discard(struct f2fs_sb_info *sbi,
+ block_t blkstart, block_t blklen)
+{
+ sector_t start = SECTOR_FROM_BLOCK(blkstart);
+ sector_t len = SECTOR_FROM_BLOCK(blklen);
+ trace_f2fs_issue_discard(sbi->sb, blkstart, blklen);
+ return blkdev_issue_discard(sbi->sb->s_bdev, start, len, GFP_NOFS, 0);
+}
+
+void discard_next_dnode(struct f2fs_sb_info *sbi, block_t blkaddr)
+{
+ if (f2fs_issue_discard(sbi, blkaddr, 1)) {
+ struct page *page = grab_meta_page(sbi, blkaddr);
+ /* zero-filled page */
+ set_page_dirty(page);
+ f2fs_put_page(page, 1);
+ }
+}
+
+static void add_discard_addrs(struct f2fs_sb_info *sbi, struct cp_control *cpc)
+{
+ struct list_head *head = &SM_I(sbi)->discard_list;
+ struct discard_entry *new;
+ int entries = SIT_VBLOCK_MAP_SIZE / sizeof(unsigned long);
+ int max_blocks = sbi->blocks_per_seg;
+ struct seg_entry *se = get_seg_entry(sbi, cpc->trim_start);
+ unsigned long *cur_map = (unsigned long *)se->cur_valid_map;
+ unsigned long *ckpt_map = (unsigned long *)se->ckpt_valid_map;
+ unsigned long *dmap;
+ unsigned int start = 0, end = -1;
+ bool force = (cpc->reason == CP_DISCARD);
+ int i;
+
+ if (!force && !test_opt(sbi, DISCARD))
+ return;
+
+ if (force && !se->valid_blocks) {
+ struct dirty_seglist_info *dirty_i = DIRTY_I(sbi);
+ /*
+ * if this segment is registered in the prefree list, then
+ * we should skip adding a discard candidate, and let the
+ * checkpoint do that later.
+ */
+ mutex_lock(&dirty_i->seglist_lock);
+ if (test_bit(cpc->trim_start, dirty_i->dirty_segmap[PRE])) {
+ mutex_unlock(&dirty_i->seglist_lock);
+ cpc->trimmed += sbi->blocks_per_seg;
+ return;
+ }
+ mutex_unlock(&dirty_i->seglist_lock);
+
+ new = f2fs_kmem_cache_alloc(discard_entry_slab, GFP_NOFS);
+ INIT_LIST_HEAD(&new->list);
+ new->blkaddr = START_BLOCK(sbi, cpc->trim_start);
+ new->len = sbi->blocks_per_seg;
+ list_add_tail(&new->list, head);
+ SM_I(sbi)->nr_discards += sbi->blocks_per_seg;
+ cpc->trimmed += sbi->blocks_per_seg;
+ return;
+ }
+
+ /* zero block will be discarded through the prefree list */
+ if (!se->valid_blocks || se->valid_blocks == max_blocks)
+ return;
+
+ dmap = kzalloc(SIT_VBLOCK_MAP_SIZE, GFP_KERNEL);
+ if (!dmap)
+ return;
+
+ /* SIT_VBLOCK_MAP_SIZE should be a multiple of sizeof(unsigned long) */
+ for (i = 0; i < entries; i++)
+ dmap[i] = (cur_map[i] ^ ckpt_map[i]) & ckpt_map[i];
+
+ while (force || SM_I(sbi)->nr_discards <= SM_I(sbi)->max_discards) {
+ start = __find_rev_next_bit(dmap, max_blocks, end + 1);
+ if (start >= max_blocks)
+ break;
+
+ end = __find_rev_next_zero_bit(dmap, max_blocks, start + 1);
+
+ if (end - start < cpc->trim_minlen)
+ continue;
+
+ new = f2fs_kmem_cache_alloc(discard_entry_slab, GFP_NOFS);
+ INIT_LIST_HEAD(&new->list);
+ new->blkaddr = START_BLOCK(sbi, cpc->trim_start) + start;
+ new->len = end - start;
+ cpc->trimmed += end - start;
+
+ list_add_tail(&new->list, head);
+ SM_I(sbi)->nr_discards += end - start;
+ }
+ kfree(dmap);
+}
+
+void release_discard_addrs(struct f2fs_sb_info *sbi)
+{
+ struct list_head *head = &(SM_I(sbi)->discard_list);
+ struct discard_entry *entry, *this;
+
+ /* drop caches */
+ list_for_each_entry_safe(entry, this, head, list) {
+ list_del(&entry->list);
+ kmem_cache_free(discard_entry_slab, entry);
+ }
}
/*
@@ -126,55 +545,64 @@
static void set_prefree_as_free_segments(struct f2fs_sb_info *sbi)
{
struct dirty_seglist_info *dirty_i = DIRTY_I(sbi);
- unsigned int segno, offset = 0;
- unsigned int total_segs = TOTAL_SEGS(sbi);
+ unsigned int segno;
mutex_lock(&dirty_i->seglist_lock);
- while (1) {
- segno = find_next_bit(dirty_i->dirty_segmap[PRE], total_segs,
- offset);
- if (segno >= total_segs)
- break;
+ for_each_set_bit(segno, dirty_i->dirty_segmap[PRE], MAIN_SEGS(sbi))
__set_test_and_free(sbi, segno);
- offset = segno + 1;
- }
mutex_unlock(&dirty_i->seglist_lock);
}
void clear_prefree_segments(struct f2fs_sb_info *sbi)
{
+ struct list_head *head = &(SM_I(sbi)->discard_list);
+ struct discard_entry *entry, *this;
struct dirty_seglist_info *dirty_i = DIRTY_I(sbi);
- unsigned int segno, offset = 0;
- unsigned int total_segs = TOTAL_SEGS(sbi);
+ unsigned long *prefree_map = dirty_i->dirty_segmap[PRE];
+ unsigned int start = 0, end = -1;
mutex_lock(&dirty_i->seglist_lock);
+
while (1) {
- segno = find_next_bit(dirty_i->dirty_segmap[PRE], total_segs,
- offset);
- if (segno >= total_segs)
+ int i;
+ start = find_next_bit(prefree_map, MAIN_SEGS(sbi), end + 1);
+ if (start >= MAIN_SEGS(sbi))
break;
+ end = find_next_zero_bit(prefree_map, MAIN_SEGS(sbi),
+ start + 1);
- offset = segno + 1;
- if (test_and_clear_bit(segno, dirty_i->dirty_segmap[PRE]))
- dirty_i->nr_dirty[PRE]--;
+ for (i = start; i < end; i++)
+ clear_bit(i, prefree_map);
- /* Let's use trim */
- if (test_opt(sbi, DISCARD))
- blkdev_issue_discard(sbi->sb->s_bdev,
- START_BLOCK(sbi, segno) <<
- sbi->log_sectors_per_block,
- 1 << (sbi->log_sectors_per_block +
- sbi->log_blocks_per_seg),
- GFP_NOFS, 0);
+ dirty_i->nr_dirty[PRE] -= end - start;
+
+ if (!test_opt(sbi, DISCARD))
+ continue;
+
+ f2fs_issue_discard(sbi, START_BLOCK(sbi, start),
+ (end - start) << sbi->log_blocks_per_seg);
}
mutex_unlock(&dirty_i->seglist_lock);
+
+ /* send small discards */
+ list_for_each_entry_safe(entry, this, head, list) {
+ f2fs_issue_discard(sbi, entry->blkaddr, entry->len);
+ list_del(&entry->list);
+ SM_I(sbi)->nr_discards -= entry->len;
+ kmem_cache_free(discard_entry_slab, entry);
+ }
}
-static void __mark_sit_entry_dirty(struct f2fs_sb_info *sbi, unsigned int segno)
+static bool __mark_sit_entry_dirty(struct f2fs_sb_info *sbi, unsigned int segno)
{
struct sit_info *sit_i = SIT_I(sbi);
- if (!__test_and_set_bit(segno, sit_i->dirty_sentries_bitmap))
+
+ if (!__test_and_set_bit(segno, sit_i->dirty_sentries_bitmap)) {
sit_i->dirty_sentries++;
+ return false;
+ }
+
+ return true;
}
static void __set_sit_entry_type(struct f2fs_sb_info *sbi, int type,
@@ -191,28 +619,56 @@
struct seg_entry *se;
unsigned int segno, offset;
long int new_vblocks;
+ bool check_map = false;
segno = GET_SEGNO(sbi, blkaddr);
se = get_seg_entry(sbi, segno);
new_vblocks = se->valid_blocks + del;
- offset = GET_SEGOFF_FROM_SEG0(sbi, blkaddr) & (sbi->blocks_per_seg - 1);
+ offset = GET_BLKOFF_FROM_SEG0(sbi, blkaddr);
- BUG_ON((new_vblocks >> (sizeof(unsigned short) << 3) ||
- (new_vblocks > sbi->blocks_per_seg)));
+ if (new_vblocks < 0 || new_vblocks > sbi->blocks_per_seg ||
+ (new_vblocks >> (sizeof(unsigned short) << 3)))
+ if (f2fs_handle_error(sbi))
+ check_map = true;
- se->valid_blocks = new_vblocks;
se->mtime = get_mtime(sbi);
SIT_I(sbi)->max_mtime = se->mtime;
/* Update valid block bitmap */
if (del > 0) {
if (f2fs_set_bit(offset, se->cur_valid_map))
- BUG();
+ if (f2fs_handle_error(sbi))
+ check_map = true;
} else {
if (!f2fs_clear_bit(offset, se->cur_valid_map))
- BUG();
+ if (f2fs_handle_error(sbi))
+ check_map = true;
}
+
+ if (unlikely(check_map)) {
+ int i;
+ long int vblocks = 0;
+
+ f2fs_msg(sbi->sb, KERN_ERR,
+ "cannot %svalidate block %u in segment %u with %hu valid blocks",
+ (del < 0) ? "in" : "",
+ offset, segno, se->valid_blocks);
+
+ /* assume the count was stale to start */
+ del = 0;
+ for (i = 0; i < sbi->blocks_per_seg; i++)
+ if (f2fs_test_bit(i, se->cur_valid_map))
+ vblocks++;
+ if (vblocks != se->valid_blocks) {
+ f2fs_msg(sbi->sb, KERN_INFO, "correcting valid block "
+ "counts %d -> %ld", se->valid_blocks, vblocks);
+ /* make accounting corrections */
+ del = vblocks - se->valid_blocks;
+ }
+ }
+ se->valid_blocks += del;
+
if (!f2fs_test_bit(offset, se->ckpt_valid_map))
se->ckpt_valid_blocks += del;
@@ -225,12 +681,14 @@
get_sec_entry(sbi, segno)->valid_blocks += del;
}
-static void refresh_sit_entry(struct f2fs_sb_info *sbi,
- block_t old_blkaddr, block_t new_blkaddr)
+void refresh_sit_entry(struct f2fs_sb_info *sbi, block_t old, block_t new)
{
- update_sit_entry(sbi, new_blkaddr, 1);
- if (GET_SEGNO(sbi, old_blkaddr) != NULL_SEGNO)
- update_sit_entry(sbi, old_blkaddr, -1);
+ update_sit_entry(sbi, new, 1);
+ if (GET_SEGNO(sbi, old) != NULL_SEGNO)
+ update_sit_entry(sbi, old, -1);
+
+ locate_dirty_segment(sbi, GET_SEGNO(sbi, old));
+ locate_dirty_segment(sbi, GET_SEGNO(sbi, new));
}
void invalidate_blocks(struct f2fs_sb_info *sbi, block_t addr)
@@ -238,10 +696,16 @@
unsigned int segno = GET_SEGNO(sbi, addr);
struct sit_info *sit_i = SIT_I(sbi);
- BUG_ON(addr == NULL_ADDR);
+ f2fs_bug_on(sbi, addr == NULL_ADDR);
if (addr == NEW_ADDR)
return;
+ if (segno >= TOTAL_SEGS(sbi)) {
+ f2fs_msg(sbi->sb, KERN_ERR, "invalid segment number %u", segno);
+ if (f2fs_handle_error(sbi))
+ return;
+ }
+
/* add it into sit main buffer */
mutex_lock(&sit_i->sentry_lock);
@@ -257,13 +721,12 @@
 * This function should be called under the curseg_mutex lock
*/
static void __add_sum_entry(struct f2fs_sb_info *sbi, int type,
- struct f2fs_summary *sum, unsigned short offset)
+ struct f2fs_summary *sum)
{
struct curseg_info *curseg = CURSEG_I(sbi, type);
void *addr = curseg->sum_blk;
- addr += offset * sizeof(struct f2fs_summary);
+ addr += curseg->next_blkoff * sizeof(struct f2fs_summary);
memcpy(addr, sum, sizeof(struct f2fs_summary));
- return;
}
/*
@@ -271,9 +734,8 @@
*/
int npages_for_summary_flush(struct f2fs_sb_info *sbi)
{
- int total_size_bytes = 0;
int valid_sum_count = 0;
- int i, sum_space;
+ int i, sum_in_page;
for (i = CURSEG_HOT_DATA; i <= CURSEG_COLD_DATA; i++) {
if (sbi->ckpt->alloc_type[i] == SSR)
@@ -282,13 +744,12 @@
valid_sum_count += curseg_blkoff(sbi, i);
}
- total_size_bytes = valid_sum_count * (SUMMARY_SIZE + 1)
- + sizeof(struct nat_journal) + 2
- + sizeof(struct sit_journal) + 2;
- sum_space = PAGE_CACHE_SIZE - SUM_FOOTER_SIZE;
- if (total_size_bytes < sum_space)
+ sum_in_page = (PAGE_CACHE_SIZE - 2 * SUM_JOURNAL_SIZE -
+ SUM_FOOTER_SIZE) / SUMMARY_SIZE;
+ if (valid_sum_count <= sum_in_page)
return 1;
- else if (total_size_bytes < 2 * sum_space)
+ else if ((valid_sum_count - sum_in_page) <=
+ (PAGE_CACHE_SIZE - SUM_FOOTER_SIZE) / SUMMARY_SIZE)
return 2;
return 3;
}
@@ -311,64 +772,14 @@
f2fs_put_page(page, 1);
}
-static unsigned int check_prefree_segments(struct f2fs_sb_info *sbi, int type)
-{
- struct dirty_seglist_info *dirty_i = DIRTY_I(sbi);
- unsigned long *prefree_segmap = dirty_i->dirty_segmap[PRE];
- unsigned int segno;
- unsigned int ofs = 0;
-
- /*
- * If there is not enough reserved sections,
- * we should not reuse prefree segments.
- */
- if (has_not_enough_free_secs(sbi, 0))
- return NULL_SEGNO;
-
- /*
- * NODE page should not reuse prefree segment,
- * since those information is used for SPOR.
- */
- if (IS_NODESEG(type))
- return NULL_SEGNO;
-next:
- segno = find_next_bit(prefree_segmap, TOTAL_SEGS(sbi), ofs);
- ofs += sbi->segs_per_sec;
-
- if (segno < TOTAL_SEGS(sbi)) {
- int i;
-
- /* skip intermediate segments in a section */
- if (segno % sbi->segs_per_sec)
- goto next;
-
- /* skip if the section is currently used */
- if (sec_usage_check(sbi, GET_SECNO(sbi, segno)))
- goto next;
-
- /* skip if whole section is not prefree */
- for (i = 1; i < sbi->segs_per_sec; i++)
- if (!test_bit(segno + i, prefree_segmap))
- goto next;
-
- /* skip if whole section was not free at the last checkpoint */
- for (i = 0; i < sbi->segs_per_sec; i++)
- if (get_seg_entry(sbi, segno + i)->ckpt_valid_blocks)
- goto next;
-
- return segno;
- }
- return NULL_SEGNO;
-}
-
static int is_next_segment_free(struct f2fs_sb_info *sbi, int type)
{
struct curseg_info *curseg = CURSEG_I(sbi, type);
- unsigned int segno = curseg->segno;
+ unsigned int segno = curseg->segno + 1;
struct free_segmap_info *free_i = FREE_I(sbi);
- if (segno + 1 < TOTAL_SEGS(sbi) && (segno + 1) % sbi->segs_per_sec)
- return !test_bit(segno + 1, free_i->free_segmap);
+ if (segno < MAIN_SEGS(sbi) && segno % sbi->segs_per_sec)
+ return !test_bit(segno, free_i->free_segmap);
return 0;
}
@@ -381,7 +792,7 @@
{
struct free_segmap_info *free_i = FREE_I(sbi);
unsigned int segno, secno, zoneno;
- unsigned int total_zones = TOTAL_SECS(sbi) / sbi->secs_per_zone;
+ unsigned int total_zones = MAIN_SECS(sbi) / sbi->secs_per_zone;
unsigned int hint = *newseg / sbi->segs_per_sec;
unsigned int old_zoneno = GET_ZONENO_FROM_SEGNO(sbi, *newseg);
unsigned int left_start = hint;
@@ -393,18 +804,18 @@
if (!new_sec && ((*newseg + 1) % sbi->segs_per_sec)) {
segno = find_next_zero_bit(free_i->free_segmap,
- TOTAL_SEGS(sbi), *newseg + 1);
+ MAIN_SEGS(sbi), *newseg + 1);
if (segno - *newseg < sbi->segs_per_sec -
(*newseg % sbi->segs_per_sec))
goto got_it;
}
find_other_zone:
- secno = find_next_zero_bit(free_i->free_secmap, TOTAL_SECS(sbi), hint);
- if (secno >= TOTAL_SECS(sbi)) {
+ secno = find_next_zero_bit(free_i->free_secmap, MAIN_SECS(sbi), hint);
+ if (secno >= MAIN_SECS(sbi)) {
if (dir == ALLOC_RIGHT) {
secno = find_next_zero_bit(free_i->free_secmap,
- TOTAL_SECS(sbi), 0);
- BUG_ON(secno >= TOTAL_SECS(sbi));
+ MAIN_SECS(sbi), 0);
+ f2fs_bug_on(sbi, secno >= MAIN_SECS(sbi));
} else {
go_left = 1;
left_start = hint - 1;
@@ -419,8 +830,8 @@
continue;
}
left_start = find_next_zero_bit(free_i->free_secmap,
- TOTAL_SECS(sbi), 0);
- BUG_ON(left_start >= TOTAL_SECS(sbi));
+ MAIN_SECS(sbi), 0);
+ f2fs_bug_on(sbi, left_start >= MAIN_SECS(sbi));
break;
}
secno = left_start;
@@ -459,7 +870,7 @@
}
got_it:
/* set it as dirty segment in free segmap */
- BUG_ON(test_bit(segno, free_i->free_segmap));
+ f2fs_bug_on(sbi, test_bit(segno, free_i->free_segmap));
__set_inuse(sbi, segno);
*newseg = segno;
write_unlock(&free_i->segmap_lock);
@@ -495,7 +906,7 @@
int dir = ALLOC_LEFT;
write_sum_page(sbi, curseg->sum_blk,
- GET_SUM_BLOCK(sbi, curseg->segno));
+ GET_SUM_BLOCK(sbi, segno));
if (type == CURSEG_WARM_DATA || type == CURSEG_COLD_DATA)
dir = ALLOC_RIGHT;
@@ -512,13 +923,18 @@
struct curseg_info *seg, block_t start)
{
struct seg_entry *se = get_seg_entry(sbi, seg->segno);
- block_t ofs;
- for (ofs = start; ofs < sbi->blocks_per_seg; ofs++) {
- if (!f2fs_test_bit(ofs, se->ckpt_valid_map)
- && !f2fs_test_bit(ofs, se->cur_valid_map))
- break;
- }
- seg->next_blkoff = ofs;
+ int entries = SIT_VBLOCK_MAP_SIZE / sizeof(unsigned long);
+ unsigned long target_map[entries];
+ unsigned long *ckpt_map = (unsigned long *)se->ckpt_valid_map;
+ unsigned long *cur_map = (unsigned long *)se->cur_valid_map;
+ int i, pos;
+
+ for (i = 0; i < entries; i++)
+ target_map[i] = ckpt_map[i] | cur_map[i];
+
+ pos = __find_rev_next_zero_bit(target_map, sbi->blocks_per_seg, start);
+
+ seg->next_blkoff = pos;
}
/*
@@ -594,15 +1010,8 @@
{
struct curseg_info *curseg = CURSEG_I(sbi, type);
- if (force) {
+ if (force)
new_curseg(sbi, type, true);
- goto out;
- }
-
- curseg->next_segno = check_prefree_segments(sbi, type);
-
- if (curseg->next_segno != NULL_SEGNO)
- change_curseg(sbi, type, false);
else if (type == CURSEG_WARM_NODE)
new_curseg(sbi, type, false);
else if (curseg->alloc_type == LFS && is_next_segment_free(sbi, type))
@@ -611,8 +1020,8 @@
change_curseg(sbi, type, true);
else
new_curseg(sbi, type, false);
-out:
- sbi->segment_count[curseg->alloc_type]++;
+
+ stat_inc_seg_type(sbi, curseg);
}
void allocate_new_segments(struct f2fs_sb_info *sbi)
@@ -633,126 +1042,35 @@
.allocate_segment = allocate_segment_by_default,
};
-static void f2fs_end_io_write(struct bio *bio, int err)
+int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range)
{
- const int uptodate = test_bit(BIO_UPTODATE, &bio->bi_flags);
- struct bio_vec *bvec = bio->bi_io_vec + bio->bi_vcnt - 1;
- struct bio_private *p = bio->bi_private;
+ __u64 start = range->start >> sbi->log_blocksize;
+ __u64 end = start + (range->len >> sbi->log_blocksize) - 1;
+ unsigned int start_segno, end_segno;
+ struct cp_control cpc;
- do {
- struct page *page = bvec->bv_page;
+ if (range->minlen > SEGMENT_SIZE(sbi) || start >= MAX_BLKADDR(sbi) ||
+ range->len < sbi->blocksize)
+ return -EINVAL;
- if (--bvec >= bio->bi_io_vec)
- prefetchw(&bvec->bv_page->flags);
- if (!uptodate) {
- SetPageError(page);
- if (page->mapping)
- set_bit(AS_EIO, &page->mapping->flags);
- set_ckpt_flags(p->sbi->ckpt, CP_ERROR_FLAG);
- p->sbi->sb->s_flags |= MS_RDONLY;
- }
- end_page_writeback(page);
- dec_page_count(p->sbi, F2FS_WRITEBACK);
- } while (bvec >= bio->bi_io_vec);
+ if (end <= MAIN_BLKADDR(sbi))
+ goto out;
- if (p->is_sync)
- complete(p->wait);
- kfree(p);
- bio_put(bio);
-}
+ /* start/end segment number in main_area */
+ start_segno = (start <= MAIN_BLKADDR(sbi)) ? 0 : GET_SEGNO(sbi, start);
+ end_segno = (end >= MAX_BLKADDR(sbi)) ? MAIN_SEGS(sbi) - 1 :
+ GET_SEGNO(sbi, end);
+ cpc.reason = CP_DISCARD;
+ cpc.trim_start = start_segno;
+ cpc.trim_end = end_segno;
+ cpc.trim_minlen = range->minlen >> sbi->log_blocksize;
+ cpc.trimmed = 0;
-struct bio *f2fs_bio_alloc(struct block_device *bdev, int npages)
-{
- struct bio *bio;
- struct bio_private *priv;
-retry:
- priv = kmalloc(sizeof(struct bio_private), GFP_NOFS);
- if (!priv) {
- cond_resched();
- goto retry;
- }
-
- /* No failure on bio allocation */
- bio = bio_alloc(GFP_NOIO, npages);
- bio->bi_bdev = bdev;
- bio->bi_private = priv;
- return bio;
-}
-
-static void do_submit_bio(struct f2fs_sb_info *sbi,
- enum page_type type, bool sync)
-{
- int rw = sync ? WRITE_SYNC : WRITE;
- enum page_type btype = type > META ? META : type;
-
- if (type >= META_FLUSH)
- rw = WRITE_FLUSH_FUA;
-
- if (btype == META)
- rw |= REQ_META;
-
- if (sbi->bio[btype]) {
- struct bio_private *p = sbi->bio[btype]->bi_private;
- p->sbi = sbi;
- sbi->bio[btype]->bi_end_io = f2fs_end_io_write;
-
- trace_f2fs_do_submit_bio(sbi->sb, btype, sync, sbi->bio[btype]);
-
- if (type == META_FLUSH) {
- DECLARE_COMPLETION_ONSTACK(wait);
- p->is_sync = true;
- p->wait = &wait;
- submit_bio(rw, sbi->bio[btype]);
- wait_for_completion(&wait);
- } else {
- p->is_sync = false;
- submit_bio(rw, sbi->bio[btype]);
- }
- sbi->bio[btype] = NULL;
- }
-}
-
-void f2fs_submit_bio(struct f2fs_sb_info *sbi, enum page_type type, bool sync)
-{
- down_write(&sbi->bio_sem);
- do_submit_bio(sbi, type, sync);
- up_write(&sbi->bio_sem);
-}
-
-static void submit_write_page(struct f2fs_sb_info *sbi, struct page *page,
- block_t blk_addr, enum page_type type)
-{
- struct block_device *bdev = sbi->sb->s_bdev;
-
- verify_block_addr(sbi, blk_addr);
-
- down_write(&sbi->bio_sem);
-
- inc_page_count(sbi, F2FS_WRITEBACK);
-
- if (sbi->bio[type] && sbi->last_block_in_bio[type] != blk_addr - 1)
- do_submit_bio(sbi, type, false);
-alloc_new:
- if (sbi->bio[type] == NULL) {
- sbi->bio[type] = f2fs_bio_alloc(bdev, max_hw_blocks(sbi));
- sbi->bio[type]->bi_sector = SECTOR_FROM_BLOCK(sbi, blk_addr);
- /*
- * The end_io will be assigned at the sumbission phase.
- * Until then, let bio_add_page() merge consecutive IOs as much
- * as possible.
- */
- }
-
- if (bio_add_page(sbi->bio[type], page, PAGE_CACHE_SIZE, 0) <
- PAGE_CACHE_SIZE) {
- do_submit_bio(sbi, type, false);
- goto alloc_new;
- }
-
- sbi->last_block_in_bio[type] = blk_addr;
-
- up_write(&sbi->bio_sem);
- trace_f2fs_submit_write_page(page, blk_addr, type);
+ /* do checkpoint to issue discard commands safely */
+ write_checkpoint(sbi, &cpc);
+out:
+ range->len = cpc.trimmed << sbi->log_blocksize;
+ return 0;
}
static bool __has_curseg_space(struct f2fs_sb_info *sbi, int type)
@@ -795,7 +1113,7 @@
if (S_ISDIR(inode->i_mode))
return CURSEG_HOT_DATA;
- else if (is_cold_data(page) || is_cold_file(inode))
+ else if (is_cold_data(page) || file_is_cold(inode))
return CURSEG_COLD_DATA;
else
return CURSEG_WARM_DATA;
@@ -810,102 +1128,109 @@
static int __get_segment_type(struct page *page, enum page_type p_type)
{
- struct f2fs_sb_info *sbi = F2FS_SB(page->mapping->host->i_sb);
- switch (sbi->active_logs) {
+ switch (F2FS_P_SB(page)->active_logs) {
case 2:
return __get_segment_type_2(page, p_type);
case 4:
return __get_segment_type_4(page, p_type);
}
/* NR_CURSEG_TYPE(6) logs by default */
- BUG_ON(sbi->active_logs != NR_CURSEG_TYPE);
+ f2fs_bug_on(F2FS_P_SB(page),
+ F2FS_P_SB(page)->active_logs != NR_CURSEG_TYPE);
return __get_segment_type_6(page, p_type);
}
-static void do_write_page(struct f2fs_sb_info *sbi, struct page *page,
- block_t old_blkaddr, block_t *new_blkaddr,
- struct f2fs_summary *sum, enum page_type p_type)
+void allocate_data_block(struct f2fs_sb_info *sbi, struct page *page,
+ block_t old_blkaddr, block_t *new_blkaddr,
+ struct f2fs_summary *sum, int type)
{
struct sit_info *sit_i = SIT_I(sbi);
struct curseg_info *curseg;
- unsigned int old_cursegno;
- int type;
- type = __get_segment_type(page, p_type);
curseg = CURSEG_I(sbi, type);
mutex_lock(&curseg->curseg_mutex);
*new_blkaddr = NEXT_FREE_BLKADDR(sbi, curseg);
- old_cursegno = curseg->segno;
/*
* __add_sum_entry should be resided under the curseg_mutex
* because, this function updates a summary entry in the
* current summary block.
*/
- __add_sum_entry(sbi, type, sum, curseg->next_blkoff);
+ __add_sum_entry(sbi, type, sum);
mutex_lock(&sit_i->sentry_lock);
__refresh_next_blkoff(sbi, curseg);
- sbi->block_count[curseg->alloc_type]++;
+ stat_inc_block_count(sbi, curseg);
+
+ if (!__has_curseg_space(sbi, type))
+ sit_i->s_ops->allocate_segment(sbi, type, false);
/*
* SIT information should be updated before segment allocation,
* since SSR needs latest valid block information.
*/
refresh_sit_entry(sbi, old_blkaddr, *new_blkaddr);
- if (!__has_curseg_space(sbi, type))
- sit_i->s_ops->allocate_segment(sbi, type, false);
-
- locate_dirty_segment(sbi, old_cursegno);
- locate_dirty_segment(sbi, GET_SEGNO(sbi, old_blkaddr));
mutex_unlock(&sit_i->sentry_lock);
- if (p_type == NODE)
+ if (page && IS_NODESEG(type))
fill_node_footer_blkaddr(page, NEXT_FREE_BLKADDR(sbi, curseg));
- /* writeout dirty page into bdev */
- submit_write_page(sbi, page, *new_blkaddr, p_type);
-
mutex_unlock(&curseg->curseg_mutex);
}
+static void do_write_page(struct f2fs_sb_info *sbi, struct page *page,
+ block_t old_blkaddr, block_t *new_blkaddr,
+ struct f2fs_summary *sum, struct f2fs_io_info *fio)
+{
+ int type = __get_segment_type(page, fio->type);
+
+ allocate_data_block(sbi, page, old_blkaddr, new_blkaddr, sum, type);
+
+ /* write out the dirty page into bdev */
+ f2fs_submit_page_mbio(sbi, page, *new_blkaddr, fio);
+}
+
void write_meta_page(struct f2fs_sb_info *sbi, struct page *page)
{
+ struct f2fs_io_info fio = {
+ .type = META,
+ .rw = WRITE_SYNC | REQ_META | REQ_PRIO
+ };
+
set_page_writeback(page);
- submit_write_page(sbi, page, page->index, META);
+ f2fs_submit_page_mbio(sbi, page, page->index, &fio);
}
void write_node_page(struct f2fs_sb_info *sbi, struct page *page,
+ struct f2fs_io_info *fio,
unsigned int nid, block_t old_blkaddr, block_t *new_blkaddr)
{
struct f2fs_summary sum;
set_summary(&sum, nid, 0, 0);
- do_write_page(sbi, page, old_blkaddr, new_blkaddr, &sum, NODE);
+ do_write_page(sbi, page, old_blkaddr, new_blkaddr, &sum, fio);
}
-void write_data_page(struct inode *inode, struct page *page,
- struct dnode_of_data *dn, block_t old_blkaddr,
- block_t *new_blkaddr)
+void write_data_page(struct page *page, struct dnode_of_data *dn,
+ block_t *new_blkaddr, struct f2fs_io_info *fio)
{
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
struct f2fs_summary sum;
struct node_info ni;
- BUG_ON(old_blkaddr == NULL_ADDR);
+ f2fs_bug_on(sbi, dn->data_blkaddr == NULL_ADDR);
get_node_info(sbi, dn->nid, &ni);
set_summary(&sum, dn->nid, dn->ofs_in_node, ni.version);
- do_write_page(sbi, page, old_blkaddr,
- new_blkaddr, &sum, DATA);
+ do_write_page(sbi, page, dn->data_blkaddr, new_blkaddr, &sum, fio);
}
-void rewrite_data_page(struct f2fs_sb_info *sbi, struct page *page,
- block_t old_blk_addr)
+void rewrite_data_page(struct page *page, block_t old_blkaddr,
+ struct f2fs_io_info *fio)
{
- submit_write_page(sbi, page, old_blk_addr, DATA);
+ f2fs_submit_page_mbio(F2FS_P_SB(page), page, old_blkaddr, fio);
}
void recover_data_page(struct f2fs_sb_info *sbi,
@@ -941,66 +1266,50 @@
change_curseg(sbi, type, true);
}
- curseg->next_blkoff = GET_SEGOFF_FROM_SEG0(sbi, new_blkaddr) &
- (sbi->blocks_per_seg - 1);
- __add_sum_entry(sbi, type, sum, curseg->next_blkoff);
+ curseg->next_blkoff = GET_BLKOFF_FROM_SEG0(sbi, new_blkaddr);
+ __add_sum_entry(sbi, type, sum);
refresh_sit_entry(sbi, old_blkaddr, new_blkaddr);
-
locate_dirty_segment(sbi, old_cursegno);
- locate_dirty_segment(sbi, GET_SEGNO(sbi, old_blkaddr));
mutex_unlock(&sit_i->sentry_lock);
mutex_unlock(&curseg->curseg_mutex);
}
-void rewrite_node_page(struct f2fs_sb_info *sbi,
- struct page *page, struct f2fs_summary *sum,
- block_t old_blkaddr, block_t new_blkaddr)
+static inline bool is_merged_page(struct f2fs_sb_info *sbi,
+ struct page *page, enum page_type type)
{
- struct sit_info *sit_i = SIT_I(sbi);
- int type = CURSEG_WARM_NODE;
- struct curseg_info *curseg;
- unsigned int segno, old_cursegno;
- block_t next_blkaddr = next_blkaddr_of_node(page);
- unsigned int next_segno = GET_SEGNO(sbi, next_blkaddr);
+ enum page_type btype = PAGE_TYPE_OF_BIO(type);
+ struct f2fs_bio_info *io = &sbi->write_io[btype];
+ struct bio_vec *bvec;
+ int i;
- curseg = CURSEG_I(sbi, type);
+ down_read(&io->io_rwsem);
+ if (!io->bio)
+ goto out;
- mutex_lock(&curseg->curseg_mutex);
- mutex_lock(&sit_i->sentry_lock);
-
- segno = GET_SEGNO(sbi, new_blkaddr);
- old_cursegno = curseg->segno;
-
- /* change the current segment */
- if (segno != curseg->segno) {
- curseg->next_segno = segno;
- change_curseg(sbi, type, true);
+ bio_for_each_segment_all(bvec, io->bio, i) {
+ if (page == bvec->bv_page) {
+ up_read(&io->io_rwsem);
+ return true;
+ }
}
- curseg->next_blkoff = GET_SEGOFF_FROM_SEG0(sbi, new_blkaddr) &
- (sbi->blocks_per_seg - 1);
- __add_sum_entry(sbi, type, sum, curseg->next_blkoff);
- /* change the current log to the next block addr in advance */
- if (next_segno != segno) {
- curseg->next_segno = next_segno;
- change_curseg(sbi, type, true);
+out:
+ up_read(&io->io_rwsem);
+ return false;
+}
+
+void f2fs_wait_on_page_writeback(struct page *page,
+ enum page_type type)
+{
+ if (PageWriteback(page)) {
+ struct f2fs_sb_info *sbi = F2FS_P_SB(page);
+
+ if (is_merged_page(sbi, page, type))
+ f2fs_submit_merged_bio(sbi, type, WRITE);
+ wait_on_page_writeback(page);
}
- curseg->next_blkoff = GET_SEGOFF_FROM_SEG0(sbi, next_blkaddr) &
- (sbi->blocks_per_seg - 1);
-
- /* rewrite node page */
- set_page_writeback(page);
- submit_write_page(sbi, page, new_blkaddr, NODE);
- f2fs_submit_bio(sbi, NODE, true);
- refresh_sit_entry(sbi, old_blkaddr, new_blkaddr);
-
- locate_dirty_segment(sbi, old_cursegno);
- locate_dirty_segment(sbi, GET_SEGNO(sbi, old_blkaddr));
-
- mutex_unlock(&sit_i->sentry_lock);
- mutex_unlock(&curseg->curseg_mutex);
}
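The new wait path above has to handle a subtlety of merged writes: a page staged in a not-yet-submitted bio will never complete writeback on its own, so the bio must be forced out first. A minimal userspace model of that idea (all names and the fixed-size buffer are illustrative, not f2fs API):

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_MERGE 4
#define NR_PAGES 64

/* Toy model of the merged-write path: pages are staged in a buffer and
 * only count as written once the buffer is flushed (bio submitted). */
struct merged_io {
	int pages[MAX_MERGE];
	int cnt;
};

static bool written[NR_PAGES];

static void submit_merged(struct merged_io *io)
{
	for (int i = 0; i < io->cnt; i++)
		written[io->pages[i]] = true;
	io->cnt = 0;
}

static void stage_page(struct merged_io *io, int page)
{
	if (io->cnt == MAX_MERGE)
		submit_merged(io);      /* buffer full: submit now */
	io->pages[io->cnt++] = page;
}

/* Like f2fs_wait_on_page_writeback(): a page still sitting in the merge
 * buffer will never finish on its own, so force a submit first. */
static void wait_on_page(struct merged_io *io, int page)
{
	for (int i = 0; i < io->cnt; i++) {
		if (io->pages[i] == page) {
			submit_merged(io);
			break;
		}
	}
	/* a real implementation would now sleep until writeback completes */
}
```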
static int read_compacted_summaries(struct f2fs_sb_info *sbi)
@@ -1107,9 +1416,12 @@
ns->ofs_in_node = 0;
}
} else {
- if (restore_node_summary(sbi, segno, sum)) {
+ int err;
+
+ err = restore_node_summary(sbi, segno, sum);
+ if (err) {
f2fs_put_page(new, 1);
- return -EINVAL;
+ return err;
}
}
}
@@ -1130,6 +1442,7 @@
static int restore_curseg_summaries(struct f2fs_sb_info *sbi)
{
int type = CURSEG_HOT_DATA;
+ int err;
if (is_set_ckpt_flags(F2FS_CKPT(sbi), CP_COMPACT_SUM_FLAG)) {
/* restore for compacted data summary */
@@ -1138,9 +1451,12 @@
type = CURSEG_HOT_NODE;
}
- for (; type <= CURSEG_COLD_NODE; type++)
- if (read_normal_summaries(sbi, type))
- return -EINVAL;
+ for (; type <= CURSEG_COLD_NODE; type++) {
+ err = read_normal_summaries(sbi, type);
+ if (err)
+ return err;
+ }
+
return 0;
}
@@ -1167,8 +1483,6 @@
SUM_JOURNAL_SIZE);
written_size += SUM_JOURNAL_SIZE;
- set_page_dirty(page);
-
/* Step 3: write summary entries */
for (i = CURSEG_HOT_DATA; i <= CURSEG_COLD_DATA; i++) {
unsigned short blkoff;
@@ -1187,18 +1501,20 @@
summary = (struct f2fs_summary *)(kaddr + written_size);
*summary = seg_i->sum_blk->entries[j];
written_size += SUMMARY_SIZE;
- set_page_dirty(page);
if (written_size + SUMMARY_SIZE <= PAGE_CACHE_SIZE -
SUM_FOOTER_SIZE)
continue;
+ set_page_dirty(page);
f2fs_put_page(page, 1);
page = NULL;
}
}
- if (page)
+ if (page) {
+ set_page_dirty(page);
f2fs_put_page(page, 1);
+ }
}
static void write_normal_summaries(struct f2fs_sb_info *sbi,
@@ -1230,7 +1546,6 @@
{
if (is_set_ckpt_flags(F2FS_CKPT(sbi), CP_UMOUNT_FLAG))
write_normal_summaries(sbi, start_blk, CURSEG_HOT_NODE);
- return;
}
int lookup_journal_in_cursum(struct f2fs_summary_block *sum, int type,
@@ -1259,7 +1574,7 @@
unsigned int segno)
{
struct sit_info *sit_i = SIT_I(sbi);
- unsigned int offset = SIT_BLOCK_OFFSET(sit_i, segno);
+ unsigned int offset = SIT_BLOCK_OFFSET(segno);
block_t blk_addr = sit_i->sit_base_addr + offset;
check_seg_range(sbi, segno);
@@ -1285,7 +1600,7 @@
/* get current sit block page without lock */
src_page = get_meta_page(sbi, src_off);
dst_page = grab_meta_page(sbi, dst_off);
- BUG_ON(PageDirty(src_page));
+ f2fs_bug_on(sbi, PageDirty(src_page));
src_addr = page_address(src_page);
dst_addr = page_address(dst_page);
@@ -1299,97 +1614,192 @@
return dst_page;
}
-static bool flush_sits_in_journal(struct f2fs_sb_info *sbi)
+static struct sit_entry_set *grab_sit_entry_set(void)
+{
+ struct sit_entry_set *ses =
+ f2fs_kmem_cache_alloc(sit_entry_set_slab, GFP_ATOMIC);
+
+ ses->entry_cnt = 0;
+ INIT_LIST_HEAD(&ses->set_list);
+ return ses;
+}
+
+static void release_sit_entry_set(struct sit_entry_set *ses)
+{
+ list_del(&ses->set_list);
+ kmem_cache_free(sit_entry_set_slab, ses);
+}
+
+static void adjust_sit_entry_set(struct sit_entry_set *ses,
+ struct list_head *head)
+{
+ struct sit_entry_set *next = ses;
+
+ if (list_is_last(&ses->set_list, head))
+ return;
+
+ list_for_each_entry_continue(next, head, set_list)
+ if (ses->entry_cnt <= next->entry_cnt)
+ break;
+
+ list_move_tail(&ses->set_list, &next->set_list);
+}
+
+static void add_sit_entry(unsigned int segno, struct list_head *head)
+{
+ struct sit_entry_set *ses;
+ unsigned int start_segno = START_SEGNO(segno);
+
+ list_for_each_entry(ses, head, set_list) {
+ if (ses->start_segno == start_segno) {
+ ses->entry_cnt++;
+ adjust_sit_entry_set(ses, head);
+ return;
+ }
+ }
+
+ ses = grab_sit_entry_set();
+
+ ses->start_segno = start_segno;
+ ses->entry_cnt++;
+ list_add(&ses->set_list, head);
+}
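The set list that `add_sit_entry` maintains is kept sorted by ascending entry count, so the fullest sets end up at the tail and are flushed to SIT pages while the smallest may fit in the journal. A userspace sketch of the same insertion-and-reorder idea, using an array instead of the kernel's `list_head` (names are illustrative):

```c
#include <assert.h>
#include <string.h>

#define SIT_ENTRY_PER_BLOCK 55                 /* f2fs on-disk value */
#define START_SEGNO(segno) ((segno) / SIT_ENTRY_PER_BLOCK * SIT_ENTRY_PER_BLOCK)

struct sit_set { unsigned int start_segno, entry_cnt; };

/* Bump the per-SIT-block set for 'segno', creating it if absent, and keep
 * the array sorted by ascending entry_cnt (what adjust_sit_entry_set does
 * for the kernel's linked list). Returns the new number of sets. */
static int add_sit_entry(struct sit_set *sets, int nsets, unsigned int segno)
{
	unsigned int start = START_SEGNO(segno);
	int i;

	for (i = 0; i < nsets; i++)
		if (sets[i].start_segno == start)
			break;

	if (i == nsets) {
		/* a brand-new set has the minimum count, so the head is
		 * safe (mirrors list_add() in the kernel version) */
		memmove(sets + 1, sets, nsets * sizeof(*sets));
		sets[0] = (struct sit_set){ start, 1 };
		return nsets + 1;
	}

	sets[i].entry_cnt++;
	/* the grown set may now belong further toward the tail */
	while (i + 1 < nsets && sets[i].entry_cnt > sets[i + 1].entry_cnt) {
		struct sit_set tmp = sets[i];
		sets[i] = sets[i + 1];
		sets[i + 1] = tmp;
		i++;
	}
	return nsets;
}
```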
+
+static void add_sits_in_set(struct f2fs_sb_info *sbi)
+{
+ struct f2fs_sm_info *sm_info = SM_I(sbi);
+ struct list_head *set_list = &sm_info->sit_entry_set;
+ unsigned long *bitmap = SIT_I(sbi)->dirty_sentries_bitmap;
+ unsigned int segno;
+
+ for_each_set_bit(segno, bitmap, MAIN_SEGS(sbi))
+ add_sit_entry(segno, set_list);
+}
+
+static void remove_sits_in_journal(struct f2fs_sb_info *sbi)
{
struct curseg_info *curseg = CURSEG_I(sbi, CURSEG_COLD_DATA);
struct f2fs_summary_block *sum = curseg->sum_blk;
int i;
- /*
- * If the journal area in the current summary is full of sit entries,
- * all the sit entries will be flushed. Otherwise the sit entries
- * are not able to replace with newly hot sit entries.
- */
- if (sits_in_cursum(sum) >= SIT_JOURNAL_ENTRIES) {
- for (i = sits_in_cursum(sum) - 1; i >= 0; i--) {
- unsigned int segno;
- segno = le32_to_cpu(segno_in_journal(sum, i));
- __mark_sit_entry_dirty(sbi, segno);
- }
- update_sits_in_cursum(sum, -sits_in_cursum(sum));
- return 1;
+ for (i = sits_in_cursum(sum) - 1; i >= 0; i--) {
+ unsigned int segno;
+ bool dirtied;
+
+ segno = le32_to_cpu(segno_in_journal(sum, i));
+ dirtied = __mark_sit_entry_dirty(sbi, segno);
+
+ if (!dirtied)
+ add_sit_entry(segno, &SM_I(sbi)->sit_entry_set);
}
- return 0;
+ update_sits_in_cursum(sum, -sits_in_cursum(sum));
}
/*
* CP calls this function, which flushes SIT entries including sit_journal,
* and moves prefree segs to free segs.
*/
-void flush_sit_entries(struct f2fs_sb_info *sbi)
+void flush_sit_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc)
{
struct sit_info *sit_i = SIT_I(sbi);
unsigned long *bitmap = sit_i->dirty_sentries_bitmap;
struct curseg_info *curseg = CURSEG_I(sbi, CURSEG_COLD_DATA);
struct f2fs_summary_block *sum = curseg->sum_blk;
- unsigned long nsegs = TOTAL_SEGS(sbi);
- struct page *page = NULL;
- struct f2fs_sit_block *raw_sit = NULL;
- unsigned int start = 0, end = 0;
- unsigned int segno = -1;
- bool flushed;
+ struct sit_entry_set *ses, *tmp;
+ struct list_head *head = &SM_I(sbi)->sit_entry_set;
+ bool to_journal = true;
+ struct seg_entry *se;
mutex_lock(&curseg->curseg_mutex);
mutex_lock(&sit_i->sentry_lock);
/*
- * "flushed" indicates whether sit entries in journal are flushed
- * to the SIT area or not.
+ * add and account the sit entries of the dirty bitmap into the sit
+ * entry sets temporarily
*/
- flushed = flush_sits_in_journal(sbi);
+ add_sits_in_set(sbi);
- while ((segno = find_next_bit(bitmap, nsegs, segno + 1)) < nsegs) {
- struct seg_entry *se = get_seg_entry(sbi, segno);
- int sit_offset, offset;
+ /*
+ * if there is not enough space in the journal to store dirty sit
+ * entries, remove all entries from the journal and add and account
+ * them in the sit entry set.
+ */
+ if (!__has_cursum_space(sum, sit_i->dirty_sentries, SIT_JOURNAL))
+ remove_sits_in_journal(sbi);
- sit_offset = SIT_ENTRY_OFFSET(sit_i, segno);
+ if (!sit_i->dirty_sentries)
+ goto out;
- if (flushed)
- goto to_sit_page;
+ /*
+ * there are two steps to flush sit entries:
+ * #1, flush sit entries to journal in current cold data summary block.
+ * #2, flush sit entries to sit page.
+ */
+ list_for_each_entry_safe(ses, tmp, head, set_list) {
+ struct page *page;
+ struct f2fs_sit_block *raw_sit = NULL;
+ unsigned int start_segno = ses->start_segno;
+ unsigned int end = min(start_segno + SIT_ENTRY_PER_BLOCK,
+ (unsigned long)MAIN_SEGS(sbi));
+ unsigned int segno = start_segno;
- offset = lookup_journal_in_cursum(sum, SIT_JOURNAL, segno, 1);
- if (offset >= 0) {
- segno_in_journal(sum, offset) = cpu_to_le32(segno);
- seg_info_to_raw_sit(se, &sit_in_journal(sum, offset));
- goto flush_done;
- }
-to_sit_page:
- if (!page || (start > segno) || (segno > end)) {
- if (page) {
- f2fs_put_page(page, 1);
- page = NULL;
- }
+ if (to_journal &&
+ !__has_cursum_space(sum, ses->entry_cnt, SIT_JOURNAL))
+ to_journal = false;
- start = START_SEGNO(sit_i, segno);
- end = start + SIT_ENTRY_PER_BLOCK - 1;
-
- /* read sit block that will be updated */
- page = get_next_sit_page(sbi, start);
+ if (!to_journal) {
+ page = get_next_sit_page(sbi, start_segno);
raw_sit = page_address(page);
}
- /* udpate entry in SIT block */
- seg_info_to_raw_sit(se, &raw_sit->entries[sit_offset]);
-flush_done:
- __clear_bit(segno, bitmap);
- sit_i->dirty_sentries--;
+ /* flush dirty sit entries in region of current sit set */
+ for_each_set_bit_from(segno, bitmap, end) {
+ int offset, sit_offset;
+
+ se = get_seg_entry(sbi, segno);
+
+ /* add discard candidates */
+ if (SM_I(sbi)->nr_discards < SM_I(sbi)->max_discards) {
+ cpc->trim_start = segno;
+ add_discard_addrs(sbi, cpc);
+ }
+
+ if (to_journal) {
+ offset = lookup_journal_in_cursum(sum,
+ SIT_JOURNAL, segno, 1);
+ f2fs_bug_on(sbi, offset < 0);
+ segno_in_journal(sum, offset) =
+ cpu_to_le32(segno);
+ seg_info_to_raw_sit(se,
+ &sit_in_journal(sum, offset));
+ } else {
+ sit_offset = SIT_ENTRY_OFFSET(sit_i, segno);
+ seg_info_to_raw_sit(se,
+ &raw_sit->entries[sit_offset]);
+ }
+
+ __clear_bit(segno, bitmap);
+ sit_i->dirty_sentries--;
+ ses->entry_cnt--;
+ }
+
+ if (!to_journal)
+ f2fs_put_page(page, 1);
+
+ f2fs_bug_on(sbi, ses->entry_cnt);
+ release_sit_entry_set(ses);
+ }
+
+ f2fs_bug_on(sbi, !list_empty(head));
+ f2fs_bug_on(sbi, sit_i->dirty_sentries);
+out:
+ if (cpc->reason == CP_DISCARD) {
+ for (; cpc->trim_start <= cpc->trim_end; cpc->trim_start++)
+ add_discard_addrs(sbi, cpc);
}
mutex_unlock(&sit_i->sentry_lock);
mutex_unlock(&curseg->curseg_mutex);
- /* writeout last modified SIT block */
- f2fs_put_page(page, 1);
-
set_prefree_as_free_segments(sbi);
}
@@ -1409,16 +1819,16 @@
SM_I(sbi)->sit_info = sit_i;
- sit_i->sentries = vzalloc(TOTAL_SEGS(sbi) * sizeof(struct seg_entry));
+ sit_i->sentries = vzalloc(MAIN_SEGS(sbi) * sizeof(struct seg_entry));
if (!sit_i->sentries)
return -ENOMEM;
- bitmap_size = f2fs_bitmap_size(TOTAL_SEGS(sbi));
+ bitmap_size = f2fs_bitmap_size(MAIN_SEGS(sbi));
sit_i->dirty_sentries_bitmap = kzalloc(bitmap_size, GFP_KERNEL);
if (!sit_i->dirty_sentries_bitmap)
return -ENOMEM;
- for (start = 0; start < TOTAL_SEGS(sbi); start++) {
+ for (start = 0; start < MAIN_SEGS(sbi); start++) {
sit_i->sentries[start].cur_valid_map
= kzalloc(SIT_VBLOCK_MAP_SIZE, GFP_KERNEL);
sit_i->sentries[start].ckpt_valid_map
@@ -1429,7 +1839,7 @@
}
if (sbi->segs_per_sec > 1) {
- sit_i->sec_entries = vzalloc(TOTAL_SECS(sbi) *
+ sit_i->sec_entries = vzalloc(MAIN_SECS(sbi) *
sizeof(struct sec_entry));
if (!sit_i->sec_entries)
return -ENOMEM;
@@ -1464,7 +1874,6 @@
static int build_free_segmap(struct f2fs_sb_info *sbi)
{
- struct f2fs_sm_info *sm_info = SM_I(sbi);
struct free_segmap_info *free_i;
unsigned int bitmap_size, sec_bitmap_size;
@@ -1475,12 +1884,12 @@
SM_I(sbi)->free_info = free_i;
- bitmap_size = f2fs_bitmap_size(TOTAL_SEGS(sbi));
+ bitmap_size = f2fs_bitmap_size(MAIN_SEGS(sbi));
free_i->free_segmap = kmalloc(bitmap_size, GFP_KERNEL);
if (!free_i->free_segmap)
return -ENOMEM;
- sec_bitmap_size = f2fs_bitmap_size(TOTAL_SECS(sbi));
+ sec_bitmap_size = f2fs_bitmap_size(MAIN_SECS(sbi));
free_i->free_secmap = kmalloc(sec_bitmap_size, GFP_KERNEL);
if (!free_i->free_secmap)
return -ENOMEM;
@@ -1490,8 +1899,7 @@
memset(free_i->free_secmap, 0xff, sec_bitmap_size);
/* init free segmap information */
- free_i->start_segno =
- (unsigned int) GET_SEGNO_FROM_SEG0(sbi, sm_info->main_blkaddr);
+ free_i->start_segno = GET_SEGNO_FROM_SEG0(sbi, MAIN_BLKADDR(sbi));
free_i->free_segments = 0;
free_i->free_sections = 0;
rwlock_init(&free_i->segmap_lock);
@@ -1503,7 +1911,7 @@
struct curseg_info *array;
int i;
- array = kzalloc(sizeof(*array) * NR_CURSEG_TYPE, GFP_KERNEL);
+ array = kcalloc(NR_CURSEG_TYPE, sizeof(*array), GFP_KERNEL);
if (!array)
return -ENOMEM;
@@ -1525,36 +1933,48 @@
struct sit_info *sit_i = SIT_I(sbi);
struct curseg_info *curseg = CURSEG_I(sbi, CURSEG_COLD_DATA);
struct f2fs_summary_block *sum = curseg->sum_blk;
- unsigned int start;
+ int sit_blk_cnt = SIT_BLK_CNT(sbi);
+ unsigned int i, start, end;
+ unsigned int readed, start_blk = 0;
+ int nrpages = MAX_BIO_BLOCKS(sbi);
- for (start = 0; start < TOTAL_SEGS(sbi); start++) {
- struct seg_entry *se = &sit_i->sentries[start];
- struct f2fs_sit_block *sit_blk;
- struct f2fs_sit_entry sit;
- struct page *page;
- int i;
+ do {
+ readed = ra_meta_pages(sbi, start_blk, nrpages, META_SIT);
- mutex_lock(&curseg->curseg_mutex);
- for (i = 0; i < sits_in_cursum(sum); i++) {
- if (le32_to_cpu(segno_in_journal(sum, i)) == start) {
- sit = sit_in_journal(sum, i);
- mutex_unlock(&curseg->curseg_mutex);
- goto got_it;
+ start = start_blk * sit_i->sents_per_block;
+ end = (start_blk + readed) * sit_i->sents_per_block;
+
+ for (; start < end && start < MAIN_SEGS(sbi); start++) {
+ struct seg_entry *se = &sit_i->sentries[start];
+ struct f2fs_sit_block *sit_blk;
+ struct f2fs_sit_entry sit;
+ struct page *page;
+
+ mutex_lock(&curseg->curseg_mutex);
+ for (i = 0; i < sits_in_cursum(sum); i++) {
+ if (le32_to_cpu(segno_in_journal(sum, i))
+ == start) {
+ sit = sit_in_journal(sum, i);
+ mutex_unlock(&curseg->curseg_mutex);
+ goto got_it;
+ }
+ }
+ mutex_unlock(&curseg->curseg_mutex);
+
+ page = get_current_sit_page(sbi, start);
+ sit_blk = (struct f2fs_sit_block *)page_address(page);
+ sit = sit_blk->entries[SIT_ENTRY_OFFSET(sit_i, start)];
+ f2fs_put_page(page, 1);
+got_it:
+ check_block_count(sbi, start, &sit);
+ seg_info_from_raw_sit(se, &sit);
+ if (sbi->segs_per_sec > 1) {
+ struct sec_entry *e = get_sec_entry(sbi, start);
+ e->valid_blocks += se->valid_blocks;
}
}
- mutex_unlock(&curseg->curseg_mutex);
- page = get_current_sit_page(sbi, start);
- sit_blk = (struct f2fs_sit_block *)page_address(page);
- sit = sit_blk->entries[SIT_ENTRY_OFFSET(sit_i, start)];
- f2fs_put_page(page, 1);
-got_it:
- check_block_count(sbi, start, &sit);
- seg_info_from_raw_sit(se, &sit);
- if (sbi->segs_per_sec > 1) {
- struct sec_entry *e = get_sec_entry(sbi, start);
- e->valid_blocks += se->valid_blocks;
- }
- }
+ start_blk += readed;
+ } while (start_blk < sit_blk_cnt);
}
static void init_free_segmap(struct f2fs_sb_info *sbi)
@@ -1562,7 +1982,7 @@
unsigned int start;
int type;
- for (start = 0; start < TOTAL_SEGS(sbi); start++) {
+ for (start = 0; start < MAIN_SEGS(sbi); start++) {
struct seg_entry *sentry = get_seg_entry(sbi, start);
if (!sentry->valid_blocks)
__set_free(sbi, start);
@@ -1582,15 +2002,19 @@
unsigned int segno = 0, offset = 0;
unsigned short valid_blocks;
- while (segno < TOTAL_SEGS(sbi)) {
+ while (1) {
/* find dirty segment based on free segmap */
- segno = find_next_inuse(free_i, TOTAL_SEGS(sbi), offset);
- if (segno >= TOTAL_SEGS(sbi))
+ segno = find_next_inuse(free_i, MAIN_SEGS(sbi), offset);
+ if (segno >= MAIN_SEGS(sbi))
break;
offset = segno + 1;
valid_blocks = get_valid_blocks(sbi, segno, 0);
- if (valid_blocks >= sbi->blocks_per_seg || !valid_blocks)
+ if (valid_blocks == sbi->blocks_per_seg || !valid_blocks)
continue;
+ if (valid_blocks > sbi->blocks_per_seg) {
+ f2fs_bug_on(sbi, 1);
+ continue;
+ }
mutex_lock(&dirty_i->seglist_lock);
__locate_dirty_segment(sbi, segno, DIRTY);
mutex_unlock(&dirty_i->seglist_lock);
@@ -1600,7 +2024,7 @@
static int init_victim_secmap(struct f2fs_sb_info *sbi)
{
struct dirty_seglist_info *dirty_i = DIRTY_I(sbi);
- unsigned int bitmap_size = f2fs_bitmap_size(TOTAL_SECS(sbi));
+ unsigned int bitmap_size = f2fs_bitmap_size(MAIN_SECS(sbi));
dirty_i->victim_secmap = kzalloc(bitmap_size, GFP_KERNEL);
if (!dirty_i->victim_secmap)
@@ -1621,7 +2045,7 @@
SM_I(sbi)->dirty_info = dirty_i;
mutex_init(&dirty_i->seglist_lock);
- bitmap_size = f2fs_bitmap_size(TOTAL_SEGS(sbi));
+ bitmap_size = f2fs_bitmap_size(MAIN_SEGS(sbi));
for (i = 0; i < NR_DIRTY_TYPE; i++) {
dirty_i->dirty_segmap[i] = kzalloc(bitmap_size, GFP_KERNEL);
@@ -1645,7 +2069,7 @@
sit_i->min_mtime = LLONG_MAX;
- for (segno = 0; segno < TOTAL_SEGS(sbi); segno += sbi->segs_per_sec) {
+ for (segno = 0; segno < MAIN_SEGS(sbi); segno += sbi->segs_per_sec) {
unsigned int i;
unsigned long long mtime = 0;
@@ -1674,8 +2098,6 @@
/* init sm info */
sbi->sm_info = sm_info;
- INIT_LIST_HEAD(&sm_info->wblist_head);
- spin_lock_init(&sm_info->wblist_lock);
sm_info->seg0_blkaddr = le32_to_cpu(raw_super->segment0_blkaddr);
sm_info->main_blkaddr = le32_to_cpu(raw_super->main_blkaddr);
sm_info->segment_count = le32_to_cpu(raw_super->segment_count);
@@ -1683,6 +2105,23 @@
sm_info->ovp_segments = le32_to_cpu(ckpt->overprov_segment_count);
sm_info->main_segments = le32_to_cpu(raw_super->segment_count_main);
sm_info->ssa_blkaddr = le32_to_cpu(raw_super->ssa_blkaddr);
+ sm_info->rec_prefree_segments = sm_info->main_segments *
+ DEF_RECLAIM_PREFREE_SEGMENTS / 100;
+ sm_info->ipu_policy = 1 << F2FS_IPU_FSYNC;
+ sm_info->min_ipu_util = DEF_MIN_IPU_UTIL;
+ sm_info->min_fsync_blocks = DEF_MIN_FSYNC_BLOCKS;
+
+ INIT_LIST_HEAD(&sm_info->discard_list);
+ sm_info->nr_discards = 0;
+ sm_info->max_discards = 0;
+
+ INIT_LIST_HEAD(&sm_info->sit_entry_set);
+
+ if (test_opt(sbi, FLUSH_MERGE) && !f2fs_readonly(sbi->sb)) {
+ err = create_flush_cmd_control(sbi);
+ if (err)
+ return err;
+ }
err = build_sit_info(sbi);
if (err)
@@ -1773,7 +2212,7 @@
return;
if (sit_i->sentries) {
- for (start = 0; start < TOTAL_SEGS(sbi); start++) {
+ for (start = 0; start < MAIN_SEGS(sbi); start++) {
kfree(sit_i->sentries[start].cur_valid_map);
kfree(sit_i->sentries[start].ckpt_valid_map);
}
@@ -1790,6 +2229,10 @@
void destroy_segment_manager(struct f2fs_sb_info *sbi)
{
struct f2fs_sm_info *sm_info = SM_I(sbi);
+
+ if (!sm_info)
+ return;
+ destroy_flush_cmd_control(sbi);
destroy_dirty_segmap(sbi);
destroy_curseg(sbi);
destroy_free_segmap(sbi);
@@ -1797,3 +2240,34 @@
sbi->sm_info = NULL;
kfree(sm_info);
}
+
+int __init create_segment_manager_caches(void)
+{
+ discard_entry_slab = f2fs_kmem_cache_create("discard_entry",
+ sizeof(struct discard_entry));
+ if (!discard_entry_slab)
+ goto fail;
+
+ sit_entry_set_slab = f2fs_kmem_cache_create("sit_entry_set",
+ sizeof(struct sit_entry_set));
+ if (!sit_entry_set_slab)
+ goto destroy_discard_entry;
+ aw_entry_slab = f2fs_kmem_cache_create("atomic_entry",
+ sizeof(struct atomic_range));
+ if (!aw_entry_slab)
+ goto destroy_sit_entry_set;
+ return 0;
+
+destroy_sit_entry_set:
+ kmem_cache_destroy(sit_entry_set_slab);
+destroy_discard_entry:
+ kmem_cache_destroy(discard_entry_slab);
+fail:
+ return -ENOMEM;
+}
+
+void destroy_segment_manager_caches(void)
+{
+ kmem_cache_destroy(sit_entry_set_slab);
+ kmem_cache_destroy(discard_entry_slab);
+}
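`create_segment_manager_caches` above uses the kernel's standard backward-unwinding error path: each failure label releases exactly the resources acquired before the jump, in reverse order. A self-contained illustration of the pattern (the `fail_at` knob and malloc stand-ins are purely for demonstration):

```c
#include <assert.h>
#include <stdlib.h>

/* Backward-unwinding error path: each label frees exactly what was
 * acquired before its goto, in reverse acquisition order. */
static int create_caches(void **a, void **b, void **c, int fail_at)
{
	*a = *b = *c = NULL;

	*a = (fail_at == 1) ? NULL : malloc(16);
	if (!*a)
		goto fail;
	*b = (fail_at == 2) ? NULL : malloc(16);
	if (!*b)
		goto destroy_a;
	*c = (fail_at == 3) ? NULL : malloc(16);
	if (!*c)
		goto destroy_b;
	return 0;

destroy_b:
	free(*b); *b = NULL;
destroy_a:
	free(*a); *a = NULL;
fail:
	return -1;
}
```

The payoff is that adding a fourth allocation only requires one new label, with no duplicated cleanup code on any path.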
diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
index 062424a..c994d03 100644
--- a/fs/f2fs/segment.h
+++ b/fs/f2fs/segment.h
@@ -14,17 +14,14 @@
#define NULL_SEGNO ((unsigned int)(~0))
#define NULL_SECNO ((unsigned int)(~0))
+#define DEF_RECLAIM_PREFREE_SEGMENTS 5 /* 5% over total segments */
+
/* L: Logical segment # in volume, R: Relative segment # in main area */
#define GET_L2R_SEGNO(free_i, segno) (segno - free_i->start_segno)
#define GET_R2L_SEGNO(free_i, segno) (segno + free_i->start_segno)
-#define IS_DATASEG(t) \
- ((t == CURSEG_HOT_DATA) || (t == CURSEG_COLD_DATA) || \
- (t == CURSEG_WARM_DATA))
-
-#define IS_NODESEG(t) \
- ((t == CURSEG_HOT_NODE) || (t == CURSEG_COLD_NODE) || \
- (t == CURSEG_WARM_NODE))
+#define IS_DATASEG(t) (t <= CURSEG_COLD_DATA)
+#define IS_NODESEG(t) (t >= CURSEG_HOT_NODE)
#define IS_CURSEG(sbi, seg) \
((seg == CURSEG_I(sbi, CURSEG_HOT_DATA)->segno) || \
@@ -48,18 +45,31 @@
(secno == CURSEG_I(sbi, CURSEG_COLD_NODE)->segno / \
sbi->segs_per_sec)) \
-#define START_BLOCK(sbi, segno) \
- (SM_I(sbi)->seg0_blkaddr + \
+#define MAIN_BLKADDR(sbi) (SM_I(sbi)->main_blkaddr)
+#define SEG0_BLKADDR(sbi) (SM_I(sbi)->seg0_blkaddr)
+
+#define MAIN_SEGS(sbi) (SM_I(sbi)->main_segments)
+#define MAIN_SECS(sbi) (sbi->total_sections)
+
+#define TOTAL_SEGS(sbi) (SM_I(sbi)->segment_count)
+#define TOTAL_BLKS(sbi) (TOTAL_SEGS(sbi) << sbi->log_blocks_per_seg)
+
+#define MAX_BLKADDR(sbi) (SEG0_BLKADDR(sbi) + TOTAL_BLKS(sbi))
+#define SEGMENT_SIZE(sbi) (1ULL << (sbi->log_blocksize + \
+ sbi->log_blocks_per_seg))
+
+#define START_BLOCK(sbi, segno) (SEG0_BLKADDR(sbi) + \
(GET_R2L_SEGNO(FREE_I(sbi), segno) << sbi->log_blocks_per_seg))
+
#define NEXT_FREE_BLKADDR(sbi, curseg) \
(START_BLOCK(sbi, curseg->segno) + curseg->next_blkoff)
-#define MAIN_BASE_BLOCK(sbi) (SM_I(sbi)->main_blkaddr)
-
-#define GET_SEGOFF_FROM_SEG0(sbi, blk_addr) \
- ((blk_addr) - SM_I(sbi)->seg0_blkaddr)
+#define GET_SEGOFF_FROM_SEG0(sbi, blk_addr) ((blk_addr) - SEG0_BLKADDR(sbi))
#define GET_SEGNO_FROM_SEG0(sbi, blk_addr) \
(GET_SEGOFF_FROM_SEG0(sbi, blk_addr) >> sbi->log_blocks_per_seg)
+#define GET_BLKOFF_FROM_SEG0(sbi, blk_addr) \
+ (GET_SEGOFF_FROM_SEG0(sbi, blk_addr) & (sbi->blocks_per_seg - 1))
+
#define GET_SEGNO(sbi, blk_addr) \
(((blk_addr == NULL_ADDR) || (blk_addr == NEW_ADDR)) ? \
NULL_SEGNO : GET_L2R_SEGNO(FREE_I(sbi), \
@@ -77,26 +87,21 @@
#define SIT_ENTRY_OFFSET(sit_i, segno) \
(segno % sit_i->sents_per_block)
-#define SIT_BLOCK_OFFSET(sit_i, segno) \
+#define SIT_BLOCK_OFFSET(segno) \
(segno / SIT_ENTRY_PER_BLOCK)
-#define START_SEGNO(sit_i, segno) \
- (SIT_BLOCK_OFFSET(sit_i, segno) * SIT_ENTRY_PER_BLOCK)
+#define START_SEGNO(segno) \
+ (SIT_BLOCK_OFFSET(segno) * SIT_ENTRY_PER_BLOCK)
+#define SIT_BLK_CNT(sbi) \
+ ((MAIN_SEGS(sbi) + SIT_ENTRY_PER_BLOCK - 1) / SIT_ENTRY_PER_BLOCK)
#define f2fs_bitmap_size(nr) \
(BITS_TO_LONGS(nr) * sizeof(unsigned long))
-#define TOTAL_SEGS(sbi) (SM_I(sbi)->main_segments)
-#define TOTAL_SECS(sbi) (sbi->total_sections)
-#define SECTOR_FROM_BLOCK(sbi, blk_addr) \
- (blk_addr << ((sbi)->log_blocksize - F2FS_LOG_SECTOR_SIZE))
-#define SECTOR_TO_BLOCK(sbi, sectors) \
- (sectors >> ((sbi)->log_blocksize - F2FS_LOG_SECTOR_SIZE))
-
-/* during checkpoint, bio_private is used to synchronize the last bio */
-struct bio_private {
- struct f2fs_sb_info *sbi;
- bool is_sync;
- void *wait;
-};
+#define SECTOR_FROM_BLOCK(blk_addr) \
+ (((sector_t)blk_addr) << F2FS_LOG_SECTORS_PER_BLOCK)
+#define SECTOR_TO_BLOCK(sectors) \
+ (sectors >> F2FS_LOG_SECTORS_PER_BLOCK)
+#define MAX_BIO_BLOCKS(sbi) \
+ ((int)min((int)max_hw_blocks(sbi), BIO_MAX_PAGES))
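The reworked `SECTOR_FROM_BLOCK`/`SECTOR_TO_BLOCK` macros drop the per-superblock shift in favor of the fixed `F2FS_LOG_SECTORS_PER_BLOCK` (3, since a 4KB block spans eight 512-byte sectors) and widen to `sector_t` before shifting. A sketch of the same conversions as plain functions, showing why the widening matters for large volumes:

```c
#include <assert.h>
#include <stdint.h>

#define F2FS_LOG_SECTORS_PER_BLOCK 3   /* 4KB block / 512B sector = 8 */

/* Widening to 64 bits before the shift keeps a high 32-bit block
 * address from overflowing, as the macro's (sector_t) cast does. */
static uint64_t sector_from_block(uint32_t blk_addr)
{
	return (uint64_t)blk_addr << F2FS_LOG_SECTORS_PER_BLOCK;
}

static uint32_t sector_to_block(uint64_t sectors)
{
	return (uint32_t)(sectors >> F2FS_LOG_SECTORS_PER_BLOCK);
}
```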
/*
* indicate a block allocation direction: RIGHT and LEFT.
@@ -142,6 +147,7 @@
int alloc_mode; /* LFS or SSR */
int gc_mode; /* GC_CB or GC_GREEDY */
unsigned long *dirty_segmap; /* dirty segment bitmap */
+ unsigned int max_search; /* maximum # of segments to search */
unsigned int offset; /* last scanned bitmap offset */
unsigned int ofs_unit; /* bitmap search unit */
unsigned int min_cost; /* minimum cost */
@@ -169,6 +175,13 @@
void (*allocate_segment)(struct f2fs_sb_info *, int, bool);
};
+struct atomic_range {
+ struct list_head list;
+ u64 aid;
+ pgoff_t start;
+ pgoff_t end;
+};
+
struct sit_info {
const struct segment_allocation *s_ops;
@@ -239,6 +252,12 @@
unsigned int next_segno; /* preallocated segment */
};
+struct sit_entry_set {
+ struct list_head set_list; /* link with all sit sets */
+ unsigned int start_segno; /* start segno of sits in set */
+ unsigned int entry_cnt; /* the # of sit entries in set */
+};
+
/*
* inline functions
*/
@@ -318,7 +337,7 @@
clear_bit(segno, free_i->free_segmap);
free_i->free_segments++;
- next = find_next_bit(free_i->free_segmap, TOTAL_SEGS(sbi), start_segno);
+ next = find_next_bit(free_i->free_segmap, MAIN_SEGS(sbi), start_segno);
if (next >= start_segno + sbi->segs_per_sec) {
clear_bit(secno, free_i->free_secmap);
free_i->free_sections++;
@@ -349,8 +368,8 @@
if (test_and_clear_bit(segno, free_i->free_segmap)) {
free_i->free_segments++;
- next = find_next_bit(free_i->free_segmap, TOTAL_SEGS(sbi),
- start_segno);
+ next = find_next_bit(free_i->free_segmap,
+ start_segno + sbi->segs_per_sec, start_segno);
if (next >= start_segno + sbi->segs_per_sec) {
if (test_and_clear_bit(secno, free_i->free_secmap))
free_i->free_sections++;
@@ -382,26 +401,12 @@
static inline block_t written_block_count(struct f2fs_sb_info *sbi)
{
- struct sit_info *sit_i = SIT_I(sbi);
- block_t vblocks;
-
- mutex_lock(&sit_i->sentry_lock);
- vblocks = sit_i->written_valid_blocks;
- mutex_unlock(&sit_i->sentry_lock);
-
- return vblocks;
+ return SIT_I(sbi)->written_valid_blocks;
}
static inline unsigned int free_segments(struct f2fs_sb_info *sbi)
{
- struct free_segmap_info *free_i = FREE_I(sbi);
- unsigned int free_segs;
-
- read_lock(&free_i->segmap_lock);
- free_segs = free_i->free_segments;
- read_unlock(&free_i->segmap_lock);
-
- return free_segs;
+ return FREE_I(sbi)->free_segments;
}
static inline int reserved_segments(struct f2fs_sb_info *sbi)
@@ -411,14 +416,7 @@
static inline unsigned int free_sections(struct f2fs_sb_info *sbi)
{
- struct free_segmap_info *free_i = FREE_I(sbi);
- unsigned int free_secs;
-
- read_lock(&free_i->segmap_lock);
- free_secs = free_i->free_sections;
- read_unlock(&free_i->segmap_lock);
-
- return free_secs;
+ return FREE_I(sbi)->free_sections;
}
static inline unsigned int prefree_segments(struct f2fs_sb_info *sbi)
@@ -453,7 +451,10 @@
static inline bool need_SSR(struct f2fs_sb_info *sbi)
{
- return (free_sections(sbi) < overprovision_sections(sbi));
+ int node_secs = get_blocktype_secs(sbi, F2FS_DIRTY_NODES);
+ int dent_secs = get_blocktype_secs(sbi, F2FS_DIRTY_DENTS);
+ return free_sections(sbi) <= (node_secs + 2 * dent_secs +
+ reserved_sections(sbi) + 1);
}
static inline bool has_not_enough_free_secs(struct f2fs_sb_info *sbi, int freed)
@@ -461,33 +462,75 @@
int node_secs = get_blocktype_secs(sbi, F2FS_DIRTY_NODES);
int dent_secs = get_blocktype_secs(sbi, F2FS_DIRTY_DENTS);
- if (sbi->por_doing)
+ if (unlikely(sbi->por_doing))
return false;
- return ((free_sections(sbi) + freed) <= (node_secs + 2 * dent_secs +
- reserved_sections(sbi)));
+ return (free_sections(sbi) + freed) <= (node_secs + 2 * dent_secs +
+ reserved_sections(sbi));
+}
+
+static inline bool excess_prefree_segs(struct f2fs_sb_info *sbi)
+{
+ return prefree_segments(sbi) > SM_I(sbi)->rec_prefree_segments;
}
static inline int utilization(struct f2fs_sb_info *sbi)
{
- return div_u64(valid_user_blocks(sbi) * 100, sbi->user_block_count);
+ return div_u64((u64)valid_user_blocks(sbi) * 100,
+ sbi->user_block_count);
}
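The patch adds a `(u64)` cast before the multiplication in `utilization()`. The sketch below, using standard C types rather than kernel ones, shows why: on a large volume, `valid_blocks * 100` can overflow 32 bits before the division happens.

```c
#include <stdint.h>

/* Sketch of utilization(): FS utilization as an integer percentage.
 * The (uint64_t) cast mirrors the diff's (u64) cast and keeps the
 * multiplication by 100 from overflowing 32 bits on large volumes. */
static int utilization_sketch(uint32_t valid_blocks, uint32_t user_blocks)
{
        return (int)(((uint64_t)valid_blocks * 100) / user_blocks);
}
```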
/*
 * Sometimes it is better for f2fs to drop its out-of-place update policy.
- * So, if fs utilization is over MIN_IPU_UTIL, then f2fs tries to write
- * data in the original place likewise other traditional file systems.
- * But, currently set 100 in percentage, which means it is disabled.
- * See below need_inplace_update().
+ * Users can control the policy through sysfs entries.
+ * The supported policies and their triggering conditions are as follows.
+ * F2FS_IPU_FORCE - all the time,
+ * F2FS_IPU_SSR - if SSR mode is activated,
+ * F2FS_IPU_UTIL - if FS utilization is over threshold,
+ * F2FS_IPU_SSR_UTIL - if SSR mode is activated and FS utilization is over
+ * threshold,
+ * F2FS_IPU_FSYNC - activated in fsync path only for high performance flash
+ * storages. IPU will be triggered only if the number of dirty
+ * pages exceeds min_fsync_blocks.
+ * F2FS_IPU_DISABLE - disable IPU. (default option)
*/
-#define MIN_IPU_UTIL 100
+#define DEF_MIN_IPU_UTIL 70
+#define DEF_MIN_FSYNC_BLOCKS 8
+
+enum {
+ F2FS_IPU_FORCE,
+ F2FS_IPU_SSR,
+ F2FS_IPU_UTIL,
+ F2FS_IPU_SSR_UTIL,
+ F2FS_IPU_FSYNC,
+};
+
static inline bool need_inplace_update(struct inode *inode)
{
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
- if (S_ISDIR(inode->i_mode))
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ unsigned int policy = SM_I(sbi)->ipu_policy;
+ struct f2fs_inode_info *fi = F2FS_I(inode);
+
+ /* IPU can be done only for the user data */
+ if (S_ISDIR(inode->i_mode) || is_inode_flag_set(fi, FI_ATOMIC_FILE))
return false;
- if (need_SSR(sbi) && utilization(sbi) > MIN_IPU_UTIL)
+
+ if (policy & (0x1 << F2FS_IPU_FORCE))
return true;
+ if (policy & (0x1 << F2FS_IPU_SSR) && need_SSR(sbi))
+ return true;
+ if (policy & (0x1 << F2FS_IPU_UTIL) &&
+ utilization(sbi) > SM_I(sbi)->min_ipu_util)
+ return true;
+ if (policy & (0x1 << F2FS_IPU_SSR_UTIL) && need_SSR(sbi) &&
+ utilization(sbi) > SM_I(sbi)->min_ipu_util)
+ return true;
+
+ /* this is only set during fdatasync */
+ if (policy & (0x1 << F2FS_IPU_FSYNC) &&
+ is_inode_flag_set(fi, FI_NEED_IPU))
+ return true;
+
return false;
}
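Because `ipu_policy` is a bitmask (bit N enables policy N), several policies can be armed at once and the first whose condition holds wins. A hypothetical userspace re-creation of that decision logic, with invented names and only three of the policies:

```c
#include <stdbool.h>

/* Bit positions mirror the style of the F2FS_IPU_* enum; the names and
 * the reduced policy set here are illustrative only. */
enum { IPU_FORCE, IPU_SSR, IPU_UTIL };

static bool ipu_allowed(unsigned policy, bool ssr_needed,
                        int util, int min_util)
{
        if (policy & (1u << IPU_FORCE))
                return true;                    /* always allow IPU */
        if ((policy & (1u << IPU_SSR)) && ssr_needed)
                return true;                    /* allow when SSR is active */
        if ((policy & (1u << IPU_UTIL)) && util > min_util)
                return true;                    /* allow above the threshold */
        return false;                           /* empty mask disables IPU */
}
```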
@@ -511,24 +554,17 @@
return curseg->next_blkoff;
}
+#ifdef CONFIG_F2FS_CHECK_FS
static inline void check_seg_range(struct f2fs_sb_info *sbi, unsigned int segno)
{
unsigned int end_segno = SM_I(sbi)->segment_count - 1;
BUG_ON(segno > end_segno);
}
-/*
- * This function is used for only debugging.
- * NOTE: In future, we have to remove this function.
- */
static inline void verify_block_addr(struct f2fs_sb_info *sbi, block_t blk_addr)
{
- struct f2fs_sm_info *sm_info = SM_I(sbi);
- block_t total_blks = sm_info->segment_count << sbi->log_blocks_per_seg;
- block_t start_addr = sm_info->seg0_blkaddr;
- block_t end_addr = start_addr + total_blks - 1;
- BUG_ON(blk_addr < start_addr);
- BUG_ON(blk_addr > end_addr);
+ BUG_ON(blk_addr < SEG0_BLKADDR(sbi));
+ BUG_ON(blk_addr >= MAX_BLKADDR(sbi));
}
/*
@@ -539,8 +575,9 @@
{
struct f2fs_sm_info *sm_info = SM_I(sbi);
unsigned int end_segno = sm_info->segment_count - 1;
+ bool is_valid = test_bit_le(0, raw_sit->valid_map) ? true : false;
int valid_blocks = 0;
- int i;
+ int cur_pos = 0, next_pos;
/* check segment usage */
BUG_ON(GET_SIT_VBLOCKS(raw_sit) > sbi->blocks_per_seg);
@@ -549,17 +586,59 @@
BUG_ON(segno > end_segno);
/* check bitmap with valid block count */
- for (i = 0; i < sbi->blocks_per_seg; i++)
- if (f2fs_test_bit(i, raw_sit->valid_map))
- valid_blocks++;
+ do {
+ if (is_valid) {
+ next_pos = find_next_zero_bit_le(&raw_sit->valid_map,
+ sbi->blocks_per_seg,
+ cur_pos);
+ valid_blocks += next_pos - cur_pos;
+ } else
+ next_pos = find_next_bit_le(&raw_sit->valid_map,
+ sbi->blocks_per_seg,
+ cur_pos);
+ cur_pos = next_pos;
+ is_valid = !is_valid;
+ } while (cur_pos < sbi->blocks_per_seg);
BUG_ON(GET_SIT_VBLOCKS(raw_sit) != valid_blocks);
}
+#else
+static inline void check_seg_range(struct f2fs_sb_info *sbi, unsigned int segno)
+{
+ unsigned int end_segno = SM_I(sbi)->segment_count - 1;
+
+ if (segno > end_segno)
+ sbi->need_fsck = true;
+}
+
+static inline void verify_block_addr(struct f2fs_sb_info *sbi, block_t blk_addr)
+{
+ if (blk_addr < SEG0_BLKADDR(sbi) || blk_addr >= MAX_BLKADDR(sbi))
+ sbi->need_fsck = true;
+}
+
+/*
+ * Summary block is always treated as an invalid block
+ */
+static inline void check_block_count(struct f2fs_sb_info *sbi,
+ int segno, struct f2fs_sit_entry *raw_sit)
+{
+ unsigned int end_segno = SM_I(sbi)->segment_count - 1;
+
+ /* check segment usage */
+ if (GET_SIT_VBLOCKS(raw_sit) > sbi->blocks_per_seg)
+ sbi->need_fsck = true;
+
+ /* check boundary of a given segment number */
+ if (segno > end_segno)
+ sbi->need_fsck = true;
+}
+#endif
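The new `check_block_count()` replaces the per-bit loop with an alternating-run walk: find the next run boundary, add the length of each run of set bits, flip the expected bit value, and repeat. The sketch below models `find_next_bit_le`/`find_next_zero_bit_le` with a plain loop (the kernel uses optimized little-endian bitops), so only the run-walking structure is taken from the diff:

```c
/* Return the first position >= pos whose bit equals `want` (0 or 1),
 * or `size` if there is none. Plain-loop stand-in for find_next_*_bit. */
static int next_bit(const unsigned char *map, int size, int pos, int want)
{
        while (pos < size && ((map[pos / 8] >> (pos % 8)) & 1) != want)
                pos++;
        return pos;
}

/* Run-based count of set bits, mirroring the check_block_count() walk. */
static int count_valid_blocks(const unsigned char *map, int size)
{
        int is_valid = (map[0] & 1) != 0;
        int cur = 0, next, valid = 0;

        do {
                next = next_bit(map, size, cur, !is_valid);
                if (is_valid)
                        valid += next - cur;    /* length of this set run */
                cur = next;
                is_valid = !is_valid;           /* runs alternate */
        } while (cur < size);
        return valid;
}
```

For sparse or dense bitmaps this touches one boundary per run instead of one test per bit, which is the point of the change.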
static inline pgoff_t current_sit_addr(struct f2fs_sb_info *sbi,
unsigned int start)
{
struct sit_info *sit_i = SIT_I(sbi);
- unsigned int offset = SIT_BLOCK_OFFSET(sit_i, start);
+ unsigned int offset = SIT_BLOCK_OFFSET(start);
block_t blk_addr = sit_i->sit_base_addr + offset;
check_seg_range(sbi, start);
@@ -586,7 +665,7 @@
static inline void set_to_next_sit(struct sit_info *sit_i, unsigned int start)
{
- unsigned int block_off = SIT_BLOCK_OFFSET(sit_i, start);
+ unsigned int block_off = SIT_BLOCK_OFFSET(start);
if (f2fs_test_bit(block_off, sit_i->sit_bitmap))
f2fs_clear_bit(block_off, sit_i->sit_bitmap);
@@ -633,5 +712,48 @@
{
struct block_device *bdev = sbi->sb->s_bdev;
struct request_queue *q = bdev_get_queue(bdev);
- return SECTOR_TO_BLOCK(sbi, queue_max_sectors(q));
+ return SECTOR_TO_BLOCK(queue_max_sectors(q));
+}
+
+/*
+ * It is very important to gather dirty pages and write at once, so that we can
+ * submit a big bio without interfering with other data writes.
+ * By default, 512 pages for directory data,
+ * 512 pages (2MB) * 3 for three types of nodes, and
+ * max_bio_blocks for meta are set.
+ */
+static inline int nr_pages_to_skip(struct f2fs_sb_info *sbi, int type)
+{
+ if (type == DATA)
+ return sbi->blocks_per_seg;
+ else if (type == NODE)
+ return 3 * sbi->blocks_per_seg;
+ else if (type == META)
+ return MAX_BIO_BLOCKS(sbi);
+ else
+ return 0;
+}
+
+/*
+ * When writing pages, it is better to align nr_to_write to the segment size.
+ */
+static inline long nr_pages_to_write(struct f2fs_sb_info *sbi, int type,
+ struct writeback_control *wbc)
+{
+ long nr_to_write, desired;
+
+ if (wbc->sync_mode != WB_SYNC_NONE)
+ return 0;
+
+ nr_to_write = wbc->nr_to_write;
+
+ if (type == DATA)
+ desired = 4096;
+ else if (type == NODE)
+ desired = 3 * max_hw_blocks(sbi);
+ else
+ desired = MAX_BIO_BLOCKS(sbi);
+
+ wbc->nr_to_write = desired;
+ return desired - nr_to_write;
}
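The effect of `nr_pages_to_write()` can be sketched in isolation: for non-sync writeback, the caller's `nr_to_write` is overwritten with a type-specific batch size and the difference is reported back, so writeback submits segment-aligned batches. This is a hypothetical simplification, with the sync/non-sync distinction reduced to a flag:

```c
/* Replace the caller's nr_to_write with `desired` for non-sync writeback
 * and return how many extra pages were granted; leave sync requests alone. */
static long align_nr_to_write(long *nr_to_write, long desired, int sync)
{
        long old = *nr_to_write;

        if (sync)               /* WB_SYNC_ALL-style request: don't touch */
                return 0;
        *nr_to_write = desired;
        return desired - old;
}
```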
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index 03ab8b8..ab5e516 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -18,45 +18,219 @@
#include <linux/parser.h>
#include <linux/mount.h>
#include <linux/seq_file.h>
+#include <linux/proc_fs.h>
#include <linux/random.h>
#include <linux/exportfs.h>
#include <linux/blkdev.h>
#include <linux/f2fs_fs.h>
+#include <linux/sysfs.h>
#include "f2fs.h"
#include "node.h"
#include "segment.h"
#include "xattr.h"
+#include "gc.h"
#define CREATE_TRACE_POINTS
#include <trace/events/f2fs.h>
+static struct proc_dir_entry *f2fs_proc_root;
static struct kmem_cache *f2fs_inode_cachep;
+static struct kset *f2fs_kset;
enum {
- Opt_gc_background_off,
+ Opt_gc_background,
Opt_disable_roll_forward,
Opt_discard,
Opt_noheap,
+ Opt_user_xattr,
Opt_nouser_xattr,
+ Opt_acl,
Opt_noacl,
Opt_active_logs,
Opt_disable_ext_identify,
+ Opt_inline_xattr,
+ Opt_inline_data,
+ Opt_flush_merge,
+ Opt_nobarrier,
+ Opt_android_emu,
+ Opt_err_continue,
+ Opt_err_panic,
+ Opt_err_recover,
Opt_err,
};
static match_table_t f2fs_tokens = {
- {Opt_gc_background_off, "background_gc_off"},
+ {Opt_gc_background, "background_gc=%s"},
{Opt_disable_roll_forward, "disable_roll_forward"},
{Opt_discard, "discard"},
{Opt_noheap, "no_heap"},
+ {Opt_user_xattr, "user_xattr"},
{Opt_nouser_xattr, "nouser_xattr"},
+ {Opt_acl, "acl"},
{Opt_noacl, "noacl"},
{Opt_active_logs, "active_logs=%u"},
{Opt_disable_ext_identify, "disable_ext_identify"},
+ {Opt_inline_xattr, "inline_xattr"},
+ {Opt_inline_data, "inline_data"},
+ {Opt_flush_merge, "flush_merge"},
+ {Opt_nobarrier, "nobarrier"},
+ {Opt_android_emu, "android_emu=%s"},
+ {Opt_err_continue, "errors=continue"},
+ {Opt_err_panic, "errors=panic"},
+ {Opt_err_recover, "errors=recover"},
{Opt_err, NULL},
};
+/* Sysfs support for f2fs */
+enum {
+ GC_THREAD, /* struct f2fs_gc_thread */
+ SM_INFO, /* struct f2fs_sm_info */
+ NM_INFO, /* struct f2fs_nm_info */
+ F2FS_SBI, /* struct f2fs_sb_info */
+};
+
+struct f2fs_attr {
+ struct attribute attr;
+ ssize_t (*show)(struct f2fs_attr *, struct f2fs_sb_info *, char *);
+ ssize_t (*store)(struct f2fs_attr *, struct f2fs_sb_info *,
+ const char *, size_t);
+ int struct_type;
+ int offset;
+};
+
+static unsigned char *__struct_ptr(struct f2fs_sb_info *sbi, int struct_type)
+{
+ if (struct_type == GC_THREAD)
+ return (unsigned char *)sbi->gc_thread;
+ else if (struct_type == SM_INFO)
+ return (unsigned char *)SM_I(sbi);
+ else if (struct_type == NM_INFO)
+ return (unsigned char *)NM_I(sbi);
+ else if (struct_type == F2FS_SBI)
+ return (unsigned char *)sbi;
+ return NULL;
+}
+
+static ssize_t f2fs_sbi_show(struct f2fs_attr *a,
+ struct f2fs_sb_info *sbi, char *buf)
+{
+ unsigned char *ptr = NULL;
+ unsigned int *ui;
+
+ ptr = __struct_ptr(sbi, a->struct_type);
+ if (!ptr)
+ return -EINVAL;
+
+ ui = (unsigned int *)(ptr + a->offset);
+
+ return snprintf(buf, PAGE_SIZE, "%u\n", *ui);
+}
+
+static ssize_t f2fs_sbi_store(struct f2fs_attr *a,
+ struct f2fs_sb_info *sbi,
+ const char *buf, size_t count)
+{
+ unsigned char *ptr;
+ unsigned long t;
+ unsigned int *ui;
+ ssize_t ret;
+
+ ptr = __struct_ptr(sbi, a->struct_type);
+ if (!ptr)
+ return -EINVAL;
+
+ ui = (unsigned int *)(ptr + a->offset);
+
+ ret = kstrtoul(skip_spaces(buf), 0, &t);
+ if (ret < 0)
+ return ret;
+ *ui = t;
+ return count;
+}
+
+static ssize_t f2fs_attr_show(struct kobject *kobj,
+ struct attribute *attr, char *buf)
+{
+ struct f2fs_sb_info *sbi = container_of(kobj, struct f2fs_sb_info,
+ s_kobj);
+ struct f2fs_attr *a = container_of(attr, struct f2fs_attr, attr);
+
+ return a->show ? a->show(a, sbi, buf) : 0;
+}
+
+static ssize_t f2fs_attr_store(struct kobject *kobj, struct attribute *attr,
+ const char *buf, size_t len)
+{
+ struct f2fs_sb_info *sbi = container_of(kobj, struct f2fs_sb_info,
+ s_kobj);
+ struct f2fs_attr *a = container_of(attr, struct f2fs_attr, attr);
+
+ return a->store ? a->store(a, sbi, buf, len) : 0;
+}
+
+static void f2fs_sb_release(struct kobject *kobj)
+{
+ struct f2fs_sb_info *sbi = container_of(kobj, struct f2fs_sb_info,
+ s_kobj);
+ complete(&sbi->s_kobj_unregister);
+}
+
+#define F2FS_ATTR_OFFSET(_struct_type, _name, _mode, _show, _store, _offset) \
+static struct f2fs_attr f2fs_attr_##_name = { \
+ .attr = {.name = __stringify(_name), .mode = _mode }, \
+ .show = _show, \
+ .store = _store, \
+ .struct_type = _struct_type, \
+ .offset = _offset \
+}
+
+#define F2FS_RW_ATTR(struct_type, struct_name, name, elname) \
+ F2FS_ATTR_OFFSET(struct_type, name, 0644, \
+ f2fs_sbi_show, f2fs_sbi_store, \
+ offsetof(struct struct_name, elname))
+
+F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_min_sleep_time, min_sleep_time);
+F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_max_sleep_time, max_sleep_time);
+F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_no_gc_sleep_time, no_gc_sleep_time);
+F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_idle, gc_idle);
+F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, reclaim_segments, rec_prefree_segments);
+F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, max_small_discards, max_discards);
+F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, ipu_policy, ipu_policy);
+F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, min_ipu_util, min_ipu_util);
+F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, min_fsync_blocks, min_fsync_blocks);
+F2FS_RW_ATTR(NM_INFO, f2fs_nm_info, ram_thresh, ram_thresh);
+F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, max_victim_search, max_victim_search);
+F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, dir_level, dir_level);
+
+#define ATTR_LIST(name) (&f2fs_attr_##name.attr)
+static struct attribute *f2fs_attrs[] = {
+ ATTR_LIST(gc_min_sleep_time),
+ ATTR_LIST(gc_max_sleep_time),
+ ATTR_LIST(gc_no_gc_sleep_time),
+ ATTR_LIST(gc_idle),
+ ATTR_LIST(reclaim_segments),
+ ATTR_LIST(max_small_discards),
+ ATTR_LIST(ipu_policy),
+ ATTR_LIST(min_ipu_util),
+ ATTR_LIST(min_fsync_blocks),
+ ATTR_LIST(max_victim_search),
+ ATTR_LIST(dir_level),
+ ATTR_LIST(ram_thresh),
+ NULL,
+};
+
+static const struct sysfs_ops f2fs_attr_ops = {
+ .show = f2fs_attr_show,
+ .store = f2fs_attr_store,
+};
+
+static struct kobj_type f2fs_ktype = {
+ .default_attrs = f2fs_attrs,
+ .sysfs_ops = &f2fs_attr_ops,
+ .release = f2fs_sb_release,
+};
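The sysfs machinery above avoids one show/store pair per tunable: each `f2fs_attr` records only a struct selector and a byte offset (via `offsetof()`), and a single generic pair recomputes the field pointer at access time. A minimal userspace sketch of that offset-driven scheme, with hypothetical struct and field names:

```c
#include <stddef.h>

struct tunables {                       /* stand-in for f2fs_gc_kthread etc. */
        unsigned int min_sleep;
        unsigned int max_sleep;
};

struct attr_desc {                      /* stand-in for struct f2fs_attr */
        const char *name;
        size_t offset;
};

/* One generic accessor pair serves every field via its recorded offset. */
static unsigned int attr_read(const struct tunables *t,
                              const struct attr_desc *a)
{
        return *(const unsigned int *)((const char *)t + a->offset);
}

static void attr_write(struct tunables *t, const struct attr_desc *a,
                       unsigned int v)
{
        *(unsigned int *)((char *)t + a->offset) = v;
}
```

Adding a new tunable then costs one `attr_desc` entry (one `F2FS_RW_ATTR` line in the diff) and no new functions.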
+
void f2fs_msg(struct super_block *sb, const char *level, const char *fmt, ...)
{
struct va_format vaf;
@@ -76,11 +250,186 @@
inode_init_once(&fi->vfs_inode);
}
+static int parse_android_emu(struct f2fs_sb_info *sbi, char *args)
+{
+ char *sep = args;
+ char *sepres;
+ int ret;
+
+ if (!sep)
+ return -EINVAL;
+
+ sepres = strsep(&sep, ":");
+ if (!sep)
+ return -EINVAL;
+ ret = kstrtou32(sepres, 0, &sbi->android_emu_uid);
+ if (ret)
+ return ret;
+
+ sepres = strsep(&sep, ":");
+ if (!sep)
+ return -EINVAL;
+ ret = kstrtou32(sepres, 0, &sbi->android_emu_gid);
+ if (ret)
+ return ret;
+
+ sepres = strsep(&sep, ":");
+ ret = kstrtou16(sepres, 8, &sbi->android_emu_mode);
+ if (ret)
+ return ret;
+
+ if (sep && strstr(sep, "nocase"))
+ sbi->android_emu_flags = F2FS_ANDROID_EMU_NOCASE;
+
+ return 0;
+}
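The `uid:gid:mode` parsing above leans on a `strsep()` idiom: each call peels off one colon-separated field, and the source pointer becoming NULL signals that the expected separator was missing. A self-contained sketch of that idiom (error handling collapsed to a `-1` return, numeric parsing via `strtoul` instead of the kernel's `kstrtou32`/`kstrtou16`):

```c
#define _GNU_SOURCE             /* for strsep() on glibc */
#include <string.h>
#include <stdlib.h>

/* Parse "uid:gid:mode" (mode in octal); return 0 on success, -1 if a
 * colon-separated field is missing. */
static int parse_emu_triple(char *s, unsigned *uid, unsigned *gid,
                            unsigned *mode)
{
        char *field;

        field = strsep(&s, ":");
        if (!s)
                return -1;              /* no first colon */
        *uid = (unsigned)strtoul(field, NULL, 0);

        field = strsep(&s, ":");
        if (!s)
                return -1;              /* no second colon */
        *gid = (unsigned)strtoul(field, NULL, 0);

        *mode = (unsigned)strtoul(s, NULL, 8);  /* mode is octal, base 8 */
        return 0;
}
```

Note that `strsep()` mutates its input, which is why the kernel code parses a `match_strdup()` copy and frees it afterwards.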
+
+static int parse_options(struct super_block *sb, char *options)
+{
+ struct f2fs_sb_info *sbi = F2FS_SB(sb);
+ substring_t args[MAX_OPT_ARGS];
+ char *p, *name;
+ int arg = 0;
+
+ if (!options)
+ return 0;
+
+ while ((p = strsep(&options, ",")) != NULL) {
+ int token;
+ if (!*p)
+ continue;
+ /*
+ * Initialize args struct so we know whether arg was
+ * found; some options take optional arguments.
+ */
+ args[0].to = args[0].from = NULL;
+ token = match_token(p, f2fs_tokens, args);
+
+ switch (token) {
+ case Opt_gc_background:
+ name = match_strdup(&args[0]);
+
+ if (!name)
+ return -ENOMEM;
+ if (strlen(name) == 2 && !strncmp(name, "on", 2))
+ set_opt(sbi, BG_GC);
+ else if (strlen(name) == 3 && !strncmp(name, "off", 3))
+ clear_opt(sbi, BG_GC);
+ else {
+ kfree(name);
+ return -EINVAL;
+ }
+ kfree(name);
+ break;
+ case Opt_disable_roll_forward:
+ set_opt(sbi, DISABLE_ROLL_FORWARD);
+ break;
+ case Opt_discard:
+ set_opt(sbi, DISCARD);
+ break;
+ case Opt_noheap:
+ set_opt(sbi, NOHEAP);
+ break;
+#ifdef CONFIG_F2FS_FS_XATTR
+ case Opt_user_xattr:
+ set_opt(sbi, XATTR_USER);
+ break;
+ case Opt_nouser_xattr:
+ clear_opt(sbi, XATTR_USER);
+ break;
+ case Opt_inline_xattr:
+ set_opt(sbi, INLINE_XATTR);
+ break;
+#else
+ case Opt_user_xattr:
+ f2fs_msg(sb, KERN_INFO,
+ "user_xattr options not supported");
+ break;
+ case Opt_nouser_xattr:
+ f2fs_msg(sb, KERN_INFO,
+ "nouser_xattr options not supported");
+ break;
+ case Opt_inline_xattr:
+ f2fs_msg(sb, KERN_INFO,
+ "inline_xattr options not supported");
+ break;
+#endif
+#ifdef CONFIG_F2FS_FS_POSIX_ACL
+ case Opt_acl:
+ set_opt(sbi, POSIX_ACL);
+ break;
+ case Opt_noacl:
+ clear_opt(sbi, POSIX_ACL);
+ break;
+#else
+ case Opt_acl:
+ f2fs_msg(sb, KERN_INFO, "acl options not supported");
+ break;
+ case Opt_noacl:
+ f2fs_msg(sb, KERN_INFO, "noacl options not supported");
+ break;
+#endif
+ case Opt_active_logs:
+ if (args->from && match_int(args, &arg))
+ return -EINVAL;
+ if (arg != 2 && arg != 4 && arg != NR_CURSEG_TYPE)
+ return -EINVAL;
+ sbi->active_logs = arg;
+ break;
+ case Opt_disable_ext_identify:
+ set_opt(sbi, DISABLE_EXT_IDENTIFY);
+ break;
+ case Opt_inline_data:
+ set_opt(sbi, INLINE_DATA);
+ break;
+ case Opt_flush_merge:
+ set_opt(sbi, FLUSH_MERGE);
+ break;
+ case Opt_nobarrier:
+ set_opt(sbi, NOBARRIER);
+ break;
+ case Opt_err_continue:
+ clear_opt(sbi, ERRORS_RECOVER);
+ clear_opt(sbi, ERRORS_PANIC);
+ break;
+ case Opt_err_panic:
+ set_opt(sbi, ERRORS_PANIC);
+ clear_opt(sbi, ERRORS_RECOVER);
+ break;
+ case Opt_err_recover:
+ set_opt(sbi, ERRORS_RECOVER);
+ clear_opt(sbi, ERRORS_PANIC);
+ break;
+ case Opt_android_emu:
+ if (args->from) {
+ int ret;
+ char *perms = match_strdup(args);
+
+ ret = parse_android_emu(sbi, perms);
+ kfree(perms);
+
+ if (ret)
+ return -EINVAL;
+
+ set_opt(sbi, ANDROID_EMU);
+ } else
+ return -EINVAL;
+ break;
+ default:
+ f2fs_msg(sb, KERN_ERR,
+ "Unrecognized mount option \"%s\" or missing value",
+ p);
+ return -EINVAL;
+ }
+ }
+ return 0;
+}
+
static struct inode *f2fs_alloc_inode(struct super_block *sb)
{
struct f2fs_inode_info *fi;
- fi = kmem_cache_alloc(f2fs_inode_cachep, GFP_NOFS | __GFP_ZERO);
+ fi = kmem_cache_alloc(f2fs_inode_cachep, GFP_F2FS_ZERO);
if (!fi)
return NULL;
@@ -88,13 +437,21 @@
/* Initialize f2fs-specific inode info */
fi->vfs_inode.i_version = 1;
- atomic_set(&fi->dirty_dents, 0);
+ atomic_set(&fi->dirty_pages, 0);
fi->i_current_depth = 1;
fi->i_advise = 0;
rwlock_init(&fi->ext.ext_lock);
+ init_rwsem(&fi->i_sem);
+ INIT_LIST_HEAD(&fi->atomic_pages);
set_inode_flag(fi, FI_NEW_INODE);
+ if (test_opt(F2FS_SB(sb), INLINE_XATTR))
+ set_inode_flag(fi, FI_INLINE_XATTR);
+
+ /* Will be used by directory only */
+ fi->i_dir_level = F2FS_SB(sb)->dir_level;
+
return &fi->vfs_inode;
}
@@ -112,6 +469,16 @@
return generic_drop_inode(inode);
}
+/*
+ * f2fs_dirty_inode() is called from __mark_inode_dirty()
+ *
+ * We should call set_dirty_inode to write the dirty inode through write_inode.
+ */
+static void f2fs_dirty_inode(struct inode *inode, int flags)
+{
+ set_inode_flag(F2FS_I(inode), FI_DIRTY_INODE);
+}
+
static void f2fs_i_callback(struct rcu_head *head)
{
struct inode *inode = container_of(head, struct inode, i_rcu);
@@ -127,10 +494,29 @@
{
struct f2fs_sb_info *sbi = F2FS_SB(sb);
+ if (sbi->s_proc) {
+ remove_proc_entry("segment_info", sbi->s_proc);
+ remove_proc_entry(sb->s_id, f2fs_proc_root);
+ }
+ kobject_del(&sbi->s_kobj);
+
f2fs_destroy_stats(sbi);
stop_gc_thread(sbi);
- write_checkpoint(sbi, true);
+ /* We don't need to write a checkpoint when the filesystem is clean */
+ if (sbi->s_dirty) {
+ struct cp_control cpc = {
+ .reason = CP_UMOUNT,
+ };
+ write_checkpoint(sbi, &cpc);
+ }
+
+ /*
+ * Normally the superblock is clean, so we need to release these here.
+ * In addition, EIO skips the checkpoint, so we need this as well.
+ */
+ release_dirty_inode(sbi);
+ release_discard_addrs(sbi);
iput(sbi->node_inode);
iput(sbi->meta_inode);
@@ -140,6 +526,8 @@
destroy_segment_manager(sbi);
kfree(sbi->ckpt);
+ kobject_put(&sbi->s_kobj);
+ wait_for_completion(&sbi->s_kobj_unregister);
sb->s_fs_info = NULL;
brelse(sbi->raw_super_buf);
@@ -152,12 +540,12 @@
trace_f2fs_sync_fs(sb, sync);
- if (!sbi->s_dirty && !get_pages(sbi, F2FS_DIRTY_NODES))
- return 0;
-
if (sync) {
+ struct cp_control cpc = {
+ .reason = CP_SYNC,
+ };
mutex_lock(&sbi->gc_mutex);
- write_checkpoint(sbi, false);
+ write_checkpoint(sbi, &cpc);
mutex_unlock(&sbi->gc_mutex);
} else {
f2fs_balance_fs(sbi);
@@ -200,8 +588,8 @@
buf->f_bfree = buf->f_blocks - valid_user_blocks(sbi) - ovp_count;
buf->f_bavail = user_block_count - valid_user_blocks(sbi);
- buf->f_files = sbi->total_node_count;
- buf->f_ffree = sbi->total_node_count - valid_inode_count(sbi);
+ buf->f_files = sbi->total_node_count - F2FS_RESERVED_NODE_NUM;
+ buf->f_ffree = buf->f_files - valid_inode_count(sbi);
buf->f_namelen = F2FS_NAME_LEN;
buf->f_fsid.val[0] = (u32)id;
@@ -214,10 +602,10 @@
{
struct f2fs_sb_info *sbi = F2FS_SB(root->d_sb);
- if (test_opt(sbi, BG_GC))
- seq_puts(seq, ",background_gc_on");
+ if (!f2fs_readonly(sbi->sb) && test_opt(sbi, BG_GC))
+ seq_printf(seq, ",background_gc=%s", "on");
else
- seq_puts(seq, ",background_gc_off");
+ seq_printf(seq, ",background_gc=%s", "off");
if (test_opt(sbi, DISABLE_ROLL_FORWARD))
seq_puts(seq, ",disable_roll_forward");
if (test_opt(sbi, DISCARD))
@@ -229,6 +617,8 @@
seq_puts(seq, ",user_xattr");
else
seq_puts(seq, ",nouser_xattr");
+ if (test_opt(sbi, INLINE_XATTR))
+ seq_puts(seq, ",inline_xattr");
#endif
#ifdef CONFIG_F2FS_FS_POSIX_ACL
if (test_opt(sbi, POSIX_ACL))
@@ -236,18 +626,161 @@
else
seq_puts(seq, ",noacl");
#endif
+ if (test_opt(sbi, ERRORS_PANIC))
+ seq_puts(seq, ",errors=panic");
+ else if (test_opt(sbi, ERRORS_RECOVER))
+ seq_puts(seq, ",errors=recover");
+ else
+ seq_puts(seq, ",errors=continue");
if (test_opt(sbi, DISABLE_EXT_IDENTIFY))
seq_puts(seq, ",disable_ext_identify");
+ if (test_opt(sbi, INLINE_DATA))
+ seq_puts(seq, ",inline_data");
+ if (!f2fs_readonly(sbi->sb) && test_opt(sbi, FLUSH_MERGE))
+ seq_puts(seq, ",flush_merge");
+ if (test_opt(sbi, NOBARRIER))
+ seq_puts(seq, ",nobarrier");
+
+ if (test_opt(sbi, ANDROID_EMU))
+ seq_printf(seq, ",android_emu=%u:%u:%ho%s",
+ sbi->android_emu_uid,
+ sbi->android_emu_gid,
+ sbi->android_emu_mode,
+ (sbi->android_emu_flags &
+ F2FS_ANDROID_EMU_NOCASE) ?
+ ":nocase" : "");
return 0;
}
+static int segment_info_seq_show(struct seq_file *seq, void *offset)
+{
+ struct super_block *sb = seq->private;
+ struct f2fs_sb_info *sbi = F2FS_SB(sb);
+ unsigned int total_segs =
+ le32_to_cpu(sbi->raw_super->segment_count_main);
+ int i;
+
+ seq_puts(seq, "format: segment_type|valid_blocks\n"
+ "segment_type(0:HD, 1:WD, 2:CD, 3:HN, 4:WN, 5:CN)\n");
+
+ for (i = 0; i < total_segs; i++) {
+ struct seg_entry *se = get_seg_entry(sbi, i);
+
+ if ((i % 10) == 0)
+ seq_printf(seq, "%-5d", i);
+ seq_printf(seq, "%d|%-3u", se->type,
+ get_valid_blocks(sbi, i, 1));
+ if ((i % 10) == 9 || i == (total_segs - 1))
+ seq_putc(seq, '\n');
+ else
+ seq_putc(seq, ' ');
+ }
+
+ return 0;
+}
+
+static int segment_info_open_fs(struct inode *inode, struct file *file)
+{
+ return single_open(file, segment_info_seq_show,
+ PDE_DATA(inode));
+}
+
+static const struct file_operations f2fs_seq_segment_info_fops = {
+ .owner = THIS_MODULE,
+ .open = segment_info_open_fs,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
+static int f2fs_remount(struct super_block *sb, int *flags, char *data)
+{
+ struct f2fs_sb_info *sbi = F2FS_SB(sb);
+ struct f2fs_mount_info org_mount_opt;
+ int err, active_logs;
+ bool need_restart_gc = false;
+ bool need_stop_gc = false;
+
+ sync_filesystem(sb);
+
+ /*
+ * Save the old mount options in case we
+ * need to restore them.
+ */
+ org_mount_opt = sbi->mount_opt;
+ active_logs = sbi->active_logs;
+
+ sbi->mount_opt.opt = 0;
+ sbi->active_logs = NR_CURSEG_TYPE;
+
+ /* parse mount options */
+ err = parse_options(sb, data);
+ if (err)
+ goto restore_opts;
+
+ /*
+ * Previous and new state of filesystem is RO,
+ * so skip checking GC and FLUSH_MERGE conditions.
+ */
+ if (f2fs_readonly(sb) && (*flags & MS_RDONLY))
+ goto skip;
+
+ /*
+ * We stop the GC thread if the FS is mounted read-only
+ * or if background_gc=off is passed as a mount
+ * option. Also sync the filesystem.
+ */
+ if ((*flags & MS_RDONLY) || !test_opt(sbi, BG_GC)) {
+ if (sbi->gc_thread) {
+ stop_gc_thread(sbi);
+ f2fs_sync_fs(sb, 1);
+ need_restart_gc = true;
+ }
+ } else if (test_opt(sbi, BG_GC) && !sbi->gc_thread) {
+ err = start_gc_thread(sbi);
+ if (err)
+ goto restore_opts;
+ need_stop_gc = true;
+ }
+
+ /*
+ * We stop the issue-flush thread if the FS is mounted read-only
+ * or if flush_merge is not passed as a mount option.
+ */
+ if ((*flags & MS_RDONLY) || !test_opt(sbi, FLUSH_MERGE)) {
+ destroy_flush_cmd_control(sbi);
+ } else if (test_opt(sbi, FLUSH_MERGE) && !SM_I(sbi)->cmd_control_info) {
+ err = create_flush_cmd_control(sbi);
+ if (err)
+ goto restore_gc;
+ }
+skip:
+ /* Update the POSIXACL Flag */
+ sb->s_flags = (sb->s_flags & ~MS_POSIXACL) |
+ (test_opt(sbi, POSIX_ACL) ? MS_POSIXACL : 0);
+ return 0;
+restore_gc:
+ if (need_restart_gc) {
+ if (start_gc_thread(sbi))
+ f2fs_msg(sbi->sb, KERN_WARNING,
+ "background gc thread has stopped");
+ } else if (need_stop_gc) {
+ stop_gc_thread(sbi);
+ }
+restore_opts:
+ sbi->mount_opt = org_mount_opt;
+ sbi->active_logs = active_logs;
+ return err;
+}
+
static struct super_operations f2fs_sops = {
.alloc_inode = f2fs_alloc_inode,
.drop_inode = f2fs_drop_inode,
.destroy_inode = f2fs_destroy_inode,
.write_inode = f2fs_write_inode,
+ .dirty_inode = f2fs_dirty_inode,
.show_options = f2fs_show_options,
.evict_inode = f2fs_evict_inode,
.put_super = f2fs_put_super,
@@ -255,6 +788,7 @@
.freeze_fs = f2fs_freeze,
.unfreeze_fs = f2fs_unfreeze,
.statfs = f2fs_statfs,
+ .remount_fs = f2fs_remount,
};
static struct inode *f2fs_nfs_get_inode(struct super_block *sb,
@@ -263,7 +797,7 @@
struct f2fs_sb_info *sbi = F2FS_SB(sb);
struct inode *inode;
- if (ino < F2FS_ROOT_INO(sbi))
+ if (check_nid_range(sbi, ino))
return ERR_PTR(-ESTALE);
/*
@@ -274,7 +808,7 @@
inode = f2fs_iget(sb, ino);
if (IS_ERR(inode))
return ERR_CAST(inode);
- if (generation && inode->i_generation != generation) {
+ if (unlikely(generation && inode->i_generation != generation)) {
/* we didn't find the right inode.. */
iput(inode);
return ERR_PTR(-ESTALE);
@@ -302,82 +836,9 @@
.get_parent = f2fs_get_parent,
};
-static int parse_options(struct super_block *sb, struct f2fs_sb_info *sbi,
- char *options)
-{
- substring_t args[MAX_OPT_ARGS];
- char *p;
- int arg = 0;
-
- if (!options)
- return 0;
-
- while ((p = strsep(&options, ",")) != NULL) {
- int token;
- if (!*p)
- continue;
- /*
- * Initialize args struct so we know whether arg was
- * found; some options take optional arguments.
- */
- args[0].to = args[0].from = NULL;
- token = match_token(p, f2fs_tokens, args);
-
- switch (token) {
- case Opt_gc_background_off:
- clear_opt(sbi, BG_GC);
- break;
- case Opt_disable_roll_forward:
- set_opt(sbi, DISABLE_ROLL_FORWARD);
- break;
- case Opt_discard:
- set_opt(sbi, DISCARD);
- break;
- case Opt_noheap:
- set_opt(sbi, NOHEAP);
- break;
-#ifdef CONFIG_F2FS_FS_XATTR
- case Opt_nouser_xattr:
- clear_opt(sbi, XATTR_USER);
- break;
-#else
- case Opt_nouser_xattr:
- f2fs_msg(sb, KERN_INFO,
- "nouser_xattr options not supported");
- break;
-#endif
-#ifdef CONFIG_F2FS_FS_POSIX_ACL
- case Opt_noacl:
- clear_opt(sbi, POSIX_ACL);
- break;
-#else
- case Opt_noacl:
- f2fs_msg(sb, KERN_INFO, "noacl options not supported");
- break;
-#endif
- case Opt_active_logs:
- if (args->from && match_int(args, &arg))
- return -EINVAL;
- if (arg != 2 && arg != 4 && arg != NR_CURSEG_TYPE)
- return -EINVAL;
- sbi->active_logs = arg;
- break;
- case Opt_disable_ext_identify:
- set_opt(sbi, DISABLE_EXT_IDENTIFY);
- break;
- default:
- f2fs_msg(sb, KERN_ERR,
- "Unrecognized mount option \"%s\" or missing value",
- p);
- return -EINVAL;
- }
- }
- return 0;
-}
-
static loff_t max_file_size(unsigned bits)
{
- loff_t result = ADDRS_PER_INODE;
+ loff_t result = (DEF_ADDRS_PER_INODE - F2FS_INLINE_XATTR_ADDRS);
loff_t leaf_count = ADDRS_PER_BLOCK;
/* two direct node blocks */
@@ -424,14 +885,22 @@
return 1;
}
- if (le32_to_cpu(raw_super->log_sectorsize) !=
- F2FS_LOG_SECTOR_SIZE) {
- f2fs_msg(sb, KERN_INFO, "Invalid log sectorsize");
+ /* Currently, 512/1024/2048/4096-byte sector sizes are supported */
+ if (le32_to_cpu(raw_super->log_sectorsize) >
+ F2FS_MAX_LOG_SECTOR_SIZE ||
+ le32_to_cpu(raw_super->log_sectorsize) <
+ F2FS_MIN_LOG_SECTOR_SIZE) {
+ f2fs_msg(sb, KERN_INFO, "Invalid log sectorsize (%u)",
+ le32_to_cpu(raw_super->log_sectorsize));
return 1;
}
- if (le32_to_cpu(raw_super->log_sectors_per_block) !=
- F2FS_LOG_SECTORS_PER_BLOCK) {
- f2fs_msg(sb, KERN_INFO, "Invalid log sectors per block");
+ if (le32_to_cpu(raw_super->log_sectors_per_block) +
+ le32_to_cpu(raw_super->log_sectorsize) !=
+ F2FS_MAX_LOG_SECTOR_SIZE) {
+ f2fs_msg(sb, KERN_INFO,
+ "Invalid log sectors per block(%u) log sectorsize(%u)",
+ le32_to_cpu(raw_super->log_sectors_per_block),
+ le32_to_cpu(raw_super->log_sectorsize));
return 1;
}
return 0;
@@ -450,10 +919,10 @@
fsmeta += le32_to_cpu(ckpt->rsvd_segment_count);
fsmeta += le32_to_cpu(raw_super->segment_count_ssa);
- if (fsmeta >= total)
+ if (unlikely(fsmeta >= total))
return 1;
- if (is_set_ckpt_flags(ckpt, CP_ERROR_FLAG)) {
+ if (unlikely(f2fs_cp_error(sbi))) {
f2fs_msg(sbi->sb, KERN_ERR, "A bug case: need to run fsck");
return 1;
}
@@ -481,35 +950,57 @@
sbi->node_ino_num = le32_to_cpu(raw_super->node_ino);
sbi->meta_ino_num = le32_to_cpu(raw_super->meta_ino);
sbi->cur_victim_sec = NULL_SECNO;
+ sbi->max_victim_search = DEF_MAX_VICTIM_SEARCH;
for (i = 0; i < NR_COUNT_TYPE; i++)
atomic_set(&sbi->nr_pages[i], 0);
+
+ sbi->dir_level = DEF_DIR_LEVEL;
+ sbi->need_fsck = false;
}
-static int validate_superblock(struct super_block *sb,
- struct f2fs_super_block **raw_super,
- struct buffer_head **raw_super_buf, sector_t block)
+/*
+ * Read the f2fs raw super block.
+ * Because we keep two copies of the super block, read the first one first;
+ * if it is invalid, fall back to the second one.
+ */
+static int read_raw_super_block(struct super_block *sb,
+ struct f2fs_super_block **raw_super,
+ struct buffer_head **raw_super_buf)
{
- const char *super = (block == 0 ? "first" : "second");
+ int block = 0;
- /* read f2fs raw super block */
+retry:
*raw_super_buf = sb_bread(sb, block);
if (!*raw_super_buf) {
- f2fs_msg(sb, KERN_ERR, "unable to read %s superblock",
- super);
- return -EIO;
+ f2fs_msg(sb, KERN_ERR, "Unable to read %dth superblock",
+ block + 1);
+ if (block == 0) {
+ block++;
+ goto retry;
+ } else {
+ return -EIO;
+ }
}
*raw_super = (struct f2fs_super_block *)
((char *)(*raw_super_buf)->b_data + F2FS_SUPER_OFFSET);
/* sanity checking of raw super */
- if (!sanity_check_raw_super(sb, *raw_super))
- return 0;
+ if (sanity_check_raw_super(sb, *raw_super)) {
+ brelse(*raw_super_buf);
+ f2fs_msg(sb, KERN_ERR,
+ "Can't find valid F2FS filesystem in %dth superblock",
+ block + 1);
+ if (block == 0) {
+ block++;
+ goto retry;
+ } else {
+ return -EINVAL;
+ }
+ }
- f2fs_msg(sb, KERN_ERR, "Can't find a valid F2FS filesystem "
- "in %s superblock", super);
- return -EINVAL;
+ return 0;
}
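The retry structure of `read_raw_super_block()` — try block 0, fall back to block 1 exactly once — can be isolated as below. `read_block` is a hypothetical stand-in for the combined `sb_bread()` + `sanity_check_raw_super()` step:

```c
/* Try the primary super block (block 0); on failure, fall back once to
 * the secondary copy (block 1) before giving up. */
static int read_super_with_fallback(int (*read_block)(int blk))
{
        int block = 0;
        int err;

retry:
        err = read_block(block);
        if (err) {
                if (block == 0) {
                        block++;        /* one fallback to the second copy */
                        goto retry;
                }
                return err;             /* both copies failed */
        }
        return 0;
}

/* Helpers for demonstration: primary bad / secondary good, and both bad. */
static int fail_first_ok_second(int blk) { return blk == 0 ? -5 : 0; }
static int always_fail(int blk) { (void)blk; return -5; }
```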
static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
@@ -519,27 +1010,29 @@
struct buffer_head *raw_super_buf;
struct inode *root;
long err = -EINVAL;
+ const char *descr = "";
+ bool retry = true;
int i;
+ f2fs_msg(sb, KERN_INFO, "mounting..");
+
+try_onemore:
/* allocate memory for f2fs-specific super block info */
sbi = kzalloc(sizeof(struct f2fs_sb_info), GFP_KERNEL);
if (!sbi)
return -ENOMEM;
/* set a block size */
- if (!sb_set_blocksize(sb, F2FS_BLKSIZE)) {
+ if (unlikely(!sb_set_blocksize(sb, F2FS_BLKSIZE))) {
f2fs_msg(sb, KERN_ERR, "unable to set blocksize");
goto free_sbi;
}
- err = validate_superblock(sb, &raw_super, &raw_super_buf, 0);
- if (err) {
- brelse(raw_super_buf);
- /* check secondary superblock when primary failed */
- err = validate_superblock(sb, &raw_super, &raw_super_buf, 1);
- if (err)
- goto free_sb_buf;
- }
+ err = read_raw_super_block(sb, &raw_super, &raw_super_buf);
+ if (err)
+ goto free_sbi;
+
+ sb->s_fs_info = sbi;
/* init some FS parameters */
sbi->active_logs = NR_CURSEG_TYPE;
@@ -552,7 +1045,7 @@
set_opt(sbi, POSIX_ACL);
#endif
/* parse mount options */
- err = parse_options(sb, sbi, (char *)data);
+ err = parse_options(sb, (char *)data);
if (err)
goto free_sb_buf;
@@ -564,7 +1057,6 @@
sb->s_xattr = f2fs_xattr_handlers;
sb->s_export_op = &f2fs_export_ops;
sb->s_magic = F2FS_SUPER_MAGIC;
- sb->s_fs_info = sbi;
sb->s_time_gran = 1;
sb->s_flags = (sb->s_flags & ~MS_POSIXACL) |
(test_opt(sbi, POSIX_ACL) ? MS_POSIXACL : 0);
@@ -577,12 +1069,21 @@
mutex_init(&sbi->gc_mutex);
mutex_init(&sbi->writepages);
mutex_init(&sbi->cp_mutex);
- for (i = 0; i < NR_GLOBAL_LOCKS; i++)
- mutex_init(&sbi->fs_lock[i]);
- mutex_init(&sbi->node_write);
- sbi->por_doing = 0;
+ init_rwsem(&sbi->node_write);
+ sbi->por_doing = false;
spin_lock_init(&sbi->stat_lock);
- init_rwsem(&sbi->bio_sem);
+
+ init_rwsem(&sbi->read_io.io_rwsem);
+ sbi->read_io.sbi = sbi;
+ sbi->read_io.bio = NULL;
+ for (i = 0; i < NR_PAGE_TYPE; i++) {
+ init_rwsem(&sbi->write_io[i].io_rwsem);
+ sbi->write_io[i].sbi = sbi;
+ sbi->write_io[i].bio = NULL;
+ }
+
+ init_rwsem(&sbi->cp_rwsem);
+ init_waitqueue_head(&sbi->cp_wait);
init_sb_info(sbi);
/* get an inode for meta space */
@@ -593,6 +1094,7 @@
goto free_sb_buf;
}
+get_cp:
err = get_valid_checkpoint(sbi);
if (err) {
f2fs_msg(sb, KERN_ERR, "Failed to get valid F2FS checkpoint");
@@ -618,7 +1120,7 @@
INIT_LIST_HEAD(&sbi->dir_inode_list);
spin_lock_init(&sbi->dir_inode_lock);
- init_orphan_info(sbi);
+ init_ino_entry_info(sbi);
/* setup f2fs internal modules */
err = build_segment_manager(sbi);
@@ -645,9 +1147,7 @@
}
/* if there are any orphan nodes, free them */
- err = -EINVAL;
- if (recover_orphan_inodes(sbi))
- goto free_node_inode;
+ recover_orphan_inodes(sbi);
/* read root inode and dentry */
root = f2fs_iget(sb, F2FS_ROOT_INO(sbi));
@@ -656,8 +1156,11 @@
err = PTR_ERR(root);
goto free_node_inode;
}
- if (!S_ISDIR(root->i_mode) || !root->i_blocks || !root->i_size)
- goto free_root_inode;
+ if (!S_ISDIR(root->i_mode) || !root->i_blocks || !root->i_size) {
+ iput(root);
+ err = -EINVAL;
+ goto free_node_inode;
+ }
sb->s_root = d_make_root(root); /* allocate root dentry */
if (!sb->s_root) {
@@ -665,22 +1168,16 @@
goto free_root_inode;
}
- /* recover fsynced data */
- if (!test_opt(sbi, DISABLE_ROLL_FORWARD)) {
- err = recover_fsync_data(sbi);
- if (err)
- f2fs_msg(sb, KERN_ERR,
- "Cannot recover all fsync data errno=%ld", err);
- }
-
- /* After POR, we can run background GC thread */
- err = start_gc_thread(sbi);
- if (err)
- goto fail;
-
err = f2fs_build_stats(sbi);
if (err)
- goto fail;
+ goto free_root_inode;
+
+ if (f2fs_proc_root)
+ sbi->s_proc = proc_mkdir(sb->s_id, f2fs_proc_root);
+
+ if (sbi->s_proc)
+ proc_create_data("segment_info", S_IRUGO, sbi->s_proc,
+ &f2fs_seq_segment_info_fops, sb);
if (test_opt(sbi, DISCARD)) {
struct request_queue *q = bdev_get_queue(sb->s_bdev);
@@ -690,9 +1187,59 @@
"the device does not support discard");
}
+ if (test_opt(sbi, ANDROID_EMU))
+ descr = " with android sdcard emulation";
+ f2fs_msg(sb, KERN_INFO, "mounted filesystem%s", descr);
+
+ sbi->s_kobj.kset = f2fs_kset;
+ init_completion(&sbi->s_kobj_unregister);
+ err = kobject_init_and_add(&sbi->s_kobj, &f2fs_ktype, NULL,
+ "%s", sb->s_id);
+ if (err)
+ goto free_proc;
+
+ if (!retry)
+ sbi->need_fsck = true;
+
+ /* recover fsynced data */
+ if (!test_opt(sbi, DISABLE_ROLL_FORWARD)) {
+ err = recover_fsync_data(sbi);
+ if (err) {
+ if (f2fs_handle_error(sbi)) {
+ set_opt(sbi, DISABLE_ROLL_FORWARD);
+ kfree(sbi->ckpt);
+ f2fs_msg(sb, KERN_ERR,
+ "reloading last checkpoint");
+ goto get_cp;
+ }
+ f2fs_msg(sb, KERN_ERR,
+ "Cannot recover all fsync data errno=%ld", err);
+ /* checkpoint what we have */
+ write_checkpoint(sbi, false);
+ goto free_kobj;
+ }
+ }
+
+ /*
+ * If filesystem is not mounted as read-only then
+ * do start the gc_thread.
+ */
+ if (!f2fs_readonly(sb)) {
+ /* After POR, we can run background GC thread. */
+ err = start_gc_thread(sbi);
+ if (err)
+ goto free_kobj;
+ }
return 0;
-fail:
- stop_gc_thread(sbi);
+
+free_kobj:
+ kobject_del(&sbi->s_kobj);
+free_proc:
+ if (sbi->s_proc) {
+ remove_proc_entry("segment_info", sbi->s_proc);
+ remove_proc_entry(sb->s_id, f2fs_proc_root);
+ }
+ f2fs_destroy_stats(sbi);
free_root_inode:
dput(sb->s_root);
sb->s_root = NULL;
@@ -711,6 +1258,13 @@
brelse(raw_super_buf);
free_sbi:
kfree(sbi);
+ /* give only one more chance */
+ if (retry) {
+ retry = false;
+ shrink_dcache_sb(sb);
+ goto try_onemore;
+ }
+ f2fs_msg(sb, KERN_ERR, "mount failed");
return err;
}
@@ -732,8 +1286,8 @@
static int __init init_inodecache(void)
{
f2fs_inode_cachep = f2fs_kmem_cache_create("f2fs_inode_cache",
- sizeof(struct f2fs_inode_info), NULL);
- if (f2fs_inode_cachep == NULL)
+ sizeof(struct f2fs_inode_info));
+ if (!f2fs_inode_cachep)
return -ENOMEM;
return 0;
}
@@ -757,29 +1311,55 @@
goto fail;
err = create_node_manager_caches();
if (err)
- goto fail;
+ goto free_inodecache;
+ err = create_segment_manager_caches();
+ if (err)
+ goto free_node_manager_caches;
err = create_gc_caches();
if (err)
- goto fail;
+ goto free_segment_manager_caches;
err = create_checkpoint_caches();
if (err)
- goto fail;
+ goto free_gc_caches;
+ f2fs_kset = kset_create_and_add("f2fs", NULL, fs_kobj);
+ if (!f2fs_kset) {
+ err = -ENOMEM;
+ goto free_checkpoint_caches;
+ }
err = register_filesystem(&f2fs_fs_type);
if (err)
- goto fail;
+ goto free_kset;
f2fs_create_root_stats();
+ f2fs_proc_root = proc_mkdir("fs/f2fs", NULL);
+ return 0;
+
+free_kset:
+ kset_unregister(f2fs_kset);
+free_checkpoint_caches:
+ destroy_checkpoint_caches();
+free_gc_caches:
+ destroy_gc_caches();
+free_segment_manager_caches:
+ destroy_segment_manager_caches();
+free_node_manager_caches:
+ destroy_node_manager_caches();
+free_inodecache:
+ destroy_inodecache();
fail:
return err;
}
static void __exit exit_f2fs_fs(void)
{
+ remove_proc_entry("fs/f2fs", NULL);
f2fs_destroy_root_stats();
unregister_filesystem(&f2fs_fs_type);
destroy_checkpoint_caches();
destroy_gc_caches();
+ destroy_segment_manager_caches();
destroy_node_manager_caches();
destroy_inodecache();
+ kset_unregister(f2fs_kset);
}
module_init(init_f2fs_fs)
diff --git a/fs/f2fs/xattr.c b/fs/f2fs/xattr.c
index 0b02dce..ffda1a5 100644
--- a/fs/f2fs/xattr.c
+++ b/fs/f2fs/xattr.c
@@ -20,11 +20,12 @@
*/
#include <linux/rwsem.h>
#include <linux/f2fs_fs.h>
+#include <linux/security.h>
#include "f2fs.h"
#include "xattr.h"
static size_t f2fs_xattr_generic_list(struct dentry *dentry, char *list,
- size_t list_size, const char *name, size_t name_len, int type)
+ size_t list_size, const char *name, size_t len, int type)
{
struct f2fs_sb_info *sbi = F2FS_SB(dentry->d_sb);
int total_len, prefix_len = 0;
@@ -43,15 +44,19 @@
prefix = XATTR_TRUSTED_PREFIX;
prefix_len = XATTR_TRUSTED_PREFIX_LEN;
break;
+ case F2FS_XATTR_INDEX_SECURITY:
+ prefix = XATTR_SECURITY_PREFIX;
+ prefix_len = XATTR_SECURITY_PREFIX_LEN;
+ break;
default:
return -EINVAL;
}
- total_len = prefix_len + name_len + 1;
+ total_len = prefix_len + len + 1;
if (list && total_len <= list_size) {
memcpy(list, prefix, prefix_len);
- memcpy(list+prefix_len, name, name_len);
- list[prefix_len + name_len] = '\0';
+ memcpy(list + prefix_len, name, len);
+ list[prefix_len + len] = '\0';
}
return total_len;
}
@@ -70,13 +75,14 @@
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
break;
+ case F2FS_XATTR_INDEX_SECURITY:
+ break;
default:
return -EINVAL;
}
if (strcmp(name, "") == 0)
return -EINVAL;
- return f2fs_getxattr(dentry->d_inode, type, name,
- buffer, size);
+ return f2fs_getxattr(dentry->d_inode, type, name, buffer, size);
}
static int f2fs_xattr_generic_set(struct dentry *dentry, const char *name,
@@ -93,17 +99,20 @@
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
break;
+ case F2FS_XATTR_INDEX_SECURITY:
+ break;
default:
return -EINVAL;
}
if (strcmp(name, "") == 0)
return -EINVAL;
- return f2fs_setxattr(dentry->d_inode, type, name, value, size);
+ return f2fs_setxattr(dentry->d_inode, type, name,
+ value, size, NULL, flags);
}
static size_t f2fs_xattr_advise_list(struct dentry *dentry, char *list,
- size_t list_size, const char *name, size_t name_len, int type)
+ size_t list_size, const char *name, size_t len, int type)
{
const char *xname = F2FS_SYSTEM_ADVISE_PREFIX;
size_t size;
@@ -122,10 +131,11 @@
{
struct inode *inode = dentry->d_inode;
- if (strcmp(name, "") != 0)
+ if (!name || strcmp(name, "") != 0)
return -EINVAL;
- *((char *)buffer) = F2FS_I(inode)->i_advise;
+ if (buffer)
+ *((char *)buffer) = F2FS_I(inode)->i_advise;
return sizeof(char);
}
@@ -141,10 +151,35 @@
if (value == NULL)
return -EINVAL;
- F2FS_I(inode)->i_advise |= *(char *)value;
+ F2FS_I(inode)->i_advise = *(char *)value;
return 0;
}
+#ifdef CONFIG_F2FS_FS_SECURITY
+static int f2fs_initxattrs(struct inode *inode, const struct xattr *xattr_array,
+ void *page)
+{
+ const struct xattr *xattr;
+ int err = 0;
+
+ for (xattr = xattr_array; xattr->name != NULL; xattr++) {
+ err = f2fs_setxattr(inode, F2FS_XATTR_INDEX_SECURITY,
+ xattr->name, xattr->value,
+ xattr->value_len, (struct page *)page, 0);
+ if (err < 0)
+ break;
+ }
+ return err;
+}
+
+int f2fs_init_security(struct inode *inode, struct inode *dir,
+ const struct qstr *qstr, struct page *ipage)
+{
+ return security_inode_init_security(inode, dir, qstr,
+ &f2fs_initxattrs, ipage);
+}
+#endif
+
const struct xattr_handler f2fs_xattr_user_handler = {
.prefix = XATTR_USER_PREFIX,
.flags = F2FS_XATTR_INDEX_USER,
@@ -169,6 +204,14 @@
.set = f2fs_xattr_advise_set,
};
+const struct xattr_handler f2fs_xattr_security_handler = {
+ .prefix = XATTR_SECURITY_PREFIX,
+ .flags = F2FS_XATTR_INDEX_SECURITY,
+ .list = f2fs_xattr_generic_list,
+ .get = f2fs_xattr_generic_get,
+ .set = f2fs_xattr_generic_set,
+};
+
static const struct xattr_handler *f2fs_xattr_handler_map[] = {
[F2FS_XATTR_INDEX_USER] = &f2fs_xattr_user_handler,
#ifdef CONFIG_F2FS_FS_POSIX_ACL
@@ -176,6 +219,9 @@
[F2FS_XATTR_INDEX_POSIX_ACL_DEFAULT] = &f2fs_xattr_acl_default_handler,
#endif
[F2FS_XATTR_INDEX_TRUSTED] = &f2fs_xattr_trusted_handler,
+#ifdef CONFIG_F2FS_FS_SECURITY
+ [F2FS_XATTR_INDEX_SECURITY] = &f2fs_xattr_security_handler,
+#endif
[F2FS_XATTR_INDEX_ADVISE] = &f2fs_xattr_advise_handler,
};
@@ -186,89 +232,225 @@
&f2fs_xattr_acl_default_handler,
#endif
&f2fs_xattr_trusted_handler,
+#ifdef CONFIG_F2FS_FS_SECURITY
+ &f2fs_xattr_security_handler,
+#endif
&f2fs_xattr_advise_handler,
NULL,
};
-static inline const struct xattr_handler *f2fs_xattr_handler(int name_index)
+static inline const struct xattr_handler *f2fs_xattr_handler(int index)
{
const struct xattr_handler *handler = NULL;
- if (name_index > 0 && name_index < ARRAY_SIZE(f2fs_xattr_handler_map))
- handler = f2fs_xattr_handler_map[name_index];
+ if (index > 0 && index < ARRAY_SIZE(f2fs_xattr_handler_map))
+ handler = f2fs_xattr_handler_map[index];
return handler;
}
-int f2fs_getxattr(struct inode *inode, int name_index, const char *name,
+static struct f2fs_xattr_entry *__find_xattr(void *base_addr, int index,
+ size_t len, const char *name)
+{
+ struct f2fs_xattr_entry *entry;
+
+ list_for_each_xattr(entry, base_addr) {
+ if (entry->e_name_index != index)
+ continue;
+ if (entry->e_name_len != len)
+ continue;
+ if (!memcmp(entry->e_name, name, len))
+ break;
+ }
+ return entry;
+}
+
+static void *read_all_xattrs(struct inode *inode, struct page *ipage)
+{
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ struct f2fs_xattr_header *header;
+ size_t size = PAGE_SIZE, inline_size = 0;
+ void *txattr_addr;
+
+ inline_size = inline_xattr_size(inode);
+
+ txattr_addr = kzalloc(inline_size + size, GFP_F2FS_ZERO);
+ if (!txattr_addr)
+ return NULL;
+
+ /* read from inline xattr */
+ if (inline_size) {
+ struct page *page = NULL;
+ void *inline_addr;
+
+ if (ipage) {
+ inline_addr = inline_xattr_addr(ipage);
+ } else {
+ page = get_node_page(sbi, inode->i_ino);
+ if (IS_ERR(page))
+ goto fail;
+ inline_addr = inline_xattr_addr(page);
+ }
+ memcpy(txattr_addr, inline_addr, inline_size);
+ f2fs_put_page(page, 1);
+ }
+
+ /* read from xattr node block */
+ if (F2FS_I(inode)->i_xattr_nid) {
+ struct page *xpage;
+ void *xattr_addr;
+
+ /* The inode already has an extended attribute block. */
+ xpage = get_node_page(sbi, F2FS_I(inode)->i_xattr_nid);
+ if (IS_ERR(xpage))
+ goto fail;
+
+ xattr_addr = page_address(xpage);
+ memcpy(txattr_addr + inline_size, xattr_addr, PAGE_SIZE);
+ f2fs_put_page(xpage, 1);
+ }
+
+ header = XATTR_HDR(txattr_addr);
+
+ /* xattrs have never been allocated for this inode */
+ if (le32_to_cpu(header->h_magic) != F2FS_XATTR_MAGIC) {
+ header->h_magic = cpu_to_le32(F2FS_XATTR_MAGIC);
+ header->h_refcount = cpu_to_le32(1);
+ }
+ return txattr_addr;
+fail:
+ kzfree(txattr_addr);
+ return NULL;
+}
+
+static inline int write_all_xattrs(struct inode *inode, __u32 hsize,
+ void *txattr_addr, struct page *ipage)
+{
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ size_t inline_size = 0;
+ void *xattr_addr;
+ struct page *xpage;
+ nid_t new_nid = 0;
+ int err;
+
+ inline_size = inline_xattr_size(inode);
+
+ if (hsize > inline_size && !F2FS_I(inode)->i_xattr_nid)
+ if (!alloc_nid(sbi, &new_nid))
+ return -ENOSPC;
+
+ /* write to inline xattr */
+ if (inline_size) {
+ struct page *page = NULL;
+ void *inline_addr;
+
+ if (ipage) {
+ inline_addr = inline_xattr_addr(ipage);
+ f2fs_wait_on_page_writeback(ipage, NODE);
+ } else {
+ page = get_node_page(sbi, inode->i_ino);
+ if (IS_ERR(page)) {
+ alloc_nid_failed(sbi, new_nid);
+ return PTR_ERR(page);
+ }
+ inline_addr = inline_xattr_addr(page);
+ f2fs_wait_on_page_writeback(page, NODE);
+ }
+ memcpy(inline_addr, txattr_addr, inline_size);
+ f2fs_put_page(page, 1);
+
+ /* no need to use xattr node block */
+ if (hsize <= inline_size) {
+ err = truncate_xattr_node(inode, ipage);
+ alloc_nid_failed(sbi, new_nid);
+ return err;
+ }
+ }
+
+ /* write to xattr node block */
+ if (F2FS_I(inode)->i_xattr_nid) {
+ xpage = get_node_page(sbi, F2FS_I(inode)->i_xattr_nid);
+ if (IS_ERR(xpage)) {
+ alloc_nid_failed(sbi, new_nid);
+ return PTR_ERR(xpage);
+ }
+ f2fs_bug_on(sbi, new_nid);
+ f2fs_wait_on_page_writeback(xpage, NODE);
+ } else {
+ struct dnode_of_data dn;
+ set_new_dnode(&dn, inode, NULL, NULL, new_nid);
+ xpage = new_node_page(&dn, XATTR_NODE_OFFSET, ipage);
+ if (IS_ERR(xpage)) {
+ alloc_nid_failed(sbi, new_nid);
+ return PTR_ERR(xpage);
+ }
+ alloc_nid_done(sbi, new_nid);
+ }
+
+ xattr_addr = page_address(xpage);
+ memcpy(xattr_addr, txattr_addr + inline_size, PAGE_SIZE -
+ sizeof(struct node_footer));
+ set_page_dirty(xpage);
+ f2fs_put_page(xpage, 1);
+
+ /* need to checkpoint during fsync */
+ F2FS_I(inode)->xattr_ver = cur_cp_version(F2FS_CKPT(sbi));
+ return 0;
+}
+
+int f2fs_getxattr(struct inode *inode, int index, const char *name,
void *buffer, size_t buffer_size)
{
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
- struct f2fs_inode_info *fi = F2FS_I(inode);
struct f2fs_xattr_entry *entry;
- struct page *page;
void *base_addr;
- int error = 0, found = 0;
- size_t value_len, name_len;
+ int error = 0;
+ size_t size, len;
if (name == NULL)
return -EINVAL;
- name_len = strlen(name);
- if (!fi->i_xattr_nid)
- return -ENODATA;
+ len = strlen(name);
+ if (len > F2FS_NAME_LEN)
+ return -ERANGE;
- page = get_node_page(sbi, fi->i_xattr_nid);
- base_addr = page_address(page);
+ base_addr = read_all_xattrs(inode, NULL);
+ if (!base_addr)
+ return -ENOMEM;
- list_for_each_xattr(entry, base_addr) {
- if (entry->e_name_index != name_index)
- continue;
- if (entry->e_name_len != name_len)
- continue;
- if (!memcmp(entry->e_name, name, name_len)) {
- found = 1;
- break;
- }
- }
- if (!found) {
+ entry = __find_xattr(base_addr, index, len, name);
+ if (IS_XATTR_LAST_ENTRY(entry)) {
error = -ENODATA;
goto cleanup;
}
- value_len = le16_to_cpu(entry->e_value_size);
+ size = le16_to_cpu(entry->e_value_size);
- if (buffer && value_len > buffer_size) {
+ if (buffer && size > buffer_size) {
error = -ERANGE;
goto cleanup;
}
if (buffer) {
char *pval = entry->e_name + entry->e_name_len;
- memcpy(buffer, pval, value_len);
+ memcpy(buffer, pval, size);
}
- error = value_len;
+ error = size;
cleanup:
- f2fs_put_page(page, 1);
+ kzfree(base_addr);
return error;
}
ssize_t f2fs_listxattr(struct dentry *dentry, char *buffer, size_t buffer_size)
{
struct inode *inode = dentry->d_inode;
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
- struct f2fs_inode_info *fi = F2FS_I(inode);
struct f2fs_xattr_entry *entry;
- struct page *page;
void *base_addr;
int error = 0;
size_t rest = buffer_size;
- if (!fi->i_xattr_nid)
- return 0;
-
- page = get_node_page(sbi, fi->i_xattr_nid);
- base_addr = page_address(page);
+ base_addr = read_all_xattrs(inode, NULL);
+ if (!base_addr)
+ return -ENOMEM;
list_for_each_xattr(entry, base_addr) {
const struct xattr_handler *handler =
@@ -291,119 +473,77 @@
}
error = buffer_size - rest;
cleanup:
- f2fs_put_page(page, 1);
+ kzfree(base_addr);
return error;
}
-int f2fs_setxattr(struct inode *inode, int name_index, const char *name,
- const void *value, size_t value_len)
+static int __f2fs_setxattr(struct inode *inode, int index,
+ const char *name, const void *value, size_t size,
+ struct page *ipage, int flags)
{
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
struct f2fs_inode_info *fi = F2FS_I(inode);
- struct f2fs_xattr_header *header = NULL;
struct f2fs_xattr_entry *here, *last;
- struct page *page;
void *base_addr;
- int error, found, free, newsize;
- size_t name_len;
- char *pval;
- int ilock;
+ int found, newsize;
+ size_t len;
+ __u32 new_hsize;
+ int error = -ENOMEM;
if (name == NULL)
return -EINVAL;
if (value == NULL)
- value_len = 0;
+ size = 0;
- name_len = strlen(name);
+ len = strlen(name);
- if (name_len > F2FS_NAME_LEN || value_len > MAX_VALUE_LEN)
+ if (len > F2FS_NAME_LEN || size > MAX_VALUE_LEN(inode))
return -ERANGE;
- f2fs_balance_fs(sbi);
-
- ilock = mutex_lock_op(sbi);
-
- if (!fi->i_xattr_nid) {
- /* Allocate new attribute block */
- struct dnode_of_data dn;
-
- if (!alloc_nid(sbi, &fi->i_xattr_nid)) {
- error = -ENOSPC;
- goto exit;
- }
- set_new_dnode(&dn, inode, NULL, NULL, fi->i_xattr_nid);
- mark_inode_dirty(inode);
-
- page = new_node_page(&dn, XATTR_NODE_OFFSET);
- if (IS_ERR(page)) {
- alloc_nid_failed(sbi, fi->i_xattr_nid);
- fi->i_xattr_nid = 0;
- error = PTR_ERR(page);
- goto exit;
- }
-
- alloc_nid_done(sbi, fi->i_xattr_nid);
- base_addr = page_address(page);
- header = XATTR_HDR(base_addr);
- header->h_magic = cpu_to_le32(F2FS_XATTR_MAGIC);
- header->h_refcount = cpu_to_le32(1);
- } else {
- /* The inode already has an extended attribute block. */
- page = get_node_page(sbi, fi->i_xattr_nid);
- if (IS_ERR(page)) {
- error = PTR_ERR(page);
- goto exit;
- }
-
- base_addr = page_address(page);
- header = XATTR_HDR(base_addr);
- }
-
- if (le32_to_cpu(header->h_magic) != F2FS_XATTR_MAGIC) {
- error = -EIO;
- goto cleanup;
- }
+ base_addr = read_all_xattrs(inode, ipage);
+ if (!base_addr)
+ goto exit;
/* find entry with wanted name. */
- found = 0;
- list_for_each_xattr(here, base_addr) {
- if (here->e_name_index != name_index)
- continue;
- if (here->e_name_len != name_len)
- continue;
- if (!memcmp(here->e_name, name, name_len)) {
- found = 1;
- break;
- }
+ here = __find_xattr(base_addr, index, len, name);
+
+ found = IS_XATTR_LAST_ENTRY(here) ? 0 : 1;
+
+ if ((flags & XATTR_REPLACE) && !found) {
+ error = -ENODATA;
+ goto exit;
+ } else if ((flags & XATTR_CREATE) && found) {
+ error = -EEXIST;
+ goto exit;
}
last = here;
-
while (!IS_XATTR_LAST_ENTRY(last))
last = XATTR_NEXT_ENTRY(last);
- newsize = XATTR_ALIGN(sizeof(struct f2fs_xattr_entry) +
- name_len + value_len);
+ newsize = XATTR_ALIGN(sizeof(struct f2fs_xattr_entry) + len + size);
/* 1. Check space */
if (value) {
- /* If value is NULL, it is remove operation.
+ int free;
+ /*
+ * If value is NULL, it is remove operation.
* In case of update operation, we calculate free.
*/
- free = MIN_OFFSET - ((char *)last - (char *)header);
+ free = MIN_OFFSET(inode) - ((char *)last - (char *)base_addr);
if (found)
- free = free - ENTRY_SIZE(here);
+ free = free + ENTRY_SIZE(here);
- if (free < newsize) {
+ if (unlikely(free < newsize)) {
error = -ENOSPC;
- goto cleanup;
+ goto exit;
}
}
/* 2. Remove old entry */
if (found) {
- /* If entry is found, remove old entry.
+ /*
+ * If entry is found, remove old entry.
* If not found, remove operation is not needed.
*/
struct f2fs_xattr_entry *next = XATTR_NEXT_ENTRY(here);
@@ -414,34 +554,63 @@
memset(last, 0, oldsize);
}
+ new_hsize = (char *)last - (char *)base_addr;
+
/* 3. Write new entry */
if (value) {
- /* Before we come here, old entry is removed.
- * We just write new entry. */
+ char *pval;
+ /*
+ * Before we come here, old entry is removed.
+ * We just write new entry.
+ */
memset(last, 0, newsize);
- last->e_name_index = name_index;
- last->e_name_len = name_len;
- memcpy(last->e_name, name, name_len);
- pval = last->e_name + name_len;
- memcpy(pval, value, value_len);
- last->e_value_size = cpu_to_le16(value_len);
+ last->e_name_index = index;
+ last->e_name_len = len;
+ memcpy(last->e_name, name, len);
+ pval = last->e_name + len;
+ memcpy(pval, value, size);
+ last->e_value_size = cpu_to_le16(size);
+ new_hsize += newsize;
}
- set_page_dirty(page);
- f2fs_put_page(page, 1);
+ error = write_all_xattrs(inode, new_hsize, base_addr, ipage);
+ if (error)
+ goto exit;
if (is_inode_flag_set(fi, FI_ACL_MODE)) {
inode->i_mode = fi->i_acl_mode;
inode->i_ctime = CURRENT_TIME;
clear_inode_flag(fi, FI_ACL_MODE);
}
- update_inode_page(inode);
- mutex_unlock_op(sbi, ilock);
- return 0;
-cleanup:
- f2fs_put_page(page, 1);
+ if (ipage)
+ update_inode(inode, ipage);
+ else
+ update_inode_page(inode);
exit:
- mutex_unlock_op(sbi, ilock);
+ kzfree(base_addr);
return error;
}
+
+int f2fs_setxattr(struct inode *inode, int index, const char *name,
+ const void *value, size_t size,
+ struct page *ipage, int flags)
+{
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ int err;
+
+ /* this case is only from init_inode_metadata */
+ if (ipage)
+ return __f2fs_setxattr(inode, index, name, value,
+ size, ipage, flags);
+ f2fs_balance_fs(sbi);
+
+ f2fs_lock_op(sbi);
+ /* protect xattr_ver */
+ down_write(&F2FS_I(inode)->i_sem);
+ err = __f2fs_setxattr(inode, index, name, value, size, ipage, flags);
+ up_write(&F2FS_I(inode)->i_sem);
+ f2fs_unlock_op(sbi);
+
+ return err;
+}
diff --git a/fs/f2fs/xattr.h b/fs/f2fs/xattr.h
index 49c9558..9b18c07 100644
--- a/fs/f2fs/xattr.h
+++ b/fs/f2fs/xattr.h
@@ -51,7 +51,7 @@
#define XATTR_HDR(ptr) ((struct f2fs_xattr_header *)(ptr))
#define XATTR_ENTRY(ptr) ((struct f2fs_xattr_entry *)(ptr))
-#define XATTR_FIRST_ENTRY(ptr) (XATTR_ENTRY(XATTR_HDR(ptr)+1))
+#define XATTR_FIRST_ENTRY(ptr) (XATTR_ENTRY(XATTR_HDR(ptr) + 1))
#define XATTR_ROUND (3)
#define XATTR_ALIGN(size) ((size + XATTR_ROUND) & ~XATTR_ROUND)
@@ -69,17 +69,16 @@
!IS_XATTR_LAST_ENTRY(entry);\
entry = XATTR_NEXT_ENTRY(entry))
+#define MIN_OFFSET(i) XATTR_ALIGN(inline_xattr_size(i) + PAGE_SIZE - \
+ sizeof(struct node_footer) - sizeof(__u32))
-#define MIN_OFFSET XATTR_ALIGN(PAGE_SIZE - \
- sizeof(struct node_footer) - \
- sizeof(__u32))
-
-#define MAX_VALUE_LEN (MIN_OFFSET - sizeof(struct f2fs_xattr_header) - \
- sizeof(struct f2fs_xattr_entry))
+#define MAX_VALUE_LEN(i) (MIN_OFFSET(i) - \
+ sizeof(struct f2fs_xattr_header) - \
+ sizeof(struct f2fs_xattr_entry))
/*
* On-disk structure of f2fs_xattr
- * We use only 1 block for xattr.
+ * We use inline xattrs space + 1 block for xattr.
*
* +--------------------+
* | f2fs_xattr_header |
@@ -112,25 +111,23 @@
extern const struct xattr_handler f2fs_xattr_acl_access_handler;
extern const struct xattr_handler f2fs_xattr_acl_default_handler;
extern const struct xattr_handler f2fs_xattr_advise_handler;
+extern const struct xattr_handler f2fs_xattr_security_handler;
extern const struct xattr_handler *f2fs_xattr_handlers[];
-extern int f2fs_setxattr(struct inode *inode, int name_index, const char *name,
- const void *value, size_t value_len);
-extern int f2fs_getxattr(struct inode *inode, int name_index, const char *name,
- void *buffer, size_t buffer_size);
-extern ssize_t f2fs_listxattr(struct dentry *dentry, char *buffer,
- size_t buffer_size);
-
+extern int f2fs_setxattr(struct inode *, int, const char *,
+ const void *, size_t, struct page *, int);
+extern int f2fs_getxattr(struct inode *, int, const char *, void *, size_t);
+extern ssize_t f2fs_listxattr(struct dentry *, char *, size_t);
#else
#define f2fs_xattr_handlers NULL
-static inline int f2fs_setxattr(struct inode *inode, int name_index,
- const char *name, const void *value, size_t value_len)
+static inline int f2fs_setxattr(struct inode *inode, int index,
+ const char *name, const void *value, size_t size, int flags)
{
return -EOPNOTSUPP;
}
-static inline int f2fs_getxattr(struct inode *inode, int name_index,
+static inline int f2fs_getxattr(struct inode *inode, int index,
const char *name, void *buffer, size_t buffer_size)
{
return -EOPNOTSUPP;
@@ -142,4 +139,14 @@
}
#endif
+#ifdef CONFIG_F2FS_FS_SECURITY
+extern int f2fs_init_security(struct inode *, struct inode *,
+ const struct qstr *, struct page *);
+#else
+static inline int f2fs_init_security(struct inode *inode, struct inode *dir,
+ const struct qstr *qstr, struct page *ipage)
+{
+ return 0;
+}
+#endif
#endif /* __F2FS_XATTR_H__ */
diff --git a/fs/inode.c b/fs/inode.c
index 180b743..d5239b3 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -164,7 +164,7 @@
mapping->a_ops = &empty_aops;
mapping->host = inode;
mapping->flags = 0;
- mapping_set_gfp_mask(mapping, GFP_HIGHUSER_MOVABLE);
+ mapping_set_gfp_mask(mapping, GFP_HIGHUSER);
mapping->private_data = NULL;
mapping->backing_dev_info = &default_backing_dev_info;
mapping->writeback_index = 0;
diff --git a/fs/isofs/inode.c b/fs/isofs/inode.c
index d370549..e5e0bdd 100644
--- a/fs/isofs/inode.c
+++ b/fs/isofs/inode.c
@@ -125,6 +125,7 @@
static int isofs_remount(struct super_block *sb, int *flags, char *data)
{
+ sync_filesystem(sb);
if (!(*flags & MS_RDONLY))
return -EROFS;
return 0;
diff --git a/fs/jbd/journal.c b/fs/jbd/journal.c
index 6510d63..182b786 100644
--- a/fs/jbd/journal.c
+++ b/fs/jbd/journal.c
@@ -61,6 +61,7 @@
EXPORT_SYMBOL(journal_sync_buffer);
#endif
EXPORT_SYMBOL(journal_flush);
+EXPORT_SYMBOL(journal_force_flush);
EXPORT_SYMBOL(journal_revoke);
EXPORT_SYMBOL(journal_init_dev);
@@ -1531,7 +1532,7 @@
* recovery does not need to happen on remount.
*/
-int journal_flush(journal_t *journal)
+static int __journal_flush(journal_t *journal, bool assert)
{
int err = 0;
transaction_t *transaction = NULL;
@@ -1579,6 +1580,8 @@
* s_start value. */
mark_journal_empty(journal);
mutex_unlock(&journal->j_checkpoint_mutex);
+ if (!assert)
+ return 0;
spin_lock(&journal->j_state_lock);
J_ASSERT(!journal->j_running_transaction);
J_ASSERT(!journal->j_committing_transaction);
@@ -1589,6 +1592,16 @@
return 0;
}
+int journal_flush(journal_t *journal)
+{
+ return __journal_flush(journal, true);
+}
+
+int journal_force_flush(journal_t *journal)
+{
+ return __journal_flush(journal, false);
+}
+
/**
* int journal_wipe() - Wipe journal contents
* @journal: Journal to act on.
diff --git a/fs/jbd/transaction.c b/fs/jbd/transaction.c
index e3e255c..4130213 100644
--- a/fs/jbd/transaction.c
+++ b/fs/jbd/transaction.c
@@ -1750,8 +1750,12 @@
__journal_try_to_free_buffer(journal, bh);
journal_put_journal_head(jh);
jbd_unlock_bh_state(bh);
- if (buffer_jbd(bh))
+ if (buffer_jbd(bh)) {
+ unsigned mt = get_pageblock_migratetype(page);
+ if (mt == MIGRATE_ISOLATE || mt == MIGRATE_CMA)
+ journal_force_flush(journal);
goto busy;
+ }
} while ((bh = bh->b_this_page) != head);
ret = try_to_free_buffers(page);
diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
index 1df94fa..189a46d 100644
--- a/fs/jbd2/journal.c
+++ b/fs/jbd2/journal.c
@@ -72,6 +72,7 @@
EXPORT_SYMBOL(journal_sync_buffer);
#endif
EXPORT_SYMBOL(jbd2_journal_flush);
+EXPORT_SYMBOL(jbd2_journal_force_flush);
EXPORT_SYMBOL(jbd2_journal_revoke);
EXPORT_SYMBOL(jbd2_journal_init_dev);
@@ -1912,7 +1913,7 @@
* recovery does not need to happen on remount.
*/
-int jbd2_journal_flush(journal_t *journal)
+static int __jbd2_journal_flush(journal_t *journal, bool assert)
{
int err = 0;
transaction_t *transaction = NULL;
@@ -1960,6 +1961,8 @@
* s_start value. */
jbd2_mark_journal_empty(journal);
mutex_unlock(&journal->j_checkpoint_mutex);
+ if (!assert)
+ return 0;
write_lock(&journal->j_state_lock);
J_ASSERT(!journal->j_running_transaction);
J_ASSERT(!journal->j_committing_transaction);
@@ -1970,6 +1973,16 @@
return 0;
}
+int jbd2_journal_flush(journal_t *journal)
+{
+ return __jbd2_journal_flush(journal, true);
+}
+
+int jbd2_journal_force_flush(journal_t *journal)
+{
+ return __jbd2_journal_flush(journal, false);
+}
+
/**
* int jbd2_journal_wipe() - Wipe journal contents
* @journal: Journal to act on.
diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
index 5f09370..9a55ddd 100644
--- a/fs/jbd2/transaction.c
+++ b/fs/jbd2/transaction.c
@@ -1895,8 +1895,12 @@
__journal_try_to_free_buffer(journal, bh);
jbd2_journal_put_journal_head(jh);
jbd_unlock_bh_state(bh);
- if (buffer_jbd(bh))
+ if (buffer_jbd(bh)) {
+ unsigned mt = get_pageblock_migratetype(page);
+ if (mt == MIGRATE_ISOLATE || mt == MIGRATE_CMA)
+ jbd2_journal_force_flush(journal);
goto busy;
+ }
} while ((bh = bh->b_this_page) != head);
ret = try_to_free_buffers(page);
diff --git a/fs/libfs.c b/fs/libfs.c
index 916da8c..c3a0837 100644
--- a/fs/libfs.c
+++ b/fs/libfs.c
@@ -135,60 +135,40 @@
* both impossible due to the lock on directory.
*/
-int dcache_readdir(struct file * filp, void * dirent, filldir_t filldir)
+int dcache_readdir(struct file *file, struct dir_context *ctx)
{
- struct dentry *dentry = filp->f_path.dentry;
- struct dentry *cursor = filp->private_data;
+ struct dentry *dentry = file->f_path.dentry;
+ struct dentry *cursor = file->private_data;
struct list_head *p, *q = &cursor->d_u.d_child;
- ino_t ino;
- int i = filp->f_pos;
- switch (i) {
- case 0:
- ino = dentry->d_inode->i_ino;
- if (filldir(dirent, ".", 1, i, ino, DT_DIR) < 0)
- break;
- filp->f_pos++;
- i++;
- /* fallthrough */
- case 1:
- ino = parent_ino(dentry);
- if (filldir(dirent, "..", 2, i, ino, DT_DIR) < 0)
- break;
- filp->f_pos++;
- i++;
- /* fallthrough */
- default:
- spin_lock(&dentry->d_lock);
- if (filp->f_pos == 2)
- list_move(q, &dentry->d_subdirs);
+ if (!dir_emit_dots(file, ctx))
+ return 0;
+ spin_lock(&dentry->d_lock);
+ if (ctx->pos == 2)
+ list_move(q, &dentry->d_subdirs);
- for (p=q->next; p != &dentry->d_subdirs; p=p->next) {
- struct dentry *next;
- next = list_entry(p, struct dentry, d_u.d_child);
- spin_lock_nested(&next->d_lock, DENTRY_D_LOCK_NESTED);
- if (!simple_positive(next)) {
- spin_unlock(&next->d_lock);
- continue;
- }
+ for (p = q->next; p != &dentry->d_subdirs; p = p->next) {
+ struct dentry *next = list_entry(p, struct dentry, d_u.d_child);
+ spin_lock_nested(&next->d_lock, DENTRY_D_LOCK_NESTED);
+ if (!simple_positive(next)) {
+ spin_unlock(&next->d_lock);
+ continue;
+ }
- spin_unlock(&next->d_lock);
- spin_unlock(&dentry->d_lock);
- if (filldir(dirent, next->d_name.name,
- next->d_name.len, filp->f_pos,
- next->d_inode->i_ino,
- dt_type(next->d_inode)) < 0)
- return 0;
- spin_lock(&dentry->d_lock);
- spin_lock_nested(&next->d_lock, DENTRY_D_LOCK_NESTED);
- /* next is still alive */
- list_move(q, p);
- spin_unlock(&next->d_lock);
- p = q;
- filp->f_pos++;
- }
- spin_unlock(&dentry->d_lock);
+ spin_unlock(&next->d_lock);
+ spin_unlock(&dentry->d_lock);
+ if (!dir_emit(ctx, next->d_name.name, next->d_name.len,
+ next->d_inode->i_ino, dt_type(next->d_inode)))
+ return 0;
+ spin_lock(&dentry->d_lock);
+ spin_lock_nested(&next->d_lock, DENTRY_D_LOCK_NESTED);
+ /* next is still alive */
+ list_move(q, p);
+ spin_unlock(&next->d_lock);
+ p = q;
+ ctx->pos++;
}
+ spin_unlock(&dentry->d_lock);
return 0;
}
@@ -202,7 +182,7 @@
.release = dcache_dir_close,
.llseek = dcache_dir_lseek,
.read = generic_read_dir,
- .readdir = dcache_readdir,
+ .iterate = dcache_readdir,
.fsync = noop_fsync,
};
diff --git a/fs/minix/namei.c b/fs/minix/namei.c
index 0db73d9..cd950e2 100644
--- a/fs/minix/namei.c
+++ b/fs/minix/namei.c
@@ -54,6 +54,18 @@
return error;
}
+static int minix_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mode)
+{
+ int error;
+ struct inode *inode = minix_new_inode(dir, mode, &error);
+ if (inode) {
+ minix_set_inode(inode, 0);
+ mark_inode_dirty(inode);
+ d_tmpfile(dentry, inode);
+ }
+ return error;
+}
+
static int minix_create(struct inode *dir, struct dentry *dentry, umode_t mode,
bool excl)
{
@@ -254,4 +266,5 @@
.mknod = minix_mknod,
.rename = minix_rename,
.getattr = minix_getattr,
+ .tmpfile = minix_tmpfile,
};
diff --git a/fs/namei.c b/fs/namei.c
index 1211ee5..72586dc 100644
--- a/fs/namei.c
+++ b/fs/namei.c
@@ -2921,6 +2921,61 @@
goto retry_lookup;
}
+static int do_tmpfile(int dfd, struct filename *pathname,
+ struct nameidata *nd, int flags,
+ const struct open_flags *op,
+ struct file *file, int *opened)
+{
+ static const struct qstr name = QSTR_INIT("/", 1);
+ struct dentry *dentry, *child;
+ struct inode *dir;
+ int error = path_lookupat(dfd, pathname->name,
+ flags | LOOKUP_DIRECTORY, nd);
+ if (unlikely(error))
+ return error;
+ error = mnt_want_write(nd->path.mnt);
+ if (unlikely(error))
+ goto out;
+ /* we want directory to be writable */
+ error = inode_permission(nd->inode, MAY_WRITE | MAY_EXEC);
+ if (error)
+ goto out2;
+ dentry = nd->path.dentry;
+ dir = dentry->d_inode;
+ if (!dir->i_op->tmpfile) {
+ error = -EOPNOTSUPP;
+ goto out2;
+ }
+ child = d_alloc(dentry, &name);
+ if (unlikely(!child)) {
+ error = -ENOMEM;
+ goto out2;
+ }
+ nd->flags &= ~LOOKUP_DIRECTORY;
+ nd->flags |= op->intent;
+ dput(nd->path.dentry);
+ nd->path.dentry = child;
+ error = dir->i_op->tmpfile(dir, nd->path.dentry, op->mode);
+ if (error)
+ goto out2;
+ audit_inode(pathname, nd->path.dentry, 0);
+ error = may_open(&nd->path, op->acc_mode, op->open_flag);
+ if (error)
+ goto out2;
+ file->f_path.mnt = nd->path.mnt;
+ error = finish_open(file, nd->path.dentry, NULL, opened);
+ if (error)
+ goto out2;
+ error = open_check_o_direct(file);
+ if (error)
+ fput(file);
+out2:
+ mnt_drop_write(nd->path.mnt);
+out:
+ path_put(&nd->path);
+ return error;
+}
+
static struct file *path_openat(int dfd, struct filename *pathname,
struct nameidata *nd, const struct open_flags *op, int flags)
{
@@ -2936,6 +2991,11 @@
file->f_flags = op->open_flag;
+ if (unlikely(file->f_flags & O_TMPFILE)) {
+ error = do_tmpfile(dfd, pathname, nd, flags, op, file, &opened);
+ goto out;
+ }
+
error = path_init(dfd, pathname->name, flags | LOOKUP_PARENT, nd, &base);
if (unlikely(error))
goto out;
diff --git a/fs/open.c b/fs/open.c
index 8c74100..3a9e01b 100644
--- a/fs/open.c
+++ b/fs/open.c
@@ -840,11 +840,15 @@
if (flags & __O_SYNC)
flags |= O_DSYNC;
- /*
- * If we have O_PATH in the open flag. Then we
- * cannot have anything other than the below set of flags
- */
- if (flags & O_PATH) {
+ if (flags & O_TMPFILE) {
+ if (!(flags & O_CREAT))
+ return -EINVAL;
+ acc_mode = MAY_OPEN | ACC_MODE(flags);
+ } else if (flags & O_PATH) {
+ /*
+ * If we have O_PATH in the open flag. Then we
+ * cannot have anything other than the below set of flags
+ */
flags &= O_DIRECTORY | O_NOFOLLOW | O_PATH;
acc_mode = 0;
} else {
diff --git a/fs/pstore/ram.c b/fs/pstore/ram.c
index 3e09965..3a28d46 100644
--- a/fs/pstore/ram.c
+++ b/fs/pstore/ram.c
@@ -180,9 +180,6 @@
/* ECC correction notice */
ecc_notice_size = persistent_ram_ecc_string(prz, NULL, 0);
- if (!(size + ecc_notice_size))
- return 0;
-
*buf = kmalloc(size + ecc_notice_size + 1, GFP_KERNEL);
if (*buf == NULL)
return -ENOMEM;
diff --git a/fs/udf/super.c b/fs/udf/super.c
index 32f5297..d06ce26 100644
--- a/fs/udf/super.c
+++ b/fs/udf/super.c
@@ -631,6 +631,7 @@
int error = 0;
sync_filesystem(sb);
if (sbi->s_lvid_bh) {
int write_rev = le16_to_cpu(udf_sb_lvidiu(sbi)->minUDFWriteRev);
if (write_rev > UDF_MAX_WRITE_VERSION && !(*flags & MS_RDONLY))
diff --git a/include/asm-generic/dma-coherent.h b/include/asm-generic/dma-coherent.h
index a082a3f..b9da2da 100644
--- a/include/asm-generic/dma-coherent.h
+++ b/include/asm-generic/dma-coherent.h
@@ -31,6 +31,9 @@
struct dma_declare_info *dma_info);
extern int
+dma_set_resizable_heap_floor_size(struct device *dev, size_t floor_size);
+
+extern int
dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
dma_addr_t device_addr, size_t size, int flags);
diff --git a/include/linux/CwMcuSensor.h b/include/linux/CwMcuSensor.h
new file mode 100644
index 0000000..6d2dde6
--- /dev/null
+++ b/include/linux/CwMcuSensor.h
@@ -0,0 +1,325 @@
+/* CwMcuSensor.c - driver file for HTC SensorHUB
+ *
+ * Copyright (C) 2014 HTC Ltd.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef __CWMCUSENSOR_H__
+#define __CWMCUSENSOR_H__
+#include <linux/ioctl.h>
+
+#define CWMCU_I2C_NAME "CwMcuSensor"
+
+enum ABS_status {
+ CW_SCAN_ID = 0,
+ CW_SCAN_X,
+ CW_SCAN_Y,
+ CW_SCAN_Z,
+ CW_SCAN_XX,
+ CW_SCAN_YY,
+ CW_SCAN_ZZ,
+ CW_SCAN_TIMESTAMP,
+};
+
+
+typedef enum {
+ CW_ACCELERATION = 0,
+ CW_MAGNETIC = 1,
+ CW_GYRO = 2,
+ CW_LIGHT = 3,
+ CW_PRESSURE = 5,
+ CW_ORIENTATION = 6,
+ CW_ROTATIONVECTOR = 7,
+ CW_LINEARACCELERATION = 8,
+ CW_GRAVITY = 9,
+ HTC_MAGIC_COVER = 12,
+ CW_MAGNETIC_UNCALIBRATED = 16,
+ CW_GYROSCOPE_UNCALIBRATED = 17,
+ CW_GAME_ROTATION_VECTOR = 18,
+ CW_GEOMAGNETIC_ROTATION_VECTOR = 19,
+ CW_SIGNIFICANT_MOTION = 20,
+ CW_STEP_DETECTOR = 21,
+ CW_STEP_COUNTER = 22,
+ HTC_FACEDOWN_DETECTION = 23,
+ CW_SENSORS_ID_FW /* Be careful, do not exceed 31, Firmware ID limit */,
+ CW_ACCELERATION_W = 32,
+ CW_MAGNETIC_W = 33,
+ CW_GYRO_W = 34,
+ CW_PRESSURE_W = 37,
+ CW_ORIENTATION_W = 38,
+ CW_ROTATIONVECTOR_W = 39,
+ CW_LINEARACCELERATION_W = 40,
+ CW_GRAVITY_W = 41,
+ CW_MAGNETIC_UNCALIBRATED_W = 48,
+ CW_GYROSCOPE_UNCALIBRATED_W = 49,
+ CW_GAME_ROTATION_VECTOR_W = 50,
+ CW_GEOMAGNETIC_ROTATION_VECTOR_W = 51,
+ CW_STEP_DETECTOR_W = 53,
+ CW_STEP_COUNTER_W = 54,
+ CW_SENSORS_ID_TOTAL = 55, /* Includes Wake up version */
+ TIME_DIFF_EXHAUSTED = 97,
+ CW_TIME_BASE = 98,
+ CW_META_DATA = 99,
+ CW_MAGNETIC_UNCALIBRATED_BIAS = 100,
+ CW_GYROSCOPE_UNCALIBRATED_BIAS = 101
+} CW_SENSORS_ID;
+
+#define NS_PER_US 1000000LL
+
+#define FIRMWARE_VERSION 0x10
+
+#define HTC_SYSTEM_STATUS_REG 0x1E
+
+#define CW_I2C_REG_SENSORS_CALIBRATOR_STATUS_ACC 0x60
+#define CW_I2C_REG_SENSORS_CALIBRATOR_GET_DATA_ACC 0x68
+#define CW_I2C_REG_SENSORS_CALIBRATOR_SET_DATA_ACC 0x68
+#define CW_I2C_REG_SENSORS_CALIBRATOR_TARGET_ACC 0x69
+#define CW_I2C_REG_SENSORS_CALIBRATOR_RESULT_RL_ACC 0x6A
+
+#define CW_I2C_REG_SENSORS_CALIBRATOR_STATUS_MAG 0x70
+#define CW_I2C_REG_SENSORS_CALIBRATOR_GET_DATA_MAG 0x78
+#define CW_I2C_REG_SENSORS_CALIBRATOR_SET_DATA_MAG 0x78
+#define CW_I2C_REG_SENSORS_ACCURACY_MAG 0x79
+
+
+#define CW_I2C_REG_SENSORS_CALIBRATOR_STATUS_GYRO 0x80
+#define CW_I2C_REG_SENSORS_CALIBRATOR_GET_DATA_GYRO 0x88
+#define CW_I2C_REG_SENSORS_CALIBRATOR_SET_DATA_GYRO 0x88
+
+#define CW_I2C_REG_SENSORS_CALIBRATOR_STATUS_LIGHT 0x90
+#define CW_I2C_REG_SENSORS_CALIBRATOR_GET_DATA_LIGHT 0x98
+#define CW_I2C_REG_SENSORS_CALIBRATOR_SET_DATA_LIGHT 0x98
+
+#define CW_I2C_REG_SENSORS_CALIBRATOR_STATUS_PRESSURE 0xB0
+#define CW_I2C_REG_SENSORS_CALIBRATOR_GET_DATA_PRESSURE 0xB8
+#define CW_I2C_REG_SENSORS_CALIBRATOR_SET_DATA_PRESSURE 0xB8
+#define PRESSURE_UPDATE_RATE 0xB6
+#define PRESSURE_WAKE_UPDATE_RATE 0xB7
+
+#define CWMCU_MAX_DELAY 200
+#define CWMCU_NO_POLLING_DELAY 10000
+
+#define G_SENSORS_STATUS 0x60
+#define ACCE_UPDATE_RATE 0x66
+#define ACCE_WAKE_UPDATE_RATE 0x67
+#define ECOMPASS_SENSORS_STATUS 0x70
+#define MAGN_UPDATE_RATE 0x76
+#define MAGN_WAKE_UPDATE_RATE 0x77
+#define GYRO_SENSORS_STATUS 0x80
+#define GYRO_UPDATE_RATE 0x86
+#define GYRO_WAKE_UPDATE_RATE 0x87
+#define LIGHT_SENSORS_STATUS 0x90
+#define LIGHT_UPDATE_PERIOD 0x96
+#define LIGHT_SENSORS_CALIBRATION_DATA 0x98
+
+#define ORIE_UPDATE_RATE 0xC0
+#define ROTA_UPDATE_RATE 0xC1
+#define LINE_UPDATE_RATE 0xC2
+#define GRAV_UPDATE_RATE 0xC3
+#define MAGN_UNCA_UPDATE_RATE 0xC4
+#define GYRO_UNCA_UPDATE_RATE 0xC5
+#define GAME_ROTA_UPDATE_RATE 0xC6
+#define GEOM_ROTA_UPDATE_RATE 0xC7
+#define SIGN_UPDATE_RATE 0xC8
+
+#define ORIE_WAKE_UPDATE_RATE 0xC9
+#define ROTA_WAKE_UPDATE_RATE 0xCA
+#define LINE_WAKE_UPDATE_RATE 0xCB
+#define GRAV_WAKE_UPDATE_RATE 0xCC
+#define MAGN_UNCA_WAKE_UPDATE_RATE 0xCD
+#define GYRO_UNCA_WAKE_UPDATE_RATE 0xCE
+#define GAME_ROTA_WAKE_UPDATE_RATE 0xCF
+#define GEOM_ROTA_WAKE_UPDATE_RATE 0xD2
+#define STEP_COUNTER_UPDATE_PERIOD 0xD3
+
+#define STEP_COUNTER_MASK ((1ULL << CW_STEP_COUNTER) | \
+ (1ULL << CW_STEP_COUNTER_W))
+
+#define IIO_CONTINUOUS_MASK ((1ULL << CW_ACCELERATION) | \
+ (1ULL << CW_MAGNETIC) | \
+ (1ULL << CW_GYRO) | \
+ (1ULL << CW_PRESSURE) | \
+ (1ULL << CW_ORIENTATION) | \
+ (1ULL << CW_ROTATIONVECTOR) | \
+ (1ULL << CW_LINEARACCELERATION) | \
+ (1ULL << CW_GRAVITY) | \
+ (1ULL << CW_MAGNETIC_UNCALIBRATED) | \
+ (1ULL << CW_GYROSCOPE_UNCALIBRATED) | \
+ (1ULL << CW_GAME_ROTATION_VECTOR) | \
+ (1ULL << CW_GEOMAGNETIC_ROTATION_VECTOR) | \
+ (1ULL << CW_STEP_DETECTOR) | \
+ (1ULL << CW_STEP_COUNTER) | \
+ (1ULL << CW_ACCELERATION_W) | \
+ (1ULL << CW_MAGNETIC_W) | \
+ (1ULL << CW_GYRO_W) | \
+ (1ULL << CW_PRESSURE_W) | \
+ (1ULL << CW_ORIENTATION_W) | \
+ (1ULL << CW_ROTATIONVECTOR_W) | \
+ (1ULL << CW_LINEARACCELERATION_W) | \
+ (1ULL << CW_GRAVITY_W) | \
+ (1ULL << CW_MAGNETIC_UNCALIBRATED_W) | \
+ (1ULL << CW_GYROSCOPE_UNCALIBRATED_W) | \
+ (1ULL << CW_GAME_ROTATION_VECTOR_W) | \
+ (1ULL << CW_GEOMAGNETIC_ROTATION_VECTOR_W) | \
+ (1ULL << CW_STEP_DETECTOR_W) | \
+ (1ULL << CW_STEP_COUNTER_W))
+
+#define CW_I2C_REG_WATCHDOG_STATUS 0xE6
+#define WATCHDOG_STATUS_LEN 12
+
+#define CW_I2C_REG_EXCEPTION_BUFFER_LEN 0xFD
+#define EXCEPTION_BUFFER_LEN_SIZE 4
+#define CW_I2C_REG_EXCEPTION_BUFFER 0xFE
+#define EXCEPTION_BLOCK_LEN 16
+
+#define CW_I2C_REG_WARN_MSG_ENABLE 0xFA
+#define CW_I2C_REG_WARN_MSG_BUFFER_LEN 0xFB
+#define WARN_MSG_BUFFER_LEN_SIZE 8
+#define CW_I2C_REG_WARN_MSG_BUFFER 0xFC
+#define WARN_MSG_BLOCK_LEN 16
+#define WARN_MSG_PER_ITEM_LEN 120
+
+#define CW_I2C_REG_WATCH_DOG_ENABLE 0xF9
+
+#define UPDATE_RATE_NORMAL 1
+#define UPDATE_RATE_UI 2
+#define UPDATE_RATE_GAME 3
+#define UPDATE_RATE_FASTEST 4
+#define UPDATE_RATE_RATE_10Hz 5
+#define UPDATE_RATE_RATE_25Hz 6
+
+#define GENSOR_POSITION 0x65
+#define COMPASS_POSITION 0x75
+#define GYRO_POSITION 0x85
+
+#define num_sensors CW_SENSORS_ID_TOTAL
+
+#define CWSTM32_BATCH_MODE_COMMAND 0x40 /* R/W 1 Byte
+ * Bit 2: Timeout flag (R/WC)
+ * Bit 4: Buffer full flag (R/WC)
+ */
+
+#define CWSTM32_BATCH_MODE_DATA_QUEUE 0x45 /* R/W 9 Bytes */
+#define CWSTM32_BATCH_MODE_TIMEOUT 0x46 /* R/W 4 Bytes (ms) */
+#define CWSTM32_BATCH_MODE_DATA_COUNTER 0x47 /* R/W 4 Bytes
+ * (4 bytes, from low byte to
+ * high byte) */
+#define CWSTM32_BATCH_FLUSH 0x48 /* W 1 Byte (sensors_id) */
+
+#define CWSTM32_WAKE_UP_BATCH_MODE_DATA_QUEUE 0x55 /* R/W 9 Bytes */
+#define CWSTM32_WAKE_UP_BATCH_MODE_TIMEOUT 0x56 /* R/W 4 Bytes
+ * (ms) */
+#define CWSTM32_WAKE_UP_BATCH_MODE_DATA_COUNTER 0x57 /* R/W 4 Bytes
+ * (4 bytes, from low byte
+ * to high byte) */
+
+#define SYNC_TIMESTAMP_BIT (1 << 1)
+#define TIMESTAMP_SYNC_CODE (98)
+
+#define CW_I2C_REG_MCU_TIME 0x11
+
+#define MAX_EVENT_COUNT 2500
+
+/* If queue is empty */
+#define CWMCU_NODATA 0xFF
+
+#define CWSTM32_ENABLE_REG 0x01
+#define CWSTM32_READ_SEQUENCE_DATA_REG 0x0F
+
+#define CWSTM32_WRITE_POSITION_Acceleration 0x20
+#define CWSTM32_WRITE_POSITION_Magnetic 0x21
+#define CWSTM32_WRITE_POSITION_Gyro 0x22
+
+#define CWSTM32_WRITE_CLEAN_COUNT_Pedometer 0x30
+
+#define CWSTM32_INT_ST1 0x08
+#define CWSTM32_INT_ST2 0x09
+#define CWSTM32_INT_ST3 0x0A
+#define CWSTM32_INT_ST4 0x0B
+#define CWSTM32_ERR_ST 0x1F
+
+#define CW_BATCH_ENABLE_REG 0x41
+#define CW_WAKE_UP_BATCH_ENABLE_REG 0x51
+
+#define CW_CPU_STATUS_REG 0xD1
+
+/* INT_ST1 */
+#define CW_MCU_INT_BIT_LIGHT (1 << 3)
+
+/* INT_ST2 */
+#define CW_MCU_INT_BIT_MAGIC_COVER (1 << 4)
+
+/* INT_ST3 */
+#define CW_MCU_INT_BIT_SIGNIFICANT_MOTION (1 << 4)
+#define CW_MCU_INT_BIT_STEP_DETECTOR (1 << 5)
+#define CW_MCU_INT_BIT_STEP_COUNTER (1 << 6)
+#define CW_MCU_INT_BIT_FACEDOWN_DETECTION (1 << 7)
+
+/* ERR_ST */
+#define CW_MCU_INT_BIT_ERROR_WARN_MSG (1 << 5)
+#define CW_MCU_INT_BIT_ERROR_MCU_EXCEPTION (1 << 6)
+#define CW_MCU_INT_BIT_ERROR_WATCHDOG_RESET (1 << 7)
+
+/* batch_st */
+#define CW_MCU_INT_BIT_BATCH_TIMEOUT (1 << 2)
+#define CW_MCU_INT_BIT_BATCH_BUFFER_FULL (1 << 4)
+#define CW_MCU_INT_BIT_BATCH_TRIGGER_READ (CW_MCU_INT_BIT_BATCH_TIMEOUT |\
+ CW_MCU_INT_BIT_BATCH_BUFFER_FULL)
+#define CW_MCU_INT_BIT_BATCH_INT_MASK CW_MCU_INT_BIT_BATCH_TRIGGER_READ
+
+#define IIO_SENSORS_MASK (((u64)(~0ULL)) & ~(1ULL << HTC_MAGIC_COVER) & \
+ ~(1ULL << (HTC_MAGIC_COVER+32)))
+
+#define CW_MCU_BIT_LIGHT_POLLING (1 << 5)
+
+#define FW_DOES_NOT_EXIST (1 << 0)
+#define FW_UPDATE_QUEUED (1 << 1)
+#define FW_ERASE_FAILED (1 << 2)
+#define FW_FLASH_FAILED (1 << 3)
+
+#define CW_MCU_I2C_SENSORS_REG_START (0x20)
+
+#define CWSTM32_READ_Gesture_Flip (CW_MCU_I2C_SENSORS_REG_START + HTC_GESTURE_FLIP)
+#define CWSTM32_READ_Acceleration (CW_MCU_I2C_SENSORS_REG_START + CW_ACCELERATION)
+#define CWSTM32_READ_Magnetic (CW_MCU_I2C_SENSORS_REG_START + CW_MAGNETIC)
+#define CWSTM32_READ_Gyro (CW_MCU_I2C_SENSORS_REG_START + CW_GYRO)
+#define CWSTM32_READ_Light (CW_MCU_I2C_SENSORS_REG_START + CW_LIGHT)
+#define CWSTM32_READ_Pressure (CW_MCU_I2C_SENSORS_REG_START + CW_PRESSURE)
+#define CWSTM32_READ_Orientation (CW_MCU_I2C_SENSORS_REG_START + CW_ORIENTATION)
+#define CWSTM32_READ_RotationVector (CW_MCU_I2C_SENSORS_REG_START + CW_ROTATIONVECTOR)
+#define CWSTM32_READ_LinearAcceleration (CW_MCU_I2C_SENSORS_REG_START + CW_LINEARACCELERATION)
+#define CWSTM32_READ_Gravity (CW_MCU_I2C_SENSORS_REG_START + CW_GRAVITY)
+#define CWSTM32_READ_Hall_Sensor 0x2C
+#define CWSTM32_READ_MAGNETIC_UNCALIBRATED 0x30
+#define CWSTM32_READ_GYROSCOPE_UNCALIBRATED 0x31
+#define CWSTM32_READ_GAME_ROTATION_VECTOR 0x32
+#define CWSTM32_READ_GEOMAGNETIC_ROTATION_VECTOR 0x33
+#define CWSTM32_READ_SIGNIFICANT_MOTION 0x34
+#define CWSTM32_READ_STEP_DETECTOR 0x35
+#define CWSTM32_READ_STEP_COUNTER 0x36
+#define CWSTM32_READ_FACEDOWN_DETECTION 0x3A
+
+#ifdef __KERNEL__
+struct cwmcu_platform_data {
+ unsigned char acceleration_axes;
+ unsigned char magnetic_axes;
+ unsigned char gyro_axes;
+ uint32_t gpio_wake_mcu;
+ uint32_t gpio_reset;
+ uint32_t gpio_chip_mode;
+ uint32_t gpio_mcu_irq;
+ int gs_chip_layout;
+
+};
+#endif /* __KERNEL__ */
+
+#endif /* __CWMCUSENSOR_H__ */
diff --git a/include/linux/adaptive_skin.h b/include/linux/adaptive_skin.h
new file mode 100644
index 0000000..b6d113b
--- /dev/null
+++ b/include/linux/adaptive_skin.h
@@ -0,0 +1,34 @@
+/*
+ * include/linux/adaptive_skin.h
+ *
+ * Copyright (c) 2014, NVIDIA CORPORATION. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+
+#ifndef _ADAPTIVE_SKIN_H
+#define _ADAPTIVE_SKIN_H
+
+struct adaptive_skin_thermal_gov_params {
+ int tj_tran_threshold;
+ int tj_std_threshold;
+ int tj_std_fup_threshold;
+
+ int tskin_tran_threshold;
+ int tskin_std_threshold;
+
+ int target_state_tdp;
+};
+
+#endif
diff --git a/include/linux/battery_system_voltage_monitor.h b/include/linux/battery_system_voltage_monitor.h
new file mode 100644
index 0000000..97fa2d7
--- /dev/null
+++ b/include/linux/battery_system_voltage_monitor.h
@@ -0,0 +1,49 @@
+/*
+ * battery_system_voltage_monitor.h
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef __BATTERY_SYSTEM_VOLTAGE_MONITOR_H
+#define __BATTERY_SYSTEM_VOLTAGE_MONITOR_H
+
+struct battery_system_voltage_monitor_worker_operations {
+ int (*monitor_on_once)(unsigned int threshold, void *data);
+ void (*monitor_off)(void *data);
+ int (*listener_register)(int (*notification)(unsigned int threshold),
+ void *data);
+ void (*listener_unregister)(void *data);
+};
+
+struct battery_system_voltage_monitor_worker {
+ struct battery_system_voltage_monitor_worker_operations *ops;
+ void *data;
+};
+
+int battery_voltage_monitor_worker_register(
+ struct battery_system_voltage_monitor_worker *worker);
+
+int battery_voltage_monitor_on_once(unsigned int voltage);
+int battery_voltage_monitor_off(void);
+int battery_voltage_monitor_listener_register(
+ int (*notification)(unsigned int voltage));
+int battery_voltage_monitor_listener_unregister(void);
+
+int system_voltage_monitor_worker_register(
+ struct battery_system_voltage_monitor_worker *worker);
+
+int system_voltage_monitor_on_once(unsigned int voltage);
+int system_voltage_monitor_off(void);
+int system_voltage_monitor_listener_register(
+ int (*notification)(unsigned int voltage));
+int system_voltage_monitor_listener_unregister(void);
+
+#endif
diff --git a/include/linux/cable_vbus_monitor.h b/include/linux/cable_vbus_monitor.h
new file mode 100644
index 0000000..6d4cfc8
--- /dev/null
+++ b/include/linux/cable_vbus_monitor.h
@@ -0,0 +1,38 @@
+/*
+ * cable_vbus_monitor.h
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef __CABLE_VBUS_MONITOR_H
+#define __CABLE_VBUS_MONITOR_H
+
+/*
+ * Provider to register the VBUS check callback.
+ *
+ * is_vbus_latched: callback function to check if vbus is latched once
+ * data: provider data
+ */
+int cable_vbus_monitor_latch_cb_register(int (*is_vbus_latched)(void *data),
+ void *data);
+/*
+ * Provider to unregister the VBUS check callback.
+ *
+ * data: provider data
+ */
+int cable_vbus_monitor_latch_cb_unregister(void *data);
+
+/*
+ * User to check if VBUS is latched once.
+ * The result might be cleared if called once.
+ */
+int cable_vbus_monitor_is_vbus_latched(void);
+#endif /* __CABLE_VBUS_MONITOR_H */
diff --git a/include/linux/coresight-stm.h b/include/linux/coresight-stm.h
new file mode 100644
index 0000000..9fe3c60
--- /dev/null
+++ b/include/linux/coresight-stm.h
@@ -0,0 +1,33 @@
+#ifndef __LINUX_CORESIGHT_STM_H_
+#define __LINUX_CORESIGHT_STM_H_
+
+#include <uapi/linux/coresight-stm.h>
+
+#define stm_log_inv(entity_id, proto_id, data, size) \
+ stm_trace(STM_OPTION_NONE, entity_id, proto_id, data, size)
+
+#define stm_log_inv_ts(entity_id, proto_id, data, size) \
+ stm_trace(STM_OPTION_TIMESTAMPED, entity_id, proto_id, \
+ data, size)
+
+#define stm_log_gtd(entity_id, proto_id, data, size) \
+ stm_trace(STM_OPTION_GUARANTEED, entity_id, proto_id, \
+ data, size)
+
+#define stm_log_gtd_ts(entity_id, proto_id, data, size) \
+ stm_trace(STM_OPTION_GUARANTEED | STM_OPTION_TIMESTAMPED, \
+ entity_id, proto_id, data, size)
+
+#define stm_log(entity_id, data, size) \
+ stm_log_inv_ts(entity_id, 0, data, size)
+
+#ifdef CONFIG_CORESIGHT_STM
+extern int stm_trace(uint32_t options, uint8_t entity_id, uint8_t proto_id, const void *data, uint32_t size);
+#else
+static inline int stm_trace(uint32_t options, uint8_t entity_id, uint8_t proto_id, const void *data, uint32_t size)
+{
+ return 0;
+}
+#endif
+
+#endif
diff --git a/include/linux/dcache.h b/include/linux/dcache.h
index 9be5ac9..5f936e6 100644
--- a/include/linux/dcache.h
+++ b/include/linux/dcache.h
@@ -246,6 +246,8 @@
/* <clickety>-<click> the ramfs-type tree */
extern void d_genocide(struct dentry *);
+extern void d_tmpfile(struct dentry *, struct inode *);
+
extern struct dentry *d_find_alias(struct inode *);
extern void d_prune_aliases(struct inode *);
diff --git a/include/linux/diagchar.h b/include/linux/diagchar.h
new file mode 100644
index 0000000..dee46bc
--- /dev/null
+++ b/include/linux/diagchar.h
@@ -0,0 +1,751 @@
+/* Copyright (c) 2008-2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef DIAGCHAR_SHARED
+#define DIAGCHAR_SHARED
+
+#define MSG_MASKS_TYPE 0x00000001
+#define LOG_MASKS_TYPE 0x00000002
+#define EVENT_MASKS_TYPE 0x00000004
+#define PKT_TYPE 0x00000008
+#define DEINIT_TYPE 0x00000010
+#define USER_SPACE_DATA_TYPE 0x00000020
+#define DCI_DATA_TYPE 0x00000040
+#define CALLBACK_DATA_TYPE 0x00000080
+#define DCI_LOG_MASKS_TYPE 0x00000100
+#define DCI_EVENT_MASKS_TYPE 0x00000200
+
+/* We always use 64 for the logging mode: UART/QXDM2SD,
+ * however, to not conflict with QCT definition, we shift
+ * the USERMODE_DIAGFWD to 2048
+ */
+#define USERMODE_DIAGFWD 2048
+#define USERMODE_DIAGFWD_LEGACY 64
+
+#define USB_MODE 1
+#define MEMORY_DEVICE_MODE 2
+#define NO_LOGGING_MODE 3
+#define UART_MODE 4
+#define SOCKET_MODE 5
+#define CALLBACK_MODE 6
+
+/* different values that go in for diag_data_type */
+
+#define DATA_TYPE_EVENT 0
+#define DATA_TYPE_F3 1
+#define DATA_TYPE_LOG 2
+#define DATA_TYPE_RESPONSE 3
+#define DATA_TYPE_DCI_LOG 0x00000100
+#define DATA_TYPE_DCI_EVENT 0x00000200
+
+/* Different IOCTL values */
+#define DIAG_IOCTL_COMMAND_REG 0
+#define DIAG_IOCTL_SWITCH_LOGGING 7
+#define DIAG_IOCTL_GET_DELAYED_RSP_ID 8
+#define DIAG_IOCTL_LSM_DEINIT 9
+#define DIAG_IOCTL_DCI_INIT 20
+#define DIAG_IOCTL_DCI_DEINIT 21
+#define DIAG_IOCTL_DCI_SUPPORT 22
+#define DIAG_IOCTL_DCI_REG 23
+#define DIAG_IOCTL_DCI_STREAM_INIT 24
+#define DIAG_IOCTL_DCI_HEALTH_STATS 25
+#define DIAG_IOCTL_DCI_LOG_STATUS 26
+#define DIAG_IOCTL_DCI_EVENT_STATUS 27
+#define DIAG_IOCTL_DCI_CLEAR_LOGS 28
+#define DIAG_IOCTL_DCI_CLEAR_EVENTS 29
+#define DIAG_IOCTL_REMOTE_DEV 32
+#define DIAG_IOCTL_VOTE_REAL_TIME 33
+#define DIAG_IOCTL_GET_REAL_TIME 34
+#define DIAG_IOCTL_NONBLOCKING_TIMEOUT 64
+
+/* PC Tools IDs */
+#define APQ8060_TOOLS_ID 4062
+#define AO8960_TOOLS_ID 4064
+#define APQ8064_TOOLS_ID 4072
+#define MSM8625_TOOLS_ID 4075
+#define MSM8930_TOOLS_ID 4076
+#define MSM8630_TOOLS_ID 4077
+#define MSM8230_TOOLS_ID 4078
+#define APQ8030_TOOLS_ID 4079
+#define MSM8627_TOOLS_ID 4080
+#define MSM8227_TOOLS_ID 4081
+#define MSM8974_TOOLS_ID 4083
+#define APQ8074_TOOLS_ID 4090
+#define APQ8084_TOOLS_ID 4095
+
+#define MSG_MASK_0 (0x00000001)
+#define MSG_MASK_1 (0x00000002)
+#define MSG_MASK_2 (0x00000004)
+#define MSG_MASK_3 (0x00000008)
+#define MSG_MASK_4 (0x00000010)
+#define MSG_MASK_5 (0x00000020)
+#define MSG_MASK_6 (0x00000040)
+#define MSG_MASK_7 (0x00000080)
+#define MSG_MASK_8 (0x00000100)
+#define MSG_MASK_9 (0x00000200)
+#define MSG_MASK_10 (0x00000400)
+#define MSG_MASK_11 (0x00000800)
+#define MSG_MASK_12 (0x00001000)
+#define MSG_MASK_13 (0x00002000)
+#define MSG_MASK_14 (0x00004000)
+#define MSG_MASK_15 (0x00008000)
+#define MSG_MASK_16 (0x00010000)
+#define MSG_MASK_17 (0x00020000)
+#define MSG_MASK_18 (0x00040000)
+#define MSG_MASK_19 (0x00080000)
+#define MSG_MASK_20 (0x00100000)
+#define MSG_MASK_21 (0x00200000)
+#define MSG_MASK_22 (0x00400000)
+#define MSG_MASK_23 (0x00800000)
+#define MSG_MASK_24 (0x01000000)
+#define MSG_MASK_25 (0x02000000)
+#define MSG_MASK_26 (0x04000000)
+#define MSG_MASK_27 (0x08000000)
+#define MSG_MASK_28 (0x10000000)
+#define MSG_MASK_29 (0x20000000)
+#define MSG_MASK_30 (0x40000000)
+#define MSG_MASK_31 (0x80000000)
+
+/* These masks are to be used for support of all legacy messages in the sw.
+The user does not need to remember the names as they will be embedded in
+the appropriate macros. */
+#define MSG_LEGACY_LOW MSG_MASK_0
+#define MSG_LEGACY_MED MSG_MASK_1
+#define MSG_LEGACY_HIGH MSG_MASK_2
+#define MSG_LEGACY_ERROR MSG_MASK_3
+#define MSG_LEGACY_FATAL MSG_MASK_4
+
+/* Legacy Message Priorities */
+#define MSG_LVL_FATAL (MSG_LEGACY_FATAL)
+#define MSG_LVL_ERROR (MSG_LEGACY_ERROR | MSG_LVL_FATAL)
+#define MSG_LVL_HIGH (MSG_LEGACY_HIGH | MSG_LVL_ERROR)
+#define MSG_LVL_MED (MSG_LEGACY_MED | MSG_LVL_HIGH)
+#define MSG_LVL_LOW (MSG_LEGACY_LOW | MSG_LVL_MED)
+
+#define MSG_LVL_NONE 0
+
+/* This needs to be modified manually now, when we add
+ a new RANGE of SSIDs to the msg_mask_tbl */
+#define MSG_MASK_TBL_CNT 24
+#define EVENT_LAST_ID 0x09D8
+
+#define MSG_SSID_0 0
+#define MSG_SSID_0_LAST 100
+#define MSG_SSID_1 500
+#define MSG_SSID_1_LAST 506
+#define MSG_SSID_2 1000
+#define MSG_SSID_2_LAST 1007
+#define MSG_SSID_3 2000
+#define MSG_SSID_3_LAST 2008
+#define MSG_SSID_4 3000
+#define MSG_SSID_4_LAST 3014
+#define MSG_SSID_5 4000
+#define MSG_SSID_5_LAST 4010
+#define MSG_SSID_6 4500
+#define MSG_SSID_6_LAST 4526
+#define MSG_SSID_7 4600
+#define MSG_SSID_7_LAST 4614
+#define MSG_SSID_8 5000
+#define MSG_SSID_8_LAST 5030
+#define MSG_SSID_9 5500
+#define MSG_SSID_9_LAST 5516
+#define MSG_SSID_10 6000
+#define MSG_SSID_10_LAST 6080
+#define MSG_SSID_11 6500
+#define MSG_SSID_11_LAST 6521
+#define MSG_SSID_12 7000
+#define MSG_SSID_12_LAST 7003
+#define MSG_SSID_13 7100
+#define MSG_SSID_13_LAST 7111
+#define MSG_SSID_14 7200
+#define MSG_SSID_14_LAST 7201
+#define MSG_SSID_15 8000
+#define MSG_SSID_15_LAST 8000
+#define MSG_SSID_16 8500
+#define MSG_SSID_16_LAST 8524
+#define MSG_SSID_17 9000
+#define MSG_SSID_17_LAST 9008
+#define MSG_SSID_18 9500
+#define MSG_SSID_18_LAST 9509
+#define MSG_SSID_19 10200
+#define MSG_SSID_19_LAST 10210
+#define MSG_SSID_20 10251
+#define MSG_SSID_20_LAST 10255
+#define MSG_SSID_21 10300
+#define MSG_SSID_21_LAST 10300
+#define MSG_SSID_22 10350
+#define MSG_SSID_22_LAST 10374
+#define MSG_SSID_23 0xC000
+#define MSG_SSID_23_LAST 0xC063
+
+struct diagpkt_delay_params {
+ void *rsp_ptr;
+ int size;
+ int *num_bytes_ptr;
+};
+
+static const uint32_t msg_bld_masks_0[] = {
+ MSG_LVL_LOW,
+ MSG_LVL_MED,
+ MSG_LVL_LOW,
+ MSG_LVL_ERROR,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_MED,
+ MSG_LVL_HIGH,
+ MSG_LVL_ERROR,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_ERROR,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_HIGH,
+ MSG_LVL_HIGH,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_ERROR,
+ MSG_LVL_LOW,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_LOW,
+ MSG_LVL_MED,
+ MSG_LVL_LOW,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED | MSG_MASK_7 |
+ MSG_MASK_8 | MSG_MASK_9 | MSG_MASK_10 | MSG_MASK_11 | MSG_MASK_12 |
+ MSG_MASK_13 | MSG_MASK_14 | MSG_MASK_15 | MSG_MASK_16 | MSG_MASK_17 | MSG_MASK_18 | MSG_MASK_19 | MSG_MASK_20 | MSG_MASK_21,
+ MSG_LVL_MED | MSG_MASK_5 |
+ MSG_MASK_6 | MSG_MASK_7 | MSG_MASK_8 | MSG_MASK_9 | MSG_MASK_10 | MSG_MASK_11 | MSG_MASK_12 | MSG_MASK_13 | MSG_MASK_14 | MSG_MASK_15 | MSG_MASK_16 | MSG_MASK_17,
+ MSG_LVL_LOW,
+ MSG_LVL_MED,
+ MSG_LVL_HIGH,
+ MSG_LVL_HIGH,
+ MSG_LVL_HIGH,
+ MSG_LVL_HIGH,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED | MSG_MASK_5 | MSG_MASK_6 | MSG_MASK_7 | MSG_MASK_8,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_HIGH,
+ MSG_LVL_HIGH,
+ MSG_LVL_HIGH,
+ MSG_LVL_MED,
+ MSG_LVL_MED | MSG_MASK_5 |
+ MSG_MASK_6 | MSG_MASK_7 | MSG_MASK_8 | MSG_MASK_9 | MSG_MASK_10 |
+ MSG_MASK_11 | MSG_MASK_12 | MSG_MASK_13 | MSG_MASK_14 | MSG_MASK_15 |
+ MSG_MASK_16 | MSG_MASK_17 | MSG_MASK_18 | MSG_MASK_19 | MSG_MASK_20 | MSG_MASK_21 | MSG_MASK_22 | MSG_MASK_23 | MSG_MASK_24 | MSG_MASK_25,
+ MSG_LVL_MED | MSG_MASK_5 | MSG_MASK_6 | MSG_MASK_7 | MSG_MASK_8 | MSG_MASK_9 | MSG_MASK_10,
+ MSG_LVL_MED,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_MED,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_HIGH,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW | MSG_MASK_5 | MSG_MASK_6 | MSG_MASK_7 | MSG_MASK_8,
+ MSG_LVL_LOW | MSG_MASK_5 | MSG_MASK_6,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_LOW,
+ MSG_LVL_MED | MSG_MASK_5 |
+ MSG_MASK_6 | MSG_MASK_7 | MSG_MASK_8 | MSG_MASK_9 | MSG_MASK_10 |
+ MSG_MASK_11 | MSG_MASK_12 | MSG_MASK_13 | MSG_MASK_14 | MSG_MASK_15 | MSG_MASK_16 | MSG_MASK_17 | MSG_MASK_18 | MSG_MASK_19 | MSG_MASK_20,
+ MSG_LVL_LOW,
+ MSG_LVL_MED,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_HIGH,
+ MSG_LVL_HIGH,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_HIGH | MSG_MASK_21,
+ MSG_LVL_HIGH,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW | MSG_LVL_MED | MSG_LVL_HIGH | MSG_LVL_ERROR | MSG_LVL_FATAL,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW | MSG_LVL_MED | MSG_LVL_HIGH | MSG_LVL_ERROR | MSG_LVL_FATAL,
+ MSG_LVL_LOW,
+ MSG_LVL_MED,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW | MSG_LVL_MED | MSG_LVL_HIGH | MSG_LVL_ERROR | MSG_LVL_FATAL,
+ MSG_LVL_MED,
+ MSG_LVL_HIGH,
+ MSG_LVL_LOW,
+ MSG_LVL_HIGH,
+};
+
+static const uint32_t msg_bld_masks_1[] = {
+ MSG_LVL_MED,
+ MSG_LVL_HIGH,
+ MSG_LVL_HIGH,
+ MSG_LVL_LOW,
+ MSG_LVL_HIGH,
+ MSG_LVL_HIGH,
+ MSG_LVL_HIGH
+};
+
+static const uint32_t msg_bld_masks_2[] = {
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED | MSG_MASK_5,
+ MSG_LVL_MED,
+ MSG_LVL_MED
+};
+
+static const uint32_t msg_bld_masks_3[] = {
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED
+};
+
+static const uint32_t msg_bld_masks_4[] = {
+ MSG_LVL_HIGH,
+ MSG_LVL_HIGH,
+ MSG_LVL_HIGH,
+ MSG_LVL_HIGH,
+ MSG_LVL_HIGH,
+ MSG_LVL_HIGH,
+ MSG_LVL_HIGH,
+ MSG_LVL_HIGH,
+ MSG_LVL_HIGH,
+ MSG_LVL_HIGH,
+ MSG_LVL_HIGH,
+ MSG_LVL_HIGH,
+ MSG_LVL_HIGH,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW
+};
+
+static const uint32_t msg_bld_masks_5[] = {
+ MSG_LVL_HIGH,
+ MSG_LVL_MED,
+ MSG_LVL_HIGH,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED | MSG_MASK_5 | MSG_MASK_6 | MSG_MASK_7 | MSG_MASK_8 | MSG_MASK_9,
+ MSG_LVL_MED
+};
+
+static const uint32_t msg_bld_masks_6[] = {
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW
+};
+
+static const uint32_t msg_bld_masks_7[] = {
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW
+};
+
+static const uint32_t msg_bld_masks_8[] = {
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED
+};
+
+static const uint32_t msg_bld_masks_9[] = {
+ MSG_LVL_MED | MSG_MASK_5,
+ MSG_LVL_MED | MSG_MASK_5,
+ MSG_LVL_MED | MSG_MASK_5,
+ MSG_LVL_MED | MSG_MASK_5,
+ MSG_LVL_MED | MSG_MASK_5,
+ MSG_LVL_MED | MSG_MASK_5,
+ MSG_LVL_MED | MSG_MASK_5,
+ MSG_LVL_MED | MSG_MASK_5,
+ MSG_LVL_MED | MSG_MASK_5,
+ MSG_LVL_MED | MSG_MASK_5,
+ MSG_LVL_MED | MSG_MASK_5,
+ MSG_LVL_MED | MSG_MASK_5,
+ MSG_LVL_MED | MSG_MASK_5,
+ MSG_LVL_MED | MSG_MASK_5,
+ MSG_LVL_MED | MSG_MASK_5,
+ MSG_LVL_MED | MSG_MASK_5,
+ MSG_LVL_MED | MSG_MASK_5
+};
+
+static const uint32_t msg_bld_masks_10[] = {
+ MSG_LVL_MED,
+ MSG_LVL_ERROR,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW | MSG_MASK_5 |
+ MSG_MASK_6 | MSG_MASK_7 | MSG_MASK_8 | MSG_MASK_9 | MSG_MASK_10 |
+ MSG_MASK_11 | MSG_MASK_12 | MSG_MASK_13 | MSG_MASK_14 |
+ MSG_MASK_15 | MSG_MASK_16 | MSG_MASK_17 | MSG_MASK_18 |
+ MSG_MASK_19 | MSG_MASK_20 | MSG_MASK_21 | MSG_MASK_22,
+ MSG_LVL_LOW | MSG_MASK_5,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW | MSG_MASK_5,
+ MSG_LVL_LOW | MSG_MASK_5,
+ MSG_LVL_LOW | MSG_MASK_5,
+ MSG_LVL_LOW | MSG_MASK_5,
+ MSG_LVL_LOW | MSG_MASK_5,
+ MSG_LVL_LOW | MSG_MASK_5,
+ MSG_LVL_LOW | MSG_MASK_5,
+ MSG_LVL_LOW | MSG_MASK_5,
+ MSG_LVL_LOW | MSG_MASK_5,
+ MSG_LVL_LOW | MSG_MASK_5,
+ MSG_LVL_LOW | MSG_MASK_5,
+ MSG_LVL_LOW | MSG_MASK_5,
+ MSG_LVL_LOW | MSG_MASK_5,
+ MSG_LVL_LOW | MSG_MASK_5,
+ MSG_LVL_LOW | MSG_MASK_5,
+ MSG_LVL_LOW | MSG_MASK_5,
+ MSG_LVL_LOW | MSG_MASK_5,
+ MSG_LVL_LOW | MSG_MASK_5,
+ MSG_LVL_LOW | MSG_MASK_5,
+ MSG_LVL_LOW,
+ MSG_LVL_MED,
+ MSG_LVL_HIGH,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW | MSG_MASK_5,
+ MSG_LVL_LOW | MSG_MASK_0 | MSG_MASK_1 | MSG_MASK_2 |
+ MSG_MASK_3 | MSG_MASK_4 | MSG_MASK_5 | MSG_MASK_6,
+ MSG_LVL_HIGH,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW
+};
+
+static const uint32_t msg_bld_masks_11[] = {
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+};
+
+static const uint32_t msg_bld_masks_12[] = {
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+};
+
+static const uint32_t msg_bld_masks_13[] = {
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+};
+
+static const uint32_t msg_bld_masks_14[] = {
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+};
+
+static const uint32_t msg_bld_masks_15[] = {
+ MSG_LVL_MED
+};
+
+static const uint32_t msg_bld_masks_16[] = {
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+};
+
+static const uint32_t msg_bld_masks_17[] = {
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+ MSG_LVL_MED | MSG_MASK_6 | MSG_MASK_7 | MSG_MASK_8 | MSG_MASK_9,
+ MSG_LVL_MED | MSG_MASK_5 |
+ MSG_MASK_6 | MSG_MASK_7 | MSG_MASK_8 | MSG_MASK_9 | MSG_MASK_10 |
+ MSG_MASK_11 | MSG_MASK_12 | MSG_MASK_13 | MSG_MASK_14 |
+ MSG_MASK_15 | MSG_MASK_16 | MSG_MASK_17,
+ MSG_LVL_MED,
+ MSG_LVL_MED | MSG_MASK_5 |
+ MSG_MASK_6 | MSG_MASK_7 | MSG_MASK_8 | MSG_MASK_9 |
+ MSG_MASK_10 | MSG_MASK_11 | MSG_MASK_12 | MSG_MASK_13 |
+ MSG_MASK_14 | MSG_MASK_15 | MSG_MASK_16 | MSG_MASK_17 |
+ MSG_MASK_18 | MSG_MASK_19 | MSG_MASK_20 | MSG_MASK_21 | MSG_MASK_22,
+ MSG_LVL_MED,
+ MSG_LVL_MED,
+};
+
+static const uint32_t msg_bld_masks_18[] = {
+ MSG_LVL_LOW,
+ MSG_LVL_LOW | MSG_MASK_8 | MSG_MASK_9 | MSG_MASK_10 |
+ MSG_MASK_11 | MSG_MASK_12 | MSG_MASK_13 | MSG_MASK_14 |
+ MSG_MASK_15 | MSG_MASK_16 | MSG_MASK_17 | MSG_MASK_18 |
+ MSG_MASK_19 | MSG_MASK_20,
+ MSG_LVL_LOW | MSG_MASK_5 | MSG_MASK_6,
+ MSG_LVL_LOW | MSG_MASK_5,
+ MSG_LVL_LOW | MSG_MASK_5 | MSG_MASK_6,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW | MSG_MASK_5 | MSG_MASK_6 | MSG_MASK_7 | MSG_MASK_8 | MSG_MASK_9,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW
+};
+
+static const uint32_t msg_bld_masks_19[] = {
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW
+};
+
+static const uint32_t msg_bld_masks_20[] = {
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW
+};
+
+static const uint32_t msg_bld_masks_21[] = {
+ MSG_LVL_HIGH
+};
+
+static const uint32_t msg_bld_masks_22[] = {
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW,
+ MSG_LVL_LOW
+};
+
+/* LOG CODES */
+
+#define LOG_0 0x0
+#define LOG_1 0x182F
+#define LOG_2 0x0
+#define LOG_3 0x0
+#define LOG_4 0x4910
+#define LOG_5 0x5420
+#define LOG_6 0x0
+#define LOG_7 0x74FF
+#define LOG_8 0x0
+#define LOG_9 0x0
+#define LOG_10 0xA38A
+#define LOG_11 0xB201
+#define LOG_12 0x0
+#define LOG_13 0x0
+#define LOG_14 0x0
+#define LOG_15 0x0
+
+#define LOG_GET_ITEM_NUM(xx_code) (xx_code & 0x0FFF)
+#define LOG_GET_EQUIP_ID(xx_code) ((xx_code & 0xF000) >> 12)
+
+#endif
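The `LOG_GET_ITEM_NUM`/`LOG_GET_EQUIP_ID` macros split a 16-bit log code into a 12-bit item number (low bits) and a 4-bit equipment ID (high nibble). A minimal userspace sketch mirroring the two macros (the wrapper function names are illustrative, not part of the kernel API):

```c
#include <stdint.h>

/* Mirrors the kernel macros: low 12 bits = item number,
 * high 4 bits of the 16-bit code = equipment ID. */
#define LOG_GET_ITEM_NUM(xx_code) ((xx_code) & 0x0FFF)
#define LOG_GET_EQUIP_ID(xx_code) (((xx_code) & 0xF000) >> 12)

static inline uint16_t log_item_num(uint16_t code)
{
    return LOG_GET_ITEM_NUM(code);
}

static inline uint16_t log_equip_id(uint16_t code)
{
    return LOG_GET_EQUIP_ID(code);
}
```

For example, `LOG_11` (`0xB201`) decomposes into equipment ID `0xB` and item number `0x201`.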
diff --git a/include/linux/dma-contiguous.h b/include/linux/dma-contiguous.h
index a201038..b8da7e3 100755
--- a/include/linux/dma-contiguous.h
+++ b/include/linux/dma-contiguous.h
@@ -85,6 +85,8 @@
int dma_get_contiguous_stats(struct device *dev,
struct dma_contiguous_stats *stats);
+bool dma_contiguous_should_replace_page(struct page *page);
+int dma_contiguous_enable_replace_pages(struct device *dev);
#else
struct dma_contiguous_stats;
@@ -127,6 +129,18 @@
{
return -ENOSYS;
}
+
+static inline
+bool dma_contiguous_should_replace_page(struct page *page)
+{
+ return false;
+}
+
+static inline
+int dma_contiguous_enable_replace_pages(struct device *dev)
+{
+ return 0;
+}
#endif
#endif
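The hunk above follows the usual kernel pattern for config-gated APIs: when the feature is compiled out, `static inline` stubs with benign return values (`false`, `0`) keep every call site building without per-caller `#ifdef`s. A standalone sketch of the pattern (the `HAVE_FEATURE` gate and function name are illustrative stand-ins for the real kernel config symbol):

```c
#include <stdbool.h>

/* Illustrative gate; in the kernel this would be a CONFIG_* symbol. */
#define HAVE_FEATURE 0

#if HAVE_FEATURE
/* Real implementation lives elsewhere when the feature is enabled. */
bool feature_should_replace_page(int page);
#else
/* Feature compiled out: stub returns a safe default so callers
 * need no #ifdef at the call site. */
static inline bool feature_should_replace_page(int page)
{
    (void)page;
    return false;
}
#endif
```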
diff --git a/include/linux/f2fs_fs.h b/include/linux/f2fs_fs.h
index df6fab8..860313a 100644
--- a/include/linux/f2fs_fs.h
+++ b/include/linux/f2fs_fs.h
@@ -15,13 +15,18 @@
#include <linux/types.h>
#define F2FS_SUPER_OFFSET 1024 /* byte-size offset */
-#define F2FS_LOG_SECTOR_SIZE 9 /* 9 bits for 512 byte */
-#define F2FS_LOG_SECTORS_PER_BLOCK 3 /* 4KB: F2FS_BLKSIZE */
+#define F2FS_MIN_LOG_SECTOR_SIZE 9 /* 9 bits for 512 bytes */
+#define F2FS_MAX_LOG_SECTOR_SIZE 12 /* 12 bits for 4096 bytes */
+#define F2FS_LOG_SECTORS_PER_BLOCK 3 /* log number for sector/blk */
#define F2FS_BLKSIZE 4096 /* support only 4KB block */
#define F2FS_MAX_EXTENSION 64 /* # of extension entries */
+#define F2FS_BLK_ALIGN(x) (((x) + F2FS_BLKSIZE - 1) / F2FS_BLKSIZE)
-#define NULL_ADDR 0x0U
-#define NEW_ADDR -1U
+#define NULL_ADDR ((block_t)0) /* used as block_t addresses */
+#define NEW_ADDR ((block_t)-1) /* used as block_t addresses */
+
+/* 0, 1(node nid), 2(meta nid) are reserved node id */
+#define F2FS_RESERVED_NODE_NUM 3
#define F2FS_ROOT_INO(sbi) (sbi->root_ino_num)
#define F2FS_NODE_INO(sbi) (sbi->node_ino_num)
@@ -75,16 +80,20 @@
__le16 volume_name[512]; /* volume name */
__le32 extension_count; /* # of extensions below */
__u8 extension_list[F2FS_MAX_EXTENSION][8]; /* extension array */
+ __le32 cp_payload;
} __packed;
/*
* For checkpoint
*/
+#define CP_FSCK_FLAG 0x00000010
#define CP_ERROR_FLAG 0x00000008
#define CP_COMPACT_SUM_FLAG 0x00000004
#define CP_ORPHAN_PRESENT_FLAG 0x00000002
#define CP_UMOUNT_FLAG 0x00000001
+#define F2FS_CP_PACKS 2 /* # of checkpoint packs */
+
struct f2fs_checkpoint {
__le64 checkpoint_ver; /* checkpoint block version number */
__le64 user_block_count; /* # of user blocks */
@@ -121,6 +130,9 @@
*/
#define F2FS_ORPHANS_PER_BLOCK 1020
+#define GET_ORPHAN_BLOCKS(n) ((n + F2FS_ORPHANS_PER_BLOCK - 1) / \
+ F2FS_ORPHANS_PER_BLOCK)
+
struct f2fs_orphan_block {
__le32 ino[F2FS_ORPHANS_PER_BLOCK]; /* inode numbers */
__le32 reserved; /* reserved */
@@ -140,14 +152,36 @@
} __packed;
#define F2FS_NAME_LEN 255
-#define ADDRS_PER_INODE 923 /* Address Pointers in an Inode */
-#define ADDRS_PER_BLOCK 1018 /* Address Pointers in a Direct Block */
-#define NIDS_PER_BLOCK 1018 /* Node IDs in an Indirect Block */
+#define F2FS_INLINE_XATTR_ADDRS 50 /* 200 bytes for inline xattrs */
+#define DEF_ADDRS_PER_INODE 923 /* Address Pointers in an Inode */
+#define DEF_NIDS_PER_INODE 5 /* Node IDs in an Inode */
+#define ADDRS_PER_INODE(fi) addrs_per_inode(fi)
+#define ADDRS_PER_BLOCK 1018 /* Address Pointers in a Direct Block */
+#define NIDS_PER_BLOCK 1018 /* Node IDs in an Indirect Block */
+
+#define ADDRS_PER_PAGE(page, fi) \
+ (IS_INODE(page) ? ADDRS_PER_INODE(fi) : ADDRS_PER_BLOCK)
+
+#define NODE_DIR1_BLOCK (DEF_ADDRS_PER_INODE + 1)
+#define NODE_DIR2_BLOCK (DEF_ADDRS_PER_INODE + 2)
+#define NODE_IND1_BLOCK (DEF_ADDRS_PER_INODE + 3)
+#define NODE_IND2_BLOCK (DEF_ADDRS_PER_INODE + 4)
+#define NODE_DIND_BLOCK (DEF_ADDRS_PER_INODE + 5)
+
+#define F2FS_INLINE_XATTR 0x01 /* file inline xattr flag */
+#define F2FS_INLINE_DATA 0x02 /* file inline data flag */
+
+#define MAX_INLINE_DATA (sizeof(__le32) * (DEF_ADDRS_PER_INODE - \
+ F2FS_INLINE_XATTR_ADDRS - 1))
+
+#define INLINE_DATA_OFFSET (PAGE_CACHE_SIZE - sizeof(struct node_footer) -\
+ sizeof(__le32) * (DEF_ADDRS_PER_INODE + \
+ DEF_NIDS_PER_INODE - 1))
struct f2fs_inode {
__le16 i_mode; /* file mode */
__u8 i_advise; /* file hints */
- __u8 i_reserved; /* reserved */
+ __u8 i_inline; /* file inline flags */
__le32 i_uid; /* user ID */
__le32 i_gid; /* group ID */
__le32 i_links; /* links count */
@@ -166,13 +200,13 @@
__le32 i_pino; /* parent inode number */
__le32 i_namelen; /* file name length */
__u8 i_name[F2FS_NAME_LEN]; /* file name for SPOR */
- __u8 i_reserved2; /* for backward compatibility */
+ __u8 i_dir_level; /* dentry_level for large dir */
struct f2fs_extent i_ext; /* caching a largest extent */
- __le32 i_addr[ADDRS_PER_INODE]; /* Pointers to data blocks */
+ __le32 i_addr[DEF_ADDRS_PER_INODE]; /* Pointers to data blocks */
- __le32 i_nid[5]; /* direct(2), indirect(2),
+ __le32 i_nid[DEF_NIDS_PER_INODE]; /* direct(2), indirect(2),
double_indirect(1) node id */
} __packed;
@@ -374,6 +408,9 @@
/* MAX level for dir lookup */
#define MAX_DIR_HASH_DEPTH 63
+/* MAX buckets in one level of dir */
+#define MAX_DIR_BUCKETS (1 << ((MAX_DIR_HASH_DEPTH / 2) - 1))
+
#define SIZE_OF_DIR_ENTRY 11 /* by byte */
#define SIZE_OF_DENTRY_BITMAP ((NR_DENTRY_IN_BLOCK + BITS_PER_BYTE - 1) / \
BITS_PER_BYTE)
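Both `F2FS_BLK_ALIGN` and `GET_ORPHAN_BLOCKS` in the f2fs hunk use the `(x + divisor - 1) / divisor` idiom for integer ceiling division. A userspace sketch with the macro bodies wrapped in functions (arguments fully parenthesized here for macro hygiene; the kernel versions rely on being passed simple expressions):

```c
#include <stdint.h>

#define F2FS_BLKSIZE           4096  /* f2fs supports only 4KB blocks */
#define F2FS_ORPHANS_PER_BLOCK 1020  /* orphan inode slots per block */

/* Round a byte count up to whole 4KB blocks: ceil(bytes / 4096). */
static inline uint64_t blk_align(uint64_t bytes)
{
    return (bytes + F2FS_BLKSIZE - 1) / F2FS_BLKSIZE;
}

/* Blocks needed to hold n orphan inode numbers: ceil(n / 1020). */
static inline uint64_t orphan_blocks(uint64_t n)
{
    return (n + F2FS_ORPHANS_PER_BLOCK - 1) / F2FS_ORPHANS_PER_BLOCK;
}
```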
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 8a81ed4..8026684 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1519,12 +1519,6 @@
loff_t pos;
};
-static inline bool dir_emit(struct dir_context *ctx,
- const char *name, int namelen,
- u64 ino, unsigned type)
-{
- return ctx->actor(ctx, name, namelen, ctx->pos, ino, type) == 0;
-}
struct block_device_operations;
/* These macros are for out of kernel modules to test that
@@ -1595,6 +1589,7 @@
int (*atomic_open)(struct inode *, struct dentry *,
struct file *, unsigned open_flag,
umode_t create_mode, int *opened);
+ int (*tmpfile)(struct inode *, struct dentry *, umode_t);
} ____cacheline_aligned;
ssize_t rw_copy_check_uvector(int type, const struct iovec __user * uvector,
@@ -2548,7 +2543,7 @@
extern int dcache_dir_open(struct inode *, struct file *);
extern int dcache_dir_close(struct inode *, struct file *);
extern loff_t dcache_dir_lseek(struct file *, loff_t, int);
-extern int dcache_readdir(struct file *, void *, filldir_t);
+extern int dcache_readdir(struct file *, struct dir_context *);
extern int simple_setattr(struct dentry *, struct iattr *);
extern int simple_getattr(struct vfsmount *, struct dentry *, struct kstat *);
extern int simple_statfs(struct dentry *, struct kstatfs *);
@@ -2712,6 +2707,37 @@
inode->i_flags |= S_NOSEC;
}
+static inline bool dir_emit(struct dir_context *ctx,
+ const char *name, int namelen,
+ u64 ino, unsigned type)
+{
+ return ctx->actor(ctx, name, namelen, ctx->pos, ino, type) == 0;
+}
+static inline bool dir_emit_dot(struct file *file, struct dir_context *ctx)
+{
+ return ctx->actor(ctx, ".", 1, ctx->pos,
+ file->f_path.dentry->d_inode->i_ino, DT_DIR) == 0;
+}
+static inline bool dir_emit_dotdot(struct file *file, struct dir_context *ctx)
+{
+ return ctx->actor(ctx, "..", 2, ctx->pos,
+ parent_ino(file->f_path.dentry), DT_DIR) == 0;
+}
+static inline bool dir_emit_dots(struct file *file, struct dir_context *ctx)
+{
+ if (ctx->pos == 0) {
+ if (!dir_emit_dot(file, ctx))
+ return false;
+ ctx->pos = 1;
+ }
+ if (ctx->pos == 1) {
+ if (!dir_emit_dotdot(file, ctx))
+ return false;
+ ctx->pos = 2;
+ }
+ return true;
+}
+
static inline bool dir_relax(struct inode *inode)
{
mutex_unlock(&inode->i_mutex);
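`dir_emit_dots()` above is a small state machine over `ctx->pos`: position 0 emits ".", position 1 emits "..", and once `pos` reaches 2 the helper is a no-op, so repeated calls are safe. A userspace sketch of that control flow with a mocked actor (all names here are illustrative mocks, not the kernel `dir_context` API):

```c
#include <stdbool.h>

struct mock_ctx {
    long pos;     /* mirrors ctx->pos in struct dir_context */
    int emitted;  /* how many entries the "actor" accepted */
};

/* Mock actor: always accepts the entry. */
static bool mock_emit(struct mock_ctx *ctx, const char *name, int namelen)
{
    (void)name;
    (void)namelen;
    ctx->emitted++;
    return true;
}

/* Mirrors dir_emit_dots(): emit "." at pos 0, ".." at pos 1,
 * then leave pos at 2 so real entries start there. */
static bool mock_emit_dots(struct mock_ctx *ctx)
{
    if (ctx->pos == 0) {
        if (!mock_emit(ctx, ".", 1))
            return false;
        ctx->pos = 1;
    }
    if (ctx->pos == 1) {
        if (!mock_emit(ctx, "..", 2))
            return false;
        ctx->pos = 2;
    }
    return true;
}
```

Because each step advances `pos` before the next check, a filesystem's `iterate()` can call the helper unconditionally on every readdir pass.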
diff --git a/include/linux/htc_battery_bq2419x.h b/include/linux/htc_battery_bq2419x.h
new file mode 100644
index 0000000..b4a83d6
--- /dev/null
+++ b/include/linux/htc_battery_bq2419x.h
@@ -0,0 +1,121 @@
+/*
+ * htc_battery_bq2419x.h -- BQ24190/BQ24192/BQ24192i/BQ24193 Charger policy
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ *
+ */
+
+#ifndef __HTC_BATTERY_BQ2419X_CHARGER_H
+#define __HTC_BATTERY_BQ2419X_CHARGER_H
+
+/*
+ * struct htc_battery_thermal_prop - bq2419x thermal properties
+ * for battery-charger-gauge-comm.
+ */
+struct htc_battery_thermal_prop {
+ int temp_hot_dc;
+ int temp_cold_dc;
+ int temp_warm_dc;
+ int temp_cool_dc;
+ unsigned int temp_hysteresis_dc;
+ unsigned int warm_voltage_mv;
+ unsigned int cool_voltage_mv;
+ bool disable_warm_current_half;
+ bool disable_cool_current_half;
+ unsigned int otp_output_current_ma;
+};
+
+/*
+ * struct htc_battery_charge_full_threshold -
+ * used for charging full/recharge check
+ */
+struct htc_battery_charge_full_threshold {
+ int chg_done_voltage_min_mv;
+ int chg_done_current_min_ma;
+ int chg_done_low_current_min_ma;
+ int recharge_voltage_min_mv;
+};
+
+/*
+ * struct htc_battery_charge_input_switch - used to adjust the input voltage
+ */
+struct htc_battery_charge_input_switch {
+ int input_switch_threshold_mv;
+ int input_vmin_high_mv;
+ int input_vmin_low_mv;
+};
+
+/*
+ * struct htc_battery_bq2419x_platform_data - bq2419x platform data.
+ */
+struct htc_battery_bq2419x_platform_data {
+ int input_voltage_limit_mv;
+ int fast_charge_current_limit_ma;
+ int pre_charge_current_limit_ma;
+ int termination_current_limit_ma; /* 0 means disable current check */
+ int charge_voltage_limit_mv;
+ int max_charge_current_ma;
+ int rtc_alarm_time;
+ int num_consumer_supplies;
+ struct regulator_consumer_supply *consumer_supplies;
+ int chg_restart_time;
+ int auto_recharge_time_power_off;
+ bool disable_suspend_during_charging;
+ int charge_suspend_polling_time_sec;
+ int temp_polling_time_sec;
+ u32 auto_recharge_time_supend;
+ struct htc_battery_thermal_prop thermal_prop;
+ struct htc_battery_charge_full_threshold full_thr;
+ struct htc_battery_charge_input_switch input_switch;
+ const char *batt_id_channel_name;
+ int unknown_batt_id_min;
+ const char *gauge_psy_name;
+ const char *vbus_channel_name;
+ unsigned int vbus_channel_max_voltage_mv;
+ unsigned int vbus_channel_max_adc;
+};
+
+struct htc_battery_bq2419x_ops {
+ int (*set_charger_enable)(bool enable, void *data);
+ int (*set_charger_hiz)(bool is_hiz, void *data);
+ int (*set_fastcharge_current)(unsigned int current_ma, void *data);
+ int (*set_charge_voltage)(unsigned int voltage_mv, void *data);
+ int (*set_precharge_current)(unsigned int current_ma, void *data);
+ int (*set_termination_current)(
+ unsigned int current_ma, void *data); /* 0 means disable */
+ int (*set_input_current)(unsigned int current_ma, void *data);
+ int (*set_dpm_input_voltage)(unsigned int voltage_mv, void *data);
+ int (*set_safety_timer_enable)(bool enable, void *data);
+ int (*get_charger_state)(unsigned int *state, void *data);
+ int (*get_input_current)(unsigned *current_ma, void *data);
+};
+
+enum htc_battery_bq2419x_notify_event {
+ HTC_BATTERY_BQ2419X_SAFETY_TIMER_TIMEOUT,
+};
+
+enum htc_battery_bq2419x_charger_state {
+ HTC_BATTERY_BQ2419X_IN_REGULATION = (0x1U << 0),
+ HTC_BATTERY_BQ2419X_DPM_MODE = (0x1U << 1),
+ HTC_BATTERY_BQ2419X_POWER_GOOD = (0x1U << 2),
+ HTC_BATTERY_BQ2419X_CHARGING = (0x1U << 3),
+ HTC_BATTERY_BQ2419X_KNOWN_VBUS = (0x1U << 4),
+};
+
+void htc_battery_bq2419x_notify(enum htc_battery_bq2419x_notify_event);
+int htc_battery_bq2419x_charger_register(struct htc_battery_bq2419x_ops *ops,
+ void *data);
+int htc_battery_bq2419x_charger_unregister(void *data);
+#endif /* __HTC_BATTERY_BQ2419X_CHARGER_H */
diff --git a/include/linux/htc_battery_max17050.h b/include/linux/htc_battery_max17050.h
new file mode 100644
index 0000000..7ec6654
--- /dev/null
+++ b/include/linux/htc_battery_max17050.h
@@ -0,0 +1,53 @@
+/* htc_battery_max17050.h
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef __HTC_BATTERY_MAX17050_H_
+#define __HTC_BATTERY_MAX17050_H_
+
+#include <linux/types.h>
+
+#define FLOUNDER_BATTERY_ID_RANGE_SIZE (2)
+#define FLOUNDER_BATTERY_PARAMS_SIZE (3)
+
+struct flounder_battery_adjust_by_id {
+ int id;
+ unsigned int id_range[FLOUNDER_BATTERY_ID_RANGE_SIZE];
+ int temp_normal2low_thr;
+ int temp_low2normal_thr;
+ unsigned int temp_normal_params[FLOUNDER_BATTERY_PARAMS_SIZE];
+ unsigned int temp_low_params[FLOUNDER_BATTERY_PARAMS_SIZE];
+};
+
+struct htc_battery_max17050_ops {
+ int (*get_vcell)(int *batt_volt);
+ int (*get_battery_current)(int *batt_curr);
+ int (*get_battery_avgcurrent)(int *batt_curr_avg);
+ int (*get_temperature)(int *batt_temp);
+ int (*get_soc)(int *batt_soc);
+ int (*get_ocv)(int *batt_ocv);
+ int (*get_battery_charge)(int *batt_charge);
+ int (*get_battery_charge_ext)(int64_t *batt_charge_ext);
+};
+
+#define FLOUNDER_BATTERY_ID_MAX (10)
+struct flounder_battery_platform_data {
+ const char *batt_id_channel_name;
+ struct flounder_battery_adjust_by_id
+ batt_params[FLOUNDER_BATTERY_ID_MAX];
+ int batt_params_num;
+};
+
+int htc_battery_max17050_gauge_register(struct htc_battery_max17050_ops *ops);
+int htc_battery_max17050_gauge_unregister(void);
+
+#endif /* __HTC_BATTERY_MAX17050_H_ */
diff --git a/include/linux/htc_headset_config.h b/include/linux/htc_headset_config.h
new file mode 100644
index 0000000..d5c6db3
--- /dev/null
+++ b/include/linux/htc_headset_config.h
@@ -0,0 +1,37 @@
+/*
+ *
+ * /arch/arm/mach-msm/include/mach/htc_headset_config.h
+ *
+ * HTC headset configurations.
+ *
+ * Copyright (C) 2010 HTC, Inc.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef HTC_HEADSET_CONFIG_H
+#define HTC_HEADSET_CONFIG_H
+
+#define HTC_HEADSET_VERSION 6 /* 8960 ADC uA to mA */
+#define HTC_HEADSET_BRANCH "KERNEL_3_0_PROJECT_8960"
+
+#define HTC_HEADSET_KERNEL_3_0
+#define HTC_HEADSET_CONFIG_PMIC_TPS80032_ADC
+
+#if 0 /* All Headset Configurations */
+#define HTC_HEADSET_KERNEL_3_0
+#define HTC_HEADSET_CONFIG_MSM_RPC
+#define HTC_HEADSET_CONFIG_QUICK_BOOT
+#define HTC_HEADSET_CONFIG_PMIC_8XXX_ADC
+#define HTC_HEADSET_CONFIG_PMIC_TPS80032_ADC
+#endif
+
+#endif
diff --git a/include/linux/htc_headset_mgr.h b/include/linux/htc_headset_mgr.h
new file mode 100644
index 0000000..7423b7f
--- /dev/null
+++ b/include/linux/htc_headset_mgr.h
@@ -0,0 +1,405 @@
+/*
+ *
+ * /arch/arm/mach-msm/include/mach/htc_headset_mgr.h
+ *
+ * HTC headset manager driver.
+ *
+ * Copyright (C) 2010 HTC, Inc.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef HTC_HEADSET_MGR_H
+#define HTC_HEADSET_MGR_H
+
+#ifdef CONFIG_HAS_EARLYSUSPEND
+#include <linux/earlysuspend.h>
+#endif
+#include <linux/input.h>
+#include <linux/switch.h>
+#include <linux/wakelock.h>
+#ifdef HTC_HEADSET_CONFIG_MSM_RPC
+#include <mach/msm_rpcrouter.h>
+#endif
+#include <linux/platform_device.h>
+#include <linux/htc_headset_config.h>
+
+#ifdef HTC_HEADSET_KERNEL_3_0
+#define set_irq_type(irq, type) irq_set_irq_type(irq, type)
+#define set_irq_wake(irq, on) irq_set_irq_wake(irq, on)
+#else
+#define set_irq_type(irq, type) set_irq_type(irq, type)
+#define set_irq_wake(irq, on) set_irq_wake(irq, on)
+#endif
+
+#define HS_ERR(fmt, arg...) \
+ printk(KERN_ERR "[" DRIVER_NAME "_ERR] (%s) " fmt "\n", \
+ __func__, ## arg)
+#define HS_LOG(fmt, arg...) \
+ printk(KERN_DEBUG "[" DRIVER_NAME "] (%s) " fmt "\n", __func__, ## arg)
+#define HS_LOG_TIME(fmt, arg...) do { \
+ struct timespec ts; \
+ struct rtc_time tm; \
+ getnstimeofday(&ts); \
+ rtc_time_to_tm(ts.tv_sec, &tm); \
+ printk(KERN_DEBUG "[" DRIVER_NAME "] (%s) " fmt \
+ " (%02d-%02d %02d:%02d:%02d.%03lu)\n", __func__, \
+ ## arg, tm.tm_mon + 1, tm.tm_mday, tm.tm_hour, \
+ tm.tm_min, tm.tm_sec, ts.tv_nsec / 1000000); \
+ } while (0)
+#define HS_DBG(fmt, arg...) \
+ do { \
+ if (hs_debug_log_state()) { \
+ printk(KERN_DEBUG "##### [" DRIVER_NAME "] (%s) " fmt "\n", \
+ __func__, ## arg); \
+ } \
+ } while (0)
+
+#define DEVICE_ACCESSORY_ATTR(_name, _mode, _show, _store) \
+ struct device_attribute dev_attr_##_name = \
+ __ATTR(flag, _mode, _show, _store)
+
+#define DEVICE_HEADSET_ATTR(_name, _mode, _show, _store) \
+ struct device_attribute dev_attr_headset_##_name = \
+ __ATTR(_name, _mode, _show, _store)
+
+#define DRIVER_HS_MGR_RPC_SERVER (1 << 0)
+#define DRIVER_HS_MGR_FLOAT_DET (1 << 1)
+#define DRIVER_HS_MGR_OLD_AJ (1 << 2)
+
+#define DEBUG_FLAG_LOG (1 << 0)
+#define DEBUG_FLAG_ADC (1 << 1)
+
+#define BIT_HEADSET (1 << 0)
+#define BIT_HEADSET_NO_MIC (1 << 1)
+#define BIT_TTY_FULL (1 << 2)
+#define BIT_FM_HEADSET (1 << 3)
+#define BIT_FM_SPEAKER (1 << 4)
+#define BIT_TTY_VCO (1 << 5)
+#define BIT_TTY_HCO (1 << 6)
+#define BIT_35MM_HEADSET (1 << 7)
+#define BIT_TV_OUT (1 << 8)
+#define BIT_USB_CRADLE (1 << 9)
+#define BIT_TV_OUT_AUDIO (1 << 10)
+#define BIT_HDMI_CABLE (1 << 11)
+#define BIT_HDMI_AUDIO (1 << 12)
+#define BIT_USB_AUDIO_OUT (1 << 13)
+#define BIT_UNDEFINED (1 << 14)
+
+#define MASK_HEADSET (BIT_HEADSET | BIT_HEADSET_NO_MIC)
+#define MASK_35MM_HEADSET (BIT_HEADSET | BIT_HEADSET_NO_MIC | \
+ BIT_35MM_HEADSET | BIT_TV_OUT)
+#define MASK_FM_ATTRIBUTE (BIT_FM_HEADSET | BIT_FM_SPEAKER)
+#define MASK_USB_HEADSET (BIT_USB_AUDIO_OUT)
+
+#define GOOGLE_BIT_HEADSET (1 << 0)
+#define GOOGLE_BIT_HEADSET_NO_MIC (1 << 1)
+#define GOOGLE_BIT_USB_HEADSET_ANLG (1 << 2)
+#define GOOGLE_BIT_USB_HEADSET_DGTL (1 << 3)
+#define GOOGLE_BIT_HDMI_AUDIO (1 << 4)
+
+#define GOOGLE_SUPPORTED_HEADSETS (GOOGLE_BIT_HEADSET | \
+ GOOGLE_BIT_HEADSET_NO_MIC | \
+ GOOGLE_BIT_USB_HEADSET_ANLG | \
+ GOOGLE_BIT_USB_HEADSET_DGTL | \
+ GOOGLE_BIT_HDMI_AUDIO)
+#define GOOGLE_HEADSETS_WITH_MIC GOOGLE_BIT_HEADSET
+#define GOOGLE_USB_HEADSETS (GOOGLE_BIT_USB_HEADSET_ANLG | \
+ GOOGLE_BIT_USB_HEADSET_DGTL)
+
+#define HS_DEF_MIC_ADC_10_BIT 200
+#define HS_DEF_MIC_ADC_15_BIT_MAX 25320
+#define HS_DEF_MIC_ADC_15_BIT_MIN 7447
+#define HS_DEF_MIC_ADC_16_BIT_MAX 50641
+#define HS_DEF_MIC_ADC_16_BIT_MIN 14894
+#define HS_DEF_MIC_ADC_16_BIT_MAX2 56007
+#define HS_DEF_MIC_ADC_16_BIT_MIN2 14894
+#define HS_DEF_HPTV_ADC_16_BIT_MAX 16509
+#define HS_DEF_HPTV_ADC_16_BIT_MIN 6456
+
+#define HS_DEF_MIC_DETECT_COUNT 10
+
+#define HS_DELAY_ZERO 0
+#define HS_DELAY_SEC 1000
+#define HS_DELAY_MIC_BIAS 200
+#define HS_DELAY_MIC_DETECT 1000
+#define HS_DELAY_INSERT 300
+#define HS_DELAY_REMOVE 200
+#define HS_DELAY_REMOVE_LONG 500
+#define HS_DELAY_BUTTON 500
+#define HS_DELAY_1WIRE_BUTTON 800
+#define HS_DELAY_1WIRE_BUTTON_SHORT 20
+#define HS_DELAY_IRQ_INIT (10 * HS_DELAY_SEC)
+
+#define HS_JIFFIES_ZERO msecs_to_jiffies(HS_DELAY_ZERO)
+#define HS_JIFFIES_MIC_BIAS msecs_to_jiffies(HS_DELAY_MIC_BIAS)
+#define HS_JIFFIES_MIC_DETECT msecs_to_jiffies(HS_DELAY_MIC_DETECT)
+#define HS_JIFFIES_INSERT msecs_to_jiffies(HS_DELAY_INSERT)
+#define HS_JIFFIES_REMOVE msecs_to_jiffies(HS_DELAY_REMOVE)
+#define HS_JIFFIES_REMOVE_LONG msecs_to_jiffies(HS_DELAY_REMOVE_LONG)
+#define HS_JIFFIES_BUTTON msecs_to_jiffies(HS_DELAY_BUTTON)
+#define HS_JIFFIES_1WIRE_BUTTON msecs_to_jiffies(HS_DELAY_1WIRE_BUTTON)
+#define HS_JIFFIES_1WIRE_BUTTON_SHORT msecs_to_jiffies(HS_DELAY_1WIRE_BUTTON_SHORT)
+#define HS_JIFFIES_IRQ_INIT msecs_to_jiffies(HS_DELAY_IRQ_INIT)
+
+#define HS_WAKE_LOCK_TIMEOUT (2 * HZ)
+#define HS_RPC_TIMEOUT (5 * HZ)
+#define HS_MIC_DETECT_TIMEOUT (HZ + HZ / 2)
+
+/* Definitions for Headset RPC Server */
+#define HS_RPC_SERVER_PROG 0x30100004
+#define HS_RPC_SERVER_VERS 0x00000000
+#define HS_RPC_SERVER_PROC_NULL 0
+#define HS_RPC_SERVER_PROC_KEY 1
+
+/* Definitions for Headset RPC Client */
+#define HS_RPC_CLIENT_PROG 0x30100005
+#define HS_RPC_CLIENT_VERS 0x00000000
+#define HS_RPC_CLIENT_PROC_NULL 0
+#define HS_RPC_CLIENT_PROC_ADC 1
+
+#define HS_MGR_KEYCODE_END KEY_END /* 107 */
+#define HS_MGR_KEYCODE_MUTE KEY_MUTE /* 113 */
+#define HS_MGR_KEYCODE_VOLDOWN KEY_VOLUMEDOWN /* 114 */
+#define HS_MGR_KEYCODE_VOLUP KEY_VOLUMEUP /* 115 */
+#define HS_MGR_KEYCODE_FORWARD KEY_NEXTSONG /* 163 */
+#define HS_MGR_KEYCODE_PLAY KEY_PLAYPAUSE /* 164 */
+#define HS_MGR_KEYCODE_BACKWARD KEY_PREVIOUSSONG /* 165 */
+#define HS_MGR_KEYCODE_MEDIA KEY_MEDIA /* 226 */
+#define HS_MGR_KEYCODE_SEND KEY_SEND /* 231 */
+#define HS_MGR_KEYCODE_FF KEY_FASTFORWARD
+#define HS_MGR_KEYCODE_RW KEY_REWIND
+#define HS_MGR_KEYCODE_ASSIST KEY_VOICECOMMAND
+
+#define HS_MGR_2X_KEY_MEDIA (1 << 4)
+#define HS_MGR_3X_KEY_MEDIA (1 << 8)
+#define HS_MGR_KEY_HOLD(x) ((x) | ((x) << 4))
+#define HS_MGR_2X_HOLD_MEDIA HS_MGR_KEY_HOLD(HS_MGR_2X_KEY_MEDIA)
+#define HS_MGR_3X_HOLD_MEDIA HS_MGR_KEY_HOLD(HS_MGR_3X_KEY_MEDIA)
+
+enum {
+ HEADSET_UNPLUG = 0,
+ HEADSET_NO_MIC = 1,
+ HEADSET_MIC = 2,
+ HEADSET_METRICO = 3,
+ HEADSET_UNKNOWN_MIC = 4,
+ HEADSET_UNSTABLE = 5,
+ HEADSET_TV_OUT = 6,
+ HEADSET_INDICATOR = 7,
+ HEADSET_BEATS = 8,
+ HEADSET_BEATS_SOLO = 9,
+ HEADSET_UART = 10,
+};
+
+enum {
+ GOOGLE_USB_AUDIO_UNPLUG = 0,
+ GOOGLE_USB_AUDIO_ANLG = 1,
+#ifdef CONFIG_SUPPORT_USB_SPEAKER
+ GOOGLE_USB_AUDIO_DGTL = 2,
+#endif
+};
+
+enum {
+ HEADSET_REG_HPIN_GPIO,
+ HEADSET_REG_REMOTE_ADC,
+ HEADSET_REG_REMOTE_KEYCODE,
+ HEADSET_REG_RPC_KEY,
+ HEADSET_REG_MIC_STATUS,
+ HEADSET_REG_MIC_BIAS,
+ HEADSET_REG_MIC_SELECT,
+ HEADSET_REG_KEY_INT_ENABLE,
+ HEADSET_REG_KEY_ENABLE,
+ HEADSET_REG_INDICATOR_ENABLE,
+ HEADSET_REG_UART_SET,
+ HEADSET_REG_1WIRE_INIT,
+ HEADSET_REG_1WIRE_QUERY,
+ HEADSET_REG_1WIRE_READ_KEY,
+ HEADSET_REG_1WIRE_DEINIT,
+ HEADSET_REG_1WIRE_REPORT_TYPE,
+ HEADSET_REG_1WIRE_OPEN,
+ HEADSET_REG_HS_INSERT,
+};
+
+enum {
+ HS_MGR_KEY_INVALID = -1,
+ HS_MGR_KEY_NONE = 0,
+ HS_MGR_KEY_PLAY = 1,
+ HS_MGR_KEY_VOLUP = 2,
+ HS_MGR_KEY_VOLDOWN = 3,
+ HS_MGR_KEY_ASSIST = 4,
+};
+
+enum {
+ STATUS_DISCONNECTED = 0,
+ STATUS_CONNECTED_ENABLED = 1,
+ STATUS_CONNECTED_DISABLED = 2,
+};
+
+enum {
+ H2W_NO_HEADSET = 0,
+ H2W_HEADSET = 1,
+ H2W_35MM_HEADSET = 2,
+ H2W_REMOTE_CONTROL = 3,
+ H2W_USB_CRADLE = 4,
+ H2W_UART_DEBUG = 5,
+ H2W_TVOUT = 6,
+ USB_NO_HEADSET = 7,
+ USB_AUDIO_OUT = 8,
+#ifdef CONFIG_SUPPORT_USB_SPEAKER
+ USB_AUDIO_OUT_DGTL = 9,
+#endif
+};
+
+struct hs_rpc_server_args_key {
+ uint32_t adc;
+};
+
+struct hs_rpc_client_req_adc {
+#ifdef HTC_HEADSET_CONFIG_MSM_RPC
+ struct rpc_request_hdr hdr;
+#endif
+};
+
+struct hs_rpc_client_rep_adc {
+#ifdef HTC_HEADSET_CONFIG_MSM_RPC
+ struct rpc_reply_hdr hdr;
+#endif
+ uint32_t adc;
+};
+
+struct external_headset {
+ int type;
+ int status;
+};
+
+struct headset_adc_config {
+ int type;
+ uint32_t adc_max;
+ uint32_t adc_min;
+};
+
+struct headset_notifier {
+ int id;
+ void *func;
+};
+
+struct hs_notifier_func {
+ int (*hpin_gpio)(void);
+ int (*remote_adc)(int *);
+ int (*remote_keycode)(int);
+ void (*rpc_key)(int);
+ int (*mic_status)(void);
+ int (*mic_bias_enable)(int);
+ void (*mic_select)(int);
+ int (*key_int_enable)(int);
+ void (*key_enable)(int);
+ int (*indicator_enable)(int);
+ void (*uart_set)(int);
+ int (*hs_1wire_init)(void);
+ int (*hs_1wire_query)(int);
+ int (*hs_1wire_read_key)(void);
+ int (*hs_1wire_deinit)(void);
+ int (*hs_1wire_report_type)(char **);
+ int (*hs_1wire_open)(void);
+ int (*hs_insert)(int);
+};
+
+struct htc_headset_mgr_platform_data {
+ unsigned int driver_flag;
+ int headset_devices_num;
+ struct platform_device **headset_devices;
+ int headset_config_num;
+ struct headset_adc_config *headset_config;
+
+ unsigned int hptv_det_hp_gpio;
+ unsigned int hptv_det_tv_gpio;
+ unsigned int hptv_sel_gpio;
+
+ void (*headset_init)(void);
+ void (*headset_power)(int);
+ void (*uart_lv_shift_en)(int);
+ void (*uart_tx_gpo)(int);
+};
+
+struct htc_headset_mgr_info {
+ struct htc_headset_mgr_platform_data pdata;
+ int driver_init_seq;
+#ifdef CONFIG_HAS_EARLYSUSPEND
+ struct early_suspend early_suspend;
+#endif
+ struct wake_lock hs_wake_lock;
+
+ unsigned long hpin_jiffies;
+ struct external_headset h2w_headset;
+ struct external_headset usb_headset;
+
+ struct class *htc_accessory_class;
+ struct device *headset_dev;
+ struct device *tty_dev;
+ struct device *fm_dev;
+ struct device *debug_dev;
+ struct mutex mutex_lock;
+
+ struct switch_dev sdev_h2w;
+ struct switch_dev sdev_usb_audio;
+ struct input_dev *input;
+ unsigned long insert_jiffies;
+
+ atomic_t btn_state;
+
+ int tty_enable_flag;
+ int fm_flag;
+ int debug_flag;
+
+ unsigned int irq_btn_35mm;
+
+ /* Variables used by the 3.5mm headset */
+ int key_level_flag;
+ int hs_35mm_type;
+ int h2w_35mm_type;
+ int is_ext_insert;
+ int mic_bias_state;
+ int mic_detect_counter;
+ int metrico_status; /* For HW Metrico lab test */
+ int quick_boot_status;
+
+ /* Variables for the one-wire driver */
+ int driver_one_wire_exist;
+ int one_wire_mode;
+ int key_code_1wire[15];
+ int key_code_1wire_index;
+ unsigned int onewire_key_delay;
+};
+
+int headset_notifier_register(struct headset_notifier *notifier);
+
+void headset_ext_detect(int type);
+void headset_ext_button(int headset_type, int key_code, int press);
+
+void hs_notify_driver_ready(char *name);
+void hs_notify_hpin_irq(void);
+int hs_notify_plug_event(int insert, unsigned int intr_id);
+int hs_notify_key_event(int key_code);
+int hs_notify_key_irq(void);
+
+int hs_debug_log_state(void);
+
+void hs_set_mic_select(int state);
+struct class *hs_get_attribute_class(void);
+
+int headset_get_type(void);
+int headset_get_type_sync(int count, unsigned int interval);
+
+extern int switch_send_event(unsigned int bit, int on);
+
+#if defined(CONFIG_FB_MSM_TVOUT) && defined(CONFIG_ARCH_MSM8X60)
+extern void tvout_enable_detection(unsigned int on);
+#endif
+
+#endif
diff --git a/include/linux/htc_headset_one_wire.h b/include/linux/htc_headset_one_wire.h
new file mode 100644
index 0000000..c85d1b2
--- /dev/null
+++ b/include/linux/htc_headset_one_wire.h
@@ -0,0 +1,48 @@
+/*
+ *
+ * /arch/arm/mach-msm/include/mach/htc_headset_one_wire.h
+ *
+ * HTC 1-wire headset driver.
+ *
+ * Copyright (C) 2012 HTC, Inc.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#define INIT_CMD_INT_5MS 0x35
+#define INIT_CMD_INT_8MS 0x65
+#define INIT_CMD_INT_10MS 0x75
+#define INIT_CMD_INT_13MS 0xA5
+#define INIT_CMD_INT_15MS 0xB5
+#define INIT_CMD_INT_18MS 0xE5
+#define INIT_CMD_INT_30MS 0xF5
+#define QUERY_AID 0xD5
+#define QUERY_CONFIG 0xE5
+#define QUERY_KEYCODE 0xF5
+
+
+struct htc_headset_1wire_platform_data {
+ unsigned int tx_level_shift_en;
+ unsigned int uart_sw;
+ unsigned int uart_tx;
+ unsigned int uart_rx;
+ unsigned int remote_press;
+ char one_wire_remote[6]; /* Key code for press and release */
+ char onewire_tty_dev[15];
+
+};
+
+struct htc_35mm_1wire_info {
+ struct htc_headset_1wire_platform_data pdata;
+ char aid;
+ struct wake_lock hs_wake_lock;
+ struct mutex mutex_lock;
+};
diff --git a/include/linux/htc_headset_pmic.h b/include/linux/htc_headset_pmic.h
new file mode 100644
index 0000000..67076a3
--- /dev/null
+++ b/include/linux/htc_headset_pmic.h
@@ -0,0 +1,118 @@
+/*
+ *
+ * /arch/arm/mach-msm/include/mach/htc_headset_pmic.h
+ *
+ * HTC PMIC headset driver.
+ *
+ * Copyright (C) 2010 HTC, Inc.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef HTC_HEADSET_PMIC_H
+#define HTC_HEADSET_PMIC_H
+
+#define DRIVER_HS_PMIC_RPC_KEY (1 << 0)
+#define DRIVER_HS_PMIC_DYNAMIC_THRESHOLD (1 << 1)
+#define DRIVER_HS_PMIC_ADC (1 << 2)
+#define DRIVER_HS_PMIC_EDGE_IRQ (1 << 3)
+
+#define HS_PMIC_HTC_CURRENT_THRESHOLD 500
+
+#define HS_PMIC_RPC_CLIENT_PROG 0x30000061
+#define HS_PMIC_RPC_CLIENT_VERS 0x00010001
+#define HS_PMIC_RPC_CLIENT_VERS_1_1 0x00010001
+#define HS_PMIC_RPC_CLIENT_VERS_2_1 0x00020001
+#define HS_PMIC_RPC_CLIENT_VERS_3_1 0x00030001
+
+#define HS_PMIC_RPC_CLIENT_PROC_NULL 0
+#define HS_PMIC_RPC_CLIENT_PROC_THRESHOLD 65
+
+enum {
+ HS_PMIC_RPC_ERR_SUCCESS,
+};
+
+enum {
+ HS_PMIC_CONTROLLER_0,
+ HS_PMIC_CONTROLLER_1,
+ HS_PMIC_CONTROLLER_2,
+};
+
+enum {
+ HS_PMIC_SC_SWITCH_TYPE,
+ HS_PMIC_OC_SWITCH_TYPE,
+};
+
+struct hs_pmic_rpc_request {
+#ifdef HTC_HEADSET_CONFIG_MSM_RPC
+ struct rpc_request_hdr hdr;
+#endif
+ uint32_t hs_controller;
+ uint32_t hs_switch;
+ uint32_t current_uA;
+};
+
+struct hs_pmic_rpc_reply {
+#ifdef HTC_HEADSET_CONFIG_MSM_RPC
+ struct rpc_reply_hdr hdr;
+#endif
+ uint32_t status;
+ uint32_t data;
+};
+
+struct hs_pmic_current_threshold {
+ uint32_t adc_max;
+ uint32_t adc_min;
+ uint32_t current_uA;
+};
+
+struct htc_headset_pmic_platform_data {
+ unsigned int driver_flag;
+ unsigned int hpin_gpio;
+ unsigned int hpin_irq;
+ unsigned int key_gpio;
+ unsigned int key_irq;
+ unsigned int key_enable_gpio;
+ unsigned int adc_mpp;
+ unsigned int adc_amux;
+ unsigned int hs_controller;
+ unsigned int hs_switch;
+ const char *iio_channel_name;
+
+ /* ADC tables */
+ uint32_t adc_mic;
+ uint32_t adc_mic_bias[2];
+ uint32_t adc_remote[8];
+ uint32_t adc_metrico[2];
+
+#ifdef CONFIG_HEADSET_DEBUG_UART
+ unsigned int debug_gpio;
+ unsigned int debug_irq;
+ int (*headset_get_debug)(void);
+#endif
+};
+
+struct htc_35mm_pmic_info {
+ struct htc_headset_pmic_platform_data pdata;
+ unsigned int hpin_irq_type;
+ unsigned int hpin_debounce;
+ unsigned int key_irq_type;
+#ifdef CONFIG_HEADSET_DEBUG_UART
+ unsigned int debug_irq_type;
+#endif
+ struct wake_lock hs_wake_lock;
+ struct class* htc_accessory_class;
+ struct device* pmic_dev;
+ struct hrtimer timer;
+ struct iio_channel *channel;
+};
+
+#endif
diff --git a/include/linux/iio/buffer.h b/include/linux/iio/buffer.h
index 2bac0eb..30421ec 100644
--- a/include/linux/iio/buffer.h
+++ b/include/linux/iio/buffer.h
@@ -57,6 +57,7 @@
* control method is used
* @scan_mask: [INTERN] bitmask used in masking scan mode elements
* @scan_timestamp: [INTERN] does the scan mode include a timestamp
+ * @kfifo_use_vmalloc: [INTERN] kfifo buffer uses vmalloc instead of kmalloc
* @access: [DRIVER] buffer access functions associated with the
* implementation.
* @scan_el_dev_attr_list:[INTERN] list of scan element related attributes.
@@ -74,6 +75,7 @@
struct attribute_group *scan_el_attrs;
long *scan_mask;
bool scan_timestamp;
+ bool kfifo_use_vmalloc;
const struct iio_buffer_access_funcs *access;
struct list_head scan_el_dev_attr_list;
struct attribute_group scan_el_group;
diff --git a/include/linux/iio/iio.h b/include/linux/iio/iio.h
index 90ec0fc..1600d9a 100644
--- a/include/linux/iio/iio.h
+++ b/include/linux/iio/iio.h
@@ -243,6 +243,7 @@
#define INDIO_DIRECT_MODE 0x01
#define INDIO_BUFFER_TRIGGERED 0x02
#define INDIO_BUFFER_HARDWARE 0x08
+#define INDIO_KFIFO_USE_VMALLOC 0x10
#define INDIO_ALL_BUFFER_MODES \
(INDIO_BUFFER_TRIGGERED | INDIO_BUFFER_HARDWARE)
diff --git a/include/linux/input/cy8c_sar.h b/include/linux/input/cy8c_sar.h
new file mode 100644
index 0000000..3311bac
--- /dev/null
+++ b/include/linux/input/cy8c_sar.h
@@ -0,0 +1,125 @@
+#ifndef CY8C_SAR_I2C_H
+#define CY8C_SAR_I2C_H
+
+#include <linux/i2c.h>
+#include <linux/types.h>
+#include <linux/notifier.h>
+#include <linux/wakelock.h>
+#include <linux/list.h>
+
+#define CYPRESS_SAR_NAME "CYPRESS_SAR"
+#define CYPRESS_SAR1_NAME "CYPRESS_SAR1"
+#define CYPRESS_SS_NAME "CY8C21x34B"
+#define SAR_MISSING (0x80)
+#define SAR_DYSFUNCTIONAL (0x10)
+
+/* Bit 0 => Sensor Pad 1, ..., Bit 3 => Sensor Pad 4 */
+#define CS_STATUS (0x00)
+
+/* F/W Revision */
+#define CS_FW_VERSION (0x06)
+#define CS_FW_CONFIG (0xAA) /* Smart Sense Supported */
+
+/* 8: high sensitivity, 255: low sensitivity */
+#define CS_IDAC_BTN_BASE (0x02)
+#define CS_IDAC_BTN_PAD1 (0x02)
+#define CS_IDAC_BTN_PAD2 (0x03)
+#define CS_IDAC_BTN_PAD3 (0x04)
+#define CS_IDAC_BTN_PAD4 (0x05)
+
+#define CS_MODE (0x07)
+#define CS_DTIME (0x07)
+#define CS_FW_CHIPID (0x08) /* Smart Sense Supported */
+#define CS_FW_KEYCFG (0x0B) /* Smart Sense Supported */
+
+#define CS_SELECT (0x0C)
+#define CS_BL_HB (0x0D)
+#define CS_BL_LB (0x0E)
+#define CS_RC_HB (0x0F)
+#define CS_RC_LB (0x10)
+#define CS_DF_HB (0x11)
+#define CS_DF_LB (0x12)
+#define CS_INT_STATUS (0x13)
+
+
+#define CS_CMD_BASELINE (0x55)
+#define CS_CMD_DSLEEP (0x02)
+#define CS_CMD_BTN1 (0xA0)
+#define CS_CMD_BTN2 (0xA1)
+#define CS_CMD_BTN3 (0xA2)
+#define CS_CMD_BTN4 (0xA3)
+
+#define CS_CHIPID (0x56)
+#define CS_KEY_4 (0x04)
+#define CS_KEY_3 (0x03)
+
+#define CS_FUNC_PRINTRAW (0x01)
+#define CS_FW_BLADD (0x02)
+/* F/W Revision Addr */
+#define BL_STATUSADD (0x01)
+#define BL_CODEADD (0x02)
+#define CS_FW_VERADD (0x06)
+#define CS_FW_CHIPADD (0x08)
+#define BL_BLIVE (0x02) /* image verify needs to check BL state */
+#define BL_BLMODE (0x10) /* used to check BL mode state */
+#define BL_RETMODE (0x20) /* reset bootloader mode */
+#define BL_COMPLETE (0x21) /* bootloader mode complete */
+#define BL_RETBL (0x38) /* checksum error, reset bootloader */
+
+/* 1: Enable, 0: Disable. Restrict the cap sensor to 3-key support. */
+#define ENABLE_CAP_ONLY_3KEY 1
+
+#define CY8C_I2C_RETRY_TIMES (5)
+#define WAKEUP_DELAY (1*HZ)
+
+enum mode {
+ KEEP_AWAKE = 0,
+ DEEP_SLEEP,
+};
+
+struct infor {
+ uint16_t chipid;
+ uint16_t version;
+};
+
+struct cy8c_i2c_sar_platform_data {
+ uint16_t gpio_irq;
+ uint16_t position_id;
+ uint8_t bl_addr;
+ uint8_t ap_addr;
+ int (*reset)(void);
+ void (*gpio_init)(void);
+ int (*powerdown)(int);
+};
+
+struct cy8c_sar_data {
+ struct list_head list;
+ struct i2c_client *client;
+ struct input_dev *input_dev;
+ struct workqueue_struct *cy8c_wq;
+ uint8_t use_irq;
+ int radio_state;
+ enum mode sleep_mode;
+ int pm_state;
+ uint8_t dysfunctional;
+ uint8_t is_activated;
+ uint8_t polarity;
+ int intr_irq;
+ struct hrtimer timer;
+ spinlock_t spin_lock;
+ uint16_t version;
+ struct infor id;
+ uint16_t intr;
+ struct class *sar_class;
+ struct device *sar_dev;
+ struct delayed_work sleep_work;
+};
+
+/* For Wi-Fi callback */
+extern struct blocking_notifier_head sar_notifier_list;
+
+extern int board_build_flag(void);
+
+extern int register_notifier_by_sar(struct notifier_block *nb);
+extern int unregister_notifier_by_sar(struct notifier_block *nb);
+#endif
diff --git a/include/linux/input/synaptics_dsx.h b/include/linux/input/synaptics_dsx.h
new file mode 100644
index 0000000..617caa0
--- /dev/null
+++ b/include/linux/input/synaptics_dsx.h
@@ -0,0 +1,98 @@
+/*
+ * Synaptics DSX touchscreen driver
+ *
+ * Copyright (C) 2012 Synaptics Incorporated
+ *
+ * Copyright (C) 2012 Alexandra Chin <alexandra.chin@tw.synaptics.com>
+ * Copyright (C) 2012 Scott Lin <scott.lin@tw.synaptics.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _SYNAPTICS_DSX_H_
+#define _SYNAPTICS_DSX_H_
+
+#define PLATFORM_DRIVER_NAME "synaptics_dsx"
+#define I2C_DRIVER_NAME "synaptics_dsx_i2c"
+#define SPI_DRIVER_NAME "synaptics_dsx_spi"
+
+/*
+ * struct synaptics_dsx_cap_button_map - 0D button map
+ * @nbuttons: number of 0D buttons
+ * @map: pointer to array of button types
+ */
+struct synaptics_dsx_cap_button_map {
+ unsigned char nbuttons;
+ unsigned int *map;
+};
+
+#define SYN_CFG_BLK_UNIT (16)
+#define SYN_CONFIG_SIZE (64 * SYN_CFG_BLK_UNIT)
+
+struct synaptics_rmi4_config {
+ uint32_t sensor_id;
+ uint32_t pr_number;
+ uint16_t length;
+ uint8_t config[SYN_CONFIG_SIZE];
+};
+
+/*
+ * struct synaptics_dsx_board_data - DSX board data
+ * @x_flip: x flip flag
+ * @y_flip: y flip flag
+ * @swap_axes: swap axes flag
+ * @irq_gpio: attention interrupt GPIO
+ * @irq_on_state: attention interrupt active state
+ * @power_gpio: power switch GPIO
+ * @power_on_state: power switch active state
+ * @reset_gpio: reset GPIO
+ * @reset_on_state: reset active state
+ * @irq_flags: IRQ flags
+ * @device_descriptor_addr: HID device descriptor address
+ * @panel_x: x-axis resolution of display panel
+ * @panel_y: y-axis resolution of display panel
+ * @power_delay_ms: delay time to wait after powering up device
+ * @reset_delay_ms: delay time to wait after resetting device
+ * @reset_active_ms: reset active time
+ * @byte_delay_us: delay time between two bytes of SPI data
+ * @block_delay_us: delay time between two SPI transfers
+ * @pwr_reg_name: pointer to name of regulator for power control
+ * @bus_reg_name: pointer to name of regulator for bus pullup control
+ * @cap_button_map: pointer to 0D button map
+ */
+struct synaptics_dsx_board_data {
+ bool x_flip;
+ bool y_flip;
+ bool swap_axes;
+ int irq_gpio;
+ int irq_on_state;
+ int power_gpio;
+ int power_on_state;
+ int reset_gpio;
+ int reset_on_state;
+ unsigned long irq_flags;
+ unsigned short device_descriptor_addr;
+ unsigned int panel_x;
+ unsigned int panel_y;
+ unsigned int power_delay_ms;
+ unsigned int reset_delay_ms;
+ unsigned int reset_active_ms;
+ unsigned int byte_delay_us;
+ unsigned int block_delay_us;
+ const char *pwr_reg_name;
+ const char *bus_reg_name;
+ struct synaptics_dsx_cap_button_map *cap_button_map;
+ uint16_t tw_pin_mask;
+ int config_num;
+ struct synaptics_rmi4_config *config_table;
+};
+
+#endif
diff --git a/include/linux/jbd.h b/include/linux/jbd.h
index 7e0b622..f9b9841 100644
--- a/include/linux/jbd.h
+++ b/include/linux/jbd.h
@@ -844,6 +844,7 @@
extern int journal_try_to_free_buffers(journal_t *, struct page *, gfp_t);
extern int journal_stop(handle_t *);
extern int journal_flush (journal_t *);
+extern int journal_force_flush (journal_t *);
extern void journal_lock_updates (journal_t *);
extern void journal_unlock_updates (journal_t *);
diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h
index 704b9a5..42e177f 100644
--- a/include/linux/jbd2.h
+++ b/include/linux/jbd2.h
@@ -1135,6 +1135,7 @@
extern int jbd2_journal_try_to_free_buffers(journal_t *, struct page *, gfp_t);
extern int jbd2_journal_stop(handle_t *);
extern int jbd2_journal_flush (journal_t *);
+extern int jbd2_journal_force_flush (journal_t *);
extern void jbd2_journal_lock_updates (journal_t *);
extern void jbd2_journal_unlock_updates (journal_t *);
diff --git a/include/linux/kfifo.h b/include/linux/kfifo.h
index 10308c6..ed8f6a9 100644
--- a/include/linux/kfifo.h
+++ b/include/linux/kfifo.h
@@ -790,6 +790,9 @@
extern int __kfifo_alloc(struct __kfifo *fifo, unsigned int size,
size_t esize, gfp_t gfp_mask);
+extern int __kfifo_valloc(struct __kfifo *fifo, unsigned int size,
+ size_t esize);
+
extern void __kfifo_free(struct __kfifo *fifo);
extern int __kfifo_init(struct __kfifo *fifo, void *buffer,
diff --git a/include/linux/leds-lp5521_htc.h b/include/linux/leds-lp5521_htc.h
new file mode 100644
index 0000000..41dd406
--- /dev/null
+++ b/include/linux/leds-lp5521_htc.h
@@ -0,0 +1,31 @@
+#ifndef _LINUX_LP5521_HTC_H
+#define _LINUX_LP5521_HTC_H
+
+#define LED_I2C_NAME "LP5521-LED"
+
+#define ENABLE_REGISTER 0x00
+#define OPRATION_REGISTER 0x01
+#define R_PWM_CONTROL 0x02
+#define G_PWM_CONTROL 0x03
+#define B_PWM_CONTROL 0x04
+
+
+
+#define I2C_WRITE_RETRY_TIMES 2
+#define LED_I2C_WRITE_BLOCK_SIZE 80
+
+struct led_i2c_platform_data {
+ int num_leds;
+ int ena_gpio;
+ int ena_gpio_io_ext;
+ int tri_gpio;
+ int button_lux;
+};
+
+
+
+void led_behavior(struct i2c_client *client, int val);
+void lp5521_led_current_set_for_key(int brightness_key);
+
+#endif /* _LINUX_LP5521_HTC_H */
+
diff --git a/include/linux/llist.h b/include/linux/llist.h
index 30019d8..6c69068 100644
--- a/include/linux/llist.h
+++ b/include/linux/llist.h
@@ -126,6 +126,29 @@
(pos) = llist_entry((pos)->member.next, typeof(*(pos)), member))
/**
+ * llist_for_each_entry_safe - iterate over some deleted entries of lock-less list of given type
+ * safe against removal of list entry
+ * @pos: the type * to use as a loop cursor.
+ * @n: another type * to use as temporary storage
+ * @node: the first entry of deleted list entries.
+ * @member: the name of the llist_node within the struct.
+ *
+ * In general, some entries of the lock-less list can be traversed
+ * safely only after being removed from list, so start with an entry
+ * instead of list head.
+ *
+ * If being used on entries deleted from lock-less list directly, the
+ * traverse order is from the newest to the oldest added entry. If
+ * you want to traverse from the oldest to the newest, you must
+ * reverse the order by yourself before traversing.
+ */
+#define llist_for_each_entry_safe(pos, n, node, member) \
+ for (pos = llist_entry((node), typeof(*pos), member); \
+ &pos->member != NULL && \
+ (n = llist_entry(pos->member.next, typeof(*n), member), true); \
+ pos = n)
+
+/**
* llist_empty - tests whether a lock-less list is empty
* @head: the list to test
*
@@ -189,4 +212,6 @@
struct llist_head *head);
extern struct llist_node *llist_del_first(struct llist_head *head);
+struct llist_node *llist_reverse_order(struct llist_node *head);
+
#endif /* LLIST_H */
diff --git a/include/linux/max1187x.h b/include/linux/max1187x.h
new file mode 100644
index 0000000..1507137
--- /dev/null
+++ b/include/linux/max1187x.h
@@ -0,0 +1,179 @@
+/* include/linux/max1187x.h
+ *
+ * Copyright (c) 2012 Maxim Integrated Products, Inc.
+ *
+ * Driver Version: 3.0.7
+ * Release Date: Feb 22, 2013
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef __MAX1187X_H
+#define __MAX1187X_H
+
+#define MAX1187X_NAME "max1187x"
+#define MAX1187X_TOUCH MAX1187X_NAME "_touchscreen_0"
+#define MAX1187X_KEY MAX1187X_NAME "_key_0"
+#define MAX1187X_LOG_NAME "[TP] "
+
+#define MAX_WORDS_COMMAND 9 /* command address space 0x00-0x09 minus header
+ => 9 command words maximum */
+#define MAX_WORDS_REPORT 245 /* address space 0x00-0xFF minus 0x00-0x09 for
+ commands minus header, maximum 1 report packet*/
+#define MAX_WORDS_COMMAND_ALL (15 * MAX_WORDS_COMMAND) /* maximum 15 packets
+ 9 payload words each */
+
+#define MAX1187X_NUM_FW_MAPPINGS_MAX 5
+#define MAX1187X_TOUCH_COUNT_MAX 10
+#define MAX1187X_TOUCH_REPORT_RAW 0x0800
+#define MAX1187X_TOUCH_REPORT_BASIC 0x0801
+#define MAX1187X_TOUCH_REPORT_EXTENDED 0x0802
+#define MAX_REPORT_READERS 5
+#define DEBUG_STRING_LEN_MAX 60
+#define MAX_FW_RETRIES 5
+
+#define MAX1187X_PI 205887 /* pi multiplied by 2^16 */
+
+#define MAX1187X_TOUCH_CONFIG_MAX 65
+#define MAX1187X_CALIB_TABLE_MAX 74
+#define MAX1187X_PRIVATE_CONFIG_MAX 34
+#define MAX1187X_LOOKUP_TABLE_MAX 8
+#define MAX1187X_IMAGE_FACTOR_MAX 460
+
+#define MAX1187X_NO_BASELINE 0
+#define MAX1187X_FIX_BASELINE 1
+#define MAX1187X_AUTO_BASELINE 2
+
+struct max1187x_touch_report_header {
+ u16 header;
+ u16 report_id;
+ u16 report_size;
+ u16 touch_count:4;
+ u16 touch_status:4;
+ u16 reserved0:5;
+ u16 cycles:1;
+ u16 reserved1:2;
+ u16 button0:1;
+ u16 button1:1;
+ u16 button2:1;
+ u16 button3:1;
+ u16 reserved2:12;
+ u16 framecounter;
+};
+
+struct max1187x_touch_report_basic {
+ u16 finger_id:4;
+ u16 reserved0:4;
+ u16 finger_status:4;
+ u16 reserved1:4;
+ u16 x:12;
+ u16 reserved2:4;
+ u16 y:12;
+ u16 reserved3:4;
+ u16 z;
+};
+
+struct max1187x_touch_report_extended {
+ u16 finger_id:4;
+ u16 reserved0:4;
+ u16 finger_status:4;
+ u16 reserved1:4;
+ u16 x:12;
+ u16 reserved2:4;
+ u16 y:12;
+ u16 reserved3:4;
+ u16 z;
+ s16 xspeed;
+ s16 yspeed;
+ s8 xpixel;
+ s8 ypixel;
+ u16 area;
+ u16 xmin;
+ u16 xmax;
+ u16 ymin;
+ u16 ymax;
+};
+
+struct max1187x_board_config {
+ u16 config_id;
+ u16 chip_id;
+ u8 major_ver;
+ u8 minor_ver;
+ u8 protocol_ver;
+ u16 vendor_pin;
+ u16 config_touch[MAX1187X_TOUCH_CONFIG_MAX];
+ u16 config_cal[MAX1187X_CALIB_TABLE_MAX];
+ u16 config_private[MAX1187X_PRIVATE_CONFIG_MAX];
+ u16 config_lin_x[MAX1187X_LOOKUP_TABLE_MAX];
+ u16 config_lin_y[MAX1187X_LOOKUP_TABLE_MAX];
+ u16 config_ifactor[MAX1187X_IMAGE_FACTOR_MAX];
+};
+
+struct max1187x_virtual_key {
+ int index;
+ int keycode;
+ int x_position;
+ int y_position;
+};
+
+struct max1187x_fw_mapping {
+ u32 chip_id;
+ char *filename;
+ u32 filesize;
+ u32 filecrc16;
+ u32 file_codesize;
+};
+
+struct max1187x_pdata {
+ struct max1187x_board_config *fw_config;
+ u32 gpio_tirq;
+ u32 gpio_reset;
+ u32 num_fw_mappings;
+ struct max1187x_fw_mapping fw_mapping[MAX1187X_NUM_FW_MAPPINGS_MAX];
+ u32 defaults_allow;
+ u32 default_config_id;
+ u32 default_chip_id;
+ u32 i2c_words;
+ #define MAX1187X_REVERSE_X 0x0001
+ #define MAX1187X_REVERSE_Y 0x0002
+ #define MAX1187X_SWAP_XY 0x0004
+ u32 coordinate_settings;
+ u32 panel_min_x;
+ u32 panel_max_x;
+ u32 panel_min_y;
+ u32 panel_max_y;
+ u32 lcd_x;
+ u32 lcd_y;
+ u32 num_rows;
+ u32 num_cols;
+ #define MAX1187X_PROTOCOL_A 0
+ #define MAX1187X_PROTOCOL_B 1
+ #define MAX1187X_PROTOCOL_CUSTOM1 2
+ u16 input_protocol;
+ #define MAX1187X_UPDATE_NONE 0
+ #define MAX1187X_UPDATE_BIN 1
+ #define MAX1187X_UPDATE_CONFIG 2
+ #define MAX1187X_UPDATE_BOTH 3
+ u8 update_feature;
+ u8 support_htc_event;
+ u16 tw_mask;
+ u32 button_code0;
+ u32 button_code1;
+ u32 button_code2;
+ u32 button_code3;
+ #define MAX1187X_REPORT_MODE_BASIC 1
+ #define MAX1187X_REPORT_MODE_EXTEND 2
+ u8 report_mode;
+ struct max1187x_virtual_key *button_data;
+};
+
+#endif /* __MAX1187X_H */
+
diff --git a/include/linux/maxim_sti.h b/include/linux/maxim_sti.h
index 1a495e9..7de5cea 100644
--- a/include/linux/maxim_sti.h
+++ b/include/linux/maxim_sti.h
@@ -191,7 +191,7 @@
__u8 method;
};
-#define MAX_IRQ_PARAMS 26
+#define MAX_IRQ_PARAMS 20
struct __attribute__ ((__packed__)) dr_config_irq {
__u16 irq_param[MAX_IRQ_PARAMS];
__u8 irq_params;
diff --git a/include/linux/mfd/palmas.h b/include/linux/mfd/palmas.h
index 791c361..6a3fa57 100644
--- a/include/linux/mfd/palmas.h
+++ b/include/linux/mfd/palmas.h
@@ -470,6 +470,8 @@
int ldo6_vibrator;
bool disable_smps10_boost_suspend;
+
+ bool disable_smps10_in_suspend;
};
struct palmas_usb_platform_data {
@@ -551,6 +553,7 @@
struct palmas_pm_platform_data {
bool use_power_off;
bool use_power_reset;
+ bool use_boot_up_at_vbus;
};
struct palmas_pinctrl_config {
@@ -612,6 +615,11 @@
bool enable_in1_above_threshold;
};
+struct palmas_voltage_monitor_platform_data {
+ bool use_vbat_monitor;
+ bool use_vsys_monitor;
+};
+
struct palmas_platform_data {
int irq_flags;
int gpio_base;
@@ -630,6 +638,7 @@
struct palmas_battery_platform_data *battery_pdata;
struct palmas_sim_platform_data *sim_pdata;
struct palmas_ldousb_in_platform_data *ldousb_in_pdata;
+ struct palmas_voltage_monitor_platform_data *voltage_monitor_pdata;
struct palmas_clk32k_init_data *clk32k_init_data;
int clk32k_init_data_size;
@@ -735,6 +744,9 @@
unsigned int current_reg_mode[PALMAS_REG_SMPS10_OUT1];
unsigned long config_flags[PALMAS_NUM_REGS];
bool disable_active_discharge_idle[PALMAS_NUM_REGS];
+
+ bool disable_smps10_in_suspend;
+ unsigned int smps10_ctrl_reg;
};
struct palmas_resource {
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 14f0d85..eeba137 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -534,6 +534,10 @@
}
#endif
+#ifdef CONFIG_CMA
+unsigned int cma_threshold_get(void);
+#endif
+
/*
* Multiple processes may "see" the same page. E.g. for untouched
* mappings of /dev/null, all processes see the same page full of
diff --git a/include/linux/nvhost_as_ioctl.h b/include/linux/nvhost_as_ioctl.h
index cb6e8fd..56488c5 100644
--- a/include/linux/nvhost_as_ioctl.h
+++ b/include/linux/nvhost_as_ioctl.h
@@ -146,9 +146,10 @@
__u32 dmabuf_fd; /* in */
__u32 page_size; /* inout, 0:= best fit to buffer */
- __u32 padding[4]; /* reserved for future usage */
+ __u64 buffer_offset; /* in, offset of mapped buffer region */
+ __u64 mapping_size; /* in, size of mapped buffer region */
- __u64 offset; /* in/out, we use this address if flag
+ __u64 as_offset; /* in/out, we use this address if flag
* FIXED_OFFSET is set. This will fail
* if space is not properly allocated. The
* actual virtual address to which we mapped
diff --git a/include/linux/ote_protocol.h b/include/linux/ote_protocol.h
index 766e73e..a3c9fca 100644
--- a/include/linux/ote_protocol.h
+++ b/include/linux/ote_protocol.h
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2013 NVIDIA Corporation. All rights reserved.
+ * Copyright (c) 2013-2015 NVIDIA Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -20,5 +20,6 @@
#define __OTE_PROTOCOL_H__
int te_set_vpr_params(void *vpr_base, size_t vpr_size);
+void te_restore_keyslots(void);
#endif
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 0530ab8..34badb2 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -392,6 +392,8 @@
*/
extern void wait_on_page_bit(struct page *page, int bit_nr);
+extern void wait_on_page_bit_timeout(struct page *page, int bit_nr);
+
extern int wait_on_page_bit_killable(struct page *page, int bit_nr);
static inline int wait_on_page_locked_killable(struct page *page)
@@ -414,6 +416,12 @@
wait_on_page_bit(page, PG_locked);
}
+static inline void wait_on_page_locked_timeout(struct page *page)
+{
+ if (PageLocked(page))
+ wait_on_page_bit_timeout(page, PG_locked);
+}
+
/*
* Wait for a page to complete writeback
*/
diff --git a/include/linux/platform_data/qcom_usb_modem_power.h b/include/linux/platform_data/qcom_usb_modem_power.h
new file mode 100644
index 0000000..98c8ec3
--- /dev/null
+++ b/include/linux/platform_data/qcom_usb_modem_power.h
@@ -0,0 +1,240 @@
+/*
+ * Copyright (c) 2014, HTC CORPORATION. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#ifndef __MACH_QCOM_USB_MODEM_POWER_H
+#define __MACH_QCOM_USB_MODEM_POWER_H
+
+#include <linux/interrupt.h>
+#include <linux/usb.h>
+#include <linux/pm_qos.h>
+#include <linux/wakelock.h>
+
+#ifdef CONFIG_MDM_SYSEDP
+#include <linux/sysedp.h>
+#endif
+
+/* modem private data structure */
+#if defined(CONFIG_MDM_FTRACE_DEBUG) || defined(CONFIG_MDM_ERRMSG)
+#define MDM_COM_BUF_SIZE 256
+#endif
+
+#ifdef CONFIG_MSM_SYSMON_COMM
+#define RD_BUF_SIZE 100
+#define MODEM_ERRMSG_LIST_LEN 10
+
+struct mdm_msr_info {
+ int valid;
+ struct timespec msr_time;
+ char modem_errmsg[RD_BUF_SIZE];
+};
+#endif
+
+enum charm_boot_type {
+ CHARM_NORMAL_BOOT = 0,
+ CHARM_RAM_DUMPS,
+ CHARM_CNV_RESET,
+};
+
+#ifdef CONFIG_MDM_SYSEDP
+enum sysedp_radio_statue {
+ MDM_SYSEDP_AIRPLANE_MODE = 0,
+ MDM_SYSEDP_SEARCHING_MODE = 1,
+ MDM_SYSEDP_2G_MODE = 2,
+ MDM_SYSEDP_3G_MODE = 3,
+ MDM_SYSEDP_LTE_MODE = 4,
+ MDM_SYSEDP_MAX
+};
+#endif
+
+struct qcom_usb_modem {
+ struct qcom_usb_modem_power_platform_data *pdata;
+ struct platform_device *pdev;
+
+ /* irq */
+ unsigned int wake_cnt; /* remote wakeup counter */
+ unsigned int wake_irq; /* remote wakeup irq */
+ bool wake_irq_wakeable;
+ unsigned int errfatal_irq;
+ bool errfatal_irq_wakeable;
+ unsigned int hsic_ready_irq;
+ bool hsic_ready_irq_wakeable;
+ unsigned int status_irq;
+ bool status_irq_wakeable;
+ unsigned int ipc3_irq;
+ bool ipc3_irq_wakeable;
+ unsigned int vdd_min_irq;
+ bool vdd_min_irq_wakeable;
+ bool mdm_irq_enabled;
+ bool mdm_wake_irq_enabled;
+ bool mdm_gpio_exported;
+
+ /* mutex */
+ struct mutex lock;
+ struct mutex hc_lock;
+ struct mutex hsic_phy_lock;
+#ifdef CONFIG_MDM_FTRACE_DEBUG
+ struct mutex ftrace_cmd_lock;
+#endif
+
+ /* wake lock */
+ struct wake_lock wake_lock; /* modem wake lock */
+
+ /* usb */
+ unsigned int vid; /* modem vendor id */
+ unsigned int pid; /* modem product id */
+ struct usb_device *udev; /* modem usb device */
+ struct usb_device *parent; /* parent device */
+ struct usb_interface *intf; /* first modem usb interface */
+ struct platform_device *hc; /* USB host controller */
+ struct notifier_block usb_notifier; /* usb event notifier */
+ struct notifier_block pm_notifier; /* pm event notifier */
+ int system_suspend; /* system suspend flag */
+ int short_autosuspend_enabled;
+
+ /* workqueue */
+ struct workqueue_struct *usb_host_wq; /* Usb host workqueue */
+ struct workqueue_struct *wq; /* modem workqueue */
+ struct workqueue_struct *mdm_recovery_wq; /* modem recovery workqueue */
+#ifdef CONFIG_MDM_FTRACE_DEBUG
+ struct workqueue_struct *ftrace_wq; /* ftrace workqueue */
+#endif
+#ifdef CONFIG_MSM_SYSMON_COMM
+ struct workqueue_struct *mdm_restart_wq;
+#endif
+
+ /* work */
+ struct work_struct host_reset_work; /* usb host reset work */
+ struct work_struct host_load_work; /* usb host load work */
+ struct work_struct host_unload_work; /* usb host unload work */
+ struct pm_qos_request cpu_boost_req; /* min CPU freq request */
+ struct work_struct cpu_boost_work; /* CPU freq boost work */
+ struct delayed_work cpu_unboost_work; /* CPU freq unboost work */
+ struct work_struct mdm_hsic_ready_work; /* modem hsic ready work */
+ struct work_struct mdm_status_work; /* modem status changed work */
+ struct work_struct mdm_errfatal_work; /* modem errfatal work */
+#ifdef CONFIG_MDM_FTRACE_DEBUG
+ struct work_struct ftrace_enable_log_work; /* ftrace enable log work */
+#endif
+#ifdef CONFIG_MSM_SYSMON_COMM
+ struct work_struct mdm_restart_reason_work; /* modem reset reason work */
+#endif
+
+ const struct qcom_modem_operations *ops; /* modem operations */
+
+ /* modem status */
+ enum charm_boot_type boot_type;
+ unsigned int mdm9k_status;
+ struct proc_dir_entry *mdm9k_pde;
+ unsigned short int mdm_status;
+ unsigned int mdm2ap_ipc3_status;
+ atomic_t final_efs_wait;
+ struct completion usb_host_reset_done;
+
+ /* modem debug */
+ bool mdm_debug_on;
+#ifdef CONFIG_MDM_FTRACE_DEBUG
+ bool ftrace_enable;
+ char ftrace_cmd[MDM_COM_BUF_SIZE];
+ struct completion ftrace_cmd_pending;
+ struct completion ftrace_cmd_can_be_executed;
+#endif
+#ifdef CONFIG_MSM_SYSMON_COMM
+ struct mdm_msr_info msr_info_list[MODEM_ERRMSG_LIST_LEN];
+ int mdm_msr_index;
+#endif
+#ifdef CONFIG_MDM_ERRMSG
+ char mdm_errmsg[MDM_COM_BUF_SIZE];
+#endif
+#ifdef CONFIG_MSM_SUBSYSTEM_RESTART
+ struct completion mdm_needs_reload;
+ struct completion mdm_boot;
+ struct completion mdm_ram_dumps;
+ int ramdump_save;
+#endif
+
+ /* hsic wakeup */
+ unsigned long mdm_hsic_phy_resume_jiffies;
+ unsigned long mdm_hsic_phy_active_total_ms;
+ bool hsic_wakeup_pending;
+#ifdef CONFIG_MDM_SYSEDP
+ struct sysedp_consumer *sysedpc;
+ enum sysedp_radio_statue radio_state;
+#endif
+};
+
+/* modem operations */
+struct qcom_modem_operations {
+ int (*init) (struct qcom_usb_modem *); /* modem init */
+ void (*start) (struct qcom_usb_modem *); /* modem start */
+ void (*stop) (struct qcom_usb_modem *); /* modem stop */
+ void (*stop2) (struct qcom_usb_modem *); /* modem stop 2 */
+ void (*suspend) (void); /* send L3 hint during system suspend */
+ void (*resume) (void); /* send L3->0 hint during system resume */
+ void (*reset) (void); /* modem reset */
+ void (*remove) (struct qcom_usb_modem *); /* modem remove */
+ void (*status_cb) (struct qcom_usb_modem *, int);
+ void (*fatal_trigger_cb) (struct qcom_usb_modem *);
+ void (*normal_boot_done_cb) (struct qcom_usb_modem *);
+ void (*nv_write_done_cb) (struct qcom_usb_modem *);
+ void (*debug_state_changed_cb) (struct qcom_usb_modem *, int);
+ void (*dump_mdm_gpio_cb) (struct qcom_usb_modem *, int, char *); /* if the gpio value is negative, dump all gpios */
+};
+
+/* tegra usb modem power platform data */
+struct qcom_usb_modem_power_platform_data {
+ char *mdm_version;
+ const struct qcom_modem_operations *ops;
+ const struct usb_device_id *modem_list; /* supported modem list */
+ unsigned mdm2ap_errfatal_gpio;
+ unsigned ap2mdm_errfatal_gpio;
+ unsigned mdm2ap_status_gpio;
+ unsigned ap2mdm_status_gpio;
+ unsigned mdm2ap_wakeup_gpio;
+ unsigned ap2mdm_wakeup_gpio;
+ unsigned mdm2ap_vdd_min_gpio;
+ unsigned ap2mdm_vdd_min_gpio;
+ unsigned mdm2ap_hsic_ready_gpio;
+ unsigned ap2mdm_pmic_reset_n_gpio;
+ unsigned ap2mdm_ipc1_gpio;
+ unsigned ap2mdm_ipc2_gpio;
+ unsigned mdm2ap_ipc3_gpio;
+ unsigned long errfatal_irq_flags; /* modem error fatal irq flags */
+ unsigned long status_irq_flags; /* modem status irq flags */
+ unsigned long wake_irq_flags; /* remote wakeup irq flags */
+ unsigned long vdd_min_irq_flags; /* vdd min irq flags */
+ unsigned long hsic_ready_irq_flags; /* modem hsic ready irq flags */
+ unsigned long ipc3_irq_flags; /* ipc3 irq flags */
+ int autosuspend_delay; /* autosuspend delay in milliseconds */
+ int short_autosuspend_delay; /* short autosuspend delay in ms */
+ int ramdump_delay_ms;
+ struct platform_device *tegra_ehci_device; /* USB host device */
+ struct tegra_usb_platform_data *tegra_ehci_pdata;
+};
+
+/* MDM status bit mask definition */
+#define MDM_STATUS_POWER_DOWN 0
+#define MDM_STATUS_POWER_ON 1
+#define MDM_STATUS_HSIC_READY (1 << 1)
+#define MDM_STATUS_STATUS_READY (1 << 2)
+#define MDM_STATUS_BOOT_DONE (1 << 3)
+#define MDM_STATUS_RESET (1 << 4)
+#define MDM_STATUS_RESETTING (1 << 5)
+#define MDM_STATUS_RAMDUMP (1 << 6)
+#define MDM_STATUS_EFFECTIVE_BIT 0x7f
+
+#endif /* __MACH_QCOM_USB_MODEM_POWER_H */
diff --git a/include/linux/power/battery-charger-gauge-comm.h b/include/linux/power/battery-charger-gauge-comm.h
index f04ef64..e6cac7c 100644
--- a/include/linux/power/battery-charger-gauge-comm.h
+++ b/include/linux/power/battery-charger-gauge-comm.h
@@ -30,6 +30,18 @@
BATTERY_DISCHARGING,
BATTERY_CHARGING,
BATTERY_CHARGING_DONE,
+ BATTERY_UNKNOWN,
+};
+
+enum charge_thermal_state {
+ CHARGE_THERMAL_START = 0,
+ CHARGE_THERMAL_NORMAL,
+ CHARGE_THERMAL_COLD_STOP,
+ CHARGE_THERMAL_COOL_STOP,
+ CHARGE_THERMAL_COOL,
+ CHARGE_THERMAL_WARM,
+ CHARGE_THERMAL_WARM_STOP,
+ CHARGE_THERMAL_HOT_STOP,
};
struct battery_gauge_dev;
@@ -48,6 +60,37 @@
int (*thermal_configure)(struct battery_charger_dev *bct_dev,
int temp, bool enable_charger, bool enable_charg_half_current,
int battery_voltage);
+ int (*charging_full_configure)(struct battery_charger_dev *bc_dev,
+ bool charge_full_done, bool charge_full_stop);
+ int (*input_voltage_configure)(struct battery_charger_dev *bc_dev,
+ int voltage_min);
+ int (*unknown_battery_handle)(struct battery_charger_dev *bc_dev);
+};
+
+struct battery_thermal_prop {
+ int temp_hot_dc;
+ int temp_cold_dc;
+ int temp_warm_dc;
+ int temp_cool_dc;
+ unsigned int temp_hysteresis_dc;
+ unsigned int regulation_voltage_mv;
+ unsigned int warm_voltage_mv;
+ unsigned int cool_voltage_mv;
+ bool disable_warm_current_half;
+ bool disable_cool_current_half;
+};
+
+struct charge_full_threshold {
+ int chg_done_voltage_min_mv;
+ int chg_done_current_min_ma;
+ int chg_done_low_current_min_ma;
+ int recharge_voltage_min_mv;
+};
+
+struct charge_input_switch {
+ int input_switch_threshold_mv;
+ int input_vmin_high_mv;
+ int input_vmin_low_mv;
};
struct battery_charger_info {
@@ -55,7 +98,14 @@
int cell_id;
int polling_time_sec;
bool enable_thermal_monitor;
+ bool enable_batt_status_monitor;
struct battery_charging_ops *bc_ops;
+ struct battery_thermal_prop thermal_prop;
+ struct charge_full_threshold full_thr;
+ struct charge_input_switch input_switch;
+ const char *batt_id_channel_name;
+ int unknown_batt_id_min;
+ const char *gauge_psy_name;
};
struct battery_gauge_info {
@@ -76,6 +126,16 @@
struct battery_charger_dev *bc_dev);
int battery_charger_thermal_stop_monitoring(
struct battery_charger_dev *bc_dev);
+int battery_charger_batt_status_start_monitoring(
+ struct battery_charger_dev *bc_dev,
+ int in_current_limit);
+int battery_charger_batt_status_stop_monitoring(
+ struct battery_charger_dev *bc_dev);
+int battery_charger_batt_status_force_check(
+ struct battery_charger_dev *bc_dev);
+int battery_charger_get_batt_status_no_update_time_ms(
+ struct battery_charger_dev *bc_dev,
+ s64 *time);
int battery_charger_acquire_wake_lock(struct battery_charger_dev *bc_dev);
int battery_charger_release_wake_lock(struct battery_charger_dev *bc_dev);
@@ -100,6 +160,8 @@
void battery_gauge_set_drvdata(struct battery_gauge_dev *bg_dev, void *data);
int battery_gauge_record_voltage_value(struct battery_gauge_dev *bg_dev,
int voltage);
+int battery_gauge_record_current_value(struct battery_gauge_dev *bg_dev,
+ int batt_current);
int battery_gauge_record_capacity_value(struct battery_gauge_dev *bg_dev,
int capacity);
int battery_gauge_record_snapshot_values(struct battery_gauge_dev *bg_dev,
diff --git a/include/linux/power/bq2419x-charger-htc.h b/include/linux/power/bq2419x-charger-htc.h
new file mode 100644
index 0000000..d1c14d0
--- /dev/null
+++ b/include/linux/power/bq2419x-charger-htc.h
@@ -0,0 +1,149 @@
+/*
+ * bq2419x-charger-htc.h -- BQ24190/BQ24192/BQ24192i/BQ24193 Charger driver
+ *
+ * Copyright (c) 2014, HTC CORPORATION. All rights reserved.
+ * Copyright (c) 2013-2014, NVIDIA CORPORATION. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ *
+ */
+
+#ifndef __LINUX_POWER_BQ2419X_CHARGER_HTC_H
+#define __LINUX_POWER_BQ2419X_CHARGER_HTC_H
+
+/* Register definitions */
+#define BQ2419X_INPUT_SRC_REG 0x00
+#define BQ2419X_PWR_ON_REG 0x01
+#define BQ2419X_CHRG_CTRL_REG 0x02
+#define BQ2419X_CHRG_TERM_REG 0x03
+#define BQ2419X_VOLT_CTRL_REG 0x04
+#define BQ2419X_TIME_CTRL_REG 0x05
+#define BQ2419X_THERM_REG 0x06
+#define BQ2419X_MISC_OPER_REG 0x07
+#define BQ2419X_SYS_STAT_REG 0x08
+#define BQ2419X_FAULT_REG 0x09
+#define BQ2419X_REVISION_REG 0x0a
+
+#define BQ2419X_INPUT_VINDPM_MASK 0x78
+#define BQ2419X_INPUT_IINLIM_MASK 0x07
+
+#define BQ2419X_CHRG_CTRL_ICHG_MASK 0xFC
+
+#define BQ2419X_CHRG_TERM_PRECHG_MASK 0xF0
+#define BQ2419X_CHRG_TERM_TERM_MASK 0x0F
+
+#define BQ2419X_THERM_BAT_COMP_MASK 0xE0
+#define BQ2419X_THERM_VCLAMP_MASK 0x1C
+#define BQ2419X_THERM_TREG_MASK 0x03
+
+#define BQ2419X_TIME_JEITA_ISET 0x01
+
+#define BQ2419X_CHG_VOLT_LIMIT_MASK 0xFC
+
+#define BQ24190_IC_VER 0x20
+#define BQ24192_IC_VER 0x28
+#define BQ24192i_IC_VER 0x18
+
+#define BQ2419X_ENABLE_CHARGE_MASK 0x30
+#define BQ2419X_ENABLE_VBUS 0x20
+#define BQ2419X_ENABLE_CHARGE 0x10
+#define BQ2419X_DISABLE_CHARGE 0x00
+
+#define BQ2419X_REG0 0x0
+#define BQ2419X_EN_HIZ BIT(7)
+
+#define BQ2419X_EN_TERM BIT(7)
+
+#define BQ2419X_WD 0x5
+#define BQ2419X_WD_MASK 0x30
+#define BQ2419X_EN_SFT_TIMER_MASK BIT(3)
+#define BQ2419X_WD_DISABLE 0x00
+#define BQ2419X_WD_40ms 0x10
+#define BQ2419X_WD_80ms 0x20
+#define BQ2419X_WD_160ms 0x30
+
+#define BQ2419x_VBUS_STAT 0xc0
+#define BQ2419x_VBUS_UNKNOWN 0x00
+#define BQ2419x_VBUS_USB 0x40
+#define BQ2419x_VBUS_AC 0x80
+
+#define BQ2419x_CHRG_STATE_MASK 0x30
+#define BQ2419x_VSYS_STAT_MASK 0x01
+#define BQ2419x_VSYS_STAT_BATT_LOW 0x01
+#define BQ2419x_CHRG_STATE_NOTCHARGING 0x00
+#define BQ2419x_CHRG_STATE_PRE_CHARGE 0x10
+#define BQ2419x_CHRG_STATE_POST_CHARGE 0x20
+#define BQ2419x_CHRG_STATE_CHARGE_DONE 0x30
+#define BQ2419x_DPM_STAT_MASK 0x08
+#define BQ2419x_DPM_MODE 0x08
+#define BQ2419x_PG_STAT_MASK 0x04
+#define BQ2419x_POWER_GOOD 0x04
+#define BQ2419x_THERM_STAT_MASK 0x02
+#define BQ2419x_IN_THERM_REGULATION 0x02
+
+#define BQ2419x_FAULT_WATCHDOG_FAULT BIT(7)
+#define BQ2419x_FAULT_BOOST_FAULT BIT(6)
+#define BQ2419x_FAULT_CHRG_FAULT_MASK 0x30
+#define BQ2419x_FAULT_CHRG_NORMAL 0x00
+#define BQ2419x_FAULT_CHRG_INPUT 0x10
+#define BQ2419x_FAULT_CHRG_THERMAL 0x20
+#define BQ2419x_FAULT_CHRG_SAFTY 0x30
+#define BQ2419x_FAULT_BAT_FAULT BIT(3)
+
+#define BQ2419x_FAULT_NTC_FAULT 0x07
+#define BQ2419x_TREG 0x03
+#define BQ2419x_TREG_100_C 0x02
+
+#define BQ2419x_CONFIG_MASK 0x7
+#define BQ2419x_INPUT_VOLTAGE_MASK 0x78
+#define BQ2419x_NVCHARGER_INPUT_VOL_SEL 0x40
+#define BQ2419x_DEFAULT_INPUT_VOL_SEL 0x30
+#define BQ2419x_VOLTAGE_CTRL_MASK 0xFC
+
+#define BQ2419x_CHARGING_CURRENT_STEP_DELAY_US 1000
+
+#define BQ2419X_MAX_REGS (BQ2419X_REVISION_REG + 1)
+
+/*
+ * struct bq2419x_vbus_platform_data - bq2419x VBUS platform data.
+ *
+ * @gpio_otg_iusb: GPIO number for OTG/IUSB.
+ * @num_consumer_supplies: Number of consumers for the VBUS regulator.
+ * @consumer_supplies: List of consumer supplies.
+ */
+struct bq2419x_vbus_platform_data {
+ int gpio_otg_iusb;
+ int num_consumer_supplies;
+ struct regulator_consumer_supply *consumer_supplies;
+};
+
+/*
+ * struct bq2419x_charger_platform_data - bq2419x charger platform data.
+ */
+struct bq2419x_charger_platform_data {
+ int ir_compensation_resister_ohm;
+ int ir_compensation_voltage_mv;
+ int thermal_regulation_threshold_degc;
+};
+
+/*
+ * struct bq2419x_platform_data - bq2419x platform data.
+ */
+struct bq2419x_platform_data {
+ struct bq2419x_vbus_platform_data *vbus_pdata;
+ struct bq2419x_charger_platform_data *bcharger_pdata;
+};
+
+#endif /* __LINUX_POWER_BQ2419X_CHARGER_HTC_H */
diff --git a/include/linux/power/bq2419x-charger.h b/include/linux/power/bq2419x-charger.h
index 128d2a1..5f2e72a 100644
--- a/include/linux/power/bq2419x-charger.h
+++ b/include/linux/power/bq2419x-charger.h
@@ -67,6 +67,8 @@
#define BQ2419X_REG0 0x0
#define BQ2419X_EN_HIZ BIT(7)
+#define BQ2419X_EN_TERM BIT(7)
+
#define BQ2419X_WD 0x5
#define BQ2419X_WD_MASK 0x30
#define BQ2419X_EN_SFT_TIMER_MASK BIT(3)
@@ -95,6 +97,7 @@
#define BQ2419x_FAULT_CHRG_INPUT 0x10
#define BQ2419x_FAULT_CHRG_THERMAL 0x20
#define BQ2419x_FAULT_CHRG_SAFTY 0x30
+#define BQ2419x_FAULT_BAT_FAULT BIT(3)
#define BQ2419x_FAULT_NTC_FAULT 0x07
#define BQ2419x_TREG 0x03
@@ -124,6 +127,23 @@
};
/*
+ * struct bq2419x_thermal_prop - bq2419x thermal properties
+ * for battery-charger-gauge-comm.
+ */
+struct bq2419x_thermal_prop {
+ int temp_hot_dc;
+ int temp_cold_dc;
+ int temp_warm_dc;
+ int temp_cool_dc;
+ unsigned int temp_hysteresis_dc;
+ unsigned int warm_voltage_mv;
+ unsigned int cool_voltage_mv;
+ bool disable_warm_current_half;
+ bool disable_cool_current_half;
+ unsigned int otp_output_current_ma;
+};
+
+/*
* struct bq2419x_charger_platform_data - bq2419x charger platform data.
*/
struct bq2419x_charger_platform_data {
@@ -141,14 +161,52 @@
int num_consumer_supplies;
struct regulator_consumer_supply *consumer_supplies;
int chg_restart_time;
+ int auto_recharge_time_power_off;
const char *tz_name; /* Thermal zone name */
bool disable_suspend_during_charging;
bool enable_thermal_monitor; /* TRUE if FuelGauge provides temp */
+ int charge_suspend_polling_time_sec;
int temp_polling_time_sec;
int n_temp_profile;
u32 *temp_range;
u32 *chg_current_limit;
u32 *chg_thermal_voltage_limit;
+ u32 auto_recharge_time_supend;
+ bool otp_control_no_thermister; /* TRUE if the chip thermistor is unused */
+ struct bq2419x_thermal_prop thermal_prop;
+ bool safety_timer_reset_disable;
+};
+
+
+/*
+ * struct bq2419x_charge_full_threshold - used for charging full/recharge check
+ */
+struct bq2419x_charge_full_threshold {
+ int chg_done_voltage_min_mv;
+ int chg_done_current_min_ma;
+ int chg_done_low_current_min_ma;
+ int recharge_voltage_min_mv;
+};
+
+/*
+ * struct bq2419x_charge_input_switch - used for adjust input voltage
+ */
+struct bq2419x_charge_input_switch {
+ int input_switch_threshold_mv;
+ int input_vmin_high_mv;
+ int input_vmin_low_mv;
+};
+
+/*
+ * struct bq2419x_charge_policy_platform_data - bq2419x charge policy data
+ *
+ */
+struct bq2419x_charge_policy_platform_data {
+ bool enable_battery_status_monitor;
+ struct bq2419x_charge_full_threshold full_thr;
+ struct bq2419x_charge_input_switch input_switch;
+ const char *batt_id_channel_name;
+ int unknown_batt_id_min;
};
/*
@@ -157,6 +215,7 @@
struct bq2419x_platform_data {
struct bq2419x_vbus_platform_data *vbus_pdata;
struct bq2419x_charger_platform_data *bcharger_pdata;
+ struct bq2419x_charge_policy_platform_data *cpolicy_pdata;
};
#endif /* __LINUX_POWER_BQ2419X_CHARGER_H */
diff --git a/include/linux/sensor_hub.h b/include/linux/sensor_hub.h
new file mode 100644
index 0000000..d74b902
--- /dev/null
+++ b/include/linux/sensor_hub.h
@@ -0,0 +1,11 @@
+/* Common P/L sensor header file for each vendor chip */
+#ifndef __LINUX_SENSOR_HUB_H
+#define __LINUX_SENSOR_HUB_H
+
+extern struct blocking_notifier_head double_tap_notifier_list;
+
+extern int register_notifier_by_facedown(struct notifier_block *nb);
+extern int unregister_notifier_by_facedown(struct notifier_block *nb);
+
+#endif
+
diff --git a/include/linux/smux.h b/include/linux/smux.h
new file mode 100644
index 0000000..258cedf
--- /dev/null
+++ b/include/linux/smux.h
@@ -0,0 +1,295 @@
+/* include/linux/smux.h
+ *
+ * Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+#ifndef SMUX_H
+#define SMUX_H
+
+/**
+ * Logical Channel IDs
+ *
+ * This must be identical between local and remote clients.
+ */
+enum {
+ /* Data Ports */
+ SMUX_DATA_0,
+ SMUX_DATA_1,
+ SMUX_DATA_2,
+ SMUX_DATA_3,
+ SMUX_DATA_4,
+ SMUX_DATA_5,
+ SMUX_DATA_6,
+ SMUX_DATA_7,
+ SMUX_DATA_8,
+ SMUX_DATA_9,
+ SMUX_USB_RMNET_DATA_0,
+ SMUX_USB_DUN_0,
+ SMUX_USB_DIAG_0,
+ SMUX_SYS_MONITOR_0,
+ SMUX_CSVT_0,
+ /* add new data ports here */
+
+ /* Control Ports */
+ SMUX_DATA_CTL_0 = 32,
+ SMUX_DATA_CTL_1,
+ SMUX_DATA_CTL_2,
+ SMUX_DATA_CTL_3,
+ SMUX_DATA_CTL_4,
+ SMUX_DATA_CTL_5,
+ SMUX_DATA_CTL_6,
+ SMUX_DATA_CTL_7,
+ SMUX_DATA_CTL_8,
+ SMUX_DATA_CTL_9,
+ SMUX_USB_RMNET_CTL_0,
+ SMUX_USB_DUN_CTL_0_UNUSED,
+ SMUX_USB_DIAG_CTL_0,
+ SMUX_SYS_MONITOR_CTL_0,
+ SMUX_CSVT_CTL_0,
+ /* add new control ports here */
+
+ SMUX_TEST_LCID,
+ SMUX_NUM_LOGICAL_CHANNELS,
+};
+
+/**
+ * Notification events that are passed to the notify() function.
+ *
+ * If the @metadata argument in the notifier is non-null, then it will
+ * point to the associated struct smux_meta_* structure.
+ */
+enum {
+ SMUX_CONNECTED, /* @metadata is null */
+ SMUX_DISCONNECTED,
+ SMUX_READ_DONE,
+ SMUX_READ_FAIL,
+ SMUX_WRITE_DONE,
+ SMUX_WRITE_FAIL,
+ SMUX_TIOCM_UPDATE,
+ SMUX_LOW_WM_HIT, /* @metadata is NULL */
+ SMUX_HIGH_WM_HIT, /* @metadata is NULL */
+ SMUX_RX_RETRY_HIGH_WM_HIT, /* @metadata is NULL */
+ SMUX_RX_RETRY_LOW_WM_HIT, /* @metadata is NULL */
+ SMUX_LOCAL_CLOSED,
+ SMUX_REMOTE_CLOSED,
+};
+
+/**
+ * Channel options used to modify channel behavior.
+ */
+enum {
+ SMUX_CH_OPTION_LOCAL_LOOPBACK = 1 << 0,
+ SMUX_CH_OPTION_REMOTE_LOOPBACK = 1 << 1,
+ SMUX_CH_OPTION_REMOTE_TX_STOP = 1 << 2,
+ SMUX_CH_OPTION_AUTO_REMOTE_TX_STOP = 1 << 3,
+};
+
+/**
+ * Metadata for SMUX_DISCONNECTED notification
+ *
+ * @is_ssr: Disconnect caused by subsystem restart
+ */
+struct smux_meta_disconnected {
+ int is_ssr;
+};
+
+/**
+ * Metadata for SMUX_READ_DONE/SMUX_READ_FAIL notification
+ *
+ * @pkt_priv: Packet-specific private data
+ * @buffer: Buffer pointer passed into msm_smux_write
+ * @len: Buffer length passed into msm_smux_write
+ */
+struct smux_meta_read {
+ void *pkt_priv;
+ void *buffer;
+ int len;
+};
+
+/**
+ * Metadata for SMUX_WRITE_DONE/SMUX_WRITE_FAIL notification
+ *
+ * @pkt_priv: Packet-specific private data
+ * @buffer: Buffer pointer returned by get_rx_buffer()
+ * @len: Buffer length returned by get_rx_buffer()
+ */
+struct smux_meta_write {
+ void *pkt_priv;
+ void *buffer;
+ int len;
+};
+
+/**
+ * Metadata for SMUX_TIOCM_UPDATE notification
+ *
+ * @tiocm_old: Previous TIOCM state
+ * @tiocm_new: Current TIOCM state
+ */
+struct smux_meta_tiocm {
+ uint32_t tiocm_old;
+ uint32_t tiocm_new;
+};
+
+#ifdef CONFIG_N_SMUX
+/**
+ * Starts the opening sequence for a logical channel.
+ *
+ * @lcid Logical channel ID
+ * @priv Free for client usage
+ * @notify Event notification function
+ * @get_rx_buffer Function used to provide a receive buffer to SMUX
+ *
+ * @returns 0 for success, <0 otherwise
+ *
+ * The channel must be fully closed (either never previously opened, or
+ * msm_smux_close() has been called and the SMUX_DISCONNECTED notification
+ * has been received).
+ *
+ * Once the remote side is opened, the client will receive a SMUX_CONNECTED
+ * event.
+ */
+int msm_smux_open(uint8_t lcid, void *priv,
+ void (*notify) (void *priv, int event_type, const void *metadata), int (*get_rx_buffer) (void *priv, void **pkt_priv, void **buffer, int size));
+
+/**
+ * Starts the closing sequence for a logical channel.
+ *
+ * @lcid Logical channel ID
+ * @returns 0 for success, <0 otherwise
+ *
+ * Once the close event has been acknowledged by the remote side, the client
+ * will receive a SMUX_DISCONNECTED notification.
+ */
+int msm_smux_close(uint8_t lcid);
+
+/**
+ * Write data to a logical channel.
+ *
+ * @lcid Logical channel ID
+ * @pkt_priv Client data that will be returned with the SMUX_WRITE_DONE or
+ * SMUX_WRITE_FAIL notification.
+ * @data Data to write
+ * @len Length of @data
+ *
+ * @returns 0 for success, <0 otherwise
+ *
+ * Data may be written immediately after msm_smux_open() is called, but
+ * the data will wait in the transmit queue until the channel has been
+ * fully opened.
+ *
+ * Once the data has been written, the client will receive either a completion
+ * (SMUX_WRITE_DONE) or a failure notice (SMUX_WRITE_FAIL).
+ */
+int msm_smux_write(uint8_t lcid, void *pkt_priv, const void *data, int len);
+
+/**
+ * Returns true if the TX queue is currently full (high water mark).
+ *
+ * @lcid Logical channel ID
+ *
+ * @returns 0 if channel is not full; 1 if it is full; < 0 for error
+ */
+int msm_smux_is_ch_full(uint8_t lcid);
+
+/**
+ * Returns true if the TX queue has space for more packets (it is at or
+ * below the low water mark).
+ *
+ * @lcid Logical channel ID
+ *
+ * @returns 0 if channel is above low watermark
+ * 1 if it's at or below the low watermark
+ * < 0 for error
+ */
+int msm_smux_is_ch_low(uint8_t lcid);
+
+/**
+ * Get the TIOCM status bits.
+ *
+ * @lcid Logical channel ID
+ *
+ * @returns >= 0 TIOCM status bits
+ * < 0 Error condition
+ */
+long msm_smux_tiocm_get(uint8_t lcid);
+
+/**
+ * Set/clear the TIOCM status bits.
+ *
+ * @lcid Logical channel ID
+ * @set Bits to set
+ * @clear Bits to clear
+ *
+ * @returns 0 for success; < 0 for failure
+ *
+ * If a bit is specified in both the @set and @clear masks, then the clear bit
+ * definition will dominate and the bit will be cleared.
+ */
+int msm_smux_tiocm_set(uint8_t lcid, uint32_t set, uint32_t clear);
+
+/**
+ * Set or clear channel option using the SMUX_CH_OPTION_* channel
+ * flags.
+ *
+ * @lcid Logical channel ID
+ * @set Options to set
+ * @clear Options to clear
+ *
+ * @returns 0 for success, < 0 for failure
+ */
+int msm_smux_set_ch_option(uint8_t lcid, uint32_t set, uint32_t clear);
+
+#else
+static inline int msm_smux_open(uint8_t lcid, void *priv,
+ void (*notify) (void *priv, int event_type, const void *metadata), int (*get_rx_buffer) (void *priv, void **pkt_priv, void **buffer, int size))
+{
+ return -ENODEV;
+}
+
+static inline int msm_smux_close(uint8_t lcid)
+{
+ return -ENODEV;
+}
+
+static inline int msm_smux_write(uint8_t lcid, void *pkt_priv, const void *data, int len)
+{
+ return -ENODEV;
+}
+
+static inline int msm_smux_is_ch_full(uint8_t lcid)
+{
+ return -ENODEV;
+}
+
+static inline int msm_smux_is_ch_low(uint8_t lcid)
+{
+ return -ENODEV;
+}
+
+static inline long msm_smux_tiocm_get(uint8_t lcid)
+{
+ return 0;
+}
+
+static inline int msm_smux_tiocm_set(uint8_t lcid, uint32_t set, uint32_t clear)
+{
+ return -ENODEV;
+}
+
+static inline int msm_smux_set_ch_option(uint8_t lcid, uint32_t set, uint32_t clear)
+{
+ return -ENODEV;
+}
+
+#endif /* CONFIG_N_SMUX */
+
+#endif /* SMUX_H */
diff --git a/include/linux/tegra_nvavp.h b/include/linux/tegra_nvavp.h
index 7862e9b..c74b645 100644
--- a/include/linux/tegra_nvavp.h
+++ b/include/linux/tegra_nvavp.h
@@ -1,7 +1,7 @@
/*
* include/linux/tegra_nvavp.h
*
- * Copyright (c) 2012-2014, NVIDIA CORPORATION. All rights reserved.
+ * Copyright (c) 2012-2015, NVIDIA CORPORATION. All rights reserved.
*
* This file is licensed under the terms of the GNU General Public License
* version 2. This program is licensed "as is" without any warranty of any
@@ -129,9 +129,10 @@
struct nvavp_map_args)
#define NVAVP_IOCTL_CHANNEL_OPEN _IOR(NVAVP_IOCTL_MAGIC, 0x73, \
struct nvavp_channel_open_args)
+#define NVAVP_IOCTL_VPR_FLOOR_SIZE _IOW(NVAVP_IOCTL_MAGIC, 0x74, __u32)
#define NVAVP_IOCTL_MIN_NR _IOC_NR(NVAVP_IOCTL_SET_NVMAP_FD)
-#define NVAVP_IOCTL_MAX_NR _IOC_NR(NVAVP_IOCTL_CHANNEL_OPEN)
+#define NVAVP_IOCTL_MAX_NR _IOC_NR(NVAVP_IOCTL_VPR_FLOOR_SIZE)
#define NVAVP_IOCTL_CHANNEL_MAX_ARG_SIZE \
sizeof(struct nvavp_pushbuffer_submit_hdr)
diff --git a/include/linux/usb/android_composite.h b/include/linux/usb/android_composite.h
new file mode 100644
index 0000000..e387715
--- /dev/null
+++ b/include/linux/usb/android_composite.h
@@ -0,0 +1,169 @@
+/*
+ * Platform data for Android USB
+ *
+ * Copyright (C) 2008 Google, Inc.
+ * Author: Mike Lockwood <lockwood@android.com>
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+#ifndef __LINUX_USB_ANDROID_H
+#define __LINUX_USB_ANDROID_H
+
+#include <linux/usb/composite.h>
+#include <linux/if_ether.h>
+
+#if 0
+struct android_usb_function {
+ struct list_head list;
+ char *name;
+ int (*bind_config) (struct usb_configuration * c);
+};
+#endif
+
+struct android_usb_product {
+ /* Vendor ID for this set of functions.
+ * Default vendor_id in platform data will be used if this is zero.
+ */
+ __u16 vendor_id;
+
+ /* Default product ID. */
+ __u16 product_id;
+
+ /* List of function names associated with this product.
+ * This is used to compute the USB product ID dynamically
+ * based on which functions are enabled.
+ */
+ int num_functions;
+ char **functions;
+};
+
+struct android_usb_platform_data {
+ /* USB device descriptor fields */
+ __u16 vendor_id;
+
+ /* Default product ID. */
+ __u16 product_id;
+
+ __u16 version;
+
+ char *product_name;
+ char *manufacturer_name;
+ char *serial_number;
+
+ /* List of available USB products.
+ * This is used to compute the USB product ID dynamically
+ * based on which functions are enabled.
+ * if num_products is zero or no match can be found,
+ * we use the default product ID
+ */
+ int num_products;
+ struct android_usb_product *products;
+
+ /* List of all supported USB functions.
+ * This list is used to define the order in which
+ * the functions appear in the configuration's list of USB interfaces.
+ * This is necessary to avoid depending upon the order in which
+ * the individual function drivers are initialized.
+ */
+ int num_functions;
+ char **functions;
+
+ void (*enable_fast_charge) (bool enable);
+ bool RndisDisableMPDecision;
+
+ /* To indicate the GPIO num for USB id
+ */
+ int usb_id_pin_gpio;
+
+ /* For QCT diag
+ */
+ int (*update_pid_and_serial_num) (uint32_t, const char *);
+
+ /* For multiple serial function support
+ * Ex: "tty:serial[,sdio:modem_mdm][,smd:modem]"
+ */
+ char *fserial_init_string;
+
+ /* the ctrl/data interface name for rmnet interface.
+ * format(per port):"ctrl0,data0,ctrl1,data1..."
+ * Ex: "smd,bam" or "hsic,hsic"
+ */
+ char *usb_rmnet_interface;
+ char *usb_diag_interface;
+
+ /* Gadget functions that need to be initialized at boot.
+ */
+ unsigned char diag_init:1;
+ unsigned char modem_init:1;
+ unsigned char rmnet_init:1;
+ unsigned char reserved:5;
+
+ /* ums initial parameters */
+
+ /* number of LUNS */
+ int nluns;
+ /* Bitmap of LUNs that indicates the cdrom disk.
+ * NOTE: only one cdrom disk is supported,
+ * and it must be located at the last LUN. */
+ int cdrom_lun;
+ int cdrom_cttype;
+
+ /* Re-match the product ID.
+ * In some devices, the product id is specified by vendor request.
+ *
+ * @param product_id: the common product id
+ * @param intrsharing: 1 for internet sharing, 0 for internet pass through
+ */
+ int (*match) (int product_id, int intrsharing);
+ /* On some CPU architectures the SFAB frequency is not fixed,
+ * which impacts USB performance; this callback locks the
+ * SFAB clock manually.
+ */
+ void (*sfab_lock) (int lock);
+ u32 swfi_latency;
+
+ bool support_modem;
+ /* Hold a performance lock while adb_read transfers maximal data,
+ * to sustain adb throughput.
+ */
+ int mtp_perf_lock_on;
+};
+
+/* Platform data for "usb_mass_storage" driver. */
+struct usb_mass_storage_platform_data {
+ /* Contains values for the SC_INQUIRY SCSI command. */
+ char *vendor;
+ char *product;
+ int release;
+
+ char can_stall;
+ /* number of LUNS */
+ int nluns;
+};
+
+/* Platform data for USB ethernet driver. */
+struct usb_ether_platform_data {
+ u8 ethaddr[ETH_ALEN];
+ u32 vendorID;
+ const char *vendorDescr;
+};
+
+#if defined(CONFIG_MACH_HOLIDAY)
+extern u8 in_usb_tethering;
+#endif
+int htc_usb_enable_function(char *name, int ebl);
+
+#if 0
+extern void android_register_function(struct android_usb_function *f);
+extern int android_enable_function(struct usb_function *f, int enable);
+#endif
+
+#endif /* __LINUX_USB_ANDROID_H */
diff --git a/include/media/drv201.h b/include/media/drv201.h
new file mode 100644
index 0000000..ed0bc7c
--- /dev/null
+++ b/include/media/drv201.h
@@ -0,0 +1,52 @@
+/*
+ * Copyright (c) 2011-2013 NVIDIA Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __DRV201_H__
+#define __DRV201_H__
+
+#include <linux/miscdevice.h>
+#include <media/nvc_focus.h>
+#include <media/nvc.h>
+
+struct drv201_power_rail {
+ struct regulator *vdd;
+ struct regulator *vdd_i2c;
+};
+
+struct drv201_platform_data {
+ int cfg;
+ int num;
+ int sync;
+ const char *dev_name;
+ struct nvc_focus_nvc *nvc;
+ struct nvc_focus_cap *cap;
+ int gpio_count;
+ struct nvc_gpio_pdata *gpio;
+ int (*power_on)(struct drv201_power_rail *pw);
+ int (*power_off)(struct drv201_power_rail *pw);
+};
+
+/* Register Definitions */
+#define CONTROL 0x02
+#define VCM_CODE_MSB 0x03
+#define VCM_CODE_LSB 0x04
+#define STATUS 0x05
+#define MODE 0x06
+#define VCM_FREQ 0x07
+
+
+#endif /* __DRV201_H__ */
diff --git a/include/media/imx219.h b/include/media/imx219.h
new file mode 100644
index 0000000..87b1923
--- /dev/null
+++ b/include/media/imx219.h
@@ -0,0 +1,76 @@
+/**
+ * Copyright (c) 2012-2013, NVIDIA Corporation. All rights reserved.
+ *
+ * NVIDIA Corporation and its licensors retain all intellectual property
+ * and proprietary rights in and to this software and related documentation
+ * and any modifications thereto. Any use, reproduction, disclosure or
+ * distribution of this software and related documentation without an express
+ * license agreement from NVIDIA Corporation is strictly prohibited.
+ */
+
+#ifndef __IMX219_H__
+#define __IMX219_H__
+
+#include <linux/ioctl.h> /* For IOCTL macros */
+#include <media/nvc.h>
+#include <media/nvc_image.h>
+
+#define IMX219_IOCTL_SET_MODE _IOW('o', 1, struct imx219_mode)
+#define IMX219_IOCTL_GET_STATUS _IOR('o', 2, __u8)
+#define IMX219_IOCTL_SET_FRAME_LENGTH _IOW('o', 3, __u32)
+#define IMX219_IOCTL_SET_COARSE_TIME _IOW('o', 4, __u32)
+#define IMX219_IOCTL_SET_GAIN _IOW('o', 5, __u16)
+#define IMX219_IOCTL_GET_FUSEID _IOR('o', 6, struct nvc_fuseid)
+#define IMX219_IOCTL_SET_GROUP_HOLD _IOW('o', 7, struct imx219_ae)
+#define IMX219_IOCTL_GET_AFDAT _IOR('o', 8, __u32)
+#define IMX219_IOCTL_SET_POWER _IOW('o', 20, __u32)
+#define IMX219_IOCTL_GET_FLASH_CAP _IOR('o', 30, __u32)
+#define IMX219_IOCTL_SET_FLASH_MODE _IOW('o', 31, struct imx219_flash_control)
+
+struct imx219_gain {
+ __u16 again;
+ __u8 dgain_upper;
+ __u8 dgain_lower;
+};
+
+struct imx219_mode {
+ int xres;
+ int yres;
+ __u32 frame_length;
+ __u32 coarse_time;
+ struct imx219_gain gain;
+};
+
+struct imx219_ae {
+ __u32 frame_length;
+ __u8 frame_length_enable;
+ __u32 coarse_time;
+ __u8 coarse_time_enable;
+ struct imx219_gain gain;
+ __u8 gain_enable;
+};
+
+struct imx219_flash_control {
+ u8 enable;
+ u8 edge_trig_en;
+ u8 start_edge;
+ u8 repeat;
+ u16 delay_frm;
+};
+
+#ifdef __KERNEL__
+struct imx219_power_rail {
+ struct regulator *dvdd;
+ struct regulator *avdd;
+ struct regulator *iovdd;
+};
+
+struct imx219_platform_data {
+ struct imx219_flash_control flash_cap;
+ const char *mclk_name; /* NULL to use the default "default_mclk" */
+ int (*power_on)(struct imx219_power_rail *pw);
+ int (*power_off)(struct imx219_power_rail *pw);
+};
+#endif /* __KERNEL__ */
+
+#endif /* __IMX219_H__ */
diff --git a/include/media/ov9760.h b/include/media/ov9760.h
new file mode 100644
index 0000000..c7a2855
--- /dev/null
+++ b/include/media/ov9760.h
@@ -0,0 +1,61 @@
+/**
+ * Copyright (c) 2012-2013 NVIDIA CORPORATION. All rights reserved.
+ *
+ * NVIDIA Corporation and its licensors retain all intellectual property
+ * and proprietary rights in and to this software and related documentation
+ * and any modifications thereto. Any use, reproduction, disclosure or
+ * distribution of this software and related documentation without an express
+ * license agreement from NVIDIA Corporation is strictly prohibited.
+ */
+
+#ifndef __OV9760_H__
+#define __OV9760_H__
+
+#include <linux/ioctl.h>
+
+#define OV9760_IOCTL_SET_MODE _IOW('o', 1, struct ov9760_mode)
+#define OV9760_IOCTL_SET_FRAME_LENGTH _IOW('o', 2, __u32)
+#define OV9760_IOCTL_SET_COARSE_TIME _IOW('o', 3, __u32)
+#define OV9760_IOCTL_SET_GAIN _IOW('o', 4, __u16)
+#define OV9760_IOCTL_GET_STATUS _IOR('o', 5, __u8)
+#define OV9760_IOCTL_SET_GROUP_HOLD _IOW('o', 6, struct ov9760_ae)
+#define OV9760_IOCTL_GET_FUSEID _IOR('o', 7, struct ov9760_sensordata)
+
+struct ov9760_sensordata {
+ __u32 fuse_id_size;
+ __u8 fuse_id[16];
+};
+
+struct ov9760_mode {
+ int xres;
+ int yres;
+ __u32 frame_length;
+ __u32 coarse_time;
+ __u16 gain;
+};
+
+struct ov9760_ae {
+ __u32 frame_length;
+ __u8 frame_length_enable;
+ __u32 coarse_time;
+ __u8 coarse_time_enable;
+ __s32 gain;
+ __u8 gain_enable;
+};
+
+#ifdef __KERNEL__
+struct ov9760_power_rail {
+ struct regulator *dvdd;
+ struct regulator *avdd;
+ struct regulator *iovdd;
+};
+
+struct ov9760_platform_data {
+ int (*power_on)(struct ov9760_power_rail *pw);
+ int (*power_off)(struct ov9760_power_rail *pw);
+ const char *mclk_name;
+};
+#endif /* __KERNEL__ */
+
+#endif /* __OV9760_H__ */
+
diff --git a/include/media/tps61310.h b/include/media/tps61310.h
new file mode 100644
index 0000000..823d717
--- /dev/null
+++ b/include/media/tps61310.h
@@ -0,0 +1,36 @@
+/* Copyright (C) 2011 NVIDIA Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
+ * 02111-1307, USA
+ */
+
+#ifndef __TPS61310_H__
+#define __TPS61310_H__
+
+#include <media/nvc_torch.h>
+
+#define TPS61310_MAX_TORCH_LEVEL 7
+#define TPS61310_MAX_FLASH_LEVEL 15
+
+struct tps61310_platform_data {
+ unsigned cfg; /* use the NVC_CFG_ defines */
+ unsigned num; /* see implementation notes in driver */
+ unsigned sync; /* see implementation notes in driver */
+ const char *dev_name; /* see implementation notes in driver */
+ struct nvc_torch_pin_state (*pinstate); /* see notes in driver */
+ unsigned max_amp_torch; /* see implementation notes in driver */
+ unsigned max_amp_flash; /* see implementation notes in driver */
+};
+
+#endif /* __TPS61310_H__ */
diff --git a/include/sound/soc-dpcm.h b/include/sound/soc-dpcm.h
index 603a2f3..f99103d 100644
--- a/include/sound/soc-dpcm.h
+++ b/include/sound/soc-dpcm.h
@@ -102,6 +102,8 @@
/* state and update */
enum snd_soc_dpcm_update runtime_update;
enum snd_soc_dpcm_state state;
+
+ int trigger_pending; /* trigger cmd + 1 if pending, 0 if not */
};
/* can this BE stop and free */
diff --git a/include/sound/soc.h b/include/sound/soc.h
index 5bbdc65..745f3b1 100644
--- a/include/sound/soc.h
+++ b/include/sound/soc.h
@@ -652,7 +652,7 @@
int (*startup)(struct snd_compr_stream *);
void (*shutdown)(struct snd_compr_stream *);
int (*set_params)(struct snd_compr_stream *);
- int (*trigger)(struct snd_compr_stream *);
+ int (*trigger)(struct snd_compr_stream *, int);
};
/* SoC cache ops */
diff --git a/include/trace/events/f2fs.h b/include/trace/events/f2fs.h
index 52ae548..e86c1ea 100644
--- a/include/trace/events/f2fs.h
+++ b/include/trace/events/f2fs.h
@@ -16,15 +16,28 @@
{ META, "META" }, \
{ META_FLUSH, "META_FLUSH" })
-#define show_bio_type(type) \
- __print_symbolic(type, \
- { READ, "READ" }, \
- { READA, "READAHEAD" }, \
- { READ_SYNC, "READ_SYNC" }, \
- { WRITE, "WRITE" }, \
- { WRITE_SYNC, "WRITE_SYNC" }, \
- { WRITE_FLUSH, "WRITE_FLUSH" }, \
- { WRITE_FUA, "WRITE_FUA" })
+#define F2FS_BIO_MASK(t) (t & (READA | WRITE_FLUSH_FUA))
+#define F2FS_BIO_EXTRA_MASK(t) (t & (REQ_META | REQ_PRIO))
+
+#define show_bio_type(type) show_bio_base(type), show_bio_extra(type)
+
+#define show_bio_base(type) \
+ __print_symbolic(F2FS_BIO_MASK(type), \
+ { READ, "READ" }, \
+ { READA, "READAHEAD" }, \
+ { READ_SYNC, "READ_SYNC" }, \
+ { WRITE, "WRITE" }, \
+ { WRITE_SYNC, "WRITE_SYNC" }, \
+ { WRITE_FLUSH, "WRITE_FLUSH" }, \
+ { WRITE_FUA, "WRITE_FUA" }, \
+ { WRITE_FLUSH_FUA, "WRITE_FLUSH_FUA" })
+
+#define show_bio_extra(type) \
+ __print_symbolic(F2FS_BIO_EXTRA_MASK(type), \
+ { REQ_META, "(M)" }, \
+ { REQ_PRIO, "(P)" }, \
+ { REQ_META | REQ_PRIO, "(MP)" }, \
+ { 0, " \b" })
#define show_data_type(type) \
__print_symbolic(type, \
@@ -36,6 +49,11 @@
{ CURSEG_COLD_NODE, "Cold NODE" }, \
{ NO_CHECK_TYPE, "No TYPE" })
+#define show_file_type(type) \
+ __print_symbolic(type, \
+ { 0, "FILE" }, \
+ { 1, "DIR" })
+
#define show_gc_type(type) \
__print_symbolic(type, \
{ FG_GC, "Foreground GC" }, \
@@ -51,6 +69,12 @@
{ GC_GREEDY, "Greedy" }, \
{ GC_CB, "Cost-Benefit" })
+#define show_cpreason(type) \
+ __print_symbolic(type, \
+ { CP_UMOUNT, "Umount" }, \
+ { CP_SYNC, "Sync" }, \
+ { CP_DISCARD, "Discard" })
+
struct victim_sel_policy;
DECLARE_EVENT_CLASS(f2fs__inode,
@@ -416,7 +440,7 @@
__entry->err)
);
-TRACE_EVENT_CONDITION(f2fs_readpage,
+TRACE_EVENT_CONDITION(f2fs_submit_page_bio,
TP_PROTO(struct page *page, sector_t blkaddr, int type),
@@ -441,7 +465,7 @@
),
TP_printk("dev = (%d,%d), ino = %lu, page_index = 0x%lx, "
- "blkaddr = 0x%llx, bio_type = %s",
+ "blkaddr = 0x%llx, bio_type = %s%s",
show_dev_ino(__entry),
(unsigned long)__entry->index,
(unsigned long long)__entry->blkaddr,
@@ -569,6 +593,69 @@
__entry->ret)
);
+TRACE_EVENT(f2fs_direct_IO_enter,
+
+ TP_PROTO(struct inode *inode, loff_t offset, unsigned long len, int rw),
+
+ TP_ARGS(inode, offset, len, rw),
+
+ TP_STRUCT__entry(
+ __field(dev_t, dev)
+ __field(ino_t, ino)
+ __field(loff_t, pos)
+ __field(unsigned long, len)
+ __field(int, rw)
+ ),
+
+ TP_fast_assign(
+ __entry->dev = inode->i_sb->s_dev;
+ __entry->ino = inode->i_ino;
+ __entry->pos = offset;
+ __entry->len = len;
+ __entry->rw = rw;
+ ),
+
+ TP_printk("dev = (%d,%d), ino = %lu pos = %lld len = %lu rw = %d",
+ show_dev_ino(__entry),
+ __entry->pos,
+ __entry->len,
+ __entry->rw)
+);
+
+TRACE_EVENT(f2fs_direct_IO_exit,
+
+ TP_PROTO(struct inode *inode, loff_t offset, unsigned long len,
+ int rw, int ret),
+
+ TP_ARGS(inode, offset, len, rw, ret),
+
+ TP_STRUCT__entry(
+ __field(dev_t, dev)
+ __field(ino_t, ino)
+ __field(loff_t, pos)
+ __field(unsigned long, len)
+ __field(int, rw)
+ __field(int, ret)
+ ),
+
+ TP_fast_assign(
+ __entry->dev = inode->i_sb->s_dev;
+ __entry->ino = inode->i_ino;
+ __entry->pos = offset;
+ __entry->len = len;
+ __entry->rw = rw;
+ __entry->ret = ret;
+ ),
+
+ TP_printk("dev = (%d,%d), ino = %lu pos = %lld len = %lu "
+ "rw = %d ret = %d",
+ show_dev_ino(__entry),
+ __entry->pos,
+ __entry->len,
+ __entry->rw,
+ __entry->ret)
+);
+
TRACE_EVENT(f2fs_reserve_new_block,
TP_PROTO(struct inode *inode, nid_t nid, unsigned int ofs_in_node),
@@ -593,45 +680,249 @@
__entry->ofs_in_node)
);
-TRACE_EVENT(f2fs_do_submit_bio,
+DECLARE_EVENT_CLASS(f2fs__submit_bio,
- TP_PROTO(struct super_block *sb, int btype, bool sync, struct bio *bio),
+ TP_PROTO(struct super_block *sb, int rw, int type, struct bio *bio),
- TP_ARGS(sb, btype, sync, bio),
+ TP_ARGS(sb, rw, type, bio),
TP_STRUCT__entry(
__field(dev_t, dev)
- __field(int, btype)
- __field(bool, sync)
+ __field(int, rw)
+ __field(int, type)
__field(sector_t, sector)
__field(unsigned int, size)
),
TP_fast_assign(
__entry->dev = sb->s_dev;
- __entry->btype = btype;
- __entry->sync = sync;
+ __entry->rw = rw;
+ __entry->type = type;
__entry->sector = bio->bi_sector;
__entry->size = bio->bi_size;
),
- TP_printk("dev = (%d,%d), type = %s, io = %s, sector = %lld, size = %u",
+ TP_printk("dev = (%d,%d), %s%s, %s, sector = %lld, size = %u",
show_dev(__entry),
- show_block_type(__entry->btype),
- __entry->sync ? "sync" : "no sync",
+ show_bio_type(__entry->rw),
+ show_block_type(__entry->type),
(unsigned long long)__entry->sector,
__entry->size)
);
-TRACE_EVENT(f2fs_submit_write_page,
+DEFINE_EVENT_CONDITION(f2fs__submit_bio, f2fs_submit_write_bio,
- TP_PROTO(struct page *page, block_t blk_addr, int type),
+ TP_PROTO(struct super_block *sb, int rw, int type, struct bio *bio),
- TP_ARGS(page, blk_addr, type),
+ TP_ARGS(sb, rw, type, bio),
+
+ TP_CONDITION(bio)
+);
+
+DEFINE_EVENT_CONDITION(f2fs__submit_bio, f2fs_submit_read_bio,
+
+ TP_PROTO(struct super_block *sb, int rw, int type, struct bio *bio),
+
+ TP_ARGS(sb, rw, type, bio),
+
+ TP_CONDITION(bio)
+);
+
+TRACE_EVENT(f2fs_write_begin,
+
+ TP_PROTO(struct inode *inode, loff_t pos, unsigned int len,
+ unsigned int flags),
+
+ TP_ARGS(inode, pos, len, flags),
TP_STRUCT__entry(
__field(dev_t, dev)
__field(ino_t, ino)
+ __field(loff_t, pos)
+ __field(unsigned int, len)
+ __field(unsigned int, flags)
+ ),
+
+ TP_fast_assign(
+ __entry->dev = inode->i_sb->s_dev;
+ __entry->ino = inode->i_ino;
+ __entry->pos = pos;
+ __entry->len = len;
+ __entry->flags = flags;
+ ),
+
+ TP_printk("dev = (%d,%d), ino = %lu, pos = %llu, len = %u, flags = %u",
+ show_dev_ino(__entry),
+ (unsigned long long)__entry->pos,
+ __entry->len,
+ __entry->flags)
+);
+
+TRACE_EVENT(f2fs_write_end,
+
+ TP_PROTO(struct inode *inode, loff_t pos, unsigned int len,
+ unsigned int copied),
+
+ TP_ARGS(inode, pos, len, copied),
+
+ TP_STRUCT__entry(
+ __field(dev_t, dev)
+ __field(ino_t, ino)
+ __field(loff_t, pos)
+ __field(unsigned int, len)
+ __field(unsigned int, copied)
+ ),
+
+ TP_fast_assign(
+ __entry->dev = inode->i_sb->s_dev;
+ __entry->ino = inode->i_ino;
+ __entry->pos = pos;
+ __entry->len = len;
+ __entry->copied = copied;
+ ),
+
+ TP_printk("dev = (%d,%d), ino = %lu, pos = %llu, len = %u, copied = %u",
+ show_dev_ino(__entry),
+ (unsigned long long)__entry->pos,
+ __entry->len,
+ __entry->copied)
+);
+
+DECLARE_EVENT_CLASS(f2fs__page,
+
+ TP_PROTO(struct page *page, int type),
+
+ TP_ARGS(page, type),
+
+ TP_STRUCT__entry(
+ __field(dev_t, dev)
+ __field(ino_t, ino)
+ __field(int, type)
+ __field(int, dir)
+ __field(pgoff_t, index)
+ __field(int, dirty)
+ __field(int, uptodate)
+ ),
+
+ TP_fast_assign(
+ __entry->dev = page->mapping->host->i_sb->s_dev;
+ __entry->ino = page->mapping->host->i_ino;
+ __entry->type = type;
+ __entry->dir = S_ISDIR(page->mapping->host->i_mode);
+ __entry->index = page->index;
+ __entry->dirty = PageDirty(page);
+ __entry->uptodate = PageUptodate(page);
+ ),
+
+ TP_printk("dev = (%d,%d), ino = %lu, %s, %s, index = %lu, "
+ "dirty = %d, uptodate = %d",
+ show_dev_ino(__entry),
+ show_block_type(__entry->type),
+ show_file_type(__entry->dir),
+ (unsigned long)__entry->index,
+ __entry->dirty,
+ __entry->uptodate)
+);
+
+DEFINE_EVENT(f2fs__page, f2fs_writepage,
+
+ TP_PROTO(struct page *page, int type),
+
+ TP_ARGS(page, type)
+);
+
+DEFINE_EVENT(f2fs__page, f2fs_readpage,
+
+ TP_PROTO(struct page *page, int type),
+
+ TP_ARGS(page, type)
+);
+
+DEFINE_EVENT(f2fs__page, f2fs_set_page_dirty,
+
+ TP_PROTO(struct page *page, int type),
+
+ TP_ARGS(page, type)
+);
+
+DEFINE_EVENT(f2fs__page, f2fs_vm_page_mkwrite,
+
+ TP_PROTO(struct page *page, int type),
+
+ TP_ARGS(page, type)
+);
+
+TRACE_EVENT(f2fs_writepages,
+
+ TP_PROTO(struct inode *inode, struct writeback_control *wbc, int type),
+
+ TP_ARGS(inode, wbc, type),
+
+ TP_STRUCT__entry(
+ __field(dev_t, dev)
+ __field(ino_t, ino)
+ __field(int, type)
+ __field(int, dir)
+ __field(long, nr_to_write)
+ __field(long, pages_skipped)
+ __field(loff_t, range_start)
+ __field(loff_t, range_end)
+ __field(pgoff_t, writeback_index)
+ __field(int, sync_mode)
+ __field(char, for_kupdate)
+ __field(char, for_background)
+ __field(char, tagged_writepages)
+ __field(char, for_reclaim)
+ __field(char, range_cyclic)
+ ),
+
+ TP_fast_assign(
+ __entry->dev = inode->i_sb->s_dev;
+ __entry->ino = inode->i_ino;
+ __entry->type = type;
+ __entry->dir = S_ISDIR(inode->i_mode);
+ __entry->nr_to_write = wbc->nr_to_write;
+ __entry->pages_skipped = wbc->pages_skipped;
+ __entry->range_start = wbc->range_start;
+ __entry->range_end = wbc->range_end;
+ __entry->writeback_index = inode->i_mapping->writeback_index;
+ __entry->sync_mode = wbc->sync_mode;
+ __entry->for_kupdate = wbc->for_kupdate;
+ __entry->for_background = wbc->for_background;
+ __entry->tagged_writepages = wbc->tagged_writepages;
+ __entry->for_reclaim = wbc->for_reclaim;
+ __entry->range_cyclic = wbc->range_cyclic;
+ ),
+
+ TP_printk("dev = (%d,%d), ino = %lu, %s, %s, nr_to_write %ld, "
+ "skipped %ld, start %lld, end %lld, wb_idx %lu, sync_mode %d, "
+ "kupdate %u background %u tagged %u reclaim %u cyclic %u",
+ show_dev_ino(__entry),
+ show_block_type(__entry->type),
+ show_file_type(__entry->dir),
+ __entry->nr_to_write,
+ __entry->pages_skipped,
+ __entry->range_start,
+ __entry->range_end,
+ (unsigned long)__entry->writeback_index,
+ __entry->sync_mode,
+ __entry->for_kupdate,
+ __entry->for_background,
+ __entry->tagged_writepages,
+ __entry->for_reclaim,
+ __entry->range_cyclic)
+);
+
+TRACE_EVENT(f2fs_submit_page_mbio,
+
+ TP_PROTO(struct page *page, int rw, int type, block_t blk_addr),
+
+ TP_ARGS(page, rw, type, blk_addr),
+
+ TP_STRUCT__entry(
+ __field(dev_t, dev)
+ __field(ino_t, ino)
+ __field(int, rw)
__field(int, type)
__field(pgoff_t, index)
__field(block_t, block)
@@ -640,13 +931,15 @@
TP_fast_assign(
__entry->dev = page->mapping->host->i_sb->s_dev;
__entry->ino = page->mapping->host->i_ino;
+ __entry->rw = rw;
__entry->type = type;
__entry->index = page->index;
__entry->block = blk_addr;
),
- TP_printk("dev = (%d,%d), ino = %lu, %s, index = %lu, blkaddr = 0x%llx",
+ TP_printk("dev = (%d,%d), ino = %lu, %s%s, %s, index = %lu, blkaddr = 0x%llx",
show_dev_ino(__entry),
+ show_bio_type(__entry->rw),
show_block_type(__entry->type),
(unsigned long)__entry->index,
(unsigned long long)__entry->block)
@@ -654,28 +947,75 @@
TRACE_EVENT(f2fs_write_checkpoint,
- TP_PROTO(struct super_block *sb, bool is_umount, char *msg),
+ TP_PROTO(struct super_block *sb, int reason, char *msg),
- TP_ARGS(sb, is_umount, msg),
+ TP_ARGS(sb, reason, msg),
TP_STRUCT__entry(
__field(dev_t, dev)
- __field(bool, is_umount)
+ __field(int, reason)
__field(char *, msg)
),
TP_fast_assign(
__entry->dev = sb->s_dev;
- __entry->is_umount = is_umount;
+ __entry->reason = reason;
__entry->msg = msg;
),
TP_printk("dev = (%d,%d), checkpoint for %s, state = %s",
show_dev(__entry),
- __entry->is_umount ? "clean umount" : "consistency",
+ show_cpreason(__entry->reason),
__entry->msg)
);
+TRACE_EVENT(f2fs_issue_discard,
+
+ TP_PROTO(struct super_block *sb, block_t blkstart, block_t blklen),
+
+ TP_ARGS(sb, blkstart, blklen),
+
+ TP_STRUCT__entry(
+ __field(dev_t, dev)
+ __field(block_t, blkstart)
+ __field(block_t, blklen)
+ ),
+
+ TP_fast_assign(
+ __entry->dev = sb->s_dev;
+ __entry->blkstart = blkstart;
+ __entry->blklen = blklen;
+ ),
+
+ TP_printk("dev = (%d,%d), blkstart = 0x%llx, blklen = 0x%llx",
+ show_dev(__entry),
+ (unsigned long long)__entry->blkstart,
+ (unsigned long long)__entry->blklen)
+);
+
+TRACE_EVENT(f2fs_issue_flush,
+
+ TP_PROTO(struct super_block *sb, bool nobarrier, bool flush_merge),
+
+ TP_ARGS(sb, nobarrier, flush_merge),
+
+ TP_STRUCT__entry(
+ __field(dev_t, dev)
+ __field(bool, nobarrier)
+ __field(bool, flush_merge)
+ ),
+
+ TP_fast_assign(
+ __entry->dev = sb->s_dev;
+ __entry->nobarrier = nobarrier;
+ __entry->flush_merge = flush_merge;
+ ),
+
+ TP_printk("dev = (%d,%d), %s %s",
+ show_dev(__entry),
+ __entry->nobarrier ? "skip (nobarrier)" : "issue",
+ __entry->flush_merge ? " with flush_merge" : "")
+);
#endif /* _TRACE_F2FS_H */
/* This part must be outside protection */
diff --git a/include/uapi/asm-generic/fcntl.h b/include/uapi/asm-generic/fcntl.h
index a48937d..06632be 100644
--- a/include/uapi/asm-generic/fcntl.h
+++ b/include/uapi/asm-generic/fcntl.h
@@ -84,6 +84,10 @@
#define O_PATH 010000000
#endif
+#ifndef O_TMPFILE
+#define O_TMPFILE 020000000
+#endif
+
#ifndef O_NDELAY
#define O_NDELAY O_NONBLOCK
#endif
diff --git a/include/uapi/linux/coresight-stm.h b/include/uapi/linux/coresight-stm.h
new file mode 100644
index 0000000..f50b855
--- /dev/null
+++ b/include/uapi/linux/coresight-stm.h
@@ -0,0 +1,21 @@
+#ifndef __UAPI_CORESIGHT_STM_H_
+#define __UAPI_CORESIGHT_STM_H_
+
+enum {
+ OST_ENTITY_NONE = 0x00,
+ OST_ENTITY_FTRACE_EVENTS = 0x01,
+ OST_ENTITY_TRACE_PRINTK = 0x02,
+ OST_ENTITY_TRACE_MARKER = 0x04,
+ OST_ENTITY_DEV_NODE = 0x08,
+ OST_ENTITY_DIAG = 0xEE,
+ OST_ENTITY_QVIEW = 0xFE,
+ OST_ENTITY_MAX = 0xFF,
+};
+
+enum {
+ STM_OPTION_NONE = 0x0,
+ STM_OPTION_TIMESTAMPED = 0x08,
+ STM_OPTION_GUARANTEED = 0x80,
+};
+
+#endif
diff --git a/include/uapi/linux/if_arp.h b/include/uapi/linux/if_arp.h
index 1ddf65d1..fb86942 100644
--- a/include/uapi/linux/if_arp.h
+++ b/include/uapi/linux/if_arp.h
@@ -59,7 +59,7 @@
#define ARPHRD_LAPB 516 /* LAPB */
#define ARPHRD_DDCMP 517 /* Digital's DDCMP protocol */
#define ARPHRD_RAWHDLC 518 /* Raw HDLC */
-
+#define ARPHRD_RAWIP 530 /* Raw IP */
#define ARPHRD_TUNNEL 768 /* IPIP tunnel */
#define ARPHRD_TUNNEL6 769 /* IP6IP6 tunnel */
#define ARPHRD_FRAD 770 /* Frame Relay Access Device */
diff --git a/include/uapi/linux/if_ether.h b/include/uapi/linux/if_ether.h
index c7e57ee..b73723c 100644
--- a/include/uapi/linux/if_ether.h
+++ b/include/uapi/linux/if_ether.h
@@ -93,7 +93,7 @@
#define ETH_P_QINQ3 0x9300 /* deprecated QinQ VLAN [ NOT AN OFFICIALLY REGISTERED ID ] */
#define ETH_P_EDSA 0xDADA /* Ethertype DSA [ NOT AN OFFICIALLY REGISTERED ID ] */
#define ETH_P_AF_IUCV 0xFBFB /* IBM af_iucv [ NOT AN OFFICIALLY REGISTERED ID ] */
-
+#define ETH_P_MAP 0xDA1A /* Multiplexing and Aggregation Protocol [ NOT AN OFFICIALLY REGISTERED ID ] */
#define ETH_P_802_3_MIN 0x0600 /* If the value in the ethernet type is less than this value
* then the frame is Ethernet II. Else it is 802.3 */
diff --git a/include/uapi/linux/msm_rmnet.h b/include/uapi/linux/msm_rmnet.h
new file mode 100644
index 0000000..a964fff
--- /dev/null
+++ b/include/uapi/linux/msm_rmnet.h
@@ -0,0 +1,158 @@
+/* Copyright (c) 2010, Code Aurora Forum. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef _UAPI_MSM_RMNET_H_
+#define _UAPI_MSM_RMNET_H_
+
+/* Bitmap macros for RmNET driver operation mode. */
+#define RMNET_MODE_NONE (0x00)
+#define RMNET_MODE_LLP_ETH (0x01)
+#define RMNET_MODE_LLP_IP (0x02)
+#define RMNET_MODE_QOS (0x04)
+#define RMNET_MODE_MASK (RMNET_MODE_LLP_ETH | \
+ RMNET_MODE_LLP_IP | \
+ RMNET_MODE_QOS)
+
+#define RMNET_IS_MODE_QOS(mode) \
+ ((mode & RMNET_MODE_QOS) == RMNET_MODE_QOS)
+#define RMNET_IS_MODE_IP(mode) \
+ ((mode & RMNET_MODE_LLP_IP) == RMNET_MODE_LLP_IP)
+
+/* IOCTL command enum
+ * Values chosen to not conflict with other drivers in the ecosystem */
+enum rmnet_ioctl_cmds_e {
+ RMNET_IOCTL_SET_LLP_ETHERNET = 0x000089F1, /* Set Ethernet protocol */
+ RMNET_IOCTL_SET_LLP_IP = 0x000089F2, /* Set RAWIP protocol */
+ RMNET_IOCTL_GET_LLP = 0x000089F3, /* Get link protocol */
+ RMNET_IOCTL_SET_QOS_ENABLE = 0x000089F4, /* Set QoS header enabled */
+ RMNET_IOCTL_SET_QOS_DISABLE = 0x000089F5, /* Set QoS header disabled*/
+ RMNET_IOCTL_GET_QOS = 0x000089F6, /* Get QoS header state */
+ RMNET_IOCTL_GET_OPMODE = 0x000089F7, /* Get operation mode */
+ RMNET_IOCTL_OPEN = 0x000089F8, /* Open transport port */
+ RMNET_IOCTL_CLOSE = 0x000089F9, /* Close transport port */
+ RMNET_IOCTL_FLOW_ENABLE = 0x000089FA, /* Flow enable */
+ RMNET_IOCTL_FLOW_DISABLE = 0x000089FB, /* Flow disable */
+ RMNET_IOCTL_FLOW_SET_HNDL = 0x000089FC, /* Set flow handle */
+ RMNET_IOCTL_EXTENDED = 0x000089FD, /* Extended IOCTLs */
+ RMNET_IOCTL_MAX
+};
+
+enum rmnet_ioctl_extended_cmds_e {
+/* RmNet Data Required IOCTLs */
+ RMNET_IOCTL_GET_SUPPORTED_FEATURES = 0x0000, /* Get features */
+ RMNET_IOCTL_SET_MRU = 0x0001, /* Set MRU */
+ RMNET_IOCTL_GET_MRU = 0x0002, /* Get MRU */
+ RMNET_IOCTL_GET_EPID = 0x0003, /* Get endpoint ID */
+ RMNET_IOCTL_GET_DRIVER_NAME = 0x0004, /* Get driver name */
+ RMNET_IOCTL_ADD_MUX_CHANNEL = 0x0005, /* Add MUX ID */
+ RMNET_IOCTL_SET_EGRESS_DATA_FORMAT = 0x0006, /* Set EDF */
+ RMNET_IOCTL_SET_INGRESS_DATA_FORMAT = 0x0007, /* Set IDF */
+ RMNET_IOCTL_SET_AGGREGATION_COUNT = 0x0008, /* Set agg count */
+ RMNET_IOCTL_GET_AGGREGATION_COUNT = 0x0009, /* Get agg count */
+ RMNET_IOCTL_SET_AGGREGATION_SIZE = 0x000A, /* Set agg size */
+ RMNET_IOCTL_GET_AGGREGATION_SIZE = 0x000B, /* Get agg size */
+ RMNET_IOCTL_FLOW_CONTROL = 0x000C, /* Do flow control */
+ RMNET_IOCTL_GET_DFLT_CONTROL_CHANNEL = 0x000D, /* For legacy use */
+ RMNET_IOCTL_GET_HWSW_MAP = 0x000E, /* Get HW/SW map */
+ RMNET_IOCTL_SET_RX_HEADROOM = 0x000F, /* RX Headroom */
+ RMNET_IOCTL_GET_EP_PAIR = 0x0010, /* Endpoint pair */
+ RMNET_IOCTL_SET_QOS_VERSION = 0x0011, /* 8/6 byte QoS hdr*/
+ RMNET_IOCTL_GET_QOS_VERSION = 0x0012, /* 8/6 byte QoS hdr*/
+ RMNET_IOCTL_GET_SUPPORTED_QOS_MODES = 0x0013, /* Get QoS modes */
+ RMNET_IOCTL_EXTENDED_MAX = 0x0014
+};
+
+/* Return values for the RMNET_IOCTL_GET_SUPPORTED_FEATURES IOCTL */
+#define RMNET_IOCTL_FEAT_NOTIFY_MUX_CHANNEL (1<<0)
+#define RMNET_IOCTL_FEAT_SET_EGRESS_DATA_FORMAT (1<<1)
+#define RMNET_IOCTL_FEAT_SET_INGRESS_DATA_FORMAT (1<<2)
+#define RMNET_IOCTL_FEAT_SET_AGGREGATION_COUNT (1<<3)
+#define RMNET_IOCTL_FEAT_GET_AGGREGATION_COUNT (1<<4)
+#define RMNET_IOCTL_FEAT_SET_AGGREGATION_SIZE (1<<5)
+#define RMNET_IOCTL_FEAT_GET_AGGREGATION_SIZE (1<<6)
+#define RMNET_IOCTL_FEAT_FLOW_CONTROL (1<<7)
+#define RMNET_IOCTL_FEAT_GET_DFLT_CONTROL_CHANNEL (1<<8)
+#define RMNET_IOCTL_FEAT_GET_HWSW_MAP (1<<9)
+
+/* Input values for the RMNET_IOCTL_SET_EGRESS_DATA_FORMAT IOCTL */
+#define RMNET_IOCTL_EGRESS_FORMAT_MAP (1<<1)
+#define RMNET_IOCTL_EGRESS_FORMAT_AGGREGATION (1<<2)
+#define RMNET_IOCTL_EGRESS_FORMAT_MUXING (1<<3)
+#define RMNET_IOCTL_EGRESS_FORMAT_CHECKSUM (1<<4)
+
+/* Input values for the RMNET_IOCTL_SET_INGRESS_DATA_FORMAT IOCTL */
+#define RMNET_IOCTL_INGRESS_FORMAT_MAP (1<<1)
+#define RMNET_IOCTL_INGRESS_FORMAT_DEAGGREGATION (1<<2)
+#define RMNET_IOCTL_INGRESS_FORMAT_DEMUXING (1<<3)
+#define RMNET_IOCTL_INGRESS_FORMAT_CHECKSUM (1<<4)
+
+/* User space may not have this defined. */
+#ifndef IFNAMSIZ
+#define IFNAMSIZ 16
+#endif
+
+struct rmnet_ioctl_extended_s {
+ uint32_t extended_ioctl;
+ union {
+ uint32_t data; /* Generic data field for most extended IOCTLs */
+
+ /* Return values for
+ * RMNET_IOCTL_GET_DRIVER_NAME
+ * RMNET_IOCTL_GET_DFLT_CONTROL_CHANNEL */
+ int8_t if_name[IFNAMSIZ];
+
+ /* Input values for the RMNET_IOCTL_ADD_MUX_CHANNEL IOCTL */
+ struct {
+ uint32_t mux_id;
+ int8_t vchannel_name[IFNAMSIZ];
+ } rmnet_mux_val;
+
+ /* Input values for the RMNET_IOCTL_FLOW_CONTROL IOCTL */
+ struct {
+ uint8_t flow_mode;
+ uint8_t mux_id;
+ } flow_control_prop;
+
+ /* Return values for RMNET_IOCTL_GET_EP_PAIR */
+ struct {
+ uint32_t consumer_pipe_num;
+ uint32_t producer_pipe_num;
+ } ipa_ep_pair;
+ } u;
+};
+
+struct rmnet_ioctl_data_s {
+ union {
+ uint32_t operation_mode;
+ uint32_t tcm_handle;
+ } u;
+};
+
+#define RMNET_IOCTL_QOS_MODE_6 (1<<0)
+#define RMNET_IOCTL_QOS_MODE_8 (1<<1)
+
+/* QMI QoS header definition */
+#define QMI_QOS_HDR_S __attribute((__packed__)) qmi_qos_hdr_s
+struct QMI_QOS_HDR_S {
+ unsigned char version;
+ unsigned char flags;
+ unsigned long flow_id;
+};
+
+/* QMI QoS 8-byte header. */
+struct qmi_qos_hdr8_s {
+ struct QMI_QOS_HDR_S hdr;
+ uint8_t reserved[2];
+} __attribute((__packed__));
+
+#endif /* _UAPI_MSM_RMNET_H_ */
diff --git a/include/uapi/linux/rmnet_data.h b/include/uapi/linux/rmnet_data.h
new file mode 100644
index 0000000..d978caa
--- /dev/null
+++ b/include/uapi/linux/rmnet_data.h
@@ -0,0 +1,245 @@
+ /*
+ * Copyright (c) 2013-2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * RMNET Data configuration specification
+ */
+
+#ifndef _RMNET_DATA_H_
+#define _RMNET_DATA_H_
+
+/* ***************** Constants ********************************************** */
+#define RMNET_LOCAL_LOGICAL_ENDPOINT -1
+
+#define RMNET_EGRESS_FORMAT__RESERVED__ (1<<0)
+#define RMNET_EGRESS_FORMAT_MAP (1<<1)
+#define RMNET_EGRESS_FORMAT_AGGREGATION (1<<2)
+#define RMNET_EGRESS_FORMAT_MUXING (1<<3)
+
+#define RMNET_INGRESS_FIX_ETHERNET (1<<0)
+#define RMNET_INGRESS_FORMAT_MAP (1<<1)
+#define RMNET_INGRESS_FORMAT_DEAGGREGATION (1<<2)
+#define RMNET_INGRESS_FORMAT_DEMUXING (1<<3)
+#define RMNET_INGRESS_FORMAT_MAP_COMMANDS (1<<4)
+
+/* ***************** Netlink API ******************************************** */
+#define RMNET_NETLINK_PROTO 31
+#define RMNET_MAX_STR_LEN 16
+#define RMNET_NL_DATA_MAX_LEN 64
+
+#define RMNET_NETLINK_MSG_COMMAND 0
+#define RMNET_NETLINK_MSG_RETURNCODE 1
+#define RMNET_NETLINK_MSG_RETURNDATA 2
+
+struct rmnet_nl_msg_s {
+ uint16_t reserved;
+ uint16_t message_type;
+ uint16_t reserved2:14;
+ uint16_t crd:2;
+ union {
+ uint16_t arg_length;
+ uint16_t return_code;
+ };
+ union {
+ uint8_t data[RMNET_NL_DATA_MAX_LEN];
+ struct {
+ uint8_t dev[RMNET_MAX_STR_LEN];
+ uint32_t flags;
+ uint16_t agg_size;
+ uint16_t agg_count;
+ uint8_t tail_spacing;
+ } data_format;
+ struct {
+ uint8_t dev[RMNET_MAX_STR_LEN];
+ int32_t ep_id;
+ uint8_t operating_mode;
+ uint8_t next_dev[RMNET_MAX_STR_LEN];
+ } local_ep_config;
+ struct {
+ uint32_t id;
+ uint8_t vnd_name[RMNET_MAX_STR_LEN];
+ } vnd;
+ struct {
+ uint32_t id;
+ uint32_t map_flow_id;
+ uint32_t tc_flow_id;
+ } flow_control;
+ };
+};
+
+enum rmnet_netlink_message_types_e {
+ /*
+ * RMNET_NETLINK_ASSOCIATE_NETWORK_DEVICE - Register RMNET data driver
+ * on a particular device.
+ * Args: char[] dev_name: Null terminated ASCII string, max length: 15
+ * Returns: status code
+ */
+ RMNET_NETLINK_ASSOCIATE_NETWORK_DEVICE,
+
+ /*
+ * RMNET_NETLINK_UNASSOCIATE_NETWORK_DEVICE - Unregister RMNET data
+ * driver on a particular
+ * device.
+ * Args: char[] dev_name: Null terminated ASCII string, max length: 15
+ * Returns: status code
+ */
+ RMNET_NETLINK_UNASSOCIATE_NETWORK_DEVICE,
+
+ /*
+ * RMNET_NETLINK_GET_NETWORK_DEVICE_ASSOCIATED - Get if RMNET data
+ * driver is registered on a
+ * particular device.
+ * Args: char[] dev_name: Null terminated ASCII string, max length: 15
+ * Returns: 1 if registered, 0 if not
+ */
+ RMNET_NETLINK_GET_NETWORK_DEVICE_ASSOCIATED,
+
+ /*
+ * RMNET_NETLINK_SET_LINK_EGRESS_DATA_FORMAT - Sets the egress data
+ * format for a particular
+ * link.
+ * Args: uint32_t egress_flags
+ * char[] dev_name: Null terminated ASCII string, max length: 15
+ * Returns: status code
+ */
+ RMNET_NETLINK_SET_LINK_EGRESS_DATA_FORMAT,
+
+ /*
+ * RMNET_NETLINK_GET_LINK_EGRESS_DATA_FORMAT - Gets the egress data
+ * format for a particular
+ * link.
+ * Args: char[] dev_name: Null terminated ASCII string, max length: 15
+ * Returns: 4-bytes data: uint32_t egress_flags
+ */
+ RMNET_NETLINK_GET_LINK_EGRESS_DATA_FORMAT,
+
+ /*
+ * RMNET_NETLINK_SET_LINK_INGRESS_DATA_FORMAT - Sets the ingress data
+ * format for a particular
+ * link.
+ * Args: uint32_t ingress_flags
+ * char[] dev_name: Null terminated ASCII string, max length: 15
+ * Returns: status code
+ */
+ RMNET_NETLINK_SET_LINK_INGRESS_DATA_FORMAT,
+
+ /*
+ * RMNET_NETLINK_GET_LINK_INGRESS_DATA_FORMAT - Gets the ingress data
+ * format for a particular
+ * link.
+ * Args: char[] dev_name: Null terminated ASCII string, max length: 15
+ * Returns: 4-bytes data: uint32_t ingress_flags
+ */
+ RMNET_NETLINK_GET_LINK_INGRESS_DATA_FORMAT,
+
+ /*
+ * RMNET_NETLINK_SET_LOGICAL_EP_CONFIG - Sets the logical endpoint
+ * configuration for a particular
+ * link.
+ * Args: char[] dev_name: Null terminated ASCII string, max length: 15
+ * int32_t logical_ep_id, valid values are -1 through 31
+ * uint8_t rmnet_mode: one of none, vnd, bridged
+ * char[] egress_dev_name: Egress device if operating in bridge mode
+ * Returns: status code
+ */
+ RMNET_NETLINK_SET_LOGICAL_EP_CONFIG,
+
+ /*
+ * RMNET_NETLINK_UNSET_LOGICAL_EP_CONFIG - Un-sets the logical endpoint
+ * configuration for a particular
+ * link.
+ * Args: char[] dev_name: Null terminated ASCII string, max length: 15
+ * int32_t logical_ep_id, valid values are -1 through 31
+ * Returns: status code
+ */
+ RMNET_NETLINK_UNSET_LOGICAL_EP_CONFIG,
+
+ /*
+ * RMNET_NETLINK_GET_LOGICAL_EP_CONFIG - Gets the logical endpoint
+ * configuration for a particular
+ * link.
+ * Args: char[] dev_name: Null terminated ASCII string, max length: 15
+ * int32_t logical_ep_id, valid values are -1 through 31
+ * Returns: uint8_t rmnet_mode: one of none, vnd, bridged
+ * char[] egress_dev_name: Egress device
+ */
+ RMNET_NETLINK_GET_LOGICAL_EP_CONFIG,
+
+ /*
+ * RMNET_NETLINK_NEW_VND - Creates a new virtual network device node
+ * Args: int32_t node number
+ * Returns: status code
+ */
+ RMNET_NETLINK_NEW_VND,
+
+ /*
+ * RMNET_NETLINK_NEW_VND_WITH_PREFIX - Creates a new virtual network
+ * device node with the specified
+ * prefix for the device name
+ * Args: int32_t node number
+ * char[] vnd_name - Use as prefix
+ * Returns: status code
+ */
+ RMNET_NETLINK_NEW_VND_WITH_PREFIX,
+
+ /*
+ * RMNET_NETLINK_GET_VND_NAME - Gets the string name of a VND from ID
+ * Args: int32_t node number
+ * Returns: char[] vnd_name
+ */
+ RMNET_NETLINK_GET_VND_NAME,
+
+ /*
+ * RMNET_NETLINK_FREE_VND - Removes virtual network device node
+ * Args: int32_t node number
+ * Returns: status code
+ */
+ RMNET_NETLINK_FREE_VND,
+
+ /*
+ * RMNET_NETLINK_ADD_VND_TC_FLOW - Add flow control handle on VND
+ * Args: int32_t node number
+ * uint32_t MAP Flow Handle
+ * uint32_t TC Flow Handle
+ * Returns: status code
+ */
+ RMNET_NETLINK_ADD_VND_TC_FLOW,
+
+ /*
+ * RMNET_NETLINK_DEL_VND_TC_FLOW - Removes flow control handle on VND
+ * Args: int32_t node number
+ * uint32_t MAP Flow Handle
+ * Returns: status code
+ */
+ RMNET_NETLINK_DEL_VND_TC_FLOW
+};
+
+enum rmnet_config_endpoint_modes_e {
+ RMNET_EPMODE_NONE,
+ RMNET_EPMODE_VND,
+ RMNET_EPMODE_BRIDGE,
+ RMNET_EPMODE_LENGTH /* Must be the last item in the list */
+};
+
+enum rmnet_config_return_codes_e {
+ RMNET_CONFIG_OK,
+ RMNET_CONFIG_UNKNOWN_MESSAGE,
+ RMNET_CONFIG_UNKNOWN_ERROR,
+ RMNET_CONFIG_NOMEM,
+ RMNET_CONFIG_DEVICE_IN_USE,
+ RMNET_CONFIG_INVALID_REQUEST,
+ RMNET_CONFIG_NO_SUCH_DEVICE,
+ RMNET_CONFIG_BAD_ARGUMENTS,
+ RMNET_CONFIG_BAD_EGRESS_DEVICE,
+ RMNET_CONFIG_TC_HANDLE_FULL
+};
+
+#endif /* _RMNET_DATA_H_ */
diff --git a/lib/kfifo.c b/lib/kfifo.c
index 7b7f830..46a8b36 100644
--- a/lib/kfifo.c
+++ b/lib/kfifo.c
@@ -22,6 +22,8 @@
#include <linux/kernel.h>
#include <linux/export.h>
#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include <linux/mm.h>
#include <linux/err.h>
#include <linux/log2.h>
#include <linux/uaccess.h>
@@ -35,9 +37,14 @@
return (fifo->mask + 1) - (fifo->in - fifo->out);
}
-int __kfifo_alloc(struct __kfifo *fifo, unsigned int size,
- size_t esize, gfp_t gfp_mask)
+/*
+ * internal helper to aid __kfifo_alloc and __kfifo_valloc
+ */
+static int __kfifo_kvalloc(struct __kfifo *fifo, unsigned int size,
+ size_t esize, gfp_t gfp_mask, bool physical)
{
+ size_t sz;
+
/*
* round down to the next power of 2, since our 'let the indices
* wrap' technique works only in this case.
@@ -54,7 +61,11 @@
return -EINVAL;
}
- fifo->data = kmalloc(size * esize, gfp_mask);
+ sz = size * esize;
+ if (physical)
+ fifo->data = kmalloc(sz, gfp_mask);
+ else
+ fifo->data = vmalloc(sz);
if (!fifo->data) {
fifo->mask = 0;
@@ -64,11 +75,26 @@
return 0;
}
+
+int __kfifo_alloc(struct __kfifo *fifo, unsigned int size,
+ size_t esize, gfp_t gfp_mask)
+{
+ return __kfifo_kvalloc(fifo, size, esize, gfp_mask,
+ true);
+}
EXPORT_SYMBOL(__kfifo_alloc);
+int __kfifo_valloc(struct __kfifo *fifo, unsigned int size,
+ size_t esize)
+{
+ return __kfifo_kvalloc(fifo, size, esize, GFP_KERNEL | __GFP_NOWARN,
+ false);
+}
+EXPORT_SYMBOL(__kfifo_valloc);
+
void __kfifo_free(struct __kfifo *fifo)
{
- kfree(fifo->data);
+ kvfree(fifo->data);
fifo->in = 0;
fifo->out = 0;
fifo->esize = 0;
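
The `__kfifo_kvalloc` helper above rounds the requested size down to a power of two so that the free-running `in`/`out` counters can simply wrap, as the context comment notes. A minimal userspace sketch of that index-wrap scheme (hypothetical names, fixed 8-slot capacity) — not the kernel implementation:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of kfifo's "let the indices wrap" scheme: with a power-of-2
 * capacity, in/out are free-running unsigned counters and (in - out)
 * stays correct even after either counter overflows. */
struct sketch_fifo {
	unsigned int in;	/* write counter, only ever incremented */
	unsigned int out;	/* read counter, only ever incremented */
	unsigned int mask;	/* capacity - 1; capacity is a power of 2 */
	unsigned char data[8];
};

static unsigned int sketch_used(const struct sketch_fifo *f)
{
	return f->in - f->out;
}

static int sketch_put(struct sketch_fifo *f, unsigned char c)
{
	if (sketch_used(f) > f->mask)
		return 0; /* full */
	f->data[f->in & f->mask] = c;
	f->in++;
	return 1;
}

static int sketch_get(struct sketch_fifo *f, unsigned char *c)
{
	if (sketch_used(f) == 0)
		return 0; /* empty */
	*c = f->data[f->out & f->mask];
	f->out++;
	return 1;
}
```

Because unsigned overflow is well defined in C, the scheme keeps working even when `in` and `out` wrap past `UINT_MAX`, which is exactly why the size must be rounded to a power of two.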
diff --git a/lib/llist.c b/lib/llist.c
index 4a15115..36d657c 100644
--- a/lib/llist.c
+++ b/lib/llist.c
@@ -86,3 +86,25 @@
return entry;
}
EXPORT_SYMBOL_GPL(llist_del_first);
+
+/**
+ * llist_reverse_order - reverse order of a llist chain
+ * @head: first item of the list to be reversed
+ *
+ * Reverse the order of a chain of llist entries and return the
+ * new first entry.
+ */
+struct llist_node *llist_reverse_order(struct llist_node *head)
+{
+ struct llist_node *new_head = NULL;
+
+ while (head) {
+ struct llist_node *tmp = head;
+ head = head->next;
+ tmp->next = new_head;
+ new_head = tmp;
+ }
+
+ return new_head;
+}
+EXPORT_SYMBOL_GPL(llist_reverse_order);
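
The reversal loop above pops each entry off the old chain and pushes it onto a new one. A standalone userspace sketch of the same loop, with a hypothetical payload field added for demonstration (the real `llist_node` carries none):

```c
#include <assert.h>
#include <stddef.h>

/* Userspace mirror of the llist_reverse_order() loop above. */
struct node {
	struct node *next;
	int val; /* demonstration payload; not present in llist_node */
};

static struct node *reverse_order(struct node *head)
{
	struct node *new_head = NULL;

	while (head) {
		struct node *tmp = head;

		head = head->next;
		tmp->next = new_head;	/* push onto the reversed chain */
		new_head = tmp;
	}
	return new_head;
}
```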
diff --git a/mm/compaction.c b/mm/compaction.c
index 18a90b4..be153e2 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -617,7 +617,8 @@
check_compact_cluster:
/* Avoid isolating too much */
- if (cc->nr_migratepages == COMPACT_CLUSTER_MAX) {
+ if (cc->nr_migratepages == COMPACT_CLUSTER_MAX &&
+ !is_cma_page(pfn_to_page(low_pfn))) {
++low_pfn;
break;
}
diff --git a/mm/filemap.c b/mm/filemap.c
index bc75af1..4054cbf 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -182,6 +182,11 @@
return 0;
}
+static int sleep_on_page_timeout(void *word)
+{
+ return io_schedule_timeout(2) ? 0 : -EAGAIN;
+}
+
static int sleep_on_page_killable(void *word)
{
sleep_on_page(word);
@@ -562,6 +567,16 @@
}
EXPORT_SYMBOL(wait_on_page_bit);
+void wait_on_page_bit_timeout(struct page *page, int bit_nr)
+{
+ DEFINE_WAIT_BIT(wait, &page->flags, bit_nr);
+
+ if (test_bit(bit_nr, &page->flags))
+ __wait_on_bit(page_waitqueue(page), &wait,
+ sleep_on_page_timeout, TASK_UNINTERRUPTIBLE);
+}
+EXPORT_SYMBOL(wait_on_page_bit_timeout);
+
int wait_on_page_bit_killable(struct page *page, int bit_nr)
{
DEFINE_WAIT_BIT(wait, &page->flags, bit_nr);
diff --git a/mm/memory.c b/mm/memory.c
index 07e1987..0564d7b 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -59,6 +59,7 @@
#include <linux/gfp.h>
#include <linux/migrate.h>
#include <linux/string.h>
+#include <linux/dma-contiguous.h>
#include <asm/io.h>
#include <asm/pgalloc.h>
@@ -1681,8 +1682,10 @@
*/
get_page_foll(newpage);
- if (migrate_replace_page(page, newpage) == 0)
+ if (migrate_replace_page(page, newpage) == 0) {
+ put_page(newpage);
return newpage;
+ }
put_page(newpage);
__free_page(newpage);
@@ -1847,7 +1850,9 @@
struct page *page;
unsigned int foll_flags = gup_flags;
unsigned int page_increm;
+ static DEFINE_MUTEX(s_follow_page_lock);
+follow_page_again:
/*
* If we have a pending SIGKILL, don't keep faulting
* pages and potentially allocating memory.
@@ -1856,13 +1861,13 @@
return i ? i : -ERESTARTSYS;
cond_resched();
+ mutex_lock(&s_follow_page_lock);
while (!(page = follow_page_mask(vma, start,
foll_flags, &page_mask))) {
int ret;
unsigned int fault_flags = 0;
- if (gup_flags & FOLL_DURABLE)
- fault_flags = FAULT_FLAG_NO_CMA;
+ fault_flags = FAULT_FLAG_NO_CMA;
/* For mlock, just skip the stack guard page. */
if (foll_flags & FOLL_MLOCK) {
@@ -1880,6 +1885,7 @@
fault_flags);
if (ret & VM_FAULT_ERROR) {
+ mutex_unlock(&s_follow_page_lock);
if (ret & VM_FAULT_OOM)
return i ? i : -ENOMEM;
if (ret & (VM_FAULT_HWPOISON |
@@ -1904,6 +1910,7 @@
}
if (ret & VM_FAULT_RETRY) {
+ mutex_unlock(&s_follow_page_lock);
if (nonblocking)
*nonblocking = 0;
return i;
@@ -1927,11 +1934,53 @@
cond_resched();
}
- if (IS_ERR(page))
+ if (IS_ERR(page)) {
+ mutex_unlock(&s_follow_page_lock);
return i ? i : PTR_ERR(page);
+ }
- if ((gup_flags & FOLL_DURABLE) && is_cma_page(page))
+ if (dma_contiguous_should_replace_page(page) &&
+ (foll_flags & FOLL_GET)) {
+ struct page *old_page = page;
+ unsigned int fault_flags = 0;
+
+ put_page(page);
+ wait_on_page_locked_timeout(page);
page = migrate_replace_cma_page(page);
+ /* Migration may have succeeded, but the vma
+ * mapping may also have changed if a write
+ * fault from another access occurred before
+ * the migration code locked the page. Follow
+ * the page again to get the latest mapping:
+ * if migration succeeded, the follow-up
+ * returns a non-CMA page; if a write fault
+ * intervened, the page walk and CMA
+ * replacement (if still necessary) restart
+ * with the new page.
+ */
+ if (page == old_page)
+ wait_on_page_locked_timeout(page);
+ if (foll_flags & FOLL_WRITE) {
+ /* The page is marked old during
+ * migration. To make it young again,
+ * call handle_mm_fault.
+ * This avoids sanity-check failures
+ * in the calling code, which checks
+ * the pte write permission bits.
+ */
+ fault_flags |= FAULT_FLAG_WRITE;
+ handle_mm_fault(mm, vma,
+ start, fault_flags);
+ }
+ foll_flags = gup_flags;
+ mutex_unlock(&s_follow_page_lock);
+ goto follow_page_again;
+ }
+
+ mutex_unlock(&s_follow_page_lock);
+ BUG_ON(dma_contiguous_should_replace_page(page) &&
+ (foll_flags & FOLL_GET));
if (pages) {
pages[i] = page;
@@ -3195,6 +3244,7 @@
return 0;
}
+extern bool is_vma_temporary_stack(struct vm_area_struct *vma);
/*
* We enter with non-exclusive mmap_sem (to exclude vma changes,
* but allow concurrent faults), and pte mapped but not yet locked.
@@ -3227,7 +3277,12 @@
/* Allocate our own private page. */
if (unlikely(anon_vma_prepare(vma)))
goto oom;
- page = alloc_zeroed_user_highpage_movable(vma, address);
+ if (vma->vm_flags & VM_LOCKED || flags & FAULT_FLAG_NO_CMA ||
+ is_vma_temporary_stack(vma)) {
+ page = alloc_zeroed_user_highpage(GFP_HIGHUSER, vma, address);
+ } else {
+ page = alloc_zeroed_user_highpage_movable(vma, address);
+ }
if (!page)
goto oom;
/*
diff --git a/mm/migrate.c b/mm/migrate.c
index 316b0ad..b8ab1c9 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -218,18 +218,16 @@
page = migration_entry_to_page(entry);
- /*
- * Once radix-tree replacement of page migration started, page_count
- * *must* be zero. And, we don't want to call wait_on_page_locked()
- * against a page without get_page().
- * So, we use get_page_unless_zero(), here. Even failed, page fault
- * will occur again.
- */
- if (!get_page_unless_zero(page))
- goto out;
pte_unmap_unlock(ptep, ptl);
- wait_on_page_locked(page);
- put_page(page);
+ /* Don't take a ref on the page, as that can
+ * cause an in-progress migration to be
+ * aborted; migration proceeds after locking
+ * the page, so just wait for the page to be
+ * unlocked. If the page was unlocked,
+ * reallocated and locked again (potentially
+ * forever) before this call, the wait times
+ * out on the next tick and exits.
+ */
+ wait_on_page_locked_timeout(page);
return;
out:
pte_unmap_unlock(ptep, ptl);
@@ -1117,9 +1115,6 @@
return -EAGAIN;
}
- /* page is now isolated, so release additional reference */
- put_page(page);
-
for (pass = 0; pass < 10 && ret != 0; pass++) {
cond_resched();
diff --git a/mm/mm_init.c b/mm/mm_init.c
index c280a02..61a4371 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -144,16 +144,56 @@
early_param("mminit_loglevel", set_mminit_loglevel);
#endif /* CONFIG_DEBUG_MEMORY_INIT */
+#ifdef CONFIG_CMA
+static unsigned int cma_threshold = 75;
+
+unsigned int cma_threshold_get(void)
+{
+ return cma_threshold;
+}
+
+static ssize_t cma_threshold_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ return sprintf(buf, "%u\n", cma_threshold);
+}
+
+static ssize_t cma_threshold_store(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count)
+{
+ unsigned int threshold;
+
+ if (sscanf(buf, "%u", &threshold) == 1 && threshold <= 100) {
+ cma_threshold = threshold;
+ return count;
+ }
+
+ return -EINVAL;
+}
+
+static struct kobj_attribute cma_threshold_attr =
+ __ATTR(cma_threshold, 0644, cma_threshold_show, cma_threshold_store);
+#endif
+
struct kobject *mm_kobj;
EXPORT_SYMBOL_GPL(mm_kobj);
static int __init mm_sysfs_init(void)
{
+ int ret = 0;
+
mm_kobj = kobject_create_and_add("mm", kernel_kobj);
if (!mm_kobj)
return -ENOMEM;
- return 0;
+#ifdef CONFIG_CMA
+ ret = sysfs_create_file(mm_kobj, &cma_threshold_attr.attr);
+ if (ret < 0)
+ kobject_put(mm_kobj);
+#endif
+
+ return ret;
}
__initcall(mm_sysfs_init);
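
The `cma_threshold` knob above (default 75) gates the CMA-first allocation path in `__rmqueue`: usage is computed as 100 minus the free percentage, and movable allocations are steered into CMA only while usage is below the threshold. A hedged userspace sketch of that arithmetic (hypothetical names; assumes a non-zero total, as the kernel path does when a CMA region exists):

```c
#include <assert.h>

/* Sketch of the check __rmqueue performs against cma_threshold:
 * usage% = 100 - free_pages*100/total_pages. */
static unsigned int cma_usage_percent(unsigned long total, unsigned long nr_free)
{
	return 100u - (unsigned int)((nr_free * 100u) / total);
}

static int should_prefer_cma(unsigned long total, unsigned long nr_free,
			     unsigned int threshold)
{
	return cma_usage_percent(total, nr_free) < threshold;
}
```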
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6e064ce..b374294 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1038,10 +1038,11 @@
int current_order;
struct page *page;
int migratetype, i;
+ int non_cma_order;
/* Find the largest possible block of pages in the other list */
- for (current_order = MAX_ORDER-1; current_order >= order;
- --current_order) {
+ for (non_cma_order = MAX_ORDER-1; non_cma_order >= order;
+ --non_cma_order) {
for (i = 0;; i++) {
migratetype = fallbacks[start_migratetype][i];
@@ -1049,6 +1050,18 @@
if (migratetype == MIGRATE_RESERVE)
break;
+ if (is_migrate_cma(migratetype))
+ /* CMA page blocks are not movable across
+ * migrate types. Search for free blocks
+ * from lowest order to avoid contiguous
+ * higher alignment allocations for subsequent
+ * alloc requests.
+ */
+ current_order = order + MAX_ORDER - 1 -
+ non_cma_order;
+ else
+ current_order = non_cma_order;
+
area = &(zone->free_area[current_order]);
if (list_empty(&area->free_list[migratetype]))
continue;
@@ -1111,6 +1124,10 @@
return NULL;
}
+#ifdef CONFIG_CMA
+unsigned long cma_get_total_pages(void);
+#endif
+
/*
* Do the hard work of removing an element from the buddy allocator.
* Call me with the zone->lock already held.
@@ -1118,10 +1135,24 @@
static struct page *__rmqueue(struct zone *zone, unsigned int order,
int migratetype)
{
- struct page *page;
+ struct page *page = NULL;
retry_reserve:
- page = __rmqueue_smallest(zone, order, migratetype);
+#ifdef CONFIG_CMA
+ if (migratetype == MIGRATE_MOVABLE) {
+
+ unsigned long nr_cma_pages = cma_get_total_pages();
+ unsigned long nr_free_cma_pages =
+ global_page_state(NR_FREE_CMA_PAGES);
+ unsigned int current_cma_usage = 100 -
+ ((nr_free_cma_pages * 100) / nr_cma_pages);
+
+ if (current_cma_usage < cma_threshold_get())
+ page = __rmqueue_smallest(zone, order, MIGRATE_CMA);
+ }
+ if (!page)
+#endif
+ page = __rmqueue_smallest(zone, order, migratetype);
if (unlikely(!page) && migratetype != MIGRATE_RESERVE) {
page = __rmqueue_fallback(zone, order, migratetype);
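
In the fallback hunk above, `non_cma_order` counts down from `MAX_ORDER-1`, and for CMA migrate types the probed order is remapped so the CMA free lists are searched from the lowest usable order upward. A small sketch of that mapping (hypothetical names; `MAX_ORDER` assumed to be 11, though it is config-dependent):

```c
#include <assert.h>

#define SKETCH_MAX_ORDER 11 /* assumed; MAX_ORDER is config-dependent */

/* Mirror of the remapping in __rmqueue_fallback above: as non_cma_order
 * walks down from SKETCH_MAX_ORDER-1 to 'order', the CMA search order
 * walks up from 'order' to SKETCH_MAX_ORDER-1, i.e. smallest block first. */
static int cma_search_order(int order, int non_cma_order)
{
	return order + SKETCH_MAX_ORDER - 1 - non_cma_order;
}
```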
diff --git a/mm/shmem.c b/mm/shmem.c
index 6019778..3e5d3d2 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1965,6 +1965,37 @@
return error;
}
+static int
+shmem_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mode)
+{
+ struct inode *inode;
+ int error = -ENOSPC;
+
+ inode = shmem_get_inode(dir->i_sb, dir, mode, 0, VM_NORESERVE);
+ if (inode) {
+ error = security_inode_init_security(inode, dir,
+ NULL,
+ shmem_initxattrs, NULL);
+ if (error) {
+ if (error != -EOPNOTSUPP) {
+ iput(inode);
+ return error;
+ }
+ }
+#ifdef CONFIG_TMPFS_POSIX_ACL
+ error = generic_acl_init(inode, dir);
+ if (error) {
+ iput(inode);
+ return error;
+ }
+#else
+ error = 0;
+#endif
+ d_tmpfile(dentry, inode);
+ }
+ return error;
+}
+
static int shmem_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
{
int error;
@@ -2723,6 +2754,7 @@
.rmdir = shmem_rmdir,
.mknod = shmem_mknod,
.rename = shmem_rename,
+ .tmpfile = shmem_tmpfile,
#endif
#ifdef CONFIG_TMPFS_XATTR
.setxattr = shmem_setxattr,
diff --git a/net/Kconfig b/net/Kconfig
index c9efb8b..a9f441c 100644
--- a/net/Kconfig
+++ b/net/Kconfig
@@ -232,6 +232,7 @@
source "net/openvswitch/Kconfig"
source "net/vmw_vsock/Kconfig"
source "net/netlink/Kconfig"
+source "net/rmnet_data/Kconfig"
config RPS
boolean "RPS"
diff --git a/net/Makefile b/net/Makefile
index 6afdd27..e0e0b22 100644
--- a/net/Makefile
+++ b/net/Makefile
@@ -72,3 +72,4 @@
obj-$(CONFIG_OPENVSWITCH) += openvswitch/
obj-$(CONFIG_VSOCKETS) += vmw_vsock/
obj-$(CONFIG_NET_ACTIVITY_STATS) += activity_stats.o
+obj-$(CONFIG_RMNET_DATA) += rmnet_data/
diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
index 21bdaec..5121496 100644
--- a/net/ipv6/addrconf.c
+++ b/net/ipv6/addrconf.c
@@ -1893,6 +1893,16 @@
return addrconf_ifid_eui64(eui, dev);
case ARPHRD_IEEE1394:
return addrconf_ifid_ieee1394(eui, dev);
+ case ARPHRD_RAWIP: {
+ struct in6_addr lladdr;
+
+ if (ipv6_get_lladdr(dev, &lladdr, IFA_F_TENTATIVE))
+ get_random_bytes(eui, 8);
+ else
+ memcpy(eui, lladdr.s6_addr + 8, 8);
+
+ return 0;
+ }
}
return -1;
}
@@ -2800,6 +2810,7 @@
if ((dev->type != ARPHRD_ETHER) &&
(dev->type != ARPHRD_FDDI) &&
(dev->type != ARPHRD_ARCNET) &&
+ (dev->type != ARPHRD_RAWIP) &&
(dev->type != ARPHRD_INFINIBAND) &&
(dev->type != ARPHRD_IEEE802154) &&
(dev->type != ARPHRD_IEEE1394)) {
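
For `ARPHRD_RAWIP`, the interface identifier above is either random bytes or a copy of the low 64 bits of an existing link-local address. A sketch of the copy path (hypothetical helper; operates on the raw 16-byte `s6_addr` layout):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Sketch of the ARPHRD_RAWIP branch above: when a link-local address
 * already exists, the interface id is its low 8 bytes (s6_addr[8..15]). */
static void rawip_ifid_from_lladdr(uint8_t eui[8], const uint8_t s6_addr[16])
{
	memcpy(eui, s6_addr + 8, 8);
}
```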
diff --git a/net/rmnet_data/Kconfig b/net/rmnet_data/Kconfig
new file mode 100644
index 0000000..36d5817
--- /dev/null
+++ b/net/rmnet_data/Kconfig
@@ -0,0 +1,29 @@
+#
+# RMNET Data and MAP driver
+#
+
+menuconfig RMNET_DATA
+ depends on NETDEVICES
+ bool "RmNet Data and MAP driver"
+ ---help---
+ If you say Y here, then the rmnet_data module will be statically
+ compiled into the kernel. The rmnet data module provides MAP
+ functionality for embedded and bridged traffic.
+if RMNET_DATA
+
+config RMNET_DATA_FC
+ bool "RmNet Data Flow Control"
+ depends on NET_SCHED && NET_SCH_PRIO
+ ---help---
+ Say Y here if you want RmNet data to handle in-band flow control and
+ ioctl based flow control. This depends on net scheduler and prio queue
+ capability being present in the kernel. In-band flow control requires
+ the MAP protocol to be used.
+config RMNET_DATA_DEBUG_PKT
+ bool "Packet Debug Logging"
+ ---help---
+ Say Y here if you want RmNet data to be able to log packets in main
+ system log. This should not be enabled on production builds as it can
+ impact system performance. Note that simply enabling it here will not
+ enable the logging; it must be enabled at run-time as well.
+endif # RMNET_DATA
diff --git a/net/rmnet_data/Makefile b/net/rmnet_data/Makefile
new file mode 100644
index 0000000..ccb8b5b
--- /dev/null
+++ b/net/rmnet_data/Makefile
@@ -0,0 +1,14 @@
+#
+# Makefile for the RMNET Data module
+#
+
+rmnet_data-y := rmnet_data_main.o
+rmnet_data-y += rmnet_data_config.o
+rmnet_data-y += rmnet_data_vnd.o
+rmnet_data-y += rmnet_data_handlers.o
+rmnet_data-y += rmnet_map_data.o
+rmnet_data-y += rmnet_map_command.o
+rmnet_data-y += rmnet_data_stats.o
+obj-$(CONFIG_RMNET_DATA) += rmnet_data.o
+
+CFLAGS_rmnet_data_main.o := -I$(src)
diff --git a/net/rmnet_data/rmnet_data_config.c b/net/rmnet_data/rmnet_data_config.c
new file mode 100644
index 0000000..c274918
--- /dev/null
+++ b/net/rmnet_data/rmnet_data_config.c
@@ -0,0 +1,1205 @@
+/*
+ * Copyright (c) 2013-2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * RMNET Data configuration engine
+ *
+ */
+
+#include <net/sock.h>
+#include <linux/module.h>
+#include <linux/netlink.h>
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+#include <linux/rmnet_data.h>
+#include "rmnet_data_config.h"
+#include "rmnet_data_handlers.h"
+#include "rmnet_data_vnd.h"
+#include "rmnet_data_private.h"
+
+RMNET_LOG_MODULE(RMNET_DATA_LOGMASK_CONFIG);
+
+/* ***************** Local Definitions and Declarations ********************* */
+static struct sock *nl_socket_handle;
+
+#ifndef RMNET_KERNEL_PRE_3_8
+static struct netlink_kernel_cfg rmnet_netlink_cfg = {
+ .input = rmnet_config_netlink_msg_handler
+};
+#endif
+
+static struct notifier_block rmnet_dev_notifier = {
+ .notifier_call = rmnet_config_notify_cb,
+ .next = 0,
+ .priority = 0
+};
+
+#define RMNET_NL_MSG_SIZE(Y) (sizeof(((struct rmnet_nl_msg_s *)0)->Y))
+
+struct rmnet_free_vnd_work {
+ struct work_struct work;
+ int vnd_id;
+};
+
+/* ***************** Init and Cleanup *************************************** */
+
+#ifdef RMNET_KERNEL_PRE_3_8
+static struct sock *_rmnet_config_start_netlink(void)
+{
+ return netlink_kernel_create(&init_net,
+ RMNET_NETLINK_PROTO,
+ 0,
+ rmnet_config_netlink_msg_handler,
+ NULL,
+ THIS_MODULE);
+}
+#else
+static struct sock *_rmnet_config_start_netlink(void)
+{
+ return netlink_kernel_create(&init_net,
+ RMNET_NETLINK_PROTO,
+ &rmnet_netlink_cfg);
+}
+#endif /* RMNET_KERNEL_PRE_3_8 */
+
+/**
+ * rmnet_config_init() - Startup init
+ *
+ * Registers netlink protocol with kernel and opens socket. Netlink handler is
+ * registered with kernel.
+ */
+int rmnet_config_init(void)
+{
+ int rc;
+ nl_socket_handle = _rmnet_config_start_netlink();
+ if (!nl_socket_handle) {
+ LOGE("%s", "Failed to init netlink socket");
+ return RMNET_INIT_ERROR;
+ }
+
+ rc = register_netdevice_notifier(&rmnet_dev_notifier);
+ if (rc != 0) {
+ LOGE("Failed to register device notifier; rc=%d", rc);
+ /* TODO: Cleanup the nl socket */
+ return RMNET_INIT_ERROR;
+ }
+
+ return 0;
+}
+
+/**
+ * rmnet_config_exit() - Cleans up all netlink related resources
+ */
+void rmnet_config_exit(void)
+{
+ netlink_kernel_release(nl_socket_handle);
+}
+
+/* ***************** Helper Functions *************************************** */
+
+/**
+ * _rmnet_is_physical_endpoint_associated() - Determines if device is associated
+ * @dev: Device to check
+ *
+ * Compares the device rx_handler callback pointer against the known function
+ *
+ * Return:
+ * - 1 if associated
+ * - 0 if NOT associated
+ */
+static inline int _rmnet_is_physical_endpoint_associated(struct net_device *dev)
+{
+ rx_handler_func_t *rx_handler;
+ rx_handler = rcu_dereference(dev->rx_handler);
+
+ if (rx_handler == rmnet_rx_handler)
+ return 1;
+ else
+ return 0;
+}
+
+/**
+ * _rmnet_get_phys_ep_config() - Get physical ep config for an associated device
+ * @dev: Device to get endpoint configuration from
+ *
+ * Return:
+ * - pointer to configuration if successful
+ * - 0 (null) if device is not associated
+ */
+static inline struct rmnet_phys_ep_conf_s *_rmnet_get_phys_ep_config
+ (struct net_device *dev)
+{
+ if (_rmnet_is_physical_endpoint_associated(dev))
+ return (struct rmnet_phys_ep_conf_s *)
+ rcu_dereference(dev->rx_handler_data);
+ else
+ return 0;
+}
+
+/**
+ * _rmnet_get_logical_ep() - Gets the logical end point configuration
+ * structure for a network device
+ * @dev: Device to get endpoint configuration from
+ * @config_id: Logical endpoint id on device
+ *
+ * Retrieves the logical_endpoint_config structure.
+ *
+ * Return:
+ * - End point configuration structure
+ * - NULL in case of an error
+ */
+struct rmnet_logical_ep_conf_s *_rmnet_get_logical_ep(struct net_device *dev,
+ int config_id)
+{
+ struct rmnet_phys_ep_conf_s *config;
+ struct rmnet_logical_ep_conf_s *epconfig_l;
+
+ if (rmnet_vnd_is_vnd(dev))
+ epconfig_l = rmnet_vnd_get_le_config(dev);
+ else {
+ config = _rmnet_get_phys_ep_config(dev);
+
+ if (!config)
+ return NULL;
+
+ if (config_id == RMNET_LOCAL_LOGICAL_ENDPOINT)
+ epconfig_l = &config->local_ep;
+ else
+ epconfig_l = &config->muxed_ep[config_id];
+ }
+
+ return epconfig_l;
+}
+
+/* ***************** Netlink Handler **************************************** */
+#define _RMNET_NETLINK_NULL_CHECKS() do { if (!rmnet_header || !resp_rmnet) \
+ BUG(); \
+ } while (0)
+
+static void _rmnet_netlink_set_link_egress_data_format
+ (struct rmnet_nl_msg_s *rmnet_header,
+ struct rmnet_nl_msg_s *resp_rmnet)
+{
+ struct net_device *dev;
+ _RMNET_NETLINK_NULL_CHECKS();
+
+ resp_rmnet->crd = RMNET_NETLINK_MSG_RETURNCODE;
+ dev = dev_get_by_name(&init_net, rmnet_header->data_format.dev);
+
+ if (!dev) {
+ resp_rmnet->return_code = RMNET_CONFIG_NO_SUCH_DEVICE;
+ return;
+ }
+
+ resp_rmnet->return_code =
+ rmnet_set_egress_data_format(dev,
+ rmnet_header->data_format.flags,
+ rmnet_header->data_format.agg_size,
+ rmnet_header->data_format.agg_count
+ );
+ dev_put(dev);
+}
+
+static void _rmnet_netlink_set_link_ingress_data_format
+ (struct rmnet_nl_msg_s *rmnet_header,
+ struct rmnet_nl_msg_s *resp_rmnet)
+{
+ struct net_device *dev;
+ _RMNET_NETLINK_NULL_CHECKS();
+
+ resp_rmnet->crd = RMNET_NETLINK_MSG_RETURNCODE;
+
+ dev = dev_get_by_name(&init_net, rmnet_header->data_format.dev);
+ if (!dev) {
+ resp_rmnet->return_code = RMNET_CONFIG_NO_SUCH_DEVICE;
+ return;
+ }
+
+ resp_rmnet->return_code = rmnet_set_ingress_data_format(
+ dev,
+ rmnet_header->data_format.flags,
+ rmnet_header->data_format.tail_spacing);
+ dev_put(dev);
+}
+
+static void _rmnet_netlink_set_logical_ep_config
+ (struct rmnet_nl_msg_s *rmnet_header,
+ struct rmnet_nl_msg_s *resp_rmnet)
+{
+ struct net_device *dev, *dev2;
+ _RMNET_NETLINK_NULL_CHECKS();
+
+ resp_rmnet->crd = RMNET_NETLINK_MSG_RETURNCODE;
+ if (rmnet_header->local_ep_config.ep_id < -1
+ || rmnet_header->local_ep_config.ep_id > 254) {
+ resp_rmnet->return_code = RMNET_CONFIG_BAD_ARGUMENTS;
+ return;
+ }
+
+ dev = dev_get_by_name(&init_net,
+ rmnet_header->local_ep_config.dev);
+
+ dev2 = dev_get_by_name(&init_net,
+ rmnet_header->local_ep_config.next_dev);
+
+
+ if (dev && dev2)
+ resp_rmnet->return_code =
+ rmnet_set_logical_endpoint_config(
+ dev,
+ rmnet_header->local_ep_config.ep_id,
+ rmnet_header->local_ep_config.operating_mode,
+ dev2);
+ else
+ resp_rmnet->return_code = RMNET_CONFIG_NO_SUCH_DEVICE;
+
+ if (dev)
+ dev_put(dev);
+ if (dev2)
+ dev_put(dev2);
+}
+
+static void _rmnet_netlink_unset_logical_ep_config
+ (struct rmnet_nl_msg_s *rmnet_header,
+ struct rmnet_nl_msg_s *resp_rmnet)
+{
+ struct net_device *dev;
+ _RMNET_NETLINK_NULL_CHECKS();
+
+ resp_rmnet->crd = RMNET_NETLINK_MSG_RETURNCODE;
+ if (rmnet_header->local_ep_config.ep_id < -1
+ || rmnet_header->local_ep_config.ep_id > 254) {
+ resp_rmnet->return_code = RMNET_CONFIG_BAD_ARGUMENTS;
+ return;
+ }
+
+ dev = dev_get_by_name(&init_net,
+ rmnet_header->local_ep_config.dev);
+
+ if (dev) {
+ resp_rmnet->return_code =
+ rmnet_unset_logical_endpoint_config(
+ dev,
+ rmnet_header->local_ep_config.ep_id);
+ dev_put(dev);
+ } else {
+ resp_rmnet->return_code = RMNET_CONFIG_NO_SUCH_DEVICE;
+ }
+}
+
+static void _rmnet_netlink_get_logical_ep_config
+ (struct rmnet_nl_msg_s *rmnet_header,
+ struct rmnet_nl_msg_s *resp_rmnet)
+{
+ struct net_device *dev;
+ _RMNET_NETLINK_NULL_CHECKS();
+
+ resp_rmnet->crd = RMNET_NETLINK_MSG_RETURNCODE;
+ if (rmnet_header->local_ep_config.ep_id < -1
+ || rmnet_header->local_ep_config.ep_id > 254) {
+ resp_rmnet->return_code = RMNET_CONFIG_BAD_ARGUMENTS;
+ return;
+ }
+
+ dev = dev_get_by_name(&init_net,
+ rmnet_header->local_ep_config.dev);
+
+ if (dev)
+ resp_rmnet->return_code =
+ rmnet_get_logical_endpoint_config(
+ dev,
+ rmnet_header->local_ep_config.ep_id,
+ &resp_rmnet->local_ep_config.operating_mode,
+ resp_rmnet->local_ep_config.next_dev,
+ sizeof(resp_rmnet->local_ep_config.next_dev));
+ else {
+ resp_rmnet->return_code = RMNET_CONFIG_NO_SUCH_DEVICE;
+ return;
+ }
+
+ if (resp_rmnet->return_code == RMNET_CONFIG_OK) {
+ /* Begin Data */
+ resp_rmnet->crd = RMNET_NETLINK_MSG_RETURNDATA;
+ resp_rmnet->arg_length = RMNET_NL_MSG_SIZE(local_ep_config);
+ }
+ dev_put(dev);
+}
+
+static void _rmnet_netlink_associate_network_device
+ (struct rmnet_nl_msg_s *rmnet_header,
+ struct rmnet_nl_msg_s *resp_rmnet)
+{
+ struct net_device *dev;
+ _RMNET_NETLINK_NULL_CHECKS();
+ resp_rmnet->crd = RMNET_NETLINK_MSG_RETURNCODE;
+
+ dev = dev_get_by_name(&init_net, rmnet_header->data);
+ if (!dev) {
+ resp_rmnet->return_code = RMNET_CONFIG_NO_SUCH_DEVICE;
+ return;
+ }
+
+ resp_rmnet->return_code = rmnet_associate_network_device(dev);
+ dev_put(dev);
+}
+
+static void _rmnet_netlink_unassociate_network_device
+ (struct rmnet_nl_msg_s *rmnet_header,
+ struct rmnet_nl_msg_s *resp_rmnet)
+{
+ struct net_device *dev;
+ _RMNET_NETLINK_NULL_CHECKS();
+ resp_rmnet->crd = RMNET_NETLINK_MSG_RETURNCODE;
+
+ dev = dev_get_by_name(&init_net, rmnet_header->data);
+ if (!dev) {
+ resp_rmnet->return_code = RMNET_CONFIG_NO_SUCH_DEVICE;
+ return;
+ }
+
+ resp_rmnet->return_code = rmnet_unassociate_network_device(dev);
+ dev_put(dev);
+}
+
+static void _rmnet_netlink_get_network_device_associated
+ (struct rmnet_nl_msg_s *rmnet_header,
+ struct rmnet_nl_msg_s *resp_rmnet)
+{
+ struct net_device *dev;
+
+ _RMNET_NETLINK_NULL_CHECKS();
+ resp_rmnet->crd = RMNET_NETLINK_MSG_RETURNCODE;
+
+ dev = dev_get_by_name(&init_net, rmnet_header->data);
+ if (!dev) {
+ resp_rmnet->return_code = RMNET_CONFIG_NO_SUCH_DEVICE;
+ return;
+ }
+
+ resp_rmnet->return_code = _rmnet_is_physical_endpoint_associated(dev);
+ resp_rmnet->crd = RMNET_NETLINK_MSG_RETURNDATA;
+ dev_put(dev);
+}
+
+static void _rmnet_netlink_get_link_egress_data_format
+ (struct rmnet_nl_msg_s *rmnet_header,
+ struct rmnet_nl_msg_s *resp_rmnet)
+{
+ struct net_device *dev;
+ struct rmnet_phys_ep_conf_s *config;
+ _RMNET_NETLINK_NULL_CHECKS();
+ resp_rmnet->crd = RMNET_NETLINK_MSG_RETURNCODE;
+
+
+ dev = dev_get_by_name(&init_net, rmnet_header->data_format.dev);
+ if (!dev) {
+ resp_rmnet->return_code = RMNET_CONFIG_NO_SUCH_DEVICE;
+ return;
+ }
+
+ config = _rmnet_get_phys_ep_config(dev);
+ if (!config) {
+ resp_rmnet->return_code = RMNET_CONFIG_INVALID_REQUEST;
+ dev_put(dev);
+ return;
+ }
+
+ /* Begin Data */
+ resp_rmnet->crd = RMNET_NETLINK_MSG_RETURNDATA;
+ resp_rmnet->arg_length = RMNET_NL_MSG_SIZE(data_format);
+ resp_rmnet->data_format.flags = config->egress_data_format;
+ resp_rmnet->data_format.agg_count = config->egress_agg_count;
+ resp_rmnet->data_format.agg_size = config->egress_agg_size;
+ dev_put(dev);
+}
+
+static void _rmnet_netlink_get_link_ingress_data_format
+ (struct rmnet_nl_msg_s *rmnet_header,
+ struct rmnet_nl_msg_s *resp_rmnet)
+{
+ struct net_device *dev;
+ struct rmnet_phys_ep_conf_s *config;
+ _RMNET_NETLINK_NULL_CHECKS();
+ resp_rmnet->crd = RMNET_NETLINK_MSG_RETURNCODE;
+
+
+ dev = dev_get_by_name(&init_net, rmnet_header->data_format.dev);
+ if (!dev) {
+ resp_rmnet->return_code = RMNET_CONFIG_NO_SUCH_DEVICE;
+ return;
+ }
+
+ config = _rmnet_get_phys_ep_config(dev);
+ if (!config) {
+ resp_rmnet->return_code = RMNET_CONFIG_INVALID_REQUEST;
+ dev_put(dev);
+ return;
+ }
+
+ /* Begin Data */
+ resp_rmnet->crd = RMNET_NETLINK_MSG_RETURNDATA;
+ resp_rmnet->arg_length = RMNET_NL_MSG_SIZE(data_format);
+ resp_rmnet->data_format.flags = config->ingress_data_format;
+ resp_rmnet->data_format.tail_spacing = config->tail_spacing;
+ dev_put(dev);
+}
+
+static void _rmnet_netlink_get_vnd_name
+ (struct rmnet_nl_msg_s *rmnet_header,
+ struct rmnet_nl_msg_s *resp_rmnet)
+{
+ int r;
+ _RMNET_NETLINK_NULL_CHECKS();
+ resp_rmnet->crd = RMNET_NETLINK_MSG_RETURNCODE;
+
+ r = rmnet_vnd_get_name(rmnet_header->vnd.id, resp_rmnet->vnd.vnd_name,
+ RMNET_MAX_STR_LEN);
+
+ if (r != 0) {
+ resp_rmnet->return_code = RMNET_CONFIG_INVALID_REQUEST;
+ return;
+ }
+
+ /* Begin Data */
+ resp_rmnet->crd = RMNET_NETLINK_MSG_RETURNDATA;
+ resp_rmnet->arg_length = RMNET_NL_MSG_SIZE(vnd);
+}
+
+static void _rmnet_netlink_add_del_vnd_tc_flow
+ (uint32_t command,
+ struct rmnet_nl_msg_s *rmnet_header,
+ struct rmnet_nl_msg_s *resp_rmnet)
+{
+ uint32_t id;
+ uint32_t map_flow_id;
+ uint32_t tc_flow_id;
+
+ _RMNET_NETLINK_NULL_CHECKS();
+ resp_rmnet->crd = RMNET_NETLINK_MSG_RETURNCODE;
+
+ id = rmnet_header->flow_control.id;
+ map_flow_id = rmnet_header->flow_control.map_flow_id;
+ tc_flow_id = rmnet_header->flow_control.tc_flow_id;
+
+ switch (command) {
+ case RMNET_NETLINK_ADD_VND_TC_FLOW:
+ resp_rmnet->return_code = rmnet_vnd_add_tc_flow(id,
+ map_flow_id,
+ tc_flow_id);
+ break;
+ case RMNET_NETLINK_DEL_VND_TC_FLOW:
+ resp_rmnet->return_code = rmnet_vnd_del_tc_flow(id,
+ map_flow_id,
+ tc_flow_id);
+ break;
+ default:
+ LOGM("Called with unhandled command %d", command);
+ resp_rmnet->return_code = RMNET_CONFIG_INVALID_REQUEST;
+ break;
+ }
+}
+
+/**
+ * rmnet_config_netlink_msg_handler() - Netlink message handler callback
+ * @skb: Packet containing netlink messages
+ *
+ * Standard kernel-expected format for a netlink message handler. Processes SKBs
+ * which contain RmNet data specific netlink messages.
+ */
+void rmnet_config_netlink_msg_handler(struct sk_buff *skb)
+{
+ struct nlmsghdr *nlmsg_header, *resp_nlmsg;
+ struct rmnet_nl_msg_s *rmnet_header, *resp_rmnet;
+ int return_pid, response_data_length;
+ struct sk_buff *skb_response;
+
+ response_data_length = 0;
+ nlmsg_header = (struct nlmsghdr *) skb->data;
+ rmnet_header = (struct rmnet_nl_msg_s *) nlmsg_data(nlmsg_header);
+
+ LOGL("Netlink message pid=%d, seq=%d, length=%d, rmnet_type=%d",
+ nlmsg_header->nlmsg_pid,
+ nlmsg_header->nlmsg_seq,
+ nlmsg_header->nlmsg_len,
+ rmnet_header->message_type);
+
+ return_pid = nlmsg_header->nlmsg_pid;
+
+ skb_response = nlmsg_new(sizeof(struct nlmsghdr)
+ + sizeof(struct rmnet_nl_msg_s),
+ GFP_KERNEL);
+
+ if (!skb_response) {
+ LOGH("%s", "Failed to allocate response buffer");
+ return;
+ }
+
+ resp_nlmsg = nlmsg_put(skb_response,
+ 0,
+ nlmsg_header->nlmsg_seq,
+ NLMSG_DONE,
+ sizeof(struct rmnet_nl_msg_s),
+ 0);
+
+ resp_rmnet = nlmsg_data(resp_nlmsg);
+
+ if (!resp_rmnet)
+ BUG();
+
+ resp_rmnet->message_type = rmnet_header->message_type;
+ rtnl_lock();
+ switch (rmnet_header->message_type) {
+ case RMNET_NETLINK_ASSOCIATE_NETWORK_DEVICE:
+ _rmnet_netlink_associate_network_device
+ (rmnet_header, resp_rmnet);
+ break;
+
+ case RMNET_NETLINK_UNASSOCIATE_NETWORK_DEVICE:
+ _rmnet_netlink_unassociate_network_device
+ (rmnet_header, resp_rmnet);
+ break;
+
+ case RMNET_NETLINK_GET_NETWORK_DEVICE_ASSOCIATED:
+ _rmnet_netlink_get_network_device_associated
+ (rmnet_header, resp_rmnet);
+ break;
+
+ case RMNET_NETLINK_SET_LINK_EGRESS_DATA_FORMAT:
+ _rmnet_netlink_set_link_egress_data_format
+ (rmnet_header, resp_rmnet);
+ break;
+
+ case RMNET_NETLINK_GET_LINK_EGRESS_DATA_FORMAT:
+ _rmnet_netlink_get_link_egress_data_format
+ (rmnet_header, resp_rmnet);
+ break;
+
+ case RMNET_NETLINK_SET_LINK_INGRESS_DATA_FORMAT:
+ _rmnet_netlink_set_link_ingress_data_format
+ (rmnet_header, resp_rmnet);
+ break;
+
+ case RMNET_NETLINK_GET_LINK_INGRESS_DATA_FORMAT:
+ _rmnet_netlink_get_link_ingress_data_format
+ (rmnet_header, resp_rmnet);
+ break;
+
+ case RMNET_NETLINK_SET_LOGICAL_EP_CONFIG:
+ _rmnet_netlink_set_logical_ep_config(rmnet_header, resp_rmnet);
+ break;
+
+ case RMNET_NETLINK_UNSET_LOGICAL_EP_CONFIG:
+ _rmnet_netlink_unset_logical_ep_config(rmnet_header,
+ resp_rmnet);
+ break;
+
+ case RMNET_NETLINK_GET_LOGICAL_EP_CONFIG:
+ _rmnet_netlink_get_logical_ep_config(rmnet_header, resp_rmnet);
+ break;
+
+ case RMNET_NETLINK_NEW_VND:
+ resp_rmnet->crd = RMNET_NETLINK_MSG_RETURNCODE;
+ resp_rmnet->return_code =
+ rmnet_create_vnd(rmnet_header->vnd.id);
+ break;
+
+ case RMNET_NETLINK_NEW_VND_WITH_PREFIX:
+ resp_rmnet->crd = RMNET_NETLINK_MSG_RETURNCODE;
+ resp_rmnet->return_code = rmnet_create_vnd_prefix(
+ rmnet_header->vnd.id,
+ rmnet_header->vnd.vnd_name);
+ break;
+
+ case RMNET_NETLINK_FREE_VND:
+ resp_rmnet->crd = RMNET_NETLINK_MSG_RETURNCODE;
+ /* Please check the rmnet_vnd_free_dev documentation
+ * regarding the locking sequence below.
+ */
+ rtnl_unlock();
+ resp_rmnet->return_code = rmnet_free_vnd(rmnet_header->vnd.id);
+ rtnl_lock();
+ break;
+
+ case RMNET_NETLINK_GET_VND_NAME:
+ _rmnet_netlink_get_vnd_name(rmnet_header, resp_rmnet);
+ break;
+
+ case RMNET_NETLINK_DEL_VND_TC_FLOW:
+ case RMNET_NETLINK_ADD_VND_TC_FLOW:
+ _rmnet_netlink_add_del_vnd_tc_flow(rmnet_header->message_type,
+ rmnet_header,
+ resp_rmnet);
+ break;
+
+ default:
+ resp_rmnet->crd = RMNET_NETLINK_MSG_RETURNCODE;
+ resp_rmnet->return_code = RMNET_CONFIG_UNKNOWN_MESSAGE;
+ break;
+ }
+ rtnl_unlock();
+ nlmsg_unicast(nl_socket_handle, skb_response, return_pid);
+ LOGD("%s", "Done processing command");
+
+}
+
+/* ***************** Configuration API ************************************** */
+
+/**
+ * rmnet_unassociate_network_device() - Unassociate network device
+ * @dev: Device to unassociate
+ *
+ * Frees all structures generated for the device. Unregisters rx_handler.
+ * todo: needs to do some sanity verification first (is device in use, etc...)
+ *
+ * Return:
+ * - RMNET_CONFIG_OK if successful
+ * - RMNET_CONFIG_NO_SUCH_DEVICE dev is null
+ * - RMNET_CONFIG_INVALID_REQUEST if device is not already associated
+ * - RMNET_CONFIG_DEVICE_IN_USE if device has logical ep that wasn't unset
+ * - RMNET_CONFIG_UNKNOWN_ERROR net_device private section is null
+ */
+int rmnet_unassociate_network_device(struct net_device *dev)
+{
+ struct rmnet_phys_ep_conf_s *config;
+ int config_id = RMNET_LOCAL_LOGICAL_ENDPOINT;
+ struct rmnet_logical_ep_conf_s *epconfig_l;
+ ASSERT_RTNL();
+
+ if (!dev)
+ return RMNET_CONFIG_NO_SUCH_DEVICE;
+
+ LOGL("(%s);", dev->name);
+
+ if (!_rmnet_is_physical_endpoint_associated(dev))
+ return RMNET_CONFIG_INVALID_REQUEST;
+
+ for (; config_id < RMNET_DATA_MAX_LOGICAL_EP; config_id++) {
+ epconfig_l = _rmnet_get_logical_ep(dev, config_id);
+ if (epconfig_l && epconfig_l->refcount)
+ return RMNET_CONFIG_DEVICE_IN_USE;
+ }
+
+ config = (struct rmnet_phys_ep_conf_s *)
+ rcu_dereference(dev->rx_handler_data);
+
+ if (!config)
+ return RMNET_CONFIG_UNKNOWN_ERROR;
+
+ kfree(config);
+
+ netdev_rx_handler_unregister(dev);
+
+ /* Explicitly release the reference from the device */
+ dev_put(dev);
+ return RMNET_CONFIG_OK;
+}
+
+/**
+ * rmnet_set_ingress_data_format() - Set ingress data format on network device
+ * @dev: Device to set ingress data format on
+ * @ingress_data_format: 32-bit unsigned bitmask of ingress format
+ * @tail_spacing: Number of bytes of tail spacing to reserve on ingress packets
+ *
+ * Network device must already have association with RmNet Data driver
+ *
+ * Return:
+ * - RMNET_CONFIG_OK if successful
+ * - RMNET_CONFIG_NO_SUCH_DEVICE dev is null
+ * - RMNET_CONFIG_INVALID_REQUEST net_device private section is null
+ */
+int rmnet_set_ingress_data_format(struct net_device *dev,
+ uint32_t ingress_data_format,
+ uint8_t tail_spacing)
+{
+ struct rmnet_phys_ep_conf_s *config;
+ ASSERT_RTNL();
+
+ if (!dev)
+ return RMNET_CONFIG_NO_SUCH_DEVICE;
+
+ LOGL("(%s,0x%08X);", dev->name, ingress_data_format);
+
+ config = _rmnet_get_phys_ep_config(dev);
+
+ if (!config)
+ return RMNET_CONFIG_INVALID_REQUEST;
+
+ config->ingress_data_format = ingress_data_format;
+ config->tail_spacing = tail_spacing;
+
+ return RMNET_CONFIG_OK;
+}
+
+/**
+ * rmnet_set_egress_data_format() - Set egress data format on network device
+ * @dev: Device to set egress data format on
+ * @egress_data_format: 32-bit unsigned bitmask of egress format
+ * @agg_size: Maximum size, in bytes, of an aggregated egress frame
+ * @agg_count: Maximum number of packets in an aggregated egress frame
+ *
+ * Network device must already have association with RmNet Data driver
+ * todo: Bounds check on agg_*
+ *
+ * Return:
+ * - RMNET_CONFIG_OK if successful
+ * - RMNET_CONFIG_NO_SUCH_DEVICE dev is null
+ * - RMNET_CONFIG_UNKNOWN_ERROR net_device private section is null
+ */
+int rmnet_set_egress_data_format(struct net_device *dev,
+ uint32_t egress_data_format,
+ uint16_t agg_size,
+ uint16_t agg_count)
+{
+ struct rmnet_phys_ep_conf_s *config;
+ ASSERT_RTNL();
+
+ if (!dev)
+ return RMNET_CONFIG_NO_SUCH_DEVICE;
+
+ LOGL("(%s,0x%08X, %d, %d);",
+ dev->name, egress_data_format, agg_size, agg_count);
+
+ config = _rmnet_get_phys_ep_config(dev);
+
+ if (!config)
+ return RMNET_CONFIG_UNKNOWN_ERROR;
+
+ config->egress_data_format = egress_data_format;
+ config->egress_agg_size = agg_size;
+ config->egress_agg_count = agg_count;
+
+ return RMNET_CONFIG_OK;
+}
+
+/**
+ * rmnet_associate_network_device() - Associate network device
+ * @dev: Device to register with RmNet data
+ *
+ * Typically used on physical network devices. Registers RX handler and private
+ * metadata structures.
+ *
+ * Return:
+ * - RMNET_CONFIG_OK if successful
+ * - RMNET_CONFIG_NO_SUCH_DEVICE dev is null
+ * - RMNET_CONFIG_INVALID_REQUEST if the device to be associated is a vnd
+ * - RMNET_CONFIG_DEVICE_IN_USE if dev rx_handler is already filled
+ * - RMNET_CONFIG_DEVICE_IN_USE if netdev_rx_handler_register() fails
+ */
+int rmnet_associate_network_device(struct net_device *dev)
+{
+ struct rmnet_phys_ep_conf_s *config;
+ int rc;
+ ASSERT_RTNL();
+
+ if (!dev)
+ return RMNET_CONFIG_NO_SUCH_DEVICE;
+
+ LOGL("(%s);\n", dev->name);
+
+ if (_rmnet_is_physical_endpoint_associated(dev)) {
+ LOGM("%s is already registered", dev->name);
+ return RMNET_CONFIG_DEVICE_IN_USE;
+ }
+
+ if (rmnet_vnd_is_vnd(dev)) {
+ LOGM("%s is a vnd", dev->name);
+ return RMNET_CONFIG_INVALID_REQUEST;
+ }
+
+ config = kzalloc(sizeof(struct rmnet_phys_ep_conf_s), GFP_ATOMIC);
+
+ if (!config)
+ return RMNET_CONFIG_NOMEM;
+
+ config->dev = dev;
+ spin_lock_init(&config->agg_lock);
+
+ rc = netdev_rx_handler_register(dev, rmnet_rx_handler, config);
+
+ if (rc) {
+ LOGM("netdev_rx_handler_register returns %d", rc);
+ kfree(config);
+ return RMNET_CONFIG_DEVICE_IN_USE;
+ }
+
+ /* Explicitly hold a reference to the device */
+ dev_hold(dev);
+ return RMNET_CONFIG_OK;
+}
+
+/**
+ * _rmnet_set_logical_endpoint_config() - Set logical endpoint config on device
+ * @dev: Device to set endpoint configuration on
+ * @config_id: logical endpoint id on device
+ * @epconfig: endpoint configuration structure to set
+ *
+ * Return:
+ * - RMNET_CONFIG_OK if successful
+ * - RMNET_CONFIG_UNKNOWN_ERROR net_device private section is null
+ * - RMNET_CONFIG_NO_SUCH_DEVICE if device to set config on is null
+ * - RMNET_CONFIG_DEVICE_IN_USE if device already has a logical ep
+ * - RMNET_CONFIG_BAD_ARGUMENTS if logical endpoint id is out of range
+ */
+int _rmnet_set_logical_endpoint_config(struct net_device *dev,
+ int config_id,
+ struct rmnet_logical_ep_conf_s *epconfig)
+{
+ struct rmnet_logical_ep_conf_s *epconfig_l;
+
+ ASSERT_RTNL();
+
+ if (!dev)
+ return RMNET_CONFIG_NO_SUCH_DEVICE;
+
+ if (config_id < RMNET_LOCAL_LOGICAL_ENDPOINT
+ || config_id >= RMNET_DATA_MAX_LOGICAL_EP)
+ return RMNET_CONFIG_BAD_ARGUMENTS;
+
+ epconfig_l = _rmnet_get_logical_ep(dev, config_id);
+
+ if (!epconfig_l)
+ return RMNET_CONFIG_UNKNOWN_ERROR;
+
+ if (epconfig_l->refcount)
+ return RMNET_CONFIG_DEVICE_IN_USE;
+
+ memcpy(epconfig_l, epconfig, sizeof(struct rmnet_logical_ep_conf_s));
+ if (config_id == RMNET_LOCAL_LOGICAL_ENDPOINT)
+ epconfig_l->mux_id = 0;
+ else
+ epconfig_l->mux_id = config_id;
+
+ /* Explicitly hold a reference to the egress device */
+ dev_hold(epconfig_l->egress_dev);
+ return RMNET_CONFIG_OK;
+}
+
+/**
+ * _rmnet_unset_logical_endpoint_config() - Un-set the logical endpoint config
+ * on device
+ * @dev: Device to set endpoint configuration on
+ * @config_id: logical endpoint id on device
+ *
+ * Return:
+ * - RMNET_CONFIG_OK if successful
+ * - RMNET_CONFIG_UNKNOWN_ERROR net_device private section is null
+ * - RMNET_CONFIG_NO_SUCH_DEVICE if device to set config on is null
+ * - RMNET_CONFIG_BAD_ARGUMENTS if logical endpoint id is out of range
+ */
+int _rmnet_unset_logical_endpoint_config(struct net_device *dev,
+ int config_id)
+{
+ struct rmnet_logical_ep_conf_s *epconfig_l = NULL;
+
+ ASSERT_RTNL();
+
+ if (!dev)
+ return RMNET_CONFIG_NO_SUCH_DEVICE;
+
+ if (config_id < RMNET_LOCAL_LOGICAL_ENDPOINT
+ || config_id >= RMNET_DATA_MAX_LOGICAL_EP)
+ return RMNET_CONFIG_BAD_ARGUMENTS;
+
+ epconfig_l = _rmnet_get_logical_ep(dev, config_id);
+
+ if (!epconfig_l || !epconfig_l->refcount)
+ return RMNET_CONFIG_NO_SUCH_DEVICE;
+
+ /* Explicitly release the reference from the egress device */
+ dev_put(epconfig_l->egress_dev);
+ memset(epconfig_l, 0, sizeof(struct rmnet_logical_ep_conf_s));
+
+ return RMNET_CONFIG_OK;
+}
+
+/**
+ * rmnet_set_logical_endpoint_config() - Set logical endpoint config on a device
+ * @dev: Device to set endpoint configuration on
+ * @config_id: logical endpoint id on device
+ * @rmnet_mode: endpoint mode. Values from: rmnet_config_endpoint_modes_e
+ * @egress_device: device node to forward packet to once done processing in
+ * ingress/egress handlers
+ *
+ * Creates a logical_endpoint_config structure and fills in the information from
+ * function arguments. Calls _rmnet_set_logical_endpoint_config() to finish
+ * configuration. Network device must already have association with RmNet Data
+ * driver
+ *
+ * Return:
+ * - RMNET_CONFIG_OK if successful
+ * - RMNET_CONFIG_BAD_EGRESS_DEVICE if egress device is null
+ * - RMNET_CONFIG_BAD_EGRESS_DEVICE if egress device is not handled by
+ * RmNet data module
+ * - RMNET_CONFIG_UNKNOWN_ERROR net_device private section is null
+ * - RMNET_CONFIG_NO_SUCH_DEVICE if device to set config on is null
+ * - RMNET_CONFIG_BAD_ARGUMENTS if logical endpoint id is out of range
+ */
+int rmnet_set_logical_endpoint_config(struct net_device *dev,
+ int config_id,
+ uint8_t rmnet_mode,
+ struct net_device *egress_dev)
+{
+ struct rmnet_logical_ep_conf_s epconfig;
+
+ if (!dev)
+ return RMNET_CONFIG_NO_SUCH_DEVICE;
+
+ if (!egress_dev
+ || ((!_rmnet_is_physical_endpoint_associated(egress_dev))
+ && (!rmnet_vnd_is_vnd(egress_dev)))) {
+ return RMNET_CONFIG_BAD_EGRESS_DEVICE;
+ }
+
+ LOGL("(%s, %d, %d, %s);",
+ dev->name, config_id, rmnet_mode, egress_dev->name);
+
+ memset(&epconfig, 0, sizeof(struct rmnet_logical_ep_conf_s));
+ epconfig.refcount = 1;
+ epconfig.rmnet_mode = rmnet_mode;
+ epconfig.egress_dev = egress_dev;
+
+ return _rmnet_set_logical_endpoint_config(dev, config_id, &epconfig);
+}
+
+/**
+ * rmnet_unset_logical_endpoint_config() - Un-set logical endpoint configuration
+ * on a device
+ * @dev: Device to set endpoint configuration on
+ * @config_id: logical endpoint id on device
+ *
+ * Retrieves the logical_endpoint_config structure and frees the egress device.
+ * Network device must already have association with RmNet Data driver
+ *
+ * Return:
+ * - RMNET_CONFIG_OK if successful
+ * - RMNET_CONFIG_UNKNOWN_ERROR net_device private section is null
+ * - RMNET_CONFIG_NO_SUCH_DEVICE device is not associated
+ * - RMNET_CONFIG_BAD_ARGUMENTS if logical endpoint id is out of range
+ */
+int rmnet_unset_logical_endpoint_config(struct net_device *dev,
+ int config_id)
+{
+ if (!dev
+ || ((!_rmnet_is_physical_endpoint_associated(dev))
+ && (!rmnet_vnd_is_vnd(dev)))) {
+ return RMNET_CONFIG_NO_SUCH_DEVICE;
+ }
+
+ LOGL("(%s, %d);", dev->name, config_id);
+
+ return _rmnet_unset_logical_endpoint_config(dev, config_id);
+}
+
+/**
+ * rmnet_get_logical_endpoint_config() - Gets logical endpoint configuration
+ * for a device
+ * @dev: Device to get endpoint configuration on
+ * @config_id: logical endpoint id on device
+ * @rmnet_mode: (I/O) logical endpoint mode
+ * @egress_dev_name: (I/O) logical endpoint egress device name
+ * @egress_dev_name_size: The maximal size of the I/O egress_dev_name
+ *
+ * Retrieves the logical_endpoint_config structure.
+ * Network device must already have association with RmNet Data driver
+ *
+ * Return:
+ * - RMNET_CONFIG_OK if successful
+ * - RMNET_CONFIG_UNKNOWN_ERROR net_device private section is null
+ * - RMNET_CONFIG_NO_SUCH_DEVICE device is not associated
+ * - RMNET_CONFIG_BAD_ARGUMENTS if logical endpoint id is out of range or
+ * if the provided buffer size for egress dev name is too short
+ */
+int rmnet_get_logical_endpoint_config(struct net_device *dev,
+ int config_id,
+ uint8_t *rmnet_mode,
+ uint8_t *egress_dev_name,
+ size_t egress_dev_name_size)
+{
+ struct rmnet_logical_ep_conf_s *epconfig_l = NULL;
+ size_t strlcpy_res = 0;
+
+ if (!dev || !egress_dev_name || !rmnet_mode)
+ return RMNET_CONFIG_BAD_ARGUMENTS;
+
+ LOGL("(%s, %d);", dev->name, config_id);
+ if (config_id < RMNET_LOCAL_LOGICAL_ENDPOINT
+ || config_id >= RMNET_DATA_MAX_LOGICAL_EP)
+ return RMNET_CONFIG_BAD_ARGUMENTS;
+
+ epconfig_l = _rmnet_get_logical_ep(dev, config_id);
+
+ if (!epconfig_l || !epconfig_l->refcount)
+ return RMNET_CONFIG_NO_SUCH_DEVICE;
+
+ *rmnet_mode = epconfig_l->rmnet_mode;
+
+ strlcpy_res = strlcpy(egress_dev_name, epconfig_l->egress_dev->name,
+ egress_dev_name_size);
+
+ if (strlcpy_res >= egress_dev_name_size)
+ return RMNET_CONFIG_BAD_ARGUMENTS;
+
+ return RMNET_CONFIG_OK;
+}
+
+/**
+ * rmnet_create_vnd() - Create virtual network device node
+ * @id: RmNet virtual device node id
+ *
+ * Return:
+ * - result of rmnet_vnd_create_dev()
+ */
+int rmnet_create_vnd(int id)
+{
+ struct net_device *dev;
+ ASSERT_RTNL();
+ LOGL("(%d);", id);
+ return rmnet_vnd_create_dev(id, &dev, NULL);
+}
+
+/**
+ * rmnet_create_vnd_prefix() - Create virtual network device node with prefix
+ * @id: RmNet virtual device node id
+ * @prefix: String prefix for device name
+ *
+ * Return:
+ * - result of rmnet_vnd_create_dev()
+ */
+int rmnet_create_vnd_prefix(int id, const char *prefix)
+{
+ struct net_device *dev;
+ ASSERT_RTNL();
+ LOGL("(%d, \"%s\");", id, prefix);
+ return rmnet_vnd_create_dev(id, &dev, prefix);
+}
+
+/**
+ * rmnet_free_vnd() - Free virtual network device node
+ * @id: RmNet virtual device node id
+ *
+ * Return:
+ * - result of rmnet_vnd_free_dev()
+ */
+int rmnet_free_vnd(int id)
+{
+ LOGL("(%d);", id);
+ return rmnet_vnd_free_dev(id);
+}
+
+static void _rmnet_free_vnd_later(struct work_struct *work)
+{
+ struct rmnet_free_vnd_work *fwork;
+ fwork = (struct rmnet_free_vnd_work *) work;
+ rmnet_free_vnd(fwork->vnd_id);
+ kfree(work);
+}
+
+/**
+ * rmnet_free_vnd_later() - Schedule a work item to free virtual network device
+ * @id: RmNet virtual device node id
+ *
+ * Schedule the VND to be freed at a later time. We need to do this if the
+ * rtnl lock is already held as to prevent a deadlock.
+ */
+static void rmnet_free_vnd_later(int id)
+{
+ struct rmnet_free_vnd_work *work;
+ LOGL("(%d);", id);
+ work = (struct rmnet_free_vnd_work *)
+ kmalloc(sizeof(struct rmnet_free_vnd_work), GFP_KERNEL);
+ if (!work) {
+ LOGH("Failed to queue removal of VND:%d", id);
+ return;
+ }
+ INIT_WORK((struct work_struct *)work, _rmnet_free_vnd_later);
+ work->vnd_id = id;
+ schedule_work((struct work_struct *)work);
+}
+
+/**
+ * rmnet_force_unassociate_device() - Force a device to unassociate
+ * @dev: Device to unassociate
+ *
+ * Return:
+ * - void
+ */
+static void rmnet_force_unassociate_device(struct net_device *dev)
+{
+ int i;
+ struct net_device *vndev;
+ struct rmnet_logical_ep_conf_s *cfg;
+
+ BUG_ON(!dev);
+
+ if (!_rmnet_is_physical_endpoint_associated(dev)) {
+ LOGM("%s", "Called on unassociated device, skipping");
+ return;
+ }
+
+ /* Check the VNDs for offending mappings */
+ for (i = 0; i < RMNET_DATA_MAX_VND; i++) {
+ vndev = rmnet_vnd_get_by_id(i);
+ if (!vndev) {
+ LOGL("VND %d not in use; skipping", i);
+ continue;
+ }
+ cfg = rmnet_vnd_get_le_config(vndev);
+ if (!cfg) {
+ LOGH("Got NULL config from VND %d", i);
+ BUG();
+ }
+ if (cfg->refcount && (cfg->egress_dev == dev)) {
+ rmnet_unset_logical_endpoint_config(vndev,
+ RMNET_LOCAL_LOGICAL_ENDPOINT);
+ rmnet_free_vnd_later(i);
+ }
+ }
+
+ /* Clear the mappings on the phys ep */
+ rmnet_unset_logical_endpoint_config(dev, RMNET_LOCAL_LOGICAL_ENDPOINT);
+ for (i = 0; i < RMNET_DATA_MAX_LOGICAL_EP; i++)
+ rmnet_unset_logical_endpoint_config(dev, i);
+ rmnet_unassociate_network_device(dev);
+}
+
+/**
+ * rmnet_config_notify_cb() - Callback for netdevice notifier chain
+ * @nb: Notifier block data
+ * @event: Netdevice notifier event ID
+ * @data: Contains a net device for which we are getting notified
+ *
+ * Return:
+ * - NOTIFY_DONE in all cases
+ */
+int rmnet_config_notify_cb(struct notifier_block *nb,
+ unsigned long event, void *data)
+{
+ struct net_device *dev = data;
+
+ BUG_ON(!dev);
+
+ LOGL("(..., %lu, %s)", event, dev->name);
+
+ switch (event) {
+ case NETDEV_UNREGISTER_FINAL:
+ case NETDEV_UNREGISTER:
+ if (_rmnet_is_physical_endpoint_associated(dev)) {
+ LOGH("Kernel is trying to unregister %s", dev->name);
+ rmnet_force_unassociate_device(dev);
+ }
+ break;
+
+ default:
+ LOGD("Unhandled event [%lu]", event);
+ break;
+ }
+
+ return NOTIFY_DONE;
+}
diff --git a/net/rmnet_data/rmnet_data_config.h b/net/rmnet_data/rmnet_data_config.h
new file mode 100644
index 0000000..777d730
--- /dev/null
+++ b/net/rmnet_data/rmnet_data_config.h
@@ -0,0 +1,87 @@
+/*
+ * Copyright (c) 2013-2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * RMNET Data configuration engine
+ *
+ */
+
+#include <linux/types.h>
+#include <linux/spinlock.h>
+
+#ifndef _RMNET_DATA_CONFIG_H_
+#define _RMNET_DATA_CONFIG_H_
+
+#define RMNET_DATA_MAX_LOGICAL_EP 32
+
+struct rmnet_logical_ep_conf_s {
+ uint8_t refcount;
+ uint8_t rmnet_mode;
+ uint8_t mux_id;
+ struct net_device *egress_dev;
+};
+
+struct rmnet_phys_ep_conf_s {
+ struct net_device *dev;
+ struct rmnet_logical_ep_conf_s local_ep;
+ struct rmnet_logical_ep_conf_s muxed_ep[RMNET_DATA_MAX_LOGICAL_EP];
+ uint32_t ingress_data_format;
+ uint32_t egress_data_format;
+
+ /* MAP specific */
+ uint16_t egress_agg_size;
+ uint16_t egress_agg_count;
+ spinlock_t agg_lock;
+ struct sk_buff *agg_skb;
+ uint8_t agg_state;
+ uint8_t agg_count;
+ uint8_t tail_spacing;
+};
+
+int rmnet_config_init(void);
+void rmnet_config_exit(void);
+
+int rmnet_unassociate_network_device(struct net_device *dev);
+int rmnet_set_ingress_data_format(struct net_device *dev,
+ uint32_t ingress_data_format,
+ uint8_t tail_spacing);
+int rmnet_set_egress_data_format(struct net_device *dev,
+ uint32_t egress_data_format,
+ uint16_t agg_size,
+ uint16_t agg_count);
+int rmnet_associate_network_device(struct net_device *dev);
+int _rmnet_set_logical_endpoint_config(struct net_device *dev,
+ int config_id,
+ struct rmnet_logical_ep_conf_s *epconfig);
+int rmnet_set_logical_endpoint_config(struct net_device *dev,
+ int config_id,
+ uint8_t rmnet_mode,
+ struct net_device *egress_dev);
+int _rmnet_unset_logical_endpoint_config(struct net_device *dev,
+ int config_id);
+int rmnet_unset_logical_endpoint_config(struct net_device *dev,
+ int config_id);
+int _rmnet_get_logical_endpoint_config(struct net_device *dev,
+ int config_id,
+ struct rmnet_logical_ep_conf_s *epconfig);
+int rmnet_get_logical_endpoint_config(struct net_device *dev,
+ int config_id,
+ uint8_t *rmnet_mode,
+ uint8_t *egress_dev_name,
+ size_t egress_dev_name_size);
+void rmnet_config_netlink_msg_handler(struct sk_buff *skb);
+int rmnet_config_notify_cb(struct notifier_block *nb,
+ unsigned long event, void *data);
+int rmnet_create_vnd(int id);
+int rmnet_create_vnd_prefix(int id, const char *name);
+int rmnet_free_vnd(int id);
+
+#endif /* _RMNET_DATA_CONFIG_H_ */
diff --git a/net/rmnet_data/rmnet_data_handlers.c b/net/rmnet_data/rmnet_data_handlers.c
new file mode 100644
index 0000000..c284f62
--- /dev/null
+++ b/net/rmnet_data/rmnet_data_handlers.c
@@ -0,0 +1,551 @@
+/*
+ * Copyright (c) 2013-2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * RMNET Data ingress/egress handler
+ *
+ */
+
+#include <linux/skbuff.h>
+#include <linux/netdevice.h>
+#include <linux/module.h>
+#include <linux/rmnet_data.h>
+#include "rmnet_data_private.h"
+#include "rmnet_data_config.h"
+#include "rmnet_data_vnd.h"
+#include "rmnet_map.h"
+#include "rmnet_data_stats.h"
+#include "rmnet_data_trace.h"
+
+RMNET_LOG_MODULE(RMNET_DATA_LOGMASK_HANDLER);
+
+
+void rmnet_egress_handler(struct sk_buff *skb,
+ struct rmnet_logical_ep_conf_s *ep);
+
+#ifdef CONFIG_RMNET_DATA_DEBUG_PKT
+unsigned int dump_pkt_rx;
+module_param(dump_pkt_rx, uint, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(dump_pkt_rx, "Dump packets entering ingress handler");
+
+unsigned int dump_pkt_tx;
+module_param(dump_pkt_tx, uint, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(dump_pkt_tx, "Dump packets exiting egress handler");
+#endif /* CONFIG_RMNET_DATA_DEBUG_PKT */
+
+/* ***************** Helper Functions *************************************** */
+
+/**
+ * __rmnet_data_set_skb_proto() - Set skb->protocol field
+ * @skb: packet being modified
+ *
+ * Peek at the first byte of the packet and set the protocol. There is no
+ * good way to determine if a packet has a MAP header. As of writing this,
+ * the reserved bit in the MAP frame will prevent it from overlapping with
+ * IPv4/IPv6 frames. This could change in the future!
+ */
+static inline void __rmnet_data_set_skb_proto(struct sk_buff *skb)
+{
+ switch (skb->data[0] & 0xF0) {
+ case 0x40: /* IPv4 */
+ skb->protocol = htons(ETH_P_IP);
+ break;
+ case 0x60: /* IPv6 */
+ skb->protocol = htons(ETH_P_IPV6);
+ break;
+ default:
+ skb->protocol = htons(ETH_P_MAP);
+ break;
+ }
+}
+
+#ifdef CONFIG_RMNET_DATA_DEBUG_PKT
+/**
+ * rmnet_print_packet() - Print packet / diagnostics
+ * @skb: Packet to print
+ * @printlen: Number of bytes to print
+ * @dev: Name of interface
+ * @dir: Character representing direction (e.g. 'r' for receive)
+ *
+ * This function prints out raw bytes in an SKB. Use of this will have major
+ * performance impacts and may even trigger watchdog resets if too much is being
+ * printed. Hence, this should always be compiled out unless absolutely needed.
+ */
+void rmnet_print_packet(const struct sk_buff *skb, const char *dev, char dir)
+{
+ char buffer[200];
+ unsigned int len, printlen;
+ int i, buffloc = 0;
+
+ switch (dir) {
+ case 'r':
+ printlen = dump_pkt_rx;
+ break;
+
+ case 't':
+ printlen = dump_pkt_tx;
+ break;
+
+ default:
+ printlen = 0;
+ break;
+ }
+
+ if (!printlen)
+ return;
+
+ pr_err("[%s][%c] - PKT skb->len=%d skb->head=%p skb->data=%p skb->tail=%p skb->end=%p\n",
+ dev, dir, skb->len, skb->head, skb->data, skb->tail, skb->end);
+
+ if (skb->len > 0)
+ len = skb->len;
+ else
+ len = ((unsigned int)skb->end) - ((unsigned int)skb->data);
+
+ pr_err("[%s][%c] - PKT len: %d, printing first %d bytes\n",
+ dev, dir, len, printlen);
+
+ memset(buffer, 0, sizeof(buffer));
+ for (i = 0; (i < printlen) && (i < len); i++) {
+ if ((i%16) == 0) {
+ pr_err("[%s][%c] - PKT%s\n", dev, dir, buffer);
+ memset(buffer, 0, sizeof(buffer));
+ buffloc = 0;
+ buffloc += snprintf(&buffer[buffloc],
+ sizeof(buffer)-buffloc, "%04X:",
+ i);
+ }
+
+ buffloc += snprintf(&buffer[buffloc], sizeof(buffer)-buffloc,
+ " %02x", skb->data[i]);
+
+ }
+ pr_err("[%s][%c] - PKT%s\n", dev, dir, buffer);
+}
+#else
+void rmnet_print_packet(const struct sk_buff *skb, const char *dev, char dir)
+{
+ return;
+}
+#endif /* CONFIG_RMNET_DATA_DEBUG_PKT */
+
+/* ***************** Generic handler **************************************** */
+
+/**
+ * rmnet_bridge_handler() - Bridge related functionality
+ *
+ * Return:
+ * - RX_HANDLER_CONSUMED in all cases
+ */
+static rx_handler_result_t rmnet_bridge_handler(struct sk_buff *skb,
+ struct rmnet_logical_ep_conf_s *ep)
+{
+ if (!ep->egress_dev) {
+ LOGD("Missing egress device for packet arriving on %s",
+ skb->dev->name);
+ rmnet_kfree_skb(skb, RMNET_STATS_SKBFREE_BRDG_NO_EGRESS);
+ } else {
+ rmnet_egress_handler(skb, ep);
+ }
+
+ return RX_HANDLER_CONSUMED;
+}
+
+/**
+ * __rmnet_deliver_skb() - Deliver skb
+ *
+ * Determines where to deliver skb. Options are: consume by network stack,
+ * pass to bridge handler, or pass to virtual network device
+ *
+ * Return:
+ * - RX_HANDLER_CONSUMED if packet forwarded or dropped
+ * - RX_HANDLER_PASS if packet is to be consumed by network stack as-is
+ */
+static rx_handler_result_t __rmnet_deliver_skb(struct sk_buff *skb,
+ struct rmnet_logical_ep_conf_s *ep)
+{
+ trace___rmnet_deliver_skb(skb);
+ switch (ep->rmnet_mode) {
+ case RMNET_EPMODE_NONE:
+ return RX_HANDLER_PASS;
+
+ case RMNET_EPMODE_BRIDGE:
+ return rmnet_bridge_handler(skb, ep);
+
+ case RMNET_EPMODE_VND:
+ skb_reset_transport_header(skb);
+ skb_reset_network_header(skb);
+ switch (rmnet_vnd_rx_fixup(skb, skb->dev)) {
+ case RX_HANDLER_CONSUMED:
+ return RX_HANDLER_CONSUMED;
+
+ case RX_HANDLER_PASS:
+ skb->pkt_type = PACKET_HOST;
+ netif_receive_skb(skb);
+ return RX_HANDLER_CONSUMED;
+ }
+ return RX_HANDLER_PASS;
+
+ default:
+ LOGD("Unknown ep mode %d", ep->rmnet_mode);
+ rmnet_kfree_skb(skb, RMNET_STATS_SKBFREE_DELIVER_NO_EP);
+ return RX_HANDLER_CONSUMED;
+ }
+}
+
+/**
+ * rmnet_ingress_deliver_packet() - Ingress handler for raw IP and bridged
+ * MAP packets.
+ * @skb: Packet needing a destination.
+ * @config: Physical end point configuration that the packet arrived on.
+ *
+ * Return:
+ * - RX_HANDLER_CONSUMED if packet forwarded/dropped
+ * - RX_HANDLER_PASS if packet should be passed up the stack by caller
+ */
+static rx_handler_result_t rmnet_ingress_deliver_packet(struct sk_buff *skb,
+ struct rmnet_phys_ep_conf_s *config)
+{
+ if (!config) {
+ LOGD("%s", "NULL physical EP provided");
+ kfree_skb(skb);
+ return RX_HANDLER_CONSUMED;
+ }
+
+ if (!(config->local_ep.refcount)) {
+ LOGD("Packet on %s has no local endpoint configuration",
+ skb->dev->name);
+ rmnet_kfree_skb(skb, RMNET_STATS_SKBFREE_IPINGRESS_NO_EP);
+ return RX_HANDLER_CONSUMED;
+ }
+
+ skb->dev = config->local_ep.egress_dev;
+
+ return __rmnet_deliver_skb(skb, &(config->local_ep));
+}
+
+/* ***************** MAP handler ******************************************** */
+
+/**
+ * _rmnet_map_ingress_handler() - Actual MAP ingress handler
+ * @skb: Packet being received
+ * @config: Physical endpoint configuration for the ingress device
+ *
+ * Most MAP ingress functions are processed here. Packets are processed
+ * individually; aggregated packets should use rmnet_map_ingress_handler()
+ *
+ * Return:
+ * - RX_HANDLER_CONSUMED if packet is dropped
+ * - result of __rmnet_deliver_skb() for all other cases
+ */
+static rx_handler_result_t _rmnet_map_ingress_handler(struct sk_buff *skb,
+ struct rmnet_phys_ep_conf_s *config)
+{
+ struct rmnet_logical_ep_conf_s *ep;
+ uint8_t mux_id;
+ uint16_t len;
+
+ mux_id = RMNET_MAP_GET_MUX_ID(skb);
+ len = RMNET_MAP_GET_LENGTH(skb)
+ - RMNET_MAP_GET_PAD(skb)
+ - config->tail_spacing;
+
+ if (mux_id >= RMNET_DATA_MAX_LOGICAL_EP) {
+ LOGD("Got packet on %s with bad mux id %d",
+ skb->dev->name, mux_id);
+ rmnet_kfree_skb(skb, RMNET_STATS_SKBFREE_MAPINGRESS_BAD_MUX);
+ return RX_HANDLER_CONSUMED;
+ }
+
+ ep = &(config->muxed_ep[mux_id]);
+
+ if (!ep->refcount) {
+ LOGD("Packet on %s:%d; has no logical endpoint config",
+ skb->dev->name, mux_id);
+
+ rmnet_kfree_skb(skb, RMNET_STATS_SKBFREE_MAPINGRESS_MUX_NO_EP);
+ return RX_HANDLER_CONSUMED;
+ }
+
+ if (config->ingress_data_format & RMNET_INGRESS_FORMAT_DEMUXING)
+ skb->dev = ep->egress_dev;
+
+ /* Subtract MAP header */
+ skb_pull(skb, sizeof(struct rmnet_map_header_s));
+ skb_trim(skb, len);
+ __rmnet_data_set_skb_proto(skb);
+
+ return __rmnet_deliver_skb(skb, ep);
+}
+
+/**
+ * rmnet_map_ingress_handler() - MAP ingress handler
+ * @skb: Packet being received
+ * @config: Physical endpoint configuration for the ingress device
+ *
+ * Called if and only if MAP is configured in the ingress device's ingress data
+ * format. Deaggregation is done here, actual MAP processing is done in
+ * _rmnet_map_ingress_handler().
+ *
+ * Return:
+ * - RX_HANDLER_CONSUMED for aggregated packets
+ * - RX_HANDLER_CONSUMED for dropped packets
+ * - result of _rmnet_map_ingress_handler() for all other cases
+ */
+static rx_handler_result_t rmnet_map_ingress_handler(struct sk_buff *skb,
+ struct rmnet_phys_ep_conf_s *config)
+{
+ struct sk_buff *skbn;
+ int rc, co = 0;
+
+ if (config->ingress_data_format & RMNET_INGRESS_FORMAT_DEAGGREGATION) {
+ while ((skbn = rmnet_map_deaggregate(skb, config)) != 0) {
+ _rmnet_map_ingress_handler(skbn, config);
+ co++;
+ }
+ LOGD("De-aggregated %d packets", co);
+ rmnet_stats_deagg_pkts(co);
+ rmnet_kfree_skb(skb, RMNET_STATS_SKBFREE_MAPINGRESS_AGGBUF);
+ rc = RX_HANDLER_CONSUMED;
+ } else {
+ rc = _rmnet_map_ingress_handler(skb, config);
+ }
+
+ return rc;
+}
+
+/**
+ * rmnet_map_egress_handler() - MAP egress handler
+ * @skb: Packet being sent
+ * @config: Physical endpoint configuration for the egress device
+ * @ep: logical endpoint configuration of the packet originator
+ * (e.g. RmNet virtual network device)
+ *
+ * Called if and only if MAP is configured in the egress device's egress data
+ * format. Will expand skb if there is insufficient headroom for MAP protocol.
+ * Note: headroom expansion will incur a performance penalty.
+ *
+ * Return:
+ * - 0 on success
+ * - 1 on failure
+ */
+static int rmnet_map_egress_handler(struct sk_buff *skb,
+ struct rmnet_phys_ep_conf_s *config,
+ struct rmnet_logical_ep_conf_s *ep)
+{
+ int required_headroom, additional_header_length;
+ struct rmnet_map_header_s *map_header;
+
+ additional_header_length = 0;
+
+ required_headroom = sizeof(struct rmnet_map_header_s);
+
+ LOGD("headroom of %d bytes", required_headroom);
+
+ if (skb_headroom(skb) < required_headroom) {
+ if (pskb_expand_head(skb, required_headroom, 0, GFP_KERNEL)) {
+ LOGD("Failed to add headroom of %d bytes",
+ required_headroom);
+ return 1;
+ }
+ }
+
+ map_header = rmnet_map_add_map_header(skb, additional_header_length);
+
+ if (!map_header) {
+ LOGD("%s", "Failed to add MAP header to egress packet");
+ return 1;
+ }
+
+ if (config->egress_data_format & RMNET_EGRESS_FORMAT_MUXING) {
+ if (ep->mux_id == 0xff)
+ map_header->mux_id = 0;
+ else
+ map_header->mux_id = ep->mux_id;
+ }
+
+ skb->protocol = htons(ETH_P_MAP);
+
+ if (config->egress_data_format & RMNET_EGRESS_FORMAT_AGGREGATION) {
+ rmnet_map_aggregate(skb, config);
+ return RMNET_MAP_CONSUMED;
+ }
+
+ return RMNET_MAP_SUCCESS;
+}
+/* ***************** Ingress / Egress Entry Points ************************** */
+
+/**
+ * rmnet_ingress_handler() - Ingress handler entry point
+ * @skb: Packet being received
+ *
+ * Processes packet as per ingress data format for receiving device. Logical
+ * endpoint is determined from packet inspection. Packet is then sent to the
+ * egress device listed in the logical endpoint configuration.
+ *
+ * Return:
+ * - RX_HANDLER_PASS if packet is not processed by handler (caller must
+ * deal with the packet)
+ * - RX_HANDLER_CONSUMED if packet is forwarded or processed by MAP
+ */
+rx_handler_result_t rmnet_ingress_handler(struct sk_buff *skb)
+{
+ struct rmnet_phys_ep_conf_s *config;
+ struct net_device *dev;
+ int rc;
+
+ BUG_ON(!skb);
+
+ dev = skb->dev;
+ trace_rmnet_ingress_handler(skb);
+ rmnet_print_packet(skb, dev->name, 'r');
+
+ config = (struct rmnet_phys_ep_conf_s *)
+ rcu_dereference(skb->dev->rx_handler_data);
+
+ if (!config) {
+ LOGD("%s is not associated with rmnet_data", skb->dev->name);
+ kfree_skb(skb);
+ return RX_HANDLER_CONSUMED;
+ }
+
+ /* Sometimes devices operate in ethernet mode even though there is no
+ * ethernet header. This causes the skb->protocol to contain a bogus
+ * value and the skb->data pointer to be off by 14 bytes. Fix it if
+ * configured to do so
+ */
+ if (config->ingress_data_format & RMNET_INGRESS_FIX_ETHERNET) {
+ skb_push(skb, RMNET_ETHERNET_HEADER_LENGTH);
+ __rmnet_data_set_skb_proto(skb);
+ }
+
+ if (config->ingress_data_format & RMNET_INGRESS_FORMAT_MAP) {
+ if (RMNET_MAP_GET_CD_BIT(skb)) {
+ if (config->ingress_data_format
+ & RMNET_INGRESS_FORMAT_MAP_COMMANDS) {
+ rc = rmnet_map_command(skb, config);
+ } else {
+ LOGM("MAP command packet on %s; %s", dev->name,
+ "Not configured for MAP commands");
+ rmnet_kfree_skb(skb,
+ RMNET_STATS_SKBFREE_INGRESS_NOT_EXPECT_MAPC);
+ return RX_HANDLER_CONSUMED;
+ }
+ } else {
+ rc = rmnet_map_ingress_handler(skb, config);
+ }
+ } else {
+ switch (ntohs(skb->protocol)) {
+ case ETH_P_MAP:
+ if (config->local_ep.rmnet_mode ==
+ RMNET_EPMODE_BRIDGE) {
+ rc = rmnet_ingress_deliver_packet(skb, config);
+ } else {
+ LOGD("MAP packet on %s; MAP not set",
+ dev->name);
+ rmnet_kfree_skb(skb,
+ RMNET_STATS_SKBFREE_INGRESS_NOT_EXPECT_MAPD);
+ rc = RX_HANDLER_CONSUMED;
+ }
+ break;
+
+ case ETH_P_ARP:
+ case ETH_P_IP:
+ case ETH_P_IPV6:
+ rc = rmnet_ingress_deliver_packet(skb, config);
+ break;
+
+ default:
+ LOGD("Unknown skb->protocol 0x%04X",
+ ntohs(skb->protocol) & 0xFFFF);
+ rc = RX_HANDLER_PASS;
+ }
+ }
+
+ return rc;
+}
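The flag checks in the ingress handler reduce to a little bitmask arithmetic: a MAP command packet is only processed when both the MAP format flag and the MAP-commands flag are set on the physical endpoint, otherwise it is dropped. A user-space sketch of that predicate (the flag values here are illustrative placeholders, not the real `RMNET_INGRESS_*` constants from the UAPI headers):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for the real RMNET_INGRESS_* flags. */
#define ING_FIX_ETHERNET    (1u << 0)
#define ING_FORMAT_MAP      (1u << 1)
#define ING_FORMAT_MAP_CMDS (1u << 2)

/* Mirrors the MAP-command branch of rmnet_ingress_handler():
 * a command packet is accepted only when the endpoint is in MAP
 * mode AND is configured to process MAP commands. */
static int map_command_accepted(uint32_t ingress_data_format)
{
	if (!(ingress_data_format & ING_FORMAT_MAP))
		return 0;
	return (ingress_data_format & ING_FORMAT_MAP_CMDS) ? 1 : 0;
}
```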
+
+/**
+ * rmnet_rx_handler() - Rx handler callback registered with kernel
+ * @pskb: Packet to be processed by rx handler
+ *
+ * Standard kernel-expected footprint for rx handlers. Calls
+ * rmnet_ingress_handler() with correctly formatted arguments.
+ *
+ * Return:
+ * - Whatever rmnet_ingress_handler() returns
+ */
+rx_handler_result_t rmnet_rx_handler(struct sk_buff **pskb)
+{
+ return rmnet_ingress_handler(*pskb);
+}
+
+/**
+ * rmnet_egress_handler() - Egress handler entry point
+ * @skb: packet to transmit
+ * @ep: logical endpoint configuration of the packet originator
+ * (e.g. RmNet virtual network device)
+ *
+ * Modifies packet as per logical endpoint configuration and egress data format
+ * for egress device configured in logical endpoint. Packet is then transmitted
+ * on the egress device.
+ */
+void rmnet_egress_handler(struct sk_buff *skb,
+ struct rmnet_logical_ep_conf_s *ep)
+{
+ struct rmnet_phys_ep_conf_s *config;
+ struct net_device *orig_dev;
+ int rc;
+
+ orig_dev = skb->dev;
+ skb->dev = ep->egress_dev;
+
+ config = (struct rmnet_phys_ep_conf_s *)
+ rcu_dereference(skb->dev->rx_handler_data);
+
+ if (!config) {
+ LOGD("%s is not associated with rmnet_data", skb->dev->name);
+ kfree_skb(skb);
+ return;
+ }
+
+ LOGD("Packet going out on %s with egress format 0x%08X",
+ skb->dev->name, config->egress_data_format);
+
+ if (config->egress_data_format & RMNET_EGRESS_FORMAT_MAP) {
+ switch (rmnet_map_egress_handler(skb, config, ep)) {
+ case RMNET_MAP_CONSUMED:
+ LOGD("%s", "MAP process consumed packet");
+ return;
+
+ case RMNET_MAP_SUCCESS:
+ break;
+
+ default:
+ LOGD("MAP egress failed on packet on %s",
+ skb->dev->name);
+ rmnet_kfree_skb(skb, RMNET_STATS_SKBFREE_EGR_MAPFAIL);
+ return;
+ }
+ }
+
+ if (ep->rmnet_mode == RMNET_EPMODE_VND)
+ rmnet_vnd_tx_fixup(skb, orig_dev);
+
+ rmnet_print_packet(skb, skb->dev->name, 't');
+ trace_rmnet_egress_handler(skb);
+ rc = dev_queue_xmit(skb);
+ if (rc != 0) {
+ LOGD("Failed to queue packet for transmission on [%s]",
+ skb->dev->name);
+ }
+ rmnet_stats_queue_xmit(rc, RMNET_STATS_QUEUE_XMIT_EGRESS);
+}
diff --git a/net/rmnet_data/rmnet_data_handlers.h b/net/rmnet_data/rmnet_data_handlers.h
new file mode 100644
index 0000000..42f9e6f
--- /dev/null
+++ b/net/rmnet_data/rmnet_data_handlers.h
@@ -0,0 +1,25 @@
+/*
+ * Copyright (c) 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * RMNET Data ingress/egress handler
+ *
+ */
+
+#ifndef _RMNET_DATA_HANDLERS_H_
+#define _RMNET_DATA_HANDLERS_H_
+
+void rmnet_egress_handler(struct sk_buff *skb,
+ struct rmnet_logical_ep_conf_s *ep);
+
+rx_handler_result_t rmnet_rx_handler(struct sk_buff **pskb);
+
+#endif /* _RMNET_DATA_HANDLERS_H_ */
diff --git a/net/rmnet_data/rmnet_data_main.c b/net/rmnet_data/rmnet_data_main.c
new file mode 100644
index 0000000..830ce41
--- /dev/null
+++ b/net/rmnet_data/rmnet_data_main.c
@@ -0,0 +1,83 @@
+/*
+ * Copyright (c) 2013-2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ *
+ * RMNET Data generic framework
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/export.h>
+#ifdef CONFIG_QCT_9K_MODEM
+#include <mach/board_htc.h>
+#endif
+#include "rmnet_data_private.h"
+#include "rmnet_data_config.h"
+#include "rmnet_data_vnd.h"
+
+/* ***************** Trace Points ******************************************* */
+#define CREATE_TRACE_POINTS
+#include "rmnet_data_trace.h"
+
+/* ***************** Module Parameters ************************************** */
+unsigned int rmnet_data_log_level = RMNET_LOG_LVL_ERR | RMNET_LOG_LVL_HI;
+module_param(rmnet_data_log_level, uint, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(rmnet_data_log_level, "Logging level");
+
+unsigned int rmnet_data_log_module_mask;
+module_param(rmnet_data_log_module_mask, uint, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(rmnet_data_log_module_mask, "Logging module mask");
+
+/* ***************** Startup/Shutdown *************************************** */
+
+/**
+ * rmnet_init() - Module initialization
+ *
+ * todo: check for (and init) startup errors
+ */
+static int __init rmnet_init(void)
+{
+#ifdef CONFIG_QCT_9K_MODEM
+ if (is_mdm_modem()) {
+ rmnet_config_init();
+ rmnet_vnd_init();
+
+ LOGL("%s", "RMNET Data driver loaded successfully");
+ }
+#else
+ rmnet_config_init();
+ rmnet_vnd_init();
+
+ LOGL("%s", "RMNET Data driver loaded successfully");
+#endif
+ return 0;
+}
+
+static void __exit rmnet_exit(void)
+{
+#ifdef CONFIG_QCT_9K_MODEM
+ if (is_mdm_modem()) {
+ rmnet_config_exit();
+ rmnet_vnd_exit();
+ }
+#else
+ rmnet_config_exit();
+ rmnet_vnd_exit();
+#endif
+}
+
+module_init(rmnet_init)
+module_exit(rmnet_exit)
+MODULE_LICENSE("GPL v2");
diff --git a/net/rmnet_data/rmnet_data_private.h b/net/rmnet_data/rmnet_data_private.h
new file mode 100644
index 0000000..2979234
--- /dev/null
+++ b/net/rmnet_data/rmnet_data_private.h
@@ -0,0 +1,77 @@
+/*
+ * Copyright (c) 2013-2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _RMNET_DATA_PRIVATE_H_
+#define _RMNET_DATA_PRIVATE_H_
+
+#define RMNET_DATA_MAX_VND 32
+#define RMNET_DATA_MAX_PACKET_SIZE 16384
+#define RMNET_DATA_DFLT_PACKET_SIZE 1500
+#define RMNET_DATA_DEV_NAME_STR "rmnet_data"
+#define RMNET_DATA_NEEDED_HEADROOM 16
+#define RMNET_DATA_TX_QUEUE_LEN 1000
+#define RMNET_ETHERNET_HEADER_LENGTH 14
+
+extern unsigned int rmnet_data_log_level;
+extern unsigned int rmnet_data_log_module_mask;
+
+#define RMNET_INIT_OK 0
+#define RMNET_INIT_ERROR 1
+
+#define RMNET_LOG_LVL_DBG (1<<4)
+#define RMNET_LOG_LVL_LOW (1<<3)
+#define RMNET_LOG_LVL_MED (1<<2)
+#define RMNET_LOG_LVL_HI (1<<1)
+#define RMNET_LOG_LVL_ERR (1<<0)
+
+#define RMNET_LOG_MODULE(X) \
+ static uint32_t rmnet_mod_mask = X
+
+#define RMNET_DATA_LOGMASK_CONFIG (1<<0)
+#define RMNET_DATA_LOGMASK_HANDLER (1<<1)
+#define RMNET_DATA_LOGMASK_VND (1<<2)
+#define RMNET_DATA_LOGMASK_MAPD (1<<3)
+#define RMNET_DATA_LOGMASK_MAPC (1<<4)
+
+#define LOGE(fmt, ...) do { if (rmnet_data_log_level & RMNET_LOG_LVL_ERR) \
+ pr_err("[RMNET:ERR] %s(): " fmt "\n", __func__, \
+ ##__VA_ARGS__); \
+ } while (0)
+
+#define LOGH(fmt, ...) do { if (rmnet_data_log_level & RMNET_LOG_LVL_HI) \
+ pr_err("[RMNET:HI] %s(): " fmt "\n", __func__, \
+ ##__VA_ARGS__); \
+ } while (0)
+
+#define LOGM(fmt, ...) do { if (rmnet_data_log_level & RMNET_LOG_LVL_MED) \
+ pr_warn("[RMNET:MED] %s(): " fmt "\n", __func__, \
+ ##__VA_ARGS__); \
+ } while (0)
+
+#define LOGL(fmt, ...) do { if (unlikely \
+ (rmnet_data_log_level & RMNET_LOG_LVL_LOW)) \
+ pr_notice("[RMNET:LOW] %s(): " fmt "\n", __func__, \
+ ##__VA_ARGS__); \
+ } while (0)
+
+/* Don't use pr_debug as it is compiled out of the kernel. We can be sure of
+ * minimal impact as LOGD is not enabled by default.
+ */
+#define LOGD(fmt, ...) do { if (unlikely( \
+ (rmnet_data_log_level & RMNET_LOG_LVL_DBG) \
+ && (rmnet_data_log_module_mask & rmnet_mod_mask))) \
+ pr_notice("[RMNET:DBG] %s(): " fmt "\n", __func__, \
+ ##__VA_ARGS__); \
+ } while (0)
+
+#endif /* _RMNET_DATA_PRIVATE_H_ */
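The LOGD() gate above combines two independent masks: the global verbosity level must include the DBG bit, and the per-file module mask (installed via RMNET_LOG_MODULE) must intersect the runtime `rmnet_data_log_module_mask` parameter. A stand-alone sketch of that predicate (the level constant mirrors RMNET_LOG_LVL_DBG; the mask values are arbitrary examples):

```c
#include <assert.h>
#include <stdint.h>

#define LVL_DBG (1u << 4)  /* same bit position as RMNET_LOG_LVL_DBG */

/* Mirrors the condition inside the LOGD() macro: debug output is
 * emitted only when the DBG level bit is set AND the module owning
 * the call site is selected in the runtime module mask. */
static int logd_enabled(uint32_t log_level, uint32_t module_mask,
			uint32_t mod_mask)
{
	return (log_level & LVL_DBG) && (module_mask & mod_mask);
}
```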
diff --git a/net/rmnet_data/rmnet_data_stats.c b/net/rmnet_data/rmnet_data_stats.c
new file mode 100644
index 0000000..7643cb6
--- /dev/null
+++ b/net/rmnet_data/rmnet_data_stats.c
@@ -0,0 +1,100 @@
+/*
+ * Copyright (c) 2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ *
+ * RMNET Data statistics
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/export.h>
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+#include "rmnet_data_private.h"
+#include "rmnet_data_stats.h"
+
+enum rmnet_deagg_e {
+ RMNET_STATS_AGG_BUFF,
+ RMNET_STATS_AGG_PKT,
+ RMNET_STATS_AGG_MAX
+};
+
+static DEFINE_SPINLOCK(rmnet_skb_free_lock);
+unsigned long int skb_free[RMNET_STATS_SKBFREE_MAX];
+module_param_array(skb_free, ulong, 0, S_IRUGO);
+MODULE_PARM_DESC(skb_free, "SKBs dropped or freed");
+
+static DEFINE_SPINLOCK(rmnet_queue_xmit_lock);
+unsigned long int queue_xmit[RMNET_STATS_QUEUE_XMIT_MAX*2];
+module_param_array(queue_xmit, ulong, 0, S_IRUGO);
+MODULE_PARM_DESC(queue_xmit, "SKBs queued");
+
+static DEFINE_SPINLOCK(rmnet_deagg_count);
+unsigned long int deagg_count[RMNET_STATS_AGG_MAX];
+module_param_array(deagg_count, ulong, 0, S_IRUGO);
+MODULE_PARM_DESC(deagg_count, "SKBs deaggregated");
+
+static DEFINE_SPINLOCK(rmnet_agg_count);
+unsigned long int agg_count[RMNET_STATS_AGG_MAX];
+module_param_array(agg_count, ulong, 0, S_IRUGO);
+MODULE_PARM_DESC(agg_count, "SKBs aggregated");
+
+void rmnet_kfree_skb(struct sk_buff *skb, unsigned int reason)
+{
+ unsigned long flags;
+
+ if (reason >= RMNET_STATS_SKBFREE_MAX)
+ reason = RMNET_STATS_SKBFREE_UNKNOWN;
+
+ spin_lock_irqsave(&rmnet_skb_free_lock, flags);
+ skb_free[reason]++;
+ spin_unlock_irqrestore(&rmnet_skb_free_lock, flags);
+
+ if (skb)
+ kfree_skb(skb);
+}
+
+void rmnet_stats_queue_xmit(int rc, unsigned int reason)
+{
+ unsigned long flags;
+
+ if (rc != 0)
+ reason += RMNET_STATS_QUEUE_XMIT_MAX;
+ if (reason >= RMNET_STATS_QUEUE_XMIT_MAX*2)
+ reason = RMNET_STATS_QUEUE_XMIT_UNKNOWN;
+
+ spin_lock_irqsave(&rmnet_queue_xmit_lock, flags);
+ queue_xmit[reason]++;
+ spin_unlock_irqrestore(&rmnet_queue_xmit_lock, flags);
+}
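rmnet_stats_queue_xmit() folds successes and failures into a single counter array: successful transmits occupy indices [0, MAX) and failures the mirrored range [MAX, 2*MAX), with out-of-range reasons collapsing to the UNKNOWN slot. The index computation in isolation (XMIT_MAX here is a placeholder for RMNET_STATS_QUEUE_XMIT_MAX):

```c
#include <assert.h>

#define XMIT_MAX 5  /* placeholder for RMNET_STATS_QUEUE_XMIT_MAX */

/* Mirrors the bucket selection in rmnet_stats_queue_xmit():
 * a non-zero rc shifts the reason into the failure half of the
 * array; anything out of range falls back to slot 0 (UNKNOWN). */
static unsigned int xmit_bucket(int rc, unsigned int reason)
{
	if (rc != 0)
		reason += XMIT_MAX;
	if (reason >= XMIT_MAX * 2)
		reason = 0;
	return reason;
}
```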
+
+void rmnet_stats_agg_pkts(int aggcount)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&rmnet_agg_count, flags);
+ agg_count[RMNET_STATS_AGG_BUFF]++;
+ agg_count[RMNET_STATS_AGG_PKT] += aggcount;
+ spin_unlock_irqrestore(&rmnet_agg_count, flags);
+}
+
+void rmnet_stats_deagg_pkts(int aggcount)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&rmnet_deagg_count, flags);
+ deagg_count[RMNET_STATS_AGG_BUFF]++;
+ deagg_count[RMNET_STATS_AGG_PKT] += aggcount;
+ spin_unlock_irqrestore(&rmnet_deagg_count, flags);
+}
+
diff --git a/net/rmnet_data/rmnet_data_stats.h b/net/rmnet_data/rmnet_data_stats.h
new file mode 100644
index 0000000..6b5ec1f
--- /dev/null
+++ b/net/rmnet_data/rmnet_data_stats.h
@@ -0,0 +1,56 @@
+/*
+ * Copyright (c) 2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ *
+ * RMNET Data statistics
+ *
+ */
+
+#ifndef _RMNET_DATA_STATS_H_
+#define _RMNET_DATA_STATS_H_
+
+enum rmnet_skb_free_e {
+ RMNET_STATS_SKBFREE_UNKNOWN,
+ RMNET_STATS_SKBFREE_BRDG_NO_EGRESS,
+ RMNET_STATS_SKBFREE_DELIVER_NO_EP,
+ RMNET_STATS_SKBFREE_IPINGRESS_NO_EP,
+ RMNET_STATS_SKBFREE_MAPINGRESS_BAD_MUX,
+ RMNET_STATS_SKBFREE_MAPINGRESS_MUX_NO_EP,
+ RMNET_STATS_SKBFREE_MAPINGRESS_AGGBUF,
+ RMNET_STATS_SKBFREE_INGRESS_NOT_EXPECT_MAPD,
+ RMNET_STATS_SKBFREE_INGRESS_NOT_EXPECT_MAPC,
+ RMNET_STATS_SKBFREE_EGR_MAPFAIL,
+ RMNET_STATS_SKBFREE_VND_NO_EGRESS,
+ RMNET_STATS_SKBFREE_MAPC_BAD_MUX,
+ RMNET_STATS_SKBFREE_MAPC_MUX_NO_EP,
+ RMNET_STATS_SKBFREE_AGG_CPY_EXPAND,
+ RMNET_STATS_SKBFREE_AGG_INTO_BUFF,
+ RMNET_STATS_SKBFREE_DEAGG_MALFORMED,
+ RMNET_STATS_SKBFREE_DEAGG_CLONE_FAIL,
+ RMNET_STATS_SKBFREE_DEAGG_UNKOWN_IP_TYP,
+ RMNET_STATS_SKBFREE_MAX
+};
+
+enum rmnet_queue_xmit_e {
+ RMNET_STATS_QUEUE_XMIT_UNKNOWN,
+ RMNET_STATS_QUEUE_XMIT_EGRESS,
+ RMNET_STATS_QUEUE_XMIT_AGG_FILL_BUFFER,
+ RMNET_STATS_QUEUE_XMIT_AGG_TIMEOUT,
+ RMNET_STATS_QUEUE_XMIT_AGG_CPY_EXP_FAIL,
+ RMNET_STATS_QUEUE_XMIT_MAX
+};
+
+void rmnet_kfree_skb(struct sk_buff *skb, unsigned int reason);
+void rmnet_stats_queue_xmit(int rc, unsigned int reason);
+void rmnet_stats_deagg_pkts(int aggcount);
+void rmnet_stats_agg_pkts(int aggcount);
+#endif /* _RMNET_DATA_STATS_H_ */
diff --git a/net/rmnet_data/rmnet_data_trace.h b/net/rmnet_data/rmnet_data_trace.h
new file mode 100644
index 0000000..98d071a
--- /dev/null
+++ b/net/rmnet_data/rmnet_data_trace.h
@@ -0,0 +1,80 @@
+/* Copyright (c) 2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM rmnet_data
+#define TRACE_INCLUDE_FILE rmnet_data_trace
+
+#if !defined(_RMNET_DATA_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ)
+#define _RMNET_DATA_TRACE_H_
+
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+#include <linux/tracepoint.h>
+
+DECLARE_EVENT_CLASS(rmnet_handler_template,
+
+ TP_PROTO(struct sk_buff *skb),
+
+ TP_ARGS(skb),
+
+ TP_STRUCT__entry(
+ __field(void *, skbaddr)
+ __field(unsigned int, len)
+ __string(name, skb->dev->name)
+ ),
+
+ TP_fast_assign(
+ __entry->skbaddr = skb;
+ __entry->len = skb->len;
+ __assign_str(name, skb->dev->name);
+ ),
+
+ TP_printk("dev=%s skbaddr=%p len=%u",
+ __get_str(name), __entry->skbaddr, __entry->len)
+)
+
+DEFINE_EVENT(rmnet_handler_template, rmnet_egress_handler,
+
+ TP_PROTO(struct sk_buff *skb),
+
+ TP_ARGS(skb)
+);
+
+DEFINE_EVENT(rmnet_handler_template, rmnet_ingress_handler,
+
+ TP_PROTO(struct sk_buff *skb),
+
+ TP_ARGS(skb)
+);
+
+DEFINE_EVENT(rmnet_handler_template, rmnet_vnd_start_xmit,
+
+ TP_PROTO(struct sk_buff *skb),
+
+ TP_ARGS(skb)
+);
+
+DEFINE_EVENT(rmnet_handler_template, __rmnet_deliver_skb,
+
+ TP_PROTO(struct sk_buff *skb),
+
+ TP_ARGS(skb)
+);
+
+#endif /* _RMNET_DATA_TRACE_H_ */
+
+/* This part must be outside protection */
+#undef TRACE_INCLUDE_PATH
+#define TRACE_INCLUDE_PATH .
+#include <trace/define_trace.h>
+
diff --git a/net/rmnet_data/rmnet_data_vnd.c b/net/rmnet_data/rmnet_data_vnd.c
new file mode 100644
index 0000000..338e2ce
--- /dev/null
+++ b/net/rmnet_data/rmnet_data_vnd.c
@@ -0,0 +1,1060 @@
+/*
+ * Copyright (c) 2013-2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ *
+ * RMNET Data virtual network driver
+ *
+ */
+
+#include <linux/types.h>
+#include <linux/rmnet_data.h>
+#include <linux/msm_rmnet.h>
+#include <linux/etherdevice.h>
+#include <linux/if_arp.h>
+#include <linux/spinlock.h>
+#include <net/pkt_sched.h>
+#include <linux/atomic.h>
+#include "rmnet_data_config.h"
+#include "rmnet_data_handlers.h"
+#include "rmnet_data_private.h"
+#include "rmnet_map.h"
+#include "rmnet_data_vnd.h"
+#include "rmnet_data_stats.h"
+#include "rmnet_data_trace.h"
+
+RMNET_LOG_MODULE(RMNET_DATA_LOGMASK_VND);
+
+#define RMNET_MAP_FLOW_NUM_TC_HANDLE 3
+#define RMNET_VND_UF_ACTION_ADD 0
+#define RMNET_VND_UF_ACTION_DEL 1
+enum {
+ RMNET_VND_UPDATE_FLOW_OK,
+ RMNET_VND_UPDATE_FLOW_NO_ACTION,
+ RMNET_VND_UPDATE_FLOW_NO_MORE_ROOM,
+ RMNET_VND_UPDATE_FLOW_NO_VALID_LEFT
+};
+
+struct net_device *rmnet_devices[RMNET_DATA_MAX_VND];
+
+struct rmnet_map_flow_mapping_s {
+ struct list_head list;
+ uint32_t map_flow_id;
+ uint32_t tc_flow_valid[RMNET_MAP_FLOW_NUM_TC_HANDLE];
+ uint32_t tc_flow_id[RMNET_MAP_FLOW_NUM_TC_HANDLE];
+ atomic_t v4_seq;
+ atomic_t v6_seq;
+};
+
+struct rmnet_vnd_private_s {
+ uint32_t qos_version;
+ struct rmnet_logical_ep_conf_s local_ep;
+
+ rwlock_t flow_map_lock;
+ struct list_head flow_head;
+};
+
+#define RMNET_VND_FC_QUEUED 0
+#define RMNET_VND_FC_NOT_ENABLED 1
+#define RMNET_VND_FC_KMALLOC_ERR 2
+
+/* ***************** Helper Functions *************************************** */
+
+/**
+ * rmnet_vnd_add_qos_header() - Adds QoS header to front of skb->data
+ * @skb: Socket buffer ("packet") to modify
+ * @dev: Egress interface
+ *
+ * Does not check for sufficient headroom! Caller must make sure there is enough
+ * headroom.
+ */
+static void rmnet_vnd_add_qos_header(struct sk_buff *skb,
+ struct net_device *dev,
+ uint32_t qos_version)
+{
+ struct QMI_QOS_HDR_S *qmih;
+ struct qmi_qos_hdr8_s *qmi8h;
+
+ if (qos_version & RMNET_IOCTL_QOS_MODE_6) {
+ qmih = (struct QMI_QOS_HDR_S *)
+ skb_push(skb, sizeof(struct QMI_QOS_HDR_S));
+ qmih->version = 1;
+ qmih->flags = 0;
+ qmih->flow_id = skb->mark;
+ } else if (qos_version & RMNET_IOCTL_QOS_MODE_8) {
+ qmi8h = (struct qmi_qos_hdr8_s *)
+ skb_push(skb, sizeof(struct qmi_qos_hdr8_s));
+ /* Flags are 0 always */
+ qmi8h->hdr.version = 0;
+ qmi8h->hdr.flags = 0;
+ memset(qmi8h->reserved, 0, sizeof(qmi8h->reserved));
+ qmi8h->hdr.flow_id = skb->mark;
+ } else {
+ LOGD("Bad QoS version configured");
+ }
+}
+
+/* ***************** RX/TX Fixup ******************************************** */
+
+/**
+ * rmnet_vnd_rx_fixup() - Virtual Network Device receive fixup hook
+ * @skb: Socket buffer ("packet") to modify
+ * @dev: Virtual network device
+ *
+ * Additional VND specific packet processing for ingress packets
+ *
+ * Return:
+ * - RX_HANDLER_PASS if packet should continue to process in stack
+ * - RX_HANDLER_CONSUMED if packet should not be processed in stack
+ *
+ */
+int rmnet_vnd_rx_fixup(struct sk_buff *skb, struct net_device *dev)
+{
+ if (unlikely(!dev || !skb))
+ BUG();
+
+ dev->stats.rx_packets++;
+ dev->stats.rx_bytes += skb->len;
+
+ return RX_HANDLER_PASS;
+}
+
+/**
+ * rmnet_vnd_tx_fixup() - Virtual Network Device transmit fixup hook
+ * @skb: Socket buffer ("packet") to modify
+ * @dev: Virtual network device
+ *
+ * Additional VND specific packet processing for egress packets
+ *
+ * Return:
+ * - RX_HANDLER_PASS if packet should continue to be transmitted
+ * - RX_HANDLER_CONSUMED if packet should not be transmitted by stack
+ */
+int rmnet_vnd_tx_fixup(struct sk_buff *skb, struct net_device *dev)
+{
+ struct rmnet_vnd_private_s *dev_conf;
+
+ if (unlikely(!dev || !skb))
+ BUG();
+
+ dev_conf = (struct rmnet_vnd_private_s *) netdev_priv(dev);
+
+ dev->stats.tx_packets++;
+ dev->stats.tx_bytes += skb->len;
+
+ return RX_HANDLER_PASS;
+}
+
+/* ***************** Network Device Operations ****************************** */
+
+/**
+ * rmnet_vnd_start_xmit() - Transmit NDO callback
+ * @skb: Socket buffer ("packet") being sent from network stack
+ * @dev: Virtual Network Device
+ *
+ * Standard network driver operations hook to transmit packets on virtual
+ * network device. Called by network stack. Packet is not transmitted directly
+ * from here; instead it is given to the rmnet egress handler.
+ *
+ * Return:
+ * - NETDEV_TX_OK under all circumstances (cannot block/fail)
+ */
+static netdev_tx_t rmnet_vnd_start_xmit(struct sk_buff *skb,
+ struct net_device *dev)
+{
+ struct rmnet_vnd_private_s *dev_conf;
+
+ trace_rmnet_vnd_start_xmit(skb);
+ dev_conf = (struct rmnet_vnd_private_s *) netdev_priv(dev);
+ if (dev_conf->local_ep.egress_dev) {
+ /* QoS header should come after MAP header */
+ if (dev_conf->qos_version)
+ rmnet_vnd_add_qos_header(skb,
+ dev,
+ dev_conf->qos_version);
+ rmnet_egress_handler(skb, &dev_conf->local_ep);
+ } else {
+ dev->stats.tx_dropped++;
+ rmnet_kfree_skb(skb, RMNET_STATS_SKBFREE_VND_NO_EGRESS);
+ }
+ return NETDEV_TX_OK;
+}
+
+/**
+ * rmnet_vnd_change_mtu() - Change MTU NDO callback
+ * @dev: Virtual network device
+ * @new_mtu: New MTU value to set (in bytes)
+ *
+ * Standard network driver operations hook to set the MTU. Called by kernel to
+ * set the device MTU. Rejects values less than zero or greater than
+ * RMNET_DATA_MAX_PACKET_SIZE.
+ *
+ * Return:
+ * - 0 if successful
+ * - -EINVAL if new_mtu is out of range
+ */
+static int rmnet_vnd_change_mtu(struct net_device *dev, int new_mtu)
+{
+ if (new_mtu < 0 || new_mtu > RMNET_DATA_MAX_PACKET_SIZE)
+ return -EINVAL;
+
+ dev->mtu = new_mtu;
+ return 0;
+}
+
+#ifdef CONFIG_RMNET_DATA_FC
+static int _rmnet_vnd_do_qos_ioctl(struct net_device *dev,
+ struct ifreq *ifr,
+ int cmd)
+{
+ struct rmnet_vnd_private_s *dev_conf;
+ int rc;
+ struct rmnet_ioctl_data_s ioctl_data;
+
+ rc = 0;
+ dev_conf = (struct rmnet_vnd_private_s *) netdev_priv(dev);
+
+ switch (cmd) {
+
+ case RMNET_IOCTL_SET_QOS_ENABLE:
+ LOGM("RMNET_IOCTL_SET_QOS_ENABLE on %s", dev->name);
+ if (!dev_conf->qos_version)
+ dev_conf->qos_version = RMNET_IOCTL_QOS_MODE_6;
+ break;
+
+ case RMNET_IOCTL_SET_QOS_DISABLE:
+ LOGM("RMNET_IOCTL_SET_QOS_DISABLE on %s", dev->name);
+ dev_conf->qos_version = 0;
+ break;
+
+ case RMNET_IOCTL_GET_QOS: /* Get QoS header state */
+ LOGM("RMNET_IOCTL_GET_QOS on %s", dev->name);
+ ioctl_data.u.operation_mode = (dev_conf->qos_version ==
+ RMNET_IOCTL_QOS_MODE_6);
+ if (copy_to_user(ifr->ifr_ifru.ifru_data, &ioctl_data,
+ sizeof(struct rmnet_ioctl_data_s)))
+ rc = -EFAULT;
+ break;
+
+ case RMNET_IOCTL_FLOW_ENABLE:
+ LOGL("RMNET_IOCTL_FLOW_ENABLE on %s", dev->name);
+ if (copy_from_user(&ioctl_data, ifr->ifr_ifru.ifru_data,
+ sizeof(struct rmnet_ioctl_data_s))) {
+ rc = -EFAULT;
+ break;
+ }
+ tc_qdisc_flow_control(dev, ioctl_data.u.tcm_handle, 1);
+ break;
+
+ case RMNET_IOCTL_FLOW_DISABLE:
+ LOGL("RMNET_IOCTL_FLOW_DISABLE on %s", dev->name);
+ if (copy_from_user(&ioctl_data, ifr->ifr_ifru.ifru_data,
+ sizeof(struct rmnet_ioctl_data_s))) {
+ rc = -EFAULT;
+ break;
+ }
+ tc_qdisc_flow_control(dev, ioctl_data.u.tcm_handle, 0);
+ break;
+
+ default:
+ rc = -EINVAL;
+ }
+
+ return rc;
+}
+
+struct rmnet_vnd_fc_work {
+ struct work_struct work;
+ struct net_device *dev;
+ uint32_t tc_handle;
+ int enable;
+};
+
+static void _rmnet_vnd_wq_flow_control(struct work_struct *work)
+{
+ struct rmnet_vnd_fc_work *fcwork;
+
+ fcwork = (struct rmnet_vnd_fc_work *)work;
+
+ rtnl_lock();
+ tc_qdisc_flow_control(fcwork->dev, fcwork->tc_handle, fcwork->enable);
+ rtnl_unlock();
+
+ LOGL("[%s] handle:%08X enable:%d",
+ fcwork->dev->name, fcwork->tc_handle, fcwork->enable);
+
+ kfree(work);
+}
+
+static int _rmnet_vnd_do_flow_control(struct net_device *dev,
+ uint32_t tc_handle,
+ int enable)
+{
+ struct rmnet_vnd_fc_work *fcwork;
+
+ fcwork = kzalloc(sizeof(struct rmnet_vnd_fc_work), GFP_ATOMIC);
+ if (!fcwork)
+ return RMNET_VND_FC_KMALLOC_ERR;
+
+ INIT_WORK((struct work_struct *)fcwork, _rmnet_vnd_wq_flow_control);
+ fcwork->dev = dev;
+ fcwork->tc_handle = tc_handle;
+ fcwork->enable = enable;
+
+ schedule_work((struct work_struct *)fcwork);
+ return RMNET_VND_FC_QUEUED;
+}
+#else
+static int _rmnet_vnd_do_qos_ioctl(struct net_device *dev,
+ struct ifreq *ifr,
+ int cmd)
+{
+ return -EINVAL;
+}
+
+static inline int _rmnet_vnd_do_flow_control(struct net_device *dev,
+ uint32_t tc_handle,
+ int enable)
+{
+ LOGD("[%s] called with no QoS support", dev->name);
+ return RMNET_VND_FC_NOT_ENABLED;
+}
+#endif /* CONFIG_RMNET_DATA_FC */
+
+static int rmnet_vnd_ioctl_extended(struct net_device *dev, struct ifreq *ifr)
+{
+ struct rmnet_vnd_private_s *dev_conf;
+ struct rmnet_ioctl_extended_s ext_cmd;
+ int rc = 0;
+
+ dev_conf = (struct rmnet_vnd_private_s *) netdev_priv(dev);
+
+ rc = copy_from_user(&ext_cmd, ifr->ifr_ifru.ifru_data,
+ sizeof(struct rmnet_ioctl_extended_s));
+ if (rc) {
+ LOGM("copy_from_user() failed");
+ return rc;
+ }
+
+ switch (ext_cmd.extended_ioctl) {
+ case RMNET_IOCTL_GET_SUPPORTED_FEATURES:
+ ext_cmd.u.data = 0;
+ break;
+
+ case RMNET_IOCTL_GET_DRIVER_NAME:
+ strlcpy(ext_cmd.u.if_name, "rmnet_data",
+ sizeof(ext_cmd.u.if_name));
+ break;
+
+ case RMNET_IOCTL_GET_SUPPORTED_QOS_MODES:
+ ext_cmd.u.data = RMNET_IOCTL_QOS_MODE_6
+ | RMNET_IOCTL_QOS_MODE_8;
+ break;
+
+ case RMNET_IOCTL_GET_QOS_VERSION:
+ ext_cmd.u.data = dev_conf->qos_version;
+ break;
+
+ case RMNET_IOCTL_SET_QOS_VERSION:
+ if (ext_cmd.u.data == RMNET_IOCTL_QOS_MODE_6
+ || ext_cmd.u.data == RMNET_IOCTL_QOS_MODE_8
+ || ext_cmd.u.data == 0) {
+ dev_conf->qos_version = ext_cmd.u.data;
+ } else {
+ rc = -EINVAL;
+ goto done;
+ }
+ break;
+
+ default:
+ rc = -EINVAL;
+ goto done;
+ }
+
+ rc = copy_to_user(ifr->ifr_ifru.ifru_data, &ext_cmd,
+ sizeof(struct rmnet_ioctl_extended_s));
+ if (rc)
+ LOGM("copy_to_user() failed");
+
+done:
+ return rc;
+}
+
+
+/**
+ * rmnet_vnd_ioctl() - IOCTL NDO callback
+ * @dev: Virtual network device
+ * @ifreq: User data
+ * @cmd: IOCTL command value
+ *
+ * Standard network driver operations hook to process IOCTLs. Called by kernel
+ * to process non-standard IOCTLs for the device.
+ *
+ * Return:
+ * - 0 if successful
+ * - -EINVAL if unknown IOCTL
+ */
+static int rmnet_vnd_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+{
+ struct rmnet_vnd_private_s *dev_conf;
+ int rc;
+ struct rmnet_ioctl_data_s ioctl_data;
+
+ rc = 0;
+ dev_conf = (struct rmnet_vnd_private_s *) netdev_priv(dev);
+
+ rc = _rmnet_vnd_do_qos_ioctl(dev, ifr, cmd);
+ if (rc != -EINVAL)
+ return rc;
+ rc = 0; /* Reset rc as it may contain -EINVAL from above */
+
+ switch (cmd) {
+
+ case RMNET_IOCTL_OPEN: /* Do nothing. Support legacy behavior */
+ LOGM("RMNET_IOCTL_OPEN on %s (ignored)", dev->name);
+ break;
+
+ case RMNET_IOCTL_CLOSE: /* Do nothing. Support legacy behavior */
+ LOGM("RMNET_IOCTL_CLOSE on %s (ignored)", dev->name);
+ break;
+
+ case RMNET_IOCTL_SET_LLP_ETHERNET:
+ LOGM("RMNET_IOCTL_SET_LLP_ETHERNET on %s (no support)",
+ dev->name);
+ rc = -EINVAL;
+ break;
+
+ case RMNET_IOCTL_SET_LLP_IP: /* Do nothing. Support legacy behavior */
+ LOGM("RMNET_IOCTL_SET_LLP_IP on %s (ignored)", dev->name);
+ break;
+
+ case RMNET_IOCTL_GET_LLP: /* Always return IP mode */
+ LOGM("RMNET_IOCTL_GET_LLP on %s", dev->name);
+ ioctl_data.u.operation_mode = RMNET_MODE_LLP_IP;
+ if (copy_to_user(ifr->ifr_ifru.ifru_data, &ioctl_data,
+ sizeof(struct rmnet_ioctl_data_s)))
+ rc = -EFAULT;
+ break;
+
+ case RMNET_IOCTL_EXTENDED:
+ rc = rmnet_vnd_ioctl_extended(dev, ifr);
+ break;
+
+ default:
+ LOGH("Unknown IOCTL 0x%08X", cmd);
+ rc = -EINVAL;
+ }
+
+ return rc;
+}
+
+static const struct net_device_ops rmnet_data_vnd_ops = {
+ .ndo_init = 0,
+ .ndo_start_xmit = rmnet_vnd_start_xmit,
+ .ndo_do_ioctl = rmnet_vnd_ioctl,
+ .ndo_change_mtu = rmnet_vnd_change_mtu,
+ .ndo_set_mac_address = 0,
+ .ndo_validate_addr = 0,
+};
+
+/**
+ * rmnet_vnd_setup() - net_device initialization callback
+ * @dev: Virtual network device
+ *
+ * Called by kernel whenever a new rmnet_data<n> device is created. Sets MTU,
+ * flags, ARP type, needed headroom, etc...
+ *
+ * todo: What is watchdog_timeo? Do we need to explicitly set it?
+ */
+static void rmnet_vnd_setup(struct net_device *dev)
+{
+ struct rmnet_vnd_private_s *dev_conf;
+
+ LOGM("Setting up device %s", dev->name);
+
+ /* Clear out private data */
+ dev_conf = (struct rmnet_vnd_private_s *) netdev_priv(dev);
+ memset(dev_conf, 0, sizeof(struct rmnet_vnd_private_s));
+
+ dev->netdev_ops = &rmnet_data_vnd_ops;
+ dev->mtu = RMNET_DATA_DFLT_PACKET_SIZE;
+ dev->needed_headroom = RMNET_DATA_NEEDED_HEADROOM;
+ random_ether_addr(dev->dev_addr);
+ dev->watchdog_timeo = 1000;
+ dev->tx_queue_len = RMNET_DATA_TX_QUEUE_LEN;
+
+ /* Raw IP mode */
+ dev->header_ops = 0; /* No header */
+ dev->type = ARPHRD_RAWIP;
+ dev->hard_header_len = 0;
+ dev->flags &= ~(IFF_BROADCAST | IFF_MULTICAST);
+
+ /* Flow control */
+ rwlock_init(&dev_conf->flow_map_lock);
+ INIT_LIST_HEAD(&dev_conf->flow_head);
+}
+
+/* ***************** Exposed API ******************************************** */
+
+/**
+ * rmnet_vnd_exit() - Shutdown cleanup hook
+ *
+ * Called by RmNet main on module unload. Cleans up data structures and
+ * unregisters/frees net_devices.
+ */
+void rmnet_vnd_exit(void)
+{
+ int i;
+ for (i = 0; i < RMNET_DATA_MAX_VND; i++)
+ if (rmnet_devices[i]) {
+ unregister_netdev(rmnet_devices[i]);
+ free_netdev(rmnet_devices[i]);
+ }
+}
+
+/**
+ * rmnet_vnd_init() - Init hook
+ *
+ * Called by RmNet main on module load. Initializes data structures
+ */
+int rmnet_vnd_init(void)
+{
+ memset(rmnet_devices, 0,
+ sizeof(struct net_device *) * RMNET_DATA_MAX_VND);
+ return 0;
+}
+
+/**
+ * rmnet_vnd_create_dev() - Create a new virtual network device node.
+ * @id: Virtual device node id
+ * @new_device: Pointer to newly created device node
+ * @prefix: Device name prefix
+ *
+ * Allocates structures for new virtual network devices. Sets the name of the
+ * new device and registers it with the network stack. Device will appear in
+ * ifconfig list after this is called. If the prefix is null, then
+ * RMNET_DATA_DEV_NAME_STR will be assumed.
+ *
+ * Return:
+ * - 0 if successful
+ * - RMNET_CONFIG_BAD_ARGUMENTS if id is out of range or prefix is too long
+ * - RMNET_CONFIG_DEVICE_IN_USE if id already in use
+ * - RMNET_CONFIG_NOMEM if net_device allocation failed
+ * - RMNET_CONFIG_UNKNOWN_ERROR if register_netdevice() fails
+ */
+int rmnet_vnd_create_dev(int id, struct net_device **new_device,
+ const char *prefix)
+{
+ struct net_device *dev;
+ char dev_prefix[IFNAMSIZ];
+ int p, rc = 0;
+
+ if (id < 0 || id >= RMNET_DATA_MAX_VND) {
+ *new_device = 0;
+ return RMNET_CONFIG_BAD_ARGUMENTS;
+ }
+
+ if (rmnet_devices[id] != 0) {
+ *new_device = 0;
+ return RMNET_CONFIG_DEVICE_IN_USE;
+ }
+
+ if (!prefix)
+ p = scnprintf(dev_prefix, IFNAMSIZ, "%s%%d",
+ RMNET_DATA_DEV_NAME_STR);
+ else
+ p = scnprintf(dev_prefix, IFNAMSIZ, "%s%%d",
+ prefix);
+ if (p >= (IFNAMSIZ-1)) {
+ LOGE("Specified prefix longer than IFNAMSIZ");
+ return RMNET_CONFIG_BAD_ARGUMENTS;
+ }
+
+ dev = alloc_netdev(sizeof(struct rmnet_vnd_private_s),
+ dev_prefix,
+ rmnet_vnd_setup);
+ if (!dev) {
+ LOGE("Failed to allocate netdev for id %d", id);
+ *new_device = 0;
+ return RMNET_CONFIG_NOMEM;
+ }
+
+ rc = register_netdevice(dev);
+ if (rc != 0) {
+ LOGE("Failed to register netdev [%s]", dev->name);
+ free_netdev(dev);
+ *new_device = 0;
+ return RMNET_CONFIG_UNKNOWN_ERROR;
+ } else {
+ rmnet_devices[id] = dev;
+ *new_device = dev;
+ }
+
+ LOGM("Registered device %s", dev->name);
+ return rc;
+}
+
+/**
+ * rmnet_vnd_free_dev() - free a virtual network device node.
+ * @id: Virtual device node id
+ *
+ * Unregisters the virtual network device node and frees it.
+ * unregister_netdev locks the rtnl mutex, so the mutex must not be locked
+ * by the caller of the function. unregister_netdev enqueues the request to
+ * unregister the device into a TODO queue. The requests in the TODO queue
+ * are only done after the rtnl mutex is unlocked, therefore free_netdev has to
+ * be called after unlocking the rtnl mutex.
+ *
+ * Return:
+ * - 0 if successful
+ * - RMNET_CONFIG_NO_SUCH_DEVICE if id is invalid or not in range
+ * - RMNET_CONFIG_DEVICE_IN_USE if device has logical ep that wasn't unset
+ */
+int rmnet_vnd_free_dev(int id)
+{
+ struct rmnet_logical_ep_conf_s *epconfig_l;
+ struct net_device *dev;
+
+ rtnl_lock();
+ if ((id < 0) || (id >= RMNET_DATA_MAX_VND) || !rmnet_devices[id]) {
+ rtnl_unlock();
+ LOGM("Invalid id [%d]", id);
+ return RMNET_CONFIG_NO_SUCH_DEVICE;
+ }
+
+ epconfig_l = rmnet_vnd_get_le_config(rmnet_devices[id]);
+ if (epconfig_l && epconfig_l->refcount) {
+ rtnl_unlock();
+ return RMNET_CONFIG_DEVICE_IN_USE;
+ }
+
+ dev = rmnet_devices[id];
+ rmnet_devices[id] = 0;
+ rtnl_unlock();
+
+ if (dev) {
+ unregister_netdev(dev);
+ free_netdev(dev);
+ return 0;
+ } else {
+ return RMNET_CONFIG_NO_SUCH_DEVICE;
+ }
+}
+
+/**
+ * rmnet_vnd_get_name() - Gets the string name of a VND based on ID
+ * @id: Virtual device node id
+ * @name: Buffer to store name of virtual device node
+ * @name_len: Length of name buffer
+ *
+ * Copies the name of the virtual device node into the user's buffer. Returns
+ * an error if the buffer is null, or too small to hold the device name.
+ *
+ * Return:
+ * - 0 if successful
+ * - -EINVAL if name is null
+ * - -EINVAL if id is invalid or not in range
+ * - -EINVAL if name buffer is too small to hold the device name
+ */
+int rmnet_vnd_get_name(int id, char *name, int name_len)
+{
+ int p;
+
+ if (!name) {
+ LOGM("%s", "Bad arguments; name buffer null");
+ return -EINVAL;
+ }
+
+ if ((id < 0) || (id >= RMNET_DATA_MAX_VND) || !rmnet_devices[id]) {
+ LOGM("Invalid id [%d]", id);
+ return -EINVAL;
+ }
+
+ p = strlcpy(name, rmnet_devices[id]->name, name_len);
+ if (p >= name_len) {
+ LOGM("Buffer too small (%d) to fit device name", name_len);
+ return -EINVAL;
+ }
+ LOGL("Found mapping [%d]->\"%s\"", id, name);
+
+ return 0;
+}
+
+/**
+ * rmnet_vnd_is_vnd() - Determine if net_device is an RmNet-owned virtual device
+ * @dev: Network device to test
+ *
+ * Searches through list of known RmNet virtual devices. This function is O(n)
+ * and should not be used in the data path.
+ *
+ * Return:
+ * - 0 if device is not an RmNet virtual device
+ * - (id + 1) if device is an RmNet virtual device
+ */
+int rmnet_vnd_is_vnd(struct net_device *dev)
+{
+ /*
+ * This is not an efficient search, but, this will only be called in
+ * a configuration context, and the list is small.
+ */
+ int i;
+
+ if (!dev)
+ BUG();
+
+ for (i = 0; i < RMNET_DATA_MAX_VND; i++)
+ if (dev == rmnet_devices[i])
+ return i+1;
+
+ return 0;
+}
+
+/**
+ * rmnet_vnd_get_le_config() - Get the logical endpoint configuration
+ * @dev: Virtual device node
+ *
+ * Gets the logical endpoint configuration for a RmNet virtual network device
+ * node. Caller should confirm that the device is an RmNet VND before calling.
+ *
+ * Return:
+ * - Pointer to logical endpoint configuration structure
+ * - 0 (null) if dev is null
+ */
+struct rmnet_logical_ep_conf_s *rmnet_vnd_get_le_config(struct net_device *dev)
+{
+ struct rmnet_vnd_private_s *dev_conf;
+ if (!dev)
+ return 0;
+
+ dev_conf = (struct rmnet_vnd_private_s *) netdev_priv(dev);
+ if (!dev_conf)
+ BUG();
+
+ return &dev_conf->local_ep;
+}
+
+/**
+ * _rmnet_vnd_get_flow_map() - Gets object representing a MAP flow handle
+ * @dev_conf: Private configuration structure for virtual network device
+ * @map_flow: MAP flow handle ID
+ *
+ * Loops through available flow mappings and compares the MAP flow handle.
+ * Returns when mapping is found.
+ *
+ * Return:
+ * - Null if no mapping was found
+ * - Pointer to mapping otherwise
+ */
+static struct rmnet_map_flow_mapping_s *_rmnet_vnd_get_flow_map
+ (struct rmnet_vnd_private_s *dev_conf,
+ uint32_t map_flow)
+{
+ struct list_head *p;
+ struct rmnet_map_flow_mapping_s *itm;
+
+ list_for_each(p, &(dev_conf->flow_head)) {
+ itm = list_entry(p, struct rmnet_map_flow_mapping_s, list);
+
+ if (unlikely(!itm))
+ BUG();
+
+ if (itm->map_flow_id == map_flow)
+ return itm;
+ }
+ return 0;
+}
+
+/**
+ * _rmnet_vnd_update_flow_map() - Add or remove individual TC flow handles
+ * @action: One of RMNET_VND_UF_ACTION_ADD / RMNET_VND_UF_ACTION_DEL
+ * @itm: Flow mapping object
+ * @map_flow: TC flow handle
+ *
+ * RMNET_VND_UF_ACTION_ADD:
+ * Will check for a free mapping slot in the mapping object. If one is found,
+ * valid for that slot will be set to 1 and the value will be set.
+ *
+ * RMNET_VND_UF_ACTION_DEL:
+ * Will check for matching tc handle. If found, valid for that slot will be
+ * set to 0 and the value will also be zeroed.
+ *
+ * Return:
+ * - RMNET_VND_UPDATE_FLOW_OK tc flow handle is added/removed ok
+ * - RMNET_VND_UPDATE_FLOW_NO_MORE_ROOM if there are no more tc handles
+ * - RMNET_VND_UPDATE_FLOW_NO_VALID_LEFT if flow mapping is now empty
+ * - RMNET_VND_UPDATE_FLOW_NO_ACTION if no action was taken
+ */
+static int _rmnet_vnd_update_flow_map(uint8_t action,
+ struct rmnet_map_flow_mapping_s *itm,
+ uint32_t tc_flow)
+{
+ int rc, i, j;
+ rc = RMNET_VND_UPDATE_FLOW_OK;
+
+ switch (action) {
+ case RMNET_VND_UF_ACTION_ADD:
+ rc = RMNET_VND_UPDATE_FLOW_NO_MORE_ROOM;
+ for (i = 0; i < RMNET_MAP_FLOW_NUM_TC_HANDLE; i++) {
+ if (itm->tc_flow_valid[i] == 0) {
+ itm->tc_flow_valid[i] = 1;
+ itm->tc_flow_id[i] = tc_flow;
+ rc = RMNET_VND_UPDATE_FLOW_OK;
+ LOGD("{%p}->tc_flow_id[%d]=%08X",
+ itm, i, tc_flow);
+ break;
+ }
+ }
+ break;
+
+ case RMNET_VND_UF_ACTION_DEL:
+ j = 0;
+ rc = RMNET_VND_UPDATE_FLOW_OK;
+ for (i = 0; i < RMNET_MAP_FLOW_NUM_TC_HANDLE; i++) {
+ if (itm->tc_flow_valid[i] == 1) {
+ if (itm->tc_flow_id[i] == tc_flow) {
+ itm->tc_flow_valid[i] = 0;
+ itm->tc_flow_id[i] = 0;
+ j++;
+ LOGD("{%p}->tc_flow_id[%d]=0", itm, i);
+ }
+ } else {
+ j++;
+ }
+ }
+ if (j == RMNET_MAP_FLOW_NUM_TC_HANDLE)
+ rc = RMNET_VND_UPDATE_FLOW_NO_VALID_LEFT;
+ break;
+
+ default:
+ rc = RMNET_VND_UPDATE_FLOW_NO_ACTION;
+ break;
+ }
+ return rc;
+}
+
+/**
+ * rmnet_vnd_add_tc_flow() - Add a MAP/TC flow handle mapping
+ * @id: Virtual network device ID
+ * @map_flow: MAP flow handle
+ * @tc_flow: TC flow handle
+ *
+ * Checks for an existing flow mapping object corresponding to map_flow. If one
+ * is found, then it will try to add to the existing mapping object. Otherwise,
+ * a new mapping object is created.
+ *
+ * Return:
+ * - RMNET_CONFIG_OK if successful
+ * - RMNET_CONFIG_TC_HANDLE_FULL if there is no more room in the map object
+ * - RMNET_CONFIG_NOMEM failed to allocate a new map object
+ */
+int rmnet_vnd_add_tc_flow(uint32_t id, uint32_t map_flow, uint32_t tc_flow)
+{
+ struct rmnet_map_flow_mapping_s *itm;
+ struct net_device *dev;
+ struct rmnet_vnd_private_s *dev_conf;
+ int r;
+ unsigned long flags;
+
+ if ((id < 0) || (id >= RMNET_DATA_MAX_VND) || !rmnet_devices[id]) {
+ LOGM("Invalid VND id [%d]", id);
+ return RMNET_CONFIG_NO_SUCH_DEVICE;
+ }
+
+ dev = rmnet_devices[id];
+ dev_conf = (struct rmnet_vnd_private_s *) netdev_priv(dev);
+
+ if (!dev_conf)
+ BUG();
+
+ write_lock_irqsave(&dev_conf->flow_map_lock, flags);
+ itm = _rmnet_vnd_get_flow_map(dev_conf, map_flow);
+ if (itm) {
+ r = _rmnet_vnd_update_flow_map(RMNET_VND_UF_ACTION_ADD,
+ itm, tc_flow);
+ if (r != RMNET_VND_UPDATE_FLOW_OK) {
+ write_unlock_irqrestore(&dev_conf->flow_map_lock,
+ flags);
+ return RMNET_CONFIG_TC_HANDLE_FULL;
+ }
+ write_unlock_irqrestore(&dev_conf->flow_map_lock, flags);
+ return RMNET_CONFIG_OK;
+ }
+ write_unlock_irqrestore(&dev_conf->flow_map_lock, flags);
+
+ itm = kzalloc(sizeof(struct rmnet_map_flow_mapping_s), GFP_KERNEL);
+
+ if (!itm) {
+ LOGM("%s", "Failure allocating flow mapping");
+ return RMNET_CONFIG_NOMEM;
+ }
+
+ itm->map_flow_id = map_flow;
+ itm->tc_flow_valid[0] = 1;
+ itm->tc_flow_id[0] = tc_flow;
+
+ /* atomic_set() is the runtime equivalent of the static ATOMIC_INIT()
+  * initializer.
+  */
+ atomic_set(&itm->v4_seq, 0);
+ atomic_set(&itm->v6_seq, 0);
+
+ write_lock_irqsave(&dev_conf->flow_map_lock, flags);
+ list_add(&(itm->list), &(dev_conf->flow_head));
+ write_unlock_irqrestore(&dev_conf->flow_map_lock, flags);
+
+ LOGD("Created flow mapping [%s][0x%08X][0x%08X]@%p",
+ dev->name, itm->map_flow_id, itm->tc_flow_id[0], itm);
+
+ return RMNET_CONFIG_OK;
+}
+
+/**
+ * rmnet_vnd_del_tc_flow() - Delete a MAP/TC flow handle mapping
+ * @id: Virtual network device ID
+ * @map_flow: MAP flow handle
+ * @tc_flow: TC flow handle
+ *
+ * Checks for an existing flow mapping object corresponding to map_flow. If one
+ * is found, then it will try to remove the existing tc_flow mapping. If the
+ * mapping object no longer contains any mappings, then it is freed. Otherwise
+ * the mapping object is left in the list.
+ *
+ * Return:
+ * - RMNET_CONFIG_OK if successful or if there was no such tc_flow
+ * - RMNET_CONFIG_INVALID_REQUEST if there is no such map_flow
+ */
+int rmnet_vnd_del_tc_flow(uint32_t id, uint32_t map_flow, uint32_t tc_flow)
+{
+ struct rmnet_vnd_private_s *dev_conf;
+ struct net_device *dev;
+ struct rmnet_map_flow_mapping_s *itm;
+ int r;
+ unsigned long flags;
+ int rc = RMNET_CONFIG_OK;
+
+ if ((id < 0) || (id >= RMNET_DATA_MAX_VND) || !rmnet_devices[id]) {
+ LOGM("Invalid VND id [%d]", id);
+ return RMNET_CONFIG_NO_SUCH_DEVICE;
+ }
+
+ dev = rmnet_devices[id];
+ dev_conf = (struct rmnet_vnd_private_s *) netdev_priv(dev);
+
+ if (!dev_conf)
+ BUG();
+
+ r = RMNET_VND_UPDATE_FLOW_NO_ACTION;
+ write_lock_irqsave(&dev_conf->flow_map_lock, flags);
+ itm = _rmnet_vnd_get_flow_map(dev_conf, map_flow);
+ if (!itm) {
+ rc = RMNET_CONFIG_INVALID_REQUEST;
+ } else {
+ r = _rmnet_vnd_update_flow_map(RMNET_VND_UF_ACTION_DEL,
+ itm, tc_flow);
+ if (r == RMNET_VND_UPDATE_FLOW_NO_VALID_LEFT)
+ list_del(&(itm->list));
+ }
+ write_unlock_irqrestore(&dev_conf->flow_map_lock, flags);
+
+ if (r == RMNET_VND_UPDATE_FLOW_NO_VALID_LEFT) {
+ if (itm)
+ LOGD("Removed flow mapping [%s][0x%08X]@%p",
+ dev->name, itm->map_flow_id, itm);
+ kfree(itm);
+ }
+
+ return rc;
+}
+
+/**
+ * rmnet_vnd_do_flow_control() - Process flow control request
+ * @dev: Virtual network device node to do lookup on
+ * @map_flow_id: Flow ID from MAP message
+ * @v4_seq: pointer to IPv4 indication sequence number
+ * @v6_seq: pointer to IPv6 indication sequence number
+ * @enable: boolean to enable/disable flow.
+ *
+ * Return:
+ * - 0 if successful
+ * - 1 if no mapping is found
+ * - 2 if dev is not RmNet virtual network device node
+ */
+int rmnet_vnd_do_flow_control(struct net_device *dev,
+ uint32_t map_flow_id,
+ uint16_t v4_seq,
+ uint16_t v6_seq,
+ int enable)
+{
+ struct rmnet_vnd_private_s *dev_conf;
+ struct rmnet_map_flow_mapping_s *itm;
+ int error, i;
+ error = 0;
+
+ if (unlikely(!dev))
+ BUG();
+
+ if (!rmnet_vnd_is_vnd(dev)) {
+ return 2;
+ } else {
+ dev_conf = (struct rmnet_vnd_private_s *) netdev_priv(dev);
+ }
+
+ if (unlikely(!dev_conf))
+ BUG();
+
+ read_lock(&dev_conf->flow_map_lock);
+ itm = _rmnet_vnd_get_flow_map(dev_conf, map_flow_id);
+
+ if (!itm) {
+ LOGL("Got flow control request for unknown flow %08X",
+ map_flow_id);
+ goto fcdone;
+ }
+ if (v4_seq == 0 || v4_seq >= atomic_read(&(itm->v4_seq))) {
+ atomic_set(&(itm->v4_seq), v4_seq);
+ for (i = 0; i < RMNET_MAP_FLOW_NUM_TC_HANDLE; i++) {
+ if (itm->tc_flow_valid[i] == 1) {
+ LOGD("Found [%s][0x%08X][%d:0x%08X]",
+ dev->name, itm->map_flow_id, i,
+ itm->tc_flow_id[i]);
+
+ _rmnet_vnd_do_flow_control(dev,
+ itm->tc_flow_id[i],
+ enable);
+ }
+ }
+ } else {
+ LOGD("Internal seq(%hd) higher than called(%hd)",
+ atomic_read(&(itm->v4_seq)), v4_seq);
+ }
+
+fcdone:
+ read_unlock(&dev_conf->flow_map_lock);
+
+ return error;
+}
+
+/**
+ * rmnet_vnd_get_by_id() - Get VND by array index ID
+ * @id: Virtual network device id [0, RMNET_DATA_MAX_VND)
+ *
+ * Return:
+ * - 0 if no device or ID out of range
+ * - otherwise return pointer to VND net_device struct
+ */
+struct net_device *rmnet_vnd_get_by_id(int id)
+{
+ if (id < 0 || id >= RMNET_DATA_MAX_VND) {
+ pr_err("Bug; VND ID out of bounds");
+ BUG();
+ return 0;
+ }
+ return rmnet_devices[id];
+}
diff --git a/net/rmnet_data/rmnet_data_vnd.h b/net/rmnet_data/rmnet_data_vnd.h
new file mode 100644
index 0000000..cb57f57
--- /dev/null
+++ b/net/rmnet_data/rmnet_data_vnd.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright (c) 2013-2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * RMNET Data Virtual Network Device APIs
+ *
+ */
+
+#include <linux/types.h>
+
+#ifndef _RMNET_DATA_VND_H_
+#define _RMNET_DATA_VND_H_
+
+int rmnet_vnd_do_flow_control(struct net_device *dev,
+ uint32_t map_flow_id,
+ uint16_t v4_seq,
+ uint16_t v6_seq,
+ int enable);
+struct rmnet_logical_ep_conf_s *rmnet_vnd_get_le_config(struct net_device *dev);
+int rmnet_vnd_get_name(int id, char *name, int name_len);
+int rmnet_vnd_create_dev(int id, struct net_device **new_device,
+ const char *prefix);
+int rmnet_vnd_free_dev(int id);
+int rmnet_vnd_rx_fixup(struct sk_buff *skb, struct net_device *dev);
+int rmnet_vnd_tx_fixup(struct sk_buff *skb, struct net_device *dev);
+int rmnet_vnd_is_vnd(struct net_device *dev);
+int rmnet_vnd_add_tc_flow(uint32_t id, uint32_t map_flow, uint32_t tc_flow);
+int rmnet_vnd_del_tc_flow(uint32_t id, uint32_t map_flow, uint32_t tc_flow);
+int rmnet_vnd_init(void);
+void rmnet_vnd_exit(void);
+struct net_device *rmnet_vnd_get_by_id(int id);
+
+#endif /* _RMNET_DATA_VND_H_ */
diff --git a/net/rmnet_data/rmnet_map.h b/net/rmnet_data/rmnet_map.h
new file mode 100644
index 0000000..8f4e740
--- /dev/null
+++ b/net/rmnet_data/rmnet_map.h
@@ -0,0 +1,135 @@
+/*
+ * Copyright (c) 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/types.h>
+#include <linux/spinlock.h>
+
+#ifndef _RMNET_MAP_H_
+#define _RMNET_MAP_H_
+
+struct rmnet_map_header_s {
+#ifndef RMNET_USE_BIG_ENDIAN_STRUCTS
+ uint8_t pad_len:6;
+ uint8_t reserved_bit:1;
+ uint8_t cd_bit:1;
+#else
+ uint8_t cd_bit:1;
+ uint8_t reserved_bit:1;
+ uint8_t pad_len:6;
+#endif /* RMNET_USE_BIG_ENDIAN_STRUCTS */
+ uint8_t mux_id;
+ uint16_t pkt_len;
+} __aligned(1);
+
+struct rmnet_map_control_command_s {
+ uint8_t command_name;
+#ifndef RMNET_USE_BIG_ENDIAN_STRUCTS
+ uint8_t cmd_type:2;
+ uint8_t reserved:6;
+#else
+ uint8_t reserved:6;
+ uint8_t cmd_type:2;
+#endif /* RMNET_USE_BIG_ENDIAN_STRUCTS */
+ uint16_t reserved2;
+ uint32_t transaction_id;
+ union {
+ uint8_t data[65528];
+ struct {
+#ifndef RMNET_USE_BIG_ENDIAN_STRUCTS
+ uint16_t ip_family:2;
+ uint16_t reserved:14;
+#else
+ uint16_t reserved:14;
+ uint16_t ip_family:2;
+#endif /* RMNET_USE_BIG_ENDIAN_STRUCTS */
+ uint16_t flow_control_seq_num;
+ uint32_t qos_id;
+ } flow_control;
+ };
+} __aligned(1);
+
+enum rmnet_map_results_e {
+ RMNET_MAP_SUCCESS,
+ RMNET_MAP_CONSUMED,
+ RMNET_MAP_GENERAL_FAILURE,
+ RMNET_MAP_NOT_ENABLED,
+ RMNET_MAP_FAILED_AGGREGATION,
+ RMNET_MAP_FAILED_MUX
+};
+
+enum rmnet_map_mux_errors_e {
+ RMNET_MAP_MUX_SUCCESS,
+ RMNET_MAP_MUX_INVALID_MUX_ID,
+ RMNET_MAP_MUX_INVALID_PAD_LENGTH,
+ RMNET_MAP_MUX_INVALID_PKT_LENGTH,
+ /* This should always be the last element */
+ RMNET_MAP_MUX_ENUM_LENGTH
+};
+
+enum rmnet_map_checksum_errors_e {
+ RMNET_MAP_CHECKSUM_OK,
+ RMNET_MAP_CHECKSUM_VALID_FLAG_NOT_SET,
+ RMNET_MAP_CHECKSUM_VALIDATION_FAILED,
+ RMNET_MAP_CHECKSUM_ERROR_UNKOWN,
+ RMNET_MAP_CHECKSUM_ERROR_NOT_DATA_PACKET,
+ RMNET_MAP_CHECKSUM_ERROR_BAD_BUFFER,
+ RMNET_MAP_CHECKSUM_ERROR_UNKNOWN_IP_VERSION,
+ RMNET_MAP_CHECKSUM_ERROR_UNKNOWN_TRANSPORT,
+ /* This should always be the last element */
+ RMNET_MAP_CHECKSUM_ENUM_LENGTH
+};
+
+enum rmnet_map_commands_e {
+ RMNET_MAP_COMMAND_NONE,
+ RMNET_MAP_COMMAND_FLOW_DISABLE,
+ RMNET_MAP_COMMAND_FLOW_ENABLE,
+ /* These should always be the last 2 elements */
+ RMNET_MAP_COMMAND_UNKNOWN,
+ RMNET_MAP_COMMAND_ENUM_LENGTH
+};
+
+enum rmnet_map_agg_state_e {
+ RMNET_MAP_AGG_IDLE,
+ RMNET_MAP_TXFER_SCHEDULED
+};
+
+#define RMNET_MAP_P_ICMP4 0x01
+#define RMNET_MAP_P_TCP 0x06
+#define RMNET_MAP_P_UDP 0x11
+#define RMNET_MAP_P_ICMP6 0x3a
+
+#define RMNET_MAP_COMMAND_REQUEST 0
+#define RMNET_MAP_COMMAND_ACK 1
+#define RMNET_MAP_COMMAND_UNSUPPORTED 2
+#define RMNET_MAP_COMMAND_INVALID 3
+
+uint8_t rmnet_map_demultiplex(struct sk_buff *skb);
+struct sk_buff *rmnet_map_deaggregate(struct sk_buff *skb,
+ struct rmnet_phys_ep_conf_s *config);
+
+#define RMNET_MAP_GET_MUX_ID(Y) (((struct rmnet_map_header_s *)Y->data)->mux_id)
+#define RMNET_MAP_GET_CD_BIT(Y) (((struct rmnet_map_header_s *)Y->data)->cd_bit)
+#define RMNET_MAP_GET_PAD(Y) (((struct rmnet_map_header_s *)Y->data)->pad_len)
+#define RMNET_MAP_GET_CMD_START(Y) ((struct rmnet_map_control_command_s *) \
+ (Y->data + sizeof(struct rmnet_map_header_s)))
+#define RMNET_MAP_GET_LENGTH(Y) (ntohs( \
+ ((struct rmnet_map_header_s *)Y->data)->pkt_len))
+
+struct rmnet_map_header_s *rmnet_map_add_map_header(struct sk_buff *skb,
+ int hdrlen);
+rx_handler_result_t rmnet_map_command(struct sk_buff *skb,
+ struct rmnet_phys_ep_conf_s *config);
+void rmnet_map_aggregate(struct sk_buff *skb,
+ struct rmnet_phys_ep_conf_s *config);
+
+#endif /* _RMNET_MAP_H_ */
diff --git a/net/rmnet_data/rmnet_map_command.c b/net/rmnet_data/rmnet_map_command.c
new file mode 100644
index 0000000..32326ea
--- /dev/null
+++ b/net/rmnet_data/rmnet_map_command.c
@@ -0,0 +1,178 @@
+/*
+ * Copyright (c) 2013-2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/skbuff.h>
+#include <linux/netdevice.h>
+#include <linux/rmnet_data.h>
+#include <net/pkt_sched.h>
+#include "rmnet_data_config.h"
+#include "rmnet_map.h"
+#include "rmnet_data_private.h"
+#include "rmnet_data_vnd.h"
+#include "rmnet_data_stats.h"
+
+RMNET_LOG_MODULE(RMNET_DATA_LOGMASK_MAPC);
+
+unsigned long int rmnet_map_command_stats[RMNET_MAP_COMMAND_ENUM_LENGTH];
+module_param_array(rmnet_map_command_stats, ulong, 0, S_IRUGO);
+MODULE_PARM_DESC(rmnet_map_command_stats, "MAP command statistics");
+
+/**
+ * rmnet_map_do_flow_control() - Process MAP flow control command
+ * @skb: Socket buffer containing the MAP flow control message
+ * @config: Physical end-point configuration of ingress device
+ * @enable: boolean for enable/disable
+ *
+ * Process in-band MAP flow control messages. Assumes mux ID is mapped to a
+ * RmNet Data virtual network device.
+ *
+ * Return:
+ * - RMNET_MAP_COMMAND_UNSUPPORTED on any error
+ * - RMNET_MAP_COMMAND_ACK on success
+ */
+static uint8_t rmnet_map_do_flow_control(struct sk_buff *skb,
+ struct rmnet_phys_ep_conf_s *config,
+ int enable)
+{
+ struct rmnet_map_control_command_s *cmd;
+ struct net_device *vnd;
+ struct rmnet_logical_ep_conf_s *ep;
+ uint8_t mux_id;
+ uint16_t ip_family;
+ uint16_t fc_seq;
+ uint32_t qos_id;
+ int r;
+
+ if (unlikely(!skb || !config))
+ BUG();
+
+ mux_id = RMNET_MAP_GET_MUX_ID(skb);
+ cmd = RMNET_MAP_GET_CMD_START(skb);
+
+ if (mux_id >= RMNET_DATA_MAX_LOGICAL_EP) {
+ LOGD("Got packet on %s with bad mux id %d",
+ skb->dev->name, mux_id);
+ rmnet_kfree_skb(skb, RMNET_STATS_SKBFREE_MAPC_BAD_MUX);
+ return RX_HANDLER_CONSUMED;
+ }
+
+ ep = &(config->muxed_ep[mux_id]);
+
+ if (!ep->refcount) {
+ LOGD("Packet on %s:%d; has no logical endpoint config",
+ skb->dev->name, mux_id);
+
+ rmnet_kfree_skb(skb, RMNET_STATS_SKBFREE_MAPC_MUX_NO_EP);
+ return RX_HANDLER_CONSUMED;
+ }
+
+ vnd = ep->egress_dev;
+
+ ip_family = cmd->flow_control.ip_family;
+ fc_seq = ntohs(cmd->flow_control.flow_control_seq_num);
+ qos_id = ntohl(cmd->flow_control.qos_id);
+
+ /* Ignore the ip family and pass the sequence number for both v4 and v6
+ * sequence. User space does not support creating dedicated flows for
+ * the 2 protocols
+ */
+ r = rmnet_vnd_do_flow_control(vnd, qos_id, fc_seq, fc_seq, enable);
+ LOGD("dev:%s, qos_id:0x%08X, ip_family:%hd, fc_seq %hd, en:%d",
+ skb->dev->name, qos_id, ip_family & 3, fc_seq, enable);
+
+ if (r)
+ return RMNET_MAP_COMMAND_UNSUPPORTED;
+ else
+ return RMNET_MAP_COMMAND_ACK;
+}
+
+/**
+ * rmnet_map_send_ack() - Send N/ACK message for MAP commands
+ * @skb: Socket buffer containing the MAP command message
+ * @type: N/ACK message selector
+ *
+ * skb is modified to contain the message type selector. The message is then
+ * transmitted on skb->dev. Note that this function grabs global Tx lock on
+ * skb->dev for latency reasons.
+ *
+ * Return:
+ * - void
+ */
+static void rmnet_map_send_ack(struct sk_buff *skb,
+ unsigned char type)
+{
+ struct net_device *dev;
+ struct rmnet_map_control_command_s *cmd;
+ unsigned long flags;
+ int xmit_status;
+
+ if (!skb)
+ BUG();
+
+ dev = skb->dev;
+
+ cmd = RMNET_MAP_GET_CMD_START(skb);
+ cmd->cmd_type = type & 0x03;
+
+ spin_lock_irqsave(&(skb->dev->tx_global_lock), flags);
+ xmit_status = skb->dev->netdev_ops->ndo_start_xmit(skb, skb->dev);
+ spin_unlock_irqrestore(&(skb->dev->tx_global_lock), flags);
+}
+
+/**
+ * rmnet_map_command() - Entry point for handling MAP commands
+ * @skb: Socket buffer containing the MAP command message
+ * @config: Physical end-point configuration of ingress device
+ *
+ * Process MAP command frame and send N/ACK message as appropriate. Message cmd
+ * name is decoded here and appropriate handler is called.
+ *
+ * Return:
+ * - RX_HANDLER_CONSUMED. Command frames are always consumed.
+ */
+rx_handler_result_t rmnet_map_command(struct sk_buff *skb,
+ struct rmnet_phys_ep_conf_s *config)
+{
+ struct rmnet_map_control_command_s *cmd;
+ unsigned char command_name;
+ unsigned char rc = 0;
+
+ if (unlikely(!skb))
+ BUG();
+
+ cmd = RMNET_MAP_GET_CMD_START(skb);
+ command_name = cmd->command_name;
+
+ if (command_name < RMNET_MAP_COMMAND_ENUM_LENGTH)
+ rmnet_map_command_stats[command_name]++;
+
+ switch (command_name) {
+ case RMNET_MAP_COMMAND_FLOW_ENABLE:
+ rc = rmnet_map_do_flow_control(skb, config, 1);
+ break;
+
+ case RMNET_MAP_COMMAND_FLOW_DISABLE:
+ rc = rmnet_map_do_flow_control(skb, config, 0);
+ break;
+
+ default:
+ rmnet_map_command_stats[RMNET_MAP_COMMAND_UNKNOWN]++;
+ LOGM("Unknown MAP command: %d", command_name);
+ rc = RMNET_MAP_COMMAND_UNSUPPORTED;
+ break;
+ }
+ rmnet_map_send_ack(skb, rc);
+ return RX_HANDLER_CONSUMED;
+}
diff --git a/net/rmnet_data/rmnet_map_data.c b/net/rmnet_data/rmnet_map_data.c
new file mode 100644
index 0000000..0e58839
--- /dev/null
+++ b/net/rmnet_data/rmnet_map_data.c
@@ -0,0 +1,270 @@
+/*
+ * Copyright (c) 2013-2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * RMNET Data MAP protocol
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/skbuff.h>
+#include <linux/netdevice.h>
+#include <linux/rmnet_data.h>
+#include <linux/spinlock.h>
+#include <linux/workqueue.h>
+#include "rmnet_data_config.h"
+#include "rmnet_map.h"
+#include "rmnet_data_private.h"
+#include "rmnet_data_stats.h"
+
+RMNET_LOG_MODULE(RMNET_DATA_LOGMASK_MAPD);
+
+/* ***************** Local Definitions ************************************** */
+struct agg_work {
+ struct delayed_work work;
+ struct rmnet_phys_ep_conf_s *config;
+};
+
+/******************************************************************************/
+
+/**
+ * rmnet_map_add_map_header() - Adds MAP header to front of skb->data
+ * @skb: Socket buffer ("packet") to modify
+ * @hdrlen: Number of bytes of header data which should not be included in
+ * MAP length field
+ *
+ * Padding is calculated and set appropriately in MAP header. Mux ID is
+ * initialized to 0.
+ *
+ * Return:
+ * - Pointer to MAP structure
+ * - 0 (null) if insufficient headroom
+ * - 0 (null) if insufficient tailroom for padding bytes
+ *
+ * todo: Parameterize skb alignment
+ */
+struct rmnet_map_header_s *rmnet_map_add_map_header(struct sk_buff *skb,
+ int hdrlen)
+{
+ uint32_t padding, map_datalen;
+ uint8_t *padbytes;
+ struct rmnet_map_header_s *map_header;
+
+ if (skb_headroom(skb) < sizeof(struct rmnet_map_header_s))
+ return 0;
+
+ map_datalen = skb->len - hdrlen;
+ padding = ALIGN(map_datalen, 4) - map_datalen;
+
+ /* Check tailroom before pushing the header so the skb is left
+  * unmodified on failure.
+  */
+ if (skb_tailroom(skb) < padding)
+ return 0;
+
+ map_header = (struct rmnet_map_header_s *)
+ skb_push(skb, sizeof(struct rmnet_map_header_s));
+ memset(map_header, 0, sizeof(struct rmnet_map_header_s));
+
+ padbytes = (uint8_t *) skb_put(skb, padding);
+ LOGD("pad: %d", padding);
+ memset(padbytes, 0, padding);
+
+ map_header->pkt_len = htons(map_datalen + padding);
+ map_header->pad_len = padding&0x3F;
+
+ return map_header;
+}
+
+/**
+ * rmnet_map_deaggregate() - Deaggregates a single packet
+ * @skb: Source socket buffer containing multiple MAP frames
+ * @config: Physical endpoint configuration of the ingress device
+ *
+ * Source skb is cloned with skb_clone(). The new skb data and tail pointers are
+ * modified to contain a single MAP frame. Clone happens with GFP_ATOMIC flags
+ * set. User should keep calling deaggregate() on the source skb until 0 is
+ * returned, indicating that there are no more packets to deaggregate.
+ *
+ * Return:
+ * - Pointer to new skb
+ * - 0 (null) if no more aggregated packets
+ */
+struct sk_buff *rmnet_map_deaggregate(struct sk_buff *skb,
+ struct rmnet_phys_ep_conf_s *config)
+{
+ struct sk_buff *skbn;
+ struct rmnet_map_header_s *maph;
+ uint32_t packet_len;
+ uint8_t ip_byte;
+
+ if (skb->len == 0)
+ return 0;
+
+ maph = (struct rmnet_map_header_s *) skb->data;
+ packet_len = ntohs(maph->pkt_len) + sizeof(struct rmnet_map_header_s);
+
+ if (skb->len < packet_len) {
+ LOGM("%s", "Got malformed packet. Dropping");
+ return 0;
+ }
+
+ skbn = skb_clone(skb, GFP_ATOMIC);
+ if (!skbn)
+ return 0;
+
+ LOGD("Trimming to %d bytes", packet_len);
+ LOGD("before skbn->len = %d", skbn->len);
+ skb_trim(skbn, packet_len);
+ skb_pull(skb, packet_len);
+ LOGD("after skbn->len = %d", skbn->len);
+
+ /* Sanity check */
+ ip_byte = (skbn->data[4]) & 0xF0;
+ if (ip_byte != 0x40 && ip_byte != 0x60) {
+ LOGM("Unknown IP type: 0x%02X", ip_byte);
+ rmnet_kfree_skb(skbn, RMNET_STATS_SKBFREE_DEAGG_UNKOWN_IP_TYP);
+ return 0;
+ }
+
+ return skbn;
+}
+
+/**
+ * rmnet_map_flush_packet_queue() - Transmits aggregated frame on timeout
+ * @work: struct agg_work containing delayed work and skb to flush
+ *
+ * This function is scheduled to run in a specified number of jiffies after
+ * the last frame transmitted by the network stack. When run, the buffer
+ * containing aggregated packets is finally transmitted on the underlying link.
+ *
+ */
+static void rmnet_map_flush_packet_queue(struct work_struct *work)
+{
+ struct agg_work *real_work;
+ struct rmnet_phys_ep_conf_s *config;
+ unsigned long flags;
+ struct sk_buff *skb;
+ int rc;
+
+ skb = 0;
+ real_work = (struct agg_work *)work;
+ config = real_work->config;
+ LOGD("%s", "Entering flush thread");
+ spin_lock_irqsave(&config->agg_lock, flags);
+ if (likely(config->agg_state == RMNET_MAP_TXFER_SCHEDULED)) {
+ /* Buffer may have already been shipped out */
+ if (likely(config->agg_skb)) {
+ rmnet_stats_agg_pkts(config->agg_count);
+ if (config->agg_count > 1)
+ LOGL("Agg count: %d", config->agg_count);
+ skb = config->agg_skb;
+ config->agg_skb = 0;
+ }
+ config->agg_state = RMNET_MAP_AGG_IDLE;
+ } else {
+ /* How did we get here? */
+ LOGE("Ran queued command when state %s",
+ "is idle. State machine likely broken");
+ }
+
+ spin_unlock_irqrestore(&config->agg_lock, flags);
+ if (skb) {
+ rc = dev_queue_xmit(skb);
+ rmnet_stats_queue_xmit(rc, RMNET_STATS_QUEUE_XMIT_AGG_TIMEOUT);
+ }
+ kfree(work);
+}
+
+/**
+ * rmnet_map_aggregate() - Software aggregates multiple packets.
+ * @skb: current packet being transmitted
+ * @config: Physical endpoint configuration of the ingress device
+ *
+ * Aggregates multiple SKBs into a single large SKB for transmission. MAP
+ * protocol is used to separate the packets in the buffer. This function consumes
+ * the argument SKB and should not be further processed by any other function.
+ */
+void rmnet_map_aggregate(struct sk_buff *skb,
+ struct rmnet_phys_ep_conf_s *config)
+{
+ uint8_t *dest_buff;
+ struct agg_work *work;
+ unsigned long flags;
+ struct sk_buff *agg_skb;
+ int size, rc;
+
+ if (!skb || !config)
+ BUG();
+ size = config->egress_agg_size-skb->len;
+
+ if (size < 2000) {
+ LOGL("Invalid length %d", size);
+ return;
+ }
+
+new_packet:
+ spin_lock_irqsave(&config->agg_lock, flags);
+ if (!config->agg_skb) {
+ config->agg_skb = skb_copy_expand(skb, 0, size, GFP_ATOMIC);
+ if (!config->agg_skb) {
+ config->agg_skb = 0;
+ config->agg_count = 0;
+ spin_unlock_irqrestore(&config->agg_lock, flags);
+ rmnet_stats_agg_pkts(1);
+ rc = dev_queue_xmit(skb);
+ rmnet_stats_queue_xmit(rc,
+ RMNET_STATS_QUEUE_XMIT_AGG_CPY_EXP_FAIL);
+ return;
+ }
+ config->agg_count = 1;
+ rmnet_kfree_skb(skb, RMNET_STATS_SKBFREE_AGG_CPY_EXPAND);
+ goto schedule;
+ }
+
+ if (skb->len > (config->egress_agg_size - config->agg_skb->len)) {
+ rmnet_stats_agg_pkts(config->agg_count);
+ if (config->agg_count > 1)
+ LOGL("Agg count: %d", config->agg_count);
+ agg_skb = config->agg_skb;
+ config->agg_skb = 0;
+ config->agg_count = 0;
+ spin_unlock_irqrestore(&config->agg_lock, flags);
+ rc = dev_queue_xmit(agg_skb);
+ rmnet_stats_queue_xmit(rc,
+ RMNET_STATS_QUEUE_XMIT_AGG_FILL_BUFFER);
+ goto new_packet;
+ }
+
+ dest_buff = skb_put(config->agg_skb, skb->len);
+ memcpy(dest_buff, skb->data, skb->len);
+ config->agg_count++;
+ rmnet_kfree_skb(skb, RMNET_STATS_SKBFREE_AGG_INTO_BUFF);
+
+schedule:
+ if (config->agg_state != RMNET_MAP_TXFER_SCHEDULED) {
+ work = (struct agg_work *)
+ kmalloc(sizeof(struct agg_work), GFP_ATOMIC);
+ if (!work) {
+ LOGE("Failed to allocate work item for packet %s",
+ "transfer. DATA PATH LIKELY BROKEN!");
+ config->agg_state = RMNET_MAP_AGG_IDLE;
+ spin_unlock_irqrestore(&config->agg_lock, flags);
+ return;
+ }
+ INIT_DELAYED_WORK((struct delayed_work *)work,
+ rmnet_map_flush_packet_queue);
+ work->config = config;
+ config->agg_state = RMNET_MAP_TXFER_SCHEDULED;
+ schedule_delayed_work((struct delayed_work *)work, 1);
+ }
+ spin_unlock_irqrestore(&config->agg_lock, flags);
+ return;
+}
+
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index a92a23c..f02af86 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -274,17 +274,11 @@
dtc-tmp = $(subst $(comma),_,$(dot-target).dts.tmp)
-$(obj)/%.dtb: $(src)/%.dts FORCE
- $(call if_changed_dep,dtc)
-
-dtc-tmp = $(subst $(comma),_,$(dot-target).dts)
-
-quiet_cmd_dtc_cpp = DTC+CPP $@
-cmd_dtc_cpp = $(CPP) $(dtc_cpp_flags) -x assembler-with-cpp -o $(dtc-tmp) $< ; \
- $(objtree)/scripts/dtc/dtc -O dtb -o $@ -b 0 $(DTC_FLAGS) $(dtc-tmp)
-
-$(obj)/%.dtb: $(src)/%.dtsp FORCE
- $(call if_changed_dep,dtc_cpp)
+# cat
+# ---------------------------------------------------------------------------
+# Concatenate multiple files together
+quiet_cmd_cat = CAT $@
+cmd_cat = (cat $(filter-out FORCE,$^) > $@) || (rm -f $@; false)
# Bzip2
# ---------------------------------------------------------------------------
diff --git a/security/tlk_driver/ote_comms.c b/security/tlk_driver/ote_comms.c
index 1a0ba51..e5e0a70 100644
--- a/security/tlk_driver/ote_comms.c
+++ b/security/tlk_driver/ote_comms.c
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2012-2014 NVIDIA Corporation. All rights reserved.
+ * Copyright (c) 2012-2015 NVIDIA Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -667,6 +667,38 @@
}
EXPORT_SYMBOL(te_set_vpr_params);
+void te_restore_keyslots(void)
+{
+ uint32_t retval;
+
+ mutex_lock(&smc_lock);
+
+ if (current->flags &
+ (PF_WQ_WORKER | PF_NO_SETAFFINITY | PF_KTHREAD)) {
+ struct tlk_smc_work_args work_args;
+ int cpu = cpu_logical_map(smp_processor_id());
+
+ work_args.arg0 = TE_SMC_TA_EVENT;
+ work_args.arg1 = TA_EVENT_RESTORE_KEYS;
+ work_args.arg2 = 0;
+
+ /* Workers don't migrate between CPUs. Depending on the current
+ * CPU, either execute directly or schedule work on CPU 0. */
+ if (cpu == 0 && (current->flags & PF_WQ_WORKER)) {
+ retval = tlk_generic_smc_on_cpu0(&work_args);
+ } else {
+ retval = work_on_cpu(0,
+ tlk_generic_smc_on_cpu0, &work_args);
+ }
+ } else {
+ retval = tlk_generic_smc(tlk_info, TE_SMC_TA_EVENT,
+ TA_EVENT_RESTORE_KEYS, 0, 0);
+ }
+
+ mutex_unlock(&smc_lock);
+}
+EXPORT_SYMBOL(te_restore_keyslots);
+
static int te_allocate_session(struct tlk_context *context, uint32_t session_id,
struct te_session **sessionp)
{
diff --git a/security/tlk_driver/ote_protocol.h b/security/tlk_driver/ote_protocol.h
index 1acebc5..f148347 100644
--- a/security/tlk_driver/ote_protocol.h
+++ b/security/tlk_driver/ote_protocol.h
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2013-2014 NVIDIA Corporation. All rights reserved.
+ * Copyright (c) 2013-2015 NVIDIA Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -155,6 +155,7 @@
TE_SMC_OPEN_SESSION = 0x30000001,
TE_SMC_CLOSE_SESSION = 0x30000002,
TE_SMC_LAUNCH_OPERATION = 0x30000003,
+ TE_SMC_TA_EVENT = 0x30000004,
/* Trusted OS calls */
TE_SMC_REGISTER_REQ_BUF = 0x32000002,
@@ -172,6 +173,7 @@
TE_SMC_CLOSE_SESSION = SMC_STDCALL_NR(SMC_ENTITY_TRUSTED_APP, 2),
TE_SMC_LAUNCH_OPERATION = SMC_STDCALL_NR(SMC_ENTITY_TRUSTED_APP, 3),
TE_SMC_NS_CB_COMPLETE = SMC_STDCALL_NR(SMC_ENTITY_TRUSTED_APP, 4),
+ TE_SMC_TA_EVENT = SMC_STDCALL_NR(SMC_ENTITY_TRUSTED_APP, 5),
TE_SMC_FC_HAS_NS_WORK = SMC_FASTCALL_NR(SMC_ENTITY_TRUSTED_APP, 1),
@@ -394,6 +396,13 @@
int te_handle_ss_ioctl_legacy(struct file *file, unsigned int ioctl_num,
unsigned long ioctl_param);
+
+enum ta_event_id {
+ TA_EVENT_RESTORE_KEYS = 0,
+
+ TA_EVENT_MASK = (1 << TA_EVENT_RESTORE_KEYS),
+};
+
int te_handle_ss_ioctl(struct file *file, unsigned int ioctl_num,
unsigned long ioctl_param);
int te_handle_fs_ioctl(struct file *file, unsigned int ioctl_num,
diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig
index 3684f0c..4b97f01 100644
--- a/sound/soc/codecs/Kconfig
+++ b/sound/soc/codecs/Kconfig
@@ -62,6 +62,8 @@
select SND_SOC_RT5639 if I2C
select SND_SOC_RT5640 if I2C
select SND_SOC_RT5645 if I2C
+ select SND_SOC_RT5677 if I2C
+ select SND_SOC_RT5506 if I2C
select SND_SOC_SGTL5000 if I2C
select SND_SOC_SI476X if MFD_SI476X_CORE
select SND_SOC_SN95031 if INTEL_SCU_IPC
@@ -316,6 +318,9 @@
config SND_SOC_RT5645
tristate
+config SND_SOC_RT5677
+ tristate
+
#Freescale sgtl5000 codec
config SND_SOC_SGTL5000
tristate
@@ -575,3 +580,20 @@
config SND_SOC_TPA6130A2
tristate
+
+config SND_SOC_RT5506
+ tristate
+
+config AMP_TFA9895
+ tristate "NXP TFA9895 Speaker AMP Driver"
+ depends on I2C=y
+ help
+ NXP TFA9895 Speaker AMP Driver
+ implemented by HTC.
+
+config AMP_TFA9895L
+ tristate "NXP TFA9895L Speaker AMP Driver"
+ depends on I2C=y
+ help
+ NXP TFA9895L Speaker AMP Driver
+ implemented by HTC.
diff --git a/sound/soc/codecs/Makefile b/sound/soc/codecs/Makefile
index 5d3fed8..5c3b474 100644
--- a/sound/soc/codecs/Makefile
+++ b/sound/soc/codecs/Makefile
@@ -123,6 +123,8 @@
snd-soc-wm9713-objs := wm9713.o
snd-soc-wm-hubs-objs := wm_hubs.o
snd-soc-rt5639-objs := rt5639.o rt56xx_ioctl.o rt5639_ioctl.o
+snd-soc-rt5677-objs := rt5677.o rt5677-spi.o tfa9895.o tfa9895l.o \
+ rt5677_ioctl.o rt_codec_ioctl.o
snd-soc-rt5640-objs := rt5640.o
snd-soc-rt5645-objs := rt5645.o rt5645_ioctl.o rt_codec_ioctl.o
@@ -130,6 +132,7 @@
snd-soc-max9877-objs := max9877.o
snd-soc-max97236-objs := max97236.o
snd-soc-tpa6130a2-objs := tpa6130a2.o
+snd-soc-rt5506-objs := rt5506.o
obj-$(CONFIG_SND_SOC_88PM860X) += snd-soc-88pm860x.o
obj-$(CONFIG_SND_SOC_AB8500_CODEC) += snd-soc-ab8500-codec.o
@@ -257,8 +260,10 @@
obj-$(CONFIG_SND_SOC_RT5639) += snd-soc-rt5639.o
obj-$(CONFIG_SND_SOC_RT5640) += snd-soc-rt5640.o
obj-$(CONFIG_SND_SOC_RT5645) += snd-soc-rt5645.o
+obj-$(CONFIG_SND_SOC_RT5677) += snd-soc-rt5677.o
# Amp
obj-$(CONFIG_SND_SOC_MAX9877) += snd-soc-max9877.o
obj-$(CONFIG_SND_SOC_MAX97236) += snd-soc-max97236.o
obj-$(CONFIG_SND_SOC_TPA6130A2) += snd-soc-tpa6130a2.o
+obj-$(CONFIG_SND_SOC_RT5506) += snd-soc-rt5506.o
diff --git a/sound/soc/codecs/rt5506.c b/sound/soc/codecs/rt5506.c
new file mode 100644
index 0000000..7694297
--- /dev/null
+++ b/sound/soc/codecs/rt5506.c
@@ -0,0 +1,1181 @@
+/* driver/i2c/chip/rt5506.c
+ *
+ * Richtek Headphone Amp
+ *
+ * Copyright (C) 2010 HTC Corporation
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/interrupt.h>
+#include <linux/i2c.h>
+#include <linux/slab.h>
+#include <linux/irq.h>
+#include <linux/miscdevice.h>
+#include <linux/uaccess.h>
+#include <linux/delay.h>
+#include <linux/input.h>
+#include <linux/workqueue.h>
+#include <linux/freezer.h>
+#include "rt5506.h"
+#include <linux/mutex.h>
+#include <linux/debugfs.h>
+#include <linux/gpio.h>
+#include <linux/module.h>
+#include <linux/wakelock.h>
+#include <linux/jiffies.h>
+#include <linux/of_gpio.h>
+#include <linux/regulator/consumer.h>
+#include <linux/htc_headset_mgr.h>
+
+#define AMP_ON_CMD_LEN 7
+#define RETRY_CNT 5
+
+#define DRIVER_NAME "RT5506"
+
+
+enum AMP_REG_MODE {
+ REG_PWM_MODE = 0,
+ REG_AUTO_MODE,
+};
+
+struct headset_query {
+ struct mutex mlock;
+ struct mutex gpiolock;
+ struct delayed_work hs_imp_detec_work;
+ struct wake_lock hs_wake_lock;
+ struct wake_lock gpio_wake_lock;
+ enum HEADSET_QUERY_STATUS hs_qstatus;
+ enum AMP_STATUS rt5506_status;
+ enum HEADSET_OM headsetom;
+ enum PLAYBACK_MODE curmode;
+ enum AMP_GPIO_STATUS gpiostatus;
+ int gpio_off_cancel;
+ struct mutex actionlock;
+ struct delayed_work volume_ramp_work;
+ struct delayed_work gpio_off_work;
+};
+
+static struct i2c_client *this_client;
+static struct rt5506_platform_data *pdata;
+static int rt5506Connect;
+
+struct rt5506_config_data rt5506_cfg_data;
+static struct mutex hp_amp_lock;
+static int rt5506_opened;
+
+struct rt5506_config RT5506_AMP_ON = {7, {{0x0, 0xc0}, {0x1, 0x1c},
+{0x2, 0x00}, {0x7, 0x7f}, {0x9, 0x1}, {0xa, 0x0}, {0xb, 0xc7} } };
+struct rt5506_config RT5506_AMP_INIT = {11, {{0, 0xc0}, {0x81, 0x30},
+{0x87, 0xf6}, {0x93, 0x8d}, {0x95, 0x7d}, {0xa4, 0x52}, {0x96, 0xae},
+{0x97, 0x13}, {0x99, 0x35}, {0x9b, 0x68}, {0x9d, 0x68} } };
+struct rt5506_config RT5506_AMP_MUTE = {1, {{0x1, 0xC7 } } };
+struct rt5506_config RT5506_AMP_OFF = {1, {{0x0, 0x1 } } };
+
+static int rt5506_valid_registers[] = {0x0, 0x1, 0x2, 0x3, 0x4, 0x5,
+0x6, 0x7, 0x8, 0x9, 0xA, 0xB, 0xC0, 0x81, 0x87, 0x90, 0x93, 0x95,
+0xA4, 0x96, 0x97, 0x98, 0x99, 0x9A, 0x9B, 0x9C, 0x9D, 0x9E};
+
+static int rt5506_write_reg(u8 reg, u8 val);
+static void hs_imp_detec_func(struct work_struct *work);
+static int rt5506_i2c_read_addr(unsigned char *rxdata, unsigned char addr);
+static int rt5506_i2c_write(struct rt5506_reg_data *txdata, int length);
+static void set_amp(int on, struct rt5506_config *i2c_command);
+
+struct headset_query rt5506_query;
+static struct workqueue_struct *hs_wq;
+static struct workqueue_struct *ramp_wq;
+static struct workqueue_struct *gpio_wq;
+static int high_imp;
+
+
+int rt5506_headset_detect(int on)
+{
+ if (!rt5506Connect)
+ return 0;
+
+ if (on) {
+ pr_info("%s: headset in ++\n", __func__);
+ cancel_delayed_work_sync(&rt5506_query.hs_imp_detec_work);
+ mutex_lock(&rt5506_query.gpiolock);
+ mutex_lock(&rt5506_query.mlock);
+ rt5506_query.hs_qstatus = QUERY_HEADSET;
+ rt5506_query.headsetom = HEADSET_OM_UNDER_DETECT;
+
+ if (rt5506_query.rt5506_status == STATUS_PLAYBACK) {
+ /* AMP off */
+ if (high_imp) {
+ rt5506_write_reg(1, 0x7);
+ rt5506_write_reg(0xb1, 0x81);
+ } else {
+ rt5506_write_reg(1, 0xc7);
+ }
+
+ pr_info("%s: OFF\n", __func__);
+
+ rt5506_query.rt5506_status = STATUS_SUSPEND;
+ }
+ pr_info("%s: headset in --\n", __func__);
+ mutex_unlock(&rt5506_query.mlock);
+ mutex_unlock(&rt5506_query.gpiolock);
+ queue_delayed_work(hs_wq,
+ &rt5506_query.hs_imp_detec_work, msecs_to_jiffies(5));
+ pr_info("%s: headset in --2\n", __func__);
+ } else {
+ pr_info("%s: headset remove ++\n", __func__);
+ cancel_delayed_work_sync(&rt5506_query.hs_imp_detec_work);
+ flush_work(&rt5506_query.volume_ramp_work.work);
+ mutex_lock(&rt5506_query.gpiolock);
+ mutex_lock(&rt5506_query.mlock);
+ rt5506_query.hs_qstatus = QUERY_OFF;
+ rt5506_query.headsetom = HEADSET_OM_UNDER_DETECT;
+
+ if (rt5506_query.rt5506_status == STATUS_PLAYBACK) {
+ /* AMP off */
+ if (high_imp) {
+ rt5506_write_reg(1, 0x7);
+ rt5506_write_reg(0xb1, 0x81);
+ } else {
+ rt5506_write_reg(1, 0xc7);
+ }
+
+ pr_info("%s: OFF\n", __func__);
+
+ rt5506_query.rt5506_status = STATUS_SUSPEND;
+ }
+ if (high_imp) {
+ int closegpio = 0;
+
+ if (rt5506_query.gpiostatus == AMP_GPIO_OFF) {
+ pr_info("%s: enable gpio %d\n", __func__,
+ pdata->rt5506_enable);
+ gpio_set_value(pdata->rt5506_enable, 1);
+ closegpio = 1;
+ usleep_range(1000, 2000);
+ }
+
+ pr_info("%s: reset rt5506\n", __func__);
+ rt5506_write_reg(0x0, 0x4);
+ mdelay(1);
+ rt5506_write_reg(0x1, 0xc7);
+ high_imp = 0;
+
+ if (closegpio) {
+ pr_info("%s: disable gpio %d\n",
+ __func__, pdata->rt5506_enable);
+ gpio_set_value(pdata->rt5506_enable, 0);
+ }
+ }
+ rt5506_query.curmode = PLAYBACK_MODE_OFF;
+ pr_info("%s: headset remove --1\n", __func__);
+
+ mutex_unlock(&rt5506_query.mlock);
+ mutex_unlock(&rt5506_query.gpiolock);
+
+ pr_info("%s: headset remove --2\n", __func__);
+ }
+ return 0;
+}
+
+void rt5506_set_gain(u8 data)
+{
+ pr_info("%s:before addr=%d, val=%d\n", __func__,
+ RT5506_AMP_ON.reg[1].addr, RT5506_AMP_ON.reg[1].val);
+
+ RT5506_AMP_ON.reg[1].val = data;
+
+ pr_info("%s:after addr=%d, val=%d\n", __func__,
+ RT5506_AMP_ON.reg[1].addr, RT5506_AMP_ON.reg[1].val);
+}
+
+u8 rt5506_get_gain(void)
+{
+ return RT5506_AMP_ON.reg[1].val;
+}
+
+static void rt5506_register_hs_notification(void)
+{
+ struct headset_notifier notifier;
+ notifier.id = HEADSET_REG_HS_INSERT;
+ notifier.func = rt5506_headset_detect;
+ headset_notifier_register(&notifier);
+}
+
+static int rt5506_write_reg(u8 reg, u8 val)
+{
+ int err;
+ struct i2c_msg msg[1];
+ unsigned char data[2];
+
+ msg->addr = this_client->addr;
+ msg->flags = 0;
+ msg->len = 2;
+ msg->buf = data;
+ data[0] = reg;
+ data[1] = val;
+#ifdef DEBUG
+ pr_info("%s: write reg 0x%x val 0x%x\n", __func__, data[0], data[1]);
+#endif
+ err = i2c_transfer(this_client->adapter, msg, 1);
+ if (err >= 0)
+ return 0;
+ else {
+ pr_info("%s: write error %d\n", __func__, err);
+ return err;
+ }
+}
+
+static int rt5506_i2c_write(struct rt5506_reg_data *txdata, int length)
+{
+ int i, retry, pass = 0;
+ char buf[2];
+ struct i2c_msg msg[] = {
+ {
+ .addr = this_client->addr,
+ .flags = 0,
+ .len = 2,
+ .buf = buf,
+ },
+ };
+ for (i = 0; i < length; i++) {
+ buf[0] = txdata[i].addr;
+ buf[1] = txdata[i].val;
+#ifdef DEBUG
+ pr_info("%s:i2c_write addr 0x%x val 0x%x\n", __func__,
+ buf[0], buf[1]);
+#endif
+ msg->buf = buf;
+ retry = RETRY_CNT;
+ pass = 0;
+ while (retry--) {
+ if (i2c_transfer(this_client->adapter, msg, 1) < 0) {
+ pr_err("%s: I2C transfer error %d retry %d\n",
+ __func__, i, retry);
+ usleep_range(20000, 21000);
+ } else {
+ pass = 1;
+ break;
+ }
+ }
+ if (pass == 0) {
+ pr_err("I2C transfer error, retry fail\n");
+ return -EIO;
+ }
+ }
+ return 0;
+}
+
+static int rt5506_i2c_read_addr(unsigned char *rxdata, unsigned char addr)
+{
+ int rc;
+ struct i2c_msg msgs[] = {
+ {
+ .addr = this_client->addr,
+ .flags = 0,
+ .len = 1,
+ .buf = rxdata,
+ },
+ {
+ .addr = this_client->addr,
+ .flags = I2C_M_RD,
+ .len = 1,
+ .buf = rxdata,
+ },
+ };
+
+ if (!rxdata)
+ return -1;
+
+ *rxdata = addr;
+
+ rc = i2c_transfer(this_client->adapter, msgs, 2);
+ if (rc < 0) {
+ pr_err("%s:[1] transfer error %d\n", __func__, rc);
+ return rc;
+ }
+
+ pr_info("%s:i2c_read addr 0x%x value = 0x%x\n",
+ __func__, addr, *rxdata);
+ return 0;
+}
+
+static int rt5506_open(struct inode *inode, struct file *file)
+{
+ int rc = 0;
+
+ mutex_lock(&hp_amp_lock);
+
+ if (rt5506_opened) {
+ pr_err("%s: busy\n", __func__);
+ rc = -EBUSY;
+ goto done;
+ }
+ rt5506_opened = 1;
+done:
+ mutex_unlock(&hp_amp_lock);
+ return rc;
+}
+
+static int rt5506_release(struct inode *inode, struct file *file)
+{
+ mutex_lock(&hp_amp_lock);
+ rt5506_opened = 0;
+ mutex_unlock(&hp_amp_lock);
+
+ return 0;
+}
+
+static void hs_imp_gpio_off(struct work_struct *work)
+{
+ wake_lock(&rt5506_query.gpio_wake_lock);
+ mutex_lock(&rt5506_query.gpiolock);
+ pr_info("%s: disable gpio %d\n", __func__, pdata->rt5506_enable);
+ gpio_set_value(pdata->rt5506_enable, 0);
+ rt5506_query.gpiostatus = AMP_GPIO_OFF;
+ mutex_unlock(&rt5506_query.gpiolock);
+ wake_unlock(&rt5506_query.gpio_wake_lock);
+}
+
+static void hs_imp_detec_func(struct work_struct *work)
+{
+ struct headset_query *hs;
+ unsigned char temp;
+ unsigned char r_channel;
+ int ret;
+ pr_info("%s: read rt5506 hs imp\n", __func__);
+
+ hs = container_of(work, struct headset_query, hs_imp_detec_work.work);
+ wake_lock(&hs->hs_wake_lock);
+
+ rt5506_query.gpio_off_cancel = 1;
+ cancel_delayed_work_sync(&rt5506_query.gpio_off_work);
+ mutex_lock(&hs->gpiolock);
+ mutex_lock(&hs->mlock);
+
+ if (hs->hs_qstatus != QUERY_HEADSET) {
+ mutex_unlock(&hs->mlock);
+ mutex_unlock(&hs->gpiolock);
+ wake_unlock(&hs->hs_wake_lock);
+ return;
+ }
+
+ if (hs->gpiostatus == AMP_GPIO_OFF) {
+ pr_info("%s: enable gpio %d\n", __func__,
+ pdata->rt5506_enable);
+ gpio_set_value(pdata->rt5506_enable, 1);
+ rt5506_query.gpiostatus = AMP_GPIO_ON;
+ }
+
+ usleep_range(1000, 2000);
+
+ /*sense impedance start*/
+ rt5506_write_reg(0, 0x04);
+ rt5506_write_reg(0xa4, 0x52);
+ rt5506_write_reg(1, 0x7);
+ usleep_range(10000, 11000);
+ rt5506_write_reg(0x3, 0x81);
+ msleep(100);
+ /*sense impedance end*/
+
+ ret = rt5506_i2c_read_addr(&temp, 0x4);
+ if (ret < 0) {
+ pr_err("%s: read rt5506 status error %d\n", __func__, ret);
+ if (hs->gpiostatus == AMP_GPIO_ON) {
+ rt5506_query.gpio_off_cancel = 0;
+ queue_delayed_work(gpio_wq,
+ &rt5506_query.gpio_off_work, msecs_to_jiffies(0));
+ }
+ mutex_unlock(&hs->mlock);
+ mutex_unlock(&hs->gpiolock);
+ wake_unlock(&hs->hs_wake_lock);
+ return;
+ }
+
+ /*identify stereo or mono headset*/
+ rt5506_i2c_read_addr(&r_channel, 0x6);
+
+ /* init start*/
+ rt5506_write_reg(0x0, 0x4);
+ mdelay(1);
+ rt5506_write_reg(0x0, 0xc0);
+ rt5506_write_reg(0x81, 0x30);
+ rt5506_write_reg(0x90, 0xd0);
+ rt5506_write_reg(0x93, 0x9d);
+ rt5506_write_reg(0x95, 0x7b);
+ rt5506_write_reg(0xa4, 0x52);
+ rt5506_write_reg(0x97, 0x00);
+ rt5506_write_reg(0x98, 0x22);
+ rt5506_write_reg(0x99, 0x33);
+ rt5506_write_reg(0x9a, 0x55);
+ rt5506_write_reg(0x9b, 0x66);
+ rt5506_write_reg(0x9c, 0x99);
+ rt5506_write_reg(0x9d, 0x66);
+ rt5506_write_reg(0x9e, 0x99);
+ /* init end*/
+
+ high_imp = 0;
+
+ if (temp & AMP_SENSE_READY) {
+ unsigned char om, hsmode;
+ enum HEADSET_OM hsom;
+
+ hsmode = (temp & 0x60) >> 5;
+ om = (temp & 0xe) >> 1;
+
+ if (r_channel == 0) {
+ /*mono headset*/
+ hsom = HEADSET_MONO;
+ } else {
+ switch (om) {
+ case 0:
+ hsom = HEADSET_8OM;
+ break;
+ case 1:
+ hsom = HEADSET_16OM;
+ break;
+ case 2:
+ hsom = HEADSET_32OM;
+ break;
+ case 3:
+ hsom = HEADSET_64OM;
+ break;
+ case 4:
+ hsom = HEADSET_128OM;
+ break;
+ case 5:
+ hsom = HEADSET_256OM;
+ break;
+ case 6:
+ hsom = HEADSET_500OM;
+ break;
+ case 7:
+ hsom = HEADSET_1KOM;
+ break;
+
+ default:
+ hsom = HEADSET_OM_UNDER_DETECT;
+ break;
+ }
+ }
+
+ hs->hs_qstatus = QUERY_FINISH;
+ hs->headsetom = hsom;
+
+ if (om >= HEADSET_256OM && om <= HEADSET_1KOM)
+ high_imp = 1;
+
+ pr_info("rt5506 hs imp value 0x%x hsmode %d om 0x%x hsom %d high_imp %d\n",
+ temp & 0xf, hsmode, om, hsom, high_imp);
+
+ } else {
+ if (hs->hs_qstatus == QUERY_HEADSET)
+ queue_delayed_work(hs_wq,
+ &rt5506_query.hs_imp_detec_work, QUERY_LATTER);
+ }
+
+ if (high_imp) {
+ rt5506_write_reg(0xb1, 0x81);
+ rt5506_write_reg(0x80, 0x87);
+ rt5506_write_reg(0x83, 0xc3);
+ rt5506_write_reg(0x84, 0x63);
+ rt5506_write_reg(0x89, 0x7);
+ mdelay(9);
+ rt5506_write_reg(0x83, 0xcf);
+ rt5506_write_reg(0x89, 0x1d);
+ mdelay(1);
+ rt5506_write_reg(1, 0x7);
+ rt5506_write_reg(0xb1, 0x81);
+ } else {
+ rt5506_write_reg(1, 0xc7);
+ }
+
+ if (hs->gpiostatus == AMP_GPIO_ON) {
+ rt5506_query.gpio_off_cancel = 0;
+ queue_delayed_work(gpio_wq,
+ &rt5506_query.gpio_off_work, msecs_to_jiffies(0));
+ }
+
+ mutex_unlock(&hs->mlock);
+ mutex_unlock(&hs->gpiolock);
+
+ if (hs->rt5506_status == STATUS_SUSPEND)
+ set_rt5506_amp(1, 0);
+
+ wake_unlock(&hs->hs_wake_lock);
+}
+
+static void volume_ramp_func(struct work_struct *work)
+{
+ pr_info("%s\n", __func__);
+ if (rt5506_query.rt5506_status != STATUS_PLAYBACK) {
+ pr_info("%s:Not in STATUS_PLAYBACK\n", __func__);
+ mdelay(1);
+ /*start state machine and disable noise gate */
+ if (high_imp)
+ rt5506_write_reg(0xb1, 0x80);
+
+ rt5506_write_reg(0x2, 0x0);
+ mdelay(1);
+ }
+ set_amp(1, &RT5506_AMP_ON);
+}
+
+static void set_amp(int on, struct rt5506_config *i2c_command)
+{
+ pr_info("%s: %d\n", __func__, on);
+ mutex_lock(&rt5506_query.mlock);
+ mutex_lock(&hp_amp_lock);
+
+ if (rt5506_query.hs_qstatus == QUERY_HEADSET)
+ rt5506_query.hs_qstatus = QUERY_FINISH;
+
+ if (on) {
+ rt5506_query.rt5506_status = STATUS_PLAYBACK;
+ if (rt5506_i2c_write(i2c_command->reg,
+ i2c_command->reg_len) == 0) {
+ pr_info("%s: ON\n", __func__);
+ }
+ } else {
+ if (high_imp) {
+ rt5506_write_reg(1, 0x7);
+ rt5506_write_reg(0xb1, 0x81);
+ } else {
+ rt5506_write_reg(1, 0xc7);
+ }
+ if (rt5506_query.rt5506_status == STATUS_PLAYBACK)
+ pr_info("%s: OFF\n", __func__);
+ rt5506_query.rt5506_status = STATUS_OFF;
+ rt5506_query.curmode = PLAYBACK_MODE_OFF;
+ }
+ mutex_unlock(&hp_amp_lock);
+ mutex_unlock(&rt5506_query.mlock);
+}
+
+int query_rt5506(void)
+{
+ return rt5506Connect;
+}
+
+int rt5506_dump_reg(void)
+{
+ int ret = 0, rt5506_already_enabled = 0, i = 0;
+ unsigned char temp[2];
+
+ pr_info("%s:current gpio %d value %d\n", __func__,
+ pdata->rt5506_enable, gpio_get_value(pdata->rt5506_enable));
+
+ if (gpio_get_value(pdata->rt5506_enable) == 1) {
+ rt5506_already_enabled = 1;
+ } else {
+ ret = gpio_direction_output(pdata->rt5506_enable, 1);
+ if (ret) {
+ pr_err("%s: Fail set rt5506-enable to 1, ret=%d\n",
+ __func__, ret);
+ return ret;
+ } else {
+ pr_info("%s: rt5506-enable=1\n", __func__);
+ }
+ }
+
+ mdelay(1);
+ for (i = 0; i < sizeof(rt5506_valid_registers)/
+ sizeof(rt5506_valid_registers[0]); i++) {
+ ret = rt5506_i2c_read_addr(temp, rt5506_valid_registers[i]);
+ if (ret < 0) {
+ pr_err("%s: rt5506_i2c_read_addr(%x) fail\n",
+ __func__, rt5506_valid_registers[i]);
+ break;
+ }
+ }
+
+ if (rt5506_already_enabled == 0)
+ gpio_direction_output(pdata->rt5506_enable, 0);
+
+ return ret;
+}
+int set_rt5506_hp_en(bool on)
+{
+ if (!rt5506Connect)
+ return 0;
+ pr_info("%s: %d\n", __func__, on);
+ mutex_lock(&rt5506_query.actionlock);
+ rt5506_query.gpio_off_cancel = 1;
+
+ cancel_delayed_work_sync(&rt5506_query.gpio_off_work);
+ cancel_delayed_work_sync(&rt5506_query.volume_ramp_work);
+ mutex_lock(&rt5506_query.gpiolock);
+ if (on) {
+ if (rt5506_query.gpiostatus == AMP_GPIO_OFF) {
+ pr_info("%s: enable gpio %d\n", __func__,
+ pdata->rt5506_enable);
+ gpio_set_value(pdata->rt5506_enable, 1);
+ rt5506_query.gpiostatus = AMP_GPIO_ON;
+ usleep_range(1000, 2000);
+ }
+ } else {
+ if (rt5506_query.gpiostatus == AMP_GPIO_ON) {
+ rt5506_query.gpio_off_cancel = 0;
+ queue_delayed_work(gpio_wq,
+ &rt5506_query.gpio_off_work, msecs_to_jiffies(0));
+ }
+ }
+
+ mutex_unlock(&rt5506_query.gpiolock);
+ mutex_unlock(&rt5506_query.actionlock);
+
+ return 0;
+
+}
+int set_rt5506_amp(int on, int dsp)
+{
+ if (!rt5506Connect)
+ return 0;
+
+ pr_info("%s: %d\n", __func__, on);
+ mutex_lock(&rt5506_query.actionlock);
+ rt5506_query.gpio_off_cancel = 1;
+
+ cancel_delayed_work_sync(&rt5506_query.gpio_off_work);
+ cancel_delayed_work_sync(&rt5506_query.volume_ramp_work);
+ mutex_lock(&rt5506_query.gpiolock);
+
+ if (on) {
+ if (rt5506_query.gpiostatus == AMP_GPIO_OFF) {
+ pr_info("%s: enable gpio %d\n", __func__,
+ pdata->rt5506_enable);
+ gpio_set_value(pdata->rt5506_enable, 1);
+ rt5506_query.gpiostatus = AMP_GPIO_ON;
+ usleep_range(1000, 2000);
+ }
+ queue_delayed_work(ramp_wq,
+ &rt5506_query.volume_ramp_work, msecs_to_jiffies(0));
+ } else {
+ set_amp(0, &RT5506_AMP_ON);
+ if (rt5506_query.gpiostatus == AMP_GPIO_ON) {
+ rt5506_query.gpio_off_cancel = 0;
+ queue_delayed_work(gpio_wq,
+ &rt5506_query.gpio_off_work, msecs_to_jiffies(0));
+ }
+ }
+
+ mutex_unlock(&rt5506_query.gpiolock);
+ mutex_unlock(&rt5506_query.actionlock);
+
+ return 0;
+}
+
+static int update_amp_parameter(int mode)
+{
+ if (mode >= rt5506_cfg_data.mode_num)
+ return -EINVAL;
+
+ pr_info("%s: set mode %d\n", __func__, mode);
+
+ if (mode == PLAYBACK_MODE_OFF) {
+ memcpy(&RT5506_AMP_OFF,
+ &rt5506_cfg_data.cmd_data[mode].config,
+ sizeof(struct rt5506_config));
+ } else if (mode == AMP_INIT) {
+ memcpy(&RT5506_AMP_INIT,
+ &rt5506_cfg_data.cmd_data[mode].config,
+ sizeof(struct rt5506_config));
+ } else if (mode == AMP_MUTE) {
+ memcpy(&RT5506_AMP_MUTE,
+ &rt5506_cfg_data.cmd_data[mode].config,
+ sizeof(struct rt5506_config));
+ } else {
+ memcpy(&RT5506_AMP_ON,
+ &rt5506_cfg_data.cmd_data[mode].config,
+ sizeof(struct rt5506_config));
+ }
+ return 0;
+}
+
+
+static long rt5506_ioctl(struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ void __user *argp = (void __user *)arg;
+ int rc = 0, modeid = 0;
+ int premode = 0;
+
+ switch (cmd) {
+ case AMP_SET_MODE:
+ if (copy_from_user(&modeid, argp, sizeof(modeid)))
+ return -EFAULT;
+
+ if (!rt5506_cfg_data.cmd_data) {
+ pr_err("%s: out of memory\n", __func__);
+ return -ENOMEM;
+ }
+
+ if (modeid >= rt5506_cfg_data.mode_num || modeid < 0) {
+ pr_err("unsupported rt5506 mode %d\n", modeid);
+ return -EINVAL;
+ }
+ mutex_lock(&hp_amp_lock);
+ premode = rt5506_query.curmode;
+ rt5506_query.curmode = modeid;
+ rc = update_amp_parameter(modeid);
+ mutex_unlock(&hp_amp_lock);
+
+ pr_info("%s:set rt5506 mode to %d curstatus %d\n",
+ __func__, modeid, rt5506_query.rt5506_status);
+
+ mutex_lock(&rt5506_query.actionlock);
+
+ if (rt5506_query.rt5506_status == STATUS_PLAYBACK
+ && premode != rt5506_query.curmode) {
+ flush_work(&rt5506_query.volume_ramp_work.work);
+ queue_delayed_work(ramp_wq,
+ &rt5506_query.volume_ramp_work, msecs_to_jiffies(280));
+ }
+ mutex_unlock(&rt5506_query.actionlock);
+ break;
+ case AMP_SET_PARAM:
+ mutex_lock(&hp_amp_lock);
+ rt5506_cfg_data.mode_num = PLAYBACK_MAX_MODE;
+ if (rt5506_cfg_data.cmd_data == NULL)
+ rt5506_cfg_data.cmd_data = kzalloc(
+ sizeof(struct rt5506_comm_data) * rt5506_cfg_data.mode_num,
+ GFP_KERNEL);
+ if (!rt5506_cfg_data.cmd_data) {
+ pr_err("%s: out of memory\n", __func__);
+ mutex_unlock(&hp_amp_lock);
+ return -ENOMEM;
+ }
+
+ if (copy_from_user(rt5506_cfg_data.cmd_data,
+ ((struct rt5506_config_data *)argp),
+ sizeof(struct rt5506_comm_data) * rt5506_cfg_data.mode_num)) {
+ pr_err("%s: copy data from user failed.\n", __func__);
+ kfree(rt5506_cfg_data.cmd_data);
+ rt5506_cfg_data.cmd_data = NULL;
+ mutex_unlock(&hp_amp_lock);
+ return -EFAULT;
+ }
+
+ pr_info("%s: update rt5506 i2c commands #%d success.\n",
+ __func__, rt5506_cfg_data.mode_num);
+ /* update default parameters from csv */
+ update_amp_parameter(PLAYBACK_MODE_OFF);
+ update_amp_parameter(AMP_MUTE);
+ update_amp_parameter(AMP_INIT);
+ mutex_unlock(&hp_amp_lock);
+ rc = 0;
+ break;
+ case AMP_QUERY_OM:
+ mutex_lock(&rt5506_query.mlock);
+ rc = rt5506_query.headsetom;
+ mutex_unlock(&rt5506_query.mlock);
+ pr_info("%s: query headset om %d\n", __func__, rc);
+
+ if (copy_to_user(argp, &rc, sizeof(rc)))
+ rc = -EFAULT;
+ else
+ rc = 0;
+ break;
+ default:
+ pr_err("%s: Invalid command\n", __func__);
+ rc = -EINVAL;
+ break;
+ }
+ return rc;
+}
+
+static void rt5506_parse_pfdata(struct device *dev,
+struct rt5506_platform_data *ppdata)
+{
+ struct device_node *dt = dev->of_node;
+ enum of_gpio_flags flags;
+
+ pdata->rt5506_enable = -EINVAL;
+ pdata->rt5506_power_enable = -EINVAL;
+
+
+ if (dt) {
+ pr_info("%s: dt is used\n", __func__);
+ pdata->rt5506_enable =
+ of_get_named_gpio_flags(dt, "richtek,enable-gpio", 0, &flags);
+
+ pdata->rt5506_power_enable =
+ of_get_named_gpio_flags(dt,
+ "richtek,enable-power-gpio", 0, &flags);
+ } else {
+ pr_info("%s: dt is NOT used\n", __func__);
+ if (dev->platform_data) {
+ pdata->rt5506_enable =
+ ((struct rt5506_platform_data *)
+ dev->platform_data)->rt5506_enable;
+
+ pdata->rt5506_power_enable =
+ ((struct rt5506_platform_data *)
+ dev->platform_data)->rt5506_power_enable;
+ }
+ }
+
+ if (gpio_is_valid(pdata->rt5506_enable)) {
+ pr_info("%s: gpio_rt5506_enable %d\n",
+ __func__, pdata->rt5506_enable);
+ } else {
+ pr_err("%s: gpio_rt5506_enable %d is invalid\n",
+ __func__, pdata->rt5506_enable);
+ }
+
+ if (gpio_is_valid(pdata->rt5506_power_enable)) {
+ pr_info("%s: rt5506_power_enable %d\n",
+ __func__, pdata->rt5506_power_enable);
+ } else {
+ pr_err("%s: rt5506_power_enable %d is invalid\n",
+ __func__, pdata->rt5506_power_enable);
+ }
+}
+
+static const struct file_operations rt5506_fops = {
+ .owner = THIS_MODULE,
+ .open = rt5506_open,
+ .release = rt5506_release,
+ .unlocked_ioctl = rt5506_ioctl,
+ .compat_ioctl = rt5506_ioctl,
+};
+
+static struct miscdevice rt5506_device = {
+ .minor = MISC_DYNAMIC_MINOR,
+ .name = "rt5506",
+ .fops = &rt5506_fops,
+};
+
+int rt5506_probe(struct i2c_client *client, const struct i2c_device_id *id)
+{
+ int ret = 0;
+ unsigned char temp[2];
+
+ struct regulator *rt5506_reg;
+
+ pr_info("%s\n", __func__);
+
+ if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C)) {
+ pr_err("%s: i2c check functionality error\n", __func__);
+ ret = -ENODEV;
+ goto err_alloc_data_failed;
+ }
+
+ if (pdata == NULL) {
+ pdata = kzalloc(sizeof(*pdata), GFP_KERNEL);
+ if (pdata == NULL) {
+ ret = -ENOMEM;
+ pr_err("%s: platform data is NULL\n", __func__);
+ goto err_alloc_data_failed;
+ }
+ }
+
+ rt5506_parse_pfdata(&client->dev, pdata);
+
+ this_client = client;
+
+ if (!gpio_is_valid(pdata->rt5506_power_enable)) {
+ rt5506_reg = regulator_get(&client->dev, "ldoen");
+ if (IS_ERR(rt5506_reg)) {
+ pr_err("%s: Fail regulator_get ldoen\n", __func__);
+ goto err_free_allocated_mem;
+ } else {
+ ret = regulator_enable(rt5506_reg);
+ if (ret) {
+ pr_err("%s: Fail regulator_enable ldoen, %d\n",
+ __func__, ret);
+ goto err_free_allocated_mem;
+ } else
+ pr_info("rt5506_reg ldoen is enabled\n");
+ }
+ } else {
+ ret = gpio_request(pdata->rt5506_power_enable,
+ "rt5506-power-en");
+ if (ret) {
+ pr_err("%s: Fail gpio_request rt5506_power_enable, %d\n",
+ __func__, ret);
+ goto err_free_allocated_mem;
+ } else {
+ ret = gpio_direction_output(pdata->rt5506_power_enable,
+ 1);
+
+ if (ret) {
+ pr_err("%s: Fail set rt5506_power_enable to 1, ret=%d\n",
+ __func__, ret);
+ gpio_free(pdata->rt5506_power_enable);
+ goto err_free_allocated_mem;
+ } else {
+ pr_info("rt5506_power_enable=1\n");
+ }
+ }
+ }
+ mdelay(10);
+
+ if (gpio_is_valid(pdata->rt5506_enable)) {
+ ret = gpio_request(pdata->rt5506_enable, "rt5506-enable");
+ if (ret) {
+ pr_err("%s: Fail gpio_request rt5506-enable, %d\n",
+ __func__, ret);
+ goto err_free_allocated_mem;
+ } else {
+ ret = gpio_direction_output(pdata->rt5506_enable, 1);
+ if (ret) {
+ pr_err("%s: Fail set rt5506-enable to 1, ret=%d\n",
+ __func__, ret);
+ goto err_free_allocated_mem;
+ } else {
+ pr_info("%s: rt5506-enable=1\n", __func__);
+ }
+ }
+ } else {
+ pr_err("%s: rt5506_enable is invalid\n", __func__);
+ goto err_free_allocated_mem;
+ }
+
+ mdelay(1);
+
+ pr_info("%s:current gpio %d value %d\n", __func__,
+ pdata->rt5506_enable, gpio_get_value(pdata->rt5506_enable));
+
+ /*init start*/
+ rt5506_write_reg(0, 0x04);
+ mdelay(1);
+ rt5506_write_reg(0x0, 0xc0);
+ rt5506_write_reg(0x81, 0x30);
+ rt5506_write_reg(0x90, 0xd0);
+ rt5506_write_reg(0x93, 0x9d);
+ rt5506_write_reg(0x95, 0x7b);
+ rt5506_write_reg(0xa4, 0x52);
+ rt5506_write_reg(0x97, 0x00);
+ rt5506_write_reg(0x98, 0x22);
+ rt5506_write_reg(0x99, 0x33);
+ rt5506_write_reg(0x9a, 0x55);
+ rt5506_write_reg(0x9b, 0x66);
+ rt5506_write_reg(0x9c, 0x99);
+ rt5506_write_reg(0x9d, 0x66);
+ rt5506_write_reg(0x9e, 0x99);
+ /*init end*/
+
+ rt5506_write_reg(0x1, 0xc7);
+ mdelay(10);
+
+ ret = rt5506_i2c_read_addr(temp, 0x1);
+ if (ret < 0) {
+ pr_err("%s: rt5506 is not connected\n", __func__);
+ rt5506Connect = 0;
+ } else {
+ pr_info("%s: rt5506 is connected\n", __func__);
+ rt5506Connect = 1;
+ }
+
+ ret = gpio_direction_output(pdata->rt5506_enable, 0);
+ if (ret) {
+ pr_err("%s: Fail set rt5506-enable to 0, ret=%d\n", __func__,
+ ret);
+ gpio_free(pdata->rt5506_enable);
+ goto err_free_allocated_mem;
+ } else {
+ pr_info("%s: rt5506-enable=0\n", __func__);
+ /*gpio_free(pdata->rt5506_enable);*/
+ }
+
+
+ if (rt5506Connect) {
+ /*htc_acoustic_register_hs_amp(set_rt5506_amp,&rt5506_fops);*/
+ ret = misc_register(&rt5506_device);
+ if (ret) {
+ pr_err("%s: rt5506_device register failed\n", __func__);
+ goto err_free_allocated_mem;
+ } else {
+ pr_info("%s: rt5506 is misc_registered\n", __func__);
+ }
+
+ hs_wq = create_workqueue("rt5506_hsdetect");
+ INIT_DELAYED_WORK(&rt5506_query.hs_imp_detec_work,
+ hs_imp_detec_func);
+
+ wake_lock_init(&rt5506_query.hs_wake_lock,
+ WAKE_LOCK_SUSPEND, "rt5506 hs wakelock");
+
+ wake_lock_init(&rt5506_query.gpio_wake_lock,
+ WAKE_LOCK_SUSPEND, "rt5506 gpio wakelock");
+
+ ramp_wq = create_workqueue("rt5506_volume_ramp");
+ INIT_DELAYED_WORK(&rt5506_query.volume_ramp_work,
+ volume_ramp_func);
+
+ gpio_wq = create_workqueue("rt5506_gpio_off");
+ INIT_DELAYED_WORK(&rt5506_query.gpio_off_work, hs_imp_gpio_off);
+
+ rt5506_register_hs_notification();
+ }
+ return 0;
+
+err_free_allocated_mem:
+ kfree(pdata);
+err_alloc_data_failed:
+ rt5506Connect = 0;
+ return ret;
+}
+
+static int rt5506_remove(struct i2c_client *client)
+{
+ struct rt5506_platform_data *p5506data = i2c_get_clientdata(client);
+ struct regulator *rt5506_reg;
+ int ret = 0;
+ pr_info("%s:\n", __func__);
+
+ if (gpio_is_valid(pdata->rt5506_enable)) {
+ ret = gpio_request(pdata->rt5506_enable, "rt5506-enable");
+ if (ret) {
+ pr_err("%s: Fail gpio_request rt5506-enable, %d\n",
+ __func__, ret);
+ } else {
+ ret = gpio_direction_output(pdata->rt5506_enable, 0);
+ if (ret) {
+ pr_err("%s: Fail set rt5506-enable to 0, ret=%d\n",
+ __func__, ret);
+ } else {
+ pr_info("%s: rt5506-enable=0\n", __func__);
+ }
+ gpio_free(pdata->rt5506_enable);
+ }
+ }
+
+ mdelay(1);
+
+ if (!gpio_is_valid(pdata->rt5506_power_enable)) {
+ rt5506_reg = regulator_get(&client->dev, "ldoen");
+ if (IS_ERR(rt5506_reg)) {
+ pr_err("%s: Fail regulator_get ldoen\n", __func__);
+ } else {
+ ret = regulator_disable(rt5506_reg);
+ if (ret) {
+ pr_err("%s: Fail regulator_disable ldoen, %d\n",
+ __func__, ret);
+ } else
+ pr_info("rt5506_reg ldoen is disabled\n");
+ }
+ } else {
+		ret = gpio_request(pdata->rt5506_power_enable, "rt5506-power-enable");
+		if (ret) {
+			pr_err("%s: Fail gpio_request rt5506_power_enable, %d\n", __func__, ret);
+ } else {
+ ret = gpio_direction_output(pdata->rt5506_power_enable,
+ 0);
+ if (ret) {
+ pr_err("%s: Fail set rt5506_power_enable to 0, ret=%d\n",
+ __func__, ret);
+ } else {
+ pr_info("rt5506_power_enable=0\n");
+ }
+ gpio_free(pdata->rt5506_power_enable);
+ }
+ }
+
+ kfree(p5506data);
+
+ if (rt5506Connect) {
+ misc_deregister(&rt5506_device);
+ cancel_delayed_work_sync(&rt5506_query.hs_imp_detec_work);
+ destroy_workqueue(hs_wq);
+ }
+ return 0;
+}
+
+static void rt5506_shutdown(struct i2c_client *client)
+{
+ rt5506_query.gpio_off_cancel = 1;
+ cancel_delayed_work_sync(&rt5506_query.gpio_off_work);
+ cancel_delayed_work_sync(&rt5506_query.volume_ramp_work);
+
+ mutex_lock(&rt5506_query.gpiolock);
+ mutex_lock(&hp_amp_lock);
+ mutex_lock(&rt5506_query.mlock);
+
+ if (rt5506_query.gpiostatus == AMP_GPIO_OFF) {
+ pr_info("%s: enable gpio %d\n", __func__,
+ pdata->rt5506_enable);
+
+ gpio_set_value(pdata->rt5506_enable, 1);
+ rt5506_query.gpiostatus = AMP_GPIO_ON;
+ usleep_range(1000, 2000);
+ }
+ pr_info("%s: reset rt5506\n", __func__);
+ rt5506_write_reg(0x0, 0x4);
+ mdelay(1);
+ high_imp = 0;
+
+ if (rt5506_query.gpiostatus == AMP_GPIO_ON) {
+ pr_info("%s: disable gpio %d\n", __func__,
+ pdata->rt5506_enable);
+
+ gpio_set_value(pdata->rt5506_enable, 0);
+ rt5506_query.gpiostatus = AMP_GPIO_OFF;
+ }
+
+ rt5506Connect = 0;
+
+ mutex_unlock(&rt5506_query.mlock);
+ mutex_unlock(&hp_amp_lock);
+ mutex_unlock(&rt5506_query.gpiolock);
+
+}
+
+
+static const struct of_device_id rt5506_match_table[] = {
+ { .compatible = "richtek,rt5506-amp",},
+ { },
+};
+
+static const struct i2c_device_id rt5506_id[] = {
+ { RT5506_I2C_NAME, 0 },
+ { }
+};
+
+static struct i2c_driver rt5506_driver = {
+ .probe = rt5506_probe,
+ .remove = rt5506_remove,
+ .shutdown = rt5506_shutdown,
+ .suspend = NULL,
+ .resume = NULL,
+ .id_table = rt5506_id,
+ .driver = {
+ .owner = THIS_MODULE,
+ .name = RT5506_I2C_NAME,
+ .of_match_table = rt5506_match_table,
+ },
+};
+
+static int __init rt5506_init(void)
+{
+ pr_info("%s\n", __func__);
+ mutex_init(&hp_amp_lock);
+ mutex_init(&rt5506_query.mlock);
+ mutex_init(&rt5506_query.gpiolock);
+ mutex_init(&rt5506_query.actionlock);
+ rt5506_query.rt5506_status = STATUS_OFF;
+ rt5506_query.hs_qstatus = QUERY_OFF;
+ rt5506_query.headsetom = HEADSET_8OM;
+ rt5506_query.curmode = PLAYBACK_MODE_OFF;
+ rt5506_query.gpiostatus = AMP_GPIO_OFF;
+ return i2c_add_driver(&rt5506_driver);
+}
+
+static void __exit rt5506_exit(void)
+{
+ i2c_del_driver(&rt5506_driver);
+}
+
+module_init(rt5506_init);
+module_exit(rt5506_exit);
+
+MODULE_DESCRIPTION("rt5506 Headphone Amp driver");
+MODULE_LICENSE("GPL");
diff --git a/sound/soc/codecs/rt5506.h b/sound/soc/codecs/rt5506.h
new file mode 100644
index 0000000..bb2e150
--- /dev/null
+++ b/sound/soc/codecs/rt5506.h
@@ -0,0 +1,125 @@
+/*
+ * Definitions for rt5506 Headphone amp chip.
+ */
+#ifndef RT5506_H
+#define RT5506_H
+
+#include <linux/ioctl.h>
+#include <linux/wakelock.h>
+#include <linux/regulator/consumer.h>
+
+#define RT5506_I2C_NAME "rt5506"
+#define MAX_REG_DATA 15
+
+struct rt5506_platform_data {
+ uint32_t rt5506_enable;
+ uint32_t rt5506_power_enable;
+};
+
+struct rt5506_reg_data {
+ unsigned char addr;
+ unsigned char val;
+};
+
+struct rt5506_config {
+ unsigned int reg_len;
+ struct rt5506_reg_data reg[MAX_REG_DATA];
+};
+
+struct rt5506_comm_data {
+ unsigned int out_mode;
+ struct rt5506_config config;
+};
+
+struct rt5506_config_data {
+ unsigned int mode_num;
+ struct rt5506_comm_data *cmd_data;
+ /* [mode][mode_kind][reserve][cmds..] */
+};
+
+enum {
+ AMP_INIT = 0,
+ AMP_MUTE,
+ AMP_MAX_FUNC
+};
+
+enum PLAYBACK_MODE {
+ PLAYBACK_MODE_OFF = AMP_MAX_FUNC,
+ PLAYBACK_MODE_PLAYBACK,
+ PLAYBACK_MODE_PLAYBACK8OH,
+ PLAYBACK_MODE_PLAYBACK16OH,
+ PLAYBACK_MODE_PLAYBACK32OH,
+ PLAYBACK_MODE_PLAYBACK64OH,
+ PLAYBACK_MODE_PLAYBACK128OH,
+ PLAYBACK_MODE_PLAYBACK256OH,
+ PLAYBACK_MODE_PLAYBACK500OH,
+ PLAYBACK_MODE_PLAYBACK1KOH,
+ PLAYBACK_MODE_VOICE,
+ PLAYBACK_MODE_TTY,
+ PLAYBACK_MODE_FM,
+ PLAYBACK_MODE_RING,
+ PLAYBACK_MODE_MFG,
+ PLAYBACK_MODE_BEATS_8_64,
+ PLAYBACK_MODE_BEATS_128_500,
+ PLAYBACK_MODE_MONO,
+ PLAYBACK_MODE_MONO_BEATS,
+ PLAYBACK_MAX_MODE
+};
+
+enum HEADSET_QUERY_STATUS {
+ QUERY_OFF = 0,
+ QUERY_HEADSET,
+ QUERY_FINISH,
+};
+
+
+enum AMP_STATUS {
+ STATUS_OFF = 0,
+ STATUS_PLAYBACK,
+ STATUS_SUSPEND,
+};
+
+enum HEADSET_OM {
+ HEADSET_8OM = 0,
+ HEADSET_16OM,
+ HEADSET_32OM,
+ HEADSET_64OM,
+ HEADSET_128OM,
+ HEADSET_256OM,
+ HEADSET_500OM,
+ HEADSET_1KOM,
+ HEADSET_MONO,
+ HEADSET_OM_UNDER_DETECT,
+};
+
+enum AMP_GPIO_STATUS {
+ AMP_GPIO_OFF = 0,
+ AMP_GPIO_ON,
+ AMP_GPIO_QUERRTY_ON,
+};
+
+enum AMP_S4_STATUS {
+ AMP_S4_AUTO = 0,
+ AMP_S4_PWM,
+};
+
+#define QUERY_IMMED msecs_to_jiffies(0)
+#define QUERY_LATTER msecs_to_jiffies(200)
+#define AMP_SENSE_READY 0x80
+
+#define AMP_IOCTL_MAGIC 'g'
+#define AMP_SET_CONFIG _IOW(AMP_IOCTL_MAGIC, 0x01, unsigned)
+#define AMP_READ_CONFIG _IOW(AMP_IOCTL_MAGIC, 0x02, unsigned)
+#define AMP_SET_MODE _IOW(AMP_IOCTL_MAGIC, 0x03, unsigned)
+#define AMP_SET_PARAM _IOW(AMP_IOCTL_MAGIC, 0x04, unsigned)
+#define AMP_WRITE_REG _IOW(AMP_IOCTL_MAGIC, 0x07, unsigned)
+#define AMP_QUERY_OM _IOW(AMP_IOCTL_MAGIC, 0x08, unsigned)
+
+int query_rt5506(void);
+int set_rt5506_amp(int on, int dsp);
+int rt5506_headset_detect(int on);
+void rt5506_set_gain(u8 data);
+u8 rt5506_get_gain(void);
+int rt5506_dump_reg(void);
+int set_rt5506_hp_en(bool on);
+#endif
diff --git a/sound/soc/codecs/rt5677-spi.c b/sound/soc/codecs/rt5677-spi.c
new file mode 100644
index 0000000..44d796a
--- /dev/null
+++ b/sound/soc/codecs/rt5677-spi.c
@@ -0,0 +1,276 @@
+/*
+ * rt5677-spi.c -- RT5677 ALSA SoC audio codec driver
+ *
+ * Copyright 2013 Realtek Semiconductor Corp.
+ * Author: Oder Chiou <oder_chiou@realtek.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/input.h>
+#include <linux/spi/spi.h>
+#include <linux/device.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/slab.h>
+#include <linux/gpio.h>
+#include <linux/sched.h>
+#include <linux/kthread.h>
+#include <linux/uaccess.h>
+#include <linux/miscdevice.h>
+#include <linux/regulator/consumer.h>
+#include <linux/pm_qos.h>
+#include <linux/sysfs.h>
+#include <linux/clk.h>
+#include "rt5677-spi.h"
+
+#define SPI_BURST_LEN 240
+#define SPI_HEADER 5
+#define SPI_READ_FREQ 12228000
+
+#define RT5677_SPI_WRITE_BURST 0x5
+#define RT5677_SPI_READ_BURST 0x4
+#define RT5677_SPI_WRITE_32 0x3
+#define RT5677_SPI_READ_32 0x2
+#define RT5677_SPI_WRITE_16 0x1
+#define RT5677_SPI_READ_16 0x0
+
+static struct spi_device *g_spi;
+static DEFINE_MUTEX(spi_mutex);
+
+/* Read DSP memory using SPI. Both addr and len must be multiples of 16 bits. */
+int rt5677_spi_read(u32 addr, u8 *rx_data, size_t len)
+{
+ unsigned int i, end, offset = 0;
+ int status = 0;
+ struct spi_transfer t[2];
+ struct spi_message m;
+ u8 *rx_buf;
+ u8 buf[SPI_BURST_LEN + SPI_HEADER + 4];
+ u8 spi_cmd;
+
+ rx_buf = buf + SPI_HEADER + 4;
+ memset(t, 0, sizeof(t));
+ t[0].tx_buf = buf;
+ t[0].len = SPI_HEADER + 4;
+ t[0].speed_hz = SPI_READ_FREQ;
+ t[1].rx_buf = rx_buf;
+ t[1].speed_hz = SPI_READ_FREQ;
+ spi_message_init(&m);
+ spi_message_add_tail(&t[0], &m);
+ spi_message_add_tail(&t[1], &m);
+
+ while (offset < len) {
+ switch ((addr + offset) & 0x7) {
+ case 4:
+ spi_cmd = RT5677_SPI_READ_32;
+ end = 4;
+ break;
+ case 2:
+ case 6:
+ spi_cmd = RT5677_SPI_READ_16;
+ end = 2;
+ break;
+ case 0:
+ spi_cmd = RT5677_SPI_READ_BURST;
+ if (offset + SPI_BURST_LEN <= len)
+ end = SPI_BURST_LEN;
+ else {
+ end = len - offset;
+ end = (((end - 1) >> 3) + 1) << 3;
+ }
+ break;
+ default:
+ pr_err("Bad section alignment\n");
+ return -EACCES;
+ }
+
+ buf[0] = spi_cmd;
+ buf[1] = ((addr + offset) & 0xff000000) >> 24;
+ buf[2] = ((addr + offset) & 0x00ff0000) >> 16;
+ buf[3] = ((addr + offset) & 0x0000ff00) >> 8;
+ buf[4] = ((addr + offset) & 0x000000ff) >> 0;
+
+ t[1].len = end;
+
+ pr_debug("%s: addr = 0x%08X len = %zu read = %u spi_cmd=%d\n",
+ __func__, addr + offset, len, end, spi_cmd);
+
+ mutex_lock(&spi_mutex);
+ status |= spi_sync(g_spi, &m);
+ mutex_unlock(&spi_mutex);
+
+ if (spi_cmd == RT5677_SPI_READ_BURST) {
+ for (i = 0; i < end; i += 8) {
+ rx_data[offset + i + 0] = rx_buf[i + 7];
+ rx_data[offset + i + 1] = rx_buf[i + 6];
+ rx_data[offset + i + 2] = rx_buf[i + 5];
+ rx_data[offset + i + 3] = rx_buf[i + 4];
+ rx_data[offset + i + 4] = rx_buf[i + 3];
+ rx_data[offset + i + 5] = rx_buf[i + 2];
+ rx_data[offset + i + 6] = rx_buf[i + 1];
+ rx_data[offset + i + 7] = rx_buf[i + 0];
+ }
+ } else {
+ for (i = 0; i < end; i++)
+ rx_data[offset + i] = rx_buf[end - i - 1];
+ }
+
+ offset += end;
+ }
+
+ return status;
+}
+
+int rt5677_spi_write(u32 addr, const u8 *txbuf, size_t len)
+{
+ unsigned int i, end, offset = 0;
+ int status = 0;
+ u8 write_buf[SPI_BURST_LEN + SPI_HEADER + 1];
+ u8 spi_cmd;
+
+ while (offset < len) {
+ switch ((addr + offset) & 0x7) {
+ case 4:
+ spi_cmd = RT5677_SPI_WRITE_32;
+ end = 4;
+ break;
+ case 2:
+ case 6:
+ spi_cmd = RT5677_SPI_WRITE_16;
+ end = 2;
+ break;
+ case 0:
+ spi_cmd = RT5677_SPI_WRITE_BURST;
+ if (offset + SPI_BURST_LEN <= len)
+ end = SPI_BURST_LEN;
+ else {
+ end = len - offset;
+ end = (((end - 1) >> 3) + 1) << 3;
+ }
+ break;
+ default:
+ pr_err("Bad section alignment\n");
+ return -EACCES;
+ }
+
+ write_buf[0] = spi_cmd;
+ write_buf[1] = ((addr + offset) & 0xff000000) >> 24;
+ write_buf[2] = ((addr + offset) & 0x00ff0000) >> 16;
+ write_buf[3] = ((addr + offset) & 0x0000ff00) >> 8;
+ write_buf[4] = ((addr + offset) & 0x000000ff) >> 0;
+
+ if (spi_cmd == RT5677_SPI_WRITE_BURST) {
+ for (i = 0; i < end; i += 8) {
+ write_buf[i + 12] = txbuf[offset + i + 0];
+ write_buf[i + 11] = txbuf[offset + i + 1];
+ write_buf[i + 10] = txbuf[offset + i + 2];
+ write_buf[i + 9] = txbuf[offset + i + 3];
+ write_buf[i + 8] = txbuf[offset + i + 4];
+ write_buf[i + 7] = txbuf[offset + i + 5];
+ write_buf[i + 6] = txbuf[offset + i + 6];
+ write_buf[i + 5] = txbuf[offset + i + 7];
+ }
+ } else {
+ unsigned int j = end + (SPI_HEADER - 1);
+ for (i = 0; i < end; i++, j--) {
+ if (offset + i < len)
+ write_buf[j] = txbuf[offset + i];
+ else
+ write_buf[j] = 0;
+ }
+ }
+ write_buf[end + SPI_HEADER] = spi_cmd;
+
+ mutex_lock(&spi_mutex);
+ status |= spi_write(g_spi, write_buf, end + SPI_HEADER + 1);
+ mutex_unlock(&spi_mutex);
+
+ offset += end;
+ }
+
+ return status;
+}
+
+static int rt5677_spi_probe(struct spi_device *spi)
+{
+ g_spi = spi;
+ return 0;
+}
+
+static int rt5677_spi_remove(struct spi_device *spi)
+{
+ return 0;
+}
+
+#ifdef CONFIG_PM
+static int rt5677_suspend(struct device *dev)
+{
+ struct spi_device *spi = to_spi_device(dev);
+ struct rt5677_spi_platform_data *pdata;
+ pr_debug("%s\n", __func__);
+ if (spi == NULL) {
+		pr_debug("spi_device doesn't exist\n");
+ return 0;
+ }
+ pdata = spi->dev.platform_data;
+ if (pdata && (pdata->spi_suspend))
+ pdata->spi_suspend(1);
+ return 0;
+}
+
+static int rt5677_resume(struct device *dev)
+{
+ struct spi_device *spi = to_spi_device(dev);
+ struct rt5677_spi_platform_data *pdata;
+ pr_debug("%s\n", __func__);
+ if (spi == NULL) {
+		pr_debug("spi_device doesn't exist\n");
+ return 0;
+ }
+ pdata = spi->dev.platform_data;
+ if (pdata && (pdata->spi_suspend))
+ pdata->spi_suspend(0);
+ return 0;
+}
+
+static const struct dev_pm_ops rt5677_pm_ops = {
+ .suspend = rt5677_suspend,
+ .resume = rt5677_resume,
+};
+#endif /*CONFIG_PM */
+
+static struct spi_driver rt5677_spi_driver = {
+ .driver = {
+ .name = "rt5677_spidev",
+ .bus = &spi_bus_type,
+ .owner = THIS_MODULE,
+#if defined(CONFIG_PM)
+ .pm = &rt5677_pm_ops,
+#endif
+ },
+ .probe = rt5677_spi_probe,
+ .remove = rt5677_spi_remove,
+};
+
+static int __init rt5677_spi_init(void)
+{
+ return spi_register_driver(&rt5677_spi_driver);
+}
+
+static void __exit rt5677_spi_exit(void)
+{
+ spi_unregister_driver(&rt5677_spi_driver);
+}
+
+module_init(rt5677_spi_init);
+module_exit(rt5677_spi_exit);
+
+MODULE_DESCRIPTION("ASoC RT5677 driver");
+MODULE_AUTHOR("Oder Chiou <oder_chiou@realtek.com>");
+MODULE_LICENSE("GPL");
diff --git a/sound/soc/codecs/rt5677-spi.h b/sound/soc/codecs/rt5677-spi.h
new file mode 100644
index 0000000..71f38aa
--- /dev/null
+++ b/sound/soc/codecs/rt5677-spi.h
@@ -0,0 +1,22 @@
+/*
+ * rt5677-spi.h -- RT5677 ALSA SoC audio codec driver
+ *
+ * Copyright 2013 Realtek Semiconductor Corp.
+ * Author: Oder Chiou <oder_chiou@realtek.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __RT5677_SPI_H__
+#define __RT5677_SPI_H__
+
+int rt5677_spi_write(u32 addr, const u8 *txbuf, size_t len);
+int rt5677_spi_read(u32 addr, u8 *rxbuf, size_t len);
+
+struct rt5677_spi_platform_data {
+ void (*spi_suspend) (bool);
+};
+
+#endif /* __RT5677_SPI_H__ */
diff --git a/sound/soc/codecs/rt5677.c b/sound/soc/codecs/rt5677.c
new file mode 100644
index 0000000..3d32d11
--- /dev/null
+++ b/sound/soc/codecs/rt5677.c
@@ -0,0 +1,5125 @@
+/*
+ * rt5677.c -- RT5677 ALSA SoC audio codec driver
+ *
+ * Copyright 2013 Realtek Semiconductor Corp.
+ * Author: Bard Liao <bardliao@realtek.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/fs.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/pm.h>
+#include <linux/regmap.h>
+#include <linux/i2c.h>
+#include <linux/platform_device.h>
+#include <linux/spi/spi.h>
+#include <linux/of_gpio.h>
+#include <linux/elf.h>
+#include <linux/firmware.h>
+#include <sound/core.h>
+#include <sound/pcm.h>
+#include <sound/pcm_params.h>
+#include <sound/soc.h>
+#include <sound/soc-dapm.h>
+#include <sound/initval.h>
+#include <sound/tlv.h>
+#include <linux/htc_headset_mgr.h>
+
+#define RTK_IOCTL
+#define RT5677_DMIC_CLK_MAX 2400000
+#define RT5677_PRIV_MIC_BUF_SIZE (64 * 1024)
+
+#ifdef RTK_IOCTL
+#if defined(CONFIG_SND_HWDEP) || defined(CONFIG_SND_HWDEP_MODULE)
+#include "rt_codec_ioctl.h"
+#include "rt5677_ioctl.h"
+#endif
+#endif
+
+#include "rt5677.h"
+#include "rt5677-spi.h"
+
+#define VERSION "0.0.2 alsa 1.0.25"
+#define RT5677_PATH "rt5677_"
+
+static int dmic_depop_time = 100;
+module_param(dmic_depop_time, int, 0644);
+static int amic_depop_time = 150;
+module_param(amic_depop_time, int, 0644);
+
+static struct rt5677_priv *rt5677_global;
+
+struct rt5677_init_reg {
+ u8 reg;
+ u16 val;
+};
+
+static char *dsp_vad_suffix = "vad";
+module_param(dsp_vad_suffix, charp, S_IRUSR | S_IWUSR);
+MODULE_PARM_DESC(dsp_vad_suffix, "DSP VAD Firmware Suffix");
+
+extern void set_rt5677_power_extern(bool enable);
+extern int get_mic_state(void);
+
+static struct rt5677_init_reg init_list[] = {
+ {RT5677_DIG_MISC , 0x0021},
+ {RT5677_PRIV_INDEX , 0x003d},
+ {RT5677_PRIV_DATA , 0x364e},
+ {RT5677_PWR_DSP2 , 0x0c00},
+ /* 64Fs in TDM mode */
+ {RT5677_TDM1_CTRL1 , 0x1300},
+ {RT5677_MONO_ADC_DIG_VOL , 0xafaf},
+
+ /* MX80 bit10 0:MCLK1 1:MCLK2 */
+ {RT5677_GLB_CLK1 , 0x0400},
+
+ /* Playback Start */
+ {RT5677_PRIV_INDEX , 0x0017},
+ {RT5677_PRIV_DATA , 0x4fc0},
+ {RT5677_PRIV_INDEX , 0x0013},
+ {RT5677_PRIV_DATA , 0x0632},
+ {RT5677_LOUT1 , 0xf800},
+ {RT5677_STO1_DAC_MIXER , 0x8a8a},
+ /* Playback End */
+
+ /* Record Start */
+ {RT5677_PRIV_INDEX , 0x001e},
+ {RT5677_PRIV_DATA , 0x0000},
+ {RT5677_PRIV_INDEX , 0x0012},
+ {RT5677_PRIV_DATA , 0x0eaa},
+ {RT5677_PRIV_INDEX , 0x0014},
+ {RT5677_PRIV_DATA , 0x0f8b},
+ {RT5677_IN1 , 0x0040},
+ {RT5677_MICBIAS , 0x4000},
+ {RT5677_MONO_ADC_MIXER , 0xd4d5},
+ {RT5677_TDM1_CTRL2 , 0x0106},
+ /* Record End */
+};
+#define RT5677_INIT_REG_LEN ARRAY_SIZE(init_list)
+
+static int rt5677_reg_init(struct snd_soc_codec *codec)
+{
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+ int i;
+
+ for (i = 0; i < RT5677_INIT_REG_LEN; i++)
+ regmap_write(rt5677->regmap, init_list[i].reg,
+ init_list[i].val);
+
+ return 0;
+}
+
+static int rt5677_index_sync(struct snd_soc_codec *codec)
+{
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+ int i;
+
+ for (i = 0; i < RT5677_INIT_REG_LEN; i++)
+ if (RT5677_PRIV_INDEX == init_list[i].reg ||
+ RT5677_PRIV_DATA == init_list[i].reg)
+ regmap_write(rt5677->regmap, init_list[i].reg,
+ init_list[i].val);
+ return 0;
+}
+
+static const struct reg_default rt5677_reg[] = {
+ {RT5677_RESET , 0x0000},
+ {RT5677_LOUT1 , 0xa800},
+ {RT5677_IN1 , 0x0000},
+ {RT5677_MICBIAS , 0x0000},
+ {RT5677_SLIMBUS_PARAM , 0x0000},
+ {RT5677_SLIMBUS_RX , 0x0000},
+ {RT5677_SLIMBUS_CTRL , 0x0000},
+ {RT5677_SIDETONE_CTRL , 0x000b},
+ {RT5677_ANA_DAC1_2_3_SRC , 0x0000},
+ {RT5677_IF_DSP_DAC3_4_MIXER , 0x1111},
+ {RT5677_DAC4_DIG_VOL , 0xafaf},
+ {RT5677_DAC3_DIG_VOL , 0xafaf},
+ {RT5677_DAC1_DIG_VOL , 0xafaf},
+ {RT5677_DAC2_DIG_VOL , 0xafaf},
+ {RT5677_IF_DSP_DAC2_MIXER , 0x0011},
+ {RT5677_STO1_ADC_DIG_VOL , 0x2f2f},
+ {RT5677_MONO_ADC_DIG_VOL , 0x2f2f},
+ {RT5677_STO1_2_ADC_BST , 0x0000},
+ {RT5677_STO2_ADC_DIG_VOL , 0x2f2f},
+ {RT5677_ADC_BST_CTRL2 , 0x0000},
+ {RT5677_STO3_4_ADC_BST , 0x0000},
+ {RT5677_STO3_ADC_DIG_VOL , 0x2f2f},
+ {RT5677_STO4_ADC_DIG_VOL , 0x2f2f},
+ {RT5677_STO4_ADC_MIXER , 0xd4c0},
+ {RT5677_STO3_ADC_MIXER , 0xd4c0},
+ {RT5677_STO2_ADC_MIXER , 0xd4c0},
+ {RT5677_STO1_ADC_MIXER , 0xd4c0},
+ {RT5677_MONO_ADC_MIXER , 0xd4d1},
+ {RT5677_ADC_IF_DSP_DAC1_MIXER , 0x8080},
+ {RT5677_STO1_DAC_MIXER , 0xaaaa},
+ {RT5677_MONO_DAC_MIXER , 0xaaaa},
+ {RT5677_DD1_MIXER , 0xaaaa},
+ {RT5677_DD2_MIXER , 0xaaaa},
+ {RT5677_IF3_DATA , 0x0000},
+ {RT5677_IF4_DATA , 0x0000},
+ {RT5677_PDM_OUT_CTRL , 0x8888},
+ {RT5677_PDM_DATA_CTRL1 , 0x0000},
+ {RT5677_PDM_DATA_CTRL2 , 0x0000},
+ {RT5677_PDM1_DATA_CTRL2 , 0x0000},
+ {RT5677_PDM1_DATA_CTRL3 , 0x0000},
+ {RT5677_PDM1_DATA_CTRL4 , 0x0000},
+ {RT5677_PDM2_DATA_CTRL2 , 0x0000},
+ {RT5677_PDM2_DATA_CTRL3 , 0x0000},
+ {RT5677_PDM2_DATA_CTRL4 , 0x0000},
+ {RT5677_TDM1_CTRL1 , 0x0300},
+ {RT5677_TDM1_CTRL2 , 0x0000},
+ {RT5677_TDM1_CTRL3 , 0x4000},
+ {RT5677_TDM1_CTRL4 , 0x0123},
+ {RT5677_TDM1_CTRL5 , 0x4567},
+ {RT5677_TDM2_CTRL1 , 0x0300},
+ {RT5677_TDM2_CTRL2 , 0x0000},
+ {RT5677_TDM2_CTRL3 , 0x4000},
+ {RT5677_TDM2_CTRL4 , 0x0123},
+ {RT5677_TDM2_CTRL5 , 0x4567},
+ {RT5677_I2C_MASTER_CTRL1 , 0x0001},
+ {RT5677_I2C_MASTER_CTRL2 , 0x0000},
+ {RT5677_I2C_MASTER_CTRL3 , 0x0000},
+ {RT5677_I2C_MASTER_CTRL4 , 0x0000},
+ {RT5677_I2C_MASTER_CTRL5 , 0x0000},
+ {RT5677_I2C_MASTER_CTRL6 , 0x0000},
+ {RT5677_I2C_MASTER_CTRL7 , 0x0000},
+ {RT5677_I2C_MASTER_CTRL8 , 0x0000},
+ {RT5677_DMIC_CTRL1 , 0x1505},
+ {RT5677_DMIC_CTRL2 , 0x0055},
+ {RT5677_HAP_GENE_CTRL1 , 0x0111},
+ {RT5677_HAP_GENE_CTRL2 , 0x0064},
+ {RT5677_HAP_GENE_CTRL3 , 0xef0e},
+ {RT5677_HAP_GENE_CTRL4 , 0xf0f0},
+ {RT5677_HAP_GENE_CTRL5 , 0xef0e},
+ {RT5677_HAP_GENE_CTRL6 , 0xf0f0},
+ {RT5677_HAP_GENE_CTRL7 , 0xef0e},
+ {RT5677_HAP_GENE_CTRL8 , 0xf0f0},
+ {RT5677_HAP_GENE_CTRL9 , 0xf000},
+ {RT5677_HAP_GENE_CTRL10 , 0x0000},
+ {RT5677_PWR_DIG1 , 0x0000},
+ {RT5677_PWR_DIG2 , 0x0000},
+ {RT5677_PWR_ANLG1 , 0x0055},
+ {RT5677_PWR_ANLG2 , 0x0000},
+ {RT5677_PWR_DSP1 , 0x0001},
+ {RT5677_PWR_DSP_ST , 0x0000},
+ {RT5677_PWR_DSP2 , 0x0000},
+ {RT5677_ADC_DAC_HPF_CTRL1 , 0x0e00},
+ {RT5677_PRIV_INDEX , 0x0000},
+ {RT5677_PRIV_DATA , 0x0000},
+ {RT5677_I2S4_SDP , 0x8000},
+ {RT5677_I2S1_SDP , 0x8000},
+ {RT5677_I2S2_SDP , 0x8000},
+ {RT5677_I2S3_SDP , 0x8000},
+ {RT5677_CLK_TREE_CTRL1 , 0x1111},
+ {RT5677_CLK_TREE_CTRL2 , 0x1111},
+ {RT5677_CLK_TREE_CTRL3 , 0x0000},
+ {RT5677_PLL1_CTRL1 , 0x0000},
+ {RT5677_PLL1_CTRL2 , 0x0000},
+ {RT5677_PLL2_CTRL1 , 0x0c60},
+ {RT5677_PLL2_CTRL2 , 0x2000},
+ {RT5677_GLB_CLK1 , 0x0000},
+ {RT5677_GLB_CLK2 , 0x0000},
+ {RT5677_ASRC_1 , 0x0000},
+ {RT5677_ASRC_2 , 0x0000},
+ {RT5677_ASRC_3 , 0x0000},
+ {RT5677_ASRC_4 , 0x0000},
+ {RT5677_ASRC_5 , 0x0000},
+ {RT5677_ASRC_6 , 0x0000},
+ {RT5677_ASRC_7 , 0x0000},
+ {RT5677_ASRC_8 , 0x0000},
+ {RT5677_ASRC_9 , 0x0000},
+ {RT5677_ASRC_10 , 0x0000},
+ {RT5677_ASRC_11 , 0x0000},
+ {RT5677_ASRC_12 , 0x0008},
+ {RT5677_ASRC_13 , 0x0000},
+ {RT5677_ASRC_14 , 0x0000},
+ {RT5677_ASRC_15 , 0x0000},
+ {RT5677_ASRC_16 , 0x0000},
+ {RT5677_ASRC_17 , 0x0000},
+ {RT5677_ASRC_18 , 0x0000},
+ {RT5677_ASRC_19 , 0x0000},
+ {RT5677_ASRC_20 , 0x0000},
+ {RT5677_ASRC_21 , 0x000c},
+ {RT5677_ASRC_22 , 0x0000},
+ {RT5677_ASRC_23 , 0x0000},
+ {RT5677_VAD_CTRL1 , 0x2184},
+ {RT5677_VAD_CTRL2 , 0x010a},
+ {RT5677_VAD_CTRL3 , 0x0aea},
+ {RT5677_VAD_CTRL4 , 0x000c},
+ {RT5677_VAD_CTRL5 , 0x0000},
+ {RT5677_DSP_INB_CTRL1 , 0x0000},
+ {RT5677_DSP_INB_CTRL2 , 0x0000},
+ {RT5677_DSP_IN_OUTB_CTRL , 0x0000},
+ {RT5677_DSP_OUTB0_1_DIG_VOL , 0x2f2f},
+ {RT5677_DSP_OUTB2_3_DIG_VOL , 0x2f2f},
+ {RT5677_DSP_OUTB4_5_DIG_VOL , 0x2f2f},
+ {RT5677_DSP_OUTB6_7_DIG_VOL , 0x2f2f},
+ {RT5677_ADC_EQ_CTRL1 , 0x6000},
+ {RT5677_ADC_EQ_CTRL2 , 0x0000},
+ {RT5677_EQ_CTRL1 , 0xc000},
+ {RT5677_EQ_CTRL2 , 0x0000},
+ {RT5677_EQ_CTRL3 , 0x0000},
+ {RT5677_SOFT_VOL_ZERO_CROSS1 , 0x0009},
+ {RT5677_JD_CTRL1 , 0x0000},
+ {RT5677_JD_CTRL2 , 0x0000},
+ {RT5677_JD_CTRL3 , 0x0000},
+ {RT5677_IRQ_CTRL1 , 0x0000},
+ {RT5677_IRQ_CTRL2 , 0x0000},
+ {RT5677_GPIO_ST , 0x0000},
+ {RT5677_GPIO_CTRL1 , 0x0000},
+ {RT5677_GPIO_CTRL2 , 0x0000},
+ {RT5677_GPIO_CTRL3 , 0x0000},
+ {RT5677_STO1_ADC_HI_FILTER1 , 0xb320},
+ {RT5677_STO1_ADC_HI_FILTER2 , 0x0000},
+ {RT5677_MONO_ADC_HI_FILTER1 , 0xb300},
+ {RT5677_MONO_ADC_HI_FILTER2 , 0x0000},
+ {RT5677_STO2_ADC_HI_FILTER1 , 0xb300},
+ {RT5677_STO2_ADC_HI_FILTER2 , 0x0000},
+ {RT5677_STO3_ADC_HI_FILTER1 , 0xb300},
+ {RT5677_STO3_ADC_HI_FILTER2 , 0x0000},
+ {RT5677_STO4_ADC_HI_FILTER1 , 0xb300},
+ {RT5677_STO4_ADC_HI_FILTER2 , 0x0000},
+ {RT5677_MB_DRC_CTRL1 , 0x0f20},
+ {RT5677_DRC1_CTRL1 , 0x001f},
+ {RT5677_DRC1_CTRL2 , 0x020c},
+ {RT5677_DRC1_CTRL3 , 0x1f00},
+ {RT5677_DRC1_CTRL4 , 0x0000},
+ {RT5677_DRC1_CTRL5 , 0x0000},
+ {RT5677_DRC1_CTRL6 , 0x0029},
+ {RT5677_DRC2_CTRL1 , 0x001f},
+ {RT5677_DRC2_CTRL2 , 0x020c},
+ {RT5677_DRC2_CTRL3 , 0x1f00},
+ {RT5677_DRC2_CTRL4 , 0x0000},
+ {RT5677_DRC2_CTRL5 , 0x0000},
+ {RT5677_DRC2_CTRL6 , 0x0029},
+ {RT5677_DRC1_HL_CTRL1 , 0x8000},
+ {RT5677_DRC1_HL_CTRL2 , 0x0200},
+ {RT5677_DRC2_HL_CTRL1 , 0x8000},
+ {RT5677_DRC2_HL_CTRL2 , 0x0200},
+ {RT5677_DSP_INB1_SRC_CTRL1 , 0x5800},
+ {RT5677_DSP_INB1_SRC_CTRL2 , 0x0000},
+ {RT5677_DSP_INB1_SRC_CTRL3 , 0x0000},
+ {RT5677_DSP_INB1_SRC_CTRL4 , 0x0800},
+ {RT5677_DSP_INB2_SRC_CTRL1 , 0x5800},
+ {RT5677_DSP_INB2_SRC_CTRL2 , 0x0000},
+ {RT5677_DSP_INB2_SRC_CTRL3 , 0x0000},
+ {RT5677_DSP_INB2_SRC_CTRL4 , 0x0800},
+ {RT5677_DSP_INB3_SRC_CTRL1 , 0x5800},
+ {RT5677_DSP_INB3_SRC_CTRL2 , 0x0000},
+ {RT5677_DSP_INB3_SRC_CTRL3 , 0x0000},
+ {RT5677_DSP_INB3_SRC_CTRL4 , 0x0800},
+ {RT5677_DSP_OUTB1_SRC_CTRL1 , 0x5800},
+ {RT5677_DSP_OUTB1_SRC_CTRL2 , 0x0000},
+ {RT5677_DSP_OUTB1_SRC_CTRL3 , 0x0000},
+ {RT5677_DSP_OUTB1_SRC_CTRL4 , 0x0800},
+ {RT5677_DSP_OUTB2_SRC_CTRL1 , 0x5800},
+ {RT5677_DSP_OUTB2_SRC_CTRL2 , 0x0000},
+ {RT5677_DSP_OUTB2_SRC_CTRL3 , 0x0000},
+ {RT5677_DSP_OUTB2_SRC_CTRL4 , 0x0800},
+ {RT5677_DSP_OUTB_0123_MIXER_CTRL, 0xfefe},
+ {RT5677_DSP_OUTB_45_MIXER_CTRL , 0xfefe},
+ {RT5677_DSP_OUTB_67_MIXER_CTRL , 0xfefe},
+ {RT5677_DIG_MISC , 0x0000},
+ {RT5677_GEN_CTRL1 , 0x0000},
+ {RT5677_GEN_CTRL2 , 0x0000},
+ {RT5677_VENDOR_ID , 0x0000},
+ {RT5677_VENDOR_ID1 , 0x10ec},
+ {RT5677_VENDOR_ID2 , 0x6327},
+};
+
+static int rt5677_reset(struct snd_soc_codec *codec)
+{
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+
+ return regmap_write(rt5677->regmap, RT5677_RESET, 0x10ec);
+}
+
+/**
+ * rt5677_index_write - Write private register.
+ * @codec: SoC audio codec device.
+ * @reg: Private register index.
+ * @value: Private register data.
+ *
+ * Modify private register for advanced setting. It can be written through
+ * private index (0x6a) and data (0x6c) register.
+ *
+ * Returns 0 for success or negative error code.
+ */
+static int rt5677_index_write(struct snd_soc_codec *codec,
+ unsigned int reg, unsigned int value)
+{
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+ int ret;
+
+ mutex_lock(&rt5677->index_lock);
+
+ ret = regmap_write(rt5677->regmap, RT5677_PRIV_INDEX, reg);
+ if (ret < 0) {
+ dev_err(codec->dev, "Failed to set private addr: %d\n", ret);
+ goto err;
+ }
+ ret = regmap_write(rt5677->regmap, RT5677_PRIV_DATA, value);
+ if (ret < 0) {
+ dev_err(codec->dev, "Failed to set private value: %d\n", ret);
+ goto err;
+ }
+
+ mutex_unlock(&rt5677->index_lock);
+
+ return 0;
+
+err:
+ mutex_unlock(&rt5677->index_lock);
+
+ return ret;
+}
+
+/**
+ * rt5677_index_read - Read private register.
+ * @codec: SoC audio codec device.
+ * @reg: Private register index.
+ *
+ * Read advanced setting from private register. It can be read through
+ * private index (0x6a) and data (0x6c) register.
+ *
+ * Returns private register value or negative error code.
+ */
+static unsigned int rt5677_index_read(
+ struct snd_soc_codec *codec, unsigned int reg)
+{
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+ int ret;
+
+ mutex_lock(&rt5677->index_lock);
+
+ ret = regmap_write(rt5677->regmap, RT5677_PRIV_INDEX, reg);
+ if (ret < 0) {
+ dev_err(codec->dev, "Failed to set private addr: %d\n", ret);
+ mutex_unlock(&rt5677->index_lock);
+ return ret;
+ }
+
+ regmap_read(rt5677->regmap, RT5677_PRIV_DATA, &ret);
+
+ mutex_unlock(&rt5677->index_lock);
+
+ return ret;
+}
+
+/**
+ * rt5677_index_update_bits - update private register bits
+ * @codec: audio codec
+ * @reg: Private register index.
+ * @mask: register mask
+ * @value: new value
+ *
+ * Writes new register value.
+ *
+ * Returns 1 for change, 0 for no change, or negative error code.
+ */
+static int rt5677_index_update_bits(struct snd_soc_codec *codec,
+ unsigned int reg, unsigned int mask, unsigned int value)
+{
+ unsigned int old, new;
+ int change, ret;
+
+ ret = rt5677_index_read(codec, reg);
+ if (ret < 0) {
+ dev_err(codec->dev, "Failed to read private reg: %d\n", ret);
+ goto err;
+ }
+
+ old = ret;
+ new = (old & ~mask) | (value & mask);
+ change = old != new;
+ if (change) {
+ ret = rt5677_index_write(codec, reg, new);
+ if (ret < 0) {
+ dev_err(codec->dev,
+ "Failed to write private reg: %d\n", ret);
+ goto err;
+ }
+ }
+ return change;
+
+err:
+ return ret;
+}
+
+/**
+ * rt5677_dsp_mode_i2c_write_address - Write value to address on DSP mode.
+ * @codec: SoC audio codec device.
+ * @address: Target address.
+ * @value: Target data.
+ *
+ * @opcode: DSP I2C operation code.
+ *
+ */
+static int rt5677_dsp_mode_i2c_write_address(struct snd_soc_codec *codec,
+ unsigned int address, unsigned int value, unsigned int opcode)
+{
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+ int ret;
+
+ mutex_lock(&rt5677->index_lock);
+
+ ret = regmap_write(rt5677->regmap, RT5677_DSP_I2C_ADDR_MSB,
+ address >> 16);
+ if (ret < 0) {
+ dev_err(codec->dev, "Failed to set addr msb value: %d\n", ret);
+ goto err;
+ }
+
+ ret = regmap_write(rt5677->regmap, RT5677_DSP_I2C_ADDR_LSB,
+ address & 0xFFFF);
+ if (ret < 0) {
+ dev_err(codec->dev, "Failed to set addr lsb value: %d\n", ret);
+ goto err;
+ }
+
+ ret = regmap_write(rt5677->regmap, RT5677_DSP_I2C_DATA_MSB,
+ value >> 16);
+ if (ret < 0) {
+ dev_err(codec->dev, "Failed to set data msb value: %d\n", ret);
+ goto err;
+ }
+
+ ret = regmap_write(rt5677->regmap, RT5677_DSP_I2C_DATA_LSB,
+ value & 0xFFFF);
+ if (ret < 0) {
+ dev_err(codec->dev, "Failed to set data lsb value: %d\n", ret);
+ goto err;
+ }
+
+ ret = regmap_write(rt5677->regmap, RT5677_DSP_I2C_OP_CODE, opcode);
+ if (ret < 0) {
+ dev_err(codec->dev, "Failed to set op code value: %d\n", ret);
+ goto err;
+ }
+
+err:
+ mutex_unlock(&rt5677->index_lock);
+
+ return ret;
+}
+
+/**
+ * rt5677_dsp_mode_i2c_read_address - Read value from address on DSP mode.
+ * @codec: SoC audio codec device.
+ * @addr: Address index.
+ * @value: Address data.
+ * Returns 0 for success or negative error code.
+ */
+static int rt5677_dsp_mode_i2c_read_address(
+ struct snd_soc_codec *codec, unsigned int addr, unsigned int *value)
+{
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+ int ret;
+ unsigned int msb, lsb;
+
+ mutex_lock(&rt5677->index_lock);
+
+ ret = regmap_write(rt5677->regmap, RT5677_DSP_I2C_ADDR_MSB,
+ (addr & 0xffff0000) >> 16);
+ if (ret < 0) {
+ dev_err(codec->dev, "Failed to set addr msb value: %d\n", ret);
+ goto err;
+ }
+
+ ret = regmap_write(rt5677->regmap, RT5677_DSP_I2C_ADDR_LSB,
+ addr & 0xffff);
+ if (ret < 0) {
+ dev_err(codec->dev, "Failed to set addr lsb value: %d\n", ret);
+ goto err;
+ }
+
+ ret = regmap_write(rt5677->regmap, RT5677_DSP_I2C_OP_CODE, 0x0002);
+ if (ret < 0) {
+ dev_err(codec->dev, "Failed to set op code value: %d\n", ret);
+ goto err;
+ }
+
+ regmap_read(rt5677->regmap, RT5677_DSP_I2C_DATA_MSB, &msb);
+ regmap_read(rt5677->regmap, RT5677_DSP_I2C_DATA_LSB, &lsb);
+ *value = (msb << 16) | lsb;
+
+err:
+ mutex_unlock(&rt5677->index_lock);
+
+ return ret;
+
+}
+
+/**
+ * rt5677_dsp_mode_i2c_write - Write register on DSP mode.
+ * @codec: SoC audio codec device.
+ * @reg: Register index.
+ * @value: Register data.
+ *
+ *
+ * Returns 0 for success or negative error code.
+ */
+static int rt5677_dsp_mode_i2c_write(struct snd_soc_codec *codec,
+ unsigned int reg, unsigned int value)
+{
+ return rt5677_dsp_mode_i2c_write_address(codec, 0x18020000 + reg * 2,
+ value, 0x0001);
+}
+
+/**
+ * rt5677_dsp_mode_i2c_read - Read register on DSP mode.
+ * @codec: SoC audio codec device.
+ * @reg: Register index.
+ *
+ *
+ * Returns Register value.
+ */
+static unsigned int rt5677_dsp_mode_i2c_read(
+ struct snd_soc_codec *codec, unsigned int reg)
+{
+ unsigned int value;
+
+ rt5677_dsp_mode_i2c_read_address(codec, 0x18020000 + reg * 2,
+ &value);
+ return value & 0xffff;
+}
+/**
+ * rt5677_dsp_mode_i2c_update_bits - Update register on DSP mode.
+ * @codec: SoC audio codec device.
+ * @reg: Register index.
+ * @mask: Register mask.
+ * @value: New register value.
+ *
+ * Returns 1 for change, 0 for no change, or negative error code.
+ */
+static int rt5677_dsp_mode_i2c_update_bits(struct snd_soc_codec *codec,
+ unsigned int reg, unsigned int mask, unsigned int value)
+{
+ unsigned int old, new;
+ int change, ret;
+
+ ret = rt5677_dsp_mode_i2c_read(codec, reg);
+ if (ret < 0) {
+ dev_err(codec->dev, "Failed to read private reg: %d\n", ret);
+ goto err;
+ }
+
+ old = ret;
+ new = (old & ~mask) | (value & mask);
+ change = old != new;
+ if (change) {
+ ret = rt5677_dsp_mode_i2c_write(codec, reg, new);
+ if (ret < 0) {
+ dev_err(codec->dev,
+ "Failed to write private reg: %d\n", ret);
+ goto err;
+ }
+ }
+ return change;
+
+err:
+ return ret;
+}
+
+static unsigned int rt5677_dsp_mbist_test(struct snd_soc_codec *codec)
+{
+ unsigned int i, value;
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+
+ rt5677->mbist_test_passed = true;
+
+ for (i = 0; i < 0x40; i += 4) {
+ rt5677_dsp_mode_i2c_write_address(codec, 0x1801f010 + i, 0x20,
+ 0x0003);
+ rt5677_dsp_mode_i2c_write_address(codec, 0x1801f010 + i, 0x0,
+ 0x0003);
+ rt5677_dsp_mode_i2c_write_address(codec, 0x1801f010 + i, 0x1,
+ 0x0003);
+ }
+
+ for (i = 0; i < 0x40; i += 4) {
+ rt5677_dsp_mode_i2c_read_address(codec, 0x1801f010 + i, &value);
+ if (value != 0x101) {
+ pr_err("rt5677 MBIST test failed\n");
+ rt5677->mbist_test_passed = false;
+ break;
+ }
+ }
+
+ rt5677_dsp_mode_i2c_write(codec, RT5677_PWR_DSP1, 0x07fd);
+ regmap_update_bits(rt5677->regmap, RT5677_PWR_ANLG2, RT5677_PWR_LDO1, 0);
+ msleep(200);
+ regmap_update_bits(rt5677->regmap, RT5677_PWR_ANLG2, RT5677_PWR_LDO1, RT5677_PWR_LDO1);
+ regmap_write(rt5677->regmap, RT5677_PWR_DSP1, 0x07ff);
+
+ return 0;
+}
+
+static unsigned int rt5677_read_dsp_code_from_file(const struct firmware **fwp,
+ const char *file_path, struct snd_soc_codec *codec)
+{
+ int ret;
+
+ pr_debug("%s\n", __func__);
+
+ ret = request_firmware(fwp, file_path, codec->dev);
+ if (ret != 0) {
+ pr_err("Failed to request '%s': %d\n", file_path, ret);
+ return 0;
+ }
+ return (*fwp)->size;
+}
+
+static unsigned int rt5677_set_vad_source(
+ struct snd_soc_codec *codec, unsigned int source)
+{
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+
+ switch (source) {
+ case RT5677_VAD_IDLE_DMIC1:
+ case RT5677_VAD_SUSPEND_DMIC1:
+ regmap_write(rt5677->regmap, RT5677_VAD_CTRL2, 0x013f);
+ regmap_write(rt5677->regmap, RT5677_VAD_CTRL3, 0x0a00);
+ regmap_write(rt5677->regmap, RT5677_VAD_CTRL4, 0x017f);
+ regmap_write(rt5677->regmap, RT5677_ADC_BST_CTRL2, 0xa000);
+ regmap_write(rt5677->regmap, RT5677_CLK_TREE_CTRL2, 0x7777);
+ regmap_write(rt5677->regmap, RT5677_MONO_ADC_MIXER, 0x54d1);
+ regmap_update_bits(rt5677->regmap, RT5677_MONO_ADC_DIG_VOL, RT5677_L_MUTE, 0);
+ regmap_write(rt5677->regmap, RT5677_DSP_INB_CTRL1, 0x4000);
+ regmap_write(rt5677->regmap, RT5677_DIG_MISC, 0x0001);
+ regmap_write(rt5677->regmap, RT5677_CLK_TREE_CTRL1, 0x2777);
+ regmap_write(rt5677->regmap, RT5677_GLB_CLK2, 0x0080);
+ regmap_write(rt5677->regmap, RT5677_PWR_ANLG1, 0x0027);
+
+ /* from MCLK1 */
+ regmap_write(rt5677->regmap, RT5677_GLB_CLK1, 0x0080);
+
+ regmap_write(rt5677->regmap, RT5677_DMIC_CTRL1, 0x95a5);
+ regmap_write(rt5677->regmap, RT5677_VAD_CTRL1, 0x273c);
+ regmap_write(rt5677->regmap, RT5677_IRQ_CTRL2, 0x4000);
+ rt5677_index_write(codec, 0x14, 0x018a);
+ regmap_write(rt5677->regmap, RT5677_PWR_DIG2, 0x4000);
+ regmap_write(rt5677->regmap, RT5677_GPIO_CTRL2, 0x6000);
+ regmap_write(rt5677->regmap, RT5677_PWR_ANLG2, 0x0081);
+ regmap_write(rt5677->regmap, RT5677_PWR_DSP2, 0x07ff);
+ regmap_write(rt5677->regmap, RT5677_PWR_DSP1, 0x07ff);
+ break;
+ case RT5677_VAD_IDLE_DMIC2:
+ case RT5677_VAD_SUSPEND_DMIC2:
+ regmap_write(rt5677->regmap, RT5677_VAD_CTRL2, 0x013f);
+ regmap_write(rt5677->regmap, RT5677_VAD_CTRL3, 0x0a00);
+ regmap_write(rt5677->regmap, RT5677_VAD_CTRL4, 0x017f);
+ regmap_write(rt5677->regmap, RT5677_ADC_BST_CTRL2, 0xa000);
+ regmap_write(rt5677->regmap, RT5677_CLK_TREE_CTRL2, 0x7777);
+ regmap_write(rt5677->regmap, RT5677_MONO_ADC_MIXER, 0x55d1);
+ regmap_update_bits(rt5677->regmap, RT5677_MONO_ADC_DIG_VOL, RT5677_L_MUTE, 0);
+ regmap_write(rt5677->regmap, RT5677_DSP_INB_CTRL1, 0x4000);
+ regmap_write(rt5677->regmap, RT5677_DIG_MISC, 0x0001);
+ regmap_write(rt5677->regmap, RT5677_CLK_TREE_CTRL1, 0x2777);
+ regmap_write(rt5677->regmap, RT5677_GLB_CLK2, 0x0080);
+ regmap_write(rt5677->regmap, RT5677_PWR_ANLG1, 0x0027);
+
+ /* from MCLK1 */
+ regmap_write(rt5677->regmap, RT5677_GLB_CLK1, 0x0080);
+
+ regmap_write(rt5677->regmap, RT5677_DMIC_CTRL1, 0x55a5);
+ regmap_update_bits(rt5677->regmap, RT5677_GEN_CTRL2, 0x0200,
+ 0x0200);
+ regmap_write(rt5677->regmap, RT5677_VAD_CTRL1, 0x273c);
+ regmap_write(rt5677->regmap, RT5677_IRQ_CTRL2, 0x4000);
+ rt5677_index_write(codec, 0x14, 0x018a);
+ regmap_write(rt5677->regmap, RT5677_PWR_DIG2, 0x4000);
+ regmap_write(rt5677->regmap, RT5677_GPIO_CTRL2, 0x6000);
+ regmap_write(rt5677->regmap, RT5677_PWR_ANLG2, 0x0081);
+ regmap_write(rt5677->regmap, RT5677_PWR_DSP2, 0x07ff);
+ regmap_write(rt5677->regmap, RT5677_PWR_DSP1, 0x07ff);
+ break;
+ case RT5677_VAD_IDLE_AMIC:
+ case RT5677_VAD_SUSPEND_AMIC:
+ regmap_write(rt5677->regmap, RT5677_VAD_CTRL2, 0x013f);
+ regmap_write(rt5677->regmap, RT5677_VAD_CTRL3, 0x0a00);
+ regmap_write(rt5677->regmap, RT5677_VAD_CTRL4, 0x027f);
+ regmap_write(rt5677->regmap, RT5677_IN1, 0x01c0);
+ regmap_write(rt5677->regmap, RT5677_ADC_BST_CTRL2, 0xa000);
+ regmap_write(rt5677->regmap, RT5677_CLK_TREE_CTRL2, 0x7777);
+ regmap_write(rt5677->regmap, RT5677_MONO_ADC_MIXER, 0xd451);
+ regmap_update_bits(rt5677->regmap, RT5677_MONO_ADC_DIG_VOL, RT5677_R_MUTE, 0);
+ regmap_write(rt5677->regmap, RT5677_DSP_INB_CTRL1, 0x4000);
+ regmap_write(rt5677->regmap, RT5677_DIG_MISC, 0x0001);
+ regmap_write(rt5677->regmap, RT5677_CLK_TREE_CTRL1, 0x2777);
+ regmap_write(rt5677->regmap, RT5677_GLB_CLK2, 0x0080);
+
+ /* from MCLK1 */
+ regmap_write(rt5677->regmap, RT5677_GLB_CLK1, 0x0080);
+
+ regmap_write(rt5677->regmap, RT5677_VAD_CTRL1, 0x273c);
+ regmap_write(rt5677->regmap, RT5677_IRQ_CTRL2, 0x4000);
+ rt5677_index_write(codec, 0x14, 0x018a);
+ regmap_write(rt5677->regmap, RT5677_PWR_ANLG1, 0xa927);
+ msleep(20);
+ regmap_write(rt5677->regmap, RT5677_PWR_ANLG1, 0xe9a7);
+ regmap_write(rt5677->regmap, RT5677_PWR_DIG1, 0x0012);
+ rt5677_index_write(codec, RT5677_CHOP_DAC_ADC, 0x364e);
+ rt5677_index_write(codec, RT5677_ANA_ADC_GAIN_CTRL, 0x0000);
+ regmap_write(rt5677->regmap, RT5677_PWR_DIG2, 0x2000);
+ regmap_write(rt5677->regmap, RT5677_GPIO_CTRL2, 0x6000);
+ regmap_write(rt5677->regmap, RT5677_PWR_ANLG2, 0x4091);
+ regmap_write(rt5677->regmap, RT5677_PWR_DSP2, 0x07ff);
+ regmap_write(rt5677->regmap, RT5677_PWR_DSP1, 0x07ff);
+ break;
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+static int rt5677_parse_and_load_dsp(const u8 *buf, unsigned int len)
+{
+ Elf32_Ehdr *elf_hdr;
+ Elf32_Phdr *pr_hdr;
+ Elf32_Half i;
+
+ if (!buf || (len < sizeof(Elf32_Ehdr)))
+ return -EINVAL;
+
+ elf_hdr = (Elf32_Ehdr *)buf;
+#ifdef DEBUG
+#ifndef EM_XTENSA
+#define EM_XTENSA 94
+#endif
+ if (memcmp(elf_hdr->e_ident, ELFMAG, SELFMAG) != 0)
+ pr_err("Wrong ELF header prefix\n");
+ if (elf_hdr->e_ehsize != sizeof(Elf32_Ehdr))
+ pr_err("Wrong ELF header size\n");
+ if (elf_hdr->e_machine != EM_XTENSA)
+ pr_err("Wrong DSP code file\n");
+#endif
+ if (len < elf_hdr->e_phoff)
+ return -EINVAL;
+ pr_hdr = (Elf32_Phdr *)(buf + elf_hdr->e_phoff);
+ for (i = 0; i < elf_hdr->e_phnum; i++) {
+ /* TODO: handle p_memsz != p_filesz */
+ if (pr_hdr->p_paddr && pr_hdr->p_filesz) {
+ pr_debug("Load [0x%x] -> 0x%x\n", pr_hdr->p_filesz,
+ pr_hdr->p_paddr);
+ rt5677_spi_write(pr_hdr->p_paddr,
+ buf + pr_hdr->p_offset,
+ pr_hdr->p_filesz);
+ }
+ pr_hdr++;
+ }
+ return 0;
+}
+
+static int rt5677_load_dsp_from_file(struct snd_soc_codec *codec)
+{
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+ unsigned int len;
+ const struct firmware *fwp;
+ char file_path[64];
+ int err = 0;
+#ifdef DEBUG
+ unsigned int ret = 0;
+#endif
+
+ snprintf(file_path, sizeof(file_path), RT5677_PATH "elf_%s",
+ dsp_vad_suffix);
+ len = rt5677_read_dsp_code_from_file(&fwp, file_path, codec);
+ if (len) {
+ pr_debug("load %s [%u] ok\n", file_path, len);
+ err = rt5677_parse_and_load_dsp(fwp->data, len);
+ release_firmware(fwp);
+ } else {
+ snprintf(file_path, sizeof(file_path),
+ RT5677_PATH "0x50000000_%s", dsp_vad_suffix);
+ len = rt5677_read_dsp_code_from_file(&fwp, file_path, codec);
+ if (len) {
+ pr_debug("load %s ok\n", file_path);
+ rt5677_spi_write(0x50000000, fwp->data, len);
+ release_firmware(fwp);
+ } else {
+ pr_err("load %s fail\n", file_path);
+ }
+
+ snprintf(file_path, sizeof(file_path),
+ RT5677_PATH "0x60000000_%s", dsp_vad_suffix);
+ len = rt5677_read_dsp_code_from_file(&fwp, file_path, codec);
+ if (len) {
+ pr_debug("load %s ok\n", file_path);
+ rt5677_spi_write(0x60000000, fwp->data, len);
+ release_firmware(fwp);
+ } else {
+ pr_err("load %s fail\n", file_path);
+ }
+ }
+ if (rt5677->model_buf && rt5677->model_len) {
+ err = rt5677_spi_write(RT5677_MODEL_ADDR, rt5677->model_buf,
+ rt5677->model_len);
+ if (err)
+ pr_err("model load failed: %d\n", err);
+ }
+
+#ifdef DEBUG
+ msleep(50);
+ regmap_write(rt5677->regmap, 0x01, 0x0520);
+ regmap_write(rt5677->regmap, 0x02, 0x5ffe);
+ regmap_write(rt5677->regmap, 0x00, 0x0002);
+ regmap_read(rt5677->regmap, 0x03, &ret);
+ dev_err(codec->dev, "rt5677 0x5ffe0520 0x03 %x\n", ret);
+ regmap_read(rt5677->regmap, 0x04, &ret);
+ dev_err(codec->dev, "rt5677 0x5ffe0520 0x04 %x\n", ret);
+
+ regmap_write(rt5677->regmap, 0x01, 0x0000);
+ regmap_write(rt5677->regmap, 0x02, 0x6000);
+ regmap_write(rt5677->regmap, 0x00, 0x0002);
+ regmap_read(rt5677->regmap, 0x03, &ret);
+ dev_err(codec->dev, "rt5677 0x60000000 0x03 %x\n", ret);
+ regmap_read(rt5677->regmap, 0x04, &ret);
+ dev_err(codec->dev, "rt5677 0x60000000 0x04 %x\n", ret);
+#endif
+ return err;
+}
+
+static unsigned int rt5677_set_vad(
+ struct snd_soc_codec *codec, unsigned int on, bool use_lock)
+{
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+ static bool activity;
+ int pri99 = 0;
+
+ if (use_lock)
+ mutex_lock(&rt5677_global->vad_lock);
+
+ if (on && !activity) {
+ /* Kill the DSP so that all the registers are clean. */
+ set_rt5677_power_extern(false);
+
+ pr_info("rt5677_set_vad on, mic_state = %d\n", rt5677_global->mic_state);
+ set_rt5677_power_extern(true);
+
+ gpio_direction_output(rt5677->vad_clock_en, 1);
+ activity = true;
+ regcache_cache_only(rt5677->regmap, false);
+ regcache_cache_bypass(rt5677->regmap, true);
+
+ if (rt5677_global->mic_state == RT5677_VAD_ENABLE_AMIC)
+ rt5677_set_vad_source(codec, RT5677_VAD_IDLE_AMIC);
+ else
+ rt5677_set_vad_source(codec, RT5677_VAD_IDLE_DMIC1);
+
+ if (!rt5677->mbist_test) {
+ rt5677_dsp_mbist_test(codec);
+ rt5677->mbist_test = true;
+ }
+
+ /* Set PRI-99 with any flags that are passed to the firmware. */
+ rt5677_dsp_mode_i2c_write(codec, RT5677_PRIV_INDEX, 0x99);
+ /*
+ * The firmware knows what power state to use given the result
+ * of the MBIST test: if it passed, it will enter a low-power
+ * wake; otherwise it will run at full power.
+ */
+ pri99 |= rt5677->mbist_test_passed ?
+ (RT5677_MBIST_TEST_PASSED) : (RT5677_MBIST_TEST_FAILED);
+ /* Inform the firmware if it should go to sleep or not. */
+ pri99 |= rt5677->vad_sleep ?
+ (RT5677_VAD_SLEEP) : (RT5677_VAD_NO_SLEEP);
+ rt5677_dsp_mode_i2c_write(codec, RT5677_PRIV_DATA, pri99);
+
+ /* Reset the mic buffer read pointer. */
+ rt5677->mic_read_offset = 0;
+
+ /* Boot the firmware from IRAM instead of SRAM0. */
+ rt5677_dsp_mode_i2c_write_address(codec, RT5677_DSP_BOOT_VECTOR,
+ 0x0009, 0x0003);
+ rt5677_dsp_mode_i2c_write_address(codec, RT5677_DSP_BOOT_VECTOR,
+ 0x0019, 0x0003);
+ rt5677_dsp_mode_i2c_write_address(codec, RT5677_DSP_BOOT_VECTOR,
+ 0x0009, 0x0003);
+
+ rt5677_load_dsp_from_file(codec);
+
+ rt5677_dsp_mode_i2c_update_bits(codec, RT5677_PWR_DSP1,
+ 0x1, 0x0);
+ regcache_cache_bypass(rt5677->regmap, false);
+ regcache_cache_only(rt5677->regmap, true);
+ } else if (!on && activity) {
+ pr_info("rt5677_set_vad off\n");
+ activity = false;
+ regcache_cache_only(rt5677->regmap, false);
+ regcache_cache_bypass(rt5677->regmap, true);
+ rt5677_dsp_mode_i2c_update_bits(codec, RT5677_PWR_DSP1,
+ 0x1, 0x1);
+ rt5677_dsp_mode_i2c_write(codec, RT5677_PWR_DSP1, 0x0001);
+ regmap_write(rt5677->regmap, RT5677_PWR_DSP1, 0x0001);
+ regmap_write(rt5677->regmap, RT5677_PWR_DSP2, 0x0c00);
+ regmap_update_bits(rt5677->regmap, RT5677_PWR_ANLG2,
+ RT5677_PWR_LDO1, 0);
+
+ rt5677_index_write(codec, 0x14, 0x0f8b);
+ rt5677_index_update_bits(codec,
+ RT5677_BIAS_CUR4, 0x0f00, 0x0000);
+ regcache_cache_bypass(rt5677->regmap, false);
+ regcache_cache_only(rt5677->regmap, true);
+ gpio_direction_output(rt5677->vad_clock_en, 0);
+ set_rt5677_power_extern(false);
+ }
+
+ if (use_lock)
+ mutex_unlock(&rt5677_global->vad_lock);
+ return 0;
+}
+
+static void rt5677_check_hp_mic(struct work_struct *work)
+{
+ int state;
+
+ mutex_lock(&rt5677_global->vad_lock);
+ state = get_mic_state();
+ pr_debug("%s mic %d, rt5677_global->vad_mode %d\n",
+ __func__, state, rt5677_global->vad_mode);
+ if (rt5677_global->mic_state != state) {
+ rt5677_global->mic_state = state;
+ if ((rt5677_global->codec->dapm.bias_level == SND_SOC_BIAS_OFF) &&
+ (rt5677_global->vad_mode == RT5677_VAD_IDLE)) {
+ rt5677_set_vad(rt5677_global->codec, 0, false);
+ rt5677_set_vad(rt5677_global->codec, 1, false);
+ }
+ }
+ mutex_unlock(&rt5677_global->vad_lock);
+}
+
+int rt5677_headset_detect(int on)
+{
+ pr_debug("%s on = %d\n", __func__, on);
+ cancel_delayed_work_sync(&rt5677_global->check_hp_mic_work);
+ queue_delayed_work(rt5677_global->check_mic_wq, &rt5677_global->check_hp_mic_work,
+ msecs_to_jiffies(1100));
+ return on;
+}
+
+static void rt5677_register_hs_notification(void)
+{
+ struct headset_notifier notifier;
+
+ notifier.id = HEADSET_REG_HS_INSERT;
+ notifier.func = rt5677_headset_detect;
+ headset_notifier_register(&notifier);
+}
+
+static bool rt5677_volatile_register(struct device *dev, unsigned int reg)
+{
+ switch (reg) {
+ case RT5677_RESET:
+ case RT5677_SLIMBUS_PARAM:
+ case RT5677_PDM_DATA_CTRL1:
+ case RT5677_PDM_DATA_CTRL2:
+ case RT5677_PDM1_DATA_CTRL4:
+ case RT5677_PDM2_DATA_CTRL4:
+ case RT5677_I2C_MASTER_CTRL1:
+ case RT5677_I2C_MASTER_CTRL7:
+ case RT5677_I2C_MASTER_CTRL8:
+ case RT5677_HAP_GENE_CTRL2:
+ case RT5677_PWR_DSP_ST:
+ case RT5677_PRIV_DATA:
+ case RT5677_PLL1_CTRL2:
+ case RT5677_PLL2_CTRL2:
+ case RT5677_ASRC_22:
+ case RT5677_ASRC_23:
+ case RT5677_VAD_CTRL5:
+ case RT5677_ADC_EQ_CTRL1:
+ case RT5677_EQ_CTRL1:
+ case RT5677_IRQ_CTRL1:
+ case RT5677_IRQ_CTRL2:
+ case RT5677_GPIO_ST:
+ case RT5677_DSP_INB1_SRC_CTRL4:
+ case RT5677_DSP_INB2_SRC_CTRL4:
+ case RT5677_DSP_INB3_SRC_CTRL4:
+ case RT5677_DSP_OUTB1_SRC_CTRL4:
+ case RT5677_DSP_OUTB2_SRC_CTRL4:
+ case RT5677_VENDOR_ID:
+ case RT5677_VENDOR_ID1:
+ case RT5677_VENDOR_ID2:
+ return true;
+ default:
+ return false;
+ }
+}
+
+static bool rt5677_readable_register(struct device *dev, unsigned int reg)
+{
+ switch (reg) {
+ case RT5677_RESET:
+ case RT5677_LOUT1:
+ case RT5677_IN1:
+ case RT5677_MICBIAS:
+ case RT5677_SLIMBUS_PARAM:
+ case RT5677_SLIMBUS_RX:
+ case RT5677_SLIMBUS_CTRL:
+ case RT5677_SIDETONE_CTRL:
+ case RT5677_ANA_DAC1_2_3_SRC:
+ case RT5677_IF_DSP_DAC3_4_MIXER:
+ case RT5677_DAC4_DIG_VOL:
+ case RT5677_DAC3_DIG_VOL:
+ case RT5677_DAC1_DIG_VOL:
+ case RT5677_DAC2_DIG_VOL:
+ case RT5677_IF_DSP_DAC2_MIXER:
+ case RT5677_STO1_ADC_DIG_VOL:
+ case RT5677_MONO_ADC_DIG_VOL:
+ case RT5677_STO1_2_ADC_BST:
+ case RT5677_STO2_ADC_DIG_VOL:
+ case RT5677_ADC_BST_CTRL2:
+ case RT5677_STO3_4_ADC_BST:
+ case RT5677_STO3_ADC_DIG_VOL:
+ case RT5677_STO4_ADC_DIG_VOL:
+ case RT5677_STO4_ADC_MIXER:
+ case RT5677_STO3_ADC_MIXER:
+ case RT5677_STO2_ADC_MIXER:
+ case RT5677_STO1_ADC_MIXER:
+ case RT5677_MONO_ADC_MIXER:
+ case RT5677_ADC_IF_DSP_DAC1_MIXER:
+ case RT5677_STO1_DAC_MIXER:
+ case RT5677_MONO_DAC_MIXER:
+ case RT5677_DD1_MIXER:
+ case RT5677_DD2_MIXER:
+ case RT5677_IF3_DATA:
+ case RT5677_IF4_DATA:
+ case RT5677_PDM_OUT_CTRL:
+ case RT5677_PDM_DATA_CTRL1:
+ case RT5677_PDM_DATA_CTRL2:
+ case RT5677_PDM1_DATA_CTRL2:
+ case RT5677_PDM1_DATA_CTRL3:
+ case RT5677_PDM1_DATA_CTRL4:
+ case RT5677_PDM2_DATA_CTRL2:
+ case RT5677_PDM2_DATA_CTRL3:
+ case RT5677_PDM2_DATA_CTRL4:
+ case RT5677_TDM1_CTRL1:
+ case RT5677_TDM1_CTRL2:
+ case RT5677_TDM1_CTRL3:
+ case RT5677_TDM1_CTRL4:
+ case RT5677_TDM1_CTRL5:
+ case RT5677_TDM2_CTRL1:
+ case RT5677_TDM2_CTRL2:
+ case RT5677_TDM2_CTRL3:
+ case RT5677_TDM2_CTRL4:
+ case RT5677_TDM2_CTRL5:
+ case RT5677_I2C_MASTER_CTRL1:
+ case RT5677_I2C_MASTER_CTRL2:
+ case RT5677_I2C_MASTER_CTRL3:
+ case RT5677_I2C_MASTER_CTRL4:
+ case RT5677_I2C_MASTER_CTRL5:
+ case RT5677_I2C_MASTER_CTRL6:
+ case RT5677_I2C_MASTER_CTRL7:
+ case RT5677_I2C_MASTER_CTRL8:
+ case RT5677_DMIC_CTRL1:
+ case RT5677_DMIC_CTRL2:
+ case RT5677_HAP_GENE_CTRL1:
+ case RT5677_HAP_GENE_CTRL2:
+ case RT5677_HAP_GENE_CTRL3:
+ case RT5677_HAP_GENE_CTRL4:
+ case RT5677_HAP_GENE_CTRL5:
+ case RT5677_HAP_GENE_CTRL6:
+ case RT5677_HAP_GENE_CTRL7:
+ case RT5677_HAP_GENE_CTRL8:
+ case RT5677_HAP_GENE_CTRL9:
+ case RT5677_HAP_GENE_CTRL10:
+ case RT5677_PWR_DIG1:
+ case RT5677_PWR_DIG2:
+ case RT5677_PWR_ANLG1:
+ case RT5677_PWR_ANLG2:
+ case RT5677_PWR_DSP1:
+ case RT5677_PWR_DSP_ST:
+ case RT5677_PWR_DSP2:
+ case RT5677_ADC_DAC_HPF_CTRL1:
+ case RT5677_PRIV_INDEX:
+ case RT5677_PRIV_DATA:
+ case RT5677_I2S4_SDP:
+ case RT5677_I2S1_SDP:
+ case RT5677_I2S2_SDP:
+ case RT5677_I2S3_SDP:
+ case RT5677_CLK_TREE_CTRL1:
+ case RT5677_CLK_TREE_CTRL2:
+ case RT5677_CLK_TREE_CTRL3:
+ case RT5677_PLL1_CTRL1:
+ case RT5677_PLL1_CTRL2:
+ case RT5677_PLL2_CTRL1:
+ case RT5677_PLL2_CTRL2:
+ case RT5677_GLB_CLK1:
+ case RT5677_GLB_CLK2:
+ case RT5677_ASRC_1:
+ case RT5677_ASRC_2:
+ case RT5677_ASRC_3:
+ case RT5677_ASRC_4:
+ case RT5677_ASRC_5:
+ case RT5677_ASRC_6:
+ case RT5677_ASRC_7:
+ case RT5677_ASRC_8:
+ case RT5677_ASRC_9:
+ case RT5677_ASRC_10:
+ case RT5677_ASRC_11:
+ case RT5677_ASRC_12:
+ case RT5677_ASRC_13:
+ case RT5677_ASRC_14:
+ case RT5677_ASRC_15:
+ case RT5677_ASRC_16:
+ case RT5677_ASRC_17:
+ case RT5677_ASRC_18:
+ case RT5677_ASRC_19:
+ case RT5677_ASRC_20:
+ case RT5677_ASRC_21:
+ case RT5677_ASRC_22:
+ case RT5677_ASRC_23:
+ case RT5677_VAD_CTRL1:
+ case RT5677_VAD_CTRL2:
+ case RT5677_VAD_CTRL3:
+ case RT5677_VAD_CTRL4:
+ case RT5677_VAD_CTRL5:
+ case RT5677_DSP_INB_CTRL1:
+ case RT5677_DSP_INB_CTRL2:
+ case RT5677_DSP_IN_OUTB_CTRL:
+ case RT5677_DSP_OUTB0_1_DIG_VOL:
+ case RT5677_DSP_OUTB2_3_DIG_VOL:
+ case RT5677_DSP_OUTB4_5_DIG_VOL:
+ case RT5677_DSP_OUTB6_7_DIG_VOL:
+ case RT5677_ADC_EQ_CTRL1:
+ case RT5677_ADC_EQ_CTRL2:
+ case RT5677_EQ_CTRL1:
+ case RT5677_EQ_CTRL2:
+ case RT5677_EQ_CTRL3:
+ case RT5677_SOFT_VOL_ZERO_CROSS1:
+ case RT5677_JD_CTRL1:
+ case RT5677_JD_CTRL2:
+ case RT5677_JD_CTRL3:
+ case RT5677_IRQ_CTRL1:
+ case RT5677_IRQ_CTRL2:
+ case RT5677_GPIO_ST:
+ case RT5677_GPIO_CTRL1:
+ case RT5677_GPIO_CTRL2:
+ case RT5677_GPIO_CTRL3:
+ case RT5677_STO1_ADC_HI_FILTER1:
+ case RT5677_STO1_ADC_HI_FILTER2:
+ case RT5677_MONO_ADC_HI_FILTER1:
+ case RT5677_MONO_ADC_HI_FILTER2:
+ case RT5677_STO2_ADC_HI_FILTER1:
+ case RT5677_STO2_ADC_HI_FILTER2:
+ case RT5677_STO3_ADC_HI_FILTER1:
+ case RT5677_STO3_ADC_HI_FILTER2:
+ case RT5677_STO4_ADC_HI_FILTER1:
+ case RT5677_STO4_ADC_HI_FILTER2:
+ case RT5677_MB_DRC_CTRL1:
+ case RT5677_DRC1_CTRL1:
+ case RT5677_DRC1_CTRL2:
+ case RT5677_DRC1_CTRL3:
+ case RT5677_DRC1_CTRL4:
+ case RT5677_DRC1_CTRL5:
+ case RT5677_DRC1_CTRL6:
+ case RT5677_DRC2_CTRL1:
+ case RT5677_DRC2_CTRL2:
+ case RT5677_DRC2_CTRL3:
+ case RT5677_DRC2_CTRL4:
+ case RT5677_DRC2_CTRL5:
+ case RT5677_DRC2_CTRL6:
+ case RT5677_DRC1_HL_CTRL1:
+ case RT5677_DRC1_HL_CTRL2:
+ case RT5677_DRC2_HL_CTRL1:
+ case RT5677_DRC2_HL_CTRL2:
+ case RT5677_DSP_INB1_SRC_CTRL1:
+ case RT5677_DSP_INB1_SRC_CTRL2:
+ case RT5677_DSP_INB1_SRC_CTRL3:
+ case RT5677_DSP_INB1_SRC_CTRL4:
+ case RT5677_DSP_INB2_SRC_CTRL1:
+ case RT5677_DSP_INB2_SRC_CTRL2:
+ case RT5677_DSP_INB2_SRC_CTRL3:
+ case RT5677_DSP_INB2_SRC_CTRL4:
+ case RT5677_DSP_INB3_SRC_CTRL1:
+ case RT5677_DSP_INB3_SRC_CTRL2:
+ case RT5677_DSP_INB3_SRC_CTRL3:
+ case RT5677_DSP_INB3_SRC_CTRL4:
+ case RT5677_DSP_OUTB1_SRC_CTRL1:
+ case RT5677_DSP_OUTB1_SRC_CTRL2:
+ case RT5677_DSP_OUTB1_SRC_CTRL3:
+ case RT5677_DSP_OUTB1_SRC_CTRL4:
+ case RT5677_DSP_OUTB2_SRC_CTRL1:
+ case RT5677_DSP_OUTB2_SRC_CTRL2:
+ case RT5677_DSP_OUTB2_SRC_CTRL3:
+ case RT5677_DSP_OUTB2_SRC_CTRL4:
+ case RT5677_DSP_OUTB_0123_MIXER_CTRL:
+ case RT5677_DSP_OUTB_45_MIXER_CTRL:
+ case RT5677_DSP_OUTB_67_MIXER_CTRL:
+ case RT5677_DIG_MISC:
+ case RT5677_GEN_CTRL1:
+ case RT5677_GEN_CTRL2:
+ case RT5677_VENDOR_ID:
+ case RT5677_VENDOR_ID1:
+ case RT5677_VENDOR_ID2:
+ return true;
+ default:
+ return false;
+ }
+}
+
+static const DECLARE_TLV_DB_SCALE(out_vol_tlv, -4650, 150, 0);
+static const DECLARE_TLV_DB_SCALE(dac_vol_tlv, -65625, 375, 0);
+static const DECLARE_TLV_DB_SCALE(in_vol_tlv, -3450, 150, 0);
+static const DECLARE_TLV_DB_SCALE(adc_vol_tlv, -17625, 375, 0);
+static const DECLARE_TLV_DB_SCALE(adc_bst_tlv, 0, 1200, 0);
+
+/* {0, +20, +24, +30, +35, +40, +44, +50, +52} dB */
+static unsigned int bst_tlv[] = {
+ TLV_DB_RANGE_HEAD(7),
+ 0, 0, TLV_DB_SCALE_ITEM(0, 0, 0),
+ 1, 1, TLV_DB_SCALE_ITEM(2000, 0, 0),
+ 2, 2, TLV_DB_SCALE_ITEM(2400, 0, 0),
+ 3, 5, TLV_DB_SCALE_ITEM(3000, 500, 0),
+ 6, 6, TLV_DB_SCALE_ITEM(4400, 0, 0),
+ 7, 7, TLV_DB_SCALE_ITEM(5000, 0, 0),
+ 8, 8, TLV_DB_SCALE_ITEM(5200, 0, 0),
+};
+
+/* IN1/IN2 Input Type */
+static const char * const rt5677_input_mode[] = {
+ "Single ended", "Differential"
+};
+
+static const char * const rt5677_vad_mode[] = {
+ "Disable", "Idle-DMIC1", "Idle-DMIC2", "Idle-AMIC", "Suspend-DMIC1",
+ "Suspend-DMIC2", "Suspend-AMIC"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_in1_mode_enum, RT5677_IN1,
+ RT5677_IN_DF1_SFT, rt5677_input_mode);
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_in2_mode_enum, RT5677_IN1,
+ RT5677_IN_DF2_SFT, rt5677_input_mode);
+
+static const SOC_ENUM_SINGLE_DECL(rt5677_vad_mode_enum, 0, 0,
+ rt5677_vad_mode);
+
+static int rt5677_vad_get(struct snd_kcontrol *kcontrol,
+ struct snd_ctl_elem_value *ucontrol)
+{
+ struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol);
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+
+ ucontrol->value.integer.value[0] = rt5677->vad_source;
+
+ return 0;
+}
+
+static int rt5677_vad_put(struct snd_kcontrol *kcontrol,
+ struct snd_ctl_elem_value *ucontrol)
+{
+ struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol);
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+
+ rt5677->vad_source = ucontrol->value.integer.value[0];
+
+ switch (rt5677->vad_source) {
+ case RT5677_VAD_IDLE_DMIC1:
+ case RT5677_VAD_IDLE_DMIC2:
+ case RT5677_VAD_IDLE_AMIC:
+ rt5677->vad_mode = RT5677_VAD_IDLE;
+ break;
+
+ case RT5677_VAD_SUSPEND_DMIC1:
+ case RT5677_VAD_SUSPEND_DMIC2:
+ case RT5677_VAD_SUSPEND_AMIC:
+ rt5677->vad_mode = RT5677_VAD_SUSPEND;
+ break;
+ default:
+ rt5677->vad_mode = RT5677_VAD_OFF;
+ break;
+ }
+
+ if (codec->dapm.bias_level == SND_SOC_BIAS_OFF) {
+ pr_debug("codec->dapm.bias_level = SND_SOC_BIAS_OFF\n");
+ if (rt5677->vad_mode == RT5677_VAD_IDLE)
+ rt5677_set_vad(codec, 1, true);
+ else if (rt5677->vad_mode == RT5677_VAD_OFF)
+ rt5677_set_vad(codec, 0, true);
+ }
+ return 0;
+}
+
+static const char * const rt5677_tdm1_mode[] = {
+ "Normal", "LL Copy", "RR Copy"
+};
+
+static const SOC_ENUM_SINGLE_DECL(rt5677_tdm1_enum, 0, 0, rt5677_tdm1_mode);
+
+static int rt5677_tdm1_get(struct snd_kcontrol *kcontrol,
+ struct snd_ctl_elem_value *ucontrol)
+{
+ struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol);
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+ unsigned int value;
+
+ regmap_read(rt5677->regmap, RT5677_TDM1_CTRL1, &value);
+ if ((value & 0xc) == 0xc)
+ ucontrol->value.integer.value[0] = 2;
+ else if ((value & 0xc) == 0x8)
+ ucontrol->value.integer.value[0] = 1;
+ else
+ ucontrol->value.integer.value[0] = 0;
+
+ return 0;
+}
+
+static int rt5677_tdm1_put(struct snd_kcontrol *kcontrol,
+ struct snd_ctl_elem_value *ucontrol)
+{
+ struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol);
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+
+ if (ucontrol->value.integer.value[0] == 0)
+ regmap_update_bits(rt5677->regmap, RT5677_TDM1_CTRL1, 0xc, 0x0);
+ else if (ucontrol->value.integer.value[0] == 1)
+ regmap_update_bits(rt5677->regmap, RT5677_TDM1_CTRL1, 0xc, 0x8);
+ else
+ regmap_update_bits(rt5677->regmap, RT5677_TDM1_CTRL1, 0xc, 0xc);
+
+ return 0;
+}
+
+static const struct snd_kcontrol_new rt5677_snd_controls[] = {
+ /* OUTPUT Control */
+ SOC_SINGLE("OUT1 Playback Switch", RT5677_LOUT1,
+ RT5677_LOUT1_L_MUTE_SFT, 1, 1),
+ SOC_SINGLE("OUT2 Playback Switch", RT5677_LOUT1,
+ RT5677_LOUT2_L_MUTE_SFT, 1, 1),
+ SOC_SINGLE("OUT3 Playback Switch", RT5677_LOUT1,
+ RT5677_LOUT3_L_MUTE_SFT, 1, 1),
+
+ /* DAC Digital Volume */
+ SOC_DOUBLE_TLV("DAC1 Playback Volume", RT5677_DAC1_DIG_VOL,
+ RT5677_L_VOL_SFT, RT5677_R_VOL_SFT,
+ 175, 0, dac_vol_tlv),
+ SOC_DOUBLE_TLV("DAC2 Playback Volume", RT5677_DAC2_DIG_VOL,
+ RT5677_L_VOL_SFT, RT5677_R_VOL_SFT,
+ 175, 0, dac_vol_tlv),
+ SOC_DOUBLE_TLV("DAC3 Playback Volume", RT5677_DAC3_DIG_VOL,
+ RT5677_L_VOL_SFT, RT5677_R_VOL_SFT,
+ 175, 0, dac_vol_tlv),
+ SOC_DOUBLE_TLV("DAC4 Playback Volume", RT5677_DAC4_DIG_VOL,
+ RT5677_L_VOL_SFT, RT5677_R_VOL_SFT,
+ 175, 0, dac_vol_tlv),
+
+ /* IN1/IN2 Control */
+ SOC_ENUM("IN1 Mode Control", rt5677_in1_mode_enum),
+ SOC_SINGLE_TLV("IN1 Boost", RT5677_IN1,
+ RT5677_BST_SFT1, 8, 0, bst_tlv),
+ SOC_ENUM("IN2 Mode Control", rt5677_in2_mode_enum),
+ SOC_SINGLE_TLV("IN2 Boost", RT5677_IN1,
+ RT5677_BST_SFT2, 8, 0, bst_tlv),
+
+ /* ADC Digital Volume Control */
+ SOC_DOUBLE("ADC1 Capture Switch", RT5677_STO1_ADC_DIG_VOL,
+ RT5677_L_MUTE_SFT, RT5677_R_MUTE_SFT, 1, 1),
+ SOC_DOUBLE("ADC2 Capture Switch", RT5677_STO2_ADC_DIG_VOL,
+ RT5677_L_MUTE_SFT, RT5677_R_MUTE_SFT, 1, 1),
+ SOC_DOUBLE("ADC3 Capture Switch", RT5677_STO3_ADC_DIG_VOL,
+ RT5677_L_MUTE_SFT, RT5677_R_MUTE_SFT, 1, 1),
+ SOC_DOUBLE("ADC4 Capture Switch", RT5677_STO4_ADC_DIG_VOL,
+ RT5677_L_MUTE_SFT, RT5677_R_MUTE_SFT, 1, 1),
+ SOC_DOUBLE("Mono ADC Capture Switch", RT5677_MONO_ADC_DIG_VOL,
+ RT5677_L_MUTE_SFT, RT5677_R_MUTE_SFT, 1, 1),
+
+ SOC_DOUBLE_TLV("ADC1 Capture Volume", RT5677_STO1_ADC_DIG_VOL,
+ RT5677_STO1_ADC_L_VOL_SFT, RT5677_STO1_ADC_R_VOL_SFT,
+ 127, 0, adc_vol_tlv),
+ SOC_DOUBLE_TLV("ADC2 Capture Volume", RT5677_STO2_ADC_DIG_VOL,
+ RT5677_STO1_ADC_L_VOL_SFT, RT5677_STO1_ADC_R_VOL_SFT,
+ 127, 0, adc_vol_tlv),
+ SOC_DOUBLE_TLV("ADC3 Capture Volume", RT5677_STO3_ADC_DIG_VOL,
+ RT5677_STO1_ADC_L_VOL_SFT, RT5677_STO1_ADC_R_VOL_SFT,
+ 127, 0, adc_vol_tlv),
+ SOC_DOUBLE_TLV("ADC4 Capture Volume", RT5677_STO4_ADC_DIG_VOL,
+ RT5677_STO1_ADC_L_VOL_SFT, RT5677_STO1_ADC_R_VOL_SFT,
+ 127, 0, adc_vol_tlv),
+ SOC_DOUBLE_TLV("Mono ADC Capture Volume", RT5677_MONO_ADC_DIG_VOL,
+ RT5677_MONO_ADC_L_VOL_SFT, RT5677_MONO_ADC_R_VOL_SFT,
+ 127, 0, adc_vol_tlv),
+
+ /* ADC Boost Volume Control */
+ SOC_DOUBLE_TLV("STO1 ADC Boost Gain", RT5677_STO1_2_ADC_BST,
+ RT5677_STO1_ADC_L_BST_SFT, RT5677_STO1_ADC_R_BST_SFT,
+ 3, 0, adc_bst_tlv),
+ SOC_DOUBLE_TLV("STO2 ADC Boost Gain", RT5677_STO1_2_ADC_BST,
+ RT5677_STO2_ADC_L_BST_SFT, RT5677_STO2_ADC_R_BST_SFT,
+ 3, 0, adc_bst_tlv),
+ SOC_DOUBLE_TLV("STO3 ADC Boost Gain", RT5677_STO3_4_ADC_BST,
+ RT5677_STO3_ADC_L_BST_SFT, RT5677_STO3_ADC_R_BST_SFT,
+ 3, 0, adc_bst_tlv),
+ SOC_DOUBLE_TLV("STO4 ADC Boost Gain", RT5677_STO3_4_ADC_BST,
+ RT5677_STO4_ADC_L_BST_SFT, RT5677_STO4_ADC_R_BST_SFT,
+ 3, 0, adc_bst_tlv),
+ SOC_DOUBLE_TLV("Mono ADC Boost Gain", RT5677_ADC_BST_CTRL2,
+ RT5677_MONO_ADC_L_BST_SFT, RT5677_MONO_ADC_R_BST_SFT,
+ 3, 0, adc_bst_tlv),
+
+ SOC_ENUM_EXT("VAD Mode", rt5677_vad_mode_enum, rt5677_vad_get,
+ rt5677_vad_put),
+
+ SOC_ENUM_EXT("TDM1 Mode", rt5677_tdm1_enum, rt5677_tdm1_get,
+ rt5677_tdm1_put),
+};
+
+/**
+ * set_dmic_clk - Set parameter of dmic.
+ *
+ * @w: DAPM widget.
+ * @kcontrol: The kcontrol of this widget.
+ * @event: Event id.
+ *
+ * Choose a DMIC clock between 1 MHz and 3 MHz; a clock close
+ * to 3 MHz is preferred.
+ */
+static int set_dmic_clk(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+ int div[] = {2, 3, 4, 6, 8, 12}, idx = -EINVAL, i;
+ int rate, red, bound, temp;
+
+ rate = rt5677->lrck[rt5677->aif_pu] << 8;
+ red = RT5677_DMIC_CLK_MAX * 12;
+ for (i = 0; i < ARRAY_SIZE(div); i++) {
+ bound = div[i] * RT5677_DMIC_CLK_MAX;
+ if (rate > bound)
+ continue;
+ temp = bound - rate;
+ if (temp < red) {
+ red = temp;
+ idx = i;
+ }
+ }
+#ifdef USE_ASRC
+ idx = 5;
+#endif
+ if (idx < 0)
+ dev_err(codec->dev, "Failed to set DMIC clock\n");
+ else
+ regmap_update_bits(rt5677->regmap, RT5677_DMIC_CTRL1,
+ RT5677_DMIC_CLK_MASK, idx << RT5677_DMIC_CLK_SFT);
+ return idx;
+}
+
+static int check_sysclk1_source(struct snd_soc_dapm_widget *source,
+ struct snd_soc_dapm_widget *sink)
+{
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(source->codec);
+ unsigned int val;
+
+ regmap_read(rt5677->regmap, RT5677_GLB_CLK1, &val);
+ val &= RT5677_SCLK_SRC_MASK;
+ if (val == RT5677_SCLK_SRC_PLL1)
+ return 1;
+ else
+ return 0;
+}
+
+/* Digital Mixer */
+static const struct snd_kcontrol_new rt5677_sto1_adc_l_mix[] = {
+ SOC_DAPM_SINGLE("ADC1 Switch", RT5677_STO1_ADC_MIXER,
+ RT5677_M_STO1_ADC_L1_SFT, 1, 1),
+ SOC_DAPM_SINGLE("ADC2 Switch", RT5677_STO1_ADC_MIXER,
+ RT5677_M_STO1_ADC_L2_SFT, 1, 1),
+};
+
+static const struct snd_kcontrol_new rt5677_sto1_adc_r_mix[] = {
+ SOC_DAPM_SINGLE("ADC1 Switch", RT5677_STO1_ADC_MIXER,
+ RT5677_M_STO1_ADC_R1_SFT, 1, 1),
+ SOC_DAPM_SINGLE("ADC2 Switch", RT5677_STO1_ADC_MIXER,
+ RT5677_M_STO1_ADC_R2_SFT, 1, 1),
+};
+
+static const struct snd_kcontrol_new rt5677_sto2_adc_l_mix[] = {
+ SOC_DAPM_SINGLE("ADC1 Switch", RT5677_STO2_ADC_MIXER,
+ RT5677_M_STO2_ADC_L1_SFT, 1, 1),
+ SOC_DAPM_SINGLE("ADC2 Switch", RT5677_STO2_ADC_MIXER,
+ RT5677_M_STO2_ADC_L2_SFT, 1, 1),
+};
+
+static const struct snd_kcontrol_new rt5677_sto2_adc_r_mix[] = {
+ SOC_DAPM_SINGLE("ADC1 Switch", RT5677_STO2_ADC_MIXER,
+ RT5677_M_STO2_ADC_R1_SFT, 1, 1),
+ SOC_DAPM_SINGLE("ADC2 Switch", RT5677_STO2_ADC_MIXER,
+ RT5677_M_STO2_ADC_R2_SFT, 1, 1),
+};
+
+static const struct snd_kcontrol_new rt5677_sto3_adc_l_mix[] = {
+ SOC_DAPM_SINGLE("ADC1 Switch", RT5677_STO3_ADC_MIXER,
+ RT5677_M_STO3_ADC_L1_SFT, 1, 1),
+ SOC_DAPM_SINGLE("ADC2 Switch", RT5677_STO3_ADC_MIXER,
+ RT5677_M_STO3_ADC_L2_SFT, 1, 1),
+};
+
+static const struct snd_kcontrol_new rt5677_sto3_adc_r_mix[] = {
+ SOC_DAPM_SINGLE("ADC1 Switch", RT5677_STO3_ADC_MIXER,
+ RT5677_M_STO3_ADC_R1_SFT, 1, 1),
+ SOC_DAPM_SINGLE("ADC2 Switch", RT5677_STO3_ADC_MIXER,
+ RT5677_M_STO3_ADC_R2_SFT, 1, 1),
+};
+
+static const struct snd_kcontrol_new rt5677_sto4_adc_l_mix[] = {
+ SOC_DAPM_SINGLE("ADC1 Switch", RT5677_STO4_ADC_MIXER,
+ RT5677_M_STO4_ADC_L1_SFT, 1, 1),
+ SOC_DAPM_SINGLE("ADC2 Switch", RT5677_STO4_ADC_MIXER,
+ RT5677_M_STO4_ADC_L2_SFT, 1, 1),
+};
+
+static const struct snd_kcontrol_new rt5677_sto4_adc_r_mix[] = {
+ SOC_DAPM_SINGLE("ADC1 Switch", RT5677_STO4_ADC_MIXER,
+ RT5677_M_STO4_ADC_R1_SFT, 1, 1),
+ SOC_DAPM_SINGLE("ADC2 Switch", RT5677_STO4_ADC_MIXER,
+ RT5677_M_STO4_ADC_R2_SFT, 1, 1),
+};
+
+static const struct snd_kcontrol_new rt5677_mono_adc_l_mix[] = {
+ SOC_DAPM_SINGLE("ADC1 Switch", RT5677_MONO_ADC_MIXER,
+ RT5677_M_MONO_ADC_L1_SFT, 1, 1),
+ SOC_DAPM_SINGLE("ADC2 Switch", RT5677_MONO_ADC_MIXER,
+ RT5677_M_MONO_ADC_L2_SFT, 1, 1),
+};
+
+static const struct snd_kcontrol_new rt5677_mono_adc_r_mix[] = {
+ SOC_DAPM_SINGLE("ADC1 Switch", RT5677_MONO_ADC_MIXER,
+ RT5677_M_MONO_ADC_R1_SFT, 1, 1),
+ SOC_DAPM_SINGLE("ADC2 Switch", RT5677_MONO_ADC_MIXER,
+ RT5677_M_MONO_ADC_R2_SFT, 1, 1),
+};
+
+static const struct snd_kcontrol_new rt5677_dac_l_mix[] = {
+ SOC_DAPM_SINGLE("Stereo ADC Switch", RT5677_ADC_IF_DSP_DAC1_MIXER,
+ RT5677_M_ADDA_MIXER1_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("DAC1 Switch", RT5677_ADC_IF_DSP_DAC1_MIXER,
+ RT5677_M_DAC1_L_SFT, 1, 1),
+};
+
+static const struct snd_kcontrol_new rt5677_dac_r_mix[] = {
+ SOC_DAPM_SINGLE("Stereo ADC Switch", RT5677_ADC_IF_DSP_DAC1_MIXER,
+ RT5677_M_ADDA_MIXER1_R_SFT, 1, 1),
+ SOC_DAPM_SINGLE("DAC1 Switch", RT5677_ADC_IF_DSP_DAC1_MIXER,
+ RT5677_M_DAC1_R_SFT, 1, 1),
+};
+
+static const struct snd_kcontrol_new rt5677_sto1_dac_l_mix[] = {
+ SOC_DAPM_SINGLE("ST L Switch", RT5677_STO1_DAC_MIXER,
+ RT5677_M_ST_DAC1_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("DAC1 L Switch", RT5677_STO1_DAC_MIXER,
+ RT5677_M_DAC1_L_STO_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("DAC2 L Switch", RT5677_STO1_DAC_MIXER,
+ RT5677_M_DAC2_L_STO_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("DAC1 R Switch", RT5677_STO1_DAC_MIXER,
+ RT5677_M_DAC1_R_STO_L_SFT, 1, 1),
+};
+
+static const struct snd_kcontrol_new rt5677_sto1_dac_r_mix[] = {
+ SOC_DAPM_SINGLE("ST R Switch", RT5677_STO1_DAC_MIXER,
+ RT5677_M_ST_DAC1_R_SFT, 1, 1),
+ SOC_DAPM_SINGLE("DAC1 R Switch", RT5677_STO1_DAC_MIXER,
+ RT5677_M_DAC1_R_STO_R_SFT, 1, 1),
+ SOC_DAPM_SINGLE("DAC2 R Switch", RT5677_STO1_DAC_MIXER,
+ RT5677_M_DAC2_R_STO_R_SFT, 1, 1),
+ SOC_DAPM_SINGLE("DAC1 L Switch", RT5677_STO1_DAC_MIXER,
+ RT5677_M_DAC1_L_STO_R_SFT, 1, 1),
+};
+
+static const struct snd_kcontrol_new rt5677_mono_dac_l_mix[] = {
+ SOC_DAPM_SINGLE("ST L Switch", RT5677_MONO_DAC_MIXER,
+ RT5677_M_ST_DAC2_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("DAC1 L Switch", RT5677_MONO_DAC_MIXER,
+ RT5677_M_DAC1_L_MONO_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("DAC2 L Switch", RT5677_MONO_DAC_MIXER,
+ RT5677_M_DAC2_L_MONO_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("DAC2 R Switch", RT5677_MONO_DAC_MIXER,
+ RT5677_M_DAC2_R_MONO_L_SFT, 1, 1),
+};
+
+static const struct snd_kcontrol_new rt5677_mono_dac_r_mix[] = {
+ SOC_DAPM_SINGLE("ST R Switch", RT5677_MONO_DAC_MIXER,
+ RT5677_M_ST_DAC2_R_SFT, 1, 1),
+ SOC_DAPM_SINGLE("DAC1 R Switch", RT5677_MONO_DAC_MIXER,
+ RT5677_M_DAC1_R_MONO_R_SFT, 1, 1),
+ SOC_DAPM_SINGLE("DAC2 R Switch", RT5677_MONO_DAC_MIXER,
+ RT5677_M_DAC2_R_MONO_R_SFT, 1, 1),
+ SOC_DAPM_SINGLE("DAC2 L Switch", RT5677_MONO_DAC_MIXER,
+ RT5677_M_DAC2_L_MONO_R_SFT, 1, 1),
+};
+
+static const struct snd_kcontrol_new rt5677_dd1_l_mix[] = {
+ SOC_DAPM_SINGLE("Sto DAC Mix L Switch", RT5677_DD1_MIXER,
+ RT5677_M_STO_L_DD1_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("Mono DAC Mix L Switch", RT5677_DD1_MIXER,
+ RT5677_M_MONO_L_DD1_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("DAC3 L Switch", RT5677_DD1_MIXER,
+ RT5677_M_DAC3_L_DD1_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("DAC3 R Switch", RT5677_DD1_MIXER,
+ RT5677_M_DAC3_R_DD1_L_SFT, 1, 1),
+};
+
+static const struct snd_kcontrol_new rt5677_dd1_r_mix[] = {
+ SOC_DAPM_SINGLE("Sto DAC Mix R Switch", RT5677_DD1_MIXER,
+ RT5677_M_STO_R_DD1_R_SFT, 1, 1),
+ SOC_DAPM_SINGLE("Mono DAC Mix R Switch", RT5677_DD1_MIXER,
+ RT5677_M_MONO_R_DD1_R_SFT, 1, 1),
+ SOC_DAPM_SINGLE("DAC3 R Switch", RT5677_DD1_MIXER,
+ RT5677_M_DAC3_R_DD1_R_SFT, 1, 1),
+ SOC_DAPM_SINGLE("DAC3 L Switch", RT5677_DD1_MIXER,
+ RT5677_M_DAC3_L_DD1_R_SFT, 1, 1),
+};
+
+static const struct snd_kcontrol_new rt5677_dd2_l_mix[] = {
+ SOC_DAPM_SINGLE("Sto DAC Mix L Switch", RT5677_DD2_MIXER,
+ RT5677_M_STO_L_DD2_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("Mono DAC Mix L Switch", RT5677_DD2_MIXER,
+ RT5677_M_MONO_L_DD2_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("DAC4 L Switch", RT5677_DD2_MIXER,
+ RT5677_M_DAC4_L_DD2_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("DAC4 R Switch", RT5677_DD2_MIXER,
+ RT5677_M_DAC4_R_DD2_L_SFT, 1, 1),
+};
+
+static const struct snd_kcontrol_new rt5677_dd2_r_mix[] = {
+ SOC_DAPM_SINGLE("Sto DAC Mix R Switch", RT5677_DD2_MIXER,
+ RT5677_M_STO_R_DD2_R_SFT, 1, 1),
+ SOC_DAPM_SINGLE("Mono DAC Mix R Switch", RT5677_DD2_MIXER,
+ RT5677_M_MONO_R_DD2_R_SFT, 1, 1),
+ SOC_DAPM_SINGLE("DAC4 R Switch", RT5677_DD2_MIXER,
+ RT5677_M_DAC4_R_DD2_R_SFT, 1, 1),
+ SOC_DAPM_SINGLE("DAC4 L Switch", RT5677_DD2_MIXER,
+ RT5677_M_DAC4_L_DD2_R_SFT, 1, 1),
+};
+
+static const struct snd_kcontrol_new rt5677_ob_01_mix[] = {
+ SOC_DAPM_SINGLE("IB01 Switch", RT5677_DSP_OUTB_0123_MIXER_CTRL,
+ RT5677_DSP_IB_01_H_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB23 Switch", RT5677_DSP_OUTB_0123_MIXER_CTRL,
+ RT5677_DSP_IB_23_H_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB45 Switch", RT5677_DSP_OUTB_0123_MIXER_CTRL,
+ RT5677_DSP_IB_45_H_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB6 Switch", RT5677_DSP_OUTB_0123_MIXER_CTRL,
+ RT5677_DSP_IB_6_H_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB7 Switch", RT5677_DSP_OUTB_0123_MIXER_CTRL,
+ RT5677_DSP_IB_7_H_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB8 Switch", RT5677_DSP_OUTB_0123_MIXER_CTRL,
+ RT5677_DSP_IB_8_H_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB9 Switch", RT5677_DSP_OUTB_0123_MIXER_CTRL,
+ RT5677_DSP_IB_9_H_SFT, 1, 1),
+};
+
+static const struct snd_kcontrol_new rt5677_ob_23_mix[] = {
+ SOC_DAPM_SINGLE("IB01 Switch", RT5677_DSP_OUTB_0123_MIXER_CTRL,
+ RT5677_DSP_IB_01_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB23 Switch", RT5677_DSP_OUTB_0123_MIXER_CTRL,
+ RT5677_DSP_IB_23_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB45 Switch", RT5677_DSP_OUTB_0123_MIXER_CTRL,
+ RT5677_DSP_IB_45_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB6 Switch", RT5677_DSP_OUTB_0123_MIXER_CTRL,
+ RT5677_DSP_IB_6_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB7 Switch", RT5677_DSP_OUTB_0123_MIXER_CTRL,
+ RT5677_DSP_IB_7_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB8 Switch", RT5677_DSP_OUTB_0123_MIXER_CTRL,
+ RT5677_DSP_IB_8_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB9 Switch", RT5677_DSP_OUTB_0123_MIXER_CTRL,
+ RT5677_DSP_IB_9_L_SFT, 1, 1),
+};
+
+static const struct snd_kcontrol_new rt5677_ob_4_mix[] = {
+ SOC_DAPM_SINGLE("IB01 Switch", RT5677_DSP_OUTB_45_MIXER_CTRL,
+ RT5677_DSP_IB_01_H_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB23 Switch", RT5677_DSP_OUTB_45_MIXER_CTRL,
+ RT5677_DSP_IB_23_H_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB45 Switch", RT5677_DSP_OUTB_45_MIXER_CTRL,
+ RT5677_DSP_IB_45_H_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB6 Switch", RT5677_DSP_OUTB_45_MIXER_CTRL,
+ RT5677_DSP_IB_6_H_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB7 Switch", RT5677_DSP_OUTB_45_MIXER_CTRL,
+ RT5677_DSP_IB_7_H_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB8 Switch", RT5677_DSP_OUTB_45_MIXER_CTRL,
+ RT5677_DSP_IB_8_H_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB9 Switch", RT5677_DSP_OUTB_45_MIXER_CTRL,
+ RT5677_DSP_IB_9_H_SFT, 1, 1),
+};
+
+static const struct snd_kcontrol_new rt5677_ob_5_mix[] = {
+ SOC_DAPM_SINGLE("IB01 Switch", RT5677_DSP_OUTB_45_MIXER_CTRL,
+ RT5677_DSP_IB_01_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB23 Switch", RT5677_DSP_OUTB_45_MIXER_CTRL,
+ RT5677_DSP_IB_23_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB45 Switch", RT5677_DSP_OUTB_45_MIXER_CTRL,
+ RT5677_DSP_IB_45_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB6 Switch", RT5677_DSP_OUTB_45_MIXER_CTRL,
+ RT5677_DSP_IB_6_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB7 Switch", RT5677_DSP_OUTB_45_MIXER_CTRL,
+ RT5677_DSP_IB_7_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB8 Switch", RT5677_DSP_OUTB_45_MIXER_CTRL,
+ RT5677_DSP_IB_8_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB9 Switch", RT5677_DSP_OUTB_45_MIXER_CTRL,
+ RT5677_DSP_IB_9_L_SFT, 1, 1),
+};
+
+static const struct snd_kcontrol_new rt5677_ob_6_mix[] = {
+ SOC_DAPM_SINGLE("IB01 Switch", RT5677_DSP_OUTB_67_MIXER_CTRL,
+ RT5677_DSP_IB_01_H_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB23 Switch", RT5677_DSP_OUTB_67_MIXER_CTRL,
+ RT5677_DSP_IB_23_H_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB45 Switch", RT5677_DSP_OUTB_67_MIXER_CTRL,
+ RT5677_DSP_IB_45_H_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB6 Switch", RT5677_DSP_OUTB_67_MIXER_CTRL,
+ RT5677_DSP_IB_6_H_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB7 Switch", RT5677_DSP_OUTB_67_MIXER_CTRL,
+ RT5677_DSP_IB_7_H_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB8 Switch", RT5677_DSP_OUTB_67_MIXER_CTRL,
+ RT5677_DSP_IB_8_H_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB9 Switch", RT5677_DSP_OUTB_67_MIXER_CTRL,
+ RT5677_DSP_IB_9_H_SFT, 1, 1),
+};
+
+static const struct snd_kcontrol_new rt5677_ob_7_mix[] = {
+ SOC_DAPM_SINGLE("IB01 Switch", RT5677_DSP_OUTB_67_MIXER_CTRL,
+ RT5677_DSP_IB_01_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB23 Switch", RT5677_DSP_OUTB_67_MIXER_CTRL,
+ RT5677_DSP_IB_23_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB45 Switch", RT5677_DSP_OUTB_67_MIXER_CTRL,
+ RT5677_DSP_IB_45_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB6 Switch", RT5677_DSP_OUTB_67_MIXER_CTRL,
+ RT5677_DSP_IB_6_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB7 Switch", RT5677_DSP_OUTB_67_MIXER_CTRL,
+ RT5677_DSP_IB_7_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB8 Switch", RT5677_DSP_OUTB_67_MIXER_CTRL,
+ RT5677_DSP_IB_8_L_SFT, 1, 1),
+ SOC_DAPM_SINGLE("IB9 Switch", RT5677_DSP_OUTB_67_MIXER_CTRL,
+ RT5677_DSP_IB_9_L_SFT, 1, 1),
+};
+
+/* Mux */
+/* DAC1 L/R source */ /* MX-29 [10:8] */
+static const char * const rt5677_dac1_src[] = {
+ "IF1 DAC 01", "IF2 DAC 01", "IF3 DAC LR", "IF4 DAC LR", "SLB DAC 01",
+ "OB 01"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_dac1_enum, RT5677_ADC_IF_DSP_DAC1_MIXER,
+ RT5677_DAC1_L_SEL_SFT, rt5677_dac1_src);
+
+static const struct snd_kcontrol_new rt5677_dac1_mux =
+ SOC_DAPM_ENUM("DAC1 source", rt5677_dac1_enum);
+
+/* ADDA1 L/R source */ /* MX-29 [1:0] */
+static const char * const rt5677_adda1_src[] = {
+ "STO1 ADC MIX", "STO2 ADC MIX", "OB 67",
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_adda1_enum, RT5677_ADC_IF_DSP_DAC1_MIXER,
+ RT5677_ADDA1_SEL_SFT, rt5677_adda1_src);
+
+static const struct snd_kcontrol_new rt5677_adda1_mux =
+ SOC_DAPM_ENUM("ADDA1 source", rt5677_adda1_enum);
+
+/* DAC2 L/R source */ /* MX-1B [6:4] [2:0] */
+static const char * const rt5677_dac2l_src[] = {
+ "IF1 DAC 2", "IF2 DAC 2", "IF3 DAC L", "IF4 DAC L", "SLB DAC 2",
+ "OB 2",
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_dac2l_enum, RT5677_IF_DSP_DAC2_MIXER,
+ RT5677_SEL_DAC2_L_SRC_SFT, rt5677_dac2l_src);
+
+static const struct snd_kcontrol_new rt5677_dac2_l_mux =
+ SOC_DAPM_ENUM("DAC2 L source", rt5677_dac2l_enum);
+
+static const char * const rt5677_dac2r_src[] = {
+ "IF1 DAC 3", "IF2 DAC 3", "IF3 DAC R", "IF4 DAC R", "SLB DAC 3",
+ "OB 3", "Haptic Generator", "VAD ADC"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_dac2r_enum, RT5677_IF_DSP_DAC2_MIXER,
+ RT5677_SEL_DAC2_R_SRC_SFT, rt5677_dac2r_src);
+
+static const struct snd_kcontrol_new rt5677_dac2_r_mux =
+ SOC_DAPM_ENUM("DAC2 R source", rt5677_dac2r_enum);
+
+/* DAC3 L/R source */ /* MX-16 [6:4] [2:0] */
+static const char * const rt5677_dac3l_src[] = {
+ "IF1 DAC 4", "IF2 DAC 4", "IF3 DAC L", "IF4 DAC L",
+ "SLB DAC 4", "OB 4"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_dac3l_enum, RT5677_IF_DSP_DAC3_4_MIXER,
+ RT5677_SEL_DAC3_L_SRC_SFT, rt5677_dac3l_src);
+
+static const struct snd_kcontrol_new rt5677_dac3_l_mux =
+ SOC_DAPM_ENUM("DAC3 L source", rt5677_dac3l_enum);
+
+static const char * const rt5677_dac3r_src[] = {
+ "IF1 DAC 5", "IF2 DAC 5", "IF3 DAC R", "IF4 DAC R",
+ "SLB DAC 5", "OB 5"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_dac3r_enum, RT5677_IF_DSP_DAC3_4_MIXER,
+ RT5677_SEL_DAC3_R_SRC_SFT, rt5677_dac3r_src);
+
+static const struct snd_kcontrol_new rt5677_dac3_r_mux =
+ SOC_DAPM_ENUM("DAC3 R source", rt5677_dac3r_enum);
+
+/* DAC4 L/R source */ /* MX-16 [14:12] [10:8] */
+static const char * const rt5677_dac4l_src[] = {
+ "IF1 DAC 6", "IF2 DAC 6", "IF3 DAC L", "IF4 DAC L",
+ "SLB DAC 6", "OB 6"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_dac4l_enum, RT5677_IF_DSP_DAC3_4_MIXER,
+ RT5677_SEL_DAC4_L_SRC_SFT, rt5677_dac4l_src);
+
+static const struct snd_kcontrol_new rt5677_dac4_l_mux =
+ SOC_DAPM_ENUM("DAC4 L source", rt5677_dac4l_enum);
+
+static const char * const rt5677_dac4r_src[] = {
+ "IF1 DAC 7", "IF2 DAC 7", "IF3 DAC R", "IF4 DAC R",
+ "SLB DAC 7", "OB 7"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_dac4r_enum, RT5677_IF_DSP_DAC3_4_MIXER,
+ RT5677_SEL_DAC4_R_SRC_SFT, rt5677_dac4r_src);
+
+static const struct snd_kcontrol_new rt5677_dac4_r_mux =
+ SOC_DAPM_ENUM("DAC4 R source", rt5677_dac4r_enum);
+
+/* In/OutBound Source Pass SRC */ /* MX-A5 [3] [4] [0] [1] [2] */
+static const char * const rt5677_iob_bypass_src[] = {
+ "Bypass", "Pass SRC"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_ob01_bypass_src_enum, RT5677_DSP_IN_OUTB_CTRL,
+ RT5677_SEL_SRC_OB01_SFT, rt5677_iob_bypass_src);
+
+static const struct snd_kcontrol_new rt5677_ob01_bypass_src_mux =
+ SOC_DAPM_ENUM("OB01 Bypass source", rt5677_ob01_bypass_src_enum);
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_ob23_bypass_src_enum, RT5677_DSP_IN_OUTB_CTRL,
+ RT5677_SEL_SRC_OB23_SFT, rt5677_iob_bypass_src);
+
+static const struct snd_kcontrol_new rt5677_ob23_bypass_src_mux =
+ SOC_DAPM_ENUM("OB23 Bypass source", rt5677_ob23_bypass_src_enum);
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_ib01_bypass_src_enum, RT5677_DSP_IN_OUTB_CTRL,
+ RT5677_SEL_SRC_IB01_SFT, rt5677_iob_bypass_src);
+
+static const struct snd_kcontrol_new rt5677_ib01_bypass_src_mux =
+ SOC_DAPM_ENUM("IB01 Bypass source", rt5677_ib01_bypass_src_enum);
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_ib23_bypass_src_enum, RT5677_DSP_IN_OUTB_CTRL,
+ RT5677_SEL_SRC_IB23_SFT, rt5677_iob_bypass_src);
+
+static const struct snd_kcontrol_new rt5677_ib23_bypass_src_mux =
+ SOC_DAPM_ENUM("IB23 Bypass source", rt5677_ib23_bypass_src_enum);
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_ib45_bypass_src_enum, RT5677_DSP_IN_OUTB_CTRL,
+ RT5677_SEL_SRC_IB45_SFT, rt5677_iob_bypass_src);
+
+static const struct snd_kcontrol_new rt5677_ib45_bypass_src_mux =
+ SOC_DAPM_ENUM("IB45 Bypass source", rt5677_ib45_bypass_src_enum);
+
+/* Stereo ADC Source 2 */ /* MX-27 MX-26 MX-25 [11:10] */
+static const char * const rt5677_stereo_adc2_src[] = {
+ "DD MIX1", "DMIC", "Stereo DAC MIX"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_stereo1_adc2_enum, RT5677_STO1_ADC_MIXER,
+ RT5677_SEL_STO1_ADC2_SFT, rt5677_stereo_adc2_src);
+
+static const struct snd_kcontrol_new rt5677_sto1_adc2_mux =
+ SOC_DAPM_ENUM("Stereo1 ADC2 source", rt5677_stereo1_adc2_enum);
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_stereo2_adc2_enum, RT5677_STO2_ADC_MIXER,
+ RT5677_SEL_STO2_ADC2_SFT, rt5677_stereo_adc2_src);
+
+static const struct snd_kcontrol_new rt5677_sto2_adc2_mux =
+ SOC_DAPM_ENUM("Stereo2 ADC2 source", rt5677_stereo2_adc2_enum);
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_stereo3_adc2_enum, RT5677_STO3_ADC_MIXER,
+ RT5677_SEL_STO3_ADC2_SFT, rt5677_stereo_adc2_src);
+
+static const struct snd_kcontrol_new rt5677_sto3_adc2_mux =
+ SOC_DAPM_ENUM("Stereo3 ADC2 source", rt5677_stereo3_adc2_enum);
+
+/* DMIC Source */ /* MX-28 [9:8][1:0] MX-27 MX-26 MX-25 MX-24 [9:8] */
+static const char * const rt5677_dmic_src[] = {
+ "DMIC1", "DMIC2", "DMIC3", "DMIC4"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_mono_dmic_l_enum, RT5677_MONO_ADC_MIXER,
+ RT5677_SEL_MONO_DMIC_L_SFT, rt5677_dmic_src);
+
+static const struct snd_kcontrol_new rt5677_mono_dmic_l_mux =
+ SOC_DAPM_ENUM("Mono DMIC L source", rt5677_mono_dmic_l_enum);
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_mono_dmic_r_enum, RT5677_MONO_ADC_MIXER,
+ RT5677_SEL_MONO_DMIC_R_SFT, rt5677_dmic_src);
+
+static const struct snd_kcontrol_new rt5677_mono_dmic_r_mux =
+ SOC_DAPM_ENUM("Mono DMIC R source", rt5677_mono_dmic_r_enum);
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_stereo1_dmic_enum, RT5677_STO1_ADC_MIXER,
+ RT5677_SEL_STO1_DMIC_SFT, rt5677_dmic_src);
+
+static const struct snd_kcontrol_new rt5677_sto1_dmic_mux =
+ SOC_DAPM_ENUM("Stereo1 DMIC source", rt5677_stereo1_dmic_enum);
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_stereo2_dmic_enum, RT5677_STO2_ADC_MIXER,
+ RT5677_SEL_STO2_DMIC_SFT, rt5677_dmic_src);
+
+static const struct snd_kcontrol_new rt5677_sto2_dmic_mux =
+ SOC_DAPM_ENUM("Stereo2 DMIC source", rt5677_stereo2_dmic_enum);
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_stereo3_dmic_enum, RT5677_STO3_ADC_MIXER,
+ RT5677_SEL_STO3_DMIC_SFT, rt5677_dmic_src);
+
+static const struct snd_kcontrol_new rt5677_sto3_dmic_mux =
+ SOC_DAPM_ENUM("Stereo3 DMIC source", rt5677_stereo3_dmic_enum);
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_stereo4_dmic_enum, RT5677_STO4_ADC_MIXER,
+ RT5677_SEL_STO4_DMIC_SFT, rt5677_dmic_src);
+
+static const struct snd_kcontrol_new rt5677_sto4_dmic_mux =
+ SOC_DAPM_ENUM("Stereo4 DMIC source", rt5677_stereo4_dmic_enum);
+
+/* Stereo2 ADC source */ /* MX-26 [0] */
+static const char * const rt5677_stereo2_adc_lr_src[] = {
+ "L", "LR"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_stereo2_adc_lr_enum, RT5677_STO2_ADC_MIXER,
+ RT5677_SEL_STO2_LR_MIX_SFT, rt5677_stereo2_adc_lr_src);
+
+static const struct snd_kcontrol_new rt5677_sto2_adc_lr_mux =
+ SOC_DAPM_ENUM("Stereo2 ADC LR source", rt5677_stereo2_adc_lr_enum);
+
+/* Stereo1 ADC Source 1 */ /* MX-27 MX-26 MX-25 [13:12] */
+static const char * const rt5677_stereo_adc1_src[] = {
+ "DD MIX1", "ADC1/2", "Stereo DAC MIX"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_stereo1_adc1_enum, RT5677_STO1_ADC_MIXER,
+ RT5677_SEL_STO1_ADC1_SFT, rt5677_stereo_adc1_src);
+
+static const struct snd_kcontrol_new rt5677_sto1_adc1_mux =
+ SOC_DAPM_ENUM("Stereo1 ADC1 source", rt5677_stereo1_adc1_enum);
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_stereo2_adc1_enum, RT5677_STO2_ADC_MIXER,
+ RT5677_SEL_STO2_ADC1_SFT, rt5677_stereo_adc1_src);
+
+static const struct snd_kcontrol_new rt5677_sto2_adc1_mux =
+ SOC_DAPM_ENUM("Stereo2 ADC1 source", rt5677_stereo2_adc1_enum);
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_stereo3_adc1_enum, RT5677_STO3_ADC_MIXER,
+ RT5677_SEL_STO3_ADC1_SFT, rt5677_stereo_adc1_src);
+
+static const struct snd_kcontrol_new rt5677_sto3_adc1_mux =
+ SOC_DAPM_ENUM("Stereo3 ADC1 source", rt5677_stereo3_adc1_enum);
+
+/* Mono ADC Left source 2 */ /* MX-28 [11:10] */
+static const char * const rt5677_mono_adc2_l_src[] = {
+ "DD MIX1L", "DMIC", "MONO DAC MIXL"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_mono_adc2_l_enum, RT5677_MONO_ADC_MIXER,
+ RT5677_SEL_MONO_ADC_L2_SFT, rt5677_mono_adc2_l_src);
+
+static const struct snd_kcontrol_new rt5677_mono_adc2_l_mux =
+ SOC_DAPM_ENUM("Mono ADC2 L source", rt5677_mono_adc2_l_enum);
+
+/* Mono ADC Left source 1 */ /* MX-28 [13:12] */
+static const char * const rt5677_mono_adc1_l_src[] = {
+ "DD MIX1L", "ADC1", "MONO DAC MIXL"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_mono_adc1_l_enum, RT5677_MONO_ADC_MIXER,
+ RT5677_SEL_MONO_ADC_L1_SFT, rt5677_mono_adc1_l_src);
+
+static const struct snd_kcontrol_new rt5677_mono_adc1_l_mux =
+ SOC_DAPM_ENUM("Mono ADC1 L source", rt5677_mono_adc1_l_enum);
+
+/* Mono ADC Right source 2 */ /* MX-28 [3:2] */
+static const char * const rt5677_mono_adc2_r_src[] = {
+ "DD MIX1R", "DMIC", "MONO DAC MIXR"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_mono_adc2_r_enum, RT5677_MONO_ADC_MIXER,
+ RT5677_SEL_MONO_ADC_R2_SFT, rt5677_mono_adc2_r_src);
+
+static const struct snd_kcontrol_new rt5677_mono_adc2_r_mux =
+ SOC_DAPM_ENUM("Mono ADC2 R source", rt5677_mono_adc2_r_enum);
+
+/* Mono ADC Right source 1 */ /* MX-28 [5:4] */
+static const char * const rt5677_mono_adc1_r_src[] = {
+ "DD MIX1R", "ADC2", "MONO DAC MIXR"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_mono_adc1_r_enum, RT5677_MONO_ADC_MIXER,
+ RT5677_SEL_MONO_ADC_R1_SFT, rt5677_mono_adc1_r_src);
+
+static const struct snd_kcontrol_new rt5677_mono_adc1_r_mux =
+ SOC_DAPM_ENUM("Mono ADC1 R source", rt5677_mono_adc1_r_enum);
+
+/* Stereo4 ADC Source 2 */ /* MX-24 [11:10] */
+static const char * const rt5677_stereo4_adc2_src[] = {
+ "DD MIX1", "DMIC", "DD MIX2"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_stereo4_adc2_enum, RT5677_STO4_ADC_MIXER,
+ RT5677_SEL_STO4_ADC2_SFT, rt5677_stereo4_adc2_src);
+
+static const struct snd_kcontrol_new rt5677_sto4_adc2_mux =
+ SOC_DAPM_ENUM("Stereo4 ADC2 source", rt5677_stereo4_adc2_enum);
+
+/* Stereo4 ADC Source 1 */ /* MX-24 [13:12] */
+static const char * const rt5677_stereo4_adc1_src[] = {
+ "DD MIX1", "ADC1/2", "DD MIX2"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_stereo4_adc1_enum, RT5677_STO4_ADC_MIXER,
+ RT5677_SEL_STO4_ADC1_SFT, rt5677_stereo4_adc1_src);
+
+static const struct snd_kcontrol_new rt5677_sto4_adc1_mux =
+ SOC_DAPM_ENUM("Stereo4 ADC1 source", rt5677_stereo4_adc1_enum);
+
+/* InBound0/1 Source */ /* MX-A3 [14:12] */
+static const char * const rt5677_inbound01_src[] = {
+ "IF1 DAC 01", "IF2 DAC 01", "SLB DAC 01", "STO1 ADC MIX",
+ "VAD ADC/DAC1 FS"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_inbound01_enum, RT5677_DSP_INB_CTRL1,
+ RT5677_IB01_SRC_SFT, rt5677_inbound01_src);
+
+static const struct snd_kcontrol_new rt5677_ib01_src_mux =
+ SOC_DAPM_ENUM("InBound0/1 Source", rt5677_inbound01_enum);
+
+/* InBound2/3 Source */ /* MX-A3 [10:8] */
+static const char * const rt5677_inbound23_src[] = {
+ "IF1 DAC 23", "IF2 DAC 23", "SLB DAC 23", "STO2 ADC MIX",
+ "DAC1 FS", "IF4 DAC"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_inbound23_enum, RT5677_DSP_INB_CTRL1,
+ RT5677_IB23_SRC_SFT, rt5677_inbound23_src);
+
+static const struct snd_kcontrol_new rt5677_ib23_src_mux =
+ SOC_DAPM_ENUM("InBound2/3 Source", rt5677_inbound23_enum);
+
+/* InBound4/5 Source */ /* MX-A3 [6:4] */
+static const char * const rt5677_inbound45_src[] = {
+ "IF1 DAC 45", "IF2 DAC 45", "SLB DAC 45", "STO3 ADC MIX",
+ "IF3 DAC"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_inbound45_enum, RT5677_DSP_INB_CTRL1,
+ RT5677_IB45_SRC_SFT, rt5677_inbound45_src);
+
+static const struct snd_kcontrol_new rt5677_ib45_src_mux =
+ SOC_DAPM_ENUM("InBound4/5 Source", rt5677_inbound45_enum);
+
+/* InBound6 Source */ /* MX-A3 [2:0] */
+static const char * const rt5677_inbound6_src[] = {
+ "IF1 DAC 6", "IF2 DAC 6", "SLB DAC 6", "STO4 ADC MIX L",
+ "IF4 DAC L", "STO1 ADC MIX L", "STO2 ADC MIX L", "STO3 ADC MIX L"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_inbound6_enum, RT5677_DSP_INB_CTRL1,
+ RT5677_IB6_SRC_SFT, rt5677_inbound6_src);
+
+static const struct snd_kcontrol_new rt5677_ib6_src_mux =
+ SOC_DAPM_ENUM("InBound6 Source", rt5677_inbound6_enum);
+
+/* InBound7 Source */ /* MX-A4 [14:12] */
+static const char * const rt5677_inbound7_src[] = {
+ "IF1 DAC 7", "IF2 DAC 7", "SLB DAC 7", "STO4 ADC MIX R",
+ "IF4 DAC R", "STO1 ADC MIX R", "STO2 ADC MIX R", "STO3 ADC MIX R"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_inbound7_enum, RT5677_DSP_INB_CTRL2,
+ RT5677_IB7_SRC_SFT, rt5677_inbound7_src);
+
+static const struct snd_kcontrol_new rt5677_ib7_src_mux =
+ SOC_DAPM_ENUM("InBound7 Source", rt5677_inbound7_enum);
+
+/* InBound8 Source */ /* MX-A4 [10:8] */
+static const char * const rt5677_inbound8_src[] = {
+ "STO1 ADC MIX L", "STO2 ADC MIX L", "STO3 ADC MIX L", "STO4 ADC MIX L",
+ "MONO ADC MIX L", "DACL1 FS"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_inbound8_enum, RT5677_DSP_INB_CTRL2,
+ RT5677_IB8_SRC_SFT, rt5677_inbound8_src);
+
+static const struct snd_kcontrol_new rt5677_ib8_src_mux =
+ SOC_DAPM_ENUM("InBound8 Source", rt5677_inbound8_enum);
+
+/* InBound9 Source */ /* MX-A4 [6:4] */
+static const char * const rt5677_inbound9_src[] = {
+ "STO1 ADC MIX R", "STO2 ADC MIX R", "STO3 ADC MIX R", "STO4 ADC MIX R",
+ "MONO ADC MIX R", "DACR1 FS", "DAC1 FS"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_inbound9_enum, RT5677_DSP_INB_CTRL2,
+ RT5677_IB9_SRC_SFT, rt5677_inbound9_src);
+
+static const struct snd_kcontrol_new rt5677_ib9_src_mux =
+ SOC_DAPM_ENUM("InBound9 Source", rt5677_inbound9_enum);
+
+/* VAD Source */ /* MX-9F [6:4] */
+static const char * const rt5677_vad_src[] = {
+ "STO1 ADC MIX L", "MONO ADC MIX L", "MONO ADC MIX R", "STO2 ADC MIX L",
+ "STO3 ADC MIX L"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_vad_enum, RT5677_VAD_CTRL4,
+ RT5677_VAD_SRC_SFT, rt5677_vad_src);
+
+static const struct snd_kcontrol_new rt5677_vad_src_mux =
+ SOC_DAPM_ENUM("VAD Source", rt5677_vad_enum);
+
+/* Sidetone Source */ /* MX-13 [11:9] */
+static const char * const rt5677_sidetone_src[] = {
+ "DMIC1 L", "DMIC2 L", "DMIC3 L", "DMIC4 L", "ADC1", "ADC2"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_sidetone_enum, RT5677_SIDETONE_CTRL,
+ RT5677_ST_SEL_SFT, rt5677_sidetone_src);
+
+static const struct snd_kcontrol_new rt5677_sidetone_mux =
+ SOC_DAPM_ENUM("Sidetone Source", rt5677_sidetone_enum);
+
+/* DAC1/2 Source */ /* MX-15 [1:0] */
+static const char * const rt5677_dac12_src[] = {
+ "STO1 DAC MIX", "MONO DAC MIX", "DD MIX1", "DD MIX2"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_dac12_enum, RT5677_ANA_DAC1_2_3_SRC,
+ RT5677_ANA_DAC1_2_SRC_SEL_SFT, rt5677_dac12_src);
+
+static const struct snd_kcontrol_new rt5677_dac12_mux =
+ SOC_DAPM_ENUM("Analog DAC1/2 Source", rt5677_dac12_enum);
+
+/* DAC3 Source */ /* MX-15 [5:4] */
+static const char * const rt5677_dac3_src[] = {
+ "MONO DAC MIXL", "MONO DAC MIXR", "DD MIX1L", "DD MIX2L"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_dac3_enum, RT5677_ANA_DAC1_2_3_SRC,
+ RT5677_ANA_DAC3_SRC_SEL_SFT, rt5677_dac3_src);
+
+static const struct snd_kcontrol_new rt5677_dac3_mux =
+ SOC_DAPM_ENUM("Analog DAC3 Source", rt5677_dac3_enum);
+
+/* PDM channel source */ /* MX-31 [13:12][9:8][5:4][1:0] */
+static const char * const rt5677_pdm_src[] = {
+ "STO1 DAC MIX", "MONO DAC MIX", "DD MIX1", "DD MIX2"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_pdm1_l_enum, RT5677_PDM_OUT_CTRL,
+ RT5677_SEL_PDM1_L_SFT, rt5677_pdm_src);
+
+static const struct snd_kcontrol_new rt5677_pdm1_l_mux =
+ SOC_DAPM_ENUM("PDM1 source", rt5677_pdm1_l_enum);
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_pdm2_l_enum, RT5677_PDM_OUT_CTRL,
+ RT5677_SEL_PDM2_L_SFT, rt5677_pdm_src);
+
+static const struct snd_kcontrol_new rt5677_pdm2_l_mux =
+ SOC_DAPM_ENUM("PDM2 source", rt5677_pdm2_l_enum);
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_pdm1_r_enum, RT5677_PDM_OUT_CTRL,
+ RT5677_SEL_PDM1_R_SFT, rt5677_pdm_src);
+
+static const struct snd_kcontrol_new rt5677_pdm1_r_mux =
+ SOC_DAPM_ENUM("PDM1 source", rt5677_pdm1_r_enum);
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_pdm2_r_enum, RT5677_PDM_OUT_CTRL,
+ RT5677_SEL_PDM2_R_SFT, rt5677_pdm_src);
+
+static const struct snd_kcontrol_new rt5677_pdm2_r_mux =
+ SOC_DAPM_ENUM("PDM2 source", rt5677_pdm2_r_enum);
+
+/* TDM IF1/2 SLB ADC1 Data Selection */ /* MX-3C MX-41 [5:4] MX-08 [1:0] */
+static const char * const rt5677_if12_adc1_src[] = {
+ "STO1 ADC MIX", "OB01", "VAD ADC"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_if1_adc1_enum, RT5677_TDM1_CTRL2,
+ RT5677_IF1_ADC1_SFT, rt5677_if12_adc1_src);
+
+static const struct snd_kcontrol_new rt5677_if1_adc1_mux =
+ SOC_DAPM_ENUM("IF1 ADC1 source", rt5677_if1_adc1_enum);
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_if2_adc1_enum, RT5677_TDM2_CTRL2,
+ RT5677_IF2_ADC1_SFT, rt5677_if12_adc1_src);
+
+static const struct snd_kcontrol_new rt5677_if2_adc1_mux =
+ SOC_DAPM_ENUM("IF2 ADC1 source", rt5677_if2_adc1_enum);
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_slb_adc1_enum, RT5677_SLIMBUS_RX,
+ RT5677_SLB_ADC1_SFT, rt5677_if12_adc1_src);
+
+static const struct snd_kcontrol_new rt5677_slb_adc1_mux =
+ SOC_DAPM_ENUM("SLB ADC1 source", rt5677_slb_adc1_enum);
+
+/* TDM IF1/2 SLB ADC2 Data Selection */ /* MX-3C MX-41 [7:6] MX-08 [3:2] */
+static const char * const rt5677_if12_adc2_src[] = {
+ "STO2 ADC MIX", "OB23"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_if1_adc2_enum, RT5677_TDM1_CTRL2,
+ RT5677_IF1_ADC2_SFT, rt5677_if12_adc2_src);
+
+static const struct snd_kcontrol_new rt5677_if1_adc2_mux =
+ SOC_DAPM_ENUM("IF1 ADC2 source", rt5677_if1_adc2_enum);
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_if2_adc2_enum, RT5677_TDM2_CTRL2,
+ RT5677_IF2_ADC2_SFT, rt5677_if12_adc2_src);
+
+static const struct snd_kcontrol_new rt5677_if2_adc2_mux =
+ SOC_DAPM_ENUM("IF2 ADC2 source", rt5677_if2_adc2_enum);
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_slb_adc2_enum, RT5677_SLIMBUS_RX,
+ RT5677_SLB_ADC2_SFT, rt5677_if12_adc2_src);
+
+static const struct snd_kcontrol_new rt5677_slb_adc2_mux =
+ SOC_DAPM_ENUM("SLB ADC2 source", rt5677_slb_adc2_enum);
+
+/* TDM IF1/2 SLB ADC3 Data Selection */ /* MX-3C MX-41 [9:8] MX-08 [5:4] */
+static const char * const rt5677_if12_adc3_src[] = {
+ "STO3 ADC MIX", "MONO ADC MIX", "OB45"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_if1_adc3_enum, RT5677_TDM1_CTRL2,
+ RT5677_IF1_ADC3_SFT, rt5677_if12_adc3_src);
+
+static const struct snd_kcontrol_new rt5677_if1_adc3_mux =
+ SOC_DAPM_ENUM("IF1 ADC3 source", rt5677_if1_adc3_enum);
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_if2_adc3_enum, RT5677_TDM2_CTRL2,
+ RT5677_IF2_ADC3_SFT, rt5677_if12_adc3_src);
+
+static const struct snd_kcontrol_new rt5677_if2_adc3_mux =
+ SOC_DAPM_ENUM("IF2 ADC3 source", rt5677_if2_adc3_enum);
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_slb_adc3_enum, RT5677_SLIMBUS_RX,
+ RT5677_SLB_ADC3_SFT, rt5677_if12_adc3_src);
+
+static const struct snd_kcontrol_new rt5677_slb_adc3_mux =
+ SOC_DAPM_ENUM("SLB ADC3 source", rt5677_slb_adc3_enum);
+
+/* TDM IF1/2 SLB ADC4 Data Selection */ /* MX-3C MX-41 [11:10] MX-08 [7:6] */
+static const char * const rt5677_if12_adc4_src[] = {
+ "STO4 ADC MIX", "OB67", "OB01"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_if1_adc4_enum, RT5677_TDM1_CTRL2,
+ RT5677_IF1_ADC4_SFT, rt5677_if12_adc4_src);
+
+static const struct snd_kcontrol_new rt5677_if1_adc4_mux =
+ SOC_DAPM_ENUM("IF1 ADC4 source", rt5677_if1_adc4_enum);
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_if2_adc4_enum, RT5677_TDM2_CTRL2,
+ RT5677_IF2_ADC4_SFT, rt5677_if12_adc4_src);
+
+static const struct snd_kcontrol_new rt5677_if2_adc4_mux =
+ SOC_DAPM_ENUM("IF2 ADC4 source", rt5677_if2_adc4_enum);
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_slb_adc4_enum, RT5677_SLIMBUS_RX,
+ RT5677_SLB_ADC4_SFT, rt5677_if12_adc4_src);
+
+static const struct snd_kcontrol_new rt5677_slb_adc4_mux =
+ SOC_DAPM_ENUM("SLB ADC4 source", rt5677_slb_adc4_enum);
+
+/* Interface3/4 ADC Data Input */ /* MX-2F [3:0] MX-30 [7:4] */
+static const char * const rt5677_if34_adc_src[] = {
+ "STO1 ADC MIX", "STO2 ADC MIX", "STO3 ADC MIX", "STO4 ADC MIX",
+ "MONO ADC MIX", "OB01", "OB23", "VAD ADC"
+};
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_if3_adc_enum, RT5677_IF3_DATA,
+ RT5677_IF3_ADC_IN_SFT, rt5677_if34_adc_src);
+
+static const struct snd_kcontrol_new rt5677_if3_adc_mux =
+ SOC_DAPM_ENUM("IF3 ADC source", rt5677_if3_adc_enum);
+
+static const SOC_ENUM_SINGLE_DECL(
+ rt5677_if4_adc_enum, RT5677_IF4_DATA,
+ RT5677_IF4_ADC_IN_SFT, rt5677_if34_adc_src);
+
+static const struct snd_kcontrol_new rt5677_if4_adc_mux =
+ SOC_DAPM_ENUM("IF4 ADC source", rt5677_if4_adc_enum);
+
+static int rt5677_adc_clk_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+ rt5677_index_update_bits(codec,
+ RT5677_CHOP_DAC_ADC, 0x1000, 0x1000);
+ break;
+
+ case SND_SOC_DAPM_POST_PMD:
+ rt5677_index_update_bits(codec,
+ RT5677_CHOP_DAC_ADC, 0x1000, 0x0000);
+ break;
+
+ default:
+ return 0;
+ }
+
+ return 0;
+}
+
+static int rt5677_sto1_adcl_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+ regmap_update_bits(rt5677->regmap, RT5677_STO1_ADC_DIG_VOL,
+ RT5677_L_MUTE, 0);
+ break;
+ case SND_SOC_DAPM_PRE_PMD:
+ regmap_update_bits(rt5677->regmap, RT5677_STO1_ADC_DIG_VOL,
+ RT5677_L_MUTE,
+ RT5677_L_MUTE);
+ break;
+
+ default:
+ return 0;
+ }
+
+ return 0;
+}
+
+static int rt5677_sto1_adcr_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+ regmap_update_bits(rt5677->regmap, RT5677_STO1_ADC_DIG_VOL,
+ RT5677_R_MUTE, 0);
+ break;
+ case SND_SOC_DAPM_PRE_PMD:
+ regmap_update_bits(rt5677->regmap, RT5677_STO1_ADC_DIG_VOL,
+ RT5677_R_MUTE,
+ RT5677_R_MUTE);
+ break;
+
+ default:
+ return 0;
+ }
+
+ return 0;
+}
+
+static int rt5677_mono_adcl_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+ unsigned int val;
+
+ switch (event) {
+ case SND_SOC_DAPM_PRE_PMU:
+ regmap_read(rt5677->regmap, RT5677_DMIC_CTRL1, &val);
+ if (val & RT5677_DMIC_1_EN_MASK)
+ msleep(dmic_depop_time);
+ else
+ msleep(amic_depop_time);
+ break;
+
+ default:
+ return 0;
+ }
+
+ return 0;
+}
+
+static int rt5677_mono_adcr_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+ unsigned int val;
+
+ switch (event) {
+ case SND_SOC_DAPM_PRE_PMU:
+ regmap_read(rt5677->regmap, RT5677_DMIC_CTRL1, &val);
+ if (val & RT5677_DMIC_2_EN_MASK)
+ msleep(dmic_depop_time);
+ else
+ msleep(amic_depop_time);
+ break;
+
+ default:
+ return 0;
+ }
+
+ return 0;
+}
+
+/* Power and unmute the LOUT1 amp on power up; mute, then power off on power down */
+static int rt5677_lout1_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+ regmap_update_bits(rt5677->regmap, RT5677_PWR_ANLG1,
+ RT5677_PWR_LO1, RT5677_PWR_LO1);
+ regmap_update_bits(rt5677->regmap, RT5677_LOUT1,
+ RT5677_LOUT1_L_MUTE, 0);
+ break;
+
+ case SND_SOC_DAPM_PRE_PMD:
+ regmap_update_bits(rt5677->regmap, RT5677_LOUT1,
+ RT5677_LOUT1_L_MUTE, RT5677_LOUT1_L_MUTE);
+ regmap_update_bits(rt5677->regmap, RT5677_PWR_ANLG1,
+ RT5677_PWR_LO1, 0);
+ break;
+
+ default:
+ return 0;
+ }
+
+ return 0;
+}
+
+static int rt5677_lout2_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+ regmap_update_bits(rt5677->regmap, RT5677_PWR_ANLG1,
+ RT5677_PWR_LO2, RT5677_PWR_LO2);
+ regmap_update_bits(rt5677->regmap, RT5677_LOUT1,
+ RT5677_LOUT2_L_MUTE, 0);
+ break;
+
+ case SND_SOC_DAPM_PRE_PMD:
+ regmap_update_bits(rt5677->regmap, RT5677_LOUT1,
+ RT5677_LOUT2_L_MUTE, RT5677_LOUT2_L_MUTE);
+ regmap_update_bits(rt5677->regmap, RT5677_PWR_ANLG1,
+ RT5677_PWR_LO2, 0);
+ break;
+
+ default:
+ return 0;
+ }
+
+ return 0;
+}
+
+static int rt5677_lout3_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+ regmap_update_bits(rt5677->regmap, RT5677_PWR_ANLG1,
+ RT5677_PWR_LO3, RT5677_PWR_LO3);
+ regmap_update_bits(rt5677->regmap, RT5677_LOUT1,
+ RT5677_LOUT3_L_MUTE, 0);
+ break;
+
+ case SND_SOC_DAPM_PRE_PMD:
+ regmap_update_bits(rt5677->regmap, RT5677_LOUT1,
+ RT5677_LOUT3_L_MUTE, RT5677_LOUT3_L_MUTE);
+ regmap_update_bits(rt5677->regmap, RT5677_PWR_ANLG1,
+ RT5677_PWR_LO3, 0);
+ break;
+
+ default:
+ return 0;
+ }
+
+ return 0;
+}
+
+/* Select the DMIC1 clock latch edges and toggle the DMIC1 enable bit */
+static int rt5677_set_dmic1_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+ unsigned int val;
+
+ switch (event) {
+ case SND_SOC_DAPM_PRE_PMU:
+ regmap_read(rt5677->regmap, RT5677_DMIC_CTRL1, &val);
+ if (!(val & RT5677_DMIC_2_EN_MASK)) {
+ regmap_update_bits(rt5677->regmap, RT5677_GPIO_CTRL2,
+ 0x4000, 0x0);
+ regmap_update_bits(rt5677->regmap, RT5677_GEN_CTRL2,
+ 0x0200, 0x0);
+ }
+ regmap_update_bits(rt5677->regmap, RT5677_DMIC_CTRL2,
+ RT5677_DMIC_1L_LH_MASK | RT5677_DMIC_1R_LH_MASK,
+ RT5677_DMIC_1L_LH_FALLING | RT5677_DMIC_1R_LH_RISING);
+ regmap_update_bits(rt5677->regmap, RT5677_DMIC_CTRL1,
+ RT5677_DMIC_1_EN_MASK, RT5677_DMIC_1_EN);
+ break;
+ case SND_SOC_DAPM_POST_PMD:
+ regmap_update_bits(rt5677->regmap, RT5677_DMIC_CTRL1,
+ RT5677_DMIC_1_EN_MASK, RT5677_DMIC_1_DIS);
+ break;
+ default:
+ return 0;
+ }
+
+ return 0;
+}
+
+static int rt5677_set_dmic2_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+
+ switch (event) {
+ case SND_SOC_DAPM_PRE_PMU:
+ regmap_update_bits(rt5677->regmap, RT5677_GPIO_CTRL2, 0x4000,
+ 0x4000);
+ regmap_update_bits(rt5677->regmap, RT5677_GEN_CTRL2, 0x0200,
+ 0x0200);
+ regmap_update_bits(rt5677->regmap, RT5677_DMIC_CTRL2,
+ RT5677_DMIC_2L_LH_MASK | RT5677_DMIC_2R_LH_MASK,
+ RT5677_DMIC_2L_LH_FALLING | RT5677_DMIC_2R_LH_RISING);
+ regmap_update_bits(rt5677->regmap, RT5677_DMIC_CTRL1,
+ RT5677_DMIC_2_EN_MASK, RT5677_DMIC_2_EN);
+ break;
+ case SND_SOC_DAPM_POST_PMD:
+ regmap_update_bits(rt5677->regmap, RT5677_DMIC_CTRL1,
+ RT5677_DMIC_2_EN_MASK, RT5677_DMIC_2_DIS);
+ break;
+ default:
+ return 0;
+ }
+
+ return 0;
+}
+
+static int rt5677_set_dmic3_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+
+ switch (event) {
+ case SND_SOC_DAPM_PRE_PMU:
+ regmap_update_bits(rt5677->regmap, RT5677_DMIC_CTRL2,
+ RT5677_DMIC_3L_LH_MASK | RT5677_DMIC_3R_LH_MASK,
+ RT5677_DMIC_3L_LH_FALLING | RT5677_DMIC_3R_LH_RISING);
+ regmap_update_bits(rt5677->regmap, RT5677_DMIC_CTRL1,
+ RT5677_DMIC_3_EN_MASK, RT5677_DMIC_3_EN);
+ break;
+ case SND_SOC_DAPM_POST_PMD:
+ regmap_update_bits(rt5677->regmap, RT5677_DMIC_CTRL1,
+ RT5677_DMIC_3_EN_MASK, RT5677_DMIC_3_DIS);
+ break;
+ default:
+ return 0;
+ }
+
+ return 0;
+}
+
+static int rt5677_set_dmic4_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+
+ switch (event) {
+ case SND_SOC_DAPM_PRE_PMU:
+ regmap_update_bits(rt5677->regmap, RT5677_DMIC_CTRL2,
+ RT5677_DMIC_4L_LH_MASK | RT5677_DMIC_4R_LH_MASK,
+ RT5677_DMIC_4L_LH_FALLING | RT5677_DMIC_4R_LH_RISING);
+ regmap_update_bits(rt5677->regmap, RT5677_DMIC_CTRL2,
+ RT5677_DMIC_4_EN_MASK, RT5677_DMIC_4_EN);
+ break;
+ case SND_SOC_DAPM_POST_PMD:
+ regmap_update_bits(rt5677->regmap, RT5677_DMIC_CTRL2,
+ RT5677_DMIC_4_EN_MASK, RT5677_DMIC_4_DIS);
+ break;
+ default:
+ return 0;
+ }
+
+ return 0;
+}
+
+/* Toggle the secondary BST1 power bit around the main BST1 widget power bit */
+static int rt5677_bst1_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+ regmap_update_bits(rt5677->regmap, RT5677_PWR_ANLG2,
+ RT5677_PWR_BST1_P, RT5677_PWR_BST1_P);
+ break;
+
+ case SND_SOC_DAPM_PRE_PMD:
+ regmap_update_bits(rt5677->regmap, RT5677_PWR_ANLG2,
+ RT5677_PWR_BST1_P, 0);
+ break;
+
+ default:
+ return 0;
+ }
+
+ return 0;
+}
+
+static int rt5677_bst2_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+ regmap_update_bits(rt5677->regmap, RT5677_PWR_ANLG2,
+ RT5677_PWR_BST2_P, RT5677_PWR_BST2_P);
+ break;
+
+ case SND_SOC_DAPM_PRE_PMD:
+ regmap_update_bits(rt5677->regmap, RT5677_PWR_ANLG2,
+ RT5677_PWR_BST2_P, 0);
+ break;
+
+ default:
+ return 0;
+ }
+
+ return 0;
+}
+
+/* Unmute the PDM1 left output after power up, mute before power down */
+static int rt5677_pdm1_l_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+ regmap_update_bits(rt5677->regmap, RT5677_PDM_OUT_CTRL,
+ RT5677_M_PDM1_L, 0);
+ break;
+
+ case SND_SOC_DAPM_PRE_PMD:
+ regmap_update_bits(rt5677->regmap, RT5677_PDM_OUT_CTRL,
+ RT5677_M_PDM1_L, RT5677_M_PDM1_L);
+ break;
+
+ default:
+ return 0;
+ }
+
+ return 0;
+}
+
+static int rt5677_pdm1_r_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+ regmap_update_bits(rt5677->regmap, RT5677_PDM_OUT_CTRL,
+ RT5677_M_PDM1_R, 0);
+ break;
+
+ case SND_SOC_DAPM_PRE_PMD:
+ regmap_update_bits(rt5677->regmap, RT5677_PDM_OUT_CTRL,
+ RT5677_M_PDM1_R, RT5677_M_PDM1_R);
+ break;
+
+ default:
+ return 0;
+ }
+
+ return 0;
+}
+
+static int rt5677_pdm2_l_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+ regmap_update_bits(rt5677->regmap, RT5677_PDM_OUT_CTRL,
+ RT5677_M_PDM2_L, 0);
+ break;
+
+ case SND_SOC_DAPM_PRE_PMD:
+ regmap_update_bits(rt5677->regmap, RT5677_PDM_OUT_CTRL,
+ RT5677_M_PDM2_L, RT5677_M_PDM2_L);
+ break;
+
+ default:
+ return 0;
+ }
+
+ return 0;
+}
+
+static int rt5677_pdm2_r_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+ regmap_update_bits(rt5677->regmap, RT5677_PDM_OUT_CTRL,
+ RT5677_M_PDM2_R, 0);
+ break;
+
+ case SND_SOC_DAPM_PRE_PMD:
+ regmap_update_bits(rt5677->regmap, RT5677_PDM_OUT_CTRL,
+ RT5677_M_PDM2_R, RT5677_M_PDM2_R);
+ break;
+
+ default:
+ return 0;
+ }
+
+ return 0;
+}
+
+static int rt5677_post_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+{
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+ break;
+ default:
+ return 0;
+ }
+ return 0;
+}
+
+static int rt5677_pre_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+{
+ switch (event) {
+ case SND_SOC_DAPM_PRE_PMD:
+ break;
+ case SND_SOC_DAPM_PRE_PMU:
+ break;
+ default:
+ return 0;
+ }
+ return 0;
+}
+
+static int rt5677_set_pll1_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+ regmap_update_bits(rt5677->regmap, RT5677_PLL1_CTRL2, 0x2, 0x2);
+ regmap_update_bits(rt5677->regmap, RT5677_PLL1_CTRL2, 0x2, 0x0);
+ break;
+ default:
+ return 0;
+ }
+
+ return 0;
+}
+
+static int rt5677_set_pll2_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+ regmap_update_bits(rt5677->regmap, RT5677_PLL2_CTRL2, 0x2, 0x2);
+ regmap_update_bits(rt5677->regmap, RT5677_PLL2_CTRL2, 0x2, 0x0);
+ break;
+ default:
+ return 0;
+ }
+
+ return 0;
+}
+
+static const struct snd_soc_dapm_widget rt5677_dapm_widgets[] = {
+ SND_SOC_DAPM_SUPPLY_S("PLL1", 0, RT5677_PWR_ANLG2, RT5677_PWR_PLL1_BIT,
+ 0, rt5677_set_pll1_event, SND_SOC_DAPM_POST_PMU),
+ SND_SOC_DAPM_SUPPLY_S("PLL2", 0, RT5677_PWR_ANLG2, RT5677_PWR_PLL2_BIT,
+ 0, rt5677_set_pll2_event, SND_SOC_DAPM_POST_PMU),
+
+ /* Input Side */
+ /* Input Lines */
+ SND_SOC_DAPM_INPUT("DMIC L1"),
+ SND_SOC_DAPM_INPUT("DMIC R1"),
+ SND_SOC_DAPM_INPUT("DMIC L2"),
+ SND_SOC_DAPM_INPUT("DMIC R2"),
+ SND_SOC_DAPM_INPUT("DMIC L3"),
+ SND_SOC_DAPM_INPUT("DMIC R3"),
+ SND_SOC_DAPM_INPUT("DMIC L4"),
+ SND_SOC_DAPM_INPUT("DMIC R4"),
+
+ SND_SOC_DAPM_INPUT("IN1P"),
+ SND_SOC_DAPM_INPUT("IN1N"),
+ SND_SOC_DAPM_INPUT("IN2P"),
+ SND_SOC_DAPM_INPUT("IN2N"),
+
+ SND_SOC_DAPM_INPUT("Haptic Generator"),
+
+ SND_SOC_DAPM_PGA_E("DMIC1", SND_SOC_NOPM, 0, 0, NULL, 0,
+ rt5677_set_dmic1_event, SND_SOC_DAPM_PRE_PMU |
+ SND_SOC_DAPM_POST_PMD),
+ SND_SOC_DAPM_PGA_E("DMIC2", SND_SOC_NOPM, 0, 0, NULL, 0,
+ rt5677_set_dmic2_event, SND_SOC_DAPM_PRE_PMU |
+ SND_SOC_DAPM_POST_PMD),
+ SND_SOC_DAPM_PGA_E("DMIC3", SND_SOC_NOPM, 0, 0, NULL, 0,
+ rt5677_set_dmic3_event, SND_SOC_DAPM_PRE_PMU |
+ SND_SOC_DAPM_POST_PMD),
+ SND_SOC_DAPM_PGA_E("DMIC4", SND_SOC_NOPM, 0, 0, NULL, 0,
+ rt5677_set_dmic4_event, SND_SOC_DAPM_PRE_PMU |
+ SND_SOC_DAPM_POST_PMD),
+
+ SND_SOC_DAPM_SUPPLY("DMIC CLK", SND_SOC_NOPM, 0, 0,
+ set_dmic_clk, SND_SOC_DAPM_PRE_PMU),
+
+ /* Boost */
+ SND_SOC_DAPM_PGA_E("BST1", RT5677_PWR_ANLG2,
+ RT5677_PWR_BST1_BIT, 0, NULL, 0, rt5677_bst1_event,
+ SND_SOC_DAPM_PRE_PMD | SND_SOC_DAPM_POST_PMU),
+ SND_SOC_DAPM_PGA_E("BST2", RT5677_PWR_ANLG2,
+ RT5677_PWR_BST2_BIT, 0, NULL, 0, rt5677_bst2_event,
+ SND_SOC_DAPM_PRE_PMD | SND_SOC_DAPM_POST_PMU),
+
+ /* ADCs */
+ SND_SOC_DAPM_ADC("ADC 1", NULL, SND_SOC_NOPM,
+ 0, 0),
+ SND_SOC_DAPM_ADC("ADC 2", NULL, SND_SOC_NOPM,
+ 0, 0),
+ SND_SOC_DAPM_PGA("ADC 1_2", SND_SOC_NOPM, 0, 0, NULL, 0),
+
+ SND_SOC_DAPM_SUPPLY("ADC 1 power", RT5677_PWR_DIG1,
+ RT5677_PWR_ADC_L_BIT, 0, NULL, 0),
+ SND_SOC_DAPM_SUPPLY("ADC 2 power", RT5677_PWR_DIG1,
+ RT5677_PWR_ADC_R_BIT, 0, NULL, 0),
+ SND_SOC_DAPM_SUPPLY("ADC clock", SND_SOC_NOPM, 0, 0,
+ rt5677_adc_clk_event, SND_SOC_DAPM_POST_PMD |
+ SND_SOC_DAPM_POST_PMU),
+ SND_SOC_DAPM_SUPPLY("ADC1 clock", RT5677_PWR_DIG1,
+ RT5677_PWR_ADCFED1_BIT, 0, NULL, 0),
+ SND_SOC_DAPM_SUPPLY("ADC2 clock", RT5677_PWR_DIG1,
+ RT5677_PWR_ADCFED2_BIT, 0, NULL, 0),
+
+ /* ADC Mux */
+ SND_SOC_DAPM_MUX("Stereo1 DMIC Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_sto1_dmic_mux),
+ SND_SOC_DAPM_MUX("Stereo1 ADC1 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_sto1_adc1_mux),
+ SND_SOC_DAPM_MUX("Stereo1 ADC2 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_sto1_adc2_mux),
+ SND_SOC_DAPM_MUX("Stereo2 DMIC Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_sto2_dmic_mux),
+ SND_SOC_DAPM_MUX("Stereo2 ADC1 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_sto2_adc1_mux),
+ SND_SOC_DAPM_MUX("Stereo2 ADC2 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_sto2_adc2_mux),
+ SND_SOC_DAPM_MUX("Stereo2 ADC LR Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_sto2_adc_lr_mux),
+ SND_SOC_DAPM_MUX("Stereo3 DMIC Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_sto3_dmic_mux),
+ SND_SOC_DAPM_MUX("Stereo3 ADC1 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_sto3_adc1_mux),
+ SND_SOC_DAPM_MUX("Stereo3 ADC2 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_sto3_adc2_mux),
+ SND_SOC_DAPM_MUX("Stereo4 DMIC Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_sto4_dmic_mux),
+ SND_SOC_DAPM_MUX("Stereo4 ADC1 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_sto4_adc1_mux),
+ SND_SOC_DAPM_MUX("Stereo4 ADC2 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_sto4_adc2_mux),
+ SND_SOC_DAPM_MUX("Mono DMIC L Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_mono_dmic_l_mux),
+ SND_SOC_DAPM_MUX("Mono DMIC R Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_mono_dmic_r_mux),
+ SND_SOC_DAPM_MUX("Mono ADC2 L Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_mono_adc2_l_mux),
+ SND_SOC_DAPM_MUX("Mono ADC1 L Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_mono_adc1_l_mux),
+ SND_SOC_DAPM_MUX("Mono ADC1 R Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_mono_adc1_r_mux),
+ SND_SOC_DAPM_MUX("Mono ADC2 R Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_mono_adc2_r_mux),
+
+ /* ADC Mixer */
+ SND_SOC_DAPM_SUPPLY("adc stereo1 filter", RT5677_PWR_DIG2,
+ RT5677_PWR_ADC_S1F_BIT, 0, NULL, 0),
+ SND_SOC_DAPM_SUPPLY("adc stereo2 filter", RT5677_PWR_DIG2,
+ RT5677_PWR_ADC_S2F_BIT, 0, NULL, 0),
+ SND_SOC_DAPM_SUPPLY("adc stereo3 filter", RT5677_PWR_DIG2,
+ RT5677_PWR_ADC_S3F_BIT, 0, NULL, 0),
+ SND_SOC_DAPM_SUPPLY("adc stereo4 filter", RT5677_PWR_DIG2,
+ RT5677_PWR_ADC_S4F_BIT, 0, NULL, 0),
+ SND_SOC_DAPM_MIXER_E("Sto1 ADC MIXL", SND_SOC_NOPM, 0, 0,
+ rt5677_sto1_adc_l_mix, ARRAY_SIZE(rt5677_sto1_adc_l_mix),
+ rt5677_sto1_adcl_event, SND_SOC_DAPM_PRE_PMD |
+ SND_SOC_DAPM_POST_PMU),
+ SND_SOC_DAPM_MIXER_E("Sto1 ADC MIXR", SND_SOC_NOPM, 0, 0,
+ rt5677_sto1_adc_r_mix, ARRAY_SIZE(rt5677_sto1_adc_r_mix),
+ rt5677_sto1_adcr_event, SND_SOC_DAPM_PRE_PMD |
+ SND_SOC_DAPM_POST_PMU),
+ SND_SOC_DAPM_MIXER("Sto2 ADC MIXL", SND_SOC_NOPM, 0, 0,
+ rt5677_sto2_adc_l_mix, ARRAY_SIZE(rt5677_sto2_adc_l_mix)),
+ SND_SOC_DAPM_MIXER("Sto2 ADC MIXR", SND_SOC_NOPM, 0, 0,
+ rt5677_sto2_adc_r_mix, ARRAY_SIZE(rt5677_sto2_adc_r_mix)),
+ SND_SOC_DAPM_MIXER("Sto3 ADC MIXL", SND_SOC_NOPM, 0, 0,
+ rt5677_sto3_adc_l_mix, ARRAY_SIZE(rt5677_sto3_adc_l_mix)),
+ SND_SOC_DAPM_MIXER("Sto3 ADC MIXR", SND_SOC_NOPM, 0, 0,
+ rt5677_sto3_adc_r_mix, ARRAY_SIZE(rt5677_sto3_adc_r_mix)),
+
+ SND_SOC_DAPM_MIXER("Sto4 ADC MIXL", SND_SOC_NOPM, 0, 0,
+ rt5677_sto4_adc_l_mix, ARRAY_SIZE(rt5677_sto4_adc_l_mix)),
+ SND_SOC_DAPM_MIXER("Sto4 ADC MIXR", SND_SOC_NOPM, 0, 0,
+ rt5677_sto4_adc_r_mix, ARRAY_SIZE(rt5677_sto4_adc_r_mix)),
+ SND_SOC_DAPM_SUPPLY("adc mono left filter", RT5677_PWR_DIG2,
+ RT5677_PWR_ADC_MF_L_BIT, 0, NULL, 0),
+ SND_SOC_DAPM_MIXER("Mono ADC MIXL", SND_SOC_NOPM, 0, 0,
+ rt5677_mono_adc_l_mix, ARRAY_SIZE(rt5677_mono_adc_l_mix)),
+ SND_SOC_DAPM_SUPPLY("adc mono right filter", RT5677_PWR_DIG2,
+ RT5677_PWR_ADC_MF_R_BIT, 0, NULL, 0),
+ SND_SOC_DAPM_MIXER("Mono ADC MIXR", SND_SOC_NOPM, 0, 0,
+ rt5677_mono_adc_r_mix, ARRAY_SIZE(rt5677_mono_adc_r_mix)),
+
+ SND_SOC_DAPM_ADC_E("Mono ADC MIXL ADC", NULL, RT5677_MONO_ADC_DIG_VOL,
+ RT5677_L_MUTE_SFT, 1, rt5677_mono_adcl_event,
+ SND_SOC_DAPM_PRE_PMU),
+ SND_SOC_DAPM_ADC_E("Mono ADC MIXR ADC", NULL, RT5677_MONO_ADC_DIG_VOL,
+ RT5677_R_MUTE_SFT, 1, rt5677_mono_adcr_event,
+ SND_SOC_DAPM_PRE_PMU),
+
+ /* ADC PGA */
+ SND_SOC_DAPM_PGA("Stereo1 ADC MIXL", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("Stereo1 ADC MIXR", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("Stereo1 ADC MIX", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("Stereo2 ADC MIXL", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("Stereo2 ADC MIXR", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("Stereo2 ADC MIX", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("Stereo3 ADC MIXL", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("Stereo3 ADC MIXR", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("Stereo3 ADC MIX", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("Stereo4 ADC MIXL", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("Stereo4 ADC MIXR", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("Stereo4 ADC MIX", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("Sto2 ADC LR MIX", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("Mono ADC MIX", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF1_ADC1", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF1_ADC2", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF1_ADC3", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF1_ADC4", SND_SOC_NOPM, 0, 0, NULL, 0),
+
+ /* DSP */
+ SND_SOC_DAPM_MUX("IB9 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_ib9_src_mux),
+ SND_SOC_DAPM_MUX("IB8 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_ib8_src_mux),
+ SND_SOC_DAPM_MUX("IB7 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_ib7_src_mux),
+ SND_SOC_DAPM_MUX("IB6 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_ib6_src_mux),
+ SND_SOC_DAPM_MUX("IB45 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_ib45_src_mux),
+ SND_SOC_DAPM_MUX("IB23 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_ib23_src_mux),
+ SND_SOC_DAPM_MUX("IB01 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_ib01_src_mux),
+ SND_SOC_DAPM_MUX("IB45 Bypass Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_ib45_bypass_src_mux),
+ SND_SOC_DAPM_MUX("IB23 Bypass Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_ib23_bypass_src_mux),
+ SND_SOC_DAPM_MUX("IB01 Bypass Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_ib01_bypass_src_mux),
+ SND_SOC_DAPM_MUX("OB23 Bypass Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_ob23_bypass_src_mux),
+ SND_SOC_DAPM_MUX("OB01 Bypass Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_ob01_bypass_src_mux),
+
+ SND_SOC_DAPM_PGA("OB45", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("OB67", SND_SOC_NOPM, 0, 0, NULL, 0),
+
+ SND_SOC_DAPM_PGA("OutBound2", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("OutBound3", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("OutBound4", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("OutBound5", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("OutBound6", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("OutBound7", SND_SOC_NOPM, 0, 0, NULL, 0),
+
+ /* Digital Interface */
+ SND_SOC_DAPM_SUPPLY("I2S1", RT5677_PWR_DIG1,
+ RT5677_PWR_I2S1_BIT, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF1 DAC0", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF1 DAC1", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF1 DAC2", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF1 DAC3", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF1 DAC4", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF1 DAC5", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF1 DAC6", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF1 DAC7", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF1 DAC01", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF1 DAC23", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF1 DAC45", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF1 DAC67", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF1 ADC1", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF1 ADC2", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF1 ADC3", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF1 ADC4", SND_SOC_NOPM, 0, 0, NULL, 0),
+
+ SND_SOC_DAPM_SUPPLY("I2S2", RT5677_PWR_DIG1,
+ RT5677_PWR_I2S2_BIT, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF2 DAC0", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF2 DAC1", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF2 DAC2", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF2 DAC3", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF2 DAC4", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF2 DAC5", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF2 DAC6", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF2 DAC7", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF2 DAC01", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF2 DAC23", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF2 DAC45", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF2 DAC67", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF2 ADC1", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF2 ADC2", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF2 ADC3", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF2 ADC4", SND_SOC_NOPM, 0, 0, NULL, 0),
+
+ SND_SOC_DAPM_SUPPLY("I2S3", RT5677_PWR_DIG1,
+ RT5677_PWR_I2S3_BIT, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF3 DAC", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF3 DAC L", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF3 DAC R", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF3 ADC", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF3 ADC L", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF3 ADC R", SND_SOC_NOPM, 0, 0, NULL, 0),
+
+ SND_SOC_DAPM_SUPPLY("I2S4", RT5677_PWR_DIG1,
+ RT5677_PWR_I2S4_BIT, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF4 DAC", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF4 DAC L", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF4 DAC R", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF4 ADC", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF4 ADC L", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("IF4 ADC R", SND_SOC_NOPM, 0, 0, NULL, 0),
+
+ SND_SOC_DAPM_SUPPLY("SLB", RT5677_PWR_DIG1,
+ RT5677_PWR_SLB_BIT, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("SLB DAC0", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("SLB DAC1", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("SLB DAC2", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("SLB DAC3", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("SLB DAC4", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("SLB DAC5", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("SLB DAC6", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("SLB DAC7", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("SLB DAC01", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("SLB DAC23", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("SLB DAC45", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("SLB DAC67", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("SLB ADC1", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("SLB ADC2", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("SLB ADC3", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("SLB ADC4", SND_SOC_NOPM, 0, 0, NULL, 0),
+
+ /* Digital Interface Select */
+ SND_SOC_DAPM_MUX("IF1 ADC1 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_if1_adc1_mux),
+ SND_SOC_DAPM_MUX("IF1 ADC2 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_if1_adc2_mux),
+ SND_SOC_DAPM_MUX("IF1 ADC3 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_if1_adc3_mux),
+ SND_SOC_DAPM_MUX("IF1 ADC4 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_if1_adc4_mux),
+ SND_SOC_DAPM_MUX("IF2 ADC1 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_if2_adc1_mux),
+ SND_SOC_DAPM_MUX("IF2 ADC2 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_if2_adc2_mux),
+ SND_SOC_DAPM_MUX("IF2 ADC3 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_if2_adc3_mux),
+ SND_SOC_DAPM_MUX("IF2 ADC4 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_if2_adc4_mux),
+ SND_SOC_DAPM_MUX("IF3 ADC Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_if3_adc_mux),
+ SND_SOC_DAPM_MUX("IF4 ADC Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_if4_adc_mux),
+ SND_SOC_DAPM_MUX("SLB ADC1 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_slb_adc1_mux),
+ SND_SOC_DAPM_MUX("SLB ADC2 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_slb_adc2_mux),
+ SND_SOC_DAPM_MUX("SLB ADC3 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_slb_adc3_mux),
+ SND_SOC_DAPM_MUX("SLB ADC4 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_slb_adc4_mux),
+
+ /* Audio Interface */
+ SND_SOC_DAPM_AIF_IN("AIF1RX", "AIF1 Playback", 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("AIF1TX", "AIF1 Capture", 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("AIF2RX", "AIF2 Playback", 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("AIF2TX", "AIF2 Capture", 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("AIF3RX", "AIF3 Playback", 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("AIF3TX", "AIF3 Capture", 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("AIF4RX", "AIF4 Playback", 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("AIF4TX", "AIF4 Capture", 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_IN("SLBRX", "SLIMBus Playback", 0, SND_SOC_NOPM, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("SLBTX", "SLIMBus Capture", 0, SND_SOC_NOPM, 0, 0),
+
+ /* Sidetone Mux */
+ SND_SOC_DAPM_MUX("Sidetone Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_sidetone_mux),
+ /* VAD Mux */
+ SND_SOC_DAPM_MUX("VAD ADC Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_vad_src_mux),
+
+ /* Tensilica DSP */
+ SND_SOC_DAPM_PGA("Tensilica DSP", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_MIXER("OB01 MIX", SND_SOC_NOPM, 0, 0,
+ rt5677_ob_01_mix, ARRAY_SIZE(rt5677_ob_01_mix)),
+ SND_SOC_DAPM_MIXER("OB23 MIX", SND_SOC_NOPM, 0, 0,
+ rt5677_ob_23_mix, ARRAY_SIZE(rt5677_ob_23_mix)),
+ SND_SOC_DAPM_MIXER("OB4 MIX", SND_SOC_NOPM, 0, 0,
+ rt5677_ob_4_mix, ARRAY_SIZE(rt5677_ob_4_mix)),
+ SND_SOC_DAPM_MIXER("OB5 MIX", SND_SOC_NOPM, 0, 0,
+ rt5677_ob_5_mix, ARRAY_SIZE(rt5677_ob_5_mix)),
+ SND_SOC_DAPM_MIXER("OB6 MIX", SND_SOC_NOPM, 0, 0,
+ rt5677_ob_6_mix, ARRAY_SIZE(rt5677_ob_6_mix)),
+ SND_SOC_DAPM_MIXER("OB7 MIX", SND_SOC_NOPM, 0, 0,
+ rt5677_ob_7_mix, ARRAY_SIZE(rt5677_ob_7_mix)),
+
+ /* Output Side */
+ /* DAC mixer before sound effect */
+ SND_SOC_DAPM_MIXER("DAC1 MIXL", SND_SOC_NOPM, 0, 0,
+ rt5677_dac_l_mix, ARRAY_SIZE(rt5677_dac_l_mix)),
+ SND_SOC_DAPM_MIXER("DAC1 MIXR", SND_SOC_NOPM, 0, 0,
+ rt5677_dac_r_mix, ARRAY_SIZE(rt5677_dac_r_mix)),
+ SND_SOC_DAPM_PGA("DAC1 FS", SND_SOC_NOPM, 0, 0, NULL, 0),
+
+ /* DAC Mux */
+ SND_SOC_DAPM_MUX("DAC1 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_dac1_mux),
+ SND_SOC_DAPM_MUX("ADDA1 Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_adda1_mux),
+ SND_SOC_DAPM_MUX("DAC12 SRC Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_dac12_mux),
+ SND_SOC_DAPM_MUX("DAC3 SRC Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_dac3_mux),
+
+ /* DAC2 channel Mux */
+ SND_SOC_DAPM_MUX("DAC2 L Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_dac2_l_mux),
+ SND_SOC_DAPM_MUX("DAC2 R Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_dac2_r_mux),
+
+ /* DAC3 channel Mux */
+ SND_SOC_DAPM_MUX("DAC3 L Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_dac3_l_mux),
+ SND_SOC_DAPM_MUX("DAC3 R Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_dac3_r_mux),
+
+ /* DAC4 channel Mux */
+ SND_SOC_DAPM_MUX("DAC4 L Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_dac4_l_mux),
+ SND_SOC_DAPM_MUX("DAC4 R Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_dac4_r_mux),
+
+ /* DAC Mixer */
+ SND_SOC_DAPM_SUPPLY("dac stereo1 filter", RT5677_PWR_DIG2,
+ RT5677_PWR_DAC_S1F_BIT, 0, NULL, 0),
+ SND_SOC_DAPM_SUPPLY("dac mono left filter", RT5677_PWR_DIG2,
+ RT5677_PWR_DAC_M2F_L_BIT, 0, NULL, 0),
+ SND_SOC_DAPM_SUPPLY("dac mono right filter", RT5677_PWR_DIG2,
+ RT5677_PWR_DAC_M2F_R_BIT, 0, NULL, 0),
+
+ SND_SOC_DAPM_MIXER("Stereo DAC MIXL", SND_SOC_NOPM, 0, 0,
+ rt5677_sto1_dac_l_mix, ARRAY_SIZE(rt5677_sto1_dac_l_mix)),
+ SND_SOC_DAPM_MIXER("Stereo DAC MIXR", SND_SOC_NOPM, 0, 0,
+ rt5677_sto1_dac_r_mix, ARRAY_SIZE(rt5677_sto1_dac_r_mix)),
+ SND_SOC_DAPM_MIXER("Mono DAC MIXL", SND_SOC_NOPM, 0, 0,
+ rt5677_mono_dac_l_mix, ARRAY_SIZE(rt5677_mono_dac_l_mix)),
+ SND_SOC_DAPM_MIXER("Mono DAC MIXR", SND_SOC_NOPM, 0, 0,
+ rt5677_mono_dac_r_mix, ARRAY_SIZE(rt5677_mono_dac_r_mix)),
+ SND_SOC_DAPM_MIXER("DD1 MIXL", SND_SOC_NOPM, 0, 0,
+ rt5677_dd1_l_mix, ARRAY_SIZE(rt5677_dd1_l_mix)),
+ SND_SOC_DAPM_MIXER("DD1 MIXR", SND_SOC_NOPM, 0, 0,
+ rt5677_dd1_r_mix, ARRAY_SIZE(rt5677_dd1_r_mix)),
+ SND_SOC_DAPM_MIXER("DD2 MIXL", SND_SOC_NOPM, 0, 0,
+ rt5677_dd2_l_mix, ARRAY_SIZE(rt5677_dd2_l_mix)),
+ SND_SOC_DAPM_MIXER("DD2 MIXR", SND_SOC_NOPM, 0, 0,
+ rt5677_dd2_r_mix, ARRAY_SIZE(rt5677_dd2_r_mix)),
+ SND_SOC_DAPM_PGA("Stereo DAC MIX", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("Mono DAC MIX", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("DD1 MIX", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("DD2 MIX", SND_SOC_NOPM, 0, 0, NULL, 0),
+
+ /* DACs */
+ SND_SOC_DAPM_DAC("DAC 1", NULL, RT5677_PWR_DIG1,
+ RT5677_PWR_DAC1_BIT, 0),
+ SND_SOC_DAPM_DAC("DAC 2", NULL, RT5677_PWR_DIG1,
+ RT5677_PWR_DAC2_BIT, 0),
+ SND_SOC_DAPM_DAC("DAC 3", NULL, RT5677_PWR_DIG1,
+ RT5677_PWR_DAC3_BIT, 0),
+
+ /* PDM */
+ SND_SOC_DAPM_SUPPLY("PDM1 Power", RT5677_PWR_DIG2,
+ RT5677_PWR_PDM1_BIT, 0, NULL, 0),
+ SND_SOC_DAPM_SUPPLY("PDM2 Power", RT5677_PWR_DIG2,
+ RT5677_PWR_PDM2_BIT, 0, NULL, 0),
+
+ SND_SOC_DAPM_MUX_E("PDM1 L Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_pdm1_l_mux, rt5677_pdm1_l_event,
+ SND_SOC_DAPM_PRE_PMD | SND_SOC_DAPM_POST_PMU),
+ SND_SOC_DAPM_MUX_E("PDM1 R Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_pdm1_r_mux, rt5677_pdm1_r_event,
+ SND_SOC_DAPM_PRE_PMD | SND_SOC_DAPM_POST_PMU),
+ SND_SOC_DAPM_MUX_E("PDM2 L Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_pdm2_l_mux, rt5677_pdm2_l_event,
+ SND_SOC_DAPM_PRE_PMD | SND_SOC_DAPM_POST_PMU),
+ SND_SOC_DAPM_MUX_E("PDM2 R Mux", SND_SOC_NOPM, 0, 0,
+ &rt5677_pdm2_r_mux, rt5677_pdm2_r_event,
+ SND_SOC_DAPM_PRE_PMD | SND_SOC_DAPM_POST_PMU),
+
+ SND_SOC_DAPM_PGA_S("LOUT1 amp", 1, SND_SOC_NOPM, 0, 0,
+ rt5677_lout1_event, SND_SOC_DAPM_PRE_PMD |
+ SND_SOC_DAPM_POST_PMU),
+ SND_SOC_DAPM_PGA_S("LOUT2 amp", 1, SND_SOC_NOPM, 0, 0,
+ rt5677_lout2_event, SND_SOC_DAPM_PRE_PMD |
+ SND_SOC_DAPM_POST_PMU),
+ SND_SOC_DAPM_PGA_S("LOUT3 amp", 1, SND_SOC_NOPM, 0, 0,
+ rt5677_lout3_event, SND_SOC_DAPM_PRE_PMD |
+ SND_SOC_DAPM_POST_PMU),
+
+ /* Output Lines */
+ SND_SOC_DAPM_OUTPUT("LOUT1"),
+ SND_SOC_DAPM_OUTPUT("LOUT2"),
+ SND_SOC_DAPM_OUTPUT("LOUT3"),
+ SND_SOC_DAPM_OUTPUT("PDM1L"),
+ SND_SOC_DAPM_OUTPUT("PDM1R"),
+ SND_SOC_DAPM_OUTPUT("PDM2L"),
+ SND_SOC_DAPM_OUTPUT("PDM2R"),
+
+ SND_SOC_DAPM_POST("DAPM_POST", rt5677_post_event),
+ SND_SOC_DAPM_PRE("DAPM_PRE", rt5677_pre_event),
+};
+
+static const struct snd_soc_dapm_route rt5677_dapm_routes[] = {
+ { "DMIC1", NULL, "DMIC L1" },
+ { "DMIC1", NULL, "DMIC R1" },
+ { "DMIC2", NULL, "DMIC L2" },
+ { "DMIC2", NULL, "DMIC R2" },
+ { "DMIC3", NULL, "DMIC L3" },
+ { "DMIC3", NULL, "DMIC R3" },
+ { "DMIC4", NULL, "DMIC L4" },
+ { "DMIC4", NULL, "DMIC R4" },
+
+ { "DMIC L1", NULL, "DMIC CLK" },
+ { "DMIC R1", NULL, "DMIC CLK" },
+ { "DMIC L2", NULL, "DMIC CLK" },
+ { "DMIC R2", NULL, "DMIC CLK" },
+ { "DMIC L3", NULL, "DMIC CLK" },
+ { "DMIC R3", NULL, "DMIC CLK" },
+ { "DMIC L4", NULL, "DMIC CLK" },
+ { "DMIC R4", NULL, "DMIC CLK" },
+
+ { "BST1", NULL, "IN1P" },
+ { "BST1", NULL, "IN1N" },
+ { "BST2", NULL, "IN2P" },
+ { "BST2", NULL, "IN2N" },
+
+ { "ADC 1", NULL, "BST1" },
+ { "ADC 1", NULL, "ADC 1 power" },
+ { "ADC 1", NULL, "ADC clock" },
+ { "ADC 1", NULL, "ADC1 clock" },
+ { "ADC 2", NULL, "BST2" },
+ { "ADC 2", NULL, "ADC 2 power" },
+ { "ADC 2", NULL, "ADC clock" },
+ { "ADC 2", NULL, "ADC2 clock" },
+
+ { "Stereo1 DMIC Mux", "DMIC1", "DMIC1" },
+ { "Stereo1 DMIC Mux", "DMIC2", "DMIC2" },
+ { "Stereo1 DMIC Mux", "DMIC3", "DMIC3" },
+ { "Stereo1 DMIC Mux", "DMIC4", "DMIC4" },
+
+ { "Stereo2 DMIC Mux", "DMIC1", "DMIC1" },
+ { "Stereo2 DMIC Mux", "DMIC2", "DMIC2" },
+ { "Stereo2 DMIC Mux", "DMIC3", "DMIC3" },
+ { "Stereo2 DMIC Mux", "DMIC4", "DMIC4" },
+
+ { "Stereo3 DMIC Mux", "DMIC1", "DMIC1" },
+ { "Stereo3 DMIC Mux", "DMIC2", "DMIC2" },
+ { "Stereo3 DMIC Mux", "DMIC3", "DMIC3" },
+ { "Stereo3 DMIC Mux", "DMIC4", "DMIC4" },
+
+ { "Stereo4 DMIC Mux", "DMIC1", "DMIC1" },
+ { "Stereo4 DMIC Mux", "DMIC2", "DMIC2" },
+ { "Stereo4 DMIC Mux", "DMIC3", "DMIC3" },
+ { "Stereo4 DMIC Mux", "DMIC4", "DMIC4" },
+
+ { "Mono DMIC L Mux", "DMIC1", "DMIC1" },
+ { "Mono DMIC L Mux", "DMIC2", "DMIC2" },
+ { "Mono DMIC L Mux", "DMIC3", "DMIC3" },
+ { "Mono DMIC L Mux", "DMIC4", "DMIC4" },
+
+ { "Mono DMIC R Mux", "DMIC1", "DMIC1" },
+ { "Mono DMIC R Mux", "DMIC2", "DMIC2" },
+ { "Mono DMIC R Mux", "DMIC3", "DMIC3" },
+ { "Mono DMIC R Mux", "DMIC4", "DMIC4" },
+
+ { "ADC 1_2", NULL, "ADC 1" },
+ { "ADC 1_2", NULL, "ADC 2" },
+
+ { "Stereo1 ADC1 Mux", "DD MIX1", "DD1 MIX" },
+ { "Stereo1 ADC1 Mux", "ADC1/2", "ADC 1_2" },
+ { "Stereo1 ADC1 Mux", "Stereo DAC MIX", "Stereo DAC MIX" },
+
+ { "Stereo1 ADC2 Mux", "DD MIX1", "DD1 MIX" },
+ { "Stereo1 ADC2 Mux", "DMIC", "Stereo1 DMIC Mux" },
+ { "Stereo1 ADC2 Mux", "Stereo DAC MIX", "Stereo DAC MIX" },
+
+ { "Stereo2 ADC1 Mux", "DD MIX1", "DD1 MIX" },
+ { "Stereo2 ADC1 Mux", "ADC1/2", "ADC 1_2" },
+ { "Stereo2 ADC1 Mux", "Stereo DAC MIX", "Stereo DAC MIX" },
+
+ { "Stereo2 ADC2 Mux", "DD MIX1", "DD1 MIX" },
+ { "Stereo2 ADC2 Mux", "DMIC", "Stereo2 DMIC Mux" },
+ { "Stereo2 ADC2 Mux", "Stereo DAC MIX", "Stereo DAC MIX" },
+
+ { "Stereo3 ADC1 Mux", "DD MIX1", "DD1 MIX" },
+ { "Stereo3 ADC1 Mux", "ADC1/2", "ADC 1_2" },
+ { "Stereo3 ADC1 Mux", "Stereo DAC MIX", "Stereo DAC MIX" },
+
+ { "Stereo3 ADC2 Mux", "DD MIX1", "DD1 MIX" },
+ { "Stereo3 ADC2 Mux", "DMIC", "Stereo3 DMIC Mux" },
+ { "Stereo3 ADC2 Mux", "Stereo DAC MIX", "Stereo DAC MIX" },
+
+ { "Stereo4 ADC1 Mux", "DD MIX1", "DD1 MIX" },
+ { "Stereo4 ADC1 Mux", "ADC1/2", "ADC 1_2" },
+ { "Stereo4 ADC1 Mux", "DD MIX2", "DD2 MIX" },
+
+ { "Stereo4 ADC2 Mux", "DD MIX1", "DD1 MIX" },
+ { "Stereo4 ADC2 Mux", "DMIC", "Stereo4 DMIC Mux" },
+ { "Stereo4 ADC2 Mux", "DD MIX2", "DD2 MIX" },
+
+ { "Mono ADC2 L Mux", "DD MIX1L", "DD1 MIXL" },
+ { "Mono ADC2 L Mux", "DMIC", "Mono DMIC L Mux" },
+ { "Mono ADC2 L Mux", "MONO DAC MIXL", "Mono DAC MIXL" },
+
+ { "Mono ADC1 L Mux", "DD MIX1L", "DD1 MIXL" },
+ { "Mono ADC1 L Mux", "ADC1", "ADC 1" },
+ { "Mono ADC1 L Mux", "MONO DAC MIXL", "Mono DAC MIXL" },
+
+ { "Mono ADC1 R Mux", "DD MIX1R", "DD1 MIXR" },
+ { "Mono ADC1 R Mux", "ADC2", "ADC 2" },
+ { "Mono ADC1 R Mux", "MONO DAC MIXR", "Mono DAC MIXR" },
+
+ { "Mono ADC2 R Mux", "DD MIX1R", "DD1 MIXR" },
+ { "Mono ADC2 R Mux", "DMIC", "Mono DMIC R Mux" },
+ { "Mono ADC2 R Mux", "MONO DAC MIXR", "Mono DAC MIXR" },
+
+ { "Sto1 ADC MIXL", "ADC1 Switch", "Stereo1 ADC1 Mux" },
+ { "Sto1 ADC MIXL", "ADC2 Switch", "Stereo1 ADC2 Mux" },
+ { "Sto1 ADC MIXR", "ADC1 Switch", "Stereo1 ADC1 Mux" },
+ { "Sto1 ADC MIXR", "ADC2 Switch", "Stereo1 ADC2 Mux" },
+
+ { "Stereo1 ADC MIXL", NULL, "Sto1 ADC MIXL" },
+ { "Stereo1 ADC MIXL", NULL, "adc stereo1 filter" },
+ { "adc stereo1 filter", NULL, "PLL1", check_sysclk1_source },
+
+ { "Stereo1 ADC MIXR", NULL, "Sto1 ADC MIXR" },
+ { "Stereo1 ADC MIXR", NULL, "adc stereo1 filter" },
+ { "adc stereo1 filter", NULL, "PLL1", check_sysclk1_source },
+
+ { "Stereo1 ADC MIX", NULL, "Stereo1 ADC MIXL" },
+ { "Stereo1 ADC MIX", NULL, "Stereo1 ADC MIXR" },
+
+ { "Sto2 ADC MIXL", "ADC1 Switch", "Stereo2 ADC1 Mux" },
+ { "Sto2 ADC MIXL", "ADC2 Switch", "Stereo2 ADC2 Mux" },
+ { "Sto2 ADC MIXR", "ADC1 Switch", "Stereo2 ADC1 Mux" },
+ { "Sto2 ADC MIXR", "ADC2 Switch", "Stereo2 ADC2 Mux" },
+
+ { "Sto2 ADC LR MIX", NULL, "Sto2 ADC MIXL" },
+ { "Sto2 ADC LR MIX", NULL, "Sto2 ADC MIXR" },
+
+ { "Stereo2 ADC LR Mux", "L", "Sto2 ADC MIXL" },
+ { "Stereo2 ADC LR Mux", "LR", "Sto2 ADC LR MIX" },
+
+ { "Stereo2 ADC MIXL", NULL, "Stereo2 ADC LR Mux" },
+ { "Stereo2 ADC MIXL", NULL, "adc stereo2 filter" },
+ { "adc stereo2 filter", NULL, "PLL1", check_sysclk1_source },
+
+ { "Stereo2 ADC MIXR", NULL, "Sto2 ADC MIXR" },
+ { "Stereo2 ADC MIXR", NULL, "adc stereo2 filter" },
+ { "adc stereo2 filter", NULL, "PLL1", check_sysclk1_source },
+
+ { "Stereo2 ADC MIX", NULL, "Stereo2 ADC MIXL" },
+ { "Stereo2 ADC MIX", NULL, "Stereo2 ADC MIXR" },
+
+ { "Sto3 ADC MIXL", "ADC1 Switch", "Stereo3 ADC1 Mux" },
+ { "Sto3 ADC MIXL", "ADC2 Switch", "Stereo3 ADC2 Mux" },
+ { "Sto3 ADC MIXR", "ADC1 Switch", "Stereo3 ADC1 Mux" },
+ { "Sto3 ADC MIXR", "ADC2 Switch", "Stereo3 ADC2 Mux" },
+
+ { "Stereo3 ADC MIXL", NULL, "Sto3 ADC MIXL" },
+ { "Stereo3 ADC MIXL", NULL, "adc stereo3 filter" },
+ { "adc stereo3 filter", NULL, "PLL1", check_sysclk1_source },
+
+ { "Stereo3 ADC MIXR", NULL, "Sto3 ADC MIXR" },
+ { "Stereo3 ADC MIXR", NULL, "adc stereo3 filter" },
+ { "adc stereo3 filter", NULL, "PLL1", check_sysclk1_source },
+
+ { "Stereo3 ADC MIX", NULL, "Stereo3 ADC MIXL" },
+ { "Stereo3 ADC MIX", NULL, "Stereo3 ADC MIXR" },
+
+ { "Sto4 ADC MIXL", "ADC1 Switch", "Stereo4 ADC1 Mux" },
+ { "Sto4 ADC MIXL", "ADC2 Switch", "Stereo4 ADC2 Mux" },
+ { "Sto4 ADC MIXR", "ADC1 Switch", "Stereo4 ADC1 Mux" },
+ { "Sto4 ADC MIXR", "ADC2 Switch", "Stereo4 ADC2 Mux" },
+
+ { "Stereo4 ADC MIXL", NULL, "Sto4 ADC MIXL" },
+ { "Stereo4 ADC MIXL", NULL, "adc stereo4 filter" },
+ { "adc stereo4 filter", NULL, "PLL1", check_sysclk1_source },
+
+ { "Stereo4 ADC MIXR", NULL, "Sto4 ADC MIXR" },
+ { "Stereo4 ADC MIXR", NULL, "adc stereo4 filter" },
+ { "adc stereo4 filter", NULL, "PLL1", check_sysclk1_source },
+
+ { "Stereo4 ADC MIX", NULL, "Stereo4 ADC MIXL" },
+ { "Stereo4 ADC MIX", NULL, "Stereo4 ADC MIXR" },
+
+ { "Mono ADC MIXL", "ADC1 Switch", "Mono ADC1 L Mux" },
+ { "Mono ADC MIXL", "ADC2 Switch", "Mono ADC2 L Mux" },
+ { "Mono ADC MIXL", NULL, "adc mono left filter" },
+ { "adc mono left filter", NULL, "PLL1", check_sysclk1_source },
+
+ { "Mono ADC MIXR", "ADC1 Switch", "Mono ADC1 R Mux" },
+ { "Mono ADC MIXR", "ADC2 Switch", "Mono ADC2 R Mux" },
+ { "Mono ADC MIXR", NULL, "adc mono right filter" },
+ { "adc mono right filter", NULL, "PLL1", check_sysclk1_source },
+
+ { "Mono ADC MIXL ADC", NULL, "Mono ADC MIXL" },
+ { "Mono ADC MIXR ADC", NULL, "Mono ADC MIXR" },
+
+ { "Mono ADC MIX", NULL, "Mono ADC MIXL ADC" },
+ { "Mono ADC MIX", NULL, "Mono ADC MIXR ADC" },
+
+ { "VAD ADC Mux", "STO1 ADC MIX L", "Stereo1 ADC MIXL" },
+ { "VAD ADC Mux", "MONO ADC MIX L", "Mono ADC MIXL ADC" },
+ { "VAD ADC Mux", "MONO ADC MIX R", "Mono ADC MIXR ADC" },
+ { "VAD ADC Mux", "STO2 ADC MIX L", "Stereo2 ADC MIXL" },
+ { "VAD ADC Mux", "STO3 ADC MIX L", "Stereo3 ADC MIXL" },
+
+ { "IF1 ADC1 Mux", "STO1 ADC MIX", "Stereo1 ADC MIX" },
+ { "IF1 ADC1 Mux", "OB01", "OB01 Bypass Mux" },
+ { "IF1 ADC1 Mux", "VAD ADC", "VAD ADC Mux" },
+
+ { "IF1 ADC2 Mux", "STO2 ADC MIX", "Stereo2 ADC MIX" },
+ { "IF1 ADC2 Mux", "OB23", "OB23 Bypass Mux" },
+
+ { "IF1 ADC3 Mux", "STO3 ADC MIX", "Stereo3 ADC MIX" },
+ { "IF1 ADC3 Mux", "MONO ADC MIX", "Mono ADC MIX" },
+ { "IF1 ADC3 Mux", "OB45", "OB45" },
+
+ { "IF1 ADC4 Mux", "STO4 ADC MIX", "Stereo4 ADC MIX" },
+ { "IF1 ADC4 Mux", "OB67", "OB67" },
+ { "IF1 ADC4 Mux", "OB01", "OB01 Bypass Mux" },
+
+ { "AIF1TX", NULL, "I2S1" },
+ { "AIF1TX", NULL, "IF1 ADC1 Mux" },
+ { "AIF1TX", NULL, "IF1 ADC2 Mux" },
+ { "AIF1TX", NULL, "IF1 ADC3 Mux" },
+ { "AIF1TX", NULL, "IF1 ADC4 Mux" },
+
+ { "IF2 ADC1 Mux", "STO1 ADC MIX", "Stereo1 ADC MIX" },
+ { "IF2 ADC1 Mux", "OB01", "OB01 Bypass Mux" },
+ { "IF2 ADC1 Mux", "VAD ADC", "VAD ADC Mux" },
+
+ { "IF2 ADC2 Mux", "STO2 ADC MIX", "Stereo2 ADC MIX" },
+ { "IF2 ADC2 Mux", "OB23", "OB23 Bypass Mux" },
+
+ { "IF2 ADC3 Mux", "STO3 ADC MIX", "Stereo3 ADC MIX" },
+ { "IF2 ADC3 Mux", "MONO ADC MIX", "Mono ADC MIX" },
+ { "IF2 ADC3 Mux", "OB45", "OB45" },
+
+ { "IF2 ADC4 Mux", "STO4 ADC MIX", "Stereo4 ADC MIX" },
+ { "IF2 ADC4 Mux", "OB67", "OB67" },
+ { "IF2 ADC4 Mux", "OB01", "OB01 Bypass Mux" },
+
+ { "AIF2TX", NULL, "I2S2" },
+ { "AIF2TX", NULL, "IF2 ADC1 Mux" },
+ { "AIF2TX", NULL, "IF2 ADC2 Mux" },
+ { "AIF2TX", NULL, "IF2 ADC3 Mux" },
+ { "AIF2TX", NULL, "IF2 ADC4 Mux" },
+
+ { "IF3 ADC Mux", "STO1 ADC MIX", "Stereo1 ADC MIX" },
+ { "IF3 ADC Mux", "STO2 ADC MIX", "Stereo2 ADC MIX" },
+ { "IF3 ADC Mux", "STO3 ADC MIX", "Stereo3 ADC MIX" },
+ { "IF3 ADC Mux", "STO4 ADC MIX", "Stereo4 ADC MIX" },
+ { "IF3 ADC Mux", "MONO ADC MIX", "Mono ADC MIX" },
+ { "IF3 ADC Mux", "OB01", "OB01 Bypass Mux" },
+ { "IF3 ADC Mux", "OB23", "OB23 Bypass Mux" },
+ { "IF3 ADC Mux", "VAD ADC", "VAD ADC Mux" },
+
+ { "AIF3TX", NULL, "I2S3" },
+ { "AIF3TX", NULL, "IF3 ADC Mux" },
+
+ { "IF4 ADC Mux", "STO1 ADC MIX", "Stereo1 ADC MIX" },
+ { "IF4 ADC Mux", "STO2 ADC MIX", "Stereo2 ADC MIX" },
+ { "IF4 ADC Mux", "STO3 ADC MIX", "Stereo3 ADC MIX" },
+ { "IF4 ADC Mux", "STO4 ADC MIX", "Stereo4 ADC MIX" },
+ { "IF4 ADC Mux", "MONO ADC MIX", "Mono ADC MIX" },
+ { "IF4 ADC Mux", "OB01", "OB01 Bypass Mux" },
+ { "IF4 ADC Mux", "OB23", "OB23 Bypass Mux" },
+ { "IF4 ADC Mux", "VAD ADC", "VAD ADC Mux" },
+
+ { "AIF4TX", NULL, "I2S4" },
+ { "AIF4TX", NULL, "IF4 ADC Mux" },
+
+ { "SLB ADC1 Mux", "STO1 ADC MIX", "Stereo1 ADC MIX" },
+ { "SLB ADC1 Mux", "OB01", "OB01 Bypass Mux" },
+ { "SLB ADC1 Mux", "VAD ADC", "VAD ADC Mux" },
+
+ { "SLB ADC2 Mux", "STO2 ADC MIX", "Stereo2 ADC MIX" },
+ { "SLB ADC2 Mux", "OB23", "OB23 Bypass Mux" },
+
+ { "SLB ADC3 Mux", "STO3 ADC MIX", "Stereo3 ADC MIX" },
+ { "SLB ADC3 Mux", "MONO ADC MIX", "Mono ADC MIX" },
+ { "SLB ADC3 Mux", "OB45", "OB45" },
+
+ { "SLB ADC4 Mux", "STO4 ADC MIX", "Stereo4 ADC MIX" },
+ { "SLB ADC4 Mux", "OB67", "OB67" },
+ { "SLB ADC4 Mux", "OB01", "OB01 Bypass Mux" },
+
+ { "SLBTX", NULL, "SLB" },
+ { "SLBTX", NULL, "SLB ADC1 Mux" },
+ { "SLBTX", NULL, "SLB ADC2 Mux" },
+ { "SLBTX", NULL, "SLB ADC3 Mux" },
+ { "SLBTX", NULL, "SLB ADC4 Mux" },
+
+ { "IB01 Mux", "IF1 DAC 01", "IF1 DAC01" },
+ { "IB01 Mux", "IF2 DAC 01", "IF2 DAC01" },
+ { "IB01 Mux", "SLB DAC 01", "SLB DAC01" },
+ { "IB01 Mux", "STO1 ADC MIX", "Stereo1 ADC MIX" },
+ { "IB01 Mux", "VAD ADC/DAC1 FS", "DAC1 FS" },
+
+ { "IB01 Bypass Mux", "Bypass", "IB01 Mux" },
+ { "IB01 Bypass Mux", "Pass SRC", "IB01 Mux" },
+
+ { "IB23 Mux", "IF1 DAC 23", "IF1 DAC23" },
+ { "IB23 Mux", "IF2 DAC 23", "IF2 DAC23" },
+ { "IB23 Mux", "SLB DAC 23", "SLB DAC23" },
+ { "IB23 Mux", "STO2 ADC MIX", "Stereo2 ADC MIX" },
+ { "IB23 Mux", "DAC1 FS", "DAC1 FS" },
+ { "IB23 Mux", "IF4 DAC", "IF4 DAC" },
+
+ { "IB23 Bypass Mux", "Bypass", "IB23 Mux" },
+ { "IB23 Bypass Mux", "Pass SRC", "IB23 Mux" },
+
+ { "IB45 Mux", "IF1 DAC 45", "IF1 DAC45" },
+ { "IB45 Mux", "IF2 DAC 45", "IF2 DAC45" },
+ { "IB45 Mux", "SLB DAC 45", "SLB DAC45" },
+ { "IB45 Mux", "STO3 ADC MIX", "Stereo3 ADC MIX" },
+ { "IB45 Mux", "IF3 DAC", "IF3 DAC" },
+
+ { "IB45 Bypass Mux", "Bypass", "IB45 Mux" },
+ { "IB45 Bypass Mux", "Pass SRC", "IB45 Mux" },
+
+ { "IB6 Mux", "IF1 DAC 6", "IF1 DAC6" },
+ { "IB6 Mux", "IF2 DAC 6", "IF2 DAC6" },
+ { "IB6 Mux", "SLB DAC 6", "SLB DAC6" },
+ { "IB6 Mux", "STO4 ADC MIX L", "Stereo4 ADC MIXL" },
+ { "IB6 Mux", "IF4 DAC L", "IF4 DAC L" },
+ { "IB6 Mux", "STO1 ADC MIX L", "Stereo1 ADC MIXL" },
+ { "IB6 Mux", "STO2 ADC MIX L", "Stereo2 ADC MIXL" },
+ { "IB6 Mux", "STO3 ADC MIX L", "Stereo3 ADC MIXL" },
+
+ { "IB7 Mux", "IF1 DAC 7", "IF1 DAC7" },
+ { "IB7 Mux", "IF2 DAC 7", "IF2 DAC7" },
+ { "IB7 Mux", "SLB DAC 7", "SLB DAC7" },
+ { "IB7 Mux", "STO4 ADC MIX R", "Stereo4 ADC MIXR" },
+ { "IB7 Mux", "IF4 DAC R", "IF4 DAC R" },
+ { "IB7 Mux", "STO1 ADC MIX R", "Stereo1 ADC MIXR" },
+ { "IB7 Mux", "STO2 ADC MIX R", "Stereo2 ADC MIXR" },
+ { "IB7 Mux", "STO3 ADC MIX R", "Stereo3 ADC MIXR" },
+
+ { "IB8 Mux", "STO1 ADC MIX L", "Stereo1 ADC MIXL" },
+ { "IB8 Mux", "STO2 ADC MIX L", "Stereo2 ADC MIXL" },
+ { "IB8 Mux", "STO3 ADC MIX L", "Stereo3 ADC MIXL" },
+ { "IB8 Mux", "STO4 ADC MIX L", "Stereo4 ADC MIXL" },
+ { "IB8 Mux", "MONO ADC MIX L", "Mono ADC MIXL ADC" },
+ { "IB8 Mux", "DACL1 FS", "DAC1 MIXL" },
+
+ { "IB9 Mux", "STO1 ADC MIX R", "Stereo1 ADC MIXR" },
+ { "IB9 Mux", "STO2 ADC MIX R", "Stereo2 ADC MIXR" },
+ { "IB9 Mux", "STO3 ADC MIX R", "Stereo3 ADC MIXR" },
+ { "IB9 Mux", "STO4 ADC MIX R", "Stereo4 ADC MIXR" },
+ { "IB9 Mux", "MONO ADC MIX R", "Mono ADC MIXR ADC" },
+ { "IB9 Mux", "DACR1 FS", "DAC1 MIXR" },
+ { "IB9 Mux", "DAC1 FS", "DAC1 FS" },
+
+ { "OB01 MIX", "IB01 Switch", "IB01 Bypass Mux" },
+ { "OB01 MIX", "IB23 Switch", "IB23 Bypass Mux" },
+ { "OB01 MIX", "IB45 Switch", "IB45 Bypass Mux" },
+ { "OB01 MIX", "IB6 Switch", "IB6 Mux" },
+ { "OB01 MIX", "IB7 Switch", "IB7 Mux" },
+ { "OB01 MIX", "IB8 Switch", "IB8 Mux" },
+ { "OB01 MIX", "IB9 Switch", "IB9 Mux" },
+
+ { "OB23 MIX", "IB01 Switch", "IB01 Bypass Mux" },
+ { "OB23 MIX", "IB23 Switch", "IB23 Bypass Mux" },
+ { "OB23 MIX", "IB45 Switch", "IB45 Bypass Mux" },
+ { "OB23 MIX", "IB6 Switch", "IB6 Mux" },
+ { "OB23 MIX", "IB7 Switch", "IB7 Mux" },
+ { "OB23 MIX", "IB8 Switch", "IB8 Mux" },
+ { "OB23 MIX", "IB9 Switch", "IB9 Mux" },
+
+ { "OB4 MIX", "IB01 Switch", "IB01 Bypass Mux" },
+ { "OB4 MIX", "IB23 Switch", "IB23 Bypass Mux" },
+ { "OB4 MIX", "IB45 Switch", "IB45 Bypass Mux" },
+ { "OB4 MIX", "IB6 Switch", "IB6 Mux" },
+ { "OB4 MIX", "IB7 Switch", "IB7 Mux" },
+ { "OB4 MIX", "IB8 Switch", "IB8 Mux" },
+ { "OB4 MIX", "IB9 Switch", "IB9 Mux" },
+
+ { "OB5 MIX", "IB01 Switch", "IB01 Bypass Mux" },
+ { "OB5 MIX", "IB23 Switch", "IB23 Bypass Mux" },
+ { "OB5 MIX", "IB45 Switch", "IB45 Bypass Mux" },
+ { "OB5 MIX", "IB6 Switch", "IB6 Mux" },
+ { "OB5 MIX", "IB7 Switch", "IB7 Mux" },
+ { "OB5 MIX", "IB8 Switch", "IB8 Mux" },
+ { "OB5 MIX", "IB9 Switch", "IB9 Mux" },
+
+ { "OB6 MIX", "IB01 Switch", "IB01 Bypass Mux" },
+ { "OB6 MIX", "IB23 Switch", "IB23 Bypass Mux" },
+ { "OB6 MIX", "IB45 Switch", "IB45 Bypass Mux" },
+ { "OB6 MIX", "IB6 Switch", "IB6 Mux" },
+ { "OB6 MIX", "IB7 Switch", "IB7 Mux" },
+ { "OB6 MIX", "IB8 Switch", "IB8 Mux" },
+ { "OB6 MIX", "IB9 Switch", "IB9 Mux" },
+
+ { "OB7 MIX", "IB01 Switch", "IB01 Bypass Mux" },
+ { "OB7 MIX", "IB23 Switch", "IB23 Bypass Mux" },
+ { "OB7 MIX", "IB45 Switch", "IB45 Bypass Mux" },
+ { "OB7 MIX", "IB6 Switch", "IB6 Mux" },
+ { "OB7 MIX", "IB7 Switch", "IB7 Mux" },
+ { "OB7 MIX", "IB8 Switch", "IB8 Mux" },
+ { "OB7 MIX", "IB9 Switch", "IB9 Mux" },
+
+ { "OB01 Bypass Mux", "Bypass", "OB01 MIX" },
+ { "OB01 Bypass Mux", "Pass SRC", "OB01 MIX" },
+ { "OB23 Bypass Mux", "Bypass", "OB23 MIX" },
+ { "OB23 Bypass Mux", "Pass SRC", "OB23 MIX" },
+
+ { "OutBound2", NULL, "OB23 Bypass Mux" },
+ { "OutBound3", NULL, "OB23 Bypass Mux" },
+ { "OutBound4", NULL, "OB4 MIX" },
+ { "OutBound5", NULL, "OB5 MIX" },
+ { "OutBound6", NULL, "OB6 MIX" },
+ { "OutBound7", NULL, "OB7 MIX" },
+
+ { "OB45", NULL, "OutBound4" },
+ { "OB45", NULL, "OutBound5" },
+ { "OB67", NULL, "OutBound6" },
+ { "OB67", NULL, "OutBound7" },
+
+ { "IF1 DAC0", NULL, "AIF1RX" },
+ { "IF1 DAC1", NULL, "AIF1RX" },
+ { "IF1 DAC2", NULL, "AIF1RX" },
+ { "IF1 DAC3", NULL, "AIF1RX" },
+ { "IF1 DAC4", NULL, "AIF1RX" },
+ { "IF1 DAC5", NULL, "AIF1RX" },
+ { "IF1 DAC6", NULL, "AIF1RX" },
+ { "IF1 DAC7", NULL, "AIF1RX" },
+ { "IF1 DAC0", NULL, "I2S1" },
+ { "IF1 DAC1", NULL, "I2S1" },
+ { "IF1 DAC2", NULL, "I2S1" },
+ { "IF1 DAC3", NULL, "I2S1" },
+ { "IF1 DAC4", NULL, "I2S1" },
+ { "IF1 DAC5", NULL, "I2S1" },
+ { "IF1 DAC6", NULL, "I2S1" },
+ { "IF1 DAC7", NULL, "I2S1" },
+
+ { "IF1 DAC01", NULL, "IF1 DAC0" },
+ { "IF1 DAC01", NULL, "IF1 DAC1" },
+ { "IF1 DAC23", NULL, "IF1 DAC2" },
+ { "IF1 DAC23", NULL, "IF1 DAC3" },
+ { "IF1 DAC45", NULL, "IF1 DAC4" },
+ { "IF1 DAC45", NULL, "IF1 DAC5" },
+ { "IF1 DAC67", NULL, "IF1 DAC6" },
+ { "IF1 DAC67", NULL, "IF1 DAC7" },
+
+ { "IF2 DAC0", NULL, "AIF2RX" },
+ { "IF2 DAC1", NULL, "AIF2RX" },
+ { "IF2 DAC2", NULL, "AIF2RX" },
+ { "IF2 DAC3", NULL, "AIF2RX" },
+ { "IF2 DAC4", NULL, "AIF2RX" },
+ { "IF2 DAC5", NULL, "AIF2RX" },
+ { "IF2 DAC6", NULL, "AIF2RX" },
+ { "IF2 DAC7", NULL, "AIF2RX" },
+ { "IF2 DAC0", NULL, "I2S2" },
+ { "IF2 DAC1", NULL, "I2S2" },
+ { "IF2 DAC2", NULL, "I2S2" },
+ { "IF2 DAC3", NULL, "I2S2" },
+ { "IF2 DAC4", NULL, "I2S2" },
+ { "IF2 DAC5", NULL, "I2S2" },
+ { "IF2 DAC6", NULL, "I2S2" },
+ { "IF2 DAC7", NULL, "I2S2" },
+
+ { "IF2 DAC01", NULL, "IF2 DAC0" },
+ { "IF2 DAC01", NULL, "IF2 DAC1" },
+ { "IF2 DAC23", NULL, "IF2 DAC2" },
+ { "IF2 DAC23", NULL, "IF2 DAC3" },
+ { "IF2 DAC45", NULL, "IF2 DAC4" },
+ { "IF2 DAC45", NULL, "IF2 DAC5" },
+ { "IF2 DAC67", NULL, "IF2 DAC6" },
+ { "IF2 DAC67", NULL, "IF2 DAC7" },
+
+ { "IF3 DAC", NULL, "AIF3RX" },
+ { "IF3 DAC", NULL, "I2S3" },
+
+ { "IF4 DAC", NULL, "AIF4RX" },
+ { "IF4 DAC", NULL, "I2S4" },
+
+ { "IF3 DAC L", NULL, "IF3 DAC" },
+ { "IF3 DAC R", NULL, "IF3 DAC" },
+
+ { "IF4 DAC L", NULL, "IF4 DAC" },
+ { "IF4 DAC R", NULL, "IF4 DAC" },
+
+ { "SLB DAC0", NULL, "SLBRX" },
+ { "SLB DAC1", NULL, "SLBRX" },
+ { "SLB DAC2", NULL, "SLBRX" },
+ { "SLB DAC3", NULL, "SLBRX" },
+ { "SLB DAC4", NULL, "SLBRX" },
+ { "SLB DAC5", NULL, "SLBRX" },
+ { "SLB DAC6", NULL, "SLBRX" },
+ { "SLB DAC7", NULL, "SLBRX" },
+ { "SLB DAC0", NULL, "SLB" },
+ { "SLB DAC1", NULL, "SLB" },
+ { "SLB DAC2", NULL, "SLB" },
+ { "SLB DAC3", NULL, "SLB" },
+ { "SLB DAC4", NULL, "SLB" },
+ { "SLB DAC5", NULL, "SLB" },
+ { "SLB DAC6", NULL, "SLB" },
+ { "SLB DAC7", NULL, "SLB" },
+
+ { "SLB DAC01", NULL, "SLB DAC0" },
+ { "SLB DAC01", NULL, "SLB DAC1" },
+ { "SLB DAC23", NULL, "SLB DAC2" },
+ { "SLB DAC23", NULL, "SLB DAC3" },
+ { "SLB DAC45", NULL, "SLB DAC4" },
+ { "SLB DAC45", NULL, "SLB DAC5" },
+ { "SLB DAC67", NULL, "SLB DAC6" },
+ { "SLB DAC67", NULL, "SLB DAC7" },
+
+ { "ADDA1 Mux", "STO1 ADC MIX", "Stereo1 ADC MIX" },
+ { "ADDA1 Mux", "STO2 ADC MIX", "Stereo2 ADC MIX" },
+ { "ADDA1 Mux", "OB 67", "OB67" },
+
+ { "DAC1 Mux", "IF1 DAC 01", "IF1 DAC01" },
+ { "DAC1 Mux", "IF2 DAC 01", "IF2 DAC01" },
+ { "DAC1 Mux", "IF3 DAC LR", "IF3 DAC" },
+ { "DAC1 Mux", "IF4 DAC LR", "IF4 DAC" },
+ { "DAC1 Mux", "SLB DAC 01", "SLB DAC01" },
+ { "DAC1 Mux", "OB 01", "OB01 Bypass Mux" },
+
+ { "DAC1 MIXL", "Stereo ADC Switch", "ADDA1 Mux" },
+ { "DAC1 MIXL", "DAC1 Switch", "DAC1 Mux" },
+ { "DAC1 MIXL", NULL, "dac stereo1 filter" },
+ { "DAC1 MIXR", "Stereo ADC Switch", "ADDA1 Mux" },
+ { "DAC1 MIXR", "DAC1 Switch", "DAC1 Mux" },
+ { "DAC1 MIXR", NULL, "dac stereo1 filter" },
+
+ { "DAC1 FS", NULL, "DAC1 MIXL" },
+ { "DAC1 FS", NULL, "DAC1 MIXR" },
+
+ { "DAC2 L Mux", "IF1 DAC 2", "IF1 DAC2" },
+ { "DAC2 L Mux", "IF2 DAC 2", "IF2 DAC2" },
+ { "DAC2 L Mux", "IF3 DAC L", "IF3 DAC L" },
+ { "DAC2 L Mux", "IF4 DAC L", "IF4 DAC L" },
+ { "DAC2 L Mux", "SLB DAC 2", "SLB DAC2" },
+ { "DAC2 L Mux", "OB 2", "OutBound2" },
+
+ { "DAC2 R Mux", "IF1 DAC 3", "IF1 DAC3" },
+ { "DAC2 R Mux", "IF2 DAC 3", "IF2 DAC3" },
+ { "DAC2 R Mux", "IF3 DAC R", "IF3 DAC R" },
+ { "DAC2 R Mux", "IF4 DAC R", "IF4 DAC R" },
+ { "DAC2 R Mux", "SLB DAC 3", "SLB DAC3" },
+ { "DAC2 R Mux", "OB 3", "OutBound3" },
+ { "DAC2 R Mux", "Haptic Generator", "Haptic Generator" },
+ { "DAC2 R Mux", "VAD ADC", "VAD ADC Mux" },
+
+ { "DAC3 L Mux", "IF1 DAC 4", "IF1 DAC4" },
+ { "DAC3 L Mux", "IF2 DAC 4", "IF2 DAC4" },
+ { "DAC3 L Mux", "IF3 DAC L", "IF3 DAC L" },
+ { "DAC3 L Mux", "IF4 DAC L", "IF4 DAC L" },
+ { "DAC3 L Mux", "SLB DAC 4", "SLB DAC4" },
+ { "DAC3 L Mux", "OB 4", "OutBound4" },
+
+ { "DAC3 R Mux", "IF1 DAC 5", "IF1 DAC5" },
+ { "DAC3 R Mux", "IF2 DAC 5", "IF2 DAC5" },
+ { "DAC3 R Mux", "IF3 DAC R", "IF3 DAC R" },
+ { "DAC3 R Mux", "IF4 DAC R", "IF4 DAC R" },
+ { "DAC3 R Mux", "SLB DAC 5", "SLB DAC5" },
+ { "DAC3 R Mux", "OB 5", "OutBound5" },
+
+ { "DAC4 L Mux", "IF1 DAC 6", "IF1 DAC6" },
+ { "DAC4 L Mux", "IF2 DAC 6", "IF2 DAC6" },
+ { "DAC4 L Mux", "IF3 DAC L", "IF3 DAC L" },
+ { "DAC4 L Mux", "IF4 DAC L", "IF4 DAC L" },
+ { "DAC4 L Mux", "SLB DAC 6", "SLB DAC6" },
+ { "DAC4 L Mux", "OB 6", "OutBound6" },
+
+ { "DAC4 R Mux", "IF1 DAC 7", "IF1 DAC7" },
+ { "DAC4 R Mux", "IF2 DAC 7", "IF2 DAC7" },
+ { "DAC4 R Mux", "IF3 DAC R", "IF3 DAC R" },
+ { "DAC4 R Mux", "IF4 DAC R", "IF4 DAC R" },
+ { "DAC4 R Mux", "SLB DAC 7", "SLB DAC7" },
+ { "DAC4 R Mux", "OB 7", "OutBound7" },
+
+ { "Sidetone Mux", "DMIC1 L", "DMIC L1" },
+ { "Sidetone Mux", "DMIC2 L", "DMIC L2" },
+ { "Sidetone Mux", "DMIC3 L", "DMIC L3" },
+ { "Sidetone Mux", "DMIC4 L", "DMIC L4" },
+ { "Sidetone Mux", "ADC1", "ADC 1" },
+ { "Sidetone Mux", "ADC2", "ADC 2" },
+
+ { "Stereo DAC MIXL", "ST L Switch", "Sidetone Mux" },
+ { "Stereo DAC MIXL", "DAC1 L Switch", "DAC1 MIXL" },
+ { "Stereo DAC MIXL", "DAC2 L Switch", "DAC2 L Mux" },
+ { "Stereo DAC MIXL", "DAC1 R Switch", "DAC1 MIXR" },
+ { "Stereo DAC MIXL", NULL, "dac stereo1 filter" },
+ { "Stereo DAC MIXR", "ST R Switch", "Sidetone Mux" },
+ { "Stereo DAC MIXR", "DAC1 R Switch", "DAC1 MIXR" },
+ { "Stereo DAC MIXR", "DAC2 R Switch", "DAC2 R Mux" },
+ { "Stereo DAC MIXR", "DAC1 L Switch", "DAC1 MIXL" },
+ { "Stereo DAC MIXR", NULL, "dac stereo1 filter" },
+
+ { "Mono DAC MIXL", "ST L Switch", "Sidetone Mux" },
+ { "Mono DAC MIXL", "DAC1 L Switch", "DAC1 MIXL" },
+ { "Mono DAC MIXL", "DAC2 L Switch", "DAC2 L Mux" },
+ { "Mono DAC MIXL", "DAC2 R Switch", "DAC2 R Mux" },
+ { "Mono DAC MIXL", NULL, "dac mono left filter" },
+ { "Mono DAC MIXR", "ST R Switch", "Sidetone Mux" },
+ { "Mono DAC MIXR", "DAC1 R Switch", "DAC1 MIXR" },
+ { "Mono DAC MIXR", "DAC2 R Switch", "DAC2 R Mux" },
+ { "Mono DAC MIXR", "DAC2 L Switch", "DAC2 L Mux" },
+ { "Mono DAC MIXR", NULL, "dac mono right filter" },
+
+ { "DD1 MIXL", "Sto DAC Mix L Switch", "Stereo DAC MIXL" },
+ { "DD1 MIXL", "Mono DAC Mix L Switch", "Mono DAC MIXL" },
+ { "DD1 MIXL", "DAC3 L Switch", "DAC3 L Mux" },
+ { "DD1 MIXL", "DAC3 R Switch", "DAC3 R Mux" },
+ { "DD1 MIXR", "Sto DAC Mix R Switch", "Stereo DAC MIXR" },
+ { "DD1 MIXR", "Mono DAC Mix R Switch", "Mono DAC MIXR" },
+ { "DD1 MIXR", "DAC3 L Switch", "DAC3 L Mux" },
+ { "DD1 MIXR", "DAC3 R Switch", "DAC3 R Mux" },
+
+ { "DD2 MIXL", "Sto DAC Mix L Switch", "Stereo DAC MIXL" },
+ { "DD2 MIXL", "Mono DAC Mix L Switch", "Mono DAC MIXL" },
+ { "DD2 MIXL", "DAC4 L Switch", "DAC4 L Mux" },
+ { "DD2 MIXL", "DAC4 R Switch", "DAC4 R Mux" },
+ { "DD2 MIXR", "Sto DAC Mix R Switch", "Stereo DAC MIXR" },
+ { "DD2 MIXR", "Mono DAC Mix R Switch", "Mono DAC MIXR" },
+ { "DD2 MIXR", "DAC4 L Switch", "DAC4 L Mux" },
+ { "DD2 MIXR", "DAC4 R Switch", "DAC4 R Mux" },
+
+ { "Stereo DAC MIX", NULL, "Stereo DAC MIXL" },
+ { "Stereo DAC MIX", NULL, "Stereo DAC MIXR" },
+ { "Mono DAC MIX", NULL, "Mono DAC MIXL" },
+ { "Mono DAC MIX", NULL, "Mono DAC MIXR" },
+ { "DD1 MIX", NULL, "DD1 MIXL" },
+ { "DD1 MIX", NULL, "DD1 MIXR" },
+ { "DD2 MIX", NULL, "DD2 MIXL" },
+ { "DD2 MIX", NULL, "DD2 MIXR" },
+
+ { "DAC12 SRC Mux", "STO1 DAC MIX", "Stereo DAC MIX" },
+ { "DAC12 SRC Mux", "MONO DAC MIX", "Mono DAC MIX" },
+ { "DAC12 SRC Mux", "DD MIX1", "DD1 MIX" },
+ { "DAC12 SRC Mux", "DD MIX2", "DD2 MIX" },
+
+ { "DAC3 SRC Mux", "MONO DAC MIXL", "Mono DAC MIXL" },
+ { "DAC3 SRC Mux", "MONO DAC MIXR", "Mono DAC MIXR" },
+ { "DAC3 SRC Mux", "DD MIX1L", "DD1 MIXL" },
+ { "DAC3 SRC Mux", "DD MIX2L", "DD2 MIXL" },
+
+ { "DAC 1", NULL, "DAC12 SRC Mux" },
+ { "DAC 1", NULL, "PLL1", check_sysclk1_source },
+ { "DAC 2", NULL, "DAC12 SRC Mux" },
+ { "DAC 2", NULL, "PLL1", check_sysclk1_source },
+ { "DAC 3", NULL, "DAC3 SRC Mux" },
+ { "DAC 3", NULL, "PLL1", check_sysclk1_source },
+
+ { "PDM1 L Mux", "STO1 DAC MIX", "Stereo DAC MIXL" },
+ { "PDM1 L Mux", "MONO DAC MIX", "Mono DAC MIXL" },
+ { "PDM1 L Mux", "DD MIX1", "DD1 MIXL" },
+ { "PDM1 L Mux", "DD MIX2", "DD2 MIXL" },
+ { "PDM1 L Mux", NULL, "PDM1 Power" },
+ { "PDM1 R Mux", "STO1 DAC MIX", "Stereo DAC MIXR" },
+ { "PDM1 R Mux", "MONO DAC MIX", "Mono DAC MIXR" },
+ { "PDM1 R Mux", "DD MIX1", "DD1 MIXR" },
+ { "PDM1 R Mux", "DD MIX2", "DD2 MIXR" },
+ { "PDM1 R Mux", NULL, "PDM1 Power" },
+ { "PDM2 L Mux", "STO1 DAC MIX", "Stereo DAC MIXL" },
+ { "PDM2 L Mux", "MONO DAC MIX", "Mono DAC MIXL" },
+ { "PDM2 L Mux", "DD MIX1", "DD1 MIXL" },
+ { "PDM2 L Mux", "DD MIX2", "DD2 MIXL" },
+ { "PDM2 L Mux", NULL, "PDM2 Power" },
+ { "PDM2 R Mux", "STO1 DAC MIX", "Stereo DAC MIXR" },
+ { "PDM2 R Mux", "MONO DAC MIX", "Mono DAC MIXR" },
+ { "PDM2 R Mux", "DD MIX1", "DD1 MIXR" },
+ { "PDM2 R Mux", "DD MIX2", "DD2 MIXR" },
+ { "PDM2 R Mux", NULL, "PDM2 Power" },
+
+ { "LOUT1 amp", NULL, "DAC 1" },
+ { "LOUT2 amp", NULL, "DAC 2" },
+ { "LOUT3 amp", NULL, "DAC 3" },
+
+ { "LOUT1", NULL, "LOUT1 amp" },
+ { "LOUT2", NULL, "LOUT2 amp" },
+ { "LOUT3", NULL, "LOUT3 amp" },
+
+ { "PDM1L", NULL, "PDM1 L Mux" },
+ { "PDM1R", NULL, "PDM1 R Mux" },
+ { "PDM2L", NULL, "PDM2 L Mux" },
+ { "PDM2R", NULL, "PDM2 R Mux" },
+};
+
+static int get_clk_info(int sclk, int rate)
+{
+ int i, pd[] = {1, 2, 3, 4, 6, 8, 12, 16};
+
+#ifdef USE_ASRC
+ return 0;
+#endif
+ if (sclk <= 0 || rate <= 0)
+ return -EINVAL;
+
+ rate = rate << 8;
+ for (i = 0; i < ARRAY_SIZE(pd); i++)
+ if (sclk == rate * pd[i])
+ return i;
+
+ return -EINVAL;
+}
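+
+/*
+ * Example (illustrative, not from the datasheet): get_clk_info() searches
+ * for a pre-divider pd such that sclk == rate * 256 * pd. With
+ * sysclk = 24576000 and rate = 48000, 48000 << 8 = 12288000 and
+ * 24576000 == 12288000 * 2, so index 1 (pd = 2) is returned.
+ */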
+
+static int rt5677_hw_params(struct snd_pcm_substream *substream,
+ struct snd_pcm_hw_params *params, struct snd_soc_dai *dai)
+{
+ struct snd_soc_pcm_runtime *rtd = substream->private_data;
+ struct snd_soc_codec *codec = rtd->codec;
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+ unsigned int val_len = 0, val_clk, mask_clk;
+ int pre_div, bclk_ms, frame_size;
+
+ rt5677->lrck[dai->id] = params_rate(params);
+ pre_div = get_clk_info(rt5677->sysclk, rt5677->lrck[dai->id]);
+ if (pre_div < 0) {
+ dev_err(codec->dev, "Unsupported clock setting\n");
+ return -EINVAL;
+ }
+ frame_size = snd_soc_params_to_frame_size(params);
+ if (frame_size < 0) {
+ dev_err(codec->dev, "Unsupported frame size: %d\n", frame_size);
+ return -EINVAL;
+ }
+ bclk_ms = frame_size > 32 ? 1 : 0;
+ rt5677->bclk[dai->id] = rt5677->lrck[dai->id] * (32 << bclk_ms);
+
+ dev_dbg(dai->dev, "bclk is %dHz and lrck is %dHz\n",
+ rt5677->bclk[dai->id], rt5677->lrck[dai->id]);
+ dev_dbg(dai->dev, "bclk_ms is %d and pre_div is %d for iis %d\n",
+ bclk_ms, pre_div, dai->id);
+
+ switch (params_format(params)) {
+ case SNDRV_PCM_FORMAT_S16_LE:
+ break;
+ case SNDRV_PCM_FORMAT_S20_3LE:
+ val_len |= RT5677_I2S_DL_20;
+ break;
+ case SNDRV_PCM_FORMAT_S24_LE:
+ val_len |= RT5677_I2S_DL_24;
+ break;
+ case SNDRV_PCM_FORMAT_S8:
+ val_len |= RT5677_I2S_DL_8;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ switch (dai->id) {
+ case RT5677_AIF1:
+ mask_clk = RT5677_I2S_PD1_MASK;
+ val_clk = pre_div << RT5677_I2S_PD1_SFT;
+ regmap_update_bits(rt5677->regmap, RT5677_I2S1_SDP,
+ RT5677_I2S_DL_MASK, val_len);
+ regmap_update_bits(rt5677->regmap, RT5677_CLK_TREE_CTRL1,
+ mask_clk, val_clk);
+ break;
+ case RT5677_AIF2:
+ mask_clk = RT5677_I2S_BCLK_MS2_MASK | RT5677_I2S_PD2_MASK;
+ val_clk = bclk_ms << RT5677_I2S_BCLK_MS2_SFT |
+ pre_div << RT5677_I2S_PD2_SFT;
+ regmap_update_bits(rt5677->regmap, RT5677_I2S2_SDP,
+ RT5677_I2S_DL_MASK, val_len);
+ regmap_update_bits(rt5677->regmap, RT5677_CLK_TREE_CTRL1,
+ mask_clk, val_clk);
+ break;
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+static int rt5677_prepare(struct snd_pcm_substream *substream,
+ struct snd_soc_dai *dai)
+{
+ struct snd_soc_pcm_runtime *rtd = substream->private_data;
+ struct snd_soc_codec *codec = rtd->codec;
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+
+ rt5677->aif_pu = dai->id;
+ rt5677->stream = substream->stream;
+ return 0;
+}
+
+static int rt5677_set_dai_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+{
+ struct snd_soc_codec *codec = dai->codec;
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+ unsigned int reg_val = 0;
+
+ switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) {
+ case SND_SOC_DAIFMT_CBM_CFM:
+ rt5677->master[dai->id] = 1;
+ break;
+ case SND_SOC_DAIFMT_CBS_CFS:
+ reg_val |= RT5677_I2S_MS_S;
+ rt5677->master[dai->id] = 0;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ switch (fmt & SND_SOC_DAIFMT_INV_MASK) {
+ case SND_SOC_DAIFMT_NB_NF:
+ break;
+ case SND_SOC_DAIFMT_IB_NF:
+ reg_val |= RT5677_I2S_BP_INV;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
+ case SND_SOC_DAIFMT_I2S:
+ break;
+ case SND_SOC_DAIFMT_LEFT_J:
+ reg_val |= RT5677_I2S_DF_LEFT;
+ break;
+ case SND_SOC_DAIFMT_DSP_A:
+ reg_val |= RT5677_I2S_DF_PCM_A;
+ break;
+ case SND_SOC_DAIFMT_DSP_B:
+ reg_val |= RT5677_I2S_DF_PCM_B;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ switch (dai->id) {
+ case RT5677_AIF1:
+ regmap_update_bits(rt5677->regmap, RT5677_I2S1_SDP,
+ RT5677_I2S_MS_MASK | RT5677_I2S_BP_MASK |
+ RT5677_I2S_DF_MASK, reg_val);
+ break;
+ case RT5677_AIF2:
+ regmap_update_bits(rt5677->regmap, RT5677_I2S2_SDP,
+ RT5677_I2S_MS_MASK | RT5677_I2S_BP_MASK |
+ RT5677_I2S_DF_MASK, reg_val);
+ break;
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+static int rt5677_set_dai_sysclk(struct snd_soc_dai *dai,
+ int clk_id, unsigned int freq, int dir)
+{
+ struct snd_soc_codec *codec = dai->codec;
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+ unsigned int reg_val = 0;
+
+ if (freq == rt5677->sysclk && clk_id == rt5677->sysclk_src)
+ return 0;
+
+ switch (clk_id) {
+ case RT5677_SCLK_S_MCLK:
+ reg_val |= RT5677_SCLK_SRC_MCLK;
+ break;
+ case RT5677_SCLK_S_PLL1:
+ reg_val |= RT5677_SCLK_SRC_PLL1;
+ break;
+ case RT5677_SCLK_S_RCCLK:
+ reg_val |= RT5677_SCLK_SRC_RCCLK;
+ break;
+ default:
+ dev_err(codec->dev, "Invalid clock id (%d)\n", clk_id);
+ return -EINVAL;
+ }
+ regmap_update_bits(rt5677->regmap, RT5677_GLB_CLK1,
+ RT5677_SCLK_SRC_MASK, reg_val);
+ rt5677->sysclk = freq;
+ rt5677->sysclk_src = clk_id;
+
+ dev_dbg(dai->dev, "Sysclk is %dHz and clock id is %d\n", freq, clk_id);
+
+ return 0;
+}
+
+/**
+ * rt5677_pll_calc - Calculate PLL M/N/K code.
+ * @freq_in: external clock provided to codec.
+ * @freq_out: target clock which codec works on.
+ * @pll_code: Pointer to structure with M, N, K, bypass K and bypass M flag.
+ *
+ * Calculate the M/N/K code to configure the PLL for the codec. K is
+ * derived from freq_out first, which makes the calculation more efficient.
+ *
+ * Returns 0 for success or negative error code.
+ */
+static int rt5677_pll_calc(const unsigned int freq_in,
+ const unsigned int freq_out, struct rt5677_pll_code *pll_code)
+{
+ int max_n = RT5677_PLL_N_MAX, max_m = RT5677_PLL_M_MAX;
+ int k, n = 0, m = 0, red, n_t, m_t, pll_out, in_t;
+ int out_t, red_t = abs(freq_out - freq_in);
+ bool m_bp = false, k_bp = false;
+
+ if (RT5677_PLL_INP_MAX < freq_in || RT5677_PLL_INP_MIN > freq_in)
+ return -EINVAL;
+
+ k = 100000000 / freq_out - 2;
+ if (k > RT5677_PLL_K_MAX)
+ k = RT5677_PLL_K_MAX;
+ for (n_t = 0; n_t <= max_n; n_t++) {
+ in_t = freq_in / (k + 2);
+ pll_out = freq_out / (n_t + 2);
+ if (in_t < 0)
+ continue;
+ if (in_t == pll_out) {
+ m_bp = true;
+ n = n_t;
+ goto code_find;
+ }
+ red = abs(in_t - pll_out);
+ if (red < red_t) {
+ m_bp = true;
+ n = n_t;
+ if (red == 0)
+ goto code_find;
+ red_t = red;
+ }
+ for (m_t = 0; m_t <= max_m; m_t++) {
+ out_t = in_t / (m_t + 2);
+ red = abs(out_t - pll_out);
+ if (red < red_t) {
+ m_bp = false;
+ n = n_t;
+ m = m_t;
+ if (red == 0)
+ goto code_find;
+ red_t = red;
+ }
+ }
+ }
+ pr_debug("Only an approximate PLL setting was found\n");
+
+code_find:
+
+ pll_code->m_bp = m_bp;
+ pll_code->k_bp = k_bp;
+ pll_code->m_code = m;
+ pll_code->n_code = n;
+ pll_code->k_code = k;
+ return 0;
+}
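+
+/*
+ * Example (illustrative, not from the datasheet): the search above
+ * implements freq_out = freq_in * (n + 2) / ((k + 2) * (m + 2)), with
+ * the (m + 2) divider treated as 1 when m_bp is set. A 19.2 MHz MCLK
+ * can generate 24.576 MHz exactly: k = 100000000 / 24576000 - 2 = 2,
+ * and n = 126, m = 23 give 19200000 * 128 / (4 * 25) = 24576000.
+ */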
+
+static int rt5677_set_dai_pll(struct snd_soc_dai *dai, int pll_id, int source,
+ unsigned int freq_in, unsigned int freq_out)
+{
+ struct snd_soc_codec *codec = dai->codec;
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+ struct rt5677_pll_code pll_code;
+ int ret;
+
+ if (source == rt5677->pll_src && freq_in == rt5677->pll_in &&
+ freq_out == rt5677->pll_out)
+ return 0;
+
+ if (!freq_in || !freq_out) {
+ dev_dbg(codec->dev, "PLL disabled\n");
+
+ rt5677->pll_in = 0;
+ rt5677->pll_out = 0;
+ regmap_update_bits(rt5677->regmap, RT5677_GLB_CLK1,
+ RT5677_SCLK_SRC_MASK, RT5677_SCLK_SRC_MCLK);
+ return 0;
+ }
+
+ switch (source) {
+ case RT5677_PLL1_S_MCLK:
+ regmap_update_bits(rt5677->regmap, RT5677_GLB_CLK1,
+ RT5677_PLL1_SRC_MASK, RT5677_PLL1_SRC_MCLK);
+ break;
+ case RT5677_PLL1_S_BCLK1:
+ case RT5677_PLL1_S_BCLK2:
+ case RT5677_PLL1_S_BCLK3:
+ case RT5677_PLL1_S_BCLK4:
+ switch (dai->id) {
+ case RT5677_AIF1:
+ regmap_update_bits(rt5677->regmap, RT5677_GLB_CLK1,
+ RT5677_PLL1_SRC_MASK, RT5677_PLL1_SRC_BCLK1);
+ break;
+ case RT5677_AIF2:
+ regmap_update_bits(rt5677->regmap, RT5677_GLB_CLK1,
+ RT5677_PLL1_SRC_MASK, RT5677_PLL1_SRC_BCLK2);
+ break;
+ default:
+ break;
+ }
+ break;
+ default:
+ dev_err(codec->dev, "Unknown PLL source %d\n", source);
+ return -EINVAL;
+ }
+
+ ret = rt5677_pll_calc(freq_in, freq_out, &pll_code);
+ if (ret < 0) {
+ dev_err(codec->dev, "Unsupported input clock %d\n", freq_in);
+ return ret;
+ }
+
+ dev_dbg(codec->dev, "m_bypass=%d k_bypass=%d m=%d n=%d k=%d\n",
+ pll_code.m_bp, pll_code.k_bp,
+ (pll_code.m_bp ? 0 : pll_code.m_code), pll_code.n_code,
+ (pll_code.k_bp ? 0 : pll_code.k_code));
+
+ regmap_write(rt5677->regmap, RT5677_PLL1_CTRL1,
+ pll_code.n_code << RT5677_PLL_N_SFT |
+ pll_code.k_bp << RT5677_PLL_K_BP_SFT |
+ (pll_code.k_bp ? 0 : pll_code.k_code));
+ regmap_write(rt5677->regmap, RT5677_PLL1_CTRL2,
+ (pll_code.m_bp ? 0 : pll_code.m_code) << RT5677_PLL_M_SFT |
+ pll_code.m_bp << RT5677_PLL_M_BP_SFT);
+
+ rt5677->pll_in = freq_in;
+ rt5677->pll_out = freq_out;
+ rt5677->pll_src = source;
+
+ return 0;
+}
+
+/**
+ * rt5677_index_show - Dump private registers.
+ * @dev: codec device.
+ * @attr: device attribute.
+ * @buf: buffer for display.
+ *
+ * Shows the non-zero values of all private registers.
+ *
+ * Returns the buffer length.
+ */
+static ssize_t rt5677_index_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct rt5677_priv *rt5677 = i2c_get_clientdata(client);
+ struct snd_soc_codec *codec = rt5677->codec;
+ unsigned int val;
+ int cnt = 0, i;
+
+ cnt += sprintf(buf, "RT5677 index register\n");
+ for (i = 0; i < 0xff; i++) {
+ if (cnt + RT5677_REG_DISP_LEN >= PAGE_SIZE)
+ break;
+ val = rt5677_index_read(codec, i);
+ if (!val)
+ continue;
+ cnt += snprintf(buf + cnt, RT5677_REG_DISP_LEN,
+ "%02x: %04x\n", i, val);
+ }
+
+ if (cnt >= PAGE_SIZE)
+ cnt = PAGE_SIZE - 1;
+
+ return cnt;
+}
+
+static ssize_t rt5677_index_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct rt5677_priv *rt5677 = i2c_get_clientdata(client);
+ struct snd_soc_codec *codec = rt5677->codec;
+ unsigned int val = 0, addr = 0;
+ int i;
+
+ for (i = 0; i < count; i++) {
+ if (*(buf + i) <= '9' && *(buf + i) >= '0')
+ addr = (addr << 4) | (*(buf + i) - '0');
+ else if (*(buf + i) <= 'f' && *(buf + i) >= 'a')
+ addr = (addr << 4) | ((*(buf + i) - 'a') + 0xa);
+ else if (*(buf + i) <= 'F' && *(buf+i) >= 'A')
+ addr = (addr << 4) | ((*(buf + i) - 'A') + 0xa);
+ else
+ break;
+ }
+
+ for (i = i + 1; i < count; i++) {
+ if (*(buf + i) <= '9' && *(buf + i) >= '0')
+ val = (val << 4) | (*(buf + i)-'0');
+ else if (*(buf + i) <= 'f' && *(buf + i) >= 'a')
+ val = (val << 4) | ((*(buf + i) - 'a') + 0xa);
+ else if (*(buf + i) <= 'F' && *(buf + i) >= 'A')
+ val = (val << 4) | ((*(buf+i)-'A') + 0xa);
+ else
+ break;
+ }
+ pr_info("addr = 0x%02x val = 0x%04x\n", addr, val);
+ if (addr > RT5677_VENDOR_ID2 || val > 0xffff)
+ return count;
+
+ if (i == count)
+ pr_info("0x%02x = 0x%04x\n", addr,
+ rt5677_index_read(codec, addr));
+ else
+ rt5677_index_write(codec, addr, val);
+
+ return count;
+}
+static DEVICE_ATTR(index_reg, 0666, rt5677_index_show, rt5677_index_store);
+
+static ssize_t rt5677_codec_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct rt5677_priv *rt5677 = i2c_get_clientdata(client);
+ unsigned int val;
+ int cnt = 0, i;
+
+ for (i = 0; i <= RT5677_VENDOR_ID2; i++) {
+ if (cnt + RT5677_REG_DISP_LEN >= PAGE_SIZE)
+ break;
+
+ if (rt5677_readable_register(NULL, i)) {
+ regmap_read(rt5677->regmap, i, &val);
+
+ cnt += snprintf(buf + cnt, RT5677_REG_DISP_LEN,
+ "%04x: %04x\n", i, val);
+ }
+ }
+
+ if (cnt >= PAGE_SIZE)
+ cnt = PAGE_SIZE - 1;
+
+ return cnt;
+}
+
+static ssize_t rt5677_codec_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct rt5677_priv *rt5677 = i2c_get_clientdata(client);
+ unsigned int val = 0, addr = 0;
+ int i;
+
+ pr_info("register \"%s\" count = %zu\n", buf, count);
+ for (i = 0; i < count; i++) {
+ if (*(buf + i) <= '9' && *(buf + i) >= '0')
+ addr = (addr << 4) | (*(buf + i) - '0');
+ else if (*(buf + i) <= 'f' && *(buf + i) >= 'a')
+ addr = (addr << 4) | ((*(buf + i) - 'a') + 0xa);
+ else if (*(buf + i) <= 'F' && *(buf + i) >= 'A')
+ addr = (addr << 4) | ((*(buf + i)-'A') + 0xa);
+ else
+ break;
+ }
+
+ for (i = i + 1; i < count; i++) {
+ if (*(buf + i) <= '9' && *(buf + i) >= '0')
+ val = (val << 4) | (*(buf + i) - '0');
+ else if (*(buf + i) <= 'f' && *(buf + i) >= 'a')
+ val = (val << 4) | ((*(buf + i) - 'a') + 0xa);
+ else if (*(buf + i) <= 'F' && *(buf + i) >= 'A')
+ val = (val << 4) | ((*(buf + i) - 'A') + 0xa);
+ else
+ break;
+ }
+
+ pr_info("addr = 0x%02x val = 0x%04x\n", addr, val);
+ if (addr > RT5677_VENDOR_ID2 || val > 0xffff)
+ return count;
+
+ if (i == count) {
+ regmap_read(rt5677->regmap, addr, &val);
+ pr_info("0x%02x = 0x%04x\n", addr, val);
+ } else
+ regmap_write(rt5677->regmap, addr, val);
+
+ return count;
+}
+
+static DEVICE_ATTR(codec_reg, 0666, rt5677_codec_show, rt5677_codec_store);
+
+static ssize_t rt5677_dsp_codec_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct rt5677_priv *rt5677 = i2c_get_clientdata(client);
+ struct snd_soc_codec *codec = rt5677->codec;
+ unsigned int val;
+ int cnt = 0, i;
+
+ regcache_cache_only(rt5677->regmap, false);
+ regcache_cache_bypass(rt5677->regmap, true);
+
+ for (i = 0; i <= RT5677_VENDOR_ID2; i++) {
+ if (cnt + RT5677_REG_DISP_LEN >= PAGE_SIZE)
+ break;
+
+ if (rt5677_readable_register(NULL, i)) {
+ val = rt5677_dsp_mode_i2c_read(codec, i);
+
+ cnt += snprintf(buf + cnt, RT5677_REG_DISP_LEN,
+ "%04x: %04x\n", i, val);
+ }
+ }
+
+ regcache_cache_bypass(rt5677->regmap, false);
+ regcache_cache_only(rt5677->regmap, true);
+
+ if (cnt >= PAGE_SIZE)
+ cnt = PAGE_SIZE - 1;
+
+ return cnt;
+}
+
+static ssize_t rt5677_dsp_codec_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct rt5677_priv *rt5677 = i2c_get_clientdata(client);
+ struct snd_soc_codec *codec = rt5677->codec;
+ unsigned int val = 0, addr = 0;
+ int i;
+
+ pr_info("register \"%s\" count = %zu\n", buf, count);
+ for (i = 0; i < count; i++) {
+ if (*(buf + i) <= '9' && *(buf + i) >= '0')
+ addr = (addr << 4) | (*(buf + i) - '0');
+ else if (*(buf + i) <= 'f' && *(buf + i) >= 'a')
+ addr = (addr << 4) | ((*(buf + i) - 'a') + 0xa);
+ else if (*(buf + i) <= 'F' && *(buf + i) >= 'A')
+ addr = (addr << 4) | ((*(buf + i)-'A') + 0xa);
+ else
+ break;
+ }
+
+ for (i = i + 1; i < count; i++) {
+ if (*(buf + i) <= '9' && *(buf + i) >= '0')
+ val = (val << 4) | (*(buf + i) - '0');
+ else if (*(buf + i) <= 'f' && *(buf + i) >= 'a')
+ val = (val << 4) | ((*(buf + i) - 'a') + 0xa);
+ else if (*(buf + i) <= 'F' && *(buf + i) >= 'A')
+ val = (val << 4) | ((*(buf + i) - 'A') + 0xa);
+ else
+ break;
+ }
+
+ pr_info("addr = 0x%02x val = 0x%04x\n", addr, val);
+ if (addr > RT5677_VENDOR_ID2 || val > 0xffff)
+ return count;
+
+ regcache_cache_only(rt5677->regmap, false);
+ regcache_cache_bypass(rt5677->regmap, true);
+
+ if (i == count) {
+ val = rt5677_dsp_mode_i2c_read(codec, addr);
+ pr_info("0x%02x = 0x%04x\n", addr, val);
+ } else
+ rt5677_dsp_mode_i2c_write(codec, addr, val);
+
+ regcache_cache_bypass(rt5677->regmap, false);
+ regcache_cache_only(rt5677->regmap, true);
+
+ return count;
+}
+static DEVICE_ATTR(dsp_codec_reg, 0666, rt5677_dsp_codec_show,
+ rt5677_dsp_codec_store);
+
+static int rt5677_set_bias_level(struct snd_soc_codec *codec,
+ enum snd_soc_bias_level level)
+{
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+ int i;
+
+ switch (level) {
+ case SND_SOC_BIAS_ON:
+ break;
+
+ case SND_SOC_BIAS_PREPARE:
+ if (codec->dapm.bias_level == SND_SOC_BIAS_STANDBY) {
+ regmap_update_bits(rt5677->regmap, RT5677_PWR_ANLG1,
+ RT5677_LDO1_SEL_MASK | RT5677_LDO2_SEL_MASK,
+ 0x36);
+ rt5677_index_update_bits(codec, RT5677_BIAS_CUR4,
+ 0x0f00, 0x0f00);
+ regmap_update_bits(rt5677->regmap, RT5677_PWR_ANLG1,
+ RT5677_PWR_VREF1 | RT5677_PWR_MB |
+ RT5677_PWR_BG | RT5677_PWR_VREF2,
+ RT5677_PWR_VREF1 | RT5677_PWR_MB |
+ RT5677_PWR_BG | RT5677_PWR_VREF2);
+ if (rt5677->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ regmap_update_bits(rt5677->regmap,
+ RT5677_PWR_ANLG1,
+ RT5677_PWR_LO1 | RT5677_PWR_LO2,
+ RT5677_PWR_LO1 | RT5677_PWR_LO2);
+ }
+ usleep_range(15000, 20000);
+ regmap_update_bits(rt5677->regmap, RT5677_PWR_ANLG1,
+ RT5677_PWR_FV1 | RT5677_PWR_FV2,
+ RT5677_PWR_FV1 | RT5677_PWR_FV2);
+ regmap_update_bits(rt5677->regmap, RT5677_PWR_ANLG2,
+ RT5677_PWR_CORE, RT5677_PWR_CORE);
+ regmap_update_bits(rt5677->regmap, RT5677_DIG_MISC,
+ 0x1, 0x1);
+ }
+ break;
+
+ case SND_SOC_BIAS_STANDBY:
+ if (codec->dapm.bias_level == SND_SOC_BIAS_OFF) {
+ rt5677_set_vad(codec, 0, true);
+ set_rt5677_power_extern(true);
+ regcache_cache_only(rt5677->regmap, false);
+ regcache_mark_dirty(rt5677->regmap);
+ for (i = 0; i < RT5677_VENDOR_ID2 + 1; i++)
+ regcache_sync_region(rt5677->regmap, i, i);
+ rt5677_index_sync(codec);
+ }
+ break;
+
+ case SND_SOC_BIAS_OFF:
+ regmap_update_bits(rt5677->regmap, RT5677_DIG_MISC, 0x1, 0x0);
+ regmap_write(rt5677->regmap, RT5677_PWR_DIG1, 0x0000);
+ regmap_write(rt5677->regmap, RT5677_PWR_DIG2, 0x0000);
+ regmap_write(rt5677->regmap, RT5677_PWR_ANLG1, 0x0022);
+ regmap_write(rt5677->regmap, RT5677_PWR_ANLG2, 0x0000);
+ rt5677_index_update_bits(codec,
+ RT5677_BIAS_CUR4, 0x0f00, 0x0000);
+
+ if (rt5677->vad_mode == RT5677_VAD_IDLE)
+ rt5677_set_vad(codec, 1, true);
+ if (rt5677->vad_mode == RT5677_VAD_OFF)
+ regcache_cache_only(rt5677->regmap, true);
+ break;
+
+ default:
+ break;
+ }
+ codec->dapm.bias_level = level;
+
+ return 0;
+}
+
+static int rt5677_probe(struct snd_soc_codec *codec)
+{
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+#ifdef RTK_IOCTL
+#if defined(CONFIG_SND_HWDEP) || defined(CONFIG_SND_HWDEP_MODULE)
+ struct rt_codec_ops *ioctl_ops = rt_codec_get_ioctl_ops();
+#endif
+#endif
+ unsigned int val;
+ int ret;
+
+ pr_info("Codec driver version %s\n", VERSION);
+
+ ret = snd_soc_codec_set_cache_io(codec, 8, 16, SND_SOC_REGMAP);
+ if (ret != 0) {
+ dev_err(codec->dev, "Failed to set cache I/O: %d\n", ret);
+ return ret;
+ }
+
+ regmap_read(rt5677->regmap, RT5677_VENDOR_ID2, &val);
+ if (val != RT5677_DEVICE_ID) {
+ dev_err(codec->dev,
+ "Device with ID register %x is not rt5677\n", val);
+ return -ENODEV;
+ }
+
+ rt5677_reset(codec);
+ regmap_update_bits(rt5677->regmap, RT5677_PWR_ANLG1,
+ RT5677_PWR_VREF1 | RT5677_PWR_MB |
+ RT5677_PWR_BG | RT5677_PWR_VREF2,
+ RT5677_PWR_VREF1 | RT5677_PWR_MB |
+ RT5677_PWR_BG | RT5677_PWR_VREF2);
+ msleep(20);
+ regmap_update_bits(rt5677->regmap, RT5677_PWR_ANLG1,
+ RT5677_PWR_FV1 | RT5677_PWR_FV2,
+ RT5677_PWR_FV1 | RT5677_PWR_FV2);
+
+ regmap_update_bits(rt5677->regmap, RT5677_PWR_ANLG2,
+ RT5677_PWR_CORE, RT5677_PWR_CORE);
+
+ rt5677_reg_init(codec);
+
+ rt5677->codec = codec;
+
+ mutex_init(&rt5677->index_lock);
+ mutex_init(&rt5677->vad_lock);
+
+#ifdef RTK_IOCTL
+#if defined(CONFIG_SND_HWDEP) || defined(CONFIG_SND_HWDEP_MODULE)
+ ioctl_ops->index_write = rt5677_index_write;
+ ioctl_ops->index_read = rt5677_index_read;
+ ioctl_ops->index_update_bits = rt5677_index_update_bits;
+ ioctl_ops->ioctl_common = rt5677_ioctl_common;
+ realtek_ce_init_hwdep(codec);
+#endif
+#endif
+
+ ret = device_create_file(codec->dev, &dev_attr_index_reg);
+ if (ret != 0) {
+ dev_err(codec->dev,
+ "Failed to create index_reg sysfs files: %d\n", ret);
+ return ret;
+ }
+
+ ret = device_create_file(codec->dev, &dev_attr_codec_reg);
+ if (ret != 0) {
+ dev_err(codec->dev,
+ "Failed to create codec_reg sysfs files: %d\n", ret);
+ return ret;
+ }
+
+ ret = device_create_file(codec->dev, &dev_attr_dsp_codec_reg);
+ if (ret != 0) {
+ dev_err(codec->dev,
+ "Failed to create dsp_codec_reg sysfs files: %d\n", ret);
+ return ret;
+ }
+
+ rt5677_set_bias_level(codec, SND_SOC_BIAS_OFF);
+
+ rt5677_register_hs_notification();
+ rt5677->check_mic_wq = create_workqueue("check_hp_mic");
+ INIT_DELAYED_WORK(&rt5677->check_hp_mic_work, rt5677_check_hp_mic);
+ rt5677->mic_state = HEADSET_UNPLUG;
+ rt5677_global = rt5677;
+
+ return 0;
+}
+
+static int rt5677_remove(struct snd_soc_codec *codec)
+{
+ rt5677_set_bias_level(codec, SND_SOC_BIAS_OFF);
+ return 0;
+}
+
+#ifdef CONFIG_PM
+static int rt5677_suspend(struct snd_soc_codec *codec)
+{
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+
+ if (rt5677->vad_mode == RT5677_VAD_SUSPEND)
+ rt5677_set_vad(codec, 1, true);
+
+ return 0;
+}
+
+static int rt5677_resume(struct snd_soc_codec *codec)
+{
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+
+ if (rt5677->vad_mode == RT5677_VAD_SUSPEND)
+ rt5677_set_vad(codec, 0, true);
+
+ return 0;
+}
+#else
+#define rt5677_suspend NULL
+#define rt5677_resume NULL
+#endif
+
+static void rt5677_shutdown(struct snd_pcm_substream *substream,
+ struct snd_soc_dai *dai)
+{
+ pr_debug("enter %s\n", __func__);
+}
+
+#define RT5677_STEREO_RATES SNDRV_PCM_RATE_8000_96000
+#define RT5677_FORMATS (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S20_3LE | \
+ SNDRV_PCM_FMTBIT_S24_LE | SNDRV_PCM_FMTBIT_S8)
+
+static struct snd_soc_dai_ops rt5677_aif_dai_ops = {
+ .hw_params = rt5677_hw_params,
+ .prepare = rt5677_prepare,
+ .set_fmt = rt5677_set_dai_fmt,
+ .set_sysclk = rt5677_set_dai_sysclk,
+ .set_pll = rt5677_set_dai_pll,
+ .shutdown = rt5677_shutdown,
+};
+
+struct snd_soc_dai_driver rt5677_dai[] = {
+ {
+ .name = "rt5677-aif1",
+ .id = RT5677_AIF1,
+ .playback = {
+ .stream_name = "AIF1 Playback",
+ .channels_min = 1,
+ .channels_max = 2,
+ .rates = RT5677_STEREO_RATES,
+ .formats = RT5677_FORMATS,
+ },
+ .capture = {
+ .stream_name = "AIF1 Capture",
+ .channels_min = 1,
+ .channels_max = 2,
+ .rates = RT5677_STEREO_RATES,
+ .formats = RT5677_FORMATS,
+ },
+ .ops = &rt5677_aif_dai_ops,
+ },
+ {
+ .name = "rt5677-aif2",
+ .id = RT5677_AIF2,
+ .playback = {
+ .stream_name = "AIF2 Playback",
+ .channels_min = 1,
+ .channels_max = 2,
+ .rates = RT5677_STEREO_RATES,
+ .formats = RT5677_FORMATS,
+ },
+ .capture = {
+ .stream_name = "AIF2 Capture",
+ .channels_min = 1,
+ .channels_max = 2,
+ .rates = RT5677_STEREO_RATES,
+ .formats = RT5677_FORMATS,
+ },
+ .ops = &rt5677_aif_dai_ops,
+ },
+ {
+ .name = "rt5677-aif3",
+ .id = RT5677_AIF3,
+ .playback = {
+ .stream_name = "AIF3 Playback",
+ .channels_min = 1,
+ .channels_max = 2,
+ .rates = RT5677_STEREO_RATES,
+ .formats = RT5677_FORMATS,
+ },
+ .capture = {
+ .stream_name = "AIF3 Capture",
+ .channels_min = 1,
+ .channels_max = 2,
+ .rates = RT5677_STEREO_RATES,
+ .formats = RT5677_FORMATS,
+ },
+ .ops = &rt5677_aif_dai_ops,
+ },
+ {
+ .name = "rt5677-aif4",
+ .id = RT5677_AIF4,
+ .playback = {
+ .stream_name = "AIF4 Playback",
+ .channels_min = 1,
+ .channels_max = 2,
+ .rates = RT5677_STEREO_RATES,
+ .formats = RT5677_FORMATS,
+ },
+ .capture = {
+ .stream_name = "AIF4 Capture",
+ .channels_min = 1,
+ .channels_max = 2,
+ .rates = RT5677_STEREO_RATES,
+ .formats = RT5677_FORMATS,
+ },
+ .ops = &rt5677_aif_dai_ops,
+ },
+ {
+ .name = "rt5677-slimbus",
+ .id = RT5677_AIF5,
+ .playback = {
+ .stream_name = "SLIMBus Playback",
+ .channels_min = 1,
+ .channels_max = 2,
+ .rates = RT5677_STEREO_RATES,
+ .formats = RT5677_FORMATS,
+ },
+ .capture = {
+ .stream_name = "SLIMBus Capture",
+ .channels_min = 1,
+ .channels_max = 2,
+ .rates = RT5677_STEREO_RATES,
+ .formats = RT5677_FORMATS,
+ },
+ .ops = &rt5677_aif_dai_ops,
+ },
+};
+
+static struct snd_soc_codec_driver soc_codec_dev_rt5677 = {
+ .probe = rt5677_probe,
+ .remove = rt5677_remove,
+ .suspend = rt5677_suspend,
+ .resume = rt5677_resume,
+ .set_bias_level = rt5677_set_bias_level,
+ .idle_bias_off = true,
+ .controls = rt5677_snd_controls,
+ .num_controls = ARRAY_SIZE(rt5677_snd_controls),
+ .dapm_widgets = rt5677_dapm_widgets,
+ .num_dapm_widgets = ARRAY_SIZE(rt5677_dapm_widgets),
+ .dapm_routes = rt5677_dapm_routes,
+ .num_dapm_routes = ARRAY_SIZE(rt5677_dapm_routes),
+};
+
+static const struct regmap_config rt5677_regmap = {
+ .reg_bits = 8,
+ .val_bits = 16,
+
+ .max_register = RT5677_VENDOR_ID2 + 1,
+ .volatile_reg = rt5677_volatile_register,
+ .readable_reg = rt5677_readable_register,
+
+ .cache_type = REGCACHE_RBTREE,
+ .reg_defaults = rt5677_reg,
+ .num_reg_defaults = ARRAY_SIZE(rt5677_reg),
+};
+
+static const struct i2c_device_id rt5677_i2c_id[] = {
+ { "rt5677", 0 },
+ { }
+};
+MODULE_DEVICE_TABLE(i2c, rt5677_i2c_id);
+
+static int rt5677_i2c_probe(struct i2c_client *i2c,
+ const struct i2c_device_id *id)
+{
+ struct rt5677_priv *rt5677;
+ int ret;
+
+ rt5677 = kzalloc(sizeof(*rt5677), GFP_KERNEL);
+ if (!rt5677)
+ return -ENOMEM;
+
+ rt5677->mbist_test = false;
+ rt5677->vad_sleep = true;
+ rt5677->mic_buf_len = RT5677_PRIV_MIC_BUF_SIZE;
+ rt5677->mic_buf = kmalloc(rt5677->mic_buf_len, GFP_KERNEL);
+ if (!rt5677->mic_buf) {
+ kfree(rt5677);
+ return -ENOMEM;
+ }
+
+ i2c_set_clientdata(i2c, rt5677);
+
+ rt5677->regmap = devm_regmap_init_i2c(i2c, &rt5677_regmap);
+ if (IS_ERR(rt5677->regmap)) {
+ ret = PTR_ERR(rt5677->regmap);
+ dev_err(&i2c->dev, "Failed to allocate register map: %d\n",
+ ret);
+ return ret;
+ }
+
+ ret = snd_soc_register_codec(&i2c->dev, &soc_codec_dev_rt5677,
+ rt5677_dai, ARRAY_SIZE(rt5677_dai));
+ if (ret < 0) {
+ kfree(rt5677->mic_buf);
+ kfree(rt5677);
+ return ret;
+ }
+
+ if (i2c->dev.platform_data) {
+
+ rt5677->vad_clock_en =
+ ((struct rt5677_priv *)i2c->dev.platform_data)->vad_clock_en;
+
+ if (gpio_is_valid(rt5677->vad_clock_en)) {
+ dev_dbg(&i2c->dev, "vad_clock_en: %d\n",
+ rt5677->vad_clock_en);
+
+ ret = gpio_request(rt5677->vad_clock_en, "vad_clock_en");
+ if (ret) {
+ dev_err(&i2c->dev, "cannot get vad_clock_en gpio\n");
+ } else {
+ ret = gpio_direction_output(rt5677->vad_clock_en, 0);
+ if (ret) {
+ dev_err(&i2c->dev,
+ "vad_clock_en=0 fail,%d\n", ret);
+
+ gpio_free(rt5677->vad_clock_en);
+ } else
+ dev_dbg(&i2c->dev, "vad_clock_en=0\n");
+ }
+ } else {
+ dev_dbg(&i2c->dev, "vad_clock_en is invalid: %d\n",
+ rt5677->vad_clock_en);
+ }
+ }
+
+ return ret;
+}
+
+static int rt5677_i2c_remove(struct i2c_client *i2c)
+{
+ struct rt5677_priv *rt5677 = i2c_get_clientdata(i2c);
+
+ snd_soc_unregister_codec(&i2c->dev);
+ kfree(rt5677->mic_buf);
+ kfree(rt5677->model_buf);
+ kfree(rt5677);
+ return 0;
+}
+
+void rt5677_i2c_shutdown(struct i2c_client *client)
+{
+ struct rt5677_priv *rt5677 = i2c_get_clientdata(client);
+ struct snd_soc_codec *codec = rt5677->codec;
+
+ if (codec != NULL)
+ rt5677_set_bias_level(codec, SND_SOC_BIAS_OFF);
+}
+
+struct i2c_driver rt5677_i2c_driver = {
+ .driver = {
+ .name = "rt5677",
+ .owner = THIS_MODULE,
+ },
+ .probe = rt5677_i2c_probe,
+ .remove = rt5677_i2c_remove,
+ .shutdown = rt5677_i2c_shutdown,
+ .id_table = rt5677_i2c_id,
+};
+
+static int __init rt5677_modinit(void)
+{
+ return i2c_add_driver(&rt5677_i2c_driver);
+}
+module_init(rt5677_modinit);
+
+static void __exit rt5677_modexit(void)
+{
+ i2c_del_driver(&rt5677_i2c_driver);
+}
+module_exit(rt5677_modexit);
+
+MODULE_DESCRIPTION("ASoC RT5677 driver");
+MODULE_AUTHOR("Bard Liao <bardliao@realtek.com>");
+MODULE_LICENSE("GPL");
diff --git a/sound/soc/codecs/rt5677.h b/sound/soc/codecs/rt5677.h
new file mode 100644
index 0000000..5295cfd
--- /dev/null
+++ b/sound/soc/codecs/rt5677.h
@@ -0,0 +1,1501 @@
+/*
+ * rt5677.h -- RT5677 ALSA SoC audio driver
+ *
+ * Copyright 2011 Realtek Microelectronics
+ * Author: Johnny Hsu <johnnyhsu@realtek.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __RT5677_H__
+#define __RT5677_H__
+
+#define RT5677_DEVICE_ID 0x6327
+
+/* Info */
+#define RT5677_RESET 0x00 /* DSP_OP_CODE */
+#define RT5677_VENDOR_ID 0xfd
+#define RT5677_VENDOR_ID1 0xfe
+#define RT5677_VENDOR_ID2 0xff
+/* I/O - Output */
+#define RT5677_LOUT1 0x01 /* DSP_I2C_ADDR1 */
+/* I/O - Input */
+#define RT5677_IN1 0x03 /* DSP_I2C_DATA1 */
+#define RT5677_MICBIAS 0x04 /* DSP_I2C_DATA2 */
+/* I/O - SLIMBus */
+#define RT5677_SLIMBUS_PARAM 0x07
+#define RT5677_SLIMBUS_RX 0x08
+#define RT5677_SLIMBUS_CTRL 0x09
+/* I/O */
+#define RT5677_SIDETONE_CTRL 0x13
+/* I/O - ADC/DAC */
+#define RT5677_ANA_DAC1_2_3_SRC 0x15
+#define RT5677_IF_DSP_DAC3_4_MIXER 0x16
+#define RT5677_DAC4_DIG_VOL 0x17
+#define RT5677_DAC3_DIG_VOL 0x18
+#define RT5677_DAC1_DIG_VOL 0x19
+#define RT5677_DAC2_DIG_VOL 0x1a
+#define RT5677_IF_DSP_DAC2_MIXER 0x1b
+#define RT5677_STO1_ADC_DIG_VOL 0x1c
+#define RT5677_MONO_ADC_DIG_VOL 0x1d
+#define RT5677_STO1_2_ADC_BST 0x1e
+#define RT5677_STO2_ADC_DIG_VOL 0x1f
+/* Mixer - D-D */
+#define RT5677_ADC_BST_CTRL2 0x20
+#define RT5677_STO3_4_ADC_BST 0x21
+#define RT5677_STO3_ADC_DIG_VOL 0x22
+#define RT5677_STO4_ADC_DIG_VOL 0x23
+#define RT5677_STO4_ADC_MIXER 0x24
+#define RT5677_STO3_ADC_MIXER 0x25
+#define RT5677_STO2_ADC_MIXER 0x26
+#define RT5677_STO1_ADC_MIXER 0x27
+#define RT5677_MONO_ADC_MIXER 0x28
+#define RT5677_ADC_IF_DSP_DAC1_MIXER 0x29
+#define RT5677_STO1_DAC_MIXER 0x2a
+#define RT5677_MONO_DAC_MIXER 0x2b
+#define RT5677_DD1_MIXER 0x2c
+#define RT5677_DD2_MIXER 0x2d
+#define RT5677_IF3_DATA 0x2f
+#define RT5677_IF4_DATA 0x30
+/* Mixer - PDM */
+#define RT5677_PDM_OUT_CTRL 0x31
+#define RT5677_PDM_DATA_CTRL1 0x32
+#define RT5677_PDM_DATA_CTRL2 0x33
+#define RT5677_PDM1_DATA_CTRL2 0x34
+#define RT5677_PDM1_DATA_CTRL3 0x35
+#define RT5677_PDM1_DATA_CTRL4 0x36
+#define RT5677_PDM2_DATA_CTRL2 0x37
+#define RT5677_PDM2_DATA_CTRL3 0x38
+#define RT5677_PDM2_DATA_CTRL4 0x39
+/* TDM */
+#define RT5677_TDM1_CTRL1 0x3b
+#define RT5677_TDM1_CTRL2 0x3c
+#define RT5677_TDM1_CTRL3 0x3d
+#define RT5677_TDM1_CTRL4 0x3e
+#define RT5677_TDM1_CTRL5 0x3f
+#define RT5677_TDM2_CTRL1 0x40
+#define RT5677_TDM2_CTRL2 0x41
+#define RT5677_TDM2_CTRL3 0x42
+#define RT5677_TDM2_CTRL4 0x43
+#define RT5677_TDM2_CTRL5 0x44
+/* I2C_MASTER_CTRL */
+#define RT5677_I2C_MASTER_CTRL1 0x47
+#define RT5677_I2C_MASTER_CTRL2 0x48
+#define RT5677_I2C_MASTER_CTRL3 0x49
+#define RT5677_I2C_MASTER_CTRL4 0x4a
+#define RT5677_I2C_MASTER_CTRL5 0x4b
+#define RT5677_I2C_MASTER_CTRL6 0x4c
+#define RT5677_I2C_MASTER_CTRL7 0x4d
+#define RT5677_I2C_MASTER_CTRL8 0x4e
+/* DMIC */
+#define RT5677_DMIC_CTRL1 0x50
+#define RT5677_DMIC_CTRL2 0x51
+/* Haptic Generator */
+#define RT5677_HAP_GENE_CTRL1 0x56
+#define RT5677_HAP_GENE_CTRL2 0x57
+#define RT5677_HAP_GENE_CTRL3 0x58
+#define RT5677_HAP_GENE_CTRL4 0x59
+#define RT5677_HAP_GENE_CTRL5 0x5a
+#define RT5677_HAP_GENE_CTRL6 0x5b
+#define RT5677_HAP_GENE_CTRL7 0x5c
+#define RT5677_HAP_GENE_CTRL8 0x5d
+#define RT5677_HAP_GENE_CTRL9 0x5e
+#define RT5677_HAP_GENE_CTRL10 0x5f
+/* Power */
+#define RT5677_PWR_DIG1 0x61
+#define RT5677_PWR_DIG2 0x62
+#define RT5677_PWR_ANLG1 0x63
+#define RT5677_PWR_ANLG2 0x64
+#define RT5677_PWR_DSP1 0x65
+#define RT5677_PWR_DSP_ST 0x66
+#define RT5677_PWR_DSP2 0x67
+#define RT5677_ADC_DAC_HPF_CTRL1 0x68
+/* Private Register Control */
+#define RT5677_PRIV_INDEX 0x6a
+#define RT5677_PRIV_DATA 0x6c
+/* Format - ADC/DAC */
+#define RT5677_I2S4_SDP 0x6f
+#define RT5677_I2S1_SDP 0x70
+#define RT5677_I2S2_SDP 0x71
+#define RT5677_I2S3_SDP 0x72
+#define RT5677_CLK_TREE_CTRL1 0x73
+#define RT5677_CLK_TREE_CTRL2 0x74
+#define RT5677_CLK_TREE_CTRL3 0x75
+/* Function - Analog */
+#define RT5677_PLL1_CTRL1 0x7a
+#define RT5677_PLL1_CTRL2 0x7b
+#define RT5677_PLL2_CTRL1 0x7c
+#define RT5677_PLL2_CTRL2 0x7d
+#define RT5677_GLB_CLK1 0x80
+#define RT5677_GLB_CLK2 0x81
+#define RT5677_ASRC_1 0x83
+#define RT5677_ASRC_2 0x84
+#define RT5677_ASRC_3 0x85
+#define RT5677_ASRC_4 0x86
+#define RT5677_ASRC_5 0x87
+#define RT5677_ASRC_6 0x88
+#define RT5677_ASRC_7 0x89
+#define RT5677_ASRC_8 0x8a
+#define RT5677_ASRC_9 0x8b
+#define RT5677_ASRC_10 0x8c
+#define RT5677_ASRC_11 0x8d
+#define RT5677_ASRC_12 0x8e
+#define RT5677_ASRC_13 0x8f
+#define RT5677_ASRC_14 0x90
+#define RT5677_ASRC_15 0x91
+#define RT5677_ASRC_16 0x92
+#define RT5677_ASRC_17 0x93
+#define RT5677_ASRC_18 0x94
+#define RT5677_ASRC_19 0x95
+#define RT5677_ASRC_20 0x97
+#define RT5677_ASRC_21 0x98
+#define RT5677_ASRC_22 0x99
+#define RT5677_ASRC_23 0x9a
+#define RT5677_VAD_CTRL1 0x9c
+#define RT5677_VAD_CTRL2 0x9d
+#define RT5677_VAD_CTRL3 0x9e
+#define RT5677_VAD_CTRL4 0x9f
+#define RT5677_VAD_CTRL5 0xa0
+/* Function - Digital */
+#define RT5677_DSP_INB_CTRL1 0xa3
+#define RT5677_DSP_INB_CTRL2 0xa4
+#define RT5677_DSP_IN_OUTB_CTRL 0xa5
+#define RT5677_DSP_OUTB0_1_DIG_VOL 0xa6
+#define RT5677_DSP_OUTB2_3_DIG_VOL 0xa7
+#define RT5677_DSP_OUTB4_5_DIG_VOL 0xa8
+#define RT5677_DSP_OUTB6_7_DIG_VOL 0xa9
+#define RT5677_ADC_EQ_CTRL1 0xae
+#define RT5677_ADC_EQ_CTRL2 0xaf
+#define RT5677_EQ_CTRL1 0xb0
+#define RT5677_EQ_CTRL2 0xb1
+#define RT5677_EQ_CTRL3 0xb2
+#define RT5677_SOFT_VOL_ZERO_CROSS1 0xb3
+#define RT5677_JD_CTRL1 0xb5
+#define RT5677_JD_CTRL2 0xb6
+#define RT5677_JD_CTRL3 0xb8
+#define RT5677_IRQ_CTRL1 0xbd
+#define RT5677_IRQ_CTRL2 0xbe
+#define RT5677_GPIO_ST 0xbf
+#define RT5677_GPIO_CTRL1 0xc0
+#define RT5677_GPIO_CTRL2 0xc1
+#define RT5677_GPIO_CTRL3 0xc2
+#define RT5677_STO1_ADC_HI_FILTER1 0xc5
+#define RT5677_STO1_ADC_HI_FILTER2 0xc6
+#define RT5677_MONO_ADC_HI_FILTER1 0xc7
+#define RT5677_MONO_ADC_HI_FILTER2 0xc8
+#define RT5677_STO2_ADC_HI_FILTER1 0xc9
+#define RT5677_STO2_ADC_HI_FILTER2 0xca
+#define RT5677_STO3_ADC_HI_FILTER1 0xcb
+#define RT5677_STO3_ADC_HI_FILTER2 0xcc
+#define RT5677_STO4_ADC_HI_FILTER1 0xcd
+#define RT5677_STO4_ADC_HI_FILTER2 0xce
+#define RT5677_MB_DRC_CTRL1 0xd0
+#define RT5677_DRC1_CTRL1 0xd2
+#define RT5677_DRC1_CTRL2 0xd3
+#define RT5677_DRC1_CTRL3 0xd4
+#define RT5677_DRC1_CTRL4 0xd5
+#define RT5677_DRC1_CTRL5 0xd6
+#define RT5677_DRC1_CTRL6 0xd7
+#define RT5677_DRC2_CTRL1 0xd8
+#define RT5677_DRC2_CTRL2 0xd9
+#define RT5677_DRC2_CTRL3 0xda
+#define RT5677_DRC2_CTRL4 0xdb
+#define RT5677_DRC2_CTRL5 0xdc
+#define RT5677_DRC2_CTRL6 0xdd
+#define RT5677_DRC1_HL_CTRL1 0xde
+#define RT5677_DRC1_HL_CTRL2 0xdf
+#define RT5677_DRC2_HL_CTRL1 0xe0
+#define RT5677_DRC2_HL_CTRL2 0xe1
+#define RT5677_DSP_INB1_SRC_CTRL1 0xe3
+#define RT5677_DSP_INB1_SRC_CTRL2 0xe4
+#define RT5677_DSP_INB1_SRC_CTRL3 0xe5
+#define RT5677_DSP_INB1_SRC_CTRL4 0xe6
+#define RT5677_DSP_INB2_SRC_CTRL1 0xe7
+#define RT5677_DSP_INB2_SRC_CTRL2 0xe8
+#define RT5677_DSP_INB2_SRC_CTRL3 0xe9
+#define RT5677_DSP_INB2_SRC_CTRL4 0xea
+#define RT5677_DSP_INB3_SRC_CTRL1 0xeb
+#define RT5677_DSP_INB3_SRC_CTRL2 0xec
+#define RT5677_DSP_INB3_SRC_CTRL3 0xed
+#define RT5677_DSP_INB3_SRC_CTRL4 0xee
+#define RT5677_DSP_OUTB1_SRC_CTRL1 0xef
+#define RT5677_DSP_OUTB1_SRC_CTRL2 0xf0
+#define RT5677_DSP_OUTB1_SRC_CTRL3 0xf1
+#define RT5677_DSP_OUTB1_SRC_CTRL4 0xf2
+#define RT5677_DSP_OUTB2_SRC_CTRL1 0xf3
+#define RT5677_DSP_OUTB2_SRC_CTRL2 0xf4
+#define RT5677_DSP_OUTB2_SRC_CTRL3 0xf5
+#define RT5677_DSP_OUTB2_SRC_CTRL4 0xf6
+
+/* Virtual DSP Mixer Control */
+#define RT5677_DSP_OUTB_0123_MIXER_CTRL 0xf7
+#define RT5677_DSP_OUTB_45_MIXER_CTRL 0xf8
+#define RT5677_DSP_OUTB_67_MIXER_CTRL 0xf9
+
+/* General Control */
+#define RT5677_DIG_MISC 0xfa
+#define RT5677_GEN_CTRL1 0xfb
+#define RT5677_GEN_CTRL2 0xfc
+
+/* DSP Mode I2C Control*/
+#define RT5677_DSP_I2C_OP_CODE 0x00
+#define RT5677_DSP_I2C_ADDR_LSB 0x01
+#define RT5677_DSP_I2C_ADDR_MSB 0x02
+#define RT5677_DSP_I2C_DATA_LSB 0x03
+#define RT5677_DSP_I2C_DATA_MSB 0x04
+
+/* Index of Codec Private Register definition */
+#define RT5677_PR_DRC1_CTRL_1 0x01
+#define RT5677_PR_DRC1_CTRL_2 0x02
+#define RT5677_PR_DRC1_CTRL_3 0x03
+#define RT5677_PR_DRC1_CTRL_4 0x04
+#define RT5677_PR_DRC1_CTRL_5 0x05
+#define RT5677_PR_DRC1_CTRL_6 0x06
+#define RT5677_PR_DRC1_CTRL_7 0x07
+#define RT5677_PR_DRC2_CTRL_1 0x08
+#define RT5677_PR_DRC2_CTRL_2 0x09
+#define RT5677_PR_DRC2_CTRL_3 0x0a
+#define RT5677_PR_DRC2_CTRL_4 0x0b
+#define RT5677_PR_DRC2_CTRL_5 0x0c
+#define RT5677_PR_DRC2_CTRL_6 0x0d
+#define RT5677_PR_DRC2_CTRL_7 0x0e
+#define RT5677_BIAS_CUR1 0x10
+#define RT5677_BIAS_CUR2 0x12
+#define RT5677_BIAS_CUR3 0x13
+#define RT5677_BIAS_CUR4 0x14
+#define RT5677_BIAS_CUR5 0x15
+#define RT5677_VREF_LOUT_CTRL 0x17
+#define RT5677_DIG_VOL_CTRL1 0x1a
+#define RT5677_DIG_VOL_CTRL2 0x1b
+#define RT5677_ANA_ADC_GAIN_CTRL 0x1e
+#define RT5677_VAD_SRAM_TEST1 0x20
+#define RT5677_VAD_SRAM_TEST2 0x21
+#define RT5677_VAD_SRAM_TEST3 0x22
+#define RT5677_VAD_SRAM_TEST4 0x23
+#define RT5677_PAD_DRV_CTRL 0x26
+#define RT5677_DIG_IN_PIN_ST_CTRL1 0x29
+#define RT5677_DIG_IN_PIN_ST_CTRL2 0x2a
+#define RT5677_DIG_IN_PIN_ST_CTRL3 0x2b
+#define RT5677_PLL1_INT 0x38
+#define RT5677_PLL2_INT 0x39
+#define RT5677_TEST_CTRL1 0x3a
+#define RT5677_TEST_CTRL2 0x3b
+#define RT5677_TEST_CTRL3 0x3c
+#define RT5677_CHOP_DAC_ADC 0x3d
+#define RT5677_SOFT_DEPOP_DAC_CLK_CTRL 0x3e
+#define RT5677_CROSS_OVER_FILTER1 0x90
+#define RT5677_CROSS_OVER_FILTER2 0x91
+#define RT5677_CROSS_OVER_FILTER3 0x92
+#define RT5677_CROSS_OVER_FILTER4 0x93
+#define RT5677_CROSS_OVER_FILTER5 0x94
+#define RT5677_CROSS_OVER_FILTER6 0x95
+#define RT5677_CROSS_OVER_FILTER7 0x96
+#define RT5677_CROSS_OVER_FILTER8 0x97
+#define RT5677_CROSS_OVER_FILTER9 0x98
+#define RT5677_CROSS_OVER_FILTER10 0x99
+
+/* global definition */
+#define RT5677_L_MUTE (0x1 << 15)
+#define RT5677_L_MUTE_SFT 15
+#define RT5677_VOL_L_MUTE (0x1 << 14)
+#define RT5677_VOL_L_SFT 14
+#define RT5677_R_MUTE (0x1 << 7)
+#define RT5677_R_MUTE_SFT 7
+#define RT5677_VOL_R_MUTE (0x1 << 6)
+#define RT5677_VOL_R_SFT 6
+#define RT5677_L_VOL_MASK (0x3f << 8)
+#define RT5677_L_VOL_SFT 8
+#define RT5677_R_VOL_MASK (0x3f)
+#define RT5677_R_VOL_SFT 0
+
+/* LOUT1 Control (0x01) */
+#define RT5677_LOUT1_L_MUTE (0x1 << 15)
+#define RT5677_LOUT1_L_MUTE_SFT (15)
+#define RT5677_LOUT1_L_DF (0x1 << 14)
+#define RT5677_LOUT1_L_DF_SFT (14)
+#define RT5677_LOUT2_L_MUTE (0x1 << 13)
+#define RT5677_LOUT2_L_MUTE_SFT (13)
+#define RT5677_LOUT2_L_DF (0x1 << 12)
+#define RT5677_LOUT2_L_DF_SFT (12)
+#define RT5677_LOUT3_L_MUTE (0x1 << 11)
+#define RT5677_LOUT3_L_MUTE_SFT (11)
+#define RT5677_LOUT3_L_DF (0x1 << 10)
+#define RT5677_LOUT3_L_DF_SFT (10)
+#define RT5677_LOUT1_ENH_DRV (0x1 << 9)
+#define RT5677_LOUT1_ENH_DRV_SFT (9)
+#define RT5677_LOUT2_ENH_DRV (0x1 << 8)
+#define RT5677_LOUT2_ENH_DRV_SFT (8)
+#define RT5677_LOUT3_ENH_DRV (0x1 << 7)
+#define RT5677_LOUT3_ENH_DRV_SFT (7)
+
+/* IN1 Control (0x03) */
+#define RT5677_BST_MASK1 (0xf << 12)
+#define RT5677_BST_SFT1 12
+#define RT5677_BST_MASK2 (0xf << 8)
+#define RT5677_BST_SFT2 8
+#define RT5677_IN_DF1 (0x1 << 7)
+#define RT5677_IN_DF1_SFT 7
+#define RT5677_IN_DF2 (0x1 << 6)
+#define RT5677_IN_DF2_SFT 6
+
+/* Micbias Control (0x04) */
+#define RT5677_MICBIAS1_OUTVOLT_MASK (0x1 << 15)
+#define RT5677_MICBIAS1_OUTVOLT_SFT (15)
+#define RT5677_MICBIAS1_OUTVOLT_2_7V (0x0 << 15)
+#define RT5677_MICBIAS1_OUTVOLT_2_25V (0x1 << 15)
+#define RT5677_MICBIAS1_CTRL_VDD_MASK (0x1 << 14)
+#define RT5677_MICBIAS1_CTRL_VDD_SFT (14)
+#define RT5677_MICBIAS1_CTRL_VDD_1_8V (0x0 << 14)
+#define RT5677_MICBIAS1_CTRL_VDD_3_3V (0x1 << 14)
+#define RT5677_MICBIAS1_OVCD_MASK (0x1 << 11)
+#define RT5677_MICBIAS1_OVCD_SHIFT (11)
+#define RT5677_MICBIAS1_OVCD_DIS (0x0 << 11)
+#define RT5677_MICBIAS1_OVCD_EN (0x1 << 11)
+#define RT5677_MICBIAS1_OVTH_MASK (0x3 << 9)
+#define RT5677_MICBIAS1_OVTH_SFT 9
+#define RT5677_MICBIAS1_OVTH_640UA (0x0 << 9)
+#define RT5677_MICBIAS1_OVTH_1280UA (0x1 << 9)
+#define RT5677_MICBIAS1_OVTH_1920UA (0x2 << 9)
+
+/* SLIMbus Parameter (0x07) */
+
+/* SLIMbus Rx (0x08) */
+#define RT5677_SLB_ADC4_MASK (0x3 << 6)
+#define RT5677_SLB_ADC4_SFT 6
+#define RT5677_SLB_ADC3_MASK (0x3 << 4)
+#define RT5677_SLB_ADC3_SFT 4
+#define RT5677_SLB_ADC2_MASK (0x3 << 2)
+#define RT5677_SLB_ADC2_SFT 2
+#define RT5677_SLB_ADC1_MASK (0x3 << 0)
+#define RT5677_SLB_ADC1_SFT 0
+
+/* SLIMbus Control (0x09) */
+
+/* Sidetone Control (0x13) */
+#define RT5677_ST_HPF_SEL_MASK (0x7 << 13)
+#define RT5677_ST_HPF_SEL_SFT 13
+#define RT5677_ST_HPF_PATH (0x1 << 12)
+#define RT5677_ST_HPF_PATH_SFT 12
+#define RT5677_ST_SEL_MASK (0x7 << 9)
+#define RT5677_ST_SEL_SFT 9
+#define RT5677_ST_EN (0x1 << 6)
+#define RT5677_ST_EN_SFT 6
+
+/* Analog DAC1/2/3 Source Control (0x15) */
+#define RT5677_ANA_DAC3_SRC_SEL_MASK (0x3 << 4)
+#define RT5677_ANA_DAC3_SRC_SEL_SFT 4
+#define RT5677_ANA_DAC1_2_SRC_SEL_MASK (0x3 << 0)
+#define RT5677_ANA_DAC1_2_SRC_SEL_SFT 0
+
+/* IF/DSP to DAC3/4 Mixer Control (0x16) */
+#define RT5677_M_DAC4_L_VOL (0x1 << 15)
+#define RT5677_M_DAC4_L_VOL_SFT 15
+#define RT5677_SEL_DAC4_L_SRC_MASK (0x7 << 12)
+#define RT5677_SEL_DAC4_L_SRC_SFT 12
+#define RT5677_M_DAC4_R_VOL (0x1 << 11)
+#define RT5677_M_DAC4_R_VOL_SFT 11
+#define RT5677_SEL_DAC4_R_SRC_MASK (0x7 << 8)
+#define RT5677_SEL_DAC4_R_SRC_SFT 8
+#define RT5677_M_DAC3_L_VOL (0x1 << 7)
+#define RT5677_M_DAC3_L_VOL_SFT 7
+#define RT5677_SEL_DAC3_L_SRC_MASK (0x7 << 4)
+#define RT5677_SEL_DAC3_L_SRC_SFT 4
+#define RT5677_M_DAC3_R_VOL (0x1 << 3)
+#define RT5677_M_DAC3_R_VOL_SFT 3
+#define RT5677_SEL_DAC3_R_SRC_MASK (0x7 << 0)
+#define RT5677_SEL_DAC3_R_SRC_SFT 0
+
+/* DAC4 Digital Volume (0x17) */
+#define RT5677_DAC4_L_VOL_MASK (0xff << 8)
+#define RT5677_DAC4_L_VOL_SFT 8
+#define RT5677_DAC4_R_VOL_MASK (0xff)
+#define RT5677_DAC4_R_VOL_SFT 0
+
+/* DAC3 Digital Volume (0x18) */
+#define RT5677_DAC3_L_VOL_MASK (0xff << 8)
+#define RT5677_DAC3_L_VOL_SFT 8
+#define RT5677_DAC3_R_VOL_MASK (0xff)
+#define RT5677_DAC3_R_VOL_SFT 0
+
+/* DAC1 Digital Volume (0x19) */
+#define RT5677_DAC1_L_VOL_MASK (0xff << 8)
+#define RT5677_DAC1_L_VOL_SFT 8
+#define RT5677_DAC1_R_VOL_MASK (0xff)
+#define RT5677_DAC1_R_VOL_SFT 0
+
+/* DAC2 Digital Volume (0x1a) */
+#define RT5677_DAC2_L_VOL_MASK (0xff << 8)
+#define RT5677_DAC2_L_VOL_SFT 8
+#define RT5677_DAC2_R_VOL_MASK (0xff)
+#define RT5677_DAC2_R_VOL_SFT 0
+
+/* IF/DSP to DAC2 Mixer Control (0x1b) */
+#define RT5677_M_DAC2_L_VOL (0x1 << 7)
+#define RT5677_M_DAC2_L_VOL_SFT 7
+#define RT5677_SEL_DAC2_L_SRC_MASK (0x7 << 4)
+#define RT5677_SEL_DAC2_L_SRC_SFT 4
+#define RT5677_M_DAC2_R_VOL (0x1 << 3)
+#define RT5677_M_DAC2_R_VOL_SFT 3
+#define RT5677_SEL_DAC2_R_SRC_MASK (0x7 << 0)
+#define RT5677_SEL_DAC2_R_SRC_SFT 0
+
+/* Stereo1 ADC Digital Volume Control (0x1c) */
+#define RT5677_STO1_ADC_L_VOL_MASK (0x7f << 8)
+#define RT5677_STO1_ADC_L_VOL_SFT 8
+#define RT5677_STO1_ADC_R_VOL_MASK (0x7f)
+#define RT5677_STO1_ADC_R_VOL_SFT 0
+
+/* Mono ADC Digital Volume Control (0x1d) */
+#define RT5677_MONO_ADC_L_VOL_MASK (0x7f << 8)
+#define RT5677_MONO_ADC_L_VOL_SFT 8
+#define RT5677_MONO_ADC_R_VOL_MASK (0x7f)
+#define RT5677_MONO_ADC_R_VOL_SFT 0
+
+/* Stereo 1/2 ADC Boost Gain Control (0x1e) */
+#define RT5677_STO1_ADC_L_BST_MASK (0x3 << 14)
+#define RT5677_STO1_ADC_L_BST_SFT 14
+#define RT5677_STO1_ADC_R_BST_MASK (0x3 << 12)
+#define RT5677_STO1_ADC_R_BST_SFT 12
+#define RT5677_STO1_ADC_COMP_MASK (0x3 << 10)
+#define RT5677_STO1_ADC_COMP_SFT 10
+#define RT5677_STO2_ADC_L_BST_MASK (0x3 << 8)
+#define RT5677_STO2_ADC_L_BST_SFT 8
+#define RT5677_STO2_ADC_R_BST_MASK (0x3 << 6)
+#define RT5677_STO2_ADC_R_BST_SFT 6
+#define RT5677_STO2_ADC_COMP_MASK (0x3 << 4)
+#define RT5677_STO2_ADC_COMP_SFT 4
+
+/* Stereo2 ADC Digital Volume Control (0x1f) */
+#define RT5677_STO2_ADC_L_VOL_MASK (0x7f << 8)
+#define RT5677_STO2_ADC_L_VOL_SFT 8
+#define RT5677_STO2_ADC_R_VOL_MASK (0x7f)
+#define RT5677_STO2_ADC_R_VOL_SFT 0
+
+/* ADC Boost Gain Control 2 (0x20) */
+#define RT5677_MONO_ADC_L_BST_MASK (0x3 << 14)
+#define RT5677_MONO_ADC_L_BST_SFT 14
+#define RT5677_MONO_ADC_R_BST_MASK (0x3 << 12)
+#define RT5677_MONO_ADC_R_BST_SFT 12
+#define RT5677_MONO_ADC_COMP_MASK (0x3 << 10)
+#define RT5677_MONO_ADC_COMP_SFT 10
+
+/* Stereo 3/4 ADC Boost Gain Control (0x21) */
+#define RT5677_STO3_ADC_L_BST_MASK (0x3 << 14)
+#define RT5677_STO3_ADC_L_BST_SFT 14
+#define RT5677_STO3_ADC_R_BST_MASK (0x3 << 12)
+#define RT5677_STO3_ADC_R_BST_SFT 12
+#define RT5677_STO3_ADC_COMP_MASK (0x3 << 10)
+#define RT5677_STO3_ADC_COMP_SFT 10
+#define RT5677_STO4_ADC_L_BST_MASK (0x3 << 8)
+#define RT5677_STO4_ADC_L_BST_SFT 8
+#define RT5677_STO4_ADC_R_BST_MASK (0x3 << 6)
+#define RT5677_STO4_ADC_R_BST_SFT 6
+#define RT5677_STO4_ADC_COMP_MASK (0x3 << 4)
+#define RT5677_STO4_ADC_COMP_SFT 4
+
+/* Stereo3 ADC Digital Volume Control (0x22) */
+#define RT5677_STO3_ADC_L_VOL_MASK (0x7f << 8)
+#define RT5677_STO3_ADC_L_VOL_SFT 8
+#define RT5677_STO3_ADC_R_VOL_MASK (0x7f)
+#define RT5677_STO3_ADC_R_VOL_SFT 0
+
+/* Stereo4 ADC Digital Volume Control (0x23) */
+#define RT5677_STO4_ADC_L_VOL_MASK (0x7f << 8)
+#define RT5677_STO4_ADC_L_VOL_SFT 8
+#define RT5677_STO4_ADC_R_VOL_MASK (0x7f)
+#define RT5677_STO4_ADC_R_VOL_SFT 0
+
+/* Stereo4 ADC Mixer control (0x24) */
+#define RT5677_M_STO4_ADC_L2 (0x1 << 15)
+#define RT5677_M_STO4_ADC_L2_SFT 15
+#define RT5677_M_STO4_ADC_L1 (0x1 << 14)
+#define RT5677_M_STO4_ADC_L1_SFT 14
+#define RT5677_SEL_STO4_ADC1_MASK (0x3 << 12)
+#define RT5677_SEL_STO4_ADC1_SFT 12
+#define RT5677_SEL_STO4_ADC2_MASK (0x3 << 10)
+#define RT5677_SEL_STO4_ADC2_SFT 10
+#define RT5677_SEL_STO4_DMIC_MASK (0x3 << 8)
+#define RT5677_SEL_STO4_DMIC_SFT 8
+#define RT5677_M_STO4_ADC_R1 (0x1 << 7)
+#define RT5677_M_STO4_ADC_R1_SFT 7
+#define RT5677_M_STO4_ADC_R2 (0x1 << 6)
+#define RT5677_M_STO4_ADC_R2_SFT 6
+
+/* Stereo3 ADC Mixer control (0x25) */
+#define RT5677_M_STO3_ADC_L2 (0x1 << 15)
+#define RT5677_M_STO3_ADC_L2_SFT 15
+#define RT5677_M_STO3_ADC_L1 (0x1 << 14)
+#define RT5677_M_STO3_ADC_L1_SFT 14
+#define RT5677_SEL_STO3_ADC1_MASK (0x3 << 12)
+#define RT5677_SEL_STO3_ADC1_SFT 12
+#define RT5677_SEL_STO3_ADC2_MASK (0x3 << 10)
+#define RT5677_SEL_STO3_ADC2_SFT 10
+#define RT5677_SEL_STO3_DMIC_MASK (0x3 << 8)
+#define RT5677_SEL_STO3_DMIC_SFT 8
+#define RT5677_M_STO3_ADC_R1 (0x1 << 7)
+#define RT5677_M_STO3_ADC_R1_SFT 7
+#define RT5677_M_STO3_ADC_R2 (0x1 << 6)
+#define RT5677_M_STO3_ADC_R2_SFT 6
+
+/* Stereo2 ADC Mixer Control (0x26) */
+#define RT5677_M_STO2_ADC_L2 (0x1 << 15)
+#define RT5677_M_STO2_ADC_L2_SFT 15
+#define RT5677_M_STO2_ADC_L1 (0x1 << 14)
+#define RT5677_M_STO2_ADC_L1_SFT 14
+#define RT5677_SEL_STO2_ADC1_MASK (0x3 << 12)
+#define RT5677_SEL_STO2_ADC1_SFT 12
+#define RT5677_SEL_STO2_ADC2_MASK (0x3 << 10)
+#define RT5677_SEL_STO2_ADC2_SFT 10
+#define RT5677_SEL_STO2_DMIC_MASK (0x3 << 8)
+#define RT5677_SEL_STO2_DMIC_SFT 8
+#define RT5677_M_STO2_ADC_R1 (0x1 << 7)
+#define RT5677_M_STO2_ADC_R1_SFT 7
+#define RT5677_M_STO2_ADC_R2 (0x1 << 6)
+#define RT5677_M_STO2_ADC_R2_SFT 6
+#define RT5677_SEL_STO2_LR_MIX_MASK (0x1 << 0)
+#define RT5677_SEL_STO2_LR_MIX_SFT 0
+#define RT5677_SEL_STO2_LR_MIX_L (0x0 << 0)
+#define RT5677_SEL_STO2_LR_MIX_LR (0x1 << 0)
+
+/* Stereo1 ADC Mixer control (0x27) */
+#define RT5677_M_STO1_ADC_L2 (0x1 << 15)
+#define RT5677_M_STO1_ADC_L2_SFT 15
+#define RT5677_M_STO1_ADC_L1 (0x1 << 14)
+#define RT5677_M_STO1_ADC_L1_SFT 14
+#define RT5677_SEL_STO1_ADC1_MASK (0x3 << 12)
+#define RT5677_SEL_STO1_ADC1_SFT 12
+#define RT5677_SEL_STO1_ADC2_MASK (0x3 << 10)
+#define RT5677_SEL_STO1_ADC2_SFT 10
+#define RT5677_SEL_STO1_DMIC_MASK (0x3 << 8)
+#define RT5677_SEL_STO1_DMIC_SFT 8
+#define RT5677_M_STO1_ADC_R1 (0x1 << 7)
+#define RT5677_M_STO1_ADC_R1_SFT 7
+#define RT5677_M_STO1_ADC_R2 (0x1 << 6)
+#define RT5677_M_STO1_ADC_R2_SFT 6
+
+/* Mono ADC Mixer control (0x28) */
+#define RT5677_M_MONO_ADC_L2 (0x1 << 15)
+#define RT5677_M_MONO_ADC_L2_SFT 15
+#define RT5677_M_MONO_ADC_L1 (0x1 << 14)
+#define RT5677_M_MONO_ADC_L1_SFT 14
+#define RT5677_SEL_MONO_ADC_L1_MASK (0x3 << 12)
+#define RT5677_SEL_MONO_ADC_L1_SFT 12
+#define RT5677_SEL_MONO_ADC_L2_MASK (0x3 << 10)
+#define RT5677_SEL_MONO_ADC_L2_SFT 10
+#define RT5677_SEL_MONO_DMIC_L_MASK (0x3 << 8)
+#define RT5677_SEL_MONO_DMIC_L_SFT 8
+#define RT5677_M_MONO_ADC_R1 (0x1 << 7)
+#define RT5677_M_MONO_ADC_R1_SFT 7
+#define RT5677_M_MONO_ADC_R2 (0x1 << 6)
+#define RT5677_M_MONO_ADC_R2_SFT 6
+#define RT5677_SEL_MONO_ADC_R1_MASK (0x3 << 4)
+#define RT5677_SEL_MONO_ADC_R1_SFT 4
+#define RT5677_SEL_MONO_ADC_R2_MASK (0x3 << 2)
+#define RT5677_SEL_MONO_ADC_R2_SFT 2
+#define RT5677_SEL_MONO_DMIC_R_MASK (0x3 << 0)
+#define RT5677_SEL_MONO_DMIC_R_SFT 0
+
+/* ADC/IF/DSP to DAC1 Mixer control (0x29) */
+#define RT5677_M_ADDA_MIXER1_L (0x1 << 15)
+#define RT5677_M_ADDA_MIXER1_L_SFT 15
+#define RT5677_M_DAC1_L (0x1 << 14)
+#define RT5677_M_DAC1_L_SFT 14
+#define RT5677_DAC1_L_SEL_MASK (0x7 << 8)
+#define RT5677_DAC1_L_SEL_SFT 8
+#define RT5677_M_ADDA_MIXER1_R (0x1 << 7)
+#define RT5677_M_ADDA_MIXER1_R_SFT 7
+#define RT5677_M_DAC1_R (0x1 << 6)
+#define RT5677_M_DAC1_R_SFT 6
+#define RT5677_ADDA1_SEL_MASK (0x3 << 0)
+#define RT5677_ADDA1_SEL_SFT 0
+
+/* Stereo1 DAC Mixer L/R Control (0x2a) */
+#define RT5677_M_ST_DAC1_L (0x1 << 15)
+#define RT5677_M_ST_DAC1_L_SFT 15
+#define RT5677_M_DAC1_L_STO_L (0x1 << 13)
+#define RT5677_M_DAC1_L_STO_L_SFT 13
+#define RT5677_DAC1_L_STO_L_VOL_MASK (0x1 << 12)
+#define RT5677_DAC1_L_STO_L_VOL_SFT 12
+#define RT5677_M_DAC2_L_STO_L (0x1 << 11)
+#define RT5677_M_DAC2_L_STO_L_SFT 11
+#define RT5677_DAC2_L_STO_L_VOL_MASK (0x1 << 10)
+#define RT5677_DAC2_L_STO_L_VOL_SFT 10
+#define RT5677_M_DAC1_R_STO_L (0x1 << 9)
+#define RT5677_M_DAC1_R_STO_L_SFT 9
+#define RT5677_DAC1_R_STO_L_VOL_MASK (0x1 << 8)
+#define RT5677_DAC1_R_STO_L_VOL_SFT 8
+#define RT5677_M_ST_DAC1_R (0x1 << 7)
+#define RT5677_M_ST_DAC1_R_SFT 7
+#define RT5677_M_DAC1_R_STO_R (0x1 << 5)
+#define RT5677_M_DAC1_R_STO_R_SFT 5
+#define RT5677_DAC1_R_STO_R_VOL_MASK (0x1 << 4)
+#define RT5677_DAC1_R_STO_R_VOL_SFT 4
+#define RT5677_M_DAC2_R_STO_R (0x1 << 3)
+#define RT5677_M_DAC2_R_STO_R_SFT 3
+#define RT5677_DAC2_R_STO_R_VOL_MASK (0x1 << 2)
+#define RT5677_DAC2_R_STO_R_VOL_SFT 2
+#define RT5677_M_DAC1_L_STO_R (0x1 << 1)
+#define RT5677_M_DAC1_L_STO_R_SFT 1
+#define RT5677_DAC1_L_STO_R_VOL_MASK (0x1 << 0)
+#define RT5677_DAC1_L_STO_R_VOL_SFT 0
+
+/* Mono DAC Mixer L/R Control (0x2b) */
+#define RT5677_M_ST_DAC2_L (0x1 << 15)
+#define RT5677_M_ST_DAC2_L_SFT 15
+#define RT5677_M_DAC2_L_MONO_L (0x1 << 13)
+#define RT5677_M_DAC2_L_MONO_L_SFT 13
+#define RT5677_DAC2_L_MONO_L_VOL_MASK (0x1 << 12)
+#define RT5677_DAC2_L_MONO_L_VOL_SFT 12
+#define RT5677_M_DAC2_R_MONO_L (0x1 << 11)
+#define RT5677_M_DAC2_R_MONO_L_SFT 11
+#define RT5677_DAC2_R_MONO_L_VOL_MASK (0x1 << 10)
+#define RT5677_DAC2_R_MONO_L_VOL_SFT 10
+#define RT5677_M_DAC1_L_MONO_L (0x1 << 9)
+#define RT5677_M_DAC1_L_MONO_L_SFT 9
+#define RT5677_DAC1_L_MONO_L_VOL_MASK (0x1 << 8)
+#define RT5677_DAC1_L_MONO_L_VOL_SFT 8
+#define RT5677_M_ST_DAC2_R (0x1 << 7)
+#define RT5677_M_ST_DAC2_R_SFT 7
+#define RT5677_M_DAC2_R_MONO_R (0x1 << 5)
+#define RT5677_M_DAC2_R_MONO_R_SFT 5
+#define RT5677_DAC2_R_MONO_R_VOL_MASK (0x1 << 4)
+#define RT5677_DAC2_R_MONO_R_VOL_SFT 4
+#define RT5677_M_DAC1_R_MONO_R (0x1 << 3)
+#define RT5677_M_DAC1_R_MONO_R_SFT 3
+#define RT5677_DAC1_R_MONO_R_VOL_MASK (0x1 << 2)
+#define RT5677_DAC1_R_MONO_R_VOL_SFT 2
+#define RT5677_M_DAC2_L_MONO_R (0x1 << 1)
+#define RT5677_M_DAC2_L_MONO_R_SFT 1
+#define RT5677_DAC2_L_MONO_R_VOL_MASK (0x1 << 0)
+#define RT5677_DAC2_L_MONO_R_VOL_SFT 0
+
+/* DD Mixer 1 Control (0x2c) */
+#define RT5677_M_STO_L_DD1_L (0x1 << 15)
+#define RT5677_M_STO_L_DD1_L_SFT 15
+#define RT5677_STO_L_DD1_L_VOL_MASK (0x1 << 14)
+#define RT5677_STO_L_DD1_L_VOL_SFT 14
+#define RT5677_M_MONO_L_DD1_L (0x1 << 13)
+#define RT5677_M_MONO_L_DD1_L_SFT 13
+#define RT5677_MONO_L_DD1_L_VOL_MASK (0x1 << 12)
+#define RT5677_MONO_L_DD1_L_VOL_SFT 12
+#define RT5677_M_DAC3_L_DD1_L (0x1 << 11)
+#define RT5677_M_DAC3_L_DD1_L_SFT 11
+#define RT5677_DAC3_L_DD1_L_VOL_MASK (0x1 << 10)
+#define RT5677_DAC3_L_DD1_L_VOL_SFT 10
+#define RT5677_M_DAC3_R_DD1_L (0x1 << 9)
+#define RT5677_M_DAC3_R_DD1_L_SFT 9
+#define RT5677_DAC3_R_DD1_L_VOL_MASK (0x1 << 8)
+#define RT5677_DAC3_R_DD1_L_VOL_SFT 8
+#define RT5677_M_STO_R_DD1_R (0x1 << 7)
+#define RT5677_M_STO_R_DD1_R_SFT 7
+#define RT5677_STO_R_DD1_R_VOL_MASK (0x1 << 6)
+#define RT5677_STO_R_DD1_R_VOL_SFT 6
+#define RT5677_M_MONO_R_DD1_R (0x1 << 5)
+#define RT5677_M_MONO_R_DD1_R_SFT 5
+#define RT5677_MONO_R_DD1_R_VOL_MASK (0x1 << 4)
+#define RT5677_MONO_R_DD1_R_VOL_SFT 4
+#define RT5677_M_DAC3_R_DD1_R (0x1 << 3)
+#define RT5677_M_DAC3_R_DD1_R_SFT 3
+#define RT5677_DAC3_R_DD1_R_VOL_MASK (0x1 << 2)
+#define RT5677_DAC3_R_DD1_R_VOL_SFT 2
+#define RT5677_M_DAC3_L_DD1_R (0x1 << 1)
+#define RT5677_M_DAC3_L_DD1_R_SFT 1
+#define RT5677_DAC3_L_DD1_R_VOL_MASK (0x1 << 0)
+#define RT5677_DAC3_L_DD1_R_VOL_SFT 0
+
+/* DD Mixer 2 Control (0x2d) */
+#define RT5677_M_STO_L_DD2_L (0x1 << 15)
+#define RT5677_M_STO_L_DD2_L_SFT 15
+#define RT5677_STO_L_DD2_L_VOL_MASK (0x1 << 14)
+#define RT5677_STO_L_DD2_L_VOL_SFT 14
+#define RT5677_M_MONO_L_DD2_L (0x1 << 13)
+#define RT5677_M_MONO_L_DD2_L_SFT 13
+#define RT5677_MONO_L_DD2_L_VOL_MASK (0x1 << 12)
+#define RT5677_MONO_L_DD2_L_VOL_SFT 12
+#define RT5677_M_DAC4_L_DD2_L (0x1 << 11)
+#define RT5677_M_DAC4_L_DD2_L_SFT 11
+#define RT5677_DAC4_L_DD2_L_VOL_MASK (0x1 << 10)
+#define RT5677_DAC4_L_DD2_L_VOL_SFT 10
+#define RT5677_M_DAC4_R_DD2_L (0x1 << 9)
+#define RT5677_M_DAC4_R_DD2_L_SFT 9
+#define RT5677_DAC4_R_DD2_L_VOL_MASK (0x1 << 8)
+#define RT5677_DAC4_R_DD2_L_VOL_SFT 8
+#define RT5677_M_STO_R_DD2_R (0x1 << 7)
+#define RT5677_M_STO_R_DD2_R_SFT 7
+#define RT5677_STO_R_DD2_R_VOL_MASK (0x1 << 6)
+#define RT5677_STO_R_DD2_R_VOL_SFT 6
+#define RT5677_M_MONO_R_DD2_R (0x1 << 5)
+#define RT5677_M_MONO_R_DD2_R_SFT 5
+#define RT5677_MONO_R_DD2_R_VOL_MASK (0x1 << 4)
+#define RT5677_MONO_R_DD2_R_VOL_SFT 4
+#define RT5677_M_DAC4_R_DD2_R (0x1 << 3)
+#define RT5677_M_DAC4_R_DD2_R_SFT 3
+#define RT5677_DAC4_R_DD2_R_VOL_MASK (0x1 << 2)
+#define RT5677_DAC4_R_DD2_R_VOL_SFT 2
+#define RT5677_M_DAC4_L_DD2_R (0x1 << 1)
+#define RT5677_M_DAC4_L_DD2_R_SFT 1
+#define RT5677_DAC4_L_DD2_R_VOL_MASK (0x1 << 0)
+#define RT5677_DAC4_L_DD2_R_VOL_SFT 0
+
+/* IF3 data control (0x2f) */
+#define RT5677_IF3_DAC_SEL_MASK (0x3 << 6)
+#define RT5677_IF3_DAC_SEL_SFT 6
+#define RT5677_IF3_ADC_SEL_MASK (0x3 << 4)
+#define RT5677_IF3_ADC_SEL_SFT 4
+#define RT5677_IF3_ADC_IN_MASK (0xf << 0)
+#define RT5677_IF3_ADC_IN_SFT 0
+
+/* IF4 data control (0x30) */
+#define RT5677_IF4_ADC_IN_MASK (0xf << 4)
+#define RT5677_IF4_ADC_IN_SFT 4
+#define RT5677_IF4_DAC_SEL_MASK (0x3 << 2)
+#define RT5677_IF4_DAC_SEL_SFT 2
+#define RT5677_IF4_ADC_SEL_MASK (0x3 << 0)
+#define RT5677_IF4_ADC_SEL_SFT 0
+
+/* PDM Output Control (0x31) */
+#define RT5677_M_PDM1_L (0x1 << 15)
+#define RT5677_M_PDM1_L_SFT 15
+#define RT5677_SEL_PDM1_L_MASK (0x3 << 12)
+#define RT5677_SEL_PDM1_L_SFT 12
+#define RT5677_M_PDM1_R (0x1 << 11)
+#define RT5677_M_PDM1_R_SFT 11
+#define RT5677_SEL_PDM1_R_MASK (0x3 << 8)
+#define RT5677_SEL_PDM1_R_SFT 8
+#define RT5677_M_PDM2_L (0x1 << 7)
+#define RT5677_M_PDM2_L_SFT 7
+#define RT5677_SEL_PDM2_L_MASK (0x3 << 4)
+#define RT5677_SEL_PDM2_L_SFT 4
+#define RT5677_M_PDM2_R (0x1 << 3)
+#define RT5677_M_PDM2_R_SFT 3
+#define RT5677_SEL_PDM2_R_MASK (0x3 << 0)
+#define RT5677_SEL_PDM2_R_SFT 0
+
+/* PDM I2C / Data Control 1 (0x32) */
+#define RT5677_PDM2_PW_DOWN (0x1 << 7)
+#define RT5677_PDM1_PW_DOWN (0x1 << 6)
+#define RT5677_PDM2_BUSY (0x1 << 5)
+#define RT5677_PDM1_BUSY (0x1 << 4)
+#define RT5677_PDM_PATTERN (0x1 << 3)
+#define RT5677_PDM_GAIN (0x1 << 2)
+#define RT5677_PDM_DIV_MASK (0x3 << 0)
+
+/* PDM I2C / Data Control 2 (0x33) */
+#define RT5677_PDM1_I2C_ID (0xf << 12)
+#define RT5677_PDM1_EXE (0x1 << 11)
+#define RT5677_PDM1_I2C_CMD (0x1 << 10)
+#define RT5677_PDM1_I2C_EXE (0x1 << 9)
+#define RT5677_PDM1_I2C_BUSY (0x1 << 8)
+#define RT5677_PDM2_I2C_ID (0xf << 4)
+#define RT5677_PDM2_EXE (0x1 << 3)
+#define RT5677_PDM2_I2C_CMD (0x1 << 2)
+#define RT5677_PDM2_I2C_EXE (0x1 << 1)
+#define RT5677_PDM2_I2C_BUSY (0x1 << 0)
+
+/* MX3C TDM1 control 1 (0x3c) */
+#define RT5677_IF1_ADC4_MASK (0x3 << 10)
+#define RT5677_IF1_ADC4_SFT 10
+#define RT5677_IF1_ADC3_MASK (0x3 << 8)
+#define RT5677_IF1_ADC3_SFT 8
+#define RT5677_IF1_ADC2_MASK (0x3 << 6)
+#define RT5677_IF1_ADC2_SFT 6
+#define RT5677_IF1_ADC1_MASK (0x3 << 4)
+#define RT5677_IF1_ADC1_SFT 4
+
+/* MX41 TDM2 control 1 (0x41) */
+#define RT5677_IF2_ADC4_MASK (0x3 << 10)
+#define RT5677_IF2_ADC4_SFT 10
+#define RT5677_IF2_ADC3_MASK (0x3 << 8)
+#define RT5677_IF2_ADC3_SFT 8
+#define RT5677_IF2_ADC2_MASK (0x3 << 6)
+#define RT5677_IF2_ADC2_SFT 6
+#define RT5677_IF2_ADC1_MASK (0x3 << 4)
+#define RT5677_IF2_ADC1_SFT 4
+
+/* Digital Microphone Control 1 (0x50) */
+#define RT5677_DMIC_1_EN_MASK (0x1 << 15)
+#define RT5677_DMIC_1_EN_SFT 15
+#define RT5677_DMIC_1_DIS (0x0 << 15)
+#define RT5677_DMIC_1_EN (0x1 << 15)
+#define RT5677_DMIC_2_EN_MASK (0x1 << 14)
+#define RT5677_DMIC_2_EN_SFT 14
+#define RT5677_DMIC_2_DIS (0x0 << 14)
+#define RT5677_DMIC_2_EN (0x1 << 14)
+#define RT5677_DMIC_L_STO1_LH_MASK (0x1 << 13)
+#define RT5677_DMIC_L_STO1_LH_SFT 13
+#define RT5677_DMIC_L_STO1_LH_FALLING (0x0 << 13)
+#define RT5677_DMIC_L_STO1_LH_RISING (0x1 << 13)
+#define RT5677_DMIC_R_STO1_LH_MASK (0x1 << 12)
+#define RT5677_DMIC_R_STO1_LH_SFT 12
+#define RT5677_DMIC_R_STO1_LH_FALLING (0x0 << 12)
+#define RT5677_DMIC_R_STO1_LH_RISING (0x1 << 12)
+#define RT5677_DMIC_L_STO3_LH_MASK (0x1 << 11)
+#define RT5677_DMIC_L_STO3_LH_SFT 11
+#define RT5677_DMIC_L_STO3_LH_FALLING (0x0 << 11)
+#define RT5677_DMIC_L_STO3_LH_RISING (0x1 << 11)
+#define RT5677_DMIC_R_STO3_LH_MASK (0x1 << 10)
+#define RT5677_DMIC_R_STO3_LH_SFT 10
+#define RT5677_DMIC_R_STO3_LH_FALLING (0x0 << 10)
+#define RT5677_DMIC_R_STO3_LH_RISING (0x1 << 10)
+#define RT5677_DMIC_L_STO2_LH_MASK (0x1 << 9)
+#define RT5677_DMIC_L_STO2_LH_SFT 9
+#define RT5677_DMIC_L_STO2_LH_FALLING (0x0 << 9)
+#define RT5677_DMIC_L_STO2_LH_RISING (0x1 << 9)
+#define RT5677_DMIC_R_STO2_LH_MASK (0x1 << 8)
+#define RT5677_DMIC_R_STO2_LH_SFT 8
+#define RT5677_DMIC_R_STO2_LH_FALLING (0x0 << 8)
+#define RT5677_DMIC_R_STO2_LH_RISING (0x1 << 8)
+#define RT5677_DMIC_CLK_MASK (0x7 << 5)
+#define RT5677_DMIC_CLK_SFT 5
+#define RT5677_DMIC_3_EN_MASK (0x1 << 4)
+#define RT5677_DMIC_3_EN_SFT 4
+#define RT5677_DMIC_3_DIS (0x0 << 4)
+#define RT5677_DMIC_3_EN (0x1 << 4)
+#define RT5677_DMIC_R_MONO_LH_MASK (0x1 << 2)
+#define RT5677_DMIC_R_MONO_LH_SFT 2
+#define RT5677_DMIC_R_MONO_LH_FALLING (0x0 << 2)
+#define RT5677_DMIC_R_MONO_LH_RISING (0x1 << 2)
+#define RT5677_DMIC_L_STO4_LH_MASK (0x1 << 1)
+#define RT5677_DMIC_L_STO4_LH_SFT 1
+#define RT5677_DMIC_L_STO4_LH_FALLING (0x0 << 1)
+#define RT5677_DMIC_L_STO4_LH_RISING (0x1 << 1)
+#define RT5677_DMIC_R_STO4_LH_MASK (0x1 << 0)
+#define RT5677_DMIC_R_STO4_LH_SFT 0
+#define RT5677_DMIC_R_STO4_LH_FALLING (0x0 << 0)
+#define RT5677_DMIC_R_STO4_LH_RISING (0x1 << 0)
+
+/* Digital Microphone Control 2 (0x51) */
+#define RT5677_DMIC_4_EN_MASK (0x1 << 15)
+#define RT5677_DMIC_4_EN_SFT 15
+#define RT5677_DMIC_4_DIS (0x0 << 15)
+#define RT5677_DMIC_4_EN (0x1 << 15)
+#define RT5677_DMIC_4L_LH_MASK (0x1 << 7)
+#define RT5677_DMIC_4L_LH_SFT 7
+#define RT5677_DMIC_4L_LH_FALLING (0x0 << 7)
+#define RT5677_DMIC_4L_LH_RISING (0x1 << 7)
+#define RT5677_DMIC_4R_LH_MASK (0x1 << 6)
+#define RT5677_DMIC_4R_LH_SFT 6
+#define RT5677_DMIC_4R_LH_FALLING (0x0 << 6)
+#define RT5677_DMIC_4R_LH_RISING (0x1 << 6)
+#define RT5677_DMIC_3L_LH_MASK (0x1 << 5)
+#define RT5677_DMIC_3L_LH_SFT 5
+#define RT5677_DMIC_3L_LH_FALLING (0x0 << 5)
+#define RT5677_DMIC_3L_LH_RISING (0x1 << 5)
+#define RT5677_DMIC_3R_LH_MASK (0x1 << 4)
+#define RT5677_DMIC_3R_LH_SFT 4
+#define RT5677_DMIC_3R_LH_FALLING (0x0 << 4)
+#define RT5677_DMIC_3R_LH_RISING (0x1 << 4)
+#define RT5677_DMIC_2L_LH_MASK (0x1 << 3)
+#define RT5677_DMIC_2L_LH_SFT 3
+#define RT5677_DMIC_2L_LH_FALLING (0x0 << 3)
+#define RT5677_DMIC_2L_LH_RISING (0x1 << 3)
+#define RT5677_DMIC_2R_LH_MASK (0x1 << 2)
+#define RT5677_DMIC_2R_LH_SFT 2
+#define RT5677_DMIC_2R_LH_FALLING (0x0 << 2)
+#define RT5677_DMIC_2R_LH_RISING (0x1 << 2)
+#define RT5677_DMIC_1L_LH_MASK (0x1 << 1)
+#define RT5677_DMIC_1L_LH_SFT 1
+#define RT5677_DMIC_1L_LH_FALLING (0x0 << 1)
+#define RT5677_DMIC_1L_LH_RISING (0x1 << 1)
+#define RT5677_DMIC_1R_LH_MASK (0x1 << 0)
+#define RT5677_DMIC_1R_LH_SFT 0
+#define RT5677_DMIC_1R_LH_FALLING (0x0 << 0)
+#define RT5677_DMIC_1R_LH_RISING (0x1 << 0)
+
+/* Power Management for Digital 1 (0x61) */
+#define RT5677_PWR_I2S1 (0x1 << 15)
+#define RT5677_PWR_I2S1_BIT 15
+#define RT5677_PWR_I2S2 (0x1 << 14)
+#define RT5677_PWR_I2S2_BIT 14
+#define RT5677_PWR_I2S3 (0x1 << 13)
+#define RT5677_PWR_I2S3_BIT 13
+#define RT5677_PWR_DAC1 (0x1 << 12)
+#define RT5677_PWR_DAC1_BIT 12
+#define RT5677_PWR_DAC2 (0x1 << 11)
+#define RT5677_PWR_DAC2_BIT 11
+#define RT5677_PWR_I2S4 (0x1 << 10)
+#define RT5677_PWR_I2S4_BIT 10
+#define RT5677_PWR_SLB (0x1 << 9)
+#define RT5677_PWR_SLB_BIT 9
+#define RT5677_PWR_DAC3 (0x1 << 7)
+#define RT5677_PWR_DAC3_BIT 7
+#define RT5677_PWR_ADCFED2 (0x1 << 4)
+#define RT5677_PWR_ADCFED2_BIT 4
+#define RT5677_PWR_ADCFED1 (0x1 << 3)
+#define RT5677_PWR_ADCFED1_BIT 3
+#define RT5677_PWR_ADC_L (0x1 << 2)
+#define RT5677_PWR_ADC_L_BIT 2
+#define RT5677_PWR_ADC_R (0x1 << 1)
+#define RT5677_PWR_ADC_R_BIT 1
+#define RT5677_PWR_I2C_MASTER (0x1 << 0)
+#define RT5677_PWR_I2C_MASTER_BIT 0
+
+/* Power Management for Digital 2 (0x62) */
+#define RT5677_PWR_ADC_S1F (0x1 << 15)
+#define RT5677_PWR_ADC_S1F_BIT 15
+#define RT5677_PWR_ADC_MF_L (0x1 << 14)
+#define RT5677_PWR_ADC_MF_L_BIT 14
+#define RT5677_PWR_ADC_MF_R (0x1 << 13)
+#define RT5677_PWR_ADC_MF_R_BIT 13
+#define RT5677_PWR_DAC_S1F (0x1 << 12)
+#define RT5677_PWR_DAC_S1F_BIT 12
+#define RT5677_PWR_DAC_M2F_L (0x1 << 11)
+#define RT5677_PWR_DAC_M2F_L_BIT 11
+#define RT5677_PWR_DAC_M2F_R (0x1 << 10)
+#define RT5677_PWR_DAC_M2F_R_BIT 10
+#define RT5677_PWR_DAC_M3F_L (0x1 << 9)
+#define RT5677_PWR_DAC_M3F_L_BIT 9
+#define RT5677_PWR_DAC_M3F_R (0x1 << 8)
+#define RT5677_PWR_DAC_M3F_R_BIT 8
+#define RT5677_PWR_DAC_M4F_L (0x1 << 7)
+#define RT5677_PWR_DAC_M4F_L_BIT 7
+#define RT5677_PWR_DAC_M4F_R (0x1 << 6)
+#define RT5677_PWR_DAC_M4F_R_BIT 6
+#define RT5677_PWR_ADC_S2F (0x1 << 5)
+#define RT5677_PWR_ADC_S2F_BIT 5
+#define RT5677_PWR_ADC_S3F (0x1 << 4)
+#define RT5677_PWR_ADC_S3F_BIT 4
+#define RT5677_PWR_ADC_S4F (0x1 << 3)
+#define RT5677_PWR_ADC_S4F_BIT 3
+#define RT5677_PWR_PDM1 (0x1 << 2)
+#define RT5677_PWR_PDM1_BIT 2
+#define RT5677_PWR_PDM2 (0x1 << 1)
+#define RT5677_PWR_PDM2_BIT 1
+
+/* Power Management for Analog 1 (0x63) */
+#define RT5677_PWR_VREF1 (0x1 << 15)
+#define RT5677_PWR_VREF1_BIT 15
+#define RT5677_PWR_FV1 (0x1 << 14)
+#define RT5677_PWR_FV1_BIT 14
+#define RT5677_PWR_MB (0x1 << 13)
+#define RT5677_PWR_MB_BIT 13
+#define RT5677_PWR_LO1 (0x1 << 12)
+#define RT5677_PWR_LO1_BIT 12
+#define RT5677_PWR_BG (0x1 << 11)
+#define RT5677_PWR_BG_BIT 11
+#define RT5677_PWR_LO2 (0x1 << 10)
+#define RT5677_PWR_LO2_BIT 10
+#define RT5677_PWR_LO3 (0x1 << 9)
+#define RT5677_PWR_LO3_BIT 9
+#define RT5677_PWR_VREF2 (0x1 << 8)
+#define RT5677_PWR_VREF2_BIT 8
+#define RT5677_PWR_FV2 (0x1 << 7)
+#define RT5677_PWR_FV2_BIT 7
+#define RT5677_LDO2_SEL_MASK (0x7 << 4)
+#define RT5677_LDO2_SEL_SFT 4
+#define RT5677_LDO1_SEL_MASK (0x7 << 0)
+#define RT5677_LDO1_SEL_SFT 0
+
+/* Power Management for Analog 2 (0x64) */
+#define RT5677_PWR_BST1 (0x1 << 15)
+#define RT5677_PWR_BST1_BIT 15
+#define RT5677_PWR_BST2 (0x1 << 14)
+#define RT5677_PWR_BST2_BIT 14
+#define RT5677_PWR_CLK_MB1 (0x1 << 13)
+#define RT5677_PWR_CLK_MB1_BIT 13
+#define RT5677_PWR_SLIM (0x1 << 12)
+#define RT5677_PWR_SLIM_BIT 12
+#define RT5677_PWR_MB1 (0x1 << 11)
+#define RT5677_PWR_MB1_BIT 11
+#define RT5677_PWR_PP_MB1 (0x1 << 10)
+#define RT5677_PWR_PP_MB1_BIT 10
+#define RT5677_PWR_PLL1 (0x1 << 9)
+#define RT5677_PWR_PLL1_BIT 9
+#define RT5677_PWR_PLL2 (0x1 << 8)
+#define RT5677_PWR_PLL2_BIT 8
+#define RT5677_PWR_CORE (0x1 << 7)
+#define RT5677_PWR_CORE_BIT 7
+#define RT5677_PWR_CLK_MB (0x1 << 6)
+#define RT5677_PWR_CLK_MB_BIT 6
+#define RT5677_PWR_BST1_P (0x1 << 5)
+#define RT5677_PWR_BST1_P_BIT 5
+#define RT5677_PWR_BST2_P (0x1 << 4)
+#define RT5677_PWR_BST2_P_BIT 4
+#define RT5677_PWR_IPTV (0x1 << 3)
+#define RT5677_PWR_IPTV_BIT 3
+#define RT5677_PWR_25M_CLK (0x1 << 1)
+#define RT5677_PWR_25M_CLK_BIT 1
+#define RT5677_PWR_LDO1 (0x1 << 0)
+#define RT5677_PWR_LDO1_BIT 0
+
+/* Power Management for DSP (0x65) */
+#define RT5677_PWR_SR7 (0x1 << 10)
+#define RT5677_PWR_SR7_BIT 10
+#define RT5677_PWR_SR6 (0x1 << 9)
+#define RT5677_PWR_SR6_BIT 9
+#define RT5677_PWR_SR5 (0x1 << 8)
+#define RT5677_PWR_SR5_BIT 8
+#define RT5677_PWR_SR4 (0x1 << 7)
+#define RT5677_PWR_SR4_BIT 7
+#define RT5677_PWR_SR3 (0x1 << 6)
+#define RT5677_PWR_SR3_BIT 6
+#define RT5677_PWR_SR2 (0x1 << 5)
+#define RT5677_PWR_SR2_BIT 5
+#define RT5677_PWR_SR1 (0x1 << 4)
+#define RT5677_PWR_SR1_BIT 4
+#define RT5677_PWR_SR0 (0x1 << 3)
+#define RT5677_PWR_SR0_BIT 3
+#define RT5677_PWR_MLT (0x1 << 2)
+#define RT5677_PWR_MLT_BIT 2
+#define RT5677_PWR_DSP (0x1 << 1)
+#define RT5677_PWR_DSP_BIT 1
+#define RT5677_PWR_DSP_CPU (0x1 << 0)
+#define RT5677_PWR_DSP_CPU_BIT 0
+
+/* Power Status for DSP (0x66) */
+#define RT5677_PWR_SR7_RDY (0x1 << 9)
+#define RT5677_PWR_SR7_RDY_BIT 9
+#define RT5677_PWR_SR6_RDY (0x1 << 8)
+#define RT5677_PWR_SR6_RDY_BIT 8
+#define RT5677_PWR_SR5_RDY (0x1 << 7)
+#define RT5677_PWR_SR5_RDY_BIT 7
+#define RT5677_PWR_SR4_RDY (0x1 << 6)
+#define RT5677_PWR_SR4_RDY_BIT 6
+#define RT5677_PWR_SR3_RDY (0x1 << 5)
+#define RT5677_PWR_SR3_RDY_BIT 5
+#define RT5677_PWR_SR2_RDY (0x1 << 4)
+#define RT5677_PWR_SR2_RDY_BIT 4
+#define RT5677_PWR_SR1_RDY (0x1 << 3)
+#define RT5677_PWR_SR1_RDY_BIT 3
+#define RT5677_PWR_SR0_RDY (0x1 << 2)
+#define RT5677_PWR_SR0_RDY_BIT 2
+#define RT5677_PWR_MLT_RDY (0x1 << 1)
+#define RT5677_PWR_MLT_RDY_BIT 1
+#define RT5677_PWR_DSP_RDY (0x1 << 0)
+#define RT5677_PWR_DSP_RDY_BIT 0
+
+/* Power Management for DSP (0x67) */
+#define RT5677_PWR_SLIM_ISO (0x1 << 11)
+#define RT5677_PWR_SLIM_ISO_BIT 11
+#define RT5677_PWR_CORE_ISO (0x1 << 10)
+#define RT5677_PWR_CORE_ISO_BIT 10
+#define RT5677_PWR_DSP_ISO (0x1 << 9)
+#define RT5677_PWR_DSP_ISO_BIT 9
+#define RT5677_PWR_SR7_ISO (0x1 << 8)
+#define RT5677_PWR_SR7_ISO_BIT 8
+#define RT5677_PWR_SR6_ISO (0x1 << 7)
+#define RT5677_PWR_SR6_ISO_BIT 7
+#define RT5677_PWR_SR5_ISO (0x1 << 6)
+#define RT5677_PWR_SR5_ISO_BIT 6
+#define RT5677_PWR_SR4_ISO (0x1 << 5)
+#define RT5677_PWR_SR4_ISO_BIT 5
+#define RT5677_PWR_SR3_ISO (0x1 << 4)
+#define RT5677_PWR_SR3_ISO_BIT 4
+#define RT5677_PWR_SR2_ISO (0x1 << 3)
+#define RT5677_PWR_SR2_ISO_BIT 3
+#define RT5677_PWR_SR1_ISO (0x1 << 2)
+#define RT5677_PWR_SR1_ISO_BIT 2
+#define RT5677_PWR_SR0_ISO (0x1 << 1)
+#define RT5677_PWR_SR0_ISO_BIT 1
+#define RT5677_PWR_MLT_ISO (0x1 << 0)
+#define RT5677_PWR_MLT_ISO_BIT 0
+
+/* I2S1/2/3/4 Audio Serial Data Port Control (0x6f 0x70 0x71 0x72) */
+#define RT5677_I2S_MS_MASK (0x1 << 15)
+#define RT5677_I2S_MS_SFT 15
+#define RT5677_I2S_MS_M (0x0 << 15)
+#define RT5677_I2S_MS_S (0x1 << 15)
+#define RT5677_I2S_O_CP_MASK (0x3 << 10)
+#define RT5677_I2S_O_CP_SFT 10
+#define RT5677_I2S_O_CP_OFF (0x0 << 10)
+#define RT5677_I2S_O_CP_U_LAW (0x1 << 10)
+#define RT5677_I2S_O_CP_A_LAW (0x2 << 10)
+#define RT5677_I2S_I_CP_MASK (0x3 << 8)
+#define RT5677_I2S_I_CP_SFT 8
+#define RT5677_I2S_I_CP_OFF (0x0 << 8)
+#define RT5677_I2S_I_CP_U_LAW (0x1 << 8)
+#define RT5677_I2S_I_CP_A_LAW (0x2 << 8)
+#define RT5677_I2S_BP_MASK (0x1 << 7)
+#define RT5677_I2S_BP_SFT 7
+#define RT5677_I2S_BP_NOR (0x0 << 7)
+#define RT5677_I2S_BP_INV (0x1 << 7)
+#define RT5677_I2S_DL_MASK (0x3 << 2)
+#define RT5677_I2S_DL_SFT 2
+#define RT5677_I2S_DL_16 (0x0 << 2)
+#define RT5677_I2S_DL_20 (0x1 << 2)
+#define RT5677_I2S_DL_24 (0x2 << 2)
+#define RT5677_I2S_DL_8 (0x3 << 2)
+#define RT5677_I2S_DF_MASK (0x3 << 0)
+#define RT5677_I2S_DF_SFT 0
+#define RT5677_I2S_DF_I2S (0x0 << 0)
+#define RT5677_I2S_DF_LEFT (0x1 << 0)
+#define RT5677_I2S_DF_PCM_A (0x2 << 0)
+#define RT5677_I2S_DF_PCM_B (0x3 << 0)
+
+/* Clock Tree Control 1 (0x73) */
+#define RT5677_I2S_PD1_MASK (0x7 << 12)
+#define RT5677_I2S_PD1_SFT 12
+#define RT5677_I2S_PD1_1 (0x0 << 12)
+#define RT5677_I2S_PD1_2 (0x1 << 12)
+#define RT5677_I2S_PD1_3 (0x2 << 12)
+#define RT5677_I2S_PD1_4 (0x3 << 12)
+#define RT5677_I2S_PD1_6 (0x4 << 12)
+#define RT5677_I2S_PD1_8 (0x5 << 12)
+#define RT5677_I2S_PD1_12 (0x6 << 12)
+#define RT5677_I2S_PD1_16 (0x7 << 12)
+#define RT5677_I2S_BCLK_MS2_MASK (0x1 << 11)
+#define RT5677_I2S_BCLK_MS2_SFT 11
+#define RT5677_I2S_BCLK_MS2_32 (0x0 << 11)
+#define RT5677_I2S_BCLK_MS2_64 (0x1 << 11)
+#define RT5677_I2S_PD2_MASK (0x7 << 8)
+#define RT5677_I2S_PD2_SFT 8
+#define RT5677_I2S_PD2_1 (0x0 << 8)
+#define RT5677_I2S_PD2_2 (0x1 << 8)
+#define RT5677_I2S_PD2_3 (0x2 << 8)
+#define RT5677_I2S_PD2_4 (0x3 << 8)
+#define RT5677_I2S_PD2_6 (0x4 << 8)
+#define RT5677_I2S_PD2_8 (0x5 << 8)
+#define RT5677_I2S_PD2_12 (0x6 << 8)
+#define RT5677_I2S_PD2_16 (0x7 << 8)
+#define RT5677_I2S_BCLK_MS3_MASK (0x1 << 7)
+#define RT5677_I2S_BCLK_MS3_SFT 7
+#define RT5677_I2S_BCLK_MS3_32 (0x0 << 7)
+#define RT5677_I2S_BCLK_MS3_64 (0x1 << 7)
+#define RT5677_I2S_PD3_MASK (0x7 << 4)
+#define RT5677_I2S_PD3_SFT 4
+#define RT5677_I2S_PD3_1 (0x0 << 4)
+#define RT5677_I2S_PD3_2 (0x1 << 4)
+#define RT5677_I2S_PD3_3 (0x2 << 4)
+#define RT5677_I2S_PD3_4 (0x3 << 4)
+#define RT5677_I2S_PD3_6 (0x4 << 4)
+#define RT5677_I2S_PD3_8 (0x5 << 4)
+#define RT5677_I2S_PD3_12 (0x6 << 4)
+#define RT5677_I2S_PD3_16 (0x7 << 4)
+#define RT5677_I2S_BCLK_MS4_MASK (0x1 << 3)
+#define RT5677_I2S_BCLK_MS4_SFT 3
+#define RT5677_I2S_BCLK_MS4_32 (0x0 << 3)
+#define RT5677_I2S_BCLK_MS4_64 (0x1 << 3)
+#define RT5677_I2S_PD4_MASK (0x7 << 0)
+#define RT5677_I2S_PD4_SFT 0
+#define RT5677_I2S_PD4_1 (0x0 << 0)
+#define RT5677_I2S_PD4_2 (0x1 << 0)
+#define RT5677_I2S_PD4_3 (0x2 << 0)
+#define RT5677_I2S_PD4_4 (0x3 << 0)
+#define RT5677_I2S_PD4_6 (0x4 << 0)
+#define RT5677_I2S_PD4_8 (0x5 << 0)
+#define RT5677_I2S_PD4_12 (0x6 << 0)
+#define RT5677_I2S_PD4_16 (0x7 << 0)
+
+/* Clock Tree Control 2 (0x74) */
+#define RT5677_I2S_PD5_MASK (0x7 << 12)
+#define RT5677_I2S_PD5_SFT 12
+#define RT5677_I2S_PD5_1 (0x0 << 12)
+#define RT5677_I2S_PD5_2 (0x1 << 12)
+#define RT5677_I2S_PD5_3 (0x2 << 12)
+#define RT5677_I2S_PD5_4 (0x3 << 12)
+#define RT5677_I2S_PD5_6 (0x4 << 12)
+#define RT5677_I2S_PD5_8 (0x5 << 12)
+#define RT5677_I2S_PD5_12 (0x6 << 12)
+#define RT5677_I2S_PD5_16 (0x7 << 12)
+#define RT5677_I2S_PD6_MASK (0x7 << 8)
+#define RT5677_I2S_PD6_SFT 8
+#define RT5677_I2S_PD6_1 (0x0 << 8)
+#define RT5677_I2S_PD6_2 (0x1 << 8)
+#define RT5677_I2S_PD6_3 (0x2 << 8)
+#define RT5677_I2S_PD6_4 (0x3 << 8)
+#define RT5677_I2S_PD6_6 (0x4 << 8)
+#define RT5677_I2S_PD6_8 (0x5 << 8)
+#define RT5677_I2S_PD6_12 (0x6 << 8)
+#define RT5677_I2S_PD6_16 (0x7 << 8)
+#define RT5677_I2S_PD7_MASK (0x7 << 4)
+#define RT5677_I2S_PD7_SFT 4
+#define RT5677_I2S_PD7_1 (0x0 << 4)
+#define RT5677_I2S_PD7_2 (0x1 << 4)
+#define RT5677_I2S_PD7_3 (0x2 << 4)
+#define RT5677_I2S_PD7_4 (0x3 << 4)
+#define RT5677_I2S_PD7_6 (0x4 << 4)
+#define RT5677_I2S_PD7_8 (0x5 << 4)
+#define RT5677_I2S_PD7_12 (0x6 << 4)
+#define RT5677_I2S_PD7_16 (0x7 << 4)
+#define RT5677_I2S_PD8_MASK (0x7 << 0)
+#define RT5677_I2S_PD8_SFT 0
+#define RT5677_I2S_PD8_1 (0x0 << 0)
+#define RT5677_I2S_PD8_2 (0x1 << 0)
+#define RT5677_I2S_PD8_3 (0x2 << 0)
+#define RT5677_I2S_PD8_4 (0x3 << 0)
+#define RT5677_I2S_PD8_6 (0x4 << 0)
+#define RT5677_I2S_PD8_8 (0x5 << 0)
+#define RT5677_I2S_PD8_12 (0x6 << 0)
+#define RT5677_I2S_PD8_16 (0x7 << 0)
+
+/* Clock Tree Control 3 (0x75) */
+#define RT5677_DSP_ASRC_O_MASK (0x3 << 6)
+#define RT5677_DSP_ASRC_O_SFT 6
+#define RT5677_DSP_ASRC_O_1_0 (0x0 << 6)
+#define RT5677_DSP_ASRC_O_1_5 (0x1 << 6)
+#define RT5677_DSP_ASRC_O_2_0 (0x2 << 6)
+#define RT5677_DSP_ASRC_O_3_0 (0x3 << 6)
+#define RT5677_DSP_ASRC_I_MASK (0x3 << 4)
+#define RT5677_DSP_ASRC_I_SFT 4
+#define RT5677_DSP_ASRC_I_1_0 (0x0 << 4)
+#define RT5677_DSP_ASRC_I_1_5 (0x1 << 4)
+#define RT5677_DSP_ASRC_I_2_0 (0x2 << 4)
+#define RT5677_DSP_ASRC_I_3_0 (0x3 << 4)
+#define RT5677_DSP_BUS_PD_MASK (0x7 << 0)
+#define RT5677_DSP_BUS_PD_SFT 0
+#define RT5677_DSP_BUS_PD_1 (0x0 << 0)
+#define RT5677_DSP_BUS_PD_2 (0x1 << 0)
+#define RT5677_DSP_BUS_PD_3 (0x2 << 0)
+#define RT5677_DSP_BUS_PD_4 (0x3 << 0)
+#define RT5677_DSP_BUS_PD_6 (0x4 << 0)
+#define RT5677_DSP_BUS_PD_8 (0x5 << 0)
+#define RT5677_DSP_BUS_PD_12 (0x6 << 0)
+#define RT5677_DSP_BUS_PD_16 (0x7 << 0)
+
+#define RT5677_PLL_INP_MAX 40000000
+#define RT5677_PLL_INP_MIN 2048000
+/* PLL M/N/K Code Control 1 (0x7a 0x7c) */
+#define RT5677_PLL_N_MAX 0x1ff
+#define RT5677_PLL_N_MASK (RT5677_PLL_N_MAX << 7)
+#define RT5677_PLL_N_SFT 7
+#define RT5677_PLL_K_BP (0x1 << 5)
+#define RT5677_PLL_K_BP_SFT 5
+#define RT5677_PLL_K_MAX 0x1f
+#define RT5677_PLL_K_MASK (RT5677_PLL_K_MAX)
+#define RT5677_PLL_K_SFT 0
+
+/* PLL M/N/K Code Control 2 (0x7b 0x7d) */
+#define RT5677_PLL_M_MAX 0xf
+#define RT5677_PLL_M_MASK (RT5677_PLL_M_MAX << 12)
+#define RT5677_PLL_M_SFT 12
+#define RT5677_PLL_M_BP (0x1 << 11)
+#define RT5677_PLL_M_BP_SFT 11
+
+/* Global Clock Control 1 (0x80) */
+#define RT5677_SCLK_SRC_MASK (0x3 << 14)
+#define RT5677_SCLK_SRC_SFT 14
+#define RT5677_SCLK_SRC_MCLK (0x0 << 14)
+#define RT5677_SCLK_SRC_PLL1 (0x1 << 14)
+#define RT5677_SCLK_SRC_RCCLK (0x2 << 14) /* 25MHz */
+#define RT5677_SCLK_SRC_SLIM (0x3 << 14)
+#define RT5677_PLL1_SRC_MASK (0x7 << 11)
+#define RT5677_PLL1_SRC_SFT 11
+#define RT5677_PLL1_SRC_MCLK (0x0 << 11)
+#define RT5677_PLL1_SRC_BCLK1 (0x1 << 11)
+#define RT5677_PLL1_SRC_BCLK2 (0x2 << 11)
+#define RT5677_PLL1_SRC_BCLK3 (0x3 << 11)
+#define RT5677_PLL1_SRC_BCLK4 (0x4 << 11)
+#define RT5677_PLL1_SRC_RCCLK (0x5 << 11)
+#define RT5677_PLL1_SRC_SLIM (0x6 << 11)
+#define RT5677_MCLK_SRC_MASK (0x1 << 10)
+#define RT5677_MCLK_SRC_SFT 10
+#define RT5677_MCLK1_SRC (0x0 << 10)
+#define RT5677_MCLK2_SRC (0x1 << 10)
+#define RT5677_PLL1_PD_MASK (0x1 << 8)
+#define RT5677_PLL1_PD_SFT 8
+#define RT5677_PLL1_PD_1 (0x0 << 8)
+#define RT5677_PLL1_PD_2 (0x1 << 8)
+#define RT5671_DAC_OSR_MASK (0x3 << 6)
+#define RT5671_DAC_OSR_SFT 6
+#define RT5671_DAC_OSR_128 (0x0 << 6)
+#define RT5671_DAC_OSR_64 (0x1 << 6)
+#define RT5671_DAC_OSR_32 (0x2 << 6)
+#define RT5671_ADC_OSR_MASK (0x3 << 4)
+#define RT5671_ADC_OSR_SFT 4
+#define RT5671_ADC_OSR_128 (0x0 << 4)
+#define RT5671_ADC_OSR_64 (0x1 << 4)
+#define RT5671_ADC_OSR_32 (0x2 << 4)
+
+/* Global Clock Control 2 (0x81) */
+#define RT5677_PLL2_PR_SRC_MASK (0x1 << 15)
+#define RT5677_PLL2_PR_SRC_SFT 15
+#define RT5677_PLL2_PR_SRC_MCLK1 (0x0 << 15)
+#define RT5677_PLL2_PR_SRC_MCLK2 (0x1 << 15)
+#define RT5677_PLL2_SRC_MASK (0x7 << 12)
+#define RT5677_PLL2_SRC_SFT 12
+#define RT5677_PLL2_SRC_MCLK (0x0 << 12)
+#define RT5677_PLL2_SRC_BCLK1 (0x1 << 12)
+#define RT5677_PLL2_SRC_BCLK2 (0x2 << 12)
+#define RT5677_PLL2_SRC_BCLK3 (0x3 << 12)
+#define RT5677_PLL2_SRC_BCLK4 (0x4 << 12)
+#define RT5677_PLL2_SRC_RCCLK (0x5 << 12)
+#define RT5677_PLL2_SRC_SLIM (0x6 << 12)
+#define RT5671_DSP_ASRC_O_SRC (0x3 << 10)
+#define RT5671_DSP_ASRC_O_SRC_SFT 10
+#define RT5671_DSP_ASRC_O_MCLK (0x0 << 10)
+#define RT5671_DSP_ASRC_O_PLL1 (0x1 << 10)
+#define RT5671_DSP_ASRC_O_SLIM (0x2 << 10)
+#define RT5671_DSP_ASRC_O_RCCLK (0x3 << 10)
+#define RT5671_DSP_ASRC_I_SRC (0x3 << 8)
+#define RT5671_DSP_ASRC_I_SRC_SFT 8
+#define RT5671_DSP_ASRC_I_MCLK (0x0 << 8)
+#define RT5671_DSP_ASRC_I_PLL1 (0x1 << 8)
+#define RT5671_DSP_ASRC_I_SLIM (0x2 << 8)
+#define RT5671_DSP_ASRC_I_RCCLK (0x3 << 8)
+#define RT5677_DSP_CLK_SRC_MASK (0x1 << 7)
+#define RT5677_DSP_CLK_SRC_SFT 7
+#define RT5677_DSP_CLK_SRC_PLL2 (0x0 << 7)
+#define RT5677_DSP_CLK_SRC_BYPASS (0x1 << 7)
+
+/* VAD Function Control 4 (0x9f) */
+#define RT5677_VAD_SRC_MASK (0x7 << 8)
+#define RT5677_VAD_SRC_SFT 8
+
+/* DSP InBound Control (0xa3) */
+#define RT5677_IB01_SRC_MASK (0x7 << 12)
+#define RT5677_IB01_SRC_SFT 12
+#define RT5677_IB23_SRC_MASK (0x7 << 8)
+#define RT5677_IB23_SRC_SFT 8
+#define RT5677_IB45_SRC_MASK (0x7 << 4)
+#define RT5677_IB45_SRC_SFT 4
+#define RT5677_IB6_SRC_MASK (0x7 << 0)
+#define RT5677_IB6_SRC_SFT 0
+
+/* DSP InBound Control (0xa4) */
+#define RT5677_IB7_SRC_MASK (0x7 << 12)
+#define RT5677_IB7_SRC_SFT 12
+#define RT5677_IB8_SRC_MASK (0x7 << 8)
+#define RT5677_IB8_SRC_SFT 8
+#define RT5677_IB9_SRC_MASK (0x7 << 4)
+#define RT5677_IB9_SRC_SFT 4
+
+/* DSP In/OutBound Control (0xa5) */
+#define RT5677_SEL_SRC_OB23 (0x1 << 4)
+#define RT5677_SEL_SRC_OB23_SFT 4
+#define RT5677_SEL_SRC_OB01 (0x1 << 3)
+#define RT5677_SEL_SRC_OB01_SFT 3
+#define RT5677_SEL_SRC_IB45 (0x1 << 2)
+#define RT5677_SEL_SRC_IB45_SFT 2
+#define RT5677_SEL_SRC_IB23 (0x1 << 1)
+#define RT5677_SEL_SRC_IB23_SFT 1
+#define RT5677_SEL_SRC_IB01 (0x1 << 0)
+#define RT5677_SEL_SRC_IB01_SFT 0
+
+/* Virtual DSP Mixer Control (0xf7 0xf8 0xf9) */
+#define RT5677_DSP_IB_01_H (0x1 << 15)
+#define RT5677_DSP_IB_01_H_SFT 15
+#define RT5677_DSP_IB_23_H (0x1 << 14)
+#define RT5677_DSP_IB_23_H_SFT 14
+#define RT5677_DSP_IB_45_H (0x1 << 13)
+#define RT5677_DSP_IB_45_H_SFT 13
+#define RT5677_DSP_IB_6_H (0x1 << 12)
+#define RT5677_DSP_IB_6_H_SFT 12
+#define RT5677_DSP_IB_7_H (0x1 << 11)
+#define RT5677_DSP_IB_7_H_SFT 11
+#define RT5677_DSP_IB_8_H (0x1 << 10)
+#define RT5677_DSP_IB_8_H_SFT 10
+#define RT5677_DSP_IB_9_H (0x1 << 9)
+#define RT5677_DSP_IB_9_H_SFT 9
+#define RT5677_DSP_IB_01_L (0x1 << 7)
+#define RT5677_DSP_IB_01_L_SFT 7
+#define RT5677_DSP_IB_23_L (0x1 << 6)
+#define RT5677_DSP_IB_23_L_SFT 6
+#define RT5677_DSP_IB_45_L (0x1 << 5)
+#define RT5677_DSP_IB_45_L_SFT 5
+#define RT5677_DSP_IB_6_L (0x1 << 4)
+#define RT5677_DSP_IB_6_L_SFT 4
+#define RT5677_DSP_IB_7_L (0x1 << 3)
+#define RT5677_DSP_IB_7_L_SFT 3
+#define RT5677_DSP_IB_8_L (0x1 << 2)
+#define RT5677_DSP_IB_8_L_SFT 2
+#define RT5677_DSP_IB_9_L (0x1 << 1)
+#define RT5677_DSP_IB_9_L_SFT 1
+
+/* Register controlling boot vector */
+#define RT5677_DSP_BOOT_VECTOR 0x1801f090
+
+/* Debug String Length */
+#define RT5677_REG_DISP_LEN 12
+
+#define RT5677_NO_JACK BIT(0)
+#define RT5677_HEADSET_DET BIT(1)
+#define RT5677_HEADPHO_DET BIT(2)
+
+/* Flags used for the VAD firmware */
+#define RT5677_MBIST_TEST_PASSED 0
+#define RT5677_MBIST_TEST_FAILED BIT(0)
+#define RT5677_VAD_SLEEP 0
+#define RT5677_VAD_NO_SLEEP BIT(1)
+
+/* Flag used for VAD AMIC/DMIC switch */
+#define RT5677_VAD_ENABLE_AMIC 1
+
+/* System Clock Source */
+enum {
+ RT5677_SCLK_S_MCLK,
+ RT5677_SCLK_S_PLL1,
+ RT5677_SCLK_S_RCCLK,
+};
+
+/* PLL1 Source */
+enum {
+ RT5677_PLL1_S_MCLK,
+ RT5677_PLL1_S_BCLK1,
+ RT5677_PLL1_S_BCLK2,
+ RT5677_PLL1_S_BCLK3,
+ RT5677_PLL1_S_BCLK4,
+};
+
+enum {
+ RT5677_AIF1,
+ RT5677_AIF2,
+ RT5677_AIF3,
+ RT5677_AIF4,
+ RT5677_AIF5,
+ RT5677_AIFS,
+};
+
+enum {
+ RT5677_VAD_OFF,
+ RT5677_VAD_IDLE,
+ RT5677_VAD_SUSPEND,
+};
+
+enum {
+ RT5677_VAD_DISABLE,
+ RT5677_VAD_IDLE_DMIC1,
+ RT5677_VAD_IDLE_DMIC2,
+ RT5677_VAD_IDLE_AMIC,
+ RT5677_VAD_SUSPEND_DMIC1,
+ RT5677_VAD_SUSPEND_DMIC2,
+ RT5677_VAD_SUSPEND_AMIC,
+};
+
+struct rt5677_pll_code {
+ bool m_bp; /* Indicates bypass m code or not. */
+ bool k_bp; /* Indicates bypass k code or not. */
+ int m_code;
+ int n_code;
+ int k_code;
+};
+
+struct rt5677_priv {
+ struct snd_soc_codec *codec;
+ /*
+ struct rt5677_platform_data pdata;
+ */
+ struct regmap *regmap;
+ struct mutex index_lock;
+ struct mutex vad_lock;
+ struct workqueue_struct *check_mic_wq;
+ struct delayed_work check_hp_mic_work;
+
+ int aif_pu;
+ int sysclk;
+ int sysclk_src;
+ int lrck[RT5677_AIFS];
+ int bclk[RT5677_AIFS];
+ int master[RT5677_AIFS];
+ int pll_src;
+ int pll_in;
+ int pll_out;
+ int vad_mode;
+ int vad_source;
+ int vad_sleep;
+ unsigned int vad_clock_en;
+ int stream;
+ bool mbist_test;
+ bool mbist_test_passed;
+
+ u8 *model_buf;
+ u32 model_len;
+ u32 mic_read_offset;
+ u8 *mic_buf;
+ u32 mic_buf_len;
+
+ int mic_state;
+};
+
+#endif /* __RT5677_H__ */
diff --git a/sound/soc/codecs/rt5677_ioctl.c b/sound/soc/codecs/rt5677_ioctl.c
new file mode 100644
index 0000000..1c156c2
--- /dev/null
+++ b/sound/soc/codecs/rt5677_ioctl.c
@@ -0,0 +1,169 @@
+/*
+ * rt5677_ioctl.c -- RT5677 ALSA SoC audio driver IO control
+ *
+ * Copyright 2014 Google, Inc
+ * Author: Dmitry Shmidt <dimitrysh@google.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/spi/spi.h>
+#include <sound/soc.h>
+#include "rt_codec_ioctl.h"
+#include "rt5677_ioctl.h"
+#include "rt5677-spi.h"
+#include "rt5677.h"
+
+#define RT5677_MIC_BUF_SIZE 0x20000
+#define RT5677_MIC_BUF_FIRST_READ_SIZE 0x10000
+
+int rt5677_ioctl_common(struct snd_hwdep *hw, struct file *file,
+ unsigned int cmd, unsigned long arg)
+{
+ struct snd_soc_codec *codec = hw->private_data;
+ struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+ struct rt_codec_cmd __user *_rt_codec = (struct rt_codec_cmd __user *)arg;
+ struct rt_codec_cmd rt_codec;
+ int ret = -EFAULT;
+ u32 addr = RT5677_MIC_BUF_ADDR;
+ size_t size;
+ u32 mic_write_offset;
+ size_t bytes_to_user = 0;
+ size_t first_chunk_start, first_chunk_len;
+ size_t second_chunk_start, second_chunk_len;
+
+#ifdef CONFIG_COMPAT
+ if (is_compat_task()) {
+ struct compat_rt_codec_cmd compat_rt_codec;
+
+ if (copy_from_user(&compat_rt_codec, _rt_codec,
+ sizeof(compat_rt_codec)))
+ return -EFAULT;
+ rt_codec.number = compat_rt_codec.number;
+ rt_codec.buf = compat_ptr(compat_rt_codec.buf);
+ } else
+#endif
+ {
+ if (copy_from_user(&rt_codec, _rt_codec, sizeof(rt_codec)))
+ return -EFAULT;
+ }
+
+ dev_dbg(codec->dev, "%s: rt_codec.number=%zu, cmd=%u\n",
+ __func__, rt_codec.number, cmd);
+
+ size = sizeof(int) * rt_codec.number;
+ switch (cmd) {
+ case RT_READ_CODEC_DSP_IOCTL:
+ case RT_READ_CODEC_DSP_IOCTL_COMPAT:
+ /* Grab the first 4 bytes that holds the write pointer on the
+ dsp, and check to make sure that it points somewhere inside
+ the buffer. */
+ ret = rt5677_spi_read(addr, rt5677->mic_buf, 4);
+ if (ret)
+ return ret;
+ mic_write_offset = le32_to_cpup((u32 *)rt5677->mic_buf);
+
+ if (mic_write_offset < sizeof(u32) ||
+ mic_write_offset >= RT5677_MIC_BUF_SIZE) {
+ dev_err(codec->dev,
+ "Invalid offset in the mic buffer %d\n",
+ mic_write_offset);
+ return -EFAULT;
+ }
+
+ /* If the mic_read_offset is zero, this means it's the first
+ time that we've asked for streaming data. We should start
+ reading from the previous 2 seconds of audio from wherever
+ the mic_write_offset is currently (note that this needs to
+ wraparound the buffer). */
+ if (rt5677->mic_read_offset == 0) {
+ if (mic_write_offset <
+ RT5677_MIC_BUF_FIRST_READ_SIZE + sizeof(u32)) {
+ rt5677->mic_read_offset = (RT5677_MIC_BUF_SIZE -
+ (RT5677_MIC_BUF_FIRST_READ_SIZE -
+ (mic_write_offset - sizeof(u32))));
+ } else {
+ rt5677->mic_read_offset = (mic_write_offset -
+ RT5677_MIC_BUF_FIRST_READ_SIZE);
+ }
+ }
+
+ /* If the audio wrapped around, then we need to do the copy in
+ two passes, otherwise, we can do it on one. We should also
+ make sure that we don't read more bytes than we have in the
+ user buffer, or we'll just waste time. */
+ if (mic_write_offset < rt5677->mic_read_offset) {
+ /* Copy the audio from the last read offset until the
+ end of the buffer, then do the second chunk that
+ starts after the u32. */
+ first_chunk_start = rt5677->mic_read_offset;
+ first_chunk_len =
+ RT5677_MIC_BUF_SIZE - rt5677->mic_read_offset;
+ if (first_chunk_len > size) {
+ first_chunk_len = size;
+ second_chunk_start = 0;
+ second_chunk_len = 0;
+ } else {
+ second_chunk_start = sizeof(u32);
+ second_chunk_len =
+ mic_write_offset - sizeof(u32);
+ if (first_chunk_len + second_chunk_len > size) {
+ second_chunk_len =
+ size - first_chunk_len;
+ }
+ }
+ } else {
+ first_chunk_start = rt5677->mic_read_offset;
+ first_chunk_len =
+ mic_write_offset - rt5677->mic_read_offset;
+ if (first_chunk_len > size)
+ first_chunk_len = size;
+ second_chunk_start = 0;
+ second_chunk_len = 0;
+ }
+
+ ret = rt5677_spi_read(addr + first_chunk_start, rt5677->mic_buf,
+ first_chunk_len);
+ if (ret)
+ return ret;
+ bytes_to_user += first_chunk_len;
+
+ if (second_chunk_len) {
+ ret = rt5677_spi_read(addr + second_chunk_start,
+ rt5677->mic_buf + first_chunk_len,
+ second_chunk_len);
+ if (!ret)
+ bytes_to_user += second_chunk_len;
+ }
+
+ bytes_to_user -= copy_to_user(rt_codec.buf, rt5677->mic_buf,
+ bytes_to_user);
+
+ rt5677->mic_read_offset += bytes_to_user;
+ if (rt5677->mic_read_offset >= RT5677_MIC_BUF_SIZE) {
+ rt5677->mic_read_offset -=
+ RT5677_MIC_BUF_SIZE - sizeof(u32);
+ }
+ return bytes_to_user >> 1;
+
+ case RT_WRITE_CODEC_DSP_IOCTL:
+ case RT_WRITE_CODEC_DSP_IOCTL_COMPAT:
+ if (!rt5677->model_buf || rt5677->model_len < size) {
+ kfree(rt5677->model_buf);
+ rt5677->model_len = 0;
+ rt5677->model_buf = kzalloc(size, GFP_KERNEL);
+ if (!rt5677->model_buf)
+ return -ENOMEM;
+ }
+ if (copy_from_user(rt5677->model_buf, rt_codec.buf, size))
+ return -EFAULT;
+ rt5677->model_len = size;
+ return 0;
+
+ default:
+ return -ENOTSUPP;
+ }
+}
+EXPORT_SYMBOL_GPL(rt5677_ioctl_common);
diff --git a/sound/soc/codecs/rt5677_ioctl.h b/sound/soc/codecs/rt5677_ioctl.h
new file mode 100644
index 0000000..8029e90
--- /dev/null
+++ b/sound/soc/codecs/rt5677_ioctl.h
@@ -0,0 +1,24 @@
+/*
+ * rt5677_ioctl.h -- RT5677 ALSA SoC audio driver IO control
+ *
+ * Copyright 2014 Google, Inc
+ * Author: Dmitry Shmidt <dimitrysh@google.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __RT5677_IOCTL_H__
+#define __RT5677_IOCTL_H__
+
+#include <sound/hwdep.h>
+#include <linux/ioctl.h>
+
+#define RT5677_MIC_BUF_ADDR 0x60030000
+#define RT5677_MODEL_ADDR 0x5FFC9800
+
+int rt5677_ioctl_common(struct snd_hwdep *hw, struct file *file,
+ unsigned int cmd, unsigned long arg);
+
+#endif /* __RT5677_IOCTL_H__ */
diff --git a/sound/soc/codecs/rt_codec_ioctl.c b/sound/soc/codecs/rt_codec_ioctl.c
index a41c683..6053d95 100644
--- a/sound/soc/codecs/rt_codec_ioctl.c
+++ b/sound/soc/codecs/rt_codec_ioctl.c
@@ -38,20 +38,28 @@
struct rt_codec_cmd __user *_rt_codec = (struct rt_codec_cmd *)arg;
struct rt_codec_cmd rt_codec;
int *buf, *p;
+ int ret = 0;
+
+ if (is_compat_task()) {
+ if (NULL == rt_codec_ioctl_ops.ioctl_common)
+ return -ENOTSUPP;
+ return rt_codec_ioctl_ops.ioctl_common(hw, file, cmd, arg);
+ }
if (copy_from_user(&rt_codec, _rt_codec, sizeof(rt_codec))) {
- dev_err(codec->dev,"copy_from_user faild\n");
+ dev_err(codec->dev,"copy_from_user failed\n");
return -EFAULT;
}
- dev_dbg(codec->dev, "%s(): rt_codec.number=%d, cmd=%d\n",
+ dev_dbg(codec->dev, "%s(): rt_codec.number=%zu, cmd=%u\n",
__func__, rt_codec.number, cmd);
buf = kmalloc(sizeof(*buf) * rt_codec.number, GFP_KERNEL);
if (buf == NULL)
return -ENOMEM;
if (copy_from_user(buf, rt_codec.buf, sizeof(*buf) * rt_codec.number)) {
+ dev_err(codec->dev,"copy_from_user failed\n");
goto err;
}
-
+
switch (cmd) {
case RT_READ_CODEC_REG_IOCTL:
for (p = buf; p < buf + rt_codec.number / 2; p++) {
@@ -59,7 +67,7 @@
}
if (copy_to_user(rt_codec.buf, buf, sizeof(*buf) * rt_codec.number))
goto err;
- break;
+ break;
case RT_WRITE_CODEC_REG_IOCTL:
for (p = buf; p < buf + rt_codec.number / 2; p++)
@@ -89,18 +97,18 @@
rt_codec_ioctl_ops.index_write(codec, *p,
*(p+rt_codec.number/2));
}
- break;
+ break;
default:
if (NULL == rt_codec_ioctl_ops.ioctl_common)
goto err;
- rt_codec_ioctl_ops.ioctl_common(hw, file, cmd, arg);
+ ret = rt_codec_ioctl_ops.ioctl_common(hw, file, cmd, arg);
break;
}
kfree(buf);
- return 0;
+ return ret;
err:
kfree(buf);
@@ -118,7 +126,7 @@
dev_dbg(codec->dev, "enter %s, number = %d\n", __func__, number);
if (copy_from_user(&rt_codec, _rt_codec, sizeof(rt_codec)))
return -EFAULT;
-
+
buf = kmalloc(sizeof(*buf) * number, GFP_KERNEL);
if (buf == NULL)
return -ENOMEM;
@@ -164,12 +172,15 @@
if ((err = snd_hwdep_new(card, RT_CE_CODEC_HWDEP_NAME, 0, &hw)) < 0)
return err;
-
+
strcpy(hw->name, RT_CE_CODEC_HWDEP_NAME);
hw->private_data = codec;
hw->ops.open = rt_codec_hwdep_open;
hw->ops.release = rt_codec_hwdep_release;
hw->ops.ioctl = rt_codec_hwdep_ioctl;
+#ifdef CONFIG_COMPAT
+ hw->ops.ioctl_compat = rt_codec_hwdep_ioctl;
+#endif
return 0;
}
diff --git a/sound/soc/codecs/rt_codec_ioctl.h b/sound/soc/codecs/rt_codec_ioctl.h
index 56daa37..9378b37 100644
--- a/sound/soc/codecs/rt_codec_ioctl.h
+++ b/sound/soc/codecs/rt_codec_ioctl.h
@@ -20,6 +20,13 @@
int __user *buf;
};
+#ifdef CONFIG_COMPAT
+struct compat_rt_codec_cmd {
+ compat_size_t number;
+ compat_caddr_t buf;
+};
+#endif /* CONFIG_COMPAT */
+
struct rt_codec_ops {
int (*index_write)(struct snd_soc_codec *codec,
unsigned int reg, unsigned int value);
@@ -72,6 +79,15 @@
RT_GET_CODEC_ID = _IOR('R', 0x30, struct rt_codec_cmd),
};
+#ifdef CONFIG_COMPAT
+enum {
+ RT_READ_CODEC_DSP_IOCTL_COMPAT =
+ _IOR('R', 0x04, struct compat_rt_codec_cmd),
+ RT_WRITE_CODEC_DSP_IOCTL_COMPAT =
+ _IOW('R', 0x04, struct compat_rt_codec_cmd),
+};
+#endif
+
int realtek_ce_init_hwdep(struct snd_soc_codec *codec);
struct rt_codec_ops *rt_codec_get_ioctl_ops(void);
diff --git a/sound/soc/codecs/tfa9895.c b/sound/soc/codecs/tfa9895.c
new file mode 100644
index 0000000..778f2a8
--- /dev/null
+++ b/sound/soc/codecs/tfa9895.c
@@ -0,0 +1,547 @@
+/* driver/i2c/chip/tfa9895.c
+ *
+ * NXP tfa9895 Speaker Amp
+ *
+ * Copyright (C) 2012 HTC Corporation
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/interrupt.h>
+#include <linux/i2c.h>
+#include <linux/slab.h>
+#include <linux/irq.h>
+#include <linux/miscdevice.h>
+#include <asm/uaccess.h>
+#include <linux/delay.h>
+#include <linux/input.h>
+#include <linux/workqueue.h>
+#include <linux/freezer.h>
+#include "tfa9895.h"
+#include <linux/mutex.h>
+#include <linux/debugfs.h>
+#include <linux/gpio.h>
+#include <linux/module.h>
+#include <linux/of_gpio.h>
+
+/* htc audio ++ */
+#undef pr_info
+#undef pr_err
+#define pr_aud_fmt(fmt) "[AUD] " KBUILD_MODNAME ": " fmt
+#define pr_info(fmt, ...) printk(KERN_INFO pr_aud_fmt(fmt), ##__VA_ARGS__)
+#define pr_err(fmt, ...) printk(KERN_ERR pr_aud_fmt(fmt), ##__VA_ARGS__)
+/* htc audio -- */
+
+static struct i2c_client *this_client;
+static struct tfa9895_platform_data *pdata;
+struct mutex spk_amp_lock;
+static int last_spkamp_state;
+static int dsp_enabled;
+static int tfa9895_i2c_write(char *txdata, int length);
+static int tfa9895_i2c_read(char *rxdata, int length);
+#ifdef CONFIG_DEBUG_FS
+static struct dentry *debugfs_tpa_dent;
+static struct dentry *debugfs_peek;
+static struct dentry *debugfs_poke;
+static unsigned char read_data;
+
+static int get_parameters(char *buf, long int *param1, int num_of_par)
+{
+ char *token;
+ int base, cnt;
+
+ token = strsep(&buf, " ");
+
+ for (cnt = 0; cnt < num_of_par; cnt++) {
+ if (token != NULL) {
+ if ((token[1] == 'x') || (token[1] == 'X'))
+ base = 16;
+ else
+ base = 10;
+
+ if (kstrtoul(token, base, &param1[cnt]) != 0)
+ return -EINVAL;
+
+ token = strsep(&buf, " ");
+ }
+ else
+ return -EINVAL;
+ }
+ return 0;
+}
+
+static int codec_debug_open(struct inode *inode, struct file *file)
+{
+ file->private_data = inode->i_private;
+ return 0;
+}
+
+static ssize_t codec_debug_read(struct file *file, char __user *ubuf,
+ size_t count, loff_t *ppos)
+{
+ char lbuf[8];
+
+ snprintf(lbuf, sizeof(lbuf), "0x%x\n", read_data);
+ return simple_read_from_buffer(ubuf, count, ppos, lbuf, strlen(lbuf));
+}
+
+static ssize_t codec_debug_write(struct file *filp,
+ const char __user *ubuf, size_t cnt, loff_t *ppos)
+{
+ char *access_str = filp->private_data;
+ char lbuf[32];
+ unsigned char reg_idx[2] = {0x00, 0x00};
+ int rc;
+ long int param[5];
+
+ if (cnt > sizeof(lbuf) - 1)
+ return -EINVAL;
+
+ rc = copy_from_user(lbuf, ubuf, cnt);
+ if (rc)
+ return -EFAULT;
+
+ lbuf[cnt] = '\0';
+
+ if (!strcmp(access_str, "poke")) {
+ /* write */
+ rc = get_parameters(lbuf, param, 2);
+ if ((param[0] <= 0xFF) && (param[1] <= 0xFF) &&
+ (rc == 0)) {
+ reg_idx[0] = param[0];
+ reg_idx[1] = param[1];
+ tfa9895_i2c_write(reg_idx, 2);
+ } else
+ rc = -EINVAL;
+ } else if (!strcmp(access_str, "peek")) {
+ /* read */
+ rc = get_parameters(lbuf, param, 1);
+ if ((param[0] <= 0xFF) && (rc == 0)) {
+ reg_idx[0] = param[0];
+ tfa9895_i2c_read(&read_data, 1);
+ } else
+ rc = -EINVAL;
+ }
+
+ if (rc == 0)
+ rc = cnt;
+ else
+ pr_err("%s: rc = %d\n", __func__, rc);
+
+ return rc;
+}
+
+static const struct file_operations codec_debug_ops = {
+ .open = codec_debug_open,
+ .write = codec_debug_write,
+ .read = codec_debug_read
+};
+#endif
+#ifdef CONFIG_AMP_TFA9895L
+unsigned char cf_dsp_bypass[3][3] = {
+ {0x04, 0x88, 0x53}, /* for Rchannel */
+ {0x09, 0x06, 0x19},
+ {0x09, 0x06, 0x18}
+};
+#else
+unsigned char cf_dsp_bypass[3][3] = {
+ {0x04, 0x88, 0x0B},
+ {0x09, 0x06, 0x19},
+ {0x09, 0x06, 0x18}
+};
+#endif
+unsigned char amp_off[1][3] = {
+ {0x09, 0x06, 0x19}
+};
+
+static int tfa9895_i2c_write(char *txdata, int length)
+{
+ int rc;
+ struct i2c_msg msg[] = {
+ {
+ .addr = this_client->addr,
+ .flags = 0,
+ .len = length,
+ .buf = txdata,
+ },
+ };
+
+ rc = i2c_transfer(this_client->adapter, msg, 1);
+ if (rc < 0) {
+ pr_err("%s: transfer error %d\n", __func__, rc);
+ return rc;
+ }
+
+ return 0;
+}
+
+static int tfa9895_i2c_read(char *rxdata, int length)
+{
+ int rc;
+ struct i2c_msg msgs[] = {
+ {
+ .addr = this_client->addr,
+ .flags = I2C_M_RD,
+ .len = length,
+ .buf = rxdata,
+ },
+ };
+
+ rc = i2c_transfer(this_client->adapter, msgs, 1);
+ if (rc < 0) {
+ pr_err("%s: transfer error %d\n", __func__, rc);
+ return rc;
+ }
+
+ return 0;
+}
+
+static int tfa9895_open(struct inode *inode, struct file *file)
+{
+ return 0;
+}
+
+static int tfa9895_release(struct inode *inode, struct file *file)
+{
+ return 0;
+}
+
+int set_tfa9895_spkamp(int en, int dsp_mode)
+{
+ int i = 0;
+ unsigned char mute_reg[1] = {0x06};
+ unsigned char mute_data[3] = {0, 0, 0};
+ unsigned char power_reg[1] = {0x09};
+ unsigned char power_data[3] = {0, 0, 0};
+ unsigned char SPK_CR[3] = {0x8, 0x8, 0};
+
+ pr_debug("%s: en = %d dsp_enabled = %d\n", __func__, en, dsp_enabled);
+ mutex_lock(&spk_amp_lock);
+ if (en && !last_spkamp_state) {
+ last_spkamp_state = 1;
+ /* NXP CF DSP Bypass mode */
+ if (dsp_enabled == 0) {
+ for (i = 0; i < 3; i++)
+ tfa9895_i2c_write(cf_dsp_bypass[i], 3);
+ /* Enable NXP PVP Bit10 of Reg 8 per acoustic's request in bypass mode.(Hboot loopback & MFG ROM) */
+ tfa9895_i2c_write(SPK_CR, 1);
+ tfa9895_i2c_read(SPK_CR + 1, 2);
+ SPK_CR[1] |= 0x4; /* Enable PVP bit10 */
+ tfa9895_i2c_write(SPK_CR, 3);
+ } else {
+ tfa9895_i2c_write(power_reg, 1);
+ tfa9895_i2c_read(power_data + 1, 2);
+ tfa9895_i2c_write(mute_reg, 1);
+ tfa9895_i2c_read(mute_data + 1, 2);
+ mute_data[0] = 0x6;
+ mute_data[2] &= 0xdf; /* bit 5 down = un-mute */
+ power_data[0] = 0x9;
+ power_data[2] &= 0xfe; /* bit 0 dn = power up */
+ tfa9895_i2c_write(power_data, 3);
+ tfa9895_i2c_write(mute_data, 3);
+ power_data[2] |= 0x8; /* bit 3 Up = AMP on */
+ tfa9895_i2c_write(power_data, 3);
+ }
+ } else if (!en && last_spkamp_state) {
+ last_spkamp_state = 0;
+ if (dsp_enabled == 0) {
+ tfa9895_i2c_write(amp_off[0], 3);
+ } else {
+ tfa9895_i2c_write(power_reg, 1);
+ tfa9895_i2c_read(power_data + 1, 2);
+ tfa9895_i2c_write(mute_reg, 1);
+ tfa9895_i2c_read(mute_data + 1, 2);
+ mute_data[0] = 0x6;
+ mute_data[2] |= 0x20; /* bit 5 up = mute */
+ tfa9895_i2c_write(mute_data, 3);
+ tfa9895_i2c_write(power_reg, 1);
+ tfa9895_i2c_read(power_data + 1, 2);
+ power_data[0] = 0x9;
+ power_data[2] &= 0xf7; /* bit 3 down = AMP off */
+ tfa9895_i2c_write(power_data, 3);
+ power_data[2] |= 0x1; /* bit 0 up = power down */
+ tfa9895_i2c_write(power_data, 3);
+ }
+ }
+ mutex_unlock(&spk_amp_lock);
+ return 0;
+}
+
+int tfa9895_disable(bool disable)
+{
+ int rc = 0;
+
+ unsigned char amp_on[1][3] = {
+ {0x09, 0x06, 0x18}
+ };
+
+ if (disable) {
+ pr_info("%s: speaker switch off!\n", __func__);
+ rc = tfa9895_i2c_write(amp_off[0], 3);
+ } else {
+ pr_info("%s: speaker switch on!\n", __func__);
+ rc = tfa9895_i2c_write(amp_on[0], 3);
+ }
+
+ return rc;
+}
+
+static long tfa9895_ioctl(struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ int rc = 0;
+ unsigned char *buf;
+ void __user *argp = (void __user *)arg;
+
+ if (_IOC_TYPE(cmd) != TFA9895_IOCTL_MAGIC)
+ return -ENOTTY;
+
+ if (_IOC_SIZE(cmd) > sizeof(struct tfa9895_i2c_buffer))
+ return -EINVAL;
+
+ buf = kzalloc(_IOC_SIZE(cmd), GFP_KERNEL);
+
+ if (buf == NULL) {
+ pr_err("%s %d: allocate kernel buffer failed.\n", __func__, __LINE__);
+ return -ENOMEM;
+ }
+
+ if (_IOC_DIR(cmd) & _IOC_WRITE) {
+ rc = copy_from_user(buf, argp, _IOC_SIZE(cmd));
+ if (rc) {
+ kfree(buf);
+ return -EFAULT;
+ }
+ }
+
+ switch (_IOC_NR(cmd)) {
+ case TFA9895_WRITE_CONFIG_NR:
+ pr_debug("%s: TFA9895_WRITE_CONFIG\n", __func__);
+ rc = tfa9895_i2c_write(((struct tfa9895_i2c_buffer *)buf)->buffer, ((struct tfa9895_i2c_buffer *)buf)->size);
+ break;
+ case TFA9895_READ_CONFIG_NR:
+ pr_debug("%s: TFA9895_READ_CONFIG\n", __func__);
+ rc = tfa9895_i2c_read(((struct tfa9895_i2c_buffer *)buf)->buffer, ((struct tfa9895_i2c_buffer *)buf)->size);
+ break;
+ case TFA9895_WRITE_L_CONFIG_NR:
+ pr_debug("%s: TFA9895_WRITE_CONFIG_L\n", __func__);
+ rc = tfa9895_l_write(((struct tfa9895_i2c_buffer *)buf)->buffer, ((struct tfa9895_i2c_buffer *)buf)->size);
+ break;
+ case TFA9895_READ_L_CONFIG_NR:
+ pr_debug("%s: TFA9895_READ_CONFIG_L\n", __func__);
+ rc = tfa9895_l_read(((struct tfa9895_i2c_buffer *)buf)->buffer, ((struct tfa9895_i2c_buffer *)buf)->size);
+ break;
+ case TFA9895_ENABLE_DSP_NR:
+ pr_info("%s: TFA9895_ENABLE_DSP %d\n", __func__, *(int *)buf);
+ dsp_enabled = *(int *)buf;
+ break;
+ default:
+ kfree(buf);
+ return -ENOTTY;
+ }
+
+ if (_IOC_DIR(cmd) & _IOC_READ) {
+ rc = copy_to_user(argp, buf, _IOC_SIZE(cmd));
+ if (rc) {
+ kfree(buf);
+ return -EFAULT;
+ }
+ }
+ kfree(buf);
+ return rc;
+}
+
+static const struct file_operations tfa9895_fops = {
+ .owner = THIS_MODULE,
+ .open = tfa9895_open,
+ .release = tfa9895_release,
+ .unlocked_ioctl = tfa9895_ioctl,
+ .compat_ioctl = tfa9895_ioctl,
+};
+
+static struct miscdevice tfa9895_device = {
+ .minor = MISC_DYNAMIC_MINOR,
+ .name = "tfa9895",
+ .fops = &tfa9895_fops,
+};
+
+int tfa9895_probe(struct i2c_client *client, const struct i2c_device_id *id)
+{
+ int i;
+ unsigned char SPK_CR[3] = {0x8, 0x8, 0};
+
+ int ret = 0;
+ char temp[6] = {0x4, 0x88};
+ pdata = client->dev.platform_data;
+
+ if (pdata == NULL) {
+ ret = -ENOMEM;
+ pr_err("%s: platform data is NULL\n", __func__);
+ goto err_alloc_data_failed;
+ }
+
+ if (gpio_is_valid(pdata->tfa9895_power_enable)) {
+ ret = gpio_request(pdata->tfa9895_power_enable, "tfa9895-power-enable");
+ if (ret) {
+ pr_err("%s: Fail gpio_request tfa9895-power-enable\n", __func__);
+ goto err_free_gpio_all;
+ }
+
+ ret = gpio_direction_output(pdata->tfa9895_power_enable, 1);
+ if (ret) {
+ pr_err("%s: Fail gpio_direction tfa9895-power-enable\n", __func__);
+ goto err_free_gpio_all;
+ }
+ pr_info("%s: tfa9895 power pin is enabled\n", __func__);
+
+ mdelay(20);
+ }
+
+ this_client = client;
+
+ if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C)) {
+ pr_err("%s: i2c check functionality error\n", __func__);
+ ret = -ENODEV;
+ goto err_free_gpio_all;
+ }
+
+ ret = misc_register(&tfa9895_device);
+ if (ret) {
+ pr_err("%s: tfa9895_device register failed\n", __func__);
+ goto err_free_gpio_all;
+ }
+ ret = tfa9895_i2c_write(temp, 2);
+ ret = tfa9895_i2c_read(temp, 5);
+ if (ret < 0)
+ pr_info("%s:i2c read fail\n", __func__);
+ else
+ pr_info("%s:i2c read successfully\n", __func__);
+
+#ifdef CONFIG_DEBUG_FS
+ debugfs_tpa_dent = debugfs_create_dir("tfa9895", 0);
+ if (!IS_ERR(debugfs_tpa_dent)) {
+ debugfs_peek = debugfs_create_file("peek",
+ S_IFREG | S_IRUGO, debugfs_tpa_dent,
+ (void *) "peek", &codec_debug_ops);
+
+ debugfs_poke = debugfs_create_file("poke",
+ S_IFREG | S_IRUGO, debugfs_tpa_dent,
+ (void *) "poke", &codec_debug_ops);
+
+ }
+#endif
+
+ for (i = 0; i < 3; i++)
+ tfa9895_i2c_write(cf_dsp_bypass[i], 3);
+
+ /* Enable NXP PVP Bit10 of Reg 8 per acoustic's request in bypass mode.(Hboot loopback & MFG ROM) */
+ tfa9895_i2c_write(SPK_CR, 1);
+ tfa9895_i2c_read(SPK_CR + 1, 2);
+ SPK_CR[1] |= 0x4; /* Enable PVP bit10 */
+ tfa9895_i2c_write(SPK_CR, 3);
+
+ return 0;
+
+err_free_gpio_all:
+ return ret;
+err_alloc_data_failed:
+ return ret;
+}
+
+static int tfa9895_remove(struct i2c_client *client)
+{
+ struct tfa9895_platform_data *p9895data = i2c_get_clientdata(client);
+ kfree(p9895data);
+
+ return 0;
+}
+
+static void tfa9895_shutdown(struct i2c_client *client)
+{
+ int ret = 0;
+ pdata = client->dev.platform_data;
+
+ if (pdata == NULL) {
+ ret = -ENOMEM;
+ pr_err("%s: platform data is NULL, could not disable tfa9895-power-enable pin\n", __func__);
+ } else if (gpio_is_valid(pdata->tfa9895_power_enable)) {
+ ret = gpio_request(pdata->tfa9895_power_enable, "tfa9895-power-enable");
+ if (ret)
+ pr_err("%s: Fail gpio_request tfa9895-power-enable\n", __func__);
+
+ ret = gpio_direction_output(pdata->tfa9895_power_enable, 0);
+ if (ret)
+ pr_err("%s: Fail gpio_direction tfa9895-power-enable\n", __func__);
+ else
+ pr_info("%s: tfa9895 power pin is disabled\n", __func__);
+
+ mdelay(20);
+ }
+
+ return;
+}
+
+static int tfa9895_suspend(struct i2c_client *client, pm_message_t mesg)
+{
+ return 0;
+}
+
+static int tfa9895_resume(struct i2c_client *client)
+{
+ return 0;
+}
+
+static struct of_device_id tfa9895_match_table[] = {
+ { .compatible = "nxp,tfa9895-amp",},
+ { },
+};
+
+static const struct i2c_device_id tfa9895_id[] = {
+ { TFA9895_I2C_NAME, 0 },
+ { }
+};
+
+static struct i2c_driver tfa9895_driver = {
+ .probe = tfa9895_probe,
+ .remove = tfa9895_remove,
+ .shutdown = tfa9895_shutdown,
+ .suspend = tfa9895_suspend,
+ .resume = tfa9895_resume,
+ .id_table = tfa9895_id,
+ .driver = {
+ .name = TFA9895_I2C_NAME,
+ .of_match_table = tfa9895_match_table,
+ },
+};
+
+static int __init tfa9895_init(void)
+{
+ pr_info("%s\n", __func__);
+ mutex_init(&spk_amp_lock);
+ dsp_enabled = 0;
+ return i2c_add_driver(&tfa9895_driver);
+}
+
+static void __exit tfa9895_exit(void)
+{
+#ifdef CONFIG_DEBUG_FS
+ debugfs_remove(debugfs_peek);
+ debugfs_remove(debugfs_poke);
+ debugfs_remove(debugfs_tpa_dent);
+#endif
+ i2c_del_driver(&tfa9895_driver);
+}
+
+module_init(tfa9895_init);
+module_exit(tfa9895_exit);
+
+MODULE_DESCRIPTION("tfa9895 Speaker Amp driver");
+MODULE_LICENSE("GPL");
diff --git a/sound/soc/codecs/tfa9895.h b/sound/soc/codecs/tfa9895.h
new file mode 100644
index 0000000..516ee12
--- /dev/null
+++ b/sound/soc/codecs/tfa9895.h
@@ -0,0 +1,40 @@
+/*
+ * Definitions for tfa9895 speaker amp chip.
+ */
+#ifndef TFA9895_H
+#define TFA9895_H
+
+#include <linux/ioctl.h>
+
+#define TFA9895_I2C_NAME "tfa9895"
+#define TFA9895L_I2C_NAME "tfa9895l"
+
+#define TFA9895_IOCTL_MAGIC 'a'
+#define TFA9895_WRITE_CONFIG_NR 0x01
+#define TFA9895_READ_CONFIG_NR 0x02
+#define TFA9895_ENABLE_DSP_NR 0x03
+#define TFA9895_WRITE_L_CONFIG_NR 0x04
+#define TFA9895_READ_L_CONFIG_NR 0x05
+
+#define TFA9895_WRITE_CONFIG(size) _IOW(TFA9895_IOCTL_MAGIC, TFA9895_WRITE_CONFIG_NR, (size))
+#define TFA9895_READ_CONFIG(size) _IOWR(TFA9895_IOCTL_MAGIC, TFA9895_READ_CONFIG_NR, (size))
+#define TFA9895_ENABLE_DSP(size) _IOW(TFA9895_IOCTL_MAGIC, TFA9895_ENABLE_DSP_NR, (size))
+#define TFA9895_WRITE_L_CONFIG(size) _IOW(TFA9895_IOCTL_MAGIC, TFA9895_WRITE_L_CONFIG_NR, (size))
+#define TFA9895_READ_L_CONFIG(size) _IOWR(TFA9895_IOCTL_MAGIC, TFA9895_READ_L_CONFIG_NR, (size))
+
+struct tfa9895_platform_data {
+ uint32_t tfa9895_power_enable;
+};
+
+struct tfa9895_i2c_buffer {
+ int size;
+ unsigned char buffer[255];
+};
+
+int set_tfa9895_spkamp(int en, int dsp_mode);
+int set_tfa9895l_spkamp(int en, int dsp_mode);
+int tfa9895_l_write(char *txdata, int length);
+int tfa9895_l_read(char *rxdata, int length);
+int tfa9895_disable(bool disable);
+int tfa9895l_disable(bool disable);
+#endif
diff --git a/sound/soc/codecs/tfa9895l.c b/sound/soc/codecs/tfa9895l.c
new file mode 100644
index 0000000..7194caa
--- /dev/null
+++ b/sound/soc/codecs/tfa9895l.c
@@ -0,0 +1,488 @@
+/* sound/soc/codecs/tfa9895l.c
+ *
+ * NXP tfa9895 Speaker Amp
+ *
+ * Copyright (C) 2012 HTC Corporation
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/interrupt.h>
+#include <linux/i2c.h>
+#include <linux/slab.h>
+#include <linux/irq.h>
+#include <linux/miscdevice.h>
+#include <asm/uaccess.h>
+#include <linux/delay.h>
+#include <linux/input.h>
+#include <linux/workqueue.h>
+#include <linux/freezer.h>
+#include "tfa9895.h"
+#include <linux/mutex.h>
+#include <linux/debugfs.h>
+#include <linux/gpio.h>
+#include <linux/module.h>
+#include <linux/of_gpio.h>
+
+/* htc audio ++ */
+#undef pr_info
+#undef pr_err
+#define pr_aud_fmt(fmt) "[AUD] " KBUILD_MODNAME ": " fmt
+#define pr_info(fmt, ...) printk(KERN_INFO pr_aud_fmt(fmt), ##__VA_ARGS__)
+#define pr_err(fmt, ...) printk(KERN_ERR pr_aud_fmt(fmt), ##__VA_ARGS__)
+/* htc audio -- */
+
+static struct i2c_client *this_client;
+struct mutex spk_ampl_lock;
+static int last_spkampl_state;
+static int dspl_enabled;
+static int tfa9895_i2c_write(char *txdata, int length);
+static int tfa9895_i2c_read(char *rxdata, int length);
+#ifdef CONFIG_DEBUG_FS
+static struct dentry *debugfs_tpa_dent;
+static struct dentry *debugfs_peek;
+static struct dentry *debugfs_poke;
+static unsigned char read_data;
+
+static int get_parameters(char *buf, long int *param1, int num_of_par)
+{
+ char *token;
+ int base, cnt;
+
+ token = strsep(&buf, " ");
+
+ for (cnt = 0; cnt < num_of_par; cnt++) {
+ if (token != NULL) {
+ if ((token[1] == 'x') || (token[1] == 'X'))
+ base = 16;
+ else
+ base = 10;
+
+ if (kstrtoul(token, base, &param1[cnt]) != 0)
+ return -EINVAL;
+
+ token = strsep(&buf, " ");
+ } else {
+ return -EINVAL;
+ }
+ }
+ return 0;
+}
+
+static int codec_debug_open(struct inode *inode, struct file *file)
+{
+ file->private_data = inode->i_private;
+ return 0;
+}
+
+static ssize_t codec_debug_read(struct file *file, char __user *ubuf,
+ size_t count, loff_t *ppos)
+{
+ char lbuf[8];
+
+ snprintf(lbuf, sizeof(lbuf), "0x%x\n", read_data);
+ return simple_read_from_buffer(ubuf, count, ppos, lbuf, strlen(lbuf));
+}
+
+static ssize_t codec_debug_write(struct file *filp,
+ const char __user *ubuf, size_t cnt, loff_t *ppos)
+{
+ char *access_str = filp->private_data;
+ char lbuf[32];
+ unsigned char reg_idx[2] = {0x00, 0x00};
+ int rc;
+ long int param[5];
+
+ if (cnt > sizeof(lbuf) - 1)
+ return -EINVAL;
+
+ rc = copy_from_user(lbuf, ubuf, cnt);
+ if (rc)
+ return -EFAULT;
+
+ lbuf[cnt] = '\0';
+
+ if (!strcmp(access_str, "poke")) {
+ /* write */
+ rc = get_parameters(lbuf, param, 2);
+ if ((param[0] <= 0xFF) && (param[1] <= 0xFF) &&
+ (rc == 0)) {
+ reg_idx[0] = param[0];
+ reg_idx[1] = param[1];
+ tfa9895_i2c_write(reg_idx, 2);
+ } else
+ rc = -EINVAL;
+ } else if (!strcmp(access_str, "peek")) {
+ /* read */
+ rc = get_parameters(lbuf, param, 1);
+ if ((param[0] <= 0xFF) && (rc == 0)) {
+ reg_idx[0] = param[0];
+ /* select the register before reading it back */
+ tfa9895_i2c_write(reg_idx, 1);
+ tfa9895_i2c_read(&read_data, 1);
+ } else
+ rc = -EINVAL;
+ }
+
+ if (rc == 0)
+ rc = cnt;
+ else
+ pr_err("%s: rc = %d\n", __func__, rc);
+
+ return rc;
+}
+
+static const struct file_operations codec_debug_ops = {
+ .open = codec_debug_open,
+ .write = codec_debug_write,
+ .read = codec_debug_read
+};
+#endif
+
+unsigned char cf_dspl_bypass[3][3] = {
+ {0x04, 0x88, 0x0B},
+ {0x09, 0x06, 0x19},
+ {0x09, 0x06, 0x18}
+};
+
+unsigned char ampl_off[1][3] = {
+ {0x09, 0x06, 0x19}
+};
+
+static int tfa9895_i2c_write(char *txdata, int length)
+{
+ int rc;
+ struct i2c_msg msg[] = {
+ {
+ .addr = this_client->addr,
+ .flags = 0,
+ .len = length,
+ .buf = txdata,
+ },
+ };
+
+ rc = i2c_transfer(this_client->adapter, msg, 1);
+ if (rc < 0) {
+ pr_err("%s: transfer error %d\n", __func__, rc);
+ return rc;
+ }
+
+ return 0;
+}
+
+static int tfa9895_i2c_read(char *rxdata, int length)
+{
+ int rc;
+ struct i2c_msg msgs[] = {
+ {
+ .addr = this_client->addr,
+ .flags = I2C_M_RD,
+ .len = length,
+ .buf = rxdata,
+ },
+ };
+
+ rc = i2c_transfer(this_client->adapter, msgs, 1);
+ if (rc < 0) {
+ pr_err("%s: transfer error %d\n", __func__, rc);
+ return rc;
+ }
+
+ return 0;
+}
+
+int tfa9895_l_write(char *txdata, int length)
+{
+ return tfa9895_i2c_write(txdata, length);
+}
+
+int tfa9895_l_read(char *rxdata, int length)
+{
+ return tfa9895_i2c_read(rxdata, length);
+}
+
+static int tfa9895l_open(struct inode *inode, struct file *file)
+{
+ return 0;
+}
+
+static int tfa9895l_release(struct inode *inode, struct file *file)
+{
+ return 0;
+}
+
+int set_tfa9895l_spkamp(int en, int dsp_mode)
+{
+ int i = 0;
+ unsigned char mute_reg[1] = {0x06};
+ unsigned char mute_data[3] = {0, 0, 0};
+ unsigned char power_reg[1] = {0x09};
+ unsigned char power_data[3] = {0, 0, 0};
+ unsigned char SPK_CR[3] = {0x8, 0x8, 0};
+
+ pr_debug("%s: en = %d dspl_enabled = %d\n", __func__, en, dspl_enabled);
+ mutex_lock(&spk_ampl_lock);
+ if (en && !last_spkampl_state) {
+ last_spkampl_state = 1;
+ /* NXP CF DSP Bypass mode */
+ if (dspl_enabled == 0) {
+ for (i = 0; i < 3; i++)
+ tfa9895_i2c_write(cf_dspl_bypass[i], 3);
+ /* Enable NXP PVP bit10 of reg 8 per acoustic's request in bypass mode. (Hboot loopback & MFG ROM) */
+ tfa9895_i2c_write(SPK_CR, 1);
+ tfa9895_i2c_read(SPK_CR + 1, 2);
+ SPK_CR[1] |= 0x4; /* Enable PVP bit10 */
+ tfa9895_i2c_write(SPK_CR, 3);
+ } else {
+ tfa9895_i2c_write(power_reg, 1);
+ tfa9895_i2c_read(power_data + 1, 2);
+ tfa9895_i2c_write(mute_reg, 1);
+ tfa9895_i2c_read(mute_data + 1, 2);
+ mute_data[0] = 0x6;
+ mute_data[2] &= 0xdf; /* bit 5 down = un-mute */
+ power_data[0] = 0x9;
+ power_data[2] &= 0xfe; /* bit 0 dn = power up */
+ tfa9895_i2c_write(power_data, 3);
+ tfa9895_i2c_write(mute_data, 3);
+ power_data[2] |= 0x8; /* bit 3 Up = AMP on */
+ tfa9895_i2c_write(power_data, 3);
+ }
+ } else if (!en && last_spkampl_state) {
+ last_spkampl_state = 0;
+ if (dspl_enabled == 0) {
+ tfa9895_i2c_write(ampl_off[0], 3);
+ } else {
+ tfa9895_i2c_write(power_reg, 1);
+ tfa9895_i2c_read(power_data + 1, 2);
+ tfa9895_i2c_write(mute_reg, 1);
+ tfa9895_i2c_read(mute_data + 1, 2);
+ mute_data[0] = 0x6;
+ mute_data[2] |= 0x20; /* bit 5 up = mute */
+ tfa9895_i2c_write(mute_data, 3);
+ tfa9895_i2c_write(power_reg, 1);
+ tfa9895_i2c_read(power_data + 1, 2);
+ power_data[0] = 0x9;
+ power_data[2] &= 0xf7; /* bit 3 down = AMP off */
+ tfa9895_i2c_write(power_data, 3);
+ power_data[2] |= 0x1; /* bit 0 up = power down */
+ tfa9895_i2c_write(power_data, 3);
+ }
+ }
+ mutex_unlock(&spk_ampl_lock);
+ return 0;
+}
+
+int tfa9895l_disable(bool disable)
+{
+ int rc = 0;
+
+ unsigned char ampl_on[1][3] = {
+ {0x09, 0x06, 0x18}
+ };
+
+ if (disable) {
+ pr_info("%s: speaker_l switch off!\n", __func__);
+ rc = tfa9895_i2c_write(ampl_off[0], 3);
+ } else {
+ pr_info("%s: speaker_l switch on!\n", __func__);
+ rc = tfa9895_i2c_write(ampl_on[0], 3);
+ }
+
+ return rc;
+}
+
+static long tfa9895l_ioctl(struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ int rc = 0;
+ unsigned char *buf;
+ void __user *argp = (void __user *)arg;
+
+ if (_IOC_TYPE(cmd) != TFA9895_IOCTL_MAGIC)
+ return -ENOTTY;
+
+ if (_IOC_SIZE(cmd) > sizeof(struct tfa9895_i2c_buffer))
+ return -EINVAL;
+
+ buf = kzalloc(_IOC_SIZE(cmd), GFP_KERNEL);
+ if (buf == NULL) {
+ pr_err("%s %d: failed to allocate kernel buffer\n", __func__, __LINE__);
+ return -ENOMEM;
+ }
+
+ if (_IOC_DIR(cmd) & _IOC_WRITE) {
+ rc = copy_from_user(buf, argp, _IOC_SIZE(cmd));
+ if (rc) {
+ kfree(buf);
+ return -EFAULT;
+ }
+ }
+
+ switch (_IOC_NR(cmd)) {
+ case TFA9895_WRITE_CONFIG_NR:
+ pr_debug("%s: TFA9895_WRITE_CONFIG\n", __func__);
+ rc = tfa9895_i2c_write(((struct tfa9895_i2c_buffer *)buf)->buffer, ((struct tfa9895_i2c_buffer *)buf)->size);
+ break;
+ case TFA9895_READ_CONFIG_NR:
+ pr_debug("%s: TFA9895_READ_CONFIG\n", __func__);
+ rc = tfa9895_i2c_read(((struct tfa9895_i2c_buffer *)buf)->buffer, ((struct tfa9895_i2c_buffer *)buf)->size);
+ break;
+ case TFA9895_ENABLE_DSP_NR:
+ pr_info("%s: TFA9895_ENABLE_DSP %d\n", __func__, *(int *)buf);
+ dspl_enabled = *(int *)buf;
+ break;
+ default:
+ kfree(buf);
+ return -ENOTTY;
+ }
+
+ if (_IOC_DIR(cmd) & _IOC_READ) {
+ rc = copy_to_user(argp, buf, _IOC_SIZE(cmd));
+ if (rc) {
+ kfree(buf);
+ return -EFAULT;
+ }
+ }
+ kfree(buf);
+ return rc;
+}
+
+static const struct file_operations tfa9895l_fops = {
+ .owner = THIS_MODULE,
+ .open = tfa9895l_open,
+ .release = tfa9895l_release,
+ .unlocked_ioctl = tfa9895l_ioctl,
+ .compat_ioctl = tfa9895l_ioctl,
+};
+
+static struct miscdevice tfa9895l_device = {
+ .minor = MISC_DYNAMIC_MINOR,
+ .name = "tfa9895l",
+ .fops = &tfa9895l_fops,
+};
+
+static int tfa9895l_probe(struct i2c_client *client, const struct i2c_device_id *id)
+{
+ int i;
+ unsigned char SPK_CR[3] = {0x8, 0x8, 0};
+
+ int ret = 0;
+ char temp[6] = {0x4, 0x88};
+ this_client = client;
+
+ if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C)) {
+ pr_err("%s: i2c check functionality error\n", __func__);
+ ret = -ENODEV;
+ goto err_free_gpio_all;
+ }
+
+ ret = misc_register(&tfa9895l_device);
+ if (ret) {
+ pr_err("%s: tfa9895l_device register failed\n", __func__);
+ goto err_free_gpio_all;
+ }
+ ret = tfa9895_i2c_write(temp, 2);
+ if (ret < 0)
+ pr_err("%s: i2c write failed\n", __func__);
+ ret = tfa9895_i2c_read(temp, 5);
+ if (ret < 0)
+ pr_err("%s: i2c read failed\n", __func__);
+ else
+ pr_info("%s: i2c read succeeded\n", __func__);
+
+#ifdef CONFIG_DEBUG_FS
+ debugfs_tpa_dent = debugfs_create_dir("tfa9895l", 0);
+ if (!IS_ERR_OR_NULL(debugfs_tpa_dent)) {
+ debugfs_peek = debugfs_create_file("peek",
+ S_IFREG | S_IRUGO | S_IWUSR, debugfs_tpa_dent,
+ (void *) "peek", &codec_debug_ops);
+
+ debugfs_poke = debugfs_create_file("poke",
+ S_IFREG | S_IRUGO | S_IWUSR, debugfs_tpa_dent,
+ (void *) "poke", &codec_debug_ops);
+ }
+#endif
+
+ for (i = 0; i < 3; i++)
+ tfa9895_i2c_write(cf_dspl_bypass[i], 3);
+ /* Enable NXP PVP bit10 of reg 8 per acoustic's request in bypass mode. (Hboot loopback & MFG ROM) */
+ tfa9895_i2c_write(SPK_CR, 1);
+ tfa9895_i2c_read(SPK_CR + 1, 2);
+ SPK_CR[1] |= 0x4; /* Enable PVP bit10 */
+ tfa9895_i2c_write(SPK_CR, 3);
+
+ return 0;
+
+err_free_gpio_all:
+ return ret;
+}
+
+static int tfa9895l_remove(struct i2c_client *client)
+{
+ struct tfa9895_platform_data *p9895data = i2c_get_clientdata(client);
+ kfree(p9895data);
+
+ return 0;
+}
+
+static int tfa9895l_suspend(struct i2c_client *client, pm_message_t mesg)
+{
+ return 0;
+}
+
+static int tfa9895l_resume(struct i2c_client *client)
+{
+ return 0;
+}
+
+static struct of_device_id tfa9895l_match_table[] = {
+ { .compatible = "nxp,tfa9895l-amp",},
+ { },
+};
+
+static const struct i2c_device_id tfa9895l_id[] = {
+ { TFA9895L_I2C_NAME, 0 },
+ { }
+};
+
+static struct i2c_driver tfa9895l_driver = {
+ .probe = tfa9895l_probe,
+ .remove = tfa9895l_remove,
+ .suspend = tfa9895l_suspend,
+ .resume = tfa9895l_resume,
+ .id_table = tfa9895l_id,
+ .driver = {
+ .name = TFA9895L_I2C_NAME,
+ .of_match_table = tfa9895l_match_table,
+ },
+};
+
+static int __init tfa9895l_init(void)
+{
+ pr_info("%s\n", __func__);
+ mutex_init(&spk_ampl_lock);
+ dspl_enabled = 0;
+ return i2c_add_driver(&tfa9895l_driver);
+}
+
+static void __exit tfa9895l_exit(void)
+{
+#ifdef CONFIG_DEBUG_FS
+ debugfs_remove(debugfs_peek);
+ debugfs_remove(debugfs_poke);
+ debugfs_remove(debugfs_tpa_dent);
+#endif
+ i2c_del_driver(&tfa9895l_driver);
+}
+
+module_init(tfa9895l_init);
+module_exit(tfa9895l_exit);
+
+MODULE_DESCRIPTION("tfa9895 L Speaker Amp driver");
+MODULE_LICENSE("GPL");
diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
index 7025d68..4feab38 100644
--- a/sound/soc/soc-dapm.c
+++ b/sound/soc/soc-dapm.c
@@ -26,7 +26,6 @@
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/init.h>
-#include <linux/async.h>
#include <linux/delay.h>
#include <linux/pm.h>
#include <linux/bitops.h>
@@ -1483,12 +1482,11 @@
}
}
-/* Async callback run prior to DAPM sequences - brings to _PREPARE if
+/* Function run prior to DAPM sequences - brings to _PREPARE if
* they're changing state.
*/
-static void dapm_pre_sequence_async(void *data, async_cookie_t cookie)
+static void dapm_pre_sequence(struct snd_soc_dapm_context *d)
{
- struct snd_soc_dapm_context *d = data;
int ret;
if ((d->bias_level == SND_SOC_BIAS_OFF &&
@@ -1517,12 +1515,11 @@
}
}
-/* Async callback run prior to DAPM sequences - brings to their final
+/* Function run after DAPM sequences - brings to their final
* state.
*/
-static void dapm_post_sequence_async(void *data, async_cookie_t cookie)
+static void dapm_post_sequence(struct snd_soc_dapm_context *d)
{
- struct snd_soc_dapm_context *d = data;
int ret;
/* If we just powered the last thing off drop to standby bias */
@@ -1654,7 +1651,6 @@
struct snd_soc_dapm_context *d;
LIST_HEAD(up_list);
LIST_HEAD(down_list);
- ASYNC_DOMAIN_EXCLUSIVE(async_domain);
enum snd_soc_bias_level bias;
trace_snd_soc_dapm_start(card);
@@ -1730,11 +1726,9 @@
trace_snd_soc_dapm_walk_done(card);
- /* Run all the bias changes in parallel */
+ /* Run all the bias changes */
list_for_each_entry(d, &dapm->card->dapm_list, list)
- async_schedule_domain(dapm_pre_sequence_async, d,
- &async_domain);
- async_synchronize_full_domain(&async_domain);
+ dapm_pre_sequence(d);
/* Power down widgets first; try to avoid amplifying pops. */
dapm_seq_run(dapm, &down_list, event, false);
@@ -1744,11 +1738,9 @@
/* Now power up. */
dapm_seq_run(dapm, &up_list, event, true);
- /* Run all the bias changes in parallel */
+ /* Run all the bias changes */
list_for_each_entry(d, &dapm->card->dapm_list, list)
- async_schedule_domain(dapm_post_sequence_async, d,
- &async_domain);
- async_synchronize_full_domain(&async_domain);
+ dapm_post_sequence(d);
/* do we need to notify any clients that DAPM event is complete */
list_for_each_entry(d, &card->dapm_list, list) {
diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
index 9a93bef..eff3712 100644
--- a/sound/soc/soc-pcm.c
+++ b/sound/soc/soc-pcm.c
@@ -1120,13 +1120,36 @@
}
}
+static int dpcm_fe_dai_do_trigger(struct snd_pcm_substream *substream, int cmd);
+
+/* Set FE's runtime_update state; the state is protected via the PCM stream
+ * lock to avoid racing with the trigger callback.
+ * If the state is being unset and a trigger was left pending during the
+ * previous operation, process the pending trigger action here.
+ */
+static void dpcm_set_fe_update_state(struct snd_soc_pcm_runtime *fe,
+ int stream, enum snd_soc_dpcm_update state)
+{
+ struct snd_pcm_substream *substream =
+ snd_soc_dpcm_get_substream(fe, stream);
+
+ snd_pcm_stream_lock_irq(substream);
+ if (state == SND_SOC_DPCM_UPDATE_NO && fe->dpcm[stream].trigger_pending) {
+ dpcm_fe_dai_do_trigger(substream,
+ fe->dpcm[stream].trigger_pending - 1);
+ fe->dpcm[stream].trigger_pending = 0;
+ }
+ fe->dpcm[stream].runtime_update = state;
+ snd_pcm_stream_unlock_irq(substream);
+}
+
static int dpcm_fe_dai_startup(struct snd_pcm_substream *fe_substream)
{
struct snd_soc_pcm_runtime *fe = fe_substream->private_data;
struct snd_pcm_runtime *runtime = fe_substream->runtime;
int stream = fe_substream->stream, ret = 0;
- fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_FE;
+ dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_FE);
ret = dpcm_be_dai_startup(fe, fe_substream->stream);
if (ret < 0) {
@@ -1148,13 +1171,13 @@
dpcm_set_fe_runtime(fe_substream);
snd_pcm_limit_hw_rates(runtime);
- fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO;
+ dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_NO);
return 0;
unwind:
dpcm_be_dai_startup_unwind(fe, fe_substream->stream);
be_err:
- fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO;
+ dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_NO);
return ret;
}
@@ -1201,7 +1224,7 @@
struct snd_soc_pcm_runtime *fe = substream->private_data;
int stream = substream->stream;
- fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_FE;
+ dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_FE);
/* shutdown the BEs */
dpcm_be_dai_shutdown(fe, substream->stream);
@@ -1215,7 +1238,7 @@
dpcm_dapm_stream_event(fe, stream, SND_SOC_DAPM_STREAM_STOP);
fe->dpcm[stream].state = SND_SOC_DPCM_STATE_CLOSE;
- fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO;
+ dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_NO);
return 0;
}
@@ -1263,7 +1286,7 @@
int err, stream = substream->stream;
mutex_lock_nested(&fe->card->mutex, SND_SOC_CARD_CLASS_RUNTIME);
- fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_FE;
+ dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_FE);
dev_dbg(fe->dev, "ASoC: hw_free FE %s\n", fe->dai_link->name);
@@ -1278,7 +1301,7 @@
err = dpcm_be_dai_hw_free(fe, stream);
fe->dpcm[stream].state = SND_SOC_DPCM_STATE_HW_FREE;
- fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO;
+ dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_NO);
mutex_unlock(&fe->card->mutex);
return 0;
@@ -1371,7 +1394,7 @@
int ret, stream = substream->stream;
mutex_lock_nested(&fe->card->mutex, SND_SOC_CARD_CLASS_RUNTIME);
- fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_FE;
+ dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_FE);
memcpy(&fe->dpcm[substream->stream].hw_params, params,
sizeof(struct snd_pcm_hw_params));
@@ -1394,7 +1417,7 @@
fe->dpcm[stream].state = SND_SOC_DPCM_STATE_HW_PARAMS;
out:
- fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO;
+ dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_NO);
mutex_unlock(&fe->card->mutex);
return ret;
}
@@ -1433,7 +1456,8 @@
switch (cmd) {
case SNDRV_PCM_TRIGGER_START:
if ((be->dpcm[stream].state != SND_SOC_DPCM_STATE_PREPARE) &&
- (be->dpcm[stream].state != SND_SOC_DPCM_STATE_STOP))
+ (be->dpcm[stream].state != SND_SOC_DPCM_STATE_STOP) &&
+ (be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED))
continue;
ret = dpcm_do_trigger(dpcm, be_substream, cmd);
@@ -1509,7 +1533,7 @@
}
EXPORT_SYMBOL_GPL(dpcm_be_dai_trigger);
-static int dpcm_fe_dai_trigger(struct snd_pcm_substream *substream, int cmd)
+static int dpcm_fe_dai_do_trigger(struct snd_pcm_substream *substream, int cmd)
{
struct snd_soc_pcm_runtime *fe = substream->private_data;
int stream = substream->stream, ret;
@@ -1583,6 +1607,23 @@
return ret;
}
+static int dpcm_fe_dai_trigger(struct snd_pcm_substream *substream, int cmd)
+{
+ struct snd_soc_pcm_runtime *fe = substream->private_data;
+ int stream = substream->stream;
+
+ /* if FE's runtime_update is already set, we're in a race;
+ * process this trigger later at exit
+ */
+ if (fe->dpcm[stream].runtime_update != SND_SOC_DPCM_UPDATE_NO) {
+ fe->dpcm[stream].trigger_pending = cmd + 1;
+ return 0; /* delayed, assuming it's successful */
+ }
+
+ /* we're alone, let's trigger */
+ return dpcm_fe_dai_do_trigger(substream, cmd);
+}
+
int dpcm_be_dai_prepare(struct snd_soc_pcm_runtime *fe, int stream)
{
struct snd_soc_dpcm *dpcm;
@@ -1626,7 +1667,7 @@
dev_dbg(fe->dev, "ASoC: prepare FE %s\n", fe->dai_link->name);
- fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_FE;
+ dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_FE);
/* there is no point preparing this FE if there are no BEs */
if (list_empty(&fe->dpcm[stream].be_clients)) {
@@ -1655,7 +1696,7 @@
fe->dpcm[stream].state = SND_SOC_DPCM_STATE_PREPARE;
out:
- fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO;
+ dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_NO);
mutex_unlock(&fe->card->mutex);
return ret;
@@ -1802,11 +1843,11 @@
{
int ret;
- fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_BE;
+ dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_BE);
ret = dpcm_run_update_startup(fe, stream);
if (ret < 0)
dev_err(fe->dev, "ASoC: failed to startup some BEs\n");
- fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO;
+ dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_NO);
return ret;
}
@@ -1815,11 +1856,11 @@
{
int ret;
- fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_BE;
+ dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_BE);
ret = dpcm_run_update_shutdown(fe, stream);
if (ret < 0)
dev_err(fe->dev, "ASoC: failed to shutdown some BEs\n");
- fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO;
+ dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_NO);
return ret;
}
diff --git a/sound/soc/tegra/Kconfig b/sound/soc/tegra/Kconfig
index 6484c80..9e43601 100644
--- a/sound/soc/tegra/Kconfig
+++ b/sound/soc/tegra/Kconfig
@@ -320,6 +320,29 @@
boards using the ALC5645 codec. Currently, the supported boards
are Ardbeg.
+
+config MACH_HAS_SND_SOC_TEGRA_RT5677
+ bool
+ help
+ Machines that use the SND_SOC_TEGRA_RT5677 driver should select
+ this config option, in order to allow the user to enable
+ SND_SOC_TEGRA_RT5677.
+
+config SND_SOC_TEGRA_RT5677
+ tristate "SoC Audio support for Tegra boards using an ALC5677 codec + RT5506 AMP"
+ depends on SND_SOC_TEGRA && I2C && TEGRA_DC
+ select SND_SOC_TEGRA30_I2S if !ARCH_TEGRA_2x_SOC
+ select SND_SOC_TEGRA30_SPDIF if !ARCH_TEGRA_2x_SOC
+ select SND_SOC_RT5677
+ select SND_SOC_RT5506
+ select SND_SOC_SPDIF
+ select SND_SOC_TEGRA30_DAM if !ARCH_TEGRA_2x_SOC
+ select SND_SOC_TEGRA30_AVP if !ARCH_TEGRA_2x_SOC
+ help
+ Say Y or M here if you want to add support for SoC audio on Tegra
+ boards using the ALC5677 codec + RT5506 AMP. Currently, the
+ supported board is Flounder.
+
config SND_SOC_TEGRA_MAX98095
tristate "SoC Audio support for Tegra boards using a MAX98095 codec"
depends on SND_SOC_TEGRA && I2C && TEGRA_DC
diff --git a/sound/soc/tegra/Makefile b/sound/soc/tegra/Makefile
index 0434fa1..98f6f92 100644
--- a/sound/soc/tegra/Makefile
+++ b/sound/soc/tegra/Makefile
@@ -40,6 +40,7 @@
snd-soc-tegra-rt5640-objs := tegra_rt5640.o
snd-soc-tegra-rt5645-objs := tegra_rt5645.o
snd-soc-tegra-rt5639-objs := tegra_rt5639.o
+snd-soc-tegra-rt5677-objs := tegra_rt5677.o
snd-soc-tegra-max98095-objs := tegra_max98095.o
snd-soc-tegra-vcm-objs := tegra_vcm.o
snd-soc-tegra-cs42l73-objs := tegra_cs42l73.o
@@ -57,6 +58,7 @@
obj-$(CONFIG_SND_SOC_TEGRA_RT5640) += snd-soc-tegra-rt5640.o
obj-$(CONFIG_SND_SOC_TEGRA_RT5645) += snd-soc-tegra-rt5645.o
obj-$(CONFIG_SND_SOC_TEGRA_RT5639) += snd-soc-tegra-rt5639.o
+obj-$(CONFIG_SND_SOC_TEGRA_RT5677) += snd-soc-tegra-rt5677.o
obj-$(CONFIG_SND_SOC_TEGRA_MAX98095) += snd-soc-tegra-max98095.o
obj-$(CONFIG_SND_SOC_TEGRA_P1852) += snd-soc-tegra-vcm.o
obj-$(CONFIG_SND_SOC_TEGRA_E1853) += snd-soc-tegra-vcm.o
diff --git a/sound/soc/tegra/tegra30_ahub.c b/sound/soc/tegra/tegra30_ahub.c
index dfd0b95..7c3362e 100644
--- a/sound/soc/tegra/tegra30_ahub.c
+++ b/sound/soc/tegra/tegra30_ahub.c
@@ -312,6 +312,9 @@
int channel = rxcif - TEGRA30_AHUB_RXCIF_APBIF_RX0;
int reg, val;
+ if ((int)rxcif < 0)
+ return 0;
+
reg = TEGRA30_AHUB_CHANNEL_CTRL +
(channel * TEGRA30_AHUB_CHANNEL_CTRL_STRIDE);
val = tegra30_apbif_read(reg);
@@ -335,6 +338,9 @@
int channel = txcif - TEGRA30_AHUB_TXCIF_APBIF_TX0;
int reg, val;
+ if ((int)txcif < 0)
+ return 0;
+
reg = TEGRA30_AHUB_CHANNEL_CTRL +
(channel * TEGRA30_AHUB_CHANNEL_CTRL_STRIDE);
val = tegra30_apbif_read(reg);
@@ -357,6 +363,9 @@
int channel = rxcif - TEGRA30_AHUB_RXCIF_APBIF_RX0;
int reg, val;
+ if ((int)rxcif < 0)
+ return 0;
+
reg = TEGRA30_AHUB_CHANNEL_CTRL +
(channel * TEGRA30_AHUB_CHANNEL_CTRL_STRIDE);
val = tegra30_apbif_read(reg);
@@ -372,6 +381,9 @@
int channel = rxcif - TEGRA30_AHUB_RXCIF_APBIF_RX0;
int reg, val;
+ if ((int)rxcif < 0)
+ return 0;
+
reg = TEGRA30_AHUB_CHANNEL_CTRL +
(channel * TEGRA30_AHUB_CHANNEL_CTRL_STRIDE);
val = tegra30_apbif_read(reg);
@@ -386,6 +398,9 @@
{
int channel = rxcif - TEGRA30_AHUB_RXCIF_APBIF_RX0;
+ if ((int)rxcif < 0)
+ return 0;
+
__clear_bit(channel, ahub->rx_usage);
return 0;
@@ -442,6 +457,9 @@
int channel = txcif - TEGRA30_AHUB_TXCIF_APBIF_TX0;
int reg, val;
+ if ((int)txcif < 0)
+ return 0;
+
reg = TEGRA30_AHUB_CHANNEL_CTRL +
(channel * TEGRA30_AHUB_CHANNEL_CTRL_STRIDE);
val = tegra30_apbif_read(reg);
@@ -457,6 +475,9 @@
int channel = txcif - TEGRA30_AHUB_TXCIF_APBIF_TX0;
int reg, val;
+ if ((int)txcif < 0)
+ return 0;
+
reg = TEGRA30_AHUB_CHANNEL_CTRL +
(channel * TEGRA30_AHUB_CHANNEL_CTRL_STRIDE);
val = tegra30_apbif_read(reg);
@@ -471,6 +492,9 @@
{
int channel = txcif - TEGRA30_AHUB_TXCIF_APBIF_TX0;
+ if ((int)txcif < 0)
+ return 0;
+
__clear_bit(channel, ahub->tx_usage);
return 0;
@@ -511,6 +535,9 @@
int channel = rxcif - TEGRA30_AHUB_RXCIF_APBIF_RX0;
unsigned int reg, val;
+ if ((int)rxcif < 0)
+ return 0;
+
reg = TEGRA30_AHUB_CIF_RX_CTRL +
(channel * TEGRA30_AHUB_CIF_RX_CTRL_STRIDE);
val = tegra30_apbif_read(reg);
@@ -529,6 +556,9 @@
int channel = rxcif - TEGRA30_AHUB_RXCIF_APBIF_RX0;
unsigned int reg, val;
+ if ((int)rxcif < 0)
+ return 0;
+
tegra30_ahub_enable_clocks();
reg = TEGRA30_AHUB_CIF_RX_CTRL +
@@ -550,6 +580,9 @@
int channel = txcif - TEGRA30_AHUB_TXCIF_APBIF_TX0;
unsigned int reg, val;
+ if ((int)txcif < 0)
+ return 0;
+
reg = TEGRA30_AHUB_CIF_TX_CTRL +
(channel * TEGRA30_AHUB_CIF_TX_CTRL_STRIDE);
val = tegra30_apbif_read(reg);
@@ -571,6 +604,9 @@
int channel = rxcif - TEGRA30_AHUB_RXCIF_APBIF_RX0;
unsigned int reg, val;
+ if ((int)rxcif < 0)
+ return 0;
+
reg = TEGRA30_AHUB_CIF_RX_CTRL +
(channel * TEGRA30_AHUB_CIF_RX_CTRL_STRIDE);
val = tegra30_apbif_read(reg);
@@ -591,6 +627,9 @@
int channel = txcif - TEGRA30_AHUB_TXCIF_APBIF_TX0;
unsigned int reg, val;
+ if ((int)txcif < 0)
+ return 0;
+
reg = TEGRA30_AHUB_CIF_TX_CTRL +
(channel * TEGRA30_AHUB_CIF_TX_CTRL_STRIDE);
val = tegra30_apbif_read(reg);
diff --git a/sound/soc/tegra/tegra30_avp.c b/sound/soc/tegra/tegra30_avp.c
index bf7efd734..1a8c304 100644
--- a/sound/soc/tegra/tegra30_avp.c
+++ b/sound/soc/tegra/tegra30_avp.c
@@ -52,10 +52,13 @@
#define AVP_INIT_SAMPLE_RATE 48000
#define AVP_COMPR_THRESHOLD (4 * 1024)
-#define AVP_UNITY_STREAM_VOLUME 0x10000
#define AVP_CMD_BUFFER_SIZE 256
+#define DEFAULT_PERIOD_SIZE 4096
+#define DEFAULT_FRAGMENT_SIZE (32 * 1024)
+#define DEFAULT_FRAGMENTS 4
+
enum avp_compr_formats {
avp_compr_mp3,
avp_compr_aac,
@@ -107,6 +110,7 @@
atomic_t is_dma_allocated;
atomic_t active_count;
+ int dma_started;
};
struct tegra30_avp_stream {
@@ -115,6 +119,7 @@
struct stream_data *stream;
enum avp_audio_stream_id id;
int period_size;
+ unsigned int total_bytes_copied;
/* TODO : Use spinlock in appropriate places */
spinlock_t lock;
@@ -125,7 +130,13 @@
void (*notify_cb)(void *args, unsigned int is_eos);
void *notify_args;
- unsigned int is_drain_called;
+ atomic_t is_drain_called;
+ int is_stream_active;
+ /* cpu copy of some shared structure members */
+ enum KSSTATE stream_state_target;
+ unsigned int source_buffer_write_position;
+ unsigned int source_buffer_write_count;
+ unsigned int source_buffer_size;
};
struct tegra30_avp_audio {
@@ -143,6 +154,7 @@
int cmd_buf_idx;
atomic_t stream_active_count;
struct tegra30_avp_audio_dma audio_dma;
+ struct mutex mutex;
spinlock_t lock;
};
@@ -251,9 +263,11 @@
static void tegra30_avp_mem_free(struct tegra_offload_mem *mem)
{
- if (mem->virt_addr)
+ if (mem->virt_addr) {
dma_free_coherent(mem->dev, mem->bytes,
mem->virt_addr, mem->phys_addr);
+ mem->virt_addr = NULL;
+ }
}
static int tegra30_avp_load_ucode(void)
@@ -398,6 +412,9 @@
audio_engine->device_format.bits_per_sample = 16;
audio_engine->device_format.channels = 2;
+ atomic_set(&audio_avp->stream_active_count, 0);
+ atomic_set(&audio_avp->audio_dma.is_dma_allocated, 0);
+
/* Initialize stream memory */
for (i = 0; i < max_stream_id; i++) {
struct tegra30_avp_stream *avp_stream;
@@ -416,7 +433,9 @@
stream->stream_state_target = KSSTATE_STOP;
stream->source_buffer_write_position = 0;
stream->source_buffer_write_count = 0;
- stream->stream_params.rate = AVP_INIT_SAMPLE_RATE;
+ avp_stream->stream_state_target = KSSTATE_STOP;
+ avp_stream->source_buffer_write_position = 0;
+ avp_stream->source_buffer_write_count = 0;
for (j = 0; j < RENDERSW_MAX_CHANNELS; j++)
stream->stream_volume[j] = AVP_UNITY_STREAM_VOLUME;
@@ -427,12 +446,20 @@
stream->source_buffer_presentation_position = 0;
stream->source_buffer_frames_decoded = 0;
stream->stream_state_current = KSSTATE_STOP;
+ stream->stream_notification_enable = 1;
stream->stream_params.rate = AVP_INIT_SAMPLE_RATE;
stream->stream_params.bits_per_sample = 16;
stream->stream_params.channels = 2;
+ stream->stream_notification_interval = DEFAULT_PERIOD_SIZE;
+ if (i == decode_stream_id || i == decode2_stream_id)
+ stream->stream_notification_interval =
+ (DEFAULT_FRAGMENT_SIZE *
+ (DEFAULT_FRAGMENTS - 1));
avp_stream->audio_avp = audio_avp;
+
+ atomic_set(&avp_stream->is_drain_called, 0);
}
}
@@ -450,6 +477,8 @@
if (atomic_read(&dma->is_dma_allocated) == 1)
return 0;
+ atomic_set(&dma->is_dma_allocated, 1);
+
memcpy(&dma->params, params, sizeof(struct tegra_offload_dma_params));
dma_cap_zero(mask);
@@ -458,7 +487,8 @@
dma->chan = dma_request_channel(mask, NULL, NULL);
if (dma->chan == NULL) {
dev_err(audio_avp->dev, "Failed to allocate DMA chan.");
- return -ENOMEM;
+ ret = -ENOMEM;
+ goto err;
}
/* Only playback is supported */
@@ -471,12 +501,13 @@
ret = dmaengine_slave_config(dma->chan, &dma->chan_slave_config);
if (ret < 0) {
dev_err(audio_avp->dev, "dma slave config failed.err %d.", ret);
- return ret;
+ goto err;
}
audio_engine->apb_channel_handle = dma->chan->chan_id;
- atomic_set(&dma->is_dma_allocated, 1);
-
return 0;
+err:
+ atomic_set(&dma->is_dma_allocated, 0);
+ return ret;
}
static void tegra30_avp_audio_free_dma(void)
@@ -491,47 +522,79 @@
dma_release_channel(dma->chan);
atomic_set(&dma->is_dma_allocated, 0);
}
-
return;
}
+/* Call this function with avp lock held */
static int tegra30_avp_audio_start_dma(void)
{
struct tegra30_avp_audio *audio_avp = avp_audio_ctx;
struct tegra30_avp_audio_dma *dma = &audio_avp->audio_dma;
struct audio_engine_data *audio_engine = audio_avp->audio_engine;
+ struct tegra30_avp_stream *avp_stream;
+ struct stream_data *stream;
+ int i, start_dma = 1;
- dev_vdbg(audio_avp->dev, "%s: active %d", __func__,
- atomic_read(&dma->active_count));
+ for (i = 0; i < max_stream_id; i++) {
+ avp_stream = &audio_avp->avp_stream[i];
+ stream = avp_stream->stream;
- if (atomic_inc_return(&dma->active_count) > 1)
- return 0;
+ if (avp_stream->is_stream_active &&
+ (avp_stream->stream_state_target == KSSTATE_RUN)) {
+ start_dma = 0;
+ break;
+ }
+ }
- dma->chan_desc = dmaengine_prep_dma_cyclic(dma->chan,
+ if (start_dma && dma->dma_started == 0) {
+ dma->chan_desc = dmaengine_prep_dma_cyclic(dma->chan,
(dma_addr_t)audio_engine->device_buffer_avp,
DEVICE_BUFFER_SIZE,
DEVICE_BUFFER_SIZE,
dma->chan_slave_config.direction,
DMA_CTRL_ACK);
- if (!dma->chan_desc) {
- dev_err(audio_avp->dev, "Failed to prep cyclic dma");
- return -ENODEV;
+ if (!dma->chan_desc) {
+ dev_err(audio_avp->dev, "Failed to prep cyclic dma");
+ return -ENODEV;
+ }
+ dma->chan_cookie = dmaengine_submit(dma->chan_desc);
+ dma_async_issue_pending(dma->chan);
+ dma->dma_started = 1;
}
- dma->chan_cookie = dmaengine_submit(dma->chan_desc);
- dma_async_issue_pending(dma->chan);
+ dev_vdbg(audio_avp->dev, "%s: dma %s\n",
+ __func__, (start_dma ? "started" : "already running"));
+
return 0;
}
+/* Call this function with avp lock held */
static int tegra30_avp_audio_stop_dma(void)
{
struct tegra30_avp_audio *audio_avp = avp_audio_ctx;
+ struct audio_engine_data *audio_engine = audio_avp->audio_engine;
struct tegra30_avp_audio_dma *dma = &audio_avp->audio_dma;
+ struct tegra30_avp_stream *avp_stream;
+ struct stream_data *stream;
+ int i, stop_dma = 1;
- dev_vdbg(audio_avp->dev, "%s: active %d.", __func__,
- atomic_read(&dma->active_count));
+ for (i = 0; i < max_stream_id; i++) {
+ avp_stream = &audio_avp->avp_stream[i];
+ stream = avp_stream->stream;
- if (atomic_dec_and_test(&dma->active_count))
+ if (avp_stream->is_stream_active &&
+ avp_stream->stream_state_target == KSSTATE_RUN) {
+ stop_dma = 0;
+ break;
+ }
+ }
+
+ if (stop_dma && dma->dma_started) {
dmaengine_terminate_all(dma->chan);
+ dma->dma_started = 0;
+ audio_engine->device_buffer_write_position = 0;
+ }
+ dev_vdbg(audio_avp->dev, "%s: dma is %s\n", __func__,
+ (stop_dma ? "stopped" : "running"));
return 0;
}
@@ -581,18 +644,20 @@
return 0;
}
-
-/* Call this function with stream lock held */
static int tegra30_avp_stream_set_state(int id, enum KSSTATE new_state)
{
struct tegra30_avp_audio *audio_avp = avp_audio_ctx;
struct tegra30_avp_stream *avp_stream = &audio_avp->avp_stream[id];
struct stream_data *stream = avp_stream->stream;
- enum KSSTATE old_state = stream->stream_state_target;
+ enum KSSTATE old_state;
int ret = 0;
dev_vdbg(audio_avp->dev, "%s : id %d state %d -> %d", __func__, id,
- old_state, new_state);
+ stream->stream_state_target, new_state);
+
+ spin_lock(&avp_stream->lock);
+ old_state = avp_stream->stream_state_target;
+ spin_unlock(&avp_stream->lock);
if (old_state == new_state)
return 0;
@@ -612,7 +677,6 @@
}
}
- stream->stream_state_target = new_state;
/* TODO : Need a way to wait till AVP stream state changes */
if (new_state == KSSTATE_STOP) {
@@ -629,19 +693,35 @@
break;
}
}
+ spin_lock(&audio_avp->lock);
+ /* must be called before updating new state */
+ if (new_state == KSSTATE_RUN)
+ tegra30_avp_audio_start_dma();
+
+ stream->stream_state_target = new_state;
+ avp_stream->stream_state_target = new_state;
+
+ /* must be called after updating new state */
+ if (new_state == KSSTATE_STOP || new_state == KSSTATE_PAUSE)
+ tegra30_avp_audio_stop_dma();
if (new_state == KSSTATE_STOP) {
stream->source_buffer_write_position = 0;
stream->source_buffer_write_count = 0;
+ stream->source_buffer_read_position = 0;
+ stream->source_buffer_read_position_fraction = 0;
+ stream->source_buffer_presentation_position = 0;
+ stream->source_buffer_linear_position = 0;
+ stream->source_buffer_frames_decoded = 0;
+ avp_stream->source_buffer_write_position = 0;
+ avp_stream->source_buffer_write_count = 0;
avp_stream->last_notification_offset = 0;
avp_stream->notification_received = 0;
avp_stream->source_buffer_offset = 0;
+ avp_stream->total_bytes_copied = 0;
}
- if (new_state == KSSTATE_RUN)
- tegra30_avp_audio_start_dma();
- else if (old_state == KSSTATE_RUN)
- tegra30_avp_audio_stop_dma();
+ spin_unlock(&audio_avp->lock);
return ret;
}
@@ -660,21 +740,25 @@
avp_stream = &audio_avp->avp_stream[i];
stream = avp_stream->stream;
- if (!stream->stream_allocated)
+ if (!avp_stream->is_stream_active)
continue;
- if (avp_stream->is_drain_called &&
- (stream->source_buffer_read_position ==
- stream->source_buffer_write_position) &&
+ if ((stream->source_buffer_read_position ==
+ avp_stream->source_buffer_write_position) &&
(avp_stream->notification_received >=
- stream->stream_notification_request)) {
+ stream->stream_notification_request) &&
+ atomic_read(&avp_stream->is_drain_called)) {
+ atomic_set(&avp_stream->is_drain_called, 0);
+ tegra30_avp_stream_set_state(i, KSSTATE_STOP);
/* End of stream occurred; notify it with value 1 */
avp_stream->notify_cb(avp_stream->notify_args, 1);
- tegra30_avp_stream_set_state(i, KSSTATE_STOP);
} else if (stream->stream_notification_request >
avp_stream->notification_received) {
+ spin_lock(&avp_stream->lock);
avp_stream->notification_received++;
-
+ avp_stream->total_bytes_copied +=
+ stream->stream_notification_interval;
+ spin_unlock(&avp_stream->lock);
avp_stream->notify_cb(avp_stream->notify_args, 0);
}
}
@@ -695,8 +779,9 @@
dev_err(audio_avp->dev, "AVP platform not initialized.");
return -ENODEV;
}
-
+ mutex_lock(&audio_avp->mutex);
audio_engine->device_format.rate = rate;
+ mutex_unlock(&audio_avp->mutex);
return 0;
}
@@ -719,27 +804,139 @@
tegra30_avp_mem_free(mem);
}
+/* Loopback APIs */
+static int tegra30_avp_loopback_set_params(int id,
+ struct tegra_offload_pcm_params *params)
+{
+ struct tegra30_avp_audio *audio_avp = avp_audio_ctx;
+ struct tegra30_avp_stream *avp_stream = &audio_avp->avp_stream[id];
+ struct stream_data *stream = avp_stream->stream;
+
+ dev_vdbg(audio_avp->dev, "%s:entry\n", __func__);
+
+ if (!stream) {
+ dev_err(audio_avp->dev, "AVP platform not initialized.");
+ return -ENODEV;
+ }
+
+ spin_lock(&avp_stream->lock);
+ stream->stream_notification_interval = params->period_size;
+ stream->stream_notification_enable = 1;
+ stream->stream_params.rate = params->rate;
+ stream->stream_params.channels = params->channels;
+ stream->stream_params.bits_per_sample = params->bits_per_sample;
+
+ avp_stream->period_size = params->period_size;
+ avp_stream->notify_cb = params->period_elapsed_cb;
+ avp_stream->notify_args = params->period_elapsed_args;
+
+ stream->source_buffer_system =
+ (uintptr_t)(params->source_buf.virt_addr);
+ stream->source_buffer_avp = params->source_buf.phys_addr;
+ stream->source_buffer_size = params->buffer_size;
+ spin_unlock(&avp_stream->lock);
+ return 0;
+}
+
+static int tegra30_avp_loopback_set_state(int id, int state)
+{
+ struct tegra30_avp_audio *audio_avp = avp_audio_ctx;
+ struct tegra30_avp_stream *avp_stream = &audio_avp->avp_stream[id];
+ struct stream_data *stream = avp_stream->stream;
+
+ dev_vdbg(audio_avp->dev, "%s : id %d state %d", __func__, id, state);
+
+ if (!stream) {
+ dev_err(audio_avp->dev, "AVP platform not initialized.");
+ return -ENODEV;
+ }
+
+ spin_lock(&avp_stream->lock);
+ switch (state) {
+ case SNDRV_PCM_TRIGGER_START:
+ stream->stream_state_target = KSSTATE_RUN;
+ break;
+ case SNDRV_PCM_TRIGGER_STOP:
+ stream->stream_state_target = KSSTATE_STOP;
+ stream->source_buffer_write_position = 0;
+ stream->source_buffer_write_count = 0;
+ avp_stream->last_notification_offset = 0;
+ avp_stream->notification_received = 0;
+ avp_stream->source_buffer_offset = 0;
+ break;
+ default:
+ dev_err(audio_avp->dev, "Unsupported state.");
+ spin_unlock(&avp_stream->lock);
+ return -EINVAL;
+ }
+ spin_unlock(&avp_stream->lock);
+ return 0;
+}
+
+static size_t tegra30_avp_loopback_get_position(int id)
+{
+ struct tegra30_avp_audio *audio_avp = avp_audio_ctx;
+ struct tegra30_avp_stream *avp_stream = &audio_avp->avp_stream[id];
+ struct stream_data *stream = avp_stream->stream;
+ size_t pos = 0;
+
+ spin_lock(&avp_stream->lock);
+ pos = (size_t)stream->source_buffer_read_position;
+ spin_unlock(&avp_stream->lock);
+
+ dev_vdbg(audio_avp->dev, "%s id %d pos %d", __func__, id, (u32)pos);
+
+ return pos;
+}
+
+static void tegra30_avp_loopback_data_ready(int id, int bytes)
+{
+ struct tegra30_avp_audio *audio_avp = avp_audio_ctx;
+ struct tegra30_avp_stream *avp_stream = &audio_avp->avp_stream[id];
+ struct stream_data *stream = avp_stream->stream;
+
+ dev_vdbg(audio_avp->dev, "%s :id %d size %d", __func__, id, bytes);
+
+ spin_lock(&avp_stream->lock);
+ stream->source_buffer_write_position += bytes;
+ stream->source_buffer_write_position %= stream->source_buffer_size;
+
+ avp_stream->source_buffer_offset += bytes;
+ while (avp_stream->source_buffer_offset >=
+ stream->stream_notification_interval) {
+ stream->source_buffer_write_count++;
+ avp_stream->source_buffer_offset -=
+ stream->stream_notification_interval;
+ }
+ spin_unlock(&avp_stream->lock);
+ return;
+}
+
/* PCM APIs */
-static int tegra30_avp_pcm_open(int *id)
+static int tegra30_avp_pcm_open(int *id, char *stream_type)
{
struct tegra30_avp_audio *audio_avp = avp_audio_ctx;
struct audio_engine_data *audio_engine = audio_avp->audio_engine;
+ struct tegra30_avp_stream *avp_stream = audio_avp->avp_stream;
+ struct stream_data *stream;
int ret = 0;
dev_vdbg(audio_avp->dev, "%s", __func__);
+ mutex_lock(&audio_avp->mutex);
+
if (!audio_avp->nvavp_client) {
ret = tegra_nvavp_audio_client_open(&audio_avp->nvavp_client);
if (ret < 0) {
dev_err(audio_avp->dev, "Failed to open nvavp.");
- return ret;
+ goto exit;
}
}
if (!audio_engine) {
ret = tegra30_avp_load_ucode();
if (ret < 0) {
dev_err(audio_avp->dev, "Failed to load ucode.");
- return ret;
+ goto exit;
}
tegra30_avp_audio_engine_init();
nvavp_register_audio_cb(audio_avp->nvavp_client,
@@ -747,21 +944,55 @@
audio_engine = audio_avp->audio_engine;
}
- if (!audio_engine->stream[pcm_stream_id].stream_allocated)
- *id = pcm_stream_id;
- else if (!audio_engine->stream[pcm2_stream_id].stream_allocated)
- *id = pcm2_stream_id;
- else {
- dev_err(audio_avp->dev, "All AVP PCM streams are busy");
- return -EBUSY;
+ if (strcmp(stream_type, "pcm") == 0) {
+ if (!avp_stream[pcm_stream_id].is_stream_active) {
+ *id = pcm_stream_id;
+ atomic_inc(&audio_avp->stream_active_count);
+ } else if (!avp_stream[pcm2_stream_id].is_stream_active) {
+ *id = pcm2_stream_id;
+ atomic_inc(&audio_avp->stream_active_count);
+ } else {
+ dev_err(audio_avp->dev, "All AVP PCM streams are busy");
+ *id = -1;
+ ret = -EBUSY;
+ goto exit;
+ }
+ } else if (strcmp(stream_type, "loopback") == 0) {
+ if (!avp_stream[loopback_stream_id].is_stream_active) {
+ *id = loopback_stream_id;
+ } else {
+ dev_err(audio_avp->dev, "All AVP loopback streams are busy");
+ *id = -1;
+ ret = -EBUSY;
+ goto exit;
+ }
}
+ avp_stream = &audio_avp->avp_stream[*id];
+ stream = avp_stream->stream;
- audio_engine->stream[*id].stream_allocated = 1;
+ avp_stream->stream_state_target = KSSTATE_STOP;
+ avp_stream->source_buffer_write_position = 0;
+ avp_stream->source_buffer_write_count = 0;
+ stream->stream_state_target = KSSTATE_STOP;
+ stream->source_buffer_write_position = 0;
+ stream->source_buffer_write_count = 0;
- atomic_inc(&audio_avp->stream_active_count);
+ stream->source_buffer_read_position = 0;
+ stream->source_buffer_read_position_fraction = 0;
+ stream->source_buffer_linear_position = 0;
+ stream->source_buffer_presentation_position = 0;
+ stream->source_buffer_frames_decoded = 0;
+ stream->stream_state_current = KSSTATE_STOP;
+ stream->stream_notification_request = 0;
+ stream->stream_notification_offset = 0;
+ stream->stream_allocated = 1;
+
+ avp_stream->is_stream_active = 1;
tegra30_avp_audio_set_state(KSSTATE_RUN);
- return 0;
+exit:
+ mutex_unlock(&audio_avp->mutex);
+ return ret;
}
static int tegra30_avp_pcm_set_params(int id,
@@ -798,6 +1029,7 @@
(uintptr_t) (params->source_buf.virt_addr);
stream->source_buffer_avp = params->source_buf.phys_addr;
stream->source_buffer_size = params->buffer_size;
+ avp_stream->source_buffer_size = params->buffer_size;
/* Set DMA params */
ret = tegra30_avp_audio_alloc_dma(&params->dma_params);
@@ -813,6 +1045,7 @@
struct tegra30_avp_audio *audio_avp = avp_audio_ctx;
struct tegra30_avp_stream *avp_stream = &audio_avp->avp_stream[id];
struct stream_data *stream = avp_stream->stream;
+ int ret = 0;
dev_vdbg(audio_avp->dev, "%s : id %d state %d", __func__, id, state);
@@ -826,16 +1059,18 @@
case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
case SNDRV_PCM_TRIGGER_RESUME:
tegra30_avp_stream_set_state(id, KSSTATE_RUN);
- return 0;
+ break;
case SNDRV_PCM_TRIGGER_STOP:
case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
case SNDRV_PCM_TRIGGER_SUSPEND:
tegra30_avp_stream_set_state(id, KSSTATE_STOP);
- return 0;
+ break;
default:
dev_err(audio_avp->dev, "Unsupported state.");
- return -EINVAL;
+ ret = -EINVAL;
+ break;
}
+ return ret;
}
static void tegra30_avp_pcm_data_ready(int id, int bytes)
@@ -845,17 +1080,23 @@
struct stream_data *stream = avp_stream->stream;
dev_vdbg(audio_avp->dev, "%s :id %d size %d", __func__, id, bytes);
+ spin_lock(&avp_stream->lock);
+ avp_stream->source_buffer_write_position += bytes;
+ avp_stream->source_buffer_write_position %=
+ avp_stream->source_buffer_size;
- stream->source_buffer_write_position += bytes;
- stream->source_buffer_write_position %= stream->source_buffer_size;
+ stream->source_buffer_write_position =
+ avp_stream->source_buffer_write_position;
avp_stream->source_buffer_offset += bytes;
while (avp_stream->source_buffer_offset >=
stream->stream_notification_interval) {
stream->source_buffer_write_count++;
+ avp_stream->source_buffer_write_count++;
avp_stream->source_buffer_offset -=
stream->stream_notification_interval;
}
+ spin_unlock(&avp_stream->lock);
return;
}
@@ -866,7 +1107,9 @@
struct stream_data *stream = avp_stream->stream;
size_t pos = 0;
+ spin_lock(&avp_stream->lock);
pos = (size_t)stream->source_buffer_read_position;
+ spin_unlock(&avp_stream->lock);
dev_vdbg(audio_avp->dev, "%s id %d pos %d", __func__, id, (u32)pos);
@@ -878,22 +1121,25 @@
{
struct tegra30_avp_audio *audio_avp = avp_audio_ctx;
struct audio_engine_data *audio_engine = audio_avp->audio_engine;
+ struct tegra30_avp_stream *avp_stream;
+ struct stream_data *stream;
int ret = 0;
dev_vdbg(audio_avp->dev, "%s", __func__);
+ mutex_lock(&audio_avp->mutex);
if (!audio_avp->nvavp_client) {
ret = tegra_nvavp_audio_client_open(&audio_avp->nvavp_client);
if (ret < 0) {
dev_err(audio_avp->dev, "Failed to open nvavp.");
- return ret;
+ goto exit;
}
}
if (!audio_engine) {
ret = tegra30_avp_load_ucode();
if (ret < 0) {
dev_err(audio_avp->dev, "Failed to load ucode.");
- return ret;
+ goto exit;
}
tegra30_avp_audio_engine_init();
nvavp_register_audio_cb(audio_avp->nvavp_client,
@@ -901,21 +1147,46 @@
audio_engine = audio_avp->audio_engine;
}
- if (!audio_engine->stream[decode_stream_id].stream_allocated)
+ if (!audio_avp->avp_stream[decode_stream_id].is_stream_active)
*id = decode_stream_id;
- else if (!audio_engine->stream[decode2_stream_id].stream_allocated)
+ else if (!audio_avp->avp_stream[decode2_stream_id].is_stream_active)
*id = decode2_stream_id;
else {
dev_err(audio_avp->dev, "All AVP COMPR streams are busy");
- return -EBUSY;
+ ret = -EBUSY;
+ *id = -1;
+ goto exit;
}
- audio_avp->avp_stream[*id].is_drain_called = 0;
- audio_engine->stream[*id].stream_allocated = 1;
+
+ avp_stream = &audio_avp->avp_stream[*id];
+ stream = avp_stream->stream;
+
+ avp_stream->stream_state_target = KSSTATE_STOP;
+ avp_stream->source_buffer_write_position = 0;
+ avp_stream->source_buffer_write_count = 0;
+ stream->stream_state_target = KSSTATE_STOP;
+ stream->source_buffer_write_position = 0;
+ stream->source_buffer_write_count = 0;
+
+ stream->source_buffer_read_position = 0;
+ stream->source_buffer_read_position_fraction = 0;
+ stream->source_buffer_linear_position = 0;
+ stream->source_buffer_presentation_position = 0;
+ stream->source_buffer_frames_decoded = 0;
+ stream->stream_state_current = KSSTATE_STOP;
+ stream->stream_notification_request = 0;
+ stream->stream_notification_offset = 0;
+ stream->stream_allocated = 1;
+
+ atomic_set(&avp_stream->is_drain_called, 0);
+ avp_stream->is_stream_active = 1;
atomic_inc(&audio_avp->stream_active_count);
tegra30_avp_audio_set_state(KSSTATE_RUN);
- return 0;
+exit:
+ mutex_unlock(&audio_avp->mutex);
+ return ret;
}
static int tegra30_avp_compr_set_params(int id,
@@ -1003,10 +1274,12 @@
avp_stream->notify_cb = params->fragments_elapsed_cb;
avp_stream->notify_args = params->fragments_elapsed_args;
- stream->source_buffer_size = (params->fragments *
+ avp_stream->source_buffer_size = (params->fragments *
params->fragment_size);
+ stream->source_buffer_size = avp_stream->source_buffer_size;
+
ret = tegra30_avp_mem_alloc(&avp_stream->source_buf,
- stream->source_buffer_size);
+ avp_stream->source_buffer_size);
if (ret < 0) {
dev_err(audio_avp->dev, "Failed to allocate source buf memory");
return ret;
@@ -1016,9 +1289,9 @@
(uintptr_t) avp_stream->source_buf.virt_addr;
stream->source_buffer_avp = avp_stream->source_buf.phys_addr;
- if (stream->source_buffer_size > AVP_COMPR_THRESHOLD) {
+ if (avp_stream->source_buffer_size > AVP_COMPR_THRESHOLD) {
stream->stream_notification_interval =
- stream->source_buffer_size - AVP_COMPR_THRESHOLD;
+ params->fragment_size * (params->fragments - 1);
} else {
stream->stream_notification_interval = avp_stream->period_size;
}
@@ -1043,6 +1316,7 @@
{
struct tegra30_avp_audio *audio_avp = avp_audio_ctx;
struct tegra30_avp_stream *avp_stream = &audio_avp->avp_stream[id];
+ int ret = 0;
dev_vdbg(audio_avp->dev, "%s : id %d state %d",
__func__, id, state);
@@ -1051,25 +1325,30 @@
case SNDRV_PCM_TRIGGER_START:
case SNDRV_PCM_TRIGGER_RESUME:
tegra30_avp_stream_set_state(id, KSSTATE_RUN);
- return 0;
+ break;
case SNDRV_PCM_TRIGGER_STOP:
case SNDRV_PCM_TRIGGER_SUSPEND:
tegra30_avp_stream_set_state(id, KSSTATE_STOP);
- return 0;
+ break;
case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
tegra30_avp_stream_set_state(id, KSSTATE_PAUSE);
- return 0;
+ break;
case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
tegra30_avp_stream_set_state(id, KSSTATE_RUN);
- return 0;
+ break;
case SND_COMPR_TRIGGER_DRAIN:
case SND_COMPR_TRIGGER_PARTIAL_DRAIN:
- avp_stream->is_drain_called = 1;
- return 0;
+ atomic_set(&avp_stream->is_drain_called, 1);
+ break;
+ case SND_COMPR_TRIGGER_NEXT_TRACK:
+ pr_debug("%s: SND_COMPR_TRIGGER_NEXT_TRACK\n", __func__);
+ break;
default:
dev_err(audio_avp->dev, "Unsupported state.");
- return -EINVAL;
+ ret = -EINVAL;
+ break;
}
+ return ret;
}
static void tegra30_avp_compr_data_ready(int id, int bytes)
@@ -1080,16 +1359,22 @@
dev_vdbg(audio_avp->dev, "%s : id %d size %d", __func__, id, bytes);
- stream->source_buffer_write_position += bytes;
- stream->source_buffer_write_position %= stream->source_buffer_size;
+ spin_lock(&avp_stream->lock);
+ avp_stream->source_buffer_write_position += bytes;
+ avp_stream->source_buffer_write_position %=
+ avp_stream->source_buffer_size;
+ stream->source_buffer_write_position =
+ avp_stream->source_buffer_write_position;
avp_stream->source_buffer_offset += bytes;
while (avp_stream->source_buffer_offset >=
stream->stream_notification_interval) {
stream->source_buffer_write_count++;
+ avp_stream->source_buffer_write_count++;
avp_stream->source_buffer_offset -=
stream->stream_notification_interval;
}
+ spin_unlock(&avp_stream->lock);
return;
}
@@ -1099,17 +1384,17 @@
struct tegra30_avp_stream *avp_stream = &audio_avp->avp_stream[id];
struct stream_data *stream = avp_stream->stream;
void *dst = (char *)(uintptr_t)stream->source_buffer_system +
- stream->source_buffer_write_position;
+ avp_stream->source_buffer_write_position;
int avail = 0;
int write = 0;
int ret = 0;
avail = stream->source_buffer_read_position -
- stream->source_buffer_write_position;
+ avp_stream->source_buffer_write_position;
if ((avail < 0) || (!avail &&
- (stream->source_buffer_write_count ==
+ (avp_stream->source_buffer_write_count ==
stream->stream_notification_request)))
- avail += stream->source_buffer_size;
+ avail += avp_stream->source_buffer_size;
dev_vdbg(audio_avp->dev, "%s : id %d size %d", __func__, id, bytes);
@@ -1121,8 +1406,8 @@
return bytes;
}
- write = stream->source_buffer_size -
- stream->source_buffer_write_position;
+ write = avp_stream->source_buffer_size -
+ avp_stream->source_buffer_write_position;
if (write > bytes) {
ret = copy_from_user(dst, buf, bytes);
if (ret < 0) {
@@ -1144,16 +1429,22 @@
}
}
- stream->source_buffer_write_position += bytes;
- stream->source_buffer_write_position %= stream->source_buffer_size;
+ spin_lock(&avp_stream->lock);
+ avp_stream->source_buffer_write_position += bytes;
+ avp_stream->source_buffer_write_position %=
+ avp_stream->source_buffer_size;
+ stream->source_buffer_write_position =
+ avp_stream->source_buffer_write_position;
avp_stream->source_buffer_offset += bytes;
while (avp_stream->source_buffer_offset >=
stream->stream_notification_interval) {
stream->source_buffer_write_count++;
+ avp_stream->source_buffer_write_count++;
avp_stream->source_buffer_offset -=
stream->stream_notification_interval;
}
+ spin_unlock(&avp_stream->lock);
DUMP_AVP_STATUS(avp_stream);
return bytes;
}
@@ -1165,13 +1456,13 @@
struct tegra30_avp_stream *avp_stream = &audio_avp->avp_stream[id];
struct stream_data *stream = avp_stream->stream;
- tstamp->byte_offset = stream->source_buffer_write_position;
- tstamp->copied_total = stream->source_buffer_write_position +
- (stream->source_buffer_write_count *
- stream->stream_notification_interval);
+ spin_lock(&avp_stream->lock);
+ tstamp->byte_offset = stream->source_buffer_read_position;
+ tstamp->copied_total = avp_stream->total_bytes_copied;
tstamp->pcm_frames = stream->source_buffer_presentation_position;
tstamp->pcm_io_frames = stream->source_buffer_presentation_position;
tstamp->sampling_rate = stream->stream_params.rate;
+ spin_unlock(&avp_stream->lock);
dev_vdbg(audio_avp->dev, "%s id %d off %d copied %d pcm %d pcm io %d",
__func__, id, (int)tstamp->byte_offset,
@@ -1230,9 +1521,10 @@
dev_err(audio_avp->dev, "AVP platform not initialized.");
return -ENODEV;
}
-
+ spin_lock(&avp_stream->lock);
stream->stream_volume[0] = left;
stream->stream_volume[1] = right;
+ spin_unlock(&avp_stream->lock);
return 0;
}
@@ -1250,14 +1542,24 @@
dev_err(audio_avp->dev, "AVP platform not initialized.");
return;
}
- tegra30_avp_mem_free(&avp_stream->source_buf);
- stream->stream_allocated = 0;
+ mutex_lock(&audio_avp->mutex);
tegra30_avp_stream_set_state(id, KSSTATE_STOP);
+ stream->stream_allocated = 0;
+ avp_stream->is_stream_active = 0;
+ avp_stream->total_bytes_copied = 0;
+ tegra30_avp_mem_free(&avp_stream->source_buf);
+
+ if (id == loopback_stream_id)
+ goto exit;
+
+ atomic_set(&avp_stream->is_drain_called, 0);
if (atomic_dec_and_test(&audio_avp->stream_active_count)) {
tegra30_avp_audio_free_dma();
tegra30_avp_audio_set_state(KSSTATE_STOP);
}
+exit:
+ mutex_unlock(&audio_avp->mutex);
}
static struct tegra_offload_ops avp_audio_platform = {
@@ -1274,6 +1576,14 @@
.get_stream_position = tegra30_avp_pcm_get_position,
.data_ready = tegra30_avp_pcm_data_ready,
},
+ .loopback_ops = {
+ .stream_open = tegra30_avp_pcm_open,
+ .stream_close = tegra30_avp_stream_close,
+ .set_stream_params = tegra30_avp_loopback_set_params,
+ .set_stream_state = tegra30_avp_loopback_set_state,
+ .get_stream_position = tegra30_avp_loopback_get_position,
+ .data_ready = tegra30_avp_loopback_data_ready,
+ },
.compr_ops = {
.stream_open = tegra30_avp_compr_open,
.stream_close = tegra30_avp_stream_close,
@@ -1309,6 +1619,7 @@
return -EPROBE_DEFER;
}
+ mutex_init(&audio_avp->mutex);
spin_lock_init(&audio_avp->lock);
pdev->dev.dma_mask = &tegra_dma_mask;
pdev->dev.coherent_dma_mask = tegra_dma_mask;
diff --git a/sound/soc/tegra/tegra30_dam.c b/sound/soc/tegra/tegra30_dam.c
index a9190eb..524bfc2 100644
--- a/sound/soc/tegra/tegra30_dam.c
+++ b/sound/soc/tegra/tegra30_dam.c
@@ -933,10 +933,6 @@
return -EINVAL;
#ifndef CONFIG_ARCH_TEGRA_3x_SOC
- /*ch0 takes input as mono always*/
- if ((chid == dam_ch_in0) &&
- ((client_channels != 1)))
- return -EINVAL;
/*as per dam spec file chout is fixed to 32 bits*/
/*so accept ch0, ch1 and chout as 32bit always*/
if (client_bits != 32)
@@ -1050,6 +1046,21 @@
}
}
+void tegra30_dam_enable_stereo_mixing(int ifc, int on)
+{
+ u32 val;
+
+ if ((ifc < 0) || (ifc >= TEGRA30_NR_DAM_IFC))
+ return;
+
+ val = tegra30_dam_readl(dams_cont_info[ifc], TEGRA30_DAM_CTRL);
+ if (on)
+ val |= TEGRA30_DAM_CTRL_STEREO_MIXING_ENABLE;
+ else
+ val &= ~TEGRA30_DAM_CTRL_STEREO_MIXING_ENABLE;
+ tegra30_dam_writel(dams_cont_info[ifc], val, TEGRA30_DAM_CTRL);
+}
+
void tegra30_dam_ch0_set_datasync(int ifc, int datasync)
{
u32 val;
@@ -1120,6 +1131,28 @@
return 0;
}
+int tegra30_dam_soft_reset(int ifc)
+{
+ int dcnt = 10;
+ u32 val;
+ struct tegra30_dam_context *dam = NULL;
+
+ dam = dams_cont_info[ifc];
+ val = tegra30_dam_readl(dam, TEGRA30_DAM_CTRL);
+ val |= TEGRA30_DAM_CTRL_SOFT_RESET_ENABLE;
+ tegra30_dam_writel(dam, val, TEGRA30_DAM_CTRL);
+
+ while ((tegra30_dam_readl(dam, TEGRA30_DAM_CTRL) &
+ TEGRA30_DAM_CTRL_SOFT_RESET_ENABLE) && dcnt--)
+ udelay(100);
+
+ /* Restore reg_ctrl so that a concurrent playback/capture
+ session, if one was active, continues after SOFT_RESET */
+ val &= ~TEGRA30_DAM_CTRL_SOFT_RESET_ENABLE;
+ tegra30_dam_writel(dam, val, TEGRA30_DAM_CTRL);
+
+ return (dcnt < 0) ? -ETIMEDOUT : 0;
+}
/*
DAM Driver probe and remove functions
diff --git a/sound/soc/tegra/tegra30_dam.h b/sound/soc/tegra/tegra30_dam.h
index d23bd60..8d0ac74 100644
--- a/sound/soc/tegra/tegra30_dam.h
+++ b/sound/soc/tegra/tegra30_dam.h
@@ -188,8 +188,9 @@
int tegra30_dam_set_acif_stereo_conv(int ifc, int chtype, int conv);
void tegra30_dam_ch0_set_datasync(int ifc, int datasync);
void tegra30_dam_ch1_set_datasync(int ifc, int datasync);
+int tegra30_dam_soft_reset(int ifc);
#ifndef CONFIG_ARCH_TEGRA_3x_SOC
-void tegra30_dam_enable_stereo_mixing(int ifc);
+void tegra30_dam_enable_stereo_mixing(int ifc, int on);
#endif
#endif
diff --git a/sound/soc/tegra/tegra30_i2s.c b/sound/soc/tegra/tegra30_i2s.c
index f93928a..de24e3b 100644
--- a/sound/soc/tegra/tegra30_i2s.c
+++ b/sound/soc/tegra/tegra30_i2s.c
@@ -41,19 +41,19 @@
#include <sound/soc.h>
#include <asm/delay.h>
#include <mach/tegra_asoc_pdata.h>
+#include <mach/gpio-tegra.h>
+#include <linux/gpio.h>
#include "tegra30_ahub.h"
#include "tegra30_dam.h"
#include "tegra30_i2s.h"
+#include "tegra_rt5677.h"
+#include <mach/pinmux.h>
#define DRV_NAME "tegra30-i2s"
#define RETRY_CNT 10
-static struct snd_soc_pcm_runtime *allocated_fe;
-static int apbif_ref_cnt;
-static DEFINE_MUTEX(apbif_mutex);
-
extern int tegra_i2sloopback_func;
static struct tegra30_i2s *i2scont[TEGRA30_NR_I2S_IFC];
#if defined(CONFIG_ARCH_TEGRA_14x_SOC)
@@ -67,6 +67,7 @@
tegra30_ahub_disable_clocks();
regcache_cache_only(i2s->regmap, true);
+ regcache_mark_dirty(i2s->regmap);
clk_disable_unprepare(i2s->clk_i2s);
@@ -87,71 +88,138 @@
}
regcache_cache_only(i2s->regmap, false);
+ regcache_sync(i2s->regmap);
return 0;
}
+void tegra30_i2s_request_gpio(struct snd_pcm_substream *substream, int i2s_id)
+{
+ struct snd_soc_pcm_runtime *rtd = substream->private_data;
+ struct snd_soc_card *card = rtd->card;
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(card);
+ struct tegra_asoc_platform_data *pdata = machine->pdata;
+ int i, ret;
+
+ if (pdata == NULL)
+ return;
+ pr_debug("%s: pdata->gpio_free_count[%d]=%d\n", __func__, i2s_id, pdata->gpio_free_count[i2s_id]);
+ if (i2s_id > 1) {
+ /* Only HIFI_CODEC and SPEAKER GPIO need re-config */
+ return;
+ }
+ if (pdata->first_time_free[i2s_id]) {
+ mutex_init(&pdata->i2s_gpio_lock[i2s_id]);
+ mutex_lock(&pdata->i2s_gpio_lock[i2s_id]);
+ pr_info("pdata->gpio_free_count[%d]=%d, first call, no need to free gpio\n",
+ i2s_id, pdata->gpio_free_count[i2s_id]);
+ pdata->first_time_free[i2s_id] = false;
+ } else {
+ mutex_lock(&pdata->i2s_gpio_lock[i2s_id]);
+ }
+
+ pdata->gpio_free_count[i2s_id]--;
+
+ if (pdata->gpio_free_count[i2s_id] > 0) {
+ pr_info("pdata->gpio_free_count[%d]=%d > 0, no need to request again\n",
+ i2s_id, pdata->gpio_free_count[i2s_id]);
+ pr_debug("pdata->gpio_free_count[%d]=%d\n", i2s_id, pdata->gpio_free_count[i2s_id]);
+ mutex_unlock(&pdata->i2s_gpio_lock[i2s_id]);
+ return;
+ }
+
+ for (i = 0; i < 4; i++) {
+ ret = gpio_request(pdata->i2s_set[i2s_id*4 + i].id,
+ pdata->i2s_set[i2s_id*4 + i].name);
+ if (!pdata->i2s_set[i2s_id*4 + i].dir_in) {
+ gpio_direction_output(pdata->i2s_set[i2s_id*4 + i].id, 0);
+ } else {
+ tegra_pinctrl_pg_set_pullupdown(pdata->i2s_set[i2s_id*4 + i].pg, TEGRA_PUPD_PULL_DOWN);
+ gpio_direction_input(pdata->i2s_set[i2s_id*4 + i].id);
+ }
+ pr_debug("%s: gpio_request for gpio[%d] %s, return %d\n",
+ __func__, pdata->i2s_set[i2s_id*4 + i].id, pdata->i2s_set[i2s_id*4 + i].name, ret);
+ }
+ mutex_unlock(&pdata->i2s_gpio_lock[i2s_id]);
+}
+
+void tegra30_i2s_free_gpio(struct snd_pcm_substream *substream, int i2s_id)
+{
+ struct snd_soc_pcm_runtime *rtd = substream->private_data;
+ struct snd_soc_card *card = rtd->card;
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(card);
+ struct tegra_asoc_platform_data *pdata = machine->pdata;
+ int i;
+
+ if (i2s_id > 1) {
+ /* Only HIFI_CODEC and SPEAKER GPIO need re-config */
+ return;
+ }
+ if (pdata->first_time_free[i2s_id]) {
+ mutex_init(&pdata->i2s_gpio_lock[i2s_id]);
+ mutex_lock(&pdata->i2s_gpio_lock[i2s_id]);
+ pr_debug("%s: Skip gpio_free if not allocated\n", __func__);
+ pdata->first_time_free[i2s_id] = false;
+ pdata->gpio_free_count[i2s_id]++;
+ mutex_unlock(&pdata->i2s_gpio_lock[i2s_id]);
+ return;
+ }
+
+ mutex_lock(&pdata->i2s_gpio_lock[i2s_id]);
+ pdata->gpio_free_count[i2s_id]++;
+ if (pdata->gpio_free_count[i2s_id] > 1) {
+ pr_debug("pdata->gpio_free_count[%d]=%d > 1, no need to free again\n",
+ i2s_id, pdata->gpio_free_count[i2s_id]);
+ mutex_unlock(&pdata->i2s_gpio_lock[i2s_id]);
+ return;
+ }
+
+ for (i = 0; i < 4; i++) {
+ gpio_free(pdata->i2s_set[i2s_id*4 + i].id);
+ pr_debug("%s: gpio_free for gpio[%d] %s,\n",
+ __func__, pdata->i2s_set[i2s_id*4 + i].id, pdata->i2s_set[i2s_id*4 + i].name);
+ }
+ mutex_unlock(&pdata->i2s_gpio_lock[i2s_id]);
+}
+
int tegra30_i2s_startup(struct snd_pcm_substream *substream,
struct snd_soc_dai *dai)
{
struct tegra30_i2s *i2s = snd_soc_dai_get_drvdata(dai);
int ret = 0;
- struct snd_soc_pcm_runtime *rtd = substream->private_data;
- struct snd_soc_dai_link *dai_link = rtd->dai_link;
+ int i2s_id;
if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
- int allocate_fifo = 1;
+ /* To prevent power leakage */
+ if (i2s->playback_ref_count == 0) {
+ i2s_id = i2s->playback_i2s_cif - TEGRA30_AHUB_RXCIF_I2S0_RX0 - 1;
+ pr_debug("%s-playback:i2s_id = %d\n", __func__, i2s_id);
+ tegra30_i2s_free_gpio(substream, i2s_id);
+ }
/* increment the playback ref count */
i2s->playback_ref_count++;
- mutex_lock(&apbif_mutex);
- if (dai_link->no_pcm) {
- struct snd_soc_dpcm *dpcm;
-
- list_for_each_entry(dpcm,
- &rtd->dpcm[substream->stream].fe_clients,
- list_fe) {
- struct snd_soc_pcm_runtime *fe = dpcm->fe;
-
- if (allocated_fe == fe) {
- allocate_fifo = 0;
- break;
- }
-
- if (allocated_fe == NULL) {
- allocated_fe = fe;
- snd_soc_pcm_set_drvdata(allocated_fe,
- i2s);
- }
- }
- }
-
- if (allocate_fifo) {
+ if (i2s->allocate_pb_fifo_cif) {
ret = tegra30_ahub_allocate_tx_fifo(
&i2s->playback_fifo_cif,
&i2s->playback_dma_data.addr,
&i2s->playback_dma_data.req_sel);
i2s->playback_dma_data.wrap = 4;
i2s->playback_dma_data.width = 32;
- } else {
- struct tegra30_i2s *allocated_be =
- snd_soc_pcm_get_drvdata(allocated_fe);
- if (allocated_be) {
- memcpy(&i2s->playback_dma_data,
- &allocated_be->playback_dma_data,
- sizeof(struct tegra_pcm_dma_params));
- i2s->playback_fifo_cif =
- allocated_be->playback_fifo_cif;
- }
- }
- apbif_ref_cnt++;
- mutex_unlock(&apbif_mutex);
- if (!i2s->is_dam_used)
- tegra30_ahub_set_rx_cif_source(
- i2s->playback_i2s_cif,
- i2s->playback_fifo_cif);
+ if (!i2s->is_dam_used)
+ tegra30_ahub_set_rx_cif_source(
+ i2s->playback_i2s_cif,
+ i2s->playback_fifo_cif);
+ }
} else {
+ /* To prevent power leakage */
+ if (i2s->capture_ref_count == 0) {
+ i2s_id = i2s->capture_i2s_cif - TEGRA30_AHUB_TXCIF_I2S0_TX0 - 1;
+ pr_debug("%s-capture:i2s_id = %d\n", __func__, i2s_id);
+ tegra30_i2s_free_gpio(substream, i2s_id);
+ }
+ /* increment the capture ref count */
i2s->capture_ref_count++;
ret = tegra30_ahub_allocate_rx_fifo(&i2s->capture_fifo_cif,
&i2s->capture_dma_data.addr,
@@ -169,30 +237,35 @@
struct snd_soc_dai *dai)
{
struct tegra30_i2s *i2s = snd_soc_dai_get_drvdata(dai);
- struct snd_soc_pcm_runtime *rtd = substream->private_data;
- struct snd_soc_dai_link *dai_link = rtd->dai_link;
+ int i2s_id;
if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
- if (i2s->playback_ref_count == 1)
- tegra30_ahub_unset_rx_cif_source(
- i2s->playback_i2s_cif);
-
- mutex_lock(&apbif_mutex);
- apbif_ref_cnt--;
- /* free the apbif dma channel*/
- if ((!apbif_ref_cnt) || (!dai_link->no_pcm)) {
- tegra30_ahub_free_tx_fifo(i2s->playback_fifo_cif);
- allocated_fe = NULL;
+ if (i2s->playback_ref_count == 1) {
+ if (i2s->allocate_pb_fifo_cif)
+ tegra30_ahub_unset_rx_cif_source(
+ i2s->playback_i2s_cif);
+ /* To prevent power leakage */
+ i2s_id = i2s->playback_i2s_cif - TEGRA30_AHUB_RXCIF_I2S0_RX0 - 1;
+ pr_debug("%s-playback:i2s_id = %d\n", __func__, i2s_id);
+ tegra30_i2s_request_gpio(substream, i2s_id);
}
- mutex_unlock(&apbif_mutex);
- i2s->playback_fifo_cif = -1;
+
+ /* free the apbif dma channel */
+ if (i2s->allocate_pb_fifo_cif) {
+ tegra30_ahub_free_tx_fifo(i2s->playback_fifo_cif);
+ i2s->playback_fifo_cif = -1;
+ }
/* decrement the playback ref count */
i2s->playback_ref_count--;
} else {
- if (i2s->capture_ref_count == 1)
+ if (i2s->capture_ref_count == 1) {
tegra30_ahub_unset_rx_cif_source(i2s->capture_fifo_cif);
-
+ /* To prevent power leakage */
+ i2s_id = i2s->capture_i2s_cif - TEGRA30_AHUB_TXCIF_I2S0_TX0 - 1;
+ pr_debug("%s-capture:i2s_id = %d\n", __func__, i2s_id);
+ tegra30_i2s_request_gpio(substream, i2s_id);
+ }
/* free the apbif dma channel */
tegra30_ahub_free_rx_fifo(i2s->capture_fifo_cif);
@@ -846,10 +919,9 @@
{
tegra30_ahub_enable_tx_fifo(i2s->playback_fifo_cif);
/* enable i2s tx */
- if (i2s->playback_ref_count == 1)
- regmap_update_bits(i2s->regmap, TEGRA30_I2S_CTRL,
- TEGRA30_I2S_CTRL_XFER_EN_TX,
- TEGRA30_I2S_CTRL_XFER_EN_TX);
+ regmap_update_bits(i2s->regmap, TEGRA30_I2S_CTRL,
+ TEGRA30_I2S_CTRL_XFER_EN_TX,
+ TEGRA30_I2S_CTRL_XFER_EN_TX);
}
static void tegra30_i2s_stop_playback(struct tegra30_i2s *i2s)
@@ -857,27 +929,26 @@
int dcnt = 10;
/* disable i2s tx */
tegra30_ahub_disable_tx_fifo(i2s->playback_fifo_cif);
- if (i2s->playback_ref_count == 1) {
- regmap_update_bits(i2s->regmap, TEGRA30_I2S_CTRL,
- TEGRA30_I2S_CTRL_XFER_EN_TX, 0);
- while (tegra30_ahub_tx_fifo_is_enabled(i2s->id) && dcnt--)
- udelay(100);
+ regmap_update_bits(i2s->regmap, TEGRA30_I2S_CTRL,
+ TEGRA30_I2S_CTRL_XFER_EN_TX, 0);
+
+ while (tegra30_ahub_tx_fifo_is_enabled(i2s->id) && dcnt--)
+ udelay(100);
+
+ dcnt = 10;
+ while (!tegra30_ahub_tx_fifo_is_empty(i2s->id) && dcnt--)
+ udelay(100);
+
+ /* If the I2S FIFO does not become empty, do a soft reset of the
+ I2S channel to prevent channel reversal in the next session */
+ if (dcnt < 0) {
+ tegra30_i2s_soft_reset(i2s);
dcnt = 10;
- while (!tegra30_ahub_tx_fifo_is_empty(i2s->id) && dcnt--)
+ while (!tegra30_ahub_tx_fifo_is_empty(i2s->id) &&
+ dcnt--)
udelay(100);
-
- /* In case I2S FIFO does not get empty do a soft reset of the
- I2S channel to prevent channel reversal in next session */
- if (dcnt < 0) {
- tegra30_i2s_soft_reset(i2s);
-
- dcnt = 10;
- while (!tegra30_ahub_tx_fifo_is_empty(i2s->id) &&
- dcnt--)
- udelay(100);
- }
}
}
@@ -928,18 +999,30 @@
case SNDRV_PCM_TRIGGER_START:
case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
case SNDRV_PCM_TRIGGER_RESUME:
- if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
- tegra30_i2s_start_playback(i2s);
- else
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ spin_lock(&i2s->pb_lock);
+ if (!i2s->tx_enable) {
+ tegra30_i2s_start_playback(i2s);
+ i2s->tx_enable = 1;
+ }
+ spin_unlock(&i2s->pb_lock);
+ } else {
tegra30_i2s_start_capture(i2s);
+ }
break;
case SNDRV_PCM_TRIGGER_STOP:
case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
case SNDRV_PCM_TRIGGER_SUSPEND:
- if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
- tegra30_i2s_stop_playback(i2s);
- else
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ spin_lock(&i2s->pb_lock);
+ if (i2s->tx_enable) {
+ tegra30_i2s_stop_playback(i2s);
+ i2s->tx_enable = 0;
+ }
+ spin_unlock(&i2s->pb_lock);
+ } else {
tegra30_i2s_stop_capture(i2s);
+ }
break;
default:
return -EINVAL;
@@ -963,7 +1046,10 @@
i2s->dsp_config.rx_data_offset = 1;
i2s->dsp_config.tx_data_offset = 1;
tegra_i2sloopback_func = 0;
-
+ i2s->playback_fifo_cif = -1;
+ i2s->capture_fifo_cif = -1;
+ i2s->allocate_pb_fifo_cif = true;
+ i2s->allocate_cap_fifo_cif = true;
return 0;
}
@@ -2365,6 +2451,8 @@
goto err_unregister_dai;
}
+ spin_lock_init(&i2s->pb_lock);
+
return 0;
err_unregister_dai:
@@ -2426,7 +2514,8 @@
.remove = tegra30_i2s_platform_remove,
};
module_platform_driver(tegra30_i2s_driver);
-
+MODULE_AUTHOR("Ravindra Lokhande <rlokhande@nvidia.com>");
+MODULE_AUTHOR("Manoj Gangwal <mgangwal@nvidia.com>");
MODULE_AUTHOR("Stephen Warren <swarren@nvidia.com>");
MODULE_DESCRIPTION("Tegra30 I2S ASoC driver");
MODULE_LICENSE("GPL");
diff --git a/sound/soc/tegra/tegra30_i2s.h b/sound/soc/tegra/tegra30_i2s.h
index 5b754d2..39f4576 100644
--- a/sound/soc/tegra/tegra30_i2s.h
+++ b/sound/soc/tegra/tegra30_i2s.h
@@ -286,6 +286,8 @@
int playback_ref_count;
int capture_ref_count;
bool is_dam_used;
+ bool allocate_pb_fifo_cif;
+ bool allocate_cap_fifo_cif;
#ifdef CONFIG_PM
#ifdef CONFIG_ARCH_TEGRA_3x_SOC
u32 reg_cache[(TEGRA30_I2S_LCOEF_2_4_2 >> 2) + 1];
@@ -298,6 +300,8 @@
int is_call_mode_rec;
struct dsp_config_t dsp_config;
int i2s_bit_clk;
+ spinlock_t pb_lock;
+ int tx_enable;
};
int tegra30_make_voice_call_connections(struct codec_config *codec_info,
diff --git a/sound/soc/tegra/tegra_asoc_utils.c b/sound/soc/tegra/tegra_asoc_utils.c
index 52e8989..4f20366 100644
--- a/sound/soc/tegra/tegra_asoc_utils.c
+++ b/sound/soc/tegra/tegra_asoc_utils.c
@@ -52,7 +52,8 @@
#include "tegra_asoc_utils.h"
int g_is_call_mode;
-static atomic_t dap_ref_count[5];
+static atomic_t dap_ref_count[4];
+static atomic_t dap_pd_ref_count[4];
int tegra_i2sloopback_func;
static const char * const loopback_function[] = {
@@ -140,6 +141,68 @@
}
EXPORT_SYMBOL_GPL(tegra_asoc_utils_tristate_dap);
+
+
+#define TRISTATE_PD_DAP_PORT(n) \
+static void tristate_pd_dap_##n(bool tristate) \
+{ \
+ enum tegra_pingroup fs, sclk, din, dout; \
+ fs = TEGRA_PINGROUP_DAP##n##_FS; \
+ sclk = TEGRA_PINGROUP_DAP##n##_SCLK; \
+ din = TEGRA_PINGROUP_DAP##n##_DIN; \
+ dout = TEGRA_PINGROUP_DAP##n##_DOUT; \
+ if (tristate) { \
+ if (atomic_dec_return(&dap_pd_ref_count[n-1]) == 0) {\
+ tegra_pinmux_set_tristate(fs, TEGRA_TRI_TRISTATE); \
+ tegra_pinmux_set_pullupdown(fs, TEGRA_PUPD_PULL_DOWN); \
+ tegra_pinmux_set_tristate(sclk, TEGRA_TRI_TRISTATE); \
+ tegra_pinmux_set_pullupdown(sclk, TEGRA_PUPD_PULL_DOWN); \
+ tegra_pinmux_set_tristate(din, TEGRA_TRI_TRISTATE); \
+ tegra_pinmux_set_tristate(dout, TEGRA_TRI_TRISTATE); \
+ tegra_pinmux_set_pullupdown(dout, TEGRA_PUPD_PULL_DOWN); \
+ } \
+ } else { \
+ if (atomic_inc_return(&dap_pd_ref_count[n-1]) == 1) {\
+ tegra_pinmux_set_tristate(fs, TEGRA_TRI_NORMAL); \
+ tegra_pinmux_set_pullupdown(fs, TEGRA_PUPD_NORMAL); \
+ tegra_pinmux_set_tristate(sclk, TEGRA_TRI_NORMAL); \
+ tegra_pinmux_set_pullupdown(sclk, TEGRA_PUPD_NORMAL); \
+ tegra_pinmux_set_tristate(din, TEGRA_TRI_NORMAL); \
+ tegra_pinmux_set_tristate(dout, TEGRA_TRI_NORMAL); \
+ tegra_pinmux_set_pullupdown(dout, TEGRA_PUPD_NORMAL); \
+ } \
+ } \
+}
+
+
+TRISTATE_PD_DAP_PORT(1)
+TRISTATE_PD_DAP_PORT(2)
+TRISTATE_PD_DAP_PORT(3)
+TRISTATE_PD_DAP_PORT(4)
+
+int tegra_asoc_utils_tristate_pd_dap(int id, bool tristate)
+{
+ switch (id) {
+ case 0:
+ tristate_pd_dap_1(tristate);
+ break;
+ case 1:
+ tristate_pd_dap_2(tristate);
+ break;
+ case 2:
+ tristate_pd_dap_3(tristate);
+ break;
+ case 3:
+ tristate_pd_dap_4(tristate);
+ break;
+ default:
+ pr_warn("Invalid DAP port\n");
+ break;
+ }
+ return 0;
+}
+EXPORT_SYMBOL_GPL(tegra_asoc_utils_tristate_pd_dap);
+
bool tegra_is_voice_call_active(void)
{
if (g_is_call_mode)
diff --git a/sound/soc/tegra/tegra_asoc_utils.h b/sound/soc/tegra/tegra_asoc_utils.h
index 346b01a..4750d1d 100644
--- a/sound/soc/tegra/tegra_asoc_utils.h
+++ b/sound/soc/tegra/tegra_asoc_utils.h
@@ -77,6 +77,7 @@
#endif
int tegra_asoc_utils_tristate_dap(int id, bool tristate);
+int tegra_asoc_utils_tristate_pd_dap(int id, bool tristate);
extern int g_is_call_mode;
diff --git a/sound/soc/tegra/tegra_offload.c b/sound/soc/tegra/tegra_offload.c
index bdfe94e..66914d8 100644
--- a/sound/soc/tegra/tegra_offload.c
+++ b/sound/soc/tegra/tegra_offload.c
@@ -24,6 +24,7 @@
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
+#include <linux/wakelock.h>
#include <sound/pcm.h>
#include <sound/pcm_params.h>
#include <sound/soc.h>
@@ -38,6 +39,7 @@
enum {
PCM_OFFLOAD_DAI,
COMPR_OFFLOAD_DAI,
+ PCM_CAPTURE_OFFLOAD_DAI,
MAX_OFFLOAD_DAI
};
@@ -51,17 +53,17 @@
struct tegra_offload_compr_ops *ops;
struct snd_codec codec;
int stream_id;
- int stream_vol[2];
- struct snd_kcontrol *kcontrol;
};
static struct tegra_offload_ops offload_ops;
static int tegra_offload_init_done;
static DEFINE_MUTEX(tegra_offload_lock);
+static unsigned int compr_vol[2] = {AVP_UNITY_STREAM_VOLUME,
+ AVP_UNITY_STREAM_VOLUME};
static int codec, spk;
-static const struct snd_pcm_hardware tegra_offload_pcm_hardware = {
+static const struct snd_pcm_hardware tegra_offload_pcm_hw_pb = {
.info = SNDRV_PCM_INFO_MMAP |
SNDRV_PCM_INFO_MMAP_VALID |
SNDRV_PCM_INFO_PAUSE |
@@ -78,6 +80,25 @@
.fifo_size = 4,
};
+static const struct snd_pcm_hardware tegra_offload_pcm_hw_cap = {
+ .info = SNDRV_PCM_INFO_MMAP |
+ SNDRV_PCM_INFO_MMAP_VALID |
+ SNDRV_PCM_INFO_PAUSE |
+ SNDRV_PCM_INFO_RESUME |
+ SNDRV_PCM_INFO_INTERLEAVED,
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
+ .channels_min = 2,
+ .channels_max = 2,
+ .period_bytes_min = 128,
+ .period_bytes_max = PAGE_SIZE * 2,
+ .periods_min = 1,
+ .periods_max = 8,
+ .buffer_bytes_max = PAGE_SIZE * 8,
+ .fifo_size = 4,
+};
+
+static struct wake_lock tegra_offload_wake_lock;
+
int tegra_register_offload_ops(struct tegra_offload_ops *ops)
{
mutex_lock(&tegra_offload_lock);
@@ -91,6 +112,7 @@
return -EBUSY;
}
memcpy(&offload_ops, ops, sizeof(offload_ops));
+ wake_lock_init(&tegra_offload_wake_lock, WAKE_LOCK_SUSPEND, "audio_offload");
tegra_offload_init_done = 1;
mutex_unlock(&tegra_offload_lock);
@@ -114,76 +136,50 @@
static int tegra_set_compress_volume(struct snd_kcontrol *kcontrol,
struct snd_ctl_elem_value *ucontrol)
{
- int ret = 1;
- struct snd_compr_stream *stream = snd_kcontrol_chip(kcontrol);
- struct tegra_offload_compr_data *data = stream->runtime->private_data;
- struct snd_soc_pcm_runtime *rtd = stream->device->private_data;
- struct device *dev = rtd->platform->dev;
+ int ret = 0;
+ struct snd_soc_platform *platform = snd_kcontrol_chip(kcontrol);
+ struct tegra_offload_compr_data *data =
+ snd_soc_platform_get_drvdata(platform);
- pr_debug("%s: value[0]: %d value[1]: %d\n", __func__,
- (int)ucontrol->value.integer.value[0],
- (int)ucontrol->value.integer.value[1]);
- ret = data->ops->set_stream_volume(data->stream_id,
- (int)ucontrol->value.integer.value[0],
- (int)ucontrol->value.integer.value[1]);
- if (ret < 0) {
- dev_err(dev, "Failed to get compr caps. ret %d", ret);
- return ret;
- } else {
- data->stream_vol[0] = (int)ucontrol->value.integer.value[0];
- data->stream_vol[1] = (int)ucontrol->value.integer.value[1];
+ mutex_lock(&tegra_offload_lock);
+ compr_vol[0] = ucontrol->value.integer.value[0];
+ compr_vol[1] = ucontrol->value.integer.value[1];
+ mutex_unlock(&tegra_offload_lock);
+
+ pr_debug("%s:compr_vol[0] %d, compr_vol[1] %d\n",
+ __func__, compr_vol[0], compr_vol[1]);
+
+ if (data) {
+ ret = data->ops->set_stream_volume(data->stream_id,
+ compr_vol[0], compr_vol[1]);
+ if (ret < 0) {
+ pr_err("Failed to set stream volume. ret %d", ret);
+ return ret;
+ }
+ return 1;
}
- return 1;
+ return ret;
}
static int tegra_get_compress_volume(struct snd_kcontrol *kcontrol,
struct snd_ctl_elem_value *ucontrol)
{
- struct snd_compr_stream *stream = snd_kcontrol_chip(kcontrol);
- struct tegra_offload_compr_data *data = stream->runtime->private_data;
+ mutex_lock(&tegra_offload_lock);
+ ucontrol->value.integer.value[0] = compr_vol[0];
+ ucontrol->value.integer.value[1] = compr_vol[1];
+ mutex_unlock(&tegra_offload_lock);
- ucontrol->value.integer.value[0] = data->stream_vol[0];
- ucontrol->value.integer.value[1] = data->stream_vol[1];
-
+ pr_debug("%s:compr_vol[0] %d, compr_vol[1] %d\n",
+ __func__, compr_vol[0], compr_vol[1]);
return 0;
}
-struct snd_kcontrol_new tegra_offload_volume =
- SOC_DOUBLE_EXT("Compress Playback Volume", 0, 1, 0, 0xFFFFFFFF,
- 1, tegra_get_compress_volume, tegra_set_compress_volume);
-
-static int tegra_offload_compr_add_controls(struct snd_compr_stream *stream)
-{
- int ret = 0;
- struct snd_soc_pcm_runtime *rtd = stream->device->private_data;
- struct device *dev = rtd->platform->dev;
- struct tegra_offload_compr_data *data = stream->runtime->private_data;
-
- data->kcontrol = snd_ctl_new1(&tegra_offload_volume, stream);
- ret = snd_ctl_add(rtd->card->snd_card, data->kcontrol);
- if (ret < 0) {
- dev_err(dev, "Can't add offload volume");
- return ret;
- }
- return ret;
-}
-
-
-static int tegra_offload_compr_remove_controls(struct snd_compr_stream *stream)
-{
- int ret = 0;
- struct snd_soc_pcm_runtime *rtd = stream->device->private_data;
- struct device *dev = rtd->platform->dev;
- struct tegra_offload_compr_data *data = stream->runtime->private_data;
-
- ret = snd_ctl_remove(rtd->card->snd_card, data->kcontrol);
- if (ret < 0) {
- dev_err(dev, "Can't remove offload volume");
- return ret;
- }
- return ret;
-}
+static const struct snd_kcontrol_new tegra_offload_volume[] = {
+ SOC_DOUBLE_EXT("Compress Playback Volume", 0, 0, 1,
+ AVP_UNITY_STREAM_VOLUME, 0, tegra_get_compress_volume,
+ tegra_set_compress_volume),
+};
static int tegra_offload_compr_open(struct snd_compr_stream *stream)
{
@@ -191,9 +187,12 @@
struct device *dev = rtd->platform->dev;
struct tegra_offload_compr_data *data;
int ret = 0;
+ unsigned left, right;
dev_vdbg(dev, "%s", __func__);
+ stream->runtime->private_data = NULL;
+
if (!tegra_offload_init_done) {
dev_err(dev, "Offload interface is not registered");
return -ENODEV;
@@ -212,32 +211,35 @@
ret = data->ops->stream_open(&data->stream_id);
if (ret < 0) {
dev_err(dev, "Failed to open offload stream. err %d", ret);
+ devm_kfree(dev, data);
return ret;
}
stream->runtime->private_data = data;
+ snd_soc_platform_set_drvdata(rtd->platform, data);
- ret = tegra_offload_compr_add_controls(stream);
- if (ret)
- dev_err(dev, "Failed to add controls\n");
-
+ mutex_lock(&tegra_offload_lock);
+ left = compr_vol[0];
+ right = compr_vol[1];
+ mutex_unlock(&tegra_offload_lock);
+ data->ops->set_stream_volume(data->stream_id,
+ left, right);
return 0;
}
static int tegra_offload_compr_free(struct snd_compr_stream *stream)
{
+ struct snd_soc_pcm_runtime *rtd = stream->device->private_data;
struct device *dev = stream->device->dev;
struct tegra_offload_compr_data *data = stream->runtime->private_data;
- int ret = 0;
dev_vdbg(dev, "%s", __func__);
- ret = tegra_offload_compr_remove_controls(stream);
- if (ret)
- dev_err(dev, "Failed to remove controls\n");
-
- data->ops->stream_close(data->stream_id);
- devm_kfree(dev, data);
+ if (data) {
+ snd_soc_platform_set_drvdata(rtd->platform, NULL);
+ data->ops->stream_close(data->stream_id);
+ devm_kfree(dev, data);
+ }
return 0;
}
@@ -259,15 +261,15 @@
else
dir = SNDRV_PCM_STREAM_CAPTURE;
- dmap = rtd->cpu_dai->playback_dma_data;
- if (!dmap) {
- struct snd_soc_dpcm *dpcm;
-
- if (list_empty(&rtd->dpcm[dir].be_clients)) {
+ if (list_empty(&rtd->dpcm[dir].be_clients)) {
dev_err(dev, "No backend DAIs enabled for %s\n",
rtd->dai_link->name);
return -EINVAL;
- }
+ }
+
+ dmap = rtd->cpu_dai->playback_dma_data;
+ if (!dmap) {
+ struct snd_soc_dpcm *dpcm;
list_for_each_entry(dpcm,
&rtd->dpcm[dir].be_clients, list_be) {
@@ -299,7 +301,7 @@
offl_params.codec_type = params->codec.id;
offl_params.bits_per_sample = 16;
- offl_params.rate = snd_pcm_rate_bit_to_rate(params->codec.sample_rate);
+ offl_params.rate = params->codec.sample_rate;
offl_params.channels = params->codec.ch_in;
offl_params.fragment_size = params->buffer.fragment_size;
offl_params.fragments = params->buffer.fragments;
@@ -334,12 +336,47 @@
static int tegra_offload_compr_trigger(struct snd_compr_stream *stream, int cmd)
{
struct device *dev = stream->device->dev;
+ struct snd_soc_pcm_runtime *rtd = stream->private_data;
struct tegra_offload_compr_data *data = stream->runtime->private_data;
+ int ret = 0;
dev_vdbg(dev, "%s : cmd %d", __func__, cmd);
- data->ops->set_stream_state(data->stream_id, cmd);
- return 0;
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_START:
+ case SNDRV_PCM_TRIGGER_RESUME:
+ case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
+ if (!wake_lock_active(&tegra_offload_wake_lock)) {
+ wake_lock(&tegra_offload_wake_lock);
+ }
+
+ if (rtd->dai_link->compr_ops &&
+ rtd->dai_link->compr_ops->trigger) {
+ rtd->dai_link->compr_ops->trigger(stream, cmd);
+ }
+ ret = data->ops->set_stream_state(data->stream_id, cmd);
+ break;
+
+ case SNDRV_PCM_TRIGGER_STOP:
+ case SNDRV_PCM_TRIGGER_SUSPEND:
+ case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+ ret = data->ops->set_stream_state(data->stream_id, cmd);
+ if (rtd->dai_link->compr_ops &&
+ rtd->dai_link->compr_ops->trigger) {
+ rtd->dai_link->compr_ops->trigger(stream, cmd);
+ }
+
+ if (wake_lock_active(&tegra_offload_wake_lock)) {
+ wake_unlock(&tegra_offload_wake_lock);
+ }
+
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ return ret;
}
static int tegra_offload_compr_pointer(struct snd_compr_stream *stream,
@@ -433,19 +470,19 @@
dev_vdbg(dev, "%s", __func__);
+ substream->runtime->private_data = NULL;
+
if (!tegra_offload_init_done) {
dev_err(dev, "Offload interface is not registered");
return -ENODEV;
}
- data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
- if (!data) {
- dev_err(dev, "Failed to allocate tegra_offload_pcm_data.");
- return -ENOMEM;
- }
-
- /* Set HW params now that initialization is complete */
- snd_soc_set_runtime_hwparams(substream, &tegra_offload_pcm_hardware);
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+ snd_soc_set_runtime_hwparams(substream,
+ &tegra_offload_pcm_hw_pb);
+ else if (substream->stream == SNDRV_PCM_STREAM_CAPTURE)
+ snd_soc_set_runtime_hwparams(substream,
+ &tegra_offload_pcm_hw_cap);
/* Ensure period size is multiple of 4 */
ret = snd_pcm_hw_constraint_step(substream->runtime, 0,
@@ -454,14 +491,35 @@
dev_err(dev, "failed to set constraint %d\n", ret);
return ret;
}
- data->ops = &offload_ops.pcm_ops;
- ret = data->ops->stream_open(&data->stream_id);
- if (ret < 0) {
- dev_err(dev, "Failed to open offload stream. err %d", ret);
- return ret;
+ data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
+ if (!data) {
+ dev_err(dev, "Failed to allocate tegra_offload_pcm_data.");
+ return -ENOMEM;
}
- offload_ops.device_ops.set_hw_rate(48000);
+
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ data->ops = &offload_ops.pcm_ops;
+
+ ret = data->ops->stream_open(&data->stream_id, "pcm");
+ if (ret < 0) {
+ dev_err(dev,
+ "Failed to open offload stream err %d", ret);
+ devm_kfree(dev, data);
+ return ret;
+ }
+ } else if (substream->stream == SNDRV_PCM_STREAM_CAPTURE) {
+ data->ops = &offload_ops.loopback_ops;
+
+ ret = data->ops->stream_open(&data->stream_id, "loopback");
+ if (ret < 0) {
+ dev_err(dev,
+ "Failed to open offload stream err %d", ret);
+ devm_kfree(dev, data);
+ return ret;
+ }
+ }
+ offload_ops.device_ops.set_hw_rate(48000);
substream->runtime->private_data = data;
return 0;
}
@@ -474,8 +532,10 @@
dev_vdbg(dev, "%s", __func__);
- data->ops->stream_close(data->stream_id);
- devm_kfree(dev, data);
+ if (data) {
+ data->ops->stream_close(data->stream_id);
+ devm_kfree(dev, data);
+ }
return 0;
}
@@ -486,49 +546,62 @@
struct device *dev = rtd->platform->dev;
struct tegra_offload_pcm_data *data = substream->runtime->private_data;
struct snd_dma_buffer *buf = &substream->dma_buffer;
- struct tegra_pcm_dma_params *dmap;
struct tegra_offload_pcm_params offl_params;
int ret = 0;
dev_vdbg(dev, "%s", __func__);
- dmap = snd_soc_dai_get_dma_data(rtd->cpu_dai, substream);
- if (!dmap) {
- struct snd_soc_dpcm *dpcm;
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ struct tegra_pcm_dma_params *dmap;
if (list_empty(&rtd->dpcm[substream->stream].be_clients)) {
- dev_err(dev, "No backend DAIs enabled for %s\n",
+ dev_err(dev,
+ "No backend DAIs enabled for %s\n",
rtd->dai_link->name);
- return -EINVAL;
+ return -EINVAL;
}
- list_for_each_entry(dpcm,
- &rtd->dpcm[substream->stream].be_clients, list_be) {
- struct snd_soc_pcm_runtime *be = dpcm->be;
- struct snd_pcm_substream *be_substream =
- snd_soc_dpcm_get_substream(be,
- substream->stream);
- struct snd_soc_dai_link *dai_link = be->dai_link;
+ dmap = snd_soc_dai_get_dma_data(rtd->cpu_dai, substream);
+ if (!dmap) {
+ struct snd_soc_dpcm *dpcm;
- dmap = snd_soc_dai_get_dma_data(be->cpu_dai,
- be_substream);
+ list_for_each_entry(dpcm,
+ &rtd->dpcm[substream->stream].be_clients,
+ list_be) {
+ struct snd_soc_pcm_runtime *be = dpcm->be;
+ struct snd_pcm_substream *be_substream =
+ snd_soc_dpcm_get_substream(be,
+ substream->stream);
+ struct snd_soc_dai_link *dai_link =
+ be->dai_link;
- if (spk && strstr(dai_link->name, "speaker")) {
dmap = snd_soc_dai_get_dma_data(be->cpu_dai,
- be_substream);
- break;
+ be_substream);
+
+ if (spk && strstr(dai_link->name, "speaker")) {
+ dmap = snd_soc_dai_get_dma_data(
+ be->cpu_dai,
+ be_substream);
+ break;
+ }
+ if (codec && strstr(dai_link->name, "codec")) {
+ dmap = snd_soc_dai_get_dma_data(
+ be->cpu_dai,
+ be_substream);
+ break;
+ }
+ /* TODO: multiple BEs to a single FE not yet supported */
}
- if (codec && strstr(dai_link->name, "codec")) {
- dmap = snd_soc_dai_get_dma_data(be->cpu_dai,
- be_substream);
- break;
- }
- /* TODO : Multiple BE to single FE not yet supported */
}
- }
- if (!dmap) {
- dev_err(dev, "Failed to get DMA params.");
- return -ENODEV;
+ if (!dmap) {
+ dev_err(dev, "Failed to get DMA params.");
+ return -ENODEV;
+ }
+ offl_params.dma_params.addr = dmap->addr;
+ offl_params.dma_params.width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+ offl_params.dma_params.req_sel = dmap->req_sel;
+ offl_params.dma_params.max_burst = 4;
}
offl_params.bits_per_sample =
@@ -539,11 +612,6 @@
offl_params.period_size = params_period_size(params) *
((offl_params.bits_per_sample >> 3) * offl_params.channels);
- offl_params.dma_params.addr = dmap->addr;
- offl_params.dma_params.width = DMA_SLAVE_BUSWIDTH_4_BYTES;
- offl_params.dma_params.req_sel = dmap->req_sel;
- offl_params.dma_params.max_burst = 4;
-
offl_params.source_buf.virt_addr = buf->area;
offl_params.source_buf.phys_addr = buf->addr;
offl_params.source_buf.bytes = buf->bytes;
@@ -577,16 +645,33 @@
struct snd_soc_pcm_runtime *rtd = substream->private_data;
struct device *dev = rtd->platform->dev;
struct tegra_offload_pcm_data *data = substream->runtime->private_data;
+ int ret = 0;
dev_vdbg(dev, "%s : cmd %d", __func__, cmd);
- data->ops->set_stream_state(data->stream_id, cmd);
- if ((cmd == SNDRV_PCM_TRIGGER_STOP) ||
- (cmd == SNDRV_PCM_TRIGGER_SUSPEND) ||
- (cmd == SNDRV_PCM_TRIGGER_PAUSE_PUSH))
- data->appl_ptr = 0;
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_START:
+ case SNDRV_PCM_TRIGGER_RESUME:
+ case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
+ if (rtd->dai_link->ops && rtd->dai_link->ops->trigger)
+ rtd->dai_link->ops->trigger(substream, cmd);
+ ret = data->ops->set_stream_state(data->stream_id, cmd);
+ break;
- return 0;
+ case SNDRV_PCM_TRIGGER_STOP:
+ case SNDRV_PCM_TRIGGER_SUSPEND:
+ case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+ ret = data->ops->set_stream_state(data->stream_id, cmd);
+ if (rtd->dai_link->ops && rtd->dai_link->ops->trigger)
+ rtd->dai_link->ops->trigger(substream, cmd);
+ data->appl_ptr = 0;
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ return ret;
}
static snd_pcm_uframes_t tegra_offload_pcm_pointer(
@@ -689,15 +774,14 @@
&codec_control, 1),
SND_SOC_DAPM_MIXER("SPK VMixer", SND_SOC_NOPM, 0, 0,
&spk_control, 1),
+ SND_SOC_DAPM_MIXER("DAM VMixer", SND_SOC_NOPM, 0, 0,
+ NULL, 0),
};
static const struct snd_soc_dapm_route graph[] = {
- {"Codec VMixer", "Codec Switch", "offload-pcm-playback"},
- {"Codec VMixer", "Codec Switch", "offload-compr-playback"},
+ {"Codec VMixer", "Codec Switch", "DAM VMixer"},
{"I2S1_OUT", NULL, "Codec VMixer"},
-
- {"SPK VMixer", "SPK Switch", "offload-pcm-playback"},
- {"SPK VMixer", "SPK Switch", "offload-compr-playback"},
+ {"SPK VMixer", "SPK Switch", "DAM VMixer"},
{"I2S2_OUT", NULL, "SPK VMixer"},
};
@@ -768,17 +852,39 @@
static int tegra_offload_pcm_new(struct snd_soc_pcm_runtime *rtd)
{
struct device *dev = rtd->platform->dev;
+ struct snd_pcm *pcm = rtd->pcm;
+ int ret = 0;
dev_vdbg(dev, "%s", __func__);
- return tegra_offload_dma_allocate(rtd , SNDRV_PCM_STREAM_PLAYBACK,
- tegra_offload_pcm_hardware.buffer_bytes_max);
+ if (pcm->streams[SNDRV_PCM_STREAM_PLAYBACK].substream) {
+ ret = tegra_offload_dma_allocate(rtd,
+ SNDRV_PCM_STREAM_PLAYBACK,
+ tegra_offload_pcm_hw_pb.buffer_bytes_max);
+ if (ret < 0) {
+ dev_err(pcm->card->dev, "Failed to allocate memory");
+ return -ENOMEM;
+ }
+ }
+ if (pcm->streams[SNDRV_PCM_STREAM_CAPTURE].substream) {
+ ret = tegra_offload_dma_allocate(rtd,
+ SNDRV_PCM_STREAM_CAPTURE,
+ tegra_offload_pcm_hw_cap.buffer_bytes_max);
+ if (ret < 0) {
+ dev_err(pcm->card->dev, "Failed to allocate memory");
+ return -ENOMEM;
+ }
+ }
+ return ret;
}
static void tegra_offload_pcm_free(struct snd_pcm *pcm)
{
- tegra_offload_dma_free(pcm, SNDRV_PCM_STREAM_PLAYBACK);
pr_debug("%s", __func__);
+ if (pcm->streams[SNDRV_PCM_STREAM_PLAYBACK].substream)
+ tegra_offload_dma_free(pcm, SNDRV_PCM_STREAM_PLAYBACK);
+ if (pcm->streams[SNDRV_PCM_STREAM_CAPTURE].substream)
+ tegra_offload_dma_free(pcm, SNDRV_PCM_STREAM_CAPTURE);
}
static int tegra_offload_pcm_probe(struct snd_soc_platform *platform)
@@ -814,6 +920,8 @@
.num_dapm_widgets = ARRAY_SIZE(tegra_offload_widgets),
.dapm_routes = graph,
.num_dapm_routes = ARRAY_SIZE(graph),
+ .controls = tegra_offload_volume,
+ .num_controls = ARRAY_SIZE(tegra_offload_volume),
};
static struct snd_soc_dai_driver tegra_offload_dai[] = {
@@ -827,6 +935,13 @@
.rates = SNDRV_PCM_RATE_44100 | SNDRV_PCM_RATE_48000,
.formats = SNDRV_PCM_FMTBIT_S16_LE,
},
+ .capture = {
+ .stream_name = "offload-pcm-capture",
+ .channels_min = 2,
+ .channels_max = 2,
+ .rates = SNDRV_PCM_RATE_44100 | SNDRV_PCM_RATE_48000,
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
+ },
},
[COMPR_OFFLOAD_DAI] = {
.name = "tegra-offload-compr",
diff --git a/sound/soc/tegra/tegra_offload.h b/sound/soc/tegra/tegra_offload.h
index 6a1d0cf..2ea01db 100644
--- a/sound/soc/tegra/tegra_offload.h
+++ b/sound/soc/tegra/tegra_offload.h
@@ -17,6 +17,8 @@
#ifndef __TEGRA_OFFLOAD_H__
#define __TEGRA_OFFLOAD_H__
+#define AVP_UNITY_STREAM_VOLUME 0x10000
+
struct tegra_offload_dma_params {
unsigned long addr;
unsigned long width;
@@ -56,7 +58,7 @@
};
struct tegra_offload_pcm_ops {
- int (*stream_open)(int *id);
+ int (*stream_open)(int *id, char *stream);
void (*stream_close)(int id);
int (*set_stream_params)(int id,
struct tegra_offload_pcm_params *params);
@@ -88,6 +90,7 @@
struct tegra_offload_ops {
struct tegra_offload_device_ops device_ops;
struct tegra_offload_pcm_ops pcm_ops;
+ struct tegra_offload_pcm_ops loopback_ops;
struct tegra_offload_compr_ops compr_ops;
};
diff --git a/sound/soc/tegra/tegra_pcm.c b/sound/soc/tegra/tegra_pcm.c
index 51903fd..a5c85ff 100644
--- a/sound/soc/tegra/tegra_pcm.c
+++ b/sound/soc/tegra/tegra_pcm.c
@@ -177,23 +177,60 @@
return 0;
}
+#if defined(CONFIG_PROC_FS) && defined(CONFIG_DENVER_CPU)
+/*
+ * If needed, set affinity for APB DMA interrupts to a given CPU.
+ */
+#define SND_DMA_IRQ_CPU_AFFINITY 1
+
+static void snd_dma_irq_affinity_read(struct snd_info_entry *entry,
+ struct snd_info_buffer *buffer)
+{
+ snd_iprintf(buffer, "%d\n", SND_DMA_IRQ_CPU_AFFINITY);
+}
+
+static struct snd_info_entry *snd_dma_irq_affinity_entry;
+atomic_t snd_dma_proc_refcnt;
+static int snd_dma_proc_init(void)
+{
+ struct snd_info_entry *entry;
+
+ if (atomic_add_return(1, &snd_dma_proc_refcnt) == 1) {
+ entry = snd_info_create_module_entry(THIS_MODULE,
+ "irq_affinity", NULL);
+ if (entry) {
+ entry->c.text.read = snd_dma_irq_affinity_read;
+ if (snd_info_register(entry) < 0)
+ snd_info_free_entry(entry);
+ }
+ snd_dma_irq_affinity_entry = entry;
+ }
+ return 0;
+}
+
+static int snd_dma_proc_done(void)
+{
+ if (atomic_dec_and_test(&snd_dma_proc_refcnt))
+ snd_info_free_entry(snd_dma_irq_affinity_entry);
+ return 0;
+}
+#endif
+
int tegra_pcm_trigger(struct snd_pcm_substream *substream, int cmd)
{
struct snd_soc_pcm_runtime *rtd = substream->private_data;
- struct tegra_pcm_dma_params * dmap;
struct tegra_runtime_data *prtd;
+ int ret;
+#if defined(CONFIG_PROC_FS) && defined(CONFIG_DENVER_CPU)
+ int err;
+ struct dma_chan *chan;
+#endif
if (rtd->dai_link->no_pcm)
return 0;
- dmap = snd_soc_dai_get_dma_data(rtd->cpu_dai, substream);
-
-
prtd = (struct tegra_runtime_data *)
- snd_dmaengine_pcm_get_data(substream);
-
- if (!dmap)
- return 0;
+ snd_dmaengine_pcm_get_data(substream);
switch (cmd) {
case SNDRV_PCM_TRIGGER_START:
@@ -207,16 +244,48 @@
substream->runtime->no_period_wakeup = 0;
}
- return snd_dmaengine_pcm_trigger(substream,
+ if (rtd->dai_link->ops && rtd->dai_link->ops->trigger)
+ rtd->dai_link->ops->trigger(substream, cmd);
+
+ ret = snd_dmaengine_pcm_trigger(substream,
SNDRV_PCM_TRIGGER_START);
+#if defined(CONFIG_PROC_FS) && defined(CONFIG_DENVER_CPU)
+ if (ret == 0) {
+ chan = snd_dmaengine_pcm_get_chan(substream);
+ if (chan) {
+ err = irq_set_affinity(
+ INT_APB_DMA_CH0 + chan->chan_id,
+ cpumask_of(SND_DMA_IRQ_CPU_AFFINITY));
+ if (err < 0)
+ pr_warn("%s:Failed to set irq affinity for irq:%u to cpu:%u, error:%d\n",
+ __func__,
+ INT_APB_DMA_CH0 + chan->chan_id,
+ SND_DMA_IRQ_CPU_AFFINITY, err);
+ }
+ }
+#endif
+ return ret;
case SNDRV_PCM_TRIGGER_STOP:
case SNDRV_PCM_TRIGGER_SUSPEND:
case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
prtd->running = 0;
- return snd_dmaengine_pcm_trigger(substream,
+#if defined(CONFIG_PROC_FS) && defined(CONFIG_DENVER_CPU)
+ chan = snd_dmaengine_pcm_get_chan(substream);
+ if (chan) {
+ err = irq_set_affinity(INT_APB_DMA_CH0 + chan->chan_id,
+ irq_default_affinity);
+ if (err < 0)
+ pr_warn("%s:Failed to set default irq affinity for irq:%u, error:%d\n",
+ __func__, INT_APB_DMA_CH0 + chan->chan_id, err);
+ }
+#endif
+ snd_dmaengine_pcm_trigger(substream,
SNDRV_PCM_TRIGGER_STOP);
+ if (rtd->dai_link->ops && rtd->dai_link->ops->trigger)
+ rtd->dai_link->ops->trigger(substream, cmd);
+ return 0;
default:
return -EINVAL;
}
@@ -357,6 +426,9 @@
int tegra_pcm_new(struct snd_soc_pcm_runtime *rtd)
{
+#if defined(CONFIG_PROC_FS) && defined(CONFIG_DENVER_CPU)
+ snd_dma_proc_init();
+#endif
return tegra_pcm_dma_allocate(rtd ,
tegra_pcm_hardware.buffer_bytes_max);
}
@@ -365,19 +437,61 @@
{
tegra_pcm_deallocate_dma_buffer(pcm, SNDRV_PCM_STREAM_CAPTURE);
tegra_pcm_deallocate_dma_buffer(pcm, SNDRV_PCM_STREAM_PLAYBACK);
+#if defined(CONFIG_PROC_FS) && defined(CONFIG_DENVER_CPU)
+ snd_dma_proc_done();
+#endif
}
+static struct snd_soc_dai_driver tegra_fast_dai[] = {
+ [0] = {
+ .name = "tegra-fast-pcm",
+ .id = 0,
+ .playback = {
+ .stream_name = "fast-pcm-playback",
+ .channels_min = 1,
+ .channels_max = 2,
+ .rates = SNDRV_PCM_RATE_44100 | SNDRV_PCM_RATE_48000,
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
+ },
+ },
+ [1] = {
+ .name = "tegra-fast-pcm1",
+ .id = 1,
+ .playback = {
+ .stream_name = "fast-pcm-playback1",
+ .channels_min = 1,
+ .channels_max = 2,
+ .rates = SNDRV_PCM_RATE_44100 | SNDRV_PCM_RATE_48000,
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
+ },
+ },
+};
+
static int tegra_pcm_probe(struct snd_soc_platform *platform)
{
platform->dapm.idle_bias_off = 1;
return 0;
}
+unsigned int tegra_fast_pcm_read(struct snd_soc_platform *platform,
+ unsigned int reg)
+{
+ return 0;
+}
+
+static int tegra_fast_pcm_write(struct snd_soc_platform *platform,
+ unsigned int reg, unsigned int val)
+{
+ return 0;
+}
+
static struct snd_soc_platform_driver tegra_pcm_platform = {
.ops = &tegra_pcm_ops,
.pcm_new = tegra_pcm_new,
.pcm_free = tegra_pcm_free,
.probe = tegra_pcm_probe,
+ .read = tegra_fast_pcm_read,
+ .write = tegra_fast_pcm_write,
};
int tegra_pcm_platform_register(struct device *dev)
@@ -392,6 +506,44 @@
}
EXPORT_SYMBOL_GPL(tegra_pcm_platform_unregister);
+static const struct snd_soc_component_driver tegra_fast_pcm_component = {
+ .name = "tegra-pcm-audio",
+};
+
+static int tegra_soc_platform_probe(struct platform_device *pdev)
+{
+ int ret = 0;
+
+ tegra_pcm_platform_register(&pdev->dev);
+
+ ret = snd_soc_register_component(&pdev->dev, &tegra_fast_pcm_component,
+ tegra_fast_dai, ARRAY_SIZE(tegra_fast_dai));
+ if (ret)
+ dev_err(&pdev->dev, "Could not register component: %d\n", ret);
+
+ return ret;
+}
+
+static int tegra_soc_platform_remove(struct platform_device *pdev)
+{
+ pr_info("%s\n", __func__);
+ tegra_pcm_platform_unregister(&pdev->dev);
+ return 0;
+}
+
+static struct platform_driver tegra_pcm_driver = {
+ .driver = {
+ .name = "tegra-pcm-audio",
+ .owner = THIS_MODULE,
+ },
+
+ .probe = tegra_soc_platform_probe,
+ .remove = tegra_soc_platform_remove,
+};
+
+module_platform_driver(tegra_pcm_driver);
+
MODULE_AUTHOR("Stephen Warren <swarren@nvidia.com>");
MODULE_DESCRIPTION("Tegra PCM ASoC driver");
MODULE_LICENSE("GPL");
diff --git a/sound/soc/tegra/tegra_rt5677.c b/sound/soc/tegra/tegra_rt5677.c
new file mode 100644
index 0000000..9655f6b
--- /dev/null
+++ b/sound/soc/tegra/tegra_rt5677.c
@@ -0,0 +1,2107 @@
+/*
+ * tegra_rt5677.c - Tegra machine ASoC driver for boards using the RT5677 codec.
+ *
+ * Copyright (c) 2013-2014, NVIDIA CORPORATION. All rights reserved.
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
+ * 02110-1301 USA
+ *
+ */
+#include <asm/mach-types.h>
+#include <linux/of.h>
+#include <linux/clk.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/gpio.h>
+#include <linux/of_gpio.h>
+#include <linux/regulator/consumer.h>
+#include <linux/delay.h>
+#include <linux/pm_runtime.h>
+#include <linux/cpufreq.h>
+#include <mach/tegra_asoc_pdata.h>
+#include <mach/gpio-tegra.h>
+#include <linux/sysedp.h>
+
+#include <sound/core.h>
+#include <sound/jack.h>
+#include <sound/pcm.h>
+#include <sound/pcm_params.h>
+#include <sound/soc.h>
+#include "../codecs/rt5677.h"
+#include "../codecs/rt5506.h"
+#include "../codecs/tfa9895.h"
+
+#include "tegra_pcm.h"
+#include "tegra30_ahub.h"
+#include "tegra30_i2s.h"
+#include "tegra30_dam.h"
+#include "tegra_rt5677.h"
+
+#define DRV_NAME "tegra-snd-rt5677"
+
+#define DAI_LINK_HIFI 0
+#define DAI_LINK_SPEAKER 1
+#define DAI_LINK_BTSCO 2
+#define DAI_LINK_MI2S_DUMMY 3
+#define DAI_LINK_PCM_OFFLOAD_FE 4
+#define DAI_LINK_COMPR_OFFLOAD_FE 5
+#define DAI_LINK_I2S_OFFLOAD_BE 6
+#define DAI_LINK_I2S_OFFLOAD_SPEAKER_BE 7
+#define DAI_LINK_PCM_OFFLOAD_CAPTURE_FE 8
+#define DAI_LINK_FAST_FE 9
+#define NUM_DAI_LINKS 10
+
+#define HOTWORD_CPU_FREQ_BOOST_MIN 2000000
+#define HOTWORD_CPU_FREQ_BOOST_DURATION_MS 400
+
+const char *tegra_rt5677_i2s_dai_name[TEGRA30_NR_I2S_IFC] = {
+ "tegra30-i2s.0",
+ "tegra30-i2s.1",
+ "tegra30-i2s.2",
+ "tegra30-i2s.3",
+ "tegra30-i2s.4",
+};
+
+struct regulator *rt5677_reg;
+static struct sysedp_consumer *sysedpc;
+
+void __set_rt5677_power(struct tegra_rt5677 *machine, bool enable,
+ bool hp_depop);
+void set_rt5677_power_locked(struct tegra_rt5677 *machine, bool enable,
+ bool hp_depop);
+
+static int hotword_cpufreq_notifier(struct notifier_block *nb,
+ unsigned long event, void *data)
+{
+ struct cpufreq_policy *policy = data;
+
+ if (policy == NULL || event != CPUFREQ_ADJUST)
+ return 0;
+
+ pr_debug("%s: adjusting cpu%d min freq to %d for hotword (currently "
+ "at %d)\n", __func__, policy->cpu, HOTWORD_CPU_FREQ_BOOST_MIN,
+ policy->cur);
+
+ /* Make sure that the policy makes sense overall. */
+ cpufreq_verify_within_limits(policy, HOTWORD_CPU_FREQ_BOOST_MIN,
+ policy->cpuinfo.max_freq);
+ return 0;
+}
+
+static struct notifier_block hotword_cpufreq_notifier_block = {
+ .notifier_call = hotword_cpufreq_notifier
+};
+
+static void tegra_do_hotword_work(struct work_struct *work)
+{
+ struct tegra_rt5677 *machine = container_of(work, struct tegra_rt5677, hotword_work);
+ char *hot_event[] = { "ACTION=HOTWORD", NULL };
+ int ret, i;
+
+ /*
+  * Register a CPU policy change listener that will bump up the min
+  * frequency while we're processing the hotword.
+  */
+ ret = cpufreq_register_notifier(&hotword_cpufreq_notifier_block,
+ CPUFREQ_POLICY_NOTIFIER);
+ if (!ret) {
+ for_each_online_cpu(i)
+ cpufreq_update_policy(i);
+ }
+
+ kobject_uevent_env(&machine->pcard->dev->kobj, KOBJ_CHANGE, hot_event);
+
+ /*
+  * If we registered the notifier, we can wait the specified duration
+  * before resetting the CPUs back to what they were.
+  */
+ if (!ret) {
+ msleep(HOTWORD_CPU_FREQ_BOOST_DURATION_MS);
+ cpufreq_unregister_notifier(&hotword_cpufreq_notifier_block,
+ CPUFREQ_POLICY_NOTIFIER);
+ for_each_online_cpu(i)
+ cpufreq_update_policy(i);
+ }
+}
+
+static irqreturn_t detect_rt5677_irq_handler(int irq, void *dev_id)
+{
+ int value;
+ struct tegra_rt5677 *machine = dev_id;
+
+ value = gpio_get_value(machine->pdata->gpio_irq1);
+
+ pr_info("RT5677 IRQ is triggered = 0x%x\n", value);
+ if (value == 1) {
+ schedule_work(&machine->hotword_work);
+ wake_lock_timeout(&machine->vad_wake, msecs_to_jiffies(1500));
+ }
+ return IRQ_HANDLED;
+}
+
+static void tegra_rt5677_set_cif(int cif, unsigned int channels,
+ unsigned int sample_size)
+{
+ tegra30_ahub_set_tx_cif_channels(cif, channels, channels);
+
+ switch (sample_size) {
+ case 8:
+ tegra30_ahub_set_tx_cif_bits(cif,
+ TEGRA30_AUDIOCIF_BITS_8, TEGRA30_AUDIOCIF_BITS_8);
+ tegra30_ahub_set_tx_fifo_pack_mode(cif,
+ TEGRA30_AHUB_CHANNEL_CTRL_TX_PACK_8_4);
+ break;
+
+ case 16:
+ tegra30_ahub_set_tx_cif_bits(cif,
+ TEGRA30_AUDIOCIF_BITS_16, TEGRA30_AUDIOCIF_BITS_16);
+ tegra30_ahub_set_tx_fifo_pack_mode(cif,
+ TEGRA30_AHUB_CHANNEL_CTRL_TX_PACK_16);
+ break;
+
+ case 24:
+ tegra30_ahub_set_tx_cif_bits(cif,
+ TEGRA30_AUDIOCIF_BITS_24, TEGRA30_AUDIOCIF_BITS_24);
+ tegra30_ahub_set_tx_fifo_pack_mode(cif, 0);
+ break;
+
+ case 32:
+ tegra30_ahub_set_tx_cif_bits(cif,
+ TEGRA30_AUDIOCIF_BITS_32, TEGRA30_AUDIOCIF_BITS_32);
+ tegra30_ahub_set_tx_fifo_pack_mode(cif, 0);
+ break;
+
+ default:
+ pr_err("Invalid sample size: %u\n", sample_size);
+ break;
+ }
+}
+
+
+static int tegra_rt5677_fe_pcm_startup(struct snd_pcm_substream *substream)
+{
+ struct snd_soc_pcm_runtime *rtd = substream->private_data;
+ struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(rtd->card);
+
+ tegra30_ahub_enable_clocks();
+ tegra30_ahub_allocate_tx_fifo(
+ &machine->playback_fifo_cif,
+ &machine->playback_dma_data.addr,
+ &machine->playback_dma_data.req_sel);
+
+ machine->playback_dma_data.wrap = 4;
+ machine->playback_dma_data.width = 32;
+ cpu_dai->playback_dma_data = &machine->playback_dma_data;
+
+ tegra30_ahub_set_rx_cif_source(TEGRA30_AHUB_RXCIF_DAM0_RX0 +
+ (machine->dam_ifc * 2), machine->playback_fifo_cif);
+
+ return 0;
+}
+
+static void tegra_rt5677_fe_pcm_shutdown(struct snd_pcm_substream *substream)
+{
+ struct snd_soc_pcm_runtime *rtd = substream->private_data;
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(rtd->card);
+
+ tegra30_ahub_unset_rx_cif_source(TEGRA30_AHUB_RXCIF_DAM0_RX0 +
+ (machine->dam_ifc * 2));
+ tegra30_ahub_free_tx_fifo(machine->playback_fifo_cif);
+ machine->playback_fifo_cif = -1;
+ tegra30_ahub_disable_clocks();
+}
+
+static int tegra_rt5677_fe_pcm_trigger(struct snd_pcm_substream *substream,
+ int cmd)
+{
+ struct snd_soc_pcm_runtime *rtd = substream->private_data;
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(rtd->card);
+
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_START:
+ case SNDRV_PCM_TRIGGER_RESUME:
+ case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
+ tegra30_dam_enable(machine->dam_ifc, TEGRA30_DAM_ENABLE,
+ TEGRA30_DAM_CHIN0_SRC);
+ tegra30_ahub_enable_tx_fifo(machine->playback_fifo_cif);
+ break;
+
+ case SNDRV_PCM_TRIGGER_STOP:
+ case SNDRV_PCM_TRIGGER_SUSPEND:
+ case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+ tegra30_ahub_disable_tx_fifo(machine->playback_fifo_cif);
+ tegra30_dam_enable(machine->dam_ifc, TEGRA30_DAM_DISABLE,
+ TEGRA30_DAM_CHIN0_SRC);
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int tegra_rt5677_fe_compr_ops_startup(struct snd_compr_stream *cstream)
+{
+ struct snd_soc_pcm_runtime *fe = cstream->private_data;
+ struct snd_soc_dai *cpu_dai = fe->cpu_dai;
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(fe->card);
+
+ tegra30_ahub_enable_clocks();
+ tegra30_ahub_allocate_tx_fifo(
+ &machine->playback_fifo_cif,
+ &machine->playback_dma_data.addr,
+ &machine->playback_dma_data.req_sel);
+
+ machine->playback_dma_data.wrap = 4;
+ machine->playback_dma_data.width = 32;
+ cpu_dai->playback_dma_data = &machine->playback_dma_data;
+
+ tegra30_ahub_set_rx_cif_source(TEGRA30_AHUB_RXCIF_DAM0_RX0 +
+ (machine->dam_ifc * 2), machine->playback_fifo_cif);
+
+ return 0;
+}
+
+static void tegra_rt5677_fe_compr_ops_shutdown(struct snd_compr_stream *cstream)
+{
+ struct snd_soc_pcm_runtime *fe = cstream->private_data;
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(fe->card);
+
+ tegra30_ahub_unset_rx_cif_source(TEGRA30_AHUB_RXCIF_DAM0_RX0 +
+ (machine->dam_ifc * 2));
+ tegra30_ahub_free_tx_fifo(machine->playback_fifo_cif);
+ machine->playback_fifo_cif = -1;
+ tegra30_ahub_disable_clocks();
+}
+
+static int tegra_rt5677_fe_compr_ops_trigger(struct snd_compr_stream *cstream,
+ int cmd)
+{
+ struct snd_soc_pcm_runtime *fe = cstream->private_data;
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(fe->card);
+
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_START:
+ case SNDRV_PCM_TRIGGER_RESUME:
+ case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
+ tegra30_dam_enable(machine->dam_ifc, TEGRA30_DAM_ENABLE,
+ TEGRA30_DAM_CHIN0_SRC);
+ tegra30_ahub_enable_tx_fifo(machine->playback_fifo_cif);
+ break;
+
+ case SNDRV_PCM_TRIGGER_STOP:
+ case SNDRV_PCM_TRIGGER_SUSPEND:
+ case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+ tegra30_ahub_disable_tx_fifo(machine->playback_fifo_cif);
+ tegra30_dam_enable(machine->dam_ifc, TEGRA30_DAM_DISABLE,
+ TEGRA30_DAM_CHIN0_SRC);
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int tegra_rt5677_fe_fast_startup(struct snd_pcm_substream *substream)
+{
+ struct snd_soc_pcm_runtime *rtd = substream->private_data;
+ struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(rtd->card);
+
+ tegra30_ahub_enable_clocks();
+ tegra30_ahub_allocate_tx_fifo(
+ &machine->playback_fast_fifo_cif,
+ &machine->playback_fast_dma_data.addr,
+ &machine->playback_fast_dma_data.req_sel);
+
+ machine->playback_fast_dma_data.width = 32;
+ machine->playback_fast_dma_data.wrap = 4;
+ cpu_dai->playback_dma_data = &machine->playback_fast_dma_data;
+
+ tegra30_ahub_set_rx_cif_source(
+ TEGRA30_AHUB_RXCIF_DAM0_RX1 + (machine->dam_ifc * 2),
+ machine->playback_fast_fifo_cif);
+
+ return 0;
+}
+
+static void tegra_rt5677_fe_fast_shutdown(struct snd_pcm_substream *substream)
+{
+ struct snd_soc_pcm_runtime *rtd = substream->private_data;
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(rtd->card);
+
+ tegra30_ahub_unset_rx_cif_source(TEGRA30_AHUB_RXCIF_DAM0_RX1 +
+ (machine->dam_ifc * 2));
+ tegra30_ahub_free_tx_fifo(machine->playback_fast_fifo_cif);
+ machine->playback_fast_fifo_cif = -1;
+ tegra30_ahub_disable_clocks();
+}
+
+static int tegra_rt5677_fe_fast_trigger(struct snd_pcm_substream *substream,
+ int cmd)
+{
+ struct snd_soc_pcm_runtime *rtd = substream->private_data;
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(rtd->card);
+
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_START:
+ case SNDRV_PCM_TRIGGER_RESUME:
+ case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
+ tegra30_dam_ch0_set_datasync(machine->dam_ifc, 2);
+ tegra30_dam_ch1_set_datasync(machine->dam_ifc, 0);
+ tegra30_dam_enable(machine->dam_ifc, TEGRA30_DAM_ENABLE,
+ TEGRA30_DAM_CHIN1);
+ tegra30_ahub_enable_tx_fifo(machine->playback_fast_fifo_cif);
+ break;
+
+ case SNDRV_PCM_TRIGGER_STOP:
+ case SNDRV_PCM_TRIGGER_SUSPEND:
+ case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+ tegra30_ahub_disable_tx_fifo(machine->playback_fast_fifo_cif);
+ tegra30_dam_enable(machine->dam_ifc, TEGRA30_DAM_DISABLE,
+ TEGRA30_DAM_CHIN1);
+ tegra30_dam_ch0_set_datasync(machine->dam_ifc, 1);
+ tegra30_dam_ch1_set_datasync(machine->dam_ifc, 0);
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int tegra_rt5677_spk_startup(struct snd_pcm_substream *substream)
+{
+ struct snd_soc_pcm_runtime *rtd = substream->private_data;
+ struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
+ struct tegra30_i2s *i2s = snd_soc_dai_get_drvdata(cpu_dai);
+ struct snd_soc_card *card = rtd->card;
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(card);
+ int dam_ifc = machine->dam_ifc;
+
+ pr_info("%s: mi2s amp on\n", __func__);
+
+ tegra_asoc_utils_tristate_pd_dap(i2s->id, false);
+
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ if (rtd->dai_link->be_id == DAI_LINK_I2S_OFFLOAD_SPEAKER_BE) {
+ mutex_lock(&machine->dam_mutex);
+ if (machine->dam_ref_cnt == 0) {
+ tegra30_dam_soft_reset(dam_ifc);
+ tegra30_dam_allocate_channel(dam_ifc,
+ TEGRA30_DAM_CHIN0_SRC);
+ tegra30_dam_allocate_channel(dam_ifc,
+ TEGRA30_DAM_CHIN1);
+ tegra30_dam_enable_clock(dam_ifc);
+
+ tegra30_dam_set_samplerate(dam_ifc, TEGRA30_DAM_CHOUT,
+ 48000);
+ tegra30_dam_set_samplerate(dam_ifc,
+ TEGRA30_DAM_CHIN0_SRC, 48000);
+ tegra30_dam_set_gain(dam_ifc, TEGRA30_DAM_CHIN0_SRC,
+ 0x1000);
+ tegra30_dam_set_acif(dam_ifc, TEGRA30_DAM_CHIN0_SRC,
+ 2, 16, 2, 32);
+ tegra30_dam_set_gain(dam_ifc, TEGRA30_DAM_CHIN1,
+ 0x1000);
+ tegra30_dam_set_acif(dam_ifc, TEGRA30_DAM_CHIN1,
+ 2, 16, 2, 32);
+ tegra30_dam_set_acif(dam_ifc, TEGRA30_DAM_CHOUT,
+ 2, 16, 2, 32);
+ tegra30_dam_enable_stereo_mixing(machine->dam_ifc, 1);
+ tegra30_dam_ch0_set_datasync(dam_ifc, 0);
+ tegra30_dam_ch1_set_datasync(dam_ifc, 0);
+ }
+ tegra30_ahub_set_rx_cif_source(i2s->playback_i2s_cif,
+ TEGRA30_AHUB_TXCIF_DAM0_TX0 + machine->dam_ifc);
+ machine->dam_ref_cnt++;
+ mutex_unlock(&machine->dam_mutex);
+ } else {
+ tegra30_ahub_allocate_tx_fifo(
+ &i2s->playback_fifo_cif,
+ &i2s->playback_dma_data.addr,
+ &i2s->playback_dma_data.req_sel);
+ i2s->playback_dma_data.wrap = 4;
+ i2s->playback_dma_data.width = 32;
+ cpu_dai->playback_dma_data = &i2s->playback_dma_data;
+ tegra30_ahub_set_rx_cif_source(
+ i2s->playback_i2s_cif,
+ i2s->playback_fifo_cif);
+ }
+ }
+
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ mutex_lock(&machine->spk_amp_lock);
+ set_tfa9895_spkamp(1, 0);
+ set_tfa9895l_spkamp(1, 0);
+ mutex_unlock(&machine->spk_amp_lock);
+ }
+
+ return 0;
+}
+
+static void tegra_rt5677_spk_shutdown(struct snd_pcm_substream *substream)
+{
+ struct snd_soc_pcm_runtime *rtd = substream->private_data;
+ struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
+ struct tegra30_i2s *i2s = snd_soc_dai_get_drvdata(cpu_dai);
+ struct snd_soc_card *card = rtd->card;
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(card);
+
+ pr_info("%s: mi2s amp off\n", __func__);
+
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ if (rtd->dai_link->be_id == DAI_LINK_I2S_OFFLOAD_SPEAKER_BE) {
+ mutex_lock(&machine->dam_mutex);
+ tegra30_ahub_unset_rx_cif_source(i2s->playback_i2s_cif);
+ machine->dam_ref_cnt--;
+ if (machine->dam_ref_cnt == 0)
+ tegra30_dam_disable_clock(machine->dam_ifc);
+ mutex_unlock(&machine->dam_mutex);
+ } else {
+ tegra30_ahub_unset_rx_cif_source(i2s->playback_i2s_cif);
+ tegra30_ahub_free_tx_fifo(i2s->playback_fifo_cif);
+ i2s->playback_fifo_cif = -1;
+ }
+ }
+
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ mutex_lock(&machine->spk_amp_lock);
+ set_tfa9895_spkamp(0, 0);
+ set_tfa9895l_spkamp(0, 0);
+ mutex_unlock(&machine->spk_amp_lock);
+ }
+ tegra_asoc_utils_tristate_pd_dap(i2s->id, true);
+}
+
+static int tegra_rt5677_startup(struct snd_pcm_substream *substream)
+{
+ struct snd_soc_pcm_runtime *rtd = substream->private_data;
+ struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
+ struct tegra30_i2s *i2s = snd_soc_dai_get_drvdata(cpu_dai);
+
+ struct snd_soc_card *card = rtd->card;
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(card);
+ struct tegra_asoc_platform_data *pdata = machine->pdata;
+ int dam_ifc = machine->dam_ifc;
+
+ pr_debug("%s i2s->id=%d %d\n", __func__, i2s->id,
+ pdata->i2s_param[HIFI_CODEC].audio_port_id);
+ tegra_asoc_utils_tristate_pd_dap(i2s->id, false);
+ if (i2s->id == pdata->i2s_param[HIFI_CODEC].audio_port_id) {
+ cancel_delayed_work_sync(&machine->power_work);
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+ set_rt5677_power_locked(machine, true, true);
+ else
+ set_rt5677_power_locked(machine, true, false);
+ }
+
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ if (rtd->dai_link->be_id == DAI_LINK_I2S_OFFLOAD_BE) {
+ mutex_lock(&machine->dam_mutex);
+ if (machine->dam_ref_cnt == 0) {
+ tegra30_dam_soft_reset(dam_ifc);
+ tegra30_dam_allocate_channel(dam_ifc,
+ TEGRA30_DAM_CHIN0_SRC);
+ tegra30_dam_allocate_channel(dam_ifc,
+ TEGRA30_DAM_CHIN1);
+ tegra30_dam_enable_clock(dam_ifc);
+
+ tegra30_dam_set_samplerate(dam_ifc, TEGRA30_DAM_CHOUT,
+ 48000);
+ tegra30_dam_set_samplerate(dam_ifc,
+ TEGRA30_DAM_CHIN0_SRC, 48000);
+ tegra30_dam_set_gain(dam_ifc, TEGRA30_DAM_CHIN0_SRC,
+ 0x1000);
+ tegra30_dam_set_acif(dam_ifc, TEGRA30_DAM_CHIN0_SRC,
+ 2, 16, 2, 32);
+ tegra30_dam_set_gain(dam_ifc, TEGRA30_DAM_CHIN1,
+ 0x1000);
+ tegra30_dam_set_acif(dam_ifc, TEGRA30_DAM_CHIN1,
+ 2, 16, 2, 32);
+ tegra30_dam_set_acif(dam_ifc, TEGRA30_DAM_CHOUT,
+ 2, 16, 2, 32);
+ tegra30_dam_enable_stereo_mixing(machine->dam_ifc, 1);
+ tegra30_dam_ch0_set_datasync(dam_ifc, 0);
+ tegra30_dam_ch1_set_datasync(dam_ifc, 0);
+ }
+ tegra30_ahub_set_rx_cif_source(i2s->playback_i2s_cif,
+ TEGRA30_AHUB_TXCIF_DAM0_TX0 + machine->dam_ifc);
+ machine->dam_ref_cnt++;
+ mutex_unlock(&machine->dam_mutex);
+ } else {
+ tegra30_ahub_allocate_tx_fifo(
+ &i2s->playback_fifo_cif,
+ &i2s->playback_dma_data.addr,
+ &i2s->playback_dma_data.req_sel);
+ i2s->playback_dma_data.wrap = 4;
+ i2s->playback_dma_data.width = 32;
+ cpu_dai->playback_dma_data = &i2s->playback_dma_data;
+ tegra30_ahub_set_rx_cif_source(
+ i2s->playback_i2s_cif,
+ i2s->playback_fifo_cif);
+ }
+ }
+ return 0;
+}
+
+static void tegra_rt5677_shutdown(struct snd_pcm_substream *substream)
+{
+ struct snd_soc_pcm_runtime *rtd = substream->private_data;
+ struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
+ struct tegra30_i2s *i2s = snd_soc_dai_get_drvdata(cpu_dai);
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(rtd->card);
+
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ if (rtd->dai_link->be_id == DAI_LINK_I2S_OFFLOAD_BE) {
+ mutex_lock(&machine->dam_mutex);
+ tegra30_ahub_unset_rx_cif_source(i2s->playback_i2s_cif);
+ machine->dam_ref_cnt--;
+ if (machine->dam_ref_cnt == 0)
+ tegra30_dam_disable_clock(machine->dam_ifc);
+ mutex_unlock(&machine->dam_mutex);
+ } else {
+ tegra30_ahub_unset_rx_cif_source(i2s->playback_i2s_cif);
+ tegra30_ahub_free_tx_fifo(i2s->playback_fifo_cif);
+ i2s->playback_fifo_cif = -1;
+ }
+ }
+ tegra_asoc_utils_tristate_pd_dap(i2s->id, true);
+
+ if (machine->codec && machine->codec->active)
+ return;
+
+ schedule_delayed_work(&machine->power_work,
+ msecs_to_jiffies(500));
+}
+
+static int tegra_rt5677_hw_params(struct snd_pcm_substream *substream,
+ struct snd_pcm_hw_params *params)
+{
+ struct snd_soc_pcm_runtime *rtd = substream->private_data;
+ struct snd_soc_dai *codec_dai = rtd->codec_dai;
+ struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
+ struct snd_soc_codec *codec = rtd->codec;
+ struct snd_soc_card *card = codec->card;
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(card);
+ struct tegra_asoc_platform_data *pdata = machine->pdata;
+ int srate, mclk, i2s_daifmt, codec_daifmt;
+ int err, rate, sample_size;
+ unsigned int i2sclock;
+
+ srate = params_rate(params);
+ mclk = 256 * srate;
+
+ i2s_daifmt = SND_SOC_DAIFMT_NB_NF;
+ i2s_daifmt |= pdata->i2s_param[HIFI_CODEC].is_i2s_master ?
+ SND_SOC_DAIFMT_CBS_CFS : SND_SOC_DAIFMT_CBM_CFM;
+
+ switch (params_format(params)) {
+ case SNDRV_PCM_FORMAT_S8:
+ sample_size = 8;
+ break;
+ case SNDRV_PCM_FORMAT_S16_LE:
+ sample_size = 16;
+ break;
+ case SNDRV_PCM_FORMAT_S24_LE:
+ sample_size = 24;
+ break;
+ case SNDRV_PCM_FORMAT_S32_LE:
+ sample_size = 32;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ switch (pdata->i2s_param[HIFI_CODEC].i2s_mode) {
+ case TEGRA_DAIFMT_I2S:
+ i2s_daifmt |= SND_SOC_DAIFMT_I2S;
+ break;
+ case TEGRA_DAIFMT_DSP_A:
+ i2s_daifmt |= SND_SOC_DAIFMT_DSP_A;
+ break;
+ case TEGRA_DAIFMT_DSP_B:
+ i2s_daifmt |= SND_SOC_DAIFMT_DSP_B;
+ break;
+ case TEGRA_DAIFMT_LEFT_J:
+ i2s_daifmt |= SND_SOC_DAIFMT_LEFT_J;
+ break;
+ case TEGRA_DAIFMT_RIGHT_J:
+ i2s_daifmt |= SND_SOC_DAIFMT_RIGHT_J;
+ break;
+ default:
+ dev_err(card->dev, "Can't configure i2s format\n");
+ return -EINVAL;
+ }
+
+ err = tegra_asoc_utils_set_rate(&machine->util_data, srate, mclk);
+ if (err < 0) {
+ if (!(machine->util_data.set_mclk % mclk)) {
+ mclk = machine->util_data.set_mclk;
+ } else {
+ dev_err(card->dev, "Can't configure clocks\n");
+ return err;
+ }
+ }
+
+ tegra_asoc_utils_lock_clk_rate(&machine->util_data, 1);
+
+ rate = clk_get_rate(machine->util_data.clk_cdev1);
+
+ if (pdata->i2s_param[HIFI_CODEC].is_i2s_master) {
+ err = snd_soc_dai_set_sysclk(codec_dai, RT5677_SCLK_S_MCLK,
+ rate, SND_SOC_CLOCK_IN);
+ if (err < 0) {
+ dev_err(card->dev, "codec_dai clock not set\n");
+ return err;
+ }
+ } else {
+
+ err = snd_soc_dai_set_pll(codec_dai, 0, RT5677_PLL1_S_MCLK,
+ rate, 512*srate);
+ if (err < 0) {
+ dev_err(card->dev, "codec_dai pll not set\n");
+ return err;
+ }
+ err = snd_soc_dai_set_sysclk(codec_dai, RT5677_SCLK_S_PLL1,
+ 512*srate, SND_SOC_CLOCK_IN);
+ if (err < 0) {
+ dev_err(card->dev, "codec_dai clock not set\n");
+ return err;
+ }
+ }
+
+ /* Use 64Fs */
+ i2sclock = srate * 2 * 32;
+
+ err = snd_soc_dai_set_sysclk(cpu_dai, 0,
+ i2sclock, SND_SOC_CLOCK_OUT);
+ if (err < 0) {
+ dev_err(card->dev, "cpu_dai clock not set\n");
+ return err;
+ }
+
+ codec_daifmt = i2s_daifmt;
+
+ /*
+  * Invert the codec bclk polarity when the codec is master in DSP
+  * mode; this matches the negative-edge settings of the Tegra I2S
+  * controller.
+  */
+ if (((i2s_daifmt & SND_SOC_DAIFMT_FORMAT_MASK)
+ == SND_SOC_DAIFMT_DSP_A) &&
+ ((i2s_daifmt & SND_SOC_DAIFMT_MASTER_MASK)
+ == SND_SOC_DAIFMT_CBM_CFM)) {
+ codec_daifmt &= ~(SND_SOC_DAIFMT_INV_MASK);
+ codec_daifmt |= SND_SOC_DAIFMT_IB_NF;
+ }
+
+ err = snd_soc_dai_set_fmt(codec_dai, codec_daifmt);
+ if (err < 0) {
+ dev_err(card->dev, "codec_dai fmt not set\n");
+ return err;
+ }
+
+ err = snd_soc_dai_set_fmt(cpu_dai, i2s_daifmt);
+ if (err < 0) {
+ dev_err(card->dev, "cpu_dai fmt not set\n");
+ return err;
+ }
+
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ if ((int)machine->playback_fifo_cif >= 0)
+ tegra_rt5677_set_cif(machine->playback_fifo_cif,
+ params_channels(params), sample_size);
+
+ if ((int)machine->playback_fast_fifo_cif >= 0)
+ tegra_rt5677_set_cif(machine->playback_fast_fifo_cif,
+ params_channels(params), sample_size);
+ }
+
+ return 0;
+}
+
+static int tegra_speaker_hw_params(struct snd_pcm_substream *substream,
+ struct snd_pcm_hw_params *params)
+{
+ struct snd_soc_pcm_runtime *rtd = substream->private_data;
+ struct snd_soc_card *card = rtd->card;
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(card);
+ struct tegra_asoc_platform_data *pdata = machine->pdata;
+ int i2s_daifmt;
+ int err, sample_size;
+
+ i2s_daifmt = SND_SOC_DAIFMT_NB_NF;
+ i2s_daifmt |= pdata->i2s_param[SPEAKER].is_i2s_master ?
+ SND_SOC_DAIFMT_CBS_CFS : SND_SOC_DAIFMT_CBM_CFM;
+
+ switch (pdata->i2s_param[SPEAKER].i2s_mode) {
+ case TEGRA_DAIFMT_I2S:
+ i2s_daifmt |= SND_SOC_DAIFMT_I2S;
+ break;
+ case TEGRA_DAIFMT_DSP_A:
+ i2s_daifmt |= SND_SOC_DAIFMT_DSP_A;
+ break;
+ case TEGRA_DAIFMT_DSP_B:
+ i2s_daifmt |= SND_SOC_DAIFMT_DSP_B;
+ break;
+ case TEGRA_DAIFMT_LEFT_J:
+ i2s_daifmt |= SND_SOC_DAIFMT_LEFT_J;
+ break;
+ case TEGRA_DAIFMT_RIGHT_J:
+ i2s_daifmt |= SND_SOC_DAIFMT_RIGHT_J;
+ break;
+ default:
+ dev_err(card->dev, "Can't configure i2s format\n");
+ return -EINVAL;
+ }
+
+ err = snd_soc_dai_set_fmt(rtd->cpu_dai, i2s_daifmt);
+ if (err < 0) {
+ dev_err(card->dev, "cpu_dai fmt not set\n");
+ return err;
+ }
+
+ switch (params_format(params)) {
+ case SNDRV_PCM_FORMAT_S8:
+ sample_size = 8;
+ break;
+ case SNDRV_PCM_FORMAT_S16_LE:
+ sample_size = 16;
+ break;
+ case SNDRV_PCM_FORMAT_S24_LE:
+ sample_size = 24;
+ break;
+ case SNDRV_PCM_FORMAT_S32_LE:
+ sample_size = 32;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ if ((int)machine->playback_fifo_cif >= 0)
+ tegra_rt5677_set_cif(machine->playback_fifo_cif,
+ params_channels(params), sample_size);
+
+ if ((int)machine->playback_fast_fifo_cif >= 0)
+ tegra_rt5677_set_cif(machine->playback_fast_fifo_cif,
+ params_channels(params), sample_size);
+ }
+
+ return 0;
+}
+
+static int tegra_bt_sco_hw_params(struct snd_pcm_substream *substream,
+ struct snd_pcm_hw_params *params)
+{
+ struct snd_soc_pcm_runtime *rtd = substream->private_data;
+ struct snd_soc_card *card = rtd->card;
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(card);
+ struct tegra_asoc_platform_data *pdata = machine->pdata;
+ int i2s_daifmt;
+ int err;
+
+ i2s_daifmt = SND_SOC_DAIFMT_NB_NF;
+ i2s_daifmt |= pdata->i2s_param[BT_SCO].is_i2s_master ?
+ SND_SOC_DAIFMT_CBS_CFS : SND_SOC_DAIFMT_CBM_CFM;
+
+ switch (pdata->i2s_param[BT_SCO].i2s_mode) {
+ case TEGRA_DAIFMT_I2S:
+ i2s_daifmt |= SND_SOC_DAIFMT_I2S;
+ break;
+ case TEGRA_DAIFMT_DSP_A:
+ i2s_daifmt |= SND_SOC_DAIFMT_DSP_A;
+ break;
+ case TEGRA_DAIFMT_DSP_B:
+ i2s_daifmt |= SND_SOC_DAIFMT_DSP_B;
+ break;
+ case TEGRA_DAIFMT_LEFT_J:
+ i2s_daifmt |= SND_SOC_DAIFMT_LEFT_J;
+ break;
+ case TEGRA_DAIFMT_RIGHT_J:
+ i2s_daifmt |= SND_SOC_DAIFMT_RIGHT_J;
+ break;
+ default:
+ dev_err(card->dev, "Can't configure i2s format\n");
+ return -EINVAL;
+ }
+
+ err = snd_soc_dai_set_fmt(rtd->cpu_dai, i2s_daifmt);
+ if (err < 0) {
+ dev_err(card->dev, "cpu_dai fmt not set\n");
+ return err;
+ }
+
+ return 0;
+}
+
+static int tegra_hw_free(struct snd_pcm_substream *substream)
+{
+ struct snd_soc_pcm_runtime *rtd = substream->private_data;
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(rtd->card);
+
+ tegra_asoc_utils_lock_clk_rate(&machine->util_data, 0);
+
+ return 0;
+}
+
+static struct snd_soc_ops tegra_rt5677_ops = {
+ .hw_params = tegra_rt5677_hw_params,
+ .hw_free = tegra_hw_free,
+ .startup = tegra_rt5677_startup,
+ .shutdown = tegra_rt5677_shutdown,
+};
+
+static struct snd_soc_ops tegra_rt5677_fe_pcm_ops = {
+ .startup = tegra_rt5677_fe_pcm_startup,
+ .shutdown = tegra_rt5677_fe_pcm_shutdown,
+ .trigger = tegra_rt5677_fe_pcm_trigger,
+};
+
+static struct snd_soc_compr_ops tegra_rt5677_fe_compr_ops = {
+ .startup = tegra_rt5677_fe_compr_ops_startup,
+ .shutdown = tegra_rt5677_fe_compr_ops_shutdown,
+ .trigger = tegra_rt5677_fe_compr_ops_trigger,
+};
+
+static struct snd_soc_ops tegra_rt5677_fe_fast_ops = {
+ .startup = tegra_rt5677_fe_fast_startup,
+ .shutdown = tegra_rt5677_fe_fast_shutdown,
+ .trigger = tegra_rt5677_fe_fast_trigger,
+};
+
+static struct snd_soc_ops tegra_rt5677_speaker_ops = {
+ .hw_params = tegra_speaker_hw_params,
+ .startup = tegra_rt5677_spk_startup,
+ .shutdown = tegra_rt5677_spk_shutdown,
+};
+
+static struct snd_soc_ops tegra_rt5677_bt_sco_ops = {
+ .hw_params = tegra_bt_sco_hw_params,
+ .startup = tegra_rt5677_startup,
+ .shutdown = tegra_rt5677_shutdown,
+};
+
+#define HTC_SOC_ENUM_EXT(xname, xhandler_get, xhandler_put) \
+{ .iface = SNDRV_CTL_ELEM_IFACE_MIXER, .name = xname, \
+ .info = snd_soc_info_volsw_ext, \
+ .get = xhandler_get, .put = xhandler_put, \
+ .private_value = 255 }
+
+static int tegra_rt5677_rt5506_gain_get(struct snd_kcontrol *kcontrol,
+ struct snd_ctl_elem_value *ucontrol)
+{
+ ucontrol->value.integer.value[0] = rt5506_get_gain();
+ return 0;
+}
+
+static int tegra_rt5677_rt5506_gain_set(struct snd_kcontrol *kcontrol,
+ struct snd_ctl_elem_value *ucontrol)
+{
+ pr_info("%s: 0x%x\n", __func__,
+ (unsigned int)ucontrol->value.integer.value[0]);
+ rt5506_set_gain((unsigned char)ucontrol->value.integer.value[0]);
+ return 0;
+}
+
+/* speaker single channel */
+enum speaker_state {
+ BIT_LR_CH = 0,
+ BIT_LEFT_CH = (1 << 0),
+ BIT_RIGHT_CH = (1 << 1),
+};
+
+static const char * const tegra_rt5677_speaker_test_mode[] = {
+ "LR", "Left", "Right",
+};
+
+static int speaker_test_mode;
+
+static const SOC_ENUM_SINGLE_DECL(tegra_rt5677_speaker_test_mode_enum, 0, 0,
+ tegra_rt5677_speaker_test_mode);
+
+static int tegra_rt5677_speaker_test_get(struct snd_kcontrol *kcontrol,
+ struct snd_ctl_elem_value *ucontrol)
+{
+ ucontrol->value.integer.value[0] = speaker_test_mode;
+ return 0;
+}
+
+static int tegra_rt5677_speaker_test_set(struct snd_kcontrol *kcontrol,
+ struct snd_ctl_elem_value *ucontrol)
+{
+ enum speaker_state state = BIT_LR_CH;
+
+ if (ucontrol->value.integer.value[0] == 1) {
+ state = BIT_LEFT_CH;
+ tfa9895_disable(1);
+ tfa9895l_disable(0);
+ } else if (ucontrol->value.integer.value[0] == 2) {
+ state = BIT_RIGHT_CH;
+ tfa9895_disable(0);
+ tfa9895l_disable(1);
+ } else {
+ state = BIT_LR_CH;
+ tfa9895_disable(0);
+ tfa9895l_disable(0);
+ }
+
+ speaker_test_mode = state;
+
+ pr_info("%s: tegra_rt5677_speaker_test_dev set to %d done\n",
+ __func__, state);
+
+ return 0;
+}
+
+/* digital mic bias */
+enum dmic_bias_state {
+ BIT_DMIC_BIAS_DISABLE = 0,
+ BIT_DMIC_BIAS_ENABLE = (1 << 0),
+};
+
+static const char * const tegra_rt5677_dmic_mode[] = {
+ "disable", "enable"
+};
+
+static int dmic_mode;
+
+static const SOC_ENUM_SINGLE_DECL(tegra_rt5677_dmic_mode_enum, 0, 0,
+ tegra_rt5677_dmic_mode);
+
+static int tegra_rt5677_dmic_get(struct snd_kcontrol *kcontrol,
+ struct snd_ctl_elem_value *ucontrol)
+{
+ ucontrol->value.integer.value[0] = dmic_mode;
+ return 0;
+}
+
+static int tegra_rt5677_dmic_set(struct snd_kcontrol *kcontrol,
+ struct snd_ctl_elem_value *ucontrol)
+{
+ enum dmic_bias_state state = BIT_DMIC_BIAS_DISABLE;
+
+ struct snd_soc_card *card = snd_kcontrol_chip(kcontrol);
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(card);
+ struct tegra_asoc_platform_data *pdata = machine->pdata;
+ int ret = 0;
+
+ pr_info("tegra_rt5677_dmic_set, %d\n", pdata->gpio_int_mic_en);
+
+ if (ucontrol->value.integer.value[0] == 0) {
+ state = BIT_DMIC_BIAS_DISABLE;
+ ret = gpio_direction_output(pdata->gpio_int_mic_en, 0);
+ if (ret)
+ pr_err("gpio_int_mic_en=0 fail,%d\n", ret);
+ else
+ pr_info("gpio_int_mic_en=0\n");
+ } else {
+ state = BIT_DMIC_BIAS_ENABLE;
+ ret = gpio_direction_output(pdata->gpio_int_mic_en, 1);
+ if (ret)
+ pr_err("gpio_int_mic_en=1 fail,%d\n", ret);
+ else
+ pr_info("gpio_int_mic_en=1\n");
+ }
+
+ dmic_mode = state;
+
+ pr_info("%s: tegra_rt5677_dmic_set set to %d done\n",
+ __func__, state);
+ return 0;
+}
+
+/* headset mic bias */
+enum amic_bias_state {
+ BIT_AMIC_BIAS_DISABLE = 0,
+ BIT_AMIC_BIAS_ENABLE = (1 << 0),
+};
+
+static const char * const tegra_rt5677_amic_test_mode[] = {
+ "disable", "enable"
+};
+
+static int amic_test_mode;
+
+static const SOC_ENUM_SINGLE_DECL(tegra_rt5677_amic_test_mode_enum, 0, 0,
+ tegra_rt5677_amic_test_mode);
+
+static int tegra_rt5677_amic_test_get(struct snd_kcontrol *kcontrol,
+ struct snd_ctl_elem_value *ucontrol)
+{
+ ucontrol->value.integer.value[0] = amic_test_mode;
+ return 0;
+}
+
+static int tegra_rt5677_amic_test_set(struct snd_kcontrol *kcontrol,
+ struct snd_ctl_elem_value *ucontrol)
+{
+ enum amic_bias_state state = BIT_AMIC_BIAS_DISABLE;
+
+ struct snd_soc_card *card = snd_kcontrol_chip(kcontrol);
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(card);
+ struct tegra_asoc_platform_data *pdata = machine->pdata;
+ int ret = 0;
+
+ pr_info("tegra_rt5677_amic_test_set, %d\n", pdata->gpio_ext_mic_en);
+
+ if (gpio_is_valid(pdata->gpio_ext_mic_en)) {
+ pr_info("gpio_ext_mic_en %d is valid\n",
+ pdata->gpio_ext_mic_en);
+ ret = gpio_request(pdata->gpio_ext_mic_en, "ext-mic-enable");
+ if (ret) {
+ pr_err("Fail gpio_request gpio_ext_mic_en, %d\n",
+ ret);
+ return ret;
+ }
+ } else {
+ pr_err("gpio_ext_mic_en %d is invalid\n",
+ pdata->gpio_ext_mic_en);
+ return -EINVAL;
+ }
+
+ if (ucontrol->value.integer.value[0] == 0) {
+ state = BIT_AMIC_BIAS_DISABLE;
+ ret = gpio_direction_output(pdata->gpio_ext_mic_en, 0);
+ if (ret)
+ pr_err("gpio_ext_mic_en=0 fail,%d\n", ret);
+ else
+ pr_info("gpio_ext_mic_en=0\n");
+ } else {
+ state = BIT_AMIC_BIAS_ENABLE;
+ ret = gpio_direction_output(pdata->gpio_ext_mic_en, 1);
+ if (ret)
+ pr_err("gpio_ext_mic_en=1 fail,%d\n", ret);
+ else
+ pr_info("gpio_ext_mic_en=1\n");
+ }
+
+ gpio_free(pdata->gpio_ext_mic_en);
+
+ amic_test_mode = state;
+
+ pr_info("%s: tegra_rt5677_amic_test_dev set to %d done\n",
+ __func__, state);
+
+ return 0;
+}
+
+/* rt5506 dump register */
+enum rt5506_dump_state {
+ BIT_RT5506_DUMP_DISABLE = 0,
+ BIT_RT5506_DUMP_ENABLE = (1 << 0),
+};
+
+static const char * const tegra_rt5677_rt5506_dump_mode[] = {
+ "disable", "enable"
+};
+
+static int rt5506_dump_mode;
+
+static const SOC_ENUM_SINGLE_DECL(tegra_rt5677_rt5506_dump_enum, 0, 0,
+ tegra_rt5677_rt5506_dump_mode);
+
+static int tegra_rt5677_rt5506_dump_get(struct snd_kcontrol *kcontrol,
+ struct snd_ctl_elem_value *ucontrol)
+{
+ ucontrol->value.integer.value[0] = rt5506_dump_mode;
+ return 0;
+}
+
+static int tegra_rt5677_rt5506_dump_set(struct snd_kcontrol *kcontrol,
+ struct snd_ctl_elem_value *ucontrol)
+{
+ enum rt5506_dump_state state = BIT_RT5506_DUMP_DISABLE;
+
+ int ret = 0;
+
+ if (ucontrol->value.integer.value[0] == 0) {
+ state = BIT_RT5506_DUMP_DISABLE;
+ } else {
+ state = BIT_RT5506_DUMP_ENABLE;
+ pr_info("tegra_rt5677_rt5506_dump_set, rt5506_dump_reg()\n");
+ rt5506_dump_reg();
+ }
+
+ rt5506_dump_mode = state;
+
+ pr_info("%s: tegra_rt5677_rt5506_dump_dev set to %d done\n",
+ __func__, state);
+
+ return ret;
+}
+
+static int tegra_rt5677_event_headphone_jack(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *k, int event)
+{
+ struct snd_soc_dapm_context *dapm = w->dapm;
+ struct snd_soc_card *card = dapm->card;
+
+ dev_dbg(card->dev, "tegra_rt5677_event_headphone_jack (%d)\n",
+ event);
+
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+ /* set hp_en low and usleep 10 ms for charging */
+ set_rt5506_hp_en(0);
+ usleep_range(10000, 10000);
+ dev_dbg(card->dev, "%s: set_rt5506_amp(1,0)\n", __func__);
+ /*msleep(900); depop*/
+ set_rt5506_amp(1, 0);
+ break;
+ case SND_SOC_DAPM_PRE_PMD:
+ dev_dbg(card->dev, "%s: set_rt5506_amp(0,0)\n", __func__);
+ set_rt5506_amp(0, 0);
+ break;
+
+ default:
+ return 0;
+ }
+
+ return 0;
+}
+
+static int tegra_rt5677_event_mic_jack(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *k, int event)
+{
+ struct snd_soc_dapm_context *dapm = w->dapm;
+ struct snd_soc_card *card = dapm->card;
+
+ dev_dbg(card->dev, "tegra_rt5677_event_mic_jack (%d)\n",
+ event);
+
+ return 0;
+}
+
+static int tegra_rt5677_event_int_mic(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *k, int event)
+{
+ return 0;
+}
+
+static const struct snd_soc_dapm_widget flounder_dapm_widgets[] = {
+ SND_SOC_DAPM_HP("Headphone Jack", tegra_rt5677_event_headphone_jack),
+ SND_SOC_DAPM_MIC("Mic Jack", tegra_rt5677_event_mic_jack),
+ SND_SOC_DAPM_MIC("Int Mic", tegra_rt5677_event_int_mic),
+};
+
+static const struct snd_soc_dapm_route flounder_audio_map[] = {
+ {"Headphone Jack", NULL, "LOUT1"},
+ {"Headphone Jack", NULL, "LOUT2"},
+ {"IN2P", NULL, "Mic Jack"},
+ {"IN2N", NULL, "Mic Jack"},
+ {"DMIC L1", NULL, "Int Mic"},
+ {"DMIC R1", NULL, "Int Mic"},
+ {"DMIC L2", NULL, "Int Mic"},
+ {"DMIC R2", NULL, "Int Mic"},
+ /* AHUB BE connections */
+ {"DAM VMixer", NULL, "fast-pcm-playback"},
+ {"DAM VMixer", NULL, "offload-compr-playback"},
+ {"AIF1 Playback", NULL, "I2S1_OUT"},
+ {"Playback", NULL, "I2S2_OUT"},
+};
+
+static const struct snd_kcontrol_new flounder_controls[] = {
+ SOC_DAPM_PIN_SWITCH("Headphone Jack"),
+ SOC_DAPM_PIN_SWITCH("Mic Jack"),
+ SOC_DAPM_PIN_SWITCH("Int Mic"),
+
+ HTC_SOC_ENUM_EXT("Headset rt5506 Volume",
+ tegra_rt5677_rt5506_gain_get, tegra_rt5677_rt5506_gain_set),
+
+ SOC_ENUM_EXT("Speaker Channel Switch",
+ tegra_rt5677_speaker_test_mode_enum,
+ tegra_rt5677_speaker_test_get, tegra_rt5677_speaker_test_set),
+
+ SOC_ENUM_EXT("AMIC Test Switch", tegra_rt5677_amic_test_mode_enum,
+ tegra_rt5677_amic_test_get, tegra_rt5677_amic_test_set),
+
+ SOC_ENUM_EXT("DMIC BIAS Switch", tegra_rt5677_dmic_mode_enum,
+ tegra_rt5677_dmic_get, tegra_rt5677_dmic_set),
+
+ SOC_ENUM_EXT("RT5506 Dump Register", tegra_rt5677_rt5506_dump_enum,
+ tegra_rt5677_rt5506_dump_get, tegra_rt5677_rt5506_dump_set),
+};
+
+static int tegra_rt5677_init(struct snd_soc_pcm_runtime *rtd)
+{
+ struct snd_soc_codec *codec = rtd->codec;
+ struct snd_soc_dapm_context *dapm = &codec->dapm;
+ struct snd_soc_card *card = codec->card;
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(card);
+ int ret;
+
+ ret = tegra_asoc_utils_register_ctls(&machine->util_data);
+ if (ret < 0)
+ return ret;
+
+ /* FIXME: Calculate automatically based on DAPM routes? */
+ snd_soc_dapm_nc_pin(dapm, "LOUT1");
+ snd_soc_dapm_nc_pin(dapm, "LOUT2");
+ snd_soc_dapm_sync(dapm);
+ machine->codec = codec;
+
+ return 0;
+}
+
+static int tegra_rt5677_be_init(struct snd_soc_pcm_runtime *rtd)
+{
+ struct snd_soc_dai *dai = rtd->cpu_dai;
+ struct tegra30_i2s *i2s = snd_soc_dai_get_drvdata(dai);
+
+ i2s->allocate_pb_fifo_cif = false;
+
+ return 0;
+}
+
+static int tegra_offload_hw_params_be_fixup(struct snd_soc_pcm_runtime *rtd,
+ struct snd_pcm_hw_params *params)
+{
+ struct snd_interval *snd_rate = hw_param_interval(params,
+ SNDRV_PCM_HW_PARAM_RATE);
+ struct snd_interval *snd_channels = hw_param_interval(params,
+ SNDRV_PCM_HW_PARAM_CHANNELS);
+
+ snd_rate->min = snd_rate->max = 48000;
+ snd_channels->min = snd_channels->max = 2;
+
+ snd_mask_set(&params->masks[SNDRV_PCM_HW_PARAM_FORMAT -
+ SNDRV_PCM_HW_PARAM_FIRST_MASK],
+ SNDRV_PCM_FORMAT_S16_LE);
+
+ pr_debug("%s::%d %d %d\n", __func__, params_rate(params),
+ params_channels(params), params_format(params));
+ return 0;
+}
+
+static struct snd_soc_dai_link tegra_rt5677_dai[NUM_DAI_LINKS] = {
+ [DAI_LINK_HIFI] = {
+ .name = "rt5677",
+ .stream_name = "rt5677 PCM",
+ .codec_name = "rt5677.1-002d",
+ .platform_name = "tegra30-i2s.1",
+ .cpu_dai_name = "tegra30-i2s.1",
+ .codec_dai_name = "rt5677-aif1",
+ .init = tegra_rt5677_init,
+ .ops = &tegra_rt5677_ops,
+ },
+
+ [DAI_LINK_SPEAKER] = {
+ .name = "SPEAKER",
+ .stream_name = "SPEAKER PCM",
+ .codec_name = "spdif-dit.0",
+ .platform_name = "tegra30-i2s.2",
+ .cpu_dai_name = "tegra30-i2s.2",
+ .codec_dai_name = "dit-hifi",
+ .ops = &tegra_rt5677_speaker_ops,
+ },
+
+ [DAI_LINK_BTSCO] = {
+ .name = "BT-SCO",
+ .stream_name = "BT SCO PCM",
+ .codec_name = "spdif-dit.1",
+ .platform_name = "tegra30-i2s.3",
+ .cpu_dai_name = "tegra30-i2s.3",
+ .codec_dai_name = "dit-hifi",
+ .ops = &tegra_rt5677_bt_sco_ops,
+ },
+
+ [DAI_LINK_MI2S_DUMMY] = {
+ .name = "MI2S DUMMY",
+ .stream_name = "MI2S DUMMY PCM",
+ .codec_name = "spdif-dit.0",
+ .platform_name = "tegra30-i2s.2",
+ .cpu_dai_name = "tegra30-i2s.2",
+ .codec_dai_name = "dit-hifi",
+ .ops = &tegra_rt5677_speaker_ops,
+ },
+ [DAI_LINK_PCM_OFFLOAD_FE] = {
+ .name = "fe-offload-pcm",
+ .stream_name = "offload-pcm",
+
+ .platform_name = "tegra-offload",
+ .cpu_dai_name = "tegra-offload-pcm",
+
+ .codec_dai_name = "snd-soc-dummy-dai",
+ .codec_name = "snd-soc-dummy",
+ .ops = &tegra_rt5677_fe_pcm_ops,
+ .dynamic = 1,
+ },
+ [DAI_LINK_COMPR_OFFLOAD_FE] = {
+ .name = "fe-offload-compr",
+ .stream_name = "offload-compr",
+
+ .platform_name = "tegra-offload",
+ .cpu_dai_name = "tegra-offload-compr",
+
+ .codec_dai_name = "snd-soc-dummy-dai",
+ .codec_name = "snd-soc-dummy",
+ .compr_ops = &tegra_rt5677_fe_compr_ops,
+ .dynamic = 1,
+ },
+ [DAI_LINK_PCM_OFFLOAD_CAPTURE_FE] = {
+ .name = "fe-offload-pcm-capture",
+ .stream_name = "offload-pcm-capture",
+
+ .platform_name = "tegra-offload",
+ .cpu_dai_name = "tegra-offload-pcm",
+
+ .codec_dai_name = "snd-soc-dummy-dai",
+ .codec_name = "snd-soc-dummy",
+
+ },
+ [DAI_LINK_FAST_FE] = {
+ .name = "fe-fast-pcm",
+ .stream_name = "fast-pcm",
+ .platform_name = "tegra-pcm-audio",
+ .cpu_dai_name = "tegra-fast-pcm",
+ .codec_dai_name = "snd-soc-dummy-dai",
+ .codec_name = "snd-soc-dummy",
+ .ops = &tegra_rt5677_fe_fast_ops,
+ .dynamic = 1,
+ },
+ [DAI_LINK_I2S_OFFLOAD_BE] = {
+ .name = "be-offload-audio-codec",
+ .stream_name = "offload-audio-pcm",
+ .codec_name = "rt5677.1-002d",
+ .platform_name = "tegra30-i2s.1",
+ .cpu_dai_name = "tegra30-i2s.1",
+ .codec_dai_name = "rt5677-aif1",
+ .init = tegra_rt5677_be_init,
+ .ops = &tegra_rt5677_ops,
+
+ .no_pcm = 1,
+
+ .be_id = DAI_LINK_I2S_OFFLOAD_BE,
+ .ignore_pmdown_time = 1,
+ .be_hw_params_fixup = tegra_offload_hw_params_be_fixup,
+ },
+ [DAI_LINK_I2S_OFFLOAD_SPEAKER_BE] = {
+ .name = "be-offload-audio-speaker",
+ .stream_name = "offload-audio-pcm-spk",
+ .codec_name = "spdif-dit.0",
+ .platform_name = "tegra30-i2s.2",
+ .cpu_dai_name = "tegra30-i2s.2",
+ .codec_dai_name = "dit-hifi",
+ .init = tegra_rt5677_be_init,
+ .ops = &tegra_rt5677_speaker_ops,
+
+ .no_pcm = 1,
+
+ .be_id = DAI_LINK_I2S_OFFLOAD_SPEAKER_BE,
+ .ignore_pmdown_time = 1,
+ .be_hw_params_fixup = tegra_offload_hw_params_be_fixup,
+ },
+};
+
+void mclk_enable(struct tegra_rt5677 *machine, bool on)
+{
+ struct tegra_asoc_platform_data *pdata = machine->pdata;
+ int ret;
+
+ if (on && !machine->clock_enabled) {
+ gpio_free(pdata->codec_mclk.id);
+ pr_debug("%s: gpio_free for gpio[%d] %s\n",
+ __func__, pdata->codec_mclk.id, pdata->codec_mclk.name);
+ machine->clock_enabled = 1;
+ tegra_asoc_utils_clk_enable(&machine->util_data);
+ } else if (!on && machine->clock_enabled) {
+ machine->clock_enabled = 0;
+ tegra_asoc_utils_clk_disable(&machine->util_data);
+ ret = gpio_request(pdata->codec_mclk.id,
+ pdata->codec_mclk.name);
+ if (ret) {
+ pr_err("Fail gpio_request codec_mclk, %d\n",
+ ret);
+ return;
+ }
+ gpio_direction_output(pdata->codec_mclk.id, 0);
+ pr_debug("%s: gpio_request for gpio[%d] %s, return %d\n",
+ __func__, pdata->codec_mclk.id, pdata->codec_mclk.name, ret);
+ }
+}
+
+static int tegra_rt5677_suspend_post(struct snd_soc_card *card)
+{
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(card);
+ int i, suspend_allowed = 1;
+
+ /* In voice call we ignore suspend, so check for that */
+ for (i = 0; i < machine->pcard->num_links; i++) {
+ if (machine->pcard->dai_link[i].ignore_suspend) {
+ suspend_allowed = 0;
+ break;
+ }
+ }
+
+ if (suspend_allowed) {
+ /* This may be required if DAPM set_bias_level is not called in
+ some cases, possibly due to a wrong DAPM map */
+ mutex_lock(&machine->rt5677_lock);
+ if (machine->clock_enabled)
+ mclk_enable(machine, 0);
+ mutex_unlock(&machine->rt5677_lock);
+ /* TODO: disable audio regulators */
+ }
+
+ return 0;
+}
+
+static int tegra_rt5677_resume_pre(struct snd_soc_card *card)
+{
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(card);
+ int i, suspend_allowed = 1;
+ /* In voice call we ignore suspend, so check for that */
+ for (i = 0; i < machine->pcard->num_links; i++) {
+ if (machine->pcard->dai_link[i].ignore_suspend) {
+ suspend_allowed = 0;
+ break;
+ }
+ }
+
+ if (suspend_allowed) {
+ /* This may be required if DAPM set_bias_level is not called in
+ some cases, possibly due to a wrong DAPM map */
+ mutex_lock(&machine->rt5677_lock);
+ if (!machine->clock_enabled &&
+ machine->bias_level != SND_SOC_BIAS_OFF) {
+ /* mclk_enable() already enables the utils clock; enabling it
+ again here would leave an unbalanced reference on suspend */
+ mclk_enable(machine, 1);
+ __set_rt5677_power(machine, true, true);
+ }
+ mutex_unlock(&machine->rt5677_lock);
+ /* TODO: enable audio regulators */
+ }
+
+ return 0;
+}
+
+static int tegra_rt5677_set_bias_level(struct snd_soc_card *card,
+ struct snd_soc_dapm_context *dapm, enum snd_soc_bias_level level)
+{
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(card);
+
+ cancel_delayed_work_sync(&machine->power_work);
+ mutex_lock(&machine->rt5677_lock);
+ if (machine->bias_level == SND_SOC_BIAS_OFF &&
+ level != SND_SOC_BIAS_OFF && (!machine->clock_enabled)) {
+ mclk_enable(machine, 1);
+ machine->bias_level = level;
+ __set_rt5677_power(machine, true, false);
+ }
+ mutex_unlock(&machine->rt5677_lock);
+
+ return 0;
+}
+
+static int tegra_rt5677_set_bias_level_post(struct snd_soc_card *card,
+ struct snd_soc_dapm_context *dapm, enum snd_soc_bias_level level)
+{
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(card);
+ struct snd_soc_codec *codec = NULL;
+ struct rt5677_priv *rt5677 = NULL;
+ int i = 0;
+
+ if (machine->codec) {
+ codec = machine->codec;
+ rt5677 = snd_soc_codec_get_drvdata(codec);
+ }
+
+ mutex_lock(&machine->rt5677_lock);
+
+ for (i = 0; i < card->num_rtd; i++) {
+ codec = card->rtd[i].codec;
+ if (codec && codec->active)
+ goto exit;
+ }
+
+ machine->bias_level = level;
+exit:
+ mutex_unlock(&machine->rt5677_lock);
+
+ return 0;
+}
+
+static struct snd_soc_card snd_soc_tegra_rt5677 = {
+ .name = "tegra-rt5677",
+ .owner = THIS_MODULE,
+ .dai_link = tegra_rt5677_dai,
+ .num_links = ARRAY_SIZE(tegra_rt5677_dai),
+ .suspend_post = tegra_rt5677_suspend_post,
+ .resume_pre = tegra_rt5677_resume_pre,
+ .set_bias_level = tegra_rt5677_set_bias_level,
+ .set_bias_level_post = tegra_rt5677_set_bias_level_post,
+ .controls = flounder_controls,
+ .num_controls = ARRAY_SIZE(flounder_controls),
+ .dapm_widgets = flounder_dapm_widgets,
+ .num_dapm_widgets = ARRAY_SIZE(flounder_dapm_widgets),
+ .dapm_routes = flounder_audio_map,
+ .num_dapm_routes = ARRAY_SIZE(flounder_audio_map),
+ .fully_routed = true,
+};
+
+void __set_rt5677_power(struct tegra_rt5677 *machine, bool enable, bool hp_depop)
+{
+ struct tegra_asoc_platform_data *pdata = machine->pdata;
+ int ret = 0;
+ static bool status = false;
+
+ if (enable && !status) {
+ /* set hp_en high to depop for headset path */
+ if (hp_depop)
+ set_rt5506_hp_en(1);
+ pr_info("tegra_rt5677 power_on\n");
+ if (!machine->clock_enabled) {
+ pr_debug("%s: call mclk_enable(true)\n", __func__);
+ mclk_enable(machine, 1);
+ }
+ /*V_IO_1V8*/
+ if (gpio_is_valid(pdata->gpio_ldo1_en)) {
+ pr_debug("gpio_ldo1_en %d is valid\n", pdata->gpio_ldo1_en);
+ ret = gpio_request(pdata->gpio_ldo1_en, "rt5677-ldo-enable");
+ if (ret) {
+ pr_err("Fail gpio_request gpio_ldo1_en, %d\n", ret);
+ } else {
+ ret = gpio_direction_output(pdata->gpio_ldo1_en, 1);
+ if (ret) {
+ pr_err("gpio_ldo1_en=1 fail,%d\n", ret);
+ gpio_free(pdata->gpio_ldo1_en);
+ } else
+ pr_debug("gpio_ldo1_en=1\n");
+ }
+ } else {
+ pr_err("gpio_ldo1_en %d is invalid\n", pdata->gpio_ldo1_en);
+ }
+
+ usleep_range(1000, 2000);
+
+ /*V_AUD_1V2*/
+ if (IS_ERR(rt5677_reg))
+ pr_err("Fail regulator_get v_ldo2\n");
+ else {
+ ret = regulator_enable(rt5677_reg);
+ if (ret)
+ pr_err("Fail regulator_enable v_ldo2, %d\n", ret);
+ else
+ pr_debug("tegra_rt5677_reg v_ldo2 is enabled\n");
+ }
+
+ usleep_range(1000, 2000);
+
+ /*V_AUD_1V8*/
+ if (gpio_is_valid(pdata->gpio_int_mic_en)) {
+ ret = gpio_direction_output(pdata->gpio_int_mic_en, 1);
+ if (ret)
+ pr_err("Turn on gpio_int_mic_en fail,%d\n", ret);
+ else
+ pr_debug("Turn on gpio_int_mic_en\n");
+ } else {
+ pr_err("gpio_int_mic_en %d is invalid\n", pdata->gpio_int_mic_en);
+ }
+
+ usleep_range(1000, 2000);
+
+ /*AUD_ALC5677_RESET#*/
+ if (gpio_is_valid(pdata->gpio_reset)) {
+ ret = gpio_direction_output(pdata->gpio_reset, 1);
+ if (ret)
+ pr_err("Turn on gpio_reset fail,%d\n", ret);
+ else
+ pr_debug("Turn on gpio_reset\n");
+ } else {
+ pr_err("gpio_reset %d is invalid\n", pdata->gpio_reset);
+ }
+ status = true;
+ } else if (!enable && status) {
+ pr_info("tegra_rt5677 power_off\n");
+
+ /*AUD_ALC5677_RESET#*/
+ if (gpio_is_valid(pdata->gpio_reset)) {
+ ret = gpio_direction_output(pdata->gpio_reset, 0);
+ if (ret)
+ pr_err("Turn off gpio_reset fail,%d\n", ret);
+ else
+ pr_debug("Turn off gpio_reset\n");
+ } else {
+ pr_err("gpio_reset %d is invalid\n", pdata->gpio_reset);
+ }
+
+ /*V_AUD_1V8*/
+ if (gpio_is_valid(pdata->gpio_int_mic_en)) {
+ ret = gpio_direction_output(pdata->gpio_int_mic_en, 0);
+ if (ret)
+ pr_err("Turn off gpio_int_mic_en fail,%d\n", ret);
+ else
+ pr_debug("Turn off gpio_int_mic_en\n");
+ } else {
+ pr_err("gpio_int_mic_en %d is invalid\n", pdata->gpio_int_mic_en);
+ }
+
+ usleep_range(1000, 2000);
+
+ /*V_AUD_1V2*/
+ if (IS_ERR(rt5677_reg))
+ pr_err("Fail regulator_get v_ldo2\n");
+ else {
+ ret = regulator_disable(rt5677_reg);
+ if (ret)
+ pr_err("Fail regulator_disable v_ldo2, %d\n", ret);
+ else
+ pr_debug("tegra_rt5677_reg v_ldo2 is disabled\n");
+ }
+
+ usleep_range(1000, 2000);
+
+ /*V_IO_1V8*/
+ if (gpio_is_valid(pdata->gpio_ldo1_en)) {
+ pr_debug("gpio_ldo1_en %d is valid\n", pdata->gpio_ldo1_en);
+ ret = gpio_direction_output(pdata->gpio_ldo1_en, 0);
+ if (ret)
+ pr_err("gpio_ldo1_en=0 fail,%d\n", ret);
+ else
+ pr_debug("gpio_ldo1_en=0\n");
+
+ gpio_free(pdata->gpio_ldo1_en);
+ } else {
+ pr_err("gpio_ldo1_en %d is invalid\n", pdata->gpio_ldo1_en);
+ }
+
+ /* set hp_en low to prevent power leakage */
+ set_rt5506_hp_en(0);
+ status = false;
+ if (machine->clock_enabled)
+ mclk_enable(machine, 0);
+ machine->bias_level = SND_SOC_BIAS_OFF;
+ }
+}
+
+void set_rt5677_power_locked(struct tegra_rt5677 *machine, bool enable, bool hp_depop)
+{
+ mutex_lock(&machine->rt5677_lock);
+ __set_rt5677_power(machine, enable, hp_depop);
+ mutex_unlock(&machine->rt5677_lock);
+}
+
+void set_rt5677_power_extern(bool enable)
+{
+ struct snd_soc_card *card = &snd_soc_tegra_rt5677;
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(card);
+
+ set_rt5677_power_locked(machine, enable, false);
+}
+EXPORT_SYMBOL(set_rt5677_power_extern);
+
+static void trgra_do_power_work(struct work_struct *work)
+{
+ struct snd_soc_card *card = &snd_soc_tegra_rt5677;
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(card);
+ mutex_lock(&machine->rt5677_lock);
+ __set_rt5677_power(machine, false, false);
+ mutex_unlock(&machine->rt5677_lock);
+}
+
+static int tegra_rt5677_driver_probe(struct platform_device *pdev)
+{
+ struct snd_soc_card *card = &snd_soc_tegra_rt5677;
+ struct device_node *np = pdev->dev.of_node;
+ struct tegra_rt5677 *machine;
+ struct tegra_asoc_platform_data *pdata = NULL;
+ int ret = 0;
+ int codec_id;
+ int rt5677_irq = 0;
+ u32 val32[7];
+
+ if (!pdev->dev.platform_data && !pdev->dev.of_node) {
+ dev_err(&pdev->dev, "No platform data supplied\n");
+ return -EINVAL;
+ }
+ if (pdev->dev.platform_data) {
+ pdata = pdev->dev.platform_data;
+ } else if (np) {
+ pdata = kzalloc(sizeof(struct tegra_asoc_platform_data),
+ GFP_KERNEL);
+ if (!pdata) {
+ dev_err(&pdev->dev, "Can't allocate tegra_asoc_platform_data struct\n");
+ return -ENOMEM;
+ }
+
+ of_property_read_string(np, "nvidia,codec_name",
+ &pdata->codec_name);
+
+ of_property_read_string(np, "nvidia,codec_dai_name",
+ &pdata->codec_dai_name);
+
+ pdata->gpio_ldo1_en = of_get_named_gpio(np,
+ "nvidia,ldo-gpios", 0);
+ if (pdata->gpio_ldo1_en < 0)
+ dev_warn(&pdev->dev, "Failed to get LDO_EN GPIO\n");
+
+ pdata->gpio_hp_det = of_get_named_gpio(np,
+ "nvidia,hp-det-gpios", 0);
+ if (pdata->gpio_hp_det < 0)
+ dev_warn(&pdev->dev, "Failed to get HP Det GPIO\n");
+
+ pdata->gpio_codec1 = pdata->gpio_codec2 = pdata->gpio_codec3 =
+ pdata->gpio_spkr_en = pdata->gpio_hp_mute =
+ pdata->gpio_int_mic_en = pdata->gpio_ext_mic_en = -1;
+
+ of_property_read_u32_array(np, "nvidia,i2s-param-hifi", val32,
+ ARRAY_SIZE(val32));
+ pdata->i2s_param[HIFI_CODEC].audio_port_id = (int)val32[0];
+ pdata->i2s_param[HIFI_CODEC].is_i2s_master = (int)val32[1];
+ pdata->i2s_param[HIFI_CODEC].i2s_mode = (int)val32[2];
+
+ of_property_read_u32_array(np, "nvidia,i2s-param-bt", val32,
+ ARRAY_SIZE(val32));
+ pdata->i2s_param[BT_SCO].audio_port_id = (int)val32[0];
+ pdata->i2s_param[BT_SCO].is_i2s_master = (int)val32[1];
+ pdata->i2s_param[BT_SCO].i2s_mode = (int)val32[2];
+ }
+
+ if (!pdata) {
+ dev_err(&pdev->dev, "No platform data supplied\n");
+ return -EINVAL;
+ }
+
+ if (pdata->codec_name)
+ card->dai_link->codec_name = pdata->codec_name;
+
+ if (pdata->codec_dai_name)
+ card->dai_link->codec_dai_name = pdata->codec_dai_name;
+
+ machine = kzalloc(sizeof(struct tegra_rt5677), GFP_KERNEL);
+ if (!machine) {
+ dev_err(&pdev->dev, "Can't allocate tegra_rt5677 struct\n");
+ if (np)
+ kfree(pdata);
+ return -ENOMEM;
+ }
+
+ machine->pdata = pdata;
+ machine->pcard = card;
+ machine->bias_level = SND_SOC_BIAS_STANDBY;
+
+ INIT_DELAYED_WORK(&machine->power_work, trgra_do_power_work);
+
+ /*V_IO_1V8*/
+ if (gpio_is_valid(pdata->gpio_ldo1_en)) {
+ dev_dbg(&pdev->dev, "gpio_ldo1_en %d is valid\n",
+ pdata->gpio_ldo1_en);
+ ret = gpio_request(pdata->gpio_ldo1_en, "rt5677-ldo-enable");
+ if (ret) {
+ dev_err(&pdev->dev, "Fail gpio_request gpio_ldo1_en, %d\n",
+ ret);
+ goto err_free_machine;
+ } else {
+ ret = gpio_direction_output(pdata->gpio_ldo1_en, 1);
+ if (ret) {
+ dev_err(&pdev->dev,
+ "gpio_ldo1_en=1 fail,%d\n", ret);
+ gpio_free(pdata->gpio_ldo1_en);
+ goto err_free_machine;
+ } else
+ dev_dbg(&pdev->dev, "gpio_ldo1_en=1\n");
+ }
+ } else {
+ dev_err(&pdev->dev, "gpio_ldo1_en %d is invalid\n",
+ pdata->gpio_ldo1_en);
+ }
+
+ usleep_range(1000, 2000);
+
+ INIT_WORK(&machine->hotword_work, tegra_do_hotword_work);
+
+ rt5677_reg = regulator_get(&pdev->dev, "v_ldo2");
+
+ if (gpio_is_valid(pdata->gpio_int_mic_en)) {
+ ret = gpio_request(pdata->gpio_int_mic_en, "int-mic-enable");
+ if (ret) {
+ dev_err(&pdev->dev, "Fail gpio_request gpio_int_mic_en, %d\n", ret);
+ goto err_free_machine;
+ }
+ } else {
+ dev_err(&pdev->dev, "gpio_int_mic_en %d is invalid\n", pdata->gpio_int_mic_en);
+ }
+
+ /*AUD_ALC5677_RESET*/
+ if (gpio_is_valid(pdata->gpio_reset)) {
+ ret = gpio_request(pdata->gpio_reset, "rt5677-reset");
+ if (ret) {
+ dev_err(&pdev->dev, "Fail gpio_request gpio_reset, %d\n", ret);
+ goto err_free_machine;
+ }
+ } else {
+ dev_err(&pdev->dev, "gpio_reset %d is invalid\n", pdata->gpio_reset);
+ }
+
+ usleep_range(1000, 2000);
+ mutex_init(&machine->rt5677_lock);
+ mutex_init(&machine->spk_amp_lock);
+
+ machine->clock_enabled = 1;
+ ret = tegra_asoc_utils_init(&machine->util_data, &pdev->dev, card);
+ if (ret)
+ goto err_free_machine;
+ usleep_range(500, 1500);
+
+ set_rt5677_power_locked(machine, true, false);
+ usleep_range(500, 1500);
+
+ if (gpio_is_valid(pdata->gpio_irq1)) {
+ dev_dbg(&pdev->dev, "gpio_irq1 %d is valid\n",
+ pdata->gpio_irq1);
+ ret = gpio_request(pdata->gpio_irq1, "rt5677-irq");
+ if (ret) {
+ dev_err(&pdev->dev, "Fail gpio_request gpio_irq1, %d\n",
+ ret);
+ goto err_free_machine;
+ }
+
+ ret = gpio_direction_input(pdata->gpio_irq1);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "Fail gpio_direction_input gpio_irq1, %d\n",
+ ret);
+ goto err_free_machine;
+ }
+
+ rt5677_irq = gpio_to_irq(pdata->gpio_irq1);
+ if (rt5677_irq < 0) {
+ ret = rt5677_irq;
+ dev_err(&pdev->dev, "Fail gpio_to_irq gpio_irq1, %d\n",
+ ret);
+ goto err_free_machine;
+ }
+
+ ret = request_irq(rt5677_irq, detect_rt5677_irq_handler,
+ IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
+ "RT5677_IRQ", machine);
+ if (ret) {
+ dev_err(&pdev->dev, "request_irq rt5677_irq failed, %d\n",
+ ret);
+ goto err_free_machine;
+ } else {
+ dev_dbg(&pdev->dev, "request_irq rt5677_irq ok\n");
+ enable_irq_wake(rt5677_irq);
+ }
+ } else {
+ dev_err(&pdev->dev, "gpio_irq1 %d is invalid\n",
+ pdata->gpio_irq1);
+ }
+
+ card->dev = &pdev->dev;
+ platform_set_drvdata(pdev, card);
+ snd_soc_card_set_drvdata(card, machine);
+
+#ifndef CONFIG_ARCH_TEGRA_2x_SOC
+ codec_id = pdata->i2s_param[HIFI_CODEC].audio_port_id;
+ tegra_rt5677_dai[DAI_LINK_HIFI].cpu_dai_name =
+ tegra_rt5677_i2s_dai_name[codec_id];
+ tegra_rt5677_dai[DAI_LINK_HIFI].platform_name =
+ tegra_rt5677_i2s_dai_name[codec_id];
+ tegra_rt5677_dai[DAI_LINK_I2S_OFFLOAD_BE].cpu_dai_name =
+ tegra_rt5677_i2s_dai_name[codec_id];
+
+ codec_id = pdata->i2s_param[SPEAKER].audio_port_id;
+ tegra_rt5677_dai[DAI_LINK_SPEAKER].cpu_dai_name =
+ tegra_rt5677_i2s_dai_name[codec_id];
+ tegra_rt5677_dai[DAI_LINK_SPEAKER].platform_name =
+ tegra_rt5677_i2s_dai_name[codec_id];
+
+ tegra_rt5677_dai[DAI_LINK_I2S_OFFLOAD_SPEAKER_BE].cpu_dai_name =
+ tegra_rt5677_i2s_dai_name[codec_id];
+
+ tegra_rt5677_dai[DAI_LINK_MI2S_DUMMY].cpu_dai_name =
+ tegra_rt5677_i2s_dai_name[codec_id];
+ tegra_rt5677_dai[DAI_LINK_MI2S_DUMMY].platform_name =
+ tegra_rt5677_i2s_dai_name[codec_id];
+
+ codec_id = pdata->i2s_param[BT_SCO].audio_port_id;
+ tegra_rt5677_dai[DAI_LINK_BTSCO].cpu_dai_name =
+ tegra_rt5677_i2s_dai_name[codec_id];
+ tegra_rt5677_dai[DAI_LINK_BTSCO].platform_name =
+ tegra_rt5677_i2s_dai_name[codec_id];
+#endif
+
+ card->dapm.idle_bias_off = 1;
+ ret = snd_soc_register_card(card);
+ if (ret) {
+ dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n",
+ ret);
+ goto err_unregister_switch;
+ }
+
+ if (!card->instantiated) {
+ ret = -ENODEV;
+ dev_err(&pdev->dev, "sound card not instantiated (%d)\n",
+ ret);
+ goto err_unregister_card;
+ }
+
+#ifndef CONFIG_ARCH_TEGRA_2x_SOC
+ ret = tegra_asoc_utils_set_parent(&machine->util_data,
+ pdata->i2s_param[HIFI_CODEC].is_i2s_master);
+ if (ret) {
+ dev_err(&pdev->dev, "tegra_asoc_utils_set_parent failed (%d)\n",
+ ret);
+ goto err_unregister_card;
+ }
+#endif
+
+ if (machine->clock_enabled == 1) {
+ pr_info("%s to close MCLK\n", __func__);
+ mclk_enable(machine, 0);
+ }
+
+ machine->bias_level = SND_SOC_BIAS_OFF;
+
+ sysedpc = sysedp_create_consumer("speaker", "speaker");
+
+ wake_lock_init(&machine->vad_wake, WAKE_LOCK_SUSPEND, "rt5677_wake");
+
+ machine->playback_fifo_cif = -1;
+ machine->playback_fast_fifo_cif = -1;
+
+ machine->dam_ifc = tegra30_dam_allocate_controller();
+ if (machine->dam_ifc < 0) {
+ dev_err(&pdev->dev, "DAM allocation failed\n");
+ ret = machine->dam_ifc;
+ goto err_unregister_card;
+ }
+ mutex_init(&machine->dam_mutex);
+
+ return 0;
+
+err_unregister_card:
+ snd_soc_unregister_card(card);
+err_unregister_switch:
+ tegra_asoc_utils_fini(&machine->util_data);
+ if (rt5677_irq > 0) {
+ disable_irq_wake(rt5677_irq);
+ free_irq(rt5677_irq, machine);
+ }
+err_free_machine:
+ /* shut off MCLK before freeing machine to avoid a use-after-free */
+ if (machine->clock_enabled == 1) {
+ pr_info("%s to close MCLK\n", __func__);
+ mclk_enable(machine, 0);
+ }
+ if (np)
+ kfree(machine->pdata);
+ kfree(machine);
+ return ret;
+}
+
+static int tegra_rt5677_driver_remove(struct platform_device *pdev)
+{
+ struct snd_soc_card *card = platform_get_drvdata(pdev);
+ struct tegra_rt5677 *machine = snd_soc_card_get_drvdata(card);
+ struct tegra_asoc_platform_data *pdata = machine->pdata;
+ struct device_node *np = pdev->dev.of_node;
+
+ int ret;
+ int rt5677_irq;
+
+ if (gpio_is_valid(pdata->gpio_irq1)) {
+ dev_dbg(&pdev->dev, "gpio_irq1 %d is valid\n",
+ pdata->gpio_irq1);
+ rt5677_irq = gpio_to_irq(pdata->gpio_irq1);
+ if (rt5677_irq < 0) {
+ ret = rt5677_irq;
+ dev_err(&pdev->dev, "Fail gpio_to_irq gpio_irq1, %d\n",
+ ret);
+ } else {
+ disable_irq_wake(rt5677_irq);
+ free_irq(rt5677_irq, machine);
+ }
+ cancel_work_sync(&machine->hotword_work);
+ } else {
+ dev_err(&pdev->dev, "gpio_irq1 %d is invalid\n",
+ pdata->gpio_irq1);
+ }
+
+ set_rt5677_power_locked(machine, false, false);
+
+ if (gpio_is_valid(pdata->gpio_int_mic_en))
+ gpio_free(pdata->gpio_int_mic_en);
+ if (gpio_is_valid(pdata->gpio_reset))
+ gpio_free(pdata->gpio_reset);
+ if (!IS_ERR(rt5677_reg))
+ regulator_put(rt5677_reg);
+
+ usleep_range(1000, 2000);
+
+ if (gpio_is_valid(pdata->gpio_ldo1_en)) {
+ dev_dbg(&pdev->dev, "gpio_ldo1_en %d is valid\n",
+ pdata->gpio_ldo1_en);
+ ret = gpio_direction_output(pdata->gpio_ldo1_en, 0);
+ if (ret)
+ dev_err(&pdev->dev,
+ "gpio_ldo1_en=0 fail,%d\n", ret);
+ else
+ dev_dbg(&pdev->dev, "gpio_ldo1_en=0\n");
+
+ gpio_free(pdata->gpio_ldo1_en);
+ } else {
+ dev_err(&pdev->dev, "gpio_ldo1_en %d is invalid\n",
+ pdata->gpio_ldo1_en);
+ }
+
+ if (machine->dam_ifc >= 0)
+ tegra30_dam_free_controller(machine->dam_ifc);
+
+ snd_soc_unregister_card(card);
+
+ tegra_asoc_utils_fini(&machine->util_data);
+
+ sysedp_free_consumer(sysedpc);
+
+ wake_lock_destroy(&machine->vad_wake);
+
+ if (np)
+ kfree(machine->pdata);
+
+ kfree(machine);
+
+ return 0;
+}
+
+static const struct of_device_id tegra_rt5677_of_match[] = {
+ { .compatible = "nvidia,tegra-audio-rt5677", },
+ {},
+};
+MODULE_DEVICE_TABLE(of, tegra_rt5677_of_match);
+
+static struct platform_driver tegra_rt5677_driver = {
+ .driver = {
+ .name = DRV_NAME,
+ .owner = THIS_MODULE,
+ .pm = &snd_soc_pm_ops,
+ .of_match_table = tegra_rt5677_of_match,
+ },
+ .probe = tegra_rt5677_driver_probe,
+ .remove = tegra_rt5677_driver_remove,
+};
+
+static int __init tegra_rt5677_modinit(void)
+{
+ return platform_driver_register(&tegra_rt5677_driver);
+}
+module_init(tegra_rt5677_modinit);
+
+static void __exit tegra_rt5677_modexit(void)
+{
+ platform_driver_unregister(&tegra_rt5677_driver);
+}
+module_exit(tegra_rt5677_modexit);
+MODULE_AUTHOR("Ravindra Lokhande <rlokhande@nvidia.com>");
+MODULE_AUTHOR("Manoj Gangwal <mgangwal@nvidia.com>");
+MODULE_AUTHOR("Nikesh Oswal <noswal@nvidia.com>");
+MODULE_DESCRIPTION("Tegra+rt5677 machine ASoC driver");
+MODULE_LICENSE("GPL");
+MODULE_ALIAS("platform:" DRV_NAME);
diff --git a/sound/soc/tegra/tegra_rt5677.h b/sound/soc/tegra/tegra_rt5677.h
new file mode 100644
index 0000000..4d65abc
--- /dev/null
+++ b/sound/soc/tegra/tegra_rt5677.h
@@ -0,0 +1,49 @@
+/*
+ * tegra_rt5677.h - Tegra machine ASoC driver for boards using the RT5677 codec.
+ *
+ * Copyright (c) 2013, NVIDIA CORPORATION. All rights reserved.
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
+ * 02110-1301 USA
+ *
+ */
+#include <linux/wakelock.h>
+#include "tegra_asoc_utils.h"
+
+struct tegra_rt5677 {
+ struct tegra_asoc_utils_data util_data;
+ struct tegra_asoc_platform_data *pdata;
+ struct snd_soc_codec *codec;
+ int gpio_requested;
+ enum snd_soc_bias_level bias_level;
+ int clock_enabled;
+ struct regulator *codec_reg;
+ struct regulator *digital_reg;
+ struct regulator *analog_reg;
+ struct regulator *spk_reg;
+ struct regulator *mic_reg;
+ struct regulator *dmic_reg;
+ struct snd_soc_card *pcard;
+ struct delayed_work power_work;
+ struct work_struct hotword_work;
+ struct mutex rt5677_lock;
+ struct mutex spk_amp_lock;
+ struct wake_lock vad_wake;
+ enum tegra30_ahub_txcif playback_fast_fifo_cif;
+ enum tegra30_ahub_txcif playback_fifo_cif;
+ struct tegra_pcm_dma_params playback_fast_dma_data;
+ struct tegra_pcm_dma_params playback_dma_data;
+ int dam_ifc;
+ int dam_ref_cnt;
+ struct mutex dam_mutex;
+};