Merge changes from topic 'rotary-master'
* changes:
inputflinger: Add support for scaling and true value reporting
inputflinger: Initial support for rotary encoders.
diff --git a/cmds/dumpstate/bugreport-format.md b/cmds/dumpstate/bugreport-format.md
new file mode 100644
index 0000000..fc43250
--- /dev/null
+++ b/cmds/dumpstate/bugreport-format.md
@@ -0,0 +1,87 @@
+# Bugreport file format
+
+This document specifies the format of the bugreport files generated by the
+bugreport services (like `bugreport` and `bugreportplus`) and delivered to the
+end user (i.e., it doesn’t include other tools like `adb bugreport`).
+
+A _bugreport_ is initially generated by dumpstate, then processed by **Shell**,
+which in turn delivers it to the end user through an `ACTION_SEND_MULTIPLE`
+intent; the end user then selects which app (like an email client) handles that
+intent.
+
+## Text file (Pre-M)
+Prior to _Android M (Marshmallow)_, `dumpstate` generates a flat .txt file named
+_bugreport-DATE.txt_ (where _DATE_ is the date the bugreport was generated, in the
+format _YYYY-MM-DD-HH-MM-SS_), and Shell simply propagates it as an attachment
+in the `ACTION_SEND_MULTIPLE` intent.
+
+## Version v0 (Android M)
+On _Android M (Marshmallow)_, dumpstate still generates a flat
+_bugreport-DATE.txt_ file, but then **Shell** creates a zip file called
+_bugreport-DATE.zip_ containing a _bugreport-DATE.txt_ entry and sends that
+file as the `ACTION_SEND_MULTIPLE` attachment.
+
+## Version v1 (Android N)
+On _Android N (TBD)_, `dumpstate` generates a zip file directly (unless there
+is a failure, in which case it reverts to the flat file that is zipped by
+**Shell** and hence the end result is the _v0_ format).
+
+The zip file is by default called _bugreport-DATE.zip_ and it contains a
+_bugreport-DATE.txt_ entry, although the end user can change the name (through
+**Shell**), in which case they would be called _bugreport-NEW_NAME.zip_ and
+_bugreport-NEW_NAME.txt_ respectively.
+
+The zip file also contains 2 metadata entries generated by `dumpstate`:
+
+- `version.txt`: whose value is **v1**.
+- `main_entry.txt`: whose value is the name of the flat text entry (i.e.,
+  _bugreport-DATE.txt_ or _bugreport-NEW_NAME.txt_).
+
+`dumpstate` can also copy files from the device’s filesystem into the zip file
+under the `FS` folder. For example, a `/dirA/dirB/fileC` file in the device
+would generate a `FS/dirA/dirB/fileC` entry in the zip file.
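A minimal sketch of that path mapping (the `FsEntryName` helper is hypothetical, for illustration only; `dumpstate` builds the entry name from its `ZIP_ROOT_DIR` constant):

```cpp
#include <cassert>
#include <string>

// Maps a device filesystem path to its zip entry name by prefixing the
// "FS" root folder and dropping the leading slash.
std::string FsEntryName(const std::string& device_path) {
    std::string p = device_path;
    if (!p.empty() && p[0] == '/') p = p.substr(1);
    return "FS/" + p;
}
```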
+
+The flat file also has some minor changes:
+
+- Tombstone files were removed from the flat file and added to the zip file.
+- The duration of each section is printed in the report.
+- Some dumpsys sections (memory and cpuinfo) are reported earlier in the file.
+
+Besides the files generated by `dumpstate`, **Shell** can also add 2 other
+files upon the end user’s request:
+
+- `title.txt`: whose value is a single-line summary of the problem.
+- `description.txt`: whose value is a multi-line, detailed description of the problem.
+
+## Intermediate versions
+During development, the versions will be suffixed with _-devX_ or
+_-devX-EXPERIMENTAL_FEATURE_, where _X_ is a number that increases as the
+changes become stable.
+
+For example, the initial version during _Android N_ development was
+**v1-dev1**. When `dumpsys` was split in 2 sections but not all tools were
+ready to parse that format, the version was named **v1-dev1-dumpsys-split**,
+which had to be passed to `dumpstate` explicitly (i.e., through a
+`-V v1-dev1-dumpsys-split` argument). Once that format became stable and tools
+knew how to parse it, the default version became **v1-dev2**.
+
+Similarly, if changes in the file format are made after the initial release of
+Android defining that format, then a new _sub-version_ will be used.
+For example, if after _Android N_ launches changes are made for the next _N_
+release, the version will be called **v1.1** or something like that.
+
+## Determining version and main entry
+
+Tools parsing the zipped bugreport file can use the following algorithm to
+determine the bugreport format version and its main entry:
+
+```
+If [entries contain "version.txt"]
+ version = read("version.txt")
+ main_entry = read("main_entry.txt")
+else
+ version = v0
+ main_entry = entries[0]
+fi
+```
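For tool authors, the algorithm above can be sketched in C++. The archive is modeled here as an in-memory name-to-content map plus an ordered entry list; a real parser would read these from the zip (e.g., via libziparchive):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

struct BugreportInfo {
    std::string version;
    std::string main_entry;
};

// Implements the detection algorithm: if "version.txt" exists, read the
// version and main entry from the metadata entries; otherwise assume the
// v0 format, whose only entry is the flat text file.
BugreportInfo DetectVersion(const std::map<std::string, std::string>& entries,
                            const std::vector<std::string>& entry_order) {
    BugreportInfo info;
    auto version = entries.find("version.txt");
    if (version != entries.end()) {
        info.version = version->second;
        auto main_entry = entries.find("main_entry.txt");
        if (main_entry != entries.end()) info.main_entry = main_entry->second;
    } else {
        info.version = "v0";
        if (!entry_order.empty()) info.main_entry = entry_order.front();
    }
    return info;
}
```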
diff --git a/cmds/dumpstate/dumpstate.cpp b/cmds/dumpstate/dumpstate.cpp
index 258a99f..22fd2c3 100644
--- a/cmds/dumpstate/dumpstate.cpp
+++ b/cmds/dumpstate/dumpstate.cpp
@@ -83,6 +83,16 @@
// Root dir for all files copied as-is into the bugreport
const std::string& ZIP_ROOT_DIR = "FS";
+/*
+ * List of supported zip format versions.
+ *
+ * See bugreport-format.md for more info.
+ */
+// TODO: change to "v1" before final N build
+static std::string VERSION_DEFAULT = "v1-dev1";
+// TODO: remove before final N build
+static std::string VERSION_DUMPSYS_SPLIT = "v1-dev1-dumpsys-split";
+
/* gets the tombstone data, according to the bugreport type: if zipped gets all tombstones,
* otherwise gets just those modified in the last half an hour. */
static void get_tombstone_fds(tombstone_data_t data[NUM_TOMBSTONES]) {
@@ -132,9 +142,9 @@
if (!zip_writer) return;
const char *title = "MOUNT INFO";
mount_points.clear();
- DurationReporter duration_reporter(title);
+ DurationReporter duration_reporter(title, NULL);
for_each_pid(do_mountinfo, NULL);
- printf("%s: %d entries added to zip file\n", title, mount_points.size());
+ ALOGD("%s: %zu entries added to zip file\n", title, mount_points.size());
}
static void dump_dev_files(const char *title, const char *driverpath, const char *filename)
@@ -325,7 +335,7 @@
/* End copy from system/core/logd/LogBuffer.cpp */
/* dumps the current system state to stdout */
-static void print_header() {
+static void print_header(std::string version) {
char build[PROPERTY_VALUE_MAX], fingerprint[PROPERTY_VALUE_MAX];
char radio[PROPERTY_VALUE_MAX], bootloader[PROPERTY_VALUE_MAX];
char network[PROPERTY_VALUE_MAX], date[80];
@@ -352,20 +362,27 @@
printf("Kernel: ");
dump_file(NULL, "/proc/version");
printf("Command line: %s\n", strtok(cmdline_buf, "\n"));
+ printf("Bugreport format version: %s\n", version.c_str());
printf("\n");
}
/* adds a new entry to the existing zip file. */
static bool add_zip_entry_from_fd(const std::string& entry_name, int fd) {
+ if (!zip_writer) {
+ ALOGD("Not adding zip entry %s from fd because zip_writer is not set", entry_name.c_str());
+ return false;
+ }
+ ALOGD("Adding zip entry %s", entry_name.c_str());
int32_t err = zip_writer->StartEntryWithTime(entry_name.c_str(),
ZipWriter::kCompress, get_mtime(fd, now));
if (err) {
- ALOGE("zip_writer->StartEntryWithTime(%s): %s\n", entry_name.c_str(), ZipWriter::ErrorCodeString(err));
+ ALOGE("zip_writer->StartEntryWithTime(%s): %s\n", entry_name.c_str(),
+ ZipWriter::ErrorCodeString(err));
return false;
}
+ std::vector<uint8_t> buffer(65536);
while (1) {
- std::vector<uint8_t> buffer(65536);
 ssize_t bytes_read = TEMP_FAILURE_RETRY(read(fd, buffer.data(), buffer.size()));
if (bytes_read == 0) {
break;
@@ -407,13 +424,46 @@
/* adds all files from a directory to the zipped bugreport file */
void add_dir(const char *dir, bool recursive) {
- if (!zip_writer) return;
- DurationReporter duration_reporter(dir);
+ if (!zip_writer) {
+ ALOGD("Not adding dir %s because zip_writer is not set", dir);
+ return;
+ }
+ DurationReporter duration_reporter(dir, NULL);
dump_files(NULL, dir, recursive ? skip_none : is_dir, _add_file_from_fd);
}
+/* adds a text entry to the existing zip file. */
+static bool add_text_zip_entry(const std::string& entry_name, const std::string& content) {
+ if (!zip_writer) {
+ ALOGD("Not adding text zip entry %s because zip_writer is not set", entry_name.c_str());
+ return false;
+ }
+ ALOGD("Adding zip text entry %s", entry_name.c_str());
+ int32_t err = zip_writer->StartEntryWithTime(entry_name.c_str(), ZipWriter::kCompress, now);
+ if (err) {
+ ALOGE("zip_writer->StartEntryWithTime(%s): %s\n", entry_name.c_str(),
+ ZipWriter::ErrorCodeString(err));
+ return false;
+ }
+
+ err = zip_writer->WriteBytes(content.c_str(), content.length());
+ if (err) {
+ ALOGE("zip_writer->WriteBytes(%s): %s\n", entry_name.c_str(),
+ ZipWriter::ErrorCodeString(err));
+ return false;
+ }
+
+ err = zip_writer->FinishEntry();
+ if (err) {
+ ALOGE("zip_writer->FinishEntry(): %s\n", ZipWriter::ErrorCodeString(err));
+ return false;
+ }
+
+ return true;
+}
+
static void dumpstate(const std::string& screenshot_path) {
- std::unique_ptr<DurationReporter> duration_reporter(new DurationReporter("DUMPSTATE"));
+ DurationReporter duration_reporter("DUMPSTATE");
unsigned long timeout;
dump_dev_files("TRUSTY VERSION", "/sys/bus/platform/drivers/trusty", "trusty_version");
@@ -466,12 +516,11 @@
"-v", "printable",
"-d",
"*:v", NULL);
- timeout = logcat_timeout("events") + logcat_timeout("security");
+ timeout = logcat_timeout("events");
if (timeout < 20000) {
timeout = 20000;
}
run_command("EVENT LOG", timeout / 1000, "logcat", "-b", "events",
- "-b", "security",
"-v", "threadtime",
"-v", "printable",
"-d",
@@ -716,24 +765,28 @@
printf("========================================================\n");
+ printf("== Final progress (pid %d): %d/%d (originally %d)\n",
+ getpid(), progress, weight_total, WEIGHT_TOTAL);
+ printf("========================================================\n");
printf("== dumpstate: done\n");
printf("========================================================\n");
}
static void usage() {
- fprintf(stderr, "usage: dumpstate [-b soundfile] [-e soundfile] [-o file [-d] [-p] [-z]] [-s] [-q]\n"
- " -o: write to file (instead of stdout)\n"
- " -d: append date to filename (requires -o)\n"
- " -z: generates zipped file (requires -o)\n"
- " -p: capture screenshot to filename.png (requires -o)\n"
- " -s: write output to control socket (for init)\n"
+ fprintf(stderr, "usage: dumpstate [-b soundfile] [-e soundfile] [-o file [-d] [-p] [-z]] [-s] [-q] [-B] [-P] [-R] [-V version]\n"
" -b: play sound file instead of vibrate, at beginning of job\n"
" -e: play sound file instead of vibrate, at end of job\n"
+ " -o: write to file (instead of stdout)\n"
+ " -d: append date to filename (requires -o)\n"
+ " -p: capture screenshot to filename.png (requires -o)\n"
+ " -z: generates zipped file (requires -o)\n"
+ " -s: write output to control socket (for init)\n"
" -q: disable vibrate\n"
" -B: send broadcast when finished (requires -o)\n"
- " -P: send broadacast when started and update system properties on progress (requires -o and -B)\n"
+ " -P: send broadcast when started and update system properties on progress (requires -o and -B)\n"
" -R: take bugreport in remote mode (requires -o, -z, -d and -B, shouldn't be used with -P)\n"
- );
+ " -V: sets the bugreport format version (%s or %s)\n",
+ VERSION_DEFAULT.c_str(), VERSION_DUMPSYS_SPLIT.c_str());
}
static void sigpipe_handler(int n) {
@@ -750,6 +803,10 @@
ALOGE("Failed to add text entry to .zip file\n");
return false;
}
+ if (!add_text_zip_entry("main_entry.txt", bugreport_name)) {
+ ALOGE("Failed to add main_entry.txt to .zip file\n");
+ return false;
+ }
int32_t err = zip_writer->Finish();
if (err) {
@@ -798,6 +855,48 @@
return std::string(hash_buffer);
}
+/* switch to non-root user and group */
+bool drop_root() {
+ /* ensure we will keep capabilities when we drop root */
+ if (prctl(PR_SET_KEEPCAPS, 1) < 0) {
+ ALOGE("prctl(PR_SET_KEEPCAPS) failed: %s\n", strerror(errno));
+ return false;
+ }
+
+ gid_t groups[] = { AID_LOG, AID_SDCARD_R, AID_SDCARD_RW,
+ AID_MOUNT, AID_INET, AID_NET_BW_STATS, AID_READPROC };
+ if (setgroups(sizeof(groups)/sizeof(groups[0]), groups) != 0) {
+ ALOGE("Unable to setgroups, aborting: %s\n", strerror(errno));
+ return false;
+ }
+ if (setgid(AID_SHELL) != 0) {
+ ALOGE("Unable to setgid, aborting: %s\n", strerror(errno));
+ return false;
+ }
+ if (setuid(AID_SHELL) != 0) {
+ ALOGE("Unable to setuid, aborting: %s\n", strerror(errno));
+ return false;
+ }
+
+ struct __user_cap_header_struct capheader;
+ struct __user_cap_data_struct capdata[2];
+ memset(&capheader, 0, sizeof(capheader));
+ memset(&capdata, 0, sizeof(capdata));
+ capheader.version = _LINUX_CAPABILITY_VERSION_3;
+ capheader.pid = 0;
+
+ capdata[CAP_TO_INDEX(CAP_SYSLOG)].permitted = CAP_TO_MASK(CAP_SYSLOG);
+ capdata[CAP_TO_INDEX(CAP_SYSLOG)].effective = CAP_TO_MASK(CAP_SYSLOG);
+ capdata[0].inheritable = 0;
+ capdata[1].inheritable = 0;
+
+ if (capset(&capheader, &capdata[0]) < 0) {
+ ALOGE("capset failed: %s\n", strerror(errno));
+ return false;
+ }
+
+ return true;
+}
int main(int argc, char *argv[]) {
struct sigaction sigact;
@@ -810,6 +909,7 @@
int do_broadcast = 0;
int do_early_screenshot = 0;
int is_remote_mode = 0;
+ std::string version = VERSION_DEFAULT;
now = time(NULL);
@@ -839,7 +939,7 @@
/* parse arguments */
int c;
- while ((c = getopt(argc, argv, "dho:svqzpPBR")) != -1) {
+ while ((c = getopt(argc, argv, "dho:svqzpPBRV:")) != -1) {
switch (c) {
case 'd': do_add_date = 1; break;
case 'z': do_zip_file = 1; break;
@@ -851,6 +951,7 @@
case 'P': do_update_progress = 1; break;
case 'R': is_remote_mode = 1; break;
case 'B': do_broadcast = 1; break;
+ case 'V': version = optarg; break;
case '?': printf("\n");
case 'h':
usage();
@@ -873,6 +974,13 @@
exit(1);
}
+ if (version != VERSION_DEFAULT && version != VERSION_DUMPSYS_SPLIT) {
+ usage();
+ exit(1);
+ }
+
+ ALOGI("bugreport format version: %s\n", version.c_str());
+
do_early_screenshot = do_update_progress;
// If we are going to use a socket, do it as early as possible
@@ -931,6 +1039,7 @@
if (do_zip_file) {
ALOGD("Creating initial .zip file");
path = bugreport_dir + "/" + base_name + "-" + suffix + ".zip";
+ create_parent_dirs(path.c_str());
zip_file.reset(fopen(path.c_str(), "wb"));
if (!zip_file) {
ALOGE("fopen(%s, 'wb'): %s\n", path.c_str(), strerror(errno));
@@ -938,6 +1047,7 @@
} else {
zip_writer.reset(new ZipWriter(zip_file.get()));
}
+ add_text_zip_entry("version.txt", version);
}
if (do_update_progress) {
@@ -951,7 +1061,12 @@
}
}
- print_header();
+ /* read /proc/cmdline before dropping root */
+ FILE *cmdline = fopen("/proc/cmdline", "re");
+ if (cmdline) {
+ fgets(cmdline_buf, sizeof(cmdline_buf), cmdline);
+ fclose(cmdline);
+ }
/* open the vibrator before dropping root */
std::unique_ptr<FILE, int(*)(FILE*)> vibrator(NULL, fclose);
@@ -983,13 +1098,6 @@
}
}
- /* read /proc/cmdline before dropping root */
- FILE *cmdline = fopen("/proc/cmdline", "re");
- if (cmdline) {
- fgets(cmdline_buf, sizeof(cmdline_buf), cmdline);
- fclose(cmdline);
- }
-
/* collect stack traces from Dalvik and native processes (needs root) */
dump_traces_path = dump_traces();
@@ -998,42 +1106,7 @@
add_dir(RECOVERY_DIR, true);
add_mountinfo();
- /* ensure we will keep capabilities when we drop root */
- if (prctl(PR_SET_KEEPCAPS, 1) < 0) {
- ALOGE("prctl(PR_SET_KEEPCAPS) failed: %s\n", strerror(errno));
- return -1;
- }
-
- /* switch to non-root user and group */
- gid_t groups[] = { AID_LOG, AID_SDCARD_R, AID_SDCARD_RW,
- AID_MOUNT, AID_INET, AID_NET_BW_STATS, AID_READPROC };
- if (setgroups(sizeof(groups)/sizeof(groups[0]), groups) != 0) {
- ALOGE("Unable to setgroups, aborting: %s\n", strerror(errno));
- return -1;
- }
- if (setgid(AID_SHELL) != 0) {
- ALOGE("Unable to setgid, aborting: %s\n", strerror(errno));
- return -1;
- }
- if (setuid(AID_SHELL) != 0) {
- ALOGE("Unable to setuid, aborting: %s\n", strerror(errno));
- return -1;
- }
-
- struct __user_cap_header_struct capheader;
- struct __user_cap_data_struct capdata[2];
- memset(&capheader, 0, sizeof(capheader));
- memset(&capdata, 0, sizeof(capdata));
- capheader.version = _LINUX_CAPABILITY_VERSION_3;
- capheader.pid = 0;
-
- capdata[CAP_TO_INDEX(CAP_SYSLOG)].permitted = CAP_TO_MASK(CAP_SYSLOG);
- capdata[CAP_TO_INDEX(CAP_SYSLOG)].effective = CAP_TO_MASK(CAP_SYSLOG);
- capdata[0].inheritable = 0;
- capdata[1].inheritable = 0;
-
- if (capset(&capheader, &capdata[0]) < 0) {
- ALOGE("capset failed: %s\n", strerror(errno));
+ if (!drop_root()) {
return -1;
}
@@ -1043,6 +1116,10 @@
directly, but the libziparchive doesn't support that option yet. */
redirect_to_file(stdout, const_cast<char*>(tmp_path.c_str()));
}
+ // NOTE: there should be no stdout output until now, otherwise it would break the header.
+ // In particular, DurationReporter objects should be created passing 'title, NULL', so their
+ // duration is logged into ALOG instead.
+ print_header(version);
dumpstate(do_early_screenshot ? "": screenshot_path);
@@ -1117,7 +1194,7 @@
if (!path.empty()) {
ALOGI("Final bugreport path: %s\n", path.c_str());
std::vector<std::string> am_args = {
- "--receiver-permission", "android.permission.DUMP",
+ "--receiver-permission", "android.permission.DUMP", "--receiver-foreground",
"--ei", "android.intent.extra.PID", std::to_string(getpid()),
"--es", "android.intent.extra.BUGREPORT", path
};
diff --git a/cmds/dumpstate/dumpstate.h b/cmds/dumpstate/dumpstate.h
index 0a9f9e2..a6afbf4 100644
--- a/cmds/dumpstate/dumpstate.h
+++ b/cmds/dumpstate/dumpstate.h
@@ -112,6 +112,9 @@
/* redirect output to a file */
void redirect_to_file(FILE *redirect, char *path);
+/* create leading directories, if necessary */
+void create_parent_dirs(const char *path);
+
/* dump Dalvik and native stack traces, return the trace file location (NULL if none) */
const char *dump_traces();
@@ -165,6 +168,7 @@
class DurationReporter {
public:
DurationReporter(const char *title);
+ DurationReporter(const char *title, FILE* out);
~DurationReporter();
@@ -172,6 +176,7 @@
private:
const char* title_;
+ FILE* out_;
uint64_t started_;
};
diff --git a/cmds/dumpstate/utils.cpp b/cmds/dumpstate/utils.cpp
index e49d766..0c35430 100644
--- a/cmds/dumpstate/utils.cpp
+++ b/cmds/dumpstate/utils.cpp
@@ -51,6 +51,7 @@
/* list of native processes to include in the native dumps */
static const char* native_processes_to_dump[] = {
"/system/bin/audioserver",
+ "/system/bin/cameraserver",
"/system/bin/drmserver",
"/system/bin/mediaserver",
"/system/bin/sdcard",
@@ -59,19 +60,26 @@
NULL,
};
-DurationReporter::DurationReporter(const char *title) {
+DurationReporter::DurationReporter(const char *title) : DurationReporter(title, stdout) {}
+
+DurationReporter::DurationReporter(const char *title, FILE *out) {
title_ = title;
if (title) {
started_ = DurationReporter::nanotime();
}
+ out_ = out;
}
DurationReporter::~DurationReporter() {
if (title_) {
uint64_t elapsed = DurationReporter::nanotime() - started_;
// Use "Yoda grammar" to make it easier to grep|sort sections.
- printf("------ %.3fs was the duration of '%s' ------\n",
- (float) elapsed / NANOS_PER_SEC, title_);
+ if (out_) {
+ fprintf(out_, "------ %.3fs was the duration of '%s' ------\n",
+ (float) elapsed / NANOS_PER_SEC, title_);
+ } else {
+ ALOGD("Duration of '%s': %.3fs\n", title_, (float) elapsed / NANOS_PER_SEC);
+ }
}
}
@@ -566,7 +574,6 @@
} else if (WIFEXITED(status) && WEXITSTATUS(status) > 0) {
printf("*** %s: Exit code %d\n", command, WEXITSTATUS(status));
}
- if (title) printf("[%s: %.3fs elapsed]\n\n", command, (float)elapsed / NANOS_PER_SEC);
if (weight > 0) {
update_progress(weight);
@@ -579,8 +586,8 @@
fprintf(stderr, "send_broadcast: too many arguments (%d)\n", (int) args.size());
return;
}
- const char *am_args[1024] = { "/system/bin/am", "broadcast", "--user", "0",
- "-a", action.c_str() };
+ const char *am_args[1024] = { "/system/bin/am", "broadcast",
+ "--user", "0", "-a", action.c_str() };
size_t am_index = 5; // Starts at the index of last initial value above.
for (const std::string& arg : args) {
am_args[++am_index] = arg.c_str();
@@ -650,25 +657,37 @@
close(fd);
}
-/* redirect output to a file */
-void redirect_to_file(FILE *redirect, char *path) {
- char *chp = path;
+void create_parent_dirs(const char *path) {
+ char *chp = (char*) path;
/* skip initial slash */
if (chp[0] == '/')
chp++;
/* create leading directories, if necessary */
+ struct stat dir_stat;
while (chp && chp[0]) {
chp = strchr(chp, '/');
if (chp) {
*chp = 0;
- mkdir(path, 0770); /* drwxrwx--- */
+ if (stat(path, &dir_stat) == -1 || !S_ISDIR(dir_stat.st_mode)) {
+ ALOGI("Creating directory %s\n", path);
+ if (mkdir(path, 0770)) { /* drwxrwx--- */
+ ALOGE("Unable to create directory %s: %s\n", path, strerror(errno));
+ } else if (chown(path, AID_SHELL, AID_SHELL)) {
+ ALOGE("Unable to change ownership of dir %s: %s\n", path, strerror(errno));
+ }
+ }
*chp++ = '/';
}
}
+}
- int fd = TEMP_FAILURE_RETRY(open(path, O_WRONLY | O_CREAT | O_TRUNC | O_CLOEXEC,
+/* redirect output to a file */
+void redirect_to_file(FILE *redirect, char *path) {
+ create_parent_dirs(path);
+
+ int fd = TEMP_FAILURE_RETRY(open(path, O_WRONLY | O_CREAT | O_TRUNC | O_CLOEXEC | O_NOFOLLOW,
S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH));
if (fd < 0) {
fprintf(stderr, "%s: %s\n", path, strerror(errno));
@@ -690,7 +709,7 @@
/* dump Dalvik and native stack traces, return the trace file location (NULL if none) */
const char *dump_traces() {
- DurationReporter duration_reporter("DUMP TRACES");
+ DurationReporter duration_reporter("DUMP TRACES", NULL);
ON_DRY_RUN_RETURN(NULL);
const char* result = NULL;
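The `create_parent_dirs()` change above walks the path one `/` at a time, creating each missing prefix directory. That prefix walk can be sketched as a pure function (helper name `ParentDirPrefixes` is hypothetical; the real code mutates the string in place and calls `mkdir()`/`chown()` on each prefix):

```cpp
#include <cassert>
#include <string>
#include <vector>

// For "/a/b/c.txt" yields "/a" then "/a/b": every directory prefix that
// must exist before the final path component can be created.
std::vector<std::string> ParentDirPrefixes(const std::string& path) {
    std::vector<std::string> prefixes;
    // Skip a leading slash so the first token found is a directory name.
    size_t start = (!path.empty() && path[0] == '/') ? 1 : 0;
    for (size_t pos = path.find('/', start); pos != std::string::npos;
         pos = path.find('/', pos + 1)) {
        prefixes.push_back(path.substr(0, pos));
    }
    return prefixes;
}
```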
diff --git a/cmds/dumpsys/dumpsys.cpp b/cmds/dumpsys/dumpsys.cpp
index ce8993d..ef009da 100644
--- a/cmds/dumpsys/dumpsys.cpp
+++ b/cmds/dumpsys/dumpsys.cpp
@@ -26,28 +26,76 @@
return lhs->compare(*rhs);
}
+static void usage() {
+ fprintf(stderr,
+ "usage: dumpsys\n"
+ " To dump all services.\n"
+ "or:\n"
+ " dumpsys [--help | -l | --skip SERVICES | SERVICE [ARGS]]\n"
+ " --help: shows this help\n"
+ " -l: only list services, do not dump them\n"
+ " --skip SERVICES: dumps all services but SERVICES (comma-separated list)\n"
+ " SERVICE [ARGS]: dumps only service SERVICE, optionally passing ARGS to it\n");
+}
+
+bool IsSkipped(const Vector<String16>& skipped, const String16& service) {
+ for (const auto& candidate : skipped) {
+ if (candidate == service) {
+ return true;
+ }
+ }
+ return false;
+}
+
int main(int argc, char* const argv[])
{
signal(SIGPIPE, SIG_IGN);
sp<IServiceManager> sm = defaultServiceManager();
fflush(stdout);
if (sm == NULL) {
- ALOGE("Unable to get default service manager!");
+ ALOGE("Unable to get default service manager!");
aerr << "dumpsys: Unable to get default service manager!" << endl;
return 20;
}
Vector<String16> services;
Vector<String16> args;
+ Vector<String16> skippedServices;
bool showListOnly = false;
- if ((argc == 2) && (strcmp(argv[1], "-l") == 0)) {
- showListOnly = true;
+ if (argc == 2) {
+ // 1 argument: check for special cases (-l or --help)
+ if (strcmp(argv[1], "--help") == 0) {
+ usage();
+ return 0;
+ }
+ if (strcmp(argv[1], "-l") == 0) {
+ showListOnly = true;
+ }
}
- if ((argc == 1) || showListOnly) {
+ if (argc == 3) {
+ // 2 arguments: check for special cases (--skip SERVICES)
+ if (strcmp(argv[1], "--skip") == 0) {
+ char* token = strtok(argv[2], ",");
+ while (token != NULL) {
+ skippedServices.add(String16(token));
+ token = strtok(NULL, ",");
+ }
+ }
+ }
+ bool dumpAll = argc == 1;
+ if (dumpAll || !skippedServices.empty() || showListOnly) {
+ // gets all services
services = sm->listServices();
services.sort(sort_func);
args.add(String16("-a"));
} else {
+ // gets a specific service:
+ // first check if its name is not a special argument...
+ if (strcmp(argv[1], "--skip") == 0 || strcmp(argv[1], "-l") == 0) {
+ usage();
+ return -1;
+ }
+ // ...then gets its arguments
services.add(String16(argv[1]));
for (int i=2; i<argc; i++) {
args.add(String16(argv[i]));
@@ -59,11 +107,12 @@
if (N > 1) {
// first print a list of the current services
aout << "Currently running services:" << endl;
-
+
for (size_t i=0; i<N; i++) {
sp<IBinder> service = sm->checkService(services[i]);
if (service != NULL) {
- aout << " " << services[i] << endl;
+ bool skipped = IsSkipped(skippedServices, services[i]);
+ aout << " " << services[i] << (skipped ? " (skipped)" : "") << endl;
}
}
}
@@ -73,6 +122,8 @@
}
for (size_t i=0; i<N; i++) {
+ if (IsSkipped(skippedServices, services[i])) continue;
+
sp<IBinder> service = sm->checkService(services[i]);
if (service != NULL) {
if (N > 1) {
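The `--skip SERVICES` parsing above splits a comma-separated list into service names. A standalone sketch of that split (the real code uses `strtok()` on `argv[2]` and stores `String16` values; the helper name here is illustrative):

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Splits a comma-separated service list such as "cpuinfo,meminfo" into
// individual names; empty tokens are dropped.
std::vector<std::string> SplitSkippedServices(const std::string& arg) {
    std::vector<std::string> services;
    std::stringstream ss(arg);
    std::string token;
    while (std::getline(ss, token, ',')) {
        if (!token.empty()) services.push_back(token);
    }
    return services;
}
```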
diff --git a/cmds/installd/Android.mk b/cmds/installd/Android.mk
index 209632e..65bcf39 100644
--- a/cmds/installd/Android.mk
+++ b/cmds/installd/Android.mk
@@ -43,6 +43,43 @@
LOCAL_CLANG := true
include $(BUILD_EXECUTABLE)
+#
+# OTA Executable
+#
+
+include $(CLEAR_VARS)
+LOCAL_MODULE := otapreopt
+LOCAL_MODULE_TAGS := optional
+LOCAL_CFLAGS := $(common_cflags)
+
+# Base & ASLR boundaries for boot image creation.
+ifndef LIBART_IMG_HOST_MIN_BASE_ADDRESS_DELTA
+ LOCAL_LIBART_IMG_HOST_MIN_BASE_ADDRESS_DELTA := -0x1000000
+else
+ LOCAL_LIBART_IMG_HOST_MIN_BASE_ADDRESS_DELTA := $(LIBART_IMG_HOST_MIN_BASE_ADDRESS_DELTA)
+endif
+ifndef LIBART_IMG_HOST_MAX_BASE_ADDRESS_DELTA
+ LOCAL_LIBART_IMG_HOST_MAX_BASE_ADDRESS_DELTA := 0x1000000
+else
+ LOCAL_LIBART_IMG_HOST_MAX_BASE_ADDRESS_DELTA := $(LIBART_IMG_HOST_MAX_BASE_ADDRESS_DELTA)
+endif
+LOCAL_CFLAGS += -DART_BASE_ADDRESS=$(LIBART_IMG_HOST_BASE_ADDRESS)
+LOCAL_CFLAGS += -DART_BASE_ADDRESS_MIN_DELTA=$(LOCAL_LIBART_IMG_HOST_MIN_BASE_ADDRESS_DELTA)
+LOCAL_CFLAGS += -DART_BASE_ADDRESS_MAX_DELTA=$(LOCAL_LIBART_IMG_HOST_MAX_BASE_ADDRESS_DELTA)
+
+LOCAL_SRC_FILES := otapreopt.cpp $(common_src_files)
+LOCAL_SHARED_LIBRARIES := \
+ libbase \
+ libcutils \
+ liblog \
+ liblogwrap \
+ libselinux \
+
+LOCAL_STATIC_LIBRARIES := libdiskusage
+LOCAL_ADDITIONAL_DEPENDENCIES += $(LOCAL_PATH)/Android.mk
+LOCAL_CLANG := true
+include $(BUILD_EXECUTABLE)
+
# Tests.
include $(LOCAL_PATH)/tests/Android.mk
\ No newline at end of file
diff --git a/cmds/installd/commands.cpp b/cmds/installd/commands.cpp
index 7799ab9..e9ec3d3 100644
--- a/cmds/installd/commands.cpp
+++ b/cmds/installd/commands.cpp
@@ -51,12 +51,15 @@
static const char* kCpPath = "/system/bin/cp";
+#define MIN_RESTRICTED_HOME_SDK_VERSION 24 // > M
+
int create_app_data(const char *uuid, const char *pkgname, userid_t userid, int flags,
- appid_t appid, const char* seinfo) {
+ appid_t appid, const char* seinfo, int target_sdk_version) {
uid_t uid = multiuser_get_uid(userid, appid);
+ int target_mode = target_sdk_version >= MIN_RESTRICTED_HOME_SDK_VERSION ? 0700 : 0751;
if (flags & FLAG_CE_STORAGE) {
auto path = create_data_user_package_path(uuid, userid, pkgname);
- if (fs_prepare_dir_strict(path.c_str(), 0751, uid, uid) != 0) {
+ if (fs_prepare_dir_strict(path.c_str(), target_mode, uid, uid) != 0) {
PLOG(ERROR) << "Failed to prepare " << path;
return -1;
}
@@ -67,7 +70,7 @@
}
if (flags & FLAG_DE_STORAGE) {
auto path = create_data_user_de_package_path(uuid, userid, pkgname);
- if (fs_prepare_dir_strict(path.c_str(), 0751, uid, uid) == -1) {
+ if (fs_prepare_dir_strict(path.c_str(), target_mode, uid, uid) == -1) {
PLOG(ERROR) << "Failed to prepare " << path;
// TODO: include result once 25796509 is fixed
return 0;
@@ -121,7 +124,7 @@
}
int move_complete_app(const char *from_uuid, const char *to_uuid, const char *package_name,
- const char *data_app_name, appid_t appid, const char* seinfo) {
+ const char *data_app_name, appid_t appid, const char* seinfo, int target_sdk_version) {
std::vector<userid_t> users = get_known_users(from_uuid);
// Copy app
@@ -176,7 +179,7 @@
}
if (create_app_data(to_uuid, package_name, user, FLAG_CE_STORAGE | FLAG_DE_STORAGE,
- appid, seinfo) != 0) {
+ appid, seinfo, target_sdk_version) != 0) {
LOG(ERROR) << "Failed to create package target " << to;
goto fail;
}
@@ -595,10 +598,10 @@
return strcmp(tmp_property_value, "true") == 0;
}
-static void run_dex2oat(int zip_fd, int oat_fd, const char* input_file_name,
+static void run_dex2oat(int zip_fd, int oat_fd, int image_fd, const char* input_file_name,
const char* output_file_name, int swap_fd, const char *instruction_set,
- bool vm_safe_mode, bool debuggable, bool post_bootcomplete, bool use_jit)
-{
+ bool vm_safe_mode, bool debuggable, bool post_bootcomplete, bool extract_only,
+ const std::vector<int>& profile_files_fd, const std::vector<int>& reference_profile_files_fd) {
static const unsigned int MAX_INSTRUCTION_SET_LEN = 7;
if (strlen(instruction_set) >= MAX_INSTRUCTION_SET_LEN) {
@@ -607,6 +610,12 @@
return;
}
+ if (profile_files_fd.size() != reference_profile_files_fd.size()) {
+ ALOGE("Invalid configuration of profile files: pf_size (%zu) != rpf_size (%zu)",
+ profile_files_fd.size(), reference_profile_files_fd.size());
+ return;
+ }
+
char dex2oat_Xms_flag[kPropertyValueMax];
bool have_dex2oat_Xms_flag = get_property("dalvik.vm.dex2oat-Xms", dex2oat_Xms_flag, NULL) > 0;
@@ -657,6 +666,14 @@
bool generate_debug_info = check_boolean_property("debug.generate-debug-info");
+ char app_image_format[kPropertyValueMax];
+ char image_format_arg[strlen("--image-format=") + kPropertyValueMax];
+ bool have_app_image_format =
+ image_fd >= 0 && get_property("dalvik.vm.appimageformat", app_image_format, NULL) > 0;
+ if (have_app_image_format) {
+ sprintf(image_format_arg, "--image-format=%s", app_image_format);
+ }
+
static const char* DEX2OAT_BIN = "/system/bin/dex2oat";
static const char* RUNTIME_ARG = "--runtime-arg";
@@ -675,6 +692,8 @@
char dex2oat_compiler_filter_arg[strlen("--compiler-filter=") + kPropertyValueMax];
bool have_dex2oat_swap_fd = false;
char dex2oat_swap_fd[strlen("--swap-fd=") + MAX_INT_LEN];
+ bool have_dex2oat_image_fd = false;
+ char dex2oat_image_fd[strlen("--app-image-fd=") + MAX_INT_LEN];
sprintf(zip_fd_arg, "--zip-fd=%d", zip_fd);
sprintf(zip_location_arg, "--zip-location=%s", input_file_name);
@@ -687,9 +706,11 @@
have_dex2oat_swap_fd = true;
sprintf(dex2oat_swap_fd, "--swap-fd=%d", swap_fd);
}
+ if (image_fd >= 0) {
+ have_dex2oat_image_fd = true;
+ sprintf(dex2oat_image_fd, "--app-image-fd=%d", image_fd);
+ }
- // use the JIT if either it's specified as a dexopt flag or if the property is set
- use_jit = use_jit || check_boolean_property("debug.usejit");
if (have_dex2oat_Xms_flag) {
sprintf(dex2oat_Xms_arg, "-Xms%s", dex2oat_Xms_flag);
}
@@ -703,7 +724,7 @@
} else if (vm_safe_mode) {
strcpy(dex2oat_compiler_filter_arg, "--compiler-filter=interpret-only");
have_dex2oat_compiler_filter_flag = true;
- } else if (use_jit) {
+ } else if (extract_only) {
strcpy(dex2oat_compiler_filter_arg, "--compiler-filter=verify-at-runtime");
have_dex2oat_compiler_filter_flag = true;
} else if (have_dex2oat_compiler_filter_flag) {
@@ -717,6 +738,17 @@
(get_property("dalvik.vm.always_debuggable", prop_buf, "0") > 0) &&
(prop_buf[0] == '1');
}
+ std::vector<std::string> profile_file_args(profile_files_fd.size());
+ std::vector<std::string> reference_profile_file_args(profile_files_fd.size());
+ // "reference-profile-file-fd" is longer than "profile-file-fd" so we can
+ // use it to set the max length.
+ char profile_buf[strlen("--reference-profile-file-fd=") + MAX_INT_LEN];
+ for (size_t k = 0; k < profile_files_fd.size(); k++) {
+ sprintf(profile_buf, "--profile-file-fd=%d", profile_files_fd[k]);
+ profile_file_args[k].assign(profile_buf);
+ sprintf(profile_buf, "--reference-profile-file-fd=%d", reference_profile_files_fd[k]);
+ reference_profile_file_args[k].assign(profile_buf);
+ }
ALOGV("Running %s in=%s out=%s\n", DEX2OAT_BIN, input_file_name, output_file_name);
@@ -728,10 +760,14 @@
+ (have_dex2oat_compiler_filter_flag ? 1 : 0)
+ (have_dex2oat_threads_flag ? 1 : 0)
+ (have_dex2oat_swap_fd ? 1 : 0)
+ + (have_dex2oat_image_fd ? 1 : 0)
+ (have_dex2oat_relocation_skip_flag ? 2 : 0)
+ (generate_debug_info ? 1 : 0)
+ (debuggable ? 1 : 0)
- + dex2oat_flags_count];
+ + (have_app_image_format ? 1 : 0)
+ + dex2oat_flags_count
+ + profile_files_fd.size()
+ + reference_profile_files_fd.size()];
int i = 0;
argv[i++] = DEX2OAT_BIN;
argv[i++] = zip_fd_arg;
@@ -762,12 +798,18 @@
if (have_dex2oat_swap_fd) {
argv[i++] = dex2oat_swap_fd;
}
+ if (have_dex2oat_image_fd) {
+ argv[i++] = dex2oat_image_fd;
+ }
if (generate_debug_info) {
argv[i++] = "--generate-debug-info";
}
if (debuggable) {
argv[i++] = "--debuggable";
}
+ if (have_app_image_format) {
+ argv[i++] = image_format_arg;
+ }
if (dex2oat_flags_count) {
i += split(dex2oat_flags, argv + i);
}
@@ -775,6 +817,10 @@
argv[i++] = RUNTIME_ARG;
argv[i++] = dex2oat_norelocation;
}
+ for (size_t k = 0; k < profile_file_args.size(); k++) {
+ argv[i++] = profile_file_args[k].c_str();
+ argv[i++] = reference_profile_file_args[k].c_str();
+ }
// Do not add after dex2oat_flags, they should override others for debugging.
argv[i] = NULL;
@@ -841,21 +887,171 @@
}
}
-int dexopt(const char *apk_path, uid_t uid, const char *pkgname, const char *instruction_set,
- int dexopt_needed, const char* oat_dir, int dexopt_flags)
+constexpr const char* PROFILE_FILE_EXTENSION = ".prof";
+constexpr const char* REFERENCE_PROFILE_FILE_EXTENSION = ".prof.ref";
+
+static void close_all_fds(const std::vector<int>& fds, const char* description) {
+ for (size_t i = 0; i < fds.size(); i++) {
+ if (close(fds[i]) != 0) {
+ PLOG(WARNING) << "Failed to close fd for " << description << " at index " << i;
+ }
+ }
+}
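The helper above closes every fd and only logs on failure, never aborting the loop. A standalone sketch of that pattern, with the `PLOG(WARNING)` replaced by a failure counter so the behavior is observable (the name `close_fds_counting_failures` is ours):

```cpp
#include <cassert>
#include <fcntl.h>
#include <unistd.h>
#include <vector>

// Close every fd in the list; count failures instead of logging them,
// and keep going so no later fd is leaked because an earlier close failed.
static int close_fds_counting_failures(const std::vector<int>& fds) {
    int failures = 0;
    for (int fd : fds) {
        if (close(fd) != 0) {
            failures++;  // installd emits a PLOG(WARNING) here instead
        }
    }
    return failures;
}
```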
+
+static int open_code_cache_for_user(userid_t user, const char* volume_uuid, const char* pkgname) {
+ std::string code_cache_path =
+ create_data_user_package_path(volume_uuid, user, pkgname) + CODE_CACHE_DIR_POSTFIX;
+
+ struct stat buffer;
+ // Check that the code cache exists. If not, return and don't log an error.
+    if (TEMP_FAILURE_RETRY(lstat(code_cache_path.c_str(), &buffer)) == -1) {
+        if (errno != ENOENT) {
+            PLOG(ERROR) << "Failed to lstat code_cache: " << code_cache_path;
+        }
+        return -1;
+    }
+
+ int code_cache_fd = open(code_cache_path.c_str(),
+ O_PATH | O_CLOEXEC | O_DIRECTORY | O_NOFOLLOW);
+ if (code_cache_fd < 0) {
+ PLOG(ERROR) << "Failed to open code_cache: " << code_cache_path;
+ }
+ return code_cache_fd;
+}
+
+// Keep profile paths in sync with ActivityThread.
+static void open_profile_files_for_user(uid_t uid, const char* pkgname, int code_cache_fd,
+ /*out*/ int* profile_fd, /*out*/ int* reference_profile_fd) {
+ *profile_fd = -1;
+ *reference_profile_fd = -1;
+ std::string profile_file(pkgname);
+ profile_file += PROFILE_FILE_EXTENSION;
+
+ // Check if the profile exists. If not, early return and don't log an error.
+ struct stat buffer;
+    if (TEMP_FAILURE_RETRY(fstatat(
+            code_cache_fd, profile_file.c_str(), &buffer, AT_SYMLINK_NOFOLLOW)) == -1) {
+        if (errno != ENOENT) {
+            PLOG(ERROR) << "Failed to fstatat profile file: " << profile_file;
+        }
+        return;
+    }
+
+ // Open in read-write to allow transfer of information from the current profile
+ // to the reference profile.
+ *profile_fd = openat(code_cache_fd, profile_file.c_str(), O_RDWR | O_NOFOLLOW);
+ if (*profile_fd < 0) {
+ PLOG(ERROR) << "Failed to open profile file: " << profile_file;
+ return;
+ }
+
+ std::string reference_profile(pkgname);
+ reference_profile += REFERENCE_PROFILE_FILE_EXTENSION;
+ // Give read-write permissions just for the user (changed with fchown after opening).
+ // We need write permission because dex2oat will update the reference profile files
+ // with the content of the corresponding current profile files.
+ *reference_profile_fd = openat(code_cache_fd, reference_profile.c_str(),
+ O_CREAT | O_RDWR | O_NOFOLLOW, S_IWUSR | S_IRUSR);
+    if (*reference_profile_fd < 0) {
+        close(*profile_fd);
+        *profile_fd = -1;
+        return;
+    }
+    if (fchown(*reference_profile_fd, uid, uid) < 0) {
+        PLOG(ERROR) << "Cannot change reference profile file owner: " << reference_profile;
+        close(*profile_fd);
+        close(*reference_profile_fd);
+        *profile_fd = -1;
+        *reference_profile_fd = -1;
+    }
+}
+
+static void open_profile_files(const char* volume_uuid, uid_t uid, const char* pkgname,
+ std::vector<int>* profile_fds, std::vector<int>* reference_profile_fds) {
+ std::vector<userid_t> users = get_known_users(volume_uuid);
+ for (auto user : users) {
+ int code_cache_fd = open_code_cache_for_user(user, volume_uuid, pkgname);
+ if (code_cache_fd < 0) {
+ continue;
+ }
+ int profile_fd = -1;
+ int reference_profile_fd = -1;
+ open_profile_files_for_user(
+ uid, pkgname, code_cache_fd, &profile_fd, &reference_profile_fd);
+ close(code_cache_fd);
+
+ // Add to the lists only if both fds are valid.
+ if ((profile_fd >= 0) && (reference_profile_fd >= 0)) {
+ profile_fds->push_back(profile_fd);
+ reference_profile_fds->push_back(reference_profile_fd);
+ }
+ }
+}
+
+static void trim_extension(char* path) {
+ // Trim the extension.
+ int pos = strlen(path);
+ for (; pos >= 0 && path[pos] != '.'; --pos) {}
+ if (pos >= 0) {
+ path[pos] = '\0'; // Trim extension
+ }
+}
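`trim_extension` scans backwards for the last `'.'` and cuts the string there, leaving paths without an extension untouched. A self-contained copy for illustration:

```cpp
#include <cassert>
#include <cstring>

// Standalone copy of trim_extension: truncate at the last '.', or leave
// the string unchanged when no '.' is present.
static void trim_extension(char* path) {
    int pos = static_cast<int>(strlen(path));
    for (; pos >= 0 && path[pos] != '.'; --pos) {}
    if (pos >= 0) {
        path[pos] = '\0';
    }
}
```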
+
+static bool add_extension_to_file_name(char* file_name, const char* extension) {
+ if (strlen(file_name) + strlen(extension) + 1 > PKG_PATH_MAX) {
+ return false;
+ }
+ strcat(file_name, extension);
+ return true;
+}
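The bounds check in `add_extension_to_file_name` counts the terminating NUL, so the appended result always fits the `PKG_PATH_MAX`-sized buffers used throughout `dexopt`. A sketch with an artificially small limit (`kPathMax` is an assumed value for this example, not installd's constant) so both outcomes can be seen:

```cpp
#include <cassert>
#include <cstring>

// Assumed small buffer limit for this sketch; installd uses PKG_PATH_MAX.
constexpr size_t kPathMax = 16;

// Same check as add_extension_to_file_name: append only when the result,
// including the terminating NUL, still fits in the buffer.
static bool append_extension(char* file_name, const char* extension) {
    if (strlen(file_name) + strlen(extension) + 1 > kPathMax) {
        return false;
    }
    strcat(file_name, extension);
    return true;
}
```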
+
+static int open_output_file(char* file_name, bool recreate) {
+ int flags = O_RDWR | O_CREAT;
+ if (recreate) {
+ unlink(file_name);
+ flags |= O_EXCL;
+ }
+ return open(file_name, flags, 0600);
+}
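The `recreate` flag distinguishes the two call sites below: the swap file passes `true` (unlink first, `O_EXCL` guarantees a fresh file), while the app image passes `false` so an existing image survives if dex2oat decides not to compile. A standalone mirror of the helper, exercised against a temporary path:

```cpp
#include <cassert>
#include <fcntl.h>
#include <unistd.h>

// Mirror of open_output_file: recreate=true unlinks any old file and uses
// O_EXCL to create a fresh one; recreate=false reuses an existing file.
static int open_output_file(const char* file_name, bool recreate) {
    int flags = O_RDWR | O_CREAT;
    if (recreate) {
        unlink(file_name);
        flags |= O_EXCL;
    }
    return open(file_name, flags, 0600);
}
```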
+
+static bool set_permissions_and_ownership(int fd, bool is_public, int uid, const char* path) {
+ if (fchmod(fd,
+ S_IRUSR|S_IWUSR|S_IRGRP |
+ (is_public ? S_IROTH : 0)) < 0) {
+ ALOGE("installd cannot chmod '%s' during dexopt\n", path);
+ return false;
+ } else if (fchown(fd, AID_SYSTEM, uid) < 0) {
+ ALOGE("installd cannot chown '%s' during dexopt\n", path);
+ return false;
+ }
+ return true;
+}
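The mode computed by `set_permissions_and_ownership` grants owner read/write, group read, and world read only for public APKs. The resulting octal modes can be checked in isolation (the helper name is ours):

```cpp
#include <cassert>
#include <sys/stat.h>

// The fchmod mode used above: owner rw, group r, plus world r for public APKs.
static mode_t dexopt_output_mode(bool is_public) {
    return S_IRUSR | S_IWUSR | S_IRGRP | (is_public ? S_IROTH : 0);
}
```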
+
+int dexopt(const char* apk_path, uid_t uid, const char* pkgname, const char* instruction_set,
+ int dexopt_needed, const char* oat_dir, int dexopt_flags, const char* volume_uuid,
+ bool use_profiles)
{
struct utimbuf ut;
struct stat input_stat;
char out_path[PKG_PATH_MAX];
char swap_file_name[PKG_PATH_MAX];
+ char image_path[PKG_PATH_MAX];
const char *input_file;
char in_odex_path[PKG_PATH_MAX];
- int res, input_fd=-1, out_fd=-1, swap_fd=-1;
+ int res, input_fd=-1, out_fd=-1, image_fd=-1, swap_fd=-1;
bool is_public = (dexopt_flags & DEXOPT_PUBLIC) != 0;
bool vm_safe_mode = (dexopt_flags & DEXOPT_SAFEMODE) != 0;
bool debuggable = (dexopt_flags & DEXOPT_DEBUGGABLE) != 0;
bool boot_complete = (dexopt_flags & DEXOPT_BOOTCOMPLETE) != 0;
- bool use_jit = (dexopt_flags & DEXOPT_USEJIT) != 0;
+ bool extract_only = (dexopt_flags & DEXOPT_EXTRACTONLY) != 0;
+ std::vector<int> profile_files_fd;
+ std::vector<int> reference_profile_files_fd;
+ if (use_profiles) {
+ open_profile_files(volume_uuid, uid, pkgname,
+ &profile_files_fd, &reference_profile_files_fd);
+ if (profile_files_fd.empty()) {
+            // Skip profile-guided compilation because no profiles were found.
+ return 0;
+ }
+ }
if ((dexopt_flags & ~DEXOPT_MASK) != 0) {
LOG_FATAL("dexopt flags contains unknown fields\n");
@@ -919,38 +1115,49 @@
ALOGE("installd cannot open '%s' for output during dexopt\n", out_path);
goto fail;
}
- if (fchmod(out_fd,
- S_IRUSR|S_IWUSR|S_IRGRP |
- (is_public ? S_IROTH : 0)) < 0) {
- ALOGE("installd cannot chmod '%s' during dexopt\n", out_path);
- goto fail;
- }
- if (fchown(out_fd, AID_SYSTEM, uid) < 0) {
- ALOGE("installd cannot chown '%s' during dexopt\n", out_path);
+ if (!set_permissions_and_ownership(out_fd, is_public, uid, out_path)) {
goto fail;
}
// Create a swap file if necessary.
if (ShouldUseSwapFileForDexopt()) {
// Make sure there really is enough space.
- size_t out_len = strlen(out_path);
- if (out_len + strlen(".swap") + 1 <= PKG_PATH_MAX) {
- strcpy(swap_file_name, out_path);
- strcpy(swap_file_name + strlen(out_path), ".swap");
- unlink(swap_file_name);
- swap_fd = open(swap_file_name, O_RDWR | O_CREAT | O_EXCL, 0600);
- if (swap_fd < 0) {
- // Could not create swap file. Optimistically go on and hope that we can compile
- // without it.
- ALOGE("installd could not create '%s' for swap during dexopt\n", swap_file_name);
- } else {
- // Immediately unlink. We don't really want to hit flash.
- unlink(swap_file_name);
- }
- } else {
- // Swap file path is too long. Try to run without.
- ALOGE("installd could not create swap file for path %s during dexopt\n", out_path);
+ strcpy(swap_file_name, out_path);
+ if (add_extension_to_file_name(swap_file_name, ".swap")) {
+ swap_fd = open_output_file(swap_file_name, /*recreate*/true);
}
+ if (swap_fd < 0) {
+ // Could not create swap file. Optimistically go on and hope that we can compile
+ // without it.
+ ALOGE("installd could not create '%s' for swap during dexopt\n", swap_file_name);
+ } else {
+ // Immediately unlink. We don't really want to hit flash.
+ unlink(swap_file_name);
+ }
+ }
+
+ // Avoid generating an app image for extract only since it will not contain any classes.
+ strcpy(image_path, out_path);
+ trim_extension(image_path);
+ if (add_extension_to_file_name(image_path, ".art")) {
+ char app_image_format[kPropertyValueMax];
+ bool have_app_image_format =
+ get_property("dalvik.vm.appimageformat", app_image_format, NULL) > 0;
+ if (!extract_only && have_app_image_format) {
+            // Recreate is false since we want to avoid deleting the image in case dex2oat
+            // decides not to compile anything.
+ image_fd = open_output_file(image_path, /*recreate*/false);
+ if (image_fd < 0) {
+ // Could not create application image file. Go on since we can compile without it.
+ ALOGE("installd could not create '%s' for image file during dexopt\n", image_path);
+ } else if (!set_permissions_and_ownership(image_fd, is_public, uid, image_path)) {
+ image_fd = -1;
+ }
+ }
+ // If we have a valid image file path but no image fd, erase the image file.
+ if (image_fd < 0) {
+ unlink(image_path);
+ }
}
ALOGV("DexInv: --- BEGIN '%s' ---\n", input_file);
@@ -987,14 +1194,9 @@
|| dexopt_needed == DEXOPT_SELF_PATCHOAT_NEEDED) {
run_patchoat(input_fd, out_fd, input_file, out_path, pkgname, instruction_set);
} else if (dexopt_needed == DEXOPT_DEX2OAT_NEEDED) {
- const char *input_file_name = strrchr(input_file, '/');
- if (input_file_name == NULL) {
- input_file_name = input_file;
- } else {
- input_file_name++;
- }
- run_dex2oat(input_fd, out_fd, input_file_name, out_path, swap_fd,
- instruction_set, vm_safe_mode, debuggable, boot_complete, use_jit);
+ run_dex2oat(input_fd, out_fd, image_fd, input_file, out_path, swap_fd,
+ instruction_set, vm_safe_mode, debuggable, boot_complete, extract_only,
+ profile_files_fd, reference_profile_files_fd);
} else {
ALOGE("Invalid dexopt needed: %d\n", dexopt_needed);
exit(73);
@@ -1016,9 +1218,16 @@
close(out_fd);
close(input_fd);
- if (swap_fd != -1) {
+ if (swap_fd >= 0) {
close(swap_fd);
}
+    if (use_profiles) {
+ close_all_fds(profile_files_fd, "profile_files_fd");
+ close_all_fds(reference_profile_files_fd, "reference_profile_files_fd");
+ }
+ if (image_fd >= 0) {
+ close(image_fd);
+ }
return 0;
fail:
@@ -1029,6 +1238,16 @@
if (input_fd >= 0) {
close(input_fd);
}
+    if (use_profiles) {
+ close_all_fds(profile_files_fd, "profile_files_fd");
+ close_all_fds(reference_profile_files_fd, "reference_profile_files_fd");
+ }
+ if (swap_fd >= 0) {
+ close(swap_fd);
+ }
+ if (image_fd >= 0) {
+ close(image_fd);
+ }
return -1;
}
@@ -1072,245 +1291,6 @@
}
}
-int movefileordir(char* srcpath, char* dstpath, int dstbasepos,
- int dstuid, int dstgid, struct stat* statbuf)
-{
- DIR *d;
- struct dirent *de;
- int res;
-
- int srcend = strlen(srcpath);
- int dstend = strlen(dstpath);
-
- if (lstat(srcpath, statbuf) < 0) {
- ALOGW("Unable to stat %s: %s\n", srcpath, strerror(errno));
- return 1;
- }
-
- if ((statbuf->st_mode&S_IFDIR) == 0) {
- mkinnerdirs(dstpath, dstbasepos, S_IRWXU|S_IRWXG|S_IXOTH,
- dstuid, dstgid, statbuf);
- ALOGV("Renaming %s to %s (uid %d)\n", srcpath, dstpath, dstuid);
- if (rename(srcpath, dstpath) >= 0) {
- if (chown(dstpath, dstuid, dstgid) < 0) {
- ALOGE("cannot chown %s: %s\n", dstpath, strerror(errno));
- unlink(dstpath);
- return 1;
- }
- } else {
- ALOGW("Unable to rename %s to %s: %s\n",
- srcpath, dstpath, strerror(errno));
- return 1;
- }
- return 0;
- }
-
- d = opendir(srcpath);
- if (d == NULL) {
- ALOGW("Unable to opendir %s: %s\n", srcpath, strerror(errno));
- return 1;
- }
-
- res = 0;
-
- while ((de = readdir(d))) {
- const char *name = de->d_name;
- /* always skip "." and ".." */
- if (name[0] == '.') {
- if (name[1] == 0) continue;
- if ((name[1] == '.') && (name[2] == 0)) continue;
- }
-
- if ((srcend+strlen(name)) >= (PKG_PATH_MAX-2)) {
- ALOGW("Source path too long; skipping: %s/%s\n", srcpath, name);
- continue;
- }
-
- if ((dstend+strlen(name)) >= (PKG_PATH_MAX-2)) {
- ALOGW("Destination path too long; skipping: %s/%s\n", dstpath, name);
- continue;
- }
-
- srcpath[srcend] = dstpath[dstend] = '/';
- strcpy(srcpath+srcend+1, name);
- strcpy(dstpath+dstend+1, name);
-
- if (movefileordir(srcpath, dstpath, dstbasepos, dstuid, dstgid, statbuf) != 0) {
- res = 1;
- }
-
- // Note: we will be leaving empty directories behind in srcpath,
- // but that is okay, the package manager will be erasing all of the
- // data associated with .apks that disappear.
-
- srcpath[srcend] = dstpath[dstend] = 0;
- }
-
- closedir(d);
- return res;
-}
-
-int movefiles()
-{
- DIR *d;
- int dfd, subfd;
- struct dirent *de;
- struct stat s;
- char buf[PKG_PATH_MAX+1];
- int bufp, bufe, bufi, readlen;
-
- char srcpkg[PKG_NAME_MAX];
- char dstpkg[PKG_NAME_MAX];
- char srcpath[PKG_PATH_MAX];
- char dstpath[PKG_PATH_MAX];
- int dstuid=-1, dstgid=-1;
- int hasspace;
-
- d = opendir(UPDATE_COMMANDS_DIR_PREFIX);
- if (d == NULL) {
- goto done;
- }
- dfd = dirfd(d);
-
- /* Iterate through all files in the directory, executing the
- * file movements requested there-in.
- */
- while ((de = readdir(d))) {
- const char *name = de->d_name;
-
- if (de->d_type == DT_DIR) {
- continue;
- } else {
- subfd = openat(dfd, name, O_RDONLY);
- if (subfd < 0) {
- ALOGW("Unable to open update commands at %s%s\n",
- UPDATE_COMMANDS_DIR_PREFIX, name);
- continue;
- }
-
- bufp = 0;
- bufe = 0;
- buf[PKG_PATH_MAX] = 0;
- srcpkg[0] = dstpkg[0] = 0;
- while (1) {
- bufi = bufp;
- while (bufi < bufe && buf[bufi] != '\n') {
- bufi++;
- }
- if (bufi < bufe) {
- buf[bufi] = 0;
- ALOGV("Processing line: %s\n", buf+bufp);
- hasspace = 0;
- while (bufp < bufi && isspace(buf[bufp])) {
- hasspace = 1;
- bufp++;
- }
- if (buf[bufp] == '#' || bufp == bufi) {
- // skip comments and empty lines.
- } else if (hasspace) {
- if (dstpkg[0] == 0) {
- ALOGW("Path before package line in %s%s: %s\n",
- UPDATE_COMMANDS_DIR_PREFIX, name, buf+bufp);
- } else if (srcpkg[0] == 0) {
- // Skip -- source package no longer exists.
- } else {
- ALOGV("Move file: %s (from %s to %s)\n", buf+bufp, srcpkg, dstpkg);
- if (!create_move_path(srcpath, srcpkg, buf+bufp, 0) &&
- !create_move_path(dstpath, dstpkg, buf+bufp, 0)) {
- movefileordir(srcpath, dstpath,
- strlen(dstpath)-strlen(buf+bufp),
- dstuid, dstgid, &s);
- }
- }
- } else {
- char* div = strchr(buf+bufp, ':');
- if (div == NULL) {
- ALOGW("Bad package spec in %s%s; no ':' sep: %s\n",
- UPDATE_COMMANDS_DIR_PREFIX, name, buf+bufp);
- } else {
- *div = 0;
- div++;
- if (strlen(buf+bufp) < PKG_NAME_MAX) {
- strcpy(dstpkg, buf+bufp);
- } else {
- srcpkg[0] = dstpkg[0] = 0;
- ALOGW("Package name too long in %s%s: %s\n",
- UPDATE_COMMANDS_DIR_PREFIX, name, buf+bufp);
- }
- if (strlen(div) < PKG_NAME_MAX) {
- strcpy(srcpkg, div);
- } else {
- srcpkg[0] = dstpkg[0] = 0;
- ALOGW("Package name too long in %s%s: %s\n",
- UPDATE_COMMANDS_DIR_PREFIX, name, div);
- }
- if (srcpkg[0] != 0) {
- if (!create_pkg_path(srcpath, srcpkg, PKG_DIR_POSTFIX, 0)) {
- if (lstat(srcpath, &s) < 0) {
- // Package no longer exists -- skip.
- srcpkg[0] = 0;
- }
- } else {
- srcpkg[0] = 0;
- ALOGW("Can't create path %s in %s%s\n",
- div, UPDATE_COMMANDS_DIR_PREFIX, name);
- }
- if (srcpkg[0] != 0) {
- if (!create_pkg_path(dstpath, dstpkg, PKG_DIR_POSTFIX, 0)) {
- if (lstat(dstpath, &s) == 0) {
- dstuid = s.st_uid;
- dstgid = s.st_gid;
- } else {
- // Destination package doesn't
- // exist... due to original-package,
- // this is normal, so don't be
- // noisy about it.
- srcpkg[0] = 0;
- }
- } else {
- srcpkg[0] = 0;
- ALOGW("Can't create path %s in %s%s\n",
- div, UPDATE_COMMANDS_DIR_PREFIX, name);
- }
- }
- ALOGV("Transfering from %s to %s: uid=%d\n",
- srcpkg, dstpkg, dstuid);
- }
- }
- }
- bufp = bufi+1;
- } else {
- if (bufp == 0) {
- if (bufp < bufe) {
- ALOGW("Line too long in %s%s, skipping: %s\n",
- UPDATE_COMMANDS_DIR_PREFIX, name, buf);
- }
- } else if (bufp < bufe) {
- memcpy(buf, buf+bufp, bufe-bufp);
- bufe -= bufp;
- bufp = 0;
- }
- readlen = read(subfd, buf+bufe, PKG_PATH_MAX-bufe);
- if (readlen < 0) {
- ALOGW("Failure reading update commands in %s%s: %s\n",
- UPDATE_COMMANDS_DIR_PREFIX, name, strerror(errno));
- break;
- } else if (readlen == 0) {
- break;
- }
- bufe += readlen;
- buf[bufe] = 0;
- ALOGV("Read buf: %s\n", buf);
- }
- }
- close(subfd);
- }
- }
- closedir(d);
-done:
- return 0;
-}
-
int linklib(const char* uuid, const char* pkgname, const char* asecLibDir, int userId)
{
struct stat s, libStat;
diff --git a/cmds/installd/commands.h b/cmds/installd/commands.h
index 5510e7b..53a789f 100644
--- a/cmds/installd/commands.h
+++ b/cmds/installd/commands.h
@@ -29,14 +29,14 @@
namespace installd {
int create_app_data(const char *uuid, const char *pkgname, userid_t userid, int flags,
- appid_t appid, const char* seinfo);
+ appid_t appid, const char* seinfo, int target_sdk_version);
int restorecon_app_data(const char* uuid, const char* pkgName, userid_t userid, int flags,
appid_t appid, const char* seinfo);
int clear_app_data(const char *uuid, const char *pkgname, userid_t userid, int flags);
int destroy_app_data(const char *uuid, const char *pkgname, userid_t userid, int flags);
int move_complete_app(const char* from_uuid, const char *to_uuid, const char *package_name,
- const char *data_app_name, appid_t appid, const char* seinfo);
+ const char *data_app_name, appid_t appid, const char* seinfo, int target_sdk_version);
int get_app_size(const char *uuid, const char *pkgname, int userid, int flags,
const char *apkpath, const char *libdirpath, const char *fwdlock_apkpath,
@@ -48,9 +48,9 @@
int rm_dex(const char *path, const char *instruction_set);
int free_cache(const char *uuid, int64_t free_size);
int dexopt(const char *apk_path, uid_t uid, const char *pkgName, const char *instruction_set,
- int dexopt_needed, const char* oat_dir, int dexopt_flags);
+ int dexopt_needed, const char* oat_dir, int dexopt_flags,
+ const char* volume_uuid, bool use_profiles);
int mark_boot_complete(const char *instruction_set);
-int movefiles();
int linklib(const char* uuid, const char* pkgname, const char* asecLibDir, int userId);
int idmap(const char *target_path, const char *overlay_path, uid_t uid);
int create_oat_dir(const char* oat_dir, const char *instruction_set);
diff --git a/cmds/installd/installd.cpp b/cmds/installd/installd.cpp
index 8542c4a..c0ae5b7 100644
--- a/cmds/installd/installd.cpp
+++ b/cmds/installd/installd.cpp
@@ -45,8 +45,6 @@
#define TOKEN_MAX 16 /* max number of arguments in buffer */
#define REPLY_MAX 256 /* largest reply allowed */
-#define DEBUG_FBE 0
-
namespace android {
namespace installd {
@@ -192,8 +190,9 @@
static int do_create_app_data(char **arg, char reply[REPLY_MAX] ATTRIBUTE_UNUSED) {
/* const char *uuid, const char *pkgname, userid_t userid, int flags,
- appid_t appid, const char* seinfo */
- return create_app_data(parse_null(arg[0]), arg[1], atoi(arg[2]), atoi(arg[3]), atoi(arg[4]), arg[5]);
+ appid_t appid, const char* seinfo, int target_sdk_version */
+ return create_app_data(parse_null(arg[0]), arg[1], atoi(arg[2]), atoi(arg[3]),
+ atoi(arg[4]), arg[5], atoi(arg[6]));
}
static int do_restorecon_app_data(char **arg, char reply[REPLY_MAX] ATTRIBUTE_UNUSED) {
@@ -212,11 +211,41 @@
return destroy_app_data(parse_null(arg[0]), arg[1], atoi(arg[2]), atoi(arg[3]));
}
-static int do_dexopt(char **arg, char reply[REPLY_MAX] ATTRIBUTE_UNUSED)
+static int do_ota_dexopt(char **arg, char reply[REPLY_MAX] ATTRIBUTE_UNUSED) {
+ // Time to fork and run otapreopt.
+ pid_t pid = fork();
+ if (pid == 0) {
+ const char* argv[1 + 9 + 1];
+ argv[0] = "/system/bin/otapreopt";
+ for (size_t i = 1; i <= 9; ++i) {
+ argv[i] = arg[i - 1];
+ }
+ argv[10] = nullptr;
+
+ execv(argv[0], (char * const *)argv);
+ ALOGE("execv(OTAPREOPT) failed: %s\n", strerror(errno));
+ exit(99);
+ } else {
+ int res = wait_child(pid);
+ if (res == 0) {
+ ALOGV("DexInv: --- END OTAPREOPT (success) ---\n");
+ } else {
+ ALOGE("DexInv: --- END OTAPREOPT --- status=0x%04x, process failed\n", res);
+ }
+ return res;
+ }
+}
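`do_ota_dexopt` follows the classic fork/exec/wait shape. A reduced skeleton of that flow (the child simply exits here, standing in for the `execv` of `/system/bin/otapreopt`, so the parent's status handling is visible on its own):

```cpp
#include <cassert>
#include <sys/wait.h>
#include <unistd.h>

// Fork a child, wait for it, and recover its exit code -- the same shape
// as do_ota_dexopt, minus the execv and installd's wait_child wrapper.
static int run_child_and_wait(int child_exit_code) {
    pid_t pid = fork();
    if (pid == 0) {
        _exit(child_exit_code);  // in installd, this is where execv() runs
    }
    int status = 0;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```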
+
+static int do_dexopt(char **arg, char reply[REPLY_MAX])
{
- /* apk_path, uid, pkgname, instruction_set, dexopt_needed, oat_dir, dexopt_flags */
+ int dexopt_flags = atoi(arg[6]);
+ if ((dexopt_flags & DEXOPT_OTA) != 0) {
+ return do_ota_dexopt(arg, reply);
+ }
+ /* apk_path, uid, pkgname, instruction_set, dexopt_needed, oat_dir, dexopt_flags, volume_uuid,
+ use_profiles */
return dexopt(arg[0], atoi(arg[1]), arg[2], arg[3], atoi(arg[4]),
- arg[5], atoi(arg[6]));
+                  arg[5], dexopt_flags, parse_null(arg[7]), atoi(arg[8]) != 0);
}
static int do_mark_boot_complete(char **arg, char reply[REPLY_MAX] ATTRIBUTE_UNUSED)
@@ -258,8 +287,10 @@
static int do_move_complete_app(char **arg, char reply[REPLY_MAX] ATTRIBUTE_UNUSED) {
/* const char* from_uuid, const char *to_uuid, const char *package_name,
- const char *data_app_name, appid_t appid, const char* seinfo */
- return move_complete_app(parse_null(arg[0]), parse_null(arg[1]), arg[2], arg[3], atoi(arg[4]), arg[5]);
+ const char *data_app_name, appid_t appid, const char* seinfo,
+ int target_sdk_version */
+ return move_complete_app(parse_null(arg[0]), parse_null(arg[1]), arg[2], arg[3],
+ atoi(arg[4]), arg[5], atoi(arg[6]));
}
static int do_mk_user_config(char **arg, char reply[REPLY_MAX] ATTRIBUTE_UNUSED)
@@ -272,11 +303,6 @@
return delete_user(parse_null(arg[0]), atoi(arg[1])); /* uuid, userid */
}
-static int do_movefiles(char **arg ATTRIBUTE_UNUSED, char reply[REPLY_MAX] ATTRIBUTE_UNUSED)
-{
- return movefiles();
-}
-
static int do_linklib(char **arg, char reply[REPLY_MAX] ATTRIBUTE_UNUSED)
{
return linklib(parse_null(arg[0]), arg[1], arg[2], atoi(arg[3]));
@@ -314,18 +340,17 @@
struct cmdinfo cmds[] = {
{ "ping", 0, do_ping },
- { "create_app_data", 6, do_create_app_data },
+ { "create_app_data", 7, do_create_app_data },
{ "restorecon_app_data", 6, do_restorecon_app_data },
{ "clear_app_data", 4, do_clear_app_data },
{ "destroy_app_data", 4, do_destroy_app_data },
- { "move_complete_app", 6, do_move_complete_app },
+ { "move_complete_app", 7, do_move_complete_app },
{ "get_app_size", 9, do_get_app_size },
- { "dexopt", 7, do_dexopt },
+ { "dexopt", 9, do_dexopt },
{ "markbootcomplete", 1, do_mark_boot_complete },
{ "rmdex", 2, do_rm_dex },
{ "freecache", 2, do_free_cache },
- { "movefiles", 0, do_movefiles },
{ "linklib", 4, do_linklib },
{ "mkuserconfig", 1, do_mk_user_config },
{ "rmuser", 2, do_rm_user },
@@ -465,139 +490,11 @@
}
int version = oldVersion;
- // /data/user
- char *user_data_dir = build_string2(android_data_dir.path, SECONDARY_USER_PREFIX);
- // /data/data
- char *legacy_data_dir = build_string2(android_data_dir.path, PRIMARY_USER_PREFIX);
- // /data/user/0
- char *primary_data_dir = build_string3(android_data_dir.path, SECONDARY_USER_PREFIX, "0");
- if (!user_data_dir || !legacy_data_dir || !primary_data_dir) {
- goto fail;
- }
-
- // Make the /data/user directory if necessary
- if (access(user_data_dir, R_OK) < 0) {
- if (mkdir(user_data_dir, 0711) < 0) {
- goto fail;
- }
- if (chown(user_data_dir, AID_SYSTEM, AID_SYSTEM) < 0) {
- goto fail;
- }
- if (chmod(user_data_dir, 0711) < 0) {
- goto fail;
- }
- }
- // Make the /data/user/0 symlink to /data/data if necessary
- if (access(primary_data_dir, R_OK) < 0) {
- if (symlink(legacy_data_dir, primary_data_dir)) {
- goto fail;
- }
- }
-
- if (version == 0) {
- // Introducing multi-user, so migrate /data/media contents into /data/media/0
- ALOGD("Upgrading /data/media for multi-user");
-
- // Ensure /data/media
- if (fs_prepare_dir(android_media_dir.path, 0770, AID_MEDIA_RW, AID_MEDIA_RW) == -1) {
- goto fail;
- }
-
- // /data/media.tmp
- char media_tmp_dir[PATH_MAX];
- snprintf(media_tmp_dir, PATH_MAX, "%smedia.tmp", android_data_dir.path);
-
- // Only copy when upgrade not already in progress
- if (access(media_tmp_dir, F_OK) == -1) {
- if (rename(android_media_dir.path, media_tmp_dir) == -1) {
- ALOGE("Failed to move legacy media path: %s", strerror(errno));
- goto fail;
- }
- }
-
- // Create /data/media again
- if (fs_prepare_dir(android_media_dir.path, 0770, AID_MEDIA_RW, AID_MEDIA_RW) == -1) {
- goto fail;
- }
-
- if (selinux_android_restorecon(android_media_dir.path, 0)) {
- goto fail;
- }
-
- // /data/media/0
- char owner_media_dir[PATH_MAX];
- snprintf(owner_media_dir, PATH_MAX, "%s0", android_media_dir.path);
-
- // Move any owner data into place
- if (access(media_tmp_dir, F_OK) == 0) {
- if (rename(media_tmp_dir, owner_media_dir) == -1) {
- ALOGE("Failed to move owner media path: %s", strerror(errno));
- goto fail;
- }
- }
-
- // Ensure media directories for any existing users
- DIR *dir;
- struct dirent *dirent;
- char user_media_dir[PATH_MAX];
-
- dir = opendir(user_data_dir);
- if (dir != NULL) {
- while ((dirent = readdir(dir))) {
- if (dirent->d_type == DT_DIR) {
- const char *name = dirent->d_name;
-
- // skip "." and ".."
- if (name[0] == '.') {
- if (name[1] == 0) continue;
- if ((name[1] == '.') && (name[2] == 0)) continue;
- }
-
- // /data/media/<user_id>
- snprintf(user_media_dir, PATH_MAX, "%s%s", android_media_dir.path, name);
- if (fs_prepare_dir(user_media_dir, 0770, AID_MEDIA_RW, AID_MEDIA_RW) == -1) {
- goto fail;
- }
- }
- }
- closedir(dir);
- }
-
- version = 1;
- }
-
- // /data/media/obb
- char media_obb_dir[PATH_MAX];
- snprintf(media_obb_dir, PATH_MAX, "%sobb", android_media_dir.path);
-
- if (version == 1) {
- // Introducing /data/media/obb for sharing OBB across users; migrate
- // any existing OBB files from owner.
- ALOGD("Upgrading to shared /data/media/obb");
-
- // /data/media/0/Android/obb
- char owner_obb_path[PATH_MAX];
- snprintf(owner_obb_path, PATH_MAX, "%s0/Android/obb", android_media_dir.path);
-
- // Only move if target doesn't already exist
- if (access(media_obb_dir, F_OK) != 0 && access(owner_obb_path, F_OK) == 0) {
- if (rename(owner_obb_path, media_obb_dir) == -1) {
- ALOGE("Failed to move OBB from owner: %s", strerror(errno));
- goto fail;
- }
- }
-
+ if (version < 2) {
+ SLOGD("Assuming that device has multi-user storage layout; upgrade no longer supported");
version = 2;
}
- if (ensure_media_user_dirs(nullptr, 0) == -1) {
- ALOGE("Failed to setup media for user 0");
- goto fail;
- }
- if (fs_prepare_dir(media_obb_dir, 0770, AID_MEDIA_RW, AID_MEDIA_RW) == -1) {
- goto fail;
- }
-
if (ensure_config_user_dirs(0) == -1) {
ALOGE("Failed to setup misc for user 0");
goto fail;
@@ -617,7 +514,7 @@
DIR *dir;
struct dirent *dirent;
- dir = opendir(user_data_dir);
+ dir = opendir("/data/user");
if (dir != NULL) {
while ((dirent = readdir(dir))) {
const char *name = dirent->d_name;
@@ -679,9 +576,6 @@
res = 0;
fail:
- free(user_data_dir);
- free(legacy_data_dir);
- free(primary_data_dir);
return res;
}
@@ -748,12 +642,6 @@
}
fcntl(lsocket, F_SETFD, FD_CLOEXEC);
- // Perform all filesystem access as system so that FBE emulation mode
- // can block access using chmod 000.
-#if DEBUG_FBE
- setfsuid(AID_SYSTEM);
-#endif
-
for (;;) {
alen = sizeof(addr);
s = accept(lsocket, &addr, &alen);
diff --git a/cmds/installd/installd_constants.h b/cmds/installd/installd_constants.h
index 220de9a..0d21519 100644
--- a/cmds/installd/installd_constants.h
+++ b/cmds/installd/installd_constants.h
@@ -48,8 +48,7 @@
// This is used as a string literal, can't be constants. TODO: std::string...
#define DALVIK_CACHE "dalvik-cache"
constexpr const char* DALVIK_CACHE_POSTFIX = "/classes.dex";
-
-constexpr const char* UPDATE_COMMANDS_DIR_PREFIX = "/system/etc/updatecmds/";
+constexpr const char* DALVIK_CACHE_POSTFIX2 = "@classes.dex";
constexpr const char* IDMAP_PREFIX = "/data/resource-cache/";
constexpr const char* IDMAP_SUFFIX = "@idmap";
@@ -75,7 +74,8 @@
constexpr int DEXOPT_SAFEMODE = 1 << 2;
constexpr int DEXOPT_DEBUGGABLE = 1 << 3;
constexpr int DEXOPT_BOOTCOMPLETE = 1 << 4;
-constexpr int DEXOPT_USEJIT = 1 << 5;
+constexpr int DEXOPT_EXTRACTONLY = 1 << 5;
+constexpr int DEXOPT_OTA = 1 << 6;
/* all known values for dexopt flags */
constexpr int DEXOPT_MASK =
@@ -83,7 +83,8 @@
| DEXOPT_SAFEMODE
| DEXOPT_DEBUGGABLE
| DEXOPT_BOOTCOMPLETE
- | DEXOPT_USEJIT;
+ | DEXOPT_EXTRACTONLY
+ | DEXOPT_OTA;
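The new `DEXOPT_EXTRACTONLY` and `DEXOPT_OTA` bits extend `DEXOPT_MASK`, which `dexopt()` uses to reject unknown flags up front. A reduced sketch of that validation, covering only the flag values visible in this hunk (the mask's first member, before `DEXOPT_SAFEMODE`, is elided in the diff and therefore omitted here):

```cpp
#include <cassert>

// Flag values copied from the hunk above; only the flags shown are included.
constexpr int DEXOPT_SAFEMODE     = 1 << 2;
constexpr int DEXOPT_DEBUGGABLE   = 1 << 3;
constexpr int DEXOPT_BOOTCOMPLETE = 1 << 4;
constexpr int DEXOPT_EXTRACTONLY  = 1 << 5;
constexpr int DEXOPT_OTA          = 1 << 6;
constexpr int DEXOPT_MASK = DEXOPT_SAFEMODE | DEXOPT_DEBUGGABLE
        | DEXOPT_BOOTCOMPLETE | DEXOPT_EXTRACTONLY | DEXOPT_OTA;

// The check dexopt() performs: any bit outside the mask is a fatal error.
static bool flags_are_known(int dexopt_flags) {
    return (dexopt_flags & ~DEXOPT_MASK) == 0;
}
```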
#define ARRAY_SIZE(a) (sizeof(a) / sizeof(*(a)))
diff --git a/cmds/installd/otapreopt.cpp b/cmds/installd/otapreopt.cpp
new file mode 100644
index 0000000..27f7939
--- /dev/null
+++ b/cmds/installd/otapreopt.cpp
@@ -0,0 +1,641 @@
+/*
+ ** Copyright 2016, The Android Open Source Project
+ **
+ ** Licensed under the Apache License, Version 2.0 (the "License");
+ ** you may not use this file except in compliance with the License.
+ ** You may obtain a copy of the License at
+ **
+ ** http://www.apache.org/licenses/LICENSE-2.0
+ **
+ ** Unless required by applicable law or agreed to in writing, software
+ ** distributed under the License is distributed on an "AS IS" BASIS,
+ ** WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ ** See the License for the specific language governing permissions and
+ ** limitations under the License.
+ */
+
+#include <algorithm>
+#include <inttypes.h>
+#include <random>
+#include <selinux/android.h>
+#include <selinux/avc.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/capability.h>
+#include <sys/prctl.h>
+#include <sys/stat.h>
+#include <sys/wait.h>
+
+#include <android-base/logging.h>
+#include <android-base/macros.h>
+#include <android-base/stringprintf.h>
+#include <cutils/fs.h>
+#include <cutils/log.h>
+#include <cutils/properties.h>
+#include <private/android_filesystem_config.h>
+
+#include <commands.h>
+#include <globals.h>
+#include <installd_deps.h> // Need to fill in requirements of commands.
+#include <string_helpers.h>
+#include <system_properties.h>
+#include <utils.h>
+
+#ifndef LOG_TAG
+#define LOG_TAG "otapreopt"
+#endif
+
+#define BUFFER_MAX 1024 /* input buffer for commands */
+#define TOKEN_MAX 16 /* max number of arguments in buffer */
+#define REPLY_MAX 256 /* largest reply allowed */
+
+using android::base::StringPrintf;
+
+namespace android {
+namespace installd {
+
+static constexpr const char* kBootClassPathPropertyName = "env.BOOTCLASSPATH";
+static constexpr const char* kAndroidRootPathPropertyName = "env.ANDROID_ROOT";
+static constexpr const char* kOTARootDirectory = "/system-b";
+static constexpr size_t kISAIndex = 3;
+
+template<typename T>
+static constexpr T RoundDown(T x, typename std::decay<T>::type n) {
+ return DCHECK_CONSTEXPR(IsPowerOfTwo(n), , T(0))(x & -n);
+}
+
+template<typename T>
+static constexpr T RoundUp(T x, typename std::remove_reference<T>::type n) {
+ return RoundDown(x + n - 1, n);
+}
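The templates above rely on the power-of-two trick `x & -n`, which clears the low bits of `x` and so rounds down to a multiple of `n`; `RoundUp` adds `n - 1` first. Simplified forms without the `DCHECK_CONSTEXPR` wrapper:

```cpp
#include <cassert>

// Round x down (or up) to a multiple of n, where n must be a power of two:
// -n has all high bits set, so x & -n clears the low log2(n) bits.
template <typename T>
constexpr T RoundDown(T x, T n) {
    return x & -n;
}

template <typename T>
constexpr T RoundUp(T x, T n) {
    return RoundDown(x + n - 1, n);
}
```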
+
+class OTAPreoptService {
+ public:
+ static constexpr const char* kOTADataDirectory = "/data/ota";
+
+ // Main driver. Performs the following steps.
+ //
+ // 1) Parse options (read system properties etc from B partition).
+ //
+ // 2) Read in package data.
+ //
+ // 3) Prepare environment variables.
+ //
+    // 4) Prepare (compile) boot image, if necessary.
+ //
+ // 5) Run update.
+ int Main(int argc, char** argv) {
+ if (!ReadSystemProperties()) {
+            LOG(ERROR) << "Failed reading system properties.";
+ return 1;
+ }
+
+ if (!ReadEnvironment()) {
+ LOG(ERROR) << "Failed reading environment properties.";
+ return 2;
+ }
+
+ if (!ReadPackage(argc, argv)) {
+ LOG(ERROR) << "Failed reading command line file.";
+ return 3;
+ }
+
+ PrepareEnvironment();
+
+ if (!PrepareBootImage()) {
+ LOG(ERROR) << "Failed preparing boot image.";
+ return 4;
+ }
+
+ int dexopt_retcode = RunPreopt();
+
+ return dexopt_retcode;
+ }
+
+ int GetProperty(const char* key, char* value, const char* default_value) {
+ const std::string* prop_value = system_properties_.GetProperty(key);
+ if (prop_value == nullptr) {
+ if (default_value == nullptr) {
+ return 0;
+ }
+ // Copy in the default value.
+ strncpy(value, default_value, kPropertyValueMax - 1);
+ value[kPropertyValueMax - 1] = 0;
+            return strlen(default_value);  // TODO: Need to truncate?
+ }
+ size_t size = std::min(kPropertyValueMax - 1, prop_value->length());
+ strncpy(value, prop_value->data(), size);
+ value[size] = 0;
+ return static_cast<int>(size);
+ }
+
+ private:
+ bool ReadSystemProperties() {
+ // TODO(agampe): What to do about the things in default.prop? It's only heap sizes, so it's easy
+ // to emulate for now, but has issues (e.g., vendors modifying the boot classpath
+ // may require larger values here - revisit). That's why this goes first, so that
+ // if those dummy values are overridden in build.prop, that's what we'll get.
+ //
+ // Note: It seems we'll get access to the B root partition, so we should read the default.prop
+ // file.
+ // if (!system_properties_.Load(b_mount_path_ + "/default.prop") {
+ // return false;
+ // }
+ system_properties_.SetProperty("dalvik.vm.image-dex2oat-Xms", "64m");
+ system_properties_.SetProperty("dalvik.vm.image-dex2oat-Xmx", "64m");
+ system_properties_.SetProperty("dalvik.vm.dex2oat-Xms", "64m");
+ system_properties_.SetProperty("dalvik.vm.dex2oat-Xmx", "512m");
+
+ // TODO(agampe): Do this properly/test.
+ return system_properties_.Load(b_mount_path_ + "/system/build.prop");
+ }
+
+ bool ReadEnvironment() {
+ // Read important environment variables. For simplicity, store them as
+ // system properties.
+ // TODO(agampe): We'll have to parse init.environ.rc for BOOTCLASSPATH.
+ // For now, just the A version.
+ const char* boot_classpath = getenv("BOOTCLASSPATH");
+ if (boot_classpath == nullptr) {
+ return false;
+ }
+ system_properties_.SetProperty(kBootClassPathPropertyName, boot_classpath);
+
+ const char* root_path = getenv("ANDROID_ROOT");
+ if (root_path == nullptr) {
+ return false;
+ }
+ system_properties_.SetProperty(kAndroidRootPathPropertyName, b_mount_path_ + root_path);
+
+ return true;
+ }
+
+ bool ReadPackage(int argc ATTRIBUTE_UNUSED, char** argv) {
+ size_t index = 0;
+ while (index < ARRAY_SIZE(package_parameters_) &&
+ argv[index + 1] != nullptr) {
+ package_parameters_[index] = argv[index + 1];
+ index++;
+ }
+ if (index != ARRAY_SIZE(package_parameters_)) {
+ LOG(ERROR) << "Wrong number of parameters";
+ return false;
+ }
+
+ return true;
+ }
+
+ void PrepareEnvironment() {
+ CHECK(system_properties_.GetProperty(kBootClassPathPropertyName) != nullptr);
+ const std::string& boot_cp =
+ *system_properties_.GetProperty(kBootClassPathPropertyName);
+ environ_.push_back(StringPrintf("BOOTCLASSPATH=%s", boot_cp.c_str()));
+ environ_.push_back(StringPrintf("ANDROID_DATA=%s", kOTADataDirectory));
+ CHECK(system_properties_.GetProperty(kAndroidRootPathPropertyName) != nullptr);
+ const std::string& android_root =
+ *system_properties_.GetProperty(kAndroidRootPathPropertyName);
+ environ_.push_back(StringPrintf("ANDROID_ROOT=%s", android_root.c_str()));
+
+ for (const std::string& e : environ_) {
+ putenv(const_cast<char*>(e.c_str()));
+ }
+ }
+
+ // Ensure that we have the right boot image. The first time any app is
+ // compiled, we'll try to generate it.
+ bool PrepareBootImage() {
+ if (package_parameters_[kISAIndex] == nullptr) {
+ LOG(ERROR) << "Instruction set missing.";
+ return false;
+ }
+ const char* isa = package_parameters_[kISAIndex];
+
+ // Check whether the file exists where expected.
+ std::string dalvik_cache = std::string(kOTADataDirectory) + "/" + DALVIK_CACHE;
+ std::string isa_path = dalvik_cache + "/" + isa;
+ std::string art_path = isa_path + "/system@framework@boot.art";
+ std::string oat_path = isa_path + "/system@framework@boot.oat";
+ if (access(art_path.c_str(), F_OK) == 0 &&
+ access(oat_path.c_str(), F_OK) == 0) {
+ // Files exist, assume everything is alright.
+ return true;
+ }
+
+ // Create the directories, if necessary.
+ if (access(dalvik_cache.c_str(), F_OK) != 0) {
+ if (mkdir(dalvik_cache.c_str(), 0711) != 0) {
+ PLOG(ERROR) << "Could not create dalvik-cache dir";
+ return false;
+ }
+ }
+ if (access(isa_path.c_str(), F_OK) != 0) {
+ if (mkdir(isa_path.c_str(), 0711) != 0) {
+ PLOG(ERROR) << "Could not create dalvik-cache isa dir";
+ return false;
+ }
+ }
+
+ // Prepare and run dex2oat.
+ // TODO: Delete files, just for a blank slate.
+ const std::string& boot_cp = *system_properties_.GetProperty(kBootClassPathPropertyName);
+
+ // This needs to be kept in sync with ART, see art/runtime/gc/space/image_space.cc.
+ std::vector<std::string> cmd;
+ cmd.push_back(b_mount_path_ + "/system/bin/dex2oat");
+ cmd.push_back(StringPrintf("--image=%s", art_path.c_str()));
+ for (const std::string& boot_part : Split(boot_cp, ':')) {
+ cmd.push_back(StringPrintf("--dex-file=%s", boot_part.c_str()));
+ }
+ cmd.push_back(StringPrintf("--oat-file=%s", oat_path.c_str()));
+
+ int32_t base_offset = ChooseRelocationOffsetDelta(ART_BASE_ADDRESS_MIN_DELTA,
+ ART_BASE_ADDRESS_MAX_DELTA);
+ cmd.push_back(StringPrintf("--base=0x%x", ART_BASE_ADDRESS + base_offset));
+
+ cmd.push_back(StringPrintf("--instruction-set=%s", isa));
+
+ // These things are pushed by AndroidRuntime, see frameworks/base/core/jni/AndroidRuntime.cpp.
+ AddCompilerOptionFromSystemProperty("dalvik.vm.image-dex2oat-Xms",
+ "-Xms",
+ true,
+ cmd);
+ AddCompilerOptionFromSystemProperty("dalvik.vm.image-dex2oat-Xmx",
+ "-Xmx",
+ true,
+ cmd);
+ AddCompilerOptionFromSystemProperty("dalvik.vm.image-dex2oat-filter",
+ "--compiler-filter=",
+ false,
+ cmd);
+ cmd.push_back(StringPrintf("--image-classes=%s/system/etc/preloaded-classes",
+ b_mount_path_.c_str()));
+ // TODO: Compiled-classes.
+ const std::string* extra_opts =
+ system_properties_.GetProperty("dalvik.vm.image-dex2oat-flags");
+ if (extra_opts != nullptr) {
+ std::vector<std::string> extra_vals = Split(*extra_opts, ' ');
+ cmd.insert(cmd.end(), extra_vals.begin(), extra_vals.end());
+ }
+ // TODO: Should we lower this? It's usually set close to max, because
+ // normally there's not much else going on at boot.
+ AddCompilerOptionFromSystemProperty("dalvik.vm.image-dex2oat-threads",
+ "-j",
+ false,
+ cmd);
+ AddCompilerOptionFromSystemProperty(
+ StringPrintf("dalvik.vm.isa.%s.variant", isa).c_str(),
+ "--instruction-set-variant=",
+ false,
+ cmd);
+ AddCompilerOptionFromSystemProperty(
+ StringPrintf("dalvik.vm.isa.%s.features", isa).c_str(),
+ "--instruction-set-features=",
+ false,
+ cmd);
+
+ std::string error_msg;
+ bool result = Exec(cmd, &error_msg);
+ if (!result) {
+ LOG(ERROR) << "Could not generate boot image: " << error_msg;
+ }
+ return result;
+ }
+
+ static const char* ParseNull(const char* arg) {
+ return (strcmp(arg, "!") == 0) ? nullptr : arg;
+ }
+
+ int RunPreopt() {
+ /* apk_path, uid, pkgname, instruction_set, dexopt_needed, oat_dir, dexopt_flags,
+ volume_uuid, use_profiles */
+ int ret = dexopt(package_parameters_[0],
+ atoi(package_parameters_[1]),
+ package_parameters_[2],
+ package_parameters_[3],
+ atoi(package_parameters_[4]),
+ package_parameters_[5],
+ atoi(package_parameters_[6]),
+ ParseNull(package_parameters_[7]),
+ atoi(package_parameters_[8]) != 0);
+ return ret;
+ }
+
+ ////////////////////////////////////
+ // Helpers, mostly taken from ART //
+ ////////////////////////////////////
+
+ // Wrapper on fork/execv to run a command in a subprocess.
+ bool Exec(const std::vector<std::string>& arg_vector, std::string* error_msg) {
+ const std::string command_line(Join(arg_vector, ' '));
+
+ CHECK_GE(arg_vector.size(), 1U) << command_line;
+
+ // Convert the args to char pointers.
+ const char* program = arg_vector[0].c_str();
+ std::vector<char*> args;
+ for (size_t i = 0; i < arg_vector.size(); ++i) {
+ const std::string& arg = arg_vector[i];
+ char* arg_str = const_cast<char*>(arg.c_str());
+ CHECK(arg_str != nullptr) << i;
+ args.push_back(arg_str);
+ }
+ args.push_back(nullptr);
+
+ // Fork and exec.
+ pid_t pid = fork();
+ if (pid == 0) {
+ // No allocation allowed between fork and exec.
+
+ // Change process groups, so we don't get reaped by ProcessManager.
+ setpgid(0, 0);
+
+ execv(program, &args[0]);
+
+ PLOG(ERROR) << "Failed to execv(" << command_line << ")";
+ // _exit to avoid atexit handlers in child.
+ _exit(1);
+ } else {
+ if (pid == -1) {
+ *error_msg = StringPrintf("Failed to execv(%s) because fork failed: %s",
+ command_line.c_str(), strerror(errno));
+ return false;
+ }
+
+ // wait for subprocess to finish
+ int status;
+ pid_t got_pid = TEMP_FAILURE_RETRY(waitpid(pid, &status, 0));
+ if (got_pid != pid) {
+ *error_msg = StringPrintf("Failed after fork for execv(%s) because waitpid failed: "
+ "wanted %d, got %d: %s",
+ command_line.c_str(), pid, got_pid, strerror(errno));
+ return false;
+ }
+ if (!WIFEXITED(status) || WEXITSTATUS(status) != 0) {
+ *error_msg = StringPrintf("Failed execv(%s) because non-0 exit status",
+ command_line.c_str());
+ return false;
+ }
+ }
+ return true;
+ }
+
+ // Choose a random relocation offset. Taken from art/runtime/gc/image_space.cc.
+ static int32_t ChooseRelocationOffsetDelta(int32_t min_delta, int32_t max_delta) {
+ constexpr size_t kPageSize = PAGE_SIZE;
+ CHECK_EQ(min_delta % kPageSize, 0u);
+ CHECK_EQ(max_delta % kPageSize, 0u);
+ CHECK_LT(min_delta, max_delta);
+
+ std::default_random_engine generator;
+ generator.seed(GetSeed());
+ std::uniform_int_distribution<int32_t> distribution(min_delta, max_delta);
+ int32_t r = distribution(generator);
+ if (r % 2 == 0) {
+ r = RoundUp(r, kPageSize);
+ } else {
+ r = RoundDown(r, kPageSize);
+ }
+ CHECK_LE(min_delta, r);
+ CHECK_GE(max_delta, r);
+ CHECK_EQ(r % kPageSize, 0u);
+ return r;
+ }
+
+ static uint64_t GetSeed() {
+#ifdef __BIONIC__
+ // Bionic exposes arc4random, use it.
+ uint64_t random_data;
+ arc4random_buf(&random_data, sizeof(random_data));
+ return random_data;
+#else
+#error "This is only supposed to run with bionic. Otherwise, implement..."
+#endif
+ }
+
+ void AddCompilerOptionFromSystemProperty(const char* system_property,
+ const char* prefix,
+ bool runtime,
+ std::vector<std::string>& out) {
+ const std::string* value =
+ system_properties_.GetProperty(system_property);
+ if (value != nullptr) {
+ if (runtime) {
+ out.push_back("--runtime-arg");
+ }
+ if (prefix != nullptr) {
+ out.push_back(StringPrintf("%s%s", prefix, value->c_str()));
+ } else {
+ out.push_back(*value);
+ }
+ }
+ }
+
+ // The path where the B partitions are mounted.
+ // TODO(agampe): If we're running this *inside* the change-root, we wouldn't need this.
+ std::string b_mount_path_;
+
+ // Stores the system properties read out of the B partition. We need to use these properties
+ // to compile, instead of the A properties we could get from init/get_property.
+ SystemProperties system_properties_;
+
+ const char* package_parameters_[9];
+
+ // Store environment values we need to set.
+ std::vector<std::string> environ_;
+};
+
+OTAPreoptService gOps;
+
+////////////////////////
+// Plug-in functions. //
+////////////////////////
+
+int get_property(const char *key, char *value, const char *default_value) {
+ // TODO: Replace with system-properties map.
+ return gOps.GetProperty(key, value, default_value);
+}
+
+// Computes the OAT output path for the given APK under the given oat directory.
+bool calculate_oat_file_path(char path[PKG_PATH_MAX], const char *oat_dir,
+ const char *apk_path,
+ const char *instruction_set) {
+ // TODO: Insert B directory.
+ char *file_name_start;
+ char *file_name_end;
+
+ file_name_start = strrchr(apk_path, '/');
+ if (file_name_start == nullptr) {
+ ALOGE("apk_path '%s' has no '/'s in it\n", apk_path);
+ return false;
+ }
+ file_name_end = strrchr(file_name_start, '.');
+ if (file_name_end == nullptr) {
+ ALOGE("apk_path '%s' has no extension\n", apk_path);
+ return false;
+ }
+
+ // Calculate file_name
+ file_name_start++; // Move past '/', is valid as file_name_end is valid.
+ size_t file_name_len = file_name_end - file_name_start;
+ std::string file_name(file_name_start, file_name_len);
+
+ // <apk_parent_dir>/oat/<isa>/<file_name>.odex.b
+ snprintf(path, PKG_PATH_MAX, "%s/%s/%s.odex.b", oat_dir, instruction_set,
+ file_name.c_str());
+ return true;
+}
+
+/*
+ * Computes the odex file for the given apk_path and instruction_set.
+ * /system/framework/whatever.jar -> /system/framework/oat/<isa>/whatever.odex
+ *
+ * Returns false if it failed to determine the odex file path.
+ */
+bool calculate_odex_file_path(char path[PKG_PATH_MAX], const char *apk_path,
+ const char *instruction_set) {
+ if (StringPrintf("%soat/%s/odex.b", apk_path, instruction_set).length() + 1 > PKG_PATH_MAX) {
+ ALOGE("apk_path '%s' may be too long to form odex file path.\n", apk_path);
+ return false;
+ }
+
+ const char *path_end = strrchr(apk_path, '/');
+ if (path_end == nullptr) {
+ ALOGE("apk_path '%s' has no '/'s in it?!\n", apk_path);
+ return false;
+ }
+ std::string path_component(apk_path, path_end - apk_path);
+
+ const char *name_begin = path_end + 1;
+ const char *extension_start = strrchr(name_begin, '.');
+ if (extension_start == nullptr) {
+ ALOGE("apk_path '%s' has no extension.\n", apk_path);
+ return false;
+ }
+ std::string name_component(name_begin, extension_start - name_begin);
+
+ std::string new_path = StringPrintf("%s/oat/%s/%s.odex.b",
+ path_component.c_str(),
+ instruction_set,
+ name_component.c_str());
+ CHECK_LT(new_path.length(), PKG_PATH_MAX);
+ strcpy(path, new_path.c_str());
+ return true;
+}
+
+bool create_cache_path(char path[PKG_PATH_MAX],
+ const char *src,
+ const char *instruction_set) {
+ /* demand that we have a non-null, absolute path without ".." */
+ if ((src == nullptr) || (src[0] != '/') || strstr(src, "..")) {
+ return false;
+ }
+
+ size_t srclen = strlen(src);
+
+ if (srclen > PKG_PATH_MAX) { // XXX: PKG_NAME_MAX?
+ return false;
+ }
+
+ std::string from_src = std::string(src + 1);
+ std::replace(from_src.begin(), from_src.end(), '/', '@');
+
+ std::string assembled_path = StringPrintf("%s/%s/%s/%s%s",
+ OTAPreoptService::kOTADataDirectory,
+ DALVIK_CACHE,
+ instruction_set,
+ from_src.c_str(),
+ DALVIK_CACHE_POSTFIX2);
+
+ if (assembled_path.length() + 1 > PKG_PATH_MAX) {
+ return false;
+ }
+ strcpy(path, assembled_path.c_str());
+
+ return true;
+}
+
+bool initialize_globals() {
+ const char* data_path = getenv("ANDROID_DATA");
+ if (data_path == nullptr) {
+ ALOGE("Could not find ANDROID_DATA");
+ return false;
+ }
+ return init_globals_from_data_and_root(data_path, kOTARootDirectory);
+}
+
+static bool initialize_directories() {
+ // This is different from the normal installd. We only do the base
+ // directory, the rest will be created on demand when each app is compiled.
+ mode_t old_umask = umask(0);
+ LOG(INFO) << "Old umask: " << old_umask;
+ if (access(OTAPreoptService::kOTADataDirectory, R_OK) < 0) {
+ ALOGE("Could not access %s\n", OTAPreoptService::kOTADataDirectory);
+ return false;
+ }
+ return true;
+}
+
+static int log_callback(int type, const char *fmt, ...) {
+ va_list ap;
+ int priority;
+
+ switch (type) {
+ case SELINUX_WARNING:
+ priority = ANDROID_LOG_WARN;
+ break;
+ case SELINUX_INFO:
+ priority = ANDROID_LOG_INFO;
+ break;
+ default:
+ priority = ANDROID_LOG_ERROR;
+ break;
+ }
+ va_start(ap, fmt);
+ LOG_PRI_VA(priority, "SELinux", fmt, ap);
+ va_end(ap);
+ return 0;
+}
+
+static int otapreopt_main(const int argc, char *argv[]) {
+ int selinux_enabled = (is_selinux_enabled() > 0);
+
+ setenv("ANDROID_LOG_TAGS", "*:v", 1);
+ android::base::InitLogging(argv);
+
+ ALOGI("otapreopt firing up\n");
+
+ if (argc < 2) {
+ ALOGE("Expecting parameters");
+ exit(1);
+ }
+
+ union selinux_callback cb;
+ cb.func_log = log_callback;
+ selinux_set_callback(SELINUX_CB_LOG, cb);
+
+ if (!initialize_globals()) {
+ ALOGE("Could not initialize globals; exiting.\n");
+ exit(1);
+ }
+
+ if (!initialize_directories()) {
+ ALOGE("Could not create directories; exiting.\n");
+ exit(1);
+ }
+
+ if (selinux_enabled && selinux_status_open(true) < 0) {
+ ALOGE("Could not open selinux status; exiting.\n");
+ exit(1);
+ }
+
+ int ret = android::installd::gOps.Main(argc, argv);
+
+ return ret;
+}
+
+} // namespace installd
+} // namespace android
+
+int main(const int argc, char *argv[]) {
+ return android::installd::otapreopt_main(argc, argv);
+}
diff --git a/cmds/installd/string_helpers.h b/cmds/installd/string_helpers.h
new file mode 100644
index 0000000..e8fcdef
--- /dev/null
+++ b/cmds/installd/string_helpers.h
@@ -0,0 +1,67 @@
+/*
+ * Copyright (C) 2015 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef ART_OTAPREOPT_STRING_HELPERS_H_
+#define ART_OTAPREOPT_STRING_HELPERS_H_
+
+#include <cstring>
+#include <sstream>
+#include <string>
+#include <vector>
+
+#include <android-base/macros.h>
+
+namespace android {
+namespace installd {
+
+static inline bool StringStartsWith(const std::string& target,
+ const char* prefix) {
+ return target.compare(0, strlen(prefix), prefix) == 0;
+}
+
+// Split the input according to the separator character. Doesn't honor quotation.
+static inline std::vector<std::string> Split(const std::string& in, const char separator) {
+ if (in.empty()) {
+ return std::vector<std::string>();
+ }
+
+ std::vector<std::string> ret;
+ std::stringstream strstr(in);
+ std::string token;
+
+ while (std::getline(strstr, token, separator)) {
+ ret.push_back(token);
+ }
+
+ return ret;
+}
+
+template <typename StringT>
+static inline std::string Join(const std::vector<StringT>& strings, char separator) {
+ if (strings.empty()) {
+ return "";
+ }
+
+ std::string result(strings[0]);
+ for (size_t i = 1; i < strings.size(); ++i) {
+ result += separator;
+ result += strings[i];
+ }
+ return result;
+}
+
+} // namespace installd
+} // namespace android
+
+#endif // ART_OTAPREOPT_STRING_HELPERS_H_
diff --git a/cmds/installd/system_properties.h b/cmds/installd/system_properties.h
new file mode 100644
index 0000000..1b5fb3a
--- /dev/null
+++ b/cmds/installd/system_properties.h
@@ -0,0 +1,89 @@
+/*
+ * Copyright (C) 2015 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef OTAPREOPT_SYSTEM_PROPERTIES_H_
+#define OTAPREOPT_SYSTEM_PROPERTIES_H_
+
+#include <fstream>
+#include <string>
+#include <unordered_map>
+
+namespace android {
+namespace installd {
+
+// Helper class to read system properties into and manage as a string->string map.
+class SystemProperties {
+ public:
+ bool Load(const std::string& strFile) {
+ std::ifstream input_stream(strFile);
+
+ if (!input_stream.is_open()) {
+ return false;
+ }
+
+ // Read line by line; testing the stream state directly (rather than
+ // looping on eof()) avoids processing a final failed read.
+ std::string line;
+ while (std::getline(input_stream, line)) {
+
+ // Is the line empty? Simplifies the next check.
+ if (line.empty()) {
+ continue;
+ }
+
+ // Is this a comment (starts with pound)?
+ if (line[0] == '#') {
+ continue;
+ }
+
+ size_t equals_pos = line.find('=');
+ if (equals_pos == std::string::npos || equals_pos == 0) {
+ // No equals sign, or it's the first character; not a valid property line.
+ continue;
+ }
+
+ std::string key = line.substr(0, equals_pos);
+ std::string value = line.substr(equals_pos + 1);
+
+ properties_[key] = value; // Later entries override earlier ones.
+ }
+
+ return true;
+ }
+
+ // Look up the key in the map. Returns null if the key isn't mapped.
+ const std::string* GetProperty(const std::string& key) const {
+ auto it = properties_.find(key);
+ if (it != properties_.end()) {
+ return &it->second;
+ }
+ return nullptr;
+ }
+
+ void SetProperty(const std::string& key, const std::string& value) {
+ // Assign via operator[] so an existing value is overwritten;
+ // insert() would silently keep the old one.
+ properties_[key] = value;
+ }
+
+ private:
+ // The actual map.
+ std::unordered_map<std::string, std::string> properties_;
+};
+
+} // namespace installd
+} // namespace android
+
+#endif // OTAPREOPT_SYSTEM_PROPERTIES_H_
diff --git a/cmds/installd/utils.cpp b/cmds/installd/utils.cpp
index 92a9565..d25bf71 100644
--- a/cmds/installd/utils.cpp
+++ b/cmds/installd/utils.cpp
@@ -1168,16 +1168,6 @@
return result;
}
-/* Ensure that /data/media directories are prepared for given user. */
-int ensure_media_user_dirs(const char* uuid, userid_t userid) {
- std::string media_user_path(create_data_media_path(uuid, userid));
- if (fs_prepare_dir(media_user_path.c_str(), 0770, AID_MEDIA_RW, AID_MEDIA_RW) == -1) {
- return -1;
- }
-
- return 0;
-}
-
int ensure_config_user_dirs(userid_t userid) {
char config_user_path[PATH_MAX];
diff --git a/cmds/installd/utils.h b/cmds/installd/utils.h
index 4d6b66e..2d9573e 100644
--- a/cmds/installd/utils.h
+++ b/cmds/installd/utils.h
@@ -135,7 +135,6 @@
char *build_string3(const char *s1, const char *s2, const char *s3);
int ensure_dir(const char* path, mode_t mode, uid_t uid, gid_t gid);
-int ensure_media_user_dirs(const char* uuid, userid_t userid);
int ensure_config_user_dirs(userid_t userid);
int wait_child(pid_t pid);
diff --git a/cmds/servicemanager/servicemanager.rc b/cmds/servicemanager/servicemanager.rc
index 0d07a70..1ba339d 100644
--- a/cmds/servicemanager/servicemanager.rc
+++ b/cmds/servicemanager/servicemanager.rc
@@ -10,3 +10,5 @@
onrestart restart surfaceflinger
onrestart restart inputflinger
onrestart restart drm
+ onrestart restart cameraserver
+
diff --git a/data/etc/android.hardware.nfc.hcef.xml b/data/etc/android.hardware.nfc.hcef.xml
new file mode 100644
index 0000000..0d03023
--- /dev/null
+++ b/data/etc/android.hardware.nfc.hcef.xml
@@ -0,0 +1,21 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!-- Copyright (C) 2015 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<!-- This feature indicates that the device supports host-based
+ NFC-F card emulation -->
+<permissions>
+ <feature name="android.hardware.nfc.hcef" />
+</permissions>
diff --git a/data/etc/android.hardware.vr.high_performance.xml b/data/etc/android.hardware.vr.high_performance.xml
new file mode 100644
index 0000000..776f4f7
--- /dev/null
+++ b/data/etc/android.hardware.vr.high_performance.xml
@@ -0,0 +1,21 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!-- Copyright (C) 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<!-- This is the set of features required for a VR-compatible device -->
+<permissions>
+ <feature name="android.software.vr.mode" />
+ <feature name="android.hardware.vr.high_performance" />
+</permissions>
diff --git a/data/etc/android.hardware.wifi.nan.xml b/data/etc/android.hardware.wifi.nan.xml
new file mode 100644
index 0000000..e557610
--- /dev/null
+++ b/data/etc/android.hardware.wifi.nan.xml
@@ -0,0 +1,20 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!-- Copyright (C) 2016 The Android Open Source Project
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<!-- This is the standard feature indicating that the device includes WiFi NAN. -->
+<permissions>
+ <feature name="android.hardware.wifi.nan" />
+</permissions>
diff --git a/data/etc/handheld_core_hardware.xml b/data/etc/handheld_core_hardware.xml
index 5edf0e8..9cb4d6d 100644
--- a/data/etc/handheld_core_hardware.xml
+++ b/data/etc/handheld_core_hardware.xml
@@ -52,6 +52,9 @@
<!-- Feature to specify if the device supports a VR mode. -->
<feature name="android.software.vr.mode" />
+ <!-- Devices with all optimizations required to be a "VR Ready" device that
+ pass all CTS tests for this feature must include feature
+ android.hardware.vr.high_performance -->
<!-- devices with GPS must include android.hardware.location.gps.xml -->
<!-- devices with an autofocus camera and/or flash must include either
diff --git a/include/android/choreographer.h b/include/android/choreographer.h
new file mode 100644
index 0000000..02c83dc
--- /dev/null
+++ b/include/android/choreographer.h
@@ -0,0 +1,69 @@
+/*
+ * Copyright (C) 2015 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/**
+ * @addtogroup Choreographer
+ * @{
+ */
+
+/**
+ * @file choreographer.h
+ */
+
+#ifndef ANDROID_CHOREOGRAPHER_H
+#define ANDROID_CHOREOGRAPHER_H
+
+#include <sys/cdefs.h>
+
+__BEGIN_DECLS
+
+struct AChoreographer;
+typedef struct AChoreographer AChoreographer;
+
+/**
+ * Prototype of the function that is called when a new frame is being rendered.
+ * It's passed the time that the frame is being rendered as nanoseconds in the
+ * CLOCK_MONOTONIC time base, as well as the data pointer provided by the
+ * application that registered a callback. All callbacks that run as part of
+ * rendering a frame will observe the same frame time, so it should be used
+ * whenever events need to be synchronized (e.g. animations).
+ */
+typedef void (*AChoreographer_frameCallback)(long frameTimeNanos, void* data);
+
+/**
+ * Get the AChoreographer instance for the current thread. This must be called
+ * on an ALooper thread.
+ */
+AChoreographer* AChoreographer_getInstance();
+
+/**
+ * Post a callback to be run on the next frame. The data pointer provided will
+ * be passed to the callback function when it's called.
+ */
+void AChoreographer_postFrameCallback(AChoreographer* choreographer,
+ AChoreographer_frameCallback callback, void* data);
+/**
+ * Post a callback to be run on the frame following the specified delay. The
+ * data pointer provided will be passed to the callback function when it's
+ * called.
+ */
+void AChoreographer_postFrameCallbackDelayed(AChoreographer* choreographer,
+ AChoreographer_frameCallback callback, void* data, long delayMillis);
+__END_DECLS
+
+#endif // ANDROID_CHOREOGRAPHER_H
+
+/** @} */
diff --git a/include/android/sensor.h b/include/android/sensor.h
index 9472ad6..f2647be 100644
--- a/include/android/sensor.h
+++ b/include/android/sensor.h
@@ -184,29 +184,43 @@
} AMetaDataEvent;
typedef struct AUncalibratedEvent {
- union {
- float uncalib[3];
- struct {
- float x_uncalib;
- float y_uncalib;
- float z_uncalib;
+ union {
+ float uncalib[3];
+ struct {
+ float x_uncalib;
+ float y_uncalib;
+ float z_uncalib;
+ };
};
- };
- union {
- float bias[3];
- struct {
- float x_bias;
- float y_bias;
- float z_bias;
+ union {
+ float bias[3];
+ struct {
+ float x_bias;
+ float y_bias;
+ float z_bias;
+ };
};
- };
} AUncalibratedEvent;
typedef struct AHeartRateEvent {
- float bpm;
- int8_t status;
+ float bpm;
+ int8_t status;
} AHeartRateEvent;
+typedef struct ADynamicSensorEvent {
+ int32_t connected;
+ int32_t handle;
+} ADynamicSensorEvent;
+
+typedef struct {
+ int32_t type;
+ int32_t serial;
+ union {
+ int32_t data_int32[14];
+ float data_float[14];
+ };
+} AAdditionalInfoEvent;
+
/* NOTE: Must match hardware/sensors.h */
typedef struct ASensorEvent {
int32_t version; /* sizeof(struct ASensorEvent) */
@@ -229,6 +243,8 @@
AUncalibratedEvent uncalibrated_magnetic;
AMetaDataEvent meta_data;
AHeartRateEvent heart_rate;
+ ADynamicSensorEvent dynamic_sensor_meta;
+ AAdditionalInfoEvent additional_info;
};
union {
uint64_t data[8];
diff --git a/include/binder/Parcel.h b/include/binder/Parcel.h
index 0abf8f3..5956e13 100644
--- a/include/binder/Parcel.h
+++ b/include/binder/Parcel.h
@@ -17,6 +17,7 @@
#ifndef ANDROID_PARCEL_H
#define ANDROID_PARCEL_H
+#include <string>
#include <vector>
#include <cutils/native_handle.h>
@@ -119,6 +120,10 @@
status_t writeChar(char16_t val);
status_t writeByte(int8_t val);
+ // Take a UTF8 encoded string, convert to UTF16, write it to the parcel.
+ status_t writeUtf8AsUtf16(const std::string& str);
+ status_t writeUtf8AsUtf16(const std::unique_ptr<std::string>& str);
+
status_t writeByteVector(const std::unique_ptr<std::vector<int8_t>>& val);
status_t writeByteVector(const std::vector<int8_t>& val);
status_t writeInt32Vector(const std::unique_ptr<std::vector<int32_t>>& val);
@@ -136,6 +141,9 @@
status_t writeString16Vector(
const std::unique_ptr<std::vector<std::unique_ptr<String16>>>& val);
status_t writeString16Vector(const std::vector<String16>& val);
+ status_t writeUtf8VectorAsUtf16Vector(
+ const std::unique_ptr<std::vector<std::unique_ptr<std::string>>>& val);
+ status_t writeUtf8VectorAsUtf16Vector(const std::vector<std::string>& val);
status_t writeStrongBinderVector(const std::unique_ptr<std::vector<sp<IBinder>>>& val);
status_t writeStrongBinderVector(const std::vector<sp<IBinder>>& val);
@@ -230,6 +238,10 @@
int8_t readByte() const;
status_t readByte(int8_t *pArg) const;
+ // Read a UTF16 encoded string from the parcel and convert it to UTF8.
+ status_t readUtf8FromUtf16(std::string* str) const;
+ status_t readUtf8FromUtf16(std::unique_ptr<std::string>* str) const;
+
const char* readCString() const;
String8 readString8() const;
String16 readString16() const;
@@ -274,6 +286,9 @@
status_t readString16Vector(
std::unique_ptr<std::vector<std::unique_ptr<String16>>>* val) const;
status_t readString16Vector(std::vector<String16>* val) const;
+ status_t readUtf8VectorFromUtf16Vector(
+ std::unique_ptr<std::vector<std::unique_ptr<std::string>>>* val) const;
+ status_t readUtf8VectorFromUtf16Vector(std::vector<std::string>* val) const;
template<typename T>
status_t read(Flattenable<T>& val) const;
diff --git a/include/gui/BufferItem.h b/include/gui/BufferItem.h
index 370f5d5..a515f39 100644
--- a/include/gui/BufferItem.h
+++ b/include/gui/BufferItem.h
@@ -125,6 +125,10 @@
// Indicates that this buffer was queued by the producer. When in single
// buffer mode acquire() can return a BufferItem that wasn't in the queue.
bool mQueuedBuffer;
+
+ // Indicates that this BufferItem contains a stale buffer which has already
+ // been released by the BufferQueue.
+ bool mIsStale;
};
} // namespace android
diff --git a/include/gui/BufferQueueCore.h b/include/gui/BufferQueueCore.h
index fbd5114..e2e73a0 100644
--- a/include/gui/BufferQueueCore.h
+++ b/include/gui/BufferQueueCore.h
@@ -105,24 +105,32 @@
// connected, mDequeueCondition must be broadcast.
int getMaxBufferCountLocked() const;
- // freeBufferLocked frees the GraphicBuffer and sync resources for the
+ // This overload performs the same computation as getMaxBufferCountLocked()
+ // above, but uses the given arguments instead of the member variables for
+ // mMaxBufferCount, mAsyncMode, and mDequeueBufferCannotBlock.
+ int getMaxBufferCountLocked(bool asyncMode,
+ bool dequeueBufferCannotBlock, int maxBufferCount) const;
+
+ // clearBufferSlotLocked frees the GraphicBuffer and sync resources for the
// given slot.
- void freeBufferLocked(int slot, bool validate = true);
+ void clearBufferSlotLocked(int slot);
// freeAllBuffersLocked frees the GraphicBuffer and sync resources for
// all slots, even if they're currently dequeued, queued, or acquired.
void freeAllBuffersLocked();
- // stillTracking returns true iff the buffer item is still being tracked
- // in one of the slots.
- bool stillTracking(const BufferItem* item) const;
+ // If delta is positive, makes more slots available. If negative, takes
+ // away slots. Returns false if the request can't be met.
+ bool adjustAvailableSlotsLocked(int delta);
// waitWhileAllocatingLocked blocks until mIsAllocating is false.
void waitWhileAllocatingLocked() const;
+#if DEBUG_ONLY_CODE
// validateConsistencyLocked ensures that the free lists are in sync with
// the information stored in mSlots
void validateConsistencyLocked() const;
+#endif
// mAllocator is the connection to SurfaceFlinger that is used to allocate
// new GraphicBuffer objects.
@@ -179,13 +187,20 @@
Fifo mQueue;
// mFreeSlots contains all of the slots which are FREE and do not currently
- // have a buffer attached
+ // have a buffer attached.
std::set<int> mFreeSlots;
// mFreeBuffers contains all of the slots which are FREE and currently have
- // a buffer attached
+ // a buffer attached.
std::list<int> mFreeBuffers;
+ // mUnusedSlots contains all slots that are currently unused. They should be
+ // free and not have a buffer attached.
+ std::list<int> mUnusedSlots;
+
+ // mActiveBuffers contains all slots which have a non-FREE buffer attached.
+ std::set<int> mActiveBuffers;
+
// mDequeueCondition is a condition variable used for dequeueBuffer in
// synchronous mode.
mutable Condition mDequeueCondition;
diff --git a/include/gui/BufferQueueProducer.h b/include/gui/BufferQueueProducer.h
index 645a07b..dc05e98 100644
--- a/include/gui/BufferQueueProducer.h
+++ b/include/gui/BufferQueueProducer.h
@@ -187,9 +187,9 @@
// BufferQueueCore::INVALID_BUFFER_SLOT otherwise
int getFreeBufferLocked() const;
- // Returns the next free slot if one less than or equal to maxBufferCount
- // is available or BufferQueueCore::INVALID_BUFFER_SLOT otherwise
- int getFreeSlotLocked(int maxBufferCount) const;
+ // Returns the next free slot if one is available or
+ // BufferQueueCore::INVALID_BUFFER_SLOT otherwise
+ int getFreeSlotLocked() const;
// waitForFreeSlotThenRelock finds the oldest slot in the FREE state. It may
// block if there are no available slots and we are not in non-blocking
@@ -200,8 +200,7 @@
Dequeue,
Attach,
};
- status_t waitForFreeSlotThenRelock(FreeSlotCaller caller, int* found,
- status_t* returnFlags) const;
+ status_t waitForFreeSlotThenRelock(FreeSlotCaller caller, int* found) const;
sp<BufferQueueCore> mCore;
diff --git a/include/gui/BufferSlot.h b/include/gui/BufferSlot.h
index 17a654a..943fa82 100644
--- a/include/gui/BufferSlot.h
+++ b/include/gui/BufferSlot.h
@@ -174,14 +174,15 @@
struct BufferSlot {
BufferSlot()
- : mEglDisplay(EGL_NO_DISPLAY),
+ : mGraphicBuffer(nullptr),
+ mEglDisplay(EGL_NO_DISPLAY),
mBufferState(),
mRequestBufferCalled(false),
mFrameNumber(0),
mEglFence(EGL_NO_SYNC_KHR),
+ mFence(Fence::NO_FENCE),
mAcquireCalled(false),
- mNeedsCleanupOnRelease(false),
- mAttachedByConsumer(false) {
+ mNeedsReallocation(false) {
}
// mGraphicBuffer points to the buffer allocated for this slot or is NULL
@@ -191,8 +192,6 @@
// mEglDisplay is the EGLDisplay used to create EGLSyncKHR objects.
EGLDisplay mEglDisplay;
- static const char* bufferStateName(BufferState state);
-
// mBufferState is the current state of this buffer slot.
BufferState mBufferState;
@@ -227,15 +226,10 @@
// Indicates whether this buffer has been seen by a consumer yet
bool mAcquireCalled;
- // Indicates whether this buffer needs to be cleaned up by the
- // consumer. This is set when a buffer in ACQUIRED state is freed.
- // It causes releaseBuffer to return STALE_BUFFER_SLOT.
- bool mNeedsCleanupOnRelease;
-
- // Indicates whether the buffer was attached on the consumer side.
- // If so, it needs to set the BUFFER_NEEDS_REALLOCATION flag when dequeued
- // to prevent the producer from using a stale cached buffer.
- bool mAttachedByConsumer;
+ // Indicates whether the buffer was re-allocated without notifying the
+ // producer. If so, it needs to set the BUFFER_NEEDS_REALLOCATION flag when
+ // dequeued to prevent the producer from using a stale cached buffer.
+ bool mNeedsReallocation;
};
} // namespace android
diff --git a/include/gui/IGraphicBufferConsumer.h b/include/gui/IGraphicBufferConsumer.h
index d4c9ee5..e983c16 100644
--- a/include/gui/IGraphicBufferConsumer.h
+++ b/include/gui/IGraphicBufferConsumer.h
@@ -199,20 +199,33 @@
// cannot be less than maxAcquiredBufferCount.
//
// Return of a value other than NO_ERROR means an error has occurred:
- // * BAD_VALUE - bufferCount was out of range (see above).
+ // * BAD_VALUE - one of the below conditions occurred:
+ // * bufferCount was out of range (see above).
+ // * failure to adjust the number of available slots.
// * INVALID_OPERATION - attempting to call this after a producer connected.
virtual status_t setMaxBufferCount(int bufferCount) = 0;
// setMaxAcquiredBufferCount sets the maximum number of buffers that can
- // be acquired by the consumer at one time (default 1). This call will
- // fail if a producer is connected to the BufferQueue.
+ // be acquired by the consumer at one time (default 1). If this method
+ // succeeds, any new buffer slots will be both unallocated and owned by the
+ // BufferQueue object (i.e. they are not owned by the producer or consumer).
+ // Calling this may also cause some buffer slots to be emptied.
+ //
+ // This function should not be called with a value of maxAcquiredBuffers
+ // that is less than the number of currently acquired buffer slots. Doing so
+ // will result in a BAD_VALUE error.
//
// maxAcquiredBuffers must be (inclusive) between 1 and
// MAX_MAX_ACQUIRED_BUFFERS. It also cannot cause the maxBufferCount value
// to be exceeded.
//
// Return of a value other than NO_ERROR means an error has occurred:
- // * BAD_VALUE - maxAcquiredBuffers was out of range (see above).
+ // * NO_INIT - the buffer queue has been abandoned
+ // * BAD_VALUE - one of the below conditions occurred:
+ // * maxAcquiredBuffers was out of range (see above).
+ // * failure to adjust the number of available slots.
+ // * client would have more than the requested number of
+ // acquired buffers after this call
// * INVALID_OPERATION - attempting to call this after a producer connected.
virtual status_t setMaxAcquiredBufferCount(int maxAcquiredBuffers) = 0;
diff --git a/include/gui/IGraphicBufferProducer.h b/include/gui/IGraphicBufferProducer.h
index 8646981..265728f 100644
--- a/include/gui/IGraphicBufferProducer.h
+++ b/include/gui/IGraphicBufferProducer.h
@@ -82,15 +82,16 @@
virtual status_t requestBuffer(int slot, sp<GraphicBuffer>* buf) = 0;
// setMaxDequeuedBufferCount sets the maximum number of buffers that can be
- // dequeued by the producer at one time. If this method succeeds, buffer
- // slots will be both unallocated and owned by the BufferQueue object (i.e.
- // they are not owned by the producer or consumer). Calling this will also
- // cause all buffer slots to be emptied. If the caller is caching the
+ // dequeued by the producer at one time. If this method succeeds, any new
+ // buffer slots will be both unallocated and owned by the BufferQueue object
+ // (i.e. they are not owned by the producer or consumer). Calling this may
+ // also cause some buffer slots to be emptied. If the caller is caching the
// contents of the buffer slots, it should empty that cache after calling
// this method.
//
- // This function should not be called when there are any currently dequeued
- // buffer slots. Doing so will result in a BAD_VALUE error.
+ // This function should not be called with a value of maxDequeuedBuffers
+ // that is less than the number of currently dequeued buffer slots. Doing so
+ // will result in a BAD_VALUE error.
//
// The buffer count should be at least 1 (inclusive), but at most
// (NUM_BUFFER_SLOTS - the minimum undequeued buffer count) (exclusive). The
@@ -100,9 +101,11 @@
// Return of a value other than NO_ERROR means an error has occurred:
// * NO_INIT - the buffer queue has been abandoned.
// * BAD_VALUE - one of the below conditions occurred:
- // * bufferCount was out of range (see above)
- // * client has one or more buffers dequeued
- // * this call would cause the maxBufferCount value to be exceeded
+ // * bufferCount was out of range (see above).
+ // * client would have more than the requested number of dequeued
+ // buffers after this call.
+ // * this call would cause the maxBufferCount value to be exceeded.
+ // * failure to adjust the number of available slots.
virtual status_t setMaxDequeuedBufferCount(int maxDequeuedBuffers) = 0;
// Set the async flag if the producer intends to asynchronously queue
@@ -115,8 +118,10 @@
//
// Return of a value other than NO_ERROR means an error has occurred:
// * NO_INIT - the buffer queue has been abandoned.
- // * BAD_VALUE - this call would cause the maxBufferCount value to be
+ // * BAD_VALUE - one of the following has occurred:
+ // * this call would cause the maxBufferCount value to be
// exceeded
+ // * failure to adjust the number of available slots.
virtual status_t setAsyncMode(bool async) = 0;
// dequeueBuffer requests a new buffer slot for the client to use. Ownership
@@ -436,6 +441,9 @@
// * the producer is already connected
// * api was out of range (see above).
// * output was NULL.
+ // * failure to adjust the number of available slots. This can
+ // happen when allocating or deallocating the async buffer in
+ // response to the value of producerControlledByApp.
// * DEAD_OBJECT - the token is hosted by an already-dead process
//
// Additional negative errors may be returned by the internals, they
@@ -534,6 +542,11 @@
// timeout of -1. If set (to a value other than -1), this will disable
// non-blocking mode and its corresponding spare buffer (which is used to
// ensure a buffer is always available).
+ //
+ // Return of a value other than NO_ERROR means an error has occurred:
+ // * BAD_VALUE - Failure to adjust the number of available slots. This can
+ // happen because of trying to allocate/deallocate the async
+ // buffer.
virtual status_t setDequeueTimeout(nsecs_t timeout) = 0;
};
diff --git a/include/gui/ISensorServer.h b/include/gui/ISensorServer.h
index 3dca2a3..571acb5 100644
--- a/include/gui/ISensorServer.h
+++ b/include/gui/ISensorServer.h
@@ -38,6 +38,8 @@
DECLARE_META_INTERFACE(SensorServer);
virtual Vector<Sensor> getSensorList(const String16& opPackageName) = 0;
+ virtual Vector<Sensor> getDynamicSensorList(const String16& opPackageName) = 0;
+
virtual sp<ISensorEventConnection> createSensorEventConnection(const String8& packageName,
int mode, const String16& opPackageName) = 0;
virtual int32_t isDataInjectionEnabled() = 0;
diff --git a/include/gui/Sensor.h b/include/gui/Sensor.h
index 8142be6..3792540 100644
--- a/include/gui/Sensor.h
+++ b/include/gui/Sensor.h
@@ -52,9 +52,13 @@
TYPE_PROXIMITY = ASENSOR_TYPE_PROXIMITY
};
- Sensor();
- Sensor(struct sensor_t const* hwSensor, int halVersion = 0);
- ~Sensor();
+ typedef struct {
+ uint8_t b[16];
+ } uuid_t;
+
+ Sensor();
+ Sensor(struct sensor_t const* hwSensor, int halVersion = 0);
+ ~Sensor();
const String8& getName() const;
const String8& getVendor() const;
@@ -77,6 +81,7 @@
uint32_t getFlags() const;
bool isWakeUpSensor() const;
int32_t getReportingMode() const;
+ const uuid_t& getUuid() const;
// LightFlattenable protocol
inline bool isFixedSize() const { return false; }
@@ -103,6 +108,7 @@
int32_t mRequiredAppOp;
int32_t mMaxDelay;
uint32_t mFlags;
+ uuid_t mUuid;
static void flattenString8(void*& buffer, size_t& size, const String8& string8);
static bool unflattenString8(void const*& buffer, size_t& size, String8& outputString8);
};
diff --git a/include/gui/SensorManager.h b/include/gui/SensorManager.h
index 0cff46c..6c6230f 100644
--- a/include/gui/SensorManager.h
+++ b/include/gui/SensorManager.h
@@ -54,7 +54,8 @@
static SensorManager& getInstanceForPackage(const String16& packageName);
~SensorManager();
- ssize_t getSensorList(Sensor const* const** list) const;
+ ssize_t getSensorList(Sensor const* const** list);
+ ssize_t getDynamicSensorList(Vector<Sensor>& list);
Sensor const* getDefaultSensor(int type);
sp<SensorEventQueue> createEventQueue(String8 packageName = String8(""), int mode = 0);
bool isDataInjectionEnabled();
@@ -64,17 +65,17 @@
void sensorManagerDied();
SensorManager(const String16& opPackageName);
- status_t assertStateLocked() const;
+ status_t assertStateLocked();
private:
static Mutex sLock;
static std::map<String16, SensorManager*> sPackageInstances;
- mutable Mutex mLock;
- mutable sp<ISensorServer> mSensorServer;
- mutable Sensor const** mSensorList;
- mutable Vector<Sensor> mSensors;
- mutable sp<IBinder::DeathRecipient> mDeathObserver;
+ Mutex mLock;
+ sp<ISensorServer> mSensorServer;
+ Sensor const** mSensorList;
+ Vector<Sensor> mSensors;
+ sp<IBinder::DeathRecipient> mDeathObserver;
const String16 mOpPackageName;
};
diff --git a/include/hardware_properties/HardwarePropertiesManager.h b/include/hardware_properties/HardwarePropertiesManager.h
new file mode 100644
index 0000000..13f2b99
--- /dev/null
+++ b/include/hardware_properties/HardwarePropertiesManager.h
@@ -0,0 +1,31 @@
+/*
+ * Copyright (C) 2016 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef ANDROID_HARDWAREPROPERTIESMANAGER_H
+#define ANDROID_HARDWAREPROPERTIESMANAGER_H
+
+namespace android {
+
+// must be kept in sync with definitions in HardwarePropertiesManager.java
+enum {
+ DEVICE_TEMPERATURE_CPU = 0,
+ DEVICE_TEMPERATURE_GPU = 1,
+ DEVICE_TEMPERATURE_BATTERY = 2,
+};
+
+}; // namespace android
+
+#endif // ANDROID_HARDWAREPROPERTIESMANAGER_H
diff --git a/include/media/hardware/HardwareAPI.h b/include/media/hardware/HardwareAPI.h
index 1008c22..9ba5f7f 100644
--- a/include/media/hardware/HardwareAPI.h
+++ b/include/media/hardware/HardwareAPI.h
@@ -120,6 +120,17 @@
int nFenceFd; // -1 if unused
};
+// Meta data buffer layout for passing a native_handle to codec
+struct VideoNativeHandleMetadata {
+ MetadataBufferType eType; // must be kMetadataBufferTypeNativeHandleSource
+
+#ifdef OMX_ANDROID_COMPILE_AS_32BIT_ON_64BIT_PLATFORMS
+ OMX_PTR pHandle;
+#else
+ native_handle_t *pHandle;
+#endif
+};
+
// A pointer to this struct is passed to OMX_SetParameter() when the extension
// index "OMX.google.android.index.prepareForAdaptivePlayback" is given.
//
@@ -190,6 +201,7 @@
// Structure describing a media image (frame)
// Currently only supporting YUV
+// @deprecated. Use MediaImage2 instead
struct MediaImage {
enum Type {
MEDIA_IMAGE_TYPE_UNKNOWN = 0,
@@ -219,6 +231,45 @@
PlaneInfo mPlane[MAX_NUM_PLANES];
};
+struct MediaImage2 {
+ enum Type {
+ MEDIA_IMAGE_TYPE_UNKNOWN = 0,
+ MEDIA_IMAGE_TYPE_YUV,
+ MEDIA_IMAGE_TYPE_YUVA,
+ MEDIA_IMAGE_TYPE_RGB,
+ MEDIA_IMAGE_TYPE_RGBA,
+ MEDIA_IMAGE_TYPE_Y,
+ };
+
+ enum PlaneIndex {
+ Y = 0,
+ U = 1,
+ V = 2,
+ R = 0,
+ G = 1,
+ B = 2,
+ A = 3,
+ MAX_NUM_PLANES = 4,
+ };
+
+ Type mType;
+ uint32_t mNumPlanes; // number of planes
+ uint32_t mWidth; // width of largest plane (unpadded, as in nFrameWidth)
+ uint32_t mHeight; // height of largest plane (unpadded, as in nFrameHeight)
+ uint32_t mBitDepth; // usable bit depth (always MSB)
+ uint32_t mBitDepthAllocated; // bits per component (must be 8 or 16)
+
+ struct PlaneInfo {
+ uint32_t mOffset; // offset of first pixel of the plane in bytes
+ // from buffer offset
+ int32_t mColInc; // column increment in bytes
+ int32_t mRowInc; // row increment in bytes
+ uint32_t mHorizSubsampling; // subsampling compared to the largest plane
+ uint32_t mVertSubsampling; // subsampling compared to the largest plane
+ };
+ PlaneInfo mPlane[MAX_NUM_PLANES];
+};
+
// A pointer to this struct is passed to OMX_GetParameter when the extension
// index for the 'OMX.google.android.index.describeColorFormat'
// extension is given. This method can be called from any component state
@@ -245,6 +296,8 @@
// For non-YUV packed planar/semiplanar image formats, or if bUsingNativeBuffers
// is OMX_TRUE and the component does not support this color format with native
// buffers, the component shall set mNumPlanes to 0, and mType to MEDIA_IMAGE_TYPE_UNKNOWN.
+
+// @deprecated: use DescribeColorFormat2Params
struct DescribeColorFormatParams {
OMX_U32 nSize;
OMX_VERSIONTYPE nVersion;
@@ -260,6 +313,25 @@
MediaImage sMediaImage;
};
+// A pointer to this struct is passed to OMX_GetParameter when the extension
+// index for the 'OMX.google.android.index.describeColorFormat2'
+// extension is given. This is operationally the same as DescribeColorFormatParams
+// but can be used for HDR and RGBA/YUVA formats.
+struct DescribeColorFormat2Params {
+ OMX_U32 nSize;
+ OMX_VERSIONTYPE nVersion;
+ // input: parameters from OMX_VIDEO_PORTDEFINITIONTYPE
+ OMX_COLOR_FORMATTYPE eColorFormat;
+ OMX_U32 nFrameWidth;
+ OMX_U32 nFrameHeight;
+ OMX_U32 nStride;
+ OMX_U32 nSliceHeight;
+ OMX_BOOL bUsingNativeBuffers;
+
+ // output: fill out the MediaImage2 fields
+ MediaImage2 sMediaImage;
+};
+
// A pointer to this struct is passed to OMX_SetParameter or OMX_GetParameter
// when the extension index for the
// 'OMX.google.android.index.configureVideoTunnelMode' extension is given.
@@ -281,6 +353,111 @@
OMX_PTR pSidebandWindow; // OUT
};
+// Color description parameters. This is passed via OMX_SetConfig or OMX_GetConfig
+// to video encoders and decoders when the
+// 'OMX.google.android.index.describeColorAspects' extension is given.
+//
+// Video encoders: the framework uses OMX_SetConfig to specify color aspects
+// of the coded video before the component transitions to idle state.
+//
+// Video decoders: the framework uses OMX_SetConfig to specify color aspects
+// of the coded video parsed from the container before the component transitions
+// to idle state. If the bitstream contains color information, the component should
+// update the appropriate color aspects - unless the bitstream contains the
+// "unspecified" value. For "reserved" values, the component should set the aspect
+// to "Other".
+//
+// The framework subsequently uses OMX_GetConfig to get any updates of the
+// color aspects from the decoder. If the color aspects change at any time
+// during the processing of the stream, the component shall signal a
+// OMX_EventPortSettingsChanged event with data2 set to the extension index
+// (or OMX_IndexConfigCommonOutputCrop, as it is handled identically). The
+// component shall not signal a separate event purely for a color aspect
+// change if it occurs together with a port definition (e.g. size) or crop
+// change.
+//
+// NOTE: this structure is expected to grow in the future if new color aspects are
+// added to codec bitstreams. The OMX component should not require a specific
+// nSize, though it could verify that nSize is at least the size of the structure
+// at the time of implementation. All new fields will be added at the end of the
+// structure, ensuring backward compatibility.
+
+struct DescribeColorAspectsParams {
+ OMX_U32 nSize; // IN
+ OMX_VERSIONTYPE nVersion; // IN
+ OMX_U32 nPortIndex; // IN
+ OMX_U32 nRange; // IN/OUT (one of the ColorAspects.Range enums)
+ OMX_U32 nPrimaries; // IN/OUT (one of the ColorAspects.Primaries enums)
+ OMX_U32 nTransfer; // IN/OUT (one of the ColorAspects.Transfer enums)
+ OMX_U32 nMatrixCoeffs; // IN/OUT (one of the ColorAspects.MatrixCoeffs enums)
+};
+
+struct ColorAspects {
+ // this is in sync with the range values in graphics.h
+ enum Range : uint32_t {
+ RangeUnspecified,
+ RangeFull,
+ RangeLimited,
+ RangeOther = 0xff,
+ };
+
+ enum Primaries : uint32_t {
+ PrimariesUnspecified,
+ PrimariesBT709_5, // Rec.ITU-R BT.709-5 or equivalent
+ PrimariesBT470_6M, // Rec.ITU-R BT.470-6 System M or equivalent
+ PrimariesBT601_6_625, // Rec.ITU-R BT.601-6 625 or equivalent
+ PrimariesBT601_6_525, // Rec.ITU-R BT.601-6 525 or equivalent
+ PrimariesGenericFilm, // Generic Film
+ PrimariesBT2020, // Rec.ITU-R BT.2020 or equivalent
+ PrimariesOther = 0xff,
+ };
+
+ // this is partially in sync with the transfer values in graphics.h, prior to
+ // the "transfers unlikely to be required by Android" section below
+ enum Transfer : uint32_t {
+ TransferUnspecified,
+ TransferLinear, // Linear transfer characteristics
+ TransferSRGB, // sRGB or equivalent
+ TransferSMPTE170M, // SMPTE 170M or equivalent (e.g. BT.601/709/2020)
+ TransferGamma22, // Assumed display gamma 2.2
+ TransferGamma28, // Assumed display gamma 2.8
+ TransferST2084, // SMPTE ST 2084 for 10/12/14/16 bit systems
+ TransferHLG, // ARIB STD-B67 hybrid-log-gamma
+
+ // transfers unlikely to be required by Android
+ TransferSMPTE240M = 0x40, // SMPTE 240M
+ TransferXvYCC, // IEC 61966-2-4
+ TransferBT1361, // Rec.ITU-R BT.1361 extended gamut
+ TransferST428, // SMPTE ST 428-1
+ TransferOther = 0xff,
+ };
+
+ enum MatrixCoeffs : uint32_t {
+ MatrixUnspecified,
+ MatrixBT709_5, // Rec.ITU-R BT.709-5 or equivalent
+ MatrixBT470_6M, // KR=0.30, KB=0.11 or equivalent
+ MatrixBT601_6, // Rec.ITU-R BT.601-6 625 or equivalent
+ MatrixSMPTE240M, // SMPTE 240M or equivalent
+ MatrixBT2020, // Rec.ITU-R BT.2020 non-constant luminance
+ MatrixBT2020Constant, // Rec.ITU-R BT.2020 constant luminance
+ MatrixOther = 0xff,
+ };
+
+ // this is in sync with the standard values in graphics.h
+ enum Standard : uint32_t {
+ StandardUnspecified,
+ StandardBT709, // PrimariesBT709_5 and MatrixBT709_5
+ StandardBT601_625, // PrimariesBT601_6_625 and MatrixBT601_6
+ StandardBT601_625_Unadjusted, // PrimariesBT601_6_625 and KR=0.222, KB=0.071
+ StandardBT601_525, // PrimariesBT601_6_525 and MatrixBT601_6
+ StandardBT601_525_Unadjusted, // PrimariesBT601_6_525 and MatrixSMPTE240M
+ StandardBT2020, // PrimariesBT2020 and MatrixBT2020
+ StandardBT2020Constant, // PrimariesBT2020 and MatrixBT2020Constant
+ StandardBT470M, // PrimariesBT470_6M and MatrixBT470_6M
+ StandardFilm, // PrimariesGenericFilm and KR=0.253, KB=0.068
+ StandardOther = 0xff,
+ };
+};
+
} // namespace android
extern android::OMXPluginBase *createOMXPlugin();
diff --git a/include/media/hardware/MetadataBufferType.h b/include/media/hardware/MetadataBufferType.h
index b765203..4f6d5e2 100644
--- a/include/media/hardware/MetadataBufferType.h
+++ b/include/media/hardware/MetadataBufferType.h
@@ -111,6 +111,28 @@
*/
kMetadataBufferTypeANWBuffer = 2,
+ /*
+ * kMetadataBufferTypeNativeHandleSource is used to indicate that
+ * the payload of the metadata buffers can be interpreted as
+ * a native_handle_t.
+ *
+ * In this case, the metadata that the encoder receives
+ * will have a byte stream that consists of two parts:
+ * 1. First, there is an integer indicating that the metadata contains a
+ * native handle (kMetadataBufferTypeNativeHandleSource).
+ * 2. This is followed by a pointer to native_handle_t. The encoder needs
+ * to interpret this native handle and encode the frame. The encoder must
+ * not free this native handle as it does not actually own this native
+ * handle. The handle will be freed after the encoder releases the buffer
+ * back to the camera.
+ * ----------------------------------------------------------------
+ * | kMetadataBufferTypeNativeHandleSource | native_handle_t* nh |
+ * ----------------------------------------------------------------
+ *
+ * See the VideoNativeHandleMetadata structure.
+ */
+ kMetadataBufferTypeNativeHandleSource = 3,
+
/* This value is used by framework, but is never used inside a metadata buffer */
kMetadataBufferTypeInvalid = -1,
diff --git a/include/media/openmax/OMX_AsString.h b/include/media/openmax/OMX_AsString.h
index ae8430d..c3145c9 100644
--- a/include/media/openmax/OMX_AsString.h
+++ b/include/media/openmax/OMX_AsString.h
@@ -714,6 +714,7 @@
case OMX_VIDEO_CodingVP8: return "VP8";
case OMX_VIDEO_CodingVP9: return "VP9";
case OMX_VIDEO_CodingHEVC: return "HEVC";
+ case OMX_VIDEO_CodingDolbyVision:return "DolbyVision";
default: return def;
}
}
diff --git a/include/media/openmax/OMX_AudioExt.h b/include/media/openmax/OMX_AudioExt.h
index 2a1c3f2..05c2232 100644
--- a/include/media/openmax/OMX_AudioExt.h
+++ b/include/media/openmax/OMX_AudioExt.h
@@ -94,6 +94,15 @@
OMX_S32 nPCMLimiterEnable; /**< Signal level limiting, 0 for disable, 1 for enable, -1 if unspecified */
} OMX_AUDIO_PARAM_ANDROID_AACPRESENTATIONTYPE;
+typedef struct OMX_AUDIO_PARAM_ANDROID_PROFILETYPE {
+ OMX_U32 nSize;
+ OMX_VERSIONTYPE nVersion;
+ OMX_U32 nPortIndex;
+ OMX_U32 eProfile; /**< type is OMX_AUDIO_AACPROFILETYPE or OMX_AUDIO_WMAPROFILETYPE
+ depending on context */
+ OMX_U32 nProfileIndex; /**< Used to query for individual profile support information */
+} OMX_AUDIO_PARAM_ANDROID_PROFILETYPE;
+
#ifdef __cplusplus
}
#endif /* __cplusplus */
diff --git a/include/media/openmax/OMX_IndexExt.h b/include/media/openmax/OMX_IndexExt.h
index 25bea1f..8bfc49d 100644
--- a/include/media/openmax/OMX_IndexExt.h
+++ b/include/media/openmax/OMX_IndexExt.h
@@ -61,6 +61,7 @@
OMX_IndexParamAudioAndroidOpus, /**< reference: OMX_AUDIO_PARAM_ANDROID_OPUSTYPE */
OMX_IndexParamAudioAndroidAacPresentation, /**< reference: OMX_AUDIO_PARAM_ANDROID_AACPRESENTATIONTYPE */
OMX_IndexParamAudioAndroidEac3, /**< reference: OMX_AUDIO_PARAM_ANDROID_EAC3TYPE */
+ OMX_IndexParamAudioProfileQuerySupported, /**< reference: OMX_AUDIO_PARAM_ANDROID_PROFILETYPE */
/* Image parameters and configurations */
OMX_IndexExtImageStartUnused = OMX_IndexKhronosExtensions + 0x00500000,
diff --git a/include/media/openmax/OMX_Video.h b/include/media/openmax/OMX_Video.h
index decc410..ca85cf1 100644
--- a/include/media/openmax/OMX_Video.h
+++ b/include/media/openmax/OMX_Video.h
@@ -88,6 +88,7 @@
OMX_VIDEO_CodingVP8, /**< Google VP8, formerly known as On2 VP8 */
OMX_VIDEO_CodingVP9, /**< Google VP9 */
OMX_VIDEO_CodingHEVC, /**< ITU H.265/HEVC */
+ OMX_VIDEO_CodingDolbyVision,/**< Dolby Vision */
OMX_VIDEO_CodingKhronosExtensions = 0x6F000000, /**< Reserved region for introducing Khronos Standard Extensions */
OMX_VIDEO_CodingVendorStartUnused = 0x7F000000, /**< Reserved region for introducing Vendor Extensions */
OMX_VIDEO_CodingMax = 0x7FFFFFFF
diff --git a/include/media/openmax/OMX_VideoExt.h b/include/media/openmax/OMX_VideoExt.h
index 3971bc5..4ae4c88 100644
--- a/include/media/openmax/OMX_VideoExt.h
+++ b/include/media/openmax/OMX_VideoExt.h
@@ -75,6 +75,36 @@
OMX_VIDEO_VP8LevelMax = 0x7FFFFFFF
} OMX_VIDEO_VP8LEVELTYPE;
+/** VP9 profiles */
+typedef enum OMX_VIDEO_VP9PROFILETYPE {
+ OMX_VIDEO_VP9Profile0 = 0x0,
+ OMX_VIDEO_VP9Profile1 = 0x1,
+ OMX_VIDEO_VP9Profile2 = 0x2,
+ OMX_VIDEO_VP9Profile3 = 0x3,
+ OMX_VIDEO_VP9ProfileUnknown = 0x6EFFFFFF,
+ OMX_VIDEO_VP9ProfileMax = 0x7FFFFFFF
+} OMX_VIDEO_VP9PROFILETYPE;
+
+/** VP9 levels */
+typedef enum OMX_VIDEO_VP9LEVELTYPE {
+ OMX_VIDEO_VP9Level1 = 0x0,
+ OMX_VIDEO_VP9Level11 = 0x1,
+ OMX_VIDEO_VP9Level2 = 0x2,
+ OMX_VIDEO_VP9Level21 = 0x4,
+ OMX_VIDEO_VP9Level3 = 0x8,
+ OMX_VIDEO_VP9Level31 = 0x10,
+ OMX_VIDEO_VP9Level4 = 0x20,
+ OMX_VIDEO_VP9Level41 = 0x40,
+ OMX_VIDEO_VP9Level5 = 0x80,
+ OMX_VIDEO_VP9Level51 = 0x100,
+ OMX_VIDEO_VP9Level52 = 0x200,
+ OMX_VIDEO_VP9Level6 = 0x400,
+ OMX_VIDEO_VP9Level61 = 0x800,
+ OMX_VIDEO_VP9Level62 = 0x1000,
+ OMX_VIDEO_VP9LevelUnknown = 0x6EFFFFFF,
+ OMX_VIDEO_VP9LevelMax = 0x7FFFFFFF
+} OMX_VIDEO_VP9LEVELTYPE;
+
/** VP8 Param */
typedef struct OMX_VIDEO_PARAM_VP8TYPE {
OMX_U32 nSize;
@@ -185,13 +215,14 @@
OMX_VIDEO_HEVCHighTiermax = 0x7FFFFFFF
} OMX_VIDEO_HEVCLEVELTYPE;
-/** Structure for controlling HEVC video encoding and decoding */
+/** Structure for controlling HEVC video encoding */
typedef struct OMX_VIDEO_PARAM_HEVCTYPE {
OMX_U32 nSize;
OMX_VERSIONTYPE nVersion;
OMX_U32 nPortIndex;
OMX_VIDEO_HEVCPROFILETYPE eProfile;
OMX_VIDEO_HEVCLEVELTYPE eLevel;
+ OMX_U32 nKeyFrameInterval;
} OMX_VIDEO_PARAM_HEVCTYPE;
/** Structure to define if dependent slice segments should be used */
@@ -216,6 +247,33 @@
// following) the render information for the last frame.
} OMX_VIDEO_RENDEREVENTTYPE;
+/** Dolby Vision Profile enum type */
+typedef enum OMX_VIDEO_DOLBYVISIONPROFILETYPE {
+ OMX_VIDEO_DolbyVisionProfileUnknown = 0x0,
+ OMX_VIDEO_DolbyVisionProfileDvavDer = 0x1,
+ OMX_VIDEO_DolbyVisionProfileDvavDen = 0x2,
+ OMX_VIDEO_DolbyVisionProfileDvheDer = 0x3,
+ OMX_VIDEO_DolbyVisionProfileDvheDen = 0x4,
+ OMX_VIDEO_DolbyVisionProfileDvheDtr = 0x5,
+ OMX_VIDEO_DolbyVisionProfileDvheStn = 0x6,
+ OMX_VIDEO_DolbyVisionProfileMax = 0x7FFFFFFF
+} OMX_VIDEO_DOLBYVISIONPROFILETYPE;
+
+/** Dolby Vision Level enum type */
+typedef enum OMX_VIDEO_DOLBYVISIONLEVELTYPE {
+ OMX_VIDEO_DolbyVisionLevelUnknown = 0x0,
+ OMX_VIDEO_DolbyVisionLevelHd24 = 0x1,
+ OMX_VIDEO_DolbyVisionLevelHd30 = 0x2,
+ OMX_VIDEO_DolbyVisionLevelFhd24 = 0x4,
+ OMX_VIDEO_DolbyVisionLevelFhd30 = 0x8,
+ OMX_VIDEO_DolbyVisionLevelFhd60 = 0x10,
+ OMX_VIDEO_DolbyVisionLevelUhd24 = 0x20,
+ OMX_VIDEO_DolbyVisionLevelUhd30 = 0x40,
+ OMX_VIDEO_DolbyVisionLevelUhd48 = 0x80,
+ OMX_VIDEO_DolbyVisionLevelUhd60 = 0x100,
+ OMX_VIDEO_DolbyVisionLevelmax = 0x7FFFFFFF
+} OMX_VIDEO_DOLBYVISIONLEVELTYPE;
+
#ifdef __cplusplus
}
#endif /* __cplusplus */
diff --git a/libs/binder/IPCThreadState.cpp b/libs/binder/IPCThreadState.cpp
index a237684..1f6bda2 100644
--- a/libs/binder/IPCThreadState.cpp
+++ b/libs/binder/IPCThreadState.cpp
@@ -287,12 +287,18 @@
return new IPCThreadState;
}
- if (gShutdown) return NULL;
+ if (gShutdown) {
+ ALOGW("Calling IPCThreadState::self() during shutdown is dangerous, expect a crash.\n");
+ return NULL;
+ }
pthread_mutex_lock(&gTLSMutex);
if (!gHaveTLS) {
- if (pthread_key_create(&gTLS, threadDestructor) != 0) {
+ int key_create_value = pthread_key_create(&gTLS, threadDestructor);
+ if (key_create_value != 0) {
pthread_mutex_unlock(&gTLSMutex);
+ ALOGW("IPCThreadState::self() unable to create TLS key, expect a crash: %s\n",
+ strerror(key_create_value));
return NULL;
}
gHaveTLS = true;
diff --git a/libs/binder/Parcel.cpp b/libs/binder/Parcel.cpp
index 10cdee6..d3fe158 100644
--- a/libs/binder/Parcel.cpp
+++ b/libs/binder/Parcel.cpp
@@ -17,41 +17,43 @@
#define LOG_TAG "Parcel"
//#define LOG_NDEBUG 0
-#include <binder/Parcel.h>
+#include <errno.h>
+#include <inttypes.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <sys/mman.h>
+#include <sys/stat.h>
+#include <sys/types.h>
+#include <unistd.h>
-#include <binder/IPCThreadState.h>
#include <binder/Binder.h>
#include <binder/BpBinder.h>
+#include <binder/IPCThreadState.h>
+#include <binder/Parcel.h>
#include <binder/ProcessState.h>
#include <binder/Status.h>
#include <binder/TextOutput.h>
-#include <errno.h>
+#include <cutils/ashmem.h>
#include <utils/Debug.h>
+#include <utils/Flattenable.h>
#include <utils/Log.h>
+#include <utils/misc.h>
#include <utils/String8.h>
#include <utils/String16.h>
-#include <utils/misc.h>
-#include <utils/Flattenable.h>
-#include <cutils/ashmem.h>
#include <private/binder/binder_module.h>
#include <private/binder/Static.h>
-#include <inttypes.h>
-#include <stdio.h>
-#include <stdlib.h>
-#include <stdint.h>
-#include <sys/mman.h>
-
#ifndef INT32_MAX
#define INT32_MAX ((int32_t)(2147483647))
#endif
#define LOG_REFS(...)
-//#define LOG_REFS(...) ALOG(LOG_DEBUG, "Parcel", __VA_ARGS__)
+//#define LOG_REFS(...) ALOG(LOG_DEBUG, LOG_TAG, __VA_ARGS__)
#define LOG_ALLOC(...)
-//#define LOG_ALLOC(...) ALOG(LOG_DEBUG, "Parcel", __VA_ARGS__)
+//#define LOG_ALLOC(...) ALOG(LOG_DEBUG, LOG_TAG, __VA_ARGS__)
// ---------------------------------------------------------------------------
@@ -121,8 +123,10 @@
return;
}
case BINDER_TYPE_FD: {
- if (obj.cookie != 0) {
- if (outAshmemSize != NULL) {
+ if ((obj.cookie != 0) && (outAshmemSize != NULL)) {
+ struct stat st;
+ int ret = fstat(obj.handle, &st);
+ if (!ret && S_ISCHR(st.st_mode)) {
// If we own an ashmem fd, keep track of how much memory it refers to.
int size = ashmem_get_size_region(obj.handle);
if (size > 0) {
@@ -171,15 +175,19 @@
return;
}
case BINDER_TYPE_FD: {
- if (outAshmemSize != NULL) {
- if (obj.cookie != 0) {
- int size = ashmem_get_size_region(obj.handle);
- if (size > 0) {
- *outAshmemSize -= size;
+ if (obj.cookie != 0) { // owned
+ if (outAshmemSize != NULL) {
+ struct stat st;
+ int ret = fstat(obj.handle, &st);
+ if (!ret && S_ISCHR(st.st_mode)) {
+ int size = ashmem_get_size_region(obj.handle);
+ if (size > 0) {
+ *outAshmemSize -= size;
+ }
}
-
- close(obj.handle);
}
+
+ close(obj.handle);
}
return;
}
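The `fstat`/`S_ISCHR` test added above distinguishes ashmem descriptors (character devices on Linux) from other owned fds before adjusting the ashmem accounting. The same classification can be sketched standalone; `is_char_device` is an illustrative helper, not libbinder code:

```cpp
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

// Returns true when fd refers to a character device -- the same test
// the Parcel change uses to decide whether an owned fd may be ashmem.
bool is_char_device(int fd) {
    struct stat st;
    if (fstat(fd, &st) != 0) {
        return false;  // fstat failed; treat as "not a char device"
    }
    return S_ISCHR(st.st_mode);
}
```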
@@ -741,6 +749,37 @@
return NULL;
}
+status_t Parcel::writeUtf8AsUtf16(const std::string& str) {
+ const uint8_t* strData = (uint8_t*)str.data();
+ const size_t strLen = str.length();
+ const ssize_t utf16Len = utf8_to_utf16_length(strData, strLen);
+ if (utf16Len < 0 || utf16Len > std::numeric_limits<int32_t>::max()) {
+ return BAD_VALUE;
+ }
+
+ status_t err = writeInt32(utf16Len);
+ if (err) {
+ return err;
+ }
+
+ // Allocate enough bytes to hold our converted string and its terminating NULL.
+ void* dst = writeInplace((utf16Len + 1) * sizeof(char16_t));
+ if (!dst) {
+ return NO_MEMORY;
+ }
+
+ utf8_to_utf16(strData, strLen, (char16_t*)dst);
+
+ return NO_ERROR;
+}
+
+status_t Parcel::writeUtf8AsUtf16(const std::unique_ptr<std::string>& str) {
+ if (!str) {
+ return writeInt32(-1);
+ }
+ return writeUtf8AsUtf16(*str);
+}
+
status_t Parcel::writeByteVector(const std::unique_ptr<std::vector<int8_t>>& val)
{
if (!val) {
@@ -844,6 +883,15 @@
return writeNullableTypedVector(val, &Parcel::writeString16);
}
+status_t Parcel::writeUtf8VectorAsUtf16Vector(
+ const std::unique_ptr<std::vector<std::unique_ptr<std::string>>>& val) {
+ return writeNullableTypedVector(val, &Parcel::writeUtf8AsUtf16);
+}
+
+status_t Parcel::writeUtf8VectorAsUtf16Vector(const std::vector<std::string>& val) {
+ return writeTypedVector(val, &Parcel::writeUtf8AsUtf16);
+}
+
status_t Parcel::writeInt32(int32_t val)
{
return writeAligned(val);
@@ -1483,6 +1531,14 @@
return readTypedVector(val, &Parcel::readString16);
}
+status_t Parcel::readUtf8VectorFromUtf16Vector(
+ std::unique_ptr<std::vector<std::unique_ptr<std::string>>>* val) const {
+ return readNullableTypedVector(val, &Parcel::readUtf8FromUtf16);
+}
+
+status_t Parcel::readUtf8VectorFromUtf16Vector(std::vector<std::string>* val) const {
+ return readTypedVector(val, &Parcel::readUtf8FromUtf16);
+}
status_t Parcel::readInt32(int32_t *pArg) const
{
@@ -1641,6 +1697,46 @@
return int8_t(readInt32());
}
+status_t Parcel::readUtf8FromUtf16(std::string* str) const {
+ size_t utf16Size = 0;
+ const char16_t* src = readString16Inplace(&utf16Size);
+ if (!src) {
+ return UNEXPECTED_NULL;
+ }
+
+ // Save ourselves the trouble, we're done.
+ if (utf16Size == 0u) {
+ str->clear();
+ return NO_ERROR;
+ }
+
+ ssize_t utf8Size = utf16_to_utf8_length(src, utf16Size);
+ if (utf8Size < 0) {
+ return BAD_VALUE;
+ }
+ // Note that while it is probably safe to assume string::resize keeps a
+ // spare byte around for the trailing null, we're going to be explicit.
+ str->resize(utf8Size + 1);
+ utf16_to_utf8(src, utf16Size, &((*str)[0]));
+ str->resize(utf8Size);
+ return NO_ERROR;
+}
+
+status_t Parcel::readUtf8FromUtf16(std::unique_ptr<std::string>* str) const {
+ const int32_t start = dataPosition();
+ int32_t size;
+ status_t status = readInt32(&size);
+ str->reset();
+
+ if (status != OK || size < 0) {
+ return status;
+ }
+
+ setDataPosition(start);
+ str->reset(new std::string());
+ return readUtf8FromUtf16(str->get());
+}
+
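`readUtf8FromUtf16` above follows a two-pass pattern: compute the exact UTF-8 length first, resize the destination once, then convert in place. A self-contained sketch of the same pattern, deliberately simplified to BMP code points (no surrogate pairs) and using a hand-rolled encoder instead of the libutils `utf16_to_utf8*` helpers:

```cpp
#include <cstddef>
#include <string>

// Pass 1: UTF-8 bytes needed for one BMP code point.
static size_t utf8LenOf(char16_t c) {
    if (c < 0x80) return 1;
    if (c < 0x800) return 2;
    return 3;  // BMP only; surrogate pairs are omitted in this sketch
}

// Two-pass convert: measure, resize once, then encode in place --
// the same shape as Parcel::readUtf8FromUtf16 above.
std::string utf16ToUtf8Bmp(const std::u16string& src) {
    size_t utf8Size = 0;
    for (char16_t c : src) utf8Size += utf8LenOf(c);

    std::string out;
    out.resize(utf8Size);  // single allocation for the whole result
    size_t i = 0;
    for (char16_t c : src) {
        if (c < 0x80) {
            out[i++] = static_cast<char>(c);
        } else if (c < 0x800) {
            out[i++] = static_cast<char>(0xC0 | (c >> 6));
            out[i++] = static_cast<char>(0x80 | (c & 0x3F));
        } else {
            out[i++] = static_cast<char>(0xE0 | (c >> 12));
            out[i++] = static_cast<char>(0x80 | ((c >> 6) & 0x3F));
            out[i++] = static_cast<char>(0x80 | (c & 0x3F));
        }
    }
    return out;
}
```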
const char* Parcel::readCString() const
{
const size_t avail = mDataSize-mDataPos;
diff --git a/libs/gui/Android.mk b/libs/gui/Android.mk
index 8a965dd..635020e 100644
--- a/libs/gui/Android.mk
+++ b/libs/gui/Android.mk
@@ -36,6 +36,8 @@
# Don't warn about struct padding
LOCAL_CPPFLAGS += -Wno-padded
+LOCAL_CPPFLAGS += -DDEBUG_ONLY_CODE=$(if $(filter userdebug eng,$(TARGET_BUILD_VARIANT)),1,0)
+
LOCAL_SRC_FILES := \
IGraphicBufferConsumer.cpp \
IConsumerListener.cpp \
diff --git a/libs/gui/BufferItem.cpp b/libs/gui/BufferItem.cpp
index de8ff70..036ef1e 100644
--- a/libs/gui/BufferItem.cpp
+++ b/libs/gui/BufferItem.cpp
@@ -39,7 +39,8 @@
mTransformToDisplayInverse(false),
mSurfaceDamage(),
mSingleBufferMode(false),
- mQueuedBuffer(true) {
+ mQueuedBuffer(true),
+ mIsStale(false) {
}
BufferItem::~BufferItem() {}
diff --git a/libs/gui/BufferQueueConsumer.cpp b/libs/gui/BufferQueueConsumer.cpp
index 6f9f21f..92285e5 100644
--- a/libs/gui/BufferQueueConsumer.cpp
+++ b/libs/gui/BufferQueueConsumer.cpp
@@ -20,6 +20,12 @@
#define ATRACE_TAG ATRACE_TAG_GRAPHICS
//#define LOG_NDEBUG 0
+#if DEBUG_ONLY_CODE
+#define VALIDATE_CONSISTENCY() do { mCore->validateConsistencyLocked(); } while (0)
+#else
+#define VALIDATE_CONSISTENCY()
+#endif
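The `VALIDATE_CONSISTENCY` macro above compiles to a real call only when `DEBUG_ONLY_CODE` is baked in (userdebug/eng builds, per the Android.mk change), so release builds pay nothing for the checks. A minimal sketch of the same compile-time-gated pattern; `CHECK_INVARIANT` and `increment` are illustrative, not libgui code:

```cpp
#include <cassert>

// Stand-in for the build-variant flag set via -DDEBUG_ONLY_CODE=...
#define DEBUG_ONLY_CODE 1

// Expands to a real check only in debug-enabled builds; otherwise the
// macro disappears entirely and costs nothing at runtime.
#if DEBUG_ONLY_CODE
#define CHECK_INVARIANT(cond) do { assert(cond); } while (0)
#else
#define CHECK_INVARIANT(cond)
#endif

int increment(int x) {
    CHECK_INVARIANT(x >= 0);
    return x + 1;
}
```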
+
#include <gui/BufferItem.h>
#include <gui/BufferQueueConsumer.h>
#include <gui/BufferQueueCore.h>
@@ -49,7 +55,7 @@
// buffer so that the consumer can successfully set up the newly acquired
// buffer before releasing the old one.
int numAcquiredBuffers = 0;
- for (int s = 0; s < BufferQueueDefs::NUM_BUFFER_SLOTS; ++s) {
+ for (int s : mCore->mActiveBuffers) {
if (mSlots[s].mBufferState.isAcquired()) {
++numAcquiredBuffers;
}
@@ -133,7 +139,8 @@
BQ_LOGV("acquireBuffer: drop desire=%" PRId64 " expect=%" PRId64
" size=%zu",
desiredPresent, expectedPresent, mCore->mQueue.size());
- if (mCore->stillTracking(front)) {
+
+ if (!front->mIsStale) {
// Front buffer is still in mSlots, so mark the slot as free
mSlots[front->mSlot].mBufferState.freeQueued();
@@ -144,13 +151,17 @@
mSlots[front->mSlot].mBufferState.isFree()) {
mSlots[front->mSlot].mBufferState.mShared = false;
}
- // Don't put the shared buffer on the free list.
+
+ // Don't put the shared buffer on the free list
if (!mSlots[front->mSlot].mBufferState.isShared()) {
+ mCore->mActiveBuffers.erase(front->mSlot);
mCore->mFreeBuffers.push_back(front->mSlot);
}
+
listener = mCore->mConnectedProducerListener;
++numDroppedBuffers;
}
+
mCore->mQueue.erase(front);
front = mCore->mQueue.begin();
}
@@ -205,6 +216,7 @@
outBuffer->mSurfaceDamage = Region::INVALID_REGION;
outBuffer->mSingleBufferMode = true;
outBuffer->mQueuedBuffer = false;
+ outBuffer->mIsStale = false;
} else {
slot = front->mSlot;
*outBuffer = *front;
@@ -216,10 +228,9 @@
BQ_LOGV("acquireBuffer: acquiring { slot=%d/%" PRIu64 " buffer=%p }",
slot, outBuffer->mFrameNumber, outBuffer->mGraphicBuffer->handle);
- // If the front buffer is still being tracked, update its slot state
- if (mCore->stillTracking(outBuffer)) {
+
+ if (!outBuffer->mIsStale) {
mSlots[slot].mAcquireCalled = true;
- mSlots[slot].mNeedsCleanupOnRelease = false;
// Don't decrease the queue count if the BufferItem wasn't
// previously in the queue. This happens in single buffer mode when
// the queue is empty and the BufferItem is created above.
@@ -247,7 +258,7 @@
ATRACE_INT(mCore->mConsumerName.string(), mCore->mQueue.size());
- mCore->validateConsistencyLocked();
+ VALIDATE_CONSISTENCY();
}
if (listener != NULL) {
@@ -270,7 +281,7 @@
return NO_INIT;
}
- if (mCore->mSingleBufferMode) {
+ if (mCore->mSingleBufferMode || slot == mCore->mSingleBufferSlot) {
BQ_LOGE("detachBuffer: detachBuffer not allowed in single buffer "
"mode");
return BAD_VALUE;
@@ -287,9 +298,11 @@
}
mSlots[slot].mBufferState.detachConsumer();
- mCore->freeBufferLocked(slot);
+ mCore->mActiveBuffers.erase(slot);
+ mCore->mFreeSlots.insert(slot);
+ mCore->clearBufferSlotLocked(slot);
mCore->mDequeueCondition.broadcast();
- mCore->validateConsistencyLocked();
+ VALIDATE_CONSISTENCY();
return NO_ERROR;
}
@@ -316,7 +329,7 @@
// Make sure we don't have too many acquired buffers
int numAcquiredBuffers = 0;
- for (int s = 0; s < BufferQueueDefs::NUM_BUFFER_SLOTS; ++s) {
+ for (int s : mCore->mActiveBuffers) {
if (mSlots[s].mBufferState.isAcquired()) {
++numAcquiredBuffers;
}
@@ -351,14 +364,14 @@
return NO_MEMORY;
}
+ mCore->mActiveBuffers.insert(found);
*outSlot = found;
ATRACE_BUFFER_INDEX(*outSlot);
BQ_LOGV("attachBuffer: returning slot %d", *outSlot);
mSlots[*outSlot].mGraphicBuffer = buffer;
mSlots[*outSlot].mBufferState.attachConsumer();
- mSlots[*outSlot].mAttachedByConsumer = true;
- mSlots[*outSlot].mNeedsCleanupOnRelease = false;
+ mSlots[*outSlot].mNeedsReallocation = true;
mSlots[*outSlot].mFence = Fence::NO_FENCE;
mSlots[*outSlot].mFrameNumber = 0;
@@ -379,7 +392,7 @@
// for attached buffers.
mSlots[*outSlot].mAcquireCalled = false;
- mCore->validateConsistencyLocked();
+ VALIDATE_CONSISTENCY();
return NO_ERROR;
}
@@ -411,41 +424,35 @@
return STALE_BUFFER_SLOT;
}
-
- if (mSlots[slot].mBufferState.isAcquired()) {
- mSlots[slot].mEglDisplay = eglDisplay;
- mSlots[slot].mEglFence = eglFence;
- mSlots[slot].mFence = releaseFence;
- mSlots[slot].mBufferState.release();
-
- // After leaving single buffer mode, the shared buffer will
- // still be around. Mark it as no longer shared if this
- // operation causes it to be free.
- if (!mCore->mSingleBufferMode &&
- mSlots[slot].mBufferState.isFree()) {
- mSlots[slot].mBufferState.mShared = false;
- }
- // Don't put the shared buffer on the free list.
- if (!mSlots[slot].mBufferState.isShared()) {
- mCore->mFreeBuffers.push_back(slot);
- }
-
- listener = mCore->mConnectedProducerListener;
- BQ_LOGV("releaseBuffer: releasing slot %d", slot);
- } else if (mSlots[slot].mNeedsCleanupOnRelease) {
- BQ_LOGV("releaseBuffer: releasing a stale buffer slot %d "
- "(state = %s)", slot, mSlots[slot].mBufferState.string());
- mSlots[slot].mNeedsCleanupOnRelease = false;
- return STALE_BUFFER_SLOT;
- } else {
+ if (!mSlots[slot].mBufferState.isAcquired()) {
BQ_LOGE("releaseBuffer: attempted to release buffer slot %d "
"but its state was %s", slot,
mSlots[slot].mBufferState.string());
return BAD_VALUE;
}
+ mSlots[slot].mEglDisplay = eglDisplay;
+ mSlots[slot].mEglFence = eglFence;
+ mSlots[slot].mFence = releaseFence;
+ mSlots[slot].mBufferState.release();
+
+ // After leaving single buffer mode, the shared buffer will
+ // still be around. Mark it as no longer shared if this
+ // operation causes it to be free.
+ if (!mCore->mSingleBufferMode && mSlots[slot].mBufferState.isFree()) {
+ mSlots[slot].mBufferState.mShared = false;
+ }
+ // Don't put the shared buffer on the free list.
+ if (!mSlots[slot].mBufferState.isShared()) {
+ mCore->mActiveBuffers.erase(slot);
+ mCore->mFreeBuffers.push_back(slot);
+ }
+
+ listener = mCore->mConnectedProducerListener;
+ BQ_LOGV("releaseBuffer: releasing slot %d", slot);
+
mCore->mDequeueCondition.broadcast();
- mCore->validateConsistencyLocked();
+ VALIDATE_CONSISTENCY();
} // Autolock scope
// Call back without lock held
@@ -497,6 +504,7 @@
mCore->mConsumerListener = NULL;
mCore->mQueue.clear();
mCore->freeAllBuffersLocked();
+ mCore->mSingleBufferSlot = BufferQueueCore::INVALID_BUFFER_SLOT;
mCore->mDequeueCondition.broadcast();
return NO_ERROR;
}
@@ -579,6 +587,15 @@
return BAD_VALUE;
}
+ int delta = mCore->getMaxBufferCountLocked(mCore->mAsyncMode,
+ mCore->mDequeueBufferCannotBlock, bufferCount) -
+ mCore->getMaxBufferCountLocked();
+ if (!mCore->adjustAvailableSlotsLocked(delta)) {
+ BQ_LOGE("setMaxBufferCount: BufferQueue failed to adjust the number of "
+ "available slots. Delta = %d", delta);
+ return BAD_VALUE;
+ }
+
mCore->mMaxBufferCount = bufferCount;
return NO_ERROR;
}
@@ -594,26 +611,59 @@
return BAD_VALUE;
}
- Mutex::Autolock lock(mCore->mMutex);
+ sp<IConsumerListener> listener;
+ { // Autolock scope
+ Mutex::Autolock lock(mCore->mMutex);
+ mCore->waitWhileAllocatingLocked();
- if (mCore->mConnectedApi != BufferQueueCore::NO_CONNECTED_API) {
- BQ_LOGE("setMaxAcquiredBufferCount: producer is already connected");
- return INVALID_OPERATION;
+ if (mCore->mIsAbandoned) {
+ BQ_LOGE("setMaxAcquiredBufferCount: consumer is abandoned");
+ return NO_INIT;
+ }
+
+ // The new maxAcquiredBuffers count should not be violated by the number
+ // of currently acquired buffers
+ int acquiredCount = 0;
+ for (int slot : mCore->mActiveBuffers) {
+ if (mSlots[slot].mBufferState.isAcquired()) {
+ acquiredCount++;
+ }
+ }
+ if (acquiredCount > maxAcquiredBuffers) {
+ BQ_LOGE("setMaxAcquiredBufferCount: the requested maxAcquiredBuffer "
+ "count (%d) is below the current acquired buffer count (%d)",
+ maxAcquiredBuffers, acquiredCount);
+ return BAD_VALUE;
+ }
+
+ if ((maxAcquiredBuffers + mCore->mMaxDequeuedBufferCount +
+ (mCore->mAsyncMode || mCore->mDequeueBufferCannotBlock ? 1 : 0))
+ > mCore->mMaxBufferCount) {
+ BQ_LOGE("setMaxAcquiredBufferCount: %d acquired buffers would "
+ "exceed the maxBufferCount (%d) (maxDequeued %d async %d)",
+ maxAcquiredBuffers, mCore->mMaxBufferCount,
+ mCore->mMaxDequeuedBufferCount, mCore->mAsyncMode ||
+ mCore->mDequeueBufferCannotBlock);
+ return BAD_VALUE;
+ }
+
+ int delta = maxAcquiredBuffers - mCore->mMaxAcquiredBufferCount;
+ if (!mCore->adjustAvailableSlotsLocked(delta)) {
+ return BAD_VALUE;
+ }
+
+ BQ_LOGV("setMaxAcquiredBufferCount: %d", maxAcquiredBuffers);
+ mCore->mMaxAcquiredBufferCount = maxAcquiredBuffers;
+ VALIDATE_CONSISTENCY();
+ if (delta < 0) {
+ listener = mCore->mConsumerListener;
+ }
+ }
+ // Call back without lock held
+ if (listener != NULL) {
+ listener->onBuffersReleased();
}
- if ((maxAcquiredBuffers + mCore->mMaxDequeuedBufferCount +
- (mCore->mAsyncMode || mCore->mDequeueBufferCannotBlock ? 1 : 0)) >
- mCore->mMaxBufferCount) {
- BQ_LOGE("setMaxAcquiredBufferCount: %d acquired buffers would exceed "
- "the maxBufferCount (%d) (maxDequeued %d async %d)",
- maxAcquiredBuffers, mCore->mMaxBufferCount,
- mCore->mMaxDequeuedBufferCount, mCore->mAsyncMode ||
- mCore->mDequeueBufferCannotBlock);
- return BAD_VALUE;
- }
-
- BQ_LOGV("setMaxAcquiredBufferCount: %d", maxAcquiredBuffers);
- mCore->mMaxAcquiredBufferCount = maxAcquiredBuffers;
return NO_ERROR;
}
diff --git a/libs/gui/BufferQueueCore.cpp b/libs/gui/BufferQueueCore.cpp
index c24ad19..f785db0 100644
--- a/libs/gui/BufferQueueCore.cpp
+++ b/libs/gui/BufferQueueCore.cpp
@@ -20,6 +20,12 @@
#define EGL_EGLEXT_PROTOTYPES
+#if DEBUG_ONLY_CODE
+#define VALIDATE_CONSISTENCY() do { validateConsistencyLocked(); } while (0)
+#else
+#define VALIDATE_CONSISTENCY()
+#endif
+
#include <inttypes.h>
#include <gui/BufferItem.h>
@@ -52,6 +58,8 @@
mQueue(),
mFreeSlots(),
mFreeBuffers(),
+ mUnusedSlots(),
+ mActiveBuffers(),
mDequeueCondition(),
mDequeueBufferCannotBlock(false),
mDefaultBufferFormat(PIXEL_FORMAT_RGBA_8888),
@@ -82,8 +90,14 @@
BQ_LOGE("createGraphicBufferAlloc failed");
}
}
- for (int slot = 0; slot < BufferQueueDefs::NUM_BUFFER_SLOTS; ++slot) {
- mFreeSlots.insert(slot);
+
+ int numStartingBuffers = getMaxBufferCountLocked();
+ for (int s = 0; s < numStartingBuffers; s++) {
+ mFreeSlots.insert(s);
+ }
+ for (int s = numStartingBuffers; s < BufferQueueDefs::NUM_BUFFER_SLOTS;
+ s++) {
+ mUnusedSlots.push_front(s);
}
}
@@ -113,32 +127,26 @@
mDefaultHeight, mDefaultBufferFormat, mTransformHint, mQueue.size(),
fifo.string());
- // Trim the free buffers so as to not spam the dump
- int maxBufferCount = 0;
- for (int s = BufferQueueDefs::NUM_BUFFER_SLOTS - 1; s >= 0; --s) {
- const BufferSlot& slot(mSlots[s]);
- if (!slot.mBufferState.isFree() ||
- slot.mGraphicBuffer != NULL) {
- maxBufferCount = s + 1;
- break;
- }
+ for (int s : mActiveBuffers) {
+ const sp<GraphicBuffer>& buffer(mSlots[s].mGraphicBuffer);
+ result.appendFormat("%s%s[%02d:%p] state=%-8s, %p [%4ux%4u:%4u,%3X]\n",
+ prefix, (mSlots[s].mBufferState.isAcquired()) ? ">" : " ", s,
+ buffer.get(), mSlots[s].mBufferState.string(), buffer->handle,
+ buffer->width, buffer->height, buffer->stride, buffer->format);
+
+ }
+ for (int s : mFreeBuffers) {
+ const sp<GraphicBuffer>& buffer(mSlots[s].mGraphicBuffer);
+ result.appendFormat("%s [%02d:%p] state=%-8s, %p [%4ux%4u:%4u,%3X]\n",
+ prefix, s, buffer.get(), mSlots[s].mBufferState.string(),
+ buffer->handle, buffer->width, buffer->height, buffer->stride,
+ buffer->format);
}
- for (int s = 0; s < maxBufferCount; ++s) {
- const BufferSlot& slot(mSlots[s]);
- const sp<GraphicBuffer>& buffer(slot.mGraphicBuffer);
- result.appendFormat("%s%s[%02d:%p] state=%-8s", prefix,
- (slot.mBufferState.isAcquired()) ? ">" : " ",
- s, buffer.get(),
- slot.mBufferState.string());
-
- if (buffer != NULL) {
- result.appendFormat(", %p [%4ux%4u:%4u,%3X]", buffer->handle,
- buffer->width, buffer->height, buffer->stride,
- buffer->format);
- }
-
- result.append("\n");
+ for (int s : mFreeSlots) {
+ const sp<GraphicBuffer>& buffer(mSlots[s].mGraphicBuffer);
+ result.appendFormat("%s [%02d:%p] state=%-8s\n", prefix, s,
+ buffer.get(), mSlots[s].mBufferState.string());
}
}
@@ -156,44 +164,33 @@
return getMinUndequeuedBufferCountLocked() + 1;
}
+int BufferQueueCore::getMaxBufferCountLocked(bool asyncMode,
+ bool dequeueBufferCannotBlock, int maxBufferCount) const {
+ int maxCount = mMaxAcquiredBufferCount + mMaxDequeuedBufferCount +
+ ((asyncMode || dequeueBufferCannotBlock) ? 1 : 0);
+ maxCount = std::min(maxBufferCount, maxCount);
+ return maxCount;
+}
+
int BufferQueueCore::getMaxBufferCountLocked() const {
int maxBufferCount = mMaxAcquiredBufferCount + mMaxDequeuedBufferCount +
- (mAsyncMode || mDequeueBufferCannotBlock ? 1 : 0);
+ ((mAsyncMode || mDequeueBufferCannotBlock) ? 1 : 0);
// limit maxBufferCount by mMaxBufferCount always
maxBufferCount = std::min(mMaxBufferCount, maxBufferCount);
- // Any buffers that are dequeued by the producer or sitting in the queue
- // waiting to be consumed need to have their slots preserved. Such buffers
- // will temporarily keep the max buffer count up until the slots no longer
- // need to be preserved.
- for (int s = maxBufferCount; s < BufferQueueDefs::NUM_BUFFER_SLOTS; ++s) {
- BufferState state = mSlots[s].mBufferState;
- if (state.isQueued() || state.isDequeued()) {
- maxBufferCount = s + 1;
- }
- }
-
return maxBufferCount;
}
-void BufferQueueCore::freeBufferLocked(int slot, bool validate) {
- BQ_LOGV("freeBufferLocked: slot %d", slot);
- bool hadBuffer = mSlots[slot].mGraphicBuffer != NULL;
+void BufferQueueCore::clearBufferSlotLocked(int slot) {
+ BQ_LOGV("clearBufferSlotLocked: slot %d", slot);
+
mSlots[slot].mGraphicBuffer.clear();
- if (mSlots[slot].mBufferState.isAcquired()) {
- mSlots[slot].mNeedsCleanupOnRelease = true;
- }
- if (!mSlots[slot].mBufferState.isFree()) {
- mFreeSlots.insert(slot);
- } else if (hadBuffer) {
- // If the slot was FREE, but we had a buffer, we need to move this slot
- // from the free buffers list to the the free slots list
- mFreeBuffers.remove(slot);
- mFreeSlots.insert(slot);
- }
- mSlots[slot].mAcquireCalled = false;
+ mSlots[slot].mBufferState.reset();
+ mSlots[slot].mRequestBufferCalled = false;
mSlots[slot].mFrameNumber = 0;
+ mSlots[slot].mAcquireCalled = false;
+ mSlots[slot].mNeedsReallocation = true;
// Destroy fence as BufferQueue now takes ownership
if (mSlots[slot].mEglFence != EGL_NO_SYNC_KHR) {
@@ -201,35 +198,72 @@
mSlots[slot].mEglFence = EGL_NO_SYNC_KHR;
}
mSlots[slot].mFence = Fence::NO_FENCE;
- if (validate) {
- validateConsistencyLocked();
- }
+ mSlots[slot].mEglDisplay = EGL_NO_DISPLAY;
}
void BufferQueueCore::freeAllBuffersLocked() {
- mBufferHasBeenQueued = false;
- for (int s = 0; s < BufferQueueDefs::NUM_BUFFER_SLOTS; ++s) {
- freeBufferLocked(s, false);
- mSlots[s].mBufferState.reset();
+ for (int s : mFreeSlots) {
+ clearBufferSlotLocked(s);
}
- mSingleBufferSlot = INVALID_BUFFER_SLOT;
- validateConsistencyLocked();
+
+ for (int s : mFreeBuffers) {
+ mFreeSlots.insert(s);
+ clearBufferSlotLocked(s);
+ }
+ mFreeBuffers.clear();
+
+ for (int s : mActiveBuffers) {
+ mFreeSlots.insert(s);
+ clearBufferSlotLocked(s);
+ }
+ mActiveBuffers.clear();
+
+ for (auto& b : mQueue) {
+ b.mIsStale = true;
+ }
+
+ VALIDATE_CONSISTENCY();
}
-bool BufferQueueCore::stillTracking(const BufferItem* item) const {
- const BufferSlot& slot = mSlots[item->mSlot];
-
- BQ_LOGV("stillTracking: item { slot=%d/%" PRIu64 " buffer=%p } "
- "slot { slot=%d/%" PRIu64 " buffer=%p }",
- item->mSlot, item->mFrameNumber,
- (item->mGraphicBuffer.get() ? item->mGraphicBuffer->handle : 0),
- item->mSlot, slot.mFrameNumber,
- (slot.mGraphicBuffer.get() ? slot.mGraphicBuffer->handle : 0));
-
- // Compare item with its original buffer slot. We can check the slot as
- // the buffer would not be moved to a different slot by the producer.
- return (slot.mGraphicBuffer != NULL) &&
- (item->mGraphicBuffer->handle == slot.mGraphicBuffer->handle);
+bool BufferQueueCore::adjustAvailableSlotsLocked(int delta) {
+ if (delta >= 0) {
+ // If we're going to fail, do so before modifying anything
+ if (delta > static_cast<int>(mUnusedSlots.size())) {
+ return false;
+ }
+ while (delta > 0) {
+ if (mUnusedSlots.empty()) {
+ return false;
+ }
+ int slot = mUnusedSlots.back();
+ mUnusedSlots.pop_back();
+ mFreeSlots.insert(slot);
+ delta--;
+ }
+ } else {
+ // If we're going to fail, do so before modifying anything
+ if (-delta > static_cast<int>(mFreeSlots.size() +
+ mFreeBuffers.size())) {
+ return false;
+ }
+ while (delta < 0) {
+ if (!mFreeSlots.empty()) {
+ auto slot = mFreeSlots.begin();
+ clearBufferSlotLocked(*slot);
+ mUnusedSlots.push_back(*slot);
+ mFreeSlots.erase(slot);
+ } else if (!mFreeBuffers.empty()) {
+ int slot = mFreeBuffers.back();
+ clearBufferSlotLocked(slot);
+ mUnusedSlots.push_back(slot);
+ mFreeBuffers.pop_back();
+ } else {
+ return false;
+ }
+ delta++;
+ }
+ }
+ return true;
}
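`adjustAvailableSlotsLocked` above moves slots between `mUnusedSlots` and the free containers in either direction, checking capacity before mutating anything so a failure leaves state untouched. The same check-then-move logic can be sketched standalone, with plain std containers standing in for the BufferQueueCore members (the `Slots` struct and function names are illustrative):

```cpp
#include <list>
#include <set>

// Simplified mirror of BufferQueueCore's slot containers.
struct Slots {
    std::set<int> freeSlots;     // free, no buffer attached
    std::list<int> freeBuffers;  // free, buffer still attached
    std::list<int> unusedSlots;  // outside the active window
};

// Grow (delta > 0) or shrink (delta < 0) the available-slot window.
// Returns false, without modifying anything, if there is no room to move.
bool adjustAvailableSlots(Slots& s, int delta) {
    if (delta >= 0) {
        // Fail before mutating, as the real code does.
        if (delta > static_cast<int>(s.unusedSlots.size())) return false;
        while (delta-- > 0) {
            int slot = s.unusedSlots.back();
            s.unusedSlots.pop_back();
            s.freeSlots.insert(slot);
        }
    } else {
        if (-delta > static_cast<int>(s.freeSlots.size() +
                                      s.freeBuffers.size())) {
            return false;
        }
        while (delta++ < 0) {
            if (!s.freeSlots.empty()) {   // prefer slots with no buffer
                auto it = s.freeSlots.begin();
                s.unusedSlots.push_back(*it);
                s.freeSlots.erase(it);
            } else {                      // then give up attached buffers
                s.unusedSlots.push_back(s.freeBuffers.back());
                s.freeBuffers.pop_back();
            }
        }
    }
    return true;
}
```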
void BufferQueueCore::waitWhileAllocatingLocked() const {
@@ -239,49 +273,131 @@
}
}
+#if DEBUG_ONLY_CODE
void BufferQueueCore::validateConsistencyLocked() const {
static const useconds_t PAUSE_TIME = 0;
+ int allocatedSlots = 0;
for (int slot = 0; slot < BufferQueueDefs::NUM_BUFFER_SLOTS; ++slot) {
bool isInFreeSlots = mFreeSlots.count(slot) != 0;
bool isInFreeBuffers =
std::find(mFreeBuffers.cbegin(), mFreeBuffers.cend(), slot) !=
mFreeBuffers.cend();
- if (mSlots[slot].mBufferState.isFree() &&
- !mSlots[slot].mBufferState.isShared()) {
- if (mSlots[slot].mGraphicBuffer == NULL) {
- if (!isInFreeSlots) {
- BQ_LOGE("Slot %d is FREE but is not in mFreeSlots", slot);
- usleep(PAUSE_TIME);
- }
- if (isInFreeBuffers) {
- BQ_LOGE("Slot %d is in mFreeSlots "
- "but is also in mFreeBuffers", slot);
- usleep(PAUSE_TIME);
- }
- } else {
- if (!isInFreeBuffers) {
- BQ_LOGE("Slot %d is FREE but is not in mFreeBuffers", slot);
- usleep(PAUSE_TIME);
- }
- if (isInFreeSlots) {
- BQ_LOGE("Slot %d is in mFreeBuffers "
- "but is also in mFreeSlots", slot);
- usleep(PAUSE_TIME);
- }
- }
- } else {
+ bool isInActiveBuffers = mActiveBuffers.count(slot) != 0;
+ bool isInUnusedSlots =
+ std::find(mUnusedSlots.cbegin(), mUnusedSlots.cend(), slot) !=
+ mUnusedSlots.cend();
+
+ if (isInFreeSlots || isInFreeBuffers || isInActiveBuffers) {
+ allocatedSlots++;
+ }
+
+ if (isInUnusedSlots) {
if (isInFreeSlots) {
- BQ_LOGE("Slot %d is in mFreeSlots but is not FREE (%s)",
- slot, mSlots[slot].mBufferState.string());
+ BQ_LOGE("Slot %d is in mUnusedSlots and in mFreeSlots", slot);
usleep(PAUSE_TIME);
}
if (isInFreeBuffers) {
- BQ_LOGE("Slot %d is in mFreeBuffers but is not FREE (%s)",
- slot, mSlots[slot].mBufferState.string());
+ BQ_LOGE("Slot %d is in mUnusedSlots and in mFreeBuffers", slot);
usleep(PAUSE_TIME);
}
+ if (isInActiveBuffers) {
+ BQ_LOGE("Slot %d is in mUnusedSlots and in mActiveBuffers",
+ slot);
+ usleep(PAUSE_TIME);
+ }
+ if (!mSlots[slot].mBufferState.isFree()) {
+ BQ_LOGE("Slot %d is in mUnusedSlots but is not FREE", slot);
+ usleep(PAUSE_TIME);
+ }
+ if (mSlots[slot].mGraphicBuffer != NULL) {
+ BQ_LOGE("Slot %d is in mUnusedSlots but has an active buffer",
+ slot);
+ usleep(PAUSE_TIME);
+ }
+ } else if (isInFreeSlots) {
+ if (isInUnusedSlots) {
+ BQ_LOGE("Slot %d is in mFreeSlots and in mUnusedSlots", slot);
+ usleep(PAUSE_TIME);
+ }
+ if (isInFreeBuffers) {
+ BQ_LOGE("Slot %d is in mFreeSlots and in mFreeBuffers", slot);
+ usleep(PAUSE_TIME);
+ }
+ if (isInActiveBuffers) {
+ BQ_LOGE("Slot %d is in mFreeSlots and in mActiveBuffers", slot);
+ usleep(PAUSE_TIME);
+ }
+ if (!mSlots[slot].mBufferState.isFree()) {
+ BQ_LOGE("Slot %d is in mFreeSlots but is not FREE", slot);
+ usleep(PAUSE_TIME);
+ }
+ if (mSlots[slot].mGraphicBuffer != NULL) {
+ BQ_LOGE("Slot %d is in mFreeSlots but has a buffer",
+ slot);
+ usleep(PAUSE_TIME);
+ }
+ } else if (isInFreeBuffers) {
+ if (isInUnusedSlots) {
+ BQ_LOGE("Slot %d is in mFreeBuffers and in mUnusedSlots", slot);
+ usleep(PAUSE_TIME);
+ }
+ if (isInFreeSlots) {
+ BQ_LOGE("Slot %d is in mFreeBuffers and in mFreeSlots", slot);
+ usleep(PAUSE_TIME);
+ }
+ if (isInActiveBuffers) {
+ BQ_LOGE("Slot %d is in mFreeBuffers and in mActiveBuffers",
+ slot);
+ usleep(PAUSE_TIME);
+ }
+ if (!mSlots[slot].mBufferState.isFree()) {
+ BQ_LOGE("Slot %d is in mFreeBuffers but is not FREE", slot);
+ usleep(PAUSE_TIME);
+ }
+ if (mSlots[slot].mGraphicBuffer == NULL) {
+ BQ_LOGE("Slot %d is in mFreeBuffers but has no buffer", slot);
+ usleep(PAUSE_TIME);
+ }
+ } else if (isInActiveBuffers) {
+ if (isInUnusedSlots) {
+ BQ_LOGE("Slot %d is in mActiveBuffers and in mUnusedSlots",
+ slot);
+ usleep(PAUSE_TIME);
+ }
+ if (isInFreeSlots) {
+ BQ_LOGE("Slot %d is in mActiveBuffers and in mFreeSlots", slot);
+ usleep(PAUSE_TIME);
+ }
+ if (isInFreeBuffers) {
+ BQ_LOGE("Slot %d is in mActiveBuffers and in mFreeBuffers",
+ slot);
+ usleep(PAUSE_TIME);
+ }
+ if (mSlots[slot].mBufferState.isFree() &&
+ !mSlots[slot].mBufferState.isShared()) {
+ BQ_LOGE("Slot %d is in mActiveBuffers but is FREE", slot);
+ usleep(PAUSE_TIME);
+ }
+ if (mSlots[slot].mGraphicBuffer == NULL && !mIsAllocating) {
+ BQ_LOGE("Slot %d is in mActiveBuffers but has no buffer", slot);
+ usleep(PAUSE_TIME);
+ }
+ } else {
+ BQ_LOGE("Slot %d isn't in any of mUnusedSlots, mFreeSlots, "
+ "mFreeBuffers, or mActiveBuffers", slot);
+ usleep(PAUSE_TIME);
}
}
+
+ if (allocatedSlots != getMaxBufferCountLocked()) {
+ BQ_LOGE("Number of allocated slots is incorrect. Allocated = %d, "
+ "Should be %d (%zu free slots, %zu free buffers, "
+ "%zu activeBuffers, %zu unusedSlots)", allocatedSlots,
+ getMaxBufferCountLocked(), mFreeSlots.size(),
+ mFreeBuffers.size(), mActiveBuffers.size(),
+ mUnusedSlots.size());
+ }
}
+#endif
} // namespace android
diff --git a/libs/gui/BufferQueueProducer.cpp b/libs/gui/BufferQueueProducer.cpp
index 56f1a09..9d42464 100644
--- a/libs/gui/BufferQueueProducer.cpp
+++ b/libs/gui/BufferQueueProducer.cpp
@@ -20,6 +20,12 @@
#define ATRACE_TAG ATRACE_TAG_GRAPHICS
//#define LOG_NDEBUG 0
+#if DEBUG_ONLY_CODE
+#define VALIDATE_CONSISTENCY() do { mCore->validateConsistencyLocked(); } while (0)
+#else
+#define VALIDATE_CONSISTENCY()
+#endif
+
#define EGL_EGLEXT_PROTOTYPES
#include <gui/BufferItem.h>
@@ -95,13 +101,20 @@
return NO_INIT;
}
- // There must be no dequeued buffers when changing the buffer count.
- for (int s = 0; s < BufferQueueDefs::NUM_BUFFER_SLOTS; ++s) {
+ // The new maxDequeuedBuffer count should not be violated by the number
+ // of currently dequeued buffers
+ int dequeuedCount = 0;
+ for (int s : mCore->mActiveBuffers) {
if (mSlots[s].mBufferState.isDequeued()) {
- BQ_LOGE("setMaxDequeuedBufferCount: buffer owned by producer");
- return BAD_VALUE;
+ dequeuedCount++;
}
}
+ if (dequeuedCount > maxDequeuedBuffers) {
+ BQ_LOGE("setMaxDequeuedBufferCount: the requested maxDequeuedBuffer "
+ "count (%d) is below the current dequeued buffer count (%d)",
+ maxDequeuedBuffers, dequeuedCount);
+ return BAD_VALUE;
+ }
int bufferCount = mCore->getMinUndequeuedBufferCountLocked();
bufferCount += maxDequeuedBuffers;
@@ -128,14 +141,16 @@
return BAD_VALUE;
}
- // Here we are guaranteed that the producer doesn't have any dequeued
- // buffers and will release all of its buffer references. We don't
- // clear the queue, however, so that currently queued buffers still
- // get displayed.
- mCore->freeAllBuffersLocked();
+ int delta = maxDequeuedBuffers - mCore->mMaxDequeuedBufferCount;
+ if (!mCore->adjustAvailableSlotsLocked(delta)) {
+ return BAD_VALUE;
+ }
mCore->mMaxDequeuedBufferCount = maxDequeuedBuffers;
+ VALIDATE_CONSISTENCY();
+ if (delta < 0) {
+ listener = mCore->mConsumerListener;
+ }
mCore->mDequeueCondition.broadcast();
- listener = mCore->mConsumerListener;
} // Autolock scope
// Call back without lock held
@@ -172,7 +187,17 @@
return BAD_VALUE;
}
+ int delta = mCore->getMaxBufferCountLocked(async,
+ mCore->mDequeueBufferCannotBlock, mCore->mMaxBufferCount)
+ - mCore->getMaxBufferCountLocked();
+
+ if (!mCore->adjustAvailableSlotsLocked(delta)) {
+ BQ_LOGE("setAsyncMode: BufferQueue failed to adjust the number of "
+ "available slots. Delta = %d", delta);
+ return BAD_VALUE;
+ }
mCore->mAsyncMode = async;
+ VALIDATE_CONSISTENCY();
mCore->mDequeueCondition.broadcast();
listener = mCore->mConsumerListener;
} // Autolock scope
@@ -188,25 +213,22 @@
if (mCore->mFreeBuffers.empty()) {
return BufferQueueCore::INVALID_BUFFER_SLOT;
}
- auto slot = mCore->mFreeBuffers.front();
+ int slot = mCore->mFreeBuffers.front();
mCore->mFreeBuffers.pop_front();
return slot;
}
-int BufferQueueProducer::getFreeSlotLocked(int maxBufferCount) const {
+int BufferQueueProducer::getFreeSlotLocked() const {
if (mCore->mFreeSlots.empty()) {
return BufferQueueCore::INVALID_BUFFER_SLOT;
}
- auto slot = *(mCore->mFreeSlots.begin());
- if (slot < maxBufferCount) {
- mCore->mFreeSlots.erase(slot);
- return slot;
- }
- return BufferQueueCore::INVALID_BUFFER_SLOT;
+ int slot = *(mCore->mFreeSlots.begin());
+ mCore->mFreeSlots.erase(slot);
+ return slot;
}
status_t BufferQueueProducer::waitForFreeSlotThenRelock(FreeSlotCaller caller,
- int* found, status_t* returnFlags) const {
+ int* found) const {
auto callerString = (caller == FreeSlotCaller::Dequeue) ?
"dequeueBuffer" : "attachBuffer";
bool tryAgain = true;
@@ -216,20 +238,9 @@
return NO_INIT;
}
- const int maxBufferCount = mCore->getMaxBufferCountLocked();
-
- // Free up any buffers that are in slots beyond the max buffer count
- for (int s = maxBufferCount; s < BufferQueueDefs::NUM_BUFFER_SLOTS; ++s) {
- assert(mSlots[s].mBufferState.isFree());
- if (mSlots[s].mGraphicBuffer != NULL) {
- mCore->freeBufferLocked(s);
- *returnFlags |= RELEASE_ALL_BUFFERS;
- }
- }
-
int dequeuedCount = 0;
int acquiredCount = 0;
- for (int s = 0; s < maxBufferCount; ++s) {
+ for (int s : mCore->mActiveBuffers) {
if (mSlots[s].mBufferState.isDequeued()) {
++dequeuedCount;
}
@@ -254,6 +265,7 @@
// our slots are empty but we have many buffers in the queue. This can
// cause us to run out of memory if we outrun the consumer. Wait here if
// it looks like we have too many buffers queued up.
+ const int maxBufferCount = mCore->getMaxBufferCountLocked();
bool tooManyBuffers = mCore->mQueue.size()
> static_cast<size_t>(maxBufferCount);
if (tooManyBuffers) {
@@ -268,15 +280,15 @@
} else {
if (caller == FreeSlotCaller::Dequeue) {
// If we're calling this from dequeue, prefer free buffers
- auto slot = getFreeBufferLocked();
+ int slot = getFreeBufferLocked();
if (slot != BufferQueueCore::INVALID_BUFFER_SLOT) {
*found = slot;
} else if (mCore->mAllowAllocation) {
- *found = getFreeSlotLocked(maxBufferCount);
+ *found = getFreeSlotLocked();
}
} else {
// If we're calling this from attach, prefer free slots
- auto slot = getFreeSlotLocked(maxBufferCount);
+ int slot = getFreeSlotLocked();
if (slot != BufferQueueCore::INVALID_BUFFER_SLOT) {
*found = slot;
} else {
@@ -369,7 +381,7 @@
int found = BufferItem::INVALID_BUFFER_SLOT;
while (found == BufferItem::INVALID_BUFFER_SLOT) {
status_t status = waitForFreeSlotThenRelock(FreeSlotCaller::Dequeue,
- &found, &returnFlags);
+ &found);
if (status != NO_ERROR) {
return status;
}
@@ -388,24 +400,36 @@
// requested attributes, we free it and attempt to get another one.
if (!mCore->mAllowAllocation) {
if (buffer->needsReallocation(width, height, format, usage)) {
- if (mCore->mSingleBufferMode &&
- mCore->mSingleBufferSlot == found) {
+ if (mCore->mSingleBufferSlot == found) {
BQ_LOGE("dequeueBuffer: cannot re-allocate a shared "
"buffer");
return BAD_VALUE;
}
-
- mCore->freeBufferLocked(found);
+ mCore->mFreeSlots.insert(found);
+ mCore->clearBufferSlotLocked(found);
found = BufferItem::INVALID_BUFFER_SLOT;
continue;
}
}
}
+ const sp<GraphicBuffer>& buffer(mSlots[found].mGraphicBuffer);
+ if (mCore->mSingleBufferSlot == found &&
+ buffer->needsReallocation(width, height, format, usage)) {
+ BQ_LOGE("dequeueBuffer: cannot re-allocate a shared "
+ "buffer");
+
+ return BAD_VALUE;
+ }
+
+ if (mCore->mSingleBufferSlot != found) {
+ mCore->mActiveBuffers.insert(found);
+ }
*outSlot = found;
ATRACE_BUFFER_INDEX(found);
- attachedByConsumer = mSlots[found].mAttachedByConsumer;
+ attachedByConsumer = mSlots[found].mNeedsReallocation;
+ mSlots[found].mNeedsReallocation = false;
mSlots[found].mBufferState.dequeue();
@@ -417,7 +441,6 @@
mSlots[found].mBufferState.mShared = true;
}
- const sp<GraphicBuffer>& buffer(mSlots[found].mGraphicBuffer);
if ((buffer == NULL) ||
buffer->needsReallocation(width, height, format, usage))
{
@@ -452,8 +475,6 @@
*outFence = mSlots[found].mFence;
mSlots[found].mEglFence = EGL_NO_SYNC_KHR;
mSlots[found].mFence = Fence::NO_FENCE;
-
- mCore->validateConsistencyLocked();
} // Autolock scope
if (returnFlags & BUFFER_NEEDS_REALLOCATION) {
@@ -481,6 +502,8 @@
BQ_LOGE("dequeueBuffer: BufferQueue has been abandoned");
return NO_INIT;
}
+
+ VALIDATE_CONSISTENCY();
} // Autolock scope
}
@@ -527,9 +550,8 @@
return NO_INIT;
}
- if (mCore->mSingleBufferMode) {
- BQ_LOGE("detachBuffer: cannot detach a buffer in single buffer"
- "mode");
+ if (mCore->mSingleBufferMode || mCore->mSingleBufferSlot == slot) {
+ BQ_LOGE("detachBuffer: cannot detach a buffer in single buffer mode");
return BAD_VALUE;
}
@@ -548,9 +570,11 @@
}
mSlots[slot].mBufferState.detachProducer();
- mCore->freeBufferLocked(slot);
+ mCore->mActiveBuffers.erase(slot);
+ mCore->mFreeSlots.insert(slot);
+ mCore->clearBufferSlotLocked(slot);
mCore->mDequeueCondition.broadcast();
- mCore->validateConsistencyLocked();
+ VALIDATE_CONSISTENCY();
return NO_ERROR;
}
@@ -593,13 +617,14 @@
int found = mCore->mFreeBuffers.front();
mCore->mFreeBuffers.remove(found);
+ mCore->mFreeSlots.insert(found);
BQ_LOGV("detachNextBuffer detached slot %d", found);
*outBuffer = mSlots[found].mGraphicBuffer;
*outFence = mSlots[found].mFence;
- mCore->freeBufferLocked(found);
- mCore->validateConsistencyLocked();
+ mCore->clearBufferSlotLocked(found);
+ VALIDATE_CONSISTENCY();
return NO_ERROR;
}
@@ -629,7 +654,7 @@
}
if (mCore->mSingleBufferMode) {
- BQ_LOGE("attachBuffer: cannot atach a buffer in single buffer mode");
+ BQ_LOGE("attachBuffer: cannot attach a buffer in single buffer mode");
return BAD_VALUE;
}
@@ -644,8 +669,7 @@
status_t returnFlags = NO_ERROR;
int found;
- status_t status = waitForFreeSlotThenRelock(FreeSlotCaller::Attach, &found,
- &returnFlags);
+ status_t status = waitForFreeSlotThenRelock(FreeSlotCaller::Attach, &found);
if (status != NO_ERROR) {
return status;
}
@@ -666,8 +690,9 @@
mSlots[*outSlot].mEglFence = EGL_NO_SYNC_KHR;
mSlots[*outSlot].mFence = Fence::NO_FENCE;
mSlots[*outSlot].mRequestBufferCalled = true;
-
- mCore->validateConsistencyLocked();
+ mSlots[*outSlot].mAcquireCalled = false;
+ mCore->mActiveBuffers.insert(found);
+ VALIDATE_CONSISTENCY();
return returnFlags;
}
@@ -722,11 +747,9 @@
return NO_INIT;
}
- const int maxBufferCount = mCore->getMaxBufferCountLocked();
-
- if (slot < 0 || slot >= maxBufferCount) {
+ if (slot < 0 || slot >= BufferQueueDefs::NUM_BUFFER_SLOTS) {
BQ_LOGE("queueBuffer: slot index %d out of range [0, %d)",
- slot, maxBufferCount);
+ slot, BufferQueueDefs::NUM_BUFFER_SLOTS);
return BAD_VALUE;
} else if (!mSlots[slot].mBufferState.isDequeued()) {
BQ_LOGE("queueBuffer: slot %d is not owned by the producer "
@@ -807,9 +830,8 @@
// state to see if we need to replace it
BufferQueueCore::Fifo::iterator front(mCore->mQueue.begin());
if (front->mIsDroppable) {
- // If the front queued buffer is still being tracked, we first
- // mark it as freed
- if (mCore->stillTracking(front)) {
+ // If the front buffer is still being tracked, we first
+ // mark it as freed
+ if (!front->mIsStale) {
mSlots[front->mSlot].mBufferState.freeQueued();
// After leaving single buffer mode, the shared buffer will
@@ -821,9 +843,11 @@
}
// Don't put the shared buffer on the free list.
if (!mSlots[front->mSlot].mBufferState.isShared()) {
- mCore->mFreeBuffers.push_front(front->mSlot);
+ mCore->mActiveBuffers.erase(front->mSlot);
+ mCore->mFreeBuffers.push_back(front->mSlot);
}
}
+
// Overwrite the droppable buffer with the incoming one
*front = item;
frameReplacedListener = mCore->mConsumerListener;
@@ -845,7 +869,7 @@
// Take a ticket for the callback functions
callbackTicket = mNextCallbackTicket++;
- mCore->validateConsistencyLocked();
+ VALIDATE_CONSISTENCY();
} // Autolock scope
// Don't send the GraphicBuffer through the callback, and don't send
@@ -926,11 +950,13 @@
// Don't put the shared buffer on the free list.
if (!mSlots[slot].mBufferState.isShared()) {
- mCore->mFreeBuffers.push_front(slot);
+ mCore->mActiveBuffers.erase(slot);
+ mCore->mFreeBuffers.push_back(slot);
}
+
mSlots[slot].mFence = fence;
mCore->mDequeueCondition.broadcast();
- mCore->validateConsistencyLocked();
+ VALIDATE_CONSISTENCY();
return NO_ERROR;
}
@@ -1020,6 +1046,17 @@
return BAD_VALUE;
}
+ int delta = mCore->getMaxBufferCountLocked(mCore->mAsyncMode,
+ mDequeueTimeout < 0 ?
+ mCore->mConsumerControlledByApp && producerControlledByApp : false,
+ mCore->mMaxBufferCount) -
+ mCore->getMaxBufferCountLocked();
+ if (!mCore->adjustAvailableSlotsLocked(delta)) {
+ BQ_LOGE("connect: BufferQueue failed to adjust the number of available "
+ "slots. Delta = %d", delta);
+ return BAD_VALUE;
+ }
+
int status = NO_ERROR;
switch (api) {
case NATIVE_WINDOW_API_EGL:
@@ -1056,8 +1093,9 @@
mCore->mDequeueBufferCannotBlock =
mCore->mConsumerControlledByApp && producerControlledByApp;
}
- mCore->mAllowAllocation = true;
+ mCore->mAllowAllocation = true;
+ VALIDATE_CONSISTENCY();
return status;
}
@@ -1094,6 +1132,8 @@
token->unlinkToDeath(
static_cast<IBinder::DeathRecipient*>(this));
}
+ mCore->mSingleBufferSlot =
+ BufferQueueCore::INVALID_BUFFER_SLOT;
mCore->mConnectedProducerListener = NULL;
mCore->mConnectedApi = BufferQueueCore::NO_CONNECTED_API;
mCore->mSidebandStream.clear();
@@ -1138,7 +1178,6 @@
PixelFormat format, uint32_t usage) {
ATRACE_CALL();
while (true) {
- Vector<int> freeSlots;
size_t newBufferCount = 0;
uint32_t allocWidth = 0;
uint32_t allocHeight = 0;
@@ -1154,32 +1193,11 @@
return;
}
- int currentBufferCount = 0;
- for (int slot = 0; slot < BufferQueueDefs::NUM_BUFFER_SLOTS; ++slot) {
- if (mSlots[slot].mGraphicBuffer != NULL) {
- ++currentBufferCount;
- } else {
- if (!mSlots[slot].mBufferState.isFree()) {
- BQ_LOGE("allocateBuffers: slot %d without buffer is not FREE",
- slot);
- continue;
- }
-
- freeSlots.push_back(slot);
- }
- }
-
- int maxBufferCount = mCore->getMaxBufferCountLocked();
- BQ_LOGV("allocateBuffers: allocating from %d buffers up to %d buffers",
- currentBufferCount, maxBufferCount);
- if (maxBufferCount <= currentBufferCount)
- return;
- newBufferCount =
- static_cast<size_t>(maxBufferCount - currentBufferCount);
- if (freeSlots.size() < newBufferCount) {
- BQ_LOGE("allocateBuffers: ran out of free slots");
+ newBufferCount = mCore->mFreeSlots.size();
+ if (newBufferCount == 0) {
return;
}
+
allocWidth = width > 0 ? width : mCore->mDefaultWidth;
allocHeight = height > 0 ? height : mCore->mDefaultHeight;
allocFormat = format != 0 ? format : mCore->mDefaultBufferFormat;
@@ -1221,29 +1239,28 @@
}
for (size_t i = 0; i < newBufferCount; ++i) {
- int slot = freeSlots[i];
- if (!mSlots[slot].mBufferState.isFree()) {
- // A consumer allocated the FREE slot with attachBuffer. Discard the buffer we
- // allocated.
- BQ_LOGV("allocateBuffers: slot %d was acquired while allocating. "
- "Dropping allocated buffer.", slot);
+ if (mCore->mFreeSlots.empty()) {
+ BQ_LOGV("allocateBuffers: a slot was occupied while "
+ "allocating. Dropping allocated buffer.");
continue;
}
- mCore->freeBufferLocked(slot); // Clean up the slot first
- mSlots[slot].mGraphicBuffer = buffers[i];
- mSlots[slot].mFence = Fence::NO_FENCE;
+ auto slot = mCore->mFreeSlots.begin();
+ mCore->clearBufferSlotLocked(*slot); // Clean up the slot first
+ mSlots[*slot].mGraphicBuffer = buffers[i];
+ mSlots[*slot].mFence = Fence::NO_FENCE;
// The slot is still on the free slots list. Since we just
// attached a buffer to it, move the slot to the free buffer list.
mCore->mFreeSlots.erase(slot);
- mCore->mFreeBuffers.push_front(slot);
+ mCore->mFreeBuffers.push_front(*slot);
- BQ_LOGV("allocateBuffers: allocated a new buffer in slot %d", slot);
+ BQ_LOGV("allocateBuffers: allocated a new buffer in slot %d",
+ *slot);
}
mCore->mIsAllocating = false;
mCore->mIsAllocatingCondition.broadcast();
- mCore->validateConsistencyLocked();
+ VALIDATE_CONSISTENCY();
} // Autolock scope
}
}
@@ -1297,8 +1314,18 @@
BQ_LOGV("setDequeueTimeout: %" PRId64, timeout);
Mutex::Autolock lock(mCore->mMutex);
+ int delta = mCore->getMaxBufferCountLocked(mCore->mAsyncMode, false,
+ mCore->mMaxBufferCount) - mCore->getMaxBufferCountLocked();
+ if (!mCore->adjustAvailableSlotsLocked(delta)) {
+ BQ_LOGE("setDequeueTimeout: BufferQueue failed to adjust the number of "
+ "available slots. Delta = %d", delta);
+ return BAD_VALUE;
+ }
+
mDequeueTimeout = timeout;
mCore->mDequeueBufferCannotBlock = false;
+
+ VALIDATE_CONSISTENCY();
return NO_ERROR;
}
diff --git a/libs/gui/IGraphicBufferConsumer.cpp b/libs/gui/IGraphicBufferConsumer.cpp
index d2f482e..a75569f 100644
--- a/libs/gui/IGraphicBufferConsumer.cpp
+++ b/libs/gui/IGraphicBufferConsumer.cpp
@@ -304,7 +304,7 @@
CHECK_INTERFACE(IGraphicBufferConsumer, data, reply);
sp<GraphicBuffer> buffer = new GraphicBuffer();
data.read(*buffer.get());
- int slot;
+ int slot = -1;
int result = attachBuffer(&slot, buffer);
reply->writeInt32(slot);
reply->writeInt32(result);
diff --git a/libs/gui/IGraphicBufferProducer.cpp b/libs/gui/IGraphicBufferProducer.cpp
index 0cca58d..2478601 100644
--- a/libs/gui/IGraphicBufferProducer.cpp
+++ b/libs/gui/IGraphicBufferProducer.cpp
@@ -466,6 +466,7 @@
QueueBufferOutput* const output =
reinterpret_cast<QueueBufferOutput *>(
reply->writeInplace(sizeof(QueueBufferOutput)));
+ memset(output, 0, sizeof(QueueBufferOutput));
status_t result = queueBuffer(buf, input, output);
reply->writeInt32(result);
return NO_ERROR;
diff --git a/libs/gui/ISensorServer.cpp b/libs/gui/ISensorServer.cpp
index f581b5c..3a4c7e4 100644
--- a/libs/gui/ISensorServer.cpp
+++ b/libs/gui/ISensorServer.cpp
@@ -35,7 +35,8 @@
enum {
GET_SENSOR_LIST = IBinder::FIRST_CALL_TRANSACTION,
CREATE_SENSOR_EVENT_CONNECTION,
- ENABLE_DATA_INJECTION
+ ENABLE_DATA_INJECTION,
+ GET_DYNAMIC_SENSOR_LIST,
};
class BpSensorServer : public BpInterface<ISensorServer>
@@ -65,6 +66,23 @@
return v;
}
+ virtual Vector<Sensor> getDynamicSensorList(const String16& opPackageName)
+ {
+ Parcel data, reply;
+ data.writeInterfaceToken(ISensorServer::getInterfaceDescriptor());
+ data.writeString16(opPackageName);
+ remote()->transact(GET_DYNAMIC_SENSOR_LIST, data, &reply);
+ Sensor s;
+ Vector<Sensor> v;
+ uint32_t n = reply.readUint32();
+ v.setCapacity(n);
+ while (n--) {
+ reply.read(s);
+ v.add(s);
+ }
+ return v;
+ }
+
virtual sp<ISensorEventConnection> createSensorEventConnection(const String8& packageName,
int mode, const String16& opPackageName)
{
@@ -124,6 +142,17 @@
reply->writeInt32(static_cast<int32_t>(ret));
return NO_ERROR;
}
+ case GET_DYNAMIC_SENSOR_LIST: {
+ CHECK_INTERFACE(ISensorServer, data, reply);
+ const String16& opPackageName = data.readString16();
+ Vector<Sensor> v(getDynamicSensorList(opPackageName));
+ size_t n = v.size();
+ reply->writeUint32(static_cast<uint32_t>(n));
+ for (size_t i = 0; i < n; i++) {
+ reply->write(v[i]);
+ }
+ return NO_ERROR;
+ }
}
return BBinder::onTransact(code, data, reply, flags);
}
diff --git a/libs/gui/Sensor.cpp b/libs/gui/Sensor.cpp
index 0a0fc4b..0b2b942 100644
--- a/libs/gui/Sensor.cpp
+++ b/libs/gui/Sensor.cpp
@@ -188,7 +188,7 @@
if (halVersion < SENSORS_DEVICE_API_VERSION_1_3) {
mFlags |= SENSOR_FLAG_WAKE_UP;
}
- break;
+ break;
case SENSOR_TYPE_WAKE_GESTURE:
mStringType = SENSOR_STRING_TYPE_WAKE_GESTURE;
mFlags |= SENSOR_FLAG_ONE_SHOT_MODE;
@@ -217,6 +217,32 @@
mFlags |= SENSOR_FLAG_WAKE_UP;
}
break;
+ case SENSOR_TYPE_DYNAMIC_SENSOR_META:
+ mStringType = SENSOR_STRING_TYPE_DYNAMIC_SENSOR_META;
+ mFlags = SENSOR_FLAG_SPECIAL_REPORTING_MODE; // special trigger and non-wake up
+ break;
+ case SENSOR_TYPE_POSE_6DOF:
+ mStringType = SENSOR_STRING_TYPE_POSE_6DOF;
+ mFlags |= SENSOR_FLAG_CONTINUOUS_MODE;
+ break;
+ case SENSOR_TYPE_STATIONARY_DETECT:
+ mStringType = SENSOR_STRING_TYPE_STATIONARY_DETECT;
+ mFlags |= SENSOR_FLAG_ONE_SHOT_MODE;
+ if (halVersion < SENSORS_DEVICE_API_VERSION_1_3) {
+ mFlags |= SENSOR_FLAG_WAKE_UP;
+ }
+ break;
+ case SENSOR_TYPE_MOTION_DETECT:
+ mStringType = SENSOR_STRING_TYPE_MOTION_DETECT;
+ mFlags |= SENSOR_FLAG_ONE_SHOT_MODE;
+ if (halVersion < SENSORS_DEVICE_API_VERSION_1_3) {
+ mFlags |= SENSOR_FLAG_WAKE_UP;
+ }
+ break;
+ case SENSOR_TYPE_HEART_BEAT:
+ mStringType = SENSOR_STRING_TYPE_HEART_BEAT;
+ mFlags |= SENSOR_FLAG_SPECIAL_REPORTING_MODE;
+ break;
default:
// Only pipe the stringType, requiredPermission and flags for custom sensors.
if (halVersion > SENSORS_DEVICE_API_VERSION_1_0 && hwSensor->stringType) {
@@ -368,13 +394,18 @@
return ((mFlags & REPORTING_MODE_MASK) >> REPORTING_MODE_SHIFT);
}
+const Sensor::uuid_t& Sensor::getUuid() const {
+ return mUuid;
+}
+
size_t Sensor::getFlattenedSize() const
{
size_t fixedSize =
- sizeof(int32_t) * 3 +
- sizeof(float) * 4 +
- sizeof(int32_t) * 6 +
- sizeof(bool);
+ sizeof(mVersion) + sizeof(mHandle) + sizeof(mType) +
+ sizeof(mMinValue) + sizeof(mMaxValue) + sizeof(mResolution) +
+ sizeof(mPower) + sizeof(mMinDelay) + sizeof(mFifoReservedEventCount) +
+ sizeof(mFifoMaxEventCount) + sizeof(mRequiredPermissionRuntime) +
+ sizeof(mRequiredAppOp) + sizeof(mMaxDelay) + sizeof(mFlags) + sizeof(mUuid);
size_t variableSize =
sizeof(uint32_t) + FlattenableUtils::align<4>(mName.length()) +
@@ -408,6 +439,7 @@
FlattenableUtils::write(buffer, size, mRequiredAppOp);
FlattenableUtils::write(buffer, size, mMaxDelay);
FlattenableUtils::write(buffer, size, mFlags);
+ FlattenableUtils::write(buffer, size, mUuid);
return NO_ERROR;
}
@@ -419,11 +451,11 @@
return NO_MEMORY;
}
- size_t fixedSize =
- sizeof(int32_t) * 3 +
- sizeof(float) * 4 +
- sizeof(int32_t) * 5;
- if (size < fixedSize) {
+ size_t fixedSize1 =
+ sizeof(mVersion) + sizeof(mHandle) + sizeof(mType) + sizeof(mMinValue) +
+ sizeof(mMaxValue) + sizeof(mResolution) + sizeof(mPower) + sizeof(mMinDelay) +
+ sizeof(mFifoReservedEventCount) + sizeof(mFifoMaxEventCount);
+ if (size < fixedSize1) {
return NO_MEMORY;
}
@@ -444,10 +476,19 @@
if (!unflattenString8(buffer, size, mRequiredPermission)) {
return NO_MEMORY;
}
+
+ size_t fixedSize2 =
+ sizeof(mRequiredPermissionRuntime) + sizeof(mRequiredAppOp) + sizeof(mMaxDelay) +
+ sizeof(mFlags) + sizeof(mUuid);
+ if (size < fixedSize2) {
+ return NO_MEMORY;
+ }
+
FlattenableUtils::read(buffer, size, mRequiredPermissionRuntime);
FlattenableUtils::read(buffer, size, mRequiredAppOp);
FlattenableUtils::read(buffer, size, mMaxDelay);
FlattenableUtils::read(buffer, size, mFlags);
+ FlattenableUtils::read(buffer, size, mUuid);
return NO_ERROR;
}
diff --git a/libs/gui/SensorManager.cpp b/libs/gui/SensorManager.cpp
index 33608b5..225bfa8 100644
--- a/libs/gui/SensorManager.cpp
+++ b/libs/gui/SensorManager.cpp
@@ -89,19 +89,16 @@
}
SensorManager::SensorManager(const String16& opPackageName)
- : mSensorList(0), mOpPackageName(opPackageName)
-{
+ : mSensorList(0), mOpPackageName(opPackageName) {
// okay we're not locked here, but it's not needed during construction
assertStateLocked();
}
-SensorManager::~SensorManager()
-{
+SensorManager::~SensorManager() {
free(mSensorList);
}
-void SensorManager::sensorManagerDied()
-{
+void SensorManager::sensorManagerDied() {
Mutex::Autolock _l(mLock);
mSensorServer.clear();
free(mSensorList);
@@ -109,7 +106,7 @@
mSensors.clear();
}
-status_t SensorManager::assertStateLocked() const {
+status_t SensorManager::assertStateLocked() {
bool initSensorManager = false;
if (mSensorServer == NULL) {
initSensorManager = true;
@@ -136,13 +133,13 @@
}
class DeathObserver : public IBinder::DeathRecipient {
- SensorManager& mSensorManger;
+ SensorManager& mSensorManager;
virtual void binderDied(const wp<IBinder>& who) {
ALOGW("sensorservice died [%p]", who.unsafe_get());
- mSensorManger.sensorManagerDied();
+ mSensorManager.sensorManagerDied();
}
public:
- DeathObserver(SensorManager& mgr) : mSensorManger(mgr) { }
+ DeathObserver(SensorManager& mgr) : mSensorManager(mgr) { }
};
LOG_ALWAYS_FATAL_IF(mSensorServer.get() == NULL, "getService(SensorService) NULL");
@@ -164,8 +161,7 @@
return NO_ERROR;
}
-ssize_t SensorManager::getSensorList(Sensor const* const** list) const
-{
+ssize_t SensorManager::getSensorList(Sensor const* const** list) {
Mutex::Autolock _l(mLock);
status_t err = assertStateLocked();
if (err < 0) {
@@ -175,6 +171,19 @@
return static_cast<ssize_t>(mSensors.size());
}
+ssize_t SensorManager::getDynamicSensorList(Vector<Sensor> & dynamicSensors) {
+ Mutex::Autolock _l(mLock);
+ status_t err = assertStateLocked();
+ if (err < 0) {
+ return static_cast<ssize_t>(err);
+ }
+
+ dynamicSensors = mSensorServer->getDynamicSensorList(mOpPackageName);
+ size_t count = dynamicSensors.size();
+
+ return static_cast<ssize_t>(count);
+}
+
Sensor const* SensorManager::getDefaultSensor(int type)
{
Mutex::Autolock _l(mLock);
diff --git a/libs/gui/Surface.cpp b/libs/gui/Surface.cpp
index 9e90ad0..6fc55c3 100644
--- a/libs/gui/Surface.cpp
+++ b/libs/gui/Surface.cpp
@@ -759,6 +759,13 @@
*outFence = Fence::NO_FENCE;
}
+ for (int i = 0; i < NUM_BUFFER_SLOTS; i++) {
+ if (mSlots[i].buffer != NULL &&
+ mSlots[i].buffer->handle == buffer->handle) {
+ mSlots[i].buffer = NULL;
+ }
+ }
+
return NO_ERROR;
}
diff --git a/libs/gui/tests/BufferQueue_test.cpp b/libs/gui/tests/BufferQueue_test.cpp
index ac9af07..f4c47ed 100644
--- a/libs/gui/tests/BufferQueue_test.cpp
+++ b/libs/gui/tests/BufferQueue_test.cpp
@@ -179,6 +179,14 @@
sp<DummyConsumer> dc(new DummyConsumer);
mConsumer->consumerConnect(dc, false);
+ EXPECT_EQ(OK, mConsumer->setMaxBufferCount(10));
+ EXPECT_EQ(BAD_VALUE, mConsumer->setMaxAcquiredBufferCount(10));
+
+ IGraphicBufferProducer::QueueBufferOutput qbo;
+ mProducer->connect(new DummyProducerListener, NATIVE_WINDOW_API_CPU, false,
+ &qbo);
+ mProducer->setMaxDequeuedBufferCount(3);
+
int minBufferCount;
ASSERT_NO_FATAL_FAILURE(GetMinUndequeuedBufferCount(&minBufferCount));
EXPECT_EQ(BAD_VALUE, mConsumer->setMaxAcquiredBufferCount(
@@ -190,8 +198,24 @@
BufferQueue::MAX_MAX_ACQUIRED_BUFFERS+1));
EXPECT_EQ(BAD_VALUE, mConsumer->setMaxAcquiredBufferCount(100));
- EXPECT_EQ(OK, mConsumer->setMaxBufferCount(5));
- EXPECT_EQ(BAD_VALUE, mConsumer->setMaxAcquiredBufferCount(5));
+ int slot;
+ sp<Fence> fence;
+ sp<GraphicBuffer> buf;
+ IGraphicBufferProducer::QueueBufferInput qbi(0, false,
+ HAL_DATASPACE_UNKNOWN, Rect(0, 0, 1, 1),
+ NATIVE_WINDOW_SCALING_MODE_FREEZE, 0, Fence::NO_FENCE);
+ BufferItem item;
+ EXPECT_EQ(OK, mConsumer->setMaxAcquiredBufferCount(3));
+ for (int i = 0; i < 3; i++) {
+ ASSERT_EQ(IGraphicBufferProducer::BUFFER_NEEDS_REALLOCATION,
+ mProducer->dequeueBuffer(&slot, &fence, 1, 1, 0,
+ GRALLOC_USAGE_SW_READ_OFTEN));
+ ASSERT_EQ(OK, mProducer->requestBuffer(slot, &buf));
+ ASSERT_EQ(OK, mProducer->queueBuffer(slot, qbi, &qbo));
+ ASSERT_EQ(OK, mConsumer->acquireBuffer(&item, 0));
+ }
+
+ EXPECT_EQ(BAD_VALUE, mConsumer->setMaxAcquiredBufferCount(2));
}
TEST_F(BufferQueueTest, SetMaxAcquiredBufferCountWithLegalValues_Succeeds) {
@@ -199,12 +223,44 @@
sp<DummyConsumer> dc(new DummyConsumer);
mConsumer->consumerConnect(dc, false);
+ IGraphicBufferProducer::QueueBufferOutput qbo;
+ mProducer->connect(new DummyProducerListener, NATIVE_WINDOW_API_CPU, false,
+ &qbo);
+ mProducer->setMaxDequeuedBufferCount(2);
+
int minBufferCount;
ASSERT_NO_FATAL_FAILURE(GetMinUndequeuedBufferCount(&minBufferCount));
EXPECT_EQ(OK, mConsumer->setMaxAcquiredBufferCount(1));
EXPECT_EQ(OK, mConsumer->setMaxAcquiredBufferCount(2));
EXPECT_EQ(OK, mConsumer->setMaxAcquiredBufferCount(minBufferCount));
+
+ int slot;
+ sp<Fence> fence;
+ sp<GraphicBuffer> buf;
+ IGraphicBufferProducer::QueueBufferInput qbi(0, false,
+ HAL_DATASPACE_UNKNOWN, Rect(0, 0, 1, 1),
+ NATIVE_WINDOW_SCALING_MODE_FREEZE, 0, Fence::NO_FENCE);
+ BufferItem item;
+
+ ASSERT_EQ(IGraphicBufferProducer::BUFFER_NEEDS_REALLOCATION,
+ mProducer->dequeueBuffer(&slot, &fence, 1, 1, 0,
+ GRALLOC_USAGE_SW_READ_OFTEN));
+ ASSERT_EQ(OK, mProducer->requestBuffer(slot, &buf));
+ ASSERT_EQ(OK, mProducer->queueBuffer(slot, qbi, &qbo));
+ ASSERT_EQ(OK, mConsumer->acquireBuffer(&item, 0));
+
+ EXPECT_EQ(OK, mConsumer->setMaxAcquiredBufferCount(3));
+
+ for (int i = 0; i < 2; i++) {
+ ASSERT_EQ(IGraphicBufferProducer::BUFFER_NEEDS_REALLOCATION,
+ mProducer->dequeueBuffer(&slot, &fence, 1, 1, 0,
+ GRALLOC_USAGE_SW_READ_OFTEN));
+ ASSERT_EQ(OK, mProducer->requestBuffer(slot, &buf));
+ ASSERT_EQ(OK, mProducer->queueBuffer(slot, qbi, &qbo));
+ ASSERT_EQ(OK, mConsumer->acquireBuffer(&item, 0));
+ }
+
EXPECT_EQ(OK, mConsumer->setMaxAcquiredBufferCount(
BufferQueue::MAX_MAX_ACQUIRED_BUFFERS));
}
diff --git a/libs/gui/tests/IGraphicBufferProducer_test.cpp b/libs/gui/tests/IGraphicBufferProducer_test.cpp
index 882b14c..45b6463 100644
--- a/libs/gui/tests/IGraphicBufferProducer_test.cpp
+++ b/libs/gui/tests/IGraphicBufferProducer_test.cpp
@@ -502,31 +502,30 @@
ASSERT_OK(mProducer->setMaxDequeuedBufferCount(minBuffers))
<< "bufferCount: " << minBuffers;
- std::vector<DequeueBufferResult> dequeueList;
-
// Should now be able to dequeue up to minBuffers times
+ DequeueBufferResult result;
for (int i = 0; i < minBuffers; ++i) {
- DequeueBufferResult result;
EXPECT_EQ(OK, ~IGraphicBufferProducer::BUFFER_NEEDS_REALLOCATION &
(dequeueBuffer(DEFAULT_WIDTH, DEFAULT_HEIGHT, DEFAULT_FORMAT,
TEST_PRODUCER_USAGE_BITS, &result)))
<< "iteration: " << i << ", slot: " << result.slot;
-
- dequeueList.push_back(result);
- }
-
- // Cancel every buffer, so we can set buffer count again
- for (auto& result : dequeueList) {
- mProducer->cancelBuffer(result.slot, result.fence);
}
ASSERT_OK(mProducer->setMaxDequeuedBufferCount(maxBuffers));
+ // queue the first buffer to enable max dequeued buffer count checking
+ IGraphicBufferProducer::QueueBufferInput input = CreateBufferInput();
+ IGraphicBufferProducer::QueueBufferOutput output;
+ sp<GraphicBuffer> buffer;
+ ASSERT_OK(mProducer->requestBuffer(result.slot, &buffer));
+ ASSERT_OK(mProducer->queueBuffer(result.slot, input, &output));
+
// Should now be able to dequeue up to maxBuffers times
+ int dequeuedSlot = -1;
+ sp<Fence> dequeuedFence;
for (int i = 0; i < maxBuffers; ++i) {
- int dequeuedSlot = -1;
- sp<Fence> dequeuedFence;
EXPECT_EQ(OK, ~IGraphicBufferProducer::BUFFER_NEEDS_REALLOCATION &
(mProducer->dequeueBuffer(&dequeuedSlot, &dequeuedFence,
@@ -535,6 +534,12 @@
TEST_PRODUCER_USAGE_BITS)))
<< "iteration: " << i << ", slot: " << dequeuedSlot;
}
+
+ // Cancel a buffer, so we can decrease the buffer count
+ ASSERT_OK(mProducer->cancelBuffer(dequeuedSlot, dequeuedFence));
+
+ // Should now be able to decrease the max dequeued count by 1
+ ASSERT_OK(mProducer->setMaxDequeuedBufferCount(maxBuffers - 1));
}
TEST_F(IGraphicBufferProducerTest, SetMaxDequeuedBufferCount_Fails) {
@@ -553,11 +558,12 @@
EXPECT_EQ(BAD_VALUE, mProducer->setMaxDequeuedBufferCount(maxBuffers + 1))
<< "bufferCount: " << maxBuffers + 1;
- // Prerequisite to fail out a valid setBufferCount call
- {
- int dequeuedSlot = -1;
- sp<Fence> dequeuedFence;
-
+ // Set max dequeue count to 2
+ ASSERT_OK(mProducer->setMaxDequeuedBufferCount(2));
+ // Dequeue 2 buffers
+ int dequeuedSlot = -1;
+ sp<Fence> dequeuedFence;
+ for (int i = 0; i < 2; i++) {
ASSERT_EQ(OK, ~IGraphicBufferProducer::BUFFER_NEEDS_REALLOCATION &
(mProducer->dequeueBuffer(&dequeuedSlot, &dequeuedFence,
DEFAULT_WIDTH, DEFAULT_HEIGHT,
@@ -566,8 +572,8 @@
<< "slot: " << dequeuedSlot;
}
- // Client has one or more buffers dequeued
- EXPECT_EQ(BAD_VALUE, mProducer->setMaxDequeuedBufferCount(minBuffers))
+ // Client has too many buffers dequeued
+ EXPECT_EQ(BAD_VALUE, mProducer->setMaxDequeuedBufferCount(1))
<< "bufferCount: " << minBuffers;
// Abandon buffer queue
diff --git a/opengl/include/EGL/eglext.h b/opengl/include/EGL/eglext.h
index b2abdb1..267f8af 100644
--- a/opengl/include/EGL/eglext.h
+++ b/opengl/include/EGL/eglext.h
@@ -598,6 +598,19 @@
#endif
#endif
+#ifndef EGL_ANDROID_create_native_client_buffer
+#define EGL_ANDROID_create_native_client_buffer 1
+#define EGL_NATIVE_BUFFER_USAGE_ANDROID 0x3143
+#define EGL_NATIVE_BUFFER_USAGE_PROTECTED_BIT_ANDROID 0x00000001
+#define EGL_NATIVE_BUFFER_USAGE_RENDERBUFFER_ANDROID 0x00000002
+#define EGL_NATIVE_BUFFER_USAGE_TEXTURE_ANDROID 0x00000004
+#ifdef EGL_EGLEXT_PROTOTYPES
+EGLAPI EGLClientBuffer eglCreateNativeClientBufferANDROID (const EGLint *attrib_list);
+#else
+typedef EGLClientBuffer (EGLAPIENTRYP PFNEGLCREATENATIVECLIENTBUFFERANDROID) (const EGLint *attrib_list);
+#endif
+#endif
+
#ifdef __cplusplus
}
#endif
diff --git a/opengl/libs/Android.mk b/opengl/libs/Android.mk
index 3f9e332..eb86860 100644
--- a/opengl/libs/Android.mk
+++ b/opengl/libs/Android.mk
@@ -31,7 +31,7 @@
EGL/Loader.cpp \
#
-LOCAL_SHARED_LIBRARIES += libcutils libutils liblog
+LOCAL_SHARED_LIBRARIES += libcutils libutils liblog libui
LOCAL_MODULE:= libEGL
LOCAL_LDFLAGS += -Wl,--exclude-libs=ALL
LOCAL_SHARED_LIBRARIES += libdl
diff --git a/opengl/libs/EGL/eglApi.cpp b/opengl/libs/EGL/eglApi.cpp
index 5bd7464..c7e2afb 100644
--- a/opengl/libs/EGL/eglApi.cpp
+++ b/opengl/libs/EGL/eglApi.cpp
@@ -33,6 +33,8 @@
#include <cutils/properties.h>
#include <cutils/memory.h>
+#include <ui/GraphicBuffer.h>
+
#include <utils/KeyedVector.h>
#include <utils/SortedVector.h>
#include <utils/String8.h>
@@ -80,6 +82,7 @@
"EGL_KHR_get_all_proc_addresses "
"EGL_ANDROID_presentation_time "
"EGL_KHR_swap_buffers_with_damage "
+ "EGL_ANDROID_create_native_client_buffer "
;
extern char const * const gExtensionString =
"EGL_KHR_image " // mandatory
@@ -168,6 +171,10 @@
{ "eglSwapBuffersWithDamageKHR",
(__eglMustCastToProperFunctionPointerType)&eglSwapBuffersWithDamageKHR },
+ // EGL_ANDROID_native_client_buffer
+ { "eglCreateNativeClientBufferANDROID",
+ (__eglMustCastToProperFunctionPointerType)&eglCreateNativeClientBufferANDROID },
+
// EGL_KHR_partial_update
{ "eglSetDamageRegionKHR",
(__eglMustCastToProperFunctionPointerType)&eglSetDamageRegionKHR },
@@ -1770,6 +1777,97 @@
return EGL_TRUE;
}
+EGLClientBuffer eglCreateNativeClientBufferANDROID(const EGLint *attrib_list)
+{
+ clearError();
+
+ int usage = 0;
+ uint32_t width = 0;
+ uint32_t height = 0;
+ uint32_t format = 0;
+ uint32_t red_size = 0;
+ uint32_t green_size = 0;
+ uint32_t blue_size = 0;
+ uint32_t alpha_size = 0;
+
+#define GET_POSITIVE_VALUE(case_name, target) \
+ case case_name: \
+ if (value > 0) { \
+ target = value; \
+ } else { \
+ return setError(EGL_BAD_PARAMETER, (EGLClientBuffer)0); \
+ } \
+ break
+
+ if (attrib_list) {
+ while (*attrib_list != EGL_NONE) {
+ GLint attr = *attrib_list++;
+ GLint value = *attrib_list++;
+ switch (attr) {
+ GET_POSITIVE_VALUE(EGL_WIDTH, width);
+ GET_POSITIVE_VALUE(EGL_HEIGHT, height);
+ GET_POSITIVE_VALUE(EGL_RED_SIZE, red_size);
+ GET_POSITIVE_VALUE(EGL_GREEN_SIZE, green_size);
+ GET_POSITIVE_VALUE(EGL_BLUE_SIZE, blue_size);
+ GET_POSITIVE_VALUE(EGL_ALPHA_SIZE, alpha_size);
+ case EGL_NATIVE_BUFFER_USAGE_ANDROID:
+ if (value & EGL_NATIVE_BUFFER_USAGE_PROTECTED_BIT_ANDROID) {
+ usage |= GRALLOC_USAGE_PROTECTED;
+ // If we are using QCOM then add in extra bits. This
+ // should be removed before launch. These correspond to:
+ // USAGE_PRIVATE_MM_HEAP | USAGE_PRIVATE_UNCACHED
+ usage |= 0x82000000;
+ }
+ if (value & EGL_NATIVE_BUFFER_USAGE_RENDERBUFFER_ANDROID) {
+ usage |= GRALLOC_USAGE_HW_RENDER;
+ }
+ if (value & EGL_NATIVE_BUFFER_USAGE_TEXTURE_ANDROID) {
+ usage |= GRALLOC_USAGE_HW_TEXTURE;
+ }
+ // The buffer may be used for either a texture or a
+ // renderbuffer, but not both.
+ if ((value & EGL_NATIVE_BUFFER_USAGE_RENDERBUFFER_ANDROID) &&
+ (value & EGL_NATIVE_BUFFER_USAGE_TEXTURE_ANDROID)) {
+ return setError(EGL_BAD_PARAMETER, (EGLClientBuffer)0);
+ }
+ break;
+ default:
+ return setError(EGL_BAD_PARAMETER, (EGLClientBuffer)0);
+ }
+ }
+ }
+#undef GET_POSITIVE_VALUE
+
+ // Validate format.
+ if (red_size == 8 && green_size == 8 && blue_size == 8) {
+ if (alpha_size == 8) {
+ format = HAL_PIXEL_FORMAT_RGBA_8888;
+ } else {
+ format = HAL_PIXEL_FORMAT_RGB_888;
+ }
+ } else if (red_size == 5 && green_size == 6 && blue_size == 5 &&
+ alpha_size == 0) {
+ format = HAL_PIXEL_FORMAT_RGB_565;
+ } else {
+ ALOGE("Invalid native pixel format { r=%d, g=%d, b=%d, a=%d }",
+ red_size, green_size, blue_size, alpha_size);
+ return setError(EGL_BAD_PARAMETER, (EGLClientBuffer)0);
+ }
+
+ GraphicBuffer* gBuffer = new GraphicBuffer(width, height, format, usage);
+ const status_t err = gBuffer->initCheck();
+ if (err != NO_ERROR) {
+ ALOGE("Unable to create native buffer { w=%d, h=%d, f=%d, u=%#x }: %#x",
+ width, height, format, usage, err);
+ // Destroy the buffer.
+ sp<GraphicBuffer> holder(gBuffer);
+ return setError(EGL_BAD_ALLOC, (EGLClientBuffer)0);
+ }
+ ALOGD("Created new native buffer %p { w=%d, h=%d, f=%d, u=%#x }",
+ gBuffer, width, height, format, usage);
+ return static_cast<EGLClientBuffer>(gBuffer->getNativeBuffer());
+}
+
// ----------------------------------------------------------------------------
// NVIDIA extensions
// ----------------------------------------------------------------------------
diff --git a/opengl/libs/EGL/egl_cache.cpp b/opengl/libs/EGL/egl_cache.cpp
index b0798a1..f368d75 100644
--- a/opengl/libs/EGL/egl_cache.cpp
+++ b/opengl/libs/EGL/egl_cache.cpp
@@ -21,6 +21,7 @@
#include "egldefs.h"
#include <fcntl.h>
+#include <inttypes.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/types.h>
@@ -306,7 +307,8 @@
// Sanity check the size before trying to mmap it.
size_t fileSize = statBuf.st_size;
if (fileSize > maxTotalSize * 2) {
- ALOGE("cache file is too large: %#llx", statBuf.st_size);
+ ALOGE("cache file is too large: %#" PRIx64,
+ static_cast<off64_t>(statBuf.st_size));
close(fd);
return;
}
diff --git a/opengl/libs/EGL/egl_display.cpp b/opengl/libs/EGL/egl_display.cpp
index ab21c8f..6a9d7b6 100644
--- a/opengl/libs/EGL/egl_display.cpp
+++ b/opengl/libs/EGL/egl_display.cpp
@@ -271,7 +271,7 @@
// there are no references to them, in which case we're free to
// delete them.
size_t count = objects.size();
- ALOGW_IF(count, "eglTerminate() called w/ %d objects remaining", count);
+ ALOGW_IF(count, "eglTerminate() called w/ %zu objects remaining", count);
for (size_t i=0 ; i<count ; i++) {
egl_object_t* o = objects.itemAt(i);
o->destroy();
diff --git a/opengl/libs/EGL/egl_entries.in b/opengl/libs/EGL/egl_entries.in
index 498b2fc..2b56718 100644
--- a/opengl/libs/EGL/egl_entries.in
+++ b/opengl/libs/EGL/egl_entries.in
@@ -80,6 +80,7 @@
EGL_ENTRY(EGLBoolean, eglSetSwapRectangleANDROID, EGLDisplay, EGLSurface, EGLint, EGLint, EGLint, EGLint)
EGL_ENTRY(EGLClientBuffer, eglGetRenderBufferANDROID, EGLDisplay, EGLSurface)
EGL_ENTRY(EGLint, eglDupNativeFenceFDANDROID, EGLDisplay, EGLSyncKHR)
+EGL_ENTRY(EGLClientBuffer, eglCreateNativeClientBufferANDROID, const EGLint *)
/* NVIDIA extensions */
diff --git a/opengl/libs/EGL/egl_object.h b/opengl/libs/EGL/egl_object.h
index f5a9f58..17a8304 100644
--- a/opengl/libs/EGL/egl_object.h
+++ b/opengl/libs/EGL/egl_object.h
@@ -37,7 +37,7 @@
namespace android {
// ----------------------------------------------------------------------------
-struct egl_display_t;
+class egl_display_t;
class egl_object_t {
egl_display_t *display;
diff --git a/services/inputflinger/InputDispatcher.cpp b/services/inputflinger/InputDispatcher.cpp
index 04919f7..eed14ab 100644
--- a/services/inputflinger/InputDispatcher.cpp
+++ b/services/inputflinger/InputDispatcher.cpp
@@ -134,7 +134,7 @@
case AMOTION_EVENT_ACTION_POINTER_DOWN:
case AMOTION_EVENT_ACTION_POINTER_UP: {
int32_t index = getMotionEventActionPointerIndex(action);
- return index >= 0 && size_t(index) < pointerCount;
+ return index >= 0 && index < pointerCount;
}
case AMOTION_EVENT_ACTION_BUTTON_PRESS:
case AMOTION_EVENT_ACTION_BUTTON_RELEASE:
diff --git a/services/inputflinger/tests/InputDispatcher_test.cpp b/services/inputflinger/tests/InputDispatcher_test.cpp
index 2d8eaef..7ae36d8 100644
--- a/services/inputflinger/tests/InputDispatcher_test.cpp
+++ b/services/inputflinger/tests/InputDispatcher_test.cpp
@@ -170,7 +170,7 @@
<< "Should reject motion events with pointer down index too large.";
event.initialize(DEVICE_ID, AINPUT_SOURCE_TOUCHSCREEN,
- AMOTION_EVENT_ACTION_POINTER_DOWN | (-1 << AMOTION_EVENT_ACTION_POINTER_INDEX_SHIFT),
+ AMOTION_EVENT_ACTION_POINTER_DOWN | (~0U << AMOTION_EVENT_ACTION_POINTER_INDEX_SHIFT),
0, 0, 0, AMETA_NONE, 0, 0, 0, 0, 0,
ARBITRARY_TIME, ARBITRARY_TIME,
/*pointerCount*/ 1, pointerProperties, pointerCoords);
@@ -191,7 +191,7 @@
<< "Should reject motion events with pointer up index too large.";
event.initialize(DEVICE_ID, AINPUT_SOURCE_TOUCHSCREEN,
- AMOTION_EVENT_ACTION_POINTER_UP | (-1 << AMOTION_EVENT_ACTION_POINTER_INDEX_SHIFT),
+ AMOTION_EVENT_ACTION_POINTER_UP | (~0U << AMOTION_EVENT_ACTION_POINTER_INDEX_SHIFT),
0, 0, 0, AMETA_NONE, 0, 0, 0, 0, 0,
ARBITRARY_TIME, ARBITRARY_TIME,
/*pointerCount*/ 1, pointerProperties, pointerCoords);
diff --git a/services/inputflinger/tests/InputReader_test.cpp b/services/inputflinger/tests/InputReader_test.cpp
index 42bc865..a7fe69c 100644
--- a/services/inputflinger/tests/InputReader_test.cpp
+++ b/services/inputflinger/tests/InputReader_test.cpp
@@ -1528,8 +1528,8 @@
NotifySwitchArgs args;
ASSERT_NO_FATAL_FAILURE(mFakeListener->assertNotifySwitchWasCalled(&args));
ASSERT_EQ(ARBITRARY_TIME, args.eventTime);
- ASSERT_EQ((1 << SW_LID) | (1 << SW_JACK_PHYSICAL_INSERT), args.switchValues);
- ASSERT_EQ((1 << SW_LID) | (1 << SW_JACK_PHYSICAL_INSERT) | (1 << SW_HEADPHONE_INSERT),
+ ASSERT_EQ((1U << SW_LID) | (1U << SW_JACK_PHYSICAL_INSERT), args.switchValues);
+ ASSERT_EQ((1U << SW_LID) | (1U << SW_JACK_PHYSICAL_INSERT) | (1U << SW_HEADPHONE_INSERT),
args.switchMask);
ASSERT_EQ(uint32_t(0), args.policyFlags);
}
diff --git a/services/sensorservice/SensorDevice.cpp b/services/sensorservice/SensorDevice.cpp
index 40d596f..179b1c5 100644
--- a/services/sensorservice/SensorDevice.cpp
+++ b/services/sensorservice/SensorDevice.cpp
@@ -73,6 +73,17 @@
}
}
+void SensorDevice::handleDynamicSensorConnection(int handle, bool connected) {
+ if (connected) {
+ Info model;
+ mActivationCount.add(handle, model);
+ mSensorDevice->activate(
+ reinterpret_cast<struct sensors_poll_device_t *>(mSensorDevice), handle, 0);
+ } else {
+ mActivationCount.removeItem(handle);
+ }
+}
+
void SensorDevice::dump(String8& result)
{
if (!mSensorModule) return;
diff --git a/services/sensorservice/SensorDevice.h b/services/sensorservice/SensorDevice.h
index c484849..c12630a 100644
--- a/services/sensorservice/SensorDevice.h
+++ b/services/sensorservice/SensorDevice.h
@@ -89,6 +89,7 @@
bool isClientDisabledLocked(void* ident);
public:
ssize_t getSensorList(sensor_t const** list);
+ void handleDynamicSensorConnection(int handle, bool connected);
status_t initCheck() const;
int getHalDeviceVersion() const;
ssize_t poll(sensors_event_t* buffer, size_t count);
diff --git a/services/sensorservice/SensorService.cpp b/services/sensorservice/SensorService.cpp
index acad61c..f91dac5 100644
--- a/services/sensorservice/SensorService.cpp
+++ b/services/sensorservice/SensorService.cpp
@@ -246,9 +246,6 @@
Sensor SensorService::registerSensor(SensorInterface* s)
{
- sensors_event_t event;
- memset(&event, 0, sizeof(event));
-
const Sensor sensor(s->getSensor());
// add to the sensor list (returned to clients)
mSensorList.add(sensor);
@@ -260,6 +257,37 @@
return sensor;
}
+Sensor SensorService::registerDynamicSensor(SensorInterface* s)
+{
+ Sensor sensor = registerSensor(s);
+ mDynamicSensorList.add(sensor);
+ return sensor;
+}
+
+bool SensorService::unregisterDynamicSensor(int handle) {
+ bool found = false;
+
+ for (size_t i=0 ; i<mSensorList.size() ; i++) {
+ if (mSensorList[i].getHandle() == handle) {
+ mSensorList.removeAt(i);
+ found = true;
+ break;
+ }
+ }
+
+ if (found) {
+ for (size_t i=0 ; i<mDynamicSensorList.size() ; i++) {
+ if (mDynamicSensorList[i].getHandle() == handle) {
+ mDynamicSensorList.removeAt(i);
+ }
+ }
+
+ mSensorMap.removeItem(handle);
+ mLastEventSeen.removeItem(handle);
+ }
+ return found;
+}
+
Sensor SensorService::registerVirtualSensor(SensorInterface* s)
{
Sensor sensor = registerSensor(s);
@@ -593,11 +621,11 @@
}
}
- // Map flush_complete_events in the buffer to SensorEventConnections which called flush on
- // the hardware sensor. mapFlushEventsToConnections[i] will be the SensorEventConnection
- // mapped to the corresponding flush_complete_event in mSensorEventBuffer[i] if such a
- // mapping exists (NULL otherwise).
for (int i = 0; i < count; ++i) {
+ // Map flush_complete_events in the buffer to SensorEventConnections which called flush on
+ // the hardware sensor. mMapFlushEventsToConnections[i] will be the SensorEventConnection
+ // mapped to the corresponding flush_complete_event in mSensorEventBuffer[i] if such a
+ // mapping exists (NULL otherwise).
mMapFlushEventsToConnections[i] = NULL;
if (mSensorEventBuffer[i].type == SENSOR_TYPE_META_DATA) {
const int sensor_handle = mSensorEventBuffer[i].meta_data.sensor;
@@ -607,8 +635,40 @@
rec->removeFirstPendingFlushConnection();
}
}
+
+ // Handle dynamic sensor meta events: process registration and unregistration of
+ // dynamic sensors based on the content of the event.
+ if (mSensorEventBuffer[i].type == SENSOR_TYPE_DYNAMIC_SENSOR_META) {
+ if (mSensorEventBuffer[i].dynamic_sensor_meta.connected) {
+ int handle = mSensorEventBuffer[i].dynamic_sensor_meta.handle;
+ const sensor_t& dynamicSensor =
+ *(mSensorEventBuffer[i].dynamic_sensor_meta.sensor);
+ ALOGI("Dynamic sensor handle 0x%x connected, type %d, name %s",
+ handle, dynamicSensor.type, dynamicSensor.name);
+
+ device.handleDynamicSensorConnection(handle, true /*connected*/);
+ registerDynamicSensor(new HardwareSensor(dynamicSensor));
+
+ } else {
+ int handle = mSensorEventBuffer[i].dynamic_sensor_meta.handle;
+ ALOGI("Dynamic sensor handle 0x%x disconnected", handle);
+
+ device.handleDynamicSensorConnection(handle, false /*connected*/);
+ if (!unregisterDynamicSensor(handle)) {
+ ALOGE("Dynamic sensor release error.");
+ }
+
+ size_t numConnections = activeConnections.size();
+ for (size_t j = 0; j < numConnections; ++j) {
+ if (activeConnections[j] != NULL) {
+ activeConnections[j]->removeSensor(handle);
+ }
+ }
+ }
+ }
}
+
// Send our events to clients. Check the state of wake lock for each client and release the
// lock if none of the clients need it.
bool needsWakeLock = false;
@@ -693,13 +753,18 @@
void SensorService::recordLastValueLocked(
const sensors_event_t* buffer, size_t count) {
for (size_t i = 0; i < count; i++) {
- if (buffer[i].type != SENSOR_TYPE_META_DATA) {
- MostRecentEventLogger* &circular_buf = mLastEventSeen.editValueFor(buffer[i].sensor);
- if (circular_buf == NULL) {
- circular_buf = new MostRecentEventLogger(buffer[i].type);
- }
- circular_buf->addEvent(buffer[i]);
+ if (buffer[i].type == SENSOR_TYPE_META_DATA ||
+ buffer[i].type == SENSOR_TYPE_DYNAMIC_SENSOR_META ||
+ buffer[i].type == SENSOR_TYPE_ADDITIONAL_INFO ||
+ mLastEventSeen.indexOfKey(buffer[i].sensor) < 0) {
+ continue;
}
+
+ MostRecentEventLogger* &circular_buf = mLastEventSeen.editValueFor(buffer[i].sensor);
+ if (circular_buf == NULL) {
+ circular_buf = new MostRecentEventLogger(buffer[i].type);
+ }
+ circular_buf->addEvent(buffer[i]);
}
}
@@ -729,7 +794,7 @@
bool SensorService::isVirtualSensor(int handle) const {
SensorInterface* sensor = mSensorMap.valueFor(handle);
- return sensor->isVirtual();
+ return sensor != NULL && sensor->isVirtual();
}
bool SensorService::isWakeUpSensorEvent(const sensors_event_t& event) const {
@@ -766,6 +831,23 @@
return accessibleSensorList;
}
+Vector<Sensor> SensorService::getDynamicSensorList(const String16& opPackageName)
+{
+ Vector<Sensor> accessibleSensorList;
+ for (size_t i = 0; i < mDynamicSensorList.size(); i++) {
+ Sensor sensor = mDynamicSensorList[i];
+ if (canAccessSensor(sensor, "getDynamicSensorList", opPackageName)) {
+ accessibleSensorList.add(sensor);
+ } else {
+ ALOGI("Skipped sensor %s because it requires permission %s and app op %d",
+ sensor.getName().string(),
+ sensor.getRequiredPermission().string(),
+ sensor.getRequiredAppOp());
+ }
+ }
+ return accessibleSensorList;
+}
+
sp<ISensorEventConnection> SensorService::createSensorEventConnection(const String8& packageName,
int requestedMode, const String16& opPackageName) {
// Only 2 modes supported for a SensorEventConnection ... NORMAL and DATA_INJECTION.
@@ -950,8 +1032,7 @@
// one should be trigger by a change in value). Also if this sensor isn't
// already active, don't call flush().
if (err == NO_ERROR &&
- sensor->getSensor().getReportingMode() != AREPORTING_MODE_ONE_SHOT &&
- sensor->getSensor().getReportingMode() != AREPORTING_MODE_ON_CHANGE &&
+ sensor->getSensor().getReportingMode() == AREPORTING_MODE_CONTINUOUS &&
rec->getNumConnections() > 1) {
connection->setFirstFlushPending(handle, true);
status_t err_flush = sensor->flush(connection.get(), handle);
diff --git a/services/sensorservice/SensorService.h b/services/sensorservice/SensorService.h
index 080a550..ef4516b 100644
--- a/services/sensorservice/SensorService.h
+++ b/services/sensorservice/SensorService.h
@@ -149,6 +149,7 @@
// ISensorServer interface
virtual Vector<Sensor> getSensorList(const String16& opPackageName);
+ virtual Vector<Sensor> getDynamicSensorList(const String16& opPackageName);
virtual sp<ISensorEventConnection> createSensorEventConnection(
const String8& packageName,
int requestedMode, const String16& opPackageName);
@@ -165,6 +166,8 @@
static void sortEventBuffer(sensors_event_t* buffer, size_t count);
Sensor registerSensor(SensorInterface* sensor);
Sensor registerVirtualSensor(SensorInterface* sensor);
+ Sensor registerDynamicSensor(SensorInterface* sensor);
+ bool unregisterDynamicSensor(int handle);
status_t cleanupWithoutDisable(const sp<SensorEventConnection>& connection, int handle);
status_t cleanupWithoutDisableLocked(const sp<SensorEventConnection>& connection, int handle);
void cleanupAutoDisabledSensorLocked(const sp<SensorEventConnection>& connection,
@@ -212,6 +215,7 @@
Vector<Sensor> mSensorList;
Vector<Sensor> mUserSensorListDebug;
Vector<Sensor> mUserSensorList;
+ Vector<Sensor> mDynamicSensorList;
DefaultKeyedVector<int, SensorInterface*> mSensorMap;
Vector<SensorInterface *> mVirtualSensorList;
status_t mInitCheck;
diff --git a/services/surfaceflinger/Android.mk b/services/surfaceflinger/Android.mk
index 17ca10e..d70b069 100644
--- a/services/surfaceflinger/Android.mk
+++ b/services/surfaceflinger/Android.mk
@@ -124,6 +124,10 @@
LOCAL_INIT_RC := surfaceflinger.rc
+ifneq ($(ENABLE_CPUSETS),)
+ LOCAL_CFLAGS += -DENABLE_CPUSETS
+endif
+
LOCAL_SRC_FILES := \
main_surfaceflinger.cpp
diff --git a/services/surfaceflinger/Layer.cpp b/services/surfaceflinger/Layer.cpp
index d484708..d39075f 100644
--- a/services/surfaceflinger/Layer.cpp
+++ b/services/surfaceflinger/Layer.cpp
@@ -73,6 +73,7 @@
mCurrentTransform(0),
mCurrentScalingMode(NATIVE_WINDOW_SCALING_MODE_FREEZE),
mCurrentOpacity(true),
+ mCurrentFrameNumber(0),
mRefreshPending(false),
mFrameLatencyNeeded(false),
mFiltering(false),
@@ -147,6 +148,9 @@
}
Layer::~Layer() {
+ for (auto& point : mRemoteSyncPoints) {
+ point->setTransactionApplied();
+ }
mFlinger->deleteTextureAsync(mTextureName);
mFrameTracker.logAndResetStats(mName);
}
@@ -163,20 +167,6 @@
}
}
-void Layer::markSyncPointsAvailable(const BufferItem& item) {
- auto pointIter = mLocalSyncPoints.begin();
- while (pointIter != mLocalSyncPoints.end()) {
- if ((*pointIter)->getFrameNumber() == item.mFrameNumber) {
- auto syncPoint = *pointIter;
- pointIter = mLocalSyncPoints.erase(pointIter);
- Mutex::Autolock lock(mAvailableFrameMutex);
- mAvailableFrames.push_back(std::move(syncPoint));
- } else {
- ++pointIter;
- }
- }
-}
-
void Layer::onFrameAvailable(const BufferItem& item) {
// Add this buffer from our internal queue tracker
{ // Autolock scope
@@ -205,8 +195,6 @@
mQueueItemCondition.broadcast();
}
- markSyncPointsAvailable(item);
-
mFlinger->signalLayerUpdate();
}
@@ -233,8 +221,6 @@
mLastFrameNumberReceived = item.mFrameNumber;
mQueueItemCondition.broadcast();
}
-
- markSyncPointsAvailable(item);
}
void Layer::onSidebandStreamChanged() {
@@ -803,22 +789,25 @@
return static_cast<uint32_t>(producerStickyTransform);
}
-void Layer::addSyncPoint(std::shared_ptr<SyncPoint> point) {
- uint64_t headFrameNumber = 0;
- {
- Mutex::Autolock lock(mQueueItemLock);
- if (!mQueueItems.empty()) {
- headFrameNumber = mQueueItems[0].mFrameNumber;
- } else {
- headFrameNumber = mLastFrameNumberReceived;
- }
+uint64_t Layer::getHeadFrameNumber() const {
+ Mutex::Autolock lock(mQueueItemLock);
+ if (!mQueueItems.empty()) {
+ return mQueueItems[0].mFrameNumber;
+ } else {
+ return mCurrentFrameNumber;
+ }
+}
+
+bool Layer::addSyncPoint(const std::shared_ptr<SyncPoint>& point) {
+ if (point->getFrameNumber() <= mCurrentFrameNumber) {
+ // Don't bother with a SyncPoint, since we've already latched the
+ // relevant frame
+ return false;
}
- if (point->getFrameNumber() <= headFrameNumber) {
- point->setFrameAvailable();
- } else {
- mLocalSyncPoints.push_back(std::move(point));
- }
+ Mutex::Autolock lock(mLocalSyncPointMutex);
+ mLocalSyncPoints.push_back(point);
+ return true;
}
void Layer::setFiltering(bool filtering) {
@@ -940,8 +929,6 @@
return;
}
- Mutex::Autolock lock(mPendingStateMutex);
-
// If this transaction is waiting on the receipt of a frame, generate a sync
// point and send it to the remote layer.
if (mCurrentState.handle != nullptr) {
@@ -956,8 +943,13 @@
} else {
auto syncPoint = std::make_shared<SyncPoint>(
mCurrentState.frameNumber);
- handleLayer->addSyncPoint(syncPoint);
- mRemoteSyncPoints.push_back(std::move(syncPoint));
+ if (handleLayer->addSyncPoint(syncPoint)) {
+ mRemoteSyncPoints.push_back(std::move(syncPoint));
+ } else {
+ // We already missed the frame we're supposed to synchronize
+ // on, so go ahead and apply the state update
+ mCurrentState.handle = nullptr;
+ }
}
// Wake us up to check if the frame has been received
@@ -969,15 +961,13 @@
void Layer::popPendingState() {
auto oldFlags = mCurrentState.flags;
mCurrentState = mPendingStates[0];
- mCurrentState.flags = (oldFlags & ~mCurrentState.mask) |
+ mCurrentState.flags = (oldFlags & ~mCurrentState.mask) |
(mCurrentState.flags & mCurrentState.mask);
mPendingStates.removeAt(0);
}
bool Layer::applyPendingStates() {
- Mutex::Autolock lock(mPendingStateMutex);
-
bool stateUpdateAvailable = false;
while (!mPendingStates.empty()) {
if (mPendingStates[0].handle != nullptr) {
@@ -991,6 +981,17 @@
continue;
}
+ if (mRemoteSyncPoints.front()->getFrameNumber() !=
+ mPendingStates[0].frameNumber) {
+ ALOGE("[%s] Unexpected sync point frame number found",
+ mName.string());
+
+ // Signal our end of the sync point and then dispose of it
+ mRemoteSyncPoints.front()->setTransactionApplied();
+ mRemoteSyncPoints.pop_front();
+ continue;
+ }
+
if (mRemoteSyncPoints.front()->frameIsAvailable()) {
// Apply the state update
popPendingState();
@@ -1019,9 +1020,12 @@
}
void Layer::notifyAvailableFrames() {
- Mutex::Autolock lock(mAvailableFrameMutex);
- for (auto frame : mAvailableFrames) {
- frame->setFrameAvailable();
+ auto headFrameNumber = getHeadFrameNumber();
+ Mutex::Autolock lock(mLocalSyncPointMutex);
+ for (auto& point : mLocalSyncPoints) {
+ if (headFrameNumber >= point->getFrameNumber()) {
+ point->setFrameAvailable();
+ }
}
}
@@ -1462,36 +1466,39 @@
Reject r(mDrawingState, getCurrentState(), recomputeVisibleRegions,
getProducerStickyTransform() != 0);
- uint64_t maxFrameNumber = 0;
- uint64_t headFrameNumber = 0;
+
+ // Check all of our local sync points to ensure that all transactions
+ // which need to have been applied prior to the frame which is about to
+ // be latched have signaled
+
+ auto headFrameNumber = getHeadFrameNumber();
+ bool matchingFramesFound = false;
+ bool allTransactionsApplied = true;
{
- Mutex::Autolock lock(mQueueItemLock);
- maxFrameNumber = mLastFrameNumberReceived;
- if (!mQueueItems.empty()) {
- headFrameNumber = mQueueItems[0].mFrameNumber;
+ Mutex::Autolock lock(mLocalSyncPointMutex);
+ for (auto& point : mLocalSyncPoints) {
+ if (point->getFrameNumber() > headFrameNumber) {
+ break;
+ }
+
+ matchingFramesFound = true;
+
+ if (!point->frameIsAvailable()) {
+ // We haven't notified the remote layer that the frame for
+ // this point is available yet. Notify it now, and then
+ // abort this attempt to latch.
+ point->setFrameAvailable();
+ allTransactionsApplied = false;
+ break;
+ }
+
+ allTransactionsApplied &= point->transactionIsApplied();
}
}
- bool availableFramesEmpty = true;
- {
- Mutex::Autolock lock(mAvailableFrameMutex);
- availableFramesEmpty = mAvailableFrames.empty();
- }
- if (!availableFramesEmpty) {
- Mutex::Autolock lock(mAvailableFrameMutex);
- bool matchingFramesFound = false;
- bool allTransactionsApplied = true;
- for (auto& frame : mAvailableFrames) {
- if (headFrameNumber != frame->getFrameNumber()) {
- break;
- }
- matchingFramesFound = true;
- allTransactionsApplied &= frame->transactionIsApplied();
- }
- if (matchingFramesFound && !allTransactionsApplied) {
- mFlinger->signalLayerUpdate();
- return outDirtyRegion;
- }
+ if (matchingFramesFound && !allTransactionsApplied) {
+ mFlinger->signalLayerUpdate();
+ return outDirtyRegion;
}
// This boolean is used to make sure that SurfaceFlinger's shadow copy
@@ -1501,7 +1508,7 @@
bool queuedBuffer = false;
status_t updateResult = mSurfaceFlingerConsumer->updateTexImage(&r,
mFlinger->mPrimaryDispSync, &mSingleBufferMode, &queuedBuffer,
- maxFrameNumber);
+ mLastFrameNumberReceived);
if (updateResult == BufferQueue::PRESENT_LATER) {
// Producer doesn't want buffer to be displayed yet. Signal a
// layer update so we check again at the next opportunity.
@@ -1560,15 +1567,6 @@
mFlinger->signalLayerUpdate();
}
- if (!availableFramesEmpty) {
- Mutex::Autolock lock(mAvailableFrameMutex);
- auto frameNumber = mSurfaceFlingerConsumer->getFrameNumber();
- while (!mAvailableFrames.empty() &&
- frameNumber == mAvailableFrames.front()->getFrameNumber()) {
- mAvailableFrames.pop_front();
- }
- }
-
if (updateResult != NO_ERROR) {
// something happened!
recomputeVisibleRegions = true;
@@ -1617,6 +1615,30 @@
recomputeVisibleRegions = true;
}
+ mCurrentFrameNumber = mSurfaceFlingerConsumer->getFrameNumber();
+
+ // Remove any sync points corresponding to the buffer which was just
+ // latched
+ {
+ Mutex::Autolock lock(mLocalSyncPointMutex);
+ auto point = mLocalSyncPoints.begin();
+ while (point != mLocalSyncPoints.end()) {
+ if (!(*point)->frameIsAvailable() ||
+ !(*point)->transactionIsApplied()) {
+ // This sync point must have been added since we started
+ // latching. Don't drop it yet.
+ ++point;
+ continue;
+ }
+
+ if ((*point)->getFrameNumber() <= mCurrentFrameNumber) {
+ point = mLocalSyncPoints.erase(point);
+ } else {
+ ++point;
+ }
+ }
+ }
+
// FIXME: postedRegion should be dirty & bounds
Region dirtyRegion(Rect(s.active.w, s.active.h));
diff --git a/services/surfaceflinger/Layer.h b/services/surfaceflinger/Layer.h
index 9e3c4db..d91e94e 100644
--- a/services/surfaceflinger/Layer.h
+++ b/services/surfaceflinger/Layer.h
@@ -356,10 +356,6 @@
virtual void onFrameReplaced(const BufferItem& item) override;
virtual void onSidebandStreamChanged() override;
- // Move frames made available by item in to a list which will
- // be signalled at the beginning of the next transaction
- virtual void markSyncPointsAvailable(const BufferItem& item);
-
void commitTransaction();
// needsLinearFiltering - true if this surface's state requires filtering
@@ -413,19 +409,24 @@
std::atomic<bool> mTransactionIsApplied;
};
+ // SyncPoints which will be signaled when the correct frame is at the head
+ // of the queue and dropped after the frame has been latched. Protected by
+ // mLocalSyncPointMutex.
+ Mutex mLocalSyncPointMutex;
std::list<std::shared_ptr<SyncPoint>> mLocalSyncPoints;
- // Guarded by mPendingStateMutex
+ // SyncPoints which will be signaled and then dropped when the transaction
+ // is applied
std::list<std::shared_ptr<SyncPoint>> mRemoteSyncPoints;
- void addSyncPoint(std::shared_ptr<SyncPoint> point);
+ uint64_t getHeadFrameNumber() const;
+
+ // Returns false if the relevant frame has already been latched
+ bool addSyncPoint(const std::shared_ptr<SyncPoint>& point);
void pushPendingState();
void popPendingState();
bool applyPendingStates();
-
- Mutex mAvailableFrameMutex;
- std::list<std::shared_ptr<SyncPoint>> mAvailableFrames;
public:
void notifyAvailableFrames();
private:
@@ -461,6 +462,7 @@
uint32_t mCurrentTransform;
uint32_t mCurrentScalingMode;
bool mCurrentOpacity;
+ std::atomic<uint64_t> mCurrentFrameNumber;
bool mRefreshPending;
bool mFrameLatencyNeeded;
// Whether filtering is forced on or not
@@ -488,7 +490,7 @@
mutable Mutex mQueueItemLock;
Condition mQueueItemCondition;
Vector<BufferItem> mQueueItems;
- uint64_t mLastFrameNumberReceived;
+ std::atomic<uint64_t> mLastFrameNumberReceived;
bool mUpdateTexImageFailed; // This is only modified from the main thread
bool mSingleBufferMode;
diff --git a/services/surfaceflinger/main_surfaceflinger.cpp b/services/surfaceflinger/main_surfaceflinger.cpp
index ca81aaa..4cd7aeb 100644
--- a/services/surfaceflinger/main_surfaceflinger.cpp
+++ b/services/surfaceflinger/main_surfaceflinger.cpp
@@ -42,6 +42,13 @@
set_sched_policy(0, SP_FOREGROUND);
+#ifdef ENABLE_CPUSETS
+ // Put most SurfaceFlinger threads in the system-background cpuset
+ // Keeps us from unnecessarily using big cores
+ // Do this after the binder thread pool init
+ set_cpuset_policy(0, SP_SYSTEM);
+#endif
+
// initialize before clients can connect
flinger->init();
diff --git a/services/surfaceflinger/surfaceflinger.rc b/services/surfaceflinger/surfaceflinger.rc
index 1d6e20f..2b4ea2a 100644
--- a/services/surfaceflinger/surfaceflinger.rc
+++ b/services/surfaceflinger/surfaceflinger.rc
@@ -3,4 +3,4 @@
user system
group graphics drmrpc readproc
onrestart restart zygote
- writepid /dev/cpuset/system-background/tasks /sys/fs/cgroup/stune/foreground/tasks
+ writepid /sys/fs/cgroup/stune/foreground/tasks
diff --git a/vulkan/.clang-format b/vulkan/.clang-format
new file mode 100644
index 0000000..563cd9a
--- /dev/null
+++ b/vulkan/.clang-format
@@ -0,0 +1,2 @@
+BasedOnStyle: Chromium
+IndentWidth: 4
diff --git a/vulkan/Android.mk b/vulkan/Android.mk
new file mode 100644
index 0000000..d125673
--- /dev/null
+++ b/vulkan/Android.mk
@@ -0,0 +1 @@
+include $(call all-named-subdir-makefiles, libvulkan nulldrv tools)
diff --git a/vulkan/api/platform.api b/vulkan/api/platform.api
new file mode 100644
index 0000000..980722d
--- /dev/null
+++ b/vulkan/api/platform.api
@@ -0,0 +1,49 @@
+// Copyright (c) 2015 The Khronos Group Inc.
+//
+// Permission is hereby granted, free of charge, to any person obtaining a
+// copy of this software and/or associated documentation files (the
+// "Materials"), to deal in the Materials without restriction, including
+// without limitation the rights to use, copy, modify, merge, publish,
+// distribute, sublicense, and/or sell copies of the Materials, and to
+// permit persons to whom the Materials are furnished to do so, subject to
+// the following conditions:
+//
+// The above copyright notice and this permission notice shall be included
+// in all copies or substantial portions of the Materials.
+//
+// THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+// IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+// CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+// TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+// MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.
+
+// Platform types, as defined or included in vk_platform.h
+
+type u64 size_t
+
+// VK_USE_PLATFORM_XLIB_KHR
+@internal class Display {}
+@internal class Window {}
+@internal type u64 VisualID
+
+// VK_USE_PLATFORM_XCB_KHR
+@internal class xcb_connection_t {}
+@internal type u32 xcb_window_t
+@internal type u32 xcb_visualid_t
+
+// VK_USE_PLATFORM_WAYLAND_KHR
+@internal class wl_display {}
+@internal class wl_surface {}
+
+// VK_USE_PLATFORM_MIR_KHR
+@internal class MirConnection {}
+@internal class MirSurface {}
+
+// VK_USE_PLATFORM_ANDROID_KHR
+@internal class ANativeWindow {}
+
+// VK_USE_PLATFORM_WIN32_KHR
+@internal type void* HINSTANCE
+@internal type void* HWND
diff --git a/vulkan/api/templates/asciidoc.tmpl b/vulkan/api/templates/asciidoc.tmpl
new file mode 100644
index 0000000..3009e19
--- /dev/null
+++ b/vulkan/api/templates/asciidoc.tmpl
@@ -0,0 +1,151 @@
+{{Include "vulkan_common.tmpl"}}
+{{if not (Global "AsciiDocPath")}}{{Global "AsciiDocPath" "../../doc/specs/vulkan/"}}{{end}}
+{{$ | Macro "AsciiDoc.Main"}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ AsciiDoc generation main entry point.
+-------------------------------------------------------------------------------
+*/}}
+{{define "AsciiDoc.Main"}}
+ {{$docPath := Global "AsciiDocPath"}}
+
+ {{/* Generate AsciiDoc files for API enums and bitfields (flags). */}}
+ {{range $e := $.Enums}}
+ {{if not $e.IsBitfield}}
+ {{$filename := print $docPath "enums/" (Macro "EnumName" $e) ".txt"}}
+ {{Macro "AsciiDoc.Write" "Code" (Macro "AsciiDoc.Enum" $e) "File" $filename}}
+ {{else}}
+ {{$filename := print $docPath "flags/" (Macro "EnumName" $e) ".txt"}}
+ {{Macro "AsciiDoc.Write" "Code" (Macro "AsciiDoc.Flag" $e) "File" $filename}}
+ {{end}}
+ {{end}}
+
+ {{/* Generate AsciiDoc files for API commands (protos). */}}
+ {{range $f := (AllCommands $)}}
+ {{if not (GetAnnotation $f "pfn")}}
+ {{$filename := print $docPath "protos/" $f.Name ".txt"}}
+ {{Macro "AsciiDoc.Write" "Code" (Macro "AsciiDoc.Proto" $f) "File" $filename}}
+ {{end}}
+ {{end}}
+
+ {{/* Generate AsciiDoc files for API structs. */}}
+ {{range $c := $.Classes}}
+ {{if not (GetAnnotation $c "internal")}}
+ {{$filename := print $docPath "structs/" $c.Name ".txt"}}
+ {{Macro "AsciiDoc.Write" "Code" (Macro "AsciiDoc.Struct" $c) "File" $filename}}
+ {{end}}
+ {{end}}
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits the AsciiDoc contents for the specified API enum.
+-------------------------------------------------------------------------------
+*/}}
+{{define "AsciiDoc.Enum"}}
+ {{AssertType $ "Enum"}}
+
+ {{Macro "Docs" $.Docs}}
+ typedef enum {
+ {{range $i, $e := $.Entries}}
+ {{Macro "EnumEntry" $e}} = {{AsSigned $e.Value}}, {{Macro "Docs" $e.Docs}}
+ {{end}}
+ ¶
+ {{$name := Macro "EnumName" $ | TrimRight "ABCDEFGHIJKLMNOQRSTUVWXYZ" | SplitPascalCase | Upper | JoinWith "_"}}
+ {{$first := Macro "EnumFirstEntry" $}}
+ {{$last := Macro "EnumLastEntry" $}}
+ {{$name}}_BEGIN_RANGE = {{$first}},
+ {{$name}}_END_RANGE = {{$last}},
+ {{$name}}_NUM = ({{$last}} - {{$first}} + 1),
+ {{$name}}_MAX_ENUM = 0x7FFFFFFF
+ } {{Macro "EnumName" $}};
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits the AsciiDoc contents for the specified API bitfield.
+-------------------------------------------------------------------------------
+*/}}
+{{define "AsciiDoc.Flag"}}
+ {{AssertType $ "Enum"}}
+
+ {{Macro "Docs" $.Docs}}
+ typedef VkFlags {{Macro "EnumName" $}};
+ {{if $.Entries}}
+ typedef enum {
+ {{range $e := $.Entries}}
+ {{Macro "BitfieldEntryName" $e}} = {{printf "%#.8x" $e.Value}}, {{Macro "Docs" $e.Docs}}
+ {{end}}
+ } {{Macro "EnumName" $ | TrimRight "s"}}Bits;
+ {{end}}
+{{end}}
+
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits the AsciiDoc contents for the specified API class.
+-------------------------------------------------------------------------------
+*/}}
+{{define "AsciiDoc.Struct"}}
+ {{AssertType $ "Class"}}
+
+ {{Macro "Docs" $.Docs}}
+ typedef {{if GetAnnotation $ "union"}}union{{else}}struct{{end}} {
+ {{range $f := $.Fields}}
+ {{Node "Type" $f}} {{$f.Name}}{{Macro "ArrayPostfix" (TypeOf $f)}}; {{Macro "Docs" $f.Docs}}
+ {{end}}
+ } {{Macro "StructName" $}};
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits the AsciiDoc contents for the specified API function.
+-------------------------------------------------------------------------------
+*/}}
+{{define "AsciiDoc.Proto"}}
+ {{AssertType $ "Function"}}
+
+ {{Macro "Docs" $.Docs}}
+ {{Node "Type" $.Return}} VKAPI {{Macro "FunctionName" $}}({{Macro "Parameters" $}});
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Wraps the specified Code in AsciiDoc source tags then writes to the specified File.
+-------------------------------------------------------------------------------
+*/}}
+{{define "AsciiDoc.Write"}}
+ {{AssertType $.Code "string"}}
+ {{AssertType $.File "string"}}
+
+ {{$code := $.Code | Format (Global "clang-format")}}
+ {{JoinWith "\n" (Macro "AsciiDoc.Header") $code (Macro "AsciiDoc.Footer") ""| Write $.File}}
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits an AsciiDoc source header.
+-------------------------------------------------------------------------------
+*/}}
+{{define "AsciiDoc.Header"}}
+[source,{basebackend@docbook:c++:cpp}]
+------------------------------------------------------------------------------
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits an AsciiDoc source footer.
+-------------------------------------------------------------------------------
+*/}}
+{{define "AsciiDoc.Footer"}}
+------------------------------------------------------------------------------
+{{end}}
diff --git a/vulkan/api/templates/vk_xml.tmpl b/vulkan/api/templates/vk_xml.tmpl
new file mode 100644
index 0000000..893bde7
--- /dev/null
+++ b/vulkan/api/templates/vk_xml.tmpl
@@ -0,0 +1,435 @@
+{{Include "vulkan_common.tmpl"}}
+{{Macro "DefineGlobals" $}}
+{{$ | Macro "vk.xml" | Reflow 4 | Write "vk.xml"}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Entry point
+-------------------------------------------------------------------------------
+*/}}
+{{define "vk.xml"}}
+<?xml version="1.0" encoding="UTF-8"?>
+<registry>
+ »<comment>«
+Copyright (c) 2015 The Khronos Group Inc.
+¶
+Permission is hereby granted, free of charge, to any person obtaining a
+copy of this software and/or associated documentation files (the
+"Materials"), to deal in the Materials without restriction, including
+without limitation the rights to use, copy, modify, merge, publish,
+distribute, sublicense, and/or sell copies of the Materials, and to
+permit persons to whom the Materials are furnished to do so, subject to
+the following conditions:
+¶
+The above copyright notice and this permission notice shall be included
+in all copies or substantial portions of the Materials.
+¶
+THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.
+¶
+------------------------------------------------------------------------
+¶
+This file, vk.xml, is the Vulkan API Registry.»
+ </comment>
+¶
+ <!-- SECTION: Vulkan type definitions -->
+ <types>»
+ <type name="vk_platform" category="include">#include "vk_platform.h"</type>
+¶
+ <type category="define">#define <name>VK_MAKE_VERSION</name>(major, minor, patch) \
+ «((major << 22) | (minor << 12) | patch)</type>»
+¶
+ <type category="define">// Vulkan API version supported by this file««
+#define <name>VK_API_VERSION</name> <type>VK_MAKE_VERSION</type>({{Global "VERSION_MAJOR"}}, {{Global "VERSION_MINOR"}}, {{Global "VERSION_PATCH"}})</type>
+¶
+ »»<type category="define">««
+#if (_MSC_VER >= 1800 || __cplusplus >= 201103L)
+#define <name>VK_NONDISP_HANDLE_OPERATOR_BOOL</name>() explicit operator bool() const { return handle != 0; }
+#else
+#define VK_NONDISP_HANDLE_OPERATOR_BOOL()
+«#endif
+ »»»</type>
+¶
+ <type category="define">«««
+#define <name>VK_DEFINE_HANDLE</name>(obj) typedef struct obj##_T* obj;</type>
+ »»»<type category="define">«««
+#if defined(__cplusplus)
+ »»#if (_MSC_VER >= 1800 || __cplusplus >= 201103L)
+ »// The bool operator only works if there are no implicit conversions from an obj to
+ // a bool-compatible type, which can then be used to unintentionally violate type safety.
+ // C++11 and above supports the "explicit" keyword on conversion operators to stop this
+ // from happening. Otherwise users of C++ below C++11 won't get direct access to evaluating
+ // the object handle as a bool in expressions like:
+ // if (obj) vkDestroy(obj);
+ #define VK_NONDISP_HANDLE_OPERATOR_BOOL() explicit operator bool() const { return handle != 0; }
+ #define VK_NONDISP_HANDLE_CONSTRUCTOR_FROM_UINT64(obj) \
+ explicit obj(uint64_t x) : handle(x) { } \
+ obj(decltype(nullptr)) : handle(0) { }
+ «#else»
+ #define VK_NONDISP_HANDLE_OPERATOR_BOOL()
+ #define VK_NONDISP_HANDLE_CONSTRUCTOR_FROM_UINT64(obj) \
+ obj(uint64_t x) : handle(x) { }
+ «#endif
+ #define <name>VK_DEFINE_NONDISP_HANDLE</name>(obj) \»
+ struct obj { \
+ obj() { } \
+ VK_NONDISP_HANDLE_CONSTRUCTOR_FROM_UINT64(obj) \
+ obj& operator =(uint64_t x) { handle = x; return *this; } \
+ bool operator==(const obj& other) const { return handle == other.handle; } \
+ bool operator!=(const obj& other) const { return handle != other.handle; } \
+ bool operator!() const { return !handle; } \
+ VK_NONDISP_HANDLE_OPERATOR_BOOL() \
+ uint64_t handle; \
+ };
+««#else
+ »#define VK_DEFINE_NONDISP_HANDLE(obj) typedef struct obj##_T { uint64_t handle; } obj;«
+#endif
+ »»</type>
+¶
+ <type category="define">
+#if defined(__cplusplus) && ((defined(_MSC_VER) && _MSC_VER >= 1800) || __cplusplus >= 201103L)
+ »#define <name>VK_NULL_HANDLE</name> nullptr
+«#else
+ »#define VK_NULL_HANDLE 0
+«#endif
+ »»</type>
+¶
+ <type requires="vk_platform" name="VkDeviceSize"/>
+ <type requires="vk_platform" name="VkSampleMask"/>
+ <type requires="vk_platform" name="VkFlags"/>
+ <!-- Basic C types, pulled in via vk_platform.h -->
+ <type requires="vk_platform" name="char"/>
+ <type requires="vk_platform" name="float"/>
+ <type requires="vk_platform" name="VkBool32"/>
+ <type requires="vk_platform" name="uint8_t"/>
+ <type requires="vk_platform" name="uint32_t"/>
+ <type requires="vk_platform" name="uint64_t"/>
+ <type requires="vk_platform" name="int32_t"/>
+ <type requires="vk_platform" name="size_t"/>
+ <!-- Bitfield types -->
+ {{range $e := $.Enums}}
+ {{if $e.IsBitfield}}
+ {{$bits := print (Macro "EnumName" $e | TrimRight "s") "Bits"}}
+ <type{{if $e.Entries}} requires="{{$bits}}"{{end}} category="bitmask">typedef <type>VkFlags</type> <name>{{$e.Name}}</name>;</type>§
+ {{if $e.Entries}}{{Macro "XML.Docs" $e.Docs}}
+ {{else}}{{Macro "XML.Docs" (Strings $e.Docs "(no bits yet)")}}
+ {{end}}
+ {{end}}
+ {{end}}
+¶
+ <!-- Types which can be void pointers or class pointers, selected at compile time -->
+ {{range $i, $p := $.Pseudonyms}}
+ {{ if (GetAnnotation $p "dispatchHandle")}}
+ {{if Global "VK_DEFINE_HANDLE_TYPE_DEFINED"}}
+ <type category="handle">VK_DEFINE_HANDLE(<name>{{$p.Name}}</name>)</type>
+ {{else}}
+ {{Global "VK_DEFINE_HANDLE_TYPE_DEFINED" "YES"}}
+ <type category="handle"><type>VK_DEFINE_HANDLE</type>(<name>{{$p.Name}}</name>)</type>
+ {{end}}
+ {{else if (GetAnnotation $p "nonDispatchHandle")}}
+ {{if Global "VK_DEFINE_NONDISP_HANDLE_TYPE_DEFINED"}}
+ <type category="handle">VK_DEFINE_NONDISP_HANDLE(<name>{{$p.Name}}</name>)</type>
+ {{else}}
+ {{Global "VK_DEFINE_NONDISP_HANDLE_TYPE_DEFINED" "YES"}}
+ <type category="handle"><type>VK_DEFINE_NONDISP_HANDLE</type>(<name>{{$p.Name}}</name>)</type>
+ {{end}}
+ {{end}}
+ {{end}}
+¶
+ <!-- Types generated from corresponding <enums> tags below -->
+ {{range $e := SortBy $.Enums "EnumName"}}
+ {{if and $e.Entries (not (GetAnnotation $e "internal"))}}
+ {{if $e.IsBitfield}}
+ <type name="{{Macro "EnumName" $e | TrimRight "s"}}Bits" category="enum"/>
+ {{else}}
+ <type name="{{$e.Name}}" category="enum"/>
+ {{end}}
+ {{end}}
+ {{end}}
+¶
+ <!-- The PFN_vk*Function types are used by VkAllocCallbacks below -->
+ <type>typedef void* (VKAPI *<name>PFN_vkAllocFunction</name>)(«
+ void* pUserData,
+ size_t size,
+ size_t alignment,
+ <type>VkSystemAllocType</type> allocType);</type>»
+ <type>typedef void (VKAPI *<name>PFN_vkFreeFunction</name>)(«
+ void* pUserData,
+ void* pMem);</type>»
+¶
+    <!-- The PFN_vkVoidFunction type is used by VkGet*ProcAddr below -->
+ <type>typedef void (VKAPI *<name>PFN_vkVoidFunction</name>)(void);</type>
+¶
+ <!-- Struct types -->
+ {{range $c := $.Classes}}
+ {{if not (GetAnnotation $c "internal")}}
+ {{Macro "Struct" $c}}
+ {{end}}
+ {{end}}
+ «</types>
+¶
+ <!-- SECTION: Vulkan enumerant (token) definitions. -->
+¶
+ <enums namespace="VK" comment="Misc. hardcoded constants - not an enumerated type">»
+ <!-- This is part of the header boilerplate -->
+ {{range $d := $.Definitions}}
+ {{if HasPrefix $d.Name "VK_"}}
+ <enum value="{{$d.Expression}}" name="{{$d.Name}}"/>{{Macro "XML.Docs" $d.Docs}}
+ {{end}}
+ {{end}}
+ <enum value="1000.0f" name="VK_LOD_CLAMP_NONE"/>
+    <enum value="(~0U)" name="VK_REMAINING_MIP_LEVELS"/>
+ <enum value="(~0U)" name="VK_REMAINING_ARRAY_LAYERS"/>
+    <enum value="(~0ULL)" name="VK_WHOLE_SIZE"/>
+ <enum value="(~0U)" name="VK_ATTACHMENT_UNUSED"/>
+ <enum value="(~0U)" name="VK_QUEUE_FAMILY_IGNORED"/>
+ <enum value="(~0U)" name="VK_SUBPASS_EXTERNAL"/>
+ «</enums>
+¶
+ <!-- Unlike OpenGL, most tokens in Vulkan are actual typed enumerants in»
+ their own numeric namespaces. The "name" attribute is the C enum
+ type name, and is pulled in from a <type> definition above
+ (slightly clunky, but retains the type / enum distinction). "type"
+ attributes of "enum" or "bitmask" indicate that these values should
+ be generated inside an appropriate definition. -->«
+¶
+ {{range $e := $.Enums}}
+ {{if not (or $e.IsBitfield (GetAnnotation $e "internal"))}}
+ {{Macro "XML.Enum" $e}}
+ {{end}}
+ {{end}}
+¶
+ <!-- Flags -->
+ {{range $e := $.Enums}}
+ {{if $e.IsBitfield}}
+ {{Macro "XML.Bitfield" $e}}
+ {{end}}
+ {{end}}
+¶
+ <!-- SECTION: Vulkan command definitions -->
+ <commands namespace="vk">»
+ {{range $f := AllCommands $}}
+ {{if not (GetAnnotation $f "pfn")}}
+ {{Macro "XML.Function" $f}}
+ {{end}}
+ {{end}}
+ «</commands>
+¶
+ <!-- SECTION: Vulkan API interface definitions -->
+ <feature api="vulkan" name="VK_VERSION_1_0" number="1.0">»
+ <require comment="Header boilerplate">»
+ <type name="vk_platform"/>
+ «</require>
+ <require comment="API version">»
+ <type name="VK_API_VERSION"/>
+ «</require>
+ <require comment="API constants">»
+ <enum name="VK_LOD_CLAMP_NONE"/>
+ <enum name="VK_REMAINING_MIP_LEVELS"/>
+ <enum name="VK_REMAINING_ARRAY_LAYERS"/>
+ <enum name="VK_WHOLE_SIZE"/>
+ <enum name="VK_ATTACHMENT_UNUSED"/>
+ <enum name="VK_TRUE"/>
+ <enum name="VK_FALSE"/>
+ «</require>
+ <require comment="All functions (TODO: split by type)">»
+ {{range $f := AllCommands $}}
+ {{if not (GetAnnotation $f "pfn")}}
+ <command name="{{$f.Name}}"/>
+ {{end}}
+ {{end}}
+ </require>
+ «<require comment="Types not directly used by the API">»
+ <!-- Include <type name="typename"/> here for e.g. structs that»
+ are not parameter types of commands, but still need to be
+ defined in the API.
+ «-->
+ <type name="VkBufferMemoryBarrier"/>
+ <type name="VkDispatchIndirectCmd"/>
+ <type name="VkDrawIndexedIndirectCmd"/>
+ <type name="VkDrawIndirectCmd"/>
+ <type name="VkImageMemoryBarrier"/>
+ <type name="VkMemoryBarrier"/>
+ «</require>
+ «</feature>
+¶
+ <!-- SECTION: Vulkan extension interface definitions (none yet) -->
+«</registry>
+{{end}}
+
+{{/*
+-------------------------------------------------------------------------------
+  Emits the XML registry declaration for the specified bitfield.
+-------------------------------------------------------------------------------
+*/}}
+{{define "XML.Bitfield"}}
+ {{AssertType $ "Enum"}}
+
+ {{if $.Entries}}
+ <enums namespace="VK" name="{{Macro "EnumName" $ | TrimRight "s"}}Bits" type="bitmask">»
+ {{range $e := $.Entries}}
+ {{$pos := Bitpos $e.Value}}
+ <enum §
+ {{if gt $pos -1}} bitpos="{{$pos}}" §
+ {{else}}value="{{if $e.Value}}{{printf "0x%.8X" $e.Value}}{{else}}0{{end}}" §
+ {{end}}name="{{Macro "BitfieldEntryName" $e}}" §
+ {{if $d := $e.Docs}} comment="{{$d | JoinWith " "}}"{{end}}/>
+ {{end}}
+ «</enums>
+ {{end}}
+
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+  Emits the XML registry declaration for the specified enum.
+-------------------------------------------------------------------------------
+*/}}
+{{define "XML.Enum"}}
+ {{AssertType $ "Enum"}}
+
+ <enums namespace="VK" name="{{Macro "EnumName" $}}" type="enum" §
+ expand="{{Macro "EnumName" $ | SplitPascalCase | Upper | JoinWith "_"}}"{{if $.Docs}} comment="{{$.Docs | JoinWith " "}}"{{end}}>»
+ {{range $i, $e := $.Entries}}
+ <enum value="{{AsSigned $e.Value}}" name="{{Macro "BitfieldEntryName" $e}}"{{if $e.Docs}} comment="{{$e.Docs | JoinWith " "}}"{{end}}/>
+ {{end}}
+ {{if $lu := GetAnnotation $ "lastUnused"}}
+ <unused start="{{index $lu.Arguments 0}}"/>
+ {{end}}
+ «</enums>
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+  Emits the XML registry declaration for the specified class.
+-------------------------------------------------------------------------------
+*/}}
+{{define "Struct"}}
+ {{AssertType $ "Class"}}
+
+ <type category="{{Macro "StructType" $}}" name="{{Macro "StructName" $}}"{{if $.Docs}} comment="{{$.Docs | JoinWith " "}}"{{end}}>»
+ {{range $f := $.Fields}}
+ <member>{{Node "XML.Type" $f}} <name>{{$f.Name}}</name>{{Macro "XML.ArrayPostfix" $f}}</member>{{Macro "XML.Docs" $f.Docs}}
+ {{end}}
+ «</type>
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits either 'struct' or 'union' for the specified class.
+-------------------------------------------------------------------------------
+*/}}
+{{define "StructType"}}
+ {{AssertType $ "Class"}}
+
+ {{if GetAnnotation $ "union"}}union{{else}}struct{{end}}
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+  Emits the XML registry <command> element for the specified command.
+-------------------------------------------------------------------------------
+*/}}
+{{define "XML.Function"}}
+ {{AssertType $ "Function"}}
+
+ {{$ts := GetAnnotation $ "threadSafety"}}
+ <command{{if $ts}} threadsafe="{{index $ts.Arguments 0}}"{{end}}>»
+ <proto>{{Node "XML.Type" $.Return}} <name>{{$.Name}}</name></proto>
+ {{range $p := $.CallParameters}}
+ <param>{{Node "XML.Type" $p}} <name>{{$p.Name}}{{Macro "ArrayPostfix" $p}}</name></param>
+ {{end}}
+ «</command>
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits the XML translation for the specified documentation block (string array).
+-------------------------------------------------------------------------------
+*/}}
+{{define "XML.Docs"}}
+ {{if $}} <!-- {{JoinWith " " $ | Replace "<" "" | Replace ">" ""}} -->{{end}}
+{{end}}
+
+{{/*
+-------------------------------------------------------------------------------
+  Emits the XML translation for the specified type.
+-------------------------------------------------------------------------------
+*/}}
+{{define "XML.Type.Class" }}<type>{{Macro "StructName" $.Type}}</type>{{end}}
+{{define "XML.Type.Pseudonym" }}<type>{{$.Type.Name}}</type>{{end}}
+{{define "XML.Type.Enum" }}<type>{{$.Type.Name}}</type>{{end}}
+{{define "XML.Type.StaticArray"}}{{Node "XML.Type" $.Type.ValueType}}{{end}}
+{{define "XML.Type.Pointer" }}{{if $.Type.Const}}{{Node "XML.ConstType" $.Type.To}}{{else}}{{Node "XML.Type" $.Type.To}}{{end}}*{{end}}
+{{define "XML.Type.Slice" }}<type>{{Node "XML.Type" $.Type.To}}</type>*{{end}}
+{{define "XML.Type#s8" }}<type>int8_t</type>{{end}}
+{{define "XML.Type#u8" }}<type>uint8_t</type>{{end}}
+{{define "XML.Type#s16" }}<type>int16_t</type>{{end}}
+{{define "XML.Type#u16" }}<type>uint16_t</type>{{end}}
+{{define "XML.Type#s32" }}<type>int32_t</type>{{end}}
+{{define "XML.Type#u32" }}<type>uint32_t</type>{{end}}
+{{define "XML.Type#f32" }}<type>float</type>{{end}}
+{{define "XML.Type#s64" }}<type>int64_t</type>{{end}}
+{{define "XML.Type#u64" }}<type>uint64_t</type>{{end}}
+{{define "XML.Type#f64" }}<type>double</type>{{end}}
+{{define "XML.Type#char" }}<type>char</type>{{end}}
+{{define "XML.Type#void" }}void{{end}}
+
+{{define "XML.ConstType_Default"}}const {{Node "XML.Type" $.Type}}{{end}}
+{{define "XML.ConstType.Pointer"}}{{Node "XML.Type" $.Type}} const{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+  Emits the XML-tagged type and name for the given parameter
+-------------------------------------------------------------------------------
+*/}}
+{{define "XML.Parameter"}}
+ {{AssertType $ "Parameter"}}
+
+ <type>{{Macro "ParameterType" $}}</type> <name>{{$.Name}}{{Macro "ArrayPostfix" $}}</name>
+{{end}}
+
+{{/*
+-------------------------------------------------------------------------------
+  Emits a comma-separated list of XML type-name paired parameters for the given
+ command.
+-------------------------------------------------------------------------------
+*/}}
+{{define "XML.Parameters"}}
+ {{AssertType $ "Function"}}
+
+ {{ForEach $.CallParameters "XML.Parameter" | JoinWith ", "}}
+ {{if not $.CallParameters}}<type>void</type>{{end}}
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits the fixed-size-array postfix for pseudonym types annotated with @array
+-------------------------------------------------------------------------------
+*/}}
+{{define "XML.ArrayPostfix"}}{{Node "XML.ArrayPostfix" $}}{{end}}
+{{define "XML.ArrayPostfix.StaticArray"}}[{{Node "XML.NamedValue" $.Type.SizeExpr}}]{{end}}
+{{define "XML.ArrayPostfix_Default"}}{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+  Emits the value of the given constant, or the <enum>-tagged name if one exists.
+-------------------------------------------------------------------------------
+*/}}
+{{define "XML.NamedValue.Definition"}}<enum>{{$.Node.Name}}</enum>{{end}}
+{{define "XML.NamedValue.EnumEntry"}}<enum>{{$.Node.Name}}</enum>{{end}}
+{{define "XML.NamedValue_Default"}}{{$.Node}}{{end}}
diff --git a/vulkan/api/templates/vulkan_common.tmpl b/vulkan/api/templates/vulkan_common.tmpl
new file mode 100644
index 0000000..f694c56
--- /dev/null
+++ b/vulkan/api/templates/vulkan_common.tmpl
@@ -0,0 +1,223 @@
+{{$clang_style := "{BasedOnStyle: Google, AccessModifierOffset: -4, ColumnLimit: 200, ContinuationIndentWidth: 8, IndentWidth: 4, AlignOperands: true, CommentPragmas: '.*'}"}}
+{{Global "clang-format" (Strings "clang-format" "-style" $clang_style)}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits the C translation for the specified type.
+-------------------------------------------------------------------------------
+*/}}
+{{define "Type.Class" }}{{if GetAnnotation $.Type "internal"}}struct {{end}}{{Macro "StructName" $.Type}}{{end}}
+{{define "Type.Pseudonym" }}{{$.Type.Name}}{{end}}
+{{define "Type.Enum" }}{{$.Type.Name}}{{end}}
+{{define "Type.StaticArray"}}{{Node "Type" $.Type.ValueType}}{{end}}
+{{define "Type.Pointer" }}{{if $.Type.Const}}{{Node "ConstType" $.Type.To}}{{else}}{{Node "Type" $.Type.To}}{{end}}*{{end}}
+{{define "Type.Slice" }}{{Log "%T %+v" $.Node $.Node}}{{Node "Type" $.Type.To}}*{{end}}
+{{define "Type#bool" }}bool{{end}}
+{{define "Type#int" }}int{{end}}
+{{define "Type#uint" }}unsigned int{{end}}
+{{define "Type#s8" }}int8_t{{end}}
+{{define "Type#u8" }}uint8_t{{end}}
+{{define "Type#s16" }}int16_t{{end}}
+{{define "Type#u16" }}uint16_t{{end}}
+{{define "Type#s32" }}int32_t{{end}}
+{{define "Type#u32" }}uint32_t{{end}}
+{{define "Type#f32" }}float{{end}}
+{{define "Type#s64" }}int64_t{{end}}
+{{define "Type#u64" }}uint64_t{{end}}
+{{define "Type#f64" }}double{{end}}
+{{define "Type#void" }}void{{end}}
+{{define "Type#char" }}char{{end}}
+
+{{define "ConstType_Default"}}const {{Node "Type" $.Type}}{{end}}
+{{define "ConstType.Pointer"}}{{Node "Type" $.Type}} const{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits the C translation for the specified documentation block (string array).
+-------------------------------------------------------------------------------
+*/}}
+{{define "Docs"}}
+ {{if $}}// {{$ | JoinWith "\n// "}}{{end}}
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits the name of a bitfield entry.
+-------------------------------------------------------------------------------
+*/}}
+{{define "BitfieldEntryName"}}
+ {{AssertType $ "EnumEntry"}}
+
+ {{Macro "EnumEntry" $}}
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits the name of an enum type.
+-------------------------------------------------------------------------------
+*/}}
+{{define "EnumName"}}{{AssertType $ "Enum"}}{{$.Name}}{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits the name of an enum entry.
+-------------------------------------------------------------------------------
+*/}}
+{{define "EnumEntry"}}
+ {{AssertType $.Owner "Enum"}}
+ {{AssertType $.Name "string"}}
+
+ {{$.Name}}
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits the name of the first entry of an enum.
+-------------------------------------------------------------------------------
+*/}}
+{{define "EnumFirstEntry"}}
+ {{AssertType $ "Enum"}}
+
+ {{range $i, $e := $.Entries}}
+ {{if not $i}}{{$e.Name}}{{end}}
+ {{end}}
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits the name of the last entry of an enum.
+-------------------------------------------------------------------------------
+*/}}
+{{define "EnumLastEntry"}}
+ {{AssertType $ "Enum"}}
+
+ {{range $i, $e := $.Entries}}
+ {{if not (HasMore $i $.Entries)}}{{$e.Name}}{{end}}
+ {{end}}
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits the name of a struct (class) type.
+-------------------------------------------------------------------------------
+*/}}
+{{define "StructName"}}{{AssertType $ "Class"}}{{$.Name}}{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits the name of a function.
+-------------------------------------------------------------------------------
+*/}}
+{{define "FunctionName"}}{{AssertType $ "Function"}}{{$.Name}}{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits the fixed-size-array postfix for pseudonym types annotated with @array
+-------------------------------------------------------------------------------
+*/}}
+{{define "ArrayPostfix"}}{{Node "ArrayPostfix" $}}{{end}}
+{{define "ArrayPostfix.StaticArray"}}[{{$.Type.Size}}]{{end}}
+{{define "ArrayPostfix_Default"}}{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits a C type and name for the given parameter
+-------------------------------------------------------------------------------
+*/}}
+{{define "Parameter"}}
+ {{AssertType $ "Parameter"}}
+
+ {{if GetAnnotation $ "readonly"}}const {{end}}{{Macro "ParameterType" $}} {{$.Name}}{{Macro "ArrayPostfix" $}}
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits a C name for the given parameter
+-------------------------------------------------------------------------------
+*/}}
+{{define "ParameterName"}}
+ {{AssertType $ "Parameter"}}
+
+ {{$.Name}}
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits a C type for the given parameter
+-------------------------------------------------------------------------------
+*/}}
+{{define "ParameterType"}}{{AssertType $ "Parameter"}}{{Node "Type" $}}{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits a comma-separated list of C type-name paired parameters for the given
+ command.
+-------------------------------------------------------------------------------
+*/}}
+{{define "Parameters"}}
+ {{AssertType $ "Function"}}
+
+ {{ForEach $.CallParameters "Parameter" | JoinWith ", "}}
+ {{if not $.CallParameters}}void{{end}}
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits the C function pointer name for the specified command.
+-------------------------------------------------------------------------------
+*/}}
+{{define "FunctionPtrName"}}
+ {{AssertType $ "Function"}}
+
+ PFN_{{$.Name}}
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Parses const variables as text Globals.
+-------------------------------------------------------------------------------
+*/}}
+{{define "DefineGlobals"}}
+ {{AssertType $ "API"}}
+
+ {{range $d := $.Definitions}}
+ {{Global $d.Name $d.Expression}}
+ {{end}}
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Given a function, return "Global", "Instance", or "Device" depending on which
+ dispatch table the function belongs to.
+-------------------------------------------------------------------------------
+*/}}
+{{define "Vtbl#VkInstance" }}Instance{{end}}
+{{define "Vtbl#VkPhysicalDevice"}}Instance{{end}}
+{{define "Vtbl#VkDevice" }}Device{{end}}
+{{define "Vtbl#VkQueue" }}Device{{end}}
+{{define "Vtbl#VkCommandBuffer" }}Device{{end}}
+{{define "Vtbl_Default" }}Global{{end}}
+{{define "Vtbl"}}
+ {{AssertType $ "Function"}}
+
+ {{if gt (len $.CallParameters) 0}}
+ {{Node "Vtbl" (index $.CallParameters 0)}}
+ {{else}}Global
+ {{end}}
+{{end}}
diff --git a/vulkan/api/templates/vulkan_h.tmpl b/vulkan/api/templates/vulkan_h.tmpl
new file mode 100644
index 0000000..b2a77ec
--- /dev/null
+++ b/vulkan/api/templates/vulkan_h.tmpl
@@ -0,0 +1,291 @@
+{{Include "vulkan_common.tmpl"}}
+{{Macro "DefineGlobals" $}}
+{{$ | Macro "vulkan.h" | Format (Global "clang-format") | Write "../include/vulkan.h"}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Entry point
+-------------------------------------------------------------------------------
+*/}}
+{{define "vulkan.h"}}
+#ifndef __vulkan_h_
+#define __vulkan_h_ 1
+¶
+#ifdef __cplusplus
+extern "C" {
+#endif
+¶
+/*
+** Copyright (c) 2015 The Khronos Group Inc.
+**
+** Permission is hereby granted, free of charge, to any person obtaining a
+** copy of this software and/or associated documentation files (the
+** "Materials"), to deal in the Materials without restriction, including
+** without limitation the rights to use, copy, modify, merge, publish,
+** distribute, sublicense, and/or sell copies of the Materials, and to
+** permit persons to whom the Materials are furnished to do so, subject to
+** the following conditions:
+**
+** The above copyright notice and this permission notice shall be included
+** in all copies or substantial portions of the Materials.
+**
+** THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+** EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+** MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+** IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+** CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+** TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+** MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.
+*/
+¶
+/*
+** This header is generated from the Khronos Vulkan API Registry.
+**
+*/
+¶
+#define VK_VERSION_1_0 1
+#include "vk_platform.h"
+¶
+#define VK_MAKE_VERSION(major, minor, patch) ((major << 22) | (minor << 12) | patch)
+¶
+// Vulkan API version supported by this file
+#define VK_API_VERSION \
+ VK_MAKE_VERSION({{Global "VERSION_MAJOR"}}, {{Global "VERSION_MINOR"}}, {{Global "VERSION_PATCH"}})
+¶
+#if defined(__cplusplus) && ((defined(_MSC_VER) && _MSC_VER >= 1800) || __cplusplus >= 201103L)
+ #define VK_NULL_HANDLE nullptr
+#else
+ #define VK_NULL_HANDLE 0
+#endif
+¶
+#define VK_DEFINE_HANDLE(obj) typedef struct obj##_T* obj;
+¶
+#if defined(__cplusplus)
+#if ((defined(_MSC_VER) && _MSC_VER >= 1800) || __cplusplus >= 201103L)
+// The bool operator only works if there are no implicit conversions from an obj to
+// a bool-compatible type, which can then be used to unintentionally violate type safety.
+// C++11 and above supports the "explicit" keyword on conversion operators to stop this
+// from happening. Otherwise users of C++ below C++11 won't get direct access to evaluating
+// the object handle as a bool in expressions like:
+// if (obj) vkDestroy(obj);
+#define VK_NONDISP_HANDLE_OPERATOR_BOOL() \
+ explicit operator bool() const { return handle != 0; }
+#define VK_NONDISP_HANDLE_CONSTRUCTOR_FROM_UINT64(obj) \
+ explicit obj(uint64_t x) : handle(x) { } \
+ obj(decltype(nullptr)) : handle(0) { }
+#else
+#define VK_NONDISP_HANDLE_OPERATOR_BOOL()
+#define VK_NONDISP_HANDLE_CONSTRUCTOR_FROM_UINT64(obj) \
+ obj(uint64_t x) : handle(x) { }
+#endif
+#define VK_DEFINE_NONDISP_HANDLE(obj) \
+ struct obj { \
+ obj() : handle(0) { } \
+ VK_NONDISP_HANDLE_CONSTRUCTOR_FROM_UINT64(obj) \
+ obj& operator=(uint64_t x) { \
+ handle = x; \
+ return *this; \
+ } \
+ bool operator==(const obj& other) const { return handle == other.handle; } \
+ bool operator!=(const obj& other) const { return handle != other.handle; } \
+ bool operator!() const { return !handle; } \
+ VK_NONDISP_HANDLE_OPERATOR_BOOL() \
+ uint64_t handle; \
+ };
+#else
+#define VK_DEFINE_NONDISP_HANDLE(obj) \
+ typedef struct obj##_T { uint64_t handle; } obj;
+#endif
+¶
+#define VK_LOD_CLAMP_NONE 1000.0f
+#define VK_REMAINING_MIP_LEVELS (~0U)
+#define VK_REMAINING_ARRAY_LAYERS (~0U)
+#define VK_WHOLE_SIZE (~0ULL)
+#define VK_ATTACHMENT_UNUSED (~0U)
+#define VK_QUEUE_FAMILY_IGNORED (~0U)
+#define VK_SUBPASS_EXTERNAL (~0U)
+{{range $d := $.Definitions}}
+ {{if HasPrefix $d.Name "VK_"}}#define {{$d.Name}} {{$d.Expression}}{{end}}
+{{end}}
+¶
+{{range $i, $p := $.Pseudonyms}}
+ {{if GetAnnotation $p "dispatchHandle"}}VK_DEFINE_HANDLE({{$p.Name}})
+ {{else if GetAnnotation $p "nonDispatchHandle"}}VK_DEFINE_NONDISP_HANDLE({{$p.Name}})
+ {{end}}
+{{end}}
+¶
+// ------------------------------------------------------------------------------------------------
+// Enumerations
+¶
+ {{range $e := $.Enums}}
+ {{if not $e.IsBitfield}}
+ {{Macro "Enum" $e}}
+ {{end}}
+ {{end}}
+¶
+// ------------------------------------------------------------------------------------------------
+// Flags
+¶
+ {{range $e := $.Enums}}
+ {{if $e.IsBitfield}}
+ {{Macro "Bitfield" $e}}
+ {{end}}
+ {{end}}
+¶
+// ------------------------------------------------------------------------------------------------
+// Vulkan structures
+¶
+ {{/* Function pointers */}}
+ {{range $f := AllCommands $}}
+ {{if GetAnnotation $f "pfn"}}
+ {{Macro "FunctionTypedef" $f}}
+ {{end}}
+ {{end}}
+¶
+ {{range $c := $.Classes}}
+ {{if not (GetAnnotation $c "internal")}}
+ {{Macro "Struct" $c}}
+ {{end}}
+ {{end}}
+¶
+// ------------------------------------------------------------------------------------------------
+// API functions
+¶
+ {{range $f := AllCommands $}}
+ {{if not (GetAnnotation $f "pfn")}}
+ {{Macro "FunctionTypedef" $f}}
+ {{end}}
+ {{end}}
+¶
+#ifdef VK_NO_PROTOTYPES
+¶
+ {{range $f := AllCommands $}}
+ {{if not (GetAnnotation $f "pfn")}}
+ {{Macro "FunctionDecl" $f}}
+ {{end}}
+ {{end}}
+¶
+#endif
+¶
+#ifdef __cplusplus
+}
+#endif
+¶
+#endif
+{{end}}
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits the C declaration for the specified bitfield.
+-------------------------------------------------------------------------------
+*/}}
+{{define "Bitfield"}}
+ {{AssertType $ "Enum"}}
+
+ {{Macro "Docs" $.Docs}}
+ typedef VkFlags {{Macro "EnumName" $}};
+ {{if $.Entries}}
+ typedef enum {
+ {{range $b := $.Entries}}
+ {{Macro "BitfieldEntryName" $b}} = {{printf "0x%.8X" $b.Value}}, {{Macro "Docs" $b.Docs}}
+ {{end}}
+ } {{Macro "EnumName" $ | TrimRight "s"}}Bits;
+ {{end}}
+ ¶
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits the C declaration for the specified enum.
+-------------------------------------------------------------------------------
+*/}}
+{{define "Enum"}}
+ {{AssertType $ "Enum"}}
+
+ {{Macro "Docs" $.Docs}}
+ typedef enum {
+ {{range $i, $e := $.Entries}}
+ {{Macro "EnumEntry" $e}} = {{printf "0x%.8X" $e.Value}}, {{Macro "Docs" $e.Docs}}
+ {{end}}
+ ¶
+ {{$name := Macro "EnumName" $ | TrimRight "ABCDEFGHIJKLMNOQRSTUVWXYZ" | SplitPascalCase | Upper | JoinWith "_"}}
+ {{if GetAnnotation $ "enumMaxOnly"}}
+ VK_MAX_ENUM({{$name | SplitOn "VK_"}})
+ {{else}}
+ {{$first := Macro "EnumFirstEntry" $ | SplitOn $name | TrimLeft "_"}}
+ {{$last := Macro "EnumLastEntry" $ | SplitOn $name | TrimLeft "_"}}
+ VK_ENUM_RANGE({{$name | SplitOn "VK_"}}, {{$first}}, {{$last}})
+ {{end}}
+ } {{Macro "EnumName" $}};
+ ¶
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits the C declaration for the specified class.
+-------------------------------------------------------------------------------
+*/}}
+{{define "Struct"}}
+ {{AssertType $ "Class"}}
+
+ {{Macro "Docs" $.Docs}}
+ typedef {{Macro "StructType" $}} {
+ {{ForEach $.Fields "Field" | JoinWith "\n"}}
+ } {{Macro "StructName" $}};
+ ¶
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits the C declaration for the specified class field.
+-------------------------------------------------------------------------------
+*/}}
+{{define "Field"}}
+ {{AssertType $ "Field"}}
+
+ {{Node "Type" $}} {{$.Name}}§
+ {{Macro "ArrayPostfix" (TypeOf $)}}; {{Macro "Docs" $.Docs}}
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits either 'struct' or 'union' for the specified class.
+-------------------------------------------------------------------------------
+*/}}
+{{define "StructType"}}
+ {{AssertType $ "Class"}}
+
+ {{if GetAnnotation $ "union"}}union{{else}}struct{{end}}
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits the C function pointer typedef declaration for the specified command.
+-------------------------------------------------------------------------------
+*/}}
+{{define "FunctionTypedef"}}
+ {{AssertType $ "Function"}}
+
+ typedef {{Node "Type" $.Return}} (VKAPI* {{Macro "FunctionPtrName" $}})({{Macro "Parameters" $}});
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits the C function declaration for the specified command.
+-------------------------------------------------------------------------------
+*/}}
+{{define "FunctionDecl"}}
+ {{AssertType $ "Function"}}
+
+ {{if not (GetAnnotation $ "fptr")}}
+ {{Macro "Docs" $.Docs}}
+ {{Node "Type" $.Return}} VKAPI {{Macro "FunctionName" $}}({{Macro "Parameters" $}});
+ {{end}}
+{{end}}
diff --git a/vulkan/api/vulkan.api b/vulkan/api/vulkan.api
new file mode 100644
index 0000000..9b1e684
--- /dev/null
+++ b/vulkan/api/vulkan.api
@@ -0,0 +1,5488 @@
+// Copyright (c) 2015 The Khronos Group Inc.
+//
+// Permission is hereby granted, free of charge, to any person obtaining a
+// copy of this software and/or associated documentation files (the
+// "Materials"), to deal in the Materials without restriction, including
+// without limitation the rights to use, copy, modify, merge, publish,
+// distribute, sublicense, and/or sell copies of the Materials, and to
+// permit persons to whom the Materials are furnished to do so, subject to
+// the following conditions:
+//
+// The above copyright notice and this permission notice shall be included
+// in all copies or substantial portions of the Materials.
+//
+// THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+// IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+// CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+// TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+// MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.
+
+import platform "platform.api"
+
+///////////////
+// Constants //
+///////////////
+
+// API version (major.minor.patch)
+define VERSION_MAJOR 1
+define VERSION_MINOR 0
+define VERSION_PATCH 2
+
+// API limits
+define VK_MAX_PHYSICAL_DEVICE_NAME_SIZE 256
+define VK_UUID_SIZE 16
+define VK_MAX_EXTENSION_NAME_SIZE 256
+define VK_MAX_DESCRIPTION_SIZE 256
+define VK_MAX_MEMORY_TYPES 32
+define VK_MAX_MEMORY_HEAPS 16 /// The maximum number of unique memory heaps, each of which supports one or more memory types.
+
+// API keywords
+define VK_TRUE 1
+define VK_FALSE 0
+
+// API keyword, but needs special handling by some templates
+define NULL_HANDLE 0
+
+@extension("VK_KHR_surface") define VK_KHR_SURFACE_SPEC_VERSION 25
+@extension("VK_KHR_surface") define VK_KHR_SURFACE_EXTENSION_NAME "VK_KHR_surface"
+
+@extension("VK_KHR_swapchain") define VK_KHR_SWAPCHAIN_SPEC_VERSION 67
+@extension("VK_KHR_swapchain") define VK_KHR_SWAPCHAIN_EXTENSION_NAME "VK_KHR_swapchain"
+
+@extension("VK_KHR_display") define VK_KHR_DISPLAY_SPEC_VERSION 21
+@extension("VK_KHR_display") define VK_KHR_DISPLAY_EXTENSION_NAME "VK_KHR_display"
+
+@extension("VK_KHR_display_swapchain") define VK_KHR_DISPLAY_SWAPCHAIN_SPEC_VERSION 9
+@extension("VK_KHR_display_swapchain") define VK_KHR_DISPLAY_SWAPCHAIN_EXTENSION_NAME "VK_KHR_display_swapchain"
+
+@extension("VK_KHR_xlib_surface") define VK_KHR_XLIB_SURFACE_SPEC_VERSION 6
+@extension("VK_KHR_xlib_surface") define VK_KHR_XLIB_SURFACE_EXTENSION_NAME "VK_KHR_xlib_surface"
+
+@extension("VK_KHR_xcb_surface") define VK_KHR_XCB_SURFACE_SPEC_VERSION 6
+@extension("VK_KHR_xcb_surface") define VK_KHR_XCB_SURFACE_EXTENSION_NAME "VK_KHR_xcb_surface"
+
+@extension("VK_KHR_wayland_surface") define VK_KHR_WAYLAND_SURFACE_SPEC_VERSION 5
+@extension("VK_KHR_wayland_surface") define VK_KHR_WAYLAND_SURFACE_EXTENSION_NAME "VK_KHR_wayland_surface"
+
+@extension("VK_KHR_mir_surface") define VK_KHR_MIR_SURFACE_SPEC_VERSION 4
+@extension("VK_KHR_mir_surface") define VK_KHR_MIR_SURFACE_EXTENSION_NAME "VK_KHR_mir_surface"
+
+@extension("VK_KHR_android_surface") define VK_KHR_ANDROID_SURFACE_SPEC_VERSION 6
+@extension("VK_KHR_android_surface") define VK_KHR_ANDROID_SURFACE_EXTENSION_NAME "VK_KHR_android_surface"
+
+@extension("VK_KHR_win32_surface") define VK_KHR_WIN32_SURFACE_SPEC_VERSION 5
+@extension("VK_KHR_win32_surface") define VK_KHR_WIN32_SURFACE_EXTENSION_NAME "VK_KHR_win32_surface"
+
+@extension("VK_EXT_debug_report") define VK_EXT_DEBUG_REPORT_SPEC_VERSION 2
+@extension("VK_EXT_debug_report") define VK_EXT_DEBUG_REPORT_EXTENSION_NAME "VK_EXT_debug_report"
+
+
+/////////////
+// Types //
+/////////////
+
+type u32 VkBool32
+type u32 VkFlags
+type u64 VkDeviceSize
+type u32 VkSampleMask
+
+/// Dispatchable handle types.
+@dispatchHandle type u64 VkInstance
+@dispatchHandle type u64 VkPhysicalDevice
+@dispatchHandle type u64 VkDevice
+@dispatchHandle type u64 VkQueue
+@dispatchHandle type u64 VkCommandBuffer
+
+/// Non-dispatchable handle types.
+@nonDispatchHandle type u64 VkDeviceMemory
+@nonDispatchHandle type u64 VkCommandPool
+@nonDispatchHandle type u64 VkBuffer
+@nonDispatchHandle type u64 VkBufferView
+@nonDispatchHandle type u64 VkImage
+@nonDispatchHandle type u64 VkImageView
+@nonDispatchHandle type u64 VkShaderModule
+@nonDispatchHandle type u64 VkPipeline
+@nonDispatchHandle type u64 VkPipelineLayout
+@nonDispatchHandle type u64 VkSampler
+@nonDispatchHandle type u64 VkDescriptorSet
+@nonDispatchHandle type u64 VkDescriptorSetLayout
+@nonDispatchHandle type u64 VkDescriptorPool
+@nonDispatchHandle type u64 VkFence
+@nonDispatchHandle type u64 VkSemaphore
+@nonDispatchHandle type u64 VkEvent
+@nonDispatchHandle type u64 VkQueryPool
+@nonDispatchHandle type u64 VkFramebuffer
+@nonDispatchHandle type u64 VkRenderPass
+@nonDispatchHandle type u64 VkPipelineCache
+
+@extension("VK_KHR_surface") @nonDispatchHandle type u64 VkSurfaceKHR
+
+@extension("VK_KHR_swapchain") @nonDispatchHandle type u64 VkSwapchainKHR
+
+@extension("VK_KHR_display") @nonDispatchHandle type u64 VkDisplayKHR
+@extension("VK_KHR_display") @nonDispatchHandle type u64 VkDisplayModeKHR
+
+@extension("VK_EXT_debug_report") @nonDispatchHandle type u64 VkDebugReportCallbackEXT
+
+
+/////////////
+// Enums //
+/////////////
+
+enum VkImageLayout {
+    VK_IMAGE_LAYOUT_UNDEFINED = 0x00000000, /// Implicit layout of an image whose contents are undefined (e.g. right after creation)
+ VK_IMAGE_LAYOUT_GENERAL = 0x00000001, /// General layout when image can be used for any kind of access
+ VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL = 0x00000002, /// Optimal layout when image is only used for color attachment read/write
+ VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL = 0x00000003, /// Optimal layout when image is only used for depth/stencil attachment read/write
+ VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL = 0x00000004, /// Optimal layout when image is used for read only depth/stencil attachment and shader access
+ VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL = 0x00000005, /// Optimal layout when image is used for read only shader access
+ VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL = 0x00000006, /// Optimal layout when image is used only as source of transfer operations
+ VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL = 0x00000007, /// Optimal layout when image is used only as destination of transfer operations
+ VK_IMAGE_LAYOUT_PREINITIALIZED = 0x00000008, /// Initial layout used when the data is populated by the CPU
+
+ //@extension("VK_KHR_swapchain")
+ VK_IMAGE_LAYOUT_PRESENT_SRC_KHR = 1000001002,
+}
+
+enum VkAttachmentLoadOp {
+ VK_ATTACHMENT_LOAD_OP_LOAD = 0x00000000,
+ VK_ATTACHMENT_LOAD_OP_CLEAR = 0x00000001,
+ VK_ATTACHMENT_LOAD_OP_DONT_CARE = 0x00000002,
+}
+
+enum VkAttachmentStoreOp {
+ VK_ATTACHMENT_STORE_OP_STORE = 0x00000000,
+ VK_ATTACHMENT_STORE_OP_DONT_CARE = 0x00000001,
+}
+
+enum VkImageType {
+ VK_IMAGE_TYPE_1D = 0x00000000,
+ VK_IMAGE_TYPE_2D = 0x00000001,
+ VK_IMAGE_TYPE_3D = 0x00000002,
+}
+
+enum VkImageTiling {
+ VK_IMAGE_TILING_OPTIMAL = 0x00000000,
+ VK_IMAGE_TILING_LINEAR = 0x00000001,
+}
+
+enum VkImageViewType {
+ VK_IMAGE_VIEW_TYPE_1D = 0x00000000,
+ VK_IMAGE_VIEW_TYPE_2D = 0x00000001,
+ VK_IMAGE_VIEW_TYPE_3D = 0x00000002,
+ VK_IMAGE_VIEW_TYPE_CUBE = 0x00000003,
+ VK_IMAGE_VIEW_TYPE_1D_ARRAY = 0x00000004,
+ VK_IMAGE_VIEW_TYPE_2D_ARRAY = 0x00000005,
+ VK_IMAGE_VIEW_TYPE_CUBE_ARRAY = 0x00000006,
+}
+
+enum VkCommandBufferLevel {
+ VK_COMMAND_BUFFER_LEVEL_PRIMARY = 0x00000000,
+ VK_COMMAND_BUFFER_LEVEL_SECONDARY = 0x00000001,
+}
+
+enum VkComponentSwizzle {
+ VK_COMPONENT_SWIZZLE_IDENTITY = 0x00000000,
+ VK_COMPONENT_SWIZZLE_ZERO = 0x00000001,
+ VK_COMPONENT_SWIZZLE_ONE = 0x00000002,
+ VK_COMPONENT_SWIZZLE_R = 0x00000003,
+ VK_COMPONENT_SWIZZLE_G = 0x00000004,
+ VK_COMPONENT_SWIZZLE_B = 0x00000005,
+ VK_COMPONENT_SWIZZLE_A = 0x00000006,
+}
+
+enum VkDescriptorType {
+ VK_DESCRIPTOR_TYPE_SAMPLER = 0x00000000,
+ VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER = 0x00000001,
+ VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE = 0x00000002,
+ VK_DESCRIPTOR_TYPE_STORAGE_IMAGE = 0x00000003,
+ VK_DESCRIPTOR_TYPE_UNIFORM_TEXEL_BUFFER = 0x00000004,
+ VK_DESCRIPTOR_TYPE_STORAGE_TEXEL_BUFFER = 0x00000005,
+ VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER = 0x00000006,
+ VK_DESCRIPTOR_TYPE_STORAGE_BUFFER = 0x00000007,
+ VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC = 0x00000008,
+ VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC = 0x00000009,
+ VK_DESCRIPTOR_TYPE_INPUT_ATTACHMENT = 0x0000000a,
+}
+
+enum VkQueryType {
+ VK_QUERY_TYPE_OCCLUSION = 0x00000000,
+ VK_QUERY_TYPE_PIPELINE_STATISTICS = 0x00000001, /// Optional
+ VK_QUERY_TYPE_TIMESTAMP = 0x00000002,
+}
+
+enum VkBorderColor {
+ VK_BORDER_COLOR_FLOAT_TRANSPARENT_BLACK = 0x00000000,
+ VK_BORDER_COLOR_INT_TRANSPARENT_BLACK = 0x00000001,
+ VK_BORDER_COLOR_FLOAT_OPAQUE_BLACK = 0x00000002,
+ VK_BORDER_COLOR_INT_OPAQUE_BLACK = 0x00000003,
+ VK_BORDER_COLOR_FLOAT_OPAQUE_WHITE = 0x00000004,
+ VK_BORDER_COLOR_INT_OPAQUE_WHITE = 0x00000005,
+}
+
+enum VkPipelineBindPoint {
+ VK_PIPELINE_BIND_POINT_GRAPHICS = 0x00000000,
+ VK_PIPELINE_BIND_POINT_COMPUTE = 0x00000001,
+}
+
+enum VkPrimitiveTopology {
+ VK_PRIMITIVE_TOPOLOGY_POINT_LIST = 0x00000000,
+ VK_PRIMITIVE_TOPOLOGY_LINE_LIST = 0x00000001,
+ VK_PRIMITIVE_TOPOLOGY_LINE_STRIP = 0x00000002,
+ VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST = 0x00000003,
+ VK_PRIMITIVE_TOPOLOGY_TRIANGLE_STRIP = 0x00000004,
+ VK_PRIMITIVE_TOPOLOGY_TRIANGLE_FAN = 0x00000005,
+ VK_PRIMITIVE_TOPOLOGY_LINE_LIST_WITH_ADJACENCY = 0x00000006,
+ VK_PRIMITIVE_TOPOLOGY_LINE_STRIP_WITH_ADJACENCY = 0x00000007,
+ VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST_WITH_ADJACENCY = 0x00000008,
+ VK_PRIMITIVE_TOPOLOGY_TRIANGLE_STRIP_WITH_ADJACENCY = 0x00000009,
+ VK_PRIMITIVE_TOPOLOGY_PATCH_LIST = 0x0000000a,
+}
+
+enum VkSharingMode {
+ VK_SHARING_MODE_EXCLUSIVE = 0x00000000,
+ VK_SHARING_MODE_CONCURRENT = 0x00000001,
+}
+
+enum VkIndexType {
+ VK_INDEX_TYPE_UINT16 = 0x00000000,
+ VK_INDEX_TYPE_UINT32 = 0x00000001,
+}
+
+enum VkFilter {
+ VK_FILTER_NEAREST = 0x00000000,
+ VK_FILTER_LINEAR = 0x00000001,
+}
+
+enum VkSamplerMipmapMode {
+    VK_SAMPLER_MIPMAP_MODE_NEAREST = 0x00000000, /// Choose nearest mip level
+    VK_SAMPLER_MIPMAP_MODE_LINEAR = 0x00000001, /// Linear filter between mip levels
+}
+
+enum VkSamplerAddressMode {
+ VK_SAMPLER_ADDRESS_MODE_REPEAT = 0x00000000,
+ VK_SAMPLER_ADDRESS_MODE_MIRRORED_REPEAT = 0x00000001,
+ VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE = 0x00000002,
+ VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_BORDER = 0x00000003,
+ VK_SAMPLER_ADDRESS_MODE_MIRROR_CLAMP_TO_EDGE = 0x00000004,
+}
+
+enum VkCompareOp {
+ VK_COMPARE_OP_NEVER = 0x00000000,
+ VK_COMPARE_OP_LESS = 0x00000001,
+ VK_COMPARE_OP_EQUAL = 0x00000002,
+ VK_COMPARE_OP_LESS_OR_EQUAL = 0x00000003,
+ VK_COMPARE_OP_GREATER = 0x00000004,
+ VK_COMPARE_OP_NOT_EQUAL = 0x00000005,
+ VK_COMPARE_OP_GREATER_OR_EQUAL = 0x00000006,
+ VK_COMPARE_OP_ALWAYS = 0x00000007,
+}
+
+enum VkPolygonMode {
+ VK_POLYGON_MODE_FILL = 0x00000000,
+ VK_POLYGON_MODE_LINE = 0x00000001,
+ VK_POLYGON_MODE_POINT = 0x00000002,
+}
+
+enum VkFrontFace {
+ VK_FRONT_FACE_COUNTER_CLOCKWISE = 0x00000000,
+ VK_FRONT_FACE_CLOCKWISE = 0x00000001,
+}
+
+enum VkBlendFactor {
+ VK_BLEND_FACTOR_ZERO = 0x00000000,
+ VK_BLEND_FACTOR_ONE = 0x00000001,
+ VK_BLEND_FACTOR_SRC_COLOR = 0x00000002,
+ VK_BLEND_FACTOR_ONE_MINUS_SRC_COLOR = 0x00000003,
+ VK_BLEND_FACTOR_DST_COLOR = 0x00000004,
+ VK_BLEND_FACTOR_ONE_MINUS_DST_COLOR = 0x00000005,
+ VK_BLEND_FACTOR_SRC_ALPHA = 0x00000006,
+ VK_BLEND_FACTOR_ONE_MINUS_SRC_ALPHA = 0x00000007,
+ VK_BLEND_FACTOR_DST_ALPHA = 0x00000008,
+ VK_BLEND_FACTOR_ONE_MINUS_DST_ALPHA = 0x00000009,
+ VK_BLEND_FACTOR_CONSTANT_COLOR = 0x0000000a,
+ VK_BLEND_FACTOR_ONE_MINUS_CONSTANT_COLOR = 0x0000000b,
+ VK_BLEND_FACTOR_CONSTANT_ALPHA = 0x0000000c,
+ VK_BLEND_FACTOR_ONE_MINUS_CONSTANT_ALPHA = 0x0000000d,
+ VK_BLEND_FACTOR_SRC_ALPHA_SATURATE = 0x0000000e,
+ VK_BLEND_FACTOR_SRC1_COLOR = 0x0000000f,
+ VK_BLEND_FACTOR_ONE_MINUS_SRC1_COLOR = 0x00000010,
+ VK_BLEND_FACTOR_SRC1_ALPHA = 0x00000011,
+ VK_BLEND_FACTOR_ONE_MINUS_SRC1_ALPHA = 0x00000012,
+}
+
+enum VkBlendOp {
+ VK_BLEND_OP_ADD = 0x00000000,
+ VK_BLEND_OP_SUBTRACT = 0x00000001,
+ VK_BLEND_OP_REVERSE_SUBTRACT = 0x00000002,
+ VK_BLEND_OP_MIN = 0x00000003,
+ VK_BLEND_OP_MAX = 0x00000004,
+}
+
+enum VkStencilOp {
+ VK_STENCIL_OP_KEEP = 0x00000000,
+ VK_STENCIL_OP_ZERO = 0x00000001,
+ VK_STENCIL_OP_REPLACE = 0x00000002,
+ VK_STENCIL_OP_INCREMENT_AND_CLAMP = 0x00000003,
+ VK_STENCIL_OP_DECREMENT_AND_CLAMP = 0x00000004,
+ VK_STENCIL_OP_INVERT = 0x00000005,
+ VK_STENCIL_OP_INCREMENT_AND_WRAP = 0x00000006,
+ VK_STENCIL_OP_DECREMENT_AND_WRAP = 0x00000007,
+}
+
+enum VkLogicOp {
+ VK_LOGIC_OP_CLEAR = 0x00000000,
+ VK_LOGIC_OP_AND = 0x00000001,
+ VK_LOGIC_OP_AND_REVERSE = 0x00000002,
+ VK_LOGIC_OP_COPY = 0x00000003,
+ VK_LOGIC_OP_AND_INVERTED = 0x00000004,
+ VK_LOGIC_OP_NO_OP = 0x00000005,
+ VK_LOGIC_OP_XOR = 0x00000006,
+ VK_LOGIC_OP_OR = 0x00000007,
+ VK_LOGIC_OP_NOR = 0x00000008,
+ VK_LOGIC_OP_EQUIVALENT = 0x00000009,
+ VK_LOGIC_OP_INVERT = 0x0000000a,
+ VK_LOGIC_OP_OR_REVERSE = 0x0000000b,
+ VK_LOGIC_OP_COPY_INVERTED = 0x0000000c,
+ VK_LOGIC_OP_OR_INVERTED = 0x0000000d,
+ VK_LOGIC_OP_NAND = 0x0000000e,
+ VK_LOGIC_OP_SET = 0x0000000f,
+}
+
+enum VkSystemAllocationScope {
+ VK_SYSTEM_ALLOCATION_SCOPE_COMMAND = 0x00000000,
+ VK_SYSTEM_ALLOCATION_SCOPE_OBJECT = 0x00000001,
+ VK_SYSTEM_ALLOCATION_SCOPE_CACHE = 0x00000002,
+ VK_SYSTEM_ALLOCATION_SCOPE_DEVICE = 0x00000003,
+ VK_SYSTEM_ALLOCATION_SCOPE_INSTANCE = 0x00000004,
+}
+
+enum VkInternalAllocationType {
+ VK_INTERNAL_ALLOCATION_TYPE_EXECUTABLE = 0x00000000,
+}
+
+enum VkPhysicalDeviceType {
+ VK_PHYSICAL_DEVICE_TYPE_OTHER = 0x00000000,
+ VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU = 0x00000001,
+ VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU = 0x00000002,
+ VK_PHYSICAL_DEVICE_TYPE_VIRTUAL_GPU = 0x00000003,
+ VK_PHYSICAL_DEVICE_TYPE_CPU = 0x00000004,
+}
+
+enum VkVertexInputRate {
+ VK_VERTEX_INPUT_RATE_VERTEX = 0x00000000,
+ VK_VERTEX_INPUT_RATE_INSTANCE = 0x00000001,
+}
+
+/// Vulkan format definitions
+enum VkFormat {
+ VK_FORMAT_UNDEFINED = 0,
+ VK_FORMAT_R4G4_UNORM_PACK8 = 1,
+ VK_FORMAT_R4G4B4A4_UNORM_PACK16 = 2,
+ VK_FORMAT_B4G4R4A4_UNORM_PACK16 = 3,
+ VK_FORMAT_R5G6B5_UNORM_PACK16 = 4,
+ VK_FORMAT_B5G6R5_UNORM_PACK16 = 5,
+ VK_FORMAT_R5G5B5A1_UNORM_PACK16 = 6,
+ VK_FORMAT_B5G5R5A1_UNORM_PACK16 = 7,
+ VK_FORMAT_A1R5G5B5_UNORM_PACK16 = 8,
+ VK_FORMAT_R8_UNORM = 9,
+ VK_FORMAT_R8_SNORM = 10,
+ VK_FORMAT_R8_USCALED = 11,
+ VK_FORMAT_R8_SSCALED = 12,
+ VK_FORMAT_R8_UINT = 13,
+ VK_FORMAT_R8_SINT = 14,
+ VK_FORMAT_R8_SRGB = 15,
+ VK_FORMAT_R8G8_UNORM = 16,
+ VK_FORMAT_R8G8_SNORM = 17,
+ VK_FORMAT_R8G8_USCALED = 18,
+ VK_FORMAT_R8G8_SSCALED = 19,
+ VK_FORMAT_R8G8_UINT = 20,
+ VK_FORMAT_R8G8_SINT = 21,
+ VK_FORMAT_R8G8_SRGB = 22,
+ VK_FORMAT_R8G8B8_UNORM = 23,
+ VK_FORMAT_R8G8B8_SNORM = 24,
+ VK_FORMAT_R8G8B8_USCALED = 25,
+ VK_FORMAT_R8G8B8_SSCALED = 26,
+ VK_FORMAT_R8G8B8_UINT = 27,
+ VK_FORMAT_R8G8B8_SINT = 28,
+ VK_FORMAT_R8G8B8_SRGB = 29,
+ VK_FORMAT_B8G8R8_UNORM = 30,
+ VK_FORMAT_B8G8R8_SNORM = 31,
+ VK_FORMAT_B8G8R8_USCALED = 32,
+ VK_FORMAT_B8G8R8_SSCALED = 33,
+ VK_FORMAT_B8G8R8_UINT = 34,
+ VK_FORMAT_B8G8R8_SINT = 35,
+ VK_FORMAT_B8G8R8_SRGB = 36,
+ VK_FORMAT_R8G8B8A8_UNORM = 37,
+ VK_FORMAT_R8G8B8A8_SNORM = 38,
+ VK_FORMAT_R8G8B8A8_USCALED = 39,
+ VK_FORMAT_R8G8B8A8_SSCALED = 40,
+ VK_FORMAT_R8G8B8A8_UINT = 41,
+ VK_FORMAT_R8G8B8A8_SINT = 42,
+ VK_FORMAT_R8G8B8A8_SRGB = 43,
+ VK_FORMAT_B8G8R8A8_UNORM = 44,
+ VK_FORMAT_B8G8R8A8_SNORM = 45,
+ VK_FORMAT_B8G8R8A8_USCALED = 46,
+ VK_FORMAT_B8G8R8A8_SSCALED = 47,
+ VK_FORMAT_B8G8R8A8_UINT = 48,
+ VK_FORMAT_B8G8R8A8_SINT = 49,
+ VK_FORMAT_B8G8R8A8_SRGB = 50,
+ VK_FORMAT_A8B8G8R8_UNORM_PACK32 = 51,
+ VK_FORMAT_A8B8G8R8_SNORM_PACK32 = 52,
+ VK_FORMAT_A8B8G8R8_USCALED_PACK32 = 53,
+ VK_FORMAT_A8B8G8R8_SSCALED_PACK32 = 54,
+ VK_FORMAT_A8B8G8R8_UINT_PACK32 = 55,
+ VK_FORMAT_A8B8G8R8_SINT_PACK32 = 56,
+ VK_FORMAT_A8B8G8R8_SRGB_PACK32 = 57,
+ VK_FORMAT_A2R10G10B10_UNORM_PACK32 = 58,
+ VK_FORMAT_A2R10G10B10_SNORM_PACK32 = 59,
+ VK_FORMAT_A2R10G10B10_USCALED_PACK32 = 60,
+ VK_FORMAT_A2R10G10B10_SSCALED_PACK32 = 61,
+ VK_FORMAT_A2R10G10B10_UINT_PACK32 = 62,
+ VK_FORMAT_A2R10G10B10_SINT_PACK32 = 63,
+ VK_FORMAT_A2B10G10R10_UNORM_PACK32 = 64,
+ VK_FORMAT_A2B10G10R10_SNORM_PACK32 = 65,
+ VK_FORMAT_A2B10G10R10_USCALED_PACK32 = 66,
+ VK_FORMAT_A2B10G10R10_SSCALED_PACK32 = 67,
+ VK_FORMAT_A2B10G10R10_UINT_PACK32 = 68,
+ VK_FORMAT_A2B10G10R10_SINT_PACK32 = 69,
+ VK_FORMAT_R16_UNORM = 70,
+ VK_FORMAT_R16_SNORM = 71,
+ VK_FORMAT_R16_USCALED = 72,
+ VK_FORMAT_R16_SSCALED = 73,
+ VK_FORMAT_R16_UINT = 74,
+ VK_FORMAT_R16_SINT = 75,
+ VK_FORMAT_R16_SFLOAT = 76,
+ VK_FORMAT_R16G16_UNORM = 77,
+ VK_FORMAT_R16G16_SNORM = 78,
+ VK_FORMAT_R16G16_USCALED = 79,
+ VK_FORMAT_R16G16_SSCALED = 80,
+ VK_FORMAT_R16G16_UINT = 81,
+ VK_FORMAT_R16G16_SINT = 82,
+ VK_FORMAT_R16G16_SFLOAT = 83,
+ VK_FORMAT_R16G16B16_UNORM = 84,
+ VK_FORMAT_R16G16B16_SNORM = 85,
+ VK_FORMAT_R16G16B16_USCALED = 86,
+ VK_FORMAT_R16G16B16_SSCALED = 87,
+ VK_FORMAT_R16G16B16_UINT = 88,
+ VK_FORMAT_R16G16B16_SINT = 89,
+ VK_FORMAT_R16G16B16_SFLOAT = 90,
+ VK_FORMAT_R16G16B16A16_UNORM = 91,
+ VK_FORMAT_R16G16B16A16_SNORM = 92,
+ VK_FORMAT_R16G16B16A16_USCALED = 93,
+ VK_FORMAT_R16G16B16A16_SSCALED = 94,
+ VK_FORMAT_R16G16B16A16_UINT = 95,
+ VK_FORMAT_R16G16B16A16_SINT = 96,
+ VK_FORMAT_R16G16B16A16_SFLOAT = 97,
+ VK_FORMAT_R32_UINT = 98,
+ VK_FORMAT_R32_SINT = 99,
+ VK_FORMAT_R32_SFLOAT = 100,
+ VK_FORMAT_R32G32_UINT = 101,
+ VK_FORMAT_R32G32_SINT = 102,
+ VK_FORMAT_R32G32_SFLOAT = 103,
+ VK_FORMAT_R32G32B32_UINT = 104,
+ VK_FORMAT_R32G32B32_SINT = 105,
+ VK_FORMAT_R32G32B32_SFLOAT = 106,
+ VK_FORMAT_R32G32B32A32_UINT = 107,
+ VK_FORMAT_R32G32B32A32_SINT = 108,
+ VK_FORMAT_R32G32B32A32_SFLOAT = 109,
+ VK_FORMAT_R64_UINT = 110,
+ VK_FORMAT_R64_SINT = 111,
+ VK_FORMAT_R64_SFLOAT = 112,
+ VK_FORMAT_R64G64_UINT = 113,
+ VK_FORMAT_R64G64_SINT = 114,
+ VK_FORMAT_R64G64_SFLOAT = 115,
+ VK_FORMAT_R64G64B64_UINT = 116,
+ VK_FORMAT_R64G64B64_SINT = 117,
+ VK_FORMAT_R64G64B64_SFLOAT = 118,
+ VK_FORMAT_R64G64B64A64_UINT = 119,
+ VK_FORMAT_R64G64B64A64_SINT = 120,
+ VK_FORMAT_R64G64B64A64_SFLOAT = 121,
+ VK_FORMAT_B10G11R11_UFLOAT_PACK32 = 122,
+ VK_FORMAT_E5B9G9R9_UFLOAT_PACK32 = 123,
+ VK_FORMAT_D16_UNORM = 124,
+ VK_FORMAT_X8_D24_UNORM_PACK32 = 125,
+ VK_FORMAT_D32_SFLOAT = 126,
+ VK_FORMAT_S8_UINT = 127,
+ VK_FORMAT_D16_UNORM_S8_UINT = 128,
+ VK_FORMAT_D24_UNORM_S8_UINT = 129,
+ VK_FORMAT_D32_SFLOAT_S8_UINT = 130,
+ VK_FORMAT_BC1_RGB_UNORM_BLOCK = 131,
+ VK_FORMAT_BC1_RGB_SRGB_BLOCK = 132,
+ VK_FORMAT_BC1_RGBA_UNORM_BLOCK = 133,
+ VK_FORMAT_BC1_RGBA_SRGB_BLOCK = 134,
+ VK_FORMAT_BC2_UNORM_BLOCK = 135,
+ VK_FORMAT_BC2_SRGB_BLOCK = 136,
+ VK_FORMAT_BC3_UNORM_BLOCK = 137,
+ VK_FORMAT_BC3_SRGB_BLOCK = 138,
+ VK_FORMAT_BC4_UNORM_BLOCK = 139,
+ VK_FORMAT_BC4_SNORM_BLOCK = 140,
+ VK_FORMAT_BC5_UNORM_BLOCK = 141,
+ VK_FORMAT_BC5_SNORM_BLOCK = 142,
+ VK_FORMAT_BC6H_UFLOAT_BLOCK = 143,
+ VK_FORMAT_BC6H_SFLOAT_BLOCK = 144,
+ VK_FORMAT_BC7_UNORM_BLOCK = 145,
+ VK_FORMAT_BC7_SRGB_BLOCK = 146,
+ VK_FORMAT_ETC2_R8G8B8_UNORM_BLOCK = 147,
+ VK_FORMAT_ETC2_R8G8B8_SRGB_BLOCK = 148,
+ VK_FORMAT_ETC2_R8G8B8A1_UNORM_BLOCK = 149,
+ VK_FORMAT_ETC2_R8G8B8A1_SRGB_BLOCK = 150,
+ VK_FORMAT_ETC2_R8G8B8A8_UNORM_BLOCK = 151,
+ VK_FORMAT_ETC2_R8G8B8A8_SRGB_BLOCK = 152,
+ VK_FORMAT_EAC_R11_UNORM_BLOCK = 153,
+ VK_FORMAT_EAC_R11_SNORM_BLOCK = 154,
+ VK_FORMAT_EAC_R11G11_UNORM_BLOCK = 155,
+ VK_FORMAT_EAC_R11G11_SNORM_BLOCK = 156,
+ VK_FORMAT_ASTC_4x4_UNORM_BLOCK = 157,
+ VK_FORMAT_ASTC_4x4_SRGB_BLOCK = 158,
+ VK_FORMAT_ASTC_5x4_UNORM_BLOCK = 159,
+ VK_FORMAT_ASTC_5x4_SRGB_BLOCK = 160,
+ VK_FORMAT_ASTC_5x5_UNORM_BLOCK = 161,
+ VK_FORMAT_ASTC_5x5_SRGB_BLOCK = 162,
+ VK_FORMAT_ASTC_6x5_UNORM_BLOCK = 163,
+ VK_FORMAT_ASTC_6x5_SRGB_BLOCK = 164,
+ VK_FORMAT_ASTC_6x6_UNORM_BLOCK = 165,
+ VK_FORMAT_ASTC_6x6_SRGB_BLOCK = 166,
+ VK_FORMAT_ASTC_8x5_UNORM_BLOCK = 167,
+ VK_FORMAT_ASTC_8x5_SRGB_BLOCK = 168,
+ VK_FORMAT_ASTC_8x6_UNORM_BLOCK = 169,
+ VK_FORMAT_ASTC_8x6_SRGB_BLOCK = 170,
+ VK_FORMAT_ASTC_8x8_UNORM_BLOCK = 171,
+ VK_FORMAT_ASTC_8x8_SRGB_BLOCK = 172,
+ VK_FORMAT_ASTC_10x5_UNORM_BLOCK = 173,
+ VK_FORMAT_ASTC_10x5_SRGB_BLOCK = 174,
+ VK_FORMAT_ASTC_10x6_UNORM_BLOCK = 175,
+ VK_FORMAT_ASTC_10x6_SRGB_BLOCK = 176,
+ VK_FORMAT_ASTC_10x8_UNORM_BLOCK = 177,
+ VK_FORMAT_ASTC_10x8_SRGB_BLOCK = 178,
+ VK_FORMAT_ASTC_10x10_UNORM_BLOCK = 179,
+ VK_FORMAT_ASTC_10x10_SRGB_BLOCK = 180,
+ VK_FORMAT_ASTC_12x10_UNORM_BLOCK = 181,
+ VK_FORMAT_ASTC_12x10_SRGB_BLOCK = 182,
+ VK_FORMAT_ASTC_12x12_UNORM_BLOCK = 183,
+ VK_FORMAT_ASTC_12x12_SRGB_BLOCK = 184,
+}
+
+/// Structure type enumerant
+enum VkStructureType {
+ VK_STRUCTURE_TYPE_APPLICATION_INFO = 0,
+ VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO = 1,
+ VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO = 2,
+ VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO = 3,
+ VK_STRUCTURE_TYPE_SUBMIT_INFO = 4,
+ VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO = 5,
+ VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE = 6,
+ VK_STRUCTURE_TYPE_BIND_SPARSE_INFO = 7,
+ VK_STRUCTURE_TYPE_FENCE_CREATE_INFO = 8,
+ VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO = 9,
+ VK_STRUCTURE_TYPE_EVENT_CREATE_INFO = 10,
+ VK_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO = 11,
+ VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO = 12,
+ VK_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO = 13,
+ VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO = 14,
+ VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO = 15,
+ VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO = 16,
+ VK_STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO = 17,
+ VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO = 18,
+ VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO = 19,
+ VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO = 20,
+ VK_STRUCTURE_TYPE_PIPELINE_TESSELLATION_STATE_CREATE_INFO = 21,
+ VK_STRUCTURE_TYPE_PIPELINE_VIEWPORT_STATE_CREATE_INFO = 22,
+ VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO = 23,
+ VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO = 24,
+ VK_STRUCTURE_TYPE_PIPELINE_DEPTH_STENCIL_STATE_CREATE_INFO = 25,
+ VK_STRUCTURE_TYPE_PIPELINE_COLOR_BLEND_STATE_CREATE_INFO = 26,
+ VK_STRUCTURE_TYPE_PIPELINE_DYNAMIC_STATE_CREATE_INFO = 27,
+ VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO = 28,
+ VK_STRUCTURE_TYPE_COMPUTE_PIPELINE_CREATE_INFO = 29,
+ VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO = 30,
+ VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO = 31,
+ VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO = 32,
+ VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO = 33,
+ VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO = 34,
+ VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET = 35,
+ VK_STRUCTURE_TYPE_COPY_DESCRIPTOR_SET = 36,
+ VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO = 37,
+ VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO = 38,
+ VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO = 39,
+ VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO = 40,
+ VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO = 41,
+ VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO = 42,
+ VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO = 43,
+ VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER = 44,
+ VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER = 45,
+ VK_STRUCTURE_TYPE_MEMORY_BARRIER = 46,
+ VK_STRUCTURE_TYPE_LOADER_INSTANCE_CREATE_INFO = 47,
+ VK_STRUCTURE_TYPE_LOADER_DEVICE_CREATE_INFO = 48,
+
+ //@extension("VK_KHR_swapchain")
+ VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR = 1000001000,
+ VK_STRUCTURE_TYPE_PRESENT_INFO_KHR = 1000001001,
+
+ //@extension("VK_KHR_display")
+ VK_STRUCTURE_TYPE_DISPLAY_MODE_CREATE_INFO_KHR = 1000002000,
+ VK_STRUCTURE_TYPE_DISPLAY_SURFACE_CREATE_INFO_KHR = 1000002001,
+
+ //@extension("VK_KHR_display_swapchain")
+    VK_STRUCTURE_TYPE_DISPLAY_PRESENT_INFO_KHR = 1000003000,
+
+ //@extension("VK_KHR_xlib_surface")
+ VK_STRUCTURE_TYPE_XLIB_SURFACE_CREATE_INFO_KHR = 1000004000,
+
+ //@extension("VK_KHR_xcb_surface")
+ VK_STRUCTURE_TYPE_XCB_SURFACE_CREATE_INFO_KHR = 1000005000,
+
+ //@extension("VK_KHR_wayland_surface")
+ VK_STRUCTURE_TYPE_WAYLAND_SURFACE_CREATE_INFO_KHR = 1000006000,
+
+ //@extension("VK_KHR_mir_surface")
+ VK_STRUCTURE_TYPE_MIR_SURFACE_CREATE_INFO_KHR = 1000007000,
+
+ //@extension("VK_KHR_android_surface")
+ VK_STRUCTURE_TYPE_ANDROID_SURFACE_CREATE_INFO_KHR = 1000008000,
+
+ //@extension("VK_KHR_win32_surface")
+ VK_STRUCTURE_TYPE_WIN32_SURFACE_CREATE_INFO_KHR = 1000009000,
+
+ //@extension("VK_EXT_debug_report")
+ VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT = 1000011000,
+}
+
+enum VkSubpassContents {
+ VK_SUBPASS_CONTENTS_INLINE = 0x00000000,
+ VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS = 0x00000001,
+}
+
+enum VkPipelineCacheHeaderVersion {
+ VK_PIPELINE_CACHE_HEADER_VERSION_ONE = 1,
+}
+
+@lastUnused(-11)
+/// Error and return codes
+enum VkResult {
+ // Return codes for successful operation execution (positive values)
+ VK_SUCCESS = 0,
+ VK_NOT_READY = 1,
+ VK_TIMEOUT = 2,
+ VK_EVENT_SET = 3,
+ VK_EVENT_RESET = 4,
+ VK_INCOMPLETE = 5,
+
+ //@extension("VK_KHR_swapchain")
+ VK_SUBOPTIMAL_KHR = 1000001003,
+
+ // Error codes (negative values)
+ VK_ERROR_OUT_OF_HOST_MEMORY = 0xFFFFFFFF, // -1
+ VK_ERROR_OUT_OF_DEVICE_MEMORY = 0xFFFFFFFE, // -2
+ VK_ERROR_INITIALIZATION_FAILED = 0xFFFFFFFD, // -3
+ VK_ERROR_DEVICE_LOST = 0xFFFFFFFC, // -4
+ VK_ERROR_MEMORY_MAP_FAILED = 0xFFFFFFFB, // -5
+ VK_ERROR_LAYER_NOT_PRESENT = 0xFFFFFFFA, // -6
+ VK_ERROR_EXTENSION_NOT_PRESENT = 0xFFFFFFF9, // -7
+ VK_ERROR_FEATURE_NOT_PRESENT = 0xFFFFFFF8, // -8
+ VK_ERROR_INCOMPATIBLE_DRIVER = 0xFFFFFFF7, // -9
+ VK_ERROR_TOO_MANY_OBJECTS = 0xFFFFFFF6, // -10
+ VK_ERROR_FORMAT_NOT_SUPPORTED = 0xFFFFFFF5, // -11
+
+ //@extension("VK_KHR_surface")
+ VK_ERROR_SURFACE_LOST_KHR = 0xC4653600, // -1000000000
+
+ //@extension("VK_KHR_surface")
+    VK_ERROR_NATIVE_WINDOW_IN_USE_KHR = 0xC46535FF, // -1000000001
+
+ //@extension("VK_KHR_swapchain")
+ VK_ERROR_OUT_OF_DATE_KHR = 0xC4653214, // -1000001004
+
+ //@extension("VK_KHR_display_swapchain")
+ VK_ERROR_INCOMPATIBLE_DISPLAY_KHR = 0xC4652A47, // -1000003001
+
+ //@extension("VK_EXT_debug_report")
+ VK_ERROR_VALIDATION_FAILED_EXT = 0xC4650B07, // -1000011001
+}
+
+enum VkDynamicState {
+ VK_DYNAMIC_STATE_VIEWPORT = 0x00000000,
+ VK_DYNAMIC_STATE_SCISSOR = 0x00000001,
+ VK_DYNAMIC_STATE_LINE_WIDTH = 0x00000002,
+ VK_DYNAMIC_STATE_DEPTH_BIAS = 0x00000003,
+ VK_DYNAMIC_STATE_BLEND_CONSTANTS = 0x00000004,
+ VK_DYNAMIC_STATE_DEPTH_BOUNDS = 0x00000005,
+ VK_DYNAMIC_STATE_STENCIL_COMPARE_MASK = 0x00000006,
+ VK_DYNAMIC_STATE_STENCIL_WRITE_MASK = 0x00000007,
+ VK_DYNAMIC_STATE_STENCIL_REFERENCE = 0x00000008,
+}
+
+@extension("VK_KHR_surface")
+enum VkPresentModeKHR {
+ VK_PRESENT_MODE_IMMEDIATE_KHR = 0x00000000,
+ VK_PRESENT_MODE_MAILBOX_KHR = 0x00000001,
+ VK_PRESENT_MODE_FIFO_KHR = 0x00000002,
+ VK_PRESENT_MODE_FIFO_RELAXED_KHR = 0x00000003,
+}
+
+@extension("VK_KHR_surface")
+enum VkColorSpaceKHR {
+ VK_COLORSPACE_SRGB_NONLINEAR_KHR = 0x00000000,
+}
+
+@extension("VK_EXT_debug_report")
+enum VkDebugReportObjectTypeEXT {
+ VK_DEBUG_REPORT_OBJECT_TYPE_UNKNOWN_EXT = 0,
+ VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT = 1,
+ VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT = 2,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT = 3,
+ VK_DEBUG_REPORT_OBJECT_TYPE_QUEUE_EXT = 4,
+ VK_DEBUG_REPORT_OBJECT_TYPE_SEMAPHORE_EXT = 5,
+ VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT = 6,
+ VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT = 7,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT = 8,
+ VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT = 9,
+ VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT = 10,
+ VK_DEBUG_REPORT_OBJECT_TYPE_EVENT_EXT = 11,
+ VK_DEBUG_REPORT_OBJECT_TYPE_QUERY_POOL_EXT = 12,
+ VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_VIEW_EXT = 13,
+ VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_VIEW_EXT = 14,
+ VK_DEBUG_REPORT_OBJECT_TYPE_SHADER_MODULE_EXT = 15,
+ VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_CACHE_EXT = 16,
+ VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_LAYOUT_EXT = 17,
+ VK_DEBUG_REPORT_OBJECT_TYPE_RENDER_PASS_EXT = 18,
+ VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_EXT = 19,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_EXT = 20,
+ VK_DEBUG_REPORT_OBJECT_TYPE_SAMPLER_EXT = 21,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_POOL_EXT = 22,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT = 23,
+ VK_DEBUG_REPORT_OBJECT_TYPE_FRAMEBUFFER_EXT = 24,
+ VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_POOL_EXT = 25,
+ VK_DEBUG_REPORT_OBJECT_TYPE_SURFACE_KHR_EXT = 26,
+ VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT = 27,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DEBUG_REPORT_EXT = 28,
+}
+
+@extension("VK_EXT_debug_report")
+enum VkDebugReportErrorEXT {
+ VK_DEBUG_REPORT_ERROR_NONE_EXT = 0,
+ VK_DEBUG_REPORT_ERROR_CALLBACK_REF_EXT = 1,
+}
+
+
+/////////////////
+// Bitfields //
+/////////////////
+
+/// Queue capabilities
+type VkFlags VkQueueFlags
+bitfield VkQueueFlagBits {
+ VK_QUEUE_GRAPHICS_BIT = 0x00000001, /// Queue supports graphics operations
+ VK_QUEUE_COMPUTE_BIT = 0x00000002, /// Queue supports compute operations
+ VK_QUEUE_TRANSFER_BIT = 0x00000004, /// Queue supports transfer operations
+ VK_QUEUE_SPARSE_BINDING_BIT = 0x00000008, /// Queue supports sparse resource memory management operations
+}
+
+/// Memory properties passed into vkAllocateMemory().
+type VkFlags VkMemoryPropertyFlags
+bitfield VkMemoryPropertyFlagBits {
+ VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT = 0x00000001,
+ VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT = 0x00000002,
+ VK_MEMORY_PROPERTY_HOST_COHERENT_BIT = 0x00000004,
+ VK_MEMORY_PROPERTY_HOST_CACHED_BIT = 0x00000008,
+ VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT = 0x00000010,
+}
+
+/// Memory heap flags
+type VkFlags VkMemoryHeapFlags
+bitfield VkMemoryHeapFlagBits {
+ VK_MEMORY_HEAP_DEVICE_LOCAL_BIT = 0x00000001,
+}
+
+/// Access flags
+type VkFlags VkAccessFlags
+bitfield VkAccessFlagBits {
+ VK_ACCESS_INDIRECT_COMMAND_READ_BIT = 0x00000001,
+ VK_ACCESS_INDEX_READ_BIT = 0x00000002,
+ VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT = 0x00000004,
+ VK_ACCESS_UNIFORM_READ_BIT = 0x00000008,
+ VK_ACCESS_INPUT_ATTACHMENT_READ_BIT = 0x00000010,
+ VK_ACCESS_SHADER_READ_BIT = 0x00000020,
+ VK_ACCESS_SHADER_WRITE_BIT = 0x00000040,
+ VK_ACCESS_COLOR_ATTACHMENT_READ_BIT = 0x00000080,
+ VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT = 0x00000100,
+ VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT = 0x00000200,
+ VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT = 0x00000400,
+ VK_ACCESS_TRANSFER_READ_BIT = 0x00000800,
+ VK_ACCESS_TRANSFER_WRITE_BIT = 0x00001000,
+ VK_ACCESS_HOST_READ_BIT = 0x00002000,
+ VK_ACCESS_HOST_WRITE_BIT = 0x00004000,
+ VK_ACCESS_MEMORY_READ_BIT = 0x00008000,
+ VK_ACCESS_MEMORY_WRITE_BIT = 0x00010000,
+}
+
+/// Buffer usage flags
+type VkFlags VkBufferUsageFlags
+bitfield VkBufferUsageFlagBits {
+ VK_BUFFER_USAGE_TRANSFER_SRC_BIT = 0x00000001, /// Can be used as a source of transfer operations
+ VK_BUFFER_USAGE_TRANSFER_DST_BIT = 0x00000002, /// Can be used as a destination of transfer operations
+ VK_BUFFER_USAGE_UNIFORM_TEXEL_BUFFER_BIT = 0x00000004, /// Can be used as TBO
+ VK_BUFFER_USAGE_STORAGE_TEXEL_BUFFER_BIT = 0x00000008, /// Can be used as IBO
+ VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT = 0x00000010, /// Can be used as UBO
+ VK_BUFFER_USAGE_STORAGE_BUFFER_BIT = 0x00000020, /// Can be used as SSBO
+ VK_BUFFER_USAGE_INDEX_BUFFER_BIT = 0x00000040, /// Can be used as source of fixed function index fetch (index buffer)
+ VK_BUFFER_USAGE_VERTEX_BUFFER_BIT = 0x00000080, /// Can be used as source of fixed function vertex fetch (VBO)
+ VK_BUFFER_USAGE_INDIRECT_BUFFER_BIT = 0x00000100, /// Can be the source of indirect parameters (e.g. indirect buffer, parameter buffer)
+}
+
+/// Buffer creation flags
+type VkFlags VkBufferCreateFlags
+bitfield VkBufferCreateFlagBits {
+ VK_BUFFER_CREATE_SPARSE_BINDING_BIT = 0x00000001, /// Buffer should support sparse backing
+ VK_BUFFER_CREATE_SPARSE_RESIDENCY_BIT = 0x00000002, /// Buffer should support sparse backing with partial residency
+ VK_BUFFER_CREATE_SPARSE_ALIASED_BIT = 0x00000004, /// Buffer should support consistent data access to physical memory blocks mapped into multiple locations of sparse buffers
+}
+
+/// Shader stage flags
+type VkFlags VkShaderStageFlags
+bitfield VkShaderStageFlagBits {
+ VK_SHADER_STAGE_VERTEX_BIT = 0x00000001,
+ VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT = 0x00000002,
+ VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT = 0x00000004,
+ VK_SHADER_STAGE_GEOMETRY_BIT = 0x00000008,
+ VK_SHADER_STAGE_FRAGMENT_BIT = 0x00000010,
+ VK_SHADER_STAGE_COMPUTE_BIT = 0x00000020,
+ VK_SHADER_STAGE_ALL_GRAPHICS = 0x0000001F,
+
+ VK_SHADER_STAGE_ALL = 0x7FFFFFFF,
+}
+
+/// Descriptor pool create flags
+type VkFlags VkDescriptorPoolCreateFlags
+bitfield VkDescriptorPoolCreateFlagBits {
+ VK_DESCRIPTOR_POOL_CREATE_FREE_DESCRIPTOR_SET_BIT = 0x00000001,
+}
+
+/// Descriptor pool reset flags
+type VkFlags VkDescriptorPoolResetFlags
+//bitfield VkDescriptorPoolResetFlagBits {
+//}
+
+/// Image usage flags
+type VkFlags VkImageUsageFlags
+bitfield VkImageUsageFlagBits {
+ VK_IMAGE_USAGE_TRANSFER_SRC_BIT = 0x00000001, /// Can be used as a source of transfer operations
+ VK_IMAGE_USAGE_TRANSFER_DST_BIT = 0x00000002, /// Can be used as a destination of transfer operations
+ VK_IMAGE_USAGE_SAMPLED_BIT = 0x00000004, /// Can be sampled from (SAMPLED_IMAGE and COMBINED_IMAGE_SAMPLER descriptor types)
+ VK_IMAGE_USAGE_STORAGE_BIT = 0x00000008, /// Can be used as storage image (STORAGE_IMAGE descriptor type)
+ VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT = 0x00000010, /// Can be used as framebuffer color attachment
+ VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT = 0x00000020, /// Can be used as framebuffer depth/stencil attachment
+ VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT = 0x00000040, /// Image data not needed outside of rendering
+ VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT = 0x00000080, /// Can be used as framebuffer input attachment
+}
+
+/// Image creation flags
+type VkFlags VkImageCreateFlags
+bitfield VkImageCreateFlagBits {
+ VK_IMAGE_CREATE_SPARSE_BINDING_BIT = 0x00000001, /// Image should support sparse backing
+ VK_IMAGE_CREATE_SPARSE_RESIDENCY_BIT = 0x00000002, /// Image should support sparse backing with partial residency
+ VK_IMAGE_CREATE_SPARSE_ALIASED_BIT = 0x00000004, /// Image should support consistent data access to physical memory blocks mapped into multiple locations of sparse images
+ VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT = 0x00000008, /// Allows image views to have different format than the base image
+ VK_IMAGE_CREATE_CUBE_COMPATIBLE_BIT = 0x00000010, /// Allows creating image views with cube type from the created image
+}
+
+/// Image view creation flags
+type VkFlags VkImageViewCreateFlags
+//bitfield VkImageViewCreateFlagBits {
+//}
+
+/// Pipeline creation flags
+type VkFlags VkPipelineCreateFlags
+bitfield VkPipelineCreateFlagBits {
+ VK_PIPELINE_CREATE_DISABLE_OPTIMIZATION_BIT = 0x00000001,
+ VK_PIPELINE_CREATE_ALLOW_DERIVATIVES_BIT = 0x00000002,
+ VK_PIPELINE_CREATE_DERIVATIVE_BIT = 0x00000004,
+}
+
+/// Color component flags
+type VkFlags VkColorComponentFlags
+bitfield VkColorComponentFlagBits {
+ VK_COLOR_COMPONENT_R_BIT = 0x00000001,
+ VK_COLOR_COMPONENT_G_BIT = 0x00000002,
+ VK_COLOR_COMPONENT_B_BIT = 0x00000004,
+ VK_COLOR_COMPONENT_A_BIT = 0x00000008,
+}
+
+/// Fence creation flags
+type VkFlags VkFenceCreateFlags
+bitfield VkFenceCreateFlagBits {
+ VK_FENCE_CREATE_SIGNALED_BIT = 0x00000001,
+}
+
+/// Semaphore creation flags
+type VkFlags VkSemaphoreCreateFlags
+//bitfield VkSemaphoreCreateFlagBits {
+//}
+
+/// Format capability flags
+type VkFlags VkFormatFeatureFlags
+bitfield VkFormatFeatureFlagBits {
+ VK_FORMAT_FEATURE_SAMPLED_IMAGE_BIT = 0x00000001, /// Format can be used for sampled images (SAMPLED_IMAGE and COMBINED_IMAGE_SAMPLER descriptor types)
+ VK_FORMAT_FEATURE_STORAGE_IMAGE_BIT = 0x00000002, /// Format can be used for storage images (STORAGE_IMAGE descriptor type)
+ VK_FORMAT_FEATURE_STORAGE_IMAGE_ATOMIC_BIT = 0x00000004, /// Format supports atomic operations in case it's used for storage images
+ VK_FORMAT_FEATURE_UNIFORM_TEXEL_BUFFER_BIT = 0x00000008, /// Format can be used for uniform texel buffers (TBOs)
+ VK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_BIT = 0x00000010, /// Format can be used for storage texel buffers (IBOs)
+ VK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_ATOMIC_BIT = 0x00000020, /// Format supports atomic operations in case it's used for storage texel buffers
+ VK_FORMAT_FEATURE_VERTEX_BUFFER_BIT = 0x00000040, /// Format can be used for vertex buffers (VBOs)
+ VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BIT = 0x00000080, /// Format can be used for color attachment images
+ VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BLEND_BIT = 0x00000100, /// Format supports blending in case it's used for color attachment images
+ VK_FORMAT_FEATURE_DEPTH_STENCIL_ATTACHMENT_BIT = 0x00000200, /// Format can be used for depth/stencil attachment images
+ VK_FORMAT_FEATURE_BLIT_SRC_BIT = 0x00000400, /// Format can be used as the source image of blits with vkCmdBlitImage
+ VK_FORMAT_FEATURE_BLIT_DST_BIT = 0x00000800, /// Format can be used as the destination image of blits with vkCmdBlitImage
+ VK_FORMAT_FEATURE_SAMPLED_IMAGE_FILTER_LINEAR_BIT = 0x00001000,
+}
+
+/// Query control flags
+type VkFlags VkQueryControlFlags
+bitfield VkQueryControlFlagBits {
+ VK_QUERY_CONTROL_PRECISE_BIT = 0x00000001,
+}
+
+/// Query result flags
+type VkFlags VkQueryResultFlags
+bitfield VkQueryResultFlagBits {
+ VK_QUERY_RESULT_64_BIT = 0x00000001, /// Results of the queries are written to the destination buffer as 64-bit values
+ VK_QUERY_RESULT_WAIT_BIT = 0x00000002, /// Results of the queries are waited on before proceeding with the result copy
+ VK_QUERY_RESULT_WITH_AVAILABILITY_BIT = 0x00000004, /// Besides the results of the query, the availability of the results is also written
+ VK_QUERY_RESULT_PARTIAL_BIT = 0x00000008, /// Copy the partial results of the query even if the final results aren't available
+}
+
+/// Shader module creation flags
+type VkFlags VkShaderModuleCreateFlags
+//bitfield VkShaderModuleCreateFlagBits {
+//}
+
+/// Event creation flags
+type VkFlags VkEventCreateFlags
+//bitfield VkEventCreateFlagBits {
+//}
+
+/// Command buffer usage flags
+type VkFlags VkCommandBufferUsageFlags
+bitfield VkCommandBufferUsageFlagBits {
+ VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT = 0x00000001,
+ VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT = 0x00000002,
+ VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT = 0x00000004,
+}
+
+/// Pipeline statistics flags
+type VkFlags VkQueryPipelineStatisticFlags
+bitfield VkQueryPipelineStatisticFlagBits {
+ VK_QUERY_PIPELINE_STATISTIC_INPUT_ASSEMBLY_VERTICES_BIT = 0x00000001, /// Optional
+ VK_QUERY_PIPELINE_STATISTIC_INPUT_ASSEMBLY_PRIMITIVES_BIT = 0x00000002, /// Optional
+ VK_QUERY_PIPELINE_STATISTIC_VERTEX_SHADER_INVOCATIONS_BIT = 0x00000004, /// Optional
+ VK_QUERY_PIPELINE_STATISTIC_GEOMETRY_SHADER_INVOCATIONS_BIT = 0x00000008, /// Optional
+ VK_QUERY_PIPELINE_STATISTIC_GEOMETRY_SHADER_PRIMITIVES_BIT = 0x00000010, /// Optional
+ VK_QUERY_PIPELINE_STATISTIC_CLIPPING_INVOCATIONS_BIT = 0x00000020, /// Optional
+ VK_QUERY_PIPELINE_STATISTIC_CLIPPING_PRIMITIVES_BIT = 0x00000040, /// Optional
+ VK_QUERY_PIPELINE_STATISTIC_FRAGMENT_SHADER_INVOCATIONS_BIT = 0x00000080, /// Optional
+ VK_QUERY_PIPELINE_STATISTIC_TESSELLATION_CONTROL_SHADER_PATCHES_BIT = 0x00000100, /// Optional
+ VK_QUERY_PIPELINE_STATISTIC_TESSELLATION_EVALUATION_SHADER_INVOCATIONS_BIT = 0x00000200, /// Optional
+ VK_QUERY_PIPELINE_STATISTIC_COMPUTE_SHADER_INVOCATIONS_BIT = 0x00000400, /// Optional
+}
+
+/// Memory mapping flags
+type VkFlags VkMemoryMapFlags
+//bitfield VkMemoryMapFlagBits {
+//}
+
+/// Bitfield of image aspects
+type VkFlags VkImageAspectFlags
+bitfield VkImageAspectFlagBits {
+ VK_IMAGE_ASPECT_COLOR_BIT = 0x00000001,
+ VK_IMAGE_ASPECT_DEPTH_BIT = 0x00000002,
+ VK_IMAGE_ASPECT_STENCIL_BIT = 0x00000004,
+ VK_IMAGE_ASPECT_METADATA_BIT = 0x00000008,
+}
+
+/// Sparse memory bind flags
+type VkFlags VkSparseMemoryBindFlags
+bitfield VkSparseMemoryBindFlagBits {
+ VK_SPARSE_MEMORY_BIND_METADATA_BIT = 0x00000001,
+}
+
+/// Sparse image memory requirements flags
+type VkFlags VkSparseImageFormatFlags
+bitfield VkSparseImageFormatFlagBits {
+ VK_SPARSE_IMAGE_FORMAT_SINGLE_MIPTAIL_BIT = 0x00000001, /// Image uses a single miptail region for all array slices
+ VK_SPARSE_IMAGE_FORMAT_ALIGNED_MIP_SIZE_BIT = 0x00000002, /// Image requires mip levels to be an exact multiple of the sparse image block size for non-mip-tail levels.
+ VK_SPARSE_IMAGE_FORMAT_NONSTANDARD_BLOCK_SIZE_BIT = 0x00000004, /// Image uses a non-standard sparse block size
+}
+
+/// Pipeline stages
+type VkFlags VkPipelineStageFlags
+bitfield VkPipelineStageFlagBits {
+ VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT = 0x00000001, /// Before subsequent commands are processed
+ VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT = 0x00000002, /// Draw/DispatchIndirect command fetch
+ VK_PIPELINE_STAGE_VERTEX_INPUT_BIT = 0x00000004, /// Vertex/index fetch
+ VK_PIPELINE_STAGE_VERTEX_SHADER_BIT = 0x00000008, /// Vertex shading
+ VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT = 0x00000010, /// Tessellation control shading
+ VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT = 0x00000020, /// Tessellation evaluation shading
+ VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT = 0x00000040, /// Geometry shading
+ VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT = 0x00000080, /// Fragment shading
+ VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT = 0x00000100, /// Early fragment (depth/stencil) tests
+ VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT = 0x00000200, /// Late fragment (depth/stencil) tests
+ VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT = 0x00000400, /// Color attachment writes
+ VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT = 0x00000800, /// Compute shading
+ VK_PIPELINE_STAGE_TRANSFER_BIT = 0x00001000, /// Transfer/copy operations
+ VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT = 0x00002000,
+ VK_PIPELINE_STAGE_HOST_BIT = 0x00004000, /// Indicates host (CPU) is a source/sink of the dependency
+
+ VK_PIPELINE_STAGE_ALL_GRAPHICS_BIT = 0x00008000, /// All stages of the graphics pipeline
+ VK_PIPELINE_STAGE_ALL_COMMANDS_BIT = 0x00010000, /// All graphics, compute, copy, and transition commands
+}
+
+/// Render pass attachment description flags
+type VkFlags VkAttachmentDescriptionFlags
+bitfield VkAttachmentDescriptionFlagBits {
+ VK_ATTACHMENT_DESCRIPTION_MAY_ALIAS_BIT = 0x00000001, /// The attachment may alias physical memory of another attachment in the same renderpass
+}
+
+/// Subpass description flags
+type VkFlags VkSubpassDescriptionFlags
+bitfield VkSubpassDescriptionFlagBits {
+}
+
+/// Command pool creation flags
+type VkFlags VkCommandPoolCreateFlags
+bitfield VkCommandPoolCreateFlagBits {
+ VK_COMMAND_POOL_CREATE_TRANSIENT_BIT = 0x00000001, /// Command buffers have a short lifetime
+ VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT = 0x00000002, /// Command buffers may release their memory individually
+}
+
+/// Command pool reset flags
+type VkFlags VkCommandPoolResetFlags
+bitfield VkCommandPoolResetFlagBits {
+ VK_COMMAND_POOL_RESET_RELEASE_RESOURCES_BIT = 0x00000001, /// Release resources owned by the pool
+}
+
+type VkFlags VkCommandBufferResetFlags
+bitfield VkCommandBufferResetFlagBits {
+ VK_COMMAND_BUFFER_RESET_RELEASE_RESOURCES_BIT = 0x00000001, /// Release resources owned by the buffer
+}
+
+type VkFlags VkSampleCountFlags
+bitfield VkSampleCountFlagBits {
+ VK_SAMPLE_COUNT_1_BIT = 0x00000001,
+ VK_SAMPLE_COUNT_2_BIT = 0x00000002,
+ VK_SAMPLE_COUNT_4_BIT = 0x00000004,
+ VK_SAMPLE_COUNT_8_BIT = 0x00000008,
+ VK_SAMPLE_COUNT_16_BIT = 0x00000010,
+ VK_SAMPLE_COUNT_32_BIT = 0x00000020,
+ VK_SAMPLE_COUNT_64_BIT = 0x00000040,
+}
+
+type VkFlags VkStencilFaceFlags
+bitfield VkStencilFaceFlagBits {
+ VK_STENCIL_FACE_FRONT_BIT = 0x00000001, /// Front face
+ VK_STENCIL_FACE_BACK_BIT = 0x00000002, /// Back face
+ VK_STENCIL_FRONT_AND_BACK = 0x00000003,
+}
+
+/// Instance creation flags
+type VkFlags VkInstanceCreateFlags
+//bitfield VkInstanceCreateFlagBits {
+//}
+
+/// Device creation flags
+type VkFlags VkDeviceCreateFlags
+//bitfield VkDeviceCreateFlagBits {
+//}
+
+/// Device queue creation flags
+type VkFlags VkDeviceQueueCreateFlags
+//bitfield VkDeviceQueueCreateFlagBits {
+//}
+
+/// Query pool creation flags
+type VkFlags VkQueryPoolCreateFlags
+//bitfield VkQueryPoolCreateFlagBits {
+//}
+
+/// Buffer view creation flags
+type VkFlags VkBufferViewCreateFlags
+//bitfield VkBufferViewCreateFlagBits {
+//}
+
+/// Pipeline cache creation flags
+type VkFlags VkPipelineCacheCreateFlags
+//bitfield VkPipelineCacheCreateFlagBits {
+//}
+
+/// Pipeline shader stage creation flags
+type VkFlags VkPipelineShaderStageCreateFlags
+//bitfield VkPipelineShaderStageCreateFlagBits {
+//}
+
+/// Descriptor set layout creation flags
+type VkFlags VkDescriptorSetLayoutCreateFlags
+//bitfield VkDescriptorSetLayoutCreateFlagBits {
+//}
+
+/// Pipeline vertex input state creation flags
+type VkFlags VkPipelineVertexInputStateCreateFlags
+//bitfield VkPipelineVertexInputStateCreateFlagBits {
+//}
+
+/// Pipeline input assembly state creation flags
+type VkFlags VkPipelineInputAssemblyStateCreateFlags
+//bitfield VkPipelineInputAssemblyStateCreateFlagBits {
+//}
+
+/// Tessellation state creation flags
+type VkFlags VkPipelineTessellationStateCreateFlags
+//bitfield VkPipelineTessellationStateCreateFlagBits {
+//}
+
+/// Viewport state creation flags
+type VkFlags VkPipelineViewportStateCreateFlags
+//bitfield VkPipelineViewportStateCreateFlagBits {
+//}
+
+/// Rasterization state creation flags
+type VkFlags VkPipelineRasterizationStateCreateFlags
+//bitfield VkPipelineRasterizationStateCreateFlagBits {
+//}
+
+/// Multisample state creation flags
+type VkFlags VkPipelineMultisampleStateCreateFlags
+//bitfield VkPipelineMultisampleStateCreateFlagBits {
+//}
+
+/// Color blend state creation flags
+type VkFlags VkPipelineColorBlendStateCreateFlags
+//bitfield VkPipelineColorBlendStateCreateFlagBits {
+//}
+
+/// Depth/stencil state creation flags
+type VkFlags VkPipelineDepthStencilStateCreateFlags
+//bitfield VkPipelineDepthStencilStateCreateFlagBits {
+//}
+
+/// Dynamic state creation flags
+type VkFlags VkPipelineDynamicStateCreateFlags
+//bitfield VkPipelineDynamicStateCreateFlagBits {
+//}
+
+/// Pipeline layout creation flags
+type VkFlags VkPipelineLayoutCreateFlags
+//bitfield VkPipelineLayoutCreateFlagBits {
+//}
+
+/// Sampler creation flags
+type VkFlags VkSamplerCreateFlags
+//bitfield VkSamplerCreateFlagBits {
+//}
+
+/// Render pass creation flags
+type VkFlags VkRenderPassCreateFlags
+//bitfield VkRenderPassCreateFlagBits {
+//}
+
+/// Framebuffer creation flags
+type VkFlags VkFramebufferCreateFlags
+//bitfield VkFramebufferCreateFlagBits {
+//}
+
+/// Dependency flags
+type VkFlags VkDependencyFlags
+bitfield VkDependencyFlagBits {
+ VK_DEPENDENCY_BY_REGION_BIT = 0x00000001,
+}
+
+/// Cull mode flags
+type VkFlags VkCullModeFlags
+bitfield VkCullModeFlagBits {
+ VK_CULL_MODE_NONE = 0x00000000,
+ VK_CULL_MODE_FRONT_BIT = 0x00000001,
+ VK_CULL_MODE_BACK_BIT = 0x00000002,
+ VK_CULL_MODE_FRONT_AND_BACK = 0x00000003,
+}
+
+@extension("VK_KHR_surface")
+type VkFlags VkSurfaceTransformFlagsKHR
+@extension("VK_KHR_surface")
+bitfield VkSurfaceTransformFlagBitsKHR {
+ VK_SURFACE_TRANSFORM_IDENTITY_BIT_KHR = 0x00000001,
+ VK_SURFACE_TRANSFORM_ROTATE_90_BIT_KHR = 0x00000002,
+ VK_SURFACE_TRANSFORM_ROTATE_180_BIT_KHR = 0x00000004,
+ VK_SURFACE_TRANSFORM_ROTATE_270_BIT_KHR = 0x00000008,
+ VK_SURFACE_TRANSFORM_HORIZONTAL_MIRROR_BIT_KHR = 0x00000010,
+ VK_SURFACE_TRANSFORM_HORIZONTAL_MIRROR_ROTATE_90_BIT_KHR = 0x00000020,
+ VK_SURFACE_TRANSFORM_HORIZONTAL_MIRROR_ROTATE_180_BIT_KHR = 0x00000040,
+ VK_SURFACE_TRANSFORM_HORIZONTAL_MIRROR_ROTATE_270_BIT_KHR = 0x00000080,
+ VK_SURFACE_TRANSFORM_INHERIT_BIT_KHR = 0x00000100,
+}
+
+@extension("VK_KHR_surface")
+type VkFlags VkCompositeAlphaFlagsKHR
+@extension("VK_KHR_surface")
+bitfield VkCompositeAlphaFlagBitsKHR {
+ VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR = 0x00000001,
+ VK_COMPOSITE_ALPHA_PRE_MULTIPLIED_BIT_KHR = 0x00000002,
+ VK_COMPOSITE_ALPHA_POST_MULTIPLIED_BIT_KHR = 0x00000004,
+ VK_COMPOSITE_ALPHA_INHERIT_BIT_KHR = 0x00000008,
+}
+
+@extension("VK_KHR_swapchain")
+type VkFlags VkSwapchainCreateFlagsKHR
+//@extension("VK_KHR_swapchain")
+//bitfield VkSwapchainCreateFlagBitsKHR {
+//}
+
+@extension("VK_KHR_display")
+type VkFlags VkDisplayPlaneAlphaFlagsKHR
+@extension("VK_KHR_display")
+bitfield VkDisplayPlaneAlphaFlagBitsKHR {
+ VK_DISPLAY_PLANE_ALPHA_OPAQUE_BIT_KHR = 0x00000001,
+ VK_DISPLAY_PLANE_ALPHA_GLOBAL_BIT_KHR = 0x00000002,
+ VK_DISPLAY_PLANE_ALPHA_PER_PIXEL_BIT_KHR = 0x00000004,
+ VK_DISPLAY_PLANE_ALPHA_PER_PIXEL_PREMULTIPLIED_BIT_KHR = 0x00000008,
+}
+
+@extension("VK_KHR_display")
+type VkFlags VkDisplaySurfaceCreateFlagsKHR
+//@extension("VK_KHR_display")
+//bitfield VkDisplaySurfaceCreateFlagBitsKHR {
+//}
+
+@extension("VK_KHR_display")
+type VkFlags VkDisplayModeCreateFlagsKHR
+//@extension("VK_KHR_display")
+//bitfield VkDisplayModeCreateFlagBitsKHR {
+//}
+
+@extension("VK_KHR_xlib_surface")
+type VkFlags VkXlibSurfaceCreateFlagsKHR
+//@extension("VK_KHR_xlib_surface")
+//bitfield VkXlibSurfaceCreateFlagBitsKHR {
+//}
+
+@extension("VK_KHR_xcb_surface")
+type VkFlags VkXcbSurfaceCreateFlagsKHR
+//@extension("VK_KHR_xcb_surface")
+//bitfield VkXcbSurfaceCreateFlagBitsKHR {
+//}
+
+@extension("VK_KHR_wayland_surface")
+type VkFlags VkWaylandSurfaceCreateFlagsKHR
+//@extension("VK_KHR_wayland_surface")
+//bitfield VkWaylandSurfaceCreateFlagBitsKHR {
+//}
+
+@extension("VK_KHR_mir_surface")
+type VkFlags VkMirSurfaceCreateFlagsKHR
+//@extension("VK_KHR_mir_surface")
+//bitfield VkMirSurfaceCreateFlagBitsKHR {
+//}
+
+@extension("VK_KHR_android_surface")
+type VkFlags VkAndroidSurfaceCreateFlagsKHR
+//@extension("VK_KHR_android_surface")
+//bitfield VkAndroidSurfaceCreateFlagBitsKHR {
+//}
+
+@extension("VK_KHR_win32_surface")
+type VkFlags VkWin32SurfaceCreateFlagsKHR
+//@extension("VK_KHR_win32_surface")
+//bitfield VkWin32SurfaceCreateFlagBitsKHR {
+//}
+
+@extension("VK_EXT_debug_report")
+type VkFlags VkDebugReportFlagsEXT
+@extension("VK_EXT_debug_report")
+bitfield VkDebugReportFlagBitsEXT {
+ VK_DEBUG_REPORT_INFO_BIT_EXT = 0x00000001,
+ VK_DEBUG_REPORT_WARN_BIT_EXT = 0x00000002,
+ VK_DEBUG_REPORT_PERF_WARN_BIT_EXT = 0x00000004,
+ VK_DEBUG_REPORT_ERROR_BIT_EXT = 0x00000008,
+ VK_DEBUG_REPORT_DEBUG_BIT_EXT = 0x00000010,
+}
+
+
+//////////////////
+// Structures //
+//////////////////
+
+class VkOffset2D {
+ s32 x
+ s32 y
+}
+
+class VkOffset3D {
+ s32 x
+ s32 y
+ s32 z
+}
+
+class VkExtent2D {
+ u32 width
+ u32 height
+}
+
+class VkExtent3D {
+ u32 width
+ u32 height
+ u32 depth
+}
+
+class VkViewport {
+ f32 x
+ f32 y
+ f32 width
+ f32 height
+ f32 minDepth
+ f32 maxDepth
+}
+
+class VkRect2D {
+ VkOffset2D offset
+ VkExtent2D extent
+}
+
+class VkClearRect {
+ VkRect2D rect
+ u32 baseArrayLayer
+ u32 layerCount
+}
+
+class VkComponentMapping {
+ VkComponentSwizzle r
+ VkComponentSwizzle g
+ VkComponentSwizzle b
+ VkComponentSwizzle a
+}
+
+class VkPhysicalDeviceProperties {
+ u32 apiVersion
+ u32 driverVersion
+ u32 vendorID
+ u32 deviceID
+ VkPhysicalDeviceType deviceType
+ char[VK_MAX_PHYSICAL_DEVICE_NAME_SIZE] deviceName
+ u8[VK_UUID_SIZE] pipelineCacheUUID
+ VkPhysicalDeviceLimits limits
+ VkPhysicalDeviceSparseProperties sparseProperties
+}
+
+class VkExtensionProperties {
+ char[VK_MAX_EXTENSION_NAME_SIZE] extensionName /// extension name
+ u32 specVersion /// version of the extension specification implemented
+}
+
+class VkLayerProperties {
+ char[VK_MAX_EXTENSION_NAME_SIZE] layerName /// layer name
+ u32 specVersion /// version of the layer specification implemented
+ u32 implementationVersion /// build or release version of the layer's library
+ char[VK_MAX_DESCRIPTION_SIZE] description /// Free-form description of the layer
+}
+
+class VkSubmitInfo {
+ VkStructureType sType /// Type of structure. Should be VK_STRUCTURE_TYPE_SUBMIT_INFO
+ const void* pNext /// Next structure in chain
+ u32 waitSemaphoreCount
+ const VkSemaphore* pWaitSemaphores
+ const VkPipelineStageFlags* pWaitDstStageMask
+ u32 commandBufferCount
+ const VkCommandBuffer* pCommandBuffers
+ u32 signalSemaphoreCount
+ const VkSemaphore* pSignalSemaphores
+}
+
+class VkApplicationInfo {
+ VkStructureType sType /// Type of structure. Should be VK_STRUCTURE_TYPE_APPLICATION_INFO
+ const void* pNext /// Next structure in chain
+ const char* pApplicationName
+ u32 applicationVersion
+ const char* pEngineName
+ u32 engineVersion
+ u32 apiVersion
+}
+
+class VkAllocationCallbacks {
+ void* pUserData
+ PFN_vkAllocationFunction pfnAllocation
+ PFN_vkReallocationFunction pfnReallocation
+ PFN_vkFreeFunction pfnFree
+ PFN_vkInternalAllocationNotification pfnInternalAllocation
+ PFN_vkInternalFreeNotification pfnInternalFree
+}
+
+class VkDeviceQueueCreateInfo {
+ VkStructureType sType /// Should be VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkDeviceQueueCreateFlags flags
+ u32 queueFamilyIndex
+ u32 queueCount
+ const f32* pQueuePriorities
+}
+
+class VkDeviceCreateInfo {
+ VkStructureType sType /// Should be VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkDeviceCreateFlags flags
+ u32 queueCreateInfoCount
+ const VkDeviceQueueCreateInfo* pQueueCreateInfos
+ u32 enabledLayerCount
+ const char* const* ppEnabledLayerNames /// Ordered list of layer names to be enabled
+ u32 enabledExtensionCount
+ const char* const* ppEnabledExtensionNames
+ const VkPhysicalDeviceFeatures* pEnabledFeatures
+}
+
+class VkInstanceCreateInfo {
+ VkStructureType sType /// Should be VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkInstanceCreateFlags flags
+ const VkApplicationInfo* pApplicationInfo
+ u32 enabledLayerCount
+ const char* const* ppEnabledLayerNames /// Ordered list of layer names to be enabled
+ u32 enabledExtensionCount
+ const char* const* ppEnabledExtensionNames /// Extension names to be enabled
+}
+
+class VkQueueFamilyProperties {
+ VkQueueFlags queueFlags /// Queue flags
+ u32 queueCount
+ u32 timestampValidBits
+ VkExtent3D minImageTransferGranularity
+}
+
+class VkPhysicalDeviceMemoryProperties {
+ u32 memoryTypeCount
+ VkMemoryType[VK_MAX_MEMORY_TYPES] memoryTypes
+ u32 memoryHeapCount
+ VkMemoryHeap[VK_MAX_MEMORY_HEAPS] memoryHeaps
+}
+
+class VkMemoryAllocateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkDeviceSize allocationSize /// Size of memory allocation
+ u32 memoryTypeIndex /// Index of the memory type to allocate from
+}
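+
+// A minimal sketch (illustrative only, not part of the spec) of how
+// memoryTypeIndex is usually chosen: intersect
+// VkMemoryRequirements::memoryTypeBits with the desired
+// VkMemoryPropertyFlags over VkPhysicalDeviceMemoryProperties, e.g. in C:
+//   for (uint32_t i = 0; i < memProps.memoryTypeCount; ++i)
+//       if ((reqs.memoryTypeBits & (1u << i)) &&
+//           (memProps.memoryTypes[i].propertyFlags & wanted) == wanted)
+//           { index = i; break; }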
+
+class VkMemoryRequirements {
+ VkDeviceSize size /// Specified in bytes
+ VkDeviceSize alignment /// Specified in bytes
+ u32 memoryTypeBits /// Bitfield of the allowed memory type indices into memoryTypes[] for this object
+}
+
+class VkSparseImageFormatProperties {
+ VkImageAspectFlagBits aspectMask
+ VkExtent3D imageGranularity
+ VkSparseImageFormatFlags flags
+}
+
+class VkSparseImageMemoryRequirements {
+ VkSparseImageFormatProperties formatProperties
+ u32 imageMipTailFirstLod
+ VkDeviceSize imageMipTailSize /// Specified in bytes, must be a multiple of image block size / alignment
+ VkDeviceSize imageMipTailOffset /// Specified in bytes, must be a multiple of image block size / alignment
+ VkDeviceSize imageMipTailStride /// Specified in bytes, must be a multiple of image block size / alignment
+}
+
+class VkMemoryType {
+ VkMemoryPropertyFlags propertyFlags /// Memory properties of this memory type
+ u32 heapIndex /// Index of the memory heap allocations of this memory type are taken from
+}
+
+class VkMemoryHeap {
+ VkDeviceSize size /// Available memory in the heap
+ VkMemoryHeapFlags flags /// Flags for the heap
+}
+
+class VkMappedMemoryRange {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE
+ const void* pNext /// Pointer to next structure
+ VkDeviceMemory memory /// Mapped memory object
+ VkDeviceSize offset /// Offset within the mapped memory the range starts from
+ VkDeviceSize size /// Size of the range within the mapped memory
+}
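+
+// Illustrative note (beyond the declaration above): for memory types without
+// VK_MEMORY_PROPERTY_HOST_COHERENT_BIT, host writes become visible to the
+// device only after the mapped range is flushed, e.g.:
+//   VkMappedMemoryRange range = { VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE,
+//                                 NULL, memory, 0, VK_WHOLE_SIZE };
+//   vkFlushMappedMemoryRanges(device, 1, &range);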
+
+class VkFormatProperties {
+ VkFormatFeatureFlags linearTilingFeatures /// Format features in case of linear tiling
+ VkFormatFeatureFlags optimalTilingFeatures /// Format features in case of optimal tiling
+ VkFormatFeatureFlags bufferFeatures /// Format features supported by buffers
+}
+
+class VkImageFormatProperties {
+ VkExtent3D maxExtent /// max image dimensions for this resource type
+ u32 maxMipLevels /// max number of mipmap levels for this resource type
+ u32 maxArrayLayers /// max array layers for this resource type
+ VkSampleCountFlags sampleCounts /// supported sample counts for this resource type
+ VkDeviceSize maxResourceSize /// max size (in bytes) of this resource type
+}
+
+class VkDescriptorImageInfo {
+ VkSampler sampler
+ VkImageView imageView
+ VkImageLayout imageLayout
+}
+
+class VkDescriptorBufferInfo {
+ VkBuffer buffer /// Buffer used for this descriptor when the descriptor is UNIFORM_BUFFER[_DYNAMIC]
+ VkDeviceSize offset /// Base offset from buffer start in bytes to update in the descriptor set.
+ VkDeviceSize range /// Size in bytes of the buffer resource for this descriptor update.
+}
+
+class VkWriteDescriptorSet {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET
+ const void* pNext /// Pointer to next structure
+ VkDescriptorSet dstSet /// Destination descriptor set
+ u32 dstBinding /// Binding within the destination descriptor set to write
+ u32 dstArrayElement /// Array element within the destination binding to write
+ u32 descriptorCount /// Number of descriptors to write (determines the size of the pImageInfo, pBufferInfo, or pTexelBufferView array)
+ VkDescriptorType descriptorType /// Descriptor type to write (determines which of pImageInfo, pBufferInfo, and pTexelBufferView is used)
+ const VkDescriptorImageInfo* pImageInfo
+ const VkDescriptorBufferInfo* pBufferInfo
+ const VkBufferView* pTexelBufferView
+}
+
+class VkCopyDescriptorSet {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_COPY_DESCRIPTOR_SET
+ const void* pNext /// Pointer to next structure
+ VkDescriptorSet srcSet /// Source descriptor set
+ u32 srcBinding /// Binding within the source descriptor set to copy from
+ u32 srcArrayElement /// Array element within the source binding to copy from
+ VkDescriptorSet dstSet /// Destination descriptor set
+ u32 dstBinding /// Binding within the destination descriptor set to copy to
+ u32 dstArrayElement /// Array element within the destination binding to copy to
+ u32 descriptorCount /// Number of descriptors to copy
+}
+
+class VkBufferCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO
+ const void* pNext /// Pointer to next structure.
+ VkBufferCreateFlags flags /// Buffer creation flags
+ VkDeviceSize size /// Specified in bytes
+ VkBufferUsageFlags usage /// Buffer usage flags
+ VkSharingMode sharingMode
+ u32 queueFamilyIndexCount
+ const u32* pQueueFamilyIndices
+}
+
+class VkBufferViewCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO
+ const void* pNext /// Pointer to next structure.
+ VkBufferViewCreateFlags flags
+ VkBuffer buffer
+ VkFormat format /// Optionally specifies format of elements
+ VkDeviceSize offset /// Specified in bytes
+ VkDeviceSize range /// View size specified in bytes
+}
+
+class VkImageSubresource {
+ VkImageAspectFlagBits aspectMask
+ u32 mipLevel
+ u32 arrayLayer
+}
+
+class VkImageSubresourceRange {
+ VkImageAspectFlags aspectMask
+ u32 baseMipLevel
+ u32 levelCount
+ u32 baseArrayLayer
+ u32 layerCount
+}
+
+class VkMemoryBarrier {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_MEMORY_BARRIER
+ const void* pNext /// Pointer to next structure.
+ VkAccessFlags srcAccessMask
+ VkAccessFlags dstAccessMask
+}
+
+class VkBufferMemoryBarrier {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER
+ const void* pNext /// Pointer to next structure.
+ VkAccessFlags srcAccessMask
+ VkAccessFlags dstAccessMask
+ u32 srcQueueFamilyIndex /// Queue family to transition ownership from
+ u32 dstQueueFamilyIndex /// Queue family to transition ownership to
+ VkBuffer buffer /// Buffer to sync
+ VkDeviceSize offset /// Offset within the buffer to sync
+ VkDeviceSize size /// Amount of bytes to sync
+}
+
+class VkImageMemoryBarrier {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER
+ const void* pNext /// Pointer to next structure.
+ VkAccessFlags srcAccessMask
+ VkAccessFlags dstAccessMask
+ VkImageLayout oldLayout /// Current layout of the image
+ VkImageLayout newLayout /// New layout to transition the image to
+ u32 srcQueueFamilyIndex /// Queue family to transition ownership from
+ u32 dstQueueFamilyIndex /// Queue family to transition ownership to
+ VkImage image /// Image to sync
+ VkImageSubresourceRange subresourceRange /// Subresource range to sync
+}
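+
+// A common use of VkImageMemoryBarrier (illustrative sketch, not normative):
+// transitioning a freshly created image for transfer writes might set
+//   oldLayout = VK_IMAGE_LAYOUT_UNDEFINED,
+//   newLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
+//   srcAccessMask = 0, dstAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
+//   srcQueueFamilyIndex = dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
+// recorded with vkCmdPipelineBarrier between the TOP_OF_PIPE and TRANSFER
+// pipeline stages.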
+
+class VkImageCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO
+ const void* pNext /// Pointer to next structure.
+ VkImageCreateFlags flags /// Image creation flags
+ VkImageType imageType
+ VkFormat format
+ VkExtent3D extent
+ u32 mipLevels
+ u32 arrayLayers
+ VkSampleCountFlagBits samples
+ VkImageTiling tiling
+ VkImageUsageFlags usage /// Image usage flags
+ VkSharingMode sharingMode /// Cross-queue-family sharing mode
+ u32 queueFamilyIndexCount /// Number of queue families to share across
+ const u32* pQueueFamilyIndices /// Array of queue family indices to share across
+ VkImageLayout initialLayout /// Initial image layout for all subresources
+}
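As a hedged aside on `mipLevels`: for a full mip chain, the level count is `floor(log2(max(width, height, depth))) + 1`. A minimal sketch of that computation (plain C, no Vulkan headers assumed):

```c
#include <stdint.h>
#include <assert.h>

/* Number of mip levels in a full mip chain for an image of the given
 * extent: floor(log2(max(w, h, d))) + 1. */
uint32_t full_mip_levels(uint32_t w, uint32_t h, uint32_t d) {
    uint32_t m = w > h ? w : h;
    if (d > m) m = d;
    uint32_t levels = 1;
    while (m > 1) { m >>= 1; ++levels; }
    return levels;
}
```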
+
+class VkSubresourceLayout {
+ VkDeviceSize offset /// Specified in bytes
+ VkDeviceSize size /// Specified in bytes
+ VkDeviceSize rowPitch /// Specified in bytes
+ VkDeviceSize arrayPitch /// Specified in bytes
+ VkDeviceSize depthPitch /// Specified in bytes
+}
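A sketch of how these pitches are used: for a linear-tiled subresource, the byte address of a texel is derived from `offset` plus the row, depth, and array pitches. The struct below mirrors the fields above; `texel_size` is an assumed per-format byte size, not something the layout itself carries.

```c
#include <stdint.h>
#include <assert.h>

/* Mirrors VkSubresourceLayout above; all values in bytes. */
typedef struct {
    uint64_t offset;
    uint64_t size;
    uint64_t rowPitch;
    uint64_t arrayPitch;
    uint64_t depthPitch;
} SubresourceLayout;

/* Byte offset of texel (x, y, z) in array layer `layer`, for an
 * uncompressed format whose texel occupies `texel_size` bytes. */
uint64_t texel_offset(const SubresourceLayout *l,
                      uint32_t x, uint32_t y, uint32_t z,
                      uint32_t layer, uint64_t texel_size) {
    return l->offset
         + (uint64_t)layer * l->arrayPitch
         + (uint64_t)z * l->depthPitch
         + (uint64_t)y * l->rowPitch
         + (uint64_t)x * texel_size;
}
```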
+
+class VkImageViewCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkImageViewCreateFlags flags
+ VkImage image
+ VkImageViewType viewType
+ VkFormat format
+ VkComponentMapping components
+ VkImageSubresourceRange subresourceRange
+}
+
+class VkBufferCopy {
+ VkDeviceSize srcOffset /// Specified in bytes
+ VkDeviceSize dstOffset /// Specified in bytes
+ VkDeviceSize size /// Specified in bytes
+}
+
+class VkSparseMemoryBind {
+ VkDeviceSize resourceOffset /// Specified in bytes
+ VkDeviceSize size /// Specified in bytes
+ VkDeviceMemory memory
+ VkDeviceSize memoryOffset /// Specified in bytes
+ VkSparseMemoryBindFlags flags
+}
+
+class VkSparseImageMemoryBind {
+ VkImageSubresource subresource
+ VkOffset3D offset
+ VkExtent3D extent
+ VkDeviceMemory memory
+ VkDeviceSize memoryOffset /// Specified in bytes
+ VkSparseMemoryBindFlags flags
+}
+
+class VkSparseBufferMemoryBindInfo {
+ VkBuffer buffer
+ u32 bindCount
+ const VkSparseMemoryBind* pBinds
+}
+
+class VkSparseImageOpaqueMemoryBindInfo {
+ VkImage image
+ u32 bindCount
+ const VkSparseMemoryBind* pBinds
+}
+
+class VkSparseImageMemoryBindInfo {
+ VkImage image
+ u32 bindCount
+ const VkSparseImageMemoryBind* pBinds
+}
+
+class VkBindSparseInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_BIND_SPARSE_INFO
+ const void* pNext
+ u32 waitSemaphoreCount
+ const VkSemaphore* pWaitSemaphores
+ u32 bufferBindCount
+ const VkSparseBufferMemoryBindInfo* pBufferBinds
+ u32 imageOpaqueBindCount
+ const VkSparseImageOpaqueMemoryBindInfo* pImageOpaqueBinds
+ u32 imageBindCount
+ const VkSparseImageMemoryBindInfo* pImageBinds
+ u32 signalSemaphoreCount
+ const VkSemaphore* pSignalSemaphores
+}
+
+class VkImageSubresourceLayers {
+ VkImageAspectFlags aspectMask
+ u32 mipLevel
+ u32 baseArrayLayer
+ u32 layerCount
+}
+
+class VkImageCopy {
+ VkImageSubresourceLayers srcSubresource
+ VkOffset3D srcOffset /// Specified in pixels for both compressed and uncompressed images
+ VkImageSubresourceLayers dstSubresource
+ VkOffset3D dstOffset /// Specified in pixels for both compressed and uncompressed images
+ VkExtent3D extent /// Specified in pixels for both compressed and uncompressed images
+}
+
+class VkImageBlit {
+ VkImageSubresourceLayers srcSubresource
+ VkOffset3D[2] srcOffsets
+ VkImageSubresourceLayers dstSubresource
+ VkOffset3D[2] dstOffsets
+}
+
+class VkBufferImageCopy {
+ VkDeviceSize bufferOffset /// Specified in bytes
+ u32 bufferRowLength /// Specified in texels
+ u32 bufferImageHeight /// Specified in texels
+ VkImageSubresourceLayers imageSubresource
+ VkOffset3D imageOffset /// Specified in pixels for both compressed and uncompressed images
+ VkExtent3D imageExtent /// Specified in pixels for both compressed and uncompressed images
+}
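A sketch of the buffer addressing these fields imply: `bufferRowLength` and `bufferImageHeight` of zero mean the buffer data is tightly packed, so the copy's `imageExtent` dimensions are used instead. The helper below is a hedged illustration of that rule for uncompressed formats, not a Vulkan API call:

```c
#include <stdint.h>
#include <assert.h>

/* A value of zero means "tightly packed": use the copy's imageExtent
 * dimension instead. */
static uint32_t effective(uint32_t value, uint32_t extent) {
    return value != 0 ? value : extent;
}

/* Buffer byte offset of texel (x, y, z) for an uncompressed format of
 * `texel_size` bytes, given a VkBufferImageCopy-style region. */
uint64_t buffer_texel_offset(uint64_t bufferOffset,
                             uint32_t bufferRowLength,
                             uint32_t bufferImageHeight,
                             uint32_t extentWidth, uint32_t extentHeight,
                             uint32_t x, uint32_t y, uint32_t z,
                             uint64_t texel_size) {
    uint64_t rowLen = effective(bufferRowLength, extentWidth);
    uint64_t imgH   = effective(bufferImageHeight, extentHeight);
    return bufferOffset
         + (((uint64_t)z * imgH + y) * rowLen + x) * texel_size;
}
```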
+
+class VkImageResolve {
+ VkImageSubresourceLayers srcSubresource
+ VkOffset3D srcOffset
+ VkImageSubresourceLayers dstSubresource
+ VkOffset3D dstOffset
+ VkExtent3D extent
+}
+
+class VkShaderModuleCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkShaderModuleCreateFlags flags /// Reserved
+ platform.size_t codeSize /// Specified in bytes
+ const u32* pCode /// Binary code of size codeSize
+}
+
+class VkDescriptorSetLayoutBinding {
+ u32 binding
+ VkDescriptorType descriptorType /// Type of the descriptors in this binding
+ u32 descriptorCount /// Number of descriptors in this binding
+ VkShaderStageFlags stageFlags /// Shader stages this binding is visible to
+ const VkSampler* pImmutableSamplers /// Immutable samplers (used if descriptor type is SAMPLER or COMBINED_IMAGE_SAMPLER, is either NULL or contains descriptorCount elements)
+}
+
+class VkDescriptorSetLayoutCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkDescriptorSetLayoutCreateFlags flags
+ u32 bindingCount /// Number of bindings in the descriptor set layout
+ const VkDescriptorSetLayoutBinding* pBindings /// Array of descriptor set layout bindings
+}
+
+class VkDescriptorPoolSize {
+ VkDescriptorType type
+ u32 descriptorCount
+}
+
+class VkDescriptorPoolCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkDescriptorPoolCreateFlags flags
+ u32 maxSets
+ u32 poolSizeCount
+ const VkDescriptorPoolSize* pPoolSizes
+}
+
+class VkDescriptorSetAllocateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkDescriptorPool descriptorPool
+ u32 descriptorSetCount /// Number of descriptor sets to allocate
+ const VkDescriptorSetLayout* pSetLayouts
+}
+
+class VkSpecializationMapEntry {
+ u32 constantID /// The SpecConstant ID specified in the SPIR-V module
+ u32 offset /// Offset of the value in the data block
+ platform.size_t size /// Size in bytes of the SpecConstant
+}
+
+class VkSpecializationInfo {
+ u32 mapEntryCount /// Number of entries in the map
+ const VkSpecializationMapEntry* pMapEntries /// Array of map entries
+ platform.size_t dataSize /// Size in bytes of pData
+ const void* pData /// Pointer to SpecConstant data
+}
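The map entries reference byte ranges inside the `pData` blob: each constant lives at `offset` and spans `size` bytes. A hedged sketch of how a consumer would read one constant back out (local mirror types, not the Vulkan structs themselves):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <assert.h>

/* Mirrors VkSpecializationMapEntry above. */
typedef struct {
    uint32_t constantID;
    uint32_t offset;   /* byte offset into the data blob */
    size_t   size;     /* byte size of the constant */
} SpecMapEntry;

/* Read one 32-bit specialization constant out of the data blob, as a
 * consumer would when patching a shader module's spec constants. */
uint32_t read_spec_u32(const SpecMapEntry *entry, const void *pData) {
    uint32_t v;
    memcpy(&v, (const uint8_t *)pData + entry->offset, sizeof v);
    return v;
}
```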
+
+class VkPipelineShaderStageCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkPipelineShaderStageCreateFlags flags
+ VkShaderStageFlagBits stage
+ VkShaderModule module
+ const char* pName
+ const VkSpecializationInfo* pSpecializationInfo
+}
+
+class VkComputePipelineCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_COMPUTE_PIPELINE_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkPipelineCreateFlags flags /// Pipeline creation flags
+ VkPipelineShaderStageCreateInfo stage
+ VkPipelineLayout layout /// Interface layout of the pipeline
+ VkPipeline basePipelineHandle /// If VK_PIPELINE_CREATE_DERIVATIVE_BIT is set and this value is nonzero, it specifies the handle of the base pipeline this is a derivative of
+ s32 basePipelineIndex /// If VK_PIPELINE_CREATE_DERIVATIVE_BIT is set and this value is not -1, it specifies an index into pCreateInfos of the base pipeline this is a derivative of
+}
+
+class VkVertexInputBindingDescription {
+ u32 binding /// Vertex buffer binding id
+ u32 stride /// Distance between vertices in bytes (0 = no advancement)
+ VkVertexInputRate inputRate /// Rate at which binding is incremented
+}
+
+class VkVertexInputAttributeDescription {
+ u32 location /// location of the shader vertex attrib
+ u32 binding /// Vertex buffer binding id
+ VkFormat format /// format of source data
+ u32 offset /// Offset of first element in bytes from base of vertex
+}
+
+class VkPipelineVertexInputStateCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkPipelineVertexInputStateCreateFlags flags
+ u32 vertexBindingDescriptionCount /// number of bindings
+ const VkVertexInputBindingDescription* pVertexBindingDescriptions
+ u32 vertexAttributeDescriptionCount /// number of attributes
+ const VkVertexInputAttributeDescription* pVertexAttributeDescriptions
+}
+
+class VkPipelineInputAssemblyStateCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkPipelineInputAssemblyStateCreateFlags flags
+ VkPrimitiveTopology topology
+ VkBool32 primitiveRestartEnable
+}
+
+class VkPipelineTessellationStateCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_PIPELINE_TESSELLATION_STATE_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkPipelineTessellationStateCreateFlags flags
+ u32 patchControlPoints
+}
+
+class VkPipelineViewportStateCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_PIPELINE_VIEWPORT_STATE_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkPipelineViewportStateCreateFlags flags
+ u32 viewportCount
+ const VkViewport* pViewports
+ u32 scissorCount
+ const VkRect2D* pScissors
+}
+
+class VkPipelineRasterizationStateCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkPipelineRasterizationStateCreateFlags flags
+ VkBool32 depthClampEnable
+ VkBool32 rasterizerDiscardEnable
+ VkPolygonMode polygonMode /// optional (GL45)
+ VkCullModeFlags cullMode
+ VkFrontFace frontFace
+ VkBool32 depthBiasEnable
+ f32 depthBiasConstantFactor
+ f32 depthBiasClamp
+ f32 depthBiasSlopeFactor
+ f32 lineWidth
+}
+
+class VkPipelineMultisampleStateCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkPipelineMultisampleStateCreateFlags flags
+ VkSampleCountFlagBits rasterizationSamples /// Number of samples used for rasterization
+ VkBool32 sampleShadingEnable /// optional (GL45)
+ f32 minSampleShading /// optional (GL45)
+ const VkSampleMask* pSampleMask
+ VkBool32 alphaToCoverageEnable
+ VkBool32 alphaToOneEnable
+}
+
+class VkPipelineColorBlendAttachmentState {
+ VkBool32 blendEnable
+ VkBlendFactor srcColorBlendFactor
+ VkBlendFactor dstColorBlendFactor
+ VkBlendOp colorBlendOp
+ VkBlendFactor srcAlphaBlendFactor
+ VkBlendFactor dstAlphaBlendFactor
+ VkBlendOp alphaBlendOp
+ VkColorComponentFlags colorWriteMask
+}
+
+class VkPipelineColorBlendStateCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_PIPELINE_COLOR_BLEND_STATE_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkPipelineColorBlendStateCreateFlags flags
+ VkBool32 logicOpEnable
+ VkLogicOp logicOp
+ u32 attachmentCount /// # of pAttachments
+ const VkPipelineColorBlendAttachmentState* pAttachments
+ f32[4] blendConstants
+}
+
+class VkStencilOpState {
+ VkStencilOp failOp
+ VkStencilOp passOp
+ VkStencilOp depthFailOp
+ VkCompareOp compareOp
+ u32 compareMask
+ u32 writeMask
+ u32 reference
+}
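A sketch of how `compareMask` participates in the stencil test: both the `reference` value and the stored stencil value are masked before `compareOp` is applied. The enum below is a simplified local stand-in, not the Vulkan `VkCompareOp` enum:

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

/* Simplified local stand-in for VkCompareOp (not the Vulkan enum). */
typedef enum { CMP_LESS, CMP_EQUAL, CMP_GREATER, CMP_ALWAYS } CompareOp;

/* The stencil test masks both the reference and the stored value with
 * compareMask before applying the compare operation. */
bool stencil_test(CompareOp op, uint32_t reference,
                  uint32_t stored, uint32_t compareMask) {
    uint32_t r = reference & compareMask;
    uint32_t s = stored & compareMask;
    switch (op) {
    case CMP_LESS:    return r < s;
    case CMP_EQUAL:   return r == s;
    case CMP_GREATER: return r > s;
    default:          return true;  /* CMP_ALWAYS */
    }
}
```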
+
+class VkPipelineDepthStencilStateCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_PIPELINE_DEPTH_STENCIL_STATE_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkPipelineDepthStencilStateCreateFlags flags
+ VkBool32 depthTestEnable
+ VkBool32 depthWriteEnable
+ VkCompareOp depthCompareOp
+ VkBool32 depthBoundsTestEnable /// optional (depth_bounds_test)
+ VkBool32 stencilTestEnable
+ VkStencilOpState front
+ VkStencilOpState back
+ f32 minDepthBounds
+ f32 maxDepthBounds
+}
+
+class VkPipelineDynamicStateCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_PIPELINE_DYNAMIC_STATE_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkPipelineDynamicStateCreateFlags flags
+ u32 dynamicStateCount
+ const VkDynamicState* pDynamicStates
+}
+
+class VkGraphicsPipelineCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkPipelineCreateFlags flags /// Pipeline creation flags
+ u32 stageCount
+ const VkPipelineShaderStageCreateInfo* pStages /// One entry for each active shader stage
+ const VkPipelineVertexInputStateCreateInfo* pVertexInputState
+ const VkPipelineInputAssemblyStateCreateInfo* pInputAssemblyState
+ const VkPipelineTessellationStateCreateInfo* pTessellationState
+ const VkPipelineViewportStateCreateInfo* pViewportState
+ const VkPipelineRasterizationStateCreateInfo* pRasterizationState
+ const VkPipelineMultisampleStateCreateInfo* pMultisampleState
+ const VkPipelineDepthStencilStateCreateInfo* pDepthStencilState
+ const VkPipelineColorBlendStateCreateInfo* pColorBlendState
+ const VkPipelineDynamicStateCreateInfo* pDynamicState
+ VkPipelineLayout layout /// Interface layout of the pipeline
+ VkRenderPass renderPass
+ u32 subpass
+ VkPipeline basePipelineHandle /// If VK_PIPELINE_CREATE_DERIVATIVE_BIT is set and this value is nonzero, it specifies the handle of the base pipeline this is a derivative of
+ s32 basePipelineIndex /// If VK_PIPELINE_CREATE_DERIVATIVE_BIT is set and this value is not -1, it specifies an index into pCreateInfos of the base pipeline this is a derivative of
+}
+
+class VkPipelineCacheCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkPipelineCacheCreateFlags flags
+ platform.size_t initialDataSize /// Size of initial data to populate cache, in bytes
+ const void* pInitialData /// Initial data to populate cache
+}
+
+class VkPushConstantRange {
+ VkShaderStageFlags stageFlags /// Which stages use the range
+ u32 offset /// Start of the range, in bytes
+ u32 size /// Length of the range, in bytes
+}
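A hedged validation sketch for these fields: `offset` and `size` must each be multiples of 4, `size` must be nonzero, and the whole range must fit within the device's `maxPushConstantsSize` limit (from `VkPhysicalDeviceLimits`):

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

/* Checks the core validity rules for a push-constant range: offset and
 * size are multiples of 4, size is nonzero, and the range fits within
 * the device's maxPushConstantsSize limit. */
bool push_constant_range_valid(uint32_t offset, uint32_t size,
                               uint32_t maxPushConstantsSize) {
    return size != 0
        && (offset % 4) == 0
        && (size % 4) == 0
        && offset < maxPushConstantsSize
        && size <= maxPushConstantsSize - offset;
}
```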
+
+class VkPipelineLayoutCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkPipelineLayoutCreateFlags flags
+ u32 setLayoutCount /// Number of descriptor set layouts interfaced by the pipeline
+ const VkDescriptorSetLayout* pSetLayouts /// Array of setLayoutCount descriptor set layout objects defining the layout of the pipeline's descriptor sets
+ u32 pushConstantRangeCount /// Number of push-constant ranges used by the pipeline
+ const VkPushConstantRange* pPushConstantRanges /// Array of pushConstantRangeCount number of ranges used by various shader stages
+}
+
+class VkSamplerCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkSamplerCreateFlags flags
+ VkFilter magFilter /// Filter mode for magnification
+ VkFilter minFilter /// Filter mode for minification
+ VkSamplerMipmapMode mipmapMode /// Mipmap selection mode
+ VkSamplerAddressMode addressModeU
+ VkSamplerAddressMode addressModeV
+ VkSamplerAddressMode addressModeW
+ f32 mipLodBias
+ VkBool32 anisotropyEnable
+ f32 maxAnisotropy
+ VkBool32 compareEnable
+ VkCompareOp compareOp
+ f32 minLod
+ f32 maxLod
+ VkBorderColor borderColor
+ VkBool32 unnormalizedCoordinates
+}
+
+class VkCommandPoolCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkCommandPoolCreateFlags flags /// Command pool creation flags
+ u32 queueFamilyIndex
+}
+
+class VkCommandBufferAllocateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkCommandPool commandPool
+ VkCommandBufferLevel level
+ u32 commandBufferCount
+}
+
+class VkCommandBufferInheritanceInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO
+ const void* pNext /// Pointer to next structure
+ VkRenderPass renderPass /// Render pass for secondary command buffers
+ u32 subpass
+ VkFramebuffer framebuffer /// Framebuffer for secondary command buffers
+ VkBool32 occlusionQueryEnable
+ VkQueryControlFlags queryFlags
+ VkQueryPipelineStatisticFlags pipelineStatistics
+}
+
+class VkCommandBufferBeginInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO
+ const void* pNext /// Pointer to next structure
+ VkCommandBufferUsageFlags flags /// Command buffer usage flags
+ const VkCommandBufferInheritanceInfo* pInheritanceInfo
+}
+
+class VkRenderPassBeginInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO
+ const void* pNext /// Pointer to next structure
+ VkRenderPass renderPass
+ VkFramebuffer framebuffer
+ VkRect2D renderArea
+ u32 clearValueCount
+ const VkClearValue* pClearValues
+}
+
+@union
+/// Union allowing specification of floating point, integer, or unsigned integer color data. Actual value selected is based on image/attachment being cleared.
+class VkClearColorValue {
+ f32[4] float32
+ s32[4] int32
+ u32[4] uint32
+}
+
+class VkClearDepthStencilValue {
+ f32 depth
+ u32 stencil
+}
+
+@union
+/// Union allowing specification of color, depth, and stencil color values. Actual value selected is based on attachment being cleared.
+class VkClearValue {
+ VkClearColorValue color
+ VkClearDepthStencilValue depthStencil
+}
+
+class VkClearAttachment {
+ VkImageAspectFlags aspectMask
+ u32 colorAttachment
+ VkClearValue clearValue
+}
+
+class VkAttachmentDescription {
+ VkAttachmentDescriptionFlags flags
+ VkFormat format
+ VkSampleCountFlagBits samples
+ VkAttachmentLoadOp loadOp /// Load op for color or depth data
+ VkAttachmentStoreOp storeOp /// Store op for color or depth data
+ VkAttachmentLoadOp stencilLoadOp /// Load op for stencil data
+ VkAttachmentStoreOp stencilStoreOp /// Store op for stencil data
+ VkImageLayout initialLayout
+ VkImageLayout finalLayout
+}
+
+class VkAttachmentReference {
+ u32 attachment
+ VkImageLayout layout
+}
+
+class VkSubpassDescription {
+ VkSubpassDescriptionFlags flags
+ VkPipelineBindPoint pipelineBindPoint /// Must be VK_PIPELINE_BIND_POINT_GRAPHICS for now
+ u32 inputAttachmentCount
+ const VkAttachmentReference* pInputAttachments
+ u32 colorAttachmentCount
+ const VkAttachmentReference* pColorAttachments
+ const VkAttachmentReference* pResolveAttachments
+ const VkAttachmentReference* pDepthStencilAttachment
+ u32 preserveAttachmentCount
+ const u32* pPreserveAttachments
+}
+
+class VkSubpassDependency {
+ u32 srcSubpass
+ u32 dstSubpass
+ VkPipelineStageFlags srcStageMask
+ VkPipelineStageFlags dstStageMask
+ VkAccessFlags srcAccessMask
+ VkAccessFlags dstAccessMask
+ VkDependencyFlags dependencyFlags
+}
+
+class VkRenderPassCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkRenderPassCreateFlags flags
+ u32 attachmentCount
+ const VkAttachmentDescription* pAttachments
+ u32 subpassCount
+ const VkSubpassDescription* pSubpasses
+ u32 dependencyCount
+ const VkSubpassDependency* pDependencies
+}
+
+class VkEventCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_EVENT_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkEventCreateFlags flags /// Event creation flags
+}
+
+class VkFenceCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_FENCE_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkFenceCreateFlags flags /// Fence creation flags
+}
+
+class VkPhysicalDeviceFeatures {
+ VkBool32 robustBufferAccess /// out of bounds buffer accesses are well defined
+ VkBool32 fullDrawIndexUint32 /// full 32-bit range of indices for indexed draw calls
+ VkBool32 imageCubeArray /// image views which are arrays of cube maps
+ VkBool32 independentBlend /// blending operations are controlled per-attachment
+ VkBool32 geometryShader /// geometry stage
+ VkBool32 tessellationShader /// tessellation control and evaluation stage
+ VkBool32 sampleRateShading /// per-sample shading and interpolation
+ VkBool32 dualSrcBlend /// blend operations which take two sources
+ VkBool32 logicOp /// logic operations
+ VkBool32 multiDrawIndirect /// multi draw indirect
+ VkBool32 drawIndirectFirstInstance
+ VkBool32 depthClamp /// depth clamping
+ VkBool32 depthBiasClamp /// depth bias clamping
+ VkBool32 fillModeNonSolid /// point and wireframe fill modes
+ VkBool32 depthBounds /// depth bounds test
+ VkBool32 wideLines /// lines with width greater than 1
+ VkBool32 largePoints /// points with size greater than 1
+ VkBool32 alphaToOne /// The fragment alpha channel can be forced to maximum representable alpha value
+ VkBool32 multiViewport
+ VkBool32 samplerAnisotropy
+ VkBool32 textureCompressionETC2 /// ETC texture compression formats
+ VkBool32 textureCompressionASTC_LDR /// ASTC LDR texture compression formats
+ VkBool32 textureCompressionBC /// BC1-7 texture compressed formats
+ VkBool32 occlusionQueryPrecise
+ VkBool32 pipelineStatisticsQuery /// pipeline statistics query
+ VkBool32 vertexPipelineStoresAndAtomics
+ VkBool32 fragmentStoresAndAtomics
+ VkBool32 shaderTessellationAndGeometryPointSize
+ VkBool32 shaderImageGatherExtended /// texture gather with run-time values and independent offsets
+ VkBool32 shaderStorageImageExtendedFormats /// the extended set of formats can be used for storage images
+ VkBool32 shaderStorageImageMultisample /// multisample images can be used for storage images
+ VkBool32 shaderStorageImageReadWithoutFormat
+ VkBool32 shaderStorageImageWriteWithoutFormat
+ VkBool32 shaderUniformBufferArrayDynamicIndexing /// arrays of uniform buffers can be accessed with dynamically uniform indices
+ VkBool32 shaderSampledImageArrayDynamicIndexing /// arrays of sampled images can be accessed with dynamically uniform indices
+ VkBool32 shaderStorageBufferArrayDynamicIndexing /// arrays of storage buffers can be accessed with dynamically uniform indices
+ VkBool32 shaderStorageImageArrayDynamicIndexing /// arrays of storage images can be accessed with dynamically uniform indices
+ VkBool32 shaderClipDistance /// clip distance in shaders
+ VkBool32 shaderCullDistance /// cull distance in shaders
+ VkBool32 shaderFloat64 /// 64-bit floats (doubles) in shaders
+ VkBool32 shaderInt64 /// 64-bit integers in shaders
+ VkBool32 shaderInt16 /// 16-bit integers in shaders
+ VkBool32 shaderResourceResidency /// shader can use texture operations that return resource residency information (requires sparseNonResident support)
+ VkBool32 shaderResourceMinLod /// shader can use texture operations that specify minimum resource LOD
+ VkBool32 sparseBinding /// Sparse resources support: Resource memory can be managed at opaque page level rather than object level
+ VkBool32 sparseResidencyBuffer /// Sparse resources support: GPU can access partially resident buffers
+ VkBool32 sparseResidencyImage2D /// Sparse resources support: GPU can access partially resident 2D (non-MSAA non-DepthStencil) images
+ VkBool32 sparseResidencyImage3D /// Sparse resources support: GPU can access partially resident 3D images
+ VkBool32 sparseResidency2Samples /// Sparse resources support: GPU can access partially resident MSAA 2D images with 2 samples
+ VkBool32 sparseResidency4Samples /// Sparse resources support: GPU can access partially resident MSAA 2D images with 4 samples
+ VkBool32 sparseResidency8Samples /// Sparse resources support: GPU can access partially resident MSAA 2D images with 8 samples
+ VkBool32 sparseResidency16Samples /// Sparse resources support: GPU can access partially resident MSAA 2D images with 16 samples
+ VkBool32 sparseResidencyAliased /// Sparse resources support: GPU can correctly access data aliased into multiple locations (opt-in)
+ VkBool32 variableMultisampleRate
+ VkBool32 inheritedQueries
+}
+
+class VkPhysicalDeviceLimits {
+ /// resource maximum sizes
+ u32 maxImageDimension1D /// max 1D image dimension
+ u32 maxImageDimension2D /// max 2D image dimension
+ u32 maxImageDimension3D /// max 3D image dimension
+ u32 maxImageDimensionCube /// max cubemap image dimension
+ u32 maxImageArrayLayers /// max layers for image arrays
+ u32 maxTexelBufferElements
+ u32 maxUniformBufferRange /// max uniform buffer size (bytes)
+ u32 maxStorageBufferRange /// max storage buffer size (bytes)
+ u32 maxPushConstantsSize /// max size of the push constants pool (bytes)
+ /// memory limits
+ u32 maxMemoryAllocationCount /// max number of device memory allocations supported
+ u32 maxSamplerAllocationCount
+ VkDeviceSize bufferImageGranularity /// Granularity (in bytes) at which buffers and images can be bound to adjacent memory for simultaneous usage
+ VkDeviceSize sparseAddressSpaceSize /// Total address space available for sparse allocations (bytes)
+ /// descriptor set limits
+ u32 maxBoundDescriptorSets /// max number of descriptors sets that can be bound to a pipeline
+ u32 maxPerStageDescriptorSamplers /// max num of samplers allowed per-stage in a descriptor set
+ u32 maxPerStageDescriptorUniformBuffers /// max num of uniform buffers allowed per-stage in a descriptor set
+ u32 maxPerStageDescriptorStorageBuffers /// max num of storage buffers allowed per-stage in a descriptor set
+ u32 maxPerStageDescriptorSampledImages /// max num of sampled images allowed per-stage in a descriptor set
+ u32 maxPerStageDescriptorStorageImages /// max num of storage images allowed per-stage in a descriptor set
+ u32 maxPerStageDescriptorInputAttachments
+ u32 maxPerStageResources
+ u32 maxDescriptorSetSamplers /// max num of samplers allowed in all stages in a descriptor set
+ u32 maxDescriptorSetUniformBuffers /// max num of uniform buffers allowed in all stages in a descriptor set
+ u32 maxDescriptorSetUniformBuffersDynamic /// max num of dynamic uniform buffers allowed in all stages in a descriptor set
+ u32 maxDescriptorSetStorageBuffers /// max num of storage buffers allowed in all stages in a descriptor set
+ u32 maxDescriptorSetStorageBuffersDynamic /// max num of dynamic storage buffers allowed in all stages in a descriptor set
+ u32 maxDescriptorSetSampledImages /// max num of sampled images allowed in all stages in a descriptor set
+ u32 maxDescriptorSetStorageImages /// max num of storage images allowed in all stages in a descriptor set
+ u32 maxDescriptorSetInputAttachments
+ /// vertex stage limits
+ u32 maxVertexInputAttributes /// max num of vertex input attribute slots
+ u32 maxVertexInputBindings /// max num of vertex input binding slots
+ u32 maxVertexInputAttributeOffset /// max vertex input attribute offset added to vertex buffer offset
+ u32 maxVertexInputBindingStride /// max vertex input binding stride
+ u32 maxVertexOutputComponents /// max num of output components written by vertex shader
+ /// tessellation control stage limits
+ u32 maxTessellationGenerationLevel /// max level supported by tess primitive generator
+ u32 maxTessellationPatchSize /// max patch size (vertices)
+ u32 maxTessellationControlPerVertexInputComponents /// max num of input components per-vertex in TCS
+ u32 maxTessellationControlPerVertexOutputComponents /// max num of output components per-vertex in TCS
+ u32 maxTessellationControlPerPatchOutputComponents /// max num of output components per-patch in TCS
+ u32 maxTessellationControlTotalOutputComponents /// max total num of per-vertex and per-patch output components in TCS
+ u32 maxTessellationEvaluationInputComponents /// max num of input components per vertex in TES
+ u32 maxTessellationEvaluationOutputComponents /// max num of output components per vertex in TES
+ /// geometry stage limits
+ u32 maxGeometryShaderInvocations /// max invocation count supported in geometry shader
+ u32 maxGeometryInputComponents /// max num of input components read in geometry stage
+ u32 maxGeometryOutputComponents /// max num of output components written in geometry stage
+ u32 maxGeometryOutputVertices /// max num of vertices that can be emitted in geometry stage
+ u32 maxGeometryTotalOutputComponents /// max total num of components (all vertices) written in geometry stage
+ /// fragment stage limits
+ u32 maxFragmentInputComponents /// max num of input components read in fragment stage
+ u32 maxFragmentOutputAttachments /// max num of output attachments written in fragment stage
+ u32 maxFragmentDualSrcAttachments /// max num of output attachments written when using dual source blending
+ u32 maxFragmentCombinedOutputResources /// max total num of storage buffers, storage images and output buffers
+ /// compute stage limits
+ u32 maxComputeSharedMemorySize /// max total storage size of work group local storage (bytes)
+ u32[3] maxComputeWorkGroupCount /// max num of compute work groups that may be dispatched by a single command (x,y,z)
+ u32 maxComputeWorkGroupInvocations /// max total compute invocations in a single local work group
+ u32[3] maxComputeWorkGroupSize /// max local size of a compute work group (x,y,z)
+
+ u32 subPixelPrecisionBits /// num bits of subpixel precision in screen x and y
+ u32 subTexelPrecisionBits /// num bits of subtexel precision
+ u32 mipmapPrecisionBits /// num bits of mipmap precision
+
+ u32 maxDrawIndexedIndexValue /// max index value for indexed draw calls (for 32-bit indices)
+ u32 maxDrawIndirectCount
+
+ f32 maxSamplerLodBias /// max absolute sampler level of detail bias
+ f32 maxSamplerAnisotropy /// max degree of sampler anisotropy
+
+ u32 maxViewports /// max number of active viewports
+ u32[2] maxViewportDimensions /// max viewport dimensions (x,y)
+ f32[2] viewportBoundsRange /// viewport bounds range (min,max)
+ u32 viewportSubPixelBits /// num bits of subpixel precision for viewport
+
+ platform.size_t minMemoryMapAlignment /// min required alignment of pointers returned by MapMemory (bytes)
+ VkDeviceSize minTexelBufferOffsetAlignment /// min required alignment for texel buffer offsets (bytes)
+ VkDeviceSize minUniformBufferOffsetAlignment /// min required alignment for uniform buffer sizes and offsets (bytes)
+ VkDeviceSize minStorageBufferOffsetAlignment /// min required alignment for storage buffer offsets (bytes)
+
+ s32 minTexelOffset /// min texel offset for OpTextureSampleOffset
+ u32 maxTexelOffset /// max texel offset for OpTextureSampleOffset
+ s32 minTexelGatherOffset /// min texel offset for OpTextureGatherOffset
+ u32 maxTexelGatherOffset /// max texel offset for OpTextureGatherOffset
+ f32 minInterpolationOffset /// furthest negative offset for interpolateAtOffset
+ f32 maxInterpolationOffset /// furthest positive offset for interpolateAtOffset
+ u32 subPixelInterpolationOffsetBits /// num of subpixel bits for interpolateAtOffset
+
+ u32 maxFramebufferWidth /// max width for a framebuffer
+ u32 maxFramebufferHeight /// max height for a framebuffer
+ u32 maxFramebufferLayers /// max layer count for a layered framebuffer
+ VkSampleCountFlags framebufferColorSampleCounts
+ VkSampleCountFlags framebufferDepthSampleCounts
+ VkSampleCountFlags framebufferStencilSampleCounts
+ VkSampleCountFlags framebufferNoAttachmentSampleCounts
+ u32 maxColorAttachments /// max num of framebuffer color attachments
+
+ VkSampleCountFlags sampledImageColorSampleCounts
+ VkSampleCountFlags sampledImageIntegerSampleCounts
+ VkSampleCountFlags sampledImageDepthSampleCounts
+ VkSampleCountFlags sampledImageStencilSampleCounts
+ VkSampleCountFlags storageImageSampleCounts
+ u32 maxSampleMaskWords /// max num of sample mask words
+ VkBool32 timestampComputeAndGraphics
+
+ f32 timestampPeriod
+
+ u32 maxClipDistances /// max number of clip distances
+ u32 maxCullDistances /// max number of cull distances
+ u32 maxCombinedClipAndCullDistances /// max combined number of user clip and cull distances
+
+ u32 discreteQueuePriorities
+
+ f32[2] pointSizeRange /// range (min,max) of supported point sizes
+ f32[2] lineWidthRange /// range (min,max) of supported line widths
+ f32 pointSizeGranularity /// granularity of supported point sizes
+ f32 lineWidthGranularity /// granularity of supported line widths
+ VkBool32 strictLines
+ VkBool32 standardSampleLocations
+
+ VkDeviceSize optimalBufferCopyOffsetAlignment
+ VkDeviceSize optimalBufferCopyRowPitchAlignment
+ VkDeviceSize nonCoherentAtomSize
+}
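Several of these limits (e.g. `minUniformBufferOffsetAlignment`, `nonCoherentAtomSize`, `optimalBufferCopyOffsetAlignment`) are alignments that offsets and sizes must be rounded up to; Vulkan guarantees such alignment limits are powers of two, so bit masking suffices. A minimal sketch:

```c
#include <stdint.h>
#include <assert.h>

/* Round `value` up to a multiple of `alignment`, where `alignment` is
 * a power of two (as Vulkan alignment limits are guaranteed to be). */
uint64_t align_up(uint64_t value, uint64_t alignment) {
    return (value + alignment - 1) & ~(alignment - 1);
}
```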
+
+class VkPhysicalDeviceSparseProperties {
+ VkBool32 residencyStandard2DBlockShape /// Sparse resources support: GPU will access all 2D (single sample) sparse resources using the standard block shapes (based on pixel format)
+ VkBool32 residencyStandard2DMultisampleBlockShape /// Sparse resources support: GPU will access all 2D (multisample) sparse resources using the standard block shapes (based on pixel format)
+ VkBool32 residencyStandard3DBlockShape /// Sparse resources support: GPU will access all 3D sparse resources using the standard block shapes (based on pixel format)
+ VkBool32 residencyAlignedMipSize /// Sparse resources support: Images with mip-level dimensions that are NOT a multiple of the block size will be placed in the mip tail
+ VkBool32 residencyNonResidentStrict /// Sparse resources support: GPU can safely access non-resident regions of a resource, all reads return as if data is 0, writes are discarded
+}
+
+class VkSemaphoreCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkSemaphoreCreateFlags flags /// Semaphore creation flags
+}
+
+class VkQueryPoolCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkQueryPoolCreateFlags flags
+ VkQueryType queryType
+ u32 queryCount
+ VkQueryPipelineStatisticFlags pipelineStatistics /// Optional
+}
+
+class VkFramebufferCreateInfo {
+ VkStructureType sType /// Must be VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO
+ const void* pNext /// Pointer to next structure
+ VkFramebufferCreateFlags flags
+ VkRenderPass renderPass
+ u32 attachmentCount
+ const VkImageView* pAttachments
+ u32 width
+ u32 height
+ u32 layers
+}
+
+class VkDrawIndirectCommand {
+ u32 vertexCount
+ u32 instanceCount
+ u32 firstVertex
+ u32 firstInstance
+}
+
+class VkDrawIndexedIndirectCommand {
+ u32 indexCount
+ u32 instanceCount
+ u32 firstIndex
+ s32 vertexOffset
+ u32 firstInstance
+}
+
+class VkDispatchIndirectCommand {
+ u32 x
+ u32 y
+ u32 z
+}
+
+@extension("VK_KHR_surface")
+class VkSurfaceCapabilitiesKHR {
+ u32 minImageCount
+ u32 maxImageCount
+ VkExtent2D currentExtent
+ VkExtent2D minImageExtent
+ VkExtent2D maxImageExtent
+ u32 maxImageArrayLayers
+ VkSurfaceTransformFlagsKHR supportedTransforms
+ VkSurfaceTransformFlagBitsKHR currentTransform
+ VkCompositeAlphaFlagsKHR supportedCompositeAlpha
+ VkImageUsageFlags supportedUsageFlags
+}
+
+@extension("VK_KHR_surface")
+class VkSurfaceFormatKHR {
+ VkFormat format
+ VkColorSpaceKHR colorSpace
+}
+
+@extension("VK_KHR_swapchain")
+class VkSwapchainCreateInfoKHR {
+ VkStructureType sType
+ const void* pNext
+ VkSwapchainCreateFlagsKHR flags
+ VkSurfaceKHR surface
+ u32 minImageCount
+ VkFormat imageFormat
+ VkColorSpaceKHR imageColorSpace
+ VkExtent2D imageExtent
+ u32 imageArrayLayers
+ VkImageUsageFlags imageUsage
+ VkSharingMode sharingMode
+ u32 queueFamilyIndexCount
+ const u32* pQueueFamilyIndices
+ VkSurfaceTransformFlagBitsKHR preTransform
+ VkCompositeAlphaFlagBitsKHR compositeAlpha
+ VkPresentModeKHR presentMode
+ VkBool32 clipped
+ VkSwapchainKHR oldSwapchain
+}
+
+@extension("VK_KHR_swapchain")
+class VkPresentInfoKHR {
+ VkStructureType sType
+ const void* pNext
+ u32 waitSemaphoreCount
+ const VkSemaphore* pWaitSemaphores
+ u32 swapchainCount
+ const VkSwapchainKHR* pSwapchains
+ const u32* pImageIndices
+ VkResult* pResults
+}
+
+@extension("VK_KHR_display")
+class VkDisplayPropertiesKHR {
+ VkDisplayKHR display
+ const char* displayName
+ VkExtent2D physicalDimensions
+ VkExtent2D physicalResolution
+ VkSurfaceTransformFlagsKHR supportedTransforms
+ VkBool32 planeReorderPossible
+ VkBool32 persistentContent
+}
+
+@extension("VK_KHR_display")
+class VkDisplayModeParametersKHR {
+ VkExtent2D visibleRegion
+ u32 refreshRate
+}
+
+@extension("VK_KHR_display")
+class VkDisplayModePropertiesKHR {
+ VkDisplayModeKHR displayMode
+ VkDisplayModeParametersKHR parameters
+}
+
+@extension("VK_KHR_display")
+class VkDisplayModeCreateInfoKHR {
+ VkStructureType sType
+ const void* pNext
+ VkDisplayModeCreateFlagsKHR flags
+ VkDisplayModeParametersKHR parameters
+}
+
+@extension("VK_KHR_display")
+class VkDisplayPlanePropertiesKHR {
+ VkDisplayKHR currentDisplay
+ u32 currentStackIndex
+}
+
+@extension("VK_KHR_display")
+class VkDisplayPlaneCapabilitiesKHR {
+ VkDisplayPlaneAlphaFlagsKHR supportedAlpha
+ VkOffset2D minSrcPosition
+ VkOffset2D maxSrcPosition
+ VkExtent2D minSrcExtent
+ VkExtent2D maxSrcExtent
+ VkOffset2D minDstPosition
+ VkOffset2D maxDstPosition
+ VkExtent2D minDstExtent
+ VkExtent2D maxDstExtent
+}
+
+@extension("VK_KHR_display")
+class VkDisplaySurfaceCreateInfoKHR {
+ VkStructureType sType
+ const void* pNext
+ VkDisplaySurfaceCreateFlagsKHR flags
+ VkDisplayModeKHR displayMode
+ u32 planeIndex
+ u32 planeStackIndex
+ VkSurfaceTransformFlagBitsKHR transform
+ f32 globalAlpha
+ VkDisplayPlaneAlphaFlagBitsKHR alphaMode
+ VkExtent2D imageExtent
+}
+
+@extension("VK_KHR_display_swapchain")
+class VkDisplayPresentInfoKHR {
+ VkStructureType sType
+ const void* pNext
+ VkRect2D srcRect
+ VkRect2D dstRect
+ VkBool32 persistent
+}
+
+@extension("VK_KHR_xlib_surface")
+class VkXlibSurfaceCreateInfoKHR {
+ VkStructureType sType
+ const void* pNext
+ VkXlibSurfaceCreateFlagsKHR flags
+ platform.Display* dpy
+ platform.Window window
+}
+
+@extension("VK_KHR_xcb_surface")
+class VkXcbSurfaceCreateInfoKHR {
+ VkStructureType sType
+ const void* pNext
+ VkXcbSurfaceCreateFlagsKHR flags
+ platform.xcb_connection_t* connection
+ platform.xcb_window_t window
+}
+
+@extension("VK_KHR_wayland_surface")
+class VkWaylandSurfaceCreateInfoKHR {
+ VkStructureType sType
+ const void* pNext
+ VkWaylandSurfaceCreateFlagsKHR flags
+ platform.wl_display* display
+ platform.wl_surface* surface
+}
+
+@extension("VK_KHR_mir_surface")
+class VkMirSurfaceCreateInfoKHR {
+ VkStructureType sType
+ const void* pNext
+ VkMirSurfaceCreateFlagsKHR flags
+ platform.MirConnection* connection
+ platform.MirSurface* mirSurface
+}
+
+@extension("VK_KHR_android_surface")
+class VkAndroidSurfaceCreateInfoKHR {
+ VkStructureType sType
+ const void* pNext
+ VkAndroidSurfaceCreateFlagsKHR flags
+ platform.ANativeWindow* window
+}
+
+@extension("VK_KHR_win32_surface")
+class VkWin32SurfaceCreateInfoKHR {
+ VkStructureType sType
+ const void* pNext
+ VkWin32SurfaceCreateFlagsKHR flags
+ platform.HINSTANCE hinstance
+ platform.HWND hwnd
+}
+
+@extension("VK_EXT_debug_report")
+class VkDebugReportCallbackCreateInfoEXT {
+ VkStructureType sType
+ const void* pNext
+ VkDebugReportFlagsEXT flags
+ PFN_vkDebugReportCallbackEXT pfnCallback
+ void* pUserData
+}
+
+
+////////////////
+//  Commands  //
+////////////////
+
+// Function pointers. TODO: add support for function pointers.
+
+@external type void* PFN_vkVoidFunction
+@pfn cmd void vkVoidFunction() {
+}
+
+@external type void* PFN_vkAllocationFunction
+@pfn cmd void* vkAllocationFunction(
+ void* pUserData,
+ platform.size_t size,
+ platform.size_t alignment,
+ VkSystemAllocationScope allocationScope) {
+ return ?
+}
+
+@external type void* PFN_vkReallocationFunction
+@pfn cmd void* vkReallocationFunction(
+ void* pUserData,
+ void* pOriginal,
+ platform.size_t size,
+ platform.size_t alignment,
+ VkSystemAllocationScope allocationScope) {
+ return ?
+}
+
+@external type void* PFN_vkFreeFunction
+@pfn cmd void vkFreeFunction(
+ void* pUserData,
+ void* pMemory) {
+}
+
+@external type void* PFN_vkInternalAllocationNotification
+@pfn cmd void vkInternalAllocationNotification(
+ void* pUserData,
+ platform.size_t size,
+ VkInternalAllocationType allocationType,
+ VkSystemAllocationScope allocationScope) {
+}
+
+@external type void* PFN_vkInternalFreeNotification
+@pfn cmd void vkInternalFreeNotification(
+ void* pUserData,
+ platform.size_t size,
+ VkInternalAllocationType allocationType,
+ VkSystemAllocationScope allocationScope) {
+}
+
+// Global functions
+
+@threadSafety("system")
+cmd VkResult vkCreateInstance(
+ const VkInstanceCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkInstance* pInstance) {
+ assert(pCreateInfo.sType == VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO)
+
+ instance := ?
+ pInstance[0] = instance
+ State.Instances[instance] = new!InstanceObject()
+
+ layers := pCreateInfo.ppEnabledLayerNames[0:pCreateInfo.enabledLayerCount]
+ extensions := pCreateInfo.ppEnabledExtensionNames[0:pCreateInfo.enabledExtensionCount]
+
+ return ?
+}
+
+@threadSafety("system")
+cmd void vkDestroyInstance(
+ VkInstance instance,
+ const VkAllocationCallbacks* pAllocator) {
+ instanceObject := GetInstance(instance)
+
+ State.Instances[instance] = null
+}
+
+@threadSafety("system")
+cmd VkResult vkEnumeratePhysicalDevices(
+ VkInstance instance,
+ u32* pPhysicalDeviceCount,
+ VkPhysicalDevice* pPhysicalDevices) {
+ instanceObject := GetInstance(instance)
+
+ physicalDeviceCount := as!u32(?)
+ pPhysicalDeviceCount[0] = physicalDeviceCount
+ physicalDevices := pPhysicalDevices[0:physicalDeviceCount]
+
+ for i in (0 .. physicalDeviceCount) {
+ physicalDevice := ?
+ physicalDevices[i] = physicalDevice
+ if !(physicalDevice in State.PhysicalDevices) {
+ State.PhysicalDevices[physicalDevice] = new!PhysicalDeviceObject(instance: instance)
+ }
+ }
+
+ return ?
+}
+
+cmd PFN_vkVoidFunction vkGetDeviceProcAddr(
+ VkDevice device,
+ const char* pName) {
+ if device != NULL_HANDLE {
+ deviceObject := GetDevice(device)
+ }
+
+ return ?
+}
+
+cmd PFN_vkVoidFunction vkGetInstanceProcAddr(
+ VkInstance instance,
+ const char* pName) {
+ if instance != NULL_HANDLE {
+ instanceObject := GetInstance(instance)
+ }
+
+ return ?
+}
+
+cmd void vkGetPhysicalDeviceProperties(
+ VkPhysicalDevice physicalDevice,
+ VkPhysicalDeviceProperties* pProperties) {
+ physicalDeviceObject := GetPhysicalDevice(physicalDevice)
+
+ properties := ?
+ pProperties[0] = properties
+}
+
+cmd void vkGetPhysicalDeviceQueueFamilyProperties(
+ VkPhysicalDevice physicalDevice,
+ u32* pQueueFamilyPropertyCount,
+ VkQueueFamilyProperties* pQueueFamilyProperties) {
+ physicalDeviceObject := GetPhysicalDevice(physicalDevice)
+ // TODO: Figure out how to express fetch-count-or-properties
+ // This version fails 'apic validate' with 'fence not allowed in
+ // *semantic.Branch'. Other attempts have failed with the same or other
+ // errors.
+ // if pQueueFamilyProperties != null {
+ // queuesProperties := pQueueFamilyProperties[0:pQueueFamilyPropertyCount[0]]
+ // for i in (0 .. pQueueFamilyPropertyCount[0]) {
+ // queueProperties := as!VkQueueFamilyProperties(?)
+ // queuesProperties[i] = queueProperties
+ // }
+ // } else {
+ // count := ?
+ // pQueueFamilyPropertyCount[0] = count
+ // }
+}
+
+cmd void vkGetPhysicalDeviceMemoryProperties(
+ VkPhysicalDevice physicalDevice,
+ VkPhysicalDeviceMemoryProperties* pMemoryProperties) {
+ physicalDeviceObject := GetPhysicalDevice(physicalDevice)
+
+ memoryProperties := ?
+ pMemoryProperties[0] = memoryProperties
+}
+
+cmd void vkGetPhysicalDeviceFeatures(
+ VkPhysicalDevice physicalDevice,
+ VkPhysicalDeviceFeatures* pFeatures) {
+ physicalDeviceObject := GetPhysicalDevice(physicalDevice)
+
+ features := ?
+ pFeatures[0] = features
+}
+
+cmd void vkGetPhysicalDeviceFormatProperties(
+ VkPhysicalDevice physicalDevice,
+ VkFormat format,
+ VkFormatProperties* pFormatProperties) {
+ physicalDeviceObject := GetPhysicalDevice(physicalDevice)
+
+ formatProperties := ?
+ pFormatProperties[0] = formatProperties
+}
+
+cmd VkResult vkGetPhysicalDeviceImageFormatProperties(
+ VkPhysicalDevice physicalDevice,
+ VkFormat format,
+ VkImageType type,
+ VkImageTiling tiling,
+ VkImageUsageFlags usage,
+ VkImageCreateFlags flags,
+ VkImageFormatProperties* pImageFormatProperties) {
+ physicalDeviceObject := GetPhysicalDevice(physicalDevice)
+
+ imageFormatProperties := ?
+ pImageFormatProperties[0] = imageFormatProperties
+
+ return ?
+}
+
+
+// Device functions
+
+@threadSafety("system")
+cmd VkResult vkCreateDevice(
+ VkPhysicalDevice physicalDevice,
+ const VkDeviceCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkDevice* pDevice) {
+ assert(pCreateInfo.sType == VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO)
+ physicalDeviceObject := GetPhysicalDevice(physicalDevice)
+
+ device := ?
+ pDevice[0] = device
+ State.Devices[device] = new!DeviceObject(physicalDevice: physicalDevice)
+
+ return ?
+}
+
+@threadSafety("system")
+cmd void vkDestroyDevice(
+ VkDevice device,
+ const VkAllocationCallbacks* pAllocator) {
+ deviceObject := GetDevice(device)
+
+ State.Devices[device] = null
+}
+
+
+// Extension discovery functions
+
+cmd VkResult vkEnumerateInstanceLayerProperties(
+ u32* pPropertyCount,
+ VkLayerProperties* pProperties) {
+ count := as!u32(?)
+ pPropertyCount[0] = count
+
+ properties := pProperties[0:count]
+ for i in (0 .. count) {
+ property := ?
+ properties[i] = property
+ }
+
+ return ?
+}
+
+cmd VkResult vkEnumerateInstanceExtensionProperties(
+ const char* pLayerName,
+ u32* pPropertyCount,
+ VkExtensionProperties* pProperties) {
+ count := as!u32(?)
+ pPropertyCount[0] = count
+
+ properties := pProperties[0:count]
+ for i in (0 .. count) {
+ property := ?
+ properties[i] = property
+ }
+
+ return ?
+}
+
+cmd VkResult vkEnumerateDeviceLayerProperties(
+ VkPhysicalDevice physicalDevice,
+ u32* pPropertyCount,
+ VkLayerProperties* pProperties) {
+ physicalDeviceObject := GetPhysicalDevice(physicalDevice)
+ count := as!u32(?)
+ pPropertyCount[0] = count
+
+ properties := pProperties[0:count]
+ for i in (0 .. count) {
+ property := ?
+ properties[i] = property
+ }
+
+ return ?
+}
+
+cmd VkResult vkEnumerateDeviceExtensionProperties(
+ VkPhysicalDevice physicalDevice,
+ const char* pLayerName,
+ u32* pPropertyCount,
+ VkExtensionProperties* pProperties) {
+ physicalDeviceObject := GetPhysicalDevice(physicalDevice)
+
+ count := as!u32(?)
+ pPropertyCount[0] = count
+
+ properties := pProperties[0:count]
+ for i in (0 .. count) {
+ property := ?
+ properties[i] = property
+ }
+
+ return ?
+}
+
+
+// Queue functions
+
+@threadSafety("system")
+cmd void vkGetDeviceQueue(
+ VkDevice device,
+ u32 queueFamilyIndex,
+ u32 queueIndex,
+ VkQueue* pQueue) {
+ deviceObject := GetDevice(device)
+
+ queue := ?
+ pQueue[0] = queue
+
+ if !(queue in State.Queues) {
+ State.Queues[queue] = new!QueueObject(device: device)
+ }
+}
+
+@threadSafety("app")
+cmd VkResult vkQueueSubmit(
+ VkQueue queue,
+ u32 submitCount,
+ const VkSubmitInfo* pSubmits,
+ VkFence fence) {
+ queueObject := GetQueue(queue)
+
+ if fence != NULL_HANDLE {
+ fenceObject := GetFence(fence)
+ assert(fenceObject.device == queueObject.device)
+ }
+
+ // commandBuffers := pCommandBuffers[0:commandBufferCount]
+ // for i in (0 .. commandBufferCount) {
+ // commandBuffer := commandBuffers[i]
+ // commandBufferObject := GetCommandBuffer(commandBuffer)
+ // assert(commandBufferObject.device == queueObject.device)
+ //
+ // validate("QueueCheck", commandBufferObject.queueFlags in queueObject.flags,
+ // "vkQueueSubmit: enqueued commandBuffer requires missing queue capabilities.")
+ // }
+
+ return ?
+}
+
+@threadSafety("system")
+cmd VkResult vkQueueWaitIdle(
+ VkQueue queue) {
+ queueObject := GetQueue(queue)
+
+ return ?
+}
+
+@threadSafety("system")
+cmd VkResult vkDeviceWaitIdle(
+ VkDevice device) {
+ deviceObject := GetDevice(device)
+
+ return ?
+}
+
+
+// Memory functions
+
+@threadSafety("system")
+cmd VkResult vkAllocateMemory(
+ VkDevice device,
+ const VkMemoryAllocateInfo* pAllocateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkDeviceMemory* pMemory) {
+ assert(pAllocateInfo.sType == VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO)
+ deviceObject := GetDevice(device)
+
+ memory := ?
+ pMemory[0] = memory
+ State.DeviceMemories[memory] = new!DeviceMemoryObject(
+ device: device,
+ allocationSize: pAllocateInfo[0].allocationSize)
+
+ return ?
+}
+
+@threadSafety("system")
+cmd void vkFreeMemory(
+ VkDevice device,
+ VkDeviceMemory memory,
+ const VkAllocationCallbacks* pAllocator) {
+ deviceObject := GetDevice(device)
+ memoryObject := GetDeviceMemory(memory)
+ assert(memoryObject.device == device)
+
+ // Check that no objects are still bound before freeing.
+ validate("MemoryCheck", len(memoryObject.boundObjects) == 0,
+ "vkFreeMemory: objects still bound")
+ validate("MemoryCheck", len(memoryObject.boundCommandBuffers) == 0,
+ "vkFreeMemory: commandBuffers still bound")
+ State.DeviceMemories[memory] = null
+}
+
+@threadSafety("app")
+cmd VkResult vkMapMemory(
+ VkDevice device,
+ VkDeviceMemory memory,
+ VkDeviceSize offset,
+ VkDeviceSize size,
+ VkMemoryMapFlags flags,
+ void** ppData) {
+ deviceObject := GetDevice(device)
+ memoryObject := GetDeviceMemory(memory)
+ assert(memoryObject.device == device)
+
+ assert(flags == as!VkMemoryMapFlags(0))
+ assert((offset + size) <= memoryObject.allocationSize)
+
+ return ?
+}
+
+@threadSafety("app")
+cmd void vkUnmapMemory(
+ VkDevice device,
+ VkDeviceMemory memory) {
+ deviceObject := GetDevice(device)
+ memoryObject := GetDeviceMemory(memory)
+ assert(memoryObject.device == device)
+}
+
+cmd VkResult vkFlushMappedMemoryRanges(
+ VkDevice device,
+ u32 memoryRangeCount,
+ const VkMappedMemoryRange* pMemoryRanges) {
+ deviceObject := GetDevice(device)
+
+ memoryRanges := pMemoryRanges[0:memoryRangeCount]
+ for i in (0 .. memoryRangeCount) {
+ memoryRange := memoryRanges[i]
+ memoryObject := GetDeviceMemory(memoryRange.memory)
+ assert(memoryObject.device == device)
+ assert((memoryRange.offset + memoryRange.size) <= memoryObject.allocationSize)
+ }
+
+ return ?
+}
+
+cmd VkResult vkInvalidateMappedMemoryRanges(
+ VkDevice device,
+ u32 memoryRangeCount,
+ const VkMappedMemoryRange* pMemoryRanges) {
+ deviceObject := GetDevice(device)
+
+ memoryRanges := pMemoryRanges[0:memoryRangeCount]
+ for i in (0 .. memoryRangeCount) {
+ memoryRange := memoryRanges[i]
+ memoryObject := GetDeviceMemory(memoryRange.memory)
+ assert(memoryObject.device == device)
+ assert((memoryRange.offset + memoryRange.size) <= memoryObject.allocationSize)
+ }
+
+ return ?
+}
+
+
+// Memory management API functions
+
+cmd void vkGetDeviceMemoryCommitment(
+ VkDevice device,
+ VkDeviceMemory memory,
+ VkDeviceSize* pCommittedMemoryInBytes) {
+ deviceObject := GetDevice(device)
+
+ if memory != NULL_HANDLE {
+ memoryObject := GetDeviceMemory(memory)
+ assert(memoryObject.device == device)
+ }
+
+ committedMemoryInBytes := ?
+ pCommittedMemoryInBytes[0] = committedMemoryInBytes
+}
+
+cmd void vkGetBufferMemoryRequirements(
+ VkDevice device,
+ VkBuffer buffer,
+ VkMemoryRequirements* pMemoryRequirements) {
+ deviceObject := GetDevice(device)
+ bufferObject := GetBuffer(buffer)
+ assert(bufferObject.device == device)
+}
+
+cmd VkResult vkBindBufferMemory(
+ VkDevice device,
+ VkBuffer buffer,
+ VkDeviceMemory memory,
+ VkDeviceSize memoryOffset) {
+ deviceObject := GetDevice(device)
+ bufferObject := GetBuffer(buffer)
+ assert(bufferObject.device == device)
+
+ // Unbind buffer from previous memory object, if not VK_NULL_HANDLE.
+ if bufferObject.memory != NULL_HANDLE {
+ memoryObject := GetDeviceMemory(bufferObject.memory)
+ memoryObject.boundObjects[as!u64(buffer)] = null
+ }
+
+ // Bind buffer to given memory object, if not VK_NULL_HANDLE.
+ if memory != NULL_HANDLE {
+ memoryObject := GetDeviceMemory(memory)
+ assert(memoryObject.device == device)
+ memoryObject.boundObjects[as!u64(buffer)] = memoryOffset
+ }
+ bufferObject.memory = memory
+ bufferObject.memoryOffset = memoryOffset
+
+ return ?
+}
+
+cmd void vkGetImageMemoryRequirements(
+ VkDevice device,
+ VkImage image,
+ VkMemoryRequirements* pMemoryRequirements) {
+ deviceObject := GetDevice(device)
+ imageObject := GetImage(image)
+ assert(imageObject.device == device)
+}
+
+cmd VkResult vkBindImageMemory(
+ VkDevice device,
+ VkImage image,
+ VkDeviceMemory memory,
+ VkDeviceSize memoryOffset) {
+ deviceObject := GetDevice(device)
+ imageObject := GetImage(image)
+ assert(imageObject.device == device)
+
+ // Unbind image from previous memory object, if not VK_NULL_HANDLE.
+ if imageObject.memory != NULL_HANDLE {
+ memoryObject := GetDeviceMemory(imageObject.memory)
+ memoryObject.boundObjects[as!u64(image)] = null
+ }
+
+ // Bind image to given memory object, if not VK_NULL_HANDLE.
+ if memory != NULL_HANDLE {
+ memoryObject := GetDeviceMemory(memory)
+ assert(memoryObject.device == device)
+ memoryObject.boundObjects[as!u64(image)] = memoryOffset
+ }
+ imageObject.memory = memory
+ imageObject.memoryOffset = memoryOffset
+
+ return ?
+}
+
+cmd void vkGetImageSparseMemoryRequirements(
+ VkDevice device,
+ VkImage image,
+ u32* pSparseMemoryRequirementCount,
+ VkSparseImageMemoryRequirements* pSparseMemoryRequirements) {
+ deviceObject := GetDevice(device)
+ imageObject := GetImage(image)
+ assert(imageObject.device == device)
+}
+
+cmd void vkGetPhysicalDeviceSparseImageFormatProperties(
+ VkPhysicalDevice physicalDevice,
+ VkFormat format,
+ VkImageType type,
+ VkSampleCountFlagBits samples,
+ VkImageUsageFlags usage,
+ VkImageTiling tiling,
+ u32* pPropertyCount,
+ VkSparseImageFormatProperties* pProperties) {
+ physicalDeviceObject := GetPhysicalDevice(physicalDevice)
+}
+
+cmd VkResult vkQueueBindSparse(
+ VkQueue queue,
+ u32 bindInfoCount,
+ const VkBindSparseInfo* pBindInfo,
+ VkFence fence) {
+ queueObject := GetQueue(queue)
+
+ return ?
+}
+
+
+// Fence functions
+
+@threadSafety("system")
+cmd VkResult vkCreateFence(
+ VkDevice device,
+ const VkFenceCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkFence* pFence) {
+ assert(pCreateInfo.sType == VK_STRUCTURE_TYPE_FENCE_CREATE_INFO)
+ deviceObject := GetDevice(device)
+
+ fence := ?
+ pFence[0] = fence
+ State.Fences[fence] = new!FenceObject(
+ device: device, signaled: (pCreateInfo.flags == as!VkFenceCreateFlags(VK_FENCE_CREATE_SIGNALED_BIT)))
+
+ return ?
+}
+
+@threadSafety("system")
+cmd void vkDestroyFence(
+ VkDevice device,
+ VkFence fence,
+ const VkAllocationCallbacks* pAllocator) {
+ deviceObject := GetDevice(device)
+ fenceObject := GetFence(fence)
+ assert(fenceObject.device == device)
+
+ State.Fences[fence] = null
+}
+
+@threadSafety("system")
+cmd VkResult vkResetFences(
+ VkDevice device,
+ u32 fenceCount,
+ const VkFence* pFences) {
+ deviceObject := GetDevice(device)
+
+ fences := pFences[0:fenceCount]
+ for i in (0 .. fenceCount) {
+ fence := fences[i]
+ fenceObject := GetFence(fence)
+ assert(fenceObject.device == device)
+ fenceObject.signaled = false
+ }
+
+ return ?
+}
+
+@threadSafety("system")
+cmd VkResult vkGetFenceStatus(
+ VkDevice device,
+ VkFence fence) {
+ deviceObject := GetDevice(device)
+ fenceObject := GetFence(fence)
+ assert(fenceObject.device == device)
+
+ return ?
+}
+
+@threadSafety("system")
+cmd VkResult vkWaitForFences(
+ VkDevice device,
+ u32 fenceCount,
+ const VkFence* pFences,
+ VkBool32 waitAll,
+ u64 timeout) { /// timeout in nanoseconds
+ deviceObject := GetDevice(device)
+
+ fences := pFences[0:fenceCount]
+ for i in (0 .. fenceCount) {
+ fence := fences[i]
+ fenceObject := GetFence(fence)
+ assert(fenceObject.device == device)
+ }
+
+ return ?
+}
+
+
+// Queue semaphore functions
+
+@threadSafety("system")
+cmd VkResult vkCreateSemaphore(
+ VkDevice device,
+ const VkSemaphoreCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkSemaphore* pSemaphore) {
+ assert(pCreateInfo.sType == VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO)
+ deviceObject := GetDevice(device)
+
+ semaphore := ?
+ pSemaphore[0] = semaphore
+ State.Semaphores[semaphore] = new!SemaphoreObject(device: device)
+
+ return ?
+}
+
+@threadSafety("system")
+cmd void vkDestroySemaphore(
+ VkDevice device,
+ VkSemaphore semaphore,
+ const VkAllocationCallbacks* pAllocator) {
+ deviceObject := GetDevice(device)
+ semaphoreObject := GetSemaphore(semaphore)
+ assert(semaphoreObject.device == device)
+
+ State.Semaphores[semaphore] = null
+}
+
+
+// Event functions
+
+@threadSafety("system")
+cmd VkResult vkCreateEvent(
+ VkDevice device,
+ const VkEventCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkEvent* pEvent) {
+ assert(pCreateInfo.sType == VK_STRUCTURE_TYPE_EVENT_CREATE_INFO)
+ deviceObject := GetDevice(device)
+
+ event := ?
+ pEvent[0] = event
+ State.Events[event] = new!EventObject(device: device)
+
+ return ?
+}
+
+@threadSafety("system")
+cmd void vkDestroyEvent(
+ VkDevice device,
+ VkEvent event,
+ const VkAllocationCallbacks* pAllocator) {
+ deviceObject := GetDevice(device)
+ eventObject := GetEvent(event)
+ assert(eventObject.device == device)
+
+ State.Events[event] = null
+}
+
+@threadSafety("system")
+cmd VkResult vkGetEventStatus(
+ VkDevice device,
+ VkEvent event) {
+ deviceObject := GetDevice(device)
+ eventObject := GetEvent(event)
+ assert(eventObject.device == device)
+
+ return ?
+}
+
+@threadSafety("system")
+cmd VkResult vkSetEvent(
+ VkDevice device,
+ VkEvent event) {
+ deviceObject := GetDevice(device)
+ eventObject := GetEvent(event)
+ assert(eventObject.device == device)
+
+ return ?
+}
+
+@threadSafety("system")
+cmd VkResult vkResetEvent(
+ VkDevice device,
+ VkEvent event) {
+ deviceObject := GetDevice(device)
+ eventObject := GetEvent(event)
+ assert(eventObject.device == device)
+
+ return ?
+}
+
+
+// Query functions
+
+@threadSafety("system")
+cmd VkResult vkCreateQueryPool(
+ VkDevice device,
+ const VkQueryPoolCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkQueryPool* pQueryPool) {
+ assert(pCreateInfo.sType == VK_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO)
+ deviceObject := GetDevice(device)
+
+ queryPool := ?
+ pQueryPool[0] = queryPool
+ State.QueryPools[queryPool] = new!QueryPoolObject(device: device)
+
+ return ?
+}
+
+@threadSafety("system")
+cmd void vkDestroyQueryPool(
+ VkDevice device,
+ VkQueryPool queryPool,
+ const VkAllocationCallbacks* pAllocator) {
+ deviceObject := GetDevice(device)
+ queryPoolObject := GetQueryPool(queryPool)
+ assert(queryPoolObject.device == device)
+
+ State.QueryPools[queryPool] = null
+}
+
+@threadSafety("system")
+cmd VkResult vkGetQueryPoolResults(
+ VkDevice device,
+ VkQueryPool queryPool,
+ u32 firstQuery,
+ u32 queryCount,
+ platform.size_t dataSize,
+ void* pData,
+ VkDeviceSize stride,
+ VkQueryResultFlags flags) {
+ deviceObject := GetDevice(device)
+ queryPoolObject := GetQueryPool(queryPool)
+ assert(queryPoolObject.device == device)
+
+ data := pData[0:dataSize]
+
+ return ?
+}
+
+// Buffer functions
+
+@threadSafety("system")
+cmd VkResult vkCreateBuffer(
+ VkDevice device,
+ const VkBufferCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkBuffer* pBuffer) {
+ assert(pCreateInfo.sType == VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO)
+ deviceObject := GetDevice(device)
+
+ buffer := ?
+ pBuffer[0] = buffer
+ State.Buffers[buffer] = new!BufferObject(device: device)
+
+ return ?
+}
+
+@threadSafety("system")
+cmd void vkDestroyBuffer(
+ VkDevice device,
+ VkBuffer buffer,
+ const VkAllocationCallbacks* pAllocator) {
+ deviceObject := GetDevice(device)
+ bufferObject := GetBuffer(buffer)
+ assert(bufferObject.device == device)
+
+ assert(bufferObject.memory == NULL_HANDLE)
+ State.Buffers[buffer] = null
+}
+
+
+// Buffer view functions
+
+@threadSafety("system")
+cmd VkResult vkCreateBufferView(
+ VkDevice device,
+ const VkBufferViewCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkBufferView* pView) {
+ assert(pCreateInfo.sType == VK_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO)
+ deviceObject := GetDevice(device)
+
+ bufferObject := GetBuffer(pCreateInfo.buffer)
+ assert(bufferObject.device == device)
+
+ view := ?
+ pView[0] = view
+ State.BufferViews[view] = new!BufferViewObject(device: device, buffer: pCreateInfo.buffer)
+
+ return ?
+}
+
+@threadSafety("system")
+cmd void vkDestroyBufferView(
+ VkDevice device,
+ VkBufferView bufferView,
+ const VkAllocationCallbacks* pAllocator) {
+ deviceObject := GetDevice(device)
+ bufferViewObject := GetBufferView(bufferView)
+ assert(bufferViewObject.device == device)
+
+ State.BufferViews[bufferView] = null
+}
+
+
+// Image functions
+
+@threadSafety("system")
+cmd VkResult vkCreateImage(
+ VkDevice device,
+ const VkImageCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkImage* pImage) {
+ assert(pCreateInfo.sType == VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO)
+ deviceObject := GetDevice(device)
+
+ image := ?
+ pImage[0] = image
+ State.Images[image] = new!ImageObject(device: device)
+
+ return ?
+}
+
+@threadSafety("system")
+cmd void vkDestroyImage(
+ VkDevice device,
+ VkImage image,
+ const VkAllocationCallbacks* pAllocator) {
+ deviceObject := GetDevice(device)
+ imageObject := GetImage(image)
+ assert(imageObject.device == device)
+
+ assert(imageObject.memory == NULL_HANDLE)
+ State.Images[image] = null
+}
+
+cmd void vkGetImageSubresourceLayout(
+ VkDevice device,
+ VkImage image,
+ const VkImageSubresource* pSubresource,
+ VkSubresourceLayout* pLayout) {
+ deviceObject := GetDevice(device)
+ imageObject := GetImage(image)
+ assert(imageObject.device == device)
+}
+
+
+// Image view functions
+
+@threadSafety("system")
+cmd VkResult vkCreateImageView(
+ VkDevice device,
+ const VkImageViewCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkImageView* pView) {
+ assert(pCreateInfo.sType == VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO)
+ deviceObject := GetDevice(device)
+
+ imageObject := GetImage(pCreateInfo.image)
+ assert(imageObject.device == device)
+
+ view := ?
+ pView[0] = view
+ State.ImageViews[view] = new!ImageViewObject(device: device, image: pCreateInfo.image)
+
+ return ?
+}
+
+@threadSafety("system")
+cmd void vkDestroyImageView(
+ VkDevice device,
+ VkImageView imageView,
+ const VkAllocationCallbacks* pAllocator) {
+ deviceObject := GetDevice(device)
+ imageViewObject := GetImageView(imageView)
+ assert(imageViewObject.device == device)
+
+ State.ImageViews[imageView] = null
+}
+
+
+// Shader functions
+
+cmd VkResult vkCreateShaderModule(
+ VkDevice device,
+ const VkShaderModuleCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkShaderModule* pShaderModule) {
+ assert(pCreateInfo.sType == VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO)
+ deviceObject := GetDevice(device)
+
+ shaderModule := ?
+ pShaderModule[0] = shaderModule
+ State.ShaderModules[shaderModule] = new!ShaderModuleObject(device: device)
+
+ return ?
+}
+
+cmd void vkDestroyShaderModule(
+ VkDevice device,
+ VkShaderModule shaderModule,
+ const VkAllocationCallbacks* pAllocator) {
+ deviceObject := GetDevice(device)
+ shaderModuleObject := GetShaderModule(shaderModule)
+ assert(shaderModuleObject.device == device)
+
+ State.ShaderModules[shaderModule] = null
+}
+
+
+// Pipeline functions
+
+cmd VkResult vkCreatePipelineCache(
+ VkDevice device,
+ const VkPipelineCacheCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkPipelineCache* pPipelineCache) {
+ assert(pCreateInfo.sType == VK_STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO)
+ deviceObject := GetDevice(device)
+
+ pipelineCache := ?
+ pPipelineCache[0] = pipelineCache
+ State.PipelineCaches[pipelineCache] = new!PipelineCacheObject(device: device)
+
+ return ?
+}
+
+cmd void vkDestroyPipelineCache(
+ VkDevice device,
+ VkPipelineCache pipelineCache,
+ const VkAllocationCallbacks* pAllocator) {
+ deviceObject := GetDevice(device)
+ pipelineCacheObject := GetPipelineCache(pipelineCache)
+ assert(pipelineCacheObject.device == device)
+
+ State.PipelineCaches[pipelineCache] = null
+}
+
+cmd VkResult vkGetPipelineCacheData(
+ VkDevice device,
+ VkPipelineCache pipelineCache,
+ platform.size_t* pDataSize,
+ void* pData) {
+ deviceObject := GetDevice(device)
+ pipelineCacheObject := GetPipelineCache(pipelineCache)
+ assert(pipelineCacheObject.device == device)
+
+ return ?
+}
+
+cmd VkResult vkMergePipelineCaches(
+ VkDevice device,
+ VkPipelineCache dstCache,
+ u32 srcCacheCount,
+ const VkPipelineCache* pSrcCaches) {
+ deviceObject := GetDevice(device)
+ dstCacheObject := GetPipelineCache(dstCache)
+ assert(dstCacheObject.device == device)
+
+ srcCaches := pSrcCaches[0:srcCacheCount]
+ for i in (0 .. srcCacheCount) {
+ srcCache := srcCaches[i]
+ srcCacheObject := GetPipelineCache(srcCache)
+ assert(srcCacheObject.device == device)
+ }
+
+ return ?
+}
+
+cmd VkResult vkCreateGraphicsPipelines(
+ VkDevice device,
+ VkPipelineCache pipelineCache,
+ u32 createInfoCount,
+ const VkGraphicsPipelineCreateInfo* pCreateInfos,
+ const VkAllocationCallbacks* pAllocator,
+ VkPipeline* pPipelines) {
+ deviceObject := GetDevice(device)
+ if pipelineCache != NULL_HANDLE {
+ pipelineCacheObject := GetPipelineCache(pipelineCache)
+ assert(pipelineCacheObject.device == device)
+ }
+
+ createInfos := pCreateInfos[0:createInfoCount]
+ pipelines := pPipelines[0:createInfoCount]
+ for i in (0 .. createInfoCount) {
+ pipeline := ?
+ pipelines[i] = pipeline
+ State.Pipelines[pipeline] = new!PipelineObject(device: device)
+ }
+
+ return ?
+}
+
+cmd VkResult vkCreateComputePipelines(
+ VkDevice device,
+ VkPipelineCache pipelineCache,
+ u32 createInfoCount,
+ const VkComputePipelineCreateInfo* pCreateInfos,
+ const VkAllocationCallbacks* pAllocator,
+ VkPipeline* pPipelines) {
+ deviceObject := GetDevice(device)
+ if pipelineCache != NULL_HANDLE {
+ pipelineCacheObject := GetPipelineCache(pipelineCache)
+ assert(pipelineCacheObject.device == device)
+ }
+
+ createInfos := pCreateInfos[0:createInfoCount]
+ pipelines := pPipelines[0:createInfoCount]
+ for i in (0 .. createInfoCount) {
+ pipeline := ?
+ pipelines[i] = pipeline
+ State.Pipelines[pipeline] = new!PipelineObject(device: device)
+ }
+
+ return ?
+}
+
+@threadSafety("system")
+cmd void vkDestroyPipeline(
+ VkDevice device,
+ VkPipeline pipeline,
+ const VkAllocationCallbacks* pAllocator) {
+ deviceObject := GetDevice(device)
+ pipelineObject := GetPipeline(pipeline)
+ assert(pipelineObject.device == device)
+
+ State.Pipelines[pipeline] = null
+}
+
+
+// Pipeline layout functions
+
+@threadSafety("system")
+cmd VkResult vkCreatePipelineLayout(
+ VkDevice device,
+ const VkPipelineLayoutCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkPipelineLayout* pPipelineLayout) {
+ assert(pCreateInfo.sType == VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO)
+ deviceObject := GetDevice(device)
+
+ pipelineLayout := ?
+ pPipelineLayout[0] = pipelineLayout
+ State.PipelineLayouts[pipelineLayout] = new!PipelineLayoutObject(device: device)
+
+ return ?
+}
+
+@threadSafety("system")
+cmd void vkDestroyPipelineLayout(
+ VkDevice device,
+ VkPipelineLayout pipelineLayout,
+ const VkAllocationCallbacks* pAllocator) {
+ deviceObject := GetDevice(device)
+ pipelineLayoutObject := GetPipelineLayout(pipelineLayout)
+ assert(pipelineLayoutObject.device == device)
+
+ State.PipelineLayouts[pipelineLayout] = null
+}
+
+
+// Sampler functions
+
+@threadSafety("system")
+cmd VkResult vkCreateSampler(
+ VkDevice device,
+ const VkSamplerCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkSampler* pSampler) {
+ assert(pCreateInfo.sType == VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO)
+ deviceObject := GetDevice(device)
+
+ sampler := ?
+ pSampler[0] = sampler
+ State.Samplers[sampler] = new!SamplerObject(device: device)
+
+ return ?
+}
+
+@threadSafety("system")
+cmd void vkDestroySampler(
+ VkDevice device,
+ VkSampler sampler,
+ const VkAllocationCallbacks* pAllocator) {
+ deviceObject := GetDevice(device)
+ samplerObject := GetSampler(sampler)
+ assert(samplerObject.device == device)
+
+ State.Samplers[sampler] = null
+}
+
+
+// Descriptor set functions
+
+@threadSafety("system")
+cmd VkResult vkCreateDescriptorSetLayout(
+ VkDevice device,
+ const VkDescriptorSetLayoutCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkDescriptorSetLayout* pSetLayout) {
+ assert(pCreateInfo.sType == VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO)
+ deviceObject := GetDevice(device)
+
+ setLayout := ?
+ pSetLayout[0] = setLayout
+ State.DescriptorSetLayouts[setLayout] = new!DescriptorSetLayoutObject(device: device)
+
+ return ?
+}
+
+@threadSafety("system")
+cmd void vkDestroyDescriptorSetLayout(
+ VkDevice device,
+ VkDescriptorSetLayout descriptorSetLayout,
+ const VkAllocationCallbacks* pAllocator) {
+ deviceObject := GetDevice(device)
+ descriptorSetLayoutObject := GetDescriptorSetLayout(descriptorSetLayout)
+ assert(descriptorSetLayoutObject.device == device)
+
+ State.DescriptorSetLayouts[descriptorSetLayout] = null
+}
+
+@threadSafety("system")
+cmd VkResult vkCreateDescriptorPool(
+ VkDevice device,
+ const VkDescriptorPoolCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkDescriptorPool* pDescriptorPool) {
+ assert(pCreateInfo.sType == VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO)
+ deviceObject := GetDevice(device)
+
+ descriptorPool := ?
+ pDescriptorPool[0] = descriptorPool
+ State.DescriptorPools[descriptorPool] = new!DescriptorPoolObject(device: device)
+
+ return ?
+}
+
+@threadSafety("system")
+cmd void vkDestroyDescriptorPool(
+ VkDevice device,
+ VkDescriptorPool descriptorPool,
+ const VkAllocationCallbacks* pAllocator) {
+ deviceObject := GetDevice(device)
+ descriptorPoolObject := GetDescriptorPool(descriptorPool)
+ assert(descriptorPoolObject.device == device)
+
+ State.DescriptorPools[descriptorPool] = null
+}
+
+@threadSafety("app")
+cmd VkResult vkResetDescriptorPool(
+ VkDevice device,
+ VkDescriptorPool descriptorPool,
+ VkDescriptorPoolResetFlags flags) {
+ deviceObject := GetDevice(device)
+ descriptorPoolObject := GetDescriptorPool(descriptorPool)
+ assert(descriptorPoolObject.device == device)
+
+ return ?
+}
+
+@threadSafety("app")
+cmd VkResult vkAllocateDescriptorSets(
+ VkDevice device,
+ const VkDescriptorSetAllocateInfo* pAllocateInfo,
+ VkDescriptorSet* pDescriptorSets) {
+ deviceObject := GetDevice(device)
+ allocInfo := pAllocateInfo[0]
+ descriptorPoolObject := GetDescriptorPool(allocInfo.descriptorPool)
+
+ setLayouts := allocInfo.pSetLayouts[0:allocInfo.setCount]
+ for i in (0 .. allocInfo.setCount) {
+ setLayout := setLayouts[i]
+ setLayoutObject := GetDescriptorSetLayout(setLayout)
+ assert(setLayoutObject.device == device)
+ }
+
+ descriptorSets := pDescriptorSets[0:allocInfo.setCount]
+ for i in (0 .. allocInfo.setCount) {
+ descriptorSet := ?
+ descriptorSets[i] = descriptorSet
+ State.DescriptorSets[descriptorSet] = new!DescriptorSetObject(device: device)
+ }
+
+ return ?
+}
+
+cmd VkResult vkFreeDescriptorSets(
+ VkDevice device,
+ VkDescriptorPool descriptorPool,
+ u32 descriptorSetCount,
+ const VkDescriptorSet* pDescriptorSets) {
+ deviceObject := GetDevice(device)
+ descriptorPoolObject := GetDescriptorPool(descriptorPool)
+
+ descriptorSets := pDescriptorSets[0:descriptorSetCount]
+ for i in (0 .. descriptorSetCount) {
+ descriptorSet := descriptorSets[i]
+ descriptorSetObject := GetDescriptorSet(descriptorSet)
+ assert(descriptorSetObject.device == device)
+ State.DescriptorSets[descriptorSet] = null
+ }
+
+ return ?
+}
+
+cmd void vkUpdateDescriptorSets(
+ VkDevice device,
+ u32 descriptorWriteCount,
+ const VkWriteDescriptorSet* pDescriptorWrites,
+ u32 descriptorCopyCount,
+ const VkCopyDescriptorSet* pDescriptorCopies) {
+ deviceObject := GetDevice(device)
+
+ descriptorWrites := pDescriptorWrites[0:descriptorWriteCount]
+ for i in (0 .. descriptorWriteCount) {
+ descriptorWrite := descriptorWrites[i]
+ descriptorWriteObject := GetDescriptorSet(descriptorWrite.dstSet)
+ assert(descriptorWriteObject.device == device)
+ }
+
+ descriptorCopies := pDescriptorCopies[0:descriptorCopyCount]
+ for i in (0 .. descriptorCopyCount) {
+ descriptorCopy := descriptorCopies[i]
+ descriptorCopyObject := GetDescriptorSet(descriptorCopy.dstSet)
+ assert(descriptorCopyObject.device == device)
+ }
+}
+
+
+// Framebuffer functions
+
+@threadSafety("system")
+cmd VkResult vkCreateFramebuffer(
+ VkDevice device,
+ const VkFramebufferCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkFramebuffer* pFramebuffer) {
+ assert(pCreateInfo.sType == VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO)
+ deviceObject := GetDevice(device)
+
+ framebuffer := ?
+ pFramebuffer[0] = framebuffer
+ State.Framebuffers[framebuffer] = new!FramebufferObject(device: device)
+
+ return ?
+}
+
+@threadSafety("system")
+cmd void vkDestroyFramebuffer(
+ VkDevice device,
+ VkFramebuffer framebuffer,
+ const VkAllocationCallbacks* pAllocator) {
+ deviceObject := GetDevice(device)
+ framebufferObject := GetFramebuffer(framebuffer)
+ assert(framebufferObject.device == device)
+
+ State.Framebuffers[framebuffer] = null
+}
+
+
+// Renderpass functions
+
+@threadSafety("system")
+cmd VkResult vkCreateRenderPass(
+ VkDevice device,
+ const VkRenderPassCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkRenderPass* pRenderPass) {
+ assert(pCreateInfo.sType == VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO)
+ deviceObject := GetDevice(device)
+
+ renderpass := ?
+ pRenderPass[0] = renderpass
+ State.RenderPasses[renderpass] = new!RenderPassObject(device: device)
+
+ return ?
+}
+
+@threadSafety("system")
+cmd void vkDestroyRenderPass(
+ VkDevice device,
+ VkRenderPass renderPass,
+ const VkAllocationCallbacks* pAllocator) {
+ deviceObject := GetDevice(device)
+ renderPassObject := GetRenderPass(renderPass)
+ assert(renderPassObject.device == device)
+
+ State.RenderPasses[renderPass] = null
+}
+
+cmd void vkGetRenderAreaGranularity(
+ VkDevice device,
+ VkRenderPass renderPass,
+ VkExtent2D* pGranularity) {
+ deviceObject := GetDevice(device)
+ renderPassObject := GetRenderPass(renderPass)
+
+ granularity := ?
+ pGranularity[0] = granularity
+}
+
+// Command pool functions
+
+cmd VkResult vkCreateCommandPool(
+ VkDevice device,
+ const VkCommandPoolCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkCommandPool* pCommandPool) {
+ assert(pCreateInfo.sType == VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO)
+ deviceObject := GetDevice(device)
+
+ commandPool := ?
+ pCommandPool[0] = commandPool
+ State.CommandPools[commandPool] = new!CommandPoolObject(device: device)
+
+ return ?
+}
+
+cmd void vkDestroyCommandPool(
+ VkDevice device,
+ VkCommandPool commandPool,
+ const VkAllocationCallbacks* pAllocator) {
+ deviceObject := GetDevice(device)
+ commandPoolObject := GetCommandPool(commandPool)
+ assert(commandPoolObject.device == device)
+
+ State.CommandPools[commandPool] = null
+}
+
+cmd VkResult vkResetCommandPool(
+ VkDevice device,
+ VkCommandPool commandPool,
+ VkCommandPoolResetFlags flags) {
+ deviceObject := GetDevice(device)
+ commandPoolObject := GetCommandPool(commandPool)
+ assert(commandPoolObject.device == device)
+
+ return ?
+}
+
+// Command buffer functions
+
+macro void bindCommandBuffer(VkCommandBuffer commandBuffer, any obj, VkDeviceMemory memory) {
+ memoryObject := GetDeviceMemory(memory)
+ memoryObject.boundCommandBuffers[commandBuffer] = commandBuffer
+
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ commandBufferObject.boundObjects[as!u64(obj)] = memory
+}
+
+macro void unbindCommandBuffer(VkCommandBuffer commandBuffer, any obj, VkDeviceMemory memory) {
+ memoryObject := GetDeviceMemory(memory)
+ memoryObject.boundCommandBuffers[commandBuffer] = null
+
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ commandBufferObject.boundObjects[as!u64(obj)] = null
+}
+
+@threadSafety("system")
+cmd VkResult vkAllocateCommandBuffers(
+ VkDevice device,
+ const VkCommandBufferAllocateInfo* pAllocateInfo,
+ VkCommandBuffer* pCommandBuffers) {
+ assert(pAllocateInfo[0].sType == VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO)
+
+ count := pAllocateInfo[0].commandBufferCount
+ commandBuffers := pCommandBuffers[0:count]
+ for i in (0 .. count) {
+ commandBuffer := ?
+ commandBuffers[i] = commandBuffer
+ State.CommandBuffers[commandBuffer] = new!CommandBufferObject(device: device)
+ }
+
+ return ?
+}
+
+@threadSafety("system")
+cmd void vkFreeCommandBuffers(
+ VkDevice device,
+ VkCommandPool commandPool,
+ u32 commandBufferCount,
+ const VkCommandBuffer* pCommandBuffers) {
+ deviceObject := GetDevice(device)
+
+ commandBuffers := pCommandBuffers[0:commandBufferCount]
+ for i in (0 .. commandBufferCount) {
+ commandBufferObject := GetCommandBuffer(commandBuffers[i])
+ assert(commandBufferObject.device == device)
+ // TODO: iterate over boundObjects and clear memory bindings
+ State.CommandBuffers[commandBuffers[i]] = null
+ }
+}
+
+@threadSafety("app")
+cmd VkResult vkBeginCommandBuffer(
+ VkCommandBuffer commandBuffer,
+ const VkCommandBufferBeginInfo* pBeginInfo) {
+ assert(pBeginInfo.sType == VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO)
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+
+ // TODO: iterate over boundObjects and clear memory bindings
+
+ return ?
+}
+
+@threadSafety("app")
+cmd VkResult vkEndCommandBuffer(
+ VkCommandBuffer commandBuffer) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+
+ return ?
+}
+
+@threadSafety("app")
+cmd VkResult vkResetCommandBuffer(
+ VkCommandBuffer commandBuffer,
+ VkCommandBufferResetFlags flags) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+
+ // TODO: iterate over boundObjects and clear memory bindings
+
+ return ?
+}
+
+
+// Command buffer building functions
+
+@threadSafety("app")
+cmd void vkCmdBindPipeline(
+ VkCommandBuffer commandBuffer,
+ VkPipelineBindPoint pipelineBindPoint,
+ VkPipeline pipeline) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ pipelineObject := GetPipeline(pipeline)
+ assert(commandBufferObject.device == pipelineObject.device)
+
+ queue := switch (pipelineBindPoint) {
+ case VK_PIPELINE_BIND_POINT_COMPUTE: VK_QUEUE_COMPUTE_BIT
+ case VK_PIPELINE_BIND_POINT_GRAPHICS: VK_QUEUE_GRAPHICS_BIT
+ }
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, queue)
+}
+
+@threadSafety("app")
+cmd void vkCmdSetViewport(
+ VkCommandBuffer commandBuffer,
+ u32 firstViewport,
+ u32 viewportCount,
+ const VkViewport* pViewports) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_GRAPHICS_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdSetScissor(
+ VkCommandBuffer commandBuffer,
+ u32 firstScissor,
+ u32 scissorCount,
+ const VkRect2D* pScissors) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_GRAPHICS_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdSetLineWidth(
+ VkCommandBuffer commandBuffer,
+ f32 lineWidth) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_GRAPHICS_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdSetDepthBias(
+ VkCommandBuffer commandBuffer,
+ f32 depthBiasConstantFactor,
+ f32 depthBiasClamp,
+ f32 depthBiasSlopeFactor) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_GRAPHICS_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdSetBlendConstants(
+ VkCommandBuffer commandBuffer,
+ // TODO(jessehall): apic only supports 'const' on pointer types. Using
+ // an annotation as a quick hack to pass this to the template without
+ // having to modify the AST and semantic model.
+ @readonly f32[4] blendConstants) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_GRAPHICS_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdSetDepthBounds(
+ VkCommandBuffer commandBuffer,
+ f32 minDepthBounds,
+ f32 maxDepthBounds) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_GRAPHICS_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdSetStencilCompareMask(
+ VkCommandBuffer commandBuffer,
+ VkStencilFaceFlags faceMask,
+ u32 compareMask) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_GRAPHICS_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdSetStencilWriteMask(
+ VkCommandBuffer commandBuffer,
+ VkStencilFaceFlags faceMask,
+ u32 writeMask) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_GRAPHICS_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdSetStencilReference(
+ VkCommandBuffer commandBuffer,
+ VkStencilFaceFlags faceMask,
+ u32 reference) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_GRAPHICS_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdBindDescriptorSets(
+ VkCommandBuffer commandBuffer,
+ VkPipelineBindPoint pipelineBindPoint,
+ VkPipelineLayout layout,
+ u32 firstSet,
+ u32 descriptorSetCount,
+ const VkDescriptorSet* pDescriptorSets,
+ u32 dynamicOffsetCount,
+ const u32* pDynamicOffsets) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+
+ descriptorSets := pDescriptorSets[0:descriptorSetCount]
+ for i in (0 .. descriptorSetCount) {
+ descriptorSet := descriptorSets[i]
+ descriptorSetObject := GetDescriptorSet(descriptorSet)
+ assert(commandBufferObject.device == descriptorSetObject.device)
+ }
+
+ dynamicOffsets := pDynamicOffsets[0:dynamicOffsetCount]
+ for i in (0 .. dynamicOffsetCount) {
+ dynamicOffset := dynamicOffsets[i]
+ }
+
+ queue := switch (pipelineBindPoint) {
+ case VK_PIPELINE_BIND_POINT_COMPUTE: VK_QUEUE_COMPUTE_BIT
+ case VK_PIPELINE_BIND_POINT_GRAPHICS: VK_QUEUE_GRAPHICS_BIT
+ }
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, queue)
+}
+
+@threadSafety("app")
+cmd void vkCmdBindIndexBuffer(
+ VkCommandBuffer commandBuffer,
+ VkBuffer buffer,
+ VkDeviceSize offset,
+ VkIndexType indexType) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ bufferObject := GetBuffer(buffer)
+ assert(commandBufferObject.device == bufferObject.device)
+
+ bindCommandBuffer(commandBuffer, buffer, bufferObject.memory)
+
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_GRAPHICS_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdBindVertexBuffers(
+ VkCommandBuffer commandBuffer,
+ u32 firstBinding,
+ u32 bindingCount,
+ const VkBuffer* pBuffers,
+ const VkDeviceSize* pOffsets) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+
+ // TODO: check if not [firstBinding:firstBinding+bindingCount]
+ buffers := pBuffers[0:bindingCount]
+ offsets := pOffsets[0:bindingCount]
+ for i in (0 .. bindingCount) {
+ buffer := buffers[i]
+ offset := offsets[i]
+ bufferObject := GetBuffer(buffer)
+ assert(commandBufferObject.device == bufferObject.device)
+
+ bindCommandBuffer(commandBuffer, buffer, bufferObject.memory)
+ }
+
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_GRAPHICS_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdDraw(
+ VkCommandBuffer commandBuffer,
+ u32 vertexCount,
+ u32 instanceCount,
+ u32 firstVertex,
+ u32 firstInstance) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_GRAPHICS_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdDrawIndexed(
+ VkCommandBuffer commandBuffer,
+ u32 indexCount,
+ u32 instanceCount,
+ u32 firstIndex,
+ s32 vertexOffset,
+ u32 firstInstance) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_GRAPHICS_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdDrawIndirect(
+ VkCommandBuffer commandBuffer,
+ VkBuffer buffer,
+ VkDeviceSize offset,
+ u32 drawCount,
+ u32 stride) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ bufferObject := GetBuffer(buffer)
+ assert(commandBufferObject.device == bufferObject.device)
+
+ bindCommandBuffer(commandBuffer, buffer, bufferObject.memory)
+
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_GRAPHICS_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdDrawIndexedIndirect(
+ VkCommandBuffer commandBuffer,
+ VkBuffer buffer,
+ VkDeviceSize offset,
+ u32 drawCount,
+ u32 stride) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ bufferObject := GetBuffer(buffer)
+ assert(commandBufferObject.device == bufferObject.device)
+
+ bindCommandBuffer(commandBuffer, buffer, bufferObject.memory)
+
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_GRAPHICS_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdDispatch(
+ VkCommandBuffer commandBuffer,
+ u32 x,
+ u32 y,
+ u32 z) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_COMPUTE_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdDispatchIndirect(
+ VkCommandBuffer commandBuffer,
+ VkBuffer buffer,
+ VkDeviceSize offset) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ bufferObject := GetBuffer(buffer)
+ assert(commandBufferObject.device == bufferObject.device)
+
+ bindCommandBuffer(commandBuffer, buffer, bufferObject.memory)
+
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_COMPUTE_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdCopyBuffer(
+ VkCommandBuffer commandBuffer,
+ VkBuffer srcBuffer,
+ VkBuffer dstBuffer,
+ u32 regionCount,
+ const VkBufferCopy* pRegions) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ srcBufferObject := GetBuffer(srcBuffer)
+ dstBufferObject := GetBuffer(dstBuffer)
+ assert(commandBufferObject.device == srcBufferObject.device)
+ assert(commandBufferObject.device == dstBufferObject.device)
+
+ regions := pRegions[0:regionCount]
+ for i in (0 .. regionCount) {
+ region := regions[i]
+ }
+
+ bindCommandBuffer(commandBuffer, srcBuffer, srcBufferObject.memory)
+ bindCommandBuffer(commandBuffer, dstBuffer, dstBufferObject.memory)
+
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_TRANSFER_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdCopyImage(
+ VkCommandBuffer commandBuffer,
+ VkImage srcImage,
+ VkImageLayout srcImageLayout,
+ VkImage dstImage,
+ VkImageLayout dstImageLayout,
+ u32 regionCount,
+ const VkImageCopy* pRegions) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ srcImageObject := GetImage(srcImage)
+ dstImageObject := GetImage(dstImage)
+ assert(commandBufferObject.device == srcImageObject.device)
+ assert(commandBufferObject.device == dstImageObject.device)
+
+ regions := pRegions[0:regionCount]
+ for i in (0 .. regionCount) {
+ region := regions[i]
+ }
+
+ bindCommandBuffer(commandBuffer, srcImage, srcImageObject.memory)
+ bindCommandBuffer(commandBuffer, dstImage, dstImageObject.memory)
+
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_TRANSFER_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdBlitImage(
+ VkCommandBuffer commandBuffer,
+ VkImage srcImage,
+ VkImageLayout srcImageLayout,
+ VkImage dstImage,
+ VkImageLayout dstImageLayout,
+ u32 regionCount,
+ const VkImageBlit* pRegions,
+ VkFilter filter) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ srcImageObject := GetImage(srcImage)
+ dstImageObject := GetImage(dstImage)
+ assert(commandBufferObject.device == srcImageObject.device)
+ assert(commandBufferObject.device == dstImageObject.device)
+
+ regions := pRegions[0:regionCount]
+ for i in (0 .. regionCount) {
+ region := regions[i]
+ }
+
+ bindCommandBuffer(commandBuffer, srcImage, srcImageObject.memory)
+ bindCommandBuffer(commandBuffer, dstImage, dstImageObject.memory)
+
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_GRAPHICS_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdCopyBufferToImage(
+ VkCommandBuffer commandBuffer,
+ VkBuffer srcBuffer,
+ VkImage dstImage,
+ VkImageLayout dstImageLayout,
+ u32 regionCount,
+ const VkBufferImageCopy* pRegions) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ srcBufferObject := GetBuffer(srcBuffer)
+ dstImageObject := GetImage(dstImage)
+ assert(commandBufferObject.device == srcBufferObject.device)
+ assert(commandBufferObject.device == dstImageObject.device)
+
+ regions := pRegions[0:regionCount]
+ for i in (0 .. regionCount) {
+ region := regions[i]
+ }
+
+ bindCommandBuffer(commandBuffer, srcBuffer, srcBufferObject.memory)
+ bindCommandBuffer(commandBuffer, dstImage, dstImageObject.memory)
+
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_TRANSFER_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdCopyImageToBuffer(
+ VkCommandBuffer commandBuffer,
+ VkImage srcImage,
+ VkImageLayout srcImageLayout,
+ VkBuffer dstBuffer,
+ u32 regionCount,
+ const VkBufferImageCopy* pRegions) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ srcImageObject := GetImage(srcImage)
+ dstBufferObject := GetBuffer(dstBuffer)
+ assert(commandBufferObject.device == srcImageObject.device)
+ assert(commandBufferObject.device == dstBufferObject.device)
+
+ regions := pRegions[0:regionCount]
+ for i in (0 .. regionCount) {
+ region := regions[i]
+ }
+
+ bindCommandBuffer(commandBuffer, srcImage, srcImageObject.memory)
+ bindCommandBuffer(commandBuffer, dstBuffer, dstBufferObject.memory)
+
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_TRANSFER_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdUpdateBuffer(
+ VkCommandBuffer commandBuffer,
+ VkBuffer dstBuffer,
+ VkDeviceSize dstOffset,
+ VkDeviceSize dataSize,
+ const u32* pData) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ dstBufferObject := GetBuffer(dstBuffer)
+ assert(commandBufferObject.device == dstBufferObject.device)
+
+ data := pData[0:dataSize]
+
+ bindCommandBuffer(commandBuffer, dstBuffer, dstBufferObject.memory)
+
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_TRANSFER_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdFillBuffer(
+ VkCommandBuffer commandBuffer,
+ VkBuffer dstBuffer,
+ VkDeviceSize dstOffset,
+ VkDeviceSize size,
+ u32 data) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ dstBufferObject := GetBuffer(dstBuffer)
+ assert(commandBufferObject.device == dstBufferObject.device)
+
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_TRANSFER_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdClearColorImage(
+ VkCommandBuffer commandBuffer,
+ VkImage image,
+ VkImageLayout imageLayout,
+ const VkClearColorValue* pColor,
+ u32 rangeCount,
+ const VkImageSubresourceRange* pRanges) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ imageObject := GetImage(image)
+ assert(commandBufferObject.device == imageObject.device)
+
+ ranges := pRanges[0:rangeCount]
+ for i in (0 .. rangeCount) {
+ range := ranges[i]
+ }
+
+ bindCommandBuffer(commandBuffer, image, imageObject.memory)
+
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_GRAPHICS_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdClearDepthStencilImage(
+ VkCommandBuffer commandBuffer,
+ VkImage image,
+ VkImageLayout imageLayout,
+ const VkClearDepthStencilValue* pDepthStencil,
+ u32 rangeCount,
+ const VkImageSubresourceRange* pRanges) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ imageObject := GetImage(image)
+ assert(commandBufferObject.device == imageObject.device)
+
+ ranges := pRanges[0:rangeCount]
+ for i in (0 .. rangeCount) {
+ range := ranges[i]
+ }
+
+ bindCommandBuffer(commandBuffer, image, imageObject.memory)
+
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_GRAPHICS_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdClearAttachments(
+ VkCommandBuffer commandBuffer,
+ u32 attachmentCount,
+ const VkClearAttachment* pAttachments,
+ u32 rectCount,
+ const VkClearRect* pRects) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+
+ rects := pRects[0:rectCount]
+ for i in (0 .. rectCount) {
+ rect := rects[i]
+ }
+
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_GRAPHICS_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdResolveImage(
+ VkCommandBuffer commandBuffer,
+ VkImage srcImage,
+ VkImageLayout srcImageLayout,
+ VkImage dstImage,
+ VkImageLayout dstImageLayout,
+ u32 regionCount,
+ const VkImageResolve* pRegions) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ srcImageObject := GetImage(srcImage)
+ dstImageObject := GetImage(dstImage)
+ assert(commandBufferObject.device == srcImageObject.device)
+ assert(commandBufferObject.device == dstImageObject.device)
+
+ regions := pRegions[0:regionCount]
+ for i in (0 .. regionCount) {
+ region := regions[i]
+ }
+
+ bindCommandBuffer(commandBuffer, srcImage, srcImageObject.memory)
+ bindCommandBuffer(commandBuffer, dstImage, dstImageObject.memory)
+
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_GRAPHICS_BIT)
+}
+
+@threadSafety("app")
+cmd void vkCmdSetEvent(
+ VkCommandBuffer commandBuffer,
+ VkEvent event,
+ VkPipelineStageFlags stageMask) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ eventObject := GetEvent(event)
+ assert(commandBufferObject.device == eventObject.device)
+}
+
+@threadSafety("app")
+cmd void vkCmdResetEvent(
+ VkCommandBuffer commandBuffer,
+ VkEvent event,
+ VkPipelineStageFlags stageMask) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ eventObject := GetEvent(event)
+ assert(commandBufferObject.device == eventObject.device)
+}
+
+@threadSafety("app")
+cmd void vkCmdWaitEvents(
+ VkCommandBuffer commandBuffer,
+ u32 eventCount,
+ const VkEvent* pEvents,
+ VkPipelineStageFlags srcStageMask,
+ VkPipelineStageFlags dstStageMask,
+ u32 memoryBarrierCount,
+ const VkMemoryBarrier* pMemoryBarriers,
+ u32 bufferMemoryBarrierCount,
+ const VkBufferMemoryBarrier* pBufferMemoryBarriers,
+ u32 imageMemoryBarrierCount,
+ const VkImageMemoryBarrier* pImageMemoryBarriers) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+
+ events := pEvents[0:eventCount]
+ for i in (0 .. eventCount) {
+ event := events[i]
+ eventObject := GetEvent(event)
+ assert(commandBufferObject.device == eventObject.device)
+ }
+
+ memoryBarriers := pMemoryBarriers[0:memoryBarrierCount]
+ for i in (0 .. memoryBarrierCount) {
+ memoryBarrier := memoryBarriers[i]
+ }
+ bufferMemoryBarriers := pBufferMemoryBarriers[0:bufferMemoryBarrierCount]
+ for i in (0 .. bufferMemoryBarrierCount) {
+ bufferMemoryBarrier := bufferMemoryBarriers[i]
+ bufferObject := GetBuffer(bufferMemoryBarrier.buffer)
+ assert(bufferObject.device == commandBufferObject.device)
+ }
+ imageMemoryBarriers := pImageMemoryBarriers[0:imageMemoryBarrierCount]
+ for i in (0 .. imageMemoryBarrierCount) {
+ imageMemoryBarrier := imageMemoryBarriers[i]
+ imageObject := GetImage(imageMemoryBarrier.image)
+ assert(imageObject.device == commandBufferObject.device)
+ }
+}
+
+@threadSafety("app")
+cmd void vkCmdPipelineBarrier(
+ VkCommandBuffer commandBuffer,
+ VkPipelineStageFlags srcStageMask,
+ VkPipelineStageFlags dstStageMask,
+ VkDependencyFlags dependencyFlags,
+ u32 memoryBarrierCount,
+ const VkMemoryBarrier* pMemoryBarriers,
+ u32 bufferMemoryBarrierCount,
+ const VkBufferMemoryBarrier* pBufferMemoryBarriers,
+ u32 imageMemoryBarrierCount,
+ const VkImageMemoryBarrier* pImageMemoryBarriers) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+
+ memoryBarriers := pMemoryBarriers[0:memoryBarrierCount]
+ for i in (0 .. memoryBarrierCount) {
+ memoryBarrier := memoryBarriers[i]
+ }
+ bufferMemoryBarriers := pBufferMemoryBarriers[0:bufferMemoryBarrierCount]
+ for i in (0 .. bufferMemoryBarrierCount) {
+ bufferMemoryBarrier := bufferMemoryBarriers[i]
+ bufferObject := GetBuffer(bufferMemoryBarrier.buffer)
+ assert(bufferObject.device == commandBufferObject.device)
+ }
+ imageMemoryBarriers := pImageMemoryBarriers[0:imageMemoryBarrierCount]
+ for i in (0 .. imageMemoryBarrierCount) {
+ imageMemoryBarrier := imageMemoryBarriers[i]
+ imageObject := GetImage(imageMemoryBarrier.image)
+ assert(imageObject.device == commandBufferObject.device)
+ }
+}
+
+@threadSafety("app")
+cmd void vkCmdBeginQuery(
+ VkCommandBuffer commandBuffer,
+ VkQueryPool queryPool,
+ u32 query,
+ VkQueryControlFlags flags) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ queryPoolObject := GetQueryPool(queryPool)
+ assert(commandBufferObject.device == queryPoolObject.device)
+}
+
+@threadSafety("app")
+cmd void vkCmdEndQuery(
+ VkCommandBuffer commandBuffer,
+ VkQueryPool queryPool,
+ u32 query) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ queryPoolObject := GetQueryPool(queryPool)
+ assert(commandBufferObject.device == queryPoolObject.device)
+}
+
+@threadSafety("app")
+cmd void vkCmdResetQueryPool(
+ VkCommandBuffer commandBuffer,
+ VkQueryPool queryPool,
+ u32 firstQuery,
+ u32 queryCount) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ queryPoolObject := GetQueryPool(queryPool)
+ assert(commandBufferObject.device == queryPoolObject.device)
+}
+
+@threadSafety("app")
+cmd void vkCmdWriteTimestamp(
+ VkCommandBuffer commandBuffer,
+ VkPipelineStageFlagBits pipelineStage,
+ VkQueryPool queryPool,
+ u32 query) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ queryPoolObject := GetQueryPool(queryPool)
+ assert(commandBufferObject.device == queryPoolObject.device)
+}
+
+@threadSafety("app")
+cmd void vkCmdCopyQueryPoolResults(
+ VkCommandBuffer commandBuffer,
+ VkQueryPool queryPool,
+ u32 firstQuery,
+ u32 queryCount,
+ VkBuffer dstBuffer,
+ VkDeviceSize dstOffset,
+ VkDeviceSize stride,
+ VkQueryResultFlags flags) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ queryPoolObject := GetQueryPool(queryPool)
+ dstBufferObject := GetBuffer(dstBuffer)
+ assert(commandBufferObject.device == queryPoolObject.device)
+ assert(commandBufferObject.device == dstBufferObject.device)
+}
+
+cmd void vkCmdPushConstants(
+ VkCommandBuffer commandBuffer,
+ VkPipelineLayout layout,
+ VkShaderStageFlags stageFlags,
+ u32 offset,
+ u32 size,
+ const void* pValues) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ layoutObject := GetPipelineLayout(layout)
+ assert(commandBufferObject.device == layoutObject.device)
+}
+
+@threadSafety("app")
+cmd void vkCmdBeginRenderPass(
+ VkCommandBuffer commandBuffer,
+ const VkRenderPassBeginInfo* pRenderPassBegin,
+ VkSubpassContents contents) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+ renderPassObject := GetRenderPass(pRenderPassBegin.renderPass)
+ framebufferObject := GetFramebuffer(pRenderPassBegin.framebuffer)
+ assert(commandBufferObject.device == renderPassObject.device)
+ assert(commandBufferObject.device == framebufferObject.device)
+
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_GRAPHICS_BIT)
+}
+
+cmd void vkCmdNextSubpass(
+ VkCommandBuffer commandBuffer,
+ VkSubpassContents contents) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+}
+
+@threadSafety("app")
+cmd void vkCmdEndRenderPass(
+ VkCommandBuffer commandBuffer) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+
+ commandBufferObject.queueFlags = AddQueueFlag(commandBufferObject.queueFlags, VK_QUEUE_GRAPHICS_BIT)
+}
+
+cmd void vkCmdExecuteCommands(
+ VkCommandBuffer commandBuffer,
+ u32 commandBufferCount,
+ const VkCommandBuffer* pCommandBuffers) {
+ commandBufferObject := GetCommandBuffer(commandBuffer)
+
+ commandBuffers := pCommandBuffers[0:commandBufferCount]
+ for i in (0 .. commandBufferCount) {
+ secondaryCommandBuffer := commandBuffers[i]
+ secondaryCommandBufferObject := GetCommandBuffer(secondaryCommandBuffer)
+ assert(commandBufferObject.device == secondaryCommandBufferObject.device)
+ }
+}
+
+@extension("VK_KHR_surface")
+cmd void vkDestroySurfaceKHR(
+ VkInstance instance,
+ VkSurfaceKHR surface,
+ const VkAllocationCallbacks* pAllocator) {
+ instanceObject := GetInstance(instance)
+ surfaceObject := GetSurface(surface)
+ assert(surfaceObject.instance == instance)
+
+ State.Surfaces[surface] = null
+}
+
+@extension("VK_KHR_surface")
+cmd VkResult vkGetPhysicalDeviceSurfaceSupportKHR(
+ VkPhysicalDevice physicalDevice,
+ u32 queueFamilyIndex,
+ VkSurfaceKHR surface,
+ VkBool32* pSupported) {
+ physicalDeviceObject := GetPhysicalDevice(physicalDevice)
+
+ supported := ?
+ pSupported[0] = supported
+
+ return ?
+}
+
+@extension("VK_KHR_surface")
+cmd VkResult vkGetPhysicalDeviceSurfaceCapabilitiesKHR(
+ VkPhysicalDevice physicalDevice,
+ VkSurfaceKHR surface,
+ VkSurfaceCapabilitiesKHR* pSurfaceCapabilities) {
+ physicalDeviceObject := GetPhysicalDevice(physicalDevice)
+
+ surfaceCapabilities := ?
+ pSurfaceCapabilities[0] = surfaceCapabilities
+
+ return ?
+}
+
+@extension("VK_KHR_surface")
+cmd VkResult vkGetPhysicalDeviceSurfaceFormatsKHR(
+ VkPhysicalDevice physicalDevice,
+ VkSurfaceKHR surface,
+ u32* pSurfaceFormatCount,
+ VkSurfaceFormatKHR* pSurfaceFormats) {
+ physicalDeviceObject := GetPhysicalDevice(physicalDevice)
+
+ count := as!u32(?)
+ pSurfaceFormatCount[0] = count
+ surfaceFormats := pSurfaceFormats[0:count]
+
+ for i in (0 .. count) {
+ surfaceFormat := ?
+ surfaceFormats[i] = surfaceFormat
+ }
+
+ return ?
+}
+
+@extension("VK_KHR_surface")
+cmd VkResult vkGetPhysicalDeviceSurfacePresentModesKHR(
+ VkPhysicalDevice physicalDevice,
+ VkSurfaceKHR surface,
+ u32* pPresentModeCount,
+ VkPresentModeKHR* pPresentModes) {
+ physicalDeviceObject := GetPhysicalDevice(physicalDevice)
+
+ count := as!u32(?)
+ pPresentModeCount[0] = count
+ presentModes := pPresentModes[0:count]
+
+ for i in (0 .. count) {
+ presentMode := ?
+ presentModes[i] = presentMode
+ }
+
+ return ?
+}
+
+@extension("VK_KHR_swapchain")
+cmd VkResult vkCreateSwapchainKHR(
+ VkDevice device,
+ const VkSwapchainCreateInfoKHR* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkSwapchainKHR* pSwapchain) {
+ assert(pCreateInfo.sType == VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR)
+ deviceObject := GetDevice(device)
+
+ swapchain := ?
+ pSwapchain[0] = swapchain
+ State.Swapchains[swapchain] = new!SwapchainObject(device: device)
+
+ return ?
+}
+
+@extension("VK_KHR_swapchain")
+cmd void vkDestroySwapchainKHR(
+ VkDevice device,
+ VkSwapchainKHR swapchain,
+ const VkAllocationCallbacks* pAllocator) {
+ deviceObject := GetDevice(device)
+ swapchainObject := GetSwapchain(swapchain)
+ assert(swapchainObject.device == device)
+
+ State.Swapchains[swapchain] = null
+}
+
+@extension("VK_KHR_swapchain")
+cmd VkResult vkGetSwapchainImagesKHR(
+ VkDevice device,
+ VkSwapchainKHR swapchain,
+ u32* pSwapchainImageCount,
+ VkImage* pSwapchainImages) {
+ deviceObject := GetDevice(device)
+
+ count := as!u32(?)
+ pSwapchainImageCount[0] = count
+ swapchainImages := pSwapchainImages[0:count]
+
+ for i in (0 .. count) {
+ swapchainImage := ?
+ swapchainImages[i] = swapchainImage
+ State.Images[swapchainImage] = new!ImageObject(device: device)
+ }
+
+ return ?
+}
+
+@extension("VK_KHR_swapchain")
+cmd VkResult vkAcquireNextImageKHR(
+ VkDevice device,
+ VkSwapchainKHR swapchain,
+ u64 timeout,
+ VkSemaphore semaphore,
+ VkFence fence,
+ u32* pImageIndex) {
+ deviceObject := GetDevice(device)
+ swapchainObject := GetSwapchain(swapchain)
+
+ imageIndex := ?
+ pImageIndex[0] = imageIndex
+
+ return ?
+}
+
+@extension("VK_KHR_swapchain")
+cmd VkResult vkQueuePresentKHR(
+ VkQueue queue,
+ const VkPresentInfoKHR* pPresentInfo) {
+ queueObject := GetQueue(queue)
+
+ presentInfo := pPresentInfo[0]
+
+ return ?
+}
+
+@extension("VK_KHR_display")
+cmd VkResult vkGetPhysicalDeviceDisplayPropertiesKHR(
+ VkPhysicalDevice physicalDevice,
+ u32* pPropertyCount,
+ VkDisplayPropertiesKHR* pProperties) {
+ physicalDeviceObject := GetPhysicalDevice(physicalDevice)
+ return ?
+}
+
+@extension("VK_KHR_display")
+cmd VkResult vkGetPhysicalDeviceDisplayPlanePropertiesKHR(
+ VkPhysicalDevice physicalDevice,
+ u32* pPropertyCount,
+ VkDisplayPlanePropertiesKHR* pProperties) {
+ physicalDeviceObject := GetPhysicalDevice(physicalDevice)
+ return ?
+}
+
+@extension("VK_KHR_display")
+cmd VkResult vkGetDisplayPlaneSupportedDisplaysKHR(
+ VkPhysicalDevice physicalDevice,
+ u32 planeIndex,
+ u32* pDisplayCount,
+ VkDisplayKHR* pDisplays) {
+ physicalDeviceObject := GetPhysicalDevice(physicalDevice)
+ return ?
+}
+
+@extension("VK_KHR_display")
+cmd VkResult vkGetDisplayModePropertiesKHR(
+ VkPhysicalDevice physicalDevice,
+ VkDisplayKHR display,
+ u32* pPropertyCount,
+ VkDisplayModePropertiesKHR* pProperties) {
+ physicalDeviceObject := GetPhysicalDevice(physicalDevice)
+ return ?
+}
+
+@extension("VK_KHR_display")
+cmd VkResult vkCreateDisplayModeKHR(
+ VkPhysicalDevice physicalDevice,
+ VkDisplayKHR display,
+ const VkDisplayModeCreateInfoKHR* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkDisplayModeKHR* pMode) {
+ physicalDeviceObject := GetPhysicalDevice(physicalDevice)
+ return ?
+}
+
+@extension("VK_KHR_display")
+cmd VkResult vkGetDisplayPlaneCapabilitiesKHR(
+ VkPhysicalDevice physicalDevice,
+ VkDisplayModeKHR mode,
+ u32 planeIndex,
+ VkDisplayPlaneCapabilitiesKHR* pCapabilities) {
+ physicalDeviceObject := GetPhysicalDevice(physicalDevice)
+ return ?
+}
+
+@extension("VK_KHR_display")
+cmd VkResult vkCreateDisplayPlaneSurfaceKHR(
+ VkInstance instance,
+ const VkDisplaySurfaceCreateInfoKHR* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkSurfaceKHR* pSurface) {
+ instanceObject := GetInstance(instance)
+ return ?
+}
+
+@extension("VK_KHR_display_swapchain")
+cmd VkResult vkCreateSharedSwapchainsKHR(
+ VkDevice device,
+ u32 swapchainCount,
+ const VkSwapchainCreateInfoKHR* pCreateInfos,
+ const VkAllocationCallbacks* pAllocator,
+ VkSwapchainKHR* pSwapchains) {
+ deviceObject := GetDevice(device)
+ return ?
+}
+
+@extension("VK_KHR_xlib_surface")
+cmd VkResult vkCreateXlibSurfaceKHR(
+ VkInstance instance,
+ const VkXlibSurfaceCreateInfoKHR* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkSurfaceKHR* pSurface) {
+ instanceObject := GetInstance(instance)
+ return ?
+}
+
+@extension("VK_KHR_xlib_surface")
+cmd VkBool32 vkGetPhysicalDeviceXlibPresentationSupportKHR(
+ VkPhysicalDevice physicalDevice,
+ u32 queueFamilyIndex,
+ platform.Display* dpy,
+ platform.VisualID visualID) {
+ physicalDeviceObject := GetPhysicalDevice(physicalDevice)
+ return ?
+}
+
+@extension("VK_KHR_xcb_surface")
+cmd VkResult vkCreateXcbSurfaceKHR(
+ VkInstance instance,
+ const VkXcbSurfaceCreateInfoKHR* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkSurfaceKHR* pSurface) {
+ instanceObject := GetInstance(instance)
+ return ?
+}
+
+@extension("VK_KHR_xcb_surface")
+cmd VkBool32 vkGetPhysicalDeviceXcbPresentationSupportKHR(
+ VkPhysicalDevice physicalDevice,
+ u32 queueFamilyIndex,
+ platform.xcb_connection_t* connection,
+ platform.xcb_visualid_t visual_id) {
+ physicalDeviceObject := GetPhysicalDevice(physicalDevice)
+ return ?
+}
+
+@extension("VK_KHR_wayland_surface")
+cmd VkResult vkCreateWaylandSurfaceKHR(
+ VkInstance instance,
+ const VkWaylandSurfaceCreateInfoKHR* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkSurfaceKHR* pSurface) {
+ instanceObject := GetInstance(instance)
+ return ?
+}
+
+@extension("VK_KHR_wayland_surface")
+cmd VkBool32 vkGetPhysicalDeviceWaylandPresentationSupportKHR(
+ VkPhysicalDevice physicalDevice,
+ u32 queueFamilyIndex,
+ platform.wl_display* display) {
+ physicalDeviceObject := GetPhysicalDevice(physicalDevice)
+ return ?
+}
+
+@extension("VK_KHR_mir_surface")
+cmd VkResult vkCreateMirSurfaceKHR(
+ VkInstance instance,
+ const VkMirSurfaceCreateInfoKHR* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkSurfaceKHR* pSurface) {
+ instanceObject := GetInstance(instance)
+ return ?
+}
+
+@extension("VK_KHR_mir_surface")
+cmd VkBool32 vkGetPhysicalDeviceMirPresentationSupportKHR(
+ VkPhysicalDevice physicalDevice,
+ u32 queueFamilyIndex,
+ platform.MirConnection* connection) {
+ physicalDeviceObject := GetPhysicalDevice(physicalDevice)
+ return ?
+}
+
+@extension("VK_KHR_android_surface")
+cmd VkResult vkCreateAndroidSurfaceKHR(
+ VkInstance instance,
+ const VkAndroidSurfaceCreateInfoKHR* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkSurfaceKHR* pSurface) {
+ instanceObject := GetInstance(instance)
+ return ?
+}
+
+@extension("VK_KHR_win32_surface")
+cmd VkResult vkCreateWin32SurfaceKHR(
+ VkInstance instance,
+ const VkWin32SurfaceCreateInfoKHR* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkSurfaceKHR* pSurface) {
+ instanceObject := GetInstance(instance)
+ return ?
+}
+
+@extension("VK_KHR_win32_surface")
+cmd VkBool32 vkGetPhysicalDeviceWin32PresentationSupportKHR(
+ VkPhysicalDevice physicalDevice,
+ u32 queueFamilyIndex) {
+ physicalDeviceObject := GetPhysicalDevice(physicalDevice)
+ return ?
+}
+
+@extension("VK_EXT_debug_report")
+@external type void* PFN_vkDebugReportCallbackEXT
+@extension("VK_EXT_debug_report")
+@pfn cmd VkBool32 vkDebugReportCallbackEXT(
+ VkDebugReportFlagsEXT flags,
+ VkDebugReportObjectTypeEXT objectType,
+ u64 object,
+ platform.size_t location,
+ s32 messageCode,
+ const char* pLayerPrefix,
+ const char* pMessage,
+ void* pUserData) {
+ return ?
+}
+
+@extension("VK_EXT_debug_report")
+cmd VkResult vkCreateDebugReportCallbackEXT(
+ VkInstance instance,
+ const VkDebugReportCallbackCreateInfoEXT* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkDebugReportCallbackEXT* pCallback) {
+ return ?
+}
+
+@extension("VK_EXT_debug_report")
+cmd void vkDestroyDebugReportCallbackEXT(
+ VkInstance instance,
+ VkDebugReportCallbackEXT callback,
+ const VkAllocationCallbacks* pAllocator) {
+}
+
+@extension("VK_EXT_debug_report")
+cmd void vkDebugReportMessageEXT(
+ VkInstance instance,
+ VkDebugReportFlagsEXT flags,
+ VkDebugReportObjectTypeEXT objectType,
+ u64 object,
+ platform.size_t location,
+ s32 messageCode,
+ const char* pLayerPrefix,
+ const char* pMessage) {
+}
+
+
+////////////////
+// Validation //
+////////////////
+
+extern void validate(string layerName, bool condition, string message)
+
+
+/////////////////////////////
+// Internal State Tracking //
+/////////////////////////////
+
+StateObject State
+
+@internal class StateObject {
+ // Dispatchable objects.
+ map!(VkInstance, ref!InstanceObject) Instances
+ map!(VkPhysicalDevice, ref!PhysicalDeviceObject) PhysicalDevices
+ map!(VkDevice, ref!DeviceObject) Devices
+ map!(VkQueue, ref!QueueObject) Queues
+ map!(VkCommandBuffer, ref!CommandBufferObject) CommandBuffers
+
+ // Non-dispatchable objects.
+ map!(VkDeviceMemory, ref!DeviceMemoryObject) DeviceMemories
+ map!(VkBuffer, ref!BufferObject) Buffers
+ map!(VkBufferView, ref!BufferViewObject) BufferViews
+ map!(VkImage, ref!ImageObject) Images
+ map!(VkImageView, ref!ImageViewObject) ImageViews
+ map!(VkShaderModule, ref!ShaderModuleObject) ShaderModules
+ map!(VkPipeline, ref!PipelineObject) Pipelines
+ map!(VkPipelineLayout, ref!PipelineLayoutObject) PipelineLayouts
+ map!(VkSampler, ref!SamplerObject) Samplers
+ map!(VkDescriptorSet, ref!DescriptorSetObject) DescriptorSets
+ map!(VkDescriptorSetLayout, ref!DescriptorSetLayoutObject) DescriptorSetLayouts
+ map!(VkDescriptorPool, ref!DescriptorPoolObject) DescriptorPools
+ map!(VkFence, ref!FenceObject) Fences
+ map!(VkSemaphore, ref!SemaphoreObject) Semaphores
+ map!(VkEvent, ref!EventObject) Events
+ map!(VkQueryPool, ref!QueryPoolObject) QueryPools
+ map!(VkFramebuffer, ref!FramebufferObject) Framebuffers
+ map!(VkRenderPass, ref!RenderPassObject) RenderPasses
+ map!(VkPipelineCache, ref!PipelineCacheObject) PipelineCaches
+ map!(VkCommandPool, ref!CommandPoolObject) CommandPools
+ map!(VkSurfaceKHR, ref!SurfaceObject) Surfaces
+ map!(VkSwapchainKHR, ref!SwapchainObject) Swapchains
+}
+
+@internal class InstanceObject {
+}
+
+@internal class PhysicalDeviceObject {
+ VkInstance instance
+}
+
+@internal class DeviceObject {
+ VkPhysicalDevice physicalDevice
+}
+
+@internal class QueueObject {
+ VkDevice device
+ VkQueueFlags flags
+}
+
+@internal class CommandBufferObject {
+ VkDevice device
+ map!(u64, VkDeviceMemory) boundObjects
+ VkQueueFlags queueFlags
+}
+
+@internal class DeviceMemoryObject {
+ VkDevice device
+ VkDeviceSize allocationSize
+ map!(u64, VkDeviceSize) boundObjects
+ map!(VkCommandBuffer, VkCommandBuffer) boundCommandBuffers
+}
+
+@internal class BufferObject {
+ VkDevice device
+ VkDeviceMemory memory
+ VkDeviceSize memoryOffset
+}
+
+@internal class BufferViewObject {
+ VkDevice device
+ VkBuffer buffer
+}
+
+@internal class ImageObject {
+ VkDevice device
+ VkDeviceMemory memory
+ VkDeviceSize memoryOffset
+}
+
+@internal class ImageViewObject {
+ VkDevice device
+ VkImage image
+}
+
+@internal class ShaderObject {
+ VkDevice device
+}
+
+@internal class ShaderModuleObject {
+ VkDevice device
+}
+
+@internal class PipelineObject {
+ VkDevice device
+}
+
+@internal class PipelineLayoutObject {
+ VkDevice device
+}
+
+@internal class SamplerObject {
+ VkDevice device
+}
+
+@internal class DescriptorSetObject {
+ VkDevice device
+}
+
+@internal class DescriptorSetLayoutObject {
+ VkDevice device
+}
+
+@internal class DescriptorPoolObject {
+ VkDevice device
+}
+
+@internal class FenceObject {
+ VkDevice device
+ bool signaled
+}
+
+@internal class SemaphoreObject {
+ VkDevice device
+}
+
+@internal class EventObject {
+ VkDevice device
+}
+
+@internal class QueryPoolObject {
+ VkDevice device
+}
+
+@internal class FramebufferObject {
+ VkDevice device
+}
+
+@internal class RenderPassObject {
+ VkDevice device
+}
+
+@internal class PipelineCacheObject {
+ VkDevice device
+}
+
+@internal class CommandPoolObject {
+ VkDevice device
+}
+
+@internal class SurfaceObject {
+ VkInstance instance
+}
+
+@internal class SwapchainObject {
+ VkDevice device
+}
+
+macro ref!InstanceObject GetInstance(VkInstance instance) {
+ assert(instance in State.Instances)
+ return State.Instances[instance]
+}
+
+macro ref!PhysicalDeviceObject GetPhysicalDevice(VkPhysicalDevice physicalDevice) {
+ assert(physicalDevice in State.PhysicalDevices)
+ return State.PhysicalDevices[physicalDevice]
+}
+
+macro ref!DeviceObject GetDevice(VkDevice device) {
+ assert(device in State.Devices)
+ return State.Devices[device]
+}
+
+macro ref!QueueObject GetQueue(VkQueue queue) {
+ assert(queue in State.Queues)
+ return State.Queues[queue]
+}
+
+macro ref!CommandBufferObject GetCommandBuffer(VkCommandBuffer commandBuffer) {
+ assert(commandBuffer in State.CommandBuffers)
+ return State.CommandBuffers[commandBuffer]
+}
+
+macro ref!DeviceMemoryObject GetDeviceMemory(VkDeviceMemory memory) {
+ assert(memory in State.DeviceMemories)
+ return State.DeviceMemories[memory]
+}
+
+macro ref!BufferObject GetBuffer(VkBuffer buffer) {
+ assert(buffer in State.Buffers)
+ return State.Buffers[buffer]
+}
+
+macro ref!BufferViewObject GetBufferView(VkBufferView bufferView) {
+ assert(bufferView in State.BufferViews)
+ return State.BufferViews[bufferView]
+}
+
+macro ref!ImageObject GetImage(VkImage image) {
+ assert(image in State.Images)
+ return State.Images[image]
+}
+
+macro ref!ImageViewObject GetImageView(VkImageView imageView) {
+ assert(imageView in State.ImageViews)
+ return State.ImageViews[imageView]
+}
+
+macro ref!ShaderModuleObject GetShaderModule(VkShaderModule shaderModule) {
+ assert(shaderModule in State.ShaderModules)
+ return State.ShaderModules[shaderModule]
+}
+
+macro ref!PipelineObject GetPipeline(VkPipeline pipeline) {
+ assert(pipeline in State.Pipelines)
+ return State.Pipelines[pipeline]
+}
+
+macro ref!PipelineLayoutObject GetPipelineLayout(VkPipelineLayout pipelineLayout) {
+ assert(pipelineLayout in State.PipelineLayouts)
+ return State.PipelineLayouts[pipelineLayout]
+}
+
+macro ref!SamplerObject GetSampler(VkSampler sampler) {
+ assert(sampler in State.Samplers)
+ return State.Samplers[sampler]
+}
+
+macro ref!DescriptorSetObject GetDescriptorSet(VkDescriptorSet descriptorSet) {
+ assert(descriptorSet in State.DescriptorSets)
+ return State.DescriptorSets[descriptorSet]
+}
+
+macro ref!DescriptorSetLayoutObject GetDescriptorSetLayout(VkDescriptorSetLayout descriptorSetLayout) {
+ assert(descriptorSetLayout in State.DescriptorSetLayouts)
+ return State.DescriptorSetLayouts[descriptorSetLayout]
+}
+
+macro ref!DescriptorPoolObject GetDescriptorPool(VkDescriptorPool descriptorPool) {
+ assert(descriptorPool in State.DescriptorPools)
+ return State.DescriptorPools[descriptorPool]
+}
+
+macro ref!FenceObject GetFence(VkFence fence) {
+ assert(fence in State.Fences)
+ return State.Fences[fence]
+}
+
+macro ref!SemaphoreObject GetSemaphore(VkSemaphore semaphore) {
+ assert(semaphore in State.Semaphores)
+ return State.Semaphores[semaphore]
+}
+
+macro ref!EventObject GetEvent(VkEvent event) {
+ assert(event in State.Events)
+ return State.Events[event]
+}
+
+macro ref!QueryPoolObject GetQueryPool(VkQueryPool queryPool) {
+ assert(queryPool in State.QueryPools)
+ return State.QueryPools[queryPool]
+}
+
+macro ref!FramebufferObject GetFramebuffer(VkFramebuffer framebuffer) {
+ assert(framebuffer in State.Framebuffers)
+ return State.Framebuffers[framebuffer]
+}
+
+macro ref!RenderPassObject GetRenderPass(VkRenderPass renderPass) {
+ assert(renderPass in State.RenderPasses)
+ return State.RenderPasses[renderPass]
+}
+
+macro ref!PipelineCacheObject GetPipelineCache(VkPipelineCache pipelineCache) {
+ assert(pipelineCache in State.PipelineCaches)
+ return State.PipelineCaches[pipelineCache]
+}
+
+macro ref!CommandPoolObject GetCommandPool(VkCommandPool commandPool) {
+ assert(commandPool in State.CommandPools)
+ return State.CommandPools[commandPool]
+}
+
+macro ref!SurfaceObject GetSurface(VkSurfaceKHR surface) {
+ assert(surface in State.Surfaces)
+ return State.Surfaces[surface]
+}
+
+macro ref!SwapchainObject GetSwapchain(VkSwapchainKHR swapchain) {
+ assert(swapchain in State.Swapchains)
+ return State.Swapchains[swapchain]
+}
+
+macro VkQueueFlags AddQueueFlag(VkQueueFlags flags, VkQueueFlagBits bit) {
+ return as!VkQueueFlags(as!u32(flags) | as!u32(bit))
+}
diff --git a/vulkan/doc/DevelopersGuide.pdf b/vulkan/doc/DevelopersGuide.pdf
new file mode 100644
index 0000000..cf009c5
--- /dev/null
+++ b/vulkan/doc/DevelopersGuide.pdf
Binary files differ
diff --git a/vulkan/doc/implementors_guide/implementors_guide-docinfo.adoc b/vulkan/doc/implementors_guide/implementors_guide-docinfo.adoc
new file mode 100644
index 0000000..69b8c61
--- /dev/null
+++ b/vulkan/doc/implementors_guide/implementors_guide-docinfo.adoc
@@ -0,0 +1,23 @@
+<style type="text/css">
+
+code,div.listingblock {
+ max-width: 68em;
+}
+
+p {
+ max-width: 50em;
+}
+
+table {
+ max-width: 50em;
+}
+
+table.tableblock {
+ border-width: 1px;
+}
+
+h2 {
+ max-width: 35em;
+}
+
+</style>
diff --git a/vulkan/doc/implementors_guide/implementors_guide.adoc b/vulkan/doc/implementors_guide/implementors_guide.adoc
new file mode 100644
index 0000000..ae46f43
--- /dev/null
+++ b/vulkan/doc/implementors_guide/implementors_guide.adoc
@@ -0,0 +1,165 @@
+// asciidoc -b html5 -d book -f implementors_guide.conf implementors_guide.adoc
+= Vulkan on Android Implementor's Guide =
+:toc: right
+:numbered:
+:revnumber: 5
+
+This document is intended for GPU IHVs writing Vulkan drivers for Android, and OEMs integrating them for specific devices. It describes how a Vulkan driver interacts with the system, how GPU-specific tools should be installed, and Android-specific requirements.
+
+This is still a fairly rough draft; details will be filled in over time.
+
+== Architecture ==
+
+The primary interface between Vulkan applications and a device's Vulkan driver is the loader, which is part of AOSP and installed at +/system/lib[64]/libvulkan.so+. The loader provides the core Vulkan API entry points, as well as entry points of a few extensions that are required on Android and always present. In particular, the window system integration (WSI) extensions are exported by the loader and primarily implemented in it rather than the driver. The loader also supports enumerating and loading layers which can expose additional extensions and/or intercept core API calls on their way to the driver.
+
+The NDK will include a stub +libvulkan.so+ exporting the same symbols as the loader. Calling the Vulkan functions exported from +libvulkan.so+ will enter trampoline functions in the loader which will dispatch to the appropriate layer or driver based on their first argument. The +vkGet*ProcAddr+ calls will return the function pointers that the trampolines would dispatch to, so calling through these function pointers rather than the exported symbols will be slightly more efficient since it skips the trampoline and dispatch.
+
+=== Driver Enumeration and Loading ===
+
+Android expects the GPUs available to the system to be known when the system image is built, so its driver enumeration process isn't as elaborate as other platforms. The loader will use the existing HAL mechanism (see https://android.googlesource.com/platform/hardware/libhardware/+/lollipop-mr1-release/include/hardware/hardware.h[hardware.h]) for discovering and loading the driver. As of this writing, the preferred paths for 32-bit and 64-bit Vulkan drivers are:
+
+ /vendor/lib/hw/vulkan.<ro.product.platform>.so
+ /vendor/lib64/hw/vulkan.<ro.product.platform>.so
+
+where +<ro.product.platform>+ is replaced by the value of the system property of that name. See https://android.googlesource.com/platform/hardware/libhardware/+/lollipop-mr1-release/hardware.c[libhardware/hardware.c] for details and supported alternative locations.
+
+The Vulkan +hw_module_t+ derivative is currently trivial. If support for multiple drivers is ever added, the HAL module will export a list of strings that can be passed to the module +open+ call. For the time being, only one driver is supported, and the constant string +HWVULKAN_DEVICE_0+ is passed to +open+.
+
+The Vulkan +hw_device_t+ derivative corresponds to a single driver, though that driver can support multiple physical devices. The +hw_device_t+ structure will be extended to export +vkGetGlobalExtensionProperties+, +vkCreateInstance+, and +vkGetInstanceProcAddr+ functions. The loader will find all other +VkInstance+, +VkPhysicalDevice+, and +vkGetDeviceProcAddr+ functions by calling +vkGetInstanceProcAddr+.
+
+=== Layer Discovery and Loading ===
+
+Android's security model and policies differ significantly from other platforms. In particular, Android does not allow loading external code into a non-debuggable process on production (non-rooted) devices, nor does it allow external code to inspect or control the process's memory/state/etc. This includes a prohibition on saving core dumps, API traces, etc. to disk for later inspection. So only layers delivered as part of the application will be enabled on production devices, and drivers must also not provide functionality that violates these policies.
+
+There are three major use cases for layers:
+
+1. Development-time layers: validation layers, shims for tracing/profiling/debugging tools, etc. These shouldn't be installed on the system image of production devices: they would be a waste of space for most users, and they should be updateable without requiring a system update. A developer wishing to use one of these during development has the ability to modify their application package (e.g. adding a file to their native libraries directory). IHV and OEM engineers who are trying to diagnose failures in shipping, unmodifiable apps are assumed to have access to non-production (rooted) builds of the system image.
+
+2. Utility layers, such as a layer that implements a heap for device memory. These layers will almost always expose extensions. Developers choose which layers, and which versions of those layers, to use in their application; different applications that use the same layer may still use different versions. Developers will choose which of these layers to ship in their application package.
+
+3. Injected layers, like framerate, social network, or game launcher overlays, which are provided by the user or some other application without the application's knowledge or consent. These violate Android's security policies and will not be supported.
+
+In the normal state the loader will only search the application's native library directory for layers; details are TBD, but it will probably just try to load any library whose name matches a particular pattern (e.g. +libvklayer_foo.so+). It will probably not need a separate manifest file; the developer deliberately included these layers, so the reasons to avoid loading libraries before enabling them don't apply.
+
+On debuggable devices (the +ro.debuggable+ property exists and is non-zero, generally on rooted or engineering builds) or in debuggable processes (+prctl(PR_GET_DUMPABLE)==1+, based on the application's manifest), the loader may also search an adb-writeable location on +/data+ for layers. It's not clear whether this is useful; in all the cases where it could be used, the layer could just as easily be put in the application's native library directory.
+
+Finally, the loader may include a built-in validation layer that it will enable based on settings in the Developer Options menu, which would send validation errors or warnings to the system log. Drivers may be able to emit additional hardware-specific errors/warnings through this mechanism. This layer would not be enumerated through the API. This is intended to allow cooperative end-users to collect extra information about failures from unmodified applications on unmodified devices to aid triage/diagnosis of difficult-to-reproduce problems. The functionality would be intentionally limited to minimize security and privacy risk.
+
+Our goal is to allow layers to be ported with only build-environment changes between Android and other platforms. This means the interface between layers and the loader must match the interface used by the LunarG loader. Currently, the LunarG interface has a few deficiencies and is largely unspecified. We intend to work with LunarG to correct as many deficiencies as we can and to specify the interface in detail so that layers can be implemented without referring to the loader source code.
+
+== Window System Integration ==
+
+The +vk_wsi_swapchain+ and +vk_wsi_device_swapchain+ extensions will primarily be implemented by the platform and live in +libvulkan.so+. The +VkSwapchain+ object and all interaction with +ANativeWindow+ will be handled by the platform and not exposed to drivers. The WSI implementation will rely on a few private interfaces to the driver, loaded through the driver's +vkGetDeviceProcAddr+ function after passing through any enabled layers.
+
+Implementations may need swapchain buffers to be allocated with implementation-defined private gralloc usage flags. When creating a swapchain, the platform will ask the driver to translate the requested format and image usage flags into gralloc usage flags by calling
+[source,c]
+----
+VkResult VKAPI vkGetSwapchainGrallocUsageANDROID(
+ VkDevice device,
+ VkFormat format,
+ VkImageUsageFlags imageUsage,
+ int* grallocUsage
+);
+----
+The +format+ and +imageUsage+ parameters are taken from the +VkSwapchainCreateInfoKHR+ structure. The driver should fill +*grallocUsage+ with the gralloc usage flags it requires for that format and usage. These will be combined with the usage flags requested by the swapchain consumer when allocating buffers.
+
++VkNativeBufferANDROID+ is a +vkCreateImage+ extension structure for creating an image backed by a gralloc buffer. This structure is provided to +vkCreateImage+ in the +VkImageCreateInfo+ structure chain. Calls to +vkCreateImage+ with this structure will happen during the first call to +vkGetSwapChainInfoWSI(.. VK_SWAP_CHAIN_INFO_TYPE_IMAGES_WSI ..)+. The WSI implementation will allocate the number of native buffers requested for the swapchain, then create a +VkImage+ for each one.
+
+[source,c]
+----
+typedef struct {
+ VkStructureType sType; // must be VK_STRUCTURE_TYPE_NATIVE_BUFFER_ANDROID
+ const void* pNext;
+
+ // Buffer handle and stride returned from gralloc alloc()
+ buffer_handle_t handle;
+ int stride;
+
+ // Gralloc format and usage requested when the buffer was allocated.
+ int format;
+ int usage;
+} VkNativeBufferANDROID;
+----
+
+TBD: During swapchain re-creation (using +oldSwapChain+), we may have to defer allocation of new gralloc buffers until old buffers have been released. If so, the +vkCreateImage+ calls will be deferred until the first +vkAcquireNextImageWSI+ that would return the new image.
+
+When creating a gralloc-backed image, the +VkImageCreateInfo+ will have:
+----
+ .imageType = VK_IMAGE_TYPE_2D
+ .format = a VkFormat matching the format requested for the gralloc buffer
+ .extent = the 2D dimensions requested for the gralloc buffer
+ .mipLevels = 1
+ .arraySize = 1
+ .samples = 1
+ .tiling = VK_IMAGE_TILING_OPTIMAL
+ .usage = VkSwapChainCreateInfoWSI::imageUsageFlags
+ .flags = 0
+ .sharingMode = VkSwapChainCreateInfoWSI::sharingMode
+ .queueFamilyCount = VkSwapChainCreateInfoWSI::queueFamilyCount
+ .pQueueFamilyIndices = VkSwapChainCreateInfoWSI::pQueueFamilyIndices
+----
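A minimal sketch of how the WSI implementation might assemble this create info with the native-buffer structure chained in (the types below are simplified stand-ins; the real structures carry more fields and proper +sType+ values):

```c
/* Sketch only: simplified stand-ins for VkImageCreateInfo and
 * VkNativeBufferANDROID, showing the pNext chaining and the fixed
 * field values listed above. */
#include <stddef.h>

typedef struct {
    const void* pNext;
    void*       handle;  /* stand-in for buffer_handle_t from gralloc */
    int         stride, format, usage;
} ExNativeBuffer;

typedef struct {
    int         imageType, format, width, height;
    int         mipLevels, arraySize, samples;
    const void* pNext;   /* chains to an ExNativeBuffer */
} ExImageCreateInfo;

enum { EX_IMAGE_TYPE_2D = 1 };

/* Fill the create info as the list above prescribes: 2D, one mip level,
 * one array layer, one sample, with the gralloc buffer chained in. */
static ExImageCreateInfo exMakeImageInfo(const ExNativeBuffer* nb,
                                         int format, int w, int h)
{
    ExImageCreateInfo info = {0};
    info.imageType = EX_IMAGE_TYPE_2D;
    info.format    = format;
    info.width     = w;
    info.height    = h;
    info.mipLevels = 1;
    info.arraySize = 1;
    info.samples   = 1;
    info.pNext     = nb;   /* driver finds the buffer handle here */
    return info;
}
```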
+
++vkAcquireImageANDROID+ acquires ownership of a swapchain image and imports an
+externally-signalled native fence into both an existing VkSemaphore object
+and an existing VkFence object:
+
+[source,c]
+----
+VkResult VKAPI vkAcquireImageANDROID(
+ VkDevice device,
+ VkImage image,
+ int nativeFenceFd,
+ VkSemaphore semaphore,
+ VkFence fence
+);
+----
+
+This function is called during +vkAcquireNextImageWSI+ to import a native
+fence into the +VkSemaphore+ and +VkFence+ objects provided by the
+application. Both semaphore and fence objects are optional in this call. The
+driver may also use this opportunity to recognize and handle any external
+changes to the gralloc buffer state; many drivers won't need to do anything
+here. This call puts the +VkSemaphore+ and +VkFence+ into the same "pending"
+state as +vkQueueSignalSemaphore+ and +vkQueueSubmit+ respectively, so queues
+can wait on the semaphore and the application can wait on the fence. Both
+objects become signalled when the underlying native fence signals; if the
+native fence has already signalled, then the semaphore will be in the signalled
+state when this function returns. The driver takes ownership of the fence fd
+and is responsible for closing it when no longer needed. It must do so even if
+neither a semaphore nor a fence object is provided, or even if
++vkAcquireImageANDROID+ fails and returns an error. If +nativeFenceFd+ is -1, it
+is as if the native fence was already signalled.
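The fd-ownership rule can be made concrete with a small sketch (the function below is a stand-in, not the real entry point): the driver closes +nativeFenceFd+ on every path, including failure, and treats -1 as an already-signalled fence:

```c
/* Sketch of the ownership rule only: whatever else happens (including
 * failure), the driver closes nativeFenceFd exactly once; -1 means the
 * fence has already signalled and there is nothing to close. */
#include <unistd.h>

static int exAcquireImage(int nativeFenceFd, int fail)
{
    /* ... a real driver would import the fence payload into the
     * semaphore/fence objects here ... */
    if (nativeFenceFd >= 0)
        close(nativeFenceFd);   /* owned by the driver even on error paths */
    return fail ? -1 : 0;       /* stand-ins for VkResult values */
}
```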
+
++vkQueueSignalReleaseImageANDROID+ prepares a swapchain image for external use, creating a native fence that will be signalled when prior work on the queue has completed.
+
+[source,c]
+----
+VkResult VKAPI vkQueueSignalReleaseImageANDROID(
+ VkQueue queue,
+ uint32_t waitSemaphoreCount,
+ const VkSemaphore* pWaitSemaphores,
+ VkImage image,
+ int* pNativeFenceFd
+);
+----
+
+This will be called during +vkQueuePresentWSI+ on the provided queue. Effects are similar to +vkQueueSignalSemaphore+, except with a native fence instead of a semaphore. The native fence must not signal until the +waitSemaphoreCount+ semaphores in +pWaitSemaphores+ have signalled. Unlike +vkQueueSignalSemaphore+, however, this call creates and returns the synchronization object that will be signalled rather than having it provided as input. If the queue is already idle when this function is called, it is allowed but not required to set +*pNativeFenceFd+ to -1. The file descriptor returned in +*pNativeFenceFd+ is owned and will be closed by the caller. Many drivers will be able to ignore the +image+ parameter, but some may need to prepare CPU-side data structures associated with a gralloc buffer for use by external image consumers. Preparing buffer contents for use by external consumers should have been done asynchronously as part of transitioning the image to +VK_IMAGE_LAYOUT_PRESENT_SRC_KHR+.
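The caller-side contract can be sketched as follows (names and the fence-fd fabrication are stand-ins; a real driver would return a kernel sync fence fd): the WSI layer owns the returned fd, must tolerate -1 from an idle queue, and releases the fd when done:

```c
/* Sketch of the caller-side contract only. A pipe fd stands in for a real
 * sync fence fd; an idle queue may legitimately return -1. */
#include <unistd.h>

static int exQueueSignalRelease(int queueIdle, int* pNativeFenceFd)
{
    if (queueIdle) {
        *pNativeFenceFd = -1;        /* allowed when no work is pending */
        return 0;                    /* stand-in for VK_SUCCESS */
    }
    int p[2];
    if (pipe(p) != 0)
        return -1;                   /* stand-in error result */
    close(p[1]);
    *pNativeFenceFd = p[0];          /* stand-in for a sync fence fd */
    return 0;
}

/* The caller owns the fd: it hands it off to the buffer consumer, or
 * closes it itself, and must handle the -1 case. */
static void exPresent(int queueIdle)
{
    int fd = -1;
    exQueueSignalRelease(queueIdle, &fd);
    if (fd >= 0)
        close(fd);   /* in real code this ownership passes to the consumer */
}
```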
+
+== History ==
+
+. *2015-07-08* Initial version
+. *2015-08-16*
+ * Renamed to Implementor's Guide
+ * Wording and formatting changes
+ * Updated based on resolution of Khronos bug 14265
+ * Deferred support for multiple drivers
+. *2015-11-04*
+ * Added vkGetSwapchainGrallocUsageANDROID
+ * Replaced vkImportNativeFenceANDROID and vkQueueSignalNativeFenceANDROID
+ with vkAcquireImageANDROID and vkQueueSignalReleaseImageANDROID, to allow
+ drivers to know the ownership state of swapchain images.
+. *2015-12-03*
+ * Added a VkFence parameter to vkAcquireImageANDROID corresponding to the
+ parameter added to vkAcquireNextImageKHR.
+. *2016-01-08*
+ * Added waitSemaphoreCount and pWaitSemaphores parameters to vkQueueSignalReleaseImageANDROID.
\ No newline at end of file
diff --git a/vulkan/doc/implementors_guide/implementors_guide.conf b/vulkan/doc/implementors_guide/implementors_guide.conf
new file mode 100644
index 0000000..572a4d9
--- /dev/null
+++ b/vulkan/doc/implementors_guide/implementors_guide.conf
@@ -0,0 +1,5 @@
+[attributes]
+newline=\n
+
+[replacements]
+\+\/-=±
diff --git a/vulkan/doc/implementors_guide/implementors_guide.html b/vulkan/doc/implementors_guide/implementors_guide.html
new file mode 100644
index 0000000..58ce0dc
--- /dev/null
+++ b/vulkan/doc/implementors_guide/implementors_guide.html
@@ -0,0 +1,984 @@
+<!DOCTYPE html>
+<html lang="en">
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
+<meta name="generator" content="AsciiDoc 8.6.9">
+<title>Vulkan on Android Implementor’s Guide</title>
+<style type="text/css">
+/* Shared CSS for AsciiDoc xhtml11 and html5 backends */
+
+/* Default font. */
+body {
+ font-family: Georgia,serif;
+}
+
+/* Title font. */
+h1, h2, h3, h4, h5, h6,
+div.title, caption.title,
+thead, p.table.header,
+#toctitle,
+#author, #revnumber, #revdate, #revremark,
+#footer {
+ font-family: Arial,Helvetica,sans-serif;
+}
+
+body {
+ margin: 1em 5% 1em 5%;
+}
+
+a {
+ color: blue;
+ text-decoration: underline;
+}
+a:visited {
+ color: fuchsia;
+}
+
+em {
+ font-style: italic;
+ color: navy;
+}
+
+strong {
+ font-weight: bold;
+ color: #083194;
+}
+
+h1, h2, h3, h4, h5, h6 {
+ color: #527bbd;
+ margin-top: 1.2em;
+ margin-bottom: 0.5em;
+ line-height: 1.3;
+}
+
+h1, h2, h3 {
+ border-bottom: 2px solid silver;
+}
+h2 {
+ padding-top: 0.5em;
+}
+h3 {
+ float: left;
+}
+h3 + * {
+ clear: left;
+}
+h5 {
+ font-size: 1.0em;
+}
+
+div.sectionbody {
+ margin-left: 0;
+}
+
+hr {
+ border: 1px solid silver;
+}
+
+p {
+ margin-top: 0.5em;
+ margin-bottom: 0.5em;
+}
+
+ul, ol, li > p {
+ margin-top: 0;
+}
+ul > li { color: #aaa; }
+ul > li > * { color: black; }
+
+.monospaced, code, pre {
+ font-family: "Courier New", Courier, monospace;
+ font-size: inherit;
+ color: navy;
+ padding: 0;
+ margin: 0;
+}
+pre {
+ white-space: pre-wrap;
+}
+
+#author {
+ color: #527bbd;
+ font-weight: bold;
+ font-size: 1.1em;
+}
+#email {
+}
+#revnumber, #revdate, #revremark {
+}
+
+#footer {
+ font-size: small;
+ border-top: 2px solid silver;
+ padding-top: 0.5em;
+ margin-top: 4.0em;
+}
+#footer-text {
+ float: left;
+ padding-bottom: 0.5em;
+}
+#footer-badges {
+ float: right;
+ padding-bottom: 0.5em;
+}
+
+#preamble {
+ margin-top: 1.5em;
+ margin-bottom: 1.5em;
+}
+div.imageblock, div.exampleblock, div.verseblock,
+div.quoteblock, div.literalblock, div.listingblock, div.sidebarblock,
+div.admonitionblock {
+ margin-top: 1.0em;
+ margin-bottom: 1.5em;
+}
+div.admonitionblock {
+ margin-top: 2.0em;
+ margin-bottom: 2.0em;
+ margin-right: 10%;
+ color: #606060;
+}
+
+div.content { /* Block element content. */
+ padding: 0;
+}
+
+/* Block element titles. */
+div.title, caption.title {
+ color: #527bbd;
+ font-weight: bold;
+ text-align: left;
+ margin-top: 1.0em;
+ margin-bottom: 0.5em;
+}
+div.title + * {
+ margin-top: 0;
+}
+
+td div.title:first-child {
+ margin-top: 0.0em;
+}
+div.content div.title:first-child {
+ margin-top: 0.0em;
+}
+div.content + div.title {
+ margin-top: 0.0em;
+}
+
+div.sidebarblock > div.content {
+ background: #ffffee;
+ border: 1px solid #dddddd;
+ border-left: 4px solid #f0f0f0;
+ padding: 0.5em;
+}
+
+div.listingblock > div.content {
+ border: 1px solid #dddddd;
+ border-left: 5px solid #f0f0f0;
+ background: #f8f8f8;
+ padding: 0.5em;
+}
+
+div.quoteblock, div.verseblock {
+ padding-left: 1.0em;
+ margin-left: 1.0em;
+ margin-right: 10%;
+ border-left: 5px solid #f0f0f0;
+ color: #888;
+}
+
+div.quoteblock > div.attribution {
+ padding-top: 0.5em;
+ text-align: right;
+}
+
+div.verseblock > pre.content {
+ font-family: inherit;
+ font-size: inherit;
+}
+div.verseblock > div.attribution {
+ padding-top: 0.75em;
+ text-align: left;
+}
+/* DEPRECATED: Pre version 8.2.7 verse style literal block. */
+div.verseblock + div.attribution {
+ text-align: left;
+}
+
+div.admonitionblock .icon {
+ vertical-align: top;
+ font-size: 1.1em;
+ font-weight: bold;
+ text-decoration: underline;
+ color: #527bbd;
+ padding-right: 0.5em;
+}
+div.admonitionblock td.content {
+ padding-left: 0.5em;
+ border-left: 3px solid #dddddd;
+}
+
+div.exampleblock > div.content {
+ border-left: 3px solid #dddddd;
+ padding-left: 0.5em;
+}
+
+div.imageblock div.content { padding-left: 0; }
+span.image img { border-style: none; vertical-align: text-bottom; }
+a.image:visited { color: white; }
+
+dl {
+ margin-top: 0.8em;
+ margin-bottom: 0.8em;
+}
+dt {
+ margin-top: 0.5em;
+ margin-bottom: 0;
+ font-style: normal;
+ color: navy;
+}
+dd > *:first-child {
+ margin-top: 0.1em;
+}
+
+ul, ol {
+ list-style-position: outside;
+}
+ol.arabic {
+ list-style-type: decimal;
+}
+ol.loweralpha {
+ list-style-type: lower-alpha;
+}
+ol.upperalpha {
+ list-style-type: upper-alpha;
+}
+ol.lowerroman {
+ list-style-type: lower-roman;
+}
+ol.upperroman {
+ list-style-type: upper-roman;
+}
+
+div.compact ul, div.compact ol,
+div.compact p, div.compact p,
+div.compact div, div.compact div {
+ margin-top: 0.1em;
+ margin-bottom: 0.1em;
+}
+
+tfoot {
+ font-weight: bold;
+}
+td > div.verse {
+ white-space: pre;
+}
+
+div.hdlist {
+ margin-top: 0.8em;
+ margin-bottom: 0.8em;
+}
+div.hdlist tr {
+ padding-bottom: 15px;
+}
+dt.hdlist1.strong, td.hdlist1.strong {
+ font-weight: bold;
+}
+td.hdlist1 {
+ vertical-align: top;
+ font-style: normal;
+ padding-right: 0.8em;
+ color: navy;
+}
+td.hdlist2 {
+ vertical-align: top;
+}
+div.hdlist.compact tr {
+ margin: 0;
+ padding-bottom: 0;
+}
+
+.comment {
+ background: yellow;
+}
+
+.footnote, .footnoteref {
+ font-size: 0.8em;
+}
+
+span.footnote, span.footnoteref {
+ vertical-align: super;
+}
+
+#footnotes {
+ margin: 20px 0 20px 0;
+ padding: 7px 0 0 0;
+}
+
+#footnotes div.footnote {
+ margin: 0 0 5px 0;
+}
+
+#footnotes hr {
+ border: none;
+ border-top: 1px solid silver;
+ height: 1px;
+ text-align: left;
+ margin-left: 0;
+ width: 20%;
+ min-width: 100px;
+}
+
+div.colist td {
+ padding-right: 0.5em;
+ padding-bottom: 0.3em;
+ vertical-align: top;
+}
+div.colist td img {
+ margin-top: 0.3em;
+}
+
+@media print {
+ #footer-badges { display: none; }
+}
+
+#toc {
+ margin-bottom: 2.5em;
+}
+
+#toctitle {
+ color: #527bbd;
+ font-size: 1.1em;
+ font-weight: bold;
+ margin-top: 1.0em;
+ margin-bottom: 0.1em;
+}
+
+div.toclevel0, div.toclevel1, div.toclevel2, div.toclevel3, div.toclevel4 {
+ margin-top: 0;
+ margin-bottom: 0;
+}
+div.toclevel2 {
+ margin-left: 2em;
+ font-size: 0.9em;
+}
+div.toclevel3 {
+ margin-left: 4em;
+ font-size: 0.9em;
+}
+div.toclevel4 {
+ margin-left: 6em;
+ font-size: 0.9em;
+}
+
+span.aqua { color: aqua; }
+span.black { color: black; }
+span.blue { color: blue; }
+span.fuchsia { color: fuchsia; }
+span.gray { color: gray; }
+span.green { color: green; }
+span.lime { color: lime; }
+span.maroon { color: maroon; }
+span.navy { color: navy; }
+span.olive { color: olive; }
+span.purple { color: purple; }
+span.red { color: red; }
+span.silver { color: silver; }
+span.teal { color: teal; }
+span.white { color: white; }
+span.yellow { color: yellow; }
+
+span.aqua-background { background: aqua; }
+span.black-background { background: black; }
+span.blue-background { background: blue; }
+span.fuchsia-background { background: fuchsia; }
+span.gray-background { background: gray; }
+span.green-background { background: green; }
+span.lime-background { background: lime; }
+span.maroon-background { background: maroon; }
+span.navy-background { background: navy; }
+span.olive-background { background: olive; }
+span.purple-background { background: purple; }
+span.red-background { background: red; }
+span.silver-background { background: silver; }
+span.teal-background { background: teal; }
+span.white-background { background: white; }
+span.yellow-background { background: yellow; }
+
+span.big { font-size: 2em; }
+span.small { font-size: 0.6em; }
+
+span.underline { text-decoration: underline; }
+span.overline { text-decoration: overline; }
+span.line-through { text-decoration: line-through; }
+
+div.unbreakable { page-break-inside: avoid; }
+
+
+/*
+ * xhtml11 specific
+ *
+ * */
+
+div.tableblock {
+ margin-top: 1.0em;
+ margin-bottom: 1.5em;
+}
+div.tableblock > table {
+ border: 3px solid #527bbd;
+}
+thead, p.table.header {
+ font-weight: bold;
+ color: #527bbd;
+}
+p.table {
+ margin-top: 0;
+}
+/* Because the table frame attribute is overriden by CSS in most browsers. */
+div.tableblock > table[frame="void"] {
+ border-style: none;
+}
+div.tableblock > table[frame="hsides"] {
+ border-left-style: none;
+ border-right-style: none;
+}
+div.tableblock > table[frame="vsides"] {
+ border-top-style: none;
+ border-bottom-style: none;
+}
+
+
+/*
+ * html5 specific
+ *
+ * */
+
+table.tableblock {
+ margin-top: 1.0em;
+ margin-bottom: 1.5em;
+}
+thead, p.tableblock.header {
+ font-weight: bold;
+ color: #527bbd;
+}
+p.tableblock {
+ margin-top: 0;
+}
+table.tableblock {
+ border-width: 3px;
+ border-spacing: 0px;
+ border-style: solid;
+ border-color: #527bbd;
+ border-collapse: collapse;
+}
+th.tableblock, td.tableblock {
+ border-width: 1px;
+ padding: 4px;
+ border-style: solid;
+ border-color: #527bbd;
+}
+
+table.tableblock.frame-topbot {
+ border-left-style: hidden;
+ border-right-style: hidden;
+}
+table.tableblock.frame-sides {
+ border-top-style: hidden;
+ border-bottom-style: hidden;
+}
+table.tableblock.frame-none {
+ border-style: hidden;
+}
+
+th.tableblock.halign-left, td.tableblock.halign-left {
+ text-align: left;
+}
+th.tableblock.halign-center, td.tableblock.halign-center {
+ text-align: center;
+}
+th.tableblock.halign-right, td.tableblock.halign-right {
+ text-align: right;
+}
+
+th.tableblock.valign-top, td.tableblock.valign-top {
+ vertical-align: top;
+}
+th.tableblock.valign-middle, td.tableblock.valign-middle {
+ vertical-align: middle;
+}
+th.tableblock.valign-bottom, td.tableblock.valign-bottom {
+ vertical-align: bottom;
+}
+
+
+/*
+ * manpage specific
+ *
+ * */
+
+body.manpage h1 {
+ padding-top: 0.5em;
+ padding-bottom: 0.5em;
+ border-top: 2px solid silver;
+ border-bottom: 2px solid silver;
+}
+body.manpage h2 {
+ border-style: none;
+}
+body.manpage div.sectionbody {
+ margin-left: 3em;
+}
+
+@media print {
+ body.manpage div#toc { display: none; }
+}
+
+
+</style>
+<script type="text/javascript">
+/*<+'])');
+ // Function that scans the DOM tree for header elements (the DOM2
+ // nodeIterator API would be a better technique but not supported by all
+ // browsers).
+ var iterate = function (el) {
+ for (var i = el.firstChild; i != null; i = i.nextSibling) {
+ if (i.nodeType == 1 /* Node.ELEMENT_NODE */) {
+ var mo = re.exec(i.tagName);
+ if (mo && (i.getAttribute("class") || i.getAttribute("className")) != "float") {
+ result[result.length] = new TocEntry(i, getText(i), mo[1]-1);
+ }
+ iterate(i);
+ }
+ }
+ }
+ iterate(el);
+ return result;
+ }
+
+ var toc = document.getElementById("toc");
+ if (!toc) {
+ return;
+ }
+
+ // Delete existing TOC entries in case we're reloading the TOC.
+ var tocEntriesToRemove = [];
+ var i;
+ for (i = 0; i < toc.childNodes.length; i++) {
+ var entry = toc.childNodes[i];
+ if (entry.nodeName.toLowerCase() == 'div'
+ && entry.getAttribute("class")
+ && entry.getAttribute("class").match(/^toclevel/))
+ tocEntriesToRemove.push(entry);
+ }
+ for (i = 0; i < tocEntriesToRemove.length; i++) {
+ toc.removeChild(tocEntriesToRemove[i]);
+ }
+
+ // Rebuild TOC entries.
+ var entries = tocEntries(document.getElementById("content"), toclevels);
+ for (var i = 0; i < entries.length; ++i) {
+ var entry = entries[i];
+ if (entry.element.id == "")
+ entry.element.id = "_toc_" + i;
+ var a = document.createElement("a");
+ a.href = "#" + entry.element.id;
+ a.appendChild(document.createTextNode(entry.text));
+ var div = document.createElement("div");
+ div.appendChild(a);
+ div.className = "toclevel" + entry.toclevel;
+ toc.appendChild(div);
+ }
+ if (entries.length == 0)
+ toc.parentNode.removeChild(toc);
+},
+
+
+/////////////////////////////////////////////////////////////////////
+// Footnotes generator
+/////////////////////////////////////////////////////////////////////
+
+/* Based on footnote generation code from:
+ * http://www.brandspankingnew.net/archive/2005/07/format_footnote.html
+ */
+
+footnotes: function () {
+ // Delete existing footnote entries in case we're reloading the footnodes.
+ var i;
+ var noteholder = document.getElementById("footnotes");
+ if (!noteholder) {
+ return;
+ }
+ var entriesToRemove = [];
+ for (i = 0; i < noteholder.childNodes.length; i++) {
+ var entry = noteholder.childNodes[i];
+ if (entry.nodeName.toLowerCase() == 'div' && entry.getAttribute("class") == "footnote")
+ entriesToRemove.push(entry);
+ }
+ for (i = 0; i < entriesToRemove.length; i++) {
+ noteholder.removeChild(entriesToRemove[i]);
+ }
+
+ // Rebuild footnote entries.
+ var cont = document.getElementById("content");
+ var spans = cont.getElementsByTagName("span");
+ var refs = {};
+ var n = 0;
+ for (i=0; i<spans.length; i++) {
+ if (spans[i].className == "footnote") {
+ n++;
+ var note = spans[i].getAttribute("data-note");
+ if (!note) {
+ // Use [\s\S] in place of . so multi-line matches work.
+ // Because JavaScript has no s (dotall) regex flag.
+ note = spans[i].innerHTML.match(/\s*\[([\s\S]*)]\s*/)[1];
+ spans[i].innerHTML =
+ "[<a id='_footnoteref_" + n + "' href='#_footnote_" + n +
+ "' title='View footnote' class='footnote'>" + n + "</a>]";
+ spans[i].setAttribute("data-note", note);
+ }
+ noteholder.innerHTML +=
+ "<div class='footnote' id='_footnote_" + n + "'>" +
+ "<a href='#_footnoteref_" + n + "' title='Return to text'>" +
+ n + "</a>. " + note + "</div>";
+ var id =spans[i].getAttribute("id");
+ if (id != null) refs["#"+id] = n;
+ }
+ }
+ if (n == 0)
+ noteholder.parentNode.removeChild(noteholder);
+ else {
+ // Process footnoterefs.
+ for (i=0; i<spans.length; i++) {
+ if (spans[i].className == "footnoteref") {
+ var href = spans[i].getElementsByTagName("a")[0].getAttribute("href");
+ href = href.match(/#.*/)[0]; // Because IE return full URL.
+ n = refs[href];
+ spans[i].innerHTML =
+ "[<a href='#_footnote_" + n +
+ "' title='View footnote' class='footnote'>" + n + "</a>]";
+ }
+ }
+ }
+},
+
+install: function(toclevels) {
+ var timerId;
+
+ function reinstall() {
+ asciidoc.footnotes();
+ if (toclevels) {
+ asciidoc.toc(toclevels);
+ }
+ }
+
+ function reinstallAndRemoveTimer() {
+ clearInterval(timerId);
+ reinstall();
+ }
+
+ timerId = setInterval(reinstall, 500);
+ if (document.addEventListener)
+ document.addEventListener("DOMContentLoaded", reinstallAndRemoveTimer, false);
+ else
+ window.onload = reinstallAndRemoveTimer;
+}
+
+}
+asciidoc.install(2);
+/*]]>*/
+</script>
+</head>
+<body class="book">
+<div id="header">
+<h1>Vulkan on Android Implementor’s Guide</h1>
+<span id="revnumber">version 5</span>
+<div id="toc">
+ <div id="toctitle">Table of Contents</div>
+ <noscript><p><b>JavaScript must be enabled in your browser to display the table of contents.</b></p></noscript>
+</div>
+</div>
+<div id="content">
+<div id="preamble">
+<div class="sectionbody">
+<div class="paragraph"><p>This document is intended for GPU IHVs writing Vulkan drivers for Android, and OEMs integrating them for specific devices. It describes how a Vulkan driver interacts with the system, how GPU-specific tools should be installed, and Android-specific requirements.</p></div>
+<div class="paragraph"><p>This is still a fairly rough draft; details will be filled in over time.</p></div>
+</div>
+</div>
+<div class="sect1">
+<h2 id="_architecture">1. Architecture</h2>
+<div class="sectionbody">
+<div class="paragraph"><p>The primary interface between Vulkan applications and a device’s Vulkan driver is the loader, which is part of AOSP and installed at <span class="monospaced">/system/lib[64]/libvulkan.so</span>. The loader provides the core Vulkan API entry points, as well as entry points of a few extensions that are required on Android and always present. In particular, the window system integration (WSI) extensions are exported by the loader and primarily implemented in it rather than the driver. The loader also supports enumerating and loading layers which can expose additional extensions and/or intercept core API calls on their way to the driver.</p></div>
+<div class="paragraph"><p>The NDK will include a stub <span class="monospaced">libvulkan.so</span> exporting the same symbols as the loader. Calling the Vulkan functions exported from <span class="monospaced">libvulkan.so</span> will enter trampoline functions in the loader which will dispatch to the appropriate layer or driver based on their first argument. The <span class="monospaced">vkGet*ProcAddr</span> calls will return the function pointers that the trampolines would dispatch to, so calling through these function pointers rather than the exported symbols will be slightly more efficient since it skips the trampoline and dispatch.</p></div>
+<div class="sect2">
+<h3 id="_driver_enumeration_and_loading">1.1. Driver Enumeration and Loading</h3>
+<div class="paragraph"><p>Android expects the GPUs available to the system to be known when the system image is built, so its driver enumeration process isn’t as elaborate as other platforms. The loader will use the existing HAL mechanism (see <a href="https://android.googlesource.com/platform/hardware/libhardware/+/lollipop-mr1-release/include/hardware/hardware.h">hardware.h</a>) for discovering and loading the driver. As of this writing, the preferred paths for 32-bit and 64-bit Vulkan drivers are:</p></div>
+<div class="literalblock">
+<div class="content monospaced">
+<pre>/vendor/lib/hw/vulkan.<ro.product.platform>.so
+/vendor/lib64/hw/vulkan.<ro.product.platform>.so</pre>
+</div></div>
+<div class="paragraph"><p>where <span class="monospaced"><ro.product.platform></span> is replaced by the value of the system property of that name. See <a href="https://android.googlesource.com/platform/hardware/libhardware/+/lollipop-mr1-release/hardware.c">libhardware/hardware.c</a> for details and supported alternative locations.</p></div>
+<div class="paragraph"><p>The Vulkan <span class="monospaced">hw_module_t</span> derivative is currently trivial. If support for multiple drivers is ever added, the HAL module will export a list of strings that can be passed to the module <span class="monospaced">open</span> call. For the time being, only one driver is supported, and the constant string <span class="monospaced">HWVULKAN_DEVICE_0</span> is passed to <span class="monospaced">open</span>.</p></div>
+<div class="paragraph"><p>The Vulkan <span class="monospaced">hw_device_t</span> derivative corresponds to a single driver, though that driver can support multiple physical devices. The <span class="monospaced">hw_device_t</span> structure will be extended to export <span class="monospaced">vkGetGlobalExtensionProperties</span>, <span class="monospaced">vkCreateInstance</span>, and <span class="monospaced">vkGetInstanceProcAddr</span> functions. The loader will find all other <span class="monospaced">VkInstance</span>, <span class="monospaced">VkPhysicalDevice</span>, and <span class="monospaced">vkGetDeviceProcAddr</span> functions by calling <span class="monospaced">vkGetInstanceProcAddr</span>.</p></div>
+</div>
+<div class="sect2">
+<h3 id="_layer_discovery_and_loading">1.2. Layer Discovery and Loading</h3>
+<div class="paragraph"><p>Android’s security model and policies differ significantly from other platforms. In particular, Android does not allow loading external code into a non-debuggable process on production (non-rooted) devices, nor does it allow external code to inspect or control the process’s memory/state/etc. This includes a prohibition on saving core dumps, API traces, etc. to disk for later inspection. So only layers delivered as part of the application will be enabled on production devices, and drivers must also not provide functionality that violates these policies.</p></div>
+<div class="paragraph"><p>There are three major use cases for layers:</p></div>
+<div class="olist arabic"><ol class="arabic">
+<li>
+<p>
+Development-time layers: validation layers, shims for tracing/profiling/debugging tools, etc. These shouldn’t be installed on the system image of production devices: they would be a waste of space for most users, and they should be updateable without requiring a system update. A developer wishing to use one of these during development has the ability to modify their application package (e.g. adding a file to their native libraries directory). IHV and OEM engineers who are trying to diagnose failures in shipping, unmodifiable apps are assumed to have access to non-production (rooted) builds of the system image.
+</p>
+</li>
+<li>
+<p>
+Utility layers, such as a layer that implements a heap for device memory. These layers will almost always expose extensions. Developers choose which layers, and which versions of those layers, to use in their application; different applications that use the same layer may still use different versions. Developers will choose which of these layers to ship in their application package.
+</p>
+</li>
+<li>
+<p>
+Injected layers, like framerate, social network, or game launcher overlays, which are provided by the user or some other application without the application’s knowledge or consent. These violate Android’s security policies and will not be supported.
+</p>
+</li>
+</ol></div>
+<div class="paragraph"><p>In the normal state the loader will only search in the application&#8217;s native library directory for layers; details are TBD but it will probably just try to load any library with a name matching a particular pattern (e.g. <span class="monospaced">libvklayer_foo.so</span>). It will probably not need a separate manifest file; the developer deliberately included these layers, so the reasons to avoid loading libraries before enabling them don&#8217;t apply.</p></div>
+<div class="paragraph"><p>On debuggable devices (<span class="monospaced">ro.debuggable</span> property exists and is non-zero, generally rooted or engineering builds) or debuggable processes (<span class="monospaced">prctl(PR_GET_DUMPABLE)==1</span>, based on the application&#8217;s manifest), the loader may also search an adb-writeable location on /data for layers. It&#8217;s not clear whether this is useful; in all the cases it could be used, the layer could just as easily be put in the application&#8217;s native library directory.</p></div>
+<div class="paragraph"><p>Finally, the loader may include a built-in validation layer that it will enable based on settings in the Developer Options menu, which would send validation errors or warnings to the system log. Drivers may be able to emit additional hardware-specific errors/warnings through this mechanism. This layer would not be enumerated through the API. This is intended to allow cooperative end-users to collect extra information about failures from unmodified applications on unmodified devices to aid triage/diagnosis of difficult-to-reproduce problems. The functionality would be intentionally limited to minimize security and privacy risk.</p></div>
+<div class="paragraph"><p>Our goal is to allow layers to be ported with only build-environment changes between Android and other platforms. This means the interface between layers and the loader must match the interface used by the LunarG loader. Currently, the LunarG interface has a few deficiencies and is largely unspecified. We intend to work with LunarG to correct as many deficiencies as we can and to specify the interface in detail so that layers can be implemented without referring to the loader source code.</p></div>
+</div>
+</div>
+</div>
+<div class="sect1">
+<h2 id="_window_system_integration">2. Window System Integration</h2>
+<div class="sectionbody">
+<div class="paragraph"><p>The <span class="monospaced">vk_wsi_swapchain</span> and <span class="monospaced">vk_wsi_device_swapchain</span> extensions will primarily be implemented by the platform and live in <span class="monospaced">libvulkan.so</span>. The <span class="monospaced">VkSwapchain</span> object and all interaction with <span class="monospaced">ANativeWindow</span> will be handled by the platform and not exposed to drivers. The WSI implementation will rely on a few private interfaces to the driver, which will be loaded through the driver&#8217;s <span class="monospaced">vkGetDeviceProcAddr</span> functions after passing through any enabled layers.</p></div>
+<div class="paragraph"><p>Implementations may need swapchain buffers to be allocated with implementation-defined private gralloc usage flags. When creating a swapchain, the platform will ask the driver to translate the requested format and image usage flags into gralloc usage flags by calling</p></div>
+<div class="listingblock">
+<div class="content"><!-- Generator: GNU source-highlight 3.1.8
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre><tt>VkResult <span style="color: #008080">VKAPI</span> <span style="font-weight: bold"><span style="color: #000000">vkGetSwapchainGrallocUsageANDROID</span></span><span style="color: #990000">(</span>
+ <span style="color: #008080">VkDevice</span> device<span style="color: #990000">,</span>
+ <span style="color: #008080">VkFormat</span> format<span style="color: #990000">,</span>
+ <span style="color: #008080">VkImageUsageFlags</span> imageUsage<span style="color: #990000">,</span>
+ <span style="color: #009900">int</span><span style="color: #990000">*</span> grallocUsage
+<span style="color: #990000">);</span></tt></pre></div></div>
+<div class="paragraph"><p>The <span class="monospaced">format</span> and <span class="monospaced">imageUsage</span> parameters are taken from the <span class="monospaced">VkSwapchainCreateInfoKHR</span> structure. The driver should fill <span class="monospaced">*grallocUsage</span> with the gralloc usage flags it requires for that format and usage. These will be combined with the usage flags requested by the swapchain consumer when allocating buffers.</p></div>
+<div class="paragraph"><p><span class="monospaced">VkNativeBufferANDROID</span> is a <span class="monospaced">vkCreateImage</span> extension structure for creating an image backed by a gralloc buffer. This structure is provided to <span class="monospaced">vkCreateImage</span> in the <span class="monospaced">VkImageCreateInfo</span> structure chain. Calls to <span class="monospaced">vkCreateImage</span> with this structure will happen during the first call to <span class="monospaced">vkGetSwapChainInfoWSI(.. VK_SWAP_CHAIN_INFO_TYPE_IMAGES_WSI ..)</span>. The WSI implementation will allocate the number of native buffers requested for the swapchain, then create a <span class="monospaced">VkImage</span> for each one.</p></div>
+<div class="listingblock">
+<div class="content"><!-- Generator: GNU source-highlight 3.1.8
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre><tt><span style="font-weight: bold"><span style="color: #0000FF">typedef</span></span> <span style="font-weight: bold"><span style="color: #0000FF">struct</span></span> <span style="color: #FF0000">{</span>
+ <span style="color: #008080">VkStructureType</span> sType<span style="color: #990000">;</span> <span style="font-style: italic"><span style="color: #9A1900">// must be VK_STRUCTURE_TYPE_NATIVE_BUFFER_ANDROID</span></span>
+ <span style="font-weight: bold"><span style="color: #0000FF">const</span></span> <span style="color: #009900">void</span><span style="color: #990000">*</span> pNext<span style="color: #990000">;</span>
+
+ <span style="font-style: italic"><span style="color: #9A1900">// Buffer handle and stride returned from gralloc alloc()</span></span>
+ <span style="color: #008080">buffer_handle_t</span> handle<span style="color: #990000">;</span>
+ <span style="color: #009900">int</span> stride<span style="color: #990000">;</span>
+
+ <span style="font-style: italic"><span style="color: #9A1900">// Gralloc format and usage requested when the buffer was allocated.</span></span>
+ <span style="color: #009900">int</span> format<span style="color: #990000">;</span>
+ <span style="color: #009900">int</span> usage<span style="color: #990000">;</span>
+<span style="color: #FF0000">}</span> VkNativeBufferANDROID<span style="color: #990000">;</span></tt></pre></div></div>
+<div class="paragraph"><p>TBD: During swapchain re-creation (using <span class="monospaced">oldSwapChain</span>), we may have to defer allocation of new gralloc buffers until old buffers have been released. If so, the <span class="monospaced">vkCreateImage</span> calls will be deferred until the first <span class="monospaced">vkAcquireNextImageWSI</span> that would return the new image.</p></div>
+<div class="paragraph"><p>When creating a gralloc-backed image, the <span class="monospaced">VkImageCreateInfo</span> will have:</p></div>
+<div class="listingblock">
+<div class="content monospaced">
+<pre> .imageType = VK_IMAGE_TYPE_2D
+ .format = a VkFormat matching the format requested for the gralloc buffer
+ .extent = the 2D dimensions requested for the gralloc buffer
+ .mipLevels = 1
+ .arraySize = 1
+ .samples = 1
+ .tiling = VK_IMAGE_TILING_OPTIMAL
+ .usage = VkSwapChainCreateInfoWSI::imageUsageFlags
+ .flags = 0
+ .sharingMode = VkSwapChainCreateInfoWSI::sharingMode
+ .queueFamilyCount = VkSwapChainCreateInfoWSI::queueFamilyCount
+ .pQueueFamilyIndices = VkSwapChainCreateInfoWSI::pQueueFamilyIndices</pre>
+</div></div>
+<div class="paragraph"><p><span class="monospaced">vkAcquireImageANDROID</span> acquires ownership of a swapchain image and imports an
+externally-signalled native fence into both an existing VkSemaphore object
+and an existing VkFence object:</p></div>
+<div class="listingblock">
+<div class="content"><!-- Generator: GNU source-highlight 3.1.8
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre><tt>VkResult <span style="color: #008080">VKAPI</span> <span style="font-weight: bold"><span style="color: #000000">vkAcquireImageANDROID</span></span><span style="color: #990000">(</span>
+ <span style="color: #008080">VkDevice</span> device<span style="color: #990000">,</span>
+ <span style="color: #008080">VkImage</span> image<span style="color: #990000">,</span>
+ <span style="color: #009900">int</span> nativeFenceFd<span style="color: #990000">,</span>
+ <span style="color: #008080">VkSemaphore</span> semaphore<span style="color: #990000">,</span>
+ VkFence fence
+<span style="color: #990000">);</span></tt></pre></div></div>
+<div class="paragraph"><p>This function is called during <span class="monospaced">vkAcquireNextImageWSI</span> to import a native
+fence into the <span class="monospaced">VkSemaphore</span> and <span class="monospaced">VkFence</span> objects provided by the
+application. Both semaphore and fence objects are optional in this call. The
+driver may also use this opportunity to recognize and handle any external
+changes to the gralloc buffer state; many drivers won’t need to do anything
+here. This call puts the <span class="monospaced">VkSemaphore</span> and <span class="monospaced">VkFence</span> into the same "pending"
+state as <span class="monospaced">vkQueueSignalSemaphore</span> and <span class="monospaced">vkQueueSubmit</span> respectively, so queues
+can wait on the semaphore and the application can wait on the fence. Both
+objects become signalled when the underlying native fence signals; if the
+native fence has already signalled, then the semaphore will be in the signalled
+state when this function returns. The driver takes ownership of the fence fd
+and is responsible for closing it when no longer needed. It must do so even if
+neither a semaphore nor a fence object is provided, or even if
+<span class="monospaced">vkAcquireImageANDROID</span> fails and returns an error. If <span class="monospaced">fenceFd</span> is -1, it
+is as if the native fence was already signalled.</p></div>
+<div class="paragraph"><p><span class="monospaced">vkQueueSignalReleaseImageANDROID</span> prepares a swapchain image for external use, and creates a native fence and schedules it to be signalled when prior work on the queue has completed.</p></div>
+<div class="listingblock">
+<div class="content"><!-- Generator: GNU source-highlight 3.1.8
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre><tt>VkResult <span style="color: #008080">VKAPI</span> <span style="font-weight: bold"><span style="color: #000000">vkQueueSignalReleaseImageANDROID</span></span><span style="color: #990000">(</span>
+ <span style="color: #008080">VkQueue</span> queue<span style="color: #990000">,</span>
+ <span style="color: #008080">uint32_t</span> waitSemaphoreCount<span style="color: #990000">,</span>
+ <span style="font-weight: bold"><span style="color: #0000FF">const</span></span> VkSemaphore<span style="color: #990000">*</span> pWaitSemaphores<span style="color: #990000">,</span>
+ <span style="color: #008080">VkImage</span> image<span style="color: #990000">,</span>
+ <span style="color: #009900">int</span><span style="color: #990000">*</span> pNativeFenceFd
+<span style="color: #990000">);</span></tt></pre></div></div>
+<div class="paragraph"><p>This will be called during <span class="monospaced">vkQueuePresentWSI</span> on the provided queue. Effects are similar to <span class="monospaced">vkQueueSignalSemaphore</span>, except with a native fence instead of a semaphore. The native fence must not signal until the <span class="monospaced">waitSemaphoreCount</span> semaphores in <span class="monospaced">pWaitSemaphores</span> have signalled. Unlike <span class="monospaced">vkQueueSignalSemaphore</span>, however, this call creates and returns the synchronization object that will be signalled rather than having it provided as input. If the queue is already idle when this function is called, it is allowed but not required to set <span class="monospaced">*pNativeFenceFd</span> to -1. The file descriptor returned in <span class="monospaced">*pNativeFenceFd</span> is owned and will be closed by the caller. Many drivers will be able to ignore the <span class="monospaced">image</span> parameter, but some may need to prepare CPU-side data structures associated with a gralloc buffer for use by external image consumers. Preparing buffer contents for use by external consumers should have been done asynchronously as part of transitioning the image to <span class="monospaced">VK_IMAGE_LAYOUT_PRESENT_SRC_KHR</span>.</p></div>
+</div>
+</div>
+<div class="sect1">
+<h2 id="_history">3. History</h2>
+<div class="sectionbody">
+<div class="olist arabic"><ol class="arabic">
+<li>
+<p>
+<strong>2015-07-08</strong> Initial version
+</p>
+</li>
+<li>
+<p>
+<strong>2015-08-16</strong>
+</p>
+<div class="ulist"><ul>
+<li>
+<p>
+Renamed to Implementor’s Guide
+</p>
+</li>
+<li>
+<p>
+Wording and formatting changes
+</p>
+</li>
+<li>
+<p>
+Updated based on resolution of Khronos bug 14265
+</p>
+</li>
+<li>
+<p>
+Deferred support for multiple drivers
+</p>
+</li>
+</ul></div>
+</li>
+<li>
+<p>
+<strong>2015-11-04</strong>
+</p>
+<div class="ulist"><ul>
+<li>
+<p>
+Added vkGetSwapchainGrallocUsageANDROID
+</p>
+</li>
+<li>
+<p>
+Replaced vkImportNativeFenceANDROID and vkQueueSignalNativeFenceANDROID
+ with vkAcquireImageANDROID and vkQueueSignalReleaseImageANDROID, to allow
+ drivers to know the ownership state of swapchain images.
+</p>
+</li>
+</ul></div>
+</li>
+<li>
+<p>
+<strong>2015-12-03</strong>
+</p>
+<div class="ulist"><ul>
+<li>
+<p>
+Added a VkFence parameter to vkAcquireImageANDROID corresponding to the
+ parameter added to vkAcquireNextImageKHR.
+</p>
+</li>
+</ul></div>
+</li>
+<li>
+<p>
+<strong>2016-01-08</strong>
+</p>
+<div class="ulist"><ul>
+<li>
+<p>
+Added waitSemaphoreCount and pWaitSemaphores parameters to vkQueueSignalReleaseImageANDROID.
+</p>
+</li>
+</ul></div>
+</li>
+</ol></div>
+</div>
+</div>
+</div>
+<div id="footnotes"><hr></div>
+<div id="footer">
+<div id="footer-text">
+Version 5<br>
+Last updated 2016-01-08 22:43:07 PST
+</div>
+</div>
+</body>
+</html>
diff --git a/vulkan/include/hardware/hwvulkan.h b/vulkan/include/hardware/hwvulkan.h
new file mode 100644
index 0000000..9e9a14d
--- /dev/null
+++ b/vulkan/include/hardware/hwvulkan.h
@@ -0,0 +1,71 @@
+/*
+ * Copyright 2015 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef ANDROID_HWVULKAN_H
+#define ANDROID_HWVULKAN_H
+
+#include <hardware/hardware.h>
+#include <vulkan/vulkan.h>
+
+__BEGIN_DECLS
+
+#define HWVULKAN_HARDWARE_MODULE_ID "vulkan"
+
+#define HWVULKAN_MODULE_API_VERSION_0_1 HARDWARE_MODULE_API_VERSION(0, 1)
+#define HWVULKAN_DEVICE_API_VERSION_0_1 HARDWARE_DEVICE_API_VERSION_2(0, 1, 0)
+
+#define HWVULKAN_DEVICE_0 "vk0"
+
+typedef struct hwvulkan_module_t {
+ struct hw_module_t common;
+} hwvulkan_module_t;
+
+/* Dispatchable Vulkan object handles must be pointers, which must point to
+ * instances of hwvulkan_dispatch_t (potentially followed by additional
+ * implementation-defined data). On return from the creation function, the
+ * 'magic' field must contain HWVULKAN_DISPATCH_MAGIC; the loader will overwrite
+ * the 'vtbl' field.
+ *
+ * NOTE: The magic value and the layout of hwvulkan_dispatch_t match the LunarG
+ * loader used on other platforms, to avoid pointless annoying differences for
+ * multi-platform drivers. Don't change them without a good reason. If there is
+ * an opportunity to change it, using a magic value that doesn't leave the
+ * upper 32-bits zero on 64-bit platforms would be nice.
+ */
+#define HWVULKAN_DISPATCH_MAGIC 0x01CDC0DE
+typedef union {
+ uintptr_t magic;
+ const void* vtbl;
+} hwvulkan_dispatch_t;
+
+/* A hwvulkan_device_t corresponds to an ICD on other systems. Currently there
+ * can only be one on a system (HWVULKAN_DEVICE_0). It is opened once per
+ * process when the Vulkan API is first used; the hw_device_t::close() function
+ * is never called. Any non-trivial resource allocation should be done when
+ * the VkInstance is created rather than when the hwvulkan_device_t is opened.
+ */
+typedef struct hwvulkan_device_t {
+ struct hw_device_t common;
+
+ PFN_vkEnumerateInstanceExtensionProperties
+ EnumerateInstanceExtensionProperties;
+ PFN_vkCreateInstance CreateInstance;
+ PFN_vkGetInstanceProcAddr GetInstanceProcAddr;
+} hwvulkan_device_t;
+
+__END_DECLS
+
+#endif // ANDROID_HWVULKAN_H
diff --git a/vulkan/include/vulkan/vk_android_native_buffer.h b/vulkan/include/vulkan/vk_android_native_buffer.h
new file mode 100644
index 0000000..d0ebf81
--- /dev/null
+++ b/vulkan/include/vulkan/vk_android_native_buffer.h
@@ -0,0 +1,91 @@
+/*
+ * Copyright 2015 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef __VK_ANDROID_NATIVE_BUFFER_H__
+#define __VK_ANDROID_NATIVE_BUFFER_H__
+
+#include <system/window.h>
+#include <vulkan/vulkan.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define VK_ANDROID_native_buffer 1
+
+#define VK_ANDROID_NATIVE_BUFFER_EXTENSION_NUMBER 11
+#define VK_ANDROID_NATIVE_BUFFER_SPEC_VERSION 5
+#define VK_ANDROID_NATIVE_BUFFER_EXTENSION_NAME "VK_ANDROID_native_buffer"
+
+#define VK_ANDROID_NATIVE_BUFFER_ENUM(type,id) ((type)(1000000000 + (1000 * (VK_ANDROID_NATIVE_BUFFER_EXTENSION_NUMBER - 1)) + (id)))
+#define VK_STRUCTURE_TYPE_NATIVE_BUFFER_ANDROID VK_ANDROID_NATIVE_BUFFER_ENUM(VkStructureType, 0)
+
+typedef struct {
+ VkStructureType sType; // must be VK_STRUCTURE_TYPE_NATIVE_BUFFER_ANDROID
+ const void* pNext;
+
+ // Buffer handle and stride returned from gralloc alloc()
+ buffer_handle_t handle;
+ int stride;
+
+ // Gralloc format and usage requested when the buffer was allocated.
+ int format;
+ int usage;
+} VkNativeBufferANDROID;
+
+typedef VkResult (VKAPI_PTR *PFN_vkGetSwapchainGrallocUsageANDROID)(VkDevice device, VkFormat format, VkImageUsageFlags imageUsage, int* grallocUsage);
+typedef VkResult (VKAPI_PTR *PFN_vkAcquireImageANDROID)(VkDevice device, VkImage image, int nativeFenceFd, VkSemaphore semaphore, VkFence fence);
+typedef VkResult (VKAPI_PTR *PFN_vkQueueSignalReleaseImageANDROID)(VkQueue queue, uint32_t waitSemaphoreCount, const VkSemaphore* pWaitSemaphores, VkImage image, int* pNativeFenceFd);
+
+#ifndef VK_NO_PROTOTYPES
+VKAPI_ATTR VkResult VKAPI_CALL vkGetSwapchainGrallocUsageANDROID(
+ VkDevice device,
+ VkFormat format,
+ VkImageUsageFlags imageUsage,
+ int* grallocUsage
+);
+VKAPI_ATTR VkResult VKAPI_CALL vkAcquireImageANDROID(
+ VkDevice device,
+ VkImage image,
+ int nativeFenceFd,
+ VkSemaphore semaphore,
+ VkFence fence
+);
+VKAPI_ATTR VkResult VKAPI_CALL vkQueueSignalReleaseImageANDROID(
+ VkQueue queue,
+ uint32_t waitSemaphoreCount,
+ const VkSemaphore* pWaitSemaphores,
+ VkImage image,
+ int* pNativeFenceFd
+);
+// -- DEPRECATED --
+VKAPI_ATTR VkResult VKAPI_CALL vkImportNativeFenceANDROID(
+ VkDevice device,
+ VkSemaphore semaphore,
+ int nativeFenceFd
+);
+VKAPI_ATTR VkResult VKAPI_CALL vkQueueSignalNativeFenceANDROID(
+ VkQueue queue,
+ int* pNativeFenceFd
+);
+// ----------------
+#endif
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif // __VK_ANDROID_NATIVE_BUFFER_H__
diff --git a/vulkan/include/vulkan/vk_ext_debug_report.h b/vulkan/include/vulkan/vk_ext_debug_report.h
new file mode 100644
index 0000000..c391033
--- /dev/null
+++ b/vulkan/include/vulkan/vk_ext_debug_report.h
@@ -0,0 +1,149 @@
+//
+// File: vk_ext_debug_report.h
+//
+/*
+ *
+ * Copyright (C) 2015 Valve Corporation
+ * Copyright (C) 2015 Google Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included
+ * in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Author: Cody Northrop <cody@lunarg.com>
+ * Author: Courtney Goeltzenleuchter <courtney@LunarG.com>
+ * Author: Tony Barbour <tony@LunarG.com>
+ *
+ */
+
+#pragma once
+
+#include "vulkan/vulkan.h"
+
+#ifdef __cplusplus
+extern "C"
+{
+#endif // __cplusplus
+
+/*
+***************************************************************************************************
+* DebugReport Vulkan Extension API
+***************************************************************************************************
+*/
+#define VK_EXT_debug_report 1
+VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkDebugReportCallbackEXT)
+
+#define VK_EXT_DEBUG_REPORT_SPEC_VERSION 2
+#define VK_EXT_DEBUG_REPORT_EXTENSION_NAME "VK_EXT_debug_report"
+
+
+typedef enum VkDebugReportObjectTypeEXT {
+ VK_DEBUG_REPORT_OBJECT_TYPE_UNKNOWN_EXT = 0,
+ VK_DEBUG_REPORT_OBJECT_TYPE_INSTANCE_EXT = 1,
+ VK_DEBUG_REPORT_OBJECT_TYPE_PHYSICAL_DEVICE_EXT = 2,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_EXT = 3,
+ VK_DEBUG_REPORT_OBJECT_TYPE_QUEUE_EXT = 4,
+ VK_DEBUG_REPORT_OBJECT_TYPE_SEMAPHORE_EXT = 5,
+ VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_BUFFER_EXT = 6,
+ VK_DEBUG_REPORT_OBJECT_TYPE_FENCE_EXT = 7,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DEVICE_MEMORY_EXT = 8,
+ VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_EXT = 9,
+ VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_EXT = 10,
+ VK_DEBUG_REPORT_OBJECT_TYPE_EVENT_EXT = 11,
+ VK_DEBUG_REPORT_OBJECT_TYPE_QUERY_POOL_EXT = 12,
+ VK_DEBUG_REPORT_OBJECT_TYPE_BUFFER_VIEW_EXT = 13,
+ VK_DEBUG_REPORT_OBJECT_TYPE_IMAGE_VIEW_EXT = 14,
+ VK_DEBUG_REPORT_OBJECT_TYPE_SHADER_MODULE_EXT = 15,
+ VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_CACHE_EXT = 16,
+ VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_LAYOUT_EXT = 17,
+ VK_DEBUG_REPORT_OBJECT_TYPE_RENDER_PASS_EXT = 18,
+ VK_DEBUG_REPORT_OBJECT_TYPE_PIPELINE_EXT = 19,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_EXT = 20,
+ VK_DEBUG_REPORT_OBJECT_TYPE_SAMPLER_EXT = 21,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_POOL_EXT = 22,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DESCRIPTOR_SET_EXT = 23,
+ VK_DEBUG_REPORT_OBJECT_TYPE_FRAMEBUFFER_EXT = 24,
+ VK_DEBUG_REPORT_OBJECT_TYPE_COMMAND_POOL_EXT = 25,
+ VK_DEBUG_REPORT_OBJECT_TYPE_SURFACE_KHR_EXT = 26,
+ VK_DEBUG_REPORT_OBJECT_TYPE_SWAPCHAIN_KHR_EXT = 27,
+ VK_DEBUG_REPORT_OBJECT_TYPE_DEBUG_REPORT_EXT = 28,
+} VkDebugReportObjectTypeEXT;
+
+typedef enum VkDebugReportErrorEXT {
+ VK_DEBUG_REPORT_ERROR_NONE_EXT = 0,
+ VK_DEBUG_REPORT_ERROR_CALLBACK_REF_EXT = 1,
+} VkDebugReportErrorEXT;
+
+typedef enum VkDebugReportFlagBitsEXT {
+ VK_DEBUG_REPORT_INFO_BIT_EXT = 0x00000001,
+ VK_DEBUG_REPORT_WARN_BIT_EXT = 0x00000002,
+ VK_DEBUG_REPORT_PERF_WARN_BIT_EXT = 0x00000004,
+ VK_DEBUG_REPORT_ERROR_BIT_EXT = 0x00000008,
+ VK_DEBUG_REPORT_DEBUG_BIT_EXT = 0x00000010,
+} VkDebugReportFlagBitsEXT;
+typedef VkFlags VkDebugReportFlagsEXT;
+
+typedef VkBool32 (VKAPI_PTR *PFN_vkDebugReportCallbackEXT)(
+ VkDebugReportFlagsEXT flags,
+ VkDebugReportObjectTypeEXT objectType,
+ uint64_t object,
+ size_t location,
+ int32_t messageCode,
+ const char* pLayerPrefix,
+ const char* pMessage,
+ void* pUserData);
+
+
+typedef struct VkDebugReportCallbackCreateInfoEXT {
+ VkStructureType sType;
+ const void* pNext;
+ VkDebugReportFlagsEXT flags;
+ PFN_vkDebugReportCallbackEXT pfnCallback;
+ void* pUserData;
+} VkDebugReportCallbackCreateInfoEXT;
+
+typedef VkResult (VKAPI_PTR *PFN_vkCreateDebugReportCallbackEXT)(VkInstance instance, const VkDebugReportCallbackCreateInfoEXT* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDebugReportCallbackEXT* pCallback);
+typedef void (VKAPI_PTR *PFN_vkDestroyDebugReportCallbackEXT)(VkInstance instance, VkDebugReportCallbackEXT callback, const VkAllocationCallbacks* pAllocator);
+typedef void (VKAPI_PTR *PFN_vkDebugReportMessageEXT)(VkInstance instance, VkDebugReportFlagsEXT flags, VkDebugReportObjectTypeEXT objectType, uint64_t object, size_t location, int32_t messageCode, const char* pLayerPrefix, const char* pMessage);
+
+#ifndef VK_NO_PROTOTYPES
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateDebugReportCallbackEXT(
+ VkInstance instance,
+ const VkDebugReportCallbackCreateInfoEXT* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkDebugReportCallbackEXT* pCallback);
+
+VKAPI_ATTR void VKAPI_CALL vkDestroyDebugReportCallbackEXT(
+ VkInstance instance,
+ VkDebugReportCallbackEXT callback,
+ const VkAllocationCallbacks* pAllocator);
+
+VKAPI_ATTR void VKAPI_CALL vkDebugReportMessageEXT(
+ VkInstance instance,
+ VkDebugReportFlagsEXT flags,
+ VkDebugReportObjectTypeEXT objectType,
+ uint64_t object,
+ size_t location,
+ int32_t messageCode,
+ const char* pLayerPrefix,
+ const char* pMessage);
+#endif
+
+#ifdef __cplusplus
+} // extern "C"
+#endif // __cplusplus
+
diff --git a/vulkan/include/vulkan/vk_platform.h b/vulkan/include/vulkan/vk_platform.h
new file mode 100644
index 0000000..a53e725
--- /dev/null
+++ b/vulkan/include/vulkan/vk_platform.h
@@ -0,0 +1,127 @@
+//
+// File: vk_platform.h
+//
+/*
+** Copyright (c) 2014-2015 The Khronos Group Inc.
+**
+** Permission is hereby granted, free of charge, to any person obtaining a
+** copy of this software and/or associated documentation files (the
+** "Materials"), to deal in the Materials without restriction, including
+** without limitation the rights to use, copy, modify, merge, publish,
+** distribute, sublicense, and/or sell copies of the Materials, and to
+** permit persons to whom the Materials are furnished to do so, subject to
+** the following conditions:
+**
+** The above copyright notice and this permission notice shall be included
+** in all copies or substantial portions of the Materials.
+**
+** THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+** EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+** MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+** IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+** CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+** TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+** MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.
+*/
+
+
+#ifndef __VK_PLATFORM_H__
+#define __VK_PLATFORM_H__
+
+#ifdef __cplusplus
+extern "C"
+{
+#endif // __cplusplus
+
+/*
+***************************************************************************************************
+* Platform-specific directives and type declarations
+***************************************************************************************************
+*/
+
+/* Platform-specific calling convention macros.
+ *
+ * Platforms should define these so that Vulkan clients call Vulkan commands
+ * with the same calling conventions that the Vulkan implementation expects.
+ *
+ * VKAPI_ATTR - Placed before the return type in function declarations.
+ * Useful for C++11 and GCC/Clang-style function attribute syntax.
+ * VKAPI_CALL - Placed after the return type in function declarations.
+ * Useful for MSVC-style calling convention syntax.
+ * VKAPI_PTR - Placed between the '(' and '*' in function pointer types.
+ *
+ * Function declaration: VKAPI_ATTR void VKAPI_CALL vkCommand(void);
+ * Function pointer type: typedef void (VKAPI_PTR *PFN_vkCommand)(void);
+ */
+#if defined(_WIN32)
+ // On Windows, Vulkan commands use the stdcall convention
+ #define VKAPI_ATTR
+ #define VKAPI_CALL __stdcall
+ #define VKAPI_PTR VKAPI_CALL
+#elif defined(__ANDROID__) && defined(__ARM_EABI__) && !defined(__ARM_ARCH_7A__)
+ // Android does not support Vulkan in native code using the "armeabi" ABI.
+ #error "Vulkan requires the 'armeabi-v7a' or 'armeabi-v7a-hard' ABI on 32-bit ARM CPUs"
+#elif defined(__ANDROID__) && defined(__ARM_ARCH_7A__)
+ // On Android/ARMv7a, Vulkan functions use the armeabi-v7a-hard calling
+ // convention, even if the application's native code is compiled with the
+ // armeabi-v7a calling convention.
+ #define VKAPI_ATTR __attribute__((pcs("aapcs-vfp")))
+ #define VKAPI_CALL
+ #define VKAPI_PTR VKAPI_ATTR
+#else
+ // On other platforms, use the default calling convention
+ #define VKAPI_ATTR
+ #define VKAPI_CALL
+ #define VKAPI_PTR
+#endif
+
+#include <stddef.h>
+
+#if !defined(VK_NO_STDINT_H)
+ #if defined(_MSC_VER) && (_MSC_VER < 1600)
+ typedef signed __int8 int8_t;
+ typedef unsigned __int8 uint8_t;
+ typedef signed __int16 int16_t;
+ typedef unsigned __int16 uint16_t;
+ typedef signed __int32 int32_t;
+ typedef unsigned __int32 uint32_t;
+ typedef signed __int64 int64_t;
+ typedef unsigned __int64 uint64_t;
+ #else
+ #include <stdint.h>
+ #endif
+#endif // !defined(VK_NO_STDINT_H)
+
+#ifdef __cplusplus
+} // extern "C"
+#endif // __cplusplus
+
+// Platform-specific headers required by platform window system extensions.
+// These are enabled prior to #including "vulkan.h". The same enable then
+// controls inclusion of the extension interfaces in vulkan.h.
+
+#ifdef VK_USE_PLATFORM_ANDROID_KHR
+#include <android/native_window.h>
+#endif
+
+#ifdef VK_USE_PLATFORM_MIR_KHR
+#include <mir_toolkit/client_types.h>
+#endif
+
+#ifdef VK_USE_PLATFORM_WAYLAND_KHR
+#include <wayland-client.h>
+#endif
+
+#ifdef VK_USE_PLATFORM_WIN32_KHR
+#include <windows.h>
+#endif
+
+#ifdef VK_USE_PLATFORM_XLIB_KHR
+#include <X11/Xlib.h>
+#endif
+
+#ifdef VK_USE_PLATFORM_XCB_KHR
+#include <xcb/xcb.h>
+#endif
+
+#endif // __VK_PLATFORM_H__
diff --git a/vulkan/include/vulkan/vulkan.h b/vulkan/include/vulkan/vulkan.h
new file mode 100644
index 0000000..9940f85
--- /dev/null
+++ b/vulkan/include/vulkan/vulkan.h
@@ -0,0 +1,3671 @@
+#ifndef __vulkan_h_
+#define __vulkan_h_ 1
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/*
+** Copyright (c) 2015 The Khronos Group Inc.
+**
+** Permission is hereby granted, free of charge, to any person obtaining a
+** copy of this software and/or associated documentation files (the
+** "Materials"), to deal in the Materials without restriction, including
+** without limitation the rights to use, copy, modify, merge, publish,
+** distribute, sublicense, and/or sell copies of the Materials, and to
+** permit persons to whom the Materials are furnished to do so, subject to
+** the following conditions:
+**
+** The above copyright notice and this permission notice shall be included
+** in all copies or substantial portions of the Materials.
+**
+** THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+** EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+** MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+** IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+** CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+** TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+** MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.
+*/
+
+/*
+** This header is generated from the Khronos Vulkan XML API Registry.
+**
+*/
+
+
+#define VK_VERSION_1_0 1
+#include "vk_platform.h"
+
+#define VK_MAKE_VERSION(major, minor, patch) \
+ ((major << 22) | (minor << 12) | patch)
+
+// Vulkan API version supported by this file
+#define VK_API_VERSION VK_MAKE_VERSION(1, 0, 2)
+
+
+#define VK_NULL_HANDLE 0
+
+
+
+#define VK_DEFINE_HANDLE(object) typedef struct object##_T* object;
+
+
+#if defined(__LP64__) || defined(_WIN64) || defined(__x86_64__) || defined(_M_X64) || defined(__ia64) || defined (_M_IA64) || defined(__aarch64__) || defined(__powerpc64__)
+ #define VK_DEFINE_NON_DISPATCHABLE_HANDLE(object) typedef struct object##_T *object;
+#else
+ #define VK_DEFINE_NON_DISPATCHABLE_HANDLE(object) typedef uint64_t object;
+#endif
+
+
+
+typedef uint32_t VkFlags;
+typedef uint32_t VkBool32;
+typedef uint64_t VkDeviceSize;
+typedef uint32_t VkSampleMask;
+
+VK_DEFINE_HANDLE(VkInstance)
+VK_DEFINE_HANDLE(VkPhysicalDevice)
+VK_DEFINE_HANDLE(VkDevice)
+VK_DEFINE_HANDLE(VkQueue)
+VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkSemaphore)
+VK_DEFINE_HANDLE(VkCommandBuffer)
+VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkFence)
+VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkDeviceMemory)
+VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkBuffer)
+VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkImage)
+VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkEvent)
+VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkQueryPool)
+VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkBufferView)
+VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkImageView)
+VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkShaderModule)
+VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkPipelineCache)
+VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkPipelineLayout)
+VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkRenderPass)
+VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkPipeline)
+VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkDescriptorSetLayout)
+VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkSampler)
+VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkDescriptorPool)
+VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkDescriptorSet)
+VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkFramebuffer)
+VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkCommandPool)
+
+#define VK_LOD_CLAMP_NONE 1000.0f
+#define VK_REMAINING_MIP_LEVELS (~0U)
+#define VK_REMAINING_ARRAY_LAYERS (~0U)
+#define VK_WHOLE_SIZE (~0ULL)
+#define VK_ATTACHMENT_UNUSED (~0U)
+#define VK_TRUE 1
+#define VK_FALSE 0
+#define VK_QUEUE_FAMILY_IGNORED (~0U)
+#define VK_SUBPASS_EXTERNAL (~0U)
+#define VK_MAX_PHYSICAL_DEVICE_NAME_SIZE 256
+#define VK_UUID_SIZE 16
+#define VK_MAX_MEMORY_TYPES 32
+#define VK_MAX_MEMORY_HEAPS 16
+#define VK_MAX_EXTENSION_NAME_SIZE 256
+#define VK_MAX_DESCRIPTION_SIZE 256
+
+
+typedef enum VkPipelineCacheHeaderVersion {
+ VK_PIPELINE_CACHE_HEADER_VERSION_ONE = 1,
+ VK_PIPELINE_CACHE_HEADER_VERSION_BEGIN_RANGE = VK_PIPELINE_CACHE_HEADER_VERSION_ONE,
+ VK_PIPELINE_CACHE_HEADER_VERSION_END_RANGE = VK_PIPELINE_CACHE_HEADER_VERSION_ONE,
+ VK_PIPELINE_CACHE_HEADER_VERSION_RANGE_SIZE = (VK_PIPELINE_CACHE_HEADER_VERSION_ONE - VK_PIPELINE_CACHE_HEADER_VERSION_ONE + 1),
+ VK_PIPELINE_CACHE_HEADER_VERSION_MAX_ENUM = 0x7FFFFFFF
+} VkPipelineCacheHeaderVersion;
+
+typedef enum VkResult {
+ VK_SUCCESS = 0,
+ VK_NOT_READY = 1,
+ VK_TIMEOUT = 2,
+ VK_EVENT_SET = 3,
+ VK_EVENT_RESET = 4,
+ VK_INCOMPLETE = 5,
+ VK_ERROR_OUT_OF_HOST_MEMORY = -1,
+ VK_ERROR_OUT_OF_DEVICE_MEMORY = -2,
+ VK_ERROR_INITIALIZATION_FAILED = -3,
+ VK_ERROR_DEVICE_LOST = -4,
+ VK_ERROR_MEMORY_MAP_FAILED = -5,
+ VK_ERROR_LAYER_NOT_PRESENT = -6,
+ VK_ERROR_EXTENSION_NOT_PRESENT = -7,
+ VK_ERROR_FEATURE_NOT_PRESENT = -8,
+ VK_ERROR_INCOMPATIBLE_DRIVER = -9,
+ VK_ERROR_TOO_MANY_OBJECTS = -10,
+ VK_ERROR_FORMAT_NOT_SUPPORTED = -11,
+ VK_ERROR_SURFACE_LOST_KHR = -1000000000,
+ VK_ERROR_NATIVE_WINDOW_IN_USE_KHR = -1000000001,
+ VK_SUBOPTIMAL_KHR = 1000001003,
+ VK_ERROR_OUT_OF_DATE_KHR = -1000001004,
+ VK_ERROR_INCOMPATIBLE_DISPLAY_KHR = -1000003001,
+ VK_ERROR_VALIDATION_FAILED_EXT = -1000011001,
+ VK_RESULT_BEGIN_RANGE = VK_ERROR_FORMAT_NOT_SUPPORTED,
+ VK_RESULT_END_RANGE = VK_INCOMPLETE,
+ VK_RESULT_RANGE_SIZE = (VK_INCOMPLETE - VK_ERROR_FORMAT_NOT_SUPPORTED + 1),
+ VK_RESULT_MAX_ENUM = 0x7FFFFFFF
+} VkResult;
+
+typedef enum VkStructureType {
+ VK_STRUCTURE_TYPE_APPLICATION_INFO = 0,
+ VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO = 1,
+ VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO = 2,
+ VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO = 3,
+ VK_STRUCTURE_TYPE_SUBMIT_INFO = 4,
+ VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO = 5,
+ VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE = 6,
+ VK_STRUCTURE_TYPE_BIND_SPARSE_INFO = 7,
+ VK_STRUCTURE_TYPE_FENCE_CREATE_INFO = 8,
+ VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO = 9,
+ VK_STRUCTURE_TYPE_EVENT_CREATE_INFO = 10,
+ VK_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO = 11,
+ VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO = 12,
+ VK_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO = 13,
+ VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO = 14,
+ VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO = 15,
+ VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO = 16,
+ VK_STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO = 17,
+ VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO = 18,
+ VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO = 19,
+ VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO = 20,
+ VK_STRUCTURE_TYPE_PIPELINE_TESSELLATION_STATE_CREATE_INFO = 21,
+ VK_STRUCTURE_TYPE_PIPELINE_VIEWPORT_STATE_CREATE_INFO = 22,
+ VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO = 23,
+ VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO = 24,
+ VK_STRUCTURE_TYPE_PIPELINE_DEPTH_STENCIL_STATE_CREATE_INFO = 25,
+ VK_STRUCTURE_TYPE_PIPELINE_COLOR_BLEND_STATE_CREATE_INFO = 26,
+ VK_STRUCTURE_TYPE_PIPELINE_DYNAMIC_STATE_CREATE_INFO = 27,
+ VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO = 28,
+ VK_STRUCTURE_TYPE_COMPUTE_PIPELINE_CREATE_INFO = 29,
+ VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO = 30,
+ VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO = 31,
+ VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO = 32,
+ VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO = 33,
+ VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO = 34,
+ VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET = 35,
+ VK_STRUCTURE_TYPE_COPY_DESCRIPTOR_SET = 36,
+ VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO = 37,
+ VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO = 38,
+ VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO = 39,
+ VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO = 40,
+ VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO = 41,
+ VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO = 42,
+ VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO = 43,
+ VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER = 44,
+ VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER = 45,
+ VK_STRUCTURE_TYPE_MEMORY_BARRIER = 46,
+ VK_STRUCTURE_TYPE_LOADER_INSTANCE_CREATE_INFO = 47,
+ VK_STRUCTURE_TYPE_LOADER_DEVICE_CREATE_INFO = 48,
+ VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR = 1000001000,
+ VK_STRUCTURE_TYPE_PRESENT_INFO_KHR = 1000001001,
+ VK_STRUCTURE_TYPE_DISPLAY_MODE_CREATE_INFO_KHR = 1000002000,
+ VK_STRUCTURE_TYPE_DISPLAY_SURFACE_CREATE_INFO_KHR = 1000002001,
+ VK_STRUCTURE_TYPE_DISPLAY_PRESENT_INFO_KHR = 1000003000,
+ VK_STRUCTURE_TYPE_XLIB_SURFACE_CREATE_INFO_KHR = 1000004000,
+ VK_STRUCTURE_TYPE_XCB_SURFACE_CREATE_INFO_KHR = 1000005000,
+ VK_STRUCTURE_TYPE_WAYLAND_SURFACE_CREATE_INFO_KHR = 1000006000,
+ VK_STRUCTURE_TYPE_MIR_SURFACE_CREATE_INFO_KHR = 1000007000,
+ VK_STRUCTURE_TYPE_ANDROID_SURFACE_CREATE_INFO_KHR = 1000008000,
+ VK_STRUCTURE_TYPE_WIN32_SURFACE_CREATE_INFO_KHR = 1000009000,
+ VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT = 1000011000,
+ VK_STRUCTURE_TYPE_BEGIN_RANGE = VK_STRUCTURE_TYPE_APPLICATION_INFO,
+ VK_STRUCTURE_TYPE_END_RANGE = VK_STRUCTURE_TYPE_LOADER_DEVICE_CREATE_INFO,
+ VK_STRUCTURE_TYPE_RANGE_SIZE = (VK_STRUCTURE_TYPE_LOADER_DEVICE_CREATE_INFO - VK_STRUCTURE_TYPE_APPLICATION_INFO + 1),
+ VK_STRUCTURE_TYPE_MAX_ENUM = 0x7FFFFFFF
+} VkStructureType;
+
+typedef enum VkSystemAllocationScope {
+ VK_SYSTEM_ALLOCATION_SCOPE_COMMAND = 0,
+ VK_SYSTEM_ALLOCATION_SCOPE_OBJECT = 1,
+ VK_SYSTEM_ALLOCATION_SCOPE_CACHE = 2,
+ VK_SYSTEM_ALLOCATION_SCOPE_DEVICE = 3,
+ VK_SYSTEM_ALLOCATION_SCOPE_INSTANCE = 4,
+ VK_SYSTEM_ALLOCATION_SCOPE_BEGIN_RANGE = VK_SYSTEM_ALLOCATION_SCOPE_COMMAND,
+ VK_SYSTEM_ALLOCATION_SCOPE_END_RANGE = VK_SYSTEM_ALLOCATION_SCOPE_INSTANCE,
+ VK_SYSTEM_ALLOCATION_SCOPE_RANGE_SIZE = (VK_SYSTEM_ALLOCATION_SCOPE_INSTANCE - VK_SYSTEM_ALLOCATION_SCOPE_COMMAND + 1),
+ VK_SYSTEM_ALLOCATION_SCOPE_MAX_ENUM = 0x7FFFFFFF
+} VkSystemAllocationScope;
+
+typedef enum VkInternalAllocationType {
+ VK_INTERNAL_ALLOCATION_TYPE_EXECUTABLE = 0,
+ VK_INTERNAL_ALLOCATION_TYPE_BEGIN_RANGE = VK_INTERNAL_ALLOCATION_TYPE_EXECUTABLE,
+ VK_INTERNAL_ALLOCATION_TYPE_END_RANGE = VK_INTERNAL_ALLOCATION_TYPE_EXECUTABLE,
+ VK_INTERNAL_ALLOCATION_TYPE_RANGE_SIZE = (VK_INTERNAL_ALLOCATION_TYPE_EXECUTABLE - VK_INTERNAL_ALLOCATION_TYPE_EXECUTABLE + 1),
+ VK_INTERNAL_ALLOCATION_TYPE_MAX_ENUM = 0x7FFFFFFF
+} VkInternalAllocationType;
+
+typedef enum VkFormat {
+ VK_FORMAT_UNDEFINED = 0,
+ VK_FORMAT_R4G4_UNORM_PACK8 = 1,
+ VK_FORMAT_R4G4B4A4_UNORM_PACK16 = 2,
+ VK_FORMAT_B4G4R4A4_UNORM_PACK16 = 3,
+ VK_FORMAT_R5G6B5_UNORM_PACK16 = 4,
+ VK_FORMAT_B5G6R5_UNORM_PACK16 = 5,
+ VK_FORMAT_R5G5B5A1_UNORM_PACK16 = 6,
+ VK_FORMAT_B5G5R5A1_UNORM_PACK16 = 7,
+ VK_FORMAT_A1R5G5B5_UNORM_PACK16 = 8,
+ VK_FORMAT_R8_UNORM = 9,
+ VK_FORMAT_R8_SNORM = 10,
+ VK_FORMAT_R8_USCALED = 11,
+ VK_FORMAT_R8_SSCALED = 12,
+ VK_FORMAT_R8_UINT = 13,
+ VK_FORMAT_R8_SINT = 14,
+ VK_FORMAT_R8_SRGB = 15,
+ VK_FORMAT_R8G8_UNORM = 16,
+ VK_FORMAT_R8G8_SNORM = 17,
+ VK_FORMAT_R8G8_USCALED = 18,
+ VK_FORMAT_R8G8_SSCALED = 19,
+ VK_FORMAT_R8G8_UINT = 20,
+ VK_FORMAT_R8G8_SINT = 21,
+ VK_FORMAT_R8G8_SRGB = 22,
+ VK_FORMAT_R8G8B8_UNORM = 23,
+ VK_FORMAT_R8G8B8_SNORM = 24,
+ VK_FORMAT_R8G8B8_USCALED = 25,
+ VK_FORMAT_R8G8B8_SSCALED = 26,
+ VK_FORMAT_R8G8B8_UINT = 27,
+ VK_FORMAT_R8G8B8_SINT = 28,
+ VK_FORMAT_R8G8B8_SRGB = 29,
+ VK_FORMAT_B8G8R8_UNORM = 30,
+ VK_FORMAT_B8G8R8_SNORM = 31,
+ VK_FORMAT_B8G8R8_USCALED = 32,
+ VK_FORMAT_B8G8R8_SSCALED = 33,
+ VK_FORMAT_B8G8R8_UINT = 34,
+ VK_FORMAT_B8G8R8_SINT = 35,
+ VK_FORMAT_B8G8R8_SRGB = 36,
+ VK_FORMAT_R8G8B8A8_UNORM = 37,
+ VK_FORMAT_R8G8B8A8_SNORM = 38,
+ VK_FORMAT_R8G8B8A8_USCALED = 39,
+ VK_FORMAT_R8G8B8A8_SSCALED = 40,
+ VK_FORMAT_R8G8B8A8_UINT = 41,
+ VK_FORMAT_R8G8B8A8_SINT = 42,
+ VK_FORMAT_R8G8B8A8_SRGB = 43,
+ VK_FORMAT_B8G8R8A8_UNORM = 44,
+ VK_FORMAT_B8G8R8A8_SNORM = 45,
+ VK_FORMAT_B8G8R8A8_USCALED = 46,
+ VK_FORMAT_B8G8R8A8_SSCALED = 47,
+ VK_FORMAT_B8G8R8A8_UINT = 48,
+ VK_FORMAT_B8G8R8A8_SINT = 49,
+ VK_FORMAT_B8G8R8A8_SRGB = 50,
+ VK_FORMAT_A8B8G8R8_UNORM_PACK32 = 51,
+ VK_FORMAT_A8B8G8R8_SNORM_PACK32 = 52,
+ VK_FORMAT_A8B8G8R8_USCALED_PACK32 = 53,
+ VK_FORMAT_A8B8G8R8_SSCALED_PACK32 = 54,
+ VK_FORMAT_A8B8G8R8_UINT_PACK32 = 55,
+ VK_FORMAT_A8B8G8R8_SINT_PACK32 = 56,
+ VK_FORMAT_A8B8G8R8_SRGB_PACK32 = 57,
+ VK_FORMAT_A2R10G10B10_UNORM_PACK32 = 58,
+ VK_FORMAT_A2R10G10B10_SNORM_PACK32 = 59,
+ VK_FORMAT_A2R10G10B10_USCALED_PACK32 = 60,
+ VK_FORMAT_A2R10G10B10_SSCALED_PACK32 = 61,
+ VK_FORMAT_A2R10G10B10_UINT_PACK32 = 62,
+ VK_FORMAT_A2R10G10B10_SINT_PACK32 = 63,
+ VK_FORMAT_A2B10G10R10_UNORM_PACK32 = 64,
+ VK_FORMAT_A2B10G10R10_SNORM_PACK32 = 65,
+ VK_FORMAT_A2B10G10R10_USCALED_PACK32 = 66,
+ VK_FORMAT_A2B10G10R10_SSCALED_PACK32 = 67,
+ VK_FORMAT_A2B10G10R10_UINT_PACK32 = 68,
+ VK_FORMAT_A2B10G10R10_SINT_PACK32 = 69,
+ VK_FORMAT_R16_UNORM = 70,
+ VK_FORMAT_R16_SNORM = 71,
+ VK_FORMAT_R16_USCALED = 72,
+ VK_FORMAT_R16_SSCALED = 73,
+ VK_FORMAT_R16_UINT = 74,
+ VK_FORMAT_R16_SINT = 75,
+ VK_FORMAT_R16_SFLOAT = 76,
+ VK_FORMAT_R16G16_UNORM = 77,
+ VK_FORMAT_R16G16_SNORM = 78,
+ VK_FORMAT_R16G16_USCALED = 79,
+ VK_FORMAT_R16G16_SSCALED = 80,
+ VK_FORMAT_R16G16_UINT = 81,
+ VK_FORMAT_R16G16_SINT = 82,
+ VK_FORMAT_R16G16_SFLOAT = 83,
+ VK_FORMAT_R16G16B16_UNORM = 84,
+ VK_FORMAT_R16G16B16_SNORM = 85,
+ VK_FORMAT_R16G16B16_USCALED = 86,
+ VK_FORMAT_R16G16B16_SSCALED = 87,
+ VK_FORMAT_R16G16B16_UINT = 88,
+ VK_FORMAT_R16G16B16_SINT = 89,
+ VK_FORMAT_R16G16B16_SFLOAT = 90,
+ VK_FORMAT_R16G16B16A16_UNORM = 91,
+ VK_FORMAT_R16G16B16A16_SNORM = 92,
+ VK_FORMAT_R16G16B16A16_USCALED = 93,
+ VK_FORMAT_R16G16B16A16_SSCALED = 94,
+ VK_FORMAT_R16G16B16A16_UINT = 95,
+ VK_FORMAT_R16G16B16A16_SINT = 96,
+ VK_FORMAT_R16G16B16A16_SFLOAT = 97,
+ VK_FORMAT_R32_UINT = 98,
+ VK_FORMAT_R32_SINT = 99,
+ VK_FORMAT_R32_SFLOAT = 100,
+ VK_FORMAT_R32G32_UINT = 101,
+ VK_FORMAT_R32G32_SINT = 102,
+ VK_FORMAT_R32G32_SFLOAT = 103,
+ VK_FORMAT_R32G32B32_UINT = 104,
+ VK_FORMAT_R32G32B32_SINT = 105,
+ VK_FORMAT_R32G32B32_SFLOAT = 106,
+ VK_FORMAT_R32G32B32A32_UINT = 107,
+ VK_FORMAT_R32G32B32A32_SINT = 108,
+ VK_FORMAT_R32G32B32A32_SFLOAT = 109,
+ VK_FORMAT_R64_UINT = 110,
+ VK_FORMAT_R64_SINT = 111,
+ VK_FORMAT_R64_SFLOAT = 112,
+ VK_FORMAT_R64G64_UINT = 113,
+ VK_FORMAT_R64G64_SINT = 114,
+ VK_FORMAT_R64G64_SFLOAT = 115,
+ VK_FORMAT_R64G64B64_UINT = 116,
+ VK_FORMAT_R64G64B64_SINT = 117,
+ VK_FORMAT_R64G64B64_SFLOAT = 118,
+ VK_FORMAT_R64G64B64A64_UINT = 119,
+ VK_FORMAT_R64G64B64A64_SINT = 120,
+ VK_FORMAT_R64G64B64A64_SFLOAT = 121,
+ VK_FORMAT_B10G11R11_UFLOAT_PACK32 = 122,
+ VK_FORMAT_E5B9G9R9_UFLOAT_PACK32 = 123,
+ VK_FORMAT_D16_UNORM = 124,
+ VK_FORMAT_X8_D24_UNORM_PACK32 = 125,
+ VK_FORMAT_D32_SFLOAT = 126,
+ VK_FORMAT_S8_UINT = 127,
+ VK_FORMAT_D16_UNORM_S8_UINT = 128,
+ VK_FORMAT_D24_UNORM_S8_UINT = 129,
+ VK_FORMAT_D32_SFLOAT_S8_UINT = 130,
+ VK_FORMAT_BC1_RGB_UNORM_BLOCK = 131,
+ VK_FORMAT_BC1_RGB_SRGB_BLOCK = 132,
+ VK_FORMAT_BC1_RGBA_UNORM_BLOCK = 133,
+ VK_FORMAT_BC1_RGBA_SRGB_BLOCK = 134,
+ VK_FORMAT_BC2_UNORM_BLOCK = 135,
+ VK_FORMAT_BC2_SRGB_BLOCK = 136,
+ VK_FORMAT_BC3_UNORM_BLOCK = 137,
+ VK_FORMAT_BC3_SRGB_BLOCK = 138,
+ VK_FORMAT_BC4_UNORM_BLOCK = 139,
+ VK_FORMAT_BC4_SNORM_BLOCK = 140,
+ VK_FORMAT_BC5_UNORM_BLOCK = 141,
+ VK_FORMAT_BC5_SNORM_BLOCK = 142,
+ VK_FORMAT_BC6H_UFLOAT_BLOCK = 143,
+ VK_FORMAT_BC6H_SFLOAT_BLOCK = 144,
+ VK_FORMAT_BC7_UNORM_BLOCK = 145,
+ VK_FORMAT_BC7_SRGB_BLOCK = 146,
+ VK_FORMAT_ETC2_R8G8B8_UNORM_BLOCK = 147,
+ VK_FORMAT_ETC2_R8G8B8_SRGB_BLOCK = 148,
+ VK_FORMAT_ETC2_R8G8B8A1_UNORM_BLOCK = 149,
+ VK_FORMAT_ETC2_R8G8B8A1_SRGB_BLOCK = 150,
+ VK_FORMAT_ETC2_R8G8B8A8_UNORM_BLOCK = 151,
+ VK_FORMAT_ETC2_R8G8B8A8_SRGB_BLOCK = 152,
+ VK_FORMAT_EAC_R11_UNORM_BLOCK = 153,
+ VK_FORMAT_EAC_R11_SNORM_BLOCK = 154,
+ VK_FORMAT_EAC_R11G11_UNORM_BLOCK = 155,
+ VK_FORMAT_EAC_R11G11_SNORM_BLOCK = 156,
+ VK_FORMAT_ASTC_4x4_UNORM_BLOCK = 157,
+ VK_FORMAT_ASTC_4x4_SRGB_BLOCK = 158,
+ VK_FORMAT_ASTC_5x4_UNORM_BLOCK = 159,
+ VK_FORMAT_ASTC_5x4_SRGB_BLOCK = 160,
+ VK_FORMAT_ASTC_5x5_UNORM_BLOCK = 161,
+ VK_FORMAT_ASTC_5x5_SRGB_BLOCK = 162,
+ VK_FORMAT_ASTC_6x5_UNORM_BLOCK = 163,
+ VK_FORMAT_ASTC_6x5_SRGB_BLOCK = 164,
+ VK_FORMAT_ASTC_6x6_UNORM_BLOCK = 165,
+ VK_FORMAT_ASTC_6x6_SRGB_BLOCK = 166,
+ VK_FORMAT_ASTC_8x5_UNORM_BLOCK = 167,
+ VK_FORMAT_ASTC_8x5_SRGB_BLOCK = 168,
+ VK_FORMAT_ASTC_8x6_UNORM_BLOCK = 169,
+ VK_FORMAT_ASTC_8x6_SRGB_BLOCK = 170,
+ VK_FORMAT_ASTC_8x8_UNORM_BLOCK = 171,
+ VK_FORMAT_ASTC_8x8_SRGB_BLOCK = 172,
+ VK_FORMAT_ASTC_10x5_UNORM_BLOCK = 173,
+ VK_FORMAT_ASTC_10x5_SRGB_BLOCK = 174,
+ VK_FORMAT_ASTC_10x6_UNORM_BLOCK = 175,
+ VK_FORMAT_ASTC_10x6_SRGB_BLOCK = 176,
+ VK_FORMAT_ASTC_10x8_UNORM_BLOCK = 177,
+ VK_FORMAT_ASTC_10x8_SRGB_BLOCK = 178,
+ VK_FORMAT_ASTC_10x10_UNORM_BLOCK = 179,
+ VK_FORMAT_ASTC_10x10_SRGB_BLOCK = 180,
+ VK_FORMAT_ASTC_12x10_UNORM_BLOCK = 181,
+ VK_FORMAT_ASTC_12x10_SRGB_BLOCK = 182,
+ VK_FORMAT_ASTC_12x12_UNORM_BLOCK = 183,
+ VK_FORMAT_ASTC_12x12_SRGB_BLOCK = 184,
+ VK_FORMAT_BEGIN_RANGE = VK_FORMAT_UNDEFINED,
+ VK_FORMAT_END_RANGE = VK_FORMAT_ASTC_12x12_SRGB_BLOCK,
+ VK_FORMAT_RANGE_SIZE = (VK_FORMAT_ASTC_12x12_SRGB_BLOCK - VK_FORMAT_UNDEFINED + 1),
+ VK_FORMAT_MAX_ENUM = 0x7FFFFFFF
+} VkFormat;
+
+typedef enum VkImageType {
+ VK_IMAGE_TYPE_1D = 0,
+ VK_IMAGE_TYPE_2D = 1,
+ VK_IMAGE_TYPE_3D = 2,
+ VK_IMAGE_TYPE_BEGIN_RANGE = VK_IMAGE_TYPE_1D,
+ VK_IMAGE_TYPE_END_RANGE = VK_IMAGE_TYPE_3D,
+ VK_IMAGE_TYPE_RANGE_SIZE = (VK_IMAGE_TYPE_3D - VK_IMAGE_TYPE_1D + 1),
+ VK_IMAGE_TYPE_MAX_ENUM = 0x7FFFFFFF
+} VkImageType;
+
+typedef enum VkImageTiling {
+ VK_IMAGE_TILING_OPTIMAL = 0,
+ VK_IMAGE_TILING_LINEAR = 1,
+ VK_IMAGE_TILING_BEGIN_RANGE = VK_IMAGE_TILING_OPTIMAL,
+ VK_IMAGE_TILING_END_RANGE = VK_IMAGE_TILING_LINEAR,
+ VK_IMAGE_TILING_RANGE_SIZE = (VK_IMAGE_TILING_LINEAR - VK_IMAGE_TILING_OPTIMAL + 1),
+ VK_IMAGE_TILING_MAX_ENUM = 0x7FFFFFFF
+} VkImageTiling;
+
+typedef enum VkPhysicalDeviceType {
+ VK_PHYSICAL_DEVICE_TYPE_OTHER = 0,
+ VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU = 1,
+ VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU = 2,
+ VK_PHYSICAL_DEVICE_TYPE_VIRTUAL_GPU = 3,
+ VK_PHYSICAL_DEVICE_TYPE_CPU = 4,
+ VK_PHYSICAL_DEVICE_TYPE_BEGIN_RANGE = VK_PHYSICAL_DEVICE_TYPE_OTHER,
+ VK_PHYSICAL_DEVICE_TYPE_END_RANGE = VK_PHYSICAL_DEVICE_TYPE_CPU,
+ VK_PHYSICAL_DEVICE_TYPE_RANGE_SIZE = (VK_PHYSICAL_DEVICE_TYPE_CPU - VK_PHYSICAL_DEVICE_TYPE_OTHER + 1),
+ VK_PHYSICAL_DEVICE_TYPE_MAX_ENUM = 0x7FFFFFFF
+} VkPhysicalDeviceType;
+
+typedef enum VkQueryType {
+ VK_QUERY_TYPE_OCCLUSION = 0,
+ VK_QUERY_TYPE_PIPELINE_STATISTICS = 1,
+ VK_QUERY_TYPE_TIMESTAMP = 2,
+ VK_QUERY_TYPE_BEGIN_RANGE = VK_QUERY_TYPE_OCCLUSION,
+ VK_QUERY_TYPE_END_RANGE = VK_QUERY_TYPE_TIMESTAMP,
+ VK_QUERY_TYPE_RANGE_SIZE = (VK_QUERY_TYPE_TIMESTAMP - VK_QUERY_TYPE_OCCLUSION + 1),
+ VK_QUERY_TYPE_MAX_ENUM = 0x7FFFFFFF
+} VkQueryType;
+
+typedef enum VkSharingMode {
+ VK_SHARING_MODE_EXCLUSIVE = 0,
+ VK_SHARING_MODE_CONCURRENT = 1,
+ VK_SHARING_MODE_BEGIN_RANGE = VK_SHARING_MODE_EXCLUSIVE,
+ VK_SHARING_MODE_END_RANGE = VK_SHARING_MODE_CONCURRENT,
+ VK_SHARING_MODE_RANGE_SIZE = (VK_SHARING_MODE_CONCURRENT - VK_SHARING_MODE_EXCLUSIVE + 1),
+ VK_SHARING_MODE_MAX_ENUM = 0x7FFFFFFF
+} VkSharingMode;
+
+typedef enum VkImageLayout {
+ VK_IMAGE_LAYOUT_UNDEFINED = 0,
+ VK_IMAGE_LAYOUT_GENERAL = 1,
+ VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL = 2,
+ VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL = 3,
+ VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL = 4,
+ VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL = 5,
+ VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL = 6,
+ VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL = 7,
+ VK_IMAGE_LAYOUT_PREINITIALIZED = 8,
+ VK_IMAGE_LAYOUT_PRESENT_SRC_KHR = 1000001002,
+ VK_IMAGE_LAYOUT_BEGIN_RANGE = VK_IMAGE_LAYOUT_UNDEFINED,
+ VK_IMAGE_LAYOUT_END_RANGE = VK_IMAGE_LAYOUT_PREINITIALIZED,
+ VK_IMAGE_LAYOUT_RANGE_SIZE = (VK_IMAGE_LAYOUT_PREINITIALIZED - VK_IMAGE_LAYOUT_UNDEFINED + 1),
+ VK_IMAGE_LAYOUT_MAX_ENUM = 0x7FFFFFFF
+} VkImageLayout;
+
+typedef enum VkImageViewType {
+ VK_IMAGE_VIEW_TYPE_1D = 0,
+ VK_IMAGE_VIEW_TYPE_2D = 1,
+ VK_IMAGE_VIEW_TYPE_3D = 2,
+ VK_IMAGE_VIEW_TYPE_CUBE = 3,
+ VK_IMAGE_VIEW_TYPE_1D_ARRAY = 4,
+ VK_IMAGE_VIEW_TYPE_2D_ARRAY = 5,
+ VK_IMAGE_VIEW_TYPE_CUBE_ARRAY = 6,
+ VK_IMAGE_VIEW_TYPE_BEGIN_RANGE = VK_IMAGE_VIEW_TYPE_1D,
+ VK_IMAGE_VIEW_TYPE_END_RANGE = VK_IMAGE_VIEW_TYPE_CUBE_ARRAY,
+ VK_IMAGE_VIEW_TYPE_RANGE_SIZE = (VK_IMAGE_VIEW_TYPE_CUBE_ARRAY - VK_IMAGE_VIEW_TYPE_1D + 1),
+ VK_IMAGE_VIEW_TYPE_MAX_ENUM = 0x7FFFFFFF
+} VkImageViewType;
+
+typedef enum VkComponentSwizzle {
+ VK_COMPONENT_SWIZZLE_IDENTITY = 0,
+ VK_COMPONENT_SWIZZLE_ZERO = 1,
+ VK_COMPONENT_SWIZZLE_ONE = 2,
+ VK_COMPONENT_SWIZZLE_R = 3,
+ VK_COMPONENT_SWIZZLE_G = 4,
+ VK_COMPONENT_SWIZZLE_B = 5,
+ VK_COMPONENT_SWIZZLE_A = 6,
+ VK_COMPONENT_SWIZZLE_BEGIN_RANGE = VK_COMPONENT_SWIZZLE_IDENTITY,
+ VK_COMPONENT_SWIZZLE_END_RANGE = VK_COMPONENT_SWIZZLE_A,
+ VK_COMPONENT_SWIZZLE_RANGE_SIZE = (VK_COMPONENT_SWIZZLE_A - VK_COMPONENT_SWIZZLE_IDENTITY + 1),
+ VK_COMPONENT_SWIZZLE_MAX_ENUM = 0x7FFFFFFF
+} VkComponentSwizzle;
+
+typedef enum VkVertexInputRate {
+ VK_VERTEX_INPUT_RATE_VERTEX = 0,
+ VK_VERTEX_INPUT_RATE_INSTANCE = 1,
+ VK_VERTEX_INPUT_RATE_BEGIN_RANGE = VK_VERTEX_INPUT_RATE_VERTEX,
+ VK_VERTEX_INPUT_RATE_END_RANGE = VK_VERTEX_INPUT_RATE_INSTANCE,
+ VK_VERTEX_INPUT_RATE_RANGE_SIZE = (VK_VERTEX_INPUT_RATE_INSTANCE - VK_VERTEX_INPUT_RATE_VERTEX + 1),
+ VK_VERTEX_INPUT_RATE_MAX_ENUM = 0x7FFFFFFF
+} VkVertexInputRate;
+
+typedef enum VkPrimitiveTopology {
+ VK_PRIMITIVE_TOPOLOGY_POINT_LIST = 0,
+ VK_PRIMITIVE_TOPOLOGY_LINE_LIST = 1,
+ VK_PRIMITIVE_TOPOLOGY_LINE_STRIP = 2,
+ VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST = 3,
+ VK_PRIMITIVE_TOPOLOGY_TRIANGLE_STRIP = 4,
+ VK_PRIMITIVE_TOPOLOGY_TRIANGLE_FAN = 5,
+ VK_PRIMITIVE_TOPOLOGY_LINE_LIST_WITH_ADJACENCY = 6,
+ VK_PRIMITIVE_TOPOLOGY_LINE_STRIP_WITH_ADJACENCY = 7,
+ VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST_WITH_ADJACENCY = 8,
+ VK_PRIMITIVE_TOPOLOGY_TRIANGLE_STRIP_WITH_ADJACENCY = 9,
+ VK_PRIMITIVE_TOPOLOGY_PATCH_LIST = 10,
+ VK_PRIMITIVE_TOPOLOGY_BEGIN_RANGE = VK_PRIMITIVE_TOPOLOGY_POINT_LIST,
+ VK_PRIMITIVE_TOPOLOGY_END_RANGE = VK_PRIMITIVE_TOPOLOGY_PATCH_LIST,
+ VK_PRIMITIVE_TOPOLOGY_RANGE_SIZE = (VK_PRIMITIVE_TOPOLOGY_PATCH_LIST - VK_PRIMITIVE_TOPOLOGY_POINT_LIST + 1),
+ VK_PRIMITIVE_TOPOLOGY_MAX_ENUM = 0x7FFFFFFF
+} VkPrimitiveTopology;
+
+typedef enum VkPolygonMode {
+ VK_POLYGON_MODE_FILL = 0,
+ VK_POLYGON_MODE_LINE = 1,
+ VK_POLYGON_MODE_POINT = 2,
+ VK_POLYGON_MODE_BEGIN_RANGE = VK_POLYGON_MODE_FILL,
+ VK_POLYGON_MODE_END_RANGE = VK_POLYGON_MODE_POINT,
+ VK_POLYGON_MODE_RANGE_SIZE = (VK_POLYGON_MODE_POINT - VK_POLYGON_MODE_FILL + 1),
+ VK_POLYGON_MODE_MAX_ENUM = 0x7FFFFFFF
+} VkPolygonMode;
+
+typedef enum VkFrontFace {
+ VK_FRONT_FACE_COUNTER_CLOCKWISE = 0,
+ VK_FRONT_FACE_CLOCKWISE = 1,
+ VK_FRONT_FACE_BEGIN_RANGE = VK_FRONT_FACE_COUNTER_CLOCKWISE,
+ VK_FRONT_FACE_END_RANGE = VK_FRONT_FACE_CLOCKWISE,
+ VK_FRONT_FACE_RANGE_SIZE = (VK_FRONT_FACE_CLOCKWISE - VK_FRONT_FACE_COUNTER_CLOCKWISE + 1),
+ VK_FRONT_FACE_MAX_ENUM = 0x7FFFFFFF
+} VkFrontFace;
+
+typedef enum VkCompareOp {
+ VK_COMPARE_OP_NEVER = 0,
+ VK_COMPARE_OP_LESS = 1,
+ VK_COMPARE_OP_EQUAL = 2,
+ VK_COMPARE_OP_LESS_OR_EQUAL = 3,
+ VK_COMPARE_OP_GREATER = 4,
+ VK_COMPARE_OP_NOT_EQUAL = 5,
+ VK_COMPARE_OP_GREATER_OR_EQUAL = 6,
+ VK_COMPARE_OP_ALWAYS = 7,
+ VK_COMPARE_OP_BEGIN_RANGE = VK_COMPARE_OP_NEVER,
+ VK_COMPARE_OP_END_RANGE = VK_COMPARE_OP_ALWAYS,
+ VK_COMPARE_OP_RANGE_SIZE = (VK_COMPARE_OP_ALWAYS - VK_COMPARE_OP_NEVER + 1),
+ VK_COMPARE_OP_MAX_ENUM = 0x7FFFFFFF
+} VkCompareOp;
+
+typedef enum VkStencilOp {
+ VK_STENCIL_OP_KEEP = 0,
+ VK_STENCIL_OP_ZERO = 1,
+ VK_STENCIL_OP_REPLACE = 2,
+ VK_STENCIL_OP_INCREMENT_AND_CLAMP = 3,
+ VK_STENCIL_OP_DECREMENT_AND_CLAMP = 4,
+ VK_STENCIL_OP_INVERT = 5,
+ VK_STENCIL_OP_INCREMENT_AND_WRAP = 6,
+ VK_STENCIL_OP_DECREMENT_AND_WRAP = 7,
+ VK_STENCIL_OP_BEGIN_RANGE = VK_STENCIL_OP_KEEP,
+ VK_STENCIL_OP_END_RANGE = VK_STENCIL_OP_DECREMENT_AND_WRAP,
+ VK_STENCIL_OP_RANGE_SIZE = (VK_STENCIL_OP_DECREMENT_AND_WRAP - VK_STENCIL_OP_KEEP + 1),
+ VK_STENCIL_OP_MAX_ENUM = 0x7FFFFFFF
+} VkStencilOp;
+
+typedef enum VkLogicOp {
+ VK_LOGIC_OP_CLEAR = 0,
+ VK_LOGIC_OP_AND = 1,
+ VK_LOGIC_OP_AND_REVERSE = 2,
+ VK_LOGIC_OP_COPY = 3,
+ VK_LOGIC_OP_AND_INVERTED = 4,
+ VK_LOGIC_OP_NO_OP = 5,
+ VK_LOGIC_OP_XOR = 6,
+ VK_LOGIC_OP_OR = 7,
+ VK_LOGIC_OP_NOR = 8,
+ VK_LOGIC_OP_EQUIVALENT = 9,
+ VK_LOGIC_OP_INVERT = 10,
+ VK_LOGIC_OP_OR_REVERSE = 11,
+ VK_LOGIC_OP_COPY_INVERTED = 12,
+ VK_LOGIC_OP_OR_INVERTED = 13,
+ VK_LOGIC_OP_NAND = 14,
+ VK_LOGIC_OP_SET = 15,
+ VK_LOGIC_OP_BEGIN_RANGE = VK_LOGIC_OP_CLEAR,
+ VK_LOGIC_OP_END_RANGE = VK_LOGIC_OP_SET,
+ VK_LOGIC_OP_RANGE_SIZE = (VK_LOGIC_OP_SET - VK_LOGIC_OP_CLEAR + 1),
+ VK_LOGIC_OP_MAX_ENUM = 0x7FFFFFFF
+} VkLogicOp;
+
+typedef enum VkBlendFactor {
+ VK_BLEND_FACTOR_ZERO = 0,
+ VK_BLEND_FACTOR_ONE = 1,
+ VK_BLEND_FACTOR_SRC_COLOR = 2,
+ VK_BLEND_FACTOR_ONE_MINUS_SRC_COLOR = 3,
+ VK_BLEND_FACTOR_DST_COLOR = 4,
+ VK_BLEND_FACTOR_ONE_MINUS_DST_COLOR = 5,
+ VK_BLEND_FACTOR_SRC_ALPHA = 6,
+ VK_BLEND_FACTOR_ONE_MINUS_SRC_ALPHA = 7,
+ VK_BLEND_FACTOR_DST_ALPHA = 8,
+ VK_BLEND_FACTOR_ONE_MINUS_DST_ALPHA = 9,
+ VK_BLEND_FACTOR_CONSTANT_COLOR = 10,
+ VK_BLEND_FACTOR_ONE_MINUS_CONSTANT_COLOR = 11,
+ VK_BLEND_FACTOR_CONSTANT_ALPHA = 12,
+ VK_BLEND_FACTOR_ONE_MINUS_CONSTANT_ALPHA = 13,
+ VK_BLEND_FACTOR_SRC_ALPHA_SATURATE = 14,
+ VK_BLEND_FACTOR_SRC1_COLOR = 15,
+ VK_BLEND_FACTOR_ONE_MINUS_SRC1_COLOR = 16,
+ VK_BLEND_FACTOR_SRC1_ALPHA = 17,
+ VK_BLEND_FACTOR_ONE_MINUS_SRC1_ALPHA = 18,
+ VK_BLEND_FACTOR_BEGIN_RANGE = VK_BLEND_FACTOR_ZERO,
+ VK_BLEND_FACTOR_END_RANGE = VK_BLEND_FACTOR_ONE_MINUS_SRC1_ALPHA,
+ VK_BLEND_FACTOR_RANGE_SIZE = (VK_BLEND_FACTOR_ONE_MINUS_SRC1_ALPHA - VK_BLEND_FACTOR_ZERO + 1),
+ VK_BLEND_FACTOR_MAX_ENUM = 0x7FFFFFFF
+} VkBlendFactor;
+
+typedef enum VkBlendOp {
+ VK_BLEND_OP_ADD = 0,
+ VK_BLEND_OP_SUBTRACT = 1,
+ VK_BLEND_OP_REVERSE_SUBTRACT = 2,
+ VK_BLEND_OP_MIN = 3,
+ VK_BLEND_OP_MAX = 4,
+ VK_BLEND_OP_BEGIN_RANGE = VK_BLEND_OP_ADD,
+ VK_BLEND_OP_END_RANGE = VK_BLEND_OP_MAX,
+ VK_BLEND_OP_RANGE_SIZE = (VK_BLEND_OP_MAX - VK_BLEND_OP_ADD + 1),
+ VK_BLEND_OP_MAX_ENUM = 0x7FFFFFFF
+} VkBlendOp;
+
+typedef enum VkDynamicState {
+ VK_DYNAMIC_STATE_VIEWPORT = 0,
+ VK_DYNAMIC_STATE_SCISSOR = 1,
+ VK_DYNAMIC_STATE_LINE_WIDTH = 2,
+ VK_DYNAMIC_STATE_DEPTH_BIAS = 3,
+ VK_DYNAMIC_STATE_BLEND_CONSTANTS = 4,
+ VK_DYNAMIC_STATE_DEPTH_BOUNDS = 5,
+ VK_DYNAMIC_STATE_STENCIL_COMPARE_MASK = 6,
+ VK_DYNAMIC_STATE_STENCIL_WRITE_MASK = 7,
+ VK_DYNAMIC_STATE_STENCIL_REFERENCE = 8,
+ VK_DYNAMIC_STATE_BEGIN_RANGE = VK_DYNAMIC_STATE_VIEWPORT,
+ VK_DYNAMIC_STATE_END_RANGE = VK_DYNAMIC_STATE_STENCIL_REFERENCE,
+ VK_DYNAMIC_STATE_RANGE_SIZE = (VK_DYNAMIC_STATE_STENCIL_REFERENCE - VK_DYNAMIC_STATE_VIEWPORT + 1),
+ VK_DYNAMIC_STATE_MAX_ENUM = 0x7FFFFFFF
+} VkDynamicState;
+
+typedef enum VkFilter {
+ VK_FILTER_NEAREST = 0,
+ VK_FILTER_LINEAR = 1,
+ VK_FILTER_BEGIN_RANGE = VK_FILTER_NEAREST,
+ VK_FILTER_END_RANGE = VK_FILTER_LINEAR,
+ VK_FILTER_RANGE_SIZE = (VK_FILTER_LINEAR - VK_FILTER_NEAREST + 1),
+ VK_FILTER_MAX_ENUM = 0x7FFFFFFF
+} VkFilter;
+
+typedef enum VkSamplerMipmapMode {
+ VK_SAMPLER_MIPMAP_MODE_NEAREST = 0,
+ VK_SAMPLER_MIPMAP_MODE_LINEAR = 1,
+ VK_SAMPLER_MIPMAP_MODE_BEGIN_RANGE = VK_SAMPLER_MIPMAP_MODE_NEAREST,
+ VK_SAMPLER_MIPMAP_MODE_END_RANGE = VK_SAMPLER_MIPMAP_MODE_LINEAR,
+ VK_SAMPLER_MIPMAP_MODE_RANGE_SIZE = (VK_SAMPLER_MIPMAP_MODE_LINEAR - VK_SAMPLER_MIPMAP_MODE_NEAREST + 1),
+ VK_SAMPLER_MIPMAP_MODE_MAX_ENUM = 0x7FFFFFFF
+} VkSamplerMipmapMode;
+
+typedef enum VkSamplerAddressMode {
+ VK_SAMPLER_ADDRESS_MODE_REPEAT = 0,
+ VK_SAMPLER_ADDRESS_MODE_MIRRORED_REPEAT = 1,
+ VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE = 2,
+ VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_BORDER = 3,
+ VK_SAMPLER_ADDRESS_MODE_MIRROR_CLAMP_TO_EDGE = 4,
+ VK_SAMPLER_ADDRESS_MODE_BEGIN_RANGE = VK_SAMPLER_ADDRESS_MODE_REPEAT,
+ VK_SAMPLER_ADDRESS_MODE_END_RANGE = VK_SAMPLER_ADDRESS_MODE_MIRROR_CLAMP_TO_EDGE,
+ VK_SAMPLER_ADDRESS_MODE_RANGE_SIZE = (VK_SAMPLER_ADDRESS_MODE_MIRROR_CLAMP_TO_EDGE - VK_SAMPLER_ADDRESS_MODE_REPEAT + 1),
+ VK_SAMPLER_ADDRESS_MODE_MAX_ENUM = 0x7FFFFFFF
+} VkSamplerAddressMode;
+
+typedef enum VkBorderColor {
+ VK_BORDER_COLOR_FLOAT_TRANSPARENT_BLACK = 0,
+ VK_BORDER_COLOR_INT_TRANSPARENT_BLACK = 1,
+ VK_BORDER_COLOR_FLOAT_OPAQUE_BLACK = 2,
+ VK_BORDER_COLOR_INT_OPAQUE_BLACK = 3,
+ VK_BORDER_COLOR_FLOAT_OPAQUE_WHITE = 4,
+ VK_BORDER_COLOR_INT_OPAQUE_WHITE = 5,
+ VK_BORDER_COLOR_BEGIN_RANGE = VK_BORDER_COLOR_FLOAT_TRANSPARENT_BLACK,
+ VK_BORDER_COLOR_END_RANGE = VK_BORDER_COLOR_INT_OPAQUE_WHITE,
+ VK_BORDER_COLOR_RANGE_SIZE = (VK_BORDER_COLOR_INT_OPAQUE_WHITE - VK_BORDER_COLOR_FLOAT_TRANSPARENT_BLACK + 1),
+ VK_BORDER_COLOR_MAX_ENUM = 0x7FFFFFFF
+} VkBorderColor;
+
+typedef enum VkDescriptorType {
+ VK_DESCRIPTOR_TYPE_SAMPLER = 0,
+ VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER = 1,
+ VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE = 2,
+ VK_DESCRIPTOR_TYPE_STORAGE_IMAGE = 3,
+ VK_DESCRIPTOR_TYPE_UNIFORM_TEXEL_BUFFER = 4,
+ VK_DESCRIPTOR_TYPE_STORAGE_TEXEL_BUFFER = 5,
+ VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER = 6,
+ VK_DESCRIPTOR_TYPE_STORAGE_BUFFER = 7,
+ VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC = 8,
+ VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC = 9,
+ VK_DESCRIPTOR_TYPE_INPUT_ATTACHMENT = 10,
+ VK_DESCRIPTOR_TYPE_BEGIN_RANGE = VK_DESCRIPTOR_TYPE_SAMPLER,
+ VK_DESCRIPTOR_TYPE_END_RANGE = VK_DESCRIPTOR_TYPE_INPUT_ATTACHMENT,
+ VK_DESCRIPTOR_TYPE_RANGE_SIZE = (VK_DESCRIPTOR_TYPE_INPUT_ATTACHMENT - VK_DESCRIPTOR_TYPE_SAMPLER + 1),
+ VK_DESCRIPTOR_TYPE_MAX_ENUM = 0x7FFFFFFF
+} VkDescriptorType;
+
+typedef enum VkAttachmentLoadOp {
+ VK_ATTACHMENT_LOAD_OP_LOAD = 0,
+ VK_ATTACHMENT_LOAD_OP_CLEAR = 1,
+ VK_ATTACHMENT_LOAD_OP_DONT_CARE = 2,
+ VK_ATTACHMENT_LOAD_OP_BEGIN_RANGE = VK_ATTACHMENT_LOAD_OP_LOAD,
+ VK_ATTACHMENT_LOAD_OP_END_RANGE = VK_ATTACHMENT_LOAD_OP_DONT_CARE,
+ VK_ATTACHMENT_LOAD_OP_RANGE_SIZE = (VK_ATTACHMENT_LOAD_OP_DONT_CARE - VK_ATTACHMENT_LOAD_OP_LOAD + 1),
+ VK_ATTACHMENT_LOAD_OP_MAX_ENUM = 0x7FFFFFFF
+} VkAttachmentLoadOp;
+
+typedef enum VkAttachmentStoreOp {
+ VK_ATTACHMENT_STORE_OP_STORE = 0,
+ VK_ATTACHMENT_STORE_OP_DONT_CARE = 1,
+ VK_ATTACHMENT_STORE_OP_BEGIN_RANGE = VK_ATTACHMENT_STORE_OP_STORE,
+ VK_ATTACHMENT_STORE_OP_END_RANGE = VK_ATTACHMENT_STORE_OP_DONT_CARE,
+ VK_ATTACHMENT_STORE_OP_RANGE_SIZE = (VK_ATTACHMENT_STORE_OP_DONT_CARE - VK_ATTACHMENT_STORE_OP_STORE + 1),
+ VK_ATTACHMENT_STORE_OP_MAX_ENUM = 0x7FFFFFFF
+} VkAttachmentStoreOp;
+
+typedef enum VkPipelineBindPoint {
+ VK_PIPELINE_BIND_POINT_GRAPHICS = 0,
+ VK_PIPELINE_BIND_POINT_COMPUTE = 1,
+ VK_PIPELINE_BIND_POINT_BEGIN_RANGE = VK_PIPELINE_BIND_POINT_GRAPHICS,
+ VK_PIPELINE_BIND_POINT_END_RANGE = VK_PIPELINE_BIND_POINT_COMPUTE,
+ VK_PIPELINE_BIND_POINT_RANGE_SIZE = (VK_PIPELINE_BIND_POINT_COMPUTE - VK_PIPELINE_BIND_POINT_GRAPHICS + 1),
+ VK_PIPELINE_BIND_POINT_MAX_ENUM = 0x7FFFFFFF
+} VkPipelineBindPoint;
+
+typedef enum VkCommandBufferLevel {
+ VK_COMMAND_BUFFER_LEVEL_PRIMARY = 0,
+ VK_COMMAND_BUFFER_LEVEL_SECONDARY = 1,
+ VK_COMMAND_BUFFER_LEVEL_BEGIN_RANGE = VK_COMMAND_BUFFER_LEVEL_PRIMARY,
+ VK_COMMAND_BUFFER_LEVEL_END_RANGE = VK_COMMAND_BUFFER_LEVEL_SECONDARY,
+ VK_COMMAND_BUFFER_LEVEL_RANGE_SIZE = (VK_COMMAND_BUFFER_LEVEL_SECONDARY - VK_COMMAND_BUFFER_LEVEL_PRIMARY + 1),
+ VK_COMMAND_BUFFER_LEVEL_MAX_ENUM = 0x7FFFFFFF
+} VkCommandBufferLevel;
+
+typedef enum VkIndexType {
+ VK_INDEX_TYPE_UINT16 = 0,
+ VK_INDEX_TYPE_UINT32 = 1,
+ VK_INDEX_TYPE_BEGIN_RANGE = VK_INDEX_TYPE_UINT16,
+ VK_INDEX_TYPE_END_RANGE = VK_INDEX_TYPE_UINT32,
+ VK_INDEX_TYPE_RANGE_SIZE = (VK_INDEX_TYPE_UINT32 - VK_INDEX_TYPE_UINT16 + 1),
+ VK_INDEX_TYPE_MAX_ENUM = 0x7FFFFFFF
+} VkIndexType;
+
+typedef enum VkSubpassContents {
+ VK_SUBPASS_CONTENTS_INLINE = 0,
+ VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS = 1,
+ VK_SUBPASS_CONTENTS_BEGIN_RANGE = VK_SUBPASS_CONTENTS_INLINE,
+ VK_SUBPASS_CONTENTS_END_RANGE = VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS,
+ VK_SUBPASS_CONTENTS_RANGE_SIZE = (VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS - VK_SUBPASS_CONTENTS_INLINE + 1),
+ VK_SUBPASS_CONTENTS_MAX_ENUM = 0x7FFFFFFF
+} VkSubpassContents;
+
+typedef VkFlags VkInstanceCreateFlags;
+
+typedef enum VkFormatFeatureFlagBits {
+ VK_FORMAT_FEATURE_SAMPLED_IMAGE_BIT = 0x00000001,
+ VK_FORMAT_FEATURE_STORAGE_IMAGE_BIT = 0x00000002,
+ VK_FORMAT_FEATURE_STORAGE_IMAGE_ATOMIC_BIT = 0x00000004,
+ VK_FORMAT_FEATURE_UNIFORM_TEXEL_BUFFER_BIT = 0x00000008,
+ VK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_BIT = 0x00000010,
+ VK_FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_ATOMIC_BIT = 0x00000020,
+ VK_FORMAT_FEATURE_VERTEX_BUFFER_BIT = 0x00000040,
+ VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BIT = 0x00000080,
+ VK_FORMAT_FEATURE_COLOR_ATTACHMENT_BLEND_BIT = 0x00000100,
+ VK_FORMAT_FEATURE_DEPTH_STENCIL_ATTACHMENT_BIT = 0x00000200,
+ VK_FORMAT_FEATURE_BLIT_SRC_BIT = 0x00000400,
+ VK_FORMAT_FEATURE_BLIT_DST_BIT = 0x00000800,
+ VK_FORMAT_FEATURE_SAMPLED_IMAGE_FILTER_LINEAR_BIT = 0x00001000,
+} VkFormatFeatureFlagBits;
+typedef VkFlags VkFormatFeatureFlags;
+
+typedef enum VkImageUsageFlagBits {
+ VK_IMAGE_USAGE_TRANSFER_SRC_BIT = 0x00000001,
+ VK_IMAGE_USAGE_TRANSFER_DST_BIT = 0x00000002,
+ VK_IMAGE_USAGE_SAMPLED_BIT = 0x00000004,
+ VK_IMAGE_USAGE_STORAGE_BIT = 0x00000008,
+ VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT = 0x00000010,
+ VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT = 0x00000020,
+ VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT = 0x00000040,
+ VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT = 0x00000080,
+} VkImageUsageFlagBits;
+typedef VkFlags VkImageUsageFlags;
+
+typedef enum VkImageCreateFlagBits {
+ VK_IMAGE_CREATE_SPARSE_BINDING_BIT = 0x00000001,
+ VK_IMAGE_CREATE_SPARSE_RESIDENCY_BIT = 0x00000002,
+ VK_IMAGE_CREATE_SPARSE_ALIASED_BIT = 0x00000004,
+ VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT = 0x00000008,
+ VK_IMAGE_CREATE_CUBE_COMPATIBLE_BIT = 0x00000010,
+} VkImageCreateFlagBits;
+typedef VkFlags VkImageCreateFlags;
+
+typedef enum VkSampleCountFlagBits {
+ VK_SAMPLE_COUNT_1_BIT = 0x00000001,
+ VK_SAMPLE_COUNT_2_BIT = 0x00000002,
+ VK_SAMPLE_COUNT_4_BIT = 0x00000004,
+ VK_SAMPLE_COUNT_8_BIT = 0x00000008,
+ VK_SAMPLE_COUNT_16_BIT = 0x00000010,
+ VK_SAMPLE_COUNT_32_BIT = 0x00000020,
+ VK_SAMPLE_COUNT_64_BIT = 0x00000040,
+} VkSampleCountFlagBits;
+typedef VkFlags VkSampleCountFlags;
+
+typedef enum VkQueueFlagBits {
+ VK_QUEUE_GRAPHICS_BIT = 0x00000001,
+ VK_QUEUE_COMPUTE_BIT = 0x00000002,
+ VK_QUEUE_TRANSFER_BIT = 0x00000004,
+ VK_QUEUE_SPARSE_BINDING_BIT = 0x00000008,
+} VkQueueFlagBits;
+typedef VkFlags VkQueueFlags;
+
+typedef enum VkMemoryPropertyFlagBits {
+ VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT = 0x00000001,
+ VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT = 0x00000002,
+ VK_MEMORY_PROPERTY_HOST_COHERENT_BIT = 0x00000004,
+ VK_MEMORY_PROPERTY_HOST_CACHED_BIT = 0x00000008,
+ VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT = 0x00000010,
+} VkMemoryPropertyFlagBits;
+typedef VkFlags VkMemoryPropertyFlags;
+
+typedef enum VkMemoryHeapFlagBits {
+ VK_MEMORY_HEAP_DEVICE_LOCAL_BIT = 0x00000001,
+} VkMemoryHeapFlagBits;
+typedef VkFlags VkMemoryHeapFlags;
+typedef VkFlags VkDeviceCreateFlags;
+typedef VkFlags VkDeviceQueueCreateFlags;
+
+typedef enum VkPipelineStageFlagBits {
+ VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT = 0x00000001,
+ VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT = 0x00000002,
+ VK_PIPELINE_STAGE_VERTEX_INPUT_BIT = 0x00000004,
+ VK_PIPELINE_STAGE_VERTEX_SHADER_BIT = 0x00000008,
+ VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT = 0x00000010,
+ VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT = 0x00000020,
+ VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT = 0x00000040,
+ VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT = 0x00000080,
+ VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT = 0x00000100,
+ VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT = 0x00000200,
+ VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT = 0x00000400,
+ VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT = 0x00000800,
+ VK_PIPELINE_STAGE_TRANSFER_BIT = 0x00001000,
+ VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT = 0x00002000,
+ VK_PIPELINE_STAGE_HOST_BIT = 0x00004000,
+ VK_PIPELINE_STAGE_ALL_GRAPHICS_BIT = 0x00008000,
+ VK_PIPELINE_STAGE_ALL_COMMANDS_BIT = 0x00010000,
+} VkPipelineStageFlagBits;
+typedef VkFlags VkPipelineStageFlags;
+typedef VkFlags VkMemoryMapFlags;
+
+typedef enum VkImageAspectFlagBits {
+ VK_IMAGE_ASPECT_COLOR_BIT = 0x00000001,
+ VK_IMAGE_ASPECT_DEPTH_BIT = 0x00000002,
+ VK_IMAGE_ASPECT_STENCIL_BIT = 0x00000004,
+ VK_IMAGE_ASPECT_METADATA_BIT = 0x00000008,
+} VkImageAspectFlagBits;
+typedef VkFlags VkImageAspectFlags;
+
+typedef enum VkSparseImageFormatFlagBits {
+ VK_SPARSE_IMAGE_FORMAT_SINGLE_MIPTAIL_BIT = 0x00000001,
+ VK_SPARSE_IMAGE_FORMAT_ALIGNED_MIP_SIZE_BIT = 0x00000002,
+ VK_SPARSE_IMAGE_FORMAT_NONSTANDARD_BLOCK_SIZE_BIT = 0x00000004,
+} VkSparseImageFormatFlagBits;
+typedef VkFlags VkSparseImageFormatFlags;
+
+typedef enum VkSparseMemoryBindFlagBits {
+ VK_SPARSE_MEMORY_BIND_METADATA_BIT = 0x00000001,
+} VkSparseMemoryBindFlagBits;
+typedef VkFlags VkSparseMemoryBindFlags;
+
+typedef enum VkFenceCreateFlagBits {
+ VK_FENCE_CREATE_SIGNALED_BIT = 0x00000001,
+} VkFenceCreateFlagBits;
+typedef VkFlags VkFenceCreateFlags;
+typedef VkFlags VkSemaphoreCreateFlags;
+typedef VkFlags VkEventCreateFlags;
+typedef VkFlags VkQueryPoolCreateFlags;
+
+typedef enum VkQueryPipelineStatisticFlagBits {
+ VK_QUERY_PIPELINE_STATISTIC_INPUT_ASSEMBLY_VERTICES_BIT = 0x00000001,
+ VK_QUERY_PIPELINE_STATISTIC_INPUT_ASSEMBLY_PRIMITIVES_BIT = 0x00000002,
+ VK_QUERY_PIPELINE_STATISTIC_VERTEX_SHADER_INVOCATIONS_BIT = 0x00000004,
+ VK_QUERY_PIPELINE_STATISTIC_GEOMETRY_SHADER_INVOCATIONS_BIT = 0x00000008,
+ VK_QUERY_PIPELINE_STATISTIC_GEOMETRY_SHADER_PRIMITIVES_BIT = 0x00000010,
+ VK_QUERY_PIPELINE_STATISTIC_CLIPPING_INVOCATIONS_BIT = 0x00000020,
+ VK_QUERY_PIPELINE_STATISTIC_CLIPPING_PRIMITIVES_BIT = 0x00000040,
+ VK_QUERY_PIPELINE_STATISTIC_FRAGMENT_SHADER_INVOCATIONS_BIT = 0x00000080,
+ VK_QUERY_PIPELINE_STATISTIC_TESSELLATION_CONTROL_SHADER_PATCHES_BIT = 0x00000100,
+ VK_QUERY_PIPELINE_STATISTIC_TESSELLATION_EVALUATION_SHADER_INVOCATIONS_BIT = 0x00000200,
+ VK_QUERY_PIPELINE_STATISTIC_COMPUTE_SHADER_INVOCATIONS_BIT = 0x00000400,
+} VkQueryPipelineStatisticFlagBits;
+typedef VkFlags VkQueryPipelineStatisticFlags;
+
+typedef enum VkQueryResultFlagBits {
+ VK_QUERY_RESULT_64_BIT = 0x00000001,
+ VK_QUERY_RESULT_WAIT_BIT = 0x00000002,
+ VK_QUERY_RESULT_WITH_AVAILABILITY_BIT = 0x00000004,
+ VK_QUERY_RESULT_PARTIAL_BIT = 0x00000008,
+} VkQueryResultFlagBits;
+typedef VkFlags VkQueryResultFlags;
+
+typedef enum VkBufferCreateFlagBits {
+ VK_BUFFER_CREATE_SPARSE_BINDING_BIT = 0x00000001,
+ VK_BUFFER_CREATE_SPARSE_RESIDENCY_BIT = 0x00000002,
+ VK_BUFFER_CREATE_SPARSE_ALIASED_BIT = 0x00000004,
+} VkBufferCreateFlagBits;
+typedef VkFlags VkBufferCreateFlags;
+
+typedef enum VkBufferUsageFlagBits {
+ VK_BUFFER_USAGE_TRANSFER_SRC_BIT = 0x00000001,
+ VK_BUFFER_USAGE_TRANSFER_DST_BIT = 0x00000002,
+ VK_BUFFER_USAGE_UNIFORM_TEXEL_BUFFER_BIT = 0x00000004,
+ VK_BUFFER_USAGE_STORAGE_TEXEL_BUFFER_BIT = 0x00000008,
+ VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT = 0x00000010,
+ VK_BUFFER_USAGE_STORAGE_BUFFER_BIT = 0x00000020,
+ VK_BUFFER_USAGE_INDEX_BUFFER_BIT = 0x00000040,
+ VK_BUFFER_USAGE_VERTEX_BUFFER_BIT = 0x00000080,
+ VK_BUFFER_USAGE_INDIRECT_BUFFER_BIT = 0x00000100,
+} VkBufferUsageFlagBits;
+typedef VkFlags VkBufferUsageFlags;
+typedef VkFlags VkBufferViewCreateFlags;
+typedef VkFlags VkImageViewCreateFlags;
+typedef VkFlags VkShaderModuleCreateFlags;
+typedef VkFlags VkPipelineCacheCreateFlags;
+
+typedef enum VkPipelineCreateFlagBits {
+ VK_PIPELINE_CREATE_DISABLE_OPTIMIZATION_BIT = 0x00000001,
+ VK_PIPELINE_CREATE_ALLOW_DERIVATIVES_BIT = 0x00000002,
+ VK_PIPELINE_CREATE_DERIVATIVE_BIT = 0x00000004,
+} VkPipelineCreateFlagBits;
+typedef VkFlags VkPipelineCreateFlags;
+typedef VkFlags VkPipelineShaderStageCreateFlags;
+
+typedef enum VkShaderStageFlagBits {
+ VK_SHADER_STAGE_VERTEX_BIT = 0x00000001,
+ VK_SHADER_STAGE_TESSELLATION_CONTROL_BIT = 0x00000002,
+ VK_SHADER_STAGE_TESSELLATION_EVALUATION_BIT = 0x00000004,
+ VK_SHADER_STAGE_GEOMETRY_BIT = 0x00000008,
+ VK_SHADER_STAGE_FRAGMENT_BIT = 0x00000010,
+ VK_SHADER_STAGE_COMPUTE_BIT = 0x00000020,
+ VK_SHADER_STAGE_ALL_GRAPHICS = 0x1F,
+ VK_SHADER_STAGE_ALL = 0x7FFFFFFF,
+} VkShaderStageFlagBits;
+typedef VkFlags VkPipelineVertexInputStateCreateFlags;
+typedef VkFlags VkPipelineInputAssemblyStateCreateFlags;
+typedef VkFlags VkPipelineTessellationStateCreateFlags;
+typedef VkFlags VkPipelineViewportStateCreateFlags;
+typedef VkFlags VkPipelineRasterizationStateCreateFlags;
+
+typedef enum VkCullModeFlagBits {
+ VK_CULL_MODE_NONE = 0,
+ VK_CULL_MODE_FRONT_BIT = 0x00000001,
+ VK_CULL_MODE_BACK_BIT = 0x00000002,
+ VK_CULL_MODE_FRONT_AND_BACK = 0x3,
+} VkCullModeFlagBits;
+typedef VkFlags VkCullModeFlags;
+typedef VkFlags VkPipelineMultisampleStateCreateFlags;
+typedef VkFlags VkPipelineDepthStencilStateCreateFlags;
+typedef VkFlags VkPipelineColorBlendStateCreateFlags;
+
+typedef enum VkColorComponentFlagBits {
+ VK_COLOR_COMPONENT_R_BIT = 0x00000001,
+ VK_COLOR_COMPONENT_G_BIT = 0x00000002,
+ VK_COLOR_COMPONENT_B_BIT = 0x00000004,
+ VK_COLOR_COMPONENT_A_BIT = 0x00000008,
+} VkColorComponentFlagBits;
+typedef VkFlags VkColorComponentFlags;
+typedef VkFlags VkPipelineDynamicStateCreateFlags;
+typedef VkFlags VkPipelineLayoutCreateFlags;
+typedef VkFlags VkShaderStageFlags;
+typedef VkFlags VkSamplerCreateFlags;
+typedef VkFlags VkDescriptorSetLayoutCreateFlags;
+
+typedef enum VkDescriptorPoolCreateFlagBits {
+ VK_DESCRIPTOR_POOL_CREATE_FREE_DESCRIPTOR_SET_BIT = 0x00000001,
+} VkDescriptorPoolCreateFlagBits;
+typedef VkFlags VkDescriptorPoolCreateFlags;
+typedef VkFlags VkDescriptorPoolResetFlags;
+typedef VkFlags VkFramebufferCreateFlags;
+typedef VkFlags VkRenderPassCreateFlags;
+
+typedef enum VkAttachmentDescriptionFlagBits {
+ VK_ATTACHMENT_DESCRIPTION_MAY_ALIAS_BIT = 0x00000001,
+} VkAttachmentDescriptionFlagBits;
+typedef VkFlags VkAttachmentDescriptionFlags;
+typedef VkFlags VkSubpassDescriptionFlags;
+
+typedef enum VkAccessFlagBits {
+ VK_ACCESS_INDIRECT_COMMAND_READ_BIT = 0x00000001,
+ VK_ACCESS_INDEX_READ_BIT = 0x00000002,
+ VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT = 0x00000004,
+ VK_ACCESS_UNIFORM_READ_BIT = 0x00000008,
+ VK_ACCESS_INPUT_ATTACHMENT_READ_BIT = 0x00000010,
+ VK_ACCESS_SHADER_READ_BIT = 0x00000020,
+ VK_ACCESS_SHADER_WRITE_BIT = 0x00000040,
+ VK_ACCESS_COLOR_ATTACHMENT_READ_BIT = 0x00000080,
+ VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT = 0x00000100,
+ VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT = 0x00000200,
+ VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT = 0x00000400,
+ VK_ACCESS_TRANSFER_READ_BIT = 0x00000800,
+ VK_ACCESS_TRANSFER_WRITE_BIT = 0x00001000,
+ VK_ACCESS_HOST_READ_BIT = 0x00002000,
+ VK_ACCESS_HOST_WRITE_BIT = 0x00004000,
+ VK_ACCESS_MEMORY_READ_BIT = 0x00008000,
+ VK_ACCESS_MEMORY_WRITE_BIT = 0x00010000,
+} VkAccessFlagBits;
+typedef VkFlags VkAccessFlags;
+
+typedef enum VkDependencyFlagBits {
+ VK_DEPENDENCY_BY_REGION_BIT = 0x00000001,
+} VkDependencyFlagBits;
+typedef VkFlags VkDependencyFlags;
+
+typedef enum VkCommandPoolCreateFlagBits {
+ VK_COMMAND_POOL_CREATE_TRANSIENT_BIT = 0x00000001,
+ VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT = 0x00000002,
+} VkCommandPoolCreateFlagBits;
+typedef VkFlags VkCommandPoolCreateFlags;
+
+typedef enum VkCommandPoolResetFlagBits {
+ VK_COMMAND_POOL_RESET_RELEASE_RESOURCES_BIT = 0x00000001,
+} VkCommandPoolResetFlagBits;
+typedef VkFlags VkCommandPoolResetFlags;
+
+typedef enum VkCommandBufferUsageFlagBits {
+ VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT = 0x00000001,
+ VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT = 0x00000002,
+ VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT = 0x00000004,
+} VkCommandBufferUsageFlagBits;
+typedef VkFlags VkCommandBufferUsageFlags;
+
+typedef enum VkQueryControlFlagBits {
+ VK_QUERY_CONTROL_PRECISE_BIT = 0x00000001,
+} VkQueryControlFlagBits;
+typedef VkFlags VkQueryControlFlags;
+
+typedef enum VkCommandBufferResetFlagBits {
+ VK_COMMAND_BUFFER_RESET_RELEASE_RESOURCES_BIT = 0x00000001,
+} VkCommandBufferResetFlagBits;
+typedef VkFlags VkCommandBufferResetFlags;
+
+typedef enum VkStencilFaceFlagBits {
+ VK_STENCIL_FACE_FRONT_BIT = 0x00000001,
+ VK_STENCIL_FACE_BACK_BIT = 0x00000002,
+ VK_STENCIL_FRONT_AND_BACK = 0x3,
+} VkStencilFaceFlagBits;
+typedef VkFlags VkStencilFaceFlags;
+
+typedef void* (VKAPI_PTR *PFN_vkAllocationFunction)(
+ void* pUserData,
+ size_t size,
+ size_t alignment,
+ VkSystemAllocationScope allocationScope);
+
+typedef void* (VKAPI_PTR *PFN_vkReallocationFunction)(
+ void* pUserData,
+ void* pOriginal,
+ size_t size,
+ size_t alignment,
+ VkSystemAllocationScope allocationScope);
+
+typedef void (VKAPI_PTR *PFN_vkFreeFunction)(
+ void* pUserData,
+ void* pMemory);
+
+typedef void (VKAPI_PTR *PFN_vkInternalAllocationNotification)(
+ void* pUserData,
+ size_t size,
+ VkInternalAllocationType allocationType,
+ VkSystemAllocationScope allocationScope);
+
+typedef void (VKAPI_PTR *PFN_vkInternalFreeNotification)(
+ void* pUserData,
+ size_t size,
+ VkInternalAllocationType allocationType,
+ VkSystemAllocationScope allocationScope);
+
+typedef void (VKAPI_PTR *PFN_vkVoidFunction)(void);
+
+typedef struct VkApplicationInfo {
+ VkStructureType sType;
+ const void* pNext;
+ const char* pApplicationName;
+ uint32_t applicationVersion;
+ const char* pEngineName;
+ uint32_t engineVersion;
+ uint32_t apiVersion;
+} VkApplicationInfo;
+
+typedef struct VkInstanceCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkInstanceCreateFlags flags;
+ const VkApplicationInfo* pApplicationInfo;
+ uint32_t enabledLayerCount;
+ const char* const* ppEnabledLayerNames;
+ uint32_t enabledExtensionCount;
+ const char* const* ppEnabledExtensionNames;
+} VkInstanceCreateInfo;
+
+typedef struct VkAllocationCallbacks {
+ void* pUserData;
+ PFN_vkAllocationFunction pfnAllocation;
+ PFN_vkReallocationFunction pfnReallocation;
+ PFN_vkFreeFunction pfnFree;
+ PFN_vkInternalAllocationNotification pfnInternalAllocation;
+ PFN_vkInternalFreeNotification pfnInternalFree;
+} VkAllocationCallbacks;
+
+typedef struct VkPhysicalDeviceFeatures {
+ VkBool32 robustBufferAccess;
+ VkBool32 fullDrawIndexUint32;
+ VkBool32 imageCubeArray;
+ VkBool32 independentBlend;
+ VkBool32 geometryShader;
+ VkBool32 tessellationShader;
+ VkBool32 sampleRateShading;
+ VkBool32 dualSrcBlend;
+ VkBool32 logicOp;
+ VkBool32 multiDrawIndirect;
+ VkBool32 drawIndirectFirstInstance;
+ VkBool32 depthClamp;
+ VkBool32 depthBiasClamp;
+ VkBool32 fillModeNonSolid;
+ VkBool32 depthBounds;
+ VkBool32 wideLines;
+ VkBool32 largePoints;
+ VkBool32 alphaToOne;
+ VkBool32 multiViewport;
+ VkBool32 samplerAnisotropy;
+ VkBool32 textureCompressionETC2;
+ VkBool32 textureCompressionASTC_LDR;
+ VkBool32 textureCompressionBC;
+ VkBool32 occlusionQueryPrecise;
+ VkBool32 pipelineStatisticsQuery;
+ VkBool32 vertexPipelineStoresAndAtomics;
+ VkBool32 fragmentStoresAndAtomics;
+ VkBool32 shaderTessellationAndGeometryPointSize;
+ VkBool32 shaderImageGatherExtended;
+ VkBool32 shaderStorageImageExtendedFormats;
+ VkBool32 shaderStorageImageMultisample;
+ VkBool32 shaderStorageImageReadWithoutFormat;
+ VkBool32 shaderStorageImageWriteWithoutFormat;
+ VkBool32 shaderUniformBufferArrayDynamicIndexing;
+ VkBool32 shaderSampledImageArrayDynamicIndexing;
+ VkBool32 shaderStorageBufferArrayDynamicIndexing;
+ VkBool32 shaderStorageImageArrayDynamicIndexing;
+ VkBool32 shaderClipDistance;
+ VkBool32 shaderCullDistance;
+ VkBool32 shaderFloat64;
+ VkBool32 shaderInt64;
+ VkBool32 shaderInt16;
+ VkBool32 shaderResourceResidency;
+ VkBool32 shaderResourceMinLod;
+ VkBool32 sparseBinding;
+ VkBool32 sparseResidencyBuffer;
+ VkBool32 sparseResidencyImage2D;
+ VkBool32 sparseResidencyImage3D;
+ VkBool32 sparseResidency2Samples;
+ VkBool32 sparseResidency4Samples;
+ VkBool32 sparseResidency8Samples;
+ VkBool32 sparseResidency16Samples;
+ VkBool32 sparseResidencyAliased;
+ VkBool32 variableMultisampleRate;
+ VkBool32 inheritedQueries;
+} VkPhysicalDeviceFeatures;
+
+typedef struct VkFormatProperties {
+ VkFormatFeatureFlags linearTilingFeatures;
+ VkFormatFeatureFlags optimalTilingFeatures;
+ VkFormatFeatureFlags bufferFeatures;
+} VkFormatProperties;
+
+typedef struct VkExtent3D {
+ uint32_t width;
+ uint32_t height;
+ uint32_t depth;
+} VkExtent3D;
+
+typedef struct VkImageFormatProperties {
+ VkExtent3D maxExtent;
+ uint32_t maxMipLevels;
+ uint32_t maxArrayLayers;
+ VkSampleCountFlags sampleCounts;
+ VkDeviceSize maxResourceSize;
+} VkImageFormatProperties;
+
+typedef struct VkPhysicalDeviceLimits {
+ uint32_t maxImageDimension1D;
+ uint32_t maxImageDimension2D;
+ uint32_t maxImageDimension3D;
+ uint32_t maxImageDimensionCube;
+ uint32_t maxImageArrayLayers;
+ uint32_t maxTexelBufferElements;
+ uint32_t maxUniformBufferRange;
+ uint32_t maxStorageBufferRange;
+ uint32_t maxPushConstantsSize;
+ uint32_t maxMemoryAllocationCount;
+ uint32_t maxSamplerAllocationCount;
+ VkDeviceSize bufferImageGranularity;
+ VkDeviceSize sparseAddressSpaceSize;
+ uint32_t maxBoundDescriptorSets;
+ uint32_t maxPerStageDescriptorSamplers;
+ uint32_t maxPerStageDescriptorUniformBuffers;
+ uint32_t maxPerStageDescriptorStorageBuffers;
+ uint32_t maxPerStageDescriptorSampledImages;
+ uint32_t maxPerStageDescriptorStorageImages;
+ uint32_t maxPerStageDescriptorInputAttachments;
+ uint32_t maxPerStageResources;
+ uint32_t maxDescriptorSetSamplers;
+ uint32_t maxDescriptorSetUniformBuffers;
+ uint32_t maxDescriptorSetUniformBuffersDynamic;
+ uint32_t maxDescriptorSetStorageBuffers;
+ uint32_t maxDescriptorSetStorageBuffersDynamic;
+ uint32_t maxDescriptorSetSampledImages;
+ uint32_t maxDescriptorSetStorageImages;
+ uint32_t maxDescriptorSetInputAttachments;
+ uint32_t maxVertexInputAttributes;
+ uint32_t maxVertexInputBindings;
+ uint32_t maxVertexInputAttributeOffset;
+ uint32_t maxVertexInputBindingStride;
+ uint32_t maxVertexOutputComponents;
+ uint32_t maxTessellationGenerationLevel;
+ uint32_t maxTessellationPatchSize;
+ uint32_t maxTessellationControlPerVertexInputComponents;
+ uint32_t maxTessellationControlPerVertexOutputComponents;
+ uint32_t maxTessellationControlPerPatchOutputComponents;
+ uint32_t maxTessellationControlTotalOutputComponents;
+ uint32_t maxTessellationEvaluationInputComponents;
+ uint32_t maxTessellationEvaluationOutputComponents;
+ uint32_t maxGeometryShaderInvocations;
+ uint32_t maxGeometryInputComponents;
+ uint32_t maxGeometryOutputComponents;
+ uint32_t maxGeometryOutputVertices;
+ uint32_t maxGeometryTotalOutputComponents;
+ uint32_t maxFragmentInputComponents;
+ uint32_t maxFragmentOutputAttachments;
+ uint32_t maxFragmentDualSrcAttachments;
+ uint32_t maxFragmentCombinedOutputResources;
+ uint32_t maxComputeSharedMemorySize;
+ uint32_t maxComputeWorkGroupCount[3];
+ uint32_t maxComputeWorkGroupInvocations;
+ uint32_t maxComputeWorkGroupSize[3];
+ uint32_t subPixelPrecisionBits;
+ uint32_t subTexelPrecisionBits;
+ uint32_t mipmapPrecisionBits;
+ uint32_t maxDrawIndexedIndexValue;
+ uint32_t maxDrawIndirectCount;
+ float maxSamplerLodBias;
+ float maxSamplerAnisotropy;
+ uint32_t maxViewports;
+ uint32_t maxViewportDimensions[2];
+ float viewportBoundsRange[2];
+ uint32_t viewportSubPixelBits;
+ size_t minMemoryMapAlignment;
+ VkDeviceSize minTexelBufferOffsetAlignment;
+ VkDeviceSize minUniformBufferOffsetAlignment;
+ VkDeviceSize minStorageBufferOffsetAlignment;
+ int32_t minTexelOffset;
+ uint32_t maxTexelOffset;
+ int32_t minTexelGatherOffset;
+ uint32_t maxTexelGatherOffset;
+ float minInterpolationOffset;
+ float maxInterpolationOffset;
+ uint32_t subPixelInterpolationOffsetBits;
+ uint32_t maxFramebufferWidth;
+ uint32_t maxFramebufferHeight;
+ uint32_t maxFramebufferLayers;
+ VkSampleCountFlags framebufferColorSampleCounts;
+ VkSampleCountFlags framebufferDepthSampleCounts;
+ VkSampleCountFlags framebufferStencilSampleCounts;
+ VkSampleCountFlags framebufferNoAttachmentsSampleCounts;
+ uint32_t maxColorAttachments;
+ VkSampleCountFlags sampledImageColorSampleCounts;
+ VkSampleCountFlags sampledImageIntegerSampleCounts;
+ VkSampleCountFlags sampledImageDepthSampleCounts;
+ VkSampleCountFlags sampledImageStencilSampleCounts;
+ VkSampleCountFlags storageImageSampleCounts;
+ uint32_t maxSampleMaskWords;
+ VkBool32 timestampComputeAndGraphics;
+ float timestampPeriod;
+ uint32_t maxClipDistances;
+ uint32_t maxCullDistances;
+ uint32_t maxCombinedClipAndCullDistances;
+ uint32_t discreteQueuePriorities;
+ float pointSizeRange[2];
+ float lineWidthRange[2];
+ float pointSizeGranularity;
+ float lineWidthGranularity;
+ VkBool32 strictLines;
+ VkBool32 standardSampleLocations;
+ VkDeviceSize optimalBufferCopyOffsetAlignment;
+ VkDeviceSize optimalBufferCopyRowPitchAlignment;
+ VkDeviceSize nonCoherentAtomSize;
+} VkPhysicalDeviceLimits;
+
+typedef struct VkPhysicalDeviceSparseProperties {
+ VkBool32 residencyStandard2DBlockShape;
+ VkBool32 residencyStandard2DMultisampleBlockShape;
+ VkBool32 residencyStandard3DBlockShape;
+ VkBool32 residencyAlignedMipSize;
+ VkBool32 residencyNonResidentStrict;
+} VkPhysicalDeviceSparseProperties;
+
+typedef struct VkPhysicalDeviceProperties {
+ uint32_t apiVersion;
+ uint32_t driverVersion;
+ uint32_t vendorID;
+ uint32_t deviceID;
+ VkPhysicalDeviceType deviceType;
+ char deviceName[VK_MAX_PHYSICAL_DEVICE_NAME_SIZE];
+ uint8_t pipelineCacheUUID[VK_UUID_SIZE];
+ VkPhysicalDeviceLimits limits;
+ VkPhysicalDeviceSparseProperties sparseProperties;
+} VkPhysicalDeviceProperties;
+
+typedef struct VkQueueFamilyProperties {
+ VkQueueFlags queueFlags;
+ uint32_t queueCount;
+ uint32_t timestampValidBits;
+ VkExtent3D minImageTransferGranularity;
+} VkQueueFamilyProperties;
+
+typedef struct VkMemoryType {
+ VkMemoryPropertyFlags propertyFlags;
+ uint32_t heapIndex;
+} VkMemoryType;
+
+typedef struct VkMemoryHeap {
+ VkDeviceSize size;
+ VkMemoryHeapFlags flags;
+} VkMemoryHeap;
+
+typedef struct VkPhysicalDeviceMemoryProperties {
+ uint32_t memoryTypeCount;
+ VkMemoryType memoryTypes[VK_MAX_MEMORY_TYPES];
+ uint32_t memoryHeapCount;
+ VkMemoryHeap memoryHeaps[VK_MAX_MEMORY_HEAPS];
+} VkPhysicalDeviceMemoryProperties;
+
+typedef struct VkDeviceQueueCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkDeviceQueueCreateFlags flags;
+ uint32_t queueFamilyIndex;
+ uint32_t queueCount;
+ const float* pQueuePriorities;
+} VkDeviceQueueCreateInfo;
+
+typedef struct VkDeviceCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkDeviceCreateFlags flags;
+ uint32_t queueCreateInfoCount;
+ const VkDeviceQueueCreateInfo* pQueueCreateInfos;
+ uint32_t enabledLayerCount;
+ const char* const* ppEnabledLayerNames;
+ uint32_t enabledExtensionCount;
+ const char* const* ppEnabledExtensionNames;
+ const VkPhysicalDeviceFeatures* pEnabledFeatures;
+} VkDeviceCreateInfo;
+
+typedef struct VkExtensionProperties {
+ char extensionName[VK_MAX_EXTENSION_NAME_SIZE];
+ uint32_t specVersion;
+} VkExtensionProperties;
+
+typedef struct VkLayerProperties {
+ char layerName[VK_MAX_EXTENSION_NAME_SIZE];
+ uint32_t specVersion;
+ uint32_t implementationVersion;
+ char description[VK_MAX_DESCRIPTION_SIZE];
+} VkLayerProperties;
+
+typedef struct VkSubmitInfo {
+ VkStructureType sType;
+ const void* pNext;
+ uint32_t waitSemaphoreCount;
+ const VkSemaphore* pWaitSemaphores;
+ const VkPipelineStageFlags* pWaitDstStageMask;
+ uint32_t commandBufferCount;
+ const VkCommandBuffer* pCommandBuffers;
+ uint32_t signalSemaphoreCount;
+ const VkSemaphore* pSignalSemaphores;
+} VkSubmitInfo;
+
+typedef struct VkMemoryAllocateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkDeviceSize allocationSize;
+ uint32_t memoryTypeIndex;
+} VkMemoryAllocateInfo;
+
+typedef struct VkMappedMemoryRange {
+ VkStructureType sType;
+ const void* pNext;
+ VkDeviceMemory memory;
+ VkDeviceSize offset;
+ VkDeviceSize size;
+} VkMappedMemoryRange;
+
+typedef struct VkMemoryRequirements {
+ VkDeviceSize size;
+ VkDeviceSize alignment;
+ uint32_t memoryTypeBits;
+} VkMemoryRequirements;
+
+typedef struct VkSparseImageFormatProperties {
+ VkImageAspectFlags aspectMask;
+ VkExtent3D imageGranularity;
+ VkSparseImageFormatFlags flags;
+} VkSparseImageFormatProperties;
+
+typedef struct VkSparseImageMemoryRequirements {
+ VkSparseImageFormatProperties formatProperties;
+ uint32_t imageMipTailFirstLod;
+ VkDeviceSize imageMipTailSize;
+ VkDeviceSize imageMipTailOffset;
+ VkDeviceSize imageMipTailStride;
+} VkSparseImageMemoryRequirements;
+
+typedef struct VkSparseMemoryBind {
+ VkDeviceSize resourceOffset;
+ VkDeviceSize size;
+ VkDeviceMemory memory;
+ VkDeviceSize memoryOffset;
+ VkSparseMemoryBindFlags flags;
+} VkSparseMemoryBind;
+
+typedef struct VkSparseBufferMemoryBindInfo {
+ VkBuffer buffer;
+ uint32_t bindCount;
+ const VkSparseMemoryBind* pBinds;
+} VkSparseBufferMemoryBindInfo;
+
+typedef struct VkSparseImageOpaqueMemoryBindInfo {
+ VkImage image;
+ uint32_t bindCount;
+ const VkSparseMemoryBind* pBinds;
+} VkSparseImageOpaqueMemoryBindInfo;
+
+typedef struct VkImageSubresource {
+ VkImageAspectFlags aspectMask;
+ uint32_t mipLevel;
+ uint32_t arrayLayer;
+} VkImageSubresource;
+
+typedef struct VkOffset3D {
+ int32_t x;
+ int32_t y;
+ int32_t z;
+} VkOffset3D;
+
+typedef struct VkSparseImageMemoryBind {
+ VkImageSubresource subresource;
+ VkOffset3D offset;
+ VkExtent3D extent;
+ VkDeviceMemory memory;
+ VkDeviceSize memoryOffset;
+ VkSparseMemoryBindFlags flags;
+} VkSparseImageMemoryBind;
+
+typedef struct VkSparseImageMemoryBindInfo {
+ VkImage image;
+ uint32_t bindCount;
+ const VkSparseImageMemoryBind* pBinds;
+} VkSparseImageMemoryBindInfo;
+
+typedef struct VkBindSparseInfo {
+ VkStructureType sType;
+ const void* pNext;
+ uint32_t waitSemaphoreCount;
+ const VkSemaphore* pWaitSemaphores;
+ uint32_t bufferBindCount;
+ const VkSparseBufferMemoryBindInfo* pBufferBinds;
+ uint32_t imageOpaqueBindCount;
+ const VkSparseImageOpaqueMemoryBindInfo* pImageOpaqueBinds;
+ uint32_t imageBindCount;
+ const VkSparseImageMemoryBindInfo* pImageBinds;
+ uint32_t signalSemaphoreCount;
+ const VkSemaphore* pSignalSemaphores;
+} VkBindSparseInfo;
+
+typedef struct VkFenceCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkFenceCreateFlags flags;
+} VkFenceCreateInfo;
+
+typedef struct VkSemaphoreCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkSemaphoreCreateFlags flags;
+} VkSemaphoreCreateInfo;
+
+typedef struct VkEventCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkEventCreateFlags flags;
+} VkEventCreateInfo;
+
+typedef struct VkQueryPoolCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkQueryPoolCreateFlags flags;
+ VkQueryType queryType;
+ uint32_t queryCount;
+ VkQueryPipelineStatisticFlags pipelineStatistics;
+} VkQueryPoolCreateInfo;
+
+typedef struct VkBufferCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkBufferCreateFlags flags;
+ VkDeviceSize size;
+ VkBufferUsageFlags usage;
+ VkSharingMode sharingMode;
+ uint32_t queueFamilyIndexCount;
+ const uint32_t* pQueueFamilyIndices;
+} VkBufferCreateInfo;
+
+typedef struct VkBufferViewCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkBufferViewCreateFlags flags;
+ VkBuffer buffer;
+ VkFormat format;
+ VkDeviceSize offset;
+ VkDeviceSize range;
+} VkBufferViewCreateInfo;
+
+typedef struct VkImageCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkImageCreateFlags flags;
+ VkImageType imageType;
+ VkFormat format;
+ VkExtent3D extent;
+ uint32_t mipLevels;
+ uint32_t arrayLayers;
+ VkSampleCountFlagBits samples;
+ VkImageTiling tiling;
+ VkImageUsageFlags usage;
+ VkSharingMode sharingMode;
+ uint32_t queueFamilyIndexCount;
+ const uint32_t* pQueueFamilyIndices;
+ VkImageLayout initialLayout;
+} VkImageCreateInfo;
+
+typedef struct VkSubresourceLayout {
+ VkDeviceSize offset;
+ VkDeviceSize size;
+ VkDeviceSize rowPitch;
+ VkDeviceSize arrayPitch;
+ VkDeviceSize depthPitch;
+} VkSubresourceLayout;
+
+typedef struct VkComponentMapping {
+ VkComponentSwizzle r;
+ VkComponentSwizzle g;
+ VkComponentSwizzle b;
+ VkComponentSwizzle a;
+} VkComponentMapping;
+
+typedef struct VkImageSubresourceRange {
+ VkImageAspectFlags aspectMask;
+ uint32_t baseMipLevel;
+ uint32_t levelCount;
+ uint32_t baseArrayLayer;
+ uint32_t layerCount;
+} VkImageSubresourceRange;
+
+typedef struct VkImageViewCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkImageViewCreateFlags flags;
+ VkImage image;
+ VkImageViewType viewType;
+ VkFormat format;
+ VkComponentMapping components;
+ VkImageSubresourceRange subresourceRange;
+} VkImageViewCreateInfo;
+
+typedef struct VkShaderModuleCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkShaderModuleCreateFlags flags;
+ size_t codeSize;
+ const uint32_t* pCode;
+} VkShaderModuleCreateInfo;
+
+typedef struct VkPipelineCacheCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkPipelineCacheCreateFlags flags;
+ size_t initialDataSize;
+ const void* pInitialData;
+} VkPipelineCacheCreateInfo;
+
+typedef struct VkSpecializationMapEntry {
+ uint32_t constantID;
+ uint32_t offset;
+ size_t size;
+} VkSpecializationMapEntry;
+
+typedef struct VkSpecializationInfo {
+ uint32_t mapEntryCount;
+ const VkSpecializationMapEntry* pMapEntries;
+ size_t dataSize;
+ const void* pData;
+} VkSpecializationInfo;
+
+typedef struct VkPipelineShaderStageCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkPipelineShaderStageCreateFlags flags;
+ VkShaderStageFlagBits stage;
+ VkShaderModule module;
+ const char* pName;
+ const VkSpecializationInfo* pSpecializationInfo;
+} VkPipelineShaderStageCreateInfo;
+
+typedef struct VkVertexInputBindingDescription {
+ uint32_t binding;
+ uint32_t stride;
+ VkVertexInputRate inputRate;
+} VkVertexInputBindingDescription;
+
+typedef struct VkVertexInputAttributeDescription {
+ uint32_t location;
+ uint32_t binding;
+ VkFormat format;
+ uint32_t offset;
+} VkVertexInputAttributeDescription;
+
+typedef struct VkPipelineVertexInputStateCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkPipelineVertexInputStateCreateFlags flags;
+ uint32_t vertexBindingDescriptionCount;
+ const VkVertexInputBindingDescription* pVertexBindingDescriptions;
+ uint32_t vertexAttributeDescriptionCount;
+ const VkVertexInputAttributeDescription* pVertexAttributeDescriptions;
+} VkPipelineVertexInputStateCreateInfo;
+
+typedef struct VkPipelineInputAssemblyStateCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkPipelineInputAssemblyStateCreateFlags flags;
+ VkPrimitiveTopology topology;
+ VkBool32 primitiveRestartEnable;
+} VkPipelineInputAssemblyStateCreateInfo;
+
+typedef struct VkPipelineTessellationStateCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkPipelineTessellationStateCreateFlags flags;
+ uint32_t patchControlPoints;
+} VkPipelineTessellationStateCreateInfo;
+
+typedef struct VkViewport {
+ float x;
+ float y;
+ float width;
+ float height;
+ float minDepth;
+ float maxDepth;
+} VkViewport;
+
+typedef struct VkOffset2D {
+ int32_t x;
+ int32_t y;
+} VkOffset2D;
+
+typedef struct VkExtent2D {
+ uint32_t width;
+ uint32_t height;
+} VkExtent2D;
+
+typedef struct VkRect2D {
+ VkOffset2D offset;
+ VkExtent2D extent;
+} VkRect2D;
+
+typedef struct VkPipelineViewportStateCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkPipelineViewportStateCreateFlags flags;
+ uint32_t viewportCount;
+ const VkViewport* pViewports;
+ uint32_t scissorCount;
+ const VkRect2D* pScissors;
+} VkPipelineViewportStateCreateInfo;
+
+typedef struct VkPipelineRasterizationStateCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkPipelineRasterizationStateCreateFlags flags;
+ VkBool32 depthClampEnable;
+ VkBool32 rasterizerDiscardEnable;
+ VkPolygonMode polygonMode;
+ VkCullModeFlags cullMode;
+ VkFrontFace frontFace;
+ VkBool32 depthBiasEnable;
+ float depthBiasConstantFactor;
+ float depthBiasClamp;
+ float depthBiasSlopeFactor;
+ float lineWidth;
+} VkPipelineRasterizationStateCreateInfo;
+
+typedef struct VkPipelineMultisampleStateCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkPipelineMultisampleStateCreateFlags flags;
+ VkSampleCountFlagBits rasterizationSamples;
+ VkBool32 sampleShadingEnable;
+ float minSampleShading;
+ const VkSampleMask* pSampleMask;
+ VkBool32 alphaToCoverageEnable;
+ VkBool32 alphaToOneEnable;
+} VkPipelineMultisampleStateCreateInfo;
+
+typedef struct VkStencilOpState {
+ VkStencilOp failOp;
+ VkStencilOp passOp;
+ VkStencilOp depthFailOp;
+ VkCompareOp compareOp;
+ uint32_t compareMask;
+ uint32_t writeMask;
+ uint32_t reference;
+} VkStencilOpState;
+
+typedef struct VkPipelineDepthStencilStateCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkPipelineDepthStencilStateCreateFlags flags;
+ VkBool32 depthTestEnable;
+ VkBool32 depthWriteEnable;
+ VkCompareOp depthCompareOp;
+ VkBool32 depthBoundsTestEnable;
+ VkBool32 stencilTestEnable;
+ VkStencilOpState front;
+ VkStencilOpState back;
+ float minDepthBounds;
+ float maxDepthBounds;
+} VkPipelineDepthStencilStateCreateInfo;
+
+typedef struct VkPipelineColorBlendAttachmentState {
+ VkBool32 blendEnable;
+ VkBlendFactor srcColorBlendFactor;
+ VkBlendFactor dstColorBlendFactor;
+ VkBlendOp colorBlendOp;
+ VkBlendFactor srcAlphaBlendFactor;
+ VkBlendFactor dstAlphaBlendFactor;
+ VkBlendOp alphaBlendOp;
+ VkColorComponentFlags colorWriteMask;
+} VkPipelineColorBlendAttachmentState;
+
+typedef struct VkPipelineColorBlendStateCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkPipelineColorBlendStateCreateFlags flags;
+ VkBool32 logicOpEnable;
+ VkLogicOp logicOp;
+ uint32_t attachmentCount;
+ const VkPipelineColorBlendAttachmentState* pAttachments;
+ float blendConstants[4];
+} VkPipelineColorBlendStateCreateInfo;
+
+typedef struct VkPipelineDynamicStateCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkPipelineDynamicStateCreateFlags flags;
+ uint32_t dynamicStateCount;
+ const VkDynamicState* pDynamicStates;
+} VkPipelineDynamicStateCreateInfo;
+
+typedef struct VkGraphicsPipelineCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkPipelineCreateFlags flags;
+ uint32_t stageCount;
+ const VkPipelineShaderStageCreateInfo* pStages;
+ const VkPipelineVertexInputStateCreateInfo* pVertexInputState;
+ const VkPipelineInputAssemblyStateCreateInfo* pInputAssemblyState;
+ const VkPipelineTessellationStateCreateInfo* pTessellationState;
+ const VkPipelineViewportStateCreateInfo* pViewportState;
+ const VkPipelineRasterizationStateCreateInfo* pRasterizationState;
+ const VkPipelineMultisampleStateCreateInfo* pMultisampleState;
+ const VkPipelineDepthStencilStateCreateInfo* pDepthStencilState;
+ const VkPipelineColorBlendStateCreateInfo* pColorBlendState;
+ const VkPipelineDynamicStateCreateInfo* pDynamicState;
+ VkPipelineLayout layout;
+ VkRenderPass renderPass;
+ uint32_t subpass;
+ VkPipeline basePipelineHandle;
+ int32_t basePipelineIndex;
+} VkGraphicsPipelineCreateInfo;
+
+typedef struct VkComputePipelineCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkPipelineCreateFlags flags;
+ VkPipelineShaderStageCreateInfo stage;
+ VkPipelineLayout layout;
+ VkPipeline basePipelineHandle;
+ int32_t basePipelineIndex;
+} VkComputePipelineCreateInfo;
+
+typedef struct VkPushConstantRange {
+ VkShaderStageFlags stageFlags;
+ uint32_t offset;
+ uint32_t size;
+} VkPushConstantRange;
+
+typedef struct VkPipelineLayoutCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkPipelineLayoutCreateFlags flags;
+ uint32_t setLayoutCount;
+ const VkDescriptorSetLayout* pSetLayouts;
+ uint32_t pushConstantRangeCount;
+ const VkPushConstantRange* pPushConstantRanges;
+} VkPipelineLayoutCreateInfo;
+
+typedef struct VkSamplerCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkSamplerCreateFlags flags;
+ VkFilter magFilter;
+ VkFilter minFilter;
+ VkSamplerMipmapMode mipmapMode;
+ VkSamplerAddressMode addressModeU;
+ VkSamplerAddressMode addressModeV;
+ VkSamplerAddressMode addressModeW;
+ float mipLodBias;
+ VkBool32 anisotropyEnable;
+ float maxAnisotropy;
+ VkBool32 compareEnable;
+ VkCompareOp compareOp;
+ float minLod;
+ float maxLod;
+ VkBorderColor borderColor;
+ VkBool32 unnormalizedCoordinates;
+} VkSamplerCreateInfo;
+
+typedef struct VkDescriptorSetLayoutBinding {
+ uint32_t binding;
+ VkDescriptorType descriptorType;
+ uint32_t descriptorCount;
+ VkShaderStageFlags stageFlags;
+ const VkSampler* pImmutableSamplers;
+} VkDescriptorSetLayoutBinding;
+
+typedef struct VkDescriptorSetLayoutCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkDescriptorSetLayoutCreateFlags flags;
+ uint32_t bindingCount;
+ const VkDescriptorSetLayoutBinding* pBindings;
+} VkDescriptorSetLayoutCreateInfo;
+
+typedef struct VkDescriptorPoolSize {
+ VkDescriptorType type;
+ uint32_t descriptorCount;
+} VkDescriptorPoolSize;
+
+typedef struct VkDescriptorPoolCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkDescriptorPoolCreateFlags flags;
+ uint32_t maxSets;
+ uint32_t poolSizeCount;
+ const VkDescriptorPoolSize* pPoolSizes;
+} VkDescriptorPoolCreateInfo;
+
+typedef struct VkDescriptorSetAllocateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkDescriptorPool descriptorPool;
+ uint32_t descriptorSetCount;
+ const VkDescriptorSetLayout* pSetLayouts;
+} VkDescriptorSetAllocateInfo;
+
+typedef struct VkDescriptorImageInfo {
+ VkSampler sampler;
+ VkImageView imageView;
+ VkImageLayout imageLayout;
+} VkDescriptorImageInfo;
+
+typedef struct VkDescriptorBufferInfo {
+ VkBuffer buffer;
+ VkDeviceSize offset;
+ VkDeviceSize range;
+} VkDescriptorBufferInfo;
+
+typedef struct VkWriteDescriptorSet {
+ VkStructureType sType;
+ const void* pNext;
+ VkDescriptorSet dstSet;
+ uint32_t dstBinding;
+ uint32_t dstArrayElement;
+ uint32_t descriptorCount;
+ VkDescriptorType descriptorType;
+ const VkDescriptorImageInfo* pImageInfo;
+ const VkDescriptorBufferInfo* pBufferInfo;
+ const VkBufferView* pTexelBufferView;
+} VkWriteDescriptorSet;
+
+typedef struct VkCopyDescriptorSet {
+ VkStructureType sType;
+ const void* pNext;
+ VkDescriptorSet srcSet;
+ uint32_t srcBinding;
+ uint32_t srcArrayElement;
+ VkDescriptorSet dstSet;
+ uint32_t dstBinding;
+ uint32_t dstArrayElement;
+ uint32_t descriptorCount;
+} VkCopyDescriptorSet;
+
+typedef struct VkFramebufferCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkFramebufferCreateFlags flags;
+ VkRenderPass renderPass;
+ uint32_t attachmentCount;
+ const VkImageView* pAttachments;
+ uint32_t width;
+ uint32_t height;
+ uint32_t layers;
+} VkFramebufferCreateInfo;
+
+typedef struct VkAttachmentDescription {
+ VkAttachmentDescriptionFlags flags;
+ VkFormat format;
+ VkSampleCountFlagBits samples;
+ VkAttachmentLoadOp loadOp;
+ VkAttachmentStoreOp storeOp;
+ VkAttachmentLoadOp stencilLoadOp;
+ VkAttachmentStoreOp stencilStoreOp;
+ VkImageLayout initialLayout;
+ VkImageLayout finalLayout;
+} VkAttachmentDescription;
+
+typedef struct VkAttachmentReference {
+ uint32_t attachment;
+ VkImageLayout layout;
+} VkAttachmentReference;
+
+typedef struct VkSubpassDescription {
+ VkSubpassDescriptionFlags flags;
+ VkPipelineBindPoint pipelineBindPoint;
+ uint32_t inputAttachmentCount;
+ const VkAttachmentReference* pInputAttachments;
+ uint32_t colorAttachmentCount;
+ const VkAttachmentReference* pColorAttachments;
+ const VkAttachmentReference* pResolveAttachments;
+ const VkAttachmentReference* pDepthStencilAttachment;
+ uint32_t preserveAttachmentCount;
+ const uint32_t* pPreserveAttachments;
+} VkSubpassDescription;
+
+typedef struct VkSubpassDependency {
+ uint32_t srcSubpass;
+ uint32_t dstSubpass;
+ VkPipelineStageFlags srcStageMask;
+ VkPipelineStageFlags dstStageMask;
+ VkAccessFlags srcAccessMask;
+ VkAccessFlags dstAccessMask;
+ VkDependencyFlags dependencyFlags;
+} VkSubpassDependency;
+
+typedef struct VkRenderPassCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkRenderPassCreateFlags flags;
+ uint32_t attachmentCount;
+ const VkAttachmentDescription* pAttachments;
+ uint32_t subpassCount;
+ const VkSubpassDescription* pSubpasses;
+ uint32_t dependencyCount;
+ const VkSubpassDependency* pDependencies;
+} VkRenderPassCreateInfo;
+
+typedef struct VkCommandPoolCreateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkCommandPoolCreateFlags flags;
+ uint32_t queueFamilyIndex;
+} VkCommandPoolCreateInfo;
+
+typedef struct VkCommandBufferAllocateInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkCommandPool commandPool;
+ VkCommandBufferLevel level;
+ uint32_t commandBufferCount;
+} VkCommandBufferAllocateInfo;
+
+typedef struct VkCommandBufferInheritanceInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkRenderPass renderPass;
+ uint32_t subpass;
+ VkFramebuffer framebuffer;
+ VkBool32 occlusionQueryEnable;
+ VkQueryControlFlags queryFlags;
+ VkQueryPipelineStatisticFlags pipelineStatistics;
+} VkCommandBufferInheritanceInfo;
+
+typedef struct VkCommandBufferBeginInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkCommandBufferUsageFlags flags;
+ const VkCommandBufferInheritanceInfo* pInheritanceInfo;
+} VkCommandBufferBeginInfo;
+
+typedef struct VkBufferCopy {
+ VkDeviceSize srcOffset;
+ VkDeviceSize dstOffset;
+ VkDeviceSize size;
+} VkBufferCopy;
+
+typedef struct VkImageSubresourceLayers {
+ VkImageAspectFlags aspectMask;
+ uint32_t mipLevel;
+ uint32_t baseArrayLayer;
+ uint32_t layerCount;
+} VkImageSubresourceLayers;
+
+typedef struct VkImageCopy {
+ VkImageSubresourceLayers srcSubresource;
+ VkOffset3D srcOffset;
+ VkImageSubresourceLayers dstSubresource;
+ VkOffset3D dstOffset;
+ VkExtent3D extent;
+} VkImageCopy;
+
+typedef struct VkImageBlit {
+ VkImageSubresourceLayers srcSubresource;
+ VkOffset3D srcOffsets[2];
+ VkImageSubresourceLayers dstSubresource;
+ VkOffset3D dstOffsets[2];
+} VkImageBlit;
+
+typedef struct VkBufferImageCopy {
+ VkDeviceSize bufferOffset;
+ uint32_t bufferRowLength;
+ uint32_t bufferImageHeight;
+ VkImageSubresourceLayers imageSubresource;
+ VkOffset3D imageOffset;
+ VkExtent3D imageExtent;
+} VkBufferImageCopy;
+
+typedef union VkClearColorValue {
+ float float32[4];
+ int32_t int32[4];
+ uint32_t uint32[4];
+} VkClearColorValue;
+
+typedef struct VkClearDepthStencilValue {
+ float depth;
+ uint32_t stencil;
+} VkClearDepthStencilValue;
+
+typedef union VkClearValue {
+ VkClearColorValue color;
+ VkClearDepthStencilValue depthStencil;
+} VkClearValue;
+
+typedef struct VkClearAttachment {
+ VkImageAspectFlags aspectMask;
+ uint32_t colorAttachment;
+ VkClearValue clearValue;
+} VkClearAttachment;
+
+typedef struct VkClearRect {
+ VkRect2D rect;
+ uint32_t baseArrayLayer;
+ uint32_t layerCount;
+} VkClearRect;
+
+typedef struct VkImageResolve {
+ VkImageSubresourceLayers srcSubresource;
+ VkOffset3D srcOffset;
+ VkImageSubresourceLayers dstSubresource;
+ VkOffset3D dstOffset;
+ VkExtent3D extent;
+} VkImageResolve;
+
+typedef struct VkMemoryBarrier {
+ VkStructureType sType;
+ const void* pNext;
+ VkAccessFlags srcAccessMask;
+ VkAccessFlags dstAccessMask;
+} VkMemoryBarrier;
+
+typedef struct VkBufferMemoryBarrier {
+ VkStructureType sType;
+ const void* pNext;
+ VkAccessFlags srcAccessMask;
+ VkAccessFlags dstAccessMask;
+ uint32_t srcQueueFamilyIndex;
+ uint32_t dstQueueFamilyIndex;
+ VkBuffer buffer;
+ VkDeviceSize offset;
+ VkDeviceSize size;
+} VkBufferMemoryBarrier;
+
+typedef struct VkImageMemoryBarrier {
+ VkStructureType sType;
+ const void* pNext;
+ VkAccessFlags srcAccessMask;
+ VkAccessFlags dstAccessMask;
+ VkImageLayout oldLayout;
+ VkImageLayout newLayout;
+ uint32_t srcQueueFamilyIndex;
+ uint32_t dstQueueFamilyIndex;
+ VkImage image;
+ VkImageSubresourceRange subresourceRange;
+} VkImageMemoryBarrier;
+
+typedef struct VkRenderPassBeginInfo {
+ VkStructureType sType;
+ const void* pNext;
+ VkRenderPass renderPass;
+ VkFramebuffer framebuffer;
+ VkRect2D renderArea;
+ uint32_t clearValueCount;
+ const VkClearValue* pClearValues;
+} VkRenderPassBeginInfo;
+
+typedef struct VkDispatchIndirectCommand {
+ uint32_t x;
+ uint32_t y;
+ uint32_t z;
+} VkDispatchIndirectCommand;
+
+typedef struct VkDrawIndexedIndirectCommand {
+ uint32_t indexCount;
+ uint32_t instanceCount;
+ uint32_t firstIndex;
+ int32_t vertexOffset;
+ uint32_t firstInstance;
+} VkDrawIndexedIndirectCommand;
+
+typedef struct VkDrawIndirectCommand {
+ uint32_t vertexCount;
+ uint32_t instanceCount;
+ uint32_t firstVertex;
+ uint32_t firstInstance;
+} VkDrawIndirectCommand;
+
+
+typedef VkResult (VKAPI_PTR *PFN_vkCreateInstance)(const VkInstanceCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkInstance* pInstance);
+typedef void (VKAPI_PTR *PFN_vkDestroyInstance)(VkInstance instance, const VkAllocationCallbacks* pAllocator);
+typedef VkResult (VKAPI_PTR *PFN_vkEnumeratePhysicalDevices)(VkInstance instance, uint32_t* pPhysicalDeviceCount, VkPhysicalDevice* pPhysicalDevices);
+typedef void (VKAPI_PTR *PFN_vkGetPhysicalDeviceFeatures)(VkPhysicalDevice physicalDevice, VkPhysicalDeviceFeatures* pFeatures);
+typedef void (VKAPI_PTR *PFN_vkGetPhysicalDeviceFormatProperties)(VkPhysicalDevice physicalDevice, VkFormat format, VkFormatProperties* pFormatProperties);
+typedef VkResult (VKAPI_PTR *PFN_vkGetPhysicalDeviceImageFormatProperties)(VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type, VkImageTiling tiling, VkImageUsageFlags usage, VkImageCreateFlags flags, VkImageFormatProperties* pImageFormatProperties);
+typedef void (VKAPI_PTR *PFN_vkGetPhysicalDeviceProperties)(VkPhysicalDevice physicalDevice, VkPhysicalDeviceProperties* pProperties);
+typedef void (VKAPI_PTR *PFN_vkGetPhysicalDeviceQueueFamilyProperties)(VkPhysicalDevice physicalDevice, uint32_t* pQueueFamilyPropertyCount, VkQueueFamilyProperties* pQueueFamilyProperties);
+typedef void (VKAPI_PTR *PFN_vkGetPhysicalDeviceMemoryProperties)(VkPhysicalDevice physicalDevice, VkPhysicalDeviceMemoryProperties* pMemoryProperties);
+typedef PFN_vkVoidFunction (VKAPI_PTR *PFN_vkGetInstanceProcAddr)(VkInstance instance, const char* pName);
+typedef PFN_vkVoidFunction (VKAPI_PTR *PFN_vkGetDeviceProcAddr)(VkDevice device, const char* pName);
+typedef VkResult (VKAPI_PTR *PFN_vkCreateDevice)(VkPhysicalDevice physicalDevice, const VkDeviceCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDevice* pDevice);
+typedef void (VKAPI_PTR *PFN_vkDestroyDevice)(VkDevice device, const VkAllocationCallbacks* pAllocator);
+typedef VkResult (VKAPI_PTR *PFN_vkEnumerateInstanceExtensionProperties)(const char* pLayerName, uint32_t* pPropertyCount, VkExtensionProperties* pProperties);
+typedef VkResult (VKAPI_PTR *PFN_vkEnumerateDeviceExtensionProperties)(VkPhysicalDevice physicalDevice, const char* pLayerName, uint32_t* pPropertyCount, VkExtensionProperties* pProperties);
+typedef VkResult (VKAPI_PTR *PFN_vkEnumerateInstanceLayerProperties)(uint32_t* pPropertyCount, VkLayerProperties* pProperties);
+typedef VkResult (VKAPI_PTR *PFN_vkEnumerateDeviceLayerProperties)(VkPhysicalDevice physicalDevice, uint32_t* pPropertyCount, VkLayerProperties* pProperties);
+typedef void (VKAPI_PTR *PFN_vkGetDeviceQueue)(VkDevice device, uint32_t queueFamilyIndex, uint32_t queueIndex, VkQueue* pQueue);
+typedef VkResult (VKAPI_PTR *PFN_vkQueueSubmit)(VkQueue queue, uint32_t submitCount, const VkSubmitInfo* pSubmits, VkFence fence);
+typedef VkResult (VKAPI_PTR *PFN_vkQueueWaitIdle)(VkQueue queue);
+typedef VkResult (VKAPI_PTR *PFN_vkDeviceWaitIdle)(VkDevice device);
+typedef VkResult (VKAPI_PTR *PFN_vkAllocateMemory)(VkDevice device, const VkMemoryAllocateInfo* pAllocateInfo, const VkAllocationCallbacks* pAllocator, VkDeviceMemory* pMemory);
+typedef void (VKAPI_PTR *PFN_vkFreeMemory)(VkDevice device, VkDeviceMemory memory, const VkAllocationCallbacks* pAllocator);
+typedef VkResult (VKAPI_PTR *PFN_vkMapMemory)(VkDevice device, VkDeviceMemory memory, VkDeviceSize offset, VkDeviceSize size, VkMemoryMapFlags flags, void** ppData);
+typedef void (VKAPI_PTR *PFN_vkUnmapMemory)(VkDevice device, VkDeviceMemory memory);
+typedef VkResult (VKAPI_PTR *PFN_vkFlushMappedMemoryRanges)(VkDevice device, uint32_t memoryRangeCount, const VkMappedMemoryRange* pMemoryRanges);
+typedef VkResult (VKAPI_PTR *PFN_vkInvalidateMappedMemoryRanges)(VkDevice device, uint32_t memoryRangeCount, const VkMappedMemoryRange* pMemoryRanges);
+typedef void (VKAPI_PTR *PFN_vkGetDeviceMemoryCommitment)(VkDevice device, VkDeviceMemory memory, VkDeviceSize* pCommittedMemoryInBytes);
+typedef VkResult (VKAPI_PTR *PFN_vkBindBufferMemory)(VkDevice device, VkBuffer buffer, VkDeviceMemory memory, VkDeviceSize memoryOffset);
+typedef VkResult (VKAPI_PTR *PFN_vkBindImageMemory)(VkDevice device, VkImage image, VkDeviceMemory memory, VkDeviceSize memoryOffset);
+typedef void (VKAPI_PTR *PFN_vkGetBufferMemoryRequirements)(VkDevice device, VkBuffer buffer, VkMemoryRequirements* pMemoryRequirements);
+typedef void (VKAPI_PTR *PFN_vkGetImageMemoryRequirements)(VkDevice device, VkImage image, VkMemoryRequirements* pMemoryRequirements);
+typedef void (VKAPI_PTR *PFN_vkGetImageSparseMemoryRequirements)(VkDevice device, VkImage image, uint32_t* pSparseMemoryRequirementCount, VkSparseImageMemoryRequirements* pSparseMemoryRequirements);
+typedef void (VKAPI_PTR *PFN_vkGetPhysicalDeviceSparseImageFormatProperties)(VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type, VkSampleCountFlagBits samples, VkImageUsageFlags usage, VkImageTiling tiling, uint32_t* pPropertyCount, VkSparseImageFormatProperties* pProperties);
+typedef VkResult (VKAPI_PTR *PFN_vkQueueBindSparse)(VkQueue queue, uint32_t bindInfoCount, const VkBindSparseInfo* pBindInfo, VkFence fence);
+typedef VkResult (VKAPI_PTR *PFN_vkCreateFence)(VkDevice device, const VkFenceCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkFence* pFence);
+typedef void (VKAPI_PTR *PFN_vkDestroyFence)(VkDevice device, VkFence fence, const VkAllocationCallbacks* pAllocator);
+typedef VkResult (VKAPI_PTR *PFN_vkResetFences)(VkDevice device, uint32_t fenceCount, const VkFence* pFences);
+typedef VkResult (VKAPI_PTR *PFN_vkGetFenceStatus)(VkDevice device, VkFence fence);
+typedef VkResult (VKAPI_PTR *PFN_vkWaitForFences)(VkDevice device, uint32_t fenceCount, const VkFence* pFences, VkBool32 waitAll, uint64_t timeout);
+typedef VkResult (VKAPI_PTR *PFN_vkCreateSemaphore)(VkDevice device, const VkSemaphoreCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSemaphore* pSemaphore);
+typedef void (VKAPI_PTR *PFN_vkDestroySemaphore)(VkDevice device, VkSemaphore semaphore, const VkAllocationCallbacks* pAllocator);
+typedef VkResult (VKAPI_PTR *PFN_vkCreateEvent)(VkDevice device, const VkEventCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkEvent* pEvent);
+typedef void (VKAPI_PTR *PFN_vkDestroyEvent)(VkDevice device, VkEvent event, const VkAllocationCallbacks* pAllocator);
+typedef VkResult (VKAPI_PTR *PFN_vkGetEventStatus)(VkDevice device, VkEvent event);
+typedef VkResult (VKAPI_PTR *PFN_vkSetEvent)(VkDevice device, VkEvent event);
+typedef VkResult (VKAPI_PTR *PFN_vkResetEvent)(VkDevice device, VkEvent event);
+typedef VkResult (VKAPI_PTR *PFN_vkCreateQueryPool)(VkDevice device, const VkQueryPoolCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkQueryPool* pQueryPool);
+typedef void (VKAPI_PTR *PFN_vkDestroyQueryPool)(VkDevice device, VkQueryPool queryPool, const VkAllocationCallbacks* pAllocator);
+typedef VkResult (VKAPI_PTR *PFN_vkGetQueryPoolResults)(VkDevice device, VkQueryPool queryPool, uint32_t firstQuery, uint32_t queryCount, size_t dataSize, void* pData, VkDeviceSize stride, VkQueryResultFlags flags);
+typedef VkResult (VKAPI_PTR *PFN_vkCreateBuffer)(VkDevice device, const VkBufferCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkBuffer* pBuffer);
+typedef void (VKAPI_PTR *PFN_vkDestroyBuffer)(VkDevice device, VkBuffer buffer, const VkAllocationCallbacks* pAllocator);
+typedef VkResult (VKAPI_PTR *PFN_vkCreateBufferView)(VkDevice device, const VkBufferViewCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkBufferView* pView);
+typedef void (VKAPI_PTR *PFN_vkDestroyBufferView)(VkDevice device, VkBufferView bufferView, const VkAllocationCallbacks* pAllocator);
+typedef VkResult (VKAPI_PTR *PFN_vkCreateImage)(VkDevice device, const VkImageCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkImage* pImage);
+typedef void (VKAPI_PTR *PFN_vkDestroyImage)(VkDevice device, VkImage image, const VkAllocationCallbacks* pAllocator);
+typedef void (VKAPI_PTR *PFN_vkGetImageSubresourceLayout)(VkDevice device, VkImage image, const VkImageSubresource* pSubresource, VkSubresourceLayout* pLayout);
+typedef VkResult (VKAPI_PTR *PFN_vkCreateImageView)(VkDevice device, const VkImageViewCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkImageView* pView);
+typedef void (VKAPI_PTR *PFN_vkDestroyImageView)(VkDevice device, VkImageView imageView, const VkAllocationCallbacks* pAllocator);
+typedef VkResult (VKAPI_PTR *PFN_vkCreateShaderModule)(VkDevice device, const VkShaderModuleCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkShaderModule* pShaderModule);
+typedef void (VKAPI_PTR *PFN_vkDestroyShaderModule)(VkDevice device, VkShaderModule shaderModule, const VkAllocationCallbacks* pAllocator);
+typedef VkResult (VKAPI_PTR *PFN_vkCreatePipelineCache)(VkDevice device, const VkPipelineCacheCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkPipelineCache* pPipelineCache);
+typedef void (VKAPI_PTR *PFN_vkDestroyPipelineCache)(VkDevice device, VkPipelineCache pipelineCache, const VkAllocationCallbacks* pAllocator);
+typedef VkResult (VKAPI_PTR *PFN_vkGetPipelineCacheData)(VkDevice device, VkPipelineCache pipelineCache, size_t* pDataSize, void* pData);
+typedef VkResult (VKAPI_PTR *PFN_vkMergePipelineCaches)(VkDevice device, VkPipelineCache dstCache, uint32_t srcCacheCount, const VkPipelineCache* pSrcCaches);
+typedef VkResult (VKAPI_PTR *PFN_vkCreateGraphicsPipelines)(VkDevice device, VkPipelineCache pipelineCache, uint32_t createInfoCount, const VkGraphicsPipelineCreateInfo* pCreateInfos, const VkAllocationCallbacks* pAllocator, VkPipeline* pPipelines);
+typedef VkResult (VKAPI_PTR *PFN_vkCreateComputePipelines)(VkDevice device, VkPipelineCache pipelineCache, uint32_t createInfoCount, const VkComputePipelineCreateInfo* pCreateInfos, const VkAllocationCallbacks* pAllocator, VkPipeline* pPipelines);
+typedef void (VKAPI_PTR *PFN_vkDestroyPipeline)(VkDevice device, VkPipeline pipeline, const VkAllocationCallbacks* pAllocator);
+typedef VkResult (VKAPI_PTR *PFN_vkCreatePipelineLayout)(VkDevice device, const VkPipelineLayoutCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkPipelineLayout* pPipelineLayout);
+typedef void (VKAPI_PTR *PFN_vkDestroyPipelineLayout)(VkDevice device, VkPipelineLayout pipelineLayout, const VkAllocationCallbacks* pAllocator);
+typedef VkResult (VKAPI_PTR *PFN_vkCreateSampler)(VkDevice device, const VkSamplerCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSampler* pSampler);
+typedef void (VKAPI_PTR *PFN_vkDestroySampler)(VkDevice device, VkSampler sampler, const VkAllocationCallbacks* pAllocator);
+typedef VkResult (VKAPI_PTR *PFN_vkCreateDescriptorSetLayout)(VkDevice device, const VkDescriptorSetLayoutCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDescriptorSetLayout* pSetLayout);
+typedef void (VKAPI_PTR *PFN_vkDestroyDescriptorSetLayout)(VkDevice device, VkDescriptorSetLayout descriptorSetLayout, const VkAllocationCallbacks* pAllocator);
+typedef VkResult (VKAPI_PTR *PFN_vkCreateDescriptorPool)(VkDevice device, const VkDescriptorPoolCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDescriptorPool* pDescriptorPool);
+typedef void (VKAPI_PTR *PFN_vkDestroyDescriptorPool)(VkDevice device, VkDescriptorPool descriptorPool, const VkAllocationCallbacks* pAllocator);
+typedef VkResult (VKAPI_PTR *PFN_vkResetDescriptorPool)(VkDevice device, VkDescriptorPool descriptorPool, VkDescriptorPoolResetFlags flags);
+typedef VkResult (VKAPI_PTR *PFN_vkAllocateDescriptorSets)(VkDevice device, const VkDescriptorSetAllocateInfo* pAllocateInfo, VkDescriptorSet* pDescriptorSets);
+typedef VkResult (VKAPI_PTR *PFN_vkFreeDescriptorSets)(VkDevice device, VkDescriptorPool descriptorPool, uint32_t descriptorSetCount, const VkDescriptorSet* pDescriptorSets);
+typedef void (VKAPI_PTR *PFN_vkUpdateDescriptorSets)(VkDevice device, uint32_t descriptorWriteCount, const VkWriteDescriptorSet* pDescriptorWrites, uint32_t descriptorCopyCount, const VkCopyDescriptorSet* pDescriptorCopies);
+typedef VkResult (VKAPI_PTR *PFN_vkCreateFramebuffer)(VkDevice device, const VkFramebufferCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkFramebuffer* pFramebuffer);
+typedef void (VKAPI_PTR *PFN_vkDestroyFramebuffer)(VkDevice device, VkFramebuffer framebuffer, const VkAllocationCallbacks* pAllocator);
+typedef VkResult (VKAPI_PTR *PFN_vkCreateRenderPass)(VkDevice device, const VkRenderPassCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkRenderPass* pRenderPass);
+typedef void (VKAPI_PTR *PFN_vkDestroyRenderPass)(VkDevice device, VkRenderPass renderPass, const VkAllocationCallbacks* pAllocator);
+typedef void (VKAPI_PTR *PFN_vkGetRenderAreaGranularity)(VkDevice device, VkRenderPass renderPass, VkExtent2D* pGranularity);
+typedef VkResult (VKAPI_PTR *PFN_vkCreateCommandPool)(VkDevice device, const VkCommandPoolCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkCommandPool* pCommandPool);
+typedef void (VKAPI_PTR *PFN_vkDestroyCommandPool)(VkDevice device, VkCommandPool commandPool, const VkAllocationCallbacks* pAllocator);
+typedef VkResult (VKAPI_PTR *PFN_vkResetCommandPool)(VkDevice device, VkCommandPool commandPool, VkCommandPoolResetFlags flags);
+typedef VkResult (VKAPI_PTR *PFN_vkAllocateCommandBuffers)(VkDevice device, const VkCommandBufferAllocateInfo* pAllocateInfo, VkCommandBuffer* pCommandBuffers);
+typedef void (VKAPI_PTR *PFN_vkFreeCommandBuffers)(VkDevice device, VkCommandPool commandPool, uint32_t commandBufferCount, const VkCommandBuffer* pCommandBuffers);
+typedef VkResult (VKAPI_PTR *PFN_vkBeginCommandBuffer)(VkCommandBuffer commandBuffer, const VkCommandBufferBeginInfo* pBeginInfo);
+typedef VkResult (VKAPI_PTR *PFN_vkEndCommandBuffer)(VkCommandBuffer commandBuffer);
+typedef VkResult (VKAPI_PTR *PFN_vkResetCommandBuffer)(VkCommandBuffer commandBuffer, VkCommandBufferResetFlags flags);
+typedef void (VKAPI_PTR *PFN_vkCmdBindPipeline)(VkCommandBuffer commandBuffer, VkPipelineBindPoint pipelineBindPoint, VkPipeline pipeline);
+typedef void (VKAPI_PTR *PFN_vkCmdSetViewport)(VkCommandBuffer commandBuffer, uint32_t firstViewport, uint32_t viewportCount, const VkViewport* pViewports);
+typedef void (VKAPI_PTR *PFN_vkCmdSetScissor)(VkCommandBuffer commandBuffer, uint32_t firstScissor, uint32_t scissorCount, const VkRect2D* pScissors);
+typedef void (VKAPI_PTR *PFN_vkCmdSetLineWidth)(VkCommandBuffer commandBuffer, float lineWidth);
+typedef void (VKAPI_PTR *PFN_vkCmdSetDepthBias)(VkCommandBuffer commandBuffer, float depthBiasConstantFactor, float depthBiasClamp, float depthBiasSlopeFactor);
+typedef void (VKAPI_PTR *PFN_vkCmdSetBlendConstants)(VkCommandBuffer commandBuffer, const float blendConstants[4]);
+typedef void (VKAPI_PTR *PFN_vkCmdSetDepthBounds)(VkCommandBuffer commandBuffer, float minDepthBounds, float maxDepthBounds);
+typedef void (VKAPI_PTR *PFN_vkCmdSetStencilCompareMask)(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t compareMask);
+typedef void (VKAPI_PTR *PFN_vkCmdSetStencilWriteMask)(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t writeMask);
+typedef void (VKAPI_PTR *PFN_vkCmdSetStencilReference)(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t reference);
+typedef void (VKAPI_PTR *PFN_vkCmdBindDescriptorSets)(VkCommandBuffer commandBuffer, VkPipelineBindPoint pipelineBindPoint, VkPipelineLayout layout, uint32_t firstSet, uint32_t descriptorSetCount, const VkDescriptorSet* pDescriptorSets, uint32_t dynamicOffsetCount, const uint32_t* pDynamicOffsets);
+typedef void (VKAPI_PTR *PFN_vkCmdBindIndexBuffer)(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, VkIndexType indexType);
+typedef void (VKAPI_PTR *PFN_vkCmdBindVertexBuffers)(VkCommandBuffer commandBuffer, uint32_t firstBinding, uint32_t bindingCount, const VkBuffer* pBuffers, const VkDeviceSize* pOffsets);
+typedef void (VKAPI_PTR *PFN_vkCmdDraw)(VkCommandBuffer commandBuffer, uint32_t vertexCount, uint32_t instanceCount, uint32_t firstVertex, uint32_t firstInstance);
+typedef void (VKAPI_PTR *PFN_vkCmdDrawIndexed)(VkCommandBuffer commandBuffer, uint32_t indexCount, uint32_t instanceCount, uint32_t firstIndex, int32_t vertexOffset, uint32_t firstInstance);
+typedef void (VKAPI_PTR *PFN_vkCmdDrawIndirect)(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, uint32_t drawCount, uint32_t stride);
+typedef void (VKAPI_PTR *PFN_vkCmdDrawIndexedIndirect)(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, uint32_t drawCount, uint32_t stride);
+typedef void (VKAPI_PTR *PFN_vkCmdDispatch)(VkCommandBuffer commandBuffer, uint32_t x, uint32_t y, uint32_t z);
+typedef void (VKAPI_PTR *PFN_vkCmdDispatchIndirect)(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset);
+typedef void (VKAPI_PTR *PFN_vkCmdCopyBuffer)(VkCommandBuffer commandBuffer, VkBuffer srcBuffer, VkBuffer dstBuffer, uint32_t regionCount, const VkBufferCopy* pRegions);
+typedef void (VKAPI_PTR *PFN_vkCmdCopyImage)(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage, VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageCopy* pRegions);
+typedef void (VKAPI_PTR *PFN_vkCmdBlitImage)(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage, VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageBlit* pRegions, VkFilter filter);
+typedef void (VKAPI_PTR *PFN_vkCmdCopyBufferToImage)(VkCommandBuffer commandBuffer, VkBuffer srcBuffer, VkImage dstImage, VkImageLayout dstImageLayout, uint32_t regionCount, const VkBufferImageCopy* pRegions);
+typedef void (VKAPI_PTR *PFN_vkCmdCopyImageToBuffer)(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkBuffer dstBuffer, uint32_t regionCount, const VkBufferImageCopy* pRegions);
+typedef void (VKAPI_PTR *PFN_vkCmdUpdateBuffer)(VkCommandBuffer commandBuffer, VkBuffer dstBuffer, VkDeviceSize dstOffset, VkDeviceSize dataSize, const uint32_t* pData);
+typedef void (VKAPI_PTR *PFN_vkCmdFillBuffer)(VkCommandBuffer commandBuffer, VkBuffer dstBuffer, VkDeviceSize dstOffset, VkDeviceSize size, uint32_t data);
+typedef void (VKAPI_PTR *PFN_vkCmdClearColorImage)(VkCommandBuffer commandBuffer, VkImage image, VkImageLayout imageLayout, const VkClearColorValue* pColor, uint32_t rangeCount, const VkImageSubresourceRange* pRanges);
+typedef void (VKAPI_PTR *PFN_vkCmdClearDepthStencilImage)(VkCommandBuffer commandBuffer, VkImage image, VkImageLayout imageLayout, const VkClearDepthStencilValue* pDepthStencil, uint32_t rangeCount, const VkImageSubresourceRange* pRanges);
+typedef void (VKAPI_PTR *PFN_vkCmdClearAttachments)(VkCommandBuffer commandBuffer, uint32_t attachmentCount, const VkClearAttachment* pAttachments, uint32_t rectCount, const VkClearRect* pRects);
+typedef void (VKAPI_PTR *PFN_vkCmdResolveImage)(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage, VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageResolve* pRegions);
+typedef void (VKAPI_PTR *PFN_vkCmdSetEvent)(VkCommandBuffer commandBuffer, VkEvent event, VkPipelineStageFlags stageMask);
+typedef void (VKAPI_PTR *PFN_vkCmdResetEvent)(VkCommandBuffer commandBuffer, VkEvent event, VkPipelineStageFlags stageMask);
+typedef void (VKAPI_PTR *PFN_vkCmdWaitEvents)(VkCommandBuffer commandBuffer, uint32_t eventCount, const VkEvent* pEvents, VkPipelineStageFlags srcStageMask, VkPipelineStageFlags dstStageMask, uint32_t memoryBarrierCount, const VkMemoryBarrier* pMemoryBarriers, uint32_t bufferMemoryBarrierCount, const VkBufferMemoryBarrier* pBufferMemoryBarriers, uint32_t imageMemoryBarrierCount, const VkImageMemoryBarrier* pImageMemoryBarriers);
+typedef void (VKAPI_PTR *PFN_vkCmdPipelineBarrier)(VkCommandBuffer commandBuffer, VkPipelineStageFlags srcStageMask, VkPipelineStageFlags dstStageMask, VkDependencyFlags dependencyFlags, uint32_t memoryBarrierCount, const VkMemoryBarrier* pMemoryBarriers, uint32_t bufferMemoryBarrierCount, const VkBufferMemoryBarrier* pBufferMemoryBarriers, uint32_t imageMemoryBarrierCount, const VkImageMemoryBarrier* pImageMemoryBarriers);
+typedef void (VKAPI_PTR *PFN_vkCmdBeginQuery)(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t query, VkQueryControlFlags flags);
+typedef void (VKAPI_PTR *PFN_vkCmdEndQuery)(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t query);
+typedef void (VKAPI_PTR *PFN_vkCmdResetQueryPool)(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t firstQuery, uint32_t queryCount);
+typedef void (VKAPI_PTR *PFN_vkCmdWriteTimestamp)(VkCommandBuffer commandBuffer, VkPipelineStageFlagBits pipelineStage, VkQueryPool queryPool, uint32_t query);
+typedef void (VKAPI_PTR *PFN_vkCmdCopyQueryPoolResults)(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t firstQuery, uint32_t queryCount, VkBuffer dstBuffer, VkDeviceSize dstOffset, VkDeviceSize stride, VkQueryResultFlags flags);
+typedef void (VKAPI_PTR *PFN_vkCmdPushConstants)(VkCommandBuffer commandBuffer, VkPipelineLayout layout, VkShaderStageFlags stageFlags, uint32_t offset, uint32_t size, const void* pValues);
+typedef void (VKAPI_PTR *PFN_vkCmdBeginRenderPass)(VkCommandBuffer commandBuffer, const VkRenderPassBeginInfo* pRenderPassBegin, VkSubpassContents contents);
+typedef void (VKAPI_PTR *PFN_vkCmdNextSubpass)(VkCommandBuffer commandBuffer, VkSubpassContents contents);
+typedef void (VKAPI_PTR *PFN_vkCmdEndRenderPass)(VkCommandBuffer commandBuffer);
+typedef void (VKAPI_PTR *PFN_vkCmdExecuteCommands)(VkCommandBuffer commandBuffer, uint32_t commandBufferCount, const VkCommandBuffer* pCommandBuffers);
+
+#ifndef VK_NO_PROTOTYPES
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateInstance(
+ const VkInstanceCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkInstance* pInstance);
+
+VKAPI_ATTR void VKAPI_CALL vkDestroyInstance(
+ VkInstance instance,
+ const VkAllocationCallbacks* pAllocator);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkEnumeratePhysicalDevices(
+ VkInstance instance,
+ uint32_t* pPhysicalDeviceCount,
+ VkPhysicalDevice* pPhysicalDevices);
+
+VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceFeatures(
+ VkPhysicalDevice physicalDevice,
+ VkPhysicalDeviceFeatures* pFeatures);
+
+VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceFormatProperties(
+ VkPhysicalDevice physicalDevice,
+ VkFormat format,
+ VkFormatProperties* pFormatProperties);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceImageFormatProperties(
+ VkPhysicalDevice physicalDevice,
+ VkFormat format,
+ VkImageType type,
+ VkImageTiling tiling,
+ VkImageUsageFlags usage,
+ VkImageCreateFlags flags,
+ VkImageFormatProperties* pImageFormatProperties);
+
+VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceProperties(
+ VkPhysicalDevice physicalDevice,
+ VkPhysicalDeviceProperties* pProperties);
+
+VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceQueueFamilyProperties(
+ VkPhysicalDevice physicalDevice,
+ uint32_t* pQueueFamilyPropertyCount,
+ VkQueueFamilyProperties* pQueueFamilyProperties);
+
+VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceMemoryProperties(
+ VkPhysicalDevice physicalDevice,
+ VkPhysicalDeviceMemoryProperties* pMemoryProperties);
+
+VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetInstanceProcAddr(
+ VkInstance instance,
+ const char* pName);
+
+VKAPI_ATTR PFN_vkVoidFunction VKAPI_CALL vkGetDeviceProcAddr(
+ VkDevice device,
+ const char* pName);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateDevice(
+ VkPhysicalDevice physicalDevice,
+ const VkDeviceCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkDevice* pDevice);
+
+VKAPI_ATTR void VKAPI_CALL vkDestroyDevice(
+ VkDevice device,
+ const VkAllocationCallbacks* pAllocator);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateInstanceExtensionProperties(
+ const char* pLayerName,
+ uint32_t* pPropertyCount,
+ VkExtensionProperties* pProperties);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceExtensionProperties(
+ VkPhysicalDevice physicalDevice,
+ const char* pLayerName,
+ uint32_t* pPropertyCount,
+ VkExtensionProperties* pProperties);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateInstanceLayerProperties(
+ uint32_t* pPropertyCount,
+ VkLayerProperties* pProperties);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkEnumerateDeviceLayerProperties(
+ VkPhysicalDevice physicalDevice,
+ uint32_t* pPropertyCount,
+ VkLayerProperties* pProperties);
+
+VKAPI_ATTR void VKAPI_CALL vkGetDeviceQueue(
+ VkDevice device,
+ uint32_t queueFamilyIndex,
+ uint32_t queueIndex,
+ VkQueue* pQueue);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkQueueSubmit(
+ VkQueue queue,
+ uint32_t submitCount,
+ const VkSubmitInfo* pSubmits,
+ VkFence fence);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkQueueWaitIdle(
+ VkQueue queue);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkDeviceWaitIdle(
+ VkDevice device);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkAllocateMemory(
+ VkDevice device,
+ const VkMemoryAllocateInfo* pAllocateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkDeviceMemory* pMemory);
+
+VKAPI_ATTR void VKAPI_CALL vkFreeMemory(
+ VkDevice device,
+ VkDeviceMemory memory,
+ const VkAllocationCallbacks* pAllocator);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkMapMemory(
+ VkDevice device,
+ VkDeviceMemory memory,
+ VkDeviceSize offset,
+ VkDeviceSize size,
+ VkMemoryMapFlags flags,
+ void** ppData);
+
+VKAPI_ATTR void VKAPI_CALL vkUnmapMemory(
+ VkDevice device,
+ VkDeviceMemory memory);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkFlushMappedMemoryRanges(
+ VkDevice device,
+ uint32_t memoryRangeCount,
+ const VkMappedMemoryRange* pMemoryRanges);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkInvalidateMappedMemoryRanges(
+ VkDevice device,
+ uint32_t memoryRangeCount,
+ const VkMappedMemoryRange* pMemoryRanges);
+
+VKAPI_ATTR void VKAPI_CALL vkGetDeviceMemoryCommitment(
+ VkDevice device,
+ VkDeviceMemory memory,
+ VkDeviceSize* pCommittedMemoryInBytes);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkBindBufferMemory(
+ VkDevice device,
+ VkBuffer buffer,
+ VkDeviceMemory memory,
+ VkDeviceSize memoryOffset);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkBindImageMemory(
+ VkDevice device,
+ VkImage image,
+ VkDeviceMemory memory,
+ VkDeviceSize memoryOffset);
+
+VKAPI_ATTR void VKAPI_CALL vkGetBufferMemoryRequirements(
+ VkDevice device,
+ VkBuffer buffer,
+ VkMemoryRequirements* pMemoryRequirements);
+
+VKAPI_ATTR void VKAPI_CALL vkGetImageMemoryRequirements(
+ VkDevice device,
+ VkImage image,
+ VkMemoryRequirements* pMemoryRequirements);
+
+VKAPI_ATTR void VKAPI_CALL vkGetImageSparseMemoryRequirements(
+ VkDevice device,
+ VkImage image,
+ uint32_t* pSparseMemoryRequirementCount,
+ VkSparseImageMemoryRequirements* pSparseMemoryRequirements);
+
+VKAPI_ATTR void VKAPI_CALL vkGetPhysicalDeviceSparseImageFormatProperties(
+ VkPhysicalDevice physicalDevice,
+ VkFormat format,
+ VkImageType type,
+ VkSampleCountFlagBits samples,
+ VkImageUsageFlags usage,
+ VkImageTiling tiling,
+ uint32_t* pPropertyCount,
+ VkSparseImageFormatProperties* pProperties);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkQueueBindSparse(
+ VkQueue queue,
+ uint32_t bindInfoCount,
+ const VkBindSparseInfo* pBindInfo,
+ VkFence fence);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateFence(
+ VkDevice device,
+ const VkFenceCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkFence* pFence);
+
+VKAPI_ATTR void VKAPI_CALL vkDestroyFence(
+ VkDevice device,
+ VkFence fence,
+ const VkAllocationCallbacks* pAllocator);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkResetFences(
+ VkDevice device,
+ uint32_t fenceCount,
+ const VkFence* pFences);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkGetFenceStatus(
+ VkDevice device,
+ VkFence fence);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkWaitForFences(
+ VkDevice device,
+ uint32_t fenceCount,
+ const VkFence* pFences,
+ VkBool32 waitAll,
+ uint64_t timeout);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateSemaphore(
+ VkDevice device,
+ const VkSemaphoreCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkSemaphore* pSemaphore);
+
+VKAPI_ATTR void VKAPI_CALL vkDestroySemaphore(
+ VkDevice device,
+ VkSemaphore semaphore,
+ const VkAllocationCallbacks* pAllocator);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateEvent(
+ VkDevice device,
+ const VkEventCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkEvent* pEvent);
+
+VKAPI_ATTR void VKAPI_CALL vkDestroyEvent(
+ VkDevice device,
+ VkEvent event,
+ const VkAllocationCallbacks* pAllocator);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkGetEventStatus(
+ VkDevice device,
+ VkEvent event);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkSetEvent(
+ VkDevice device,
+ VkEvent event);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkResetEvent(
+ VkDevice device,
+ VkEvent event);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateQueryPool(
+ VkDevice device,
+ const VkQueryPoolCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkQueryPool* pQueryPool);
+
+VKAPI_ATTR void VKAPI_CALL vkDestroyQueryPool(
+ VkDevice device,
+ VkQueryPool queryPool,
+ const VkAllocationCallbacks* pAllocator);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkGetQueryPoolResults(
+ VkDevice device,
+ VkQueryPool queryPool,
+ uint32_t firstQuery,
+ uint32_t queryCount,
+ size_t dataSize,
+ void* pData,
+ VkDeviceSize stride,
+ VkQueryResultFlags flags);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateBuffer(
+ VkDevice device,
+ const VkBufferCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkBuffer* pBuffer);
+
+VKAPI_ATTR void VKAPI_CALL vkDestroyBuffer(
+ VkDevice device,
+ VkBuffer buffer,
+ const VkAllocationCallbacks* pAllocator);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateBufferView(
+ VkDevice device,
+ const VkBufferViewCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkBufferView* pView);
+
+VKAPI_ATTR void VKAPI_CALL vkDestroyBufferView(
+ VkDevice device,
+ VkBufferView bufferView,
+ const VkAllocationCallbacks* pAllocator);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateImage(
+ VkDevice device,
+ const VkImageCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkImage* pImage);
+
+VKAPI_ATTR void VKAPI_CALL vkDestroyImage(
+ VkDevice device,
+ VkImage image,
+ const VkAllocationCallbacks* pAllocator);
+
+VKAPI_ATTR void VKAPI_CALL vkGetImageSubresourceLayout(
+ VkDevice device,
+ VkImage image,
+ const VkImageSubresource* pSubresource,
+ VkSubresourceLayout* pLayout);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateImageView(
+ VkDevice device,
+ const VkImageViewCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkImageView* pView);
+
+VKAPI_ATTR void VKAPI_CALL vkDestroyImageView(
+ VkDevice device,
+ VkImageView imageView,
+ const VkAllocationCallbacks* pAllocator);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateShaderModule(
+ VkDevice device,
+ const VkShaderModuleCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkShaderModule* pShaderModule);
+
+VKAPI_ATTR void VKAPI_CALL vkDestroyShaderModule(
+ VkDevice device,
+ VkShaderModule shaderModule,
+ const VkAllocationCallbacks* pAllocator);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkCreatePipelineCache(
+ VkDevice device,
+ const VkPipelineCacheCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkPipelineCache* pPipelineCache);
+
+VKAPI_ATTR void VKAPI_CALL vkDestroyPipelineCache(
+ VkDevice device,
+ VkPipelineCache pipelineCache,
+ const VkAllocationCallbacks* pAllocator);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkGetPipelineCacheData(
+ VkDevice device,
+ VkPipelineCache pipelineCache,
+ size_t* pDataSize,
+ void* pData);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkMergePipelineCaches(
+ VkDevice device,
+ VkPipelineCache dstCache,
+ uint32_t srcCacheCount,
+ const VkPipelineCache* pSrcCaches);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateGraphicsPipelines(
+ VkDevice device,
+ VkPipelineCache pipelineCache,
+ uint32_t createInfoCount,
+ const VkGraphicsPipelineCreateInfo* pCreateInfos,
+ const VkAllocationCallbacks* pAllocator,
+ VkPipeline* pPipelines);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateComputePipelines(
+ VkDevice device,
+ VkPipelineCache pipelineCache,
+ uint32_t createInfoCount,
+ const VkComputePipelineCreateInfo* pCreateInfos,
+ const VkAllocationCallbacks* pAllocator,
+ VkPipeline* pPipelines);
+
+VKAPI_ATTR void VKAPI_CALL vkDestroyPipeline(
+ VkDevice device,
+ VkPipeline pipeline,
+ const VkAllocationCallbacks* pAllocator);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkCreatePipelineLayout(
+ VkDevice device,
+ const VkPipelineLayoutCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkPipelineLayout* pPipelineLayout);
+
+VKAPI_ATTR void VKAPI_CALL vkDestroyPipelineLayout(
+ VkDevice device,
+ VkPipelineLayout pipelineLayout,
+ const VkAllocationCallbacks* pAllocator);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateSampler(
+ VkDevice device,
+ const VkSamplerCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkSampler* pSampler);
+
+VKAPI_ATTR void VKAPI_CALL vkDestroySampler(
+ VkDevice device,
+ VkSampler sampler,
+ const VkAllocationCallbacks* pAllocator);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateDescriptorSetLayout(
+ VkDevice device,
+ const VkDescriptorSetLayoutCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkDescriptorSetLayout* pSetLayout);
+
+VKAPI_ATTR void VKAPI_CALL vkDestroyDescriptorSetLayout(
+ VkDevice device,
+ VkDescriptorSetLayout descriptorSetLayout,
+ const VkAllocationCallbacks* pAllocator);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateDescriptorPool(
+ VkDevice device,
+ const VkDescriptorPoolCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkDescriptorPool* pDescriptorPool);
+
+VKAPI_ATTR void VKAPI_CALL vkDestroyDescriptorPool(
+ VkDevice device,
+ VkDescriptorPool descriptorPool,
+ const VkAllocationCallbacks* pAllocator);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkResetDescriptorPool(
+ VkDevice device,
+ VkDescriptorPool descriptorPool,
+ VkDescriptorPoolResetFlags flags);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkAllocateDescriptorSets(
+ VkDevice device,
+ const VkDescriptorSetAllocateInfo* pAllocateInfo,
+ VkDescriptorSet* pDescriptorSets);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkFreeDescriptorSets(
+ VkDevice device,
+ VkDescriptorPool descriptorPool,
+ uint32_t descriptorSetCount,
+ const VkDescriptorSet* pDescriptorSets);
+
+VKAPI_ATTR void VKAPI_CALL vkUpdateDescriptorSets(
+ VkDevice device,
+ uint32_t descriptorWriteCount,
+ const VkWriteDescriptorSet* pDescriptorWrites,
+ uint32_t descriptorCopyCount,
+ const VkCopyDescriptorSet* pDescriptorCopies);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateFramebuffer(
+ VkDevice device,
+ const VkFramebufferCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkFramebuffer* pFramebuffer);
+
+VKAPI_ATTR void VKAPI_CALL vkDestroyFramebuffer(
+ VkDevice device,
+ VkFramebuffer framebuffer,
+ const VkAllocationCallbacks* pAllocator);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateRenderPass(
+ VkDevice device,
+ const VkRenderPassCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkRenderPass* pRenderPass);
+
+VKAPI_ATTR void VKAPI_CALL vkDestroyRenderPass(
+ VkDevice device,
+ VkRenderPass renderPass,
+ const VkAllocationCallbacks* pAllocator);
+
+VKAPI_ATTR void VKAPI_CALL vkGetRenderAreaGranularity(
+ VkDevice device,
+ VkRenderPass renderPass,
+ VkExtent2D* pGranularity);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateCommandPool(
+ VkDevice device,
+ const VkCommandPoolCreateInfo* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkCommandPool* pCommandPool);
+
+VKAPI_ATTR void VKAPI_CALL vkDestroyCommandPool(
+ VkDevice device,
+ VkCommandPool commandPool,
+ const VkAllocationCallbacks* pAllocator);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkResetCommandPool(
+ VkDevice device,
+ VkCommandPool commandPool,
+ VkCommandPoolResetFlags flags);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkAllocateCommandBuffers(
+ VkDevice device,
+ const VkCommandBufferAllocateInfo* pAllocateInfo,
+ VkCommandBuffer* pCommandBuffers);
+
+VKAPI_ATTR void VKAPI_CALL vkFreeCommandBuffers(
+ VkDevice device,
+ VkCommandPool commandPool,
+ uint32_t commandBufferCount,
+ const VkCommandBuffer* pCommandBuffers);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkBeginCommandBuffer(
+ VkCommandBuffer commandBuffer,
+ const VkCommandBufferBeginInfo* pBeginInfo);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkEndCommandBuffer(
+ VkCommandBuffer commandBuffer);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkResetCommandBuffer(
+ VkCommandBuffer commandBuffer,
+ VkCommandBufferResetFlags flags);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdBindPipeline(
+ VkCommandBuffer commandBuffer,
+ VkPipelineBindPoint pipelineBindPoint,
+ VkPipeline pipeline);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdSetViewport(
+ VkCommandBuffer commandBuffer,
+ uint32_t firstViewport,
+ uint32_t viewportCount,
+ const VkViewport* pViewports);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdSetScissor(
+ VkCommandBuffer commandBuffer,
+ uint32_t firstScissor,
+ uint32_t scissorCount,
+ const VkRect2D* pScissors);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdSetLineWidth(
+ VkCommandBuffer commandBuffer,
+ float lineWidth);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdSetDepthBias(
+ VkCommandBuffer commandBuffer,
+ float depthBiasConstantFactor,
+ float depthBiasClamp,
+ float depthBiasSlopeFactor);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdSetBlendConstants(
+ VkCommandBuffer commandBuffer,
+ const float blendConstants[4]);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdSetDepthBounds(
+ VkCommandBuffer commandBuffer,
+ float minDepthBounds,
+ float maxDepthBounds);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdSetStencilCompareMask(
+ VkCommandBuffer commandBuffer,
+ VkStencilFaceFlags faceMask,
+ uint32_t compareMask);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdSetStencilWriteMask(
+ VkCommandBuffer commandBuffer,
+ VkStencilFaceFlags faceMask,
+ uint32_t writeMask);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdSetStencilReference(
+ VkCommandBuffer commandBuffer,
+ VkStencilFaceFlags faceMask,
+ uint32_t reference);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdBindDescriptorSets(
+ VkCommandBuffer commandBuffer,
+ VkPipelineBindPoint pipelineBindPoint,
+ VkPipelineLayout layout,
+ uint32_t firstSet,
+ uint32_t descriptorSetCount,
+ const VkDescriptorSet* pDescriptorSets,
+ uint32_t dynamicOffsetCount,
+ const uint32_t* pDynamicOffsets);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdBindIndexBuffer(
+ VkCommandBuffer commandBuffer,
+ VkBuffer buffer,
+ VkDeviceSize offset,
+ VkIndexType indexType);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdBindVertexBuffers(
+ VkCommandBuffer commandBuffer,
+ uint32_t firstBinding,
+ uint32_t bindingCount,
+ const VkBuffer* pBuffers,
+ const VkDeviceSize* pOffsets);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdDraw(
+ VkCommandBuffer commandBuffer,
+ uint32_t vertexCount,
+ uint32_t instanceCount,
+ uint32_t firstVertex,
+ uint32_t firstInstance);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdDrawIndexed(
+ VkCommandBuffer commandBuffer,
+ uint32_t indexCount,
+ uint32_t instanceCount,
+ uint32_t firstIndex,
+ int32_t vertexOffset,
+ uint32_t firstInstance);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdDrawIndirect(
+ VkCommandBuffer commandBuffer,
+ VkBuffer buffer,
+ VkDeviceSize offset,
+ uint32_t drawCount,
+ uint32_t stride);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdDrawIndexedIndirect(
+ VkCommandBuffer commandBuffer,
+ VkBuffer buffer,
+ VkDeviceSize offset,
+ uint32_t drawCount,
+ uint32_t stride);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdDispatch(
+ VkCommandBuffer commandBuffer,
+ uint32_t x,
+ uint32_t y,
+ uint32_t z);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdDispatchIndirect(
+ VkCommandBuffer commandBuffer,
+ VkBuffer buffer,
+ VkDeviceSize offset);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdCopyBuffer(
+ VkCommandBuffer commandBuffer,
+ VkBuffer srcBuffer,
+ VkBuffer dstBuffer,
+ uint32_t regionCount,
+ const VkBufferCopy* pRegions);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdCopyImage(
+ VkCommandBuffer commandBuffer,
+ VkImage srcImage,
+ VkImageLayout srcImageLayout,
+ VkImage dstImage,
+ VkImageLayout dstImageLayout,
+ uint32_t regionCount,
+ const VkImageCopy* pRegions);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdBlitImage(
+ VkCommandBuffer commandBuffer,
+ VkImage srcImage,
+ VkImageLayout srcImageLayout,
+ VkImage dstImage,
+ VkImageLayout dstImageLayout,
+ uint32_t regionCount,
+ const VkImageBlit* pRegions,
+ VkFilter filter);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdCopyBufferToImage(
+ VkCommandBuffer commandBuffer,
+ VkBuffer srcBuffer,
+ VkImage dstImage,
+ VkImageLayout dstImageLayout,
+ uint32_t regionCount,
+ const VkBufferImageCopy* pRegions);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdCopyImageToBuffer(
+ VkCommandBuffer commandBuffer,
+ VkImage srcImage,
+ VkImageLayout srcImageLayout,
+ VkBuffer dstBuffer,
+ uint32_t regionCount,
+ const VkBufferImageCopy* pRegions);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdUpdateBuffer(
+ VkCommandBuffer commandBuffer,
+ VkBuffer dstBuffer,
+ VkDeviceSize dstOffset,
+ VkDeviceSize dataSize,
+ const uint32_t* pData);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdFillBuffer(
+ VkCommandBuffer commandBuffer,
+ VkBuffer dstBuffer,
+ VkDeviceSize dstOffset,
+ VkDeviceSize size,
+ uint32_t data);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdClearColorImage(
+ VkCommandBuffer commandBuffer,
+ VkImage image,
+ VkImageLayout imageLayout,
+ const VkClearColorValue* pColor,
+ uint32_t rangeCount,
+ const VkImageSubresourceRange* pRanges);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdClearDepthStencilImage(
+ VkCommandBuffer commandBuffer,
+ VkImage image,
+ VkImageLayout imageLayout,
+ const VkClearDepthStencilValue* pDepthStencil,
+ uint32_t rangeCount,
+ const VkImageSubresourceRange* pRanges);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdClearAttachments(
+ VkCommandBuffer commandBuffer,
+ uint32_t attachmentCount,
+ const VkClearAttachment* pAttachments,
+ uint32_t rectCount,
+ const VkClearRect* pRects);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdResolveImage(
+ VkCommandBuffer commandBuffer,
+ VkImage srcImage,
+ VkImageLayout srcImageLayout,
+ VkImage dstImage,
+ VkImageLayout dstImageLayout,
+ uint32_t regionCount,
+ const VkImageResolve* pRegions);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdSetEvent(
+ VkCommandBuffer commandBuffer,
+ VkEvent event,
+ VkPipelineStageFlags stageMask);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdResetEvent(
+ VkCommandBuffer commandBuffer,
+ VkEvent event,
+ VkPipelineStageFlags stageMask);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdWaitEvents(
+ VkCommandBuffer commandBuffer,
+ uint32_t eventCount,
+ const VkEvent* pEvents,
+ VkPipelineStageFlags srcStageMask,
+ VkPipelineStageFlags dstStageMask,
+ uint32_t memoryBarrierCount,
+ const VkMemoryBarrier* pMemoryBarriers,
+ uint32_t bufferMemoryBarrierCount,
+ const VkBufferMemoryBarrier* pBufferMemoryBarriers,
+ uint32_t imageMemoryBarrierCount,
+ const VkImageMemoryBarrier* pImageMemoryBarriers);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdPipelineBarrier(
+ VkCommandBuffer commandBuffer,
+ VkPipelineStageFlags srcStageMask,
+ VkPipelineStageFlags dstStageMask,
+ VkDependencyFlags dependencyFlags,
+ uint32_t memoryBarrierCount,
+ const VkMemoryBarrier* pMemoryBarriers,
+ uint32_t bufferMemoryBarrierCount,
+ const VkBufferMemoryBarrier* pBufferMemoryBarriers,
+ uint32_t imageMemoryBarrierCount,
+ const VkImageMemoryBarrier* pImageMemoryBarriers);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdBeginQuery(
+ VkCommandBuffer commandBuffer,
+ VkQueryPool queryPool,
+ uint32_t query,
+ VkQueryControlFlags flags);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdEndQuery(
+ VkCommandBuffer commandBuffer,
+ VkQueryPool queryPool,
+ uint32_t query);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdResetQueryPool(
+ VkCommandBuffer commandBuffer,
+ VkQueryPool queryPool,
+ uint32_t firstQuery,
+ uint32_t queryCount);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdWriteTimestamp(
+ VkCommandBuffer commandBuffer,
+ VkPipelineStageFlagBits pipelineStage,
+ VkQueryPool queryPool,
+ uint32_t query);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdCopyQueryPoolResults(
+ VkCommandBuffer commandBuffer,
+ VkQueryPool queryPool,
+ uint32_t firstQuery,
+ uint32_t queryCount,
+ VkBuffer dstBuffer,
+ VkDeviceSize dstOffset,
+ VkDeviceSize stride,
+ VkQueryResultFlags flags);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdPushConstants(
+ VkCommandBuffer commandBuffer,
+ VkPipelineLayout layout,
+ VkShaderStageFlags stageFlags,
+ uint32_t offset,
+ uint32_t size,
+ const void* pValues);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdBeginRenderPass(
+ VkCommandBuffer commandBuffer,
+ const VkRenderPassBeginInfo* pRenderPassBegin,
+ VkSubpassContents contents);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdNextSubpass(
+ VkCommandBuffer commandBuffer,
+ VkSubpassContents contents);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdEndRenderPass(
+ VkCommandBuffer commandBuffer);
+
+VKAPI_ATTR void VKAPI_CALL vkCmdExecuteCommands(
+ VkCommandBuffer commandBuffer,
+ uint32_t commandBufferCount,
+ const VkCommandBuffer* pCommandBuffers);
+#endif
+
+#define VK_KHR_surface 1
+VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkSurfaceKHR)
+
+#define VK_KHR_SURFACE_SPEC_VERSION 25
+#define VK_KHR_SURFACE_EXTENSION_NAME "VK_KHR_surface"
+
+
+typedef enum VkColorSpaceKHR {
+ VK_COLORSPACE_SRGB_NONLINEAR_KHR = 0,
+ VK_COLORSPACE_BEGIN_RANGE = VK_COLORSPACE_SRGB_NONLINEAR_KHR,
+ VK_COLORSPACE_END_RANGE = VK_COLORSPACE_SRGB_NONLINEAR_KHR,
+ VK_COLORSPACE_RANGE_SIZE = (VK_COLORSPACE_SRGB_NONLINEAR_KHR - VK_COLORSPACE_SRGB_NONLINEAR_KHR + 1),
+ VK_COLORSPACE_MAX_ENUM = 0x7FFFFFFF
+} VkColorSpaceKHR;
+
+typedef enum VkPresentModeKHR {
+ VK_PRESENT_MODE_IMMEDIATE_KHR = 0,
+ VK_PRESENT_MODE_MAILBOX_KHR = 1,
+ VK_PRESENT_MODE_FIFO_KHR = 2,
+ VK_PRESENT_MODE_FIFO_RELAXED_KHR = 3,
+ VK_PRESENT_MODE_BEGIN_RANGE = VK_PRESENT_MODE_IMMEDIATE_KHR,
+ VK_PRESENT_MODE_END_RANGE = VK_PRESENT_MODE_FIFO_RELAXED_KHR,
+ VK_PRESENT_MODE_RANGE_SIZE = (VK_PRESENT_MODE_FIFO_RELAXED_KHR - VK_PRESENT_MODE_IMMEDIATE_KHR + 1),
+ VK_PRESENT_MODE_MAX_ENUM = 0x7FFFFFFF
+} VkPresentModeKHR;
+
+
+typedef enum VkSurfaceTransformFlagBitsKHR {
+ VK_SURFACE_TRANSFORM_IDENTITY_BIT_KHR = 0x00000001,
+ VK_SURFACE_TRANSFORM_ROTATE_90_BIT_KHR = 0x00000002,
+ VK_SURFACE_TRANSFORM_ROTATE_180_BIT_KHR = 0x00000004,
+ VK_SURFACE_TRANSFORM_ROTATE_270_BIT_KHR = 0x00000008,
+ VK_SURFACE_TRANSFORM_HORIZONTAL_MIRROR_BIT_KHR = 0x00000010,
+ VK_SURFACE_TRANSFORM_HORIZONTAL_MIRROR_ROTATE_90_BIT_KHR = 0x00000020,
+ VK_SURFACE_TRANSFORM_HORIZONTAL_MIRROR_ROTATE_180_BIT_KHR = 0x00000040,
+ VK_SURFACE_TRANSFORM_HORIZONTAL_MIRROR_ROTATE_270_BIT_KHR = 0x00000080,
+ VK_SURFACE_TRANSFORM_INHERIT_BIT_KHR = 0x00000100,
+} VkSurfaceTransformFlagBitsKHR;
+typedef VkFlags VkSurfaceTransformFlagsKHR;
+
+typedef enum VkCompositeAlphaFlagBitsKHR {
+ VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR = 0x00000001,
+ VK_COMPOSITE_ALPHA_PRE_MULTIPLIED_BIT_KHR = 0x00000002,
+ VK_COMPOSITE_ALPHA_POST_MULTIPLIED_BIT_KHR = 0x00000004,
+ VK_COMPOSITE_ALPHA_INHERIT_BIT_KHR = 0x00000008,
+} VkCompositeAlphaFlagBitsKHR;
+typedef VkFlags VkCompositeAlphaFlagsKHR;
+
+typedef struct VkSurfaceCapabilitiesKHR {
+ uint32_t minImageCount;
+ uint32_t maxImageCount;
+ VkExtent2D currentExtent;
+ VkExtent2D minImageExtent;
+ VkExtent2D maxImageExtent;
+ uint32_t maxImageArrayLayers;
+ VkSurfaceTransformFlagsKHR supportedTransforms;
+ VkSurfaceTransformFlagBitsKHR currentTransform;
+ VkCompositeAlphaFlagsKHR supportedCompositeAlpha;
+ VkImageUsageFlags supportedUsageFlags;
+} VkSurfaceCapabilitiesKHR;
+
+typedef struct VkSurfaceFormatKHR {
+ VkFormat format;
+ VkColorSpaceKHR colorSpace;
+} VkSurfaceFormatKHR;
+
+
+typedef void (VKAPI_PTR *PFN_vkDestroySurfaceKHR)(VkInstance instance, VkSurfaceKHR surface, const VkAllocationCallbacks* pAllocator);
+typedef VkResult (VKAPI_PTR *PFN_vkGetPhysicalDeviceSurfaceSupportKHR)(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, VkSurfaceKHR surface, VkBool32* pSupported);
+typedef VkResult (VKAPI_PTR *PFN_vkGetPhysicalDeviceSurfaceCapabilitiesKHR)(VkPhysicalDevice physicalDevice, VkSurfaceKHR surface, VkSurfaceCapabilitiesKHR* pSurfaceCapabilities);
+typedef VkResult (VKAPI_PTR *PFN_vkGetPhysicalDeviceSurfaceFormatsKHR)(VkPhysicalDevice physicalDevice, VkSurfaceKHR surface, uint32_t* pSurfaceFormatCount, VkSurfaceFormatKHR* pSurfaceFormats);
+typedef VkResult (VKAPI_PTR *PFN_vkGetPhysicalDeviceSurfacePresentModesKHR)(VkPhysicalDevice physicalDevice, VkSurfaceKHR surface, uint32_t* pPresentModeCount, VkPresentModeKHR* pPresentModes);
+
+#ifndef VK_NO_PROTOTYPES
+VKAPI_ATTR void VKAPI_CALL vkDestroySurfaceKHR(
+ VkInstance instance,
+ VkSurfaceKHR surface,
+ const VkAllocationCallbacks* pAllocator);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceSurfaceSupportKHR(
+ VkPhysicalDevice physicalDevice,
+ uint32_t queueFamilyIndex,
+ VkSurfaceKHR surface,
+ VkBool32* pSupported);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceSurfaceCapabilitiesKHR(
+ VkPhysicalDevice physicalDevice,
+ VkSurfaceKHR surface,
+ VkSurfaceCapabilitiesKHR* pSurfaceCapabilities);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceSurfaceFormatsKHR(
+ VkPhysicalDevice physicalDevice,
+ VkSurfaceKHR surface,
+ uint32_t* pSurfaceFormatCount,
+ VkSurfaceFormatKHR* pSurfaceFormats);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceSurfacePresentModesKHR(
+ VkPhysicalDevice physicalDevice,
+ VkSurfaceKHR surface,
+ uint32_t* pPresentModeCount,
+ VkPresentModeKHR* pPresentModes);
+#endif
+
+#define VK_KHR_swapchain 1
+VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkSwapchainKHR)
+
+#define VK_KHR_SWAPCHAIN_SPEC_VERSION 67
+#define VK_KHR_SWAPCHAIN_EXTENSION_NAME "VK_KHR_swapchain"
+
+typedef VkFlags VkSwapchainCreateFlagsKHR;
+
+typedef struct VkSwapchainCreateInfoKHR {
+ VkStructureType sType;
+ const void* pNext;
+ VkSwapchainCreateFlagsKHR flags;
+ VkSurfaceKHR surface;
+ uint32_t minImageCount;
+ VkFormat imageFormat;
+ VkColorSpaceKHR imageColorSpace;
+ VkExtent2D imageExtent;
+ uint32_t imageArrayLayers;
+ VkImageUsageFlags imageUsage;
+ VkSharingMode imageSharingMode;
+ uint32_t queueFamilyIndexCount;
+ const uint32_t* pQueueFamilyIndices;
+ VkSurfaceTransformFlagBitsKHR preTransform;
+ VkCompositeAlphaFlagBitsKHR compositeAlpha;
+ VkPresentModeKHR presentMode;
+ VkBool32 clipped;
+ VkSwapchainKHR oldSwapchain;
+} VkSwapchainCreateInfoKHR;
+
+typedef struct VkPresentInfoKHR {
+ VkStructureType sType;
+ const void* pNext;
+ uint32_t waitSemaphoreCount;
+ const VkSemaphore* pWaitSemaphores;
+ uint32_t swapchainCount;
+ const VkSwapchainKHR* pSwapchains;
+ const uint32_t* pImageIndices;
+ VkResult* pResults;
+} VkPresentInfoKHR;
+
+
+typedef VkResult (VKAPI_PTR *PFN_vkCreateSwapchainKHR)(VkDevice device, const VkSwapchainCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSwapchainKHR* pSwapchain);
+typedef void (VKAPI_PTR *PFN_vkDestroySwapchainKHR)(VkDevice device, VkSwapchainKHR swapchain, const VkAllocationCallbacks* pAllocator);
+typedef VkResult (VKAPI_PTR *PFN_vkGetSwapchainImagesKHR)(VkDevice device, VkSwapchainKHR swapchain, uint32_t* pSwapchainImageCount, VkImage* pSwapchainImages);
+typedef VkResult (VKAPI_PTR *PFN_vkAcquireNextImageKHR)(VkDevice device, VkSwapchainKHR swapchain, uint64_t timeout, VkSemaphore semaphore, VkFence fence, uint32_t* pImageIndex);
+typedef VkResult (VKAPI_PTR *PFN_vkQueuePresentKHR)(VkQueue queue, const VkPresentInfoKHR* pPresentInfo);
+
+#ifndef VK_NO_PROTOTYPES
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateSwapchainKHR(
+ VkDevice device,
+ const VkSwapchainCreateInfoKHR* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkSwapchainKHR* pSwapchain);
+
+VKAPI_ATTR void VKAPI_CALL vkDestroySwapchainKHR(
+ VkDevice device,
+ VkSwapchainKHR swapchain,
+ const VkAllocationCallbacks* pAllocator);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkGetSwapchainImagesKHR(
+ VkDevice device,
+ VkSwapchainKHR swapchain,
+ uint32_t* pSwapchainImageCount,
+ VkImage* pSwapchainImages);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkAcquireNextImageKHR(
+ VkDevice device,
+ VkSwapchainKHR swapchain,
+ uint64_t timeout,
+ VkSemaphore semaphore,
+ VkFence fence,
+ uint32_t* pImageIndex);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkQueuePresentKHR(
+ VkQueue queue,
+ const VkPresentInfoKHR* pPresentInfo);
+#endif
+
+#define VK_KHR_display 1
+VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkDisplayKHR)
+VK_DEFINE_NON_DISPATCHABLE_HANDLE(VkDisplayModeKHR)
+
+#define VK_KHR_DISPLAY_SPEC_VERSION 21
+#define VK_KHR_DISPLAY_EXTENSION_NAME "VK_KHR_display"
+
+
+typedef enum VkDisplayPlaneAlphaFlagBitsKHR {
+ VK_DISPLAY_PLANE_ALPHA_OPAQUE_BIT_KHR = 0x00000001,
+ VK_DISPLAY_PLANE_ALPHA_GLOBAL_BIT_KHR = 0x00000002,
+ VK_DISPLAY_PLANE_ALPHA_PER_PIXEL_BIT_KHR = 0x00000004,
+ VK_DISPLAY_PLANE_ALPHA_PER_PIXEL_PREMULTIPLIED_BIT_KHR = 0x00000008,
+} VkDisplayPlaneAlphaFlagBitsKHR;
+typedef VkFlags VkDisplayModeCreateFlagsKHR;
+typedef VkFlags VkDisplayPlaneAlphaFlagsKHR;
+typedef VkFlags VkDisplaySurfaceCreateFlagsKHR;
+
+typedef struct VkDisplayPropertiesKHR {
+ VkDisplayKHR display;
+ const char* displayName;
+ VkExtent2D physicalDimensions;
+ VkExtent2D physicalResolution;
+ VkSurfaceTransformFlagsKHR supportedTransforms;
+ VkBool32 planeReorderPossible;
+ VkBool32 persistentContent;
+} VkDisplayPropertiesKHR;
+
+typedef struct VkDisplayModeParametersKHR {
+ VkExtent2D visibleRegion;
+ uint32_t refreshRate;
+} VkDisplayModeParametersKHR;
+
+typedef struct VkDisplayModePropertiesKHR {
+ VkDisplayModeKHR displayMode;
+ VkDisplayModeParametersKHR parameters;
+} VkDisplayModePropertiesKHR;
+
+typedef struct VkDisplayModeCreateInfoKHR {
+ VkStructureType sType;
+ const void* pNext;
+ VkDisplayModeCreateFlagsKHR flags;
+ VkDisplayModeParametersKHR parameters;
+} VkDisplayModeCreateInfoKHR;
+
+typedef struct VkDisplayPlaneCapabilitiesKHR {
+ VkDisplayPlaneAlphaFlagsKHR supportedAlpha;
+ VkOffset2D minSrcPosition;
+ VkOffset2D maxSrcPosition;
+ VkExtent2D minSrcExtent;
+ VkExtent2D maxSrcExtent;
+ VkOffset2D minDstPosition;
+ VkOffset2D maxDstPosition;
+ VkExtent2D minDstExtent;
+ VkExtent2D maxDstExtent;
+} VkDisplayPlaneCapabilitiesKHR;
+
+typedef struct VkDisplayPlanePropertiesKHR {
+ VkDisplayKHR currentDisplay;
+ uint32_t currentStackIndex;
+} VkDisplayPlanePropertiesKHR;
+
+typedef struct VkDisplaySurfaceCreateInfoKHR {
+ VkStructureType sType;
+ const void* pNext;
+ VkDisplaySurfaceCreateFlagsKHR flags;
+ VkDisplayModeKHR displayMode;
+ uint32_t planeIndex;
+ uint32_t planeStackIndex;
+ VkSurfaceTransformFlagBitsKHR transform;
+ float globalAlpha;
+ VkDisplayPlaneAlphaFlagBitsKHR alphaMode;
+ VkExtent2D imageExtent;
+} VkDisplaySurfaceCreateInfoKHR;
+
+
+typedef VkResult (VKAPI_PTR *PFN_vkGetPhysicalDeviceDisplayPropertiesKHR)(VkPhysicalDevice physicalDevice, uint32_t* pPropertyCount, VkDisplayPropertiesKHR* pProperties);
+typedef VkResult (VKAPI_PTR *PFN_vkGetPhysicalDeviceDisplayPlanePropertiesKHR)(VkPhysicalDevice physicalDevice, uint32_t* pPropertyCount, VkDisplayPlanePropertiesKHR* pProperties);
+typedef VkResult (VKAPI_PTR *PFN_vkGetDisplayPlaneSupportedDisplaysKHR)(VkPhysicalDevice physicalDevice, uint32_t planeIndex, uint32_t* pDisplayCount, VkDisplayKHR* pDisplays);
+typedef VkResult (VKAPI_PTR *PFN_vkGetDisplayModePropertiesKHR)(VkPhysicalDevice physicalDevice, VkDisplayKHR display, uint32_t* pPropertyCount, VkDisplayModePropertiesKHR* pProperties);
+typedef VkResult (VKAPI_PTR *PFN_vkCreateDisplayModeKHR)(VkPhysicalDevice physicalDevice, VkDisplayKHR display, const VkDisplayModeCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDisplayModeKHR* pMode);
+typedef VkResult (VKAPI_PTR *PFN_vkGetDisplayPlaneCapabilitiesKHR)(VkPhysicalDevice physicalDevice, VkDisplayModeKHR mode, uint32_t planeIndex, VkDisplayPlaneCapabilitiesKHR* pCapabilities);
+typedef VkResult (VKAPI_PTR *PFN_vkCreateDisplayPlaneSurfaceKHR)(VkInstance instance, const VkDisplaySurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
+
+#ifndef VK_NO_PROTOTYPES
+VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceDisplayPropertiesKHR(
+ VkPhysicalDevice physicalDevice,
+ uint32_t* pPropertyCount,
+ VkDisplayPropertiesKHR* pProperties);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkGetPhysicalDeviceDisplayPlanePropertiesKHR(
+ VkPhysicalDevice physicalDevice,
+ uint32_t* pPropertyCount,
+ VkDisplayPlanePropertiesKHR* pProperties);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkGetDisplayPlaneSupportedDisplaysKHR(
+ VkPhysicalDevice physicalDevice,
+ uint32_t planeIndex,
+ uint32_t* pDisplayCount,
+ VkDisplayKHR* pDisplays);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkGetDisplayModePropertiesKHR(
+ VkPhysicalDevice physicalDevice,
+ VkDisplayKHR display,
+ uint32_t* pPropertyCount,
+ VkDisplayModePropertiesKHR* pProperties);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateDisplayModeKHR(
+ VkPhysicalDevice physicalDevice,
+ VkDisplayKHR display,
+ const VkDisplayModeCreateInfoKHR* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkDisplayModeKHR* pMode);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkGetDisplayPlaneCapabilitiesKHR(
+ VkPhysicalDevice physicalDevice,
+ VkDisplayModeKHR mode,
+ uint32_t planeIndex,
+ VkDisplayPlaneCapabilitiesKHR* pCapabilities);
+
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateDisplayPlaneSurfaceKHR(
+ VkInstance instance,
+ const VkDisplaySurfaceCreateInfoKHR* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkSurfaceKHR* pSurface);
+#endif
+
+#define VK_KHR_display_swapchain 1
+#define VK_KHR_DISPLAY_SWAPCHAIN_SPEC_VERSION 9
+#define VK_KHR_DISPLAY_SWAPCHAIN_EXTENSION_NAME "VK_KHR_display_swapchain"
+
+typedef struct VkDisplayPresentInfoKHR {
+ VkStructureType sType;
+ const void* pNext;
+ VkRect2D srcRect;
+ VkRect2D dstRect;
+ VkBool32 persistent;
+} VkDisplayPresentInfoKHR;
+
+
+typedef VkResult (VKAPI_PTR *PFN_vkCreateSharedSwapchainsKHR)(VkDevice device, uint32_t swapchainCount, const VkSwapchainCreateInfoKHR* pCreateInfos, const VkAllocationCallbacks* pAllocator, VkSwapchainKHR* pSwapchains);
+
+#ifndef VK_NO_PROTOTYPES
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateSharedSwapchainsKHR(
+ VkDevice device,
+ uint32_t swapchainCount,
+ const VkSwapchainCreateInfoKHR* pCreateInfos,
+ const VkAllocationCallbacks* pAllocator,
+ VkSwapchainKHR* pSwapchains);
+#endif
+
+#ifdef VK_USE_PLATFORM_XLIB_KHR
+#define VK_KHR_xlib_surface 1
+#include <X11/Xlib.h>
+
+#define VK_KHR_XLIB_SURFACE_SPEC_VERSION 6
+#define VK_KHR_XLIB_SURFACE_EXTENSION_NAME "VK_KHR_xlib_surface"
+
+typedef VkFlags VkXlibSurfaceCreateFlagsKHR;
+
+typedef struct VkXlibSurfaceCreateInfoKHR {
+ VkStructureType sType;
+ const void* pNext;
+ VkXlibSurfaceCreateFlagsKHR flags;
+ Display* dpy;
+ Window window;
+} VkXlibSurfaceCreateInfoKHR;
+
+
+typedef VkResult (VKAPI_PTR *PFN_vkCreateXlibSurfaceKHR)(VkInstance instance, const VkXlibSurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
+typedef VkBool32 (VKAPI_PTR *PFN_vkGetPhysicalDeviceXlibPresentationSupportKHR)(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, Display* dpy, VisualID visualID);
+
+#ifndef VK_NO_PROTOTYPES
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateXlibSurfaceKHR(
+ VkInstance instance,
+ const VkXlibSurfaceCreateInfoKHR* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkSurfaceKHR* pSurface);
+
+VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceXlibPresentationSupportKHR(
+ VkPhysicalDevice physicalDevice,
+ uint32_t queueFamilyIndex,
+ Display* dpy,
+ VisualID visualID);
+#endif
+#endif /* VK_USE_PLATFORM_XLIB_KHR */
+
+#ifdef VK_USE_PLATFORM_XCB_KHR
+#define VK_KHR_xcb_surface 1
+#include <xcb/xcb.h>
+
+#define VK_KHR_XCB_SURFACE_SPEC_VERSION 6
+#define VK_KHR_XCB_SURFACE_EXTENSION_NAME "VK_KHR_xcb_surface"
+
+typedef VkFlags VkXcbSurfaceCreateFlagsKHR;
+
+typedef struct VkXcbSurfaceCreateInfoKHR {
+ VkStructureType sType;
+ const void* pNext;
+ VkXcbSurfaceCreateFlagsKHR flags;
+ xcb_connection_t* connection;
+ xcb_window_t window;
+} VkXcbSurfaceCreateInfoKHR;
+
+
+typedef VkResult (VKAPI_PTR *PFN_vkCreateXcbSurfaceKHR)(VkInstance instance, const VkXcbSurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
+typedef VkBool32 (VKAPI_PTR *PFN_vkGetPhysicalDeviceXcbPresentationSupportKHR)(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, xcb_connection_t* connection, xcb_visualid_t visual_id);
+
+#ifndef VK_NO_PROTOTYPES
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateXcbSurfaceKHR(
+ VkInstance instance,
+ const VkXcbSurfaceCreateInfoKHR* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkSurfaceKHR* pSurface);
+
+VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceXcbPresentationSupportKHR(
+ VkPhysicalDevice physicalDevice,
+ uint32_t queueFamilyIndex,
+ xcb_connection_t* connection,
+ xcb_visualid_t visual_id);
+#endif
+#endif /* VK_USE_PLATFORM_XCB_KHR */
+
+#ifdef VK_USE_PLATFORM_WAYLAND_KHR
+#define VK_KHR_wayland_surface 1
+#include <wayland-client.h>
+
+#define VK_KHR_WAYLAND_SURFACE_SPEC_VERSION 5
+#define VK_KHR_WAYLAND_SURFACE_EXTENSION_NAME "VK_KHR_wayland_surface"
+
+typedef VkFlags VkWaylandSurfaceCreateFlagsKHR;
+
+typedef struct VkWaylandSurfaceCreateInfoKHR {
+ VkStructureType sType;
+ const void* pNext;
+ VkWaylandSurfaceCreateFlagsKHR flags;
+ struct wl_display* display;
+ struct wl_surface* surface;
+} VkWaylandSurfaceCreateInfoKHR;
+
+
+typedef VkResult (VKAPI_PTR *PFN_vkCreateWaylandSurfaceKHR)(VkInstance instance, const VkWaylandSurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
+typedef VkBool32 (VKAPI_PTR *PFN_vkGetPhysicalDeviceWaylandPresentationSupportKHR)(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, struct wl_display* display);
+
+#ifndef VK_NO_PROTOTYPES
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateWaylandSurfaceKHR(
+ VkInstance instance,
+ const VkWaylandSurfaceCreateInfoKHR* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkSurfaceKHR* pSurface);
+
+VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceWaylandPresentationSupportKHR(
+ VkPhysicalDevice physicalDevice,
+ uint32_t queueFamilyIndex,
+ struct wl_display* display);
+#endif
+#endif /* VK_USE_PLATFORM_WAYLAND_KHR */
+
+#ifdef VK_USE_PLATFORM_MIR_KHR
+#define VK_KHR_mir_surface 1
+#include <mir_toolkit/client_types.h>
+
+#define VK_KHR_MIR_SURFACE_SPEC_VERSION 4
+#define VK_KHR_MIR_SURFACE_EXTENSION_NAME "VK_KHR_mir_surface"
+
+typedef VkFlags VkMirSurfaceCreateFlagsKHR;
+
+typedef struct VkMirSurfaceCreateInfoKHR {
+ VkStructureType sType;
+ const void* pNext;
+ VkMirSurfaceCreateFlagsKHR flags;
+ MirConnection* connection;
+ MirSurface* mirSurface;
+} VkMirSurfaceCreateInfoKHR;
+
+
+typedef VkResult (VKAPI_PTR *PFN_vkCreateMirSurfaceKHR)(VkInstance instance, const VkMirSurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
+typedef VkBool32 (VKAPI_PTR *PFN_vkGetPhysicalDeviceMirPresentationSupportKHR)(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, MirConnection* connection);
+
+#ifndef VK_NO_PROTOTYPES
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateMirSurfaceKHR(
+ VkInstance instance,
+ const VkMirSurfaceCreateInfoKHR* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkSurfaceKHR* pSurface);
+
+VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceMirPresentationSupportKHR(
+ VkPhysicalDevice physicalDevice,
+ uint32_t queueFamilyIndex,
+ MirConnection* connection);
+#endif
+#endif /* VK_USE_PLATFORM_MIR_KHR */
+
+#ifdef VK_USE_PLATFORM_ANDROID_KHR
+#define VK_KHR_android_surface 1
+#include <android/native_window.h>
+
+#define VK_KHR_ANDROID_SURFACE_SPEC_VERSION 6
+#define VK_KHR_ANDROID_SURFACE_EXTENSION_NAME "VK_KHR_android_surface"
+
+typedef VkFlags VkAndroidSurfaceCreateFlagsKHR;
+
+typedef struct VkAndroidSurfaceCreateInfoKHR {
+ VkStructureType sType;
+ const void* pNext;
+ VkAndroidSurfaceCreateFlagsKHR flags;
+ ANativeWindow* window;
+} VkAndroidSurfaceCreateInfoKHR;
+
+
+typedef VkResult (VKAPI_PTR *PFN_vkCreateAndroidSurfaceKHR)(VkInstance instance, const VkAndroidSurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
+
+#ifndef VK_NO_PROTOTYPES
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateAndroidSurfaceKHR(
+ VkInstance instance,
+ const VkAndroidSurfaceCreateInfoKHR* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkSurfaceKHR* pSurface);
+#endif
+#endif /* VK_USE_PLATFORM_ANDROID_KHR */
+
+#ifdef VK_USE_PLATFORM_WIN32_KHR
+#define VK_KHR_win32_surface 1
+#include <windows.h>
+
+#define VK_KHR_WIN32_SURFACE_SPEC_VERSION 5
+#define VK_KHR_WIN32_SURFACE_EXTENSION_NAME "VK_KHR_win32_surface"
+
+typedef VkFlags VkWin32SurfaceCreateFlagsKHR;
+
+typedef struct VkWin32SurfaceCreateInfoKHR {
+ VkStructureType sType;
+ const void* pNext;
+ VkWin32SurfaceCreateFlagsKHR flags;
+ HINSTANCE hinstance;
+ HWND hwnd;
+} VkWin32SurfaceCreateInfoKHR;
+
+
+typedef VkResult (VKAPI_PTR *PFN_vkCreateWin32SurfaceKHR)(VkInstance instance, const VkWin32SurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface);
+typedef VkBool32 (VKAPI_PTR *PFN_vkGetPhysicalDeviceWin32PresentationSupportKHR)(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex);
+
+#ifndef VK_NO_PROTOTYPES
+VKAPI_ATTR VkResult VKAPI_CALL vkCreateWin32SurfaceKHR(
+ VkInstance instance,
+ const VkWin32SurfaceCreateInfoKHR* pCreateInfo,
+ const VkAllocationCallbacks* pAllocator,
+ VkSurfaceKHR* pSurface);
+
+VKAPI_ATTR VkBool32 VKAPI_CALL vkGetPhysicalDeviceWin32PresentationSupportKHR(
+ VkPhysicalDevice physicalDevice,
+ uint32_t queueFamilyIndex);
+#endif
+#endif /* VK_USE_PLATFORM_WIN32_KHR */
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
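Every WSI create-info struct above starts with an `sType` tag and a `const void* pNext` link, Vulkan's convention for extensible struct chains. A minimal standalone sketch of that convention (the enum values and struct names below are illustrative, not Vulkan's):

```cpp
#include <cassert>
#include <cstdint>

// Minimal model of Vulkan's extensible-struct convention: every
// create-info begins with an sType tag and a const void* pNext link,
// so implementations can walk the chain looking for extension structs.
enum StructureType : uint32_t { STYPE_BASE = 1, STYPE_EXTRA = 2 };

struct BaseHeader {
    StructureType sType;
    const void* pNext;
};

struct ExtraInfo {
    StructureType sType;  // STYPE_EXTRA
    const void* pNext;
    int value;
};

// Walk a pNext chain and return the first struct tagged `wanted`,
// or nullptr if no struct in the chain carries that tag.
const void* FindInChain(const void* chain, StructureType wanted) {
    while (chain) {
        const BaseHeader* hdr = static_cast<const BaseHeader*>(chain);
        if (hdr->sType == wanted)
            return chain;
        chain = hdr->pNext;
    }
    return nullptr;
}
```

Real Vulkan code reads chains the same way, which is why every struct in the headers above reserves those first two members even when `pNext` is unused.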
diff --git a/vulkan/include/vulkan/vulkan_loader_data.h b/vulkan/include/vulkan/vulkan_loader_data.h
new file mode 100644
index 0000000..968a7aa
--- /dev/null
+++ b/vulkan/include/vulkan/vulkan_loader_data.h
@@ -0,0 +1,29 @@
+/*
+ * Copyright 2015 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef VULKAN_VULKAN_LOADER_DATA_H
+#define VULKAN_VULKAN_LOADER_DATA_H
+
+#include <string>
+
+namespace vulkan {
+ struct LoaderData {
+ std::string layer_path;
+ __attribute__((visibility("default"))) static LoaderData& GetInstance();
+ };
+}
+
+#endif
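The header declares `LoaderData::GetInstance()` but its definition lives in `vulkan_loader_data.cpp`, which is not part of this chunk. A plausible sketch of that definition — a function-local static (Meyers singleton), which C++11 guarantees is initialized exactly once and thread-safely:

```cpp
#include <cassert>
#include <string>

// Sketch of how LoaderData::GetInstance() can be implemented; the real
// definition is in vulkan_loader_data.cpp (not shown in this diff).
namespace vulkan {

struct LoaderData {
    std::string layer_path;
    static LoaderData& GetInstance();
};

LoaderData& LoaderData::GetInstance() {
    static LoaderData instance;  // constructed once, on first use
    return instance;
}

}  // namespace vulkan
```

Because the accessor is the only exported symbol (`visibility("default")` in the header), both the loader and the framework code that sets `layer_path` observe the same instance.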
diff --git a/vulkan/libvulkan/Android.mk b/vulkan/libvulkan/Android.mk
new file mode 100644
index 0000000..a196a36
--- /dev/null
+++ b/vulkan/libvulkan/Android.mk
@@ -0,0 +1,49 @@
+# Copyright 2015 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+LOCAL_PATH:= $(call my-dir)
+include $(CLEAR_VARS)
+
+LOCAL_CLANG := true
+LOCAL_CFLAGS := -DLOG_TAG=\"vulkan\" \
+ -std=c99 -fvisibility=hidden -fstrict-aliasing \
+ -Weverything -Werror \
+ -Wno-padded \
+ -Wno-undef
+#LOCAL_CFLAGS += -DLOG_NDEBUG=0
+LOCAL_CPPFLAGS := -std=c++14 \
+ -fexceptions \
+ -Wno-c++98-compat-pedantic \
+ -Wno-exit-time-destructors \
+ -Wno-c99-extensions \
+ -Wno-zero-length-array \
+ -Wno-global-constructors
+
+LOCAL_C_INCLUDES := \
+ frameworks/native/vulkan/include \
+ system/core/libsync/include
+
+LOCAL_SRC_FILES := \
+ debug_report.cpp \
+ dispatch_gen.cpp \
+ layers_extensions.cpp \
+ loader.cpp \
+ swapchain.cpp \
+ vulkan_loader_data.cpp
+LOCAL_ADDITIONAL_DEPENDENCIES := $(LOCAL_PATH)/Android.mk
+
+LOCAL_SHARED_LIBRARIES := libhardware liblog libsync libutils libcutils
+
+LOCAL_MODULE := libvulkan
+include $(BUILD_SHARED_LIBRARY)
diff --git a/vulkan/libvulkan/debug_report.cpp b/vulkan/libvulkan/debug_report.cpp
new file mode 100644
index 0000000..fea9f18
--- /dev/null
+++ b/vulkan/libvulkan/debug_report.cpp
@@ -0,0 +1,123 @@
+/*
+ * Copyright 2016 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "loader.h"
+
+namespace vulkan {
+
+VkResult DebugReportCallbackList::CreateCallback(
+ VkInstance instance,
+ const VkDebugReportCallbackCreateInfoEXT* create_info,
+ const VkAllocationCallbacks* allocator,
+ VkDebugReportCallbackEXT* callback) {
+ VkDebugReportCallbackEXT driver_callback;
+ VkResult result = GetDriverDispatch(instance).CreateDebugReportCallbackEXT(
+ GetDriverInstance(instance), create_info, allocator, &driver_callback);
+ if (result != VK_SUCCESS)
+ return result;
+
+ const VkAllocationCallbacks* alloc =
+ allocator ? allocator : GetAllocator(instance);
+ void* mem =
+ alloc->pfnAllocation(alloc->pUserData, sizeof(Node), alignof(Node),
+ VK_SYSTEM_ALLOCATION_SCOPE_OBJECT);
+ if (!mem) {
+ GetDriverDispatch(instance).DestroyDebugReportCallbackEXT(
+ GetDriverInstance(instance), driver_callback, allocator);
+ return VK_ERROR_OUT_OF_HOST_MEMORY;
+ }
+
+ std::lock_guard<decltype(rwmutex_)> lock(rwmutex_);
+ head_.next =
+ new (mem) Node{head_.next, create_info->flags, create_info->pfnCallback,
+ create_info->pUserData, driver_callback};
+ *callback =
+ VkDebugReportCallbackEXT(reinterpret_cast<uintptr_t>(head_.next));
+ return VK_SUCCESS;
+}
+
+void DebugReportCallbackList::DestroyCallback(
+ VkInstance instance,
+ VkDebugReportCallbackEXT callback,
+ const VkAllocationCallbacks* allocator) {
+ Node* node = reinterpret_cast<Node*>(uintptr_t(callback));
+ std::unique_lock<decltype(rwmutex_)> lock(rwmutex_);
+ Node* prev = &head_;
+ while (prev->next && prev->next != node)
+ prev = prev->next;
+ if (prev->next == node) prev->next = node->next;
+ lock.unlock();
+
+ GetDriverDispatch(instance).DestroyDebugReportCallbackEXT(
+ GetDriverInstance(instance), node->driver_callback, allocator);
+
+ const VkAllocationCallbacks* alloc =
+ allocator ? allocator : GetAllocator(instance);
+ alloc->pfnFree(alloc->pUserData, node);
+}
+
+void DebugReportCallbackList::Message(VkDebugReportFlagsEXT flags,
+ VkDebugReportObjectTypeEXT object_type,
+ uint64_t object,
+ size_t location,
+ int32_t message_code,
+ const char* layer_prefix,
+ const char* message) {
+ std::shared_lock<decltype(rwmutex_)> lock(rwmutex_);
+ Node* node = &head_;
+ while ((node = node->next)) {
+ if ((node->flags & flags) != 0) {
+ node->callback(flags, object_type, object, location, message_code,
+ layer_prefix, message, node->data);
+ }
+ }
+}
+
+VkResult CreateDebugReportCallbackEXT_Bottom(
+ VkInstance instance,
+ const VkDebugReportCallbackCreateInfoEXT* create_info,
+ const VkAllocationCallbacks* allocator,
+ VkDebugReportCallbackEXT* callback) {
+ return GetDebugReportCallbacks(instance).CreateCallback(
+ instance, create_info, allocator, callback);
+}
+
+void DestroyDebugReportCallbackEXT_Bottom(
+ VkInstance instance,
+ VkDebugReportCallbackEXT callback,
+ const VkAllocationCallbacks* allocator) {
+ if (callback)
+ GetDebugReportCallbacks(instance).DestroyCallback(instance, callback,
+ allocator);
+}
+
+void DebugReportMessageEXT_Bottom(VkInstance instance,
+ VkDebugReportFlagsEXT flags,
+ VkDebugReportObjectTypeEXT object_type,
+ uint64_t object,
+ size_t location,
+ int32_t message_code,
+ const char* layer_prefix,
+ const char* message) {
+ GetDriverDispatch(instance).DebugReportMessageEXT(
+ GetDriverInstance(instance), flags, object_type, object, location,
+ message_code, layer_prefix, message);
+ GetDebugReportCallbacks(instance).Message(flags, object_type, object,
+ location, message_code,
+ layer_prefix, message);
+}
+
+} // namespace vulkan
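`DebugReportCallbackList` is an intrusive singly-linked list with a sentinel head node; each registered callback carries a flag mask, and `Message` fires only the callbacks whose mask intersects the message's flags. A standalone model of that structure (no Vulkan types; all names below are illustrative):

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Standalone model of the callback list: a singly linked list with a
// sentinel head, where each node holds a flag mask and only fires when
// (node->flags & message_flags) is non-zero.
using Flags = uint32_t;
using Callback = void (*)(Flags flags, const char* message, void* user_data);

struct Node {
    Node* next;
    Flags flags;
    Callback callback;
    void* user_data;
};

class CallbackList {
  public:
    CallbackList() : head_{nullptr, 0, nullptr, nullptr} {}
    ~CallbackList() {
        while (head_.next) {
            Node* n = head_.next;
            head_.next = n->next;
            delete n;
        }
    }

    // Push a new node onto the front of the list; returns a handle.
    Node* Add(Flags flags, Callback cb, void* user_data) {
        head_.next = new Node{head_.next, flags, cb, user_data};
        return head_.next;
    }

    // Unlink and free a node; the sentinel head means the first real
    // node needs no special case.
    void Remove(Node* node) {
        Node* prev = &head_;
        while (prev->next && prev->next != node)
            prev = prev->next;
        if (prev->next == node) {
            prev->next = node->next;
            delete node;
        }
    }

    // Dispatch a message to every callback whose mask matches.
    void Message(Flags flags, const char* message) const {
        for (const Node* n = head_.next; n; n = n->next)
            if (n->flags & flags)
                n->callback(flags, message, n->user_data);
    }

  private:
    Node head_;  // sentinel node, never fired
};
```

The real implementation additionally holds a reader/writer lock around the traversal and packs the node pointer into the `VkDebugReportCallbackEXT` handle it returns.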
diff --git a/vulkan/libvulkan/debug_report.h b/vulkan/libvulkan/debug_report.h
new file mode 100644
index 0000000..5bce240
--- /dev/null
+++ b/vulkan/libvulkan/debug_report.h
@@ -0,0 +1,71 @@
+/*
+ * Copyright 2016 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef LIBVULKAN_DEBUG_REPORT_H
+#define LIBVULKAN_DEBUG_REPORT_H 1
+
+#include <shared_mutex>
+#include <vulkan/vk_ext_debug_report.h>
+
+namespace vulkan {
+
+// clang-format off
+VKAPI_ATTR VkResult CreateDebugReportCallbackEXT_Bottom(VkInstance instance, const VkDebugReportCallbackCreateInfoEXT* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDebugReportCallbackEXT* pCallback);
+VKAPI_ATTR void DestroyDebugReportCallbackEXT_Bottom(VkInstance instance, VkDebugReportCallbackEXT callback, const VkAllocationCallbacks* pAllocator);
+VKAPI_ATTR void DebugReportMessageEXT_Bottom(VkInstance instance, VkDebugReportFlagsEXT flags, VkDebugReportObjectTypeEXT objectType, uint64_t object, size_t location, int32_t messageCode, const char* pLayerPrefix, const char* pMessage);
+// clang-format on
+
+class DebugReportCallbackList {
+ public:
+ DebugReportCallbackList()
+ : head_{nullptr, 0, nullptr, nullptr, VK_NULL_HANDLE} {}
+ DebugReportCallbackList(const DebugReportCallbackList&) = delete;
+ DebugReportCallbackList& operator=(const DebugReportCallbackList&) = delete;
+ ~DebugReportCallbackList() = default;
+
+ VkResult CreateCallback(
+ VkInstance instance,
+ const VkDebugReportCallbackCreateInfoEXT* create_info,
+ const VkAllocationCallbacks* allocator,
+ VkDebugReportCallbackEXT* callback);
+ void DestroyCallback(VkInstance instance,
+ VkDebugReportCallbackEXT callback,
+ const VkAllocationCallbacks* allocator);
+ void Message(VkDebugReportFlagsEXT flags,
+ VkDebugReportObjectTypeEXT object_type,
+ uint64_t object,
+ size_t location,
+ int32_t message_code,
+ const char* layer_prefix,
+ const char* message);
+
+ private:
+ struct Node {
+ Node* next;
+ VkDebugReportFlagsEXT flags;
+ PFN_vkDebugReportCallbackEXT callback;
+ void* data;
+ VkDebugReportCallbackEXT driver_callback;
+ };
+
+ // TODO(jessehall): replace with std::shared_mutex when available in libc++
+ std::shared_timed_mutex rwmutex_;
+ Node head_;
+};
+
+} // namespace vulkan
+
+#endif // LIBVULKAN_DEBUG_REPORT_H
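The `rwmutex_` member is a `std::shared_timed_mutex` (the TODO notes that `std::shared_mutex` is C++17-only): `Message` takes a shared lock so many reporters can traverse concurrently, while `CreateCallback`/`DestroyCallback` take an exclusive lock to mutate the list. A minimal sketch of that locking pattern, with a counter standing in for the list (names below are illustrative):

```cpp
#include <cassert>
#include <mutex>
#include <shared_mutex>

// Sketch of the reader/writer pattern used by DebugReportCallbackList:
// writers take the mutex exclusively, readers share it.
class SharedCounter {
  public:
    void Increment() {  // writer path: exclusive lock
        std::lock_guard<std::shared_timed_mutex> lock(mutex_);
        ++value_;
    }

    int Read() const {  // reader path: shared lock, many may hold it at once
        std::shared_lock<std::shared_timed_mutex> lock(mutex_);
        return value_;
    }

  private:
    mutable std::shared_timed_mutex mutex_;
    int value_ = 0;
};
```

This keeps the hot path (message dispatch) contention-free among readers, which matters because validation layers can emit messages at high frequency.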
diff --git a/vulkan/libvulkan/dispatch.tmpl b/vulkan/libvulkan/dispatch.tmpl
new file mode 100644
index 0000000..0f1194c
--- /dev/null
+++ b/vulkan/libvulkan/dispatch.tmpl
@@ -0,0 +1,618 @@
+{{/*
+ * Copyright 2015 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */}}
+
+{{Include "../api/templates/vulkan_common.tmpl"}}
+{{Global "clang-format" (Strings "clang-format" "-style=file")}}
+{{Macro "DefineGlobals" $}}
+{{$ | Macro "dispatch_gen.h" | Format (Global "clang-format") | Write "dispatch_gen.h" }}
+{{$ | Macro "dispatch_gen.cpp" | Format (Global "clang-format") | Write "dispatch_gen.cpp"}}
+
+{{/*
+-------------------------------------------------------------------------------
+ dispatch_gen.h
+-------------------------------------------------------------------------------
+*/}}
+{{define "dispatch_gen.h"}}
+/*
+•* Copyright 2015 The Android Open Source Project
+•*
+•* Licensed under the Apache License, Version 2.0 (the "License");
+•* you may not use this file except in compliance with the License.
+•* You may obtain a copy of the License at
+•*
+•* http://www.apache.org/licenses/LICENSE-2.0
+•*
+•* Unless required by applicable law or agreed to in writing, software
+•* distributed under the License is distributed on an "AS IS" BASIS,
+•* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+•* See the License for the specific language governing permissions and
+•* limitations under the License.
+•*/
+¶
+#define VK_USE_PLATFORM_ANDROID_KHR
+#include <vulkan/vk_android_native_buffer.h>
+#include <vulkan/vk_ext_debug_report.h>
+#include <vulkan/vulkan.h>
+¶
+namespace vulkan {
+¶
+struct InstanceDispatchTable {«
+ // clang-format off
+ {{range $f := AllCommands $}}
+ {{if (Macro "IsInstanceDispatched" $f)}}
+ {{Macro "FunctionPtrName" $f}} {{Macro "BaseName" $f}};
+ {{end}}
+ {{end}}
+ // clang-format on
+»};
+¶
+struct DeviceDispatchTable {«
+ // clang-format off
+ {{range $f := AllCommands $}}
+ {{if (Macro "IsDeviceDispatched" $f)}}
+ {{Macro "FunctionPtrName" $f}} {{Macro "BaseName" $f}};
+ {{end}}
+ {{end}}
+ // clang-format on
+»};
+¶
+struct DriverDispatchTable {«
+ // clang-format off
+ {{range $f := AllCommands $}}
+ {{if (Macro "IsInstanceDispatched" $f)}}
+ {{if not (Macro "IsLoaderFunction" $f)}}
+ {{Macro "FunctionPtrName" $f}} {{Macro "BaseName" $f}};
+ {{end}}
+ {{end}}
+ {{end}}
+
+ PFN_vkGetDeviceProcAddr GetDeviceProcAddr;
+
+ {{/* TODO(jessehall): Needed by swapchain code. Figure out a better way of
+ handling this that avoids the special case. Probably should rework
+ things so the driver dispatch table has all driver functions. Probably
+ need separate instance- and device-level copies, fill in all device-
+ dispatched functions in the device-level copies only, and change
+ GetDeviceProcAddr_Bottom to look in the already-loaded driver
+ dispatch table rather than forwarding to the driver's
+ vkGetDeviceProcAddr. */}}
+ PFN_vkCreateImage CreateImage;
+ PFN_vkDestroyImage DestroyImage;
+
+ PFN_vkGetSwapchainGrallocUsageANDROID GetSwapchainGrallocUsageANDROID;
+ PFN_vkAcquireImageANDROID AcquireImageANDROID;
+ PFN_vkQueueSignalReleaseImageANDROID QueueSignalReleaseImageANDROID;
+ // clang-format on
+»};
+¶
+} // namespace vulkan
+¶{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ dispatch_gen.cpp
+-------------------------------------------------------------------------------
+*/}}
+{{define "dispatch_gen.cpp"}}
+/*
+•* Copyright 2015 The Android Open Source Project
+•*
+•* Licensed under the Apache License, Version 2.0 (the "License");
+•* you may not use this file except in compliance with the License.
+•* You may obtain a copy of the License at
+•*
+•* http://www.apache.org/licenses/LICENSE-2.0
+•*
+•* Unless required by applicable law or agreed to in writing, software
+•* distributed under the License is distributed on an "AS IS" BASIS,
+•* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+•* See the License for the specific language governing permissions and
+•* limitations under the License.
+•*/
+¶
+#include <log/log.h>
+#include <algorithm>
+#include "loader.h"
+¶
+#define UNLIKELY(expr) __builtin_expect((expr), 0)
+¶
+using namespace vulkan;
+¶
+namespace {
+¶
+struct NameProc {
+ const char* name;
+ PFN_vkVoidFunction proc;
+};
+¶
+PFN_vkVoidFunction Lookup(const char* name, const NameProc* begin, const NameProc* end) {
+ const auto& entry = std::lower_bound(
+ begin, end, name,
+ [](const NameProc& e, const char* n) { return strcmp(e.name, n) < 0; });
+ if (entry == end || strcmp(entry->name, name) != 0)
+ return nullptr;
+ return entry->proc;
+}
+¶
+template <size_t N>
+PFN_vkVoidFunction Lookup(const char* name, const NameProc (&procs)[N]) {
+ return Lookup(name, procs, procs + N);
+}
+¶
+const NameProc kLoaderExportProcs[] = {«
+ // clang-format off
+ {{range $f := SortBy (AllCommands $) "FunctionName"}}
+ {{if (Macro "IsExported" $f)}}
+ {"{{$f.Name}}", reinterpret_cast<PFN_vkVoidFunction>({{$f.Name}})},
+ {{end}}
+ {{end}}
+ // clang-format on
+»};
+¶
+const NameProc kLoaderGlobalProcs[] = {«
+ // clang-format off
+ {{range $f := SortBy (AllCommands $) "FunctionName"}}
+ {{if and (Macro "HasLoaderTopImpl" $f) (eq (Macro "Vtbl" $f) "Global")}}
+ {"{{$f.Name}}", reinterpret_cast<PFN_vkVoidFunction>(§
+ static_cast<{{Macro "FunctionPtrName" $f}}>(§
+ {{Macro "BaseName" $f}}_Top))},
+ {{end}}
+ {{end}}
+ // clang-format on
+»};
+¶
+const NameProc kLoaderTopProcs[] = {«
+ // clang-format off
+ {{range $f := SortBy (AllCommands $) "FunctionName"}}
+ {{if (Macro "HasLoaderTopImpl" $f)}}
+ {"{{$f.Name}}", reinterpret_cast<PFN_vkVoidFunction>(§
+ static_cast<{{Macro "FunctionPtrName" $f}}>(§
+ {{Macro "BaseName" $f}}_Top))},
+ {{end}}
+ {{end}}
+ // clang-format on
+»};
+¶
+const NameProc kLoaderBottomProcs[] = {«
+ // clang-format off
+ {{range $f := SortBy (AllCommands $) "FunctionName"}}
+ {{if (Macro "HasLoaderBottomImpl" $f)}}
+ {"{{$f.Name}}", reinterpret_cast<PFN_vkVoidFunction>(§
+ static_cast<{{Macro "FunctionPtrName" $f}}>(§
+ {{Macro "BaseName" $f}}_Bottom))},
+ {{end}}
+ {{end}}
+ // clang-format on
+»};
+¶
+struct NameOffset {
+ const char* name;
+ size_t offset;
+};
+¶
+ssize_t Lookup(const char* name,
+ const NameOffset* begin,
+ const NameOffset* end) {
+ const auto& entry = std::lower_bound(
+ begin, end, name, [](const NameOffset& e, const char* n) {
+ return strcmp(e.name, n) < 0;
+ });
+ if (entry == end || strcmp(entry->name, name) != 0)
+ return -1;
+ return static_cast<ssize_t>(entry->offset);
+}
+¶
+template <size_t N, class Table>
+PFN_vkVoidFunction Lookup(const char* name,
+ const NameOffset (&offsets)[N],
+ const Table& table) {
+ ssize_t offset = Lookup(name, offsets, offsets + N);
+ if (offset < 0)
+ return nullptr;
+ uintptr_t base = reinterpret_cast<uintptr_t>(&table);
+ return *reinterpret_cast<PFN_vkVoidFunction*>(base +
+ static_cast<size_t>(offset));
+}
+¶
+const NameOffset kInstanceDispatchOffsets[] = {«
+ // clang-format off
+ {{range $f := SortBy (AllCommands $) "FunctionName"}}
+ {{if (Macro "IsInstanceDispatched" $f)}}
+ {"{{$f.Name}}", offsetof(InstanceDispatchTable, {{Macro "BaseName" $f}})},
+ {{end}}
+ {{end}}
+ // clang-format on
+»};
+¶
+const NameOffset kDeviceDispatchOffsets[] = {«
+ // clang-format off
+ {{range $f := SortBy (AllCommands $) "FunctionName"}}
+ {{if (Macro "IsDeviceDispatched" $f)}}
+ {"{{$f.Name}}", offsetof(DeviceDispatchTable, {{Macro "BaseName" $f}})},
+ {{end}}
+ {{end}}
+ // clang-format on
+»};
+¶
+} // anonymous namespace
+¶
+namespace vulkan {
+¶
+PFN_vkVoidFunction GetLoaderExportProcAddr(const char* name) {
+ return Lookup(name, kLoaderExportProcs);
+}
+¶
+PFN_vkVoidFunction GetLoaderGlobalProcAddr(const char* name) {
+ return Lookup(name, kLoaderGlobalProcs);
+}
+¶
+PFN_vkVoidFunction GetLoaderTopProcAddr(const char* name) {
+ return Lookup(name, kLoaderTopProcs);
+}
+¶
+PFN_vkVoidFunction GetLoaderBottomProcAddr(const char* name) {
+ return Lookup(name, kLoaderBottomProcs);
+}
+¶
+PFN_vkVoidFunction GetDispatchProcAddr(const InstanceDispatchTable& dispatch,
+ const char* name) {
+ return Lookup(name, kInstanceDispatchOffsets, dispatch);
+}
+¶
+PFN_vkVoidFunction GetDispatchProcAddr(const DeviceDispatchTable& dispatch,
+ const char* name) {
+ return Lookup(name, kDeviceDispatchOffsets, dispatch);
+}
+¶
+bool LoadInstanceDispatchTable(VkInstance instance,
+ PFN_vkGetInstanceProcAddr get_proc_addr,
+ InstanceDispatchTable& dispatch) {«
+ bool success = true;
+ // clang-format off
+ {{range $f := AllCommands $}}
+ {{if (Macro "IsInstanceDispatched" $f)}}
+ dispatch.{{Macro "BaseName" $f}} = §
+ reinterpret_cast<{{Macro "FunctionPtrName" $f}}>(§
+ get_proc_addr(instance, "{{$f.Name}}"));
+ if (UNLIKELY(!dispatch.{{Macro "BaseName" $f}})) {
+ ALOGE("missing instance proc: %s", "{{$f.Name}}");
+ success = false;
+ }
+ {{end}}
+ {{end}}
+ // clang-format on
+ return success;
+»}
+¶
+bool LoadDeviceDispatchTable(VkDevice device,
+ PFN_vkGetDeviceProcAddr get_proc_addr,
+ DeviceDispatchTable& dispatch) {«
+ bool success = true;
+ // clang-format off
+ {{range $f := AllCommands $}}
+ {{if (Macro "IsDeviceDispatched" $f)}}
+ dispatch.{{Macro "BaseName" $f}} = §
+ reinterpret_cast<{{Macro "FunctionPtrName" $f}}>(§
+ get_proc_addr(device, "{{$f.Name}}"));
+ if (UNLIKELY(!dispatch.{{Macro "BaseName" $f}})) {
+ ALOGE("missing device proc: %s", "{{$f.Name}}");
+ success = false;
+ }
+ {{end}}
+ {{end}}
+ // clang-format on
+ return success;
+»}
+¶
+bool LoadDriverDispatchTable(VkInstance instance,
+ PFN_vkGetInstanceProcAddr get_proc_addr,
+ const InstanceExtensionSet& extensions,
+ DriverDispatchTable& dispatch) {«
+ bool success = true;
+ // clang-format off
+ {{range $f := AllCommands $}}
+ {{if (Macro "IsInstanceDispatched" $f)}}
+ {{if not (Macro "IsLoaderFunction" $f)}}
+ {{$ext := GetAnnotation $f "extension"}}
+ {{if $ext}}
+ if (extensions[{{Macro "ExtensionConstant" $ext}}]) {
+ {{end}}
+ dispatch.{{Macro "BaseName" $f}} = §
+ reinterpret_cast<{{Macro "FunctionPtrName" $f}}>(§
+ get_proc_addr(instance, "{{$f.Name}}"));
+ if (UNLIKELY(!dispatch.{{Macro "BaseName" $f}})) {
+ ALOGE("missing driver proc: %s", "{{$f.Name}}");
+ success = false;
+ }
+ {{if $ext}}
+ }
+ {{end}}
+ {{end}}
+ {{end}}
+ {{end}}
+ dispatch.GetDeviceProcAddr = reinterpret_cast<PFN_vkGetDeviceProcAddr>(get_proc_addr(instance, "vkGetDeviceProcAddr"));
+ if (UNLIKELY(!dispatch.GetDeviceProcAddr)) {
+ ALOGE("missing driver proc: %s", "vkGetDeviceProcAddr");
+ success = false;
+ }
+ dispatch.CreateImage = reinterpret_cast<PFN_vkCreateImage>(get_proc_addr(instance, "vkCreateImage"));
+ if (UNLIKELY(!dispatch.CreateImage)) {
+ ALOGE("missing driver proc: %s", "vkCreateImage");
+ success = false;
+ }
+ dispatch.DestroyImage = reinterpret_cast<PFN_vkDestroyImage>(get_proc_addr(instance, "vkDestroyImage"));
+ if (UNLIKELY(!dispatch.DestroyImage)) {
+ ALOGE("missing driver proc: %s", "vkDestroyImage");
+ success = false;
+ }
+ dispatch.GetSwapchainGrallocUsageANDROID = reinterpret_cast<PFN_vkGetSwapchainGrallocUsageANDROID>(get_proc_addr(instance, "vkGetSwapchainGrallocUsageANDROID"));
+ if (UNLIKELY(!dispatch.GetSwapchainGrallocUsageANDROID)) {
+ ALOGE("missing driver proc: %s", "vkGetSwapchainGrallocUsageANDROID");
+ success = false;
+ }
+ dispatch.AcquireImageANDROID = reinterpret_cast<PFN_vkAcquireImageANDROID>(get_proc_addr(instance, "vkAcquireImageANDROID"));
+ if (UNLIKELY(!dispatch.AcquireImageANDROID)) {
+ ALOGE("missing driver proc: %s", "vkAcquireImageANDROID");
+ success = false;
+ }
+ dispatch.QueueSignalReleaseImageANDROID = reinterpret_cast<PFN_vkQueueSignalReleaseImageANDROID>(get_proc_addr(instance, "vkQueueSignalReleaseImageANDROID"));
+ if (UNLIKELY(!dispatch.QueueSignalReleaseImageANDROID)) {
+ ALOGE("missing driver proc: %s", "vkQueueSignalReleaseImageANDROID");
+ success = false;
+ }
+ // clang-format on
+ return success;
+»}
+¶
+} // namespace vulkan
+¶
+// clang-format off
+¶
+{{range $f := AllCommands $}}
+ {{if and (not (GetAnnotation $f "pfn")) (Macro "IsExported" $f)}}
+ __attribute__((visibility("default")))
+ VKAPI_ATTR {{Node "Type" $f.Return}} {{$f.Name}}({{Macro "Parameters" $f}}) {
+ {{if not (IsVoid $f.Return.Type)}}return §{{end}}
+ {{Macro "Dispatch" $f}}({{Macro "Arguments" $f}});
+ }
+ ¶
+ {{end}}
+{{end}}
+¶
+// clang-format on
+¶{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emit the dispatch lookup for a function based on its first parameter.
+-------------------------------------------------------------------------------
+*/}}
+{{define "Dispatch"}}
+ {{AssertType $ "Function"}}
+
+ {{if (Macro "HasLoaderTopImpl" $)}}
+ {{Macro "BaseName" $}}_Top§
+ {{else}}
+ {{$p0 := index $.CallParameters 0}}
+ GetDispatchTable({{$p0.Name}}).{{Macro "BaseName" $}}§
+ {{end}}
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Map an extension name to InstanceExtension or DeviceExtension enum value
+-------------------------------------------------------------------------------
+*/}}
+{{define "ExtensionConstant"}}
+ {{$name := index $.Arguments 0}}
+ {{ if (eq $name "VK_KHR_surface")}}kKHR_surface
+ {{else if (eq $name "VK_KHR_android_surface")}}kKHR_android_surface
+ {{else if (eq $name "VK_EXT_debug_report")}}kEXT_debug_report
+ {{end}}
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits a function name without the "vk" prefix.
+-------------------------------------------------------------------------------
+*/}}
+{{define "BaseName"}}
+ {{AssertType $ "Function"}}
+ {{TrimPrefix "vk" $.Name}}
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits a comma-separated list of C parameter names for the given command.
+-------------------------------------------------------------------------------
+*/}}
+{{define "Arguments"}}
+ {{AssertType $ "Function"}}
+
+ {{ForEach $.CallParameters "ParameterName" | JoinWith ", "}}
+{{end}}
+
+
+{{/*
+------------------------------------------------------------------------------
+ Emit "true" for supported functions that undergo table dispatch. Only global
+ functions and functions handled in the loader top without calling into
+ lower layers are not dispatched.
+------------------------------------------------------------------------------
+*/}}
+{{define "IsInstanceDispatched"}}
+ {{AssertType $ "Function"}}
+ {{if and (Macro "IsFunctionSupported" $) (eq (Macro "Vtbl" $) "Instance")}}
+ {{if (ne $.Name "vkGetInstanceProcAddr")}}true{{end}}
+ {{end}}
+{{end}}
+
+
+{{/*
+------------------------------------------------------------------------------
+ Emit "true" for supported functions that can have device-specific dispatch.
+------------------------------------------------------------------------------
+*/}}
+{{define "IsDeviceDispatched"}}
+ {{AssertType $ "Function"}}
+ {{if (Macro "IsFunctionSupported" $)}}
+ {{if eq (Macro "Vtbl" $) "Device"}}
+ {{if ne $.Name "vkGetDeviceProcAddr"}}
+ true
+ {{end}}
+ {{end}}
+ {{end}}
+{{end}}
+
+
+{{/*
+------------------------------------------------------------------------------
+ Emit "true" if a function is core or from a supportable extension.
+------------------------------------------------------------------------------
+*/}}
+{{define "IsFunctionSupported"}}
+ {{AssertType $ "Function"}}
+ {{if not (GetAnnotation $ "pfn")}}
+ {{$ext := GetAnnotation $ "extension"}}
+ {{if not $ext}}true
+ {{else if not (Macro "IsExtensionBlacklisted" $ext)}}true
+ {{end}}
+ {{end}}
+{{end}}
+
+
+{{/*
+------------------------------------------------------------------------------
+ Decides whether a function should be exported from the Android Vulkan
+ library. Functions in the core API and in loader extensions are exported.
+------------------------------------------------------------------------------
+*/}}
+{{define "IsExported"}}
+ {{AssertType $ "Function"}}
+
+ {{if (Macro "IsFunctionSupported" $)}}
+ {{$ext := GetAnnotation $ "extension"}}
+ {{if $ext}}
+ {{Macro "IsLoaderExtension" $ext}}
+ {{else}}
+ true
+ {{end}}
+ {{end}}
+{{end}}
+
+
+{{/*
+------------------------------------------------------------------------------
+  Emit "true" if an extension function is implemented entirely by the
+  loader and not implemented by drivers.
+------------------------------------------------------------------------------
+*/}}
+{{define "IsLoaderFunction"}}
+ {{AssertType $ "Function"}}
+
+ {{$ext := GetAnnotation $ "extension"}}
+ {{if $ext}}
+ {{Macro "IsLoaderExtension" $ext}}
+ {{end}}
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emit "true" if the loader has a top-level implementation for the function
+ that should be called directly rather than dispatching to the first layer.
+-------------------------------------------------------------------------------
+*/}}
+{{define "HasLoaderTopImpl"}}
+ {{AssertType $ "Function"}}
+
+ {{/* Global functions can't be dispatched */}}
+ {{ if and (not (GetAnnotation $ "pfn")) (eq (Macro "Vtbl" $) "Global")}}true
+
+ {{/* G*PA are implemented by reading the dispatch table, not by dispatching
+ through it. */}}
+ {{else if eq $.Name "vkGetInstanceProcAddr"}}true
+ {{else if eq $.Name "vkGetDeviceProcAddr"}}true
+
+ {{/* Loader top needs to initialize dispatch for device-level dispatchable
+ objects */}}
+ {{else if eq $.Name "vkGetDeviceQueue"}}true
+ {{else if eq $.Name "vkAllocateCommandBuffers"}}true
+
+ {{/* vkDestroy for dispatchable objects needs to handle VK_NULL_HANDLE;
+ trying to dispatch through that would crash. */}}
+ {{else if eq $.Name "vkDestroyInstance"}}true
+ {{else if eq $.Name "vkDestroyDevice"}}true
+
+ {{end}}
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emit "true" if the loader has a bottom-level implementation for the function
+ which terminates the dispatch chain.
+-------------------------------------------------------------------------------
+*/}}
+{{define "HasLoaderBottomImpl"}}
+ {{AssertType $ "Function"}}
+
+ {{if (Macro "IsFunctionSupported" $)}}
+ {{ if (eq (Macro "Vtbl" $) "Instance")}}true
+ {{else if (Macro "IsLoaderFunction" $)}}true
+ {{else if (eq $.Name "vkCreateInstance")}}true
+ {{else if (eq $.Name "vkGetDeviceProcAddr")}}true
+ {{end}}
+ {{end}}
+{{end}}
+
+
+{{/*
+------------------------------------------------------------------------------
+ Emit "true" if an extension is unsupportable on Android.
+------------------------------------------------------------------------------
+*/}}
+{{define "IsExtensionBlacklisted"}}
+ {{$ext := index $.Arguments 0}}
+ {{ if eq $ext "VK_KHR_display"}}true
+ {{else if eq $ext "VK_KHR_display_swapchain"}}true
+ {{else if eq $ext "VK_KHR_xlib_surface"}}true
+ {{else if eq $ext "VK_KHR_xcb_surface"}}true
+ {{else if eq $ext "VK_KHR_wayland_surface"}}true
+ {{else if eq $ext "VK_KHR_mir_surface"}}true
+ {{else if eq $ext "VK_KHR_win32_surface"}}true
+ {{end}}
+{{end}}
+
+
+{{/*
+------------------------------------------------------------------------------
+  Emit "true" if an extension is implemented entirely by the loader, so
+  drivers should not enumerate it.
+------------------------------------------------------------------------------
+*/}}
+{{define "IsLoaderExtension"}}
+ {{$ext := index $.Arguments 0}}
+ {{ if eq $ext "VK_KHR_surface"}}true
+ {{else if eq $ext "VK_KHR_swapchain"}}true
+ {{else if eq $ext "VK_KHR_android_surface"}}true
+ {{end}}
+{{end}}
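The generated C++ file below resolves function names by binary search over arrays kept in sorted order. A minimal standalone sketch of that lookup pattern (the names here are hypothetical stand-ins, not the loader's real entry points or types):

```cpp
#include <algorithm>
#include <cstring>

// Hypothetical stand-in for PFN_vkVoidFunction.
using VoidFn = void (*)();

struct NameProc {
    const char* name;  // array must be kept sorted by name
    VoidFn proc;
};

// Binary-search a sorted name->proc table; returns nullptr on a miss.
template <size_t N>
VoidFn Lookup(const char* name, const NameProc (&procs)[N]) {
    const NameProc* end = procs + N;
    const NameProc* entry = std::lower_bound(
        procs, end, name,
        [](const NameProc& e, const char* n) { return strcmp(e.name, n) < 0; });
    if (entry == end || strcmp(entry->name, name) != 0)
        return nullptr;
    return entry->proc;
}
```

Keeping the array sorted lets `std::lower_bound` find an entry in O(log N) string comparisons rather than a linear scan over the roughly 150 entries in the export table.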
diff --git a/vulkan/libvulkan/dispatch_gen.cpp b/vulkan/libvulkan/dispatch_gen.cpp
new file mode 100644
index 0000000..60da749
--- /dev/null
+++ b/vulkan/libvulkan/dispatch_gen.cpp
@@ -0,0 +1,2085 @@
+/*
+ * Copyright 2015 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <log/log.h>
+#include <stddef.h>
+#include <string.h>
+#include <algorithm>
+#include "loader.h"
+
+#define UNLIKELY(expr) __builtin_expect((expr), 0)
+
+using namespace vulkan;
+
+namespace {
+
+struct NameProc {
+ const char* name;
+ PFN_vkVoidFunction proc;
+};
+
+PFN_vkVoidFunction Lookup(const char* name,
+ const NameProc* begin,
+ const NameProc* end) {
+ const auto& entry = std::lower_bound(
+ begin, end, name,
+ [](const NameProc& e, const char* n) { return strcmp(e.name, n) < 0; });
+ if (entry == end || strcmp(entry->name, name) != 0)
+ return nullptr;
+ return entry->proc;
+}
+
+template <size_t N>
+PFN_vkVoidFunction Lookup(const char* name, const NameProc (&procs)[N]) {
+ return Lookup(name, procs, procs + N);
+}
+
+const NameProc kLoaderExportProcs[] = {
+ // clang-format off
+ {"vkAcquireNextImageKHR", reinterpret_cast<PFN_vkVoidFunction>(vkAcquireNextImageKHR)},
+ {"vkAllocateCommandBuffers", reinterpret_cast<PFN_vkVoidFunction>(vkAllocateCommandBuffers)},
+ {"vkAllocateDescriptorSets", reinterpret_cast<PFN_vkVoidFunction>(vkAllocateDescriptorSets)},
+ {"vkAllocateMemory", reinterpret_cast<PFN_vkVoidFunction>(vkAllocateMemory)},
+ {"vkBeginCommandBuffer", reinterpret_cast<PFN_vkVoidFunction>(vkBeginCommandBuffer)},
+ {"vkBindBufferMemory", reinterpret_cast<PFN_vkVoidFunction>(vkBindBufferMemory)},
+ {"vkBindImageMemory", reinterpret_cast<PFN_vkVoidFunction>(vkBindImageMemory)},
+ {"vkCmdBeginQuery", reinterpret_cast<PFN_vkVoidFunction>(vkCmdBeginQuery)},
+ {"vkCmdBeginRenderPass", reinterpret_cast<PFN_vkVoidFunction>(vkCmdBeginRenderPass)},
+ {"vkCmdBindDescriptorSets", reinterpret_cast<PFN_vkVoidFunction>(vkCmdBindDescriptorSets)},
+ {"vkCmdBindIndexBuffer", reinterpret_cast<PFN_vkVoidFunction>(vkCmdBindIndexBuffer)},
+ {"vkCmdBindPipeline", reinterpret_cast<PFN_vkVoidFunction>(vkCmdBindPipeline)},
+ {"vkCmdBindVertexBuffers", reinterpret_cast<PFN_vkVoidFunction>(vkCmdBindVertexBuffers)},
+ {"vkCmdBlitImage", reinterpret_cast<PFN_vkVoidFunction>(vkCmdBlitImage)},
+ {"vkCmdClearAttachments", reinterpret_cast<PFN_vkVoidFunction>(vkCmdClearAttachments)},
+ {"vkCmdClearColorImage", reinterpret_cast<PFN_vkVoidFunction>(vkCmdClearColorImage)},
+ {"vkCmdClearDepthStencilImage", reinterpret_cast<PFN_vkVoidFunction>(vkCmdClearDepthStencilImage)},
+ {"vkCmdCopyBuffer", reinterpret_cast<PFN_vkVoidFunction>(vkCmdCopyBuffer)},
+ {"vkCmdCopyBufferToImage", reinterpret_cast<PFN_vkVoidFunction>(vkCmdCopyBufferToImage)},
+ {"vkCmdCopyImage", reinterpret_cast<PFN_vkVoidFunction>(vkCmdCopyImage)},
+ {"vkCmdCopyImageToBuffer", reinterpret_cast<PFN_vkVoidFunction>(vkCmdCopyImageToBuffer)},
+ {"vkCmdCopyQueryPoolResults", reinterpret_cast<PFN_vkVoidFunction>(vkCmdCopyQueryPoolResults)},
+ {"vkCmdDispatch", reinterpret_cast<PFN_vkVoidFunction>(vkCmdDispatch)},
+ {"vkCmdDispatchIndirect", reinterpret_cast<PFN_vkVoidFunction>(vkCmdDispatchIndirect)},
+ {"vkCmdDraw", reinterpret_cast<PFN_vkVoidFunction>(vkCmdDraw)},
+ {"vkCmdDrawIndexed", reinterpret_cast<PFN_vkVoidFunction>(vkCmdDrawIndexed)},
+ {"vkCmdDrawIndexedIndirect", reinterpret_cast<PFN_vkVoidFunction>(vkCmdDrawIndexedIndirect)},
+ {"vkCmdDrawIndirect", reinterpret_cast<PFN_vkVoidFunction>(vkCmdDrawIndirect)},
+ {"vkCmdEndQuery", reinterpret_cast<PFN_vkVoidFunction>(vkCmdEndQuery)},
+ {"vkCmdEndRenderPass", reinterpret_cast<PFN_vkVoidFunction>(vkCmdEndRenderPass)},
+ {"vkCmdExecuteCommands", reinterpret_cast<PFN_vkVoidFunction>(vkCmdExecuteCommands)},
+ {"vkCmdFillBuffer", reinterpret_cast<PFN_vkVoidFunction>(vkCmdFillBuffer)},
+ {"vkCmdNextSubpass", reinterpret_cast<PFN_vkVoidFunction>(vkCmdNextSubpass)},
+ {"vkCmdPipelineBarrier", reinterpret_cast<PFN_vkVoidFunction>(vkCmdPipelineBarrier)},
+ {"vkCmdPushConstants", reinterpret_cast<PFN_vkVoidFunction>(vkCmdPushConstants)},
+ {"vkCmdResetEvent", reinterpret_cast<PFN_vkVoidFunction>(vkCmdResetEvent)},
+ {"vkCmdResetQueryPool", reinterpret_cast<PFN_vkVoidFunction>(vkCmdResetQueryPool)},
+ {"vkCmdResolveImage", reinterpret_cast<PFN_vkVoidFunction>(vkCmdResolveImage)},
+ {"vkCmdSetBlendConstants", reinterpret_cast<PFN_vkVoidFunction>(vkCmdSetBlendConstants)},
+ {"vkCmdSetDepthBias", reinterpret_cast<PFN_vkVoidFunction>(vkCmdSetDepthBias)},
+ {"vkCmdSetDepthBounds", reinterpret_cast<PFN_vkVoidFunction>(vkCmdSetDepthBounds)},
+ {"vkCmdSetEvent", reinterpret_cast<PFN_vkVoidFunction>(vkCmdSetEvent)},
+ {"vkCmdSetLineWidth", reinterpret_cast<PFN_vkVoidFunction>(vkCmdSetLineWidth)},
+ {"vkCmdSetScissor", reinterpret_cast<PFN_vkVoidFunction>(vkCmdSetScissor)},
+ {"vkCmdSetStencilCompareMask", reinterpret_cast<PFN_vkVoidFunction>(vkCmdSetStencilCompareMask)},
+ {"vkCmdSetStencilReference", reinterpret_cast<PFN_vkVoidFunction>(vkCmdSetStencilReference)},
+ {"vkCmdSetStencilWriteMask", reinterpret_cast<PFN_vkVoidFunction>(vkCmdSetStencilWriteMask)},
+ {"vkCmdSetViewport", reinterpret_cast<PFN_vkVoidFunction>(vkCmdSetViewport)},
+ {"vkCmdUpdateBuffer", reinterpret_cast<PFN_vkVoidFunction>(vkCmdUpdateBuffer)},
+ {"vkCmdWaitEvents", reinterpret_cast<PFN_vkVoidFunction>(vkCmdWaitEvents)},
+ {"vkCmdWriteTimestamp", reinterpret_cast<PFN_vkVoidFunction>(vkCmdWriteTimestamp)},
+ {"vkCreateAndroidSurfaceKHR", reinterpret_cast<PFN_vkVoidFunction>(vkCreateAndroidSurfaceKHR)},
+ {"vkCreateBuffer", reinterpret_cast<PFN_vkVoidFunction>(vkCreateBuffer)},
+ {"vkCreateBufferView", reinterpret_cast<PFN_vkVoidFunction>(vkCreateBufferView)},
+ {"vkCreateCommandPool", reinterpret_cast<PFN_vkVoidFunction>(vkCreateCommandPool)},
+ {"vkCreateComputePipelines", reinterpret_cast<PFN_vkVoidFunction>(vkCreateComputePipelines)},
+ {"vkCreateDescriptorPool", reinterpret_cast<PFN_vkVoidFunction>(vkCreateDescriptorPool)},
+ {"vkCreateDescriptorSetLayout", reinterpret_cast<PFN_vkVoidFunction>(vkCreateDescriptorSetLayout)},
+ {"vkCreateDevice", reinterpret_cast<PFN_vkVoidFunction>(vkCreateDevice)},
+ {"vkCreateEvent", reinterpret_cast<PFN_vkVoidFunction>(vkCreateEvent)},
+ {"vkCreateFence", reinterpret_cast<PFN_vkVoidFunction>(vkCreateFence)},
+ {"vkCreateFramebuffer", reinterpret_cast<PFN_vkVoidFunction>(vkCreateFramebuffer)},
+ {"vkCreateGraphicsPipelines", reinterpret_cast<PFN_vkVoidFunction>(vkCreateGraphicsPipelines)},
+ {"vkCreateImage", reinterpret_cast<PFN_vkVoidFunction>(vkCreateImage)},
+ {"vkCreateImageView", reinterpret_cast<PFN_vkVoidFunction>(vkCreateImageView)},
+ {"vkCreateInstance", reinterpret_cast<PFN_vkVoidFunction>(vkCreateInstance)},
+ {"vkCreatePipelineCache", reinterpret_cast<PFN_vkVoidFunction>(vkCreatePipelineCache)},
+ {"vkCreatePipelineLayout", reinterpret_cast<PFN_vkVoidFunction>(vkCreatePipelineLayout)},
+ {"vkCreateQueryPool", reinterpret_cast<PFN_vkVoidFunction>(vkCreateQueryPool)},
+ {"vkCreateRenderPass", reinterpret_cast<PFN_vkVoidFunction>(vkCreateRenderPass)},
+ {"vkCreateSampler", reinterpret_cast<PFN_vkVoidFunction>(vkCreateSampler)},
+ {"vkCreateSemaphore", reinterpret_cast<PFN_vkVoidFunction>(vkCreateSemaphore)},
+ {"vkCreateShaderModule", reinterpret_cast<PFN_vkVoidFunction>(vkCreateShaderModule)},
+ {"vkCreateSwapchainKHR", reinterpret_cast<PFN_vkVoidFunction>(vkCreateSwapchainKHR)},
+ {"vkDestroyBuffer", reinterpret_cast<PFN_vkVoidFunction>(vkDestroyBuffer)},
+ {"vkDestroyBufferView", reinterpret_cast<PFN_vkVoidFunction>(vkDestroyBufferView)},
+ {"vkDestroyCommandPool", reinterpret_cast<PFN_vkVoidFunction>(vkDestroyCommandPool)},
+ {"vkDestroyDescriptorPool", reinterpret_cast<PFN_vkVoidFunction>(vkDestroyDescriptorPool)},
+ {"vkDestroyDescriptorSetLayout", reinterpret_cast<PFN_vkVoidFunction>(vkDestroyDescriptorSetLayout)},
+ {"vkDestroyDevice", reinterpret_cast<PFN_vkVoidFunction>(vkDestroyDevice)},
+ {"vkDestroyEvent", reinterpret_cast<PFN_vkVoidFunction>(vkDestroyEvent)},
+ {"vkDestroyFence", reinterpret_cast<PFN_vkVoidFunction>(vkDestroyFence)},
+ {"vkDestroyFramebuffer", reinterpret_cast<PFN_vkVoidFunction>(vkDestroyFramebuffer)},
+ {"vkDestroyImage", reinterpret_cast<PFN_vkVoidFunction>(vkDestroyImage)},
+ {"vkDestroyImageView", reinterpret_cast<PFN_vkVoidFunction>(vkDestroyImageView)},
+ {"vkDestroyInstance", reinterpret_cast<PFN_vkVoidFunction>(vkDestroyInstance)},
+ {"vkDestroyPipeline", reinterpret_cast<PFN_vkVoidFunction>(vkDestroyPipeline)},
+ {"vkDestroyPipelineCache", reinterpret_cast<PFN_vkVoidFunction>(vkDestroyPipelineCache)},
+ {"vkDestroyPipelineLayout", reinterpret_cast<PFN_vkVoidFunction>(vkDestroyPipelineLayout)},
+ {"vkDestroyQueryPool", reinterpret_cast<PFN_vkVoidFunction>(vkDestroyQueryPool)},
+ {"vkDestroyRenderPass", reinterpret_cast<PFN_vkVoidFunction>(vkDestroyRenderPass)},
+ {"vkDestroySampler", reinterpret_cast<PFN_vkVoidFunction>(vkDestroySampler)},
+ {"vkDestroySemaphore", reinterpret_cast<PFN_vkVoidFunction>(vkDestroySemaphore)},
+ {"vkDestroyShaderModule", reinterpret_cast<PFN_vkVoidFunction>(vkDestroyShaderModule)},
+ {"vkDestroySurfaceKHR", reinterpret_cast<PFN_vkVoidFunction>(vkDestroySurfaceKHR)},
+ {"vkDestroySwapchainKHR", reinterpret_cast<PFN_vkVoidFunction>(vkDestroySwapchainKHR)},
+ {"vkDeviceWaitIdle", reinterpret_cast<PFN_vkVoidFunction>(vkDeviceWaitIdle)},
+ {"vkEndCommandBuffer", reinterpret_cast<PFN_vkVoidFunction>(vkEndCommandBuffer)},
+ {"vkEnumerateDeviceExtensionProperties", reinterpret_cast<PFN_vkVoidFunction>(vkEnumerateDeviceExtensionProperties)},
+ {"vkEnumerateDeviceLayerProperties", reinterpret_cast<PFN_vkVoidFunction>(vkEnumerateDeviceLayerProperties)},
+ {"vkEnumerateInstanceExtensionProperties", reinterpret_cast<PFN_vkVoidFunction>(vkEnumerateInstanceExtensionProperties)},
+ {"vkEnumerateInstanceLayerProperties", reinterpret_cast<PFN_vkVoidFunction>(vkEnumerateInstanceLayerProperties)},
+ {"vkEnumeratePhysicalDevices", reinterpret_cast<PFN_vkVoidFunction>(vkEnumeratePhysicalDevices)},
+ {"vkFlushMappedMemoryRanges", reinterpret_cast<PFN_vkVoidFunction>(vkFlushMappedMemoryRanges)},
+ {"vkFreeCommandBuffers", reinterpret_cast<PFN_vkVoidFunction>(vkFreeCommandBuffers)},
+ {"vkFreeDescriptorSets", reinterpret_cast<PFN_vkVoidFunction>(vkFreeDescriptorSets)},
+ {"vkFreeMemory", reinterpret_cast<PFN_vkVoidFunction>(vkFreeMemory)},
+ {"vkGetBufferMemoryRequirements", reinterpret_cast<PFN_vkVoidFunction>(vkGetBufferMemoryRequirements)},
+ {"vkGetDeviceMemoryCommitment", reinterpret_cast<PFN_vkVoidFunction>(vkGetDeviceMemoryCommitment)},
+ {"vkGetDeviceProcAddr", reinterpret_cast<PFN_vkVoidFunction>(vkGetDeviceProcAddr)},
+ {"vkGetDeviceQueue", reinterpret_cast<PFN_vkVoidFunction>(vkGetDeviceQueue)},
+ {"vkGetEventStatus", reinterpret_cast<PFN_vkVoidFunction>(vkGetEventStatus)},
+ {"vkGetFenceStatus", reinterpret_cast<PFN_vkVoidFunction>(vkGetFenceStatus)},
+ {"vkGetImageMemoryRequirements", reinterpret_cast<PFN_vkVoidFunction>(vkGetImageMemoryRequirements)},
+ {"vkGetImageSparseMemoryRequirements", reinterpret_cast<PFN_vkVoidFunction>(vkGetImageSparseMemoryRequirements)},
+ {"vkGetImageSubresourceLayout", reinterpret_cast<PFN_vkVoidFunction>(vkGetImageSubresourceLayout)},
+ {"vkGetInstanceProcAddr", reinterpret_cast<PFN_vkVoidFunction>(vkGetInstanceProcAddr)},
+ {"vkGetPhysicalDeviceFeatures", reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceFeatures)},
+ {"vkGetPhysicalDeviceFormatProperties", reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceFormatProperties)},
+ {"vkGetPhysicalDeviceImageFormatProperties", reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceImageFormatProperties)},
+ {"vkGetPhysicalDeviceMemoryProperties", reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceMemoryProperties)},
+ {"vkGetPhysicalDeviceProperties", reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceProperties)},
+ {"vkGetPhysicalDeviceQueueFamilyProperties", reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceQueueFamilyProperties)},
+ {"vkGetPhysicalDeviceSparseImageFormatProperties", reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceSparseImageFormatProperties)},
+ {"vkGetPhysicalDeviceSurfaceCapabilitiesKHR", reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceSurfaceCapabilitiesKHR)},
+ {"vkGetPhysicalDeviceSurfaceFormatsKHR", reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceSurfaceFormatsKHR)},
+ {"vkGetPhysicalDeviceSurfacePresentModesKHR", reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceSurfacePresentModesKHR)},
+ {"vkGetPhysicalDeviceSurfaceSupportKHR", reinterpret_cast<PFN_vkVoidFunction>(vkGetPhysicalDeviceSurfaceSupportKHR)},
+ {"vkGetPipelineCacheData", reinterpret_cast<PFN_vkVoidFunction>(vkGetPipelineCacheData)},
+ {"vkGetQueryPoolResults", reinterpret_cast<PFN_vkVoidFunction>(vkGetQueryPoolResults)},
+ {"vkGetRenderAreaGranularity", reinterpret_cast<PFN_vkVoidFunction>(vkGetRenderAreaGranularity)},
+ {"vkGetSwapchainImagesKHR", reinterpret_cast<PFN_vkVoidFunction>(vkGetSwapchainImagesKHR)},
+ {"vkInvalidateMappedMemoryRanges", reinterpret_cast<PFN_vkVoidFunction>(vkInvalidateMappedMemoryRanges)},
+ {"vkMapMemory", reinterpret_cast<PFN_vkVoidFunction>(vkMapMemory)},
+ {"vkMergePipelineCaches", reinterpret_cast<PFN_vkVoidFunction>(vkMergePipelineCaches)},
+ {"vkQueueBindSparse", reinterpret_cast<PFN_vkVoidFunction>(vkQueueBindSparse)},
+ {"vkQueuePresentKHR", reinterpret_cast<PFN_vkVoidFunction>(vkQueuePresentKHR)},
+ {"vkQueueSubmit", reinterpret_cast<PFN_vkVoidFunction>(vkQueueSubmit)},
+ {"vkQueueWaitIdle", reinterpret_cast<PFN_vkVoidFunction>(vkQueueWaitIdle)},
+ {"vkResetCommandBuffer", reinterpret_cast<PFN_vkVoidFunction>(vkResetCommandBuffer)},
+ {"vkResetCommandPool", reinterpret_cast<PFN_vkVoidFunction>(vkResetCommandPool)},
+ {"vkResetDescriptorPool", reinterpret_cast<PFN_vkVoidFunction>(vkResetDescriptorPool)},
+ {"vkResetEvent", reinterpret_cast<PFN_vkVoidFunction>(vkResetEvent)},
+ {"vkResetFences", reinterpret_cast<PFN_vkVoidFunction>(vkResetFences)},
+ {"vkSetEvent", reinterpret_cast<PFN_vkVoidFunction>(vkSetEvent)},
+ {"vkUnmapMemory", reinterpret_cast<PFN_vkVoidFunction>(vkUnmapMemory)},
+ {"vkUpdateDescriptorSets", reinterpret_cast<PFN_vkVoidFunction>(vkUpdateDescriptorSets)},
+ {"vkWaitForFences", reinterpret_cast<PFN_vkVoidFunction>(vkWaitForFences)},
+ // clang-format on
+};
+
+const NameProc kLoaderGlobalProcs[] = {
+ // clang-format off
+ {"vkCreateInstance", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateInstance>(CreateInstance_Top))},
+ {"vkEnumerateInstanceExtensionProperties", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkEnumerateInstanceExtensionProperties>(EnumerateInstanceExtensionProperties_Top))},
+ {"vkEnumerateInstanceLayerProperties", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkEnumerateInstanceLayerProperties>(EnumerateInstanceLayerProperties_Top))},
+ // clang-format on
+};
+
+const NameProc kLoaderTopProcs[] = {
+ // clang-format off
+ {"vkAllocateCommandBuffers", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkAllocateCommandBuffers>(AllocateCommandBuffers_Top))},
+ {"vkCreateInstance", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateInstance>(CreateInstance_Top))},
+ {"vkDestroyDevice", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDestroyDevice>(DestroyDevice_Top))},
+ {"vkDestroyInstance", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDestroyInstance>(DestroyInstance_Top))},
+ {"vkEnumerateInstanceExtensionProperties", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkEnumerateInstanceExtensionProperties>(EnumerateInstanceExtensionProperties_Top))},
+ {"vkEnumerateInstanceLayerProperties", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkEnumerateInstanceLayerProperties>(EnumerateInstanceLayerProperties_Top))},
+ {"vkGetDeviceProcAddr", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetDeviceProcAddr>(GetDeviceProcAddr_Top))},
+ {"vkGetDeviceQueue", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetDeviceQueue>(GetDeviceQueue_Top))},
+ {"vkGetInstanceProcAddr", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetInstanceProcAddr>(GetInstanceProcAddr_Top))},
+ // clang-format on
+};
+
+const NameProc kLoaderBottomProcs[] = {
+ // clang-format off
+ {"vkAcquireNextImageKHR", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkAcquireNextImageKHR>(AcquireNextImageKHR_Bottom))},
+ {"vkCreateAndroidSurfaceKHR", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateAndroidSurfaceKHR>(CreateAndroidSurfaceKHR_Bottom))},
+ {"vkCreateDebugReportCallbackEXT", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateDebugReportCallbackEXT>(CreateDebugReportCallbackEXT_Bottom))},
+ {"vkCreateDevice", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateDevice>(CreateDevice_Bottom))},
+ {"vkCreateInstance", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateInstance>(CreateInstance_Bottom))},
+ {"vkCreateSwapchainKHR", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateSwapchainKHR>(CreateSwapchainKHR_Bottom))},
+ {"vkDebugReportMessageEXT", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDebugReportMessageEXT>(DebugReportMessageEXT_Bottom))},
+ {"vkDestroyDebugReportCallbackEXT", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDestroyDebugReportCallbackEXT>(DestroyDebugReportCallbackEXT_Bottom))},
+ {"vkDestroyInstance", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDestroyInstance>(DestroyInstance_Bottom))},
+ {"vkDestroySurfaceKHR", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDestroySurfaceKHR>(DestroySurfaceKHR_Bottom))},
+ {"vkDestroySwapchainKHR", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDestroySwapchainKHR>(DestroySwapchainKHR_Bottom))},
+ {"vkEnumerateDeviceExtensionProperties", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkEnumerateDeviceExtensionProperties>(EnumerateDeviceExtensionProperties_Bottom))},
+ {"vkEnumerateDeviceLayerProperties", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkEnumerateDeviceLayerProperties>(EnumerateDeviceLayerProperties_Bottom))},
+ {"vkEnumeratePhysicalDevices", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkEnumeratePhysicalDevices>(EnumeratePhysicalDevices_Bottom))},
+ {"vkGetDeviceProcAddr", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetDeviceProcAddr>(GetDeviceProcAddr_Bottom))},
+ {"vkGetInstanceProcAddr", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetInstanceProcAddr>(GetInstanceProcAddr_Bottom))},
+ {"vkGetPhysicalDeviceFeatures", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetPhysicalDeviceFeatures>(GetPhysicalDeviceFeatures_Bottom))},
+ {"vkGetPhysicalDeviceFormatProperties", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetPhysicalDeviceFormatProperties>(GetPhysicalDeviceFormatProperties_Bottom))},
+ {"vkGetPhysicalDeviceImageFormatProperties", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetPhysicalDeviceImageFormatProperties>(GetPhysicalDeviceImageFormatProperties_Bottom))},
+ {"vkGetPhysicalDeviceMemoryProperties", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetPhysicalDeviceMemoryProperties>(GetPhysicalDeviceMemoryProperties_Bottom))},
+ {"vkGetPhysicalDeviceProperties", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetPhysicalDeviceProperties>(GetPhysicalDeviceProperties_Bottom))},
+ {"vkGetPhysicalDeviceQueueFamilyProperties", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetPhysicalDeviceQueueFamilyProperties>(GetPhysicalDeviceQueueFamilyProperties_Bottom))},
+ {"vkGetPhysicalDeviceSparseImageFormatProperties", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetPhysicalDeviceSparseImageFormatProperties>(GetPhysicalDeviceSparseImageFormatProperties_Bottom))},
+ {"vkGetPhysicalDeviceSurfaceCapabilitiesKHR", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetPhysicalDeviceSurfaceCapabilitiesKHR>(GetPhysicalDeviceSurfaceCapabilitiesKHR_Bottom))},
+ {"vkGetPhysicalDeviceSurfaceFormatsKHR", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetPhysicalDeviceSurfaceFormatsKHR>(GetPhysicalDeviceSurfaceFormatsKHR_Bottom))},
+ {"vkGetPhysicalDeviceSurfacePresentModesKHR", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetPhysicalDeviceSurfacePresentModesKHR>(GetPhysicalDeviceSurfacePresentModesKHR_Bottom))},
+ {"vkGetPhysicalDeviceSurfaceSupportKHR", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetPhysicalDeviceSurfaceSupportKHR>(GetPhysicalDeviceSurfaceSupportKHR_Bottom))},
+ {"vkGetSwapchainImagesKHR", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetSwapchainImagesKHR>(GetSwapchainImagesKHR_Bottom))},
+ {"vkQueuePresentKHR", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkQueuePresentKHR>(QueuePresentKHR_Bottom))},
+ // clang-format on
+};
+
+struct NameOffset {
+ const char* name;
+ size_t offset;
+};
+
+ssize_t Lookup(const char* name,
+ const NameOffset* begin,
+ const NameOffset* end) {
+ const auto& entry = std::lower_bound(
+ begin, end, name, [](const NameOffset& e, const char* n) {
+ return strcmp(e.name, n) < 0;
+ });
+ if (entry == end || strcmp(entry->name, name) != 0)
+ return -1;
+ return static_cast<ssize_t>(entry->offset);
+}
+
+template <size_t N, class Table>
+PFN_vkVoidFunction Lookup(const char* name,
+ const NameOffset (&offsets)[N],
+ const Table& table) {
+ ssize_t offset = Lookup(name, offsets, offsets + N);
+ if (offset < 0)
+ return nullptr;
+ uintptr_t base = reinterpret_cast<uintptr_t>(&table);
+ return *reinterpret_cast<PFN_vkVoidFunction*>(base +
+ static_cast<size_t>(offset));
+}
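The offset-based `Lookup` above maps a name to a byte offset within a dispatch table, then reads the function pointer stored at that address. A standalone sketch of the same `offsetof` technique (the table and entry names are hypothetical):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

using VoidFn = void (*)();

// Hypothetical two-entry dispatch table.
struct Table {
    VoidFn CreateFoo;
    VoidFn DestroyFoo;
};

struct NameOffset {
    const char* name;
    size_t offset;  // byte offset of the corresponding Table member
};

const NameOffset kOffsets[] = {
    {"vkCreateFoo", offsetof(Table, CreateFoo)},
    {"vkDestroyFoo", offsetof(Table, DestroyFoo)},
};

// Resolve |name| to the function pointer stored at its offset in |table|.
VoidFn Lookup(const char* name, const Table& table) {
    for (const NameOffset& e : kOffsets) {
        if (strcmp(e.name, name) == 0) {
            uintptr_t base = reinterpret_cast<uintptr_t>(&table);
            return *reinterpret_cast<VoidFn*>(base + e.offset);
        }
    }
    return nullptr;
}
```

One offset array can therefore serve every dispatch-table instance: each layer or driver fills in its own `Table`, and the same name lookup reads whichever pointers that table holds.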
+
+const NameOffset kInstanceDispatchOffsets[] = {
+ // clang-format off
+ {"vkCreateAndroidSurfaceKHR", offsetof(InstanceDispatchTable, CreateAndroidSurfaceKHR)},
+ {"vkCreateDebugReportCallbackEXT", offsetof(InstanceDispatchTable, CreateDebugReportCallbackEXT)},
+ {"vkCreateDevice", offsetof(InstanceDispatchTable, CreateDevice)},
+ {"vkDebugReportMessageEXT", offsetof(InstanceDispatchTable, DebugReportMessageEXT)},
+ {"vkDestroyDebugReportCallbackEXT", offsetof(InstanceDispatchTable, DestroyDebugReportCallbackEXT)},
+ {"vkDestroyInstance", offsetof(InstanceDispatchTable, DestroyInstance)},
+ {"vkDestroySurfaceKHR", offsetof(InstanceDispatchTable, DestroySurfaceKHR)},
+ {"vkEnumerateDeviceExtensionProperties", offsetof(InstanceDispatchTable, EnumerateDeviceExtensionProperties)},
+ {"vkEnumerateDeviceLayerProperties", offsetof(InstanceDispatchTable, EnumerateDeviceLayerProperties)},
+ {"vkEnumeratePhysicalDevices", offsetof(InstanceDispatchTable, EnumeratePhysicalDevices)},
+ {"vkGetPhysicalDeviceFeatures", offsetof(InstanceDispatchTable, GetPhysicalDeviceFeatures)},
+ {"vkGetPhysicalDeviceFormatProperties", offsetof(InstanceDispatchTable, GetPhysicalDeviceFormatProperties)},
+ {"vkGetPhysicalDeviceImageFormatProperties", offsetof(InstanceDispatchTable, GetPhysicalDeviceImageFormatProperties)},
+ {"vkGetPhysicalDeviceMemoryProperties", offsetof(InstanceDispatchTable, GetPhysicalDeviceMemoryProperties)},
+ {"vkGetPhysicalDeviceProperties", offsetof(InstanceDispatchTable, GetPhysicalDeviceProperties)},
+ {"vkGetPhysicalDeviceQueueFamilyProperties", offsetof(InstanceDispatchTable, GetPhysicalDeviceQueueFamilyProperties)},
+ {"vkGetPhysicalDeviceSparseImageFormatProperties", offsetof(InstanceDispatchTable, GetPhysicalDeviceSparseImageFormatProperties)},
+ {"vkGetPhysicalDeviceSurfaceCapabilitiesKHR", offsetof(InstanceDispatchTable, GetPhysicalDeviceSurfaceCapabilitiesKHR)},
+ {"vkGetPhysicalDeviceSurfaceFormatsKHR", offsetof(InstanceDispatchTable, GetPhysicalDeviceSurfaceFormatsKHR)},
+ {"vkGetPhysicalDeviceSurfacePresentModesKHR", offsetof(InstanceDispatchTable, GetPhysicalDeviceSurfacePresentModesKHR)},
+ {"vkGetPhysicalDeviceSurfaceSupportKHR", offsetof(InstanceDispatchTable, GetPhysicalDeviceSurfaceSupportKHR)},
+ // clang-format on
+};
+
+const NameOffset kDeviceDispatchOffsets[] = {
+ // clang-format off
+ {"vkAcquireNextImageKHR", offsetof(DeviceDispatchTable, AcquireNextImageKHR)},
+ {"vkAllocateCommandBuffers", offsetof(DeviceDispatchTable, AllocateCommandBuffers)},
+ {"vkAllocateDescriptorSets", offsetof(DeviceDispatchTable, AllocateDescriptorSets)},
+ {"vkAllocateMemory", offsetof(DeviceDispatchTable, AllocateMemory)},
+ {"vkBeginCommandBuffer", offsetof(DeviceDispatchTable, BeginCommandBuffer)},
+ {"vkBindBufferMemory", offsetof(DeviceDispatchTable, BindBufferMemory)},
+ {"vkBindImageMemory", offsetof(DeviceDispatchTable, BindImageMemory)},
+ {"vkCmdBeginQuery", offsetof(DeviceDispatchTable, CmdBeginQuery)},
+ {"vkCmdBeginRenderPass", offsetof(DeviceDispatchTable, CmdBeginRenderPass)},
+ {"vkCmdBindDescriptorSets", offsetof(DeviceDispatchTable, CmdBindDescriptorSets)},
+ {"vkCmdBindIndexBuffer", offsetof(DeviceDispatchTable, CmdBindIndexBuffer)},
+ {"vkCmdBindPipeline", offsetof(DeviceDispatchTable, CmdBindPipeline)},
+ {"vkCmdBindVertexBuffers", offsetof(DeviceDispatchTable, CmdBindVertexBuffers)},
+ {"vkCmdBlitImage", offsetof(DeviceDispatchTable, CmdBlitImage)},
+ {"vkCmdClearAttachments", offsetof(DeviceDispatchTable, CmdClearAttachments)},
+ {"vkCmdClearColorImage", offsetof(DeviceDispatchTable, CmdClearColorImage)},
+ {"vkCmdClearDepthStencilImage", offsetof(DeviceDispatchTable, CmdClearDepthStencilImage)},
+ {"vkCmdCopyBuffer", offsetof(DeviceDispatchTable, CmdCopyBuffer)},
+ {"vkCmdCopyBufferToImage", offsetof(DeviceDispatchTable, CmdCopyBufferToImage)},
+ {"vkCmdCopyImage", offsetof(DeviceDispatchTable, CmdCopyImage)},
+ {"vkCmdCopyImageToBuffer", offsetof(DeviceDispatchTable, CmdCopyImageToBuffer)},
+ {"vkCmdCopyQueryPoolResults", offsetof(DeviceDispatchTable, CmdCopyQueryPoolResults)},
+ {"vkCmdDispatch", offsetof(DeviceDispatchTable, CmdDispatch)},
+ {"vkCmdDispatchIndirect", offsetof(DeviceDispatchTable, CmdDispatchIndirect)},
+ {"vkCmdDraw", offsetof(DeviceDispatchTable, CmdDraw)},
+ {"vkCmdDrawIndexed", offsetof(DeviceDispatchTable, CmdDrawIndexed)},
+ {"vkCmdDrawIndexedIndirect", offsetof(DeviceDispatchTable, CmdDrawIndexedIndirect)},
+ {"vkCmdDrawIndirect", offsetof(DeviceDispatchTable, CmdDrawIndirect)},
+ {"vkCmdEndQuery", offsetof(DeviceDispatchTable, CmdEndQuery)},
+ {"vkCmdEndRenderPass", offsetof(DeviceDispatchTable, CmdEndRenderPass)},
+ {"vkCmdExecuteCommands", offsetof(DeviceDispatchTable, CmdExecuteCommands)},
+ {"vkCmdFillBuffer", offsetof(DeviceDispatchTable, CmdFillBuffer)},
+ {"vkCmdNextSubpass", offsetof(DeviceDispatchTable, CmdNextSubpass)},
+ {"vkCmdPipelineBarrier", offsetof(DeviceDispatchTable, CmdPipelineBarrier)},
+ {"vkCmdPushConstants", offsetof(DeviceDispatchTable, CmdPushConstants)},
+ {"vkCmdResetEvent", offsetof(DeviceDispatchTable, CmdResetEvent)},
+ {"vkCmdResetQueryPool", offsetof(DeviceDispatchTable, CmdResetQueryPool)},
+ {"vkCmdResolveImage", offsetof(DeviceDispatchTable, CmdResolveImage)},
+ {"vkCmdSetBlendConstants", offsetof(DeviceDispatchTable, CmdSetBlendConstants)},
+ {"vkCmdSetDepthBias", offsetof(DeviceDispatchTable, CmdSetDepthBias)},
+ {"vkCmdSetDepthBounds", offsetof(DeviceDispatchTable, CmdSetDepthBounds)},
+ {"vkCmdSetEvent", offsetof(DeviceDispatchTable, CmdSetEvent)},
+ {"vkCmdSetLineWidth", offsetof(DeviceDispatchTable, CmdSetLineWidth)},
+ {"vkCmdSetScissor", offsetof(DeviceDispatchTable, CmdSetScissor)},
+ {"vkCmdSetStencilCompareMask", offsetof(DeviceDispatchTable, CmdSetStencilCompareMask)},
+ {"vkCmdSetStencilReference", offsetof(DeviceDispatchTable, CmdSetStencilReference)},
+ {"vkCmdSetStencilWriteMask", offsetof(DeviceDispatchTable, CmdSetStencilWriteMask)},
+ {"vkCmdSetViewport", offsetof(DeviceDispatchTable, CmdSetViewport)},
+ {"vkCmdUpdateBuffer", offsetof(DeviceDispatchTable, CmdUpdateBuffer)},
+ {"vkCmdWaitEvents", offsetof(DeviceDispatchTable, CmdWaitEvents)},
+ {"vkCmdWriteTimestamp", offsetof(DeviceDispatchTable, CmdWriteTimestamp)},
+ {"vkCreateBuffer", offsetof(DeviceDispatchTable, CreateBuffer)},
+ {"vkCreateBufferView", offsetof(DeviceDispatchTable, CreateBufferView)},
+ {"vkCreateCommandPool", offsetof(DeviceDispatchTable, CreateCommandPool)},
+ {"vkCreateComputePipelines", offsetof(DeviceDispatchTable, CreateComputePipelines)},
+ {"vkCreateDescriptorPool", offsetof(DeviceDispatchTable, CreateDescriptorPool)},
+ {"vkCreateDescriptorSetLayout", offsetof(DeviceDispatchTable, CreateDescriptorSetLayout)},
+ {"vkCreateEvent", offsetof(DeviceDispatchTable, CreateEvent)},
+ {"vkCreateFence", offsetof(DeviceDispatchTable, CreateFence)},
+ {"vkCreateFramebuffer", offsetof(DeviceDispatchTable, CreateFramebuffer)},
+ {"vkCreateGraphicsPipelines", offsetof(DeviceDispatchTable, CreateGraphicsPipelines)},
+ {"vkCreateImage", offsetof(DeviceDispatchTable, CreateImage)},
+ {"vkCreateImageView", offsetof(DeviceDispatchTable, CreateImageView)},
+ {"vkCreatePipelineCache", offsetof(DeviceDispatchTable, CreatePipelineCache)},
+ {"vkCreatePipelineLayout", offsetof(DeviceDispatchTable, CreatePipelineLayout)},
+ {"vkCreateQueryPool", offsetof(DeviceDispatchTable, CreateQueryPool)},
+ {"vkCreateRenderPass", offsetof(DeviceDispatchTable, CreateRenderPass)},
+ {"vkCreateSampler", offsetof(DeviceDispatchTable, CreateSampler)},
+ {"vkCreateSemaphore", offsetof(DeviceDispatchTable, CreateSemaphore)},
+ {"vkCreateShaderModule", offsetof(DeviceDispatchTable, CreateShaderModule)},
+ {"vkCreateSwapchainKHR", offsetof(DeviceDispatchTable, CreateSwapchainKHR)},
+ {"vkDestroyBuffer", offsetof(DeviceDispatchTable, DestroyBuffer)},
+ {"vkDestroyBufferView", offsetof(DeviceDispatchTable, DestroyBufferView)},
+ {"vkDestroyCommandPool", offsetof(DeviceDispatchTable, DestroyCommandPool)},
+ {"vkDestroyDescriptorPool", offsetof(DeviceDispatchTable, DestroyDescriptorPool)},
+ {"vkDestroyDescriptorSetLayout", offsetof(DeviceDispatchTable, DestroyDescriptorSetLayout)},
+ {"vkDestroyDevice", offsetof(DeviceDispatchTable, DestroyDevice)},
+ {"vkDestroyEvent", offsetof(DeviceDispatchTable, DestroyEvent)},
+ {"vkDestroyFence", offsetof(DeviceDispatchTable, DestroyFence)},
+ {"vkDestroyFramebuffer", offsetof(DeviceDispatchTable, DestroyFramebuffer)},
+ {"vkDestroyImage", offsetof(DeviceDispatchTable, DestroyImage)},
+ {"vkDestroyImageView", offsetof(DeviceDispatchTable, DestroyImageView)},
+ {"vkDestroyPipeline", offsetof(DeviceDispatchTable, DestroyPipeline)},
+ {"vkDestroyPipelineCache", offsetof(DeviceDispatchTable, DestroyPipelineCache)},
+ {"vkDestroyPipelineLayout", offsetof(DeviceDispatchTable, DestroyPipelineLayout)},
+ {"vkDestroyQueryPool", offsetof(DeviceDispatchTable, DestroyQueryPool)},
+ {"vkDestroyRenderPass", offsetof(DeviceDispatchTable, DestroyRenderPass)},
+ {"vkDestroySampler", offsetof(DeviceDispatchTable, DestroySampler)},
+ {"vkDestroySemaphore", offsetof(DeviceDispatchTable, DestroySemaphore)},
+ {"vkDestroyShaderModule", offsetof(DeviceDispatchTable, DestroyShaderModule)},
+ {"vkDestroySwapchainKHR", offsetof(DeviceDispatchTable, DestroySwapchainKHR)},
+ {"vkDeviceWaitIdle", offsetof(DeviceDispatchTable, DeviceWaitIdle)},
+ {"vkEndCommandBuffer", offsetof(DeviceDispatchTable, EndCommandBuffer)},
+ {"vkFlushMappedMemoryRanges", offsetof(DeviceDispatchTable, FlushMappedMemoryRanges)},
+ {"vkFreeCommandBuffers", offsetof(DeviceDispatchTable, FreeCommandBuffers)},
+ {"vkFreeDescriptorSets", offsetof(DeviceDispatchTable, FreeDescriptorSets)},
+ {"vkFreeMemory", offsetof(DeviceDispatchTable, FreeMemory)},
+ {"vkGetBufferMemoryRequirements", offsetof(DeviceDispatchTable, GetBufferMemoryRequirements)},
+ {"vkGetDeviceMemoryCommitment", offsetof(DeviceDispatchTable, GetDeviceMemoryCommitment)},
+ {"vkGetDeviceQueue", offsetof(DeviceDispatchTable, GetDeviceQueue)},
+ {"vkGetEventStatus", offsetof(DeviceDispatchTable, GetEventStatus)},
+ {"vkGetFenceStatus", offsetof(DeviceDispatchTable, GetFenceStatus)},
+ {"vkGetImageMemoryRequirements", offsetof(DeviceDispatchTable, GetImageMemoryRequirements)},
+ {"vkGetImageSparseMemoryRequirements", offsetof(DeviceDispatchTable, GetImageSparseMemoryRequirements)},
+ {"vkGetImageSubresourceLayout", offsetof(DeviceDispatchTable, GetImageSubresourceLayout)},
+ {"vkGetPipelineCacheData", offsetof(DeviceDispatchTable, GetPipelineCacheData)},
+ {"vkGetQueryPoolResults", offsetof(DeviceDispatchTable, GetQueryPoolResults)},
+ {"vkGetRenderAreaGranularity", offsetof(DeviceDispatchTable, GetRenderAreaGranularity)},
+ {"vkGetSwapchainImagesKHR", offsetof(DeviceDispatchTable, GetSwapchainImagesKHR)},
+ {"vkInvalidateMappedMemoryRanges", offsetof(DeviceDispatchTable, InvalidateMappedMemoryRanges)},
+ {"vkMapMemory", offsetof(DeviceDispatchTable, MapMemory)},
+ {"vkMergePipelineCaches", offsetof(DeviceDispatchTable, MergePipelineCaches)},
+ {"vkQueueBindSparse", offsetof(DeviceDispatchTable, QueueBindSparse)},
+ {"vkQueuePresentKHR", offsetof(DeviceDispatchTable, QueuePresentKHR)},
+ {"vkQueueSubmit", offsetof(DeviceDispatchTable, QueueSubmit)},
+ {"vkQueueWaitIdle", offsetof(DeviceDispatchTable, QueueWaitIdle)},
+ {"vkResetCommandBuffer", offsetof(DeviceDispatchTable, ResetCommandBuffer)},
+ {"vkResetCommandPool", offsetof(DeviceDispatchTable, ResetCommandPool)},
+ {"vkResetDescriptorPool", offsetof(DeviceDispatchTable, ResetDescriptorPool)},
+ {"vkResetEvent", offsetof(DeviceDispatchTable, ResetEvent)},
+ {"vkResetFences", offsetof(DeviceDispatchTable, ResetFences)},
+ {"vkSetEvent", offsetof(DeviceDispatchTable, SetEvent)},
+ {"vkUnmapMemory", offsetof(DeviceDispatchTable, UnmapMemory)},
+ {"vkUpdateDescriptorSets", offsetof(DeviceDispatchTable, UpdateDescriptorSets)},
+ {"vkWaitForFences", offsetof(DeviceDispatchTable, WaitForFences)},
+ // clang-format on
+};
+
+} // anonymous namespace
+
+namespace vulkan {
+
+PFN_vkVoidFunction GetLoaderExportProcAddr(const char* name) {
+ return Lookup(name, kLoaderExportProcs);
+}
+
+PFN_vkVoidFunction GetLoaderGlobalProcAddr(const char* name) {
+ return Lookup(name, kLoaderGlobalProcs);
+}
+
+PFN_vkVoidFunction GetLoaderTopProcAddr(const char* name) {
+ return Lookup(name, kLoaderTopProcs);
+}
+
+PFN_vkVoidFunction GetLoaderBottomProcAddr(const char* name) {
+ return Lookup(name, kLoaderBottomProcs);
+}
+
+PFN_vkVoidFunction GetDispatchProcAddr(const InstanceDispatchTable& dispatch,
+ const char* name) {
+ return Lookup(name, kInstanceDispatchOffsets, dispatch);
+}
+
+PFN_vkVoidFunction GetDispatchProcAddr(const DeviceDispatchTable& dispatch,
+ const char* name) {
+ return Lookup(name, kDeviceDispatchOffsets, dispatch);
+}
+
+bool LoadInstanceDispatchTable(VkInstance instance,
+ PFN_vkGetInstanceProcAddr get_proc_addr,
+ InstanceDispatchTable& dispatch) {
+ bool success = true;
+ // clang-format off
+ dispatch.DestroyInstance = reinterpret_cast<PFN_vkDestroyInstance>(get_proc_addr(instance, "vkDestroyInstance"));
+ if (UNLIKELY(!dispatch.DestroyInstance)) {
+ ALOGE("missing instance proc: %s", "vkDestroyInstance");
+ success = false;
+ }
+ dispatch.EnumeratePhysicalDevices = reinterpret_cast<PFN_vkEnumeratePhysicalDevices>(get_proc_addr(instance, "vkEnumeratePhysicalDevices"));
+ if (UNLIKELY(!dispatch.EnumeratePhysicalDevices)) {
+ ALOGE("missing instance proc: %s", "vkEnumeratePhysicalDevices");
+ success = false;
+ }
+ dispatch.GetPhysicalDeviceProperties = reinterpret_cast<PFN_vkGetPhysicalDeviceProperties>(get_proc_addr(instance, "vkGetPhysicalDeviceProperties"));
+ if (UNLIKELY(!dispatch.GetPhysicalDeviceProperties)) {
+ ALOGE("missing instance proc: %s", "vkGetPhysicalDeviceProperties");
+ success = false;
+ }
+ dispatch.GetPhysicalDeviceQueueFamilyProperties = reinterpret_cast<PFN_vkGetPhysicalDeviceQueueFamilyProperties>(get_proc_addr(instance, "vkGetPhysicalDeviceQueueFamilyProperties"));
+ if (UNLIKELY(!dispatch.GetPhysicalDeviceQueueFamilyProperties)) {
+ ALOGE("missing instance proc: %s", "vkGetPhysicalDeviceQueueFamilyProperties");
+ success = false;
+ }
+ dispatch.GetPhysicalDeviceMemoryProperties = reinterpret_cast<PFN_vkGetPhysicalDeviceMemoryProperties>(get_proc_addr(instance, "vkGetPhysicalDeviceMemoryProperties"));
+ if (UNLIKELY(!dispatch.GetPhysicalDeviceMemoryProperties)) {
+ ALOGE("missing instance proc: %s", "vkGetPhysicalDeviceMemoryProperties");
+ success = false;
+ }
+ dispatch.GetPhysicalDeviceFeatures = reinterpret_cast<PFN_vkGetPhysicalDeviceFeatures>(get_proc_addr(instance, "vkGetPhysicalDeviceFeatures"));
+ if (UNLIKELY(!dispatch.GetPhysicalDeviceFeatures)) {
+ ALOGE("missing instance proc: %s", "vkGetPhysicalDeviceFeatures");
+ success = false;
+ }
+ dispatch.GetPhysicalDeviceFormatProperties = reinterpret_cast<PFN_vkGetPhysicalDeviceFormatProperties>(get_proc_addr(instance, "vkGetPhysicalDeviceFormatProperties"));
+ if (UNLIKELY(!dispatch.GetPhysicalDeviceFormatProperties)) {
+ ALOGE("missing instance proc: %s", "vkGetPhysicalDeviceFormatProperties");
+ success = false;
+ }
+ dispatch.GetPhysicalDeviceImageFormatProperties = reinterpret_cast<PFN_vkGetPhysicalDeviceImageFormatProperties>(get_proc_addr(instance, "vkGetPhysicalDeviceImageFormatProperties"));
+ if (UNLIKELY(!dispatch.GetPhysicalDeviceImageFormatProperties)) {
+ ALOGE("missing instance proc: %s", "vkGetPhysicalDeviceImageFormatProperties");
+ success = false;
+ }
+ dispatch.CreateDevice = reinterpret_cast<PFN_vkCreateDevice>(get_proc_addr(instance, "vkCreateDevice"));
+ if (UNLIKELY(!dispatch.CreateDevice)) {
+ ALOGE("missing instance proc: %s", "vkCreateDevice");
+ success = false;
+ }
+ dispatch.EnumerateDeviceLayerProperties = reinterpret_cast<PFN_vkEnumerateDeviceLayerProperties>(get_proc_addr(instance, "vkEnumerateDeviceLayerProperties"));
+ if (UNLIKELY(!dispatch.EnumerateDeviceLayerProperties)) {
+ ALOGE("missing instance proc: %s", "vkEnumerateDeviceLayerProperties");
+ success = false;
+ }
+ dispatch.EnumerateDeviceExtensionProperties = reinterpret_cast<PFN_vkEnumerateDeviceExtensionProperties>(get_proc_addr(instance, "vkEnumerateDeviceExtensionProperties"));
+ if (UNLIKELY(!dispatch.EnumerateDeviceExtensionProperties)) {
+ ALOGE("missing instance proc: %s", "vkEnumerateDeviceExtensionProperties");
+ success = false;
+ }
+ dispatch.GetPhysicalDeviceSparseImageFormatProperties = reinterpret_cast<PFN_vkGetPhysicalDeviceSparseImageFormatProperties>(get_proc_addr(instance, "vkGetPhysicalDeviceSparseImageFormatProperties"));
+ if (UNLIKELY(!dispatch.GetPhysicalDeviceSparseImageFormatProperties)) {
+ ALOGE("missing instance proc: %s", "vkGetPhysicalDeviceSparseImageFormatProperties");
+ success = false;
+ }
+ dispatch.DestroySurfaceKHR = reinterpret_cast<PFN_vkDestroySurfaceKHR>(get_proc_addr(instance, "vkDestroySurfaceKHR"));
+ if (UNLIKELY(!dispatch.DestroySurfaceKHR)) {
+ ALOGE("missing instance proc: %s", "vkDestroySurfaceKHR");
+ success = false;
+ }
+ dispatch.GetPhysicalDeviceSurfaceSupportKHR = reinterpret_cast<PFN_vkGetPhysicalDeviceSurfaceSupportKHR>(get_proc_addr(instance, "vkGetPhysicalDeviceSurfaceSupportKHR"));
+ if (UNLIKELY(!dispatch.GetPhysicalDeviceSurfaceSupportKHR)) {
+ ALOGE("missing instance proc: %s", "vkGetPhysicalDeviceSurfaceSupportKHR");
+ success = false;
+ }
+ dispatch.GetPhysicalDeviceSurfaceCapabilitiesKHR = reinterpret_cast<PFN_vkGetPhysicalDeviceSurfaceCapabilitiesKHR>(get_proc_addr(instance, "vkGetPhysicalDeviceSurfaceCapabilitiesKHR"));
+ if (UNLIKELY(!dispatch.GetPhysicalDeviceSurfaceCapabilitiesKHR)) {
+ ALOGE("missing instance proc: %s", "vkGetPhysicalDeviceSurfaceCapabilitiesKHR");
+ success = false;
+ }
+ dispatch.GetPhysicalDeviceSurfaceFormatsKHR = reinterpret_cast<PFN_vkGetPhysicalDeviceSurfaceFormatsKHR>(get_proc_addr(instance, "vkGetPhysicalDeviceSurfaceFormatsKHR"));
+ if (UNLIKELY(!dispatch.GetPhysicalDeviceSurfaceFormatsKHR)) {
+ ALOGE("missing instance proc: %s", "vkGetPhysicalDeviceSurfaceFormatsKHR");
+ success = false;
+ }
+ dispatch.GetPhysicalDeviceSurfacePresentModesKHR = reinterpret_cast<PFN_vkGetPhysicalDeviceSurfacePresentModesKHR>(get_proc_addr(instance, "vkGetPhysicalDeviceSurfacePresentModesKHR"));
+ if (UNLIKELY(!dispatch.GetPhysicalDeviceSurfacePresentModesKHR)) {
+ ALOGE("missing instance proc: %s", "vkGetPhysicalDeviceSurfacePresentModesKHR");
+ success = false;
+ }
+ dispatch.CreateAndroidSurfaceKHR = reinterpret_cast<PFN_vkCreateAndroidSurfaceKHR>(get_proc_addr(instance, "vkCreateAndroidSurfaceKHR"));
+ if (UNLIKELY(!dispatch.CreateAndroidSurfaceKHR)) {
+ ALOGE("missing instance proc: %s", "vkCreateAndroidSurfaceKHR");
+ success = false;
+ }
+ dispatch.CreateDebugReportCallbackEXT = reinterpret_cast<PFN_vkCreateDebugReportCallbackEXT>(get_proc_addr(instance, "vkCreateDebugReportCallbackEXT"));
+ if (UNLIKELY(!dispatch.CreateDebugReportCallbackEXT)) {
+ ALOGE("missing instance proc: %s", "vkCreateDebugReportCallbackEXT");
+ success = false;
+ }
+ dispatch.DestroyDebugReportCallbackEXT = reinterpret_cast<PFN_vkDestroyDebugReportCallbackEXT>(get_proc_addr(instance, "vkDestroyDebugReportCallbackEXT"));
+ if (UNLIKELY(!dispatch.DestroyDebugReportCallbackEXT)) {
+ ALOGE("missing instance proc: %s", "vkDestroyDebugReportCallbackEXT");
+ success = false;
+ }
+ dispatch.DebugReportMessageEXT = reinterpret_cast<PFN_vkDebugReportMessageEXT>(get_proc_addr(instance, "vkDebugReportMessageEXT"));
+ if (UNLIKELY(!dispatch.DebugReportMessageEXT)) {
+ ALOGE("missing instance proc: %s", "vkDebugReportMessageEXT");
+ success = false;
+ }
+ // clang-format on
+ return success;
+}
+
+bool LoadDeviceDispatchTable(VkDevice device,
+ PFN_vkGetDeviceProcAddr get_proc_addr,
+ DeviceDispatchTable& dispatch) {
+ bool success = true;
+ // clang-format off
+ dispatch.DestroyDevice = reinterpret_cast<PFN_vkDestroyDevice>(get_proc_addr(device, "vkDestroyDevice"));
+ if (UNLIKELY(!dispatch.DestroyDevice)) {
+ ALOGE("missing device proc: %s", "vkDestroyDevice");
+ success = false;
+ }
+ dispatch.GetDeviceQueue = reinterpret_cast<PFN_vkGetDeviceQueue>(get_proc_addr(device, "vkGetDeviceQueue"));
+ if (UNLIKELY(!dispatch.GetDeviceQueue)) {
+ ALOGE("missing device proc: %s", "vkGetDeviceQueue");
+ success = false;
+ }
+ dispatch.QueueSubmit = reinterpret_cast<PFN_vkQueueSubmit>(get_proc_addr(device, "vkQueueSubmit"));
+ if (UNLIKELY(!dispatch.QueueSubmit)) {
+ ALOGE("missing device proc: %s", "vkQueueSubmit");
+ success = false;
+ }
+ dispatch.QueueWaitIdle = reinterpret_cast<PFN_vkQueueWaitIdle>(get_proc_addr(device, "vkQueueWaitIdle"));
+ if (UNLIKELY(!dispatch.QueueWaitIdle)) {
+ ALOGE("missing device proc: %s", "vkQueueWaitIdle");
+ success = false;
+ }
+ dispatch.DeviceWaitIdle = reinterpret_cast<PFN_vkDeviceWaitIdle>(get_proc_addr(device, "vkDeviceWaitIdle"));
+ if (UNLIKELY(!dispatch.DeviceWaitIdle)) {
+ ALOGE("missing device proc: %s", "vkDeviceWaitIdle");
+ success = false;
+ }
+ dispatch.AllocateMemory = reinterpret_cast<PFN_vkAllocateMemory>(get_proc_addr(device, "vkAllocateMemory"));
+ if (UNLIKELY(!dispatch.AllocateMemory)) {
+ ALOGE("missing device proc: %s", "vkAllocateMemory");
+ success = false;
+ }
+ dispatch.FreeMemory = reinterpret_cast<PFN_vkFreeMemory>(get_proc_addr(device, "vkFreeMemory"));
+ if (UNLIKELY(!dispatch.FreeMemory)) {
+ ALOGE("missing device proc: %s", "vkFreeMemory");
+ success = false;
+ }
+ dispatch.MapMemory = reinterpret_cast<PFN_vkMapMemory>(get_proc_addr(device, "vkMapMemory"));
+ if (UNLIKELY(!dispatch.MapMemory)) {
+ ALOGE("missing device proc: %s", "vkMapMemory");
+ success = false;
+ }
+ dispatch.UnmapMemory = reinterpret_cast<PFN_vkUnmapMemory>(get_proc_addr(device, "vkUnmapMemory"));
+ if (UNLIKELY(!dispatch.UnmapMemory)) {
+ ALOGE("missing device proc: %s", "vkUnmapMemory");
+ success = false;
+ }
+ dispatch.FlushMappedMemoryRanges = reinterpret_cast<PFN_vkFlushMappedMemoryRanges>(get_proc_addr(device, "vkFlushMappedMemoryRanges"));
+ if (UNLIKELY(!dispatch.FlushMappedMemoryRanges)) {
+ ALOGE("missing device proc: %s", "vkFlushMappedMemoryRanges");
+ success = false;
+ }
+ dispatch.InvalidateMappedMemoryRanges = reinterpret_cast<PFN_vkInvalidateMappedMemoryRanges>(get_proc_addr(device, "vkInvalidateMappedMemoryRanges"));
+ if (UNLIKELY(!dispatch.InvalidateMappedMemoryRanges)) {
+ ALOGE("missing device proc: %s", "vkInvalidateMappedMemoryRanges");
+ success = false;
+ }
+ dispatch.GetDeviceMemoryCommitment = reinterpret_cast<PFN_vkGetDeviceMemoryCommitment>(get_proc_addr(device, "vkGetDeviceMemoryCommitment"));
+ if (UNLIKELY(!dispatch.GetDeviceMemoryCommitment)) {
+ ALOGE("missing device proc: %s", "vkGetDeviceMemoryCommitment");
+ success = false;
+ }
+ dispatch.GetBufferMemoryRequirements = reinterpret_cast<PFN_vkGetBufferMemoryRequirements>(get_proc_addr(device, "vkGetBufferMemoryRequirements"));
+ if (UNLIKELY(!dispatch.GetBufferMemoryRequirements)) {
+ ALOGE("missing device proc: %s", "vkGetBufferMemoryRequirements");
+ success = false;
+ }
+ dispatch.BindBufferMemory = reinterpret_cast<PFN_vkBindBufferMemory>(get_proc_addr(device, "vkBindBufferMemory"));
+ if (UNLIKELY(!dispatch.BindBufferMemory)) {
+ ALOGE("missing device proc: %s", "vkBindBufferMemory");
+ success = false;
+ }
+ dispatch.GetImageMemoryRequirements = reinterpret_cast<PFN_vkGetImageMemoryRequirements>(get_proc_addr(device, "vkGetImageMemoryRequirements"));
+ if (UNLIKELY(!dispatch.GetImageMemoryRequirements)) {
+ ALOGE("missing device proc: %s", "vkGetImageMemoryRequirements");
+ success = false;
+ }
+ dispatch.BindImageMemory = reinterpret_cast<PFN_vkBindImageMemory>(get_proc_addr(device, "vkBindImageMemory"));
+ if (UNLIKELY(!dispatch.BindImageMemory)) {
+ ALOGE("missing device proc: %s", "vkBindImageMemory");
+ success = false;
+ }
+ dispatch.GetImageSparseMemoryRequirements = reinterpret_cast<PFN_vkGetImageSparseMemoryRequirements>(get_proc_addr(device, "vkGetImageSparseMemoryRequirements"));
+ if (UNLIKELY(!dispatch.GetImageSparseMemoryRequirements)) {
+ ALOGE("missing device proc: %s", "vkGetImageSparseMemoryRequirements");
+ success = false;
+ }
+ dispatch.QueueBindSparse = reinterpret_cast<PFN_vkQueueBindSparse>(get_proc_addr(device, "vkQueueBindSparse"));
+ if (UNLIKELY(!dispatch.QueueBindSparse)) {
+ ALOGE("missing device proc: %s", "vkQueueBindSparse");
+ success = false;
+ }
+ dispatch.CreateFence = reinterpret_cast<PFN_vkCreateFence>(get_proc_addr(device, "vkCreateFence"));
+ if (UNLIKELY(!dispatch.CreateFence)) {
+ ALOGE("missing device proc: %s", "vkCreateFence");
+ success = false;
+ }
+ dispatch.DestroyFence = reinterpret_cast<PFN_vkDestroyFence>(get_proc_addr(device, "vkDestroyFence"));
+ if (UNLIKELY(!dispatch.DestroyFence)) {
+ ALOGE("missing device proc: %s", "vkDestroyFence");
+ success = false;
+ }
+ dispatch.ResetFences = reinterpret_cast<PFN_vkResetFences>(get_proc_addr(device, "vkResetFences"));
+ if (UNLIKELY(!dispatch.ResetFences)) {
+ ALOGE("missing device proc: %s", "vkResetFences");
+ success = false;
+ }
+ dispatch.GetFenceStatus = reinterpret_cast<PFN_vkGetFenceStatus>(get_proc_addr(device, "vkGetFenceStatus"));
+ if (UNLIKELY(!dispatch.GetFenceStatus)) {
+ ALOGE("missing device proc: %s", "vkGetFenceStatus");
+ success = false;
+ }
+ dispatch.WaitForFences = reinterpret_cast<PFN_vkWaitForFences>(get_proc_addr(device, "vkWaitForFences"));
+ if (UNLIKELY(!dispatch.WaitForFences)) {
+ ALOGE("missing device proc: %s", "vkWaitForFences");
+ success = false;
+ }
+ dispatch.CreateSemaphore = reinterpret_cast<PFN_vkCreateSemaphore>(get_proc_addr(device, "vkCreateSemaphore"));
+ if (UNLIKELY(!dispatch.CreateSemaphore)) {
+ ALOGE("missing device proc: %s", "vkCreateSemaphore");
+ success = false;
+ }
+ dispatch.DestroySemaphore = reinterpret_cast<PFN_vkDestroySemaphore>(get_proc_addr(device, "vkDestroySemaphore"));
+ if (UNLIKELY(!dispatch.DestroySemaphore)) {
+ ALOGE("missing device proc: %s", "vkDestroySemaphore");
+ success = false;
+ }
+ dispatch.CreateEvent = reinterpret_cast<PFN_vkCreateEvent>(get_proc_addr(device, "vkCreateEvent"));
+ if (UNLIKELY(!dispatch.CreateEvent)) {
+ ALOGE("missing device proc: %s", "vkCreateEvent");
+ success = false;
+ }
+ dispatch.DestroyEvent = reinterpret_cast<PFN_vkDestroyEvent>(get_proc_addr(device, "vkDestroyEvent"));
+ if (UNLIKELY(!dispatch.DestroyEvent)) {
+ ALOGE("missing device proc: %s", "vkDestroyEvent");
+ success = false;
+ }
+ dispatch.GetEventStatus = reinterpret_cast<PFN_vkGetEventStatus>(get_proc_addr(device, "vkGetEventStatus"));
+ if (UNLIKELY(!dispatch.GetEventStatus)) {
+ ALOGE("missing device proc: %s", "vkGetEventStatus");
+ success = false;
+ }
+ dispatch.SetEvent = reinterpret_cast<PFN_vkSetEvent>(get_proc_addr(device, "vkSetEvent"));
+ if (UNLIKELY(!dispatch.SetEvent)) {
+ ALOGE("missing device proc: %s", "vkSetEvent");
+ success = false;
+ }
+ dispatch.ResetEvent = reinterpret_cast<PFN_vkResetEvent>(get_proc_addr(device, "vkResetEvent"));
+ if (UNLIKELY(!dispatch.ResetEvent)) {
+ ALOGE("missing device proc: %s", "vkResetEvent");
+ success = false;
+ }
+ dispatch.CreateQueryPool = reinterpret_cast<PFN_vkCreateQueryPool>(get_proc_addr(device, "vkCreateQueryPool"));
+ if (UNLIKELY(!dispatch.CreateQueryPool)) {
+ ALOGE("missing device proc: %s", "vkCreateQueryPool");
+ success = false;
+ }
+ dispatch.DestroyQueryPool = reinterpret_cast<PFN_vkDestroyQueryPool>(get_proc_addr(device, "vkDestroyQueryPool"));
+ if (UNLIKELY(!dispatch.DestroyQueryPool)) {
+ ALOGE("missing device proc: %s", "vkDestroyQueryPool");
+ success = false;
+ }
+ dispatch.GetQueryPoolResults = reinterpret_cast<PFN_vkGetQueryPoolResults>(get_proc_addr(device, "vkGetQueryPoolResults"));
+ if (UNLIKELY(!dispatch.GetQueryPoolResults)) {
+ ALOGE("missing device proc: %s", "vkGetQueryPoolResults");
+ success = false;
+ }
+ dispatch.CreateBuffer = reinterpret_cast<PFN_vkCreateBuffer>(get_proc_addr(device, "vkCreateBuffer"));
+ if (UNLIKELY(!dispatch.CreateBuffer)) {
+ ALOGE("missing device proc: %s", "vkCreateBuffer");
+ success = false;
+ }
+ dispatch.DestroyBuffer = reinterpret_cast<PFN_vkDestroyBuffer>(get_proc_addr(device, "vkDestroyBuffer"));
+ if (UNLIKELY(!dispatch.DestroyBuffer)) {
+ ALOGE("missing device proc: %s", "vkDestroyBuffer");
+ success = false;
+ }
+ dispatch.CreateBufferView = reinterpret_cast<PFN_vkCreateBufferView>(get_proc_addr(device, "vkCreateBufferView"));
+ if (UNLIKELY(!dispatch.CreateBufferView)) {
+ ALOGE("missing device proc: %s", "vkCreateBufferView");
+ success = false;
+ }
+ dispatch.DestroyBufferView = reinterpret_cast<PFN_vkDestroyBufferView>(get_proc_addr(device, "vkDestroyBufferView"));
+ if (UNLIKELY(!dispatch.DestroyBufferView)) {
+ ALOGE("missing device proc: %s", "vkDestroyBufferView");
+ success = false;
+ }
+ dispatch.CreateImage = reinterpret_cast<PFN_vkCreateImage>(get_proc_addr(device, "vkCreateImage"));
+ if (UNLIKELY(!dispatch.CreateImage)) {
+ ALOGE("missing device proc: %s", "vkCreateImage");
+ success = false;
+ }
+ dispatch.DestroyImage = reinterpret_cast<PFN_vkDestroyImage>(get_proc_addr(device, "vkDestroyImage"));
+ if (UNLIKELY(!dispatch.DestroyImage)) {
+ ALOGE("missing device proc: %s", "vkDestroyImage");
+ success = false;
+ }
+ dispatch.GetImageSubresourceLayout = reinterpret_cast<PFN_vkGetImageSubresourceLayout>(get_proc_addr(device, "vkGetImageSubresourceLayout"));
+ if (UNLIKELY(!dispatch.GetImageSubresourceLayout)) {
+ ALOGE("missing device proc: %s", "vkGetImageSubresourceLayout");
+ success = false;
+ }
+ dispatch.CreateImageView = reinterpret_cast<PFN_vkCreateImageView>(get_proc_addr(device, "vkCreateImageView"));
+ if (UNLIKELY(!dispatch.CreateImageView)) {
+ ALOGE("missing device proc: %s", "vkCreateImageView");
+ success = false;
+ }
+ dispatch.DestroyImageView = reinterpret_cast<PFN_vkDestroyImageView>(get_proc_addr(device, "vkDestroyImageView"));
+ if (UNLIKELY(!dispatch.DestroyImageView)) {
+ ALOGE("missing device proc: %s", "vkDestroyImageView");
+ success = false;
+ }
+ dispatch.CreateShaderModule = reinterpret_cast<PFN_vkCreateShaderModule>(get_proc_addr(device, "vkCreateShaderModule"));
+ if (UNLIKELY(!dispatch.CreateShaderModule)) {
+ ALOGE("missing device proc: %s", "vkCreateShaderModule");
+ success = false;
+ }
+ dispatch.DestroyShaderModule = reinterpret_cast<PFN_vkDestroyShaderModule>(get_proc_addr(device, "vkDestroyShaderModule"));
+ if (UNLIKELY(!dispatch.DestroyShaderModule)) {
+ ALOGE("missing device proc: %s", "vkDestroyShaderModule");
+ success = false;
+ }
+ dispatch.CreatePipelineCache = reinterpret_cast<PFN_vkCreatePipelineCache>(get_proc_addr(device, "vkCreatePipelineCache"));
+ if (UNLIKELY(!dispatch.CreatePipelineCache)) {
+ ALOGE("missing device proc: %s", "vkCreatePipelineCache");
+ success = false;
+ }
+ dispatch.DestroyPipelineCache = reinterpret_cast<PFN_vkDestroyPipelineCache>(get_proc_addr(device, "vkDestroyPipelineCache"));
+ if (UNLIKELY(!dispatch.DestroyPipelineCache)) {
+ ALOGE("missing device proc: %s", "vkDestroyPipelineCache");
+ success = false;
+ }
+ dispatch.GetPipelineCacheData = reinterpret_cast<PFN_vkGetPipelineCacheData>(get_proc_addr(device, "vkGetPipelineCacheData"));
+ if (UNLIKELY(!dispatch.GetPipelineCacheData)) {
+ ALOGE("missing device proc: %s", "vkGetPipelineCacheData");
+ success = false;
+ }
+ dispatch.MergePipelineCaches = reinterpret_cast<PFN_vkMergePipelineCaches>(get_proc_addr(device, "vkMergePipelineCaches"));
+ if (UNLIKELY(!dispatch.MergePipelineCaches)) {
+ ALOGE("missing device proc: %s", "vkMergePipelineCaches");
+ success = false;
+ }
+ dispatch.CreateGraphicsPipelines = reinterpret_cast<PFN_vkCreateGraphicsPipelines>(get_proc_addr(device, "vkCreateGraphicsPipelines"));
+ if (UNLIKELY(!dispatch.CreateGraphicsPipelines)) {
+ ALOGE("missing device proc: %s", "vkCreateGraphicsPipelines");
+ success = false;
+ }
+ dispatch.CreateComputePipelines = reinterpret_cast<PFN_vkCreateComputePipelines>(get_proc_addr(device, "vkCreateComputePipelines"));
+ if (UNLIKELY(!dispatch.CreateComputePipelines)) {
+ ALOGE("missing device proc: %s", "vkCreateComputePipelines");
+ success = false;
+ }
+ dispatch.DestroyPipeline = reinterpret_cast<PFN_vkDestroyPipeline>(get_proc_addr(device, "vkDestroyPipeline"));
+ if (UNLIKELY(!dispatch.DestroyPipeline)) {
+ ALOGE("missing device proc: %s", "vkDestroyPipeline");
+ success = false;
+ }
+ dispatch.CreatePipelineLayout = reinterpret_cast<PFN_vkCreatePipelineLayout>(get_proc_addr(device, "vkCreatePipelineLayout"));
+ if (UNLIKELY(!dispatch.CreatePipelineLayout)) {
+ ALOGE("missing device proc: %s", "vkCreatePipelineLayout");
+ success = false;
+ }
+ dispatch.DestroyPipelineLayout = reinterpret_cast<PFN_vkDestroyPipelineLayout>(get_proc_addr(device, "vkDestroyPipelineLayout"));
+ if (UNLIKELY(!dispatch.DestroyPipelineLayout)) {
+ ALOGE("missing device proc: %s", "vkDestroyPipelineLayout");
+ success = false;
+ }
+ dispatch.CreateSampler = reinterpret_cast<PFN_vkCreateSampler>(get_proc_addr(device, "vkCreateSampler"));
+ if (UNLIKELY(!dispatch.CreateSampler)) {
+ ALOGE("missing device proc: %s", "vkCreateSampler");
+ success = false;
+ }
+ dispatch.DestroySampler = reinterpret_cast<PFN_vkDestroySampler>(get_proc_addr(device, "vkDestroySampler"));
+ if (UNLIKELY(!dispatch.DestroySampler)) {
+ ALOGE("missing device proc: %s", "vkDestroySampler");
+ success = false;
+ }
+ dispatch.CreateDescriptorSetLayout = reinterpret_cast<PFN_vkCreateDescriptorSetLayout>(get_proc_addr(device, "vkCreateDescriptorSetLayout"));
+ if (UNLIKELY(!dispatch.CreateDescriptorSetLayout)) {
+ ALOGE("missing device proc: %s", "vkCreateDescriptorSetLayout");
+ success = false;
+ }
+ dispatch.DestroyDescriptorSetLayout = reinterpret_cast<PFN_vkDestroyDescriptorSetLayout>(get_proc_addr(device, "vkDestroyDescriptorSetLayout"));
+ if (UNLIKELY(!dispatch.DestroyDescriptorSetLayout)) {
+ ALOGE("missing device proc: %s", "vkDestroyDescriptorSetLayout");
+ success = false;
+ }
+ dispatch.CreateDescriptorPool = reinterpret_cast<PFN_vkCreateDescriptorPool>(get_proc_addr(device, "vkCreateDescriptorPool"));
+ if (UNLIKELY(!dispatch.CreateDescriptorPool)) {
+ ALOGE("missing device proc: %s", "vkCreateDescriptorPool");
+ success = false;
+ }
+ dispatch.DestroyDescriptorPool = reinterpret_cast<PFN_vkDestroyDescriptorPool>(get_proc_addr(device, "vkDestroyDescriptorPool"));
+ if (UNLIKELY(!dispatch.DestroyDescriptorPool)) {
+ ALOGE("missing device proc: %s", "vkDestroyDescriptorPool");
+ success = false;
+ }
+ dispatch.ResetDescriptorPool = reinterpret_cast<PFN_vkResetDescriptorPool>(get_proc_addr(device, "vkResetDescriptorPool"));
+ if (UNLIKELY(!dispatch.ResetDescriptorPool)) {
+ ALOGE("missing device proc: %s", "vkResetDescriptorPool");
+ success = false;
+ }
+ dispatch.AllocateDescriptorSets = reinterpret_cast<PFN_vkAllocateDescriptorSets>(get_proc_addr(device, "vkAllocateDescriptorSets"));
+ if (UNLIKELY(!dispatch.AllocateDescriptorSets)) {
+ ALOGE("missing device proc: %s", "vkAllocateDescriptorSets");
+ success = false;
+ }
+ dispatch.FreeDescriptorSets = reinterpret_cast<PFN_vkFreeDescriptorSets>(get_proc_addr(device, "vkFreeDescriptorSets"));
+ if (UNLIKELY(!dispatch.FreeDescriptorSets)) {
+ ALOGE("missing device proc: %s", "vkFreeDescriptorSets");
+ success = false;
+ }
+ dispatch.UpdateDescriptorSets = reinterpret_cast<PFN_vkUpdateDescriptorSets>(get_proc_addr(device, "vkUpdateDescriptorSets"));
+ if (UNLIKELY(!dispatch.UpdateDescriptorSets)) {
+ ALOGE("missing device proc: %s", "vkUpdateDescriptorSets");
+ success = false;
+ }
+ dispatch.CreateFramebuffer = reinterpret_cast<PFN_vkCreateFramebuffer>(get_proc_addr(device, "vkCreateFramebuffer"));
+ if (UNLIKELY(!dispatch.CreateFramebuffer)) {
+ ALOGE("missing device proc: %s", "vkCreateFramebuffer");
+ success = false;
+ }
+ dispatch.DestroyFramebuffer = reinterpret_cast<PFN_vkDestroyFramebuffer>(get_proc_addr(device, "vkDestroyFramebuffer"));
+ if (UNLIKELY(!dispatch.DestroyFramebuffer)) {
+ ALOGE("missing device proc: %s", "vkDestroyFramebuffer");
+ success = false;
+ }
+ dispatch.CreateRenderPass = reinterpret_cast<PFN_vkCreateRenderPass>(get_proc_addr(device, "vkCreateRenderPass"));
+ if (UNLIKELY(!dispatch.CreateRenderPass)) {
+ ALOGE("missing device proc: %s", "vkCreateRenderPass");
+ success = false;
+ }
+ dispatch.DestroyRenderPass = reinterpret_cast<PFN_vkDestroyRenderPass>(get_proc_addr(device, "vkDestroyRenderPass"));
+ if (UNLIKELY(!dispatch.DestroyRenderPass)) {
+ ALOGE("missing device proc: %s", "vkDestroyRenderPass");
+ success = false;
+ }
+ dispatch.GetRenderAreaGranularity = reinterpret_cast<PFN_vkGetRenderAreaGranularity>(get_proc_addr(device, "vkGetRenderAreaGranularity"));
+ if (UNLIKELY(!dispatch.GetRenderAreaGranularity)) {
+ ALOGE("missing device proc: %s", "vkGetRenderAreaGranularity");
+ success = false;
+ }
+ dispatch.CreateCommandPool = reinterpret_cast<PFN_vkCreateCommandPool>(get_proc_addr(device, "vkCreateCommandPool"));
+ if (UNLIKELY(!dispatch.CreateCommandPool)) {
+ ALOGE("missing device proc: %s", "vkCreateCommandPool");
+ success = false;
+ }
+ dispatch.DestroyCommandPool = reinterpret_cast<PFN_vkDestroyCommandPool>(get_proc_addr(device, "vkDestroyCommandPool"));
+ if (UNLIKELY(!dispatch.DestroyCommandPool)) {
+ ALOGE("missing device proc: %s", "vkDestroyCommandPool");
+ success = false;
+ }
+ dispatch.ResetCommandPool = reinterpret_cast<PFN_vkResetCommandPool>(get_proc_addr(device, "vkResetCommandPool"));
+ if (UNLIKELY(!dispatch.ResetCommandPool)) {
+ ALOGE("missing device proc: %s", "vkResetCommandPool");
+ success = false;
+ }
+ dispatch.AllocateCommandBuffers = reinterpret_cast<PFN_vkAllocateCommandBuffers>(get_proc_addr(device, "vkAllocateCommandBuffers"));
+ if (UNLIKELY(!dispatch.AllocateCommandBuffers)) {
+ ALOGE("missing device proc: %s", "vkAllocateCommandBuffers");
+ success = false;
+ }
+ dispatch.FreeCommandBuffers = reinterpret_cast<PFN_vkFreeCommandBuffers>(get_proc_addr(device, "vkFreeCommandBuffers"));
+ if (UNLIKELY(!dispatch.FreeCommandBuffers)) {
+ ALOGE("missing device proc: %s", "vkFreeCommandBuffers");
+ success = false;
+ }
+ dispatch.BeginCommandBuffer = reinterpret_cast<PFN_vkBeginCommandBuffer>(get_proc_addr(device, "vkBeginCommandBuffer"));
+ if (UNLIKELY(!dispatch.BeginCommandBuffer)) {
+ ALOGE("missing device proc: %s", "vkBeginCommandBuffer");
+ success = false;
+ }
+ dispatch.EndCommandBuffer = reinterpret_cast<PFN_vkEndCommandBuffer>(get_proc_addr(device, "vkEndCommandBuffer"));
+ if (UNLIKELY(!dispatch.EndCommandBuffer)) {
+ ALOGE("missing device proc: %s", "vkEndCommandBuffer");
+ success = false;
+ }
+ dispatch.ResetCommandBuffer = reinterpret_cast<PFN_vkResetCommandBuffer>(get_proc_addr(device, "vkResetCommandBuffer"));
+ if (UNLIKELY(!dispatch.ResetCommandBuffer)) {
+ ALOGE("missing device proc: %s", "vkResetCommandBuffer");
+ success = false;
+ }
+ dispatch.CmdBindPipeline = reinterpret_cast<PFN_vkCmdBindPipeline>(get_proc_addr(device, "vkCmdBindPipeline"));
+ if (UNLIKELY(!dispatch.CmdBindPipeline)) {
+ ALOGE("missing device proc: %s", "vkCmdBindPipeline");
+ success = false;
+ }
+ dispatch.CmdSetViewport = reinterpret_cast<PFN_vkCmdSetViewport>(get_proc_addr(device, "vkCmdSetViewport"));
+ if (UNLIKELY(!dispatch.CmdSetViewport)) {
+ ALOGE("missing device proc: %s", "vkCmdSetViewport");
+ success = false;
+ }
+ dispatch.CmdSetScissor = reinterpret_cast<PFN_vkCmdSetScissor>(get_proc_addr(device, "vkCmdSetScissor"));
+ if (UNLIKELY(!dispatch.CmdSetScissor)) {
+ ALOGE("missing device proc: %s", "vkCmdSetScissor");
+ success = false;
+ }
+ dispatch.CmdSetLineWidth = reinterpret_cast<PFN_vkCmdSetLineWidth>(get_proc_addr(device, "vkCmdSetLineWidth"));
+ if (UNLIKELY(!dispatch.CmdSetLineWidth)) {
+ ALOGE("missing device proc: %s", "vkCmdSetLineWidth");
+ success = false;
+ }
+ dispatch.CmdSetDepthBias = reinterpret_cast<PFN_vkCmdSetDepthBias>(get_proc_addr(device, "vkCmdSetDepthBias"));
+ if (UNLIKELY(!dispatch.CmdSetDepthBias)) {
+ ALOGE("missing device proc: %s", "vkCmdSetDepthBias");
+ success = false;
+ }
+ dispatch.CmdSetBlendConstants = reinterpret_cast<PFN_vkCmdSetBlendConstants>(get_proc_addr(device, "vkCmdSetBlendConstants"));
+ if (UNLIKELY(!dispatch.CmdSetBlendConstants)) {
+ ALOGE("missing device proc: %s", "vkCmdSetBlendConstants");
+ success = false;
+ }
+ dispatch.CmdSetDepthBounds = reinterpret_cast<PFN_vkCmdSetDepthBounds>(get_proc_addr(device, "vkCmdSetDepthBounds"));
+ if (UNLIKELY(!dispatch.CmdSetDepthBounds)) {
+ ALOGE("missing device proc: %s", "vkCmdSetDepthBounds");
+ success = false;
+ }
+ dispatch.CmdSetStencilCompareMask = reinterpret_cast<PFN_vkCmdSetStencilCompareMask>(get_proc_addr(device, "vkCmdSetStencilCompareMask"));
+ if (UNLIKELY(!dispatch.CmdSetStencilCompareMask)) {
+ ALOGE("missing device proc: %s", "vkCmdSetStencilCompareMask");
+ success = false;
+ }
+ dispatch.CmdSetStencilWriteMask = reinterpret_cast<PFN_vkCmdSetStencilWriteMask>(get_proc_addr(device, "vkCmdSetStencilWriteMask"));
+ if (UNLIKELY(!dispatch.CmdSetStencilWriteMask)) {
+ ALOGE("missing device proc: %s", "vkCmdSetStencilWriteMask");
+ success = false;
+ }
+ dispatch.CmdSetStencilReference = reinterpret_cast<PFN_vkCmdSetStencilReference>(get_proc_addr(device, "vkCmdSetStencilReference"));
+ if (UNLIKELY(!dispatch.CmdSetStencilReference)) {
+ ALOGE("missing device proc: %s", "vkCmdSetStencilReference");
+ success = false;
+ }
+ dispatch.CmdBindDescriptorSets = reinterpret_cast<PFN_vkCmdBindDescriptorSets>(get_proc_addr(device, "vkCmdBindDescriptorSets"));
+ if (UNLIKELY(!dispatch.CmdBindDescriptorSets)) {
+ ALOGE("missing device proc: %s", "vkCmdBindDescriptorSets");
+ success = false;
+ }
+ dispatch.CmdBindIndexBuffer = reinterpret_cast<PFN_vkCmdBindIndexBuffer>(get_proc_addr(device, "vkCmdBindIndexBuffer"));
+ if (UNLIKELY(!dispatch.CmdBindIndexBuffer)) {
+ ALOGE("missing device proc: %s", "vkCmdBindIndexBuffer");
+ success = false;
+ }
+ dispatch.CmdBindVertexBuffers = reinterpret_cast<PFN_vkCmdBindVertexBuffers>(get_proc_addr(device, "vkCmdBindVertexBuffers"));
+ if (UNLIKELY(!dispatch.CmdBindVertexBuffers)) {
+ ALOGE("missing device proc: %s", "vkCmdBindVertexBuffers");
+ success = false;
+ }
+ dispatch.CmdDraw = reinterpret_cast<PFN_vkCmdDraw>(get_proc_addr(device, "vkCmdDraw"));
+ if (UNLIKELY(!dispatch.CmdDraw)) {
+ ALOGE("missing device proc: %s", "vkCmdDraw");
+ success = false;
+ }
+ dispatch.CmdDrawIndexed = reinterpret_cast<PFN_vkCmdDrawIndexed>(get_proc_addr(device, "vkCmdDrawIndexed"));
+ if (UNLIKELY(!dispatch.CmdDrawIndexed)) {
+ ALOGE("missing device proc: %s", "vkCmdDrawIndexed");
+ success = false;
+ }
+ dispatch.CmdDrawIndirect = reinterpret_cast<PFN_vkCmdDrawIndirect>(get_proc_addr(device, "vkCmdDrawIndirect"));
+ if (UNLIKELY(!dispatch.CmdDrawIndirect)) {
+ ALOGE("missing device proc: %s", "vkCmdDrawIndirect");
+ success = false;
+ }
+ dispatch.CmdDrawIndexedIndirect = reinterpret_cast<PFN_vkCmdDrawIndexedIndirect>(get_proc_addr(device, "vkCmdDrawIndexedIndirect"));
+ if (UNLIKELY(!dispatch.CmdDrawIndexedIndirect)) {
+ ALOGE("missing device proc: %s", "vkCmdDrawIndexedIndirect");
+ success = false;
+ }
+ dispatch.CmdDispatch = reinterpret_cast<PFN_vkCmdDispatch>(get_proc_addr(device, "vkCmdDispatch"));
+ if (UNLIKELY(!dispatch.CmdDispatch)) {
+ ALOGE("missing device proc: %s", "vkCmdDispatch");
+ success = false;
+ }
+ dispatch.CmdDispatchIndirect = reinterpret_cast<PFN_vkCmdDispatchIndirect>(get_proc_addr(device, "vkCmdDispatchIndirect"));
+ if (UNLIKELY(!dispatch.CmdDispatchIndirect)) {
+ ALOGE("missing device proc: %s", "vkCmdDispatchIndirect");
+ success = false;
+ }
+ dispatch.CmdCopyBuffer = reinterpret_cast<PFN_vkCmdCopyBuffer>(get_proc_addr(device, "vkCmdCopyBuffer"));
+ if (UNLIKELY(!dispatch.CmdCopyBuffer)) {
+ ALOGE("missing device proc: %s", "vkCmdCopyBuffer");
+ success = false;
+ }
+ dispatch.CmdCopyImage = reinterpret_cast<PFN_vkCmdCopyImage>(get_proc_addr(device, "vkCmdCopyImage"));
+ if (UNLIKELY(!dispatch.CmdCopyImage)) {
+ ALOGE("missing device proc: %s", "vkCmdCopyImage");
+ success = false;
+ }
+ dispatch.CmdBlitImage = reinterpret_cast<PFN_vkCmdBlitImage>(get_proc_addr(device, "vkCmdBlitImage"));
+ if (UNLIKELY(!dispatch.CmdBlitImage)) {
+ ALOGE("missing device proc: %s", "vkCmdBlitImage");
+ success = false;
+ }
+ dispatch.CmdCopyBufferToImage = reinterpret_cast<PFN_vkCmdCopyBufferToImage>(get_proc_addr(device, "vkCmdCopyBufferToImage"));
+ if (UNLIKELY(!dispatch.CmdCopyBufferToImage)) {
+ ALOGE("missing device proc: %s", "vkCmdCopyBufferToImage");
+ success = false;
+ }
+ dispatch.CmdCopyImageToBuffer = reinterpret_cast<PFN_vkCmdCopyImageToBuffer>(get_proc_addr(device, "vkCmdCopyImageToBuffer"));
+ if (UNLIKELY(!dispatch.CmdCopyImageToBuffer)) {
+ ALOGE("missing device proc: %s", "vkCmdCopyImageToBuffer");
+ success = false;
+ }
+ dispatch.CmdUpdateBuffer = reinterpret_cast<PFN_vkCmdUpdateBuffer>(get_proc_addr(device, "vkCmdUpdateBuffer"));
+ if (UNLIKELY(!dispatch.CmdUpdateBuffer)) {
+ ALOGE("missing device proc: %s", "vkCmdUpdateBuffer");
+ success = false;
+ }
+ dispatch.CmdFillBuffer = reinterpret_cast<PFN_vkCmdFillBuffer>(get_proc_addr(device, "vkCmdFillBuffer"));
+ if (UNLIKELY(!dispatch.CmdFillBuffer)) {
+ ALOGE("missing device proc: %s", "vkCmdFillBuffer");
+ success = false;
+ }
+ dispatch.CmdClearColorImage = reinterpret_cast<PFN_vkCmdClearColorImage>(get_proc_addr(device, "vkCmdClearColorImage"));
+ if (UNLIKELY(!dispatch.CmdClearColorImage)) {
+ ALOGE("missing device proc: %s", "vkCmdClearColorImage");
+ success = false;
+ }
+ dispatch.CmdClearDepthStencilImage = reinterpret_cast<PFN_vkCmdClearDepthStencilImage>(get_proc_addr(device, "vkCmdClearDepthStencilImage"));
+ if (UNLIKELY(!dispatch.CmdClearDepthStencilImage)) {
+ ALOGE("missing device proc: %s", "vkCmdClearDepthStencilImage");
+ success = false;
+ }
+ dispatch.CmdClearAttachments = reinterpret_cast<PFN_vkCmdClearAttachments>(get_proc_addr(device, "vkCmdClearAttachments"));
+ if (UNLIKELY(!dispatch.CmdClearAttachments)) {
+ ALOGE("missing device proc: %s", "vkCmdClearAttachments");
+ success = false;
+ }
+ dispatch.CmdResolveImage = reinterpret_cast<PFN_vkCmdResolveImage>(get_proc_addr(device, "vkCmdResolveImage"));
+ if (UNLIKELY(!dispatch.CmdResolveImage)) {
+ ALOGE("missing device proc: %s", "vkCmdResolveImage");
+ success = false;
+ }
+ dispatch.CmdSetEvent = reinterpret_cast<PFN_vkCmdSetEvent>(get_proc_addr(device, "vkCmdSetEvent"));
+ if (UNLIKELY(!dispatch.CmdSetEvent)) {
+ ALOGE("missing device proc: %s", "vkCmdSetEvent");
+ success = false;
+ }
+ dispatch.CmdResetEvent = reinterpret_cast<PFN_vkCmdResetEvent>(get_proc_addr(device, "vkCmdResetEvent"));
+ if (UNLIKELY(!dispatch.CmdResetEvent)) {
+ ALOGE("missing device proc: %s", "vkCmdResetEvent");
+ success = false;
+ }
+ dispatch.CmdWaitEvents = reinterpret_cast<PFN_vkCmdWaitEvents>(get_proc_addr(device, "vkCmdWaitEvents"));
+ if (UNLIKELY(!dispatch.CmdWaitEvents)) {
+ ALOGE("missing device proc: %s", "vkCmdWaitEvents");
+ success = false;
+ }
+ dispatch.CmdPipelineBarrier = reinterpret_cast<PFN_vkCmdPipelineBarrier>(get_proc_addr(device, "vkCmdPipelineBarrier"));
+ if (UNLIKELY(!dispatch.CmdPipelineBarrier)) {
+ ALOGE("missing device proc: %s", "vkCmdPipelineBarrier");
+ success = false;
+ }
+ dispatch.CmdBeginQuery = reinterpret_cast<PFN_vkCmdBeginQuery>(get_proc_addr(device, "vkCmdBeginQuery"));
+ if (UNLIKELY(!dispatch.CmdBeginQuery)) {
+ ALOGE("missing device proc: %s", "vkCmdBeginQuery");
+ success = false;
+ }
+ dispatch.CmdEndQuery = reinterpret_cast<PFN_vkCmdEndQuery>(get_proc_addr(device, "vkCmdEndQuery"));
+ if (UNLIKELY(!dispatch.CmdEndQuery)) {
+ ALOGE("missing device proc: %s", "vkCmdEndQuery");
+ success = false;
+ }
+ dispatch.CmdResetQueryPool = reinterpret_cast<PFN_vkCmdResetQueryPool>(get_proc_addr(device, "vkCmdResetQueryPool"));
+ if (UNLIKELY(!dispatch.CmdResetQueryPool)) {
+ ALOGE("missing device proc: %s", "vkCmdResetQueryPool");
+ success = false;
+ }
+ dispatch.CmdWriteTimestamp = reinterpret_cast<PFN_vkCmdWriteTimestamp>(get_proc_addr(device, "vkCmdWriteTimestamp"));
+ if (UNLIKELY(!dispatch.CmdWriteTimestamp)) {
+ ALOGE("missing device proc: %s", "vkCmdWriteTimestamp");
+ success = false;
+ }
+ dispatch.CmdCopyQueryPoolResults = reinterpret_cast<PFN_vkCmdCopyQueryPoolResults>(get_proc_addr(device, "vkCmdCopyQueryPoolResults"));
+ if (UNLIKELY(!dispatch.CmdCopyQueryPoolResults)) {
+ ALOGE("missing device proc: %s", "vkCmdCopyQueryPoolResults");
+ success = false;
+ }
+ dispatch.CmdPushConstants = reinterpret_cast<PFN_vkCmdPushConstants>(get_proc_addr(device, "vkCmdPushConstants"));
+ if (UNLIKELY(!dispatch.CmdPushConstants)) {
+ ALOGE("missing device proc: %s", "vkCmdPushConstants");
+ success = false;
+ }
+ dispatch.CmdBeginRenderPass = reinterpret_cast<PFN_vkCmdBeginRenderPass>(get_proc_addr(device, "vkCmdBeginRenderPass"));
+ if (UNLIKELY(!dispatch.CmdBeginRenderPass)) {
+ ALOGE("missing device proc: %s", "vkCmdBeginRenderPass");
+ success = false;
+ }
+ dispatch.CmdNextSubpass = reinterpret_cast<PFN_vkCmdNextSubpass>(get_proc_addr(device, "vkCmdNextSubpass"));
+ if (UNLIKELY(!dispatch.CmdNextSubpass)) {
+ ALOGE("missing device proc: %s", "vkCmdNextSubpass");
+ success = false;
+ }
+ dispatch.CmdEndRenderPass = reinterpret_cast<PFN_vkCmdEndRenderPass>(get_proc_addr(device, "vkCmdEndRenderPass"));
+ if (UNLIKELY(!dispatch.CmdEndRenderPass)) {
+ ALOGE("missing device proc: %s", "vkCmdEndRenderPass");
+ success = false;
+ }
+ dispatch.CmdExecuteCommands = reinterpret_cast<PFN_vkCmdExecuteCommands>(get_proc_addr(device, "vkCmdExecuteCommands"));
+ if (UNLIKELY(!dispatch.CmdExecuteCommands)) {
+ ALOGE("missing device proc: %s", "vkCmdExecuteCommands");
+ success = false;
+ }
+ dispatch.CreateSwapchainKHR = reinterpret_cast<PFN_vkCreateSwapchainKHR>(get_proc_addr(device, "vkCreateSwapchainKHR"));
+ if (UNLIKELY(!dispatch.CreateSwapchainKHR)) {
+ ALOGE("missing device proc: %s", "vkCreateSwapchainKHR");
+ success = false;
+ }
+ dispatch.DestroySwapchainKHR = reinterpret_cast<PFN_vkDestroySwapchainKHR>(get_proc_addr(device, "vkDestroySwapchainKHR"));
+ if (UNLIKELY(!dispatch.DestroySwapchainKHR)) {
+ ALOGE("missing device proc: %s", "vkDestroySwapchainKHR");
+ success = false;
+ }
+ dispatch.GetSwapchainImagesKHR = reinterpret_cast<PFN_vkGetSwapchainImagesKHR>(get_proc_addr(device, "vkGetSwapchainImagesKHR"));
+ if (UNLIKELY(!dispatch.GetSwapchainImagesKHR)) {
+ ALOGE("missing device proc: %s", "vkGetSwapchainImagesKHR");
+ success = false;
+ }
+ dispatch.AcquireNextImageKHR = reinterpret_cast<PFN_vkAcquireNextImageKHR>(get_proc_addr(device, "vkAcquireNextImageKHR"));
+ if (UNLIKELY(!dispatch.AcquireNextImageKHR)) {
+ ALOGE("missing device proc: %s", "vkAcquireNextImageKHR");
+ success = false;
+ }
+ dispatch.QueuePresentKHR = reinterpret_cast<PFN_vkQueuePresentKHR>(get_proc_addr(device, "vkQueuePresentKHR"));
+ if (UNLIKELY(!dispatch.QueuePresentKHR)) {
+ ALOGE("missing device proc: %s", "vkQueuePresentKHR");
+ success = false;
+ }
+ // clang-format on
+ return success;
+}
+
+bool LoadDriverDispatchTable(VkInstance instance,
+ PFN_vkGetInstanceProcAddr get_proc_addr,
+ const InstanceExtensionSet& extensions,
+ DriverDispatchTable& dispatch) {
+ bool success = true;
+ // clang-format off
+ dispatch.DestroyInstance = reinterpret_cast<PFN_vkDestroyInstance>(get_proc_addr(instance, "vkDestroyInstance"));
+ if (UNLIKELY(!dispatch.DestroyInstance)) {
+ ALOGE("missing driver proc: %s", "vkDestroyInstance");
+ success = false;
+ }
+ dispatch.EnumeratePhysicalDevices = reinterpret_cast<PFN_vkEnumeratePhysicalDevices>(get_proc_addr(instance, "vkEnumeratePhysicalDevices"));
+ if (UNLIKELY(!dispatch.EnumeratePhysicalDevices)) {
+ ALOGE("missing driver proc: %s", "vkEnumeratePhysicalDevices");
+ success = false;
+ }
+ dispatch.GetPhysicalDeviceProperties = reinterpret_cast<PFN_vkGetPhysicalDeviceProperties>(get_proc_addr(instance, "vkGetPhysicalDeviceProperties"));
+ if (UNLIKELY(!dispatch.GetPhysicalDeviceProperties)) {
+ ALOGE("missing driver proc: %s", "vkGetPhysicalDeviceProperties");
+ success = false;
+ }
+ dispatch.GetPhysicalDeviceQueueFamilyProperties = reinterpret_cast<PFN_vkGetPhysicalDeviceQueueFamilyProperties>(get_proc_addr(instance, "vkGetPhysicalDeviceQueueFamilyProperties"));
+ if (UNLIKELY(!dispatch.GetPhysicalDeviceQueueFamilyProperties)) {
+ ALOGE("missing driver proc: %s", "vkGetPhysicalDeviceQueueFamilyProperties");
+ success = false;
+ }
+ dispatch.GetPhysicalDeviceMemoryProperties = reinterpret_cast<PFN_vkGetPhysicalDeviceMemoryProperties>(get_proc_addr(instance, "vkGetPhysicalDeviceMemoryProperties"));
+ if (UNLIKELY(!dispatch.GetPhysicalDeviceMemoryProperties)) {
+ ALOGE("missing driver proc: %s", "vkGetPhysicalDeviceMemoryProperties");
+ success = false;
+ }
+ dispatch.GetPhysicalDeviceFeatures = reinterpret_cast<PFN_vkGetPhysicalDeviceFeatures>(get_proc_addr(instance, "vkGetPhysicalDeviceFeatures"));
+ if (UNLIKELY(!dispatch.GetPhysicalDeviceFeatures)) {
+ ALOGE("missing driver proc: %s", "vkGetPhysicalDeviceFeatures");
+ success = false;
+ }
+ dispatch.GetPhysicalDeviceFormatProperties = reinterpret_cast<PFN_vkGetPhysicalDeviceFormatProperties>(get_proc_addr(instance, "vkGetPhysicalDeviceFormatProperties"));
+ if (UNLIKELY(!dispatch.GetPhysicalDeviceFormatProperties)) {
+ ALOGE("missing driver proc: %s", "vkGetPhysicalDeviceFormatProperties");
+ success = false;
+ }
+ dispatch.GetPhysicalDeviceImageFormatProperties = reinterpret_cast<PFN_vkGetPhysicalDeviceImageFormatProperties>(get_proc_addr(instance, "vkGetPhysicalDeviceImageFormatProperties"));
+ if (UNLIKELY(!dispatch.GetPhysicalDeviceImageFormatProperties)) {
+ ALOGE("missing driver proc: %s", "vkGetPhysicalDeviceImageFormatProperties");
+ success = false;
+ }
+ dispatch.CreateDevice = reinterpret_cast<PFN_vkCreateDevice>(get_proc_addr(instance, "vkCreateDevice"));
+ if (UNLIKELY(!dispatch.CreateDevice)) {
+ ALOGE("missing driver proc: %s", "vkCreateDevice");
+ success = false;
+ }
+ dispatch.EnumerateDeviceLayerProperties = reinterpret_cast<PFN_vkEnumerateDeviceLayerProperties>(get_proc_addr(instance, "vkEnumerateDeviceLayerProperties"));
+ if (UNLIKELY(!dispatch.EnumerateDeviceLayerProperties)) {
+ ALOGE("missing driver proc: %s", "vkEnumerateDeviceLayerProperties");
+ success = false;
+ }
+ dispatch.EnumerateDeviceExtensionProperties = reinterpret_cast<PFN_vkEnumerateDeviceExtensionProperties>(get_proc_addr(instance, "vkEnumerateDeviceExtensionProperties"));
+ if (UNLIKELY(!dispatch.EnumerateDeviceExtensionProperties)) {
+ ALOGE("missing driver proc: %s", "vkEnumerateDeviceExtensionProperties");
+ success = false;
+ }
+ dispatch.GetPhysicalDeviceSparseImageFormatProperties = reinterpret_cast<PFN_vkGetPhysicalDeviceSparseImageFormatProperties>(get_proc_addr(instance, "vkGetPhysicalDeviceSparseImageFormatProperties"));
+ if (UNLIKELY(!dispatch.GetPhysicalDeviceSparseImageFormatProperties)) {
+ ALOGE("missing driver proc: %s", "vkGetPhysicalDeviceSparseImageFormatProperties");
+ success = false;
+ }
+ if (extensions[kEXT_debug_report]) {
+ dispatch.CreateDebugReportCallbackEXT = reinterpret_cast<PFN_vkCreateDebugReportCallbackEXT>(get_proc_addr(instance, "vkCreateDebugReportCallbackEXT"));
+ if (UNLIKELY(!dispatch.CreateDebugReportCallbackEXT)) {
+ ALOGE("missing driver proc: %s", "vkCreateDebugReportCallbackEXT");
+ success = false;
+ }
+ dispatch.DestroyDebugReportCallbackEXT = reinterpret_cast<PFN_vkDestroyDebugReportCallbackEXT>(get_proc_addr(instance, "vkDestroyDebugReportCallbackEXT"));
+ if (UNLIKELY(!dispatch.DestroyDebugReportCallbackEXT)) {
+ ALOGE("missing driver proc: %s", "vkDestroyDebugReportCallbackEXT");
+ success = false;
+ }
+ dispatch.DebugReportMessageEXT = reinterpret_cast<PFN_vkDebugReportMessageEXT>(get_proc_addr(instance, "vkDebugReportMessageEXT"));
+ if (UNLIKELY(!dispatch.DebugReportMessageEXT)) {
+ ALOGE("missing driver proc: %s", "vkDebugReportMessageEXT");
+ success = false;
+ }
+ }
+ dispatch.GetDeviceProcAddr = reinterpret_cast<PFN_vkGetDeviceProcAddr>(get_proc_addr(instance, "vkGetDeviceProcAddr"));
+ if (UNLIKELY(!dispatch.GetDeviceProcAddr)) {
+ ALOGE("missing driver proc: %s", "vkGetDeviceProcAddr");
+ success = false;
+ }
+ dispatch.CreateImage = reinterpret_cast<PFN_vkCreateImage>(get_proc_addr(instance, "vkCreateImage"));
+ if (UNLIKELY(!dispatch.CreateImage)) {
+ ALOGE("missing driver proc: %s", "vkCreateImage");
+ success = false;
+ }
+ dispatch.DestroyImage = reinterpret_cast<PFN_vkDestroyImage>(get_proc_addr(instance, "vkDestroyImage"));
+ if (UNLIKELY(!dispatch.DestroyImage)) {
+ ALOGE("missing driver proc: %s", "vkDestroyImage");
+ success = false;
+ }
+ dispatch.GetSwapchainGrallocUsageANDROID = reinterpret_cast<PFN_vkGetSwapchainGrallocUsageANDROID>(get_proc_addr(instance, "vkGetSwapchainGrallocUsageANDROID"));
+ if (UNLIKELY(!dispatch.GetSwapchainGrallocUsageANDROID)) {
+ ALOGE("missing driver proc: %s", "vkGetSwapchainGrallocUsageANDROID");
+ success = false;
+ }
+ dispatch.AcquireImageANDROID = reinterpret_cast<PFN_vkAcquireImageANDROID>(get_proc_addr(instance, "vkAcquireImageANDROID"));
+ if (UNLIKELY(!dispatch.AcquireImageANDROID)) {
+ ALOGE("missing driver proc: %s", "vkAcquireImageANDROID");
+ success = false;
+ }
+ dispatch.QueueSignalReleaseImageANDROID = reinterpret_cast<PFN_vkQueueSignalReleaseImageANDROID>(get_proc_addr(instance, "vkQueueSignalReleaseImageANDROID"));
+ if (UNLIKELY(!dispatch.QueueSignalReleaseImageANDROID)) {
+ ALOGE("missing driver proc: %s", "vkQueueSignalReleaseImageANDROID");
+ success = false;
+ }
+ // clang-format on
+ return success;
+}
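+
+// Usage sketch (illustrative only, not part of the generated tables): a
+// caller is expected to populate the driver dispatch table once per
+// VkInstance and treat any missing required entry point as a fatal
+// initialization error. Names below other than LoadDriverDispatchTable are
+// hypothetical:
+//
+//   DriverDispatchTable dispatch;
+//   if (!LoadDriverDispatchTable(instance, driver_get_proc_addr,
+//                                enabled_extensions, dispatch))
+//       return VK_ERROR_INITIALIZATION_FAILED;  // some required proc missing
+//
+// Extension entry points (e.g. VK_EXT_debug_report) are only looked up when
+// the corresponding extension was enabled, so their absence is not an error
+// unless the extension bit is set.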
+
+} // namespace vulkan
+
+// clang-format off
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkCreateInstance(const VkInstanceCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkInstance* pInstance) {
+ return CreateInstance_Top(pCreateInfo, pAllocator, pInstance);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkDestroyInstance(VkInstance instance, const VkAllocationCallbacks* pAllocator) {
+ DestroyInstance_Top(instance, pAllocator);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkEnumeratePhysicalDevices(VkInstance instance, uint32_t* pPhysicalDeviceCount, VkPhysicalDevice* pPhysicalDevices) {
+ return GetDispatchTable(instance).EnumeratePhysicalDevices(instance, pPhysicalDeviceCount, pPhysicalDevices);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR PFN_vkVoidFunction vkGetDeviceProcAddr(VkDevice device, const char* pName) {
+ return GetDeviceProcAddr_Top(device, pName);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR PFN_vkVoidFunction vkGetInstanceProcAddr(VkInstance instance, const char* pName) {
+ return GetInstanceProcAddr_Top(instance, pName);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkGetPhysicalDeviceProperties(VkPhysicalDevice physicalDevice, VkPhysicalDeviceProperties* pProperties) {
+ GetDispatchTable(physicalDevice).GetPhysicalDeviceProperties(physicalDevice, pProperties);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkGetPhysicalDeviceQueueFamilyProperties(VkPhysicalDevice physicalDevice, uint32_t* pQueueFamilyPropertyCount, VkQueueFamilyProperties* pQueueFamilyProperties) {
+ GetDispatchTable(physicalDevice).GetPhysicalDeviceQueueFamilyProperties(physicalDevice, pQueueFamilyPropertyCount, pQueueFamilyProperties);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkGetPhysicalDeviceMemoryProperties(VkPhysicalDevice physicalDevice, VkPhysicalDeviceMemoryProperties* pMemoryProperties) {
+ GetDispatchTable(physicalDevice).GetPhysicalDeviceMemoryProperties(physicalDevice, pMemoryProperties);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkGetPhysicalDeviceFeatures(VkPhysicalDevice physicalDevice, VkPhysicalDeviceFeatures* pFeatures) {
+ GetDispatchTable(physicalDevice).GetPhysicalDeviceFeatures(physicalDevice, pFeatures);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkGetPhysicalDeviceFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format, VkFormatProperties* pFormatProperties) {
+ GetDispatchTable(physicalDevice).GetPhysicalDeviceFormatProperties(physicalDevice, format, pFormatProperties);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkGetPhysicalDeviceImageFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type, VkImageTiling tiling, VkImageUsageFlags usage, VkImageCreateFlags flags, VkImageFormatProperties* pImageFormatProperties) {
+ return GetDispatchTable(physicalDevice).GetPhysicalDeviceImageFormatProperties(physicalDevice, format, type, tiling, usage, flags, pImageFormatProperties);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkCreateDevice(VkPhysicalDevice physicalDevice, const VkDeviceCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDevice* pDevice) {
+ return GetDispatchTable(physicalDevice).CreateDevice(physicalDevice, pCreateInfo, pAllocator, pDevice);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkDestroyDevice(VkDevice device, const VkAllocationCallbacks* pAllocator) {
+ DestroyDevice_Top(device, pAllocator);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkEnumerateInstanceLayerProperties(uint32_t* pPropertyCount, VkLayerProperties* pProperties) {
+ return EnumerateInstanceLayerProperties_Top(pPropertyCount, pProperties);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkEnumerateInstanceExtensionProperties(const char* pLayerName, uint32_t* pPropertyCount, VkExtensionProperties* pProperties) {
+ return EnumerateInstanceExtensionProperties_Top(pLayerName, pPropertyCount, pProperties);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkEnumerateDeviceLayerProperties(VkPhysicalDevice physicalDevice, uint32_t* pPropertyCount, VkLayerProperties* pProperties) {
+ return GetDispatchTable(physicalDevice).EnumerateDeviceLayerProperties(physicalDevice, pPropertyCount, pProperties);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkEnumerateDeviceExtensionProperties(VkPhysicalDevice physicalDevice, const char* pLayerName, uint32_t* pPropertyCount, VkExtensionProperties* pProperties) {
+ return GetDispatchTable(physicalDevice).EnumerateDeviceExtensionProperties(physicalDevice, pLayerName, pPropertyCount, pProperties);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkGetDeviceQueue(VkDevice device, uint32_t queueFamilyIndex, uint32_t queueIndex, VkQueue* pQueue) {
+ GetDeviceQueue_Top(device, queueFamilyIndex, queueIndex, pQueue);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkQueueSubmit(VkQueue queue, uint32_t submitCount, const VkSubmitInfo* pSubmits, VkFence fence) {
+ return GetDispatchTable(queue).QueueSubmit(queue, submitCount, pSubmits, fence);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkQueueWaitIdle(VkQueue queue) {
+ return GetDispatchTable(queue).QueueWaitIdle(queue);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkDeviceWaitIdle(VkDevice device) {
+ return GetDispatchTable(device).DeviceWaitIdle(device);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkAllocateMemory(VkDevice device, const VkMemoryAllocateInfo* pAllocateInfo, const VkAllocationCallbacks* pAllocator, VkDeviceMemory* pMemory) {
+ return GetDispatchTable(device).AllocateMemory(device, pAllocateInfo, pAllocator, pMemory);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkFreeMemory(VkDevice device, VkDeviceMemory memory, const VkAllocationCallbacks* pAllocator) {
+ GetDispatchTable(device).FreeMemory(device, memory, pAllocator);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkMapMemory(VkDevice device, VkDeviceMemory memory, VkDeviceSize offset, VkDeviceSize size, VkMemoryMapFlags flags, void** ppData) {
+ return GetDispatchTable(device).MapMemory(device, memory, offset, size, flags, ppData);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkUnmapMemory(VkDevice device, VkDeviceMemory memory) {
+ GetDispatchTable(device).UnmapMemory(device, memory);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkFlushMappedMemoryRanges(VkDevice device, uint32_t memoryRangeCount, const VkMappedMemoryRange* pMemoryRanges) {
+ return GetDispatchTable(device).FlushMappedMemoryRanges(device, memoryRangeCount, pMemoryRanges);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkInvalidateMappedMemoryRanges(VkDevice device, uint32_t memoryRangeCount, const VkMappedMemoryRange* pMemoryRanges) {
+ return GetDispatchTable(device).InvalidateMappedMemoryRanges(device, memoryRangeCount, pMemoryRanges);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkGetDeviceMemoryCommitment(VkDevice device, VkDeviceMemory memory, VkDeviceSize* pCommittedMemoryInBytes) {
+ GetDispatchTable(device).GetDeviceMemoryCommitment(device, memory, pCommittedMemoryInBytes);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkGetBufferMemoryRequirements(VkDevice device, VkBuffer buffer, VkMemoryRequirements* pMemoryRequirements) {
+ GetDispatchTable(device).GetBufferMemoryRequirements(device, buffer, pMemoryRequirements);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkBindBufferMemory(VkDevice device, VkBuffer buffer, VkDeviceMemory memory, VkDeviceSize memoryOffset) {
+ return GetDispatchTable(device).BindBufferMemory(device, buffer, memory, memoryOffset);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkGetImageMemoryRequirements(VkDevice device, VkImage image, VkMemoryRequirements* pMemoryRequirements) {
+ GetDispatchTable(device).GetImageMemoryRequirements(device, image, pMemoryRequirements);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkBindImageMemory(VkDevice device, VkImage image, VkDeviceMemory memory, VkDeviceSize memoryOffset) {
+ return GetDispatchTable(device).BindImageMemory(device, image, memory, memoryOffset);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkGetImageSparseMemoryRequirements(VkDevice device, VkImage image, uint32_t* pSparseMemoryRequirementCount, VkSparseImageMemoryRequirements* pSparseMemoryRequirements) {
+ GetDispatchTable(device).GetImageSparseMemoryRequirements(device, image, pSparseMemoryRequirementCount, pSparseMemoryRequirements);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkGetPhysicalDeviceSparseImageFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type, VkSampleCountFlagBits samples, VkImageUsageFlags usage, VkImageTiling tiling, uint32_t* pPropertyCount, VkSparseImageFormatProperties* pProperties) {
+ GetDispatchTable(physicalDevice).GetPhysicalDeviceSparseImageFormatProperties(physicalDevice, format, type, samples, usage, tiling, pPropertyCount, pProperties);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkQueueBindSparse(VkQueue queue, uint32_t bindInfoCount, const VkBindSparseInfo* pBindInfo, VkFence fence) {
+ return GetDispatchTable(queue).QueueBindSparse(queue, bindInfoCount, pBindInfo, fence);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkCreateFence(VkDevice device, const VkFenceCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkFence* pFence) {
+ return GetDispatchTable(device).CreateFence(device, pCreateInfo, pAllocator, pFence);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkDestroyFence(VkDevice device, VkFence fence, const VkAllocationCallbacks* pAllocator) {
+ GetDispatchTable(device).DestroyFence(device, fence, pAllocator);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkResetFences(VkDevice device, uint32_t fenceCount, const VkFence* pFences) {
+ return GetDispatchTable(device).ResetFences(device, fenceCount, pFences);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkGetFenceStatus(VkDevice device, VkFence fence) {
+ return GetDispatchTable(device).GetFenceStatus(device, fence);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkWaitForFences(VkDevice device, uint32_t fenceCount, const VkFence* pFences, VkBool32 waitAll, uint64_t timeout) {
+ return GetDispatchTable(device).WaitForFences(device, fenceCount, pFences, waitAll, timeout);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkCreateSemaphore(VkDevice device, const VkSemaphoreCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSemaphore* pSemaphore) {
+ return GetDispatchTable(device).CreateSemaphore(device, pCreateInfo, pAllocator, pSemaphore);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkDestroySemaphore(VkDevice device, VkSemaphore semaphore, const VkAllocationCallbacks* pAllocator) {
+ GetDispatchTable(device).DestroySemaphore(device, semaphore, pAllocator);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkCreateEvent(VkDevice device, const VkEventCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkEvent* pEvent) {
+ return GetDispatchTable(device).CreateEvent(device, pCreateInfo, pAllocator, pEvent);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkDestroyEvent(VkDevice device, VkEvent event, const VkAllocationCallbacks* pAllocator) {
+ GetDispatchTable(device).DestroyEvent(device, event, pAllocator);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkGetEventStatus(VkDevice device, VkEvent event) {
+ return GetDispatchTable(device).GetEventStatus(device, event);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkSetEvent(VkDevice device, VkEvent event) {
+ return GetDispatchTable(device).SetEvent(device, event);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkResetEvent(VkDevice device, VkEvent event) {
+ return GetDispatchTable(device).ResetEvent(device, event);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkCreateQueryPool(VkDevice device, const VkQueryPoolCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkQueryPool* pQueryPool) {
+ return GetDispatchTable(device).CreateQueryPool(device, pCreateInfo, pAllocator, pQueryPool);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkDestroyQueryPool(VkDevice device, VkQueryPool queryPool, const VkAllocationCallbacks* pAllocator) {
+ GetDispatchTable(device).DestroyQueryPool(device, queryPool, pAllocator);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkGetQueryPoolResults(VkDevice device, VkQueryPool queryPool, uint32_t firstQuery, uint32_t queryCount, size_t dataSize, void* pData, VkDeviceSize stride, VkQueryResultFlags flags) {
+ return GetDispatchTable(device).GetQueryPoolResults(device, queryPool, firstQuery, queryCount, dataSize, pData, stride, flags);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkCreateBuffer(VkDevice device, const VkBufferCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkBuffer* pBuffer) {
+ return GetDispatchTable(device).CreateBuffer(device, pCreateInfo, pAllocator, pBuffer);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkDestroyBuffer(VkDevice device, VkBuffer buffer, const VkAllocationCallbacks* pAllocator) {
+ GetDispatchTable(device).DestroyBuffer(device, buffer, pAllocator);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkCreateBufferView(VkDevice device, const VkBufferViewCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkBufferView* pView) {
+ return GetDispatchTable(device).CreateBufferView(device, pCreateInfo, pAllocator, pView);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkDestroyBufferView(VkDevice device, VkBufferView bufferView, const VkAllocationCallbacks* pAllocator) {
+ GetDispatchTable(device).DestroyBufferView(device, bufferView, pAllocator);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkCreateImage(VkDevice device, const VkImageCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkImage* pImage) {
+ return GetDispatchTable(device).CreateImage(device, pCreateInfo, pAllocator, pImage);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkDestroyImage(VkDevice device, VkImage image, const VkAllocationCallbacks* pAllocator) {
+ GetDispatchTable(device).DestroyImage(device, image, pAllocator);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkGetImageSubresourceLayout(VkDevice device, VkImage image, const VkImageSubresource* pSubresource, VkSubresourceLayout* pLayout) {
+ GetDispatchTable(device).GetImageSubresourceLayout(device, image, pSubresource, pLayout);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkCreateImageView(VkDevice device, const VkImageViewCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkImageView* pView) {
+ return GetDispatchTable(device).CreateImageView(device, pCreateInfo, pAllocator, pView);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkDestroyImageView(VkDevice device, VkImageView imageView, const VkAllocationCallbacks* pAllocator) {
+ GetDispatchTable(device).DestroyImageView(device, imageView, pAllocator);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkCreateShaderModule(VkDevice device, const VkShaderModuleCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkShaderModule* pShaderModule) {
+ return GetDispatchTable(device).CreateShaderModule(device, pCreateInfo, pAllocator, pShaderModule);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkDestroyShaderModule(VkDevice device, VkShaderModule shaderModule, const VkAllocationCallbacks* pAllocator) {
+ GetDispatchTable(device).DestroyShaderModule(device, shaderModule, pAllocator);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkCreatePipelineCache(VkDevice device, const VkPipelineCacheCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkPipelineCache* pPipelineCache) {
+ return GetDispatchTable(device).CreatePipelineCache(device, pCreateInfo, pAllocator, pPipelineCache);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkDestroyPipelineCache(VkDevice device, VkPipelineCache pipelineCache, const VkAllocationCallbacks* pAllocator) {
+ GetDispatchTable(device).DestroyPipelineCache(device, pipelineCache, pAllocator);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkGetPipelineCacheData(VkDevice device, VkPipelineCache pipelineCache, size_t* pDataSize, void* pData) {
+ return GetDispatchTable(device).GetPipelineCacheData(device, pipelineCache, pDataSize, pData);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkMergePipelineCaches(VkDevice device, VkPipelineCache dstCache, uint32_t srcCacheCount, const VkPipelineCache* pSrcCaches) {
+ return GetDispatchTable(device).MergePipelineCaches(device, dstCache, srcCacheCount, pSrcCaches);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkCreateGraphicsPipelines(VkDevice device, VkPipelineCache pipelineCache, uint32_t createInfoCount, const VkGraphicsPipelineCreateInfo* pCreateInfos, const VkAllocationCallbacks* pAllocator, VkPipeline* pPipelines) {
+ return GetDispatchTable(device).CreateGraphicsPipelines(device, pipelineCache, createInfoCount, pCreateInfos, pAllocator, pPipelines);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkCreateComputePipelines(VkDevice device, VkPipelineCache pipelineCache, uint32_t createInfoCount, const VkComputePipelineCreateInfo* pCreateInfos, const VkAllocationCallbacks* pAllocator, VkPipeline* pPipelines) {
+ return GetDispatchTable(device).CreateComputePipelines(device, pipelineCache, createInfoCount, pCreateInfos, pAllocator, pPipelines);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkDestroyPipeline(VkDevice device, VkPipeline pipeline, const VkAllocationCallbacks* pAllocator) {
+ GetDispatchTable(device).DestroyPipeline(device, pipeline, pAllocator);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkCreatePipelineLayout(VkDevice device, const VkPipelineLayoutCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkPipelineLayout* pPipelineLayout) {
+ return GetDispatchTable(device).CreatePipelineLayout(device, pCreateInfo, pAllocator, pPipelineLayout);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkDestroyPipelineLayout(VkDevice device, VkPipelineLayout pipelineLayout, const VkAllocationCallbacks* pAllocator) {
+ GetDispatchTable(device).DestroyPipelineLayout(device, pipelineLayout, pAllocator);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkCreateSampler(VkDevice device, const VkSamplerCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSampler* pSampler) {
+ return GetDispatchTable(device).CreateSampler(device, pCreateInfo, pAllocator, pSampler);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkDestroySampler(VkDevice device, VkSampler sampler, const VkAllocationCallbacks* pAllocator) {
+ GetDispatchTable(device).DestroySampler(device, sampler, pAllocator);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkCreateDescriptorSetLayout(VkDevice device, const VkDescriptorSetLayoutCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDescriptorSetLayout* pSetLayout) {
+ return GetDispatchTable(device).CreateDescriptorSetLayout(device, pCreateInfo, pAllocator, pSetLayout);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkDestroyDescriptorSetLayout(VkDevice device, VkDescriptorSetLayout descriptorSetLayout, const VkAllocationCallbacks* pAllocator) {
+ GetDispatchTable(device).DestroyDescriptorSetLayout(device, descriptorSetLayout, pAllocator);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkCreateDescriptorPool(VkDevice device, const VkDescriptorPoolCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDescriptorPool* pDescriptorPool) {
+ return GetDispatchTable(device).CreateDescriptorPool(device, pCreateInfo, pAllocator, pDescriptorPool);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkDestroyDescriptorPool(VkDevice device, VkDescriptorPool descriptorPool, const VkAllocationCallbacks* pAllocator) {
+ GetDispatchTable(device).DestroyDescriptorPool(device, descriptorPool, pAllocator);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkResetDescriptorPool(VkDevice device, VkDescriptorPool descriptorPool, VkDescriptorPoolResetFlags flags) {
+ return GetDispatchTable(device).ResetDescriptorPool(device, descriptorPool, flags);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkAllocateDescriptorSets(VkDevice device, const VkDescriptorSetAllocateInfo* pAllocateInfo, VkDescriptorSet* pDescriptorSets) {
+ return GetDispatchTable(device).AllocateDescriptorSets(device, pAllocateInfo, pDescriptorSets);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkFreeDescriptorSets(VkDevice device, VkDescriptorPool descriptorPool, uint32_t descriptorSetCount, const VkDescriptorSet* pDescriptorSets) {
+ return GetDispatchTable(device).FreeDescriptorSets(device, descriptorPool, descriptorSetCount, pDescriptorSets);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkUpdateDescriptorSets(VkDevice device, uint32_t descriptorWriteCount, const VkWriteDescriptorSet* pDescriptorWrites, uint32_t descriptorCopyCount, const VkCopyDescriptorSet* pDescriptorCopies) {
+ GetDispatchTable(device).UpdateDescriptorSets(device, descriptorWriteCount, pDescriptorWrites, descriptorCopyCount, pDescriptorCopies);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkCreateFramebuffer(VkDevice device, const VkFramebufferCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkFramebuffer* pFramebuffer) {
+ return GetDispatchTable(device).CreateFramebuffer(device, pCreateInfo, pAllocator, pFramebuffer);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkDestroyFramebuffer(VkDevice device, VkFramebuffer framebuffer, const VkAllocationCallbacks* pAllocator) {
+ GetDispatchTable(device).DestroyFramebuffer(device, framebuffer, pAllocator);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkCreateRenderPass(VkDevice device, const VkRenderPassCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkRenderPass* pRenderPass) {
+ return GetDispatchTable(device).CreateRenderPass(device, pCreateInfo, pAllocator, pRenderPass);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkDestroyRenderPass(VkDevice device, VkRenderPass renderPass, const VkAllocationCallbacks* pAllocator) {
+ GetDispatchTable(device).DestroyRenderPass(device, renderPass, pAllocator);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkGetRenderAreaGranularity(VkDevice device, VkRenderPass renderPass, VkExtent2D* pGranularity) {
+ GetDispatchTable(device).GetRenderAreaGranularity(device, renderPass, pGranularity);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkCreateCommandPool(VkDevice device, const VkCommandPoolCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkCommandPool* pCommandPool) {
+ return GetDispatchTable(device).CreateCommandPool(device, pCreateInfo, pAllocator, pCommandPool);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkDestroyCommandPool(VkDevice device, VkCommandPool commandPool, const VkAllocationCallbacks* pAllocator) {
+ GetDispatchTable(device).DestroyCommandPool(device, commandPool, pAllocator);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkResetCommandPool(VkDevice device, VkCommandPool commandPool, VkCommandPoolResetFlags flags) {
+ return GetDispatchTable(device).ResetCommandPool(device, commandPool, flags);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkAllocateCommandBuffers(VkDevice device, const VkCommandBufferAllocateInfo* pAllocateInfo, VkCommandBuffer* pCommandBuffers) {
+ return AllocateCommandBuffers_Top(device, pAllocateInfo, pCommandBuffers);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkFreeCommandBuffers(VkDevice device, VkCommandPool commandPool, uint32_t commandBufferCount, const VkCommandBuffer* pCommandBuffers) {
+ GetDispatchTable(device).FreeCommandBuffers(device, commandPool, commandBufferCount, pCommandBuffers);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkBeginCommandBuffer(VkCommandBuffer commandBuffer, const VkCommandBufferBeginInfo* pBeginInfo) {
+ return GetDispatchTable(commandBuffer).BeginCommandBuffer(commandBuffer, pBeginInfo);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkEndCommandBuffer(VkCommandBuffer commandBuffer) {
+ return GetDispatchTable(commandBuffer).EndCommandBuffer(commandBuffer);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkResetCommandBuffer(VkCommandBuffer commandBuffer, VkCommandBufferResetFlags flags) {
+ return GetDispatchTable(commandBuffer).ResetCommandBuffer(commandBuffer, flags);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdBindPipeline(VkCommandBuffer commandBuffer, VkPipelineBindPoint pipelineBindPoint, VkPipeline pipeline) {
+ GetDispatchTable(commandBuffer).CmdBindPipeline(commandBuffer, pipelineBindPoint, pipeline);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdSetViewport(VkCommandBuffer commandBuffer, uint32_t firstViewport, uint32_t viewportCount, const VkViewport* pViewports) {
+ GetDispatchTable(commandBuffer).CmdSetViewport(commandBuffer, firstViewport, viewportCount, pViewports);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdSetScissor(VkCommandBuffer commandBuffer, uint32_t firstScissor, uint32_t scissorCount, const VkRect2D* pScissors) {
+ GetDispatchTable(commandBuffer).CmdSetScissor(commandBuffer, firstScissor, scissorCount, pScissors);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdSetLineWidth(VkCommandBuffer commandBuffer, float lineWidth) {
+ GetDispatchTable(commandBuffer).CmdSetLineWidth(commandBuffer, lineWidth);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdSetDepthBias(VkCommandBuffer commandBuffer, float depthBiasConstantFactor, float depthBiasClamp, float depthBiasSlopeFactor) {
+ GetDispatchTable(commandBuffer).CmdSetDepthBias(commandBuffer, depthBiasConstantFactor, depthBiasClamp, depthBiasSlopeFactor);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdSetBlendConstants(VkCommandBuffer commandBuffer, const float blendConstants[4]) {
+ GetDispatchTable(commandBuffer).CmdSetBlendConstants(commandBuffer, blendConstants);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdSetDepthBounds(VkCommandBuffer commandBuffer, float minDepthBounds, float maxDepthBounds) {
+ GetDispatchTable(commandBuffer).CmdSetDepthBounds(commandBuffer, minDepthBounds, maxDepthBounds);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdSetStencilCompareMask(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t compareMask) {
+ GetDispatchTable(commandBuffer).CmdSetStencilCompareMask(commandBuffer, faceMask, compareMask);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdSetStencilWriteMask(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t writeMask) {
+ GetDispatchTable(commandBuffer).CmdSetStencilWriteMask(commandBuffer, faceMask, writeMask);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdSetStencilReference(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t reference) {
+ GetDispatchTable(commandBuffer).CmdSetStencilReference(commandBuffer, faceMask, reference);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdBindDescriptorSets(VkCommandBuffer commandBuffer, VkPipelineBindPoint pipelineBindPoint, VkPipelineLayout layout, uint32_t firstSet, uint32_t descriptorSetCount, const VkDescriptorSet* pDescriptorSets, uint32_t dynamicOffsetCount, const uint32_t* pDynamicOffsets) {
+ GetDispatchTable(commandBuffer).CmdBindDescriptorSets(commandBuffer, pipelineBindPoint, layout, firstSet, descriptorSetCount, pDescriptorSets, dynamicOffsetCount, pDynamicOffsets);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdBindIndexBuffer(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, VkIndexType indexType) {
+ GetDispatchTable(commandBuffer).CmdBindIndexBuffer(commandBuffer, buffer, offset, indexType);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdBindVertexBuffers(VkCommandBuffer commandBuffer, uint32_t firstBinding, uint32_t bindingCount, const VkBuffer* pBuffers, const VkDeviceSize* pOffsets) {
+ GetDispatchTable(commandBuffer).CmdBindVertexBuffers(commandBuffer, firstBinding, bindingCount, pBuffers, pOffsets);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdDraw(VkCommandBuffer commandBuffer, uint32_t vertexCount, uint32_t instanceCount, uint32_t firstVertex, uint32_t firstInstance) {
+ GetDispatchTable(commandBuffer).CmdDraw(commandBuffer, vertexCount, instanceCount, firstVertex, firstInstance);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdDrawIndexed(VkCommandBuffer commandBuffer, uint32_t indexCount, uint32_t instanceCount, uint32_t firstIndex, int32_t vertexOffset, uint32_t firstInstance) {
+ GetDispatchTable(commandBuffer).CmdDrawIndexed(commandBuffer, indexCount, instanceCount, firstIndex, vertexOffset, firstInstance);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdDrawIndirect(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, uint32_t drawCount, uint32_t stride) {
+ GetDispatchTable(commandBuffer).CmdDrawIndirect(commandBuffer, buffer, offset, drawCount, stride);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdDrawIndexedIndirect(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, uint32_t drawCount, uint32_t stride) {
+ GetDispatchTable(commandBuffer).CmdDrawIndexedIndirect(commandBuffer, buffer, offset, drawCount, stride);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdDispatch(VkCommandBuffer commandBuffer, uint32_t x, uint32_t y, uint32_t z) {
+ GetDispatchTable(commandBuffer).CmdDispatch(commandBuffer, x, y, z);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdDispatchIndirect(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset) {
+ GetDispatchTable(commandBuffer).CmdDispatchIndirect(commandBuffer, buffer, offset);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdCopyBuffer(VkCommandBuffer commandBuffer, VkBuffer srcBuffer, VkBuffer dstBuffer, uint32_t regionCount, const VkBufferCopy* pRegions) {
+ GetDispatchTable(commandBuffer).CmdCopyBuffer(commandBuffer, srcBuffer, dstBuffer, regionCount, pRegions);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdCopyImage(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage, VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageCopy* pRegions) {
+ GetDispatchTable(commandBuffer).CmdCopyImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount, pRegions);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdBlitImage(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage, VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageBlit* pRegions, VkFilter filter) {
+ GetDispatchTable(commandBuffer).CmdBlitImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount, pRegions, filter);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdCopyBufferToImage(VkCommandBuffer commandBuffer, VkBuffer srcBuffer, VkImage dstImage, VkImageLayout dstImageLayout, uint32_t regionCount, const VkBufferImageCopy* pRegions) {
+ GetDispatchTable(commandBuffer).CmdCopyBufferToImage(commandBuffer, srcBuffer, dstImage, dstImageLayout, regionCount, pRegions);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdCopyImageToBuffer(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkBuffer dstBuffer, uint32_t regionCount, const VkBufferImageCopy* pRegions) {
+ GetDispatchTable(commandBuffer).CmdCopyImageToBuffer(commandBuffer, srcImage, srcImageLayout, dstBuffer, regionCount, pRegions);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdUpdateBuffer(VkCommandBuffer commandBuffer, VkBuffer dstBuffer, VkDeviceSize dstOffset, VkDeviceSize dataSize, const uint32_t* pData) {
+ GetDispatchTable(commandBuffer).CmdUpdateBuffer(commandBuffer, dstBuffer, dstOffset, dataSize, pData);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdFillBuffer(VkCommandBuffer commandBuffer, VkBuffer dstBuffer, VkDeviceSize dstOffset, VkDeviceSize size, uint32_t data) {
+ GetDispatchTable(commandBuffer).CmdFillBuffer(commandBuffer, dstBuffer, dstOffset, size, data);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdClearColorImage(VkCommandBuffer commandBuffer, VkImage image, VkImageLayout imageLayout, const VkClearColorValue* pColor, uint32_t rangeCount, const VkImageSubresourceRange* pRanges) {
+ GetDispatchTable(commandBuffer).CmdClearColorImage(commandBuffer, image, imageLayout, pColor, rangeCount, pRanges);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdClearDepthStencilImage(VkCommandBuffer commandBuffer, VkImage image, VkImageLayout imageLayout, const VkClearDepthStencilValue* pDepthStencil, uint32_t rangeCount, const VkImageSubresourceRange* pRanges) {
+ GetDispatchTable(commandBuffer).CmdClearDepthStencilImage(commandBuffer, image, imageLayout, pDepthStencil, rangeCount, pRanges);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdClearAttachments(VkCommandBuffer commandBuffer, uint32_t attachmentCount, const VkClearAttachment* pAttachments, uint32_t rectCount, const VkClearRect* pRects) {
+ GetDispatchTable(commandBuffer).CmdClearAttachments(commandBuffer, attachmentCount, pAttachments, rectCount, pRects);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdResolveImage(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage, VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageResolve* pRegions) {
+ GetDispatchTable(commandBuffer).CmdResolveImage(commandBuffer, srcImage, srcImageLayout, dstImage, dstImageLayout, regionCount, pRegions);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdSetEvent(VkCommandBuffer commandBuffer, VkEvent event, VkPipelineStageFlags stageMask) {
+ GetDispatchTable(commandBuffer).CmdSetEvent(commandBuffer, event, stageMask);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdResetEvent(VkCommandBuffer commandBuffer, VkEvent event, VkPipelineStageFlags stageMask) {
+ GetDispatchTable(commandBuffer).CmdResetEvent(commandBuffer, event, stageMask);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdWaitEvents(VkCommandBuffer commandBuffer, uint32_t eventCount, const VkEvent* pEvents, VkPipelineStageFlags srcStageMask, VkPipelineStageFlags dstStageMask, uint32_t memoryBarrierCount, const VkMemoryBarrier* pMemoryBarriers, uint32_t bufferMemoryBarrierCount, const VkBufferMemoryBarrier* pBufferMemoryBarriers, uint32_t imageMemoryBarrierCount, const VkImageMemoryBarrier* pImageMemoryBarriers) {
+ GetDispatchTable(commandBuffer).CmdWaitEvents(commandBuffer, eventCount, pEvents, srcStageMask, dstStageMask, memoryBarrierCount, pMemoryBarriers, bufferMemoryBarrierCount, pBufferMemoryBarriers, imageMemoryBarrierCount, pImageMemoryBarriers);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdPipelineBarrier(VkCommandBuffer commandBuffer, VkPipelineStageFlags srcStageMask, VkPipelineStageFlags dstStageMask, VkDependencyFlags dependencyFlags, uint32_t memoryBarrierCount, const VkMemoryBarrier* pMemoryBarriers, uint32_t bufferMemoryBarrierCount, const VkBufferMemoryBarrier* pBufferMemoryBarriers, uint32_t imageMemoryBarrierCount, const VkImageMemoryBarrier* pImageMemoryBarriers) {
+ GetDispatchTable(commandBuffer).CmdPipelineBarrier(commandBuffer, srcStageMask, dstStageMask, dependencyFlags, memoryBarrierCount, pMemoryBarriers, bufferMemoryBarrierCount, pBufferMemoryBarriers, imageMemoryBarrierCount, pImageMemoryBarriers);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdBeginQuery(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t query, VkQueryControlFlags flags) {
+ GetDispatchTable(commandBuffer).CmdBeginQuery(commandBuffer, queryPool, query, flags);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdEndQuery(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t query) {
+ GetDispatchTable(commandBuffer).CmdEndQuery(commandBuffer, queryPool, query);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdResetQueryPool(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t firstQuery, uint32_t queryCount) {
+ GetDispatchTable(commandBuffer).CmdResetQueryPool(commandBuffer, queryPool, firstQuery, queryCount);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdWriteTimestamp(VkCommandBuffer commandBuffer, VkPipelineStageFlagBits pipelineStage, VkQueryPool queryPool, uint32_t query) {
+ GetDispatchTable(commandBuffer).CmdWriteTimestamp(commandBuffer, pipelineStage, queryPool, query);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdCopyQueryPoolResults(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t firstQuery, uint32_t queryCount, VkBuffer dstBuffer, VkDeviceSize dstOffset, VkDeviceSize stride, VkQueryResultFlags flags) {
+ GetDispatchTable(commandBuffer).CmdCopyQueryPoolResults(commandBuffer, queryPool, firstQuery, queryCount, dstBuffer, dstOffset, stride, flags);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdPushConstants(VkCommandBuffer commandBuffer, VkPipelineLayout layout, VkShaderStageFlags stageFlags, uint32_t offset, uint32_t size, const void* pValues) {
+ GetDispatchTable(commandBuffer).CmdPushConstants(commandBuffer, layout, stageFlags, offset, size, pValues);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdBeginRenderPass(VkCommandBuffer commandBuffer, const VkRenderPassBeginInfo* pRenderPassBegin, VkSubpassContents contents) {
+ GetDispatchTable(commandBuffer).CmdBeginRenderPass(commandBuffer, pRenderPassBegin, contents);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdNextSubpass(VkCommandBuffer commandBuffer, VkSubpassContents contents) {
+ GetDispatchTable(commandBuffer).CmdNextSubpass(commandBuffer, contents);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdEndRenderPass(VkCommandBuffer commandBuffer) {
+ GetDispatchTable(commandBuffer).CmdEndRenderPass(commandBuffer);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkCmdExecuteCommands(VkCommandBuffer commandBuffer, uint32_t commandBufferCount, const VkCommandBuffer* pCommandBuffers) {
+ GetDispatchTable(commandBuffer).CmdExecuteCommands(commandBuffer, commandBufferCount, pCommandBuffers);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkDestroySurfaceKHR(VkInstance instance, VkSurfaceKHR surface, const VkAllocationCallbacks* pAllocator) {
+ GetDispatchTable(instance).DestroySurfaceKHR(instance, surface, pAllocator);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkGetPhysicalDeviceSurfaceSupportKHR(VkPhysicalDevice physicalDevice, uint32_t queueFamilyIndex, VkSurfaceKHR surface, VkBool32* pSupported) {
+ return GetDispatchTable(physicalDevice).GetPhysicalDeviceSurfaceSupportKHR(physicalDevice, queueFamilyIndex, surface, pSupported);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkGetPhysicalDeviceSurfaceCapabilitiesKHR(VkPhysicalDevice physicalDevice, VkSurfaceKHR surface, VkSurfaceCapabilitiesKHR* pSurfaceCapabilities) {
+ return GetDispatchTable(physicalDevice).GetPhysicalDeviceSurfaceCapabilitiesKHR(physicalDevice, surface, pSurfaceCapabilities);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkGetPhysicalDeviceSurfaceFormatsKHR(VkPhysicalDevice physicalDevice, VkSurfaceKHR surface, uint32_t* pSurfaceFormatCount, VkSurfaceFormatKHR* pSurfaceFormats) {
+ return GetDispatchTable(physicalDevice).GetPhysicalDeviceSurfaceFormatsKHR(physicalDevice, surface, pSurfaceFormatCount, pSurfaceFormats);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkGetPhysicalDeviceSurfacePresentModesKHR(VkPhysicalDevice physicalDevice, VkSurfaceKHR surface, uint32_t* pPresentModeCount, VkPresentModeKHR* pPresentModes) {
+ return GetDispatchTable(physicalDevice).GetPhysicalDeviceSurfacePresentModesKHR(physicalDevice, surface, pPresentModeCount, pPresentModes);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkCreateSwapchainKHR(VkDevice device, const VkSwapchainCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSwapchainKHR* pSwapchain) {
+ return GetDispatchTable(device).CreateSwapchainKHR(device, pCreateInfo, pAllocator, pSwapchain);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR void vkDestroySwapchainKHR(VkDevice device, VkSwapchainKHR swapchain, const VkAllocationCallbacks* pAllocator) {
+ GetDispatchTable(device).DestroySwapchainKHR(device, swapchain, pAllocator);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkGetSwapchainImagesKHR(VkDevice device, VkSwapchainKHR swapchain, uint32_t* pSwapchainImageCount, VkImage* pSwapchainImages) {
+ return GetDispatchTable(device).GetSwapchainImagesKHR(device, swapchain, pSwapchainImageCount, pSwapchainImages);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkAcquireNextImageKHR(VkDevice device, VkSwapchainKHR swapchain, uint64_t timeout, VkSemaphore semaphore, VkFence fence, uint32_t* pImageIndex) {
+ return GetDispatchTable(device).AcquireNextImageKHR(device, swapchain, timeout, semaphore, fence, pImageIndex);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkQueuePresentKHR(VkQueue queue, const VkPresentInfoKHR* pPresentInfo) {
+ return GetDispatchTable(queue).QueuePresentKHR(queue, pPresentInfo);
+}
+
+__attribute__((visibility("default")))
+VKAPI_ATTR VkResult vkCreateAndroidSurfaceKHR(VkInstance instance, const VkAndroidSurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSurfaceKHR* pSurface) {
+ return GetDispatchTable(instance).CreateAndroidSurfaceKHR(instance, pCreateInfo, pAllocator, pSurface);
+}
+
+// clang-format on
diff --git a/vulkan/libvulkan/dispatch_gen.h b/vulkan/libvulkan/dispatch_gen.h
new file mode 100644
index 0000000..14c5da8
--- /dev/null
+++ b/vulkan/libvulkan/dispatch_gen.h
@@ -0,0 +1,206 @@
+/*
+ * Copyright 2015 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#define VK_USE_PLATFORM_ANDROID_KHR
+#include <vulkan/vk_android_native_buffer.h>
+#include <vulkan/vk_ext_debug_report.h>
+#include <vulkan/vulkan.h>
+
+namespace vulkan {
+
+struct InstanceDispatchTable {
+ // clang-format off
+ PFN_vkDestroyInstance DestroyInstance;
+ PFN_vkEnumeratePhysicalDevices EnumeratePhysicalDevices;
+ PFN_vkGetPhysicalDeviceProperties GetPhysicalDeviceProperties;
+ PFN_vkGetPhysicalDeviceQueueFamilyProperties GetPhysicalDeviceQueueFamilyProperties;
+ PFN_vkGetPhysicalDeviceMemoryProperties GetPhysicalDeviceMemoryProperties;
+ PFN_vkGetPhysicalDeviceFeatures GetPhysicalDeviceFeatures;
+ PFN_vkGetPhysicalDeviceFormatProperties GetPhysicalDeviceFormatProperties;
+ PFN_vkGetPhysicalDeviceImageFormatProperties GetPhysicalDeviceImageFormatProperties;
+ PFN_vkCreateDevice CreateDevice;
+ PFN_vkEnumerateDeviceLayerProperties EnumerateDeviceLayerProperties;
+ PFN_vkEnumerateDeviceExtensionProperties EnumerateDeviceExtensionProperties;
+ PFN_vkGetPhysicalDeviceSparseImageFormatProperties GetPhysicalDeviceSparseImageFormatProperties;
+ PFN_vkDestroySurfaceKHR DestroySurfaceKHR;
+ PFN_vkGetPhysicalDeviceSurfaceSupportKHR GetPhysicalDeviceSurfaceSupportKHR;
+ PFN_vkGetPhysicalDeviceSurfaceCapabilitiesKHR GetPhysicalDeviceSurfaceCapabilitiesKHR;
+ PFN_vkGetPhysicalDeviceSurfaceFormatsKHR GetPhysicalDeviceSurfaceFormatsKHR;
+ PFN_vkGetPhysicalDeviceSurfacePresentModesKHR GetPhysicalDeviceSurfacePresentModesKHR;
+ PFN_vkCreateAndroidSurfaceKHR CreateAndroidSurfaceKHR;
+ PFN_vkCreateDebugReportCallbackEXT CreateDebugReportCallbackEXT;
+ PFN_vkDestroyDebugReportCallbackEXT DestroyDebugReportCallbackEXT;
+ PFN_vkDebugReportMessageEXT DebugReportMessageEXT;
+ // clang-format on
+};
+
+struct DeviceDispatchTable {
+ // clang-format off
+ PFN_vkDestroyDevice DestroyDevice;
+ PFN_vkGetDeviceQueue GetDeviceQueue;
+ PFN_vkQueueSubmit QueueSubmit;
+ PFN_vkQueueWaitIdle QueueWaitIdle;
+ PFN_vkDeviceWaitIdle DeviceWaitIdle;
+ PFN_vkAllocateMemory AllocateMemory;
+ PFN_vkFreeMemory FreeMemory;
+ PFN_vkMapMemory MapMemory;
+ PFN_vkUnmapMemory UnmapMemory;
+ PFN_vkFlushMappedMemoryRanges FlushMappedMemoryRanges;
+ PFN_vkInvalidateMappedMemoryRanges InvalidateMappedMemoryRanges;
+ PFN_vkGetDeviceMemoryCommitment GetDeviceMemoryCommitment;
+ PFN_vkGetBufferMemoryRequirements GetBufferMemoryRequirements;
+ PFN_vkBindBufferMemory BindBufferMemory;
+ PFN_vkGetImageMemoryRequirements GetImageMemoryRequirements;
+ PFN_vkBindImageMemory BindImageMemory;
+ PFN_vkGetImageSparseMemoryRequirements GetImageSparseMemoryRequirements;
+ PFN_vkQueueBindSparse QueueBindSparse;
+ PFN_vkCreateFence CreateFence;
+ PFN_vkDestroyFence DestroyFence;
+ PFN_vkResetFences ResetFences;
+ PFN_vkGetFenceStatus GetFenceStatus;
+ PFN_vkWaitForFences WaitForFences;
+ PFN_vkCreateSemaphore CreateSemaphore;
+ PFN_vkDestroySemaphore DestroySemaphore;
+ PFN_vkCreateEvent CreateEvent;
+ PFN_vkDestroyEvent DestroyEvent;
+ PFN_vkGetEventStatus GetEventStatus;
+ PFN_vkSetEvent SetEvent;
+ PFN_vkResetEvent ResetEvent;
+ PFN_vkCreateQueryPool CreateQueryPool;
+ PFN_vkDestroyQueryPool DestroyQueryPool;
+ PFN_vkGetQueryPoolResults GetQueryPoolResults;
+ PFN_vkCreateBuffer CreateBuffer;
+ PFN_vkDestroyBuffer DestroyBuffer;
+ PFN_vkCreateBufferView CreateBufferView;
+ PFN_vkDestroyBufferView DestroyBufferView;
+ PFN_vkCreateImage CreateImage;
+ PFN_vkDestroyImage DestroyImage;
+ PFN_vkGetImageSubresourceLayout GetImageSubresourceLayout;
+ PFN_vkCreateImageView CreateImageView;
+ PFN_vkDestroyImageView DestroyImageView;
+ PFN_vkCreateShaderModule CreateShaderModule;
+ PFN_vkDestroyShaderModule DestroyShaderModule;
+ PFN_vkCreatePipelineCache CreatePipelineCache;
+ PFN_vkDestroyPipelineCache DestroyPipelineCache;
+ PFN_vkGetPipelineCacheData GetPipelineCacheData;
+ PFN_vkMergePipelineCaches MergePipelineCaches;
+ PFN_vkCreateGraphicsPipelines CreateGraphicsPipelines;
+ PFN_vkCreateComputePipelines CreateComputePipelines;
+ PFN_vkDestroyPipeline DestroyPipeline;
+ PFN_vkCreatePipelineLayout CreatePipelineLayout;
+ PFN_vkDestroyPipelineLayout DestroyPipelineLayout;
+ PFN_vkCreateSampler CreateSampler;
+ PFN_vkDestroySampler DestroySampler;
+ PFN_vkCreateDescriptorSetLayout CreateDescriptorSetLayout;
+ PFN_vkDestroyDescriptorSetLayout DestroyDescriptorSetLayout;
+ PFN_vkCreateDescriptorPool CreateDescriptorPool;
+ PFN_vkDestroyDescriptorPool DestroyDescriptorPool;
+ PFN_vkResetDescriptorPool ResetDescriptorPool;
+ PFN_vkAllocateDescriptorSets AllocateDescriptorSets;
+ PFN_vkFreeDescriptorSets FreeDescriptorSets;
+ PFN_vkUpdateDescriptorSets UpdateDescriptorSets;
+ PFN_vkCreateFramebuffer CreateFramebuffer;
+ PFN_vkDestroyFramebuffer DestroyFramebuffer;
+ PFN_vkCreateRenderPass CreateRenderPass;
+ PFN_vkDestroyRenderPass DestroyRenderPass;
+ PFN_vkGetRenderAreaGranularity GetRenderAreaGranularity;
+ PFN_vkCreateCommandPool CreateCommandPool;
+ PFN_vkDestroyCommandPool DestroyCommandPool;
+ PFN_vkResetCommandPool ResetCommandPool;
+ PFN_vkAllocateCommandBuffers AllocateCommandBuffers;
+ PFN_vkFreeCommandBuffers FreeCommandBuffers;
+ PFN_vkBeginCommandBuffer BeginCommandBuffer;
+ PFN_vkEndCommandBuffer EndCommandBuffer;
+ PFN_vkResetCommandBuffer ResetCommandBuffer;
+ PFN_vkCmdBindPipeline CmdBindPipeline;
+ PFN_vkCmdSetViewport CmdSetViewport;
+ PFN_vkCmdSetScissor CmdSetScissor;
+ PFN_vkCmdSetLineWidth CmdSetLineWidth;
+ PFN_vkCmdSetDepthBias CmdSetDepthBias;
+ PFN_vkCmdSetBlendConstants CmdSetBlendConstants;
+ PFN_vkCmdSetDepthBounds CmdSetDepthBounds;
+ PFN_vkCmdSetStencilCompareMask CmdSetStencilCompareMask;
+ PFN_vkCmdSetStencilWriteMask CmdSetStencilWriteMask;
+ PFN_vkCmdSetStencilReference CmdSetStencilReference;
+ PFN_vkCmdBindDescriptorSets CmdBindDescriptorSets;
+ PFN_vkCmdBindIndexBuffer CmdBindIndexBuffer;
+ PFN_vkCmdBindVertexBuffers CmdBindVertexBuffers;
+ PFN_vkCmdDraw CmdDraw;
+ PFN_vkCmdDrawIndexed CmdDrawIndexed;
+ PFN_vkCmdDrawIndirect CmdDrawIndirect;
+ PFN_vkCmdDrawIndexedIndirect CmdDrawIndexedIndirect;
+ PFN_vkCmdDispatch CmdDispatch;
+ PFN_vkCmdDispatchIndirect CmdDispatchIndirect;
+ PFN_vkCmdCopyBuffer CmdCopyBuffer;
+ PFN_vkCmdCopyImage CmdCopyImage;
+ PFN_vkCmdBlitImage CmdBlitImage;
+ PFN_vkCmdCopyBufferToImage CmdCopyBufferToImage;
+ PFN_vkCmdCopyImageToBuffer CmdCopyImageToBuffer;
+ PFN_vkCmdUpdateBuffer CmdUpdateBuffer;
+ PFN_vkCmdFillBuffer CmdFillBuffer;
+ PFN_vkCmdClearColorImage CmdClearColorImage;
+ PFN_vkCmdClearDepthStencilImage CmdClearDepthStencilImage;
+ PFN_vkCmdClearAttachments CmdClearAttachments;
+ PFN_vkCmdResolveImage CmdResolveImage;
+ PFN_vkCmdSetEvent CmdSetEvent;
+ PFN_vkCmdResetEvent CmdResetEvent;
+ PFN_vkCmdWaitEvents CmdWaitEvents;
+ PFN_vkCmdPipelineBarrier CmdPipelineBarrier;
+ PFN_vkCmdBeginQuery CmdBeginQuery;
+ PFN_vkCmdEndQuery CmdEndQuery;
+ PFN_vkCmdResetQueryPool CmdResetQueryPool;
+ PFN_vkCmdWriteTimestamp CmdWriteTimestamp;
+ PFN_vkCmdCopyQueryPoolResults CmdCopyQueryPoolResults;
+ PFN_vkCmdPushConstants CmdPushConstants;
+ PFN_vkCmdBeginRenderPass CmdBeginRenderPass;
+ PFN_vkCmdNextSubpass CmdNextSubpass;
+ PFN_vkCmdEndRenderPass CmdEndRenderPass;
+ PFN_vkCmdExecuteCommands CmdExecuteCommands;
+ PFN_vkCreateSwapchainKHR CreateSwapchainKHR;
+ PFN_vkDestroySwapchainKHR DestroySwapchainKHR;
+ PFN_vkGetSwapchainImagesKHR GetSwapchainImagesKHR;
+ PFN_vkAcquireNextImageKHR AcquireNextImageKHR;
+ PFN_vkQueuePresentKHR QueuePresentKHR;
+ // clang-format on
+};
+
+struct DriverDispatchTable {
+ // clang-format off
+ PFN_vkDestroyInstance DestroyInstance;
+ PFN_vkEnumeratePhysicalDevices EnumeratePhysicalDevices;
+ PFN_vkGetPhysicalDeviceProperties GetPhysicalDeviceProperties;
+ PFN_vkGetPhysicalDeviceQueueFamilyProperties GetPhysicalDeviceQueueFamilyProperties;
+ PFN_vkGetPhysicalDeviceMemoryProperties GetPhysicalDeviceMemoryProperties;
+ PFN_vkGetPhysicalDeviceFeatures GetPhysicalDeviceFeatures;
+ PFN_vkGetPhysicalDeviceFormatProperties GetPhysicalDeviceFormatProperties;
+ PFN_vkGetPhysicalDeviceImageFormatProperties GetPhysicalDeviceImageFormatProperties;
+ PFN_vkCreateDevice CreateDevice;
+ PFN_vkEnumerateDeviceLayerProperties EnumerateDeviceLayerProperties;
+ PFN_vkEnumerateDeviceExtensionProperties EnumerateDeviceExtensionProperties;
+ PFN_vkGetPhysicalDeviceSparseImageFormatProperties GetPhysicalDeviceSparseImageFormatProperties;
+ PFN_vkCreateDebugReportCallbackEXT CreateDebugReportCallbackEXT;
+ PFN_vkDestroyDebugReportCallbackEXT DestroyDebugReportCallbackEXT;
+ PFN_vkDebugReportMessageEXT DebugReportMessageEXT;
+ PFN_vkGetDeviceProcAddr GetDeviceProcAddr;
+ PFN_vkCreateImage CreateImage;
+ PFN_vkDestroyImage DestroyImage;
+ PFN_vkGetSwapchainGrallocUsageANDROID GetSwapchainGrallocUsageANDROID;
+ PFN_vkAcquireImageANDROID AcquireImageANDROID;
+ PFN_vkQueueSignalReleaseImageANDROID QueueSignalReleaseImageANDROID;
+ // clang-format on
+};
+
+} // namespace vulkan
diff --git a/vulkan/libvulkan/layers_extensions.cpp b/vulkan/libvulkan/layers_extensions.cpp
new file mode 100644
index 0000000..287e69b
--- /dev/null
+++ b/vulkan/libvulkan/layers_extensions.cpp
@@ -0,0 +1,438 @@
+/*
+ * Copyright 2016 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+// #define LOG_NDEBUG 0
+
+#include "loader.h"
+#include <alloca.h>
+#include <dirent.h>
+#include <dlfcn.h>
+#include <errno.h>
+#include <string.h>
+#include <sys/prctl.h>
+#include <algorithm>
+#include <mutex>
+#include <string>
+#include <vector>
+#include <log/log.h>
+#include <vulkan/vulkan_loader_data.h>
+
+using namespace vulkan;
+
+// TODO(jessehall): The whole way we deal with extensions is pretty hokey, and
+// not a good long-term solution. Having a hard-coded enum of extensions is
+// bad, of course. Representing sets of extensions (requested, supported, etc.)
+// as a bitset isn't necessarily bad, if the mapping from extension to bit were
+// dynamic. Need to rethink this completely when there's a little more time.
+
+// TODO(jessehall): This file currently builds up global data structures as it
+// loads, and never cleans them up. This means we're doing heap allocations
+// without going through an app-provided allocator, but worse, we'll leak those
+// allocations if the loader is unloaded.
+//
+// We should allocate "enough" BSS space, and suballocate from there. Will
+// probably want to intern strings, etc., and will need some custom/manual data
+// structures.
+
+// TODO(jessehall): Currently we have separate lists for instance and device
+// layers. Most layers are both; we should use one entry for each layer name,
+// with a mask saying what kind(s) it is.
+
+namespace vulkan {
+struct Layer {
+ VkLayerProperties properties;
+ size_t library_idx;
+ std::vector<VkExtensionProperties> extensions;
+};
+} // namespace vulkan
+
+namespace {
+
+std::mutex g_library_mutex;
+struct LayerLibrary {
+ std::string path;
+ void* dlhandle;
+ size_t refcount;
+};
+std::vector<LayerLibrary> g_layer_libraries;
+std::vector<Layer> g_instance_layers;
+std::vector<Layer> g_device_layers;
+
+void AddLayerLibrary(const std::string& path) {
+ ALOGV("examining layer library '%s'", path.c_str());
+
+ void* dlhandle = dlopen(path.c_str(), RTLD_NOW | RTLD_LOCAL);
+ if (!dlhandle) {
+ ALOGW("failed to load layer library '%s': %s", path.c_str(), dlerror());
+ return;
+ }
+
+ PFN_vkEnumerateInstanceLayerProperties enumerate_instance_layers =
+ reinterpret_cast<PFN_vkEnumerateInstanceLayerProperties>(
+ dlsym(dlhandle, "vkEnumerateInstanceLayerProperties"));
+ PFN_vkEnumerateInstanceExtensionProperties enumerate_instance_extensions =
+ reinterpret_cast<PFN_vkEnumerateInstanceExtensionProperties>(
+ dlsym(dlhandle, "vkEnumerateInstanceExtensionProperties"));
+ PFN_vkEnumerateDeviceLayerProperties enumerate_device_layers =
+ reinterpret_cast<PFN_vkEnumerateDeviceLayerProperties>(
+ dlsym(dlhandle, "vkEnumerateDeviceLayerProperties"));
+ PFN_vkEnumerateDeviceExtensionProperties enumerate_device_extensions =
+ reinterpret_cast<PFN_vkEnumerateDeviceExtensionProperties>(
+ dlsym(dlhandle, "vkEnumerateDeviceExtensionProperties"));
+ if (!((enumerate_instance_layers && enumerate_instance_extensions) ||
+ (enumerate_device_layers && enumerate_device_extensions))) {
+ ALOGV(
+ "layer library '%s' has neither instance nor device enumeration "
+ "functions",
+ path.c_str());
+ dlclose(dlhandle);
+ return;
+ }
+
+ VkResult result;
+ uint32_t num_instance_layers = 0;
+ uint32_t num_device_layers = 0;
+ if (enumerate_instance_layers) {
+ result = enumerate_instance_layers(&num_instance_layers, nullptr);
+ if (result != VK_SUCCESS) {
+ ALOGW(
+ "vkEnumerateInstanceLayerProperties failed for library '%s': "
+ "%d",
+ path.c_str(), result);
+ dlclose(dlhandle);
+ return;
+ }
+ }
+ if (enumerate_device_layers) {
+ result = enumerate_device_layers(VK_NULL_HANDLE, &num_device_layers,
+ nullptr);
+ if (result != VK_SUCCESS) {
+ ALOGW(
+ "vkEnumerateDeviceLayerProperties failed for library '%s': %d",
+ path.c_str(), result);
+ dlclose(dlhandle);
+ return;
+ }
+ }
+ VkLayerProperties* properties = static_cast<VkLayerProperties*>(alloca(
+ (num_instance_layers + num_device_layers) * sizeof(VkLayerProperties)));
+ if (num_instance_layers > 0) {
+ result = enumerate_instance_layers(&num_instance_layers, properties);
+ if (result != VK_SUCCESS) {
+ ALOGW(
+ "vkEnumerateInstanceLayerProperties failed for library '%s': "
+ "%d",
+ path.c_str(), result);
+ dlclose(dlhandle);
+ return;
+ }
+ }
+ if (num_device_layers > 0) {
+ result = enumerate_device_layers(VK_NULL_HANDLE, &num_device_layers,
+ properties + num_instance_layers);
+ if (result != VK_SUCCESS) {
+ ALOGW(
+ "vkEnumerateDeviceLayerProperties failed for library '%s': %d",
+ path.c_str(), result);
+ dlclose(dlhandle);
+ return;
+ }
+ }
+
+ size_t library_idx = g_layer_libraries.size();
+ size_t prev_num_instance_layers = g_instance_layers.size();
+ size_t prev_num_device_layers = g_device_layers.size();
+ g_instance_layers.reserve(prev_num_instance_layers + num_instance_layers);
+ g_device_layers.reserve(prev_num_device_layers + num_device_layers);
+ for (size_t i = 0; i < num_instance_layers; i++) {
+ const VkLayerProperties& props = properties[i];
+
+ Layer layer;
+ layer.properties = props;
+ layer.library_idx = library_idx;
+
+ if (enumerate_instance_extensions) {
+ uint32_t count = 0;
+ result =
+ enumerate_instance_extensions(props.layerName, &count, nullptr);
+ if (result != VK_SUCCESS) {
+ ALOGW(
+ "vkEnumerateInstanceExtensionProperties(%s) failed for "
+ "library "
+ "'%s': %d",
+ props.layerName, path.c_str(), result);
+ g_instance_layers.resize(prev_num_instance_layers);
+ dlclose(dlhandle);
+ return;
+ }
+ layer.extensions.resize(count);
+ result = enumerate_instance_extensions(props.layerName, &count,
+ layer.extensions.data());
+ if (result != VK_SUCCESS) {
+ ALOGW(
+ "vkEnumerateInstanceExtensionProperties(%s) failed for "
+ "library "
+ "'%s': %d",
+ props.layerName, path.c_str(), result);
+ g_instance_layers.resize(prev_num_instance_layers);
+ dlclose(dlhandle);
+ return;
+ }
+ }
+
+ g_instance_layers.push_back(layer);
+ ALOGV(" added instance layer '%s'", props.layerName);
+ }
+ for (size_t i = 0; i < num_device_layers; i++) {
+ const VkLayerProperties& props = properties[num_instance_layers + i];
+
+ Layer layer;
+ layer.properties = props;
+ layer.library_idx = library_idx;
+
+ if (enumerate_device_extensions) {
+ uint32_t count;
+ result = enumerate_device_extensions(
+ VK_NULL_HANDLE, props.layerName, &count, nullptr);
+ if (result != VK_SUCCESS) {
+ ALOGW(
+ "vkEnumerateDeviceExtensionProperties(%s) failed for "
+ "library "
+ "'%s': %d",
+ props.layerName, path.c_str(), result);
+ g_instance_layers.resize(prev_num_instance_layers);
+ g_device_layers.resize(prev_num_device_layers);
+ dlclose(dlhandle);
+ return;
+ }
+ layer.extensions.resize(count);
+ result =
+ enumerate_device_extensions(VK_NULL_HANDLE, props.layerName,
+ &count, layer.extensions.data());
+ if (result != VK_SUCCESS) {
+ ALOGW(
+ "vkEnumerateDeviceExtensionProperties(%s) failed for "
+ "library "
+ "'%s': %d",
+ props.layerName, path.c_str(), result);
+ g_instance_layers.resize(prev_num_instance_layers);
+ g_device_layers.resize(prev_num_device_layers);
+ dlclose(dlhandle);
+ return;
+ }
+ }
+
+ g_device_layers.push_back(layer);
+ ALOGV(" added device layer '%s'", props.layerName);
+ }
+
+ dlclose(dlhandle);
+
+ g_layer_libraries.push_back(LayerLibrary{path, nullptr, 0});
+}
+
+void DiscoverLayersInDirectory(const std::string& dir_path) {
+ ALOGV("looking for layers in '%s'", dir_path.c_str());
+
+ DIR* directory = opendir(dir_path.c_str());
+ if (!directory) {
+ int err = errno;
+ ALOGV_IF(err != ENOENT, "failed to open layer directory '%s': %s (%d)",
+ dir_path.c_str(), strerror(err), err);
+ return;
+ }
+
+ std::string path;
+ path.reserve(dir_path.size() + 20);
+ path.append(dir_path);
+ path.append("/");
+
+ struct dirent* entry;
+ while ((entry = readdir(directory))) {
+ size_t libname_len = strlen(entry->d_name);
+ if (libname_len < 13 ||
+ strncmp(entry->d_name, "libVkLayer", 10) != 0 ||
+ strncmp(entry->d_name + libname_len - 3, ".so", 3) != 0)
+ continue;
+ path.append(entry->d_name);
+ AddLayerLibrary(path);
+ path.resize(dir_path.size() + 1);
+ }
+
+ closedir(directory);
+}
+
+void* GetLayerGetProcAddr(const Layer& layer,
+ const char* gpa_name,
+ size_t gpa_name_len) {
+ const LayerLibrary& library = g_layer_libraries[layer.library_idx];
+ void* gpa;
+ size_t layer_name_len = std::max(size_t{2}, strlen(layer.properties.layerName));
+ char* name = static_cast<char*>(alloca(layer_name_len + gpa_name_len + 1));
+ strcpy(name, layer.properties.layerName);
+ strcpy(name + layer_name_len, gpa_name);
+ if (!(gpa = dlsym(library.dlhandle, name))) {
+ strcpy(name, "vk");
+ strcpy(name + 2, gpa_name);
+ gpa = dlsym(library.dlhandle, name);
+ }
+ return gpa;
+}
+
+uint32_t EnumerateLayers(const std::vector<Layer>& layers,
+ uint32_t count,
+ VkLayerProperties* properties) {
+ uint32_t n = std::min(count, static_cast<uint32_t>(layers.size()));
+ for (uint32_t i = 0; i < n; i++) {
+ properties[i] = layers[i].properties;
+ }
+ return static_cast<uint32_t>(layers.size());
+}
+
+void GetLayerExtensions(const std::vector<Layer>& layers,
+ const char* name,
+ const VkExtensionProperties** properties,
+ uint32_t* count) {
+ auto layer =
+ std::find_if(layers.cbegin(), layers.cend(), [=](const Layer& entry) {
+ return strcmp(entry.properties.layerName, name) == 0;
+ });
+ if (layer == layers.cend()) {
+ *properties = nullptr;
+ *count = 0;
+ } else {
+ *properties = layer->extensions.data();
+ *count = static_cast<uint32_t>(layer->extensions.size());
+ }
+}
+
+LayerRef GetLayerRef(std::vector<Layer>& layers, const char* name) {
+ for (uint32_t id = 0; id < layers.size(); id++) {
+ if (strcmp(name, layers[id].properties.layerName) == 0) {
+ LayerLibrary& library = g_layer_libraries[layers[id].library_idx];
+ std::lock_guard<std::mutex> lock(g_library_mutex);
+ if (library.refcount++ == 0) {
+ library.dlhandle =
+ dlopen(library.path.c_str(), RTLD_NOW | RTLD_LOCAL);
+ ALOGV("Opening library %s", library.path.c_str());
+ if (!library.dlhandle) {
+ ALOGE("failed to load layer library '%s': %s",
+ library.path.c_str(), dlerror());
+ library.refcount = 0;
+ return LayerRef(nullptr);
+ }
+ }
+ ALOGV("Refcount on activate is %zu", library.refcount);
+ return LayerRef(&layers[id]);
+ }
+ }
+ return LayerRef(nullptr);
+}
+
+} // anonymous namespace
+
+namespace vulkan {
+
+void DiscoverLayers() {
+ if (prctl(PR_GET_DUMPABLE, 0, 0, 0, 0))
+ DiscoverLayersInDirectory("/data/local/debug/vulkan");
+ if (!LoaderData::GetInstance().layer_path.empty())
+ DiscoverLayersInDirectory(LoaderData::GetInstance().layer_path.c_str());
+}
+
+uint32_t EnumerateInstanceLayers(uint32_t count,
+ VkLayerProperties* properties) {
+ return EnumerateLayers(g_instance_layers, count, properties);
+}
+
+uint32_t EnumerateDeviceLayers(uint32_t count, VkLayerProperties* properties) {
+ return EnumerateLayers(g_device_layers, count, properties);
+}
+
+void GetInstanceLayerExtensions(const char* name,
+ const VkExtensionProperties** properties,
+ uint32_t* count) {
+ GetLayerExtensions(g_instance_layers, name, properties, count);
+}
+
+void GetDeviceLayerExtensions(const char* name,
+ const VkExtensionProperties** properties,
+ uint32_t* count) {
+ GetLayerExtensions(g_device_layers, name, properties, count);
+}
+
+LayerRef GetInstanceLayerRef(const char* name) {
+ return GetLayerRef(g_instance_layers, name);
+}
+
+LayerRef GetDeviceLayerRef(const char* name) {
+ return GetLayerRef(g_device_layers, name);
+}
+
+LayerRef::LayerRef(Layer* layer) : layer_(layer) {}
+
+LayerRef::~LayerRef() {
+ if (layer_) {
+ LayerLibrary& library = g_layer_libraries[layer_->library_idx];
+ std::lock_guard<std::mutex> lock(g_library_mutex);
+ if (--library.refcount == 0) {
+ ALOGV("Closing library %s", library.path.c_str());
+ dlclose(library.dlhandle);
+ library.dlhandle = nullptr;
+ }
+ ALOGV("Refcount on destruction is %zu", library.refcount);
+ }
+}
+
+LayerRef::LayerRef(LayerRef&& other) : layer_(std::move(other.layer_)) {
+ other.layer_ = nullptr;
+}
+
+PFN_vkGetInstanceProcAddr LayerRef::GetGetInstanceProcAddr() const {
+ return layer_ ? reinterpret_cast<PFN_vkGetInstanceProcAddr>(
+ GetLayerGetProcAddr(*layer_, "GetInstanceProcAddr", 19))
+ : nullptr;
+}
+
+PFN_vkGetDeviceProcAddr LayerRef::GetGetDeviceProcAddr() const {
+ return layer_ ? reinterpret_cast<PFN_vkGetDeviceProcAddr>(
+ GetLayerGetProcAddr(*layer_, "GetDeviceProcAddr", 17))
+ : nullptr;
+}
+
+bool LayerRef::SupportsExtension(const char* name) const {
+ return std::find_if(layer_->extensions.cbegin(), layer_->extensions.cend(),
+ [=](const VkExtensionProperties& ext) {
+ return strcmp(ext.extensionName, name) == 0;
+ }) != layer_->extensions.cend();
+}
+
+InstanceExtension InstanceExtensionFromName(const char* name) {
+ if (strcmp(name, VK_KHR_SURFACE_EXTENSION_NAME) == 0)
+ return kKHR_surface;
+ if (strcmp(name, VK_KHR_ANDROID_SURFACE_EXTENSION_NAME) == 0)
+ return kKHR_android_surface;
+ if (strcmp(name, VK_EXT_DEBUG_REPORT_EXTENSION_NAME) == 0)
+ return kEXT_debug_report;
+ return kInstanceExtensionCount;
+}
+
+DeviceExtension DeviceExtensionFromName(const char* name) {
+ if (strcmp(name, VK_KHR_SWAPCHAIN_EXTENSION_NAME) == 0)
+ return kKHR_swapchain;
+ if (strcmp(name, VK_ANDROID_NATIVE_BUFFER_EXTENSION_NAME) == 0)
+ return kANDROID_native_buffer;
+ return kDeviceExtensionCount;
+}
+
+} // namespace vulkan
diff --git a/vulkan/libvulkan/loader.cpp b/vulkan/libvulkan/loader.cpp
new file mode 100644
index 0000000..939f3b9
--- /dev/null
+++ b/vulkan/libvulkan/loader.cpp
@@ -0,0 +1,1333 @@
+/*
+ * Copyright 2015 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+// module header
+#include "loader.h"
+// standard C headers
+#include <alloca.h>
+#include <dirent.h>
+#include <dlfcn.h>
+#include <inttypes.h>
+#include <malloc.h>
+#include <pthread.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/prctl.h>
+// standard C++ headers
+#include <algorithm>
+#include <mutex>
+#include <new>
+#include <sstream>
+#include <string>
+#include <unordered_map>
+#include <vector>
+// platform/library headers
+#include <cutils/properties.h>
+#include <hardware/hwvulkan.h>
+#include <log/log.h>
+#include <vulkan/vulkan_loader_data.h>
+
+// #define ENABLE_ALLOC_CALLSTACKS 1
+#if ENABLE_ALLOC_CALLSTACKS
+#include <utils/CallStack.h>
+#define ALOGD_CALLSTACK(...) \
+ do { \
+ ALOGD(__VA_ARGS__); \
+ android::CallStack callstack; \
+ callstack.update(); \
+ callstack.log(LOG_TAG, ANDROID_LOG_DEBUG, " "); \
+ } while (false)
+#else
+#define ALOGD_CALLSTACK(...) \
+ do { \
+ } while (false)
+#endif
+
+using namespace vulkan;
+
+static const uint32_t kMaxPhysicalDevices = 4;
+
+namespace {
+
+// These definitions are taken from the LunarG Vulkan Loader. They are used to
+// enforce compatibility between the Loader and Layers.
+typedef void* (*PFN_vkGetProcAddr)(void* obj, const char* pName);
+
+typedef struct VkLayerLinkedListElem_ {
+ PFN_vkGetProcAddr get_proc_addr;
+ void* next_element;
+ void* base_object;
+} VkLayerLinkedListElem;
+
+// ----------------------------------------------------------------------------
+
+// Standard-library allocator that delegates to VkAllocationCallbacks.
+//
+// TODO(jessehall): This class currently always uses
+// VK_SYSTEM_ALLOCATION_SCOPE_INSTANCE. The scope to use could be a template
+// parameter or a constructor parameter. The former would help catch bugs
+// where we use the wrong scope, e.g. adding a command-scope string to an
+// instance-scope vector. But that might also be pretty annoying to deal with.
+template <class T>
+class CallbackAllocator {
+ public:
+ typedef T value_type;
+
+ CallbackAllocator(const VkAllocationCallbacks* alloc_input)
+ : alloc(alloc_input) {}
+
+ template <class T2>
+ CallbackAllocator(const CallbackAllocator<T2>& other)
+ : alloc(other.alloc) {}
+
+ T* allocate(std::size_t n) {
+ void* mem =
+ alloc->pfnAllocation(alloc->pUserData, n * sizeof(T), alignof(T),
+ VK_SYSTEM_ALLOCATION_SCOPE_INSTANCE);
+ if (!mem)
+ throw std::bad_alloc();
+ return static_cast<T*>(mem);
+ }
+
+ void deallocate(T* array, std::size_t /*n*/) noexcept {
+ alloc->pfnFree(alloc->pUserData, array);
+ }
+
+ const VkAllocationCallbacks* alloc;
+};
+// Equality operators are required by the Allocator concept (e.g. so that
+// Strings can be moved between containers).
+template <class T>
+bool operator==(const CallbackAllocator<T>& alloc1,
+ const CallbackAllocator<T>& alloc2) {
+ return alloc1.alloc == alloc2.alloc;
+}
+template <class T>
+bool operator!=(const CallbackAllocator<T>& alloc1,
+ const CallbackAllocator<T>& alloc2) {
+ return !(alloc1 == alloc2);
+}
+
+template <class T>
+using Vector = std::vector<T, CallbackAllocator<T>>;
+
+typedef std::basic_string<char, std::char_traits<char>, CallbackAllocator<char>>
+ String;
+
+// ----------------------------------------------------------------------------
+
+VKAPI_ATTR void* DefaultAllocate(void*,
+ size_t size,
+ size_t alignment,
+ VkSystemAllocationScope) {
+ void* ptr = nullptr;
+ // Vulkan requires 'alignment' to be a power of two, but posix_memalign
+ // additionally requires that it be at least sizeof(void*).
+ int ret = posix_memalign(&ptr, std::max(alignment, sizeof(void*)), size);
+ ALOGD_CALLSTACK("Allocate: size=%zu align=%zu => (%d) %p", size, alignment,
+ ret, ptr);
+ return ret == 0 ? ptr : nullptr;
+}
+
+VKAPI_ATTR void* DefaultReallocate(void*,
+ void* ptr,
+ size_t size,
+ size_t alignment,
+ VkSystemAllocationScope) {
+ if (size == 0) {
+ free(ptr);
+ return nullptr;
+ }
+
+ // TODO(jessehall): Right now we never shrink allocations; if the new
+ // request is smaller than the existing chunk, we just continue using it.
+ // Right now the loader never reallocs, so this doesn't matter. If that
+ // changes, or if this code is copied into some other project, this should
+ // probably have a heuristic to allocate-copy-free when doing so will save
+ // "enough" space.
+ size_t old_size = ptr ? malloc_usable_size(ptr) : 0;
+ if (size <= old_size)
+ return ptr;
+
+ void* new_ptr = nullptr;
+ if (posix_memalign(&new_ptr, alignment, size) != 0)
+ return nullptr;
+ if (ptr) {
+ memcpy(new_ptr, ptr, std::min(old_size, size));
+ free(ptr);
+ }
+ return new_ptr;
+}
+
+VKAPI_ATTR void DefaultFree(void*, void* ptr) {
+ ALOGD_CALLSTACK("Free: %p", ptr);
+ free(ptr);
+}
+
+const VkAllocationCallbacks kDefaultAllocCallbacks = {
+ .pUserData = nullptr,
+ .pfnAllocation = DefaultAllocate,
+ .pfnReallocation = DefaultReallocate,
+ .pfnFree = DefaultFree,
+};
+
+// ----------------------------------------------------------------------------
+// Global Data and Initialization
+
+hwvulkan_device_t* g_hwdevice = nullptr;
+InstanceExtensionSet g_driver_instance_extensions;
+
+void LoadVulkanHAL() {
+ static const hwvulkan_module_t* module;
+ int result =
+ hw_get_module("vulkan", reinterpret_cast<const hw_module_t**>(&module));
+ if (result != 0) {
+ ALOGE("failed to load vulkan hal: %s (%d)", strerror(-result), result);
+ return;
+ }
+ result = module->common.methods->open(
+ &module->common, HWVULKAN_DEVICE_0,
+ reinterpret_cast<hw_device_t**>(&g_hwdevice));
+ if (result != 0) {
+ ALOGE("failed to open vulkan driver: %s (%d)", strerror(-result),
+ result);
+ module = nullptr;
+ return;
+ }
+
+ VkResult vkresult;
+ uint32_t count;
+ if ((vkresult = g_hwdevice->EnumerateInstanceExtensionProperties(
+ nullptr, &count, nullptr)) != VK_SUCCESS) {
+ ALOGE("driver EnumerateInstanceExtensionProperties failed: %d",
+ vkresult);
+ g_hwdevice->common.close(&g_hwdevice->common);
+ g_hwdevice = nullptr;
+ module = nullptr;
+ return;
+ }
+ VkExtensionProperties* extensions = static_cast<VkExtensionProperties*>(
+ alloca(count * sizeof(VkExtensionProperties)));
+ if ((vkresult = g_hwdevice->EnumerateInstanceExtensionProperties(
+ nullptr, &count, extensions)) != VK_SUCCESS) {
+ ALOGE("driver EnumerateInstanceExtensionProperties failed: %d",
+ vkresult);
+ g_hwdevice->common.close(&g_hwdevice->common);
+ g_hwdevice = nullptr;
+ module = nullptr;
+ return;
+ }
+ ALOGV_IF(count > 0, "Driver-supported instance extensions:");
+ for (uint32_t i = 0; i < count; i++) {
+ ALOGV(" %s (v%u)", extensions[i].extensionName,
+ extensions[i].specVersion);
+ InstanceExtension id =
+ InstanceExtensionFromName(extensions[i].extensionName);
+ if (id != kInstanceExtensionCount)
+ g_driver_instance_extensions.set(id);
+ }
+ // Ignore driver attempts to support loader extensions
+ g_driver_instance_extensions.reset(kKHR_surface);
+ g_driver_instance_extensions.reset(kKHR_android_surface);
+}
+
+bool EnsureInitialized() {
+ static std::once_flag once_flag;
+ std::call_once(once_flag, []() {
+ LoadVulkanHAL();
+ DiscoverLayers();
+ });
+ return g_hwdevice != nullptr;
+}
+
+// -----------------------------------------------------------------------------
+
+struct Instance {
+ Instance(const VkAllocationCallbacks* alloc_callbacks)
+ : dispatch_ptr(&dispatch),
+ handle(reinterpret_cast<VkInstance>(&dispatch_ptr)),
+ alloc(alloc_callbacks),
+ num_physical_devices(0),
+ active_layers(CallbackAllocator<LayerRef>(alloc)),
+ message(VK_NULL_HANDLE) {
+ memset(&dispatch, 0, sizeof(dispatch));
+ memset(physical_devices, 0, sizeof(physical_devices));
+ drv.instance = VK_NULL_HANDLE;
+ memset(&drv.dispatch, 0, sizeof(drv.dispatch));
+ drv.num_physical_devices = 0;
+ }
+
+ ~Instance() {}
+
+ const InstanceDispatchTable* dispatch_ptr;
+ const VkInstance handle;
+ InstanceDispatchTable dispatch;
+
+ const VkAllocationCallbacks* alloc;
+ uint32_t num_physical_devices;
+ VkPhysicalDevice physical_devices[kMaxPhysicalDevices];
+ DeviceExtensionSet physical_device_driver_extensions[kMaxPhysicalDevices];
+
+ Vector<LayerRef> active_layers;
+ VkDebugReportCallbackEXT message;
+ DebugReportCallbackList debug_report_callbacks;
+
+ struct {
+ VkInstance instance;
+ DriverDispatchTable dispatch;
+ uint32_t num_physical_devices;
+ } drv; // may eventually be an array
+};
+
+struct Device {
+ Device(Instance* instance_)
+ : instance(instance_),
+ active_layers(CallbackAllocator<LayerRef>(instance->alloc)) {
+ memset(&dispatch, 0, sizeof(dispatch));
+ }
+ DeviceDispatchTable dispatch;
+ Instance* instance;
+ PFN_vkGetDeviceProcAddr get_device_proc_addr;
+ Vector<LayerRef> active_layers;
+};
+
+template <typename THandle>
+struct HandleTraits {};
+template <>
+struct HandleTraits<VkInstance> {
+ typedef Instance LoaderObjectType;
+};
+template <>
+struct HandleTraits<VkPhysicalDevice> {
+ typedef Instance LoaderObjectType;
+};
+template <>
+struct HandleTraits<VkDevice> {
+ typedef Device LoaderObjectType;
+};
+template <>
+struct HandleTraits<VkQueue> {
+ typedef Device LoaderObjectType;
+};
+template <>
+struct HandleTraits<VkCommandBuffer> {
+ typedef Device LoaderObjectType;
+};
+
+template <typename THandle>
+typename HandleTraits<THandle>::LoaderObjectType& GetDispatchParent(
+ THandle handle) {
+ // TODO(jessehall): Make Instance and Device POD types (by removing the
+ // non-default constructors), so that offsetof is actually legal to use.
+ // The specific case we're using here is safe in gcc/clang (and probably
+ // most other C++ compilers), but isn't guaranteed by C++.
+ typedef typename HandleTraits<THandle>::LoaderObjectType ObjectType;
+#pragma clang diagnostic push
+#pragma clang diagnostic ignored "-Winvalid-offsetof"
+ const size_t kDispatchOffset = offsetof(ObjectType, dispatch);
+#pragma clang diagnostic pop
+
+ const auto& dispatch = GetDispatchTable(handle);
+ uintptr_t dispatch_addr = reinterpret_cast<uintptr_t>(&dispatch);
+ uintptr_t object_addr = dispatch_addr - kDispatchOffset;
+ return *reinterpret_cast<ObjectType*>(object_addr);
+}
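+
+// Illustrative sketch (assumption, not part of the loader): GetDispatchParent
+// works because every loader object keeps its dispatch table at a fixed
+// member offset, so the table's address can be walked back to the enclosing
+// object. Roughly:
+//
+//   const auto& table = GetDispatchTable(handle);   // &object.dispatch
+//   auto* parent = reinterpret_cast<Instance*>(
+//       reinterpret_cast<uintptr_t>(&table) - offsetof(Instance, dispatch));
+//
+// i.e. the inverse of taking &object.dispatch, valid only while the handle
+// really was minted from an Instance or Device created by this loader.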
+
+// -----------------------------------------------------------------------------
+
+void DestroyDevice(Device* device) {
+ const VkAllocationCallbacks* alloc = device->instance->alloc;
+ device->~Device();
+ alloc->pfnFree(alloc->pUserData, device);
+}
+
+template <class TObject>
+LayerRef GetLayerRef(const char* name);
+template <>
+LayerRef GetLayerRef<Instance>(const char* name) {
+ return GetInstanceLayerRef(name);
+}
+template <>
+LayerRef GetLayerRef<Device>(const char* name) {
+ return GetDeviceLayerRef(name);
+}
+
+template <class TObject>
+bool ActivateLayer(TObject* object, const char* name) {
+ LayerRef layer(GetLayerRef<TObject>(name));
+ if (!layer)
+ return false;
+ if (std::find(object->active_layers.begin(), object->active_layers.end(),
+ layer) == object->active_layers.end()) {
+ try {
+ object->active_layers.push_back(std::move(layer));
+ } catch (std::bad_alloc&) {
+ // TODO(jessehall): We should fail with VK_ERROR_OUT_OF_MEMORY
+ // if we can't enable a requested layer. Callers currently ignore
+ // ActivateLayer's return value.
+ ALOGW("failed to activate layer '%s': out of memory", name);
+ return false;
+ }
+ }
+ ALOGV("activated layer '%s'", name);
+ return true;
+}
+
+struct InstanceNamesPair {
+ Instance* instance;
+ Vector<String>* layer_names;
+};
+
+void SetLayerNamesFromProperty(const char* name,
+ const char* value,
+ void* data) {
+ try {
+ const char prefix[] = "debug.vulkan.layer.";
+ const size_t prefixlen = sizeof(prefix) - 1;
+ if (value[0] == '\0' || strncmp(name, prefix, prefixlen) != 0)
+ return;
+ const char* number_str = name + prefixlen;
+ long layer_number = strtol(number_str, nullptr, 10);
+        if (layer_number <= 0 || layer_number == LONG_MAX) {
+            ALOGW("ignoring invalid layer number %ld parsed from '%s'",
+                  layer_number, number_str);
+            return;
+        }
+ auto instance_names_pair = static_cast<InstanceNamesPair*>(data);
+ Vector<String>* layer_names = instance_names_pair->layer_names;
+ Instance* instance = instance_names_pair->instance;
+ size_t layer_size = static_cast<size_t>(layer_number);
+ if (layer_size > layer_names->size()) {
+ layer_names->resize(
+ layer_size, String(CallbackAllocator<char>(instance->alloc)));
+ }
+ (*layer_names)[layer_size - 1] = value;
+ } catch (std::bad_alloc&) {
+ ALOGW("failed to handle property '%s'='%s': out of memory", name,
+ value);
+ return;
+ }
+}
+
+template <class TInfo, class TObject>
+VkResult ActivateAllLayers(TInfo create_info,
+ Instance* instance,
+ TObject* object) {
+ ALOG_ASSERT(create_info->sType == VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO ||
+ create_info->sType == VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
+ "Cannot activate layers for unknown object %p", object);
+ CallbackAllocator<char> string_allocator(instance->alloc);
+ // Load system layers
+ if (prctl(PR_GET_DUMPABLE, 0, 0, 0, 0)) {
+ char layer_prop[PROPERTY_VALUE_MAX];
+ property_get("debug.vulkan.layers", layer_prop, "");
+ char* strtok_state;
+ char* layer_name = nullptr;
+ while ((layer_name = strtok_r(layer_name ? nullptr : layer_prop, ":",
+ &strtok_state))) {
+ ActivateLayer(object, layer_name);
+ }
+ Vector<String> layer_names(CallbackAllocator<String>(instance->alloc));
+ InstanceNamesPair instance_names_pair = {.instance = instance,
+ .layer_names = &layer_names};
+ property_list(SetLayerNamesFromProperty,
+ static_cast<void*>(&instance_names_pair));
+ for (auto layer_name_element : layer_names) {
+ ActivateLayer(object, layer_name_element.c_str());
+ }
+ }
+ // Load app layers
+ for (uint32_t i = 0; i < create_info->enabledLayerCount; ++i) {
+ if (!ActivateLayer(object, create_info->ppEnabledLayerNames[i])) {
+ ALOGE("requested %s layer '%s' not present",
+ create_info->sType == VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO
+ ? "instance"
+ : "device",
+ create_info->ppEnabledLayerNames[i]);
+ return VK_ERROR_LAYER_NOT_PRESENT;
+ }
+ }
+ return VK_SUCCESS;
+}
+
+template <class TCreateInfo>
+bool AddExtensionToCreateInfo(TCreateInfo& local_create_info,
+ const char* extension_name,
+ const VkAllocationCallbacks* alloc) {
+ for (uint32_t i = 0; i < local_create_info.enabledExtensionCount; ++i) {
+ if (!strcmp(extension_name,
+ local_create_info.ppEnabledExtensionNames[i])) {
+ return false;
+ }
+ }
+ uint32_t extension_count = local_create_info.enabledExtensionCount;
+ local_create_info.enabledExtensionCount++;
+ void* mem = alloc->pfnAllocation(
+ alloc->pUserData,
+ local_create_info.enabledExtensionCount * sizeof(char*), alignof(char*),
+ VK_SYSTEM_ALLOCATION_SCOPE_INSTANCE);
+ if (mem) {
+ const char** enabled_extensions = static_cast<const char**>(mem);
+ for (uint32_t i = 0; i < extension_count; ++i) {
+ enabled_extensions[i] =
+ local_create_info.ppEnabledExtensionNames[i];
+ }
+ enabled_extensions[extension_count] = extension_name;
+ local_create_info.ppEnabledExtensionNames = enabled_extensions;
+ } else {
+ ALOGW("%s extension cannot be enabled: memory allocation failed",
+ extension_name);
+ local_create_info.enabledExtensionCount--;
+ return false;
+ }
+ return true;
+}
+
+template <class T>
+void FreeAllocatedCreateInfo(T& local_create_info,
+ const VkAllocationCallbacks* alloc) {
+ alloc->pfnFree(
+ alloc->pUserData,
+ const_cast<char**>(local_create_info.ppEnabledExtensionNames));
+}
+
+VKAPI_ATTR
+VkBool32 LogDebugMessageCallback(VkDebugReportFlagsEXT flags,
+ VkDebugReportObjectTypeEXT /*objectType*/,
+ uint64_t /*object*/,
+ size_t /*location*/,
+ int32_t message_code,
+ const char* layer_prefix,
+ const char* message,
+ void* /*user_data*/) {
+ if (flags & VK_DEBUG_REPORT_ERROR_BIT_EXT) {
+ ALOGE("[%s] Code %d : %s", layer_prefix, message_code, message);
+ } else if (flags & VK_DEBUG_REPORT_WARN_BIT_EXT) {
+ ALOGW("[%s] Code %d : %s", layer_prefix, message_code, message);
+ }
+ return false;
+}
+
+VkResult Noop() {
+ return VK_SUCCESS;
+}
+
+} // anonymous namespace
+
+namespace vulkan {
+
+// -----------------------------------------------------------------------------
+// "Bottom" functions. These are called at the end of the instance dispatch
+// chain.
+
+VkResult CreateInstance_Bottom(const VkInstanceCreateInfo* create_info,
+ const VkAllocationCallbacks* allocator,
+ VkInstance* vkinstance) {
+ Instance& instance = GetDispatchParent(*vkinstance);
+ VkResult result;
+
+ // Check that all enabled extensions are supported
+ InstanceExtensionSet enabled_extensions;
+ uint32_t num_driver_extensions = 0;
+ for (uint32_t i = 0; i < create_info->enabledExtensionCount; i++) {
+ const char* name = create_info->ppEnabledExtensionNames[i];
+ InstanceExtension id = InstanceExtensionFromName(name);
+ if (id != kInstanceExtensionCount) {
+ if (g_driver_instance_extensions[id]) {
+ num_driver_extensions++;
+ enabled_extensions.set(id);
+ continue;
+ }
+ if (id == kKHR_surface || id == kKHR_android_surface ||
+ id == kEXT_debug_report) {
+ enabled_extensions.set(id);
+ continue;
+ }
+ }
+ bool supported = false;
+        for (const auto& layer : instance.active_layers) {
+            if (layer.SupportsExtension(name)) {
+                supported = true;
+                break;
+            }
+        }
+ if (!supported) {
+ ALOGE(
+ "requested instance extension '%s' not supported by "
+ "loader, driver, or any active layers",
+ name);
+ DestroyInstance_Bottom(instance.handle, allocator);
+ return VK_ERROR_EXTENSION_NOT_PRESENT;
+ }
+ }
+
+ VkInstanceCreateInfo driver_create_info = *create_info;
+ driver_create_info.enabledLayerCount = 0;
+ driver_create_info.ppEnabledLayerNames = nullptr;
+ driver_create_info.enabledExtensionCount = 0;
+ driver_create_info.ppEnabledExtensionNames = nullptr;
+ if (num_driver_extensions > 0) {
+ const char** names = static_cast<const char**>(
+ alloca(num_driver_extensions * sizeof(char*)));
+ for (uint32_t i = 0; i < create_info->enabledExtensionCount; i++) {
+ const char* name = create_info->ppEnabledExtensionNames[i];
+ InstanceExtension id = InstanceExtensionFromName(name);
+ if (id != kInstanceExtensionCount) {
+ if (g_driver_instance_extensions[id]) {
+ names[driver_create_info.enabledExtensionCount++] = name;
+ continue;
+ }
+ }
+ }
+ driver_create_info.ppEnabledExtensionNames = names;
+ ALOG_ASSERT(
+ driver_create_info.enabledExtensionCount == num_driver_extensions,
+ "counted enabled driver instance extensions twice and got "
+ "different answers!");
+ }
+
+ result = g_hwdevice->CreateInstance(&driver_create_info, instance.alloc,
+ &instance.drv.instance);
+ if (result != VK_SUCCESS) {
+ DestroyInstance_Bottom(instance.handle, allocator);
+ return result;
+ }
+
+ hwvulkan_dispatch_t* drv_dispatch =
+ reinterpret_cast<hwvulkan_dispatch_t*>(instance.drv.instance);
+    if (drv_dispatch->magic != HWVULKAN_DISPATCH_MAGIC) {
+        ALOGE("invalid VkInstance dispatch magic: 0x%" PRIxPTR,
+              drv_dispatch->magic);
+        DestroyInstance_Bottom(instance.handle, allocator);
+        return VK_ERROR_INITIALIZATION_FAILED;
+    }
+    // Skip setting drv_dispatch->vtbl, since we never call through it;
+    // we go through instance.drv.dispatch instead.
+
+ if (!LoadDriverDispatchTable(instance.drv.instance,
+ g_hwdevice->GetInstanceProcAddr,
+ enabled_extensions, instance.drv.dispatch)) {
+ DestroyInstance_Bottom(instance.handle, allocator);
+ return VK_ERROR_INITIALIZATION_FAILED;
+ }
+
+ uint32_t num_physical_devices = 0;
+ result = instance.drv.dispatch.EnumeratePhysicalDevices(
+ instance.drv.instance, &num_physical_devices, nullptr);
+ if (result != VK_SUCCESS) {
+ DestroyInstance_Bottom(instance.handle, allocator);
+ return VK_ERROR_INITIALIZATION_FAILED;
+ }
+ num_physical_devices = std::min(num_physical_devices, kMaxPhysicalDevices);
+ result = instance.drv.dispatch.EnumeratePhysicalDevices(
+ instance.drv.instance, &num_physical_devices,
+ instance.physical_devices);
+ if (result != VK_SUCCESS) {
+ DestroyInstance_Bottom(instance.handle, allocator);
+ return VK_ERROR_INITIALIZATION_FAILED;
+ }
+
+ Vector<VkExtensionProperties> extensions(
+ Vector<VkExtensionProperties>::allocator_type(instance.alloc));
+ for (uint32_t i = 0; i < num_physical_devices; i++) {
+ hwvulkan_dispatch_t* pdev_dispatch =
+ reinterpret_cast<hwvulkan_dispatch_t*>(
+ instance.physical_devices[i]);
+ if (pdev_dispatch->magic != HWVULKAN_DISPATCH_MAGIC) {
+ ALOGE("invalid VkPhysicalDevice dispatch magic: 0x%" PRIxPTR,
+ pdev_dispatch->magic);
+ DestroyInstance_Bottom(instance.handle, allocator);
+ return VK_ERROR_INITIALIZATION_FAILED;
+ }
+ pdev_dispatch->vtbl = instance.dispatch_ptr;
+
+ uint32_t count;
+ if ((result = instance.drv.dispatch.EnumerateDeviceExtensionProperties(
+ instance.physical_devices[i], nullptr, &count, nullptr)) !=
+ VK_SUCCESS) {
+ ALOGW("driver EnumerateDeviceExtensionProperties(%u) failed: %d", i,
+ result);
+ continue;
+ }
+ try {
+ extensions.resize(count);
+ } catch (std::bad_alloc&) {
+ ALOGE("instance creation failed: out of memory");
+ DestroyInstance_Bottom(instance.handle, allocator);
+ return VK_ERROR_OUT_OF_HOST_MEMORY;
+ }
+ if ((result = instance.drv.dispatch.EnumerateDeviceExtensionProperties(
+ instance.physical_devices[i], nullptr, &count,
+ extensions.data())) != VK_SUCCESS) {
+ ALOGW("driver EnumerateDeviceExtensionProperties(%u) failed: %d", i,
+ result);
+ continue;
+ }
+ ALOGV_IF(count > 0, "driver gpu[%u] supports extensions:", i);
+ for (const auto& extension : extensions) {
+ ALOGV(" %s (v%u)", extension.extensionName, extension.specVersion);
+ DeviceExtension id =
+ DeviceExtensionFromName(extension.extensionName);
+ if (id == kDeviceExtensionCount) {
+ ALOGW("driver gpu[%u] extension '%s' unknown to loader", i,
+ extension.extensionName);
+ } else {
+ instance.physical_device_driver_extensions[i].set(id);
+ }
+ }
+ // Ignore driver attempts to support loader extensions
+ instance.physical_device_driver_extensions[i].reset(kKHR_swapchain);
+ }
+ instance.drv.num_physical_devices = num_physical_devices;
+ instance.num_physical_devices = instance.drv.num_physical_devices;
+
+ return VK_SUCCESS;
+}
+
+PFN_vkVoidFunction GetInstanceProcAddr_Bottom(VkInstance, const char* name) {
+    return GetLoaderBottomProcAddr(name);
+}
+
+VkResult EnumeratePhysicalDevices_Bottom(VkInstance vkinstance,
+ uint32_t* pdev_count,
+ VkPhysicalDevice* pdevs) {
+ Instance& instance = GetDispatchParent(vkinstance);
+ uint32_t count = instance.num_physical_devices;
+ if (pdevs) {
+ count = std::min(count, *pdev_count);
+ std::copy(instance.physical_devices, instance.physical_devices + count,
+ pdevs);
+ }
+ *pdev_count = count;
+ return VK_SUCCESS;
+}
+
+void GetPhysicalDeviceProperties_Bottom(
+ VkPhysicalDevice pdev,
+ VkPhysicalDeviceProperties* properties) {
+ GetDispatchParent(pdev).drv.dispatch.GetPhysicalDeviceProperties(
+ pdev, properties);
+}
+
+void GetPhysicalDeviceFeatures_Bottom(VkPhysicalDevice pdev,
+ VkPhysicalDeviceFeatures* features) {
+ GetDispatchParent(pdev).drv.dispatch.GetPhysicalDeviceFeatures(pdev,
+ features);
+}
+
+void GetPhysicalDeviceMemoryProperties_Bottom(
+ VkPhysicalDevice pdev,
+ VkPhysicalDeviceMemoryProperties* properties) {
+ GetDispatchParent(pdev).drv.dispatch.GetPhysicalDeviceMemoryProperties(
+ pdev, properties);
+}
+
+void GetPhysicalDeviceQueueFamilyProperties_Bottom(
+ VkPhysicalDevice pdev,
+ uint32_t* pCount,
+ VkQueueFamilyProperties* properties) {
+ GetDispatchParent(pdev).drv.dispatch.GetPhysicalDeviceQueueFamilyProperties(
+ pdev, pCount, properties);
+}
+
+void GetPhysicalDeviceFormatProperties_Bottom(VkPhysicalDevice pdev,
+ VkFormat format,
+ VkFormatProperties* properties) {
+ GetDispatchParent(pdev).drv.dispatch.GetPhysicalDeviceFormatProperties(
+ pdev, format, properties);
+}
+
+VkResult GetPhysicalDeviceImageFormatProperties_Bottom(
+ VkPhysicalDevice pdev,
+ VkFormat format,
+ VkImageType type,
+ VkImageTiling tiling,
+ VkImageUsageFlags usage,
+ VkImageCreateFlags flags,
+ VkImageFormatProperties* properties) {
+ return GetDispatchParent(pdev)
+ .drv.dispatch.GetPhysicalDeviceImageFormatProperties(
+ pdev, format, type, tiling, usage, flags, properties);
+}
+
+void GetPhysicalDeviceSparseImageFormatProperties_Bottom(
+ VkPhysicalDevice pdev,
+ VkFormat format,
+ VkImageType type,
+ VkSampleCountFlagBits samples,
+ VkImageUsageFlags usage,
+ VkImageTiling tiling,
+ uint32_t* properties_count,
+ VkSparseImageFormatProperties* properties) {
+ GetDispatchParent(pdev)
+ .drv.dispatch.GetPhysicalDeviceSparseImageFormatProperties(
+ pdev, format, type, samples, usage, tiling, properties_count,
+ properties);
+}
+
+VKAPI_ATTR
+VkResult EnumerateDeviceExtensionProperties_Bottom(
+ VkPhysicalDevice gpu,
+ const char* layer_name,
+ uint32_t* properties_count,
+ VkExtensionProperties* properties) {
+ const VkExtensionProperties* extensions = nullptr;
+ uint32_t num_extensions = 0;
+ if (layer_name) {
+ GetDeviceLayerExtensions(layer_name, &extensions, &num_extensions);
+ } else {
+ Instance& instance = GetDispatchParent(gpu);
+ size_t gpu_idx = 0;
+ while (instance.physical_devices[gpu_idx] != gpu)
+ gpu_idx++;
+ const DeviceExtensionSet driver_extensions =
+ instance.physical_device_driver_extensions[gpu_idx];
+
+ // We only support VK_KHR_swapchain if the GPU supports
+ // VK_ANDROID_native_buffer
+ VkExtensionProperties* available = static_cast<VkExtensionProperties*>(
+ alloca(kDeviceExtensionCount * sizeof(VkExtensionProperties)));
+ if (driver_extensions[kANDROID_native_buffer]) {
+ available[num_extensions++] = VkExtensionProperties{
+ VK_KHR_SWAPCHAIN_EXTENSION_NAME, VK_KHR_SWAPCHAIN_SPEC_VERSION};
+ }
+
+ // TODO(jessehall): We need to also enumerate extensions supported by
+ // implicitly-enabled layers. Currently we don't have that list of
+ // layers until instance creation.
+ extensions = available;
+ }
+
+ if (!properties || *properties_count > num_extensions)
+ *properties_count = num_extensions;
+ if (properties)
+ std::copy(extensions, extensions + *properties_count, properties);
+ return *properties_count < num_extensions ? VK_INCOMPLETE : VK_SUCCESS;
+}
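+
+// Usage sketch (application side, not loader code): the clamp-and-copy logic
+// above implements the standard Vulkan two-call enumeration idiom:
+//
+//   uint32_t count = 0;
+//   vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, nullptr);
+//   std::vector<VkExtensionProperties> props(count);
+//   VkResult res = vkEnumerateDeviceExtensionProperties(
+//       gpu, nullptr, &count, props.data());
+//   // res is VK_INCOMPLETE only if the caller's buffer was too small;
+//   // count is always clamped to the number of entries actually written.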
+
+VKAPI_ATTR
+VkResult EnumerateDeviceLayerProperties_Bottom(VkPhysicalDevice /*pdev*/,
+ uint32_t* properties_count,
+ VkLayerProperties* properties) {
+ uint32_t layer_count =
+ EnumerateDeviceLayers(properties ? *properties_count : 0, properties);
+ if (!properties || *properties_count > layer_count)
+ *properties_count = layer_count;
+ return *properties_count < layer_count ? VK_INCOMPLETE : VK_SUCCESS;
+}
+
+VKAPI_ATTR
+VkResult CreateDevice_Bottom(VkPhysicalDevice gpu,
+ const VkDeviceCreateInfo* create_info,
+ const VkAllocationCallbacks* allocator,
+ VkDevice* device_out) {
+ Instance& instance = GetDispatchParent(gpu);
+ VkResult result;
+
+ // FIXME(jessehall): We don't have good conventions or infrastructure yet to
+ // do better than just using the instance allocator and scope for
+ // everything. See b/26732122.
+ if (true /*!allocator*/)
+ allocator = instance.alloc;
+
+ void* mem = allocator->pfnAllocation(allocator->pUserData, sizeof(Device),
+ alignof(Device),
+ VK_SYSTEM_ALLOCATION_SCOPE_DEVICE);
+ if (!mem)
+ return VK_ERROR_OUT_OF_HOST_MEMORY;
+ Device* device = new (mem) Device(&instance);
+
+ result = ActivateAllLayers(create_info, &instance, device);
+ if (result != VK_SUCCESS) {
+ DestroyDevice(device);
+ return result;
+ }
+
+    // gpu was returned by EnumeratePhysicalDevices_Bottom, so it is
+    // guaranteed to be present in physical_devices and this search
+    // terminates.
+    size_t gpu_idx = 0;
+    while (instance.physical_devices[gpu_idx] != gpu)
+        gpu_idx++;
+
+ uint32_t num_driver_extensions = 0;
+ const char** driver_extensions = static_cast<const char**>(
+ alloca(create_info->enabledExtensionCount * sizeof(const char*)));
+ for (uint32_t i = 0; i < create_info->enabledExtensionCount; i++) {
+ const char* name = create_info->ppEnabledExtensionNames[i];
+ DeviceExtension id = DeviceExtensionFromName(name);
+ if (id != kDeviceExtensionCount) {
+ if (instance.physical_device_driver_extensions[gpu_idx][id]) {
+ driver_extensions[num_driver_extensions++] = name;
+ continue;
+ }
+ if (id == kKHR_swapchain &&
+ instance.physical_device_driver_extensions
+ [gpu_idx][kANDROID_native_buffer]) {
+ driver_extensions[num_driver_extensions++] =
+ VK_ANDROID_NATIVE_BUFFER_EXTENSION_NAME;
+ continue;
+ }
+ }
+ bool supported = false;
+        for (const auto& layer : device->active_layers) {
+            if (layer.SupportsExtension(name)) {
+                supported = true;
+                break;
+            }
+        }
+ if (!supported) {
+ ALOGE(
+ "requested device extension '%s' not supported by loader, "
+ "driver, or any active layers",
+ name);
+ DestroyDevice(device);
+ return VK_ERROR_EXTENSION_NOT_PRESENT;
+ }
+ }
+
+ VkDeviceCreateInfo driver_create_info = *create_info;
+ driver_create_info.enabledLayerCount = 0;
+ driver_create_info.ppEnabledLayerNames = nullptr;
+ // TODO(jessehall): As soon as we enumerate device extensions supported by
+ // the driver, we need to filter the requested extension list to those
+ // supported by the driver here. Also, add the VK_ANDROID_native_buffer
+ // extension to the list iff the VK_KHR_swapchain extension was requested,
+ // instead of adding it unconditionally like we do now.
+ driver_create_info.enabledExtensionCount = num_driver_extensions;
+ driver_create_info.ppEnabledExtensionNames = driver_extensions;
+
+ VkDevice drv_device;
+ result = instance.drv.dispatch.CreateDevice(gpu, &driver_create_info,
+ allocator, &drv_device);
+ if (result != VK_SUCCESS) {
+ DestroyDevice(device);
+ return result;
+ }
+
+ hwvulkan_dispatch_t* drv_dispatch =
+ reinterpret_cast<hwvulkan_dispatch_t*>(drv_device);
+ if (drv_dispatch->magic != HWVULKAN_DISPATCH_MAGIC) {
+ ALOGE("invalid VkDevice dispatch magic: 0x%" PRIxPTR,
+ drv_dispatch->magic);
+ PFN_vkDestroyDevice destroy_device =
+ reinterpret_cast<PFN_vkDestroyDevice>(
+ instance.drv.dispatch.GetDeviceProcAddr(drv_device,
+ "vkDestroyDevice"));
+ destroy_device(drv_device, allocator);
+ DestroyDevice(device);
+ return VK_ERROR_INITIALIZATION_FAILED;
+ }
+ drv_dispatch->vtbl = &device->dispatch;
+ device->get_device_proc_addr = reinterpret_cast<PFN_vkGetDeviceProcAddr>(
+ instance.drv.dispatch.GetDeviceProcAddr(drv_device,
+ "vkGetDeviceProcAddr"));
+
+ void* base_object = static_cast<void*>(drv_device);
+ void* next_object = base_object;
+ VkLayerLinkedListElem* next_element;
+ PFN_vkGetDeviceProcAddr next_get_proc_addr = GetDeviceProcAddr_Bottom;
+ Vector<VkLayerLinkedListElem> elem_list(
+ CallbackAllocator<VkLayerLinkedListElem>(instance.alloc));
+ try {
+ elem_list.resize(device->active_layers.size());
+ } catch (std::bad_alloc&) {
+ ALOGE("device creation failed: out of memory");
+ PFN_vkDestroyDevice destroy_device =
+ reinterpret_cast<PFN_vkDestroyDevice>(
+ instance.drv.dispatch.GetDeviceProcAddr(drv_device,
+ "vkDestroyDevice"));
+ destroy_device(drv_device, allocator);
+ DestroyDevice(device);
+ return VK_ERROR_OUT_OF_HOST_MEMORY;
+ }
+
+ for (size_t i = elem_list.size(); i > 0; i--) {
+ size_t idx = i - 1;
+ next_element = &elem_list[idx];
+ next_element->get_proc_addr =
+ reinterpret_cast<PFN_vkGetProcAddr>(next_get_proc_addr);
+ next_element->base_object = base_object;
+ next_element->next_element = next_object;
+ next_object = static_cast<void*>(next_element);
+
+ next_get_proc_addr = device->active_layers[idx].GetGetDeviceProcAddr();
+ if (!next_get_proc_addr) {
+ next_object = next_element->next_element;
+ next_get_proc_addr = reinterpret_cast<PFN_vkGetDeviceProcAddr>(
+ next_element->get_proc_addr);
+ }
+ }
+
+ // This is the magic call that initializes all the layer devices and
+ // allows them to create their device_handle -> device_data mapping.
+ next_get_proc_addr(static_cast<VkDevice>(next_object),
+ "vkGetDeviceProcAddr");
+
+ // We must create all the layer devices *before* retrieving the device
+ // procaddrs, so that the layers know which extensions are enabled and
+ // therefore which functions to return procaddrs for.
+ PFN_vkCreateDevice create_device = reinterpret_cast<PFN_vkCreateDevice>(
+ next_get_proc_addr(drv_device, "vkCreateDevice"));
+ create_device(gpu, create_info, allocator, &drv_device);
+
+ if (!LoadDeviceDispatchTable(static_cast<VkDevice>(base_object),
+ next_get_proc_addr, device->dispatch)) {
+ DestroyDevice(device);
+ return VK_ERROR_INITIALIZATION_FAILED;
+ }
+
+ *device_out = drv_device;
+ return VK_SUCCESS;
+}
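+
+// Explanatory note (not loader code): the elem_list loop in
+// CreateDevice_Bottom above threads the active layers into a singly-linked
+// call chain, built innermost-first, so calls flow:
+//
+//   app -> layer[0] -> layer[1] -> ... -> layer[N-1]
+//       -> GetDeviceProcAddr_Bottom -> driver
+//
+// Each VkLayerLinkedListElem is handed to the next layer's
+// vkGetDeviceProcAddr as its "device" handle; resolving "vkGetDeviceProcAddr"
+// once therefore initializes every link in the chain.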
+
+void DestroyInstance_Bottom(VkInstance vkinstance,
+ const VkAllocationCallbacks* allocator) {
+ Instance& instance = GetDispatchParent(vkinstance);
+
+ // These checks allow us to call DestroyInstance_Bottom from any error
+ // path in CreateInstance_Bottom, before the driver instance is fully
+ // initialized.
+ if (instance.drv.instance != VK_NULL_HANDLE &&
+ instance.drv.dispatch.DestroyInstance) {
+ instance.drv.dispatch.DestroyInstance(instance.drv.instance, allocator);
+ }
+ if (instance.message) {
+ PFN_vkDestroyDebugReportCallbackEXT destroy_debug_report_callback;
+ destroy_debug_report_callback =
+ reinterpret_cast<PFN_vkDestroyDebugReportCallbackEXT>(
+ vkGetInstanceProcAddr(vkinstance,
+ "vkDestroyDebugReportCallbackEXT"));
+ destroy_debug_report_callback(vkinstance, instance.message, allocator);
+ }
+ instance.active_layers.clear();
+ const VkAllocationCallbacks* alloc = instance.alloc;
+ instance.~Instance();
+ alloc->pfnFree(alloc->pUserData, &instance);
+}
+
+PFN_vkVoidFunction GetDeviceProcAddr_Bottom(VkDevice vkdevice,
+ const char* name) {
+ if (strcmp(name, "vkCreateDevice") == 0) {
+ // TODO(jessehall): Blegh, having this here is disgusting. The current
+ // layer init process can't call through the instance dispatch table's
+ // vkCreateDevice, because that goes through the instance layers rather
+ // than through the device layers. So we need to be able to get the
+ // vkCreateDevice pointer through the *device* layer chain.
+ //
+ // Because we've already created the driver device before calling
+ // through the layer vkCreateDevice functions, the loader bottom proc
+ // is a no-op.
+ return reinterpret_cast<PFN_vkVoidFunction>(Noop);
+ }
+
+ // VK_ANDROID_native_buffer should be hidden from applications and layers.
+ // TODO(jessehall): Generate this as part of GetLoaderBottomProcAddr.
+ PFN_vkVoidFunction pfn;
+ if (strcmp(name, "vkGetSwapchainGrallocUsageANDROID") == 0 ||
+ strcmp(name, "vkAcquireImageANDROID") == 0 ||
+ strcmp(name, "vkQueueSignalReleaseImageANDROID") == 0) {
+ return nullptr;
+ }
+ if ((pfn = GetLoaderBottomProcAddr(name)))
+ return pfn;
+ return GetDispatchParent(vkdevice).get_device_proc_addr(vkdevice, name);
+}
+
+// -----------------------------------------------------------------------------
+// Loader top functions. These are called directly from the loader entry
+// points or from the application (via vkGetInstanceProcAddr) without going
+// through a dispatch table.
+
+VkResult EnumerateInstanceExtensionProperties_Top(
+ const char* layer_name,
+ uint32_t* properties_count,
+ VkExtensionProperties* properties) {
+ if (!EnsureInitialized())
+ return VK_ERROR_INITIALIZATION_FAILED;
+
+ const VkExtensionProperties* extensions = nullptr;
+ uint32_t num_extensions = 0;
+ if (layer_name) {
+ GetInstanceLayerExtensions(layer_name, &extensions, &num_extensions);
+ } else {
+ VkExtensionProperties* available = static_cast<VkExtensionProperties*>(
+ alloca(kInstanceExtensionCount * sizeof(VkExtensionProperties)));
+ available[num_extensions++] = VkExtensionProperties{
+ VK_KHR_SURFACE_EXTENSION_NAME, VK_KHR_SURFACE_SPEC_VERSION};
+ available[num_extensions++] =
+ VkExtensionProperties{VK_KHR_ANDROID_SURFACE_EXTENSION_NAME,
+ VK_KHR_ANDROID_SURFACE_SPEC_VERSION};
+ if (g_driver_instance_extensions[kEXT_debug_report]) {
+ available[num_extensions++] =
+ VkExtensionProperties{VK_EXT_DEBUG_REPORT_EXTENSION_NAME,
+ VK_EXT_DEBUG_REPORT_SPEC_VERSION};
+ }
+ // TODO(jessehall): We need to also enumerate extensions supported by
+ // implicitly-enabled layers. Currently we don't have that list of
+ // layers until instance creation.
+ extensions = available;
+ }
+
+ if (!properties || *properties_count > num_extensions)
+ *properties_count = num_extensions;
+ if (properties)
+ std::copy(extensions, extensions + *properties_count, properties);
+ return *properties_count < num_extensions ? VK_INCOMPLETE : VK_SUCCESS;
+}
+
+VkResult EnumerateInstanceLayerProperties_Top(uint32_t* properties_count,
+ VkLayerProperties* properties) {
+ if (!EnsureInitialized())
+ return VK_ERROR_INITIALIZATION_FAILED;
+
+ uint32_t layer_count =
+ EnumerateInstanceLayers(properties ? *properties_count : 0, properties);
+ if (!properties || *properties_count > layer_count)
+ *properties_count = layer_count;
+ return *properties_count < layer_count ? VK_INCOMPLETE : VK_SUCCESS;
+}
+
+VkResult CreateInstance_Top(const VkInstanceCreateInfo* create_info,
+ const VkAllocationCallbacks* allocator,
+ VkInstance* instance_out) {
+ VkResult result;
+
+ if (!EnsureInitialized())
+ return VK_ERROR_INITIALIZATION_FAILED;
+
+ if (!allocator)
+ allocator = &kDefaultAllocCallbacks;
+
+ VkInstanceCreateInfo local_create_info = *create_info;
+ create_info = &local_create_info;
+
+ void* instance_mem = allocator->pfnAllocation(
+ allocator->pUserData, sizeof(Instance), alignof(Instance),
+ VK_SYSTEM_ALLOCATION_SCOPE_INSTANCE);
+ if (!instance_mem)
+ return VK_ERROR_OUT_OF_HOST_MEMORY;
+ Instance* instance = new (instance_mem) Instance(allocator);
+
+ result = ActivateAllLayers(create_info, instance, instance);
+ if (result != VK_SUCCESS) {
+ DestroyInstance_Bottom(instance->handle, allocator);
+ return result;
+ }
+
+ void* base_object = static_cast<void*>(instance->handle);
+ void* next_object = base_object;
+ VkLayerLinkedListElem* next_element;
+ PFN_vkGetInstanceProcAddr next_get_proc_addr = GetInstanceProcAddr_Bottom;
+ Vector<VkLayerLinkedListElem> elem_list(
+ CallbackAllocator<VkLayerLinkedListElem>(instance->alloc));
+ try {
+ elem_list.resize(instance->active_layers.size());
+ } catch (std::bad_alloc&) {
+ ALOGE("instance creation failed: out of memory");
+ DestroyInstance_Bottom(instance->handle, allocator);
+ return VK_ERROR_OUT_OF_HOST_MEMORY;
+ }
+
+ for (size_t i = elem_list.size(); i > 0; i--) {
+ size_t idx = i - 1;
+ next_element = &elem_list[idx];
+ next_element->get_proc_addr =
+ reinterpret_cast<PFN_vkGetProcAddr>(next_get_proc_addr);
+ next_element->base_object = base_object;
+ next_element->next_element = next_object;
+ next_object = static_cast<void*>(next_element);
+
+ next_get_proc_addr =
+ instance->active_layers[idx].GetGetInstanceProcAddr();
+ if (!next_get_proc_addr) {
+ next_object = next_element->next_element;
+ next_get_proc_addr = reinterpret_cast<PFN_vkGetInstanceProcAddr>(
+ next_element->get_proc_addr);
+ }
+ }
+
+ // This is the magic call that initializes all the layer instances and
+ // allows them to create their instance_handle -> instance_data mapping.
+ next_get_proc_addr(static_cast<VkInstance>(next_object),
+ "vkGetInstanceProcAddr");
+
+ if (!LoadInstanceDispatchTable(static_cast<VkInstance>(base_object),
+ next_get_proc_addr, instance->dispatch)) {
+ DestroyInstance_Bottom(instance->handle, allocator);
+ return VK_ERROR_INITIALIZATION_FAILED;
+ }
+
+ // Force enable callback extension if required
+ bool enable_callback = false;
+ bool enable_logging = false;
+ if (prctl(PR_GET_DUMPABLE, 0, 0, 0, 0)) {
+ enable_callback =
+ property_get_bool("debug.vulkan.enable_callback", false);
+ enable_logging = enable_callback;
+ if (enable_callback) {
+ enable_callback = AddExtensionToCreateInfo(
+ local_create_info, "VK_EXT_debug_report", instance->alloc);
+ }
+ }
+
+ VkInstance handle = instance->handle;
+ PFN_vkCreateInstance create_instance =
+ reinterpret_cast<PFN_vkCreateInstance>(
+ next_get_proc_addr(instance->handle, "vkCreateInstance"));
+ result = create_instance(create_info, allocator, &handle);
+ if (enable_callback)
+ FreeAllocatedCreateInfo(local_create_info, instance->alloc);
+ if (result >= 0) {
+ *instance_out = instance->handle;
+ } else {
+ // For every layer, including the loader top and bottom layers:
+ // - If a call to the next CreateInstance fails, the layer must clean
+ // up anything it has successfully done so far, and propagate the
+ // error upwards.
+ // - If a layer successfully calls the next layer's CreateInstance, and
+ // afterwards must fail for some reason, it must call the next layer's
+ // DestroyInstance before returning.
+ // - The layer must not call the next layer's DestroyInstance if that
+ // layer's CreateInstance wasn't called, or returned failure.
+
+ // On failure, CreateInstance_Bottom frees the instance struct, so it's
+ // already gone at this point. Nothing to do.
+ }
+
+ if (enable_logging) {
+ const VkDebugReportCallbackCreateInfoEXT callback_create_info = {
+ .sType = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT,
+ .flags =
+ VK_DEBUG_REPORT_ERROR_BIT_EXT | VK_DEBUG_REPORT_WARN_BIT_EXT,
+ .pfnCallback = LogDebugMessageCallback,
+ };
+ PFN_vkCreateDebugReportCallbackEXT create_debug_report_callback =
+ reinterpret_cast<PFN_vkCreateDebugReportCallbackEXT>(
+ GetInstanceProcAddr_Top(instance->handle,
+ "vkCreateDebugReportCallbackEXT"));
+ create_debug_report_callback(instance->handle, &callback_create_info,
+ allocator, &instance->message);
+ }
+
+ return result;
+}
+
+PFN_vkVoidFunction GetInstanceProcAddr_Top(VkInstance vkinstance,
+ const char* name) {
+ // vkGetInstanceProcAddr(NULL_HANDLE, ..) only works for global commands
+ if (!vkinstance)
+ return GetLoaderGlobalProcAddr(name);
+
+ const InstanceDispatchTable& dispatch = GetDispatchTable(vkinstance);
+ PFN_vkVoidFunction pfn;
+ // Always go through the loader-top function if there is one.
+ if ((pfn = GetLoaderTopProcAddr(name)))
+ return pfn;
+ // Otherwise, look up the handler in the instance dispatch table
+ if ((pfn = GetDispatchProcAddr(dispatch, name)))
+ return pfn;
+ // Anything not handled already must be a device-dispatched function
+ // without a loader-top. We must return a function that will dispatch based
+ // on the dispatchable object parameter -- which is exactly what the
+ // exported functions do. So just return them here.
+ return GetLoaderExportProcAddr(name);
+}
+
+void DestroyInstance_Top(VkInstance instance,
+ const VkAllocationCallbacks* allocator) {
+ if (!instance)
+ return;
+ GetDispatchTable(instance).DestroyInstance(instance, allocator);
+}
+
+PFN_vkVoidFunction GetDeviceProcAddr_Top(VkDevice device, const char* name) {
+ PFN_vkVoidFunction pfn;
+ if (!device)
+ return nullptr;
+ if ((pfn = GetLoaderTopProcAddr(name)))
+ return pfn;
+ return GetDispatchProcAddr(GetDispatchTable(device), name);
+}
+
+void GetDeviceQueue_Top(VkDevice vkdevice,
+ uint32_t family,
+ uint32_t index,
+ VkQueue* queue_out) {
+ const auto& table = GetDispatchTable(vkdevice);
+ table.GetDeviceQueue(vkdevice, family, index, queue_out);
+ hwvulkan_dispatch_t* queue_dispatch =
+ reinterpret_cast<hwvulkan_dispatch_t*>(*queue_out);
+ if (queue_dispatch->magic != HWVULKAN_DISPATCH_MAGIC &&
+ queue_dispatch->vtbl != &table)
+ ALOGE("invalid VkQueue dispatch magic: 0x%" PRIxPTR,
+ queue_dispatch->magic);
+ queue_dispatch->vtbl = &table;
+}
+
+VkResult AllocateCommandBuffers_Top(
+ VkDevice vkdevice,
+ const VkCommandBufferAllocateInfo* alloc_info,
+ VkCommandBuffer* cmdbufs) {
+ const auto& table = GetDispatchTable(vkdevice);
+ VkResult result =
+ table.AllocateCommandBuffers(vkdevice, alloc_info, cmdbufs);
+ if (result != VK_SUCCESS)
+ return result;
+ for (uint32_t i = 0; i < alloc_info->commandBufferCount; i++) {
+ hwvulkan_dispatch_t* cmdbuf_dispatch =
+ reinterpret_cast<hwvulkan_dispatch_t*>(cmdbufs[i]);
+ ALOGE_IF(cmdbuf_dispatch->magic != HWVULKAN_DISPATCH_MAGIC,
+ "invalid VkCommandBuffer dispatch magic: 0x%" PRIxPTR,
+ cmdbuf_dispatch->magic);
+ cmdbuf_dispatch->vtbl = &table;
+ }
+ return VK_SUCCESS;
+}
+
+void DestroyDevice_Top(VkDevice vkdevice,
+ const VkAllocationCallbacks* /*allocator*/) {
+ if (!vkdevice)
+ return;
+ Device& device = GetDispatchParent(vkdevice);
+ device.dispatch.DestroyDevice(vkdevice, device.instance->alloc);
+ DestroyDevice(&device);
+}
+
+// -----------------------------------------------------------------------------
+
+const VkAllocationCallbacks* GetAllocator(VkInstance vkinstance) {
+ return GetDispatchParent(vkinstance).alloc;
+}
+
+const VkAllocationCallbacks* GetAllocator(VkDevice vkdevice) {
+ return GetDispatchParent(vkdevice).instance->alloc;
+}
+
+VkInstance GetDriverInstance(VkInstance instance) {
+ return GetDispatchParent(instance).drv.instance;
+}
+
+const DriverDispatchTable& GetDriverDispatch(VkInstance instance) {
+ return GetDispatchParent(instance).drv.dispatch;
+}
+
+const DriverDispatchTable& GetDriverDispatch(VkDevice device) {
+ return GetDispatchParent(device).instance->drv.dispatch;
+}
+
+const DriverDispatchTable& GetDriverDispatch(VkQueue queue) {
+ return GetDispatchParent(queue).instance->drv.dispatch;
+}
+
+DebugReportCallbackList& GetDebugReportCallbacks(VkInstance instance) {
+ return GetDispatchParent(instance).debug_report_callbacks;
+}
+
+} // namespace vulkan
diff --git a/vulkan/libvulkan/loader.h b/vulkan/libvulkan/loader.h
new file mode 100644
index 0000000..3e2d1c4
--- /dev/null
+++ b/vulkan/libvulkan/loader.h
@@ -0,0 +1,182 @@
+/*
+ * Copyright 2015 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef LIBVULKAN_LOADER_H
+#define LIBVULKAN_LOADER_H 1
+
+#include <bitset>
+#include "dispatch_gen.h"
+#include "debug_report.h"
+
+namespace vulkan {
+
+enum InstanceExtension {
+ kKHR_surface,
+ kKHR_android_surface,
+ kEXT_debug_report,
+ kInstanceExtensionCount
+};
+typedef std::bitset<kInstanceExtensionCount> InstanceExtensionSet;
+
+enum DeviceExtension {
+ kKHR_swapchain,
+ kANDROID_native_buffer,
+ kDeviceExtensionCount
+};
+typedef std::bitset<kDeviceExtensionCount> DeviceExtensionSet;
+
+inline const InstanceDispatchTable& GetDispatchTable(VkInstance instance) {
+ return **reinterpret_cast<InstanceDispatchTable**>(instance);
+}
+
+inline const InstanceDispatchTable& GetDispatchTable(
+ VkPhysicalDevice physical_device) {
+ return **reinterpret_cast<InstanceDispatchTable**>(physical_device);
+}
+
+inline const DeviceDispatchTable& GetDispatchTable(VkDevice device) {
+ return **reinterpret_cast<DeviceDispatchTable**>(device);
+}
+
+inline const DeviceDispatchTable& GetDispatchTable(VkQueue queue) {
+ return **reinterpret_cast<DeviceDispatchTable**>(queue);
+}
+
+inline const DeviceDispatchTable& GetDispatchTable(
+ VkCommandBuffer command_buffer) {
+ return **reinterpret_cast<DeviceDispatchTable**>(command_buffer);
+}
+
+// -----------------------------------------------------------------------------
+// dispatch_gen.cpp
+
+PFN_vkVoidFunction GetLoaderExportProcAddr(const char* name);
+PFN_vkVoidFunction GetLoaderGlobalProcAddr(const char* name);
+PFN_vkVoidFunction GetLoaderTopProcAddr(const char* name);
+PFN_vkVoidFunction GetLoaderBottomProcAddr(const char* name);
+PFN_vkVoidFunction GetDispatchProcAddr(const InstanceDispatchTable& dispatch,
+ const char* name);
+PFN_vkVoidFunction GetDispatchProcAddr(const DeviceDispatchTable& dispatch,
+ const char* name);
+bool LoadInstanceDispatchTable(VkInstance instance,
+ PFN_vkGetInstanceProcAddr get_proc_addr,
+ InstanceDispatchTable& dispatch);
+bool LoadDeviceDispatchTable(VkDevice device,
+ PFN_vkGetDeviceProcAddr get_proc_addr,
+ DeviceDispatchTable& dispatch);
+bool LoadDriverDispatchTable(VkInstance instance,
+ PFN_vkGetInstanceProcAddr get_proc_addr,
+ const InstanceExtensionSet& extensions,
+ DriverDispatchTable& dispatch);
+
+// -----------------------------------------------------------------------------
+// loader.cpp
+
+// clang-format off
+VKAPI_ATTR VkResult EnumerateInstanceExtensionProperties_Top(const char* layer_name, uint32_t* count, VkExtensionProperties* properties);
+VKAPI_ATTR VkResult EnumerateInstanceLayerProperties_Top(uint32_t* count, VkLayerProperties* properties);
+VKAPI_ATTR VkResult CreateInstance_Top(const VkInstanceCreateInfo* create_info, const VkAllocationCallbacks* allocator, VkInstance* instance_out);
+VKAPI_ATTR PFN_vkVoidFunction GetInstanceProcAddr_Top(VkInstance instance, const char* name);
+VKAPI_ATTR void DestroyInstance_Top(VkInstance instance, const VkAllocationCallbacks* allocator);
+VKAPI_ATTR PFN_vkVoidFunction GetDeviceProcAddr_Top(VkDevice drv_device, const char* name);
+VKAPI_ATTR void GetDeviceQueue_Top(VkDevice drv_device, uint32_t family, uint32_t index, VkQueue* out_queue);
+VKAPI_ATTR VkResult AllocateCommandBuffers_Top(VkDevice device, const VkCommandBufferAllocateInfo* alloc_info, VkCommandBuffer* cmdbufs);
+VKAPI_ATTR void DestroyDevice_Top(VkDevice drv_device, const VkAllocationCallbacks* allocator);
+
+VKAPI_ATTR VkResult CreateInstance_Bottom(const VkInstanceCreateInfo* create_info, const VkAllocationCallbacks* allocator, VkInstance* vkinstance);
+VKAPI_ATTR PFN_vkVoidFunction GetInstanceProcAddr_Bottom(VkInstance, const char* name);
+VKAPI_ATTR VkResult EnumeratePhysicalDevices_Bottom(VkInstance vkinstance, uint32_t* pdev_count, VkPhysicalDevice* pdevs);
+VKAPI_ATTR void GetPhysicalDeviceProperties_Bottom(VkPhysicalDevice pdev, VkPhysicalDeviceProperties* properties);
+VKAPI_ATTR void GetPhysicalDeviceFeatures_Bottom(VkPhysicalDevice pdev, VkPhysicalDeviceFeatures* features);
+VKAPI_ATTR void GetPhysicalDeviceMemoryProperties_Bottom(VkPhysicalDevice pdev, VkPhysicalDeviceMemoryProperties* properties);
+VKAPI_ATTR void GetPhysicalDeviceQueueFamilyProperties_Bottom(VkPhysicalDevice pdev, uint32_t* properties_count, VkQueueFamilyProperties* properties);
+VKAPI_ATTR void GetPhysicalDeviceFormatProperties_Bottom(VkPhysicalDevice pdev, VkFormat format, VkFormatProperties* properties);
+VKAPI_ATTR VkResult GetPhysicalDeviceImageFormatProperties_Bottom(VkPhysicalDevice pdev, VkFormat format, VkImageType type, VkImageTiling tiling, VkImageUsageFlags usage, VkImageCreateFlags flags, VkImageFormatProperties* properties);
+VKAPI_ATTR void GetPhysicalDeviceSparseImageFormatProperties_Bottom(VkPhysicalDevice pdev, VkFormat format, VkImageType type, VkSampleCountFlagBits samples, VkImageUsageFlags usage, VkImageTiling tiling, uint32_t* properties_count, VkSparseImageFormatProperties* properties);
+VKAPI_ATTR VkResult EnumerateDeviceExtensionProperties_Bottom(VkPhysicalDevice pdev, const char* layer_name, uint32_t* properties_count, VkExtensionProperties* properties);
+VKAPI_ATTR VkResult EnumerateDeviceLayerProperties_Bottom(VkPhysicalDevice pdev, uint32_t* properties_count, VkLayerProperties* properties);
+VKAPI_ATTR VkResult CreateDevice_Bottom(VkPhysicalDevice pdev, const VkDeviceCreateInfo* create_info, const VkAllocationCallbacks* allocator, VkDevice* device_out);
+VKAPI_ATTR void DestroyInstance_Bottom(VkInstance vkinstance, const VkAllocationCallbacks* allocator);
+VKAPI_ATTR PFN_vkVoidFunction GetDeviceProcAddr_Bottom(VkDevice vkdevice, const char* name);
+// clang-format on
+
+const VkAllocationCallbacks* GetAllocator(VkInstance instance);
+const VkAllocationCallbacks* GetAllocator(VkDevice device);
+VkInstance GetDriverInstance(VkInstance instance);
+const DriverDispatchTable& GetDriverDispatch(VkInstance instance);
+const DriverDispatchTable& GetDriverDispatch(VkDevice device);
+const DriverDispatchTable& GetDriverDispatch(VkQueue queue);
+DebugReportCallbackList& GetDebugReportCallbacks(VkInstance instance);
+
+// -----------------------------------------------------------------------------
+// swapchain.cpp
+
+// clang-format off
+VKAPI_ATTR VkResult CreateAndroidSurfaceKHR_Bottom(VkInstance instance, const VkAndroidSurfaceCreateInfoKHR* pCreateInfo, const VkAllocationCallbacks* allocator, VkSurfaceKHR* surface);
+VKAPI_ATTR void DestroySurfaceKHR_Bottom(VkInstance instance, VkSurfaceKHR surface, const VkAllocationCallbacks* allocator);
+VKAPI_ATTR VkResult GetPhysicalDeviceSurfaceSupportKHR_Bottom(VkPhysicalDevice pdev, uint32_t queue_family, VkSurfaceKHR surface, VkBool32* pSupported);
+VKAPI_ATTR VkResult GetPhysicalDeviceSurfaceCapabilitiesKHR_Bottom(VkPhysicalDevice pdev, VkSurfaceKHR surface, VkSurfaceCapabilitiesKHR* capabilities);
+VKAPI_ATTR VkResult GetPhysicalDeviceSurfaceFormatsKHR_Bottom(VkPhysicalDevice pdev, VkSurfaceKHR surface, uint32_t* count, VkSurfaceFormatKHR* formats);
+VKAPI_ATTR VkResult GetPhysicalDeviceSurfacePresentModesKHR_Bottom(VkPhysicalDevice pdev, VkSurfaceKHR surface, uint32_t* count, VkPresentModeKHR* modes);
+VKAPI_ATTR VkResult CreateSwapchainKHR_Bottom(VkDevice device, const VkSwapchainCreateInfoKHR* create_info, const VkAllocationCallbacks* allocator, VkSwapchainKHR* swapchain_handle);
+VKAPI_ATTR void DestroySwapchainKHR_Bottom(VkDevice device, VkSwapchainKHR swapchain_handle, const VkAllocationCallbacks* allocator);
+VKAPI_ATTR VkResult GetSwapchainImagesKHR_Bottom(VkDevice device, VkSwapchainKHR swapchain_handle, uint32_t* count, VkImage* images);
+VKAPI_ATTR VkResult AcquireNextImageKHR_Bottom(VkDevice device, VkSwapchainKHR swapchain_handle, uint64_t timeout, VkSemaphore semaphore, VkFence fence, uint32_t* image_index);
+VKAPI_ATTR VkResult QueuePresentKHR_Bottom(VkQueue queue, const VkPresentInfoKHR* present_info);
+// clang-format on
+
+// -----------------------------------------------------------------------------
+// layers_extensions.cpp
+
+struct Layer;
+class LayerRef {
+ public:
+ LayerRef(Layer* layer);
+ LayerRef(LayerRef&& other);
+ ~LayerRef();
+ LayerRef(const LayerRef&) = delete;
+ LayerRef& operator=(const LayerRef&) = delete;
+
+ // provides bool-like behavior
+ operator const Layer*() const { return layer_; }
+
+ PFN_vkGetInstanceProcAddr GetGetInstanceProcAddr() const;
+ PFN_vkGetDeviceProcAddr GetGetDeviceProcAddr() const;
+
+ bool SupportsExtension(const char* name) const;
+
+ private:
+ Layer* layer_;
+};
+
+void DiscoverLayers();
+uint32_t EnumerateInstanceLayers(uint32_t count, VkLayerProperties* properties);
+uint32_t EnumerateDeviceLayers(uint32_t count, VkLayerProperties* properties);
+void GetInstanceLayerExtensions(const char* name,
+ const VkExtensionProperties** properties,
+ uint32_t* count);
+void GetDeviceLayerExtensions(const char* name,
+ const VkExtensionProperties** properties,
+ uint32_t* count);
+LayerRef GetInstanceLayerRef(const char* name);
+LayerRef GetDeviceLayerRef(const char* name);
+
+InstanceExtension InstanceExtensionFromName(const char* name);
+DeviceExtension DeviceExtensionFromName(const char* name);
+
+} // namespace vulkan
+
+#endif // LIBVULKAN_LOADER_H
diff --git a/vulkan/libvulkan/swapchain.cpp b/vulkan/libvulkan/swapchain.cpp
new file mode 100644
index 0000000..bab5a59
--- /dev/null
+++ b/vulkan/libvulkan/swapchain.cpp
@@ -0,0 +1,706 @@
+/*
+ * Copyright 2015 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <algorithm>
+#include <memory>
+
+#include <gui/BufferQueue.h>
+#include <log/log.h>
+#include <sync/sync.h>
+
+#include "loader.h"
+
+using namespace vulkan;
+
+// TODO(jessehall): Currently we don't have a good error code for when a native
+// window operation fails. Just returning INITIALIZATION_FAILED for now. Later
+// versions (post SDK 0.9) of the API/extension have a better error code.
+// When updating to that version, audit all error returns.
+
+namespace {
+
+// ----------------------------------------------------------------------------
+// These functions/classes form an adaptor that allows objects to be refcounted
+// by both android::sp<> and std::shared_ptr<> simultaneously, and delegates
+// allocation of the shared_ptr<> control structure to VkAllocationCallbacks.
+// The platform holds a reference to the ANativeWindow using its embedded
+// reference count, and the ANativeWindow implementation holds references
+// to the ANativeWindowBuffers using their embedded reference counts, so
+// the shared_ptr *must* cooperate with these and hold at least one
+// reference to the object using the embedded reference count.
+
+template <typename T>
+struct NativeBaseDeleter {
+ void operator()(T* obj) { obj->common.decRef(&obj->common); }
+};
+
+template <typename Host>
+struct AllocScope {};
+
+template <>
+struct AllocScope<VkInstance> {
+ static const VkSystemAllocationScope kScope =
+ VK_SYSTEM_ALLOCATION_SCOPE_INSTANCE;
+};
+
+template <>
+struct AllocScope<VkDevice> {
+ static const VkSystemAllocationScope kScope =
+ VK_SYSTEM_ALLOCATION_SCOPE_DEVICE;
+};
+
+template <typename T>
+class VulkanAllocator {
+ public:
+ typedef T value_type;
+
+ VulkanAllocator(const VkAllocationCallbacks& allocator,
+ VkSystemAllocationScope scope)
+ : allocator_(allocator), scope_(scope) {}
+
+ template <typename U>
+ explicit VulkanAllocator(const VulkanAllocator<U>& other)
+ : allocator_(other.allocator_), scope_(other.scope_) {}
+
+ T* allocate(size_t n) const {
+ T* p = static_cast<T*>(allocator_.pfnAllocation(
+ allocator_.pUserData, n * sizeof(T), alignof(T), scope_));
+ if (!p)
+ throw std::bad_alloc();
+ return p;
+ }
+ void deallocate(T* p, size_t) const noexcept {
+ return allocator_.pfnFree(allocator_.pUserData, p);
+ }
+
+ private:
+ template <typename U>
+ friend class VulkanAllocator;
+ const VkAllocationCallbacks& allocator_;
+ const VkSystemAllocationScope scope_;
+};
+
+template <typename T, typename Host>
+std::shared_ptr<T> InitSharedPtr(Host host, T* obj) {
+ try {
+ obj->common.incRef(&obj->common);
+ return std::shared_ptr<T>(
+ obj, NativeBaseDeleter<T>(),
+ VulkanAllocator<T>(*GetAllocator(host), AllocScope<Host>::kScope));
+ } catch (std::bad_alloc&) {
+ obj->common.decRef(&obj->common);
+ return nullptr;
+ }
+}
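The dual-refcount adaptor that `NativeBaseDeleter`/`InitSharedPtr` implement can be sketched without the Android types. In this sketch `FakeNativeObject` is a hypothetical stand-in for an object with an embedded reference count (the real code manipulates it through `obj->common.incRef/decRef`), and the control-block allocation through `VulkanAllocator` is omitted for brevity:

```cpp
#include <cassert>
#include <memory>

// Hypothetical object with an embedded reference count, standing in for
// ANativeWindow/ANativeWindowBuffer in this sketch.
struct FakeNativeObject {
    int refcount = 0;
    void incRef() { ++refcount; }
    void decRef() { --refcount; }
};

// Mirrors NativeBaseDeleter<T>: instead of deleting, the shared_ptr's
// deleter drops the embedded reference it was given at construction.
struct FakeDeleter {
    void operator()(FakeNativeObject* obj) { obj->decRef(); }
};

// Mirrors InitSharedPtr: take one embedded reference, then hand ownership
// of that reference to the shared_ptr. All shared_ptr copies collectively
// own that single embedded reference.
std::shared_ptr<FakeNativeObject> WrapShared(FakeNativeObject* obj) {
    obj->incRef();
    return std::shared_ptr<FakeNativeObject>(obj, FakeDeleter());
}
```

The key property is that the embedded count never drops to zero while any `shared_ptr` copy is alive, so the platform-side `android::sp<>` references and the loader's `std::shared_ptr` references stay consistent.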
+
+// ----------------------------------------------------------------------------
+
+struct Surface {
+ std::shared_ptr<ANativeWindow> window;
+};
+
+VkSurfaceKHR HandleFromSurface(Surface* surface) {
+ return VkSurfaceKHR(reinterpret_cast<uint64_t>(surface));
+}
+
+Surface* SurfaceFromHandle(VkSurfaceKHR handle) {
+ return reinterpret_cast<Surface*>(handle);
+}
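Non-dispatchable Vulkan handles such as `VkSurfaceKHR` are 64-bit values, so the loader can round-trip a host pointer through the handle as the pair of functions above does. A minimal sketch of the pattern, using `uint64_t` standing in for the handle type (this assumes a 64-bit pointer size, as on Android's LP64 targets):

```cpp
#include <cassert>
#include <cstdint>

struct Surface { int dummy; };
typedef uint64_t SurfaceHandle;  // stand-in for a non-dispatchable handle

// Encode the pointer into the opaque 64-bit handle...
SurfaceHandle HandleFromSurface(Surface* surface) {
    return reinterpret_cast<uint64_t>(surface);
}

// ...and recover the original pointer from it.
Surface* SurfaceFromHandle(SurfaceHandle handle) {
    return reinterpret_cast<Surface*>(handle);
}
```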
+
+struct Swapchain {
+ Swapchain(Surface& surface_, uint32_t num_images_)
+ : surface(surface_), num_images(num_images_) {}
+
+ Surface& surface;
+ uint32_t num_images;
+
+ struct Image {
+ Image() : image(VK_NULL_HANDLE), dequeue_fence(-1), dequeued(false) {}
+ VkImage image;
+ std::shared_ptr<ANativeWindowBuffer> buffer;
+ // The fence is only valid when the buffer is dequeued, and should be
+ // -1 any other time. When valid, we own the fd, and must ensure it is
+ // closed: either by closing it explicitly when queueing the buffer,
+ // or by passing ownership e.g. to ANativeWindow::cancelBuffer().
+ int dequeue_fence;
+ bool dequeued;
+ } images[android::BufferQueue::NUM_BUFFER_SLOTS];
+};
+
+VkSwapchainKHR HandleFromSwapchain(Swapchain* swapchain) {
+ return VkSwapchainKHR(reinterpret_cast<uint64_t>(swapchain));
+}
+
+Swapchain* SwapchainFromHandle(VkSwapchainKHR handle) {
+ return reinterpret_cast<Swapchain*>(handle);
+}
+
+} // anonymous namespace
+
+namespace vulkan {
+
+VKAPI_ATTR
+VkResult CreateAndroidSurfaceKHR_Bottom(
+ VkInstance instance,
+ const VkAndroidSurfaceCreateInfoKHR* pCreateInfo,
+ const VkAllocationCallbacks* allocator,
+ VkSurfaceKHR* out_surface) {
+ if (!allocator)
+ allocator = GetAllocator(instance);
+ void* mem = allocator->pfnAllocation(allocator->pUserData, sizeof(Surface),
+ alignof(Surface),
+ VK_SYSTEM_ALLOCATION_SCOPE_OBJECT);
+ if (!mem)
+ return VK_ERROR_OUT_OF_HOST_MEMORY;
+ Surface* surface = new (mem) Surface;
+
+ surface->window = InitSharedPtr(instance, pCreateInfo->window);
+ if (!surface->window) {
+ ALOGE("surface creation failed: out of memory");
+ surface->~Surface();
+ allocator->pfnFree(allocator->pUserData, surface);
+ return VK_ERROR_OUT_OF_HOST_MEMORY;
+ }
+
+ // TODO(jessehall): Create and use NATIVE_WINDOW_API_VULKAN.
+ int err =
+ native_window_api_connect(surface->window.get(), NATIVE_WINDOW_API_EGL);
+ if (err != 0) {
+ // TODO(jessehall): Improve error reporting. Can we enumerate possible
+ // errors and translate them to valid Vulkan result codes?
+ ALOGE("native_window_api_connect() failed: %s (%d)", strerror(-err),
+ err);
+ surface->~Surface();
+ allocator->pfnFree(allocator->pUserData, surface);
+ return VK_ERROR_INITIALIZATION_FAILED;
+ }
+
+ *out_surface = HandleFromSurface(surface);
+ return VK_SUCCESS;
+}
+
+VKAPI_ATTR
+void DestroySurfaceKHR_Bottom(VkInstance instance,
+ VkSurfaceKHR surface_handle,
+ const VkAllocationCallbacks* allocator) {
+ Surface* surface = SurfaceFromHandle(surface_handle);
+ if (!surface)
+ return;
+ native_window_api_disconnect(surface->window.get(), NATIVE_WINDOW_API_EGL);
+ surface->~Surface();
+ if (!allocator)
+ allocator = GetAllocator(instance);
+ allocator->pfnFree(allocator->pUserData, surface);
+}
+
+VKAPI_ATTR
+VkResult GetPhysicalDeviceSurfaceSupportKHR_Bottom(VkPhysicalDevice /*pdev*/,
+ uint32_t /*queue_family*/,
+ VkSurfaceKHR /*surface*/,
+ VkBool32* supported) {
+ *supported = VK_TRUE;
+ return VK_SUCCESS;
+}
+
+VKAPI_ATTR
+VkResult GetPhysicalDeviceSurfaceCapabilitiesKHR_Bottom(
+ VkPhysicalDevice /*pdev*/,
+ VkSurfaceKHR surface,
+ VkSurfaceCapabilitiesKHR* capabilities) {
+ int err;
+ ANativeWindow* window = SurfaceFromHandle(surface)->window.get();
+
+ int width, height;
+ err = window->query(window, NATIVE_WINDOW_DEFAULT_WIDTH, &width);
+ if (err != 0) {
+ ALOGE("NATIVE_WINDOW_DEFAULT_WIDTH query failed: %s (%d)",
+ strerror(-err), err);
+ return VK_ERROR_INITIALIZATION_FAILED;
+ }
+ err = window->query(window, NATIVE_WINDOW_DEFAULT_HEIGHT, &height);
+ if (err != 0) {
+ ALOGE("NATIVE_WINDOW_DEFAULT_HEIGHT query failed: %s (%d)",
+ strerror(-err), err);
+ return VK_ERROR_INITIALIZATION_FAILED;
+ }
+
+ capabilities->currentExtent =
+ VkExtent2D{static_cast<uint32_t>(width), static_cast<uint32_t>(height)};
+
+ // TODO(jessehall): Figure out what the min/max values should be.
+ capabilities->minImageCount = 2;
+ capabilities->maxImageCount = 3;
+
+ // TODO(jessehall): Figure out what the max extent should be. Maximum
+ // texture dimension maybe?
+ capabilities->minImageExtent = VkExtent2D{1, 1};
+ capabilities->maxImageExtent = VkExtent2D{4096, 4096};
+
+ // TODO(jessehall): We can support all transforms, fix this once
+ // implemented.
+ capabilities->supportedTransforms = VK_SURFACE_TRANSFORM_IDENTITY_BIT_KHR;
+
+ // TODO(jessehall): Implement based on NATIVE_WINDOW_TRANSFORM_HINT.
+ capabilities->currentTransform = VK_SURFACE_TRANSFORM_IDENTITY_BIT_KHR;
+
+ capabilities->maxImageArrayLayers = 1;
+
+ // TODO(jessehall): I think these are right, but haven't thought hard about
+ // it. Do we need to query the driver for support of any of these?
+ // Currently not included:
+ // - VK_IMAGE_USAGE_GENERAL: maybe? does this imply cpu mappable?
+ // - VK_IMAGE_USAGE_DEPTH_STENCIL_BIT: definitely not
+ // - VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT: definitely not
+ capabilities->supportedUsageFlags =
+ VK_IMAGE_USAGE_TRANSFER_SRC_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT |
+ VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_STORAGE_BIT |
+ VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT |
+ VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT;
+
+ return VK_SUCCESS;
+}
+
+VKAPI_ATTR
+VkResult GetPhysicalDeviceSurfaceFormatsKHR_Bottom(
+ VkPhysicalDevice /*pdev*/,
+ VkSurfaceKHR /*surface*/,
+ uint32_t* count,
+ VkSurfaceFormatKHR* formats) {
+ // TODO(jessehall): Fill out the set of supported formats. Longer term, add
+ // a new gralloc method to query whether a (format, usage) pair is
+ // supported, and check that for each gralloc format that corresponds to a
+ // Vulkan format. Shorter term, just add a few more formats to the ones
+ // hardcoded below.
+
+ const VkSurfaceFormatKHR kFormats[] = {
+ {VK_FORMAT_R8G8B8A8_UNORM, VK_COLORSPACE_SRGB_NONLINEAR_KHR},
+ {VK_FORMAT_R8G8B8A8_SRGB, VK_COLORSPACE_SRGB_NONLINEAR_KHR},
+ };
+ const uint32_t kNumFormats = sizeof(kFormats) / sizeof(kFormats[0]);
+
+ VkResult result = VK_SUCCESS;
+ if (formats) {
+ if (*count < kNumFormats)
+ result = VK_INCOMPLETE;
+ std::copy(kFormats, kFormats + std::min(*count, kNumFormats), formats);
+ }
+ *count = kNumFormats;
+ return result;
+}
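The function above follows Vulkan's two-call enumeration idiom: called with a null output array it only reports the count; called with a short array it copies what fits and returns `VK_INCOMPLETE`. A self-contained sketch of the same control flow, with simplified stand-in types (`Result` and plain `int` formats replace `VkResult` and `VkSurfaceFormatKHR`):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

enum Result { SUCCESS = 0, INCOMPLETE = 5 };  // stand-ins for VkResult values

// Mirrors the two-call idiom: *count is in/out, formats may be null.
Result EnumerateFormats(uint32_t* count, int* formats) {
    static const int kFormats[] = {37, 43};  // illustrative format IDs
    const uint32_t kNumFormats = sizeof(kFormats) / sizeof(kFormats[0]);
    Result result = SUCCESS;
    if (formats) {
        if (*count < kNumFormats)
            result = INCOMPLETE;  // caller's buffer was too small
        std::copy(kFormats, kFormats + std::min(*count, kNumFormats), formats);
    }
    *count = kNumFormats;
    return result;
}
```

A caller typically invokes it twice: once with a null array to learn the count, then again with an array of that size.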
+
+VKAPI_ATTR
+VkResult GetPhysicalDeviceSurfacePresentModesKHR_Bottom(
+ VkPhysicalDevice /*pdev*/,
+ VkSurfaceKHR /*surface*/,
+ uint32_t* count,
+ VkPresentModeKHR* modes) {
+ const VkPresentModeKHR kModes[] = {
+ VK_PRESENT_MODE_MAILBOX_KHR, VK_PRESENT_MODE_FIFO_KHR,
+ };
+ const uint32_t kNumModes = sizeof(kModes) / sizeof(kModes[0]);
+
+ VkResult result = VK_SUCCESS;
+ if (modes) {
+ if (*count < kNumModes)
+ result = VK_INCOMPLETE;
+ std::copy(kModes, kModes + std::min(*count, kNumModes), modes);
+ }
+ *count = kNumModes;
+ return result;
+}
+
+VKAPI_ATTR
+VkResult CreateSwapchainKHR_Bottom(VkDevice device,
+ const VkSwapchainCreateInfoKHR* create_info,
+ const VkAllocationCallbacks* allocator,
+ VkSwapchainKHR* swapchain_handle) {
+ int err;
+ VkResult result = VK_SUCCESS;
+
+ if (!allocator)
+ allocator = GetAllocator(device);
+
+ ALOGE_IF(create_info->imageArrayLayers != 1,
+ "Swapchain imageArrayLayers (%u) != 1 not supported",
+ create_info->imageArrayLayers);
+
+ ALOGE_IF(create_info->imageFormat != VK_FORMAT_R8G8B8A8_UNORM,
+ "swapchain formats other than R8G8B8A8_UNORM not yet implemented");
+ ALOGE_IF(create_info->imageColorSpace != VK_COLORSPACE_SRGB_NONLINEAR_KHR,
+ "color spaces other than SRGB_NONLINEAR not yet implemented");
+ ALOGE_IF(create_info->oldSwapchain,
+ "swapchain re-creation not yet implemented");
+ ALOGE_IF(create_info->preTransform != VK_SURFACE_TRANSFORM_IDENTITY_BIT_KHR,
+ "swapchain preTransform not yet implemented");
+ ALOGE_IF(create_info->presentMode != VK_PRESENT_MODE_FIFO_KHR,
+ "present modes other than FIFO are not yet implemented");
+
+ // -- Configure the native window --
+
+ Surface& surface = *SurfaceFromHandle(create_info->surface);
+ const DriverDispatchTable& dispatch = GetDriverDispatch(device);
+
+ err = native_window_set_buffers_dimensions(
+ surface.window.get(), static_cast<int>(create_info->imageExtent.width),
+ static_cast<int>(create_info->imageExtent.height));
+ if (err != 0) {
+ // TODO(jessehall): Improve error reporting. Can we enumerate possible
+ // errors and translate them to valid Vulkan result codes?
+ ALOGE("native_window_set_buffers_dimensions(%d,%d) failed: %s (%d)",
+ create_info->imageExtent.width, create_info->imageExtent.height,
+ strerror(-err), err);
+ return VK_ERROR_INITIALIZATION_FAILED;
+ }
+
+ err = native_window_set_scaling_mode(
+ surface.window.get(), NATIVE_WINDOW_SCALING_MODE_SCALE_TO_WINDOW);
+ if (err != 0) {
+ // TODO(jessehall): Improve error reporting. Can we enumerate possible
+ // errors and translate them to valid Vulkan result codes?
+ ALOGE("native_window_set_scaling_mode(SCALE_TO_WINDOW) failed: %s (%d)",
+ strerror(-err), err);
+ return VK_ERROR_INITIALIZATION_FAILED;
+ }
+
+ uint32_t min_undequeued_buffers;
+ err = surface.window->query(
+ surface.window.get(), NATIVE_WINDOW_MIN_UNDEQUEUED_BUFFERS,
+ reinterpret_cast<int*>(&min_undequeued_buffers));
+ if (err != 0) {
+ // TODO(jessehall): Improve error reporting. Can we enumerate possible
+ // errors and translate them to valid Vulkan result codes?
+ ALOGE("NATIVE_WINDOW_MIN_UNDEQUEUED_BUFFERS query failed: %s (%d)",
+ strerror(-err), err);
+ return VK_ERROR_INITIALIZATION_FAILED;
+ }
+ uint32_t num_images =
+ (create_info->minImageCount - 1) + min_undequeued_buffers;
+ err = native_window_set_buffer_count(surface.window.get(), num_images);
+ if (err != 0) {
+ // TODO(jessehall): Improve error reporting. Can we enumerate possible
+ // errors and translate them to valid Vulkan result codes?
+ ALOGE("native_window_set_buffer_count failed: %s (%d)", strerror(-err),
+ err);
+ return VK_ERROR_INITIALIZATION_FAILED;
+ }
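The buffer-count arithmetic above deserves a note: the application may hold up to `minImageCount - 1` images dequeued at once while BufferQueue keeps `min_undequeued_buffers` unavailable for dequeueing, so the window needs the sum of the two. A trivial sketch of the computation:

```cpp
#include <cassert>
#include <cstdint>

// Total native buffers needed so the app can hold (min_image_count - 1)
// dequeued images while the consumer reserves min_undequeued_buffers.
uint32_t SwapchainBufferCount(uint32_t min_image_count,
                              uint32_t min_undequeued_buffers) {
    return (min_image_count - 1) + min_undequeued_buffers;
}
```

For example, a request for triple buffering (`minImageCount = 3`) on a queue that reserves two undequeued buffers yields four buffers in total.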
+
+ int gralloc_usage = 0;
+ // TODO(jessehall): Remove conditional once all drivers have been updated
+ if (dispatch.GetSwapchainGrallocUsageANDROID) {
+ result = dispatch.GetSwapchainGrallocUsageANDROID(
+ device, create_info->imageFormat, create_info->imageUsage,
+ &gralloc_usage);
+ if (result != VK_SUCCESS) {
+ ALOGE("vkGetSwapchainGrallocUsageANDROID failed: %d", result);
+ return VK_ERROR_INITIALIZATION_FAILED;
+ }
+ } else {
+ gralloc_usage = GRALLOC_USAGE_HW_RENDER | GRALLOC_USAGE_HW_TEXTURE;
+ }
+ err = native_window_set_usage(surface.window.get(), gralloc_usage);
+ if (err != 0) {
+ // TODO(jessehall): Improve error reporting. Can we enumerate possible
+ // errors and translate them to valid Vulkan result codes?
+ ALOGE("native_window_set_usage failed: %s (%d)", strerror(-err), err);
+ return VK_ERROR_INITIALIZATION_FAILED;
+ }
+
+ // -- Allocate our Swapchain object --
+ // After this point, we must deallocate the swapchain on error.
+
+ void* mem = allocator->pfnAllocation(allocator->pUserData,
+ sizeof(Swapchain), alignof(Swapchain),
+ VK_SYSTEM_ALLOCATION_SCOPE_OBJECT);
+ if (!mem)
+ return VK_ERROR_OUT_OF_HOST_MEMORY;
+ Swapchain* swapchain = new (mem) Swapchain(surface, num_images);
+
+ // -- Dequeue all buffers and create a VkImage for each --
+ // Any failures during or after this must cancel the dequeued buffers.
+
+ VkNativeBufferANDROID image_native_buffer = {
+#pragma clang diagnostic push
+#pragma clang diagnostic ignored "-Wold-style-cast"
+ .sType = VK_STRUCTURE_TYPE_NATIVE_BUFFER_ANDROID,
+#pragma clang diagnostic pop
+ .pNext = nullptr,
+ };
+ VkImageCreateInfo image_create = {
+ .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
+ .pNext = &image_native_buffer,
+ .imageType = VK_IMAGE_TYPE_2D,
+ .format = VK_FORMAT_R8G8B8A8_UNORM, // TODO(jessehall)
+ .extent = {0, 0, 1},
+ .mipLevels = 1,
+ .arrayLayers = 1,
+ .samples = VK_SAMPLE_COUNT_1_BIT,
+ .tiling = VK_IMAGE_TILING_OPTIMAL,
+ .usage = create_info->imageUsage,
+ .flags = 0,
+ .sharingMode = create_info->imageSharingMode,
+ .queueFamilyIndexCount = create_info->queueFamilyIndexCount,
+ .pQueueFamilyIndices = create_info->pQueueFamilyIndices,
+ };
+
+ for (uint32_t i = 0; i < num_images; i++) {
+ Swapchain::Image& img = swapchain->images[i];
+
+ ANativeWindowBuffer* buffer;
+ err = surface.window->dequeueBuffer(surface.window.get(), &buffer,
+ &img.dequeue_fence);
+ if (err != 0) {
+ // TODO(jessehall): Improve error reporting. Can we enumerate
+ // possible errors and translate them to valid Vulkan result codes?
+ ALOGE("dequeueBuffer[%u] failed: %s (%d)", i, strerror(-err), err);
+ result = VK_ERROR_INITIALIZATION_FAILED;
+ break;
+ }
+ img.buffer = InitSharedPtr(device, buffer);
+ if (!img.buffer) {
+ ALOGE("swapchain creation failed: out of memory");
+ surface.window->cancelBuffer(surface.window.get(), buffer,
+ img.dequeue_fence);
+ result = VK_ERROR_OUT_OF_HOST_MEMORY;
+ break;
+ }
+ img.dequeued = true;
+
+ image_create.extent =
+ VkExtent3D{static_cast<uint32_t>(img.buffer->width),
+ static_cast<uint32_t>(img.buffer->height),
+ 1};
+ image_native_buffer.handle = img.buffer->handle;
+ image_native_buffer.stride = img.buffer->stride;
+ image_native_buffer.format = img.buffer->format;
+ image_native_buffer.usage = img.buffer->usage;
+
+ result =
+ dispatch.CreateImage(device, &image_create, nullptr, &img.image);
+ if (result != VK_SUCCESS) {
+ ALOGE("vkCreateImage w/ native buffer failed: %d", result);
+ break;
+ }
+ }
+
+ // -- Cancel all buffers, returning them to the queue --
+ // If an error occurred before, also destroy the VkImage and release the
+ // buffer reference. Otherwise, we retain a strong reference to the buffer.
+ //
+ // TODO(jessehall): The error path here is the same as DestroySwapchain,
+ // but not the non-error path. Should refactor/unify.
+ for (uint32_t i = 0; i < num_images; i++) {
+ Swapchain::Image& img = swapchain->images[i];
+ if (img.dequeued) {
+ surface.window->cancelBuffer(surface.window.get(), img.buffer.get(),
+ img.dequeue_fence);
+ img.dequeue_fence = -1;
+ img.dequeued = false;
+ }
+ if (result != VK_SUCCESS) {
+ if (img.image)
+ dispatch.DestroyImage(device, img.image, nullptr);
+ }
+ }
+
+ if (result != VK_SUCCESS) {
+ swapchain->~Swapchain();
+ allocator->pfnFree(allocator->pUserData, swapchain);
+ return result;
+ }
+
+ *swapchain_handle = HandleFromSwapchain(swapchain);
+ return VK_SUCCESS;
+}
+
+VKAPI_ATTR
+void DestroySwapchainKHR_Bottom(VkDevice device,
+ VkSwapchainKHR swapchain_handle,
+ const VkAllocationCallbacks* allocator) {
+ const DriverDispatchTable& dispatch = GetDriverDispatch(device);
+ Swapchain* swapchain = SwapchainFromHandle(swapchain_handle);
+ const std::shared_ptr<ANativeWindow>& window = swapchain->surface.window;
+
+ for (uint32_t i = 0; i < swapchain->num_images; i++) {
+ Swapchain::Image& img = swapchain->images[i];
+ if (img.dequeued) {
+ window->cancelBuffer(window.get(), img.buffer.get(),
+ img.dequeue_fence);
+ img.dequeue_fence = -1;
+ img.dequeued = false;
+ }
+ if (img.image) {
+ dispatch.DestroyImage(device, img.image, nullptr);
+ }
+ }
+
+ if (!allocator)
+ allocator = GetAllocator(device);
+ swapchain->~Swapchain();
+ allocator->pfnFree(allocator->pUserData, swapchain);
+}
+
+VKAPI_ATTR
+VkResult GetSwapchainImagesKHR_Bottom(VkDevice,
+ VkSwapchainKHR swapchain_handle,
+ uint32_t* count,
+ VkImage* images) {
+ Swapchain& swapchain = *SwapchainFromHandle(swapchain_handle);
+ VkResult result = VK_SUCCESS;
+ if (images) {
+ uint32_t n = swapchain.num_images;
+ if (*count < swapchain.num_images) {
+ n = *count;
+ result = VK_INCOMPLETE;
+ }
+ for (uint32_t i = 0; i < n; i++)
+ images[i] = swapchain.images[i].image;
+ }
+ *count = swapchain.num_images;
+ return result;
+}
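`GetSwapchainImagesKHR_Bottom` implements the standard Vulkan two-call enumeration idiom: a null output array queries the count, and a second call fills the caller's buffer, returning `VK_INCOMPLETE` if it was too small. A minimal caller-side sketch of that idiom follows; `FakeGetImages`, `QueryAll`, and the fake handle values are illustrative stand-ins, not part of this driver.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

constexpr uint32_t kTotal = 3;  // pretend the swapchain has 3 images

// Mimics the driver entry point: fills up to *count entries, always
// reports the true total in *count, and returns false (VK_INCOMPLETE)
// when the caller's buffer was too small.
bool FakeGetImages(uint32_t* count, uint64_t* images) {
    bool complete = true;
    if (images) {
        uint32_t n = kTotal;
        if (*count < kTotal) {
            n = *count;
            complete = false;  // VK_INCOMPLETE
        }
        for (uint32_t i = 0; i < n; i++)
            images[i] = 0x100 + i;  // fake image handles
    }
    *count = kTotal;
    return complete;
}

// The caller-side pattern: query the count, size the buffer, query again.
std::vector<uint64_t> QueryAll() {
    uint32_t count = 0;
    FakeGetImages(&count, nullptr);        // first call: count only
    std::vector<uint64_t> images(count);
    FakeGetImages(&count, images.data());  // second call: fill buffer
    images.resize(count);
    return images;
}
```

The same shape applies to the `EnumerateDeviceExtensionProperties` and queue-family queries later in this change.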
+
+VKAPI_ATTR
+VkResult AcquireNextImageKHR_Bottom(VkDevice device,
+ VkSwapchainKHR swapchain_handle,
+ uint64_t timeout,
+ VkSemaphore semaphore,
+ VkFence vk_fence,
+ uint32_t* image_index) {
+ Swapchain& swapchain = *SwapchainFromHandle(swapchain_handle);
+ ANativeWindow* window = swapchain.surface.window.get();
+ VkResult result;
+ int err;
+
+ ALOGW_IF(
+ timeout != UINT64_MAX,
+ "vkAcquireNextImageKHR: non-infinite timeouts not yet implemented");
+
+ ANativeWindowBuffer* buffer;
+ int fence_fd;
+ err = window->dequeueBuffer(window, &buffer, &fence_fd);
+ if (err != 0) {
+ // TODO(jessehall): Improve error reporting. Can we enumerate possible
+ // errors and translate them to valid Vulkan result codes?
+ ALOGE("dequeueBuffer failed: %s (%d)", strerror(-err), err);
+ return VK_ERROR_INITIALIZATION_FAILED;
+ }
+
+ uint32_t idx;
+ for (idx = 0; idx < swapchain.num_images; idx++) {
+ if (swapchain.images[idx].buffer.get() == buffer) {
+ swapchain.images[idx].dequeued = true;
+ swapchain.images[idx].dequeue_fence = fence_fd;
+ break;
+ }
+ }
+ if (idx == swapchain.num_images) {
+ ALOGE("dequeueBuffer returned unrecognized buffer");
+ window->cancelBuffer(window, buffer, fence_fd);
+ return VK_ERROR_OUT_OF_DATE_KHR;
+ }
+
+ int fence_clone = -1;
+ if (fence_fd != -1) {
+ fence_clone = dup(fence_fd);
+ if (fence_clone == -1) {
+ ALOGE("dup(fence) failed, stalling until signalled: %s (%d)",
+ strerror(errno), errno);
+ sync_wait(fence_fd, -1 /* forever */);
+ }
+ }
+
+ result = GetDriverDispatch(device).AcquireImageANDROID(
+ device, swapchain.images[idx].image, fence_clone, semaphore, vk_fence);
+ if (result != VK_SUCCESS) {
+ // NOTE: we're relying on AcquireImageANDROID to close fence_clone,
+ // even if the call fails. We could close it ourselves on failure, but
+ // that would create a race condition if the driver closes it on a
+ // failure path: some other thread might create an fd with the same
+ // number between the time the driver closes it and the time we close
+ // it. We must assume one of: the driver *always* closes it even on
+ // failure, or *never* closes it on failure.
+ window->cancelBuffer(window, buffer, fence_fd);
+ swapchain.images[idx].dequeued = false;
+ swapchain.images[idx].dequeue_fence = -1;
+ return result;
+ }
+
+ *image_index = idx;
+ return VK_SUCCESS;
+}
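The dup-or-stall fallback above keeps fence ownership unambiguous: the window retains `fence_fd`, the driver receives a duplicate, and if `dup()` fails the code blocks on the fence itself and hands the driver -1 (meaning "already signalled"). A sketch of that pattern, where `CloneFenceOrWait` is a hypothetical helper and the blocking wait (`sync_wait` in the real code) is elided:

```cpp
#include <cassert>
#include <unistd.h>

// Returns a duplicate of fence_fd for handoff to the driver, or -1 if
// there is no fence to wait on. On dup() failure the real code calls
// sync_wait(fence_fd, -1) to stall until the fence signals, after which
// passing -1 ("no fence") to the driver is safe.
int CloneFenceOrWait(int fence_fd) {
    if (fence_fd == -1)
        return -1;  // no fence: nothing to duplicate or wait on
    int clone = dup(fence_fd);
    if (clone == -1) {
        // Fallback: block here until the fence signals (elided), then
        // proceed with no fence. The caller still owns fence_fd.
        return -1;
    }
    return clone;
}
```

The caller keeps closing (or re-queuing) the original fd in every path; only the duplicate's lifetime is delegated to `AcquireImageANDROID`.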
+
+VKAPI_ATTR
+VkResult QueuePresentKHR_Bottom(VkQueue queue,
+ const VkPresentInfoKHR* present_info) {
+ ALOGV_IF(present_info->sType != VK_STRUCTURE_TYPE_PRESENT_INFO_KHR,
+ "vkQueuePresentKHR: invalid VkPresentInfoKHR structure type %d",
+ present_info->sType);
+ ALOGV_IF(present_info->pNext, "VkPresentInfo::pNext != NULL");
+
+ const DriverDispatchTable& dispatch = GetDriverDispatch(queue);
+ VkResult final_result = VK_SUCCESS;
+ for (uint32_t sc = 0; sc < present_info->swapchainCount; sc++) {
+ Swapchain& swapchain =
+ *SwapchainFromHandle(present_info->pSwapchains[sc]);
+ ANativeWindow* window = swapchain.surface.window.get();
+ uint32_t image_idx = present_info->pImageIndices[sc];
+ Swapchain::Image& img = swapchain.images[image_idx];
+ VkResult result;
+ int err;
+
+ int fence = -1;
+ result = dispatch.QueueSignalReleaseImageANDROID(
+ queue, present_info->waitSemaphoreCount,
+ present_info->pWaitSemaphores, img.image, &fence);
+ if (result != VK_SUCCESS) {
+ ALOGE("QueueSignalReleaseImageANDROID failed: %d", result);
+ if (present_info->pResults)
+ present_info->pResults[sc] = result;
+ if (final_result == VK_SUCCESS)
+ final_result = result;
+ // TODO(jessehall): What happens to the buffer here? Does the app
+ // still own it or not, i.e. should we cancel the buffer? Hard to
+ // do correctly without synchronizing, though I guess we could wait
+ // for the queue to idle.
+ continue;
+ }
+
+ err = window->queueBuffer(window, img.buffer.get(), fence);
+ if (err != 0) {
+ // TODO(jessehall): What now? We should probably cancel the buffer,
+ // I guess?
+ ALOGE("queueBuffer failed: %s (%d)", strerror(-err), err);
+ if (present_info->pResults)
+ present_info->pResults[sc] = VK_ERROR_INITIALIZATION_FAILED;
+ if (final_result == VK_SUCCESS)
+ final_result = VK_ERROR_INITIALIZATION_FAILED;
+ continue;
+ }
+
+ if (img.dequeue_fence != -1) {
+ close(img.dequeue_fence);
+ img.dequeue_fence = -1;
+ }
+ img.dequeued = false;
+
+ if (present_info->pResults)
+ present_info->pResults[sc] = VK_SUCCESS;
+ }
+
+ return final_result;
+}
+
+} // namespace vulkan
diff --git a/vulkan/libvulkan/vulkan_loader_data.cpp b/vulkan/libvulkan/vulkan_loader_data.cpp
new file mode 100644
index 0000000..a6a0295
--- /dev/null
+++ b/vulkan/libvulkan/vulkan_loader_data.cpp
@@ -0,0 +1,24 @@
+/*
+ * Copyright 2015 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <vulkan/vulkan_loader_data.h>
+
+using namespace vulkan;
+
+LoaderData& LoaderData::GetInstance() {
+ static LoaderData loader_data;
+ return loader_data;
+}
diff --git a/vulkan/nulldrv/Android.mk b/vulkan/nulldrv/Android.mk
new file mode 100644
index 0000000..77d4746
--- /dev/null
+++ b/vulkan/nulldrv/Android.mk
@@ -0,0 +1,45 @@
+# Copyright 2015 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+LOCAL_PATH:= $(call my-dir)
+include $(CLEAR_VARS)
+
+LOCAL_CLANG := true
+LOCAL_CFLAGS := -std=c99 -fvisibility=hidden -fstrict-aliasing \
+ -DLOG_TAG=\"vknulldrv\" \
+ -Weverything -Werror \
+ -Wno-padded \
+ -Wno-undef \
+ -Wno-zero-length-array
+#LOCAL_CFLAGS += -DLOG_NDEBUG=0
+LOCAL_CPPFLAGS := -std=c++1y \
+ -Wno-c++98-compat-pedantic \
+ -Wno-c99-extensions
+
+LOCAL_C_INCLUDES := \
+ frameworks/native/vulkan/include
+
+LOCAL_SRC_FILES := \
+ null_driver.cpp \
+ null_driver_gen.cpp
+
+LOCAL_SHARED_LIBRARIES := liblog
+
+# Real drivers would set this to vulkan.$(TARGET_BOARD_PLATFORM)
+LOCAL_MODULE := vulkan.default
+LOCAL_PROPRIETARY_MODULE := true
+LOCAL_MODULE_RELATIVE_PATH := hw
+LOCAL_MODULE_TAGS := optional
+
+include $(BUILD_SHARED_LIBRARY)
diff --git a/vulkan/nulldrv/null_driver.cpp b/vulkan/nulldrv/null_driver.cpp
new file mode 100644
index 0000000..b4e21db
--- /dev/null
+++ b/vulkan/nulldrv/null_driver.cpp
@@ -0,0 +1,1184 @@
+/*
+ * Copyright 2015 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <hardware/hwvulkan.h>
+#include <vulkan/vk_ext_debug_report.h>
+
+#include <algorithm>
+#include <array>
+#include <inttypes.h>
+#include <string.h>
+
+#include <log/log.h>
+#include <utils/Errors.h>
+
+#include "null_driver_gen.h"
+
+using namespace null_driver;
+
+struct VkPhysicalDevice_T {
+ hwvulkan_dispatch_t dispatch;
+};
+
+struct VkInstance_T {
+ hwvulkan_dispatch_t dispatch;
+ VkAllocationCallbacks allocator;
+ VkPhysicalDevice_T physical_device;
+ uint64_t next_callback_handle;
+};
+
+struct VkQueue_T {
+ hwvulkan_dispatch_t dispatch;
+};
+
+struct VkCommandBuffer_T {
+ hwvulkan_dispatch_t dispatch;
+};
+
+namespace {
+// Handles for non-dispatchable objects are either pointers, or arbitrary
+// 64-bit non-zero values. We only use pointers when we need to keep state for
+// the object even in a null driver. For the rest, we form a handle as:
+// [63:63] = 1 to distinguish from pointer handles*
+// [62:56] = handle type enum value
+// [55: 0] = per-handle-type incrementing counter
+// * This works because virtual addresses with the high bit set are reserved
+// for kernel data in all ABIs we run on.
+//
+// We never reclaim handles on vkDestroy*. It's not even necessary for us to
+// have distinct handles for live objects, and practically speaking we won't
+// ever create 2^56 objects of the same type from a single VkDevice in a null
+// driver.
+//
+// Using a namespace here instead of 'enum class' since we want scoped
+// constants but also want implicit conversions to integral types.
+namespace HandleType {
+enum Enum {
+ kBufferView,
+ kDebugReportCallbackEXT,
+ kDescriptorPool,
+ kDescriptorSet,
+ kDescriptorSetLayout,
+ kEvent,
+ kFence,
+ kFramebuffer,
+ kImageView,
+ kPipeline,
+ kPipelineCache,
+ kPipelineLayout,
+ kQueryPool,
+ kRenderPass,
+ kSampler,
+ kSemaphore,
+ kShaderModule,
+
+ kNumTypes
+};
+} // namespace HandleType
+
+const VkDeviceSize kMaxDeviceMemory = VkDeviceSize(INTPTR_MAX) + 1;
+
+} // anonymous namespace
+
+struct VkDevice_T {
+ hwvulkan_dispatch_t dispatch;
+ VkAllocationCallbacks allocator;
+ VkInstance_T* instance;
+ VkQueue_T queue;
+ std::array<uint64_t, HandleType::kNumTypes> next_handle;
+};
+
+// -----------------------------------------------------------------------------
+// Declare HAL_MODULE_INFO_SYM early so it can be referenced by nulldrv_device
+// later.
+
+namespace {
+int OpenDevice(const hw_module_t* module, const char* id, hw_device_t** device);
+hw_module_methods_t nulldrv_module_methods = {.open = OpenDevice};
+} // namespace
+
+#pragma clang diagnostic push
+#pragma clang diagnostic ignored "-Wmissing-variable-declarations"
+__attribute__((visibility("default"))) hwvulkan_module_t HAL_MODULE_INFO_SYM = {
+ .common =
+ {
+ .tag = HARDWARE_MODULE_TAG,
+ .module_api_version = HWVULKAN_MODULE_API_VERSION_0_1,
+ .hal_api_version = HARDWARE_HAL_API_VERSION,
+ .id = HWVULKAN_HARDWARE_MODULE_ID,
+ .name = "Null Vulkan Driver",
+ .author = "The Android Open Source Project",
+ .methods = &nulldrv_module_methods,
+ },
+};
+#pragma clang diagnostic pop
+
+// -----------------------------------------------------------------------------
+
+namespace {
+
+int CloseDevice(struct hw_device_t* /*device*/) {
+ // nothing to do - opening a device doesn't allocate any resources
+ return 0;
+}
+
+hwvulkan_device_t nulldrv_device = {
+ .common =
+ {
+ .tag = HARDWARE_DEVICE_TAG,
+ .version = HWVULKAN_DEVICE_API_VERSION_0_1,
+ .module = &HAL_MODULE_INFO_SYM.common,
+ .close = CloseDevice,
+ },
+ .EnumerateInstanceExtensionProperties =
+ EnumerateInstanceExtensionProperties,
+ .CreateInstance = CreateInstance,
+ .GetInstanceProcAddr = GetInstanceProcAddr};
+
+int OpenDevice(const hw_module_t* /*module*/,
+ const char* id,
+ hw_device_t** device) {
+ if (strcmp(id, HWVULKAN_DEVICE_0) == 0) {
+ *device = &nulldrv_device.common;
+ return 0;
+ }
+ return -ENOENT;
+}
+
+VkInstance_T* GetInstanceFromPhysicalDevice(
+ VkPhysicalDevice_T* physical_device) {
+ return reinterpret_cast<VkInstance_T*>(
+ reinterpret_cast<uintptr_t>(physical_device) -
+ offsetof(VkInstance_T, physical_device));
+}
+
+uint64_t AllocHandle(uint64_t type, uint64_t* next_handle) {
+ const uint64_t kHandleMask = (UINT64_C(1) << 56) - 1;
+ ALOGE_IF(*next_handle == kHandleMask,
+ "non-dispatchable handles of type=%" PRIu64
+ " are about to overflow",
+ type);
+ return (UINT64_C(1) << 63) | ((type & 0x7F) << 56) |
+ ((*next_handle)++ & kHandleMask);
+}
+
+template <class Handle>
+Handle AllocHandle(VkInstance instance, HandleType::Enum type) {
+ return reinterpret_cast<Handle>(
+ AllocHandle(type, &instance->next_callback_handle));
+}
+
+template <class Handle>
+Handle AllocHandle(VkDevice device, HandleType::Enum type) {
+ return reinterpret_cast<Handle>(
+ AllocHandle(type, &device->next_handle[type]));
+}
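The `AllocHandle` helpers pack handles exactly as the comment block earlier in this file describes: bit 63 set, the handle type in bits 62:56, and a per-type counter in bits 55:0. A standalone sketch of that encoding follows; the `Decode*` helpers are hypothetical (the driver itself never decodes its handles) and exist only to make the layout testable.

```cpp
#include <cassert>
#include <cstdint>

// Bits 55:0 hold the per-type incrementing counter.
constexpr uint64_t kCounterMask = (UINT64_C(1) << 56) - 1;

// Pack per the documented layout: [63]=1, [62:56]=type, [55:0]=counter.
uint64_t Encode(uint64_t type, uint64_t counter) {
    return (UINT64_C(1) << 63) | ((type & 0x7F) << 56) |
           (counter & kCounterMask);
}

// Illustrative inverses, not present in the driver.
uint64_t DecodeType(uint64_t handle) { return (handle >> 56) & 0x7F; }
uint64_t DecodeCounter(uint64_t handle) { return handle & kCounterMask; }
```

Because bit 63 is always set, every encoded handle is non-zero and lands in the kernel half of the address space, so it can never collide with a pointer-backed handle.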
+
+} // namespace
+
+namespace null_driver {
+
+#define DEFINE_OBJECT_HANDLE_CONVERSION(T) \
+ T* Get##T##FromHandle(Vk##T h); \
+ T* Get##T##FromHandle(Vk##T h) { \
+ return reinterpret_cast<T*>(uintptr_t(h)); \
+ } \
+ Vk##T GetHandleTo##T(const T* obj); \
+ Vk##T GetHandleTo##T(const T* obj) { \
+ return Vk##T(reinterpret_cast<uintptr_t>(obj)); \
+ }
+
+// -----------------------------------------------------------------------------
+// Global
+
+VKAPI_ATTR
+VkResult EnumerateInstanceExtensionProperties(
+ const char* layer_name,
+ uint32_t* count,
+ VkExtensionProperties* properties) {
+ if (layer_name) {
+ ALOGW(
+ "Driver vkEnumerateInstanceExtensionProperties shouldn't be called "
+ "with a layer name ('%s')",
+ layer_name);
+ }
+
+// NOTE: Change the "#if 1" below to "#if 0" to report an extension, which
+// can be useful for testing changes to the loader.
+#if 1
+ (void)properties; // unused
+ *count = 0;
+ return VK_SUCCESS;
+#else
+ const VkExtensionProperties kExtensions[] = {
+ {VK_EXT_DEBUG_REPORT_EXTENSION_NAME, VK_EXT_DEBUG_REPORT_SPEC_VERSION}};
+ const uint32_t kExtensionsCount =
+ sizeof(kExtensions) / sizeof(kExtensions[0]);
+
+ if (!properties || *count > kExtensionsCount)
+ *count = kExtensionsCount;
+ if (properties)
+ std::copy(kExtensions, kExtensions + *count, properties);
+ return *count < kExtensionsCount ? VK_INCOMPLETE : VK_SUCCESS;
+#endif
+}
+
+VKAPI_ATTR
+VkResult CreateInstance(const VkInstanceCreateInfo* create_info,
+ const VkAllocationCallbacks* allocator,
+ VkInstance* out_instance) {
+ // Assume the loader provided alloc callbacks even if the app didn't.
+ ALOG_ASSERT(
+ allocator,
+ "Missing alloc callbacks, loader or app should have provided them");
+
+ VkInstance_T* instance =
+ static_cast<VkInstance_T*>(allocator->pfnAllocation(
+ allocator->pUserData, sizeof(VkInstance_T), alignof(VkInstance_T),
+ VK_SYSTEM_ALLOCATION_SCOPE_INSTANCE));
+ if (!instance)
+ return VK_ERROR_OUT_OF_HOST_MEMORY;
+
+ instance->dispatch.magic = HWVULKAN_DISPATCH_MAGIC;
+ instance->allocator = *allocator;
+ instance->physical_device.dispatch.magic = HWVULKAN_DISPATCH_MAGIC;
+ instance->next_callback_handle = 0;
+
+ for (uint32_t i = 0; i < create_info->enabledExtensionCount; i++) {
+ if (strcmp(create_info->ppEnabledExtensionNames[i],
+ VK_EXT_DEBUG_REPORT_EXTENSION_NAME) == 0) {
+ ALOGV("instance extension '%s' requested",
+ create_info->ppEnabledExtensionNames[i]);
+ } else {
+ ALOGW("unsupported extension '%s' requested",
+ create_info->ppEnabledExtensionNames[i]);
+ }
+ }
+
+ *out_instance = instance;
+ return VK_SUCCESS;
+}
+
+VKAPI_ATTR
+PFN_vkVoidFunction GetInstanceProcAddr(VkInstance instance, const char* name) {
+ return instance ? GetInstanceProcAddr(name) : GetGlobalProcAddr(name);
+}
+
+VKAPI_ATTR
+PFN_vkVoidFunction GetDeviceProcAddr(VkDevice, const char* name) {
+ return GetInstanceProcAddr(name);
+}
+
+// -----------------------------------------------------------------------------
+// Instance
+
+void DestroyInstance(VkInstance instance,
+ const VkAllocationCallbacks* /*allocator*/) {
+ instance->allocator.pfnFree(instance->allocator.pUserData, instance);
+}
+
+// -----------------------------------------------------------------------------
+// PhysicalDevice
+
+VkResult EnumeratePhysicalDevices(VkInstance instance,
+ uint32_t* physical_device_count,
+ VkPhysicalDevice* physical_devices) {
+ if (physical_devices && *physical_device_count >= 1)
+ physical_devices[0] = &instance->physical_device;
+ *physical_device_count = 1;
+ return VK_SUCCESS;
+}
+
+VkResult EnumerateDeviceLayerProperties(VkPhysicalDevice /*gpu*/,
+ uint32_t* count,
+ VkLayerProperties* /*properties*/) {
+ ALOGW("Driver vkEnumerateDeviceLayerProperties shouldn't be called");
+ *count = 0;
+ return VK_SUCCESS;
+}
+
+VkResult EnumerateDeviceExtensionProperties(VkPhysicalDevice /*gpu*/,
+ const char* layer_name,
+ uint32_t* count,
+ VkExtensionProperties* properties) {
+ if (layer_name) {
+ ALOGW(
+ "Driver vkEnumerateDeviceExtensionProperties shouldn't be called "
+ "with a layer name ('%s')",
+ layer_name);
+ *count = 0;
+ return VK_SUCCESS;
+ }
+
+ const VkExtensionProperties kExtensions[] = {
+ {VK_ANDROID_NATIVE_BUFFER_EXTENSION_NAME,
+ VK_ANDROID_NATIVE_BUFFER_SPEC_VERSION}};
+ const uint32_t kExtensionsCount =
+ sizeof(kExtensions) / sizeof(kExtensions[0]);
+
+ if (!properties || *count > kExtensionsCount)
+ *count = kExtensionsCount;
+ if (properties)
+ std::copy(kExtensions, kExtensions + *count, properties);
+ return *count < kExtensionsCount ? VK_INCOMPLETE : VK_SUCCESS;
+}
+
+void GetPhysicalDeviceProperties(VkPhysicalDevice,
+ VkPhysicalDeviceProperties* properties) {
+ properties->apiVersion = VK_API_VERSION;
+ properties->driverVersion = VK_MAKE_VERSION(0, 0, 1);
+ properties->vendorID = 0;
+ properties->deviceID = 0;
+ properties->deviceType = VK_PHYSICAL_DEVICE_TYPE_OTHER;
+ strcpy(properties->deviceName, "Android Vulkan Null Driver");
+ memset(properties->pipelineCacheUUID, 0,
+ sizeof(properties->pipelineCacheUUID));
+}
+
+void GetPhysicalDeviceQueueFamilyProperties(
+ VkPhysicalDevice,
+ uint32_t* count,
+ VkQueueFamilyProperties* properties) {
+ if (!properties || *count > 1)
+ *count = 1;
+ if (properties && *count == 1) {
+ properties->queueFlags = VK_QUEUE_GRAPHICS_BIT | VK_QUEUE_COMPUTE_BIT |
+ VK_QUEUE_TRANSFER_BIT;
+ properties->queueCount = 1;
+ properties->timestampValidBits = 64;
+ properties->minImageTransferGranularity = VkExtent3D{1, 1, 1};
+ }
+}
+
+void GetPhysicalDeviceMemoryProperties(
+ VkPhysicalDevice,
+ VkPhysicalDeviceMemoryProperties* properties) {
+ properties->memoryTypeCount = 1;
+ properties->memoryTypes[0].propertyFlags =
+ VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT |
+ VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT |
+ VK_MEMORY_PROPERTY_HOST_COHERENT_BIT |
+ VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
+ properties->memoryTypes[0].heapIndex = 0;
+ properties->memoryHeapCount = 1;
+ properties->memoryHeaps[0].size = kMaxDeviceMemory;
+ properties->memoryHeaps[0].flags = VK_MEMORY_HEAP_DEVICE_LOCAL_BIT;
+}
+
+// -----------------------------------------------------------------------------
+// Device
+
+VkResult CreateDevice(VkPhysicalDevice physical_device,
+ const VkDeviceCreateInfo* create_info,
+ const VkAllocationCallbacks* allocator,
+ VkDevice* out_device) {
+ VkInstance_T* instance = GetInstanceFromPhysicalDevice(physical_device);
+ if (!allocator)
+ allocator = &instance->allocator;
+ VkDevice_T* device = static_cast<VkDevice_T*>(allocator->pfnAllocation(
+ allocator->pUserData, sizeof(VkDevice_T), alignof(VkDevice_T),
+ VK_SYSTEM_ALLOCATION_SCOPE_DEVICE));
+ if (!device)
+ return VK_ERROR_OUT_OF_HOST_MEMORY;
+
+ device->dispatch.magic = HWVULKAN_DISPATCH_MAGIC;
+ device->allocator = *allocator;
+ device->instance = instance;
+ device->queue.dispatch.magic = HWVULKAN_DISPATCH_MAGIC;
+ std::fill(device->next_handle.begin(), device->next_handle.end(),
+ UINT64_C(0));
+
+ for (uint32_t i = 0; i < create_info->enabledExtensionCount; i++) {
+ if (strcmp(create_info->ppEnabledExtensionNames[i],
+ VK_ANDROID_NATIVE_BUFFER_EXTENSION_NAME) == 0) {
+ ALOGV("Enabling " VK_ANDROID_NATIVE_BUFFER_EXTENSION_NAME);
+ }
+ }
+
+ *out_device = device;
+ return VK_SUCCESS;
+}
+
+void DestroyDevice(VkDevice device,
+ const VkAllocationCallbacks* /*allocator*/) {
+ if (!device)
+ return;
+ device->allocator.pfnFree(device->allocator.pUserData, device);
+}
+
+void GetDeviceQueue(VkDevice device, uint32_t, uint32_t, VkQueue* queue) {
+ *queue = &device->queue;
+}
+
+// -----------------------------------------------------------------------------
+// CommandPool
+
+struct CommandPool {
+ typedef VkCommandPool HandleType;
+ VkAllocationCallbacks allocator;
+};
+DEFINE_OBJECT_HANDLE_CONVERSION(CommandPool)
+
+VkResult CreateCommandPool(VkDevice device,
+ const VkCommandPoolCreateInfo* /*create_info*/,
+ const VkAllocationCallbacks* allocator,
+ VkCommandPool* cmd_pool) {
+ if (!allocator)
+ allocator = &device->allocator;
+ CommandPool* pool = static_cast<CommandPool*>(allocator->pfnAllocation(
+ allocator->pUserData, sizeof(CommandPool), alignof(CommandPool),
+ VK_SYSTEM_ALLOCATION_SCOPE_OBJECT));
+ if (!pool)
+ return VK_ERROR_OUT_OF_HOST_MEMORY;
+ pool->allocator = *allocator;
+ *cmd_pool = GetHandleToCommandPool(pool);
+ return VK_SUCCESS;
+}
+
+void DestroyCommandPool(VkDevice /*device*/,
+ VkCommandPool cmd_pool,
+ const VkAllocationCallbacks* /*allocator*/) {
+ CommandPool* pool = GetCommandPoolFromHandle(cmd_pool);
+ pool->allocator.pfnFree(pool->allocator.pUserData, pool);
+}
+
+// -----------------------------------------------------------------------------
+// CmdBuffer
+
+VkResult AllocateCommandBuffers(VkDevice /*device*/,
+ const VkCommandBufferAllocateInfo* alloc_info,
+ VkCommandBuffer* cmdbufs) {
+ VkResult result = VK_SUCCESS;
+ CommandPool& pool = *GetCommandPoolFromHandle(alloc_info->commandPool);
+ std::fill(cmdbufs, cmdbufs + alloc_info->commandBufferCount, nullptr);
+ for (uint32_t i = 0; i < alloc_info->commandBufferCount; i++) {
+ cmdbufs[i] =
+ static_cast<VkCommandBuffer_T*>(pool.allocator.pfnAllocation(
+ pool.allocator.pUserData, sizeof(VkCommandBuffer_T),
+ alignof(VkCommandBuffer_T), VK_SYSTEM_ALLOCATION_SCOPE_OBJECT));
+ if (!cmdbufs[i]) {
+ result = VK_ERROR_OUT_OF_HOST_MEMORY;
+ break;
+ }
+ cmdbufs[i]->dispatch.magic = HWVULKAN_DISPATCH_MAGIC;
+ }
+ if (result != VK_SUCCESS) {
+ for (uint32_t i = 0; i < alloc_info->commandBufferCount; i++) {
+ if (!cmdbufs[i])
+ break;
+ pool.allocator.pfnFree(pool.allocator.pUserData, cmdbufs[i]);
+ }
+ }
+ return result;
+}
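`AllocateCommandBuffers` uses a pre-fill-then-unwind pattern: the output array is nulled first so the failure path can free exactly the entries that were allocated, stopping at the first null. A sketch of the same pattern with plain heap allocations; `AllocateAll` and the simulated failure index are illustrative, not part of the driver.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>

// Allocates count ints into out[], failing (as if out of memory) at
// index fail_at. Pre-filling out[] with nullptr lets the unwind loop
// free exactly what was allocated and stop at the first null slot.
bool AllocateAll(size_t count, size_t fail_at, int** out) {
    std::fill(out, out + count, nullptr);  // mark all slots unallocated
    bool ok = true;
    for (size_t i = 0; i < count; i++) {
        if (i == fail_at) {
            ok = false;  // simulated allocation failure
            break;
        }
        out[i] = new int(static_cast<int>(i));
    }
    if (!ok) {
        // Unwind: free only the successfully allocated prefix.
        for (size_t i = 0; i < count && out[i]; i++) {
            delete out[i];
            out[i] = nullptr;
        }
    }
    return ok;
}
```

The pre-fill also guarantees the caller never sees stale pointers in the output array on failure, which is why the driver does it before the first allocation rather than inside the loop.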
+
+void FreeCommandBuffers(VkDevice /*device*/,
+ VkCommandPool cmd_pool,
+ uint32_t count,
+ const VkCommandBuffer* cmdbufs) {
+ CommandPool& pool = *GetCommandPoolFromHandle(cmd_pool);
+ for (uint32_t i = 0; i < count; i++)
+ pool.allocator.pfnFree(pool.allocator.pUserData, cmdbufs[i]);
+}
+
+// -----------------------------------------------------------------------------
+// DeviceMemory
+
+struct DeviceMemory {
+ typedef VkDeviceMemory HandleType;
+ VkDeviceSize size;
+ alignas(16) uint8_t data[0];
+};
+DEFINE_OBJECT_HANDLE_CONVERSION(DeviceMemory)
+
+VkResult AllocateMemory(VkDevice device,
+ const VkMemoryAllocateInfo* alloc_info,
+ const VkAllocationCallbacks* allocator,
+ VkDeviceMemory* mem_handle) {
+ if (SIZE_MAX - sizeof(DeviceMemory) <= alloc_info->allocationSize)
+ return VK_ERROR_OUT_OF_HOST_MEMORY;
+ if (!allocator)
+ allocator = &device->allocator;
+
+ size_t size = sizeof(DeviceMemory) + size_t(alloc_info->allocationSize);
+ DeviceMemory* mem = static_cast<DeviceMemory*>(allocator->pfnAllocation(
+ allocator->pUserData, size, alignof(DeviceMemory),
+ VK_SYSTEM_ALLOCATION_SCOPE_OBJECT));
+ if (!mem)
+ return VK_ERROR_OUT_OF_HOST_MEMORY;
+ mem->size = size;
+ *mem_handle = GetHandleToDeviceMemory(mem);
+ return VK_SUCCESS;
+}
+
+void FreeMemory(VkDevice device,
+ VkDeviceMemory mem_handle,
+ const VkAllocationCallbacks* allocator) {
+ if (!allocator)
+ allocator = &device->allocator;
+ DeviceMemory* mem = GetDeviceMemoryFromHandle(mem_handle);
+ allocator->pfnFree(allocator->pUserData, mem);
+}
+
+VkResult MapMemory(VkDevice,
+ VkDeviceMemory mem_handle,
+ VkDeviceSize offset,
+ VkDeviceSize,
+ VkMemoryMapFlags,
+ void** out_ptr) {
+ DeviceMemory* mem = GetDeviceMemoryFromHandle(mem_handle);
+ *out_ptr = &mem->data[0] + offset;
+ return VK_SUCCESS;
+}
+
+// -----------------------------------------------------------------------------
+// Buffer
+
+struct Buffer {
+ typedef VkBuffer HandleType;
+ VkDeviceSize size;
+};
+DEFINE_OBJECT_HANDLE_CONVERSION(Buffer)
+
+VkResult CreateBuffer(VkDevice device,
+ const VkBufferCreateInfo* create_info,
+ const VkAllocationCallbacks* allocator,
+ VkBuffer* buffer_handle) {
+ ALOGW_IF(create_info->size > kMaxDeviceMemory,
+ "CreateBuffer: requested size 0x%" PRIx64
+ " exceeds max device memory size 0x%" PRIx64,
+ create_info->size, kMaxDeviceMemory);
+ if (!allocator)
+ allocator = &device->allocator;
+ Buffer* buffer = static_cast<Buffer*>(allocator->pfnAllocation(
+ allocator->pUserData, sizeof(Buffer), alignof(Buffer),
+ VK_SYSTEM_ALLOCATION_SCOPE_OBJECT));
+ if (!buffer)
+ return VK_ERROR_OUT_OF_HOST_MEMORY;
+ buffer->size = create_info->size;
+ *buffer_handle = GetHandleToBuffer(buffer);
+ return VK_SUCCESS;
+}
+
+void GetBufferMemoryRequirements(VkDevice,
+ VkBuffer buffer_handle,
+ VkMemoryRequirements* requirements) {
+ Buffer* buffer = GetBufferFromHandle(buffer_handle);
+ requirements->size = buffer->size;
+ requirements->alignment = 16; // allow fast Neon/SSE memcpy
+ requirements->memoryTypeBits = 0x1;
+}
+
+void DestroyBuffer(VkDevice device,
+ VkBuffer buffer_handle,
+ const VkAllocationCallbacks* allocator) {
+ if (!allocator)
+ allocator = &device->allocator;
+ Buffer* buffer = GetBufferFromHandle(buffer_handle);
+ allocator->pfnFree(allocator->pUserData, buffer);
+}
+
+// -----------------------------------------------------------------------------
+// Image
+
+struct Image {
+ typedef VkImage HandleType;
+ VkDeviceSize size;
+};
+DEFINE_OBJECT_HANDLE_CONVERSION(Image)
+
+VkResult CreateImage(VkDevice device,
+ const VkImageCreateInfo* create_info,
+ const VkAllocationCallbacks* allocator,
+ VkImage* image_handle) {
+ if (create_info->imageType != VK_IMAGE_TYPE_2D ||
+ create_info->format != VK_FORMAT_R8G8B8A8_UNORM ||
+ create_info->mipLevels != 1) {
+ ALOGE("CreateImage: not yet implemented: type=%d format=%d mips=%u",
+ create_info->imageType, create_info->format,
+ create_info->mipLevels);
+ return VK_ERROR_OUT_OF_HOST_MEMORY;
+ }
+
+ VkDeviceSize size =
+ VkDeviceSize(create_info->extent.width) * create_info->extent.height *
+ create_info->arrayLayers * create_info->samples * 4u;
+ ALOGW_IF(size > kMaxDeviceMemory,
+ "CreateImage: image size 0x%" PRIx64
+ " exceeds max device memory size 0x%" PRIx64,
+ size, kMaxDeviceMemory);
+
+ if (!allocator)
+ allocator = &device->allocator;
+ Image* image = static_cast<Image*>(allocator->pfnAllocation(
+ allocator->pUserData, sizeof(Image), alignof(Image),
+ VK_SYSTEM_ALLOCATION_SCOPE_OBJECT));
+ if (!image)
+ return VK_ERROR_OUT_OF_HOST_MEMORY;
+ image->size = size;
+ *image_handle = GetHandleToImage(image);
+ return VK_SUCCESS;
+}
+
+void GetImageMemoryRequirements(VkDevice,
+ VkImage image_handle,
+ VkMemoryRequirements* requirements) {
+ Image* image = GetImageFromHandle(image_handle);
+ requirements->size = image->size;
+ requirements->alignment = 16; // allow fast Neon/SSE memcpy
+ requirements->memoryTypeBits = 0x1;
+}
+
+void DestroyImage(VkDevice device,
+ VkImage image_handle,
+ const VkAllocationCallbacks* allocator) {
+ if (!allocator)
+ allocator = &device->allocator;
+ Image* image = GetImageFromHandle(image_handle);
+ allocator->pfnFree(allocator->pUserData, image);
+}
+
+VkResult GetSwapchainGrallocUsageANDROID(VkDevice,
+ VkFormat,
+ VkImageUsageFlags,
+ int* grallocUsage) {
+ // The null driver never reads or writes the gralloc buffer
+ *grallocUsage = 0;
+ return VK_SUCCESS;
+}
+
+VkResult AcquireImageANDROID(VkDevice,
+ VkImage,
+ int fence,
+ VkSemaphore,
+ VkFence) {
+ close(fence);
+ return VK_SUCCESS;
+}
+
+VkResult QueueSignalReleaseImageANDROID(VkQueue,
+ uint32_t,
+ const VkSemaphore*,
+ VkImage,
+ int* fence) {
+ *fence = -1;
+ return VK_SUCCESS;
+}
+
+// -----------------------------------------------------------------------------
+// No-op types
+
+VkResult CreateBufferView(VkDevice device,
+ const VkBufferViewCreateInfo*,
+ const VkAllocationCallbacks* /*allocator*/,
+ VkBufferView* view) {
+ *view = AllocHandle<VkBufferView>(device, HandleType::kBufferView);
+ return VK_SUCCESS;
+}
+
+VkResult CreateDescriptorPool(VkDevice device,
+ const VkDescriptorPoolCreateInfo*,
+ const VkAllocationCallbacks* /*allocator*/,
+ VkDescriptorPool* pool) {
+ *pool = AllocHandle<VkDescriptorPool>(device, HandleType::kDescriptorPool);
+ return VK_SUCCESS;
+}
+
+VkResult AllocateDescriptorSets(VkDevice device,
+ const VkDescriptorSetAllocateInfo* alloc_info,
+ VkDescriptorSet* descriptor_sets) {
+ for (uint32_t i = 0; i < alloc_info->descriptorSetCount; i++)
+ descriptor_sets[i] =
+ AllocHandle<VkDescriptorSet>(device, HandleType::kDescriptorSet);
+ return VK_SUCCESS;
+}
+
+VkResult CreateDescriptorSetLayout(VkDevice device,
+ const VkDescriptorSetLayoutCreateInfo*,
+ const VkAllocationCallbacks* /*allocator*/,
+ VkDescriptorSetLayout* layout) {
+ *layout = AllocHandle<VkDescriptorSetLayout>(
+ device, HandleType::kDescriptorSetLayout);
+ return VK_SUCCESS;
+}
+
+VkResult CreateEvent(VkDevice device,
+ const VkEventCreateInfo*,
+ const VkAllocationCallbacks* /*allocator*/,
+ VkEvent* event) {
+ *event = AllocHandle<VkEvent>(device, HandleType::kEvent);
+ return VK_SUCCESS;
+}
+
+VkResult CreateFence(VkDevice device,
+ const VkFenceCreateInfo*,
+ const VkAllocationCallbacks* /*allocator*/,
+ VkFence* fence) {
+ *fence = AllocHandle<VkFence>(device, HandleType::kFence);
+ return VK_SUCCESS;
+}
+
+VkResult CreateFramebuffer(VkDevice device,
+ const VkFramebufferCreateInfo*,
+ const VkAllocationCallbacks* /*allocator*/,
+ VkFramebuffer* framebuffer) {
+ *framebuffer = AllocHandle<VkFramebuffer>(device, HandleType::kFramebuffer);
+ return VK_SUCCESS;
+}
+
+VkResult CreateImageView(VkDevice device,
+ const VkImageViewCreateInfo*,
+ const VkAllocationCallbacks* /*allocator*/,
+ VkImageView* view) {
+ *view = AllocHandle<VkImageView>(device, HandleType::kImageView);
+ return VK_SUCCESS;
+}
+
+VkResult CreateGraphicsPipelines(VkDevice device,
+ VkPipelineCache,
+ uint32_t count,
+ const VkGraphicsPipelineCreateInfo*,
+ const VkAllocationCallbacks* /*allocator*/,
+ VkPipeline* pipelines) {
+ for (uint32_t i = 0; i < count; i++)
+ pipelines[i] = AllocHandle<VkPipeline>(device, HandleType::kPipeline);
+ return VK_SUCCESS;
+}
+
+VkResult CreateComputePipelines(VkDevice device,
+ VkPipelineCache,
+ uint32_t count,
+ const VkComputePipelineCreateInfo*,
+ const VkAllocationCallbacks* /*allocator*/,
+ VkPipeline* pipelines) {
+ for (uint32_t i = 0; i < count; i++)
+ pipelines[i] = AllocHandle<VkPipeline>(device, HandleType::kPipeline);
+ return VK_SUCCESS;
+}
+
+VkResult CreatePipelineCache(VkDevice device,
+ const VkPipelineCacheCreateInfo*,
+ const VkAllocationCallbacks* /*allocator*/,
+ VkPipelineCache* cache) {
+ *cache = AllocHandle<VkPipelineCache>(device, HandleType::kPipelineCache);
+ return VK_SUCCESS;
+}
+
+VkResult CreatePipelineLayout(VkDevice device,
+ const VkPipelineLayoutCreateInfo*,
+ const VkAllocationCallbacks* /*allocator*/,
+ VkPipelineLayout* layout) {
+ *layout =
+ AllocHandle<VkPipelineLayout>(device, HandleType::kPipelineLayout);
+ return VK_SUCCESS;
+}
+
+VkResult CreateQueryPool(VkDevice device,
+ const VkQueryPoolCreateInfo*,
+ const VkAllocationCallbacks* /*allocator*/,
+ VkQueryPool* pool) {
+ *pool = AllocHandle<VkQueryPool>(device, HandleType::kQueryPool);
+ return VK_SUCCESS;
+}
+
+VkResult CreateRenderPass(VkDevice device,
+ const VkRenderPassCreateInfo*,
+ const VkAllocationCallbacks* /*allocator*/,
+ VkRenderPass* renderpass) {
+ *renderpass = AllocHandle<VkRenderPass>(device, HandleType::kRenderPass);
+ return VK_SUCCESS;
+}
+
+VkResult CreateSampler(VkDevice device,
+ const VkSamplerCreateInfo*,
+ const VkAllocationCallbacks* /*allocator*/,
+ VkSampler* sampler) {
+ *sampler = AllocHandle<VkSampler>(device, HandleType::kSampler);
+ return VK_SUCCESS;
+}
+
+VkResult CreateSemaphore(VkDevice device,
+ const VkSemaphoreCreateInfo*,
+ const VkAllocationCallbacks* /*allocator*/,
+ VkSemaphore* semaphore) {
+ *semaphore = AllocHandle<VkSemaphore>(device, HandleType::kSemaphore);
+ return VK_SUCCESS;
+}
+
+VkResult CreateShaderModule(VkDevice device,
+ const VkShaderModuleCreateInfo*,
+ const VkAllocationCallbacks* /*allocator*/,
+ VkShaderModule* module) {
+ *module = AllocHandle<VkShaderModule>(device, HandleType::kShaderModule);
+ return VK_SUCCESS;
+}
+
+VkResult CreateDebugReportCallbackEXT(VkInstance instance,
+ const VkDebugReportCallbackCreateInfoEXT*,
+ const VkAllocationCallbacks*,
+ VkDebugReportCallbackEXT* callback) {
+ *callback = AllocHandle<VkDebugReportCallbackEXT>(
+ instance, HandleType::kDebugReportCallbackEXT);
+ return VK_SUCCESS;
+}
+
+// -----------------------------------------------------------------------------
+// No-op entrypoints: stubs that do nothing (some log a TODO) and return
+// VK_SUCCESS where a result is required.
+
+// clang-format off
+#pragma clang diagnostic push
+#pragma clang diagnostic ignored "-Wunused-parameter"
+
+void GetPhysicalDeviceFeatures(VkPhysicalDevice physicalDevice, VkPhysicalDeviceFeatures* pFeatures) {
+ ALOGV("TODO: vk%s", __FUNCTION__);
+}
+
+void GetPhysicalDeviceFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format, VkFormatProperties* pFormatProperties) {
+ ALOGV("TODO: vk%s", __FUNCTION__);
+}
+
+VkResult GetPhysicalDeviceImageFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type, VkImageTiling tiling, VkImageUsageFlags usage, VkImageCreateFlags flags, VkImageFormatProperties* pImageFormatProperties) {
+ ALOGV("TODO: vk%s", __FUNCTION__);
+ return VK_SUCCESS;
+}
+
+VkResult EnumerateInstanceLayerProperties(uint32_t* pCount, VkLayerProperties* pProperties) {
+ ALOGV("TODO: vk%s", __FUNCTION__);
+ return VK_SUCCESS;
+}
+
+VkResult QueueSubmit(VkQueue queue, uint32_t submitCount, const VkSubmitInfo* pSubmitInfo, VkFence fence) {
+ return VK_SUCCESS;
+}
+
+VkResult QueueWaitIdle(VkQueue queue) {
+ ALOGV("TODO: vk%s", __FUNCTION__);
+ return VK_SUCCESS;
+}
+
+VkResult DeviceWaitIdle(VkDevice device) {
+ ALOGV("TODO: vk%s", __FUNCTION__);
+ return VK_SUCCESS;
+}
+
+void UnmapMemory(VkDevice device, VkDeviceMemory mem) {
+}
+
+VkResult FlushMappedMemoryRanges(VkDevice device, uint32_t memRangeCount, const VkMappedMemoryRange* pMemRanges) {
+ ALOGV("TODO: vk%s", __FUNCTION__);
+ return VK_SUCCESS;
+}
+
+VkResult InvalidateMappedMemoryRanges(VkDevice device, uint32_t memRangeCount, const VkMappedMemoryRange* pMemRanges) {
+ ALOGV("TODO: vk%s", __FUNCTION__);
+ return VK_SUCCESS;
+}
+
+void GetDeviceMemoryCommitment(VkDevice device, VkDeviceMemory memory, VkDeviceSize* pCommittedMemoryInBytes) {
+ ALOGV("TODO: vk%s", __FUNCTION__);
+}
+
+VkResult BindBufferMemory(VkDevice device, VkBuffer buffer, VkDeviceMemory mem, VkDeviceSize memOffset) {
+ return VK_SUCCESS;
+}
+
+VkResult BindImageMemory(VkDevice device, VkImage image, VkDeviceMemory mem, VkDeviceSize memOffset) {
+ return VK_SUCCESS;
+}
+
+void GetImageSparseMemoryRequirements(VkDevice device, VkImage image, uint32_t* pNumRequirements, VkSparseImageMemoryRequirements* pSparseMemoryRequirements) {
+ ALOGV("TODO: vk%s", __FUNCTION__);
+}
+
+void GetPhysicalDeviceSparseImageFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type, VkSampleCountFlagBits samples, VkImageUsageFlags usage, VkImageTiling tiling, uint32_t* pNumProperties, VkSparseImageFormatProperties* pProperties) {
+ ALOGV("TODO: vk%s", __FUNCTION__);
+}
+
+VkResult QueueBindSparse(VkQueue queue, uint32_t bindInfoCount, const VkBindSparseInfo* pBindInfo, VkFence fence) {
+ ALOGV("TODO: vk%s", __FUNCTION__);
+ return VK_SUCCESS;
+}
+
+void DestroyFence(VkDevice device, VkFence fence, const VkAllocationCallbacks* allocator) {
+}
+
+VkResult ResetFences(VkDevice device, uint32_t fenceCount, const VkFence* pFences) {
+ return VK_SUCCESS;
+}
+
+VkResult GetFenceStatus(VkDevice device, VkFence fence) {
+ ALOGV("TODO: vk%s", __FUNCTION__);
+ return VK_SUCCESS;
+}
+
+VkResult WaitForFences(VkDevice device, uint32_t fenceCount, const VkFence* pFences, VkBool32 waitAll, uint64_t timeout) {
+ return VK_SUCCESS;
+}
+
+void DestroySemaphore(VkDevice device, VkSemaphore semaphore, const VkAllocationCallbacks* allocator) {
+}
+
+void DestroyEvent(VkDevice device, VkEvent event, const VkAllocationCallbacks* allocator) {
+}
+
+VkResult GetEventStatus(VkDevice device, VkEvent event) {
+ ALOGV("TODO: vk%s", __FUNCTION__);
+ return VK_SUCCESS;
+}
+
+VkResult SetEvent(VkDevice device, VkEvent event) {
+ ALOGV("TODO: vk%s", __FUNCTION__);
+ return VK_SUCCESS;
+}
+
+VkResult ResetEvent(VkDevice device, VkEvent event) {
+ ALOGV("TODO: vk%s", __FUNCTION__);
+ return VK_SUCCESS;
+}
+
+void DestroyQueryPool(VkDevice device, VkQueryPool queryPool, const VkAllocationCallbacks* allocator) {
+}
+
+VkResult GetQueryPoolResults(VkDevice device, VkQueryPool queryPool, uint32_t startQuery, uint32_t queryCount, size_t dataSize, void* pData, VkDeviceSize stride, VkQueryResultFlags flags) {
+ ALOGV("TODO: vk%s", __FUNCTION__);
+ return VK_SUCCESS;
+}
+
+void DestroyBufferView(VkDevice device, VkBufferView bufferView, const VkAllocationCallbacks* allocator) {
+}
+
+void GetImageSubresourceLayout(VkDevice device, VkImage image, const VkImageSubresource* pSubresource, VkSubresourceLayout* pLayout) {
+ ALOGV("TODO: vk%s", __FUNCTION__);
+}
+
+void DestroyImageView(VkDevice device, VkImageView imageView, const VkAllocationCallbacks* allocator) {
+}
+
+void DestroyShaderModule(VkDevice device, VkShaderModule shaderModule, const VkAllocationCallbacks* allocator) {
+}
+
+void DestroyPipelineCache(VkDevice device, VkPipelineCache pipelineCache, const VkAllocationCallbacks* allocator) {
+}
+
+VkResult GetPipelineCacheData(VkDevice device, VkPipelineCache pipelineCache, size_t* pDataSize, void* pData) {
+ ALOGV("TODO: vk%s", __FUNCTION__);
+ return VK_SUCCESS;
+}
+
+VkResult MergePipelineCaches(VkDevice device, VkPipelineCache destCache, uint32_t srcCacheCount, const VkPipelineCache* pSrcCaches) {
+ ALOGV("TODO: vk%s", __FUNCTION__);
+ return VK_SUCCESS;
+}
+
+void DestroyPipeline(VkDevice device, VkPipeline pipeline, const VkAllocationCallbacks* allocator) {
+}
+
+void DestroyPipelineLayout(VkDevice device, VkPipelineLayout pipelineLayout, const VkAllocationCallbacks* allocator) {
+}
+
+void DestroySampler(VkDevice device, VkSampler sampler, const VkAllocationCallbacks* allocator) {
+}
+
+void DestroyDescriptorSetLayout(VkDevice device, VkDescriptorSetLayout descriptorSetLayout, const VkAllocationCallbacks* allocator) {
+}
+
+void DestroyDescriptorPool(VkDevice device, VkDescriptorPool descriptorPool, const VkAllocationCallbacks* allocator) {
+}
+
+VkResult ResetDescriptorPool(VkDevice device, VkDescriptorPool descriptorPool, VkDescriptorPoolResetFlags flags) {
+ ALOGV("TODO: vk%s", __FUNCTION__);
+ return VK_SUCCESS;
+}
+
+void UpdateDescriptorSets(VkDevice device, uint32_t writeCount, const VkWriteDescriptorSet* pDescriptorWrites, uint32_t copyCount, const VkCopyDescriptorSet* pDescriptorCopies) {
+ ALOGV("TODO: vk%s", __FUNCTION__);
+}
+
+VkResult FreeDescriptorSets(VkDevice device, VkDescriptorPool descriptorPool, uint32_t count, const VkDescriptorSet* pDescriptorSets) {
+ ALOGV("TODO: vk%s", __FUNCTION__);
+ return VK_SUCCESS;
+}
+
+void DestroyFramebuffer(VkDevice device, VkFramebuffer framebuffer, const VkAllocationCallbacks* allocator) {
+}
+
+void DestroyRenderPass(VkDevice device, VkRenderPass renderPass, const VkAllocationCallbacks* allocator) {
+}
+
+void GetRenderAreaGranularity(VkDevice device, VkRenderPass renderPass, VkExtent2D* pGranularity) {
+ ALOGV("TODO: vk%s", __FUNCTION__);
+}
+
+VkResult ResetCommandPool(VkDevice device, VkCommandPool cmdPool, VkCommandPoolResetFlags flags) {
+ ALOGV("TODO: vk%s", __FUNCTION__);
+ return VK_SUCCESS;
+}
+
+VkResult BeginCommandBuffer(VkCommandBuffer cmdBuffer, const VkCommandBufferBeginInfo* pBeginInfo) {
+ return VK_SUCCESS;
+}
+
+VkResult EndCommandBuffer(VkCommandBuffer cmdBuffer) {
+ return VK_SUCCESS;
+}
+
+VkResult ResetCommandBuffer(VkCommandBuffer cmdBuffer, VkCommandBufferResetFlags flags) {
+ ALOGV("TODO: vk%s", __FUNCTION__);
+ return VK_SUCCESS;
+}
+
+void CmdBindPipeline(VkCommandBuffer cmdBuffer, VkPipelineBindPoint pipelineBindPoint, VkPipeline pipeline) {
+}
+
+void CmdSetViewport(VkCommandBuffer cmdBuffer, uint32_t firstViewport, uint32_t viewportCount, const VkViewport* pViewports) {
+}
+
+void CmdSetScissor(VkCommandBuffer cmdBuffer, uint32_t firstScissor, uint32_t scissorCount, const VkRect2D* pScissors) {
+}
+
+void CmdSetLineWidth(VkCommandBuffer cmdBuffer, float lineWidth) {
+}
+
+void CmdSetDepthBias(VkCommandBuffer cmdBuffer, float depthBias, float depthBiasClamp, float slopeScaledDepthBias) {
+}
+
+void CmdSetBlendConstants(VkCommandBuffer cmdBuffer, const float blendConst[4]) {
+}
+
+void CmdSetDepthBounds(VkCommandBuffer cmdBuffer, float minDepthBounds, float maxDepthBounds) {
+}
+
+void CmdSetStencilCompareMask(VkCommandBuffer cmdBuffer, VkStencilFaceFlags faceMask, uint32_t stencilCompareMask) {
+}
+
+void CmdSetStencilWriteMask(VkCommandBuffer cmdBuffer, VkStencilFaceFlags faceMask, uint32_t stencilWriteMask) {
+}
+
+void CmdSetStencilReference(VkCommandBuffer cmdBuffer, VkStencilFaceFlags faceMask, uint32_t stencilReference) {
+}
+
+void CmdBindDescriptorSets(VkCommandBuffer cmdBuffer, VkPipelineBindPoint pipelineBindPoint, VkPipelineLayout layout, uint32_t firstSet, uint32_t setCount, const VkDescriptorSet* pDescriptorSets, uint32_t dynamicOffsetCount, const uint32_t* pDynamicOffsets) {
+}
+
+void CmdBindIndexBuffer(VkCommandBuffer cmdBuffer, VkBuffer buffer, VkDeviceSize offset, VkIndexType indexType) {
+}
+
+void CmdBindVertexBuffers(VkCommandBuffer cmdBuffer, uint32_t startBinding, uint32_t bindingCount, const VkBuffer* pBuffers, const VkDeviceSize* pOffsets) {
+}
+
+void CmdDraw(VkCommandBuffer cmdBuffer, uint32_t vertexCount, uint32_t instanceCount, uint32_t firstVertex, uint32_t firstInstance) {
+}
+
+void CmdDrawIndexed(VkCommandBuffer cmdBuffer, uint32_t indexCount, uint32_t instanceCount, uint32_t firstIndex, int32_t vertexOffset, uint32_t firstInstance) {
+}
+
+void CmdDrawIndirect(VkCommandBuffer cmdBuffer, VkBuffer buffer, VkDeviceSize offset, uint32_t count, uint32_t stride) {
+}
+
+void CmdDrawIndexedIndirect(VkCommandBuffer cmdBuffer, VkBuffer buffer, VkDeviceSize offset, uint32_t count, uint32_t stride) {
+}
+
+void CmdDispatch(VkCommandBuffer cmdBuffer, uint32_t x, uint32_t y, uint32_t z) {
+}
+
+void CmdDispatchIndirect(VkCommandBuffer cmdBuffer, VkBuffer buffer, VkDeviceSize offset) {
+}
+
+void CmdCopyBuffer(VkCommandBuffer cmdBuffer, VkBuffer srcBuffer, VkBuffer destBuffer, uint32_t regionCount, const VkBufferCopy* pRegions) {
+}
+
+void CmdCopyImage(VkCommandBuffer cmdBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage destImage, VkImageLayout destImageLayout, uint32_t regionCount, const VkImageCopy* pRegions) {
+}
+
+void CmdBlitImage(VkCommandBuffer cmdBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage destImage, VkImageLayout destImageLayout, uint32_t regionCount, const VkImageBlit* pRegions, VkFilter filter) {
+}
+
+void CmdCopyBufferToImage(VkCommandBuffer cmdBuffer, VkBuffer srcBuffer, VkImage destImage, VkImageLayout destImageLayout, uint32_t regionCount, const VkBufferImageCopy* pRegions) {
+}
+
+void CmdCopyImageToBuffer(VkCommandBuffer cmdBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkBuffer destBuffer, uint32_t regionCount, const VkBufferImageCopy* pRegions) {
+}
+
+void CmdUpdateBuffer(VkCommandBuffer cmdBuffer, VkBuffer destBuffer, VkDeviceSize destOffset, VkDeviceSize dataSize, const uint32_t* pData) {
+}
+
+void CmdFillBuffer(VkCommandBuffer cmdBuffer, VkBuffer destBuffer, VkDeviceSize destOffset, VkDeviceSize fillSize, uint32_t data) {
+}
+
+void CmdClearColorImage(VkCommandBuffer cmdBuffer, VkImage image, VkImageLayout imageLayout, const VkClearColorValue* pColor, uint32_t rangeCount, const VkImageSubresourceRange* pRanges) {
+}
+
+void CmdClearDepthStencilImage(VkCommandBuffer cmdBuffer, VkImage image, VkImageLayout imageLayout, const VkClearDepthStencilValue* pDepthStencil, uint32_t rangeCount, const VkImageSubresourceRange* pRanges) {
+}
+
+void CmdClearAttachments(VkCommandBuffer cmdBuffer, uint32_t attachmentCount, const VkClearAttachment* pAttachments, uint32_t rectCount, const VkClearRect* pRects) {
+}
+
+void CmdResolveImage(VkCommandBuffer cmdBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage destImage, VkImageLayout destImageLayout, uint32_t regionCount, const VkImageResolve* pRegions) {
+}
+
+void CmdSetEvent(VkCommandBuffer cmdBuffer, VkEvent event, VkPipelineStageFlags stageMask) {
+}
+
+void CmdResetEvent(VkCommandBuffer cmdBuffer, VkEvent event, VkPipelineStageFlags stageMask) {
+}
+
+void CmdWaitEvents(VkCommandBuffer commandBuffer, uint32_t eventCount, const VkEvent* pEvents, VkPipelineStageFlags srcStageMask, VkPipelineStageFlags dstStageMask, uint32_t memoryBarrierCount, const VkMemoryBarrier* pMemoryBarriers, uint32_t bufferMemoryBarrierCount, const VkBufferMemoryBarrier* pBufferMemoryBarriers, uint32_t imageMemoryBarrierCount, const VkImageMemoryBarrier* pImageMemoryBarriers) {
+}
+
+void CmdPipelineBarrier(VkCommandBuffer commandBuffer, VkPipelineStageFlags srcStageMask, VkPipelineStageFlags dstStageMask, VkDependencyFlags dependencyFlags, uint32_t memoryBarrierCount, const VkMemoryBarrier* pMemoryBarriers, uint32_t bufferMemoryBarrierCount, const VkBufferMemoryBarrier* pBufferMemoryBarriers, uint32_t imageMemoryBarrierCount, const VkImageMemoryBarrier* pImageMemoryBarriers) {
+}
+
+void CmdBeginQuery(VkCommandBuffer cmdBuffer, VkQueryPool queryPool, uint32_t slot, VkQueryControlFlags flags) {
+}
+
+void CmdEndQuery(VkCommandBuffer cmdBuffer, VkQueryPool queryPool, uint32_t slot) {
+}
+
+void CmdResetQueryPool(VkCommandBuffer cmdBuffer, VkQueryPool queryPool, uint32_t startQuery, uint32_t queryCount) {
+}
+
+void CmdWriteTimestamp(VkCommandBuffer cmdBuffer, VkPipelineStageFlagBits pipelineStage, VkQueryPool queryPool, uint32_t slot) {
+}
+
+void CmdCopyQueryPoolResults(VkCommandBuffer cmdBuffer, VkQueryPool queryPool, uint32_t startQuery, uint32_t queryCount, VkBuffer destBuffer, VkDeviceSize destOffset, VkDeviceSize destStride, VkQueryResultFlags flags) {
+}
+
+void CmdPushConstants(VkCommandBuffer cmdBuffer, VkPipelineLayout layout, VkShaderStageFlags stageFlags, uint32_t start, uint32_t length, const void* values) {
+}
+
+void CmdBeginRenderPass(VkCommandBuffer cmdBuffer, const VkRenderPassBeginInfo* pRenderPassBegin, VkSubpassContents contents) {
+}
+
+void CmdNextSubpass(VkCommandBuffer cmdBuffer, VkSubpassContents contents) {
+}
+
+void CmdEndRenderPass(VkCommandBuffer cmdBuffer) {
+}
+
+void CmdExecuteCommands(VkCommandBuffer cmdBuffer, uint32_t cmdBuffersCount, const VkCommandBuffer* pCmdBuffers) {
+}
+
+void DestroyDebugReportCallbackEXT(VkInstance instance, VkDebugReportCallbackEXT callback, const VkAllocationCallbacks* pAllocator) {
+}
+
+void DebugReportMessageEXT(VkInstance instance, VkDebugReportFlagsEXT flags, VkDebugReportObjectTypeEXT objectType, uint64_t object, size_t location, int32_t messageCode, const char* pLayerPrefix, const char* pMessage) {
+}
+
+#pragma clang diagnostic pop
+// clang-format on
+
+} // namespace null_driver
diff --git a/vulkan/nulldrv/null_driver.tmpl b/vulkan/nulldrv/null_driver.tmpl
new file mode 100644
index 0000000..57e72d3
--- /dev/null
+++ b/vulkan/nulldrv/null_driver.tmpl
@@ -0,0 +1,223 @@
+{{/*
+ * Copyright 2015 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */}}
+
+{{Include "../api/templates/vulkan_common.tmpl"}}
+{{Global "clang-format" (Strings "clang-format" "-style=file")}}
+{{Macro "DefineGlobals" $}}
+{{$ | Macro "null_driver_gen.h" | Format (Global "clang-format") | Write "null_driver_gen.h" }}
+{{$ | Macro "null_driver_gen.cpp" | Format (Global "clang-format") | Write "null_driver_gen.cpp"}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ null_driver_gen.h
+-------------------------------------------------------------------------------
+*/}}
+{{define "null_driver_gen.h"}}
+/*
+•* Copyright 2015 The Android Open Source Project
+•*
+•* Licensed under the Apache License, Version 2.0 (the "License");
+•* you may not use this file except in compliance with the License.
+•* You may obtain a copy of the License at
+•*
+•* http://www.apache.org/licenses/LICENSE-2.0
+•*
+•* Unless required by applicable law or agreed to in writing, software
+•* distributed under the License is distributed on an "AS IS" BASIS,
+•* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+•* See the License for the specific language governing permissions and
+•* limitations under the License.
+•*/
+¶
+// This file is generated. Do not edit manually!
+// To regenerate: $ apic template ../api/vulkan.api null_driver.tmpl
+// Requires apic from https://android.googlesource.com/platform/tools/gpu/.
+¶
+#ifndef NULLDRV_NULL_DRIVER_H
+#define NULLDRV_NULL_DRIVER_H 1
+¶
+#include <vulkan/vk_android_native_buffer.h>
+#include <vulkan/vk_ext_debug_report.h>
+#include <vulkan/vulkan.h>
+¶
+namespace null_driver {«
+¶
+PFN_vkVoidFunction GetGlobalProcAddr(const char* name);
+PFN_vkVoidFunction GetInstanceProcAddr(const char* name);
+¶
+// clang-format off
+ {{range $f := AllCommands $}}
+ {{if (Macro "IsDriverFunction" $f)}}
+VKAPI_ATTR {{Node "Type" $f.Return}} {{Macro "BaseName" $f}}({{Macro "Parameters" $f}});
+ {{end}}
+ {{end}}
+VKAPI_ATTR VkResult GetSwapchainGrallocUsageANDROID(VkDevice device, VkFormat format, VkImageUsageFlags imageUsage, int* grallocUsage);
+VKAPI_ATTR VkResult AcquireImageANDROID(VkDevice device, VkImage image, int nativeFenceFd, VkSemaphore semaphore, VkFence fence);
+VKAPI_ATTR VkResult QueueSignalReleaseImageANDROID(VkQueue queue, uint32_t waitSemaphoreCount, const VkSemaphore* pWaitSemaphores, VkImage image, int* pNativeFenceFd);
+// clang-format on
+¶
+»} // namespace null_driver
+¶
+#endif // NULLDRV_NULL_DRIVER_H
+¶{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ null_driver_gen.cpp
+-------------------------------------------------------------------------------
+*/}}
+{{define "null_driver_gen.cpp"}}
+/*
+•* Copyright 2015 The Android Open Source Project
+•*
+•* Licensed under the Apache License, Version 2.0 (the "License");
+•* you may not use this file except in compliance with the License.
+•* You may obtain a copy of the License at
+•*
+•* http://www.apache.org/licenses/LICENSE-2.0
+•*
+•* Unless required by applicable law or agreed to in writing, software
+•* distributed under the License is distributed on an "AS IS" BASIS,
+•* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+•* See the License for the specific language governing permissions and
+•* limitations under the License.
+•*/
+¶
+// This file is generated. Do not edit manually!
+// To regenerate: $ apic template ../api/vulkan.api null_driver.tmpl
+// Requires apic from https://android.googlesource.com/platform/tools/gpu/.
+¶
+#include "null_driver_gen.h"
+#include <algorithm>
+#include <string.h>
+¶
+using namespace null_driver;
+¶
+namespace {
+¶
+struct NameProc {
+ const char* name;
+ PFN_vkVoidFunction proc;
+};
+¶
+PFN_vkVoidFunction Lookup(const char* name,
+ const NameProc* begin,
+ const NameProc* end) {
+ const auto& entry = std::lower_bound(
+ begin, end, name,
+ [](const NameProc& e, const char* n) { return strcmp(e.name, n) < 0; });
+ if (entry == end || strcmp(entry->name, name) != 0)
+ return nullptr;
+ return entry->proc;
+}
+¶
+template <size_t N>
+PFN_vkVoidFunction Lookup(const char* name, const NameProc (&procs)[N]) {
+ return Lookup(name, procs, procs + N);
+}
+¶
+const NameProc kGlobalProcs[] = {«
+ // clang-format off
+ {{range $f := SortBy (AllCommands $) "FunctionName"}}
+ {{if and (Macro "IsDriverFunction" $f) (eq (Macro "Vtbl" $f) "Global")}}
+ {"{{$f.Name}}", reinterpret_cast<PFN_vkVoidFunction>(§
+ static_cast<{{Macro "FunctionPtrName" $f}}>(§
+ {{Macro "BaseName" $f}}))},
+ {{end}}
+ {{end}}
+ // clang-format on
+»};
+¶
+const NameProc kInstanceProcs[] = {«
+ // clang-format off
+ {{range $f := SortBy (AllCommands $) "FunctionName"}}
+ {{if (Macro "IsDriverFunction" $f)}}
+ {"{{$f.Name}}", reinterpret_cast<PFN_vkVoidFunction>(§
+ static_cast<{{Macro "FunctionPtrName" $f}}>(§
+ {{Macro "BaseName" $f}}))},
+ {{end}}
+ {{end}}
+ // clang-format on
+»};
+¶
+} // namespace
+¶
+namespace null_driver {
+¶
+PFN_vkVoidFunction GetGlobalProcAddr(const char* name) {
+ return Lookup(name, kGlobalProcs);
+}
+¶
+PFN_vkVoidFunction GetInstanceProcAddr(const char* name) {«
+ PFN_vkVoidFunction pfn;
+ if ((pfn = Lookup(name, kInstanceProcs)))
+ return pfn;
+ if (strcmp(name, "vkGetSwapchainGrallocUsageANDROID") == 0)
+ return reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetSwapchainGrallocUsageANDROID>(GetSwapchainGrallocUsageANDROID));
+ if (strcmp(name, "vkAcquireImageANDROID") == 0)
+ return reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkAcquireImageANDROID>(AcquireImageANDROID));
+ if (strcmp(name, "vkQueueSignalReleaseImageANDROID") == 0)
+ return reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkQueueSignalReleaseImageANDROID>(QueueSignalReleaseImageANDROID));
+ return nullptr;
+»}
+¶
+} // namespace null_driver
+¶
+{{end}}
+
+
+{{/*
+-------------------------------------------------------------------------------
+ Emits a function name without the "vk" prefix.
+-------------------------------------------------------------------------------
+*/}}
+{{define "BaseName"}}
+ {{AssertType $ "Function"}}
+ {{TrimPrefix "vk" $.Name}}
+{{end}}
+
+
+{{/*
+------------------------------------------------------------------------------
+ Emits 'true' if the API function is implemented by the driver.
+------------------------------------------------------------------------------
+*/}}
+{{define "IsDriverFunction"}}
+ {{AssertType $ "Function"}}
+
+ {{if not (GetAnnotation $ "pfn")}}
+ {{$ext := GetAnnotation $ "extension"}}
+ {{if $ext}}
+ {{Macro "IsDriverExtension" $ext}}
+ {{else}}
+ true
+ {{end}}
+ {{end}}
+{{end}}
+
+
+{{/*
+------------------------------------------------------------------------------
+ Reports whether an extension is implemented by the driver.
+------------------------------------------------------------------------------
+*/}}
+{{define "IsDriverExtension"}}
+ {{$ext := index $.Arguments 0}}
+ {{ if eq $ext "VK_ANDROID_native_buffer"}}true
+ {{else if eq $ext "VK_EXT_debug_report"}}true
+ {{end}}
+{{end}}
diff --git a/vulkan/nulldrv/null_driver_gen.cpp b/vulkan/nulldrv/null_driver_gen.cpp
new file mode 100644
index 0000000..c5f42b0
--- /dev/null
+++ b/vulkan/nulldrv/null_driver_gen.cpp
@@ -0,0 +1,228 @@
+/*
+ * Copyright 2015 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+// This file is generated. Do not edit manually!
+// To regenerate: $ apic template ../api/vulkan.api null_driver.tmpl
+// Requires apic from https://android.googlesource.com/platform/tools/gpu/.
+
+#include "null_driver_gen.h"
+#include <algorithm>
+#include <string.h>
+
+using namespace null_driver;
+
+namespace {
+
+struct NameProc {
+ const char* name;
+ PFN_vkVoidFunction proc;
+};
+
+PFN_vkVoidFunction Lookup(const char* name,
+ const NameProc* begin,
+ const NameProc* end) {
+ const auto& entry = std::lower_bound(
+ begin, end, name,
+ [](const NameProc& e, const char* n) { return strcmp(e.name, n) < 0; });
+ if (entry == end || strcmp(entry->name, name) != 0)
+ return nullptr;
+ return entry->proc;
+}
+
+template <size_t N>
+PFN_vkVoidFunction Lookup(const char* name, const NameProc (&procs)[N]) {
+ return Lookup(name, procs, procs + N);
+}
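+
+// Example (illustrative, not part of the generated driver): the proc tables
+// below are sorted by entrypoint name, which is what makes the binary search
+// (std::lower_bound) in Lookup() valid. A caller would resolve and invoke an
+// entrypoint roughly like this:
+//
+//   PFN_vkVoidFunction pfn = Lookup("vkCreateInstance", kGlobalProcs);
+//   if (pfn) {
+//       auto create_instance = reinterpret_cast<PFN_vkCreateInstance>(pfn);
+//       // create_instance(...) now dispatches into the null driver.
+//   }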
+
+const NameProc kGlobalProcs[] = {
+ // clang-format off
+ {"vkCreateInstance", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateInstance>(CreateInstance))},
+ {"vkEnumerateInstanceExtensionProperties", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkEnumerateInstanceExtensionProperties>(EnumerateInstanceExtensionProperties))},
+ {"vkEnumerateInstanceLayerProperties", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkEnumerateInstanceLayerProperties>(EnumerateInstanceLayerProperties))},
+ // clang-format on
+};
+
+const NameProc kInstanceProcs[] = {
+ // clang-format off
+ {"vkAllocateCommandBuffers", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkAllocateCommandBuffers>(AllocateCommandBuffers))},
+ {"vkAllocateDescriptorSets", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkAllocateDescriptorSets>(AllocateDescriptorSets))},
+ {"vkAllocateMemory", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkAllocateMemory>(AllocateMemory))},
+ {"vkBeginCommandBuffer", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkBeginCommandBuffer>(BeginCommandBuffer))},
+ {"vkBindBufferMemory", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkBindBufferMemory>(BindBufferMemory))},
+ {"vkBindImageMemory", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkBindImageMemory>(BindImageMemory))},
+ {"vkCmdBeginQuery", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdBeginQuery>(CmdBeginQuery))},
+ {"vkCmdBeginRenderPass", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdBeginRenderPass>(CmdBeginRenderPass))},
+ {"vkCmdBindDescriptorSets", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdBindDescriptorSets>(CmdBindDescriptorSets))},
+ {"vkCmdBindIndexBuffer", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdBindIndexBuffer>(CmdBindIndexBuffer))},
+ {"vkCmdBindPipeline", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdBindPipeline>(CmdBindPipeline))},
+ {"vkCmdBindVertexBuffers", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdBindVertexBuffers>(CmdBindVertexBuffers))},
+ {"vkCmdBlitImage", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdBlitImage>(CmdBlitImage))},
+ {"vkCmdClearAttachments", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdClearAttachments>(CmdClearAttachments))},
+ {"vkCmdClearColorImage", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdClearColorImage>(CmdClearColorImage))},
+ {"vkCmdClearDepthStencilImage", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdClearDepthStencilImage>(CmdClearDepthStencilImage))},
+ {"vkCmdCopyBuffer", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdCopyBuffer>(CmdCopyBuffer))},
+ {"vkCmdCopyBufferToImage", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdCopyBufferToImage>(CmdCopyBufferToImage))},
+ {"vkCmdCopyImage", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdCopyImage>(CmdCopyImage))},
+ {"vkCmdCopyImageToBuffer", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdCopyImageToBuffer>(CmdCopyImageToBuffer))},
+ {"vkCmdCopyQueryPoolResults", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdCopyQueryPoolResults>(CmdCopyQueryPoolResults))},
+ {"vkCmdDispatch", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdDispatch>(CmdDispatch))},
+ {"vkCmdDispatchIndirect", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdDispatchIndirect>(CmdDispatchIndirect))},
+ {"vkCmdDraw", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdDraw>(CmdDraw))},
+ {"vkCmdDrawIndexed", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdDrawIndexed>(CmdDrawIndexed))},
+ {"vkCmdDrawIndexedIndirect", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdDrawIndexedIndirect>(CmdDrawIndexedIndirect))},
+ {"vkCmdDrawIndirect", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdDrawIndirect>(CmdDrawIndirect))},
+ {"vkCmdEndQuery", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdEndQuery>(CmdEndQuery))},
+ {"vkCmdEndRenderPass", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdEndRenderPass>(CmdEndRenderPass))},
+ {"vkCmdExecuteCommands", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdExecuteCommands>(CmdExecuteCommands))},
+ {"vkCmdFillBuffer", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdFillBuffer>(CmdFillBuffer))},
+ {"vkCmdNextSubpass", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdNextSubpass>(CmdNextSubpass))},
+ {"vkCmdPipelineBarrier", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdPipelineBarrier>(CmdPipelineBarrier))},
+ {"vkCmdPushConstants", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdPushConstants>(CmdPushConstants))},
+ {"vkCmdResetEvent", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdResetEvent>(CmdResetEvent))},
+ {"vkCmdResetQueryPool", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdResetQueryPool>(CmdResetQueryPool))},
+ {"vkCmdResolveImage", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdResolveImage>(CmdResolveImage))},
+ {"vkCmdSetBlendConstants", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdSetBlendConstants>(CmdSetBlendConstants))},
+ {"vkCmdSetDepthBias", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdSetDepthBias>(CmdSetDepthBias))},
+ {"vkCmdSetDepthBounds", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdSetDepthBounds>(CmdSetDepthBounds))},
+ {"vkCmdSetEvent", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdSetEvent>(CmdSetEvent))},
+ {"vkCmdSetLineWidth", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdSetLineWidth>(CmdSetLineWidth))},
+ {"vkCmdSetScissor", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdSetScissor>(CmdSetScissor))},
+ {"vkCmdSetStencilCompareMask", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdSetStencilCompareMask>(CmdSetStencilCompareMask))},
+ {"vkCmdSetStencilReference", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdSetStencilReference>(CmdSetStencilReference))},
+ {"vkCmdSetStencilWriteMask", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdSetStencilWriteMask>(CmdSetStencilWriteMask))},
+ {"vkCmdSetViewport", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdSetViewport>(CmdSetViewport))},
+ {"vkCmdUpdateBuffer", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdUpdateBuffer>(CmdUpdateBuffer))},
+ {"vkCmdWaitEvents", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdWaitEvents>(CmdWaitEvents))},
+ {"vkCmdWriteTimestamp", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCmdWriteTimestamp>(CmdWriteTimestamp))},
+ {"vkCreateBuffer", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateBuffer>(CreateBuffer))},
+ {"vkCreateBufferView", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateBufferView>(CreateBufferView))},
+ {"vkCreateCommandPool", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateCommandPool>(CreateCommandPool))},
+ {"vkCreateComputePipelines", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateComputePipelines>(CreateComputePipelines))},
+ {"vkCreateDebugReportCallbackEXT", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateDebugReportCallbackEXT>(CreateDebugReportCallbackEXT))},
+ {"vkCreateDescriptorPool", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateDescriptorPool>(CreateDescriptorPool))},
+ {"vkCreateDescriptorSetLayout", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateDescriptorSetLayout>(CreateDescriptorSetLayout))},
+ {"vkCreateDevice", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateDevice>(CreateDevice))},
+ {"vkCreateEvent", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateEvent>(CreateEvent))},
+ {"vkCreateFence", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateFence>(CreateFence))},
+ {"vkCreateFramebuffer", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateFramebuffer>(CreateFramebuffer))},
+ {"vkCreateGraphicsPipelines", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateGraphicsPipelines>(CreateGraphicsPipelines))},
+ {"vkCreateImage", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateImage>(CreateImage))},
+ {"vkCreateImageView", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateImageView>(CreateImageView))},
+ {"vkCreateInstance", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateInstance>(CreateInstance))},
+ {"vkCreatePipelineCache", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreatePipelineCache>(CreatePipelineCache))},
+ {"vkCreatePipelineLayout", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreatePipelineLayout>(CreatePipelineLayout))},
+ {"vkCreateQueryPool", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateQueryPool>(CreateQueryPool))},
+ {"vkCreateRenderPass", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateRenderPass>(CreateRenderPass))},
+ {"vkCreateSampler", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateSampler>(CreateSampler))},
+ {"vkCreateSemaphore", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateSemaphore>(CreateSemaphore))},
+ {"vkCreateShaderModule", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkCreateShaderModule>(CreateShaderModule))},
+ {"vkDebugReportMessageEXT", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDebugReportMessageEXT>(DebugReportMessageEXT))},
+ {"vkDestroyBuffer", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDestroyBuffer>(DestroyBuffer))},
+ {"vkDestroyBufferView", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDestroyBufferView>(DestroyBufferView))},
+ {"vkDestroyCommandPool", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDestroyCommandPool>(DestroyCommandPool))},
+ {"vkDestroyDebugReportCallbackEXT", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDestroyDebugReportCallbackEXT>(DestroyDebugReportCallbackEXT))},
+ {"vkDestroyDescriptorPool", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDestroyDescriptorPool>(DestroyDescriptorPool))},
+ {"vkDestroyDescriptorSetLayout", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDestroyDescriptorSetLayout>(DestroyDescriptorSetLayout))},
+ {"vkDestroyDevice", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDestroyDevice>(DestroyDevice))},
+ {"vkDestroyEvent", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDestroyEvent>(DestroyEvent))},
+ {"vkDestroyFence", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDestroyFence>(DestroyFence))},
+ {"vkDestroyFramebuffer", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDestroyFramebuffer>(DestroyFramebuffer))},
+ {"vkDestroyImage", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDestroyImage>(DestroyImage))},
+ {"vkDestroyImageView", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDestroyImageView>(DestroyImageView))},
+ {"vkDestroyInstance", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDestroyInstance>(DestroyInstance))},
+ {"vkDestroyPipeline", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDestroyPipeline>(DestroyPipeline))},
+ {"vkDestroyPipelineCache", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDestroyPipelineCache>(DestroyPipelineCache))},
+ {"vkDestroyPipelineLayout", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDestroyPipelineLayout>(DestroyPipelineLayout))},
+ {"vkDestroyQueryPool", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDestroyQueryPool>(DestroyQueryPool))},
+ {"vkDestroyRenderPass", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDestroyRenderPass>(DestroyRenderPass))},
+ {"vkDestroySampler", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDestroySampler>(DestroySampler))},
+ {"vkDestroySemaphore", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDestroySemaphore>(DestroySemaphore))},
+ {"vkDestroyShaderModule", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDestroyShaderModule>(DestroyShaderModule))},
+ {"vkDeviceWaitIdle", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkDeviceWaitIdle>(DeviceWaitIdle))},
+ {"vkEndCommandBuffer", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkEndCommandBuffer>(EndCommandBuffer))},
+ {"vkEnumerateDeviceExtensionProperties", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkEnumerateDeviceExtensionProperties>(EnumerateDeviceExtensionProperties))},
+ {"vkEnumerateDeviceLayerProperties", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkEnumerateDeviceLayerProperties>(EnumerateDeviceLayerProperties))},
+ {"vkEnumerateInstanceExtensionProperties", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkEnumerateInstanceExtensionProperties>(EnumerateInstanceExtensionProperties))},
+ {"vkEnumerateInstanceLayerProperties", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkEnumerateInstanceLayerProperties>(EnumerateInstanceLayerProperties))},
+ {"vkEnumeratePhysicalDevices", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkEnumeratePhysicalDevices>(EnumeratePhysicalDevices))},
+ {"vkFlushMappedMemoryRanges", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkFlushMappedMemoryRanges>(FlushMappedMemoryRanges))},
+ {"vkFreeCommandBuffers", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkFreeCommandBuffers>(FreeCommandBuffers))},
+ {"vkFreeDescriptorSets", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkFreeDescriptorSets>(FreeDescriptorSets))},
+ {"vkFreeMemory", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkFreeMemory>(FreeMemory))},
+ {"vkGetBufferMemoryRequirements", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetBufferMemoryRequirements>(GetBufferMemoryRequirements))},
+ {"vkGetDeviceMemoryCommitment", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetDeviceMemoryCommitment>(GetDeviceMemoryCommitment))},
+ {"vkGetDeviceProcAddr", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetDeviceProcAddr>(GetDeviceProcAddr))},
+ {"vkGetDeviceQueue", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetDeviceQueue>(GetDeviceQueue))},
+ {"vkGetEventStatus", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetEventStatus>(GetEventStatus))},
+ {"vkGetFenceStatus", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetFenceStatus>(GetFenceStatus))},
+ {"vkGetImageMemoryRequirements", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetImageMemoryRequirements>(GetImageMemoryRequirements))},
+ {"vkGetImageSparseMemoryRequirements", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetImageSparseMemoryRequirements>(GetImageSparseMemoryRequirements))},
+ {"vkGetImageSubresourceLayout", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetImageSubresourceLayout>(GetImageSubresourceLayout))},
+ {"vkGetInstanceProcAddr", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetInstanceProcAddr>(GetInstanceProcAddr))},
+ {"vkGetPhysicalDeviceFeatures", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetPhysicalDeviceFeatures>(GetPhysicalDeviceFeatures))},
+ {"vkGetPhysicalDeviceFormatProperties", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetPhysicalDeviceFormatProperties>(GetPhysicalDeviceFormatProperties))},
+ {"vkGetPhysicalDeviceImageFormatProperties", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetPhysicalDeviceImageFormatProperties>(GetPhysicalDeviceImageFormatProperties))},
+ {"vkGetPhysicalDeviceMemoryProperties", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetPhysicalDeviceMemoryProperties>(GetPhysicalDeviceMemoryProperties))},
+ {"vkGetPhysicalDeviceProperties", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetPhysicalDeviceProperties>(GetPhysicalDeviceProperties))},
+ {"vkGetPhysicalDeviceQueueFamilyProperties", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetPhysicalDeviceQueueFamilyProperties>(GetPhysicalDeviceQueueFamilyProperties))},
+ {"vkGetPhysicalDeviceSparseImageFormatProperties", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetPhysicalDeviceSparseImageFormatProperties>(GetPhysicalDeviceSparseImageFormatProperties))},
+ {"vkGetPipelineCacheData", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetPipelineCacheData>(GetPipelineCacheData))},
+ {"vkGetQueryPoolResults", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetQueryPoolResults>(GetQueryPoolResults))},
+ {"vkGetRenderAreaGranularity", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkGetRenderAreaGranularity>(GetRenderAreaGranularity))},
+ {"vkInvalidateMappedMemoryRanges", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkInvalidateMappedMemoryRanges>(InvalidateMappedMemoryRanges))},
+ {"vkMapMemory", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkMapMemory>(MapMemory))},
+ {"vkMergePipelineCaches", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkMergePipelineCaches>(MergePipelineCaches))},
+ {"vkQueueBindSparse", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkQueueBindSparse>(QueueBindSparse))},
+ {"vkQueueSubmit", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkQueueSubmit>(QueueSubmit))},
+ {"vkQueueWaitIdle", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkQueueWaitIdle>(QueueWaitIdle))},
+ {"vkResetCommandBuffer", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkResetCommandBuffer>(ResetCommandBuffer))},
+ {"vkResetCommandPool", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkResetCommandPool>(ResetCommandPool))},
+ {"vkResetDescriptorPool", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkResetDescriptorPool>(ResetDescriptorPool))},
+ {"vkResetEvent", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkResetEvent>(ResetEvent))},
+ {"vkResetFences", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkResetFences>(ResetFences))},
+ {"vkSetEvent", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkSetEvent>(SetEvent))},
+ {"vkUnmapMemory", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkUnmapMemory>(UnmapMemory))},
+ {"vkUpdateDescriptorSets", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkUpdateDescriptorSets>(UpdateDescriptorSets))},
+ {"vkWaitForFences", reinterpret_cast<PFN_vkVoidFunction>(static_cast<PFN_vkWaitForFences>(WaitForFences))},
+ // clang-format on
+};
+
+} // namespace
+
+namespace null_driver {
+
+PFN_vkVoidFunction GetGlobalProcAddr(const char* name) {
+ return Lookup(name, kGlobalProcs);
+}
+
+PFN_vkVoidFunction GetInstanceProcAddr(const char* name) {
+ PFN_vkVoidFunction pfn;
+ if ((pfn = Lookup(name, kInstanceProcs)))
+ return pfn;
+ if (strcmp(name, "vkGetSwapchainGrallocUsageANDROID") == 0)
+ return reinterpret_cast<PFN_vkVoidFunction>(
+ static_cast<PFN_vkGetSwapchainGrallocUsageANDROID>(
+ GetSwapchainGrallocUsageANDROID));
+ if (strcmp(name, "vkAcquireImageANDROID") == 0)
+ return reinterpret_cast<PFN_vkVoidFunction>(
+ static_cast<PFN_vkAcquireImageANDROID>(AcquireImageANDROID));
+ if (strcmp(name, "vkQueueSignalReleaseImageANDROID") == 0)
+ return reinterpret_cast<PFN_vkVoidFunction>(
+ static_cast<PFN_vkQueueSignalReleaseImageANDROID>(
+ QueueSignalReleaseImageANDROID));
+ return nullptr;
+}
+
+} // namespace null_driver
diff --git a/vulkan/nulldrv/null_driver_gen.h b/vulkan/nulldrv/null_driver_gen.h
new file mode 100644
index 0000000..ddf4afb
--- /dev/null
+++ b/vulkan/nulldrv/null_driver_gen.h
@@ -0,0 +1,181 @@
+/*
+ * Copyright 2015 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+// This file is generated. Do not edit manually!
+// To regenerate: $ apic template ../api/vulkan.api null_driver.tmpl
+// Requires apic from https://android.googlesource.com/platform/tools/gpu/.
+
+#ifndef NULLDRV_NULL_DRIVER_H
+#define NULLDRV_NULL_DRIVER_H 1
+
+#include <vulkan/vk_android_native_buffer.h>
+#include <vulkan/vk_ext_debug_report.h>
+#include <vulkan/vulkan.h>
+
+namespace null_driver {
+
+PFN_vkVoidFunction GetGlobalProcAddr(const char* name);
+PFN_vkVoidFunction GetInstanceProcAddr(const char* name);
+
+// clang-format off
+VKAPI_ATTR VkResult CreateInstance(const VkInstanceCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkInstance* pInstance);
+VKAPI_ATTR void DestroyInstance(VkInstance instance, const VkAllocationCallbacks* pAllocator);
+VKAPI_ATTR VkResult EnumeratePhysicalDevices(VkInstance instance, uint32_t* pPhysicalDeviceCount, VkPhysicalDevice* pPhysicalDevices);
+VKAPI_ATTR PFN_vkVoidFunction GetDeviceProcAddr(VkDevice device, const char* pName);
+VKAPI_ATTR PFN_vkVoidFunction GetInstanceProcAddr(VkInstance instance, const char* pName);
+VKAPI_ATTR void GetPhysicalDeviceProperties(VkPhysicalDevice physicalDevice, VkPhysicalDeviceProperties* pProperties);
+VKAPI_ATTR void GetPhysicalDeviceQueueFamilyProperties(VkPhysicalDevice physicalDevice, uint32_t* pQueueFamilyPropertyCount, VkQueueFamilyProperties* pQueueFamilyProperties);
+VKAPI_ATTR void GetPhysicalDeviceMemoryProperties(VkPhysicalDevice physicalDevice, VkPhysicalDeviceMemoryProperties* pMemoryProperties);
+VKAPI_ATTR void GetPhysicalDeviceFeatures(VkPhysicalDevice physicalDevice, VkPhysicalDeviceFeatures* pFeatures);
+VKAPI_ATTR void GetPhysicalDeviceFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format, VkFormatProperties* pFormatProperties);
+VKAPI_ATTR VkResult GetPhysicalDeviceImageFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type, VkImageTiling tiling, VkImageUsageFlags usage, VkImageCreateFlags flags, VkImageFormatProperties* pImageFormatProperties);
+VKAPI_ATTR VkResult CreateDevice(VkPhysicalDevice physicalDevice, const VkDeviceCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDevice* pDevice);
+VKAPI_ATTR void DestroyDevice(VkDevice device, const VkAllocationCallbacks* pAllocator);
+VKAPI_ATTR VkResult EnumerateInstanceLayerProperties(uint32_t* pPropertyCount, VkLayerProperties* pProperties);
+VKAPI_ATTR VkResult EnumerateInstanceExtensionProperties(const char* pLayerName, uint32_t* pPropertyCount, VkExtensionProperties* pProperties);
+VKAPI_ATTR VkResult EnumerateDeviceLayerProperties(VkPhysicalDevice physicalDevice, uint32_t* pPropertyCount, VkLayerProperties* pProperties);
+VKAPI_ATTR VkResult EnumerateDeviceExtensionProperties(VkPhysicalDevice physicalDevice, const char* pLayerName, uint32_t* pPropertyCount, VkExtensionProperties* pProperties);
+VKAPI_ATTR void GetDeviceQueue(VkDevice device, uint32_t queueFamilyIndex, uint32_t queueIndex, VkQueue* pQueue);
+VKAPI_ATTR VkResult QueueSubmit(VkQueue queue, uint32_t submitCount, const VkSubmitInfo* pSubmits, VkFence fence);
+VKAPI_ATTR VkResult QueueWaitIdle(VkQueue queue);
+VKAPI_ATTR VkResult DeviceWaitIdle(VkDevice device);
+VKAPI_ATTR VkResult AllocateMemory(VkDevice device, const VkMemoryAllocateInfo* pAllocateInfo, const VkAllocationCallbacks* pAllocator, VkDeviceMemory* pMemory);
+VKAPI_ATTR void FreeMemory(VkDevice device, VkDeviceMemory memory, const VkAllocationCallbacks* pAllocator);
+VKAPI_ATTR VkResult MapMemory(VkDevice device, VkDeviceMemory memory, VkDeviceSize offset, VkDeviceSize size, VkMemoryMapFlags flags, void** ppData);
+VKAPI_ATTR void UnmapMemory(VkDevice device, VkDeviceMemory memory);
+VKAPI_ATTR VkResult FlushMappedMemoryRanges(VkDevice device, uint32_t memoryRangeCount, const VkMappedMemoryRange* pMemoryRanges);
+VKAPI_ATTR VkResult InvalidateMappedMemoryRanges(VkDevice device, uint32_t memoryRangeCount, const VkMappedMemoryRange* pMemoryRanges);
+VKAPI_ATTR void GetDeviceMemoryCommitment(VkDevice device, VkDeviceMemory memory, VkDeviceSize* pCommittedMemoryInBytes);
+VKAPI_ATTR void GetBufferMemoryRequirements(VkDevice device, VkBuffer buffer, VkMemoryRequirements* pMemoryRequirements);
+VKAPI_ATTR VkResult BindBufferMemory(VkDevice device, VkBuffer buffer, VkDeviceMemory memory, VkDeviceSize memoryOffset);
+VKAPI_ATTR void GetImageMemoryRequirements(VkDevice device, VkImage image, VkMemoryRequirements* pMemoryRequirements);
+VKAPI_ATTR VkResult BindImageMemory(VkDevice device, VkImage image, VkDeviceMemory memory, VkDeviceSize memoryOffset);
+VKAPI_ATTR void GetImageSparseMemoryRequirements(VkDevice device, VkImage image, uint32_t* pSparseMemoryRequirementCount, VkSparseImageMemoryRequirements* pSparseMemoryRequirements);
+VKAPI_ATTR void GetPhysicalDeviceSparseImageFormatProperties(VkPhysicalDevice physicalDevice, VkFormat format, VkImageType type, VkSampleCountFlagBits samples, VkImageUsageFlags usage, VkImageTiling tiling, uint32_t* pPropertyCount, VkSparseImageFormatProperties* pProperties);
+VKAPI_ATTR VkResult QueueBindSparse(VkQueue queue, uint32_t bindInfoCount, const VkBindSparseInfo* pBindInfo, VkFence fence);
+VKAPI_ATTR VkResult CreateFence(VkDevice device, const VkFenceCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkFence* pFence);
+VKAPI_ATTR void DestroyFence(VkDevice device, VkFence fence, const VkAllocationCallbacks* pAllocator);
+VKAPI_ATTR VkResult ResetFences(VkDevice device, uint32_t fenceCount, const VkFence* pFences);
+VKAPI_ATTR VkResult GetFenceStatus(VkDevice device, VkFence fence);
+VKAPI_ATTR VkResult WaitForFences(VkDevice device, uint32_t fenceCount, const VkFence* pFences, VkBool32 waitAll, uint64_t timeout);
+VKAPI_ATTR VkResult CreateSemaphore(VkDevice device, const VkSemaphoreCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSemaphore* pSemaphore);
+VKAPI_ATTR void DestroySemaphore(VkDevice device, VkSemaphore semaphore, const VkAllocationCallbacks* pAllocator);
+VKAPI_ATTR VkResult CreateEvent(VkDevice device, const VkEventCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkEvent* pEvent);
+VKAPI_ATTR void DestroyEvent(VkDevice device, VkEvent event, const VkAllocationCallbacks* pAllocator);
+VKAPI_ATTR VkResult GetEventStatus(VkDevice device, VkEvent event);
+VKAPI_ATTR VkResult SetEvent(VkDevice device, VkEvent event);
+VKAPI_ATTR VkResult ResetEvent(VkDevice device, VkEvent event);
+VKAPI_ATTR VkResult CreateQueryPool(VkDevice device, const VkQueryPoolCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkQueryPool* pQueryPool);
+VKAPI_ATTR void DestroyQueryPool(VkDevice device, VkQueryPool queryPool, const VkAllocationCallbacks* pAllocator);
+VKAPI_ATTR VkResult GetQueryPoolResults(VkDevice device, VkQueryPool queryPool, uint32_t firstQuery, uint32_t queryCount, size_t dataSize, void* pData, VkDeviceSize stride, VkQueryResultFlags flags);
+VKAPI_ATTR VkResult CreateBuffer(VkDevice device, const VkBufferCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkBuffer* pBuffer);
+VKAPI_ATTR void DestroyBuffer(VkDevice device, VkBuffer buffer, const VkAllocationCallbacks* pAllocator);
+VKAPI_ATTR VkResult CreateBufferView(VkDevice device, const VkBufferViewCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkBufferView* pView);
+VKAPI_ATTR void DestroyBufferView(VkDevice device, VkBufferView bufferView, const VkAllocationCallbacks* pAllocator);
+VKAPI_ATTR VkResult CreateImage(VkDevice device, const VkImageCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkImage* pImage);
+VKAPI_ATTR void DestroyImage(VkDevice device, VkImage image, const VkAllocationCallbacks* pAllocator);
+VKAPI_ATTR void GetImageSubresourceLayout(VkDevice device, VkImage image, const VkImageSubresource* pSubresource, VkSubresourceLayout* pLayout);
+VKAPI_ATTR VkResult CreateImageView(VkDevice device, const VkImageViewCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkImageView* pView);
+VKAPI_ATTR void DestroyImageView(VkDevice device, VkImageView imageView, const VkAllocationCallbacks* pAllocator);
+VKAPI_ATTR VkResult CreateShaderModule(VkDevice device, const VkShaderModuleCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkShaderModule* pShaderModule);
+VKAPI_ATTR void DestroyShaderModule(VkDevice device, VkShaderModule shaderModule, const VkAllocationCallbacks* pAllocator);
+VKAPI_ATTR VkResult CreatePipelineCache(VkDevice device, const VkPipelineCacheCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkPipelineCache* pPipelineCache);
+VKAPI_ATTR void DestroyPipelineCache(VkDevice device, VkPipelineCache pipelineCache, const VkAllocationCallbacks* pAllocator);
+VKAPI_ATTR VkResult GetPipelineCacheData(VkDevice device, VkPipelineCache pipelineCache, size_t* pDataSize, void* pData);
+VKAPI_ATTR VkResult MergePipelineCaches(VkDevice device, VkPipelineCache dstCache, uint32_t srcCacheCount, const VkPipelineCache* pSrcCaches);
+VKAPI_ATTR VkResult CreateGraphicsPipelines(VkDevice device, VkPipelineCache pipelineCache, uint32_t createInfoCount, const VkGraphicsPipelineCreateInfo* pCreateInfos, const VkAllocationCallbacks* pAllocator, VkPipeline* pPipelines);
+VKAPI_ATTR VkResult CreateComputePipelines(VkDevice device, VkPipelineCache pipelineCache, uint32_t createInfoCount, const VkComputePipelineCreateInfo* pCreateInfos, const VkAllocationCallbacks* pAllocator, VkPipeline* pPipelines);
+VKAPI_ATTR void DestroyPipeline(VkDevice device, VkPipeline pipeline, const VkAllocationCallbacks* pAllocator);
+VKAPI_ATTR VkResult CreatePipelineLayout(VkDevice device, const VkPipelineLayoutCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkPipelineLayout* pPipelineLayout);
+VKAPI_ATTR void DestroyPipelineLayout(VkDevice device, VkPipelineLayout pipelineLayout, const VkAllocationCallbacks* pAllocator);
+VKAPI_ATTR VkResult CreateSampler(VkDevice device, const VkSamplerCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkSampler* pSampler);
+VKAPI_ATTR void DestroySampler(VkDevice device, VkSampler sampler, const VkAllocationCallbacks* pAllocator);
+VKAPI_ATTR VkResult CreateDescriptorSetLayout(VkDevice device, const VkDescriptorSetLayoutCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDescriptorSetLayout* pSetLayout);
+VKAPI_ATTR void DestroyDescriptorSetLayout(VkDevice device, VkDescriptorSetLayout descriptorSetLayout, const VkAllocationCallbacks* pAllocator);
+VKAPI_ATTR VkResult CreateDescriptorPool(VkDevice device, const VkDescriptorPoolCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDescriptorPool* pDescriptorPool);
+VKAPI_ATTR void DestroyDescriptorPool(VkDevice device, VkDescriptorPool descriptorPool, const VkAllocationCallbacks* pAllocator);
+VKAPI_ATTR VkResult ResetDescriptorPool(VkDevice device, VkDescriptorPool descriptorPool, VkDescriptorPoolResetFlags flags);
+VKAPI_ATTR VkResult AllocateDescriptorSets(VkDevice device, const VkDescriptorSetAllocateInfo* pAllocateInfo, VkDescriptorSet* pDescriptorSets);
+VKAPI_ATTR VkResult FreeDescriptorSets(VkDevice device, VkDescriptorPool descriptorPool, uint32_t descriptorSetCount, const VkDescriptorSet* pDescriptorSets);
+VKAPI_ATTR void UpdateDescriptorSets(VkDevice device, uint32_t descriptorWriteCount, const VkWriteDescriptorSet* pDescriptorWrites, uint32_t descriptorCopyCount, const VkCopyDescriptorSet* pDescriptorCopies);
+VKAPI_ATTR VkResult CreateFramebuffer(VkDevice device, const VkFramebufferCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkFramebuffer* pFramebuffer);
+VKAPI_ATTR void DestroyFramebuffer(VkDevice device, VkFramebuffer framebuffer, const VkAllocationCallbacks* pAllocator);
+VKAPI_ATTR VkResult CreateRenderPass(VkDevice device, const VkRenderPassCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkRenderPass* pRenderPass);
+VKAPI_ATTR void DestroyRenderPass(VkDevice device, VkRenderPass renderPass, const VkAllocationCallbacks* pAllocator);
+VKAPI_ATTR void GetRenderAreaGranularity(VkDevice device, VkRenderPass renderPass, VkExtent2D* pGranularity);
+VKAPI_ATTR VkResult CreateCommandPool(VkDevice device, const VkCommandPoolCreateInfo* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkCommandPool* pCommandPool);
+VKAPI_ATTR void DestroyCommandPool(VkDevice device, VkCommandPool commandPool, const VkAllocationCallbacks* pAllocator);
+VKAPI_ATTR VkResult ResetCommandPool(VkDevice device, VkCommandPool commandPool, VkCommandPoolResetFlags flags);
+VKAPI_ATTR VkResult AllocateCommandBuffers(VkDevice device, const VkCommandBufferAllocateInfo* pAllocateInfo, VkCommandBuffer* pCommandBuffers);
+VKAPI_ATTR void FreeCommandBuffers(VkDevice device, VkCommandPool commandPool, uint32_t commandBufferCount, const VkCommandBuffer* pCommandBuffers);
+VKAPI_ATTR VkResult BeginCommandBuffer(VkCommandBuffer commandBuffer, const VkCommandBufferBeginInfo* pBeginInfo);
+VKAPI_ATTR VkResult EndCommandBuffer(VkCommandBuffer commandBuffer);
+VKAPI_ATTR VkResult ResetCommandBuffer(VkCommandBuffer commandBuffer, VkCommandBufferResetFlags flags);
+VKAPI_ATTR void CmdBindPipeline(VkCommandBuffer commandBuffer, VkPipelineBindPoint pipelineBindPoint, VkPipeline pipeline);
+VKAPI_ATTR void CmdSetViewport(VkCommandBuffer commandBuffer, uint32_t firstViewport, uint32_t viewportCount, const VkViewport* pViewports);
+VKAPI_ATTR void CmdSetScissor(VkCommandBuffer commandBuffer, uint32_t firstScissor, uint32_t scissorCount, const VkRect2D* pScissors);
+VKAPI_ATTR void CmdSetLineWidth(VkCommandBuffer commandBuffer, float lineWidth);
+VKAPI_ATTR void CmdSetDepthBias(VkCommandBuffer commandBuffer, float depthBiasConstantFactor, float depthBiasClamp, float depthBiasSlopeFactor);
+VKAPI_ATTR void CmdSetBlendConstants(VkCommandBuffer commandBuffer, const float blendConstants[4]);
+VKAPI_ATTR void CmdSetDepthBounds(VkCommandBuffer commandBuffer, float minDepthBounds, float maxDepthBounds);
+VKAPI_ATTR void CmdSetStencilCompareMask(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t compareMask);
+VKAPI_ATTR void CmdSetStencilWriteMask(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t writeMask);
+VKAPI_ATTR void CmdSetStencilReference(VkCommandBuffer commandBuffer, VkStencilFaceFlags faceMask, uint32_t reference);
+VKAPI_ATTR void CmdBindDescriptorSets(VkCommandBuffer commandBuffer, VkPipelineBindPoint pipelineBindPoint, VkPipelineLayout layout, uint32_t firstSet, uint32_t descriptorSetCount, const VkDescriptorSet* pDescriptorSets, uint32_t dynamicOffsetCount, const uint32_t* pDynamicOffsets);
+VKAPI_ATTR void CmdBindIndexBuffer(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, VkIndexType indexType);
+VKAPI_ATTR void CmdBindVertexBuffers(VkCommandBuffer commandBuffer, uint32_t firstBinding, uint32_t bindingCount, const VkBuffer* pBuffers, const VkDeviceSize* pOffsets);
+VKAPI_ATTR void CmdDraw(VkCommandBuffer commandBuffer, uint32_t vertexCount, uint32_t instanceCount, uint32_t firstVertex, uint32_t firstInstance);
+VKAPI_ATTR void CmdDrawIndexed(VkCommandBuffer commandBuffer, uint32_t indexCount, uint32_t instanceCount, uint32_t firstIndex, int32_t vertexOffset, uint32_t firstInstance);
+VKAPI_ATTR void CmdDrawIndirect(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, uint32_t drawCount, uint32_t stride);
+VKAPI_ATTR void CmdDrawIndexedIndirect(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, uint32_t drawCount, uint32_t stride);
+VKAPI_ATTR void CmdDispatch(VkCommandBuffer commandBuffer, uint32_t x, uint32_t y, uint32_t z);
+VKAPI_ATTR void CmdDispatchIndirect(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset);
+VKAPI_ATTR void CmdCopyBuffer(VkCommandBuffer commandBuffer, VkBuffer srcBuffer, VkBuffer dstBuffer, uint32_t regionCount, const VkBufferCopy* pRegions);
+VKAPI_ATTR void CmdCopyImage(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage, VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageCopy* pRegions);
+VKAPI_ATTR void CmdBlitImage(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage, VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageBlit* pRegions, VkFilter filter);
+VKAPI_ATTR void CmdCopyBufferToImage(VkCommandBuffer commandBuffer, VkBuffer srcBuffer, VkImage dstImage, VkImageLayout dstImageLayout, uint32_t regionCount, const VkBufferImageCopy* pRegions);
+VKAPI_ATTR void CmdCopyImageToBuffer(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkBuffer dstBuffer, uint32_t regionCount, const VkBufferImageCopy* pRegions);
+VKAPI_ATTR void CmdUpdateBuffer(VkCommandBuffer commandBuffer, VkBuffer dstBuffer, VkDeviceSize dstOffset, VkDeviceSize dataSize, const uint32_t* pData);
+VKAPI_ATTR void CmdFillBuffer(VkCommandBuffer commandBuffer, VkBuffer dstBuffer, VkDeviceSize dstOffset, VkDeviceSize size, uint32_t data);
+VKAPI_ATTR void CmdClearColorImage(VkCommandBuffer commandBuffer, VkImage image, VkImageLayout imageLayout, const VkClearColorValue* pColor, uint32_t rangeCount, const VkImageSubresourceRange* pRanges);
+VKAPI_ATTR void CmdClearDepthStencilImage(VkCommandBuffer commandBuffer, VkImage image, VkImageLayout imageLayout, const VkClearDepthStencilValue* pDepthStencil, uint32_t rangeCount, const VkImageSubresourceRange* pRanges);
+VKAPI_ATTR void CmdClearAttachments(VkCommandBuffer commandBuffer, uint32_t attachmentCount, const VkClearAttachment* pAttachments, uint32_t rectCount, const VkClearRect* pRects);
+VKAPI_ATTR void CmdResolveImage(VkCommandBuffer commandBuffer, VkImage srcImage, VkImageLayout srcImageLayout, VkImage dstImage, VkImageLayout dstImageLayout, uint32_t regionCount, const VkImageResolve* pRegions);
+VKAPI_ATTR void CmdSetEvent(VkCommandBuffer commandBuffer, VkEvent event, VkPipelineStageFlags stageMask);
+VKAPI_ATTR void CmdResetEvent(VkCommandBuffer commandBuffer, VkEvent event, VkPipelineStageFlags stageMask);
+VKAPI_ATTR void CmdWaitEvents(VkCommandBuffer commandBuffer, uint32_t eventCount, const VkEvent* pEvents, VkPipelineStageFlags srcStageMask, VkPipelineStageFlags dstStageMask, uint32_t memoryBarrierCount, const VkMemoryBarrier* pMemoryBarriers, uint32_t bufferMemoryBarrierCount, const VkBufferMemoryBarrier* pBufferMemoryBarriers, uint32_t imageMemoryBarrierCount, const VkImageMemoryBarrier* pImageMemoryBarriers);
+VKAPI_ATTR void CmdPipelineBarrier(VkCommandBuffer commandBuffer, VkPipelineStageFlags srcStageMask, VkPipelineStageFlags dstStageMask, VkDependencyFlags dependencyFlags, uint32_t memoryBarrierCount, const VkMemoryBarrier* pMemoryBarriers, uint32_t bufferMemoryBarrierCount, const VkBufferMemoryBarrier* pBufferMemoryBarriers, uint32_t imageMemoryBarrierCount, const VkImageMemoryBarrier* pImageMemoryBarriers);
+VKAPI_ATTR void CmdBeginQuery(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t query, VkQueryControlFlags flags);
+VKAPI_ATTR void CmdEndQuery(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t query);
+VKAPI_ATTR void CmdResetQueryPool(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t firstQuery, uint32_t queryCount);
+VKAPI_ATTR void CmdWriteTimestamp(VkCommandBuffer commandBuffer, VkPipelineStageFlagBits pipelineStage, VkQueryPool queryPool, uint32_t query);
+VKAPI_ATTR void CmdCopyQueryPoolResults(VkCommandBuffer commandBuffer, VkQueryPool queryPool, uint32_t firstQuery, uint32_t queryCount, VkBuffer dstBuffer, VkDeviceSize dstOffset, VkDeviceSize stride, VkQueryResultFlags flags);
+VKAPI_ATTR void CmdPushConstants(VkCommandBuffer commandBuffer, VkPipelineLayout layout, VkShaderStageFlags stageFlags, uint32_t offset, uint32_t size, const void* pValues);
+VKAPI_ATTR void CmdBeginRenderPass(VkCommandBuffer commandBuffer, const VkRenderPassBeginInfo* pRenderPassBegin, VkSubpassContents contents);
+VKAPI_ATTR void CmdNextSubpass(VkCommandBuffer commandBuffer, VkSubpassContents contents);
+VKAPI_ATTR void CmdEndRenderPass(VkCommandBuffer commandBuffer);
+VKAPI_ATTR void CmdExecuteCommands(VkCommandBuffer commandBuffer, uint32_t commandBufferCount, const VkCommandBuffer* pCommandBuffers);
+VKAPI_ATTR VkResult CreateDebugReportCallbackEXT(VkInstance instance, const VkDebugReportCallbackCreateInfoEXT* pCreateInfo, const VkAllocationCallbacks* pAllocator, VkDebugReportCallbackEXT* pCallback);
+VKAPI_ATTR void DestroyDebugReportCallbackEXT(VkInstance instance, VkDebugReportCallbackEXT callback, const VkAllocationCallbacks* pAllocator);
+VKAPI_ATTR void DebugReportMessageEXT(VkInstance instance, VkDebugReportFlagsEXT flags, VkDebugReportObjectTypeEXT objectType, uint64_t object, size_t location, int32_t messageCode, const char* pLayerPrefix, const char* pMessage);
+VKAPI_ATTR VkResult GetSwapchainGrallocUsageANDROID(VkDevice device, VkFormat format, VkImageUsageFlags imageUsage, int* grallocUsage);
+VKAPI_ATTR VkResult AcquireImageANDROID(VkDevice device, VkImage image, int nativeFenceFd, VkSemaphore semaphore, VkFence fence);
+VKAPI_ATTR VkResult QueueSignalReleaseImageANDROID(VkQueue queue, uint32_t waitSemaphoreCount, const VkSemaphore* pWaitSemaphores, VkImage image, int* pNativeFenceFd);
+// clang-format on
+
+} // namespace null_driver
+
+#endif // NULLDRV_NULL_DRIVER_H
diff --git a/vulkan/patches/README b/vulkan/patches/README
new file mode 100644
index 0000000..d424dd8
--- /dev/null
+++ b/vulkan/patches/README
@@ -0,0 +1,26 @@
+frameworks/native/vulkan/patches
+================================
+Each subdirectory corresponds to a sequence of patches. These are
+"virtual branches": since we have only one shared branch, they let us
+share experimental or auxiliary changes without disturbing the main
+branch.
+
+To apply:
+$ cd <somewhere in target git repo>
+$ git am $VULKAN_PATCHES/$PATCH_DIR/*
+
+
+frameworks_base-apk_library_dir
+-------------------------------
+This branch is for $TOP/frameworks/base. It modifies the framework to
+inform the Vulkan loader, during activity startup, where the
+activity's native library directory is located. The loader will
+search this directory for layer libraries. Without this change,
+layers will only be loaded from a global location under /data.
+
+
+build-install_libvulkan
+-----------------------
+This branch is for $TOP/build. It adds libvulkan.so to the base
+PRODUCT_PACKAGES variable, so it will be built and installed on the system
+partition by default.
diff --git a/vulkan/patches/build-install_libvulkan/0001-Add-libvulkan-to-base-PRODUCT_PACKAGES.patch b/vulkan/patches/build-install_libvulkan/0001-Add-libvulkan-to-base-PRODUCT_PACKAGES.patch
new file mode 100644
index 0000000..9d214bd
--- /dev/null
+++ b/vulkan/patches/build-install_libvulkan/0001-Add-libvulkan-to-base-PRODUCT_PACKAGES.patch
@@ -0,0 +1,25 @@
+From a0aa01fb36a2769b7113316c86e902def62001d9 Mon Sep 17 00:00:00 2001
+From: Jesse Hall <jessehall@google.com>
+Date: Wed, 14 Oct 2015 15:20:34 -0700
+Subject: [PATCH] Add libvulkan to base PRODUCT_PACKAGES
+
+Change-Id: I6c3ad4732148888a88fe980bf8e2bedf26ee74c8
+---
+ target/product/base.mk | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/target/product/base.mk b/target/product/base.mk
+index 1699156..4b9ce92 100644
+--- a/target/product/base.mk
++++ b/target/product/base.mk
+@@ -94,6 +94,7 @@ PRODUCT_PACKAGES += \
+ libvisualizer \
+ libvorbisidec \
+ libmediandk \
++ libvulkan \
+ libwifi-service \
+ media \
+ media_cmd \
+--
+2.6.0.rc2.230.g3dd15c0
+
diff --git a/vulkan/patches/frameworks_base-apk_library_dir/0001-Adding-plumbing-for-passing-the-lib-directory.patch b/vulkan/patches/frameworks_base-apk_library_dir/0001-Adding-plumbing-for-passing-the-lib-directory.patch
new file mode 100644
index 0000000..81022d6
--- /dev/null
+++ b/vulkan/patches/frameworks_base-apk_library_dir/0001-Adding-plumbing-for-passing-the-lib-directory.patch
@@ -0,0 +1,133 @@
+From 5c7e465f1d11bccecdc5cacce87d1fd7deeb5adb Mon Sep 17 00:00:00 2001
+From: Michael Lentine <mlentine@google.com>
+Date: Mon, 14 Sep 2015 13:28:25 -0500
+Subject: [PATCH] Adding plumbing for passing the lib directory.
+
+Added a call in handleBindApplication which passes the library path into
+HardwareRenderer, which then passes it to libvulkan through ThreadedRenderer's
+JNI interface.
+
+Change-Id: Ie5709ac46f47c4af5c020d604a479e78745d7777
+---
+ core/java/android/app/ActivityThread.java | 7 +++++--
+ core/java/android/view/HardwareRenderer.java | 11 +++++++++++
+ core/java/android/view/ThreadedRenderer.java | 1 +
+ core/jni/Android.mk | 2 ++
+ core/jni/android_view_ThreadedRenderer.cpp | 15 +++++++++++++++
+ 5 files changed, 34 insertions(+), 2 deletions(-)
+
+diff --git a/core/java/android/app/ActivityThread.java b/core/java/android/app/ActivityThread.java
+index da21eaf..76608c6 100644
+--- a/core/java/android/app/ActivityThread.java
++++ b/core/java/android/app/ActivityThread.java
+@@ -4520,8 +4520,11 @@ public final class ActivityThread {
+ } else {
+ Log.e(TAG, "Unable to setupGraphicsSupport due to missing code-cache directory");
+ }
+- }
+-
++ }
++
++ // Add the lib dir path to hardware renderer so that vulkan layers
++ // can be searched for within that directory.
++ HardwareRenderer.setLibDir(data.info.getLibDir());
+
+ final boolean is24Hr = "24".equals(mCoreSettings.getString(Settings.System.TIME_12_24));
+ DateFormat.set24HourTimePref(is24Hr);
+diff --git a/core/java/android/view/HardwareRenderer.java b/core/java/android/view/HardwareRenderer.java
+index 5e58250..ed99115 100644
+--- a/core/java/android/view/HardwareRenderer.java
++++ b/core/java/android/view/HardwareRenderer.java
+@@ -301,6 +301,17 @@ public abstract class HardwareRenderer {
+ }
+
+ /**
++ * Sets the library directory to use as a search path for vulkan layers.
++ *
++ * @param libDir A directory that contains vulkan layers
++ *
++ * @hide
++ */
++ public static void setLibDir(String libDir) {
++ ThreadedRenderer.setupVulkanLayerPath(libDir);
++ }
++
++ /**
+ * Indicates that the specified hardware layer needs to be updated
+ * as soon as possible.
+ *
+diff --git a/core/java/android/view/ThreadedRenderer.java b/core/java/android/view/ThreadedRenderer.java
+index f6119e2..d3e5175 100644
+--- a/core/java/android/view/ThreadedRenderer.java
++++ b/core/java/android/view/ThreadedRenderer.java
+@@ -492,6 +492,7 @@ public class ThreadedRenderer extends HardwareRenderer {
+ }
+
+ static native void setupShadersDiskCache(String cacheFile);
++ static native void setupVulkanLayerPath(String layerPath);
+
+ private static native void nSetAtlas(long nativeProxy, GraphicBuffer buffer, long[] map);
+ private static native void nSetProcessStatsBuffer(long nativeProxy, int fd);
+diff --git a/core/jni/Android.mk b/core/jni/Android.mk
+index 6b07a47..438e95b 100644
+--- a/core/jni/Android.mk
++++ b/core/jni/Android.mk
+@@ -177,6 +177,7 @@ LOCAL_C_INCLUDES += \
+ $(LOCAL_PATH)/android/graphics \
+ $(LOCAL_PATH)/../../libs/hwui \
+ $(LOCAL_PATH)/../../../native/opengl/libs \
++ $(LOCAL_PATH)/../../../native/vulkan/include \
+ $(call include-path-for, bluedroid) \
+ $(call include-path-for, libhardware)/hardware \
+ $(call include-path-for, libhardware_legacy)/hardware_legacy \
+@@ -225,6 +226,7 @@ LOCAL_SHARED_LIBRARIES := \
+ libEGL \
+ libGLESv1_CM \
+ libGLESv2 \
++ libvulkan \
+ libETC1 \
+ libhardware \
+ libhardware_legacy \
+diff --git a/core/jni/android_view_ThreadedRenderer.cpp b/core/jni/android_view_ThreadedRenderer.cpp
+index 47132f4..69e8ca6 100644
+--- a/core/jni/android_view_ThreadedRenderer.cpp
++++ b/core/jni/android_view_ThreadedRenderer.cpp
+@@ -27,6 +27,7 @@
+ #include <EGL/egl.h>
+ #include <EGL/eglext.h>
+ #include <EGL/egl_cache.h>
++#include <vulkan/vulkan_loader_data.h>
+
+ #include <utils/StrongPointer.h>
+ #include <android_runtime/android_view_Surface.h>
+@@ -448,6 +449,18 @@ static void android_view_ThreadedRenderer_setupShadersDiskCache(JNIEnv* env, job
+ }
+
+ // ----------------------------------------------------------------------------
++// Layers
++// ----------------------------------------------------------------------------
++
++static void android_view_ThreadedRenderer_setupVulkanLayerPath(JNIEnv* env, jobject clazz,
++ jstring layerPath) {
++
++ const char* layerArray = env->GetStringUTFChars(layerPath, NULL);
++ vulkan::LoaderData::GetInstance().layer_path = layerArray;
++ env->ReleaseStringUTFChars(layerPath, layerArray);
++}
++
++// ----------------------------------------------------------------------------
+ // JNI Glue
+ // ----------------------------------------------------------------------------
+
+@@ -487,6 +500,8 @@ static JNINativeMethod gMethods[] = {
+ { "nDumpProfileData", "([BLjava/io/FileDescriptor;)V", (void*) android_view_ThreadedRenderer_dumpProfileData },
+ { "setupShadersDiskCache", "(Ljava/lang/String;)V",
+ (void*) android_view_ThreadedRenderer_setupShadersDiskCache },
++ { "setupVulkanLayerPath", "(Ljava/lang/String;)V",
++ (void*) android_view_ThreadedRenderer_setupVulkanLayerPath },
+ };
+
+ int register_android_view_ThreadedRenderer(JNIEnv* env) {
+--
+2.6.0.rc2.230.g3dd15c0
+
diff --git a/vulkan/tools/Android.mk b/vulkan/tools/Android.mk
new file mode 100644
index 0000000..31d6089
--- /dev/null
+++ b/vulkan/tools/Android.mk
@@ -0,0 +1,37 @@
+# Copyright 2015 The Android Open Source Project
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+LOCAL_PATH:= $(call my-dir)
+include $(CLEAR_VARS)
+
+LOCAL_CLANG := true
+LOCAL_CFLAGS := -std=c99 -fvisibility=hidden -fstrict-aliasing
+LOCAL_CFLAGS += -DLOG_TAG=\"vkinfo\"
+LOCAL_CFLAGS += -Weverything -Werror -Wno-padded -Wno-undef -Wno-switch-enum
+LOCAL_CPPFLAGS := -std=c++1y \
+ -Wno-c++98-compat-pedantic \
+ -Wno-c99-extensions
+
+LOCAL_C_INCLUDES := \
+ frameworks/native/vulkan/include
+
+LOCAL_SRC_FILES := vkinfo.cpp
+LOCAL_ADDITIONAL_DEPENDENCIES := $(LOCAL_PATH)/Android.mk
+
+LOCAL_SHARED_LIBRARIES := libvulkan liblog
+
+LOCAL_MODULE := vkinfo
+LOCAL_MODULE_TAGS := optional
+
+include $(BUILD_EXECUTABLE)
diff --git a/vulkan/tools/vkinfo.cpp b/vulkan/tools/vkinfo.cpp
new file mode 100644
index 0000000..6a63667
--- /dev/null
+++ b/vulkan/tools/vkinfo.cpp
@@ -0,0 +1,412 @@
+/*
+ * Copyright 2015 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <algorithm>
+#include <array>
+#include <inttypes.h>
+#include <stdlib.h>
+#include <stdio.h>
+#include <string.h>
+#include <sstream>
+#include <vector>
+
+#include <vulkan/vulkan.h>
+#include <vulkan/vk_ext_debug_report.h>
+
+#define LOG_TAG "vkinfo"
+#include <log/log.h>
+
+namespace {
+
+struct GpuInfo {
+ VkPhysicalDeviceProperties properties;
+ VkPhysicalDeviceMemoryProperties memory;
+ VkPhysicalDeviceFeatures features;
+ std::vector<VkQueueFamilyProperties> queue_families;
+ std::vector<VkExtensionProperties> extensions;
+ std::vector<VkLayerProperties> layers;
+ std::vector<std::vector<VkExtensionProperties>> layer_extensions;
+};
+struct VulkanInfo {
+ std::vector<VkExtensionProperties> extensions;
+ std::vector<VkLayerProperties> layers;
+ std::vector<std::vector<VkExtensionProperties>> layer_extensions;
+ std::vector<GpuInfo> gpus;
+};
+
+// ----------------------------------------------------------------------------
+
+[[noreturn]] void die(const char* proc, VkResult result) {
+ const char* result_str;
+ switch (result) {
+ // clang-format off
+ case VK_SUCCESS: result_str = "VK_SUCCESS"; break;
+ case VK_NOT_READY: result_str = "VK_NOT_READY"; break;
+ case VK_TIMEOUT: result_str = "VK_TIMEOUT"; break;
+ case VK_EVENT_SET: result_str = "VK_EVENT_SET"; break;
+ case VK_EVENT_RESET: result_str = "VK_EVENT_RESET"; break;
+ case VK_INCOMPLETE: result_str = "VK_INCOMPLETE"; break;
+ case VK_ERROR_OUT_OF_HOST_MEMORY: result_str = "VK_ERROR_OUT_OF_HOST_MEMORY"; break;
+ case VK_ERROR_OUT_OF_DEVICE_MEMORY: result_str = "VK_ERROR_OUT_OF_DEVICE_MEMORY"; break;
+ case VK_ERROR_INITIALIZATION_FAILED: result_str = "VK_ERROR_INITIALIZATION_FAILED"; break;
+ case VK_ERROR_DEVICE_LOST: result_str = "VK_ERROR_DEVICE_LOST"; break;
+ case VK_ERROR_MEMORY_MAP_FAILED: result_str = "VK_ERROR_MEMORY_MAP_FAILED"; break;
+ case VK_ERROR_LAYER_NOT_PRESENT: result_str = "VK_ERROR_LAYER_NOT_PRESENT"; break;
+ case VK_ERROR_EXTENSION_NOT_PRESENT: result_str = "VK_ERROR_EXTENSION_NOT_PRESENT"; break;
+ case VK_ERROR_INCOMPATIBLE_DRIVER: result_str = "VK_ERROR_INCOMPATIBLE_DRIVER"; break;
+ default: result_str = "<unknown VkResult>"; break;
+ // clang-format on
+ }
+ fprintf(stderr, "%s failed: %s (%d)\n", proc, result_str, result);
+ exit(1);
+}
+
+bool HasExtension(const std::vector<VkExtensionProperties>& extensions,
+ const char* name) {
+    return std::find_if(extensions.cbegin(), extensions.cend(),
+                        [=](const VkExtensionProperties& prop) {
+                            return strcmp(prop.extensionName, name) == 0;
+                        }) != extensions.cend();
+}
+
+void EnumerateInstanceExtensions(
+ const char* layer_name,
+ std::vector<VkExtensionProperties>* extensions) {
+ VkResult result;
+ uint32_t count;
+ result =
+ vkEnumerateInstanceExtensionProperties(layer_name, &count, nullptr);
+ if (result != VK_SUCCESS)
+ die("vkEnumerateInstanceExtensionProperties (count)", result);
+ do {
+ extensions->resize(count);
+ result = vkEnumerateInstanceExtensionProperties(layer_name, &count,
+ extensions->data());
+ } while (result == VK_INCOMPLETE);
+ if (result != VK_SUCCESS)
+ die("vkEnumerateInstanceExtensionProperties (data)", result);
+}
+
+void EnumerateDeviceExtensions(VkPhysicalDevice gpu,
+ const char* layer_name,
+ std::vector<VkExtensionProperties>* extensions) {
+ VkResult result;
+ uint32_t count;
+ result =
+ vkEnumerateDeviceExtensionProperties(gpu, layer_name, &count, nullptr);
+ if (result != VK_SUCCESS)
+ die("vkEnumerateDeviceExtensionProperties (count)", result);
+ do {
+ extensions->resize(count);
+ result = vkEnumerateDeviceExtensionProperties(gpu, layer_name, &count,
+ extensions->data());
+ } while (result == VK_INCOMPLETE);
+ if (result != VK_SUCCESS)
+ die("vkEnumerateDeviceExtensionProperties (data)", result);
+}
+
+void GatherGpuInfo(VkPhysicalDevice gpu, GpuInfo& info) {
+ VkResult result;
+ uint32_t count;
+
+ vkGetPhysicalDeviceProperties(gpu, &info.properties);
+ vkGetPhysicalDeviceMemoryProperties(gpu, &info.memory);
+ vkGetPhysicalDeviceFeatures(gpu, &info.features);
+
+ vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
+ info.queue_families.resize(count);
+ vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count,
+ info.queue_families.data());
+
+ result = vkEnumerateDeviceLayerProperties(gpu, &count, nullptr);
+ if (result != VK_SUCCESS)
+ die("vkEnumerateDeviceLayerProperties (count)", result);
+ do {
+ info.layers.resize(count);
+ result =
+ vkEnumerateDeviceLayerProperties(gpu, &count, info.layers.data());
+ } while (result == VK_INCOMPLETE);
+ if (result != VK_SUCCESS)
+ die("vkEnumerateDeviceLayerProperties (data)", result);
+ info.layer_extensions.resize(info.layers.size());
+
+ EnumerateDeviceExtensions(gpu, nullptr, &info.extensions);
+ for (size_t i = 0; i < info.layers.size(); i++) {
+ EnumerateDeviceExtensions(gpu, info.layers[i].layerName,
+ &info.layer_extensions[i]);
+ }
+
+ const std::array<const char*, 1> kDesiredExtensions = {
+ {VK_KHR_SWAPCHAIN_EXTENSION_NAME},
+ };
+ const char* extensions[kDesiredExtensions.size()];
+ uint32_t num_extensions = 0;
+ for (const auto& desired_ext : kDesiredExtensions) {
+ bool available = HasExtension(info.extensions, desired_ext);
+ for (size_t i = 0; !available && i < info.layer_extensions.size(); i++)
+ available = HasExtension(info.layer_extensions[i], desired_ext);
+ if (available)
+ extensions[num_extensions++] = desired_ext;
+ }
+
+ VkDevice device;
+ const VkDeviceQueueCreateInfo queue_create_info = {
+ .sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO,
+ .queueFamilyIndex = 0,
+ .queueCount = 1,
+ };
+ const VkDeviceCreateInfo create_info = {
+ .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
+ .queueCreateInfoCount = 1,
+ .pQueueCreateInfos = &queue_create_info,
+ .enabledExtensionCount = num_extensions,
+ .ppEnabledExtensionNames = extensions,
+ .pEnabledFeatures = &info.features,
+ };
+ result = vkCreateDevice(gpu, &create_info, nullptr, &device);
+ if (result != VK_SUCCESS)
+ die("vkCreateDevice", result);
+ vkDestroyDevice(device, nullptr);
+}
+
+void GatherInfo(VulkanInfo* info) {
+ VkResult result;
+ uint32_t count;
+
+ result = vkEnumerateInstanceLayerProperties(&count, nullptr);
+ if (result != VK_SUCCESS)
+ die("vkEnumerateInstanceLayerProperties (count)", result);
+ do {
+ info->layers.resize(count);
+ result =
+ vkEnumerateInstanceLayerProperties(&count, info->layers.data());
+ } while (result == VK_INCOMPLETE);
+ if (result != VK_SUCCESS)
+ die("vkEnumerateInstanceLayerProperties (data)", result);
+ info->layer_extensions.resize(info->layers.size());
+
+ EnumerateInstanceExtensions(nullptr, &info->extensions);
+ for (size_t i = 0; i < info->layers.size(); i++) {
+ EnumerateInstanceExtensions(info->layers[i].layerName,
+ &info->layer_extensions[i]);
+ }
+
+ const char* kDesiredExtensions[] = {
+ VK_EXT_DEBUG_REPORT_EXTENSION_NAME,
+ };
+ const char*
+ extensions[sizeof(kDesiredExtensions) / sizeof(kDesiredExtensions[0])];
+ uint32_t num_extensions = 0;
+ for (const auto& desired_ext : kDesiredExtensions) {
+ bool available = HasExtension(info->extensions, desired_ext);
+ for (size_t i = 0; !available && i < info->layer_extensions.size(); i++)
+ available = HasExtension(info->layer_extensions[i], desired_ext);
+ if (available)
+ extensions[num_extensions++] = desired_ext;
+ }
+
+ const VkInstanceCreateInfo create_info = {
+ .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
+ .enabledExtensionCount = num_extensions,
+ .ppEnabledExtensionNames = extensions,
+ };
+ VkInstance instance;
+ result = vkCreateInstance(&create_info, nullptr, &instance);
+ if (result != VK_SUCCESS)
+ die("vkCreateInstance", result);
+
+ uint32_t num_gpus;
+ result = vkEnumeratePhysicalDevices(instance, &num_gpus, nullptr);
+ if (result != VK_SUCCESS)
+ die("vkEnumeratePhysicalDevices (count)", result);
+ std::vector<VkPhysicalDevice> gpus(num_gpus, VK_NULL_HANDLE);
+ do {
+ gpus.resize(num_gpus, VK_NULL_HANDLE);
+ result = vkEnumeratePhysicalDevices(instance, &num_gpus, gpus.data());
+ } while (result == VK_INCOMPLETE);
+ if (result != VK_SUCCESS)
+ die("vkEnumeratePhysicalDevices (data)", result);
+
+ info->gpus.resize(num_gpus);
+ for (size_t i = 0; i < gpus.size(); i++)
+ GatherGpuInfo(gpus[i], info->gpus.at(i));
+
+ vkDestroyInstance(instance, nullptr);
+}
+
+// ----------------------------------------------------------------------------
+
+uint32_t ExtractMajorVersion(uint32_t version) {
+ return (version >> 22) & 0x3FF;
+}
+uint32_t ExtractMinorVersion(uint32_t version) {
+ return (version >> 12) & 0x3FF;
+}
+uint32_t ExtractPatchVersion(uint32_t version) {
+ return (version >> 0) & 0xFFF;
+}
+
+const char* VkPhysicalDeviceTypeStr(VkPhysicalDeviceType type) {
+ switch (type) {
+ case VK_PHYSICAL_DEVICE_TYPE_OTHER:
+ return "OTHER";
+ case VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU:
+ return "INTEGRATED_GPU";
+ case VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU:
+ return "DISCRETE_GPU";
+ case VK_PHYSICAL_DEVICE_TYPE_VIRTUAL_GPU:
+ return "VIRTUAL_GPU";
+ case VK_PHYSICAL_DEVICE_TYPE_CPU:
+ return "CPU";
+ default:
+ return "<UNKNOWN>";
+ }
+}
+
+const char* VkQueueFlagBitStr(VkQueueFlagBits bit) {
+ switch (bit) {
+ case VK_QUEUE_GRAPHICS_BIT:
+ return "GRAPHICS";
+ case VK_QUEUE_COMPUTE_BIT:
+ return "COMPUTE";
+ case VK_QUEUE_TRANSFER_BIT:
+ return "TRANSFER";
+ case VK_QUEUE_SPARSE_BINDING_BIT:
+ return "SPARSE";
+    }
+    return "<UNKNOWN>";
+}
+
+void PrintExtensions(const std::vector<VkExtensionProperties>& extensions,
+ const char* prefix) {
+ for (const auto& e : extensions)
+ printf("%s%s (v%u)\n", prefix, e.extensionName, e.specVersion);
+}
+
+void PrintLayers(
+ const std::vector<VkLayerProperties>& layers,
+    const std::vector<std::vector<VkExtensionProperties>>& extensions,
+ const char* prefix) {
+ std::string ext_prefix(prefix);
+ ext_prefix.append(" ");
+ for (size_t i = 0; i < layers.size(); i++) {
+ printf(
+ "%s%s %u.%u.%u/%u\n"
+ "%s %s\n",
+ prefix, layers[i].layerName,
+ ExtractMajorVersion(layers[i].specVersion),
+ ExtractMinorVersion(layers[i].specVersion),
+ ExtractPatchVersion(layers[i].specVersion),
+ layers[i].implementationVersion, prefix, layers[i].description);
+ if (!extensions[i].empty())
+ printf("%s Extensions [%zu]:\n", prefix, extensions[i].size());
+ PrintExtensions(extensions[i], ext_prefix.c_str());
+ }
+}
+
+void PrintGpuInfo(const GpuInfo& info) {
+ std::ostringstream strbuf;
+
+ printf(" \"%s\" (%s) %u.%u.%u/%#x [%04x:%04x]\n",
+ info.properties.deviceName,
+ VkPhysicalDeviceTypeStr(info.properties.deviceType),
+ ExtractMajorVersion(info.properties.apiVersion),
+ ExtractMinorVersion(info.properties.apiVersion),
+ ExtractPatchVersion(info.properties.apiVersion),
+ info.properties.driverVersion, info.properties.vendorID,
+ info.properties.deviceID);
+
+ for (uint32_t heap = 0; heap < info.memory.memoryHeapCount; heap++) {
+ if ((info.memory.memoryHeaps[heap].flags &
+ VK_MEMORY_HEAP_DEVICE_LOCAL_BIT) != 0)
+ strbuf << "DEVICE_LOCAL";
+ printf(" Heap %u: %" PRIu64 " MiB (0x%" PRIx64 " B) %s\n", heap,
+               info.memory.memoryHeaps[heap].size / 0x100000,
+ info.memory.memoryHeaps[heap].size, strbuf.str().c_str());
+ strbuf.str(std::string());
+
+ for (uint32_t type = 0; type < info.memory.memoryTypeCount; type++) {
+ if (info.memory.memoryTypes[type].heapIndex != heap)
+ continue;
+ VkMemoryPropertyFlags flags =
+ info.memory.memoryTypes[type].propertyFlags;
+ if ((flags & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) != 0)
+ strbuf << " DEVICE_LOCAL";
+ if ((flags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
+ strbuf << " HOST_VISIBLE";
+ if ((flags & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT) != 0)
+ strbuf << " COHERENT";
+ if ((flags & VK_MEMORY_PROPERTY_HOST_CACHED_BIT) != 0)
+ strbuf << " CACHED";
+ if ((flags & VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT) != 0)
+ strbuf << " LAZILY_ALLOCATED";
+ printf(" Type %u:%s\n", type, strbuf.str().c_str());
+ strbuf.str(std::string());
+ }
+ }
+
+ for (uint32_t family = 0; family < info.queue_families.size(); family++) {
+ const VkQueueFamilyProperties& qprops = info.queue_families[family];
+ VkQueueFlags flags = qprops.queueFlags;
+ char flags_str[5];
+ flags_str[0] = (flags & VK_QUEUE_GRAPHICS_BIT) ? 'G' : '_';
+ flags_str[1] = (flags & VK_QUEUE_COMPUTE_BIT) ? 'C' : '_';
+ flags_str[2] = (flags & VK_QUEUE_TRANSFER_BIT) ? 'T' : '_';
+ flags_str[3] = (flags & VK_QUEUE_SPARSE_BINDING_BIT) ? 'S' : '_';
+ flags_str[4] = '\0';
+ printf(
+ " Queue Family %u: %ux %s\n"
+ " timestampValidBits: %ub\n"
+ " minImageTransferGranularity: (%u,%u,%u)\n",
+ family, qprops.queueCount, flags_str, qprops.timestampValidBits,
+ qprops.minImageTransferGranularity.width,
+ qprops.minImageTransferGranularity.height,
+ qprops.minImageTransferGranularity.depth);
+ }
+
+ if (!info.extensions.empty()) {
+ printf(" Extensions [%zu]:\n", info.extensions.size());
+ PrintExtensions(info.extensions, " ");
+ }
+ if (!info.layers.empty()) {
+ printf(" Layers [%zu]:\n", info.layers.size());
+ PrintLayers(info.layers, info.layer_extensions, " ");
+ }
+}
+
+void PrintInfo(const VulkanInfo& info) {
+
+ printf("Instance Extensions [%zu]:\n", info.extensions.size());
+ PrintExtensions(info.extensions, " ");
+ if (!info.layers.empty()) {
+ printf("Instance Layers [%zu]:\n", info.layers.size());
+ PrintLayers(info.layers, info.layer_extensions, " ");
+ }
+
+ printf("PhysicalDevices [%zu]:\n", info.gpus.size());
+ for (const auto& gpu : info.gpus)
+ PrintGpuInfo(gpu);
+}
+
+} // namespace
+
+// ----------------------------------------------------------------------------
+
+int main(int /*argc*/, char const* /*argv*/ []) {
+ VulkanInfo info;
+ GatherInfo(&info);
+ PrintInfo(info);
+ return 0;
+}