<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>FindCUDA &mdash; CMake 3.8.2 Documentation</title>
<link rel="stylesheet" href="../_static/cmake.css" type="text/css" />
<link rel="stylesheet" href="../_static/pygments.css" type="text/css" />
<script type="text/javascript">
var DOCUMENTATION_OPTIONS = {
URL_ROOT: '../',
VERSION: '3.8.2',
COLLAPSE_INDEX: false,
FILE_SUFFIX: '.html',
HAS_SOURCE: true,
SOURCELINK_SUFFIX: '.txt'
};
</script>
<script type="text/javascript" src="../_static/jquery.js"></script>
<script type="text/javascript" src="../_static/underscore.js"></script>
<script type="text/javascript" src="../_static/doctools.js"></script>
<link rel="shortcut icon" href="../_static/cmake-favicon.ico"/>
<link rel="index" title="Index" href="../genindex.html" />
<link rel="search" title="Search" href="../search.html" />
<link rel="next" title="FindCups" href="FindCups.html" />
<link rel="prev" title="FindCoin3D" href="FindCoin3D.html" />
</head>
<body role="document">
<div class="related" role="navigation" aria-label="related navigation">
<h3>Navigation</h3>
<ul>
<li class="right" style="margin-right: 10px">
<a href="../genindex.html" title="General Index"
accesskey="I">index</a></li>
<li class="right" >
<a href="FindCups.html" title="FindCups"
accesskey="N">next</a> |</li>
<li class="right" >
<a href="FindCoin3D.html" title="FindCoin3D"
accesskey="P">previous</a> |</li>
<li>
<img src="../_static/cmake-logo-16.png" alt=""
style="vertical-align: middle; margin-top: -2px" />
</li>
<li>
<a href="https://cmake.org/">CMake</a> &#187;
</li>
<li>
<a href="../index.html">3.8.2 Documentation</a> &#187;
</li>
<li class="nav-item nav-item-1"><a href="../manual/cmake-modules.7.html" accesskey="U">cmake-modules(7)</a> &#187;</li>
</ul>
</div>
<div class="document">
<div class="documentwrapper">
<div class="bodywrapper">
<div class="body" role="main">
<div class="section" id="findcuda">
<span id="module:FindCUDA"></span><h1>FindCUDA<a class="headerlink" href="#findcuda" title="Permalink to this headline">¶</a></h1>
<p>Tools for building CUDA C files: libraries and build dependencies.</p>
<p>This script locates the NVIDIA CUDA C tools. It should work on Linux,
Windows, and macOS, and should be reasonably up to date with CUDA C
releases.</p>
<p>This script makes use of the standard find_package arguments of
&lt;VERSION&gt;, REQUIRED and QUIET. CUDA_FOUND will report if an
acceptable version of CUDA was found.</p>
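<p>A minimal sketch of such a call is shown below; the minimum version number
and the status message are illustrative rather than required by the module:</p>
<div class="highlight-default"><div class="highlight"><pre><span></span># Request CUDA, failing the configure step if it is missing.
find_package(CUDA 7.5 REQUIRED)
if(CUDA_FOUND)
  message(STATUS &quot;Found CUDA ${CUDA_VERSION_STRING} at ${CUDA_TOOLKIT_ROOT_DIR}&quot;)
endif()
</pre></div>
</div>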
<p>The script will prompt the user to specify CUDA_TOOLKIT_ROOT_DIR if
the prefix cannot be determined by the location of nvcc in the system
path and REQUIRED is specified to find_package(). To use a different
installed version of the toolkit set the environment variable
CUDA_BIN_PATH before running cmake (e.g.
CUDA_BIN_PATH=/usr/local/cuda1.0 instead of the default
/usr/local/cuda) or set CUDA_TOOLKIT_ROOT_DIR after configuring. If
you change the value of CUDA_TOOLKIT_ROOT_DIR, various components that
depend on the path will be relocated.</p>
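<p>As an illustration (the toolkit path below is an assumed example, not a
recommendation), a project that must build against one particular toolkit
could seed the cache variable before the module runs:</p>
<div class="highlight-default"><div class="highlight"><pre><span></span># Point FindCUDA at a specific toolkit installation.
set(CUDA_TOOLKIT_ROOT_DIR &quot;/usr/local/cuda-8.0&quot; CACHE PATH &quot;Path to the CUDA Toolkit&quot;)
find_package(CUDA REQUIRED)
</pre></div>
</div>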
<p>It might be necessary to set CUDA_TOOLKIT_ROOT_DIR manually on certain
platforms, or to use a CUDA runtime not installed in the default
location. In newer versions of the toolkit the CUDA library is
included with the graphics driver; be sure that the driver version
matches what is needed by the CUDA runtime version.</p>
<p>The following variables affect the behavior of the macros in the
script (in alphabetical order); a short example of setting a few of them
follows the list. Note that any of these flags can be
changed multiple times in the same directory before calling
CUDA_ADD_EXECUTABLE, CUDA_ADD_LIBRARY, CUDA_COMPILE, CUDA_COMPILE_PTX,
CUDA_COMPILE_FATBIN, CUDA_COMPILE_CUBIN or CUDA_WRAP_SRCS:</p>
<div class="highlight-default"><div class="highlight"><pre><span></span>CUDA_64_BIT_DEVICE_CODE (Default matches host bit size)
-- Set to ON to compile for 64 bit device code, OFF for 32 bit device code.
Note that making this different from the host code when generating object
or C files from CUDA code just won&#39;t work, because size_t gets defined by
nvcc in the generated source. If you compile to PTX and then load the
file yourself, you can mix bit sizes between device and host.
CUDA_ATTACH_VS_BUILD_RULE_TO_CUDA_FILE (Default ON)
-- Set to ON if you want the custom build rule to be attached to the source
file in Visual Studio. Turn OFF if you add the same cuda file to multiple
targets.
This allows the user to build the target from the CUDA file; however, bad
things can happen if the CUDA source file is added to multiple targets.
When performing parallel builds it is possible for the custom build
command to be run more than once and in parallel causing cryptic build
errors. VS runs the rules for every source file in the target, and a
source can have only one rule no matter how many projects it is added to.
When the rule is run from multiple targets race conditions can occur on
the generated file. Eventually everything will get built, but if the user
is unaware of this behavior, there may be confusion. It would be nice if
this script could detect the reuse of source files across multiple targets
and turn the option off for the user, but no good solution could be found.
CUDA_BUILD_CUBIN (Default OFF)
-- Set to ON to enable an extra compilation pass with the -cubin option in
Device mode. The output is parsed, and register and shared memory usage
are printed during the build.
CUDA_BUILD_EMULATION (Default OFF for device mode)
-- Set to ON for Emulation mode. -D_DEVICEEMU is defined for CUDA C files
when CUDA_BUILD_EMULATION is TRUE.
CUDA_GENERATED_OUTPUT_DIR (Default CMAKE_CURRENT_BINARY_DIR)
-- Set to the path you wish to have the generated files placed. If it is
blank output files will be placed in CMAKE_CURRENT_BINARY_DIR.
Intermediate files will always be placed in
CMAKE_CURRENT_BINARY_DIR/CMakeFiles.
CUDA_HOST_COMPILATION_CPP (Default ON)
-- Set to OFF for C compilation of host code.
CUDA_HOST_COMPILER (Default CMAKE_C_COMPILER, $(VCInstallDir)/bin for VS)
-- Set the host compiler to be used by nvcc. Ignored if -ccbin or
--compiler-bindir is already present in the CUDA_NVCC_FLAGS or
CUDA_NVCC_FLAGS_&lt;CONFIG&gt; variables. For Visual Studio targets
$(VCInstallDir)/bin is a special value that expands out to the path when
the command is run from within VS.
CUDA_NVCC_FLAGS
CUDA_NVCC_FLAGS_&lt;CONFIG&gt;
-- Additional NVCC command line arguments. NOTE: multiple arguments must be
semi-colon delimited (e.g. --compiler-options;-Wall)
CUDA_PROPAGATE_HOST_FLAGS (Default ON)
-- Set to ON to propagate CMAKE_{C,CXX}_FLAGS and their configuration
dependent counterparts (e.g. CMAKE_C_FLAGS_DEBUG) automatically to the
host compiler through nvcc&#39;s -Xcompiler flag. This helps make the
generated host code match the rest of the system better. Sometimes
certain flags give nvcc problems, and this will help you turn the flag
propagation off. This does not affect the flags supplied directly to nvcc
via CUDA_NVCC_FLAGS or through the OPTION flags specified through
CUDA_ADD_LIBRARY, CUDA_ADD_EXECUTABLE, or CUDA_WRAP_SRCS. Flags used for
shared library compilation are not affected by this flag.
CUDA_SEPARABLE_COMPILATION (Default OFF)
-- If set this will enable separable compilation for all CUDA runtime object
files. If used outside of CUDA_ADD_EXECUTABLE and CUDA_ADD_LIBRARY
(e.g. calling CUDA_WRAP_SRCS directly),
CUDA_COMPUTE_SEPARABLE_COMPILATION_OBJECT_FILE_NAME and
CUDA_LINK_SEPARABLE_COMPILATION_OBJECTS should be called.
CUDA_SOURCE_PROPERTY_FORMAT
-- If this source file property is set, it can override the format specified
to CUDA_WRAP_SRCS (OBJ, PTX, CUBIN, or FATBIN). If an input source file
is not a .cu file, setting this property will cause it to be treated as a
.cu file. See documentation for set_source_files_properties on how to set
this property.
CUDA_USE_STATIC_CUDA_RUNTIME (Default ON)
-- When enabled the static version of the CUDA runtime library will be used
in CUDA_LIBRARIES. If the version of CUDA configured doesn&#39;t support
this option, then it will be silently disabled.
CUDA_VERBOSE_BUILD (Default OFF)
-- Set to ON to see all the commands used when building the CUDA file. When
using a Makefile generator the value defaults to VERBOSE (run make
VERBOSE=1 to see output), although setting CUDA_VERBOSE_BUILD to ON will
always print the output.
</pre></div>
</div>
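<p>The example mentioned above: a hedged sketch of toggling a few of these
variables before the CUDA macros are invoked (the target and file names are
invented for illustration):</p>
<div class="highlight-default"><div class="highlight"><pre><span></span># These variables must be set before CUDA_ADD_LIBRARY / CUDA_ADD_EXECUTABLE.
set(CUDA_SEPARABLE_COMPILATION ON)
set(CUDA_PROPAGATE_HOST_FLAGS OFF)
list(APPEND CUDA_NVCC_FLAGS --use_fast_math)
# Per-configuration flags use the CUDA_NVCC_FLAGS_&lt;CONFIG&gt; form.
list(APPEND CUDA_NVCC_FLAGS_DEBUG -G)
cuda_add_library(example_kernels STATIC kernels.cu)
</pre></div>
</div>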
<p>The script creates the following macros (in alphabetical order); a brief
usage sketch follows the list:</p>
<div class="highlight-default"><div class="highlight"><pre><span></span>CUDA_ADD_CUFFT_TO_TARGET( cuda_target )
-- Adds the cufft library to the target (can be any target). Handles whether
you are in emulation mode or not.
CUDA_ADD_CUBLAS_TO_TARGET( cuda_target )
-- Adds the cublas library to the target (can be any target). Handles
whether you are in emulation mode or not.
CUDA_ADD_EXECUTABLE( cuda_target file0 file1 ...
[WIN32] [MACOSX_BUNDLE] [EXCLUDE_FROM_ALL] [OPTIONS ...] )
-- Creates an executable &quot;cuda_target&quot; which is made up of the files
specified. All of the non-CUDA C files are compiled using the standard
build rules specified by CMake and the CUDA files are compiled to object
files using nvcc and the host compiler. In addition CUDA_INCLUDE_DIRS is
added automatically to include_directories(). Some standard CMake target
calls can be used on the target after calling this macro
(e.g. set_target_properties and target_link_libraries), but setting
properties that adjust compilation flags will not affect code compiled by
nvcc. Such flags should be modified before calling CUDA_ADD_EXECUTABLE,
CUDA_ADD_LIBRARY or CUDA_WRAP_SRCS.
CUDA_ADD_LIBRARY( cuda_target file0 file1 ...
[STATIC | SHARED | MODULE] [EXCLUDE_FROM_ALL] [OPTIONS ...] )
-- Same as CUDA_ADD_EXECUTABLE except that a library is created.
CUDA_BUILD_CLEAN_TARGET()
-- Creates a convenience target that deletes all the dependency files
generated. You should run make clean after running this target to ensure the
dependency files get regenerated.
CUDA_COMPILE( generated_files file0 file1 ... [STATIC | SHARED | MODULE]
[OPTIONS ...] )
-- Returns a list of generated files from the input source files to be used
with ADD_LIBRARY or ADD_EXECUTABLE.
CUDA_COMPILE_PTX( generated_files file0 file1 ... [OPTIONS ...] )
-- Returns a list of PTX files generated from the input source files.
CUDA_COMPILE_FATBIN( generated_files file0 file1 ... [OPTIONS ...] )
-- Returns a list of FATBIN files generated from the input source files.
CUDA_COMPILE_CUBIN( generated_files file0 file1 ... [OPTIONS ...] )
-- Returns a list of CUBIN files generated from the input source files.
CUDA_COMPUTE_SEPARABLE_COMPILATION_OBJECT_FILE_NAME( output_file_var
cuda_target
object_files )
-- Compute the name of the intermediate link file used for separable
compilation. This file name is typically passed into
CUDA_LINK_SEPARABLE_COMPILATION_OBJECTS. output_file_var is produced
based on cuda_target and the list of object files that need separable
compilation, as specified by object_files. If the object_files list is
empty, then output_file_var will be empty. This function is called
automatically for CUDA_ADD_LIBRARY and CUDA_ADD_EXECUTABLE. Note that
this is a function and not a macro.
CUDA_INCLUDE_DIRECTORIES( path0 path1 ... )
-- Sets the directories that should be passed to nvcc
(e.g. nvcc -Ipath0 -Ipath1 ... ). These paths usually contain other .cu
files.
CUDA_LINK_SEPARABLE_COMPILATION_OBJECTS( output_file_var cuda_target
nvcc_flags object_files)
-- Generates the link object required by separable compilation from the given
object files. This is called automatically for CUDA_ADD_EXECUTABLE and
CUDA_ADD_LIBRARY, but can be called manually when using CUDA_WRAP_SRCS
directly. When called from CUDA_ADD_LIBRARY or CUDA_ADD_EXECUTABLE the
nvcc_flags passed in are the same as the flags passed in via the OPTIONS
argument. The only nvcc flag added automatically is the bitness flag as
specified by CUDA_64_BIT_DEVICE_CODE. Note that this is a function
instead of a macro.
CUDA_SELECT_NVCC_ARCH_FLAGS(out_variable [target_CUDA_architectures])
-- Selects GPU arch flags for nvcc based on target_CUDA_architectures
target_CUDA_architectures : Auto | Common | All | LIST(ARCH_AND_PTX ...)
- &quot;Auto&quot; detects local machine GPU compute arch at runtime.
- &quot;Common&quot; and &quot;All&quot; cover common and entire subsets of architectures
ARCH_AND_PTX : NAME | NUM.NUM | NUM.NUM(NUM.NUM) | NUM.NUM+PTX
NAME: Fermi Kepler Maxwell Kepler+Tegra Kepler+Tesla Maxwell+Tegra Pascal
NUM: Any number. Only those pairs are currently accepted by NVCC though:
2.0 2.1 3.0 3.2 3.5 3.7 5.0 5.2 5.3 6.0 6.2
Returns LIST of flags to be added to CUDA_NVCC_FLAGS in ${out_variable}
Additionally, sets ${out_variable}_readable to the resulting numeric list
Example:
CUDA_SELECT_NVCC_ARCH_FLAGS(ARCH_FLAGS 3.0 3.5+PTX 5.2(5.0) Maxwell)
LIST(APPEND CUDA_NVCC_FLAGS ${ARCH_FLAGS})
More info on CUDA architectures: https://en.wikipedia.org/wiki/CUDA
Note that this is a function instead of a macro.
CUDA_WRAP_SRCS ( cuda_target format generated_files file0 file1 ...
[STATIC | SHARED | MODULE] [OPTIONS ...] )
-- This is where all the magic happens. CUDA_ADD_EXECUTABLE,
CUDA_ADD_LIBRARY, CUDA_COMPILE, and CUDA_COMPILE_PTX all call this
function under the hood.
Given the list of files (file0 file1 ... fileN) this macro generates
custom commands that generate either PTX or linkable objects (use &quot;PTX&quot; or
&quot;OBJ&quot; for the format argument to switch). Files that don&#39;t end with .cu
or have the HEADER_FILE_ONLY property are ignored.
The arguments passed in after OPTIONS are extra command line options to
give to nvcc. You can also specify per configuration options by
specifying the name of the configuration followed by the options. General
options must precede configuration specific options. Not all
configurations need to be specified, only the ones provided will be used.
OPTIONS -DFLAG=2 &quot;-DFLAG_OTHER=space in flag&quot;
DEBUG -g
RELEASE --use_fast_math
RELWITHDEBINFO --use_fast_math;-g
MINSIZEREL --use_fast_math
For certain configurations (namely VS generating object files with
CUDA_ATTACH_VS_BUILD_RULE_TO_CUDA_FILE set to ON), no generated file will
be produced for the given cuda file. This is because when you add the
cuda file to Visual Studio it knows that this file produces an object file
and will link in the resulting object file automatically.
This script will also generate a separate cmake script that is used at
build time to invoke nvcc. This is for several reasons.
1. nvcc can return negative numbers as return values which confuses
Visual Studio into thinking that the command succeeded. The script now
checks the error codes and produces errors when there was a problem.
2. nvcc has been known to not delete incomplete results when it
encounters problems. This confuses build systems into thinking the
target was generated when in fact an unusable file exists. The script
now deletes the output files if there was an error.
3. By putting all the options that affect the build into a file and then
making the build rule dependent on the file, the output files will be
regenerated when the options change.
This script also looks at optional arguments STATIC, SHARED, or MODULE to
determine when to target the object compilation for a shared library.
BUILD_SHARED_LIBS is ignored in CUDA_WRAP_SRCS, but it is respected in
CUDA_ADD_LIBRARY. On some systems special flags are added for building
objects intended for shared libraries. A preprocessor macro,
&lt;target_name&gt;_EXPORTS is defined when a shared library compilation is
detected.
Flags passed into add_definitions with -D or /D are passed along to nvcc.
</pre></div>
</div>
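<p>The usage sketch promised above ties several of these macros together. All
target and file names here are invented for illustration, and the options are
examples only:</p>
<div class="highlight-default"><div class="highlight"><pre><span></span># Directories passed to nvcc via -I; typically where shared .cu headers live.
cuda_include_directories(${CMAKE_CURRENT_SOURCE_DIR}/cuda)

# Compile .cu files into a library, with general and per-configuration options.
cuda_add_library(example_device STATIC device_code.cu
                 OPTIONS -DEXAMPLE_FLAG=1 DEBUG -g RELEASE --use_fast_math)

# Mixed host/device executable; non-CUDA sources use the normal CMake rules.
cuda_add_executable(example_app main.cpp app_kernels.cu)
target_link_libraries(example_app example_device)

# Add cuBLAS to the target (handles emulation mode automatically).
cuda_add_cublas_to_target(example_app)
</pre></div>
</div>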
<p>The script defines the following variables:</p>
<div class="highlight-default"><div class="highlight"><pre><span></span>CUDA_VERSION_MAJOR    -- The major version of CUDA as reported by nvcc.
CUDA_VERSION_MINOR    -- The minor version.
CUDA_VERSION
CUDA_VERSION_STRING   -- CUDA_VERSION_MAJOR.CUDA_VERSION_MINOR
CUDA_HAS_FP16         -- Whether a short float (float16, fp16) is supported.
CUDA_TOOLKIT_ROOT_DIR -- Path to the CUDA Toolkit (defined if not set).
CUDA_SDK_ROOT_DIR     -- Path to the CUDA SDK. Use this to find files in the
                         SDK. This script will not directly support finding
                         specific libraries or headers, as that isn&#39;t
                         supported by NVIDIA. If you want to change
                         libraries when the path changes, see the
                         FindCUDA.cmake script for an example of how to clear
                         these variables. There are also examples of how to
                         use CUDA_SDK_ROOT_DIR to locate headers or
                         libraries, if you so choose (at your own risk).
CUDA_INCLUDE_DIRS     -- Include directory for CUDA headers. Added automatically
                         for CUDA_ADD_EXECUTABLE and CUDA_ADD_LIBRARY.
CUDA_LIBRARIES        -- CUDA runtime library.
CUDA_CUFFT_LIBRARIES  -- Device or emulation library for the CUDA FFT
                         implementation (alternative to the
                         CUDA_ADD_CUFFT_TO_TARGET macro).
CUDA_CUBLAS_LIBRARIES -- Device or emulation library for the CUDA BLAS
                         implementation (alternative to the
                         CUDA_ADD_CUBLAS_TO_TARGET macro).
CUDA_cudart_static_LIBRARY -- Statically linkable CUDA runtime library.
                              Only available for CUDA version 5.5+.
CUDA_cudadevrt_LIBRARY     -- Device runtime library.
                              Required for separable compilation.
CUDA_cupti_LIBRARY         -- CUDA Profiling Tools Interface library.
                              Only available for CUDA version 4.0+.
CUDA_curand_LIBRARY        -- CUDA Random Number Generation library.
                              Only available for CUDA version 3.2+.
CUDA_cusolver_LIBRARY      -- CUDA Direct Solver library.
                              Only available for CUDA version 7.0+.
CUDA_cusparse_LIBRARY      -- CUDA Sparse Matrix library.
                              Only available for CUDA version 3.2+.
CUDA_npp_LIBRARY           -- NVIDIA Performance Primitives library.
                              Only available for CUDA version 4.0+.
CUDA_nppc_LIBRARY          -- NVIDIA Performance Primitives library (core).
                              Only available for CUDA version 5.5+.
CUDA_nppi_LIBRARY          -- NVIDIA Performance Primitives library (image processing).
                              Only available for CUDA version 5.5+.
CUDA_npps_LIBRARY          -- NVIDIA Performance Primitives library (signal processing).
                              Only available for CUDA version 5.5+.
CUDA_nvcuvenc_LIBRARY      -- CUDA Video Encoder library.
                              Only available for CUDA version 3.2+.
                              Windows only.
CUDA_nvcuvid_LIBRARY       -- CUDA Video Decoder library.
                              Only available for CUDA version 3.2+.
                              Windows only.
</pre></div>
</div>
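<p>Finally, a short sketch of consuming these variables from an ordinary host-only
target; the target and source names are hypothetical:</p>
<div class="highlight-default"><div class="highlight"><pre><span></span># Host-only program that calls the CUDA runtime and cuRAND directly.
include_directories(${CUDA_INCLUDE_DIRS})
add_executable(host_tool host_tool.cpp)
target_link_libraries(host_tool ${CUDA_LIBRARIES} ${CUDA_curand_LIBRARY})
</pre></div>
</div>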
</div>
</div>
</div>
</div>
<div class="sphinxsidebar" role="navigation" aria-label="main navigation">
<div class="sphinxsidebarwrapper">
<h4>Previous topic</h4>
<p class="topless"><a href="FindCoin3D.html"
title="previous chapter">FindCoin3D</a></p>
<h4>Next topic</h4>
<p class="topless"><a href="FindCups.html"
title="next chapter">FindCups</a></p>
<div role="note" aria-label="source link">
<h3>This Page</h3>
<ul class="this-page-menu">
<li><a href="../_sources/module/FindCUDA.rst.txt"
rel="nofollow">Show Source</a></li>
</ul>
</div>
<div id="searchbox" style="display: none" role="search">
<h3>Quick search</h3>
<form class="search" action="../search.html" method="get">
<div><input type="text" name="q" /></div>
<div><input type="submit" value="Go" /></div>
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</form>
</div>
<script type="text/javascript">$('#searchbox').show(0);</script>
</div>
</div>
<div class="clearer"></div>
</div>
<div class="footer" role="contentinfo">
&#169; Copyright 2000-2017 Kitware, Inc. and Contributors.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.5.2.
</div>
</body>
</html>