This is ../../../doc/bison.info, produced by makeinfo version 4.13 from
../../../doc/bison.texi.
This manual (9 December 2012) is for GNU Bison (version 2.7), the GNU
parser generator.
Copyright (C) 1988-1993, 1995, 1998-2012 Free Software Foundation,
Inc.
Permission is granted to copy, distribute and/or modify this
document under the terms of the GNU Free Documentation License,
Version 1.3 or any later version published by the Free Software
Foundation; with no Invariant Sections, with the Front-Cover texts
being "A GNU Manual," and with the Back-Cover Texts as in (a)
below. A copy of the license is included in the section entitled
"GNU Free Documentation License."
(a) The FSF's Back-Cover Text is: "You have the freedom to copy and
modify this GNU manual. Buying copies from the FSF supports it in
developing GNU and promoting software freedom."
INFO-DIR-SECTION Software development
START-INFO-DIR-ENTRY
* bison: (bison). GNU parser generator (Yacc replacement).
END-INFO-DIR-ENTRY

File: bison.info, Node: Top, Next: Introduction, Up: (dir)
Bison
*****
This manual (9 December 2012) is for GNU Bison (version 2.7), the GNU
parser generator.
Copyright (C) 1988-1993, 1995, 1998-2012 Free Software Foundation,
Inc.
Permission is granted to copy, distribute and/or modify this
document under the terms of the GNU Free Documentation License,
Version 1.3 or any later version published by the Free Software
Foundation; with no Invariant Sections, with the Front-Cover texts
being "A GNU Manual," and with the Back-Cover Texts as in (a)
below. A copy of the license is included in the section entitled
"GNU Free Documentation License."
(a) The FSF's Back-Cover Text is: "You have the freedom to copy and
modify this GNU manual. Buying copies from the FSF supports it in
developing GNU and promoting software freedom."
* Menu:
* Introduction::
* Conditions::
* Copying:: The GNU General Public License says
how you can copy and share Bison.
Tutorial sections:
* Concepts:: Basic concepts for understanding Bison.
* Examples:: Three simple explained examples of using Bison.
Reference sections:
* Grammar File:: Writing Bison declarations and rules.
* Interface:: C-language interface to the parser function `yyparse'.
* Algorithm:: How the Bison parser works at run-time.
* Error Recovery:: Writing rules for error recovery.
* Context Dependency:: What to do if your language syntax is too
messy for Bison to handle straightforwardly.
* Debugging:: Understanding or debugging Bison parsers.
* Invocation:: How to run Bison (to produce the parser implementation).
* Other Languages:: Creating C++ and Java parsers.
* FAQ:: Frequently Asked Questions
* Table of Symbols:: All the keywords of the Bison language are explained.
* Glossary:: Basic concepts are explained.
* Copying This Manual:: License for copying this manual.
* Bibliography:: Publications cited in this manual.
* Index of Terms:: Cross-references to the text.
--- The Detailed Node Listing ---
The Concepts of Bison
* Language and Grammar:: Languages and context-free grammars,
as mathematical ideas.
* Grammar in Bison:: How we represent grammars for Bison's sake.
* Semantic Values:: Each token or syntactic grouping can have
a semantic value (the value of an integer,
the name of an identifier, etc.).
* Semantic Actions:: Each rule can have an action containing C code.
* GLR Parsers:: Writing parsers for general context-free languages.
* Locations:: Overview of location tracking.
* Bison Parser:: What are Bison's input and output,
how is the output used?
* Stages:: Stages in writing and running Bison grammars.
* Grammar Layout:: Overall structure of a Bison grammar file.
Writing GLR Parsers
* Simple GLR Parsers:: Using GLR parsers on unambiguous grammars.
* Merging GLR Parses:: Using GLR parsers to resolve ambiguities.
* GLR Semantic Actions:: Deferred semantic actions have special concerns.
* Compiler Requirements:: GLR parsers require a modern C compiler.
Examples
* RPN Calc:: Reverse polish notation calculator;
a first example with no operator precedence.
* Infix Calc:: Infix (algebraic) notation calculator.
Operator precedence is introduced.
* Simple Error Recovery:: Continuing after syntax errors.
* Location Tracking Calc:: Demonstrating the use of @N and @$.
* Multi-function Calc:: Calculator with memory and trig functions.
It uses multiple data-types for semantic values.
* Exercises:: Ideas for improving the multi-function calculator.
Reverse Polish Notation Calculator
* Rpcalc Declarations:: Prologue (declarations) for rpcalc.
* Rpcalc Rules:: Grammar Rules for rpcalc, with explanation.
* Rpcalc Lexer:: The lexical analyzer.
* Rpcalc Main:: The controlling function.
* Rpcalc Error:: The error reporting function.
* Rpcalc Generate:: Running Bison on the grammar file.
* Rpcalc Compile:: Run the C compiler on the output code.
Grammar Rules for `rpcalc'
* Rpcalc Input::
* Rpcalc Line::
* Rpcalc Expr::
Location Tracking Calculator: `ltcalc'
* Ltcalc Declarations:: Bison and C declarations for ltcalc.
* Ltcalc Rules:: Grammar rules for ltcalc, with explanations.
* Ltcalc Lexer:: The lexical analyzer.
Multi-Function Calculator: `mfcalc'
* Mfcalc Declarations:: Bison declarations for multi-function calculator.
* Mfcalc Rules:: Grammar rules for the calculator.
* Mfcalc Symbol Table:: Symbol table management subroutines.
Bison Grammar Files
* Grammar Outline:: Overall layout of the grammar file.
* Symbols:: Terminal and nonterminal symbols.
* Rules:: How to write grammar rules.
* Recursion:: Writing recursive rules.
* Semantics:: Semantic values and actions.
* Tracking Locations:: Locations and actions.
* Named References:: Using named references in actions.
* Declarations:: All kinds of Bison declarations are described here.
* Multiple Parsers:: Putting more than one Bison parser in one program.
Outline of a Bison Grammar
* Prologue:: Syntax and usage of the prologue.
* Prologue Alternatives:: Syntax and usage of alternatives to the prologue.
* Bison Declarations:: Syntax and usage of the Bison declarations section.
* Grammar Rules:: Syntax and usage of the grammar rules section.
* Epilogue:: Syntax and usage of the epilogue.
Defining Language Semantics
* Value Type:: Specifying one data type for all semantic values.
* Multiple Types:: Specifying several alternative data types.
* Actions:: An action is the semantic definition of a grammar rule.
* Action Types:: Specifying data types for actions to operate on.
* Mid-Rule Actions:: Most actions go at the end of a rule.
This says when, why and how to use the exceptional
action in the middle of a rule.
Actions in Mid-Rule
* Using Mid-Rule Actions:: Putting an action in the middle of a rule.
* Mid-Rule Action Translation:: How mid-rule actions are actually processed.
* Mid-Rule Conflicts:: Mid-rule actions can cause conflicts.
Tracking Locations
* Location Type:: Specifying a data type for locations.
* Actions and Locations:: Using locations in actions.
* Location Default Action:: Defining a general way to compute locations.
Bison Declarations
* Require Decl:: Requiring a Bison version.
* Token Decl:: Declaring terminal symbols.
* Precedence Decl:: Declaring terminals with precedence and associativity.
* Union Decl:: Declaring the set of all semantic value types.
* Type Decl:: Declaring the choice of type for a nonterminal symbol.
* Initial Action Decl:: Code run before parsing starts.
* Destructor Decl:: Declaring how symbols are freed.
* Printer Decl:: Declaring how symbol values are displayed.
* Expect Decl:: Suppressing warnings about parsing conflicts.
* Start Decl:: Specifying the start symbol.
* Pure Decl:: Requesting a reentrant parser.
* Push Decl:: Requesting a push parser.
* Decl Summary:: Table of all Bison declarations.
* %define Summary:: Defining variables to adjust Bison's behavior.
* %code Summary:: Inserting code into the parser source.
Parser C-Language Interface
* Parser Function:: How to call `yyparse' and what it returns.
* Push Parser Function:: How to call `yypush_parse' and what it returns.
* Pull Parser Function:: How to call `yypull_parse' and what it returns.
* Parser Create Function:: How to call `yypstate_new' and what it returns.
* Parser Delete Function:: How to call `yypstate_delete' and what it returns.
* Lexical:: You must supply a function `yylex'
which reads tokens.
* Error Reporting:: You must supply a function `yyerror'.
* Action Features:: Special features for use in actions.
* Internationalization:: How to let the parser speak in the user's
native language.
The Lexical Analyzer Function `yylex'
* Calling Convention:: How `yyparse' calls `yylex'.
* Token Values:: How `yylex' must return the semantic value
of the token it has read.
* Token Locations:: How `yylex' must return the text location
(line number, etc.) of the token, if the
actions want that.
* Pure Calling:: How the calling convention differs in a pure parser
(*note A Pure (Reentrant) Parser: Pure Decl.).
The Bison Parser Algorithm
* Lookahead:: Parser looks one token ahead when deciding what to do.
* Shift/Reduce:: Conflicts: when either shifting or reduction is valid.
* Precedence:: Operator precedence works by resolving conflicts.
* Contextual Precedence:: When an operator's precedence depends on context.
* Parser States:: The parser is a finite-state-machine with stack.
* Reduce/Reduce:: When two rules are applicable in the same situation.
* Mysterious Conflicts:: Conflicts that look unjustified.
* Tuning LR:: How to tune fundamental aspects of LR-based parsing.
* Generalized LR Parsing:: Parsing arbitrary context-free grammars.
* Memory Management:: What happens when memory is exhausted. How to avoid it.
Operator Precedence
* Why Precedence:: An example showing why precedence is needed.
* Using Precedence:: How to specify precedence in Bison grammars.
* Precedence Examples:: How these features are used in the previous example.
* How Precedence:: How they work.
* Non Operators:: Using precedence for general conflicts.
Tuning LR
* LR Table Construction:: Choose a different construction algorithm.
* Default Reductions:: Disable default reductions.
* LAC:: Correct lookahead sets in the parser states.
* Unreachable States:: Keep unreachable parser states for debugging.
Handling Context Dependencies
* Semantic Tokens:: Token parsing can depend on the semantic context.
* Lexical Tie-ins:: Token parsing can depend on the syntactic context.
* Tie-in Recovery:: Lexical tie-ins have implications for how
error recovery rules must be written.
Debugging Your Parser
* Understanding:: Understanding the structure of your parser.
* Graphviz:: Getting a visual representation of the parser.
* Xml:: Getting a markup representation of the parser.
* Tracing:: Tracing the execution of your parser.
Tracing Your Parser
* Enabling Traces:: Activating run-time trace support
* Mfcalc Traces:: Extending `mfcalc' to support traces
* The YYPRINT Macro:: Obsolete interface for semantic value reports
Invoking Bison
* Bison Options:: All the options described in detail,
in alphabetical order by short options.
* Option Cross Key:: Alphabetical list of long options.
* Yacc Library:: Yacc-compatible `yylex' and `main'.
Parsers Written In Other Languages
* C++ Parsers:: The interface to generate C++ parser classes
* Java Parsers:: The interface to generate Java parser classes
C++ Parsers
* C++ Bison Interface:: Asking for C++ parser generation
* C++ Semantic Values:: %union vs. C++
* C++ Location Values:: The position and location classes
* C++ Parser Interface:: Instantiating and running the parser
* C++ Scanner Interface:: Exchanges between yylex and parse
* A Complete C++ Example:: Demonstrating their use
C++ Location Values
* C++ position:: One point in the source file
* C++ location:: Two points in the source file
* User Defined Location Type:: Required interface for locations
A Complete C++ Example
* Calc++ --- C++ Calculator:: The specifications
* Calc++ Parsing Driver:: An active parsing context
* Calc++ Parser:: A parser class
* Calc++ Scanner:: A pure C++ Flex scanner
* Calc++ Top Level:: Conducting the band
Java Parsers
* Java Bison Interface:: Asking for Java parser generation
* Java Semantic Values:: %type and %token vs. Java
* Java Location Values:: The position and location classes
* Java Parser Interface:: Instantiating and running the parser
* Java Scanner Interface:: Specifying the scanner for the parser
* Java Action Features:: Special features for use in actions
* Java Differences:: Differences between C/C++ and Java Grammars
* Java Declarations Summary:: List of Bison declarations used with Java
Frequently Asked Questions
* Memory Exhausted:: Breaking the Stack Limits
* How Can I Reset the Parser:: `yyparse' Keeps some State
* Strings are Destroyed:: `yylval' Loses Track of Strings
* Implementing Gotos/Loops:: Control Flow in the Calculator
* Multiple start-symbols:: Factoring closely related grammars
* Secure? Conform?:: Is Bison POSIX safe?
* I can't build Bison:: Troubleshooting
* Where can I find help?:: Troubleshouting
* Bug Reports:: Troublereporting
* More Languages:: Parsers in C++, Java, and so on
* Beta Testing:: Experimenting with development versions
* Mailing Lists:: Meeting other Bison users
Copying This Manual
* Copying This Manual:: License for copying this manual.

File: bison.info, Node: Introduction, Next: Conditions, Prev: Top, Up: Top
Introduction
************
"Bison" is a general-purpose parser generator that converts an
annotated context-free grammar into a deterministic LR or generalized
LR (GLR) parser employing LALR(1) parser tables. As an experimental
feature, Bison can also generate IELR(1) or canonical LR(1) parser
tables. Once you are proficient with Bison, you can use it to develop
a wide range of language parsers, from those used in simple desk
calculators to complex programming languages.
Bison is upward compatible with Yacc: all properly-written Yacc
grammars ought to work with Bison with no change. Anyone familiar with
Yacc should be able to use Bison with little trouble. You need to be
fluent in C or C++ programming in order to use Bison or to understand
this manual. Java is also supported as an experimental feature.
We begin with tutorial chapters that explain the basic concepts of
using Bison and show three explained examples, each building on the
last. If you don't know Bison or Yacc, start by reading these
chapters. Reference chapters follow, which describe specific aspects
of Bison in detail.
Bison was written originally by Robert Corbett. Richard Stallman
made it Yacc-compatible. Wilfred Hansen of Carnegie Mellon University
added multi-character string literals and other features. Since then,
Bison has grown more robust and evolved many other new features thanks
to the hard work of a long list of volunteers. For details, see the
`THANKS' and `ChangeLog' files included in the Bison distribution.
This edition corresponds to version 2.7 of Bison.

File: bison.info, Node: Conditions, Next: Copying, Prev: Introduction, Up: Top
Conditions for Using Bison
**************************
The distribution terms for Bison-generated parsers permit using the
parsers in nonfree programs. Before Bison version 2.2, these extra
permissions applied only when Bison was generating LALR(1) parsers in
C. And before Bison version 1.24, Bison-generated parsers could be
used only in programs that were free software.
The other GNU programming tools, such as the GNU C compiler, have
never had such a requirement. They could always be used for nonfree
software. The reason Bison was different was not due to a special
policy decision; it resulted from applying the usual General Public
License to all of the Bison source code.
The main output of the Bison utility--the Bison parser implementation
file--contains a verbatim copy of a sizable piece of Bison, which is
the code for the parser's implementation. (The actions from your
grammar are inserted into this implementation at one point, but most of
the rest of the implementation is not changed.) When we applied the
GPL terms to the skeleton code for the parser's implementation, the
effect was to restrict the use of Bison output to free software.
We didn't change the terms because of sympathy for people who want to
make software proprietary. *Software should be free.* But we
concluded that limiting Bison's use to free software was doing little to
encourage people to make other software free. So we decided to make the
practical conditions for using Bison match the practical conditions for
using the other GNU tools.
This exception applies when Bison is generating code for a parser.
You can tell whether the exception applies to a Bison output file by
inspecting the file for text beginning with "As a special
exception...". The text spells out the exact terms of the exception.

File: bison.info, Node: Copying, Next: Concepts, Prev: Conditions, Up: Top
GNU GENERAL PUBLIC LICENSE
**************************
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. `http://fsf.org/'
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Preamble
========
The GNU General Public License is a free, copyleft license for software
and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains
free software for all its users. We, the Free Software Foundation, use
the GNU General Public License for most of our software; it applies
also to any other work released this way by its authors. You can apply
it to your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you
have certain responsibilities if you distribute copies of the software,
or if you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the
manufacturer can do so. This is fundamentally incompatible with the
aim of protecting users' freedom to change the software. The
systematic pattern of such abuse occurs in the area of products for
individuals to use, which is precisely where it is most unacceptable.
Therefore, we have designed this version of the GPL to prohibit the
practice for those products. If such problems arise substantially in
other domains, we stand ready to extend this provision to those domains
in future versions of the GPL, as needed to protect the freedom of
users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
====================
0. Definitions.
"This License" refers to version 3 of the GNU General Public
License.
"Copyright" also means copyright-like laws that apply to other
kinds of works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the
work in a fashion requiring copyright permission, other than the
making of an exact copy. The resulting work is called a "modified
version" of the earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work
based on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it
on a computer or modifying a private copy. Propagation includes
copying, distribution (with or without modification), making
available to the public, and in some countries other activities as
well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user
through a computer network, with no transfer of a copy, is not
conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to
the extent that warranties are provided), that licensees may
convey the work under this License, and how to view a copy of this
License. If the interface presents a list of user commands or
options, such as a menu, a prominent item in the list meets this
criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any
non-source form of a work.
A "Standard Interface" means an interface that either is an
official standard defined by a recognized standards body, or, in
the case of interfaces specified for a particular programming
language, one that is widely used among developers working in that
language.
The "System Libraries" of an executable work include anything,
other than the work as a whole, that (a) is included in the normal
form of packaging a Major Component, but which is not part of that
Major Component, and (b) serves only to enable use of the work
with that Major Component, or to implement a Standard Interface
for which an implementation is available to the public in source
code form. A "Major Component", in this context, means a major
essential component (kernel, window system, and so on) of the
specific operating system (if any) on which the executable work
runs, or a compiler used to produce the work, or an object code
interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including
scripts to control those activities. However, it does not include
the work's System Libraries, or general-purpose tools or generally
available free programs which are used unmodified in performing
those activities but which are not part of the work. For example,
Corresponding Source includes interface definition files
associated with source files for the work, and the source code for
shared libraries and dynamically linked subprograms that the work
is specifically designed to require, such as by intimate data
communication or control flow between those subprograms and other
parts of the work.
The Corresponding Source need not include anything that users can
regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running
a covered work is covered by this License only if the output,
given its content, constitutes a covered work. This License
acknowledges your rights of fair use or other equivalent, as
provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise
remains in force. You may convey covered works to others for the
sole purpose of having them make modifications exclusively for
you, or provide you with facilities for running those works,
provided that you comply with the terms of this License in
conveying all material for which you do not control copyright.
Those thus making or running the covered works for you must do so
exclusively on your behalf, under your direction and control, on
terms that prohibit them from making any copies of your
copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section
10 makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under
article 11 of the WIPO copyright treaty adopted on 20 December
1996, or similar laws prohibiting or restricting circumvention of
such measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such
circumvention is effected by exercising rights under this License
with respect to the covered work, and you disclaim any intention
to limit operation or modification of the work as a means of
enforcing, against the work's users, your or third parties' legal
rights to forbid circumvention of technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the
code; keep intact all notices of the absence of any warranty; and
give all recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these
conditions:
a. The work must carry prominent notices stating that you
modified it, and giving a relevant date.
b. The work must carry prominent notices stating that it is
released under this License and any conditions added under
section 7. This requirement modifies the requirement in
section 4 to "keep intact all notices".
c. You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable
section 7 additional terms, to the whole of the work, and all
its parts, regardless of how they are packaged. This License
gives no permission to license the work in any other way, but
it does not invalidate such permission if you have separately
received it.
d. If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has
interactive interfaces that do not display Appropriate Legal
Notices, your work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered
work, and which are not combined with it such as to form a larger
program, in or on a volume of a storage or distribution medium, is
called an "aggregate" if the compilation and its resulting
copyright are not used to limit the access or legal rights of the
compilation's users beyond what the individual works permit.
Inclusion of a covered work in an aggregate does not cause this
License to apply to the other parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this
License, in one of these ways:
a. Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b. Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for
as long as you offer spare parts or customer support for that
product model, to give anyone who possesses the object code
either (1) a copy of the Corresponding Source for all the
software in the product that is covered by this License, on a
durable physical medium customarily used for software
interchange, for a price no more than your reasonable cost of
physically performing this conveying of source, or (2) access
to copy the Corresponding Source from a network server at no
charge.
c. Convey individual copies of the object code with a copy of
the written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially,
and only if you received the object code with such an offer,
in accord with subsection 6b.
d. Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access
to the Corresponding Source in the same way through the same
place at no further charge. You need not require recipients
to copy the Corresponding Source along with the object code.
If the place to copy the object code is a network server, the
Corresponding Source may be on a different server (operated
by you or a third party) that supports equivalent copying
facilities, provided you maintain clear directions next to
the object code saying where to find the Corresponding Source.
Regardless of what server hosts the Corresponding Source, you
remain obligated to ensure that it is available for as long
as needed to satisfy these requirements.
e. Convey the object code using peer-to-peer transmission,
provided you inform other peers where the object code and
Corresponding Source of the work are being offered to the
general public at no charge under subsection 6d.
A separable portion of the object code, whose source code is
excluded from the Corresponding Source as a System Library, need
not be included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means
any tangible personal property which is normally used for personal,
family, or household purposes, or (2) anything designed or sold for
incorporation into a dwelling. In determining whether a product
is a consumer product, doubtful cases shall be resolved in favor of
coverage. For a particular product received by a particular user,
"normally used" refers to a typical or common use of that class of
product, regardless of the status of the particular user or of the
way in which the particular user actually uses, or expects or is
expected to use, the product. A product is a consumer product
regardless of whether the product has substantial commercial,
industrial or non-consumer uses, unless such uses represent the
only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to
install and execute modified versions of a covered work in that
User Product from a modified version of its Corresponding Source.
The information must suffice to ensure that the continued
functioning of the modified object code is in no case prevented or
interfered with solely because modification has been made.
If you convey an object code work under this section in, or with,
or specifically for use in, a User Product, and the conveying
occurs as part of a transaction in which the right of possession
and use of the User Product is transferred to the recipient in
perpetuity or for a fixed term (regardless of how the transaction
is characterized), the Corresponding Source conveyed under this
section must be accompanied by the Installation Information. But
this requirement does not apply if neither you nor any third party
retains the ability to install modified object code on the User
Product (for example, the work has been installed in ROM).
The requirement to provide Installation Information does not
include a requirement to continue to provide support service,
warranty, or updates for a work that has been modified or
installed by the recipient, or for the User Product in which it
has been modified or installed. Access to a network may be denied
when the modification itself materially and adversely affects the
operation of the network or violates the rules and protocols for
communication across the network.
Corresponding Source conveyed, and Installation Information
provided, in accord with this section must be in a format that is
publicly documented (and with an implementation available to the
public in source code form), and must require no special password
or key for unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of
this License by making exceptions from one or more of its
conditions. Additional permissions that are applicable to the
entire Program shall be treated as though they were included in
this License, to the extent that they are valid under applicable
law. If additional permissions apply only to part of the Program,
that part may be used separately under those permissions, but the
entire Program remains governed by this License without regard to
the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part
of it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material
you add to a covered work, you may (if authorized by the copyright
holders of that material) supplement the terms of this License
with terms:
a. Disclaiming warranty or limiting liability differently from
the terms of sections 15 and 16 of this License; or
b. Requiring preservation of specified reasonable legal notices
or author attributions in that material or in the Appropriate
Legal Notices displayed by works containing it; or
c. Prohibiting misrepresentation of the origin of that material,
or requiring that modified versions of such material be
marked in reasonable ways as different from the original
version; or
d. Limiting the use for publicity purposes of names of licensors
or authors of the material; or
e. Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f. Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified
versions of it) with contractual assumptions of liability to
the recipient, for any liability that these contractual
assumptions directly impose on those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as
you received it, or any part of it, contains a notice stating that
it is governed by this License along with a term that is a further
restriction, you may remove that term. If a license document
contains a further restriction but permits relicensing or
conveying under this License, you may add to a covered work
material governed by the terms of that license document, provided
that the further restriction does not survive such relicensing or
conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in
the form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights
under this License (including any patent licenses granted under
the third paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly
and finally terminates your license, and (b) permanently, if the
copyright holder fails to notify you of the violation by some
reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from
that copyright holder, and you cure the violation prior to 30 days
after your receipt of the notice.
Termination of your rights under this section does not terminate
the licenses of parties who have received copies or rights from
you under this License. If your rights have been terminated and
not permanently reinstated, you do not qualify to receive new
licenses for the same material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer
transmission to receive a copy likewise does not require
acceptance. However, nothing other than this License grants you
permission to propagate or modify any covered work. These actions
infringe copyright if you do not accept this License. Therefore,
by modifying or propagating a covered work, you indicate your
acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not
responsible for enforcing compliance by third parties with this
License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a
covered work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or
could give under the previous paragraph, plus a right to
possession of the Corresponding Source of the work from the
predecessor in interest, if the predecessor has it or can get it
with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you
may not impose a license fee, royalty, or other charge for
exercise of rights granted under this License, and you may not
initiate litigation (including a cross-claim or counterclaim in a
lawsuit) alleging that any patent claim is infringed by making,
using, selling, offering for sale, or importing the Program or any
portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based.
The work thus licensed is called the contributor's "contributor
version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner,
permitted by this License, of making, using, or selling its
contributor version, but do not include claims that would be
infringed only as a consequence of further modification of the
contributor version. For purposes of this definition, "control"
includes the right to grant patent sublicenses in a manner
consistent with the requirements of this License.
Each contributor grants you a non-exclusive, worldwide,
royalty-free patent license under the contributor's essential
patent claims, to make, use, sell, offer for sale, import and
otherwise run, modify and propagate the contents of its
contributor version.
In the following three paragraphs, a "patent license" is any
express agreement or commitment, however denominated, not to
enforce a patent (such as an express permission to practice a
patent or covenant not to sue for patent infringement). To
"grant" such a patent license to a party means to make such an
agreement or commitment not to enforce a patent against the party.
If you convey a covered work, knowingly relying on a patent
license, and the Corresponding Source of the work is not available
for anyone to copy, free of charge and under the terms of this
License, through a publicly available network server or other
readily accessible means, then you must either (1) cause the
Corresponding Source to be so available, or (2) arrange to deprive
yourself of the benefit of the patent license for this particular
work, or (3) arrange, in a manner consistent with the requirements
of this License, to extend the patent license to downstream
recipients. "Knowingly relying" means you have actual knowledge
that, but for the patent license, your conveying the covered work
in a country, or your recipient's use of the covered work in a
country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate,
modify or convey a specific copy of the covered work, then the
patent license you grant is automatically extended to all
recipients of the covered work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that
are specifically granted under this License. You may not convey a
covered work if you are a party to an arrangement with a third
party that is in the business of distributing software, under
which you make payment to the third party based on the extent of
your activity of conveying the work, and under which the third
party grants, to any of the parties who would receive the covered
work from you, a discriminatory patent license (a) in connection
with copies of the covered work conveyed by you (or copies made
from those copies), or (b) primarily for and in connection with
specific products or compilations that contain the covered work,
unless you entered into that arrangement, or that patent license
was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order,
agreement or otherwise) that contradict the conditions of this
License, they do not excuse you from the conditions of this
License. If you cannot convey a covered work so as to satisfy
simultaneously your obligations under this License and any other
pertinent obligations, then as a consequence you may not convey it
at all. For example, if you agree to terms that obligate you to
collect a royalty for further conveying from those to whom you
convey the Program, the only way you could satisfy both those
terms and this License would be to refrain entirely from conveying
the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a
single combined work, and to convey the resulting work. The terms
of this License will continue to apply to the part which is the
covered work, but the special requirements of the GNU Affero
General Public License, section 13, concerning interaction through
a network will apply to the combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new
versions of the GNU General Public License from time to time.
Such new versions will be similar in spirit to the present
version, but may differ in detail to address new problems or
concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU
General Public License "or any later version" applies to it, you
have the option of following the terms and conditions either of
that numbered version or of any later version published by the
Free Software Foundation. If the Program does not specify a
version number of the GNU General Public License, you may choose
any version ever published by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that
proxy's public statement of acceptance of a version permanently
authorizes you to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE
COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS"
WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE
RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.
SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL
NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES
AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU
FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE
THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA
BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF
THE POSSIBILITY OF SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely
approximates an absolute waiver of all civil liability in
connection with the Program, unless a warranty or assumption of
liability accompanies a copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
===========================
How to Apply These Terms to Your New Programs
=============================================
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these
terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least the
"copyright" line and a pointer to where the full notice is found.
ONE LINE TO GIVE THE PROGRAM'S NAME AND A BRIEF IDEA OF WHAT IT DOES.
Copyright (C) YEAR NAME OF AUTHOR
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or (at
your option) any later version.
This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see `http://www.gnu.org/licenses/'.
Also add information on how to contact you by electronic and paper
mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
PROGRAM Copyright (C) YEAR NAME OF AUTHOR
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the
appropriate parts of the General Public License. Of course, your
program's commands might be different; for a GUI interface, you would
use an "about box".
You should also get your employer (if you work as a programmer) or
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. For more information on this, and how to apply and follow
the GNU GPL, see `http://www.gnu.org/licenses/'.
The GNU General Public License does not permit incorporating your
program into proprietary programs. If your program is a subroutine
library, you may consider it more useful to permit linking proprietary
applications with the library. If this is what you want to do, use the
GNU Lesser General Public License instead of this License. But first,
please read `http://www.gnu.org/philosophy/why-not-lgpl.html'.

File: bison.info, Node: Concepts, Next: Examples, Prev: Copying, Up: Top
1 The Concepts of Bison
***********************
This chapter introduces many of the basic concepts without which the
details of Bison will not make sense. If you do not already know how to
use Bison or Yacc, we suggest you start by reading this chapter
carefully.
* Menu:
* Language and Grammar:: Languages and context-free grammars,
as mathematical ideas.
* Grammar in Bison:: How we represent grammars for Bison's sake.
* Semantic Values:: Each token or syntactic grouping can have
a semantic value (the value of an integer,
the name of an identifier, etc.).
* Semantic Actions:: Each rule can have an action containing C code.
* GLR Parsers:: Writing parsers for general context-free languages.
* Locations:: Overview of location tracking.
* Bison Parser:: What are Bison's input and output,
how is the output used?
* Stages:: Stages in writing and running Bison grammars.
* Grammar Layout:: Overall structure of a Bison grammar file.

File: bison.info, Node: Language and Grammar, Next: Grammar in Bison, Up: Concepts
1.1 Languages and Context-Free Grammars
=======================================
In order for Bison to parse a language, it must be described by a
"context-free grammar". This means that you specify one or more
"syntactic groupings" and give rules for constructing them from their
parts. For example, in the C language, one kind of grouping is called
an `expression'. One rule for making an expression might be, "An
expression can be made of a minus sign and another expression".
Another would be, "An expression can be an integer". As you can see,
rules are often recursive, but there must be at least one rule which
leads out of the recursion.
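To make this concrete, here is a minimal sketch of those two informal
rules written in the Bison rule notation introduced later in this
chapter (the token name `NUM', standing for an integer constant, is
only an assumed placeholder, not a name taken from the manual's
examples):
     expr: '-' expr   /* "an expression can be made of a minus
                          sign and another expression" */
         | NUM        /* "an expression can be an integer" */
         ;
The first alternative refers to `expr' itself, while the second does
not; the second rule is what leads out of the recursion.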
The most common formal system for presenting such rules for humans
to read is "Backus-Naur Form" or "BNF", which was developed in order to
specify the language Algol 60. Any grammar expressed in BNF is a
context-free grammar. The input to Bison is essentially
machine-readable BNF.
There are various important subclasses of context-free grammars.
Although it can handle almost all context-free grammars, Bison is
optimized for what are called LR(1) grammars. In brief, in these
grammars, it must be possible to tell how to parse any portion of an
input string with just a single token of lookahead. For historical
reasons, Bison by default is limited by the additional restrictions of
LALR(1), which is hard to explain simply. *Note Mysterious
Conflicts::, for more information on this. As an experimental feature,
you can escape these additional restrictions by requesting IELR(1) or
canonical LR(1) parser tables. *Note LR Table Construction::, to learn
how.
Parsers for LR(1) grammars are "deterministic", meaning roughly that
the next grammar rule to apply at any point in the input is uniquely
determined by the preceding input and a fixed, finite portion (called a
"lookahead") of the remaining input. A context-free grammar can be
"ambiguous", meaning that there are multiple ways to apply the grammar
rules to get the same inputs. Even unambiguous grammars can be
"nondeterministic", meaning that no fixed lookahead always suffices to
determine the next grammar rule to apply. With the proper
declarations, Bison is also able to parse these more general
context-free grammars, using a technique known as GLR parsing (for
Generalized LR). Bison's GLR parsers are able to handle any
context-free grammar for which the number of possible parses of any
given string is finite.
In the formal grammatical rules for a language, each kind of
syntactic unit or grouping is named by a "symbol". Those which are
built by grouping smaller constructs according to grammatical rules are
called "nonterminal symbols"; those which can't be subdivided are called
"terminal symbols" or "token types". We call a piece of input
corresponding to a single terminal symbol a "token", and a piece
corresponding to a single nonterminal symbol a "grouping".
We can use the C language as an example of what symbols, terminal and
nonterminal, mean. The tokens of C are identifiers, constants (numeric
and string), and the various keywords, arithmetic operators and
punctuation marks. So the terminal symbols of a grammar for C include
`identifier', `number', `string', plus one symbol for each keyword,
operator or punctuation mark: `if', `return', `const', `static', `int',
`char', `plus-sign', `open-brace', `close-brace', `comma' and many more.
(These tokens can be subdivided into characters, but that is a matter of
lexicography, not grammar.)
Here is a simple C function subdivided into tokens:
     int            /* keyword `int' */
     square (int x) /* identifier, open-paren, keyword `int',
                       identifier, close-paren */
     {              /* open-brace */
       return x * x;   /* keyword `return', identifier, asterisk,
                          identifier, semicolon */
     }              /* close-brace */
The syntactic groupings of C include the expression, the statement,
the declaration, and the function definition. These are represented in
the grammar of C by nonterminal symbols `expression', `statement',
`declaration' and `function definition'. The full grammar uses dozens
of additional language constructs, each with its own nonterminal
symbol, in order to express the meanings of these four. The example
above is a function definition; it contains one declaration, and one
statement. In the statement, each `x' is an expression and so is `x *
x'.
Each nonterminal symbol must have grammatical rules showing how it
is made out of simpler constructs. For example, one kind of C
statement is the `return' statement; this would be described with a
grammar rule which reads informally as follows:
A `statement' can be made of a `return' keyword, an `expression'
and a `semicolon'.
There would be many other rules for `statement', one for each kind of
statement in C.
One nonterminal symbol must be distinguished as the special one which
defines a complete utterance in the language. It is called the "start
symbol". In a compiler, this means a complete input program. In the C
language, the nonterminal symbol `sequence of definitions and
declarations' plays this role.
For example, `1 + 2' is a valid C expression--a valid part of a C
program--but it is not valid as an _entire_ C program. In the
context-free grammar of C, this follows from the fact that `expression'
is not the start symbol.
The Bison parser reads a sequence of tokens as its input, and groups
the tokens using the grammar rules. If the input is valid, the end
result is that the entire token sequence reduces to a single grouping
whose symbol is the grammar's start symbol. If we use a grammar for C,
the entire input must be a `sequence of definitions and declarations'.
If not, the parser reports a syntax error.

File: bison.info, Node: Grammar in Bison, Next: Semantic Values, Prev: Language and Grammar, Up: Concepts
1.2 From Formal Rules to Bison Input
====================================
A formal grammar is a mathematical construct. To define the language
for Bison, you must write a file expressing the grammar in Bison syntax:
a "Bison grammar" file. *Note Bison Grammar Files: Grammar File.
A nonterminal symbol in the formal grammar is represented in Bison
input as an identifier, like an identifier in C. By convention, it
should be in lower case, such as `expr', `stmt' or `declaration'.
The Bison representation for a terminal symbol is also called a
"token type". Token types as well can be represented as C-like
identifiers. By convention, these identifiers should be upper case to
distinguish them from nonterminals: for example, `INTEGER',
`IDENTIFIER', `IF' or `RETURN'. A terminal symbol that stands for a
particular keyword in the language should be named after that keyword
converted to upper case. The terminal symbol `error' is reserved for
error recovery. *Note Symbols::.
A terminal symbol can also be represented as a character literal,
just like a C character constant. You should do this whenever a token
is just a single character (parenthesis, plus-sign, etc.): use that
same character in a literal as the terminal symbol for that token.
A third way to represent a terminal symbol is with a C string
constant containing several characters. *Note Symbols::, for more
information.
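For instance, a fragment of the declarations section might introduce
one terminal symbol of each named kind; the particular names used here
(`NUM' and `LE') are purely illustrative:

     %token NUM        /* terminal named by a C-like identifier */
     %token LE "<="    /* terminal with a multi-character string alias */

A single-character token such as the plus sign is simply written `'+''
wherever it is needed in the rules, with no declaration at all.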
The grammar rules also have an expression in Bison syntax. For
example, here is the Bison rule for a C `return' statement. The
semicolon in quotes is a literal character token, representing part of
the C syntax for the statement; the naked semicolon, and the colon, are
Bison punctuation used in every rule.
stmt: RETURN expr ';' ;
*Note Syntax of Grammar Rules: Rules.

File: bison.info, Node: Semantic Values, Next: Semantic Actions, Prev: Grammar in Bison, Up: Concepts
1.3 Semantic Values
===================
A formal grammar selects tokens only by their classifications: for
example, if a rule mentions the terminal symbol `integer constant', it
means that _any_ integer constant is grammatically valid in that
position. The precise value of the constant is irrelevant to how to
parse the input: if `x+4' is grammatical then `x+1' or `x+3989' is
equally grammatical.
But the precise value is very important for what the input means
once it is parsed. A compiler is useless if it fails to distinguish
between 4, 1 and 3989 as constants in the program! Therefore, each
token in a Bison grammar has both a token type and a "semantic value".
*Note Defining Language Semantics: Semantics, for details.
The token type is a terminal symbol defined in the grammar, such as
`INTEGER', `IDENTIFIER' or `',''. It tells everything you need to know
to decide where the token may validly appear and how to group it with
other tokens. The grammar rules know nothing about tokens except their
types.
The semantic value has all the rest of the information about the
meaning of the token, such as the value of an integer, or the name of an
identifier. (A token such as `','' which is just punctuation doesn't
need to have any semantic value.)
For example, an input token might be classified as token type
`INTEGER' and have the semantic value 4. Another input token might
have the same token type `INTEGER' but value 3989. When a grammar rule
says that `INTEGER' is allowed, either of these tokens is acceptable
because each is an `INTEGER'. When the parser accepts the token, it
keeps track of the token's semantic value.
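For instance (a minimal sketch, assuming a token type `INTEGER'
declared in the grammar and the default `int' type for semantic
values), the lexical analyzer would report the first of those tokens
like this:

     yylval = 4;         /* the token's semantic value */
     return INTEGER;     /* the token's type */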
In addition to its nonterminal symbol, each grouping can also have a
semantic value. For example, in a calculator, an expression
typically has a semantic value that is a number. In a compiler for a
programming language, an expression typically has a semantic value that
is a tree structure describing the meaning of the expression.

File: bison.info, Node: Semantic Actions, Next: GLR Parsers, Prev: Semantic Values, Up: Concepts
1.4 Semantic Actions
====================
In order to be useful, a program must do more than parse input; it must
also produce some output based on the input. In a Bison grammar, a
grammar rule can have an "action" made up of C statements. Each time
the parser recognizes a match for that rule, the action is executed.
*Note Actions::.
Most of the time, the purpose of an action is to compute the
semantic value of the whole construct from the semantic values of its
parts. For example, suppose we have a rule which says an expression
can be the sum of two expressions. When the parser recognizes such a
sum, each of the subexpressions has a semantic value which describes
how it was built up. The action for this rule should create a similar
sort of value for the newly recognized larger expression.
For example, here is a rule that says an expression can be the sum of
two subexpressions:
expr: expr '+' expr { $$ = $1 + $3; } ;
The action says how to produce the semantic value of the sum expression
from the values of the two subexpressions.

File: bison.info, Node: GLR Parsers, Next: Locations, Prev: Semantic Actions, Up: Concepts
1.5 Writing GLR Parsers
=======================
In some grammars, Bison's deterministic LR(1) parsing algorithm cannot
decide whether to apply a certain grammar rule at a given point. That
is, it may not be able to decide (on the basis of the input read so
far) which of two possible reductions (applications of a grammar rule)
applies, or whether to apply a reduction or read more of the input and
apply a reduction later in the input. These are known respectively as
"reduce/reduce" conflicts (*note Reduce/Reduce::), and "shift/reduce"
conflicts (*note Shift/Reduce::).
To use a grammar that is not easily modified to be LR(1), a more
general parsing algorithm is sometimes necessary. If you include
`%glr-parser' among the Bison declarations in your file (*note Grammar
Outline::), the result is a Generalized LR (GLR) parser. These parsers
handle Bison grammars that contain no unresolved conflicts (i.e., after
applying precedence declarations) identically to deterministic parsers.
However, when faced with unresolved shift/reduce and reduce/reduce
conflicts, GLR parsers use the simple expedient of doing both,
effectively cloning the parser to follow both possibilities. Each of
the resulting parsers can again split, so that at any given time, there
can be any number of possible parses being explored. The parsers
proceed in lockstep; that is, all of them consume (shift) a given input
symbol before any of them proceed to the next. Each of the cloned
parsers eventually meets one of two possible fates: either it runs into
a parsing error, in which case it simply vanishes, or it merges with
another parser, because the two of them have reduced the input to an
identical set of symbols.
During the time that there are multiple parsers, semantic actions are
recorded, but not performed. When a parser disappears, its recorded
semantic actions disappear as well, and are never performed. When a
reduction makes two parsers identical, causing them to merge, Bison
records both sets of semantic actions. Whenever the last two parsers
merge, reverting to the single-parser case, Bison resolves all the
outstanding actions either by precedences given to the grammar rules
involved, or by performing both actions, and then calling a designated
user-defined function on the resulting values to produce an arbitrary
merged result.
* Menu:
* Simple GLR Parsers:: Using GLR parsers on unambiguous grammars.
* Merging GLR Parses:: Using GLR parsers to resolve ambiguities.
* GLR Semantic Actions:: Deferred semantic actions have special concerns.
* Compiler Requirements:: GLR parsers require a modern C compiler.

File: bison.info, Node: Simple GLR Parsers, Next: Merging GLR Parses, Up: GLR Parsers
1.5.1 Using GLR on Unambiguous Grammars
---------------------------------------
In the simplest cases, you can use the GLR algorithm to parse grammars
that are unambiguous but fail to be LR(1). Such grammars typically
require more than one symbol of lookahead.
Consider a problem that arises in the declaration of enumerated and
subrange types in the programming language Pascal. Here are some
examples:
type subrange = lo .. hi;
type enum = (a, b, c);
The original language standard allows only numeric literals and
constant identifiers for the subrange bounds (`lo' and `hi'), but
Extended Pascal (ISO/IEC 10206) and many other Pascal implementations
allow arbitrary expressions there. This gives rise to the following
situation, containing a superfluous pair of parentheses:
type subrange = (a) .. b;
Compare this to the following declaration of an enumerated type with
only one value:
type enum = (a);
(These declarations are contrived, but they are syntactically valid,
and more-complicated cases can come up in practical programs.)
These two declarations look identical until the `..' token. With
normal LR(1) one-token lookahead it is not possible to decide between
the two forms when the identifier `a' is parsed. It is, however,
desirable for a parser to decide this, since in the latter case `a'
must become a new identifier to represent the enumeration value, while
in the former case `a' must be evaluated with its current meaning,
which may be a constant or even a function call.
You could parse `(a)' as an "unspecified identifier in parentheses",
to be resolved later, but this typically requires substantial
contortions in both semantic actions and large parts of the grammar,
where the parentheses are nested in the recursive rules for expressions.
You might think of using the lexer to distinguish between the two
forms by returning different tokens for currently defined and undefined
identifiers. But if these declarations occur in a local scope, and `a'
is defined in an outer scope, then both forms are possible--either
locally redefining `a', or using the value of `a' from the outer scope.
So this approach cannot work.
A simple solution to this problem is to declare the parser to use
the GLR algorithm. When the GLR parser reaches the critical state, it
merely splits into two branches and pursues both syntax rules
simultaneously. Sooner or later, one of them runs into a parsing
error. If there is a `..' token before the next `;', the rule for
enumerated types fails since it cannot accept `..' anywhere; otherwise,
the subrange type rule fails since it requires a `..' token. So one of
the branches fails silently, and the other one continues normally,
performing all the intermediate actions that were postponed during the
split.
If the input is syntactically incorrect, both branches fail and the
parser reports a syntax error as usual.
The effect of all this is that the parser seems to "guess" the
correct branch to take, or in other words, it seems to use more
lookahead than the underlying LR(1) algorithm actually allows for. In
this example, LR(2) would suffice, but some grammars that are not
LR(k) for any k can also be handled this way.
In general, a GLR parser can take quadratic or cubic worst-case time,
and the current Bison parser even takes exponential time and space for
some grammars. In practice, this rarely happens, and for many grammars
it is possible to prove that it cannot happen. The present example
contains only one conflict between two rules, and the type-declaration
context containing the conflict cannot be nested. So the number of
branches that can exist at any time is limited by the constant 2, and
the parsing time is still linear.
Here is a Bison grammar corresponding to the example above. It
parses a vastly simplified form of Pascal type declarations.
%token TYPE DOTDOT ID
%left '+' '-'
%left '*' '/'
%%
type_decl: TYPE ID '=' type ';' ;
type:
'(' id_list ')'
| expr DOTDOT expr
;
id_list:
ID
| id_list ',' ID
;
expr:
'(' expr ')'
| expr '+' expr
| expr '-' expr
| expr '*' expr
| expr '/' expr
| ID
;
When used as a normal LR(1) grammar, Bison correctly complains about
one reduce/reduce conflict. In the conflicting situation the parser
chooses one of the alternatives, arbitrarily the one declared first.
Therefore the following correct input is not recognized:
type t = (a) .. b;
The parser can be turned into a GLR parser, while also telling Bison
to be silent about the one known reduce/reduce conflict, by adding
these two declarations to the Bison grammar file (before the first
`%%'):
%glr-parser
%expect-rr 1
No change in the grammar itself is required. Now the parser recognizes
all valid declarations, according to the limited syntax above,
transparently. In fact, the user does not even notice when the parser
splits.
So here we have a case where we can use the benefits of GLR, almost
without disadvantages. Even in simple cases like this, however, there
are at least two potential problems to beware of. First, always analyze
the conflicts reported by Bison to make sure that GLR splitting is only
done where it is intended. A GLR parser splitting inadvertently may
cause problems less obvious than an LR parser statically choosing the
wrong alternative in a conflict. Second, consider interactions with
the lexer (*note Semantic Tokens::) with great care. Since a split
parser consumes tokens without performing any actions during the split,
the lexer cannot obtain information via parser actions. Some cases of
lexer interactions can be eliminated by using GLR to shift the
complications from the lexer to the parser. You must check the
remaining cases for correctness.
In our example, it would be safe for the lexer to return tokens
based on their current meanings in some symbol table, because no new
symbols are defined in the middle of a type declaration. Though it is
possible for a parser to define the enumeration constants as they are
parsed, before the type declaration is completed, it actually makes no
difference since they cannot be used within the same enumerated type
declaration.

File: bison.info, Node: Merging GLR Parses, Next: GLR Semantic Actions, Prev: Simple GLR Parsers, Up: GLR Parsers
1.5.2 Using GLR to Resolve Ambiguities
--------------------------------------
Let's consider an example, vastly simplified from a C++ grammar.
%{
#include <stdio.h>
#define YYSTYPE char const *
int yylex (void);
void yyerror (char const *);
%}
%token TYPENAME ID
%right '='
%left '+'
%glr-parser
%%
prog:
/* Nothing. */
| prog stmt { printf ("\n"); }
;
stmt:
expr ';' %dprec 1
| decl %dprec 2
;
expr:
ID { printf ("%s ", $$); }
| TYPENAME '(' expr ')'
{ printf ("%s <cast> ", $1); }
| expr '+' expr { printf ("+ "); }
| expr '=' expr { printf ("= "); }
;
decl:
TYPENAME declarator ';'
{ printf ("%s <declare> ", $1); }
| TYPENAME declarator '=' expr ';'
{ printf ("%s <init-declare> ", $1); }
;
declarator:
ID { printf ("\"%s\" ", $1); }
| '(' declarator ')'
;
This models a problematic part of the C++ grammar--the ambiguity between
certain declarations and statements. For example,
T (x) = y+z;
parses as either an `expr' or a `stmt' (assuming that `T' is recognized
as a `TYPENAME' and `x' as an `ID'). Bison detects this as a
reduce/reduce conflict between the rules `expr : ID' and `declarator :
ID', which it cannot resolve at the time it encounters `x' in the
example above. Since this is a GLR parser, it therefore splits the
problem into two parses, one for each choice of resolving the
reduce/reduce conflict. Unlike the example from the previous section
(*note Simple GLR Parsers::), however, neither of these parses "dies,"
because the grammar as it stands is ambiguous. One of the parsers
eventually reduces `stmt : expr ';'' and the other reduces `stmt :
decl', after which both parsers are in an identical state: they've seen
`prog stmt' and have the same unprocessed input remaining. We say that
these parses have "merged."
At this point, the GLR parser requires a specification in the
grammar of how to choose between the competing parses. In the example
above, the two `%dprec' declarations specify that Bison is to give
precedence to the parse that interprets the example as a `decl', which
implies that `x' is a declarator. The parser therefore prints
"x" y z + T <init-declare>
The `%dprec' declarations only come into play when more than one
parse survives. Consider a different input string for this parser:
T (x) + y;
This is another example of using GLR to parse an unambiguous construct,
as shown in the previous section (*note Simple GLR Parsers::). Here,
there is no ambiguity (this cannot be parsed as a declaration).
However, at the time the Bison parser encounters `x', it does not have
enough information to resolve the reduce/reduce conflict (again,
between `x' as an `expr' or a `declarator'). In this case, no
precedence declaration is used. Again, the parser splits into two, one
assuming that `x' is an `expr', and the other assuming `x' is a
`declarator'. The second of these parsers then vanishes when it sees
`+', and the parser prints
x T <cast> y +
Suppose that instead of resolving the ambiguity, you wanted to see
all the possibilities. For this purpose, you must merge the semantic
actions of the two possible parsers, rather than choosing one over the
other. To do so, you could change the declaration of `stmt' as follows:
stmt:
expr ';' %merge <stmtMerge>
| decl %merge <stmtMerge>
;
and define the `stmtMerge' function as:
static YYSTYPE
stmtMerge (YYSTYPE x0, YYSTYPE x1)
{
printf ("<OR> ");
return "";
}
with an accompanying forward declaration in the C declarations at the
beginning of the file:
%{
#define YYSTYPE char const *
static YYSTYPE stmtMerge (YYSTYPE x0, YYSTYPE x1);
%}
With these declarations, the resulting parser parses the first example
as both an `expr' and a `decl', and prints
"x" y z + T <init-declare> x T <cast> y z + = <OR>
Bison requires that all of the productions that participate in any
particular merge have identical `%merge' clauses. Otherwise, the
ambiguity would be unresolvable, and the parser will report an error
during any parse that results in the offending merge.

File: bison.info, Node: GLR Semantic Actions, Next: Compiler Requirements, Prev: Merging GLR Parses, Up: GLR Parsers
1.5.3 GLR Semantic Actions
--------------------------
By definition, a deferred semantic action is not performed at the same
time as the associated reduction. This raises caveats for several
Bison features you might use in a semantic action in a GLR parser.
In any semantic action, you can examine `yychar' to determine the
type of the lookahead token present at the time of the associated
reduction. After checking that `yychar' is not set to `YYEMPTY' or
`YYEOF', you can then examine `yylval' and `yylloc' to determine the
lookahead token's semantic value and location, if any. In a
nondeferred semantic action, you can also modify any of these variables
to influence syntax analysis. *Note Lookahead Tokens: Lookahead.
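For instance, a nondeferred action might peek at the lookahead along
these lines; the rule is purely illustrative and assumes `<stdio.h>'
is included in the prologue:

     exp: exp '+' exp
            {
              $$ = $1 + $3;
              if (yychar != YYEMPTY && yychar != YYEOF)
                fprintf (stderr, "lookahead token type: %d\n", yychar);
            }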
In a deferred semantic action, it's too late to influence syntax
analysis. In this case, `yychar', `yylval', and `yylloc' are set to
shallow copies of the values they had at the time of the associated
reduction. For this reason alone, modifying them is dangerous.
Moreover, the result of modifying them is undefined and subject to
change with future versions of Bison. For example, if a semantic
action might be deferred, you should never write it to invoke
`yyclearin' (*note Action Features::) or to attempt to free memory
referenced by `yylval'.
Another Bison feature requiring special consideration is `YYERROR'
(*note Action Features::), which you can invoke in a semantic action to
initiate error recovery. During deterministic GLR operation, the
effect of `YYERROR' is the same as its effect in a deterministic parser.
In a deferred semantic action, its effect is undefined.
Also, see *note Default Action for Locations: Location Default
Action, which describes a special usage of `YYLLOC_DEFAULT' in GLR
parsers.

File: bison.info, Node: Compiler Requirements, Prev: GLR Semantic Actions, Up: GLR Parsers
1.5.4 Considerations when Compiling GLR Parsers
-----------------------------------------------
The GLR parsers require a compiler for ISO C89 or later. In addition,
they use the `inline' keyword, which is not C89, but is C99 and is a
common extension in pre-C99 compilers. It is up to the user of these
parsers to handle portability issues. For instance, if using Autoconf
and the Autoconf macro `AC_C_INLINE', a mere
%{
#include <config.h>
%}
will suffice. Otherwise, we suggest
%{
#if (__STDC_VERSION__ < 199901 && ! defined __GNUC__ \
&& ! defined inline)
# define inline
#endif
%}

File: bison.info, Node: Locations, Next: Bison Parser, Prev: GLR Parsers, Up: Concepts
1.6 Locations
=============
Many applications, like interpreters or compilers, have to produce
verbose and useful error messages. To achieve this, one must be able
to keep track of the "textual location", or "location", of each
syntactic construct. Bison provides a mechanism for handling these
locations.
Each token has a semantic value. In a similar fashion, each token
has an associated location, but the type of locations is the same for
all tokens and groupings. Moreover, the output parser is equipped with
a default data structure for storing locations (*note Tracking
Locations::, for more details).
Like semantic values, locations can be reached in actions using a
dedicated set of constructs. In the example above, the location of the
whole grouping is `@$', while the locations of the subexpressions are
`@1' and `@3'.
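To make this concrete, here is the addition rule again with those
constructs used in its action; the assignments merely spell out what
the default location action, described next, computes anyway:

     expr: expr '+' expr
             {
               $$ = $1 + $3;
               @$.first_line   = @1.first_line;
               @$.first_column = @1.first_column;
               @$.last_line    = @3.last_line;
               @$.last_column  = @3.last_column;
             }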
When a rule is matched, a default action is used to compute the
semantic value of its left hand side (*note Actions::). In the same
way, another default action is used for locations. However, the action
for locations is general enough for most cases, meaning there is
usually no need to describe for each rule how `@$' should be formed.
When building a new location for a given grouping, the default behavior
of the output parser is to take the beginning of the first symbol, and
the end of the last symbol.

File: bison.info, Node: Bison Parser, Next: Stages, Prev: Locations, Up: Concepts
1.7 Bison Output: the Parser Implementation File
================================================
When you run Bison, you give it a Bison grammar file as input. The
most important output is a C source file that implements a parser for
the language described by the grammar. This parser is called a "Bison
parser", and this file is called a "Bison parser implementation file".
Keep in mind that the Bison utility and the Bison parser are two
distinct programs: the Bison utility is a program whose output is the
Bison parser implementation file that becomes part of your program.
The job of the Bison parser is to group tokens into groupings
according to the grammar rules--for example, to build identifiers and
operators into expressions. As it does this, it runs the actions for
the grammar rules it uses.
The tokens come from a function called the "lexical analyzer" that
you must supply in some fashion (such as by writing it in C). The Bison
parser calls the lexical analyzer each time it wants a new token. It
doesn't know what is "inside" the tokens (though their semantic values
may reflect this). Typically the lexical analyzer makes the tokens by
parsing characters of text, but Bison does not depend on this. *Note
The Lexical Analyzer Function `yylex': Lexical.
The Bison parser implementation file is C code which defines a
function named `yyparse' which implements that grammar. This function
does not make a complete C program: you must supply some additional
functions. One is the lexical analyzer. Another is an error-reporting
function which the parser calls to report an error. In addition, a
complete C program must start with a function called `main'; you have
to provide this, and arrange for it to call `yyparse' or the parser
will never run. *Note Parser C-Language Interface: Interface.
Aside from the token type names and the symbols in the actions you
write, all symbols defined in the Bison parser implementation file
itself begin with `yy' or `YY'. This includes interface functions such
as the lexical analyzer function `yylex', the error reporting function
`yyerror' and the parser function `yyparse' itself. This also includes
numerous identifiers used for internal purposes. Therefore, you should
avoid using C identifiers starting with `yy' or `YY' in the Bison
grammar file except for the ones defined in this manual. Also, you
should avoid using the C identifiers `malloc' and `free' for anything
other than their usual meanings.
In some cases the Bison parser implementation file includes system
headers, and in those cases your code should respect the identifiers
reserved by those headers. On some non-GNU hosts, `<alloca.h>',
`<malloc.h>', `<stddef.h>', and `<stdlib.h>' are included as needed to
declare memory allocators and related types. `<libintl.h>' is included
if message translation is in use (*note Internationalization::). Other
system headers may be included if you define `YYDEBUG' to a nonzero
value (*note Tracing Your Parser: Tracing.).

File: bison.info, Node: Stages, Next: Grammar Layout, Prev: Bison Parser, Up: Concepts
1.8 Stages in Using Bison
=========================
The actual language-design process using Bison, from grammar
specification to a working compiler or interpreter, has these parts:
1. Formally specify the grammar in a form recognized by Bison (*note
Bison Grammar Files: Grammar File.). For each grammatical rule in
the language, describe the action that is to be taken when an
instance of that rule is recognized. The action is described by a
sequence of C statements.
2. Write a lexical analyzer to process input and pass tokens to the
parser. The lexical analyzer may be written by hand in C (*note
The Lexical Analyzer Function `yylex': Lexical.). It could also
be produced using Lex, but the use of Lex is not discussed in this
manual.
3. Write a controlling function that calls the Bison-produced parser.
4. Write error-reporting routines.
To turn this source code as written into a runnable program, you
must follow these steps:
1. Run Bison on the grammar to produce the parser.
2. Compile the code output by Bison, as well as any other source
files.
3. Link the object files to produce the finished product.
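On a typical Unix-like system these three steps might look as follows.
The file names (`foo.y', `lexer.c', `main.c') are placeholders only;
the single-file examples in the next chapter combine the compile and
link steps into one `cc' command.

     bison foo.y                           # produces foo.tab.c
     cc -c foo.tab.c                       # compile the parser
     cc -c lexer.c main.c                  # compile the other sources
     cc -o foo foo.tab.o lexer.o main.o    # link the finished product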

File: bison.info, Node: Grammar Layout, Prev: Stages, Up: Concepts
1.9 The Overall Layout of a Bison Grammar
=========================================
The input file for the Bison utility is a "Bison grammar file". The
general form of a Bison grammar file is as follows:
%{
PROLOGUE
%}
BISON DECLARATIONS
%%
GRAMMAR RULES
%%
EPILOGUE
The `%%', `%{' and `%}' are punctuation that appears in every Bison
grammar file to separate the sections.
The prologue may define types and variables used in the actions.
You can also use preprocessor commands to define macros used there, and
use `#include' to include header files that do any of these things.
You need to declare the lexical analyzer `yylex' and the error printer
`yyerror' here, along with any other global identifiers used by the
actions in the grammar rules.
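A minimal prologue along these lines is typical; the exact contents
depend on what your actions use, and this sketch is not tied to any
particular example:

     %{
       #include <stdio.h>
       int yylex (void);
       void yyerror (char const *);
     %}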
The Bison declarations declare the names of the terminal and
nonterminal symbols, and may also describe operator precedence and the
data types of semantic values of various symbols.
The grammar rules define how to construct each nonterminal symbol
from its parts.
The epilogue can contain any code you want to use. Often the
definitions of functions declared in the prologue go here. In a simple
program, all the rest of the program can go here.

File: bison.info, Node: Examples, Next: Grammar File, Prev: Concepts, Up: Top
2 Examples
**********
Now we show and explain several sample programs written using Bison: a
reverse polish notation calculator, an algebraic (infix) notation
calculator -- later extended to track "locations" -- and a
multi-function calculator. All produce usable, though limited,
interactive desk-top calculators.
These examples are simple, but Bison grammars for real programming
languages are written the same way. You can copy these examples into a
source file to try them.
* Menu:
* RPN Calc:: Reverse polish notation calculator;
a first example with no operator precedence.
* Infix Calc:: Infix (algebraic) notation calculator.
Operator precedence is introduced.
* Simple Error Recovery:: Continuing after syntax errors.
* Location Tracking Calc:: Demonstrating the use of @N and @$.
* Multi-function Calc:: Calculator with memory and trig functions.
It uses multiple data-types for semantic values.
* Exercises:: Ideas for improving the multi-function calculator.

File: bison.info, Node: RPN Calc, Next: Infix Calc, Up: Examples
2.1 Reverse Polish Notation Calculator
======================================
The first example is that of a simple double-precision "reverse polish
notation" calculator (a calculator using postfix operators). This
example provides a good starting point, since operator precedence is
not an issue. The second example will illustrate how operator
precedence is handled.
The source code for this calculator is named `rpcalc.y'. The `.y'
extension is a convention used for Bison grammar files.
* Menu:
* Rpcalc Declarations:: Prologue (declarations) for rpcalc.
* Rpcalc Rules:: Grammar Rules for rpcalc, with explanation.
* Rpcalc Lexer:: The lexical analyzer.
* Rpcalc Main:: The controlling function.
* Rpcalc Error:: The error reporting function.
* Rpcalc Generate:: Running Bison on the grammar file.
* Rpcalc Compile:: Run the C compiler on the output code.

File: bison.info, Node: Rpcalc Declarations, Next: Rpcalc Rules, Up: RPN Calc
2.1.1 Declarations for `rpcalc'
-------------------------------
Here are the C and Bison declarations for the reverse polish notation
calculator. As in C, comments are placed between `/*...*/'.
/* Reverse polish notation calculator. */
%{
#define YYSTYPE double
#include <math.h>
int yylex (void);
void yyerror (char const *);
%}
%token NUM
%% /* Grammar rules and actions follow. */
The declarations section (*note The prologue: Prologue.) contains two
preprocessor directives and two forward declarations.
The `#define' directive defines the macro `YYSTYPE', thus specifying
the C data type for semantic values of both tokens and groupings (*note
Data Types of Semantic Values: Value Type.). The Bison parser will use
whatever type `YYSTYPE' is defined as; if you don't define it, `int' is
the default. Because we specify `double', each token and each
expression has an associated value, which is a floating point number.
The `#include' directive is used to declare the exponentiation
function `pow'.
The forward declarations for `yylex' and `yyerror' are needed
because the C language requires that functions be declared before they
are used. These functions will be defined in the epilogue, but the
parser calls them so they must be declared in the prologue.
The second section, Bison declarations, provides information to Bison
about the token types (*note The Bison Declarations Section: Bison
Declarations.). Each terminal symbol that is not a single-character
literal must be declared here. (Single-character literals normally
don't need to be declared.) In this example, all the arithmetic
operators are designated by single-character literals, so the only
terminal symbol that needs to be declared is `NUM', the token type for
numeric constants.

File: bison.info, Node: Rpcalc Rules, Next: Rpcalc Lexer, Prev: Rpcalc Declarations, Up: RPN Calc
2.1.2 Grammar Rules for `rpcalc'
--------------------------------
Here are the grammar rules for the reverse polish notation calculator.
input:
/* empty */
| input line
;
line:
'\n'
| exp '\n' { printf ("%.10g\n", $1); }
;
exp:
NUM { $$ = $1; }
| exp exp '+' { $$ = $1 + $2; }
| exp exp '-' { $$ = $1 - $2; }
| exp exp '*' { $$ = $1 * $2; }
| exp exp '/' { $$ = $1 / $2; }
| exp exp '^' { $$ = pow ($1, $2); } /* Exponentiation */
| exp 'n' { $$ = -$1; } /* Unary minus */
;
%%
The groupings of the rpcalc "language" defined here are the
expression (given the name `exp'), the line of input (`line'), and the
complete input transcript (`input'). Each of these nonterminal symbols
has several alternate rules, joined by the vertical bar `|' which is
read as "or". The following sections explain what these rules mean.
The semantics of the language is determined by the actions taken
when a grouping is recognized. The actions are the C code that appears
inside braces. *Note Actions::.
You must specify these actions in C, but Bison provides the means for
passing semantic values between the rules. In each action, the
pseudo-variable `$$' stands for the semantic value for the grouping
that the rule is going to construct. Assigning a value to `$$' is the
main job of most actions. The semantic values of the components of the
rule are referred to as `$1', `$2', and so on.
* Menu:
* Rpcalc Input::
* Rpcalc Line::
* Rpcalc Expr::

File: bison.info, Node: Rpcalc Input, Next: Rpcalc Line, Up: Rpcalc Rules
2.1.2.1 Explanation of `input'
..............................
Consider the definition of `input':
input:
/* empty */
| input line
;
This definition reads as follows: "A complete input is either an
empty string, or a complete input followed by an input line". Notice
that "complete input" is defined in terms of itself. This definition
is said to be "left recursive" since `input' always appears as the
leftmost symbol in the sequence. *Note Recursive Rules: Recursion.
The first alternative is empty because there are no symbols between
the colon and the first `|'; this means that `input' can match an empty
string of input (no tokens). We write the rules this way because it is
legitimate to type `Ctrl-d' right after you start the calculator. It's
conventional to put an empty alternative first and write the comment
`/* empty */' in it.
The second alternate rule (`input line') handles all nontrivial
input. It means, "After reading any number of lines, read one more
line if possible." The left recursion makes this rule into a loop.
Since the first alternative matches empty input, the loop can be
executed zero or more times.
The parser function `yyparse' continues to process input until a
grammatical error is seen or the lexical analyzer says there are no more
input tokens; we will arrange for the latter to happen at end-of-input.

File: bison.info, Node: Rpcalc Line, Next: Rpcalc Expr, Prev: Rpcalc Input, Up: Rpcalc Rules
2.1.2.2 Explanation of `line'
.............................
Now consider the definition of `line':
line:
'\n'
| exp '\n' { printf ("%.10g\n", $1); }
;
The first alternative is a token which is a newline character; this
means that rpcalc accepts a blank line (and ignores it, since there is
no action). The second alternative is an expression followed by a
newline. This is the alternative that makes rpcalc useful. The
semantic value of the `exp' grouping is the value of `$1' because the
`exp' in question is the first symbol in the alternative. The action
prints this value, which is the result of the computation the user
asked for.
This action is unusual because it does not assign a value to `$$'.
As a consequence, the semantic value associated with the `line' is
uninitialized (its value will be unpredictable). This would be a bug if
that value were ever used, but we don't use it: once rpcalc has printed
the value of the user's input line, that value is no longer needed.

File: bison.info, Node: Rpcalc Expr, Prev: Rpcalc Line, Up: Rpcalc Rules
2.1.2.3 Explanation of `exp'
............................
The `exp' grouping has several rules, one for each kind of expression.
The first rule handles the simplest expressions: those that are just
numbers. The second handles an addition-expression, which looks like
two expressions followed by a plus-sign. The third handles
subtraction, and so on.
exp:
NUM
| exp exp '+' { $$ = $1 + $2; }
| exp exp '-' { $$ = $1 - $2; }
...
;
We have used `|' to join all the rules for `exp', but we could
equally well have written them separately:
exp: NUM ;
exp: exp exp '+' { $$ = $1 + $2; };
exp: exp exp '-' { $$ = $1 - $2; };
...
Most of the rules have actions that compute the value of the
expression in terms of the value of its parts. For example, in the
rule for addition, `$1' refers to the first component `exp' and `$2'
refers to the second one. The third component, `'+'', has no meaningful
associated semantic value, but if it had one you could refer to it as
`$3'. When `yyparse' recognizes a sum expression using this rule, the
sum of the two subexpressions' values is produced as the value of the
entire expression. *Note Actions::.
You don't have to give an action for every rule. When a rule has no
action, Bison by default copies the value of `$1' into `$$'. This is
what happens in the first rule (the one that uses `NUM').
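In other words, writing the first rule as

     exp: NUM ;

has the same effect as writing it with an explicit action:

     exp: NUM { $$ = $1; } ;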
The formatting shown here is the recommended convention, but Bison
does not require it. You can add or change white space as much as you
wish. For example, this:
exp: NUM | exp exp '+' {$$ = $1 + $2; } | ... ;
means the same thing as this:
exp:
NUM
| exp exp '+' { $$ = $1 + $2; }
| ...
;
The latter, however, is much more readable.

File: bison.info, Node: Rpcalc Lexer, Next: Rpcalc Main, Prev: Rpcalc Rules, Up: RPN Calc
2.1.3 The `rpcalc' Lexical Analyzer
-----------------------------------
The lexical analyzer's job is low-level parsing: converting characters
or sequences of characters into tokens. The Bison parser gets its
tokens by calling the lexical analyzer. *Note The Lexical Analyzer
Function `yylex': Lexical.
Only a simple lexical analyzer is needed for the RPN calculator.
This lexical analyzer skips blanks and tabs, then reads in numbers as
`double' and returns them as `NUM' tokens. Any other character that
isn't part of a number is a separate token. Note that the token-code
for such a single-character token is the character itself.
The return value of the lexical analyzer function is a numeric code
which represents a token type. The same text used in Bison rules to
stand for this token type is also a C expression for the numeric code
for the type. This works in two ways. If the token type is a
character literal, then its numeric code is that of the character; you
can use the same character literal in the lexical analyzer to express
the number. If the token type is an identifier, that identifier is
defined by Bison as a C macro whose definition is the appropriate
number. In this example, therefore, `NUM' becomes a macro for `yylex'
to use.
The semantic value of the token (if it has one) is stored into the
global variable `yylval', which is where the Bison parser will look for
it. (The C data type of `yylval' is `YYSTYPE', which was defined at
the beginning of the grammar; *note Declarations for `rpcalc': Rpcalc
Declarations.)
A token type code of zero is returned if the end-of-input is
encountered. (Bison recognizes any nonpositive value as indicating
end-of-input.)
Here is the code for the lexical analyzer:
/* The lexical analyzer returns a double floating point
number on the stack and the token NUM, or the numeric code
of the character read if not a number. It skips all blanks
and tabs, and returns 0 for end-of-input. */
#include <ctype.h>
int
yylex (void)
{
int c;
/* Skip white space. */
while ((c = getchar ()) == ' ' || c == '\t')
continue;
/* Process numbers. */
if (c == '.' || isdigit (c))
{
ungetc (c, stdin);
scanf ("%lf", &yylval);
return NUM;
}
/* Return end-of-input. */
if (c == EOF)
return 0;
/* Return a single char. */
return c;
}

File: bison.info, Node: Rpcalc Main, Next: Rpcalc Error, Prev: Rpcalc Lexer, Up: RPN Calc
2.1.4 The Controlling Function
------------------------------
In keeping with the spirit of this example, the controlling function is
kept to the bare minimum. The only requirement is that it call
`yyparse' to start the process of parsing.
int
main (void)
{
return yyparse ();
}

File: bison.info, Node: Rpcalc Error, Next: Rpcalc Generate, Prev: Rpcalc Main, Up: RPN Calc
2.1.5 The Error Reporting Routine
---------------------------------
When `yyparse' detects a syntax error, it calls the error reporting
function `yyerror' to print an error message (usually but not always
`"syntax error"'). It is up to the programmer to supply `yyerror'
(*note Parser C-Language Interface: Interface.), so here is the
definition we will use:
#include <stdio.h>
/* Called by yyparse on error. */
void
yyerror (char const *s)
{
fprintf (stderr, "%s\n", s);
}
After `yyerror' returns, the Bison parser may recover from the error
and continue parsing if the grammar contains a suitable error rule
(*note Error Recovery::). Otherwise, `yyparse' returns nonzero. We
have not written any error rules in this example, so any invalid input
will cause the calculator program to exit. This is not clean behavior
for a real calculator, but it is adequate for the first example.

File: bison.info, Node: Rpcalc Generate, Next: Rpcalc Compile, Prev: Rpcalc Error, Up: RPN Calc
2.1.6 Running Bison to Make the Parser
--------------------------------------
Before running Bison to produce a parser, we need to decide how to
arrange all the source code in one or more source files. For such a
simple example, the easiest thing is to put everything in one file, the
grammar file. The definitions of `yylex', `yyerror' and `main' go at
the end, in the epilogue of the grammar file (*note The Overall Layout
of a Bison Grammar: Grammar Layout.).
For a large project, you would probably have several source files,
and use `make' to arrange to recompile them.
With all the source in the grammar file, you use the following
command to convert it into a parser implementation file:
bison FILE.y
In this example, the grammar file is called `rpcalc.y' (for "Reverse
Polish CALCulator"). Bison produces a parser implementation file named
`FILE.tab.c', removing the `.y' from the grammar file name. The parser
implementation file contains the source code for `yyparse'. The
additional functions in the grammar file (`yylex', `yyerror' and
`main') are copied verbatim to the parser implementation file.

File: bison.info, Node: Rpcalc Compile, Prev: Rpcalc Generate, Up: RPN Calc
2.1.7 Compiling the Parser Implementation File
----------------------------------------------
Here is how to compile and run the parser implementation file:
# List files in current directory.
$ ls
rpcalc.tab.c rpcalc.y
# Compile the Bison parser.
# `-lm' tells compiler to search math library for `pow'.
$ cc -o rpcalc rpcalc.tab.c -lm
# List files again.
$ ls
rpcalc rpcalc.tab.c rpcalc.y
The file `rpcalc' now contains the executable code. Here is an
example session using `rpcalc'.
$ rpcalc
4 9 +
13
3 7 + 3 4 5 *+-
-13
3 7 + 3 4 5 * + - n Note the unary minus, `n'
13
5 6 / 4 n +
-3.166666667
3 4 ^ Exponentiation
81
^D End-of-file indicator
$

File: bison.info, Node: Infix Calc, Next: Simple Error Recovery, Prev: RPN Calc, Up: Examples
2.2 Infix Notation Calculator: `calc'
=====================================
We now modify rpcalc to handle infix operators instead of postfix.
Infix notation involves the concept of operator precedence and the need
for parentheses nested to arbitrary depth. Here is the Bison code for
`calc.y', an infix desk-top calculator.
/* Infix notation calculator. */
%{
#define YYSTYPE double
#include <math.h>
#include <stdio.h>
int yylex (void);
void yyerror (char const *);
%}
/* Bison declarations. */
%token NUM
%left '-' '+'
%left '*' '/'
%left NEG /* negation--unary minus */
%right '^' /* exponentiation */
%% /* The grammar follows. */
input:
/* empty */
| input line
;
line:
'\n'
| exp '\n' { printf ("\t%.10g\n", $1); }
;
exp:
NUM { $$ = $1; }
| exp '+' exp { $$ = $1 + $3; }
| exp '-' exp { $$ = $1 - $3; }
| exp '*' exp { $$ = $1 * $3; }
| exp '/' exp { $$ = $1 / $3; }
| '-' exp %prec NEG { $$ = -$2; }
| exp '^' exp { $$ = pow ($1, $3); }
| '(' exp ')' { $$ = $2; }
;
%%
The functions `yylex', `yyerror' and `main' can be the same as before.
There are two important new features shown in this code.
In the second section (Bison declarations), `%left' declares token
types and says they are left-associative operators. The declarations
`%left' and `%right' (right associativity) take the place of `%token'
which is used to declare a token type name without associativity.
(These tokens are single-character literals, which ordinarily don't
need to be declared. We declare them here to specify the
associativity.)
Operator precedence is determined by the line ordering of the
declarations; the higher the line number of the declaration (lower on
the page or screen), the higher the precedence. Hence, exponentiation
has the highest precedence, unary minus (`NEG') is next, followed by
`*' and `/', and so on. *Note Operator Precedence: Precedence.
The other important new feature is the `%prec' in the grammar
section for the unary minus operator. The `%prec' simply instructs
Bison that the rule `| '-' exp' has the same precedence as `NEG'--in
this case the next-to-highest. *Note Context-Dependent Precedence:
Contextual Precedence.
Here is a sample run of `calc.y':
$ calc
4 + 4.5 - (34/(8*3+-3))
6.880952381
-56 + 2
-54
3 ^ 2
9

File: bison.info, Node: Simple Error Recovery, Next: Location Tracking Calc, Prev: Infix Calc, Up: Examples
2.3 Simple Error Recovery
=========================
Up to this point, this manual has not addressed the issue of "error
recovery"--how to continue parsing after the parser detects a syntax
error. All we have handled is error reporting with `yyerror'. Recall
that by default `yyparse' returns after calling `yyerror'. This means
that an erroneous input line causes the calculator program to exit.
Now we show how to rectify this deficiency.
The Bison language itself includes the reserved word `error', which
may be included in the grammar rules. In the example below it has been
added to one of the alternatives for `line':
line:
'\n'
| exp '\n' { printf ("\t%.10g\n", $1); }
| error '\n' { yyerrok; }
;
This addition to the grammar allows for simple error recovery in the
event of a syntax error. If an expression that cannot be evaluated is
read, the error will be recognized by the third rule for `line', and
parsing will continue. (The `yyerror' function is still called upon to
print its message as well.) The action executes the statement
`yyerrok', a macro defined automatically by Bison; its meaning is that
error recovery is complete (*note Error Recovery::). Note the
difference between `yyerrok' and `yyerror'; neither one is a misprint.
This form of error recovery deals with syntax errors. There are
other kinds of errors; for example, division by zero, which raises an
exception signal that is normally fatal. A real calculator program
must handle this signal and use `longjmp' to return to `main' and
resume parsing input lines; it would also have to discard the rest of
the current line of input. We won't discuss this issue further because
it is not specific to Bison programs.
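As a rough sketch only--it is not part of the calculator examples, and
it glosses over the portability caveats of jumping out of a signal
handler--the `longjmp' technique just described might look like the
following, placed in the grammar file's epilogue so that `yyparse' is
available:

     #include <setjmp.h>
     #include <signal.h>

     static jmp_buf restart_point;

     static void
     fpe_handler (int sig)
     {
       (void) sig;                     /* unused */
       signal (SIGFPE, fpe_handler);   /* re-arm the handler */
       longjmp (restart_point, 1);     /* abandon the computation */
     }

     int
     main (void)
     {
       signal (SIGFPE, fpe_handler);
       if (setjmp (restart_point) != 0)
         {
           /* Control arrives here after a division by zero; a real
              program would also discard the rest of the input line.  */
         }
       return yyparse ();
     }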

File: bison.info, Node: Location Tracking Calc, Next: Multi-function Calc, Prev: Simple Error Recovery, Up: Examples
2.4 Location Tracking Calculator: `ltcalc'
==========================================
This example extends the infix notation calculator with location
tracking. This feature will be used to improve the error messages. For
the sake of clarity, this example is a simple integer calculator, since
most of the work needed to use locations will be done in the lexical
analyzer.
* Menu:
* Ltcalc Declarations:: Bison and C declarations for ltcalc.
* Ltcalc Rules:: Grammar rules for ltcalc, with explanations.
* Ltcalc Lexer:: The lexical analyzer.

File: bison.info, Node: Ltcalc Declarations, Next: Ltcalc Rules, Up: Location Tracking Calc
2.4.1 Declarations for `ltcalc'
-------------------------------
The C and Bison declarations for the location tracking calculator are
the same as the declarations for the infix notation calculator.
/* Location tracking calculator. */
%{
#define YYSTYPE int
#include <math.h>
int yylex (void);
void yyerror (char const *);
%}
/* Bison declarations. */
%token NUM
%left '-' '+'
%left '*' '/'
%left NEG
%right '^'
%% /* The grammar follows. */
Note there are no declarations specific to locations. Defining a data
type for storing locations is not needed: we will use the type provided
by default (*note Data Types of Locations: Location Type.), which is a
four member structure with the following integer fields: `first_line',
`first_column', `last_line' and `last_column'. By convention, and in
accordance with the GNU Coding Standards and common practice, the line
and column count both start at 1.
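Concretely, the default structure is equivalent to the following C
definition; the generated parser declares it under the name `YYLTYPE':

     typedef struct YYLTYPE
     {
       int first_line;
       int first_column;
       int last_line;
       int last_column;
     } YYLTYPE;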

File: bison.info, Node: Ltcalc Rules, Next: Ltcalc Lexer, Prev: Ltcalc Declarations, Up: Location Tracking Calc
2.4.2 Grammar Rules for `ltcalc'
--------------------------------
Whether handling locations or not has no effect on the syntax of your
language. Therefore, grammar rules for this example will be very close
to those of the previous example: we will only modify them to benefit
from the new information.
Here, we will use locations to report divisions by zero, and locate
the wrong expressions or subexpressions.
input:
/* empty */
| input line
;
line:
'\n'
| exp '\n' { printf ("%d\n", $1); }
;
exp:
NUM { $$ = $1; }
| exp '+' exp { $$ = $1 + $3; }
| exp '-' exp { $$ = $1 - $3; }
| exp '*' exp { $$ = $1 * $3; }
| exp '/' exp
{
if ($3)
$$ = $1 / $3;
else
{
$$ = 1;
fprintf (stderr, "%d.%d-%d.%d: division by zero",
@3.first_line, @3.first_column,
@3.last_line, @3.last_column);
}
}
| '-' exp %prec NEG { $$ = -$2; }
| exp '^' exp { $$ = pow ($1, $3); }
| '(' exp ')' { $$ = $2; }
This code shows how to reach locations inside of semantic actions, by
using the pseudo-variables `@N' for rule components, and the
pseudo-variable `@$' for groupings.
We don't need to assign a value to `@$': the output parser does it
automatically. By default, before executing the C code of each action,
`@$' is set to range from the beginning of `@1' to the end of `@N', for
a rule with N components. This behavior can be redefined (*note
Default Action for Locations: Location Default Action.), and for very
specific rules, `@$' can be computed by hand.

File: bison.info, Node: Ltcalc Lexer, Prev: Ltcalc Rules, Up: Location Tracking Calc
2.4.3 The `ltcalc' Lexical Analyzer.
------------------------------------
Until now, we relied on Bison's defaults to enable location tracking.
The next step is to rewrite the lexical analyzer, and make it able to
feed the parser with the token locations, as it already does for
semantic values.
To this end, we must take into account every single character of the
input text, to keep the computed locations from being fuzzy or wrong:
int
yylex (void)
{
int c;
/* Skip white space. */
while ((c = getchar ()) == ' ' || c == '\t')
++yylloc.last_column;
/* Step. */
yylloc.first_line = yylloc.last_line;
yylloc.first_column = yylloc.last_column;
/* Process numbers. */
if (isdigit (c))
{
yylval = c - '0';
++yylloc.last_column;
while (isdigit (c = getchar ()))
{
++yylloc.last_column;
yylval = yylval * 10 + c - '0';
}
ungetc (c, stdin);
return NUM;
}
/* Return end-of-input. */
if (c == EOF)
return 0;
/* Return a single char, and update location. */
if (c == '\n')
{
++yylloc.last_line;
yylloc.last_column = 0;
}
else
++yylloc.last_column;
return c;
}
Basically, the lexical analyzer performs the same processing as
before: it skips blanks and tabs, and reads numbers or single-character
tokens. In addition, it updates `yylloc', the global variable (of type
`YYLTYPE') containing the token's location.
Now, each time this function returns a token, the parser has its
number as well as its semantic value, and its location in the text.
The last needed change is to initialize `yylloc', for example in the
controlling function:
int
main (void)
{
yylloc.first_line = yylloc.last_line = 1;
yylloc.first_column = yylloc.last_column = 0;
return yyparse ();
}
Remember that computing locations is not a matter of syntax. Every
character must be associated with a location update, whether it appears
in valid input, in a comment, in a literal string, and so on.

File: bison.info, Node: Multi-function Calc, Next: Exercises, Prev: Location Tracking Calc, Up: Examples
2.5 Multi-Function Calculator: `mfcalc'
=======================================
Now that the basics of Bison have been discussed, it is time to move on
to a more advanced problem. The above calculators provided only five
functions, `+', `-', `*', `/' and `^'. It would be nice to have a
calculator that provides other mathematical functions such as `sin',
`cos', etc.
It is easy to add new operators to the infix calculator as long as
they are only single-character literals. The lexical analyzer `yylex'
passes back all nonnumeric characters as tokens, so new grammar rules
suffice for adding a new operator. But we want something more
flexible: built-in functions whose syntax has this form:
FUNCTION_NAME (ARGUMENT)
At the same time, we will add memory to the calculator, by allowing you
to create named variables, store values in them, and use them later.
Here is a sample session with the multi-function calculator:
$ mfcalc
pi = 3.141592653589
3.1415926536
sin(pi)
0.0000000000
alpha = beta1 = 2.3
2.3000000000
alpha
2.3000000000
ln(alpha)
0.8329091229
exp(ln(beta1))
2.3000000000
$
Note that multiple assignment and nested function calls are
permitted.
* Menu:
* Mfcalc Declarations:: Bison declarations for multi-function calculator.
* Mfcalc Rules:: Grammar rules for the calculator.
* Mfcalc Symbol Table:: Symbol table management subroutines.

File: bison.info, Node: Mfcalc Declarations, Next: Mfcalc Rules, Up: Multi-function Calc
2.5.1 Declarations for `mfcalc'
-------------------------------
Here are the C and Bison declarations for the multi-function calculator.
%{
#include <math.h> /* For math functions, cos(), sin(), etc. */
#include "calc.h" /* Contains definition of `symrec'. */
int yylex (void);
void yyerror (char const *);
%}
%union {
double val; /* For returning numbers. */
symrec *tptr; /* For returning symbol-table pointers. */
}
%token <val> NUM /* Simple double precision number. */
%token <tptr> VAR FNCT /* Variable and function. */
%type <val> exp
%right '='
%left '-' '+'
%left '*' '/'
%left NEG /* negation--unary minus */
%right '^' /* exponentiation */
The above grammar introduces only two new features of the Bison
language. These features allow semantic values to have various data
types (*note More Than One Value Type: Multiple Types.).
The `%union' declaration specifies the entire list of possible types;
this is instead of defining `YYSTYPE'. The allowable types are now
double-floats (for `exp' and `NUM') and pointers to entries in the
symbol table. *Note The Collection of Value Types: Union Decl.
Since values can now have various types, it is necessary to
associate a type with each grammar symbol whose semantic value is used.
These symbols are `NUM', `VAR', `FNCT', and `exp'. Their declarations
are augmented with information about their data type (placed between
angle brackets).
The Bison construct `%type' is used for declaring nonterminal
symbols, just as `%token' is used for declaring token types. We have
not used `%type' before because nonterminal symbols are normally
declared implicitly by the rules that define them. But `exp' must be
declared explicitly so we can specify its value type. *Note
Nonterminal Symbols: Type Decl.

File: bison.info, Node: Mfcalc Rules, Next: Mfcalc Symbol Table, Prev: Mfcalc Declarations, Up: Multi-function Calc
2.5.2 Grammar Rules for `mfcalc'
--------------------------------
Here are the grammar rules for the multi-function calculator. Most of
them are copied directly from `calc'; three rules, those which mention
`VAR' or `FNCT', are new.
%% /* The grammar follows. */
input:
  /* empty */
| input line
;

line:
  '\n'
| exp '\n'   { printf ("%.10g\n", $1); }
| error '\n' { yyerrok;                }
;

exp:
  NUM                { $$ = $1;                         }
| VAR                { $$ = $1->value.var;              }
| VAR '=' exp        { $$ = $3; $1->value.var = $3;     }
| FNCT '(' exp ')'   { $$ = (*($1->value.fnctptr))($3); }
| exp '+' exp        { $$ = $1 + $3;                    }
| exp '-' exp        { $$ = $1 - $3;                    }
| exp '*' exp        { $$ = $1 * $3;                    }
| exp '/' exp        { $$ = $1 / $3;                    }
| '-' exp  %prec NEG { $$ = -$2;                        }
| exp '^' exp        { $$ = pow ($1, $3);               }
| '(' exp ')'        { $$ = $2;                         }
;
/* End of grammar. */
%%

File: bison.info, Node: Mfcalc Symbol Table, Prev: Mfcalc Rules, Up: Multi-function Calc
2.5.3 The `mfcalc' Symbol Table
-------------------------------
The multi-function calculator requires a symbol table to keep track of
the names and meanings of variables and functions. This doesn't affect
the grammar rules (except for the actions) or the Bison declarations,
but it requires some additional C functions for support.
The symbol table itself consists of a linked list of records. Its
definition, which is kept in the header `calc.h', is as follows. It
provides for either functions or variables to be placed in the table.
/* Function type. */
typedef double (*func_t) (double);

/* Data type for links in the chain of symbols. */
struct symrec
{
  char *name;  /* name of symbol */
  int type;    /* type of symbol: either VAR or FNCT */
  union
  {
    double var;      /* value of a VAR */
    func_t fnctptr;  /* value of a FNCT */
  } value;
  struct symrec *next;  /* link field */
};

typedef struct symrec symrec;

/* The symbol table: a chain of `struct symrec'. */
extern symrec *sym_table;

symrec *putsym (char const *, int);
symrec *getsym (char const *);
The new version of `main' includes a call to `init_table', a function
that initializes the symbol table. Here they are, together with
`yyerror' and the table of built-in functions that `init_table'
installs:
#include <stdio.h>

/* Called by yyparse on error. */
void
yyerror (char const *s)
{
  fprintf (stderr, "%s\n", s);
}

struct init
{
  char const *fname;
  double (*fnct) (double);
};

struct init const arith_fncts[] =
{
  { "sin",  sin  },
  { "cos",  cos  },
  { "atan", atan },
  { "ln",   log  },
  { "exp",  exp  },
  { "sqrt", sqrt },
  { 0, 0 }
};

/* The symbol table: a chain of `struct symrec'. */
symrec *sym_table;

/* Put arithmetic functions in table. */
void
init_table (void)
{
  int i;
  for (i = 0; arith_fncts[i].fname != 0; i++)
    {
      symrec *ptr = putsym (arith_fncts[i].fname, FNCT);
      ptr->value.fnctptr = arith_fncts[i].fnct;
    }
}

int
main (void)
{
  init_table ();
  return yyparse ();
}
By simply editing the initialization list and adding the necessary
include files, you can add additional functions to the calculator.
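For example, the hyperbolic tangent could be made available with one
more line in the initialization list; `tanh' is declared in `math.h',
which the grammar file already includes, so nothing else needs to
change. (The line below is an illustration, not part of the example
above.)

  { "tanh", tanh },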
Two important functions allow look-up and installation of symbols in
the symbol table. The function `putsym' is passed a name and the type
(`VAR' or `FNCT') of the object to be installed. The object is linked
to the front of the list, and a pointer to the object is returned. The
function `getsym' is passed the name of the symbol to look up. If
found, a pointer to that symbol is returned; otherwise zero is returned.
#include <stdlib.h> /* malloc. */
#include <string.h> /* strlen. */

symrec *
putsym (char const *sym_name, int sym_type)
{
  symrec *ptr = (symrec *) malloc (sizeof (symrec));
  ptr->name = (char *) malloc (strlen (sym_name) + 1);
  strcpy (ptr->name, sym_name);
  ptr->type = sym_type;
  ptr->value.var = 0; /* Set value to 0 even if fctn. */
  ptr->next = (struct symrec *) sym_table;
  sym_table = ptr;
  return ptr;
}

symrec *
getsym (char const *sym_name)
{
  symrec *ptr;
  for (ptr = sym_table; ptr != (symrec *) 0;
       ptr = (symrec *) ptr->next)
    if (strcmp (ptr->name, sym_name) == 0)
      return ptr;
  return 0;
}
The function `yylex' must now recognize variables, numeric values,
and the single-character arithmetic operators. Strings of alphanumeric
characters with a leading letter are recognized as either variables or
functions depending on what the symbol table says about them.
The string is passed to `getsym' for lookup in the symbol table. If
the name appears in the table, a pointer to its location and its type
(`VAR' or `FNCT') are returned to `yyparse'. If it is not already in
the table, then it is installed as a `VAR' using `putsym'. Again, a
pointer and its type (which must be `VAR') are returned to `yyparse'.
No change is needed in the handling of numeric values and arithmetic
operators in `yylex'.
#include <ctype.h>

int
yylex (void)
{
  int c;

  /* Ignore white space, get first nonwhite character. */
  while ((c = getchar ()) == ' ' || c == '\t')
    continue;

  if (c == EOF)
    return 0;

  /* Char starts a number => parse the number. */
  if (c == '.' || isdigit (c))
    {
      ungetc (c, stdin);
      scanf ("%lf", &yylval.val);
      return NUM;
    }

  /* Char starts an identifier => read the name. */
  if (isalpha (c))
    {
      /* Initially make the buffer long enough
         for a 40-character symbol name. */
      static size_t length = 40;
      static char *symbuf = 0;
      symrec *s;
      int i;

      if (!symbuf)
        symbuf = (char *) malloc (length + 1);

      i = 0;
      do
        {
          /* If buffer is full, make it bigger. */
          if (i == length)
            {
              length *= 2;
              symbuf = (char *) realloc (symbuf, length + 1);
            }
          /* Add this character to the buffer. */
          symbuf[i++] = c;
          /* Get another character. */
          c = getchar ();
        }
      while (isalnum (c));

      ungetc (c, stdin);
      symbuf[i] = '\0';

      s = getsym (symbuf);
      if (s == 0)
        s = putsym (symbuf, VAR);
      yylval.tptr = s;
      return s->type;
    }

  /* Any other character is a token by itself. */
  return c;
}
The error-reporting function `yyerror' is unchanged from the version
shown earlier in this section. To provide parser traces on user
demand, `main' can be extended to set `yydebug' when the program is
invoked with the `-p' option (*note Tracing Your Parser: Tracing, for
details):

int
main (int argc, char const *argv[])
{
  int i;
  /* Enable parse traces on option -p. */
  for (i = 1; i < argc; ++i)
    if (!strcmp (argv[i], "-p"))
      yydebug = 1;
  init_table ();
  return yyparse ();
}
This program is both powerful and flexible. You may easily add new
functions, and it is a simple job to modify this code to install
predefined variables such as `pi' or `e' as well.
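For instance, here is a sketch of how `init_table' could be extended
to install `pi' and `e' as predefined variables; the numeric constants
below are illustrative approximations, and nothing else in the program
needs to change:

/* Put arithmetic functions and predefined variables in table. */
void
init_table (void)
{
  int i;
  symrec *ptr;
  for (i = 0; arith_fncts[i].fname != 0; i++)
    {
      ptr = putsym (arith_fncts[i].fname, FNCT);
      ptr->value.fnctptr = arith_fncts[i].fnct;
    }
  /* Predefined variables; the user can still assign new values. */
  ptr = putsym ("pi", VAR);
  ptr->value.var = 3.141592653589793;
  ptr = putsym ("e", VAR);
  ptr->value.var = 2.718281828459045;
}

Because `pi' and `e' are installed as ordinary `VAR' entries, they
behave exactly like variables the user defines at the prompt.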

File: bison.info, Node: Exercises, Prev: Multi-function Calc, Up: Examples
2.6 Exercises
=============
1. Add some new functions from `math.h' to the initialization list.
2. Add another array that contains constants and their values. Then
modify `init_table' to add these constants to the symbol table.
It will be easiest to give the constants type `VAR'.
3. Make the program report an error if the user refers to an
uninitialized variable in any way except to store a value in it.

File: bison.info, Node: Grammar File, Next: Interface, Prev: Examples, Up: Top
3 Bison Grammar Files
*********************
Bison takes as input a context-free grammar specification and produces a
C-language function that recognizes correct instances of the grammar.
The Bison grammar file conventionally has a name ending in `.y'.
*Note Invoking Bison: Invocation.
* Menu:
* Grammar Outline:: Overall layout of the grammar file.
* Symbols:: Terminal and nonterminal symbols.
* Rules:: How to write grammar rules.
* Recursion:: Writing recursive rules.
* Semantics:: Semantic values and actions.
* Tracking Locations:: Locations and actions.
* Named References:: Using named references in actions.
* Declarations:: All kinds of Bison declarations are described here.
* Multiple Parsers:: Putting more than one Bison parser in one program.

File: bison.info, Node: Grammar Outline, Next: Symbols, Up: Grammar File
3.1 Outline of a Bison Grammar
==============================
A Bison grammar file has four main sections, shown here with the
appropriate delimiters:
%{
PROLOGUE
%}
BISON DECLARATIONS
%%
GRAMMAR RULES
%%
EPILOGUE
Comments enclosed in `/* ... */' may appear in any of the sections.
As a GNU extension, `//' introduces a comment that continues until end
of line.
* Menu:
* Prologue:: Syntax and usage of the prologue.
* Prologue Alternatives:: Syntax and usage of alternatives to the prologue.
* Bison Declarations:: Syntax and usage of the Bison declarations section.
* Grammar Rules:: Syntax and usage of the grammar rules section.
* Epilogue:: Syntax and usage of the epilogue.

File: bison.info, Node: Prologue, Next: Prologue Alternatives, Up: Grammar Outline
3.1.1 The prologue
------------------
The PROLOGUE section contains macro definitions and declarations of
functions and variables that are used in the actions in the grammar
rules. These are copied to the beginning of the parser implementation
file so that they precede the definition of `yyparse'. You can use
`#include' to get the declarations from a header file. If you don't
need any C declarations, you may omit the `%{' and `%}' delimiters that
bracket this section.
The PROLOGUE section is terminated by the first occurrence of `%}'
that is outside a comment, a string literal, or a character constant.
You may have more than one PROLOGUE section, intermixed with the
BISON DECLARATIONS. This allows you to have C and Bison declarations
that refer to each other. For example, the `%union' declaration may
use types defined in a header file, and you may wish to prototype
functions that take arguments of type `YYSTYPE'. This can be done with
two PROLOGUE blocks, one before and one after the `%union' declaration.
%{
  #define _GNU_SOURCE
  #include <stdio.h>
  #include "ptypes.h"
%}

%union {
  long int n;
  tree t;  /* `tree' is defined in `ptypes.h'. */
}

%{
  static void print_token_value (FILE *, int, YYSTYPE);
  #define YYPRINT(F, N, L) print_token_value (F, N, L)
%}
...
When in doubt, it is usually safer to put prologue code before all
Bison declarations, rather than after. For example, any definitions of
feature test macros like `_GNU_SOURCE' or `_POSIX_C_SOURCE' should
appear before all Bison declarations, as feature test macros can affect
the behavior of Bison-generated `#include' directives.

File: bison.info, Node: Prologue Alternatives, Next: Bison Declarations, Prev: Prologue, Up: Grammar Outline
3.1.2 Prologue Alternatives
---------------------------
Where and how PROLOGUE sections are copied into the generated parser
can be subtle and inflexible. As an alternative, Bison provides a
`%code' directive with an explicit qualifier field, which identifies
the purpose of the code and thus the location(s) where Bison should
generate it. For C/C++, the qualifier can be omitted for the default
location, or it can be `requires', `provides', or `top'. *Note %code
Summary::.
Look again at the example of the previous section:
%{
  #define _GNU_SOURCE
  #include <stdio.h>
  #include "ptypes.h"
%}

%union {
  long int n;
  tree t;  /* `tree' is defined in `ptypes.h'. */
}

%{
  static void print_token_value (FILE *, int, YYSTYPE);
  #define YYPRINT(F, N, L) print_token_value (F, N, L)
%}
...
Notice that there are two PROLOGUE sections here, but there's a subtle
distinction between their functionality. For example, if you decide to
override Bison's default definition for `YYLTYPE', in which PROLOGUE
section should you write your new definition? You should write it in
the first since Bison will insert that code into the parser
implementation file _before_ the default `YYLTYPE' definition. In
which PROLOGUE section should you prototype an internal function,
`trace_token', that accepts `YYLTYPE' and `yytokentype' as arguments?
You should prototype it in the second since Bison will insert that code
_after_ the `YYLTYPE' and `yytokentype' definitions.
This distinction in functionality between the two PROLOGUE sections
is established by the appearance of the `%union' between them. This
behavior raises a few questions. First, why should the position of a
`%union' affect definitions related to `YYLTYPE' and `yytokentype'?
Second, what if there is no `%union'? In that case, the second kind of
PROLOGUE section is not available. This behavior is not intuitive.
To avoid this subtle `%union' dependency, rewrite the example using a
`%code top' and an unqualified `%code'. Let's go ahead and add the new
`YYLTYPE' definition and the `trace_token' prototype at the same time:
%code top {
  #define _GNU_SOURCE
  #include <stdio.h>

  /* WARNING: The following code really belongs
   * in a `%code requires'; see below. */
  #include "ptypes.h"
  #define YYLTYPE YYLTYPE
  typedef struct YYLTYPE
  {
    int first_line;
    int first_column;
    int last_line;
    int last_column;
    char *filename;
  } YYLTYPE;
}

%union {
  long int n;
  tree t;  /* `tree' is defined in `ptypes.h'. */
}

%code {
  static void print_token_value (FILE *, int, YYSTYPE);
  #define YYPRINT(F, N, L) print_token_value (F, N, L)
  static void trace_token (enum yytokentype token, YYLTYPE loc);
}
...
In this way, `%code top' and the unqualified `%code' achieve the same
functionality as the two kinds of PROLOGUE sections, but it's always
explicit which kind you intend. Moreover, both kinds are always
available even in the absence of `%union'.
The `%code top' block above logically contains two parts. The first
two lines before the warning need to appear near the top of the parser
implementation file. The first line after the warning is required by
`YYSTYPE' and thus also needs to appear in the parser implementation
file. However, if you've instructed Bison to generate a parser header
file (*note %defines: Decl Summary.), you probably want that line to
appear before the `YYSTYPE' definition in that header file as well.
The `YYLTYPE' definition should also appear in the parser header file
to override the default `YYLTYPE' definition there.
In other words, in the `%code top' block above, all but the first two
lines are dependency code required by the `YYSTYPE' and `YYLTYPE'
definitions. Thus, they belong in one or more `%code requires':
%code top {
  #define _GNU_SOURCE
  #include <stdio.h>
}

%code requires {
  #include "ptypes.h"
}

%union {
  long int n;
  tree t;  /* `tree' is defined in `ptypes.h'. */
}

%code requires {
  #define YYLTYPE YYLTYPE
  typedef struct YYLTYPE
  {
    int first_line;
    int first_column;
    int last_line;
    int last_column;
    char *filename;
  } YYLTYPE;
}

%code {
  static void print_token_value (FILE *, int, YYSTYPE);
  #define YYPRINT(F, N, L) print_token_value (F, N, L)
  static void trace_token (enum yytokentype token, YYLTYPE loc);
}
...
Now Bison will insert `#include "ptypes.h"' and the new `YYLTYPE'
definition before the Bison-generated `YYSTYPE' and `YYLTYPE'
definitions in both the parser implementation file and the parser
header file. (By the same reasoning, `%code requires' would also be
the appropriate place to write your own definition for `YYSTYPE'.)
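For example, in a grammar that uses no `%union' at all, a one-line
definition of `YYSTYPE' could be placed there; the choice of `double'
below is purely illustrative:

%code requires {
  /* Illustrative only: every semantic value is a `double'. */
  #define YYSTYPE double
}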
When you are writing dependency code for `YYSTYPE' and `YYLTYPE',
you should prefer `%code requires' over `%code top' regardless of
whether you instruct Bison to generate a parser header file. When you
are writing code that you need Bison to insert only into the parser
implementation file and that has no special need to appear at the top
of that file, you should prefer the unqualified `%code' over `%code
top'. These practices will make the purpose of each block of your code
explicit to Bison and to other developers reading your grammar file.
Following these practices, we expect the unqualified `%code' and `%code
requires' to be the most important of the four PROLOGUE alternatives.
At some point while developing your parser, you might decide to
provide `trace_token' to modules that are external to your parser.
Thus, you might wish for Bison to insert the prototype into both the
parser header file and the parser implementation file. Since this
function is not a dependency required by `YYSTYPE' or `YYLTYPE', it
doesn't make sense to move its prototype to a `%code requires'. More
importantly, since it depends upon `YYLTYPE' and `yytokentype', `%code
requires' is not sufficient. Instead, move its prototype from the
unqualified `%code' to a `%code provides':
%code top {
  #define _GNU_SOURCE
  #include <stdio.h>
}

%code requires {
  #include "ptypes.h"
}

%union {
  long int n;
  tree t;  /* `tree' is defined in `ptypes.h'. */
}

%code requires {
  #define YYLTYPE YYLTYPE
  typedef struct YYLTYPE
  {
    int first_line;
    int first_column;
    int last_line;
    int last_column;
    char *filename;
  } YYLTYPE;
}

%code provides {
  void trace_token (enum yytokentype token, YYLTYPE loc);
}

%code {
  static void print_token_value (FILE *, int, YYSTYPE);
  #define YYPRINT(F, N, L) print_token_value (F, N, L)
}
...
Bison will insert the `trace_token' prototype into both the parser
header file and the parser implementation file after the definitions
for `yytokentype', `YYLTYPE', and `YYSTYPE'.
The above examples are careful to write directives in an order that
reflects the layout of the generated parser implementation and header
files: `%code top', `%code requires', `%code provides', and then
`%code'. While your grammar files may generally be easier to read if
you also follow this order, Bison does not require it. Instead, Bison
lets you choose an organization that makes sense to you.
You may declare any of these directives multiple times in the
grammar file. In that case, Bison concatenates the contained code in
declaration order. This is the only way in which the position of one
of these directives within the grammar file affects its functionality.
The result of the previous two properties is greater flexibility in
how you may organize your grammar file. For example, you may organize
semantic-type-related directives by semantic type:
%code requires { #include "type1.h" }
%union { type1 field1; }
%destructor { type1_free ($$); } <field1>
%printer { type1_print (yyoutput, $$); } <field1>
%code requires { #include "type2.h" }
%union { type2 field2; }
%destructor { type2_free ($$); } <field2>
%printer { type2_print (yyoutput, $$); } <field2>
You could even place each of the above directive groups in the rules
section of the grammar file next to the set of rules that uses the
associated semantic type. (In the rules section, you must terminate
each of those directives with a semicolon.) And you don't have to
worry that some directive (like a `%union') in the definitions section
is going to adversely affect their functionality in some
counter-intuitive manner just because it comes first. Such an
organization is not possible using PROLOGUE sections.
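For instance, here is a sketch of the `type2' group moved into the
rules section, next to a hypothetical rule that uses that type; note
the terminating semicolon on each directive:

%%
rule_using_type1:
  /* ... components and actions involving `type1' ... */
;

%code requires { #include "type2.h" };
%union { type2 field2; };
%destructor { type2_free ($$); } <field2>;
%printer { type2_print (yyoutput, $$); } <field2>;

rule_using_type2:
  /* ... components and actions involving `type2' ... */
;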
This section has been concerned with explaining the advantages of
the four PROLOGUE alternatives over the original Yacc PROLOGUE.
However, in most cases when using these directives, you shouldn't need
to think about all the low-level ordering issues discussed here.
Instead, you should simply use these directives to label each block of
your code according to its purpose and let Bison handle the ordering.
`%code' is the most generic label. Move code to `%code requires',
`%code provides', or `%code top' as needed.

File: bison.info, Node: Bison Declarations, Next: Grammar Rules, Prev: Prologue Alternatives, Up: Grammar Outline
3.1.3 The Bison Declarations Section
------------------------------------
The BISON DECLARATIONS section contains declarations that define
terminal and nonterminal symbols, specify precedence, and so on. In
some simple grammars you may not need any declarations. *Note Bison
Declarations: Declarations.

File: bison.info, Node: Grammar Rules, Next: Epilogue, Prev: Bison Declarations, Up: Grammar Outline
3.1.4 The Grammar Rules Section
-------------------------------
The "grammar rules" section contains one or more Bison grammar rules,
and nothing else. *Note Syntax of Grammar Rules: Rules.
There must always be at least one grammar rule, and the first `%%'
(which precedes the grammar rules) may never be omitted even if it is
the first thing in the file.

File: bison.info, Node: Epilogue, Prev: Grammar Rules, Up: Grammar Outline
3.1.5 The epilogue
------------------
The EPILOGUE is copied verbatim to the end of the parser implementation
file, just as the PROLOGUE is copied to the beginning. This is the
most convenient place to put anything that you want to have in the
parser implementation file but which need not come before the
definition of `yyparse'. For example, the definitions of `yylex' and
`yyerror' often go here. Because C requires functions to be declared
before being used, you often need to declare functions like `yylex' and
`yyerror' in the Prologue, even if you define them in the Epilogue.
*Note Parser C-Language Interface: Interface.
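Here is a minimal sketch of this layout; the grammar itself is a
trivial placeholder, used only to show where the declarations and the
definitions go:

%{
  #include <stdio.h>
  /* Declare in the prologue the functions defined in the epilogue. */
  int yylex (void);
  void yyerror (char const *);
%}

%%
input: /* empty */;
%%

/* Epilogue: these definitions need not precede `yyparse'. */
int
yylex (void)
{
  int c = getchar ();
  return c == EOF ? 0 : c;  /* Zero signifies end-of-input. */
}

void
yyerror (char const *s)
{
  fprintf (stderr, "%s\n", s);
}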
If the last section is empty, you may omit the `%%' that separates it
from the grammar rules.
The Bison parser itself contains many macros and identifiers whose
names start with `yy' or `YY', so it is a good idea to avoid using any
such names (except those documented in this manual) in the epilogue of
the grammar file.

File: bison.info, Node: Symbols, Next: Rules, Prev: Grammar Outline, Up: Grammar File
3.2 Symbols, Terminal and Nonterminal
=====================================
"Symbols" in Bison grammars represent the grammatical classifications
of the language.
A "terminal symbol" (also known as a "token type") represents a
class of syntactically equivalent tokens. You use the symbol in grammar
rules to mean that a token in that class is allowed. The symbol is
represented in the Bison parser by a numeric code, and the `yylex'
function returns a token type code to indicate what kind of token has
been read. You don't need to know what the code value is; you can use
the symbol to stand for it.
A "nonterminal symbol" stands for a class of syntactically
equivalent groupings. The symbol name is used in writing grammar rules.
By convention, it should be all lower case.
Symbol names can contain letters, underscores, periods, and
non-initial digits and dashes. Dashes in symbol names are a GNU
extension, incompatible with POSIX Yacc. Periods and dashes make
symbol names less convenient to use with named references, which
require brackets around such names (*note Named References::).
Terminal symbols that contain periods or dashes make little sense:
since they are not valid identifiers in most programming languages,
they are not exported as token names.
There are three ways of writing terminal symbols in the grammar:
* A "named token type" is written with an identifier, like an
identifier in C. By convention, it should be all upper case. Each
such name must be defined with a Bison declaration such as
`%token'. *Note Token Type Names: Token Decl.
* A "character token type" (or "literal character token") is written
in the grammar using the same syntax used in C for character
constants; for example, `'+'' is a character token type. A
character token type doesn't need to be declared unless you need to
specify its semantic value data type (*note Data Types of Semantic
Values: Value Type.), associativity, or precedence (*note Operator
Precedence: Precedence.).
By convention, a character token type is used only to represent a
token that consists of that particular character. Thus, the token
type `'+'' is used to represent the character `+' as a token.
Nothing enforces this convention, but if you depart from it, your
program will confuse other readers.
All the usual escape sequences used in character literals in C can
be used in Bison as well, but you must not use the null character
as a character literal because its numeric code, zero, signifies
end-of-input (*note Calling Convention for `yylex': Calling
Convention.). Also, unlike standard C, trigraphs have no special
meaning in Bison character literals, nor is backslash-newline
allowed.
* A "literal string token" is written like a C string constant; for
example, `"<="' is a literal string token. A literal string token
doesn't need to be declared unless you need to specify its semantic
value data type (*note Value Type::), associativity, or precedence
(*note Precedence::).
You can associate the literal string token with a symbolic name as
an alias, using the `%token' declaration (*note Token
Declarations: Token Decl.). If you don't do that, the lexical
analyzer has to retrieve the token number for the literal string
token from the `yytname' table (*note Calling Convention::).
*Warning*: literal string tokens do not work in Yacc.
By convention, a literal string token is used only to represent a
token that consists of that particular string. Thus, you should
use the token type `"<="' to represent the string `<=' as a token.
Bison does not enforce this convention, but if you depart from it,
people who read your program will be confused.
All the escape sequences used in string literals in C can be used
in Bison as well, except that you must not use a null character
within a string literal. Also, unlike Standard C, trigraphs have
no special meaning in Bison string literals, nor is
backslash-newline allowed. A literal string token must contain
two or more characters; for a token containing just one character,
use a character token (see above).
How you choose to write a terminal symbol has no effect on its
grammatical meaning. That depends only on where it appears in rules
and on when the lexical analyzer returns that symbol.
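For instance, the following declarations sketch all three forms in one
place; the names `NUM' and `LE' are purely illustrative:

%token NUM        /* a named token type */
%token LE "<="    /* a literal string token with the alias `LE' */
%left '+' '-'     /* character token types, declared here only to
                     give them associativity and precedence */

In the grammar rules, `LE' and `"<="' may then be used interchangeably.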
The value returned by `yylex' is always one of the terminal symbols,
except that a zero or negative value signifies end-of-input. Whichever
way you write the token type in the grammar rules, you write it the
same way in the definition of `yylex'. The numeric code for a
character token type is simply the positive numeric code of the
character, so `yylex' can use the identical value to generate the
requisite code, though you may need to convert it to `unsigned char' to
avoid sign-extension on hosts where `char' is signed. Each named token