\input texinfo @c -*-texinfo-*-
@comment %**start of header
@setfilename bison.info
@include version.texi
@settitle Bison @value{VERSION}
@setchapternewpage odd
@finalout
@c SMALL BOOK version
@c This edition has been formatted so that you can format and print it in
@c the smallbook format.
@c @smallbook
@c Set following if you want to document %default-prec and %no-default-prec.
@c This feature is experimental and may change in future Bison versions.
@c @set defaultprec
@ifnotinfo
@syncodeindex fn cp
@syncodeindex vr cp
@syncodeindex tp cp
@end ifnotinfo
@ifinfo
@synindex fn cp
@synindex vr cp
@synindex tp cp
@end ifinfo
@comment %**end of header
@copying
This manual (@value{UPDATED}) is for GNU Bison (version
@value{VERSION}), the GNU parser generator.
Copyright @copyright{} 1988-1993, 1995, 1998-2012 Free Software
Foundation, Inc.
@quotation
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License,
Version 1.3 or any later version published by the Free Software
Foundation; with no Invariant Sections, with the Front-Cover texts
being ``A GNU Manual,'' and with the Back-Cover Texts as in
(a) below. A copy of the license is included in the section entitled
``GNU Free Documentation License.''
(a) The FSF's Back-Cover Text is: ``You have the freedom to copy and
modify this GNU manual. Buying copies from the FSF
supports it in developing GNU and promoting software
freedom.''
@end quotation
@end copying
@dircategory Software development
@direntry
* bison: (bison). GNU parser generator (Yacc replacement).
@end direntry
@titlepage
@title Bison
@subtitle The Yacc-compatible Parser Generator
@subtitle @value{UPDATED}, Bison Version @value{VERSION}
@author by Charles Donnelly and Richard Stallman
@page
@vskip 0pt plus 1filll
@insertcopying
@sp 2
Published by the Free Software Foundation @*
51 Franklin Street, Fifth Floor @*
Boston, MA 02110-1301 USA @*
Printed copies are available from the Free Software Foundation.@*
ISBN 1-882114-44-2
@sp 2
Cover art by Etienne Suvasa.
@end titlepage
@contents
@ifnottex
@node Top
@top Bison
@insertcopying
@end ifnottex
@menu
* Introduction::
* Conditions::
* Copying:: The GNU General Public License says
how you can copy and share Bison.
Tutorial sections:
* Concepts:: Basic concepts for understanding Bison.
* Examples:: Three simple explained examples of using Bison.
Reference sections:
* Grammar File:: Writing Bison declarations and rules.
* Interface:: C-language interface to the parser function @code{yyparse}.
* Algorithm:: How the Bison parser works at run-time.
* Error Recovery:: Writing rules for error recovery.
* Context Dependency:: What to do if your language syntax is too
messy for Bison to handle straightforwardly.
* Debugging:: Understanding or debugging Bison parsers.
* Invocation:: How to run Bison (to produce the parser implementation).
* Other Languages:: Creating C++ and Java parsers.
* FAQ:: Frequently Asked Questions
* Table of Symbols:: All the keywords of the Bison language are explained.
* Glossary:: Basic concepts are explained.
* Copying This Manual:: License for copying this manual.
* Bibliography:: Publications cited in this manual.
* Index of Terms:: Cross-references to the text.
@detailmenu
--- The Detailed Node Listing ---
The Concepts of Bison
* Language and Grammar:: Languages and context-free grammars,
as mathematical ideas.
* Grammar in Bison:: How we represent grammars for Bison's sake.
* Semantic Values:: Each token or syntactic grouping can have
a semantic value (the value of an integer,
the name of an identifier, etc.).
* Semantic Actions:: Each rule can have an action containing C code.
* GLR Parsers:: Writing parsers for general context-free languages.
* Locations:: Overview of location tracking.
* Bison Parser:: What are Bison's input and output,
how is the output used?
* Stages:: Stages in writing and running Bison grammars.
* Grammar Layout:: Overall structure of a Bison grammar file.
Writing GLR Parsers
* Simple GLR Parsers:: Using GLR parsers on unambiguous grammars.
* Merging GLR Parses:: Using GLR parsers to resolve ambiguities.
* GLR Semantic Actions:: Deferred semantic actions have special concerns.
* Compiler Requirements:: GLR parsers require a modern C compiler.
Examples
* RPN Calc:: Reverse polish notation calculator;
a first example with no operator precedence.
* Infix Calc:: Infix (algebraic) notation calculator.
Operator precedence is introduced.
* Simple Error Recovery:: Continuing after syntax errors.
* Location Tracking Calc:: Demonstrating the use of @@@var{n} and @@$.
* Multi-function Calc:: Calculator with memory and trig functions.
It uses multiple data-types for semantic values.
* Exercises:: Ideas for improving the multi-function calculator.
Reverse Polish Notation Calculator
* Rpcalc Declarations:: Prologue (declarations) for rpcalc.
* Rpcalc Rules:: Grammar Rules for rpcalc, with explanation.
* Rpcalc Lexer:: The lexical analyzer.
* Rpcalc Main:: The controlling function.
* Rpcalc Error:: The error reporting function.
* Rpcalc Generate:: Running Bison on the grammar file.
* Rpcalc Compile:: Run the C compiler on the output code.
Grammar Rules for @code{rpcalc}
* Rpcalc Input::
* Rpcalc Line::
* Rpcalc Expr::
Location Tracking Calculator: @code{ltcalc}
* Ltcalc Declarations:: Bison and C declarations for ltcalc.
* Ltcalc Rules:: Grammar rules for ltcalc, with explanations.
* Ltcalc Lexer:: The lexical analyzer.
Multi-Function Calculator: @code{mfcalc}
* Mfcalc Declarations:: Bison declarations for multi-function calculator.
* Mfcalc Rules:: Grammar rules for the calculator.
* Mfcalc Symbol Table:: Symbol table management subroutines.
Bison Grammar Files
* Grammar Outline:: Overall layout of the grammar file.
* Symbols:: Terminal and nonterminal symbols.
* Rules:: How to write grammar rules.
* Recursion:: Writing recursive rules.
* Semantics:: Semantic values and actions.
* Tracking Locations:: Locations and actions.
* Named References:: Using named references in actions.
* Declarations:: All kinds of Bison declarations are described here.
* Multiple Parsers:: Putting more than one Bison parser in one program.
Outline of a Bison Grammar
* Prologue:: Syntax and usage of the prologue.
* Prologue Alternatives:: Syntax and usage of alternatives to the prologue.
* Bison Declarations:: Syntax and usage of the Bison declarations section.
* Grammar Rules:: Syntax and usage of the grammar rules section.
* Epilogue:: Syntax and usage of the epilogue.
Defining Language Semantics
* Value Type:: Specifying one data type for all semantic values.
* Multiple Types:: Specifying several alternative data types.
* Actions:: An action is the semantic definition of a grammar rule.
* Action Types:: Specifying data types for actions to operate on.
* Mid-Rule Actions:: Most actions go at the end of a rule.
This says when, why and how to use the exceptional
action in the middle of a rule.
Actions in Mid-Rule
* Using Mid-Rule Actions:: Putting an action in the middle of a rule.
* Mid-Rule Action Translation:: How mid-rule actions are actually processed.
* Mid-Rule Conflicts:: Mid-rule actions can cause conflicts.
Tracking Locations
* Location Type:: Specifying a data type for locations.
* Actions and Locations:: Using locations in actions.
* Location Default Action:: Defining a general way to compute locations.
Bison Declarations
* Require Decl:: Requiring a Bison version.
* Token Decl:: Declaring terminal symbols.
* Precedence Decl:: Declaring terminals with precedence and associativity.
* Union Decl:: Declaring the set of all semantic value types.
* Type Decl:: Declaring the choice of type for a nonterminal symbol.
* Initial Action Decl:: Code run before parsing starts.
* Destructor Decl:: Declaring how symbols are freed.
* Printer Decl:: Declaring how symbol values are displayed.
* Expect Decl:: Suppressing warnings about parsing conflicts.
* Start Decl:: Specifying the start symbol.
* Pure Decl:: Requesting a reentrant parser.
* Push Decl:: Requesting a push parser.
* Decl Summary:: Table of all Bison declarations.
* %define Summary:: Defining variables to adjust Bison's behavior.
* %code Summary:: Inserting code into the parser source.
Parser C-Language Interface
* Parser Function:: How to call @code{yyparse} and what it returns.
* Push Parser Function:: How to call @code{yypush_parse} and what it returns.
* Pull Parser Function:: How to call @code{yypull_parse} and what it returns.
* Parser Create Function:: How to call @code{yypstate_new} and what it returns.
* Parser Delete Function:: How to call @code{yypstate_delete} and what it returns.
* Lexical:: You must supply a function @code{yylex}
which reads tokens.
* Error Reporting:: You must supply a function @code{yyerror}.
* Action Features:: Special features for use in actions.
* Internationalization:: How to let the parser speak in the user's
native language.
The Lexical Analyzer Function @code{yylex}
* Calling Convention:: How @code{yyparse} calls @code{yylex}.
* Token Values:: How @code{yylex} must return the semantic value
of the token it has read.
* Token Locations:: How @code{yylex} must return the text location
(line number, etc.) of the token, if the
actions want that.
* Pure Calling:: How the calling convention differs in a pure parser
(@pxref{Pure Decl, ,A Pure (Reentrant) Parser}).
The Bison Parser Algorithm
* Lookahead:: Parser looks one token ahead when deciding what to do.
* Shift/Reduce:: Conflicts: when either shifting or reduction is valid.
* Precedence:: Operator precedence works by resolving conflicts.
* Contextual Precedence:: When an operator's precedence depends on context.
* Parser States:: The parser is a finite-state-machine with stack.
* Reduce/Reduce:: When two rules are applicable in the same situation.
* Mysterious Conflicts:: Conflicts that look unjustified.
* Tuning LR:: How to tune fundamental aspects of LR-based parsing.
* Generalized LR Parsing:: Parsing arbitrary context-free grammars.
* Memory Management:: What happens when memory is exhausted. How to avoid it.
Operator Precedence
* Why Precedence:: An example showing why precedence is needed.
* Using Precedence:: How to specify precedence in Bison grammars.
* Precedence Examples:: How these features are used in the previous example.
* How Precedence:: How they work.
* Non Operators:: Using precedence for general conflicts.
Tuning LR
* LR Table Construction:: Choose a different construction algorithm.
* Default Reductions:: Disable default reductions.
* LAC:: Correct lookahead sets in the parser states.
* Unreachable States:: Keep unreachable parser states for debugging.
Handling Context Dependencies
* Semantic Tokens:: Token parsing can depend on the semantic context.
* Lexical Tie-ins:: Token parsing can depend on the syntactic context.
* Tie-in Recovery:: Lexical tie-ins have implications for how
error recovery rules must be written.
Debugging Your Parser
* Understanding:: Understanding the structure of your parser.
* Graphviz:: Getting a visual representation of the parser.
* Xml:: Getting a markup representation of the parser.
* Tracing:: Tracing the execution of your parser.
Tracing Your Parser
* Enabling Traces:: Activating run-time trace support
* Mfcalc Traces:: Extending @code{mfcalc} to support traces
* The YYPRINT Macro:: Obsolete interface for semantic value reports
Invoking Bison
* Bison Options:: All the options described in detail,
in alphabetical order by short options.
* Option Cross Key:: Alphabetical list of long options.
* Yacc Library:: Yacc-compatible @code{yylex} and @code{main}.
Parsers Written In Other Languages
* C++ Parsers:: The interface to generate C++ parser classes
* Java Parsers:: The interface to generate Java parser classes
C++ Parsers
* C++ Bison Interface:: Asking for C++ parser generation
* C++ Semantic Values:: %union vs. C++
* C++ Location Values:: The position and location classes
* C++ Parser Interface:: Instantiating and running the parser
* C++ Scanner Interface:: Exchanges between yylex and parse
* A Complete C++ Example:: Demonstrating their use
C++ Location Values
* C++ position:: One point in the source file
* C++ location:: Two points in the source file
* User Defined Location Type:: Required interface for locations
A Complete C++ Example
* Calc++ --- C++ Calculator:: The specifications
* Calc++ Parsing Driver:: An active parsing context
* Calc++ Parser:: A parser class
* Calc++ Scanner:: A pure C++ Flex scanner
* Calc++ Top Level:: Conducting the band
Java Parsers
* Java Bison Interface:: Asking for Java parser generation
* Java Semantic Values:: %type and %token vs. Java
* Java Location Values:: The position and location classes
* Java Parser Interface:: Instantiating and running the parser
* Java Scanner Interface:: Specifying the scanner for the parser
* Java Action Features:: Special features for use in actions
* Java Differences:: Differences between C/C++ and Java Grammars
* Java Declarations Summary:: List of Bison declarations used with Java
Frequently Asked Questions
* Memory Exhausted:: Breaking the Stack Limits
* How Can I Reset the Parser:: @code{yyparse} Keeps some State
* Strings are Destroyed:: @code{yylval} Loses Track of Strings
* Implementing Gotos/Loops:: Control Flow in the Calculator
* Multiple start-symbols:: Factoring closely related grammars
* Secure? Conform?:: Is Bison POSIX safe?
* I can't build Bison:: Troubleshooting
* Where can I find help?:: Troubleshouting
* Bug Reports:: Troublereporting
* More Languages:: Parsers in C++, Java, and so on
* Beta Testing:: Experimenting with development versions
* Mailing Lists:: Meeting other Bison users
Copying This Manual
* Copying This Manual:: License for copying this manual.
@end detailmenu
@end menu
@node Introduction
@unnumbered Introduction
@cindex introduction
@dfn{Bison} is a general-purpose parser generator that converts an
annotated context-free grammar into a deterministic LR or generalized
LR (GLR) parser employing LALR(1) parser tables. As an experimental
feature, Bison can also generate IELR(1) or canonical LR(1) parser
tables. Once you are proficient with Bison, you can use it to develop
a wide range of language parsers, from those used in simple desk
calculators to complex programming languages.
Bison is upward compatible with Yacc: all properly-written Yacc
grammars ought to work with Bison with no change. Anyone familiar
with Yacc should be able to use Bison with little trouble. You need
to be fluent in C or C++ programming in order to use Bison or to
understand this manual. Java is also supported as an experimental
feature.
We begin with tutorial chapters that explain the basic concepts of
using Bison and show three explained examples, each building on the
last. If you don't know Bison or Yacc, start by reading these
chapters. Reference chapters follow, which describe specific aspects
of Bison in detail.
Bison was written originally by Robert Corbett. Richard Stallman made
it Yacc-compatible. Wilfred Hansen of Carnegie Mellon University
added multi-character string literals and other features. Since then,
Bison has grown more robust and evolved many other new features thanks
to the hard work of a long list of volunteers. For details, see the
@file{THANKS} and @file{ChangeLog} files included in the Bison
distribution.
This edition corresponds to version @value{VERSION} of Bison.
@node Conditions
@unnumbered Conditions for Using Bison
The distribution terms for Bison-generated parsers permit using the
parsers in nonfree programs. Before Bison version 2.2, these extra
permissions applied only when Bison was generating LALR(1)
parsers in C@. And before Bison version 1.24, Bison-generated
parsers could be used only in programs that were free software.
The other GNU programming tools, such as the GNU C
compiler, have never
had such a requirement. They could always be used for nonfree
software. The reason Bison was different was not due to a special
policy decision; it resulted from applying the usual General Public
License to all of the Bison source code.
The main output of the Bison utility---the Bison parser implementation
file---contains a verbatim copy of a sizable piece of Bison, which is
the code for the parser's implementation. (The actions from your
grammar are inserted into this implementation at one point, but most
of the rest of the implementation is not changed.) When we applied
the GPL terms to the skeleton code for the parser's implementation,
the effect was to restrict the use of Bison output to free software.
We didn't change the terms because of sympathy for people who want to
make software proprietary. @strong{Software should be free.} But we
concluded that limiting Bison's use to free software was doing little to
encourage people to make other software free. So we decided to make the
practical conditions for using Bison match the practical conditions for
using the other GNU tools.
This exception applies when Bison is generating code for a parser.
You can tell whether the exception applies to a Bison output file by
inspecting the file for text beginning with ``As a special
exception@dots{}''. The text spells out the exact terms of the
exception.
@node Copying
@unnumbered GNU GENERAL PUBLIC LICENSE
@include gpl-3.0.texi
@node Concepts
@chapter The Concepts of Bison
This chapter introduces many of the basic concepts without which the
details of Bison will not make sense. If you do not already know how to
use Bison or Yacc, we suggest you start by reading this chapter carefully.
@menu
* Language and Grammar:: Languages and context-free grammars,
as mathematical ideas.
* Grammar in Bison:: How we represent grammars for Bison's sake.
* Semantic Values:: Each token or syntactic grouping can have
a semantic value (the value of an integer,
the name of an identifier, etc.).
* Semantic Actions:: Each rule can have an action containing C code.
* GLR Parsers:: Writing parsers for general context-free languages.
* Locations:: Overview of location tracking.
* Bison Parser:: What are Bison's input and output,
how is the output used?
* Stages:: Stages in writing and running Bison grammars.
* Grammar Layout:: Overall structure of a Bison grammar file.
@end menu
@node Language and Grammar
@section Languages and Context-Free Grammars
@cindex context-free grammar
@cindex grammar, context-free
In order for Bison to parse a language, it must be described by a
@dfn{context-free grammar}. This means that you specify one or more
@dfn{syntactic groupings} and give rules for constructing them from their
parts. For example, in the C language, one kind of grouping is called an
`expression'. One rule for making an expression might be, ``An expression
can be made of a minus sign and another expression''. Another would be,
``An expression can be an integer''. As you can see, rules are often
recursive, but there must be at least one rule which leads out of the
recursion.
@cindex BNF
@cindex Backus-Naur form
The most common formal system for presenting such rules for humans to read
is @dfn{Backus-Naur Form} or ``BNF'', which was developed in
order to specify the language Algol 60. Any grammar expressed in
BNF is a context-free grammar. The input to Bison is
essentially machine-readable BNF.
@cindex LALR grammars
@cindex IELR grammars
@cindex LR grammars
There are various important subclasses of context-free grammars. Although
it can handle almost all context-free grammars, Bison is optimized for what
are called LR(1) grammars. In brief, in these grammars, it must be possible
to tell how to parse any portion of an input string with just a single token
of lookahead. For historical reasons, Bison by default is limited by the
additional restrictions of LALR(1), which is hard to explain simply.
@xref{Mysterious Conflicts}, for more information on this. As an
experimental feature, you can escape these additional restrictions by
requesting IELR(1) or canonical LR(1) parser tables. @xref{LR Table
Construction}, to learn how.
@cindex GLR parsing
@cindex generalized LR (GLR) parsing
@cindex ambiguous grammars
@cindex nondeterministic parsing
Parsers for LR(1) grammars are @dfn{deterministic}, meaning
roughly that the next grammar rule to apply at any point in the input is
uniquely determined by the preceding input and a fixed, finite portion
(called a @dfn{lookahead}) of the remaining input. A context-free
grammar can be @dfn{ambiguous}, meaning that there are multiple ways to
apply the grammar rules to get the same inputs. Even unambiguous
grammars can be @dfn{nondeterministic}, meaning that no fixed
lookahead always suffices to determine the next grammar rule to apply.
With the proper declarations, Bison is also able to parse these more
general context-free grammars, using a technique known as GLR
parsing (for Generalized LR). Bison's GLR parsers
are able to handle any context-free grammar for which the number of
possible parses of any given string is finite.
@cindex symbols (abstract)
@cindex token
@cindex syntactic grouping
@cindex grouping, syntactic
In the formal grammatical rules for a language, each kind of syntactic
unit or grouping is named by a @dfn{symbol}. Those which are built by
grouping smaller constructs according to grammatical rules are called
@dfn{nonterminal symbols}; those which can't be subdivided are called
@dfn{terminal symbols} or @dfn{token types}. We call a piece of input
corresponding to a single terminal symbol a @dfn{token}, and a piece
corresponding to a single nonterminal symbol a @dfn{grouping}.
We can use the C language as an example of what symbols, terminal and
nonterminal, mean. The tokens of C are identifiers, constants (numeric
and string), and the various keywords, arithmetic operators and
punctuation marks. So the terminal symbols of a grammar for C include
`identifier', `number', `string', plus one symbol for each keyword,
operator or punctuation mark: `if', `return', `const', `static', `int',
`char', `plus-sign', `open-brace', `close-brace', `comma' and many more.
(These tokens can be subdivided into characters, but that is a matter of
lexicography, not grammar.)
Here is a simple C function subdivided into tokens:
@example
int             /* @r{keyword `int'} */
square (int x)  /* @r{identifier, open-paren, keyword `int',}
                   @r{identifier, close-paren} */
@{                /* @r{open-brace} */
  return x * x;  /* @r{keyword `return', identifier, asterisk,}
                    @r{identifier, semicolon} */
@}                /* @r{close-brace} */
@end example
The syntactic groupings of C include the expression, the statement, the
declaration, and the function definition. These are represented in the
grammar of C by nonterminal symbols `expression', `statement',
`declaration' and `function definition'. The full grammar uses dozens of
additional language constructs, each with its own nonterminal symbol, in
order to express the meanings of these four. The example above is a
function definition; it contains one declaration, and one statement. In
the statement, each @samp{x} is an expression and so is @samp{x * x}.
Each nonterminal symbol must have grammatical rules showing how it is made
out of simpler constructs. For example, one kind of C statement is the
@code{return} statement; this would be described with a grammar rule which
reads informally as follows:
@quotation
A `statement' can be made of a `return' keyword, an `expression' and a
`semicolon'.
@end quotation
@noindent
There would be many other rules for `statement', one for each kind of
statement in C.
@cindex start symbol
One nonterminal symbol must be distinguished as the special one which
defines a complete utterance in the language. It is called the @dfn{start
symbol}. In a compiler, this means a complete input program. In the C
language, the nonterminal symbol `sequence of definitions and declarations'
plays this role.
For example, @samp{1 + 2} is a valid C expression---a valid part of a C
program---but it is not valid as an @emph{entire} C program. In the
context-free grammar of C, this follows from the fact that `expression' is
not the start symbol.
The Bison parser reads a sequence of tokens as its input, and groups the
tokens using the grammar rules. If the input is valid, the end result is
that the entire token sequence reduces to a single grouping whose symbol is
the grammar's start symbol. If we use a grammar for C, the entire input
must be a `sequence of definitions and declarations'. If not, the parser
reports a syntax error.
@node Grammar in Bison
@section From Formal Rules to Bison Input
@cindex Bison grammar
@cindex grammar, Bison
@cindex formal grammar
A formal grammar is a mathematical construct. To define the language
for Bison, you must write a file expressing the grammar in Bison syntax:
a @dfn{Bison grammar} file. @xref{Grammar File, ,Bison Grammar Files}.
A nonterminal symbol in the formal grammar is represented in Bison input
as an identifier, like an identifier in C@. By convention, it should be
in lower case, such as @code{expr}, @code{stmt} or @code{declaration}.
The Bison representation for a terminal symbol is also called a @dfn{token
type}. Token types as well can be represented as C-like identifiers. By
convention, these identifiers should be upper case to distinguish them from
nonterminals: for example, @code{INTEGER}, @code{IDENTIFIER}, @code{IF} or
@code{RETURN}. A terminal symbol that stands for a particular keyword in
the language should be named after that keyword converted to upper case.
The terminal symbol @code{error} is reserved for error recovery.
@xref{Symbols}.
A terminal symbol can also be represented as a character literal, just like
a C character constant. You should do this whenever a token is just a
single character (parenthesis, plus-sign, etc.): use that same character in
a literal as the terminal symbol for that token.
A third way to represent a terminal symbol is with a C string constant
containing several characters. @xref{Symbols}, for more information.
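For example, a grammar might give the two-character operator @samp{<=} a
symbolic name together with a string alias, and then use either form in
its rules. The following is only a sketch: the token name @code{LE} is
made up for this illustration, and the exact declaration syntax is
described later (@pxref{Token Decl}).
@example
%token LE "<="   /* @r{sketch: @code{LE} is a made-up name for this token} */
@dots{}
exp: exp "<=" exp ;
@end example
@noindent
Once the alias has been declared, the string and the symbolic name are
interchangeable in the grammar rules.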
The grammar rules also have an expression in Bison syntax. For example,
here is the Bison rule for a C @code{return} statement. The semicolon in
quotes is a literal character token, representing part of the C syntax for
the statement; the naked semicolon, and the colon, are Bison punctuation
used in every rule.
@example
stmt: RETURN expr ';' ;
@end example
@noindent
@xref{Rules, ,Syntax of Grammar Rules}.
@node Semantic Values
@section Semantic Values
@cindex semantic value
@cindex value, semantic
A formal grammar selects tokens only by their classifications: for example,
if a rule mentions the terminal symbol `integer constant', it means that
@emph{any} integer constant is grammatically valid in that position. The
precise value of the constant is irrelevant to how to parse the input: if
@samp{x+4} is grammatical then @samp{x+1} or @samp{x+3989} is equally
grammatical.
But the precise value is very important for what the input means once it is
parsed. A compiler is useless if it fails to distinguish between 4, 1 and
3989 as constants in the program! Therefore, each token in a Bison grammar
has both a token type and a @dfn{semantic value}. @xref{Semantics,
,Defining Language Semantics},
for details.
The token type is a terminal symbol defined in the grammar, such as
@code{INTEGER}, @code{IDENTIFIER} or @code{','}. It tells everything
you need to know to decide where the token may validly appear and how to
group it with other tokens. The grammar rules know nothing about tokens
except their types.
The semantic value has all the rest of the information about the
meaning of the token, such as the value of an integer, or the name of an
identifier. (A token such as @code{','} which is just punctuation doesn't
need to have any semantic value.)
For example, an input token might be classified as token type
@code{INTEGER} and have the semantic value 4. Another input token might
have the same token type @code{INTEGER} but value 3989. When a grammar
rule says that @code{INTEGER} is allowed, either of these tokens is
acceptable because each is an @code{INTEGER}. When the parser accepts the
token, it keeps track of the token's semantic value.
Each grouping can also have a semantic value as well as its nonterminal
symbol. For example, in a calculator, an expression typically has a
semantic value that is a number. In a compiler for a programming
language, an expression typically has a semantic value that is a tree
structure describing the meaning of the expression.
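As a concrete illustration, here is how a hand-written lexical analyzer
might attach a semantic value to an @code{INTEGER} token. This is only a
sketch: it assumes the grammar declares a token named @code{INTEGER},
that the semantic value type is @code{int}, and it uses the variable
@code{yylval}, through which the value is passed to the parser and which
is described later in this manual (@pxref{Token Values}).
@example
int
yylex (void)
@{
  int c = getchar ();   /* @r{relies on @code{<stdio.h>} and @code{<ctype.h>}} */
  if (isdigit (c))
    @{
      yylval = 0;       /* @r{semantic value of the token being built} */
      do
        @{
          yylval = 10 * yylval + (c - '0');
          c = getchar ();
        @}
      while (isdigit (c));
      ungetc (c, stdin);
      return INTEGER;   /* @r{token type @code{INTEGER}, value in @code{yylval}} */
    @}
  if (c == EOF)
    return 0;           /* @r{tells the parser the input ends here} */
  return c;             /* @r{any other character is its own token type} */
@}
@end example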
@node Semantic Actions
@section Semantic Actions
@cindex semantic actions
@cindex actions, semantic
In order to be useful, a program must do more than parse input; it must
also produce some output based on the input. In a Bison grammar, a grammar
rule can have an @dfn{action} made up of C statements. Each time the
parser recognizes a match for that rule, the action is executed.
@xref{Actions}.
Most of the time, the purpose of an action is to compute the semantic value
of the whole construct from the semantic values of its parts. For example,
suppose we have a rule which says an expression can be the sum of two
expressions. When the parser recognizes such a sum, each of the
subexpressions has a semantic value which describes how it was built up.
The action for this rule should create a similar sort of value for the
newly recognized larger expression.
For example, here is a rule that says an expression can be the sum of
two subexpressions:
@example
expr: expr '+' expr @{ $$ = $1 + $3; @} ;
@end example
@noindent
The action says how to produce the semantic value of the sum expression
from the values of the two subexpressions.
@node GLR Parsers
@section Writing GLR Parsers
@cindex GLR parsing
@cindex generalized LR (GLR) parsing
@findex %glr-parser
@cindex conflicts
@cindex shift/reduce conflicts
@cindex reduce/reduce conflicts
In some grammars, Bison's deterministic
LR(1) parsing algorithm cannot decide whether to apply a
certain grammar rule at a given point. That is, it may not be able to
decide (on the basis of the input read so far) which of two possible
reductions (applications of a grammar rule) applies, or whether to apply
a reduction or read more of the input and apply a reduction later in the
input. These are known respectively as @dfn{reduce/reduce} conflicts
(@pxref{Reduce/Reduce}), and @dfn{shift/reduce} conflicts
(@pxref{Shift/Reduce}).
To use a grammar that is not easily modified to be LR(1), a
more general parsing algorithm is sometimes necessary. If you include
@code{%glr-parser} among the Bison declarations in your file
(@pxref{Grammar Outline}), the result is a Generalized LR
(GLR) parser. These parsers handle Bison grammars that
contain no unresolved conflicts (i.e., after applying precedence
declarations) identically to deterministic parsers. However, when
faced with unresolved shift/reduce and reduce/reduce conflicts,
GLR parsers use the simple expedient of doing both,
effectively cloning the parser to follow both possibilities. Each of
the resulting parsers can again split, so that at any given time, there
can be any number of possible parses being explored. The parsers
proceed in lockstep; that is, all of them consume (shift) a given input
symbol before any of them proceed to the next. Each of the cloned
parsers eventually meets one of two possible fates: either it runs into
a parsing error, in which case it simply vanishes, or it merges with
another parser, because the two of them have reduced the input to an
identical set of symbols.
During the time that there are multiple parsers, semantic actions are
recorded, but not performed. When a parser disappears, its recorded
semantic actions disappear as well, and are never performed. When a
reduction makes two parsers identical, causing them to merge, Bison
records both sets of semantic actions. Whenever the last two parsers
merge, reverting to the single-parser case, Bison resolves all the
outstanding actions either by precedences given to the grammar rules
involved, or by performing both actions, and then calling a designated
user-defined function on the resulting values to produce an arbitrary
merged result.
@menu
* Simple GLR Parsers:: Using GLR parsers on unambiguous grammars.
* Merging GLR Parses:: Using GLR parsers to resolve ambiguities.
* GLR Semantic Actions:: Deferred semantic actions have special concerns.
* Compiler Requirements:: GLR parsers require a modern C compiler.
@end menu
@node Simple GLR Parsers
@subsection Using GLR on Unambiguous Grammars
@cindex GLR parsing, unambiguous grammars
@cindex generalized LR (GLR) parsing, unambiguous grammars
@findex %glr-parser
@findex %expect-rr
@cindex conflicts
@cindex reduce/reduce conflicts
@cindex shift/reduce conflicts
In the simplest cases, you can use the GLR algorithm
to parse grammars that are unambiguous but fail to be LR(1).
Such grammars typically require more than one symbol of lookahead.
Consider a problem that
arises in the declaration of enumerated and subrange types in the
programming language Pascal. Here are some examples:
@example
type subrange = lo .. hi;
type enum = (a, b, c);
@end example
@noindent
The original language standard allows only numeric
literals and constant identifiers for the subrange bounds (@samp{lo}
and @samp{hi}), but Extended Pascal (ISO/IEC
10206) and many other
Pascal implementations allow arbitrary expressions there. This gives
rise to the following situation, containing a superfluous pair of
parentheses:
@example
type subrange = (a) .. b;
@end example
@noindent
Compare this to the following declaration of an enumerated
type with only one value:
@example
type enum = (a);
@end example
@noindent
(These declarations are contrived, but they are syntactically
valid, and more-complicated cases can come up in practical programs.)
These two declarations look identical until the @samp{..} token.
With normal LR(1) one-token lookahead it is not
possible to decide between the two forms when the identifier
@samp{a} is parsed. It is, however, desirable
for a parser to decide this, since in the latter case
@samp{a} must become a new identifier to represent the enumeration
value, while in the former case @samp{a} must be evaluated with its
current meaning, which may be a constant or even a function call.
You could parse @samp{(a)} as an ``unspecified identifier in parentheses'',
to be resolved later, but this typically requires substantial
contortions in both semantic actions and large parts of the
grammar, where the parentheses are nested in the recursive rules for
expressions.
You might think of using the lexer to distinguish between the two
forms by returning different tokens for currently defined and
undefined identifiers. But if these declarations occur in a local
scope, and @samp{a} is defined in an outer scope, then both forms
are possible---either locally redefining @samp{a}, or using the
value of @samp{a} from the outer scope. So this approach cannot
work.
A simple solution to this problem is to declare the parser to
use the GLR algorithm.
When the GLR parser reaches the critical state, it
merely splits into two branches and pursues both syntax rules
simultaneously. Sooner or later, one of them runs into a parsing
error. If there is a @samp{..} token before the next
@samp{;}, the rule for enumerated types fails since it cannot
accept @samp{..} anywhere; otherwise, the subrange type rule
fails since it requires a @samp{..} token. So one of the branches
fails silently, and the other one continues normally, performing
all the intermediate actions that were postponed during the split.
If the input is syntactically incorrect, both branches fail and the parser
reports a syntax error as usual.
The effect of all this is that the parser seems to ``guess'' the
correct branch to take, or in other words, it seems to use more
lookahead than the underlying LR(1) algorithm actually allows
for. In this example, LR(2) would suffice, but some cases that are
not LR(@math{k}) for any @math{k} can also be handled this way.
In general, a GLR parser can take quadratic or cubic worst-case time,
and the current Bison parser even takes exponential time and space
for some grammars. In practice, this rarely happens, and for many
grammars it is possible to prove that it cannot happen.
The present example contains only one conflict between two
rules, and the type-declaration context containing the conflict
cannot be nested. So the number of
branches that can exist at any time is limited by the constant 2,
and the parsing time is still linear.
Here is a Bison grammar corresponding to the example above. It
parses a vastly simplified form of Pascal type declarations.
@example
%token TYPE DOTDOT ID
@group
%left '+' '-'
%left '*' '/'
@end group
%%
@group
type_decl: TYPE ID '=' type ';' ;
@end group
@group
type:
'(' id_list ')'
| expr DOTDOT expr
;
@end group
@group
id_list:
ID
| id_list ',' ID
;
@end group
@group
expr:
'(' expr ')'
| expr '+' expr
| expr '-' expr
| expr '*' expr
| expr '/' expr
| ID
;
@end group
@end example
When used as a normal LR(1) grammar, Bison correctly complains
about one reduce/reduce conflict. In the conflicting situation the
parser chooses one of the alternatives, arbitrarily the one
declared first. Therefore the following correct input is not
recognized:
@example
type t = (a) .. b;
@end example
The parser can be turned into a GLR parser, while also telling Bison
to be silent about the one known reduce/reduce conflict, by adding
these two declarations to the Bison grammar file (before the first
@samp{%%}):
@example
%glr-parser
%expect-rr 1
@end example
@noindent
No change in the grammar itself is required. Now the
parser recognizes all valid declarations, according to the
limited syntax above, transparently. In fact, the user does not even
notice when the parser splits.
So here we have a case where we can use the benefits of GLR,
almost without disadvantages. Even in simple cases like this, however,
there are at least two potential problems to beware. First, always
analyze the conflicts reported by Bison to make sure that GLR
splitting is only done where it is intended. A GLR parser
splitting inadvertently may cause problems less obvious than an
LR parser statically choosing the wrong alternative in a
conflict. Second, consider interactions with the lexer (@pxref{Semantic
Tokens}) with great care. Since a split parser consumes tokens without
performing any actions during the split, the lexer cannot obtain
information via parser actions. Some cases of lexer interactions can be
eliminated by using GLR to shift the complications from the
lexer to the parser. You must check the remaining cases for
correctness.
In our example, it would be safe for the lexer to return tokens based on
their current meanings in some symbol table, because no new symbols are
defined in the middle of a type declaration. Though it is possible for
a parser to define the enumeration constants as they are parsed, before
the type declaration is completed, it actually makes no difference since
they cannot be used within the same enumerated type declaration.
@node Merging GLR Parses
@subsection Using GLR to Resolve Ambiguities
@cindex GLR parsing, ambiguous grammars
@cindex generalized LR (GLR) parsing, ambiguous grammars
@findex %dprec
@findex %merge
@cindex conflicts
@cindex reduce/reduce conflicts
Let's consider an example, vastly simplified from a C++ grammar.
@example
%@{
#include <stdio.h>
#define YYSTYPE char const *
int yylex (void);
void yyerror (char const *);
%@}
%token TYPENAME ID
%right '='
%left '+'
%glr-parser
%%
prog:
  /* Nothing. */
| prog stmt   @{ printf ("\n"); @}
;

stmt:
  expr ';'  %dprec 1
| decl      %dprec 2
;

expr:
  ID               @{ printf ("%s ", $$); @}
| TYPENAME '(' expr ')'
                   @{ printf ("%s <cast> ", $1); @}
| expr '+' expr    @{ printf ("+ "); @}
| expr '=' expr    @{ printf ("= "); @}
;

decl:
  TYPENAME declarator ';'
                   @{ printf ("%s <declare> ", $1); @}
| TYPENAME declarator '=' expr ';'
                   @{ printf ("%s <init-declare> ", $1); @}
;

declarator:
  ID               @{ printf ("\"%s\" ", $1); @}
| '(' declarator ')'
;
@end example
@noindent
This models a problematic part of the C++ grammar---the ambiguity between
certain declarations and statements. For example,
@example
T (x) = y+z;
@end example
@noindent
parses as either an @code{expr} or a @code{stmt}
(assuming that @samp{T} is recognized as a @code{TYPENAME} and
@samp{x} as an @code{ID}).
Bison detects this as a reduce/reduce conflict between the rules
@code{expr : ID} and @code{declarator : ID}, which it cannot resolve at the
time it encounters @code{x} in the example above. Since this is a
GLR parser, it therefore splits the problem into two parses, one for
each choice of resolving the reduce/reduce conflict.
Unlike the example from the previous section (@pxref{Simple GLR Parsers}),
however, neither of these parses ``dies,'' because the grammar as it stands is
ambiguous. One of the parsers eventually reduces @code{stmt : expr ';'} and
the other reduces @code{stmt : decl}, after which both parsers are in an
identical state: they've seen @samp{prog stmt} and have the same unprocessed
input remaining. We say that these parses have @dfn{merged}.
At this point, the GLR parser requires a specification in the
grammar of how to choose between the competing parses.
In the example above, the two @code{%dprec}
declarations specify that Bison is to give precedence
to the parse that interprets the example as a
@code{decl}, which implies that @code{x} is a declarator.
The parser therefore prints
@example
"x" y z + T <init-declare>
@end example
The @code{%dprec} declarations only come into play when more than one
parse survives. Consider a different input string for this parser:
@example
T (x) + y;
@end example
@noindent
This is another example of using GLR to parse an unambiguous
construct, as shown in the previous section (@pxref{Simple GLR Parsers}).
Here, there is no ambiguity (this cannot be parsed as a declaration).
However, at the time the Bison parser encounters @code{x}, it does not
have enough information to resolve the reduce/reduce conflict (again,
between @code{x} as an @code{expr} or a @code{declarator}). In this
case, no precedence declaration is used. Again, the parser splits
into two, one assuming that @code{x} is an @code{expr}, and the other
assuming @code{x} is a @code{declarator}. The second of these parsers
then vanishes when it sees @code{+}, and the parser prints
@example
x T <cast> y +
@end example
Suppose that instead of resolving the ambiguity, you wanted to see all
the possibilities. For this purpose, you must merge the semantic
actions of the two possible parsers, rather than choosing one over the
other. To do so, you could change the declaration of @code{stmt} as
follows:
@example
stmt:
expr ';' %merge <stmtMerge>
| decl %merge <stmtMerge>
;
@end example
@noindent
and define the @code{stmtMerge} function as:
@example
static YYSTYPE
stmtMerge (YYSTYPE x0, YYSTYPE x1)
@{
  printf ("<OR> ");
  return "";
@}
@end example
@noindent
with an accompanying forward declaration
in the C declarations at the beginning of the file:
@example
%@{
#define YYSTYPE char const *
static YYSTYPE stmtMerge (YYSTYPE x0, YYSTYPE x1);
%@}
@end example
@noindent
With these declarations, the resulting parser parses the first example
as both an @code{expr} and a @code{decl}, and prints
@example
"x" y z + T <init-declare> x T <cast> y z + = <OR>
@end example
Bison requires that all of the
productions that participate in any particular merge have identical
@samp{%merge} clauses. Otherwise, the ambiguity would be unresolvable,
and the parser will report an error during any parse that results in
the offending merge.
@node GLR Semantic Actions
@subsection GLR Semantic Actions
@cindex deferred semantic actions
By definition, a deferred semantic action is not performed at the same time as
the associated reduction.
This raises caveats for several Bison features you might use in a semantic
action in a GLR parser.
@vindex yychar
@cindex GLR parsers and @code{yychar}
@vindex yylval
@cindex GLR parsers and @code{yylval}
@vindex yylloc
@cindex GLR parsers and @code{yylloc}
In any semantic action, you can examine @code{yychar} to determine the type of
the lookahead token present at the time of the associated reduction.
After checking that @code{yychar} is not set to @code{YYEMPTY} or @code{YYEOF},
you can then examine @code{yylval} and @code{yylloc} to determine the
lookahead token's semantic value and location, if any.
In a nondeferred semantic action, you can also modify any of these variables to
influence syntax analysis.
@xref{Lookahead, ,Lookahead Tokens}.
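For example, an action that reads (but does not modify) these variables
might report the current lookahead. The following is only a sketch: it
assumes an @code{int} semantic value type, location tracking, and a
prologue that includes @code{<stdio.h>}.
@example
exp: exp '+' exp
       @{
         if (yychar != YYEMPTY && yychar != YYEOF)
           fprintf (stderr, "lookahead %d (value %d) at line %d\n",
                    yychar, yylval, yylloc.first_line);
         $$ = $1 + $3;
       @}
@end example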
@findex yyclearin
@cindex GLR parsers and @code{yyclearin}
In a deferred semantic action, it's too late to influence syntax analysis.
In this case, @code{yychar}, @code{yylval}, and @code{yylloc} are set to
shallow copies of the values they had at the time of the associated reduction.
For this reason alone, modifying them is dangerous.
Moreover, the result of modifying them is undefined and subject to change with
future versions of Bison.
For example, if a semantic action might be deferred, you should never write it
to invoke @code{yyclearin} (@pxref{Action Features}) or to attempt to free
memory referenced by @code{yylval}.
@findex YYERROR
@cindex GLR parsers and @code{YYERROR}
Another Bison feature requiring special consideration is @code{YYERROR}
(@pxref{Action Features}), which you can invoke in a semantic action to
initiate error recovery.
During deterministic GLR operation, the effect of @code{YYERROR} is
the same as its effect in a deterministic parser.
In a deferred semantic action, its effect is undefined.
@c The effect is probably a syntax error at the split point.
Also, see @ref{Location Default Action, ,Default Action for Locations}, which
describes a special usage of @code{YYLLOC_DEFAULT} in GLR parsers.
@node Compiler Requirements
@subsection Considerations when Compiling GLR Parsers
@cindex @code{inline}
@cindex GLR parsers and @code{inline}
The GLR parsers require a compiler for ISO C89 or
later. In addition, they use the @code{inline} keyword, which is not
C89, but is C99 and is a common extension in pre-C99 compilers. It is
up to the user of these parsers to handle
portability issues. For instance, if using Autoconf and the Autoconf
macro @code{AC_C_INLINE}, a mere
@example
%@{
#include <config.h>
%@}
@end example
@noindent
will suffice. Otherwise, we suggest
@example
%@{
#if (__STDC_VERSION__ < 199901 && ! defined __GNUC__ \
     && ! defined inline)
# define inline
#endif
%@}
@end example
@node Locations
@section Locations
@cindex location
@cindex textual location
@cindex location, textual
Many applications, like interpreters or compilers, have to produce verbose
and useful error messages. To achieve this, one must be able to keep track of
the @dfn{textual location}, or @dfn{location}, of each syntactic construct.
Bison provides a mechanism for handling these locations.
Each token has a semantic value. In a similar fashion, each token has an
associated location, but the type of locations is the same for all tokens
and groupings. Moreover, the output parser is equipped with a default data
structure for storing locations (@pxref{Tracking Locations}, for more
details).
Like semantic values, locations can be reached in actions using a dedicated
set of constructs. In the example above, the location of the whole grouping
is @code{@@$}, while the locations of the subexpressions are @code{@@1} and
@code{@@3}.
When a rule is matched, a default action is used to compute the semantic value
of its left hand side (@pxref{Actions}). In the same way, another default
action is used for locations. However, the action for locations is general
enough for most cases, meaning there is usually no need to describe for each
rule how @code{@@$} should be formed. When building a new location for a given
grouping, the default behavior of the output parser is to take the beginning
of the first symbol, and the end of the last symbol.
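As a preview of what this looks like in an action, the default location
structure provides @code{first_line}, @code{first_column},
@code{last_line} and @code{last_column} members, so a rule can report
precisely where a problem occurred. This is only a sketch; the complete
set of constructs and the default computation are described in
@ref{Tracking Locations}.
@example
exp: exp '/' exp
       @{
         if ($3)
           $$ = $1 / $3;
         else
           @{
             $$ = 1;
             fprintf (stderr, "%d.%d-%d.%d: division by zero\n",
                      @@3.first_line, @@3.first_column,
                      @@3.last_line, @@3.last_column);
           @}
       @}
@end example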
@node Bison Parser
@section Bison Output: the Parser Implementation File
@cindex Bison parser
@cindex Bison utility
@cindex lexical analyzer, purpose
@cindex parser
When you run Bison, you give it a Bison grammar file as input. The
most important output is a C source file that implements a parser for
the language described by the grammar. This parser is called a
@dfn{Bison parser}, and this file is called a @dfn{Bison parser
implementation file}. Keep in mind that the Bison utility and the
Bison parser are two distinct programs: the Bison utility is a program
whose output is the Bison parser implementation file that becomes part
of your program.
The job of the Bison parser is to group tokens into groupings according to
the grammar rules---for example, to build identifiers and operators into
expressions. As it does this, it runs the actions for the grammar rules it
uses.
The tokens come from a function called the @dfn{lexical analyzer} that
you must supply in some fashion (such as by writing it in C). The Bison
parser calls the lexical analyzer each time it wants a new token. It
doesn't know what is ``inside'' the tokens (though their semantic values
may reflect this). Typically the lexical analyzer makes the tokens by
parsing characters of text, but Bison does not depend on this.
@xref{Lexical, ,The Lexical Analyzer Function @code{yylex}}.
The Bison parser implementation file is C code which defines a
function named @code{yyparse} which implements that grammar. This
function does not make a complete C program: you must supply some
additional functions. One is the lexical analyzer. Another is an
error-reporting function which the parser calls to report an error.
In addition, a complete C program must start with a function called
@code{main}; you have to provide this, and arrange for it to call
@code{yyparse} or the parser will never run. @xref{Interface, ,Parser
C-Language Interface}.
Aside from the token type names and the symbols in the actions you
write, all symbols defined in the Bison parser implementation file
itself begin with @samp{yy} or @samp{YY}. This includes interface
functions such as the lexical analyzer function @code{yylex}, the
error reporting function @code{yyerror} and the parser function
@code{yyparse} itself. This also includes numerous identifiers used
for internal purposes. Therefore, you should avoid using C
identifiers starting with @samp{yy} or @samp{YY} in the Bison grammar
file except for the ones defined in this manual. Also, you should
avoid using the C identifiers @samp{malloc} and @samp{free} for
anything other than their usual meanings.
In some cases the Bison parser implementation file includes system
headers, and in those cases your code should respect the identifiers
reserved by those headers. On some non-GNU hosts, @code{<alloca.h>},
@code{<malloc.h>}, @code{<stddef.h>}, and @code{<stdlib.h>} are
included as needed to declare memory allocators and related types.
@code{<libintl.h>} is included if message translation is in use
(@pxref{Internationalization}). Other system headers may be included
if you define @code{YYDEBUG} to a nonzero value (@pxref{Tracing,
,Tracing Your Parser}).
@node Stages
@section Stages in Using Bison
@cindex stages in using Bison
@cindex using Bison
The actual language-design process using Bison, from grammar specification
to a working compiler or interpreter, has these parts:
@enumerate
@item
Formally specify the grammar in a form recognized by Bison
(@pxref{Grammar File, ,Bison Grammar Files}). For each grammatical rule
in the language, describe the action that is to be taken when an
instance of that rule is recognized. The action is described by a
sequence of C statements.
@item
Write a lexical analyzer to process input and pass tokens to the parser.
The lexical analyzer may be written by hand in C (@pxref{Lexical, ,The
Lexical Analyzer Function @code{yylex}}). It could also be produced
using Lex, but the use of Lex is not discussed in this manual.
@item
Write a controlling function that calls the Bison-produced parser.
@item
Write error-reporting routines.
@end enumerate
To turn this source code as written into a runnable program, you
must follow these steps:
@enumerate
@item
Run Bison on the grammar to produce the parser.
@item
Compile the code output by Bison, as well as any other source files.
@item
Link the object files to produce the finished product.
@end enumerate
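For a grammar file named, say, @file{foo.y}, these three steps usually
come down to commands along the following lines. This is a sketch: the
auxiliary file names (@file{lexer.c}, @file{main.c}) are placeholders,
and the C compiler invocation depends on your system. By default, Bison
writes the parser implementation for @file{foo.y} to @file{foo.tab.c}.
@example
bison foo.y
cc -c foo.tab.c lexer.c main.c
cc -o foo foo.tab.o lexer.o main.o
@end example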
@node Grammar Layout
@section The Overall Layout of a Bison Grammar
@cindex grammar file
@cindex file format
@cindex format of grammar file
@cindex layout of Bison grammar
The input file for the Bison utility is a @dfn{Bison grammar file}. The
general form of a Bison grammar file is as follows:
@example
%@{
@var{Prologue}
%@}

@var{Bison declarations}

%%
@var{Grammar rules}
%%

@var{Epilogue}
@end example
@noindent
The @samp{%%}, @samp{%@{} and @samp{%@}} are punctuation that appears
in every Bison grammar file to separate the sections.
The prologue may define types and variables used in the actions. You can
also use preprocessor commands to define macros used there, and use
@code{#include} to include header files that do any of these things.
You need to declare the lexical analyzer @code{yylex} and the error
printer @code{yyerror} here, along with any other global identifiers
used by the actions in the grammar rules.
The Bison declarations declare the names of the terminal and nonterminal
symbols, and may also describe operator precedence and the data types of
semantic values of various symbols.
The grammar rules define how to construct each nonterminal symbol from its
parts.
The epilogue can contain any code you want to use. Often the
definitions of functions declared in the prologue go here. In a
simple program, all the rest of the program can go here.
@node Examples
@chapter Examples
@cindex simple examples
@cindex examples, simple
Now we show and explain several sample programs written using Bison: a
reverse polish notation calculator, an algebraic (infix) notation
calculator --- later extended to track ``locations'' ---
and a multi-function calculator. All
produce usable, though limited, interactive desk-top calculators.
These examples are simple, but Bison grammars for real programming
languages are written the same way. You can copy these examples into a
source file to try them.
@menu
* RPN Calc:: Reverse polish notation calculator;
a first example with no operator precedence.
* Infix Calc:: Infix (algebraic) notation calculator.
Operator precedence is introduced.
* Simple Error Recovery:: Continuing after syntax errors.
* Location Tracking Calc:: Demonstrating the use of @@@var{n} and @@$.
* Multi-function Calc:: Calculator with memory and trig functions.
It uses multiple data-types for semantic values.
* Exercises:: Ideas for improving the multi-function calculator.
@end menu
@node RPN Calc
@section Reverse Polish Notation Calculator
@cindex reverse polish notation
@cindex polish notation calculator
@cindex @code{rpcalc}
@cindex calculator, simple
The first example is that of a simple double-precision @dfn{reverse polish
notation} calculator (a calculator using postfix operators). This example
provides a good starting point, since operator precedence is not an issue.
The second example will illustrate how operator precedence is handled.
The source code for this calculator is named @file{rpcalc.y}. The
@samp{.y} extension is a convention used for Bison grammar files.
@menu
* Rpcalc Declarations:: Prologue (declarations) for rpcalc.
* Rpcalc Rules:: Grammar Rules for rpcalc, with explanation.
* Rpcalc Lexer:: The lexical analyzer.
* Rpcalc Main:: The controlling function.
* Rpcalc Error:: The error reporting function.
* Rpcalc Generate:: Running Bison on the grammar file.
* Rpcalc Compile:: Run the C compiler on the output code.
@end menu
@node Rpcalc Declarations
@subsection Declarations for @code{rpcalc}
Here are the C and Bison declarations for the reverse polish notation
calculator. As in C, comments are placed between @samp{/*@dots{}*/}.
@example
/* Reverse polish notation calculator. */
%@{
#define YYSTYPE double
#include <math.h>
int yylex (void);
void yyerror (char const *);
%@}
%token NUM
%% /* Grammar rules and actions follow. */
@end example
The declarations section (@pxref{Prologue, , The prologue}) contains two
preprocessor directives and two forward declarations.
The @code{#define} directive defines the macro @code{YYSTYPE}, thus
specifying the C data type for semantic values of both tokens and
groupings (@pxref{Value Type, ,Data Types of Semantic Values}). The
Bison parser will use whatever type @code{YYSTYPE} is defined as; if you
don't define it, @code{int} is the default. Because we specify
@code{double}, each token and each expression has an associated value,
which is a floating point number.
The @code{#include} directive is used to declare the exponentiation
function @code{pow}.
The forward declarations for @code{yylex} and @code{yyerror} are
needed because the C language requires that functions be declared
before they are used. These functions will be defined in the
epilogue, but the parser calls them so they must be declared in the
prologue.
The second section, Bison declarations, provides information to Bison
about the token types (@pxref{Bison Declarations, ,The Bison
Declarations Section}). Each terminal symbol that is not a
single-character literal must be declared here. (Single-character
literals normally don't need to be declared.) In this example, all the
arithmetic operators are designated by single-character literals, so the
only terminal symbol that needs to be declared is @code{NUM}, the token
type for numeric constants.
@node Rpcalc Rules
@subsection Grammar Rules for @code{rpcalc}
Here are the grammar rules for the reverse polish notation calculator.
@example
@group
input:
/* empty */
| input line
;
@end group
@group
line:
'\n'
| exp '\n' @{ printf ("%.10g\n", $1); @}
;
@end group
@group
exp:
NUM @{ $$ = $1; @}
| exp exp '+' @{ $$ = $1 + $2; @}
| exp exp '-' @{ $$ = $1 - $2; @}
| exp exp '*' @{ $$ = $1 * $2; @}
| exp exp '/' @{ $$ = $1 / $2; @}
| exp exp '^' @{ $$ = pow ($1, $2); @} /* Exponentiation */
| exp 'n' @{ $$ = -$1; @} /* Unary minus */
;
@end group
%%
@end example
The groupings of the rpcalc ``language'' defined here are the expression
(given the name @code{exp}), the line of input (@code{line}), and the
complete input transcript (@code{input}). Each of these nonterminal
symbols has several alternate rules, joined by the vertical bar @samp{|}
which is read as ``or''. The following sections explain what these rules
mean.
The semantics of the language is determined by the actions taken when a
grouping is recognized. The actions are the C code that appears inside
braces. @xref{Actions}.
You must specify these actions in C, but Bison provides the means for
passing semantic values between the rules. In each action, the
pseudo-variable @code{$$} stands for the semantic value for the grouping
that the rule is going to construct. Assigning a value to @code{$$} is the
main job of most actions. The semantic values of the components of the
rule are referred to as @code{$1}, @code{$2}, and so on.
@menu
* Rpcalc Input::
* Rpcalc Line::
* Rpcalc Expr::
@end menu
@node Rpcalc Input
@subsubsection Explanation of @code{input}
Consider the definition of @code{input}:
@example
input:
/* empty */
| input line
;
@end example
This definition reads as follows: ``A complete input is either an empty
string, or a complete input followed by an input line''. Notice that
``complete input'' is defined in terms of itself. This definition is said
to be @dfn{left recursive} since @code{input} appears always as the
leftmost symbol in the sequence. @xref{Recursion, ,Recursive Rules}.
The first alternative is empty because there are no symbols between the
colon and the first @samp{|}; this means that @code{input} can match an
empty string of input (no tokens). We write the rules this way because it
is legitimate to type @kbd{Ctrl-d} right after you start the calculator.
It's conventional to put an empty alternative first and write the comment
@samp{/* empty */} in it.
The second alternate rule (@code{input line}) handles all nontrivial input.
It means, ``After reading any number of lines, read one more line if
possible.'' The left recursion makes this rule into a loop. Since the
first alternative matches empty input, the loop can be executed zero or
more times.
The parser function @code{yyparse} continues to process input until a
grammatical error is seen or the lexical analyzer says there are no more
input tokens; we will arrange for the latter to happen at end-of-input.
@node Rpcalc Line
@subsubsection Explanation of @code{line}
Now consider the definition of @code{line}:
@example
line:
'\n'
| exp '\n' @{ printf ("%.10g\n", $1); @}
;
@end example
The first alternative is a token which is a newline character; this means
that rpcalc accepts a blank line (and ignores it, since there is no
action). The second alternative is an expression followed by a newline.
This is the alternative that makes rpcalc useful. The semantic value of
the @code{exp} grouping is the value of @code{$1} because the @code{exp} in
question is the first symbol in the alternative. The action prints this
value, which is the result of the computation the user asked for.
This action is unusual because it does not assign a value to @code{$$}. As
a consequence, the semantic value associated with the @code{line} is
uninitialized (its value will be unpredictable). This would be a bug if
that value were ever used, but we don't use it: once rpcalc has printed the
value of the user's input line, that value is no longer needed.
@node Rpcalc Expr
@subsubsection Explanation of @code{exp}
The @code{exp} grouping has several rules, one for each kind of expression.
The first rule handles the simplest expressions: those that are just numbers.
The second handles an addition-expression, which looks like two expressions
followed by a plus-sign. The third handles subtraction, and so on.
@example
exp:
NUM
| exp exp '+' @{ $$ = $1 + $2; @}
| exp exp '-' @{ $$ = $1 - $2; @}
@dots{}
;
@end example
We have used @samp{|} to join all the rules for @code{exp}, but we could
equally well have written them separately:
@example
exp: NUM ;
exp: exp exp '+' @{ $$ = $1 + $2; @};
exp: exp exp '-' @{ $$ = $1 - $2; @};
@dots{}
@end example
Most of the rules have actions that compute the value of the expression in
terms of the value of its parts. For example, in the rule for addition,
@code{$1} refers to the first component @code{exp} and @code{$2} refers to
the second one. The third component, @code{'+'}, has no meaningful
associated semantic value, but if it had one you could refer to it as
@code{$3}. When @code{yyparse} recognizes a sum expression using this
rule, the sum of the two subexpressions' values is produced as the value of
the entire expression. @xref{Actions}.
You don't have to give an action for every rule. When a rule has no
action, Bison by default copies the value of @code{$1} into @code{$$}.
This is what happens in the first rule (the one that uses @code{NUM}).
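In other words, writing
@example
exp: NUM ;
@end example
@noindent
with no action behaves exactly like the explicit form used in the full
grammar above:
@example
exp: NUM  @{ $$ = $1; @};
@end example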
The formatting shown here is the recommended convention, but Bison does
not require it. You can add or change white space as much as you wish.
For example, this:
@example
exp: NUM | exp exp '+' @{$$ = $1 + $2; @} | @dots{} ;
@end example
@noindent
means the same thing as this:
@example
exp:
NUM
| exp exp '+' @{ $$ = $1 + $2; @}
| @dots{}
;
@end example
@noindent
The latter, however, is much more readable.
@node Rpcalc Lexer
@subsection The @code{rpcalc} Lexical Analyzer
@cindex writing a lexical analyzer
@cindex lexical analyzer, writing
The lexical analyzer's job is low-level parsing: converting characters
or sequences of characters into tokens. The Bison parser gets its
tokens by calling the lexical analyzer. @xref{Lexical, ,The Lexical
Analyzer Function @code{yylex}}.
Only a simple lexical analyzer is needed for the RPN
calculator. This
lexical analyzer skips blanks and tabs, then reads in numbers as
@code{double} and returns them as @code{NUM} tokens. Any other character
that isn't part of a number is a separate token. Note that the token-code
for such a single-character token is the character itself.
The return value of the lexical analyzer function is a numeric code which
represents a token type. The same text used in Bison rules to stand for
this token type is also a C expression for the numeric code for the type.
This works in two ways. If the token type is a character literal, then its
numeric code is that of the character; you can use the same
character literal in the lexical analyzer to express the number. If the
token type is an identifier, that identifier is defined by Bison as a C
macro whose definition is the appropriate number. In this example,
therefore, @code{NUM} becomes a macro for @code{yylex} to use.
The semantic value of the token (if it has one) is stored into the
global variable @code{yylval}, which is where the Bison parser will look
for it. (The C data type of @code{yylval} is @code{YYSTYPE}, which was
defined at the beginning of the grammar; @pxref{Rpcalc Declarations,
,Declarations for @code{rpcalc}}.)
A token type code of zero is returned if the end-of-input is encountered.
(Bison recognizes any nonpositive value as indicating end-of-input.)
Here is the code for the lexical analyzer:
@example
@group
/* The lexical analyzer returns a double floating point
number on the stack and the token NUM, or the numeric code
of the character read if not a number. It skips all blanks
and tabs, and returns 0 for end-of-input. */
#include <ctype.h>
@end group
@group
int
yylex (void)
@{
int c;
/* Skip white space. */
while ((c = getchar ()) == ' ' || c == '\t')
continue;
@end group
@group
/* Process numbers. */
if (c == '.' || isdigit (c))
@{
ungetc (c, stdin);
scanf ("%lf", &yylval);
return NUM;
@}
@end group
@group
/* Return end-of-input. */
if (c == EOF)
return 0;
/* Return a single char. */
return c;
@}
@end group
@end example
@node Rpcalc Main
@subsection The Controlling Function
@cindex controlling function
@cindex main function in simple example
In keeping with the spirit of this example, the controlling function is
kept to the bare minimum. The only requirement is that it call
@code{yyparse} to start the process of parsing.
@example
@group
int
main (void)
@{
return yyparse ();
@}
@end group
@end example
@node Rpcalc Error
@subsection The Error Reporting Routine
@cindex error reporting routine
When @code{yyparse} detects a syntax error, it calls the error reporting
function @code{yyerror} to print an error message (usually but not
always @code{"syntax error"}). It is up to the programmer to supply
@code{yyerror} (@pxref{Interface, ,Parser C-Language Interface}), so
here is the definition we will use:
@example
@group
#include <stdio.h>
@end group
@group
/* Called by yyparse on error. */
void
yyerror (char const *s)
@{
fprintf (stderr, "%s\n", s);
@}
@end group
@end example
After @code{yyerror} returns, the Bison parser may recover from the error
and continue parsing if the grammar contains a suitable error rule
(@pxref{Error Recovery}). Otherwise, @code{yyparse} returns nonzero. We
have not written any error rules in this example, so any invalid input will
cause the calculator program to exit. This is not clean behavior for a
real calculator, but it is adequate for the first example.
@node Rpcalc Generate
@subsection Running Bison to Make the Parser
@cindex running Bison (introduction)
Before running Bison to produce a parser, we need to decide how to
arrange all the source code in one or more source files. For such a
simple example, the easiest thing is to put everything in one file,
the grammar file. The definitions of @code{yylex}, @code{yyerror} and
@code{main} go at the end, in the epilogue of the grammar file
(@pxref{Grammar Layout, ,The Overall Layout of a Bison Grammar}).
For a large project, you would probably have several source files, and use
@code{make} to arrange to recompile them.
With all the source in the grammar file, you use the following command
to convert it into a parser implementation file:
@example
bison @var{file}.y
@end example
@noindent
In this example, the grammar file is called @file{rpcalc.y} (for
``Reverse Polish @sc{calc}ulator''). Bison produces a parser
implementation file named @file{@var{file}.tab.c}, removing the
@samp{.y} from the grammar file name. The parser implementation file
contains the source code for @code{yyparse}. The additional functions
in the grammar file (@code{yylex}, @code{yyerror} and @code{main}) are
copied verbatim to the parser implementation file.
@node Rpcalc Compile
@subsection Compiling the Parser Implementation File
@cindex compiling the parser
Here is how to compile and run the parser implementation file:
@example
@group
# @r{List files in current directory.}
$ @kbd{ls}
rpcalc.tab.c rpcalc.y
@end group
@group
# @r{Compile the Bison parser.}
# @r{@samp{-lm} tells compiler to search math library for @code{pow}.}
$ @kbd{cc -lm -o rpcalc rpcalc.tab.c}
@end group
@group
# @r{List files again.}
$ @kbd{ls}
rpcalc rpcalc.tab.c rpcalc.y
@end group
@end example
The file @file{rpcalc} now contains the executable code. Here is an
example session using @code{rpcalc}.
@example
$ @kbd{rpcalc}
@kbd{4 9 +}
13
@kbd{3 7 + 3 4 5 *+-}
-13
@kbd{3 7 + 3 4 5 * + - n} @r{Note the unary minus, @samp{n}}
13
@kbd{5 6 / 4 n +}
-3.166666667
@kbd{3 4 ^} @r{Exponentiation}
81
@kbd{^D} @r{End-of-file indicator}
$
@end example
@node Infix Calc
@section Infix Notation Calculator: @code{calc}
@cindex infix notation calculator
@cindex @code{calc}
@cindex calculator, infix notation
We now modify rpcalc to handle infix operators instead of postfix. Infix
notation involves the concept of operator precedence and the need for
parentheses nested to arbitrary depth. Here is the Bison code for
@file{calc.y}, an infix desk-top calculator.
@example
/* Infix notation calculator. */
@group
%@{
#define YYSTYPE double
#include <math.h>
#include <stdio.h>
int yylex (void);
void yyerror (char const *);
%@}
@end group
@group
/* Bison declarations. */
%token NUM
%left '-' '+'
%left '*' '/'
%left NEG /* negation--unary minus */
%right '^' /* exponentiation */
@end group
%% /* The grammar follows. */
@group
input:
/* empty */
| input line
;
@end group
@group
line:
'\n'
| exp '\n' @{ printf ("\t%.10g\n", $1); @}
;
@end group
@group
exp:
NUM @{ $$ = $1; @}
| exp '+' exp @{ $$ = $1 + $3; @}
| exp '-' exp @{ $$ = $1 - $3; @}
| exp '*' exp @{ $$ = $1 * $3; @}
| exp '/' exp @{ $$ = $1 / $3; @}
| '-' exp %prec NEG @{ $$ = -$2; @}
| exp '^' exp @{ $$ = pow ($1, $3); @}
| '(' exp ')' @{ $$ = $2; @}
;
@end group
%%
@end example
@noindent
The functions @code{yylex}, @code{yyerror} and @code{main} can be the
same as before.
There are two important new features shown in this code.
In the second section (Bison declarations), @code{%left} declares token
types and says they are left-associative operators. The declarations
@code{%left} and @code{%right} (right associativity) take the place of
@code{%token} which is used to declare a token type name without
associativity. (These tokens are single-character literals, which
ordinarily don't need to be declared. We declare them here to specify
the associativity.)
Operator precedence is determined by the line ordering of the
declarations; the higher the line number of the declaration (lower on
the page or screen), the higher the precedence. Hence, exponentiation
has the highest precedence, unary minus (@code{NEG}) is next, followed
by @samp{*} and @samp{/}, and so on. @xref{Precedence, ,Operator
Precedence}.
The other important new feature is the @code{%prec} in the grammar
section for the unary minus operator. The @code{%prec} simply instructs
Bison that the rule @samp{| '-' exp} has the same precedence as
@code{NEG}---in this case the next-to-highest. @xref{Contextual
Precedence, ,Context-Dependent Precedence}.
Here is a sample run of @file{calc.y}:
@need 500
@example
$ @kbd{calc}
@kbd{4 + 4.5 - (34/(8*3+-3))}
6.880952381
@kbd{-56 + 2}
-54
@kbd{3 ^ 2}
9
@end example
@node Simple Error Recovery
@section Simple Error Recovery
@cindex error recovery, simple
Up to this point, this manual has not addressed the issue of @dfn{error
recovery}---how to continue parsing after the parser detects a syntax
error. All we have handled is error reporting with @code{yyerror}.
Recall that by default @code{yyparse} returns after calling
@code{yyerror}. This means that an erroneous input line causes the
calculator program to exit. Now we show how to rectify this deficiency.
The Bison language itself includes the reserved word @code{error}, which
may be included in the grammar rules. In the example below it has
been added to one of the alternatives for @code{line}:
@example
@group
line:
'\n'
| exp '\n' @{ printf ("\t%.10g\n", $1); @}
| error '\n' @{ yyerrok; @}
;
@end group
@end example
This addition to the grammar allows for simple error recovery in the
event of a syntax error. If an expression that cannot be evaluated is
read, the error will be recognized by the third rule for @code{line},
and parsing will continue. (The @code{yyerror} function is still called
upon to print its message as well.) The action executes the statement
@code{yyerrok}, a macro defined automatically by Bison; its meaning is
that error recovery is complete (@pxref{Error Recovery}). Note the
difference between @code{yyerrok} and @code{yyerror}; neither one is a
misprint.
This form of error recovery deals with syntax errors. There are other
kinds of errors; for example, division by zero, which raises an exception
signal that is normally fatal. A real calculator program must handle this
signal and use @code{longjmp} to return to @code{main} and resume parsing
input lines; it would also have to discard the rest of the current line of
input. We won't discuss this issue further because it is not specific to
Bison programs.
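By way of illustration, such handling might look roughly like the
following replacement for @code{main}; the names @code{calc_env} and
@code{fpe_handler} are invented for this sketch, and whether division by
zero actually delivers @code{SIGFPE} depends on the system and on whether
the arithmetic is integer or floating point:
@example
@group
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
int yyparse (void);
static jmp_buf calc_env;
@end group
@group
static void
fpe_handler (int sig)
@{
  signal (sig, fpe_handler);   /* Re-install the handler. */
  longjmp (calc_env, 1);       /* Abandon the current computation. */
@}
@end group
@group
int
main (void)
@{
  signal (SIGFPE, fpe_handler);
  if (setjmp (calc_env))
    @{
      /* Control returns here after a division by zero: discard
         the rest of the offending line, then parse again. */
      int c;
      while ((c = getchar ()) != '\n' && c != EOF)
        continue;
    @}
  return yyparse ();
@}
@end group
@end example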
@node Location Tracking Calc
@section Location Tracking Calculator: @code{ltcalc}
@cindex location tracking calculator
@cindex @code{ltcalc}
@cindex calculator, location tracking
This example extends the infix notation calculator with location
tracking. This feature will be used to improve the error messages. For
the sake of clarity, this example is a simple integer calculator, since
most of the work needed to use locations will be done in the lexical
analyzer.
@menu
* Ltcalc Declarations:: Bison and C declarations for ltcalc.
* Ltcalc Rules:: Grammar rules for ltcalc, with explanations.
* Ltcalc Lexer:: The lexical analyzer.
@end menu
@node Ltcalc Declarations
@subsection Declarations for @code{ltcalc}
The C and Bison declarations for the location tracking calculator are
the same as the declarations for the infix notation calculator.
@example
/* Location tracking calculator. */
%@{
#define YYSTYPE int
#include <math.h>
int yylex (void);
void yyerror (char const *);
%@}
/* Bison declarations. */
%token NUM
%left '-' '+'
%left '*' '/'
%left NEG
%right '^'
%% /* The grammar follows. */
@end example
@noindent
Note there are no declarations specific to locations. Defining a data
type for storing locations is not needed: we will use the type provided
by default (@pxref{Location Type, ,Data Types of Locations}), which is a
four-member structure with the following integer fields:
@code{first_line}, @code{first_column}, @code{last_line} and
@code{last_column}. By convention, and in accordance with the GNU
Coding Standards and common practice, the line and column counts both
start at 1.
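Concretely, the default type is essentially equivalent to the following
C structure, which the generated parser defines for you:
@example
@group
typedef struct YYLTYPE
@{
  int first_line;
  int first_column;
  int last_line;
  int last_column;
@} YYLTYPE;
@end group
@end example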
@node Ltcalc Rules
@subsection Grammar Rules for @code{ltcalc}
Whether or not you handle locations has no effect on the syntax of your
language. Therefore, the grammar rules for this example will be very close
to those of the previous example: we will only modify them to benefit
from the new information.
Here, we will use locations to report divisions by zero, and locate the
wrong expressions or subexpressions.
@example
@group
input:
/* empty */
| input line
;
@end group
@group
line:
'\n'
| exp '\n' @{ printf ("%d\n", $1); @}
;
@end group
@group
exp:
NUM @{ $$ = $1; @}
| exp '+' exp @{ $$ = $1 + $3; @}
| exp '-' exp @{ $$ = $1 - $3; @}
| exp '*' exp @{ $$ = $1 * $3; @}
@end group
@group
| exp '/' exp
@{
if ($3)
$$ = $1 / $3;
else
@{
$$ = 1;
fprintf (stderr, "%d.%d-%d.%d: division by zero",
@@3.first_line, @@3.first_column,
@@3.last_line, @@3.last_column);
@}
@}
@end group
@group
| '-' exp %prec NEG @{ $$ = -$2; @}
| exp '^' exp @{ $$ = pow ($1, $3); @}
| '(' exp ')' @{ $$ = $2; @}
@end group
@end example
This code shows how to reach locations inside of semantic actions, by
using the pseudo-variables @code{@@@var{n}} for rule components, and the
pseudo-variable @code{@@$} for groupings.
We don't need to assign a value to @code{@@$}: the generated parser does it
automatically. By default, before executing the C code of each action,
@code{@@$} is set to range from the beginning of @code{@@1} to the end
of @code{@@@var{n}}, for a rule with @var{n} components. This behavior
can be redefined (@pxref{Location Default Action, , Default Action for
Locations}), and for very specific rules, @code{@@$} can be computed by
hand.
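In other words, for a rule with @var{n} components, the default behaves
roughly as if the generated parser ran this C code just before your
action:
@example
@group
@@$.first_line   = @@1.first_line;
@@$.first_column = @@1.first_column;
@@$.last_line    = @@@var{n}.last_line;
@@$.last_column  = @@@var{n}.last_column;
@end group
@end example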
@node Ltcalc Lexer
@subsection The @code{ltcalc} Lexical Analyzer
Until now, we relied on Bison's defaults to enable location
tracking. The next step is to rewrite the lexical analyzer, and make it
able to feed the parser with the token locations, as it already does for
semantic values.
To this end, we must take into account every single character of the
input text, so that the computed locations are neither fuzzy nor wrong:
@example
@group
int
yylex (void)
@{
int c;
@end group
@group
/* Skip white space. */
while ((c = getchar ()) == ' ' || c == '\t')
++yylloc.last_column;
@end group
@group
/* Step. */
yylloc.first_line = yylloc.last_line;
yylloc.first_column = yylloc.last_column;
@end group
@group
/* Process numbers. */
if (isdigit (c))
@{
yylval = c - '0';
++yylloc.last_column;
while (isdigit (c = getchar ()))
@{
++yylloc.last_column;
yylval = yylval * 10 + c - '0';
@}
ungetc (c, stdin);
return NUM;
@}
@end group
/* Return end-of-input. */
if (c == EOF)
return 0;
@group
/* Return a single char, and update location. */
if (c == '\n')
@{
++yylloc.last_line;
yylloc.last_column = 0;
@}
else
++yylloc.last_column;
return c;
@}
@end group
@end example
Basically, the lexical analyzer performs the same processing as before:
it skips blanks and tabs, and reads numbers or single-character tokens.
In addition, it updates @code{yylloc}, the global variable (of type
@code{YYLTYPE}) containing the token's location.
Now, each time this function returns a token, the parser has its number
as well as its semantic value, and its location in the text. The last
needed change is to initialize @code{yylloc}, for example in the
controlling function:
@example
@group
int
main (void)
@{
yylloc.first_line = yylloc.last_line = 1;
yylloc.first_column = yylloc.last_column = 0;
return yyparse ();
@}
@end group
@end example
Remember that computing locations is not a matter of syntax. Every
character must be associated with a location update, whether it appears in
valid input, in comments, in literal strings, and so on.
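For example, if @code{ltcalc} were extended to accept comments introduced
by @samp{#} and running to the end of the line, the discarded characters
would still have to be counted. A helper along these lines, invented here
purely for illustration, could be called from @code{yylex}:
@example
@group
/* Skip a '#' comment, keeping yylloc up to date.  Returns the
   first character after the comment ('\n' or EOF), which yylex
   then handles as usual. */
static int
skip_comment (void)
@{
  int c;
  ++yylloc.last_column;     /* account for the '#' itself */
  while ((c = getchar ()) != '\n' && c != EOF)
    ++yylloc.last_column;
  return c;
@}
@end group
@end example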
@node Multi-function Calc
@section Multi-Function Calculator: @code{mfcalc}
@cindex multi-function calculator
@cindex @code{mfcalc}
@cindex calculator, multi-function
Now that the basics of Bison have been discussed, it is time to move on to
a more advanced problem. The above calculators provided only five
functions, @samp{+}, @samp{-}, @samp{*}, @samp{/} and @samp{^}. It would
be nice to have a calculator that provides other mathematical functions such
as @code{sin}, @code{cos}, etc.
It is easy to add new operators to the infix calculator as long as they are
only single-character literals. The lexical analyzer @code{yylex} passes
back all nonnumeric characters as tokens, so new grammar rules suffice for
adding a new operator. But we want something more flexible: built-in
functions whose syntax has this form:
@example
@var{function_name} (@var{argument})
@end example
@noindent
At the same time, we will add memory to the calculator, by allowing you
to create named variables, store values in them, and use them later.
Here is a sample session with the multi-function calculator:
@example
$ @kbd{mfcalc}
@kbd{pi = 3.141592653589}
3.1415926536
@kbd{sin(pi)}
0.0000000000
@kbd{alpha = beta1 = 2.3}
2.3000000000
@kbd{alpha}
2.3000000000
@kbd{ln(alpha)}
0.8329091229
@kbd{exp(ln(beta1))}
2.3000000000
$
@end example
Note that multiple assignment and nested function calls are permitted.
@menu
* Mfcalc Declarations:: Bison declarations for multi-function calculator.
* Mfcalc Rules:: Grammar rules for the calculator.
* Mfcalc Symbol Table:: Symbol table management subroutines.
@end menu
@node Mfcalc Declarations
@subsection Declarations for @code{mfcalc}
Here are the C and Bison declarations for the multi-function calculator.
@comment file: mfcalc.y: 1
@example
@group
%@{
#include <math.h> /* For math functions, cos(), sin(), etc. */
#include "calc.h" /* Contains definition of `symrec'. */
int yylex (void);
void yyerror (char const *);
%@}
@end group
@group
%union @{
double val; /* For returning numbers. */
symrec *tptr; /* For returning symbol-table pointers. */
@}
@end group
%token <val> NUM /* Simple double precision number. */
%token <tptr> VAR FNCT /* Variable and function. */
%type <val> exp
@group
%right '='
%left '-' '+'
%left '*' '/'
%left NEG /* negation--unary minus */
%right '^' /* exponentiation */
@end group
@end example
The above grammar introduces only two new features of the Bison language.
These features allow semantic values to have various data types
(@pxref{Multiple Types, ,More Than One Value Type}).
The @code{%union} declaration specifies the entire list of possible types;
it takes the place of defining @code{YYSTYPE}. The allowable types are now
double-floats (for @code{exp} and @code{NUM}) and pointers to entries in
the symbol table. @xref{Union Decl, ,The Collection of Value Types}.
Since values can now have various types, it is necessary to associate a
type with each grammar symbol whose semantic value is used. These symbols
are @code{NUM}, @code{VAR}, @code{FNCT}, and @code{exp}. Their
declarations are augmented with information about their data type (placed
between angle brackets).
The Bison construct @code{%type} is used for declaring nonterminal
symbols, just as @code{%token} is used for declaring token types. We
have not used @code{%type} before because nonterminal symbols are
normally declared implicitly by the rules that define them. But
@code{exp} must be declared explicitly so we can specify its value type.
@xref{Type Decl, ,Nonterminal Symbols}.
@node Mfcalc Rules
@subsection Grammar Rules for @code{mfcalc}
Here are the grammar rules for the multi-function calculator.
Most of them are copied directly from @code{calc}; three rules,
those which mention @code{VAR} or @code{FNCT}, are new.
@comment file: mfcalc.y: 3
@example
%% /* The grammar follows. */
@group
input:
/* empty */
| input line
;
@end group
@group
line:
'\n'
| exp '\n' @{ printf ("%.10g\n", $1); @}
| error '\n' @{ yyerrok; @}
;
@end group
@group
exp:
NUM @{ $$ = $1; @}
| VAR @{ $$ = $1->value.var; @}
| VAR '=' exp @{ $$ = $3; $1->value.var = $3; @}
| FNCT '(' exp ')' @{ $$ = (*($1->value.fnctptr))($3); @}
| exp '+' exp @{ $$ = $1 + $3; @}
| exp '-' exp @{ $$ = $1 - $3; @}
| exp '*' exp @{ $$ = $1 * $3; @}
| exp '/' exp @{ $$ = $1 / $3; @}
| '-' exp %prec NEG @{ $$ = -$2; @}
| exp '^' exp @{ $$ = pow ($1, $3); @}
| '(' exp ')' @{ $$ = $2; @}
;
@end group
/* End of grammar. */
%%
@end example
@node Mfcalc Symbol Table
@subsection The @code{mfcalc} Symbol Table
@cindex symbol table example
The multi-function calculator requires a symbol table to keep track of the
names and meanings of variables and functions. This doesn't affect the
grammar rules (except for the actions) or the Bison declarations, but it
requires some additional C functions for support.
The symbol table itself consists of a linked list of records. Its
definition, which is kept in the header @file{calc.h}, is as follows. It
provides for either functions or variables to be placed in the table.
@comment file: calc.h
@example
@group
/* Function type. */
typedef double (*func_t) (double);
@end group
@group
/* Data type for links in the chain of symbols. */
struct symrec
@{
char *name; /* name of symbol */
int type; /* type of symbol: either VAR or FNCT */
union
@{
double var; /* value of a VAR */
func_t fnctptr; /* value of a FNCT */
@} value;
struct symrec *next; /* link field */
@};
@end group
@group
typedef struct symrec symrec;
/* The symbol table: a chain of `struct symrec'. */
extern symrec *sym_table;
symrec *putsym (char const *, int);
symrec *getsym (char const *);
@end group
@end example
The symbol table is filled in by @code{init_table}, a function that
installs the built-in arithmetic functions; the new version of
@code{main}, shown at the end of this section, calls it before parsing.
Here are the error reporting function, the function table, and
@code{init_table}:
@comment file: mfcalc.y: 3
@example
#include <stdio.h>
@group
/* Called by yyparse on error. */
void
yyerror (char const *s)
@{
fprintf (stderr, "%s\n", s);
@}
@end group
@group
struct init
@{
char const *fname;
double (*fnct) (double);
@};
@end group
@group
struct init const arith_fncts[] =
@{
"sin", sin,
"cos", cos,
"atan", atan,
"ln", log,
"exp", exp,
"sqrt", sqrt,
0, 0
@};
@end group
@group
/* The symbol table: a chain of `struct symrec'. */
symrec *sym_table;
@end group
@group
/* Put arithmetic functions in table. */
void
init_table (void)
@{
int i;
for (i = 0; arith_fncts[i].fname != 0; i++)
@{
symrec *ptr = putsym (arith_fncts[i].fname, FNCT);
ptr->value.fnctptr = arith_fncts[i].fnct;
@}
@}
@end group
@end example
By simply editing the initialization list and adding the necessary include
files, you can add additional functions to the calculator.
Two important functions allow look-up and installation of symbols in the
symbol table. The function @code{putsym} is passed a name and the type
(@code{VAR} or @code{FNCT}) of the object to be installed. The object is
linked to the front of the list, and a pointer to the object is returned.
The function @code{getsym} is passed the name of the symbol to look up. If
found, a pointer to that symbol is returned; otherwise zero is returned.
@comment file: mfcalc.y: 3
@example
#include <stdlib.h> /* malloc. */
#include <string.h> /* strlen. */
@group
symrec *
putsym (char const *sym_name, int sym_type)
@{
symrec *ptr = (symrec *) malloc (sizeof (symrec));
ptr->name = (char *) malloc (strlen (sym_name) + 1);
strcpy (ptr->name,sym_name);
ptr->type = sym_type;
ptr->value.var = 0; /* Set value to 0 even if fctn. */
ptr->next = (struct symrec *)sym_table;
sym_table = ptr;
return ptr;
@}
@end group
@group
symrec *
getsym (char const *sym_name)
@{
symrec *ptr;
for (ptr = sym_table; ptr != (symrec *) 0;
ptr = (symrec *)ptr->next)
if (strcmp (ptr->name,sym_name) == 0)
return ptr;
return 0;
@}
@end group
@end example
The function @code{yylex} must now recognize variables, numeric values, and
the single-character arithmetic operators. Strings of alphanumeric
characters with a leading letter are recognized as either variables or
functions depending on what the symbol table says about them.
The string is passed to @code{getsym} for lookup in the symbol table. If
the name appears in the table, a pointer to its location and its type
(@code{VAR} or @code{FNCT}) is returned to @code{yyparse}. If it is not
already in the table, then it is installed as a @code{VAR} using
@code{putsym}. Again, a pointer and its type (which must be @code{VAR}) are
returned to @code{yyparse}.
No change is needed in the handling of numeric values and arithmetic
operators in @code{yylex}.
@comment file: mfcalc.y: 3
@example
@group
#include <ctype.h>
@end group
@group
int
yylex (void)
@{
int c;
/* Ignore white space, get first nonwhite character. */
while ((c = getchar ()) == ' ' || c == '\t')
continue;
if (c == EOF)
return 0;
@end group
@group
/* Char starts a number => parse the number. */
if (c == '.' || isdigit (c))
@{
ungetc (c, stdin);
scanf ("%lf", &yylval.val);
return NUM;
@}
@end group
@group
/* Char starts an identifier => read the name. */
if (isalpha (c))
@{
/* Initially make the buffer long enough
for a 40-character symbol name. */
static size_t length = 40;
static char *symbuf = 0;
symrec *s;
int i;
@end group
if (!symbuf)
symbuf = (char *) malloc (length + 1);
i = 0;
do
@group
@{
/* If buffer is full, make it bigger. */
if (i == length)
@{
length *= 2;
symbuf = (char *) realloc (symbuf, length + 1);
@}
/* Add this character to the buffer. */
symbuf[i++] = c;
/* Get another character. */
c = getchar ();
@}
@end group
@group
while (isalnum (c));
ungetc (c, stdin);
symbuf[i] = '\0';
@end group
@group
s = getsym (symbuf);
if (s == 0)
s = putsym (symbuf, VAR);
yylval.tptr = s;
return s->type;
@}
/* Any other character is a token by itself. */
return c;
@}
@end group
@end example
Finally, here is the new version of @code{main}. It includes a call to
@code{init_table} and enables @code{yydebug} on user demand
(@pxref{Tracing, , Tracing Your Parser}, for details):
@comment file: mfcalc.y: 3
@example
@group
int
main (int argc, char const* argv[])
@{
int i;
/* Enable parse traces on option -p. */
for (i = 1; i < argc; ++i)
if (!strcmp(argv[i], "-p"))
yydebug = 1;
init_table ();
return yyparse ();
@}
@end group
@end example
This program is both powerful and flexible. You may easily add new
functions, and it is a simple job to modify this code to install
predefined variables such as @code{pi} or @code{e} as well.
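For instance, predefined variables could be installed by a helper along
these lines, called from @code{main} right after @code{init_table}; the
name @code{init_constants} is invented for this sketch:
@example
@group
/* Install a few predefined variables (sketch). */
void
init_constants (void)
@{
  symrec *ptr = putsym ("pi", VAR);
  ptr->value.var = 3.141592653589793;
  ptr = putsym ("e", VAR);
  ptr->value.var = 2.718281828459045;
@}
@end group
@end example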
@node Exercises
@section Exercises
@cindex exercises
@enumerate
@item
Add some new functions from @file{math.h} to the initialization list.
@item
Add another array that contains constants and their values. Then
modify @code{init_table} to add these constants to the symbol table.
It will be easiest to give the constants type @code{VAR}.
@item
Make the program report an error if the user refers to an
uninitialized variable in any way except to store a value in it.
@end enumerate
@node Grammar File
@chapter Bison Grammar Files
Bison takes as input a context-free grammar specification and produces a
C-language function that recognizes correct instances of the grammar.
The Bison grammar file conventionally has a name ending in @samp{.y}.
@xref{Invocation, ,Invoking Bison}.
@menu
* Grammar Outline:: Overall layout of the grammar file.
* Symbols:: Terminal and nonterminal symbols.
* Rules:: How to write grammar rules.
* Recursion:: Writing recursive rules.
* Semantics:: Semantic values and actions.
* Tracking Locations:: Locations and actions.
* Named References:: Using named references in actions.
* Declarations:: All kinds of Bison declarations are described here.
* Multiple Parsers:: Putting more than one Bison parser in one program.
@end menu
@node Grammar Outline
@section Outline of a Bison Grammar
@cindex comment
@findex // @dots{}
@findex /* @dots{} */
A Bison grammar file has four main sections, shown here with the
appropriate delimiters:
@example
%@{
@var{Prologue}
%@}
@var{Bison declarations}
%%
@var{Grammar rules}
%%
@var{Epilogue}
@end example
Comments enclosed in @samp{/* @dots{} */} may appear in any of the sections.
As a GNU extension, @samp{//} introduces a comment that continues until end
of line.
@menu
* Prologue:: Syntax and usage of the prologue.
* Prologue Alternatives:: Syntax and usage of alternatives to the prologue.
* Bison Declarations:: Syntax and usage of the Bison declarations section.
* Grammar Rules:: Syntax and usage of the grammar rules section.
* Epilogue:: Syntax and usage of the epilogue.
@end menu
@node Prologue
@subsection The prologue
@cindex declarations section
@cindex Prologue
@cindex declarations
The @var{Prologue} section contains macro definitions and declarations
of functions and variables that are used in the actions in the grammar
rules. These are copied to the beginning of the parser implementation
file so that they precede the definition of @code{yyparse}. You can
use @samp{#include} to get the declarations from a header file. If
you don't need any C declarations, you may omit the @samp{%@{} and
@samp{%@}} delimiters that bracket this section.
The @var{Prologue} section is terminated by the first occurrence
of @samp{%@}} that is outside a comment, a string literal, or a
character constant.
You may have more than one @var{Prologue} section, intermixed with the
@var{Bison declarations}. This allows you to have C and Bison
declarations that refer to each other. For example, the @code{%union}
declaration may use types defined in a header file, and you may wish to
prototype functions that take arguments of type @code{YYSTYPE}. This
can be done with two @var{Prologue} blocks, one before and one after the
@code{%union} declaration.
@example
%@{
#define _GNU_SOURCE
#include <stdio.h>
#include "ptypes.h"
%@}
%union @{
long int n;
tree t; /* @r{@code{tree} is defined in @file{ptypes.h}.} */
@}
%@{
static void print_token_value (FILE *, int, YYSTYPE);
#define YYPRINT(F, N, L) print_token_value (F, N, L)
%@}
@dots{}
@end example
When in doubt, it is usually safer to put prologue code before all
Bison declarations, rather than after. For example, any definitions
of feature test macros like @code{_GNU_SOURCE} or
@code{_POSIX_C_SOURCE} should appear before all Bison declarations, as
feature test macros can affect the behavior of Bison-generated
@code{#include} directives.
@node Prologue Alternatives
@subsection Prologue Alternatives
@cindex Prologue Alternatives
@findex %code
@findex %code requires
@findex %code provides
@findex %code top
The functionality of @var{Prologue} sections can often be subtle and
inflexible. As an alternative, Bison provides a @code{%code}
directive with an explicit qualifier field, which identifies the
purpose of the code and thus the location(s) where Bison should
generate it. For C/C++, the qualifier can be omitted for the default
location, or it can be one of @code{requires}, @code{provides},
@code{top}. @xref{%code Summary}.
Look again at the example of the previous section:
@example
%@{
#define _GNU_SOURCE
#include <stdio.h>
#include "ptypes.h"
%@}
%union @{
long int n;
tree t; /* @r{@code{tree} is defined in @file{ptypes.h}.} */
@}
%@{
static void print_token_value (FILE *, int, YYSTYPE);
#define YYPRINT(F, N, L) print_token_value (F, N, L)
%@}
@dots{}
@end example
@noindent
Notice that there are two @var{Prologue} sections here, but there's a
subtle distinction between their functionality. For example, if you
decide to override Bison's default definition for @code{YYLTYPE}, in
which @var{Prologue} section should you write your new definition?
You should write it in the first since Bison will insert that code
into the parser implementation file @emph{before} the default
@code{YYLTYPE} definition. In which @var{Prologue} section should you
prototype an internal function, @code{trace_token}, that accepts
@code{YYLTYPE} and @code{yytokentype} as arguments? You should
prototype it in the second since Bison will insert that code
@emph{after} the @code{YYLTYPE} and @code{yytokentype} definitions.
This distinction in functionality between the two @var{Prologue} sections is
established by the appearance of the @code{%union} between them.
This behavior raises a few questions.
First, why should the position of a @code{%union} affect definitions related to
@code{YYLTYPE} and @code{yytokentype}?
Second, what if there is no @code{%union}?
In that case, the second kind of @var{Prologue} section is not available.
This behavior is not intuitive.
To avoid this subtle @code{%union} dependency, rewrite the example using a
@code{%code top} and an unqualified @code{%code}.
Let's go ahead and add the new @code{YYLTYPE} definition and the
@code{trace_token} prototype at the same time:
@example
%code top @{
#define _GNU_SOURCE
#include <stdio.h>
/* WARNING: The following code really belongs
* in a `%code requires'; see below. */
#include "ptypes.h"
#define YYLTYPE YYLTYPE
typedef struct YYLTYPE
@{
int first_line;
int first_column;
int last_line;
int last_column;
char *filename;
@} YYLTYPE;
@}
%union @{
long int n;
tree t; /* @r{@code{tree} is defined in @file{ptypes.h}.} */
@}
%code @{
static void print_token_value (FILE *, int, YYSTYPE);
#define YYPRINT(F, N, L) print_token_value (F, N, L)
static void trace_token (enum yytokentype token, YYLTYPE loc);
@}
@dots{}
@end example
@noindent
In this way, @code{%code top} and the unqualified @code{%code} achieve the same
functionality as the two kinds of @var{Prologue} sections, but it's always
explicit which kind you intend.
Moreover, both kinds are always available even in the absence of @code{%union}.
The @code{%code top} block above logically contains two parts. The
first two lines before the warning need to appear near the top of the
parser implementation file. The first line after the warning is
required by @code{YYSTYPE} and thus also needs to appear in the parser
implementation file. However, if you've instructed Bison to generate
a parser header file (@pxref{Decl Summary, ,%defines}), you probably
want that line to appear before the @code{YYSTYPE} definition in that
header file as well. The @code{YYLTYPE} definition should also appear
in the parser header file to override the default @code{YYLTYPE}
definition there.
In other words, in the @code{%code top} block above, all but the first two
lines are dependency code required by the @code{YYSTYPE} and @code{YYLTYPE}
definitions.
Thus, they belong in one or more @code{%code requires}:
@example
@group
%code top @{
#define _GNU_SOURCE
#include <stdio.h>
@}
@end group
@group
%code requires @{
#include "ptypes.h"
@}
@end group
@group
%union @{
long int n;
tree t; /* @r{@code{tree} is defined in @file{ptypes.h}.} */
@}
@end group
@group
%code requires @{
#define YYLTYPE YYLTYPE
typedef struct YYLTYPE
@{
int first_line;
int first_column;
int last_line;
int last_column;
char *filename;
@} YYLTYPE;
@}
@end group
@group
%code @{
static void print_token_value (FILE *, int, YYSTYPE);
#define YYPRINT(F, N, L) print_token_value (F, N, L)
static void trace_token (enum yytokentype token, YYLTYPE loc);
@}
@end group
@dots{}
@end example
@noindent
Now Bison will insert @code{#include "ptypes.h"} and the new
@code{YYLTYPE} definition before the Bison-generated @code{YYSTYPE}
and @code{YYLTYPE} definitions in both the parser implementation file
and the parser header file. (By the same reasoning, @code{%code
requires} would also be the appropriate place to write your own
definition for @code{YYSTYPE}.)
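For example, a grammar that uses its own value type might write something
like this; the type @code{value_t} and the header @file{value.h} are
hypothetical:
@example
@group
%code requires @{
  #include "value.h"       /* hypothetical header defining value_t */
  #define YYSTYPE value_t
@}
@end group
@end example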
When you are writing dependency code for @code{YYSTYPE} and
@code{YYLTYPE}, you should prefer @code{%code requires} over
@code{%code top} regardless of whether you instruct Bison to generate
a parser header file. When you are writing code that you need Bison
to insert only into the parser implementation file and that has no
special need to appear at the top of that file, you should prefer the
unqualified @code{%code} over @code{%code top}. These practices will
make the purpose of each block of your code explicit to Bison and to
other developers reading your grammar file. Following these
practices, we expect the unqualified @code{%code} and @code{%code
requires} to be the most important of the four @var{Prologue}
alternatives.
At some point while developing your parser, you might decide to
provide @code{trace_token} to modules that are external to your
parser. Thus, you might wish for Bison to insert the prototype into
both the parser header file and the parser implementation file. Since
this function is not a dependency required by @code{YYSTYPE} or
@code{YYLTYPE}, it doesn't make sense to move its prototype to a
@code{%code requires}. More importantly, since it depends upon
@code{YYLTYPE} and @code{yytokentype}, @code{%code requires} is not
sufficient. Instead, move its prototype from the unqualified
@code{%code} to a @code{%code provides}:
@example
@group
%code top @{
#define _GNU_SOURCE
#include <stdio.h>
@}
@end group
@group
%code requires @{
#include "ptypes.h"
@}
@end group
@group
%union @{
long int n;
tree t; /* @r{@code{tree} is defined in @file{ptypes.h}.} */
@}
@end group
@group
%code requires @{
#define YYLTYPE YYLTYPE
typedef struct YYLTYPE
@{
int first_line;
int first_column;
int last_line;
int last_column;
char *filename;
@} YYLTYPE;
@}
@end group
@group
%code provides @{
void trace_token (enum yytokentype token, YYLTYPE loc);
@}
@end group
@group
%code @{
static void print_token_value (FILE *, int, YYSTYPE);
#define YYPRINT(F, N, L) print_token_value (F, N, L)
@}
@end group
@dots{}
@end example
@noindent
Bison will insert the @code{trace_token} prototype into both the
parser header file and the parser implementation file after the
definitions for @code{yytokentype}, @code{YYLTYPE}, and
@code{YYSTYPE}.
The above examples are careful to write directives in an order that
reflects the layout of the generated parser implementation and header
files: @code{%code top}, @code{%code requires}, @code{%code provides},
and then @code{%code}. While your grammar files may generally be
easier to read if you also follow this order, Bison does not require
it. Instead, Bison lets you choose an organization that makes sense
to you.
You may declare any of these directives multiple times in the grammar file.
In that case, Bison concatenates the contained code in declaration order.
This is the only way in which the position of one of these directives within
the grammar file affects its functionality.
The result of the previous two properties is greater flexibility in how you may
organize your grammar file.
For example, you may organize semantic-type-related directives by semantic
type:
@example
@group
%code requires @{ #include "type1.h" @}
%union @{ type1 field1; @}
%destructor @{ type1_free ($$); @} <field1>
%printer @{ type1_print (yyoutput, $$); @} <field1>
@end group
@group
%code requires @{ #include "type2.h" @}
%union @{ type2 field2; @}
%destructor @{ type2_free ($$); @} <field2>
%printer @{ type2_print (yyoutput, $$); @} <field2>
@end group
@end example
@noindent
You could even place each of the above directive groups in the rules section of
the grammar file next to the set of rules that uses the associated semantic
type.
(In the rules section, you must terminate each of those directives with a
semicolon.)
And you don't have to worry that some directive (like a @code{%union}) in the
definitions section is going to adversely affect their functionality in some
counter-intuitive manner just because it comes first.
Such an organization is not possible using @var{Prologue} sections.
This section has been concerned with explaining the advantages of the four
@var{Prologue} alternatives over the original Yacc @var{Prologue}.
However, in most cases when using these directives, you shouldn't need to
think about all the low-level ordering issues discussed here.
Instead, you should simply use these directives to label each block of your
code according to its purpose and let Bison handle the ordering.
@code{%code} is the most generic label.
Move code to @code{%code requires}, @code{%code provides}, or @code{%code top}
as needed.
@node Bison Declarations
@subsection The Bison Declarations Section
@cindex Bison declarations (introduction)
@cindex declarations, Bison (introduction)
The @var{Bison declarations} section contains declarations that define
terminal and nonterminal symbols, specify precedence, and so on.
In some simple grammars you may not need any declarations.
@xref{Declarations, ,Bison Declarations}.
@node Grammar Rules
@subsection The Grammar Rules Section
@cindex grammar rules section
@cindex rules section for grammar
The @dfn{grammar rules} section contains one or more Bison grammar
rules, and nothing else. @xref{Rules, ,Syntax of Grammar Rules}.
There must always be at least one grammar rule, and the first
@samp{%%} (which precedes the grammar rules) may never be omitted even
if it is the first thing in the file.
@node Epilogue
@subsection The epilogue
@cindex additional C code section
@cindex epilogue
@cindex C code, section for additional
The @var{Epilogue} is copied verbatim to the end of the parser
implementation file, just as the @var{Prologue} is copied to the
beginning. This is the most convenient place to put anything that you
want to have in the parser implementation file but which need not come
before the definition of @code{yyparse}. For example, the definitions
of @code{yylex} and @code{yyerror} often go here. Because C requires
functions to be declared before being used, you often need to declare
functions like @code{yylex} and @code{yyerror} in the Prologue, even
if you define them in the Epilogue. @xref{Interface, ,Parser
C-Language Interface}.
If the last section is empty, you may omit the @samp{%%} that separates it
from the grammar rules.
The Bison parser itself contains many macros and identifiers whose names
start with @samp{yy} or @samp{YY}, so it is a good idea to avoid using
any such names (except those documented in this manual) in the epilogue
of the grammar file.
@node Symbols
@section Symbols, Terminal and Nonterminal
@cindex nonterminal symbol
@cindex terminal symbol
@cindex token type
@cindex symbol
@dfn{Symbols} in Bison grammars represent the grammatical classifications
of the language.
A @dfn{terminal symbol} (also known as a @dfn{token type}) represents a
class of syntactically equivalent tokens. You use the symbol in grammar
rules to mean that a token in that class is allowed. The symbol is
represented in the Bison parser by a numeric code, and the @code{yylex}
function returns a token type code to indicate what kind of token has
been read. You don't need to know what the code value is; you can use
the symbol to stand for it.
A @dfn{nonterminal symbol} stands for a class of syntactically
equivalent groupings. The symbol name is used in writing grammar rules.
By convention, it should be all lower case.
Symbol names can contain letters, underscores, periods, and non-initial
digits and dashes. Dashes in symbol names are a GNU extension, incompatible
with POSIX Yacc. Periods and dashes make symbol names less convenient to
use with named references, which require brackets around such names
(@pxref{Named References}). Terminal symbols that contain periods or dashes
make little sense: since they are not valid symbols (in most programming
languages), they are not exported as token names.
There are three ways of writing terminal symbols in the grammar (a short
example showing all three follows this list):
@itemize @bullet
@item
A @dfn{named token type} is written with an identifier, like an
identifier in C@. By convention, it should be all upper case. Each
such name must be defined with a Bison declaration such as
@code{%token}. @xref{Token Decl, ,Token Type Names}.
@item
@cindex character token
@cindex literal token
@cindex single-character literal
A @dfn{character token type} (or @dfn{literal character token}) is
written in the grammar using the same syntax used in C for character
constants; for example, @code{'+'} is a character token type. A
character token type doesn't need to be declared unless you need to
specify its semantic value data type (@pxref{Value Type, ,Data Types of
Semantic Values}), associativity, or precedence (@pxref{Precedence,
,Operator Precedence}).
By convention, a character token type is used only to represent a
token that consists of that particular character. Thus, the token
type @code{'+'} is used to represent the character @samp{+} as a
token. Nothing enforces this convention, but if you depart from it,
your program will confuse other readers.
All the usual escape sequences used in character literals in C can be
used in Bison as well, but you must not use the null character as a
character literal because its numeric code, zero, signifies
end-of-input (@pxref{Calling Convention, ,Calling Convention
for @code{yylex}}). Also, unlike standard C, trigraphs have no
special meaning in Bison character literals, nor is backslash-newline
allowed.
@item
@cindex string token
@cindex literal string token
@cindex multicharacter literal
A @dfn{literal string token} is written like a C string constant; for
example, @code{"<="} is a literal string token. A literal string token
doesn't need to be declared unless you need to specify its semantic
value data type (@pxref{Value Type}), associativity, or precedence
(@pxref{Precedence}).
You can associate the literal string token with a symbolic name as an
alias, using the @code{%token} declaration (@pxref{Token Decl, ,Token
Declarations}). If you don't do that, the lexical analyzer has to
retrieve the token number for the literal string token from the
@code{yytname} table (@pxref{Calling Convention}).
@strong{Warning}: literal string tokens do not work in Yacc.
By convention, a literal string token is used only to represent a token
that consists of that particular string. Thus, you should use the token
type @code{"<="} to represent the string @samp{<=} as a token. Bison
does not enforce this convention, but if you depart from it, people who
read your program will be confused.
All the escape sequences used in string literals in C can be used in
Bison as well, except that you must not use a null character within a
string literal. Also, unlike Standard C, trigraphs have no special
meaning in Bison string literals, nor is backslash-newline allowed. A
literal string token must contain two or more characters; for a token
containing just one character, use a character token (see above).
@end itemize
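For instance, here is a small fragment, purely illustrative and not part
of the calculator examples, that uses all three forms:
@example
@group
%token NUM           /* a named token type */
%token LE "<="       /* a literal string token, with the alias LE */
%%
cond:
  exp '<' exp        /* '<' is a character token type */
| exp LE exp         /* LE and "<=" are interchangeable here */
;
exp: NUM;
@end group
@end example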
How you choose to write a terminal symbol has no effect on its
grammatical meaning. That depends only on where it appears in rules and
on when the parser function returns that symbol.
The value returned by @code{yylex} is always one of the terminal
symbols, except that a zero or negative value signifies end-of-input.
Whichever way you write the token type in the grammar rules, you write
it the same way in the definition of @code{yylex}. The numeric code
for a character token type is simply the positive numeric code of the
character, so @code{yylex} can use the identical value to generate the
requisite code, though you may need to convert it to @code{unsigned
char} to avoid sign-extension on hosts where @code{char} is signed.
Each named token type becomes a C macro in the parser implementation
file, so @code{yylex} can use the name to stand for the code. (This
is why periods don't make sense in terminal symbols.) @xref{Calling
Convention, ,Calling Convention for @code{yylex}}.
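For instance, a lexer scanning a buffer of plain @code{char} might convert
like this; the buffer and the helper name are hypothetical:
@example
@group
static const char *input;   /* hypothetical input buffer */
/* Return the next character as its own token code, or 0 at the
   end, converting through unsigned char so that the code is
   never negative even where char is signed. */
static int
next_char_token (void)
@{
  if (*input == '\0')
    return 0;
  return (unsigned char) *input++;
@}
@end group
@end example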
If @code{yylex} is defined in a separate file, you need to arrange for the
token-type macro definitions to be available there. Use the @samp{-d}
option when you run Bison, so that it will write these macro definitions
into a separate header file @file{@var{name}.tab.h} which you can include
in the other source files that need it. @xref{Invocation, ,Invoking Bison}.
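For example, for the infix calculator this could be:
@example
bison -d calc.y
@end example
@noindent
which produces @file{calc.tab.c} as before, together with the header file
@file{calc.tab.h} for the other source files to include.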
If you want to write a grammar that is portable to any Standard C
host, you must use only nonnull character tokens taken from the basic
execution character set of Standard C@. This set consists of the ten
digits, the 52 lower- and upper-case English letters, and the
characters in the following C-language string:
@example
"\a\b\t\n\v\f\r !\"#%&'()*+,-./:;<=>?[\\]^_@{|@}~"
@end example
The @code{yylex} function and Bison must use a consistent character set
and encoding for character tokens. For example, if you run Bison in an
ASCII environment, but then compile and run the resulting
program in an environment that uses an incompatible character set like
EBCDIC, the resulting program may not work because the tables
generated by Bison will assume ASCII numeric values for
character tokens. It is standard practice for software distributions to
contain C source files that were generated by Bison in an
ASCII environment, so installers on platforms that are
incompatible with ASCII must rebuild those files before
compiling them.
The symbol @code{error} is a terminal symbol reserved for error recovery
(@pxref{Error Recovery}); you shouldn't use it for any other purpose.
In particular, @code{yylex} should never return this value. The default
value of the error token is 256, unless you explicitly assigned 256 to
one of your tokens with a @code{%token} declaration.
@node Rules
@section Syntax of Grammar Rules
@cindex rule syntax
@cindex grammar rule syntax
@cindex syntax of grammar rules
A Bison grammar rule has the following general form:
@example
@group
@var{result}: @var{components}@dots{};
@end group
@end example
@noindent
where @var{result} is the nonterminal symbol that this rule describes,
and @var{components} are various terminal and nonterminal symbols that
are put together by this rule (@pxref{Symbols}).
For example,
@example
@group
exp: exp '+' exp;
@end group
@end example
@noindent
says that two groupings of type @code{exp}, with a @samp{+} token in between,
can be combined into a larger grouping of type @code{exp}.
White space in rules is significant only to separate symbols. You can add
extra white space as you wish.
Scattered among the components can be @var{actions} that determine
the semantics of the rule. An action looks like this:
@example
@{@var{C statements}@}
@end example
@noindent
@cindex braced code
This is an example of @dfn{braced code}, that is, C code surrounded by
braces, much like a compound statement in C@. Braced code can contain
any sequence of C tokens, so long as its braces are balanced. Bison
does not check the braced code for correctness directly; it merely
copies the code to the parser implementation file, where the C
compiler can check it.
Within braced code, the balanced-brace count is not affected by braces
within comments, string literals, or character constants, but it is
affected by the C digraphs @samp{<%} and @samp{%>} that represent
braces. At the top level braced code must be terminated by @samp{@}}
and not by a digraph. Bison does not look for trigraphs, so if braced
code uses trigraphs you should ensure that they do not affect the
nesting of braces or the boundaries of comments, string literals, or
character constants.
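For example, a brace inside a string literal or a comment does not end
the braced code (an illustrative sketch; the token @code{STRING} is
hypothetical):
@example
@group
msg: STRING  @{ printf ("@}\n");  /* this @} is only text */ @};
@end group
@end example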
Usually there is only one action and it follows the components.
@xref{Actions}.
@findex |
Multiple rules for the same @var{result} can be written separately or can
be joined with the vertical-bar character @samp{|} as follows:
@example
@group
@var{result}:
@var{rule1-components}@dots{}
| @var{rule2-components}@dots{}
@dots{}
;
@end group
@end example
@noindent
They are still considered distinct rules even when joined in this way.
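For example, a nonterminal with several alternatives might be written
as follows (the symbol names here are hypothetical):
@example
@group
stmt:
  expr_stmt
| if_stmt
| while_stmt
;
@end group
@end example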
If @var{components} in a rule is empty, it means that @var{result} can
match the empty string. For example, here is how to define a
comma-separated sequence of zero or more @code{exp} groupings:
@example
@group
expseq:
/* empty */
| expseq1
;
@end group
@group
expseq1:
exp
| expseq1 ',' exp
;
@end group
@end example
@noindent
It is customary to write a comment @samp{/* empty */} in each rule
with no components.
@node Recursion
@section Recursive Rules
@cindex recursive rule
A rule is called @dfn{recursive} when its @var{result} nonterminal
appears also on its right hand side. Nearly all Bison grammars need to
use recursion, because that is the only way to define a sequence of any
number of a particular thing. Consider this recursive definition of a
comma-separated sequence of one or more expressions:
@example
@group
expseq1:
exp
| expseq1 ',' exp
;
@end group
@end example
@cindex left recursion
@cindex right recursion
@noindent
Since the recursive use of @code{expseq1} is the leftmost symbol in the
right hand side, we call this @dfn{left recursion}. By contrast, here
the same construct is defined using @dfn{right recursion}:
@example
@group
expseq1:
exp
| exp ',' expseq1
;
@end group
@end example
@noindent
Any kind of sequence can be defined using either left recursion or right
recursion, but you should always use left recursion, because it can
parse a sequence of any number of elements with bounded stack space.
Right recursion uses up space on the Bison stack in proportion to the
number of elements in the sequence, because all the elements must be
shifted onto the stack before the rule can be applied even once.
@xref{Algorithm, ,The Bison Parser Algorithm}, for further explanation
of this.
@cindex mutual recursion
@dfn{Indirect} or @dfn{mutual} recursion occurs when the result of the
rule does not appear directly on its right hand side, but does appear
in rules for other nonterminals which do appear on its right hand
side.
For example:
@example
@group
expr:
primary
| primary '+' primary
;
@end group
@group
primary:
constant
| '(' expr ')'
;
@end group
@end example
@noindent
defines two mutually-recursive nonterminals, since each refers to the
other.
@node Semantics
@section Defining Language Semantics
@cindex defining language semantics
@cindex language semantics, defining
The grammar rules for a language determine only the syntax. The semantics
are determined by the semantic values associated with various tokens and
groupings, and by the actions taken when various groupings are recognized.
For example, the calculator calculates properly because the value
associated with each expression is the proper number; it adds properly
because the action for the grouping @w{@samp{@var{x} + @var{y}}} is to add
the numbers associated with @var{x} and @var{y}.
@menu
* Value Type:: Specifying one data type for all semantic values.
* Multiple Types:: Specifying several alternative data types.
* Actions:: An action is the semantic definition of a grammar rule.
* Action Types:: Specifying data types for actions to operate on.
* Mid-Rule Actions:: Most actions go at the end of a rule.
This says when, why and how to use the exceptional
action in the middle of a rule.
@end menu
@node Value Type
@subsection Data Types of Semantic Values
@cindex semantic value type
@cindex value type, semantic
@cindex data types of semantic values
@cindex default data type
In a simple program it may be sufficient to use the same data type for
the semantic values of all language constructs. This was true in the
RPN and infix calculator examples (@pxref{RPN Calc, ,Reverse Polish
Notation Calculator}).
Bison normally uses the type @code{int} for semantic values if your
program uses the same data type for all language constructs. To
specify some other type, define @code{YYSTYPE} as a macro, like this:
@example
#define YYSTYPE double
@end example
@noindent
@code{YYSTYPE}'s replacement list should be a type name
that does not contain parentheses or square brackets.
This macro definition must go in the prologue of the grammar file
(@pxref{Grammar Outline, ,Outline of a Bison Grammar}).
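For example, the prologue of a grammar whose semantic values are all
@code{double} might begin as follows (a minimal sketch, not one of
this manual's examples).  If the desired type's name contains
parentheses or square brackets, declare a @code{typedef} for it first
and use the @code{typedef} name instead:
@example
@group
%@{
#include <stdio.h>
#define YYSTYPE double   /* every semantic value is a double */
%@}
@end group
@end example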
@node Multiple Types
@subsection More Than One Value Type
In most programs, you will need different data types for different kinds
of tokens and groupings. For example, a numeric constant may need type
@code{int} or @code{long int}, while a string constant needs type
@code{char *}, and an identifier might need a pointer to an entry in the
symbol table.
To use more than one data type for semantic values in one parser, Bison
requires you to do two things:
@itemize @bullet
@item
Specify the entire collection of possible data types, either by using the
@code{%union} Bison declaration (@pxref{Union Decl, ,The Collection of
Value Types}), or by using a @code{typedef} or a @code{#define} to
define @code{YYSTYPE} to be a union type whose member names are
the type tags.
@item
Choose one of those types for each symbol (terminal or nonterminal) for
which semantic values are used. This is done for tokens with the
@code{%token} Bison declaration (@pxref{Token Decl, ,Token Type Names})
and for groupings with the @code{%type} Bison declaration (@pxref{Type
Decl, ,Nonterminal Symbols}).
@end itemize
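Putting these two steps together, the declarations might look like this
(a sketch with hypothetical symbol and member names):
@example
@group
%union @{
  double val;   /* numeric tokens and expressions */
  char *str;    /* string tokens */
@}
@end group
@group
%token <val> NUM
%token <str> STRING
%type  <val> exp
@end group
@end example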
@node Actions
@subsection Actions
@cindex action
@vindex $$
@vindex $@var{n}
@vindex $@var{name}
@vindex $[@var{name}]
An action accompanies a syntactic rule and contains C code to be executed
each time an instance of that rule is recognized. The task of most actions
is to compute a semantic value for the grouping built by the rule from the
semantic values associated with tokens or smaller groupings.
An action consists of braced code containing C statements, and can be
placed at any position in the rule;
it is executed at that position. Most rules have just one action at the
end of the rule, following all the components. Actions in the middle of
a rule are tricky and used only for special purposes (@pxref{Mid-Rule
Actions, ,Actions in Mid-Rule}).
The C code in an action can refer to the semantic values of the
components matched by the rule with the construct @code{$@var{n}},
which stands for the value of the @var{n}th component. The semantic
value for the grouping being constructed is @code{$$}. In addition,
the semantic values of symbols can be accessed with the named
references construct @code{$@var{name}} or @code{$[@var{name}]}.
Bison translates both of these constructs into expressions of the
appropriate type when it copies the actions into the parser
implementation file. @code{$$} (or @code{$@var{name}}, when it stands
for the current grouping) is translated to a modifiable lvalue, so it
can be assigned to.
Here is a typical example:
@example
@group
exp:
@dots{}
| exp '+' exp @{ $$ = $1 + $3; @}
@end group
@end example
Or, in terms of named references:
@example
@group
exp[result]:
@dots{}
| exp[left] '+' exp[right] @{ $result = $left + $right; @}
@end group
@end example
@noindent
This rule constructs an @code{exp} from two smaller @code{exp} groupings
connected by a plus-sign token. In the action, @code{$1} and @code{$3}
(@code{$left} and @code{$right})
refer to the semantic values of the two component @code{exp} groupings,
which are the first and third symbols on the right hand side of the rule.
The sum is stored into @code{$$} (@code{$result}) so that it becomes the
semantic value of
the addition-expression just recognized by the rule. If there were a
useful semantic value associated with the @samp{+} token, it could be
referred to as @code{$2}.
@xref{Named References}, for more information about using the named
references construct.
Note that the vertical-bar character @samp{|} is really a rule
separator, and actions are attached to a single rule. This differs
from tools like Flex, in which @samp{|} stands for either ``or'' or
``the same action as that of the next rule''. In the
following example, the action is triggered only when @samp{b} is found:
@example
@group
a-or-b: 'a'|'b' @{ a_or_b_found = 1; @};
@end group
@end example
@cindex default action
If you don't specify an action for a rule, Bison supplies a default:
@w{@code{$$ = $1}.} Thus, the value of the first symbol in the rule
becomes the value of the whole rule. Of course, the default action is
valid only if the two data types match. There is no meaningful default
action for an empty rule; every empty rule must have an explicit action
unless the rule's value does not matter.
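For example, in the following sketch the first alternative relies on
the default action, while the second needs an explicit one (the token
@code{NUM} is hypothetical):
@example
@group
exp:
  NUM            /* default action: $$ = $1 */
| '(' exp ')'    @{ $$ = $2; @}
;
@end group
@end example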
@code{$@var{n}} with @var{n} zero or negative is allowed for reference
to tokens and groupings on the stack @emph{before} those that match the
current rule. This is a very risky practice, and to use it reliably
you must be certain of the context in which the rule is applied. Here
is a case in which you can use this reliably:
@example
@group
foo:
expr bar '+' expr @{ @dots{} @}
| expr bar '-' expr @{ @dots{} @}
;
@end group
@group
bar:
/* empty */ @{ previous_expr = $0; @}
;
@end group
@end example
As long as @code{bar} is used only in the fashion shown here, @code{$0}
always refers to the @code{expr} which precedes @code{bar} in the
definition of @code{foo}.
@vindex yylval
It is also possible to access the semantic value of the lookahead token, if
any, from a semantic action.
This semantic value is stored in @code{yylval}.
@xref{Action Features, ,Special Features for Use in Actions}.
@node Action Types
@subsection Data Types of Values in Actions
@cindex action data types
@cindex data types in actions
If you have chosen a single data type for semantic values, the @code{$$}
and @code{$@var{n}} constructs always have that data type.
If you have used @code{%union} to specify a variety of data types, then you