mcpp is a C preprocessor developed by kmatsui (Kiyoshi Matsui). It started from DECUS cpp written by Martin Minow and was then rewritten entirely; mcpp means Matsui cpp. This software is supplied as source code, and to use mcpp with a particular compiler system, a small amount of modification to adapt it to that compiler system is required before it can be compiled into an executable. *1
This document describes the specification of mcpp executables that have already been ported to certain compiler systems. For those who want to know more about mcpp or want to port it to other compiler systems, refer to the mcpp source and its document mcpp-porting.html.
All these sources and related documents are provided as open-source software.
Before going into detail, some of the mcpp features are introduced here.
Note:
*1 mcpp V.2.6.3 onward provides some binary packages too, at the following site.
mcpp is a portable preprocessor supporting various operating systems, including Linux, FreeBSD and Windows. Its source is highly portable and can be compiled by any compiler which supports Standard C or C++ (ANSI/ISO C or C++). Only classic library functions are used.
To port mcpp to a compiler system, in many cases one only needs to change some macro definitions in the header files and simply compile it. In the worst case, adding several dozen lines to a source file is enough.
To process multi-byte characters (Kanji), it supports Japanese EUC-JP, shift-JIS and ISO2022-JP, Chinese GB-2312, Taiwanese Big-5 and Korean KSC-5601 (KSX 1001), as well as UTF-8. For shift-JIS, ISO2022-JP or Big-5, mcpp can complement the compiler-proper if it does not recognize them.
mcpp has various behavioral modes. Besides the Standard-conforming mode, there are a K&R 1st mode, a "Reiser" cpp mode and what I call a post-Standard mode. mcpp also has an execution option to run as a C++ preprocessor.
Unlike many existing preprocessors, the Standard mode of mcpp has the highest conformance to the Standards: all of C90, C99 and C++98. It has been developed with the aim of becoming the reference model of a Standard C preprocessor. The version of the Standard can be selected by an execution option. *1
In addition, it provides several useful enhancements: '#pragma MCPP debug', which traces the process of macro expansion or #if expression evaluation, and the header file "pre-preprocessing" facility.
mcpp also provides several useful execution options, such as warning level or include directory specification options.
Even if there are mistakes in the source, mcpp handles them suitably with accurate, plain diagnostic messages, without running out of control or displaying misleading error messages. It also warns about portability problems. Detailed documents are attached as well.
In spite of its high quality, mcpp's code size and memory usage are relatively small.
A disadvantage of mcpp, if any, is its slower processing speed. It takes two or three times as long as GCC 3.*, 4.* / cc1; but considering that its speed is almost the same as that of Borland C 5.5 / cpp32, and that it runs a little faster when the header file pre-preprocessing facility is used, it cannot be called particularly slow. mcpp puts an emphasis on Standard conformance, source portability and operability in a small memory space, which makes this level of processing speed inevitable.
The Validation Suite for Standard C Preprocessing, which tests the extent to which a preprocessor conforms to Standard C, and its documentation cpp-test.html, which contains the results of applying the Validation Suite to various preprocessors, are also released with mcpp. Looking through that file, you will notice that so-called Standard-conforming preprocessors have quite a few conformance problems.
During the development of mcpp V.2.3, it was selected, along with its Validation Suite, as one of the "Exploratory Software Projects for 2002" by the Information-technology Promotion Agency (IPA), Japan. From July 2002 to February 2003, the project, financed by IPA, proceeded under the advice of project manager Yutaka Niibe. I asked HighWell, Inc., Tokyo, to translate all the documents, and revised and corrected the translated documents on technical details.
mcpp was adopted again as one of the "Exploratory Software Projects" in 2003, under project manager Hiroshi Ichiji, and was updated to the next version, V.2.4. *2
Since the projects ended, I have continued to update mcpp and the Validation Suite.
Note:
*1 ISO/IEC 9899:1990 (JIS X 3010-1993) had been used as the C Standard, but in 1999 ISO/IEC 9899:1999 was adopted as a new Standard. This document calls the former C90 and the latter C99. The former is generally called ANSI C or C89, because it was migrated from ANSI X3.159-1989. ISO/IEC 9899:1990 + Amendment 1995 is sometimes called C95. The C++ Standards are ISO/IEC 14882:1998 and its corrigendum version ISO/IEC 14882:2003. This document calls both of them C++98.
*2 The outline of the "Exploratory Software Project" can be seen at the following site (Japanese only).
mcpp from V.2.3 through V.2.5 had been located at:
In April 2006, mcpp project moved to:
The old versions of mcpp, cpp V.2.2 and Validation Suite V.1.2, are located at the following Vector web site. They are in the directory called dos/prog/c, but they are not exclusively for MS-DOS. The sources are for UNIX, WIN32 and MS-DOS. The documents are in Japanese only.
http://www.vector.co.jp/soft/dos/prog/se081188.html
http://www.vector.co.jp/soft/dos/prog/se081189.html
http://www.vector.co.jp/soft/dos/prog/se081186.html
The text files in these archive files available at Vector use [CR]+[LF] as a <newline> and encode Kanji in shift-JIS for DOS/Windows. On the other hand, those from V.2.3 through V.2.5 available at SourceForge use [LF] as a <newline> and encode Kanji in EUC-JP for UNIX. From V.2.6 on two types of archive, .tar.gz file with [LF]/EUC-JP and .zip file with [CR]+[LF]/shift-JIS, are provided.
This manual was a text file in the older versions; it was changed to an HTML file at V.2.6.2.
This manual uses the following typographical conventions:
There are two types of build (or compiling configuration) for mcpp executable. *1, *2
Each mcpp executable has the following five behavioral modes regardless of the build type.
The mode of mcpp is specified by the run-time options as follows:
In this document, I group OLDPREP and KR into pre-Standard modes, and STD, COMPAT and POSTSTD into Standard modes. Since COMPAT mode is almost the same as STD mode, STD includes COMPAT unless otherwise mentioned. *3
The macro expansion methods differ between Standard and pre-Standard modes. Roughly speaking, this is the difference between C90 and pre-C90. The biggest difference is in the expansion of function-like macros (macros with arguments). When an argument contains macros, mcpp in Standard mode expands the argument completely before substituting it for the parameter in the replacement list of the original macro, whereas in pre-Standard mode it substitutes the argument unexpanded and expands it at rescan time.
Also, in Standard mode a macro is in principle not expanded recursively, even if the macro definition is directly or indirectly recursive. In pre-Standard mode, a recursive macro definition causes infinite recursion and results in an error at expansion time.
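A minimal illustration of the recursion rule (a hypothetical example, not taken from the mcpp sources):
#define foo bar
#define bar foo
foo     /* Standard mode: left as "foo", since re-replacement of the same name is suppressed */
        /* pre-Standard mode: infinite recursion, reported as an error at expansion time */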
The handling of \ at the end of a line also differs by mode. In Standard mode, after trigraph processing, every <backslash><newline> sequence is deleted before tokenization; in pre-Standard mode, such sequences are deleted only when they appear within string literals or in a #define line.
There is a subtle difference in tokenization (token parsing, decomposition into tokens). Standard mode tokenizes on the principle of "token based processing". Concretely, in Standard mode spaces are inserted around an expanded macro to prevent unexpected merging with adjacent tokens. In pre-Standard mode, traces of the traditional, convenient and tacit tokenization and macro expansion method of "character based text replacement" remain. For details, please see cpp-test.html#2.
In Standard mode, mcpp handles numeric tokens, called preprocessing-numbers, according to the Standard specification. In pre-Standard mode, numeric tokens are the same as integer constant or floating point constant tokens. The suffixes 'U', 'u', 'LL' and 'll' of integer constants and the suffixes 'F', 'f', 'L' and 'l' of floating point constants are not recognized as part of the tokens in pre-Standard mode.
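For instance, under the Standard rule a suffixed constant is one pp-token, while the pre-Standard tokenizer reads the suffix separately (a hedged sketch of the behavior described above):
123UL   /* Standard mode: one preprocessing-number "123UL" */
        /* pre-Standard mode: roughly, the number "123" followed by a separate token "UL" */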
The string literals and character constants of wide characters are recognized as single tokens only in Standard mode.
Digraphs, #error, #pragma, and the _Pragma() operator are available only in Standard mode. Also, the -S <n> option (strict-ansi specs) and the -+ option (which runs mcpp as a C++ preprocessor) are used only in Standard mode. The predefined macros __STDC__ and __STDC_VERSION__ are defined in Standard mode and are not defined in pre-Standard mode.
#if defined and #elif cannot be used in pre-Standard mode. Macros cannot be used within the argument of #include or #line in pre-Standard mode. The predefined macros __FILE__, __LINE__, __DATE__ and __TIME__ are not defined in pre-Standard mode.
On the other hand, #assert, #asm (#endasm), #put_defines and #debug are available in pre-Standard mode only.
A #if expression is evaluated in long / unsigned long or in long long / unsigned long long in Standard mode, and only in (signed) long in pre-Standard mode. sizeof (type) in a #if expression can be used only in pre-Standard mode.
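For example (a hedged sketch following the rules above):
#if sizeof (long) >= 8      /* accepted only in pre-Standard mode; in Standard mode "sizeof" is just an undefined identifier here and the line is in error */
#define LONG_IS_64_BITS 1
#endif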
Trigraphs and UCN (universal character name) are available only in STD mode.
The output of diagnostic messages is also slightly different between the modes. Please see chapter 5 for details.
Any other items for which K&R 1st and the Standards have no distinct rules follow the C90 rules in pre-Standard mode.
The difference of OLDPREP mode from KR mode and the difference of POSTSTD and COMPAT modes from STD mode are as follows:
Moreover, there is a mode called lang-asm. It is a mode to process anomalous sources: assembler sources which nevertheless have C comments, directives and macros embedded. POSTSTD mode cannot be combined with it, but STD, KR and OLDPREP turn into this mode when it is specified by an option. See 2.5 for its specifications.
For the above reasons, some specifications differ among mcpp executables, so please read this manual carefully. This chapter describes first the common options, next the behavioral-mode-dependent options, then the options common to most compiler systems, and finally the compiler-dependent options for each compiler-specific-build.
Note:
*1 There is another build named subroutine-build, which is called as a subroutine from some other main program. The behavioral specification of a subroutine-build is, however, the same as that of either a compiler-specific-build or a compiler-independent-build, depending on its compile-time settings. Hence, this manual does not mention subroutine-build in particular. As for subroutine-build, refer to mcpp-porting.html#3.12.
*2 The binary packages provided at the SourceForge site are of compiler-independent-builds.
*3 mcpp had two separate executables for Standard mode and pre-Standard mode; they were integrated into one at V.2.6.
*4 This option is for compatibility with GCC, Visual C++ and other major implementations. 'compat' means "compatible mode".
The <arg> and [arg] shown below indicate required and optional arguments respectively. Note that the <, >, [, or ] character itself must not be entered.
mcpp invocation takes a form of:
mcpp [-<opts> [-<opts>]] [in_file] [out_file] [-<opts> [-<opts>]]
Note that you may have to replace the above "mcpp" with another name, depending on how mcpp is installed.
When out_file (an output path) is omitted, stdout is used unless the -o option is specified. When in_file (an input path) is omitted, stdin is used. A diagnostic message is output to stderr unless the -Q option is specified.
If any of these files cannot be opened, preprocessing is terminated, issuing an error message.
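For example, the following invocation (file names are illustrative) preprocesses main.c into main.i, defining one macro and adding one include directory:
mcpp -DDEBUG=1 -I/usr/local/include main.c main.i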
For an option with an argument, white-space may or may not be inserted between the option character and the argument; both "-I<arg>" and "-I <arg>" are acceptable. Options without arguments may be bundled; both "-Qi" and "-Q -i" are valid.
For an option with an argument, a missing required argument causes an error, except for the -M option.
If the -D, -U, -I or -W option is specified multiple times, each occurrence is valid. For the -S, -V or -+ option, only the first occurrence is valid. The -2 and -3 options toggle each time they are specified. For other options, the last occurrence is valid.
The option letters are case sensitive.
The switch character is '-', not '/', even under Windows.
When an invalid option is specified, a usage statement is displayed. To check the valid options, enter a command such as "mcpp -?". In addition to the usage message there are several error messages, but they are self-explanatory, so their explanations are omitted.
This section covers common options across mcpp modes or compiler systems.
The -M* options output source file dependency lines for a makefile. When there are several source files and a -M* option is specified for each of them, the outputs can be processed and merged into one file, and the dependency description lines will be aligned. These options are similar to those of GCC, but there are several differences. *1
test.o: test.c test.h
test.h:
$(objpfx)foo.o: foo.c
$$(objpfx)foo.o: foo.c
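As a usage sketch (assuming, as with GCC, that the dependency lines are written to the regular output), the outputs of several runs can be merged into one file:
mcpp -M main.c >> depend.mk
mcpp -M sub.c  >> depend.mk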
Note:
*1 mcpp differs from GCC in that:
mcpp has several behavioral modes. For their specifications refer to sec 2.1.
This manual shows lists of various mcpp behaviors by mode, which may not be very readable; please be patient. In this manual, all the upper-case names that do not begin with "__" and are displayed in italics, such as DIGRAPHS_INIT, TRUE, FALSE, etc., are macros defined in system.H. These macros are used only for compiling mcpp itself; a generated mcpp executable does not predefine them. Please keep this point in mind.
The following options are available in Standard mode:
The following option is available for STD mode:
#pragma MCPP debug macro_call
The -K option has almost the same effect as writing this pragma at the top of an input file, except that predefined macros are notified only by the option. For the specification of macro notification, see 3.5.8. #pragma MCPP debug macro_call.
Note:
*1 Predefining __STDC__ in C++ is not desirable and causes many problems. The GCC documentation says that __STDC__ needs to be predefined in C++ because many header files expect __STDC__ to be defined. The header files should be blamed for this. For parts common to C90, C99 and C++, "#if __STDC__ || __cplusplus" should be used.
*2 Unlike C99, the C++ Standard makes much of UCNs, as did the C draft of 1997/11. A half-hearted implementation is not permitted; however, implementing Unicode in earnest is too much of a burden for a preprocessor.
*3 In C90 mcpp treats // as a comment but issues a warning.
*4 This is for compatibility with GCC.
*5 If you install GCC-specific-mcpp, cc1 (cc1plus) is set up to receive the file preprocessed by mcpp with the -fpreprocessed option. Though this option means that the input is already preprocessed, cc1 still processes comments. Therefore, you can safely pass the output of -K to cc1 with -fpreprocessed. Furthermore, if you add the -save-temps option to the gcc (g++) command, the preprocessed output is left as a *.i (*.ii) file, and it can be read by a refactoring tool.
*6 Comment insertion by the -K option causes column shifts in sources, and this makes a *.S file of GCC, which is not a C/C++ source and is compiled with the -x assembler-with-cpp option, impossible to assemble. Also, comments kept by the -C option are sometimes confused with those inserted by the -K option. Therefore these options cannot be used at the same time.
*7 This option fails to keep the column position in some particularly complex cases. When line splicing by a <backslash><newline> and line splicing by a line-crossing comment are intermingled on one output line, or a comment crosses over 256 lines, the column position is lost. Note that each '\v' and '\f' is converted to a space with or without this option.
The following 2 options can be used on UNIX-like systems, for either the compiler-independent-build or the GCC-specific-build. On the GCC-specific-build, however, they cause an error if the GCC does not support them.
GCC has so many options that the GCC-specific-build of mcpp uses some options different from those of the other builds in order to avoid conflicts with GCC. Note that the options of the compiler-independent-build are all the same even if it is compiled by GCC. The options common to the builds other than GCC-specific are as follows.
#APP
If the token that follows the # at the start of a line does not match any of the C directives above, mcpp outputs the line as it is without causing an error.
# + any comment.
If the token that follows the # at the start of a line is neither an identifier nor a pp-number, mcpp discards the line with a warning, without causing an error.
"A very very long long string literal"
The above old-fashioned string literal, crossing line boundaries, is concatenated into "A very very\nlong long\nstring literal".
Such literals sometimes appear in GNU source code; note, however, that the corresponding option of GCC is -x assembler-with-cpp or -lang-asm.
This option cannot be used in POSTSTD mode.
This manual calls this mode lang-asm mode.
This mode is recommended when you use mcpp as a macro processor for some text other than C/C++, for example, as a cpp called from xrdb.
To use mcpp replacing the compiler system's resident preprocessor, install it in the directory where the resident preprocessor is located under an appropriate name. Before copying mcpp, be sure to change the name of resident preprocessor so that it may not be overwritten.
For settings on Linux, FreeBSD, or CygWIN see 3.9.5. For settings in GCC 3.*, 4.*, see also 3.9.7, and 3.9.7.1. For MinGW, see 3.9.7.1.
The compiler driver possibly cannot pass some options to mcpp in the normal manner. However, GCC provides the almighty -Wp option to allow you to pass any option to the preprocessor. For example, if you specify:
gcc -Wp,-W31,-Q23
the -W31 and -Q23 options are passed to the preprocessor. The options you want to pass to the preprocessor must be specified following -Wp, each delimited by a ','. *1, *2
For other compiler systems, if the compiler driver source is available, it is recommended to add this kind of almighty option to it. If you modify the compiler driver source so that, for example, when -P<opt> is specified only -<opt> is passed to the preprocessor, it will be very convenient because any option can then be passed.
An alternative way to use all the options of mcpp is to write a makefile which first preprocesses with mcpp and then compiles the output file of mcpp as a source file. For this method, refer to sections 2.9 and 2.10.
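A minimal sketch of the two steps such a makefile rule would run (file names, options and the compiler driver name are illustrative):
mcpp -I. -DDEBUG=1 foo.c foo.i
cc -c foo.i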
The following options are available for some compiler-specific-builds. The compiler-independent-build, of course, does not have these options.
The following options are available for the LCC-Win32-specific-build.
The following options are available for the Visual C-specific-build.
mcpp on Mac OS X accepts the following option, on both of GCC-specific-build and compiler-independent-build.
mcpp on Mac OS X accepts the following option on GCC-specific-build.
The following options (through the end of this section 2.6) are available for the GCC-specific-build. Note that since __STDC__ is set to 1 for GCC, the result is the same with or without the -S1 option.
The following options are available across the modes.
#line 123 "filename"
Most compiler systems can use this C source format, but some cannot. The default specification of mcpp is that, in a compiler-specific-build for a compiler system which cannot use this C source format, mcpp outputs the line number information in a format that the compiler-proper can accept.
The following options are available for Standard mode.
For STD mode, the following options are available. (These cannot be used in POSTSTD mode.)
The following option is available for pre-Standard mode of GCC-specific-build.
The next option is available on CygWIN GCC-specific-build.
mcpp neither treats the following options as an error nor does anything about them. (It sometimes issues a warning.)
In GCC V.3.3 or later, the preprocessor has been absorbed into the compiler, and an independent preprocessor does not exist. Moreover, gcc often passes to the preprocessor options which are not for the preprocessor, even if it is invoked with the -no-integrated-cpp option. The GCC-specific-build of mcpp for V.3.3 or later ignores the following options, if it cannot recognize them, as that kind of spurious option.
Note:
*1 -Wa and -Wl are almighty options for assembler and linker, respectively. The documentation on UNIX/System V/cc describes these options. Probably, GCC provides the -W<x> option for compatibility.
*2 In GCC V.3, cpp was absorbed into cc1 (cc1plus). Therefore, the options specified with -Wp are normally passed to cc1 (cc1plus). To have cpp (cpp0), not cc1, preprocess, the -no-integrated-cpp option must be specified on gcc invocation.
*3 GCC V.3.3 or later predefines several dozen macros. The -dD option does not regard these macros as predefined, and outputs them.
*4 The output of -dM option is similar to that of '#pragma MCPP put_defines' ('#put_defines') with the following differences:
*5 Refer to 3.9.6.3.
In the compiler-independent-build of mcpp, no include directories other than /usr/include and /usr/local/include are set up on UNIX systems. Other directories, if required, must be specified using environment variables or run-time options. The environment variable used by the compiler-independent-build is INCLUDE for C and CPLUS_INCLUDE for C++. By default, the file search starts from the directory of the includer's source. (Refer to 4.2 for the search rules.) Besides, on Linux there is a confusion of include directories, so a special setup is necessary to cope with this problem; refer to 3.9.9.
For the default include directories on GCC-specific-build, refer to noconfig/*.dif files, and for search rule and environment variable name, refer to 4.2.
For the environment variables LC_ALL, LC_CTYPE and LANG, refer to 2.8.
mcpp can process various multi-byte character encodings as follows.
EUC-JP        Japanese extended UNIX code (UJIS)
shift-JIS     Japanese MS-Kanji
GB-2312       EUC-like Chinese encoding (Simplified Chinese)
Big-Five      Taiwanese encoding (Traditional Chinese)
KSC-5601      EUC-like Korean encoding (KSX 1001)
ISO-2022-JP1  International standard Japanese
UTF-8         A kind of Unicode encoding
The encoding used during execution can be specified as follows (Priority is given in this order):
How to specify an <encoding> is basically the same across '#pragma __setlocale', the -e option and the environment variables. In the table below, the encoding on the left-hand side is specified by any of the <encoding> names on the right-hand side. <encoding> is not case sensitive, and '-' and '_' are ignored. Moreover, if it contains a '.', the character sequence up to the '.' is ignored. Therefore EUC_JP, EUC-JP, EUCJP, euc-jp, eucjp and ja_JP.eucJP are all regarded as the same. '*' represents any character sequence of zero or more bytes (iso8859-1, iso8859-2, etc. are equivalent to iso8859*).
EUC-JP        eucjp, euc, ujis
shift-JIS     sjis, shiftjis, mskanji
GB-2312       gb2312, cngb, euccn
BIG-FIVE      bigfive, big5, cnbig5, euctw
KSC-5601      ksc5601, ksx1001, wansung, euckr
ISO-2022-JP1  iso2022jp, iso2022jp1, jis
UTF-8         utf8, utf
Not specified c, en*, latin*, iso8859*
If any of the following encodings is specified, mcpp no longer recognizes multi-byte characters: C, en* (english), latin* and iso8859*. When a non-ASCII ISO-8859 Latin-<n> single-byte character set is used, one of these encodings must be specified. When an empty name is specified (#pragma __setlocale( "")), the encoding is restored to the default.
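For instance, EUC-JP can be selected with the -e option described above (a hedged example; file names are illustrative), or by an environment setting such as LC_ALL=ja_JP.eucJP:
mcpp -e euc-jp foo.c foo.i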
Only in the Visual C-specific-build, the following encoding names can be specified with '#pragma setlocale'. This is for compatibility with Visual C++. It is recommended to use these names, because the Visual C++ compiler cannot recognize encoding names other than these. ('-' can be omitted for mcpp, but not for the Visual C++ compiler-proper.)
shift-JIS     japanese, jpn
GB-2312       chinese-simplified, chs
BIG-FIVE      chinese-traditional, cht
KSC-5601      korean, kor
Not specified C, english
In Visual C++, the default multi-byte character encoding varies, depending on the language to which the language parameter and the "Region and Language Options" of Windows are set. However, the #pragma setlocale specification takes precedence over these Windows settings.
GCC sometimes fails to handle the shift-JIS, ISO2022-JP and BIG-FIVE encodings, whose multi-byte characters can contain a byte with the value 0x5c. The GCC-specific-build of mcpp complements GCC in this respect. *1
Note:
*1 According to gcc's info, if the --enable-c-mbchar option is specified when configuring GCC itself, that GCC recognizes an encoding specified by the environment variable LANG set to one of C-EUCJP, C-SJIS or C-JIS.
This way of configuring seems to have been available since 1998, but it has seldom been used, and its implementation does not work.
Although the GCC-specific-build of mcpp had supported these environment variables, such as LANG=C-SJIS, that feature was removed in V.2.7.
GCC's info also says that, besides LANG, the environment variables LC_ALL and LC_CTYPE can be used to specify an encoding. In practice, however, whether LC_ALL or LC_CTYPE is set or not makes a difference only in the diagnostic messages.
Compilers whose preprocessor is integrated into the compiler itself are called one-pass compilers; these include Visual C, Borland C and LCC-Win32. Such compilers have become popular because they achieve a somewhat higher processing speed. However, the time spent on preprocessing is becoming shorter anyway thanks to better hardware performance, and in the first place there is much to be said for preprocessing being a common phase, mostly independent of the run-time environment and the compiler system. It is not desirable for one-pass compilers to become more popular, since that leads to more compiler-system-specific specifications.
Anyhow, it is impossible to replace the preprocessor of a one-pass compiler with mcpp. To use mcpp, a source program is preprocessed with mcpp and then the output is passed to the one-pass compiler; as you see, preprocessing takes place twice. This is wasteful but inevitable. Using mcpp still has the merit of source checking, and it makes available functions not provided by the resident preprocessor.
To use mcpp with a one-pass compiler, the procedure must be written in a makefile. For sample procedures, refer to the recompilation settings in the makefiles used to compile mcpp itself, such as visualc.mak, borlandc.mak and lcc_w32.mak.
Although the GCC 3 and 4 compilers now integrate the preprocessing facility into themselves, gcc provides an option to use an external preprocessor. Use this option when mcpp is used. (See 3.9.7.)
It is difficult to use mcpp in an Integrated Development Environment (IDE), because an IDE's GUI follows compiler-system-specific specifications and its internal interfaces are usually not made available to third parties. Furthermore, one-pass compilers make it more difficult to insert a phase in which to use mcpp.
This subsection describes how to make mcpp available in the Windows / Visual C++ 2003, 2005 and 2008 IDEs. For Borland C and LCC-Win32, use their compiler-specific-builds on the command line.
It also describes how to make mcpp available in Mac OS X / Xcode.app / Apple-GCC.
mcpp cannot be used in a normal "project", since the internal specifications of Visual C++'s IDE are not made available to third parties and the compiler is a one-pass compiler. However, once a makefile that uses mcpp is created, Visual C++'s IDE can recognize the makefile and you can create a "makefile project" using that file. This allows you to utilize most of the IDE functions, including source editing, search and source level debugging.
"Creating a Makefile Project" in the Visual C++ 2003 documentation, the Visual C++ 2005 Express Edition Help and the Visual C++ 2008 Express Edition Help describe how to make a makefile project. Perform the following procedure to create one.
"Build command line": nmake "Output": mcpp.exe "Clean command": nmake clean "Rebuild command line": nmake PREPROCESSED=1To make the Visual C-specific-build of mcpp, add an option COMPILER=MSC as:
"Build command line": nmake COMPILER=MSC "Output": mcpp.exe "Clean command": nmake clean "Rebuild command line": nmake COMPILER=MSC PREPROCESSED=1Since a Makefile project does not provide a 'make install' equivalent command, you must write the makefile in such a way that the commands you specify in "Build command line" and "Rebuild command line" also perform installation. *4
You can now use all the functions, including Edit, Build, Rebuild and Debugging.
Note:
*1 On VC 2003 and 2005, to use the debugging function under Windows XP pro or Windows 2000, a user must belong to a group called "Debugger users". However, Windows XP HE does not provide such a group, so one has to login as an administrator. On VC 2008, such a limitation on users group was lifted.
*2 In order to perform the source level debugging function, makefile must be written in such a way that cl.exe is called with the -Zi option appended to generate debugging information.
*3 If you start Visual Studio by selecting "Start" -> "Programs", environment variables, such as for include directories, are not set. In order to have these variables set, you should open the 'Visual Studio command prompt' to start Visual Studio by typing on VC 2003:
devenv <Project File> /useenv
On VC 2005 express edition and VC 2008 express edition:
vcexpress <Project File> /useenv
*4 You must have write permission for the directory into which you install mcpp. If you install into the 'bin' or 'lib' directory of the compiler system, the permission should be set carefully from an administrator account. It is recommended to make the user account belong to the "Power Users" or "Authenticated Users" group and grant "write" and "modify" permissions on the directory to that group. Another way of handling the permission is to install the compiler system into a directory on which the user has write permission, such as a shared directory.
You can use Xcode.app, which is an IDE on Mac OS X, with mcpp without problems. *1
Xcode.app uses the gcc (g++) in /Developer/usr/bin rather than the one in /usr/bin for some reason. (/Developer is the default install directory for Xcode.) To use mcpp in Xcode.app, you must install the GCC-specific-build for the gcc (g++) in that directory. Do as follows to install it. (${mcpp_dir} means the directory where the source of mcpp is placed.)
export PATH=/Developer/usr/bin:$PATH
${mcpp_dir}/configure --enable-replace-cpp
make
sudo make install
The installation method is the same as that for the gcc in /usr/bin, except for the PATH setting. Please refer to INSTALL for installation for a cross-compiler or installation of a universal binary.
After installing mcpp in this way, you can use Xcode.app without any special setting for mcpp. The Apple-GCC-specific *.hmap files, which are "header map" files generated by Xcode.app, are also read and processed by mcpp. However, mcpp does not process precompiled headers; it processes '#include "header.pch"' as an ordinary #include. Also, mcpp does not preprocess Objective-C and Objective-C++, so *.m and *.mm source files are handed directly to cc1obj and cc1objplus, bypassing mcpp.
When you use mcpp-specific options, specify them as follows:
From screen top menu bar of Xcode.app, select "Project" > "Edit Project Settings".
The "project editor" window will appear.
Then, select "Build" pane of it, and edit "Other C flags" item.
The options should be specified following '-Wp,' and separated by commas, for example:
-Wp,-23,-W3
Note:
*1 Here we refer to Mac OS X Leopard / Xcode 3.0.
mcpp has its own enhancements. Each compiler-system-resident preprocessor has its own enhancements, some of which are not available in mcpp. This section covers these enhancements and their compatibility problems.
In principle, mcpp outputs #pragma lines as they are. This applies also to the #pragma lines processed by mcpp itself, because the compiler-proper may interpret the same #pragma for itself.
However, mcpp does not output lines beginning with '#pragma MCPP', since these are for mcpp only. Nor does it output lines of '#pragma GCC' followed by 'poison', 'dependency' or 'system_header'. Moreover, mcpp outputs neither '#pragma once', '#pragma push_macro' nor '#pragma pop_macro', because they are useless in the later phases. On the other hand, '#pragma GCC visibility *' is output, because it is meant for the compiler and the linker. *1
mcpp compiled with EXPAND_PRAGMA == TRUE expands macros in a #pragma line (actually, EXPAND_PRAGMA is set to TRUE only for the Visual C-specific-build and the Borland C-specific-build). However, #pragma lines whose first token is STDC, MCPP or GCC are never macro-expanded.
#pragma sub-directives are implementation-defined, hence there is a risk that a sub-directive of the same name has different meanings in different compiler systems. Some device is necessary to avoid name collision. Moreover, when EXPAND_PRAGMA == TRUE, there should be a device to keep the name of the #pragma sub-directive itself from being macro-expanded. That is why mcpp-specific sub-directives begin with '#pragma MCPP' and are not subject to macro expansion. This device is adopted from '#pragma STDC' of C99 and '#pragma GCC' of GCC 3.
'#pragma once' is, however, implemented as it is, since this pragma has been implemented in many preprocessors and now has no risk of name collision. '#pragma __setlocale' is prefixed with "__" instead of MCPP because it also has meaning for the compiler-proper, and because the prefix keeps it out of the user name space.
Note:
*1 Of the pragmas starting with GCC, the GCC-specific-build of mcpp supports only '#pragma GCC system_header'. It does not support '#pragma GCC poison' and '#pragma GCC dependency'.
mcpp in Standard mode uses '#pragma MCPP put_defines', '#pragma MCPP preprocess' and '#pragma MCPP preprocessed'. Pre-Standard mode uses #put_defines, #preprocess and #preprocessed. Let me explain, taking the #pragma forms as an example.
When mcpp encounters a '#pragma MCPP put_defines' directive, it outputs all the macros defined at that time in the form of #define lines. Of course, #undef-ed macros are not output. The macros that cannot be #defined or #undef-ed, such as __STDC__, are output in the form of #define lines, but enclosed in comment marks. (Since __FILE__ and __LINE__ are special macros defined dynamically on macro invocation, the replacement list output here means nothing.)
In pre-Standard mode and POSTSTD mode, mcpp does not memorize the parameter names of function-like macro definitions. So these directives mechanically represent the names of the first, second, third, ... parameters as a, b, c, and so on. If the 27th parameter is reached, the names continue with a1, b1, c1, ..., a2, b2, c2, and so on.
If you enter the following directive after invoking mcpp from keyboard without specifying input and output files, all the predefined macros are listed.
#pragma MCPP put_defines
It also outputs a comment to indicate the source file name where each macro definition is found, as well as its line number. If you invoke mcpp with options such as -S1 or -N, you will see a different set of predefined macros.
When mcpp encounters '#pragma MCPP preprocess' directive, it outputs the following line:
#pragma MCPP preprocessed
This indicates that the source file has been already preprocessed by mcpp.
When mcpp encounters a '#pragma MCPP preprocessed' directive, it determines that the source file has been preprocessed by mcpp and outputs the lines it reads as they are, until it encounters a #define line. When mcpp does encounter a #define line, it determines that the rest of the source file is all #define lines and defines the macros. At this time, mcpp memorizes the source file name and line number recorded in the comment. *1, *2
'#pragma MCPP preprocessed' is applied only to the lines that follow the directive in the source file where it is found. If the source file is an #included one, when control returns to the #including file, '#pragma MCPP preprocessed' is no longer applied.
Note:
*1 The actual processing is a little more complex. When mcpp encounters a '#pragma MCPP preprocessed', it outputs the lines it reads just as they are, except for #line lines, which a compiler-specific-build of mcpp converts into a format that the compiler-proper can accept. mcpp disregards the predefined standard macros, because their #define lines are enclosed in comment marks.
*2 Therefore, the information on where a macro definition was found is not lost during pre-preprocessing.
With the above directives, you can "pre-preprocess" header files. Pre-preprocessing considerably reduces the entire preprocessing time. I think the explanation so far has already given you an understanding of how to pre-preprocess header files, but to deepen that understanding, let me explain it using mcpp's own source code as an example.
The mcpp source consists of eight *.c files, of which seven include "system.H" and "internal.H". No other headers are included. The source looks like this:
#if PREPROCESSED
#include "mcpp.H"
#else
#include "system.H"
#include "internal.H"
#endif
system.H includes noconfig.H or configed.H, as well as several standard header files. mcpp.H is not a source file I provide; it is the "pre-preprocessed" header file we are going to generate.
To generate mcpp.H (of course, after setting up noconfig.H and other headers), invoke mcpp as follows:
mcpp > mcpp.H
For compiler systems, such as GCC, also specify the -b option.
Enter the following directives from the keyboard:
#pragma MCPP preprocess
#include "system.H"
#include "internal.H"
#pragma MCPP put_defines
Enter "end-of-file" to terminate mcpp.
This completes mcpp.H, which consists of the preprocessed system.H and internal.H followed by a set of #define lines. Including mcpp.H has the same effect as including system.H and internal.H, but its size is a fraction of that of the original header files, including the standard ones, because #if groups and comments have been eliminated. It takes far less time to include mcpp.H in the seven *.c files than to include system.H and internal.H seven times. This is how '#pragma MCPP preprocess' can save a great deal of time.
On compilation, use the -DPREPROCESSED=1 option.
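For example (a hedged sketch; the compiler driver and file name are illustrative):
cc -c -DPREPROCESSED=1 main.c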
It is recommended to write the above procedure in a file and have the makefile refer to it. The makefile and preproc.c appended to the mcpp sources contain this procedure; please refer to them.
Although the use of an independent preprocessor is limited with one-pass compilers like Visual C, Borland C or LCC-Win32, the pre-preprocessing facility is useful even for them.
The pre-preprocessing facility of header files is similar to that of the -dD option of GCC, but it differs from it in that:
As far as the pre-preprocessing facility is concerned, mcpp is more accurate and practical than GCC.
The #pragma once directive is available in Standard mode.
#pragma once is also available in GCC, Visual C, LCC-Win32 and the compiler-independent preprocessor called Wave.
This directive is used when you want to include a header file only once. With the following directive in a header file, mcpp includes the header file only once even if a #include line for that file appears many times.
#pragma once
Usually, compiler-system-specific standard header files prevent duplicate definitions by using the following code:
#ifndef __STDIO_H
#define __STDIO_H
/* Contents of stdio.h */
#endif
#pragma once provides similar functionality. However, the macro method always involves reading the whole header file. (The preprocessor cannot skip over the code as people do; it must read the entire header file looking for #if and #endif; it must read a comment before it can determine whether a line is a directive line, that is, a line with # at the beginning followed by a preprocessing directive; to do so, it must also identify string literals; after all, it must read through the entire header file and perform most of the tokenization.) #pragma once eliminates the need even to access the header file again, resulting in improved processing speed for multiple inclusions.
To determine whether two header files are identical, the file name characters, including the directory names in the search path, are compared. Windows is not case sensitive, while UNIX-like systems are. Therefore, "/DIR1/header.h" and "/DIR2/header.h" are regarded as distinct everywhere, whereas "header.h" and "HEADER.H" are regarded as the same on Windows but as distinct on UNIX-like systems. A directory is memorized after conversion to an absolute path, and a symbolic link on UNIX systems is memorized after dereferencing. Moreover, the path-list is normalized by removing redundant parts such as "foo/../". So identical files are always determined correctly. *1, *2, *3
I borrowed the idea of #pragma once from GCC V.1.*/cpp. GCC V.2.* and V.3.* still have this functionality, but it is regarded as obsolete. The specification of GCC V.2.*/cpp was changed as follows: if the entire header file is enclosed in #ifndef _MACRO, #define _MACRO and #endif, cpp memorizes it and inclusion occurs only once, even without #pragma once.
However, this GCC V.2 and V.3 specification sometimes does not work for commercially available compiler systems that are not based on the GCC conventions, due to differences in the standard header file notation. In addition, the GCC V.2 and V.3 specification is more complex to implement. For these reasons, I decided to implement only #pragma once.
As with other preprocessors, it is not advisable to rely on #pragma once alone when the same header files may be used elsewhere. It is recommended to combine #pragma once with macros as follows:
#ifndef __STDIO_H
#define __STDIO_H
#pragma once
/* Contents of stdio.h */
#endif
Note that #pragma once must not be written in <assert.h>. For the reason, see cpp-test.html#5.1.2. The same thing can be said with <cassert> and <cassert.h> of C++.
Another problem is that the recent GCC/GLIBC systems have header files, like <stddef.h>, which are repeatedly #included by other system headers. Those headers define macros such as __need_NULL, __need_size_t and __need_ptrdiff_t, and then #include <stddef.h>; each time they do so, definitions such as NULL, size_t and ptrdiff_t are made in <stddef.h>. The same goes for <errno.h> and <signal.h>, and even for <stdio.h>: other system headers define macros such as __need_FILE and __need___FILE and then #include <stdio.h>, and each time they do so, definitions such as FILE may be made in <stdio.h>. #pragma once cannot be used in such header files. *4
Note:
*1 The normalized result can be seen with '#pragma MCPP debug path'. See 3.5.1.
'#pragma MCPP put_defines' and the diagnostic messages use the same result, too.
However, the path-list in a #line line is usually not normalized.
By default, the #line line is output as specified in the #include line, with the normalized include path, if any, prepended.
But if the -K option is specified, it is normalized so that it can easily be utilized by other tools.
*2 On CygWIN, /bin and /usr/bin are actually the same directory, as are /lib and /usr/lib; and supposing / is C:/dir/cygwin on Windows, /cygdrive/c/dir/cygwin is the same as /. mcpp treats these directories as the same, converting the path-list to the format /cygdrive/c/dir/cygwin/dir-list/file.
*3 On MinGW, / and /usr are actually the same directory. Supposing / is C:/dir/msys/1.0, /c/dir/msys/1.0 is the same as /; and supposing /mingw is C:/dir/mingw, /c/dir/mingw is the same as /mingw. mcpp treats each of these as the same directory, converting the path-list to the format c:/dir/msys/1.0/dir-list/file or c:/dir/mingw/dir-list/file.
*4 This applies at least to Linux / GCC 2.9x, 3.* and 4.* / glibc 2.1, 2.2 and 2.3. FreeBSD 4, 5 and 6 have much simpler system headers because they do not use glibc.
With a small number of header files, writing #pragma once into them does not require much effort, but it would be tremendous work if there are many header files. I wrote a simple tool to insert it into header files automatically.
tool/ins_once.c is a tool written for old versions of GCC. As Borland C 5.5 conforms to the same standard header file notation as GCC, the tool can be used there too. However, it is advisable not to use this tool on systems like glibc 2, which have the many exceptions shown above.
Even in compiler systems where the tool can be used, some header files do not strictly conform to the GCC notation. GCC's read-once functionality also does not work properly for those header files.
Compile ins_once.c and perform the following command in a directory, such as /usr/include or /usr/local/include, under UNIX.
chmod -R u+w *
and then execute ins_once as follows:
ins_once -t *.h */*.h */*/*.h
Ins_once reports header files that do not begin with #ifndef or #if !defined. Manually modify these files. Then, execute ins_once as follows:
ins_once *.h */*.h */*/*.h
If the first directive in a header file is #ifndef or #if !defined, a #pragma once line is inserted immediately below that line. Only the root user or a user with appropriate permission can make this modification. If you changed the access permissions, use 'chmod -R u-w *' to restore the original permissions.
Ins_once provides the following options. Select the most appropriate one for your system.
ins_once roughly checks that it writes a #pragma once line only once in the same header file even if it is executed several times, but the check is not very strict. As ins_once is of a temporary and tentative nature, it scarcely performs tokenization. It worked as I expected with FreeBSD 2.0 and 2.2.7 and Borland C 5.5, but it may not work properly for special header files. So, before executing this tool, be sure to make a backup of the original files.
Have the shell expand the wild-cards. (In case of buffer overflow, execute ins_once several times, specifying some of your system header files each time.)
These directives are provided for compatibility with GCC, which provides the #include_next and #warning directives. Although these directives are non-conforming, not only do some source programs use them, but some glibc 2 system header files do too. Taking this situation into consideration, I implemented the #include_next and #warning directives in the GCC-specific-build to allow compilation of such source programs; however, mcpp issues a warning when it finds these directives. Regardless of the compiler system mcpp is ported to, mcpp in Standard mode also implements #pragma MCPP warning.
With the following directive, mcpp skips the directory of the current file and starts searching for header.h from the next directory of the search path.
#include_next <header.h>
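For instance, a wrapper header of the same name placed in an earlier include directory (a hypothetical example) can pull in the original one:
/* mydir/stdio.h -- hypothetical wrapper */
#include_next <stdio.h>     /* continues the search for stdio.h in the following directories */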
CygWIN and MinGW ignore the distinction of alphabetical case in header names.
The following code outputs 'any message' to stderr as a warning message:
#pragma MCPP warning any message
#warning any message
Unlike #error, this is not counted as an error.
When I ported mcpp to Visual C, I implemented these directives in mcpp, and then made them available for other systems.
'#pragma MCPP push_macro( "MACRO")' and '#pragma MCPP pop_macro( "MACRO")' are used to "push" or "pop" a macro definition (MACRO) to the current macro definition stack.
'#pragma push_macro( "MACRO")' and '#pragma pop_macro( "MACRO")' are also available for Visual C.
push_macro saves a macro definition onto the stack, and pop_macro retrieves it. The pushed macro definition remains valid after push_macro; to invalidate it, use #undef or redefine the macro with a new definition. push_macro can be used multiple times for a macro of the same name.
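A minimal sketch of the intended use (a hypothetical example following the description above):
#define BUFSIZE 1024
#pragma MCPP push_macro( "BUFSIZE")
#undef  BUFSIZE
#define BUFSIZE 4096
/* code that needs the temporary definition */
#pragma MCPP pop_macro( "BUFSIZE")      /* BUFSIZE is 1024 again */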
'#pragma __setlocale( "<encoding>")' changes the current multi-byte character encoding to <encoding>. The argument of setlocale must be a string literal. For <encoding>, refer to 2.8. This directive allows you to use several encodings in one translation unit.
In Visual C++, '#pragma __setlocale' cannot be used. Use '#pragma setlocale' instead. Encoding specification must be conveyed not only to mcpp but also to the compiler-proper. The latter can recognize only #pragma setlocale. For other compiler systems, when the compiler-proper cannot recognize an encoding, mcpp complements it.
There is not yet any compiler-proper which can recognize '#pragma __setlocale'.
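A minimal sketch of switching the encoding within one translation unit (a hypothetical example; see 2.8 for the encoding names):
#pragma __setlocale( "sjis")    /* the following lines are read as shift-JIS */
/* ... */
#pragma __setlocale( "")        /* restore the default encoding */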
'#pragma MCPP debug' and '#pragma MCPP end_debug' are for Standard mode. #debug and #end_debug are for pre-Standard mode.
The '#pragma MCPP debug <args>' directive can be written anywhere in a source program. <args> specifies the types of debug information; one #pragma MCPP debug directive can take several <arg>s, and at least one <arg> must be specified. mcpp begins to output debug information when it finds this directive and stops when it encounters '#pragma MCPP end_debug <args>'. The <args> of end_debug can be omitted, in which case all types of debug information are turned off. If <args> contains an argument not supported by mcpp, mcpp issues a warning, but all the preceding arguments are regarded as valid.
All debug information is output to the same path as the preprocessing output, in order to synchronize with it. Therefore, this directive usually prevents compilation. '#pragma MCPP debug macro_call', however, outputs its information embedded in comments, so its output can be re-preprocessed and compiled.
When you noticed something was wrong with the preprocessing result, enclose the coding you want to debug with the following directives, for example:
#pragma MCPP debug token expand
/* Coding you want to debug */
#pragma MCPP end_debug
As this directive was originally used for debugging mcpp itself, it was not developed with end users in mind. You may not understand its behavior unless you read the source code, and you may sometimes feel that it outputs too much information, but it is useful for tracing the preprocessing process. Be patient.
The following debug information types can be specified with <arg>.
path        Displays the include file search path.
token       Parses tokens one by one and displays each token and its type.
expand      Traces the macro expansion process.
macro_call  Embeds macro notifications into comments on each macro definition and macro expansion.
if          Displays the result (true or false) of #if, #elif, #ifdef and #ifndef.
expression  Traces #if expression evaluation.
getc        Traces preprocessing byte by byte.
memory      Displays the status of the heap memory used by mcpp.
With these directives, mcpp displays include directories in the search path (excluding the current and source directories with which search begins) in the order of priority, starting with the highest one first.
In addition, with a #include directive, mcpp displays all the directories, including the current one, it actually searched for the #include file.
When a header file with #pragma once specified is #included again, the message to that effect is displayed.
Moreover, mcpp normalizes the path-list, removing redundant parts such as "foo/../", and displays the result when the normalized path-list differs from the original one.
mcpp also dereferences a symbolic link to its linked file and displays the result when a conversion has occurred.
With these directives, mcpp displays each source line it reads, and then displays each token on that line along with its type as it reads them. This token is, more precisely, a preprocessing-token (pp-token). Not only pp-tokens on a source line but also those which mcpp reads again internally during macro expansion are displayed repeatedly.
However, the following 1-byte tokens are not displayed, for the convenience of the mcpp program:
A pp-token has the following types:
NAM   Identifier
NUM   Preprocessing-number
OPE   Operator or punctuator
STR   String literal
WSTR  Wide string literal
CHR   Character constant
WCHR  Wide character constant
SPE   Special pp-tokens, such as $ and @
SEP   Token separator (white space)
Of the SEP tokens, those other than <newline> are not normally displayed. Control codes such as <newline> are displayed as <^J> or <^M>.
With these directives, mcpp traces the expansion process of a macro invocation. When mcpp in Standard mode encounters a #pragma MCPP debug, it behaves as follows:
If there is a macro invocation, mcpp displays the macro definition. Each argument is read, the argument is substituted for the corresponding parameter in the replacement list and the replacement list is rescanned. mcpp displays this whole process. In case of nested macro definitions, they are rescanned and expanded one by one. If an argument has a macro, mcpp traces the above process recursively before parameter substitution.
Each time control is passed to and returned from a certain set of mcpp internal functions, mcpp displays trace information along with the function name. The following table shows the roles of these functions. Reading the mcpp source code will give you a concrete idea of what each function does.
expand_macro  Entrance routine for macro expansion.
replace       Expands a macro one level down.
collect_args  Collects arguments.
prescan       Scans a replacement list and processes the # and ## operators.
substitute    Substitutes parameters with arguments.
rescan        Rescans a replacement list.
Except for expand_macro, the above functions are indirectly recursive with each other.
For replace and collect_args, mcpp displays the data it stacks internally during macro expansion. This data is displayed using the following internal codes:
<n>       The n'th parameter
<TSEP>    Token delimiter inserted by mcpp
<MAGIC>   Code that inhibits re-replacement of the macro of the same name
<RT_END>  Code that indicates the end of a replacement list
<SRC>     Code that indicates an identifier taken from the source file while rescanning
<SRC> is used only in STD mode, and is not used in POSTSTD mode nor in COMPAT mode.
It is recommended that '#pragma MCPP debug token' should be also used.
If you also specify '#pragma MCPP debug macro_call' or the -K option, macro notifications are output embedded in comments. However, in replace() and its subordinate routines, some magic characters (internal codes) are written into or removed from the input stream instead of comments. These magic characters are displayed as:
<MACm>       Call of the m'th macro contained in one macro call
<MAC_END>    The end of the macro call started by the previous MACm
<MACm:ARGn>  The n'th argument of the m'th macro call
<ARG_END>    The end of the argument started by the previous MACm:ARGn
If you also specify the -v option, the MAC_END and ARG_END markers display the same numbers as the corresponding starting markers.
For #debug expand, mcpp uses internal routines considerably different from those used for Standard mode. The explanations are omitted.
With these directives, mcpp displays #if, #elif, #ifdef and #ifndef lines and reports their evaluation result (true or false). As for a skipped #if section, no report is made.
With these directives, mcpp traces the evaluation of a #if or #elif expression. DECUS cpp, on which mcpp is based, provided these directives for the purpose of debugging cpp itself, and I have scarcely modified them. This directive outputs a very long list of internal functions, as well as variable names and their values. Unless you read the mcpp source code, you may not understand these variables. Even without the source code, however, you can manage to see how mcpp pushes the values of a complex expression onto an evaluation stack and pops them off.
With these directives, mcpp outputs detailed data each time it calls get_ch(), a function to read one byte. When mcpp in Standard mode scans a pp-token, this routine is called to read only the first byte of the pp-token.
With #debug getc, mcpp calls this routine throughout the token scan, resulting in a tremendous amount of output.
Either way, these directives output a huge amount of data, so you will scarcely need to use them.
With these directives, mcpp reports, once, the status of the heap memory it has internally allocated or released using malloc(), realloc() or free(). Only the kmmalloc I developed and some other types of malloc() provide this functionality; refer to mcpp-porting.html#4.extra. With other malloc() implementations, mcpp neither causes an error nor reports a status.
mcpp reports the heap memory status again when it terminates with these directives on. The same happens when mcpp terminates due to running out of memory.
With this directive, mcpp starts macro notification mode.
In this mode, on each macro definition and each macro expansion, its line and column information in the source file is output embedded in comments.
On a macro call with arguments, location information for each argument is reported too.
Token concatenation by a macro, however, may cause loss of the macro information about the tokens before concatenation.
In addition, some information is output on #undef, #if (#elif, #ifdef, #ifndef) and #endif, too.
This mode is specified also by -K option.
Macro notification mode is designed to allow reconstruction of the original source position from the preprocessed output. The primary purpose of this mode is to allow C/C++ refactoring tools to refactor source code without having to implement a special-purpose C preprocessor. This mode is also handy for debugging macro expansions. *1
The goal of macro notification mode is to annotate every macro expansion while still allowing the output to be compiled. '#pragma MCPP debug expand', on the other hand, traces macro expansion and outputs detailed information, but its output is not compilable.
Note:
*1 Most of the specifications of macro notification mode were proposed by Taras Glek, who is working on refactoring of sources at the Mozilla project:
http://blog.mozilla.com/tglek/
For example, macro definition directives such as:
#define NULL 0L
#define BAR(x, y) x ## y
#define BAZ(a, b) a + b
produce the following output:
/*mNULL 1:9-1:16*/
/*mBAR 2:9-2:25*/
/*mBAZ 3:9-3:24*/
where the format means:
/*m[NAME] [start line]:[start column]-[end line]:[end column]*/
Line and column numbers start from 1. When you specify the -K option, predefined macros are output too; they have no location information.
#undef BAZ
This line produces the output:
/*undef 10*//*BAZ*/
The [lnum] and [NAME] in the /*undef [lnum]*//*[NAME]*/ format indicate the line number of the directive and the name of the undefined macro.
Within source code other than directives, macros are expanded with markers indicating the start and end of each macro expansion. The format allows for HTML-like nesting: /*<...*/ signals the start of a macro expansion and /*>*/ the end. The start-of-expansion marker takes the following format, which is the macro definition format with /*m replaced by /*<:
/*<[NAME] [start line]:[start column]-[end line]:[end column]*/
For a macro with arguments, markers indicating the source location of each argument and markers indicating the start and end of each argument's expansion are output too. The marker for an argument's location takes the form /*!...*/. When a macro is found in an argument, information on that macro is output recursively, with its location information if it appears in the source file. Macro argument markers also have a disambiguating naming scheme. An argument name is of the format:
[func-like-macro-name]:[nesting level]-[argument number]
This way, if someone calls 'BAZ(BAZ(a,b), c)', it is possible to distinguish nested macros of the same name and their arguments from each other. The argument number starts from 0. The location format then follows it as:
[start line]:[start column]-[end line]:[end column]
The marker for the start of an argument also takes the format:
/*<[func-like-macro-name]:[nesting level]-[argument number]*/
The marker for the end of an argument is the same as the one for the end of a macro expansion: /*>*/.
The following lines:
foo(NULL);
foo(BAR(some_, var));
foo = BAZ(NULL, 2);
bar = BAZ(BAZ(a,b), c);
expand to:
foo(/*<NULL 4:5-4:9*/0L/*>*/);
foo(/*<BAR 5:5-5:20*//*!BAR:0-0 5:9-5:14*//*!BAR:0-1 5:16-5:19*/some_var/*>*/);
foo = /*<BAZ 6:7-6:19*//*!BAZ:0-0 6:11-6:15*//*!BAZ:0-1 6:17-6:18*//*<BAZ:0-0*//*<NULL 6:11-6:15*/0L/*>*//*>*/ + /*<BAZ:0-1*/2/*>*//*>*/;
bar = /*<BAZ 7:7-7:23*//*!BAZ:0-0 7:11-7:19*//*!BAZ:0-1 7:21-7:22*//*<BAZ:0-0*//*<BAZ 7:11-7:19*//*!BAZ:1-0*//*!BAZ:1-1*//*<BAZ:1-0*/a/*>*/ + /*<BAZ:1-1*/b/*>*//*>*//*>*/ + /*<BAZ:0-1*/c/*>*//*>*/;
Moreover, when the -v option is specified along with -K, the markers for the end of a macro expansion and for the end of an argument expansion also output the same macro name and numbers as their starting markers:
foo(/*<NULL 4:5-4:9*/0L/*NULL>*/);
foo(/*<BAR 5:5-5:20*//*!BAR:0-0 5:9-5:14*//*!BAR:0-1 5:16-5:19*/some_var/*BAR>*/);
foo = /*<BAZ 6:7-6:19*//*!BAZ:0-0 6:11-6:15*//*!BAZ:0-1 6:17-6:18*//*<BAZ:0-0*//*<NULL 6:11-6:15*/0L/*NULL>*//*BAZ:0-0>*/ + /*<BAZ:0-1*/2/*BAZ:0-1>*//*BAZ>*/;
bar = /*<BAZ 7:7-7:23*//*!BAZ:0-0 7:11-7:19*//*!BAZ:0-1 7:21-7:22*//*<BAZ:0-0*//*<BAZ 7:11-7:19*//*!BAZ:1-0*//*!BAZ:1-1*//*<BAZ:1-0*/a/*BAZ:1-0>*/ + /*<BAZ:1-1*/b/*BAZ:1-1>*//*BAZ>*//*BAZ:0-0>*/ + /*<BAZ:0-1*/c/*BAZ:0-1>*//*BAZ>*/;
As you see in this example, every ending marker corresponds to the last preceding starting marker of the same nesting level. Hence, you can determine their correspondence automatically even without the -v option.
On #if (#elif, #ifdef, #ifndef) lines, information on the macros in the line is shown.
For example, this is bar.h:
#define NULL 0L
#define BAR(x, y) x ## y
#define BAZ(a, b) a + b
And here is foo.c:
#include "bar.h" #ifdef BAR #ifndef BAZ #if 1 + BAR( 2, 3) #endif #else #if 1 #endif #if BAZ( 1, BAR( 2, 3)) #undef BAZ #endif #endif #endif
Then, foo.c produces the following output:
#line 1 "/dir/foo.c" #line 1 "/dir/bar.h" /*mNULL 1:9-1:16*/ /*mBAR 2:9-2:25*/ /*mBAZ 3:9-3:24*/ #line 2 "/dir/foo.c" /*ifdef 2*//*BAR*//*i T*/ /*ifndef 3*//*BAZ*//*i F*/ /*else 6:T*/ /*if 7*//*i T*/ /*endif 8*/ /*if 9*//*BAZ*//*BAR*//*i T*/ /*undef 10*//*BAZ*/ #line 11 "/dir/foo.c" /*endif 11*/ /*endif 12*/ /*endif 13*/
As you see, on a #if line, the annotation starts with the /*if [lnum]*/ format, where [lnum] indicates the current line number. If any macros are found in the line, a /*[NAME]*/ marker follows for each of them. The annotation terminates with /*i T*/ or /*i F*/, which indicates whether the directive evaluated to true or false, respectively. Unlike a macro on lines other than directives, the expansion result is not displayed. On a line such as '#if 1', which has no macro, no /*[NAME]*/ is displayed.
Likewise, annotations on #elif, #ifdef and #ifndef start with /*elif [lnum]*/, /*ifdef [lnum]*/ and /*ifndef [lnum]*/, respectively, followed by /*[NAME]*/ markers if any macros are found, and terminate with /*i T*/ or /*i F*/.
In any block where compilation is to be skipped, no annotation is displayed.
On an #else line, as in the example above, information is displayed in the /*else [lnum]:[C]*/ format, where [lnum] is the current line number and [C] is 'T' or 'F', indicating whether the #else - #endif block is to be compiled or skipped.
On an #endif line, as in the example above, information is displayed in the /*endif [lnum]*/ format, where [lnum] indicates the current line number. Of course, the #endif corresponds to the last #if (#ifdef, #ifndef) that is not yet closed.
In addition, in macro notification mode, the output format of the filename on #line lines differs from the default: the filename is output as a "normalized" full path (see 3.2). This is for the convenience of writing refactoring tools.
#assert is available in pre-Standard mode, except in the GCC-specific build. #assert provides functionality equivalent to the #error directive of Standard C. The following Standard C code:
#if ULONG_MAX/2 < LONG_MAX
#error Bad unsigned long handling.
#endif
can be expressed as:
#assert LONG_MAX <= ULONG_MAX/2
The argument of #assert is evaluated as a #if expression. If it evaluates to true (non-zero), mcpp does nothing; if false (0), it displays the following message and then the argument line (after line splicing and comment processing):
Preprocessing assertion failed
mcpp counts this as an error but continues processing.
This #assert is quite different from that of System V or GCC.
mcpp in pre-Standard mode regards a block enclosed with the #asm and #endasm directives as assembler coding. mcpp implements this functionality for Microware C/6809 only. To implement this functionality in other compiler systems, do_old() and put_asm() in system.c must be modified.
For an #asm block, mcpp performs trigraph conversion and deletes <backslash><newline> sequences, but it performs no comment processing, no token or character checking, and no deletion of white-space characters at the beginning of a line. It also does not expand a token that happens to have the same name as a macro; the token is output as it is. Other directive lines have no meaning within an #asm block.
These #asm and #endasm directives do not conform to Standard C. In the first place, extension directives in any form other than "#pragma sub-directive" are not Standard C conforming, and changing their names to #pragma asm and #pragma endasm does not solve the problem. In Standard C, the source must consist of a C token sequence (more precisely, a preprocessing token sequence), and an assembler program is not a C token sequence. To use assembly code in Standard C, there is no way but to embed it in a string literal token. You then have to implement a built-in function in the compiler-proper that processes that string literal, and call it as follows:
asm ( " leax _iob+13,y\n" " pshs x\n" );
However, this is not suitable for longer assembly code; in that case, you had better write the assembly code in a separate file, like a library function, and assemble and link it into the program. This may seem inconvenient, but separating the assembler portion completely is necessary to write a portable C program. It is recommended that you write assembly code in a separate file rather than using #asm.
These features are available in Standard mode. The -V199901L option sets __STDC_VERSION__ to 199901L and enables the following C99 features. The same can be said of C++ when the -V option sets __cplusplus to 199901L or higher. Although the C++ Standard does not provide for these features (other than 1 and 7), mcpp in Standard mode provides them for better compatibility with C99. Standard mode also allows variadic macros even in C90 and C++ modes. *1 For example, here is a variadic macro definition:
#define debug(...) fprintf(stderr, __VA_ARGS__)
Here is a macro invocation:
debug( "X = %d\n", x);
This macro is expanded as follows:
fprintf(stderr, "X = %d\n", x);
The ... in the parameter list corresponds to one or more arguments, and __VA_ARGS__ in the replacement list corresponds to the .... During a macro invocation, the arguments that correspond to ..., including the commas that separate them, are merged and treated as one argument.
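As a rough sketch of this merging, using the debug() macro defined above (the variables x and y are just illustrative), all the arguments of the invocation, together with the commas between them, become the single __VA_ARGS__ argument:

debug( "X = %d, Y = %d\n", x, y);
    /* expands to:  fprintf(stderr, "X = %d, Y = %d\n", x, y); */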
_Pragma( "foo bar") has the same effect as specifying #pragma foo bar. The argument of the _Pragma() operator must be one string literal or wide string literal. For a wide string, the prefix (L) is deleted and treated as same as a string literal. For a string literal, " enclosing that string literal is deleted, and \" and \\ in that literal is replaced with " and \, respectively, before it is treated as a #pragma argument.
A #pragma directive must be written on one logical line by itself, and its argument is not macro-expanded, at least in C90. The _Pragma() operator, on the other hand, can be written anywhere in source code (even in a replacement list) and has the same effect as a #pragma written on a logical line. A _Pragma() operator generated during macro expansion is also valid. This flexibility gives the pragma facility a wide range of portability and allows a header file to absorb the differences in #pragma among compiler systems. (For a sample, see pragmas.h and pragmas.t of the "Validation Suite".) *2
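As a minimal sketch of this flexibility (PRAGMA and PACK_ON are hypothetical names, and whether the compiler-proper honors the pack pragma is compiler-specific), a _Pragma() operator can be produced by macro expansion, which a #pragma line cannot:

#define PRAGMA(x)   _Pragma(#x)         /* stringize the argument, then let _Pragma() destringize it */
#define PACK_ON     PRAGMA(pack(1))

PACK_ON     /* has the same effect as the line:  #pragma pack(1) */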
C99 stipulates that a #if expression is evaluated in the widest integer type. As "long long" and "unsigned long long" are required types, the type of a #if expression is "long long / unsigned long long" or wider. C90 and C++98 stipulate that the type is long / unsigned long. mcpp, however, evaluates the expression in long long / unsigned long long even in C90 and C++98, and issues a warning when the value is out of the range of long / unsigned long. *1
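A minimal sketch of this evaluation, assuming a system where long is 32 bits: mcpp in C90/C++98 mode would evaluate the comparison correctly in (unsigned) long long, while warning that the constant exceeds the range of unsigned long.

#if 0x100000000 > 0xFFFFFFFF    /* 2^32 > 2^32 - 1 : true, but needs more than 32 bits */
    /* this branch is taken */
#endif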
Note:
*1 This is for compatibility with GCC and Visual C++ 2005 and 2008. It is difficult for other compiler systems, too, to implement the C99 specifications all at once; probably they will implement them little by little, with __STDC_VERSION__ set to 199409L or so in the meantime.
*2 C99 says that a #pragma argument that begins with STDC is not macro-expanded. For other #pragma arguments, whether macros are expanded is implementation-defined.
Compiler-specific builds of mcpp have some specifications peculiar to each compiler system. Such specifications, other than execution options and #pragma, are explained in this section.
GCC has had a variadic macro specification of its own since V.2, as shown in 3.9.1.6; in this manual we call it the GCC2-spec variadic macro. GCC implemented one more variadic spec from V.3, shown in 3.9.6.3, which we call the GCC3-spec variadic macro. GCC V.2.95 and later also implement C99 variadic macros. Nevertheless, software such as glibc and the Linux system headers uses neither the C99 variadic nor even the GCC3 spec, and still uses the GCC2 spec.
The GCC-specific build of mcpp in STD mode implemented GCC3-spec variadics from V.2.6.3, and GCC2-spec ones from V.2.7, in order to avoid inconveniences with Linux and some other software. Still, mcpp warns on their use. GCC-spec variadics, especially the GCC2 spec, are not only unportable but also syntactically unclean; their use in newly written sources is not recommended.
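The following sketch contrasts the three notations mentioned above (the EPRINTF_* names are illustrative; only the C99 form is portable across compilers):

#define EPRINTF_GCC2(fmt, args...)  fprintf(stderr, fmt, ##args)          /* GCC2 spec */
#define EPRINTF_GCC3(fmt, ...)      fprintf(stderr, fmt, ##__VA_ARGS__)   /* GCC3 spec */
#define EPRINTF_C99(...)            fprintf(stderr, __VA_ARGS__)          /* C99       */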
Visual C had not implemented variadic macros up to 2003, but Visual C 2005 finally did. Its specification is the C99 one with a modification similar to the GCC3 spec: when the variable arguments are absent, Visual C removes the immediately preceding comma. It does not use the '##' token used in the GCC3 spec. The specification is illustrated below. The Visual C documentation says that in the third example the comma is not removed; in actuality, however, it removes the comma even in this case.
#define EMPTY
#define VC_VA( format, ...)     printf( format, __VA_ARGS__)
VC_VA( "var_args: %s %d\n", "Hello", 2005);
        /* printf( "var_args: %s %d\n", "Hello", 2005); */
VC_VA( "absence of var_args:\n");
        /* printf( "absence of var_args:\n"); */
VC_VA( "empty var_args:\n", EMPTY);
        /* printf( "empty var_args:\n", ); */  /* trailing comma */
The Visual C-specific build of mcpp has implemented this special handling in STD mode since V.2.7, though it warns on use of the spec. For the third example above, mcpp takes the documented behavior as its spec and does not remove the comma.
When a macro containing the 'defined' token appears on a #if line, GCC expands it differently from usual. This problem is explained in 3.9.4.6.
The GCC-specific build of mcpp V.2.7 onward handles this sort of macro like GCC when in STD mode. This behavior was implemented to cope with a few incorrect macros used in the Linux system headers. Still, mcpp warns when a 'defined' token appears in a macro on a #if line; you should write correct #if expressions.
Borland C has the asm keyword. This keyword is used to write assembly code as follows:
asm { mov x,4; ...; }
This is quite irregular and deviates from the C grammar. If a token in the block happens to have the same name as a macro, it will be macro-expanded; this applies both to Borland C itself and to mcpp. It is recommended that an assembler program be written in a separate .asm file. mcpp does not treat this construct specially.
Visual C++ also has the __asm keyword, which provides similar functionality.
GCC provides a Standard-conforming built-in function asm(), which is used as asm( " mov x,4\n").
GCC has the #import directive, a feature of Objective-C imported into C/C++. #import is a #include with an implicit '#pragma once'. It is occasionally used in C/C++ sources on Mac OS X.
mcpp V.2.7 and later implements this directive only on Mac OS X, in both the GCC-specific build and the compiler-independent build.
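A minimal sketch of the behavior (the header name is hypothetical):

#import "foo.h"     /* read foo.h, as #include would, and remember it */
#import "foo.h"     /* skipped: foo.h has already been read           */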
Visual C has peculiar directives named #import and #using; they have the form of preprocessing directives, but in fact they are directives for the compiler and linker. The #import of Visual C has no relation to that of GCC.
The Visual C-specific build of mcpp outputs these lines as they are.
Although I tried to develop mcpp so that the GCC-specific build provides compatibility with GCC/cpp (cc1) to an extent that does not hinder practical use, it is still incompatible in many respects.
First of all, as shown in Chapter 2, there are many differences in execution options. mcpp implements neither the -A option nor non-conforming directives such as #assert and #ident. *1
Fortunately, there seem to be quite few sources that cannot be compiled because of this lack of compatibility.
It is more problematic that some sources assume the special behaviors of old preprocessors. Most such source code receives a warning when -pedantic is specified to GCC. mcpp in Standard mode, by default, behaves almost the same as GCC with -pedantic, since it implements Standard-conforming error checking. However, since GCC/cpp by default allows such Standard violations without issuing a diagnostic, some sources take advantage of this.
In most cases it is very easy to rewrite such non-conforming code into Standard-conforming code, so there is no point in taking the trouble to write non-conforming code that only impairs portability and, what is worse, provides a hotbed of bugs. When you find such code, do not hesitate to correct it. *2
Note:
*1 The functionality of #assert and #ident should be implemented using #pragma, if necessary. The same can be said of #include_next and #warning, but those directives seem to be used occasionally on GCC systems, so I grudgingly implemented them in the GCC-specific build; a warning is issued when they are used.
*2 Sections 3.9 through 3.9.3 of this document were written in 1998, when sources depending on traditional preprocessors were frequently found. Since then, such sources have greatly decreased; on the other hand, sources depending on local features and implementation trivialities of GCC have much increased. Sections 3.9.4 and later, especially 3.9.8 and later, describe mainly such problems. (2008/03)
Taking the FreeBSD 2.2.2-R (1997/05) kernel source code as an example, this section explains some preprocessing problems. All the directories that appear in this section are under /sys (/usr/src/sys). Of the items pointed out below, 3.9.1.7 and 3.9.1.8 are not necessarily Standard violations and work as expected in mcpp, but mcpp issues a warning because their coding is confusing. 3.9.1.6 is an enhancement, and C99 provides the same functionality, but it differs from GNU C/cpp in notation.
Assembly code is embedded in the following manner in i386/apm/apm.c, i386/isa/npx.c, i386/isa/seagate.c, i386/scsi/aic7xxx.h, dev/aic7xxx/aic7xxx_asm.c, dev/aic7xxx/symbol.c, gnu/ext2fs/i386-bitops.h, pc98/pc98/npx.c:
asm(" asm code0 #ifdef PC98 asm code1 #else asm code2 #endif ... ");
When no " closing a string literal appears by the end of line, GCC/cpp, by default, interprets that the string literal ends at the end of line. The above coding is based on this specification. In addition, the compiler-proper seems to interpret the whole content of asm() as a string literal spreading across lines.
I think that assembler source code should be written in a separate file, but if you want to embed it in a ".c" file by all means, write it in the following manner instead of using the confusing coding shown above.
asm( " asm code0\n" #ifdef PC98 " asm code1\n" #else " asm code2\n" #endif " ...\n" );
Standard C conforming preprocessors will accept it.
The following line appears in ddb/db_run.c, netatalk/at.h, netatalk/aarp.c, net/if-ethersubr.c, i386/isa/isa.h, i386/isa/wdreg.h, i386/isa/tw.c, i386/isa/b004.c, i386/isa/matcd/matcd.c, i386/isa/sound/sound_calls.h, i386/isa/pcvt/pcvt_drv.c, pci/meteor.c, and pc98/pc98/pc98.h:
#endif MACRO
This line should be changed to:
#endif /* MACRO */
To my surprise, i386/apm/apm.c contains the following strange line:
#ifdef 0
Of course, this should be written as:
#if 0
This code must have been neither debugged nor used.
gnu/i386/isa/dgb.c has a duplicate definition of the following macro:
#define DEBUG
Some of header files have a macro definition conflicting with this.
Standard C regards duplicate (non-identical) definitions as a constraint violation, but how they are treated depends on the compiler system: some make the first definition valid after issuing an error message, and others, like GCC 2/cpp, make the last definition valid without issuing any message by default. To make the last definition valid portably, the following line should be added immediately before the last definition.
#undef DEBUG
i386/isa/if_ze.c and i386/isa/if_zp.c have the #warning directive. This is the only non-conforming directive I found in the kernel source. To conform to Standard C, there is no way but to comment out these lines.
mcpp accepts #warning.
gnu/ext2fs/ext2_fs.h and i386/isa/mcd.c have the following macros that take a variable number of arguments:
#define MCD_TRACE(fmt, a...) \
{ \
    if (mcd_data[unit].debug) { \
        printf("mcd%d: status=0x%02x: ", \
            unit, mcd_data[unit].status); \
        printf(fmt, ## a); \
    } \
}

# define ext2_debug(fmt, a...) { \
    printf("EXT2-fs DEBUG (%s, %d): %s:", \
        __FILE__, __LINE__, __FUNCTION__); \
    printf(fmt, ## a); \
}
This is a GCC-specific enhancement and cannot be used with other compiler systems. The "## a" above could simply be written as "a". With ##, when the argument corresponding to "a..." is absent in a macro invocation, the preceding comma is deleted. C99 also provides variadic macros, but its notation differs from GCC's. The above example is written as follows in C99:
#define MCD_TRACE( ...) \
{ \
    if (mcd_data[unit].debug) { \
        printf("mcd%d: status=0x%02x: ", \
            unit, mcd_data[unit].status); \
        printf( __VA_ARGS__); \
    } \
}

# define ext2_debug( ...) { \
    printf("EXT2-fs DEBUG (%s, %d): %s:", \
        __FILE__, __LINE__, __FUNCTION__); \
    printf( __VA_ARGS__); \
}
The most annoying difference is that C99 requires one or more arguments corresponding to "..." in a macro invocation, while GNU C/cpp requires zero or more arguments corresponding to "a...". To handle this, mcpp issues a warning, instead of an error, when there is no argument corresponding to "...". Therefore, you can change the above code as follows:
#define MCD_TRACE(fmt, ...) \
{ \
    if (mcd_data[unit].debug) { \
        printf("mcd%d: status=0x%02x: ", \
            unit, mcd_data[unit].status); \
        printf(fmt, __VA_ARGS__); \
    } \
}

# define ext2_debug(fmt, ...) { \
    printf("EXT2-fs DEBUG (%s, %d): %s:", \
        __FILE__, __LINE__, __FUNCTION__); \
    printf(fmt, __VA_ARGS__); \
}
This is simpler, with a one-to-one correspondence. However, this way of writing has the disadvantage that the comma immediately before an empty argument remains, resulting in, for example, printf( fmt, ). In that case, there is no way but to write the macro definition in accordance with the C99 specification, or to avoid using an empty argument in the macro invocation; a harmless token, such as NULL or 0, can be written instead, for example MCD_TRACE(fmt, NULL). *1
Note:
*1
GCC 2.95.3 and later also implement variadic macros in the C99 syntax, and it is recommended to use that syntax. The GCC-specific spec provides the flexibility of allowing zero variable arguments, but its notation is bad in that (1) for the "args..." parameter, white space must not be inserted between "args" and "...", although such a pp-token is not permitted in C/C++, and (2) it is not desirable that the notation of the token concatenation operator be used with a different meaning in a replacement list. It would be desirable to allow zero variable arguments based on the C99 notation. GCC 3 introduced a notation for variadic macros that is a mixture of GCC 2's traditional notation and the C99 one; for details, refer to 3.9.6.3.
The following macro invocations appear in nfs/nfs.h, nfs/nfsmount.h, nfs/nfsmode.h, netinet/if_ether.c, netinet/in.c, sys/proc.h, sys/socketvars.h, i386/scsi/aic7xxx.h, i386/include/pmap.h, dev/aic7xxx/scan.l, dev/aic7xxx/aic7xxx_asm.c, kern/vfs_cache.c, pci/wd82371.c, vm/vm_object.h, and vm/device/pager.c, as well as in /usr/include/nfs/nfs.h.
LIST_HEAD(, arg2)
TAILQ_HEAD(, arg2)
CIRCLEQ_HEAD(, arg2)
SLIST_HEAD(, arg2)
STAILQ_HEAD(, arg2)
The first argument is empty. C99 approves empty arguments, but C90 regarded them as undefined. Considering that an argument may happen to become empty during a nested macro invocation, empty arguments should be allowed; however, it is neither necessary nor desirable to write an empty argument directly in source code. Note that for a one-argument macro there is a syntactic ambiguity between an empty argument and the absence of an argument.
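A small sketch of the ambiguity just mentioned (ONE is a hypothetical one-parameter macro):

#define ONE(x)  x
ONE()       /* an invocation with one empty argument, or with no argument at all? */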
Taking everything into consideration, the following notation is recommended:
#define EMPTY
LIST_HEAD(EMPTY, arg2)
TAILQ_HEAD(EMPTY, arg2)
CIRCLEQ_HEAD(EMPTY, arg2)
SLIST_HEAD(EMPTY, arg2)
STAILQ_HEAD(EMPTY, arg2)
Any Standard C conforming preprocessor will accept this notation.
Incidentally, some of the header files shown above (those in the nfs directory) neither have the macro definitions in question nor #include any other header file. This is because such headers assume that these macro definitions exist in sys/queue.h and that the *.c programs will #include sys/queue.h first. Such headers invite ambiguity.
kern/kern_mib.c has macro invocations of the following form:
SYSCTL_NODE(, arg2, arg3, arg4, arg5, arg6, arg7, arg8, arg9)
In this case, the first argument cannot be changed to EMPTY, because the corresponding macro definition in sys/sysctl.h is as follows:
#define SYSCTL_NODE(parent, nbr, name, access, handler, descr) \
    extern struct linker_set sysctl_##parent##_##name; \
    SYSCTL_OID(parent, nbr, name, CTLTYPE_NODE|access, \
        (void*)&sysctl_##parent##_##name, 0, handler, "N", descr); \
    TEXT_SET(sysctl_##parent##_##name, sysctl__##parent##_##name);
In other words, these arguments are operands of the ## operator and are therefore not macro-expanded. The arguments of the SYSCTL_OID macro shown above, including the first one, are not macro-expanded either. In this case, there is no way but to leave the empty argument as it is. *1
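A minimal sketch of why EMPTY does not help in the SYSCTL_NODE case above (CAT is an illustrative macro): an argument that is an operand of the ## operator is not macro-expanded, so the token EMPTY itself would be pasted in.

#define EMPTY
#define CAT(a, b)   a ## _ ## b
CAT(EMPTY, name)    /* yields EMPTY_name, not _name */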
Note:
*1 C99 approves empty arguments as legitimate. Considering macros such as SYSCTL_NODE() and SYSCTL_OID(), the EMPTY macro is not a cure-all, and using empty arguments has its reasons. In addition, even if EMPTY is used, a nested macro invocation may still produce empty arguments. For source readability, however, using EMPTY is recommended whenever possible.
i386/include/endian.h, as well as /usr/include/machine/endian.h, has the following macro definitions. (There are four definitions of this kind.)
#define __byte_swap_long(x) (replacement text)
#define NTOHL(x) (x) = ntohl ((u_long)x)
#define ntohl __byte_swap_long
The problem is the ntohl definition. Although ntohl is an object-like macro, it expands to a function-like macro name, which is then rescanned together with the subsequent text and expanded as if ntohl were a function-like macro. This way of macro expansion has been regarded as an implicit specification since K&R 1st, and Standard C somehow approved it as legitimate. However, as I discuss in other documents, it is this specification that makes macro expansion unnecessarily complicated and brings confusion to the Standard's text. It is a buggy specification. *1
This ntohl is actually a function-like macro written as an object-like macro with the parameter list omitted. You had better define it as the function-like macro that it is:
#define ntohl(x) __byte_swap_long(x)
This causes no problem.
i386/isa/sound/os.h has the same kind of macro definitions:
#define INB inb
#define INW inb
This should be written as follows:
#define INB(x) inb(x)
#define INW(x) inb(x)
Note:
*1 ISO 9899:1990 Corrigendum 1:1994 regarded this notation as undefined. C99 replaced that article with another. However, the Standard documents are still confusing on this point. For details, see cpp-test.html#2.7.6.
Some kernel sources are contained in ".S" files; that is, they are written in assembly language. These sources contain #include and #ifdef directives, which require preprocessing. To preprocess them, in FreeBSD 2.2.2-R, 'cc' is called with the '-x assembler-with-cpp' option; 'cc' calls '/usr/libexec/cpp' with the '-lang-asm' option and then calls 'as'.
Of course, this way of using .S files is non-conforming. Such assembler source must not contain a token that happens to have the same name as a macro. White space between tokens and at the beginning of a line must be retained during preprocessing. In addition, if the first token on a line is a # introducing an assembler comment, special handling is required on the preprocessor side. This not only considerably limits the preprocessors that can be used, but also increases the possibility of unknowingly introducing bugs. Using .S files in this way is therefore not recommended. *1
To preprocess source code for use on several types of machines, the code should be written in the following manner and saved in a ".c" file rather than a ".S" file. 4.4BSD-Lite actually adopts this way of coding.
asm( " asm code0\n" #ifdef Machine_A " asm code1\n" #else " asm code2\n" #endif " ...\n" );
Note:
*1 In FreeBSD 2.0-R, these kernel sources were contained in *.s files, not *.S. The Makefile was written to call 'cpp', instead of 'cc', to process them, and 'as' was then called. The 'cpp' invoked is '/usr/bin/cpp', a shell script that calls '/usr/libexec/cpp -traditional'. This method was more convenient in that it provided a way to change the preprocessor used simply by modifying the script.
I compiled all the source files in /usr/src/lib/libc of FreeBSD 2.2.2R. There was no problem, probably because most of them come from 4.4BSD-Lite without much modification. It is quite rare and pleasantly surprising to see such a large collection of source files of excellent quality.
At only one place, in gen/getgrent.c, did I find the following coding. Of course, the ";" at the end of the line is superfluous.
#endif;
As seen so far, writing Standard-conforming source code, which is more portable and more secure, requires little extra effort and has no drawbacks. Why, then, does source code that does not conform to the Standards still exist at all?
Comparing the FreeBSD 2.0-R kernel sources with those of 2.2.2-R, the number of non-conforming sources did not decrease. The problem is that newer sources are not necessarily more conforming to the Standards. There are few non-conforming sources in 4.4BSD-Lite, probably because the 4.4BSD sources were rewritten to conform to Standard C and POSIX. However, in the process of importing these sources into FreeBSD, the old writing style revived in some of them. For example, although the ntohl shown above is written as ntohl(x) in 4.4BSD-Lite, it is written as ntohl in FreeBSD. Why did a notation once put away come back?
I blame GCC/cpp for this revival, since it passes these non-conforming sources without issuing a diagnostic. If -pedantic had been the default behavior, the old-style sources would never have revived. (If -pedantic-errors had been the default, though, GCC/cpp would not have come into practical use, because too many sources would have failed to compile.) The gcc man page describes the -pedantic option as: "There is no reason to use this option except for satisfying pedants." Now that eight years have passed since Standard C was established, it is high time that GCC/cpp made -pedantic the default, even if it does not go so far as -pedantic-errors. *1
In FreeBSD 2.0-R, nested comments were sometimes found; in 2.2.2-R they have disappeared, because GCC/cpp no longer allows them. This has nothing to do with -pedantic, but it shows how influential the preprocessor's source checking is.
Note:
*1 I wrote subsection 3.9.3 in 1998. Since then, the gcc man page and info have deleted this statement; however, the specification remains almost the same.
I compiled the glibc (GNU libc) 2.1.3 sources (February 2000). Unlike the FreeBSD libc sources, they showed many problems. Some sources rely on GCC/cpp's undocumented specifications, and it took me a lot of time to identify those cases.
sysdeps/i386/dl-machine.h and stdlib/longlong.h have many multi-line string literals as shown below:
#define MACRO asm(" instr 0 instr 1 instr 2 ")
Some of these string literals are very long. compile/csu/version-info.h, created by make, also has a multi-line string literal. Of course, this is non-conforming, but GCC treats it as a string literal with embedded <newline> characters.
The -lang-asm (-x assembler-with-cpp, -a) option allows mcpp to convert a multi-line string literal into the following code:
#define MACRO asm("\n instr 0\n instr 1\n instr 2\n")
However, this option cannot work properly for a string literal with a directive inserted in the middle as shown in 3.9.1.1, in which case there is no way but to rewrite the source.
#include_next appears in the following files:
catgets/config.h, db2/config.h, include/fpu_control.h, include/limits.h, include/bits/ipc.h, include/sys/sysinfo.h, locale/programs/config.h, and sysdeps/unix/sysv/linux/a.out.h
sysvipc/sys/ipc.h has #warning.
Although these directives are not approved by Standard C, #include_next in particular is indispensable for glibc 2, so mcpp for GCC implements #include_next and #warning.
The problem with #include_next is not only that it is a Standard violation, but also that which headers are actually included depends on the include directory settings and search order, which users can change via environment variables.
When glibc is installed, some files in glibc's include directory are copied to /usr/include and used as system headers. The fact that these headers contain #include_next means the system headers become patchy. It seems to be time to reorganize them.
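A minimal sketch of the directive (the wrapper header is hypothetical): #include_next continues the search for the named header in the include directories that come after the one containing the current file.

/* a wrapper limits.h placed in an earlier include directory */
#define MY_EXTRA_LIMIT  123
#include_next <limits.h>    /* picks up the next limits.h in the search path */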
The following files contain definitions of macros with variable number of arguments based on the GCC specification, as well as macro invocations:
elf/dl-lookup.c, elf/dl-version.c, elf/ldsodefs.h, glibc-compat/nss_db/db-XXX.c, glibc-compat/nss_files/files-XXX.c, linuxthreads/internals.h, locale/loadlocale.c, locale/programs/linereader.h, locale/programs/locale.c, nss/nss_db/db-XXX.c, nss/nss_files/files-XXX.c, sysdeps/unix/sysdep.h, sysdeps/unix/sysv/linux/i386/sysdep.h, and sysdeps/i386/fpu/bits/mathinline.h
This is a deviation from the C99 Standard. You must rewrite the source code before you can use mcpp. *1
Note:
*1 This spec dates from GCC 2. There is another GCC 3 spec, which is a compromise between the GCC 2 and C99 specs. See 3.9.6.3.
The following files have macro invocations with empty arguments:
catgets/catgetsinfo.h, elf/dl-open.c, grp/fgetgrent_r.c, libio/clearerr_u.c, libio/rewind.c, libio/clearerr.c, libio/iosetbuffer.c, locale/programs/ld-ctype.c, locale/setlocale.c, login/getutent_r.c, malloc/thread-m.h, math/bits/mathcalls.h, misc/efgcvt_r.c, nss/nss_files/files-rpc.c, nss/nss_files/files-network.c, nss/nss_files/files-hosts.c, nss/nss_files/files-proto.c, pwd/fgetpwent_r.c, shadow/sgetspent_r.c, sysdeps/unix/sysv/linux/bits/sigset.h, sysdeps/unix/dirstream.h
math/bits/mathcalls.h, in particular, contains as many as 79 empty arguments. This header file is installed as /usr/include/bits/mathcalls.h and is #included by /usr/include/math.h. Even with an EMPTY macro, nested macro invocations generate a lot of empty arguments. Is there no way to write these macros more clearly?
The following files contain object-like macros that are replaced with function-like macro names:
argp/argp-fmtstream.h, ctype/ctype.h, elf/sprof.c, elf/dl-runtime.c, elf/do-rel.h, elf/do-lookup.h, elf/dl-addr.c, io/ftw.c, io/ftw64.c, io/sys/stat.h, locale/programs/ld-ctype.c, malloc/mcheck.c, math/test-*.c, nss/nss_files/files-*.c, posix/regex.c, posix/getopt.c, stdlib/gmp-impl.h, string/bits/string2.h, string/strcoll.c, sysdeps/i386/i486/bits/string.h, sysdeps/generic/_G_config.h, sysdeps/unix/sysv/linux/_G_config.h
Of these, some function-like macros, like the ones in math/test-*.c, are first replaced with an object-like macro name and then further replaced with a function-like macro name. Why did these macros have to be written in this way?
sysdeps/generic/_G_config.h, sysdeps/unix/sysv/linux/_G_config.h, and malloc/malloc.c contain the following macro definition, which expands to the "defined" pp-token.
#define HAVE_MREMAP defined(__linux__) && !defined(__arm__)
The intention of this macro definition is that with the following directive,
#if HAVE_MREMAP
the above line is expected to be expanded as follows:
#if defined(__linux__) && !defined(__arm__)
However, in Standard C the behavior is undefined when a "defined" pp-token appears in the result of macro expansion on a #if line. Apart from that, this macro definition is strange in the first place.
The HAVE_MREMAP macro is first replaced with the following,
defined(__linux__) && !defined(__arm__) (1)
and then the identifiers defined, __linux__ and __arm__ are rescanned for further macro replacement; any of them that is a macro is expanded. Here, defined cannot be defined as a macro (otherwise, another undefined result occurs), so if __linux__ is defined as 1 and __arm__ is not defined, the macro is finally expanded as follows:
defined(1) && !defined(__arm__)
defined(1), of course, is a syntax error of a #if expression.
However, GCC/cpp stops macro expansion at (1) and regards it as the final macro expansion result of the #if line. Since the behavior is "undefined" anyhow, this GNU specification cannot be called wrong, but it lacks consistency in that macro expansion differs between #if lines and other lines. At the very least, it lacks portability. *1
The above code should be written as follows:
#if defined(__linux__) && !defined(__arm__)
#define HAVE_MREMAP 1
#endif
I hope this kind of confusing code will be eliminated as early as possible. *2
Note:
*1 GCC 2/cpp internally treats defined on a #if line as a special macro. For this reason, when GCC/cpp rescans the following sequence of tokens for macro expansion, it evaluates the sequence as a #if expression, owing to the special handling of the defined pseudo-macro, instead of expanding the original macro further. In other words, the distinction between macro expansion and #if expression evaluation is blurred.
defined(__linux__) && !defined(__arm__)
This problem is related to GCC/cpp's own program structure. GCC 2/cpp has a de facto main routine, rescan(), which is a macro rescanning routine. This routine reads and processes the source file from beginning to end, and in the course of that calls the preprocessing directive routines. Although implementing everything around macros is the traditional program structure of a macro processor, this structure can be seen as the cause of the mixture of macro expansion with other processing.
*2 In glibc 2.4, this macro was corrected. Nevertheless, many other macros of the same sort were newly defined.
The files named *.S contain assembler source code requiring preprocessing. Some of these files have preprocessing directives, such as #include, #define, and #if. In addition, the file named compile/csu/crti.S generated by Make contains the following lines:
#APP
or
#NO_APP
From a syntactic point of view, a preprocessor cannot tell whether these lines are invalid preprocessing directives or valid assembler comments. GCC seems to leave these lines as they are during preprocessing and treat them as assembler comments.
Concatenation of pp-tokens using the ## operator sometimes generates an invalid pp-token. GCC/cpp outputs these pp-tokens without issuing a diagnostic.
For compatibility with GCC, I reluctantly decided that, with the -lang-asm (-x assembler-with-cpp, -a) option, mcpp does not treat these non-conforming directives and the invalid pp-tokens generated by ## as errors; it outputs them as they are and issues a warning.
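A small sketch of such an invalid pp-token (GLUE is an illustrative macro); under -lang-asm, mcpp only warns about the result and outputs it as it is:

#define GLUE(a, b)  a ## b
GLUE(., S)          /* '.S' is not a valid single pp-token */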
Essentially, these sources should be processed with an assembler macro processor. GNU seems to provide a macro processor called gasp, but it seems to be scarcely used for some reason.
When invoked with the -dM option, GCC outputs only macro definitions; stdlib/isomac.c uses this in the 'make check' routine.
The problem with isomac.c is that it accepts only GCC/cpp's macro definition output format and regards a comment or a blank line as an error.
The glibc make sometimes uses a program called rpcgen. The problem with rpcgen is that it accepts only GCC/cpp's output format for preprocessor line-number information, as follows:
#123 "filename"
Rpcgen accepts neither:
#line 123
nor
#line 123 "filename"
Rpcgen regards them as errors.
I reluctantly decided that the GCC-specific build of mcpp uses the GCC format by default. Rpcgen's specification is poor in that it is tied to a particular compiler system's format and cannot accept the standard one.
The glibc 2.1 makefiles often use the -include option and sometimes the -isystem and -I- options. The former can be replaced with a #include at the beginning of the source code. The latter two are rarely necessary; they are needed only when updating the system headers themselves.
Only the GCC-specific build of mcpp implements these two options, and I would like such rarely necessary options to be made obsolete. *1
Note:
*1 GCC/cpp provides several more options that specify include directories and their search orders, such as -iprefix, -iwithprefix, and -idirafter. It also provides the -remap option that specifies mapping between long-file-names and MS-DOS 8+3 format filenames. On CygWIN systems, specs files contain these options, but it is not necessary to use these options because include directories can be specified with environment variables and because such mapping is no longer necessary on CygWIN.
This is not a problem of glibc but of GCC. The following macros are predefined by GCC/cpp although their names do not appear in the documentation.
__VERSION__, __SIZE_TYPE__, __PTRDIFF_TYPE__, __WCHAR_TYPE__
On Vine Linux 2.1 (egcs-1.1.2) systems, __VERSION__ is set to "egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)". On many systems, including Linux/i386, the other three macros expand to the type names unsigned int, int, and long int, respectively. On FreeBSD and CygWIN systems, however, their values are slightly different (I do not know why). Why do these predefined macros remain undocumented?
The strangest thing is the undocumented environment variable named SUNPRO_DEPENDENCIES. sysdeps/unix/sysv/linux/Makefile contains the following script:
SUNPRO_DEPENDENCIES='$(@:.h=.d)-t $@' \
$(CC) -E -x c $(sysinclude) $< -D_LIBC -dM | \
 ... \
etc.
The intent of this script is to specify a filename via the SUNPRO_DEPENDENCIES environment variable and to have cpp output, to that file, the macro definitions in the source code as well as the dependency description lines between source files.
I had no other way but to read the GCC/cpp source code (egcs-1.1.2/gcc/cccp.c) to know how this environment variable works.
In addition, there is another environment variable, DEPENDENCIES_OUTPUT, which has a similar function. The difference between the two is that SUNPRO_DEPENDENCIES also outputs dependency lines for system headers, while DEPENDENCIES_OUTPUT does not.
Only the GCC-specific build of mcpp honors these two environment variables, and I would like such undocumented specifications to be made obsolete as early as possible.
Linux (i386)/GCC 2 appends -Asystem(unix), -Acpu(i386) and -Amachine(i386) to the cpp invocation options by means of the specs file. As far as glibc 2.1.3 for Linux/x86 is concerned, there seems to be no source code that uses this functionality.
It is a big problem that glibc's system headers have become patchy and very complicated. A small difference in settings may result in a big difference in preprocessing results.
On the other hand, glibc 2.1.3 did not contain the #else junk, #endif junk, or duplicate macro definitions that were found in the FreeBSD 2.2.2 kernel sources. In some respects, the glibc 2.1 source is better organized than the FreeBSD 2 kernel source.
As a whole, however, glibc 2.1 contains quite a few sources that rely on GCC-specific specifications, which impairs portability to other compiler systems, even though such sources form only a small portion of several thousand source files. Dependence on GCC-local specifications is not desirable for program readability and maintainability. I hope that GCC V.3 will make these local specifications obsolete and that all the source code based on them will be completely rewritten.
You must modify some source code as follows before you can use mcpp to compile glibc 2.1 sources: *1
In addition to the options specified in the Makefile or the specs file, you must specify the -lang-asm (-x assembler-with-cpp) option to process *.S files containing multi-line string literals or assembler comments before you can invoke mcpp. Usually, you can leave this option specified when preprocessing other files as well.
When you want to use GCC/cpp or mcpp, or change the default options, you had better perform the following steps:
Save the following script, for example as mcpp.sh, in the directory where cpp resides:

#!/bin/sh
/usr/lib/gcc-lib/i386-redhat-linux/egcs-2.91.66/mcpp -Q -lang-asm "$@"

The -Q option is optional; however, I recommend using -Q to record the large amount of diagnostic messages.
Then enter the following commands:

chmod a+x mcpp.sh
mv cpp cpp_gnuc
ln -sf mcpp.sh cpp

These commands cause mcpp.sh, now linked as cpp, to be executed when gcc calls cpp; mcpp.sh in turn calls mcpp with the above options placed before the ones specified by gcc.
To switch back to GCC/cpp, enter:

ln -sf cpp_gnuc cpp
Note:
*1 mcpp V.2.7 implemented these specs. Hence, editing the sources is not necessarily required.
*2 If you use 'configure' and 'make' to compile the GCC-specific build of mcpp, the 'make install' command will set up the script appropriately. The only thing left for you here is to add the '-Q -lang-asm' options to the script.
Another problem with using mcpp is that it issues a huge number of warning messages. You can redirect them to a file using the -Q option, but when you preprocess a large amount of source code, such as glibc, a total of several hundred MB or more of 'mcpp.err' files is created, so it is impossible to look through them all.
Taking a close look at mcpp.err, you will find the same warnings issued repeatedly, because the same *.h files are #included by many source programs. To make the files more manageable, perform the following procedure:
grep 'fatal:' `find . -name mcpp.err`
grep 'error:' `find . -name mcpp.err`
grep 'warning:' `find . -name mcpp.err` | sort -k3 -u > mcpp-warnings-sorted
grep 'warning:' `find . -name mcpp.err` | sort -k3 | uniq > mcpp-warnings-all
grep 'warning: Replacement' `find . -name mcpp.err` | sort -k3 | uniq | less

After you get an overall idea of which source lines are causing which kinds of errors or warnings, you can view a particular mcpp.err with "less" and then, if necessary, the source file in question.
mcpp <-opts> in-file.c > in-file.i 2>&1

When you use "make", you must temporarily change the above shell script.
I first compiled the GCC 3.2 sources on Linux and FreeBSD, then used the generated gcc to compile mcpp, and then recompiled the GCC 3.2 sources using mcpp for preprocessing.
New GCC compilers are bootstrapped through several phases of make: the gcc, cc1, etc. generated in an earlier phase are used to recompile themselves, the resulting compiler drivers and compiler-propers are used again to recompile themselves, and so on. During the bootstrap, gcc exists under the name xgcc.
Besides cc1 and cc1plus, GCC 2 has a separate preprocessor called cpp. In GCC 3, cpp was absorbed into cc1 and cc1plus; however, a separate preprocessor, cpp0, still exists. To have cpp0 do the preprocessing, the -no-integrated-cpp option must be specified when you invoke gcc or g++. Therefore, to have mcpp do the preprocessing, you must use shell scripts so that gcc (xgcc) or g++ invokes mcpp first and then invokes cc1 or cc1plus. *1
In the GCC compiler system, the settings of the system headers and their search order have become very complex, so a small difference in settings may change the preprocessing results. Even a successful compilation was often difficult to attain. In addition, compilation and tests require a lot of other software, and older versions of such software may cause compilation or tests to fail. Compilation also sometimes failed due to hardware problems on my machines.
In fact, I failed to compile the GCC 3.2 source under FreeBSD 4.4R. I had to upgrade FreeBSD to 4.7R and change the software packages to those for FreeBSD 4.7R before the compilation succeeded. *2
I used VineLinux 2.5 on two PCs. Although compiling the GCC 3.2 sources with GCC 2.95.3 succeeded on one PC (K6/200MHz), recompiling the GCC 3.2 sources with the generated GCC 3.2/cc1 failed with many segmentation faults. When I changed the CPU from K6 to AthlonXP, the recompilation succeeded with no segmentation faults. The hardware may have caused the problem.
When I compiled the GCC 3.2 sources with GCC 2.95.4 under FreeBSD on the K6, "make -k check" of the generated gcc was almost entirely successful. When I then recompiled GCC 3.2 itself with the generated GCC 3.2, about 20 percent of the testsuite for g++ and libstdc++-v3 failed in "make -k check". With the AthlonXP instead of the K6, however, everything went fine. Again, the hardware may have caused the problem.
On both VineLinux PCs, when I recompiled the GCC 3.2 sources using GCC 3.2 itself with mcpp for preprocessing, "make -k check" of the generated gcc was successful; however, 20 percent of the testsuite for g++ and libstdc++-v3 failed. *3, *4, *5
In any case, the cause of these testsuite failures seems to lie not in the generated compilers themselves, such as gcc, g++, cc1 and cc1plus, but in the header files or some other settings.
mcpp cannot be described as completely compatible with GCC/cpp, but is highly compatible. So, mcpp and GCC/cpp can be used interchangeably.
GCC 3.2 sources were compiled in the following environment:
OS              make        library         CPU
VineLinux 2.5   GNU make    glibc 2.2.4     Celeron/1060MHz
VineLinux 2.5   GNU make    glibc 2.2.4     K6/200MHz, AthlonXP/2.0GHz
FreeBSD 4.7R    UCB make    libc.so.4       K6/200MHz, AthlonXP/2.0GHz
Only C and C++ were compiled.
Note:
*1 I had to do this at each bootstrap stage. Since the makefile is too large and too complex to change, I employed an inelegant method: I sat in front of the PC screen during the entire bootstrap, and at the end of each stage I entered ^C and replaced xgcc and the others with the shell scripts.
*2 Due to dependency between packages, the system falls into confusion unless appropriate versions are installed. Actually, for this reason, my FreeBSD temporarily failed to invoke kterm.
*3 "make -k check" cannot be used with mcpp because diagnostics of mcpp are different from those of GCC.
*4 "make -k check" seems to require an English environment, so the LANG environment variable should be set to C.
*5 All the testsuite failures were caused by the inability to link the pthread_* functions, such as pthread_getspecific and pthread_setspecific, from the library i686-pc-linux-gnu/libstdc++-v3/src/.libs/libstdc++.so.5.0.0. When a correctly generated library was installed, "make -k check" was successful. On FreeBSD, this problem never happened, probably because of small differences in settings.
This very old way of coding is no longer found in the GCC 3.2 sources. Multi-line string literals were deprecated only as late as GCC 3.2: GCC 3.2 still processes a source with a multi-line string literal as you would expect, but issues a warning.
limits.h and syslimits.h in build/gcc/include generated during the course of make have #include_next. When GCC 3.2 is installed, these header files are copied to limits.h and syslimits.h in lib/gcc-lib/i686-pc-linux-gnu/3.2/include.
The GCC 3.2 sources do not have #warning.
The GCC 3.2 sources have some variadic macros, but most of them are found in the testsuite and are nothing but test samples. Although GCC 3.2 still supports variadic macros in the GCC 2 notation, the ones using __VA_ARGS__ (the C99 notation) are more frequent in the GCC 3.2 sources.
In GCC 3, variable argument macros in a mixed notation of GCC 2 and C99 are found: *1
#define eprintf( fmt, ...) fprintf( stderr, fmt, ##__VA_ARGS__)
This definition corresponds to the following one in the GCC 2 spec:
#define eprintf( fmt, args...) fprintf( stderr, fmt, ##args)
According to the GCC specification, in the absence of arguments corresponding to "...", the comma immediately preceding "##" is deleted. So this is expanded as follows:
eprintf( "success!\n") ==> fprintf( stderr, "success!\n")
As far as this example is concerned, the specification seems convenient, but it is not desirable in that (1) a comma in the replacement list of a macro definition is not always a delimiter of arguments, (2) it gives the token concatenation operator (##) an additional, unrelated function, and (3) it makes the rules more complex by allowing exceptions. *2, *3, *4
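A sketch of problem (1) above (ARRAY is an illustrative macro): the comma before ##__VA_ARGS__ here belongs to the initializer, not to an argument list, yet the GCC rule still removes it when the variable arguments are absent.

#define ARRAY(name, ...)    int name[] = { 0, ##__VA_ARGS__ }
ARRAY(a);           /* -> int a[] = { 0 };        the initializer's comma disappears */
ARRAY(b, 1, 2);     /* -> int b[] = { 0, 1, 2 };                                     */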
Note:
*1 This manual calls the variadic macro specification that dates from GCC 2 the GCC2 spec, and the one introduced in GCC 3 the GCC3 spec.
*2 While in GCC 2.* the 'args...' in a GCC2-spec variadic macro definition must not be separated as 'args ...', GCC 3 allows the intervening space.
*3 When the -ansi option (or any -std=c* or -std=iso* option) is specified, however, GCC does not remove the comma even if the variable arguments are absent; nevertheless, the '##' silently disappears.
*4 mcpp V.2.6.3 implemented the GCC3-spec variadic macro in STD mode of the GCC-specific build only. V.2.7 implemented even the GCC2-spec one.
Apart from #include-ed system headers, such as /usr/include/bits/mathcalls.h and /usr/include/bits/sigset.h, empty arguments in a macro invocation are found only in gcc/libgcc2.h of GCC 3.2 sources themselves. *1
Note:
*1 These two header files are copied into the system header directory when glibc is installed. They do not exist on FreeBSD because glibc is not used.
gcc/fixinc/gnu-regex.c and libiberty/regex.c have object-like macros that are replaced with a function-like macro name. /usr/lib/bison.simple, a #included file, also has such macros. These macros are all related to alloca. For example, libiberty/regex.c has the following macro definitions:
#define REGEX_ALLOCATE alloca
#define alloca( size) __builtin_alloca( size)
This should be written as follows:
#define REGEX_ALLOCATE( size) alloca( size)
Why did they omit (size)?
In addition, regex.c also has another alloca, which is defined as follows:
#define alloca __builtin_alloc
Their writing style is inconsistent.
Furthermore, regex.c has a #include "regex.c" line, which includes the file itself. regex.c is a strange and unnecessarily complicated source.
The GCC 3.2 sources do not have macros that expand to 'defined'. According to the GCC 3.2 documents, this type of macro is preprocessed in the same way as by GCC 2/cpp, but GCC 3.2 issues a warning that it "may not be portable". However, GCC 3.2 does not issue a warning for the example shown in 3.9.4.6.
cpp.info of GCC 3 says:
Wherever possible, you should use a preprocessor geared to the language you are writing in. Modern versions of the GNU assembler have macro facilities.
However, the gcc/config directory has several *.S files.
The GCC 3.2 make uses neither rpcgen nor the -dM option. However, the specifications of rpcgen and the -dM option do not seem to have changed from the previous versions.
These options are used frequently in the GCC 3.2 make. Sometimes the -isystem option is used to specify several system include directories at once. Is using this option unavoidable when compiling software that updates the system headers themselves? I think an environment variable would be a better way to specify all the system include directories.
On the other hand, the GCC 3/cpp documents discourage use of the -iwithprefix and -iwithprefixbefore options. GCC provides many options for specifying include directories; is GCC 3.2 moving toward reorganizing them or reducing their number? *1
Note:
*1 The GCC 3.2 Makefile uses the -iprefix option in a stand-alone manner (without -iwithprefix or -iwithprefixbefore), although -iprefix makes sense only when followed by one of those two options.
GCC 2 did not document predefined macros, such as __VERSION__, __SIZE_TYPE__, __PTRDIFF_TYPE__ and __WCHAR_TYPE__. Even with the -dM option, their existence was unknown. GCC 3 not only documents them but also enhances -dM to show their definitions.
GCC 3 documents the SUNPRO_DEPENDENCIES environment variable, which GCC 2 did not. (I do not know why this environment variable is needed.)
GCC 3 implements the following #pragmas:
#pragma GCC poison
#pragma GCC dependency
#pragma GCC system_header
Of these, the GCC 3.2 sources use poison and system_header. mcpp does not support these #pragmas because I do not think they are necessary. (I omit explanation of their specifications.) *1
GCC 3 deprecates assertion directives, such as #assert, although gcc, by default, specifies the -A option.
In GCC 2, the -traditional option is implemented in one and the same cpp, resulting in a strange mixture of very old specifications and C99 ones. In GCC 3, the preprocessor was divided into two: the non-traditional cpp0 and tradcpp0. The -traditional option is valid only for gcc; cpp0 does not provide it. 'gcc -traditional' invokes tradcpp0 for preprocessing.
tradcpp0 is getting closer to a true traditional preprocessor before C90. They say that they no longer maintain tradcpp0 except for serious bugs.
The strange specifications of GCC 2/cpp seem to have been significantly revised.
Note:
*1 mcpp V.2.7 onward supports #pragma GCC system_header on GCC-specific-build.
As seen above, as far as preprocessing is concerned, the GCC 3.2 sources are much improved over the glibc 2.1.3 sources: the traditional way of writing has been almost eliminated, and meaningless options are no longer used.
GCC 3.2/cpp0 itself is also much superior to GCC 2/cpp in that it regards the traditional specifications as obsolete and articulates the token-based principle. Undocumented specifications have been significantly reduced. Although these improvements are still not sufficient, GCC is certainly moving in the right direction.
However, the GNU / Linux system headers have become so complex that it is difficult to grasp their entire structure, which may be one of the biggest causes of problems in the GNU / Linux system.
Another pity is that the preprocessor has been absorbed into the compiler-proper. Therefore, to use mcpp, the -no-integrated-cpp option must be specified when invoking gcc or g++. If you compile a large number of source files with complicated or numerous makefiles, or if some program invokes gcc automatically, you should create a shell-script that invokes gcc or g++ with the -no-integrated-cpp option automatically specified.
Let me take an example of this. Place the following shell-scripts in the directory where gcc and g++ reside (on my Linux, /usr/local/gcc-3.2/bin), under the names of gcc.sh and g++.sh, respectively.
#!/bin/sh
/usr/local/gcc-3.2/bin/gcc_proper -no-integrated-cpp "$@"

#!/bin/sh
/usr/local/gcc-3.2/bin/g++_proper -no-integrated-cpp "$@"
Move to this directory and enter the following commands:
chmod a+x gcc.sh g++.sh
mv gcc gcc_proper
mv g++ g++_proper
ln -sf gcc.sh gcc
ln -sf g++.sh g++
In the directory where cpp is located (on my Linux, /usr/local/gcc-3.2/lib/gcc-lib/i686-pc-linux-gnu/3.2), create a script that executes mcpp when cpp0 is invoked, as you did for GCC 2 (See 3.9.5). By doing this, gcc or g++ first invokes mcpp and then invokes cc1 or cc1plus with the -fpreprocessed option appended. -fpreprocessed indicates the source has been preprocessed already. *1
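The text above describes a shell-script; purely for illustration, an equivalent minimal wrapper could also be a small C program like the sketch below. It simply re-executes mcpp (assumed to be reachable via PATH) with whatever arguments gcc passed to cpp0; the actual scripts, with the proper options and paths, are generated by mcpp's 'make install' (see note *1).

/* cpp0 replacement: hand all arguments over to mcpp (illustrative sketch only) */
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    (void)argc;                 /* the argument vector is passed through as is */
    argv[0] = "mcpp";           /* pretend to be mcpp                          */
    execvp("mcpp", argv);       /* replaces this process on success            */
    perror("execvp mcpp");      /* reached only if mcpp could not be executed  */
    return 1;
}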
Note that when a GCC version other than the system standard one is installed, additional include directory settings may be required. mcpp embeds these settings when mcpp itself is compiled, thus eliminating the need to set them with environment variables.
If possible, I would like to replace the cpplib source, the preprocessing part of cc1 or cc1plus, with mcpp. However, the source files that define the internal interface between cpplib and cc1 or cc1plus, as well as the external interface between cpplib and user programs that use it, amount to as much as 46KB. Replacing it is practically impossible. Why are the interfaces so complex? It is a pity.
Note:
*1 mcpp gets all the necessary information via 'configure' and sets up these scripts by 'make install'.
Although GCC 3.2 seemed to be heading toward better portability, GCC turned in a different direction with 3.3 and 3.4. V.3.3 and 3.4 differ from 3.2 in the following points.
GCC / cc1 is becoming one huge and complex compiler, absorbing the preprocessor and the contents of some system headers. I doubt whether this is a better way of constructing a compiler, especially an open-source one.
As regards mcpp, it is a nuisance that gcc arbitrarily hands the preprocessor some irrelevant options. Since it would be risky to ignore every option unrecognized by mcpp, I did not adopt that approach. Although mcpp ignores the wrong options such as -c or -m*, which are frequently handed over from gcc, it reports an error if other unexpected options are passed on.
In order to avoid conflicts with those wrong options, mcpp V.2.5 changed some of its own options: -c to -@compat, -m to -e, and some others.
To use mcpp with GCC 3.2 or earlier, it is only necessary to replace the invocation of cpp0 with mcpp. To use mcpp with GCC 3.3 or later, it is necessary to split the invocation of cc1 into mcpp and cc1. src/set_mcpp.sh writes shell-scripts for this purpose into the GCC libexec directory when mcpp is installed. The 'make install' command also gets GCC's predefined macros using the -dM option and sets them for mcpp. *1, *2, *3
In addition, GCC 3.4 changed the processing of multi-byte characters. Its document says: *4
There is a trend to identify "internationalization" with "unicodization", especially among Western people who do not use multi-byte characters. It seems that this trend has reached GCC.
What is worse, GCC 3.4 or later does not implement its own specification sufficiently. In actuality, it behaves as follows:
mcpp takes the -e <encoding> option to specify an encoding, and the GCC-specific-build inserts a <backslash> before any byte in a multi-byte character that has the same value as <backslash>, '"' or '\'', when the encoding is one of BIG-5, shift-JIS or ISO2022-JP, in order to complement GCC's inability. However, it does not convert the encoding to UTF-8. mcpp also treats -finput-charset as the same option as -e. I adopted these specifications because: *7
Note:
*1 The output of the -dM option, however, differs slightly depending on the other options. What is worse, most of the predefined macros are undocumented. As a result, the whole picture cannot be grasped easily.
*2 MinGW does not support symbolic links. Though the 'ln -s' command exists, it does not link but only copies. Moreover, MinGW's GCC refuses to invoke a shell-script even if it is named cc1. To cope with this, mcpp's MinGW GCC-specific-build generates a binary executable named cc1.exe (copied also to cc1plus.exe) which invokes mcpp.exe or GCC's cc1.exe/cc1plus.exe.
*3 CygWIN / GCC has the -mno-cygwin option, which alters the system include directories and GCC's predefined macros. From mcpp V.2.6.1 onward, the CygWIN GCC-specific-build supports this option and generates two sets of header files for the predefined macros.
*4 On GCC in my FreeBSD 6.3, multi-byte character conversion to UTF-8 does not work at all, though libiconv seems to be linked to them. It was the same with FreeBSD 5.3 and 6.2, too.
*5 This conversion seems not to be done in preprocessing phase, but in compilation phase. Output of -E option is still UTF-8.
*6 GCC V.4.1-4.3 fail to compile due to a bug of GCC, if -save-temps or -no-integrated-cpp option is specified at the same time with -f*-charset option.
*7 When you pass the output of mcpp to cc1, you should specify neither the -fexec-charset option nor the -finput-charset option.
I compiled the glibc 2.4 (March 2006) source and checked the preprocessing problems in it. As the compiler system, I used GCC 4.1.1 with mcpp 2.6.3. Since my machine is of the x86 type, I did not check the code for other CPUs.
This version is six years newer than glibc 2.1.3 (February 2000), which I examined before, so it naturally has some parts largely changed from the old version. However, it has remarkably many parts unchanged. On the whole, most of the problems I noticed in the old version have not been revised; on the contrary, unportable sources have increased. The old-fashioned "multi-line string literal" has disappeared.
#include_next is found in the following source files. Its occurrences have increased compared with the version of six years earlier.
catgets/config.h, elf/tls-macros.h, include/bits/dlfcn.h, include/bits/ipc.h, include/fpu_control.h, include/limits.h, include/net/if.h, include/pthread.h, include/sys/sysctl.h, include/sys/sysinfo.h, include/tls.h, locale/programs/config.h, nptl/sysdeps/pthread/aio_misc.h, nptl/sysdeps/unix/sysv/linux/aio_misc.h, nptl/sysdeps/unix/sysv/linux/i386/clone.S, nptl/sysdeps/unix/sysv/linux/i386/vfork.S, nptl/sysdeps/unix/sysv/linux/sleep.c, sysdeps/unix/sysv/linux/ldsodefs.h, sysdeps/unix/sysv/linux/siglist.h
Though the following is not a part of glibc itself but a testcase file to test glibc by 'make check', #include_next is found also in it.
sysdeps/i386/i686/tst-stack-align.h
#warning appears in sysvipc/sys/ipc.h. This directive is in a block to be skipped in normal processing, and does not cause any problem.
There are definitions of variable argument macros in the following files. All of them follow the old spec dating from GCC2. There is not a single macro of the C99 spec, nor even of the GCC3 spec.
elf/dl-lookup.c, elf/dl-version.c, include/libc-symbols.h, include/stdio.h, locale/loadlocale.c, locale/programs/ld-time.c, locale/programs/linereader.h, locale/programs/locale.c, locale/programs/locfile.h, nptl/sysdeps/pthread/setxid.h, nss/nss_files/files-XXX.c, nss/nss_files/files-hosts.c, sysdeps/generic/ldsodefs.h, sysdeps/i386/fpu/bits/mathinline.h, sysdeps/unix/sysdep.h, sysdeps/unix/sysv/linux/i386/sysdep.h
The following testcase files also have variadic macro definitions of GCC2 spec.
localedata/tst-ctype.c, posix/bug-glob2.c, posix/tst-gnuglob.c, stdio-common/bug13.c
Moreover, many of the calls of these macros lack an actual argument for the variable argument. As many as 142 files have such macro calls lacking the variable argument, and in 120 of them the replacement list has a ", ##" sequence immediately preceding the variable argument, hence removal of the ',' occurs.
As a variable argument macro specification, the C99 one is portable and recommendable. However, it is not so easy to rewrite a GCC-spec macro into a C99 one. Neither GCC2-spec nor GCC3-spec variadic macros necessarily correspond to the C99 spec one-to-one, because the GCC specs remove the preceding comma when the variable argument is absent. If you rewrite a GCC-spec macro definition into a C99 one, you also need to rewrite the macro calls that lack the variable argument and supply an argument.
In glibc 2.1.3, GCC2-spec macros were not so many, and it was not heavy work for a user to rewrite them with an editor. In glibc 2.4, however, such macro definitions have increased and, above all, their calls have increased vastly. As a consequence, it is now impossible for a user to rewrite them.
To cope with this situation, mcpp V.2.6.3 onward implemented the GCC3-spec variadic macro for the GCC-specific-build only. Furthermore, mcpp V.2.7 implemented the GCC2-spec one, too. However, you should not write GCC2-spec macros in your sources, because that spec deviates too far from the token-based principle. Since the GCC2 spec corresponds to the GCC3 spec one-to-one, it is easy to rewrite a macro definition to the GCC3 spec, and the calls of that macro need not be rewritten. The macros already written with the GCC2 spec become a little clearer if rewritten this way. *1
To rewrite a GCC2 spec variadic macro to GCC3 spec one, for example, change:
#define libc_hidden_proto(name, attrs...) hidden_proto (name, ##attrs)
to:
#define libc_hidden_proto(name, ...) hidden_proto (name, ## __VA_ARGS__)
That is, change the parameter attrs... to ..., and change attrs in the replacement-list to __VA_ARGS__.
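For comparison, the following hypothetical sketch (not code from glibc) shows what a strictly C99 rewrite would look like, and why such a rewrite would also force the calls to be revised:

/* Strictly C99: there is no ", ##" comma-removal, and a call must supply  */
/* at least one (possibly empty) argument for the "..." part.              */
#define libc_hidden_proto_c99(name, ...)   hidden_proto (name, __VA_ARGS__)

/* GCC2/GCC3-spec call without the variable argument:                      */
/*     libc_hidden_proto (foo)         ->  hidden_proto (foo)              */
/* C99 spec: the call itself must be rewritten to supply the argument,     */
/* at minimum an empty one:                                                 */
/*     libc_hidden_proto_c99 (foo, )   ->  hidden_proto (foo, )            */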
Note:
*1 As for variadic macro of GCC2 spec and GCC3 spec, see 3.9.1.6, 3.9.6.3 respectively.
Macro calls with an empty argument are found in as many as 488 source files. They have greatly increased since the old version. C99's approval of empty macro arguments may have influenced this tendency.
In particular, math/bits/mathcalls.h has as many as 79 macro calls with an empty argument. That is the same as in the old version.
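For reference, the following hypothetical fragment (not taken from mathcalls.h) shows what a macro call with an empty argument looks like; C99 explicitly allows an empty argument, whereas C90 left the behavior undefined:

#define DECLARE(prefix, name)   int prefix ## name (double)

DECLARE (my_, func);    /* both arguments present: declares my_func  */
DECLARE (    , func2);  /* first argument empty:   declares func2    */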
The following files have object-like macros whose definitions are replaced by function-like macro names:
argp/argp-fmtstream.h, hesiod/nss_hesiod/hesiod-proto.c, intl/plural.c, libio/iopopen.c, nis/nss_nis/nis-hosts.c, nss/nss_files/files-hosts.c, nss/nss_files/files-network.c, nss/nss_files/files-proto.c, nss/nss_files/files-rpc.c, nss/nss_files/files-service.c, resolv/arpa/nameser_compat.h, stdlib/gmp-impl.h, string/strcoll_l.c, sysdeps/unix/sysv/linux/clock_getres.c, sysdeps/unix/sysv/linux/clock_gettime.c
elf/link.h has function-like macros that are replaced by function-like macro names. For example:
#define ELFW(type)          _ElfW (ELF, __ELF_NATIVE_CLASS, type)   /* sysdeps/generic/ldsodefs.h:46 */
#define _ElfW(e,w,t)        _ElfW_1 (e, w, _##t)                    /* elf/link.h:32 */
#define _ElfW_1(e,w,t)      e##w##t                                 /* elf/link.h:33 */
#define __ELF_NATIVE_CLASS  __WORDSIZE                              /* bits/elfclass.h:11 */
#define __WORDSIZE          32                                      /* sysdeps/wordsize-32/bits/wordsize.h:19 */
#define ELF32_ST_TYPE(val)  ((val) & 0xf)                           /* elf/elf.h:429 */
With the above macro definitions, take the following line:
&& ELFW(ST_TYPE) (sym->st_info) != STT_TLS /* elf/do-lookup.h:81 */
In this macro call, ELFW(ST_TYPE) is expanded through the following steps:
ELFW(ST_TYPE)
_ElfW(ELF, __ELF_NATIVE_CLASS, ST_TYPE)
_ElfW_1(ELF, 32, _ST_TYPE)
ELF32_ST_TYPE
Then, ELF32_ST_TYPE, together with the subsequent sequence (sym->st_info), is expanded to ((sym->st_info) & 0xf). That is to say, the function-like macro call _ElfW_1(ELF, 32, _ST_TYPE) is expanded to the name of another function-like macro, ELF32_ST_TYPE.
These macros become clearer if 3 of the above 6 definitions are written as:
#define ELFW( type, val)        _ElfW( ELF, __ELF_NATIVE_CLASS, type, val)
#define _ElfW( e, w, t, val)    _ElfW_1( e, w, _##t, val)
#define _ElfW_1( e, w, t, val)  e##w##t( val)
and if they are used as:
&& ELFW(ST_TYPE, sym->st_info) != STT_TLS
Although these arguments may seem a little redundant, they are more natural than the original ones if we think of function call syntax.
The following files contain macro definitions whose replacement-lists have the 'defined' token. *1
iconv/skeleton.c, sysdeps/generic/_G_config.h, sysdeps/gnu/_G_config.h, sysdeps/i386/dl-machine.h, sysdeps/i386/i686/memset.S, sysdeps/mach/hurd/_G_config.h, sysdeps/posix/sysconf.c
Those macros are used in some #if lines of the following files, and also some of the above files themselves.
elf/dl-conflict.c, elf/dl-runtime.c, elf/dynamic-link.h
In glibc 2.1.3, malloc/malloc.c had a macro definition of HAVE_MREMAP whose replacement-list contained the 'defined' token. In glibc 2.4, that macro definition has been revised into a portable one; nevertheless, unportable macros of the same sort have increased in other source files.
In a #if expression, the result of expanding a macro whose replacement-list has the 'defined' token is undefined according to the Standards, and it is mere self-satisfaction on GCC's part to preprocess the expression plausibly and arbitrarily. In order to make these sources portable among other preprocessors, at least the definitions of these macros should be rewritten, and in some cases the calls of the macros should be rewritten, too. *2
In most cases, a simple rewriting as shown in 3.9.4.6 is sufficient. In some cases, however, this method does not work: those are the cases where the evaluation result of 'defined MACRO' differs depending on its timing. For example, sysdeps/i386/dl-machine.h has the following macro definition, which is used in some #if expressions in other files.
#define ELF_MACHINE_NO_RELA defined RTLD_BOOTSTRAP
Rewriting the definition as follows will not do.
#if defined RTLD_BOOTSTRAP
#define ELF_MACHINE_NO_RELA 1
#endif
The macro RTLD_BOOTSTRAP is defined in elf/rtld.c, if and only if that file is included before dl-machine.h. In other words, the evaluation result of 'defined RTLD_BOOTSTRAP' depends on the order in which the two files are included. To rewrite these sources portably, the macro ELF_MACHINE_NO_RELA should be abandoned, since it is a useless macro found only in #if lines, and the #if line:
#if ELF_MACHINE_NO_RELA
should be rewritten as:
#if defined RTLD_BOOTSTRAP
In glibc, this portable style of #if line is found in many places; at the same time, the undefined style of the above example is also found in some places.
Note:
*1 On Linux, /usr/include/_G_config.h is the header file installed from glibc's sysdeps/gnu/_G_config.h, therefore it has the same macro definition as:
#define _G_HAVE_ST_BLKSIZE defined (_STATBUF_ST_BLKSIZE)
This should be rewritten to:
#if defined (_STATBUF_ST_BLKSIZE)
#define _G_HAVE_ST_BLKSIZE 1
#endif
*2 mcpp V.2.7 and later, in STD mode of the GCC-specific-build, handles a 'defined' token generated by macro expansion in a #if line like GCC does. Yet such bug-to-bug handling should not be depended on.
*.S files are provided for each CPU type, so their number is very large and amounts to more than 1000. The files for one CPU type, such as x86, are only a portion of them.
A *.S file is an assembler source with C preprocessing directives such as #if or #include, C comments and C macros inserted. Since an assembler source does not consist of C token sequences, preprocessing it with a C preprocessor carries some risks. To process an assembler source, the preprocessor must pass through characters such as % or $ (which are not used in C except in string literals or character constants) as they are, and retain the presence or absence of spaces as it is. Furthermore, the preprocessor must relax its syntax checking to pass sequences that would be errors in C source. On the other hand, it must process #if lines and macros like C, and must do some sort of error checking, too. What a nuisance! These specifications have no logical basis at all; they are GCC's local and mostly undocumented behaviors, no more.
To illustrate the problems, let me take an example of the following fragment from nptl/sysdeps/unix/sysv/linux/i386/i486/pthread_cond_wait.S.
	.byte 8			# Return address register
				# column.
#ifdef SHARED
	.uleb128 7		# Augmentation value length.
	.byte 0x9b		# Personality: DW_EH_PE_pcrel
				# + DW_EH_PE_sdata4
'#ifdef SHARED' is intended to be a C directive.
On the other hand, the parts starting with # in the latter half of each line are supposed to be comments.
'# column.' is, however, syntactically indistinguishable from an invalid directive, since the # is the first non-white-space character of the line.
'# + DW_EH_PE_sdata4' even causes a syntax error in C.
Another file has the following line, in which a single-quote character appears by itself.
In C, a pair of single quotes is used to quote a character constant, and an unmatched single quote causes a tokenization error.
movl 12(%esp), %eax # that `fixup' takes its parameters in regs.
The above pthread_cond_wait.S also has the following line which is a macro call.
versioned_symbol (libpthread, __pthread_cond_wait, pthread_cond_wait, GLIBC_2_3_2)
The macros are defined as:
# define versioned_symbol(lib, local, symbol, version) \
	versioned_symbol_1 (local, symbol, VERSION_##lib##_##version)	/* include/shlib-compat.h:65 */
# define versioned_symbol_1(local, symbol, name) \
	default_symbol_version (local, symbol, name)	/* include/shlib-compat.h:67 */
# define default_symbol_version(real, name, version) \
	_default_symbol_version(real, name, version)	/* include/libc-symbols.h:398 */
# define _default_symbol_version(real, name, version) \
	.symver real, name##@##@##version	/* include/libc-symbols.h:411 */
#define VERSION_libpthread_GLIBC_2_3_2 GLIBC_2.3.2	/* Created by make: abi-versions.h:145 */
The line is expected to be expanded as:
.symver __pthread_cond_wait, pthread_cond_wait@@GLIBC_2.3.2
The problem is the definition of _default_symbol_version. There is no C token containing '@' (except within a string-literal or character-constant). Though pthread_cond_wait@@GLIBC_2.3.2 is a sequence generated by concatenating several parts with the ## operator, it is not a C token. The concatenation also generates illegal tokens in the middle of its processing. The macro uses the ## operator of C, yet its syntax is far from C.
In order to do this sort of preprocessing on an assembler source, an assembler macro processor should essentially be used.
To process assembler code within C, it is recommended to use the asm() or __asm__() function whenever possible, embedding the assembler code in a string literal, and to name the file *.c rather than *.S.
libc-symbols.h has another version of the above macro, shown below, which is used for *.c files.
This macro can be processed by a Standard-conforming C preprocessor without problem.
# define _default_symbol_version(real, name, version) \
	__asm__ (".symver " #real "," #name "@@" #version)
glibc also has many *.c and *.h files that use asm() or __asm__(). Nevertheless, it has many more *.S files.
If you process an assembler source with a C preprocessor anyway, you should at least use /* */ or // as the comment notation instead of #. In fact, many glibc sources use /* */ or //, whereas some use #.
Having said that, mcpp V.2.6.3 onward largely relaxed its grammar checking in lang-asm mode in order to process these unusual sources, considering that glibc 2.4 has so many *.S files and that out-of-C-grammar sources have increased since 2.1.3.
The problem of stdlib/isomac.c which I referred to at 3.9.4.8 is the same in glibc 2.4.
Also the problem of rpcgen is unchanged.
In addition, glibc 2.4 has the file scripts/versions.awk, which presupposes GCC's peculiar behavior regarding the number of spaces at the top of lines in the preprocessed output. In order to use mcpp or other preprocessors, this file should be revised as follows.
$ diff -c versions.awk*
*** versions.awk	2006-12-13 00:59:56.000000000 +0900
--- versions.awk.orig	2005-03-23 10:46:29.000000000 +0900
***************
*** 50,56 ****
  }
  
  # This matches the beginning of a new version for the current library.
! /^ *[A-Z]/ {
  	if (renamed[actlib "::" $1])
  	  actver = renamed[actlib "::" $1];
  	else if (!versions[actlib "::" $1] && $1 != "GLIBC_PRIVATE") {
--- 50,56 ----
  }
  
  # This matches the beginning of a new version for the current library.
! /^ [A-Za-z_]/ {
  	if (renamed[actlib "::" $1])
  	  actver = renamed[actlib "::" $1];
  	else if (!versions[actlib "::" $1] && $1 != "GLIBC_PRIVATE") {
***************
*** 65,71 ****
  # This matches lines with names to be added to the current version in the
  # current library.  This is the only place where we print something to
  # the intermediate file.
! /^ *[a-z_]/ {
  	sortver=actver
  	# Ensure GLIBC_ versions come always first
  	sub(/^GLIBC_/," GLIBC_",sortver)
--- 65,71 ----
  # This matches lines with names to be added to the current version in the
  # current library.  This is the only place where we print something to
  # the intermediate file.
! /^ / {
  	sortver=actver
  	# Ensure GLIBC_ versions come always first
  	sub(/^GLIBC_/," GLIBC_",sortver)
-isystem and -I- options are no longer used.
On the other hand, the -include option is used extremely frequently. The header file include/libc-symbols.h is included by this option as many as 7000 times. -include is an option to push a #include line out of the source and into the makefile. It makes the source incomplete and is not recommendable.
This is not a problem of glibc but of GCC. While a few important predefined macros were undocumented in GCC 2, they became documented in GCC 3. On the other hand, GCC 3.3 and later predefine many macros, and most of them are undocumented.
debug/tst-chk1.c has a queer part which is not processed as intended by preprocessors other than GCC, unless it is revised as follows.
$ diff -cw tst-chk1.c*
*** tst-chk1.c	2007-01-11 00:31:45.000000000 +0900
--- tst-chk1.c.orig	2005-08-23 00:12:34.000000000 +0900
***************
*** 113,119 ****
  static int
  do_test (void)
  {
-   int arg;
    struct sigaction sa;
    sa.sa_handler = handler;
    sa.sa_flags = 0;
--- 113,118 ----
***************
*** 135,146 ****
    struct A { char buf1[9]; char buf2[1]; } a;
    struct wA { wchar_t buf1[9]; wchar_t buf2[1]; } wa;
  
  #ifdef __USE_FORTIFY_LEVEL
!   arg = (int) __USE_FORTIFY_LEVEL;
  #else
!   arg = 0;
  #endif
!   printf ("Test checking routines at fortify level %d\n", arg);
  
    /* These ops can be done without runtime checking of object size.  */
    memcpy (buf, "abcdefghij", 10);
--- 134,146 ----
    struct A { char buf1[9]; char buf2[1]; } a;
    struct wA { wchar_t buf1[9]; wchar_t buf2[1]; } wa;
  
+   printf ("Test checking routines at fortify level %d\n",
  #ifdef __USE_FORTIFY_LEVEL
!   (int) __USE_FORTIFY_LEVEL
  #else
!   0
  #endif
!   );
  
    /* These ops can be done without runtime checking of object size.  */
    memcpy (buf, "abcdefghij", 10);
Contrary to its innocent look, the original source defines printf() as a macro, and as a consequence the #ifdef and the other directive-like lines are eaten as part of an argument of the macro call. According to the Standards, the result is undefined when an argument of a macro contains a line which would otherwise act as a directive. Since directive processing and macro expansion are done in the same translation phase, it is GCC's own arbitrary choice to process the directives first. In the first place, processing the #ifdef __USE_FORTIFY_LEVEL line itself involves macro processing, so it is extremely arbitrary to process this line and the other directive-like lines first and only then expand the printf() macro. C preprocessing should be done sequentially from the top.
The configure script of glibc also has a portion that relies on GCC's peculiar help message: the script searches the compiler's help message for the "-z relro" option. If you use mcpp as the preprocessor, this portion does not yield the expected result. In spite of this problem, fortunately, compiling and testing glibc completes normally.
By the way, while GCC up to 3.2 appended many useless -A options by default on its invocation, GCC 3.3 onward ceased to do it.
Most of the portability problems I found in glibc 2.1.3 have not been cleared in glibc 2.4, the version six years newer. On the contrary, the number of sources lacking portability has increased.
There have been a few improvements, such as the disappearance of multi-line string literals, of the -isystem and -I- options, and of the -A options on the GCC side.
Meanwhile, sources with unportable features have largely increased: #include_next, GCC2-spec variadic macros and their calls without the variable argument, macro definitions with the 'defined' token in their replacement-lists, *.S files and the -include option. Macro calls with an empty argument have also increased. Above all, it is most annoying that writings which do not correspond to Standard C one-to-one, and hence cannot easily be converted into portable ones, have increased.
All of these are problems of dependency on GCC's local specifications and undocumented behavior. In a large-scale piece of software like glibc, once such unportable sources are created, it becomes difficult to revise them, because many source files are interrelated. As a consequence, the same writings tend to be inherited for years, and even new sources are written to suit the old interfaces. For example, the fact that only GCC2-spec variadic macros are used, and neither the C99-spec nor the GCC3-spec ones at all, shows this relationship directly. Besides, even when some unportable parts in a few sources are revised, old unportable coding often appears anew in other sources at the same time. The old style of writing is not easily cleared away.
On the other hand, a change in GCC's behavior breaks many sources, and the possible influence grows with time, so it becomes ever more difficult for GCC to change its behavior. I think that both GCC and glibc need to tidy up their old local specifications and old interfaces drastically in the near future.
On Linux, the system compiler is GCC and the standard library is glibc. In these circumstances, there are some system headers which presuppose only GCC. They are obstacles to using compiling tools other than GCC, such as the compiler-independent-build of mcpp. For example, stddef.h and some other Standard header files are located only in GCC's version-specific include directory, and are not found in /usr/include. These are rude deficiencies of the system header structure, and mcpp needs some workarounds for them.
On Linux, GCC installs a version-specific include directory such as /usr/lib/gcc-lib/SYSTEM/VERSION/include, where the Standard headers stddef.h, limits.h and some others are located. These headers, and GCC's behavior on them, are queer. The problems on CygWIN are the same as on Linux. Mac OS X also has a few problems with some Standard headers.
In the first place, on Linux, five of the Standard C header files, float.h, iso646.h, stdarg.h, stdbool.h and stddef.h, are located only in the GCC version-specific directory, and are found neither in /usr/include nor in /usr/local/include. The system headers on Linux seem to more or less intend that compiler systems other than GCC use only /usr/include, while GCC uses its version-specific directory in addition to /usr/include. In fact, /usr/include lacks some Standard headers, and that is the problem for non-GCC compilers and preprocessors.
If a non-GCC preprocessor also uses the GCC version-specific directory, then in the limits.h in this directory the preprocessor encounters #include_next, which is a GCC-specific directive. If that is the case, why doesn't the preprocessor implement #include_next? Even then, this limits.h causes a problem, because it is not cleanly written. What is worse, GCC V.3.3 or later practically predefines by itself the macros that limits.h is supposed to define, hence the header is useless for other preprocessors.
Besides, GCC itself shows queer behavior with #include_next in this header.
Although these problems are complicated to explain, I will describe them here, because they have been neglected for years for some reason.
Note that only the compiler-independent-build of mcpp suffers from this problem. The GCC-specific-build is not affected.
The include directories for GCC are typically set as:
/usr/local/include
/usr/lib/gcc-lib/SYSTEM/VERSION/include
/usr/include
These are searched from top to bottom. The second is the GCC-specific include directory. SYSTEM is i386-vine-linux, i386-redhat-linux or the like, and VERSION is 3.3.2, 3.4.3 or the like. If you install another version of GCC into /usr/local, the /usr/lib/gcc-lib part above becomes /usr/local/lib/gcc. In C++, some other directories are set with higher priority than /usr/local/include. For GCC V.3.* and 4.*, those are:
/usr/include/c++/VERSION
/usr/include/c++/VERSION/SYSTEM
/usr/include/c++/VERSION/backward
The names of these directories look GCC-specific; nevertheless, no other C++ standard directories exist, so other preprocessors have no choice but to use these. For GCC 2.95, the C++ include directory was:
/usr/include/g++-3
In addition, the directories specified by -I option or by environment variables are prepended to the list.
Let me take the example of limits.h in C on GCC V.3.3 or later, focusing on the definition of LONG_MAX in order to keep the explanation simple. There are two limits.h: one in /usr/include and another in the version-specific directory.
#include <limits.h>
By this line, GCC includes /usr/lib/gcc-lib/SYSTEM/VERSION/include/limits.h. This header file starts as:
#ifndef _GCC_LIMITS_H_
#define _GCC_LIMITS_H_
#ifndef _LIBC_LIMITS_H_
#include "syslimits.h"
#endif
Then, GCC includes /usr/lib/gcc-lib/SYSTEM/VERSION/include/syslimits.h which is a short file as:
#define _GCC_NEXT_LIMITS_H
#include_next <limits.h>
#undef _GCC_NEXT_LIMITS_H
Now, limits.h is included again. Which limits.h? Since this directive is #include_next, it should skip /usr/lib/gcc-lib/SYSTEM/VERSION/include and search /usr/include. GCC's cpp.info says:
This directive works like `#include' except in searching for the specified file: it starts searching the list of header file directories _after_ the directory in which the current file was found.
In fact, however, GCC does not include /usr/include/limits.h, but includes /usr/lib/gcc-lib/SYSTEM/VERSION/include/limits.h again somehow.
This time _GCC_LIMITS_H_ has been defined already, so the block beginning with the line:
#ifndef _GCC_LIMITS_H_
is skipped, and the next block is evaluated:
#else
#ifdef _GCC_NEXT_LIMITS_H
#include_next <limits.h>
#endif
#endif
This is exactly the same #include_next <limits.h> that was found in /usr/lib/gcc-lib/SYSTEM/VERSION/include/syslimits.h. Does GCC again include /usr/lib/gcc-lib/SYSTEM/VERSION/include/limits.h, the current file, as it did before, and run into infinite recursion? No, it does not; this time it includes /usr/include/limits.h. This behavior of GCC is beyond my understanding.
In /usr/include/limits.h, <features.h> and some other headers are included. Also, /usr/include/limits.h has a block beginning with the line:
#if !defined __GNUC__ || __GNUC__ < 2
In this block, <bits/wordsize.h> is included, and the macros required by the Standard are defined depending on whether the wordsize is 32 bits or 64 bits. For example, if the wordsize is 32 bits, LONG_MAX is defined as:
#define LONG_MAX 2147483647L
Of course, GCC skips this block. Then, reaching the end of this file, it returns to /usr/lib/gcc-lib/SYSTEM/VERSION/include/limits.h. When this second inclusion of that file ends, it returns to /usr/lib/gcc-lib/SYSTEM/VERSION/include/syslimits.h. Then this file ends too, and GCC returns to the first inclusion of /usr/lib/gcc-lib/SYSTEM/VERSION/include/limits.h. In this file, after the part cited above, there are definitions of the Standard-required macros. For instance, LONG_MAX is defined as:
#undef LONG_MAX
#define LONG_MAX __LONG_MAX__
Then, the file ends.
#include <limits.h>
The processing of this line has now ended. After all, LONG_MAX is defined to __LONG_MAX__, and that is all. What is __LONG_MAX__? As a matter of fact, GCC V.3.3 or later predefines many macros, including __LONG_MAX__, which is predefined to 2147483647L on a 32-bit system. The situation is almost the same for the other Standard-required macros as for LONG_MAX, because they are defined using the predefined ones. If so, what is the purpose of these complicated header files and this #include_next handling at all?
The behavior of GCC V.2.95, V.3.2, V.3.4, V.4.0 and V.4.1 on #include_next is the same as V.3.3. That is to say:
#include_next <limits.h>
by this line in /usr/lib/gcc-lib/SYSTEM/VERSION/include/syslimits.h, GCC includes /usr/lib/gcc-lib/SYSTEM/VERSION/include/limits.h, and by the same line in this file:
#include_next <limits.h>
it includes /usr/include/limits.h. As a result, in processing the line:
#include <limits.h>
/usr/lib/gcc-lib/SYSTEM/VERSION/include/limits.h is included twice. This duplicated inclusion happens to produce the same result; nevertheless, it is redundant, and above all the behavior differs from the specification and is not consistent. In addition, this part of the file would be redundant if the behavior accorded with the specification:
#else
#ifdef _GCC_NEXT_LIMITS_H
#include_next <limits.h>
#endif
Now, what happens to a compiler or preprocessor other than GCC that uses the Linux standard headers? stddef.h and some other Standard headers are found neither in /usr/include nor in /usr/local/include. If so, how about also using the GCC version-specific directory?
#include <limits.h>
By this line, the preprocessor includes /usr/lib/gcc-lib/SYSTEM/VERSION/include/limits.h, and from this file it includes /usr/lib/gcc-lib/SYSTEM/VERSION/include/syslimits.h, and in this file, it sees the line:
#include_next <limits.h>
Then, how about implementing #include_next? If #include_next is implemented according to its specification, by this line the preprocessor searches the "next" include directory, /usr/include, and includes /usr/include/limits.h. Then this non-GCC preprocessor processes the block beginning with the line:
#if !defined __GNUC__ || __GNUC__ < 2
In this block it defines LONG_MAX as:
#define LONG_MAX 2147483647L
and defines also the other macros appropriately. Then, it ends this file, and returns to /usr/lib/gcc-lib/SYSTEM/VERSION/include/syslimits.h. Then, it ends this file, and returns to /usr/lib/gcc-lib/SYSTEM/VERSION/include/limits.h. And it encounters these lines:
#undef LONG_MAX
#define LONG_MAX __LONG_MAX__
At the end of the long run, all the correct definitions are canceled, and they become the undefined name __LONG_MAX__ or such!
Up to GCC V.3.2, the corresponding part of version specific limits.h had the lines like:
#define __LONG_MAX__ 2147483647L
Hence, the canceled macros were redefined correctly. Although most of the processing was useless, the results were correct. With the header files of V.3.3 or later, however, a non-GCC preprocessor is led around here and there only to end up with vain results.
The problems are summarized as below: *1, *2, *3, *4
Under these problems lies the excessively complicated system header structure. The extension directive #include_next adds to the complication. The use of this directive is very limited: though GCC and glibc use it when compiling and installing themselves, it does not appear in the installed system headers except in limits.h, and that rare example in limits.h causes GCC the confusion described above. This raises a question about the reason for its existence.
Anyway, the compiler-independent-build of mcpp needs the following workarounds for the present. To avoid confusion, the compiler-independent-build neither implements #include_next nor uses the GCC-specific include directories.
For the GCC-specific-build of mcpp, no special setting is required, because it has the list of GCC-specific include directories, implements #include_next according to its specification, and predefines the macros as GCC does.
Note:
*1 I have checked the descriptions in this section 3.9.9 on Linux / GCC 2.95.3, 3.2, 3.3.2, 3.4.3, 4.0.2, 4.1.1, 4.3.0 and on CygWIN / GCC 2.95.3, 3.4.4. On CygWIN, the behavior on #include_next followed the specification with GCC 2.95.3, but with 3.4.4 it changed to the same behavior as on Linux. The C++ include directory on CygWIN was /usr/include/g++-3 with 2.95.3, while with 3.4.4 it is /usr/lib/gcc/i686-pc-cygwin/3.4.4/include/c++ and its sub-directories.
*2 On FreeBSD 6.2 or 6.3 and its bundled GCC 3.4.6, all the Standard C headers are present in /usr/include, #include_next is not used in any system header, and no GCC-specific C include directory exists. However, the C++ include directories are GCC-version-dependent, such as /usr/include/c++/3.4 and /usr/include/c++/3.4/backward.
Even on FreeBSD, installing another version of GCC creates a GCC-version-specific include directory. Most of the headers in that directory are redundant. However, the headers in /usr/include remain unchanged.
*3 On Mac OS X Leopard / Apple-GCC 4.0.1, as on Linux, there is a GCC-version-specific include directory, #include_next is used in limits.h and a few other headers, and two limits.h are found as well. However, the #include_next in syslimits.h has been deleted by Apple. float.h, iso646.h, stdarg.h, stdbool.h and stddef.h are all found in /usr/include, hence not so many special settings are necessary for mcpp. But float.h and stdarg.h are written only for GCC and Metrowerks (for powerpc), so if you use them with mcpp, you must rewrite float.h yourself and make stdarg.h include the GCC-version-specific one. Note that some definitions in float.h differ between x86 and powerpc.
*4 On MinGW / GCC 3.4.*, though the include directories and their precedence differ from the other systems, GCC's behavior on #include_next is the same, and again some Standard headers are not in the standard include directory /mingw/include but in the version-specific directory.
*5 float.h for an i386 system can be written as follows, referring to GCC's settings:
/* float.h */
#ifndef _FLOAT_H___
#define _FLOAT_H___
#define FLT_ROUNDS 1
#define FLT_RADIX 2
#define FLT_MANT_DIG 24
#define DBL_MANT_DIG 53
#define LDBL_MANT_DIG 64
#define FLT_DIG 6
#define DBL_DIG 15
#define LDBL_DIG 18
#define FLT_MIN_EXP (-125)
#define DBL_MIN_EXP (-1021)
#define LDBL_MIN_EXP (-16381)
#define FLT_MIN_10_EXP (-37)
#define DBL_MIN_10_EXP (-307)
#define LDBL_MIN_10_EXP (-4931)
#define FLT_MAX_EXP 128
#define DBL_MAX_EXP 1024
#define LDBL_MAX_EXP 16384
#define FLT_MAX_10_EXP 38
#define DBL_MAX_10_EXP 308
#define LDBL_MAX_10_EXP 4932
#define FLT_MAX 3.40282347e+38F
#define DBL_MAX 1.7976931348623157e+308
#define LDBL_MAX 1.18973149535723176502e+4932L
#define FLT_EPSILON 1.19209290e-7F
#define DBL_EPSILON 2.2204460492503131e-16
#define LDBL_EPSILON 1.08420217248550443401e-19L
#define FLT_MIN 1.17549435e-38F
#define DBL_MIN 2.2250738585072014e-308
#define LDBL_MIN 3.36210314311209350626e-4932L
#if defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
#define FLT_EVAL_METHOD 2
#define DECIMAL_DIG 21
#endif /* C99 */
#endif /* _FLOAT_H___ */
With V.2.7, mcpp began to support Mac OS X / GCC. This section describes the problems of that system found through mcpp. The author, however, does not know the system very well yet: he has only compiled mcpp itself and firefox on it, and knows nothing about Objective C or Objective C++.
Since GCC is practically the only compiler on this system now, some dependencies on GCC-local specs are found in some of its system headers. Such dependencies are not as numerous as on Linux, maybe because its standard library is not glibc, but they are not as few as on FreeBSD. Some tidying up is desirable.
Another characteristic of this system is that the system compiler is a GCC largely modified and extended by Apple. In the system headers and in some of Apple's sources on Mac OS X, dependencies on Apple-GCC-local specs are more conspicuous than those on general-GCC-local specs. In particular, the extended specs to support both Intel-Mac and PowerPC-Mac on one machine are the most characteristic.
Here, we refer to the system of Mac OS X Leopard / Apple-GCC 4.0.1.
The GCC-local directive #include_next is not frequent, but it is found in float.h, stdarg.h and varargs.h in /usr/include/ and in the files of the same name in /Developer/SDKs/MacOSX10.*.sdk/usr/include/.
All of them serve to include different real header files depending on whether the compiler is GCC or Metrowerks.
When the compiler is GCC, stdarg.h, for example, does '#include_next <stdarg.h>'.
The limits.h in the GCC-version-specific include directory has #include_next as on Linux, but the #include_next in syslimits.h has been removed and the file has been tidied up a bit.
Though this directive is used modestly, it is a problem that float.h and stdarg.h presuppose only GCC and Metrowerks.
They could be written more portably, as on FreeBSD. *1
In addition, a #include_next for GCC in a header in /usr/include is nonsense, because the priority of that include directory is lower than that of the GCC-version-specific one.
Consequently, this #include_next is never executed.
Another GCC-local directive, #warning, is sometimes found in objc/, wx-2.8/wx/ and a few other directories in /usr/include/, and in their corresponding directories in /Developer/SDKs/MacOSX*.sdk/usr/include/.
Most of these directives are warnings against obsolete or deprecated files or usages.
backward_warning.h in /usr/include/c++/VERSION/backward/ and its corresponding file in /Developer/SDKs/MacOSX*.sdk/ exist to issue #warning for these deprecated headers.
And all the headers in those directories include this header.
This is the same as on Linux or FreeBSD.
Note:
*1 For how to use these headers with the compiler-independent-build of mcpp, refer to 3.9.9.4 and its note 3.
/usr/include/sys/cdefs.h and its corresponding file of the same name in /Developer/SDKs/MacOSX*.sdk/ have a macro definition as:
#define __DARWIN_NO_LONG_LONG   (defined(__STRICT_ANSI__) \
                                && (__STDC_VERSION__-0 < 199901L) \
                                && !defined(__GNUG__))
And it is used in stdlib.h and a few others as:
#if __DARWIN_NO_LONG_LONG
This macro should be defined as: *1
#if defined(__STRICT_ANSI__) \
        && (__STDC_VERSION__-0 < 199901L) \
        && !defined(__GNUG__)
#define __DARWIN_NO_LONG_LONG 1
#endif
Note:
*1 As for its reason, see 3.9.4.6 and 3.9.8.6.
gssapi.h, krb5.h, profile.h in /System/Library/Frameworks/Kerberos.framework/Headers have queer #endif lines like:
#endif \* __KERBEROS5__ */
This \* __KERBEROS5__ */ seems to be intended as a comment. I cannot understand why they had to invent such a notation. Though GCC usually warns about it, Apple-GCC does not issue any warning even if -pedantic or other options are specified. Apple-GCC does not warn in the following case, either. It still trails a sense of pre-C90.
#endif __KERBEROS5__
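The conforming way to annotate such #endif lines is an ordinary comment, for example:

#ifdef __KERBEROS5__
/* ... */
#endif /* __KERBEROS5__ */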
As far as the compilation of the firefox 3.0b3pre source is concerned, none of the following special usages of macros, which are frequently found in glibc sources and Linux system headers, is found in the Mac OS X system headers included from the firefox source.
Apple-GCC has some peculiar specifications different from the general GCC.
Specs to generate binaries for both Intel-Mac and PowerPC-Mac on either machine
Mac OS X has a pair of compilers, for x86 and for ppc. (One is a native compiler, the other a cross compiler.) This pair of Apple-GCCs has its own option, -arch. If you specify multiple CPUs, as in '-arch i386 -arch ppc', gcc is invoked repeatedly, binaries for the specified CPUs are generated, and a "universal binary" which bundles all of them is created. They also have another peculiar option, -mmacosx-version-min=. You can use this option along with the -isysroot or --sysroot option to widen, to some extent, the compatibility of the binary with older versions of Mac OS X. These specs are convenient for making a binary package for Mac OS X.
As for preprocessing, you should remember that some predefined macros differ depending on the CPU specified.
"framework" directories
Mac OS X has "framework" directories inherited from NeXTstep. Framework is a hierarchical directory that contains shared resources such as header files, library, documents, and some other resources. To include a header file in these directories, such a directive is used as:
#include <Kerberos/Kerberos.h>
This format looks the same as:
#include <sys/stat.h>
However, these two have quite different meanings. While the latter includes the file sys/stat.h in some include directory (in this case /usr/include), <Kerberos/Kerberos.h> is not a path-list, and Kerberos is not even a directory name. It refers to the file Kerberos.framework/Headers/Kerberos.h in the framework directory /System/Library/Frameworks. Moreover, Kerberos.framework/Headers is actually a symbolic link to Kerberos.framework/Versions/Current/Headers. This is the simplest case of framework header file location; there are many other far more complex cases.
Who invented such a complex system? It burdens a preprocessor, because the preprocessor has to search for system headers in the framework directories, building and rebuilding path-lists repeatedly. Some headers further include many other headers.
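As a rough illustration of what this search involves in the simplest case described above, the following sketch builds the candidate path for a framework-style include. It is hypothetical code, not mcpp's actual implementation, and it ignores the symbolic links and the more complex cases:

#include <stdio.h>
#include <string.h>

/* Map "Kerberos/Kerberos.h" to
 * "/System/Library/Frameworks/Kerberos.framework/Headers/Kerberos.h".
 * Returns 1 on success, 0 if spec is not of the NAME/FILE form or the
 * buffer is too small.                                                 */
static int framework_path(const char *spec, char *buf, size_t size)
{
    const char *slash = strchr(spec, '/');
    if (slash == NULL)
        return 0;
    int len = snprintf(buf, size,
                       "/System/Library/Frameworks/%.*s.framework/Headers/%s",
                       (int)(slash - spec), spec, slash + 1);
    return len > 0 && (size_t)len < size;
}

int main(void)
{
    char path[256];
    if (framework_path("Kerberos/Kerberos.h", path, sizeof path))
        puts(path);
    return 0;
}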
"header map" file
Xcode.app is an IDE on Mac OS X. It uses a "header map" file, which is a list of include files. One of the tools of Xcode checks the source files, searches for the files to be included, and records the path-lists of those files in a file named *.hmap; Apple-GCC then refers to it instead of the include directories. This is an extended feature of Apple-GCC.
A header map file is a device to lessen the burden of header file searching. However, it is a binary file of a peculiar specification and lacks transparency. To lessen the heavy burden of framework header searching, it would be more desirable to reorganize the framework system itself.
Tokens in #endif line
As shown in the previous section, Apple-GCC does not issue even a warning for whatever junk appears on an #endif line, regardless of the options specified. It is quite an anachronism.
Non-ASCII characters in comments
This is not a problem of GCC, but of the system headers in the framework directories. In many headers, non-ASCII characters are frequently found in comments, such as the copyright mark (0xA9) and other characters of ISO-8859-* (?). They are a nuisance in an environment of multibyte characters, even in comments. A little more consciousness of character encoding is desired. Though characters of this kind are sometimes found in /usr/include on Linux as well, they are far more often found in the framework headers of Mac OS.
I compiled the source of the firefox development version 3.0b3pre (January 2008), i.e. 3.0-beta3-prerelease, with GCC, replacing its preprocessor with mcpp V.2.7, on Linux/x86 + GCC 4.1.2 and on Mac OS X + GCC 4.0.1. mcpp was executed with the -Kv option, passing its output to cc1 (cc1plus). As a result, the compilations completed successfully and the firefox binaries were generated. *1
The preprocessing portability of the firefox source as a whole is rather high. Dependencies on GCC-local specifications, such as those frequently found in glibc sources, are not found very often. It is portable enough to officially support both GCC on Linux and Mac OS X and Visual C++ on Windows.
Preprocessing portability of a source is, however, not necessarily sufficient just because GCC and Visual C pass it. In the sections below, I will examine some problems, sometimes comparing them with the glibc sources. I omit explanations of GCC's problems here to avoid duplication; for those, refer to 3.9.4 and 3.9.8, which also comment on the glibc sources. *2
Note:
*1 I checked out the sources from the CVS repository of mozilla.org. One of the motivations for compiling the firefox source was to test the -K option of mcpp. This option was proposed by Taras Glek, who was working on refactoring of C/C++ sources at mozilla.com. So I also used the firefox source to test the -K option and other behaviors of mcpp. About the -K (-Kv) option, refer to 2.4.
*2 There is a list of coding guidelines for firefox, as below.
But its content is too old.
portable-cpp
The following GCC-local specs, which are sometimes used in glibc sources, are not used in the firefox sources. Though compiling firefox on Linux pulls in system headers, some of which contain, for example, GCC2-spec variadic macros, those are not firefox sources themselves.
The following features are not used even in recent glibc, and are not used in firefox at all.
However, a lot of #include_next directives are found in just one directory: config/system_wrappers/, which is generated by configure. All of the 900 files generated in that directory are short header files of the same pattern. For example, stdio.h is:
#pragma GCC system_header
#pragma GCC visibility push(default)
#include_next <stdio.h>
#pragma GCC visibility pop
This is code to utilize the '#pragma GCC visibility *' directive implemented in GCC 4.*. At the same time, there is the file config/gcc_hidden.h, shown below, which is specified by the -include option for most of the translation units and is read in at the start of each unit.
#pragma GCC visibility push(hidden)
The system_wrappers directory has to be the include directory with the highest priority, so it must be specified as the first argument of the -I option. Apart from this constraint, this usage of #include_next is simple and seems to pose no problem.
On the other hand, for many sources in the nsprpub directory, the '-fvisibility=hidden' option is used instead of '-include gcc_hidden.h', and the headers in system_wrappers are not used. This nsprpub directory seems to be still under reorganization.
Many sources use C99 specifications without specifying C99. GCC uses the "gnu89" spec by default for *.c sources, which is a compromise of C90 plus some C99 specs and GCC-local specs. Some firefox sources use the following C99 specs implicitly, depending on GCC's default behavior.
Empty argument in macro call
Though empty arguments in macro calls are rare in firefox, the following 3 files have them. The only macro actually called with an empty argument is the one named NS_ENSURE_TRUE.
layout/style/nsHTMLStyleSheet.cpp, layout/generic/nsObjectFrame.cpp, intl/uconv/src/nsGREResProperties.cpp
Also the following files in gfx/cairo/cairo/src/ have it. The actual macro is only one: slim_hidden_ulp2.
cairoint.h, cairo-font-face.c, cairo-font-options.c, cairo-ft-font.c, cairo-ft-private.h, cairo-image-surface.c, cairo-matrix.c, cairo-pattern.c, cairo-scaled-font.c, cairo-surface.c, cairo-xlib-surface.c, cairo.c
Though these empty macro arguments are used on Linux, they are not used on Mac OS X. Anyway, these are not tricky ones.
Translation limits beyond C90
The length of identifiers, the nesting level of #include, the number of macro definitions and so forth often exceed the C90 translation limits.
Identifiers longer than 31 bytes are found especially frequently in the directory gfx/cairo/cairo/src/.
Nesting of #include beyond 8 levels and more than 1024 macro definitions are often found, too.
These are almost inevitable on Linux and Mac OS X, since merely including some system headers often reaches these limits.
Using // comment in C source
Some C sources have this type of comment. The list of guidelines prohibits it. However, it causes few problems nowadays.
The above specifications are also available in Visual C 2005 and 2008. Since GCC has the -std=c99 option, we could use it to specify C99 explicitly. Visual C, however, has no option to specify a version of the Standard, so we cannot help using C99 specs implicitly. Therefore, the firefox sources cannot be blamed for using C99 specs implicitly, given the current state of the major compiler systems. *1
By the way, the firefox sources do not use variadic macros for some reason, in spite of using some other C99 specs implicitly. Visual C up to 2003 did not implement variadic macros; is that why firefox did not use the feature? The circumstances have changed since Visual C 2005 implemented them.
Note:
*1 For C++, GCC defaults to the "gnu++98" spec, which is explained as "C++98 plus GCC extensions"; in actuality, however, it has some C99 specs mixed in. Meanwhile, Visual C says that it is based on C90 and C++98 for C and C++ respectively. In actuality, both C and C++ of Visual C have C99 features mixed in, as well as a few Visual C extensions, especially in Visual C 2005 and 2008. Both GCC and Visual C are such mixtures of Standard versions and their own extensions and modifications, which brings about some ambiguities. The absence of an option in Visual C to specify a version of the Standard is the most inconvenient problem.
Object-like macros replaced with function-like macro names are sometimes found in many other programs, and are also found in the firefox sources below, though not frequently.
content/base/src/nsTextFragment.h, modules/libimg/png/mozpngconf.h, modules/libjar/zipstub.h, modules/libpr0n/src/imgLoader.h, nsprpub/pr/include/obsolete/protypes.h, nsprpub/pr/include/private/primpl.h, nsprpub/pr/include/prtypes.h, parser/expat/lib/xmlparse.c, security/nss/lib/jar/jarver.c, security/nss/lib/util/secport.h, xpcom/glue/nsISupportsImpl.h
In addition, building firefox creates, in a directory for the development environment, many links to header files, which are copied into /usr/include/firefox-VERSION/ when you install the development environment for firefox. Some of these are symbolic links to the files listed above. mozilla-config.h, which is created by configure, also has a macro definition of this kind.
These macros should be written as function-like macros to improve readability. In fact, many other macros in the firefox sources are defined as function-like macros replaced with another function-like macro taking the same arguments. There are coding style differences among the authors; it would be better to set a coding guideline on this matter.
A macro with the 'defined' token in its replacement text, sometimes found in glibc, is found only once in firefox.
modules/oji/src/nsJVMConfigManagerUnix.cpp defines a macro as:
#define NS_COMPILER_GNUC3 defined(__GXX_ABI_VERSION) && \
                          (__GXX_ABI_VERSION >= 102) /* G++ V3 ABI */

and uses it in itself as:
#if (NS_COMPILER_GNUC3)
This macro should be removed and the #if line should be rewritten as:
#if defined(__GXX_ABI_VERSION) && (__GXX_ABI_VERSION >= 102) /* G++ V3 ABI */
Maybe this file is meant to be compiled only by GCC; nevertheless, it is not good practice to depend on a preprocessor's wrong implementation.
Note:
*1 The GCC-specific-build of mcpp V.2.7 enabled GCC-like handling of 'defined' in a macro on a #if line. But mcpp warns about it, and you had better revise the code.
The following files in the jpeg directory have #endif lines with comments lacking a comment mark. All of these lines appeared in some recent updates.
jmorecfg.h, jconfig.h, jdapimin.c, jdcolor.c, jdmaster.c
Though this style of writing was frequently seen in some sources for UNIX-like systems until the middle of the 1990s, it has almost completely disappeared nowadays and cannot be found even in the glibc sources. GCC warns about it, as expected. For all that, these sources adopt such a writing style, and only Apple-GCC does not warn about it. Have these sources been edited on Mac OS?
The assembler sources are written as *.s (*.asm) files, and some of them contain macros, but in principle they do not call for the preprocessor.
On Mac OS X / ppc, however, there is a single exception: xpcom/reflect/xptcall/src/md/unix/xptcinvoke_asm_ppc_rhapsody.s calls for the preprocessor, because it has a #if block containing only one line. The block seems to be unnecessary by now.
Compilation of firefox begins with configure, which generates mozilla-config.h. In the compilation of most of the sources, this header file is specified by the -include option; config/gcc_hidden.h is also specified similarly. Why don't the sources #include these headers at their top?
Some silent redefinitions of macros are found, though they are rare.
In the compilation of most of the sources, the -DZLIB_INTERNAL option is specified; in other words, the macro is defined as 1. It is, however, defined by some sources in modules/zlib/src/ as:
#define ZLIB_INTERNAL
That is, it is defined with an empty (zero-token) replacement. And it is used as:
# ifdef ZLIB_INTERNAL
Though the difference does not produce a different result in this case, differing definitions of the same macro are not recommended. Maybe the option given by the Makefile is redundant.
xpcom/build/nsXPCOMPrivate.h defines the macro MAXPATHLEN differently from /usr/include/sys/param.h. This discrepancy stems from an inconsistency among the related header files about whether to include /usr/include/sys/param.h or not. The related header files should be reorganized.
On Mac OS X, the assert macro, once defined in /usr/include/assert.h, is redefined in netwerk/dns/src/nsIDNKitInterface.h. '#undef assert' should precede it.
On Mac OS X, in modules/libreg/src/VerReg.c, a queer redefinition of the macro VR_FILE_SEP occurs as:
#if defined(XP_MAC) || defined(XP_MACOSX)
#define VR_FILE_SEP ':'
#endif
#ifdef XP_UNIX
#define VR_FILE_SEP '/'
#endif
because on Mac OS X, configure defines both XP_MACOSX and XP_UNIX. This redefinition may be an intended one. Anyway, it is misleading. It would be better to write it as below, clearly showing the priority of XP_UNIX.
#ifdef XP_UNIX
#define VR_FILE_SEP '/'
#elif defined(XP_MAC) || defined(XP_MACOSX)
#define VR_FILE_SEP ':'
#endif
The following files have overly long comments spanning several hundred lines or more.
extensions/universalchardet/src/base/Big5Freq.tab, extensions/universalchardet/src/base/EUCKRFreq.tab, intl/unicharutil/src/ignorables_abjadpoints.x-ccmap, layout/generic/punct_marks.ccmap
Especially, in the directories intl/uconv/ucv*/, there are many files with overly long comments. There is even a case of a single comment spanning over 8000 lines! All of these files are named *.uf or *.ut, and are mapping tables between Unicode and the respective Asian encodings, generated automatically by some tool. They do not seem to be C/C++ sources, but they are included from other C++ sources. Most of the contents of these files are comments, which seem to be a sort of document or table for some other tool.
It is not advisable to include long documents or tables in source files. They should be separated from the source files, even if placed in the source tree.
Though these files are used in Linux, they are not used in Mac OS X. On the other hand, on Mac OS X, system headers in framework directories are frequently used, and some of them are queer files mostly occupied with comments.
The encoding of newlines in the firefox sources is [LF]. A few files, however, have a small block of lines ending with [CR][LF]. All of these [CR][LF] lines seem to be fragments inserted as patches. Some conversion tool should be used when one edits source files on Windows.
I used mcpp to preprocess some sample programs provided by Visual C++ 2003, 2005 and 2008. The system headers seem to have only a few compatibility problems shown below. These problems are often seen in other compile systems and do not have a serious impact on preprocessing.
Although the Linux system headers and glibc sources often contain coding based on GCC-local specifications, the Visual C++ system headers contain only a little Visual C++-local coding.
I found only one outrageous macro in Visual C++. Vc7/PlatformSDK/Include/WTypes.h has the following macro definition: *1
#define _VARIANT_BOOL /##/
This macro definition is used in oaidl.h and propidl.h in Vc7/PlatformSDK/Include/ as follows:
_VARIANT_BOOL bool;
What does this macro aim at?
This macro seems to expect _VARIANT_BOOL to be expanded into // and the line to be commented out. Actually, this expectation is met in Visual C cl.exe !
In the first place, // is not a token (preprocessing-token). Macro definitions should be processed and expanded after the source is parsed into tokens and each comment is converted into one space. Therefore, it is irrational for a macro to generate a comment. When this macro is expanded into //, the result is undefined, because // is not a valid preprocessing-token.
In order to use these header files with mcpp, comment out these macro definitions and change many _VARIANT_BOOL occurrences as follows:
#if !__STDC__ && (_MSC_VER <= 1000)
    _VARIANT_BOOL bool;
#endif
If you use only Visual C 5.0 or later, this line can be simply commented out as follows:
// _VARIANT_BOOL bool;
This macro is, indeed, out of the question; however, it is Visual C/cl.exe, which allows such an outrageous macro to be preprocessed into a comment, that should be blamed. This example reveals the following serious problems in this preprocessor:
Probably, the cl.exe preprocessor was developed based on a very old, somewhat character-based preprocessor. It is easy to presume that the preprocessor has been upgraded by repeated partial revision of that old preprocessor.
There are many preprocessors which presumably have a very old program structure. GCC 2/cpp, shown in 3.9, is one such preprocessor. Repeated partial revision of such a preprocessor only makes its program structure more complicated. However much such revision may be made, there are limits to the quality such a preprocessor can achieve. Unless the old source is given up and completely rewritten, a clear and well-structured preprocessor cannot be obtained.
At GCC 3/cpp0, a total revision was made to GCC 2; the entire source code was rewritten. So, GCC 3/cpp0 has become quite different from GCC 2. Although mcpp was initially developed based on the source of an old preprocessor, DECUS cpp, the source code was totally rewritten soon.
Note:
*1 Visual C++ 2005 Express Edition does not contain Platform SDK. However, you can download "Platform SDK for Windows 2003", and use it with VC2005. wtypes.h, oaidl.h, propidl.h in this PlatformSDK/Include directory also have the same macro definition and its usage as VC2003 Platform SDK.
Also on Visual C++ 2008, in the header files of the same name in 'Microsoft SDKs/Windows/v6.0A/Include' directory, that macro definition and its usage are quite the same.
Another problem is the use of '$' in identifiers. Its use in macro names suddenly increased in the system headers of Visual C++ 2008. Though such macros were also found in Visual C++ 2005, they were rare. On Visual C++ 2008, however, they are found here and there.
'Microsoft Visual Studio 9.0/VC/include/sal.h' is the most conspicuous one. This header defines macros for Microsoft's so-called SAL (standard source code annotation language), and has many names containing '$'. This file is included from many standard headers via 'Microsoft Visual Studio 9.0/VC/include/crtdefs.h', so most sources are compiled with these macros without the programmer knowing it.
If you specify the -Za option when invoking the compiler cl, SAL is disabled and all the names with '$' in sal.h disappear. The necessity of this notation is, however, hard to understand. Though GCC also enables '$' in identifiers by default, actual use of it is rarely found nowadays.
Names of this kind are also found in the system headers named specstrings*.h in the 'Microsoft SDKs/Windows/v6.0A/Include' directory. They are included from Windows.h via WinDef.h, and the names with '$' do not disappear even if the -Za option is specified; the option only causes errors. So, you cannot use the -Za option to compile a source which includes Windows.h.
This chapter does not cover all the C preprocessor specifications. For details on Standard C preprocessing, refer to cpp-test.html. For mcpp behaviors in each mode, refer to 2.1. This chapter covers several preprocessor-related specifications, including those called implementation-defined by the Standards. For more details on mcpp's implementation-defined behaviors, see Chapter 5, "Diagnostic Messages".
The header file internal.H defines the values returned by mcpp to a parent process. mcpp returns 0 on success; on error, it returns errno if errno is non-zero, and 1 if errno is 0. Success means that no error has occurred.
This section explains the order in which mcpp searches directories for an include file when it encounters a #include directive.
With the -I- option (-nostdinc option for GCC-specific-build and -X for Visual C-specific-build), the directories specified in 4.4 and later are not searched.
The ANSI C Rationale says the ANSI committee intends to define the current directory as the base directory. I think this is acceptable, in that the base directory is always constant and the specification is clearer. However, some implementations, such as UNIX ones, seem to take the source file's directory as the base, at least for #include "header". The compiler-independent-build of mcpp also takes the source file's directory as the base, following the majority.
This section explains how to construct a header-name pp-token and extract a file name from it.
Evaluation of #if expression depends on the largest integer type of the host compiler (by which mcpp was compiled) and that of the target compiler (which uses mcpp). Since the compiler-independent-build has no target compiler, the type depends only on the host compiler.
mcpp in Standard mode evaluates #if expression in the common largest integer type of the host and target compiler. Nevertheless, mcpp in pre-Standard mode evaluates it in (signed) long.
In the compiler systems having the type "long long", if __STDC_VERSION__ is set to 199901L or higher using the -V199901L option, mcpp evaluates a #if expression in "long long" or "unsigned long long", according to the C99 specification. Although C90 and C++98 stipulate that a #if expression is evaluated in long / unsigned long, mcpp evaluates it in long long / unsigned long long even in C90 or C++98 mode, and issues a warning in case the value overflows the range of long / unsigned long. *1
Visual C and Borland C 5.5 do not have a "long long" type, but have an __int64 type of the same length. So, a #if expression is evaluated as __int64 / unsigned __int64. (However, since LL and ULL suffixes cannot be used in Visual C++ 2002 or earlier and Borland C 5.5, these suffixes must not be used in coding other than #if lines.)
In addition, when you invoke with the -+ option for C++ preprocessing, mcpp evaluates pp-tokens 'true' and 'false' in a #if expression to 1LL (or 1L) and 0LL (or 0L), respectively.
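For instance, the following #if group would be taken when preprocessing as C++ (a minimal sketch):
#if true && !false    /* 'true' evaluates to 1LL (or 1L), 'false' to 0LL (or 0L) */
/* this group is processed */
#endif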
mcpp in Standard mode evaluates #if expression as follows. For a compiler without long long, please read "long long" and "unsigned long long" hereinafter, until the end of 4.5, as "long" and "unsigned long", respectively. For pre-Standard mode read all of them as "long".
Anyway, an integer constant token always has a non-negative value.
In pre-Standard mode, an integer constant token is evaluated within the range of non-negative long. A token beyond that range is diagnosed as an out of range error. All the operations are performed within the range of long.
If both the host and target compilers have the type unsigned long long and the range of unsigned long long of the host is narrower than that of the target, a constant beyond the host's range is diagnosed as an out-of-range error.
If an operation using constant tokens produces a result out of range of long long, an out of range error occurs. If it produces a result out of range of unsigned long long, a warning is issued. This can be applied to intermediate operation results.
Since a bitwise right shift of a negative value or a division operation using it does not provide portability, mcpp issues a warning. If an operation using a mixture of unsigned and signed operands converts a signed negative value to an unsigned positive value, a warning is also issued. How these values are evaluated depends on the specification of the compiler-proper of the host system.
C90 and C++98 make it a rule that a preprocessor evaluates a #if expression in long / unsigned long (in C99, the maximum integer type is used). These specifications are rougher than those for compiler-propers. A (#)if expression is often evaluated differently between the preprocessor and the compiler-proper, especially when sign extension is involved.
In addition, since keywords are not used during Standard C preprocessing, sizeof and casts cannot be used in a #if expression. Of course, neither variables, enumeration constants, nor floating point numbers can be used there. Standard mode allows the "defined" operator in a #if expression, as well as the #elif directive. Except for these differences, mcpp evaluates a #if expression in accordance with the precedence and associativity of operators, just as compiler-propers do. In a binary operation, the usual arithmetic conversion takes place to equalize the types on both sides; if one operand is unsigned long long and the other is long long, both are converted to unsigned long long.
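As a hedged illustration of this usual arithmetic conversion, assuming the 64-bit long long described in 4.6:
#if -1 < 1U
/* not taken: the unsigned operand makes -1 convert to 0xffffffffffffffff
 * (unsigned long long); mcpp warns that a negative value is converted to positive */
#else
/* taken */
#endif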
Note:
*1 mcpp up to V.2.5 evaluated #if expression in C90 and C++98 by long long / unsigned long long internally, and issued an error on overflow of long / unsigned long. From V.2.6 onward, mcpp degraded the error to warning for compatibility with GCC or Visual C.
Constant tokens in a #if expression include identifiers (macros and non-macros), integer tokens and character constants. How character constants are evaluated is implementation-defined and lacks portability. Even (#)if 'const' is sometimes evaluated differently between the preprocessor and the compiler-proper; note that the Standards do not even guarantee that the two evaluate it to the same value.
mcpp in POSTSTD mode does not evaluate a character constant in a #if expression, which is almost meaningless, and makes it an error.
Like other integer constant tokens, mcpp evaluates a character constant in a #if expression within the range of long long or unsigned long long. (In pre-Standard mode, long only.)
A multi-byte character or a wide character is generally evaluated in a 2-byte type, except for the UTF-8 encoding, which is evaluated in a 4-byte type. Since UTF-8 has a variable length, mcpp evaluates it in a 4-byte type. mcpp does not support EUC's 3-byte encoding scheme. (A 3-byte character is recognized as 1 byte + 2 bytes. As a consequence, its value is evaluated correctly.) Although there are some implementations using the 2-byte encoding scheme that define wchar_t as 4 bytes, mcpp has no relevance to wchar_t. The following paragraphs describe 2-byte multi-byte character encodings.
Multi-byte character constants, such as 'X', are evaluated to ((First byte value << 8) + Second byte value). (8 is the value of CHAR_BIT in <limits.h>.) Note that 'X' is used here to designate a multi-byte character. Though 'X' itself is not a multi-byte character, it is used here to avoid character garbling.
Let me take an example of multi-character character constants, such as 'ab', '\x12\x3', and '\x123\x45'. 'a', 'b', '\x12', '\x3' and '\x123' are regarded as one byte. When a multi-character character constant is evaluated, each one byte, starting from the highest one, is evaluated within the range of [0, 0xFF] and combined by shifting it to left by 8. (0xFF is the value of UCHAR_MAX in <limits.h>.) If the value of one escape sequence exceeds 0xFF, an out of range error occurs. Therefore, in the implementation of the ASCII character set, the above three tokens are evaluated to 0x6162, 0x1203 and error, respectively.
L'X' is evaluated to the same value as 'X'. Let me take an example of multi-character wide character constants, such as L'ab', L'\x12\x3', and L'\x123\x45'. L'a', L'b', L'\x12', L'\x3', L'\x123', and L'\x45' are regarded as one wide character. When a multi-character wide character constant is evaluated, each wide character, starting from the highest one, is evaluated within the range of [0, 0xFFFF] and combined by shifting it to left by 16. If the value of one escape sequence exceeds the maximum value of an unsigned 2-byte integer, an out of range error occurs. Therefore, in the implementation of the ASCII character set, the above three tokens are evaluated to 0x00610062, 0x00120003, and 0x01230045, respectively.
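A minimal sketch of these evaluations, assuming an ASCII implementation (the constants are those used in the examples above):
#if 'ab' == 0x6162
/* taken: 'a' (0x61) is shifted left by CHAR_BIT (8) and 'b' (0x62) is added */
#endif
#if L'\x12\x3' == 0x00120003
/* taken: each wide character is evaluated within 16 bits and combined */
#endif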
If the values of a multi-character character constant and a multi-character wide character constant exceed the range of unsigned long long, an out of range error occurs.
With __STDC_VERSION__ or __cplusplus set to 199901L or higher, mcpp evaluates a Universal Character Name (UCN) in the form of \uxxxx or \Uxxxxxxxx as a hex escape sequence. (I know this evaluation is nonsense, but there is no other way.)
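For instance, under this evaluation a test like the following would hold (a sketch, assuming the condition above):
#if L'\u00c0' == 0x00c0    /* the UCN is evaluated like a hex escape sequence */
/* taken */
#endif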
If the compiler-proper of the target compiler system uses a signed char or signed wchar_t, a character constant in a (#)if expression may be evaluated differently between mcpp and compiler-proper. The range that causes a range error may also differ between them. In addition, evaluation of multi-character character constants and multi-byte character constants varies even among preprocessors and among compilers. Standard C does not define whether, with CHAR_BIT set to 8, 'ab' is evaluated to 'a' * 256 +'b' or 'a' + 'b' * 256.
In general, character constants should not be used in an #if expression, as long as you have an alternative method. I think an alternative method always exists.
Standard C stipulates that preprocessing is a process independent of run-time environments and compiler-proper specifications, thus prohibiting the use of sizeof and casts in a #if expression. However, pre-Standard mode allows sizeof (type) in a #if expression. This was done as a part of my effort to add necessary modifications to DECUS cpp, such as adding long long and long double processing, while retaining its original functionality. As for casts, I neither implemented them nor had the will to do so, because it would require troublesome work.
A series of macros beginning with S_, such as S_CHAR, in eval.c define the size of each type. Under a cross implementation, these macros must be modified to specify the sizes, as integer values, of the types used on the target system.
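As a rough sketch only: the macro names other than S_CHAR and the values below are illustrative assumptions; consult eval.c for the actual names required by your port.
/* hypothetical example for a target with 4-byte int and 8-byte long */
#define S_CHAR  1
#define S_INT   4   /* assumed name, for illustration */
#define S_LONG  8   /* assumed name, for illustration */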
I have to admit that mcpp does not provide the full functionality of #if sizeof. mcpp just ignores the word "signed" or "unsigned" preceding char, short, int, long, and long long when it appears in a #if sizeof. Also, mcpp does not support sizeof (void *). I know this is a half-hearted implementation, but I do not want to increase the number of flags in system.H in vain for this non-conforming function. I initially thought of removing the sizeof code from the original version, because I did not intend to support casts at all, but on second thought, I decided to make a small amount of modifications to make use of the existing code.
mcpp in principle compresses a white-space sequence serving as a token separator, excluding <newline>, into one space character during tokenization in translation phase 3. If the -k or -K option is specified in STD mode, however, it outputs horizontal white spaces as they are, without compressing them. It also deletes a white-space sequence at the end of a line.
A white-space sequence at the beginning of a line is deleted in POSTSTD mode, and output as it is in the other modes. The latter is a special treatment for the convenience of human reading. *1
This compression and deletion occurs during the intermediate phase. The next phase 4 involves macro expansion and preprocess-directive-line processing. Macro expansion may sometimes produce several space characters before and after the macro. Of course, the number of space characters does not affect compilation results.
Standard C says that whether an implementation compresses a white-space sequence into one space character during translation phase 3 is implementation-defined, but you usually do not have to worry about this. A <vertical-tab> or <form-feed> in a preprocessor directive line may adversely affect portability, since this is undefined in Standard C. mcpp converts it to one space character.
Note:
*1 Up to V.2.6.3, mcpp squeezed line-top white spaces into one space. V.2.6.4 changed this behavior.
This section describes the specifications of the mcpp executables generated when the DIFfile and makefile for each compiler system in the noconfig directory are used to compile mcpp with the default settings. When the configure script is used to compile mcpp, the generated mcpp may differ depending on configure's results; however, as long as the OS and compiler system versions are the same, the generated mcpps would be the same except for include directories.
The compiler-independent-build of mcpp has constant specifications regardless of the compiler system with which mcpp was compiled, except for a few features dependent on the OS and CPU.
There are compiler-independent-build and compiler-specific-build for mcpp executables, and each executable has several behavioral modes. For those, refer to 2.1. This section describes the settings centering on STD mode.
DIFfiles and makefiles are for the following compiler systems:
FreeBSD 6.3: GCC V.3.4
Vine Linux 4.2 / x86: GCC V.2.95, V.3.2, V.3.3, V.3.4, V.4.1
Debian GNU/Linux 4.0 / x86: GCC V.4.1
Ubuntu Linux 8.04 / x86_64: GCC V.4.2
Fedora Linux 9 / x86: GCC V.4.3
Mac OS X Leopard / x86: GCC V.4.0
CygWIN 1.3.10 (GCC V.2.95), 1.5.18 (GCC 3.4)
MinGW & MSYS: GCC 3.4
WIN32: LCC-Win32 2003-08, 2006-03
WIN32: Visual C++ 2003, 2005, 2008
WIN32: Borland C++ V.5.5
In addition, for the following compilers, which I don't have, the difference files contributed by some users are contained here.
WIN32: Visual C++ V.6.0, 2002
WIN32: Borland C++ V.5.9 (C++Builder 2007)
Of all the macros defined in noconfig.H and system.H, the settings of those mentioned below are identical among all mcpp executables, regardless of their compiler systems.
Each mcpp is compiled with DIGRAPHS_INIT == FALSE, so digraphs are enabled only when the -2 (-digraphs) option is specified.
With TRIGRAPHS_INIT == FALSE, trigraphs are enabled only with the -3 (-trigraphs) option.
With OK_UCN set to TRUE, Universal Character Name (UCN) can be used in C99 and C++.
With OK_MBIDENT set to FALSE, multi-byte-characters cannot be used in identifiers.
With STDC set to 1, the initial value of __STDC__ is 1.
The translation limits are set as follows.
NMACPARS (Maximum number of macro arguments): 255
NEXP (Maximum number of nested levels of #if expressions): 256
BLK_NEST (Maximum number of nested levels of #if sections): 256
RESCAN_LIMIT (Maximum number of nested levels of macro rescans): 64
IDMAX (Valid length of identifier): 1024
INCLUDE_NEST (Maximum number of #include nest levels): 256
NBUFF (Maximum length of a source line) *1: 65536
NWORK (Maximum length of an output line): 65536
NMACWORK (Size of internal buffers used for macro expansion): 262144
On GCC-specific-build and Visual C-specific-build, however, NMACWORK is used as the maximum length of an output line.
The following macro differs by OS, regardless of the build type.
MBCHAR (Default encoding of multibyte character):
Linux, FreeBSD, Mac OS X: EUC-JP
WIN32, CygWIN, MinGW: SJIS
The settings of the macros below are different among compiler systems.
STDC_VERSION (Initial value of __STDC_VERSION__):
Compiler-independent, GCC 2: 199409L
Others: 0L
HAVE_DIGRAPHS (Are digraphs output as they are?):
Compiler-independent, GCC, Visual C: TRUE
Others: FALSE
EXPAND_PRAGMA (Is a #pragma line macro-expanded in C99?):
Visual C, Borland C: TRUE
Others: FALSE
GCC 2.7-2.95 defines __STDC_VERSION__ as 199409L. In GCC V.3.* and V.4.*, however, __STDC_VERSION__ is no longer predefined by default and is now defined in accordance with an execution option. The mcpp settings for GCC follow these variations.
If STDC_VERSION is set to 0L, mcpp predefines __STDC_VERSION__ as 0L. So, specifying the -V199409L option sets __STDC__ and __STDC_VERSION__ to 1 and 199409L, respectively, allows only the predefined macros that begin with '_', and thus puts mcpp into a strictly C95-conforming mode. The -V199901L option specifies C99 mode.
In C99 mode, mcpp predefines __STDC_HOSTED__ as 1.
mcpp itself predefines neither __STDC_ISO_10646__, __STDC_IEC_559__ nor __STDC_IEC_559_COMPLEX__. These values are compiler-system-specific. In glibc 2 / x86, the system header defines __STDC_IEC_559__ and __STDC_IEC_559_COMPLEX__ as 1. Other compiler systems do not define them.
If HAVE_DIGRAPHS is set to FALSE, digraphs are output after being converted to the usual tokens.
The argument of a #pragma line beginning with STDC, MCPP or GCC is never macro-expanded, even if EXPAND_PRAGMA == TRUE.
Include directories are set as follows:
System-specific or site-specific directories under UNIX-like OSs are as follows (common to compiler-independent-build and compiler-specific-build):
FreeBSD, Linux, Mac OS X, CygWIN: /usr/include, /usr/local/include
Mac OS X also has the framework directories, set to /System/Library/Frameworks and /Library/Frameworks by default.
On MinGW, /mingw/include is the default include directory.
The CygWIN GCC-specific-build changes /usr/include to /usr/include/mingw with the -mno-cygwin option.
For the implementation-specific directories that vary among compiler systems and their versions, see the DIFfiles. The compiler-independent-build does not set implementation-specific directories. mcpp for the compiler systems on Windows does not preset any directory but uses the environment variables: INCLUDE, CPLUS_INCLUDE. These environment variables are used by the compiler-independent-build too.
If these default settings do not suit you, change settings to recompile mcpp, or use environment variables or the -I option.
When the length of a preprocessed line exceeds NWORK-1, mcpp generally divides it into several lines so that each line length becomes equal to or less than NWORK-1. A string literal length must be equal to or less than NWORK-2. The GCC-specific-build and Visual C-specific-build of mcpp, however, do not divide output lines.
Again for confirmation, the macros mentioned above in italics are used only to compile mcpp, and are not built-in macros in an mcpp executable.
If you invoke mcpp without an input file and enter '#pragma MCPP put_defines', the built-in macros will be displayed.
With __STDC__ set to 1 or higher, the macros that do not begin with '_' are deleted. The -N (-undef) option deletes all the macros other than __MCPP. After -N, you can use -D to define macros over again. When you use a compiler system version different from those specified here, -N and -D allow you to redefine your version macro without recompiling mcpp. The -D option allows you to redefine a particular macro without using -N or -U.
When you use the -+ (-lang-c++) option to specify C++ preprocessing, __cplusplus is predefined with its initial value of 1L. In addition, some other macros are also predefined:
Although there are some predefined macros in GCC, those predefined by GCC itself were few until GCC V.3.2; most of them are passed from gcc to cpp by the -D option. So, it is not necessary for mcpp to define them for compatibility. However, mcpp predefines these macros so that it can be used in a stand-alone manner, such as for pre-preprocessing.
GCC V.3.3 and later suddenly predefines 60 or 70 macros. The GCC-specific-build of mcpp V.2.5 and later for GCC V.3.3 or later also includes these predefined macros, in addition to the above ones. These GCC-specific predefined macros are written in the mcpp_g*.h header files, which are generated by the installation of mcpp.
Since GCC on FreeBSD, Linux, CygWIN and MinGW, as well as LCC-Win32 and Visual C 2008, have the type long long, a #if expression is evaluated in long long or unsigned long long. Visual C 6.0, 2002, 2003, 2005 and Borland C 5.5 do not have a "long long" type but have __int64 and unsigned __int64 instead; these types are used.
In the above compiler systems, the type long ranges over:
[-2147483647-1, 2147483647] ([-0x7fffffff-1, 0x7fffffff])
and unsigned long ranges:
[0, 4294967295] ([0, 0xffffffff]).
In the compiler systems with the type long long, it ranges over:
[-9223372036854775807-1, 9223372036854775807] ([-0x7fffffffffffffff-1, 0x7fffffffffffffff]),
and unsigned long long ranges:
[0, 18446744073709551615] ([0, 0xffffffffffffffff]).
All the compiler-propers of the above compiler systems internally represent a signed integer as a two's complement number, and bit operations behave accordingly. The same applies to mcpp's #if expressions.
Right shift of a negative integer is an arithmetic shift. This also applies to mcpp's #if expressions. (Right shifting an integer by one bit halves the value with the sign retained.)
In an integer division or modulus operation, if either or both operands are negative values, an algebraic operation like Standard C's ldiv() function is performed. This can be applied to mcpp's #if expression.
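A brief sketch of these behaviors as they appear in #if expressions (mcpp warns at the negative shift and division, as noted in 4.5):
#if (-8 >> 1) == -4
/* taken: right shift of a negative value is an arithmetic shift */
#endif
#if (-7 / 2) == -3 && (-7 % 2) == -1
/* taken: division truncates toward zero, like ldiv() */
#endif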
These OSs use the ASCII basic character set. So does mcpp.
There is a memory management routine, kmmalloc, that I developed. This routine has malloc(), free(), realloc() and other memory handling functions. If kmmalloc is installed on systems other than CygWIN or Visual C 2005 or 2008, kmmalloc and its heap memory debugging routine are linked when the MALLOC=KMMALLOC (or -DKMMALLOC=1) option is specified for make. mcpp for Linux and LCC-Win32 uses EFREEP, EFREEBLK, EALLOCBLK, EFREEWRT and ETRAILWRT with errno values of 2120, 2121, 2122, 2123 and 2124 assigned, and the other mcpps use 120, 121, 122, 123, and 124. (Refer to mcpp-porting.html#4.extra.) *2
On systems other than GNU and Visual C, you should preset the environment variable TZ, for example to JST-9 in Japan. Otherwise, the __DATE__ and __TIME__ macros are not set correctly.
Note:
*1 This limit applies also to a line spliced by <backslash><newline> deletion. Moreover, it applies to the line after comments are converted into spaces, which may concatenate multiple logical lines when a comment spreads across lines.
*2 CygWIN 1.3.10 and 1.5.18 provide a malloc() that has an internal routine named _malloc_r(), which is called by a few other library functions; so this malloc() cannot be replaced with another malloc(). Also, in Visual C 2005 and 2008, the program-terminating routine calls an internal routine of the resident malloc(), hence another malloc() cannot be used.
This section covers diagnostic messages issued by mcpp, as well as their meaning. By default, these messages are output to stderr. With the -Q option, they are redirected to the mcpp.err file in the current directory. A diagnostic message is output in the following manner:
If the -j option is specified, mcpp outputs neither item 2 nor item 3 above.
Diagnostic messages are divided into three levels:
fatal error: Indicates an error so serious that it is no longer meaningful to continue preprocessing.
error: Indicates a syntax or usage error.
warning: Indicates code that lacks portability or may contain a bug.
Warnings are further divided into five classes:
Class 1: The source code may contain a bug, or at least lacks portability.
Class 2: Probably, the source code will present no problem in practical use, but is problematic in terms of Standard conformance.
Class 4: Probably, the source code will present no problem in practical use, but is problematic in terms of portability.
Class 8: Rather surplus warnings for skipped #if groups, sub-expressions in a #if expression whose evaluation is skipped, etc.
Class 16: Warnings for trigraphs and digraphs.
Warnings other than Class 1 or 2 are rather specific to mcpp.
mcpp has various types of diagnostic messages. For example, STD mode provides the following types of diagnostics for each level and class.
fatal error: 17 types
error: 76 types
warning class 1: 49 types
warning class 2: 15 types
warning class 4: 17 types
warning class 8: 30 types
warning class 16: 2 types
Principally, these messages point at the coding in question. The diagnostic messages below have sample tokens or numeric values from source code embedded in them. For the messages with a macro name embedded, the value the macro expands to is shown in real messages.
Depending on the case, the same message may be issued as a warning or an error; this manual gives a detailed description at its first occurrence. For subsequent occurrences, the message is only listed.
Of all the errors shown below, some, such as a buffer overflow, occur due to mcpp's specification restrictions. Some macros in system.H define translation limits, such as buffer sizes. Enlarge the buffer size and recompile mcpp if necessary; however, be careful not to increase it too much. A large buffer on a system with a small amount of memory may cause an "out of memory" error frequently.
A fatal error occurs and preprocessing is terminated when it is no longer possible to continue preprocessing due to an I/O error or a shortage of memory, or it is no longer meaningful to do so due to a buffer overflow. A status value of failure is returned to a parent process.
The following four errors may also be caused by a buffer overflow at a token that is not so particularly long during macro expansion, in which case, you should divide the macro invocation.
mcpp issues an error message when it finds a grammatical error. Standard C stipulates that a compiler system should issue a diagnostic message when it encounters a violation of a syntax rule or constraint. Principally, Standard mode issues an error message for this type of violation, but sometimes issues a warning.
mcpp issues an error message or warning to most of undefined items in Standard C. However, mcpp issues neither an error nor a warning to the following undefined items:
For details on what is a violation of syntax rule or constraint, undefined, unspecified or implementation-defined in Standard C preprocessing, refer to cpp-test.html.
Even if an error occurs, mcpp continues preprocessing as long as the error is not a fatal one. mcpp shows the number of errors and returns a status of failure to the parent process when it exits.
The following several messages are all token-related errors. For the first four, mcpp skips the line in question and continues preprocessing. The first three are string literal or other token-related errors, indicating that a closing quotation mark is not found by the end of the logical line. This type of error occurs when you write text which is neither a string literal nor a comment and does not take the form of a preprocessing-token sequence, as shown below:
#error I can't understand.
As preprocessing-tokens are not as strictly defined as the C tokens in the compiler-proper, most character sequences are regarded as pp-token sequences, as long as they belong to the source character set. Therefore, it is only this type of coding that causes a preprocessing-token error. Pp-token errors may occur even in a skipped #if group.
This section covers messages issued when a source file ends with an unterminated #if section or macro invocation. If the file (not included file) marks the end of input, the message "End of input", not "End of file", is issued.
These diagnostic messages are issued as an error or warning, depending on mcpp modes.
Standard mode issues these messages as errors, in which case mcpp skips the macro invocation in question and restores the relationship between paired directives in a #if section to that of when the file was initially included.
On the other hand, pre-Standard mode issues warnings. OLDPREP mode does not even issue a warning, except for an unterminated macro call.
This section covers errors caused by ill-balanced directives such as #if, #else, etc. Even if mcpp finds an imbalance among these directives, it continues processing, assuming that the group being processed so far still continues. mcpp checks whether the directives are balanced even in a skipped #if group.
The #if (#ifdef) section is a block between #if (#ifdef or #ifndef) and #endif. The #if (#elif, #else) group is a smaller block, say, between #if (#ifdef or #ifndef) and #elif, between #elif and #else, or between #else and #endif within the #if (#ifdef) section.
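For illustration, the following #if section consists of three groups (the macro names are arbitrary):
#if defined MACRO_A     /* the #if group                   */
/* ... */
#elif defined MACRO_B   /* the #elif group                 */
/* ... */
#else                   /* the #else group                 */
/* ... */
#endif                  /* end of the #if (#ifdef) section */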
The following two errors occur when #asm and #endasm are not balanced. These messages are issued only by compiler-specific-build for a particular compiler system and in pre-Standard mode.
This section covers simple syntax errors on directive lines that begin with #. The errors hereinafter discussed until 5.4.12 do not occur within a skipped #if group. (mcpp invoked with the -W8 option issues a warning to an unknown directive.)
When mcpp finds a directive line with a syntax error, it ignores the line and continues processing, in which case, it neither regards #if as the beginning of a section nor changes line numbers even with a #line. If a #include or #line line has a macro argument, Standard mode expands the macro and checks the syntax. Pre-Standard mode does not expand the macro.
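For example, Standard mode expands the macro on a #include line like the following before checking its syntax, whereas pre-Standard mode does not (MY_HEADER is an arbitrary name for illustration):
#define MY_HEADER  <stdio.h>
#include MY_HEADER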
Although the messages below do not show the directive name in question, the source line that follows the message shows it. (A directive line with a comment converted to a space character always becomes one line, which is called a "preprocessed line" here.)
The following error occurs only in Standard mode and this directive is ignored. OLDPREP mode issues neither an error nor a warning. KR mode issues a warning and continues preprocessing as if there had been no "junk" text.
This section covers syntax-related errors in #if, #elif and #assert directives. If a #if (#elif) line has these errors, mcpp evaluates it to false, skips the #if (#elif) group, and continues processing.
For a skipped #if (#ifdef, #ifndef, #elif or #else) group, mcpp checks the validity of C preprocessing tokens and the balance of these directives, but no other grammatical errors.
A #if line may have a sub-expression whose evaluation is skipped. For example, in the case of #if a || b, if "a" evaluates to true, "b" is not evaluated at all. However, the following 14 types of syntax errors or translation limit errors are checked even if they are located in a sub-expression whose evaluation is skipped.
The following error messages are relevant to #if sizeof. Only pre-Standard mode issues these errors.
The following errors do not occur in a sub-expression whose evaluation is skipped. (mcpp invoked with the -W8 option issues a warning.)
The Standards say that a #if expression is evaluated in the largest integer type in C99, and in long / unsigned long in C90 and C++98. mcpp evaluates it in long long / unsigned long long even in C90 or C++98, and issues a warning for a value outside the range of long / unsigned long in C90 and C++98. In this subsection, please read long long / unsigned long long as long / unsigned long for compilers without long long, and as long in pre-Standard mode. In POSTSTD mode, a character constant is not available in a #if expression and causes a different error.
The following errors are relevant to sizeof. They are not issued in a sub-expression whose evaluation is skipped (The -W8 option issues a warning). Only in pre-Standard mode.
This section covers #define related errors. A macro will not be defined if an error occurs at #define. The # and ## operator related errors occur in Standard mode. The __VA_ARGS__ related errors also occur in Standard mode. Although the variable argument macro is a C99 specification, mcpp allows these macros to be used in C90 and C++ modes for compatibility with GCC and Visual C++ 2005 and 2008. (A warning is issued.)
In STD mode of the GCC-specific-build, if you write a GCC2-spec variadic macro using __VA_ARGS__, you will get this error. __VA_ARGS__ should be used only in GCC3-spec or C99-spec variadics.
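A minimal sketch of the two variadic styles (the macro and function names here are arbitrary):
#define GCC2_VARIADIC(fmt, args...)  my_log(fmt, ## args)       /* GCC2-spec: do not use __VA_ARGS__ here */
#define C99_VARIADIC(fmt, ...)       my_log(fmt, __VA_ARGS__)   /* GCC3/C99-spec */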
This section covers #undef related errors.
This section covers macro expansion errors. mcpp displays a macro definition, as well as the source filename and line number where it is found. The errors related to # or ## operator can occur only in Standard mode.
When the following errors occur, the macro invocation will be skipped.
The following errors can occur only with the -K option in STD mode. These errors mean that the macro is extremely complex and the buffer for macro notification ran short. They are almost impossible to encounter in real life.
The following two are checked when mcpp is invoked with the -V199901L option. The same thing can be said when mcpp is invoked with the -V199901L option in C++ mode.
A warning is issued when source code, although syntactically correct, possibly contains some coding mistake or has a portability problem. Warnings are divided into five classes: 1, 2, 4, 8, and 16. These classes are enabled when the -W <n> option is specified on mcpp invocation. <n> specifies an ORed value of any of 1, 2, 4, 8, and 16. Class 4, for example, can be specified explicitly with -W4, and implicitly with -W<n>, where <n> is 1|4, 1|2|4, 2|4, 1|4|8, 4|8, 4|16, etc., because the ANDed value of <n> and 4 is 4 (non-0).
Standard mode issues an error message to most of the source code that causes a Standard C undefined behavior, but a warning to some.
Likewise, Standard mode always issues a warning to the source code which uses Standard C unspecified specifications, except for the following:
Standard mode issues a warning to many implementation-defined behaviors, except for the following:
As you see, mcpp can perform almost all the portability checks necessary at a preprocessing level.
POSTSTD mode is identical with STD mode except for some specification differences described in section 2.1.
Regardless of the number of warnings, mcpp always returns a status of success. mcpp invoked with the -W0 option does not issue a warning.
Besides character codes, ISO-2022-JP has shift sequences. Apart from the shift sequences, all the multi-byte characters other than UTF-8 are two bytes long.
Encoding | first byte | second byte |
---|---|---|
shift-JIS | 0x81-0x9f, 0xe0-0xfc | 0x40-0x7e, 0x80-0xfc |
EUC-JP | 0x8e, 0xa1-0xfe | 0xa1-0xfe |
KS C 5601 | 0xa1-0xfe | 0xa1-0xfe |
GB 2312-80 | 0xa1-0xfe | 0xa1-0xfe |
Big Five | 0xa1-0xfe | 0x40-0x7e, 0xa1-0xfe |
ISO-2022-JP | 0x21-0x7e | 0x21-0x7e |
For an unterminated line or comment, the following messages are issued. OLDPREP mode does not issue a warning.
The following warning messages are issued in pre-Standard mode. Pre-Standard mode issues these warnings and continues processing until it reaches the end of input, which can cause many unexpected results. Standard mode issues an error. OLDPREP mode does not issue even a warning, except for an unterminated macro call.
The following message is issued only in Standard mode.
The following message is issued only in STD mode.
For example, given:
#define THIS$AND$THAT(a, b) ((a) + (b))
mcpp interprets it as follows:
#define THIS $AND$THAT(a, b) ((a) + (b))
and issues a warning. Of course, this is quite a rare case.
The following warnings are issued only in lang-asm mode.
The following warnings on #pragma lines are issued only in Standard mode. The lines are in principle output in spite of the warnings. However, the lines to be processed by the preprocessor itself, such as most lines starting with #pragma MCPP or #pragma GCC, are not output. The pragmas for the compiler or linker, such as #pragma GCC visibility *, are output without warning.
The GCC-specific-build issues the following warnings:
The GCC-specific-build issues a class 2 warning for a line with #pragma GCC followed by either poison or dependency, and does not output the line. The GCC V.3 resident preprocessor processes such a line, but mcpp does not.
The following warnings are issued only in pre-Standard mode. Standard mode regards them as errors.
KR mode issues the following warning. Standard mode issues the same warning only to #pragma once, #pragma MCPP put_defines, #pragma MCPP push_macro, #pragma MCPP pop_macro, #pragma push_macro, #pragma pop_macro, #pragma MCPP debug, #pragma MCPP end_debug, and #endif for GCC-specific-build on STD mode; for other directives, Standard mode issues an error. OLDPREP mode issues neither an error nor a warning.
The following three warnings are relevant to an argument of #if, #elif, or #assert:
The following warnings are also relevant to an argument of #if, #elif or #assert. They are not issued in a sub-expression whose evaluation is skipped. (mcpp invoked with the -W8 option issues them.)
The following warnings are relevant to operations and types in a constant expression on #if, #elif or #assert lines. They are not issued in a skipped sub-expression either. (mcpp invoked with -W8 issues them.)
mcpp evaluates a #if expression in long long / unsigned long long even in C90 or C++98, and issues a warning for a value outside the range of long / unsigned long in C90 and C++98. mcpp also warns at an LL suffix in other than C99 mode. These warnings are of class 1 in the compiler-independent-build and class 2 in the compiler-specific-builds. In POSTSTD mode, character constants are not available in #if expressions, hence no warning is issued. (They cause errors.)
In these warnings, mcpp displays a macro definition followed by the source filename and line number where the macro is defined.
The following two are issued only in OLDPREP mode. (In the other modes they cause an error.)
This section covers line number related warnings.
In C90, when you use #line to specify a value slightly below 32767, you won't receive an error, but sooner or later the line number will exceed 32767, in which case mcpp continues to increase the line number while issuing a warning. Some compiler-propers may not accept such a large line number. It is not desirable to specify a large number with #line.
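For example (a sketch):
#line 32767
/* valid in C90, but from here on the line numbers soon exceed 32767;
   mcpp keeps counting them up while issuing a warning */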
This section covers warnings for code that does not contain a bug but causes a portability problem.
mcpp evaluates a #if expression in long long / unsigned long long even in C90 or C++98, and issues a warning for a value outside the range of long / unsigned long in C90 and C++98. The LL suffix in other than C99 mode also gets a warning, as does the i64 suffix in the compiler-specific-builds for Visual C and Borland C. These warnings are of class 1 in the compiler-independent-build and class 2 in the compiler-specific-builds.
Only Standard mode issues the following five warnings:
Define EMPTY as an empty macro, if possible, and then write EMPTY where an empty argument would otherwise be written, as sketched below.
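A minimal sketch of this workaround (EMPTY, FUNC and f are arbitrary names):
#define EMPTY
#define FUNC(a, b)  f(a b)
FUNC(x, EMPTY)      /* instead of FUNC(x, ) with an empty argument */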
The following warning is issued only in POSTSTD mode.
The following two warnings are issued only in some compiler systems. Of course, the coding in question is valid in those particular systems, but it lacks portability, so a warning is issued to remind users of it.
Standard C guarantees some minimum translation limits. It is desirable that a preprocessor impose translation limits that exceed these values, but a source program that relies on a particular preprocessor's own translation limits will have restricted portability. mcpp provides some macros in "system.H" that allow you to set the translation limits to any values you like. mcpp in Standard mode issues a warning for source code that exceeds a limit guaranteed by Standard C. However, these messages are excluded from classes 1 and 2 because they may be issued frequently, depending on the standard headers of compiler systems or on source programs.
The following warnings are not issued in a skipped #if group.
With __STDC_VERSION__ >= 199901L, the Standard specified translation limits are as follows:
Length of a logical source line: 4095 bytes
Length of a string literal, character constant, or header name: 4095 bytes
Identifier length: 63 characters
Depth of nested #includes: 15
Depth of nested #ifs, #ifdefs, or #ifndefs: 63
Depth of nested parentheses in a #if expression: 63
Number of macro parameters: 127
Number of definable macros: 4095
Note that the length of a UCN or multi-byte-character in an identifier is counted as the number of characters, not bytes. (A queer stipulation)
When mcpp is invoked with the -+ option to specify C++ preprocessing, the Standard's guideline translation limits are as follows:
Length of a logical source line: 65536 bytes
Length of a string literal, character constant, or header name: 65536 bytes
Identifier length: 1024 characters
Depth of nested #includes: 256
Depth of nested #ifs, #ifdefs, or #ifndefs: 256
Depth of nested parentheses in a #if expression: 256
Number of macro parameters: 256
Number of definable macros: 65536
Note that mcpp allows a maximum of 255 macro parameters. So, when the number reaches 256, mcpp issues an error.
The following warnings are excluded from class 1 and 2 because they are issued too frequently.
The following two warnings are issued only in Standard mode.
This warning is issued only with the -K option in STD mode.
There is little chance that the indicated source code contains a bug, but these messages are issued to call attention to it. mcpp invoked with the -W8 option issues these warnings.
In a skipped #if group, whether preprocessing directives, such as #ifdef, #ifndef, #elif, #else, and #endif, are balanced or not is checked. However, mcpp invoked with the -W8 option also checks non-conforming or unknown directives. Standard mode issues a warning when the depth of nested #ifs exceeds 8.
The following warnings are related to #if expression. Given an expression of #if a || b, for example, if "a" is true, "b" is not evaluated. However, mcpp invoked with -W8 issues a warning to non-evaluated sub-expressions, in which case, the note saying "in non-evaluated sub-expression" is appended.
Trigraphs and digraphs are not used at all in an environment where they are not needed. If they are found in such an environment, attention needs to be paid. The purpose of the -W16 option is to find such trigraphs and digraphs. On the other hand, these warnings are very bothersome in an environment where trigraphs or digraphs are used on a regular basis, because they are issued very frequently. For this reason, I set up a separate class for these warnings. Anyway, mcpp issues these messages only when trigraphs or digraphs are enabled. Digraphs are for Standard mode only, and trigraphs are for STD mode only.
<% -> {
<: -> [
%: -> #
%> -> }
:> -> ]
%:%: -> ##
Therefore, the compiler-proper does not need to be able to handle digraphs. However, POSTSTD mode converts a digraph into the regular pp-token during translation phase 1. The difference in behavior between the modes appears when the # operator converts a digraph into a string: STD mode converts the digraph sequence into a string directly, while POSTSTD mode converts it into the regular pp-token and then into a string. In addition, if a string literal contains a character sequence which is equivalent to a digraph sequence, STD mode does not convert it, while POSTSTD mode converts it into the character sequence of the corresponding pp-tokens.
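For example, given the following macro (a sketch, with digraphs enabled), the two modes produce different strings:
#define STR(x)  #x
STR(<%)     /* STD mode yields "<%", POSTSTD mode yields "{" */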
Diagnostic Message | Fatal error | Error | Warning class 1 | 2 | 4 | 8 | 16 |
---|---|---|---|---|---|---|---|
"..." isn't the last parameter | 5.4.7 | ||||||
"/*" in comment | 5.5.1 | ||||||
"and" is defined as macro | 5.5.3 | ||||||
"defined" shouldn't be defined | 5.4.7 | ||||||
"MACRO" has not been defined | 5.5.3 | ||||||
"MACRO" has not been pushed | 5.5.3 | ||||||
"MACRO" is already pushed | 5.5.3 | ||||||
"MACRO" wasn't defined | 5.8 | ||||||
"op" of negative number isn't portable | 5.5.4 | 5.8 | |||||
"__STDC__" shouldn't be redefined | 5.4.7 | ||||||
"__STDC__" shouldn't be undefined | 5.4.8 | ||||||
"__VA_ARGS__" without corresponding "..." | 5.4.7 | ||||||
"__VA_ARGS__" cannot be used in GCC2-spec variadic macro | 5.4.7 | ||||||
## after ## | 5.4.7 | ||||||
#error | 5.4.10 | ||||||
#include_next is not allowed by Standard | 5.6 | 5.8 | |||||
#warning | 5.5.7 | ||||||
'$' in identifier "THIS$AND$THAT" | 5.6 | ||||||
16 bits can't represent escape sequence L'\x12345' | 5.4.6 | 5.8 | |||||
2 digraph(s) converted | 5.9 | ||||||
2 trigraph(s) converted | 5.9 | ||||||
8 bits can't represent escape sequence '\x123' | 5.4.6 | 5.8 | |||||
_Pragma operator found in directive line | 5.4.12 | ||||||
Already seen #else at line 123 | 5.4.3 | ||||||
Bad defined syntax | 5.4.5 | ||||||
Bad pop_macro syntax | 5.5.3 | ||||||
Bad push_macro syntax | 5.5.3 | ||||||
Buffer overflow expanding macro "macro" at "something" | 5.4.9 | ||||||
Buffer overflow scanning token "token" | 5.3.3 | ||||||
Bug: | 5.3.1 | ||||||
Can't open include file "file-name" | 5.4.11 | ||||||
Can't use a character constant 'a' | 5.4.5 | ||||||
Can't use a string literal "string" | 5.4.5 | ||||||
Can't use the character 0x24 | 5.4.5 | ||||||
Can't use the operator "++" | 5.4.5 | ||||||
Constant "123456789012" is out of range of (unsigned) long | 5.5.4 | 5.6 | 5.8 | ||||
Constant "1234567890123456789012" is out of range | 5.4.6 | 5.8 | |||||
Converted 0x0c to a space | 5.7 | ||||||
Converted [CR+LF] to [LF] | 5.5.1 | 5.6 | |||||
Converted \ to / | 5.6 | ||||||
Division by zero | 5.4.6 | 5.8 | |||||
Duplicate parameter names "a" | 5.4.7 | ||||||
Empty argument in macro call "MACRO( a, ," | 5.6 | ||||||
Empty character constant '' | 5.4.1 | 5.5.1 | |||||
Empty parameter | 5.4.7 | ||||||
End of file with no newline, supplemented the newline | 5.5.2 | ||||||
End of file with unterminated #asm block started at line 123 | 5.4.2 | 5.5.2 | |||||
End of file with unterminated comment, terminated the comment | 5.5.2 | ||||||
End of file with \, deleted the \ | 5.5.2 | ||||||
End of file within #if (#ifdef) section started at line 123 | 5.4.2 | 5.5.2 | |||||
End of file within macro call started at line 123 | 5.4.2 | 5.5.2 | |||||
Excessive ")" | 5.4.5 | ||||||
Excessive token sequence "junk" | 5.4.4 | 5.5.3 | |||||
File read error | 5.3.2 | ||||||
File write error | 5.3.2 | ||||||
GCC2-spec variadic macro is defined | 5.6 | ||||||
Header-name enclosed by <, > is an obsolescent feature | 5.6 | ||||||
I64 suffix is used in other than C99 mode "123i64" | 5.6 | 5.8 | |||||
Identifier longer than 31 bytes "very_very_long_name" | 5.7 | ||||||
Ignored #ident | 5.5.3 | 5.8 | |||||
Ignored #sccs | 5.5.3 | 5.8 | |||||
Illegal #directive "123" | 5.4.4 | 5.5.3 | 5.8 | ||||
Illegal control character 0x1b in quotation | 5.5.1 | ||||||
Illegal control character 0x1b, skipped the character | 5.4.1 | ||||||
Illegal digit in octal number "089" | 5.5.1 | ||||||
Illegal multi-byte character sequence "XY" in quotation | 5.5.1 | ||||||
Illegal multi-byte character sequence "XY" | 5.4.1 | ||||||
Illegal parameter "123" | 5.4.7 | ||||||
Illegal shift count "-1" | 5.5.4 | 5.8 | |||||
Illegal UCN sequence | 5.4.1 | ||||||
In #asm block started at line 123 | 5.4.3 | ||||||
Integer character constant 'abcde' is out of range of unsigned long | 5.5.4 | 5.6 | 5.8 | ||||
Integer character constant 'abcdefghi' is out of range | 5.4.6 | 5.8 | |||||
Less than necessary N argument(s) in macro call "macro( a)" | 5.4.9 | 5.5.5 | |||||
Line number "0x123" isn't a decimal digits sequence | 5.4.4 | 5.5.6 | |||||
Line number "2147483648" is out of range of 1,2147483647 | 5.4.4 | ||||||
Line number "32768" got beyond range | 5.5.6 | ||||||
Line number "32768" is out of range of 1,32767 | 5.5.6 | ||||||
Line number "32769" is out of range | 5.5.6 | ||||||
LL suffix is used in other than C99 mode "123LL" | 5.5.4 | 5.6 | 5.8 | ||||
Logical source line longer than 509 bytes | 5.7 | ||||||
Macro "MACRO" is expanded to "defined" | 5.5.4 | ||||||
Macro "MACRO" is expanded to "sizeof" | 5.5.4 | ||||||
Macro "MACRO" is expanded to 0 token | 5.5.4 | ||||||
Macro "macro" needs arguments | 5.8 | ||||||
Macro started at line 123 swallowed directive-like line | 5.5.5 | ||||||
Macro with mixing of ## and # operators isn't portable | 5.7 | ||||||
Macro with multiple ## operators isn't portable | 5.7 | ||||||
Misplaced ":", previous operator is "+" | 5.4.5 | ||||||
Misplaced constant "12" | 5.4.5 | ||||||
Missing ")" | 5.4.5 | ||||||
Missing "," or ")" in parameter list "(a,b" | 5.4.7 | ||||||
More than 1024 macros defined | 5.7 | ||||||
More than 31 parameters | 5.7 | ||||||
More than 32 nesting of parens in #if expression | 5.7 | ||||||
More than 8 nesting of #if (#ifdef) sections | 5.7 | 5.8 | |||||
More than 8 nesting of #include | 5.7 | ||||||
More than BLK_NEST nesting of #if (#ifdef) sections | 5.3.3 | ||||||
More than INCLUDE_NEST nesting of #include | 5.3.3 | ||||||
More than necessary N argument(s) in macro call "macro( a, b, c)" | 5.4.9 | ||||||
More than NEXP*2-1 constants stacked at "12" | 5.4.5 | ||||||
More than NEXP*3-1 operators and parens stacked at "+" | 5.4.5 | ||||||
More than NMACPARS parameters | 5.4.7 | ||||||
Multi-character or multi-byte character constant 'XY' isn't portable | 5.7 | 5.8 | |||||
Multi-character wide character constant L'ab' isn't portable | 5.7 | 5.8 | |||||
Negative value "-1" is converted to positive "18446744073709551615" | 5.5.4 | 5.8 | |||||
No argument | 5.4.4 | 5.5.3 | |||||
No header name | 5.4.4 | ||||||
No identifier | 5.4.4 | ||||||
No line number | 5.4.4 | ||||||
No space between macro name "MACRO" and repl-text | 5.5.3 | ||||||
No sub-directive | 5.5.3 | ||||||
No token after ## | 5.4.7 | ||||||
No token before ## | 5.4.7 | ||||||
Not a file name "name" | 5.4.4 | ||||||
Not a formal parameter "id" | 5.4.7 | ||||||
Not a header name "UNDEFINED_MACRO" | 5.4.4 | ||||||
Not a line number "name" | 5.4.4 | ||||||
Not a valid preprocessing token "+12" | 5.4.9 | 5.6 | |||||
Not a valid string literal | 5.4.9 | ||||||
Not an identifier "123" | 5.4.4 | 5.5.3 | |||||
Not an integer "1.23" | 5.4.5 | ||||||
Not in a #if (#ifdef) section | 5.4.3 | ||||||
Not in a #if (#ifdef) section in a source file | 5.4.3 | 5.5.3 | |||||
Operand of _Pragma() is not a string literal | 5.4.12 | ||||||
Operator ">" in incorrect context | 5.4.5 | ||||||
Old style predefined macro "linux" is used | 5.5.5 | ||||||
Out of memory (required size is 0x1234 bytes) | 5.3.2 | ||||||
Parsed "//" as comment | 5.6
Preprocessing assertion failed | 5.4.10
Quotation longer than 509 bytes "very_very_long_string" | 5.7
Recursive macro definition of "macro" to "macro" | 5.4.9
Removed ',' preceding the absent variable argument | 5.5.5
Replacement text "sub(" of macro "head" involved subsequent text | 5.5.5 | 5.8
Rescanning macro "macro" more than RESCAN_LIMIT times at "something" | 5.4.9
Result of "op" is out of range | 5.4.6 | 5.8
Result of "op" is out of range of (unsigned) long | 5.5.4 | 5.6 | 5.8
Shift count "40" is larger than bit count of long | 5.5.4 | 5.6 | 5.8
sizeof is disallowed in C Standard | 5.8
sizeof: Illegal type combination with "type" | 5.4.6 | 5.8
sizeof: No type specified | 5.4.5
sizeof: Syntax error | 5.4.5
sizeof: Unknown type "type" | 5.4.6 | 5.8
Skipped the #pragma line | 5.6
String literal longer than 509 bytes "very_very_long_string" | 5.7
The macro is redefined | 5.5.4
This is not a preprocessed source | 5.3.4
This preprocessed file is corrupted | 5.3.4
Too long comment, discarded up to here | 5.7
Too long header name "long-file-name" | 5.3.3
Too long identifier, truncated to "very_long_identifier" | 5.5.1
Too long line spliced by comments | 5.3.3
Too long logical line | 5.3.3
Too long number token "12345678901234" | 5.3.3
Too long pp-number token "1234toolong" | 5.3.3
Too long quotation "long-string" | 5.3.3
Too long source line | 5.3.3
Too long token | 5.3.3
Too many magics nested in macro argument | 5.4.9
Too many nested macros in tracing MACRO | 5.4.9
UCN cannot specify the value "0000007f" | 5.4.1 | 5.8
Undefined escape sequence '\x' | 5.5.4 | 5.8
Undefined symbol "name", evaluated to 0 | 5.7 | 5.8
Unknown #directive "pseudo-directive" | 5.4.4 | 5.5.4 | 5.8
Unknown argument "name" | 5.5.3
Unterminated character constant 't understand. | 5.4.1
Unterminated expression | 5.4.5
Unterminated header name | 5.4.1
Unterminated macro call "macro( a, (b,c)" | 5.4.9
Unterminated string literal | 5.4.1
Unterminated string literal, catenated to the next line | 5.5.1
Variable argument macro is defined | 5.6
Wide character constant L'abc' is out of range of unsigned long | 5.5.4 | 5.6 | 5.8
Wide character constant L'abc' is out of range | 5.4.6 | 5.8
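As a concrete illustration, the small fragment below would typically draw several of the diagnostics listed above when preprocessed by mcpp. The macro name ADD, the identifier UNDEFINED_MAX and the literal values are made up for this example; the exact wording of each message, and whether it is issued as an error or as a warning of a particular class, depend on the behavioral mode and the warning-level options in effect.

    /* Hypothetical fragment; each comment names the kind of message expected.   */
    #define ADD(a, b)   ((a) + (b))

    #if UNDEFINED_MAX > 10      /* Undefined symbol "UNDEFINED_MAX", evaluated to 0       */
    #elif 'XY' == 0x5859        /* Multi-character ... constant 'XY' isn't portable       */
    #endif

    int i = ADD(1, 2, 3);       /* More than necessary 2 argument(s) in macro call ...    */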
I have developed the Validation Suite to verify the conformance of preprocessing to Standard C/C++, and released it along with the mcpp source. The Validation Suite is intended to let you verify all of the Standard C preprocessing specifications. Of course, I used the Validation Suite to check mcpp, and in addition I have compiled mcpp on many compiler systems to verify its behavior. I am therefore confident that mcpp is now almost flawless, free of bugs and of misinterpretations of the specifications; however, I cannot deny the possibility that it still contains some bugs.
If you find any strange behavior, do not hesitate to let me know. If you receive a diagnostic message saying "Bug: ...", it is undoubtedly a bug in mcpp or in the compiler system (probably in mcpp). However illegal a user program may be, if mcpp loses control, it is mcpp that is to blame.
When you report a bug, please be sure to provide the following information:
Apart from bug reports, I would also appreciate feedback on mcpp usage, its diagnostic messages, or this manual.
For your feedback or other information, please post to the "Open Discussion Forum" at:
or send via e-mail.