Compile Linux Apps For Mac
Core Features
*Multi-language Support: supports Less, Sass, CoffeeScript, and the Compass framework.
*Real-time Compilation: watches files and compiles them automatically when they change, so everything runs in the background without user action.
*Compile Options
*Project Settings: create a project-wide configuration that applies the same compiler options to all files.
*Error Notification: if an error occurs during compilation, Koala pops up the error message.
*Cross-platform
The app is portable and compiles without any code change on Mac and on Linux. One annoyance, though, is that when I want to ship the Linux version, I have to run my Linux box, copy the source code over there (on a USB drive, because I have no network there; it is an old laptop), compile it, then copy it back over USB to my Mac and upload it.

Due to Microsoft’s history as a very proprietary, closed-source company, there is a perception that you would not be able to compile applications written in C# and run them in the open source world of Linux, but even a very basic hello world example shows how easy this has become.
*Support: if you encounter an issue, you can open a ticket on GitHub. Feature requests can also be submitted through the GitHub issue tracker.
*Author
Contributors:
Max Deng, Leott Liu, Ziad El Khoury Hanna, 单炒饭
*Donate
If you find my work useful and you want to encourage the development of more free resources, you can do so by donating.
Now that you have the basic pieces in place, it is time to build your application. This section covers some of the more common issues that you may encounter in bringing your UNIX application to OS X. These issues apply largely without regard to what type of development you are doing.

Using GNU Autoconf, Automake, and Autoheader
If you are bringing a preexisting command-line utility to OS X that uses GNU autoconf, automake, or autoheader, you will probably find that it configures itself without modification (though the resulting configuration may be insufficient). Just run configure and make as you would on any other UNIX-based system.
If running the configure script fails because it doesn’t understand the architecture, try replacing the project’s config.sub and config.guess files with those available in /usr/share/automake-1.6. If you are distributing applications that use autoconf, you should include an up-to-date version of config.sub and config.guess so that OS X users don’t have to do anything extra to build your project.
If that still fails, you may need to run /usr/bin/autoconf on your project to rebuild the configure script before it works. OS X includes autoconf in the BSD tools package. Beyond these basics, if the project does not build, you may need to modify your makefile using some of the tips provided in the following sections. After you do that, more extensive refactoring may be required.
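As a rough sketch of that sequence (the automake path is the one mentioned above; adjust it if your system ships a different version):

./configure && make

# If configure fails to recognize the architecture, refresh the helper scripts:
cp /usr/share/automake-1.6/config.sub /usr/share/automake-1.6/config.guess .

# If it still fails, regenerate the configure script and try again:
/usr/bin/autoconf
./configure && make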
Some programs may use autoconf macros that are not supported by the version of autoconf that shipped with OS X. Because autoconf changes periodically, you may actually need to get a new version of autoconf if you need to build the very latest sources for some projects. In general, most projects include a prebuilt configure script with releases, so this is usually not necessary unless you are building an open source project using sources obtained from CVS or from a daily source snapshot.
However, if you find it necessary to upgrade autoconf, you can get a current version from http://www.gnu.org/software/autoconf/. Note that autoconf, by default, installs in /usr/local/, so you may need to modify your PATH environment variable to use the newly updated version. Do not attempt to replace the version installed in /usr/.
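For example, assuming the newer autoconf went into the default /usr/local prefix, you can put it ahead of the bundled version for the current shell session:

export PATH="/usr/local/bin:$PATH"
which autoconf   # should now report /usr/local/bin/autoconf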
For additional information about using the GNU autotoolset, see http://autotoolset.sourceforge.net/tutorial.html and the manual pages autoconf, automake, and autoheader.

Compiling for Multiple CPU Architectures
Because the Macintosh platform includes more than one processor family, it is often important to compile software for multiple processor architectures. For example, libraries should generally be compiled as universal binaries even if you are exclusively targeting an Intel-based Macintosh computer, as your library may be used by a PowerPC binary running under Rosetta. For executables, if you plan to distribute compiled versions, you should generally create universal binaries for convenience.
When compiling programs for architectures other than your default host architecture, such as compiling for a ppc64 or Intel-based Macintosh target on a PowerPC-based build host, there are a few common problems that you may run into. Most of these problems result from one of the following mistakes:
*
Assuming that the build host is architecturally similar to the target architecture and will thus be capable of executing intermediate build products
*
Trying to determine target-processor-specific information at configuration time (by compiling and executing small code snippets) rather than at compile time (using macro tests) or execution time (for example, by using conditional byte swap functions)
Whenever cross-compiling occurs, extra care must be taken to ensure that the target architecture is detected correctly. This is particularly an issue when generating a binary containing object code for more than one architecture.
In many cases, binaries containing object code for more than one architecture can be generated simply by running the normal configuration script, then overriding the architecture flags at compile time.
For example, you might run the configure script in the normal way and then rerun make with the architecture flags added to CFLAGS and LDFLAGS to generate a universal binary (for Intel-based and PowerPC-based Macintosh computers). To generate a 4-way universal binary that includes 64-bit versions, you would add -arch ppc64 and -arch x86_64 to the CFLAGS and LDFLAGS.
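A minimal sketch, assuming a makefile that honors CFLAGS and LDFLAGS overrides on the make command line:

./configure
make CFLAGS="-arch ppc -arch i386" LDFLAGS="-arch ppc -arch i386"

# 4-way universal binary, including the 64-bit architectures:
make CFLAGS="-arch ppc -arch ppc64 -arch i386 -arch x86_64" LDFLAGS="-arch ppc -arch ppc64 -arch i386 -arch x86_64"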
Note: If you are using an older version of gcc and your makefile passes LDFLAGS to gcc instead of passing them directly to ld, you may need to specify the linker flags as -Wl,-syslibroot,/Developer/SDKs/MacOSX10.4u.sdk. This tells the compiler to pass the unknown flags to the linker without interpreting them. Do not pass the LDFLAGS in this form to ld, however; ld does not currently support the -Wl syntax.
If you need to support an older version of gcc and your makefile passes LDFLAGS to both gcc and ld, you may need to modify it to pass this argument in different forms, depending on which tool is being used. Fortunately, these cases are rare; most makefiles either pass LDFLAGS to gcc or ld, but not both. Newer versions of gcc support -syslibroot directly.
If your makefile does not explicitly pass the contents of LDFLAGS to gcc or ld, they may still be passed to one or the other by a make rule. If you are using the standard built-in make rules, the contents of LDFLAGS are passed directly to ld. If in doubt, assume that it is passed to ld. If you get an invalid flag error, you guessed incorrectly.
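For illustration, assuming the 10.4u SDK path shown in the note above, the two forms look like this:

# Makefile passes LDFLAGS to gcc (gcc forwards -syslibroot to the linker):
make LDFLAGS="-Wl,-syslibroot,/Developer/SDKs/MacOSX10.4u.sdk"

# Makefile passes LDFLAGS directly to ld:
make LDFLAGS="-syslibroot /Developer/SDKs/MacOSX10.4u.sdk"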
If your makefile uses gcc to run the linker instead of invoking it directly, you must specify a list of target architectures to link when working with universal binary object (.o) files even if you are not using all of the architectures of the object file. If you don’t, you will not create a universal binary, and you may also get a linker error. For more information about 64-bit executables, see 64-Bit Transition Guide.
However, applications that make configuration-time decisions about the size of data structures will generally fail to build correctly in such an environment (since those sizes may need to be different depending on whether the compiler is executing a ppc pass, a ppc64 pass, or an i386 pass). When this happens, the tool must be configured and compiled for each architecture as separate executables, then glued together manually using lipo.
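A minimal sketch of that per-architecture approach, assuming the project supports building in separate build directories and produces a single executable (here called mytool; the name and host triplets are illustrative):

mkdir build-ppc build-i386
(cd build-ppc  && ../configure --host=powerpc-apple-darwin8 CFLAGS="-arch ppc"  LDFLAGS="-arch ppc"  && make)
(cd build-i386 && ../configure --host=i686-apple-darwin8 CFLAGS="-arch i386" LDFLAGS="-arch i386" && make)

# Glue the single-architecture executables together into one universal binary:
lipo -create build-ppc/mytool build-i386/mytool -output mytool
lipo -info mytool   # should report both ppc and i386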
In rare cases, software not written with cross-compilation in mind will make configure-time decisions by executing code on the build host. In these cases, you will have to manually alter either the configuration scripts or the resulting headers to be appropriate for the actual target architecture (rather than the build architecture). In some cases, this can be solved by telling the configure script that you are cross-compiling using the --host, --build, and --target flags. However, this may simply result in defaults for the target platform being inserted, which doesn’t really solve the problem.
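For example, configuring on a PowerPC-based host for an Intel-based target might look like the following (the triplet names are illustrative and depend on your toolchain and OS version):

./configure --build=powerpc-apple-darwin8 --host=i686-apple-darwin8 --target=i686-apple-darwin8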
The best fix is to replace configure-time detection of endianness, data type sizes, and so on with compile-time or run-time detection. For example, instead of testing the architecture for endianness to obtain consistent byte order in a file, you should do one of the following:
*
Use C preprocessor macros like __BIG_ENDIAN__ and __LITTLE_ENDIAN__ to test endianness at compile time.
*
Use functions like htonl, htons, ntohl, and ntohs to guarantee a big-endian representation on any architecture.
*
Extract individual bytes by bitwise masking and shifting (for example, lowbyte=word & 0xff; nextbyte = (word >> 8) & 0xff; and so on).
Similarly, instead of performing elaborate tests to determine whether to use int or long for a 4-byte piece of data, you should simply use a standard sized type such as uint32_t.
Note: Not all script execution is incompatible with cross-compiling. A number of open source tools (GTK, for example) use script execution to determine the presence or absence of libraries, determine their versions and locations, and so on.
In those cases, you must be certain that the info script associated with the universal binary installation (or the target platform installation if you are strictly cross-compiling) is the one that executes during the configuration process, rather than the info script associated with an installation specific to your host architecture.
There are a few other caveats when working with universal binaries:
*
The library archive utility, ar, cannot work with libraries containing code for more than one architecture (or single-architecture libraries generated with lipo) after ranlib has added a table of contents to them. Thus, if you need to add additional object files to a library, you must keep a separate copy without a TOC.
*
The -M switch to gcc (to output dependency information) is not supported when multiple architectures are specified on the command line. Depending on your makefile, this may require substantial changes to your makefile rules. For autoconf-based configure scripts, the flag --disable-dependency-tracking should solve this problem.
For projects using automake, it may be necessary to run automake with the -i flag to disable dependency checks or put no-dependencies in the AUTOMAKE_OPTIONS variable in each Makefile.am file.
*
If you run into problems building a universal binary for an open source tool, the first thing you should do is to get the latest version of the source code. This does two things:
*
Ensures that the version of autoconf and automake used to generate the configuration scripts is reasonably current, reducing the likelihood of build failures, execution failures, backwards or forwards compatibility problems, and other idiosyncratic or downright broken behavior.
*
Reduces the likelihood of building a version of an open source tool that contains known security holes or other serious bugs.
*
Older versions of autoconf do not handle it gracefully when --target, --host, and --build are not all specified. Different versions also behave differently when you specify only one or two of these flags. Thus, you should always specify all three of these options if you are running an autoconf-generated configure script with intent to cross-compile.
*
Some earlier versions of autoconf handle cross-compiling poorly. If your tool contains a configure script generated by an early autoconf, you may be able to significantly improve things by replacing some of the config.* files (and config.guess in particular) with updated copies from the version of autoconf that comes with OS X.
This will not always work, however, in which case it may be necessary to actually regenerate the configure script by running autoconf. To do this, simply change into the root directory of the project and run /usr/bin/autoconf. It will automatically detect and use the configure.in file and use it to generate a new configure script. If you get warnings, you should first try a web search for the error message, as someone else may have already run into the problem (possibly on a different tool) and found a solution.
If you get errors about missing AC_ macros, you may need to download a copy of libraries on which your tool depends and copy their .m4 autoconf configuration files into /usr/share/autoconf. Alternately, you can add the macros to the file acinclude.m4 in your project’s main directory and autoconf should automatically pick up those macros.
You may, in some cases, need to rerun automake and/or autoheader if your tool uses them. Be prepared to run into missing AM_ and AH_ macros if you do, however. Because of the added risk of missing macros, this should generally only be done if running autoconf by itself does not correct a build problem.
Important: Be sure to make a backup copy of the original scripts, headers, and other generated files (or, ideally, the entire project directory) before running autoheader or automake.
*
Different makefiles and configure scripts handle command-line overrides in different ways. The most consistent way to force these overrides is to specify them as variable assignments placed before the command itself on the command line (an illustrative invocation follows this list). Doing so should generally result in the flags being added to CFLAGS during compilation. However, this behavior is not completely consistent across makefiles from different projects.
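An illustrative invocation, assuming a makefile that picks up CFLAGS and LDFLAGS from the environment:

CFLAGS="-arch ppc -arch i386" LDFLAGS="-arch ppc -arch i386" make

If the makefile hard-codes its own flags and ignores the environment, passing the assignments as arguments to make (make CFLAGS=... LDFLAGS=...) may work instead, which is part of why the behavior varies between projects.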
For additional information about autoconf, automake, and autoheader, you can view the autoconf documentation at http://www.gnu.org/software/autoconf/manual/index.html.
For additional information on compiler flags for Intel-based Macintosh computers, modifying code to support little-endian CPUs, and other porting concerns, you should read Universal Binary Programming Guidelines, Second Edition, available from the ADC Reference Library.

Cross-Compiling a Self-Bootstrapping Tool
Probably the most difficult situation you may experience is that of a self-bootstrapping tool—a tool that uses a (possibly stripped-down) copy of itself to either compile the final version of itself or to construct support files or libraries. Some examples include TeX, Perl, and gcc.
Ideally, you should be able to build the executable as a universal binary in a single build pass. If that is possible, everything “just works”, since the universal binary can execute on the host. However, this is not always possible. If you have to cross-compile and glue the pieces together with lipo, this obviously will not work.
If the build system is written well, the tool will bootstrap itself by building a version compiled for the host, then use that to build the pieces for the target, and finally compile a version of the binary for the target. In that case, you should not have to do anything special for the build to succeed.
In some cases, however, it is not possible to simultaneously compile for multiple architectures and the build system wasn’t designed for cross-compiling. In those cases, the recommended solution is to pre-install a version of the tool for the host architecture, then modify the build scripts to rename the target’s intermediate copy of the tool and copy the host’s copy in place of that intermediate build product (for example, mv miniperl miniperl-target; cp /usr/bin/perl miniperl).
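Sketched in terms of the Perl example above (the file names are from that example; the general pattern applies to other self-bootstrapping tools):

# After the target build produces its intermediate interpreter:
mv miniperl miniperl-target      # set the target-architecture copy aside
cp /usr/bin/perl miniperl        # substitute a host-native copy so later steps can execute it

# ... let the dependent support files and libraries build using the host copy ...

# Before the final build phase, restore the target-architecture copy:
mv miniperl-target miniperl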
By doing this, later parts of the build script will execute the version of the tool built for the host architecture. Assuming there are no architecture dependencies in the dependent tools or support files, they should build correctly using the host’s copy of the tool. Once the dependent build is complete, you should swap back in the original target copy in the final build phase. The trick is in figuring out when to have each copy in place.

Conditional Compilation on OS X
You will sometimes find it necessary to use conditional compilation to make your code behave differently depending on whether certain functionality is available.
Older code sometimes used conditional statements like #ifdef __MACH__ or #ifdef __APPLE__ to try to determine whether it was being compiled on OS X or not. While this seems appealing as a quick way of getting ported, it ultimately causes more work in the long run. For example, if you make the assumption that a particular function does not exist in OS X and conditionally replace it with your own version that implements the same functionality as a wrapper around a different API, your application may no longer compile or may be less efficient if Apple adds that function in a later version.
Apart from displaying or using the name of the OS for some reason (which you can more portably obtain from the uname API), code should never behave differently on OS X merely because it is running on OS X. Code should behave differently because OS X behaves differently in some way—offering an additional feature, not offering functionality specific to another operating system, and so on. Thus, for maximum portability and maintainability, you should focus on that difference and make the conditional compilation dependent upon detecting the difference rather than dependent upon the OS itself. This not only makes it easier to maintain your code as OS X evolves, but also makes it easier to port your code to other platforms that may support different but overlapping feature sets.
The most common reasons you might want to use such conditional statements are attempts to detect differences in:
*
processor architecture
*
byte order
*
file system case sensitivity
*
other file system properties
*
compiler, linker, or toolchain differences
*
availability of application frameworks
*
availability of header files
*
support for a function or feature
Instead, it is better to figure out why your code needs to behave differently on OS X, and then use conditional compilation techniques that are appropriate for the actual root cause.
The misuse of these conditionals often causes problems. For example, if you assume that certain frameworks are present if those macros are defined, you might get compile failures when building a 64-bit executable. If you instead test for the availability of the framework, you might be able to fall back on an alternative mechanism such as X11, or you might skip building the graphical portions of the application entirely.
For example, OS X provides preprocessor macros to determine the CPU architecture and byte order. These include:
*
__i386__—Intel (32-bit)
*
__x86_64__—Intel (64-bit)
*
__ppc__—PowerPC (32-bit)
*
__ppc64__—PowerPC (64-bit)
*
__BIG_ENDIAN__—Byte order is big-endian (PowerPC)
*
__LITTLE_ENDIAN__—Byte order is little-endian (Intel)