Q: I am porting a Java product to Mac OS X. What are the equivalents for directories and paths common to JDK installations on other platforms?
A JDK is preinstalled on every copy of Mac OS X. The location of the Java VM may vary, but it can always be found using tools built into the OS; user input should never be required to locate Java-related paths and directories.
Java Home
Many Java applications need to know the location of a $JAVA_HOME directory. On Mac OS X 10.5 or later, $JAVA_HOME should be found using the /usr/libexec/java_home command-line tool. On older Mac OS X versions where the tool does not exist, use the fixed path "/Library/Java/Home". The /usr/libexec/java_home tool dynamically finds the highest-priority Java version specified in Java Preferences for the current user. This path gives access to the bin subdirectory, where command-line tools such as java, javac, etc. exist as on other platforms. The /usr/libexec/java_home tool also allows you to specify a particular CPU architecture and Java platform version when locating a $JAVA_HOME.
Another advantage of finding this path dynamically, as opposed to hardcoding a fixed path, is that it is updated when a new version of Java is downloaded via Software Update or installed with a newer version of Mac OS X. For this reason, it is important that developers not install files in the JDKs inside /System, since those changes will be lost when Java is subsequently updated.
To obtain the path to the currently executing $JAVA_HOME, use the java.home System property.
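Both lookups can be sketched together. This is a minimal illustration (the class name is invented): it prints the home of the currently running VM, then the user's preferred $JAVA_HOME via the tool on 10.5+, falling back to the fixed pre-10.5 path when the tool is absent.

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.InputStreamReader;

public class JavaHomeLocator {
    public static void main(String[] args) throws Exception {
        // Home of the VM that is currently executing this code:
        System.out.println(System.getProperty("java.home"));

        // The user's preferred $JAVA_HOME, found dynamically on 10.5+.
        // (java_home also accepts flags to request a specific Java
        // platform version or CPU architecture.)
        File tool = new File("/usr/libexec/java_home");
        if (tool.canExecute()) {
            Process p = new ProcessBuilder(tool.getPath()).start();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                System.out.println(r.readLine());
            }
        } else {
            // Fixed path used before Mac OS X 10.5:
            System.out.println("/Library/Java/Home");
        }
    }
}
```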
Java software on other platforms often makes use of the $JAVA_HOME/lib/ext directory within a JDK installation to store support class or jar files. While Java for Mac OS X also contains a lib/ext directory, developers should not modify it directly, for the same reasons mentioned above. The /Library/Java/Extensions directory can be used for additional jar files or JNI libraries that need to be placed on the system classpath. For more controlled access, the ~/Library/Java/Extensions directory can be used for user-level installation of support libraries. Items placed in either of these directories do not need to be named in an application's classpath and will be available to all applications run under the respective scope (system-level or user-level, depending on which directory is used).
On Mac OS X, the Java runtime provides the java.util.prefs API, which is backed by the standard Mac OS X Preferences API and directories. Simply using this pure Java API reads and stores your application's preferences in ~/Library/Preferences as a Mac OS X property list file. Applications that already have their own preferences format should store those preferences in the ~/Library/Preferences directory as well. This directory can be reached from Java code by creating a file with the path System.getProperty("user.home") + "/Library/Preferences/" + "com.example.your.Application". An application whose preferences should be global across all users could instead store them in /Library/Preferences; however, this directory is not writable by non-admin users.
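A minimal sketch of both approaches, assuming com.example.your.Application as the placeholder identifier from the text (the class name and key names are invented for illustration):

```java
import java.util.prefs.Preferences;

public class PrefsExample {
    public static void main(String[] args) {
        // The pure Java API: on Mac OS X this is persisted under
        // ~/Library/Preferences as a property list; on other platforms
        // it uses that platform's native store. No file handling needed.
        Preferences prefs = Preferences.userNodeForPackage(PrefsExample.class);
        prefs.put("lastOpenedFile", "/tmp/example.txt");
        System.out.println(prefs.get("lastOpenedFile", "none"));

        // For an application with its own preferences format, build the
        // conventional per-user path by hand:
        String custom = System.getProperty("user.home")
                + "/Library/Preferences/" + "com.example.your.Application";
        System.out.println(custom);
    }
}
```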
Some applications store user data in database files or save-less file schemes that are not conventional documents the user can move around in the file system. On Mac OS X, these files should go into a directory named after your application inside the ~/Library directory. Your app's directory at the top level of ~/Library should be used only for irreplaceable user data, not for preferences (like window positions or recently used document lists). You can create a path to this directory by concatenating System.getProperty("user.home") + "/Library/" + "Your App Name".
Applications that use temporary files to cache downloaded resources or complex calculations should store them in the secure Mac OS X temp directory. Use the java.io.tmpdir System property concatenated with a unique identifier for your application to create a place in the secure temp directory for your application. This location is periodically purged of files that have not been modified for several days, but is present on the startup disk.
If your application uses its own purging strategy, it may be more appropriate to use the ~/Library/Caches directory in the user's home directory. Keep in mind that this location may reside on a network-mounted file system or in an encrypted FileVault image. You can create a directory in this location by concatenating System.getProperty("user.home") + "/Library/Caches/" + "com.example.your.Application".
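The three non-document locations discussed above can be sketched together; the bundle identifier and "Your App Name" are the placeholders used in the text:

```java
import java.io.File;

public class AppDataDirs {
    public static void main(String[] args) {
        String home = System.getProperty("user.home");
        String appId = "com.example.your.Application"; // placeholder

        // OS-managed secure temp directory, periodically purged:
        File tmp = new File(System.getProperty("java.io.tmpdir"), appId);

        // Cache directory for an app with its own purging strategy:
        File caches = new File(home, "Library/Caches/" + appId);

        // Irreplaceable, non-document user data:
        File appData = new File(home, "Library/Your App Name");

        System.out.println(tmp.getPath());
        System.out.println(caches.getPath());
        System.out.println(appData.getPath());
    }
}
```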
Gone are the days when the only source of a Linux distribution was the web. There were always a few small-scale distributors who would sell pre-installed Linux desktop systems, but the choices were few and the hardware options were very limited. Thanks to Dell, the pre-installed desktop Linux market is now mainstream. Today, we will look at some of the popular distributors/resellers who are selling pre-installed Linux systems for home users (not servers).
1) Dell: Dell is currently selling four laptop systems ranging from $549 to $1,049; the only Linux desktop system available starts at $448. All these systems come pre-installed with Ubuntu 8.04. More information here.
2) System76: They are Mark Shuttleworth's personal favorite pre-installed Linux reseller. Like Dell, they sell only pre-installed Ubuntu Linux, but they offer a wider choice of systems and hardware configurations. Their cheapest laptop starts at $869. More information here.
3) The Linux Laptop Company: As the name suggests, they specialize in selling pre-installed Linux laptops; like the previous two vendors, their distro of choice is Ubuntu 8.04. They currently sell four laptop models, with prices ranging from $699 all the way up to $1,299. More information here.
4) ASUS Eee PC: Based on Xandros Linux, the ASUS Eee PC is perhaps the most widely sold pre-installed Linux subnotebook ever. There are several resellers for the ASUS Eee PC, but my favorite is Newegg, where the listed price for the 1000H 40G is $669. You can get more information about the system here.
5) Linux Emporium: Based in the UK, they are ThinkPad resellers offering a choice of three Linux distros: their systems can be configured with Ubuntu, SUSE, or Fedora. Prices start at £376. Get more information here.
6) Linux Certified: They sell a wide variety of laptops with a choice of six pre-installed Linux distros: Fedora 8, Ubuntu 8.04, openSUSE 11, RHEL 5, CentOS 5, and Oracle Linux. Their LC2000 series laptops start at $699. More information here.
A special mention goes to Walmart's gPC, listed at $130. It is not a very powerful system, but the gOS Linux distro it runs is interesting nonetheless. If you have any experience with pre-installed Linux resellers, feel free to share it with us, and let us know if we forgot to mention any notable resellers out there.
This article describes how you can upgrade your Fedora 15 system to Fedora 16. The upgrade procedure works for both desktop and server installations.
I do not issue any guarantee that this will work for you!
1 Preliminary Note
The commands in this article must be executed with root privileges. Open a terminal (on a Fedora 15 desktop, go to Applications > System Tools > Terminal) and log in as root, or if you log in with a regular user, type
su
to become root.
Please make sure that the system that you want to upgrade has more than 600 MB of RAM – otherwise the system might hang when it tries to reboot with the following message (leaving you with an unusable system):
Trying to unpack rootfs image as initramfs…
2 Upgrading To Fedora 16 (Desktop)
First we must upgrade the rpm package:
yum update rpm
Then we install the latest updates:
yum -y update
Next we clean the yum cache:
yum clean all
If you notice that a new kernel got installed during yum -y update, you should reboot the system now:
reboot
(After the reboot, log in as root again, either directly or with the help of
su
)
Now we come to the upgrade process. We can do this with preupgrade (preupgrade will also take care of your RPMFusion packages).
Install preupgrade…
yum install preupgrade
… and call it like this:
preupgrade
The preupgrade wizard will then start on your desktop. Select Fedora 16 (Verne). The system is then prepared for the upgrade.
At the end, click on the Reboot Now button.
During the reboot, the upgrade is performed. This can take quite a long time, so please be patient.
Afterwards, you can log into your new Fedora 16 desktop.
3 Upgrading To Fedora 16 (Server)
First we must upgrade the rpm package:
yum update rpm
Then we install the latest updates:
yum -y update
Next we clean the yum cache:
yum clean all
If you notice that a new kernel got installed during yum -y update, you should reboot the system now:
reboot
(After the reboot, log in as root again, either directly or with the help of
su
)
Now we come to the upgrade process. We can do this with preupgrade.
Install preupgrade…
yum install preupgrade
… and call it like this:
preupgrade-cli
It will show you a list of releases that you can upgrade to. If all goes well, it should show something like Fedora 16 (Verne) in the list:
[root@server1 ~]# preupgrade-cli
Loaded plugins: blacklist, langpacks, whiteout
No plugin match for: rpm-warm-cache
No plugin match for: remove-with-leaves
No plugin match for: auto-update-debuginfo
Loaded plugins: langpacks, presto, refresh-packagekit
please give a release to try to pre-upgrade to
valid entries include:
"Fedora 16 (Verne)"
[root@server1 ~]#
To upgrade, append the release string to the preupgrade-cli command:
preupgrade-cli "Fedora 16 (Verne)"
Preupgrade will also take care of your RPMFusion packages, so all you have to do after preupgrade has finished is to reboot:
reboot
During the reboot, the upgrade is performed. This can take quite a long time, so please be patient. Afterwards, you can log into your new Fedora 16 server.
Yes, it should be easy to install another window manager and take it for a spin. You don't even have to remove your current one; in fact, it's highly recommended that you leave it in place.
Open up your distribution's package manager and install the Xfce packages. Then log out and use the menus on the login screen to select a different window manager during your login.
I use gpasswd because not all versions of usermod have an easy way to add a user to a group without replacing the user's entire group list. However, on any recent Fedora, usermod -a -G wheel username should have the same effect. You could also use the system-config-users GUI, of course.
If you are using Fedora 14 or earlier, use visudo to edit the sudoers file, removing the # from this line:
%wheel ALL=(ALL) ALL
This is the default in the sudoers file on Fedora 15 and newer, so adding the user to wheel is all you need to do.
See also this question and answer over on Server Fault for information on granting sudo-like "auth as self" behavior to wheel group members for graphical apps that use consolehelper or PackageKit.
This is one system administrator's point of view on why LD_LIBRARY_PATH, as frequently used, is bad. It is written from a SunOS 4.x/5.x (and to some extent Linux) point of view, but most of it also applies to other UNIXes.
What LD_LIBRARY_PATH does
LD_LIBRARY_PATH is an environment variable you set to give the run-time shared library loader (ld.so) an extra set of directories to look in when searching for shared libraries. Multiple directories can be listed, separated by colons (:). This list is prepended to the existing list of compiled-in loader paths for a given executable, and to any system default loader paths.
For security reasons, LD_LIBRARY_PATH is ignored at runtime for executables that have their setuid or setgid bit set. This severely limits the usefulness of LD_LIBRARY_PATH.
Why was it invented?
There were a couple of good reasons why it was invented:
To test out new library routines against an already compiled binary (for either backward compatibility or for new feature testing).
To have a short term way out in case you wanted to move a set of shared libraries to another location.
As an often unwanted side effect, LD_LIBRARY_PATH will also be searched at link (ld) stage, after any directories specified with -L (and even if no -L flag is given).
Some good examples of how LD_LIBRARY_PATH is used:
When upgrading shared libraries, you can test out a library before replacing it.
In a similar vein, in case your upgrade program depends on shared libraries and may freak out if you replace a shared library out from under it, you can use LD_LIBRARY_PATH to point to a directory with copies of the shared libraries, and then you can replace the system copy without worry. You can even undo things, should anything fail, by moving the copies back.
X11 uses LD_LIBRARY_PATH during its build process. X11 distributes its fonts in "bdf" format, and during the build process it needs to "compile" the bdf files into "pcf" files. LD_LIBRARY_PATH is used to point to the build lib directory so it can run bdftopcf during the build stage, before the shared libraries are installed.
Perl can be installed with most of its core code as a shared library. This is handy if you embed Perl in other programs — you can compile them so they use the shared library and so you’ll save memory at run time. However Perl uses Perl scripts at various points in the build and install process. The ‘perl’ binary won’t run until its shared libraries are installed, unless LD_LIBRARY_PATH is used to bootstrap the process.
How has it been corrupted?
Too often people use it as a crutch for not doing the right thing (i.e. relying on the compiled in path). Often programs (even commercial ones) are compiled without any run-time loader paths at all, forcing you to have LD_LIBRARY_PATH set or else the program won’t run.
LD_LIBRARY_PATH is one of those insidious things that once it gets set globally for a user, things tend to happen which cause people to rely on it being set. Eventually when LD_LIBRARY_PATH needs to be changed or removed, mass breakage will occur!
How does the shared loader work?
SunOS 4.x uses major and minor revision numbers. If you have a library Xt, then it's named something like libXt.so.4.10 (major version 4, minor 10). If you update the library (to correct a bug, for example), you would install libXt.so.4.11, and applications would automatically use the new version. To do this, the loader must do a readdir() for every directory in the loader path and glob out the correct file name. This is quite expensive, especially if the directories are large, contain symlinks, and/or are located over NFS.
Linux, SunOS 5.x, and most other SysV variants use only major revision numbers. A library Xt is just named something like libXt.so.4. (Linux confuses things by generally using major/minor library file names, but it always includes a symlink that is the actual library path referenced. So, for example, a library "libXt.so.6" is actually a symlink to "libXt.so.6.0"; the linker/loader actually looks for "libXt.so.6".)
The loader works essentially the same way, except that you don't have minor library updates (you update the existing library in place) and the loader just does a stat() for each directory in the loader path, which is much faster.
The bad old days before separate run-time vs link-time paths
Nowadays you specify the run-time path for an executable at link stage with the -R (or sometimes -rpath) flag to ld. There's also LD_RUN_PATH, an environment variable that has the same effect on ld as specifying -R.
Before all this you had only -L, which applied not only at compile time but at run time as well. There was no way to say "use this directory at compile time" but "use this other directory at run time". There were some rather spectacular failure modes one could get into because of this. For example, say you are building X11R6 in an NFS-automounted directory /home/snoopy/src. X11R6 is made up of shared libraries as well as programs, and the programs are compiled against the libraries while they are located in the build tree, not in their final installed location. Since the linker must resolve symbols at link time, you need a -L path that includes the build-tree path in addition to the final run-time path of, say, /usr/local/X11R6/lib. Now all the programs that use shared libraries will look first in /home/snoopy/src for their libraries and only then in the correct place, so every time an X11R6 app starts up, it NFS-automounts its build directory! You probably removed the temporary build directory ages ago, but the loader will still search there. What's worse, if snoopy is down or no longer exists, no X11R6 apps will run at all. Bummer! Happily, this has all been fixed, assuming your OS has a modern linker/loader. It can also be worked around by specifying the final run-time path first, before the build path, in the -L options.
Evil Case Study #1
My first experience with this breakage was under SunOS 4.x, with OpenWindows. For some dumb reason, a few Sun OpenWindows apps were not compiled with correct run-time loader paths, forcing you to have LD_LIBRARY_PATH set all the time. Remember that at this time, the global OpenWindows startup scripts would automatically set your LD_LIBRARY_PATH to $OPENWINHOME/lib.
Okay, how did it break? Well, it just so happens that this site had also compiled X11R4 from source, in /usr/local/X11R4. Things got really confusing, because if you ever wanted to run the X11R4 apps, they would run against the OpenWindows libraries in /usr/openwin/lib, not the libraries in /usr/local/X11R4/lib! Things got even more confusing once X11R5 and then X11R6 came out; now we had four different and often incompatible versions of a given shared library.
Hm. What do you do? If you set LD_LIBRARY_PATH to put OpenWindows first, then at best it will slow things down (since most people were running X11R5 and X11R6 stuff, searching for libraries in /usr/openwin/lib was a waste). At worst it caused spurious warnings ("ld.so: warning: libX11.x.y has older revision than expected z") or caused apps to break altogether due to incompatibilities. It was also confusing to lots of people who tried to compile X apps and forgot to use -L.
What did I do? I whipped out emacs and binary-edited the few OpenWindows apps that didn't have a correct run-time path compiled in, changing it to the correct location in /usr/openwin/lib. (It should be noted that these tended to be apps that were fixed by system patches; alas, it seems the folks who built the patched versions didn't have the same environment as the FCS builds.) I then changed all the startup scripts and removed any "setenv LD_LIBRARY_PATH" statements. I even put an "unsetenv LD_LIBRARY_PATH" in my own .cshrc for good measure.
Evil Case Study #2
(based on a true story).
Due to licensing issues, it’s common for commercial apps to ship in binary form a copy of the shared Motif library. Motif is a commercial product, and not all OS’s come with it. It’s a common toolkit for commercial programs to write applications against. It’s also an evolving product, with ongoing bugfixes and new features.
Say application WidgetMan is one such application. In its startup script, it sets LD_LIBRARY_PATH to point to its copy of Motif so it uses that one when it runs. As it happens, WidgetMan is designed to launch other programs too. Unfortunately, when WidgetMan launches other apps, they inherit the LD_LIBRARY_PATH setting and some Motif based apps now break when run from WidgetMan because WidgetMan’s Motif is incompatible with (but the same library version as) the system Motif library. Bummer!
Imagine if you had followed what some clueless commercial install apps tell you to do and set LD_LIBRARY_PATH globally!
Half-hearted attempts to improve things
Some OSs (e.g. Linux) have a configurable loader: you can configure which run-time paths to look in by modifying /etc/ld.so.conf. This is almost as bad as LD_LIBRARY_PATH! Install scripts should never modify this file; it should contain only the standard library locations as shipped with the OS.
Canonical rules for handling LD_LIBRARY_PATH
Never ever set LD_LIBRARY_PATH globally.
If you must ship binaries that use shared libraries and want to allow your clients to install the program outside a ‘standard’ location, do one of the following:
Ship your binaries as .o files, and as part of the install process relink them with the correct installation library path.
Ship executables with a very long “dummy” run-time library path, and as part of the install process use a binary editor to substitute the correct install library path in the executable.
If you are forced to set LD_LIBRARY_PATH, do so only as part of a wrapper.
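The wrapper rule can be sketched with a launcher written in Java (the program path and library directory here are hypothetical). Setting the variable only in the child's environment keeps it out of the user's global environment, though note that anything the wrapped program itself launches will still inherit it, as Evil Case Study #2 shows:

```java
import java.util.Map;

public class WidgetManWrapper {
    public static void main(String[] args) throws Exception {
        // Hypothetical install location for the wrapped program:
        ProcessBuilder pb = new ProcessBuilder("/opt/widgetman/bin/widgetman");

        // Scope LD_LIBRARY_PATH to this one child process. The caller's
        // shell environment is never touched, so nothing else on the
        // system starts relying on the variable being set.
        Map<String, String> env = pb.environment();
        env.put("LD_LIBRARY_PATH", "/opt/widgetman/lib");

        // pb.start() would launch the program here; for illustration we
        // just show the scoped environment entry.
        System.out.println(pb.environment().get("LD_LIBRARY_PATH"));
    }
}
```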
Some software packages make you install a symlink from the standard location pointing to the real location. While this ‘works’, it does not solve the problem. What if you need to have two versions installed? Not to mention the fact that many vendors seem to choose stupid locations as their ‘standard’ location (like putting them in ‘/’ or ‘/usr’). This also typically makes things difficult for network installations, since even though you install an application on a network directory, you need to go around to every computer on the network and make a symlink.
Thoughts on improving LD_LIBRARY_PATH implementations in UNIX
Remove the link-time aspect of LD_LIBRARY_PATH. (Solaris’s ld will do this with the -i flag). Too often people just lazily set LD_LIBRARY_PATH so they don’t have to specify -L, causing bad consequences at run time for other apps. Or on the flip side people will set LD_LIBRARY_PATH to fix some brokenness at run time with some app, but it will lead to confusion or breakage at compile time for some other app if they don’t specify a correct -L path. It would be much cleaner if LD_LIBRARY_PATH only had influence at run-time. If necessary, invent some other environment variable for the job (LD_LINK_PATH ?).
Have OS’s ship with programs which allow one to safely change an executable’s run-time linker path.
Implement a -s option to ldd that prints the run-time path for a given executable. (You can also see this with 'dump -Lv' on Solaris.)
Solaris 7 has a neat idea: you can specify a run-time path that is itself evaluated at run time. You link with an rpath of $ORIGIN/../lib, where $ORIGIN evaluates at run time to the installation path of the binary. Now you can move the installation tree to another location entirely and everything will still work. We need this in other OSs! Unfortunately, at least in Solaris 7, $ORIGIN is considered a "relative" path (you can subvert it if you have a writable directory on the same filesystem, because UNIX lets you hard-link even a setuid executable), so it is ignored for setuid/setgid binaries. Sun fixed this in Solaris 8: you can specify with crle(1) which paths are "trustworthy".
[root@ljj c_c++]# LD_DEBUG=help ls
Valid options for the LD_DEBUG environment variable are:

  libs        display library search paths
  reloc       display relocation processing
  files       display progress for input file
  symbols     display symbol table processing
  bindings    display information about symbol binding
  versions    display version dependencies
  all         all previous options combined
  statistics  display relocation statistics
  unused      determined unused DSOs
  help        display this help message and exit

To direct the debugging output into a file instead of standard output
a filename can be specified using the LD_DEBUG_OUTPUT environment variable.
[root@ljj c_c++]#
Linux has supported shared libraries for a long time; they are no longer a new concept. Everyone knows how to compile, link, and dynamically load (dlopen/dlsym/dlclose) shared libraries. However, many people, perhaps even some experts, have only a vague understanding of the environment variables related to shared libraries. Of course, you can use shared libraries without knowing these variables, but knowing them may help you use shared libraries better. Here is an introduction to some commonly used ones; I hope it is helpful:

LD_LIBRARY_PATH: This is the variable everyone knows best. It tells the loader which directories to search for shared libraries; multiple search directories can be set, separated by colons. On Linux there is also another way to accomplish the same thing: add the directories to /etc/ld.so.conf, or create a file under /etc/ld.so.conf.d and list the directories in that file. That configuration is system-wide, however, while the environment variable affects only the current shell. By convention, unless you specify it in one of these ways, the loader will not search the current directory for shared libraries, just as the shell does not look for executables in the current directory.

LD_PRELOAD: This variable is particularly useful to programmers. It tells the loader to give priority, when resolving function addresses, to functions in the shared libraries listed in LD_PRELOAD. This is convenient for debugging. For example, memory errors are among the hardest C/C++ problems to solve; the common approach is to interpose on the malloc family of functions, but that normally requires recompiling the program, which is a hassle. With the LD_PRELOAD mechanism there is no need to recompile: compile the wrapper functions into a shared library and add that library's name to LD_PRELOAD, and the wrappers will be called automatically. On Linux there is also another way to accomplish the same thing: write the file names of the libraries to preload into /etc/ld.so.preload. Again, that is system-wide, while the environment variable affects only the current shell.

LD_DEBUG: This variable is rather fun. It can sometimes help you track down obscure shared-library problems (such as those caused by identically named functions), and you can also learn something about the library-loading process by using it. Its options (libs, reloc, files, symbols, bindings, versions, all, statistics, unused, and help) are those shown in the help output above.

BIND_NOW: This variable has the same meaning as the corresponding flag to dlopen, except that the dlopen flag applies to explicit loading, while BIND_NOW/BIND_NOT applies to implicit loading.

LD_PROFILE / LD_PROFILE_OUTPUT: These generate profiling data for a specified shared library. LD_PROFILE names the shared library, and LD_PROFILE_OUTPUT specifies where the profile file is written; it must be an existing directory, and defaults to /var/tmp/ or /var/profile. From the profiling data you can get usage statistics for the functions in that shared library.
Creating an inbound email service for Salesforce.com is a relatively straightforward process, but there are a few things to explain to make your life easier. The email service is an Apex class that implements the Messaging.InboundEmailHandler interface, which allows you to process the email's contents, headers, and attachments. Using the information in the email, you could, for instance, create a new contact if one does not exist with that email address, receive job applications and attach the person's resume to their record, or have an integration process that emails data files for processing.
You access email services from Setup -> Develop -> Email Services. This page contains the basic code you will always use to start your Apex class; simply copy this code and create your new class with it. Click the "New Email Service" button to get started and fill out the form. There are a number of options, so make sure you read carefully and check out the docs. One handy option is "Enable Error Routing", which will send the inbound email to an alternative email address when processing fails. You can also specify the email address(es) to accept mail from. This works great if you have some sort of internal process that emails results or files for import into Salesforce.com. Just like Workflow, make sure you mark it as "Active" or you will pull your hair out during testing.
After you save the new email service, you will need to scroll down to the bottom of the page and create a new email address for the service. An email service can have multiple email addresses and can therefore process the same message differently for each address. When you create a new email service address, you specify the "Context User" and "Accept Email From". The email service uses the permissions of the Context User when processing the inbound message. So you could, for example, have the same email service accept email from US accounts and process it with a US context user, and have another address that accepts email from EMEA accounts and processes it with an EMEA context user. After you submit the form, the Force.com platform will create a unique email address; this is the address you send your email to for processing.
Now that the email service is configured, we can get down to writing the Apex code. Here's a simple class that creates a new contact and attaches any documents to the record.
global class ProcessJobApplicantEmail implements Messaging.InboundEmailHandler {

    global Messaging.InboundEmailResult handleInboundEmail(Messaging.InboundEmail email,
            Messaging.InboundEnvelope envelope) {

        Messaging.InboundEmailResult result = new Messaging.InboundEmailResult();

        // split the from name on the first space into first/last name
        Contact contact = new Contact();
        contact.FirstName = email.fromname.substring(0, email.fromname.indexOf(' '));
        contact.LastName = email.fromname.substring(email.fromname.indexOf(' ') + 1);
        contact.Email = envelope.fromAddress;
        insert contact;

        System.debug('====> Created contact ' + contact.Id);

        if (email.binaryAttachments != null && email.binaryAttachments.size() > 0) {
            for (Integer i = 0; i < email.binaryAttachments.size(); i++) {
                Attachment attachment = new Attachment();
                // attach to the newly created contact record
                attachment.ParentId = contact.Id;
                attachment.Name = email.binaryAttachments[i].filename;
                attachment.Body = email.binaryAttachments[i].body;
                insert attachment;
            }
        }
        return result;
    }
}
One of the difficult things about email services is debugging them. You can either create a test class for this or simply send the email and check the debug logs. Any debug statements you add to your class will show in the debug logs. Go to Setup -> Administration Setup -> Monitoring -> Debug Logs and add the Context User of the email service to the debug logs. Then simply send an email to the address and check the debug log for that user.
One thing I wanted to see was the actual text and headers that come through in the service. Here's an image showing virtually all the fields and headers of a sample email.
The following unit test will get you 100% code coverage.
static testMethod void testMe() {

    // create a new email and envelope object
    Messaging.InboundEmail email = new Messaging.InboundEmail();
    Messaging.InboundEnvelope env = new Messaging.InboundEnvelope();

    // set up the data for the email
    email.subject = 'Test Job Applicant';
    email.fromname = 'FirstName LastName';
    env.fromAddress = 'someaddress@email.com';

    // add an attachment
    Messaging.InboundEmail.BinaryAttachment attachment = new Messaging.InboundEmail.BinaryAttachment();
    attachment.body = Blob.valueOf('my attachment text');
    attachment.fileName = 'textfile.txt';
    attachment.mimeTypeSubType = 'text/plain';
    email.binaryAttachments =
        new Messaging.InboundEmail.BinaryAttachment[] { attachment };

    // call the email service class and test it with the data in the testMethod
    ProcessJobApplicantEmail emailProcess = new ProcessJobApplicantEmail();
    emailProcess.handleInboundEmail(email, env);

    // query for the contact the email service created
    Contact contact = [SELECT Id, FirstName, LastName, Email FROM Contact
                       WHERE FirstName = 'FirstName' AND LastName = 'LastName'];
    System.assertEquals(contact.FirstName, 'FirstName');
    System.assertEquals(contact.LastName, 'LastName');
    System.assertEquals(contact.Email, 'someaddress@email.com');

    // find the attachment
    Attachment a = [SELECT Name FROM Attachment WHERE ParentId = :contact.Id];
    System.assertEquals(a.Name, 'textfile.txt');
}