When running in YARN cluster mode, you always need to specify the resource settings for your executors (their number and their individual memory), plus the driver details as well. For example:
Amazon EC2 environment (already reserved):

m3.xlarge | Cores: 4 (1) | RAM: 15 GB (3.5) | HDD: 80 GB | Nodes: 3

spark-submit --class <YourClassFollowedByPackage> --master yarn-cluster --num-executors 2 --driver-memory 8g --executor-memory 8g --executor-cores 1 <Your Jar with Full Path> <Jar Args>
Always remember to add any other third-party libraries or jars to the classpath on each of the task nodes; you can add them directly to the Spark or Hadoop classpath on each node.
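As a minimal sketch, there are two usual ways to do this (the /opt/libs directory and the jar names below are placeholders for your own paths): either ship the jars with the job via --jars, or point Spark at a directory that already exists on every node via the extraClassPath settings.

# Option 1: ship the jars with the application (comma-separated list)
spark-submit --class <YourClassFollowedByPackage> --master yarn-cluster --jars /opt/libs/dep-one.jar,/opt/libs/dep-two.jar <Your Jar with Full Path> <Jar Args>

# Option 2: use a directory already present on every node
spark-submit --class <YourClassFollowedByPackage> --master yarn-cluster --conf spark.driver.extraClassPath="/opt/libs/*" --conf spark.executor.extraClassPath="/opt/libs/*" <Your Jar with Full Path> <Jar Args>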
Notes:

1) If you're using Amazon EMR, this can be achieved using custom bootstrap actions and S3 (see the sketch after these notes).

2) Remove conflicting jars too. Sometimes you'll see an unexplained NullPointerException, and conflicting jars can be one of the key reasons for it.
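For note 1, a bootstrap action is just a shell script that EMR runs on every node at cluster launch, so you can use one to pull your jars from S3. A minimal sketch, assuming your jars live in an S3 bucket (the bucket name, script name, and /opt/libs target are hypothetical):

#!/bin/bash
# copy-libs.sh -- download shared jars from S3 onto this node at launch
sudo mkdir -p /opt/libs
aws s3 cp s3://your-bucket/libs/ /opt/libs/ --recursive

Then register the script when creating the cluster (other flags omitted):

aws emr create-cluster ... --bootstrap-actions Path=s3://your-bucket/bootstrap/copy-libs.sh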
If possible, add your stack trace, which you can fetch with

yarn logs -applicationId <HadoopAppId>

so that I can answer you in a more specific way.