Weka's J48 allows one to check information gain on a full set of attributes, should I use those significant attributes to build my model? Or should I use the full set of attributes?
In data mining, there is a multi-way trade-off between the number of features you use, your accuracy, and the time it takes to generate a model. In theory, you'd want to include every possible feature to boost accuracy; however, going about data mining in this way guarantees lengthy model generation times. Further, learners like J48 that produce human-readable decision trees lose much of their usefulness when the tree has thousands of nodes.
Depending on how many features you start out with, you may very well want to remove features that don't provide a large enough information gain. If you have a small number of features to begin with (e.g. fewer than 20), it might make sense just to keep all of them.
If you do wish to limit the number of features you use, it would be best to choose those with the highest information gain. It would also be worthwhile to look into Principal Components Analysis (PCA, which WEKA supports) to reduce dimensionality, though note that PCA transforms the features into new combined components rather than selecting a subset of the originals.
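To make the "highest information gain" criterion concrete, here is a minimal stdlib-only Java sketch of what Weka's InfoGainAttributeEval computes under the hood: the entropy of the class labels minus the weighted entropy remaining after splitting on an attribute. The class name, toy data, and helper methods are my own illustration, not Weka code.

```java
import java.util.*;

public class InfoGainDemo {
    // Shannon entropy (base 2) of a list of class labels
    static double entropy(List<String> labels) {
        Map<String, Integer> counts = new HashMap<>();
        for (String l : labels) counts.merge(l, 1, Integer::sum);
        double h = 0.0, n = labels.size();
        for (int c : counts.values()) {
            double p = c / n;
            h -= p * (Math.log(p) / Math.log(2));
        }
        return h;
    }

    // Information gain of splitting the labels by the parallel attribute values:
    // IG = H(class) - sum over attribute values of (weight * H(class | value))
    static double infoGain(List<String> attr, List<String> labels) {
        Map<String, List<String>> partitions = new HashMap<>();
        for (int i = 0; i < attr.size(); i++)
            partitions.computeIfAbsent(attr.get(i), k -> new ArrayList<>())
                      .add(labels.get(i));
        double remainder = 0.0, n = labels.size();
        for (List<String> part : partitions.values())
            remainder += (part.size() / n) * entropy(part);
        return entropy(labels) - remainder;
    }

    public static void main(String[] args) {
        // Toy data: "windy" perfectly predicts "play", so its gain is 1 bit
        List<String> windy = Arrays.asList("yes", "yes", "no", "no");
        List<String> play  = Arrays.asList("no",  "no",  "yes", "yes");
        System.out.printf("IG(windy) = %.3f%n", infoGain(windy, play));
    }
}
```

Ranking your attributes by this score and keeping the top k is exactly what the Ranker search in WEKA's attribute selection panel does for you.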