There are some things I would recommend.
One is to have a custom Open Science spam filter: one of those self-learning things that recognizes spam, except trained to reject papers. The idea is to ensure a certain consistency of writing and readability of input. Many members here could supply examples they would like to see rejected, to seed the training. How many hints one gives the submitter as to why a paper was rejected is up to the designers; I would send a form letter saying the paper does not meet some of the automatic processing standards, and that a cursory review puts the first problem on page 3, with at least X other pages being problematic (or some such thing that tells the submitter a lot of work needs to be done).
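A minimal sketch of what such a filter could look like, assuming a corpus of member-contributed examples labeled accept/reject. The helper load_labeled_submissions() is hypothetical, and the vectorizer/classifier pair is one standard choice among many, not a prescription:

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import MultinomialNB

# Stateless hashing vectorizer (non-negative features, as MultinomialNB requires).
vectorizer = HashingVectorizer(n_features=2**18, alternate_sign=False)
classifier = MultinomialNB()

# Hypothetical loader: returns parallel lists of texts and labels (0 = accept, 1 = reject).
texts, labels = load_labeled_submissions()
classifier.partial_fit(vectorizer.transform(texts), labels, classes=[0, 1])

def screen(paper_text):
    """Return True when the automatic filter says to reject the paper."""
    return classifier.predict(vectorizer.transform([paper_text]))[0] == 1
```

Using partial_fit from the start means the model can keep learning from new examples later without a full retrain.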
Second is to have people briefly scan the papers that pass the filter. If the claims are outrageous, or there is some other clear indicator that the paper is not acceptable, see whether the filter can be trained to reject papers containing the offending section.
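Continuing the sketch above, this feedback loop could be a one-liner: when a human scanner flags an offending section, feed it back as a fresh "reject" example. Since the hashing vectorizer is stateless and MultinomialNB supports incremental updates, no full retrain is needed:

```python
def add_rejection_example(section_text):
    # Incrementally teach the filter that text like this should be rejected.
    classifier.partial_fit(vectorizer.transform([section_text]), [1])
```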
Third is to have a process for review. Once papers pass the first two stages, have each sit in an inbox for everyone to critique. If no one picks a paper up and comments on it within a certain period, declare a backlog or reviewer shortage, or find another mechanism that gets it reviewed.
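A toy version of that inbox timeout, assuming each entry records when it was queued and which comments it has received; the field names and the 30-day waiting period are made up here, not part of the proposal:

```python
from datetime import datetime, timedelta, timezone

MAX_WAIT = timedelta(days=30)  # assumed waiting period

def escalate_stale_papers(inbox):
    now = datetime.now(timezone.utc)
    for paper in inbox:
        # Untouched past the deadline: flag it for explicit routing
        # instead of letting it languish.
        if not paper["comments"] and now - paper["queued_at"] > MAX_WAIT:
            paper["status"] = "needs_assignment"
```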
Fourth is for papers that have made it through at least two reviewers (ideally reviewers at some remove from the authors and their institutions). Those are then put into the next stage of the pipeline for either thorough or massive reviewing, which can pick the paper apart. If a paper makes it to the fourth stage, it should be worthy of scrutiny by all; if it makes it past the fourth stage, it should be readable by "enough" people, ideally nonexperts as well as experts.
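One way to make the four-stage pipeline explicit in code. The stage names and the two-independent-reviewer threshold come from the text above; everything else is an assumed data model:

```python
from enum import Enum, auto

class Stage(Enum):
    FILTERED = auto()       # passed the automatic filter
    SCANNED = auto()        # passed the brief human scan
    IN_REVIEW = auto()      # sitting in the inbox for critique
    OPEN_SCRUTINY = auto()  # two independent reviews in; open to all

def ready_for_open_scrutiny(independent_reviews):
    """A paper advances to the fourth stage only after two independent reviews."""
    return len(independent_reviews) >= 2
```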
In all of this, the emphasis should be on good exposition and clear writing. Any portion that does not exhibit this should be clearly marked by a known reviewer, so that others do not waste time trying to interpret the document. Hopefully the automatic filter can be trained to recognize good exposition, and the How-To-Submit documentation should give examples of clear exposition alongside examples which are unclear (and do not pass the first filter). The goal should be output that would be nominated for good science writing. Even if a paper is speculative and not supported by data, it can be marked as such, and critiques of it would, in an ideal world, include how its ideas could be tested.
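As a very crude proxy for "good exposition", the filter could start from a surface readability score such as Flesch reading ease; a sketch follows, with a rough vowel-group syllable heuristic. A real filter would need a far richer notion of clarity than this, and the cutoff would be the designers' call:

```python
import re

def count_syllables(word):
    # Rough heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    # Flesch formula: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)
```

Papers scoring far below the chosen threshold could receive the "does not meet automatic processing standards" form letter described above.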
Gerhard "Ask Me About System Design" Paseman, 2015.09.17