The other answers provide some good insight into why scientists might publish the way they do today, but I think all of them miss a pretty obvious and important point: the history of publishing.
Scientific findings have been published in print for hundreds of years, even if the concept of peer review is more recent [1]. For most of that time, scientists did not work with large data sets with the frequency and ease that we do today, and publishing "code" was certainly not common. A small data set could simply be published within an article, typically as a table or figure, and distributed by photocopy or transcribed by hand.
Fast-forward to the 2010s: software is a critical intellectual and technical component in most areas of scientific research, and huge data sets can be disseminated openly, easily, and at little to no cost. Distributing data and code inside a print article is rarely realistic these days, and even though journals now typically publish online versions of all articles, integrating supporting code and data remains a big challenge, or at least publishers make it out to be one.
I would point to the rapid advance of computing and networking technology as a primary cause of "closed" thinking in many (most) cases when it comes to publication. Many publishers and senior scientists are simply struggling to find their feet in this brave new world, holding on to decades-old practices and values: the practices and values under which they, and their mentors before them, were trained.
[1] Baldwin, M. (2015). Credibility, peer review, and Nature, 1945–1990. Notes and Records of the Royal Society, 69, 337–352. doi:10.1098/rsnr.2015.0029