diff --git a/.gitignore b/.gitignore index 8416e3bfccb1b8..8a6cc149115708 100644 --- a/.gitignore +++ b/.gitignore @@ -85,7 +85,7 @@ deps/npm/node_modules/.bin/ # test artifacts tools/faketime icu_config.gypi -test.tap +*.tap # Xcode workspaces and project folders *.xcodeproj diff --git a/CHANGELOG.md b/CHANGELOG.md index c9cbbdf5a60cc9..708b6811d7d1ad 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,10 +1,145 @@ # Node.js ChangeLog +## 2016-12-06, Version 4.7.0 'Argon' (LTS), @thealphanerd + +This LTS release comes with 108 commits. This includes 30 which are doc +related, 28 which are test related, 16 which are build / tool related, and 4 +commits which are updates to dependencies. + +### Notable Changes + +The SEMVER-MINOR changes include: + +* **build**: export openssl symbols on Windows making it possible to build addons linking against the bundled version of openssl (Alex Hultman) [#7576](https://github.com/nodejs/node/pull/7576) +* **debugger**: make listen address configurable in the debugger server (Ben Noordhuis) [#3316](https://github.com/nodejs/node/pull/3316) +* **dgram**: generalized send queue to handle close fixing a potential throw when dgram socket is closed in the listening event handler. (Matteo Collina) [#7066](https://github.com/nodejs/node/pull/7066) +* **http**: Introduce the 451 status code "Unavailable For Legal Reasons" (Max Barinov) [#4377](https://github.com/nodejs/node/pull/4377) +* **tls**: introduce `secureContext` for `tls.connect` which is useful for caching client certificates, key, and CA certificates. (Fedor Indutny) [#4246](https://github.com/nodejs/node/pull/4246) + +Notable SEMVER-PATCH changes include: + +* **build**: + - introduce the configure --shared option for embedders (sxa555) [#6994](https://github.com/nodejs/node/pull/6994) +* **gtest**: the test reporter now outputs tap comments as yamlish (Johan Bergström) [#9262](https://github.com/nodejs/node/pull/9262) +* **src**: node no longer aborts when c-ares initialization fails (Ben Noordhuis) [#8710](https://github.com/nodejs/node/pull/8710) +* **tls**: fix memory leak when writing data to TLSWrap instance during handshake (Fedor Indutny) [#9586](https://github.com/nodejs/node/pull/9586) + +### Commits + +* [[`ed31f9cc30`](https://github.com/nodejs/node/commit/ed31f9cc30)] - **benchmark**: add microbenchmarks for ES Map (Rod Vagg) [#7581](https://github.com/nodejs/node/pull/7581) +* [[`c5181eda4b`](https://github.com/nodejs/node/commit/c5181eda4b)] - **build**: reduce noise from doc target (Daniel Bevenius) [#9457](https://github.com/nodejs/node/pull/9457) +* [[`59d821debe`](https://github.com/nodejs/node/commit/59d821debe)] - **build**: use wxneeded on openbsd (Aaron Bieber) [#9232](https://github.com/nodejs/node/pull/9232) +* [[`7c73105606`](https://github.com/nodejs/node/commit/7c73105606)] - **build**: run cctests as part of test-ci target (Ben Noordhuis) [#8034](https://github.com/nodejs/node/pull/8034) +* [[`3919edb47e`](https://github.com/nodejs/node/commit/3919edb47e)] - **build**: don't build icu with -fno-rtti (Ben Noordhuis) [#8886](https://github.com/nodejs/node/pull/8886) +* [[`e97723b18c`](https://github.com/nodejs/node/commit/e97723b18c)] - **build**: abstract out shared library suffix (Stewart Addison) [#9385](https://github.com/nodejs/node/pull/9385) +* [[`0138b4db7c`](https://github.com/nodejs/node/commit/0138b4db7c)] - **build**: windows sharedlib support (Stewart Addison) [#9385](https://github.com/nodejs/node/pull/9385) +* 
[[`f21c2b9d3b`](https://github.com/nodejs/node/commit/f21c2b9d3b)] - **build**: configure --shared (sxa555) [#6994](https://github.com/nodejs/node/pull/6994) +* [[`bb2fdf58f7`](https://github.com/nodejs/node/commit/bb2fdf58f7)] - **build**: cherry pick V8 change for windows DLL support (Stefan Budeanu) [#8084](https://github.com/nodejs/node/pull/8084) +* [[`84849f186f`](https://github.com/nodejs/node/commit/84849f186f)] - **(SEMVER-MINOR)** **build**: export more openssl symbols on Windows (Alex Hultman) [#7576](https://github.com/nodejs/node/pull/7576) +* [[`3cefd65e90`](https://github.com/nodejs/node/commit/3cefd65e90)] - **build**: export openssl symbols on windows (Ben Noordhuis) [#6274](https://github.com/nodejs/node/pull/6274) +* [[`4de7a6e291`](https://github.com/nodejs/node/commit/4de7a6e291)] - **build**: fix config.gypi target (Daniel Bevenius) [#9053](https://github.com/nodejs/node/pull/9053) +* [[`9389572cbc`](https://github.com/nodejs/node/commit/9389572cbc)] - **crypto**: fix faulty logic in iv size check (Ben Noordhuis) [#9032](https://github.com/nodejs/node/pull/9032) +* [[`748e424163`](https://github.com/nodejs/node/commit/748e424163)] - **(SEMVER-MINOR)** **debugger**: make listen address configurable (Ben Noordhuis) [#3316](https://github.com/nodejs/node/pull/3316) +* [[`c1effb1255`](https://github.com/nodejs/node/commit/c1effb1255)] - **deps**: fix build with libc++ 3.8.0 (Johan Bergström) [#9763](https://github.com/nodejs/node/pull/9763) +* [[`eb34f687d5`](https://github.com/nodejs/node/commit/eb34f687d5)] - **deps**: revert default gtest reporter change (Brian White) [#8948](https://github.com/nodejs/node/pull/8948) +* [[`4c47446133`](https://github.com/nodejs/node/commit/4c47446133)] - **deps**: make gtest output tap (Ben Noordhuis) [#8034](https://github.com/nodejs/node/pull/8034) +* [[`91fce10aee`](https://github.com/nodejs/node/commit/91fce10aee)] - **deps**: back port OpenBSD fix in c-ares/c-ares (Aaron Bieber) [#9232](https://github.com/nodejs/node/pull/9232) +* [[`4571c84c67`](https://github.com/nodejs/node/commit/4571c84c67)] - **(SEMVER-MINOR)** **dgram**: generalized send queue to handle close (Matteo Collina) [#7066](https://github.com/nodejs/node/pull/7066) +* [[`d3c25c19ef`](https://github.com/nodejs/node/commit/d3c25c19ef)] - **doc**: update minute-taking procedure for CTC (Rich Trott) [#9425](https://github.com/nodejs/node/pull/9425) +* [[`861b689c01`](https://github.com/nodejs/node/commit/861b689c01)] - **doc**: update GOVERNANCE.md to use "meeting chair" (Rich Trott) [#9432](https://github.com/nodejs/node/pull/9432) +* [[`5e820ae746`](https://github.com/nodejs/node/commit/5e820ae746)] - **doc**: update Diagnostics WG info (Josh Gavant) [#9329](https://github.com/nodejs/node/pull/9329) +* [[`e08173a2f1`](https://github.com/nodejs/node/commit/e08173a2f1)] - **doc**: fix outdate ninja link (Yangyang Liu) [#9278](https://github.com/nodejs/node/pull/9278) +* [[`462c640a51`](https://github.com/nodejs/node/commit/462c640a51)] - **doc**: fix typo in email address in README (Rich Trott) [#8941](https://github.com/nodejs/node/pull/8941) +* [[`fc77cbb5b1`](https://github.com/nodejs/node/commit/fc77cbb5b1)] - **doc**: make node(1) more consistent with tradition (Alex Jordan) [#8902](https://github.com/nodejs/node/pull/8902) +* [[`66e26cd253`](https://github.com/nodejs/node/commit/66e26cd253)] - **doc**: child_process.execSync .stdio default is pipe (Kenneth Skovhus) [#9701](https://github.com/nodejs/node/pull/9701) +* 
[[`524ebfb5dd`](https://github.com/nodejs/node/commit/524ebfb5dd)] - **doc**: child_process .stdio accepts a String type (Kenneth Skovhus) [#9701](https://github.com/nodejs/node/pull/9701) +* [[`475fe96852`](https://github.com/nodejs/node/commit/475fe96852)] - **doc**: simplify process.memoryUsage() example code (Thomas Watson Steen) [#9560](https://github.com/nodejs/node/pull/9560) +* [[`c48c318806`](https://github.com/nodejs/node/commit/c48c318806)] - **doc**: change ./node to node in debugger.md (AnnaMag) [#8943](https://github.com/nodejs/node/pull/8943) +* [[`00a178257c`](https://github.com/nodejs/node/commit/00a178257c)] - **doc**: update CONTRIBUTING.md to address editing PRs (Gibson Fahnestock) [#9259](https://github.com/nodejs/node/pull/9259) +* [[`2b2dde855a`](https://github.com/nodejs/node/commit/2b2dde855a)] - **doc**: add italoacasas to collaborators (Italo A. Casas) [#9677](https://github.com/nodejs/node/pull/9677) +* [[`0f41058e41`](https://github.com/nodejs/node/commit/0f41058e41)] - **doc**: clarify relation between a file and a module (marzelin) [#9026](https://github.com/nodejs/node/pull/9026) +* [[`d1d207bd75`](https://github.com/nodejs/node/commit/d1d207bd75)] - **doc**: add Sakthipriyan to the CTC (Rod Vagg) [#9427](https://github.com/nodejs/node/pull/9427) +* [[`9dad98bdf1`](https://github.com/nodejs/node/commit/9dad98bdf1)] - **doc**: add 2016-10-26 CTC meeting minutes (Rich Trott) [#9348](https://github.com/nodejs/node/pull/9348) +* [[`824009296a`](https://github.com/nodejs/node/commit/824009296a)] - **doc**: add 2016-10-05 CTC meeting minutes (Josh Gavant) [#9326](https://github.com/nodejs/node/pull/9326) +* [[`1a701f1723`](https://github.com/nodejs/node/commit/1a701f1723)] - **doc**: add 2016-09-28 CTC meeting minutes (Josh Gavant) [#9325](https://github.com/nodejs/node/pull/9325) +* [[`e9c6aff113`](https://github.com/nodejs/node/commit/e9c6aff113)] - **doc**: add 2016-10-19 CTC meeting minutes (Josh Gavant) [#9193](https://github.com/nodejs/node/pull/9193) +* [[`c1e5e663a9`](https://github.com/nodejs/node/commit/c1e5e663a9)] - **doc**: improve header styling for API docs (Jeremiah Senkpiel) [#8811](https://github.com/nodejs/node/pull/8811) +* [[`279e30c3ee`](https://github.com/nodejs/node/commit/279e30c3ee)] - **doc**: add CTC meeting minutes for 2016-10-12 (Michael Dawson) [#9070](https://github.com/nodejs/node/pull/9070) +* [[`3b839d1855`](https://github.com/nodejs/node/commit/3b839d1855)] - **doc**: remove confusing reference in governance doc (Rich Trott) [#9073](https://github.com/nodejs/node/pull/9073) +* [[`e564cb6af4`](https://github.com/nodejs/node/commit/e564cb6af4)] - **doc**: add ctc-review label information (Rich Trott) [#9072](https://github.com/nodejs/node/pull/9072) +* [[`68ccc7a512`](https://github.com/nodejs/node/commit/68ccc7a512)] - **doc**: update reference to list hash algorithms in crypto.md (scott stern) [#9043](https://github.com/nodejs/node/pull/9043) +* [[`132425a058`](https://github.com/nodejs/node/commit/132425a058)] - **doc**: specify that errno is a number, not a string (John Vilk) [#9007](https://github.com/nodejs/node/pull/9007) +* [[`695ee1e77b`](https://github.com/nodejs/node/commit/695ee1e77b)] - **doc**: highlight deprecated API in ToC (Ilya Frolov) [#7189](https://github.com/nodejs/node/pull/7189) +* [[`4f8bf1bcf8`](https://github.com/nodejs/node/commit/4f8bf1bcf8)] - **doc**: explains why Reviewed-By is added in PRs (jessicaquynh) [#9044](https://github.com/nodejs/node/pull/9044) +* 
[[`af645a0553`](https://github.com/nodejs/node/commit/af645a0553)] - **doc**: explain why GitHub merge button is not used (jessicaquynh) [#9044](https://github.com/nodejs/node/pull/9044) +* [[`f472c09e90`](https://github.com/nodejs/node/commit/f472c09e90)] - **doc**: reference signal(7) for the list of signals (Emanuele DelBono) [#9323](https://github.com/nodejs/node/pull/9323) +* [[`88079817c2`](https://github.com/nodejs/node/commit/88079817c2)] - **doc**: fix typo in http.md (anu0012) [#9144](https://github.com/nodejs/node/pull/9144) +* [[`9f0ef5a4f2`](https://github.com/nodejs/node/commit/9f0ef5a4f2)] - **doc**: fix heading type for v4.6.2 changelog (Myles Borins) [#9515](https://github.com/nodejs/node/pull/9515) +* [[`f6f0b387ea`](https://github.com/nodejs/node/commit/f6f0b387ea)] - **events**: pass the original listener added by once (DavidCai) [#6394](https://github.com/nodejs/node/pull/6394) +* [[`02e6c84de2`](https://github.com/nodejs/node/commit/02e6c84de2)] - **gitignore**: ignore all tap files (Johan Bergström) [#9262](https://github.com/nodejs/node/pull/9262) +* [[`a7ae8876f9`](https://github.com/nodejs/node/commit/a7ae8876f9)] - **governance**: expand use of CTC issue tracker (Rich Trott) [#8945](https://github.com/nodejs/node/pull/8945) +* [[`36abbbe736`](https://github.com/nodejs/node/commit/36abbbe736)] - **gtest**: output tap comments as yamlish (Johan Bergström) [#9262](https://github.com/nodejs/node/pull/9262) +* [[`50a4471aff`](https://github.com/nodejs/node/commit/50a4471aff)] - **http**: fix connection upgrade checks (Brian White) [#8238](https://github.com/nodejs/node/pull/8238) +* [[`c94482b167`](https://github.com/nodejs/node/commit/c94482b167)] - **(SEMVER-MINOR)** **http**: 451 status code "Unavailable For Legal Reasons" (Max Barinov) [#4377](https://github.com/nodejs/node/pull/4377) +* [[`12da2581a8`](https://github.com/nodejs/node/commit/12da2581a8)] - **https**: fix memory leak with https.request() (Ilkka Myller) [#8647](https://github.com/nodejs/node/pull/8647) +* [[`3b448a7f12`](https://github.com/nodejs/node/commit/3b448a7f12)] - **lib**: changed var to const in linkedlist (Adri Van Houdt) [#8609](https://github.com/nodejs/node/pull/8609) +* [[`a3a184d40a`](https://github.com/nodejs/node/commit/a3a184d40a)] - **lib**: fix TypeError in v8-polyfill (Wyatt Preul) [#8863](https://github.com/nodejs/node/pull/8863) +* [[`423846053b`](https://github.com/nodejs/node/commit/423846053b)] - **lib**: remove let from for loops (Myles Borins) [#8873](https://github.com/nodejs/node/pull/8873) +* [[`9a192a9683`](https://github.com/nodejs/node/commit/9a192a9683)] - **net**: fix ambiguity in EOF handling (Fedor Indutny) [#9066](https://github.com/nodejs/node/pull/9066) +* [[`62e83b363e`](https://github.com/nodejs/node/commit/62e83b363e)] - **src**: Malloc/Calloc size 0 returns non-null pointer (Rich Trott) [#8572](https://github.com/nodejs/node/pull/8572) +* [[`51e09d00c4`](https://github.com/nodejs/node/commit/51e09d00c4)] - **src**: normalize malloc, realloc (Michael Dawson) [#7564](https://github.com/nodejs/node/pull/7564) +* [[`3b5cedebd1`](https://github.com/nodejs/node/commit/3b5cedebd1)] - **src**: renaming ares_task struct to node_ares_task (Daniel Bevenius) [#7345](https://github.com/nodejs/node/pull/7345) +* [[`e5d2a95d68`](https://github.com/nodejs/node/commit/e5d2a95d68)] - **src**: remove out-of-date TODO comment (Daniel Bevenius) [#9000](https://github.com/nodejs/node/pull/9000) +* [[`b4353e9017`](https://github.com/nodejs/node/commit/b4353e9017)] - **src**: 
fix typo in #endif comment (Juan Andres Andrango) [#8989](https://github.com/nodejs/node/pull/8989) +* [[`f0192ec195`](https://github.com/nodejs/node/commit/f0192ec195)] - **src**: don't abort when c-ares initialization fails (Ben Noordhuis) [#8710](https://github.com/nodejs/node/pull/8710) +* [[`f669a08b76`](https://github.com/nodejs/node/commit/f669a08b76)] - **src**: fix typo rval to value (Miguel Angel Asencio Hurtado) [#9023](https://github.com/nodejs/node/pull/9023) +* [[`9b9762ccec`](https://github.com/nodejs/node/commit/9b9762ccec)] - **streams**: fix regression in `unpipe()` (Anna Henningsen) [#9171](https://github.com/nodejs/node/pull/9171) +* [[`cc36a63205`](https://github.com/nodejs/node/commit/cc36a63205)] - **test**: remove watchdog in test-debug-signal-cluster (Rich Trott) [#9476](https://github.com/nodejs/node/pull/9476) +* [[`9144d373ba`](https://github.com/nodejs/node/commit/9144d373ba)] - **test**: cleanup test-dgram-error-message-address (Michael Macherey) [#8938](https://github.com/nodejs/node/pull/8938) +* [[`96bdfae041`](https://github.com/nodejs/node/commit/96bdfae041)] - **test**: improve test-debugger-util-regression (Santiago Gimeno) [#9490](https://github.com/nodejs/node/pull/9490) +* [[`2c758861c0`](https://github.com/nodejs/node/commit/2c758861c0)] - **test**: move timer-dependent test to sequential (Rich Trott) [#9431](https://github.com/nodejs/node/pull/9431) +* [[`d9955fbb17`](https://github.com/nodejs/node/commit/d9955fbb17)] - **test**: add test for HTTP client "aborted" event (Kyle E. Mitchell) [#7376](https://github.com/nodejs/node/pull/7376) +* [[`b0476c5590`](https://github.com/nodejs/node/commit/b0476c5590)] - **test**: fix flaky test-fs-watch-recursive on OS X (Rich Trott) [#9303](https://github.com/nodejs/node/pull/9303) +* [[`bcd156f4ab`](https://github.com/nodejs/node/commit/bcd156f4ab)] - **test**: refactor test-async-wrap-check-providers (Gerges Beshay) [#9297](https://github.com/nodejs/node/pull/9297) +* [[`9d5e7f5c85`](https://github.com/nodejs/node/commit/9d5e7f5c85)] - **test**: use strict assertions in module loader test (Ben Noordhuis) [#9263](https://github.com/nodejs/node/pull/9263) +* [[`6d742b3fdd`](https://github.com/nodejs/node/commit/6d742b3fdd)] - **test**: remove err timer from test-http-set-timeout (BethGriggs) [#9264](https://github.com/nodejs/node/pull/9264) +* [[`51b251d8eb`](https://github.com/nodejs/node/commit/51b251d8eb)] - **test**: add coverage for spawnSync() killSignal (cjihrig) [#8960](https://github.com/nodejs/node/pull/8960) +* [[`fafffd4f99`](https://github.com/nodejs/node/commit/fafffd4f99)] - **test**: fix test-child-process-fork-regr-gh-2847 (Santiago Gimeno) [#8954](https://github.com/nodejs/node/pull/8954) +* [[`a2621a25e5`](https://github.com/nodejs/node/commit/a2621a25e5)] - **test**: remove FIXME pummel/test-tls-securepair-client (Alfred Cepeda) [#8757](https://github.com/nodejs/node/pull/8757) +* [[`747013bc39`](https://github.com/nodejs/node/commit/747013bc39)] - **test**: output tap13 instead of almost-tap (Johan Bergström) [#9262](https://github.com/nodejs/node/pull/9262) +* [[`790406661d`](https://github.com/nodejs/node/commit/790406661d)] - **test**: refactor test-net-server-max-connections (Rich Trott) [#8931](https://github.com/nodejs/node/pull/8931) +* [[`347547a97e`](https://github.com/nodejs/node/commit/347547a97e)] - **test**: expand test coverage for url.js (Junshu Okamoto) [#8859](https://github.com/nodejs/node/pull/8859) +* [[`cec5e36df7`](https://github.com/nodejs/node/commit/cec5e36df7)] - 
**test**: fix test-cluster-worker-init.js flakyness (Ilkka Myller) [#8703](https://github.com/nodejs/node/pull/8703) +* [[`b3fccc2536`](https://github.com/nodejs/node/commit/b3fccc2536)] - **test**: enable cyrillic punycode test case (Ben Noordhuis) [#8695](https://github.com/nodejs/node/pull/8695) +* [[`03f703177f`](https://github.com/nodejs/node/commit/03f703177f)] - **test**: remove call to `net.Socket.resume()` (Alfred Cepeda) [#8679](https://github.com/nodejs/node/pull/8679) +* [[`527db40932`](https://github.com/nodejs/node/commit/527db40932)] - **test**: add coverage for execFileSync() errors (cjihrig) [#9211](https://github.com/nodejs/node/pull/9211) +* [[`40ef23969d`](https://github.com/nodejs/node/commit/40ef23969d)] - **test**: writable stream needDrain state (Italo A. Casas) [#8799](https://github.com/nodejs/node/pull/8799) +* [[`ba4a3ede56`](https://github.com/nodejs/node/commit/ba4a3ede56)] - **test**: writable stream ending state (Italo A. Casas) [#8707](https://github.com/nodejs/node/pull/8707) +* [[`80a26c7540`](https://github.com/nodejs/node/commit/80a26c7540)] - **test**: writable stream finished state (Italo A. Casas) [#8791](https://github.com/nodejs/node/pull/8791) +* [[`a64af39c83`](https://github.com/nodejs/node/commit/a64af39c83)] - **test**: remove duplicate required module (Rich Trott) [#9169](https://github.com/nodejs/node/pull/9169) +* [[`a038fcc307`](https://github.com/nodejs/node/commit/a038fcc307)] - **test**: add regression test for instanceof (Franziska Hinkelmann) [#9178](https://github.com/nodejs/node/pull/9178) +* [[`bd99b2d4e4`](https://github.com/nodejs/node/commit/bd99b2d4e4)] - **test**: checking if error constructor is assert.AssertionError (larissayvette) [#9119](https://github.com/nodejs/node/pull/9119) +* [[`4a6bd8683f`](https://github.com/nodejs/node/commit/4a6bd8683f)] - **test**: fix flaky test-child-process-fork-dgram (Rich Trott) [#9098](https://github.com/nodejs/node/pull/9098) +* [[`d9c33646e6`](https://github.com/nodejs/node/commit/d9c33646e6)] - **test**: add regression test for `unpipe()` (Niels Nielsen) [#9171](https://github.com/nodejs/node/pull/9171) +* [[`f9b24f42ba`](https://github.com/nodejs/node/commit/f9b24f42ba)] - **test**: use npm sandbox in test-npm-install (João Reis) [#9079](https://github.com/nodejs/node/pull/9079) +* [[`54c38eb22e`](https://github.com/nodejs/node/commit/54c38eb22e)] - **tickprocessor**: apply c++filt manually on mac (Fedor Indutny) [#8480](https://github.com/nodejs/node/pull/8480) +* [[`bf25994308`](https://github.com/nodejs/node/commit/bf25994308)] - **tls**: fix leak of WriteWrap+TLSWrap combination (Fedor Indutny) [#9586](https://github.com/nodejs/node/pull/9586) +* [[`9049c1f6b6`](https://github.com/nodejs/node/commit/9049c1f6b6)] - **(SEMVER-MINOR)** **tls**: introduce `secureContext` for `tls.connect` (Fedor Indutny) [#4246](https://github.com/nodejs/node/pull/4246) +* [[`b1bd1c42c0`](https://github.com/nodejs/node/commit/b1bd1c42c0)] - **tools**: allow test.py to use full paths of tests (Francis Gulotta) [#9694](https://github.com/nodejs/node/pull/9694) +* [[`533ce48b6a`](https://github.com/nodejs/node/commit/533ce48b6a)] - **tools**: make --repeat work with -j in test.py (Rich Trott) [#9249](https://github.com/nodejs/node/pull/9249) +* [[`f9baa1119f`](https://github.com/nodejs/node/commit/f9baa1119f)] - **tools**: remove dangling eslint symlink (Sam Roberts) [#9299](https://github.com/nodejs/node/pull/9299) +* [[`c8dccf29dd`](https://github.com/nodejs/node/commit/c8dccf29dd)] - **tools**: avoid 
let in for loops (jessicaquynh) [#9049](https://github.com/nodejs/node/pull/9049) +* [[`620cdc5ce8`](https://github.com/nodejs/node/commit/620cdc5ce8)] - **tools**: fix release script on macOS 10.12 (Evan Lucas) [#8824](https://github.com/nodejs/node/pull/8824) +* [[`f18f3b61e3`](https://github.com/nodejs/node/commit/f18f3b61e3)] - **util**: use template strings (Alejandro Oviedo Garcia) [#9120](https://github.com/nodejs/node/pull/9120) +* [[`1dfb5b5a09`](https://github.com/nodejs/node/commit/1dfb5b5a09)] - **v8**: update make-v8.sh to use git (Jaideep Bajwa) [#9393](https://github.com/nodejs/node/pull/9393) +* [[`bdb6cf92c7`](https://github.com/nodejs/node/commit/bdb6cf92c7)] - **win,msi**: mark INSTALLDIR property as secure (João Reis) [#8795](https://github.com/nodejs/node/pull/8795) +* [[`9a02414a29`](https://github.com/nodejs/node/commit/9a02414a29)] - **zlib**: fix raw inflate with custom dictionary (Tarjei Husøy) + ## 2016-11-08, Version 4.6.2 'Argon' (LTS), @thealphanerd This LTS release comes with 219 commits. This includes 80 commits that are docs related, 58 commits that are test related, 20 commits that are build / tool related, and 9 commits that are updates to dependencies. -## Notable Changes +### Notable Changes * **build**: It is now possible to build the documentation from the release tarball (Anna Henningsen) [#8413](https://github.com/nodejs/node/pull/8413) * **buffer**: Buffer.alloc() will no longer incorrectly return a zero filled buffer when an encoding is passed (Teddy Katz) [#9238](https://github.com/nodejs/node/pull/9238) @@ -12,7 +147,7 @@ This LTS release comes with 219 commits. This includes 80 commits that are docs * **repl**: Enable tab completion for global properties (Lance Ball) [#7369](https://github.com/nodejs/node/pull/7369) * **url**: `url.format()` will now encode all `#` in `search` (Ilkka Myller) [#8072](https://github.com/nodejs/node/pull/8072) -## Commits +### Commits * [[`06a1c9bf80`](https://github.com/nodejs/node/commit/06a1c9bf80)] - **assert**: remove code that is never reached (Rich Trott) [#8132](https://github.com/nodejs/node/pull/8132) * [[`861e584d46`](https://github.com/nodejs/node/commit/861e584d46)] - **async_wrap**: add a missing case to test-async-wrap-throw-no-init (yorkie) [#8198](https://github.com/nodejs/node/pull/8198) diff --git a/COLLABORATOR_GUIDE.md b/COLLABORATOR_GUIDE.md index 2bab2e203145f0..214f262fdc61d5 100644 --- a/COLLABORATOR_GUIDE.md +++ b/COLLABORATOR_GUIDE.md @@ -84,7 +84,7 @@ continuous integration tests on the ### Involving the CTC Collaborators may opt to elevate pull requests or issues to the CTC for -discussion by assigning the ***ctc-agenda*** tag. This should be done +discussion by assigning the `ctc-review` label. This should be done where a pull request: - has a significant impact on the codebase, @@ -101,6 +101,8 @@ information regarding the change process: - A `Reviewed-By: Name ` line for yourself and any other Collaborators who have reviewed the change. + - Useful for @mentions / contact list if something goes wrong in the PR. + - Protects against the assumption that GitHub will be around forever. - A `PR-URL:` line that references the *full* GitHub URL of the original pull request being merged so it's easy to trace a commit back to the conversation that led up to that change. diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index a6c8aec642b81e..f9ae4ce624b819 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -90,6 +90,13 @@ $ git config --global user.name "J. 
Random User" $ git config --global user.email "j.random.user@example.com" ``` +Add and commit: + +```text +$ git add my/changed/files +$ git commit +``` + Writing good commit logs is important. A commit log should describe what changed and why. Follow these guidelines when writing one: @@ -191,10 +198,60 @@ $ git push origin my-branch Go to https://github.com/yourusername/node and select your branch. Click the 'Pull Request' button and fill out the form. -Pull requests are usually reviewed within a few days. If there are comments -to address, apply your changes in a separate commit and push that to your -branch. Post a comment in the pull request afterwards; GitHub does -not send out notifications when you add commits. +Pull requests are usually reviewed within a few days. + +### Step 7: Discuss and update + +You will probably get feedback or requests for changes to your Pull Request. +This is a big part of the submission process, so don't be disheartened! + +To make changes to an existing Pull Request, make the changes to your branch. +When you push that branch to your fork, GitHub will automatically update the +Pull Request. + +You can push more commits to your branch: + +```text +$ git add my/changed/files +$ git commit +$ git push origin my-branch +``` + +Or you can rebase against master: + +```text +$ git fetch --all +$ git rebase origin/master +$ git push --force-with-lease origin my-branch +``` + +Or you can amend the last commit (for example if you want to change the commit +log). + +```text +$ git add any/changed/files +$ git commit --amend +$ git push --force-with-lease origin my-branch +``` + +**Important:** The `git push --force-with-lease` command is one of the few ways +to delete history in git. Before you use it, make sure you understand the risks. +If in doubt, you can always ask for guidance in the Pull Request or on +[IRC in the #node-dev channel](https://webchat.freenode.net?channels=node-dev&uio=d4). + +Feel free to post a comment in the Pull Request to ping reviewers if you are +awaiting an answer on something. + + +### Step 8: Landing + +Once your Pull Request has been reviewed and approved by at least one Node.js +Collaborators (often by saying LGTM, or Looks Good To Me), and as long as +there is consensus (no objections from a Collaborator), a +Collaborator can merge the Pull Request . GitHub often shows the Pull Request as + `Closed` at this point, but don't worry. If you look at the branch you raised + your Pull Request against (probably `master`), you should see a commit with + your name on it. Congratulations and thanks for your contribution! ## Developer's Certificate of Origin 1.1 diff --git a/GOVERNANCE.md b/GOVERNANCE.md index d3ba8355e0fca8..417c16e04e7ae3 100644 --- a/GOVERNANCE.md +++ b/GOVERNANCE.md @@ -23,14 +23,11 @@ The [nodejs/node](https://github.com/nodejs/node) GitHub repository is maintained by the CTC and additional Collaborators who are added by the CTC on an ongoing basis. -Individuals making significant and valuable contributions are made -Collaborators and given commit-access to the project. These -individuals are identified by the CTC and their addition as -Collaborators is discussed during the weekly CTC meeting. +Individuals identified by the CTC as making significant and valuable +contributions are made Collaborators and given commit access to the project. 
_Note:_ If you make a significant contribution and are not considered -for commit-access, log an issue or contact a CTC member directly and it -will be brought up in the next CTC meeting. +for commit access, log an issue or contact a CTC member directly. Modifications of the contents of the nodejs/node repository are made on a collaborative basis. Anybody with a GitHub account may propose a @@ -39,16 +36,21 @@ Collaborators. All pull requests must be reviewed and accepted by a Collaborator with sufficient expertise who is able to take full responsibility for the change. In the case of pull requests proposed by an existing Collaborator, an additional Collaborator is required -for sign-off. Consensus should be sought if additional Collaborators -participate and there is disagreement around a particular -modification. See [Consensus Seeking Process](#consensus-seeking-process) below -for further detail on the consensus model used for governance. +for sign-off. -Collaborators may opt to elevate significant or controversial -modifications, or modifications that have not found consensus to the -CTC for discussion by assigning the ***ctc-agenda*** tag to a pull -request or issue. The CTC should serve as the final arbiter where -required. +If one or more Collaborators oppose a proposed change, then the change can not +be accepted unless: + +* Discussions and/or additional changes result in no Collaborators objecting to + the change. Previously-objecting Collaborators do not necessarily have to + sign-off on the change, but they should not be opposed to it. +* The change is escalated to the CTC and the CTC votes to approve the change. + This should be used only after other options (especially discussion among + the disagreeing Collaborators) have been exhausted. + +Collaborators may opt to elevate significant or controversial modifications to +the CTC by assigning the `ctc-review` label to a pull request or issue. The +CTC should serve as the final arbiter where required. For the current list of Collaborators, see the project [README.md](./README.md#current-project-team-members). @@ -103,7 +105,7 @@ members affiliated with the over-represented employer(s). Typical activities of a CTC member include: * attending the weekly meeting -* commenting on the weekly CTC meeting issue and issues labeled `ctc-agenda` +* commenting on the weekly CTC meeting issue and issues labeled `ctc-review` * participating in CTC email threads * volunteering for tasks that arise from CTC meetings and related discussions * other activities (beyond those typical of Collaborators) that facilitate the @@ -115,7 +117,8 @@ Collaborator activities as well. ### CTC Meetings The CTC meets weekly in a voice conference call. The meeting is run by a -designated moderator approved by the CTC. Each meeting is streamed on YouTube. +designated meeting chair approved by the CTC. Each meeting is streamed on +YouTube. Items are added to the CTC agenda which are considered contentious or are modifications of governance, contribution policy, CTC membership, @@ -125,21 +128,37 @@ The intention of the agenda is not to approve or review all patches. That should happen continuously on GitHub and be handled by the larger group of Collaborators. -Any community member or contributor can ask that something be added to -the next meeting's agenda by logging a GitHub issue. Any Collaborator, -CTC member or the moderator can add the item to the agenda by adding -the ***ctc-agenda*** tag to the issue. 
+Any community member or contributor can ask that something be reviewed +by the CTC by logging a GitHub issue. Any Collaborator, CTC member, or the +meeting chair can bring the issue to the CTC's attention by applying the +`ctc-review` label. If consensus-seeking among CTC members fails for a +particular issue, it may be added to the CTC meeting agenda by adding the +`ctc-agenda` label. -Prior to each CTC meeting, the moderator will share the agenda with +Prior to each CTC meeting, the meeting chair will share the agenda with members of the CTC. CTC members can also add items to the agenda at the -beginning of each meeting. The moderator and the CTC cannot veto or remove +beginning of each meeting. The meeting chair and the CTC cannot veto or remove items. The CTC may invite persons or representatives from certain projects to participate in a non-voting capacity. -The moderator is responsible for summarizing the discussion of each agenda item -and sending it as a pull request after the meeting. +The meeting chair is responsible for ensuring that minutes are taken and that a +pull request with the minutes is submitted after the meeting. + +Due to the challenges of scheduling a global meeting with participants in +several timezones, the CTC will seek to resolve as many agenda items as possible +outside of meetings using +[the CTC issue tracker](https://github.com/nodejs/CTC/issues). The process in +the issue tracker is: + +* A CTC member opens an issue explaining the proposal/issue and @-mentions + @nodejs/ctc. +* After 72 hours, if there are two or more `LGTM`s from other CTC members and no + explicit opposition from other CTC members, then the proposal is approved. +* If there are any CTC members objecting, then a conversation ensues until + either the proposal is dropped or the objecting members are persuaded. If + there is an extended impasse, a motion for a vote may be made. ## Consensus Seeking Process @@ -147,8 +166,8 @@ The CTC follows a [Consensus Seeking](http://en.wikipedia.org/wiki/Consensus-seeking_decision-making) decision making model. -When an agenda item has appeared to reach a consensus, the moderator will ask -"Does anyone object?" as a final call for dissent from the consensus. +When an agenda item has appeared to reach a consensus, the meeting chair will +ask "Does anyone object?" as a final call for dissent from the consensus. If an agenda item cannot reach a consensus, a CTC member can call for either a closing vote or a vote to table the issue to the next meeting. All votes diff --git a/Makefile b/Makefile index d67e89a2771035..594c91a2b952bc 100644 --- a/Makefile +++ b/Makefile @@ -71,11 +71,7 @@ out/Makefile: common.gypi deps/uv/uv.gyp deps/http_parser/http_parser.gyp deps/z $(PYTHON) tools/gyp_node.py -f make config.gypi: configure - if [ -f $@ ]; then - $(error Stale $@, please re-run ./configure) - else - $(error No $@, please run ./configure first) - fi + $(error Missing or stale $@, please run ./$<) install: all $(PYTHON) tools/install.py $@ '$(DESTDIR)' '$(PREFIX)' @@ -109,7 +105,7 @@ cctest: all @out/$(BUILDTYPE)/$@ v8: - tools/make-v8.sh v8 + tools/make-v8.sh $(MAKE) -C deps/v8 $(V8_ARCH).$(BUILDTYPE_LOWER) $(V8_BUILD_OPTIONS) test: | cctest # Depends on 'all'. 
@@ -196,6 +192,7 @@ test-ci-js: $(TEST_CI_ARGS) $(CI_JS_SUITES) test-ci: | build-addons + out/Release/cctest --gtest_output=tap:cctest.tap $(PYTHON) tools/test.py -p tap --logfile test.tap --mode=release --flaky-tests=$(FLAKY_TESTS) \ $(TEST_CI_ARGS) $(CI_NATIVE_SUITES) $(CI_JS_SUITES) @@ -287,7 +284,7 @@ out/doc/%: doc/% # check if ./node is actually set, else use user pre-installed binary gen-json = tools/doc/generate.js --format=json $< > $@ out/doc/api/%.json: doc/api/%.md - [ -e tools/doc/node_modules/js-yaml/package.json ] || \ + @[ -e tools/doc/node_modules/js-yaml/package.json ] || \ [ -e tools/eslint/node_modules/js-yaml/package.json ] || \ if [ -x $(NODE) ]; then \ cd tools/doc && ../../$(NODE) ../../$(NPM) install; \ @@ -299,7 +296,7 @@ out/doc/api/%.json: doc/api/%.md # check if ./node is actually set, else use user pre-installed binary gen-html = tools/doc/generate.js --node-version=$(FULLVERSION) --format=html --template=doc/template.html $< > $@ out/doc/api/%.html: doc/api/%.md - [ -e tools/doc/node_modules/js-yaml/package.json ] || \ + @[ -e tools/doc/node_modules/js-yaml/package.json ] || \ [ -e tools/eslint/node_modules/js-yaml/package.json ] || \ if [ -x $(NODE) ]; then \ cd tools/doc && ../../$(NODE) ../../$(NPM) install; \ diff --git a/README.md b/README.md index 42584e52ebcaac..ca908c1aa9312a 100644 --- a/README.md +++ b/README.md @@ -186,6 +186,8 @@ more information about the governance of the Node.js project, see **Shigeki Ohtsu** <ohtsu@iij.ad.jp> * [TheAlphaNerd](https://github.com/TheAlphaNerd) - **Myles Borins** <myles.borins@gmail.com> +* [thefourtheye](https://github.com/thefourtheye) - +**Sakthipriyan Vairamani** <thechargingvolcano@gmail.com> * [trevnorris](https://github.com/trevnorris) - **Trevor Norris** <trev.norris@gmail.com> * [Trott](https://github.com/Trott) - @@ -237,10 +239,12 @@ more information about the governance of the Node.js project, see **Ilkka Myller** <ilkka.myller@nodefield.com> * [isaacs](https://github.com/isaacs) - **Isaac Z. Schlueter** <i@izs.me> +* [italoacasas](https://github.com/italoacasas) +**Italo A. Casas** <me@italoacasas.com> * [iWuzHere](https://github.com/iWuzHere) - **Imran Iqbal** <imran@imraniqbal.org> * [JacksonTian](https://github.com/JacksonTian) - -**Jackson Tian** <shvyo1987@gmail.com> +**Jackson Tian** <shyvo1987@gmail.com> * [jbergstroem](https://github.com/jbergstroem) - **Johan Bergström** <bugs@bergstroem.nu> * [jhamhader](https://github.com/jhamhader) - @@ -319,8 +323,6 @@ more information about the governance of the Node.js project, see **Michaël Zasso** <targos@protonmail.com> * [tellnes](https://github.com/tellnes) - **Christian Tellnes** <christian@tellnes.no> -* [thefourtheye](https://github.com/thefourtheye) - -**Sakthipriyan Vairamani** <thechargingvolcano@gmail.com> * [thekemkid](https://github.com/thekemkid) - **Glen Keane** <glenkeane.94@gmail.com> * [thlorenz](https://github.com/thlorenz) - diff --git a/WORKING_GROUPS.md b/WORKING_GROUPS.md index 88f44fbe2d8a4a..25414cd3392a82 100644 --- a/WORKING_GROUPS.md +++ b/WORKING_GROUPS.md @@ -21,7 +21,7 @@ back in to the CTC. * [Website](#website) * [Streams](#streams) * [Build](#build) -* [Tracing](#tracing) +* [Diagnostics](#diagnostics) * [i18n](#i18n) * [Evangelism](#evangelism) * [Roadmap](#roadmap) @@ -81,17 +81,22 @@ Its responsibilities are: * Creates and manages build-containers. 
-### [Tracing](https://github.com/nodejs/tracing-wg) +### [Diagnostics](https://github.com/nodejs/diagnostics) -The tracing working group's purpose is to increase the -transparency of software written in Node.js. +The diagnostics working group's purpose is to surface a set of comprehensive, +documented, and extensible diagnostic interfaces for use by +Node.js tools and JavaScript VMs. Its responsibilities are: -* Collaboration with V8 to integrate with `trace_event`. -* Maintenance and iteration on AsyncWrap. -* Maintenance and improvements to system tracing support (DTrace, LTTng, etc.) -* Documentation of tracing and debugging techniques. -* Fostering a tracing and debugging ecosystem. + +* Collaborate with V8 to integrate `v8_inspector` into Node.js. +* Collaborate with V8 to integrate `trace_event` into Node.js. +* Collaborate with Core to refine `async_wrap` and `async_hooks`. +* Maintain and improve OS trace system integration (e.g. ETW, LTTNG, dtrace). +* Document diagnostic capabilities and APIs in Node.js and its components. +* Explore opportunities and gaps, discuss feature requests, and address + conflicts in Node.js diagnostics. +* Foster an ecosystem of diagnostics tools for Node.js. ### i18n diff --git a/benchmark/es/map-bench.js b/benchmark/es/map-bench.js new file mode 100644 index 00000000000000..574da25d53f2f2 --- /dev/null +++ b/benchmark/es/map-bench.js @@ -0,0 +1,96 @@ +'use strict'; + +const common = require('../common.js'); +const assert = require('assert'); + +const bench = common.createBenchmark(main, { + method: ['object', 'nullProtoObject', 'fakeMap', 'map'], + millions: [1] +}); + +function runObject(n) { + const m = {}; + var i = 0; + bench.start(); + for (; i < n; i++) { + m['i' + i] = i; + m['s' + i] = String(i); + assert.equal(m['i' + i], m['s' + i]); + m['i' + i] = undefined; + m['s' + i] = undefined; + } + bench.end(n / 1e6); +} + +function runNullProtoObject(n) { + const m = Object.create(null); + var i = 0; + bench.start(); + for (; i < n; i++) { + m['i' + i] = i; + m['s' + i] = String(i); + assert.equal(m['i' + i], m['s' + i]); + m['i' + i] = undefined; + m['s' + i] = undefined; + } + bench.end(n / 1e6); +} + +function fakeMap() { + const m = {}; + return { + get(key) { return m['$' + key]; }, + set(key, val) { m['$' + key] = val; }, + get size() { return Object.keys(m).length; }, + has(key) { return Object.prototype.hasOwnProperty.call(m, '$' + key); } + }; +} + +function runFakeMap(n) { + const m = fakeMap(); + var i = 0; + bench.start(); + for (; i < n; i++) { + m.set('i' + i, i); + m.set('s' + i, String(i)); + assert.equal(m.get('i' + i), m.get('s' + i)); + m.set('i' + i, undefined); + m.set('s' + i, undefined); + } + bench.end(n / 1e6); +} + +function runMap(n) { + const m = new Map(); + var i = 0; + bench.start(); + for (; i < n; i++) { + m.set('i' + i, i); + m.set('s' + i, String(i)); + assert.equal(m.get('i' + i), m.get('s' + i)); + m.set('i' + i, undefined); + m.set('s' + i, undefined); + } + bench.end(n / 1e6); +} + +function main(conf) { + const n = +conf.millions * 1e6; + + switch (conf.method) { + case 'object': + runObject(n); + break; + case 'nullProtoObject': + runNullProtoObject(n); + break; + case 'fakeMap': + runFakeMap(n); + break; + case 'map': + runMap(n); + break; + default: + throw new Error('Unexpected method'); + } +} diff --git a/common.gypi b/common.gypi index 258cd1603ef54d..4aa8f919fb6d2f 100644 --- a/common.gypi +++ b/common.gypi @@ -11,6 +11,12 @@ 'msvs_multi_core_compile': '0', # we do enable multicore compiles, but not 
using the V8 way 'python%': 'python', + 'node_shared%': 'false', + 'force_dynamic_crt%': 0, + 'node_use_v8_platform%': 'true', + 'node_use_bundled_v8%': 'true', + 'node_module_version%': '', + 'node_tag%': '', 'uv_library%': 'static_library', @@ -73,6 +79,20 @@ ['OS == "android"', { 'cflags': [ '-fPIE' ], 'ldflags': [ '-fPIE', '-pie' ] + }], + ['node_shared=="true"', { + 'msvs_settings': { + 'VCCLCompilerTool': { + 'RuntimeLibrary': 3 # MultiThreadedDebugDLL (/MDd) + } + } + }], + ['node_shared=="false"', { + 'msvs_settings': { + 'VCCLCompilerTool': { + 'RuntimeLibrary': 1 # MultiThreadedDebug (/MTd) + } + } }] ], 'msvs_settings': { @@ -110,11 +130,24 @@ ['OS == "android"', { 'cflags': [ '-fPIE' ], 'ldflags': [ '-fPIE', '-pie' ] + }], + ['node_shared=="true"', { + 'msvs_settings': { + 'VCCLCompilerTool': { + 'RuntimeLibrary': 2 # MultiThreadedDLL (/MD) + } + } + }], + ['node_shared=="false"', { + 'msvs_settings': { + 'VCCLCompilerTool': { + 'RuntimeLibrary': 0 # MultiThreaded (/MT) + } + } }] ], 'msvs_settings': { 'VCCLCompilerTool': { - 'RuntimeLibrary': 0, # static release 'Optimization': 3, # /Ox, full optimization 'FavorSizeOrSpeed': 1, # /Ot, favour speed over size 'InlineFunctionExpansion': 2, # /Ob2, inline anything eligible @@ -243,6 +276,9 @@ ['_type=="static_library" and OS=="solaris"', { 'standalone_static_library': 1, }], + ['OS=="openbsd"', { + 'ldflags': [ '-Wl,-z,wxneeded' ], + }], ], 'conditions': [ [ 'target_arch=="ia32"', { @@ -291,6 +327,9 @@ ], 'ldflags!': [ '-rdynamic' ], }], + [ 'node_shared=="true"', { + 'cflags': [ '-fPIC' ], + }] ], }], ['OS=="android"', { diff --git a/configure b/configure index 27ab9a54a99981..02e926d4b3cf29 100755 --- a/configure +++ b/configure @@ -24,6 +24,10 @@ from gyp.common import GetFlavor sys.path.insert(0, os.path.join(root_dir, 'tools', 'configure.d')) import nodedownload +# imports in tools/ +sys.path.insert(0, os.path.join(root_dir, 'tools')) +import getmoduleversion + # parse our options parser = optparse.OptionParser() @@ -385,6 +389,26 @@ parser.add_option('--enable-static', dest='enable_static', help='build as static library') +parser.add_option('--shared', + action='store_true', + dest='shared', + help='compile shared library for embedding node in another project. ' + + '(This mode is not officially supported for regular applications)') + +parser.add_option('--without-v8-platform', + action='store_true', + dest='without_v8_platform', + default=False, + help='do not initialize v8 platform during node.js startup. ' + + '(This mode is not officially supported for regular applications)') + +parser.add_option('--without-bundled-v8', + action='store_true', + dest='without_bundled_v8', + default=False, + help='do not use V8 includes from the bundled deps folder. ' + + '(This mode is not officially supported for regular applications)') + (options, args) = parser.parse_args() # Expand ~ in the install prefix now, it gets written to multiple files. 
@@ -774,7 +798,14 @@ def configure_node(o): if options.enable_static: o['variables']['node_target_type'] = 'static_library' - o['variables']['node_module_version'] = 46 + o['variables']['node_shared'] = b(options.shared) + o['variables']['node_use_v8_platform'] = b(not options.without_v8_platform) + o['variables']['node_use_bundled_v8'] = b(not options.without_bundled_v8) + node_module_version = getmoduleversion.get_version() + shlib_suffix = '%s.dylib' if sys.platform == 'darwin' else 'so.%s' + shlib_suffix %= node_module_version + o['variables']['node_module_version'] = int(node_module_version) + o['variables']['shlib_suffix'] = shlib_suffix if options.linked_module: o['variables']['library_files'] = options.linked_module @@ -820,8 +851,7 @@ def configure_v8(o): o['variables']['v8_random_seed'] = 0 # Use a random seed for hash tables. o['variables']['v8_use_snapshot'] = 'false' if options.without_snapshot else 'true' o['variables']['node_enable_d8'] = b(options.enable_d8) - - + o['variables']['force_dynamic_crt'] = 1 if options.shared else 0 def configure_openssl(o): o['variables']['node_use_openssl'] = b(not options.without_ssl) o['variables']['node_shared_openssl'] = b(options.shared_openssl) diff --git a/deps/cares/include/ares.h b/deps/cares/include/ares.h index f9abe854d5846a..a4accb1583b722 100644 --- a/deps/cares/include/ares.h +++ b/deps/cares/include/ares.h @@ -39,7 +39,7 @@ typedef unsigned ares_socklen_t; require it! */ #if defined(_AIX) || defined(__NOVELL_LIBC__) || defined(__NetBSD__) || \ defined(__minix) || defined(__SYMBIAN32__) || defined(__INTEGRITY) || \ - defined(ANDROID) || defined(__ANDROID__) + defined(ANDROID) || defined(__ANDROID__) || defined(__OpenBSD__) #include #endif #if (defined(NETWARE) && !defined(__NOVELL_LIBC__)) diff --git a/deps/gtest/src/gtest.cc b/deps/gtest/src/gtest.cc index 7fd5f298dc04ed..87b67a2230fe55 100644 --- a/deps/gtest/src/gtest.cc +++ b/deps/gtest/src/gtest.cc @@ -3498,6 +3498,127 @@ std::string XmlUnitTestResultPrinter::RemoveInvalidXmlCharacters( // // +class TapUnitTestResultPrinter : public EmptyTestEventListener { + public: + TapUnitTestResultPrinter(); + explicit TapUnitTestResultPrinter(const char* output_file); + virtual void OnTestIterationEnd(const UnitTest& unit_test, int iteration); + + private: + static void PrintTapUnitTest(::std::ostream* stream, + const UnitTest& unit_test); + static void PrintTapTestCase(int* count, + ::std::ostream* stream, + const TestCase& test_case); + static void OutputTapTestInfo(int* count, + ::std::ostream* stream, + const char* test_case_name, + const TestInfo& test_info); + static void OutputTapComment(::std::ostream* stream, const char* comment); + + const std::string output_file_; + GTEST_DISALLOW_COPY_AND_ASSIGN_(TapUnitTestResultPrinter); +}; + +TapUnitTestResultPrinter::TapUnitTestResultPrinter() {} + +TapUnitTestResultPrinter::TapUnitTestResultPrinter(const char* output_file) + : output_file_(output_file) { + if (output_file_.c_str() == NULL || output_file_.empty()) { + fprintf(stderr, "TAP output file may not be null\n"); + fflush(stderr); + exit(EXIT_FAILURE); + } +} + +void TapUnitTestResultPrinter::OnTestIterationEnd(const UnitTest& unit_test, + int /*iteration*/) { + FILE* tapout = stdout; + + if (!output_file_.empty()) { + FilePath output_file(output_file_); + FilePath output_dir(output_file.RemoveFileName()); + + tapout = NULL; + if (output_dir.CreateDirectoriesRecursively()) + tapout = posix::FOpen(output_file_.c_str(), "w"); + + if (tapout == NULL) { + fprintf(stderr, "Unable 
to open file \"%s\"\n", output_file_.c_str()); + fflush(stderr); + exit(EXIT_FAILURE); + } + } + + std::stringstream stream; + PrintTapUnitTest(&stream, unit_test); + fprintf(tapout, "%s", StringStreamToString(&stream).c_str()); + fflush(tapout); + + if (tapout != stdout) + fclose(tapout); +} + +void TapUnitTestResultPrinter::PrintTapUnitTest(std::ostream* stream, + const UnitTest& unit_test) { + *stream << "TAP version 13\n"; + *stream << "1.." << unit_test.reportable_test_count() << "\n"; + + int count = 1; + for (int i = 0; i < unit_test.total_test_case_count(); ++i) { + const TestCase& test_case = *unit_test.GetTestCase(i); + if (test_case.reportable_test_count() > 0) + PrintTapTestCase(&count, stream, test_case); + } + + *stream << "# failures: " << unit_test.failed_test_count() << "\n"; +} + +void TapUnitTestResultPrinter::PrintTapTestCase(int* count, + std::ostream* stream, + const TestCase& test_case) { + for (int i = 0; i < test_case.total_test_count(); ++i) { + const TestInfo& test_info = *test_case.GetTestInfo(i); + if (test_info.is_reportable()) + OutputTapTestInfo(count, stream, test_case.name(), test_info); + } +} + +void TapUnitTestResultPrinter::OutputTapTestInfo(int* count, + ::std::ostream* stream, + const char* test_case_name, + const TestInfo& test_info) { + const TestResult& result = *test_info.result(); + const char* status = result.Passed() ? "ok" : "not ok"; + + *stream << status << " " << *count << " - " << + test_case_name << "." << test_info.name() << "\n"; + *stream << " ---\n"; + *stream << " duration_ms: " << + FormatTimeInMillisAsSeconds(result.elapsed_time()) << "\n"; + + if (result.total_part_count() > 0) { + *stream << " stack: |-\n"; + for (int i = 0; i < result.total_part_count(); ++i) { + const TestPartResult& part = result.GetTestPartResult(i); + OutputTapComment(stream, part.message()); + } + } + *stream << " ...\n"; + *count += 1; +} + +void TapUnitTestResultPrinter::OutputTapComment(::std::ostream* stream, + const char* comment) { + const char* start = comment; + while (const char* end = strchr(start, '\n')) { + *stream << " " << std::string(start, end) << "\n"; + start = end + 1; + } + if (*start) + *stream << " " << start << "\n"; +} + // Formats the given time in milliseconds as seconds. 
std::string FormatTimeInMillisAsSeconds(TimeInMillis ms) { ::std::stringstream ss; @@ -4365,6 +4486,9 @@ void UnitTestImpl::ConfigureXmlOutput() { if (output_format == "xml") { listeners()->SetDefaultXmlGenerator(new XmlUnitTestResultPrinter( UnitTestOptions::GetAbsolutePathToOutputFile().c_str())); + } else if (output_format == "tap") { + listeners()->SetDefaultXmlGenerator(new TapUnitTestResultPrinter( + UnitTestOptions::GetAbsolutePathToOutputFile().c_str())); } else if (output_format != "") { printf("WARNING: unrecognized output format \"%s\" ignored.\n", output_format.c_str()); diff --git a/deps/gtest/src/gtest_main.cc b/deps/gtest/src/gtest_main.cc index f3028225523306..4cf03e59bac5df 100644 --- a/deps/gtest/src/gtest_main.cc +++ b/deps/gtest/src/gtest_main.cc @@ -32,7 +32,6 @@ #include "gtest/gtest.h" GTEST_API_ int main(int argc, char **argv) { - printf("Running main() from gtest_main.cc\n"); testing::InitGoogleTest(&argc, argv); return RUN_ALL_TESTS(); } diff --git a/deps/v8/build/toolchain.gypi b/deps/v8/build/toolchain.gypi index 4dbf42bfe3795c..d484ac9e118ab7 100644 --- a/deps/v8/build/toolchain.gypi +++ b/deps/v8/build/toolchain.gypi @@ -39,6 +39,7 @@ 'ubsan_vptr%': 0, 'v8_target_arch%': '<(target_arch)', 'v8_host_byteorder%': ' ConstantDeque; typedef std::map, - zone_allocator > > ConstantMap; + zone_allocator > > ConstantMap; typedef ZoneDeque InstructionDeque; typedef ZoneDeque ReferenceMapDeque; diff --git a/deps/v8/src/compiler/js-type-feedback.h b/deps/v8/src/compiler/js-type-feedback.h index 84060f80964c7b..6b8ff5adb1069c 100644 --- a/deps/v8/src/compiler/js-type-feedback.h +++ b/deps/v8/src/compiler/js-type-feedback.h @@ -33,9 +33,10 @@ class JSTypeFeedbackTable : public ZoneObject { private: friend class JSTypeFeedbackSpecializer; typedef std::map, - zone_allocator > TypeFeedbackIdMap; + zone_allocator > > + TypeFeedbackIdMap; typedef std::map, - zone_allocator > + zone_allocator > > FeedbackVectorICSlotMap; TypeFeedbackIdMap type_feedback_id_map_; diff --git a/deps/v8/src/zone-containers.h b/deps/v8/src/zone-containers.h index 8daf0dd657f232..79b168c37eab7a 100644 --- a/deps/v8/src/zone-containers.h +++ b/deps/v8/src/zone-containers.h @@ -114,12 +114,12 @@ class ZoneSet : public std::set> { // a zone allocator. template > class ZoneMap - : public std::map>> { + : public std::map>> { public: // Constructs an empty map. explicit ZoneMap(Zone* zone) - : std::map>>( - Compare(), zone_allocator>(zone)) {} + : std::map>>( + Compare(), zone_allocator>(zone)) {} }; diff --git a/doc/api/child_process.md b/doc/api/child_process.md index 8675fb3f40fe33..b596b2547d3f6d 100644 --- a/doc/api/child_process.md +++ b/doc/api/child_process.md @@ -551,7 +551,7 @@ added: v0.11.12 * `cwd` {String} Current working directory of the child process * `input` {String|Buffer} The value which will be passed as stdin to the spawned process - supplying this value will override `stdio[0]` - * `stdio` {Array} Child's stdio configuration. (Default: 'pipe') + * `stdio` {String|Array} Child's stdio configuration. (Default: 'pipe') - `stderr` by default will be output to the parent process' stderr unless `stdio` is specified * `env` {Object} Environment key-value pairs @@ -586,7 +586,7 @@ added: v0.11.12 * `cwd` {String} Current working directory of the child process * `input` {String|Buffer} The value which will be passed as stdin to the spawned process - supplying this value will override `stdio[0]` - * `stdio` {Array} Child's stdio configuration. 
(Default: 'pipe') + * `stdio` {String|Array} Child's stdio configuration. (Default: 'pipe') - `stderr` by default will be output to the parent process' stderr unless `stdio` is specified * `env` {Object} Environment key-value pairs @@ -626,7 +626,7 @@ added: v0.11.12 * `cwd` {String} Current working directory of the child process * `input` {String|Buffer} The value which will be passed as stdin to the spawned process - supplying this value will override `stdio[0]` - * `stdio` {Array} Child's stdio configuration. + * `stdio` {String|Array} Child's stdio configuration. (Default: 'pipe') * `env` {Object} Environment key-value pairs * `uid` {Number} Sets the user identity of the process. (See setuid(2).) * `gid` {Number} Sets the group identity of the process. (See setgid(2).) diff --git a/doc/api/crypto.md b/doc/api/crypto.md index 49cbef787ca0f3..177d1bd107597c 100644 --- a/doc/api/crypto.md +++ b/doc/api/crypto.md @@ -1209,18 +1209,18 @@ input.on('readable', () => { added: v0.1.92 --> -Creates and returns a `Sign` object that uses the given `algorithm`. On -recent OpenSSL releases, `openssl list-public-key-algorithms` will -display the available signing algorithms. One example is `'RSA-SHA256'`. +Creates and returns a `Sign` object that uses the given `algorithm`. +Use [`crypto.getHashes()`][] to obtain an array of names of the available +signing algorithms. ### crypto.createVerify(algorithm) -Creates and returns a `Verify` object that uses the given algorithm. On -recent OpenSSL releases, `openssl list-public-key-algorithms` will -display the available signing algorithms. One example is `'RSA-SHA256'`. +Creates and returns a `Verify` object that uses the given algorithm. +Use [`crypto.getHashes()`][] to obtain an array of names of the available +signing algorithms. ### crypto.getCiphers() -Returns an array with the names of the supported hash algorithms. +Returns an array of the names of the supported hash algorithms, +such as `RSA-SHA256`. Example: diff --git a/doc/api/debugger.md b/doc/api/debugger.md index c47e067b104bb2..6ddaf9c92a06a0 100644 --- a/doc/api/debugger.md +++ b/doc/api/debugger.md @@ -116,8 +116,9 @@ on line 1 It is also possible to set a breakpoint in a file (module) that isn't loaded yet: + ``` -$ ./node debug test/fixtures/break-in-module/main.js +$ node debug test/fixtures/break-in-module/main.js < debugger listening on port 5858 connecting to port 5858... ok break in test/fixtures/break-in-module/main.js:1 diff --git a/doc/api/errors.md b/doc/api/errors.md index 8f5a0abf3321df..5f451880f14332 100644 --- a/doc/api/errors.md +++ b/doc/api/errors.md @@ -449,13 +449,15 @@ added properties. ### Class: System Error #### error.code -#### error.errno Returns a string representing the error code, which is always `E` followed by a sequence of capital letters, and may be referenced in `man 2 intro`. -The properties `error.code` and `error.errno` are aliases of one another and -return the same value. +#### error.errno + +Returns a number corresponding to the **negated** error code, which may be +referenced in `man 2 intro`. For example, an `ENOENT` error has an `errno` of +`-2` because the error code for `ENOENT` is `2`. #### error.syscall diff --git a/doc/api/http.md b/doc/api/http.md index 3aed0315c2e498..17dd39c76ccfae 100644 --- a/doc/api/http.md +++ b/doc/api/http.md @@ -261,7 +261,7 @@ Emitted each time a server responds to a request with a `CONNECT` method. If thi event isn't being listened for, clients receiving a `CONNECT` method will have their connections closed. 
-A client server pair that show you how to listen for the `'connect'` event. +A client and server pair that shows you how to listen for the `'connect'` event: ```js const http = require('http'); diff --git a/doc/api/modules.md b/doc/api/modules.md index 75e1cc57de06aa..dd02fc372a6887 100644 --- a/doc/api/modules.md +++ b/doc/api/modules.md @@ -4,9 +4,9 @@ -Node.js has a simple module loading system. In Node.js, files and modules are -in one-to-one correspondence. As an example, `foo.js` loads the module -`circle.js` in the same directory. +Node.js has a simple module loading system. In Node.js, files and modules +are in one-to-one correspondence (each file is treated as a separate module). +As an example, `foo.js` loads the module `circle.js` in the same directory. The contents of `foo.js`: diff --git a/doc/api/process.md b/doc/api/process.md index 782ccc4c4cf223..7967354face764 100644 --- a/doc/api/process.md +++ b/doc/api/process.md @@ -254,7 +254,7 @@ cases: -Emitted when the processes receives a signal. See sigaction(2) for a list of +Emitted when the processes receives a signal. See sigaction(7) for a list of standard POSIX signal names such as `SIGINT`, `SIGHUP`, etc. Example of listening for `SIGINT`: @@ -768,9 +768,7 @@ Returns an object describing the memory usage of the Node.js process measured in bytes. ```js -const util = require('util'); - -console.log(util.inspect(process.memoryUsage())); +console.log(process.memoryUsage()); ``` This will generate: diff --git a/doc/api/tls.md b/doc/api/tls.md index d42e062b408f53..4d836b5e7f7be1 100644 --- a/doc/api/tls.md +++ b/doc/api/tls.md @@ -694,6 +694,10 @@ Creates a new client connection to the given `port` and `host` (old API) or SSL version 3. The possible values depend on your installation of OpenSSL and are defined in the constant [SSL_METHODS][]. + - `secureContext`: An optional TLS context object from + `tls.createSecureContext( ... )`. Could it be used for caching client + certificates, key, and CA certificates. + - `session`: A `Buffer` instance, containing TLS session. 
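
To illustrate the new `secureContext` option described above, here is a minimal sketch. The file names and host are placeholders; the point is that the context is built once and reused:

```js
const tls = require('tls');
const fs = require('fs');

// Build the context once so the certificate/key/CA parsing cost is paid
// a single time and the result can be shared across connections.
const secureContext = tls.createSecureContext({
  key: fs.readFileSync('client-key.pem'),
  cert: fs.readFileSync('client-cert.pem'),
  ca: fs.readFileSync('ca.pem')
});

// Reuse the cached context for each outgoing connection.
const socket = tls.connect({
  host: 'example.com',
  port: 443,
  secureContext: secureContext
}, () => {
  console.log('connected, authorized:', socket.authorized);
  socket.end();
});
```

Reusing one context this way avoids re-parsing the PEM material on every `tls.connect()` call, which is the caching use case mentioned in the option description.
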
The `callback` parameter will be added as a listener for the diff --git a/doc/api_assets/style.css b/doc/api_assets/style.css index a9dec759fa85df..1e82464cc93b79 100644 --- a/doc/api_assets/style.css +++ b/doc/api_assets/style.css @@ -15,6 +15,31 @@ body { background: #fff; } +h1, h2, h3, h4 { + margin: .8em 0 .5em; + line-height: 1.2; +} + +h5, h6 { + margin: 1em 0 .8em; + line-height: 1.2; +} + +h1 { + margin-top: 0; + font-size: 2.441em; +} + +h2 {font-size: 1.953em;} + +h3 {font-size: 1.563em;} + +h4 {font-size: 1.25em;} + +h5 {font-size: 1em;} + +h6 {font-size: .8em;} + pre, tt, code, .pre, span.type, a.type { font-family: Monaco, Consolas, "Lucida Console", monospace; } @@ -127,7 +152,7 @@ abbr { p { position: relative; text-rendering: optimizeLegibility; - margin: 0 0 1em 0; + margin: 0 0 1.125em 0; line-height: 1.5em; } @@ -180,10 +205,10 @@ h1, h2, h3, h4, h5, h6 { text-rendering: optimizeLegibility; font-weight: 700; position: relative; - margin-bottom: .5em; } header h1 { + font-size: 2em; line-height: 2em; margin: 0; } @@ -198,30 +223,15 @@ header h1 { background-color: #ccc; } -#toc + h1 { - margin-top: 1em; - padding-top: 0; -} - -h2 { - font-size: 1.5em; - margin: 1em 0 .5em; -} - h2 + h2 { margin: 0 0 .5em; } -h3 { - font-size: 1em; - margin: 1.5em 0 .5em; -} - h3 + h3 { margin: 0 0 .5em; } -h2, h3, h4 { +h2, h3, h4, h5 { position: relative; padding-right: 40px; } @@ -244,16 +254,6 @@ h1 span a, h2 span a, h3 span a, h4 span a { font-weight: bold; } -h5 { - font-size: 1.125em; - line-height: 1.4em; -} - -h6 { - font-size: 1em; - line-height: 1.4667em; -} - pre, tt, code { line-height: 1.5em; margin: 0; padding: 0; @@ -330,6 +330,21 @@ hr { margin-top: .666em; } +#toc .stability_0::after { + background-color: #d50027; + color: #fff; +} + +#toc .stability_0::after { + content: "deprecated"; + font-size: .8em; + position: relative; + top: -.18em; + left: .5em; + padding: 0 .3em .2em; + border-radius: 3px; +} + #apicontent li { margin-bottom: .5em; } @@ -338,7 +353,7 @@ hr { margin-bottom: 0; } -p tt, p code, li code { +tt, code { font-size: .9em; color: #040404; background-color: #f2f2f2; diff --git a/doc/ctc-meetings/2016-09-28.md b/doc/ctc-meetings/2016-09-28.md new file mode 100644 index 00000000000000..81f4fb4482f840 --- /dev/null +++ b/doc/ctc-meetings/2016-09-28.md @@ -0,0 +1,302 @@ +# Node Foundation CTC Meeting 2016-09-28 + +## Links + +* **Audio Recording**: TBP +* **GitHub Issue**: [#8802](https://github.com/nodejs/node/issues/8802) +* **Minutes Google Doc**: +* _Previous Minutes Google Doc_: + + +## Present + +* Anna Henningsen @addaleax (CTC) +* Сковорода Никита Андреевич @ChALkeR (CTC) +* Colin Ihrig @cjihrig (CTC) +* Evan Lucas @evanlucas (CTC) +* Jeremiah Senkpiel @Fishrock123 (CTC) +* Tracy Hinds @hackygolucky (observer/Node.js Foundation) +* Josh Gavant @joshgav (observer/Microsoft) +* Michael Dawson @mhdawson (CTC) +* Ali Ijaz Sheikh @ofrobots (CTC) +* Jenn Turner @renrutnnej (observer/Node.js Foundation) +* Rod Vagg @rvagg (CTC) +* Seth Thompson @s3ththompson (observer/Google) +* Myles Borins @TheAlphaNerd (CTC) +* Trevor Norris @trevnorris (CTC) +* Rich Trott @Trott (CTC) + + +## Standup + +* Anna Henningsen @addaleax (CTC) + * The usual, issues and PR reviews +* Сковорода Никита Андреевич @ChALkeR (CTC) + * Some issue and PR comments and reviews as usual. + * Some more work on docs linting. +* Colin Ihrig @cjihrig (CTC) + * Reviewing issues and PRs. 
+ * Evan Lucas @evanlucas (CTC) + * v6.7.0 release + * More work on types eps +* Jeremiah Senkpiel @Fishrock123 (CTC) + * Issue / PR Review … general stuff + * Working towards ES Modules prototype implementations with Chris Dickinson +* Tracy Hinds @hackygolucky (observer/Node.js Foundation) + * getting the Outreachy info on website +* Josh Gavant @joshgav (observer/Microsoft) + * helping bring in some new MS contributors + * scheduled diag meeting for next week +* Michael Dawson @mhdawson (CTC) + * Finishing off PPC migration + * Fixing AIX issues when building from node-private + * Some work on ABI-stable node + * Misc PR review/lands + * Keeping up with issues + * Post-mortem nodereport review +* Brian White @mscdex (CTC) + * Worked on various performance improvements in node core + * Reviewed PRs, commented on issues +* Ali Ijaz Sheikh @ofrobots (CTC) + * Looking at node+V8 (5.5) integration build failures that seem related to recent parser improvements + * Investigating performance with the new interpreter + * Working with @matthewloring on FFI +* Jenn Turner @renrutnnej (observer/Node.js Foundation) + * No update, just observing +* Rod Vagg @rvagg (CTC) + * Security releases, supposed to be on vacation +* Seth Thompson @s3ththompson (observer/Google) + * async/await landed in V8 Tip of Tree. on track to ship with V8 5.5 + * expect a doc from V8 language team on promise hook API to allow microtask introspection in the near future +* Myles Borins @TheAlphaNerd (CTC) + * issue / pr review + * helping with security release + * backporting inspector + * auditing v4 backlog + * really have to get to that tap reporter + * coming up with outreachy mentor project +* Trevor Norris @trevnorris (CTC) + * AsyncHooks +* Rich Trott @Trott (CTC) + * mentoring more first-time contributors (via Node Todo) + * doc, test PRs + * ramping up a tiny bit on Build WG stuff, but just a tiny bit + + +## Agenda + +Extracted from **ctc-agenda** labelled issues and pull requests from the **nodejs org** prior to the meeting. + +### nodejs/CTC + +* Scheduling Meetings [#14](https://github.com/nodejs/CTC/issues/14) + +### nodejs/node + +* meta: update NODE_MODULE_VERSION to 51 [#8808](https://github.com/nodejs/node/pull/8808) +* General v7.0.0 / v6 LTS Planning / Discussion + + +## Previous Meeting Review + +* deps: update V8 to 5.4 [#8317](https://github.com/nodejs/node/pull/8317) +* Scheduling Meetings [#14](https://github.com/nodejs/CTC/issues/14) +* Decide on what problem points for ES Modules we care about the most. [#15](https://github.com/nodejs/CTC/issues/15) + + +## Minutes + +### Scheduling Meetings [ctc#14](https://github.com/nodejs/CTC/issues/14) + +@trott: This is a status report. Initial proposal was to dive in and start rotating meetings. Some were on board, some were concerned. Nikita started Google spreadsheet to figure things out. + +One proposal is to move back one hour (12pm Pacific) which would be a mild improvement for Ben and Nikita. + +Input received from NA and EU but not Asia and Australia. Once we have that information we can figure out what might work. + +--- + +### meta: update NODE_MODULE_VERSION to 51 [#8808](https://github.com/nodejs/node/pull/8808) + +`process.versions.modules` == 48 for v6.7.0. + +Set in build script, we bump this number for each semver-minor. We’d update to 49, but Electron has been bumping in between, so we need to go to 51. + +@thealphanerd: Had a way to do this in the past, but never landed a version of V8 on master [before a release]. 
So those using master cannot rely on this check. + +Proposal is to add this to master now and v7.x when released. But should we wait to bump till the actual release? + +@rvagg: If you’re using master, it’s been a bit “buyer beware” in the past. + +@thealphanerd: Node-pre-gyp uses the module version number to determine whether to pull the pre-built binary or to rebuild. So that’s causing problems. + +@ofrobots: Is there a disadvantage to doing this now? + +@rvagg: Doesn’t seem to be. + +@joshgav: Any concern that we’d have to bump again at v7.x release? + +@rvagg: We’ll just bump again. + +@addaleax: We should watch what Electron is doing cause they pull in every V8 version. + +@trevnorris: Could it happen that newer version of V8 has a lower module version number? + +@trott: if module version mapped to V8 version we could always be in sync. + +@ofrobots: Problem is that ABI is more than just V8. Also, we’re moving to a VM-neutral API/ABI in the future and that will remove the relationship to a V8 version. + +@rvagg: We should coordinate with Electron and draw from the same pool of numbers. + +@ofrobots: A good point for a bump would be when we bring a new V8 into master. + +@rvagg: This would make testing those nightlies easier. + +@trevnorris: Sounds to me that we can’t map reliably map a version of Node to a version of V8. So pulling from the same pool as Electron might be misleading to developers. If the number in Electron doesn’t match a Node version there would be a conflict. + +@rvagg: People are tracking which module versions map to what, so they could follow this too. + +@trevnorris: Maybe we can give Electron the minor numbers. + +@rvagg: NW.js also had a similar issue. + +@thealphanerd: Electron bumped to 5.1 in an ABI-breaking way, so we have two versions of Node ABIs out there, cause they needed to stay closer to Chromium. + +@rvagg: They don’t need to keep up with the latest version of V8. + +@thealphanerd: Let’s talk offline. + +@rvagg: Back to GitHub? Or do we need to decide now? + +@thealphanerd : PR has a lot of LGTMs, would like to see this land today or tomorrow so we can unbreak master. + +@ofrobots: Two points - one, what to do now; two, what to do going forward? + +Ali and Myles will work on a policy going forward. + +@rvagg: Might belong in LTS repo as we’ve been doing a lot of versioning stuff there. + +**Next steps**: + +* If there are objections raise them in the issue, otherwise ready to merge. + +--- + +### General v7.0.0 / v6 LTS Planning / Discussion + +@Fishrock123: Make sure all are in the loop. + +Throw v0.10 in too since it’s end of life at end of October. + +@rvagg: LTS map says *first* of October. Some people expect that cause the docs say that. + +@Fishrock123: We discussed keeping it alive till end of December like v0.12, cause that’s when OpenSSL is EOL’ed. + +@mhdawson: If the doc says Oct 1 what’s the downside to sticking with that? + +@rvagg: We may have communicated Oct 31 through some channels, so some people may expect that. + +Or perhaps when v7.x is first released. + +Having said that, it’s been >2 years, so people have had time to migrate. + +@rvagg: Originally 0.12 was slated for EOL in April 2017, we moved it back because of the OpenSSL issue. + +@Fishrock123: Official LTS policy is target date is when the next release/LTS is cut. That’s usually midway through month. + +@rvagg: Push to LTS WG to resolve ASAP. + +@rvagg: James pushing another beta later this week or early next week. + +@Fishrock123: v7.x is now on semver-major freeze. 
+ +@rvagg: There are some semver-major commits in master which aren’t in v7 release. + +@Fishrock123: Might still need to update? Might have been left out by James intentionally? + +@thealphanerd: Not a ton of semver-major things on master. There are the V8 upgrades (patches), and a move of a method to fs/internal. + +Big one is npm@4 coming through the pipeline 1-2 weeks before Node release, should we include that. + +@rvagg: npm@3 had problems originally so we delayed. Should we do the same for npm@4? + +@addaleax: I’d feel comfortable with landing it. Kat said they aren’t concerned if 4 is included now or not. + +@Fishrock123: Things which are deprecated in v4 will still be deprecated (not removed) in v5, so we could bump all the way to v5 in a later release. + +@rvagg: We don’t have to synchronize all these dates to one, we can be flexible if needed. + +@thealphanerd: If we have a date other than late October it might be a good idea to offer a date. + +@ofrobots: Tentatively Oct 18 is the target stable date for V8 5.4. As tentative as usual, not clear till the last moment. Low chance that V8 will be moving a lot around Oct 18. Haven’t seen this date slip by more than 1-2 days. Very low chance that V8 will destablizie us. + +@thealphanerd: Oct 25 as a tentative date for v7.x? + +@rvagg: Ali and Seth, what’s the risk of setting that date now? + +@ofrobots: Close to Oct 18 I can highlight any potential risk. + +@rvagg: Let’s say that - 25th is tentative date, we’ll communicate if there’s any change. Any objections? (No.) + +That will also be the day we switch v6 to LTS. + +@thealphanerd: Doing release of v6 LTS earlier might be helpful so we have that out of the way for potential v7 issues. + +@rvagg: Discussion on this will move to LTS WG. Join the LTS WG on Monday to discuss. + +@thealphanerd: Could use someone to be responsible for v6 LTS, please volunteer. + +@rvagg: It’s been helpful to have a single person for v4 LTS, but we need to find a model that scales in the future. + +@Fishrock123: Would be helpful to schedule LTS a week earlier to avoid problems. + +--- + +### Supported platforms proposal from Build WG [#488](https://github.com/nodejs/build/issues/488) + +Current proposal: + +@trott should make a CTC agenda item next week? + +@rvagg: Give input on that issue before it comes to CTC. Build WG must review and sign off on as well. + +@rvagg: Some discussion about tiers, this affects OS vendors. + +--- + +### Other + +@thealphanerd: Node.js is going to be working with Outreachy project to help people from underrepresented groups get involved. + +We need projects for these people to work on in 3 months. If you can think of good parts of the project to assign… would love to hear your suggestions. + +@rvagg: GitHub thread? + +@hackygolucky: I’ll create a new one and ping @nodejs/collaborators. + +@rvagg: Are we getting a satisfactory response on the call for mentors? + +@hackygolucky: 5 primary mentors and a number of supplementals. 4 sponsors, which means we can accept 4 mentees. + +@rvagg: If someone wants to be a supplemental is that still open? + +@hackygolucky: Thread is still open: https://github.com/nodejs/education/issues/7 + +--- + +## Q/A on public channels + +None. 
+ +--- + +## Upcoming Meetings + +* CTC: 2016-10-05 +* TSC: 2016-10-06 +* Build: 2016-10-11 +* Diagnostics: 2016-10-05, 12pm Pacific +* Benchmarking: +* LTS: 2016-10-03 +* Post-Mortem: +* API: diff --git a/doc/ctc-meetings/2016-10-05.md b/doc/ctc-meetings/2016-10-05.md new file mode 100644 index 00000000000000..1db805e235aabd --- /dev/null +++ b/doc/ctc-meetings/2016-10-05.md @@ -0,0 +1,311 @@ +# Node Foundation CTC Meeting 2016-10-05 + +## Links + +* **Audio Recording**: TBP +* **GitHub Issue**: [#8915](https://github.com/nodejs/node/issues/8915) +* **Minutes Google Doc**: +* _Previous Minutes Google Doc_: + + +## Present + +* Anna Henningsen @addaleax (CTC) +* Bradley Meck @bmeck (observer/GoDaddy/TC39) +* Colin Ihrig @cjihrig (CTC) +* Evan Lucas @evanlucas (CTC) +* Jeremiah Senkpiel @Fishrock123 (CTC) +* Tracy Hinds @hackygolucky (observer/Node.js Foundation) +* Michael Dawson @mhdawson (CTC) +* Julien Gilli @misterdjules (CTC) +* Mikeal Rogers @mikeal (observer/Node.js Foundation) +* Jenn Turner @renrutnnej (observer/Node.js Foundation) +* Rod Vagg @rvagg (CTC) +* Seth Thompson @s3ththompson (observer/Google) +* Myles Borins @TheAlphaNerd (CTC) +* Sakthipriyan Vairamani @thefourtheye (observer) +* Trevor Norris @trevnorris (CTC) +* Rich Trott @Trott (CTC) +* Josh Gavant @joshgav (observer/Microsoft) + + +## Standup + +* Anna Henningsen @addaleax (CTC) + * Nothing noteworthy +* Bradley Meck @bmeck (observer/GoDaddy/TC39) + * Went to TC39, got late linking and re-linking fully discussed + * Working on live named imports talks and spec changes +* Colin Ihrig @cjihrig (CTC) + * Reviewing of issues and PRs. +* Evan Lucas @evanlucas (CTC) + * Worked a little on improving the commit linter + * Submitted a PR improving process.nextTick perf by 10-20% in some cases +* Jeremiah Senkpiel @Fishrock123 (CTC) + * merged existsSync undeprecation + * other PRs & Issues +* Tracy Hinds @hackygolucky (observer/Node.js Foundation) + * Outreachy application drive, issue open in the CTC repo + * Met last week to talk about code & lean @ Austin event +* Michael Dawson @mhdawson (CTC) + * Miscellaneous PR review + * fix a v8 test issue + * ABI Stable API meeting, reviews, discussion + * Closed out PPC migration +* Julien Gilli @misterdjules (CTC) + * Investigated SmartOS-specific issues. +* Mikeal Rogers @mikeal (observer/Node.js Foundation) + * Working on new budget for 2017 for the NF + * @ TC39 last week, very productive, looking for a way to be involved longer-term +* Brian White @mscdex (CTC) + * Continued work on various optimizations in core + * Submitted a few misc. cleanup PRs + * Reviewed PRs, commented on issues +* Jenn Turner @renrutnnej (observer/Node.js Foundation) + * No update, observing +* Rod Vagg @rvagg (CTC) + * Little build of build, little bit of LTS, little bit of NF work +* Seth Thompson @s3ththompson (observer/Google) + * async await on track to ship + * V8 inspector work ongoing +* Steven R Loomis @srl295 (observer/IBM/ICU) + * Regrets for today (and out next week) - ICU 58 will be out in a couple of weeks and will have an updated PR at that time… +* Myles Borins @TheAlphaNerd (CTC) + * v4.x backporting + * define outreachy project + * fix regressions in citgm + * work on new communication plan for LTS dates +* Sakthipriyan Vairamani @thefourtheye (observer) + * was sick for much of the week, so not much to report +* Trevor Norris @trevnorris (CTC) + * Fix performance regressions from async hooks + * Almost finished bringing PR compliant with EP. 
Soon tests and proper API documentation will follow. + * Engaged with the V8 team regarding the MicrotaskQueue API (https://bugs.chromium.org/p/v8/issues/detail?id=4643#c19) +* Rich Trott @Trott (CTC) + * CTC meeting rotation proposal + * 2FA for Collaborators + * miscellaneous issue tracker/commit activity +* Josh Gavant @joshgav (observer) + * time away + * diagnostics WG meeting + + +## Agenda + +Extracted from **ctc-agenda** labelled issues and pull requests from the **nodejs org** prior to the meeting. + +### nodejs/node + +* doc: add supported platforms list [#8922](https://github.com/nodejs/node/pull/8922) +* Intl: Consider deprecating Intl.v8BreakIterator [#8865](https://github.com/nodejs/node/issues/8865) +* net: multiple listen() events fail silently [#8419](https://github.com/nodejs/node/pull/8419) + +### nodejs/CTC + +* Scheduling Meetings [#14](https://github.com/nodejs/CTC/issues/14) + + +## Previous Meeting Review + +* Scheduling Meetings [#14](https://github.com/nodejs/CTC/issues/14) +* meta: update NODE_MODULE_VERSION to 51 [#8808](https://github.com/nodejs/node/pull/8808) + * If there are objections raise them in the issue, otherwise ready to merge. +* General v7.0.0 / v6 LTS Planning / Discussion + + +## Minutes + +### doc: add supported platforms list [#8922](https://github.com/nodejs/node/pull/8922) + +@johanbergstrom put this together in collaboration with libuv and v8 teams. + +@rvagg: Has everyone reviewed that would like to? + +We rely on a few dependencies that make us who we are — most importantly V8 and libuv. We therefore need to adopt their supported platforms and potentially add to their lists based on test and/or release coverage. + +@trott: Do we mean that we support whatever they support? + +@Fishrock123: Could be clarified. Intent is that we must start with what they offer and anything additional falls to us (Node). + +@rvagg: Our supported platform list is narrower than that of libuv and v8. + +@?: It should say we’re a subset due to the constraints. + +@rvagg: Removing that might be the way to go. + +Any objections to list as it stands? + +Applies to v6, probably the same for v7. Will need to be changed for v4. + +If there are concerns, raise in issue. + +**Next steps**: + +* Raise any concerns in issue. + +--- + +### Intl: Consider deprecating Intl.v8BreakIterator [#8865](https://github.com/nodejs/node/issues/8865) + +Want to move to a different API: `Intl.Segmentor`, so want to deprecate this one. + +@mikeal: We need to discourage community from using this API so we can move to a more standards-compliant implementation. + +@rvagg: We expose this API because V8 exposes it. + +@bmeck: APIs exposed by V8 can be removed… + +@rvagg: New API is before TC-39, could take a while. Also, what is the timeframe for removal? + +Is it possible this will be removed in 5.5 or 5.6, which may land in Node v7.x, in which case it would be a breaking change we’d have to polyfill. + +Or will it be removed later and we can include it in Node v8.x. + +@mikeal: V8 (Daniel) wants to remove this as soon as possible, but depends on TC-39. + +@bmeck: Also some small percentage of web uses this so not able to completely remove yet anyway. + +@seththompson: We wait till stage 3 at TC-39 to implement. As far as removing v8BreakIterator, we don’t necessarily rely on usage. + +Will investigate this further with Daniel. + +@rvagg: No problem with removing it, but need a signal as to when it will be removed. + +@seththompson: Dan is out of office through next week. 
+ +**Next steps**: + +* Continue discussion in GitHub. + +--- + +### net: multiple listen() events fail silently [#8419](https://github.com/nodejs/node/pull/8419) + +see also: + +@jasnell added this item, would be a semver-major change, can it be landed in v7. + +@rvagg: Adds error message when `.listen()` is called twice, EADDRINUSE. + +@addaleax: (#8294)[https://github.com/nodejs/node/pull/8294] documented that .listen() twice restarts the server. Does that conflict with this? + +@Fishrock123: When `close()` is called it resets the listen listener to `false`. + +@rvagg: Is this the only way to close? + +@addaleax: Might be… + +@rvagg: Will keep on agenda for James to discuss. + +@Fishrock123: Is anyone opposed to this? + +@evanlucas: Have we tested against the ecosystem? + +No, but it’s a poor usage anyway. + +@trott: delay until next week when James is here. + +@trevnorris: If you run this twice on the same server object, it overwrites the `_handle` property, orphaning the original handle. + +@mhdawson: Do they get a new handle to use? + +@trott: Put this into the issue. + +**Next steps**: + +* Discuss again next week. + +--- + +### Scheduling Meetings [#14](https://github.com/nodejs/CTC/issues/14) + +@trott: All possibilities are bad. Choosing the best of bad options. Hope to give proposed schedule a shot for 4 weeks and then evaluate. Or should we shoot this down and move on? + +@rvagg: To start next week, Oct 12? + +@trott: Yes. + +@rvagg: Any objections to next week: UTC 4:00pm, US Pacific 9am, US East 12 noon. + +Better for Europe and India. + +No. + +Next steps: + +* Do next week at new time. + +--- + +### General v7.0.0 / v6 LTS Planning / Discussion + +LTS group agreed to put out v6 LTS week of 10/17. + +Week after would be v7 final. + +@thealphanerd: We should do a release the week before v6 LTS to include all semver-minor changes we want in. + +@Fishrock123 will manage this release next week (10/10). + +@rvagg will manage v6 LTS the week after (10/17). + +@rvagg: What is status on V8 5.4 for 10/18? + +@seththompson: @ofrobots to confirm, no known problems. + +@rvagg: Still looking for someone to handle backports and releases for v6 LTS. + +@Fishrock123: We could switch Myles to v6 and someone else can pick up v4. + +@thealphanerd: As long as v4 is still “active” we have to go through everything. Some discussion of how to automate process. + +There will be more backports to v6, but also that stuff will be a lot clearer, if it can be backported. + +Would be good to come up with a better process for how things end up in staging. Instead of getting everything in there in the last minute. + +@Fishrock123: I disagree that if v4 is active it needs to have the same amount of activity all the way through. There has been an understanding that it would get more and more difficult to backport features, focus only on backporting security fixes. + +@thealphanerd: I found non-trivial number of items which would’ve been missed by automation. + +I’m a little bit uncomfortable with not auditing everything. Things that would get missed are more important than they sometimes seem. + +Myles, Evan, and Jeremiah to discuss tooling to help make auditing less work. + +@rvagg: Need to resolve Intl.v8BreakIterator for v7 - if we need to deprecate or remove it would be good to do so now. + +@SethThompson: Plan is to not deprecate anything until Intl.Segmentor reaches more conclusiveness in TC-39. But team would be fine if Node deprecates sooner. 
+ +Current open issues for v7: https://github.com/nodejs/node/milestone/15 + +@rvagg: We won’t have OpenSSL 1.1.0 in Node in the near future, but it may be possible to compile against it. This isn’t a blocker for v7, doesn’t require semver-major. + +@rvagg: Should we ship another version of 0.10 with npm updated to include updated license? Comment in LTS WG. + +--- + +## Q/A on public channels + +Alex: Can we get a comment on stability of v6 at this point? + +@rvagg: We’ll have a standard v6 release next Tuesday, and the following Tuesday we’ll drop to LTS and stability push will start. Expect it to be as stable as the v4 releases. + +If you’re planning to move to v6 LTS, it’s worth testing now. + +@Fishrock123: Not many new features in next week’s v6.x.x release. Some regressions early in v6 lifecycle, none now. + +@evanlucas: Regression in inspector. + +@rvagg: Inspector is still marked experimental for now. That may change in the v6 lifetime. But for now it shouldn’t be treated as stable as the other features. + +--- + +## Upcoming Meetings + +* CTC: 2016-10-12, 9am Pacific +* TSC: 2016-10-06, 1pm Pacific +* Build: 2016-10-11 +* Diagnostics: first week of November +* Benchmarking: +* LTS: 2016-10-17 +* Post-Mortem: +* API: diff --git a/doc/ctc-meetings/2016-10-12.md b/doc/ctc-meetings/2016-10-12.md new file mode 100644 index 00000000000000..1b93d6e6f4e09b --- /dev/null +++ b/doc/ctc-meetings/2016-10-12.md @@ -0,0 +1,157 @@ +# Node Foundation CTC Meeting 2016-10-12 +## Links + +* **Audio Recording**: TBP +* **GitHub Issue**: [#9020](https://github.com/nodejs/node/issues/9020) +* **Minutes Google Doc**: +* _Previous Minutes Google Doc: _ + +## Present + +* Сковорода Никита Андреевич @ChALkeR (CTC) +* Colin Ihrig @cjihrig (CTC) +* Evan Lucas @evanlucas (CTC) +* Jeremiah Senkpiel @Fishrock123 (CTC) +* Michael Dawson @mhdawson (CTC) +* Brian White @mscdex (CTC) +* Ali Ijaz Sheikh @ofrobots (CTC) +* Shigeki Ohtsu @shigeki (CTC) +* Sakthipriyan Vairamani @thefourtheye (observer) +* Trevor Norris @trevnorris (CTC) +* Rich Trott @Trott (CTC) + +## Standup + +* Сковорода Никита Андреевич @ChALkeR (CTC) + * Nothing worth mentioning. + I was busy for the last week, catching up now. +* Colin Ihrig @cjihrig (CTC) + * Issue and PR review. A few PRs for tests. +* Evan Lucas @evanlucas (CTC) + * Opened a few PRs + * Some issue/pr review + * Wrote Chrome extension to make generating review metadata easier +* Jeremiah Senkpiel @Fishrock123 (CTC) + * misc PRs / issues + * working with the ChakraCore team on ES Modules + * v6.8.0 Release +* Michael Dawson @mhdawson (CTC) + * Misc review + land + * Back to working on adding nightly code coverage build + * ABI stable API PoC + * Keeping up to date on issues +* Brian White @mscdex (CTC) + * Worked on improved string encoding/decoding performance + * Reviewed PRs, commented on issues +* Ali Ijaz Sheikh @ofrobots (CTC) + * Travelling last week so not too much +* Shigeki Ohtsu @shigeki (CTC) + * Review some PR and issues related to crypto. +* Sakthipriyan Vairamani @thefourtheye (observer) + * catching up + * reviewing PRs and commenting +* Trevor Norris (CTC) + * Finishing up implementing parentId for async hooks +* Rich Trott @Trott (CTC) + * Issue and PR review + * governance discussions/issues + * Outreachy mentoring prep/work with applicants + +## Agenda + +Extracted from **ctc-agenda** labelled issues and pull requests from the **nodejs org** prior to the meeting. 
+ +### nodejs/node + +* governance: expand use of CTC issue tracker [#8945](https://github.com/nodejs/node/pull/8945) +* doc: add supported platforms list [#8922](https://github.com/nodejs/node/pull/8922) +* net: multiple listen() events fail silently [#8419](https://github.com/nodejs/node/pull/8419) + +### nodejs/TSC + +* Consider folding TSC into CTC [#146](https://github.com/nodejs/TSC/issues/146) + + +## Previous Meeting Review + +### nodejs/node + +* doc: add supported platforms list [#8922](https://github.com/nodejs/node/pull/8922) + * approved last week, if any objections comment now + on issue +* Intl: Consider deprecating Intl.v8BreakIterator [#8865](https://github.com/nodejs/node/issues/8865) + * now closed so resolved +* net: multiple listen() events fail silently [#8419](https://github.com/nodejs/node/pull/8419) + * back on agenda for this week + +### nodejs/CTC + +* Scheduling Meetings [#14](https://github.com/nodejs/CTC/issues/14) + * resolved, new meeting schedule set for next month. + +## Minutes + +### governance: expand use of CTC issue tracker [#8945](https://github.com/nodejs/node/pull/8945) + + * don't want to change decision making process without + putting through current process. + * ctc-review label would be useful. For issues we need to + come to consensus but that we don't need to bring to meeting. + Still good to open issues in CTC repo. + * Comment from @thefourtheye that section on consensus + seeking model could use some clarification. Rich -> likely + out of scope for this change. + * This only applies to issues that don't need a vote, votes + mostly required when consensus cannot be achieved. + * Please add your comments or LTGM to the issue. + +### doc: add supported platforms list [#8922](https://github.com/nodejs/node/pull/8922) + + * discussed last week and input provided by CTC. + * removed from CTC-agenda. + + +### net: multiple listen() events fail silently [#8419](https://github.com/nodejs/node/pull/8419) + + * latest discussion in issue still around if there is + any other way to close socket. + * you will already get errors if you listen twice. + * Some discussion, but missing proponents so discussion + back into github issue. + * If you object to it going into 7 (even though semver major) + make sure to comment. + + +### Consider folding TSC into CTC [#146](https://github.com/nodejs/TSC/issues/146) + * Already discussed in TSC, here to discuss with those not in CTC. + * no strong opinions voiced beyond what is in issue. + * comments that we have gotten questions as to + why we have 2 bodies. + + +### General v7.0.0 / v6 LTS Planning / Discussion + + +* 2 weeks out from V7.0 - Jerimiah handling this one. +* 6 days from LTS release - Rodd handling this one. +* v6 Current release should go out today. 
+ + +### http: improve invalid character in header error message [9010](https://github.com/nodejs/node/pull/9010) + +* asked if this could go into 7, some discussion, take + ongoing discussion back to github + +## Q/A on public channels + + +## Upcoming Meetings + +* CTC: 2016-10-19, 1pm Pacific +* TSC: 2016-10-13, 1pm Pacific +* Build: 2016-10-11 +* Diagnostics: first week of November +* Benchmarking: +* LTS: 2016-10-17 +* Post-Mortem: +* API: diff --git a/doc/ctc-meetings/2016-10-19.md b/doc/ctc-meetings/2016-10-19.md new file mode 100644 index 00000000000000..8fbaa1dfa906d6 --- /dev/null +++ b/doc/ctc-meetings/2016-10-19.md @@ -0,0 +1,202 @@ +# Node Foundation CTC Meeting 2016-10-19 + +## Links + +* **Audio Recording**: TBP +* **GitHub Issue**: [#9143](https://github.com/nodejs/node/issues/9143) +* **Minutes Google Doc**: +* _Previous Minutes Google Doc: _ + +## Present + +* Bradley Meck @bmeck (observer/GoDaddy/TC39) +* Colin Ihrig @cjihrig (CTC) +* Evan Lucas @evanlucas (CTC) +* James M Snell @jasnell (CTC) +* Josh Gavant @joshgav (observer/Microsoft) +* Michael Dawson @mhdawson (CTC) +* Julien Gilli @misterdjules (CTC) +* Mikeal Rogers @mikeal (observer/Node.js Foundation) +* Brian White @mscdex (CTC) +* Ali Ijaz Sheikh @ofrobots (CTC) +* Jenn Turner @renrutnnej (observer/Node.js Foundation) +* Steven R. Loomis @srl295 (observer) +* Sakthipriyan Vairamani @thefourtheye (observer) +* Trevor Norris @trevnorris (CTC) +* Rich Trott @Trott (CTC) + +## Standup + +* Bradley Meck @bmeck (observer/GoDaddy/TC39) + * Vacation, work on import changes for ES spec +* Colin Ihrig @cjihrig (CTC) + * Reviewed issues and PRs. Revisited an old PR. +* Evan Lucas @evanlucas (CTC) + * cut v6.8.1 release + * opened small doc PR + * issue/pr review +* James M Snell @jasnell (CTC) + * Preparing v7.0.0 release + * HTTP/2 implementation + * PRs +* Josh Gavant @joshgav (observer/Microsoft) + * PR’s to improve user experience with new debugger + * investigating how to integrate guides with API docs + * off this week +* Michael Dawson @mhdawson (CTC) + * couple days vacation + * Added code coverage nightly job and PR for doc + * misc issue review/comment/lands + * Some Abi stable node discussion/work + * A number of benchmarking issues to look at +* Julien Gilli @misterdjules (CTC) + * nothing too significant +* Mikeal Rogers @mikeal (observer/Node.js Foundation) + * Putting together bugeting stuff +* Brian White @mscdex (CTC) + * Continued work on improving string encoding/decoding + performance. + * Commented on issues, reviewed PRs. +* Ali Ijaz Sheikh @ofrobots (CTC) + * Some V8 backport triage. Not much else notable. +* Jenn Turner @renrutnnej (observer/Node.js Foundation) + * Just observing, no update +* Steven R. Loomis @srl295 (observer) + * ICU 58 going GA end of week so will submit PR to update - [#7844](https://github.com/nodejs/node/issues/7844) + * Backported break iterator fix to v4.x [#9008](https://github.com/nodejs/node/pull/9008) +* Sakthipriyan Vairamani @thefourtheye (observer) + * mostly PR reviews +* Trevor Norris @trevnorris (CTC) + * Worked with Matt Loring, proposed promise hooks api is + sufficient combined with debugger API. +* Rich Trott @Trott (CTC) + * Outreachy + * flaky tests + * linting tools and rules + +--- + +## Agenda + +Extracted from **ctc-agenda** labelled issues and pull requests from the **nodejs org** prior to the meeting. 
+ +### nodejs/node + +* doc: add ctc-review label information [#9072](https://github.com/nodejs/node/pull/9072) @Trott +* http: improve invalid character in header error message [#9010](https://github.com/nodejs/node/pull/9010) @evanlucas +* net: multiple listen() events fail silently [#8419](https://github.com/nodejs/node/pull/8419) @jasnell + +### nodejs/TSC + +* Consider folding TSC into CTC [#146](https://github.com/nodejs/TSC/issues/146) @rvagg + + +--- + +## Previous Meeting Review + +### nodejs/node + +* governance: expand use of CTC issue tracker [#8945](https://github.com/nodejs/node/pull/8945) + + * Finalize through GitHub discussions. + * Consider `ctc-review` label (#9072). + +* doc: add supported platforms list [#8922](https://github.com/nodejs/node/pull/8922) + + * Complete, remove from agenda. + +* net: multiple listen() events fail silently [#8419](https://github.com/nodejs/node/pull/8419) + + * Back to GitHub for further discussion. + +### nodejs/TSC + +* Consider folding TSC into CTC [#146](https://github.com/nodejs/TSC/issues/146) + +--- + +## Minutes + +### doc: add ctc-review label information [#9072](https://github.com/nodejs/node/pull/9072) @Trott + +@trott: Has landed. If any objections speak now. Goal is more resolutions within tracker, less in meetings. + +--- + +### http: improve invalid character in header error message [#9010](https://github.com/nodejs/node/pull/9010) @evanlucas + +@evanlucas: Concerns about this leading to an information leak, because header name is displayed. + +Would like to land this for v7. Would have to be merged today. + +@trott: Question is if people are uncomfortable moving forward now before information from @Chalker is available. + +@mhdawson: Seems there are already some hits (problems). + +@evanlucas: Package `http-node` had problems, seems they copied `_http_outgoing.js`. + +@evanlucas: Perhaps we can just add a debug message, that would not be semver-major. + +@mhdawson: Seems better to do that than introduce a semver-major change this late. + +@evanlucas: That seems fine, as long as we can get the needed info. + +@trott: So land the debug message now, perhaps land standard output in v8.x. + +**Next steps**: + +* Create new PR with debug-based message, defer including in stderr/stdout till v8.x. + +--- + +### net: multiple listen() events fail silently [#8419](https://github.com/nodejs/node/pull/8419) @jasnell + +@jasnell: Question is whether to land in v7.x. If so please take it up today! + +--- + +### Consider folding TSC into CTC [#146](https://github.com/nodejs/TSC/issues/146) @rvagg + +@trott: Mainly to bring this to people’s attention, continue conversation in tracker. + +--- + +### Info text in `--inspect` output [#8978](https://github.com/nodejs/node/pull/8978) + +@joshgav: Should we land this for v7.x? + +@trott: Is this a semver-major change? + +@jasnell: No, especially cause this is an experimental feature. + +@evanlucas: I’d prefer to wait till Chrome automatically lists Node.js targets via chrome://inspect, which is 55. + +@ofrobots: No urgency to land this change now and it will make it harder to get started. + +@mhdawson: Agreed, no urgency to land this before v7.x. + +@joshgav: So I’ll update PR to include generic URL but not remove the chrome-devtools URL for now. + +**Next steps**: + +* Josh to update PR to still include chrome-devtools URL. +* Consider removing chrome-devtools URL when 55 is released (early December). 
+ +--- + +## Q/A on public channels + + +--- + +## Upcoming Meetings + +* CTC: 2016-10-26, 9am Pacific +* TSC: 2016-10-20, 1pm Pacific +* Build: 2016-10-24 (?) +* Diagnostics: first week of November +* Benchmarking: +* LTS: +* Post-Mortem: +* API: diff --git a/doc/ctc-meetings/2016-10-26.md b/doc/ctc-meetings/2016-10-26.md new file mode 100644 index 00000000000000..ed8f8fee78a1bf --- /dev/null +++ b/doc/ctc-meetings/2016-10-26.md @@ -0,0 +1,151 @@ +# Node Foundation CTC Meeting 2016-10-26 + +## Links + +* **Audio Recording**: TBP +* **GitHub Issue**: +[#9261](https://github.com/nodejs/node/issues/9261) +* **Minutes Google Doc**: +* _Previous Minutes Google Doc: _ + + +## Present + +* Anna Henningsen @addaleax (CTC) +* Bradley Meck @bmeck (observer/GoDaddy/TC39) +* Ben Noordhuis @bnoordhuis (CTC) +* Сковорода Никита Андреевич @ChALkeR (CTC) +* Colin Ihrig @cjihrig (CTC) +* Evan Lucas @evanlucas (CTC) +* James M Snell @jasnell (CTC) +* Brian White @mscdex (CTC) +* Ali Ijaz Sheikh @ofrobots (CTC) +* Seth Thompson @s3ththompson (observer/Google) +* Shigeki Ohtsu @shigeki (CTC) +* Sakthipriyan Vairamani @thefourtheye (observer) +* Rich Trott @Trott (CTC) + + +## Standup + +* Anna Henningsen @addaleax (CTC) + * Not much +* Bradley Meck @bmeck (observer/GoDaddy/TC39) + * Work on making inspector work w/ vm + * Minor talks w/ modules spec authors +* Ben Noordhuis @bnoordhuis (CTC) + * (Nothing reported.) +* Сковорода Никита Андреевич @ChALkeR (CTC) + * Rebuilt a new dataset from npm packages, some further work on the tooling + * Some issue/pr comments as usual + * Some ecosystem security stuff +* Colin Ihrig @cjihrig (CTC) + * Reviewing issues and PRs, opened a few PRs, libuv 1.10.0 update +* Evan Lucas @evanlucas (CTC) + * Working on getting libuv to use fsevents for file watching on OS X + * Opened small PR to fix a test that kept failing on freebsd +* James M Snell @jasnell (CTC) + * Getting v7.0.0 out the door + * PRs + * More work on HTTP/2 +* Brian White @mscdex (CTC) + * Continued working on string encoding/decoding performance. Starting to see even more promising results. + * Reviewed PRs, commented on issues +* Ali Ijaz Sheikh @ofrobots (CTC) + * Not much +* Seth Thompson @s3ththompson (observer/Google) + * V8 5.5 beta shipped with async/await +* Shigeki Ohtsu @shigeki (CTC) + * Reviewed a few PR and made a security assessments of CVE-2016-8610 for Node-v0.10 and 0.12. +* Sakthipriyan Vairamani @thefourtheye (observer) + * Held an event to help people to get their first contribution into Node.js + * Looking at V8 code base +* Rich Trott @Trott (CTC) + * Outreachy, Node Todo, Node Interactive prep + * test and tools PRs + * usual PR review/commenting + + +--- + + +## Agenda + + +Extracted from **ctc-agenda** labelled issues and pull requests from the **nodejs org** prior to the meeting. + + +### nodejs/TSC + + +* Consider folding TSC into CTC [#146](https://github.com/nodejs/TSC/issues/146) + + +### nodejs/node + + +* Debugging: name every function +[#8913]https://github.com/nodejs/node/issues/8913 + + +--- + + +## Previous Meeting Review + + +Extracted from **ctc-agenda** labelled issues and pull requests from the **nodejs org** prior to the meeting. 
+ + +### nodejs/node + + +* doc: add ctc-review label information [#9072](https://github.com/nodejs/node/pull/9072) @Trott +* http: improve invalid character in header error message [#9010](https://github.com/nodejs/node/pull/9010) @evanlucas +* net: multiple listen() events fail silently [#8419](https://github.com/nodejs/node/pull/8419) @jasnell + + +### nodejs/TSC + + +* Consider folding TSC into CTC [#146](https://github.com/nodejs/TSC/issues/146) @rvagg + + +--- + + +## Minutes + + +### Consider folding TSC into CTC + + +Rich: Defer until Rod is here and have the conversation in the issue tracker until then? + + +James: Makes sense. Also, TSC call is tomorrow so it can be discussed then. + + +### Debugging: name every function + + +Rich: It seemed like a good idea initially, but it is not clear if it is a good thing in all cases (some +, some -). Issue is marked as good first contribution which means lots of new contributors are coming in. +Myles wanted a quick resolution, but it is not clear there is a quick resolution. Someone needs to sit down and come up with a list of cases where it would be beneficial + + +Brian: Prototype functions? + + +Rich: That is a case where it doesn't add value but doesn't hurt either. There are other cases where it does remove information. + + +Sakthipriyan: One example was fs.readFileSync.. + + +Brian: V8 may also be inferring names from a variable in cases when you do `let a = () => {...}`. Another case to take into consideration. + + +Rich: Move back to the issue tracker; remove 'good-first-contribution' label until we have documented what should/shouldn't be done. I can work with someone, or come up with documentation. + + +Floor: General agreement? Yes, general agreement. diff --git a/doc/guides/building-node-with-ninja.md b/doc/guides/building-node-with-ninja.md index eb5791af548145..d8471a3041ce61 100644 --- a/doc/guides/building-node-with-ninja.md +++ b/doc/guides/building-node-with-ninja.md @@ -35,4 +35,4 @@ The above alias can be modified slightly to produce a debug build, rather than a `alias nnodedebug='./configure --ninja && ninja -C out/Debug && ln -fs out/Debug/node node_g'` -[Ninja]: https://martine.github.io/ninja/ +[Ninja]: https://ninja-build.org/ diff --git a/doc/node.1 b/doc/node.1 index 6b9b57e4ea2e3d..0f4fc30fe2d523 100644 --- a/doc/node.1 +++ b/doc/node.1 @@ -159,8 +159,26 @@ is ~/.node_repl_history, which is overridden by this variable. Setting the value to an empty string ("" or " ") disables persistent REPL history. -.SH RESOURCES AND DOCUMENTATION +.SH BUGS +Bugs are tracked in GitHub Issues: +.ur https://github.com/nodejs/node/issues + + +.SH AUTHORS +Written and maintained by 1000+ contributors: +.ur https://github.com/nodejs/node/blob/master/AUTHORS + +.SH COPYRIGHT +Copyright Node.js contributors. Node.js is available under the MIT license. + +Node.js also includes external libraries that are available under a variety +of licenses. See +.ur https://github.com/nodejs/node/blob/master/LICENSE +for the full license text. + + +.SH RESOURCES AND DOCUMENTATION Website: .ur https://nodejs.org/ @@ -175,6 +193,7 @@ Mailing list: IRC (general questions): .ur "chat.freenode.net #node.js" +(unofficial) -IRC (node core development): +IRC (Node.js core development): .ur "chat.freenode.net #node-dev" diff --git a/doc/onboarding.md b/doc/onboarding.md index e2239c59088aac..f458cee46e6708 100644 --- a/doc/onboarding.md +++ b/doc/onboarding.md @@ -62,7 +62,8 @@ onboarding session. 
* Labels: * There is [a bot](https://github.com/nodejs-github-bot/github-bot) that applies subsystem labels (for example, `doc`, `test`, `assert`, or `buffer`) so that we know what parts of the code base the pull request modifies. It is not perfect, of course. Feel free to apply relevant labels and remove irrelevant labels from pull requests and issues. * [**See "Labels"**](./onboarding-extras.md#labels) - * Use the `ctc-agenda` if a topic is controversial or isn't coming to a conclusion after an extended time. + * Use the `ctc-review` label if a topic is controversial or isn't coming to + a conclusion after an extended time. * `semver-{minor,major}`: * If a change has the remote *chance* of breaking something, use `semver-major` * When adding a semver label, add a comment explaining why you're adding it. Do it right away so you don't forget! @@ -145,7 +146,8 @@ onboarding session. the objection is addressed. The options for such a situation include: * Engaging those with objections to determine a viable path forward; * Altering the pull request to address the objections; - * Escalating the discussion to the CTC using the `ctc-agenda` label. This should only be done after other options have been exhausted. + * Escalating the discussion to the CTC using the `ctc-review` label. This + should only be done after the previous options have been exhausted. * Wait before merging non-trivial changes. * 48 hours during the week and 72 hours on weekends. @@ -164,8 +166,15 @@ onboarding session. ## Landing PRs: Details -* Please never use GitHub's green "Merge Pull Request" button. +* Please never use GitHub's green ["Merge Pull Request"](https://help.github.com/articles/merging-a-pull-request/#merging-a-pull-request-using-the-github-web-interface) button. * If you do, please force-push removing the merge. + * Reasons for not using the web interface button: + * The merge method will add an unnecessary merge commit. + * The rebase & merge method adds metadata to the commit title. + * The rebase method changes the author. + * The squash & merge method has been known to add metadata to the commit title. + * If more than one author has contributed to the PR, only the latest author will be considered during the squashing. + Update your `master` branch (or whichever branch you are landing on, almost always `master`) diff --git a/lib/.eslintrc b/lib/.eslintrc index 341e9327c7cf3a..e8336884566ccd 100644 --- a/lib/.eslintrc +++ b/lib/.eslintrc @@ -1,3 +1,4 @@ rules: # Custom rules in tools/eslint-rules require-buffer: 2 + no-let-in-for-declaration: 2 diff --git a/lib/_debug_agent.js b/lib/_debug_agent.js index 3457c6db8ac9d6..eedca7ef5843bb 100644 --- a/lib/_debug_agent.js +++ b/lib/_debug_agent.js @@ -18,9 +18,10 @@ exports.start = function start() { process._rawDebug(err.stack || err); }); - agent.listen(process._debugAPI.port, function() { - var addr = this.address(); - process._rawDebug('Debugger listening on port %d', addr.port); + agent.listen(process._debugAPI.port, process._debugAPI.host, function() { + const addr = this.address(); + const host = net.isIPv6(addr.address) ? 
`[${addr.address}]` : addr.address; + process._rawDebug('Debugger listening on %s:%d', host, addr.port); process._debugAPI.notifyListen(); }); diff --git a/lib/_http_agent.js b/lib/_http_agent.js index cb52c1105466e5..f213c894ed5a42 100644 --- a/lib/_http_agent.js +++ b/lib/_http_agent.js @@ -123,6 +123,14 @@ Agent.prototype.addRequest = function(req, options) { options = util._extend({}, options); options = util._extend(options, this.options); + if (!options.servername) { + options.servername = options.host; + const hostHeader = req.getHeader('host'); + if (hostHeader) { + options.servername = hostHeader.replace(/:.*$/, ''); + } + } + var name = this.getName(options); if (!this.sockets[name]) { this.sockets[name] = []; diff --git a/lib/_http_common.js b/lib/_http_common.js index 1e6490eaffb6ce..127570a79db31d 100644 --- a/lib/_http_common.js +++ b/lib/_http_common.js @@ -78,17 +78,11 @@ function parserOnHeadersComplete(versionMajor, versionMinor, headers, method, parser.incoming.statusMessage = statusMessage; } - // The client made non-upgrade request, and server is just advertising - // supported protocols. - // - // See RFC7230 Section 6.7 - // - // NOTE: RegExp below matches `upgrade` in `Connection: abc, upgrade, def` - // header. - if (upgrade && - parser.outgoing !== null && - (parser.outgoing._headers.upgrade === undefined || - !/(^|\W)upgrade(\W|$)/i.test(parser.outgoing._headers.connection))) { + if (upgrade && parser.outgoing !== null && !parser.outgoing.upgrading) { + // The client made non-upgrade request, and server is just advertising + // supported protocols. + // + // See RFC7230 Section 6.7 upgrade = false; } diff --git a/lib/_http_outgoing.js b/lib/_http_outgoing.js index 654e85a34b3fd9..230211c5044a3b 100644 --- a/lib/_http_outgoing.js +++ b/lib/_http_outgoing.js @@ -9,16 +9,18 @@ const Buffer = require('buffer').Buffer; const common = require('_http_common'); const CRLF = common.CRLF; -const chunkExpression = common.chunkExpression; +const trfrEncChunkExpression = common.chunkExpression; const debug = common.debug; -const connectionExpression = /^Connection$/i; +const upgradeExpression = /^Upgrade$/i; const transferEncodingExpression = /^Transfer-Encoding$/i; -const closeExpression = /close/i; const contentLengthExpression = /^Content-Length$/i; const dateExpression = /^Date$/i; const expectExpression = /^Expect$/i; const trailerExpression = /^Trailer$/i; +const connectionExpression = /^Connection$/i; +const connCloseExpression = /(^|\W)close(\W|$)/i; +const connUpgradeExpression = /(^|\W)upgrade(\W|$)/i; const lenientHttpHeaders = !!process.REVERT_CVE_2016_2216; const automaticHeaders = { @@ -62,6 +64,7 @@ function OutgoingMessage() { this.writable = true; this._last = false; + this.upgrading = false; this.chunkedEncoding = false; this.shouldKeepAlive = true; this.useChunkedEncodingByDefault = true; @@ -191,11 +194,13 @@ OutgoingMessage.prototype._storeHeader = function(firstLine, headers) { // in the case of response it is: 'HTTP/1.1 200 OK\r\n' var state = { sentConnectionHeader: false, + sentConnectionUpgrade: false, sentContentLengthHeader: false, sentTransferEncodingHeader: false, sentDateHeader: false, sentExpect: false, sentTrailer: false, + sentUpgrade: false, messageHeader: firstLine }; @@ -224,6 +229,10 @@ OutgoingMessage.prototype._storeHeader = function(firstLine, headers) { } } + // Are we upgrading the connection? 
+ if (state.sentConnectionUpgrade && state.sentUpgrade) + this.upgrading = true; + // Date header if (this.sendDate === true && state.sentDateHeader === false) { state.messageHeader += 'Date: ' + utcDate() + CRLF; @@ -313,15 +322,16 @@ function storeHeader(self, state, field, value) { if (connectionExpression.test(field)) { state.sentConnectionHeader = true; - if (closeExpression.test(value)) { + if (connCloseExpression.test(value)) { self._last = true; } else { self.shouldKeepAlive = true; } - + if (connUpgradeExpression.test(value)) + state.sentConnectionUpgrade = true; } else if (transferEncodingExpression.test(field)) { state.sentTransferEncodingHeader = true; - if (chunkExpression.test(value)) self.chunkedEncoding = true; + if (trfrEncChunkExpression.test(value)) self.chunkedEncoding = true; } else if (contentLengthExpression.test(field)) { state.sentContentLengthHeader = true; @@ -331,6 +341,8 @@ function storeHeader(self, state, field, value) { state.sentExpect = true; } else if (trailerExpression.test(field)) { state.sentTrailer = true; + } else if (upgradeExpression.test(field)) { + state.sentUpgrade = true; } } diff --git a/lib/_http_server.js b/lib/_http_server.js index f88593b9974e7a..d6b2d7e88e32a7 100644 --- a/lib/_http_server.js +++ b/lib/_http_server.js @@ -65,6 +65,7 @@ const STATUS_CODES = exports.STATUS_CODES = { 428: 'Precondition Required', // RFC 6585 429: 'Too Many Requests', // RFC 6585 431: 'Request Header Fields Too Large', // RFC 6585 + 451: 'Unavailable For Legal Reasons', 500: 'Internal Server Error', 501: 'Not Implemented', 502: 'Bad Gateway', diff --git a/lib/_linklist.js b/lib/_linklist.js index 02186cfedcb9f6..ea414843eebc9e 100644 --- a/lib/_linklist.js +++ b/lib/_linklist.js @@ -17,7 +17,7 @@ exports.peek = peek; // remove the most idle item from the list function shift(list) { - var first = list._idlePrev; + const first = list._idlePrev; remove(first); return first; } diff --git a/lib/_stream_readable.js b/lib/_stream_readable.js index a6e5c5d46f05fe..afa92f1d8cc907 100644 --- a/lib/_stream_readable.js +++ b/lib/_stream_readable.js @@ -654,17 +654,17 @@ Readable.prototype.unpipe = function(dest) { state.pipesCount = 0; state.flowing = false; - for (let i = 0; i < len; i++) + for (var i = 0; i < len; i++) dests[i].emit('unpipe', this); return this; } // try to find the right one. - const i = state.pipes.indexOf(dest); - if (i === -1) + const index = state.pipes.indexOf(dest); + if (index === -1) return this; - state.pipes.splice(i, 1); + state.pipes.splice(index, 1); state.pipesCount -= 1; if (state.pipesCount === 1) state.pipes = state.pipes[0]; diff --git a/lib/_stream_writable.js b/lib/_stream_writable.js index 8b2e90ca43c555..75ce7a187970cf 100644 --- a/lib/_stream_writable.js +++ b/lib/_stream_writable.js @@ -43,6 +43,7 @@ function WritableState(options, stream) { // cast to ints. this.highWaterMark = ~~this.highWaterMark; + // drain event flag. this.needDrain = false; // at the start of calling end() this.ending = false; diff --git a/lib/_tls_common.js b/lib/_tls_common.js index 9e6b00ea3d0ebe..fbf1d92c983115 100644 --- a/lib/_tls_common.js +++ b/lib/_tls_common.js @@ -34,6 +34,8 @@ exports.SecureContext = SecureContext; exports.createSecureContext = function createSecureContext(options, context) { if (!options) options = {}; + var i; + var len; var secureOptions = options.secureOptions; if (options.honorCipherOrder) @@ -47,7 +49,7 @@ exports.createSecureContext = function createSecureContext(options, context) { // cert's issuer in C++ code. 
if (options.ca) { if (Array.isArray(options.ca)) { - for (let i = 0, len = options.ca.length; i < len; i++) { + for (i = 0, len = options.ca.length; i < len; i++) { c.context.addCACert(options.ca[i]); } } else { @@ -59,7 +61,7 @@ exports.createSecureContext = function createSecureContext(options, context) { if (options.cert) { if (Array.isArray(options.cert)) { - for (let i = 0; i < options.cert.length; i++) + for (i = 0; i < options.cert.length; i++) c.context.setCert(options.cert[i]); } else { c.context.setCert(options.cert); @@ -72,7 +74,7 @@ exports.createSecureContext = function createSecureContext(options, context) { // which leads to the crash later on. if (options.key) { if (Array.isArray(options.key)) { - for (let i = 0; i < options.key.length; i++) { + for (i = 0; i < options.key.length; i++) { var key = options.key[i]; if (key.passphrase) @@ -103,7 +105,7 @@ exports.createSecureContext = function createSecureContext(options, context) { if (options.crl) { if (Array.isArray(options.crl)) { - for (let i = 0, len = options.crl.length; i < len; i++) { + for (i = 0, len = options.crl.length; i < len; i++) { c.context.addCRL(options.crl[i]); } } else { diff --git a/lib/_tls_wrap.js b/lib/_tls_wrap.js index 6acf5e26a65ebf..7efe42ab46ca22 100644 --- a/lib/_tls_wrap.js +++ b/lib/_tls_wrap.js @@ -317,14 +317,31 @@ proxiedMethods.forEach(function(name) { }); tls_wrap.TLSWrap.prototype.close = function closeProxy(cb) { - if (this.owner) + let ssl; + if (this.owner) { + ssl = this.owner.ssl; this.owner.ssl = null; + } + + // Invoke `destroySSL` on close to clean up possibly pending write requests + // that may self-reference TLSWrap, leading to leak + const done = () => { + if (ssl) { + ssl.destroySSL(); + if (ssl._secureContext.singleUse) { + ssl._secureContext.context.close(); + ssl._secureContext.context = null; + } + } + if (cb) + cb(); + }; if (this._parentWrap && this._parentWrap._handle === this._parent) { - this._parentWrap.once('close', cb); + this._parentWrap.once('close', done); return this._parentWrap.destroy(); } - return this._parent.close(cb); + return this._parent.close(done); }; TLSSocket.prototype._wrapHandle = function(wrap) { @@ -973,7 +990,7 @@ exports.connect = function(/* [port, host], options, cb */) { (options.socket && options.socket._host) || 'localhost'; const NPN = {}; - const context = tls.createSecureContext(options); + const context = options.secureContext || tls.createSecureContext(options); tls.convertNPNProtocols(options.NPNProtocols, NPN); var socket = new TLSSocket(options.socket, { diff --git a/lib/dgram.js b/lib/dgram.js index a2827df0262800..484689380e26eb 100644 --- a/lib/dgram.js +++ b/lib/dgram.js @@ -243,6 +243,32 @@ Socket.prototype.sendto = function(buffer, }; +function enqueue(self, toEnqueue) { + // If the send queue hasn't been initialized yet, do it, and install an + // event handler that flushes the send queue after binding is done. + if (!self._queue) { + self._queue = []; + self.once('listening', clearQueue); + } + self._queue.push(toEnqueue); + return; +} + + +function clearQueue() { + const queue = this._queue; + this._queue = undefined; + + // Flush the send queue. 
+ for (var i = 0; i < queue.length; i++) + queue[i](); +} + + +// valid combinations +// send(buffer, offset, length, port, address, callback) +// send(buffer, offset, length, port, address) +// send(buffer, offset, length, port) Socket.prototype.send = function(buffer, offset, length, @@ -290,18 +316,13 @@ Socket.prototype.send = function(buffer, // If the socket hasn't been bound yet, push the outbound packet onto the // send queue and send after binding is complete. if (self._bindState != BIND_STATE_BOUND) { - // If the send queue hasn't been initialized yet, do it, and install an - // event handler that flushes the send queue after binding is done. - if (!self._sendQueue) { - self._sendQueue = []; - self.once('listening', function() { - // Flush the send queue. - for (var i = 0; i < self._sendQueue.length; i++) - self.send.apply(self, self._sendQueue[i]); - self._sendQueue = undefined; - }); - } - self._sendQueue.push([buffer, offset, length, port, address, callback]); + enqueue(self, self.send.bind(self, + buffer, + offset, + length, + port, + address, + callback)); return; } @@ -347,10 +368,15 @@ function afterSend(err) { this.callback(err, this.length); } - Socket.prototype.close = function(callback) { if (typeof callback === 'function') this.on('close', callback); + + if (this._queue) { + this._queue.push(this.close.bind(this)); + return this; + } + this._healthCheck(); this._stopReceiving(); this._handle.close(); diff --git a/lib/events.js b/lib/events.js index 6cb6900267b2ec..3cc16d20a32d22 100644 --- a/lib/events.js +++ b/lib/events.js @@ -270,7 +270,7 @@ EventEmitter.prototype.once = function once(type, listener) { // emits a 'removeListener' event iff the listener was removed EventEmitter.prototype.removeListener = function removeListener(type, listener) { - var list, events, position, i; + var list, events, position, i, originalListener; if (typeof listener !== 'function') throw new TypeError('listener must be a function'); @@ -289,7 +289,7 @@ EventEmitter.prototype.removeListener = else { delete events[type]; if (events.removeListener) - this.emit('removeListener', type, listener); + this.emit('removeListener', type, list.listener || listener); } } else if (typeof list !== 'function') { position = -1; @@ -297,6 +297,7 @@ EventEmitter.prototype.removeListener = for (i = list.length; i-- > 0;) { if (list[i] === listener || (list[i].listener && list[i].listener === listener)) { + originalListener = list[i].listener; position = i; break; } @@ -318,7 +319,7 @@ EventEmitter.prototype.removeListener = } if (events.removeListener) - this.emit('removeListener', type, listener); + this.emit('removeListener', type, originalListener || listener); } return this; diff --git a/lib/internal/v8_prof_polyfill.js b/lib/internal/v8_prof_polyfill.js index 755f8f0d65d1fb..821d36bb94ff20 100644 --- a/lib/internal/v8_prof_polyfill.js +++ b/lib/internal/v8_prof_polyfill.js @@ -38,11 +38,12 @@ const os = { /^[0-9a-f]+-[0-9a-f]+$/.test(arg)) { return ''; } - } else if (process.platform === 'darwin') { - args.unshift('-c', name); - name = '/bin/sh'; } - return cp.spawnSync(name, args).stdout.toString(); + var out = cp.spawnSync(name, args).stdout.toString(); + // Auto c++filt names, but not [iItT] + if (process.platform === 'darwin' && name === 'nm') + out = macCppfiltNm(out); + return out; } }; const print = console.log; @@ -75,7 +76,7 @@ function readline() { var bytes = fs.readSync(fd, buf, 0, buf.length); line += dec.write(buf.slice(0, bytes)); if (line.length === 0) { - return false; + return ''; 
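The generalized `_queue` in `lib/dgram.js` above holds not just pending `send()` calls but also a `close()` issued while those sends are still queued, deferring it until the queue has been flushed. A sketch of the pattern this is meant to make safe; the port and address are arbitrary and no receiver needs to be listening:

```js
const dgram = require('dgram');

const socket = dgram.createSocket('udp4');
const message = Buffer.from('ping');

// send() before the socket is bound: the packet goes onto the internal queue
// and triggers an implicit bind.
socket.send(message, 0, message.length, 41234, '127.0.0.1', (err) => {
  if (err) console.error(err);
});

// close() while the queue is still pending is itself queued and runs after
// the send has been flushed (see Socket.prototype.close above).
socket.close();
```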
} } } @@ -100,3 +101,30 @@ function versionCheck() { } } } + +function macCppfiltNm(out) { + // Re-grouped copy-paste from `tickprocessor.js` + const FUNC_RE = /^([0-9a-fA-F]{8,16} [iItT] )(.*)$/gm; + var entries = out.match(FUNC_RE); + if (entries === null) + return out; + + entries = entries.map((entry) => { + return entry.replace(/^[0-9a-fA-F]{8,16} [iItT] /, '') + }); + + var filtered; + try { + filtered = cp.spawnSync('c++filt', [ '-p' , '-i' ], { + input: entries.join('\n') + }).stdout.toString(); + } catch (e) { + return out; + } + + var i = 0; + filtered = filtered.split(/\n/g); + return out.replace(FUNC_RE, (all, prefix, postfix) => { + return prefix + (filtered[i++] || postfix); + }); +} diff --git a/lib/internal/v8_prof_processor.js b/lib/internal/v8_prof_processor.js index db5f400ed8febf..bbbe27b3e82892 100644 --- a/lib/internal/v8_prof_processor.js +++ b/lib/internal/v8_prof_processor.js @@ -20,8 +20,7 @@ scriptFiles.forEach(function(s) { var tickArguments = []; if (process.platform === 'darwin') { - const nm = 'foo() { nm "$@" | (c++filt -p -i || cat) }; foo $@'; - tickArguments.push('--mac', '--nm=' + nm); + tickArguments.push('--mac'); } else if (process.platform === 'win32') { tickArguments.push('--windows'); } diff --git a/lib/net.js b/lib/net.js index a5b52353b59627..e166fadaa8a075 100644 --- a/lib/net.js +++ b/lib/net.js @@ -560,14 +560,16 @@ function onread(nread, buffer) { debug('EOF'); + // push a null to signal the end of data. + // Do it before `maybeDestroy` for correct order of events: + // `end` -> `close` + self.push(null); + if (self._readableState.length === 0) { self.readable = false; maybeDestroy(self); } - // push a null to signal the end of data. - self.push(null); - // internal end event so that we know that the actual socket // is no longer readable, and we can start the shutdown // procedure. No need to wait for all the data to be consumed. diff --git a/lib/repl.js b/lib/repl.js index 71c80e05b3cd31..284c4a381d0a3b 100644 --- a/lib/repl.js +++ b/lib/repl.js @@ -286,7 +286,7 @@ function REPLServer(prompt, // After executing the current expression, store the values of RegExp // predefined properties back in `savedRegExMatches` - for (let idx = 1; idx < savedRegExMatches.length; idx += 1) { + for (var idx = 1; idx < savedRegExMatches.length; idx += 1) { savedRegExMatches[idx] = RegExp[`$${idx}`]; } diff --git a/lib/tls.js b/lib/tls.js index 2f41dc1f328ccf..565ac1f2770e63 100644 --- a/lib/tls.js +++ b/lib/tls.js @@ -90,7 +90,7 @@ function check(hostParts, pattern, wildcards) { return false; // Check host parts from right to left first. - for (let i = hostParts.length - 1; i > 0; i -= 1) + for (var i = hostParts.length - 1; i > 0; i -= 1) if (hostParts[i] !== patternParts[i]) return false; diff --git a/lib/url.js b/lib/url.js index e75b829ddff0f2..a041b77300af9b 100644 --- a/lib/url.js +++ b/lib/url.js @@ -89,7 +89,7 @@ Url.prototype.parse = function(url, parseQueryString, slashesDenoteHost) { if (typeof url !== 'string') { throw new TypeError("Parameter 'url' must be a string, not " + typeof url); } - + var i, j, k, l; // Copy chrome, IE, opera backslash-handling behavior. 
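The `lib/events.js` change above makes the `'removeListener'` event report the original function for listeners registered with `once()`, rather than the internal wrapper that `once()` installs. A small sketch:

```js
const EventEmitter = require('events');

const ee = new EventEmitter();
function onPing() {}

ee.on('removeListener', (type, listener) => {
  // After the change, `listener` is the original onPing, not the once() wrapper.
  console.log(type, listener === onPing); // ping true
});

ee.once('ping', onPing);
ee.emit('ping'); // firing a once() listener removes it and emits 'removeListener'
```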
// Back slashes before the query string get converted to forward slashes // See: https://code.google.com/p/chromium/issues/detail?id=25916 @@ -169,7 +169,7 @@ Url.prototype.parse = function(url, parseQueryString, slashesDenoteHost) { // find the first instance of any hostEndingChars var hostEnd = -1; - for (let i = 0; i < hostEndingChars.length; i++) { + for (i = 0; i < hostEndingChars.length; i++) { const hec = rest.indexOf(hostEndingChars[i]); if (hec !== -1 && (hostEnd === -1 || hec < hostEnd)) hostEnd = hec; @@ -197,7 +197,7 @@ Url.prototype.parse = function(url, parseQueryString, slashesDenoteHost) { // the host is the remaining to the left of the first non-host char hostEnd = -1; - for (let i = 0; i < nonHostChars.length; i++) { + for (i = 0; i < nonHostChars.length; i++) { const hec = rest.indexOf(nonHostChars[i]); if (hec !== -1 && (hostEnd === -1 || hec < hostEnd)) hostEnd = hec; @@ -224,12 +224,12 @@ Url.prototype.parse = function(url, parseQueryString, slashesDenoteHost) { // validate a little. if (!ipv6Hostname) { var hostparts = this.hostname.split(/\./); - for (let i = 0, l = hostparts.length; i < l; i++) { + for (i = 0, l = hostparts.length; i < l; i++) { var part = hostparts[i]; if (!part) continue; if (!part.match(hostnamePartPattern)) { var newpart = ''; - for (let j = 0, k = part.length; j < k; j++) { + for (j = 0, k = part.length; j < k; j++) { if (part.charCodeAt(j) > 127) { // we replace non-ASCII char with a temporary placeholder // we need this to make sure size of hostname is not @@ -294,7 +294,7 @@ Url.prototype.parse = function(url, parseQueryString, slashesDenoteHost) { // First, make 100% sure that any "autoEscape" chars get // escaped, even if encodeURIComponent doesn't think they // need to be. - for (let i = 0, l = autoEscape.length; i < l; i++) { + for (i = 0, l = autoEscape.length; i < l; i++) { var ae = autoEscape[i]; if (rest.indexOf(ae) === -1) continue; diff --git a/lib/util.js b/lib/util.js index 8d9afe921f4292..6128c36a988ec1 100644 --- a/lib/util.js +++ b/lib/util.js @@ -96,7 +96,7 @@ exports.debuglog = function(set) { debugEnviron = process.env.NODE_DEBUG || ''; set = set.toUpperCase(); if (!debugs[set]) { - if (new RegExp('\\b' + set + '\\b', 'i').test(debugEnviron)) { + if (new RegExp(`\\b${set}\\b`, 'i').test(debugEnviron)) { var pid = process.pid; debugs[set] = function() { var msg = exports.format.apply(exports, arguments); @@ -181,8 +181,8 @@ function stylizeWithColor(str, styleType) { var style = inspect.styles[styleType]; if (style) { - return '\u001b[' + inspect.colors[style][0] + 'm' + str + - '\u001b[' + inspect.colors[style][1] + 'm'; + return `\u001b[${inspect.colors[style][0]}m${str}` + + `\u001b[${inspect.colors[style][1]}m`; } else { return str; } @@ -297,8 +297,8 @@ function formatValue(ctx, value, recurseTimes) { // Some type of object without properties can be shortcutted. if (keys.length === 0) { if (typeof value === 'function') { - var name = value.name ? ': ' + value.name : ''; - return ctx.stylize('[Function' + name + ']', 'special'); + return ctx.stylize(`[Function${value.name ? 
`: ${value.name}` : ''}]`, + 'special'); } if (isRegExp(value)) { return ctx.stylize(RegExp.prototype.toString.call(value), 'regexp'); @@ -312,19 +312,19 @@ function formatValue(ctx, value, recurseTimes) { // now check the `raw` value to handle boxed primitives if (typeof raw === 'string') { formatted = formatPrimitiveNoColor(ctx, raw); - return ctx.stylize('[String: ' + formatted + ']', 'string'); + return ctx.stylize(`[String: ${formatted}]`, 'string'); } if (typeof raw === 'symbol') { formatted = formatPrimitiveNoColor(ctx, raw); - return ctx.stylize('[Symbol: ' + formatted + ']', 'symbol'); + return ctx.stylize(`[Symbol: ${formatted}]`, 'symbol'); } if (typeof raw === 'number') { formatted = formatPrimitiveNoColor(ctx, raw); - return ctx.stylize('[Number: ' + formatted + ']', 'number'); + return ctx.stylize(`[Number: ${formatted}]`, 'number'); } if (typeof raw === 'boolean') { formatted = formatPrimitiveNoColor(ctx, raw); - return ctx.stylize('[Boolean: ' + formatted + ']', 'boolean'); + return ctx.stylize(`[Boolean: ${formatted}]`, 'boolean'); } } @@ -390,8 +390,7 @@ function formatValue(ctx, value, recurseTimes) { // Make functions say that they are functions if (typeof value === 'function') { - var n = value.name ? ': ' + value.name : ''; - base = ' [Function' + n + ']'; + base = ` [Function${value.name ? `: ${value.name}` : ''}]`; } // Make RegExps say that they are RegExps @@ -412,24 +411,24 @@ function formatValue(ctx, value, recurseTimes) { // Make boxed primitive Strings look like such if (typeof raw === 'string') { formatted = formatPrimitiveNoColor(ctx, raw); - base = ' ' + '[String: ' + formatted + ']'; + base = ` [String: ${formatted}]`; } // Make boxed primitive Numbers look like such if (typeof raw === 'number') { formatted = formatPrimitiveNoColor(ctx, raw); - base = ' ' + '[Number: ' + formatted + ']'; + base = ` [Number: ${formatted}]`; } // Make boxed primitive Booleans look like such if (typeof raw === 'boolean') { formatted = formatPrimitiveNoColor(ctx, raw); - base = ' ' + '[Boolean: ' + formatted + ']'; + base = ` [Boolean: ${formatted}]`; } // Add constructor name if available if (base === '' && constructor) - braces[0] = constructor.name + ' ' + braces[0]; + braces[0] = `${constructor.name} ${braces[0]}`; if (empty === true) { return braces[0] + base + braces[1]; @@ -497,7 +496,7 @@ function formatPrimitiveNoColor(ctx, value) { function formatError(value) { - return '[' + Error.prototype.toString.call(value) + ']'; + return `[${Error.prototype.toString.call(value)}]`; } @@ -609,9 +608,9 @@ function formatProperty(ctx, value, recurseTimes, visibleKeys, key, array) { } if (!hasOwnProperty(visibleKeys, key)) { if (typeof key === 'symbol') { - name = '[' + ctx.stylize(key.toString(), 'symbol') + ']'; + name = `[${ctx.stylize(key.toString(), 'symbol')}]`; } else { - name = '[' + key + ']'; + name = `[${key}]`; } } if (!str) { @@ -649,7 +648,7 @@ function formatProperty(ctx, value, recurseTimes, visibleKeys, key, array) { } } - return name + ': ' + str; + return `${name}: ${str}`; } @@ -664,13 +663,10 @@ function reduceToSingleString(output, base, braces) { // we need to force the first item to be on the next line or the // items will not line up correctly. (base === '' && braces[0].length === 1 ? 
'' : base + '\n ') + - ' ' + - output.join(',\n ') + - ' ' + - braces[1]; + ` ${output.join(',\n ')} ${braces[1]}`; } - return braces[0] + base + ' ' + output.join(', ') + ' ' + braces[1]; + return `${braces[0]}${base} ${output.join(', ')} ${braces[1]}`; } @@ -854,7 +850,7 @@ exports.puts = internalUtil.deprecate(function() { exports.debug = internalUtil.deprecate(function(x) { - process.stderr.write('DEBUG: ' + x + '\n'); + process.stderr.write(`DEBUG: ${x}\n`); }, 'util.debug is deprecated. Use console.error instead.'); @@ -905,7 +901,7 @@ exports.pump = internalUtil.deprecate(function(readStream, writeStream, cb) { exports._errnoException = function(err, syscall, original) { var errname = uv.errname(err); - var message = syscall + ' ' + errname; + var message = `${syscall} ${errname}`; if (original) message += ' ' + original; var e = new Error(message); @@ -923,13 +919,13 @@ exports._exceptionWithHostPort = function(err, additional) { var details; if (port && port > 0) { - details = address + ':' + port; + details = `${address}:${port}`; } else { details = address; } if (additional) { - details += ' - Local (' + additional + ')'; + details += ` - Local (${additional})`; } var ex = exports._errnoException(err, syscall, details); ex.address = address; diff --git a/node.gyp b/node.gyp index 341ac34dce4c26..26a9f615d028aa 100644 --- a/node.gyp +++ b/node.gyp @@ -5,6 +5,11 @@ 'node_use_lttng%': 'false', 'node_use_etw%': 'false', 'node_use_perfctr%': 'false', + 'node_use_v8_platform%': 'true', + 'node_use_bundled_v8%': 'true', + 'node_shared%': 'false', + 'force_dynamic_crt%': 0, + 'node_module_version%': 'true', 'node_has_winsdk%': 'false', 'node_shared_zlib%': 'false', 'node_shared_http_parser%': 'false', @@ -93,6 +98,20 @@ 'deps/v8/tools/SourceMap.js', 'deps/v8/tools/tickprocessor-driver.js', ], + 'conditions': [ + [ 'OS=="win" and ' + 'node_use_openssl=="true" and ' + 'node_shared_openssl=="false"', { + 'use_openssl_def': 1, + }, { + 'use_openssl_def': 0, + }], + [ 'node_shared=="true"', { + 'node_target_type%': 'shared_library', + }, { + 'node_target_type%': 'executable', + }], + ], }, 'targets': [ @@ -211,6 +230,42 @@ 'conditions': [ + [ 'node_shared=="false"', { + 'msvs_settings': { + 'VCManifestTool': { + 'EmbedManifest': 'true', + 'AdditionalManifestFiles': 'src/res/node.exe.extra.manifest' + } + }, + }, { + 'defines': [ + 'NODE_SHARED_MODE', + ], + 'conditions': [ + [ 'node_module_version!="" and OS!="win"', { + 'product_extension': '<(shlib_suffix)', + }] + ], + }], + [ 'node_use_bundled_v8=="true"', { + 'include_dirs': [ + 'deps/v8' # include/v8_platform.h + ], + + 'dependencies': [ + 'deps/v8/tools/gyp/v8.gyp:v8', + 'deps/v8/tools/gyp/v8.gyp:v8_libplatform' + ], + }], + [ 'node_use_v8_platform=="true"', { + 'defines': [ + 'NODE_USE_V8_PLATFORM=1', + ], + }, { + 'defines': [ + 'NODE_USE_V8_PLATFORM=0', + ], + }], [ 'node_enable_d8=="true"', { 'dependencies': [ 'deps/v8/src/d8.gyp:d8' ], }], @@ -283,7 +338,7 @@ ], }, 'conditions': [ - ['OS in "linux freebsd"', { + ['OS in "linux freebsd" and node_shared=="false"', { 'ldflags': [ '-Wl,--whole-archive,' '<(PRODUCT_DIR)/obj.target/deps/openssl/' @@ -291,6 +346,9 @@ '-Wl,--no-whole-archive', ], }], + ['use_openssl_def==1', { + 'sources': ['<(SHARED_INTERMEDIATE_DIR)/openssl.def'], + }], ], }], ], @@ -448,7 +506,7 @@ 'NODE_PLATFORM="sunos"', ], }], - [ 'OS=="freebsd" or OS=="linux"', { + [ '(OS=="freebsd" or OS=="linux") and node_shared=="false"', { 'ldflags': [ '-Wl,-z,noexecstack', '-Wl,--whole-archive <(V8_BASE)', 
'-Wl,--no-whole-archive' ] @@ -457,12 +515,53 @@ 'ldflags': [ '-Wl,-M,/usr/lib/ld/map.noexstk' ], }], ], - 'msvs_settings': { - 'VCManifestTool': { - 'EmbedManifest': 'true', - 'AdditionalManifestFiles': 'src/res/node.exe.extra.manifest' - } - }, + }, + { + 'target_name': 'mkssldef', + 'type': 'none', + # TODO(bnoordhuis) Make all platforms export the same list of symbols. + # Teach mkssldef.py to generate linker maps that UNIX linkers understand. + 'conditions': [ + [ 'use_openssl_def==1', { + 'variables': { + 'mkssldef_flags': [ + # Categories to export. + '-CAES,BF,BIO,DES,DH,DSA,EC,ECDH,ECDSA,ENGINE,EVP,HMAC,MD4,MD5,' + 'NEXTPROTONEG,PSK,RC2,RC4,RSA,SHA,SHA0,SHA1,SHA256,SHA512,SOCK,' + 'STDIO,TLSEXT', + # Defines. + '-DWIN32', + # Symbols to filter from the export list. + '-X^DSO', + '-X^_', + '-X^private_', + ], + }, + 'conditions': [ + ['openssl_fips!=""', { + 'variables': { 'mkssldef_flags': ['-DOPENSSL_FIPS'] }, + }], + ], + 'actions': [ + { + 'action_name': 'mkssldef', + 'inputs': [ + 'deps/openssl/openssl/util/libeay.num', + 'deps/openssl/openssl/util/ssleay.num', + ], + 'outputs': ['<(SHARED_INTERMEDIATE_DIR)/openssl.def'], + 'action': [ + 'python', + 'tools/mkssldef.py', + '<@(mkssldef_flags)', + '-o', + '<@(_outputs)', + '<@(_inputs)', + ], + }, + ], + }], + ], }, # generate ETW header and resource files { diff --git a/src/cares_wrap.cc b/src/cares_wrap.cc index 6625f4dd40fd42..a6a5149126f24e 100644 --- a/src/cares_wrap.cc +++ b/src/cares_wrap.cc @@ -48,6 +48,38 @@ using v8::String; using v8::Value; +inline const char* ToErrorCodeString(int status) { + switch (status) { +#define V(code) case ARES_##code: return #code; + V(EADDRGETNETWORKPARAMS) + V(EBADFAMILY) + V(EBADFLAGS) + V(EBADHINTS) + V(EBADNAME) + V(EBADQUERY) + V(EBADRESP) + V(EBADSTR) + V(ECANCELLED) + V(ECONNREFUSED) + V(EDESTRUCTION) + V(EFILE) + V(EFORMERR) + V(ELOADIPHLPAPI) + V(ENODATA) + V(ENOMEM) + V(ENONAME) + V(ENOTFOUND) + V(ENOTIMP) + V(ENOTINITIALIZED) + V(EOF) + V(EREFUSED) + V(ESERVFAIL) + V(ETIMEOUT) +#undef V + } + return "UNKNOWN_ARES_ERROR"; +} + class GetAddrInfoReqWrap : public ReqWrap { public: GetAddrInfoReqWrap(Environment* env, Local req_wrap_obj); @@ -91,7 +123,7 @@ static void NewQueryReqWrap(const FunctionCallbackInfo& args) { } -static int cmp_ares_tasks(const ares_task_t* a, const ares_task_t* b) { +static int cmp_ares_tasks(const node_ares_task* a, const node_ares_task* b) { if (a->sock < b->sock) return -1; if (a->sock > b->sock) @@ -100,7 +132,7 @@ static int cmp_ares_tasks(const ares_task_t* a, const ares_task_t* b) { } -RB_GENERATE_STATIC(ares_task_list, ares_task_t, node, cmp_ares_tasks) +RB_GENERATE_STATIC(node_ares_task_list, node_ares_task, node, cmp_ares_tasks) @@ -114,7 +146,7 @@ static void ares_timeout(uv_timer_t* handle) { static void ares_poll_cb(uv_poll_t* watcher, int status, int events) { - ares_task_t* task = ContainerOf(&ares_task_t::poll_watcher, watcher); + node_ares_task* task = ContainerOf(&node_ares_task::poll_watcher, watcher); Environment* env = task->env; /* Reset the idle timer */ @@ -135,15 +167,16 @@ static void ares_poll_cb(uv_poll_t* watcher, int status, int events) { static void ares_poll_close_cb(uv_handle_t* watcher) { - ares_task_t* task = ContainerOf(&ares_task_t::poll_watcher, + node_ares_task* task = ContainerOf(&node_ares_task::poll_watcher, reinterpret_cast(watcher)); free(task); } -/* Allocates and returns a new ares_task_t */ -static ares_task_t* ares_task_create(Environment* env, ares_socket_t sock) { - ares_task_t* task = 
static_cast(malloc(sizeof(*task))); +/* Allocates and returns a new node_ares_task */ +static node_ares_task* ares_task_create(Environment* env, ares_socket_t sock) { + node_ares_task* task = + static_cast(node::Malloc(sizeof(*task))); if (task == nullptr) { /* Out of memory. */ @@ -169,11 +202,11 @@ static void ares_sockstate_cb(void* data, int read, int write) { Environment* env = static_cast(data); - ares_task_t* task; + node_ares_task* task; - ares_task_t lookup_task; + node_ares_task lookup_task; lookup_task.sock = sock; - task = RB_FIND(ares_task_list, env->cares_task_list(), &lookup_task); + task = RB_FIND(node_ares_task_list, env->cares_task_list(), &lookup_task); if (read || write) { if (!task) { @@ -194,7 +227,7 @@ static void ares_sockstate_cb(void* data, return; } - RB_INSERT(ares_task_list, env->cares_task_list(), task); + RB_INSERT(node_ares_task_list, env->cares_task_list(), task); } /* This should never fail. If it fails anyway, the query will eventually */ @@ -210,7 +243,7 @@ static void ares_sockstate_cb(void* data, CHECK(task && "When an ares socket is closed we should have a handle for it"); - RB_REMOVE(ares_task_list, env->cares_task_list(), task); + RB_REMOVE(node_ares_task_list, env->cares_task_list(), task); uv_close(reinterpret_cast(&task->poll_watcher), ares_poll_close_cb); @@ -330,41 +363,8 @@ class QueryWrap : public AsyncWrap { CHECK_NE(status, ARES_SUCCESS); HandleScope handle_scope(env()->isolate()); Context::Scope context_scope(env()->context()); - Local arg; - switch (status) { -#define V(code) \ - case ARES_ ## code: \ - arg = FIXED_ONE_BYTE_STRING(env()->isolate(), #code); \ - break; - V(ENODATA) - V(EFORMERR) - V(ESERVFAIL) - V(ENOTFOUND) - V(ENOTIMP) - V(EREFUSED) - V(EBADQUERY) - V(EBADNAME) - V(EBADFAMILY) - V(EBADRESP) - V(ECONNREFUSED) - V(ETIMEOUT) - V(EOF) - V(EFILE) - V(ENOMEM) - V(EDESTRUCTION) - V(EBADSTR) - V(EBADFLAGS) - V(ENONAME) - V(EBADHINTS) - V(ENOTINITIALIZED) - V(ELOADIPHLPAPI) - V(EADDRGETNETWORKPARAMS) - V(ECANCELLED) -#undef V - default: - arg = FIXED_ONE_BYTE_STRING(env()->isolate(), "UNKNOWN_ARES_ERROR"); - break; - } + const char* code = ToErrorCodeString(status); + Local arg = OneByteString(env()->isolate(), code); MakeCallback(env()->oncomplete_string(), 1, &arg); } @@ -1270,7 +1270,8 @@ static void Initialize(Local target, Environment* env = Environment::GetCurrent(context); int r = ares_library_init(ARES_LIB_INIT_ALL); - CHECK_EQ(r, ARES_SUCCESS); + if (r != ARES_SUCCESS) + return env->ThrowError(ToErrorCodeString(r)); struct ares_options options; memset(&options, 0, sizeof(options)); @@ -1282,7 +1283,10 @@ static void Initialize(Local target, r = ares_init_options(env->cares_channel_ptr(), &options, ARES_OPT_FLAGS | ARES_OPT_SOCK_STATE_CB); - CHECK_EQ(r, ARES_SUCCESS); + if (r != ARES_SUCCESS) { + ares_library_cleanup(); + return env->ThrowError(ToErrorCodeString(r)); + } /* Initialize the timeout timer. The timer won't be started until the */ /* first socket is opened. 
*/ diff --git a/src/debug-agent.cc b/src/debug-agent.cc index e420e6e96c373d..ba613119a319be 100644 --- a/src/debug-agent.cc +++ b/src/debug-agent.cc @@ -44,6 +44,7 @@ using v8::Integer; using v8::Isolate; using v8::Local; using v8::Locker; +using v8::NewStringType; using v8::Object; using v8::String; using v8::Value; @@ -69,7 +70,7 @@ Agent::~Agent() { } -bool Agent::Start(int port, bool wait) { +bool Agent::Start(const std::string& host, int port, bool wait) { int err; if (state_ == kRunning) @@ -85,6 +86,7 @@ bool Agent::Start(int port, bool wait) { goto async_init_failed; uv_unref(reinterpret_cast(&child_signal_)); + host_ = host; port_ = port; wait_ = wait; @@ -211,6 +213,10 @@ void Agent::InitAdaptor(Environment* env) { Local api = t->GetFunction()->NewInstance(); api->SetAlignedPointerInInternalField(0, this); + api->Set(String::NewFromUtf8(isolate, "host", + NewStringType::kNormal).ToLocalChecked(), + String::NewFromUtf8(isolate, host_.data(), NewStringType::kNormal, + host_.size()).ToLocalChecked()); api->Set(String::NewFromUtf8(isolate, "port"), Integer::New(isolate, port_)); env->process_object()->Set(String::NewFromUtf8(isolate, "_debugAPI"), api); diff --git a/src/debug-agent.h b/src/debug-agent.h index a061e8b1f6df89..cbe7e7f4364fad 100644 --- a/src/debug-agent.h +++ b/src/debug-agent.h @@ -30,6 +30,7 @@ #include "v8-debug.h" #include +#include // Forward declaration to break recursive dependency chain with src/env.h. namespace node { @@ -73,7 +74,7 @@ class Agent { typedef void (*DispatchHandler)(node::Environment* env); // Start the debugger agent thread - bool Start(int port, bool wait); + bool Start(const std::string& host, int port, bool wait); // Listen for debug events void Enable(); // Stop the debugger agent @@ -112,6 +113,7 @@ class Agent { State state_; + std::string host_; int port_; bool wait_; diff --git a/src/env-inl.h b/src/env-inl.h index 6f19ff50cb536f..0a4ce55b3bc4ae 100644 --- a/src/env-inl.h +++ b/src/env-inl.h @@ -398,7 +398,7 @@ inline ares_channel* Environment::cares_channel_ptr() { return &cares_channel_; } -inline ares_task_list* Environment::cares_task_list() { +inline node_ares_task_list* Environment::cares_task_list() { return &cares_task_list_; } diff --git a/src/env.h b/src/env.h index 7b6ffc8b87c8ee..67d10b20e41673 100644 --- a/src/env.h +++ b/src/env.h @@ -259,16 +259,14 @@ namespace node { class Environment; -// TODO(bnoordhuis) Rename struct, the ares_ prefix implies it's part -// of the c-ares API while the _t suffix implies it's a typedef. 
-struct ares_task_t { +struct node_ares_task { Environment* env; ares_socket_t sock; uv_poll_t poll_watcher; - RB_ENTRY(ares_task_t) node; + RB_ENTRY(node_ares_task) node; }; -RB_HEAD(ares_task_list, ares_task_t); +RB_HEAD(node_ares_task_list, node_ares_task); class Environment { public: @@ -440,7 +438,7 @@ class Environment { inline uv_timer_t* cares_timer_handle(); inline ares_channel cares_channel(); inline ares_channel* cares_channel_ptr(); - inline ares_task_list* cares_task_list(); + inline node_ares_task_list* cares_task_list(); inline bool using_domains() const; inline void set_using_domains(bool value); @@ -542,7 +540,7 @@ class Environment { const uint64_t timer_base_; uv_timer_t cares_timer_handle_; ares_channel cares_channel_; - ares_task_list cares_task_list_; + node_ares_task_list cares_task_list_; bool using_domains_; bool printed_error_; bool trace_sync_io_; diff --git a/src/node.cc b/src/node.cc index 0501372490188c..d34225e2132dca 100644 --- a/src/node.cc +++ b/src/node.cc @@ -39,7 +39,9 @@ #include "string_bytes.h" #include "util.h" #include "uv.h" +#if NODE_USE_V8_PLATFORM #include "libplatform/libplatform.h" +#endif // NODE_USE_V8_PLATFORM #include "v8-debug.h" #include "v8-profiler.h" #include "zlib.h" @@ -56,6 +58,8 @@ #include #include #include + +#include #include #if defined(NODE_HAVE_I18N_SUPPORT) @@ -137,6 +141,7 @@ static unsigned int preload_module_count = 0; static const char** preload_modules = nullptr; static bool use_debug_agent = false; static bool debug_wait_connect = false; +static std::string debug_host; // NOLINT(runtime/string) static int debug_port = 5858; static bool prof_process = false; static bool v8_is_profiling = false; @@ -167,6 +172,30 @@ static v8::Platform* default_platform; static uv_sem_t debug_semaphore; #endif +static struct { +#if NODE_USE_V8_PLATFORM + void Initialize(int thread_pool_size) { + platform_ = v8::platform::CreateDefaultPlatform(thread_pool_size); + V8::InitializePlatform(platform_); + } + + void PumpMessageLoop(Isolate* isolate) { + v8::platform::PumpMessageLoop(platform_, isolate); + } + + void Dispose() { + delete platform_; + platform_ = nullptr; + } + + v8::Platform* platform_; +#else // !NODE_USE_V8_PLATFORM + void Initialize(int thread_pool_size) {} + void PumpMessageLoop(Isolate* isolate) {} + void Dispose() {} +#endif // !NODE_USE_V8_PLATFORM +} v8_platform; + static void PrintErrorString(const char* format, ...) { va_list ap; va_start(ap, format); @@ -949,9 +978,9 @@ void* ArrayBufferAllocator::Allocate(size_t size) { if (env_ == nullptr || !env_->array_buffer_allocator_info()->no_zero_fill() || zero_fill_all_buffers) - return calloc(size, 1); + return node::Calloc(size, 1); env_->array_buffer_allocator_info()->reset_fill_flag(); - return malloc(size); + return node::Malloc(size); } static bool DomainHasErrorHandler(const Environment* env, @@ -2544,7 +2573,7 @@ static void EnvSetter(Local property, SetEnvironmentVariableW(key_ptr, reinterpret_cast(*val)); } #endif - // Whether it worked or not, always return rval. + // Whether it worked or not, always return value. info.GetReturnValue().Set(value); } @@ -3262,20 +3291,55 @@ static bool ParseDebugOpt(const char* arg) { debug_wait_connect = true; port = arg + sizeof("--debug-brk=") - 1; } else if (!strncmp(arg, "--debug-port=", sizeof("--debug-port=") - 1)) { + // XXX(bnoordhuis) Misnomer, configures port and listen address. 
port = arg + sizeof("--debug-port=") - 1; } else { return false; } - if (port != nullptr) { - debug_port = atoi(port); - if (debug_port < 1024 || debug_port > 65535) { - fprintf(stderr, "Debug port must be in range 1024 to 65535.\n"); - PrintHelp(); - exit(12); + if (port == nullptr) { + return true; + } + + std::string* const the_host = &debug_host; + int* const the_port = &debug_port; + + // FIXME(bnoordhuis) Move IPv6 address parsing logic to lib/net.js. + // It seems reasonable to support [address]:port notation + // in net.Server#listen() and net.Socket#connect(). + const size_t port_len = strlen(port); + if (port[0] == '[' && port[port_len - 1] == ']') { + the_host->assign(port + 1, port_len - 2); + return true; + } + + const char* const colon = strrchr(port, ':'); + if (colon == nullptr) { + // Either a port number or a host name. Assume that + // if it's not all decimal digits, it's a host name. + for (size_t n = 0; port[n] != '\0'; n += 1) { + if (port[n] < '0' || port[n] > '9') { + *the_host = port; + return true; + } } + } else { + const bool skip = (colon > port && port[0] == '[' && colon[-1] == ']'); + the_host->assign(port + skip, colon - skip); } + char* endptr; + errno = 0; + const char* const digits = colon != nullptr ? colon + 1 : port; + const long result = strtol(digits, &endptr, 10); // NOLINT(runtime/int) + if (errno != 0 || *endptr != '\0' || result < 1024 || result > 65535) { + fprintf(stderr, "Debug port must be in range 1024 to 65535.\n"); + PrintHelp(); + exit(12); + } + + *the_port = static_cast(result); + return true; } @@ -3513,9 +3577,11 @@ static void StartDebug(Environment* env, bool wait) { env->debugger_agent()->set_dispatch_handler( DispatchMessagesDebugAgentCallback); - debugger_running = env->debugger_agent()->Start(debug_port, wait); + debugger_running = + env->debugger_agent()->Start(debug_host, debug_port, wait); if (debugger_running == false) { - fprintf(stderr, "Starting debugger on port %d failed\n", debug_port); + fprintf(stderr, "Starting debugger on %s:%d failed\n", + debug_host.c_str(), debug_port); fflush(stderr); return; } @@ -4226,11 +4292,11 @@ static void StartNodeInstance(void* arg) { SealHandleScope seal(isolate); bool more; do { - v8::platform::PumpMessageLoop(default_platform, isolate); + v8_platform.PumpMessageLoop(isolate); more = uv_run(env->event_loop(), UV_RUN_ONCE); if (more == false) { - v8::platform::PumpMessageLoop(default_platform, isolate); + v8_platform.PumpMessageLoop(isolate); EmitBeforeExit(env); // Emit `beforeExit` if the loop became alive either after emitting @@ -4291,8 +4357,8 @@ int Start(int argc, char** argv) { #endif const int thread_pool_size = 4; - default_platform = v8::platform::CreateDefaultPlatform(thread_pool_size); - V8::InitializePlatform(default_platform); + + v8_platform.Initialize(thread_pool_size); V8::Initialize(); int exit_code = 1; @@ -4309,8 +4375,7 @@ int Start(int argc, char** argv) { } V8::Dispose(); - delete default_platform; - default_platform = nullptr; + v8_platform.Dispose(); delete[] exec_argv; exec_argv = nullptr; diff --git a/src/node.h b/src/node.h index f70b5f8e784382..4d3293eab6b3c3 100644 --- a/src/node.h +++ b/src/node.h @@ -396,17 +396,23 @@ extern "C" NODE_EXTERN void node_module_register(void* mod); # define NODE_MODULE_EXPORT __attribute__((visibility("default"))) #endif +#ifdef NODE_SHARED_MODE +# define NODE_CTOR_PREFIX +#else +# define NODE_CTOR_PREFIX static +#endif + #if defined(_MSC_VER) #pragma section(".CRT$XCU", read) #define NODE_C_CTOR(fn) \ - static void __cdecl 
fn(void); \ + NODE_CTOR_PREFIX void __cdecl fn(void); \ __declspec(dllexport, allocate(".CRT$XCU")) \ void (__cdecl*fn ## _)(void) = fn; \ - static void __cdecl fn(void) + NODE_CTOR_PREFIX void __cdecl fn(void) #else #define NODE_C_CTOR(fn) \ - static void fn(void) __attribute__((constructor)); \ - static void fn(void) + NODE_CTOR_PREFIX void fn(void) __attribute__((constructor)); \ + NODE_CTOR_PREFIX void fn(void) #endif #define NODE_MODULE_X(modname, regfunc, priv, flags) \ diff --git a/src/node_buffer.cc b/src/node_buffer.cc index 877fdc0a551579..11317328a6b549 100644 --- a/src/node_buffer.cc +++ b/src/node_buffer.cc @@ -49,7 +49,7 @@ size_t length = end - start; #define BUFFER_MALLOC(length) \ - zero_fill_all_buffers ? calloc(length, 1) : malloc(length) + zero_fill_all_buffers ? node::Calloc(length, 1) : node::Malloc(length) namespace node { @@ -247,10 +247,6 @@ MaybeLocal New(Isolate* isolate, size_t actual = 0; char* data = nullptr; - // malloc(0) and realloc(ptr, 0) have implementation-defined behavior in - // that the standard allows them to either return a unique pointer or a - // nullptr for zero-sized allocation requests. Normalize by always using - // a nullptr. if (length > 0) { data = static_cast(BUFFER_MALLOC(length)); @@ -264,7 +260,7 @@ MaybeLocal New(Isolate* isolate, free(data); data = nullptr; } else if (actual < length) { - data = static_cast(realloc(data, actual)); + data = static_cast(node::Realloc(data, actual)); CHECK_NE(data, nullptr); } } @@ -343,7 +339,7 @@ MaybeLocal Copy(Environment* env, const char* data, size_t length) { void* new_data; if (length > 0) { CHECK_NE(data, nullptr); - new_data = malloc(length); + new_data = node::Malloc(length); if (new_data == nullptr) return Local(); memcpy(new_data, data, length); @@ -931,7 +927,7 @@ void IndexOfString(const FunctionCallbackInfo& args) { needle_length, offset); } else if (enc == BINARY) { - uint8_t* needle_data = static_cast(malloc(needle_length)); + uint8_t* needle_data = static_cast(node::Malloc(needle_length)); if (needle_data == nullptr) { return args.GetReturnValue().Set(-1); } diff --git a/src/node_crypto.cc b/src/node_crypto.cc index b3169caa2b884d..c6414a4ba82f8d 100644 --- a/src/node_crypto.cc +++ b/src/node_crypto.cc @@ -2090,7 +2090,7 @@ int SSLWrap::TLSExtStatusCallback(SSL* s, void* arg) { size_t len = Buffer::Length(obj); // OpenSSL takes control of the pointer after accepting it - char* data = reinterpret_cast(malloc(len)); + char* data = reinterpret_cast(node::Malloc(len)); CHECK_NE(data, nullptr); memcpy(data, resp, len); @@ -3068,11 +3068,10 @@ void CipherBase::InitIv(const char* cipher_type, return env()->ThrowError("Unknown cipher"); } - /* OpenSSL versions up to 0.9.8l failed to return the correct - iv_length (0) for ECB ciphers */ - if (EVP_CIPHER_iv_length(cipher_) != iv_len && - !(EVP_CIPHER_mode(cipher_) == EVP_CIPH_ECB_MODE && iv_len == 0) && - !(EVP_CIPHER_mode(cipher_) == EVP_CIPH_GCM_MODE) && iv_len > 0) { + const int expected_iv_len = EVP_CIPHER_iv_length(cipher_); + const bool is_gcm_mode = (EVP_CIPH_GCM_MODE == EVP_CIPHER_mode(cipher_)); + + if (is_gcm_mode == false && iv_len != expected_iv_len) { return env()->ThrowError("Invalid IV length"); } @@ -3080,13 +3079,10 @@ void CipherBase::InitIv(const char* cipher_type, const bool encrypt = (kind_ == kCipher); EVP_CipherInit_ex(&ctx_, cipher_, nullptr, nullptr, nullptr, encrypt); - /* Set IV length. Only required if GCM cipher and IV is not default iv. 
*/ - if (EVP_CIPHER_mode(cipher_) == EVP_CIPH_GCM_MODE && - iv_len != EVP_CIPHER_iv_length(cipher_)) { - if (!EVP_CIPHER_CTX_ctrl(&ctx_, EVP_CTRL_GCM_SET_IVLEN, iv_len, nullptr)) { - EVP_CIPHER_CTX_cleanup(&ctx_); - return env()->ThrowError("Invalid IV length"); - } + if (is_gcm_mode && + !EVP_CIPHER_CTX_ctrl(&ctx_, EVP_CTRL_GCM_SET_IVLEN, iv_len, nullptr)) { + EVP_CIPHER_CTX_cleanup(&ctx_); + return env()->ThrowError("Invalid IV length"); } if (!EVP_CIPHER_CTX_set_key_length(&ctx_, key_len)) { @@ -3139,7 +3135,7 @@ bool CipherBase::GetAuthTag(char** out, unsigned int* out_len) const { if (initialised_ || kind_ != kCipher || !auth_tag_) return false; *out_len = auth_tag_len_; - *out = static_cast(malloc(auth_tag_len_)); + *out = static_cast(node::Malloc(auth_tag_len_)); CHECK_NE(*out, nullptr); memcpy(*out, auth_tag_, auth_tag_len_); return true; @@ -4694,7 +4690,7 @@ void ECDH::ComputeSecret(const FunctionCallbackInfo& args) { // NOTE: field_size is in bits int field_size = EC_GROUP_get_degree(ecdh->group_); size_t out_len = (field_size + 7) / 8; - char* out = static_cast(malloc(out_len)); + char* out = static_cast(node::Malloc(out_len)); CHECK_NE(out, nullptr); int r = ECDH_compute_key(out, out_len, pub, ecdh->key_, nullptr); @@ -4733,7 +4729,7 @@ void ECDH::GetPublicKey(const FunctionCallbackInfo& args) { if (size == 0) return env->ThrowError("Failed to get public key length"); - unsigned char* out = static_cast(malloc(size)); + unsigned char* out = static_cast(node::Malloc(size)); CHECK_NE(out, nullptr); int r = EC_POINT_point2oct(ecdh->group_, pub, form, out, size, nullptr); @@ -4762,7 +4758,7 @@ void ECDH::GetPrivateKey(const FunctionCallbackInfo& args) { return env->ThrowError("Failed to get ECDH private key"); int size = BN_num_bytes(b); - unsigned char* out = static_cast(malloc(size)); + unsigned char* out = static_cast(node::Malloc(size)); CHECK_NE(out, nullptr); if (size != BN_bn2bin(b, out)) { @@ -4839,7 +4835,7 @@ class PBKDF2Request : public AsyncWrap { saltlen_(saltlen), salt_(salt), keylen_(keylen), - key_(static_cast(malloc(keylen))), + key_(static_cast(node::Malloc(keylen))), iter_(iter) { if (key() == nullptr) FatalError("node::PBKDF2Request()", "Out of Memory"); @@ -5002,7 +4998,7 @@ void PBKDF2(const FunctionCallbackInfo& args) { THROW_AND_RETURN_IF_NOT_BUFFER(args[1]); - pass = static_cast(malloc(passlen)); + pass = static_cast(node::Malloc(passlen)); if (pass == nullptr) { FatalError("node::PBKDF2()", "Out of Memory"); } @@ -5014,7 +5010,7 @@ void PBKDF2(const FunctionCallbackInfo& args) { goto err; } - salt = static_cast(malloc(saltlen)); + salt = static_cast(node::Malloc(saltlen)); if (salt == nullptr) { FatalError("node::PBKDF2()", "Out of Memory"); } @@ -5107,7 +5103,7 @@ class RandomBytesRequest : public AsyncWrap { : AsyncWrap(env, object, AsyncWrap::PROVIDER_CRYPTO), error_(0), size_(size), - data_(static_cast(malloc(size))) { + data_(static_cast(node::Malloc(size))) { if (data() == nullptr) FatalError("node::RandomBytesRequest()", "Out of Memory"); Wrap(object, this); @@ -5336,7 +5332,7 @@ void GetCurves(const FunctionCallbackInfo& args) { if (num_curves) { alloc_size = sizeof(*curves) * num_curves; - curves = static_cast(malloc(alloc_size)); + curves = static_cast(node::Malloc(alloc_size)); CHECK_NE(curves, nullptr); diff --git a/src/node_internals.h b/src/node_internals.h index 62bf1c463831a7..f7ede94d88e021 100644 --- a/src/node_internals.h +++ b/src/node_internals.h @@ -216,7 +216,8 @@ class ArrayBufferAllocator : public v8::ArrayBuffer::Allocator { 
inline void set_env(Environment* env) { env_ = env; } virtual void* Allocate(size_t size); // Defined in src/node.cc - virtual void* AllocateUninitialized(size_t size) { return malloc(size); } + virtual void* AllocateUninitialized(size_t size) + { return node::Malloc(size); } virtual void Free(void* data, size_t) { free(data); } private: diff --git a/src/node_os.cc b/src/node_os.cc index 92f53a9c407fae..8b8c7b62c8518c 100644 --- a/src/node_os.cc +++ b/src/node_os.cc @@ -16,7 +16,7 @@ # include // gethostname, sysconf # include // MAXHOSTNAMELEN on Linux and the BSDs. # include -#endif // __MINGW32__ +#endif // __POSIX__ // Add Windows fallback. #ifndef MAXHOSTNAMELEN diff --git a/src/node_version.h b/src/node_version.h index 1437d69bddfa77..2d1cc99cb3cac4 100644 --- a/src/node_version.h +++ b/src/node_version.h @@ -2,13 +2,13 @@ #define SRC_NODE_VERSION_H_ #define NODE_MAJOR_VERSION 4 -#define NODE_MINOR_VERSION 6 -#define NODE_PATCH_VERSION 3 +#define NODE_MINOR_VERSION 7 +#define NODE_PATCH_VERSION 0 #define NODE_VERSION_IS_LTS 1 #define NODE_VERSION_LTS_CODENAME "Argon" -#define NODE_VERSION_IS_RELEASE 0 +#define NODE_VERSION_IS_RELEASE 1 #ifndef NODE_STRINGIFY #define NODE_STRINGIFY(n) NODE_STRINGIFY_HELPER(n) diff --git a/src/node_zlib.cc b/src/node_zlib.cc index 0b8f1e06f513b2..304fbf72fed44f 100644 --- a/src/node_zlib.cc +++ b/src/node_zlib.cc @@ -238,8 +238,11 @@ class ZCtx : public AsyncWrap { case INFLATERAW: ctx->err_ = inflate(&ctx->strm_, ctx->flush_); - // If data was encoded with dictionary - if (ctx->err_ == Z_NEED_DICT && ctx->dictionary_ != nullptr) { + // If data was encoded with dictionary (INFLATERAW will have it set in + // SetDictionary, don't repeat that here) + if (ctx->mode_ != INFLATERAW && + ctx->err_ == Z_NEED_DICT && + ctx->dictionary_ != nullptr) { // Load it ctx->err_ = inflateSetDictionary(&ctx->strm_, ctx->dictionary_, @@ -491,6 +494,13 @@ class ZCtx : public AsyncWrap { ctx->dictionary_, ctx->dictionary_len_); break; + case INFLATERAW: + // The other inflate cases will have the dictionary set when inflate() + // returns Z_NEED_DICT in Process() + ctx->err_ = inflateSetDictionary(&ctx->strm_, + ctx->dictionary_, + ctx->dictionary_len_); + break; default: break; } diff --git a/src/stream_wrap.cc b/src/stream_wrap.cc index 56012e67a55144..602a3642cf7ddb 100644 --- a/src/stream_wrap.cc +++ b/src/stream_wrap.cc @@ -154,7 +154,7 @@ void StreamWrap::OnAlloc(uv_handle_t* handle, void StreamWrap::OnAllocImpl(size_t size, uv_buf_t* buf, void* ctx) { - buf->base = static_cast(malloc(size)); + buf->base = static_cast(node::Malloc(size)); buf->len = size; if (buf->base == nullptr && size > 0) { @@ -210,7 +210,7 @@ void StreamWrap::OnReadImpl(ssize_t nread, return; } - char* base = static_cast(realloc(buf->base, nread)); + char* base = static_cast(node::Realloc(buf->base, nread)); CHECK_LE(static_cast(nread), buf->len); if (pending == UV_TCP) { diff --git a/src/string_bytes.cc b/src/string_bytes.cc index 8b993f57466f78..a650ac0b00452e 100644 --- a/src/string_bytes.cc +++ b/src/string_bytes.cc @@ -54,7 +54,7 @@ class ExternString: public ResourceType { return scope.Escape(String::Empty(isolate)); TypeName* new_data = - static_cast(malloc(length * sizeof(*new_data))); + static_cast(node::Malloc(length * sizeof(*new_data))); if (new_data == nullptr) { return Local(); } @@ -784,7 +784,7 @@ Local StringBytes::Encode(Isolate* isolate, case ASCII: if (contains_non_ascii(buf, buflen)) { - char* out = static_cast(malloc(buflen)); + char* out = static_cast(node::Malloc(buflen)); 
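The `INFLATERAW` branch added to `SetDictionary` in `src/node_zlib.cc` above means a preset dictionary now works for raw (headerless) streams as well. A minimal sketch using the synchronous APIs; the sample strings are arbitrary:

```js
const zlib = require('zlib');
const assert = require('assert');

const dictionary = Buffer.from('hello world');
const input = Buffer.from('hello world, hello world');

// Compress and decompress a raw stream with the same preset dictionary.
const compressed = zlib.deflateRawSync(input, { dictionary });
const restored = zlib.inflateRawSync(compressed, { dictionary });

assert(restored.equals(input));
```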
if (out == nullptr) { return Local(); } @@ -819,7 +819,7 @@ Local StringBytes::Encode(Isolate* isolate, case BASE64: { size_t dlen = base64_encoded_size(buflen); - char* dst = static_cast(malloc(dlen)); + char* dst = static_cast(node::Malloc(dlen)); if (dst == nullptr) { return Local(); } @@ -838,7 +838,7 @@ Local StringBytes::Encode(Isolate* isolate, case HEX: { size_t dlen = buflen * 2; - char* dst = static_cast(malloc(dlen)); + char* dst = static_cast(node::Malloc(dlen)); if (dst == nullptr) { return Local(); } diff --git a/src/tls_wrap.cc b/src/tls_wrap.cc index 116a379337ef57..dd1b0e3b5f340f 100644 --- a/src/tls_wrap.cc +++ b/src/tls_wrap.cc @@ -662,7 +662,7 @@ void TLSWrap::OnReadImpl(ssize_t nread, void TLSWrap::OnAllocSelf(size_t suggested_size, uv_buf_t* buf, void* ctx) { - buf->base = static_cast(malloc(suggested_size)); + buf->base = static_cast(node::Malloc(suggested_size)); CHECK_NE(buf->base, nullptr); buf->len = suggested_size; } diff --git a/src/udp_wrap.cc b/src/udp_wrap.cc index 57238534c4a30e..f64788e8e48891 100644 --- a/src/udp_wrap.cc +++ b/src/udp_wrap.cc @@ -342,7 +342,6 @@ void UDPWrap::RecvStop(const FunctionCallbackInfo& args) { } -// TODO(bnoordhuis) share with StreamWrap::AfterWrite() in stream_wrap.cc void UDPWrap::OnSend(uv_udp_send_t* req, int status) { SendWrap* req_wrap = static_cast(req->data); if (req_wrap->have_callback()) { @@ -359,7 +358,7 @@ void UDPWrap::OnSend(uv_udp_send_t* req, int status) { void UDPWrap::OnAlloc(uv_handle_t* handle, size_t suggested_size, uv_buf_t* buf) { - buf->base = static_cast(malloc(suggested_size)); + buf->base = static_cast(node::Malloc(suggested_size)); buf->len = suggested_size; if (buf->base == nullptr && suggested_size > 0) { @@ -401,7 +400,7 @@ void UDPWrap::OnRecv(uv_udp_t* handle, return; } - char* base = static_cast(realloc(buf->base, nread)); + char* base = static_cast(node::Realloc(buf->base, nread)); argv[2] = Buffer::New(env, base, nread).ToLocalChecked(); argv[3] = AddressToJS(env, addr); wrap->MakeCallback(env->onmessage_string(), arraysize(argv), argv); diff --git a/src/util-inl.h b/src/util-inl.h index 7051659a5e0e6a..834771f5ee996d 100644 --- a/src/util-inl.h +++ b/src/util-inl.h @@ -217,6 +217,34 @@ bool StringEqualNoCase(const char* a, const char* b) { return false; } +// These should be used in our code as opposed to the native +// versions as they abstract out some platform and or +// compiler version specific functionality. +// malloc(0) and realloc(ptr, 0) have implementation-defined behavior in +// that the standard allows them to either return a unique pointer or a +// nullptr for zero-sized allocation requests. Normalize by always using +// a nullptr. +void* Realloc(void* pointer, size_t size) { + if (size == 0) { + free(pointer); + return nullptr; + } + return realloc(pointer, size); +} + +// As per spec realloc behaves like malloc if passed nullptr. +void* Malloc(size_t size) { + if (size == 0) size = 1; + return Realloc(nullptr, size); +} + +void* Calloc(size_t n, size_t size) { + if (n == 0) n = 1; + if (size == 0) size = 1; + CHECK_GE(n * size, n); // Overflow guard. 
+ return calloc(n, size); +} + } // namespace node #endif // SRC_UTIL_INL_H_ diff --git a/src/util.h b/src/util.h index e5de6f2207e3b0..f96fb77cfafbea 100644 --- a/src/util.h +++ b/src/util.h @@ -16,6 +16,17 @@ namespace node { +// These should be used in our code as opposed to the native +// versions as they abstract out some platform and or +// compiler version specific functionality +// malloc(0) and realloc(ptr, 0) have implementation-defined behavior in +// that the standard allows them to either return a unique pointer or a +// nullptr for zero-sized allocation requests. Normalize by always using +// a nullptr. +inline void* Realloc(void* pointer, size_t size); +inline void* Malloc(size_t size); +inline void* Calloc(size_t n, size_t size); + #ifdef __APPLE__ template using remove_reference = std::tr1::remove_reference; #else @@ -250,7 +261,7 @@ class MaybeStackBuffer { // Guard against overflow. CHECK_LE(storage, sizeof(T) * storage); - buf_ = static_cast(malloc(sizeof(T) * storage)); + buf_ = static_cast(Malloc(sizeof(T) * storage)); CHECK_NE(buf_, nullptr); } diff --git a/test/addons/openssl-binding/binding.cc b/test/addons/openssl-binding/binding.cc new file mode 100644 index 00000000000000..59819cd33d2a38 --- /dev/null +++ b/test/addons/openssl-binding/binding.cc @@ -0,0 +1,35 @@ +#include "node.h" +#include "../../../src/util.h" +#include "../../../src/util-inl.h" + +#include +#include + +namespace { + +inline void RandomBytes(const v8::FunctionCallbackInfo& info) { + assert(info[0]->IsArrayBufferView()); + auto view = info[0].As(); + auto byte_offset = view->ByteOffset(); + auto byte_length = view->ByteLength(); + assert(view->HasBuffer()); + auto buffer = view->Buffer(); + auto contents = buffer->GetContents(); + auto data = static_cast(contents.Data()) + byte_offset; + assert(RAND_poll()); + auto rval = RAND_bytes(data, static_cast(byte_length)); + info.GetReturnValue().Set(rval > 0); +} + +inline void Initialize(v8::Local exports, + v8::Local module, + v8::Local context) { + auto isolate = context->GetIsolate(); + auto key = v8::String::NewFromUtf8(isolate, "randomBytes"); + auto value = v8::FunctionTemplate::New(isolate, RandomBytes)->GetFunction(); + assert(exports->Set(context, key, value).IsJust()); +} + +} // anonymous namespace + +NODE_MODULE_CONTEXT_AWARE(binding, Initialize) diff --git a/test/addons/openssl-binding/binding.gyp b/test/addons/openssl-binding/binding.gyp new file mode 100644 index 00000000000000..672f84bb860a9d --- /dev/null +++ b/test/addons/openssl-binding/binding.gyp @@ -0,0 +1,9 @@ +{ + 'targets': [ + { + 'target_name': 'binding', + 'sources': ['binding.cc'], + 'include_dirs': ['../../../deps/openssl/openssl/include'], + }, + ] +} diff --git a/test/addons/openssl-binding/test.js b/test/addons/openssl-binding/test.js new file mode 100644 index 00000000000000..aa515bac9a5c45 --- /dev/null +++ b/test/addons/openssl-binding/test.js @@ -0,0 +1,8 @@ +'use strict'; + +require('../../common'); +const assert = require('assert'); +const binding = require('./build/Release/binding'); +const bytes = new Uint8Array(1024); +assert(binding.randomBytes(bytes)); +assert(bytes.reduce((v, a) => v + a) > 0); diff --git a/test/cctest/util.cc b/test/cctest/util.cc index 37133aca562b72..1f1584fa72124a 100644 --- a/test/cctest/util.cc +++ b/test/cctest/util.cc @@ -74,3 +74,17 @@ TEST(UtilTest, ToLower) { EXPECT_EQ('a', ToLower('a')); EXPECT_EQ('a', ToLower('A')); } + +TEST(UtilTest, Malloc) { + using node::Malloc; + EXPECT_NE(nullptr, Malloc(0)); + EXPECT_NE(nullptr, 
Malloc(1)); +} + +TEST(UtilTest, Calloc) { + using node::Calloc; + EXPECT_NE(nullptr, Calloc(0, 0)); + EXPECT_NE(nullptr, Calloc(1, 0)); + EXPECT_NE(nullptr, Calloc(0, 1)); + EXPECT_NE(nullptr, Calloc(1, 1)); +} \ No newline at end of file diff --git a/test/parallel/parallel.status b/test/parallel/parallel.status index fc4d131b72f874..092fd9dafbc81c 100644 --- a/test/parallel/parallel.status +++ b/test/parallel/parallel.status @@ -35,6 +35,3 @@ test-regress-GH-1899 : FAIL, PASS # localIPv6Hosts list from test/common.js. test-https-connect-address-family : PASS,FLAKY test-tls-connect-address-family : PASS,FLAKY - -#covered by https://github.com/nodejs/node/issues/8271 -test-child-process-fork-dgram : PASS, FLAKY diff --git a/test/parallel/test-async-wrap-check-providers.js b/test/parallel/test-async-wrap-check-providers.js index 3ca0274399bea2..71fd93b77195e0 100644 --- a/test/parallel/test-async-wrap-check-providers.js +++ b/test/parallel/test-async-wrap-check-providers.js @@ -31,7 +31,7 @@ if (common.isAix) { } function init(id, provider) { - keyList = keyList.filter((e) => e != pkeys[provider]); + keyList = keyList.filter((e) => e !== pkeys[provider]); } function noop() { } @@ -114,6 +114,6 @@ process.on('exit', function() { if (keyList.length !== 0) { process._rawDebug('Not all keys have been used:'); process._rawDebug(keyList); - assert.equal(keyList.length, 0); + assert.strictEqual(keyList.length, 0); } }); diff --git a/test/parallel/test-child-process-fork-dgram.js b/test/parallel/test-child-process-fork-dgram.js index 1edd2b85cac3d6..8bdf006a743e0d 100644 --- a/test/parallel/test-child-process-fork-dgram.js +++ b/test/parallel/test-child-process-fork-dgram.js @@ -4,101 +4,86 @@ * sending a fd representing a UDP socket to the child and sending messages * to this endpoint, these messages are distributed to the parent and the * child process. - * - * Because it's not really possible to predict how the messages will be - * distributed among the parent and the child processes, we keep sending - * messages until both the parent and the child received at least one - * message. The worst case scenario is when either one never receives - * a message. In this case the test runner will timeout after 60 secs - * and the test will fail. 
*/ -var dgram = require('dgram'); -var fork = require('child_process').fork; -var assert = require('assert'); -var common = require('../common'); +const common = require('../common'); +const dgram = require('dgram'); +const fork = require('child_process').fork; +const assert = require('assert'); if (common.isWindows) { - common.skip('Sending dgram sockets to child processes is ' + - 'not supported'); + common.skip('Sending dgram sockets to child processes is not supported'); return; } -var server; if (process.argv[2] === 'child') { - process.on('message', function removeMe(msg, clusterServer) { - if (msg === 'server') { - server = clusterServer; - - server.on('message', function() { - process.send('gotMessage'); - }); - - } else if (msg === 'stop') { - server.close(); - process.removeListener('message', removeMe); - } + let childServer; + + process.once('message', function(msg, clusterServer) { + childServer = clusterServer; + + childServer.once('message', function() { + process.send('gotMessage'); + childServer.close(); + }); + + process.send('handleReceived'); }); } else { - server = dgram.createSocket('udp4'); - var client = dgram.createSocket('udp4'); - var child = fork(__filename, ['child']); + const parentServer = dgram.createSocket('udp4'); + const client = dgram.createSocket('udp4'); + const child = fork(__filename, ['child']); - var msg = new Buffer('Some bytes'); + const msg = Buffer.from('Some bytes'); var childGotMessage = false; var parentGotMessage = false; - server.on('message', function(msg, rinfo) { + parentServer.once('message', function(msg, rinfo) { parentGotMessage = true; + parentServer.close(); }); - server.on('listening', function() { - child.send('server', server); + parentServer.on('listening', function() { + child.send('server', parentServer); - child.once('message', function(msg) { + child.on('message', function(msg) { if (msg === 'gotMessage') { childGotMessage = true; + } else if (msg === 'handleReceived') { + sendMessages(); } }); - - sendMessages(); }); - var sendMessages = function() { - var timer = setInterval(function() { - client.send( - msg, - 0, - msg.length, - server.address().port, - '127.0.0.1', - function(err) { - if (err) throw err; - } - ); + const sendMessages = function() { + const serverPort = parentServer.address().port; + const timer = setInterval(function() { /* * Both the parent and the child got at least one message, * test passed, clean up everything. */ if (parentGotMessage && childGotMessage) { clearInterval(timer); - shutdown(); + client.close(); + } else { + client.send( + msg, + 0, + msg.length, + serverPort, + '127.0.0.1', + function(err) { + if (err) throw err; + } + ); } - }, 1); }; - var shutdown = function() { - child.send('stop'); - - server.close(); - client.close(); - }; - - server.bind(0, '127.0.0.1'); + parentServer.bind(0, '127.0.0.1'); process.once('exit', function() { assert(parentGotMessage); diff --git a/test/parallel/test-child-process-fork-net2.js b/test/parallel/test-child-process-fork-net2.js index 454769b875e181..ed977aaaf416e5 100644 --- a/test/parallel/test-child-process-fork-net2.js +++ b/test/parallel/test-child-process-fork-net2.js @@ -109,10 +109,6 @@ if (process.argv[2] === 'child') { console.error('[m] CLIENT: close event'); disconnected += 1; }); - // XXX This resume() should be unnecessary. - // a stream high water mark should be enough to keep - // consuming the input.
- client.resume(); } }); diff --git a/test/parallel/test-child-process-fork-regr-gh-2847.js b/test/parallel/test-child-process-fork-regr-gh-2847.js index dba33259403cff..1ba3fc0ec743e2 100644 --- a/test/parallel/test-child-process-fork-regr-gh-2847.js +++ b/test/parallel/test-child-process-fork-regr-gh-2847.js @@ -49,14 +49,19 @@ var server = net.createServer(function(s) { server.close(); })); - send(); - send(function(err) { - // Ignore errors when sending the second handle because the worker - // may already have exited. - if (err) { - if (err.code !== 'ECONNREFUSED') { - throw err; - } - } + worker.on('online', function() { + send(function(err) { + assert.ifError(err); + send(function(err) { + // Ignore errors when sending the second handle because the worker + // may already have exited. + if (err) { + if ((err.message !== 'channel closed') && + (err.code !== 'ECONNREFUSED')) { + throw err; + } + } + }); + }); }); }); diff --git a/test/parallel/test-child-process-spawnsync-kill-signal.js b/test/parallel/test-child-process-spawnsync-kill-signal.js new file mode 100644 index 00000000000000..874a3d5581c670 --- /dev/null +++ b/test/parallel/test-child-process-spawnsync-kill-signal.js @@ -0,0 +1,51 @@ +'use strict'; +const common = require('../common'); +const assert = require('assert'); +const cp = require('child_process'); + +if (process.argv[2] === 'child') { + setInterval(() => {}, 1000); +} else { + const exitCode = common.isWindows ? 1 : 0; + const SIGKILL = process.binding('constants').SIGKILL; + + function spawn(killSignal) { + const child = cp.spawnSync(process.execPath, + [__filename, 'child'], + {killSignal, timeout: 100}); + + assert.strictEqual(child.status, exitCode); + assert.strictEqual(child.error.code, 'ETIMEDOUT'); + return child; + } + + // Verify that an error is thrown for unknown signals. + assert.throws(() => { + spawn('SIG_NOT_A_REAL_SIGNAL'); + }, /Error: Unknown signal: SIG_NOT_A_REAL_SIGNAL/); + + // Verify that the default kill signal is SIGTERM. + { + const child = spawn(); + + assert.strictEqual(child.signal, 'SIGTERM'); + assert.strictEqual(child.options.killSignal, undefined); + } + + // Verify that a string signal name is handled properly. + { + const child = spawn('SIGKILL'); + + assert.strictEqual(child.signal, 'SIGKILL'); + assert.strictEqual(child.options.killSignal, SIGKILL); + } + + // Verify that a numeric signal is handled properly. + { + const child = spawn(SIGKILL); + + assert.strictEqual(typeof SIGKILL, 'number'); + assert.strictEqual(child.signal, 'SIGKILL'); + assert.strictEqual(child.options.killSignal, SIGKILL); + } +} diff --git a/test/parallel/test-cluster-disconnect-handles.js b/test/parallel/test-cluster-disconnect-handles.js index 7db4621d5572b0..680e316cf0ba92 100644 --- a/test/parallel/test-cluster-disconnect-handles.js +++ b/test/parallel/test-cluster-disconnect-handles.js @@ -25,11 +25,8 @@ cluster.schedulingPolicy = cluster.SCHED_RR; if (cluster.isMaster) { let isKilling = false; const handles = require('internal/cluster').handles; - // FIXME(bnoordhuis) lib/cluster.js scans the execArgv arguments for - // debugger flags and renumbers any port numbers it sees starting - // from the default port 5858. Add a '.' that circumvents the - // scanner but is ignored by atoi(3). Heinous hack. - cluster.setupMaster({ execArgv: [`--debug=${common.PORT}.`] }); + const address = common.hasIPv6 ? 
'[::1]' : common.localhostIPv4; + cluster.setupMaster({ execArgv: [`--debug=${address}:${common.PORT}`] }); const worker = cluster.fork(); worker.once('exit', common.mustCall((code, signal) => { assert.strictEqual(code, 0, 'worker did not exit normally'); diff --git a/test/parallel/test-cluster-worker-init.js b/test/parallel/test-cluster-worker-init.js index 41f73e1255a7c5..3b82866d1b14eb 100644 --- a/test/parallel/test-cluster-worker-init.js +++ b/test/parallel/test-cluster-worker-init.js @@ -3,30 +3,25 @@ // verifies that, when a child process is forked, the cluster.worker // object can receive messages as expected -require('../common'); -var assert = require('assert'); -var cluster = require('cluster'); -var msg = 'foo'; +const common = require('../common'); +const assert = require('assert'); +const cluster = require('cluster'); +const msg = 'foo'; if (cluster.isMaster) { - var worker = cluster.fork(); - var timer = setTimeout(function() { - assert(false, 'message not received'); - }, 5000); + const worker = cluster.fork(); - timer.unref(); - - worker.on('message', function(message) { - assert(message, 'did not receive expected message'); + worker.on('message', common.mustCall((message) => { + assert.strictEqual(message, true, 'did not receive expected message'); worker.disconnect(); - }); + })); - worker.on('online', function() { + worker.on('online', () => { worker.send(msg); }); } else { // GH #7998 - cluster.worker.on('message', function(message) { + cluster.worker.on('message', (message) => { process.send(message === msg); }); } diff --git a/test/parallel/test-crypto-authenticated.js b/test/parallel/test-crypto-authenticated.js index b816d8ada3db9c..19643a9d80f2f3 100644 --- a/test/parallel/test-crypto-authenticated.js +++ b/test/parallel/test-crypto-authenticated.js @@ -307,10 +307,10 @@ const TEST_CASES = [ tag: 'a44a8266ee1c8eb0c8b5d4cf5ae9f19a', tampered: false }, ]; -var ciphers = crypto.getCiphers(); +const ciphers = crypto.getCiphers(); -for (var i in TEST_CASES) { - var test = TEST_CASES[i]; +for (const i in TEST_CASES) { + const test = TEST_CASES[i]; if (ciphers.indexOf(test.algo) === -1) { common.skip('unsupported ' + test.algo + ' test'); @@ -452,3 +452,15 @@ for (var i in TEST_CASES) { }, /Invalid IV length/); })(); } + +// Non-authenticating mode: +(function() { + const encrypt = + crypto.createCipheriv('aes-128-cbc', + 'ipxp9a6i1Mb4USb4', + '6fKjEjR3Vl30EUYC'); + encrypt.update('blah', 'ascii'); + encrypt.final(); + assert.throws(() => encrypt.getAuthTag(), / state/); + assert.throws(() => encrypt.setAAD(Buffer.from('123', 'ascii')), / state/); +})(); diff --git a/test/parallel/test-crypto-cipheriv-decipheriv.js b/test/parallel/test-crypto-cipheriv-decipheriv.js index 377f0f3ee43615..4462ee0a5fc34a 100644 --- a/test/parallel/test-crypto-cipheriv-decipheriv.js +++ b/test/parallel/test-crypto-cipheriv-decipheriv.js @@ -63,3 +63,38 @@ testCipher1(new Buffer('0123456789abcd0123456789'), '12345678'); testCipher1(new Buffer('0123456789abcd0123456789'), new Buffer('12345678')); testCipher2(new Buffer('0123456789abcd0123456789'), new Buffer('12345678')); + +// Zero-sized IV should be accepted in ECB mode. +crypto.createCipheriv('aes-128-ecb', Buffer.alloc(16), Buffer.alloc(0)); + +// But non-empty IVs should be rejected. +for (let n = 1; n < 256; n += 1) { + assert.throws( + () => crypto.createCipheriv('aes-128-ecb', Buffer.alloc(16), + Buffer.alloc(n)), + /Invalid IV length/); +} + +// Correctly sized IV should be accepted in CBC mode. 
+crypto.createCipheriv('aes-128-cbc', Buffer.alloc(16), Buffer.alloc(16)); + +// But all other IV lengths should be rejected. +for (let n = 0; n < 256; n += 1) { + if (n === 16) continue; + assert.throws( + () => crypto.createCipheriv('aes-128-cbc', Buffer.alloc(16), + Buffer.alloc(n)), + /Invalid IV length/); +} + +// Zero-sized IV should be rejected in GCM mode. +assert.throws( + () => crypto.createCipheriv('aes-128-gcm', Buffer.alloc(16), + Buffer.alloc(0)), + /Invalid IV length/); + +// But all other IV lengths should be accepted. +for (let n = 1; n < 256; n += 1) { + if (common.hasFipsCrypto && n < 12) continue; + crypto.createCipheriv('aes-128-gcm', Buffer.alloc(16), Buffer.alloc(n)); +} diff --git a/test/parallel/test-crypto-pbkdf2.js b/test/parallel/test-crypto-pbkdf2.js index 0b0ab13cbbfc5d..41940af4b91690 100644 --- a/test/parallel/test-crypto-pbkdf2.js +++ b/test/parallel/test-crypto-pbkdf2.js @@ -79,3 +79,11 @@ assert.throws(function() { assert.throws(function() { crypto.pbkdf2('password', 'salt', 1, -1, common.fail); }, /Bad key length/); + +// Should not get FATAL ERROR with empty password and salt +// https://github.com/nodejs/node/issues/8571 +assert.doesNotThrow(() => { + crypto.pbkdf2('', '', 1, 32, 'sha256', common.mustCall((e) => { + assert.ifError(e); + })); +}); diff --git a/test/parallel/test-debug-agent.js b/test/parallel/test-debug-agent.js index 0bb5d7f9be25df..d65029e5855b26 100644 --- a/test/parallel/test-debug-agent.js +++ b/test/parallel/test-debug-agent.js @@ -1,6 +1,12 @@ 'use strict'; require('../common'); const assert = require('assert'); +const debug = require('_debug_agent'); -assert.throws(() => { require('_debug_agent').start(); }, - assert.AssertionError); +assert.throws( + () => { debug.start(); }, + function(err) { + return (err instanceof assert.AssertionError && + err.message === 'Debugger agent running without bindings!'); + } +); diff --git a/test/parallel/test-debug-port-cluster.js b/test/parallel/test-debug-port-cluster.js index cc564b3ac1522c..3410aed2b67d27 100644 --- a/test/parallel/test-debug-port-cluster.js +++ b/test/parallel/test-debug-port-cluster.js @@ -16,7 +16,8 @@ child.stderr.setEncoding('utf8'); const checkMessages = common.mustCall(() => { for (let port = PORT_MIN; port <= PORT_MAX; port += 1) { - assert(stderr.includes(`Debugger listening on port ${port}`)); + const re = RegExp(`Debugger listening on (\\[::\\]|0\\.0\\.0\\.0):${port}`); + assert(re.test(stderr)); } }); diff --git a/test/parallel/test-debug-port-from-cmdline.js b/test/parallel/test-debug-port-from-cmdline.js index 71ed71bd63af65..527d72ee74a75f 100644 --- a/test/parallel/test-debug-port-from-cmdline.js +++ b/test/parallel/test-debug-port-from-cmdline.js @@ -39,11 +39,10 @@ function processStderrLine(line) { function assertOutputLines() { var expectedLines = [ 'Starting debugger agent.', - 'Debugger listening on port ' + debugPort + 'Debugger listening on (\\[::\\]|0\\.0\\.0\\.0):' + debugPort, ]; assert.equal(outputLines.length, expectedLines.length); for (var i = 0; i < expectedLines.length; i++) - assert.equal(outputLines[i], expectedLines[i]); - + assert(RegExp(expectedLines[i]).test(outputLines[i])); } diff --git a/test/parallel/test-debug-port-numbers.js b/test/parallel/test-debug-port-numbers.js index 33cdf6035b449a..683287340c6f8d 100644 --- a/test/parallel/test-debug-port-numbers.js +++ b/test/parallel/test-debug-port-numbers.js @@ -52,8 +52,9 @@ function kill(child) { process.on('exit', function() { for (const child of children) { - const one = 
RegExp(`Debugger listening on port ${child.test.port}`); - const two = RegExp(`connecting to 127.0.0.1:${child.test.port}`); + const port = child.test.port; + const one = RegExp(`Debugger listening on (\\[::\\]|0\.0\.0\.0):${port}`); + const two = RegExp(`connecting to 127.0.0.1:${port}`); assert(one.test(child.test.stdout)); assert(two.test(child.test.stdout)); } diff --git a/test/parallel/test-debug-signal-cluster.js b/test/parallel/test-debug-signal-cluster.js index 9a2536023a42c8..d5319a616939d1 100644 --- a/test/parallel/test-debug-signal-cluster.js +++ b/test/parallel/test-debug-signal-cluster.js @@ -51,10 +51,6 @@ function onNoMoreLines() { process.exit(); } -setTimeout(function testTimedOut() { - common.fail('test timed out'); -}, common.platformTimeout(4000)).unref(); - process.on('exit', function onExit() { // Kill processes in reverse order to avoid timing problems on Windows where // the parent process is killed before the children. @@ -65,11 +61,11 @@ process.on('exit', function onExit() { const expectedLines = [ 'Starting debugger agent.', - 'Debugger listening on port ' + (port + 0), + 'Debugger listening on (\\[::\\]|0\\.0\\.0\\.0):' + (port + 0), 'Starting debugger agent.', - 'Debugger listening on port ' + (port + 1), + 'Debugger listening on (\\[::\\]|0\\.0\\.0\\.0):' + (port + 1), 'Starting debugger agent.', - 'Debugger listening on port ' + (port + 2), + 'Debugger listening on (\\[::\\]|0\\.0\\.0\\.0):' + (port + 2), ]; function assertOutputLines() { @@ -79,5 +75,7 @@ function assertOutputLines() { outputLines.sort(); expectedLines.sort(); - assert.deepStrictEqual(outputLines, expectedLines); + assert.equal(outputLines.length, expectedLines.length); + for (var i = 0; i < expectedLines.length; i++) + assert(RegExp(expectedLines[i]).test(outputLines[i])); } diff --git a/test/parallel/test-debugger-util-regression.js b/test/parallel/test-debugger-util-regression.js index 5d2446c6bd40ad..4711d8e1509fd8 100644 --- a/test/parallel/test-debugger-util-regression.js +++ b/test/parallel/test-debugger-util-regression.js @@ -19,16 +19,11 @@ const proc = spawn(process.execPath, args, { stdio: 'pipe' }); proc.stdout.setEncoding('utf8'); proc.stderr.setEncoding('utf8'); -function fail() { - common.fail('the program should not hang'); -} - -const timer = setTimeout(fail, common.platformTimeout(4000)); - let stdout = ''; let stderr = ''; let nextCount = 0; +let exit = false; proc.stdout.on('data', (data) => { stdout += data; @@ -38,8 +33,8 @@ proc.stdout.on('data', (data) => { stdout.includes('> 4') && nextCount < 4) { nextCount++; proc.stdin.write('n\n'); - } else if (stdout.includes('{ a: \'b\' }')) { - clearTimeout(timer); + } else if (!exit && (stdout.includes('< { a: \'b\' }'))) { + exit = true; proc.stdin.write('.exit\n'); } else if (stdout.includes('program terminated')) { // Catch edge case present in v4.x @@ -50,15 +45,6 @@ proc.stdout.on('data', (data) => { proc.stderr.on('data', (data) => stderr += data); -// FIXME -// This test has been periodically failing on certain systems due to -// uncaught errors on proc.stdin. This will stop the process from -// exploding but is still not an elegant solution. Likely a deeper bug -// causing this problem. 
-proc.stdin.on('error', (err) => { - console.error(err); -}); - process.on('exit', (code) => { assert.equal(code, 0, 'the program should exit cleanly'); assert.equal(stdout.includes('{ a: \'b\' }'), true, diff --git a/test/parallel/test-dgram-close-in-listening.js b/test/parallel/test-dgram-close-in-listening.js new file mode 100644 index 00000000000000..e181f40de67dcf --- /dev/null +++ b/test/parallel/test-dgram-close-in-listening.js @@ -0,0 +1,18 @@ +'use strict'; +// Ensure that if a dgram socket is closed before the sendQueue is drained +// will not crash + +const common = require('../common'); +const dgram = require('dgram'); + +const buf = Buffer.alloc(1024, 42); + +const socket = dgram.createSocket('udp4'); + +socket.on('listening', function() { + socket.close(); +}); + +// adds a listener to 'listening' to send the data when +// the socket is available +socket.send(buf, 0, buf.length, common.PORT, 'localhost'); diff --git a/test/parallel/test-dgram-error-message-address.js b/test/parallel/test-dgram-error-message-address.js index d27b0043321aaf..3a87e8cabf3ca8 100644 --- a/test/parallel/test-dgram-error-message-address.js +++ b/test/parallel/test-dgram-error-message-address.js @@ -1,35 +1,35 @@ 'use strict'; -var common = require('../common'); -var assert = require('assert'); -var dgram = require('dgram'); +const common = require('../common'); +const assert = require('assert'); +const dgram = require('dgram'); // IPv4 Test -var socket_ipv4 = dgram.createSocket('udp4'); +const socket_ipv4 = dgram.createSocket('udp4'); socket_ipv4.on('listening', common.fail); socket_ipv4.on('error', common.mustCall(function(e) { assert.strictEqual(e.port, undefined); - assert.equal(e.message, 'bind EADDRNOTAVAIL 1.1.1.1'); - assert.equal(e.address, '1.1.1.1'); - assert.equal(e.code, 'EADDRNOTAVAIL'); + assert.strictEqual(e.message, 'bind EADDRNOTAVAIL 1.1.1.1'); + assert.strictEqual(e.address, '1.1.1.1'); + assert.strictEqual(e.code, 'EADDRNOTAVAIL'); socket_ipv4.close(); })); socket_ipv4.bind(0, '1.1.1.1'); // IPv6 Test -var socket_ipv6 = dgram.createSocket('udp6'); +const socket_ipv6 = dgram.createSocket('udp6'); socket_ipv6.on('listening', common.fail); socket_ipv6.on('error', common.mustCall(function(e) { // EAFNOSUPPORT or EPROTONOSUPPORT means IPv6 is disabled on this system. 
- var allowed = ['EADDRNOTAVAIL', 'EAFNOSUPPORT', 'EPROTONOSUPPORT']; - assert.notEqual(allowed.indexOf(e.code), -1); + const allowed = ['EADDRNOTAVAIL', 'EAFNOSUPPORT', 'EPROTONOSUPPORT']; + assert.notStrictEqual(allowed.indexOf(e.code), -1); assert.strictEqual(e.port, undefined); - assert.equal(e.message, 'bind ' + e.code + ' 111::1'); - assert.equal(e.address, '111::1'); + assert.strictEqual(e.message, 'bind ' + e.code + ' 111::1'); + assert.strictEqual(e.address, '111::1'); socket_ipv6.close(); })); diff --git a/test/parallel/test-event-emitter-remove-listeners.js b/test/parallel/test-event-emitter-remove-listeners.js index 95b39f4555b903..0d2ce551d3fcf9 100644 --- a/test/parallel/test-event-emitter-remove-listeners.js +++ b/test/parallel/test-event-emitter-remove-listeners.js @@ -98,3 +98,14 @@ e6.emit('hello'); // Interal listener array [listener3] e6.emit('hello'); + +const e7 = new events.EventEmitter(); + +const listener5 = () => {}; + +e7.once('hello', listener5); +e7.on('removeListener', common.mustCall((eventName, listener) => { + assert.strictEqual(eventName, 'hello'); + assert.strictEqual(listener, listener5); +})); +e7.emit('hello'); diff --git a/test/parallel/test-fs-watch-recursive.js b/test/parallel/test-fs-watch-recursive.js index 1fca541505b9c1..8d25d767080623 100644 --- a/test/parallel/test-fs-watch-recursive.js +++ b/test/parallel/test-fs-watch-recursive.js @@ -32,14 +32,17 @@ watcher.on('change', function(event, filename) { if (filename !== relativePathOne) return; + if (common.isOSX) { + clearInterval(interval); + } watcher.close(); watcherClosed = true; }); -if (process.platform === 'darwin') { - setTimeout(function() { +if (common.isOSX) { + var interval = setInterval(function() { fs.writeFileSync(filepathOne, 'world'); - }, 100); + }, 10); } else { fs.writeFileSync(filepathOne, 'world'); } diff --git a/test/parallel/test-http-client-aborted-event.js b/test/parallel/test-http-client-aborted-event.js new file mode 100644 index 00000000000000..951a128f51b261 --- /dev/null +++ b/test/parallel/test-http-client-aborted-event.js @@ -0,0 +1,18 @@ +'use strict'; +const common = require('../common'); +const http = require('http'); + +const server = http.Server(function(req, res) { + res.write('Part of my res.'); + res.destroy(); +}); + +server.listen(0, common.mustCall(function() { + http.get({ + port: this.address().port, + headers: { connection: 'keep-alive' } + }, common.mustCall(function(res) { + server.close(); + res.on('aborted', common.mustCall(function() {})); + })); +})); diff --git a/test/parallel/test-http-set-timeout.js b/test/parallel/test-http-set-timeout.js index 08777d30d22570..e8df29e0ccdc67 100644 --- a/test/parallel/test-http-set-timeout.js +++ b/test/parallel/test-http-set-timeout.js @@ -1,32 +1,26 @@ 'use strict'; -var common = require('../common'); -var assert = require('assert'); -var http = require('http'); -var net = require('net'); +const common = require('../common'); +const assert = require('assert'); +const http = require('http'); +const net = require('net'); var server = http.createServer(function(req, res) { - console.log('got request. setting 1 second timeout'); - var s = req.connection.setTimeout(500); - assert.ok(s instanceof net.Socket); - req.connection.on('timeout', function() { + console.log('got request. 
setting 500ms timeout'); + var socket = req.connection.setTimeout(500); + assert.ok(socket instanceof net.Socket); + req.connection.on('timeout', common.mustCall(function() { req.connection.destroy(); console.error('TIMEOUT'); server.close(); - }); + })); }); server.listen(0, function() { console.log(`Server running at http://127.0.0.1:${this.address().port}/`); - var errorTimer = setTimeout(function() { - throw new Error('Timeout was not successful'); - }, common.platformTimeout(2000)); - - var x = http.get({port: this.address().port, path: '/'}); - x.on('error', function() { - clearTimeout(errorTimer); + var request = http.get({port: this.address().port, path: '/'}); + request.on('error', common.mustCall(function() { console.log('HTTP REQUEST COMPLETE (this is good)'); - }); - x.end(); - + })); + request.end(); }); diff --git a/test/parallel/test-http-status-code.js b/test/parallel/test-http-status-code.js index 4422124a8cf70d..9e78b5575bd777 100644 --- a/test/parallel/test-http-status-code.js +++ b/test/parallel/test-http-status-code.js @@ -7,7 +7,7 @@ var http = require('http'); // ServerResponse.prototype.statusCode var testsComplete = 0; -var tests = [200, 202, 300, 404, 500]; +var tests = [200, 202, 300, 404, 451, 500]; var testIdx = 0; var s = http.createServer(function(req, res) { @@ -42,6 +42,6 @@ function nextTest() { process.on('exit', function() { - assert.equal(4, testsComplete); + assert.equal(5, testsComplete); }); diff --git a/test/parallel/test-http-upgrade-client.js b/test/parallel/test-http-upgrade-client.js index d0f29753c1794d..6543e7dc2223e1 100644 --- a/test/parallel/test-http-upgrade-client.js +++ b/test/parallel/test-http-upgrade-client.js @@ -25,40 +25,52 @@ var srv = net.createServer(function(c) { }); }); -var gotUpgrade = false; - -srv.listen(0, '127.0.0.1', function() { - - var req = http.get({ - port: this.address().port, - headers: { +srv.listen(0, '127.0.0.1', common.mustCall(function() { + var port = this.address().port; + var headers = [ + { connection: 'upgrade', upgrade: 'websocket' - } - }); - req.on('upgrade', function(res, socket, upgradeHead) { - var recvData = upgradeHead; - socket.on('data', function(d) { - recvData += d; + }, + [ + ['Host', 'echo.websocket.org'], + ['Connection', 'Upgrade'], + ['Upgrade', 'websocket'], + ['Origin', 'http://www.websocket.org'] + ] + ]; + var left = headers.length; + headers.forEach(function(h) { + var req = http.get({ + port: port, + headers: h }); + var sawUpgrade = false; + req.on('upgrade', common.mustCall(function(res, socket, upgradeHead) { + sawUpgrade = true; + var recvData = upgradeHead; + socket.on('data', function(d) { + recvData += d; + }); - socket.on('close', common.mustCall(function() { - assert.equal(recvData, 'nurtzo'); - })); - - console.log(res.headers); - var expectedHeaders = {'hello': 'world', - 'connection': 'upgrade', - 'upgrade': 'websocket' }; - assert.deepEqual(expectedHeaders, res.headers); + socket.on('close', common.mustCall(function() { + assert.equal(recvData, 'nurtzo'); + })); - socket.end(); - srv.close(); + console.log(res.headers); + var expectedHeaders = { + hello: 'world', + connection: 'upgrade', + upgrade: 'websocket' + }; + assert.deepStrictEqual(expectedHeaders, res.headers); - gotUpgrade = true; + socket.end(); + if (--left == 0) + srv.close(); + })); + req.on('close', common.mustCall(function() { + assert.strictEqual(sawUpgrade, true); + })); }); -}); - -process.on('exit', function() { - assert.ok(gotUpgrade); -}); +})); diff --git 
a/test/parallel/test-https-agent-sockets-leak.js b/test/parallel/test-https-agent-sockets-leak.js new file mode 100644 index 00000000000000..27b37dc4fe202b --- /dev/null +++ b/test/parallel/test-https-agent-sockets-leak.js @@ -0,0 +1,52 @@ +'use strict'; + +const common = require('../common'); +if (!common.hasCrypto) { + common.skip('missing crypto'); + return; +} + +const fs = require('fs'); +const https = require('https'); +const assert = require('assert'); + +const options = { + key: fs.readFileSync(common.fixturesDir + '/keys/agent1-key.pem'), + cert: fs.readFileSync(common.fixturesDir + '/keys/agent1-cert.pem'), + ca: fs.readFileSync(common.fixturesDir + '/keys/ca1-cert.pem') +}; + +const server = https.Server(options, common.mustCall((req, res) => { + res.writeHead(200); + res.end('https\n'); +})); + +const agent = new https.Agent({ + keepAlive: false +}); + +server.listen(0, common.mustCall(() => { + https.get({ + host: server.address().host, + port: server.address().port, + headers: {host: 'agent1'}, + rejectUnauthorized: true, + ca: options.ca, + agent: agent + }, common.mustCall((res) => { + res.resume(); + server.close(); + + // Only one entry should exist in agent.sockets pool + // If there are more entries in agent.sockets, + // removeSocket will not delete them resulting in a resource leak + assert.strictEqual(Object.keys(agent.sockets).length, 1); + + res.req.on('close', common.mustCall(() => { + // To verify that no leaks occur, check that no entries + // exist in agent.sockets pool after both request and socket + // has been closed. + assert.strictEqual(Object.keys(agent.sockets).length, 0); + })); + })); +})); diff --git a/test/parallel/test-instanceof.js b/test/parallel/test-instanceof.js new file mode 100644 index 00000000000000..45960621a66e50 --- /dev/null +++ b/test/parallel/test-instanceof.js @@ -0,0 +1,10 @@ +'use strict'; +require('../common'); +const assert = require('assert'); + + +// Regression test for instanceof, see +// https://github.com/nodejs/node/issues/7592 +const F = () => {}; +F.prototype = {}; +assert(Object.create(F.prototype) instanceof F); diff --git a/test/parallel/test-net-end-close.js b/test/parallel/test-net-end-close.js new file mode 100644 index 00000000000000..d9f33fd7d8d1cf --- /dev/null +++ b/test/parallel/test-net-end-close.js @@ -0,0 +1,26 @@ +'use strict'; +require('../common'); +const assert = require('assert'); +const net = require('net'); + +const uv = process.binding('uv'); + +const s = new net.Socket({ + handle: { + readStart: function() { + process.nextTick(() => this.onread(uv.UV_EOF, null)); + }, + close: (cb) => process.nextTick(cb) + }, + writable: false +}); +s.resume(); + +const events = []; + +s.on('end', () => events.push('end')); +s.on('close', () => events.push('close')); + +process.on('exit', () => { + assert.deepStrictEqual(events, [ 'end', 'close' ]); +}); diff --git a/test/parallel/test-net-server-max-connections.js b/test/parallel/test-net-server-max-connections.js index 661c18113b53fd..99d3b345f40974 100644 --- a/test/parallel/test-net-server-max-connections.js +++ b/test/parallel/test-net-server-max-connections.js @@ -1,23 +1,21 @@ 'use strict'; -var common = require('../common'); -var assert = require('assert'); +const common = require('../common'); +const assert = require('assert'); -var net = require('net'); +const net = require('net'); -// This test creates 200 connections to a server and sets the server's -// maxConnections property to 100. 
The first 100 connections make it through -// and the last 100 connections are rejected. +// This test creates 20 connections to a server and sets the server's +// maxConnections property to 10. The first 10 connections make it through +// and the last 10 connections are rejected. -var N = 200; -var count = 0; +const N = 20; var closes = 0; -var waits = []; +const waits = []; -var server = net.createServer(function(connection) { - console.error('connect %d', count++); +const server = net.createServer(common.mustCall(function(connection) { connection.write('hello'); waits.push(function() { connection.end(); }); -}); +}, N / 2)); server.listen(0, function() { makeConnection(0); @@ -25,11 +23,9 @@ server.listen(0, function() { server.maxConnections = N / 2; -console.error('server.maxConnections = %d', server.maxConnections); - function makeConnection(index) { - var c = net.createConnection(server.address().port); + const c = net.createConnection(server.address().port); var gotData = false; c.on('connect', function() { @@ -42,10 +38,10 @@ function makeConnection(index) { closes++; if (closes < N / 2) { - assert.ok(server.maxConnections <= index, - index + - ' was one of the first closed connections ' + - 'but shouldnt have been'); + assert.ok( + server.maxConnections <= index, + `${index} should not have been one of the first closed connections` + ); } if (closes === N / 2) { @@ -58,11 +54,11 @@ function makeConnection(index) { } if (index < server.maxConnections) { - assert.equal(true, gotData, - index + ' didn\'t get data, but should have'); + assert.strictEqual(true, gotData, + `${index} didn't get data, but should have`); } else { - assert.equal(false, gotData, - index + ' got data, but shouldn\'t have'); + assert.strictEqual(false, gotData, + `${index} got data, but shouldn't have`); } }); }); @@ -86,5 +82,5 @@ function makeConnection(index) { process.on('exit', function() { - assert.equal(N, closes); + assert.strictEqual(N, closes); }); diff --git a/test/parallel/test-npm-install.js b/test/parallel/test-npm-install.js index c716e48aefedab..8a900cf7f7c759 100644 --- a/test/parallel/test-npm-install.js +++ b/test/parallel/test-npm-install.js @@ -7,6 +7,10 @@ const assert = require('assert'); const fs = require('fs'); common.refreshTmpDir(); +const npmSandbox = path.join(common.tmpDir, 'npm-sandbox'); +fs.mkdirSync(npmSandbox); +const installDir = path.join(common.tmpDir, 'install-dir'); +fs.mkdirSync(installDir); const npmPath = path.join( common.testDir, @@ -28,15 +32,18 @@ const pkgContent = JSON.stringify({ } }); -const pkgPath = path.join(common.tmpDir, 'package.json'); +const pkgPath = path.join(installDir, 'package.json'); fs.writeFileSync(pkgPath, pkgContent); const env = Object.create(process.env); env['PATH'] = path.dirname(process.execPath); +env['NPM_CONFIG_PREFIX'] = path.join(npmSandbox, 'npm-prefix'); +env['NPM_CONFIG_TMP'] = path.join(npmSandbox, 'npm-tmp'); +env['HOME'] = path.join(npmSandbox, 'home'); const proc = spawn(process.execPath, args, { - cwd: common.tmpDir, + cwd: installDir, env: env }); @@ -44,7 +51,7 @@ function handleExit(code, signalCode) { assert.equal(code, 0, 'npm install should run without an error'); assert.ok(signalCode === null, 'signalCode should be null'); assert.doesNotThrow(function() { - fs.accessSync(common.tmpDir + '/node_modules/package-name'); + fs.accessSync(installDir + '/node_modules/package-name'); }); } diff --git a/test/parallel/test-punycode.js b/test/parallel/test-punycode.js index 42927549eae395..abd495e4a9957b 100644 --- 
a/test/parallel/test-punycode.js +++ b/test/parallel/test-punycode.js @@ -62,12 +62,10 @@ var tests = { '\uC744\uAE4C', // (I) Russian (Cyrillic) - /* XXX disabled, fails - possibly a bug in the RFC - 'b1abfaaepdrnnbgefbaDotcwatmq2g4l': + 'b1abfaaepdrnnbgefbadotcwatmq2g4l': '\u043F\u043E\u0447\u0435\u043C\u0443\u0436\u0435\u043E\u043D\u0438' + '\u043D\u0435\u0433\u043E\u0432\u043E\u0440\u044F\u0442\u043F\u043E' + '\u0440\u0443\u0441\u0441\u043A\u0438', - */ // (J) Spanish: PorqunopuedensimplementehablarenEspaol 'PorqunopuedensimplementehablarenEspaol-fmd56a': diff --git a/test/parallel/test-setproctitle.js b/test/parallel/test-setproctitle.js index 90d7b7a10df92c..f85152cfba36a1 100644 --- a/test/parallel/test-setproctitle.js +++ b/test/parallel/test-setproctitle.js @@ -8,27 +8,27 @@ if ('linux freebsd darwin'.indexOf(process.platform) === -1) { return; } -var assert = require('assert'); -var exec = require('child_process').exec; -var path = require('path'); +const assert = require('assert'); +const exec = require('child_process').exec; +const path = require('path'); // The title shouldn't be too long; libuv's uv_set_process_title() out of // security considerations no longer overwrites envp, only argv, so the // maximum title length is possibly quite short. -var title = 'testme'; +let title = 'testme'; -assert.notEqual(process.title, title); +assert.notStrictEqual(process.title, title); process.title = title; -assert.equal(process.title, title); +assert.strictEqual(process.title, title); -exec('ps -p ' + process.pid + ' -o args=', function(error, stdout, stderr) { - assert.equal(error, null); - assert.equal(stderr, ''); +exec(`ps -p ${process.pid} -o args=`, function callback(error, stdout, stderr) { + assert.ifError(error); + assert.strictEqual(stderr, ''); // freebsd always add ' (procname)' to the process title if (process.platform === 'freebsd') title += ` (${path.basename(process.execPath)})`; // omitting trailing whitespace and \n - assert.equal(stdout.replace(/\s+$/, ''), title); + assert.strictEqual(stdout.replace(/\s+$/, ''), title); }); diff --git a/test/parallel/test-stream-pipe-unpipe-streams.js b/test/parallel/test-stream-pipe-unpipe-streams.js new file mode 100644 index 00000000000000..8858ac57334e59 --- /dev/null +++ b/test/parallel/test-stream-pipe-unpipe-streams.js @@ -0,0 +1,32 @@ +'use strict'; +const common = require('../common'); +const assert = require('assert'); + +const Stream = require('stream'); +const Readable = Stream.Readable; +const Writable = Stream.Writable; + +const source = Readable({read: () => {}}); +const dest1 = Writable({write: () => {}}); +const dest2 = Writable({write: () => {}}); + +source.pipe(dest1); +source.pipe(dest2); + +dest1.on('unpipe', common.mustCall(() => {})); +dest2.on('unpipe', common.mustCall(() => {})); + +assert.strictEqual(source._readableState.pipes[0], dest1); +assert.strictEqual(source._readableState.pipes[1], dest2); +assert.strictEqual(source._readableState.pipes.length, 2); + +// Should be able to unpipe them in the reverse order that they were piped. 
+ +source.unpipe(dest2); + +assert.strictEqual(source._readableState.pipes, dest1); +assert.notStrictEqual(source._readableState.pipes, dest2); + +source.unpipe(dest1); + +assert.strictEqual(source._readableState.pipes, null); diff --git a/test/parallel/test-stream-writable-finished-state.js b/test/parallel/test-stream-writable-finished-state.js new file mode 100644 index 00000000000000..b42137ed0b5d6b --- /dev/null +++ b/test/parallel/test-stream-writable-finished-state.js @@ -0,0 +1,22 @@ +'use strict'; + +const common = require('../common'); + +const assert = require('assert'); +const stream = require('stream'); + +const writable = new stream.Writable(); + +writable._write = (chunk, encoding, cb) => { + // The state finished should start in false. + assert.strictEqual(writable._writableState.finished, false); + cb(); +}; + +writable.on('finish', common.mustCall(() => { + assert.strictEqual(writable._writableState.finished, true); +})); + +writable.end('testing finished state', common.mustCall(() => { + assert.strictEqual(writable._writableState.finished, true); +})); diff --git a/test/parallel/test-stream-writable-needdrain-state.js b/test/parallel/test-stream-writable-needdrain-state.js new file mode 100644 index 00000000000000..ea5617d997d5ed --- /dev/null +++ b/test/parallel/test-stream-writable-needdrain-state.js @@ -0,0 +1,23 @@ +'use strict'; + +const common = require('../common'); +const stream = require('stream'); +const assert = require('assert'); + +const transform = new stream.Transform({ + transform: _transform, + highWaterMark: 1 +}); + +function _transform(chunk, encoding, cb) { + assert.strictEqual(transform._writableState.needDrain, true); + cb(); +} + +assert.strictEqual(transform._writableState.needDrain, false); + +transform.write('asdasd', common.mustCall(() => { + assert.strictEqual(transform._writableState.needDrain, false); +})); + +assert.strictEqual(transform._writableState.needDrain, true); diff --git a/test/parallel/test-stream-writableState-ending.js b/test/parallel/test-stream-writableState-ending.js new file mode 100644 index 00000000000000..1e0b0bf77f1ee7 --- /dev/null +++ b/test/parallel/test-stream-writableState-ending.js @@ -0,0 +1,34 @@ +'use strict'; + +require('../common'); + +const assert = require('assert'); +const stream = require('stream'); + +const writable = new stream.Writable(); + +function testStates(ending, finished, ended) { + assert.strictEqual(writable._writableState.ending, ending); + assert.strictEqual(writable._writableState.finished, finished); + assert.strictEqual(writable._writableState.ended, ended); +} + +writable._write = (chunk, encoding, cb) => { + // ending, finished, ended start in false. + testStates(false, false, false); + cb(); +}; + +writable.on('finish', () => { + // ending, finished, ended = true. + testStates(true, true, true); +}); + +writable.end('testing function end()', () => { + // ending, finished, ended = true. + testStates(true, true, true); +}); + +// ending, ended = true. +// finished = false. 
+testStates(true, false, true); diff --git a/test/parallel/test-tick-processor.js b/test/parallel/test-tick-processor.js index 40f086c898d651..b4b21d43471473 100644 --- a/test/parallel/test-tick-processor.js +++ b/test/parallel/test-tick-processor.js @@ -43,6 +43,14 @@ runTest(/RunInDebugContext/, setTimeout(function() { process.exit(0); }, 2000); f();`); +runTest(/Runtime_DateCurrentTime/, + `function f() { + this.ts = Date.now(); + setImmediate(function() { new f(); }); + }; + setTimeout(function() { process.exit(0); }, 2000); + f();`); + function runTest(pattern, code) { cp.execFileSync(process.execPath, ['-prof', '-pe', code]); var matches = fs.readdirSync(common.tmpDir); diff --git a/test/parallel/test-tls-connect-secure-context.js b/test/parallel/test-tls-connect-secure-context.js new file mode 100644 index 00000000000000..c7519ed770fd50 --- /dev/null +++ b/test/parallel/test-tls-connect-secure-context.js @@ -0,0 +1,37 @@ +'use strict'; +const common = require('../common'); + +if (!common.hasCrypto) { + console.log('1..0 # Skipped: missing crypto'); + return; +} +const tls = require('tls'); + +const fs = require('fs'); +const path = require('path'); + +const keysDir = path.join(common.fixturesDir, 'keys'); + +const ca = fs.readFileSync(path.join(keysDir, 'ca1-cert.pem')); +const cert = fs.readFileSync(path.join(keysDir, 'agent1-cert.pem')); +const key = fs.readFileSync(path.join(keysDir, 'agent1-key.pem')); + +const server = tls.createServer({ + cert: cert, + key: key +}, function(c) { + c.end(); +}).listen(common.PORT, function() { + const secureContext = tls.createSecureContext({ + ca: ca + }); + + const socket = tls.connect({ + secureContext: secureContext, + servername: 'agent1', + port: common.PORT + }, common.mustCall(function() { + server.close(); + socket.end(); + })); +}); diff --git a/test/parallel/test-tls-writewrap-leak.js b/test/parallel/test-tls-writewrap-leak.js new file mode 100644 index 00000000000000..cc55192229531d --- /dev/null +++ b/test/parallel/test-tls-writewrap-leak.js @@ -0,0 +1,26 @@ +'use strict'; +const common = require('../common'); + +if (!common.hasCrypto) { + common.skip('missing crypto'); + return; +} + +const assert = require('assert'); +const net = require('net'); +const tls = require('tls'); + +const server = net.createServer(common.mustCall((c) => { + c.destroy(); +})).listen(0, common.mustCall(() => { + const c = tls.connect({ port: server.address().port }); + c.on('error', () => { + // Otherwise `.write()` callback won't be invoked. 
+ c.destroyed = false; + }); + + c.write('hello', common.mustCall((err) => { + assert.equal(err.code, 'ECANCELED'); + server.close(); + })); +})); diff --git a/test/parallel/test-url.js b/test/parallel/test-url.js index 164e6dcebf2d1f..10d4c16b943cc7 100644 --- a/test/parallel/test-url.js +++ b/test/parallel/test-url.js @@ -157,6 +157,17 @@ var parseTests = { path: '/Y' }, + // whitespace in the front + ' http://www.example.com/': { + href: 'http://www.example.com/', + protocol: 'http:', + slashes: true, + host: 'www.example.com', + hostname: 'www.example.com', + pathname: '/', + path: '/' + }, + // + not an invalid host character // per https://url.spec.whatwg.org/#host-parsing 'http://x.y.com+a/b/c': { diff --git a/test/parallel/test-zlib-dictionary-fail.js b/test/parallel/test-zlib-dictionary-fail.js index b4a344ceef5bb9..01d467aab80806 100644 --- a/test/parallel/test-zlib-dictionary-fail.js +++ b/test/parallel/test-zlib-dictionary-fail.js @@ -26,3 +26,17 @@ var zlib = require('zlib'); // String "test" encoded with dictionary "dict". stream.write(Buffer([0x78, 0xBB, 0x04, 0x09, 0x01, 0xA5])); })(); + +// Should raise an error, not trigger an assertion in src/node_zlib.cc +(function() { + var stream = zlib.createInflateRaw({ dictionary: Buffer('fail') }); + + stream.on('error', common.mustCall(function(err) { + // It's not possible to separate invalid dict and invalid data when using + // the raw format + assert(/invalid/.test(err.message)); + })); + + // String "test" encoded with dictionary "dict". + stream.write(Buffer([0x78, 0xBB, 0x04, 0x09, 0x01, 0xA5])); +})(); diff --git a/test/parallel/test-zlib-dictionary.js b/test/parallel/test-zlib-dictionary.js index f8ce5bfbe87df4..51cd3cc036c2ea 100644 --- a/test/parallel/test-zlib-dictionary.js +++ b/test/parallel/test-zlib-dictionary.js @@ -1,7 +1,7 @@ 'use strict'; // test compression/decompression with dictionary -require('../common'); +const common = require('../common'); const assert = require('assert'); const zlib = require('zlib'); @@ -69,6 +69,66 @@ function run(num) { } run(1); +function rawDictionaryTest() { + let output = ''; + const deflate = zlib.createDeflateRaw({ dictionary: spdyDict }); + const inflate = zlib.createInflateRaw({ dictionary: spdyDict }); + + deflate.on('data', function(chunk) { + inflate.write(chunk); + }); + + inflate.on('data', function(chunk) { + output += chunk; + }); + + deflate.on('end', function() { + inflate.end(); + }); + + inflate.on('end', common.mustCall(function() { + assert.equal(input, output); + })); + + deflate.write(input); + deflate.end(); +} + +function deflateRawResetDictionaryTest() { + let doneReset = false; + let output = ''; + const deflate = zlib.createDeflateRaw({ dictionary: spdyDict }); + const inflate = zlib.createInflateRaw({ dictionary: spdyDict }); + + deflate.on('data', function(chunk) { + if (doneReset) + inflate.write(chunk); + }); + + inflate.on('data', function(chunk) { + output += chunk; + }); + + deflate.on('end', function() { + inflate.end(); + }); + + inflate.on('end', common.mustCall(function() { + assert.equal(input, output); + })); + + deflate.write(input); + deflate.flush(function() { + deflate.reset(); + doneReset = true; + deflate.write(input); + deflate.end(); + }); +} + +rawDictionaryTest(); +deflateRawResetDictionaryTest(); + process.on('exit', function() { assert.equal(called, 2); }); diff --git a/test/pummel/test-tls-securepair-client.js b/test/pummel/test-tls-securepair-client.js index 5dd2af65b2ba5f..7a1b8770132e96 100644 --- 
a/test/pummel/test-tls-securepair-client.js +++ b/test/pummel/test-tls-securepair-client.js @@ -39,12 +39,6 @@ function test2() { } function test(keyfn, certfn, check, next) { - // FIXME: Avoid the common PORT as this test currently hits a C-level - // assertion error with node_g. The program aborts without HUPing - // the openssl s_server thus causing many tests to fail with - // EADDRINUSE. - var PORT = common.PORT + 5; - keyfn = join(common.fixturesDir, keyfn); var key = fs.readFileSync(keyfn).toString(); @@ -52,7 +46,7 @@ function test(keyfn, certfn, check, next) { var cert = fs.readFileSync(certfn).toString(); var server = spawn(common.opensslCli, ['s_server', - '-accept', PORT, + '-accept', common.PORT, '-cert', certfn, '-key', keyfn]); server.stdout.pipe(process.stdout); @@ -121,7 +115,7 @@ function test(keyfn, certfn, check, next) { pair.encrypted.pipe(s); s.pipe(pair.encrypted); - s.connect(PORT); + s.connect(common.PORT); s.on('connect', function() { console.log('client connected'); diff --git a/test/sequential/test-child-process-execsync.js b/test/sequential/test-child-process-execsync.js index 76da39c0e6b12d..ef970a2bb5471e 100644 --- a/test/sequential/test-child-process-execsync.js +++ b/test/sequential/test-child-process-execsync.js @@ -87,3 +87,19 @@ assert.strictEqual(ret, msg + '\n', execSync('exit -1', {stdio: 'ignore'}); }, /Command failed: exit -1/); })(); + +// Verify the execFileSync() behavior when the child exits with a non-zero code. +{ + const args = ['-e', 'process.exit(1)']; + + assert.throws(() => { + execFileSync(process.execPath, args); + }, (err) => { + const msg = `Command failed: ${process.execPath} ${args.join(' ')}`; + + assert(err instanceof Error); + assert.strictEqual(err.message.trim(), msg); + assert.strictEqual(err.status, 1); + return true; + }); +} diff --git a/test/sequential/test-debug-host-port.js b/test/sequential/test-debug-host-port.js new file mode 100644 index 00000000000000..be6a0837f920b8 --- /dev/null +++ b/test/sequential/test-debug-host-port.js @@ -0,0 +1,47 @@ +'use strict'; + +const common = require('../common'); +const assert = require('assert'); +const spawn = require('child_process').spawn; + +let run = () => {}; +function test(args, re) { + const next = run; + run = () => { + const options = {encoding: 'utf8'}; + const proc = spawn(process.execPath, args.concat(['-e', '0']), options); + let stderr = ''; + proc.stderr.setEncoding('utf8'); + proc.stderr.on('data', (data) => { + stderr += data; + if (re.test(stderr)) proc.kill(); + }); + proc.on('exit', common.mustCall(() => { + assert(re.test(stderr)); + next(); + })); + }; +} + +test(['--debug-brk'], /Debugger listening on (\[::\]|0\.0\.0\.0):5858/); +test(['--debug-brk=1234'], /Debugger listening on (\[::\]|0\.0\.0\.0):1234/); +test(['--debug-brk=127.0.0.1'], /Debugger listening on 127\.0\.0\.1:5858/); +test(['--debug-brk=127.0.0.1:1234'], /Debugger listening on 127\.0\.0\.1:1234/); +test(['--debug-brk=localhost'], + /Debugger listening on (\[::\]|127\.0\.0\.1):5858/); +test(['--debug-brk=localhost:1234'], + /Debugger listening on (\[::\]|127\.0\.0\.1):1234/); + +if (common.hasIPv6) { + test(['--debug-brk=::'], /Debug port must be in range 1024 to 65535/); + test(['--debug-brk=::0'], /Debug port must be in range 1024 to 65535/); + test(['--debug-brk=::1'], /Debug port must be in range 1024 to 65535/); + test(['--debug-brk=[::]'], /Debugger listening on \[::\]:5858/); + test(['--debug-brk=[::0]'], /Debugger listening on \[::\]:5858/); + test(['--debug-brk=[::]:1234'], /Debugger 
listening on \[::\]:1234/); + test(['--debug-brk=[::0]:1234'], /Debugger listening on \[::\]:1234/); + test(['--debug-brk=[::ffff:127.0.0.1]:1234'], + /Debugger listening on \[::ffff:127\.0\.0\.1\]:1234/); +} + +run(); // Runs tests in reverse order. diff --git a/test/sequential/test-module-loading.js b/test/sequential/test-module-loading.js index fea3ac04298901..2f4a85748ec03e 100644 --- a/test/sequential/test-module-loading.js +++ b/test/sequential/test-module-loading.js @@ -7,20 +7,20 @@ var fs = require('fs'); console.error('load test-module-loading.js'); // assert that this is the main module. -assert.equal(require.main.id, '.', 'main module should have id of \'.\''); -assert.equal(require.main, module, 'require.main should === module'); -assert.equal(process.mainModule, module, - 'process.mainModule should === module'); +assert.strictEqual(require.main.id, '.', 'main module should have id of \'.\''); +assert.strictEqual(require.main, module, 'require.main should === module'); +assert.strictEqual(process.mainModule, module, + 'process.mainModule should === module'); // assert that it's *not* the main module in the required module. require('../fixtures/not-main-module.js'); // require a file with a request that includes the extension var a_js = require('../fixtures/a.js'); -assert.equal(42, a_js.number); +assert.strictEqual(42, a_js.number); // require a file without any extensions var foo_no_ext = require('../fixtures/foo'); -assert.equal('ok', foo_no_ext.foo); +assert.strictEqual('ok', foo_no_ext.foo); var a = require('../fixtures/a'); var c = require('../fixtures/b/c'); @@ -31,54 +31,54 @@ var d3 = require(path.join(__dirname, '../fixtures/b/d')); // Relative var d4 = require('../fixtures/b/d'); -assert.equal(false, false, 'testing the test program.'); +assert.strictEqual(false, false, 'testing the test program.'); assert.ok(a.A instanceof Function); -assert.equal('A', a.A()); +assert.strictEqual('A', a.A()); assert.ok(a.C instanceof Function); -assert.equal('C', a.C()); +assert.strictEqual('C', a.C()); assert.ok(a.D instanceof Function); -assert.equal('D', a.D()); +assert.strictEqual('D', a.D()); assert.ok(d.D instanceof Function); -assert.equal('D', d.D()); +assert.strictEqual('D', d.D()); assert.ok(d2.D instanceof Function); -assert.equal('D', d2.D()); +assert.strictEqual('D', d2.D()); assert.ok(d3.D instanceof Function); -assert.equal('D', d3.D()); +assert.strictEqual('D', d3.D()); assert.ok(d4.D instanceof Function); -assert.equal('D', d4.D()); +assert.strictEqual('D', d4.D()); assert.ok((new a.SomeClass()) instanceof c.SomeClass); console.error('test index.js modules ids and relative loading'); const one = require('../fixtures/nested-index/one'); const two = require('../fixtures/nested-index/two'); -assert.notEqual(one.hello, two.hello); +assert.notStrictEqual(one.hello, two.hello); console.error('test index.js in a folder with a trailing slash'); const three = require('../fixtures/nested-index/three'); const threeFolder = require('../fixtures/nested-index/three/'); const threeIndex = require('../fixtures/nested-index/three/index.js'); -assert.equal(threeFolder, threeIndex); -assert.notEqual(threeFolder, three); +assert.strictEqual(threeFolder, threeIndex); +assert.notStrictEqual(threeFolder, three); console.error('test package.json require() loading'); -assert.equal(require('../fixtures/packages/main').ok, 'ok', - 'Failed loading package'); -assert.equal(require('../fixtures/packages/main-index').ok, 'ok', - 'Failed loading package with index.js in main subdir'); 
+assert.strictEqual(require('../fixtures/packages/main').ok, 'ok', + 'Failed loading package'); +assert.strictEqual(require('../fixtures/packages/main-index').ok, 'ok', + 'Failed loading package with index.js in main subdir'); console.error('test cycles containing a .. path'); const root = require('../fixtures/cycles/root'); const foo = require('../fixtures/cycles/folder/foo'); -assert.equal(root.foo, foo); -assert.equal(root.sayHello(), root.hello); +assert.strictEqual(root.foo, foo); +assert.strictEqual(root.sayHello(), root.hello); console.error('test node_modules folders'); // asserts are in the fixtures files themselves, @@ -97,23 +97,24 @@ try { require('../fixtures/throws_error'); } catch (e) { errorThrown = true; - assert.equal('blah', e.message); + assert.strictEqual('blah', e.message); } -assert.equal(require('path').dirname(__filename), __dirname); +assert.strictEqual(require('path').dirname(__filename), __dirname); console.error('load custom file types with extensions'); require.extensions['.test'] = function(module, filename) { var content = fs.readFileSync(filename).toString(); - assert.equal('this is custom source\n', content); + assert.strictEqual('this is custom source\n', content); content = content.replace('this is custom source', 'exports.test = \'passed\''); module._compile(content, filename); }; -assert.equal(require('../fixtures/registerExt').test, 'passed'); +assert.strictEqual(require('../fixtures/registerExt').test, 'passed'); // unknown extension, load as .js -assert.equal(require('../fixtures/registerExt.hello.world').test, 'passed'); +assert.strictEqual(require('../fixtures/registerExt.hello.world').test, + 'passed'); console.error('load custom file types that return non-strings'); require.extensions['.test'] = function(module, filename) { @@ -122,10 +123,10 @@ require.extensions['.test'] = function(module, filename) { }; }; -assert.equal(require('../fixtures/registerExt2').custom, 'passed'); +assert.strictEqual(require('../fixtures/registerExt2').custom, 'passed'); -assert.equal(require('../fixtures/foo').foo, 'ok', - 'require module with no extension'); +assert.strictEqual(require('../fixtures/foo').foo, 'ok', + 'require module with no extension'); assert.throws(function() { require.paths; @@ -135,7 +136,7 @@ assert.throws(function() { try { require('../fixtures/empty'); } catch (err) { - assert.equal(err.message, 'Cannot find module \'../fixtures/empty\''); + assert.strictEqual(err.message, 'Cannot find module \'../fixtures/empty\''); } // Check load order is as expected @@ -147,31 +148,31 @@ const msg = 'Load order incorrect.'; require.extensions['.reg'] = require.extensions['.js']; require.extensions['.reg2'] = require.extensions['.js']; -assert.equal(require(loadOrder + 'file1').file1, 'file1', msg); -assert.equal(require(loadOrder + 'file2').file2, 'file2.js', msg); +assert.strictEqual(require(loadOrder + 'file1').file1, 'file1', msg); +assert.strictEqual(require(loadOrder + 'file2').file2, 'file2.js', msg); try { require(loadOrder + 'file3'); } catch (e) { // Not a real .node module, but we know we require'd the right thing. 
assert.ok(e.message.replace(/\\/g, '/').match(/file3\.node/)); } -assert.equal(require(loadOrder + 'file4').file4, 'file4.reg', msg); -assert.equal(require(loadOrder + 'file5').file5, 'file5.reg2', msg); -assert.equal(require(loadOrder + 'file6').file6, 'file6/index.js', msg); +assert.strictEqual(require(loadOrder + 'file4').file4, 'file4.reg', msg); +assert.strictEqual(require(loadOrder + 'file5').file5, 'file5.reg2', msg); +assert.strictEqual(require(loadOrder + 'file6').file6, 'file6/index.js', msg); try { require(loadOrder + 'file7'); } catch (e) { assert.ok(e.message.replace(/\\/g, '/').match(/file7\/index\.node/)); } -assert.equal(require(loadOrder + 'file8').file8, 'file8/index.reg', msg); -assert.equal(require(loadOrder + 'file9').file9, 'file9/index.reg2', msg); +assert.strictEqual(require(loadOrder + 'file8').file8, 'file8/index.reg', msg); +assert.strictEqual(require(loadOrder + 'file9').file9, 'file9/index.reg2', msg); // make sure that module.require() is the same as // doing require() inside of that module. var parent = require('../fixtures/module-require/parent/'); var child = require('../fixtures/module-require/child/'); -assert.equal(child.loaded, parent.loaded); +assert.strictEqual(child.loaded, parent.loaded); // #1357 Loading JSON files with require() @@ -260,29 +261,29 @@ assert.throws(function() { process.on('exit', function() { assert.ok(a.A instanceof Function); - assert.equal('A done', a.A()); + assert.strictEqual('A done', a.A()); assert.ok(a.C instanceof Function); - assert.equal('C done', a.C()); + assert.strictEqual('C done', a.C()); assert.ok(a.D instanceof Function); - assert.equal('D done', a.D()); + assert.strictEqual('D done', a.D()); assert.ok(d.D instanceof Function); - assert.equal('D done', d.D()); + assert.strictEqual('D done', d.D()); assert.ok(d2.D instanceof Function); - assert.equal('D done', d2.D()); + assert.strictEqual('D done', d2.D()); - assert.equal(true, errorThrown); + assert.strictEqual(true, errorThrown); console.log('exit'); }); // #1440 Loading files with a byte order marker. -assert.equal(42, require('../fixtures/utf8-bom.js')); -assert.equal(42, require('../fixtures/utf8-bom.json')); +assert.strictEqual(42, require('../fixtures/utf8-bom.js')); +assert.strictEqual(42, require('../fixtures/utf8-bom.json')); // Error on the first line of a module should // have the correct line number diff --git a/test/parallel/test-repl-timeout-throw.js b/test/sequential/test-repl-timeout-throw.js similarity index 86% rename from test/parallel/test-repl-timeout-throw.js rename to test/sequential/test-repl-timeout-throw.js index 6c540c9e3197de..0188b3b8c502d8 100644 --- a/test/parallel/test-repl-timeout-throw.js +++ b/test/sequential/test-repl-timeout-throw.js @@ -1,10 +1,10 @@ 'use strict'; -var assert = require('assert'); -var common = require('../common'); +const common = require('../common'); +const assert = require('assert'); -var spawn = require('child_process').spawn; +const spawn = require('child_process').spawn; -var child = spawn(process.execPath, [ '-i' ], { +const child = spawn(process.execPath, [ '-i' ], { stdio: [null, null, 2] }); @@ -52,8 +52,8 @@ child.stdout.once('data', function() { }); child.on('close', function(c) { - assert(!c); + assert.strictEqual(c, 0); // make sure we got 3 throws, in the end. 
var lastLine = stdout.trim().split(/\r?\n/).pop(); - assert.equal(lastLine, '> 3'); + assert.strictEqual(lastLine, '> 3'); }); diff --git a/tools/doc/html.js b/tools/doc/html.js index 30bc3b5caae303..3b60116b1ba422 100644 --- a/tools/doc/html.js +++ b/tools/doc/html.js @@ -9,6 +9,8 @@ const typeParser = require('./type-parser.js'); module.exports = toHTML; +const STABILITY_TEXT_REG_EXP = /(.*:)\s*(\d)([\s\S]*)/; + // customized heading without id attribute var renderer = new marked.Renderer(); renderer.heading = function(text, level) { @@ -153,8 +155,11 @@ function parseLists(input) { var savedState = []; var depth = 0; var output = []; + let headingIndex = -1; + let heading = null; + output.links = input.links; - input.forEach(function(tok) { + input.forEach(function(tok, index) { if (tok.type === 'blockquote_start') { savedState.push(state); state = 'MAYBE_STABILITY_BQ'; @@ -167,6 +172,17 @@ function parseLists(input) { if ((tok.type === 'paragraph' && state === 'MAYBE_STABILITY_BQ') || tok.type === 'code') { if (tok.text.match(/Stability:.*/g)) { + const stabilityMatch = tok.text.match(STABILITY_TEXT_REG_EXP); + const stability = Number(stabilityMatch[2]); + const isStabilityIndex = + index - 2 === headingIndex || // general + index - 3 === headingIndex; // with api_metadata block + + if (heading && isStabilityIndex) { + heading.stability = stability; + headingIndex = -1; + heading = null; + } tok.text = parseAPIHeader(tok.text); output.push({ type: 'html', text: tok.text }); return; @@ -178,6 +194,8 @@ function parseLists(input) { if (state === null || (state === 'AFTERHEADING' && tok.type === 'heading')) { if (tok.type === 'heading') { + headingIndex = index; + heading = tok; state = 'AFTERHEADING'; } output.push(tok); @@ -280,7 +298,7 @@ function linkJsTypeDocs(text) { function parseAPIHeader(text) { text = text.replace( - /(.*:)\s(\d)([\s\S]*)/, + STABILITY_TEXT_REG_EXP, '
<pre class="api_stability api_stability_$2">$1 $2$3</pre>
' ); return text; @@ -324,8 +342,8 @@ function buildToc(lexed, filename, cb) { const realFilename = path.basename(realFilenames[0], '.md'); const id = getId(realFilename + '_' + tok.text.trim()); toc.push(new Array((depth - 1) * 2 + 1).join(' ') + - '* ' + - tok.text + ''); + '* ' + + '' + tok.text + ''); tok.text += '#'; }); diff --git a/tools/eslint-rules/no-let-in-for-declaration.js b/tools/eslint-rules/no-let-in-for-declaration.js new file mode 100644 index 00000000000000..8b1a6783e0773d --- /dev/null +++ b/tools/eslint-rules/no-let-in-for-declaration.js @@ -0,0 +1,46 @@ +/** + * @fileoverview Prohibit the use of `let` as the loop variable + * in the initialization of for, and the left-hand + * iterator in forIn and forOf loops. + * + * @author Jessica Quynh Tran + */ + +'use strict'; + +//------------------------------------------------------------------------------ +// Rule Definition +//------------------------------------------------------------------------------ + +module.exports = { + create(context) { + + const msg = 'Use of `let` as the loop variable in a for-loop is ' + + 'not recommended. Please use `var` instead.'; + + /** + * Report function to test if the for-loop is declared using `let`. + */ + function testForLoop(node) { + if (node.init && node.init.kind === 'let') { + context.report(node.init, msg); + } + } + + /** + * Report function to test if the for-in or for-of loop + * is declared using `let`. + */ + function testForInOfLoop(node) { + if (node.left && node.left.kind === 'let') { + context.report(node.left, msg); + } + } + + return { + 'ForStatement': testForLoop, + 'ForInStatement': testForInOfLoop, + 'ForOfStatement': testForInOfLoop + }; + } +}; diff --git a/tools/eslint/node_modules/.bin/eslint b/tools/eslint/node_modules/.bin/eslint deleted file mode 120000 index 810e4bcb32af34..00000000000000 --- a/tools/eslint/node_modules/.bin/eslint +++ /dev/null @@ -1 +0,0 @@ -../eslint/bin/eslint.js \ No newline at end of file diff --git a/tools/getmoduleversion.py b/tools/getmoduleversion.py new file mode 100644 index 00000000000000..9ff7016b2b2062 --- /dev/null +++ b/tools/getmoduleversion.py @@ -0,0 +1,24 @@ +from __future__ import print_function +import os +import re + +def get_version(): + node_version_h = os.path.join( + os.path.dirname(__file__), + '..', + 'src', + 'node_version.h') + + f = open(node_version_h) + + regex = '^#define NODE_MODULE_VERSION [0-9]+' + + for line in f: + if re.match(regex, line): + major = line.split()[2] + return major + + raise Exception('Could not find pattern matching %s' % regex) + +if __name__ == '__main__': + print(get_version()) diff --git a/tools/getnodeversion.py b/tools/getnodeversion.py index 766e4f60dc07ad..f2032cccefe936 100644 --- a/tools/getnodeversion.py +++ b/tools/getnodeversion.py @@ -1,16 +1,20 @@ -import os,re +import os +import re -node_version_h = os.path.join(os.path.dirname(__file__), '..', 'src', +node_version_h = os.path.join( + os.path.dirname(__file__), + '..', + 'src', 'node_version.h') f = open(node_version_h) for line in f: - if re.match('#define NODE_MAJOR_VERSION', line): + if re.match('^#define NODE_MAJOR_VERSION', line): major = line.split()[2] - if re.match('#define NODE_MINOR_VERSION', line): + if re.match('^#define NODE_MINOR_VERSION', line): minor = line.split()[2] - if re.match('#define NODE_PATCH_VERSION', line): + if re.match('^#define NODE_PATCH_VERSION', line): patch = line.split()[2] print '%(major)s.%(minor)s.%(patch)s'% locals() diff --git a/tools/icu/icu-generic.gyp 
b/tools/icu/icu-generic.gyp index 222a9e95664d50..bb98623772064e 100644 --- a/tools/icu/icu-generic.gyp +++ b/tools/icu/icu-generic.gyp @@ -69,6 +69,7 @@ [ 'os_posix == 1 and OS != "mac" and OS != "ios"', { 'cflags': [ '-Wno-deprecated-declarations' ], 'cflags_cc': [ '-frtti' ], + 'cflags_cc!': [ '-fno-rtti' ], }], [ 'OS == "mac" or OS == "ios"', { 'xcode_settings': {'GCC_ENABLE_CPP_RTTI': 'YES' }, diff --git a/tools/install.py b/tools/install.py index b1997d48525209..daf2d292c51322 100755 --- a/tools/install.py +++ b/tools/install.py @@ -123,9 +123,23 @@ def subdir_files(path, dest, action): def files(action): is_windows = sys.platform == 'win32' + output_file = 'node' + output_prefix = 'out/Release/' - exeext = '.exe' if is_windows else '' - action(['out/Release/node' + exeext], 'bin/node' + exeext) + if 'false' == variables.get('node_shared'): + if is_windows: + output_file += '.exe' + else: + if is_windows: + output_file += '.dll' + else: + output_file = 'lib' + output_file + '.' + variables.get('shlib_suffix') + # GYP will output to lib.target except on OS X, this is hardcoded + # in its source - see the _InstallableTargetInstallPath function. + if sys.platform != 'darwin': + output_prefix += 'lib.target/' + + action([output_prefix + output_file], 'bin/' + output_file) if 'true' == variables.get('node_use_dtrace'): action(['out/Release/node.d'], 'lib/dtrace/node.d') diff --git a/tools/make-v8.sh b/tools/make-v8.sh index f6efb66a565d54..786171facf52cd 100755 --- a/tools/make-v8.sh +++ b/tools/make-v8.sh @@ -1,38 +1,47 @@ #!/bin/bash - -git_origin=$(git config --get remote.origin.url | sed 's/.\+[\/:]\([^\/]\+\/[^\/]\+\)$/\1/') -git_branch=$(git rev-parse --abbrev-ref HEAD) -v8ver=${1:-v8} #default v8 -svn_prefix=https://github.com -svn_path="$svn_prefix/$git_origin/branches/$git_branch/deps/$v8ver" -#svn_path="$git_origin/branches/$git_branch/deps/$v8ver" -gclient_string="solutions = [{'name': 'v8', 'url': '$svn_path', 'managed': False}]" +# Get V8 branch from v8/include/v8-version.h +MAJOR=$(grep V8_MAJOR_VERSION deps/v8/include/v8-version.h | cut -d ' ' -f 3) +MINOR=$(grep V8_MINOR_VERSION deps/v8/include/v8-version.h | cut -d ' ' -f 3) +BRANCH=$MAJOR.$MINOR # clean up if someone presses ctrl-c trap cleanup INT function cleanup() { trap - INT - rm .gclient || true rm .gclient_entries || true rm -rf _bad_scm/ || true - - #if v8ver isn't v8, move the v8 folders - #back to what they were - if [ "$v8ver" != "v8" ]; then - mv v8 $v8ver - mv .v8old v8 - fi + find v8 -name ".git" | xargs rm -rf || true + echo "git cleanup" + git reset --hard HEAD + git clean -fdq + # unstash local changes + git stash pop exit 0 } cd deps -echo $gclient_string > .gclient -if [ "$v8ver" != "v8" ]; then - mv v8 .v8old - mv $v8ver v8 +# stash local changes +git stash +rm -rf v8 + +echo "Fetching V8 from chromium.googlesource.com" +fetch v8 +if [ "$?" -ne 0 ]; then + echo "V8 fetch failed" + exit 1 fi +echo "V8 fetched" + +cd v8 + +echo "Checking out branch:$BRANCH" +git checkout remotes/branch-heads/$BRANCH + +echo "Sync dependencies" gclient sync + +cd .. 
cleanup diff --git a/tools/mkssldef.py b/tools/mkssldef.py new file mode 100755 index 00000000000000..8cbdbabd976ba9 --- /dev/null +++ b/tools/mkssldef.py @@ -0,0 +1,44 @@ +#!/usr/bin/env python + +from __future__ import print_function +import re +import sys + +categories = [] +defines = [] +excludes = [] + +if __name__ == '__main__': + out = sys.stdout + filenames = sys.argv[1:] + + while filenames and filenames[0].startswith('-'): + option = filenames.pop(0) + if option == '-o': out = open(filenames.pop(0), 'w') + elif option.startswith('-C'): categories += option[2:].split(',') + elif option.startswith('-D'): defines += option[2:].split(',') + elif option.startswith('-X'): excludes += option[2:].split(',') + + excludes = map(re.compile, excludes) + exported = [] + + for filename in filenames: + for line in open(filename).readlines(): + name, _, meta, _ = re.split('\s+', line) + if any(map(lambda p: p.match(name), excludes)): continue + meta = meta.split(':') + assert meta[0] in ('EXIST', 'NOEXIST') + assert meta[2] in ('FUNCTION', 'VARIABLE') + if meta[0] != 'EXIST': continue + if meta[2] != 'FUNCTION': continue + def satisfy(expr, rules): + def test(expr): + if expr.startswith('!'): return not expr[1:] in rules + return expr == '' or expr in rules + return all(map(test, expr.split(','))) + if not satisfy(meta[1], defines): continue + if not satisfy(meta[3], categories): continue + exported.append(name) + + print('EXPORTS', file=out) + for name in sorted(exported): print(' ', name, file=out) diff --git a/tools/msvs/msi/product.wxs b/tools/msvs/msi/product.wxs index bdda1a19c13e72..4f25505289b752 100755 --- a/tools/msvs/msi/product.wxs +++ b/tools/msvs/msi/product.wxs @@ -42,7 +42,7 @@ - + &1 | grep 'key ID' | awk '{print $NF}') - - if [ "X${gpgtagkey}" == "X" ]; then - echo "Could not find signed tag for \"${version}\"" - exit 1 - fi - - if [ "${gpgtagkey}" != "${gpgkey}" ]; then - echo "GPG key for \"${version}\" tag is not yours, cannot sign" + if ! git tag -v $version 2>&1 | grep "${gpgkey}" | grep key > /dev/null; then + echo "Could not find signed tag for \"${version}\" or GPG key is not yours" exit 1 fi diff --git a/tools/test.py b/tools/test.py index 97880fb3cd2667..626dd2cec6af99 100755 --- a/tools/test.py +++ b/tools/test.py @@ -43,6 +43,7 @@ import utils import multiprocessing import errno +import copy from os.path import join, dirname, abspath, basename, isdir, exists from datetime import datetime @@ -256,11 +257,15 @@ def HasRun(self, output): class TapProgressIndicator(SimpleProgressIndicator): - def _printDiagnostic(self, messages): - for l in messages.splitlines(): - logger.info('# ' + l) + def _printDiagnostic(self, traceback, severity): + logger.info(' severity: %s', severity) + logger.info(' stack: |-') + + for l in traceback.splitlines(): + logger.info(' ' + l) def Starting(self): + logger.info('TAP version 13') logger.info('1..%i' % len(self.cases)) self._done = 0 @@ -269,6 +274,8 @@ def AboutToRun(self, case): def HasRun(self, output): self._done += 1 + self.traceback = '' + self.severity = 'ok' # Print test name as (for example) "parallel/test-assert". 
Tests that are # scraped from the addons documentation are all named test.js, making it @@ -281,19 +288,23 @@ def HasRun(self, output): if output.UnexpectedOutput(): status_line = 'not ok %i %s' % (self._done, command) + self.severity = 'fail' + self.traceback = output.output.stdout + output.output.stderr + if FLAKY in output.test.outcomes and self.flaky_tests_mode == DONTCARE: status_line = status_line + ' # TODO : Fix flaky test' + self.severity = 'flaky' + logger.info(status_line) - self._printDiagnostic("\n".join(output.diagnostic)) if output.HasCrashed(): - self._printDiagnostic(PrintCrashed(output.output.exit_code)) + self.severity = 'crashed' + exit_code = output.output.exit_code + self.traceback = "oh no!\nexit code: " + PrintCrashed(exit_code) if output.HasTimedOut(): - self._printDiagnostic('TIMEOUT') + self.severity = 'fail' - self._printDiagnostic(output.output.stderr) - self._printDiagnostic(output.output.stdout) else: skip = skip_regex.search(output.output.stdout) if skip: @@ -304,7 +315,11 @@ def HasRun(self, output): if FLAKY in output.test.outcomes: status_line = status_line + ' # TODO : Fix flaky test' logger.info(status_line) - self._printDiagnostic("\n".join(output.diagnostic)) + + if output.diagnostic: + self.severity = 'ok' + self.traceback = output.diagnostic + duration = output.test.duration @@ -313,7 +328,12 @@ def HasRun(self, output): (duration.seconds + duration.days * 24 * 3600) * 10**6) / 10**6 logger.info(' ---') - logger.info(' duration_ms: %d.%d' % (total_seconds, duration.microseconds / 1000)) + logger.info(' duration_ms: %d.%d' % + (total_seconds, duration.microseconds / 1000)) + if self.severity is not 'ok' or self.traceback is not '': + if output.HasTimedOut(): + self.traceback = 'timeout' + self._printDiagnostic(self.traceback, self.severity) logger.info(' ...') def Done(self): @@ -777,7 +797,9 @@ def AddTestsToList(self, result, current_path, path, context, arch, mode): tests = self.GetConfiguration(context).ListTests(current_path, path, arch, mode) for t in tests: t.variant_flags = v - result += tests * context.repeat + result += tests + for i in range(1, context.repeat): + result += copy.deepcopy(tests) def GetTestStatus(self, context, sections, defs): self.GetConfiguration(context).GetTestStatus(sections, defs) @@ -1454,6 +1476,13 @@ def SplitPath(s): stripped = [ c.strip() for c in s.split('/') ] return [ Pattern(s) for s in stripped if len(s) > 0 ] +def NormalizePath(path): + # strip the extra path information of the specified test + if path.startswith('test/'): + path = path[5:] + if path.endswith('.js'): + path = path[:-3] + return path def GetSpecialCommandProcessor(value): if (not value) or (value.find('@') == -1): @@ -1526,7 +1555,7 @@ def Main(): else: paths = [ ] for arg in args: - path = SplitPath(arg) + path = SplitPath(NormalizePath(arg)) paths.append(path) # Check for --valgrind option. 
If enabled, we overwrite the special diff --git a/vcbuild.bat b/vcbuild.bat index 77c0d01a14f486..72943f62a7a6c3 100644 --- a/vcbuild.bat +++ b/vcbuild.bat @@ -20,6 +20,7 @@ set noprojgen= set nobuild= set nosign= set nosnapshot= +set cctest_args= set test_args= set package= set msi= @@ -37,6 +38,7 @@ set build_release= set configure_flags= set build_addons= set enable_vtune_profiling= +set dll= :next-arg if "%1"=="" goto args-done @@ -56,7 +58,7 @@ if /i "%1"=="noetw" set noetw=1&goto arg-ok if /i "%1"=="noperfctr" set noperfctr=1&goto arg-ok if /i "%1"=="licensertf" set licensertf=1&goto arg-ok if /i "%1"=="test" set test_args=%test_args% addons doctool sequential parallel message -J&set jslint=1&set build_addons=1&goto arg-ok -if /i "%1"=="test-ci" set test_args=%test_args% %test_ci_args% -p tap --logfile test.tap addons doctool message sequential parallel&set build_addons=1&goto arg-ok +if /i "%1"=="test-ci" set test_args=%test_args% %test_ci_args% -p tap --logfile test.tap addons doctool message sequential parallel&set cctest_args=%cctest_args% --gtest_output=tap:cctest.tap&set build_addons=1&goto arg-ok if /i "%1"=="test-addons" set test_args=%test_args% addons&set build_addons=1&goto arg-ok if /i "%1"=="test-simple" set test_args=%test_args% sequential parallel -J&goto arg-ok if /i "%1"=="test-message" set test_args=%test_args% message&goto arg-ok @@ -76,6 +78,7 @@ if /i "%1"=="intl-none" set i18n_arg=%1&goto arg-ok if /i "%1"=="download-all" set download_arg="--download=all"&goto arg-ok if /i "%1"=="ignore-flaky" set test_args=%test_args% --flaky-tests=dontcare&goto arg-ok if /i "%1"=="enable-vtune" set enable_vtune_profiling="--enable-vtune-profiling"&goto arg-ok +if /i "%1"=="dll" set dll=1&goto arg-ok echo Error: invalid command line option `%1`. exit /b 1 @@ -105,6 +108,7 @@ if defined noetw set configure_flags=%configure_flags% --without-etw& set noetw_ if defined noperfctr set configure_flags=%configure_flags% --without-perfctr& set noperfctr_msi_arg=/p:NoPerfCtr=1 if defined release_urlbase set configure_flags=%configure_flags% --release-urlbase=%release_urlbase% if defined download_arg set configure_flags=%configure_flags% %download_arg% +if defined dll set configure_flags=%configure_flags% --shared if "%i18n_arg%"=="full-icu" set configure_flags=%configure_flags% --with-intl=full-icu if "%i18n_arg%"=="small-icu" set configure_flags=%configure_flags% --with-intl=small-icu @@ -351,8 +355,8 @@ goto run-tests if "%test_args%"=="" goto jslint if "%config%"=="Debug" set test_args=--mode=debug %test_args% if "%config%"=="Release" set test_args=--mode=release %test_args% -echo running 'cctest' -"%config%\cctest" +echo running 'cctest %cctest_args%' +"%config%\cctest" %cctest_args% echo running 'python tools\test.py %test_args%' python tools\test.py %test_args% goto jslint
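
Taken together, the tools/test.py and vcbuild.bat hunks above switch the CI test runner to TAP version 13: the runner now prints a "TAP version 13" preamble, and per-test diagnostics move into an indented YAML-like block carrying duration_ms, a severity field ('ok', 'fail', 'flaky', or 'crashed') and the captured stack. A rough sketch of the resulting output for one passing and one failing test follows; the second test name, the timings and the stack trace are invented for illustration and are not taken from the patch:

    TAP version 13
    1..2
    ok 1 parallel/test-assert
      ---
      duration_ms: 0.105
      ...
    not ok 2 parallel/test-example
      ---
      duration_ms: 1.403
      severity: fail
      stack: |-
        AssertionError: 1 == 2
            at Object.<anonymous> (/path/to/test-example.js:5:3)
      ...

The matching cctest_args addition in vcbuild.bat (--gtest_output=tap:cctest.tap) routes the C++ unit-test results into cctest.tap, so a CI job running test-ci can collect both test.tap and cctest.tap with the same TAP consumer.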