Javascript Proxy Request

Proxy – JavaScript – MDN Web Docs

The Proxy object enables you to create a proxy for another object, which can intercept and redefine fundamental operations for that object.

Description
A Proxy is created with two parameters:
target: the original object which you want to proxy
handler: an object that defines which operations will be intercepted and how to redefine intercepted operations.
For example, this code defines a simple target with just two properties, and an even simpler handler with no properties:
const target = {
  message1: "hello",
  message2: "everyone"
};
const handler1 = {};
const proxy1 = new Proxy(target, handler1);
Because the handler is empty, this proxy behaves just like the original target:
console.log(proxy1.message1); // hello
console.log(proxy1.message2); // everyone
To customise the proxy, we define functions on the handler object:
const handler2 = {
  get: function(target, prop, receiver) {
    return "world";
  }
};
const proxy2 = new Proxy(target, handler2);
Here we’ve provided an implementation of the get() handler, which intercepts attempts to access properties in the target.
Handler functions are sometimes called traps, presumably because they trap calls to the target object. The very simple trap in handler2 above redefines all property accessors:
console.log(proxy2.message1); // world
console.log(proxy2.message2); // world
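Property access is not the only interceptable operation. As a short sketch (not from the page above), a has trap can hide properties from the in operator; the underscore-prefix convention here is just an illustration:

```javascript
// Sketch: a `has` trap that reports underscore-prefixed properties as absent
const secret = { _token: "abc", name: "widget" };
const guarded = new Proxy(secret, {
  has(target, prop) {
    // Hide "private" properties from the `in` operator
    if (typeof prop === "string" && prop.startsWith("_")) {
      return false;
    }
    return prop in target;
  }
});
console.log("name" in guarded);   // true
console.log("_token" in guarded); // false
```

Direct property reads still work here because only the has trap is defined; each trap intercepts exactly one kind of operation.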
With the help of the Reflect class we can give some accessors the original behavior and redefine others:
const handler3 = {
  get: function (target, prop, receiver) {
    if (prop === "message2") {
      return "world";
    }
    return Reflect.get(...arguments);
  },
};
const proxy3 = new Proxy(target, handler3);
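Putting the pieces together, this self-contained version shows that with handler3 the first property keeps its original value while the second is overridden:

```javascript
// Self-contained version of the handler3 example: Reflect.get forwards
// the default lookup, so only message2 is redefined.
const target = { message1: "hello", message2: "everyone" };
const handler3 = {
  get: function (target, prop, receiver) {
    if (prop === "message2") {
      return "world";
    }
    // Default behavior for every other property
    return Reflect.get(...arguments);
  }
};
const proxy3 = new Proxy(target, handler3);
console.log(proxy3.message1); // hello
console.log(proxy3.message2); // world
```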
Constructor
Proxy() – Creates a new Proxy object.

Static methods
Proxy.revocable() – Creates a revocable Proxy object.

Examples
Basic example
In this simple example, the number 37 gets returned as the default value when the property name is not in the object. It is using the get() handler.
const handler = {
  get: function(obj, prop) {
    return prop in obj ? obj[prop] : 37;
  }
};
const p = new Proxy({}, handler);
p.a = 1;
p.b = undefined;
console.log(p.a, p.b);
// 1, undefined
console.log('c' in p, p.c);
// false, 37
No-op forwarding proxy
In this example, we are using a native JavaScript object to which our proxy will forward all operations that are applied to it.
const target = {};
const p = new Proxy(target, {});
p.a = 37;
// operation forwarded to the target
console.log(target.a);
// 37
// (The operation has been properly forwarded!)
Note that while this "no-op" works for JavaScript objects, it does not work for native browser objects like DOM Elements.

Validation
With a Proxy, you can easily validate the passed value for an object. This example uses the set() handler.
let validator = {
  set: function(obj, prop, value) {
    if (prop === 'age') {
      if (!Number.isInteger(value)) {
        throw new TypeError('The age is not an integer');
      }
      if (value > 200) {
        throw new RangeError('The age seems invalid');
      }
    }
    // The default behavior to store the value
    obj[prop] = value;
    // Indicate success
    return true;
  }
};
const person = new Proxy({}, validator);
person.age = 100;
console.log(person.age); // 100
person.age = 'young'; // Throws an exception
person.age = 300; // Throws an exception
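Because the trap throws, a failed write never reaches the target; this runnable variant catches the exceptions to make that visible:

```javascript
// Validator proxy with the failed writes caught, showing that the
// target is left untouched when the set trap throws.
let validator = {
  set: function(obj, prop, value) {
    if (prop === 'age') {
      if (!Number.isInteger(value)) {
        throw new TypeError('The age is not an integer');
      }
      if (value > 200) {
        throw new RangeError('The age seems invalid');
      }
    }
    obj[prop] = value;
    return true;
  }
};
const person = new Proxy({}, validator);
person.age = 100;
try {
  person.age = 'young';
} catch (e) {
  console.log(e instanceof TypeError); // true
}
try {
  person.age = 300;
} catch (e) {
  console.log(e instanceof RangeError); // true
}
console.log(person.age); // 100 – the rejected writes never landed
```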
Extending constructor
A function proxy could easily extend a constructor with a new constructor. This example uses the construct() and apply() handlers.
function extend(sup, base) {
  var descriptor = Object.getOwnPropertyDescriptor(
    base.prototype, 'constructor');
  base.prototype = Object.create(sup.prototype);
  var handler = {
    construct: function(target, args) {
      var obj = Object.create(base.prototype);
      this.apply(target, obj, args);
      return obj;
    },
    apply: function(target, that, args) {
      sup.apply(that, args);
      base.apply(that, args);
    }
  };
  var proxy = new Proxy(base, handler);
  descriptor.value = proxy;
  Object.defineProperty(base.prototype, 'constructor', descriptor);
  return proxy;
}
var Person = function(name) {
  this.name = name;
};
var Boy = extend(Person, function(name, age) {
  this.age = age;
});
Boy.prototype.gender = 'M';
var Peter = new Boy('Peter', 13);
console.log(Peter.gender); // "M"
console.log(Peter.name);   // "Peter"
console.log(Peter.age);    // 13
Manipulating DOM nodes
Sometimes you want to toggle the attribute or class name of two different elements. Here's how using the set() handler.
let view = new Proxy({
  selected: null
}, {
  set: function(obj, prop, newval) {
    let oldval = obj[prop];
    if (prop === 'selected') {
      if (oldval) {
        oldval.setAttribute('aria-selected', 'false');
      }
      if (newval) {
        newval.setAttribute('aria-selected', 'true');
      }
    }
    obj[prop] = newval;
    return true;
  }
});
let i1 = view.selected = document.getElementById('item-1');
console.log(i1.getAttribute('aria-selected'));
// 'true'
let i2 = view.selected = document.getElementById('item-2');
console.log(i1.getAttribute('aria-selected'));
// 'false'
Note: if document.getElementById() returns null (for example, because the element does not exist yet), i1 is null and the later setAttribute call fails with "setAttribute is not a function". Run this code only after the elements are present in the DOM.
Value correction and an extra property
The products proxy object evaluates the passed value and converts it to an array if needed. The object also supports an extra property called latestBrowser both as a getter and a setter.
let products = new Proxy({
  browsers: ['Internet Explorer', 'Netscape']
}, {
  get: function(obj, prop) {
    // An extra property
    if (prop === 'latestBrowser') {
      return obj.browsers[obj.browsers.length - 1];
    }
    // The default behavior to return the value
    return obj[prop];
  },
  set: function(obj, prop, value) {
    // An extra property
    if (prop === 'latestBrowser') {
      obj.browsers.push(value);
      return true;
    }
    // Convert the value if it is not an array
    if (typeof value === 'string') {
      value = [value];
    }
    // The default behavior to store the value
    obj[prop] = value;
    return true;
  }
});
console.log(products.browsers);
// ['Internet Explorer', 'Netscape']
products.browsers = 'Firefox';
// pass a string (by mistake)
console.log(products.browsers);
// ['Firefox'] <- no problem, the value is an array
products.latestBrowser = 'Chrome';
console.log(products.browsers);
// ['Firefox', 'Chrome']
console.log(products.latestBrowser);
// 'Chrome'

Finding an array item object by its property
This proxy extends an array with some utility features. As you see, you can flexibly "define" properties without using Object.defineProperties(). This example can be adapted to find a table row by its cell. In that case, the target will be table.rows.
let products = new Proxy([
  { name: 'Firefox', type: 'browser' },
  { name: 'SeaMonkey', type: 'browser' },
  { name: 'Thunderbird', type: 'mailer' }
], {
  get: function(obj, prop) {
    // The default behavior to return the value; prop is usually an integer
    if (prop in obj) {
      return obj[prop];
    }
    // Get the number of products; an alias of products.length
    if (prop === 'number') {
      return obj.length;
    }
    let result, types = {};
    for (let product of obj) {
      if (product.name === prop) {
        result = product;
      }
      if (types[product.type]) {
        types[product.type].push(product);
      } else {
        types[product.type] = [product];
      }
    }
    // Get a product by name
    if (result) {
      return result;
    }
    // Get products by type
    if (prop in types) {
      return types[prop];
    }
    // Get product types
    if (prop === 'types') {
      return Object.keys(types);
    }
    return undefined;
  }
});
console.log(products[0]);         // { name: 'Firefox', type: 'browser' }
console.log(products['Firefox']); // { name: 'Firefox', type: 'browser' }
console.log(products['Chrome']);  // undefined
console.log(products.browser);    // [{ name: 'Firefox', type: 'browser' }, { name: 'SeaMonkey', type: 'browser' }]
console.log(products.types);      // ['browser', 'mailer']
console.log(products.number);     // 3

A complete traps list example
Now, in order to create a complete sample traps list, for didactic purposes, we will try to proxify a non-native object that is particularly suited to this type of operation: the docCookies global object created by a simple cookie framework.
/* var docCookies = ... get the "docCookies" object here */
var docCookies = new Proxy(docCookies, {
  get: function (oTarget, sKey) {
    return oTarget[sKey] || oTarget.getItem(sKey) || undefined;
  },
  set: function (oTarget, sKey, vValue) {
    if (sKey in oTarget) { return false; }
    return oTarget.setItem(sKey, vValue);
  },
  deleteProperty: function (oTarget, sKey) {
    if (!(sKey in oTarget)) { return false; }
    return oTarget.removeItem(sKey);
  },
  enumerate: function (oTarget, sKey) {
    return oTarget.keys();
  },
  ownKeys: function (oTarget, sKey) {
    return oTarget.keys();
  },
  has: function (oTarget, sKey) {
    return sKey in oTarget || oTarget.hasItem(sKey);
  },
  defineProperty: function (oTarget, sKey, oDesc) {
    if (oDesc && 'value' in oDesc) { oTarget.setItem(sKey, oDesc.value); }
    return oTarget;
  },
  getOwnPropertyDescriptor: function (oTarget, sKey) {
    var vValue = oTarget.getItem(sKey);
    return vValue ? {
      value: vValue,
      writable: true,
      enumerable: true,
      configurable: false
    } : undefined;
  },
});

/* Cookies test */
console.log(docCookies.my_cookie1 = 'First value');
console.log(docCookies.getItem('my_cookie1'));
docCookies.setItem('my_cookie1', 'Changed value');
console.log(docCookies.my_cookie1);

Specifications
ECMAScript Language Specification (ECMAScript) # sec-proxy-objects

See also
"Proxies are awesome" Brendan Eich presentation at JSConf (slides)
Tutorial on proxies
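One Proxy feature worth knowing beyond the traps above is the static method Proxy.revocable(), which creates a proxy that can be switched off later; after revoke() every operation on the proxy throws a TypeError:

```javascript
// Proxy.revocable() returns { proxy, revoke }; once revoke() is called,
// any operation on the proxy throws a TypeError.
const { proxy, revoke } = Proxy.revocable({ greeting: "hi" }, {});
console.log(proxy.greeting); // "hi"
revoke();
try {
  proxy.greeting;
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```

This is useful when you hand an object to untrusted code and later want to cut off its access entirely.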

Request – Simplified HTTP client – GitHub

As of Feb 11th 2020, request is fully deprecated. No new changes are expected to land. In fact, none have landed for some time.
For more information about why request is deprecated and possible alternatives refer to
this issue.
Super simple to use
Request is designed to be the simplest way possible to make calls. It supports HTTPS and follows redirects by default.
const request = require('request');
request('http://www.google.com', function (error, response, body) {
  console.error('error:', error); // Print the error if one occurred
  console.log('statusCode:', response && response.statusCode); // Print the response status code if a response was received
  console.log('body:', body); // Print the HTML for the Google homepage.
});
Table of contents
Promises & Async/Await
HTTP Authentication
Custom HTTP Headers
OAuth Signing
Unix Domain Sockets
TLS/SSL Protocol
Support for HAR 1.2
All Available Options
Request also offers convenience methods like
request.defaults and request.post, and there are
lots of usage examples and several
debugging techniques.
You can stream any response to a file stream.
You can also stream a file to a PUT or POST request. This method will also check the file extension against a mapping of file extensions to content-types (in this case application/json) and use the proper content-type in the PUT request (if the headers don’t already provide one).
Request can also pipe to itself. When doing so, content-type and content-length are preserved in the PUT headers.
Request emits a "response" event when a response is received. The response argument will be an instance of http.IncomingMessage.
request
  .get('http://google.com/img.png')
  .on('response', function(response) {
    console.log(response.statusCode) // 200
    console.log(response.headers['content-type']) // 'image/png'
  })
To easily handle errors when streaming requests, listen to the error event before piping:
request
  .get('http://mysite.com/doodle.png')
  .on('error', function(err) {
    console.error(err)
  })
  .pipe(fs.createWriteStream('doodle.png'))
Now let’s get fancy.
http.createServer(function (req, resp) {
  if (req.url === '/doodle.png') {
    if (req.method === 'PUT') {
      req.pipe(request.put('http://mysite.com/doodle.png'))
    } else if (req.method === 'GET' || req.method === 'HEAD') {
      request.get('http://mysite.com/doodle.png').pipe(resp)
    }
  }
})
You can also pipe() from http.ServerRequest instances, as well as to http.ServerResponse instances. The HTTP method, headers, and entity-body data will be sent. Which means that, if you don't really care about security, you can do:
const x = request('http://mysite.com/doodle.png')
x.pipe(resp)
And since pipe() returns the destination stream in Node 0.5.x and later, you can do one line proxying. 🙂
Also, none of this new functionality conflicts with request's previous features, it just expands them.
const r = request.defaults({'proxy': 'http://localproxy.com'})
You can still use intermediate proxies, the requests will still follow HTTP forwards, etc.
back to top
request supports both streaming and callback interfaces natively. If you’d like request to return a Promise instead, you can use an alternative interface wrapper for request. These wrappers can be useful if you prefer to work with Promises, or if you’d like to use async/await in ES2017.
Several alternative interfaces are provided by the request team, including:
request-promise (uses Bluebird Promises)
request-promise-native (uses native Promises)
request-promise-any (uses any-promise Promises)
Also, util.promisify, which is available from Node.js v8.0, can be used to convert a regular function that takes a callback to return a promise instead.
request supports application/x-www-form-urlencoded and multipart/form-data form uploads. For multipart/related refer to the multipart API.
application/x-www-form-urlencoded (URL-Encoded Forms)
URL-encoded forms are simple.
request.post('http://service.com/upload', {form:{key:'value'}})
// or
request.post({url:'http://service.com/upload', form: {key:'value'}}, function(err, httpResponse, body){ /* ... */ })
multipart/form-data (Multipart Form Uploads)
For multipart/form-data we use the form-data library by @felixge. In most cases, you can pass your upload form data via the formData option.
const formData = {
  // Pass a simple key-value pair
  my_field: 'my_value',
  // Pass data via Buffers
  my_buffer: Buffer.from([1, 2, 3]),
  // Pass data via Streams
  my_file: fs.createReadStream(__dirname + '/unicycle.jpg'),
  // Pass multiple values /w an Array
  attachments: [
    fs.createReadStream(__dirname + '/attachment1.jpg'),
    fs.createReadStream(__dirname + '/attachment2.jpg')
  ],
  // Pass optional meta-data with an 'options' object with style: {value: DATA, options: OPTIONS}
  // Use case: for some types of streams, you'll need to provide "file"-related information manually.
  // See the `form-data` README for more information about options.
  custom_file: {
    value: fs.createReadStream('/dev/urandom'),
    options: {
      filename: 'topsecret.jpg',
      contentType: 'image/jpeg'
    }
  }
};
request.post({url: 'http://service.com/upload', formData: formData}, function optionalCallback(err, httpResponse, body) {
  if (err) {
    return console.error('upload failed:', err);
  }
  console.log('Upload successful! Server responded with:', body);
});
For advanced cases, you can access the form-data object itself via r.form(). This can be modified until the request is fired on the next cycle of the event-loop. (Note that calling r.form() will clear the currently set form data for that request.)
// NOTE: Advanced use-case, for normal use see ‘formData’ usage above
const r = request.post('http://service.com/upload', function optionalCallback(err, httpResponse, body) { /* ... */ })
const form = r.form();
form.append('my_field', 'my_value');
form.append('my_buffer', Buffer.from([1, 2, 3]));
form.append('custom_file', fs.createReadStream(__dirname + '/unicycle.jpg'), {filename: 'unicycle.jpg'});
See the form-data README for more information & examples.
Some variations in different HTTP implementations require a newline/CRLF before, after, or both before and after the boundary of a multipart/related request (using the multipart option). This has been observed in the .NET WebAPI version 4.0. You can turn on a boundary preambleCRLF or postambleCRLF by passing them as true to your request options.
request({
  method: 'PUT',
  preambleCRLF: true,
  postambleCRLF: true,
  uri: 'http://service.com/upload',
  multipart: [
    {
      'content-type': 'application/json',
      body: JSON.stringify({foo: 'bar', _attachments: {'message.txt': {follows: true, length: 18, 'content_type': 'text/plain'}}})
    },
    { body: 'I am an attachment' },
    { body: fs.createReadStream('image.png') }
  ],
  // alternatively pass an object containing additional options
  multipart: {
    chunked: false,
    data: [
      { body: 'I am an attachment' }
    ]
  }
},
function (error, response, body) {
  if (error) {
    return console.error('upload failed:', error);
  }
  console.log('Upload successful! Server responded with:', body);
})
HTTP Authentication
request.get('http://some.server.com/').auth('username', 'password', false);
// or
request.get('http://some.server.com/', {
  'auth': {
    'user': 'username',
    'pass': 'password',
    'sendImmediately': false
  }
});
// or
request.get('http://some.server.com/').auth(null, null, true, 'bearerToken');
// or
request.get('http://some.server.com/', {
  'auth': {
    'bearer': 'bearerToken'
  }
});
If passed as an option, auth should be a hash containing values:
user || username
pass || password
sendImmediately (optional)
bearer (optional)
The method form takes parameters
auth(username, password, sendImmediately, bearer).
sendImmediately defaults to true, which causes a basic or bearer
authentication header to be sent. If sendImmediately is false, then
request will retry with a proper authentication header after receiving a
401 response from the server (which must contain a WWW-Authenticate header
indicating the required authentication method).
Note that you can also specify basic authentication using the URL itself, as
detailed in RFC 1738. Simply pass the
user:password before the host with an @ sign:
const username = 'username',
  password = 'password',
  url = 'http://' + username + ':' + password + '@some.server.com';
request({url}, function (error, response, body) {
  // Do more stuff with 'body' here
});
Digest authentication is supported, but it only works with sendImmediately
set to false; otherwise request will send basic authentication on the
initial request, which will probably cause the request to fail.
Bearer authentication is supported, and is activated when the bearer value is
available. The value may be either a String or a Function returning a
String. Using a function to supply the bearer token is particularly useful if
used in conjunction with defaults to allow a single function to supply the
last known token at the time of sending a request, or to compute one on the fly.
HTTP Headers, such as User-Agent, can be set in the options object.
In the example below, we call the github API to find out the number
of stars and forks for the request repository. This requires a
custom User-Agent header as well as https.
const options = {
  url: 'https://api.github.com/repos/request/request',
  headers: {
    'User-Agent': 'request'
  }
};
function callback(error, response, body) {
  if (!error && response.statusCode == 200) {
    const info = JSON.parse(body);
    console.log(info.stargazers_count + " Stars");
    console.log(info.forks_count + " Forks");
  }
}
request(options, callback);
OAuth version 1.0 is supported. The
default signing algorithm is HMAC-SHA1:
// OAuth1.0 - 3-legged server side flow (Twitter example)
// step 1
const qs = require('querystring'),
  oauth = {
    callback: 'http://mysite.com/callback/',
    consumer_key: CONSUMER_KEY,
    consumer_secret: CONSUMER_SECRET
  },
  url = 'https://api.twitter.com/oauth/request_token';
request.post({url: url, oauth: oauth}, function (e, r, body) {
  // Ideally, you would take the body in the response
  // and construct a URL that a user clicks on (like a sign in button).
  // The verifier is only available in the response after a user has
  // verified with twitter that they are authorizing your app.

  // step 2
  const req_data = qs.parse(body)
  const uri = 'https://api.twitter.com/oauth/authenticate'
    + '?' + qs.stringify({oauth_token: req_data.oauth_token})
  // redirect the user to the authorize uri

  // step 3
  // after the user is redirected back to your server
  const auth_data = qs.parse(body),
    oauth = {
      consumer_key: CONSUMER_KEY,
      consumer_secret: CONSUMER_SECRET,
      token: auth_data.oauth_token,
      token_secret: req_data.oauth_token_secret,
      verifier: auth_data.oauth_verifier
    },
    url = 'https://api.twitter.com/oauth/access_token';
  request.post({url: url, oauth: oauth}, function (e, r, body) {
    // ready to make signed requests on behalf of the user
    const perm_data = qs.parse(body),
      oauth = {
        consumer_key: CONSUMER_KEY,
        consumer_secret: CONSUMER_SECRET,
        token: perm_data.oauth_token,
        token_secret: perm_data.oauth_token_secret
      },
      url = 'https://api.twitter.com/1.1/users/show.json',
      qs = {
        screen_name: perm_data.screen_name,
        user_id: perm_data.user_id
      };
    request.get({url: url, oauth: oauth, qs: qs, json: true}, function (e, r, user) {
      console.log(user)
    })
  })
})
For RSA-SHA1 signing, make
the following changes to the OAuth options object:
Pass signature_method: ‘RSA-SHA1’
Instead of consumer_secret, specify a private_key string in
PEM format
For PLAINTEXT signing, make the following changes to the OAuth options object:
Pass signature_method: ‘PLAINTEXT’
To send OAuth parameters via query params or in a post body as described in The
Consumer Request Parameters
section of the oauth1 spec:
Pass transport_method: ‘query’ or transport_method: ‘body’ in the OAuth
options object.
transport_method defaults to ‘header’
To use Request Body Hash you can either
Manually generate the body hash and pass it as a string body_hash: '...'
Automatically generate the body hash by passing body_hash: true
If you specify a proxy option, then the request (and any subsequent
redirects) will be sent via a connection to the proxy server.
If your endpoint is an https url, and you are using a proxy, then
request will send a CONNECT request to the proxy server first, and
then use the supplied connection to connect to the endpoint.
That is, first it will make a request like:
HTTP/1.1 CONNECT endpoint-server.com:80
Host: proxy-server.com
User-Agent: whatever user agent you specify
and then the proxy server makes a TCP connection to endpoint-server
on port 80, and returns a response that looks like:
HTTP/1.1 200 OK
At this point, the connection is left open, and the client is
communicating directly with the machine.
See the wikipedia page on HTTP Tunneling
for more information.
By default, when proxying traffic, request will simply make a
standard proxied request. This is done by making the url
section of the initial line of the request a fully qualified url to
the endpoint.
For example, it will make a single request that looks like:
GET http://endpoint-server.com/some-url HTTP/1.1
Host: proxy-server.com
Other-Headers: all go here
request body or whatever
Because a pure "http" over "http" tunnel offers no additional security
or other features, it is generally simpler to go with a
straightforward HTTP proxy in this case. However, if you would like
to force a tunneling proxy, you may set the tunnel option to true.
You can also make a standard proxied request by explicitly setting
tunnel: false, but note that this will allow the proxy to see the traffic
to/from the destination server.
If you are using a tunneling proxy, you may set the
proxyHeaderWhiteList to share certain headers with the proxy.
You can also set the proxyHeaderExclusiveList to share certain
headers only with the proxy and not with destination host.
By default, this set is:
Note that, when using a tunneling proxy, the proxy-authorization
header and any headers from custom proxyHeaderExclusiveList are
never sent to the endpoint server, but only to the proxy server.
Controlling proxy behaviour using environment variables
The following environment variables are respected by request:
HTTP_PROXY / http_proxy
HTTPS_PROXY / https_proxy
NO_PROXY / no_proxy
When HTTP_PROXY / http_proxy are set, they will be used to proxy non-SSL requests that do not have an explicit proxy configuration option present. Similarly, HTTPS_PROXY / https_proxy will be respected for SSL requests that do not have an explicit proxy configuration option. It is valid to define a proxy in one of the environment variables, but then override it for a specific request, using the proxy configuration option. Furthermore, the proxy configuration option can be explicitly set to false / null to opt out of proxying altogether for that request.
request is also aware of the NO_PROXY / no_proxy environment variables. These variables provide a granular way to opt out of proxying, on a per-host basis. It should contain a comma separated list of hosts to opt out of proxying. It is also possible to opt out of proxying when a particular destination port is used. Finally, the variable may be set to * to opt out of the implicit proxy configuration of the other environment variables.
Here are some examples of valid no_proxy values:
google.com – don't proxy HTTP/HTTPS requests to Google.
google.com:443 – don't proxy HTTPS requests to Google, but do proxy HTTP requests to Google.
google.com:443, yahoo.com:80 – don't proxy HTTPS requests to Google, and don't proxy HTTP requests to Yahoo!
* – ignore https_proxy/http_proxy environment variables altogether.
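To make the matching rules concrete, here is a small hypothetical helper (not part of request's API) that interprets a no_proxy value the way the rules above describe:

```javascript
// Hypothetical sketch of no_proxy matching: an entry without a port opts
// the host out for every port; "host:port" opts out only that port; "*"
// disables implicit proxying entirely.
function shouldProxy(host, port, noProxy) {
  if (noProxy === '*') return false;
  return !noProxy.split(',').some((entry) => {
    const [h, p] = entry.trim().split(':');
    return h === host && (p === undefined || Number(p) === port);
  });
}
console.log(shouldProxy('google.com', 80, 'google.com'));     // false
console.log(shouldProxy('google.com', 80, 'google.com:443')); // true
console.log(shouldProxy('google.com', 443, '*'));             // false
```

The real implementation also handles suffix matching and case normalization; this sketch only captures the host/port logic described above.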
UNIX Domain Sockets
request supports making requests to UNIX Domain Sockets. To make one, use the following URL scheme:
/* Pattern */ 'http://unix:SOCKET:PATH'
/* Example */ request.get('http://unix:/absolute/path/to/unix.socket:/request/path')
Note: The SOCKET path is assumed to be absolute to the root of the host file system.
TLS/SSL Protocol options, such as cert, key and passphrase, can be
set directly in the options object, in the agentOptions property of the options object, or even in https.globalAgent.options. Keep in mind that, although agentOptions allows for a slightly wider range of configurations, the recommended way is via the options object directly, as using agentOptions or https.globalAgent.options would not be applied in the same way in proxied environments (as data travels through a TLS connection instead of an http/https agent).
const fs = require('fs'), path = require('path'), certFile = path.resolve(__dirname, 'ssl/client.crt'), keyFile = path.resolve(__dirname, 'ssl/client.key'), caFile = path.resolve(__dirname, 'ssl/ca.cert.pem'), request = require('request');
const options = {
  url: 'https://api.some-server.com/',
  cert: fs.readFileSync(certFile),
  key: fs.readFileSync(keyFile),
  passphrase: 'password',
  ca: fs.readFileSync(caFile)
};
request.get(options);
Using options.agentOptions
In the example below, we call an API that requires client side SSL certificate
(in PEM format) with passphrase protected private key (in PEM format) and disable the SSLv3 protocol:
const fs = require('fs'), path = require('path'), certFile = path.resolve(__dirname, 'ssl/client.crt'), keyFile = path.resolve(__dirname, 'ssl/client.key'), request = require('request');
const options = {
  url: 'https://api.some-server.com/',
  agentOptions: {
    cert: fs.readFileSync(certFile),
    key: fs.readFileSync(keyFile),
    // Or use `pfx` property replacing `cert` and `key` when using private key, certificate and CA certs in PFX or PKCS12 format:
    // pfx: fs.readFileSync(pfxFilePath),
    passphrase: 'password',
    securityOptions: 'SSL_OP_NO_SSLv3'
  }
};
request.get(options);
It is possible to force using SSLv3 only by specifying secureProtocol:
request.get({url: 'https://api.some-server.com/', agentOptions: {secureProtocol: 'SSLv3_method'}});
It is possible to accept other certificates than those signed by generally allowed Certificate Authorities (CAs).
This can be useful, for example, when using self-signed certificates.
To require a different root certificate, you can specify the signing CA by adding the contents of the CA’s certificate file to the agentOptions.
The certificate the domain presents must be signed by the root certificate specified:
request.get({url: 'https://api.some-server.com/', agentOptions: {ca: fs.readFileSync('ca.cert.pem')}});
The ca value can be an array of certificates, in the event you have a private or internal corporate public-key infrastructure hierarchy. For example, if you want to connect to https://api.some-server.com which presents a key chain consisting of:
its own public key, which is signed by:
an intermediate “Corp Issuing Server”, that is in turn signed by:
a root CA “Corp Root CA”;
you can configure your request as follows:
request.get({url: 'https://api.some-server.com/', agentOptions: {
  ca: [
    fs.readFileSync('Corp Issuing Server.pem'),
    fs.readFileSync('Corp Root CA.pem')
  ]
}});
The options.har property will override the values: url, method, qs, headers, form, formData, body, json, as well as construct multipart data and read files from disk when request.postData.params[].fileName is present without a matching value.
A validation step will check if the HAR Request format matches the latest spec (v1.2) and will skip parsing if not matching.
const request = require('request')
request({
  // will be ignored
  method: 'GET',
  uri: 'http://www.google.com',
  // HTTP Archive Request Object
  har: {
    url: 'http://www.mockbin.com/har',
    method: 'POST',
    headers: [
      {
        name: 'content-type',
        value: 'application/x-www-form-urlencoded'
      }
    ],
    postData: {
      mimeType: 'application/x-www-form-urlencoded',
      params: [
        {
          name: 'foo',
          value: 'bar'
        },
        {
          name: 'hello',
          value: 'world'
        }
      ]
    }
  }
})
// a POST request will be sent to http://www.mockbin.com/har
// with an application/x-www-form-urlencoded body:
// foo=bar&hello=world
request(options, callback)
The first argument can be either a url or an options object. The only required option is uri; all others are optional.
uri || url – fully qualified uri or a parsed url object from url.parse()
baseUrl – fully qualified uri string used as the base url. Most useful with request.defaults, for example when you want to do many requests to the same domain. If baseUrl is https://example.com/api/, then requesting /end/point?test=true will fetch https://example.com/api/end/point?test=true. When baseUrl is given, uri must also be a string.
method – http method (default: "GET")
headers – http headers (default: {})
qs – object containing querystring values to be appended to the uri
qsParseOptions – object containing options to pass to the qs.parse method. Alternatively pass options to the querystring.parse method using this format {sep:';', eq:':', options:{}}
qsStringifyOptions – object containing options to pass to the qs.stringify method. Alternatively pass options to the querystring.stringify method using this format {sep:';', eq:':', options:{}}. For example, to change the way arrays are converted to query strings using the qs module pass the arrayFormat option with one of indices|brackets|repeat
useQuerystring – if true, use querystring to stringify and parse
querystrings, otherwise use qs (default: false). Set this option to
true if you need arrays to be serialized as foo=bar&foo=baz instead of the
default foo[0]=bar&foo[1]=baz.
body – entity body for PATCH, POST and PUT requests. Must be a Buffer, String or ReadStream. If json is true, then body must be a JSON-serializable object.
form – when passed an object or a querystring, this sets body to a querystring representation of value, and adds Content-type: application/x-www-form-urlencoded header. When passed no options, a FormData instance is returned (and is piped to request). See “Forms” section above.
formData – data to pass for a multipart/form-data request. See
Forms section above.
multipart – array of objects which contain their own headers and body
attributes. Sends a multipart/related request. See Forms section
Alternatively you can pass in an object {chunked: false, data: []} where
chunked is used to specify whether the request is sent in
chunked transfer encoding
In non-chunked requests, data items with body streams are not allowed.
preambleCRLF – append a newline/CRLF before the boundary of your multipart/form-data request.
postambleCRLF – append a newline/CRLF at the end of the boundary of your multipart/form-data request.
json – sets body to JSON representation of value and adds Content-type: application/json header. Additionally, parses the response body as JSON.
jsonReviver – a reviver function that will be passed to JSON.parse() when parsing a JSON response body.
jsonReplacer – a replacer function that will be passed to JSON.stringify() when stringifying a JSON request body.
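Since these two options are forwarded to JSON.parse and JSON.stringify, their behavior can be seen in isolation with plain JSON calls (the field names here are just illustrative):

```javascript
// A reviver transforms values while the response body is parsed:
// here a numeric string is converted to a number.
const body = '{"price":"19.99","name":"widget"}';
const parsed = JSON.parse(body, (key, value) =>
  key === 'price' ? Number(value) : value
);
console.log(typeof parsed.price); // number

// A replacer (here an allow-list array) filters which keys are
// serialized into the request body.
console.log(JSON.stringify({ name: 'widget', secret: 'x' }, ['name']));
// {"name":"widget"}
```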
auth – a hash containing values user || username, pass || password, and sendImmediately (optional). See documentation above.
oauth – options for OAuth HMAC-SHA1 signing. See documentation above.
hawk – options for Hawk signing. The credentials key must contain the necessary signing info, see hawk docs for details.
aws – object containing AWS signing information. Should have the properties key, secret, and optionally session (note that this only works for services that require session as part of the canonical string). Also requires the property bucket, unless you're specifying your bucket as part of the path, or the request doesn't use a bucket (i.e. GET Services). If you want to use AWS sign version 4 use the parameter sign_version with value 4, otherwise the default is version 2. If you are using SigV4, you can also include a service property that specifies the service name. Note: you need to npm install aws4 first.
httpSignature – options for the HTTP Signature Scheme using Joyent's library. The keyId and key properties must be specified. See the docs for other options.
followRedirect – follow HTTP 3xx responses as redirects (default: true). This property can also be implemented as function which gets response object as a single argument and should return true if redirects should continue or false otherwise.
followAllRedirects – follow non-GET HTTP 3xx responses as redirects (default: false)
followOriginalHttpMethod – by default we redirect to HTTP method GET. you can enable this property to redirect to the original HTTP method (default: false)
maxRedirects – the maximum number of redirects to follow (default: 10)
removeRefererHeader – removes the referer header when a redirect happens (default: false). Note: if true, referer header set in the initial request is preserved during redirect chain.
encoding – encoding to be used on setEncoding of response data. If null, the body is returned as a Buffer. Anything else (including the default value of undefined) will be passed as the encoding parameter to toString() (meaning this is effectively utf8 by default). (Note: if you expect binary data, you should set encoding: null. )
gzip – if true, add an Accept-Encoding header to request compressed content encodings from the server (if not already present) and decode supported content encodings in the response. Note: Automatic decoding of the response content is performed on the body data returned through request (both through the request stream and passed to the callback function) but is not performed on the response stream (available from the response event) which is the unmodified comingMessage object which may contain compressed data. See example below.
jar – if true, remember cookies for future use (or define your custom cookie jar; see examples section)
agent – http(s).Agent instance to use
agentClass – alternatively specify your agent's class name
agentOptions – and pass its options. Note: for HTTPS see tls API doc for TLS/SSL options and the documentation above.
forever – set to true to use the forever-agent. Note: defaults to http(s).Agent({keepAlive: true}) in node 0.12+
pool – an object describing which agents to use for the request. If this option is omitted the request will use the global agent (as long as your options allow for it). Otherwise, request will search the pool for your custom agent. If no custom agent is found, a new agent will be created and added to the pool. Note: pool is used only when the agent option is not specified.
A maxSockets property can also be provided on the pool object to set the max number of sockets for all agents created (ex: pool: {maxSockets: Infinity}).
Note that if you are sending multiple requests in a loop and creating
multiple new pool objects, maxSockets will not work as intended. To
work around this, either use request.defaults
with your pool options or create the pool object with the maxSockets
property outside of the loop.
timeout – integer containing number of milliseconds, controls two timeouts.
Read timeout: Time to wait for a server to send response headers (and start the response body) before aborting the request.
Connection timeout: Sets the socket to timeout after timeout milliseconds of inactivity. Note that increasing the timeout beyond the OS-wide TCP connection timeout will not have any effect (the default in Linux can be anywhere from 20-120 seconds)
localAddress – local interface to bind for network connections.
proxy – an HTTP proxy to be used. Supports proxy Auth with Basic Auth, identical to support for the url parameter (by embedding the auth info in the uri)
strictSSL – if true, requires SSL certificates be valid. Note: to use your own certificate authority, you need to specify an agent that was created with that CA as an option.
tunnel – controls the behavior of
HTTP CONNECT tunneling
as follows:
undefined (default) – true if the destination is https, false otherwise
true – always tunnel to the destination by making a CONNECT request to
the proxy
false – request the destination as a GET request.
proxyHeaderWhiteList – a whitelist of headers to send to a
tunneling proxy.
proxyHeaderExclusiveList – a whitelist of headers to send
exclusively to a tunneling proxy and not to destination.
time – if true, the request-response cycle (including all redirects) is timed at millisecond resolution. When set, the following properties are added to the response object:
elapsedTime Duration of the entire request/response in milliseconds (deprecated).
responseStartTime Timestamp when the response began (in Unix Epoch milliseconds) (deprecated).
timingStart Timestamp of the start of the request (in Unix Epoch milliseconds).
timings Contains event timestamps in millisecond resolution relative to timingStart. If there were redirects, the properties reflect the timings of the final request in the redirect chain:
socket Relative timestamp when the http module’s socket event fires. This happens when the socket is assigned to the request.
lookup Relative timestamp when the net module’s lookup event fires. This happens when the DNS has been resolved.
connect: Relative timestamp when the net module’s connect event fires. This happens when the server acknowledges the TCP connection.
response: Relative timestamp when the http module’s response event fires. This happens when the first bytes are received from the server.
end: Relative timestamp when the last bytes of the response are received.
timingPhases Contains the durations of each request phase. If there were redirects, the properties reflect the timings of the final request in the redirect chain:
wait: Duration of socket initialization (timings.socket)
dns: Duration of DNS lookup (timings.lookup - timings.socket)
tcp: Duration of TCP connection (timings.connect - timings.lookup)
firstByte: Duration of HTTP server response (timings.response - timings.connect)
download: Duration of HTTP download (timings.end - timings.response)
total: Duration of the entire HTTP round-trip (timings.end)
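To make the relationship concrete, here is how each timingPhases duration is derived from the raw timings timestamps (the sample values below are invented for illustration):

```javascript
// Sample `timings` values: milliseconds relative to timingStart,
// invented for illustration.
const timings = { socket: 2, lookup: 12, connect: 45, response: 80, end: 95 };

// Each phase is the gap between two adjacent timestamps:
const timingPhases = {
  wait: timings.socket,                          // socket initialization
  dns: timings.lookup - timings.socket,          // DNS lookup
  tcp: timings.connect - timings.lookup,         // TCP connection
  firstByte: timings.response - timings.connect, // server processing
  download: timings.end - timings.response,      // response download
  total: timings.end                             // entire round-trip
};

console.log(timingPhases);
// { wait: 2, dns: 10, tcp: 33, firstByte: 35, download: 15, total: 95 }
```

Note that the phases sum exactly to the total, since each one measures the gap between consecutive events.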
har – a HAR 1.2 Request Object, will be processed from HAR format into options overwriting matching values (see the HAR 1.2 section for details)
callback – alternatively pass the request’s callback in the options object
The callback argument gets 3 arguments:
An error when applicable (usually from the http.ClientRequest object)
An http.IncomingMessage object (the response object)
The third is the response body (String or Buffer, or JSON object if the json option is supplied)
Convenience methods
There are also shorthand methods for different HTTP METHODs and some other conveniences.
request.defaults(options)
This method returns a wrapper around the normal request API that defaults
to whatever options you pass to it.
Note: request.defaults() does not modify the global request API;
instead, it returns a wrapper that has your default settings applied to it.
Note: You can call .defaults() on the wrapper that is returned from
request.defaults to add/override defaults that were previously defaulted.
For example:
//requests using baseRequest() will set the 'x-token' header
const baseRequest = request.defaults({
  headers: {'x-token': 'my-token'}
})

//requests using specialRequest() will include the 'x-token' header set in
//baseRequest and will also include the 'special' header
const specialRequest = baseRequest.defaults({
  headers: {special: 'special value'}
})
These HTTP method convenience functions act just like request() but with a default method already set for you:
request.get(): Defaults to method: “GET”.
request.post(): Defaults to method: “POST”.
request.put(): Defaults to method: “PUT”.
request.patch(): Defaults to method: “PATCH”.
request.del() / request.delete(): Defaults to method: “DELETE”.
request.head(): Defaults to method: “HEAD”.
request.options(): Defaults to method: “OPTIONS”.
request.cookie(str) – Function that creates a new cookie.
request.jar() – Function that creates a new cookie jar.
response.caseless.get(name) – Function that returns the specified response header field using a case-insensitive match:
request('http://www.google.com', function (error, response, body) {
  // print the Content-Type header even if the server returned it as 'content-type' (lowercase)
  console.log('Content-Type is:', response.caseless.get('Content-Type'));
});
There are at least three ways to debug the operation of request:
Launch the node process like NODE_DEBUG=request node script.js
(lib,request,otherlib works too).
Set require('request').debug = true at any time (this does the same thing
as #1).
Use the request-debug module to
view request and response headers and bodies.
Most requests to external servers should have a timeout attached, in case the
server is not responding in a timely manner. Without a timeout, your code may
have a socket open/consume resources for minutes or more.
There are two main types of timeouts: connection timeouts and read
timeouts. A connect timeout occurs if the timeout is hit while your client is
attempting to establish a connection to a remote machine (corresponding to the
connect() call on the socket). A read timeout occurs any time the
server is too slow to send back a part of the response.
These two situations have widely different implications for what went wrong
with the request, so it’s useful to be able to distinguish them. You can detect
timeout errors by checking err.code for an ‘ETIMEDOUT’ value. Further, you
can detect whether the timeout was a connection timeout by checking if the
err.connect property is set to true.
request('http://www.google.com', {timeout: 1500}, function(err) {
  console.log(err.code === 'ETIMEDOUT');
  // Set to `true` if the timeout was a connection timeout, `false` or
  // `undefined` otherwise.
  console.log(err.connect === true);
});
const request = require('request');
const rand = Math.floor(Math.random()*100000000).toString();
request({
  method: 'PUT',
  uri: 'http://mikeal.iriscouch.com/testjs/' + rand,
  multipart: [
    { 'content-type': 'application/json',
      body: JSON.stringify({foo: 'bar', _attachments: {'message.txt': {follows: true, length: 18, 'content_type': 'text/plain'}}})
    },
    { body: 'I am an attachment' }
  ]
}, function (error, response, body) {
  if (response.statusCode == 201) {
    console.log('document saved as: http://mikeal.iriscouch.com/testjs/' + rand)
  } else {
    console.log('error: ' + response.statusCode)
    console.log(body)
  }
})
For backwards-compatibility, response compression is not supported by default.
To accept gzip-compressed responses, set the gzip option to true. Note
that the body data passed through request is automatically decompressed
while the response object is unmodified and will contain compressed data if
the server sent a compressed response.
request({ method: 'GET', uri: 'http://www.google.com', gzip: true}, function (error, response, body) {
  // body is the decompressed response body
  console.log('server encoded the data as: ' + (response.headers['content-encoding'] || 'identity'))
  console.log('the decoded data is: ' + body)
})
  .on('data', function(data) {
    // decompressed data as it is received
    console.log('decoded chunk: ' + data)
  })
  .on('response', function(response) {
    // unmodified http.IncomingMessage object
    response.on('data', function(data) {
      // compressed data as it is received
      console.log('received ' + data.length + ' bytes of compressed data')
    })
  })
Cookies are disabled by default (else, they would be used in subsequent requests). To enable cookies, set jar to true (either in defaults or options).
const request = request.defaults({jar: true})
request('http://www.google.com', function () {
  request('http://images.google.com')
})
To use a custom cookie jar (instead of request’s global cookie jar), set jar to an instance of request.jar() (either in defaults or options)
const j = request.jar()
const request = request.defaults({jar: j})
request('http://www.google.com', function () {
  request('http://images.google.com')
})

const j = request.jar();
const cookie = request.cookie('key1=value1');
const url = 'http://www.google.com';
j.setCookie(cookie, url);
request({url: url, jar: j}, function () {
  request('http://images.google.com')
})
Build an HTTPS-intercepting JavaScript proxy in 30 seconds flat

HTTP(S) is the glue that binds together modern architectures, passing requests between microservices and connecting web & mobile apps alike to the APIs they depend on.
What if you could embed scripts directly into that glue?
By doing so, you could:
Inject errors, timeouts and unusual responses to test system reliability.
Record & report traffic from all clients for later analysis.
Redirect requests to replace production servers with local test servers.
Automatically validate and debug HTTP interactions across an entire system.
It turns out setting this up is super quick & easy to do. Using easily available JS libraries and scripts, you can start injecting code into HTTP interactions in no time at all. Let’s see how it works.
Putting the basics together
Mockttp is the open-source HTTP library that powers all the internals of HTTP Toolkit, built in TypeScript. It can act as an HTTP(S) server or proxy, to intercept and mock traffic, transform responses, inject errors, or fire events for all the traffic it receives.
First though, if you want to inspect & edit HTTP manually with a full UI and tools on top, it’s better to download HTTP Toolkit for free right now instead, and start there!
On the other hand, if you do want to build scripts and automations that capture & rewrite HTTPS, or if you’ve used HTTP Toolkit and now you want to create complex custom behaviour on top of its built-in rules, then Mockttp is perfect, and you’re in the right place.
Getting started with Mockttp is easy: install it, define a server, and start it. That looks like this:
Create a new directory
Run npm install mockttp
Create an index.js script:
(async () => {
const mockttp = require('mockttp');
// Create a proxy server with a self-signed HTTPS CA certificate:
const https = await mockttp.generateCACertificate();
const server = mockttp.getLocal({ https });
// Inject ‘Hello world’ responses for all requests
server.anyRequest().thenReply(200, "Hello world");
await server.start();
// Print out the server details:
const caFingerprint = mockttp.generateSPKIFingerprint(https.cert)
console.log(`Server running on port ${server.port}`);
console.log(`CA cert fingerprint ${caFingerprint}`);
})(); // (Run in an async wrapper so we can use top-level await everywhere)
Start the proxy by running node index.js
And you’re done!
To make this even easier I’ve bundled up a ready-to-use repo for this, along with easy Chrome setup to test it, on GitHub.
This creates an HTTPS-intercepting MitM proxy. All requests sent to this server directly or sent through this server as a proxy will receive an immediate 200 “Hello world” response.
From the client’s point of view (once configured) it will appear that this fake response has come directly from the real target URL, even though it’s clearly being injected by our script.
When started, this script prints the port it’s running on and the fingerprint of the CA certificate used, which you can use to quickly (and temporarily) trust that certificate in some clients, e.g. all Chromium browsers.
To test your proxy right now, connect a browser to it (assuming you have Chrome installed) by running:
google-chrome --proxy-server=localhost:$PORT --ignore-certificate-errors-spki-list=$CERT_FINGERPRINT --user-data-dir=$ANY_PATH
You’ll need to replace the $variables appropriately ($ANY_PATH will be used to store the profile data for a new temporary Chrome profile that will trust this CA certificate) and you may need to find the full path to the browser binary on your machine, if it’s not in your $PATH itself.
If you don’t like Chrome, the exact same arguments will work for any other Chromium-based browser, e.g. Edge or Brave, and we’ll look at how to intercept all sorts of other clients too in just a minute.
If you run this, and visit any URL in the browser that opens, you should immediately see your “Hello world” response being returned from all requests to any URL, complete with the nice padlock that confirms that this message definitely came from the real website:
With this, we can now invisibly rewrite real HTTPS traffic. Let’s make that traffic do something more exciting.
Rewriting HTTPS dynamically
Mockttp lets you define rules, which match certain requests, and then perform certain actions.
Above, we’ve created a script that matches all requests, and always returns a fixed response. But there’s a lot of other things we could do, for example:
// Proxy all example.com traffic through as normal, untouched:
server.forHost("example.com").thenPassThrough();

// Make all GET requests to google.com time out:
server.get("google.com").thenTimeout();

// Redirect any github requests to wikipedia.org:
server.forHost("github.com").thenForwardTo("wikipedia.org");

// Intercept /api?userId=123 on any host, serve the response from a file:
server.get("/api").withQuery({ userId: 123 }).thenFromFile(200, "/path/to/a/file");

// Forcibly close any connection if a POST request is sent:
server.post().thenCloseConnection();
Rules like these give you the power to rewrite traffic any way you like: pass it through untouched like normal, replace responses, redirect traffic, you name it.
Replace the “hello world” line in the previous example with some of these rules, restart your server, and then try browsing the web again. Example.com will now work fine, but Google will be completely inaccessible, all POST requests will fail, and GitHub will be inexplicably replaced with the content of Wikipedia.
If you’d like to use more rules like this, the detailed API docs provide more specific information on each of the methods available and how they work.
By default each rule will only be run for the first matching request it sees, until all matching rules have been run, in which case the last matching rule will repeat indefinitely. You can control this more precisely by adding .always(), .once(), .times(n), etc. as part of the rule definition. If you’re defining overlapping rules, you probably want to use .always() every time.
Advanced custom rewrite logic
There are some more advanced types of rule we can add to our script: we can define our own custom request or response transformation logic.
Using this, it’s possible to run arbitrary code that can send a response directly, intercept a request as it’s sent upstream, or intercept a response that’s received from a real server. You can examine all real request & response content in your code, and then complete that request or response with your own changes included.
That looks like this:
// Replace targets entirely with custom logic:
let counter = 0;
server.forHost("google.com").thenCallback((request) => {
  // This code will run for all requests to google.com
  return {
    status: 200,
    // Return a JSON response with an incrementing counter:
    json: { counterValue: counter++ }
  };
});
// Or wrap targets, transforming real requests & responses:
server.anyRequest().thenPassThrough({
  beforeResponse: (response) => {
    // Here you can access the real response:
    console.log(`Got ${response.statusCode} response with body: ${response.body.text}`);

    // Values returned here replace parts of the response:
    if (response.headers['content-type']?.startsWith('text/html')) {
      // E.g. append to all HTML response bodies:
      return {
        headers: { 'content-type': 'text/html' },
        body: response.body.text + " appended"
      };
    } else {
      return {};
    }
  }
});
The first rule will handle all requests by itself. The second rule will forward requests upstream, get a response, and then run the custom logic before returning the appended response back to the client:
You can similarly use beforeRequest to change the content of outgoing requests. Check the docs for a full list of the options and return values available.
Connecting more clients
So far we’ve created a proxy that can automatically rewrite specific traffic from a Chromium-based browser. That’s great, but a bit limited. How do you connect more clients?
There are generally two steps required:
Configure the client to use your Mockttp proxy as its HTTP(S) proxy
Configure the client to trust your HTTPS CA certificate
Configuring your client to use your proxy
Configuring the proxy settings will depend on the specific HTTP client you’re using, but is normally fairly simple and well documented.
You can often get away with just setting the HTTP_PROXY and HTTPS_PROXY environment variables to your proxy’s address (e.g. localhost plus your proxy’s port), as that’s a common convention, but that won’t work everywhere. Alternatively, in many cases you can change your system-wide proxy settings to use this proxy, but be aware that this will intercept all traffic on your machine, not just the target application.
If you want to intercept a Node.js application specifically, there is no global configuration option, but you can use the global-agent npm module with a GLOBAL_AGENT_HTTP_PROXY environment variable to do this, like so:
npm install global-agent
GLOBAL_AGENT_HTTP_PROXY=http://localhost:$PORT node -r 'global-agent/bootstrap' your-script.js
For other cases, you’ll need to look into the docs for the HTTP client in question.
Configuring your client to trust your CA certificate
This is the step that ensures the client trusts your proxy to rewrite HTTPS.
It’s normally easiest to create CA certificate files on disk, and then import them, so you can easily load them directly into other software.
You can do that in JS by saving the key and cert properties of the CA certificate to files, like so:
const fs = require('fs');
const { key, cert } = await mockttp.generateCACertificate();
fs.writeFileSync('./ca.key', key);
fs.writeFileSync('./ca.pem', cert);
This creates ./ca.key (your certificate private key) and ./ca.pem (your public CA certificate) files on disk, so you can use the same key & certificate every time, and so you can import the CA certificate into your HTTPS clients.
You can reuse these saved certificate details, instead of creating a certificate from scratch every time, by changing your server setup to look like this:
const server = mockttp.getLocal({
  https: {
    keyPath: './ca.key',
    certPath: './ca.pem'
  }
});
These certificate files can be imported into most tools either via UIs (e.g. in Firefox’s certificate settings) or via environment variables (e.g. SSL_CERT_FILE=/path/to/ca.pem).
If you want to intercept a Node.js process, there’s a custom NODE_EXTRA_CA_CERTS variable you can use to do this.
As a full example, combining that with the proxy settings above, that looks like this:
export GLOBAL_AGENT_HTTP_PROXY=http://localhost:$PORT # Use your proxy
export NODE_EXTRA_CA_CERTS=/path/to/ca.pem # Trust the cert

# Start your target app, fully intercepted:
node -r 'global-agent/bootstrap' your-app.js
If you’re having trouble with either of these steps, you may be interested in the source behind the HTTP Toolkit Server, which automatically sets up a wide variety of clients for use with HTTP Toolkit in general, from Android to Electron apps to JVM processes.
Going further
To wrap up then, what can you do with this? Here are some ideas:
Create a proxy that completely blocks various hostnames or file types. No more ad networks, no more PDFs, no JS bigger than 100KB, whatever you like.
Proxy traffic during testing to replace some of your internal services or external dependencies with simple mocked versions, with no code changes required in the system under test.
Capture and log all traffic sent through your proxy matching certain patterns.
Randomly add delays or timeouts to test the reliability of your clients in unstable environments.
Combine this with HTTP Toolkit by redirecting some traffic from there to your local proxy, to combine a full debugging UI with any custom logic you please.
Play around with the example repo, and feel free to get in touch on Twitter if you build anything cool or if you have any questions.
