
Add secure proxy support in the client #5992

Merged
merged 1 commit into aio-libs:master from add-secure-proxy-support on Oct 5, 2021

Conversation

@bmbouter (Contributor) commented Sep 8, 2021

What do these changes do?

This patch opens up the code path and adds the implementation that allows end-users to start sending HTTPS requests through HTTPS proxies.

The support for TLS-in-TLS (needed for this to work) has technically been available in the stdlib since Python 3.7, but it is disabled for asyncio via an attribute/flag/toggle. Once upstream CPython finally enables it, aiohttp v3.8+ will be able to work with it out of the box.

Currently the tests monkey-patch asyncio in order to verify that this works. Users who are willing to do the same will be able to take advantage of it right now. Eventually (hopefully starting with Python 3.11), the need for monkey-patching should be eliminated.
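
For illustration, here is a minimal sketch of the kind of monkey-patch meant above. It flips the private flag that stock asyncio checks before allowing start_tls() over an existing TLS transport; the attribute name is taken from the setattr() workaround discussed later in this thread and from python/cpython#28073, so treat it as an assumption rather than a supported API:

# Hedged sketch, not a supported API: mark asyncio's SSL transports as
# start_tls()-compatible so that TLS can be layered on top of an existing
# TLS (proxy) connection. _SSLProtocolTransport and _start_tls_compatible
# are private CPython details (see python/cpython#28073); apply this before
# creating the ClientSession.
import asyncio.sslproto

setattr(asyncio.sslproto._SSLProtocolTransport, "_start_tls_compatible", True)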

Refs:

  • https://bugs.python.org/issue37179
  • python/cpython#28073
  • https://docs.aiohttp.org/en/stable/client_advanced.html#proxy-support

Original details

I tested with this script:

import aiohttp
import asyncio


url = "http://example.com"
url = "https://example.com"

proxy_url = "http://localhost:3128/"
proxy_url = "https://localhost:3130/"

async def main():
    import pydevd_pycharm
    # pydevd_pycharm.settrace('localhost', port=29437, stdoutToServer=True, stderrToServer=True, suspend=False)
    async with aiohttp.ClientSession(headers={"Cache-Control": "no-cache"}) as session:
        async with session.get(url, proxy=proxy_url, verify_ssl=False) as resp:
            print(resp.status)
            print(resp)
            print(await resp.text())


loop = asyncio.get_event_loop()
loop.run_until_complete(main())

And I get

Traceback (most recent call last):
  File "/home/vagrant/devel/aiohttp_proxy_test.py", line 22, in <module>
    loop.run_until_complete(main())
  File "/usr/lib64/python3.9/asyncio/base_events.py", line 642, in run_until_complete
    return future.result()
  File "/home/vagrant/devel/aiohttp_proxy_test.py", line 15, in main
    async with session.get(url, proxy=proxy_url, verify_ssl=False) as resp:
  File "/home/vagrant/devel/aiohttp/aiohttp/client.py", line 1117, in __aenter__
    self._resp = await self._coro
  File "/home/vagrant/devel/aiohttp/aiohttp/client.py", line 520, in _request
    conn = await self._connector.connect(
  File "/home/vagrant/devel/aiohttp/aiohttp/connector.py", line 535, in connect
    proto = await self._create_connection(req, traces, timeout)
  File "/home/vagrant/devel/aiohttp/aiohttp/connector.py", line 890, in _create_connection
    _, proto = await self._create_proxy_connection(req, traces, timeout)
  File "/home/vagrant/devel/aiohttp/aiohttp/connector.py", line 1139, in _create_proxy_connection
    transport, proto = await self._wrap_create_connection(
  File "/home/vagrant/devel/aiohttp/aiohttp/connector.py", line 969, in _wrap_create_connection
    return await self._loop.create_connection(*args, **kwargs)  # type: ignore  # noqa
  File "/usr/lib64/python3.9/asyncio/base_events.py", line 1081, in create_connection
    transport, protocol = await self._create_connection_transport(
  File "/usr/lib64/python3.9/asyncio/base_events.py", line 1111, in _create_connection_transport
    await waiter
ConnectionResetError

The squid proxy does show `1631129861.057 45 ::1 TCP_TUNNEL/200 39 CONNECT example.com:443 - HIER_DIRECT/93.184.216.34 -` and I can see the 200 OK from the direct connection.

What fails is taking the TCP socket from the direct connection and TLS-wrapping it: at that point I get a ConnectionResetError.

Are there changes in behavior for the user?

They get the ability to send out HTTPS queries through HTTPS proxies if they monkey-patch the stdlib asyncio.

Related issue number

Resolves #3816
Resolves #4268

Checklist

  • I think the code is well written
  • Unit tests for the changes exist
  • Documentation reflects the changes
  • If you provide code modification, please add yourself to CONTRIBUTORS.txt
    • The format is <Name> <Surname>.
    • Please keep alphabetical order, the file is sorted by names.
  • Add a new news fragment into the CHANGES folder
    • name it <issue_id>.<type> for example (588.bugfix)
    • if you don't have an issue_id change it to the pr id after creating the pr
    • ensure type is one of the following:
      • .feature: Signifying a new feature.
      • .bugfix: Signifying a bug fix.
      • .doc: Signifying a documentation improvement.
      • .removal: Signifying a deprecation or removal of public API.
      • .misc: A ticket has been closed, but it is not of interest to users.
    • Make sure to use full sentences with correct case and punctuation, for example: "Fix issue with non-ascii contents in doctest text files."

@bmbouter bmbouter changed the base branch from master to 3.7 September 8, 2021 19:35

@bmbouter (Contributor, Author) commented Sep 8, 2021

Note: of the 4 cases (HTTP or HTTPS proxy, with an HTTP or HTTPS URL being requested), 3 of the 4 work.

  • http proxy, http url: works
  • http proxy, https url: works
  • https proxy, http url: works
  • https proxy, https url: does not work

@bmbouter (Contributor, Author) commented Sep 9, 2021

Here are some notes from a discussion with @webknjaz:

  • An actual PR fix should go against master, which will become aiohttp 4.0. The 3.7.z line very likely won't have another release. Any PR would be merged to master and then possibly backported to 3.8.
  • aiohttp can't support this because asyncio itself doesn't support TLS in TLS. See this bug against Python.
  • loop.start_tls() seems incompatible with how aiohttp handshakes TLS.
  • It is claimed that uvloop does support TLS in TLS correctly.
  • Here's another reproducer.

Things for me to try:

  • retest this against master
  • attempt my reproducer with uvloop
  • try the other reproducer to learn more

@bmbouter (Contributor, Author) commented Sep 9, 2021

Here's an interesting post summarizing how httpcore (another asyncio-based HTTP client) is pursuing a resolution to this: encode/httpcore#254 (comment). It has details about why this doesn't work.

Also, here's what I believe is the only remaining fix needed for CPython: python/cpython#28073

@webknjaz (Member) commented Sep 9, 2021

The 3.y line very likely won't have another release.

Correction: 3.7.x won't have a release but 3.8 will. But master will be 4.0+ w/o a specific plan currently.

@bmbouter (Contributor, Author) commented Sep 9, 2021

When I try using uvloop as the event loop with my reproducer I get this:

Traceback (most recent call last):
  File "/home/vagrant/devel/aiohttp_proxy_test.py", line 23, in <module>
    loop.run_until_complete(main())
  File "uvloop/loop.pyx", line 1501, in uvloop.loop.Loop.run_until_complete
  File "/home/vagrant/devel/aiohttp_proxy_test.py", line 16, in main
    async with session.get(url, proxy=proxy_url, verify_ssl=False) as resp:
  File "/home/vagrant/devel/aiohttp/aiohttp/client.py", line 1117, in __aenter__
    self._resp = await self._coro
  File "/home/vagrant/devel/aiohttp/aiohttp/client.py", line 520, in _request
    conn = await self._connector.connect(
  File "/home/vagrant/devel/aiohttp/aiohttp/connector.py", line 535, in connect
    proto = await self._create_connection(req, traces, timeout)
  File "/home/vagrant/devel/aiohttp/aiohttp/connector.py", line 890, in _create_connection
    _, proto = await self._create_proxy_connection(req, traces, timeout)
  File "/home/vagrant/devel/aiohttp/aiohttp/connector.py", line 1139, in _create_proxy_connection
    transport, proto = await self._wrap_create_connection(
  File "/home/vagrant/devel/aiohttp/aiohttp/connector.py", line 969, in _wrap_create_connection
    return await self._loop.create_connection(*args, **kwargs)  # type: ignore  # noqa
  File "uvloop/loop.pyx", line 2069, in create_connection
  File "uvloop/loop.pyx", line 2064, in uvloop.loop.Loop.create_connection
  File "uvloop/sslproto.pyx", line 517, in uvloop.loop.SSLProtocol._on_handshake_complete
ConnectionResetError

@jborean93 (Contributor)

I've had a play around with this today and, while I haven't found a solution, I think I may know why the connection is being reset. This process

aiohttp/aiohttp/connector.py

Lines 1076 to 1124 in 0fe5b62

proxy_req.method = hdrs.METH_CONNECT
proxy_req.url = req.url
key = dataclasses.replace(
    req.connection_key, proxy=None, proxy_auth=None, proxy_headers_hash=None
)
conn = Connection(self, key, proto, self._loop)
proxy_resp = await proxy_req.send(conn)
try:
    protocol = conn._protocol
    assert protocol is not None
    protocol.set_response_params()
    resp = await proxy_resp.start(conn)
except BaseException:
    proxy_resp.close()
    conn.close()
    raise
else:
    conn._protocol = None
    conn._transport = None
    try:
        if resp.status != 200:
            message = resp.reason
            if message is None:
                message = RESPONSES[resp.status][0]
            raise ClientHttpProxyError(
                proxy_resp.request_info,
                resp.history,
                status=resp.status,
                message=message,
                headers=resp.headers,
            )
        rawsock = transport.get_extra_info("socket", default=None)
        if rawsock is None:
            raise RuntimeError("Transport does not expose socket instance")
        # Duplicate the socket, so now we can close proxy transport
        rawsock = rawsock.dup()
    finally:
        transport.close()

    transport, proto = await self._wrap_create_connection(
        self._factory,
        timeout=timeout,
        ssl=sslcontext,
        sock=rawsock,
        server_hostname=req.host,
        req=req,
    )
finally:
    proxy_resp.close()
sends the CONNECT request to the proxy and then duplicates the socket for the newly wrapped transport. Closing the original transport on line 1113 seems to bring down the HTTPS context, and thus _wrap_create_connection raises the ConnectionResetError. If I comment out the transport.close() line and change the self._wrap_create_connection call to

setattr(transport, "_start_tls_compatible", True)  # for https://github.com/python/cpython/pull/28073
proto = self._factory()
transport = await self._loop.start_tls(transport, proto, sslcontext, server_hostname=req.host)
proto.connection_made(transport)

I am able to complete the TLS handshake for an HTTPS connection over an HTTPS proxy, but once the scope of the function is exited the garbage collector closes the connection. The whole asyncio protocol and transport mechanism is something I don't have much knowledge of, but there seems to be some sort of relationship between the original connection that the CONNECT request was sent over and the TLS context, which needs to be kept alive. Unfortunately at this point it's beyond my understanding.
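
To make that relationship concrete, here is a purely hypothetical sketch (the TunnelHandle name and attributes are made up for illustration; this is not aiohttp code): whatever object ends up owning the TLS-in-TLS transport also has to keep a reference to the original proxy transport, otherwise the garbage collector closes the tunnel that the inner TLS session is running on.

# Hypothetical illustration only -- not aiohttp's implementation.
class TunnelHandle:
    """Owns an inner (TLS-in-TLS) transport built on top of a proxy transport."""

    def __init__(self, proxy_transport, tls_transport, protocol):
        # Keeping proxy_transport referenced is the important part: if it is
        # dropped, the GC closes the underlying socket and the inner TLS
        # connection dies with it.
        self._proxy_transport = proxy_transport
        self.transport = tls_transport
        self.protocol = protocol

    def close(self) -> None:
        self.transport.close()
        self._proxy_transport.close()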


@bmbouter (Contributor, Author)

@jborean93 can you post a diff of this? I was applying it, but since it doesn't fully resolve the issue due to GC I wanted to make sure I was getting as far as you got. Thanks!

@bmbouter (Contributor, Author) commented Sep 10, 2021

To make testing easier, I was able to reproduce curl working with a secure proxy (and aiohttp not working with it) using:

# Make your keys
sudo openssl genrsa -out /etc/ssl/key.pem 4096
sudo openssl req -new -x509 -key /etc/ssl/key.pem -out /etc/ssl/cert.pem -days 1826

# Install proxy and run it
pip install proxy.py
proxy --hostname 0.0.0.0 --cert-file /etc/ssl/cert.pem --key-file /etc/ssl/key.pem

# Test it with curl in another tty
curl -v --proxy-insecure --proxy https://127.0.0.1:8899 https://example.com

@webknjaz (Member)

By the way, pip install trustme should give you an easier command for generating the CA certs (it's integrated in our tests and recently gained a CLI too).

@bmbouter (Contributor, Author) commented Sep 10, 2021

Trustme was easier!

# Make your keys
pip install trustme
python -m trustme

# Install proxy and run it
pip install proxy.py
python -m proxy --hostname 0.0.0.0 --cert-file ./server.pem --key-file ./server.key

# Test it with curl in another tty
curl -v --proxy-cacert client.pem --proxy https://127.0.0.1:8899 https://example.com

@bmbouter (Contributor, Author) commented Sep 10, 2021

Interestingly, proxy.py shows the same "30-ish" second hang that I was seeing going through squid as the secure proxy. A few days ago I figured out that if I asked curl to hang up the connection by specifying HTTP 1.0, it would return immediately every time. So the curl command for that is:

curl -v --http1.0 --proxy-cacert client.pem --proxy https://127.0.0.1:8899 https://example.com

@bmbouter (Contributor, Author) commented Sep 10, 2021

I switched my branch to aio-libs/aiohttp:master and now I'm using this reproducer:

import aiohttp
import asyncio
import ssl

external_url = "https://httpbin.org/get"
proxy_url = "https://localhost:8899/"
client_key = "/home/vagrant/client.pem"

async def main():
    # Setup SSL Fun
    ssl_ctx = ssl.create_default_context(cafile=client_key)

    async with aiohttp.ClientSession(headers={"Cache-Control": "no-cache"}) as session:
        async with session.get(external_url, proxy=proxy_url, ssl=ssl_ctx) as resp:
            print(resp.status)
            print(resp)
            print(await resp.text())

loop = asyncio.get_event_loop()
loop.run_until_complete(main())

For me (with proxy.py) it produces:

Traceback (most recent call last):
  File "/home/vagrant/devel/aiohttp/aiohttp/connector.py", line 946, in _wrap_create_connection
    return await self._loop.create_connection(*args, **kwargs)  # type: ignore[return-value]  # noqa
  File "/usr/lib64/python3.9/asyncio/base_events.py", line 1081, in create_connection
    transport, protocol = await self._create_connection_transport(
  File "/usr/lib64/python3.9/asyncio/base_events.py", line 1112, in _create_connection_transport
    await waiter
ConnectionResetError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/vagrant/devel/aiohttp_proxy_test.py", line 20, in <module>
    loop.run_until_complete(main())
  File "/usr/lib64/python3.9/asyncio/base_events.py", line 642, in run_until_complete
    return future.result()
  File "/home/vagrant/devel/aiohttp_proxy_test.py", line 14, in main
    async with session.get(external_url, proxy=proxy_url, ssl=ssl_ctx) as resp:
  File "/home/vagrant/devel/aiohttp/aiohttp/client.py", line 1077, in __aenter__
    self._resp = await self._coro
  File "/home/vagrant/devel/aiohttp/aiohttp/client.py", line 479, in _request
    conn = await self._connector.connect(
  File "/home/vagrant/devel/aiohttp/aiohttp/connector.py", line 508, in connect
    proto = await self._create_connection(req, traces, timeout)
  File "/home/vagrant/devel/aiohttp/aiohttp/connector.py", line 867, in _create_connection
    _, proto = await self._create_proxy_connection(req, traces, timeout)
  File "/home/vagrant/devel/aiohttp/aiohttp/connector.py", line 1115, in _create_proxy_connection
    transport, proto = await self._wrap_create_connection(
  File "/home/vagrant/devel/aiohttp/aiohttp/connector.py", line 952, in _wrap_create_connection
    raise client_error(req.connection_key, exc) from exc
aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host httpbin.org:443 ssl:<ssl.SSLContext object at 0x7f0fcabcc4c0> [None]

@psf-chronographer psf-chronographer bot added the bot:chronographer:provided label Sep 10, 2021
@bmbouter bmbouter changed the base branch from 3.7 to master September 10, 2021 20:20

@webknjaz (Member)

So with the recent commit (e76ef53) the following repro works for me:

# tls-proxy-repro.py 
import aiohttp
import asyncio


url = "https://httpbin.org/get"

proxy_url = "https://localhost:8899/"

async def main():
    async with aiohttp.ClientSession(
            connector=aiohttp.TCPConnector(ssl=False),
            headers={"Cache-Control": "no-cache"},
            trust_env=True,
    ) as session:
        async with session.get(url, proxy=proxy_url) as resp:
            print(resp.status)
            print(resp)
            print(await resp.text())


__name__ == "__main__" and asyncio.run(main())

@bmbouter (Contributor, Author) commented Sep 12, 2021

I tried this patch (e76ef53) and your reproducer on my dev machine and also on a fresh F34 install (which has Python 3.9.7). It didn't work for me in either environment and yielded the same traceback.

Traceback (most recent call last):
  File "/home/vagrant/devel/aiohttp_proxy_test.py", line 22, in <module>
    __name__ == "__main__" and asyncio.run(main())
  File "/usr/lib64/python3.9/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib64/python3.9/asyncio/base_events.py", line 642, in run_until_complete
    return future.result()
  File "/home/vagrant/devel/aiohttp_proxy_test.py", line 16, in main
    async with session.get(url, proxy=proxy_url) as resp:
  File "/home/vagrant/devel/aiohttp/aiohttp/client.py", line 1077, in __aenter__
    self._resp = await self._coro
  File "/home/vagrant/devel/aiohttp/aiohttp/client.py", line 479, in _request
    conn = await self._connector.connect(
  File "/home/vagrant/devel/aiohttp/aiohttp/connector.py", line 508, in connect
    proto = await self._create_connection(req, traces, timeout)
  File "/home/vagrant/devel/aiohttp/aiohttp/connector.py", line 867, in _create_connection
    _, proto = await self._create_proxy_connection(req, traces, timeout)
  File "/home/vagrant/devel/aiohttp/aiohttp/connector.py", line 1125, in _create_proxy_connection
    tls_transport.get_protocol().connection_made(
AttributeError: 'NoneType' object has no attribute 'get_protocol'

@webknjaz (Member)

That's weird. I wonder in what cases start_tls() returns None. IIRC F34 enforces a stricter TLS setup, so maybe if something's not right at the TLS level it returns None while failing to create a new transport. I'm on Gentoo and I don't have this problem. OTOH I applied the patch to the install from PyPI, not to the recent code on master, so there's that...

@jborean93 (Contributor)

The issue is that the CONNECT response is still closing the original connection, so the EOF is still being handled, causing the failure down the line. @webknjaz, you might still have

payload.on_eof(self._response_eof)

commented out, allowing it to still run for you. If I do that then things start working again, but with it running the connection is still FIN'd once the CONNECT response is processed.

@bmbouter (Contributor, Author) commented Sep 13, 2021

I did receive the 200 OK response, and the proxy shows a payload being returned correctly, when I apply this patch to the 3.7 branch and comment out the payload.on_eof(self._response_eof) call, but it still errors later on. I think it's what you're saying, @jborean93: the connection is FIN'd once the CONNECT response is processed. Here's what it shows for me:

python ~/devel/aiohttp_proxy_test.py 
200
<ClientResponse(https://httpbin.org/get) [200 OK]>
<CIMultiDictProxy('Date': 'Mon, 13 Sep 2021 15:19:07 GMT', 'Content-Type': 'application/json', 'Content-Length': '349', 'Connection': 'keep-alive', 'Server': 'gunicorn/19.9.0', 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Credentials': 'true')>

{
  "args": {}, 
  "headers": {
    "Accept": "*/*", 
    "Accept-Encoding": "gzip, deflate", 
    "Cache-Control": "no-cache", 
    "Host": "httpbin.org", 
    "User-Agent": "Python/3.9 aiohttp/3.7.4.post0", 
    "X-Amzn-Trace-Id": "Root=1-613f6beb-3790eb756b11edfb70c500a5"
  }, 
  "origin": "174.99.22.188", 
  "url": "https://httpbin.org/get"
}

Fatal error on SSL transport
protocol: <asyncio.sslproto.SSLProtocol object at 0x7ff27bce1430>
transport: <_SelectorSocketTransport closing fd=6>
Traceback (most recent call last):
  File "/usr/lib64/python3.9/asyncio/selector_events.py", line 918, in write
    n = self._sock.send(data)
OSError: [Errno 9] Bad file descriptor

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib64/python3.9/asyncio/sslproto.py", line 684, in _process_write_backlog
    self._transport.write(chunk)
  File "/usr/lib64/python3.9/asyncio/selector_events.py", line 924, in write
    self._fatal_error(exc, 'Fatal write error on socket transport')
  File "/usr/lib64/python3.9/asyncio/selector_events.py", line 719, in _fatal_error
    self._force_close(exc)
  File "/usr/lib64/python3.9/asyncio/selector_events.py", line 731, in _force_close
    self._loop.call_soon(self._call_connection_lost, exc)
  File "/usr/lib64/python3.9/asyncio/base_events.py", line 746, in call_soon
    self._check_closed()
  File "/usr/lib64/python3.9/asyncio/base_events.py", line 510, in _check_closed
    raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed

So what is the right way to solve this? What can be done?

@bmbouter bmbouter changed the base branch from master to 3.8 September 14, 2021 15:25
@webknjaz webknjaz removed the bot:chronographer:provided label Sep 14, 2021
@jborean93 (Contributor)

When testing offline, changing https://github.com/aio-libs/aiohttp/pull/5992/files#diff-c143bd72e7f6748d879e5a0466a2872e05dc2d0b4a1157e9221702cc3c5516bbR1124 to the below was able to delay the connection being closed.

protocol.set_response_params(read_until_eof=True)

This works because it switches the behaviour of the raw HttpPayloadParser so that it returns a generic StreamReader rather than an EmptyStreamReader. With an EmptyStreamReader, EOF is always marked, so processing the CONNECT response brings down the connection, whereas a StreamReader is not marked as EOF and the connection stays active for the START_TLS step.
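
As a tiny illustration of that difference (a sketch assuming the EmptyStreamReader import location from aiohttp 3.x's aiohttp.streams module; this is not the code path the connector actually runs):

# Hedged sketch: an EmptyStreamReader always reports EOF, so a CONNECT response
# parsed into one makes the tunnel look "finished" immediately, while a regular
# StreamReader only reaches EOF once feed_eof() is called on it.
from aiohttp.streams import EmptyStreamReader

print(EmptyStreamReader().at_eof())  # True -> connection released right after CONNECT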


@webknjaz webknjaz added the backport-3.8, client, enhancement, and python labels Oct 4, 2021
@webknjaz webknjaz self-assigned this Oct 4, 2021
@webknjaz webknjaz marked this pull request as draft October 4, 2021 02:02
@webknjaz webknjaz force-pushed the add-secure-proxy-support branch 2 times, most recently from c336e66 to 6d95407 on October 5, 2021 22:31
@webknjaz webknjaz changed the title Add secure proxy support Add secure proxy support in the client Oct 5, 2021
@webknjaz webknjaz added the reproducer: present label Oct 5, 2021
This patch opens up the code path and adds the implementation that
allows end-users to start sending HTTPS requests through
HTTPS proxies.

The support for TLS-in-TLS (needed for this to work) in the stdlib is
kinda available since Python 3.7 but is disabled for `asyncio` with an
attribute/flag/toggle. When the upstream CPython enables it finally,
aiohttp v3.8+ will be able to work with it out of the box.

Currently the tests monkey-patch `asyncio` in order to verify that
this works. The users who are willing to do the same, will be able to
take advantage of it right now. Eventually (hopefully starting Python
3.11), the need for monkey-patching should be eliminated.

Refs:
* https://bugs.python.org/issue37179
* python/cpython#28073
* https://docs.aiohttp.org/en/stable/client_advanced.html#proxy-support
* aio-libs#6044

Resolves aio-libs#3816
Resolves aio-libs#4268

Co-Authored-By: Brian Bouterse <bmbouter@gmail.com>
Co-Authored-By: Jordan Borean <jborean93@gmail.com>
Co-Authored-By: Sviatoslav Sydorenko <webknjaz@redhat.com>
@webknjaz webknjaz marked this pull request as ready for review October 5, 2021 22:57
@webknjaz webknjaz enabled auto-merge (squash) October 5, 2021 22:59
@webknjaz webknjaz merged commit c29e5fb into aio-libs:master Oct 5, 2021
@patchback bot (Contributor) commented Oct 5, 2021

Backport to 3.8: 💔 cherry-picking failed — conflicts found

❌ Failed to cleanly apply c29e5fb on top of patchback/backports/3.8/c29e5fb58efb65c6198ea75b95805ab972e98adc/pr-5992

Backporting merged PR #5992 into master

  1. Ensure you have a local repo clone of your fork. Unless you cloned it
    from the upstream, this would be your origin remote.
  2. Make sure you have an upstream repo added as a remote too. In these
    instructions you'll refer to it by the name upstream. If you don't
    have it, here's how you can add it:
    $ git remote add upstream https://github.com/aio-libs/aiohttp.git
  3. Ensure you have the latest copy of upstream and prepare a branch
    that will hold the backported code:
    $ git fetch upstream
    $ git checkout -b patchback/backports/3.8/c29e5fb58efb65c6198ea75b95805ab972e98adc/pr-5992 upstream/3.8
  4. Now, cherry-pick PR #5992 contents into that branch:
    $ git cherry-pick -x c29e5fb58efb65c6198ea75b95805ab972e98adc
    If it'll yell at you with something like fatal: Commit c29e5fb58efb65c6198ea75b95805ab972e98adc is a merge but no -m option was given., add -m 1 as follows instead:
    $ git cherry-pick -m1 -x c29e5fb58efb65c6198ea75b95805ab972e98adc
  5. At this point, you'll probably encounter some merge conflicts. You must
    resolve them in order to preserve the patch from PR #5992 as close to the
    original as possible.
  6. Push this branch to your fork on GitHub:
    $ git push origin patchback/backports/3.8/c29e5fb58efb65c6198ea75b95805ab972e98adc/pr-5992
  7. Create a PR, ensure that the CI is green. If it's not — update it so that
    the tests and any other checks pass. This is it!
    Now relax and wait for the maintainers to process your pull request
    when they have some cycles to do reviews. Don't worry — they'll tell you if
    any improvements are necessary when the time comes!

🤖 @patchback
I'm built with octomachinery and
my source is open — https://github.com/sanitizers/patchback-github-app.

@aio-libs-github-bot (Contributor)

💔 Backport was not successful

The PR was attempted to be backported to the following branches:

  • ❌ 3.8: Commit could not be cherrypicked due to conflicts

webknjaz pushed a commit to webknjaz/aiohttp that referenced this pull request Oct 5, 2021
webknjaz pushed a commit to webknjaz/aiohttp that referenced this pull request Oct 5, 2021
@Hanaasagi Hanaasagi mentioned this pull request Oct 6, 2021
webknjaz pushed a commit to webknjaz/aiohttp that referenced this pull request Oct 11, 2021
webknjaz pushed a commit to webknjaz/aiohttp that referenced this pull request Oct 12, 2021
webknjaz added a commit that referenced this pull request Oct 12, 2021
…ent (#6049)

Labels: bot:chronographer:provided, client, enhancement, python, reproducer: present
Development

Successfully merging this pull request may close these issues:

  • Use loop.starttls() on Python 3.7+
  • Implement HTTPS Proxy Support
4 participants