Web Infrastructure Battlefield – Are Reverse Proxies Enough?

Are reverse proxies enough for developers and system administrators to defend their applications, or are they being silently exploited in the wild, leading to system-level compromises? I have already laid out the foundations of DAST scanning and its results in an earlier post, so most readers will be aware: that alone does not make web applications secure without additional layers of protection, such as reverse proxies.

Reverse Proxies

To understand what a reverse proxy is and what additional security protections a server administrator generally takes, I compiled research on Defencely's own internal infrastructure and came to agree that web applications are dynamic and malicious intent will always find a way in. To address this, I needed to explain to the executives what the risks are and how those risks could be handled through proper threat modeling management. This post is the result of those discussions; it looks at what reverse proxies are in the context of web protection, which has always been a buzzword among web server administrators who, to this day, still fail to protect their applications from attacks.

A reverse proxy can serve one of the following roles, or several of them in parallel:

  1. Load balancer and caching server.
  2. A WAF/IPS-enabled proxy server.
  3. An obfuscation proxy.

Load balancers and caching servers help protect against DDoS (Distributed Denial of Service) attacks, whereas a dedicated IPS/WAF-enabled server helps protect against erroneous TCP packets, detects such anomalous packets, and triggers an alarm when an attack is identified. As an obfuscation proxy, it adds a further layer of protection to the web infrastructure by keeping the software stack used in the application hidden in the headers and elsewhere, which an attacker would otherwise enumerate first when preparing an attack sequence; a minimal illustration of that enumeration step follows.
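As a sketch (assuming curl is available, with example.com standing in for a host you are authorized to probe), the response headers alone often betray the software stack that an obfuscation proxy would strip or rewrite:

    # Fetch only the response headers and look for software fingerprints.
    # Server, X-Powered-By, and Via commonly leak stack and proxy details.
    curl -sI https://example.com | grep -iE '^(server|x-powered-by|via):'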

Now, some infrastructure implementers consider reverse proxies the ultimate way to protect their web assets and web servers, which certainly isn't the case. Reverse proxies only strengthen a defense when combined with additional security measures.

Attack Surface Measurement

To measure the attack surface completely, an attacker or penetration tester has to understand the scope of the security audit and prepare a value-based blueprint of how, methodically, the entire audit will be carried out. Compromising a web application guarded by protections such as a WAF, IPS, IDS, HIPS, additional firewalls, firewall rule-sets, honeypots, and other controls may well look complicated; but for the experts who do this professionally for their food, it does not take long to analyze the enterprise foundations of the attack surface and then prepare an attack plan and the goals associated with the security engagement.

To measure the attack surface area, three distinct things are taken into particular consideration:

  1. Trusts – interactions between infrastructure assets that lie within the security scope.
  2. Accesses – interactions that originate outside the security scope and reach inside it.
  3. Visibility – informational assets whose exposure reveals details of the security scope.

All three of these components of the security audit together make up a relative measure known as porosity, which is itself the entire attack surface. Hence:

Porosity = Trusts + Accesses + Visibility

For example, a host exposing four reachable services (accesses), holding two trust relationships with internal systems (trusts), and leaking three pieces of version information (visibilities) would have a porosity of nine.

Other security measures built by infrastructure implementers are controls, whose sole intention is to limit functionality to where it belongs and thereby govern the workflow of data, the application logic, and the mapping of expected valid input to expected output.

The five widely used control variables in overall infrastructure security mechanisms are:

  1. Authentication
  2. Indemnification
  3. Resilience
  4. Subjugation
  5. Continuity

These map to non-repudiation, confidentiality, privacy, integrity, and alarm respectively, as per the points made previously. Now, to define a vulnerability: for web applications and web infrastructure, a vulnerability is a violation of accesses or trusts. The equation therefore has to be:

Accesses + Trusts (violation of either or both) = Vulnerability

This is the appropriate measure of any vulnerability found in the web infrastructure. To measure weaknesses, the right equation is:

Authentication + Indemnification + Resilience + Subjugation + Continuity = Weaknesses

A violation of any of the above is measured as a weakness, not as a vulnerability in any sense. A concern arises when non-repudiation, confidentiality, privacy, or integrity has been violated. That equation is:

Non-Repudiation + Confidentiality + Privacy + Integrity = Concern

Apart from all of the above, any violation of visibility is referred to as an exposure and carries informational value only.

Reverse Proxy Test Results

Since reverse proxies are implemented to obstruct incoming malicious traffic and to identify or drop packets that could harm the underlying web applications served by another web server, the reverse proxy is an intermediate server; testing it is therefore security infrastructure testing rather than web application vulnerability assessment. Because there is no direct interaction with the web application itself, nor with any of its logical components, a reverse proxy security audit is based solely on infrastructure security testing.

Below, I break down several tools and testing methodologies through which server administrators' firm belief that a reverse proxy is the best security relief against attacks will meet its hopeless end. To test these black-box reverse proxy setups methodically, I first need to interact with the reverse proxies themselves and then escalate my attacks higher into the web application, since any malicious payload must first pass through the intermediary reverse proxy. The billion-dollar question is: are reverse proxies themselves strong enough to prevent attacks, or are they themselves being attacked?

To turn my results methodically into an effective set of information security assurances and provide grounds for compliance, I have organized the logic behind breaking these security obstructions into the categories of access violations, visibility violations, trust violations, and non-repudiation violations. The full research is not yet public, but the measures below have been made public for now.

Access Violations

Since no interaction is made with the web application itself, the test scope covers only the web infrastructure, including the reverse proxy. I will use Facebook's popular proxy as an example; the same tools and methodological techniques can be used in security assessments and audits!

Tools used:

  1. Nmap (Network Mapper)
  2. Unicornscan

Nmap is a great tool for finding access entry points, whether over TCP or UDP (UDP accesses are the concern in this example!). The first Nmap commands I use are captured in the screenshot below:

[Screenshot: initial Nmap scan commands]
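The exact flags from the original screenshot are not recoverable; a representative pair of commands along these lines (assuming a standard Nmap install, with <target> as a placeholder for an authorized host) would be:

    # Fast first pass: probe the 100 most common UDP ports with version detection.
    nmap -sU --top-ports 100 -sV --reason <target>

    # Exhaustive pass: all 65535 UDP ports (slow, but complete).
    nmap -sU -p- --reason <target>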

Nevertheless, this is the way to run UDP scans across all existing ports; the results shown here are illustrative rather than the actual results obtained against a reverse proxy during a security audit. I retrieved a bunch of access entries that could be dug into more deeply:

[Screenshot: Nmap UDP scan results showing open access entries]

Another way to do this quickly and very efficiently (more efficiently than Nmap) is Unicornscan (though it only beats Nmap for UDP scans):

[Screenshot: Unicornscan UDP scan]
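A representative invocation (again a sketch; <target> is a placeholder and the packet rate should be tuned to the network):

    # UDP scan of the full port range in immediate-verbose mode,
    # rate-limited to roughly 300 packets per second.
    unicornscan -mU -Iv -r 300 <target>:1-65535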

And the results came back faster and were reliable. Nmap is still handy and a good fit if Unicornscan is too much.

Visibility Violations

This again could be done via Tamper Data (a Firefox add-on) if the reverse proxy interacts with browsers. If not, for test purposes I used OpenSSL:

[Screenshot: OpenSSL connection to the reverse proxy]
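A minimal reconstruction of that session (assuming the stock OpenSSL command-line tools; <proxy-host> is a placeholder):

    # Open a raw TLS session to the proxy. Everything typed after the
    # handshake completes is sent to the server verbatim, which makes
    # hand-crafted protocol probing easy.
    openssl s_client -connect <proxy-host>:443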

As prompted, there was something the reverse proxy expected that I had failed to enumerate at my end. It might have been a POST request that the proxy was waiting for; this was still a 'might', a shadowy guess that needed confirming. I quickly fired up RESTClient to make sure that was the scenario, and indeed it was!

[Screenshot: RESTClient POST request confirmation]

To prove my earlier theory that OpenSSL can be a handy toolset for the audit, I picked my first target from the publicly available list of proxies that Facebook's servers use, which had already been covered in the access violation tests:

[Screenshot: OpenSSL connection to a Facebook proxy]

As is apparent, Facebook is connected, and I can now pass commands the way the proxy expects. This time I issue a GET request to try to pull content, and check whether the OPTIONS verb/method has been implemented:

[Screenshot: GET request and response over the OpenSSL session]
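The request typed in the screenshot is not recoverable verbatim; inside the open s_client session, a hand-typed request would look roughly like this (host and path are placeholders, and a blank line terminates the headers and sends the request):

    GET / HTTP/1.1
    Host: www.facebook.com

To enumerate the permitted verbs instead, the first line becomes OPTIONS / HTTP/1.1, and the Allow header in the response (if any) lists what the proxy accepts.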

At this point, I was given a '400 Bad Request', which is a client-side status code; it means the client has, mistakenly or intentionally (foul play, as the case might be!), tried to access a resource on the proxy with a request type (verb/method) the proxy did not expect, which in this case was GET. I could have tested more, but notice that the proxy replies with 'HTTP 1.0'; this might be misdirection, or the server might really be implemented on HTTP/1.0 and not use HTTP/1.1 for communication. Trying again:

[Screenshot: second request and the repeated '400 Bad Request' response]

Again, I received the same client-side status code, which means the request was mistaken or malformed. Malformed requests can therefore be used in similar fashion to probe the behavior of whatever proxy the security auditor is facing. This again amounts to a visibility violation: the reverse proxy fails to disguise itself as the real web server, and the attacker understands he is talking to a proxy rather than the origin. A quick way to compare responses is sketched below.
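As a quick sketch of that probing technique (assuming curl; <proxy-host> is a placeholder), comparing the status line and headers of a deliberately malformed request against a normal one often betrays the intermediary:

    # A deliberately invalid method; proxies and origin servers tend to
    # phrase their 400/405 errors differently.
    curl -siX BOGUS https://<proxy-host>/ | head -n 5

    # The same request with a normal verb, for comparison.
    curl -si https://<proxy-host>/ | head -n 5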

Also, the operational controls implemented by HTTPS provide only confidentiality, certainly not privacy. Consider an intermediary server that a client is able to connect to while originally expecting privacy; by default, HTTPS sets operational controls on the server such as:

  1. Confidentiality
  2. Integrity
  3. Subjugation

which are the default operational controls for all cipher suites used in HTTPS. The privacy the client imagined is, in the first place, violated! Such cases arise when the client (user) expects HTTPS to be secure and to readily provide privacy, but never knows how HTTPS has technically been implemented. The privacy violation here is that the server can know where the information came from (the source) and where it is going (the destination). But since confidentiality is the mainstream business reason HTTPS is implemented, the intermediary cannot look at what data it receives or what data it sends on afterwards; the server has no rights over the data, only the endpoints (source and destination) do.

For informational value, here is how to test the available cipher suites:

[Screenshot: sslyze cipher suite scan]
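A representative sslyze invocation (a sketch; <target> is a placeholder, and the flag set varies between sslyze releases):

    # Enumerate supported protocol versions, accepted cipher suites,
    # and certificate details in a single pass.
    sslyze --regular <target>:443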

The results would be:

[Screenshot: sslyze scan results]

There are certainly more cases for SSH and other services of interest. A ton of services could be unknown, and visibility tests are valuable across them too, since they might expose certain data or information entities in ways that should not be exposed in the first place.

Conclusive Results

As discussed in the sections above, the Defencely Red Team has covered most of the research aspects of vulnerability assessments and penetration tests that suit the needs of an enterprise security audit, which is not, and should not be, limited to web applications alone but must extend to the components that support them. Several areas, such as non-repudiation violations, deserve an entire draft of their own. I have been actively involved with the community to share the most interesting results publicly once they are prepared, but as part of my active role at Defencely, I am also responsible for the research copies.

Either way, enterprise business risk assessment should include reverse proxies as a vulnerability assessment criterion and carry them into conclusive testing, since every test case shown so far, for each subset of violations, could end in a compromised application. Once an attacker is able to direct traffic the way he or she intends and the reverse proxy fails at the serious moment of impact, a server administrator should never consider reverse proxies the one ultimate security protection available. Code-level flaws are yet another subject that needs to be broadened and discussed, but that is for another day.

About the Author

Shritam Bhowmick is an application penetration tester equipped with both traditional and modern application penetration testing experience, adding value to the Defencely Inc. Red Team, and currently holds technical responsibility for application threat reporting and coordination for Defencely Inc.'s global clients. Among his accomplishments, he has experience identifying critical application vulnerabilities and adds value to Defencely Inc. with his research work. The R&D sector for application security is growing green at Defencely and is in his care. Professionally, he has experience with several other companies, working on critical application security vulnerability assessments and penetration test engagements, leading the Red Team, and training curious students in his leisure time. He also does independent application security consultancy.

Beyond his professional expertise in application security, Shritam Bhowmick applies his knowledge to constructive Red Team penetration testing engagements for top Indian clients and has a proven record of excellence in the field of IT security. A Google search of his name will suffice. He has delivered numerous research papers, mostly application security centric, and loves to go deep into the details. This approach has led him to innovate rather than re-invent the wheel for others harnessing old security concepts. In what little spare time he has, he blogs, brainstorms on web security concepts, and prefers to stay away from ordinary living. Outside his professional life, he finds bliss in reading books, playing chess, philanthropy, and basketball for the sweat. He wildly loves watching horror movies for the thrill and exploring new places to meet new people.

Defencely Smart DAST Scanner Analysis – Mindblowing Results!

Benchmarks and Evaluation based on:

→ Range of Attack vectors
→ Protocol Support (HTTP/SSL/TLS)
→ Proxy Support
→ Authentication and Session Management
→ Crawling Capability
→ Metadata functionality
→ Parsing
→ Command and Control
→ User Interface
→ Assessment based on

  1. OWASP Top 10
  2. WASC Threat Classification
  3. SANS Top 20
  4. SOX

→ Reporting Customization
→ Reporting Format (XML/HTML/PDF)
→ Commercial/Opensource

As part of a recent benchmarking exercise I ran on application scanners at Defencely, the most interesting DAST scanners were what originally popped into my mind to look over, to see whether they were as capable as manual vulnerability assessments without the risks involved. I found results focused both on zero-false-positive affinity and on the time-saving goals of each scanner tested. Either way, DAST scanners were my lunch for the day and had to be analyzed to let others know which of the open-source and commercial scanners available are the most worthwhile. When I began my research, I had to look past Burp Suite, since it is the only toolset, with its Burp Extenders, that I would require for any manual vulnerability assessment and penetration test of web applications. The focus, however, was not Burp Suite Professional, and I wanted to give our readers pointers to the other scanners at their disposal, with pricing (for commercial frameworks or scanners). I list them in ascending order of the DAST scanning capabilities with which they have attracted corporate giants across the globe (though with many limitations, adverse effects, and considerable cost!):

1. IBM AppScan (Commercial)

  • Scans DAST (Dynamic Application Security Testing).
  • Scans SAST (Static Application Security Testing).
  • Wide range of attack vectors on the WAVSEP benchmark review (http://code.google.com/p/wavsep/).
  • Good score over other web application scanners.
  • Fewer false positives.
  • Download and other references: 01-ibm.com/software/awdtools/appscan/
  • 2015 current version: v9.0 (332 MB or 513 MB on the Windows platform).
  • Audit features comparable to WebInspect, W3af, and Acunetix.
  • Costs $20,300 USD, equivalent to Rs 877,200 INR.


2. WebInspect (Commercial)


 

3. IronWASP (Opensource)

  • Requires .NET SP2.
  • Source code available.
  • Fewer false positives.
  • Editable core scripts in Ruby or Python.
  • Download at: http://ironwasp.org/download.html
  • Stable, flexible, and without cost (free).
  • Runs on Windows with .EXE support; 5.1 MB zip compressed.


4. Acunetix WVS (Commercial)

  • Boasts high performance on Windows, with great security audit features.
  • Comparable to IBM's AppScan, with a lower rating on attack vectors and false positives.
  • Friendly UI, great speed, and good URL discovery capability.
  • High detection accuracy, which makes it a good scanner overall.
  • Comparable with Syhunt Mini (Sandcat Mini) and ZAP.
  • Download at: acunetix.com/vulnerability-scanner
  • For Windows; good fuzzing built in.
  • The Consultant Edition costs $7,955 USD, equivalent to Rs 4,37,445 INR.


5. Syhunt Dynamic (Commercial)

  • Previously known as Sandcat Pro.
  • Syhunt Hybrid performs hybrid DAST and SAST.
  • Great UI (user interface).
  • Designed for the Windows platform.
  • Order at: syhunt.com/?n=Syhunt.Dynamic
  • Good user reviews.
  • Wide source code analysis followed by vulnerability detection.
  • Costs as much as $8,000 USD, equivalent to Rs 4,39,920 INR, per year.


6. Burp Suite Professional (Commercial)

  • Great crawling features with an equally capable scanner.
  • Available for Windows as well as Linux.
  • Good proxy usage.
  • Large database of attack vectors.
  • Get at: portswigger.net/burp/
  • Costs $299 USD per year, equivalent to Rs 16,442 INR.


7. Core Impact (Commercial)

  • Good profiling.
  • Wide range of attack vectors.
  • Extensive pivoting across multi-layer infrastructure.
  • Good report generation capability.
  • IPS/IDS evasion and detection.
  • Accurate detection rate with very few or no false positives.
  • Costs around $30,000 USD, equivalent to Rs 1,649,700 INR.
  • Available only on contact with the Core Impact team.


8. Jsky (Commercial)

  • Good URI indexing.
  • Great user interface.
  • Comparable to open-source security audit tools.
  • An assessment tool as well as a scanner.
  • Priced on a per-PC basis.
  • Contact site: nosec.org/en/evaluate/


9. WebApp360 (Commercial)

  • Covers the OWASP Top 10 in its vulnerability scans.
  • Boasts good performance and speed with low false positives.
  • Stored XSS, reflected XSS, and a wide range of other web attack vectors.
  • Heuristic-based scans with a proper detection rate.
  • Proper web application sanitization detection and reporting.
  • Repository covering the latest Joomla and WordPress plugins and web application services.
  • Checks jQuery, JavaScript-based scripts, and DOM objects.
  • Get WebApp360 with an evaluation demo: ncircle.com/index.php?s=products_webapp360


10. N-Stalker (Commercial)

  • Source code assessment.
  • Wide attack vectors.
  • OWASP Top 10 detection with flawless efficiency.
  • Very low or no false positives.
  • Third-party package vulnerability detection.
  • Great reporting and user reviews.
  • Get at: nstalker.com/buy/
  • Costs $3,199 USD, equivalent to Rs 175,913 INR.


11. W3af (Opensource)

  • Independent open-source web application scanner.
  • Good OWASP Top 10 detection.
  • Slower.
  • Fewer reporting features.
  • Moderate false positives.
  • Great site crawler.
  • Considered good among open-source web application audit and security frameworks.


12. Arachni (Opensource)

  • Command-line utility as well as a GUI.
  • Ruby-library-based scanner framework.
  • Highly automated.
  • Great web application scanning and tuning features.
  • Good record of web application attack vectors.
  • Free and open-source framework.


13. Gamja (Opensource)

  • Good for common web application attack vectors.
  • Command line as well as GUI.
  • Comparable to, but not as powerful as, attack-specific open-source tools like SQLMap, XSSer, and Vega.
  • Free and open source.


14. Vega (Opensource)

  • Good range of attack vectors.
  • Robust, with high detection rates.
  • False positives detected quite often.
  • Open source and free.


15. Nikto (Opensource)

  • High false positives.
  • Good record of web application attack vectors.
  • Open source and free.
  • Included in the BackTrack Linux distribution.


16. Unicorn Scan (Opensource)

  • Great number of payloads.
  • Good record of web attack vectors.
  • High detection rate.
  • Well documented.
  • Suitable for initial web application tests.


17. WebSecurify (Commercial)

  • Wide range of attack vectors.
  • Uses XULRunner for its configuration.
  • Previously open source; now also commercial.
  • Easy-to-use features.
  • Not complex.


18. SkipFish (Opensource)

  • Command-line utility.
  • Wide range of attack vectors.
  • Good support and well documented.
  • Few dependencies on a Linux-based system.
  • Open source and free for all to use.
  • Good overall performance.


19. Grendel-Scan (Opensource)

  • Wide range of scan criteria.
  • Well documented.
  • Command-line utility.
  • Can take Nikto configurations as input.
  • Open source and free to the community.


Miscellaneous Scanners

Scanners specific to a single attack vector (a usage sketch follows this list):
  • SQLMap for SQL injection.
  • XSSer for DOM-based and persistent XSS.
  • Joomscan for Joomla-based vulnerabilities.
  • WPScan for WordPress vulnerabilities.
  • DirBuster for directory crawling.
  • WhatWeb for web application detection.
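As a minimal sketch of how a couple of these are typically invoked (assuming sqlmap and WPScan are installed; the URL is a placeholder and the target must be one you are authorized to test):

    # Probe a GET parameter for SQL injection and list the databases.
    sqlmap -u "http://example.com/item.php?id=1" --dbs --batch

    # Enumerate WordPress plugins and users.
    wpscan --url http://example.com --enumerate p,u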

The list would never end if I had to name every toolset used in vulnerability assessment and penetration testing. That said, DAST scanners are sometimes strongly discouraged because of their adverse effects on the web server and the application themselves. Running a DAST scanner against a segregated clone of the original application is the recommendation; most amateurs, however, use these scanners without having done any risk assessment. Defencely keeps a firm grip on automated DAST scanning through its manual vulnerability assessments, which have zero chance of false positives (something the aforementioned DAST scanners produce in quantity).

Now, one might ask: if DAST scanners are already on the market, and alternatives for web application vulnerability assessment, complete with reporting, ship with these DAST application scanners, why would an enterprise need Defencely's manual application penetration testing and vulnerability assessment solutions? There is a 100-page answer to this, but I will break out some key points for those without a technical background who want a straightforward answer!

DAST Scanners for profit?

Or a miss on the most important vulnerability left undetected?

DAST scanners are notorious for dumping random, enormous logs onto the web server, but that is not the real issue. On a vulnerability assessment, the goal for any penetration tester is to detect the application bugs that count. Since DAST scanners are pre-programmed with limited knowledge of how an application "might" work and cannot follow the application's entire workflow, the logical part is missed, or by default never counted as part of the test. Some scanners, such as HP WebInspect and other dynamic scanners, focus on these particular areas, but they too are limited to external application logic. To lay out the cons, I will break the list down into the key points below:

  • DAST scanners do not locate the specific line of code affected by a vulnerability.
  • Code quality cannot be determined, even in a white-box assessment.
  • Because of that lack of code quality insight, chances are other vulnerabilities are missed.
  • Beyond code-level vulnerabilities, logical bug detection is not included by default.
  • Blind test cases are seldom logically tailored to the payload and hence fail at bypasses.
  • Threat modeling and prior risk assessment are never done, which can harm the production set-up.
  • Scanners generate a lot of traffic and leave behind massive logs along with more false positives.

After the entire DAST scanning operation, the web application penetration tester is left with false positives, and the company with false reports that were originally meant only as an initial assessment to be investigated by security experts. That means more consultancy cost; and if it is skipped, the application remains highly vulnerable, since most of the bugs stay untouched in places a scanner never pinpointed and developers therefore never patched. The risk is that the entire goal of the engagement becomes void, because either way the application gets compromised and customer data is stolen or leaked. This should never be the case. If the third-party company then invests in a re-pentest, yet another round of cost has to be initiated. Overall this is an unproductive, repetitive exercise that is neither profitable nor light on time.

Vulnerability Assessment Solutions

Smart testing that works: for clients, for penetration testers, and for developers.

Methodical vulnerability assessments and penetration tests are never handed down from heaven, nor do they free-fall from the sky. Security experts who do what they do, and penetration testers who have proven themselves at professionally goal-oriented penetration tests, will agree on manually preparing test cases after scoping the web application for the specific goal of an engagement. The client requirement is the clear goal for the penetration testers; satisfying the developers' needs is another goal that has to be met, and hence a requirements analysis is undertaken by the Red Team (a group of penetration testers). After massive enumeration, application scoping, and working out every possible target from the application's background and present status, a full risk assessment for the project is drawn up and presented to the applicable distribution list (executives, leads, and representatives).

After the scoping, subscription pricing, and a formal meeting to set the goals of the penetration testing engagement have been finalized, the testers are given a priority list along with the authorization to conduct the engagements manually. After taking up the engagement contracts and operationally testing the application for the specific loopholes that might pose a risk to the company in production or in development (either or both), a progress chart is prepared showing how much of the testing is covered, what assets are saved, and why certain business risks have now been mitigated, as per the commitments to the goals of the entire project/engagement. This is highly determined, tactical, customized penetration testing designed to deliver to customers what they deserve.

Defencely has a clean set of effective, smart solutions that add value to your business: not only does it offer custom penetration testing services that meet your requirements, but it delivers proven, methodical manual vulnerability testing with real manpower working behind the curtain (the entire Red Team!). This in turn gives the business a clearly understood workflow and helps developers fix potential threats which, if left open, could compromise the entire application and escalate to system compromise, web server compromise, and data exfiltration from database back-ends. The Defencely security solution therefore provides its clients with total 360-degree security and keeps them under its protective umbrella.

About the Author

Shritam Bhowmick is an application penetration tester equipped with both traditional and modern application penetration testing experience, adding value to the Defencely Inc. Red Team, and currently holds technical responsibility for application threat reporting and coordination for Defencely Inc.'s global clients. Among his accomplishments, he has experience identifying critical application vulnerabilities and adds value to Defencely Inc. with his research work. The R&D sector for application security is growing green at Defencely and is in his care. Professionally, he has experience with several other companies, working on critical application penetration test engagements, leading the Red Team, and training curious students in his leisure time. The application security guy!

Beyond his professional expertise in application security, Shritam Bhowmick applies his knowledge to constructive Red Team penetration test engagements for top Indian clients and has a proven record of excellence in the field of IT security. A Google search of his name will suffice. He has delivered numerous research papers, mostly application security centric, and loves to go deep into the details. This approach has led him to innovate rather than re-invent the wheel for others harnessing old security concepts. In what little spare time he has, he blogs, brainstorms on web security concepts, and prefers to stay away from ordinary living. Outside his professional life, he finds bliss in reading books, playing chess, philanthropy, and basketball for the sweat. He wildly loves watching horror movies for the thrill.