Estimating Web Application Security Testing

I was recently asked how to estimate time and set an appropriate time limit for a security program, and which items should be inspected before deciding a timeline for particular tests. While the post below should hold for small to medium scale businesses, it might fall short for enterprise organizations; even so, it should provide an elementary insight in generic terms.

We have all heard about the time a pentester invests to work out the logistics of enumerating a web application that is about to undergo security testing. In generic terms, certain assumptions are already in place, and it is natural for security managers to estimate the necessary time-frame so they can cut implementation costs, arrange appropriate resources for the task, and so on. This post covers the minimum needed to determine these metrics.

Items to Consider

  • Number of URLs that can be fetched via Burp’s Spider
  • Number of parameters that can be fetched via Burp’s Engagement tools
  • Number of vhosts, if they do not point to the same main application resources
  • Existence of web services or APIs included in the scope
    1. Here you will want to identify the in-scope APIs via questionnaires
    2. Map the web service parameters (REST or SOAP)
    3. Add all of these to the web service counts (sum)
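The counts above can be reduced to a very rough arithmetic model of testing effort. The sketch below is a minimal illustration, not a formal methodology: the per-item minutes and the working hours per day are placeholder assumptions of mine and should be replaced with figures from your own past engagements.

```python
# Minimal sketch: turn enumeration counts into a rough effort estimate.
# The per-item minutes and hours per day are illustrative assumptions only;
# tune them from your own historical engagement data.

def estimate_testing_days(urls, params, vhosts, api_endpoints,
                          minutes_per_url=5, minutes_per_param=10,
                          minutes_per_api=20, hours_per_day=6):
    """Return an estimated number of testing days for one application."""
    minutes = (urls * minutes_per_url
               + params * minutes_per_param
               + api_endpoints * minutes_per_api)
    # Each vhost that does not share the main application's resources is
    # treated as multiplying the work proportionally.
    minutes *= max(1, vhosts)
    return round(minutes / 60 / hours_per_day, 1)

# Example: counts pulled from Burp's Spider/Engagement tools and a scoping questionnaire.
print(estimate_testing_days(urls=180, params=240, vhosts=1, api_endpoints=12))  # ~9.8 days
```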

Estimation

The complexity and the size, as discussed previously, can be determined by assessing the vhosts and the number of dynamic URLs (dynamic simply means the application is talking to the back-end at the data tier level). Consider using test cases such as those I had prepared in earlier research for clients, shown below (this set is private and one can define one’s own):

Web Security Test Case

If you are unsure and still need to estimate a delivery timeline, use a Gantt chart for each submission and test case module, i.e. define modules in periodic terms such as ‘Input Validation Security Test Cases’, ‘Session Management Security Test Cases’, etc. A look at the timeline below should give a fair idea of how to estimate a proper enterprise delivery schedule (a small scheduling sketch follows the timeline image):

Web Application Security Project Timeline
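To make the Gantt idea concrete, the sketch below lays test case modules out sequentially on a calendar. The module names echo the examples above; the durations and the start date are placeholder assumptions, and weekends are ignored for brevity.

```python
# Minimal sketch of a Gantt-style schedule for test case modules.
# Module names follow the examples above; durations (working days) and the
# start date are placeholder assumptions; weekends are ignored for brevity.
from datetime import date, timedelta

modules = [
    ("Input Validation Security Test Cases", 4),
    ("Session Management Security Test Cases", 3),
    ("Authentication and Authorization Test Cases", 3),
    ("Business Logic Security Test Cases", 5),
    ("Reporting and Review", 2),
]

start = date(2015, 3, 2)  # hypothetical project start
for name, days in modules:
    end = start + timedelta(days=days - 1)
    print(f"{start:%d %b} - {end:%d %b}  {name}")
    start = end + timedelta(days=1)
```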

But before all of this, what is most significant is to roadmap the project planner and describe to the client the needed test cases in a worksheet, so that the client can go through the requirements document and provide a submission against it to fix a proper timeline schedule for you; this can be done in the following ways:

  1. Map the requirements: if white-box, what are the credential requirements, etc.?
  2. Fill the gaps: check the application before you commit; what further details are required?
  3. Always reach conclusions from the summation of the aforementioned.
  4. Add more days than originally derived; that way you ensure quality (a short sketch follows below).
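Point 4 is simple arithmetic. A tiny sketch, assuming a flat 20% contingency (the percentage is my own placeholder, not a recommended constant):

```python
# Tiny sketch for point 4: pad the derived estimate with contingency days.
# The 20% contingency is an assumed placeholder.
import math

def commit_days(derived_days, contingency=0.20):
    """Days committed to the client = derived days plus a rounded-up buffer."""
    return derived_days + math.ceil(derived_days * contingency)

print(commit_days(12))  # a 12-day derived estimate becomes a 15-day commitment
```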

A project planner could look something like this; it is an integral part of planning the web application security project phases and helps in defining timelines for the project:

Web Security Project Planner

The estimation, again, is a by-product, and it is not guaranteed that you will not face scope creep, time delays, or resourcing issues in the middle of the project (which is why you add the extra days after mapping the timelines). What remains is to pinpoint the critical path and the break points of the project, e.g. what could possibly go wrong and to what extent. You need to manage this extremely well and define everything beforehand. Best of luck! I hope you find this information useful.

About the Author

Shritam Bhowmick is an application penetration tester professionally equipped with traditional as well as professional application penetration testing experience, adding value to the Defencely Inc. Red Team, and he currently holds technical responsibility for application threat reporting and coordination for Defencely Inc.’s global clients. Among his accomplishments, he has experience in identifying critical application vulnerabilities and adds value to Defencely Inc. with his research work. Application security R&D at Defencely is growing and is taken care of by him. Professionally, he has had experience with several other companies, working on critical application security vulnerability assessments and penetration test security engagements, leading the Red Team, and he also has experience training curious students in his leisure time. He also does independent application security consultancy.

Beyond his professional expertise in application security, Shritam Bhowmick applies his knowledge to constructive Red Team penetration testing engagements for top Indian clients and has a proven record of excellence in the field of IT security. A Google search of his name will confirm as much. Shritam Bhowmick has delivered numerous research papers, mostly application security centric, and loves to go deep into the details. This approach has led him to innovate rather than reinvent the wheel, so that others can harness old security concepts. In his spare time, of which there is barely a little, he blogs, brainstorms on web security concepts, and prefers to stay away from normal living. Apart from his professional life, he finds bliss in reading books, playing chess, philanthropy, and basketball for the sweat. He wildly loves watching horror movies for the thrill and exploring new places to meet new people.

Solution to Cutting Out Cost Expenditure on Information Security

It is no surprise that information security is an expensive undertaking, closely knit to risk managers who must provide quality-assured security for the end product, be it web applications, thick clients, or software in general. To reduce overthinking of complicated executive-level decisions by project managers, it is essential to know in general terms how an information security program works.

How to assess the necessary components for enterprise security solutions?

The first step in solving a problem is to understand its question. In information security, the question for an organization is how to close security gaps by putting a security program in place and then providing a security plan to eliminate the growing gap; but what the security program should look like and how it should commence is entirely up to the security managers. The necessary components are to be placed before the administration to gain approval, so that these components can commence and enterprise risks – which include ‘security’ – can be managed.

In applications, this entire application life-cycle management (ALM) effort will have a particular process, and its components will essentially be:

  1. people
  2. process
  3. product

[Image: SDLC_1]

Security management for all of them has to be taken into consideration, whether it is educating people about security and providing the required awareness of information security policies across the organization, securing the organization’s processes, or securing the product itself. This can be a top-down approach: provide a framework for security (the security program) and then plan security in specific ways to protect business assets and the interests of the organization. Throughout this process, cost can be a factor, owing to the approved budget for the program and the maintenance required to keep closing security gaps the way they should be closed.

How to solve the “cost” factor in a security program?

Security managers take decisions related to security and hence should be able to determine the overall cost in recurring terms. But it is not just about determining the cost – it is also about cost cutting, to get the project budget fixed without affecting the quality provided by the security program. First it is necessary to assess what is to be accomplished during the entire life-span of an application, whether it is developed in-house or will sit on production servers after deployment. Some of the following have to be considered while setting out the layout and determining the costs (a back-of-the-envelope sketch follows the list):

  1. Objectives
  2. Required people
  3. Required outsourcing
  4. Required maintenance
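As a simple aid, the four items above can be summed into a yearly figure and reviewed for where the spend concentrates. The sketch below uses entirely hypothetical amounts.

```python
# Back-of-the-envelope sketch: sum the four cost components above into a yearly
# security program budget. All amounts are hypothetical placeholders.

budget = {
    "objectives (tooling, licences)": 25_000,
    "required people (in-house salaries)": 180_000,
    "required outsourcing (pentests, audits)": 60_000,
    "required maintenance (training, renewals)": 15_000,
}

total = sum(budget.values())
for item, cost in budget.items():
    print(f"{item:<42} {cost:>10,}  ({cost / total:.0%})")
print(f"{'total yearly program cost':<42} {total:>10,}")
```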

The objectives should be very clear to the project managers so that the right people are placed in-house to handle security problems; these people will be responsible for handling and mitigating risks and for planning further with internal development teams. It is also necessary to outsource tasks that need subject matter expertise, since security is not about just one thing. When discussing information security, for instance, there may be more than one component to be taken care of, such as:

  1. secure coding practices in-house
  2. architectural risk analysis
  3. threat analysis
  4. security audits
  5. penetration testing

For most applications, the first three are done in-house, and these include costs too. Penetration testing usually comes in through outsourcing after the product or software (web applications) is deployed, but during the SDLC a ‘secure’ mechanism has to be put in place, which gives birth to the SSDLC (secure SDLC). Most threat analysis comes after risk analysis has been done at an architectural level, because managers have to decide on the resources to be allocated to each of these components. To cut costs, skilled labour should be assigned to each of the steps in the security framework, rather than trying to handle security at random, which most often fails and is not cost-effective at all. Threat analysis will involve:

  • threat modeling
  • threat treatment
  • threat management

Threat management people and their resources will also be responsible for the later results that come out during penetration tests, and in case the expected outputs are not obtained, a team of experts should be able to look at the functional dependencies and improve their formal test cases, of which there are two kinds:

  1. positive testing (functional testing)
  2. negative testing (exception testing)

Functional testing covers what web applications are supposed to do, i.e. input versus output; negative testing covers how the applications handle exceptions – are there ways in which exceptions occur, and could these lead to business risks? Negative test cases in particular need focus, since they are the elements through which later penetration testing proves that unit security testing was not effective, and that can be a real concern. Why? Because at later stages costs grow exponentially to manage security risks, rearrange and re-implement things, and make corrections so the product works without its security being affected (and also because organizations have to maintain compliance).
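As a concrete illustration of the two kinds of test cases, here is a minimal sketch against a hypothetical input validator; the function and its rules are invented purely for the example.

```python
# Minimal sketch of a positive (functional) and a negative (exception) test case
# against a hypothetical input validator. The validator and its rules are
# invented purely for illustration.

def parse_transfer_amount(raw: str) -> int:
    """Accept a positive whole number of currency units, reject everything else."""
    value = int(raw)          # raises ValueError on non-numeric input
    if value <= 0:
        raise ValueError("amount must be positive")
    return value

def test_positive_valid_amount():
    # Functional: expected input produces the expected output.
    assert parse_transfer_amount("250") == 250

def test_negative_rejects_hostile_input():
    # Negative: hostile or malformed input must raise, never pass through.
    for raw in ("-500", "0", "1e9'; DROP TABLE accounts;--"):
        try:
            parse_transfer_amount(raw)
        except ValueError:
            continue
        raise AssertionError(f"hostile input accepted: {raw!r}")

test_positive_valid_amount()
test_negative_rejects_hostile_input()
print("both test cases passed")
```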

These pointers are small places where cost cutting can be most precise, because a pin-point analysis of how such extra costs can be reduced in security programs contributes, at later stages, to the overall security budget of the organization. Sometimes this is also the reason organizations now outsource part of their security spend to managed bug bounties. To deliberately handle security the right way, it is also necessary to keep quality up during the SDLC period, since after the product is released it may be out of hand and a little too late for managers to manage security.

How does Defencely solve your problems?

Defencely provides a 360-degree security solution to organizational security problems, whether the products are still in the SDLC or have already been deployed. If applications are in SDLC phases, it is more cost-effective to bring in dedicated security expertise to help you recognize and reduce risks before any commitments or deployments of your applications – it is like winning the war before it starts. This can also benefit whichever compliance regime the organization has chosen and the reports it needs to prove its applications are secure for its customers and end-users.

The solutions provided are end-to-end, and hence it is extremely helpful to know whether a given application has improved under a continual security check. This can take the form of application security assessments, penetration tests, and simulated security testing, where a red team attacks your applications in offensive ways to measure security and give your organization its overall security posture. Let’s get you started with the right security program for your platform; contact us to help you solve your enterprise security problems.

About the Author

Shritam Bhowmick is an application penetration tester professionally equipped with traditional as well as professional application penetration testing experience, adding value to the Defencely Inc. Red Team, and he currently holds technical responsibility for application threat reporting and coordination for Defencely Inc.’s global clients. Among his accomplishments, he has experience in identifying critical application vulnerabilities and adds value to Defencely Inc. with his research work. Application security R&D at Defencely is growing and is taken care of by him. Professionally, he has had experience with several other companies, working on critical application security vulnerability assessments and penetration test security engagements, leading the Red Team, and he also has experience training curious students in his leisure time. He also does independent application security consultancy.

Beyond his professional expertise in application security, Shritam Bhowmick applies his knowledge to constructive Red Team penetration testing engagements for top Indian clients and has a proven record of excellence in the field of IT security. A Google search of his name will confirm as much. Shritam Bhowmick has delivered numerous research papers, mostly application security centric, and loves to go deep into the details. This approach has led him to innovate rather than reinvent the wheel, so that others can harness old security concepts. In his spare time, of which there is barely a little, he blogs, brainstorms on web security concepts, and prefers to stay away from normal living. Apart from his professional life, he finds bliss in reading books, playing chess, philanthropy, and basketball for the sweat. He wildly loves watching horror movies for the thrill and exploring new places to meet new people.

Defencely Business Enterprise Security Solutions

Hello again. Not long ago, Defencely.com published a series of posts describing how enterprise security risks are evaluated and determined so that they can be proactively closed in a responsible manner. This post covers the key points that encircle enterprise business security threats and is brought forward to spread business-security-centric awareness across industries. For an enterprise to work according to its workflow, the business functional model should never overlap the data model. If they do overlap each other, a business logic threat could potentially be present. This threat assessment model presents three distinct risk factors, which are:

  1. Confidentiality
  2. Integrity
  3. Availability

Together these are known as the C.I.A. triad. A risk to confidentiality arises if an access control feature integrated into the business functional model is bypassed using some technique, providing an intruder with confidential data that otherwise should not have been compromised. A risk to integrity means that data was modified during its processing or somewhere in between, across its whole life-cycle; this again would be a threat to the business. A risk to availability arises when business-critical public data is restrained from access, financially depriving the company of its resources: the intruder forces the business model to consume more resources, confines those resources to itself, and makes them unavailable.

[Image: CIA triad]

Authenticity in the business risk model is yet another concern through which intruders could gain unauthorized access to critical business resources and hence compromise the company in certain ways. This in itself could be a tragic scenario for a company, taking its toll and leading the company to losses. Consider the following intrusions, for instance:

  1. Uber Cab compromised with Github Security Key
  2. Staples compromised – the retail hack hijacking cards
  3. Home Depot email compromise – 53 million email addresses exposed
  4. CNET – compromised by Russian Attackers

These were some of the haunting, awareness-inspiring cases involving major giants in the business community and the industry. In the past, Microsoft, Oracle, Sony, and others have also been attacked and successfully compromised through business security and logical deduction of security. In these cases, business security was compromised in different ways, but all of them were only possible with data compromise in mind. The data compromised was all related to business assets, not obtained through a procedure that kept business logic security in perspective. Certain types of application threat evaluation require this business data to first be relatively measured with proper processing. The evaluation process then takes certain test cases that the application should either pass or fail. These test cases are described as follows:

  1. Identify threats to business protocols, i.e. whether these could be violated in any way.
  2. Identify threats to business timing, i.e. whether resources could be violated against timed access.
  3. Identify threats related to compromise of data assets – whether data is segregated from non-essential workflow.
  4. Identify threats related to the financial assets of the company – whether financial records could be compromised.
  5. Identify threats related to processing – whether certain steps in the process could be bypassed (see the sketch below).
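To make test case 5 concrete, below is a minimal sketch of a process-bypass check against a hypothetical three-step checkout; the step names and session model are invented for illustration.

```python
# Minimal sketch for test case 5 (process bypass): a hypothetical three-step
# checkout where each step must only be reachable after the previous one.
# The step names and session model are invented for illustration.

CHECKOUT_STEPS = ["cart", "payment", "confirmation"]

def allowed(session_completed_steps, requested_step):
    """The requested step is allowed only if every earlier step was completed."""
    idx = CHECKOUT_STEPS.index(requested_step)
    return all(step in session_completed_steps for step in CHECKOUT_STEPS[:idx])

# Business logic test: jumping straight to 'confirmation' with only the cart
# completed must be refused, otherwise the payment step can be bypassed.
assert allowed(set(), "cart") is True
assert allowed({"cart"}, "payment") is True
assert allowed({"cart"}, "confirmation") is False   # payment skipped -> bypass
print("process-bypass checks passed")
```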

Some of the common Business Logic Threats are:

  1. Authentication Failure and Escalation of Privileges.
  2. Unauthorized Access to Resources via Parameter Manipulation.
  3. Business Process Logic Bypass via Cookies or Tampering Cookies.
  4. Bypass of Client Side Business Assets leading to Process Bypasses.
  5. E-Shop Lifting via Business Logic Manipulation leading to losses.
  6. Functional Bypass of Business Flaw leading to access of 3rd party limited resources.
  7. Service Availability based Denial of Service Attacks via Business Logic Threats.

There could be numerous other ways to access critical data and exfiltrate it through business logic weaknesses. Our recommendation is to test the application against these newer techniques and build proper segregation channels for them, in order to prevent intruders from harming the business workings of a company. A prevention chart should be followed by enterprise developers during the development phases across the entire SDLC. The latter would ensure applications are deployed securely and would restrict unauthorized use of data that could otherwise be compromised through application-level vulnerabilities or residual business logic vulnerabilities. Either of them is fatal and could lead to losses, ranging from reputational to financial.

[Image: diagram]

Defencely provides services against the aforementioned threats along with cutting-edge reporting deliverables for its clients. The services enable its clients to assess potential threats and remediate them in order to patch them. Defencely provides an in-depth scope and individual deliverables for its clients, which include:

  1. Application Security Executive and Technical Reports
  2. Business Logic Threat Executive and Technical Reports
  3. Mobile Security Executive and Technical Reports
  4. Individual Mitigation Trackers for both Application and Business Reports
  5. A Monthly Mitigation Overall Record for all the Identified Vulnerabilities

Aside from these, Defencely.com also provides custom-tailored services for network security and code audit. These deliverables focus on network security and server hardening and hence enable clients to follow strict security policy rules and compliance requirements. Contact Defencely for fast, reliable services at hi@defencely.com and give your web applications, servers, mobile apps, and code a real sense of security.

About the Author

Shritam Bhowmick is an application penetration tester professionally equipped with traditional as well as professional application penetration testing experience, adding value to the Defencely Inc. Red Team, and he currently holds technical responsibility for application threat reporting and coordination for Defencely Inc.’s global clients. Among his accomplishments, he has experience in identifying critical application vulnerabilities and adds value to Defencely Inc. with his research work. Application security R&D at Defencely is growing and is taken care of by him. Professionally, he has had experience with several other companies, working on critical application penetration test engagements, leading the Red Team, and he also has experience training curious students in his leisure time. The application security guy!

Beyond his professional expertise in application security, Shritam Bhowmick applies his knowledge to constructive Red Team penetration test engagements for top Indian clients and has a proven record of excellence in the field of IT security. A Google search of his name will confirm as much. Shritam Bhowmick has delivered numerous research papers, mostly application security centric, and loves to go deep into the details. This approach has led him to innovate rather than reinvent the wheel, so that others can harness old security concepts. In his spare time, of which there is barely a little, he blogs, brainstorms on web security concepts, and prefers to stay away from normal living. Apart from his professional life, he finds bliss in reading books, playing chess, philanthropy, and basketball for the sweat. He wildly loves watching horror movies for the thrill.

Web Infrastructure Battlefield – Are Reverse Proxies enough?

Are reverse proxies enough for developers and system administrators to defend their applications, or are they silently being exploited in the wild, causing system-level compromises? As most readers may be aware, I laid out the foundational DAST scanning values and their results in an earlier post; that alone will not make web applications secure, even with additional layers of protection involved, such as reverse proxies.

Reverse Proxies

To understand what a reverse proxy is and what additional security protections are generally taken by a server administrator, I compiled research on Defencely‘s own internal infrastructure and came to agree that web applications are dynamic and malicious intent will always find a way. To address this, I needed to explain to the executives what the risks are and how these risks could be given proper threat modeling and management. This post is the result of those discussions and covers what reverse proxies are in the context of web protection – something that has always been a buzzword for web server administrators who, to this day, still fail to protect their applications from attacks.

A reverse proxy can be used in one of the following roles, or in several of them in parallel:

  1. Load Balancer and Caching Servers.
  2. WAF/IPS set-up Proxy Server.
  3. As an obfuscation proxy.

Load balancers and caching servers help protect against DDoS (Distributed Denial of Service) attacks, whereas an IPS/WAF-enabled dedicated server helps protect against erroneous TCP packets, detects such anomalous packets, and triggers an alarm when such attacks are detected. As an obfuscation proxy, it adds a layer of protection to the web infrastructure by keeping the software stack used in application development hidden in the headers and other places that an attacker would enumerate first, before preparing his attack sequences.
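Whether that obfuscation actually holds is easy to spot-check. Below is a minimal sketch using only the Python standard library; the target URL and the list of headers to inspect are assumptions for illustration, and it should only be run against hosts you are authorized to test.

```python
# Minimal sketch: check whether a front-end (reverse proxy) leaks the software
# stack through response headers. The target URL and header list are
# illustrative assumptions; run it only against hosts you are authorized to test.
import urllib.request
import urllib.error

STACK_HEADERS = ("Server", "X-Powered-By", "X-AspNet-Version", "Via")

def check_stack_disclosure(url):
    request = urllib.request.Request(url, method="HEAD")
    try:
        response = urllib.request.urlopen(request, timeout=10)
    except urllib.error.HTTPError as err:   # error responses still carry headers
        response = err
    for name in STACK_HEADERS:
        value = response.headers.get(name)
        if value:
            print(f"{name}: {value}  <- visible to an attacker")
        else:
            print(f"{name}: (hidden)")

check_stack_disclosure("https://example.com/")
```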

Now, some infrastructure implementers consider reverse proxies the ultimate way to protect their web assets as well as the web server, which certainly isn’t the case. Reverse proxies only strengthen a deployment when combined with additional security measures.

Attack Surface Measurement

To completely measure the attack surface, an attacker or penetration tester has to understand the scope of the security audit and prepare a value-based blueprint of how he would methodically carry out the entire audit. Compromising a web application with protections in place such as WAF, IPS, IDS, HIPS, additional firewalls, firewall rule-sets, honeypots, controls, etc. may very well look complicated; but once experts who do this professionally for a living come across the scenario, it does not take them long to recognize the basic enterprise foundations, analyze the attack surface, and then prepare their attack plan and the goals associated with the security engagement.

To measure the attack surface area, three distinct things are taken into particular consideration, and these are:

  1. Trusts – interactions between the infrastructure assets (objects) within the security scope.
  2. Accesses – any interaction that originates from outside the security scope towards the inside of the security scope.
  3. Visibility – informational assets of informational value that expose the security scope.

All three of these components of the security audit make up a composite known as Porosity, which is itself the entire attack surface. Hence:

Porosity = Trusts + Accesses + Visibility

Other security measures built by infrastructure implementers are controls, whose sole intention is to limit functionality to where it should be and hence control the workflow of the data, the application logic, and the various expected outputs for expected valid inputs.

The five variables, widely used as controls in overall infrastructure security mechanisms, are:

  1. Authentication
  2. Indemnification
  3. Resilience
  4. Subjugation
  5. Continuity

These map to non-repudiation, confidentiality, privacy, integrity, and alarm respectively, as per the points made previously. Now, to define a vulnerability: for web applications and web infrastructure, a vulnerability is a violation of accesses and trusts. Altogether, the equation has to be:

Accesses + Trusts (violation of both or either one) = Vulnerability.

This would be the appropriate measure of any vulnerabilities found in the Web Infrastructure. To measure weaknesses, the right equation would be:

Authentication + Indemnification + Resilience + Subjugation + Continuity = Weaknesses

Any of the violations above would be measured as a weakness and not as a vulnerability in any way. A concern arises when non-repudiation, confidentiality, privacy, or integrity has been violated. That equation would be:

Non-Repudiation + Confidentiality + Privacy + Integrity = Concern

Apart from all of the above, any violation of visibility is referred to as an Exposure and is of informational value only.
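The bookkeeping behind these equations fits in a few lines. Below is a minimal sketch following the terms used above; the sample counts and findings are invented purely for illustration.

```python
# Minimal bookkeeping sketch for the attack surface terms used above:
#   Porosity      = Trusts + Accesses + Visibility
#   Vulnerability = violation of Accesses and/or Trusts
#   Weakness      = violation of any of the five controls
#   Concern       = violation of non-repudiation/confidentiality/privacy/integrity
#   Exposure      = violation of Visibility (informational only)
# The sample counts and findings below are invented for illustration.

CONTROLS = {"authentication", "indemnification", "resilience", "subjugation", "continuity"}
CONCERN_PROPERTIES = {"non-repudiation", "confidentiality", "privacy", "integrity"}

def classify(violation):
    if violation in {"access", "trust"}:
        return "vulnerability"
    if violation in CONTROLS:
        return "weakness"
    if violation in CONCERN_PROPERTIES:
        return "concern"
    if violation == "visibility":
        return "exposure"
    return "unclassified"

porosity = {"trusts": 3, "accesses": 7, "visibility": 4}
print("porosity =", sum(porosity.values()))

findings = ["access", "authentication", "privacy", "visibility", "trust"]
for finding in findings:
    print(f"{finding:<16} -> {classify(finding)}")
```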

Reverse Proxy Test Results

Since reverse proxies are implemented to obstruct incoming malicious traffic and to identify or drop packets that could harm the underlying web applications served by another web server, the reverse proxy is the intermediate server, and testing it is security infrastructure testing rather than web application vulnerability assessment. Because there is no direct interaction with the web application itself, nor with any of its logical components, a reverse proxy security audit is based solely on infrastructure security testing.

I will break down several tools and testing methodologies that show why server administrators’ firm belief that a reverse proxy is the best relief against attacks is misplaced. To methodically test these black-box reverse proxy setups, I first need to interact with the reverse proxy itself and then escalate my attacks up into the web application, since any malicious payload must first pass through the intermediary reverse proxy. The billion-dollar question is: are reverse proxies themselves strong enough to prevent attacks, or are they themselves being attacked?

To methodically turn my results into an effective set of information security assurances and provide grounds for compliance, I have framed the logic for breaking these security obstructions in terms of Access Violations, Visibility Violations, Trust Violations, and Non-Repudiation Violations. The whole research is not publicly accessible yet, but the following are some of the measures that have been made public so far.

Access Violations

Since no interaction is originally made with the web application itself, the test scope is only the web infrastructure, including the reverse proxy. I will use the popular Facebook proxy as an example; in security assessments and audits, these same tools and methodological techniques can be used.

Tools used:

  1. Nmap (Network Mapper)
  2. Unicornscan

Nmap is a great tool to look for access entry points, be they TCP or UDP (UDP accesses are the concern in this example). The first nmap commands I use are shown in the images attached right below:

[Image: nmap UDP scan command]

Nevertheless, this is the way to run UDP scans across all existing ports; the results shown here are not the actual results one would obtain against a reverse proxy during a security audit. I retrieved a bunch of access entries that can be dug into further:

[Image: nmap UDP scan results]

Another way to do this quickly and very efficiently (more efficiently than Nmap) is to use Unicornscan (though it only outperforms Nmap for UDP scans):

[Image: Unicornscan UDP scan]

The results obtained were faster and reliable. Nmap is still handy and a good fit if Unicornscan is too much.
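For reference, here is a minimal sketch that wraps both scans from a script. The nmap flags shown are the standard UDP-scan form; the unicornscan invocation is its basic UDP mode, and exact options vary between builds, so consult its man page. The target is a placeholder; scan only hosts you are authorized to test.

```python
# Minimal sketch wrapping the two UDP scans described above with subprocess.
# nmap flags: -sU (UDP scan), -p- (all ports), --open (show open ports only).
# unicornscan: -mU selects UDP mode; other options vary between builds.
# TARGET is a placeholder (TEST-NET address); scan only authorized hosts.
import subprocess

TARGET = "203.0.113.10"  # placeholder, not a real engagement target

def run(cmd):
    print("$", " ".join(cmd))
    return subprocess.run(cmd, capture_output=True, text=True).stdout

nmap_out = run(["nmap", "-sU", "-p-", "--open", TARGET])
unicorn_out = run(["unicornscan", "-mU", TARGET])

print(nmap_out)
print(unicorn_out)
```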

Visibility Violations

This again can be done via Tamper Data (a Firefox add-on) if the reverse proxy interacts over browsers. If not, for test purposes I have used openssl:

[Image: openssl session]

As prompted, there was something the reverse proxy expected that I failed to enumerate at my end. It could be that the proxy was simply expecting a POST request. This is still a ‘might’, a shadowy end, and needed to be confirmed. I quickly fired up RESTClient to make sure that was the scenario, and indeed it was!

[Image: RESTClient request and response]

To prove my earlier theory that openssl could be a handy toolset for the audit, I picked my first target from the publicly available list of proxy servers Facebook uses, which had been tested before in the Access Violation tests:

[Image: openssl connection to the Facebook proxy]

As is apparent, Facebook is connected, and I can now pass commands the way the proxy expects. I will issue a GET request this time, try to pull out some content, and see whether OPTIONS (verb/method) has been implemented:

[Image: GET request returning ‘400 Bad Request’]

At this point I was given a ‘400 Bad Request’, which is essentially a client-side status code; it means the client has, mistakenly or intentionally as foul play (as the case may be!), tried to access a resource on the proxy using a request type (verb/method) the proxy did not expect, which in this case was GET. I could have tested more, but notice the proxy server replies with ‘HTTP 1.0’; this might be misdirection, or the server really is implemented over HTTP/1.0 and does not use HTTP/1.1 for communications. Trying again:

[Image: second request, same ‘400 Bad Request’ response]

Again, I received the same client-side status code, which means the client got its request wrong or sent a malformed request. Malformed requests can therefore be used in a similar fashion to detect the behavior of the proxy the security auditor is encountering. This again falls under Visibility Violations, where the reverse proxy fails to distinguish itself from the real web server and the attacker understands he is talking to a proxy rather than the real web server.
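The same probing can be scripted. Below is a minimal sketch using the standard library that sends a few verbs, including an unexpected one, and records the status line, HTTP version, and Server header the front-end answers with, mirroring the manual openssl session above; the host is a placeholder, to be probed only if it is in scope.

```python
# Minimal sketch of probing a front-end with unexpected verbs and recording the
# status code and HTTP version it answers with, mirroring the manual openssl
# session above. The host is a placeholder; probe only systems in scope.
import http.client

HOST = "example.com"  # placeholder target

for method in ("GET", "OPTIONS", "TRACE", "FOO"):
    conn = http.client.HTTPSConnection(HOST, timeout=10)
    try:
        conn.request(method, "/")
        resp = conn.getresponse()
        version = "HTTP/1.0" if resp.version == 10 else "HTTP/1.1"
        server = resp.getheader("Server", "(not disclosed)")
        print(f"{method:<8} -> {resp.status} {resp.reason:<20} {version}  Server: {server}")
    except OSError as exc:
        print(f"{method:<8} -> connection error: {exc}")
    finally:
        conn.close()
```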

Also, the operational controls implemented by HTTPS cover confidentiality, not necessarily privacy. This means that if there is an intermediary server a client is able to connect to, and the client originally wants privacy, by default HTTPS sets operational controls on the server such as:

  1. Confidentiality
  2. Integrity
  3. Subjugation

These happen to be the default operational controls universally, for all cipher suites used in HTTPS, so the privacy the client imagined in the first place is a violation! Such cases arise when the client (user) expects HTTPS to be secure and readily provide privacy, but technically never knows how HTTPS has been implemented. The privacy violation here means the intermediary server is able to know where the information has come from (source) and where the information is going (destination). But since confidentiality is the mainstream reason HTTPS is implemented, the server is not able to look at what data it has received or what data it is sending on after receiving it; in other words, the server has no rights over the data, only the endpoints (source and destination) do:

For informational value, I have attached how to test the available cipher suites:

[Image: sslyze cipher suite scan]

The results would be:

[Image: sslyze scan results]
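If sslyze is not at hand, a quick sanity check of the negotiated protocol and cipher can be done with Python’s standard ssl module; this only shows what a single handshake negotiates, not the full suite enumeration sslyze performs, and the host below is a placeholder.

```python
# Quick sanity check of the negotiated TLS protocol and cipher suite using only
# the standard library. This shows what a single handshake negotiates; a full
# cipher suite enumeration is what sslyze performs. Host is a placeholder.
import socket
import ssl

HOST = "example.com"  # placeholder target

context = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        name, _, bits = tls.cipher()
        print(f"negotiated protocol : {tls.version()}")
        print(f"negotiated cipher   : {name} ({bits} bits)")
        print(f"compression         : {tls.compression()}")  # None is the safe answer
```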

There are certainly more cases for SSH and other services of interest. A ton of services could be unknown, and visibility tests are valuable to run across them as well, since they might expose certain data or information entities in ways that should not be exposed in the first place.

Conclusive Results

As discussed in the sections above, the Defencely Red Team has covered most of the research aspects of vulnerability assessments and penetration tests that suit the needs of an enterprise security audit, which is not and should not be limited only to web applications but should also cover the components that support them. Several areas, such as Non-Repudiation Violations and other violations, require an entire draft of their own. I have been actively involved with the community in order to share the most interesting results publicly once they are prepared, but as part of my active role at Defencely I am also responsible for the research copies.

From either perspective, enterprise business risk assessment should include reverse proxies as an additional vulnerability assessment criterion and should include them in conclusive testing, since all the test cases so far, for each sub-set of violations, could end up compromising an application. And once an attacker is able to direct his or her traffic the way he or she intends and the reverse proxy fails at the serious moment of impact, a server administrator should never consider reverse proxies the only, ultimate security protection available. Code-level flaws are yet another area that needs to be broadened and discussed, but that is for another day.

About the Author

Shritam Bhowmick is an application penetration tester professionally equipped with traditional as well as professional application penetration testing experience, adding value to the Defencely Inc. Red Team, and he currently holds technical responsibility for application threat reporting and coordination for Defencely Inc.’s global clients. Among his accomplishments, he has experience in identifying critical application vulnerabilities and adds value to Defencely Inc. with his research work. Application security R&D at Defencely is growing and is taken care of by him. Professionally, he has had experience with several other companies, working on critical application security vulnerability assessments and penetration test security engagements, leading the Red Team, and he also has experience training curious students in his leisure time. He also does independent application security consultancy.

Beyond his professional expertise in application security, Shritam Bhowmick applies his knowledge to constructive Red Team penetration testing engagements for top Indian clients and has a proven record of excellence in the field of IT security. A Google search of his name will confirm as much. Shritam Bhowmick has delivered numerous research papers, mostly application security centric, and loves to go deep into the details. This approach has led him to innovate rather than reinvent the wheel, so that others can harness old security concepts. In his spare time, of which there is barely a little, he blogs, brainstorms on web security concepts, and prefers to stay away from normal living. Apart from his professional life, he finds bliss in reading books, playing chess, philanthropy, and basketball for the sweat. He wildly loves watching horror movies for the thrill and exploring new places to meet new people.