Do you rely on Hashing? Know WebSec Cryptography In-depth!

Credits: Thomas Pornin

For storing password hashes, you need an algorithm slow enough that brute-force attacks are not feasible. Salting the password will help against rainbow table attacks, but not against brute-force attacks. For storing password hashes, you need to use an algorithm specifically designed for this purpose, such as PBKDF2, bcrypt, or scrypt, all discussed below.

scrypt is new but interesting because it not only uses a variable work factor but also memory-hard functions. This dramatically increases the cost of brute-force attacks, because both running-time and memory requirements are increased.

The Theory

We need to hash passwords as a second line of defence. A server which can authenticate users necessarily contains, somewhere in its entrails, some data which can be used to validate a password. A very simple system would just store the passwords themselves, and validation would be a simple comparison. But if a hostile outsider were to gain a simple glimpse at the contents of the file or database table which contains the passwords, then that attacker would learn a lot. Unfortunately, such partial, read-only breaches do occur in practice (a mislaid backup tape, a decommissioned but not wiped-out hard disk, an aftermath of a SQL injection attack — the possibilities are numerous). See this blog post for a detailed discussion.

Since the overall contents of a server that can validate passwords are necessarily sufficient to indeed validate passwords, an attacker who obtained a read-only snapshot of the server is in a position to make an offline dictionary attack: he tries potential passwords until a match is found. This is unavoidable. So we want to make that kind of attack as hard as possible. Our tools are the following:

  • Cryptographic hash functions: these are fascinating mathematical objects which everybody can compute efficiently, and yet nobody knows how to invert them. This looks good for our problem – the server could store a hash of a password; when presented with a putative password, the server just has to hash it to see if it gets the same value; and yet, knowing the hash does not reveal the password itself.
  • Salts: among the advantages of the attacker over the defender is parallelism. The attacker usually grabs a whole list of hashed passwords, and is interested in breaking as many of them as possible. He may try to attack several in parallel. For instance, the attacker may consider one potential password, hash it, and then compare the value with 100 hashed passwords; this means that the attacker shares the cost of hashing over several attacked passwords. A similar optimisation is precomputed tables, including rainbow tables; this is still parallelism, with a space-time change of coordinates. The common characteristic of all attacks which use parallelism is that they work over several passwords which were processed with the exact same hash function. Salting is about using not one hash function, but a lot of distinct hash functions; ideally, each instance of password hashing should use its own hash function. A salt is a way to select a specific hash function among a big family of hash functions. Properly applied salts will completely thwart parallel attacks (including rainbow tables).
  • Slowness: computers become faster over time (Gordon Moore, co-founder of Intel, theorized it in his famous law). Human brains do not. This means that attackers can “try” more and more potential passwords as years pass, while users cannot remember more and more complex passwords (or flatly refuse to). To counter that trend, we can make hashing inherently slow by defining the hash function to use a lot of internal iterations (thousands, possibly millions).
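
To make these three ingredients concrete, here is a deliberately naive sketch in Python of a salted, iterated hash. It only illustrates the ideas; it is not a vetted construction, and you should use the standard functions described below (PBKDF2, bcrypt, scrypt) instead.

    import hashlib
    import os

    def toy_slow_salted_hash(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
        """Toy illustration only: a salted, iterated SHA-256.
        Do not use in production; use PBKDF2, bcrypt or scrypt instead."""
        h = salt + password.encode("utf-8")
        for _ in range(iterations):
            h = hashlib.sha256(salt + h).digest()  # salt is mixed in at every step
        return h

    salt = os.urandom(16)  # a fresh, unique salt for each stored password
    digest = toy_slow_salted_hash("correct horse battery staple", salt)
    print(salt.hex(), digest.hex())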

We have a few standard cryptographic hash functions; the most famous are MD5 and the SHA family. Building a secure hash function out of elementary operations is far from easy. When cryptographers want to do that, they think hard, then harder, and organize a tournament where the functions fight each other fiercely. When hundreds of cryptographers gnawed and scraped and punched at a function for several years and found nothing bad to say about it, then they begin to admit that maybe that specific function could be considered as more or less secure. This is just what happened in the SHA-3 competition. We have to use this way of designing hash functions because we know no better way. Mathematically, we do not know if secure hash functions actually exist; we just have “candidates” (that’s the difference between “it cannot be broken” and “nobody in the world knows how to break it”).

A basic hash function, even if secure as a hash function, is not appropriate for password hashing, because:

  • it is unsalted, allowing for parallel attacks (rainbow tables for MD5 or SHA-1 can be obtained for free, you do not even need to recompute them yourself);
  • it is way too fast, and gets faster with technological advances. With a recent GPU (i.e. off-the-shelf consumer product which everybody can buy), hashing rate is counted in billions of passwords per second.

So we need something better. It so happens that slapping together a hash function and a salt, and iterating it, is not easier to do than designing a hash function — at least, if you want the result to be secure. There again, you have to rely on standard constructions which have survived the continuous onslaught of vindicative cryptographers.

Good Password Hashing Functions

PBKDF2

PBKDF2 comes from PKCS#5. It is parameterized with an iteration count (an integer, at least 1, no upper limit), a salt (an arbitrary sequence of bytes, no constraint on length), a required output length (PBKDF2 can generate an output of configurable length), and an “underlying PRF”. In practice, PBKDF2 is always used with HMAC, which is itself a construction built over an underlying hash function. So when we say “PBKDF2 with SHA-1”, we actually mean “PBKDF2 with HMAC with SHA-1”.
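
For illustration, here is what a PBKDF2 computation looks like with Python's standard library (hashlib.pbkdf2_hmac). The iteration count, salt handling and output length shown here are assumptions to adapt to your own server, not values mandated by PKCS#5.

    import hashlib
    import hmac
    import os

    password = b"correct horse battery staple"
    salt = os.urandom(16)       # random salt, stored alongside the hash
    iterations = 200_000        # tune to what your server can tolerate

    # PBKDF2 with HMAC-SHA-256, 32-byte output
    stored = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)

    # verification: recompute with the stored salt and iteration count,
    # then compare in constant time
    candidate = hashlib.pbkdf2_hmac("sha256", b"some guess", salt, iterations, dklen=32)
    print(hmac.compare_digest(stored, candidate))   # False for a wrong guess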

Advantages of PBKDF2:

  • Has been specified for a long time, seems unscathed for now.
  • Is already implemented in various frameworks (e.g. it is provided with .NET).
  • Highly configurable (although some implementations do not let you choose the hash function, e.g. the one in .NET is for SHA-1 only).
  • Received NIST blessings (modulo the difference between hashing and key derivation; see later on).
  • Configurable output length (again, see later on).

Drawbacks of PBKDF2:

  • CPU-intensive only, thus amenable to high optimization with GPU (the defender is a basic server which does generic things, i.e. a PC, but the attacker can spend his budget on more specialized hardware, which will give him an edge).
  • You still have to manage the parameters yourself (salt generation and storage, iteration count encoding…). There is a standard encoding for PBKDF2 parameters but it uses ASN.1 so most people will avoid it if they can (ASN.1 can be tricky to handle for the non-expert).

bcrypt

bcrypt was designed by reusing and expanding elements of a block cipher called Blowfish. The iteration count is a power of two, which is a tad less configurable than PBKDF2, but sufficiently so nevertheless. This is the core password hashing mechanism in the OpenBSD operating system.
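
A minimal usage sketch, assuming the widely used third-party bcrypt package for Python (other languages have equivalent libraries); the cost factor of 12, i.e. 2^12 iterations, is just an example value:

    import bcrypt   # third-party package: pip install bcrypt

    password = b"correct horse battery staple"

    # gensalt() encodes the cost factor into the salt string
    hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))
    print(hashed)   # e.g. b"$2b$12$..." - salt, cost and hash in one printable string

    # verification re-reads the cost and salt from the stored string
    print(bcrypt.checkpw(password, hashed))        # True
    print(bcrypt.checkpw(b"wrong guess", hashed))  # False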

Advantages of bcrypt:

  • Many available implementations in various languages (see the links at the end of the Wikipedia page).
  • More resilient to GPU; this is due to details of its internal design. The bcrypt authors made it so voluntarily: they reused Blowfish because Blowfish was based on an internal RAM table which is constantly accessed and modified throughout the processing. This makes life much harder for whoever wants to speed up bcrypt with a GPU (GPU are not good at making a lot of memory accesses in parallel).
  • Standard output encoding which includes the salt, the iteration count and the output as one simple to store character string of printable characters.

Drawbacks of bcrypt:

  • Output size is fixed: 192 bits.
  • While bcrypt is good at thwarting GPU, it can still be thoroughly optimized with FPGA: modern FPGA chips have a lot of small embedded RAM blocks which are very convenient for running many bcrypt implementations in parallel within one chip. It has been done.
  • Input password size is limited to 51 characters. In order to handle longer passwords, one has to combine bcrypt with a hash function (you hash the password and then use the hash value as the “password” for bcrypt). Combining cryptographic primitives is known to be dangerous (see above) so such games cannot be recommended on a general basis.

scrypt

scrypt is a much newer construction (designed in 2009) which builds over PBKDF2 and a stream cipher called Salsa20/8, but these are just tools around the core strength of scrypt, which is RAM. scrypt has been designed to inherently use a lot of RAM (it generates some pseudo-random bytes, then repeatedly reads them in a pseudo-random sequence). “Lots of RAM” is something which is hard to make parallel. A basic PC is good at RAM access, and will not try to read dozens of unrelated RAM bytes simultaneously. An attacker with a GPU or a FPGA will want to do that, and will find it difficult.
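
A minimal sketch using Python's hashlib.scrypt (available when Python is linked against OpenSSL 1.1 or newer); the parameters shown are commonly cited interactive-login values, given here as an assumption rather than a recommendation from the scrypt paper:

    import hashlib
    import os

    password = b"correct horse battery staple"
    salt = os.urandom(16)

    # N=2^14, r=8, p=1 uses roughly 128 * N * r bytes of RAM, i.e. about 16 MB here;
    # maxmem is raised explicitly so OpenSSL does not reject the computation.
    key = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1,
                         maxmem=64 * 1024 * 1024, dklen=32)
    print(key.hex())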

Advantages of scrypt:

  • A PC, i.e. exactly what the defender will use when hashing passwords, is the most efficient platform (or close enough) for computing scrypt. The attacker no longer gets a boost by spending his dollars on GPU or FPGA.
  • One more way to tune the function: memory size.

Drawbacks of scrypt:

  • Still new (my own rule of thumb is to wait at least 5 years of general exposure, so no scrypt for production until 2014 – but, of course, it is best if other people try scrypt in production, because this gives extra exposure).
  • Not as many available, ready-to-use implementations for various languages.
  • Unclear whether the CPU / RAM mix is optimal. For each of the pseudo-random RAM accesses, scrypt still computes a hash function. A cache miss will be about 200 clock cycles, one SHA-256 invocation is close to 1000. There may be room for improvement here.
  • Yet another parameter to configure: memory size.

OpenPGP Iterated And Salted S2K

I cite this one because you will use it if you do password-based file encryption with GnuPG. That tool follows the OpenPGP format which defines its own password hashing functions, called “Simple S2K”, “Salted S2K” and “Iterated and Salted S2K”. Only the third one can be deemed “good” in the context of this answer. It is defined as the hash of a very long string (configurable, up to about 65 megabytes) consisting of the repetition of an 8-byte salt and the password.

As far as these things go, OpenPGP’s Iterated And Salted S2K is decent; it can be considered as similar to PBKDF2, with less configurability. You will very rarely encounter it outside of OpenPGP, as a stand-alone function.

Unix “crypt”

Recent Unix-like systems (e.g. Linux), for validating user passwords, use iterated and salted variants of the crypt() function based on good hash functions, with thousands of iterations. This is reasonably good. Some systems can also use bcrypt, which is better.

The old crypt() function, based on the DES block cipher, is not good enough:

  • It is slow in software but fast in hardware, and can be made fast in software too but only when computing several instances in parallel (technique known as SWAR or “bitslicing”). Thus, the attacker is at an advantage.
  • It is still quite fast, with only 25 iterations.
  • It has a 12-bit salt, which means that salt reuse will occur quite often.
  • It truncates passwords to 8 characters (characters beyond the eighth are ignored) and it also drops the upper bit of each character (so you are more or less stuck with ASCII).

But the more recent variants, which are active by default, will be fine.

Bad Password Hashing Functions

About everything else, in particular virtually every homemade method that people relentlessly invent.

For some reason, many developers insist on designing functions themselves, and seem to assume that “secure cryptographic design” means “throw together every kind of cryptographic or non-cryptographic operation that can be thought of”. See this question for an example. The underlying principle seems to be that the sheer complexity of the resulting utterly tangled mess of instructions will befuddle attackers. In practice, though, the developer himself will be more confused by his own creation than the attacker.

Complexity is bad. Homemade is bad. New is bad. If you remember that, you’ll avoid 99% of problems related to password hashing, or cryptography, or even security in general.

Password hashing in Windows operating systems used to be mindbogglingly awful and now is just terrible (unsalted, non-iterated MD4).

Key Derivation

Up to now, we considered the question of hashing passwords. A close problem is about transforming a password into a symmetric key which can be used for encryption; this is called key derivation and is the first thing you do when you “encrypt a file with a password”.

It is possible to make contrived examples of password hashing functions which are secure for the purpose of storing a password validation token, but terrible when it comes to generating symmetric keys; and the converse is equally possible. But these examples are very “artificial”. For practical functions like the ones described above:

  • The output of a password hashing function is acceptable as a symmetric key, after possible truncation to the required size.
  • A Key Derivation Function can serve as a password hashing function as long as the “derived key” is long enough to avoid “generic preimages” (the attacker is just lucky and finds a password which yields the same output). An output of more than 100 bits or so will be enough.

Indeed, PBKDF2 and scrypt are KDFs, not password hashing functions — and NIST “approves” of PBKDF2 as a KDF, not explicitly as a password hasher (but it is possible, with only a very minute amount of hypocrisy, to read NIST’s prose in such a way that it seems to say that PBKDF2 is good for hashing passwords).

Conversely, bcrypt is really a block cipher (the bulk of the password processing is the “key schedule”) which is then used in CTR mode to produce three blocks (i.e. 192 bits) of pseudo-random output, making it a kind of hash function. bcrypt can be turned into a KDF with a little surgery, by using the block cipher in CTR mode for more blocks. But, as usual, we cannot recommend such homemade transforms. Fortunately, 192 bits are already more than enough for most purposes (e.g. symmetric encryption with GCM or EAX only needs a 128-bit key).

Miscellaneous Topics

How many iterations ?

As many as possible! This salted-and-slow hashing is an arms race between the attacker and the defender. You use many iterations to make the hashing of a password harder for everybody. To improve security, you should set that number as high as you can tolerate on your server, given the tasks that your server must otherwise fulfill. Higher is better.
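
A rough way to pick the number is to benchmark on the actual server. The sketch below (the 100 ms target and the doubling search are assumptions, not a standard) doubles a PBKDF2 iteration count until one computation takes about as long as you are willing to tolerate per login:

    import hashlib
    import os
    import time

    def calibrate_pbkdf2(target_ms: float = 100.0, hash_name: str = "sha256") -> int:
        """Find an iteration count so one PBKDF2 computation takes roughly
        target_ms milliseconds on this machine."""
        salt = os.urandom(16)
        iterations = 10_000
        while True:
            start = time.perf_counter()
            hashlib.pbkdf2_hmac(hash_name, b"benchmark password", salt, iterations)
            elapsed_ms = (time.perf_counter() - start) * 1000
            if elapsed_ms >= target_ms:
                return iterations
            iterations *= 2

    print(calibrate_pbkdf2())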

Collisions and MD5

MD5 is broken: it is computationally easy to find a lot of pairs of distinct inputs which hash to the same value. These are called collisions.

However, collisions are not an issue for password hashing. Password hashing requires the hash function to be resistant to preimages, not to collisions. Collisions are about finding pairs of messages which give the same output without restriction, whereas in password hashing the attacker must find a message which yields a given output that the attacker does not get to choose. This is quite different. As far as we know, MD5 is still (almost) as strong as it has ever been with regard to preimages (there is a theoretical preimage attack, but it remains ludicrously far from being practical to run).

The real problem with MD5 as it is commonly used in password hashing is that it is very fast, and unsalted. However, PBKDF2 used with MD5 would be robust. You should still use SHA-1 or SHA-256 with PBKDF2, but for Public Relations. People get nervous when they hear “MD5”.

Salt Generation

The main and only point of the salt is to be as unique as possible. Whenever a salt value is reused anywhere, this has the potential to help the attacker.

For instance, if you use the user name as salt, then an attacker (or several colluding attackers) could find it worthwhile to build rainbow tables which attack the password hashing function when the salt is “admin” (or “root” or “joe”), because there will be several, possibly many sites around the world which will have a user named “admin”. Similarly, when a user changes his password, he usually keeps his name, leading to salt reuse. Old passwords are valuable targets, because users have the habit of reusing passwords in several places (that’s known to be a bad idea, and advertised as such, but they will do it nonetheless because it makes their life easier), and also because people tend to generate their passwords “in sequence”: if you learn that Bob’s old password is “SuperSecretPassword37”, then Bob’s current password is probably “SuperSecretPassword38” or “SuperSecretPassword39”.

The cheap way to obtain uniqueness is to use randomness. If you generate your salt as a sequence of random bytes from the cryptographically secure PRNG that your operating system offers (/dev/urandom, CryptGenRandom()…) then you will get salt values which will be “unique with a sufficiently high probability”. 16 bytes are enough so that you will never see a salt collision in your life, which is overkill but simple enough.

UUIDs are a standard way of generating “unique” values. Note that “version 4” UUIDs just use randomness (122 random bits), as explained above. A lot of programming frameworks offer simple-to-use functions to generate UUIDs on demand, and they can be used as salts.
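
Both approaches are one-liners in most environments; for instance, in Python (the 16-byte length mirrors the suggestion above):

    import os
    import uuid

    # 16 random bytes from the OS CSPRNG - unique with overwhelming probability
    salt = os.urandom(16)

    # or a version-4 UUID, which is itself built from 122 random bits
    salt_from_uuid = uuid.uuid4().bytes

    print(salt.hex(), salt_from_uuid.hex())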

Salt Secrecy

Salts are not meant to be secret; otherwise we would call them keys. You do not need to make salts public, but if you have to make them public (e.g. to support client-side hashing), then don’t worry too much about it. Salts are there for uniqueness. Strictly speaking, the salt is nothing more than the selection of a specific hash function within a big family of functions.

“Pepper”

Cryptographers can never let a metaphor alone; they must extend it with further analogies and bad puns. “Peppering” is about using a secret salt, i.e. a key. If you use a “pepper” in your password hashing function, then you are switching to a quite different kind of cryptographic algorithm; namely, you are computing a Message Authentication Code over the password. The MAC key is your “pepper”.

Peppering makes sense if you can have a secret key which the attacker will not be able to read. Remember that we use password hashing because we consider that an attacker could grab a copy of the server database, or possibly of the whole disk of the server. A typical scenario would be a server with two disks in RAID 1. One disk fails (the electronic board fries – this happens a lot). The sysadmin replaces the disk, the mirror is rebuilt, no data is lost due to the magic of RAID 1. Since the old disk is dysfunctional, the sysadmin cannot easily wipe its contents. He just discards the disk. The attacker searches through the garbage bags, retrieves the disk, replaces the board, and lo! He has a complete image of the whole server system, including database, configuration files, binaries, operating system… the full monty, as the British say.

For peppering to be really applicable, you need to be in a special setup where there is something more than a PC with disks; you need an HSM. HSMs are very expensive, both in hardware and in operational procedure. But with an HSM, you can just use a secret “pepper” and process passwords with a simple HMAC (e.g. with SHA-1 or SHA-256). This will be vastly more efficient than bcrypt/PBKDF2/scrypt and their cumbersome iterations. Also, usage of an HSM will look extremely professional when doing a WebTrust audit.
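
As a minimal sketch of what the HSM-backed scheme amounts to (the hard-coded PEPPER below is only a placeholder; in the real setup the key lives inside the HSM and only the MAC computation is exposed):

    import hashlib
    import hmac

    # Secret key ("pepper"); a placeholder value for illustration only.
    PEPPER = bytes.fromhex("00112233445566778899aabbccddeeff")

    def peppered_tag(password: str) -> bytes:
        # A MAC (here HMAC-SHA-256) over the password, keyed with the pepper.
        return hmac.new(PEPPER, password.encode("utf-8"), hashlib.sha256).digest()

    tag = peppered_tag("correct horse battery staple")
    print(hmac.compare_digest(tag, peppered_tag("correct horse battery staple")))  # True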

Client-side hashing

Since hashing is (deliberately) expensive, it could make sense, in a client-server situation, to harness the CPU of the connecting clients. After all, when 100 clients connect to a single server, the clients collectively have a lot more muscle than the server.

To perform client-side hashing, the communication protocol must be enhanced to support sending the salt back to the client. This implies an extra round-trip, when compared to the simple client-sends-password-to-server protocol. This may or may not be easy to add to your specific case.

Client-side hashing is difficult in a Web context because the client uses Javascript, which is quite anemic for CPU-intensive tasks.

In the context of SRP, password hashing necessarily occurs on the client side.

Conclusion

Use bcrypt. PBKDF2 is not bad either. If you use scrypt you will be a “slightly early adopter” with the risks that are implied by this expression; but it would be a good move for scientific progress (“crash dummy” is a very honorable profession).

Estimating Web Application Security Testing

I was recently asked how to estimate the time and set an appropriate time limit for a security assessment, and which items should be considered before deciding a timeline for particular tests. While the post below should be sound for small to medium scale businesses, it may fall short for enterprise organizations; even so, it should provide an elementary insight in generic terms.

We have all heard about the time a pentester invests in determining the logistics for enumerating a web application that is about to undergo security testing. In generic terms, certain assumptions are already in place, and it is natural for security managers to estimate the necessary time-frame so they can control implementation costs, arrange appropriate resources for the task, and so on. This post covers the minimum necessary to determine these metrics.

Considerable Items

  • Number of URLs which can be fetched via Burp’s Spider
  • Number of parameters which can be fetched via Burp’s Engagement tools
  • Number of vhosts, if they do not point to the same main application resources
  • Existence of web services or APIs included in the scope
    1. Here you will want to list the APIs included, via questionnaires
    2. Map the web service parameters (REST or SOAP)
    3. Add all of this to the web services total (sum)

Estimation

The complexity and the size, as discussed previously, can be determined by assessing the vhosts and the number of dynamic URLs (dynamic here only means the application talks to the back-end at the data tier level). Consider using test cases such as the ones I had prepared for clients in one of my previous research engagements, shown below (this is private and one can define their own):

Web Security Test Case

If you are unsure and still need to estimate a delivery timeline, use a Gantt chart for each submission and test case module, i.e. define modules in periodic terms such as ‘Input Validation Security Test Cases’, ‘Session Management Security Test Cases’, etc. A look at the timeline below should give a clear idea of how to estimate a proper enterprise delivery schedule:

Web Application Security Project Timeline

But before all of this, what is most significant is to road-map the project planner and describe to the client the needed test cases in a worksheet, so that the client can go through the requirements document and provide a submission that fixes a proper timeline schedule for you; this can be done in these ways:

  1. Map the requirements; if white-box, what are the credential requirements, etc.?
  2. Fill the gaps; check the application before you commit, and ask what further details are required.
  3. Always reach conclusions from the summation of the aforementioned.
  4. Add more days than originally derived; that way you ensure quality.
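
As a toy illustration of steps 3 and 4 (the module names, per-module day counts and the 20% buffer below are entirely hypothetical; substitute your own worksheet values):

    # Hypothetical per-module day estimates taken from the requirements worksheet
    module_days = {
        "Input Validation Security Test Cases": 3,
        "Session Management Security Test Cases": 2,
        "Authentication & Authorization Test Cases": 3,
        "Web Services / API Test Cases": 4,
    }

    base = sum(module_days.values())        # step 3: summation
    buffer_days = round(base * 0.2)         # step 4: extra days for quality
    print(f"Base estimate: {base} days, with buffer: {base + buffer_days} days")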

A project planner could look something like this; it is an integral part of planning the web application security project phases and also helps you define timelines for the project:

Web Security Project Planner

The estimation, again, is a by-product; it is not guaranteed that you will not face scope creep, time delays, or resourcing issues mid-project (which is why you add the extra days after mapping the timelines). What remains is to pin-point the critical path and the break points of the project, e.g. what could possibly go wrong and to what extent. You need to manage this extremely well and define everything beforehand. Best of luck! I hope you find this information useful.

About the Author

Shritam Bhowmick is an application penetration tester equipped with traditional as well as professional application penetration test experience, adding value to the Defencely Inc. Red Team, and currently holds technical responsibility for application threat reporting and coordination for Defencely Inc.’s global clients. Among his accomplishments, he has experience identifying critical application vulnerabilities and adds value to Defencely Inc. with his research work. The growing application security R&D sector at Defencely is taken care of by him. Professionally, he has had experience with several other companies, working on critical application security vulnerability assessments and penetration test engagements, leading the Red Team, and also training curious students in his leisure time. He also does independent application security consultancy.

Out of his professional expertise in application security, Shritam Bhowmick applies his knowledge to constructive Red Team penetration testing engagements for top-notch Indian clients and has a proven record of excellence in the field of IT security. A Google search of his name will confirm it. He has delivered numerous research papers, mostly application-security-centric, and loves to go deep into the details. This approach has led him to innovate rather than re-invent the wheel around old security concepts. In his spare time, of which there is barely any, he blogs, brainstorms on web security concepts and prefers to stay away from normal living. Apart from his professional life, he finds bliss in reading books, playing chess, philanthropy, and basketball for the sweat. He loves watching horror movies for the thrill and exploring new places to meet new people.

Solution to Cutting Out Cost Expenditure on Information Security

It is no surprise that information security is an expensive task, and it falls to risk managers to provide quality-assured security for the end product, be it web applications, thick clients, or software in general. To reduce overthinking of complicated executive-level decisions by project managers, it is essential to know how an information security program works.

How to assess the necessary components for enterprise security solutions?

The first step in solving a problem is to understand the question. In information security, the question for an organization is how to close security gaps by putting a security program in place and then providing a security plan to eliminate the growing gap; but what the security program should look like and how it should commence is entirely up to security managers. The necessary components are to be placed before the administration to gain approval to commence them, in order to manage enterprise risks, which include security.

In applications, the entire Application Lifecycle Management (ALM) will have a particular process, and its components will essentially be:

  1. people
  2. process
  3. product


Security management for all three has to be taken into consideration: educating people about security and providing the required awareness of information security policies around the organization, securing processes for the organization, and securing the product itself. This can be a top-down approach that provides a framework for security (the security program) and then plans security in specific ways to protect the business assets and interests of the organization. Throughout this process, cost can be a factor, because of the approved budget for the program and the maintenance needed to keep closing security gaps the way they should be closed.

How to solve the “cost” factor in a security program?

Security managers take decisions related to security and hence should be able to decide the overall cost in recurring terms. But it is not just about determining the cost; it is also about cutting costs to get the project budget fixed without affecting the quality provided by the security program. First it is necessary to assess what accomplishments are to be made during the entire life-span of an application, or of the application that is being developed in-house and will run on production servers after it is deployed. Some of the following have to be considered while setting out the layout and determining the costs:

  1. Objectives
  2. Required people
  3. Required outsourcing
  4. Required maintenance

The objectives should be very clear to the project managers in order to put the right people in-house to handle security problems; these people will be responsible for handling and mitigating risks and planning further with internal development teams. It is also necessary to outsource tasks that need subject matter expertise, since security is not about just one thing. When discussing information security, for instance, there may be more than one component to take care of, such as:

  1. secure coding practices in-house
  2. architectural risk analysis
  3. threat analysis
  4. security audits
  5. penetration testing

For most applications, the first three are done in-house, and these include costs too. Penetration testing is usually outsourced after the product or software (web applications) is deployed, but during the SDLC a ‘secure’ mechanism has to be put in place, which gives birth to the SSDLC (secure SDLC). Most threat analysis comes after risk analysis has been done at the architectural level, because managers have to decide on the resources to be allocated to each of these components. To cut costs, skilled labor should be assigned to each of the steps in the security framework, rather than trying to handle security at random, which most often fails and is not cost effective at all. Threat analysis will involve:

  • threat modeling
  • threat treatment
  • threat management

Threat management people and their resources will also be responsible for the later results that come out during penetration tests; in case the expected outputs are not achieved, a team of experts should be able to look at the functional dependencies and improve their formal test cases, which come in two kinds:

  1. positive testing (functional testing)
  2. negative testing (exceptional testing)

Functional testing covers what web applications are supposed to do, i.e. input versus output; negative testing covers how the applications handle exceptions: are there ways in which exceptions occur, and could these lead to business risks? Negative test cases need particular focus, since they are the elements for which later penetration testing proves that unit security testing was not effective, and that can be a real concern. Why? Because at later stages costs rise exponentially to manage security risks, and to rearrange and re-implement accordingly so that the product works without its security being affected (and also because organizations have to maintain compliance).
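
As a toy illustration (the parse_transfer_amount function and the use of pytest are assumptions made for the example, not part of any real test suite): a positive test checks the documented input/output behaviour, while negative tests feed malformed or hostile input and expect a controlled failure.

    import pytest   # third-party test runner: pip install pytest

    # A hypothetical "transfer amount" field: the application should accept a
    # positive integer and reject anything else.
    def parse_transfer_amount(raw: str) -> int:
        value = int(raw)                    # raises ValueError on non-numeric input
        if value <= 0:
            raise ValueError("amount must be positive")
        return value

    def test_positive_case():               # functional: input versus expected output
        assert parse_transfer_amount("250") == 250

    def test_negative_cases():              # exceptional: malformed or hostile input
        for bad in ["-250", "0", "1e9; DROP TABLE", ""]:
            with pytest.raises(ValueError):
                parse_transfer_amount(bad)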

These pointers are small things where cost cutting can be most accurate, because a pin-point analysis of how such extra costs can be reduced in security programs contributes, at later stages, to the overall security budget of the organization. It is also one of the reasons why organizations now outsource to managed bug bounty programs. To handle security the right way, it is also necessary to maintain quality during the SDLC period, since after the product is released it may be out of hand and a little too late for managers to manage security.

How does Defencely solve your problems?

Defencely provides a 360-degree security solution to organizational security problems, whether the products are still in the SDLC or have already been deployed. If applications are in the SDLC phases, it is more beneficial to cut resource costs and get dedicated security expertise to help you recognize and reduce risks before any commitments or deployments of your applications; it is like winning the war before it starts. This can also help with the compliance regime the organization has chosen and the reports it needs to prove its applications are secure for its customers and end-users.

The solutions provided are comprehensive, and hence it is extremely helpful to know whether a given application has passed improvement and maintains a continual security check. This can take the form of application security assessments, penetration tests, and simulated security testing, where a red team attacks your applications in offensive ways to measure their security and give your organization its overall security posture. Let's get you started with the right security program for your platform; contact us to help you solve your enterprise security problems.


Defencely Business Enterprise Security Solutions

Hello again. Not long ago, Defencely.com published a series of posts describing how enterprise security risks are evaluated and determined, in order to proactively close them in a responsible manner. This post covers the key points surrounding Enterprise Business Security Threats and is brought forward to spread business-security-centric awareness in the industry. In order for an enterprise to work according to its workflow, the business functional model should never overlap the data model. If it does, a business logic threat could potentially be present. This threat assessment model presents three distinct risk factors, which are:

  1. Confidentiality
  2. Integrity
  3. Availability

These are also known as the C.I.A. triad. A risk to confidentiality arises if an access control feature integrated into the business functional model is bypassed using some technique, providing an intruder with confidential data which otherwise should not have been compromised. A risk to integrity means that data was modified during its processing or somewhere in its life-cycle; this again would be a threat to the business. A risk to availability arises when business-critical public data is kept from being accessed, financially depriving the company or corporation of its resources: the intruder forces the business model to consume more resources, confining them to itself and making them unavailable.


Authenticity in the business risk model is yet another concern; through it, intruders could gain unauthorized access to critical business resources and compromise the company in various ways. This in itself could be a tragic scenario for a company, taking its toll and leading the company to losses. Consider the following intrusions, for instance:

  1. Uber compromised via a GitHub security key
  2. Staples compromised – the retail hack hijacking cards
  3. Home Depot email compromise – 53 million email addresses exposed
  4. CNET – compromised by Russian attackers

These were some of the haunting, awareness-inspiring cases from major giants in the business community and the industry. In the past, Microsoft, Oracle, Sony, and others have also been successfully attacked and compromised through business security and logical weaknesses. In these cases, business security was compromised in different ways, but the goal was always the compromise of data. The data compromised was related to business assets, not obtained through a procedure that kept business logic security in perspective. Certain types of application threat evaluation require this business data to first be measured relative to proper processing. The evaluation process then applies certain test cases that the application should either pass or fail. These test cases are described as follows:

  1. Identify threats to business protocols, if these could be violated in any way.
  2. Identify threats to business timing, if resources could be violated against timed access.
  3. Identify threats related to compromise of data assets – if data is not segregated from non-essential workflow.
  4. Identify threats related to the financial assets of the company – if financial records could be compromised.
  5. Identify threats related to processing – if certain steps in the process could be bypassed.

Some of the common Business Logic Threats are:

  1. Authentication Failure and Escalation of Privileges.
  2. Unauthorized Access to Resources via Parameter Manipulation.
  3. Business Process Logic Bypass via Cookies or Tampering Cookies.
  4. Bypass of Client Side Business Assets leading to Process Bypasses.
  5. E-Shop Lifting via Business Logic Manipulation leading to losses.
  6. Functional Bypass of Business Flow leading to access of 3rd party limited resources.
  7. Service Availability based Denial of Service Attacks via Business Logic Threats.

There could be numerous other ways to access critical data and exfiltrate it through business logic weaknesses. The recommendation is to test the application against these techniques and build proper segregation and channeling for them, in order to prevent intruders from harming the business workings of a company. A prevention chart should be followed by enterprise developers during the development phases across the entire SDLC. This would allow applications to be deployed securely and would restrict unauthorized use of data which otherwise could have been compromised by application-level vulnerabilities or remaining business logic vulnerabilities. Either of them is fatal and could lead to losses, ranging from reputational to financial.


Defencely provides services against these aforementioned threats, along with cutting-edge reporting deliverables for its clients. The services enable its clients to assess potential threats and remediate them. Defencely provides an in-depth scope and individual deliverables for its clients, which include:

  1. Application Security Executive and Technical Reports
  2. Business Logic Threat Executive and Technical Reports
  3. Mobile Security Executive and Technical Reports
  4. Individual Mitigation Trackers for both Application and Business Reports
  5. A Monthly Mitigation Overall Record for all the Identified Vulnerabilities

Aside from these, Defencely.com also provides custom-tailored services for Network Security and Code Audit. These deliverables focus on network security and server hardening, and hence enable clients to follow strict security policy rules and compliance requirements. Contact Defencely for its fast, reliable services at hi@defencely.com and give your web applications, servers, mobile apps, and code a real sense of security.
