Friday, December 26, 2008

Differentiation of Log Management Solutions

Question:
Centralized Log Management
I'm looking for an enterprise log management solution that can collect logs from various network devices and servers (primarily Windows servers). The purpose is primarily compliance, e.g. detecting security issues, troubleshooting, etc. I have read a lot of articles, but haven't found a good document containing a technical differentiation of the various log management products on offer. I require your professional suggestion on the subject.
Rgds
xxxxxx


Answer:
xxxxxx,
Here is a good start if you are looking for high level documents:
http://www.securitynews.cz/secnews/security.nsf/0/D328A8B95CC377A2C12572EF0069DF63/$file/Gartner_MQ.pdf

http://www.sans.org/score/esa_current.doc


On the technical side, I would check the following areas with the solution provider:
1- Compatibility (which products are officially supported as the log source)
2- What are the event aggregation/consolidation/normalization and correlation options? (A minimal normalization sketch follows this list.)
3- What if the log source is not supported? How easy is it to integrate?
4- How is licensing? When the deployment is distributed, and you have remote event collectors how does it work? (per event, per core, per site etc)
5- What are the out-of-the-box reports? (Ask for the actual reports; do not just say yes to report names, and do not buy into the "ISO 27001 or PCI reports are ready" sales pitch.)
6- How do you configure custom reports? Easy?
7- Do you have role-based management? Integration with LDAP, AD et al?
8- How do you integrate with other enterprise tools? Ticketing? GRC? Workflow etc? Easy?
9- Do you baseline data for anomaly detection? Do you support flow data analysis?
10- Can you get the solution in SaaS or fully managed MSSP format?
11- How do you scale?
12- How do you integrate with 3rd party storage solutions?
13- Is it more difficult than Google when you run a search?
14- How many people are required to run the operations? How many people are required to deploy it? Do you have formal training classes?
15- How do you maintain high availability? (Especially when you have multiple levels of aggregation.)
16- Is it possible to store/analyze raw network traffic?
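To make item 2 a bit more concrete, here is a minimal sketch of what event normalization looks like in practice. It assumes plain UDP syslog and invents its own field names; real SIEM products parse per device type and do far richer enrichment and correlation.

```python
# Minimal sketch: a UDP syslog listener that normalizes raw events into a common
# dict format. Field names and normalization rules are illustrative, not taken
# from any specific product.
import json
import re
import socket

SYSLOG_RE = re.compile(r"^<(?P<pri>\d{1,3})>(?P<msg>.*)$", re.DOTALL)

def normalize(raw, source_ip):
    """Turn one raw syslog datagram into a normalized event record."""
    match = SYSLOG_RE.match(raw)
    pri = int(match.group("pri")) if match else 13        # default to user.notice
    return {
        "source": source_ip,
        "facility": pri // 8,
        "severity": pri % 8,
        "message": (match.group("msg") if match else raw).strip(),
    }

def listen(host="0.0.0.0", port=5514):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        data, (src, _) = sock.recvfrom(8192)
        event = normalize(data.decode("utf-8", errors="replace"), src)
        print(json.dumps(event))                          # ship to storage/correlation here

if __name__ == "__main__":
    listen()
```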




As discussed above and in other previous posts, there are several "commercial" solutions to manage log data from Windows servers, network equipment, UNIX servers, security devices, etc. Depending on your requirements and event sources, the solutions may vary. I personally work with RSA enVision (formerly Network Intelligence), Cisco MARS, LogLogic, Q1 Labs and eIQ Networks, but there are many other solutions (e.g. IBM, CA, Novell, ArcSight, Intellitactics, NetForensics, TriGeo, Symantec, Quest, Consul, SenSage, and OpenService). In the meantime Nortel, Juniper and Enterasys have Q1-based offerings as well.
If you look at just the logging manager, you can extend the solution set with LogRhythm, Splunk, Snare and Kiwi Syslog Daemon.

If you have a specific question let me know,
cheers,
- yinal


Why GRC does not stick?

GRC in the IT field is supposed to be the next best thing. So why is it not here yet?

The term IT-GRC is not a fabricated name. It is a real-world response to an existing requirement, and it evolved through the right steps: at the beginning there were only simple logs and policies, then came the tools, methodologies, and integrated solutions under the SIEM name. SIEM wasn’t enough; we needed a solution set for managing governance, risk and compliance together, and then we had IT GRC.

IT-GRC has all the good signs of the next killer solution, but why is it not mainstream? Many people, including myself, ask the same question.

I would like to use the analogy in a very popular business book “Made to Stick” by Chip and Dan Heath.

Here is the book’s outline: The acronym "SUCCES" (with the last s omitted) abbreviates the ideas that stick... Each letter refers to a characteristic that can help make an idea "sticky":

Simple — find the core of any idea… First of all, GRC has 3 cores (like an odd Intel processor) and each core points at different directions and groups in IT organizations. While we have difficulty finding the core of Governance, Risk, or Compliance alone, we need to interpret all 3 cores together. Nobody can claim that presenting the core of the GRC idea is simple (with the exception of funny SAP people who think GRC is SoD).

Unexpected — grab people's attention by surprising them. GRC is not surprising. We have been waiting for such a solution for years; there were simply not enough drivers for a commercial one. Under the name of toolkits and methodologies, everybody had a hodgepodge workflow; at the end of the day, who beats a nice combination of Excel, Word and, lately, SharePoint documents :) . An organized solution such as IT-GRC that can tie into the governance of IT processes, risk and compliance was always a project in progress. Luckily some vendors delivered much better organized solutions. But at the end of the day it was not surprising. When I give a presentation on GRC, the first question that I get is "Can I buy a tool that delivers what you are telling us about?" The question is wrong, of course, but it steals all the "unexpected" beauty of the solution sets.

Concrete — make sure an idea can be grasped and remembered later. No, it won’t be remembered easily, even if Gartner says so. GRC covers a broad area, and it is not easy to find individuals who carry the responsibility and the attention span for all the GRC solutions.

Credibility — give an idea believability. GRC is too good to be true. Since it is new in the IT field, credibility is not easy. Many of the vendors will object to this statement, but it is difficult to give credibility to a toolset where the implementation and the operational details of specific customers play a bigger role. Like ERP deployments, IT GRC deployments have to be unique for every operation. Toolsets require deployment, and they need to be supported by management and operations teams. Credibility will eventually show up with the maturity of the solutions. There are some vendors out there with great customer names, which may form a good start.

Emotion — help people see the importance of an idea. The emotion was lost for most of IT with the departure of the dot-com companies. But it is not difficult to create the emotion where governance can positively change the bottom line of the operations. I think this is a matter of time.

Stories — empower people to use an idea through narrative. I can tell stories about the firewalls we built in 1994. GRC needs more stories. IT GRC is new, and our stories are limited; a search on Amazon ends up with SAP, Oracle and the business side of old-world GRC. IT GRC stories are not fully published yet.

It will stick at some point, but hopefully not too late.
cheers,
- yinal

Monday, November 17, 2008

What is 201 CMR 17:00?

Question:
What is 201 CMR 17:00?

Answer:

201 CMR 17:00 is yet another bigger brother telling us to do the right thing…

The requirements simply enforce the security of Massachusetts residents' personal information… You may presume that the data is already secure. Well, that is wrong; just listen to the complaints about the requirements.
If you have a business and you carry "personal information" about a Massachusetts resident, then you must take care of the requirements listed in 201 CMR 17:00.

The Office of Consumer Affairs and Business Regulation (OCABR) issued a comprehensive set of final (yes, it is always final :) regulations establishing standards for how businesses protect and store consumers' personal information, as of September 22, 2008. There is an executive order signed by Massachusetts governor Deval L. Patrick related to this regulation; the irony is that it ends with "God Save the Commonwealth of Massachusetts".


The 201 CMR 17:00 standard is related to M.G.L. c. 93H, because with the "general law chapter 93H – security breaches" comes the enforcement leg of the regulation.

The implementation deadline is January 1, 2009, but an extension to May 2009 is highly expected. Companies will be required to conduct internal and external security reviews and complete employee training.


Of course, most of the technology associations and CPAs oppose the regulation. They all have their reasons (not enough time, slow investment, harsh economic times, etc.). The Massachusetts CPA web site states that the compliance deadlines have been extended to May 1, 2009 (January 1, 2010 for 3rd party verifications and encryption). It is scary to know that the personal information is staying "clear" until then.

So what is it? “Every person that owns, licenses, stores or maintains personal information about a resident of the Commonwealth shall develop, implement, maintain and monitor a comprehensive, written information security program applicable to any records containing such personal information”

Personal information is defined as the resident's first name and last name, or first initial and last name, in combination with one or more of the following data elements:
1. Social Security number
2. Driver's License number
3. Financial Account number (credit card, debit card)
4. Any means of access information for personal financial information
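If you want a feel for the data discovery side of this, here is a minimal sketch that scans a directory tree for strings shaped like the data elements above. The regexes are illustrative only and will throw false positives; real discovery and classification tools go much further.

```python
# Minimal sketch: scan text files for patterns resembling the 201 CMR 17.00
# "personal information" elements (SSN, card numbers). Regexes are illustrative
# and will produce false positives; real data discovery tools do far more.
import os
import re
import sys

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
}

def scan_file(path):
    hits = []
    try:
        with open(path, "r", errors="ignore") as fh:
            for lineno, line in enumerate(fh, 1):
                for label, pattern in PATTERNS.items():
                    if pattern.search(line):
                        hits.append((path, lineno, label))
    except OSError:
        pass                                      # unreadable files are skipped
    return hits

def scan_tree(root):
    for dirpath, _, files in os.walk(root):
        for name in files:
            for path, lineno, label in scan_file(os.path.join(dirpath, name)):
                print(f"{path}:{lineno}: possible {label}")

if __name__ == "__main__":
    scan_tree(sys.argv[1] if len(sys.argv) > 1 else ".")
```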

After a quick read, I came up with the following short/dirty to-do list for the 201 CMR 17:00 requirements:

1. Verification of current information security management system or framework
2. Assessment of current asset inventory for customer owned systems
3. Assessment of current information security roles and workflow
4. Assessment of policy enforcement for existing policies.
5. Verification of an information security risk management framework. Review of internal and external risk assessments.
6. Assessment of risk mitigation plan
7. Assessment of options for employee awareness programs for information security
8. Delivery of required policies matrix
9. Assessment of current employee termination procedures. Verification of enforcement
10. Assessment of 3rd party business partners' access to customer-owned personal information. Cross-verification of 3rd party privacy policies
11. Assessment of workflow for personal information data collection. Verification of need-to-know principle
12. Assessment of access to personal information at customer facilities. Verification of need-to-know principle
13. Assessment of data classification for personal information at customer facilities.
14. Assessment of access logging for personal information
15. Verification of annual audit plan for personal information
16. Assessment of incident management
17. Assessment of patch management
18. Assessment of desktop/server firewall agent management, and enforcement
19. Assessment of encryption for all transmitted records and files containing personal information
20. Assessment of authentication and authorization controls for personal information
21. Assessment of unique identifiers for personal information access (e.g. usernames)
22. Assessment of account (password) management policy
23. Assessment of antivirus and malware policies, controls and enforcement.

My recommendation is to follow a larger framework such as ISO 27001, since there will be more compliance requirements in the future. ISO 27001 covers almost all requirements of 201 CMR 17:00.



let me know if you have any questions,
- yinal

Monday, October 13, 2008

Security in outsourcing deals: problem or solution?

Question:
Security in outsourcing deals: problem or solution?
It seems to be some kind of paradox. Outsourcing could lead to efficiency if processes are standardized, so implementing security as part of standard governance should be part of the solution. At the same time, every customer demands their own security standards implemented, which often differ in approach and/or weight. Each line of industry (of course) has its own standard. This makes it next to impossible to deliver according to all those standards at the same time (per contract) and still reach efficiency goals. Or is the whole community silently agreeing to deliver non-compliant? Does anyone have any thoughts about this matter which they would like to share with me, in Dutch or in English?

Answer:
.......,
I have been evaluating/auditing security aspect of outsourcing operations for a while.

It is actually possible to find efficiency in delivering security requirements for outsourcing providers.

Security has a universal interpretation, regardless of the language it is spoken in.

You are right that every customer/ every industry/ every information security framework brings some new obligations to the solution providers, and it is not possible to offer a standard cookie-cutter solution set for a broad customer base.

Here are the tested approaches to ease the pressure of never-ending customer security requirements on outsourcing providers:

1)- Map it: When analyzed thoroughly, you will find more common requirements than exclusive ones. In my own projects I can tell that more than 80% of the security requirements are common. The first step is to form cross-industry requirement matrices. Several organizations deliver these mapping matrices (e.g. ISACA). The customer has requirement A, which matches your solution B. You can find mapping matrices for COBIT, ITIL, ISO 27001, PCI, etc. For example, if you have an ISO 27001 compliant service and your customer is asking for HIPAA, you can easily map your existing ISO controls to HIPAA.
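As a rough illustration of what such a mapping matrix can look like in machine-readable form, here is a minimal sketch. The control names and framework references are placeholders I made up for the example, not an authoritative ISO/HIPAA/PCI crosswalk.

```python
# Minimal sketch: represent a requirement-mapping matrix in code so one control
# implementation can be reported against several frameworks. The mappings below
# are illustrative placeholders, not an authoritative crosswalk.
CONTROL_MAP = {
    "access-reviews": {
        "ISO27001": "A.9.2.5",
        "HIPAA": "164.308(a)(4)",
        "PCI-DSS": "7.1",
    },
    "log-monitoring": {
        "ISO27001": "A.12.4.1",
        "HIPAA": "164.312(b)",
        "PCI-DSS": "10.6",
    },
}

def requirements_for(framework):
    """List which internal controls satisfy a given customer framework."""
    return {
        control: refs[framework]
        for control, refs in CONTROL_MAP.items()
        if framework in refs
    }

if __name__ == "__main__":
    for control, ref in requirements_for("HIPAA").items():
        print(f"{control} -> HIPAA {ref}")
```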

2)- Offer Self-Service: Flexibility of the delivery infrastructure is the most effective answer to diverse customer requirements. When we initially developed a reporting portal, we thought that having 100 reports would be sufficient for our customer base. It wasn’t. As you have indicated, it never ends; every day there is a new requirement. We ended up building a reporting engine so that the customers can build their own reports. Today if a customer has a new security report requirement, we tell them to go to the portal and build one. For the workflow we took the same approach. We could not enforce our own escalation workflow on all customers, so we ended up developing a business rules engine. Now incidents are escalated according to customer requirements on the back-office system. If a customer requires a sophisticated flow, they choose to pay for developing their own business rules on our rules engine. It is possible to multiply the examples, but I assume the idea is clear.

3)- Get Modular: Even the mighty outsourcing providers are brought to their knees by weird customer requirements. Make sure that the operational flow and the compliance of the outsourcing operations can interface with 3rd party specialists. That is the beauty of multi-sourcing under a single contract. I was working with a large telco where the outsourcing provider had everything but the DNS appliances; introducing a 3rd party specialist under the outsourcer's umbrella fixed the problem. If the interface agreements are done, and if there is a structured framework for auditing outsourcing service partners, this is a way to grow healthy operations (low on the cost side as well).

4)- Focus on Service Management: Usually service/outsourcing companies rely on generic service managers who are afraid to go outside the contract terms. That does not work well in the information security world. If the service managers can understand customer requirements properly and relate them to outsourcing back-office operations, many of the problems can be fixed before escalation. I like to see all customer-facing members of the team working on the delivery side of the operations for a while. It is the only way to learn to flip the burger before selling it.

At the end of the day, the whole community is silently following a Darwinist path; the ones who adapt to the requirements intelligently, without hurting the operations and the budgets, survive… The old "my way or the highway" approach just hurts the whole service industry.

I would have written more since the topic requires more attention, but please let me know if you have a specific question.

regards,
- yinal ozkan

Sunday, October 12, 2008

IT Security Consultant Jr.

Question: How can I train myself in IT Security?
I've been a technical consultant and developer and have held various other SDLC-related roles for quite a while now. My goal is to move into IT Security, so how do I jump-start? What should I read or do?
I would very much appreciate it if anyone could clarify what skill sets an IT Security Consultant should/must have.

Answer:......,
As discussed above you have the right foundation to kick-start an IT security career.
An IT security career is a broad term; it is defined by the combination of several practice areas, and you need the fundamental skills to take the first step. Specializations like Network Security, Application Security, Penetration Testing, Database Security, Cryptology and Audit will come later, with specific skill-set requirements.
First, the fundamental skills:
1- Have a solid understanding of TCP/IP for today’s interconnected world of digital assets (if any other network technologies are in use, you need to understand them as well). You may either read one of the good books on the market (e.g. TCP/IP Illustrated) or write a small socket application from scratch; a minimal sketch follows this list. With your development background you should be able to pass the Cisco CCNA cert without detailed help or courses, just a few books... When you read a network capture file you must be confident.
2- Have a solid understanding of the basic pillars of information security: authentication, authorization, integrity, encryption and non-repudiation. You should be able to look at all the applications you use from a security perspective. Try evaluating the applications that you use daily in terms of the pillars mentioned above. Understand approaches, methodologies and solution sets.
3- Have a solid understanding of risk. Make sure that you understand the full risk life cycle: assets, threats, vulnerabilities, safeguards, gaps, etc. Once you understand the threats and the safeguards, your vision gets clearer. You can study risk management frameworks that are available publicly.
4- Have a solid understanding of IT security specific initiatives like COBIT, ISO 27001, NIST, PCI, NSA, CERT, CVE, etc...
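For item 1, here is the kind of small socket exercise I mean: a bare TCP client that speaks just enough HTTP to fetch a page, so you can see exactly what crosses the wire before you open a packet capture. The target host is only an example.

```python
# Minimal sketch: a bare TCP client that speaks just enough HTTP to fetch a page.
# Useful for seeing what actually crosses the wire; the host is an arbitrary example.
import socket

def fetch(host, port=80, path="/"):
    with socket.create_connection((host, port), timeout=10) as sock:
        request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        sock.sendall(request.encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:                      # the server closed the connection
                break
            chunks.append(data)
    return b"".join(chunks)

if __name__ == "__main__":
    response = fetch("example.com")
    print(response.split(b"\r\n\r\n", 1)[0].decode())   # print the response headers only
```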
If you want to be a consultant then you need some more basics:
1- Understand market requirements, trends, and solution sets. Start reading. Follow the top 10 blogs and other interesting information security blogs, set up your Google alerts, subscribe to the mailing lists, and start checking security research sites daily.
2- Build up your jargon; study CISSP, GIAC, CISM, CISA, etc… These certifications help you speak the same jargon as the rest of the field (the CIA triad, role-based management, etc.). When you say "web access blocking" instead of "URL filtering", your interviews will be short.
3- Get familiar with common solution sets, vendors, methodologies. Name 3 alternative solutions for each security requirement.
Another shortcut is to focus on one area only; if you like any of the areas above (Network Security, Application Security, Penetration Testing, Database Security, Cryptology, Audit) I can suggest different paths. You may also try getting a vendor certification first (Check Point, Cisco, etc.) and then start practicing security, as a shortcut.
Again, these are basics; they will open the door for you, and they will make you book-smart... Being a consultant requires active projects and hands-on expertise. On-the-job training is priceless if you can get the opportunity. If you do not have a project, you may join one of the community projects like OWASP, Snort, OSSTMM et al.
I have seen many self-starters choosing the security management path. Without genuine information security experience, a security management claim will be fun material for the veterans. Baby steps are recommended.
I think this is a good start, but let me know if you have any specific questions.
Cheers,
- yinal ozkan

Saturday, September 27, 2008

WAF over SSL VPN?

Question: When is it a good idea to add a Web Application Firewall (WAF) to an existing SSL VPN connection? Is it even necessary at all?
Approximately 100 End-Users
Medium Security (No Cash Transactions)
Web Server IIS based
scalability


Answer:
The answer depends on your security requirements.

If you have an assessed requirement (e.g. PCI) to secure your applications with a front end like a web application firewall (WAF), then you should have a web application firewall in front of your web applications.

In general, SSL VPN adds the following for shops that require layer 7 web application firewalls (when configured properly):
1- All users accessing your web applications through the SSL VPN are authenticated when enforcement is on. If authenticated users are considered trusted, then you do not need extra WAF protection.
2- SSL VPN systems can run pre-authentication posture checks like malicious software scans. If you consider scanned, clean systems trusted, then you do not need a web application firewall.
3- Some SSL VPN systems come with integrated security features like content security, layer 7 security, protocol checks, firewalls, etc. If the security level offered by the SSL VPN vendor is good enough for your web application security requirements, you do not need an additional WAF layer.

Let me know if you have any specific questions,
Regards,
- yinal ozkan

Web Filtering for ISPs: who would you recommend?

Question: I'm working on a regulation to allow the content regulator to issue website blocking requests to ISPs in ......... Blocking a few websites is not a problem, but blocking an entire category of websites (such as "pornography", for example) should also be made possible.

The regulation will specify technical solutions (whether software or hardware based) that are acceptable and recognized as capable of complying with individual and blanket blocking requests. Most of the solutions I've found online are tailored towards enterprises managing employee access to websites; what I'm looking for, however, must be capable of handling access requests from all users of a given ISP. Given that a single URL could have multiple IP addresses, the recommended solution should be robust enough to deal with such complexities.

What would you recommend? How was your experience with it? A brief summary would do just fine, there's no need to take a lot of your time in answering this question.


Answer: We have been deploying web filtering solutions for telcos for a while. In the telco world the requirements are different from the enterprise:
1- No authentication is required
2- Performance and scalability are major decision criteria.
3- Pricing is important when the user base is over 100K.
4- URL categories must fit your requirements; when needed, you should be able to apply more than one filter database.
5- Management should not require an army of engineers.
6- Not too many pie charts are required for reporting

This Gartner reprint is a good starting point for checking vendors:
http://mediaproducts.gartner.com/reprints/securecomputing/160130.html

Big enterprise appliance based solutions usually have a custom ISP product.
Blue Coat, IronPort, Secure Computing (now McAfee), MI5 Networks and Optenet are commonly used at telcos.

I work with Blue Coat appliances since they are stable and scalable and support 3rd party URL databases like Websense. But this combination can burn your budget. Blue Coat is in use in several states neighboring yours. Blue Coat also offers its own URL database:
http://www.bluecoat.com/

I have seen large ISP deployments with Optenet (the pricing options were good)
http://www.optenet.com/en-us/ispproducts.asp

Load balancing is a key issue. I am not sure how these ISPs are connected to the Internet backbone, but you will need to load balance the content filters. You can check F5, Cisco, Citrix, Radware, etc. for L4-7 load balancing switches.

And a few recommendations: do not get ambitious; stay away from content AV. It does not scale at the ISP level.
DNS poisoning and TCP resets are not very effective; go with a content gateway.
Because of your specific requirements, in-the-cloud services like Webroot and ScanSafe may not be the best option.
This is a commodity market; you have many alternatives like 8e6, Barracuda, Clearswift et al.
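To illustrate the point from the question that a single URL can sit behind many IP addresses (and why IP-level blocking is fragile compared with a content gateway), here is a minimal sketch that just enumerates the addresses a hostname resolves to. The hostname is only an example.

```python
# Minimal sketch: enumerate the addresses behind a hostname to show why blocking
# by IP alone is fragile; large sites resolve to many, frequently changing addresses.
import socket

def addresses_for(hostname):
    infos = socket.getaddrinfo(hostname, 80, type=socket.SOCK_STREAM)
    return sorted({sockaddr[0] for _, _, _, _, sockaddr in infos})

if __name__ == "__main__":
    for ip in addresses_for("www.example.com"):
        print(ip)
```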

If you have a specific vendor or design question, please let me know,
Regards,
- yinal ozkan

Friday, September 19, 2008

IT-GRC and GRCM tools revisited

The line between IT-GRC and old-world GRC is getting thinner every day, so I updated my list with the old-world GRC players. As you can tell, they all have IT-GRC solutions.

It is difficult to say which sets of tools are exactly for IT-GRC, GRC management (GRCM) or enterprise governance, risk and compliance (EGRC).

IT controls are everywhere when you check the 4 pillars of GRCM:
1- Audit management
2- Compliance management
3- Risk management
4- Policy management

Tools do not fix the governance problem, but they do help in shaping your project with fewer bodies (and probably in exchange for good hard cash).

The new era of tools has a better message than the previous "we fix your compliance problems" motto. We all knew that compliance was just another step toward achieving governance of information security. The new tools have better connections with legacy information security and risk management tools, and they come with several predefined policy frameworks like ISO 27001, COSO, COBIT, PCI, etc.

Not there yet, but if you are interested, here is a good starting list of lists for googling and reading:

Governance, Risk and Compliance (GRC) Tools with IT Controls (IT-GRC)


Agiliance
http://www.agiliance.com/
Brabeion
http://www.brabeion.com/
Archer
http://www.archer-tech.com/solutions/index.html
Control Path
http://www.controlpath.com/solutions_advantage.php
Symantec (Control Compliance Suite)
http://eval.symantec.com/mktginfo/enterprise/fact_sheets/ent-datasheet_control_compliance_suite_05-2007.en-us.pdf
Compliance Spectrum -Spectra (Command Center)
http://www.compliancespectrum.com/
Modulo
http://www.modulo.com/
NetIQ VigilEnt Policy Center and other NetIQ tools
http://download.netiq.com/CMS/WHITEPAPER/NetIQ_CRM_Methodology_Feb_2007.pdf
eIQ Networks SecureVue
http://www.eiqnetworks.com/products/SecureVue.shtml
CA clarity (formerly NIKU)
http://www.niku.com/it-governance-47.html
IBM Tivoli Series
http://www-306.ibm.com/software/uk/itsolutions/governance/?ca=grm_Lnav&me=w
SAP
http://www.sap.com/solutions/grc/index.epx
Relational Security - RSAM
http://www.relsec.com/rsam_overview.htm
Iconium
http://www.iconium.co.uk/Solutions/overview.htm
Security Works - Visible Security
http://security-works.com/?page_id=27
Oracle (formerly Logical Apps and Oracle GRC Manager)
http://www.oracle.com/solutions/corporate_governance/governance-risk-compliance-manager.html
Proteus
http://www.infogov.co.uk/proteus_enterprise/index.php
Avedos
http://www.avedos.com/257-Home-EN.html
BWise
http://www.bwise.com/
Neupart
http://www.neupart.com/
Metric Stream
http://www.metricstream.com/
Nemea
http://www.nemea.us/
Favored Solutions
http://www.favoredsolutions.net/
Paisley
http://www.paisley.com/
OpenPages
http://www.openpages.com/Solutions/Technology_17.asp
Qumas
http://www.qumas.com/products/index.asp
IDS Scheer
http://www.ids-scheer.com/en/ARIS/ARIS_Solutions/Governance_Risk__Compliance_Management/88815.html
Axentis

http://www.axentis.com/axentis_solutions_5.aspx
Achiever
http://www.goachiever.com/ACHIEVERPLUS/aweb2.nsf
Methodware
http://www.methodware.com/products/oprisk/idx-oprisk.shtml
Protiviti
http://www.protiviti.com/portal/site/pro-us/menuitem.32f530ef9aa26f4acd230ef2f5ffbfa0/
Cura Software
http://www.curasoftware.com/pages/content.asp?SectionId=7&SubSectionID=48
Mega
http://www.mega.com/index.asp/l

Thursday, July 24, 2008

The Frequency for Security Report Reviews

Q: How often do you review your security reports? Often, sometimes or never?
Security requires a hands-on approach: monitoring, reviewing and patching. In the case where there is no dedicated security personnel onsite, are you reviewing the reports on a weekly basis (often), monthly (sometimes), or never? If sometimes or never, why not?

A: ....,
As you know, on a broader picture security reports must be managed.

The frequency of the review (which is a part of security management) can be determined by the security management approach of the operation. The frequency of reviews depends on the risk level of the protected assets.

Calculation of the review frequency can be based on simple logic: the cost of the review (people, time, other resources, etc.) should be justified by the cost of the risk avoided.
If the cost is right, then perform the reviews as often as possible.

As an example, real-time log monitoring, an on-site information security team and a daily security review of reports make sense for a financial or healthcare operation where lives and hard cash figures determine the risk. On the other side, it might be OK to batch-process logs and review the reports weekly for a mom & pop hardware store, based on the information risk appetite taken.
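For what it is worth, here is the cost-justification logic above as a tiny worked example. All the figures are invented for illustration; plug in your own loss estimates and review costs.

```python
# Minimal sketch of the cost-justification logic: reviews pay off when the risk
# they remove exceeds what they cost. All figures are invented for illustration.
def annualized_loss(single_loss_expectancy, annual_rate_of_occurrence):
    return single_loss_expectancy * annual_rate_of_occurrence

def review_is_justified(review_cost_per_year, ale_without_review, ale_with_review):
    """True when the risk avoided by reviewing exceeds the cost of the reviews."""
    return (ale_without_review - ale_with_review) > review_cost_per_year

if __name__ == "__main__":
    ale_no_review = annualized_loss(250_000, 0.30)   # breach cost x likelihood, no reviews
    ale_daily     = annualized_loss(250_000, 0.05)   # likelihood with daily log review
    print(review_is_justified(40_000, ale_no_review, ale_daily))   # True: 62,500 > 40,000
```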

Other management concerns for security report reviews (besides frequency) are:
1- Who reviews the reports
2- Who approves/signs-off the reviews
3- How is the review process documented
4- How is the effectiveness of the reviews measured
5- How are the reviews improved

Let me know if you have a specific question.
regards,
- yinal ozkan

Monday, June 30, 2008

Managed CPE Services and Green Data Center

Data center energy efficiency is the new buzzword. There are several creative solutions on the market, from controlling HVAC to virtualizing physical servers.

One of the data center energy saving methods is very simple. Find the clusters that are not fully utilized (e.g. a proxy cluster), then shut down the idle servers in the cluster farm, monitor utilization on the active servers, and boot spares to join the cluster when needed. You do not need hot cluster members in a cluster with more than 2 members (unless utilization requires it)… Batch task servers can be shut down as well.

This action requires careful 24x7 monitoring and usually a service solution that can pass a Turing test. Managed service providers and managed security services providers already carry the know-how to monitor every cluster member and drive a full device boot cycle. That is why we are not far away from "energy saving" offerings from managed CPE providers.

On the operational level, simple SNMP monitoring of the cluster members, or monitoring via the load balancers (F5 LTM / Citrix NetScaler) that already watch them, will handle the graceful shutdowns and joins for the unused cluster members at no cost.
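The decision logic itself is simple enough to sketch. The utilization poll below is a placeholder (a real version would read SNMP counters or the load balancer's own statistics), and the thresholds are arbitrary examples.

```python
# Minimal sketch: keep enough cluster members up to carry the load plus headroom,
# shut the rest down, and wake spares when utilization climbs.
import math
import random

def poll_utilization(member):
    """Placeholder: a real version would read SNMP counters or load balancer stats."""
    return random.random()                       # simulated utilization between 0.0 and 1.0

def plan_power_state(members, target_util=0.6, min_members=2):
    """Return (keep_running, shut_down) lists for a cluster of members."""
    load = {m: poll_utilization(m) for m in members}
    needed = max(min_members, math.ceil(sum(load.values()) / target_util))
    ranked = sorted(members, key=lambda m: load[m], reverse=True)
    return ranked[:needed], ranked[needed:]

if __name__ == "__main__":
    keep, spare = plan_power_state([f"proxy{i}" for i in range(1, 7)])
    print("keep running:", keep)
    print("shut down:   ", spare)
```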

Let’s wait and see more creative offerings from MSPs and MSSPs. On the other hand, I personally believe that data centers are cutting emissions regardless of their green image. Data centers deliver massive automation and they do eliminate traditional emission sources such as vehicles (and, unfortunately, sometimes people).

Sunday, May 25, 2008

Firewall best practices

Q: Check Point firewall R62 & Nokia IP 560 hardware appliance best practices
We have a Check Point R62 firewall on a Nokia IP 560 hardware appliance. We audit the rules on a quarterly basis, but I still feel that a lot of tuning needs to be done on the Nokia IP 560 and Check Point. Can someone please help me find best practices for the firewall?

A: Hi ...,
Best practices can be classified in 2 main areas:
1- Information Security
2- Operational

For information security, make sure that you follow a higher-level information security framework with integrated risk management. Firewalls must be part of the bigger picture, not standalone devices.
ISO 27001, NIST, COBIT or FFIEC can be a good start.
There are several guidelines from the FFIEC if you are operating in the financial services industry.
http://www.ffiec.gov/ffiecinfobase/html_pages/it_01.html

You can also check firewall specific guidelines from NIST
http://csrc.nist.gov/publications/nistpubs/800-41/sp800-41.pdf

Once you make sure that you address governance, risk and compliance (GRC) related concerns you can dive into operational issues such as reliability, high availability, performance, scalability, manageability.

We have been managing thousands of Check Point systems on the Nokia platform. It is difficult to cover best practices in a single post (from change management to patching, from backup policy to cluster optimization). There are several good recommendations in other posts as well; here is a quick view from my side.

1- Optimize the rulebase (most-used rules at the top, use logging intelligently, avoid duplicate objects, check for unused objects and rules, make sure that overlaps do not exist, use network objects, decrease NAT usage, etc.); a small rule-hit-count sketch follows this list.
2- Upgrade to R65 for Check Point. It is more stable and you will get all the new fixes faster (compared with R62).
3- IPSO 4.2 will bring you more features with SecureXL, QoS, etc., but go over the release notes carefully. Make sure that SecureXL is enabled in the current deployment.
4- If you have performance issues and you are not planning to upgrade the platform, check the new ADP cards from Nokia.
5- Architecture-wise, avoid running non-firewall features such as SmartCenter, AV, or filtering on your Nokia unless you need them.
6- If you have site-to-site VPNs, check the route-based VPN feature with dynamic routing for better redundancy.
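Here is the rule-hit-count sketch promised in item 1: it counts rule hits in an exported firewall log (CSV) to spot rules worth moving up and rules that never match. The "rule" column name is an assumption about the export format, not the actual SmartCenter log schema.

```python
# Minimal sketch: count rule hits in an exported firewall log (CSV) to find rules
# to move up the rulebase and rules that never match. The "rule" column name is an
# assumption about the export format.
import csv
import sys
from collections import Counter

def rule_hit_counts(log_path, rule_field="rule"):
    counts = Counter()
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            counts[row.get(rule_field, "unknown")] += 1
    return counts

if __name__ == "__main__":
    for rule, hits in rule_hit_counts(sys.argv[1]).most_common(20):
        print(f"{rule}\t{hits}")
```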

I also recommend using 3rd party test services which include DDoS.

For automation, you can use firewall audit and change management tools such as Tufin, AlgoSec and FireMon (we work with Tufin). These tools will give you a lot of input for the audit. On the security risk management side, if you have budget, you can check Skybox Security and RedSeal. They will be really helpful.

If you have any specific questions please let me know,

cheers,
- yinal

Wednesday, May 21, 2008

ThinClient Security

Q: Kindly elaborate on your experience with thin client security from the likes of HP and Wyse.

Currently researching the following with regard to Thin Client Security :

1. The need for AV on a thin client and its manageability. If AV is required, which ones are recommended and why (file size, footprint on the OS and detection capability)?
2. How to enforce security policies in Embedded XP, e.g. device control/lockdown of the system, etc.
3. Case studies of thin client security - breaches that have happened in the past, best practices and recommendations, etc.
4. The possibility of a thin client turning into a zombie for a botnet, and any other security considerations to keep in mind


A: Hi ........,
It is difficult to have a generic answer for all thin clients since there are several OS flavors and several different deployment/use cases. HP offers clients with Windows CE, NeoLinux, Windows XP Embedded, Debian Linux 4.0 and HP ThinConnect, and Wyse offers clients with Wyse ThinOS, Windows XP Embedded, Wyse Linux, Windows CE, etc. You can choose to restrict user activity to a minimum and reboot with a fresh image each time, or you may choose to customize your thin OS and allow a limited set of apps in addition to RDP/ICA-type terminal clients. So your risk profile varies based on your decisions.

That being said here are my comments


The need for AV on a thin client and its manageability. If AV is required, which ones are recommended and why (file size, footprint on the OS and detection capability)?

If you are looking at XP Embedded (XPe), as you stated below, you have some options. CA Anti-Virus (formerly eTrust) is the first player, with a small footprint. Symantec offers full endpoint security with Symantec Endpoint Protection for Windows XP Embedded; the disk/CPU/memory requirements for the Symantec client are 30 MB / 1.3 GHz / 128 MB. You can also use McAfee VirusScan Enterprise on XPe. I would cross-check the licensing options for each of these as well. If you go with terminal connections only and lock down the client, you may question whether AV is necessary at all.


How to enforce security policies in Embedded XP, e.g. device control/lockdown of the system, etc.

As in real XP: group policies and the Windows Firewall. On Windows XP Embedded with SP2, you or your vendor can configure the run-time image for better security. The Windows XP Embedded run-time image should include the Group Policy Client Core (Gptext.dll) and the Windows Firewall components. You can use MMC to update Group Policy on a deployed run-time image. You can also add Symantec endpoint security to the mix.
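As a small illustration of one common lockdown, here is a sketch that disables the USB mass-storage driver by setting the USBSTOR service start type to 4. In practice this would be pushed through Group Policy or baked into the run-time image rather than run ad hoc, and it assumes a Windows host with administrative rights (and a Python runtime, which a locked-down XPe image would not normally carry).

```python
# Minimal sketch: disable the USB mass-storage driver on a Windows/XPe image by
# setting the USBSTOR service start type to 4 (disabled). In practice this setting
# would be deployed via Group Policy or the image build, not run ad hoc.
# Requires Windows and administrative rights.
import winreg

USBSTOR_KEY = r"SYSTEM\CurrentControlSet\Services\USBSTOR"

def disable_usb_storage():
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, 4)   # 4 = disabled

if __name__ == "__main__":
    disable_usb_storage()
    print("USB mass-storage driver disabled (applies to newly attached devices).")
```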

Case studies of thin client security - breaches that have happened in the past, best practices and recommendations, etc.


Breaches seldom make it all the way to case studies, for a good reason. I have not personally witnessed a "thin client" related breach. But XP Embedded is very close to XP, so it is subject to similar attacks, and it can be even worse since it is not a closely monitored OS. There is a higher risk on thicker Linux and XPe distros if the end user is allowed to make any changes to the configuration. The best practice is to control the whole environment and disable user-level access to OS settings completely. Remote exploits (e.g. JavaScript or ActiveX vulnerabilities) will be sandboxed if the user's rights are restricted properly. I personally like terminal-connection-only or OS streaming models where every boot is a fresh start.


The possibility of a thin client turning into a zombie for a botnet, and any other security considerations to keep in mind

It is technically possible if the user has the rights to modify anything and execute the changes. I would choose the thinnest model that fits the requirements.

cheers,
- yinal


How to allow Internet access for research and protect servers at the same time

Q: How do we allow Internet access for research and protect our servers at the same time? As a programmer analyst I use the Internet to do research for my work. We are looking for a way to still allow this but protect our servers where client info is held. We don't want to be constantly updating a trusted sites list with website addresses that our programmers use. Any suggestions would be greatly appreciated!

A: Hi ....,
As recommended in the previous posts, the first action is to follow a comprehensive security framework to make sure that all high level security concerns are addressed with appropriate security controls and information security governance is in place. (You can choose NIST, COBIT, ISO 27001 etc)

When the policies and procedures are addressed, on the architecture side I have several recommendations:
1- Segment your servers from research desktops. Servers have to be behind security controls like intrusion prevention, firewall, data leakage protection, database security, behavioral anomaly detection etc.
2- Secure your desktops and servers with client based firewalls, host based intrusion prevention systems, 802.1x, anti-virus, anti-malware, encryption, group policies, tripwire, application control, secure IM etc. Always secure your client data, use rights management tools, database security tools, encryption tools and audit tools on your servers where you keep client data, make sure that every security policy is enforced. Strong authentication is also recommended. Make sure that you control every output device like USB tokens, CD writers, backup tapes etc.
3- Filter your Internet HTTP/HTTPS access with anti-malware, anti-virus and instant messaging protection in addition to URL filtering. You can use a web-based filtering service like ScanSafe, or you can go with an in-house solution like Blue Coat, Websense or Secure Computing type URL + AV + IM filters. Deny outbound tunnels on external gateways (a very common way for programmers to bypass filters…).
4- And for your programmers… Managing white lists (trusted sites) is painful. Use a self-service management gateway for your URL filtering so that users can request the sites that they want to access for a limited time, and the approval process is automated. The way it works is to have a web portal where end users request new trusted sites for research. Approval can be automatic or manual, unlimited or time-limited, restricted to the user or open to all. You can integrate this portal with your internal user directory, such as Active Directory or LDAP, so that the portal can recognize each requestor without an additional login.
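For a flavor of the bookkeeping behind such a portal, here is a minimal sketch: approved sites carry an expiry, and the filter consults the list on each request. Persistent storage, the approval workflow and the directory integration are all left out; the names are mine, not from any product.

```python
# Minimal sketch of a self-service whitelist: approved domains carry an expiry,
# and the proxy/filter consults the list on each request. Storage, the approval
# workflow and directory integration are omitted.
import time

class ResearchWhitelist:
    def __init__(self):
        self._entries = {}                       # domain -> (requester, expires_at)

    def approve(self, domain, requester, ttl_days=30):
        self._entries[domain.lower()] = (requester, time.time() + ttl_days * 86400)

    def is_allowed(self, domain):
        entry = self._entries.get(domain.lower())
        if not entry:
            return False
        _, expires_at = entry
        if time.time() > expires_at:
            del self._entries[domain.lower()]    # expired entries drop off automatically
            return False
        return True

if __name__ == "__main__":
    wl = ResearchWhitelist()
    wl.approve("docs.python.org", requester="dev-team", ttl_days=7)
    print(wl.is_allowed("docs.python.org"), wl.is_allowed("example.net"))
```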

Let me know if you have any specific question about the any of the solution areas that I have briefly mentioned.
cheers,
- yinal


Monday, April 28, 2008

Configuring VPN as leased line backup

Q:
Hi everybody,
Can you help me with configuring a VPN?
The setup is like this: my customer has a point-to-point leased line as the primary link going to the head office, using an 1841 router, and OSPF is running on this segment.
He has an ASA 5510 behind the router, and an ADSL modem directly connected to the ASA.
Now, what he wants to achieve is that once the primary link (leased line) goes down, traffic starts going out from the ADSL link through a VPN tunnel.
Keep in mind that at the head office he is running Juniper products.
Do you have any idea how this can be achieved?
I will really appreciate your quick response.



A:
Hi ...,
We deploy similar IPsec VPNs over Internet links for high availability requirements. I call this MPLS Plan B... (in your case, Leased Line Plan B :)

Here is my understanding of your setup:
Remote Office: Cisco ASA connected to Internet, Cisco 1841 connected to leased line
Headend: Juniper firewall connected to Internet, Some Cisco hardware connected to leased line
Internal Routing: OSPF


What you need is to extend dynamic routing (in your setup, OSPF) to the Cisco ASA and the Juniper appliances. Make sure that both the ASA and the Juniper appliances participate in OSPF. First build the IPsec tunnel between the remote site ASA 5510 and the headend Juniper. The firewalls will route traffic to the IPsec tunnel interfaces as a byproduct of the OSPF routing decision.

An important catch is validating the OSPF cost of the Internet links. The Internet path's OSPF cost must be higher than the leased line's cost; this ensures that the leased line stays the primary link. Increase the costs manually if that is not the case.

Inter-product IPsec tunnels (in this case ASA to Juniper) can be tricky; I do recommend a lab proof of concept before the production cutover.

Another way of building Internet failover is to use GRE tunnels between the internal Cisco hardware, so that you can bypass the Juniper headend firewall integration for routing (all you will need is a simple IPsec VPN between the ASA and the Juniper that allows GRE traffic between the internal Cisco routers). I prefer the first option.

cheers,
- yinal

Sunday, April 20, 2008

End Point device security is becoming a major issue?

Q: Hi, endpoint device security is becoming a major issue. Devices like iPods, mobile phones, etc. are a threat to data security in organizations. Are any of you facing such challenges in your organizations?

A: Hi .....,
Here are 3 basic approaches:
1- Cut the cord – do not allow transfer of any data to mobile devices. This option assures security, but it is not a mature solution on the user side. We all agree that mobile devices are business enablers.
2- Control/Manage End Points – You need to manage all these end points as part of your enterprise operation. Security on the endpoints is no different from any other enterprise component, but it is more difficult since the resources are much more limited (you cannot have 20 applications running on Nokia phones, and you cannot manage iPods centrally). You can start with the following list – a single client is preferred:
- Port Control (USB, CD, Floppy, Bluetooth, IR, Wi-Fi, Ethernet etc)
- Location awareness
- Encryption (file, disk, mail), key/cert management
- Firewall
- IPS
- Antivirus (http and SMTP)
- Antispam, Phishing, Malware control (http, SMTP, SMS)
- URL filtering
- Application control, and tripwire type change control
- Remote device management (in a secure manner :)
- Biometrics/TPM/SSO/802.1x support
- Easy to scale on multiplatform esp. on mobile
3- Control Data – Instead of focusing on device-level security, you may focus on data security. You can shift from logical controls to data-level security controls. If the data in the organization is classified by security requirements and protected accordingly, the devices will naturally comply with the higher plan. For the critical data I do recommend checking the enterprise rights management systems (a.k.a. DRM). Once your data is protected by enterprise rights management (ERM) or information rights management (IRM), it will be protected on the endpoint devices as well. Deploying ERM is the challenge. You may start googling with the following keywords: EMC (Authentica), Oracle (SealedMedia), IBM, or Microsoft RMS, or choose dedicated shops like InstaSecure, Modevity or Liquid Machines. I hear a lot of activity around Liquid Machines.

Let me know if you have a specific question on the topics above,
cheers,
- yinal

Tuesday, April 1, 2008

How much can "fear" be used ethically in selling a computer & network security solution?

Hi ...,
I have worked on both buy and the sell sides of the enterprise security space for years.

Fear is not a wrong feeling, but lying is a wrong unethical, immoral act.

The ethics encompassing right conduct during the information security sales cycle are not unique to information security; they are based on the same ground principles of business ethics.

A salesperson should be telling the truth. FUD selling is as unethical as selling false hope or misusing trust. FUD is discussed more because the buyer side falls for the lies more easily.

Fear is a lifesaver when it is sensed at the right time in the right amount.
Fear can be classified as an instinct rather than an emotion.

If a tire salesperson tells me that my car may have a serious accident because I have old tires, and it is the truth, I may owe him my life; there is nothing wrong with the fear there.
But if the same tire salesperson tells me that my car may have a serious accident because I have old tires, and what he tells me is actually an empty, ungrounded sales pitch, then fear is just a sales tool. It is wrong.


That is the ethics line between the evil and the good. In my personal life I only relay fear where I have fear, where I share the same concerns with the person I am talking to. Risks can only be determined from facts, not hearsay or imaginary sources, so my personal fears can be far away from reality for the person I am communicating with. A disclaimer of the facts when discussing fearful topics can be a good ethical start for the sales side.

I can give more real life examples if you need any,
Let me know if you have any questions,
Regards,
- yinal ozkan

Sunday, March 23, 2008

Managed Security Services Providers and the BPO / ITO Providers

If you follow the managed security services providers (MSSPs), you will notice a significant shift in the definition of outsourcing. Most MSSPs aim to manage the customer premises equipment with sophisticated remote management and central log correlation tools. The manpower required to execute this operation is different from that of BPO providers or even managed service providers: just a handful of Security Operation Centers (SOCs) and a 24x7 full-escalation engineering shift are enough to kick-start an MSSP operation.

The difficulty in managing customer-owned security devices derives from multiple sources, but if you exclude the technological challenges (like development, capacity, infrastructure and tools), the main roadblock for the newcomers is "trust".

Giving up security, the keys to the kingdom, is not easy, especially to an operation center that is connected via a cross-oceanic fiber cable. All early players in the market played the "local" hand to gain the trust of potential clients. In a post-9/11 world, relying on a 3rd party for all information security operations required different assurances. Large corporations chose different paths:

1- Choosing their trusted telecom company to provide managed security services instead of their core BPO partner
2- Choosing monitor-only services from BPOs and limiting the scope to view-and-alert type passive access instead of full security operations management
3- Trusting a local specialized MSSP instead of a low-cost overseas shop

If we look at the MSSP market today, the decision tree above is very visible in the developed markets for managed services. Telecoms are eager to acquire any MSSP that can bring them recurring services revenue and higher services margins. In the last 2-3 years BT, Verizon, France Telecom, NTT and KPN all acquired some sort of managed security services provider to compete in the global market. The other global/local telecoms chose to develop their own services (e.g. AT&T, DT). Either way, telecoms see the MSSP market as their own managed CPE market.

In the second category, a lot of large outsourcing contracts came with security monitoring, which pushed BPO providers to build impressive monitoring / log correlation operation centers. But large corporations usually kept the internal security team intact to manage the security infrastructure.

The last category, the specialized MSSPs, grew significantly with the demand. The interesting part is that none of the global players came from the outsourcing world: VeriSign, Symantec, Integralis, ISS, SecureWorks, Perimeter, RedSiren, Counterpane, NetSec, CyberTrust, Ubizen, etc. None of the specialists belonged to a BPO provider. Most of the specialized MSSP companies listed were acquired by the telecoms, by the way.

So the question is "what will happen next?" Will the telecoms finally figure out the magic of delivering services, or will they prove yet another failure in investment? It is very difficult for a telecom company to deliver a highly customized service; on the other hand, cookie-cutter, managed router, managed MPLS type boxed solutions do not play well in managed security services.

Of course there are alternatives, like IBM acquiring ISS, Microsoft acquiring FrontBridge, Google acquiring Postini, and finally Dell acquiring SilverTech. But with the exception of IBM, in-the-cloud, on-demand managed security services form a different category than outsourcing the complete operation or the managed CPE.

I see a lot of leverage in large-scale managed services shops (MSPs) engaging specialized MSSPs to build a hybrid, best-of-both-worlds model. At the end of the day, it is very difficult to draw a demarcation line for managed security. If the desktops and the servers are part of the grand information security plan, an MSP should play well with the information security management provider; trusted MSSPs will also bring security credibility to MSPs. On the other hand, MSSPs will be able to tap into 500+ large MSP operation centers, and they will be able to deliver direct end-user security support for the first time. Segregation of duties will automatically be delivered in the meantime.

Time will show which path will dominate the market.
On a personal note,
- yinal ozkan

Saturday, March 15, 2008

What are the business drivers that lead to Infrastructure Management Outsourcing?

Q: What are the business drivers that lead to Infrastructure Management Outsourcing? What are some of the most pressing needs/challenges that an organisation wants to overcome through IM outsourcing?
Clarification: Other than cost arbitrage, what are the primary business goals?

A: Hi …….,
I have been working in the information security infrastructure management outsourcing area for a considerable time. As listed above, there are several business drivers that lead to these projects. Here is my classification:

1- Efficiency: If somebody else can manage your infrastructure in a more efficient way than your internal team, then you should consider a third party.

2- Cost Cutting: Who does not like to spend less for the same deliverable? Even in cases where the efficiency and risk levels are not better with the third party, the decision is obvious when you can cut costs considerably.

3- Competitiveness: Time to market, beating your competitors, innovation capability, intra-industry integration and financial capability are good concepts and good business drivers. If your infrastructure is managed by a third party, and the third party makes you agile, it is time to outsource.

4- Risk: Risk always matters. Ignoring risk is not bliss. Covering the bases sometimes involves a professional third party. Lowering risk can be measured quickly and can be listed as a business driver.

All 4 of these drivers involve each other when analyzed thoroughly, but classification helps with better identification.

If you have specific questions about any category related with examples, please let me know,

Regards,
- yinal ozkan

Monday, February 18, 2008

Enterprise File Transfer Solutions

Q:
Enterprise File Transfer Solutions
I am researching best practices surrounding file transfer between business partners. The solution must be able to integrate with various back-end systems and offer Internet-facing FTP, SFTP and FTP-SSL.
I have identified the following requirements:
The solution must offer automated encryption/decryption via PGP.
The solution must be able to route the information received to its final destination.
No data may be unencrypted within the DMZ
(must be encrypted before being sent or decrypted after being moved internally)
Clarification
The PGP requirement is due to legacy considerations. All our current transfers are encrypted using PGP. We would entertain other encryption mechanisms... but PGP support provides the most effective migration strategy.


A:
Hi …..,
Best practices are the ones with the fewest headaches on the operations side :) (not necessarily on the security side). Best practices are usually determined by the resources and the flexibility of your operations.

You need a solution that is transparent to existing operations while satisfying security and regulatory requirements. That is a double-edged sword.

If you have in-house developers, the most customizable way is to use the "PGP Command Line" series of products. This is very flexible since it works on all platforms...
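As a rough sketch of the automation, here is what the encrypt-before-transfer step can look like, using the open-source GnuPG command line as a stand-in for PGP Command Line. The file name and recipient are placeholders, and the recipient's public key is assumed to be in the local keyring already.

```python
# Minimal sketch: encrypt an outbound file before it ever reaches the DMZ, using the
# open-source GnuPG CLI as a stand-in for PGP Command Line. File name and recipient
# are placeholders; the recipient's public key must already be in the keyring.
import subprocess

def pgp_encrypt(path, recipient, output=None):
    output = output or path + ".gpg"
    subprocess.run(
        ["gpg", "--batch", "--yes", "--trust-model", "always",
         "--recipient", recipient, "--output", output, "--encrypt", path],
        check=True,
    )
    return output

if __name__ == "__main__":
    encrypted = pgp_encrypt("outbound_batch.csv", recipient="partner@example.com")
    print("ready to transfer:", encrypted)
```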

SCP, SSL, SSH and SFTP are usually not the full answer, since they encrypt "data in transit" but do not answer the "data at rest" questions. I like certificate-based encryption solutions, but that is far away from PGP keys.

When using a key-based solution, the ugly part is automated key management with remote 3rd parties. You can use a trusted directory like the PGP Global Directory for this purpose.

When you do not have the flexibility to touch anything on the servers and the hosts, then there are gateway products:

Sterling Commerce, and Globalscape are referenced above.

You can also check Forum Systems' Presidio OpenPGP security gateway…

Tumbleweed SecureTransport is the other gateway that is used by financial services.

And there is PGP Universal Gateway.

There are several other "store and forward" / "message both parties" secure enterprise data transfer solutions; you can check IronPort (PostX), Secure Computing (CipherTrust), Zix, Voltage, Entrust and Accellion web sites for different solution sets.

Let me know if you have a specific question,
Regards,
- yinal ozkan

Monday, February 11, 2008

Information Security Statistics

"Statistics are like a bikini. What they reveal is suggestive, but what they conceal is vital."

Recently a colleague of mine recommended that I use statistics in my presentations. When I see a bunch of numbers, pie charts and percent signs on a screen, I go back to the dot-com heydays… "By 2008 we will see world domination in…"

Statistics are only useful when they are generated according to statistical reality. A complete statistics survey result must contain links to methodological details, including population coverage, sample design, sample size and several other quality indicators (for those who are wondering about the source of the FUD).

In the information security world, I see statistics like "according to xxx institute, 60% of US businesses were hacked last year, so you have to buy our product". I have been expecting this cheap FUD to be over for so long, but no, it keeps coming back.
Well, as the smart audience, you should ask: what do you mean by hacking, how did you question the respondents, what is their role, what do they do, who do they work with, etc. You will end up with 300 respondents telling you the fate of the information security industry… By the way, I usually end up believing these numbers, since the survey respondents (supposedly CISOs) had nothing else to do but answer these valuable surveys, so they form a lucky set of "60% hacked US operations".

Tuesday, February 5, 2008

Vulnerability Assessment Vendors

Q: Do you have any recommendations for the Vulnerability Assessment Vendors / Products / Services?

A: Hi ...,
My recommendations will not be neutral (since I did not evaluate all vendors), but I might help you identify the better vendors/products/services.

There are so many options; maybe that is why recommendations matter. Here is my quick and dirty list:

Option 1: You can go to a security consultancy shop and ask for vulnerability assessment service.
All accounting firms and IT consultancy shops will offer something (PwC, Deloitte & Touche, Ernst & Young, KPMG, Grant Thornton LLP, or BDO)... There are risk management companies like Protiviti who can also offer high-end assessment services.

As expected, the body shops with security practices like CSC and IBM offer vulnerability assessment services, along with the telcos (Verizon, AT&T, etc.).

And of course all security integrators offer the service

My quick qualification criteria would be:
1- See the actual resumes of the consultants who will perform your scan; buy the consultants, not the brand. If possible, interview the consultants.
2- Check the methodology documents from the consultancy shop; make sure that the structure is detailed enough for your requirements. You may also check against frameworks like OSSTMM, OWASP, etc.
3- Check previous deliverable document (sanitized versions)
4- Check references

Option 2: Using regular vulnerability scanner products in a box: You can start with the free Nessus and go all the way up with Tenable, nCircle, ISS, Foundstone (McAfee), eEye, Saint, etc.
These products require your internal resources, but you have the option to automate scheduled or event-driven scans.
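As a flavor of what automating a scheduled scan looks like, here is a minimal sketch that drives the open-source nmap CLI and saves timestamped XML output; the targets and output path are placeholders. The commercial products above expose their own schedulers and APIs for the same job.

```python
# Minimal sketch: an automated scan using the open-source nmap CLI as a stand-in
# for a commercial scanner. Targets and output path are placeholders; a scheduler
# (cron, Task Scheduler) would call this on a fixed cadence or on events.
import datetime
import subprocess

def run_scan(targets, outdir="."):
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M")
    outfile = f"{outdir}/scan-{stamp}.xml"
    subprocess.run(
        ["nmap", "-sV", "--open", "-oX", outfile, *targets],   # service detection, XML output
        check=True,
    )
    return outfile

if __name__ == "__main__":
    print(run_scan(["192.0.2.0/24"]))          # RFC 5737 example range
```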

My quick check list would be:
1- Research arm depth of the vendor: are they using public data or actual vulnerability research information?
2- Open source integration, ability to support custom signatures, 3rd party signatures
3- Integration with other enterprise tools, esp. with IPS, SIEM, GRC and help desk systems
4- Easy to use, easy to configure, support for distributed deployment
5- Ability to understand network topology (hosts behind firewalls, hosts that are not routable, or hosts that have a host firewall, etc.)
6- Non-intrusive
7- Speed – Must be fast to scan a large quantity of hosts in a limited time frame


Option 3: Find an in-the-cloud service offering from product companies or specialists like Outpost24 or Qualys (vulnerability-scanner-as-a-service or on-demand vulnerability scanning), or from managed security services providers (MSSPs). Payment Card Industry approved scanning vendors (ASVs) may give you a good start for the list of service providers:
https://www.pcisecuritystandards.org/pdfs/asv_report.html
Lately all product vendors have joined the long list of on-demand remote scanning providers.

When buying a service, make sure that you check both the option 1 and option 2 checklists; you need them both. It is also important to see how a remote scanning company will scan your internal systems; they need a device at your premises (CPE) which should not require a lot of attention, firewall configuration, etc. I have also seen that self-service providers need state-of-the-art portal interfaces to manage your scans. Test the portals before moving forward.

Option 4: Specialty scanners. So far I have talked about regular network scanners. If you are planning to scan a web application, a database or an enterprise application with XML transactions, you should check different vendors/consultants/services. The criteria are a little bit different; deep application know-how is a must. There are also a couple of pen test tools on the market (e.g. Core Impact).

I strongly recommend going over the following ppt to find out what is out there:
http://www.owasp.org/images/f/ff/AppSec2005DC-Arian_Evans_Tools-Taxonomy.ppt



Well, still no single recommendation, but please let me know if you have any specific questions.

Regards,
- yinal ozkan