Saturday, December 29, 2012

Senator Ron Wyden Speaks for Americans on Privacy and FISA

I found the following YouTube video link on EFF.org of Senator Ron Wyden (D-OR).  The Senator speaks out for American rights and privacy before the Senate.  In spite of legislative trends, Congress may not be as one-sided against individual rights as you might imagine.  Senator Wyden provides an interesting pro-privacy perspective along with some lessons from history.  Likewise, contrary Senate opinion would be interesting to hear.  Unfortunately, fully understanding lawmakers' decisions is difficult since they are predicated in part upon non-public or classified information.

Media:  Wyden Floor Statement on FISA Reauthorization Act and Proposed Amendments

Of course, the FISA extension passed by a landslide.  If you're curious to see how your state represented you, please refer to the following link: FISA Amendments Act of 2008 Five Year Extension.

@spoofzu

Friday, December 21, 2012

Night Before Christmas


For Lord of the Rings fans, following is the Night Before Christmas -- in Tengwar -- the script of the Elves.  I really wanted to find the Tengwar style used for the inscription on the inside of the Ring but had no luck.  The URL for the online Tengwar generator is at the end of the article so you can give it a try yourself.



Świątkiewicz, Michał. "Online Tengwar Transcriber." Online Tengwar Transcriber. N.p., 2002. Web. 22 Dec. 2012. <http://tengwar.art.pl/tengwar/ott/english.php>.

Thursday, December 20, 2012

Google Hacking -- Blast from the Past

Google Hacking is not new, but surprisingly few outside the security community understand what it is, its risks, and its rewards.  Whether you're sharpening your security skills or improving your ability to find information on the Internet, this article is for you.  Once your SearchFu is strong, how you choose to use your new super hero powers is up to you.

What exactly is Google Hacking?  Google hacking is not breaking into Google computers as the name might suggest.  Google hacking is a multipurpose term; it's both a noun and a verb.  As a noun, Google Hacking[1] is a groundbreaking book written by security super hero Johnny Long (Twitter, @ihackstuff).  As a verb, Google hacking is the activity of using Google advanced search commands and techniques to find the proverbial needle in the Internet haystack.  You may wonder, how could searching with Google advanced search commands possibly become a security concern?

Information Persistence
Content you place on the Internet may live for a very long time.  In fact, content predating the Internet sometimes finds its way back onto the Internet.  Case in point, old Bulletin Board System (BBS) message threads (e.g., textfiles.com) are available online for anyone.  If you mistakenly publish content to your web servers, it may be downloaded by archiving services like the Wayback Machine or turn up in Google's caches.  It's difficult to know in which deep, dark corners of the Internet your content may live and for how long.

Safety in Numbers
People often feel a misguided sense of anonymity when they consider the large number of people on the Internet.  Many feel their personal information will get lost in the billions of results.  Or, better yet, that their personal information is not interesting to anyone.  These are myths.  You will learn some simple and practical techniques to improve your search skills while raising your awareness.

Silent Reconnaissance
Google provides powerful search commands to locate information of interest.  The security concern is that there's no active defense against reconnaissance since corporate servers are not queried directly.  For instance, Apache HTTPD web logs will not contain any entries since the server is not accessed at the time of the search.

Some Advanced Google Search Techniques (see Google's advanced search reference page)
I'm not going to compress the Google Hacking book into a few paragraphs; to do so would be an injustice.  Instead, I'll provide practical examples you can apply to your business and personal life.  Following are some practical uses for Google's search commands.

Limit search scope to a single website
The following is one of the most useful commands ever.  With this command you can limit all search results to a single web site of interest.  Use the site: command, where host.something.com is the web site of interest.  Alternatively, you can drop the host and include only the target domain, like something.com.  The word foo represents your search term(s).

foo site:host.something.com

When I first cracked the cover on Johnny's book, for laughs, I tried to find all the confidential information on my current employer's web site.  I was not expecting the search to produce anything of interest.  After all, who would publish confidential information to a public company web site, right?  Bingo!  I found a lot of fluff, but there were some interesting results I shared with horrified executives.  A good rule of thumb: don't rule out the obvious.  Don't assume people think the same way you do.  What is obvious to you may not be so obvious to someone else.  Always confirm your suspicions.

Reduce noisy search results

The next useful search is a slight alteration of the preceding command: add the minus operator to the search term.  Adding a minus operator to the search term(s) excludes matching criteria from the search result set.  In the following example, I use the minus operator with the site command, but you can use it with other commands as well.  Consider the following.

-www site:company.com

The preceding query produces results excluding any references to content served from www.  It may not be immediately apparent why such a search is useful, but the query helps identify content on servers other than the primary (e.g., www).

Cached results
When Google's robots scan sites they cache the results.  You can use the info command to view cached page results.  Combining search terms with the info command produces no effect.  Consider the following example.

info:www.eff.org

When you type the preceding command you will see information like that shown in Figure 1 in your browser.

Figure 1:  Google info command to fetch cached page
In the past, there was a command to retrieve cached pages directly, cache:.  The command is no longer supported.  While the caching feature is still available, it's not as prominent as it once was.  The purpose of these changes is not entirely clear since the feature is still supported.

If you're an IT administrator, you can remove your web site or areas of your site from the gaze of Google's bots with a properly crafted robots.txt[3] file, but there is a tradeoff.  Attackers can see any entries you include -- so it's somewhat self-defeating.  Still, it's likely a better alternative than an archive of your site stuffed into Google's caches, if that bothers you.
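As a quick illustration (the paths are hypothetical), a robots.txt that asks well-behaved crawlers to skip a private area looks something like the following.  Remember, anyone -- including attackers -- can read this file, so it doubles as a map of what you consider sensitive.

User-agent: *
Disallow: /internal/
Disallow: /reports/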

Limit results to specific file types

How many times have you wanted to find only a list of PDF, XLSX, or TXT files for your searches?  Well, thanks to Google's filetype command, you can.  Consider the following.

higgs boson filetype:pdf

The preceding search will produce a search result containing only PDF documents.  The salient point of filetype is that Google knows how to index file content for popular file types, not only HTML pages.
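You can also combine commands.  For example (purely illustrative), the following limits results to PDF files on a single site.

budget filetype:pdf site:something.com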

Dark Uses of the Google Search Commands 
Attackers are creative, often combining information from Google hacking sessions with other Internet resources like password databases.  Internet web cams are a popular target.  Attackers use search techniques to find specific web cam models of interest.  With the detailed make, model, and version information, attackers find default administrative credentials in password databases available on the Internet.  Once the administrative interfaces are known and account credentials are compromised, the web cam is hijacked.  A hijacked web cam may be used to check if you're home or not, monitor discussions, and do far creepier things.  Some higher end gimbaled models can be moved or repositioned remotely via web controls -- downright creepy.  The following is a partial list of darker uses for Google search.
  • Social security numbers
  • Credit card numbers
  • Personal passwords
  • Service or application passwords
  • Vulnerable software
  • Insecure web cams & embedded devices
  • Sensitive corporate information
If you want to learn more about Google hacking you can grab a copy of Johnny's book[1].  He also has a web site, Hackers for Charity, and maintains an up-to-date database of advanced Google searches[2].  Simply cut and paste the search template commands into your web browser to see the latest results.

The reason I decided to write this article was that a search query of mine from years ago recently produced many more results than it did at the time of my original presentation.  I was perplexed.  I assumed that since Google hacking has been around for years, people must have made improvements.  I was wrong.  Time to start talking more about Google hacking.   ;o)

Tatica. "Clipart - Kung Fu." Clipart - Kung Fu. 19 July 2011. Clipart.org. 16 Dec. 2012 <http://openclipart.org/detail/150409/kung-fu-by-tatica>.

[1] Long, Johnny. "Google Hacking for Penetration Testers [Paperback]." Google Hacking for Penetration Testers: Johnny Long: 9781597491761: Amazon.com: Books. 2 Nov. 2007. Syngress. 14 Dec. 2012 <http://www.amazon.com/Google-Hacking-Penetration-Testers-Johnny/dp/1597491764>.

[2] Long, Johnny. "GHDB « Hackers For Charity." GHDB « Hackers For Charity. Hackers for Charity. 16 Dec. 2012 <http://www.hackersforcharity.org/ghdb/>.  (Note: there are also other Google hacking DBs on the Internet)

[3] "Block or remove pages using a robots.txt file." Google.com. 16 Dec. 2012. Google. 20 Dec. 2012 <http://support.google.com/webmasters/bin/answer.py?hl=en&answer=156449>.


Thursday, December 13, 2012

Common Excuses for Not Fixing Security Vulnerabilities

I received a tweet today from Katie Moussouris (Twitter @k8em0), Sr. Strategist at Microsoft, about their BlueHat security event.  Don't worry about attending, it's an invitation-only security event -- I'm not invited either.  And I'm not so sure they want a red Oracle sheep in the crowd of blue Microsoft sheep anyway.  The good news is, like many conferences, they post some of their materials online.

The content for this year's BlueHat is not posted yet, but you can view previous years' materials.  I noticed a session by Jeremiah Grossman (Twitter @jeremiahg), "A Statistical Journey through the Web Application Security Landscape"[1].  Jeremiah is the founder and CTO of WhiteHat Security.  There are some points in his session that struck home with me, so I thought I should share them.  The session is from last year, but it's still painfully relevant.

Some background worth mentioning: WhiteHat Security provides services and software to help businesses identify and resolve their application vulnerabilities.  WhiteHat has learned over the years that to provide customers the best value, they must do more than report vulnerabilities; they must encourage customers to resolve their vulnerabilities.  Companies that don't remediate their vulnerabilities usually cancel their subscriptions since it's like paying for the same security reports year over year.  With the preceding in mind, WhiteHat asks its customers, why don't you fix your vulnerabilities?  Following are some of the top answers they receive.

Why don't you fix your vulnerabilities?

  1. Nobody at the organization understands or is responsible for maintaining the code.
  2. The development group does not understand or respect the vulnerability.
  3. Lack of budget to fix issues (vulnerabilities).
  4. Affected code is owned by an unresponsive 3rd party vendor.
  5. Web site will be replaced or decommissioned soon.  Jeremiah noted, some of these sites have been on this status for over a year.
  6. Risk of exploitation is accepted.
  7. Solution conflicts with business use cases (aka, the vulnerability is a feature)
  8. Compliance does not require fixing the issue.
  9. Feature enhancements are prioritized ahead of security fixes.  Jeremiah noted, this is one of the more fundamental challenges in software security.

At first I thought I missed an excuse, but I reviewed the presentation again and only counted 9, not 10.  Anyway, I don't see these excuses going away anytime soon.  I have listened to them many times myself.  Jeremiah also mentions a 2010 survey noting a difference between where consumers perceive security problems and where they focus their IT resources and spending.  The point was that organizations are not allocating resources and spending where they should to address their security concerns.  Epic fail, sigh.  I encourage anyone interested to check out some of the BlueHat presentations as well as Jeremiah's presentation.

Rgyle. "Clipart - Hat Outline." Clipart - Hat Outline. 13 Jan. 2008. Clipart.org. 14 Dec. 2012 <http://openclipart.org/detail/10458/hat-outline-by-rygle-10458>.

[1] Grossman, Jeremiah. "A Statistical Journey through the Web Application Security Landscape, BlueHat Security Briefings: Fall 2011 Sessions, " Channel 9. 3 Nov. 2011. Microsoft. 13 Dec. 2012 <http://bit.ly/TWZS1m>. (url shortened)

Tuesday, December 11, 2012

Just In Time Vendor Evaluations for Security Practitioners



[Updated December 11, 2012]

I accidentally deleted the post but was lucky enough to recover it.

[Original post]

Do you ever get put on the spot with Just in Time (JIT) vendor security evaluations?  Here is the scenario: you're the security lead for your group or company.  The phone rings and it's 4pm on Friday.  A stressed out project manager says, "We need a vendor security evaluation for Company A and Company B."  Of course, you were not included in any product meetings and demonstrations.  You know next to nothing about the companies or products and you're now on the critical path -- welcome to security.  What will you do?

Security theories and best practices abound, and then there's the ugly, practical stuff we really do in the field.  Consider for a moment a medical analogy: a critically traumatized patient arrives in your unit.  Do you avoid them until you can wash your hands?  No, you jump in and save a life!  It's really the same with security.  Often we are called to perform security triage under less than ideal circumstances.  If you're engaged late with a Mission Impossible security review, it's best to set clear expectations about what you can reasonably accomplish.  Realigning expectations is not fun, but it's better to do this up front in the project rather than surprise everyone at the end.

Many organizations have legal reviews and other qualification procedures, but assuming all that is the concern of others, let's concentrate on security.  We agreed the evaluations will not be comprehensive but can still be beneficial.  With that in mind, roll up your sleeves; there are some things you can do.  One of the key things I like to know about an organization is the value or emphasis it places on security.  How do you gauge an organization's interest in security?  It's not easy; after all, security is a sensitive subject for most organizations and many are not forthcoming about their security practices.

Often the best insight into an organization's dedication to security is the information they communicate without directly communicating it.  Confused?  I'll explain: if you know the key questions to ask and signs to watch for, you can learn a lot about an organization's willingness to invest in security.  Almost every organization says security is important.  But we need to select the organizations that put what they say into practice and invest in security.  A company's willingness and dedication to security is oftentimes at least as valuable to understand as the security maturity of their products.  Companies serious about security learn from their mistakes and drive their own security program improvements.  Companies not dedicated to security are doomed to fail and driven to improve largely by angry customers.  Not a fun experience -- especially if you're the angry customer.

The point of this thought exercise is to assess the organization's interest or dedication to security, not necessarily to assess their product offerings.  Of course, a product security assessment is absolutely essential for a comprehensive review.  For the review, I'm assuming you have some access to key personnel at each prospective organization to ask at least a few questions.  Now for the fun stuff.

Company Security Leadership
Most publicly traded companies publish a leadership web page describing corporate executive profiles.  Does the leadership page include a security executive?  Is there a security executive listed, like a Chief Security Officer, Chief Information Security Officer, or VP or Director of Security?  If so, it's a good sign the security function is well-leveled within the organization.  Why is leveling important?  Security is a tough job and it almost always comes directly into conflict with production schedules.  Leveling security properly within organizations helps drive plans and priorities favorable to the security mission.  If security is not represented on the leadership page, then the security responsibility, if it exists, is included within the responsibilities of other executives.  If no security executive position is listed it's not necessarily a cause for concern, but if a position is shown it's definitely a positive.

How Many Individuals are Dedicated Full-Time to Security?
Ask how many individuals are dedicated full-time to security.  An integer less than one is a bad response.  What is a good response?  It really depends on the type of organization, number of products, and many other factors.  One vendor I interviewed responded with zero.  Their reasoning was that security is everyone's job and the function is divided among staff.  A reasonable answer, but hardly a confidence builder given the size and scope of the product offering under consideration at the time.  Needless to say, the vendor didn't win my vote.  Don't make any assumptions about the people securing the software -- ask!  Don't be surprised if the organization pushes back and does not share all the information you request.  If they don't share specifics, focus more on the related details.  For instance, if they will not share information about staffing, focus questions on what they do.  If they will not discuss the location and composition of the security team, ask instead about the engineering team (or see if it's in their financial reports).

If you do receive some information around staffing levels, consider the scope of products you plan to purchase along with the size of the organization's security program.  The level of security program investment should be commensurate with the software's operational capabilities and risks.  If it's an enterprise or cloud offering, you should expect some significant investments in security.  If purchasing a non-critical application used by a few individuals behind the corporate firewall, then perhaps less investment in security is appropriate.

The Software Life Cycle
There are many ways to structure a security program depending upon the assets to defend.  In the development of software, integrating security into the software life cycle is important.  The software life cycle is a multi-phase engineering model organizations follow for creating and delivering software.  The exact phases or steps depend upon the life cycle model in use, but most start with concept and requirements gathering, progress into design, development, and software testing, and finish with delivery and deployment.  Consider asking which operational and security activities occur at each stage of the life cycle.  You're looking for a continuum of security activities throughout the life cycle.

During your review you will receive all types of responses, but consider the following.  In early product development phases, security must be involved to influence product architecture.  Security is the foundation, and you can't very well build a house first and then go back and pour the foundation -- although oddly enough this happens all the time in security.  Moving into design and development, what measures are in place to ensure engineers write secure code?  Does the company provide training and tools to help automate security reviews?  Can the organization certify that anyone updating code in source control has security training?  What types of security tests are executed and when?  Are all code changes or improvements tested completely before being moved into production?  Who has authority to move changes into production?  Do coders have direct access to deploy their code into production?  What documentation artifacts are delivered in each development phase?  Are there any automated security tests, and if so, what are they?  Answers to all or some of these questions will help you understand more about the organization and its security practices.

How is the Security Program Organized?
Not all organizations are structured the same way.  However, positions such as the following are typical across industry.

Security Officers/Directors/Managers
Program leadership.  Responsible for the overall security program, establishing policies, and tracking program effectiveness.

Security Architects
Ensure projects are designed securely from the start.  Create security requirements.  Create software security policies.

Security Engineers
Program software security features.  Work with engineers to ensure software changes comply with security policies.  Perform security reviews.  Maintain specialized security tools.  Implement cryptography functions.

Security Testers
Help develop and execute non-functional software security tests.  Oftentimes architects will be involved in testing to ensure test cases support requirements and policies.  Security tests are important because without them it's impossible to understand product security posture.

Security Compliance
Security compliance teams ensure changes adhere to security policies.  For example, if a new web server is deployed in the production environment and not listed in an approved software specifications document, compliance may follow up with management or shut down the non-compliant server.

Security Operations
These people watch network traffic going over the wire.  They are usually the first to see bad guys knocking on the door.

Security Forensics
Forensics teams preserve evidence.  Often this includes imaging desktops and servers for trial.  Computer data used in a court of law must be handled properly to preserve evidence and ensure it's free from tampering.

Details about the organization's security functions are important to ensure the organization has security visibility across the entire software life cycle.  For instance, simply testing for vulnerabilities after the software is complete is generally not acceptable since vulnerabilities are costly to remediate once the software has been fully developed.  You're looking for a continuum of security activity across the development and operational processes.

On-Shore vs. Off-shore Staff
Is the organization's security staff onshore, offshore, or a mix of both?  Don't assume staff are onshore.  This may or may not be a concern for you, but some organizations are highly concerned about offshore security staff.  You may consider asking which responsibilities are charged to each group.

Security Patch Programs
Does the organization provide regular product security patches?  Do they issue emergency fixes as necessary?  How are patches communicated to customers?  You may consider checking the support area of the organization's web site.  You may find some patching information, security communications, or public policy information.

If you're purchasing a cloud solution, are security vulnerabilities quietly fixed and pushed, or are they communicated?  Are mitigating controls like Web Application Firewalls (WAFs) available?  The industry loves to hate WAFs.  WAFs are not a perfect technology, but engineering can often take significant time to code, test, and deliver a solution.  In the interim, your servers are wide open with a "kick me" sign.  The sweet spot for WAFs is that they deliver temporary protection very quickly while your teams deploy a more fully baked solution.

Securities and Exchange Commission Filings
US companies that are traded publicly must file documents with the commission, like yearly 10-K statements or quarterly 10-Q statements.  Sometimes you can find some security bugaboos in these statements or worthwhile information about the company's concerns, engineering practices, onshore vs. offshore development, locations/practices for data storage, etc.

Social Media
Social media is another organizational barometer.  Sites like Twitter, LinkedIn, and Glassdoor can provide some insight into what current and past employees have to say about the company.  I would not place too much emphasis on social media, but it's worth a look.  Keep in mind you may not be reviewing objective or fair commentary.  It's not likely you will find information directly related to the organization's security program, but you can definitely find interesting details about engineering or IT practices.  If the engineers are pushed hard and frustrated just grinding out basic software features, it's not likely the organization will invest time in security polishing.  You might also search for leadership blogs, but it's unlikely these will yield much since this crowd is well versed in what they should or should not say.

Other Non-Functional Requirements
Hunt for any information you can find on product performance.  Is there significant positive or negative chatter in the support forums or news groups on performance issues?  Product performance is an interesting indicator since it's difficult to evaluate until the solution is fully deployed.  The point being, if an organization is not investing in performance, it's likely they are not making adequate security investments either.  Like performance, security is difficult to evaluate without tools and expertise.

Audit Results
Does the organization have any recent IT audit results they are willing to share with you?  SAS 70 reports are common and typically provide some IT information.  Rather than focus on the result of the audit, an obvious thing to do, consider reviewing other information around IT organization and processes.  Often organizational clues can help piece together a more comprehensive operational picture along with other data you collect elsewhere.  Be careful about placing too much trust in audit conclusions.  Some audits allow those under review to craft their own control objectives, which diminishes their effectiveness.  Independent audits are a good tool, but form your own opinion based upon several sources of information.  Trust but verify is a good tenet of security.

Conclusion
My article is not a comprehensive set of techniques to profile organizations by any measure.  Instead, my intent is to increase your confidence around an organization's appetite for security where perhaps you had no confidence previously.  Ultimately, if you don't have what you need for a proper security review, you will need to stop the review and escalate.  But if I can help you reach some comfort in your decision making process, then I will have saved you from stepping in front of project steam rollers and freight trains, which is far easier on your nerves.  :o)

Paper with Pen. Digital image. Openclipart.org. http://openclipart.org/, 12 Feb. 2012. Web. 7 Nov. 2012. <http://openclipart.org/people/jhnri4/Exquisite_kwrite.svg>.


Sunday, December 2, 2012

Security Old School vs. Security New School

[1]
As difficult as technology change is to anticipate, it's even more difficult to determine its impact on our security and privacy.  Our security choices are predicated upon knowledge and assumptions that may have been appropriate at one time but no longer are.  Attackers feed upon our complacency and our misguided sense of trust.  This is why people are taken by surprise when attacks occur.

Before I get started, I feel some explanation is worthwhile about my use of Security Old School vs. Security New School.  Old School is simply our security attitudes, thoughts, or actions around a topic at some point in the past.  New School is how we should think and act about the same topic given changes in the world and industry.

Following are a few points I would like to lightly touch upon to educate and raise awareness of readers.  It's not an exhaustive list but some points I've considered at the time of writing.

Security Whack-a-Mole
Old School, Organizations can be destroyed or rendered ineffective by targeted attacks
New School, Leaderless organizations are difficult to control or destroy

Combating terrorist organizations is difficult enough for nation states; consider the years of searching for Osama Bin Laden and the resources required.  Now consider an organization like Anonymous.  Leaders emerge from the organization from time to time, rise to prominence, execute their agendas, and fade back into the background.  It's hard to imagine how nation states will be successful against Anonymous.  Anonymous is essentially an ideology.  Fighting an ideology requires a different type of program and techniques.  It was difficult enough to find one man, let alone thousands of individuals.  This is why governments abhor anonymity; it's impossible to fight what cannot be seen.

Vulnerability Reports
Old School, Researchers reporting vulnerabilities to organizations for industry recognition
New School, Researchers sell vulnerabilities to highest bidder

In the past, independent security researchers reported vulnerabilities to companies out of professional courtesy.  Being the researcher who finds a 0-day vulnerability in a package deployed on everyone's computer creates fear; fear commands attention and, more importantly, recognition and respect.  The objective of reporting was to earn some individual credit, garner a fan club, and move on to better paying or more interesting jobs.

Increasingly, gray hats are getting a little dirtier and becoming more militant and mercenary[2].  Vulnerabilities are a traded commodity.  If you possess the commodity then you can conduct business.  A single unpublished vulnerability resulting in a complete host compromise may fetch as much as $100,000 USD.  Find a vulnerability or two and you can pay off the mortgage on your home.  A tempting proposition for talented gray hats living in impoverished countries: deliver pizza or sell vulnerabilities for lots of money?  Not much of a choice.  Resting on the good nature or professional courtesy of these individuals is far too much to ask, in my opinion.

Vulnerability bounty programs are an essential tool to motivate gray hats to make the best choice for organizations.  There's no guarantee organizations will not be double-crossed.  For instance, a researcher sells a vulnerability to a bounty program and also sells it on the black market, doubling their money.  The one real guarantee a vulnerability bounty program provides is that, in addition to Internet baddies, organizations will also be informed of their vulnerabilities.  It might feel like borderline extortion, but this is the world we live in today.  The best way for organizations to avoid such dilemmas is to ensure security investments are commensurate with the level of risk.  Don't let pride or arrogance stand in the way; you either play by the new rules or you're not included.

Vulnerability buyers may be anyone from nation states to corporations and well-funded individuals.  Why hire and manage a team of security ninjas when you can amass a battery of vulnerabilities to launch like scuds at your command?  Cash is king, as the saying goes.  The security ecosystem is changing like global warming.

Cyber Weapons
Old School, Your enemies drop bombs causing terror; people are hurt or die.
New School, Your enemies tamper with critical infrastructure without terror; people are hurt or die.

What is a cyber weapon?  A cyber weapon is malware engineered for military purposes.  When a bomb falls from the sky and explodes on its target, it's terrifying and lots of people can be hurt or killed.  A cyber weapon is different.  No explosive impacts, no direction you can run, and quite likely no terror.  But the effects are very real.  A city may find itself without critical infrastructure like electric power or water, and lots of people may be hurt or killed.  Imagine a hospital in the summer with no electricity and air conditioning.  Attacks against critical infrastructure have been around for many years.  My favorite was a disgruntled worker releasing millions of gallons of raw sewage on the Australian countryside[4].

Truth in Numbers
Old School, A product or service is good if it has lots of "likes" and positive reviews.
New School, A product or service is good if people you know personally like it.

It's more or less common sense that large numbers of "likes" or positive reviews for a product or service mean it's good, right?  Well, it's not exactly a guarantee anymore.  Economic incentives to fake "likes" and positive reviews are powerful motivators.  Couple economic incentives with the low cost of labor in many nations and what do you have -- Internet Water Armies.  Internet Water Armies[3] are large virtual labor forces used to artificially inflate like counts and write positive reviews for products and services.  Water armies are used to influence crowd behaviors like purchase decisions and public opinion.  The best way for individuals to combat opinion manipulation is improved fact checking.  Check with friends you trust, check multiple sources, and look at whether the reviews are good quality (e.g., grammar, misspellings, etc.).  The amount of fact checking should be proportional to the value of the product you're purchasing or the decision you're making.  Really expensive decisions require careful checking, whereas inexpensive ones require less.

Service Anonymity in the Cloud
Old School, Internet services can be traced back easily to host providers
New School, Internet services deployed into cloud infrastructure are difficult to trace to host providers

The Pirate Bay (TPB) provides file sharing technology infrastructure to individuals throughout the world.  TPB servers do not host users' files, but their infrastructure helps users locate and share files with network peers.  Often the files shared by users of TPB are commercial software programs, books, movies, and songs covered by copyrights.

TPB servers have been raided many times.  Like the evolutionary process of mutating genes, TPB has evolved from locally hosted services into globally hosted cloud services.  TPB services are virtualized as disk images for quick deployment and encrypt data in transit as well as data at rest.  Encryption makes shutting down TPB very difficult since it's not easy for ISPs to know they're hosting TPB services.  Load balancers as well as services are virtualized and deployed in many locations around the globe.  TPB's encrypted cloud deployment paradigm is likely to be adopted by other organizations placing a premium on operational and user anonymity.  We will see more of this innovative architecture in the future, I'm sure -- Tor exit relays, for example; there are lots of possibilities.

Incidentally, even if you feel you're completely anonymous and untraceable, I don't recommend downloading from TPB or similar services.  I have nothing against TPB or copyright holders.  My concerns are limited to security and privacy.  Aside from the questionable legalities, such downloads are rumored malware vectors.  Perhaps it's a rumor started by copyright holders.  Nevertheless, I don't recommend it.

News and Information
Old School, News is reported by journalists and distributed via TV, paper, or electronic media outlets
New School, Everyone with a smart phone is a journalist, reporting is blistering fast and raw

Social media is changing the world.  When Hurricane Sandy struck the East Coast, I knew everything about it before I saw my first newscast on television.  The power of Twitter really impacted me when I saw tweets of a collapsed crane in New York.  I received the tweets an entire day before I saw it on the news.  Likewise, when the conflict in Gaza heated up, many people on the ground armed with smart phones posted news and photos.  Some of the pictures were shocking, showing the good, bad, and ugly of war, raw and unedited, right on our smart phones.  Of course, some individuals confuse fact and fantasy or conflate facts in their reports, but it's no more or less worrisome to me than the highly polished and expertly crafted nightly news.

Anti-Virus
Old School, AV protects your computer from miscreants
New School, Be afraid...be very afraid

Anti-virus (AV) is definitely helpful, but it's not the panacea it once was.  The weakness of AV is its signature-based technology: if you don't have updated signatures, or if no signature exists, then you are vulnerable to exploitation.  AV can also create weird performance problems that are sometimes difficult for novices to identify, especially video gamers.  AV is even more helpful when combined with a personal firewall.

A few quick tips for personal safety: use firewalls to block inbound network traffic.  Many AV programs come with built-in firewall controls; check your documentation.  Shut down any operating system components or services you don't use.  Uninstall any unnecessary components you no longer use.  Don't run as root on *NIX or with administrator privileges on Windows.  If you're really serious about security, you might consider full disk encryption.

Cloud Security
Old School, Data was stored in the data center of the company providing the service
New School, Data is stored "securely" in the cloud

To me, the Cloud is like All Natural or Organic.  If you're telling me your cloud is secure, you might as well be selling me saw palmetto or St. John's Wort for my health.  Most claim their solutions are secure, but evaluating them is difficult and time consuming for those of us with a technical background.  For those with little technical background, you're forced to take the claims of cloud providers at face value.

Even if your data is secure, it may be stored in offshore data centers without your knowledge.  How would you feel if your data were stored in China?  If you're a US government agency or government contractor, you may care.  Likewise, it's getting difficult to find good applications that are not cloud enabled.  Users are almost forced to put data into the cloud.  Often features like syncing between desktop, mobile, and tablet require cloud support.  Cloud security is not just a technology problem; it's a security and privacy policy problem.  The industry needs better rules over the handling, use, distribution, and disclosure of personal data.

Abnormally Large Energy Bills
Old School, Increased energy use is an indicator of residential marijuana farm
New School, Increased energy use caused by residential Bitcoin mining

I included this one for fun; it's a classic case of mistaken assumptions.  I was watching a DEFCON 19 video in which Skunkworks describes a home profiled by law enforcement and raided due to high energy consumption.  A little background: law enforcement uses electric energy consumption profiles as an indicator of marijuana growing.  Stealthy indoor marijuana growers use energy hungry lighting and hydroponics for growing plants.  During the police raid it was discovered the homeowner was not farming marijuana but instead mining Bitcoin.  Generating Bitcoin is a computationally intensive task.  The homeowner deployed a significant number of computers in his home, likely SLI GPU style gaming rigs, increasing his energy usage over other residences -- and thus becoming a target for power profiling by law enforcement agencies.

Our society is becoming more and more Internet enabled every day.  The implications of our technological capabilities and connectedness often evade our notice.  As a colleague of mine would say, the genie is out of the bottle and the genie likes to be free.  There's no going back to the way things were before.  The world is forever changed.  Our information systems are growing explosively across national boundaries, and they are sure to surprise us on occasion.

[1]  Pisa. Digital image. Openclipart.org. Openclipart.org, 2 Nov. 2006. Web. 1 Dec. 2012. <http://openclipart.org/detail/1186/leaning-tower-of-pisa-by-johnny_automatic>.

[2] Chickowski, Ericka. "How The Sale Of Vulnerabilities Will Change In 2013." Http://www.darkreading.com/. Dark Reading, 30 Nov. 2012. Web. 02 Dec. 2012. <http://www.darkreading.com/vulnerability-management/167901026/security/news/240142947/how-the-sale-of-vulnerabilities-will-change-in-2013.html>.

[3] "Internet Water Army." Wikipedia. Wikimedia Foundation, 20 Nov. 2012. Web. 02 Dec. 2012. <http://en.wikipedia.org/wiki/Internet_Water_Army>.

[4] Danchev, Dancho. "Dancho Danchev's Blog - Mind Streams of Information Security Knowledge." : SCADA Security Incidents and Critical Infrastructure Insecurities. Blog, 5 Oct. 2006. Web. 02 Dec. 2012. <http://ddanchev.blogspot.com/2006/10/scada-security-incidents-and-critical.html>.


Thursday, November 15, 2012

Measuring Internet Connection Throughput

[1]
[Updated Post Friday December 10, 2012]

I thought it would be helpful to share my results with everyone.  Figure 2 is a chart showing hourly trending of my Internet speed over the last 2 weeks.

Figure 2: Hourly Trend of Broadband Speed
My Internet connection to my ISP is 50Mbps.  The chart shows I'm receiving slightly less than half my expected bandwidth.  I changed the chart around to produce bandwidth results in megabits per second (Mbps).  Megabits is how my ISP advertises their connections, so it's less error prone for me to think of my results in terms of Mbps.  A little Excel magic yields the chart in Figure 2.  Now that measurements and charting are in good shape, I'm thinking about my data and results.  I have come to the conclusion it's best if I rerun the tests connected directly to my ISP's access point.
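If you want to reproduce the conversion outside of Excel, the arithmetic is just bits divided by seconds.  The small helper below is purely illustrative and not part of the posted tool; it converts a logged payload size (bytes) and elapsed time (milliseconds) to Mbps.

public class MbpsConverter {
    // Illustrative helper only -- the posted tool logs raw values and the
    // conversion to Mbps was done in Excel for the charts.
    static double toMbps(long payloadBytes, long elapsedMillis) {
        double bits = payloadBytes * 8.0;        // bytes to bits
        double seconds = elapsedMillis / 1000.0; // milliseconds to seconds
        return bits / seconds / 1000000.0;       // bits per second to megabits per second
    }

    public static void main(String[] args) {
        // Example row from the log: 11,536,384 bytes in 6355 ms is roughly 14.5 Mbps.
        System.out.println(toMbps(11536384L, 6355L));
    }
}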

In my home network, my *NIX host running the tests connects to an HP ProCurve 1700-8 managed switch, then to a Netgear Powerline AV500 adapter running as a bridge, then to my Asus RT-N56U router/firewall, and finally to the ISP's access point.  It seems likely to me the WAN connection should be the most rate limited of all the gear.  However, to eliminate all possibility of error, I'm going to connect the *NIX host directly to the ISP access point and rerun the tests.  Sigh.

[Updated Post Friday November 23, 2012]

I made a few improvements I thought I would share.  The original program code interleaved series data in rows.  There's nothing wrong with the data, but it's hard to graph in Excel.  I improved the program by moving series data to columns.  Once I get all the data into Excel, I delete all the columns except: DATE, TIME, ELAPSED-sjc01-10, ELAPSED-sjc01-100, ELAPSED-wdc01-10, ELAPSED-dal01-10.  To graph, I select all the data including column headings and choose the graph I want.  Following are the improved files.

Binary - ispspeedcheck2.jar
Code - ISPSpeedCheck2.zip

If you run the tool, following are a few lines of sample data you can expect to see in the new CSV file.

DATE,TIME,sjc01-10,PAYLOADSZ-sjc01-10,ELAPSED-sjc01-10,sjc01-100,PAYLOADSZ-sjc01-100,ELAPSED-sjc01-100,wdc01-10,PAYLOADSZ-wdc01-10,ELAPSED-wdc01-10,dal01-10,PAYLOADSZ-dal01-10,ELAPSED-dal01-10
11/20/2012,7:14:09 PM,sjc01-10,11536384,8145,sjc01-100,104874307,51949,wdc01-10,11536384,6383,dal01-10,11536384,5972
11/20/2012,8:15:14 PM,sjc01-10,11536384,5295,sjc01-100,104874307,46618,wdc01-10,11536384,6157,dal01-10,11536384,5537
11/20/2012,9:16:25 PM,sjc01-10,11536384,8588,sjc01-100,104874307,49301,wdc01-10,11536384,6273,dal01-10,11536384,5595

The following is a chart I generated with a few hours of data.

Updated Figure 1: Broadband Bandwidth Hourly Trend
Keep in mind, if you're going to run the new jar you should update your command line like the following.
java -Dfile=./ispspeedtest2.csv -jar ./ispspeedcheck2.jar

I'm interested to see some real data.  Happy Thanksgiving!  --Milton

[Original Post]

Many of us at one time or another wonder if we are really receiving our full Internet bandwidth.  There are a number of ways we can get this information.  I will cover a few of them as well as provide a small Java executable along with source code so you can experiment on your own.

If you're only looking for a quick, easy test of your Internet connection, you don't need to write a program.  Open your browser to speedtest.net and give them a try.  I like speedtest.net and it's easy to use.  In my case, I wanted an hour by hour trend of my Internet connection throughput.  Getting trends with speedtest.net would be bothersome since I would have to run the test manually once each hour.  Great tool, but you need a little something more if you're interested in trends.  This monkey has better buttons to push so I decided on a Java project.  Moving on, following are some of the questions and points I considered when thinking about my broadband bandwidth.

Am I receiving my full Internet bandwidth?
Broadband is a prickly business.  On the one hand, Internet Service Providers (ISPs) advertise and sell their connections by bandwidth.  On the other hand, they don't make any guarantees about the bandwidth they provide.  As a consumer, what am I really receiving?  The impact to my wallet is guaranteed, that happens regularly each month, but my bandwidth is variable.  To be an ISP is a great line of business.  ISPs can command a constant monthly price for a product they may deliver in greater or lesser quantity -- at their sole discretion.  Imagine a grocery store operating on the same principle.  You pull up to the checkout counter with a full shopping cart of groceries or half a cart and it makes no difference.  It's all the same price.  Ok, I feel a rant coming on so I'll stop now, but you get my point and the source of my curiosity.

Paying for additional bandwidth provides more bandwidth but how much more?
I'm not sure how to answer this question at the moment, at least without changing my service.  Perhaps data on speedtest.net or similar sites could help.  For instance, if my connection is 50Mb/sec download and I'm receiving 25Mb/sec, will upgrading to 128Mb/sec produce half as much throughput, or 64Mb/sec on average?  Likewise, if I cut my bandwidth to 25Mb/sec do I really get half as much throughput, or is it only 20% of the advertised capacity for slower speeds?  My assumption is that higher speed connections have more wiggle room for throttling whereas lower speed connections are closer to the advertised throughput.  I think these questions are more suitable for an ISP review as opposed to a broadband tool measurement discussion, but it's still something I'm interested to know.

Is there more Internet bandwidth available during off-peak hours?
It seems common sense that during early morning or late evening hours more bandwidth is available since fewer people in your region are surfing the Internet.  Of course, does it really matter if there is Internet bandwidth when nobody wants to use it?  If a tree falls in the woods... ah, never mind.  Ok, unless you're a torrent freak or schedule off peak downloads there's probably nothing practical in this line of thinking, but I'm curious to find out.

Complexities of bandwidth measurement
Measuring your throughput is easy.  Understanding the reasons for the numbers is more complicated.  There are many factors that influence overall results.  I don't claim to be a network expert, but following are some areas that come to mind.

WAN Speed - Yay.  Generally, the broadband speed rating is between you and your ISP.  There is absolutely no guarantee to other servers on the Internet.  So at best, you will only receive your advertised throughput to your ISP.  Traffic anywhere else may take the slow boat.

LAN Speed - Are the kids playing games or downloading content over the same connection?  If so, this will impact any measurements.  The same goes for any IP enabled devices that share the LAN: television sets, smart phones, almost anything.  Even thermostats are IP enabled (incidentally, NEST is really cool - Google it).  You could have significant traffic on your LAN interfering with your results.

WIFI - Are you working over WIFI?  Limited WIFI bandwidth or poor signal quality will negatively impact your results.  Better to be connected directly via Ethernet.

Network Hardware - All boxes with blinky lights are not created equal.  I've tried many brands and settled on a highly rated ASUS RT-N56U router recommended on SmallNetBuilder.  One word -- awesome!  The value of good network hardware cannot be overstated.  Don't assume a popular brand is the best.  In the age of the Internet, lots of people can make the same mistakes.  Don't follow the sheeple.  Do your own homework.

Traffic Shaping - ISPs shape traffic all the time.  Some have been accused of throttling peer to peer traffic like BitTorrent.  It's almost guaranteed that if you're running speed tests against popular sites like speedtest.net, your ISP optimizes that traffic.  Meaning, your throughput to speedtest.net is great but perhaps not consistent across other web sites.  On the surface, traffic shaping sounds a little dubious, but the practice has its place.  Even in the home, most consumer devices support some amount of traffic shaping.  A really comprehensive test would probably test traffic over several different protocols.  A test over different protocols would help work around traffic shaping.  I'm not going to do this just yet, but it might produce a better result.  The Electronic Frontier Foundation has some interesting resources on shaping[2].

Broadband history
Taking a speedtest.net reading and tossing it over the fence to your ISP is not very convincing if you have broadband throughput concerns.  Too many factors, as mentioned previously, can influence a short speedtest run and produce misleading results.  On the other hand, approaching your ISP with a 2 month history of hour-by-hour Internet bandwidth trend data is compelling evidence.  Even if you don't have a case to prove, more comprehensive information is always desirable.  What is the average WAN usage over time by hour of day?  The mean, median, and standard deviation of download times for different size payloads?  Are 100Mb payload download times one order of magnitude greater than 10Mb, or different?
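If you collect that kind of history, the summary statistics are straightforward to compute.  The snippet below is a simple illustration, not part of the posted tool, that computes the mean, median, and standard deviation over a handful of made-up ELAPSED values in milliseconds.

import java.util.Arrays;

public class ElapsedStats {
    public static void main(String[] args) {
        // Hypothetical download times in milliseconds pulled from a CSV log.
        long[] elapsed = {6355, 7625, 7210, 6469, 7121, 6979};
        Arrays.sort(elapsed);
        double sum = 0;
        for (long e : elapsed) sum += e;
        double mean = sum / elapsed.length;
        // Median: middle value, or the average of the two middle values for an even count.
        double median = (elapsed[elapsed.length / 2] + elapsed[(elapsed.length - 1) / 2]) / 2.0;
        double variance = 0;
        for (long e : elapsed) variance += (e - mean) * (e - mean);
        double stddev = Math.sqrt(variance / elapsed.length);
        System.out.println("mean=" + mean + " median=" + median + " stddev=" + stddev);
    }
}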

My environment
I have several computer systems at the house.  In my home we have at least two computers per person.  We also have a cadre of WIFI enabled devices like Kindle Fire, iPhones, IP printer, VOIP devices, flatscreen TV, etc.  We use several operating systems: OSX for business, Ubuntu Linux for special tasks, and Windows for gaming.  I develop on my OSX system using Eclipse and deploy longer running tests on my Ubuntu server which works tirelessly nights and weekends without interruption.

The experiment
I thought about adding lots of bells and whistles to the program code.  Adding a user interface, fancy logging, etc.  In the end, I chose to keep the code very simple.  The output of the program is a simple CSV file that can be loaded into Microsoft Excel.  Following are a few lines of output from the CSV log file.
milton@sparky:~/bin$ tail -f ./ispspeedtest.csv
DATE,TIME,SERVER,PAYLOADSIZE,ELAPSED
11/14/2012,8:31:13 PM,sjc01-10,11536384,6355
11/14/2012,8:32:15 PM,sjc01-100,104874307,61625
11/14/2012,8:32:23 PM,wdc01-10,11536384,7625
11/14/2012,8:32:31 PM,dal01-10,11536384,7210
11/14/2012,9:32:37 PM,sjc01-10,11536384,6469
11/14/2012,9:33:37 PM,sjc01-100,104874307,60103
11/14/2012,9:33:45 PM,wdc01-10,11536384,7121
11/14/2012,9:33:52 PM,dal01-10,11536384,6979

The first line is the command line.  I use the Linux tail command to show the log in real time in a command shell window.  Following the command line are the column headings, then the date and time the message was logged, the server code the payload was downloaded from, the payload size in bytes, and the download time in milliseconds.  For instance, in the first line of data the payload is downloaded from sjc01-10 (a code for a speedtest server in the SF Bay Area).  The size of the payload was about 11.5MB and it took 6.3 seconds to download.  The longer the program runs, the more data is produced.  The tests are single threaded.  Once you have collected enough data you can stop the program (e.g., CTRL-C), or the program will stop on its own after 1000 hours with the default settings.
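To give a sense of what the tool does under the hood, here is a minimal sketch of one timed download.  It is not the posted ispspeedcheck code and the payload URL is a placeholder, but it shows the core idea: read the payload, count the bytes, and record the elapsed milliseconds.

import java.io.InputStream;
import java.net.URL;

public class TimedDownload {
    public static void main(String[] args) throws Exception {
        // Placeholder payload URL -- substitute a test file hosted near you.
        URL url = new URL("http://speedtest.example.com/10MB.zip");
        byte[] buffer = new byte[8192];
        long bytes = 0;
        long start = System.currentTimeMillis();
        InputStream in = url.openStream();
        try {
            int read;
            while ((read = in.read(buffer)) != -1) {
                bytes += read; // count payload bytes as they arrive
            }
        } finally {
            in.close();
        }
        long elapsed = System.currentTimeMillis() - start; // download time in ms
        System.out.println("PAYLOADSIZE=" + bytes + ",ELAPSED=" + elapsed);
    }
}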

Figure 1 shows a couple of days of hourly broadband trend data.  The top red line is the hourly data for a 100Mb zip file payload.  The blue line on the bottom of the figure is the hourly data for the 10Mb zip file payload.
Figure 1: Broadband bandwidth hourly trend
I realize the chart is small and not easy to read.  I provided it to give you an idea of what a finished chart may look like.  Unlike the stock market or your 401k, a line trending down is good and means your downloads are taking less time.

Source code, binaries, and limitations (and dragons)
Following is a sample source code package as well as an executable jar if you don't want to compile.  Keep in mind the program is not full featured and not suitable for production systems.  I have provided no security controls like input validation.  The program is not multi-process safe, or even thread safe in some places, if you choose to repurpose some of the classes.  I originally made this for myself.  Thar be dragons!  Consider yourself warned.

The servers where I download payloads are hard coded in the program for North America only.  If you live elsewhere, you will want to experiment with download servers closer to your region, and you will need to make some improvements to the code.
Code - ispspeedcheck.zip

Running the binary depends upon your system but to get it running on my system I use the following.  The command launches the program and tells it to log CSV data to the ispspeedtest.csv file in the current working directory.
java -Dfile=./ispspeedtest.csv -jar ./ispspeedcheck.jar

I have tried it under Java 6 and Java 7 on OSX and Java 6 under Ubuntu Linux.  If anyone needs some assistance let me know and I might be able to make some changes or suggest some.  I'm just not sure who's interested in this stuff at the moment, so I see no need to invest any more time than necessary to answer my personal curiosities.

Alternative ideas
Writing a program to measure your Internet throughput is probably not the most efficient use of anyone's time.  My first thought was to use the Linux curl or wget commands to download the payloads, then use a graphing facility like RRDtool.  I'm not too familiar with RRDtool, but it looked powerful and promising.  If you're a Linux user, a little Googling and hammering out a shell script is a good way to go.  I did some Googling to find desktop applications.  I didn't find anything for OSX.  I did find an interesting package for Windows.  To be honest, I just wanted to do this in Java.

Contacting your ISP
Before you decide to share any information with your ISP, make sure you have your facts straight.  Double check your information, cross check with other tools, etc.  It's easy to misinterpret results or make a programming mistake if you decide to make some improvements to the code.  If you've taken all precautions and you still wish to contact your ISP, then be courteous.  Often raising awareness will get you on the path to resolution.  Also keep in mind your advertised bandwidth is not a guarantee.  Bandwidth guarantees are available, but not generally to consumers, since they cost a fortune.  Take the time to become familiar with your ISP's Terms of Service for your Internet connection before you call.

Verdict and next steps
No real results just yet.  I'm still getting started and I only have a couple of days of data.  I will update in the future with my results or describe how the program works if there's interest.

Good luck.

[1] Alves, J. Clipart - Wavy Checkered Flag. Digital image. Clipart - Wavy Checkered Flag. Clipart.org, 25 Mar. 2010. Web. 15 Nov. 2012. <http://openclipart.org/detail/62779/wavy-checkered-flag-by-j_alves>.

[2] "Test Your ISP." Electronic Frontier Foundation. EFF, n.d. Web. 15 Nov. 2012. <https://www.eff.org/testyourisp>.

Monday, November 5, 2012

Movie Reviewed, We are Legion: The Story of Hactivists

We are Legion:  The Story of Hactivists[1] is a documentary taking viewers inside the security hacktivist organization, Anonymous.  The film explores computer hacking subculture, early hacker organizations like Cult of the Dead Cow and Electronic Disturbance Theater, and provides history around Anonymous and where it's heading.

Many of us have heard news about the group Anonymous in the popular media and press lately.  But what is the group Anonymous?  Who is in charge?  What are their goals?  Following is a quick rundown.

What is Anonymous?
Anonymous is not a group of angry teenagers pranking computers for fun.  Anonymous is a large group of hacktivists spanning many countries.

Who is in charge?
To quote the movie, "Anonymous is like a flock of birds."  When one bird changes direction, sometimes the entire flock follows.  Leaders emerge from the group from time to time, and people with like interests rally behind them.  For leaders, relevance is determined by the number of people rallying behind their cause.  There is more than one leader since there is more than one cause.

What are Anonymous's goals?
The goals of the group change as group leadership changes.  The goals today are not the same goals as when the group started.  In fact, some of Anonymous's original leadership discuss their differences of opinion with the newer leadership.

A number of individuals were interviewed throughout the program, in particular Chris Wysopal (Twitter, @weldpond), CTO of Veracode.  Chris is a very talented and outspoken security researcher[2] and provides some hacking commentary, including Black Hat conference origins.  The film also raises interesting points of view.  For instance, the film frames Anonymous as hacktivists and describes their activities largely as forms of political protest or civil disobedience.  The group uses technological means to demonstrate for their causes: Distributed Denial of Service (DDoS), web site defacement, DoSing phone lines, trolling, even fake pizza delivery orders to harass individuals are considered fair game.  All of these activities are painted as forms of political protest.  Sure, DDoS attacks are disruptive, but no different than "sit-ins" or picket lines (in the group's eyes).  I never thought of a DDoS attack as a form of political protest but it surely could be.  The world is changing fast, and how we organize and protest is changing as well.

Thumbs up!  If you're a security professional or interested in computer security, it's a good movie to see.

[1] "We Are Legion | The Story of Hactivists." We Are Legion. Luminantmedia.com, n.d. Web. 06 Nov. 2012. <http://wearelegionthedocumentary.com/>.
[2] "Chris Wysopal." Wikipedia. Wikimedia Foundation, 29 Oct. 2012. Web. 06 Nov. 2012. <http://en.wikipedia.org/wiki/Chris_Wysopal>.

Wednesday, October 31, 2012

Java Spotlight Episode 106: Java Security Update


Bruce Lowenthal and I were interviewed on Java security at the JavaOne conference in San Francisco, California.
(Audio) http://goo.gl/fb/aUUaN

Tuesday, October 30, 2012

News from AppSecUSA 2012 in Austin TX

[Updated Post Friday November 30, 2012]

Conference videos posted, http://videos.2012.appsecusa.org/

[Original Post Follows]

This year's Open Web Application Security Project (OWASP) AppSecUSA 2012 was held at the Hyatt in downtown Austin, Texas.  There were many sessions and speakers from across industry.  Whether you're just starting out or a seasoned computer security professional, OWASP conferences provide good value across all levels of experience.  In addition to security training, OWASP events are a great place to gather as a community and exchange ideas around security.

James Wickett (left), Milton Smith (right)
AppSecUSA was organized by the local OWASP chapter.  In the photo to the right is James Wickett (Twitter, @wickett), Austin OWASP Chapter Leader.  Josh Sokol (Twitter, @joshsokol) chairs the OWASP Chapters Committee (not shown).  Matt Tesauro (Twitter, @matt_tesauro) is the OWASP LiveCD Lead (also not shown).  I know I'm understating their credentials.  Amazing what these individuals have done in Austin.  Good job on the conference.  You rock and it's great to see you again!

Jim Manico (right)
There were a number of interesting presentations this year; I will cover a few.  Top 10 Web Defenses - Jim Manico (Twitter, @manicode), VP Security, WhiteHat Security.  The surprise for me was Jim's point that SQL Injection is still the largest attack vector.  I keep hearing the same rant from others, so maybe I should start believing it.  We've known about SQL Injection attacks for years, so it surprises me and it's disappointing.  The most pervasive attack I've seen to date is Cross-Site Scripting (XSS).  In fact, I contemplated interrupting or raising my hand during Jim's presentation, but the next sentence out of his mouth was, "XSS is the cockroach of the Internet".  Bravo!  Jim's a resident of Hawaii.  No, the hand signal in the photo is not a Hawaiian gang sign, it's a friendly greeting.

Jim provided a number of useful resources throughout his session, like OWASP's cheat sheets covering a variety of topics[1].  The cheat sheet mentioned in the session is the Password Storage Cheat Sheet[2].  Another cheat sheet mentioned is the Forgot Password Cheat Sheet, which you can find on the main cheat sheet page[1].  I notice there is no cheat sheet specifically for storage of application or service passwords, or at least not one I could find.  Some of the cheat sheets are works in progress while others are more mature.  In any case, the cheat sheets are a good emerging resource for common challenges.

Discussion around Content Security Policy (CSP)[3] kept surfacing in different sessions.  CSP is new to me, but one of the interesting features is that it helps prevent content reposting.  From a practical perspective, you can use this family of headers to prevent attackers from iframing your protected page content.  CSP itself is delivered as a Content-Security-Policy HTTP header carrying a policy of allowed content sources, while the closely related X-Frame-Options header is what tells browsers not to embed protected content in frames.  Without even looking at the spec, my intuition tells me there are ways around CSP, like using old browsers where CSP is not supported, MITM proxies to strip out the header, etc.  CSP is likely in the same camp as HTTPOnly: not bulletproof, but a good defense-in-depth measure, especially when combined with HTTPS and other measures.
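
As a concrete illustration, a Java servlet filter could emit both headers on every response.  This is just my sketch; the filter name and the policy string are my own assumptions, not anything shown in the sessions.

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Adds anti-framing and CSP headers to every response; map the filter to /* in web.xml.
public class FramingHeaderFilter implements Filter {
    public void init(FilterConfig config) throws ServletException { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        // Tell browsers to refuse to render this page inside any frame or iframe.
        response.setHeader("X-Frame-Options", "DENY");
        // CSP 1.0 policy: only load scripts, styles, and other content from our own origin.
        response.setHeader("Content-Security-Policy", "default-src 'self'");
        chain.doFilter(req, res);
    }

    public void destroy() { }
}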

Why Web Security Is Fundamentally Broken - Jeremiah Grossman (Twitter, @jeremiahg), CTO, WhiteHat Security.  Most noteworthy, Jeremiah demonstrated some social media hacking.  The hacking demo uses clicks provided by the user to authorize calls to social media sites like Twitter and Facebook.  In the demo, the Twitter Tweet box or Facebook Like button, usually provided on news pages or blog articles, is made to follow or tail the user's cursor around on the page.  Anywhere the user clicks on the page, the social media button is what actually gets clicked -- a type of clickjacking.  User data is gathered from their social media site and populated into a redacted demo page.  In a real implementation, these same buttons can be made invisible, so the linkage between the redacted demo page and the social media sites is not obvious.  The redacted form demo is clever; when users see their personal data populated into the redacted form, one field at a time before their eyes, it's compelling and shocking.  When Jeremiah gets the real redacted demo page live, it's guaranteed to grab some press headlines.

Real World Cloud Application Security - Jason Chan, Cloud Security Architect, Netflix (no Twitter).  Jason's presentation was interesting since Netflix is laying a lot of new ground with operational and engineering practices.  It's safe to say almost nothing in their operational or engineering practices is standard.  For instance, Netflix combined development and operations into a single unit.  Netflix is largely operating on Amazon's cloud infrastructure.  An interesting fact is that 1/3 of all US Internet traffic is Netflix streams.  To harden their production infrastructure, Netflix crashes their servers and applications on a regular basis.  Yup, you heard me right, they crash their systems regularly and purposefully.  To crash their systems they employ a framework of monkeys -- stay with me for a moment.  One of the monkeys, Chaos Monkey, periodically kills a process, a service, or an entire virtual machine at random.  The idea of killing various cloud components at runtime is that it builds more resilient applications.  Programmers and operations staff who enjoy sleep quickly learn how to build fault-resilient applications tolerant to environmental changes.  Phew, that must have been a prickly implementation assuming they started with traditional processes.
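
To make the monkey idea concrete, here is a toy Java sketch of the concept as I understand it from the talk.  The instance names and the terminate() hook are hypothetical; a real implementation would talk to the cloud provider's API and run on a schedule.

import java.util.Arrays;
import java.util.List;
import java.util.Random;

// Toy illustration only: pick one instance at random and "terminate" it.
public class ToyChaosMonkey {
    public static void main(String[] args) {
        // Hypothetical instance names; a real monkey would discover these dynamically.
        List<String> instances = Arrays.asList("web-1", "web-2", "api-1", "api-2");
        String victim = instances.get(new Random().nextInt(instances.size()));
        System.out.println("Chaos Monkey terminating: " + victim);
        // terminate(victim); // hypothetical hook into the cloud provider's API
    }
}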

Armadillo Races

The photo on the right was from the armadillo races.  Armadillos move really fast.  I don't think they ever stopped moving, and they almost never travel in straight lines.  I have seen many dead armadillos on the side of the road, so to finally see a live one is refreshing.  Live armadillo, check.  Now I only need to see a UFO and our national debt disappear.  We also had a mechanical bull on site.  Anyone wanting to ride the lightning could give it a try.  Also, so everyone knows, I do have photos of Jim Manico (Twitter, @manicode) riding the mechanical bull.  No, I'm not going to post them.  There are some things you need to attend in person to see for yourself.

Lock Pick Village
Of course, no security event is complete without a Lock Pick Village.  Several years back I was attending a security conference with a lock pick village.  Interestingly enough, I learned how to pick locks with Johnny Long (Twitter, @ihackstuff), author of Google Hacking[4].  I bought my first set of lock picks at the conference (hope they are legal in California).  Actually, mine got rusty so I threw them out years ago.  You learn lots of life skills at security conferences.  Johnny's charity organization was at the conference but he was overseas helping his team.  He's pretty active about making the world a better place, admirable.  Yes, I did buy an ihackcharities.org t-shirt.  There's just something wrong about that URL that I like.



[1] "Cheat Sheets." OWASP. OWASP, 28 July 2012. Web. 27 Oct. 2012. <https://www.owasp.org/index.php/Cheat_Sheets>.

[2] Manico, Jim. "Password Storage Cheat Sheet." OWASP. OWASP, 26 Aug. 2011. Web. 27 Oct. 2012. <https://www.owasp.org/index.php/Password_Storage_Cheat_Sheet>.

[3] Stern, Brandon, and Adam Barth. "Content Security Policy 1.0." Content Security Policy 1.0. W3C, n.d. Web. 27 Oct. 2012. <http://www.w3.org/TR/CSP/>.

[4] Long, Johnny. Google Hacking for Penetration Testers, Volume 2. Amazon.com. Web. <http://www.amazon.com/Google-Hacking-Penetration-Testers-Johnny/dp/1597491764/ref=sr_1_cc_1?s=aps&ie=UTF8&qid=1351625349&sr=1-1-catcorr&keywords=google+hacking>.

Tuesday, October 16, 2012

Do Not Track, Why Does it Matter?

Figure [1]: Do Not Track
If you're a software developer or browser power user, it's likely you've heard some discussion around Do Not Track (DNT) features[2].  Like the name implies, DNT communicates the user's desire to the application not to be tracked[3] -- simple enough.  The firestorm around DNT is over the implications for individual privacy and industry access to your personal information.

From a technical perspective, DNT is implemented as an HTTP request header (DNT: 1) sent by the web browser to the web application.  The application receives the DNT header and hopefully honors the user's wishes.  The setting is user adjustable via browser configuration settings, if supported.  The technologies are well established and relatively simple to implement.
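
For example, a Java servlet that wants to honor the header only needs to check for it before doing any tracking work.  This is a minimal sketch of my own; the servlet name and the recordVisit() hook are hypothetical.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch of honoring DNT server-side: skip tracking work when the browser sends DNT: 1.
public class LandingServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        boolean doNotTrack = "1".equals(req.getHeader("DNT"));
        if (!doNotTrack) {
            // recordVisit(req); // hypothetical analytics hook, only called when tracking is allowed
        }
        resp.setContentType("text/html");
        resp.getWriter().println("<p>Do Not Track requested: " + doNotTrack + "</p>");
    }
}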

The meaning of DNT is clear enough to many users and hardly requires explanation.  However, advertisers steadfastly refuse to honor DNT since it impacts access to users' personal data, favoring instead self-regulation or other measures.  The Digital Advertising Alliance (DAA), representing over 5000 advertisers, does not support DNT[7].  So what's the problem?  Are the specifications not clear enough?  Does nobody understand that users value their privacy?  No, not at all.  So if industry understands what we want, why don't they keep our information private?  A Verizon exec captures the industry viewpoint about your data succinctly -- "Data is the new oil"[8].  To me that says our personal data is an incredibly valuable product.  A trip to the gas pump helps put the comparison in perspective.

The challenges of DNT are...
  • How best to implement within the applications.
  • Industry favors unfettered access to your personal information.
  • Support for DNT is voluntary.  Few rules and consequences around use or even abuse of your data.
  • Incredible financial incentive exists not to implement DNT.
  • It's not clear when -- if ever -- DNT will be formally standardized.  In fact, it's not looking good at all.
Beyond the commercialization of your data, there are practical reasons to retain some user information.  Clearly, information about the user must be retained to promote a good experience with the application.  Imagine if Facebook didn't have access to your list of friends -- the service would not be very useful.  Implementation of "no tracking" in the strictest sense is not desirable for anyone.  On the other end of the spectrum, data brokers gathering your personal information for resale would likely be considered abusive by most users; that is, if they were even aware their data was being sold.  All this raises the question, what is considered good and bad tracking?

A Stanford University team did a pretty good job of defining good and bad tracking[6].  Their starting point was to consider tracking from the user's perspective.  A site you visit and interact with directly is considered a first party.  Sites you do not interact with directly are considered third parties.  The scope of DNT applies specifically to third parties, and any practices defining bad tracking apply to third-party use of your information.  Of course, there are some legitimate third-party uses, like supporting infrastructure services, so the definition is tricky.

Thinking more about data: on a deeper and more personal level, information about your medical and financial conditions and history that you post to friends on social media can be gathered and used by potential employers and insurance companies, to their benefit.  Be mindful of everything you discuss online and every bit of personal information you enter.  Unlike derogatory credit reporting data, there is no limit on the life span of derogatory social media content, or even rules about how your personal Internet data may be traded or brokered[5].  My rule of thumb: if it's technologically possible to achieve and beneficial to someone or some group, then I assume it's being done.
"If you don't know who the customer of the product you are using is, you don't know what the product is for. We are not the customers...we are the product".  --Doug Rushkoff[4] 
So to answer, why does DNT matter?  DNT matters because it communicates the individual's desire not to be tracked.  Any web site that does not comply with your privacy wishes runs the risk of a flogging in the court of public opinion.  DNT stabs at the very heart of information profiteers who benefit by knowing everything about you.

Individual privacy is an unfolding drama that will take years to sort out, but I have every confidence it will be sorted out.  I have faith the industry will continue to misbehave, and regulators will do what they do best -- nothing, or err on the side of more money for business.  Eventually, the confluence of injustice will produce a public outcry for privacy the likes of which we have never seen.  Already privacy is in the news every day.

Most people understand that to use a really good web site for free they must give up something.  Most think in terms of tolerating some advertisements in the web page.  However, many don't have a good understanding of what is being negotiated away, and industry likes it that way -- but people are learning fast.

[1] Bug. Digital image. http://donottrack.us/. Stanford, n.d. Web. 10 Oct. 2012 <http://donottrack.us/images/bug.png>.
[2] "Do Not Track." - Universal Web Tracking Opt Out. Standford, n.d. Web. 10 Oct. 2012. <http://donottrack.us/>.
[3] Mayer, J., A. Narayanan, and S. Stamm. "Do Not Track: A Universal Third-Party Web Tracking Opt Out Draft-mayer-do-not-track-00." Ietf.org. Internet Engineering Task Force, 7 Mar. 2011. Web. 10 Oct. 2012. <http://tools.ietf.org/id/draft-mayer-do-not-track-00.txt>.
[4] Solon, Olivia. "You Are Facebook's Product, Not Customer." Wired UK. Wired.co.uk, 21 Sept. 2011. Web. 11 Oct. 2012. <http://www.wired.co.uk/news/archive/2011-09/21/doug-rushkoff-hello-etsy>.
[5] Singer, Natasha. "Senator Opens Investigation of Data Brokers." The New York Times. The New York Times, 11 Oct. 2012. Web. 11 Oct. 2012. <http://www.nytimes.com/2012/10/11/technology/senator-opens-investigation-of-data-brokers.html?_r=0>.
[6] Mayer, Jonathan, and Arvind Narayanan, Ph.D. "Re: Protecting Consumer Privacy in an Era of Rapid Change: A Proposed Framework for Businesses and Policymakers." Letter to Federal Trade Commission, Office of the Secretary. 18 Feb. 2011. Donottrack.us. Stanford University, n.d. Web. 12 Oct. 2012. <http://donottrack.us/docs/FTC_Privacy_Comment_Stanford.pdf>.
[7] Naples, Mark. "DAA Statement on DNT Browser Settings." BusinessWire.com. WIT Strategy, For the DAA, 9 Oct. 2012. Web. 16 Oct. 2012. <http://www.businesswire.com/news/home/20121009005980/en/DAA-Statement-DNT-Browser-Settings>.
[8] Morran, Chris. "Does Verizon’s Monitoring Of Customer Behavior Violate Wiretap Laws?" Http://consumerist.com/. The Consumerist, 16 Oct. 2012. Web. 17 Oct. 2012. <http://consumerist.com/2012/10/16/does-verizons-monitoring-of-customer-behavior-violate-wiretap-laws/>.
