CRYPTO-GRAM

                 April 15, 2002

               by Bruce Schneier
                Founder and CTO
       Counterpane Internet Security, Inc.
            schneier@counterpane.com
          <http://www.counterpane.com>


A free monthly newsletter providing summaries, analyses, insights, and 
commentaries on computer security and cryptography.

Back issues are available at 
<http://www.counterpane.com/crypto-gram.html>.  To subscribe, visit 
<http://www.counterpane.com/crypto-gram.html> or send a blank message to 
crypto-gram-subscribe@chaparraltree.com.

Copyright (c) 2002 by Counterpane Internet Security, Inc.


** *** ***** ******* *********** *************

In this issue:
      How to Think About Security
      Crypto-Gram Reprints
      Is 1024 Bits Enough?
      News
      Counterpane News
      Liability and Security
      Comments from Readers


** *** ***** ******* *********** *************

        How to Think About Security


If security has a silly season, we're in it.  After September 11, every 
two-bit peddler of security technology crawled out of the woodwork with new 
claims about how his product can make us all safe again.  Every misguided 
and defeated government security initiative was dragged out of the closet, 
dusted off, and presented as the savior of our way of life.  More and more, 
the general public is being asked to make security decisions, weigh 
security tradeoffs, and accept more intrusive security.

Unfortunately, the general public has no idea how to do this.

But we in computer security do.  We've been doing it for years; we do it 
all the time.  And I think we can teach everyone else to do it, too.  What 
follows is my foolproof, five-step security analysis.  Use it to judge any
security measure.

Step one:  What problem does the security measure solve?  You'd think this 
would be an easy one, but so many security initiatives are presented 
without any clear statement of the problem.  National ID cards are a 
purported solution without any clear problem.  Increased net surveillance 
has been presented as a vital security requirement, but without any 
explanation as to why.  (I see the problem not as one of not having enough 
information, but of not being able to analyze and interpret the information 
already available.)

Step two:  How well does the security measure solve the problem?  Too often 
analyses jump from the problem statement to a theoretical solution, without 
any analysis as to how well current technology actually solves the 
problem.  The companies that are pushing automatic face recognition 
software for airports and other public places spend all their time talking 
about the promises of a perfect system, while skipping the fact that 
existing systems work so poorly as to be useless.  Enforcing a no-fly zone 
around a nuclear reactor only makes sense if you assume a hijacker will 
honor the zone, or if it is large enough to allow reaction to a hijacker 
who doesn't.

Step three:  What other security problems does the measure cause?  Security 
is a complex and inter-related system; change one thing and the effects 
ripple.  If the government bans strong cryptography, or mandates 
back-doors, the resultant weaker systems will be easier for the bad guys to 
attack.  National ID cards require a centralized infrastructure that is 
vulnerable to abuse.  In fact, the rise of identity theft can be linked to 
the increased use of electronic identity.  Make identities harder to steal 
through increased security measures, and the fewer identities that do get 
stolen become only more valuable and easier to use.

Step four:  What are the costs of the security measure?  Costs are not just 
financial, they're social as well.  We can improve security by banning 
commercial aircraft.  We can make it harder for criminals to outrun police 
by mandating 40 mph speed maximums in automobiles.  But these things cost 
society too much.  A national ID card would be enormously expensive.  The 
new rules allowing police to detain illegal aliens indefinitely without due 
process cost us dearly in liberty, as does much of the PATRIOT Act.  We 
don't allow torture (officially, at least).  Why not?  Sometimes a security 
measure, even though it may be effective, is not worth the costs.

Step five:  Given the answers to steps two through four, is the security 
measure worth the costs?  This is the easy step, but far too often no one 
bothers.  It's not enough for a security measure to be effective.  We don't 
have infinite resources.  We don't have infinite patience.  As a society, 
we need to do the things that make the most sense, that are the most 
effective use of our security dollar.

Some security measures pass these tests.  Increasing security around dams, 
reservoirs, and other infrastructure points is a good idea.  Not storing 
railcars full of hazardous chemicals in the middle of cities should have 
been mandated years ago.  New building evacuation plans are smart, 
too.  These are all good uses of our limited resources to improve security.

This five-step process works for any security measure, past, present, or 
future:

   1) What problem does it solve?
   2) How well does it solve the problem?
   3) What new problems does it add?
   4) What are the economic and social costs?
   5) Given the above, is it worth the costs?

When you start using it, you'll be surprised how ineffectual most security
is these days.  For example, only two of the airline security measures put 
in place since September 11 have any real value: reinforcing the cockpit 
door, and convincing passengers to fight back.  Everything else falls 
somewhere between marginally improving security and a placebo.
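
If you want to make the checklist concrete, here's a toy sketch that 
encodes the five questions as a simple record; it's nothing more than a 
note-taking aid, and the example answers are illustrative, not analysis:

  from dataclasses import dataclass

  @dataclass
  class SecurityMeasure:
      name: str
      problem_solved: str       # step 1
      effectiveness: str        # step 2
      new_problems: str         # step 3
      costs: str                # step 4
      worth_it: bool            # step 5: your judgment, given the above

  # Hypothetical worked example; the verdict is the one argued above.
  door = SecurityMeasure(
      name="Reinforced cockpit doors",
      problem_solved="keeps hijackers out of the cockpit",
      effectiveness="directly blocks the threat",
      new_problems="crew locked out in an emergency",
      costs="modest retrofit expense per aircraft",
      worth_it=True,
  )
  print(f"{door.name}: {'worth it' if door.worth_it else 'not worth it'}")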


** *** ***** ******* *********** *************

           Crypto-Gram Reprints



Natural Advantages of Defense: What Military History Can Teach Network 
Security, Part 1
<http://www.counterpane.com/crypto-gram-0104.html#1>

Microsoft Active Setup "Backdoor":
<http://www.counterpane.com/crypto-gram-0004.html#MicrosoftActiveSetup"Backdoor">

UCITA:
<http://www.counterpane.com/crypto-gram-0004.html#TheUniformComputerInformationTransactionsAct(UCITA)>

Cryptography: The Importance of Not Being Different:
<http://www.counterpane.com/crypto-gram-9904.html#different>

Threats Against Smart Cards:
<http://www.counterpane.com/crypto-gram-9904.html#smartcards>

Attacking Certificates with Computer Viruses:
<http://www.counterpane.com/crypto-gram-9904.html#certificates>


** *** ***** ******* *********** *************

          Is 1024 Bits Enough?



Last month I wrote about Dan Bernstein's factoring research, and how it 
might affect RSA key lengths.  Recently there's been a discussion on 
BugTraq, sparked when cypherpunk Lucky Green cited the research as his 
primary motivation for revoking his 1024-bit PGP keys.

This brings up the interesting question: are 1024-bit RSA keys insecure, 
and what should we do about them?

The current public factoring record is 512 bits, using general-purpose
computers.  Prudence requires us to suspect that institutions like the NSA 
can do better, although we don't know how much better.

Way back in 1995, I estimated key lengths required to be secure from 
different adversaries: individuals, corporations, and governments (Applied 
Cryptography, 2nd Edition, table 7.6, page 162).  Back then I suggested 
that people migrate towards 1280-bit keys, and even 1536-bit keys, if they 
were concerned about large corporate or government adversaries:

    Recommended Public-Key Key Lengths (in bits)

    Year     Ind.     Corp.     Govt.
    1995     768      1280      1536
    2000    1024      1280      1536
    2005    1280      1536      2048
    2010    1280      1536      2048
    2015    1536      2048      2048

Looking back on those numbers written seven years ago, I think they were 
conservative but not unduly so.  Factoring, at least in the academic 
community, has not progressed as fast as I expected it to.  But 
mathematical progress is bursty, and a single breakthrough could more than 
make up for lost time.  So if I were making recommendations today, I would 
still stand by my 2000 estimates above.

I have long believed that a 1024-bit key could fall to a machine costing $1 
billion, and that a 1024-bit RSA key is approximately equivalent to an 
80-bit symmetric key.  (In Applied Cryptography, I wrote that a 768-bit RSA 
key is equivalent to an 80-bit symmetric key; that estimate is probably too 
low.)
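
Where do equivalences like these come from?  Here's a back-of-the-envelope 
sketch in Python: plug the modulus length into the heuristic running time 
of the general number field sieve, the best known factoring algorithm, and 
convert the work factor into bits.  Constants and lower-order terms are 
ignored, so the outputs are ballpark figures at best:

  import math

  def gnfs_security_bits(modulus_bits):
      """Rough symmetric-equivalent strength of an RSA modulus, from the
      GNFS heuristic running time L_n[1/3, (64/9)^(1/3)]."""
      ln_n = modulus_bits * math.log(2)   # natural log of the modulus
      exponent = ((64 / 9) ** (1 / 3)) * (ln_n ** (1 / 3)) \
                 * (math.log(ln_n) ** (2 / 3))
      return exponent / math.log(2)       # convert e^x into bits

  for bits in (512, 768, 1024, 1536, 2048, 3072):
      print(f"{bits:4d}-bit RSA  ~  {gnfs_security_bits(bits):3.0f}-bit symmetric")

A 1024-bit modulus comes out at roughly 87 bits by this estimate, in the 
same neighborhood as the 80-bit figure above.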

Comparing symmetric and public-key keys is a lot like comparing apples and 
oranges.  I recommend 128-bit symmetric keys because they are just as fast 
as 64-bit keys.  That's not true for public-key keys.  Doubling the key
size roughly corresponds to a six-times speed slowdown in software.  This 
might not matter with PGP, but it will make client-server applications like 
SSL slow to a crawl.  I've seen papers claiming that you need 3072-bit RSA 
keys to correspond to 128-bit symmetric keys and 15K-bit RSA keys for 
256-bit symmetric keys.  This kind of thinking is ridiculous; the 
performance trade-offs and attack models are so different that the 
comparisons don't make sense.
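
As a rough sanity check on that slowdown figure, here's a timing sketch.  
It assumes the third-party Python "cryptography" package (my choice for 
illustration; no particular library is implied above).  Private-key 
operations scale roughly cubically with modulus length, so doubling the 
key size should cost something in the six-to-eight-times range:

  # pip install cryptography  (third-party package, assumed for this sketch)
  import time
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import padding, rsa

  message = b"crypto-gram"
  for bits in (1024, 2048):
      key = rsa.generate_private_key(public_exponent=65537, key_size=bits)
      start = time.perf_counter()
      for _ in range(100):
          # Signing is a private-key operation, the expensive direction.
          key.sign(message, padding.PKCS1v15(), hashes.SHA256())
      elapsed = time.perf_counter() - start
      print(f"{bits}-bit RSA: {elapsed / 100 * 1000:.2f} ms per signature")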

But there's no reason to panic, or to dump existing systems.  I don't think 
Bernstein's announcement has changed anything.  Businesses today could 
reasonably be content with their 1024-bit keys, and military institutions 
and those paranoid enough to fear them should have upgraded years ago.

To me, the big news in Lucky Green's announcement is not that he believes 
that Bernstein's research is sufficiently worrisome as to warrant revoking 
his 1024-bit keys; it's that, in 2002, he still has 1024-bit keys to revoke.

This discussion highlights the huge inertia in key rollover.  Many people 
are still using short keys.  Lucky Green's e-mail sheds light on this
phenomenon.  He wrote "In light of the above, I reluctantly revoked all my 
personal 1024-bit PGP keys and the large web-of-trust that these keys have 
acquired over time."  The web of trust attached to those keys was of great 
value, and reestablishing it with a new set of keys will be difficult and 
time-consuming.  Until now, that pain mattered more to Green than having a 
"long enough" key.


Lucky Green's BugTraq announcement:
<http://online.securityfocus.com/archive/1/263924>

My essay on Bernstein's factoring paper:
<http://www.counterpane.com/crypto-gram-0203.html#6>

News coverage:
<http://zdnet.com.com/2110-1105-863643.html>
<http://www.infosecuritymag.com/2002/apr/news.shtml#factoringfriction>

Other essays on the Bernstein paper:
<http://www.rsasecurity.com/rsalabs/technotes/bernstein.html>


** *** ***** ******* *********** *************

                     News



This is a novel idea.  Two neural nets begin with secret random weights and 
then train on each other's outputs.  It turns out they synchronize with 
each other much faster than can an observer who sees the outputs but 
cannot affect them.
<http://xxx.lanl.gov/abs/cond-mat/0203011>
<http://www.newscientist.com/news/news.jsp?id=ns99992067>
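
For the curious, here's a minimal sketch of the idea, using the "tree 
parity machine" construction from the paper; the parameters K, N, and L 
are illustrative choices of mine, not values from the paper:

  import numpy as np

  K, N, L = 3, 100, 3              # hidden units, inputs per unit, weight bound
  rng = np.random.default_rng()

  def output(w, x):
      sigma = np.sign(np.sum(w * x, axis=1))
      sigma[sigma == 0] = -1       # break ties so sigma is always +/-1
      return sigma, int(np.prod(sigma))

  def update(w, x, sigma, tau):
      # Hebbian rule: only hidden units that agree with the joint output move.
      for k in range(K):
          if sigma[k] == tau:
              w[k] = np.clip(w[k] + tau * x[k], -L, L)

  wA = rng.integers(-L, L + 1, (K, N))    # Alice's secret weights
  wB = rng.integers(-L, L + 1, (K, N))    # Bob's secret weights
  steps = 0
  while not np.array_equal(wA, wB):
      x = rng.choice([-1, 1], (K, N))     # public random input
      sA, tA = output(wA, x)
      sB, tB = output(wB, x)
      if tA == tB:                        # train only when outputs agree
          update(wA, x, sA, tA)
          update(wB, x, sB, tB)
      steps += 1
  print(f"synchronized after {steps} rounds; the shared weights are the key")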

Has al Qaeda been hacked?
<http://www.businessweek.com/bwdaily/dnflash/mar2002/nf20020312_9960.htm>

Mapping the CIA's network:
<http://www.vnunet.com/News/1129730>
<http://www.trustmatta.com/services/docs/Matta_Counterintelligence.pdf>

Spray-on microdots containing unique UIDs used to identify products:
<http://www.wired.com/news/technology/0,1282,50598,00.html>

Real-life cryptography:
<http://abcnews.go.com/sections/us/CrimeBlotter/crimeblotter011107.html>
The pathetic part is that it took this convict four years to invent the 
substitution cipher, and he didn't even think of breaking everything up 
into five-letter blocks.
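
Both ideas together amount to a few lines of code; a toy sketch:

  import random
  import string

  # Monoalphabetic substitution, with ciphertext grouped into the
  # traditional five-letter blocks -- the step he didn't think of.
  alphabet = string.ascii_uppercase
  key = "".join(random.sample(alphabet, len(alphabet)))
  table = str.maketrans(alphabet, key)

  def encrypt(text):
      letters = "".join(c for c in text.upper() if c.isalpha()).translate(table)
      return " ".join(letters[i:i + 5] for i in range(0, len(letters), 5))

  print(encrypt("Meet me at the usual place at noon"))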

The authors of the poorly named "Responsible Disclosure" Internet-Draft did 
the right thing.  They withdrew their document from the IETF.
<http://www.theregister.co.uk/content/55/24482.html>
<http://news.com.com/2100-1001-862994.html?tag=cd_mh>
Meanwhile, Steve Bellovin and Randy Bush published a competing document.
<http://www.ietf.org/internet-drafts/draft-ymbk-obscurity-00.txt>

The SSSCA has a new name now: The Consumer Broadband and Digital Television 
Promotion Act (CBDTPA).  Sen. Hollings submitted the bill.  It's still a 
disaster for the computer industry.
<http://www.theregus.com/content/6/24407.html>
<http://news.com.com/2100-1023-866337.html>
<http://www.wired.com/news/politics/0,1283,51274,00.html>
<http://online.securityfocus.com/columnists/71>
<http://www.osopinion.com/perl/story/16840.html>
<http://thomas.loc.gov/cgi-bin/bdquery/z?d107:s.02048:>

eBay is being criticized for not using SSL to secure their more sensitive 
Web pages.  Honestly, I think this is mostly beside the point.  eBay's 
weaknesses are not based around people eavesdropping on Web traffic; 
they're based on vulnerabilities in their Web servers, insecure passwords, 
etc.  While using SSL is probably a good idea, it would significantly hit 
their performance, and I don't think the eavesdropping risk is great 
enough to worry about.
<http://news.com.com/2100-1017-870959.html>

Latest news on the FBI's Carnivore eavesdropping software (now with the 
friendlier name DCS-1000):
<http://www.osopinion.com/perl/story/17009.html>

A good essay on the implications of what Brilliant Digital has done by
spreading their Trojan with KaZaA:
<http://www.cs.berkeley.edu/~nweaver/0wn2.html>

Decent article on cyberterrorism:
<http://www.cio.com/archive/031502/truth.html>

The 2002 CSI/FBI Computer Crime Survey has been released.  This is a great 
study, now in its seventh year.  Almost all of the scary statistics that 
appear in the press about computer crime come from this survey.  You owe it 
to yourself to read the original data.
<http://www.gocsi.com/press/20020407.html>
I wrote more about the survey, its strengths and limitations, last year:
<http://www.counterpane.com/crypto-gram-0104.html#3>
News stories:
<http://online.securityfocus.com/news/364>
<http://www.theregister.co.uk/content/6/24747.html>
<http://www.cnn.com/2002/TECH/internet/04/07/cybercrime.survey/index.html>

Microsoft's Trustworthy Computing initiative, and the motivation behind it:
<http://www.salon.com/tech/feature/2002/04/09/trustworthy/index.html>
<http://www.osopinion.com/perl/story/17092.html>

Insider attacks:
<http://www.zdnet.co.uk/news/specials/2000/10/enterprise/techrepublic/2002/13/article004.html>

Managing IDSs in large organizations:
<http://online.securityfocus.com/infocus/1564>
<http://online.securityfocus.com/infocus/1567>

CERT's top six attack trends.  A good essay to read.
<http://www.cert.org/archive/pdf/attack_trends.pdf>

Guidelines for securing Windows 2000:
<http://www.itbuynet.com/pdf/0202-security.pdf>

"Free Speech Online and Offline," by Ross Anderson.  Excellent essay and a 
good European perspective on a UK export bill.
<http://www.itbuynet.com/pdf/0202-security.pdf>

NSA Security Recommendation Guides:
<http://nsa2.www.conxion.com/emailexec/download.htm>


** *** ***** ******* *********** *************

               Counterpane News



Counterpane has announced its Q1 results.  30 billion network events 
monitored and analyzed.  57,000 potential intrusions researched.  10,000 
intrusions detected and prevented.  Attack success rate while Counterpane 
was watching: 0.006%.  Attacks that succeeded long enough to cause damage 
while Counterpane was watching: 0.00%.
<http://www.counterpane.com/pr-500.html>

Counterpane SOC War Stories:
<http://www.counterpane.com/eventreports1.pdf>
<http://www.counterpane.com/eventreports2.pdf>
<http://www.counterpane.com/nimda.pdf>

Counterpane's success in defending networks is the subject of a Business 
Week article:
<http://www.businessweek.com/bw50/content/mar2002/a3776082.htm>

And this article in Internet Week:
<http://www.internetweek.com/newslead02/lead032202.htm>

These are both good articles, because they quantitatively show the benefits 
of Counterpane monitoring.

Two other Counterpane customers talk about their experiences:
<http://www.counterpane.com/pr-womble.html>
<http://www.counterpane.com/pr-currenex.html>

Counterpane Managed Security Monitoring now available in Latin America:
<http://www.counterpane.com/pr-open.html>

The Stanford Law School Center for Internet and Society is holding a 
one-day Conference on Cyber Security and Disclosure, May 9 at 
Stanford.  Schneier is delivering the luncheon talk.
<http://cyberlaw.stanford.edu/>

Schneier will be presenting Counterpane's solution in Baltimore, Dallas, 
Denver, New York, Sacramento, Salt Lake City, and Tallahassee.  If you 
would like to attend, sign up here:
<http://www.counterpane.com/cgi-bin/seminars.cgi>


** *** ***** ******* *********** *************

           Liability and Security



Today, computer security is at a crossroads.  It's failing, regularly, and 
with increasingly serious results.  I believe it will improve 
eventually.  In the near term, the consequences of insecurity will get 
worse before they get better.  And when they get better, the improvement 
will be slow and will be met with considerable resistance.  The engine of 
this improvement will be liability -- holding software manufacturers 
accountable for the security and, more generally, the quality of their 
products -- and the timetable for improvement depends wholly on how quickly 
security liability permeates cyberspace.

Network security is not a problem that technology can solve.  Security has 
a technological component, but businesses approach security as they do any 
other business risk: in terms of risk management.  Organizations optimize 
their activities to minimize their cost/risk ratio, and understanding those 
motivations is key to understanding computer security today.

For example, most organizations don't spend a lot of money on network 
security.  Why?  Because the costs are significant: time, expense, reduced 
functionality, frustrated end users.  On the other hand, the costs of 
ignoring security and getting hacked are small: the possibility of bad 
press and angry customers, maybe some network downtime, none of which is 
permanent.  And there's some regulatory pressure, from audits or lawsuits, 
that adds further costs.  The result: a smart organization does what
everyone else does, and no more.
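
That tradeoff is just expected-loss arithmetic.  A sketch using the 
standard annualized loss expectancy (ALE) formula, with made-up numbers 
purely for illustration:

  # ALE = single-loss expectancy x annual rate of occurrence.
  def annualized_loss(single_loss, incidents_per_year):
      return single_loss * incidents_per_year

  ale_without = annualized_loss(single_loss=200_000, incidents_per_year=0.5)
  ale_with = annualized_loss(single_loss=200_000, incidents_per_year=0.1)
  control_cost = 50_000   # yearly cost of the proposed security measure

  print(f"ALE without the control: ${ale_without:,.0f}")
  print(f"ALE with the control:    ${ale_with:,.0f}")
  print(f"Buy it? {ale_without - ale_with > control_cost}")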

The same economic reasoning explains why software vendors don't spend a lot 
of effort securing their products.  The costs of adding good security are 
significant -- large expenses, reduced functionality, delayed product 
releases, annoyed users -- while the costs of ignoring security are minor: 
occasional bad press, and maybe some users switching to competitors' 
products.  Any smart software vendor will talk big about security, but do 
as little as possible.

Think about why firewalls succeeded in the marketplace.  It's not because 
they're effective; most firewalls are installed so poorly as not to be 
effective, and there are many more effective security products that have 
never seen widespread deployment.  Firewalls are ubiquitous because 
auditors started demanding firewalls.  This changed the cost equation for 
businesses.  The cost of adding a firewall was expense and user annoyance, 
but the cost of not having a firewall was failing an audit.  And even 
worse, a company without a firewall could be accused of not following 
industry best practices in a lawsuit.  The result: everyone has a firewall, 
whether it does any good or not.

Network security is a business problem, and the only way to fix it is to 
concentrate on the business motivations.  We need to change the costs; 
security needs to affect an organization's bottom line in an obvious 
way.  In order to improve computer security, the CEO must care.  In order 
for the CEO to care, security must affect the stock price and the shareholders.

I have a three-step program towards improving computer and network 
security.  None of the steps have anything to do with the technology; they 
all have to do with businesses, economics, and people.

Step one: enforce liabilities.  This is essential.  Today there are no real 
consequences for having bad security, or having low-quality software of any 
kind.  In fact, the marketplace rewards low quality.  More precisely, it 
rewards early releases at the expense of almost all quality.  If we expect 
CEOs to spend significant resources on security -- especially the security 
of their customers -- they must be liable for mishandling their customers' 
data.  If we expect software vendors to reduce features, lengthen 
development cycles, and invest in secure software development processes, 
they must be liable for security vulnerabilities in their products.

Legislatures could impose liability on the computer industry, by forcing 
software manufacturers to live with the same product liability laws that 
affect other industries.  If software manufacturers produced a defective 
product, they would be liable for damages.  Even without this, courts could 
start imposing liability-like penalties on software manufacturers and 
users.  This is starting to happen.  A U.S. judge forced the Department of 
the Interior to take its network offline, because it couldn't guarantee the
safety of American Indian data it was entrusted with.  Several cases have 
resulted in penalties against companies who used customer data in violation 
of their privacy promises, or who collected that data using 
misrepresentation or fraud.  And judges have issued restraining orders 
against companies with insecure networks that are used as conduits for 
attacks against others.

However it happens, liability changes everything.  Currently, there is no 
reason for a software company not to offer more features, more 
complexity.  Liability forces software companies to think twice before 
changing something.  Liability forces companies to protect the data they're 
entrusted with.

Step two: allow parties to transfer liabilities.  This will happen 
automatically, because this is what insurance companies do.  The insurance 
industry turns variable-cost risks into fixed expenses.  They're going to 
move into cyber-insurance in a big way.  And when they do, they're going to 
drive the computer security industry...just like they drive the security 
industry in the brick-and-mortar world.

A company doesn't buy security for its warehouse -- strong locks, window 
bars, or an alarm system -- because it makes it feel safe.  It buys that 
security because its insurance rates go down.  The same thing will hold 
true for computer security.  Once enough policies are being written, 
insurance companies will start charging different premiums for different 
levels of security.  Even without legislated liability, the CEO will start 
noticing how his insurance rates change.  And once the CEO starts buying 
security products based on his insurance premiums, the insurance industry 
will wield enormous power in the marketplace.  They will determine which 
security products are ubiquitous, and which are ignored.  And since the 
insurance companies pay for the actual liability, they have a great 
incentive to be rational about risk analysis and the effectiveness of 
security products.

And software companies will take notice, and will increase security in 
order to make the insurance for their products affordable.

Step three: provide mechanisms to reduce risk.  This will happen 
automatically, and be entirely market driven, because it's what the 
insurance industry wants.  Moreover, they want it done in standard models 
that they can build policies around.  They're going to look to security 
processes: processes of secure software development before systems are 
released, and processes of protection, detection, and response for 
corporate networks and systems.  And more and more, they're going to look 
towards outsourced services.

The insurance industry prefers security outsourcing, because they can write 
policies around those services.  It's much easier to design insurance 
around a standard set of security services delivered by an outside vendor 
than it is to customize a policy for each individual network.

Actually, this isn't a three-step program.  It's a one-step program with 
two inevitable consequences.  Enforce liability, and everything else will 
flow from it.  It has to.

Much of Internet security is a commons: an area used by a community as a
whole.  Like all commons, keeping it working benefits everyone, but any 
individual can benefit from exploiting it.  (Think of the criminal justice 
system in the real world.)  In our society we protect our commons -- our 
environment, healthy working conditions, safe food and drug practices, 
lawful streets, sound accounting practices -- by legislating those goods 
and by making companies liable for taking undue advantage of those 
commons.  This kind of thinking is what gives us bridges that don't 
collapse, clean air and water, and sanitary restaurants.  We don't live in 
a "buyer beware" society; we hold companies liable for taking advantage of 
buyers.

There's no reason to treat software any differently from other 
products.  Today Firestone can produce a tire with a single systemic flaw 
and they're liable, but Microsoft can produce an operating system with 
multiple systemic flaws discovered per week and not be liable.  This makes 
no sense, and it's the primary reason security is so bad today.


** *** ***** ******* *********** *************

             Comments from Readers



From: Bancroft Scott <baos@oss.com>
Subject: SNMP Vulnerabilities

ASN.1, like any other language, can be implemented correctly or 
incorrectly.  As can be seen from the CERT advisory, 
<http://www.cert.org/advisories/CA-2002-03.html>, there are many 
implementations of SNMP that have no vulnerabilities, and many which do. 
This fact by itself shows that the vulnerabilities lie with the 
implementations, not with ASN.1 or BER, for both flawed and flawless SNMP 
applications implement the same protocol and were subject to the same tests 
from Oulu University.

It is critical that all network applications, whether they use ASN.1 or 
not, be fully tested.  E-mail programs have been known to have more than 
their share of bugs, but it would be wrong to state that the e-mail 
protocol (SMTP) is flawed; it is e-mail implementations that are 
flawed.  Protocols that use ASN.1 are no different; it is wrong to conclude 
that applications that use ASN.1 are likely to be vulnerable.  If 
applications that use ASN.1 are properly implemented and tested they are as 
safe as any other properly implemented and tested application.



From: Alessandro Triglia <sandro@mclink.it>
Subject: SNMP Vulnerabilities

I was very concerned with the results of the tests performed by the 
University of Oulu, which show that a very large number of existing SNMP 
implementations are vulnerable to various types of attacks.

However, I disagree with your statement that "The vulnerabilities [...]
stem from problems in the reference code (probably) used inside the 
Abstract Syntax Notation (ASN.1) and Basic Encoding Rules (BER)."

The conclusions of the Oulu report are that "implementation errors plague 
several SNMP products." There is no suggestion or implication that the 
standardized ASN.1 encoding rules may be responsible for the existence of 
such defects in the products. The Oulu report talks very clearly about 
"implementation errors."

Also note that there is no such thing as a "reference code" in the ASN.1 
standards. Many ASN.1 toolkits exist, both free and commercial, and 
implementers may also choose to implement their own BER decoder by hand, if 
they so wish.  In any case, implementers are entirely responsible for the 
quality, standard-compliance, and robustness of their products.

The Oulu test report discovered defects in many SNMP 
implementations.  These defects are situated both at the SNMP protocol 
application level and at the BER decoder level.  Blaming the ASN.1 and BER 
standards for a defect in the implementation of the decoder is as 
inappropriate as blaming a protocol standard for any defects in the 
implementation of the protocol.

In other words, it is not appropriate to draw from the Oulu report the 
conclusion that the ASN.1 and BER standard may be, in some way, responsible 
for some intrinsic vulnerability of a BER decoder implementation.  Any 
existing system vulnerability is the consequence of a non-compliant (or not 
robust enough) implementation, either of the protocol or of the 
encoder/decoder.

As a further consideration, I believe it is totally inappropriate to deduce 
that a security threat exists in other protocols that are defined in 
ASN.1.  A security threat may exist in protocol implementations, of course, 
if the implementations have not been built correctly and have not been 
sufficiently tested against all the possible attacks.
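
To make the letter's distinction concrete, here is a sketch (mine, not the 
correspondent's, and illustrative only) of the kind of decoder-level check 
whose absence the Oulu tests exposed: a BER length-field parser that 
validates the claimed length against the actual input instead of trusting it.

  def ber_length(buf: bytes, i: int):
      """Decode the BER length field at buf[i]; return (length, next_index)."""
      first = buf[i]
      if first < 0x80:                          # short form: one octet
          return first, i + 1
      n = first & 0x7F                          # long form: next n octets
      if n == 0 or len(buf) < i + 1 + n:
          raise ValueError("indefinite or truncated length not handled")
      length = int.from_bytes(buf[i + 1:i + 1 + n], "big")
      if length > len(buf) - (i + 1 + n):       # the check careless decoders omit
          raise ValueError("claimed length exceeds remaining input")
      return length, i + 1 + n

A decoder that skips that final check will read or allocate past the end 
of its input on a hostile packet -- an implementation bug, exactly as the 
letter says, not a flaw in BER itself.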



From: Anonymous
Subject: SNMP vulnerabilities

I was part of the official communications between CERT and an affected 
vendor, so there is much that is part of that confidential exchange I 
cannot go into.  However I did write to CERT expressing concern at the 
"further delays to this already long overdue advisory."

CERT has a published disclosure policy:  "All vulnerabilities reported to 
the CERT/CC will be disclosed to the public 45 days after the initial 
report, regardless of the existence or availability of patches or
workarounds from affected vendors. Extenuating circumstances, such as 
active exploitation, threats of an especially serious (or trivial) nature, 
or situations that require changes to an established standard may result in 
earlier or later disclosure."

The only exception being: "Threats that require 'hard' changes (changes to 
standards, changes to core operating system components) will cause us to 
extend our publication schedule."

From what I saw at the time, the SNMP issues did not involve a change to a
standard or a change to a core operating system component and therefore 
should not have received any special treatment.

I did ask CERT for a detailed response as part of the public announcement 
outlining why the delays were necessary and the consequences to their 
published disclosure policy.  If you are contacting CERT for a comment I'd 
be asking them that question again and looking at the numerous leaks that 
occurred before the official release.



From: "Bernd Kreimeier" <bk@oddworld.com>
Subject: an overlooked aspect of SSSCA/CBDTPA

I believe that the implications of the CBDTPA reach far beyond practicality 
or fair use, and take the inherent conflict between the first amendment and 
copyright to a new level which is unacceptable for a free society. This is 
not just a consumer rights issue; it is a question of civil liberties and 
institutional safeguards.

This initiative mandates a technology that, once installed, will permit 
ubiquitous surveillance of information transfer, and is fundamentally at 
odds with encrypted transmissions.

From the point of view of constitutional safeguards, a case can be made
that permitting (much more so demanding) such a technology is in direct 
conflict with a framework that spawned such (to a European's eye, extreme)
measures as the second amendment's "right to keep and bear arms," to 
protect the people against a perceived possibility of a totalitarian regime 
of domestic or foreign origin.

In the late 70's, a German professor of law, Alexander Rossnagel, made a 
case against a nuclear fuel cycle based on "breeding" weapon-grade
plutonium, on grounds that the security requirements of large scale use of 
such a technology, and the resulting legal and institutional changes for 
law enforcement (in particular because of the risk-driven shift to incident 
prevention), were incompatible with the foundations of a free society.

I believe a similar case can be made with respect to the SSSCA/CBDTPA, and 
the positions publicly taken by the bill's sponsors and supporters (e.g. 
with respect to intercepting "legacy piracy").

In particular, it is once again the shift to prevention that is a clear 
indicator of an extension of the concept of law enforcement beyond what can 
be accepted in a democratic society.




From: Scott Tousley <Scott.Tousley@anser.org>
Subject: Full-Disclosure Essay

Re the "Nice essay on the full disclosure debate":
<http://www.infosecnews.com/opinion/2002/02/27_02.htm>.

I think you fail to point out the hidden motives that might be behind a 
vote for full disclosure from the legal community.  The author states that 
"the best defense to legal action in the IT product security setting is
proof that the manufacturer followed security best practices."  But 
corporate and/or provider practices can and will be argued by trial lawyers 
at great expense, because nothing can be perfectly implemented, least of 
all ill-defined best practices.

Also, I note the author's convenient linkage of attacker and 
provider/source ("If a hack attack, by its nature unsolicited and unwanted, 
can lead to liability for the problem being found against the ISP in 
question -- where does the willful non disclosure of a potential flaw leave 
a Microsoft or an AOL?"), and it seems to me again a trial lawyer's 
attitude where someone must be found legally and hence financially 
responsible (else where is a hard working lawyer to find a return on his 
legal business?).

Finally, "There are always practical limits to the baring of the corporate 
soul when an error is identified, but reticence that crosses the border 
into the realm of withholding information that would assist consumers in 
protecting themselves, is a compounded sin. At the very least, reticence 
illustrates a measure of corporate disdain for the consuming 
public."  Fancy words, but again, I see the motivation as mostly if things 
are kept quiet, it is that much harder for the legal business to grow this 
new liability business area.



From: Gwendolynn ferch Elydyr <gwen@reptiles.org>
Subject: Re: CRYPTO-GRAM, March 15, 2002

 >Hacking is judo: using network software to do things
 >it was never intended to do.

Being completely pedantic, Aikido would probably be a better comparison 
than Judo.  Although they both involve redirection of force, Aikido has a 
much greater focus on redirecting your opponent (and then getting away) 
than Judo, which tends to result in both parties rolling around on the 
floor wrestling.



From: Stefan Lucks <lucks@weisskugel.informatik.uni-mannheim.de>
Subject: Re: Liability and Security

I enjoyed reading the essay, and I agree with most of it. Enforcing 
liability can be a crucial point to improve computer and network 
security.  However, I find another crucial point missing: responsibility.

Large organizations may be able to hire the computer security experts they 
need for a sound risk management and a responsible security policy.  But 
this does not scale well for small organizations and individuals.

Without the ordinary user being able to make reasonable decisions about 
risk avoidance and risk acceptance, computers and networks are 
insecure.  This ability is what I mean by responsibility.

Liability depends on responsibility.  (This is an oversimplification. Think 
of it as a general principle.)  If you sell your house, and sometime later 
a tiny asteroid hits the earth, smashes the house and kills the buyer, you 
won't be held liable.  (I am not a lawyer, but would hope so! ;-))  You did
not know about the asteroid, you could not have known about it, and you 
would not have been able to protect the house against the asteroid.

In the physical world, people understand basic security issues, such as how
to lock a door, and what it means to lock it or to leave it open.  I am 
free to leave my home without locking the door, and I know about the risk 
of doing so.  (If someone breaks in, my insurance won't pay.)

In the digital world, things are quite different today.  How many ordinary 
users understand simple computer security issues, such as the following?

-   When opening a Word document from someone else, one may actually execute 
some potentially malicious code (a Word macro virus or worm such as 
"Melissa").
-   The e-mail from "my.girlfriend.net" may actually be sent by an adversary, 
such as "my.girlfriends.exlover.com".
-   When you send a Word document you have written, you may reveal much more 
to the receiver than you can see yourself when you print the 
document.  WYSIWYG (What You See Is What You Get)?  Nonsense, you may get 
much more than what is shown to you by default, if you know where to 
look.  (A Word document may contain the entire history of its changes.)

So the problems are:
1.  Bad user interfaces (from a security point of view), and
2.  Users not understanding security issues.

Thus, we need improved user interfaces, but the users will also have to
gain some knowledge about computer and Internet security. This is not much 
different from people learning the concept of a "key" and of "house 
security" (which happened in Europe at the time of the industrial 
revolution, I believe) when this was new.

Liability may force software producers to improve their user interface, but 
this is only the first step, and the users will have to do the second.


** *** ***** ******* *********** *************


CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, 
insights, and commentaries on computer security and cryptography.  Back 
issues are available on <http://www.counterpane.com/crypto-gram.html>.

To subscribe, visit <http://www.counterpane.com/crypto-gram.html> or send a 
blank message to crypto-gram-subscribe@chaparraltree.com.  To unsubscribe, 
visit <http://www.counterpane.com/unsubform.html>.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will 
find it valuable.  Permission is granted to reprint CRYPTO-GRAM, as long as 
it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier.  Schneier is founder and CTO of 
Counterpane Internet Security Inc., the author of "Secrets and Lies" and 
"Applied Cryptography," and an inventor of the Blowfish, Twofish, and 
Yarrow algorithms.  He is a member of the Advisory Board of the Electronic 
Privacy Information Center (EPIC).  He is a frequent writer and lecturer on 
computer security and cryptography.

Counterpane Internet Security, Inc. is the world leader in Managed Security 
Monitoring.  Counterpane's expert security analysts protect networks for 
Fortune 1000 companies world-wide.

<http://www.counterpane.com/>

Copyright (c) 2002 by Counterpane Internet Security, Inc.