Schmidt
6/14/2011 12:45:00 PM
"Mayayana" <mayayana@invalid.nospam> schrieb im Newsbeitrag
news:it6l6b$oie$1@dont-email.me...
> | > the events could be increased in CSS and interactivity
> | > could be decreased, they could meet in a happy, safe,
> | > script-free middle. But as it stands now, I don't see any
> | > great promise in HTML5.
> | Could you give an example for such an Event - and
> | where you'd use it in favour of one or two JS-commands?
> |
> The "pseudo classes". I don't know what's available at
> this point. The only one I use and know of is :hover. It's
> not called an event, but that's how it functions. It allows one
> to set styles for when the mouse hovers over an element...
Ah, yes ... but adding more of this kind of "interactive
CSS behaviour" would make the CSS parser more complex,
and more vulnerable.
Future attacks might then not target the browser's
JS parser - but attack the CSS parser instead (with
"weird parameters").
We could then well end up in a situation where the
recommendation is to disable CSS when surfing the web -
and to use JS instead, because it is "more secure in the
meantime"... ;-)
[risks when going online]
> I realize there's a range of opinion on these issues. Most
> people who use script are like you -- they "doth protest
> too much" that script is not something to worry about.
It is not something *not* to worry about...
But it is needed when you want to use websites which
load faster and do more interactive work on the client side
(without stressing the server with additional roundtrips).
The very same sites could be implemented in the
"traditional" way too (without JS, instead performing
a load of ordinary HTTP GET and POST requests to achieve
the required exchange of "interactive DOM elements").
But then perhaps nobody would want to use these sites,
due to (much) lower responsiveness.
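A minimal sketch of the kind of client-side work meant here (the function name and the - deliberately rough - pattern are my own invention, not from any real site): a check that, without JS, would cost a full POST and a re-rendered page.

```javascript
// Hypothetical example: validate a mail address locally,
// sparing the server roundtrip that a JS-free site would need
// just to tell the user "that address looks wrong".
function validateMailAddress(value) {
  // very rough pattern, for illustration only
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value);
}
```

The JS-free alternative is to POST the form and let the server answer with an error page - same result, one extra roundtrip.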
For example, Google could also offer a special GoogleMail
client, compiled into a "conventional executable" and
offered as a "normal setup download".
Potential customers of this service would then exchange
potential JS vulnerabilities for those inherent in
conventionally compiled code.
In both cases you would have to trust the vendor
(of the JS-containing pages - or, in the case of a
conventionally compiled GoogleMail app, the vendor
of the executable you just downloaded).
> But there's a deep disconnect between what people
> are doing online and what they like to think they're
> doing online. The risks and lack of privacy ...
I think we have to draw a line here first, when we talk
about the risks of enabled JS on the one hand - and "lack
of privacy" on the other...
Again, JS does not run on the server side - it runs only
on the client side, within the browser in question.
And on the client side it serves only to spare roundtrips
to the server.
When we talk about "privacy", we usually mean
"private data/information uploaded to a given vendor's
site or online service, for (possibly unwanted) storage
on the server side".
And this task (the upload) can be (and is) accomplished
without any JS, over conventional HTTP GET/POST requests.
And user-behaviour and user-preference tracking can be
(and is) accomplished without any JS too (via cookies).
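To illustrate why tracking needs no JS at all: the server sets a cookie in the HTTP response header and reads it back on every following request. A sketch of the server-side reading step (the cookie names and values here are invented):

```javascript
// Sketch: parse the Cookie request header a browser sends back
// automatically - no script on the page is involved in this.
// Turns "uid=abc123; lang=de" into { uid: "abc123", lang: "de" }.
function parseCookieHeader(header) {
  const jar = {};
  for (const part of header.split(";")) {
    const [name, value] = part.trim().split("=");
    if (name) jar[name] = value;
  }
  return jar;
}
```

(Rough sketch only - real cookie values may contain "=" and need proper decoding.)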
And that's what I meant by "customer problem" ... when
*you* type something into a web form and press the send
button, that's your own responsibility - it's not the JS
which is "entering all that data".
The risk with enabled JS is "only" that malicious sites
inject - or confront the browser's JS parser with - stuff
it chokes on, with the goal of "breaking out" of the browser
process and installing something "trojan-like" on the
operating system. Another goal of malicious JS (aside from
the break-out attempt) is to install "cross-site/cross-frame
event listeners *within* the browser process", to spy on
key events typed in a different page or frame.
But such attacks can be performed not only against
a browser's JS parser - they can also address the CSS
parser - or address the "Flash container", trying to break
out from there - or use prepared pictures, to break out
from within the PNG or JPG decoding routines...
There are a lot of potentially vulnerable points in today's
browsers - the assumption "with disabled JS I'm safe"
is a wrong one.
Thinking twice about which links you click with your main
browser (on your main system) is a better strategy.
Most customers out there also already run a "personal
firewall" - most of these software packages already
include an online filter, which works at the socket level
and scans incoming HTTP packets for "malicious signatures"
before they reach the browser's HTML/CSS/JS parser
routines - that helps a bit, I assume...
All in all, the percentage of bitten users (losing money,
for example, due to spied-out banking passwords or
whatever) still seems to be at a level which is "bearable"
(by the unaffected "broad mass").
> You want to define the risks as a "consumer-side" problem
> rather than a script/interactivity problem. That's a bit like
> saying that a car with faulty brakes is a "death problem"
> and not a mechanical problem.
No, what I want to say is that,
first:
- there's no "interactivity problem" with JS
(an Ajax RPC request is finally performing an HTTP POST or
an HTTP GET in the end - and these "server interactivity
actions" can also be performed over normal "link actions"
in a running browser instance)
and second:
- enabled JS is only one *additional* point that can widen
the potential attack surface of a running browser instance,
but it's not the only point which is potentially attackable.
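The first point above can be sketched in a few lines (the endpoint path and parameter names are invented for illustration): the "Ajax call" and the plain link carry the identical HTTP GET - only the handling of the response differs.

```javascript
// Build the URL for a GET request. The very same URL works as
// <a href="...">  (full page reload, no JS needed)  or as the
// target of an XMLHttpRequest (partial DOM update, no reload).
function buildGetUrl(base, params) {
  const query = Object.keys(params)
    .map(k => encodeURIComponent(k) + "=" + encodeURIComponent(params[k]))
    .join("&");
  return base + "?" + query;
}
```

So the server sees the same "interactivity action" either way; JS only changes how the response is consumed on the client side.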
To come back to "car examples": your recommendation is
to never drive a car which comes with these new "drive-by-
wire" brakes (triggered by an electric signal when you
push the pedal) - since the microprocessor could fail,
or a wire could fail...
But all the cars with this new braking mechanism also
support ABS (an advantage), which one would give up
if the recommendation were followed to drive only cars
which "decelerate normally", via a "purely mechanical
solution".
Hmm, now it is not unknown that even before the introduction
of the "electronic brake" there were car accidents due to
failing brakes (leaking hydraulics, failing mechanics).
A good part of those accidents would perhaps have been
avoidable if normal maintenance intervals had been met
(customer responsibility).
And of course there were/are also accidents due to failing
"electronic brakes" - and as above ... there's customer
responsibility to meet the maintenance intervals, where
the car's electronic components are checked - maybe
exchanged or updated with new firmware.
And yes, these new brakes add a new level of complexity
on top of "already proven stuff" (the mechanical and
hydraulic parts of a brake) - but as said, the newly added
layer is necessary for "additional advantages" nobody wants
to miss in a new car (ABS/ESP can help to avoid some
kinds of accidents).
It's the responsibility of the vendor to "harden" this
"additional layer of complexity" over time, to make it
robust and foolproof.
To come back to the browser (vendors) - I think they can
accomplish this goal (hardening the JS layer) ... Google
Chrome is already "nearly there".
But nevertheless, "browsing accidents" will always happen
(even with a near-perfect, hardened JS layer) - especially
when the user of a browser is careless about "where to" and
on which roads he travels (and whether his car is well-
maintained)...
Olaf