comp.lang.ruby

Executing an Oracle stored procedure

Daniel Berger

6/15/2005 4:01:00 PM

Hi all,

Ruby 1.8.2
OCI8

I'm spacing - how do I call an Oracle stored procedure using DBI/OCI8?
I tried a few things, but no luck.

Thanks.

Dan

4 Answers

Berger, Daniel

6/15/2005 4:30:00 PM


Daniel Berger wrote:
> Hi all,
>
> Ruby 1.8.2
> OCI8
>
> I'm spacing - how do I call an Oracle stored procedure using DBI/OCI8?
> I tried a few things, but no luck.
>
> Thanks.
>
> Dan

Never mind - got it.

It's dbh.prepare("begin my_proc; end;")

Don't forget that trailing semicolon.
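A sketch of the full call, assuming the ruby-dbi gem with the OCI8 driver; the connection string, credentials, and the procedure name my_proc are all placeholders. The helper just builds the anonymous PL/SQL block, which makes the semicolon rule easy to see:

```ruby
# Hypothetical helper: builds the anonymous PL/SQL block that gets passed
# to dbh.prepare. Everything Oracle-specific (DSN, credentials, my_proc)
# below is a placeholder, not a tested configuration.
def plsql_call(proc_name, *args)
  placeholders = args.map { "?" }.join(", ")
  body = args.empty? ? proc_name : "#{proc_name}(#{placeholders})"
  "begin #{body}; end;"   # the semicolon after the call is required
end

# With a live connection it would be used roughly like:
#   require 'dbi'
#   dbh = DBI.connect('dbi:OCI8:mydb', 'user', 'password')
#   sth = dbh.prepare(plsql_call('my_proc'))
#   sth.execute
#   sth.finish
```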

Regards,

Dan



Bartc

10/11/2009 9:47:00 AM



"Ted DeLoggio" <tdeloggio@gmail.com> wrote in message
news:M_6dnZqH5PIW20zXRVn_vwA@giganews.com...
> On Sat, 10 Oct 2009 09:48:59 +0000, Albert wrote:
>
>> On pg 45 of K&R, the authors write that:
>>
>> float to int causes truncation of any fractional part.
>>
>> Then shouldn't:
>>
>> #include <stdio.h>
>>
>> int main(void)
>> {
>> float a = 1.0000;
>> int b = a;
>> printf("%d %d\n", a, b);
>> return 0;
>> }
>>
>> give 1 1 as the output instead of 0 and garbage?
>
> The output of this program on my machine is:
>
> 0 1072693248
>
> If float is 32 bits, IEEE 754 specifies 1 sign bit, 8 exponent bits, and
> 23 fraction bits.
>
> The value 1.0000 should be:
>
> sign = 0
> exponent = 127 (0 + bias of 2^(8-1) - 1 = 127)
> fraction = 0 (+1)
>
> Which, if interpreted as an integer would be 1065353216 (3F800000h). The
> output of the above program is 1072693248 (3FF00000h), which, if I
> haven't made an error in my arithmetic, is the floating-point value 1.875.
>
> Can anyone explain this discrepancy? Where did the extra fraction bits
> (0.875) come from?

The float is probably converted to double, so it's 64 bits. A 64-bit double, I
believe, uses an 11-bit exponent. The stored mantissa is 0 because the leading
1 is implied.

--
Bartc

Bartc

10/11/2009 10:07:00 AM


"Keith Thompson" <kst-u@mib.org> wrote in message
news:lnocoe61pa.fsf@nuthaus.mib.org...
> Ted DeLoggio <tdeloggio@gmail.com> writes:

>> The output of this program on my machine is:
>>
>> 0 1072693248
>>
>> If float is 32 bits, IEEE 754 specifies 1 sign bit, 8 exponent bits, and
>> 23 fraction bits.
>>
>> The value 1.0000 should be:
>>
>> sign = 0
>> exponent = 127 (0 + bias of 2^(8-1) - 1 = 127)
>> fraction = 0 (+1)
>>
>> Which, if interpreted as an integer would be 1065353216 (3F800000h). The
>> output of the above program is 1072693248 (3FF00000h), which, if I
>> haven't made an error in my arithmetic, is the floating-point value
>> 1.875.
>>
>> Can anyone explain this discrepancy? Where did the extra fraction bits
>> (0.875) come from?
>
> First off, the behavior is undefined; the language itself has nothing
> to say about the output you're seeing.
>
> The particular results you're seeing on your system, with whatever
> compiler and options you're using and the current phase of the moon,

> are likely to have something to do with the fact that float is
> promoted to double

I didn't see this bit when I gave my own reply. I'd only got as far as the
phase of the moon...

Sometimes it's useful to give a concrete example of why some behaviours
happen, as you did in your excellent reply to Chad earlier in the thread.

The Standard goes on about Undefined Behaviour so much because /it doesn't
know what machine the code is being run on/, so it can't really comment.

On the other hand, a programmer trying to debug a piece of code usually does
know, and learning exactly why it isn't working can be useful for next time.


--
bartc

Joe Wright

10/11/2009 11:58:00 AM


Ted DeLoggio wrote:
> On Sat, 10 Oct 2009 09:48:59 +0000, Albert wrote:
>
>> On pg 45 of K&R, the authors write that:
>>
>> float to int causes truncation of any fractional part.
>>
>> Then shouldn't:
>>
>> #include <stdio.h>
>>
>> int main(void)
>> {
>> float a = 1.0000;
>> int b = a;
>> printf("%d %d\n", a, b);
>> return 0;
>> }
>>
>> give 1 1 as the output instead of 0 and garbage?
>
> The output of this program on my machine is:
>
> 0 1072693248
>
> If float is 32 bits, IEEE 754 specifies 1 sign bit, 8 exponent bits, and
> 23 fraction bits.
>
> The value 1.0000 should be:
>
> sign = 0
> exponent = 127 (0 + bias of 2^(8-1) - 1 = 127)
> fraction = 0 (+1)
>
> Which, if interpreted as an integer would be 1065353216 (3F800000h). The
> output of the above program is 1072693248 (3FF00000h), which, if I
> haven't made an error in my arithmetic, is the floating-point value 1.875.
>
> Can anyone explain this discrepancy? Where did the extra fraction bits
> (0.875) come from?
>
>
>
My notation is slightly different: I assume a binary point to the left of b23,
so the mantissa is always a fraction and the bias is 126.

00111111 10000000 00000000 00000000
Exp = 127 - 126 = 1 (00000001)
Man = .10000000 00000000 00000000
Value = .1 (binary) * 2^1 = 1.00000000e+00

The top row is the raw float. In hex, 0x3F800000.
--
Joe Wright
"If you rob Peter to pay Paul you can depend on the support of Paul."