
Kenny

2/9/2006 2:42:00 PM

Hi all,

I have a sales import process that basically does the following:

1 - It reads a file containing sales orders as CSV values.
2 - For each line, it creates a sales order.
3 - For each sales order, a 'finish' process (some calculations) is run, but this
is submitted as a job to the day batch (processed by the day batch server).

So the users start reading files with, for example, 1000 orders in them, and
everything runs fine while creating the orders and creating the batch jobs. From
that point on, the batch server processes all these jobs.
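
Roughly, the import loop looks like this (a simplified X++ sketch; the field
mapping and the queueFinishJob helper are just placeholders for how the finish
job ends up in the day batch):

void importSalesFile(FileName _fileName)
{
    CommaIo     file = new CommaIo(_fileName, 'r');
    container   line;
    SalesTable  salesTable;

    while (file.status() == IO_Status::Ok)
    {
        line = file.read();
        if (!line)
            break;

        // 2 - create a sales order per CSV line (field mapping simplified)
        ttsBegin;
        salesTable.clear();
        salesTable.initValue();
        salesTable.SalesId     = conPeek(line, 1);
        salesTable.CustAccount = conPeek(line, 2);
        salesTable.insert();
        ttsCommit;

        // 3 - the 'finish' calculations are not run here; they are queued
        //     as a job in the day batch for the batch server to pick up
        this.queueFinishJob(salesTable.SalesId);   // placeholder
    }
}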

The problem is that the batch server runs out of memory after a couple
of hundred jobs. I don't understand this, because after each job the
resources should be freed, right?

If anyone has an idea, I would very much like to hear about it :)

Thanks in advance,
greets,
Kenny


5 Answers

Luegisdorf

2/9/2006 3:58:00 PM

Hi Kenny

The only Axapta memory leak I know about occurs if you use a command like "table =
otherTable.orig()" when 'otherTable' wasn't selected beforehand (see the
discussion with the subject "This.orig gives out of memory" for more
information).
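
In other words, a pattern like this (sketch only):

SalesTable salesTable;    // buffer that was never selected from the database
SalesTable origBuffer;

origBuffer = salesTable.orig();   // calling orig() on a non-selected buffer is
                                  // the situation reported to leak memory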

The other thing (though this concerns the SQL Server and not Axapta
itself): did you execute all the jobs bound in one transaction (tts), or is
every job its own transaction? The more update, delete and insert commands
there are in a transaction, the more resources the DB server needs.
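
Schematically, the difference is (sketch; the finish calculations are only hinted at):

static void transactionScopeSketch(Args _args)
{
    SalesTable salesTable;

    // a) every job in its own transaction - the DB server can release
    //    undo and lock resources after each commit
    while select salesTable
    {
        ttsBegin;
        // ... 'finish' calculations for this order ...
        ttsCommit;
    }

    // b) all jobs bound into a single transaction - the resources for every
    //    insert, update and delete pile up until the final commit
    ttsBegin;
    while select salesTable
    {
        // ... 'finish' calculations for this order ...
    }
    ttsCommit;
}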

Maybe that helps a little bit.

Best regards
Patrick

"Kenny" wrote:

> Hi all,
>
> I have a sales importing process and it basically does the following :
>
> 1 - Read a file that contains sales order in csv values.
> 2 - Per line, it creates a salesorder.
> 3 - Per salesorder, a 'finish' process is done (some calculations) but this
> is put as a batch job in the day batch (processed by the day batch server)
>
> So they start reading files with for example 1000 orders in it. and
> everything runs fine when creating the orders, creating the batch jobs. And
> from that point, the batch server processes all these jobs.
>
> The problem now is that the batch server runs out of memory after a couple
> of hundred jobs. I don't understand this because after each job, the
> resources are freed right??
>
> if anyone has an idea, I would very much like to hear about it :)
>
> thx in advance,
> greets,
> Kenny
>
>

Bertrand Caillet

2/9/2006 3:58:00 PM

Did you make sure your code follows Best Practice?
--> Catch the Exception::Error with ttsAbort

And verify that you commit (ttsCommit) after each job, so that you actually
free the memory.
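
A per-job sketch of that pattern (the finish calculations are only hinted at):

void runOneFinishJob(SalesId _salesId)
{
    try
    {
        ttsBegin;
        // ... 'finish' calculations for this one sales order ...
        ttsCommit;   // commit per job, so nothing is held across jobs
    }
    catch (Exception::Error)
    {
        // when an error leaves the tts block, the kernel has already rolled
        // the transaction back; just log it and carry on with the next job
        error(strFmt("Finish failed for sales order %1", _salesId));
    }
}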


--
Bertrand Caillet (MBS)


"Kenny" wrote:

> Hi all,
>
> I have a sales importing process and it basically does the following :
>
> 1 - Read a file that contains sales order in csv values.
> 2 - Per line, it creates a salesorder.
> 3 - Per salesorder, a 'finish' process is done (some calculations) but this
> is put as a batch job in the day batch (processed by the day batch server)
>
> So they start reading files with for example 1000 orders in it. and
> everything runs fine when creating the orders, creating the batch jobs. And
> from that point, the batch server processes all these jobs.
>
> The problem now is that the batch server runs out of memory after a couple
> of hundred jobs. I don't understand this because after each job, the
> resources are freed right??
>
> if anyone has an idea, I would very much like to hear about it :)
>
> thx in advance,
> greets,
> Kenny
>
>

Kenny

2/9/2006 4:20:00 PM

Hi Luegisdorf,

I don't use an .orig() statement in my code.
Every time a batch job is run, there is a ttsBegin and a ttsCommit, so there
are a lot of transactions. But it is not the DB that crashes, it's the AOS :|

greets,



"Luegisdorf" wrote:

> Hi Kenny
>
> The only Axapta memory leak I know about is if you use command like "table =
> otherTable.orig()" and the 'otherTable' wasn't selected before (look at
> discussion with the subject "This.orig gives out of memory" for more
> information).
>
> The other problem (but this is concerning the SQL-Server and not Axapta
> itself); did you execute all jobs bouded in one transaction (tts), or is
> every job a single transaction? As more update, delete and insert-commands
> are in the transactions as more resources are needed by the DB-Server.
>
> May be that helps a little bit.
>
> Best regards
> Patrick
>
> "Kenny" wrote:
>
> > Hi all,
> >
> > I have a sales importing process and it basically does the following :
> >
> > 1 - Read a file that contains sales order in csv values.
> > 2 - Per line, it creates a salesorder.
> > 3 - Per salesorder, a 'finish' process is done (some calculations) but this
> > is put as a batch job in the day batch (processed by the day batch server)
> >
> > So they start reading files with for example 1000 orders in it. and
> > everything runs fine when creating the orders, creating the batch jobs. And
> > from that point, the batch server processes all these jobs.
> >
> > The problem now is that the batch server runs out of memory after a couple
> > of hundred jobs. I don't understand this because after each job, the
> > resources are freed right??
> >
> > if anyone has an idea, I would very much like to hear about it :)
> >
> > thx in advance,
> > greets,
> > Kenny
> >
> >

Luegisdorf

2/10/2006 7:31:00 AM

So every job has its own transaction, but memory usage still grows
after a couple of jobs? Sounds very mysterious. Did you use the standard
'batch' function in Axapta to execute your import job, or have you built your
own 'batch holder' (I guess the standard batch functionality)?

And another question I'm interested in: does only the client that is
executing the batch jobs crash, or does the AOS service really abort? I ask
because you are writing about a 'batch server'.

All in all it sounds very strange. I suggest writing a log file while
executing your jobs, recording the current memory usage (maybe before and after
every job), to find out whether the memory really stays high over several jobs
(as you suggest) or whether just one job causes the trouble (maybe a very large
file with more than 10000 orders).
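
Something like this could write the marks (the path is just an example; the
memory figure itself you would take from the AOS process in Task Manager or
perfmon and correlate by time and job number):

void logJobMark(str _stage, int _jobNo)
{
    AsciiIo log = new AsciiIo('c:\\temp\\batchmemory.log', 'a');   // append mode

    // one line per mark: date;time;before/after;job number
    log.write(strFmt('%1;%2;%3;%4', systemDateGet(), timeNow(), _stage, _jobNo));
}

// call logJobMark('before', jobNo) and logJobMark('after', jobNo) around every job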

If only one job is the problem, there are some nice ideas for solving it
(which I would post if you can confirm that this is the problem ...).

Let me know if you can get some analysis of memory usage over several jobs.

See you later ..
Patrick



"Kenny" wrote:

> Hi luegisdorf,
>
> I don't use an .orig() statement in my code.
> Every time a batch job is ran, there is a ttsBegin and a ttsCommit. so there
> are a lot of transactions but it is not the db, but the aos which crashes :|
>
> greets,
>
>
>
> "Luegisdorf" wrote:
>
> > Hi Kenny
> >
> > The only Axapta memory leak I know about is if you use command like "table =
> > otherTable.orig()" and the 'otherTable' wasn't selected before (look at
> > discussion with the subject "This.orig gives out of memory" for more
> > information).
> >
> > The other problem (but this is concerning the SQL-Server and not Axapta
> > itself); did you execute all jobs bouded in one transaction (tts), or is
> > every job a single transaction? As more update, delete and insert-commands
> > are in the transactions as more resources are needed by the DB-Server.
> >
> > May be that helps a little bit.
> >
> > Best regards
> > Patrick
> >
> > "Kenny" wrote:
> >
> > > Hi all,
> > >
> > > I have a sales importing process and it basically does the following :
> > >
> > > 1 - Read a file that contains sales order in csv values.
> > > 2 - Per line, it creates a salesorder.
> > > 3 - Per salesorder, a 'finish' process is done (some calculations) but this
> > > is put as a batch job in the day batch (processed by the day batch server)
> > >
> > > So they start reading files with for example 1000 orders in it. and
> > > everything runs fine when creating the orders, creating the batch jobs. And
> > > from that point, the batch server processes all these jobs.
> > >
> > > The problem now is that the batch server runs out of memory after a couple
> > > of hundred jobs. I don't understand this because after each job, the
> > > resources are freed right??
> > >
> > > if anyone has an idea, I would very much like to hear about it :)
> > >
> > > thx in advance,
> > > greets,
> > > Kenny
> > >
> > >

Anton Venter

2/15/2006 9:04:00 AM

Is it the famous 'Out of memory' error message occurring around 2 GB of
RAM usage? If it is, there is no real solution, but there are ways around
it. Try to restructure the code by removing unnecessary queries, object
creation, etc. Make sure that objects go out of scope during the iterations.
In short, make the code lean and mean. You could also break the
operation up into separate blocks.
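
One way to arrange that is to keep the heavy per-order work in its own method,
so whatever it creates can be reclaimed after each call (a sketch; the class and
method names are only illustrative):

void run()
{
    SalesTable salesTable;

    while select salesTable
    {
        // everything created inside processOneOrder() goes out of scope
        // when the call returns, instead of living for the whole run
        this.processOneOrder(salesTable);
    }
}

void processOneOrder(SalesTable _salesTable)
{
    Map calcBuffer = new Map(Types::String, Types::Real);   // example helper object

    // ... 'finish' calculations for this one order ...

    calcBuffer = null;   // drop the reference explicitly as well
}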

The problem is that during the iterations in the code, memory is allocated
on the heap by the kernel (using a third-party library) but never actually
released. That explains the memory growth during long operations: when
there is no more memory available, you get the 'Out of memory' error.

"Kenny" <Kenny@discussions.microsoft.com> wrote in message
news:6E6F3B20-08AD-42FD-ADC0-592C4FB4B4C4@microsoft.com...
> Hi all,
>
> I have a sales importing process and it basically does the following :
>
> 1 - Read a file that contains sales order in csv values.
> 2 - Per line, it creates a salesorder.
> 3 - Per salesorder, a 'finish' process is done (some calculations) but
> this
> is put as a batch job in the day batch (processed by the day batch server)
>
> So they start reading files with for example 1000 orders in it. and
> everything runs fine when creating the orders, creating the batch jobs.
> And
> from that point, the batch server processes all these jobs.
>
> The problem now is that the batch server runs out of memory after a couple
> of hundred jobs. I don't understand this because after each job, the
> resources are freed right??
>
> if anyone has an idea, I would very much like to hear about it :)
>
> thx in advance,
> greets,
> Kenny
>
>