Bug #2021
FITS error 110 or 114 in GFits::free_members()
- Status: Closed
- Start date: 04/24/2017
- Priority: High
- Assigned To: Knödlseder Jürgen
- % Done: 100%
- Category: -
- Target version: 1.3.0
Description
When using GammaLib in a multi-threaded environment, a GFits::free_members() error occurs from time to time with status codes 110 or 114. This may be related to the multi-threading.
The cfitsio documentation says that the library needs to be configured with

```
./configure --enable-reentrant
```

to support multi-threading. This may be the problem.
Recurrence
No recurrence.
History
#1 Updated by Knödlseder Jürgen over 7 years ago
- Status changed from New to In Progress
- Assigned To set to Knödlseder Jürgen
- Target version set to 1.3.0
- % Done changed from 0 to 90
After putting the saving of the response cubes (as well as the loading) into an OpenMP critical zone the problems disappeared.
#2 Updated by Knödlseder Jürgen over 7 years ago
- % Done changed from 90 to 100
Code is merged into devel. There no longer seem to be any problems, but I will keep the issue open until the processing is complete.
#3 Updated by Knödlseder Jürgen over 7 years ago
- Priority changed from Normal to High
- % Done changed from 100 to 20
- l080b60 - edisp - 100 GeV - 150 TeV (status: 110, 114)
- l080b60 - edisp - 1 - 150 TeV (status: 110, 114)
- l282b68 - cntcube - 30 GeV - 160 TeV (status: 110)
- l000b60 - source cube - 100 GeV - 150 TeV (status: 110, 114)
- l020b60 - bkgcube - 30 GeV - 160 TeV (status: 110, 114)
- l282b68 - cntcube - 100 GeV - 150 TeV (status: 110)
- l040b60 - bkgcube - 1 - 150 TeV (status: 110, 114)
...
This means that the problem is not confined to a specific cube; all cube types are affected.
#4 Updated by Knödlseder Jürgen over 7 years ago
It appears that no space was left on the device, so this may be the actual cause of the errors.
#5 Updated by Knödlseder Jürgen over 7 years ago
- Status changed from In Progress to Closed
- % Done changed from 20 to 100
The problem did not reappear, so I am closing the issue now.