This document describes the authorized program analysis reports (APARs) resolved in IBM Storage Scale 5.1.9.x releases.
This document was last updated on 10th June, 2025.
APAR | Severity | Description | Resolved in | Feature Tags
---|---|---|---|---
IJ54197 | High Importance | tsapolicy calls flushBuf() each time a chosen record is written when a policy list operation is performed. This may result in additional processing time. This problem does not occur if "-I prepare" or "-I test" is specified. (show details) | 5.1.9.10 | mmapplypolicy |
IJ54322 | Suggested | Since IBM Storage Scale 5.1.7.0, tsapolicy sends all debug information to the trace file. But if the cluster level is older than 5.1.7.0, one of the debug lines is written to the command output unconditionally. There is no functional impact. (show details) | 5.1.9.10 | mmapplypolicy |
IJ54425 | High Importance | When a Gateway node fails and comes back up, there will be a recovery attempt on almost all filesets that the Gateway node used to handle. Although the number of recoveries is capped by the afmMaxParallelRecoveries tunable, there is still a chance that all filesets attempt a mount first, flooding the Gateway node with mount requests. (show details) | 5.1.9.10 | AFM |
IJ54494 | High Importance | BUG: unable to handle kernel NULL pointer dereference at 0000000000000080 (show details) | 5.1.9.10 | GPFS Core |
IJ53557 | High Importance | GPFS asserted due to unexpected hold count on events exporter object during destructor. (show details) | 5.1.9.10 | All Scale Users |
IJ54780 | Critical | Node failure could lead to an unexpected log recovery failure that would require offline mmfsck to repair. This could happen on a file system with replication and snapshots. (show details) | 5.1.9.10 | Snapshots |
IJ54781 | High Importance | GPFS daemon could fail unexpectedly with logAssertFailed: firstValidOffset >= blockOffset && lastValidOffset > firstValidOffset. This could occur on a file system with HAWC enabled. (show details) | 5.1.9.10 | HAWC |
IJ54782 | Critical | When accessing an mmapped file concurrently, even from internal GPFS operations like restripe, the required flush of the mmap data can be incomplete, resulting in data from previous mmap writes not being written to disk. (show details) | 5.1.9.10 | All Scale Users |
IJ53214 | High Importance | With FAL and NFS Ganesha enabled, running workloads against a path in an NFS export for long periods of time could result in NFS client IPs not being logged in the audit log. (show details) | 5.1.9.10 | File Audit Logging, NFS |
IJ54629 | High Importance | mmrestorefs recreates all files and directories that were deleted after the snapshot was taken. If the deleted file is a special file, mmrestorefs uses the mknod() system call to create the file. But mknod() cannot create a socket file on AIX. Hence, if socket files were deleted after the snapshot was taken, mmrestorefs on AIX will fail while re-creating the socket file. (show details) | 5.1.9.10 | mmrestorefs |
IJ54874 | High Importance | An assertion goes off when AFM attempts to check whether the parent and child are local and tries to restrict replication for such entities. (show details) | 5.1.9.10 | AFM |
IJ54878 | High Importance | If the dependent fileset is created as a non-root user and linked, then the uid/gid are not replicated for the dependent fileset to the remote site. (show details) | 5.1.9.10 | AFM |
IJ54879 | Suggested | When capturing traces in blocking mode, the kernel error message "BUG: scheduling while atomic" can be triggered. This could also lead to a deadlocked node, requiring a reboot. (show details) | 5.1.9.10 | All Scale Users |
IJ54880 | Suggested | mmafmconfig enable command on AFM primary mode fileset fails (show details) | 5.1.9.10 | AFM |
IJ54783 | High Importance | When trying to install Storage Scale on Windows with the latest Cygwin version (3.6.1), the installation can fail due to security issues. (show details) | 5.1.9.10 | Install/Upgrade |
IJ54962 | High Importance | Snapshots are not listed under the .snapshots directory when AFM is enabled on the file system (show details) | 5.1.9.10 | AFM |
IJ54963 | High Importance | Symlinks are appended with a null character, which causes the pwd -P command to fail to resolve the real path. (show details) | 5.1.9.10 | AFM |
IJ54964 | High Importance | Ganesha crashes during blocked lock request processing. The thread stack in Ganesha during a crash may resemble the following examples: Crash-1: raise crash_handler raise abort _nl_load_domain.cold.0 lock_entry_dec_ref process_blocked_lock_upcall state_blocked_lock_caller fridgethr_start_routine start_thread __clone Crash-2: raise abort state_hdl_cleanup mdcache_lru_clean _mdcache_lru_unref mdcache_put_ref lock_entry_dec_ref remove_from_locklist try_to_grant_lock process_blocked_lock_upcall state_blocked_lock_caller fridgethr_start_routine start_thread __clone (show details) | 5.1.9.10 | NFS |
IJ54965 | High Importance | NFSV4 ACLs are not replicated with AFM fileset level options afmSyncNFSV4ACL and afmNFSV4 (show details) | 5.1.9.10 | AFM |
IJ54966 | High Importance | Kernel crash with SELinux enabled (show details) | 5.1.9.10 | Scale core |
IJ54967 | High Importance | Crash during cxiStrcpy in setSecurityXattr (show details) | 5.1.9.10 | Scale core |
IJ54968 | High Importance | Opening a new file with O_RDWR|O_CREAT fails with EINVAL. (show details) | 5.1.9.10 | Scale Core |
IJ54969 | High Importance | Kernel panic: general protection fault / ovl_dentry_revalidate_common / mmfsd, OR running lsof /proc on a node crashes the node (show details) | 5.1.9.10 | Scale core |
IJ54978 | High Importance | There is an issue where the .ptrash directory's local bit gets reset. As a result, some operations performed during recovery in the trash directory get requeued to the remote site, causing the queue to be dropped in a repeated loop. (show details) | 5.1.9.10 | AFM |
IJ54979 | High Importance | With afmFastCreate enabled, if the Create that tries to push the initial chunk of data fails to complete and gets requeued, the requeued Create replays all data when it retries. Later, Write messages starting from the offset where the Create initially went inflight are also played, totaling almost twice the file size in replicated data. (show details) | 5.1.9.10 | AFM |
IJ54593 | High Importance | During token minimization, a deadlock can occur on a client node. With token minimization, a client node is first asked to give up any tokens that are only for cached files. Without the fix, calling this codepath for files that have been deleted could result in a deadlock. (show details) | 5.1.9.10 | All Scale Users |
IJ53743 | High Importance | mmvdisk will not generate the NSD stanza properly when the failure-groups option is used, causing no thin inodes to be created. (show details) | 5.1.9.9 | GNR |
IJ53744 | Medium Importance | Kernel panic and crash when Scale is used as an overlay filesystem. The failure message is like the following: Failure at line 31071 in file mmfs/ts/kernext/gpfsops.C rc 0 reason 0 data (refcount != 0) (show details) | 5.1.9.9 | Scale core |
IJ51961 | Suggested | Inside the GPFS daemon, the variable that represents the number of allocations is an integer type, which could overflow on a large system. As a result, some statistics are shown as negative. (show details) | 5.1.9.9 | Admin Commands (mmdiag and mmfsadm) |
IJ53784 | High Importance | There is a race condition that involves multiple threads performing a full-track read operation to the same track while disk errors exist. When the configuration parameter nsdRAIDClientOnlyChecksum is enabled, this race condition could create a situation where, without going through the checksum validation, data read from disks could be used for the reconstruction of data that failed to read due to disk errors. (show details) | 5.1.9.9 | GNR |
IJ53600 | Suggested | A Linux kernel change caused GPFS to break disk I/O into many small requests. (show details) | 5.1.9.9 | All Scale Users |
IJ53548 | Suggested | Attempting to set a timestamp in GPFS to a time before Jan 1 1970 results in an unexpected timestamp being stored. GPFS currently stores timestamps as a 32-bit unsigned integer, and thus can store timestamps from Jan 1 1970 00:00:00 UTC to Feb 7 2106 06:28:15 UTC (see the first sketch after this table). Setting a timestamp before 1970 was silently accepted. (show details) | 5.1.9.9 | All Scale Users |
IJ53910 | Critical | Unexpected long waiters on PrefetchListMutex could cause user applications to hang. This could cause cluster wide performance degradation. (show details) | 5.1.9.9 | All Scale Users |
IJ53911 | High Importance | [AFM] Resync is not able to sync xattrs from evicted files, resulting in an ACL mismatch if the AFM control file is created after an ACL change in the cache. (show details) | 5.1.9.9 | AFM |
IJ53912 | Critical | Online mmfsckx could report false critical corruption (duplicate reference) (show details) | 5.1.9.9 | All Scale Users |
IJ53724 | High Importance | Automatic inode expansion for an inode space can get disabled. (show details) | 5.1.9.9 | File creation/Inode allocation. |
IJ53723 | Critical | Under certain conditions when the SED discovery command fails for a pdisk with an EIO error, the hardware type information is not set correctly in the pdisk, which results in an SSD disk showing as HDD. (show details) | 5.1.9.9 | GNR |
IJ54002 | High Importance | When adjusting the number of prefetch threads in a partition, an unexpected long waiter on PrefetchListMutex could be triggered, leading to other long waiters and application hangs. (show details) | 5.1.9.9 | All Scale Users |
IJ54021 | Medium Importance | AFM replication fails in IW mode if the remote mount hangs as the messages are incorrectly dropped. (show details) | 5.1.9.9 | AFM |
IJ54022 | High Importance | Client or CES node crashes when afmFastLookup is enabled due to invalid fileset pointer access (show details) | 5.1.9.9 | AFM |
IJ54023 | High Importance | AFM outband prefetch causes deadlock due to incorrect handling of async messages while creating the files/dirs in the cache. (show details) | 5.1.9.9 | AFM |
IJ54024 | High Importance | AFM resync/changeSecondary commands hang as they try to send lookup on the local .ptrash directory (show details) | 5.1.9.9 | AFM |
IJ54025 | High Importance | Daemon asserts with Assert exp(oldValue == 0)* (show details) | 5.1.9.9 | AFM |
IJ54026 | High Importance | There is a race between the Local SG Panic Handling Thread and msgMgrThreadBody on the handlerList. This causes a deadlock in deciding who should delete and free the Filesystem. (show details) | 5.1.9.9 | AFM |
IJ54004 | High Importance | The file system encryption functionality requires the CA certificates to be compliant with the RFC 5280 specification, which requires that CA certificates' basicConstraints are marked as critical. Consequently, Storage Scale does not allow the use of CA certificates that don't have basicConstraints marked as critical (see the certificate-check sketch after this table). (show details) | 5.1.9.9 | GPFS Core |
IJ53563 | High Importance | With simplified setup for file system encryption, when the KMIP client and server certificates are signed by the CA certificate chains that have common certificates (e.g., same CA root and possibly intermediate certificates), mmgskkm command returns an error. That error forces the calling "mmkeyserv client create" command to fail. (show details) | 5.1.9.9 | GPFS Core |
IJ53647 | High Importance | In a two-node-plus-tiebreaker cluster using server-based cluster configuration, when one of the nodes is powered off and the other node tries to run an election and opens the tiebreaker disk, it calls Disk::devOpen(), which has a side effect of retrieving the WWN from the device. This WWN retrieval logic checks the disk lease before sending the SCSI request, hitting a deadlock there. With CCR configuration, when the node goes through an election and tries to access the tiebreaker disk, it invokes OpenDevice() from CCR directly, and therefore doesn't hit the problem. Removing the call of wwnFromDevice() from Disk::devOpen() eliminates this deadlock. (show details) | 5.1.9.9 | Cluster configuration |
IJ54063 | High Importance | An application like the SMB server may invoke the gpfs_stat_x() call (available in libgpfs.so) to retrieve stat information for a file. Such a call implements "statlite" semantics, meaning that the size information is not assured to be the latest. Other applications which invoke standard stat()/fstat() calls do expect the size information to be up to date. However, due to a problem in the logic, after gpfs_stat_x() is invoked, information is cached inside the kernel, and the cache is not purged even when other nodes change the file size (for example by appending data to it). The result is that stat() invoked on the node may still retrieve out-of-date file size information as other nodes write into the file. (show details) | 5.1.9.9 | All Scale Users |
IJ53693 | HIPER | (show details) | 5.1.9.9 | All Scale Users |
IJ54106 | Suggested | Added support for NFS over RDMA (RoCE) while using AFM (show details) | 5.1.9.9 | AFM |
IJ54107 | High Importance | The .afm directory, where some failure files are redirected today, has a default permission of 700 when created, so only the owning user has full access to it. But there are times when group users and others might want to view the failure list files, and with the file permission also being 700, the read gets denied. (show details) | 5.1.9.9 | AFM |
IJ54108 | High Importance | Kernel panic - not syncing: hung_task: blocked tasks. (show details) | 5.1.9.9 | Scale Core |
IJ53799 | Suggested | pmcollector service can segfault in LogStore::readAndProcess() and the service will restart. There is a data race between two parallel code threads, which was observed when the pmcollector aggregation was running (every 6h) while the node was under load. (show details) | 5.1.9.8 | perfmon (Zimon) |
IJ53136 | High Importance | As a result of an internal race condition, file system operations on encrypted file systems may fail with error 786. The error may be reported by either an mm command or in the error message log, e.g., 2024-10-11_21:53:22.106-0500: [A] Log recovery fs9 failed for log group 204, error 786 (show details) | 5.1.9.8 | GPFS Core |
IJ53151 | High Importance | AFM getOutbandList fails to get the changed files and users may not be able to detect the changes to run the prefetch command later. (show details) | 5.1.9.8 | AFM |
IJ53183 | High Importance | On Gateway node shutdown, the Gateway node forcefully returns EIO to the application node, which promptly passes it on to the application that triggered the Read operation. (show details) | 5.1.9.8 | AFM |
IJ53213 | Suggested | Remove the kernel version dependency for afmNFSNconnect. (show details) | 5.1.9.8 | AFM |
IJ53324 | Critical | In an extremely rare case, a directory entry with a wrong length could be created, leading to a file system panic on the client node and a log recovery failure on the file system manager node. This could eventually lead to the file system being unmounted everywhere. (show details) | 5.1.9.8 | All Scale Users |
IJ53332 | High Importance | The mmbackup command internally communicates with the tsbuhelper process using a formatted string to get the backup result, and the format was changed in Spectrum Scale 5.1.9.0. mmbackup should accept both the old and the new format but fails to handle the old format properly. As a result, the backup count from a node using the old format is not correctly added up. (show details) | 5.1.9.8 | mmbackup |
IJ52584 | Suggested | The sdrServ was not able to initialize due to a hostname resolution failure of the legacy server-based configuration server. This prevents the GPFS daemon from coming up. (show details) | 5.1.9.8 | admin command |
IJ53333 | High Importance | Add an option in mmafmctl to checkDeleted files and dirs which might be hogging the usedInodes count on the fileset. (show details) | 5.1.9.8 | AFM |
IJ53421 | High Importance | Failed to register with GPFS: Bad file descriptor when SMB tries tree connect (show details) | 5.1.9.8 | GPFS core |
IJ53426 | Critical | When a new file system manager takes over after the old file system manager loses quorum, it is possible for the new file system manager to read the stripe group descriptor too early, which can cause stripe group descriptor updates to be lost. (show details) | 5.1.9.8 | All Scale Users |
IJ53420 | High Importance | GPFS daemon could fail unexpectedly with an assert after the file system is unmounted due to a panic. (show details) | 5.1.9.8 | All Scale Users |
IJ53372 | High Importance | GPFS leaks kernel memory every time a user that is a member of more than 32 groups tries to access an inode that denies access to that user through simple modebits (no ACL). This might go unnoticed, but if these conditions occur repeatedly, the kernel memory leak can affect node operations, requiring a reboot to avoid outages. (show details) | 5.1.9.8 | All Scale Users |
IJ53490 | High Importance | The timeout test result is not consistent on AMD EPYC Genoa processors. If the test passes, the GSKit hang workaround will not be applied, which causes problems later. (show details) | 5.1.9.8 | Admin Commands, gskit |
IJ53592 | High Importance | If it is the first or only operation on the list and AFM attempts to queue it through startMarker, the escaped path is used as opposed to the unescaped path, causing a failure to queue the file name in the proper format. (show details) | 5.1.9.8 | AFM |
IJ53593 | High Importance | Logging failures to the failed list file causes a deadlock within the mmafmcosctl binary. (show details) | 5.1.9.8 | AFM |
IJ53594 | High Importance | A fix for the same issue was made earlier, but it only considered returning RESTART between the Gateway node and the application node when the queue is dropped. There can be cases where the Gateway node is being shut down without the queue being in a dropped state. (show details) | 5.1.9.8 | AFM |
IJ53595 | High Importance | The AFM gateway node becomes unresponsive during startup due to numerous filesystem mount requests triggered by active I/O to multiple filesets. (show details) | 5.1.9.8 | AFM |
IJ53186 | High Importance | There is a bug in mmvdisk when processing '--mmcrfs' arguments. It handles two option models: --opt value and --opt=value. An argument like the following will reproduce the issue: "afmMode=RO,afmTarget=gpfs:///gpfs/ssoft_src/". Because the option value of '-p' includes '=', mmvdisk mistakenly splits the value 'afmMode=RO,afmTarget=gpfs:///gpfs/ssoft_src/' into two parts, 'afmMode' and 'RO,afmTarget=gpfs:///gpfs/ssoft_src/'. That is why mmcrfs reports "Incorrect option: -p afmMode" (see the parsing sketch after this table). (show details) | 5.1.9.7 | mmvdisk |
IJ52272 | High Importance | 2024-08-26_04:00:08.796+0100: [X] *** Assert exp(context != unknownOp) in line 6125 of file /project/sprelgpfs519/build/rgpfs519s005i/src/avs/fs/mmfs/ts/fs/openfile.C 2024-08-26_04:00:08.796+0100: [E] *** Traceback: 2024-08-26_04:00:08.796+0100: [E] 2:0x55AE8B58E84A logAssertFailed + 0x3AA at ??:0 2024-08-26_04:00:08.796+0100: [E] 3:0x55AE8B2E093D LockFile(OpenFile**, StripeGroup*, FileUID, OperationLockMode, LkObj::LockModeEnum, LkObj::LockModeEnum*, LkObj::LockModeEnum*, int, int) + 0x69D at ??:0 2024-08-26_04:00:08.796+0100: [E] 4:0x55AE8B22CB2B FSOperation::createLockedFile(StripeGroup*, FileUID, OperationLockMode, LkObj::LockModeEnum, OpenFile**, unsigned int*, int, int) + 0x9B at ??:0 2024-08-26_04:00:08.796+0100: [E] 5:0x55AE8B8D4F4C markAuditLogInactive(StripeGroup*, FileUID) + 0x5C at ??:0 2024-08-26_04:00:08.796+0100: [E] 6:0x55AE8B8DE14C AuditWriter::processRequest(WriteRequest) + 0x3FC at ??:0 2024-08-26_04:00:08.796+0100: [E] 7:0x55AE8B8C5DFE serveWriteRequests(WriteRequest const&, void*) + 0xBE at ??:0 2024-08-26_04:00:08.796+0100: [E] 8:0x55AE8B8C61A0 AuditWriter::callback() + 0x1A0 at ??:0 2024-08-26_04:00:08.796+0100: [E] 9:0x55AE8B036C42 RunQueue::threadBody(RunQueueWorker*) + 0x392 at ??:0 2024-08-26_04:00:08.796+0100: [E] 10:0x55AE8B038EC2 Thread::callBody(Thread*) + 0x42 at ??:0 2024-08-26_04:00:08.796+0100: [E] 11:0x55AE8B025EC0 Thread::callBodyWrapper(Thread*) + 0xA0 at ??:0 2024-08-26_04:00:08.796+0100: [E] 12:0x7FA83AF9A1CA start_thread + 0xEA at ??:0 2024-08-26_04:00:08.796+0100: [E] 13:0x7FA839C9F8D3 __GI___clone + 0x43 at ??:0 mmfsd: /project/sprelgpfs519/build/rgpfs519s005i/src/avs/fs/mmfs/ts/fs/openfile.C:6125: void logAssertFailed(UInt32, const char*, UInt32, Int32, Int32, UInt32, const char*, const char*): Assertion `context != unknownOp' failed. (show details) | 5.1.9.7 | File Audit Logging |
IJ52511 | Suggested | Issuing the undocumented "tsdbfs test patch desc format" command results in mmfsd failures on other nodes. (show details) | 5.1.9.7 | All Scale Users |
IJ52808 | High Importance | ls command hangs on an NFS mounted directory (show details) | 5.1.9.7 | NFS |
IJ52845 | High Importance | When File Audit Logging is enabled, during fileset deletion, the LWE registry configuration is loaded into memory to retrieve the audit fileset name to check whether the fileset to be deleted is an audit fileset. This LWE registry configuration is not freed after retrieving the audit fileset name, leading to old configuration data being used when a new audit producer is configured. (show details) | 5.1.9.7 | File Audit Logging |
IJ52846 | High Importance | File Audit Logging producers not being cleaned up from memory when audit is disabled (show details) | 5.1.9.7 | File Audit Logging |
IJ52847 | High Importance | When a new policy is installed for the audit log, the audit registry config currently in memory is updated to the latest info and the LWE garbage collector (LWE GC) is triggered to clean up defunct producers. There could be a case where the LWE GC finishes first and the registry update second, resulting in stale data being loaded in memory when there are no producers present (when audit is disabled). The next time audit is enabled, the old config data is used to configure a new audit producer. (show details) | 5.1.9.7 | File Audit Logging |
IJ52848 | High Importance | The major problem identified here is whether killInflight issued on the mount is even working. (show details) | 5.1.9.7 | AFM |
IJ52849 | Suggested | Users with NFSv4 WRITE permission to a file can get permission denied when setting file timestamps to the current time (show details) | 5.1.9.7 | All Scale Users |
IJ52850 | High Importance | Some client commands, when invoked in a fast, repetitive sequence, may fail to connect to the mmfsd daemon. (show details) | 5.1.9.7 | GPFS Core |
IJ52851 | High Importance | Deadlock while performing multiple small reads on same file (show details) | 5.1.9.7 | AFM |
IJ52949 | High Importance | A script error in mmcrfileset leads to enabling afmObjectSyncOpenFiles on the RO mode fileset, which promptly fails as expected. (show details) | 5.1.9.7 | AFM |
IJ52950 | High Importance | Cannot mount a file system because it does not have a manager, in a file system with more than 8192 inode spaces. The failure is due to a wrong sanity check on the number of inodes created in the file system. (show details) | 5.1.9.7 | filesets |
IJ52992 | High Importance | This APAR addresses a problem in the NFS health check: when NFS is under load, the rpc null check may fail. The failure is tolerated if the statistics check shows that NFS-Ganesha is still completing NFS rpc requests. In 5.1.9.6 the statistics check is not working, so the rpc null check failure will not be ignored and a CES IP failover is triggered even though NFS-Ganesha is still completing NFS rpc requests. (show details) | 5.1.9.7 | NFS-Ganesha, CES-IP failover |
IJ53044 | High Importance | In Scale 5.2.1 the lowDiskSpace callback is not being triggered when disk space usage has reached the high occupancy threshold that is specified in the current policy rules, breaking applications that use thresholds to migrate data between pools. (show details) | 5.1.9.7 | Block allocation manager code. |
IJ52948 | High Importance | Kernel crash in Scale 5.2.1.1: general protection fault and system crash. The crash happens due to memory corruption after mounting a GPFS filesystem. Sometimes this happens during a filesystem mount and sometimes a little while after. (show details) | 5.1.9.7 | Scale core |
IJ53006 | High Importance | Deadlock hang condition in which InodeRevokeWorkerThread threads will hang, and dumping waiters (e.g. via "mmdiag --waiters") will show: InodeRevokeWorkerThread: for flush mapped pages, VMM iowait (show details) | 5.1.9.7 | GPFS Core (mmap) |
IJ53007 | High Importance | Kernel null pointer exception while running outbound metadata prefetch. (show details) | 5.1.9.7 | AFM |
IJ53034 | High Importance | For unknown reasons (a possible reason is /tmp being full), the update of RKM.conf was not able to obtain the KMIP port from a file. The code does not check for any error and continues to produce the kmipServerUri line with a missing port number. Encrypted files may not be accessible due to this issue. (show details) | 5.1.9.7 | encryption, admin command |
IJ53036 | High Importance | Daemon assert going off: index >= 0 && index < maxTcpConnsPerNodeConn in file llcomm_m.C, resulting in a GPFS daemon shutdown. (show details) | 5.1.9.7 | All Scale Users |
IJ53038 | Medium Importance | Reviewed bug fixes that went into the Ganesha 6 upstream branch and cherry-picked applicable and important fixes to the IBM Ganesha 5.7 branch. Also, utility commands for enabling log rotation have been added. (show details) | 5.1.9.7 | NFS-Ganesha |
IJ53039 | High Importance | Frequent NFS hangs were observed, and an NFS crash was also fixed in this tag update. 1. For crash-related info, please check defect: https://jazz07.rchland.ibm.com:21443/jazz/web/projects/GPFS#action=com.ibm.team.workitem.viewWorkItem&id=337064 2. For the NFS hang: https://jazz07.rchland.ibm.com:21443/jazz/web/projects/GPFS#action=com.ibm.team.workitem.viewWorkItem&id=338989 (show details) | 5.1.9.7 | NFS-Ganesha |
IJ53040 | Medium Importance | When the parsing of the RKM.conf file results in errors, the mmfsd daemon did not log error messages to the Scale message log and did not raise mmhealth rkmconf_instance_err events. This change results in errors encountered during the parsing of individual RKM.conf stanzas being logged in the message log and an mmhealth event being raised. (show details) | 5.1.9.7 | GPFS Core |
IJ53048 | High Importance | Daemon assert goes off when Read doesn't have the right child ID on the queue. (show details) | 5.1.9.7 | AFM |
IJ46422 | Critical | Undetected silent data corruption may be left on disk after truncate operation (show details) | 5.1.9.6 | All Scale Users |
IJ50999 | High Importance | While mmcheckquota is running, a failed user data copy from user space to kernel space leads to some cleanup work, and an assertion goes off because one mutex-related flag is not acquired when fixing the quota accounting value. (show details) | 5.1.9.6 | Quotas |
IJ52204 | High Importance | mmimgrestore reports failure if a symbolic link has no content. This could result in an incomplete file system restore. (show details) | 5.1.9.6 | SOBAR (mmimgbackup/ mmimgrestore) |
IJ52205 | High Importance | mmapplypolicy performs an inode scan in parallel and the number of iscanBuckets can be specified via the -A option. If the -A option is not specified, tsapolicy calculates it from the total used inodes, between 7 and 4096. In a large file system, the calculated value is often larger than the open file limit, and tsapolicy could fail with "Too many open files" (see the open-file-limit sketch after this table). (show details) | 5.1.9.6 | mmapplypolicy |
IJ52206 | Suggested | The mmxcp utility does not copy the file mode bits for SUID, SGID, or "sticky bit". (show details) | 5.1.9.6 | Filesets |
IJ51645 | High Importance | Since 5.1.9.0, when mmapplypolicy runs on a single node, it runs in strictly local mode. But it could show slower performance during the sorting phase with a large file system because it does not execute parallel sorting. (show details) | 5.1.9.6 | mmapplypolicy |
IJ52208 | High Importance | When a message is being sent to multiple destinations, if a reconnect happens while the sender thread is still sending, doing the resend in another thread could cause this assert. (show details) | 5.1.9.6 | All Scale Users |
IJ52209 | Suggested | With either the static or dynamic pagepool, when the allocation of pagepool memory is not possible (e.g. the node running out of memory), an error message "[X] Cannot pin a page pool at the address X bytes because it is already in use" is printed, which is just confusing. (show details) | 5.1.9.6 | All Scale Users |
IJ52262 | High Importance | CTDB uses a queue to receive requests and send answers. However, there was an issue that gave priority to the receiving side, so when a request was processed and the answer posted to the queue, if another incoming request arrived, it was served before sending the previous answer. Under high traffic this can lead to CTDB long waits and starvation. (show details) | 5.1.9.6 | SMB |
IJ52264 | High Importance | The following assertion can be hit during a fileset deletion - Assert exp(!isValid() || inodesInTransit == NULL || inodesInTransit->getNumberOfOneBits() == 0 ... (show details) | 5.1.9.6 | All Scale Users |
IJ52265 | High Importance | Unable to handle kernel paging request crash in kxGanesha. The problem happens because the value of fdp->fd has changed after it was copied. (show details) | 5.1.9.6 | NFS Ganesha |
IJ52266 | High Importance | When mmimgrestore creates a file with a saved inode, if the inode is already assigned as logfile, Storage Scale tries to move the logfile to another available inode. But if moving the logfile to another inode fails, Storage Scale returns EAGAIN and the mmimgrestore command will fail. (show details) | 5.1.9.6 | SOBAR |
IJ52221 | High Importance | Dangling entry in RO mode after re-uploading data to COS (show details) | 5.1.9.6 | AFM |
IJ52270 | Critical | When the manual update mode is in use at the AFM cache site, and an existing file in the cache is renamed and recreated with the same name, the AFM reconcile operation uploads the file to the AFM home site but may incorrectly update the file at the AFM home site. (show details) | 5.1.9.6 | AFM |
IJ52271 | High Importance | tsapolicy gets the current cipherList setting from mmfs.cfg but gets an empty string if the cipherList configuration variable is set to the default value (EMPTY). tsapolicy incorrectly determines the value as a real cipher if the value is not "EMPTY", "DEFAULT", or "AUTHONLY". (show details) | 5.1.9.6 | mmapplypolicy |
IJ52324 | High Importance | ACLs are not fetched to the AFM cache from the home when the opened AFM control file becomes stale. The AFM control file is used to fetch ACLs and EAs from the home. (show details) | 5.1.9.6 | AFM |
IJ52323 | High Importance | When an already encoded list file is passed to mmafmctl's evict sub-command, it encodes it once more and passes it to tspcacheevict, causing tspcacheevict to not recognize the decoded file name and fail eviction. If the list is already in encoded format, refrain from encoding the list file again. (show details) | 5.1.9.6 | AFM |
IJ52322 | Suggested | When a previous snapshot is created with the expiration time set, the next snapshot created can get the expiration time of the previous snap when the expiration time is not explicitly provided for this next snapshot. (show details) | 5.1.9.6 | Snapshots |
IJ52321 | High Importance | Daemon assert logAssertFailed: isNotCached() while disabling AFM online with afmFastLookup enabled. This happens due to accessing an invalid fileset pointer. (show details) | 5.1.9.6 | AFM |
IJ51031 | High Importance | Metadata corruption on one node with folders not being correctly visible. Cannot cd into directory on one node. (show details) | 5.1.9.5 | No adverse effect. This is a failsafe change |
IJ51457 | Critical | If the File Audit Logging Audit Logs are compressed while GPFS is appending to them, the Audit Log can become corrupted and unrecoverable. This can happen when a compression policy is run against the audit log fileset / audit logs. (show details) | 5.1.9.5 | File Audit Logging |
IJ49862 | High Importance | When the daemon restarts on a worker node, it is possible to have a race condition that causes the worker's local state change to take place after GNR's readmit operation, which intends to repair tracks with stale data. The delayed state change could cause the intended readmit operation to fail to repair the data on the given disks, resulting in stale sectors in tracks that could have been fixed once the delayed state change takes place. With more disk failures before the next cycle of scan and repair operations has a chance to repair these vtracks, it could result in data loss if the number of faults is beyond the fault tolerance of the vdisk. (show details) | 5.1.9.5 | GNR |
IJ51652 | High Importance | Configuring perfmon --collectors with a non-cluster node name (e.g. a hostname which is different from the admin or daemon name) will fail mmsysmon noderoles detection, cause the perfmon query port to go down, and the GUI node will raise a gui_pmcollector_connection_failed event. (show details) | 5.1.9.5 | Performance Monitoring Tool, GUI, mmhealth thresholds, GrafanaBridge |
IJ51658 | Critical | Signal 11 hit in function AclDataFile::hashInsert in acl.C, due to race condition when adding ACLs and handling cached ACL data invalidation during node recovery or hitting a disk error, resulting in mmfsd daemon crash. (show details) | 5.1.9.5 | All Scale Users |
IJ51363 | High Importance | Scanning a directory with policy or the Scale gpfs_ireaddir64 API has degraded since the 5.1.3 release. (show details) | 5.1.9.5 | policy or gpfs_ireaddir64 API |
IJ51704 | High Importance | Triggering recovery on an IW fileset (by running ls -l on the root of the fileset), with the afmIOFlag afmRecoveryUseFset set on it, causes a deadlock, which resolves itself after almost 10 minutes (300 retries of queueing Getattr for the ls command). (show details) | 5.1.9.5 | AFM |
IJ51705 | High Importance | 1. Introduce a new config option, afmSkipPtrash, to skip moving files to the .ptrash directory. 2. Also add a mmafmctl subcommand "emptyPtrash" to clean up the .ptrash directory without relying on rm -rf, similar to the one implemented in the --empty-ptrash flag of prefetch. (show details) | 5.1.9.5 | AFM |
IJ51706 | High Importance | afmCheckRefreshDisable is a cluster-level tunable today to avoid refresh going to the filesystem itself and to return from the dcache. But when tuned, it applies to all AFM filesets in the cluster. A fileset-level tunable is needed to do the same so that it doesn't impact all other filesets in the cluster like it does today. (show details) | 5.1.9.5 | AFM |
IJ51707 | High Importance | For some threshold events the system pushes them from the Threshold to the Filesystem component internally. Due to misaligned data the event could get suppressed. (show details) | 5.1.9.5 | System Health, perfmon (Zimon) |
IJ51708 | High Importance | When the dynamic pagepool is enabled, pagepool memory shrinks slowly when a memory-consuming application is requesting memory (show details) | 5.1.9.5 | All Scale Users |
IJ51709 | High Importance | Pagepool growth is rejected due to recent pagepool change history (show details) | 5.1.9.5 | All Scale Users |
IJ51710 | High Importance | Memory allocation from the shared segment failed (show details) | 5.1.9.5 | All Scale Users |
IJ51711 | High Importance | If a symbolic link mount is attempted on an existing symlink of a directory, it ends up creating a symbolic link with the same name as the source inside the target directory. Since the DR is mostly RO in nature, it gets an E_ROFS and prints these failures to the log. (show details) | 5.1.9.5 | AFM |
IJ51712 | High Importance | mmwmi.exe is a helper utility on Windows which is used by various mm* administration scripts to query various system settings such as IP addresses, mounted filesystems and so on. Under certain conditions such as active realtime scanning by security endpoints and anti-malwares, the output of mmwmi is not sent to stdout and any connected pipes that depend on it. This can cause various GPFS configuration scripts, such as mmcrcluster to fail. (show details) | 5.1.9.5 | Admin Commands. |
IJ51713 | High Importance | The problem here is that during conversion a wrong target is specified, with the protocol as jttp instead of http. This leads parsePcacheTarget to find it invalid, but later a NULL is persisted to disk, where the assert goes off. (show details) | 5.1.9.5 | AFM |
IJ51781 | Suggested | "mmperfmon delete" shows a usage string, referencing "usage: perfkeys delete [-h]" instead of the proper usage. (show details) | 5.1.9.5 | perfmon (ZIMON) |
IJ51782 | Suggested | Customers are getting "SyntaxWarning: invalid escape sequence" errors when "mmperfmon" is used for custom scripting. (show details) | 5.1.9.5 | perfmon (ZIMON) |
IJ51783 | High Importance | recovery is not syncing old directories* (show details) | 5.1.9.5 | AFM |
IJ51784 | High Importance | mmafmcosconfig fails to create afmcos fileset in sudo configured setup (show details) | 5.1.9.5 | AFM |
IJ51785 | High Importance | Not able to initialize download in case the fileset is in Dirty state (show details) | 5.1.9.5 | AFM |
IJ51786 | Critical | AFM fileset is going into NeedsResync state due to replication of a file whose parent directory is local. (show details) | 5.1.9.5 | AFM |
IJ51787 | High Importance | When a large number of secure connections are created at the same time between the mmfsd daemon instances in a Scale cluster, some of the secure connections may fail as a result of timeouts, resulting in unstable cluster operations. (show details) | 5.1.9.5 | GPFS Core |
IJ51843 | High Importance | Kernel crashes with the following assert message: GPFS logAssertFailed: vinfoP->viInUse. (show details) | 5.1.9.5 | NFS exports |
IJ51844 | High Importance | Newer versions of libmount1 package that are installed by default on SUSE15.6 filter out device name from gpfs mount options due to which mount fails. (show details) | 5.1.9.5 | GPFS core |
IJ51845 | Critical | AFM Gateway node reboot due to an out-of-memory exception. There is a memory leak while doing the upload/reconcile operation in an MU mode fileset. (show details) | 5.1.9.5 | AFM |
IJ50654 | High Importance | mmshutdown caused kernel crash while calling dentry_unlink_inode with the backtrace like this: ... #10 page_fault at ffffffff8d8012e4 #11 iput at ffffffff8cef25cc #12 dentry_unlink_inode at ffffffff8ceed5d6 #13 __dentry_kill at ffffffff8ceedb6f #14 dput at ffffffff8ceee480 #15 __fput at ffffffff8ced3bcd #16 ____fput at ffffffff8ced3d7e #17 task_work_run at ffffffff8ccbf41f #18 do_exit at ffffffff8cc9f69e (show details) | 5.1.9.5 | All Scale Users (Linux) |
IJ51332 | High Importance | GPFS daemon could assert unexpectedly with EXP(REGP != _NULL) in file alloc.C. This could occur on client nodes where there are active block allocation activities. (show details) | 5.1.9.5 | All Scale Users |
IJ50480 | High Importance | Long ACL garbage collection runs in the filesystem manager can cause lock conflicts with nodes that need to retrieve ACLs during garbage collection. The conflicts will resolve after garbage collection has finished. (show details) | 5.1.9.5 | All Scale Users |
IJ51846 | Suggested | Due to a locale issue, a few callhome commands were gathering region-specific data, causing an error in AoAtool while parsing this data (show details) | 5.1.9.5 | Callhome |
IJ51864 | High Importance | Crash of the node when mounting a filesystem or while starting the node. (show details) | 5.1.9.5 | Filesystem mount |
IJ51908 | High Importance | When the dynamic pagepool is enabled, the pagepool may not shrink because a pagepool grow is still in progress, which results in out of memory (show details) | 5.1.9.5 | Dynamic pagepool |
IJ51909 | Suggested | There are a few occasions where error code 809 may be used inside the CCR Quorum management component. Although not user actionable, the product was changed to make some note of this in mmfs.log instead of it being available only in GPFS Trace, as had been the case. The intent is to improve RAS in certain situations. (show details) | 5.1.9.5 | CCR |
IJ51011 | Suggested | Nessus vulnerability scanner found HSTS communication is not enforced on mmsysmon port 9980 (show details) | 5.1.9.4 | mmsysmon on GUI/pmcollector node |
IJ50232 | High Importance | The automated node expel mechanism (see references to the mmhealthPendingRPCExpelThreshold configuration parameter) uses the internal mmsdrcli to issue an expel request to a node in the home cluster. If the sdrNotifyAuthEnabled configuration parameter is set to false (not recommended) then the command will fail with a message like the following: [W] The TLS handshake with node 192.168.132.151 failed with error 410 (client side). mmsdrcli: [err 144] Connection shut down and the expel request will fail. (show details) | 5.1.9.4 | System Health (even though the fix is not in system health itself) |
IJ51036 | Medium Importance | mm{add|del}disk will fail, triggered by signal 11. (show details) | 5.1.9.4 | disk configuration and region management |
IJ51037 | Suggested | mmkeyserv returns an error when used to delete a previously deleted tenant, instead of returning a success return code. (show details) | 5.1.9.4 | File System Core |
IJ51057 | Medium Importance | From a Windows client, in the MMC permissions tab on a share, the ACL listing was always showing as Everyone. If a subdirectory inside a subdirectory is deleted, then in the respective snapshot taken before, traversal to the inner subdirectory was showing errors. (show details) | 5.1.9.4 | SMB |
IJ51148 | High Importance | find or download-all, when run on a given path, sets the time for each of the individual entities with respect to COS and ends up blocking a following revalidation from fetching actual changes to the object's metadata from the COS to the cache. (show details) | 5.1.9.4 | AFM |
IJ51149 | High Importance | Due to an issue with the way mmfsckx scans compressed files and internally stores information to detect inconsistent compressed groups, mmfsckx will report and/or repair false positive inconsistencies for compressed files. The mmfsckx output will report something like the following, for example: !Inode 791488 snap 6 fset 6 "user file" indirect block 1 level 1 @4:13508288: disk address (ditto) in slot 0 replica 0 pointing to data block 226 code 2012 is invalid (show details) | 5.1.9.4 | mmfsckx |
IJ51150 | High Importance | mmfsckx captures allocation and deallocation information of blocks from remote client nodes or non-participating nodes that mount the file system while mmfsckx is running. Once the file system gets unmounted from these nodes, it stops the capture of such information. But due to an issue, mmfsckx was stopping the capture of such information before the complete unmount event ended, which led to mmfsckx reporting and/or repairing false positive lost blocks, bad (incorrectly allocated) blocks, and duplicate blocks. (show details) | 5.1.9.4 | mmfsckx |
IJ51252 | Suggested | Prefetch command fails but returns error code 0 (show details) | 5.1.9.4 | AFM |
IJ51225 | High Importance | There is a build failure while executing the mmbuildgpl command. The failure is seen while compiling /usr/lpp/mmfs/src/gpl-linux/kx.c due to no member named '__st_ino' in struct stat. Please refrain from upgrading to affected kernel and/or OpenShift levels until the fix is available (show details) | 5.1.9.4 | Build / Installation |
IJ51222 | Suggested | If a problem with an encryption server happens, just the Rkmid is visible in the default "mmhealth node show" view. Furthermore, there are two monitoring mechanisms: one uses an index to convey whether the main or backup server is affected, and one directly uses the hostname or IP for that. Moreover, the usual way to resolve an event with "mmhealth event resolve" has been broken for that component. (show details) | 5.1.9.4 | System Health |
IJ51160 | Suggested | Daemon assert gets triggered when afmLookupMapSize is set to a higher value such as 32. The supported range is only 0 to 30. (show details) | 5.1.9.4 | AFM |
IJ51282 | High Importance | mmrestoreconfig also restores the fileset configuration of a file system. If the cluster version (minReleaseLevel) is below 5.1.3.0, the fileset restore will fail as it tries to restore the fileset permission inherit mode even if it is the default. The permission inherit mode was not enabled before Storage Scale version 5.1.3.0. (show details) | 5.1.9.4 | Admin Commands SOBaR |
IJ51283 | Suggested | The mmchnode and mmumount commands did not clean up temporary node files in /var/mmfs/tmp. (show details) | 5.1.9.4 | Admin Commands |
IJ51286 | High Importance | GPFS daemon could unexpectedly fail with signal 11 when mounting a file system if file system quiesce is triggered during the mount process. (show details) | 5.1.9.4 | All Scale Users |
IJ51265 | Critical | It is possible for the EA overflow block to be corrupted as a result of log recovery after a node failure. This can lead to the loss of some extended attributes that cannot be stored in the inode. (show details) | 5.1.9.4 | All Scale Users |
IJ51344 | Critical | When writing to a memory-mapped file, there is a chance that incorrect data could be written to the file before and after the targeted write range (show details) | 5.1.9.4 | All scale users |
IJ49992 | Suggested | If the local cluster nistCompliance value is off, the mmremotecluster and mmauth commands fail with an unclear error message. (show details) | 5.1.9.3 | All Scale Users |
IJ50066 | Suggested | An AFM LU mode fileset from a filesystem, to a target in the same filesystem (snapshot), using the NSD backend is failing with error 1. This happens because another code fix had an unintended consequence for this code path. (show details) | 5.1.9.3 | AFM |
IJ50067 | High Importance | When afmResyncVer2 is run with afmSkipResyncRecovery set to yes, the priority directories that AFM usually queues should not be queued, since such directories might exist under parents that are not in sync already, leading to error 112. (show details) | 5.1.9.3 | AFM |
IJ50068 | High Importance | This APAR addresses two issues related to NFS-Ganesha that can cause crashes. Here are the details: (gdb) bt #0 0x00007fff88239a68 in raise () #1 0x00007fff8881ffb8 in crash_handler (signo=11, info=0x7ffb42abbe48, ctx=0x7ffb42abb0d0) #3 0x00007fff888da5f4 in atomic_add_int64_t (augend=0x148, addend=1) #4 0x00007fff888da658 in atomic_inc_int64_t (var=0x148) #5 0x00007fff888de44c in _get_gsh_export_ref (a_export=0x0) #6 0x00007fff8888c6c0 in release_lock_owner (owner=0x7ffef94a1cc0) #7 0x00007fff88923e9c in nfs4_op_release_lockowner (op=0x7ffef922be60, data=0x7ffef954d290, resp=0x7ffef8629c30) #8 0x00007fff888fb810 in process_one_op (data=0x7ffef954d290, status=0x7ffb42abcdf4) #9 0x00007fff888fcc9c in nfs4_Compound (arg=0x7ffef95eec38, req=0x7ffef95ee410, res=0x7ffef8ce4b40) #10 0x00007fff88819130 in nfs_rpc_process_request (reqdata=0x7ffef95ee410, retry=false) #11 0x00007fff88819864 in nfs_rpc_valid_NFS (req=0x7ffef95ee410) #12 0x00007fff88750618 in svc_vc_decode (req=0x7ffef95ee410) #13 0x00007fff8874a8f4 in svc_request (xprt=0x7fff30039ca0, xdrs=0x7ffef95eb400) #14 0x00007fff887504ac in svc_vc_recv (xprt=0x7fff30039ca0) #15 0x00007fff8874a82c in svc_rqst_xprt_task_recv (wpe=0x7fff30039ed8) #16 0x00007fff8874b858 in svc_rqst_epoll_loop (wpe=0x10041cc5cb0) #17 0x00007fff8875b22c in work_pool_thread (arg=0x7ffdcd1047d0) #18 0x00007fff88229678 in start_thread () #19 0x00007fff880d8938 in clone () Or (gdb) bt #0 0x00007f96f58d9b8f in raise () #1 0x00007f96f75c6633 in crash_handler (signo=11, info=0x7f96ad9fc9b0, ctx=0x7f96ad9fc880) a #3 dec_nfs4_state_ref (state=0x7f9640465440) #4 0x00007f96f76762f9 in dec_state_t_ref (state=0x7f9640465440) #5 0x00007f96f767640c in nfs4_op_free_stateid (op=0x7f8dec12fba0, data=0x7f8dec1992b0, resp=0x7f8dec04ce70) #6 0x00007f96f766dbae in process_one_op (data=0x7f8dec1992b0, status=0x7f96ad9fe128) #7 0x00007f96f766ee80 in nfs4_Compound (arg=0x7f8dec110ab8, req=0x7f8dec110290, res=0x7f8dec5b7db0) #8 0x00007f96f75c17db in nfs_rpc_process_request (reqdata=0x7f8dec110290, retry=false) #9 0x00007f96f75c1cf1 in nfs_rpc_valid_NFS (req=0x7f8dec110290) #10 0x00007f96f733edfd in svc_vc_decode (req=0x7f8dec110290) #11 0x00007f96f733ac61 in svc_request (xprt=0x7f95d00c4a60, xdrs=0x7f8dec18dd00) #12 0x00007f96f733ed06 in svc_vc_recv (xprt=0x7f95d00c4a60) #13 0x00007f96f733abe1 in svc_rqst_xprt_task_recv (wpe=0x7f95d00c4c98) #14 0x00007f96f73462f6 in work_pool_thread (arg=0x7f8ddc0cc2f0) #15 0x00007f96f58cf1ca in start_thread () #16 0x00007f96f5119e73 in clone () (show details) | 5.1.9.3 | NFS-Ganesha crash followed by CES-IP failover. |
IJ49856 | Critical | Multi-threaded applications that issue mmap I/O and I/O system calls concurrently can hit a deadlock on the buffer lock. This is likely not a common pattern, but this problem has been observed with database applications. (show details) | 5.1.9.3 | All Scale Users |
IJ50208 | High Importance | Multi-threaded applications that issue mmap I/O and I/O system calls concurrently can hit a deadlock on the buffer lock. This is likely not a common pattern, but this problem has been observed with database applications. (show details) | 5.1.9.3 | All Scale Users |
IJ50209 | Suggested | Setting security header as suggested by RFC 6797 (show details) | 5.1.9.3 | perfmon (Zimon) |
IJ50210 | High Importance | With File Audit Logging (FAL) enabled, when a change to the policy file happens and the LWE garbage collector runs for FAL, there is a small window in which a deadlock can occur, with the long waiter message 'waiting for shared ThSXLock' seen for the PolicyCmdThread. (show details) | 5.1.9.3 | File Audit Logging |
IJ50211 | High Importance | During a mount operation of the file system, updating LWE configuration information for File Audit Logging before the Fileset metadata file (FMF) is initialized results in a signal 11, NotGlobalMutexClass::acquire() + 0x10 at mastSMsg.C:44 (show details) | 5.1.9.3 | File Audit Logging |
IJ50320 | Critical | AFM fileset is going into NeedsResync state due to replication of a file whose parent directory is local. (show details) | 5.1.9.3 | AFM |
IJ50321 | High Importance | When a thread is flushing the file metadata of the ACL file to disk, there's a small window that a deadlock can occur when a different thread tries to get a Windows security descriptor, as getting the security descriptor requires reading the ACL file. (show details) | 5.1.9.3 | All Scale Users |
IJ50323 | High Importance | When checking the block alloc map, mmfsckx excludes the regions that are being checked or are already checked from further getting updated in the internal shadow map. But when checking for such excluded regions, it was not checking which poolId the region belonged to. This resulted in mmfsckx not updating the shadow map for a region belonging to a pool while checking the block alloc map for the same region belonging to a different pool. This led to mmfsckx falsely marking blocks as lost blocks and then later to this assert. (show details) | 5.1.9.3 | mmfsckx |
IJ50035 | High Importance | When RDMA verbsSend is enabled and the number of RDMA connections is larger than 16, a reconnect could cause a segmentation fault. (show details) | 5.1.9.3 | All Scale Users |
IJ50372 | Critical | O_TRUNC is not ignored correctly after a successful file lookup during atomic_open(), so truncation can happen during the open routine, before permission checks happen. This leads to a scenario in which a user on a different node can truncate a file for which he does not have permissions. (show details) | 5.1.9.3 | All Scale Users |
IJ50373 | Suggested | For certain performance monitoring operations, in the case of an error the query and response get logged. That response can be large, and logging it regularly will cause mmsysmon.log to grow rapidly. (show details) | 5.1.9.3 | System Health / perfmon (Zimon) |
IJ50374 | High Importance | With File Audit Logging (FAL) enabled, when deciding to run the LWE garbage collector for FAL, an attempt to try-acquire the lock on the policy file mutex is performed. If the policy file mutex is busy, the attempt is canceled and retried later. Upon canceling, the policy file mutex can be released without being held, leading to the log assert. (show details) | 5.1.9.3 | File Audit Logging |
IJ50375 | Critical | GPFS daemon could assert unexpectedly with: Assert exp(0) in direct.C. This could happen on file system manager node after a node failure. (show details) | 5.1.9.3 | All Scale Users |
IJ50439 | High Importance | The ts commands do not always return the correct error code, providing incorrect results to the mm commands that call them, resulting in incorrect cluster operations. (show details) | 5.1.9.3 | Core |
IJ50440 | High Importance | mmfsckx fails to detect a file having an illReplicated extended attribute overflow block, and in repair mode will not mark the illReplicated flag in it. (show details) | 5.1.9.3 | mmfsckx |
IJ50441 | High Importance | When scanning a compressed file, mmfsckx in some cases can incorrectly report a file as having a bad disk address (show details) | 5.1.9.3 | mmfsckx |
IJ50442 | High Importance | When scanning a file system having a corrupted snapshot mmfsckx can cause node assert with logAssertFailed: countCRAs() == 0 && "likely a leftover cached inode in inode0 d'tor"* (show details) | 5.1.9.3 | mmfsckx |
IJ50443 | High Importance | AFM policy-generated intermediate files are always put in the /var filesystem: /var/mmfs/tmp for Resync/Failover and /var/mmfs/afm for Recovery. We have seen in customer setups that /var is provisioned very small while other filesystems are well provisioned to handle such large files, such as /opt or even inside the fileset itself. (show details) | 5.1.9.3 | AFM |
IJ50463 | High Importance | Stale data may be read while "mmchdisk start" is running. (show details) | 5.1.9.3 | All Scale Users |
IJ50563 | Critical | In a file system with replication configured, for a large file with more than 5000 data blocks, if some data blocks are mis-updated due to disk failures on one replica disk, these stale replicas would not be repaired if helper nodes get involved to repair them. (show details) | 5.1.9.3 | Scale Users |
IJ50577 | High Importance | When there is a TCP network error, we will try to reconnect the TCP connection, but the reconnect fails with a "Connection timed out" error, which results in a node expel. (show details) | 5.1.9.3 | All Scale Users |
IJ50708 | Critical | In a file system with replication configured, the mis-update info set in the disk address could be overwritten by the log recovery process, leading to stale data being read; in addition, the start disk process cannot repair such stale replicas. (show details) | 5.1.9.3 | All Operating Systems |
IJ50794 | High Importance | Symbolic links may be incorrectly deleted during the offline mmfsck and may cause undetected data loss (show details) | 5.1.9.3 | General file system, creation of symbolic links. |
IJ50890 | Suggested | Metadata evict was giving an error from the 2nd attempt onwards. (show details) | 5.1.9.3 | AFM |
IJ49762 | High Importance | mmlsquota -d can cause gpfs daemon to crash (show details) | 5.1.9.3 | Quotas |
IJ49856 | Critical | Unexpected long waiters could appear with a fetch thread waiting on FetchFlowControlCondvar with reason 'wait for buffer for fetch'. This could happen when a workload causes all prefetch/writebehind threads to be assigned to prefetching. (show details) | 5.1.9.3 | All Scale Users |
IJ50061 | High Importance | When mmfsckx is run on a file system such that it requires multiple scan passes to complete then mmfsckx can abort with reason "Assert failed "nEnqueuedNodes > 1"." (show details) | 5.1.9.3 | mmfsckx |
IJ49583 | Suggested | When a RDMA connection to a remote node has to be shutdown due to network errors (e.g. network link goes down) it can sometimes happen that the affected RDMA connection will not be closed and all resources assigned to this RDMA connection (memory, VERBS Queue Pair, ...) are not freed. (show details) | 5.1.9.2 | RDMA |
IJ49584 | High Importance | Spectrum Scale Erasure Code Edition interacts with third-party software/hardware APIs for internal disk enclosure management. If the management interface becomes degraded and starts to hang commands in the kernel, the hang may also block communication handling threads. This causes a node to fail to renew its lease, causing it to be fenced off from the rest of the cluster. This may lead to additional outages. A previous APAR was issued for this in 5.1.4, but that fix was incomplete. (show details) | 5.1.9.2 | ESS/GNR |
IJ49585 | Suggested | If a tiebreaker disk has outdated version info, ccrrestore can abort with Python3 errors (show details) | 5.1.9.2 | CCR |
IJ49659 | Critical | AFM sets pcache attributes on an inode after reading an uncached file from home, modifying the inode while the file system is quiesced. An assert is hit because of this. (show details) | 5.1.9.2 | AFM
IJ49586 | High Importance | File systems that have a large number of independent filesets usually tend to have a sparse inode space. If mmfsckx is run on such a file system with a large sparse inode space, it takes longer to run because it unnecessarily parses inode alloc map segments pointing to sparse inode spaces instead of skipping them. (show details) | 5.1.9.2 | FSCKX
IJ49587 | High Importance | When building an NFSv4 ACL from the POSIX access and default ACLs of a directory, if an update or store of an ACL on another file or directory happens between the retrievals of the access ACL and the default ACL, a deadlock can occur and the long waiter message "waiting for exclusive NF ThSXLock for readers to finish" is seen. (show details) | 5.1.9.2 | All Scale Users
IJ49660 | High Importance | When replicating over NFS with KRB plus AD: if a user who is not included in the AD at the primary site creates a file, the file is first replicated as root to the DR site and then a Setattr is attempted with the user/group to which the file/directory belongs. If the user does not exist in AD and is local to the primary cluster alone, NFS prevents the Setattr, and hence the whole Create operation from primary to DR gets stuck with E_INVAL. (show details) | 5.1.9.2 | AFM
IJ49661 | Suggested | Cluster health shows "healthy" for disabled CES services. (show details) | 5.1.9.2 | System Health
IJ49662 | Suggested | In certain cases the network status was not accounted for correctly, which could result in "stuck" events like cluster_connections_bad and cluster_connections_down. (show details) | 5.1.9.2 | System Health
IJ49710 | Suggested | For a failed callhome upload, remove the job from the queue if the DC package is not available. (show details) | 5.1.9.2 | Callhome
IJ49699 | Suggested | Sometimes a callhome upload fails due to a curl(52) error. (show details) | 5.1.9.2 | Callhome
IJ49700 | Suggested | Sometimes an exception appears in the logs when the callhome sendfile progress is converted to an integer. (show details) | 5.1.9.2 | Callhome
IJ49701 | High Importance | Processes hang due to deadlocks in a Storage Scale cluster. There are deadlock notifications on multiple nodes, triggered by 'long waiter' events on those nodes. (show details) | 5.1.9.2 | Regular file read flow in kernel version >= 5.14
IJ49714 | Suggested | Creating an AFM fileset with more than 32 afmNumFlushThreads gives an error. (show details) | 5.1.9.2 | AFM
IJ49715 | Suggested | The 'rpc.statd' process may be terminated or crash due to statd-related issues. In these instances, the NFSv3 client will relinquish control over NFSv3 exports, and the GPFS health monitor will indicate 'statd_down'. (show details) | 5.1.9.2 | NFS
IJ49580 | High Importance | When the device file for an NSD disk goes offline or is detached from a node, I/O issued from that node fails with a "No such device or address" error (6), even though there are other NSD servers defined and available for servicing the I/O request. (show details) | 5.1.9.2 | All Scale Users
IJ49770 | High Importance | An AFM object fileset fails to pull new objects from the S3/Azure store when the object fileset is exported via nfs-ganesha and a readdir is performed over the NFS mount. However, performing the readdir on the fileset directly pulls the entries correctly. (show details) | 5.1.9.2 | AFM
IJ49771 | High Importance | AFM outband metadata prefetch hangs if an orphan file already exists for an entry in the list file. AFM orphan files have an inode allocated but not initialized. (show details) | 5.1.9.2 | AFM
IJ49772 | High Importance | Daemon assert going off: otherP == NULL in clicmd.C, resulting in a daemon restart. (show details) | 5.1.9.2 | All
IJ49792 | High Importance | Add a config option to set nconnect for the NFS mount. (show details) | 5.1.9.2 | AFM
IJ49793 | Suggested | Prefetch does not generate the afmPrepopEnd callback event. (show details) | 5.1.9.2 | AFM
IJ49794 | High Importance | Prefix downloads fail, or read or ls fails, if the prefix option is used with download or fileset creation. (show details) | 5.1.9.2 | AFM
IJ49795 | Suggested | A rename is not reflected to COS automatically if afmMUAutoRemove is configured. (show details) | 5.1.9.2 | AFM
IJ49796 | High Importance | AFM COS to GCS hangs the file system on GCS errors if the credentials do not have enough permissions. (show details) | 5.1.9.2 | AFM
IJ49851 | High Importance | A crash is observed in read_pages when called from page_cache_ra_unbound on SLES with kernel version >= 5.14. (show details) | 5.1.9.2 | Regular file read flow in kernel version >= 5.14
IJ49852 | High Importance | With showNonZeroBlockCountForNonEmptyFiles set, the block count is always shown as one to report a fake block count. This is a workaround for faulty applications (e.g., GNU tar --sparse) that erroneously assume a zero st_blocks means the file contains no nonzero bytes (see the sparse-detection sketch after this table). (show details) | 5.1.9.2 | AFM
IJ49142 | Suggested | When running a workload on Windows that creates and deletes lots of files and directories in a short span, the inode number assigned to GPFS objects may be reused. If a stale inode entry somehow persists in the GPFS cache due to in-flight hold counts, a conflict between the old and new object types can cause this stale entry to result in a file or directory not found error. (show details) | 5.1.9.1 | All Scale Users
IJ49144 | High Importance | When a dependent fileset is created inline using afmOnlineDepFset, or created offline as in the earlier supported method, enabling mmafmconfig is mandated so that .afm/.afmtrash is present at the DR site inside the dependent fileset, to handle the conflict renames that AFM does. mmafmconfig enable at the DR on the dependent fileset also creates the .afmctl file, which has the CTL attribute enabled and disallows anyone from removing it except through mmafmlocal. This causes a restore to fail when removing the .afmctl inside the dependent fileset while restoring to a snapshot without the dependent fileset. The fix is to have mmafmconfig enable create .afm/.afmtrash without creating the .afmctl file, which is not needed inside dependent filesets anyway. (show details) | 5.1.9.1 | AFM
IJ49145 | High Importance | When failover is performed to an entirely new secondary fileset at the DR site within the same file system as the previous target secondary fileset, the dependent fileset path to link under should change too. For this, the existing dependent fileset is unlinked; when it is then linked under the new path, since the dependent fileset exists, E_EXIST is returned, and the primary later fails the queue while looking up the remote attributes. The fix is to return E_EXIST only if the fileset exists in the linked state, so that the follow-up operation from the primary to build the remote attributes succeeds. (show details) | 5.1.9.1 | AFM
IJ49151 | High Importance | Memory corruption can happen if an application using the GPFS_FINE_GRAIN_WRITE_SHARING hint is running on a file system whose NSD servers have a different endianness from the client node on which the application runs. (show details) | 5.1.9.1 | Data shipping
IJ49152 | High Importance | When running mmexpelnode to expel the node on which the command is running, an assert may be hit. (show details) | 5.1.9.1 | All Scale Users
IJ49044 | High Importance | When a file is opened with the O_APPEND flag, sequential small-read performance is poor (see the access-pattern sketch after this table). (show details) | 5.1.9.1 | All Scale Users
IJ49154 | Critical | The GPFS daemon could fail unexpectedly with an assert when handling disk address changes. This could happen when the number of blocks in a file becomes very large and causes a variable used in an internal calculation to overflow. This is more likely to happen on a file system where the block size is very small. (show details) | 5.1.9.1 | All Scale Users
IJ49169 | High Importance | AFM metadata prefetch does not preserve ctime on files if they are migrated at home. This causes a ctime mismatch between cache and home. (show details) | 5.1.9.1 | AFM
IJ49196 | High Importance | If a COS bucket has an object and a directory object with the same name, file objects were downloaded by default, when the customer requirement was to download the directory content instead of the files. (show details) | 5.1.9.1 | AFM
IJ49197 | Suggested | Exception in mmsysmonitor.log because some files were removed during mmcallhome data collection. (show details) | 5.1.9.1 | Callhome
IJ49198 | Suggested | mmcallhome SendFile: progress percentage not updated (show details) | 5.1.9.1 | Callhome |
IJ49216 | High Importance | The quota manager/client node may assert during a per-fileset quota check when there is an inode being deleted. (show details) | 5.1.9.1 | Quota
IJ49135 | Critical | The assert "logAssertFailed: oldDA1Found[i].compAddr(synched1[I])" goes off, resulting in an mmfsd daemon crash, and could finally cause the file system to be unmountable on any node. (show details) | 5.1.9.0 | Compression
IJ48873 | Critical | File data loss when copying or archiving data from migrated files (e.g., using a "cp" or "tar" command that supports detecting sparse holes in source files via the lseek(2) interface; see the sparse-detection sketch after this table). (show details) | 5.1.9.0 | DMAPI
IJ48871 | Critical | File data loss when copying or archiving data from snapshot and clone files (e.g., using a "cp" or "tar" command that supports detecting sparse holes in source files via the lseek(2) interface). (show details) | 5.1.9.0 | Snapshot and clone files
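The sparse-file APARs above (IJ49852, IJ48873, IJ48871) all hinge on how archive tools decide which byte ranges of a source file to read. The following minimal C sketch is a hypothetical illustration, not code from cp, GNU tar, or IBM Storage Scale; it shows the two detection techniques involved: the st_blocks heuristic that showNonZeroBlockCountForNonEmptyFiles defends against, and the lseek(2) SEEK_DATA/SEEK_HOLE scan whose misreported holes lead to the data loss described in IJ48873 and IJ48871.

```c
/* Hypothetical sketch of sparse-file detection; not vendor code. */
#define _GNU_SOURCE   /* needed for SEEK_DATA/SEEK_HOLE on glibc */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Technique 1: the st_blocks heuristic. An application like
     * GNU tar --sparse may assume st_blocks == 0 with st_size > 0
     * means the file holds no nonzero bytes. For files whose blocks
     * are not locally allocated, this assumption is wrong (IJ49852). */
    struct stat st;
    if (fstat(fd, &st) == 0 && st.st_size > 0 && st.st_blocks == 0)
        printf("st_blocks heuristic: file looks entirely sparse\n");

    /* Technique 2: walk the data/hole map with lseek(2). If the file
     * system misreports data regions as holes for migrated, snapshot,
     * or clone files, those regions are skipped and silently lost in
     * the copy (IJ48873, IJ48871). */
    off_t data = 0;
    while ((data = lseek(fd, data, SEEK_DATA)) != (off_t)-1) {
        off_t hole = lseek(fd, data, SEEK_HOLE);
        if (hole == (off_t)-1)
            break;
        printf("data region: [%lld, %lld)\n",
               (long long)data, (long long)hole);
        data = hole;  /* resume the scan after this data region */
    }

    close(fd);
    return 0;
}
```

A copy tool reads only the reported data regions and recreates the holes in the destination, which is why a file system that under-reports data regions causes silent data loss rather than an error.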
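For IJ49044, the slow path is triggered by combining O_APPEND at open time with small sequential reads. A minimal sketch of that access pattern follows; the file name and read size are hypothetical.

```c
/* Hypothetical sketch of the access pattern behind IJ49044. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* O_APPEND only affects writes, but here the descriptor is also
     * used for reads, which is the combination IJ49044 covers. */
    int fd = open("/gpfs/fs1/example.dat", O_RDWR | O_APPEND);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char buf[512];          /* small sequential reads */
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        ;                   /* consume the data */

    close(fd);
    return 0;
}
```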