This document describes the authorized program analysis reports (APARs) resolved in IBM Storage Scale 5.1.9.x releases.
This document was last updated on 15 August 2024.
APAR | Severity | Description | Resolved in | Feature Tags
---|---|---|---|---
IJ51031 | High Importance | Metadata corruption on one node with folders not correctly visible; cannot cd into a directory on that node. | 5.1.9.5 | No adverse effect. This is a failsafe change
IJ51457 | Critical | If the File Audit Logging audit logs are compressed while GPFS is appending to them, the audit log can become corrupted and unrecoverable. This can happen when a compression policy is run against the audit log fileset / audit logs. | 5.1.9.5 | File Audit Logging
IJ49862 | High Importance | When the daemon restarts on a worker node, a race condition can cause the worker's local state change to take place after GNR's readmit operation, which intends to repair tracks with stale data. The delayed state change could cause the intended readmit operation to fail to repair the data on the given disks, leaving stale sectors in tracks that would have been fixed once the delayed state change took place. With more disk failures before the next cycle of scan and repair operations has a chance to repair these vtracks, this could result in data loss if the number of faults exceeds the fault tolerance of the vdisk. | 5.1.9.5 | GNR
IJ51652 | High Importance | Configuring perfmon --collectors with a non-cluster node name (e.g., a hostname that differs from the admin or daemon name) will break mmsysmon node-role detection and bring the perfmon query port down, and the GUI node will raise the gui_pmcollector_connection_failed event. | 5.1.9.5 | Performance Monitoring Tool, GUI, mmhealth thresholds, GrafanaBridge
IJ51658 | Critical | Signal 11 hit in function AclDataFile::hashInsert in acl.C, due to a race condition when adding ACLs and handling cached ACL data invalidation during node recovery or when hitting a disk error, resulting in an mmfsd daemon crash. | 5.1.9.5 | All Scale Users
IJ51363 | High Importance | Scanning a directory with policy or the Scale gpfs_ireaddir64 API has been degraded since the 5.1.3 release. | 5.1.9.5 | policy or gpfs_ireaddir64 API
IJ51704 | High Importance | Triggering recovery on an IW fileset (by running ls -l on the root of the fileset) with the afmIOFlag afmRecoveryUseFset set on it causes a deadlock, which resolves itself after roughly 10 minutes (300 retries of queueing the Getattr for the ls command). | 5.1.9.5 | AFM
IJ51705 | High Importance | 1. Introduce a new config option, afmSkipPtrash, to skip moving files to the .ptrash directory. 2. Also add an mmafmctl subcommand "emptyPtrash" to clean up the .ptrash directory without relying on rm -rf, similar to the one implemented in the --empty-ptrash flag of prefetch. | 5.1.9.5 | AFM
IJ51706 | High Importance | afmCheckRefreshDisable is today a cluster-level tunable that prevents refresh from going to the filesystem itself and returns from the dcache instead. But when tuned, it applies to all AFM filesets in the cluster. A fileset-level tunable is needed to do the same without impacting all other filesets in the cluster as it does today. | 5.1.9.5 | AFM
IJ51707 | High Importance | For some threshold events the system pushes them internally from the Threshold to the Filesystem component. Due to misaligned data the event could get suppressed. | 5.1.9.5 | System Health, perfmon (Zimon)
IJ51708 | High Importance | When the dynamic pagepool is enabled, pagepool memory shrinks slowly while a memory-consuming application is requesting memory. | 5.1.9.5 | All Scale Users
IJ51709 | High Importance | Pagepool growth is rejected due to recent pagepool change history. | 5.1.9.5 | All Scale Users
IJ51710 | High Importance | Memory allocation from the shared segment failed. | 5.1.9.5 | All Scale Users
IJ51711 | High Importance | If a symbolic-link mount is attempted on an existing symlink of a directory, it ends up creating a symbolic link with the same name as the source inside the target directory. Since the DR is mostly RO in nature, this gets an E_ROFS and prints these failures to the log. | 5.1.9.5 | AFM
IJ51712 | High Importance | mmwmi.exe is a helper utility on Windows used by various mm* administration scripts to query system settings such as IP addresses, mounted filesystems, and so on. Under certain conditions, such as active realtime scanning by security endpoints and anti-malware, the output of mmwmi is not sent to stdout and any connected pipes that depend on it. This can cause various GPFS configuration scripts, such as mmcrcluster, to fail. | 5.1.9.5 | Admin Commands
IJ51713 | High Importance | During conversion a wrong target is specified, with the protocol as jttp instead of http, leading parsePcacheTarget to find it invalid but later trying to persist a NULL to disk, where the assert goes off. | 5.1.9.5 | AFM
IJ51781 | Suggested | "mmperfmon delete" shows a usage string referencing "usage: perfkeys delete [-h]" instead of the proper usage. | 5.1.9.5 | perfmon (ZIMON)
IJ51782 | Suggested | Customers get "SyntaxWarning: invalid escape sequence" errors when "mmperfmon" is used for custom scripting. | 5.1.9.5 | perfmon (ZIMON)
IJ51783 | High Importance | Recovery is not syncing old directories. | 5.1.9.5 | AFM
IJ51784 | High Importance | mmafmcosconfig fails to create an afmcos fileset in a sudo-configured setup. | 5.1.9.5 | AFM
IJ51785 | High Importance | Not able to initialize a download when the fileset is in Dirty state. | 5.1.9.5 | AFM
IJ51786 | Critical | An AFM fileset goes into NeedsResync state due to replication of a file whose parent directory is local. | 5.1.9.5 | AFM
IJ51787 | High Importance | When a large number of secure connections are created at the same time between the mmfsd daemon instances in a Scale cluster, some of the secure connections may fail as a result of timeouts, resulting in unstable cluster operations. | 5.1.9.5 | GPFS Core
IJ51843 | High Importance | Kernel crashes with the following assert message: GPFS logAssertFailed: vinfoP->viInUse. | 5.1.9.5 | NFS exports
IJ51844 | High Importance | Newer versions of the libmount1 package installed by default on SUSE 15.6 filter out the device name from GPFS mount options, due to which the mount fails. | 5.1.9.5 | GPFS core
IJ51845 | Critical | AFM gateway node reboots due to an out-of-memory exception. There is a memory leak while doing the upload/reconcile operation in an MU mode fileset. | 5.1.9.5 | AFM
IJ50654 | High Importance | mmshutdown caused a kernel crash while calling dentry_unlink_inode, with a backtrace like this: ... #10 page_fault at ffffffff8d8012e4 #11 iput at ffffffff8cef25cc #12 dentry_unlink_inode at ffffffff8ceed5d6 #13 __dentry_kill at ffffffff8ceedb6f #14 dput at ffffffff8ceee480 #15 __fput at ffffffff8ced3bcd #16 ____fput at ffffffff8ced3d7e #17 task_work_run at ffffffff8ccbf41f #18 do_exit at ffffffff8cc9f69e | 5.1.9.5 | All Scale Users (Linux)
IJ51332 | High Importance | The GPFS daemon could assert unexpectedly with EXP(REGP != _NULL) in file alloc.C. This could occur on client nodes where there is active block allocation activity. | 5.1.9.5 | All Scale Users
IJ50480 | High Importance | Long ACL garbage collection runs on the filesystem manager can cause lock conflicts with nodes that need to retrieve ACLs during garbage collection. The conflicts resolve after garbage collection has finished. | 5.1.9.5 | All Scale Users
IJ51846 | Suggested | Due to a locale issue, a few callhome commands were gathering region-specific data, causing an error in the AoAtool while parsing this data. | 5.1.9.5 | Callhome
IJ51864 | High Importance | Crash of a node while mounting a filesystem or while starting the node. | 5.1.9.5 | Filesystem mount
IJ51908 | High Importance | When the dynamic pagepool is enabled, the pagepool may not shrink because a pagepool grow is still in progress, which results in out of memory. | 5.1.9.5 | Dynamic pagepool
IJ51909 | Suggested | There are a few occasions where error code 809 may be used inside the CCR quorum management component. Although not user actionable, the product was changed to note this in mmfs.log instead of, as had been the case, only in GPFS trace. The intent is to improve RAS in certain situations. | 5.1.9.5 | CCR
IJ51011 | Suggested | The Nessus vulnerability scanner found that HSTS communication is not enforced on mmsysmon port 9980. | 5.1.9.4 | mmsysmon on GUI/pmcollector node
IJ50232 | High Importance | The automated node expel mechanism (see references to the mmhealthPendingRPCExpelThreshold configuration parameter) uses the internal mmsdrcli to issue an expel request to a node in the home cluster. If the sdrNotifyAuthEnabled configuration parameter is set to false (not recommended), the command will fail with a message like the following: "[W] The TLS handshake with node 192.168.132.151 failed with error 410 (client side). mmsdrcli: [err 144] Connection shut down", and the expel request will fail. | 5.1.9.4 | System Health (even though the fix is not in system health itself)
IJ51036 | Medium Importance | mmadddisk and mmdeldisk will fail, triggered by signal 11. | 5.1.9.4 | disk configuration and region management
IJ51037 | Suggested | mmkeyserv returns an error when used to delete an already deleted tenant, instead of returning a success return code. | 5.1.9.4 | File System Core
IJ51057 | Medium Importance | From a Windows client, in the MMC Permissions tab on a share, the ACL listing always showed as Everyone. If a subdirectory inside a subdirectory was deleted, traversal to the inner subdirectory in a snapshot taken before the deletion showed errors. | 5.1.9.4 | SMB
IJ51148 | High Importance | find or "download all", when run on a given path, sets the time for each of the individual entities with respect to COS and ends up blocking a following revalidation from fetching actual changes to the object's metadata from the COS to the cache. | 5.1.9.4 | AFM
IJ51149 | High Importance | Due to an issue with the way mmfsckx scans compressed files and internally stores information to detect inconsistent compressed groups, mmfsckx will report and/or repair false-positive inconsistencies for compressed files. The mmfsckx output will report something like the following: !Inode 791488 snap 6 fset 6 "user file" indirect block 1 level 1 @4:13508288: disk address (ditto) in slot 0 replica 0 pointing to data block 226 code 2012 is invalid | 5.1.9.4 | mmfsckx
IJ51150 | High Importance | mmfsckx captures block allocation and deallocation information from remote client nodes or non-participating nodes that mount the file system while mmfsckx is running, and stops capturing once the file system is unmounted from these nodes. Due to an issue, mmfsckx stopped capturing such information before the unmount event fully completed, which led to mmfsckx reporting and/or repairing false-positive lost blocks, bad (incorrectly allocated) blocks, and duplicate blocks. | 5.1.9.4 | mmfsckx
IJ51252 | Suggested | The prefetch command fails but returns error code 0. | 5.1.9.4 | AFM
IJ51225 | High Importance | There is a build failure while executing the mmbuildgpl command. The failure is seen while compiling /usr/lpp/mmfs/src/gpl-linux/kx.c, due to no member named '__st_ino' in struct stat. Please refrain from upgrading to affected kernel and/or OpenShift levels until the fix is available. | 5.1.9.4 | Build / Installation
IJ51222 | Suggested | If a problem with an encryption server happens, just the Rkmid is visible in the default "mmhealth node show" view. Furthermore, there are two monitoring mechanisms: one uses an index to convey whether the main or backup server is affected, and one directly uses the hostname or IP for that. Moreover, the usual way to resolve an event with "mmhealth event resolve" was broken for that component. | 5.1.9.4 | System Health
IJ51160 | Suggested | A daemon assert gets triggered when afmLookupMapSize is set to a higher value of 32. The supported range is only 0 to 30. | 5.1.9.4 | AFM
IJ51282 | High Importance | mmrestoreconfig also restores the fileset configuration of a file system. If the cluster version (minReleaseLevel) is below 5.1.3.0, the fileset restore will fail because it tries to restore the fileset permission inherit mode even if it is the default. The permission inherit mode was not enabled before Storage Scale version 5.1.3.0. | 5.1.9.4 | Admin Commands, SOBaR
IJ51283 | Suggested | The mmchnode and mmumount commands did not clean up temporary node files in /var/mmfs/tmp. | 5.1.9.4 | Admin Commands
IJ51286 | High Importance | The GPFS daemon could unexpectedly fail with signal 11 when mounting a file system if a file system quiesce is triggered during the mount process. | 5.1.9.4 | All Scale Users
IJ51265 | Critical | It is possible for an EA overflow block to be corrupted as a result of log recovery after node failure. This can lead to loss of some extended attributes that cannot be stored in the inode. | 5.1.9.4 | All Scale Users
IJ51344 | Critical | When writing to a memory-mapped file, there is a chance that incorrect data could be written to the file before and after the targeted write range. | 5.1.9.4 | All scale users
IJ49992 | Suggested | If the local cluster nistCompliance value is off, the mmremotecluster and mmauth commands fail with an unclear error message. | 5.1.9.3 | All Scale Users
IJ50066 | Suggested | An AFM LU mode fileset from a filesystem, to a target in the same filesystem (snapshot), using the NSD backend, fails with error 1. This happens because another code fix had an unintended consequence for this code path. | 5.1.9.3 | AFM
IJ50067 | High Importance | When afmResyncVer2 is run with afmSkipResyncRecovery set to yes, the priority directories that AFM usually queues should not be queued, since such directories might exist under parents that are not yet in sync, leading to error 112. | 5.1.9.3 | AFM
IJ50068 | High Importance | This APAR addresses two issues related to NFS-Ganesha that can cause crashes. Here are the details: (gdb) bt #0 0x00007fff88239a68 in raise () #1 0x00007fff8881ffb8 in crash_handler (signo=11, info=0x7ffb42abbe48, ctx=0x7ffb42abb0d0) #3 0x00007fff888da5f4 in atomic_add_int64_t (augend=0x148, addend=1) #4 0x00007fff888da658 in atomic_inc_int64_t (var=0x148) #5 0x00007fff888de44c in _get_gsh_export_ref (a_export=0x0) #6 0x00007fff8888c6c0 in release_lock_owner (owner=0x7ffef94a1cc0) #7 0x00007fff88923e9c in nfs4_op_release_lockowner (op=0x7ffef922be60, data=0x7ffef954d290, resp=0x7ffef8629c30) #8 0x00007fff888fb810 in process_one_op (data=0x7ffef954d290, status=0x7ffb42abcdf4) #9 0x00007fff888fcc9c in nfs4_Compound (arg=0x7ffef95eec38, req=0x7ffef95ee410, res=0x7ffef8ce4b40) #10 0x00007fff88819130 in nfs_rpc_process_request (reqdata=0x7ffef95ee410, retry=false) #11 0x00007fff88819864 in nfs_rpc_valid_NFS (req=0x7ffef95ee410) #12 0x00007fff88750618 in svc_vc_decode (req=0x7ffef95ee410) #13 0x00007fff8874a8f4 in svc_request (xprt=0x7fff30039ca0, xdrs=0x7ffef95eb400) #14 0x00007fff887504ac in svc_vc_recv (xprt=0x7fff30039ca0) #15 0x00007fff8874a82c in svc_rqst_xprt_task_recv (wpe=0x7fff30039ed8) #16 0x00007fff8874b858 in svc_rqst_epoll_loop (wpe=0x10041cc5cb0) #17 0x00007fff8875b22c in work_pool_thread (arg=0x7ffdcd1047d0) #18 0x00007fff88229678 in start_thread () #19 0x00007fff880d8938 in clone () Or (gdb) bt #0 0x00007f96f58d9b8f in raise () #1 0x00007f96f75c6633 in crash_handler (signo=11, info=0x7f96ad9fc9b0, ctx=0x7f96ad9fc880) a #3 dec_nfs4_state_ref (state=0x7f9640465440) #4 0x00007f96f76762f9 in dec_state_t_ref (state=0x7f9640465440) #5 0x00007f96f767640c in nfs4_op_free_stateid (op=0x7f8dec12fba0, data=0x7f8dec1992b0, resp=0x7f8dec04ce70) #6 0x00007f96f766dbae in process_one_op (data=0x7f8dec1992b0, status=0x7f96ad9fe128) #7 0x00007f96f766ee80 in nfs4_Compound (arg=0x7f8dec110ab8, req=0x7f8dec110290, res=0x7f8dec5b7db0) #8 0x00007f96f75c17db in nfs_rpc_process_request (reqdata=0x7f8dec110290, retry=false) #9 0x00007f96f75c1cf1 in nfs_rpc_valid_NFS (req=0x7f8dec110290) #10 0x00007f96f733edfd in svc_vc_decode (req=0x7f8dec110290) #11 0x00007f96f733ac61 in svc_request (xprt=0x7f95d00c4a60, xdrs=0x7f8dec18dd00) #12 0x00007f96f733ed06 in svc_vc_recv (xprt=0x7f95d00c4a60) #13 0x00007f96f733abe1 in svc_rqst_xprt_task_recv (wpe=0x7f95d00c4c98) #14 0x00007f96f73462f6 in work_pool_thread (arg=0x7f8ddc0cc2f0) #15 0x00007f96f58cf1ca in start_thread () #16 0x00007f96f5119e73 in clone () | 5.1.9.3 | NFS-Ganesha crash followed by CES-IP failover.
IJ49856 | Critical | Multi-threaded applications that issue mmap I/O and I/O system calls concurrently can hit a deadlock on the buffer lock. This is likely not a common pattern, but the problem has been observed with database applications. | 5.1.9.3 | All Scale Users
IJ50208 | High Importance | Multi-threaded applications that issue mmap I/O and I/O system calls concurrently can hit a deadlock on the buffer lock. This is likely not a common pattern, but the problem has been observed with database applications. | 5.1.9.3 | All Scale Users
IJ50209 | Suggested | Set the security header as suggested by RFC 6797. | 5.1.9.3 | perfmon (Zimon)
IJ50210 | High Importance | With File Audit Logging (FAL) enabled, when a change to the policy file happens and the LWE garbage collector runs for FAL, there is a small window in which a deadlock can occur, with the long waiter message 'waiting for shared ThSXLock' seen for the PolicyCmdThread. | 5.1.9.3 | File Audit Logging
IJ50211 | High Importance | During a mount operation of the file system, updating LWE configuration information for File Audit Logging before the fileset metadata file (FMF) is initialized results in signal 11: NotGlobalMutexClass::acquire() + 0x10 at mastSMsg.C:44. | 5.1.9.3 | File Audit Logging
IJ50320 | Critical | An AFM fileset goes into NeedsResync state due to replication of a file whose parent directory is local. | 5.1.9.3 | AFM
IJ50321 | High Importance | When a thread is flushing the file metadata of the ACL file to disk, there is a small window in which a deadlock can occur when a different thread tries to get a Windows security descriptor, as getting the security descriptor requires reading the ACL file. | 5.1.9.3 | All Scale Users
IJ50323 | High Importance | When checking the block allocation map, mmfsckx excludes regions that are being checked or are already checked from further updates to the internal shadow map. But when checking for such excluded regions, it did not check which poolId the region belonged to. This resulted in mmfsckx not updating the shadow map for a region belonging to one pool while checking the block allocation map for the same region belonging to a different pool. This led to mmfsckx falsely marking blocks as lost blocks and then later to this assert. | 5.1.9.3 | mmfsckx
IJ50035 | High Importance | When RDMA verbsSend is enabled and the number of RDMA connections is larger than 16, a reconnect could cause a segmentation fault. | 5.1.9.3 | All Scale Users
IJ50372 | Critical | O_TRUNC is not ignored correctly after a successful file lookup during atomic_open(), so truncation can happen during the open routine, before permission checks happen. This leads to a scenario in which a user on a different node can truncate a file for which he does not have permissions. | 5.1.9.3 | All Scale Users
IJ50373 | Suggested | For certain performance monitoring operations, in the case of an error the query and response get logged. That response can be large, and logging it regularly will cause mmsysmon.log to grow rapidly. | 5.1.9.3 | System Health / perfmon (Zimon)
IJ50374 | High Importance | With File Audit Logging (FAL) enabled, when deciding to run the LWE garbage collector for FAL, an attempt to try-acquire the lock on the policy file mutex is performed. If the policy file mutex is busy, the attempt is canceled and retried on the next attempt. Upon canceling, the policy file mutex can be released without being held, leading to the log assert. | 5.1.9.3 | File Audit Logging
IJ50375 | Critical | The GPFS daemon could assert unexpectedly with: Assert exp(0) in direct.C. This could happen on the file system manager node after a node failure. | 5.1.9.3 | All Scale Users
IJ50439 | High Importance | The ts commands do not always return the correct error code, providing incorrect results to the mm commands that call them and resulting in incorrect cluster operations. | 5.1.9.3 | Core
IJ50440 | High Importance | mmfsckx fails to detect a file having an illReplicated extended attribute overflow block, and in repair mode will not mark the illReplicated flag in it. | 5.1.9.3 | mmfsckx
IJ50441 | High Importance | When scanning a compressed file, mmfsckx in some cases can incorrectly report a file as having a bad disk address. | 5.1.9.3 | mmfsckx
IJ50442 | High Importance | When scanning a file system having a corrupted snapshot, mmfsckx can cause a node assert with logAssertFailed: countCRAs() == 0 && "likely a leftover cached inode in inode0 d'tor". | 5.1.9.3 | mmfsckx
IJ50443 | High Importance | AFM policy-generated intermediate files are always put in the /var filesystem: /var/mmfs/tmp for Resync/Failover and /var/mmfs/afm for Recovery. We have seen customer setups in which /var is provisioned very small while there are other filesystems well provisioned to handle such large files, such as /opt (which IBM defaults to), or perhaps even inside the fileset. | 5.1.9.3 | AFM
IJ50463 | High Importance | Stale data may be read while "mmchdisk start" is running. | 5.1.9.3 | All Scale Users
IJ50563 | Critical | In a file system with replication configured, for a large file with more than 5000 data blocks, if there are missed updates on some data blocks due to disk failures on one replica disk, these stale replicas would not be repaired when helper nodes are involved in repairing them. | 5.1.9.3 | Scale Users
IJ50577 | High Importance | When there is a TCP network error, we try to reconnect the TCP connection, but the reconnect fails with a "Connection timed out" error, which results in a node expel. | 5.1.9.3 | All Scale Users
IJ50708 | Critical | In a file system with replication configured, the missed-update info set in the disk address could be overwritten by the log recovery process, leading to stale data being read and to the start disk process being unable to repair such stale replicas. | 5.1.9.3 | All Operating Systems
IJ50794 | High Importance | Symbolic links may be incorrectly deleted during offline mmfsck, which may cause undetected data loss. | 5.1.9.3 | General file system, creation of symbolic links.
IJ50890 | Suggested | Metadata evict gave an error from the second attempt onwards. | 5.1.9.3 | AFM
IJ49762 | High Importance | mmlsquota -d can cause the GPFS daemon to crash. | 5.1.9.3 | Quotas
IJ49856 | Critical | An unexpected long waiter could appear with a fetch thread waiting on FetchFlowControlCondvar with reason 'wait for buffer for fetch'. This could happen when the workload causes all prefetch/writebehind threads to be assigned to prefetching. | 5.1.9.3 | All Scale Users
IJ50061 | High Importance | When mmfsckx is run on a file system such that it requires multiple scan passes to complete, mmfsckx can abort with reason "Assert failed "nEnqueuedNodes > 1"." | 5.1.9.3 | mmfsckx
IJ49583 | Suggested | When an RDMA connection to a remote node has to be shut down due to network errors (e.g., the network link goes down), it can sometimes happen that the affected RDMA connection is not closed and all resources assigned to this RDMA connection (memory, VERBS Queue Pair, ...) are not freed. | 5.1.9.2 | RDMA
IJ49584 | High Importance | Spectrum Scale Erasure Code Edition interacts with third-party software/hardware APIs for internal disk enclosure management. If the management interface becomes degraded and starts to hang commands in the kernel, the hang may also block communication handling threads. This causes a node to fail to renew its lease, causing it to be fenced off from the rest of the cluster, which may lead to additional outages. A previous APAR was issued for this in 5.1.4, but that fix was incomplete. | 5.1.9.2 | ESS/GNR
IJ49585 | Suggested | If a tiebreaker disk has outdated version info, ccrrestore can abort with Python3 errors. | 5.1.9.2 | CCR
IJ49659 | Critical | AFM sets pcache attributes on the inode after reading an uncached file from home, modifying the inode while the filesystem is quiesced. An assert is hit because of this. | 5.1.9.2 | AFM
IJ49586 | High Importance | File systems that have a large number of independent filesets usually tend to have a sparse inode space. If mmfsckx is run on such a file system with a large sparse inode space, it takes longer to run because it unnecessarily parses inode allocation map segments pointing to sparse inode spaces instead of skipping them. | 5.1.9.2 | FSCKX
IJ49587 | High Importance | When building an NFSv4 ACL from a POSIX access and default ACL of a directory, if an update or store of an ACL to another file or directory happens between the retrievals of the access ACL and the default ACL, a deadlock can occur, and the long waiter message "waiting for exclusive NF ThSXLock for readers to finish" is seen. | 5.1.9.2 | All Scale Users
IJ49660 | High Importance | When replicating over NFS with KRB plus AD: if a user who is not included in the AD at the primary site creates a file, the file is replicated as root to the DR first, and then a Setattr is attempted with the user/group to which the file/dir belongs. If the user does not exist in AD and is local to the primary cluster alone, NFS prevents the Setattr, and so the whole Create operation from primary to DR gets stuck with E_INVAL. | 5.1.9.2 | AFM
IJ49661 | Suggested | Cluster health shows "healthy" for disabled CES services. | 5.1.9.2 | System Health
IJ49662 | Suggested | In certain cases the network status was not accounted for correctly, which could result in "stuck" events like cluster_connections_bad and cluster_connections_down. | 5.1.9.2 | System Health
IJ49710 | Suggested | For a failed callhome upload, remove the job from the queue if the DC package is not available. | 5.1.9.2 | Callhome
IJ49699 | Suggested | Sometimes a callhome upload fails due to a curl(52) error. | 5.1.9.2 | Callhome
IJ49700 | Suggested | Sometimes an exception appears in the logs while the callhome sendfile progress is converted to an integer. | 5.1.9.2 | Callhome
IJ49701 | High Importance | Processes hang due to deadlocks in the Storage Scale cluster. There are deadlock notifications on multiple nodes, triggered by 'long waiter' events on the nodes. | 5.1.9.2 | Regular file read flow in kernel version >= 5.14
IJ49714 | Suggested | Creating an AFM fileset with more than 32 afmNumFlushThreads gives an error. | 5.1.9.2 | AFM
IJ49715 | Suggested | rpc.statd may be terminated or crash due to statd-related issues. In these instances, the NFSv3 client will relinquish control over NFSv3 exports, and the GPFS health monitor will indicate 'statd_down'. | 5.1.9.2 | NFS
IJ49580 | High Importance | When the device file for an NSD disk goes offline or becomes unattached from a node, I/O issued from that node fails with a "No such device or address" error (6), even when other NSD servers are defined and available to service the I/O request. | 5.1.9.2 | All Scale Users
IJ49770 | High Importance | An AFM object fileset fails to pull new objects from the S3/Azure store when the object fileset is exported via nfs-ganesha and readdir is performed over the NFS mount. Performing the readdir on the fileset directly pulls the entries correctly. | 5.1.9.2 | AFM
IJ49771 | High Importance | AFM out-of-band metadata prefetch hangs if an orphan file already exists for an entry in the list file. AFM orphan files have an inode allocated but not initialized. | 5.1.9.2 | AFM
IJ49772 | High Importance | Daemon assert going off: otherP == NULL in clicmd.C, resulting in a daemon restart. | 5.1.9.2 | All
IJ49792 | High Importance | Add a config option to set nconnect for the NFS mount. | 5.1.9.2 | AFM
IJ49793 | Suggested | Prefetch is not generating the afmPrepopEnd callback event. | 5.1.9.2 | AFM
IJ49794 | High Importance | Prefix downloads fail, or read or ls fails, if the prefix option is used with download or fileset creation. | 5.1.9.2 | AFM
IJ49795 | Suggested | A rename is not reflected to COS automatically if afmMUAutoRemove is configured. | 5.1.9.2 | AFM
IJ49796 | High Importance | AFM COS to GCS hangs the file system on GCS errors if the credentials do not have enough permission. | 5.1.9.2 | AFM
IJ49851 | High Importance | A crash is observed in read_pages when called from page_cache_ra_unbound on SLES with kernel version >= 5.14. | 5.1.9.2 | Regular file read flow in kernel version >= 5.14
IJ49852 | High Importance | With showNonZeroBlockCountForNonEmptyFiles set, the block count is always shown as one to report a fake block count. This is a workaround for faulty applications (e.g., GNU tar --sparse) that erroneously assume zero st_blocks means the file contains no nonzero bytes. | 5.1.9.2 | AFM
IJ49142 | Suggested | When running a workload on Windows that creates and deletes lots of files and directories in a short span, the inode number assigned to GPFS objects may be reused. If a stale inode entry somehow persists in the GPFS cache due to in-flight hold counts, a conflict between the old and new object types can cause this stale entry to result in a file-or-directory-not-found error. | 5.1.9.1 | All Scale Users
IJ49144 | High Importance | When a dependent fileset is created inline using afmOnlineDepFset, or created offline as in the earlier supported method, enabling mmafmconfig is mandated so that .afm/.afmtrash is present at the DR site inside the dependent fileset, to handle the conflict renames that AFM does. mmafmconfig enable at the DR on the dependent fileset also creates the .afmctl file, which has the CTL attribute enabled and disallows anyone from removing it except through mmafmlocal. This causes a restore to a snapshot without the dependent fileset to fail when removing the .afmctl inside the dependent fileset. The fix is to enable mmafmconfig .afm/.afmtrash without creating the .afmctl file, which is not needed inside dependent filesets anyway. | 5.1.9.1 | AFM
IJ49145 | High Importance | When failover is performed to an entirely new secondary fileset at the DR within the same filesystem as the previous target secondary fileset, the dependent fileset path we request to link under should change too. For this, the existing dependent fileset is unlinked, and when it is then linked under the new path, E_EXIST is returned because the dependent fileset exists; primary later tries to look up remoteAttrs and fails the queue. The fix is to return E_EXIST only if the fileset exists in the linked state, so that the follow-up operation from primary to build remote attributes succeeds. | 5.1.9.1 | AFM
IJ49151 | High Importance | Memory corruption can happen if an application using the GPFS_FINE_GRAIN_WRITE_SHARING hint runs on a file system whose NSD servers have different endianness than the client node the application is running on. | 5.1.9.1 | Data shipping
IJ49152 | High Importance | When running mmexpelnode to expel the node on which the command is running, this assert may be hit. | 5.1.9.1 | All Scale Users
IJ49044 | High Importance | When a file is opened with the O_APPEND flag, sequential small-read performance is poor. | 5.1.9.1 | All Scale Users
IJ49154 | Critical | The GPFS daemon could fail unexpectedly with an assert when handling disk address changes. This could happen when the number of blocks in a file becomes very large and causes a variable used in an internal calculation to overflow. This is more likely to happen on file systems where the block size is very small. | 5.1.9.1 | All Scale Users
IJ49169 | High Importance | AFM metadata prefetch does not preserve ctime on files if they are migrated at home. This causes a ctime mismatch between cache and home. | 5.1.9.1 | AFM
IJ49196 | High Importance | If a COS bucket has a file object and a directory object with the same name, by default the file objects were downloaded even when the customer requirement was to download the directory content instead. | 5.1.9.1 | AFM
IJ49197 | Suggested | Exception in mmsysmonitor.log because some files were being removed during mmcallhome data collection. | 5.1.9.1 | Callhome
IJ49198 | Suggested | mmcallhome SendFile: progress percentage not updated. | 5.1.9.1 | Callhome
IJ49216 | High Importance | The quota manager/client node may assert during a per-fileset quota check when there is a being-deleted inode. | 5.1.9.1 | Quota
IJ49135 | Critical | The assert "logAssertFailed: oldDA1Found[i].compAddr(synched1[I])" goes off, resulting in an mmfsd daemon crash, and finally the file system may not be mountable on any node. | 5.1.9.0 | Compression
IJ48873 | Critical | File data loss when copying or archiving data from migrated files (e.g., using a "cp" or "tar" command that supports detecting sparse holes in source files with the lseek(2) interface). | 5.1.9.0 | DMAPI
IJ48871 | Critical | File data loss when copying or archiving data from snapshot and clone files (e.g., using a "cp" or "tar" command that supports detecting sparse holes in source files with the lseek(2) interface). | 5.1.9.0 | Snapshot and clone files
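
To check whether a fix listed above applies to a given cluster, compare the "Resolved in" column against the build level of the running daemon. The shell sketch below is illustrative only: mmdiag --version is a standard Storage Scale command, but the apars.txt file name is an assumption (it presumes this table has been saved locally to that file), and the grep patterns simply match the pipe-delimited columns used in this document.

```
# Show the build level of the GPFS daemon running on this node.
mmdiag --version

# List every APAR resolved in a given PTF level, e.g. 5.1.9.5
# (assumes this table was saved locally as apars.txt).
grep -F '| 5.1.9.5 |' apars.txt

# Narrow the list to the Critical fixes delivered in that level.
grep -F '| 5.1.9.5 |' apars.txt | grep -F '| Critical |'
```

If the running build level is older than the "Resolved in" value for an APAR of interest, the corresponding fix is not yet installed on that node.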