From: Paul <nospam@needed.invalid>
Subject: Re: A simple way to transfer photos from your phone to Windows without installing anything on either
Full headers:
From: Paul <nospam@needed.invalid>
Subject: Re: A simple way to transfer photos from your phone to Windows without
installing anything on either
Date: Sat, 24 Feb 2018 15:52:42 -0500
Organization: A noiseless patient Spider
Lines: 102
Message-ID: <p6sjaq$q02$>
References: <1pat6bvdosd5h$.1v0zueckye1z0$> <N8CjC.87760$CZ2.56323@fx39.iad> <k1708hh1x1bm.4u71bsj0gbj3$> <p6n7mu$cuh$> <19gis5r2t3woe$.15ka8hlpcnip0$> <9K3kC.89577$CZ2.67163@fx39.iad> <1dxxhzinci9qd.hpx5dw75usol$> <p6r6ge$5f4$> <acakC.112774$mJ1.26756@fx13.fr7> <p6s785$nht$> <v9lfsboe3dmc.1o44r2zk1zgx8$>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Sat, 24 Feb 2018 20:52:42 -0000 (UTC)
Injection-Info:; posting-host="e5bfb4299a1cf27184e61bae5ae05121";
logging-data="26626"; mail-complaints-to="";posting-account="U2FsdGVkX1/VclMx3zlllrO+8zvGbFjrpSnNpTVMs5M="
User-Agent: Ratcatcher/ (Windows/20130802)
In-Reply-To: <v9lfsboe3dmc.1o44r2zk1zgx8$>
Cancel-Lock: sha1:J+i5P4JX2jkxDj93h6tqV1ihK/k=
ultred ragnusen wrote:
> Paul <nospam@needed.invalid> wrote:
>>> Once I burn the ISO to a disk will it be 'bootable' or will additional 
>>> action be required first?
>> It requires dancing a jig on one foot.
> The Tier 2 Microsoft support person at +1-800-642-7676 took control of
> another Windows 10 Pro system to download, burn, test, and run the same
> sequence of repair that we ran (and failed at) using the bricked Windows 10
> Pro recovery console.
> For the data, Knoppix worked just fine, but I am getting a very common
> error from Knoppix on files that shouldn't have that error, where, when I
> google for the error, NONE of the common causes can possibly be why I'm
> getting that error.
>  Error splicing file: Value too large for defined data type.

One series of threads I could find blamed the cause on

    Ubuntu is just not building gcc with -D_FILE_OFFSET_BITS=64

which causes the 64-bit routines for file parameters to be used
automatically. You can declare such things explicitly when programming,
or, for a legacy program, passing -D_FILE_OFFSET_BITS=64 at compile
time is an attempt to fix it automatically.


The inode number in the example is huge.

# on cifs mount...
19656 open("grape.c", O_RDONLY|O_NOCTTY) = 3
19656 fstat64(3, {st_dev=makedev(0, 23), st_ino=145241087983005616, <=== not a normal inode
                   st_mode=S_IFREG|0755, st_nlink=1, st_uid=3872,
                   st_gid=1000, st_blksize=16384, st_blocks=1, st_size=25,
                   st_atime=2009/10/18-19:13:16, st_mtime=2009/10/18-19:00:51,
                   st_ctime=2009/10/18-22:31:53}) = 0
19656 close(3) = 0

If we convert that number to hex, it's 0x020400000004AFB0.
It's remotely possible the inode number is actually 4AFB0
and the upper portion is "noise" from an uninitialized
stack parameter or memory location.

That's probably not the only root cause, but I wanted
to at least see an example of what they might be
complaining about.

In Linux, when NTFS is mounted, stat() results are faked
to make Linux "comfortable" with the IFS being mounted.
The Linux inode number is actually formulated from the
#filenum of a file in $MFT, so the parameter in fact
has a traceable origin. If you saw the errant
inode number in that case, you might be able to look it up
in the $MFT and see a "match" for the lower portion
of the number (the 4AFB0 part).

Since you say you're staying "on-platform" and not using
SAMBA/CIFS for this transfer, the result is highly
unusual. I've never seen this error in all the times
I've tried things with various Linux distros. I might
even be convinced to run a memory test as my first step.

After the memtest completed one pass successfully,
I would change distros. And move on.


The other possibility is that the source disk is damaged
somehow. But the way Windows handles filenum, it doesn't
allow the number to grow and grow. When you delete a file,
the "slot" is available for the next file creation. This
helps to keep the "epoch" of filenum values low. While
the filenum field is likely a large one (to suit the
maximum number of files NTFS is declared, per Wikipedia,
to support), users probably never see filenum
values remotely approaching the max_value.

On my Win10 C: drive with the build trees in the user folder,
the stats (as collected by the old Win2K-era utility nfi.exe) are

Number of files:  1318185

Last file:

File 1341804


So the highest #filenum (1341804) is not even remotely close
to being a 64 bit number in magnitude. And I don't even know
if a corruption on the source side could be interpreted that