The Education Forum

So I will give it one last try…..


Recommended Posts

"So I thought that proceeding like I did, using undisputed photo evidence first to validate the process before moving to undiscovered images, was the right approach."

We are simply trying to understand your processes and confirm that DATA is not being ADDED until an image that is acceptable is produced….

Your conclusions are not yet supportable since you fail to understand that your little IMAGE ENHANCER program, Kneson, ADDS DATA…

http://www.imagener.com/help/unsharp.html

Resharp Function Sharpness Control

All digital images can benefit from being sharpened at some stage in their lives. This is especially true for enlarged images, but enlarged or not, all captured images - digital or analog - suffer from blurring or softening of detail in some way. This is precisely the motivation behind Kneson Software’s addition of the Resharp function in Imagener Professional and Imagener Unlimited. However, our customers have been asking for more information about this function as use of it does require a certain level of skill and knowledge to master.

Blurring occurs when the representation of an object is at a lower contrast or retains less detail than is present in the original. Sharpening compensates for this loss by improving the visibility of the information in the image. It works by applying a matrix of numbers over an array of the pixels in the image, with the matrix centered over one pixel at a time. Sharpening can add image data that improves the appearance or impression of sharpness.

This function is formally known as "Unsharp Mask." We named it "Resharp" to avoid any confusion, but the technique itself predates digital pictures: it began as a darkroom technique for improving the sharpness of paper prints.

There are three variables to the function: Amount, Radius, and Threshold. Each variable interacts with the others, so it is possible to obtain nearly identical effects with very different settings. Keep the following characteristics in mind:

Amount

•A measure of the strength of sharpening, roughly as a percentage of the increase in edge contrast.

•Works best within the 50-200% range (enter 50 up to 200 in the amount box in Imagener).

Radius

•A measure of the number of pixels over which the function operates.

•Needs to be set with care as radius has the greatest effect.

•A radius of three may in fact cover seven or more pixels.

•In general, low figures give crisp edges; larger figures produce broader edges, and increase overall contrast.

Threshold

•Measures the minimum difference in value between neighboring pixels that the function will operate on.

•Is based on 255 levels and therefore will only accept a numerical value of 0 to 255.

•Zero threshold tells the function to operate over the entire image.

•A value of 127 (about half of the 255 levels) will cause sharpening to occur only where pixels are next to other pixels that are 50% lighter or darker.
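For readers who want to see the mechanics, the three controls above can be sketched in a few lines (a minimal pure-NumPy sketch of a generic unsharp mask, not Kneson's actual code; the function name and toy data are mine):

```python
import numpy as np

def unsharp_mask(img, amount=1.5, radius=1, threshold=0):
    """Minimal unsharp mask on a 2-D grayscale array (values 0-255).

    amount    -- strength, e.g. 1.5 == 150% edge-contrast boost
    radius    -- half-width of the box blur, in pixels
    threshold -- minimum |original - blurred| difference (0-255)
                 below which no sharpening is applied
    """
    img = img.astype(float)
    k = 2 * radius + 1
    # Box blur: average a (k x k) neighbourhood, edges padded by replication.
    padded = np.pad(img, radius, mode="edge")
    blurred = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= k * k
    diff = img - blurred                  # the "unsharp" high-pass layer
    mask = np.abs(diff) >= threshold      # threshold gate
    out = img + amount * diff * mask      # boost only where the gate passes
    return np.clip(out, 0, 255).astype(np.uint8)

# A hard edge: the sharpened version overshoots on both sides of it,
# producing pixel values that never existed in the source image.
edge = np.tile(np.array([50] * 4 + [200] * 4, dtype=np.uint8), (8, 1))
sharp = unsharp_mask(edge, amount=1.0, radius=1)
```

The overshoot (0 and 250 where the source only ever held 50 and 200) is precisely the sense in which sharpening "adds data": the new values are computed, not recorded.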

You wrote:

"the process (this is crucial) is NOT based on an IMAGE processing approach. It is on the contrary based on a DATA processing approach, meaning that it is not about ameliorating the visual content of the image as seen by the human eye, but rather about extracting the core information content of the image."

We GET IT Franz… again see Fractals-math creating images.

Here is your enlargement method:

"Kneson Unlimited Enlargement method: An interpolation technology that transforms images into vectors, allowing outstanding image enlargement clarity."

There is OBVIOUSLY more info/data/programming in the Vector than in the bitmapped image, and as it enlarges it adds info to retain the original look… but this is an ARTIFICIAL creation of NEW DATA….

http://mchobe04.wordpress.com/2011/02/09/bitmap-and-vector/

http://vectormagic.com/home/comparisons
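The interpolation point can be made concrete with a toy sketch (the function name and data are mine, not taken from the linked pages; linear interpolation stands in for whatever scheme a given enlarger uses):

```python
import numpy as np

def upscale_linear_1d(row, factor=2):
    """Enlarge a 1-D pixel row by linear interpolation -- the simplest
    case of what any bitmap enlarger must do."""
    n = len(row)
    # Resampled positions expressed in the original's coordinate space.
    xs = np.linspace(0, n - 1, n * factor)
    return np.interp(xs, np.arange(n), row)

row = np.array([0.0, 100.0])   # the source holds exactly two pixel values
big = upscale_linear_1d(row, factor=4)
# The 8-sample enlargement now contains six values that appear nowhere
# in the source: manufactured, i.e. interpolated, data.
```

Scaled 4x, the two-pixel row contains six brand-new in-between values; any enlarger, vector-based or not, has to manufacture those values somehow.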

Until you provide even a test file of the LAYERS you are discussing, there is little more to discuss.

...sorry, I had missed it...

Just tell me how I can send you a file....

Post some of them right here please, I’m not the only one interested in this… show us 5 KEY layers and the result of the math on those layers

Thanks

btw - djosephs@calottery.com is where you can send the larger files...

I will look for the Bond 4 images you posted... and compare again...

Link to comment
Share on other sites


*"this same phenomenon can be verified when men are watching wet t-shirt contests, where throwing water on the textile allows for showing very precise information on what is actually behind it. Access to this newly available information is usually received with an enthusiastic response by observers".

The added data (water) reveals real, coherent and verifiable information present below the original layer, that would not be visible without this specific "interpolation"...

The fact that what is seen may be natural or artificial is a different point... :ph34r:

uh, not so much Franz... While I appreciate the imagery...

ADDING WATER changes the original pixels to such an extent that they are REPLACED by other REAL pixels that can now bleed thru and be seen

This is NOT the process you are describing... no matter how many iterations you go thru, you will NEVER be able to "remove" a layer of clothing on a photo and depict what is REALLY there...

but only what the original pixels SUGGEST is there and how the math makes up the difference... you've CREATED a new image Franz... not enhanced the old one.

Tell you what then Franz... since you believe your process is similar to the wet T-shirt example... Use your process to tell us what is holding the bag up... what is Montgomery holding in his hand that extends into the bag and keeps it upright...

thanks

bagfullshot-nowriting.jpg


Franz, since you are working off digital images: each pixel of course contains no more information than that which is coded for that pixel. Any blowup of that creates however many new interpolated points of data (whatever algorithm you use) that are not in the original image, which itself probably differs from the real scene in the first place, for the same reasons.

However, there are some features of the technique you seem to be using that are interesting to me, particularly the bit about what gets reinforced and what is washed out in the process. I don't know if you saw it, but in a post I did a panorama of a gif by Chris using that sort of transparent layering, so for example one can hardly see a trace of Zapruder but a clear image of Sitzman. It just means that whatever moved leaves less of an impression and is 'masked' by the repetition of the static.
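The washing-out effect described above can be reproduced numerically (a toy sketch with synthetic "frames", not Chris's actual gif; all numbers are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "frames": a static feature (bright column 2) present in every
# frame, plus a moving feature that occupies a different column each
# frame. Both sit on top of random noise.
n_frames, width = 50, 10
frames = rng.normal(0, 10, size=(n_frames, width))
frames[:, 2] += 100                  # static: same place every frame
for i in range(n_frames):
    frames[i, 3 + i % 6] += 100      # moving: smeared over columns 3-8

# Transparent layering is, numerically, just averaging the frames.
composite = frames.mean(axis=0)
```

In the composite, the static column keeps nearly its full brightness while the moving feature is diluted to roughly one sixth of it: exactly the Sitzman-vs-Zapruder effect.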

..hi John, very much honoured...

I agree with all the premises you make:

*no more information can be found than has been originally recorded

*interpolations will generate information that is not present in the original data set, and thus, though based on known and accepted mathematical formulas, has an objective value less than 1 / true

You have of course understood that I am very clumsily trying to explain, in layman's terms, a simple logical path based on what I call "objective information resilience", which I have tried to explain here using different analogies. It only means that real data will manifest more coherently / frequently, in a correlated manner, than random noise will. The more layers of information are added, the more information is gained by coalescing / strengthening.

let's say we record different finite data sets of a human hand, stretched on a table:

*with a HD camera (showing HD details of skin and flesh, nails and complexion, body hair, etc)

*with same but using an infrared filter setting (showing heat / energy locations and channels)

*with an x-ray machine (showing bony structures)

*with a biological scanner (showing muscles, fleshy tissues, blood vessels, etc)

Now all 4 records are accurate (within known limitations, etc) representations of the reality. But actually, none has more value than the others in pretending to be more accurate than its competitors. They have equal objective value as pertains to the observer: they are "true", all of them.

Now, how 4 different data sets extrapolated from identical data (our hand here) could be "true" while blatantly carrying some markedly divergent information is what intrigued me at the start. I thought it was a really interesting starting question.

My reasoning, which underlies the process, was as follows: what we call truth / objective experience might be based essentially on resilience of correlated information, with an uncertainty margin that can be lowered by increasing the number of iterations.

For instance, if we add the 4 images of our hand above, what we will get is a composite / interpolation of the whole four data sets, expressed as one data set combined, and carrying the information previously carried individually by each set.

Now, will adding 4 different "truths" give us more truth?

The process was simply designed, at the start, to try and check this interesting hypothesis.

What I have found, and am trying to present here, shows that the process works: if you compare the composite with any single data set, the composite will contain much more information, period.

You will see correlations between muscles and heat / energy, connections between muscles and blood vessels and bones, while conserving data pertaining to contours and surface aspects of the skin.

You will have much richer access (better understanding / knowledge) to the objective information content pertaining to the data set being analyzed (the hand...).

Each individual layer of information will be "weaker" in the composite than in the original, but will still be present in a degraded form. It will be weaker because it will be correlated with the other information available in the other layers, and because of the various "readings", it will not be identically duplicated in terms of data content.

But some part of the information will correlate (in a more or less explicit form), because it is "true", and will be present under one form or another, as we have seen: the core information about the hand and finger shapes and structures is correlated in all 4 photographs of that hand (for instance the finger shapes seen in the HD picture will be corroborated by the finger bones revealed by the x-ray): if you interpolate them, that correlated information will stand out from the rest of the data, which will be less correlated.
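There is an uncontroversial core to this correlation argument: averaging N independent noisy recordings of one signal shrinks the uncorrelated noise by roughly the square root of N while leaving the shared (correlated) part intact. A toy numerical sketch of the four-layer "hand" idea (my own synthetic data, not the actual layers):

```python
import numpy as np

rng = np.random.default_rng(1)

# Four independent "recordings" that share one underlying signal
# (the correlated part) but each add their own uncorrelated noise.
signal = np.sin(np.linspace(0, 4 * np.pi, 500))
layers = [signal + rng.normal(0, 1.0, 500) for _ in range(4)]

# The composite is a plain average of the four layers.
composite = np.mean(layers, axis=0)

def rms_error(x):
    """Root-mean-square deviation from the true underlying signal."""
    return float(np.sqrt(np.mean((x - signal) ** 2)))
```

With 4 layers the composite's RMS error drops to about half a single layer's (1/sqrt(4)). This is the one sense in which "adding layers gains information" is literally true; it says nothing about recovering detail that no layer recorded.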

Actually, though I think Dave's term "bleeding thru layers" carries some blurry meaning that does not, I think, correspond to most of the images I am posting here, it actually has some good pedagogic value.

I notice that is also the point that has caught your attention, and rightly so, because that is actually the main point.

I have not seen the GIF you mention, but would like to if that's possible.

As I explained, I am here to share not only the results, but the method: if you have some specific file you would want to have processed, just let me know...

Edited by Christian Frantz Toussay

"uh, not so much Franz... While I appreciate the imagery..."

...told you it is possible to discuss this seriously with no hard feelings B) : it is now part of History. People interested in this have their reasons, and there is no need for aggressiveness in dissenting opinions. I welcome all points of view and analyses.

You are actually what I am looking for here: somebody with a dissenting opinion who can ask legitimate questions, based on sound arguments, about what I have found, and maybe offer other admissible explanations. That's the point of coming here for me, actually...

"ADDING WATER changes the original pixels to such an extent that they are REPLACED by other REAL pixels that can now bleed thru and be seen"

...that's the limit of analogies, I am sorry for the confusion: there is no water on the pixel. In the example I took, it is added to the shirt.

And the textile is not "replaced": it is only interpolated with water, and thus "modified". But it is not turned into a patch of previously inexistent matter: it is still textile, whose characteristics relating to human vision have been slightly modified by the temporary addition of water. It is still clothing...

It is not, in any case whatsoever, a representation of information "coming out of nowhere": it is "an interpretation" of the previous data, simply interpolated with the original. It is based on objective data that might lose some definition through the process, that's agreed, but will still keep the core information available: shape, color, style, you name it.

I think that makes a big difference, and this seems to be the point where we disagree...

"Tell you what then Franz... since you believe your process is similiar to the wet Tshirt example... Use your process to tell us what is holding the bag up... what is Montgomery holding in his had that extends into the bag and keeps it upright..."

..I have already explained above that you have misunderstood my analogy, I should have taken a better one...

Now I like the second part of your argument better.

"Use your process to tell us what is holding the bag up... what is Montgomery holding in his had that extends into the bag and keeps it upright..."

First thing I will tell you, of course, is I don't know, because I have never worked on it.

I had actually never known that this particular picture was of any interest to those who study the photographic record. Let me know what the question at stake is here (just curious): a second gun being carried out of the building after the Carcano? I am not aware of this fine point, so let me know...

You seem to imply that I am claiming that the process can go thru physical matter and reveal information that is hidden from view (note that this is actually a true scientific phenomenon, supported by all sorts of examples, as given in my analogy with the hand pictures).

This phenomenon is true, so the case is closed from that side of the argument (x-rays and all, see above), but it is not at all what I claim the process is doing, so this is not, I believe, a valid argument either.

I am saying only that the process can go thru different levels of information, and interpolate them, that's all.

Your argument would be valid if I showed you a "stripped off" version of Bond 4, showing the man bare chested. That is not the case.

Now your question about what's in the bag may be of interest, I don't know the subject.

I will say this: you used the term of "blur", and I explained that I actually quite like it, despite its limitations. So I will use your term to answer your question:

*for the process to be able to pick up information pertaining to the object inside the bag, some of the information pertaining to the object hidden from view will have to "filter" / "blur" thru the upper level of information (the outside surface of the bag). This, of course, is feasible because the physical presence of the hidden object will interact with the physical appearance of the bag, which translates into the way photons will bounce back off the outside surface of the paper and record data on the photo support.

The object, where in contact with the information layer "hiding" it (the inside of the bag), will "blur" some of its data content to the upper level (the outside of the bag), although in a much degraded form.

Again, a good analogy, I think of the process... ;)

So I would say that what you are asking for depends only on the kind of information that could be extracted from this picture. My guess would be that, based on how the process works and the image itself, I am doubtful that I can be of much help on that one. Might be worth a try, though, to try to determine very basic postulated hypotheses, like apparent length or width, but that would appear to be much work for very uncertain results.

The data you would have to work on would be a very, very much degraded form of the data you want to analyze. There would seem to be too much uncertain variable data to begin with to work safely here, in my view...

let's take again the wet t-shirt imagery:

You can see thru the textile because information pertaining to the body underneath will "blur" to the upper level and manifest itself on the shirt. This is possible only because the 2 sets of data (the body and the shirt) are contiguous. Basically, it could be said that they are actually interpolating with each other. This translates into information that is only made more manifest by the temporary addition of water, simply because it will facilitate / strengthen the interpolation.

Now you will not have the same result if the girl is five feet behind a bed sheet, because the interpolation will not be there in the first place.

So it could also be expressed, tentatively, that the process is maybe only retrieving already present interpolations of information stored in the original data, and verifying if, by corroborating and differentiating them, it can bring out more internal coherence as to what the overall data means to the human eye...

Edited by Christian Frantz Toussay

"We are simply trying to understand your processes and confirm that DATA is not being ADDED until an image that is acceptable is produced…."

..that's OK with me, that's what I came here for in the first place...

"Your conclusions are not yet supportable since you fail to understand that your little IMAGE ENHANCER program, Kneson, ADDS DATA…"

I would almost agree, for the same reasons, with the first part of your sentence, but not the second one.

The conclusions are not all supported yet by independent crosschecking and, as explained, that is why I came here.

But actually, some of them are: Black Dog Man IS supported by the HSCA findings.

We only extracted his identity (going one level up in data content)

So the process DID NOT create the BDM image in Belzner that I have shown here. It only verified the HSCA findings. That's a corroboration.

Now of course, corroborations work both ways, as you know... B)

And it can of course be argued, when we take the "supported argument" angle, that:

*Bond 4 is also a corroboration of Belzner, since it shows not identical but very coherent (clothing, complexion, location) data as compared to Belzner.

That is a lower correlation than when you work with identical sets of data, but it is still another level of correlation.

*Now, of course, if you want to factor in the witness data as well, you will still get another level of corroboration (within the known limits of human perception of course, as already explained).

So I would say that stating that what I present here is totally unsupported by the rest of the available evidence is not exactly true...

"since you fail to understand that your little IMAGE ENHANCER program, Kneson, ADDS DATA…"

I understand this completely: that is actually one of the basic elements of the process.

I thought I explained very clearly, with various sets of widely different examples: the human experience IS based on interpolation of data. There is just no way around it.

You seem to have an ideal picture of objective reality that does not actually exist. Reality is fuzzy, because it is ultimately all based on extrapolations in some way...

We differ obviously on how this extrapolated information can be analyzed: your point is that since it has been interpolated / "altered", it does not hold any objective data value any longer. It no longer exists as a valid set of data that might carry to us objective information.

I think it still does, just like the t-shirts don't melt away when sprinkled with water (I heard some actually do, but that's a different story), nor turn into a fullmetal jacket....

I would express your position (again, correct me if I got you wrong) as saying: "This interpolated data has a data content value of, by definition, 0 on a scale of 100, because all the original data has been totally, systematically, utterly destroyed."

I am only saying that not all data content is destroyed, and the information value will vary between 0 and 100, depending on the number and characteristics of the interpolations (the variables), but will have a propensity to a value above 0 regardless of the variables, thus manifesting itself in a way that can easily be recorded, to be used for other interpolations.

"show us 5 KEY layers and the result of the math on those layers

Thanks"

... I can do no math, sorry: I explained from the start that this is just a layman's experiment, with some intriguing results, of a process that initially was an intellectual game about how to retrieve weak signals in market research, i.e. corroborating good info and eliminating bad info (and this has nothing, really, to do with photo enhancement...).

It would appear, though again I am no specialist, that the maths involved in the main operational part (the interpolation process) are not very sophisticated: add, subtract, multiply, differentiate.

The software is doing the math (which I assume is understood by specialists of the field), not me. I am only formulating requests pertaining to interpolations of data...

On the other hand, I think I have explained in some detail the process and the reasoning behind it, in logical steps that can be checked, so I thought that what I cannot explain in scientific / mathematical terms, somebody else with more expertise in those fields could.

I don't know what you call 5 key layers. I have posted 4 layers from Bond 4, which I think show the data increment between evolving versions.

As I explained, it may be possible to compute a formula based on known variables (data value and iteration numbers), and run the odds that version 271, with all its fine details, could be generated randomly within less than 300 generations.

Like I say, I am no scientist, but logic would indicate to me this could be calculated....

But I think that you would need more than 4 layers anyway to really crosscheck this soundly, so I think it would be better to send you a complete file instead. I will check the size and let you know, and then you can explain how I can make it available...

Tks...

Edited by Christian Frantz Toussay

... as explained, this is not a well-prepared presentation: I will continue to try addressing the technical aspects as best as I can, of course, but since we have, I believe, at least defined clearly what the argument is here (and it is still an argument, we agree here...), based on known, undisputed images, I would like to now show examples of the process applied to known areas of interest, for which we have not been able to extract richer content up to now.

There are several of them, and all interesting, I think.


"... I would like to now show examples of the process applied to known areas of interest, for which we have not been able to extract richer content up to now."

Christian/DJ

The optical aspect of the process is removed until the data resembles something to the viewer. If left to run, an optical image may not appear, as all the program does is crunch numbers. You could not produce anything resembling an optical image unless you are sitting watching the screen and flipping through the various processes. It is not designed to produce an "image": your eye sees the manipulated data. The algorithms are there to prevent manipulation or tweaking.

Ian


I don't think it's necessary to see the actual gif. I think the process helps me understand some of what you're talking about. I still think that there is a finite amount of information in the base images you use, but when you resize them, in whatever way, the algorithm that performs the resizing creates a predictable set of new data that, when zoomed in on, appears to show things that simply cannot be said to be there. (IMO)


"Tell you what then Franz... Use your process to tell us what is holding the bag up... what is Montgomery holding in his hand that extends into the bag and keeps it upright..."

A scoped Johnson 30.06?

--Tommy :ph34r:


"A scoped Johnson 30.06?

--Tommy :ph34r:"

NOW we're talking... Wesley's of course, right?

To be honest, I got a note from Gary Mack about an oral record left by Montgomery... he literally says that he looked inside and saw a "venetian blind", which was what helped hold up the bag... Sorry, but I don't buy that....

"I would express your position (again, correct me if I got you wrong) as saying 'This interpolated data has a data content value of, by definition, 0 on a scale of 100, because all the original data has been totally, systematically, utterly destroyed.' I am only saying that not all data content is destroyed, and the information value will vary between 0 and 100, depending on the number and characteristics of the interpolations (the variables), but will have a propensity to a value above 0 regardless of the variables, thus manifesting itself in a way that can easily be recorded, to be used for other interpolations."

Franz,

Sorry, but this is not what I am saying, and btw I have no HARD FEELINGS about you or the topic, just that you spin these long-winded rationalizations of the end results and the process but fail to provide the goods...

What I am saying is that the original image is the BASIS for all this data manipulation... that the final result will of course offer some resemblance to the original, but along the way the process has added data that may or may not be in the original at all....

Again with the Tshirt analogy... the water CHANGES the pixels... if not, then you could easily remove things in the image

"And the textile / pixel is not 'replaced': it is only interpolated with water, and thus 'modified'. But it is not turned into a patch of previously inexistent matter: it is still textile, whose characteristics relating to human vision have been slightly modified by the temporary addition of water. It is still clothing..."

This is simply not an accurate description of what is occurring here Franz... you are mixing real life and the frame-by-frame existence of photos... you can interpolate from now until forever, you are NOT going to create, from a first photo/frame of what exists (dry T-shirt), the image in a second photo AFTER the water is poured... you may be able to APPROXIMATE it from the data in the original... but it will NOT be the 2nd image that is actually photographed or filmed...

Same thing with your Badgeman work... while your argument supporting this is interesting and filled with imagery and supposition about things like Badgeman being a REAL PERSON based on what the HSCA says or what White/Mack say... the SUGGESTION of the image of a man created by the foliage and whatever else IS THERE... so by default, if a mathematical process is going to use the pixels offered to "SMOOTH THEM OUT" by creating a vector-based enlargement FIRST, THEN running the math against this newly created and ARTIFICIALLY ENHANCED image... it will offer things in the image that are CREATED by the guessing of the math and NOT from uncovering anything NEW in the original data....

While we KNOW the image itself is not pixelated, turning the pixels into vectors ADDS AND REMOVES data so the enlargement is smoother... but once the "enlargement process" is completed you have a whole new image...

Can you please post STEP ONE of your process, whereby you take a piece of Bond 4 and ENLARGE IT using the Kneson product and create a vector-based gif or png file.... again, like fractals, the image does not exist until the math does its work.... it was NOT ALWAYS THERE Franz... it was never there to begin with

So here is where we stand... show us the original and the enlargement that you put into the PROCESS ENGINE.... we'll deal with the math/filter/interpolation process next....

thanks...

DJ


Okay Franz...

I went back into the thread to find your Bond 4 work... and when scaled so your STAR is the same size, we see the scaling and images are not even close.....

Franz---bond-4-v2.gif

Now something that does not even DAWN on us here...

BOND 4 was taken well AFTER the shots.....

The THING you are enhancing is probably the man who ran up the steps after the shots were fired and is on or near the bench that's there


Here you are Franz... see the man turning and up the steps he went?

Why again are you enhancing Bond4 to discover the BDM as a shooter well AFTER the actual BDM image left?

thanks

DJ

please click gif to run...

Edited by David Josephs

... as explained, this is not a well prepared presentation: I will continue to try addressing the technical aspects as best as I can of course, but since we have I believe at least defined clearly what the argument is here (and it is still an argument, we agree here...), based on known, undisputed images, I would like to now show exemples showing the process applied to known areas of interest, for which we have not been able to extract richer content up to now.

There are several of them, and all interesting, I think.

Christian/DJ

The optical aspect of the process is removed until the data resembles something to the viewer. If left to run, an optical image may not appear at all: the program just crunches numbers. You could not produce anything resembling an optical image unless you are sitting watching the screen and flipping through the various processes; it is not designed to produce an "image". Your eye sees the manipulated data. The algorithms are there to prevent manipulation or tweaking.

Ian

...yes, I agree there is definitely something subjective here, but no more and no less than in every human experience where your mind / consciousness is at play. Your mind has to make choices (in this case, choices about what can be considered as having data value). To keep this unavoidable bias of the way the human mind operates as much in check as possible, I have reasoned that I should regularly re-inject the original information into the loop, keep a large number of "currently being processed" files (which might hold varying potential images: some may even be mutually exclusive) without privileging any conclusion, and use, each time it is available, correlation obtained from other sources to try to sort out between possible "clues":

*For instance, images found in the last frames of the Zapruder film are correlated by the HSCA acoustic panel conclusions of a shot fired from there.

*the presence of men behind the fence is actually attested by at least two witnesses

So, for this reason, I would give some specific extra attention to iterations if a significant set of them appears to "go" in that direction, while still processing all sorts of apparently differing versions and keeping them evolving along at the same time. That is subjectivity at work, but no more than when we make valid everyday decisions based on partly subjective reasoning.

The question here, a very good one, is the amount of subjectivity you will allow as "standard". The re-injecting loop part of the process (feeding original and very close-to-the-original data back into the flow) is intended to lower that "standard of bias".
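For what it's worth, the re-injection idea can be sketched numerically. The 50/50 blending weight and the sharpening step below are illustrative assumptions, not the poster's actual workflow; the sketch only shows why mixing the original back in at each pass restrains how far iteration can drift from the data.

```python
# Sketch: iterated enhancement with and without re-injecting the
# original data. Illustration only; the process step and blend
# weight are assumptions.

def blur3(x):
    """3-tap moving average with edge clamping."""
    padded = [x[0]] + x + [x[-1]]
    return [(padded[i - 1] + padded[i] + padded[i + 1]) / 3
            for i in range(1, len(x) + 1)]

def sharpen(x, amount=1.0):
    """Unsharp-mask style pass: boost the difference from the blur."""
    return [v + amount * (v - b) for v, b in zip(x, blur3(x))]

original = [0.0] * 9
original[4] = 1.0            # one small feature in otherwise flat data

# Without re-injection: iterated sharpening runs away from the data.
free = original
for _ in range(10):
    free = sharpen(free)

# With re-injection: average each pass back with the original.
anchored = original
for _ in range(10):
    anchored = [(s + o) / 2 for s, o in zip(sharpen(anchored), original)]

print(max(free), max(anchored))  # the anchored run grows far less
```

The un-anchored run amplifies its own artifacts each pass; blending the original back in keeps the result tethered to the measured values, which is how I read the "standard of bias" argument.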

For instance, I did not say "let's go find that Fence Shooter!". I was of the opinion that a frontal shot was a seriously high possibility, but I had strictly no interest (well, if the definitive answer is found, that interests me of course, but I basically consider this side of the case as trivia) in this very specific fine point. Plus, there are 100,000 billion tales in the universe about the exact possible locations. Lots of areas to check...

So I thought that it was better to work on the whole Moorman frame for a solid while, see first what turns up, and proceed from there. The data pertaining to the presence of men behind the fence actually started to manifest while I was interested in another, unrelated pattern above the wall corner (Black Dog Man, in all probability) that in fact did not evolve into something really convincing.

So patterns will emerge, but if they do not recurrently come back through the process to "sustain themselves", they will disappear or lose coherence along the way.

That is why I explained that I would only try to present corroborated images: they can be corroborated between themselves, like Bond 4 and Betzner, which show the same man in the same clothing at the same location, or crosschecked with other data as explained above.

That is why I explained also that the cumbersome part of the process is first creating a large selection of derivations closely related to the original (I systematically keep the first 50, whatever they appear to show).

So I agree, part of the process has a subjective dimension, but no more than anything related to human experience.

I hope I have answered your question about the subjectivity part...

Edited by Christian Frantz Toussay

Franz, you obviously had the time to address some of the posts here... just not the one most relevant to your process and its authenticity.

Please explain how the image you CREATED within Bond 4, well after whoever was there in Betzner and Willis had left, can lead to the conclusion that we are seeing anything of value?

Here is Bond 4... 2 of 3 men are sitting on the steps, the third ran up the steps...

I took your little area of enlargement/enhancement and played with it and was able to fairly easily make a comparison to your 273rd iteration.

Your process destroys the detail ABOVE the fenceline and simply takes what is there and tweaks it until you like it...

The insert I CREATED does not reveal anything NEW, but simply ALTERS the pixels...
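The ALTERS-versus-REVEALS distinction can be shown with a toy unsharp mask in one dimension. This is a sketch only; the real Resharp filter's Amount/Radius/Threshold behaviour is more involved than this. A featureless region passes through unchanged (nothing hidden is "uncovered"), while an existing edge gets pushed around.

```python
# Sketch: a 1-D unsharp mask changes pixels near existing contrast
# but cannot create detail where there is none. Illustration only.

def unsharp(x, amount=1.5):
    """Unsharp mask: add back a scaled difference from a 3-tap blur."""
    padded = [x[0]] + x + [x[-1]]
    blurred = [(padded[i - 1] + padded[i] + padded[i + 1]) / 3
               for i in range(1, len(x) + 1)]
    return [v + amount * (v - b) for v, b in zip(x, blurred)]

flat = [80.0] * 8                 # no detail at all
edge = [0.0] * 4 + [100.0] * 4    # a single hard edge

print(unsharp(flat))  # identical to the input: nothing was revealed
print(unsharp(edge))  # overshoot/undershoot around the edge: pixels altered
```

This is the sense in which sharpening "adds data that improve the impression of sharpness" (as the Kneson help page itself puts it): it exaggerates contrast that is already there, rather than recovering information that isn't.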

I can see how the enlargement product you use is helpful in creating a better ALTERED ORIGINAL, so I tell you what...

PROVE IT WORKS... show us the expression of the man on the higher step... he obviously has a mouth, nose, ears, eyes, etc...

Show us how you can make this person LOOK like a person as opposed to making tricks of light and shadow APPEAR to be a person....

Thank you Franz... and please... just the short answer - I do not need a lecture on how I think, or how human beings behave...

stick to the process and show us how it works on images we KNOW are there....

Peace

DJ


"Franz,

Sorry but this not what I am saying, and btw I have no HARD FEELINGS about you or the topic, just that you spin these long winded rationalizations of the end results and the process but fail to provide the goods..."

...that's OK, let's forget the hard feelings stuff; these are the limitations of written language without facial and tonal expressions... No sweat...

"What I am saying is that the original image is the BASIS for all this data manipulation... that the final result will of course offer some resemblance to the original, but along the way the process has added data that may or may not be in the original at all...."

We perfectly agree here, as explained several times...

"Again with the Tshirt analogy... the water CHANGES the pixels... if not, then you could easily remove things in the image"

Again, you are confusing the analogy: there is no water added to the pixel, but to the shirt: the t-shirt, in the example taken, is the original data at hand. I chose this example to illustrate how interpolating "new data" can be used to extract valid information previously unavailable. The water simply reinforces the interpolation between the 2 sets of data, reinforcing the information "blur" between the 2 levels (the skin and the shirt, which already interface by contact: that is why it is not possible, I think, to observe this if there is no interpolation in the first place between differing information levels).

I am not drenching the picture in water to "clean it up" and see, for instance behind a solid wall, if that's what you mean...

"

And the textile / pixel is not "replaced": it is only interpolated with water, and thus "modified". But it is not turned into a patch of previously inexistence matter: it is still textile, which caracteristics relating to human vision have been slightly modified by the temprary addition of water. It is still clothing...

This is simply not an accurate description of what is courring here Franz... you are mixing real life and the frame by frame existence of photos..."

I understand that this is your opinion, but I think I have offered some receivable counter arguments...

"you can interpolate from now until forever,"

...surely, I don't intend to. Though there is a lot more to do, I have all sorts of interests in life. The JFK case is one of them, but far from the only one...

" you are NOT going to create an image from a first photo/frame of what exists (dry Tshirt) in a second photo AFTER the water is poured ... you may be able to APPROXIMATE it from the data in the original... but it will NOT be the 2nd image that is actually photogrpahed or filmed..."

..you lost me here, sorry...

"Same thing with your Badgeman work.... while your argument supporting this is interesting and filled with imagery and supposition about things like BadgeMan being a REAL PERSON based on what the HSCA says or what White/Mack say... the SUGGESTION of the image of a man created by the foliage and whatever else IS THERE..."

...I have presented no work on the Badge Man image here. There would seem to be some confusion.

I have used the identification of the Betzner Black Dog Man (which is a totally different image...), which I chose precisely because it was identified 33 years ago by the HSCA experts as the image of "an individual in dark clothing standing behind the corner of the wall".

It is not me saying this: this is a conclusion that has never been contested, as far as I know, in more than 3 decades. If you have valid data that can challenge the HSCA conclusions, I am of course very interested, because it has potential crosschecking value for me...

"so by default if a mathematical process is going to use the pixels offered to "SMOOTH THEM OUT" by creating a vector based enlargement FIRST, THEN running the math against this newly created and ARTIFICALLY ENHANCED image... it will offer things in the image that are CREATED by the guessing of the math and NOT from uncoving anything NEW in the original data....

While we KNOw the image itslef is not pixelated, turning the pixels into vectors ADDS AND REMOVES data so the enlargment is smoother... but once the "enlargement process" is completed you have a whole new image...

Can you please post STEP ONE of your process whereby you take a piece of Bond 4 and ENLARGE IT using the Kneson product and create a vector based gif or png file.... again, like fractals, the image does not exist until the math does its work.... it was NOT ALWAYS THERE Franz... it was never there to begin with"

.. I think we already covered the main bases here, and as I explained, we do not, in my opinion, disagree on the basic facts at work here.

We have a difference, as I understand it (but I could be wrong here), about the way the data collected (which, though interpolated, contains "valid" information, I think you agreed on that...) can be exploited.

You are saying that it can't. I am saying that it may.

"So here is where we stand... show us the original and the enlargement that you put into the PROCESS ENGINE.... we'll deal with the math/filter/interpolation process next....

thanks...

DJ

...thanks for your help on this: I will make available to you a complete file of the processing of the Betzner picture, starting with the original and all the successive iterations.

Edited by Christian Frantz Toussay