The Education Forum

So I will give it one last try…..



"Okay Thanks Christian,

I understand much better now. I agree with 99% of the operational theory.

I am asking for the 'sponge' to be wrung out completely then filter the colorization of some info back down to a more perceivable level.

When the 'sponge' gets wrung out everything in a sense gets wet with all the info.

I would like to see a refinement to this. If its even possible."

..I think this is perfectly feasible with smoothing algorithms and filters. Basically, my stance on this is that I do not like to interfere radically with the data by injecting unrelated information into the finite data set (as you have understood, I inject only processed interpretations of the same database into the overall process flow).

But I think anyone here who has mastered the classic optical tools for photo enhancement could produce what you are asking for: a "smoothed out" version that still preserves the newly extracted data, but without too much "degrading" of the surrounding information.

I am confident that somebody like Duke could work on the Bond 4 image, for instance, and produce what you want easily...

As for me, since I do not work with high-tech material and do not master the techniques involved, my own clumsy way of doing it is simply to re-inject the original data set into the workflow periodically: if you did as I suggested with the Bond 4 picture (fusing it with the unprocessed original), you saw what I mean by "averaging pixel values"...
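
For what it's worth, that "averaging pixel values" step can be reproduced in a few lines of Pillow. A minimal sketch; the filenames are hypothetical stand-ins, not the actual files discussed here:

```python
from PIL import Image

# Hypothetical filenames: any original / processed pair will do.
original = Image.open("bond4_original.png").convert("RGB")
processed = Image.open("bond4_processed.png").convert("RGB")

# Image.blend needs matching sizes and modes.
processed = processed.resize(original.size)

# A 50/50 blend: each output pixel is the average of the two inputs,
# which is the "re-inject the original" move described above.
averaged = Image.blend(original, processed, alpha=0.5)
averaged.save("bond4_averaged.png")
```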

"Its like the orchestra all tuning their instruments at once (all info from each musician) If we could just 'hear' the woodwinds or just the percussion it would clear the air so their individual notes hit my ear...so to speak."

...a perfectly valid comparison: and since we do know that audio equipment exists that allows for filtering different sound frequencies, I would be much surprised if we did not have the same options in the field of optics. Maybe Ian can help us on that....
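
The analogy does hold up technically: an image can be decomposed into spatial frequencies with a 2D Fourier transform, and individual bands kept or muted, much like isolating one section of the orchestra. A minimal NumPy sketch of the standard frequency-domain trick, not the process described in this thread; `img` is assumed to be a 2D grayscale array:

```python
import numpy as np

def bandpass(img, low, high):
    """Keep only spatial-frequency bins between `low` and `high`
    (measured in bins from the centre of the shifted spectrum)."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    rows, cols = img.shape
    y, x = np.ogrid[:rows, :cols]
    dist = np.hypot(y - rows / 2, x - cols / 2)  # distance from centre
    mask = (dist >= low) & (dist <= high)        # the "band" we keep
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))
```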

"You are the conductor, the symphony has already been written."

...no, I am not the conductor: I am only the guy who found a long-lost key to the score. The fact that I did does not, in my opinion, bestow a soloist's role on me: I am sure the orchestra here will give a much better and more powerful rendition than I'll ever be capable of with my limited resources... B)

"

I'll examine this closer. Hope a sonata is in the works."

... I will keep documenting this research here: thanks for your interest, and remember: my complete files are available to you if you need them... B)

Ed

Christian,

Don't be surprised if you start seeing some "men in gray suits" hanging around your flat and place of work. LOL(?)...

--Tommy :ph34r:

...the fact that I am not paranoid doesn't mean I don't believe in conspiracies... <_<

...No, I don't believe there's much physical danger involved in this today. All the principals are long dead, and it is more a matter of protecting bureaucratic institutions and undeserved, flattering memories of public figures of the past.

Sure, the mainstream media is still at it, for reasons explained elsewhere, and reputations can be ruined, and things like that, but not much else would be done. It is easier to disinform than to suppress information...


... I will conclude this first part of the presentation with a final example of the process applied to a known picture, this time the Moorman picture. I worked for this one with the Morin Rellman version, which is quite rich in data content.

The apparent whitish explosion on the top of the head is, as you can see, visible in the original...

We will focus here on the back of JFK's head, and see if the process, as it did for the image of Black Dog Man, can extract additional information about the nature of the rear head wound...

MoormanHeadShotProcessIllustrationLegend.jpg

Again, you can verify that all the information content of the processed results is actually present in the original, only in a much degraded form...

Edited by Christian Frantz Toussay

..this is a close-up of the processed versions, for better analysis....

MoormanRearHeadWoundComposite.jpg

..it would be interesting, I think, to compare this image with frames 320-330 of the Z film, where JFK can be seen in profile, to see if there is any correlation between the wound seen here and what the Z film shows... B)

Edited by Christian Frantz Toussay

...as long as you can create and save transparencies, and then interpolate them as I described (add, subtract, etc.), sure you can... :)
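
The add / subtract interpolation of saved transparencies maps directly onto Pillow's channel operations. A sketch with hypothetical layer filenames:

```python
from PIL import Image, ImageChops

layer_a = Image.open("transparency_12.png").convert("RGB")
layer_b = Image.open("transparency_13.png").convert("RGB")

# ImageChops.add computes (a + b) / scale, so scale=2.0 is the average.
added = ImageChops.add(layer_a, layer_b, scale=2.0)

# Subtraction, clipped at 0, shows where the two layers differ.
subtracted = ImageChops.subtract(layer_a, layer_b)

added.save("transparency_14.png")
```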

If you want, I have some pictures that I have already processed but that can still be taken a good deal further, like BDM in Willis 5. This one can, I think, get as clear as Bond 4...

Let me know if you want to take it up where I left off...


A simple undistorted enhancement of BDM in Willis.

Thanks Duncan.

.. I will keep documenting this research here: thanks for your interest, and remember: my complete files are available to you if you need them... B)

Much thanks Christian!


...I think I will post a last result obtained on a known picture (Z 337), because it can serve to underscore the very low complexity of the process, showing that it is only about aggregating / interpolating different layers of identical data expressed differently in single-frame transparencies.

I believe this is not much different from Wilson's approach, for those who know his work, only with far fewer techniques and tools than he had at his disposal. As explained, I don't subscribe to many of his interpretations (seeing tattoos on faces and all...), but I agree with much of his initial findings (locations where human presence can be detected in specific areas of interest).

Wilson, from my understanding, was basically working with layers of processed information too, using techniques employed in the analysis of satellite / aerial images and films to interpret geological data: for this, you have to go beyond what you see with the human eye (with its very limited capacity) and look for signals (data...) that are not visible, but are very much present and extractable, and that can then be meaningfully interpreted by a human observer.

So what I am trying to explain here is not, I believe, something very conceptually innovative: the approach to it might be, I concede (let me know if I've won a prize here... B)), but the technique has basically been known for decades, and is put to use daily in all kinds of profitable businesses around the world.

A real, legitimate question would be: "Now how come you come along, with no technical knowledge, no powerful computers or highly specialized software, and say you can do better than the experts? It looks like attempting nuclear fusion in a basement with two jars of water and a set of electrodes..."

Now, apart from the fact that Cold Fusion (nuclear fusion at room temperature in very low-complexity settings) has now been documented time and again, showing that there may be different, unsophisticated approaches to seemingly highly complex phenomena, I think you have your answer when you know that the memory capacity of Armstrong's Lunar Module was less than what can be found in Walmart calculators today.

People do not always seem to realize that mass-market PC distribution has given the layman access to tools that were the exclusive domain of a select circle of experts just a few years ago...

So below I post an illustration of the "layering" employed in the process, this time on Z 337, since we discussed the rear wound above:

* image 1 is only slightly processed, but is shown here because Kennedy's eye and nose are clearly visible, allowing for a clear location of the front head wound (we would assume the entry point to be at the joining tip of the 2 flaps of bone, thus somewhere above the right eye. We will look for eventual corroboration of this in the right-side autopsy picture later on..)

We can note the massive disruption of the top right side, with 2 flaps opening longitudinally like a box, showing cerebral matter inside.

We can also note, of course, that the occipital area of JFK's head appears seriously disrupted too, because it does not reflect light the way a normal, dark, smooth surface with hair on it should under those same lighting conditions. We can easily use other parts of the head for comparison to establish this simple fact.

So the back of JFK's head, in Z 337, is optically "abnormal".

Now let's see if we can extract more with the process, and see why JFK's occiput reflects light in such a peculiar way, incoherent with the way photons should behave on a normally smooth, dark, slightly curved surface if the occiput were intact...

*picture 2 (center) shows a different level of information: you can see that we have "lost" much of the detail of JFK's face, for instance, as compared to picture 1, and much of the detail of the top right opening of the cranium, but we can now see much more of the occipital area, which clearly shows a "volcano"-shaped wound, and even a very distinctive dark spot in the middle at the top, which is consistent with a hole.

*picture 3 (right) is a composite of picture 2 and a previous (not shown here) highly highlighted version. It thus contains data pertaining to both versions, only interpolated and averaged. The volcano shape and the dark spot (hole) are still visible in picture 3, though less so than in picture 2, because they have been "averaged" with the other image, in which they are not as resilient (strong, if you will...)

You can now verify that the process is simply about aggregating additional sets of data and checking how they correlate or differ.

Corroborations will strengthen (becoming more visible, as they "add up" one on top of the other, if you will...), and discontinuities will weaken (fading as they are less and less "supported" by the incoming flow of newly processed information), thus refining the data for your eye...
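
The strengthen-or-fade behavior described here matches a well-known statistical effect: when independent noisy readings of the same scene are averaged, the shared signal survives while uncorrelated noise shrinks roughly as 1/sqrt(N). A self-contained NumPy demonstration; the 64x64 test pattern and the 271 layers are illustrative assumptions, not the thread's actual data. One caveat: this only holds when each layer's noise is independent, whereas derivations computed from a single photograph share the same underlying grain, which is essentially the objection DJ raises later in the thread.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "real" feature: a bright square on a dark background.
signal = np.zeros((64, 64))
signal[20:44, 20:44] = 1.0

# 271 independent noisy "readings" of the same scene.
stack = [signal + rng.normal(0.0, 1.0, signal.shape) for _ in range(271)]
averaged = np.mean(stack, axis=0)

print(np.std(stack[0] - signal))   # noise in a single layer: ~1.0
print(np.std(averaged - signal))   # after 271 layers: ~1/sqrt(271) = 0.06
```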

(these are not the best results I have obtained from Z 337: some of them are quite graphic...)

Z337Processillustrationlegend2011.jpg

...again, comparing these processed images of Z 337 with the processed image of the Moorman rear wound (see above), we can note a very high degree of coherence between the 2 "readings" of the wound, although they were taken at different moments and from different points of view. This considerably lowers, I think, the odds of any "double freak visual occurrence", which would already be a tough proposition.

So we have:

*a large opening on the top right with a large flap of bone dislodged and standing erect (Moorman, right at the headshot) / a large opening on the top right with a large flap of bone resting against the left part of the cranium (Z 337, roughly 1 second after the shot)

*a dark opening in the occiput (Moorman), and a volcano-shaped wound topped by a dark hole in the occiput (Z 337)

That is solid corroboration in my book... B)

Edited by Christian Frantz Toussay

...here is a close-up composite of both Z 337 and Moorman, for better analysis.

I have kept Jackie's face in the frame as a reference point.

People with good visual skills can easily pick out Kennedy's profile here, allowing for a better understanding of the bullet path...

HeadWoundCompositeMoormanZ337Legend2011.jpg

Edited by Christian Frantz Toussay

Sir...

You are aware that adding layers with a variety of interpolated "effects" to what can only be a finite amount of ORIGINAL DATA will, with enough play, create any effect desired...

Reading through the thread I still have not seen a simple list of the layers used and the effects employed... but I have seen you make "doesn't it make sense" arguments in support of the conclusions...

Anyone who has worked with layers understands what you are doing and how you have basically incorporated manufactured data into the existing image...

Yes, the data is there and you create more with the process... but it is still "created" and not hidden info unseen in the (what? prints? images?) you are using... sorry sir, but there is no EXTRA INFO in the files.... only processing effects enhancing original images to a state that MAY or MAY NOT have been in the original...

When we are using digitized originals, we are STILL enhancing info that is not altogether there...

I applaud the effort and the result, it still takes work... but "line enhancement filters" from Photoshop superimposed over 5-10 other layered effects doth not make a new view of an ORIGINAL image...

Peace

DJ



..Hi David, and thanks for your very interesting criticisms. I will try to address them all here as clearly as I can. I seem to remember you raised the same legitimate issues a few years ago when I first came here, but I don't think I replied to you then.

So thanks for the opportunity... B)

Below I will list the critical points of your argument and then try to answer them. Correct me if I have got you wrong on what you meant:

"You are aware that by adding layers with a variety of interpolated "effects" on what can only be a finite amount of ORIGINAL DATA will, with enough play, create any effect desired..."

Your argument here is, I believe, that you can statistically produce anything you want (including a very clear and detailed image) by applying a random process to any set of data, given enough time.

That is perfectly true; we totally agree here. It has been calculated that, given enough time and a typewriter (now probably a Mac...), a chimpanzee could write Milton's Paradise Lost. (Yes, some people do work on this kind of stuff....)

So since the probability of this proposition is mathematically greater than zero, that makes it "true", thus possible.

But there is a huge difference between "possible" (the odds that such an event can happen statistically during the existence of the physical universe) and "probable" (the odds that this event will ever happen in a finite time frame, objectively measured by a human observer).

So though the proposition that the bonobo can actually write Milton's poem word for word is technically "true", the fact is that, even if you lived 100,000 billion lives, the chance that you would observe such a wonder is nil. In your frame of reference, this proposition will be "untrue".
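
To put rough numbers on the "possible vs. probable" distinction, a back-of-the-envelope sketch; the ten-character phrase and the 27-key alphabet are illustrative assumptions:

```python
# Probability of typing one specific ten-character phrase by hitting
# ten random keys on a 27-key typewriter.
p = (1 / 27) ** 10
print(p)        # ~4.9e-15 per attempt: possible, never zero
print(1 / p)    # ~2.1e14 attempts expected before a single success
```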

What I mean by this is that, in your argument, time is of the essence...

Now, again, the clumsy method I developed comes in handy: as I explained, each saved frame is actually a "frozen moment in time" of the processing flow, with each successive version carrying additional information relative to the others (the process is actually not as linear as that, but that is the main idea).

This actually means that we have a "clock" (of sorts...) to measure the flow of time in the process, as opposed to the scenario you propose of a time-based, random process capable of producing "phantom" images not supported by objective data.

Now let's look at Bond 4, which I have posted here, both in original and processed forms.

The processed result is numbered "271", meaning it contains 271 interpolated layers of the information content of the original image. Bond 4 is notable for the abundance of fine facial features that can be observed clearly, like eyes, nose, mouth and even lips.

In your reasoning, this means that it should be possible to go from the original (showing an indistinct whitish, roughly circular area above the wall) to version 271, with its incredible definition and verifiable points of reference, by aggregating 271 layers of randomly produced, unsupported / false information, i.e. "noise" that doesn't pertain to any objective reality.

Now, 271 may seem a large number, but it actually is not if we go back to our bonobo...

My point is that, if it were just random noise making funny figures, it would need much, much more time (and thus many more layers) to appear, much more time to reveal such finesse of detail out of "nowhere". It would be akin to painting the Mona Lisa by throwing painted tennis balls at a wall. It might be even harder than the bonobo stuff...

Now, for those with the technical skills, I would think it should be possible to measure the increase in data content between the original and version 271 (maybe simply by measuring the differences in pixel values?), and then calculate the odds that this could be gained randomly and still form what appears to be the very clear image of a man's face, with such a relatively low (though significant) number of iterations.
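
One concrete way to start on the measurement suggested here is simple per-pixel difference statistics between the two versions. A sketch; the filenames are hypothetical, and the two images are assumed to be the same size and scale:

```python
import numpy as np
from PIL import Image

a = np.asarray(Image.open("bond4_original.png").convert("L"), dtype=float)
b = np.asarray(Image.open("bond4_v271.png").convert("L"), dtype=float)

# How far, on average, each pixel has moved between the two versions.
print("mean absolute difference:", np.abs(a - b).mean())
print("RMS difference:", np.sqrt(np.mean((a - b) ** 2)))
```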

Dave's argument is very legitimate and interesting (and I hope I have answered it), and this could be a good addition to the discussion... B)

"Anyone who has worked with layers understands what you are doing and how you have basically incorporated manufactured data into the existing image..."

Your point here seems to be that, since I am incorporating "manufactured data" into the existing image, I am corrupting the original data, and thus cannot claim that what I subsequently "read" is a valid representation of the observed phenomenon. I will come back to the "manufactured data" argument later on, but I'd like to address the "added information" side of your argument first:

*adding artificial data that interacts with something under observation, in order to measure / extract information about it, is done daily by technicians, specialists, engineers, chemists, you name it, all around the world, and this approach has never been questioned so far as I know:

- chemists do it when they pour a reactive substance on something they want to know more about, like its composition. What they are actually doing, when you think about it (we discussed this very interesting point with Ed previously here...), is destroying part of the data under observation to get a better understanding of the rest of it...

- military personnel do it when they direct a radar beam toward an incoming flying target, to measure its height, position, trajectory and speed. The beam is actually "additional manufactured data", which physically interacts (that is actually what is measured...) with the objective phenomenon you are trying to observe. What the observer gets on his screen is NOT the true expression of reality, but the return of the "additional information", coming back to him after having interacted with the original data. Though the information is of course "degraded / altered" (you cannot see on a radar screen the actual optical image of the target; you see a visual "expression" of it), you still get sufficiently pertinent data to identify ALL the parameters indicated above, and sometimes more, like the specific radar signatures of specific targets. I understand that the civilian and military personnel involved in these kinds of tasks see no problem, despite the heavy responsibilities at stake in case of wrong decisions, with the rationality of this approach in their daily work...

- you do it every time you walk into a dark room and turn the light on: what you think you "see" is just "additional information" (photons released by the light bulb when you flipped the switch, which were not present before you did so) bouncing off (again, a very physical and measurable phenomenon...) and interacting with what's in the room. You only receive the feedback from that interaction (which is degraded and not a true template of reality), which your nervous system then transcribes and interprets as images.

It would seem that this apparently cumbersome process doesn't pose much of a problem for living our everyday lives in a relatively secure environment, by making sound judgments about what is going on around us.

So it can be argued, I think, that adding "additional information" to objective data to gain better knowledge of it is not something out of the ordinary; it is actually the basis of human experience: it is our only way to "know" the world around us... B)

Now for the "manufactured data" argument, which is also very interesting.

As I have tried to explain, I work with derivations of the same set of data. The same information is "read" differently, using the classic tools of image enhancement: contrast, sharpness, focus, equalization, smoothing, etc. Each new iteration is saved as a new transparency, and the interpolation of different transparencies creates new ones, and so on.

What is contained in EACH of the different saved frames is simply an extrapolation of the information content of the original picture. These extrapolations are thus the result of the very well-known algorithms used in classic image-processing software, which are considered valid, reliable and safe, and are used in all sorts of businesses and companies.

The richer the data content available for the extrapolation in the first place, the finer the result. The more you extrapolate on extrapolations, the fuzzier it gets, in all logic: that is why regularly re-injecting the original data, and data close to the original (I keep the first 50 derivations of all files), is crucial.

The process only enables you to correlate the extrapolations, by cross-checking them against one another...
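
For readers who want something concrete, here is a minimal sketch of a derivation loop of the kind described: classic enhancement passes saved as successive "transparencies", each blended with the previous one, with the original periodically averaged back in. Pillow only; the filename, the enhancement factors and the every-tenth-pass schedule are hypothetical stand-ins, not the author's actual recipe:

```python
from PIL import Image, ImageEnhance, ImageOps

original = Image.open("moorman.png").convert("RGB")   # hypothetical file
layers = [original]

for i in range(50):
    prev = layers[-1]
    # "Read" the same data differently: contrast, sharpness, equalization.
    derived = ImageEnhance.Contrast(prev).enhance(1.2)
    derived = ImageEnhance.Sharpness(derived).enhance(1.5)
    derived = ImageOps.equalize(derived)
    # Interpolate the new reading with the previous transparency.
    derived = Image.blend(prev, derived, alpha=0.5)
    # Every tenth pass, average the original back in so later
    # extrapolations stay anchored to the unprocessed data.
    if i % 10 == 9:
        derived = Image.blend(derived, original, alpha=0.5)
    layers.append(derived)

layers[-1].save("moorman_derivation_050.png")
```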

"only processing effects enhancing ogiginal images to a state that MAY or MAY NOT be in what was the original..."

..I think I have already answered this just above.

We totally agree on this: you are saying that the process will produce potential images, which may or may not be true.

These are the extrapolations, i.e. logical, mathematical predictions made by the software's algorithms based on how it "understands" the data at hand. Variables are at play here (since you can modify how the software "understands" the data, and thus how it interprets it in its extrapolations), hence the "uncertainty principle" at work.

Again, this principle is a known, established scientific fact, used daily in everyday life, from the social sciences to building microchips...

I am only saying that the process, by the very simple method I have explained, gives you a tool to corroborate them, and thus to sort out "noise", which does not pertain to the objective information you are studying, from real data, which will be, as I have already explained, more resilient...

"line enhancement filters" from photoshop superimposed over 5-10 other layered effects doth not make a new view of an ORIGINAL image..."

...we are, of course, talking here about much, much more than 5-10 aggregated layers (which are not all "effects", but mostly interpolations of data...): Bond 4 has 271, the Betzner Black Dog Man over 350, etc...

Dave, I hope I have answered your basic points: let me know....

Edited by Christian Frantz Toussay

...to continue addressing Dave's argument: I post below a composite of the original Bond 4, with an extreme close-up of the final processed version (271).

Like I said, we are dealing with known, measured variables here (variations in pixel data content, and the number of iterations between the 2 images).

It should, I think, be possible to calculate the odds of such an apparently real image (remember, by the way, that this is just a correlation of Black Dog Man, itself an undisputed image identified 40 years ago by HSCA experts as "an individual in dark clothing" physically present at this very location...) being produced by a finite set (271...) of randomly generated iterations.

The odds of this can, I think, be computed..... B)

Bond4PixelStudyComposite2011.jpg

...Again, people with good visual skills will notice that the data content of version 271 is, in fact, already present in the original, only in a much degraded form...

Edited by Christian Frantz Toussay

I thank you for the long-winded rationalization of your techniques and results

but this changes very little with regard to the FINITE amount of data in whatever original you are using

and the results of interpolations...

RANDOMLY GENERATED ITERATIONS ok

Since, as you say, you are correlating the BDM image WITH these iterations... and these iterations are BASED on the BDM image (btw, which one do you use? size/type/source, thanks)

we would EXPECT to see enhancements in areas where there was information...

Since you are basically creating 271- or 350-layer Photoshop files and allowing each layer to bleed through the others...

the final product is a COMBINATION of good info and VERY VERY BAD INFO generated through mathematics....

You understand FRACTALS? The results of mathematical equations whereby an image is CREATED from the relationships defined in the math...

Same thing here Sir...

Until you provide even a test file of the LAYERS you are discussing, there is little more to discuss.

What is it you think we are seeing in that last post of the enhanced Bond 4? I've done a fade from one to the other....

Please narrate

thanks

DJ

interpolation-gif.gif


"I thank you for the long winded rationalization of your techniques and results"

...sorry. I thought that, like I do, you would like to have answers to the questions you raise...

First, I think it is obvious that the two images are not at the same scale, and your animation should take this into account: as indicated, the processed result posted is an extreme blow-up of the original on the left. I thought that was clearly explained in the post...

I was not actually asking you to make a dynamic overlay of the 2 images (it would be interesting at the same scale, though...) but proposing what I thought could be a measurable experiment. I am no scientist, but you could be, or might know people who are and can say "yes, it can be measured" or "no, it can't".

That was my point here: I would think, if I may, that if you wished to demonstrate the absence of correlations between the 2 images, you should adjust them to scale. I indicated clearly that I used different scales to illustrate the gain in definition of data content...
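
For reference, putting two images on the same scale before overlaying them takes only a couple of lines in Pillow. A sketch with hypothetical filenames; this only makes sense if the close-up covers the same field of view as the original:

```python
from PIL import Image

original = Image.open("bond4_original.png").convert("RGB")
blowup = Image.open("bond4_v271_closeup.png").convert("RGB")

# Resize the blow-up down to the original's dimensions, then overlay.
blowup_scaled = blowup.resize(original.size, Image.LANCZOS)
overlay = Image.blend(original, blowup_scaled, alpha=0.5)
overlay.save("bond4_overlay.png")
```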

"Since, as you say, you are correlating the BDM image WITH these iterations... and these iterations are BASED on the BDM image (btw which one do you use? size/type/source thanks)

...I have explained that I worked with images collected from the best sources I could find: there are several sites, known I believe to all researchers, that offer good-quality films and pictures pertaining to the case, and as I have explained this allows for useful cross-checking. In some cases I have identified the source very precisely in my files (for instance, the Moorman work was done essentially on the Morin Rellman version, with cross-checking against versions from Lancer and Trask), but not always. I still have all the original files I have worked on, though, so any checking can easily be done.

I have also already explained that I can send complete files, from the original to the last processed results, to anyone who wants to look into this, but I'll say it again....

"we would EXPECT to see enhancements in areas there was information..."

...I think the composite above shows just that: the original information is the whitish, roughly circular shape above the wall. The additional information is the detail of the face in the processed version...

"Since you are basically creating a 271 or 350 layer photoshop files and allowing each layer to bleed thru each other...

the final product is a COMBINATION of good info and VERY VERY BAD INFO generated thru mathematics...."

... as explained, I am not "creating a... 350-layer Photoshop file": I am just interpolating various levels of layers of information pertaining to the same identical data, using conventional optical tools to differentiate between, and corroborate, the different "interpretations": those interpretations stay within the known and accepted limitations of currently available software, widely in use worldwide. The optical processing is not the main part of the process: the data refining is. That is why, I believe, most results do not look like conventional optical enhancements.

Now I like the rest of your argument better, the part about "allowing each layer to bleed through the others".

We agree on this. B)

Only what you call "bleeding" is what the process interprets as an indication of the potential resilience of information: a freak, unsupported piece of information will not "bleed" long through accretion without fading away, because no new signal, whatever its form, will come along regularly enough to reinforce it and maintain its coherence; supported data, by contrast, will be more resilient, again whatever its "form" (the way it is analyzed by the software and expressed visually), gaining information content in the long run.

The aggregated iterations will simply verify the potential resilience of the information: if there is no resilience, the "information" will disappear. If there is, the information will strengthen...

So the end result will be the simple refining process I have explained here...

"the final product is a COMBINATION of good info and VERY VERY BAD INFO generated thru mathematics...."

Not exactly: I would say that the final product is a combination of very highly correlated data (actually, the data is "seen" because it is correlated...), extrapolated from the original data set using known, accepted techniques and tools.

The process simply sorts out "good info" from "VERY VERY BAD INFO" by mere iteration.

The VERY VERY BAD INFO will simply lose the accretion battle against the "good info", making way for a higher-definition image....

Now let me say this: I am sorry if what I am saying here, or the way I am saying it, is irritating some people: that is not what I came here for. I have tried to learn from my first postings here, not to repeat the same "errors" of coming out of the blue with highly controversial propositions and antagonizing people who hold different views on this subject matter.

So I thought that proceeding as I did, using undisputed photo evidence first to validate the process before moving on to undiscovered images, was the right thing to do.

So far, basically, what I have presented here doesn't discredit the Lone Nut Theory at face value:

-we have seen that BDM was a DPD officer

-we have seen that there was another DPD officer beyond the wall with him

Sure, there would be some explaining to do regarding their 50-year absence from the official record, but these people are not shooting at JFK, so...

The correlated Moorman and Z 337 (most notably, with the volcano shape and hole...) are, true, more difficult for the LNT but, again, still arguable by a good lawyer: these are minute time sequences of exceedingly rapid and disruptive movements, examined on images that cannot claim to be 100% accurate representations of reality (the opposing lawyer, of course, will remind us that there is no such thing as a "100% accurate representation of reality": we only settle for accepted norms of it...).

So I would think there is still room, though tenuous, for argument here from the Lone Nut side.

I am saying this to explain that no aggressiveness or emotion is needed here, really. Everybody has done his or her own research on this case, and though I have reached my own conclusions (as explained, what I have found with the process has caused me to review some major points these last years), I respect everybody's point of view. There are pieces of truth on every side of the controversy, and I very much appreciate confronting views with people who develop interesting counter-arguments.

But I am definitely not on a crusade here. I just want to make what I have found available to researchers: how this data actually fits into their own scenarios is not part of the equation. I will assume that most serious researchers will evaluate it as objectively as possible and ask the questions (or for the material...) they feel appropriate to make up their own minds about its validity.

I am here for the questions and answers part only....

That is why I suggested what could be a measurable experiment on Dave's hypothesis (see above)....

"Until you provide even a test file of the LAYERS you are discussing, there is little more to discuss."

...sorry, I had missed it... :ph34r:

Just tell me how I can send you a file....

"VERY VERY BAD INFO generated thru mathematics....",

... we will have to discuss this at some point, I think; it might make for an interesting discussion...

Edited by Christian Frantz Toussay

Franz, since you are working off digital images: each pixel of course contains no more information than what is coded for that pixel. Any blow-up creates however many new interpolated (whatever algorithm you use) points of data that are not in the original image, which itself probably isn't like the real original, for the same reasons, in the first place.
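
The point about blow-ups is easy to demonstrate: upscaling a tiny two-value image makes the resampling algorithm invent in-between pixel values that exist nowhere in the original. A generic illustration, not tied to any particular image in the thread:

```python
import numpy as np
from PIL import Image

# A 2x2 image containing exactly two pixel values: 0 and 255.
tiny = Image.fromarray(np.array([[0, 255], [255, 0]], dtype=np.uint8))
big = tiny.resize((64, 64), Image.BICUBIC)

print(sorted(set(tiny.getdata())))   # [0, 255]: all the original data
print(len(set(big.getdata())))       # many new, purely interpolated values
```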

However, there are some features of the technique you seem to be using that are interesting to me, particularly the bit about what gets reinforced and what is washed out in the process. I don't know if you saw it, but in a post I made a panorama of a gif by Chris using that sort of transparent layering, so that, for example, one can hardly see a trace of Zapruder but a clear image of Sitzman. It just means that whatever moved leaves less of an impression and is 'masked' by the repetition of the static.
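
That reinforce-the-static effect is what plain frame averaging produces: stack the frames and take the mean, and whatever stays put reinforces itself while whatever moves smears away. A sketch, assuming a list of same-sized, already-aligned frames with hypothetical filenames:

```python
import numpy as np
from PIL import Image

frame_files = ["frame_001.png", "frame_002.png", "frame_003.png"]
frames = [np.asarray(Image.open(f).convert("L"), dtype=float)
          for f in frame_files]

# Static content survives the mean; moving content blurs into it.
panorama = np.mean(frames, axis=0)
Image.fromarray(panorama.astype(np.uint8)).save("static_reinforced.png")
```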


..I notice I have skipped over one of Dave's arguments below, which is also pertinent:

"but this changes very little with regards to the FINITE amount of data in whatever original you are using

and the results of interpolations..."

We agree that the total sum of information recorded on the support (whatever its nature) has to be finite (unless we start postulating some as-yet-unknown holographic / fractal quality in the way such information is recorded in the first place: an interesting thought, though....).

But this is much less the case for the number of derivations that can be extrapolated from this original set: this number can grow quite high (more than 1,500 for the work on the Picket Fence corner), since it is only limited by the number of interpolations you choose to make.

It is my understanding that, past a certain limit, the gain in data content will probably stabilize, reach a plateau and then start to decrease, just as you lose focus through your binoculars when you go for one extra, unnecessary level of magnifying power.

So I don't think the process can go on retrieving data forever: that is not the point here, I think. What the process will do is show you visually where coherent information coalesces, because it will be present in more derivations than mere random noise, which will not do the same because of its lower coherence (its propensity to carry correlated information onto the same exact pixel).

Let's say you take a nice white shirt and put a denim stain on it (the "added data" argument discussed above). Since the stain is added data that does not pertain to the objective reality of the shirt, each time we "process" the shirt (by adding new "data" in the form of water and soap) and then "read" it, the stain, which is "unsupported noise", will be less and less visible, because it will not be correlated (strengthened) by new data (there will be no "clone-like" new denim stain "contained" in the washing cycle...). The original intrinsic characteristics of the shirt will have more resilient value than the "added-on" stain, by iteration..

I tried to find another way to explain the "added data" argument on a lighter note (I think we can discuss this seriously in a relaxed way...), and it could also be put this way:

*"this same phenomenon can be verified when men are watching wet t-shirts contests, where throwing water on textile allow for showing very precise information on what is actually behind. Access to this newly available information is usually received with enthusiastic response by observers".

The added data (water) reveals real, coherent and verifiable information, present below the original layer, that would not be visible without this specific "interpolation"...

The fact that what is seen may be natural or artificial is a different point... :ph34r:

Dave has posted a dynamic overlay of 2 images, but using 2 images of significantly different sizes.

Since I can't do such things myself, if Dave or someone else can do this with the 4-frame composite of Bond 4 posted earlier, I could try to comment, as Dave suggested, on the cross-checking of reference points through the evolution of the different versions.

The 4 frames are at the same scale...

Edited by Christian Frantz Toussay
