The Education Forum

So I will give it one last try…..


Recommended Posts

…before I let go.

I have come here once or twice in the last 5 years to try to relay the following information to serious researchers:

*there exists a very simple, inexpensive method to optimize the data content of photographic and film records. This method, or process, is apparently not known to the general public, nor to most experts in the field of optics or image processing.

The results thus obtained are so far beyond what is conventionally expected from optical enhancement that some people might just find them “impossible”. So let me explain a few things here…

*the process (this is crucial) is NOT based on an IMAGE processing approach. It is on the contrary based on a DATA processing approach, meaning that it is not about ameliorating the visual content of the image as seen by the human eye, but rather about extracting the core information content of the image.

This is a major difference and explains why most images do not actually look like conventional optical enhancements: they are not. They are visual representations of the data content being analyzed.

I initially developed the concept behind the process while working on the problem of “weak signals” in market research. Weak signals are pieces of information collected during a study but neglected in the final analysis, even though they later prove important, and sometimes crucial, in perspective. Simply put, the information is present but not recognizable for what it is, because other, less important information contaminates and obscures the really valuable data. Since I have absolutely no expertise in computer programming, this was simply a thought experiment about how to distinguish real / objective data from static / noise.

In 1999, I bought my first PC (yes, I know…), and thought it would be interesting to test the process on the photographic record of the JFK assassination: after all, what is a photo but a finite set of data expressed in a manner compatible with human vision?

*the process works beyond expectations. Applied to the JFK photographic and film records, it gives the following results:

-the shooting in Dealey Plaza was executed by 3 groups of men wearing Dallas Police uniforms

-the autopsy pictures show an entrance wound in the upper right forehead (slightly above the location indicated by Kilduff in Dallas), a lateral tangential wound in the right temple area and an exit wound in the occipital area, strongly indicative of a frontal shot, with a shockwave blowing open the right temporal area before the bullet exits the occiput.

-in both instances, there are signs that the evidence was tampered with

Before I post here photographic evidence supporting these statements, I’d like to add the following:

*from the “technical” side:

I have absolutely no expertise in data processing or image enhancement techniques: the enhancement method I have designed is thus a very simple, basic iterative process which can be explained in half a page to a 12-year-old. It requires neither expensive software nor powerful computers.

The conceptual architecture of the process is derived from:

*resilience

*equal standard value

*data interpolation

The fact that the process is iterative allows for a complete record of the data enhancement, and for easy cross-checks and verification: my files are available to anyone, and the process is easily reproducible.

*Is what I have found important?

I have always considered the analysis of pictures and films of the assassination a derivative, very much secondary to serious data research, like the Hancock, Twyman, Horne, Douglass and Morley books, to name a few of the more recent. The broader question of “Why / Who?” (this is actually a single question…) is more interesting than the fine details of “How was it done?”, I believe.

However, the level of detail extracted from the images and films I processed does in fact enrich our understanding of the crime. Basically, for the last 20 years, serious research has concentrated on a group of usual suspects, with means, motives, opportunities and documented links, and it is now a matter of fine-tuning the cursor of shared responsibility between those different groups of influence. The results I have obtained, and that anyone can duplicate, show that:

*the shooting could not have happened without high level complicity, and probably direct involvement, of the Dallas Police Department, at the logistics level (killing zone)

*Oswald did not shoot at JFK from the 6th floor of the TSBD. The photographic and film record establishes that Oswald was framed by the Dallas Police

So it would seem that the Texan connection may have been one of the major operational players in the plot, and that is interesting in helping us “move the cursor”…

The results I will present here show that it is possible to produce cleaned up versions of films and pictures of the assassination establishing the presence of unknown individuals in Dallas Police uniforms: 1) at the exact location (fence corner) where HSCA acoustical experts placed a gunman firing at Kennedy; 2) at locations long suspected to be shooting posts.

I believe that photographic and film corroboration of the HSCA acoustic panel's finding of a fence shooter is new and significant information; I believe that establishing the identity of the man in the Sniper's Nest (no, not Oswald…) is new and significant information.

This case is coming to closure, thanks to the irreplaceable work of the research community over the years: this is just another step in our overall understanding of the case.

Fellows, it is today 48 years since that day in Dallas: let’s close this case before it turns 50, and put the revisionist historians, disinformation artists and agenda-driven pundits where they belong, in the dustbins of History.

Well, that was a long one…

I will first post a demonstration of the process, using the classic Black Dog Man image from Betzner as a reference.

BDM is an undisputed image, identified by HSCA experts as “an unknown individual in dark clothing” standing behind the wall during the shooting.

So now let's first see if the process can do better than the experts and, why not, help us establish Black Dog Man's identity...

BetznerBDMProcessIllustrationLegend.jpg

Edited by Christian Frantz Toussay


I am posting below an extreme close-up of a different version of the same image, with even clearer details (like facial features) visible.

Like all images I will post here, this image is a "raw result" from the process, meaning it has not been manually retouched in any way: no artificial coloring "enhancement", no contour retraced for "better visualization", no "finishing touch" added.

This is just what the process churned out when applied to this specific set of finite data...

In that particular instance, however, I have excised the background around the Black Dog Man shape: the cut-out line is clearly visible and does not interfere with the image being observed.

This is, of course, one of the results I obtained (among many others) which appear a priori impossible when you compare the original material before processing (posted in the initial post here) with the result shown below, where the process actually extracts minute facial details from an 8x enlargement (specialists will understand that this is quite a significant increment) taken from the background of a grainy 1963 photo...

It sure sounds quite "unconventional", to say the least, but then you can see for yourself below what I am talking about...

BlackDogManBetznerCrop4ProcessLegend.jpg


... as another proof of the process's validity, I post below a composite of Black Dog Man from Betzner and Willis 5. Though I have evidently not worked as much on Willis, you can see that this image is already clear enough to corroborate the presence of a man in dark uniform behind the wall.

As anyone knows, photo interpretation is quite tricky: we see with our brain, not our eyes. That is why I will only present here corroborated images, taken from different sources at different times but showing the same areas.

But when films corroborate pictures (and vice versa), and in turn corroborate acoustical findings and witnesses testimonies, I think we are actually making some progress here...

So let's examine a processed Black Dog Man, from 2 different points of view, at 2 slightly different times, and see how they compare...

BDMWillis5BelznrCompositeLegend.jpg

...the caption refers to images that I will post later...



Please show us more, and could you elaborate on the method used? Is there a program one can buy?

Kathy C


..hi there..

No, I won't be stopping there, and thanks for taking the time to look into this...

Basically, I intend to leave a record of what I have found in the last 10 years (and, more importantly, HOW it was done), so that it is not lost in case I am gone some day: I am 56 with some health issues, so who knows?

It would be a pity that such a Rosetta Stone for solving the crime could be lost again for somebody else to rediscover in 50 years...

As I explained a few years earlier, the process basically offers the research community the opportunity to produce cleaned-up versions of all pictures and films of the assassination, in what could be a coordinated effort.

It now takes me between 4 and 48 hours of full-time work to produce a cleaned-up image, depending on the quality and complexity of the original material. This means that a dedicated group of people using the process could easily produce versions of the Hughes, Nix and the last frames of Zapruder, showing the assassins and their accomplices caught in the act on motion picture, within a few months.

This could be done before the 50th anniversary of the Dallas Hit.

Anyone up for this? To anyone interested, I can send a detailed, very simple step-by-step description of how to proceed (it will take half a page...) and various example files to work on if needed.

I will post some stunning results obtained from those 3 films to illustrate what I mean. But I thought it would be better to first establish that the process does indeed bring undisputed, and quite unconventional, visual optimization to known images, like Black Dog Man, before presenting new, until-now unknown results, which are bound to generate polemics and heated debate because of all there is at stake for researchers who have "invested" so much in one specific hypothesis or another regarding the assassination.

I have personally had to seriously review my own reconstruction of the assassination (50 years in the making...), based upon what I found in those last 10 years, so I know that this is not a pleasant feeling... B)

The last thing I need with this is people having an a priori mental block because they will feel challenged on some of their strong, core beliefs about a very emotional (to most people, and for different reasons) subject.

So I wanted first to establish that the process operates as I claim, and does indeed extract additional information that cannot be produced by classic optical enhancement methods, using an undisputed image such as Black Dog Man.

I think I have done just that, and I think I should allow for some time for posters reaction to this material.

I will post new images tomorrow, showing proof of another, until-now unknown individual behind the wall during the shooting.

Terry, there is no problem if you want to contact me privately about this...


"Please show us more and if you could elaborate on the method used. Is there a program one can buy?

Kathy C"

...hi Kathy... No, there is no specific program to use or software to buy to be able to apply the process:

- I work with ArcSoft, a photo program that came with my printer and which is very user-friendly (it probably retails below $50...)

- I use Kneson Imagener ($70) to generate high quality enlargements of original material to start a new file processing

The process, as I described, is a very simple iterative method based on data interpolation, characterized by an unusually high enhancement value for data stored on photographic and film records. It can, theoretically, be done with any image software you own...

The "added value" is in the process, not the tools... B)

I can post here a step by step "how-to-do-it" guide, or send it to you privately...


...I am posting below the first of a series of until-now unknown images, showing conclusive evidence of Dallas Police direct complicity in the crime.

I believe I have conclusively established the validity of the process by showing that it can do the HSCA experts one better, identifying Black Dog Man not merely as "an individual in black clothing" but much more precisely as an individual wearing a dark uniform.

So the process is superior to the experts' methods and techniques (whatever they were), when applied to known images like Black Dog Man. We will return to known images later on.

Now let's see if the process can go further and extract up-to-now unknown images: let's go back, for instance, to the Betzner picture, and verify whether Black Dog Man, the unknown uniformed man behind the retaining wall corner, is alone or has company...

BelznerFullRetainingWallLegend3.jpg

It would seem that, indeed, he does...

Black Dog Man, you will remember, is several yards to the left, close to the retaining wall corner. This one is in close vicinity of the Stemmons Freeway Sign...

So what we have here is photographic evidence of a second, unidentified man in dark uniform present behind the wall during the shooting: it is my interpretation that those men are wearing Dallas Police uniforms...


...but could this stunning new image be a freak (though obviously, world-class..) optical illusion? Let's now examine a composite showing two radically different "readings" of this same data content, using our quite effective little process, to see how they compare, and look for resilient information.

Unsupported information / noise will have a much lower statistical propensity to manifest itself coherently (with measurable points of reference, if you will) than real / objective information, regardless of how the data is analyzed. So the mathematical odds that two radically different expressions of the same data content would produce the same, identical "optical illusion / erroneous result" appear to be very, very low, I believe.
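This statistical intuition can be illustrated with a small sketch of my own (an illustration, not the process itself): two independent noisy "readings" of the same underlying content still correlate, because the shared signal persists while the independent noise does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "image": a fixed signal plus independent noise per reading.
signal = np.sin(np.linspace(0, 4 * np.pi, 200))   # real / objective content
reading_a = signal + rng.normal(0, 0.8, 200)      # first expression of the data
reading_b = signal + rng.normal(0, 0.8, 200)      # second, independent expression

# The correlation between the two readings is driven by the shared signal;
# the two independent noise terms are uncorrelated with each other.
r = float(np.corrcoef(reading_a, reading_b)[0, 1])
print(round(r, 2))  # clearly positive despite the heavy noise
```

Two readings of pure noise, with no shared signal, would hover around zero correlation instead.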

So let's see...

Belzner2ndDPDOfficerCompositeLegend2011.jpg



Christian

I once did a computer course to enhance my CNC skills. The tutor was into graphics big time, and he explained a method for extracting data using pixel position and the 8-bit codes for the colours, 255 being black and 0 being white.

The memory at the time was restricted to a 1-byte pass containing sufficient information to build up a picture in layers (much like a dot matrix printer). There was some talk at the time (late 70's) of using some very clever algorithms to extract and manipulate the data, and I was surprised by the results even then.

I got a 1MB Mac Plus in '84 and acquired a piece of shareware called NIH Image, which I understand came from the National Institute of Mental Health. This software was used to discern disease by colour from brain tissue slices; it was extremely accurate and worked at pixel depth. I can only assume the algorithms utilised by this software would be similar to those used by yourself to extract these images.

I still use a Mac and have Graphic Converter software, which is also a powerful aid (and freeware).

I wonder, have you tried applying this technique to the Backyard or Neely St. pictures?

If they are composites, this would reveal (in a pattern along the line of the union) the various values for each part of the originals. It's worth a try!
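Ian's seam idea can be sketched in a few lines. This is only a toy illustration under my own assumptions (a synthetic two-source image with a small exposure difference between halves), not an analysis of the actual backyard photos:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical composite: top half from one "source", bottom half from another
# source with a slightly different exposure (mean brightness).
top = rng.normal(100, 5, (40, 64))
bottom = rng.normal(120, 5, (40, 64))
composite = np.vstack([top, bottom])

# Along the union line, the jump in row-to-row mean brightness stands out
# against the ordinary row-to-row fluctuation.
row_means = composite.mean(axis=1)
jumps = np.abs(np.diff(row_means))
seam = int(np.argmax(jumps)) + 1
print(seam)  # the row where the two sources meet
```

On a real scan, grain and content edges produce brightness jumps of their own, so a single argmax is far too naive; the sketch only shows the principle.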

I would certainly be interested in any information you can give me.

Thanks

Ian

Edited by Ian Kingsbury

Thank you, Christian Frantz, for sharing your detailed work with all; your efforts are greatly appreciated as a contribution. One never knows what new and extensive findings researchers may develop, given time, dedication and effort, and a show of appreciation by others. I wish you the best of luck with your findings; your work certainly shows great interest... please continue with your presentation. Thank you... take care.. b



Either posting the guide here or sending it privately would be helpful. I've read and re-read what you've written and haven't been able to make heads or tails of what you've been trying to explain.

Despite what they show you on CSI and NCIS, you can't get more detail out of a pixel than was recorded in the first place ... any more than you can "see around corners" in photos. I work with photos and photography all day every day, so if there's a duplicatable process to follow, I'm very interested in learning exactly what the process is.

Christian, do you need my email address? Or can you post the process here?



..Hi Ian....

As I explained, I have strictly no expertise in data processing or the like. I am, however, generally considered "smart" (whatever that means...), mainly because I am an avid reader with varied interests, a good memory, and skills in analysis, synthesis and creativity.

I am presently at a loss to explain, in today's scientific language, how the process performs, but I can explain easily my reasoning in devising it, and what you just posted might just give technically trained specialists some interesting clue to the inner mechanism.

Basically, the process rests on 3 tenets:

*resilience: this posits that real / objective information (pertaining to a measurable phenomenon "out there") will have a stronger statistical propensity to manifest itself coherently (with measurable, fixed points of reference) than static / unsupported information, regardless of how you collect the data.

Let's say you go out every day, walk to the top of a hill, and take a photo of the landscape in front of you. You can take those pictures at any time of the day or night you choose, in any season, in any type of weather conditions. For the sake of the argument, I will insist on only one prerequisite: that you take them all from the exact same spot.

Now let's say you took 100 images through the year and analyze them all at the same time: regardless of the differences in lighting conditions or weather, you will notice that some information is present in all, or most, pictures (for instance the skyline of buildings), some is regularly, but not systematically, present (for instance lights in pictures taken at dusk or during night time), and some is marginal and fluctuating (clouds in the sky, planes passing by, etc.).

So what this means is that the more resilient the information is (the more it shows up in different "readings" of the same finite set of data, like a picture), the more accuracy you can assign to it: you can photograph a building any way you want, with all kinds of settings, and it will always show up somehow in the photo (because it interferes with the way photons react with the equipment you use to record the event), preserving some core information from the original objective image.
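A minimal sketch of this "resilience" idea, under my own illustrative assumptions (a fixed 64-pixel "skyline" row observed 100 times, each shot with its own noise and a random global lighting shift):

```python
import numpy as np

rng = np.random.default_rng(2)

# Fixed "skyline" (resilient information) observed under 100 different
# conditions: per-shot noise plus a global lighting shift (transient parts).
skyline = np.tile(np.array([0., 0., 1., 1., 1., 0., 1., 0.]), 8)  # 64 pixels
shots = [skyline + rng.normal(0, 1.0, 64) + rng.uniform(-0.5, 0.5)
         for _ in range(100)]

# Resilient content survives aggregation; transient content averages out.
estimate = np.mean(shots, axis=0)
recovered = (estimate > estimate.mean()).astype(float)
accuracy = float((recovered == skyline).mean())
print(accuracy)  # fraction of pixels correctly recovered
```

Any single shot here is dominated by noise; it is only across the whole set of readings that the fixed structure stands out.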

Now what you say about your training course is exactly the reasoning I had to begin with:

-let's say that all pixels have a measured, quantified value between "white" and "black" (in my reasoning I used a scale of "0" for white and "100" for black, but it's the same idea...): what we actually do by processing a picture as I describe is simply produce a version of the data content which "equalizes" pixel values, based on the statistical resilience of the information: as simple as that...

For instance, a specific pixel may have an original value of 80, but the process will show that its average statistical value after N iterations drops to 10: you go from a very dark gray to an almost pearly white.

It doesn't look too exciting as far as single pixels are concerned, but it sure makes for some spectacular results when considering whole groups of pixels, as you can see...
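As a concrete, hypothetical worked example of the single-pixel figures above, on the 0 (white) to 100 (black) scale the post uses: suppose one pixel's value across N = 8 iterations reads 80 once and clusters near 10 otherwise. Simple aggregation pulls it toward the resilient value:

```python
import numpy as np

# Hypothetical values for one pixel across N = 8 iterations; the first
# reading (80) is an outlier against a cluster near 10.
readings = np.array([80, 12, 9, 11, 8, 10, 13, 9])

equalized = float(readings.mean())   # a plain average still feels the outlier
robust = float(np.median(readings))  # the median sits with the cluster
print(equalized, robust)             # 19.0 and 10.5 on the 0-100 scale
```

A median (or any outlier-resistant average) lands much closer to the "drops to 10" behavior described above than a plain mean does.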

*the second tenet of the process I call equal standard value: by this I mean that all derivations of the original data are considered equal in value at the start of the process: a negative therefore contains no more and no less than the original picture, which itself contains no more and no less than etc. Remember, we are looking for resilience (macro), not details (micro), at this stage.

*the third and last tenet is data interpolation: by this I mean that, since all derivations are valid, any and all results obtained by interpolating them are also valid.

Some will already have understood that we are actually creating a sort of auto-feed data loop, where we can continually inject refined versions of the original information back into the database, creating new templates that in turn help refine the data some more...
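The auto-feed loop can be sketched as a running blend where each refined result becomes the template for the next pass. The blending weights and noise level below are my own illustrative assumptions, not values from the actual process:

```python
import numpy as np

rng = np.random.default_rng(3)

# Unknown underlying content, and a first rough derivation of it.
truth = np.linspace(0, 1, 32)
template = truth + rng.normal(0, 0.5, 32)

for _ in range(50):
    new_reading = truth + rng.normal(0, 0.5, 32)     # fresh derivation
    template = 0.9 * template + 0.1 * new_reading    # inject it back as template

error = float(np.abs(template - truth).mean())
print(round(error, 3))  # well below the single-reading noise level (~0.4)
```

A loop like this only converges toward the underlying content if the derivations are reasonably independent; feeding back correlated copies of the same reading would just entrench its errors, which is one reason independent corroborating sources matter.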

Though I understand that my approach to the process is quite clumsy, I very much doubt that it has not already been discovered, and probably put to use in some circles: it is just too powerful a tool.

I am confident that most parts of the process that I do manually could be automated, allowing for much, much faster processing.

I did some work, quite recently, on some of the Backyard pictures: the results are quite intriguing, but they still have to be worked on, in my opinion.

Since I am quite busy with the other material, I can send you my complete files on this, if you wish to explore this particular angle: I worked on the classic backyard picture, plus a composite of 2 close-ups of Oswald's head.

Since several of you have requested the "how-to-guide", I will put it here online, if that's Ok: just give me a day or two.... B)


There is nothing wrong with raising the level of technical knowledge in this group, as it is key to solving the JFK assassination's finer details.

I do think your Black Dog Man area is a key area to check, and that the side-on shot to JFK's head came from there.

I think it is extremely important to use the highest pixel content images possible for these technical enhancements. The data image has to have pixel detail that slightly exceeds the grain of the photo images.

I use methods in Adobe Premiere and Adobe Photoshop, which both offer ways to enhance the image, change parameters to magnify, enhance image details, and so on.

With Adobe Premiere, I am able to synchronize the high-res Zapruder film to the Dictabelt sound file of the shots, and this works extremely well at showing the finer details in frame-by-frame analysis around the points of interest connected with the sounds of the shots fired.
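As a back-of-the-envelope sketch of that frame-to-sound alignment: assuming the commonly cited Zapruder camera speed of roughly 18.3 frames per second, a single anchor pair (one frame matched to one audio timestamp) fixes the whole mapping. The anchor values below are hypothetical, chosen only for illustration:

```python
# Assumed camera speed, and a hypothetical anchor pairing frame 313
# with time 0.0 s in the audio file's own timeline.
FPS = 18.3
ANCHOR_FRAME = 313
ANCHOR_TIME = 0.0

def frame_at(audio_time: float) -> int:
    """Map an audio timestamp (seconds) to the nearest film frame number."""
    return round(ANCHOR_FRAME + (audio_time - ANCHOR_TIME) * FPS)

print(frame_at(0.0))   # the anchor frame itself
print(frame_at(-1.0))  # roughly 18 frames earlier
```

Picking the anchor pair is the hard part in practice; once it is chosen, every candidate sound impulse translates directly into a frame to inspect.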

Looking forward to hearing more on your techniques, and I do think you are in the right area with Black Dog Man at the corner of the concrete wall. That was an extremely good place to hide, as there was a fence in front, a fence behind, and it offered a clear, close-in shot to kill JFK from the grassy knoll area.



...Hi Duke... I remember we discussed this a few years ago...

I agree totally with your statement that "you can't get more detail out of a pixel than was recorded in the first place ...".

Actually, that is why I stopped any work with the process for 3 years, after I discovered the first image: I was convinced that it was some kind of high level optical trickery.

But facts are stubborn:

-the nemesis of optical illusions is enlargement: the level of detail (at times, distinct facial features) visible even after major increments of enlargement (6x, 8x, 10x and at times more...) goes far beyond what we should expect from a classic optical illusion, which evaporates under much lower scrutiny

-the process is reproducible: very early in my research, not aware of the fallibility of computers, I lost a complete file documenting Dallas Police presence in the Dal-Tex Building. I had to rebuild it from scratch, with even better results. I have also worked on different versions of the same images, getting the same results

-it is not a matter of one image found in one picture: it is a matter of several dozen images, taken from films (Zapruder, Nix, Hughes) and pictures (Moorman, Betzner, Bond, Willis, Altgens, etc.) showing very suspicious, undocumented activity by Dallas Police elements during the shooting.

Counting only those mentioned above, that makes 8 different but concurring sources providing identical information. That's a lot...

So, while I still agree with you that there is no way to extract any information that has not been previously recorded, my reasoning is now "What do we really know about the way visual data is recorded on photographic support?...".

My guess is now that what we can extract from the record is much more dependent on the tools at our disposal than previously thought. I think Wilson convincingly showed that, although I do not agree with most of his findings...

