The Education Forum

So I will give it one last try…..


Recommended Posts

"It is as you say: we discussed this several years ago. Or rather, I'd asked you about the process and you talked about the underlying thesis and the reproducible results ... but never described the process that is so "reproducible." You say it is, but don't tell us how to do it. That's what I asked you then, that's what I asked you this time."

...my friend, I don't think we need any aggressiveness here: we discussed this before, and I understood that your position, which rests on optical knowledge, is that what I am saying is basically impossible. I remember asking you for your e-mail for a private exchange, but I don't think I ever received it. At that time, I was in touch with a poster here (Jim? Doug?) who kindly offered to produce a short clip of my research, which I thought would be more explicit than my clumsy explanations.

I received the clip, but lost track of Doug / Jim, so I let the matter rest (I have other interests in life besides the JFK case...), until last week, when the anniversary of the assassination got a bit on my nerves and I decided that, as I said, I would give it one last try before I let it rest....

"If you don't want to tell us how to reproduce the process and merely want to tout the results you've obtained and the concept it's based on, stop complaining about how nobody seems to be interested in the information you're trying to "relay" without relaying it."

...I don't know where you get the idea that I don't want to explain how to reproduce the process: I think I have been quite explicit from the start that that is precisely what I came here for: to make the process known as widely as possible, so that different researchers can work with it in a coordinated way, thus saving time, resources and effort. I know English is not my native language, but I usually get better results when communicating with people... <_< <_<

...and I am not "touting" anything either: on the contrary, I think I am taking all sorts of precautions to present objectively what I have found, using as a demonstration research done on an undisputed image, Black Dog Man, which I selected specifically because its uncontested validity (see HSCA) would, presumably, spare me the kind of a priori attitude you seem to have when confronted with something new to you...

"If it's reproducible, tell us exactly how to reproduce it. Otherwise, you're merely asking us to buy into a theory that's "so far beyond what is conventionally expected from optical enhancement" that all you can do is tell us why we should think it's valid without ever proving that it is. The proof is in the pudding known as "reproduction."

"

I am not selling anything here, so there's nothing to "buy": I have no book or documentary in the works, and I will post the detailed method here so that it can be shared... Again, I have never asked you to believe this without experiencing it for yourself: just re-read all my posts above: I am only here to try to convince people that there is much value in this process, that it costs nothing, is easy to use, and brings out exclusive new information on the JFK case.

Now, you may be frustrated (?) that I have not already posted the method here. You can perhaps understand, just by following the thread, that I like to make detailed replies to the questions asked, and that is time-consuming: I also thought it would be better to first establish that the process works, by using the Betzner Black Dog Man, so that people will take this more seriously.

So the next thing I'll post, probably today, will be this step-by-step guide.

I'd like to add that this is OK: I have no qualms about legitimate questions and criticism (that's what forums are for), but I'd appreciate it if they were raised with courtesy.

For instance, Ian and Ed have asked legitimate questions that I have tried to answer as best as I can.

Ian also mentioned information of a technical nature that might help you make better sense of what I am trying to explain: it would seem that what looks incomprehensible to you (coming from me, agreed...) made a lot of sense to his trainer in PC graphics.

Maybe you could check this out with him?

Also, you did not answer my previous question to you, so I will ask again:

*I understand you don't understand how the process works

*but what is your opinion of the enhancement ratio generated by the process? Do you see, or not, a significant improvement in clarity and detail between the original and the processed versions?

Edited by Christian Frantz Toussay

…so for all of you who asked for the “how-to” guide, here it is: I will simply describe, as precisely as possible, every step needed to produce highly enhanced images of the type I have already posted here.

Evidently, though I assume that there must be some physical limit to the quantity / quality of the data originally recorded (I agree with Duke here…), it is my opinion that better, more powerful tools should be able to produce even better results than what I have obtained with off-the-shelf software.

So anybody here can, theoretically, go one better than I did and release an almost crystal-clear picture of the Bond 4 BDM, for instance, a picture that might even be clear enough for an ID sketch… <_< (yes, I am serious...)

I have already explained in some detail the concept behind the process (Duke, bear with me a minute here… B) ) so I will go directly to the “mechanics” of it.

So here we go:

*select the best source material, save it and create a file; duplicate sources by selecting various versions of the same original image (you’ll be surprised at the highly varying levels of definition and sharpness that you will find in supposedly identical images) so as to make sure you work with the best material;

*make some high-quality enlargements of the file(s) you intend to work on, even if they appear sufficiently large to be worked on comfortably with the classic enlargement tool of your usual software: using dedicated software at this stage (I use Kneson Imagener, an old version, which is cheap, around $70, and very effective for what I need) will guarantee that you work with mathematically sound magnifications of the data (i.e., with much less noise generated by the enlargement itself, to go back to my exchange with Ed);

*name the file, for instance “Betzner Black Dog Man from Trask”. Rename the original picture in the file accordingly: “Betzner Black Dog Man from Trask 1”. From now on, you will simply be processing the internal data content of the image;

*open the file and apply any tool or settings that you wish to it: most people will spontaneously, as I did for years, try to enhance the visual aspect of the image: sharpness, lighting, etc. This is OK, and I recommend that you start like this, but you can also apply much stronger settings once you get used to it;

*whatever you choose, you will now have created a new version (a new reading of the data, if you will, because you have modified the way the software analyzes the image…) of the finite set of information contained on the support. Save this file and name it “Betzner Black Dog Man from Trask 2”. Every new version you create should then be tagged accordingly (3, 4, etc.) to allow for a time-flow analysis of the enhancement process. Each picture is thus a "frozen" increment in the process, each increment containing additional information compared to its predecessor. This is a very important and effective control check, because it shows that the images do not appear out of thin air, but result from a gradual, incremental refining process;

*now you will take that file and interpolate it with the original one: what you have in hand now is the equivalent of two witnesses describing the same event, but in different manners. From the data-processing point of view, this means that interpolation will allow us to identify correlations (identical data present in both images) or discontinuities (information not shared by both images, which could possibly be unsupported noise): very simply, we have here a tool that will allow us to sort out real / objective information (because it will be statistically resilient) from noise / static (because it will be statistically much less coherent than genuine data);

*the software I use allows for the following operations between photo files: add, difference, multiply, subtract. I would assume most software does the same (a minimal code sketch of these operations follows this step list);

*now you can start generating new versions of the data content of the original, simply by creating different “readings”, either by modifying an already created image using your software settings, by interpolating it with other versions, or both. Each operation will create a new valid expression of the data being observed, each with unique data content (though some differences may be minute).

Every time you add, multiply, subtract or differentiate two sets of data (images), you create a new level of information that takes into account all the previous levels.

*now you will probably need to start organizing what you are creating: starting from the root name, I use explicit titles so that I can easily retrieve images of interest, for instance “Betzner Black Dog Man 2 2nd DPD near Stemmons”. I also have a coding scheme for the definition of the image (+++, Best, etc.), which allows for quick selection when needed. This means I can tag a specific secondary image which appears of interest, and then follow its evolution through time: as explained, real information will be resilient: an image that appears once or twice can be dismissed. If it is found 30 times, that is another matter. Some images will come and go, some will stay for good.

By doing so, you will avoid creating specific files for “discoveries” that eventually vanish into thin air (yes, I had some of those too in the beginning…). If a discovery proves resilient (better still if it is resilient and corroborated), then go on and create a new file. My standard for resilience is an image that appears more than 10 times in widely different readings.
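For those who prefer to see the mechanics in code, here is a minimal sketch of the steps above in Python, using the free Pillow library in place of the commercial tools mentioned (Kneson Imagener and the like); the file names are only placeholders, and the specific enhancement settings are assumptions, not part of the original method.

```python
from PIL import Image, ImageChops, ImageEnhance

# Load the best source version and make a high-quality enlargement
# (Lanczos resampling keeps the magnification mathematically well behaved).
original = Image.open("Betzner Black Dog Man from Trask 1.png").convert("RGB")
enlarged = original.resize((original.width * 4, original.height * 4), Image.LANCZOS)

# Create a new "reading" by changing how the data is rendered
# (sharpness and contrast here; any settings could be used instead).
reading_2 = ImageEnhance.Sharpness(enlarged).enhance(2.0)
reading_2 = ImageEnhance.Contrast(reading_2).enhance(1.3)
reading_2.save("Betzner Black Dog Man from Trask 2.png")

# Interpolate the new reading with the enlargement of the original:
# Image.blend averages the two pixel by pixel ("two witnesses" at once).
reading_3 = Image.blend(enlarged, reading_2, 0.5)
reading_3.save("Betzner Black Dog Man from Trask 3.png")

# The four operations mentioned (add, difference, multiply, subtract)
# all exist in Pillow's ImageChops module.
added      = ImageChops.add(enlarged, reading_2, scale=2.0)        # divided by 2 to avoid clipping
difference = ImageChops.difference(enlarged, reading_2)
multiplied = ImageChops.multiply(enlarged, reading_2)
subtracted = ImageChops.subtract(enlarged, reading_2, offset=128)  # centred on mid-grey

# Save every result with an incremental tag so the "time flow" of the
# process can be audited later.
for i, img in enumerate([added, difference, multiplied, subtracted], start=4):
    img.save(f"Betzner Black Dog Man from Trask {i}.png")
```

The point is simply that every operation writes out a new numbered "reading" that can itself be fed back into further blends and ImageChops operations.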

Now, a few recommendations, if I may:

Depending on your skills or natural talents (people with keen observation skills will have an advantage here), you may produce interesting results more or less rapidly: don't fret, that's how it works. I can confirm, though, that I am now 10 times faster with it than when I started, which means that technique may have something to do with it (but I am sure this part could be automated, allowing us to concentrate only on the results obtained. Ian, if you can hear me :rolleyes: ).

It would seem to me that taking the difference between variously sharpened / focused versions is quite effective, so you may want to try this as a preferred approach to start with.

I personally recommend working with full pictures as much as possible.

Do not go into this with certitudes; that is the best way to be disappointed. Check and recheck your findings, and then recheck them again. Make sure you have a good knowledge of the 3D physical settings of the locations under observation. Check and verify that what you see is compatible with scales and known dimensions.

Always look for corroboration and cross-checking of any image you may find: there is enough in the record, with different views of specific areas at different times, to confirm or deny what you may have found...

In my opinion, it is also crucial that the original data be reinjected into the processing workflow on a regular basis: this serves as a safety check.
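As a rough illustration of this safety check, here is a small sketch in the same Pillow-based terms as above, in which the untouched original is blended back in every third iteration; the one-in-three schedule and the sharpening settings are my assumptions, not part of the described method.

```python
from PIL import Image, ImageEnhance

original = Image.open("Betzner Black Dog Man from Trask 1.png").convert("RGB")
current = original

for step in range(1, 13):
    # one more "reading": a mild sharpening pass on the current version
    current = ImageEnhance.Sharpness(current).enhance(1.5)

    # safety check: every third step, average the result back with the
    # untouched original so the workflow never drifts away from the
    # recorded data (the one-in-three schedule is only an assumption)
    if step % 3 == 0:
        current = Image.blend(current, original, 0.5)

    current.save(f"Betzner Black Dog Man from Trask {step + 1}.png")
```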

Now, for those who would like to try it but need some help getting started, I can do this:

-I can send you a partial file (an original I have worked on, plus some already processed results): you could start generating new versions right away and see for yourself whether the process actually works, by enhancing what I send you.

I have absolutely no doubt that some people here can go much further than I have so far with this… B)

Edited by Christian Frantz Toussay

...

The process, as I described, is a very simple iterative method based on data interpolation, which is characterized by an unusually high enhancement of the data stored on the photographic and film record. It can be done, theoretically, with any image software you own...

The "added value" is in the process, not the tools... B)

I can post here a step by step "how-to-do-it" guide, or send it to you privately...

Hi Christian,

I don't completely understand the description of your method, but I am quite interested in examining the process if you are willing to share.

...Hi Richard...

...no problem, you can see that I am myself not really on the technical side of things, and I understand that what I am saying is quite unconventional and requires more pedagogical skill than I can muster... <_<

No worries, I will post a very simple do-it-yourself guide here for those of you who are interested.

I think I should say this also, so that there is no misunderstanding: anyone producing new images using this process can claim copyright to what they discover: I will only request that they acknowledge the precedence of my research, that's all...

This is for Ian,

re the technical side of it: I have some friends who believe this method is marketable (could be turned into a sellable product), and that some serious money can be made out of it.

This is not at all my field of expertise: so if you want to look into it, that's fine with me... <_<

I believe your trainer was on the right track, and the material you have at your disposal could be a major improvement for the process: in all probability, we should get much better results using more powerful equipment...

Christian

Thanks for the reply. I fear today's ultrafast processors and sectional programming may impede this type of software.

It had to be programmed live, using various subroutines suited to a particular chip; this was when large-scale integration meant in excess of 100,000 transistors. I have no doubt today's silicon marvels could be coaxed into producing some marvelous results.

I believe it all lies in the mathematics: compare a selection of pixel values against a known set of variable values within a range, then move on to the next selection, depositing the extrapolated data as a layer, and add layer after layer to build a possible "match". The numerical aspect is there to protect the image from being user-led, i.e. from adjusting any viewing characteristics, brightness, contrast and so forth.
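A rough sketch, in Python with numpy, of how this layer-by-layer matching could be read; the tile size, the tolerance and the choice of reference patch are assumptions, not anything Ian specified.

```python
import numpy as np
from PIL import Image

img = np.array(Image.open("source_frame.png").convert("L"), dtype=np.float32)

# Reference range taken from a patch whose content is treated as "known"
# (top-left corner here, purely as an assumption).
reference_patch = img[0:32, 0:32]
low, high = reference_patch.min() - 10, reference_patch.max() + 10

# Walk over the image selection by selection, keeping only the pixel values
# that fall inside the known range; everything else is left empty.
tile = 32
canvas = np.full(img.shape, np.nan, dtype=np.float32)
for y in range(0, img.shape[0] - tile + 1, tile):
    for x in range(0, img.shape[1] - tile + 1, tile):
        selection = img[y:y + tile, x:x + tile]
        layer = np.where((selection >= low) & (selection <= high), selection, np.nan)
        canvas[y:y + tile, x:x + tile] = layer

# Where the selections match the known range, a picture builds up;
# where they do not, the canvas stays dark.
result = np.nan_to_num(canvas, nan=0.0).astype(np.uint8)
Image.fromarray(result, mode="L").save("layered_match.png")
```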

I believe you would quickly adapt to watching a picture build (or not), and that you could move to another selection and run through the variables surrounding that group.

I do hope I am not babbling on aimlessly; I am a stonemason and well out of my depth on today's core processors.

I just remember the lessons ( for once in my life!)

Chips this will work with:

8088

Motorola 68000

and one of the large Fairchild CMOS chips, one of the first optical sensors used in their camera planes (this may have been one of the E.T. chips, as the company was bought out by Schlumberger).

Ian

Edited by Ian Kingsbury

...for those who would like advice about where to start, here are some areas of very great interest:

*the wall corner in the uncropped Nix film (can be found on YouTube)

*the DalTex Building from Altgens

*the Sniper's Nest in the Hughes film (crucial)

There are, of course, several others...


...still trying to build my case (I hope...) for serious consideration of this, I post below yet another illustration of the process's superiority over conventional optical enhancement.

This composite shows the strikingly higher definition that the process can extract, as compared to the classic magnification.

The optical approach merely shows us a bigger dark blob. The processed image reveals all sorts of intricate details in the man's attitude, face and clothing...

Willis5ProcessDemonstrationLegend.jpg

Edited by Christian Frantz Toussay

"I believe it all lies in the mathematics . By comparing a selection of pixels values and comparing to a known set of variable values within a range and then the next selection depositing the extrapolated data as a layer and add layer after layer to build a possible "match".The numerical aspect is to protect the image from being user lead i.e. adjusting

any veiwing characteristics ,Brightness,contrast and so forth."

...Ian, that is exactly what my intuitive thinking was at the time: to correlate sets of variable values by "averaging" them, using different expressions of the same data as a generator for the variables...

"I believe you would quickly adapt to watching a picture build or not ,That you could move to another selection and run through the variables surrounding that group."

...that is exactly how I imagine the process could be automated...

Let me know if you want to go further than that: I don't believe it is so much a question of processors as of methodology, so it may not actually require state-of-the-art knowledge or equipment to build a working program...

Edited by Christian Frantz Toussay
Link to comment
Share on other sites

"I believe it all lies in the mathematics . By comparing a selection of pixels values and comparing to a known set of variable values within a range and then the next selection depositing the extrapolated data as a layer and add layer after layer to build a possible "match".The numerical aspect is to protect the image from being user lead i.e. adjusting

any veiwing characteristics ,Brightness,contrast and so forth."

...Ian, that is exactly what my intuitive thinking was at the time: to correlate sets of variable values by "averaging" them, using different expressions of the same data as a generator for the variables...

Christian

Eureka! Serendipity is alive and well. Now the hard bit: how to calculate the routines? A RISC chip would be preferable, as the instruction set is greatly reduced, but we would no doubt have to compact various routines/subroutines like a zip file. These could be fed into the program core, leaving a file bearing the information for storage, to be used by the main program / data storage device. This would require large amounts of memory to hold the various layers and to be able to manipulate the images up and down through the produced layers.

So what you have is a non-photographic process produced from data, with the relation of the known being tested on the unknown?

Ian

P.S. The name of the software was NIH Image; unfortunately my old Mac Plus is a museum piece on some exec's desk in Canary Wharf, but I will try to find an emulator and the software for my current eMac.

Edited by Ian Kingsbury

...Hi Duke... I remember we discussed this a few years ago...

I agree totally with your statement that "you can't get more detail out of a pixel than was recorded in the first place ...".

Actually, that is why I stopped any work with the process for 3 years, after I discovered the first image: I was convinced that it was some kind of high level optical trickery.

But facts are stubborn:

-the nemesis of optical illusions is enlargement: the level of detail (at times, distinct facial features) visible even after major increments of enlargement (6x, 8x, 10x and at times more...) goes far beyond what we should expect from a classic optical illusion, which evaporates under much less scrutiny

-the process is reproducible: very early in my research, not yet aware of the fallibility of computers, I lost a complete file documenting Dallas Police presence in the DalTex Building. I had to rebuild it from scratch, with even better results. I have also worked on different versions of the same images, getting the same results

-it is not a matter of one image found in one picture: it is a matter of several dozen images, taken from films (Zapruder, Nix, Hughes) and pictures (Moorman, Betzner, Bond, Willis, Altgens, etc.), showing very suspicious, undocumented activity by Dallas Police elements during the shooting.

Counting only those mentioned above, that makes 8 different but concurring sources providing identical information. That's a lot...

So, while I still agree with you that there is no way to extract any information that has not been previously recorded, my reasoning is now "What do we really know about the way visual data is recorded on photographic support?...".

My guess is now that what we can extract from the record is much more dependent on the tools at our disposal than previously thought. I think Wilson showed that convincingly, although I do not agree with most of his findings...

It is as you say: we discussed this several years ago. Or rather, I'd asked you about the process and you talked about the underlying thesis and the reproducible results ... but never described the process that is so "reproducible." You say it is, but don't tell us how to do it. That's what I asked you then, that's what I asked you this time.

If you don't want to tell us how to reproduce the process and merely want to tout the results you've obtained and the concept it's based on, stop complaining about how nobody seems to be interested in the information you're trying to "relay" without relaying it.

If it's reproducible, tell us exactly how to reproduce it. Otherwise, you're merely asking us to buy into a theory that's "so far beyond what is conventionally expected from optical enhancement" that all you can do is tell us why we should think it's valid without ever proving that it is. The proof is in the pudding known as "reproduction."

Please be patient with this man.

Kathy C


"I believe it all lies in the mathematics . By comparing a selection of pixels values and comparing to a known set of variable values within a range and then the next selection depositing the extrapolated data as a layer and add layer after layer to build a possible "match".The numerical aspect is to protect the image from being user lead i.e. adjusting

any veiwing characteristics ,Brightness,contrast and so forth."

...Ian, that is exactly what my intuitive thinking was at the time: to correlate sets of variable values by "averaging" them, using different expressions of the same data as a generator for the variables...

Christian

Eureka!. Serendipity is alive and well .Now the hard bit, how to calculate the routines?, A Risc chip would be preferable as the instruction set is greatly reduced but would no doubt have to compact various routines/subroutines like a zip file. these could be fed into the program core leaving a file bearing the information to storage this is

to be used by the main program/data storage device.This would require large amounts of memory to hold the various layers

and be able to manipulate said images up and down through the produced layers.

So what you have is a non-Photographic process being produced from data and the relation of the known being tested on the unknown ?.

Ian

p.s. the name of the software was NIH image unfortunately my old Mac plus is a museum peice on some execs desk in Canary wharf but I will try and find an emulator and the software for my current eMac.

...ouch! This is way over my head...

I think what you describe, in terms that I do not fully comprehend, can also be achieved with less powerful tools, in the very simple manner I have described: it certainly takes more time, but it does not require an inordinate amount of memory or storage, nor fast processors. You don't need to manipulate massive layers of data in complex ways either; you just need to add, subtract, etc., one data content relative to the other, and save the result. That is not a complex task for a PC. Most versions produced are in the 100-300 KB range...

"So what you have is a non-Photographic process being produced from data and the relation of the known being tested on the unknown ?".

Not exactly: what we will be doing is calculating average values for the "unknown" (based on statistical resilience), using the known as a reference point...
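As an illustration of what "statistical resilience" with the known as a reference point might look like in practice, here is a hedged numpy sketch: stack the original together with all the processed readings, keep the pixels where the readings agree, and play down those where they scatter. The file pattern, the grey-level threshold and the assumption that all versions are the same size are mine, not part of the described method.

```python
import glob
import numpy as np
from PIL import Image

# Stack the original ("the known") together with every processed reading.
paths = sorted(glob.glob("Betzner Black Dog Man from Trask *.png"))
stack = np.stack([np.array(Image.open(p).convert("L"), dtype=np.float32)
                  for p in paths])

mean = stack.mean(axis=0)     # the "average value" of each pixel
spread = stack.std(axis=0)    # how much the readings disagree

# A crude resilience map: pixels where the readings agree closely are kept
# at their average value, pixels where they scatter widely (likely noise)
# are played down. The threshold of 20 grey levels is only an assumption.
resilient = np.where(spread < 20, mean, mean * 0.25)

Image.fromarray(np.clip(resilient, 0, 255).astype(np.uint8), mode="L").save("resilience_map.png")
```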

Edited by Christian Frantz Toussay

Okay Thanks Christian,

I understand much better now. I agree with 99% of the operational theory.

I am asking for the 'sponge' to be wrung out completely then filter the colorization of some info back down to a more perceivable level.

When the 'sponge' gets wrung out everything in a sense gets wet with all the info.

I would like to see a refinement to this. If its even possible.

It's like the orchestra all tuning their instruments at once (all the info from each musician). If we could just 'hear' the woodwinds, or just the percussion, it would clear the air so their individual notes hit my ear... so to speak.

You are the conductor, the symphony has already been written.

I'll examine this closer. Hope a sonata is in the works.

Ed


...I am posting below a composite of 3 images of Black Dog Man (from Betzner, Willis 5 and Bond 4, all already posted here), to verify how they correlate.

Logically, if they are the same person, they should share at least some common characteristics, regardless of the quality or definition of each individual frame, or the individual's movements or attitude.

So let's see what recurring information can be found in those 3 frames, taken at different times from different points of view, and of course using different cameras and films.

I will be using close-ups for this, so that the optical-illusion argument can once again be addressed at the same time: it is one thing to find ONE intriguing image in ONE picture; it is another thing to discover THREE (actually, there are dozens of images of BDM in the Nix film...) intriguing, coherent and correlated images in THREE different sources.

When the images are themselves clear enough to reveal minute facial details that can be compared, as below, I think we are making some progress...

BlackDogManTripleCompositeCloseup2011Legend2.jpg

...so what we have here is:

*a corroboration of BDM's presence (nothing new, the HSCA established it in 1978)

*but, more important, some serious clues about his identity:

-corroboration of dark blue clothing and a cap, i.e. a uniform similar to those of the DPD

-corroboration of the man being a young white male

Note also how the movements of the man are consistent with the events taking place:

-Betzner and Willis show him at the very beginning of the shooting, with the motorcade coming straight at him: he is looking straight ahead at the approaching vehicles

-Bond 4 is just a few seconds after the last shot: the motorcade has passed his point of view and he is now looking sideways

So I would say there is a lot of internal cohesion (location, clothing, facial features, movements relative to the event) between those 3 sets of data, which I think cannot be explained away easily....

Do we have any discontinuities (info not present in all 3 pictures observed)?

Yes, we have one: in Betzner and Willis, it would seem that BDM is holding a long object (presumably a rifle barrel) in his right hand, at an angle to his body.

In Bond, it seems that he is now holding this same object vertically in his left hand.

This difference, of course, is easily accounted for if we assume that the man, presumably, has moved a little during the shooting...

Edited by Christian Frantz Toussay

"Okay Thanks Christian,

I understand much better now. I agree with 99% of the operational theory.

I am asking for the 'sponge' to be wrung out completely then filter the colorization of some info back down to a more perceivable level.

When the 'sponge' gets wrung out everything in a sense gets wet with all the info.

I would like to see a refinement to this. If its even possible."

...I think this is perfectly feasible with smoothing algorithms and filters: basically, my stance is that I do not like to interfere radically with the data by injecting unrelated information into the finite data set (as you have understood, I only inject processed interpretations of the same data into the overall process flow).

But I think anyone here mastering the classic optical tools for photo enhancement could produce what you are asking for: a "smoothed out" version still preserving the new data extracted but without too much "degrading" of the surrounding information.

I am confident that somebody like Duke could work on the Bond 4 image, for instance, and produce what you want easily...

As for me, since I do not work with high-tech equipment and do not master the techniques involved, my own clumsy way of doing it is simply to periodically reinject the original data set into the workflow: if you did as I suggested with the Bond 4 picture (fusing it with the unprocessed original), you saw what I mean by "averaging pixel values"...
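A small sketch of what such a "smoothed-out, fused with the original" version could look like with standard filters; the file names and the blur radius are placeholders, not part of the method described.

```python
from PIL import Image, ImageFilter

processed = Image.open("Bond4_reading_12.png").convert("RGB")  # a late processed reading (placeholder name)
original  = Image.open("Bond4_original.png").convert("RGB")    # the unprocessed original, same size

# Smooth out the harsher artefacts of the processing...
smoothed = processed.filter(ImageFilter.GaussianBlur(radius=1.5))

# ...then "fuse" the result with the untouched original, which is the
# pixel-value averaging described above.
fused = Image.blend(smoothed, original, 0.5)
fused.save("Bond4_smoothed_fused.png")
```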

"Its like the orchestra all tuning their instruments at once (all info from each musician) If we could just 'hear' the woodwinds or just the percussion it would clear the air so their individual notes hit my ear...so to speak."

...a perfectly valid comparison: and since we DO know that audio equipment exists that allows for filtering different sound frequencies, I would be much surprised if we did not have the same options in the field of optics. Maybe Ian can help us with that....
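The optical counterpart of filtering sound frequencies is filtering spatial frequencies, and that can indeed be done with ordinary tools. A minimal sketch with numpy's FFT, keeping only the coarse "bass" of an image; the cutoff radius is an arbitrary assumption.

```python
import numpy as np
from PIL import Image

img = np.array(Image.open("source_frame.png").convert("L"), dtype=np.float32)

# Move to the frequency domain (the optical counterpart of an audio spectrum).
spectrum = np.fft.fftshift(np.fft.fft2(img))

# Build a circular low-pass mask: keep the coarse structure, drop the fine
# "treble". The cutoff radius of 40 frequency bins is an arbitrary choice.
rows, cols = img.shape
y, x = np.ogrid[:rows, :cols]
mask = np.sqrt((y - rows / 2) ** 2 + (x - cols / 2) ** 2) <= 40

filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real
Image.fromarray(np.clip(filtered, 0, 255).astype(np.uint8), mode="L").save("low_pass.png")
```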

"You are the conductor, the symphony has already been written."

...no, I am not the conductor: I am only the guy who found a long-lost key to the score. The fact that I did does not, in my opinion, bestow on me a soloist's role: I am sure the orchestra here will produce a much better and more powerful rendition than I'll ever be capable of with my limited resources... B)

"

I'll examine this closer. Hope a sonata is in the works."

...I will keep documenting this research here: thanks for your interest, and remember: my complete files are available to you if you need them... B)

Ed


"Okay Thanks Christian,

I understand much better now. I agree with 99% of the operational theory.

I am asking for the 'sponge' to be wrung out completely then filter the colorization of some info back down to a more perceivable level.

When the 'sponge' gets wrung out everything in a sense gets wet with all the info.

I would like to see a refinement to this. If its even possible."

..I think this is perfectly feasable with smoothing algorhytms and filters: basically my stance on this is I do not like to interfer radicaly with the data by injecting unrelated information into the finite data set (I am injecting only, as you have understood, processed interpretations of the same data base into the overall process flow).

But I think anyone here mastering the classic optical tools for photo enhancement could produce what you are asking for: a "smoothed out" version still preserving the new data extracted but without too much "degrading" of the surrounding information.

I am confident that somebody like Duke could work on the Bond 4 image, for instance, and produce what you want easily...

As for me, since I do not work with high tech material and do not master the techniques involved, my own clumsy way of doing it is simply to reinject periodically the original data set into the workflow: if you did like I suggested with the Bond 4 picture (fusionning it with the unprocessed original), you saw what I mean by "averaging pixel value"...

"Its like the orchestra all tuning their instruments at once (all info from each musician) If we could just 'hear' the woodwinds or just the percussion it would clear the air so their individual notes hit my ear...so to speak."

...a perfectly valid comparison: and since we Do know that audio equipment exists that allow for filtering different sound frequencies, I would be much supprised if we would not have the same options in the field of optics. May be Ian can help us on that....

"You are the conductor, the symphony has already been written."

...no, I am not the conductor: I am only the guy who found a long-lost key to the partition. The fact that I did do not bestow on me, in my opinion, a solist role: I am sure the orchestra here will do a much better and powerfull rendition that I'll ever be capable of with my limited resources... B)

"

I'll examine this closer. Hope a sonata is in the works."

... I will keep documenting this research here: thanks for yr interest, and remember: my whole files are avaible to you if you need... B)

Ed

Christian,

Don't be surprised if you start seeing some "men in gray suits" hanging around your flat and place of work. LOL(?)...

--Tommy :ph34r:

Edited by Thomas Graves

...I hope I am not belaboring the point, but I believe it is important, before moving on to more controversial matters, to establish the process's mechanism and validity once again, still using an undisputed picture.

I have previously posted a composite illustrating the processing of Bond 4, which shows clearly, I think, that the process is a mere iteration of added layers of the same information processed differently, then viewed together as a single transparency frame: this will strengthen resilient information (by raising its average pixel value, if you will) and, on the other hand, diminish less resilient information (such as noise, which by definition will not be present in ALL "readings", since it does not pertain to objective information actually present on the support).
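Read literally, "layers of the same information viewed together as a single transparency frame" amounts to averaging a stack of differently processed readings, which is easy to sketch; the file pattern is a placeholder, and all readings are assumed to be the same size.

```python
import glob
import numpy as np
from PIL import Image

# Every differently processed "reading" of the same frame, all the same size.
paths = sorted(glob.glob("Bond4_reading_*.png"))
stack = np.stack([np.array(Image.open(p).convert("RGB"), dtype=np.float32)
                  for p in paths])

# Viewing the layers as a single transparency frame amounts to averaging:
# detail present in most readings reinforces itself, uncorrelated noise does not.
composite = stack.mean(axis=0)

Image.fromarray(composite.astype(np.uint8)).save("Bond4_composite.png")
```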

So the process could be compared to a market survey (in which you want to understand the objective meaning of a finite set of data, just as in a picture), where instead of interrogating ONE customer in detail (applying classic optical methods to the original picture), you would have the opportunity to interrogate 246 of them (in the Betzner case...), before analyzing how they correlate and differ on such and such a topic.

You do not have to be an expert in the field to understand that the more "points of view" you have on identical data, the richer the understanding you get of the overall meaning of the information collected...

This is basically what is at work here...

BetznerBDMProcessIllustrationLegend2011.jpg

Edited by Christian Frantz Toussay
