
User:DeepBlueDiamond


Introduction:

Engineer in electrical engineering and electronics, University of Darmstadt, Germany (main focus: physics), 1981. System administrator and assistant to the chief until the end of 1992. From 1993, independent promotion of my last patent (stopped by a bad accident in 1994). I am not a native English speaker. I begin with:


Publications:

1. Neue Zürcher Zeitung (NZZ, Switzerland), 12 March 1975; translated title: Analysis and Synthesis of Speech.

Introduction: The article begins with the analysis of written words and letters to show how a signal analysis works. The spaces between printed words, before any spelling is attempted, mark the boundaries for a raw computer analysis. A good idea was to first take the image of a sentence (seen optically) out of focus, slightly blurred, so that only the word boundaries remain visible. The image is then sharpened step by step to "see" the smaller gaps that delimit the individual characters. Finally, the sharply resolved characters can be identified by searching for so-called "characteristic points" (extrema, crossing points, edges, etc.) and analysing the straight or curved strokes between them.
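
For illustration only, a minimal Python sketch of this coarse-to-fine idea (not the original 1975 program; the function name, the toy profile and the blur/threshold values are my assumptions): the text line is projected onto a horizontal ink-density profile, a strong blur reveals only the wide word gaps, and a sharper view also reveals the narrow character gaps.

  import numpy as np

  def find_gaps(profile, blur, threshold):
      """Smooth the ink-density profile with a moving average of width
      `blur`, then return the runs where the smoothed density falls
      below `threshold` -- the candidate gaps. A strong blur keeps only
      the wide word gaps; a sharp view also exposes character gaps."""
      kernel = np.ones(blur) / blur
      smooth = np.convolve(profile, kernel, mode="same")
      below = smooth < threshold
      gaps, start = [], None
      for i, b in enumerate(below):
          if b and start is None:
              start = i
          elif not b and start is not None:
              gaps.append((start, i))
              start = None
      if start is not None:
          gaps.append((start, len(below)))
      return gaps

  # toy profile: ink per pixel column of one scanned text line
  profile = np.array([0, 0, 5, 6, 5, 0, 4, 5, 0, 0, 0, 0, 6, 7, 0, 5, 4, 0, 0], float)
  word_gaps = find_gaps(profile, blur=5, threshold=1.5)   # blurred view: words
  char_gaps = find_gaps(profile, blur=1, threshold=0.5)   # sharp view: characters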

Then my speech analysis was described: it abandoned (according to experts, for the first time) the constant intervals (mostly 80 ms) that had until then been used for Fourier analysis. Instead, my analyser first took the derivative D of the signal. The envelope of D alone already indicates what I called "crucial changes" of the original signal O, using only the D/O relation. In this way natural segments were obtained instead of arbitrary cuts. Prof. Zemanek of IBM Vienna praised the publication (in a letter dated 1975-06-06, translated) as an "excellent work" (IBM remained a leader in the field until the 1990s).
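
A minimal sketch of how such a D/O segmenter might look, under my reading of the text (the envelope width and the threshold are assumptions): the envelopes of the derivative D and of the original O are compared, and samples where the D/O ratio is large are taken as natural segment boundaries.

  import numpy as np

  def natural_segments(signal, win=32, ratio_threshold=0.5):
      """Find 'crucial changes': points where the envelope of the
      derivative D, relative to the envelope of the original O,
      exceeds a threshold. Returns candidate cut indices, giving
      natural segments instead of fixed (e.g. 80 ms) windows."""
      d = np.diff(signal, prepend=signal[0])                    # derivative D
      kernel = np.ones(win) / win
      env_d = np.convolve(np.abs(d), kernel, mode="same")       # envelope of D
      env_o = np.convolve(np.abs(signal), kernel, mode="same")  # envelope of O
      ratio = env_d / np.maximum(env_o, 1e-9)                   # the D/O relation
      return np.where(ratio > ratio_threshold)[0]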


2. Co-writer (with my father) of the Handbook of International Alloy Compositions and Designations, 1980. A tedious piece of research: for many years we mainly tried to obtain information from all relevant countries in order to publish it (imagine the incredible problems of getting any information at all from Eastern-bloc countries at that time). Five years was too long for a book consisting of pure tables.


3. ntzArchiv, Vol. 11/1989, No. 3: Wissensbasierte Analyse und Codierung mit bereichsprädiktiver Codemodulation (translated: Knowledge-based analysis and coding with range-predictive code modulation; based on one of my patents: a generalised code modulation method with domain prediction for a multiplicity of signals).

It was hard work to leave behind the conventional view of picture analysis, which at that time was mainly DPCM-based. A very late but striking example of the basic idea, used to convince the normally conservative (stubborn?) "scientific world", was the most extreme case: consider a constant plane within a picture. Only its minimum needs to be coded for the whole plane to be represented completely; no further PCM code per point is required. E.g. MIN as a single constant suffices for the (meanwhile standard) 8x8 points.

In all other "normal" cases, minimum and maximum limit the variance of a set of points. In fact, the minimum together with a (directed, usually already reduced) range upwards suffices to bound the values of a normal plane by its natural range. The analysis can be done for a block of e.g. 8x8 pixels of a picture (or even an 8x8x8 block of a movie sequence). These natural limits of the range allow an exact but bit-reduced coding: knowing the range within a "domain of points" of a 2D or 3D space yields a code length that is already naturally limited by the minimum and maximum. This can be exploited: the codes remain exact PCM codes, but shortened to the block's natural range, saving up to about 50% for exact coding of a picture.
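
A minimal sketch of this per-block range coding as I read it (the function name and the 8-bit header for the minimum are assumptions): the block minimum is stored once and every pixel is coded as its offset from the minimum, with only as many bits as the natural range Max - Min requires. The constant plane of the example above is the extreme case span = 0, where the minimum alone suffices.

  import math
  import numpy as np

  def range_code_block(block):
      """Code a pixel block exactly but bit-reduced: one minimum per
      block, then per-pixel offsets using only enough bits to cover
      the natural range Max - Min. A constant plane needs 0 offset
      bits: the minimum alone codes it completely."""
      lo, hi = int(block.min()), int(block.max())
      span = hi - lo
      bits = 0 if span == 0 else math.ceil(math.log2(span + 1))
      offsets = (block - lo).astype(np.uint16)    # exact values, reduced width
      return lo, bits, offsets

  block = np.random.randint(100, 140, size=(8, 8))   # low-variance 8x8 block
  lo, bits, offsets = range_code_block(block)
  print(8 + 64 * bits, "bits instead of", 64 * 8)    # header + offsets vs plain PCM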

3D PICTURE ANALYSIS AND CODING: A very interesting aspect is a hierarchical bottom-up one-bit coding. A single TRUE/FALSE bit states for a certain sub-quantity of pixels whether its bit range (given by its limits Max - Min) can be reduced. This is done for every sub-quantity (in 2D: the 4 sub-planes of the original domain). When such a plane is divided (e.g. 8x8 into 4 planes of 4x4, those into 4 planes of 2x2, those into 4 "planes" of 1x1, i.e. 4 single points), each sub-plane can be coded with one bit less whenever its TRUE/FALSE bit indicates, bottom-up, that its whole sub-quantity of points can be coded with a one-bit range reduction. If this holds for a sub-plane, a further TRUE bit can reduce the coding of that quantity once again.
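
The original bitstream layout is not given here, so the following Python sketch is only one plausible reading of the scheme (the flag semantics and all names are my assumptions): the plane is coded relative to its minimum, each sub-plane carries one TRUE/FALSE bit, TRUE means all of its values fit into one bit less than the parent's code width, and the reduction recurses through the 4x4, 2x2 and 1x1 sub-planes.

  import numpy as np

  def code_quadtree(block, parent_bits):
      """Emit (flag, payload) for a square block of non-negative
      offsets. flag=1: every value fits into parent_bits - 1 bits,
      so the code width shrinks by one bit for everything below;
      flag=0: the width is kept. Leaves emit (value, width) pairs."""
      reduced = int(block.max()) < 2 ** (parent_bits - 1)
      bits = parent_bits - 1 if reduced else parent_bits
      out = [int(reduced)]                        # the one TRUE/FALSE bit
      n = block.shape[0]
      if n == 1:
          out.append((int(block[0, 0]), bits))    # single point, `bits` wide
          return out
      h = n // 2
      for sub in (block[:h, :h], block[:h, h:], block[h:, :h], block[h:, h:]):
          out.extend(code_quadtree(sub, bits))    # the 4 sub-planes
      return out

  plane = np.random.randint(0, 16, size=(8, 8))
  offsets = plane - plane.min()                   # code relative to the minimum
  stream = code_quadtree(offsets, parent_bits=8)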

Used more sophisticatedly: in such an analysis of curvature sets within a picture sequence, crucial differences represent edges. The edges can mostly be seen as deformed filled tubes, as "quasi-bodies" of the sequence, and are defined by their differentials. Applied to a 3D sequence of pictures, this also yields predictions for the near future (e.g. by applying Newtonian mechanics to the movements). Such predictions are very important not only for astronomy but also for military uses, e.g. in order to predict the target of a future movement.
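
As a worked illustration of the prediction step (my own toy example, not from the paper): a significant point tracked through three frames can be extrapolated one frame ahead with simple Newtonian kinematics, using finite differences for velocity and acceleration.

  import numpy as np

  def predict_next(positions):
      """Extrapolate a tracked point one frame ahead: velocity and
      acceleration are estimated by finite differences from the last
      three observed positions, then applied Newton-style."""
      p0, p1, p2 = positions[-3], positions[-2], positions[-1]
      v = p2 - p1                    # velocity per frame
      a = (p2 - p1) - (p1 - p0)      # acceleration per frame^2
      return p2 + v + 0.5 * a

  track = np.array([[0.0, 0.0], [1.0, 0.5], [2.1, 1.1]])   # point over 3 frames
  print(predict_next(track))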

EXPLANATION: Please imagine taking a transparent film sequence and stacking all the single pictures of the movie one on top of the other. This forms a transparent tower. The sequence of pictures within it represents, by its shape, the original movie. When we see something moving on a screen, we quasi "move through the tower". Edges of objects, formed by differences (called differentials here) across the sequence, are a kind of deformed tube body formed within the transparent pictures. A few significant points (formed by differentials significant in more than one direction), now seen as running along the sequence and represented by significant edges of the "deforming tubes", are taken to simplify all calculations crucially.

The basic invention was at first utilized only as a pre-processor, to reduce data streams by range reduction. After I had given up, it was used by SONY in digital consumer video cameras, there called "hierarchical range adaptive coding". A main problem remains: a conservatively trained "scientific world" cannot understand a new idea from an underdog, and even demands conservative terminology for its own understanding. E.g.: while I myself used (impertinently?) the word PREDICTION in the sense of any kind of pre-coding, conservative experts mostly (and completely falsely) understood it in the narrow sense of an ESTIMATION of the future, as in DPCM.


4. SPACE FIDELITY, taken up in 1993, manufactured in 1995 by the GRUNDIG corporation: A stereo signal is characterized by time differences and loudness differences. In my time the scientific acoustic world seriously held that only the loudness differences create stereo effects; it largely still refused to accept that the ears also perceive time differences. Thus, by conventional theory, it seemed absurd to try to convince "the experts" of a different reality. I had to go very far to demonstrate that the effect is real and audible, depending on the manner of the stereo recording, as an effect partly incredible to the experts: loudspeakers driven with real-time-delayed stereo signals can set up longitudinal waves along a damped tube, even at its far end, with partly very astonishing effects. A "ground effect" was already known before: a 3D sound produces "plane waves" with a very spatial effect along the flat floor of churches. Merely trying to describe it like this (as an underdog, for "the experts") was a disastrous attempt. Written as the experts finally demanded, it became more confusing than explanatory. While the experts then believed they had understood the effect well enough, e.g. for a patent, I no longer really understood it myself. The patent already included digitally produced (natural or artificial) time delays, e.g. by register shifting. This was used one year later by Philips (e.g. "Incredible Surround") and has been used since 1999 in the well-known "Dolby Digital Surround".
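
A minimal sketch of the register-shifting idea as I understand it (the delay length, sampling rate and names are assumptions): one channel of a stereo pair is passed through a short digital delay line, which produces the inter-channel time difference.

  import numpy as np

  def delay_channel(stereo, delay_samples):
      """Apply a digital delay line (a shift register of
      `delay_samples` stages) to the right channel only, creating an
      inter-channel time difference. `stereo` has shape (n, 2)."""
      left, right = stereo[:, 0], stereo[:, 1]
      delayed = np.concatenate([np.zeros(delay_samples), right])[:len(right)]
      return np.stack([left, delayed], axis=1)

  fs = 44100
  t = np.arange(fs) / fs
  tone = np.sin(2 * np.pi * 440 * t)
  stereo = np.stack([tone, tone], axis=1)
  out = delay_channel(stereo, delay_samples=22)   # about 0.5 ms at 44.1 kHz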


5. MY PERSONAL RESULT: EXPERTS CAN PREVENT THE FUTURE

Papers about something crucially new, e.g. for patents, must always first correctly describe what is well known, in order to show the difference; the new things then have to be described, at least in well-known words, as that difference. But how can a visible, audible or otherwise perceived effect of a "thing" be described if the mainstream theory does not support it, or if its physics can only be estimated, while distant experts simply say: this cannot be!?

If something is really new, there is a crucial problem of mutual understanding through words, mostly due to pure ignorance. Especially the highest experts, trained in conventional or mainstream thinking, are often unable to understand a substantially or even completely new idea (see how many years Einstein needed to convince the world).

Thus the highest experts, considered the sole competent authority, and their conventional or "mainstream" opinion can become a crucial obstacle for the theories of (as underdogs regarded) "dissidents". If others take the object and use it commercially, describing the same thing with different words, with a company merely showing that it works, the ice suddenly breaks. Publication then causes almost no more problems, but very or too late.

The conventional or "mainstream" world mainly wants to force different (new) ideas into "its" well-known clothes and to have them described with its conventional words, while really new ideas sometimes simply need new words, e.g. well defined in one or more sentences in an index, convincing like Einstein's books. But this is not the generally, rigidly demanded manner.

Einstein was the best-known example: he had to write many books to explain only a few words of his ideas, in order to finally convince the scientific world and the public, over decades.

But the computer inventor Zuse was THE typical bad example of such crucial failures by "the experts": his first invention of a computer was not patented because the experts simply did not understand Zuse, his good idea or his description, especially its basic computer logic realized by relay switches.

Thus the normally too rigid conventional handling by conventions or mainstream (e.g. the rigid rules for patents, especially when the subject is not only an apparatus but a basic idea for very much more) simply cannot be applied to crucially new general ideas. Conventions force single inventors of ideas (especially those not supported by a company) to adopt conventional words in the end. That finally confuses everyone more than it helps. I myself was forced to deform my ideas with new words that I at first did not understand at all (I defined them in annexes). The results were so bad that the final description of the idea itself was crippled by others' conventions. In the end I could no longer understand it myself, but others claimed they had...


MY OPINION: Beware of experts educated only within their mainstream, and of its bad result: students left uneducated in alternatives, and thereby no longer mentally open, can prevent even good and necessary future inventions...

SUGGESTION: In the time of the Internet, crucial unknown or new effects of an object can be demonstrated to others more and more "easily" online. Patent law should today allow a new manner of the (in patent practice already existing, so-called) "continuation in part":

  • A patent could begin with a seriously demonstrated new effect of an object, shown online.
  • The "scientific world" could then be involved in describing the effect, without the inventor losing priority.
  • Written descriptions could then be made, even from different points of view.
  • Thus inventors could obtain patents even if the principles are understood neither by the inventor nor by the experts.


I was impeded by an undeserved accident in March 1994. Hobbies: astronomy, chess, fairness.

Begin: DeepBlueDiamond 18:11, 8 August 2007 (UTC)