
Technology or Physics?

 A Seminar for University of El Paso, Texas

By Armando Rodriguez, President of Include Software Consulting Inc.

Abstract

For those who have chosen a techno-science career, there's still this dilemma of choosing between taking more directly usable technology subjects or more basic math and science ones. It's nothing that can be answered lightly, for if you choose only the latter, you will graduate as a good learner but will fail to show any readily usable skills. On the other hand, many employers offer on-the-job training for technology issues that relate directly to their business, but such training for basic subjects is unheard of. Also, it is more likely that we study something technical by ourselves than that we go into something like Green's functions, and even more unlikely if we have never heard of them.

This gets even more confusing when subjects are neither purely basic nor purely technical; here is something I have found to be true in my already long career:

·         The more readily applicable the subject, the narrower its areas of applicability  and the shorter its term of usefulness.
·         The more basic the subject, the broader its areas of applicability and the longer its term of usefulness.

There's no mathematical proof of the above, but some stories might help make the point. Here's a short one…

It was early 1993; I was still a jobless refugee when I was invited to a dinner with Professor Eugenio Fumi. He was the chairman of ITELCO, an Italian company that manufactured TV and FM transmitters. The company was opening a new branch in the USA and was in the process of creating its staff. My role at that dinner was helping a friend of mine with the language barrier in his final interview with the big boss.
Professor Fumi was a VIP in the industry for having invented a sort of IOT, or Inductive Output Tube. During that dinner, he mentioned that ITELCO transmitters featured these devices and explained in very few words what his invention was about: electron radiation in a retarding field, a resonant cavity and so on. I knew next to nothing about transmitters in the TV/FM band, but my background in electrodynamics allowed me to follow his short explanation with some questions that, without that being my intention, sent across that table a clear message that I had understood what he was talking about. At some point, the professor turned to the debuting CEO of ITELCO USA with this short phrase... "Hire him".
This was my first job in the United States.

In this seminar I will tell more stories about projects I've worked on in which math and physics knowledge have been paramount. Yet it is not just the importance this kind of knowledge has had for my competitiveness, but that it has allowed me to enjoy my working life. Once, a client was being shown around the premises of the company I was working for. He was walked into my office and I began explaining the project I was working on, when he said "Oh, a fun job". That comment has stayed with me ever since because, in fact, most of the jobs my career choice has provided me with have been... "fun jobs".

Introduction

In 1967, I was a student going through the second year of an engineering program, but at the time I was more interested in the "why's" of everything than in the "how's". Math and basic science subjects were almost over and were being replaced by the more "how" type engineering subjects before I could answer most of my "why" kind of questions. For instance, I remember this course about transformers. It was all about iron volumes, stacking factors, copper wire tables… without my understanding such basic things as why the primary and secondary voltages and currents went only with the turns ratio, or why I was not seeing any mutual inductances and time derivatives around anymore. Those questions were not within the scope of what was being taught… for designing transformers, all you needed were those tables.

 

One day, by chance, I learned that there was this other career choice, Physics, where it was all about math, thermodynamics, electromagnetism, relativity, quantum mechanics, the works! Wow, I figured I had been in the wrong school, yet friends and family kept telling me that all a physics graduate was good for was teaching. They were wrong, but my crystal ball was way off too. At the time I changed to physics, I pictured my future as a finder of the why's of nature, yet my life has been all about the "how's". However, I think the training I got from studying physics prepared me well for the kind of world I was bound to deal with.

 

In the early 70's I read this paper on science teaching which said that a professional was likely to change his or her line of work every 10 to 15 years. In these 50 years I have found that that period may have been shrinking. Here's how it has been for me:

 

·         60's – Radars: repair and maintenance (Army, the wrong one)
·         70's – Solid state physics, silicon planar technology: teaching and research (University of Havana)
·         80's – Microprocessors, PCs, image processing, robotics: Big Bossing, C programming for DOS, electronics, mechanics (EICISOFT)
·         93 to 97 – VCRs, video, broadcasting: software development in C for Windows (JVC and ITELCO)
·         97 to 2002 – Digital telephony: company owner, C++ software development, electronics (The Box and #include Software Consulting Inc.)
·         2003 to 2005 – 3D image processing and vector graphics: C++ software, math (Deneba/ACD)
·         2003 to present – FOREX (Foreign Currency Exchange): VB.NET, SQL, statistics, Brownian motion (L3 Capital)
·         2005 to date – Digital signal processors for aerospace: C++ embedded, C#, SQL, microwave, general physics... (ATG, Advanced Technical Group Inc.)

 

It has not been just me; I have seen friends and colleagues go through similar changes as well. Causes for the changes? Some were political; some were just business (as in "sorry, nothing personal…"). Others were simply caused by the development of technology, which will, time and again, make obsolete many a hard-learned skill.

 

Just for fun, let me list a few in the field of electronics. In the 60's I devoted lots of time and effort to learning the skills around electron tube circuits; by the end of the decade, transistors had made most of those skills useless. So I learned transistor circuit design through the mass murder of hundreds of germanium, and later on silicon, transistors. By the mid 70's, integrated circuits changed the way to approach circuit design, so it was time for killing OpAmps and other ICs. Late into the 70's, digital technology had made most of my analog circuit design skills unnecessary. In the 80's, hard-wired logic circuitry was being replaced by microprocessors and FPGAs, so my skills in designing with TTL or CMOS ICs lost most of their value. By the 90's, skills in designing printed circuit boards became useless; ORCAD and a bunch of other tools did that job way more efficiently. Finally, with the new century, the Internet tsunami arrived, turning everything upside down.

 

I have made fun of all this with friends who went to engineering schools, and we have laughed at some of the subjects that were taught back then. Things like:

However, most of the skills I learned back in the School of Physics have remained valid through all the mentioned changes. I can name a few concepts, introduced in subjects taught in the School of Physics, that time and again have allowed me to be the one who could answer a question or get a job done:

The list could go on, but there is a part of the physics background that cannot be named or listed as easily. Let me put it this way: when going through a physics curriculum, you are introduced to concepts so abstract and difficult to grasp that you are no longer surprised by the problems life throws at you. You develop an attitude towards any new challenge like… if I made it through quantum mechanics, then I can handle this.

 

I must admit that my main source of income since the 80’s has been related to some form of software development and back in the 60’s, computers were not even mentioned in a physics curriculum. Yet, the projects I’ve been involved with have always been related to my physics background in one way or another.

 

Most software out there is developed for user interfaces, web pages or small management applications; all the background you need for that is a computer language and some OS knowledge. There is no such thing as five years' experience in any of that; new development environments emerge every 2 or 3 years, making the old ways obsolete in a very short term. There are smart kids out there who could run circles around me doing any of that and, since they probably still live with their parents, they can do the job for a fraction of what I would need to charge. Then how come a guy like me is still in business? Well… fortunately there is also a demand for software development requiring some culture in physics, technology or heavy math; the kind of skills that, no matter how smart, a kid cannot pick up by playing around with computers. Allow me to show these general statements in a few particular examples.

 

The 3D Adventure

It was 2003; half of my digital telephony business didn't survive the ".COM" bubble burst, and the other half was slaughtered by the Internet's VoIP. It was not that my products didn't evolve with technology; actually, I even developed a VoIP product. The problem was that my clients, the small international traffic operators, didn't survive the falling rates. I could no longer find anyone to buy these products.

 

After five years of company success I thought I would never be back on the job market, but I was wrong. At some point I realized that I had to bring my skills to a marketable level. My products were developed in C++ with Microsoft's Visual Studio V6 and earlier; Visual Studio .NET was already out there and I knew nothing about C#, ASP.NET and the sort. I feared that a resume without those tools wouldn't even be looked at, so I went into a C# studying frenzy. Being a hands-on guy, the kind that only learns by doing something, and after a suggestion from a former classmate of mine, today Dr. Fuentes, a VIP in crystallophysics, I took on the endeavor of developing an application for visualizing spherical harmonics.

[Figure: a spherical harmonic (Y23) rendered by the application]
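For the curious, here is a minimal sketch of the kind of math the application had to evaluate: a real spherical harmonic Y_l^m(θ, φ) built from an associated Legendre recurrence. This is not the original C# application; it is an illustrative C++ version I wrote for this text, and the function names are mine.

```cpp
#include <cmath>
#include <cstdio>

constexpr double kPi = 3.14159265358979323846;

// Associated Legendre polynomial P_l^m(x) for 0 <= m <= l and |x| <= 1,
// computed with the standard upward recurrence (Condon-Shortley phase omitted).
double assocLegendre(int l, int m, double x) {
    double pmm = 1.0;                                  // P_m^m
    double s = std::sqrt((1.0 - x) * (1.0 + x));       // sin(theta)
    double fact = 1.0;
    for (int i = 1; i <= m; ++i) { pmm *= fact * s; fact += 2.0; }
    if (l == m) return pmm;
    double pmm1 = x * (2.0 * m + 1.0) * pmm;           // P_{m+1}^m
    if (l == m + 1) return pmm1;
    double pll = 0.0;
    for (int ll = m + 2; ll <= l; ++ll) {              // climb in l
        pll = ((2.0 * ll - 1.0) * x * pmm1 - (ll + m - 1.0) * pmm) / (ll - m);
        pmm = pmm1;
        pmm1 = pll;
    }
    return pll;
}

// Real spherical harmonic Y_l^m(theta, phi) for m >= 0, normalized over the sphere.
double realSphericalHarmonic(int l, int m, double theta, double phi) {
    double norm = std::sqrt((2.0 * l + 1.0) / (4.0 * kPi) *
                            std::tgamma(l - m + 1.0) / std::tgamma(l + m + 1.0));
    double p = assocLegendre(l, m, std::cos(theta));
    return (m == 0) ? norm * p : std::sqrt(2.0) * norm * p * std::cos(m * phi);
}

int main() {
    // Sample Y_3^2 on a coarse grid; a viewer would plot r = |Y| as a 3D surface.
    for (double theta = 0.0; theta <= kPi; theta += kPi / 8)
        for (double phi = 0.0; phi < 2.0 * kPi; phi += kPi / 4)
            std::printf("theta=%5.2f phi=%5.2f Y=% .4f\n",
                        theta, phi, realSphericalHarmonic(3, 2, theta, phi));
    return 0;
}
```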

 

When the app was ready, I uploaded it to my web site (and also included a link to it in my resume: http://includesoft.com/include/complimentary.htm) as an example of my C# "mastery". A few days after that, I received a call from the CTO of Deneba Systems Inc., the company that developed CANVAS (a kind of cross between Photoshop and AutoCAD). The funny thing was that, since only C++ was used there, they were not interested in my C# at all, but in the 3D geometry skills from my physics background. The company intended to add 3D capabilities to their new CANVAS X scientific release, and I was put in charge of the two aspects of that project:

·         3D image processing and

·         3D vector drawing.

Yet none of the developments I'm about to show was ever released. As I was being hired by Deneba, the company merged with ACD Systems, the developers of ACDSee, the first photo organizer ever to hit the market. The purpose of ACD acquiring Deneba was to add image editing capabilities to ACDSee, since competition was already out there from Picasa, Flickr, etc. ACD executives were not interested in developing CANVAS any further and fought their Deneba counterparts over the issue until, after a year of work on the scientific release, they finally bought Deneba's remaining part; the project was canceled and I was laid off. But that's not the end of the story: a few months later, the new release of ACDSee was not selling that well and the company was surviving thanks to CANVAS' faithful clientele. At that point ACD tried to hire me back but… I was not interested, not even for twice the salary.

Stacks in 3D

 

After a year's work, the new CANVAS X 3D feature could display MRIs and CT scans in motion, and it could do arbitrary sectioning and organ/tumor isolation.

CT (Computed Tomography) or MRI (Magnetic Resonance Imaging) generates what is known as a "stack". This stack is a set of evenly spaced slices of a 3-dimensional object. A few slices from a CT scan can be seen below; to the right is a graphic explanation of what the stack is.

[Figures: stack explanation; stack example]

This allows studying the above set of 2D images as a 3-dimensional one, as shown below (more 3D animations):

[Figure: 3-D image from a stack]

[Figure: the Chinese lamp technique for obtaining outer surfaces]

To obtain the outer surface of the object in a CT scan or MRI, you need to find the outer contour of each slice. Once all the contours are determined, surfaces are generated between contours like the paper in a Chinese lamp. If the contours are sufficiently close, a fairly good resemblance to the real surface can be achieved.

Contouring is done by connecting contiguous pixels with the same luminance value. We call this luminance the "threshold"; it can be adjusted manually by the user until the right surface is obtained.

One way to understand this contour is as the intersection of the plane z = threshold with the mathematical surface that represents the luminance, on the z axis, as a function of (x, y).

Contours can be generated automatically or individually, by clicking with a pixel correction tool. The picture below shows how a contour is generated and how the chosen threshold affects automatic generation.

[Figure: the effect of the chosen threshold on contour generation]
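The contour-finding step can be sketched in a few lines. The fragment below is C++ of my own, not the actual CANVAS code, and the names are mine: it marks as contour pixels those at or above the threshold that have at least one 4-neighbor below it; connecting such pixels slice by slice gives the closed curves used to build the Chinese lamp surface.

```cpp
#include <cstdint>
#include <vector>

// Mark the contour pixels of one slice: a pixel belongs to the contour if its
// luminance is >= threshold while at least one of its 4-neighbors is below it.
std::vector<uint8_t> contourMask(const std::vector<uint8_t>& slice,
                                 int width, int height, uint8_t threshold) {
    std::vector<uint8_t> mask(slice.size(), 0);
    auto at = [&](int x, int y) { return slice[y * width + x]; };
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            if (at(x, y) < threshold) continue;                 // outside the object
            bool edge = (x == 0          || at(x - 1, y) < threshold) ||
                        (x == width - 1  || at(x + 1, y) < threshold) ||
                        (y == 0          || at(x, y - 1) < threshold) ||
                        (y == height - 1 || at(x, y + 1) < threshold);
            if (edge) mask[y * width + x] = 1;                  // contour pixel
        }
    }
    return mask;
}
```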

The 3D-izer

When working for ACD, I developed tools for drawing in 3D; these were also supposed to go into the scientific release of CANVAS X. Below is a description of some of the capabilities of this sophisticated drawing tool.

[Images: pages 1 to 3 of the 3D-izer manual]

Any of the above drawings could be turned into an animated object like the examples below:

 

[Animations: the drawings above turned into moving 3D objects]

3-dimensional charting was also available. Surfaces defined by three-variable analytical equations could be drafted, combined and animated, as shown in the examples below.

[Plots: conic sections; a spherical plot; sin x / x]

The BIS_WDS, a Microwave Passive Concealed Object Detector for the War on Terror

(I wrote the explanation below for the executives and sales representatives who had to explain the BIS_WDS, a microwave passive concealed object detector, to security experts and other potential buyers. The information herein was obtained through conversations with Brijot's scientific big shots, and some of it I had to figure out myself.)

 

The purpose of the BIS_WDS (© Brijot Imaging Systems, Inc.) is the detection of objects that subjects under surveillance could be concealing beneath their clothes. The principle of this detection relies on the fact that these hidden objects are always cooler than the skin by 4 or more degrees Celsius. All surfaces with temperatures above 0 K radiate at all frequencies, and since the spectrum of this radiation depends on the absolute temperature, measuring this radiation in a convenient band can differentiate a hidden object from the skin in the background.

 

Radiation spectra for different temperatures can be seen in Fig. 1. The wavelength at which the radiation peaks grows as the temperature decreases. When it gets down to body temperature, this peak goes into the infrared with a wavelength of 9500 nm (~10 µm), a region found way beyond the right limit of Fig. 1. This is the zone where thermal imaging cameras operate, producing the all too familiar pseudo color images that Hollywood loves to show off in its action movies. The problem with using infrared for detecting concealed objects is that clothing may not be transparent enough to the infrared. However, cloth is perfectly transparent to microwaves, and at body temperature there is also some microwave radiation, but with about a thousand times less power.
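To put numbers on this, here is a small C++ sketch of my own (not part of the original BIS_WDS material) that evaluates Planck's law, B(ν,T) = (2hν³/c²) / (e^(hν/kT) − 1), at 90 GHz for skin at about 310 K and for a concealed object a few degrees cooler. At these frequencies hν/kT is tiny, so the Rayleigh-Jeans limit B ≈ 2ν²kT/c² applies and the received power is essentially proportional to temperature, which is exactly the contrast the radiometer exploits.

```cpp
#include <cmath>
#include <cstdio>

// Planck spectral radiance B(nu, T) in W / (m^2 Hz sr).
double planck(double nu, double T) {
    const double h = 6.62607015e-34;   // Planck constant, J s
    const double k = 1.380649e-23;     // Boltzmann constant, J/K
    const double c = 2.99792458e8;     // speed of light, m/s
    return 2.0 * h * nu * nu * nu / (c * c) / (std::exp(h * nu / (k * T)) - 1.0);
}

int main() {
    const double nu = 90e9;             // 90 GHz, inside the W band
    double skin   = planck(nu, 310.0);  // skin at ~37 C
    double object = planck(nu, 306.0);  // concealed object, 4 K cooler
    std::printf("B(skin)   = %.3e W/(m^2 Hz sr)\n", skin);
    std::printf("B(object) = %.3e W/(m^2 Hz sr)\n", object);
    std::printf("contrast  = %.2f %%\n", 100.0 * (skin - object) / skin);
    // In the Rayleigh-Jeans limit B ~ 2 nu^2 k T / c^2, so the ~4 K difference
    // becomes a ~1.3 % power contrast that the radiometer must resolve.
    return 0;
}
```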

 

The chances of detecting a particular radiation depend not only on its power, but also on the background noise level. Microwaves above 200 GHz are polluted with the CMB, or Cosmic Microwave Background (famous for its relation to the Big Bang theory about the origin of the Universe). Below 10 GHz it gets polluted again, this time with atmospheric and galactic backgrounds and, last but not least, the ever growing interference from human communications. Between these two limits the microwave bands are very clean, making bodily radiation in this range detectable.

[Fig. 1: black body radiation spectra]

There are a few choices of clean microwave bands:

·         Ku band: 12 to 18 GHz
·         K band: 18 to 26.5 GHz
·         Ka band: 26.5 to 40 GHz
·         Q band: 30 to 50 GHz
·         U band: 40 to 60 GHz
·         V band: 50 to 75 GHz
·         E band: 60 to 90 GHz
·         W band: 75 to 110 GHz
·         D band: 110 to 170 GHz

 

Lower bands not only exhibit less radiation power, but require bigger horn antennas that complicate the optical and mechanical parts of the unit. On the other hand, microwave components for the D band could make the unit pricey and uncompetitive. That is why the choice for the BIS_WDS is around the E and W bands.

The system:

[Figures: concealed object detector schematics; system components]

The system is simple; the devil is in the details. Radiation comes in through the hole on the left and is reflected by a mirror into a PVC lens that focuses it onto the radiometer. The mirror angle oscillates, driven by a coil, allowing the radiometer to scan the scene in front of the system up to 15 times a second. The radiometer consists of 32 small horn antennas with LNAs (Low Noise Amplifiers); these are misnamed "Pixels". A video signal is built by sequencing the readings of the 32 Pixels at 34 positions of the mirror, which allows creating a 34 by 32 pixel image.
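In software terms, building the video frame is mostly bookkeeping. Here is a minimal C++ sketch of my own (not Brijot's or ATG's actual code; readPixel is a hypothetical hardware hook): at each of the 34 mirror positions the 32 Pixel readings fill one column of a 34 × 32 frame, which is then shipped out over the network.

```cpp
#include <array>
#include <cstdint>

constexpr int kMirrorPositions = 34;   // horizontal resolution
constexpr int kPixels = 32;            // vertical resolution: the 32 horn/LNA "Pixels"

using Frame = std::array<std::array<uint16_t, kPixels>, kMirrorPositions>;

// Placeholder for the digitized, filtered output of one Pixel (the real code
// talks to the ADC/DSP chain described below).
uint16_t readPixel(int pixel) { return static_cast<uint16_t>(1000 + pixel); }

// Build one 34 x 32 image by sequencing the 32 Pixel readings at each of the
// 34 mirror positions; the real system repeats this up to 15 times a second.
Frame scanOneFrame() {
    Frame frame{};
    for (int column = 0; column < kMirrorPositions; ++column) {
        // ...wait here until the oscillating mirror reaches position 'column'...
        for (int pixel = 0; pixel < kPixels; ++pixel)
            frame[column][pixel] = readPixel(pixel);
    }
    return frame;
}
```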

My client company, ATG (Advanced Technical Group), was in charge of the radiometer. Given the Pixels, the radiometer subsystem had the task of biasing them, keeping their temperature low and constant, doing the analog to digital conversion, applying a digital filter, creating the video signal and delivering it over a TCP/IP network… Lots of physics here! Let's see some of it.

The Pixel

The Pixel consists of a W band horn antenna, a five-stage low noise amplifier (LNA) and a detector. The LNA uses devices called InP HEMTs, meaning Indium Phosphide High Electron Mobility Transistors. The physics of this device is similar to that of a field effect transistor (FET), only that instead of a silicon PN junction there is a Schottky junction over a III-V semiconductor that shows very high electron mobility. Also, there's this thing about the channel being kind of small… 2000 Angstroms!

 

The channel being so small makes a lot of difference. Power is dissipated in such a small volume that the smallest change in the quiescent point may bring a huge change in local temperature. The static I-V curves of an HEMT device show negative slopes in the saturation region; this happens because electron mobility decreases with temperature: as the drain voltage increases and the channel temperature goes up, the electron mobility comes down. (More on this at http://includesoft.com/DSP/discussion_on_bias_voltages_accu.htm )

There's another negative thing about temperature: it increases the amplifier's noise figure. So there's a compromise between gain and noise, and the operating current has to be the one that renders the best signal to noise ratio. This requires some lab work beyond C++ and circuit design.

A change in gain will produce the same change in the output level as a change in the temperature of the target, so the gain must be kept constant. Since the gain depends on the quiescent point, then this should be kept constant, right? The question is how constant it needs to be, and how much technology you need to put into this problem. This is a nice physics problem too.

 

The Pixel has another interesting element besides the HEMTs: the detector. The whole point of amplifying is bringing the microwave to a "detectable" level. A detector is a nonlinear network that renders a "DC" component in response to microwave input power. At low frequencies, anything can be nonlinear, but at microwave frequencies most nonlinear behaviors disappear. For instance, a PN junction, which is an almost perfect rectifier at low frequencies, at high frequencies is just a resistor; the injected carriers have no time to recombine and come right back when the polarity reverses too soon. The kind of diode used for detecting microwaves is called a "reverse diode". The name comes from the fact that it conducts with the reverse polarity thanks to the tunnel effect; when forward biased, the tunnel effect disappears. Tunneling is a quantum effect: electrons bound in momentum by the semiconductor energy bands show a Heisenberg uncertainty in their position. When a PN junction becomes narrow enough, electrons near the junction have a substantial probability of being on the other side of the junction's energy barrier; this explains the tunneling current. This quantum effect doesn't even have a known response time; the limit to its bandwidth comes down to how fast the junction can be driven.

Cooling the Pixels

Even when having a perfectly constant quiescent point, if the external temperature changes the channel temperature will follow, so some means of temperature control has to be implemented. A quiescent temperature must be chosen and then kept constant over the specified range of ambient temperatures in which the BIS_WDS will be required to operate.

The simplest and cheapest way of accomplishing temperature stabilization is by heating. If the ambient cools, the current to the heater is increased; if it warms, the current to the heater is reduced, keeping the temperature of the controlled space constant. But noise power grows with temperature, so heating the Pixels is not an acceptable solution here; it must be cooling.

 

Heating devices can easily be made small, but common coolers… Fortunately there are these devices called TECs (ThermoElectric Coolers) that use the Peltier effect for pumping heat. These TECs are reversible heat engines: if you drive a current through them, they will pump heat, but if you heat the hot side while keeping the other cool, they will generate electric power.

The application of these TECs to cooling poses two very interesting physics problems. One is the heat flow in a thermal system involving heat pipes for conveying the heat out of the box. These heat pipes are a kind of "air conditioning machine" with no moving parts: the working fluid evaporates at the hot end and diffuses to the cold end, where the vapor disappears as it condenses; the capillarity of a wick then brings the liquid back to the hot end again.

 

This whole heat transfer problem is also a nice example of a physics analogy, since it can be tackled using a lumped heat circuit approach.

The other part of the problem is how to stabilize the temperature. The technique here is PID (Proportional, Integral, Differential) control. Its implementation involves some C++; it is one of the tasks of the DSP (Digital Signal Processor) in the system. A temperature sensor is attached to the Pixels and the temperature is sampled every second; the difference between the temperature reading and the temperature wanted is called the error signal.

A Proportional control would be one in which the TEC current is made proportional to the error signal, in such a way that the TEC current is increased when warm and decreased when overcooled. The problem with a Proportional control is that you need an error to sustain a current. To avoid this, you can add to the current a contribution that is proportional to the sum of the error signals; this way, as long as there's an error signal, the total current will keep increasing until there's no error signal. That would be a Proportional Integral control. But there's a problem with this one too… it is not stable; it will oscillate around the target temperature. To damp these oscillations, you need a contribution to the TEC current that is proportional to the change of the error signal; that would be the Derivative part of the PID control. Even with these three contributions you may still get oscillations; you may also be overdamping, making the control lazy toward external changes, or underdamping, which would make it slow to settle into the right temperature. The contributions must be balanced as near as possible to critical damping, but for this you need a good initial estimate and then plan your experiments well, or you may be guessing for months. (The whole story on this: http://includesoft.com/DSP/a_pid_to_control_temperature.htm )
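Here is the textbook form of that loop as a short C++ sketch. It is not the DSP firmware; the gains, limits and names are made up for illustration, but it shows the three contributions just described.

```cpp
#include <algorithm>

// One PID step, called once a second with the latest Pixel temperature reading.
// Returns the TEC cooling-current command; the gains must be tuned near critical
// damping, as discussed above (these values are placeholders, not ATG's).
struct PidController {
    double kp = 2.0, ki = 0.1, kd = 5.0;   // illustrative Proportional/Integral/Derivative gains
    double setpoint = 20.0;                // wanted Pixel temperature, deg C
    double integral = 0.0, lastError = 0.0;

    double step(double measuredTemp, double dt /* seconds */) {
        double error = measuredTemp - setpoint;        // positive when too warm
        integral += error * dt;                        // Integral: removes the steady-state error
        double derivative = (error - lastError) / dt;  // Derivative: damps the oscillations
        lastError = error;
        double current = kp * error + ki * integral + kd * derivative;
        return std::clamp(current, 0.0, 3.0);          // keep the TEC current within 0..3 A
    }
};
```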

And there’s more…

The detected signal must be amplified, converted and digitally filtered. For amplification, an operational amplifier must be chosen from among the hundreds available on the market. Which specs matter more: low noise, or low offset drift with temperature? To answer that, the spectrum of the signal must be figured out. How fast must the signal be sampled? What kind of digital filter would be best: a Finite Impulse Response (FIR) filter or an Infinite Impulse Response (IIR) one? That means plenty of opportunities for using a physics background.
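As a flavor of that last choice, here is a tiny C++ sketch (illustrative only, not the filter that shipped) contrasting the two families: an N-tap moving-average FIR, whose output depends on a finite window of inputs, and a single-pole IIR exponential average, where every past input keeps a decaying say.

```cpp
#include <cstddef>
#include <deque>
#include <numeric>

// FIR: N-tap moving average over the most recent samples.
class MovingAverageFir {
    std::deque<double> window_;
    std::size_t taps_;
public:
    explicit MovingAverageFir(std::size_t taps) : taps_(taps) {}
    double filter(double x) {
        window_.push_back(x);
        if (window_.size() > taps_) window_.pop_front();
        return std::accumulate(window_.begin(), window_.end(), 0.0) / window_.size();
    }
};

// IIR: single-pole exponential average, y += alpha * (x - y).
class SinglePoleIir {
    double alpha_, y_ = 0.0;
public:
    explicit SinglePoleIir(double alpha) : alpha_(alpha) {}
    double filter(double x) { return y_ += alpha_ * (x - y_); }
};
```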

Some articles on the above:

http://includesoft.com/DSP/pink_spectrum.htm

http://includesoft.com/DSP/a_test_signal_discussion.htm

http://includesoft.com/DSP/delay_and_the_averaging.htm

http://includesoft.com/DSP/video_amplifier_for_the_radiomet.htm


 

The TCAS or Traffic Collision Avoidance System

A traffic collision avoidance system is a piece of equipment, found today in every commercial aircraft, designed to reduce the chances of mid-air collisions. It was the 1986 Aeroméxico Flight 498 collision that finally spurred the US Congress and other regulatory bodies into action and led to mandatory collision avoidance equipment.

The idea is simple enough: TCAS detects any nearing aircraft and issues a warning to the pilot. When two nearing aircraft are TCAS equipped, both TCAS units talk to each other to coordinate their maneuvers; that is, one pilot is advised to descend while the other is advised to climb. The actual solution is not as simple as its principle.

 

I bet that when I first mentioned aircraft detection you immediately thought RADAR… I did. Yet RADAR is not it. Radar is based on sending a strong RF beam and detecting the faint signal that bounces back from an intruder plane. For the kind of range a TCAS requires, radars are bulky, pricey and require lots of power, totally unsuitable for an aircraft. Even for ground stations, most air traffic surveillance since the 50's has been based on much simpler and smaller beacon radars that rely on all aircraft having a transponder.

 

[Figure: a TCAS transponder]

The transponder has a sensitive receiver to detect even faint interrogation signals from beacon radars, and it transmits back a strong reply with the added bonus of altitude and identity information, allowing full location and ranging. This scheme reduces the required power, bulkiness and price by orders of magnitude.

 

TCAS will be interrogating transponders, so it doesn’t need parabolic rotating antennas to pick up their replies; stronger signals allow for compact solutions with no moving parts. To the right, an entire TCAS from Honeywell showing the top and bottom antennas, the control unit and the panel display.

Ready for some physics? ...The TCAS Antenna

A rotating parabolic antenna being directional is straightforward, but one with no moving parts is not. Interrogations may be radiated in an omnidirectional manner as long as the direction a reply is coming from can be determined; that would be the intruder aircraft's bearing angle. Quad parallel ¼ λ rod antenna systems are the type used for this job (Figure 1). The carrier frequency used for the reply signals is 1090 MHz, so ¼ λ is only 2.7". With the rod spacing about the size of the rods, the whole antenna array is about the size of a fist. Bearing detection comes in two flavors: amplitude and phase.

[Figure 1: the four element antenna]

 Bearing Determination by Phase

The physics behind bearing detection by phase is depicted in the figure below, with B being the intruder's bearing and Φnm the phase between elements En and Em.

[Figure: operation principle of the four element antenna]

A little more physics… The phases in the above equations would be the ones measured in open elements, or in other words, in the unrealistic absence of currents. Yet some energy needs to be absorbed from the incident wave to feed the receiver input circuits, so there will definitely be a current in every element. These currents induce voltages in the neighboring elements, affecting the phases to be measured. Fortunately, symmetry forces any reactive phase shift in Φ43 to be the same as in Φ12, and likewise forces the shift in Φ23 to be the same as in Φ14. So, to cancel out the mentioned reactive effects of a real antenna, the sums of these phases are what enter the following expression for the bearing:

B = 45° – atan2(Φ12 + Φ43, Φ14 + Φ23)
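In code, the bearing computation is essentially one call to atan2. This is an illustrative C++ fragment written from the formula above, not ATG's implementation; the Φnm arguments are the measured element-to-element phases and the result is in degrees.

```cpp
#include <cmath>

constexpr double kPi = 3.14159265358979323846;

// Intruder bearing from the four measured inter-element phases.
// Summing phi12 + phi43 and phi14 + phi23 cancels the reactive phase shifts
// that element currents induce in their neighbors, as explained above.
double bearingFromPhases(double phi12, double phi43, double phi14, double phi23) {
    double bearing = 45.0 - std::atan2(phi12 + phi43, phi14 + phi23) * 180.0 / kPi;
    if (bearing < 0.0)    bearing += 360.0;   // normalize to [0, 360)
    if (bearing >= 360.0) bearing -= 360.0;
    return bearing;
}
```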

 

It may be shocking that the expression for the bearing B does not depend on the actual size of the antenna, yet it is not that just any size would do: smaller antennas render smaller phase shifts for the same bearings. In other words, smaller antennas are less sensitive to intruder bearing changes than bigger ones.

Bearing Determination by Amplitude

A perfectly loaded dipole absorbs power from an incident electromagnetic wave, leaving a shadow behind it. Though this shadowing effect practically disappears due to diffraction after a few wavelengths, a dipole within a shadow will absorb less power from the wave. The amplitudes of the signals received by a quad dipole antenna will then depend on the wave's angle of incidence, since the dipoles in the back will be shadowed more or less depending on how far behind the front ones they are. TCAS amplitude systems take advantage of this effect, allowing their bearing determinations to challenge in accuracy those of phase systems. (More at http://includesoft.com/DSP/The_quad_antenna.htm )

There's a lot more physics to the TCAS: signal spectrum, information compression, global positioning, noise, thermal gradients, among many others. Yet I was not involved with TCAS design or development in any way, so then… what's my relation to TCAS? The main product line of ATG (Advanced Technical Group), my client company since 2005, is equipment for testing that TCAS units and transponders comply with the standard specifications.

TCAS Testing

[Figures: banking angle; a TCAS tester]

Many of the tests consist of standard electronic measurements like RF pulse power, shape, carrier frequency, etc., yet the really tough ones are the behavioral tests, that is, how the TCAS responds in air traffic scenarios. This requires that the test equipment be connected to the TCAS antenna and generate the same reply signals as real intruder aircraft would. This means not just generating four sets of reply RF pulses (one for each antenna element), but also that their time positions be consistent with the simulated intruders' distances down to +/- 25 ft in 180 miles, and that the relative phases (or amplitudes) be consistent with their bearing angles down to +/- 2°. For this kind of accuracy over 180 miles, not only must the roundness of the Earth be considered, but also that it is not a perfect sphere but an ellipsoid, so soon enough I was dealing with coordinate transformations from geodetic to ECEF (Earth Centered Earth Fixed). These calculations were to be performed, not by an "Intel Inside" computer, but by a far less powerful Texas Instruments DSP (Digital Signal Processor) chip, where floating point operations were limited to single precision. For those interested in more detail, I wrote a couple of internal reports on these issues: http://includesoft.com/DSP/ecef_calculation_for_range_and_b.htm and http://includesoft.com/DSP/ErrorAnalysis.htm .
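For those who like to see the coordinate transformation spelled out, this is the standard WGS-84 geodetic-to-ECEF conversion as a C++ sketch (double precision here; part of the fun on the TI DSP was living with single precision). It is the textbook formula, not a copy of the product code.

```cpp
#include <cmath>

struct Ecef { double x, y, z; };   // meters, Earth Centered Earth Fixed

// Geodetic latitude/longitude (radians) and altitude (meters) to ECEF,
// using the WGS-84 ellipsoid.
Ecef geodeticToEcef(double lat, double lon, double alt) {
    const double a  = 6378137.0;             // semi-major axis, m
    const double f  = 1.0 / 298.257223563;   // flattening
    const double e2 = f * (2.0 - f);         // first eccentricity squared
    double sinLat = std::sin(lat), cosLat = std::cos(lat);
    double N = a / std::sqrt(1.0 - e2 * sinLat * sinLat);  // prime vertical radius
    return { (N + alt) * cosLat * std::cos(lon),
             (N + alt) * cosLat * std::sin(lon),
             (N * (1.0 - e2) + alt) * sinLat };
}
```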

[Figure: examples of trajectories]

The task is further complicated by the fact that not just one intruder, but up to 432, are required, and of those, the last 32 are dynamic intruders, meaning that they must move according to a flight plan, which must include realistic turns. That brings some airplane dynamics into the game, as well as 2D kinematics; a short sketch of the turn geometry follows the table below. (You may read all about it at http://includesoft.com/DSP/realistic_trajectories.htm )

 

Cruise speed (knots)    Turn radius (nm)    Turn rate (deg/s)
120                     0.45                4.244132
140                     0.6125              3.637827
160                     0.8                 3.183099
180                     1.0125              2.829421
200                     1.25                2.546479
220                     1.5125              2.314981
240                     1.8                 2.122066
260                     2.1125              1.95883
280                     2.45                1.818914
300                     2.8125              1.697653
320                     3.2                 1.591549
340                     3.6125              1.497929
360                     4.05                1.414711
380                     4.5125              1.340252
400                     5                   1.27324
420                     5.5125              1.212609
440                     6.05                1.15749
460                     6.6125              1.107165
480                     7.2                 1.061033
500                     7.8125              1.018592
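The table follows directly from the physics of a coordinated (banked) turn: the horizontal component of lift supplies the centripetal force, so r = v² / (g·tanθ) and the turn rate is v / r. The C++ sketch below is my own illustration, not the simulator code; the tabulated values appear consistent with a constant bank angle of about 25°, which is the assumption used here.

```cpp
#include <cmath>
#include <cstdio>

constexpr double kPi = 3.14159265358979323846;

// Coordinated turn: radius r = v^2 / (g tan(bank)); turn rate = v / r.
// A constant ~25 degree bank reproduces the table above to good accuracy.
int main() {
    const double g = 9.80665;            // m/s^2
    const double bankDeg = 25.0;         // assumed bank angle
    const double knToMps = 0.514444;     // knots -> m/s
    const double mToNm = 1.0 / 1852.0;   // meters -> nautical miles
    for (int kn = 120; kn <= 500; kn += 20) {
        double v = kn * knToMps;                                    // m/s
        double r = v * v / (g * std::tan(bankDeg * kPi / 180.0));   // m
        double rateDegPerS = (v / r) * 180.0 / kPi;
        std::printf("%3d kn  radius %6.4f nm  rate %8.6f deg/s\n",
                    kn, r * mToNm, rateDegPerS);
    }
    return 0;
}
```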

But the fun is not over: the system must also simulate the motion of the "own" aircraft, meaning the one the TCAS under test is supposed to be on. So the scenario to simulate is the one that the TCAS under test should be "seeing". If the own aircraft is flying, say, north and an intruder at 2 o'clock is flying west, then the TCAS must see this intruder as moving southwest. Also, if the "own" is turning, then the bearing angles of all intruders must change accordingly. When all of the above is accomplished, the "own" coordinates can be fed from a flight simulator, so you can now have a flight simulator with a TCAS!
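A toy version of that bookkeeping, in C++ and purely for illustration: subtracting the own-ship velocity from the intruder's gives the motion the TCAS under test should "see", and the example reproduces the north-flying own / west-flying intruder case from the text (the speeds are made up).

```cpp
#include <cmath>
#include <cstdio>

constexpr double kPi = 3.14159265358979323846;

struct Vec2 { double east, north; };   // horizontal velocity components, in knots

// Motion of the intruder as seen from the own aircraft's (non-rotating) frame.
Vec2 relativeVelocity(Vec2 intruder, Vec2 own) {
    return { intruder.east - own.east, intruder.north - own.north };
}

int main() {
    Vec2 own      = {    0.0, 300.0 };   // own ship flying north at 300 kn
    Vec2 intruder = { -250.0,   0.0 };   // intruder flying west at 250 kn
    Vec2 rel = relativeVelocity(intruder, own);
    // Heading of the apparent motion, measured clockwise from north.
    double heading = std::fmod(std::atan2(rel.east, rel.north) * 180.0 / kPi + 360.0, 360.0);
    std::printf("apparent motion: %.0f kn toward %.0f deg, i.e. southwest\n",
                std::hypot(rel.east, rel.north), heading);
    return 0;
}
```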

 

The above development project combines the physics background with the C/C++ skills, yet sometimes you may get the chance to use the physics background by itself…

Squitter-Interrogation Collision Probability

When the use of TCAS started expanding, the 1030/1090 MHz spectrum rapidly became polluted around busy airports. Before, it was just ground stations interrogating, but after TCAS, every airplane was interrogating as well. Many techniques were developed to reduce the number of replies. Side lobe suppression and whisper/shout use the principle of interrogating with two pulses at different power levels: a weak pulse (the whisper) and a stronger one (the shout). If the receiver picks up the second pulse after the first, the transponder won't reply, and of course, if it can't receive either of them, it won't respond at all; so only those far enough away not to hear the whisper, but near enough to still pick up the shout, will reply.
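The reply rule can be written down in three lines. This C++ fragment is only a caricature of the logic (real transponders work with pulse amplitudes and suppression timing, not booleans), but it captures the ring of responders that whisper/shout carves out.

```cpp
// Whisper/shout reply decision, reduced to its logic: a transponder replies only
// if it hears the stronger pulse (the shout) but not the weaker one (the whisper),
// so only aircraft in a ring of distances around the interrogator answer.
bool transponderReplies(bool heardWhisper, bool heardShout) {
    if (heardWhisper) return false;   // close in: suppressed by the second pulse
    if (!heardShout)  return false;   // too far: never heard the interrogation at all
    return true;                      // far enough to miss the whisper, near enough for the shout
}
```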

 

With growing air traffic these solutions became insufficient, so the squitter was invented. The idea is that a plane flies around broadcasting its unique identification (called the Mode S address) every second; this unsolicited signal is called a "squitter". When a TCAS or ground station picks up a squitter, it interrogates with this address, and only the aircraft with that Mode S address will reply. The technique has developed even further with extended squitters, which broadcast not only the Mode S address but also the GPS position, so every TCAS out there may track the aircraft without even having to interrogate; that's called passive tracking.

 

Finally we are ready for the story. A TCAS manufacturer that was using the tester from my client company complained that the simulated intruders were not responding 100% of the time to the interrogations of their TCAS under test. This particular test consisted of a long 12 hour simulated scenario in which the TCAS unit was forced to interrogate 10 times a second; every 5 seconds a computer would sample the interrogation-reply pairs. If an interrogation was detected but not its reply, the test would fail. Roughly 1 in every 4 or 5 of those tests failed, and the manufacturer claimed it was not their TCAS but the test equipment that was not responding to all the interrogations. One possible explanation in which neither would be at fault was that the failures could be due to interrogation-squitter collisions. In other words, if a simulated intruder happened to be interrogated by the unit under test while it was transmitting a squitter, it would certainly miss it. But the complaining party was not buying this: no way could such an unlikely event cause so many failures!

 

Going into the physics background's "bag of tricks", I recalled, from statistics, the Poisson distribution.

The squitter, called DF-11, has a 60 µs duration and happens once a second. Since the interrogation, called UF-0, lasts 18.5 µs, the chance of any part of an interrogation happening while squittering is:

 

1)         p = (60 + 2×18.5) µs / 10^6 µs = 0.000097

 

There is a 12 hour test that fails when a collision is detected. For this test, 10 interrogations are made each second. Events with such a low probability as squitter-interrogation collisions follow the Poisson probability distribution:

 

2)         p(n|N) = (Np)^n · e^(-Np) / n!

 

where N is the number of interrogations in 12 hours (432000) and n is the number of collisions. p(n|N) is the probability of having precisely n collisions in N tries (interrogations).

[Figure: the Poisson distribution]

Equation 2 is the limit of the binomial distribution for N → ∞ while the product Np stays finite… or, in practical terms, Np << N or p << 1 (rare events). In our case this condition is very much true:

 

3)         Np = 432000*0.000097 = 41.904

 

So our Poisson distribution will be:

 

4)         p(n) = (41.904)^n · e^(-41.904) / n!

 

Yet not all interrogations are observed: of the 432000 interrogations, only 2356 will be sampled by the computer (1 sample every 5 seconds). The probability of not observing a single collision in any of those samples, if n of them happen, is:

 

5)         Qo(n) = (1 - n/432000)^2356

 

Then the probability for observing one or more would be:

 

6)         Po(n) = 1 – Qo(n)

 

The probability of observing a collision when precisely n of them happen will be the probability of observing at least one when precisely n collisions happen, that is Po(n), multiplied by the probability p(n) of having exactly that many collisions. The probability of observing a collision, regardless of how many actually happened, is the sum of these products for each n from 1 to (in principle) 432000:

7)         P = Σ (n = 1 to 432000) Po(n) · p(n) = 0.2038

 

The chance of getting a collision failure in the mentioned 12 hour test is ~20%, and that is about 1 in 4 or 5, right? (That felt so good!)
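The whole argument fits in a few lines of C++. This is my own check, written for this text rather than taken from the original calculation, but with the numbers above it reproduces the ~0.204 figure.

```cpp
#include <cmath>
#include <cstdio>

// Probability that the 12-hour test observes at least one squitter-interrogation
// collision: sum over n of P(n collisions) * P(at least one of them is sampled).
int main() {
    const double p = 97e-6;            // collision chance per interrogation (eq. 1)
    const long   N = 432000;           // interrogations in 12 hours, 10 per second
    const long   samples = 2356;       // interrogation-reply pairs actually sampled
    const double lambda = N * p;       // Poisson mean, ~41.9 collisions per test

    double total = 0.0;
    double poisson = std::exp(-lambda);              // p(0)
    for (long n = 1; n <= 2000; ++n) {               // terms beyond ~2000 are negligible
        poisson *= lambda / n;                       // p(n) from p(n-1)
        double pMissAll = std::pow(1.0 - double(n) / N, double(samples));  // Qo(n)
        total += poisson * (1.0 - pMissAll);         // observe at least one collision
    }
    std::printf("P(test fails) = %.4f\n", total);    // ~0.204, roughly 1 test in 5
    return 0;
}
```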

 

There's another nice example of a pure physics task. There was this project of an interrogator for testing transponders in actual flight. It involved power transmissions that raised safety concerns for those who would operate the instrument and even for unaware bystanders. So I got the task of calculating the radiation intensity in the vicinity of the interrogator and its compliance with radiation safety limit standards. You can read the resulting report at: http://includesoft.com/DSP/compliance_to_radiation_limits.htm .

 

A lot of use of the physics background can also be found in the following reports:

 

http://includesoft.com/DSP/compensation_of_the_tcas_correct.htm

http://includesoft.com/DSP/ErrorAnalysis.htm

http://includesoft.com/DSP/ecef_calculation_for_range_and_b.htm