Talk:Control theory/Archive 1


Many problems

In "controllability and observability", "states" and "unobservable poles" are discussed but not defined, therefore the section is incomprehensible for those it is intended for. It seems that a state is not defined in the whole article. Should we add a simple example, such as $\dot x=Ax+Bu$ and $y=Cx+Du$, where ABCD are constants (or constant matrices)? Or perhaps we should use a discrete-time system ($x_{n+1}=Ax_n+Bu_n$) for simplicity. Also in "stability" various other forms of stability should be mentioned at least, although they could be more exactly discussed in a separate article. However, basically the structure of this article might be a good idea: the reader can print it and easily study some basics at one glance. Tilin 14:37, 29 December 2005 (UTC)

Moved comment in Stability section to here

Someone posted this in the actual article under "Stability". In the future, please post to the Comments page.

(It seems that here the author assumes that the transfer function of the system is rational. Can someone confirm if this is true also for nonrational transfer functions? References?)

68.40.50.73 06:24, 28 January 2006 (UTC)

Feedback control loop example

I think the control loop example should have a feedback other than unity (1). Such an example would provide a more general explanation.

I'd add a multiplier of k but I don't readily have access to matlab. Cburnett 13:49, 23 March 2006 (UTC)
Why do you need MatLab? Knotgoblin 02:54, 24 March 2006 (UTC)
a) It definitely looks like a matlab simulink schematic; b) got an alternative? Cburnett 03:01, 24 March 2006 (UTC)
I have access to Matlab and plenty of other similar apps. I can try to make one. Knotgoblin 18:47, 27 March 2006 (UTC)
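For what it's worth, the point about non-unity feedback can also be made in one line of algebra (a generic sketch; $G$ is the forward-path transfer function and $k$ the feedback gain, names assumed here):

$\dfrac{Y(s)}{R(s)} = \dfrac{G(s)}{1 + k\,G(s)},$

which reduces to the familiar unity-feedback form $G/(1+G)$ when $k = 1$.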

Mathematical Control Theory

Mathematical Control Theory is quite general. It is possible to state control problems in a general setting that contains, for instance, continuous and discrete controls as particular cases. What about adding a section about Mathematical Control Theory? I am a mathematician, but I have a physics and electronics background. I should be able to work it out. Let me know. gala.martin (what?) 05:53, 20 April 2006 (UTC)

Appendix A

Won't this appendix be better placed in transfer_function? Fannemel 21:02, 20 February 2006 (UTC)

Possibly, but I don't think it is needed at all. An encyclopedia usually provides only the relevant equations (the ones that the reader might want to use), and leaves all derivations to references. In this case we could refer to any elementary textbook on control theory. --PeR 11:13, 14 June 2006 (UTC)

Controllability vs Reachability

What historically has been named 'controllability' is ambiguous. Let $x_e$ be the equilibrium state (usually taken to be 0) of a system in the absence of an input. Then:

  1. Reachability of an arbitrary state $x_1$ from an arbitrary state $x_0$ is the ability to transfer the system from the initial state $x_0$ to the final state $x_1$ in some finite time by applying a suitable input.
  2. Controllability of an arbitrary state $x_0$ is the ability to transfer the system from this state to the equilibrium state $x_e$ in some finite time by applying a suitable input.

Obviously reachability implies controllability. For linear systems in continuous time, the two concepts are equivalent. However, a discrete-time system may be controllable without being reachable. A trivial example (a concrete instance is sketched below) is

$x_{k+1} = A x_k$ (with no effective input term), where $A$ is nilpotent.

Mastlab 21:21, 10 September 2006 (UTC)
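To make the counterexample concrete, here is one possible instance (my own illustration, under the usual linear discrete-time assumptions):

$x_{k+1} = A x_k + B u_k, \qquad A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad B = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$

Since $A^2 = 0$, every initial state reaches the origin in at most two steps with zero input, so the system is controllable (to the origin). The reachability matrix $\begin{pmatrix} B & AB \end{pmatrix}$ is zero, so no state other than the origin is reachable.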

Further Reading section

I just added a Further Reading section, since I figure this article would benefit from having some external resources to point to. I populated it with my undergrad controls text - probably not the best resource to add, but it's a necessary section and needed to have something inside - feel free to remove it as others are added. 17:52, 22 November 2006 (UTC)

Reverted anon

A recent anon tried to state that in this case the control signal was voltage - this is most likely the situation, but may not be the case generally. Consider a situation where you have a fluctuating supply voltage (for some reason - bad power, batteries, who knows!) and you are measuring it to try to avoid burning out your motor, so you might have the ability to vary the load on the motor to keep it running at a constant velocity or torque. For example, you might have a hydraulic brake (why you would do this is beyond me) or you might be stirring a fluid which must be stirred at a constant velocity, and you can raise or lower the fluid level. The control signal won't always be voltage; my examples are contrived, but I believe possible. User A1 (talk) 13:17, 27 March 2008 (UTC)


Control in General

Various things need fixing, I think. There are various misleading things (the Laplace and Z-transforms aren't interchangeable, since they apply to continuous- and discrete-time systems respectively, for example). Various things could be explained more clearly, and the odd extra block diagram wouldn't be amiss.

Also, the history section surprises me - it talks about aeroplanes, but says nothing of Bode, Nyquist, and the like.

There's nothing in the entire Control section (at least that I've seen so far) about sensitivity or complementary sensitivity, which could be added to the classical control section (the standard definitions are sketched below this comment).


If no one has any objections, I intend to do a major reworking of a lot of this stuff - not just the Control Theory article, but others. The Nyquist Stability Criterion could be better stated, and 'encirclements' really ought to be changed to 'anticlockwise encirclements'. If they're clockwise, you've got poles crossing the imaginary axis towards the right in the s-plane due to closing the feedback loop and applying gain, and you're making things worse. There are many other things, and it's probably not worth writing them all down here, since they pertain to other articles.

Jenesis 23:10, 20 October 2005 (UTC)
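Regarding the sensitivity functions mentioned above, the standard definitions (writing $L = GK$ for the loop transfer function; notation assumed here, not taken from the article) are

$S(s) = \dfrac{1}{1 + L(s)}, \qquad T(s) = \dfrac{L(s)}{1 + L(s)}, \qquad S(s) + T(s) = 1,$

where, for a unity-feedback loop, $S$ is the map from the reference to the tracking error and $T$ is the closed-loop map from the reference to the output.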

Jenesis, please do. It sounds like you know what you're talking about. I have a lot of background in control theory, so if I can be of any assistance, leave me a message. --M0nstr42 16:07, 1 November 2005 (UTC)
I am not sure that the Wright Brothers did anything for the _theory_ of controls. Perhaps this should be taken out.
Seconded. --SirTwitch (talk) 16:29, 15 May 2008 (UTC)
Did you mean to say "more so than the ability to produce lifT from an airfoil" (not life)? --M0nstr42 21:45, 4 November 2005 (UTC)


Well, I've had a go at a few things such as stability (asymptotic and marginal), and corrected some errors (which is generally quicker and easier than expanding the content). Unfortunately, the things I'd like to do here and the things I have time to do aren't really on a one-to-one standing. A few MATLAB plots would be fairly quick and easy, though, and diagrams are generally pretty helpful in Control. Anybody with access to MATLAB and the Control Toolbox could be of help there. I've checked with The MathWorks, and plots generated with an Academic copy can be freely distributed. Jenesis 22:59, 7 November 2005 (UTC)
The eqn shown for PID is incorrect, I believe; control doesn't work off the output but off the error (I did not edit it). I would disagree that PID is the simplest feedback control, though it is likely the most common. I can elaborate if need be. Billymac00 09:43, 2 January 2006 (UTC)

Under "PID Controller", when the equations are being introduced, r(t) is mentioned as "the desired output", but it is not present in any of the formulas under this section. The section is either incomplete, or the mention of r(t) needs to be removed. qtπ 14:02, 24 September 2014

Stability Section

The section on Stability is very confusing, and I am not sure that it is entirely correct. It implies that BIBO stability is the same thing as (or at least very similar to) Lyapunov stability. Then it jumps into asymptotic stability, and mixes up the discussion of CT and DT criteria for determining stability all in a single discussion. There is no mention of the notion that BIBO stability depends only on external measurements of the system, whereas asymptotic stability is a more general measure that encompasses the internal state variables as well as the input and output signals. It is also not at all clear that it is possible for a system that is BIBO stable to be asymptotically unstable, or how that can happen. First Harmonic (talk) 21:58, 29 June 2008 (UTC)
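A minimal illustration of that last point (my own example, assuming the usual linear time-invariant setting): take

$\dot x = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} x + \begin{pmatrix} 0 \\ 1 \end{pmatrix} u, \qquad y = \begin{pmatrix} 0 & 1 \end{pmatrix} x.$

The transfer function is $1/(s+1)$, so the input-output behaviour is BIBO stable, yet the internal mode at $s = 1$ is neither excited by the input nor visible in the output, and it grows without bound from a nonzero initial condition, so the realization is not asymptotically stable.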

Fault detection

Fault detection redirects here, but the issue of fault detection and isolation/classification is, as far as I can see, not covered in this article and Advanced process control only mention it in passing. Should we start a new page on it, or could it be baked into other articles that I'm currently unaware of? EverGreg (talk) 12:00, 25 July 2008 (UTC)

I came here asking the same question. I don't see the connection at all, and I think the redirect is silly if the term isn't in any way clarified in the article. I'm surprised I can't find articles for either fault detection or fault isolation. 70.247.163.135 (talk) 00:38, 9 January 2009 (UTC)
I agree. I'm starting the Fault detection and isolation article, which fault detection and fault isolation redirect to. —Preceding unsigned comment added by EverGreg (talkcontribs) 10:32, 9 January 2009 (UTC)

Trim and Respond

If I am understanding this section correctly, they are just talking about playing with setpoints? This seems to be in strong contrast to the other headings in this section which talk about advanced control theory (adaptive control, optimal control, etc). Should we respond by trimming this section? User A1 (talk) 11:33, 27 August 2009 (UTC)

Sorry. What section are you referring to? -- Marcel Douwe Dekker (talk) 15:00, 27 August 2009 (UTC)
"Trim and respond". User A1 (talk) 01:53, 28 August 2009 (UTC)

History

I am not sure about that, but I think Kalman played a main role in control and filtering theory. Did not the NASA engineers say they needed two things for the trip to the moon: Newton's law and Kalman's filtering? I thought he was a milestone in control theory, but I am not sure about that. I just saw many theorems named after him. Does anybody know more? Gala.martin 20:37, 6 February 2006 (UTC)

Kalman filtering is a topic in control theory that has to do with estimation of states that cannot be directly measured. It is actually an iterative (discrete or continuous) algorithm that uses information from measured states and the mathematical model of the actual system. The term filtering comes from the early days when operational amplifiers were used, if my memory serves me right. Fannemel 20:50, 20 February 2006 (UTC)
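Since the thread is about what Kalman filtering actually does, here is a minimal discrete-time sketch of the predict/update iteration (my own illustration in Python; the scalar model and its numbers are made up, not taken from any article or reference):

  import numpy as np

  # Scalar model: x[k+1] = a*x[k] + w,  y[k] = c*x[k] + v,  w ~ N(0, q), v ~ N(0, r)
  a, c, q, r = 0.95, 1.0, 0.01, 0.1

  def kalman_step(x_est, p_est, y_meas):
      # Predict: propagate the estimate and its variance through the model
      x_pred = a * x_est
      p_pred = a * p_est * a + q
      # Update: blend the prediction with the new measurement via the Kalman gain
      k_gain = p_pred * c / (c * p_pred * c + r)
      x_new = x_pred + k_gain * (y_meas - c * x_pred)
      p_new = (1.0 - k_gain * c) * p_pred
      return x_new, p_new

  # Usage: estimate a state that is only seen through noisy measurements
  rng = np.random.default_rng(0)
  x_true, x_est, p_est = 1.0, 0.0, 1.0
  for _ in range(50):
      x_true = a * x_true + rng.normal(0.0, np.sqrt(q))
      y_meas = c * x_true + rng.normal(0.0, np.sqrt(r))
      x_est, p_est = kalman_step(x_est, p_est, y_meas)
  print(x_true, x_est)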
Yes. Estimation of a quantity conditional on randomly perturbed observations. From a mathematical point of view, that's quite close to control theory. I just want to remark that Kalman had a main role in control theory history. He also clarified the situation with attainable states, and proved several basic theorems in nonlinear control theory. Gala.martin 21:02, 21 February 2006 (UTC)

This is another instance of Americans re-writing the history books! Much of control theory was developed in Russia during the space race. —Preceding unsigned comment added by 217.44.122.146 (talk) 16:06, 8 September 2009 (UTC)

@217.44.122.146 A) This is not a history book -- it's the internet. It's not just Americans editing these articles. The English Wikipedia is primarily edited by English speakers. B) During the space race there were two groups working on the same type of research. Could it be that control theory was developed by more than one party independently? C) If you can provide a source to back up your claim that Russia developed control theory and that credit was stolen by the States, then we should definitely change the article. D) Really, though, why would America want to steal credit for developing control theory? It really doesn't matter who developed it; it's a mathematical theory (the principles existed before we discovered them). qtπ 14:42, 24 September 2014 — Preceding unsigned comment added by 68.14.245.132 (talk)

Copy-paste registration

In this edit a section was copy/pasted from People in systems and control. -- Mdd (talk) 10:16, 19 October 2009 (UTC)

perceptual control summary section needed

We need a Perceptual Control Theory summary section, that gives a brief description and links to the main article. Biological systems which process information (perception) on both external and internal state are increasingly being understood in terms of control theory. --68.35.2.8 (talk) 09:17, 13 May 2010 (UTC)

I don't think we need such a section. From a brief skim of the Perceptual control theory article, I get the impression this is verging on fringe science that has very little that's tangibly relevant to this article. Oli Filth(talk|contribs) 09:24, 13 May 2010 (UTC)
I concur, I think fringe science is the key word. It certainly has nothing to do with the engineering concept of control theory. User A1 (talk)
I agree it looks like fringe science. On the other hand it does look as if it is related to control theory. It appears to be trying to describe animal behaviour in terms of control theory. Nevertheless, unless it can be shown that this is anything but a fringe theory, the current page should not link to it. Martijn Meijering (talk) 09:53, 13 May 2010 (UTC)
Apologies, I shouldn't have used the term "need", that shouldn't be the standard, obviously there is a lot that we don't "need". But it would be a service to our readers to expose them to this generalization of control theory to biological systems. Most chemical and behavioral systems are involved in feedback control to maintain internal state, from internal homeostasis to following chemical gradients to food or a compatible environment to more complex behaviors required by higher animals including humans to maintain internal state. Perhaps the real question is do we need the space for other purposes so much that we can't perform this service for our readers. Good grief, even simple references and wikilinks to other articles are being reverted. The more general pages such as this one, should be home pages guiding readers to the more specialized ones. Let's be generous, not possessive. --68.35.2.8 (talk) 16:22, 13 May 2010 (UTC)
This article is about control theory. I would support an AFD on that page, but I am refraining from proposing one because, despite being somewhat odd, there appears to be actual literature on this. The page has had clearly wrong and misleading information, which is continually being removed by editors here, and I am just removing this one here. It also claims to be testable, but the article provides no evidence of any testable ideas (i.e. could you design an experiment or logical construct to generate a yes/no result regarding a component of this theory?). I am not an expert on PCT, but it seems very fringe. User A1 (talk) 16:44, 13 May 2010 (UTC)
Just regarding negative feedback: this is really an oversimplification of stability, which is often represented using phase plane diagrams. User A1 (talk) 16:49, 13 May 2010 (UTC)

Yes, PCT is quite testable, though I have unsuccessfully lobbied my friends to tone down the hype that sounds like a commercial for toothpaste. I suggest downloading the 13 computer demos that are part of my latest book (you don't have to buy the book to run them). Follow the link http://www.billpct.org/ and follow the instructions (PCs only, I don't know how to program Macs). Source code included, with reference to GNU license. Delphi 7, but if you can read Pascal you can see how everything is done. No spyware, viruses (that I know about), or other tricks. Visitors are counted but not identified.

Please don't bother with my old Brainstorm web page. It's way out of date.

Most of the programs have instructions embedded in them so you can get the idea of what it's about. —Preceding unsigned comment added by BillPCT (talkcontribs) 16:10, 14 May 2010 (UTC)

Tests of PCT can also be found at http://www.mindreadings.com/. Just click "Demos" for a list of experimental tests/demonstrations. You might also be interested in a book review I wrote (http://www.mindreadings.com/BookReview.htm) that explains the difference between engineering control theory and perceptual control theory. The difference is not in the theory itself -- PCT is control theory -- but in how the variables and functions in a control loop are mapped to the variables and functions involved in the behavior of living systems. Rmarken (talk) 16:50, 14 May 2010 (UTC)rmarken

I think we are getting off topic. Here is a snippet from the first code posting:
for I := 1 to LastData do
  begin
    { perceived quantity = system output plus the external disturbance }
    p := Output + Disturbance[I];
    { error = scaled reference minus the perception }
    e := 2400/ModelBitmap.Height*Ref - p;
    { integrate the error (integral-only control action) }
    Output := Output + Gain*e*dtime;
    { record the model's simulated mouse position }
    ModelMouse[I] := Output;
  end;

That's PID control with funny variable names. Please consider fixing the PCT article to reflect some of your above comments, and to separate the hyperbole and fluff from the real information. Posting links to your book may be interpreted as having a WP:COI with the subject. User A1 (talk) 16:57, 14 May 2010 (UTC)

I'd posted one of the links.--68.35.2.8 (talk) 18:45, 14 May 2010 (UTC)

From Bill Powers: Correct, user A1, Rick Marken's snippet is PID control -- actually, just I. Most of the models in PCT are simple PID controllers of this kind, sometimes with embellishments but none very complex. The point isn't to advance the art of control system design, but to show how classical control theory can be applied to develop a theory of behavior that replaces the basic algorithms of stimulus-response psychology and cognitive psychology. The development of PCT started in 1953 when I read Wiener's book (Cybernetics), then Ashby's, and realized that negative feedback control is what I had found missing in all my psych courses in college (I have an undergraduate degree in physics and earned my living doing electronic system design).

In the suite of demos mentioned previously, there is one called "trackanalyze." It is a simple pursuit tracking task in which the user employs a mouse to make a cursor track a target for one minute. Data are sampled 60 times per second. The target moves according to a random pattern generated from 120 harmonic cosine terms with random phases. Difficulty is adjusted by making the amplitude of harmonic terms decay exponentially as the harmonic increases. Several models have been tested. For the program in the downloadable demos, the parameters can be adjusted to make the model's simulated mouse movements match the real mouse movements of a test subject within about 3% RMS (fraction of maximum target range), and predict behavior of the same person with a new disturbance pattern within about 5% RMS. Parameters adjusted are output integrator gain, damping factor, reference level, and perceptual transport lag in 60ths of a second. A new version of the model uses a two-level cascade with the higher level controlling perceived target-mouse separation by means of adjusting the reference signal of the lower system which senses and controls rate of change of separation. This model fits behavior over a good range of difficulties (target patterns with different bandwidths) and matches real behavior within about 2.5% RMS.

In the same suite of demos there is also a learning algorithm which is based on random-walk optimization patterned after the way the bacterium E. coli makes its way up gradients of nutrients. It is demonstrated by showing an arm with 14 degrees of freedom and 14 control systems gradually acquiring independent control while random disturbances act. It has not been tested yet with real physical dynamics, though there are reasons to believe it will still work.

Is this topic of any interest in this discussion group? If there's no interest here, we will look for other wiki venues. Further information can be found on a web page organized by Dr. Warren Mansell, senior lecturer in the psych department at the University of Manchester, UK: www.pctweb.org.

BillPCT (talk) 01:26, 15 May 2010 (UTC)

Cruise control, revisited

I changed the way the cruise control was described. The desired reference is of course the speed of the vehicle. The output, the variable which effects change, is of course the engine. The end result of the control is an effect on the vehicle speed, an 'output variable'. To simplify that concept from a general engine or vehicle speed to a specific output, the throttle position was chosen by some author. The desired result of the control was to control the vehicle speed, while the practical result, the actual output itself, is simply control of the engine. I do not see the logic of calling the physically controlled output the 'input variable'.

In order to further illustrate this difference, I added a couple clarifications in the following paragraph or two.

To illustrate this difference further, let us consider a single closed-loop controller managing the temperature of a closed room. The desired reference is, let us say, 72°F. The actual temperature, which the sensor input is reporting to the controller, is 70°F. The controller sees the error and operates the output to increase the temperature. In this situation, let us assume the output is a proportional steam valve feeding a bank of registers. You would not call the proportional steam valve the input variable any more than you would call the throttle position the input variable. (A minimal numerical sketch of this loop is appended after this comment.)

-Garrett 68.63.108.124 15:22, 1 October 2007 (UTC)
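To make the terminology in the room-temperature example above concrete, here is a minimal closed-loop sketch (my own illustration in Python; the names, gains, and crude room model are made up): the reference is the setpoint, the measured output is the room temperature, and the control input is the valve command.

  # Proportional control of room temperature via a steam valve (illustrative numbers only)
  setpoint = 72.0              # desired reference, degrees F
  temperature = 70.0           # measured output (process variable), degrees F
  kp = 0.5                     # proportional gain
  degrees_per_valve = 2.0      # crude plant model: heating per unit valve opening per step
  heat_loss = 0.1              # heat lost to the surroundings per step

  for step in range(50):
      error = setpoint - temperature          # reference minus measurement
      valve = max(0.0, min(1.0, kp * error))  # control input: valve position clamped to [0, 1]
      temperature += degrees_per_valve * valve - heat_loss  # plant responds to the control input

  print(round(temperature, 2))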

The control variable u is very often referred to as the "input variable" (because it is an input to the physical system). Using the term "control variable" or "control input" is perhaps better, as it is less ambiguous. However, referring to the reference signal as "input", while technically not incorrect, is not the way the language is used. The change you made to the section on open-loop and closed-loop control uses the opposite terminology of every control theory textbook I've ever read. Input/output is determined as seen from the physical system, not the controller. (However, the system input is referred to as "output" when discussing the actual controller hardware, where, of course, it is an output port.) --PeR 18:19, 1 October 2007 (UTC)
From Bill Powers, on PCT. The usages of "input" and "output" in perceptual control theory are always from the standpoint of the control system rather than the external observer, designer, or user of the system: a living control system is its own user! In engineering, the output of a control system is what the customer wants it to control, but from a living system's standpoint, the output is the means of control, the actual output which goes out of the controller and enters its environment, like pushing on something with your hand. This output corresponds to what engineers have called the "control variable" -- the variable through which the controller acts to control something in its environment. The state of the variable controlled BY the controller is known to the living system only through its perceptual equipment, starting with its sensory inputs from the environment: what is perceived defines what is controlled. So we speak of "control of input," which acknowledges that the controlling system can never know directly what its perceptions represent; the perceptions are its only way of knowing, and the only variables it can control. If the characteristics of the input function change, the perceptual signal inside the system will remain under control, still matching the reference signal, but the environmental counterpart of that signal will be maintained in a different state. If there is a disturbance (like putting on funny glasses that shift the visual field to one side), the controlling person will keep the same object in the straight-ahead position as before, while an external observer will see the head turned to one side.
Finally, the reference signal in a living system is not accessible from outside; in human beings and many other organisms it is adjusted by outputs from higher-level systems. Spinal reflexes are control systems that get their reference signals from the brain via the spinal cord. Therefore we do NOT call the reference signal an "input" to the control system, even though it does enter the comparator of a particular control system from outside the loop. We just call it the reference signal, because the term input is already being used to designate perceptual inputs.
With these translations, any control engineer can convert between the PCT usages and the default usages in engineering. PCT can then be seen as ordinary negative feedback control theory in a biological setting.
PCT adherents argue that the engineering usages are a bit odd, in that an external variable that affects the controller only via its sensors is called an "output", while an output variable is called a "control variable," even though it can affect other variables but by itself can't control anything. And the conventional engineering diagram looks uncomfortably like the stimulus-response system that everybody but behaviorists has rejected, with its "input" entering one side and its "output" leaving the other side, with a feedback path that just seems sort of tacked on like an afterthought. In a human being, that feedback path is just about the whole game. Human beings care far more about controlling the results of their actions than about the actions themselves.
The differences probably make a difference only when engineers try to apply their version of the control system to a living organism. But since the translation is perfectly consistent, a control engineer should have no difficulty with it even if he or she doesn't take it to work every day. BillPCT (talk) 22:36, 15 May 2010 (UTC)

Clarification

The stability section needs to be explained better. 128.112.86.171 ---- (sig added by Cburnett; please use ~~~~ to sign your posts)

It should be emphasized that when we talk about stability, we are talking about the stability of a solution to the nonlinear differential equations. The concept of stability doesn't apply to a system; it applies only to a solution. RHB100 (talk) 02:19, 21 February 2012 (UTC)

What exactly do you find can be improved? For someone like me, who knows this stuff, it's a bit harder to know exactly what you think needs to be better explained. I'll try, though. Cburnett 03:46, Jun 9, 2005 (UTC)
Stability is a really general topic (BIBO stability, Lyapunov stability, total internal stability, exponential stability, asymptotic stability, global stability, etc. etc.). There's no clarification of what "bounded" means, or really even what "input" and "output" mean - really you're talking about the norms of the input and output signals, and then you have to talk about what a "signal" is. You're kind of assuming the reader is familiar with a lot of the terminology and the mathematics, but I don't think that's really your target audience. --M0nstr42 21:45, 4 November 2005 (UTC)
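For readers following along, the bare-bones definitions being alluded to (standard, not quoted from the article): a signal $u$ is bounded if $\|u\|_\infty = \sup_t |u(t)| < \infty$, and a system is BIBO stable if every bounded input produces a bounded output, i.e. $\|u\|_\infty < \infty$ implies $\|y\|_\infty < \infty$.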

Difference between Laplace transform and Z transform

From the article "The difference between the two cases is simply due to the traditional method of plotting continuous time versus discrete time transfer functions. The continuous Laplace transform is in Cartesian coordinates where the axis is the real axis and the discrete Z-transform is in circular coordinates where the axis is the real axis."

I think the spirit is right, but not quite rigorous. There is a conformal mapping between the s domain and the Z domain, with the imaginary axis of the s domain mapping onto the unit circle of the Z domain. Constant314 (talk) 18:48, 31 March 2012 (UTC)
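For the record, the usual relationship (assuming a sampling period $T$) is the conformal map

$z = e^{sT}, \qquad s = j\omega \;\mapsto\; z = e^{j\omega T}, \quad |z| = 1,$

so the imaginary axis of the s-plane maps onto the unit circle of the z-plane and the open left half-plane maps to its interior, which is the rigorous version of the "Cartesian versus circular coordinates" wording quoted above.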

Z-Transform is missing ...

How can you avoid including the Laplace transform and the Z-transform in the history of control theory? I think they belong FIRST. SystemBuilder (talk) 21:00, 3 July 2012 (UTC)

Introductory section reference to transfer function

The statement, "The input and output of the system are related to each other by what is known as a transfer function", is not true in general. More generally the input and output are related by nonlinear differential equations and perhaps by both nonlinear differential equations and discrere equations. When the input and output are related by a transfer function, it is usually the result of linearization of the nonlinear system at a trim point. RHB100 (talk) 02:38, 21 February 2012 (UTC)