Article Information

  • Title: Simulation-based design of dual-mode controller for non-linear processes
  • Authors: Lee, Jong Min; Lee, Jay H.
  • Journal: Canadian Journal of Chemical Engineering
  • Print ISSN: 0008-4034
  • Year: 2007
  • Issue: August
  • Language: English
  • Publisher: Chemical Institute of Canada

Simulation-based design of dual-mode controller for non-linear processes.


Lee, Jong Min ; Lee, Jay H.


INTRODUCTION

Model predictive control (MPC) is being widely used in the process industry because of its ability to control multivariable processes with hard constraints. Most of the current commercial MPC solutions are based on linear dynamic models, which are easier in terms of identification and on-line computation (Qin and Badgwell, 2003). On the other hand, many chemical processes exhibit strong non-linearities. This disparity has prompted several studies on MPC formulations with non-linear dynamic models (Lee, 1997; Mayne et al., 2000). Since most non-linear MPC (NMPC) formulations require on-line solution of a non-linear program (NLP), issues related to computational efficiency and stability of a control algorithm have received much attention.

The initial focus was on formulating a computationally tractable NMPC method with guaranteed stability. Mayne and Michalska (1990) showed that stability can be guaranteed by introducing a terminal state equality constraint at the end of the prediction horizon. In this case, the value function for the NMPC can be shown to be a Lyapunov function under some mild assumptions. Because the equality constraint is difficult to handle numerically, Michalska and Mayne (1993) extended their work to suggest a dual-mode MPC scheme with a local linear state feedback controller inside an elliptical invariant region. This effectively relaxed the terminal equality constraint to an inequality constraint for the NMPC calculation. The dual-mode control scheme was designed to switch between the NMPC and the linear feedback controller depending on the location of the state. Chen and Allgower (1998) proposed a quasi-infinite horizon NMPC, which solves a finite horizon problem with a terminal cost and a terminal state inequality constraint. The main difference between this method and Michalska and Mayne's is that a fictitious local linear state feedback controller is used only to determine the terminal penalty matrix and the terminal region off-line, and switching between controllers is not required.

These NMPC schemes have theoretical rigor but have some practical drawbacks. First, these methods still require solving a multi-stage non-linear program at each sample time. Even a feasible solution, let alone a globally optimal one, is difficult to guarantee. Recently, a Lyapunov-based NMPC design was proposed to guarantee closed-loop stability of systems with state and input constraints by identifying a set of feasible initial conditions (Mhaskar et al., 2005, 2006). This method, however, requires constructing a control Lyapunov function in addition to solving a multi-stage non-linear optimization problem on-line. Second, the optimization problem for determining the invariant region for a local linear controller and the corresponding terminal weight is both conservative and computationally demanding. As a consequence, recent publications on dual-mode MPC algorithms have mainly focused on linear systems, e.g. imposing stability for linear time-varying systems (Kim, 2002; Wan and Kothare, 2003) or achieving improved feasibility and optimality for linear time-invariant systems (Bloemen et al., 2002; Bacic et al., 2003).

Motivated by these drawbacks and the industry's reluctance to adopt full-blown NMPC, we propose an override (or supervisory) control strategy for monitoring and improving the performance of a local controller. Our method is similar to the dual-mode MPC suggested by Michalska and Mayne (Mayne and Michalska, 1990; Michalska and Mayne, 1993) in that switching between two different control policies depends on the current location of the state. However, we employ a cost-to-go function based approach instead of NMPC. First, a cost-to-go function under the local controller is defined, which serves to delineate the admissible region wherein the local controller can effectively keep the system inside acceptable operating limits. The same cost-to-go function is also shown to facilitate the calculation of override control actions that will bring a system outside the admissible region back into the region as quickly as possible. We propose to use simulation or operation data to construct an approximation to the cost-to-go function. With the cost-to-go function, an override control action can be calculated by solving a single-stage non-linear optimization problem, which is considerably simpler than the multi-stage non-linear program solved in the NMPC. The cost-to-go function under a local controller is further improved by solving a dynamic programming recursion. We show that this leads to performance improvement of the override controller.

This paper is organized as follows. The following section introduces the proposed scheme for designing a dual-mode controller. In the next section, we provide a brief introduction to the approximate dynamic programming (ADP) procedure for improving a cost-to-go function and show its application to dual-mode controller design. The fourth section demonstrates the new method on non-linear control examples. Conclusions are provided in the final section.

DUAL-MODE CONTROL STRATEGY

Simulation-Based Design of an Override Controller

The proposed scheme uses either simulation or actual plant data to identify the region of the state space wherein a local linear controller can effectively keep the system inside an acceptable operating regime. This regime, generally defined by some inequalities in the state space, is constructed by considering safety and performance (e.g. normal operating modes in statistical process control). We identify the effective region for a local linear controller by assigning to each state a "cost-to-go" value, defined as

J^\mu(x_k) = \sum_{t=1}^{\infty} \alpha^t \, \phi(x_{k+t})    (1)

where J^μ(x_k) is the cost-to-go for state x_k under the local control policy μ, which maps states x_k into control actions u_k = μ(x_k). α is a discount factor, 0 < α ≤ 1, that formalizes the trade-off between immediate and future costs, and φ(x_{k+t}) is a stage-wise cost that takes the value 0 if the state at time k + t is inside the acceptable operating limit and 1 otherwise.

The cost-to-go of a state is the sum of all costs that one can expect to incur under a given policy starting from that state, and hence expresses the quality of the state in terms of future performance. Thus, if a particular state x_k evolves under the policy μ into a state outside the limit in the near future, its cost-to-go value will reflect it. On the other hand, those states that are not precursors of a future violation of the operating limit will have a zero cost-to-go value. The latter states comprise the "admissible" region.

The cost-to-go function is approximated by first simulating the closed-loop behaviour of the non-linear model under the local linear controller for various possible operating conditions and disturbances. This generates x vs. J^μ(x) data for all the states visited during the simulation. The generated data can then be interpolated to give an estimate Ĵ^μ(x) of J^μ(x) for any given x in the state space. This step is done off-line, i.e., before the real system starts operating.
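For concreteness, the off-line data-generation step might be organized as in the following sketch. This is a minimal illustration of Equation (1), not the authors' code: the plant model, local policy, and limit check (step, local_policy, inside_limits) are hypothetical placeholders, and a finite simulation horizon truncates the infinite sum.

```python
import numpy as np

def generate_cost_to_go_data(step, local_policy, inside_limits,
                             initial_points, horizon=200, alpha=0.98):
    """Simulate the closed loop under the local policy and tag every
    visited state with its discounted cost-to-go, as in Equation (1).

    step(x, u)       -> next state (the non-linear plant model)
    local_policy(x)  -> control action of the local linear controller
    inside_limits(x) -> True if x satisfies the operating limits
    """
    states, costs = [], []
    for x0 in initial_points:
        # Roll out one closed-loop trajectory from x0.
        traj = [np.asarray(x0, dtype=float)]
        for _ in range(horizon):
            traj.append(step(traj[-1], local_policy(traj[-1])))
        # Stage-wise cost: 0 inside the operating limits, 1 outside.
        phi = np.array([0.0 if inside_limits(x) else 1.0 for x in traj])
        # Backward recursion equivalent to the sum in Equation (1):
        # J_k = alpha * (phi_{k+1} + J_{k+1}).
        J = np.zeros(len(traj))
        for k in range(len(traj) - 2, -1, -1):
            J[k] = alpha * (phi[k + 1] + J[k + 1])
        states.extend(traj[:-1])
        costs.extend(J[:-1])
    return np.array(states), np.array(costs)
```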

In the real-time application, whenever the process reaches a state with a significant cost-to-go value, it is considered to be a warning sign that the local controller's action will not be adequate. When this happens, an override control action is calculated and implemented to bring the process back to the "admissible" region where the cost-to-go is insignificant. One can calculate such an action by implementing the following override policy at time k:

u_k = \arg\min_{u_k \in U} \hat{J}^\mu\big(x_{k+1}(x_k, u_k)\big), \qquad \text{if } \hat{J}^\mu(x_k) > \eta    (2)

where η is a user-given threshold value for triggering the override control scheme. If no u_k can be found such that Ĵ^μ(x_{k+1}(x_k, u_k)) < Ĵ^μ(x_{k+1}(x_k, μ(x_k))), then u_k = μ(x_k) is used for the current sample time k. This approach reduces a multi-stage non-linear on-line control problem to a single-stage one, and does not require the convoluted optimization step for calculating the terminal cost and its associated constraint. Readers interested in a comparison of the computational requirements of ADP and NMPC are referred to Lee and Lee (2004).
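The override logic of Equation (2) then reduces to a one-step look-ahead search. The sketch below assumes the same hypothetical step and local_policy helpers as above, and uses a derivative-free search over a candidate input grid, since the nearest-neighbour estimate Ĵ^μ introduced in the next subsection is not smooth; any single-stage NLP solver could be substituted.

```python
def override_action(x, local_policy, step, J_hat, u_grid, eta=0.02):
    """One-step override policy in the spirit of Equation (2).

    J_hat(x) -> approximate cost-to-go of state x
    u_grid   -> candidate inputs satisfying the input constraints
    eta      -> threshold that triggers the override scheme
    """
    u_local = local_policy(x)
    if J_hat(x) < eta:            # state looks admissible:
        return u_local            # keep the local controller
    best_u, best_J = u_local, J_hat(step(x, u_local))
    for u in u_grid:
        J_next = J_hat(step(x, u))
        if J_next < best_J:       # strictly better successor state
            best_u, best_J = u, J_next
    return best_u                 # falls back to u_local if nothing improves
```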

Approximation of Cost-to-Go Function

The approximate cost-to-go function Ĵ is necessary for estimating the cost-to-go of a state point that was never visited in the simulation. We employ a certain class of local function approximators, such as the k-nearest-neighbour averager, because it can quantify the confidence of a cost-to-go estimate, as will be shown below. In addition, we showed in our previous work (Lee et al., 2006) that these approximators have a nice convergence property in the off-line learning step for further improvement of the cost-to-go function, which is described in the next section.

In this work, we employ the following distance-weighted k-nearest-neighbour approximation scheme:

\hat{J}^\mu(x_q) = \frac{\sum_{j \in N_k(x_q)} J^\mu(x_j)/d_j}{\sum_{j \in N_k(x_q)} 1/d_j}    (3)

where d_j is the Euclidean distance from the query point x_q to x_j, and N_k(x_q) is the set of the k nearest neighbours of x_q.
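A sketch of this averager follows; treating an exact match with a stored sample as a special case (to avoid division by zero) is our assumption, not a detail given in the paper.

```python
import numpy as np

def knn_cost_to_go(x_q, X, J, k=10, tol=1e-12):
    """Distance-weighted k-nearest-neighbour estimate (Equation (3)).

    X -> (n, dim) array of stored states; J -> matching cost-to-go values
    """
    d = np.linalg.norm(X - np.asarray(x_q), axis=1)
    idx = np.argsort(d)[:k]          # indices of the k nearest neighbours
    if d[idx[0]] < tol:              # query coincides with a stored sample
        return float(J[idx[0]])
    w = 1.0 / d[idx]                 # inverse-distance weights
    return float(np.dot(w, J[idx]) / np.sum(w))
```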

A caveat in the above procedure is that the cost-to-go estimate can carry significant bias in regions of the state space where the data density is low. Hence, we modify the approximate cost-to-go function Ĵ so that an extra penalty results whenever the optimizer finds a control action driving the system into a region of low data density. We consider the following modification:

\hat{J}_{new}(x_q) = \hat{J}_{old}(x_q) + J_{bias}(x_q)    (4)

The added term J_bias(x_q) is the extra penalty, parameterized as a quadratic function of a local data-density estimate around the query point x_q:

J_{bias}(x_q) = A \left( \frac{1}{f_\Omega(x_q)} - \rho \right)^2 H\!\left( \frac{1}{f_\Omega(x_q)} - \rho \right)    (5)

where f_Ω(x_q) is an estimate of the Parzen density (Parzen, 1962) at x_q, H(·) is the Heaviside step function, A is a scaling parameter, and ρ is a threshold value. The Parzen density is obtained as a sum of Gaussian kernel functions, κ, placed at each sample in a training data set Ω.

f_\Omega(x_q) = \frac{1}{n} \sum_{x_j \in \Omega} \kappa(x_q, x_j)    (6)

where x_q and x_j are points in the state space, σ_B is the user-given bandwidth parameter, and n is the number of data points in the set. The kernel κ is given by

\kappa(x_q, x_j) = \exp\!\left( -\frac{\| x_q - x_j \|^2}{\sigma_B} \right)    (7)

In Equation (5), ρ is the reciprocal of the data density corresponding to ‖x_q − x_j‖² = σ_B, and A is calculated so that a maximum cost-to-go value, say J_max, is assigned to J_bias at ‖x_q − x_j‖² = 3σ_B. The penalty function biases the value estimate for a query point upward in a manner inversely proportional to the data density, which discourages the optimizer from driving the system into unexplored regions. The detailed procedure for designing the penalty term can be found in Lee et al. (2006).
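The density estimate and penalty of Equations (4)-(7) might be implemented as below. This is a sketch under stated assumptions: the Gaussian kernel is left unnormalized (any constant factor is absorbed into the calibration of ρ and A), and the calibration procedure itself follows Lee et al. (2006) and is not reproduced here.

```python
import numpy as np

def parzen_density(x_q, X, sigma_b):
    """Parzen density estimate (Equations (6)-(7)) with unnormalized
    Gaussian kernels of bandwidth sigma_b placed at each sample in X."""
    sq = np.sum((X - np.asarray(x_q)) ** 2, axis=1)
    return float(np.mean(np.exp(-sq / sigma_b)))

def bias_penalty(x_q, X, sigma_b, rho, A):
    """Low-density penalty (Equations (4)-(5)): quadratic in the
    reciprocal density, switched on by the Heaviside factor once the
    reciprocal density exceeds the threshold rho."""
    g = 1.0 / max(parzen_density(x_q, X, sigma_b), 1e-12)  # reciprocal density
    return A * (g - rho) ** 2 if g > rho else 0.0
```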

EVOLUTIONARY IMPROVEMENT OF COST-TO-GO

Because the approximator employed in the calculation of the override control action is based on the cost-to-go values of the local linear controller, it is not the optimal cost-to-go; the override controller resulting from this suboptimal cost-to-go approximation is therefore also suboptimal. The optimal cost-to-go function, which yields an optimal control policy, satisfies Bellman's equation and can be obtained by solving a dynamic program (DP) (Bertsekas, 2000). However, the computational requirements of conventional DP algorithms are known to be excessive for many practical applications because the optimization step of DP must be carried out for each value of x_k in the entire state space.

In our previous work (Lee et al., 2006), we proposed an approximate dynamic programming (ADP)-based strategy that solves DP in a computationally efficient manner. The ADP scheme solves the Bellman equation iteratively only for the state points obtained from closed-loop simulation with the local function approximator. This way, one can circumvent the computational complexity arising from the increase in state dimension (Lee et al., 2006). It also provides an improved control policy, though it may not be the optimal one, while guaranteeing stable off-line learning and on-line implementation. The off-line computational requirement may increase with the number of data points, but this can be addressed by adopting proper data filtering techniques (Hastie et al., 2001). In addition, the ADP scheme coupled with an observer has been shown to be applicable to uncertain systems with parametric uncertainties (Lee and Lee, 2005b), and it could be modified to learn a cost-to-go function even without a process model (Lee and Lee, 2005a).

Thus, further improvement of the override control policy, which steers the system back into the admissible region of the linear controller, is possible by iteratively solving the following Bellman equation until Ĵ converges:

\hat{J}^{i+1}(x_k) = \min_{u_k} \left[ \phi(x_k) + \alpha \, \hat{J}^i\big(f(x_k, u_k)\big) \right]    (8)

where f is the non-linear state transition function and i denotes the iteration index. For this purpose, the stage-wise cost is re-defined as

\phi(x_k) = \begin{cases} 0 & \text{if } x_k \text{ is inside the admissible region} \\ 1 & \text{otherwise} \end{cases}    (9)

With this change, the aim of the optimal control is to bring the system state back into the admissible region as quickly as possible.
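A sketch of the resulting value-iteration loop, restricted to the sampled states and reusing the knn_cost_to_go averager above, follows. The sup-norm stopping test is an assumption standing in for the convergence criterion used in the paper (Equation (17), not reproducible from the source).

```python
import numpy as np

def improve_cost_to_go(X, J, step, u_grid, phi, alpha=0.98,
                       n_iter=50, tol=1e-3, k=10):
    """Approximate value iteration (Equation (8)) over the simulated
    state points only.  phi(x) is the re-defined stage-wise cost of
    Equation (9); the current table (X, J) is interpolated with the
    kNN averager defined earlier."""
    for _ in range(n_iter):
        J_new = np.empty_like(J)
        for i, x in enumerate(X):
            # Bellman backup: stage cost plus the discounted,
            # interpolated cost-to-go of the best successor state.
            J_new[i] = phi(x) + alpha * min(
                knn_cost_to_go(step(x, u), X, J, k) for u in u_grid)
        if np.max(np.abs(J_new - J)) < tol:  # sup-norm convergence test
            return J_new
        J = J_new
    return J
```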

ILLUSTRATIVE EXAMPLES

Simple Non-Linear Example

Problem Description

We consider a system with two states, one output, and one manipulated input described by

\begin{aligned}
x_1(k+1) &= x_1^2(k) - x_2(k) + u(k) \\
x_2(k+1) &= 0.8\,\exp\{x_1(k)\} - x_2(k)\,u(k) \\
y(k) &= x_1(k)
\end{aligned}    (10)

with an equilibrium point at x_eq = (−0.3898, 0.5418) and u_eq = 0.
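The reconstructed dynamics of Equation (10) can be checked against the stated equilibrium in a few lines:

```python
import numpy as np

def step(x, u):
    """One sample of the Equation (10) dynamics."""
    x1, x2 = x
    return np.array([x1 ** 2 - x2 + u,
                     0.8 * np.exp(x1) - x2 * u])

# Sanity check: the stated equilibrium is a fixed point for u_eq = 0.
x_eq = np.array([-0.3898, 0.5418])
print(step(x_eq, 0.0))   # approx. [-0.3899, 0.5418]
```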

We also define the acceptable operating regime by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] (11)

A linear MPC (LMPC) was designed based on a model linearized around the equilibrium point. The control objective is to regulate y to y_eq. The linear MPC is used as the local controller with the following design:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] (12)

with p = 2, m = 1, and the input constraints −3 ≤ u ≤ 3 and |Δu| ≤ 0.2. The overbar denotes deviation variables.

The closed-loop behaviour under the local controller starting at x_0 = x_eq + [0.3, 0.6] = [−0.0898, 1.1418] is shown as dotted lines in Figure 1. Though the initial point is inside the operating limit, the system under the local linear controller violates the limit several times before it is regulated to the equilibrium point.

Simulation-Based Design

To design the proposed override controller, closed-loop simulations under the local controller were performed from 347 initial points inside the operating limit, sampled on an equally spaced grid with an increment of 0.05 in each state. The simulations generated 17 006 data points, and the cost-to-go value of each state in the trajectories was calculated using Equation (1) with α = 1 and

\phi(x) = \begin{cases} 0 & \text{if } x \text{ is inside the operating limit} \\ 1 & \text{otherwise} \end{cases}    (13)

[FIGURE 1 OMITTED]

The next step is to design a cost-to-go approximator. Considering the coverage of the state space, the parameters of the penalty function were calculated by setting the threshold distance σ_B to 3% of the normalized data range and J_max = 30. The resulting penalty-function parameters are ρ = 0.0435 and A = 0.0104.

The true cost-to-go is zero for states inside the admissible region of the linear controller and above unity outside it, which makes the cost-to-go function very stiff. The averaging in the approximator, however, smooths this stiff structure. A small threshold value, η = 0.02, was therefore chosen to delineate a possible shape of the admissible region under the local controller, which is illustrated in Figure 2.

Real-Time Application

To compare the on-line performances of the local controller alone and of the dual-mode controller (i.e., the local controller combined with the proposed override controller), eight initial points different from the training set were sampled. We also compare the proposed dual-mode controller with the successive linearization based MPC (SLMPC) scheme suggested by Lee and Ricker (1994). Finally, we simulated the LMPC and the SLMPC with the state constraints −0.95 ≤ x_1 ≤ 0.2 and −0.35 ≤ x_2 ≤ 0.45 (denoted scLMPC and scSLMPC). The prediction and control horizons of the SLMPC are the same as those of the LMPC.

The solid lines in Figure 1 show the state trajectory from the same initial point under the dual-mode controller. For the first three points, the override control actions were used instead of the LMPC's. The proposed scheme successfully steers the state back to the region with lower cost-to-go values. Table 1 shows the sum of stage-wise costs (the total number of violations of the operating limit); the suggested control design outperforms the alternatives for all the test points. We can also see that imposing state constraints did not work here, as many infeasible solutions were returned, eventually causing divergence.

[FIGURE 2 OMITTED]

Bio-Reactor Example

In this section, we consider a bio-reactor example with two states: biomass and substrate (Bequette, 1998). With substrate inhibition in the growth-rate expression for the biomass, the system exhibits multiple steady states. To operate at the unstable equilibrium, closed-loop control must be used. The system equations are:

\begin{aligned}
\dot{x}_1 &= (\mu - D)\,x_1 \\
\dot{x}_2 &= D\,(x_{2f} - x_2) - \frac{\mu\,x_1}{Y}
\end{aligned}, \qquad \mu = \frac{\mu_{max}\,x_2}{k_m + x_2 + k_1 x_2^2}    (14)

where x_1 is the biomass concentration and x_2 is the substrate concentration. Table 2 lists the model parameters and the unstable steady-state values.
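As a sketch, the substrate-inhibition model of Equation (14), reconstructed here from Table 2 and Bequette (1998), can be coded and checked against the stated steady state; the model form is our reconstruction, though it does reproduce x_s = [0.9951, 1.5123].

```python
import numpy as np

# Parameter values from Table 2 (units: h^-1, g/L, L/g, dimensionless).
MU_MAX, K_M, K_1, Y = 0.53, 0.12, 0.4545, 0.4

def bioreactor_rhs(x, D, x2f):
    """Time derivatives of biomass x1 and substrate x2 (Equation (14))."""
    x1, x2 = x
    mu = MU_MAX * x2 / (K_M + x2 + K_1 * x2 ** 2)    # inhibited growth rate
    return np.array([(mu - D) * x1,                  # biomass balance
                     D * (x2f - x2) - mu * x1 / Y])  # substrate balance

# Both derivatives vanish (to about 1e-5) at the unstable steady state.
print(bioreactor_rhs(np.array([0.9951, 1.5123]), D=0.3, x2f=4.0))
```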

Local Linear Controller

A linear MPC was designed based on a model linearized around the unstable equilibrium point with a sample time of 0.1 h. The control objective is to regulate x to its equilibrium value x_s, and the manipulated variables are the substrate concentration in the feed, x_2f, and the dilution rate, D. The LMPC controller parameters are Q = 100I, R = 10I, p = 10, and m = 5, where I is the 2 × 2 identity matrix, Q is the state weighting matrix, and R is the input weighting matrix.

[FIGURE 3 OMITTED]

We also define an acceptable operating region as

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] (15)

which is shown in Figure 3. The input constraints for the MPC are

0 \le D \le 0.5, \quad |\Delta D| \le 0.2, \quad 0 \le x_{2f} \le 8, \quad |\Delta x_{2f}| \le 2    (16)

The closed-loop behaviour under the LMPC for different initial points is shown in Figure 3. As in the previous example, the LMPC cannot drive the state back to the equilibrium point without violating the operating limit.

Simulation-Based Dual-Mode Controller

With the same definition of the stage-wise cost as in Equation (13), a cost-to-go based override controller was designed. For the simulation, 109 initial points were sampled with increments of Δx_1 = Δx_2 = 0.2 in the region of the state space defined by 0.1 ≤ x_1 ≤ 3 and 0.1 ≤ x_2 ≤ 4. Closed-loop simulations under the LMPC yielded 21 909 points. The parameters of the cost-to-go approximator were chosen as ρ = 0.0435, J_max = 50, and A = 0.0174.

[FIGURE 4 OMITTED]

[FIGURE 5 OMITTED]

As in the previous example, the dual-mode controller successfully steered the state to the equilibrium point without violating the operating limit by searching for the path with the lowest cost-to-go values. One of the sample trajectories tested is shown in Figure 4.

Improved Cost-to-Go Using ADP

The ADP approach of the previous section was applied to the first example to illustrate the performance improvement of the override controller. The iteration of Equation (8) converged after five steps under the following convergence criterion:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.] (17)

Figure 5 shows the state trajectory from the initial point x_0 = x_eq + [0.3, 0.75] when the improved cost-to-go function is used in the override control calculation. As shown in the figure, the improved override controller brings the state back into the admissible region more efficiently than the one based on the cost-to-go approximation under the LMPC.

CONCLUSION

In this paper, a new dual-mode control scheme with an override controller was introduced and demonstrated. It uses simulation or operation data to approximate the cost-to-go function under a local linear control policy. The approximate cost-to-go function is then used to calculate override control actions that bring the state back into the admissible region wherein the linear controller can stabilize the system. The scheme was shown to improve the performance and stability of a given local controller, and the cost-to-go function can be further refined to offer an improved override control policy. The ease of design and implementation makes the scheme a potentially appealing addition to an existing controller in industrial applications. The suggested framework can give operators indications of the future performance of the local controller and also suggest override control actions when needed.

ACKNOWLEDGEMENTS

Jong Min Lee acknowledges financial support from the University of Alberta and NSERC, and Jay H. Lee also acknowledges financial support from NSF (CTS# 0301993).

Manuscript received February 4, 2007; revised manuscript received May 7, 2007; accepted for publication May 8, 2007.

REFERENCES

Bacic, M., M. Cannon, Y. I. Lee and B. Kouvaritakis, "General Interpolation in MPC and its Advantages," IEEE Trans. Automatic Control 48(6), 1092-1096 (2003).

Bequette, B. W., "Process Dynamics: Modeling, Analysis, and Simulation," Prentice Hall, Upper Saddle River, New Jersey (1998).

Bertsekas, D. P., "Dynamic Programming and Optimal Control," Athena Scientific (2000).

Bloemen, H. H. J., T. J. J. van den Boom and H. B. Verbruggen, "Optimizing the End-Point State-Weighting Matrix in Model-Based Predictive Control," Automatica 38, 1061-1068 (2002).

Chen, H. and F. Allgower, "A Quasi-Infinite Horizon Nonlinear Model Predictive Control Scheme with Guaranteed Stability," Automatica 34(10), 1205-1217 (1998).

Hastie, T., R. Tibshirani and J. Friedman, "The Elements of Statistical Learning: Data Mining, Inference, and Prediction," Springer (2001).

Kim, K. B., "Implementation of Stabilizing Receding Horizon Controls for Time-Varying Systems," Automatica 38, 1705-1711 (2002).

Lee, J. H., "Recent Advances in Model Predictive Control and Other Related Areas," Y. C. Kantor, C. E. Garcia and B. Carnahan, Ed., "Chemical Process Control--Assessment and New Directions for Research," volume 93, AIChE Symposium series (1997), pp. 201-216.

Lee, J. H. and N. L. Ricker, "Extended Kalman Filter Based Nonlinear Model Predictive Control," Ind. Eng. Chem. Res. 33(6), 1530-1541 (1994).

Lee, J. M. and J. H. Lee, "Simulation-Based Learning of Cost-to-Go for Control of Nonlinear Processes," Korean J. Chem. Eng. 21, 338-344 (2004).

Lee, J. M. and J. H. Lee, "Approximate Dynamic Programming Based Approaches for Input-Output Data-Driven Control of Nonlinear Processes," Automatica 41, 1281-1288 (2005a).

Lee, J. M. and J. H. Lee, "Approximate Dynamic Programming Strategy for Dual Adaptive Control," in "Proc. of 16th IFAC World Congress," (2005b).

Lee, J. M., N. S. Kaisare and J. H. Lee, "Choice of Approximator and Design of Penalty Function for an Approximate Dynamic Programming Based Control Approach," J. Proc. Control 16, 135-156 (2006).

Mayne, D. Q. and H. Michalska, "Receding Horizon Control of Nonlinear Systems," IEEE Trans. Automatic Control 35(7), 814-824 (1990).

Mayne, D. Q., J. B. Rawlings, C. V. Rao and P. O. M. Scokaert, "Constrained Model Predictive Control: Stability and Optimality," Automatica 36, 789-814 (2000).

Mhaskar, P., N. H. El-Farra and P. D. Christofides, "Predictive Control of Switched Nonlinear Systems with Scheduled Mode Transitions," IEEE Trans. Automatic Control 50, 1670-1680 (2005).

Mhaskar, P., N. H. El-Farra and P. D. Christofides, "Stabilization of Nonlinear Systems with State and Control Constraints using Lyapunov-Based Predictive Control," Syst. Control Lett. 55, 650-659 (2006).

Michalska, H. and D. Q. Mayne, "Robust Receding Horizon Control of Constrained Non-Linear Systems," IEEE Trans. Automatic Control 38(11), 1623-1633 (1993).

Parzen, E., "On Estimation of a Probability Density Function and Mode," Ann. Math. Statist. 33, 1065-1076 (1962).

Qin, S. J. and T. A. Badgwell, "A Survey of Industrial Model Predictive Control Technology," Control Eng. Pract. 11, 733-764 (2003).

Wan, Z. and M. V. Kothare, "Efficient Robust Constrained Model Predictive Control with a Time Varying Terminal Constraint Set," Syst. Control Lett. 48, 375-383 (2003).

Jong Min Lee (1) * and Jay H. Lee (2)

(1.) Department of Chemical and Materials Engineering, University of Alberta, Edmonton, AB, Canada T6G 2G6

(2.) School of Chemical and Biomolecular Engineering, Georgia Institute of Technology, Atlanta, GA, U.S.A. 30332-010

* Author to whom correspondence may be addressed. E-mail address: jongmin.lee@ualberta.ca
Table 1. Comparison of performances (the total number of limit violations)

Test pt LMPC SLMPC scLMPC scSLMPC Dual-Mode

1 diverge 5 diverge diverge 0
2 3 3 diverge diverge 0
3 2 0 0 diverge 0
4 2 0 0 diverge 0
5 0 0 diverge diverge 0
6 0 0 0 diverge 0
7 7 15 1 diverge 0
8 diverge diverge diverge diverge 0

Table 2. Model parameters: bio-reactor example

Parameter    Value

μ_max        0.53 h^-1
k_m          0.12 g/L
k_1          0.4545 L/g
Y (yield)    0.4
D_s          0.3 h^-1
x_2fs        4.0 g/L
x_s          [0.9951, 1.5123]