How to express objective on final state?

janpeter
Joined: 2010-09-11

Hi

I have gone through the example in the manual and wonder how one expresses an optimization problem where the objective is:

1) Maximize the final state

2) Maximize a function of the control variables at each instant

Would appreciate some input

Jan Peter

borlum
Joined: 2016-09-22
Maximizing final state

Hi Jan Peter,

Maximizing the final state is possible by simply maximizing your state at 'finalTime', i.e. letting your objective be:

```
objective = -x(finalTime)
```

Not quite sure I understand (2) :-)

- Joakim

janpeter
Joined: 2010-09-11

Hi,

Thanks for your answer. It works formally, but numerically it is a disaster. Perhaps some options must be tuned. Let me explain.

The state variable that I want to maximize increases with time, or can come to a halt for some time, but never decreases, whatever the control variables are. The goal is actually to maximize the increase of x (given some constraints) at each time instant. Since x never decreases, I thought I could translate the problem into maximizing the final value (at a given time). In my first try I only knew how to formulate an integral objective function, and I thought that if I used, say, 10 - x as my "objectiveIntegrand" it would also work (x never goes over 10). And I got the expected solution. Then I thought I would clean this up by choosing a more proper objective function like x(finalTime), but that gives me a strange solution.

The formulation of the objective function closest to the real problem is alternative 2). Let me try to explain it better. I would like to maximize a function of the control variables (given some constraints g) at each time instant. This function f is a function of the control variables u and v, and f(u,v) controls the growth of x. Thus the problem is something like:

```
optimization Example (objective=..., startTime=0, finalTime=10)
  ...
  input Real u(min=0.0), v(min=0.0);
equation
  f = a*u + b*v;
  g = c*u + d*v;
  der(x) = f*x;
constraint
  g <= g_max;
end Example;
```

- - -

The real code is somewhat longer with more states, but I think the above shows the core of the problem. If it helps I can enclose the code, of course. This problem is by purpose made very small and can be solved by analytically determining what combination of u and v gives maximal f given the constraint g; with this knowledge a Modelica model could easily be formed. However, for more realistic problems the analytical solution for f is hard to calculate, and therefore it would be good to use Optimica to express the model in this freer way.

I would actually be glad to get this right, soon.

Jan Peter

Update: Now I better understand the difference between using objectiveIntegrand and an objective on the final state. With my parameters, the optimal growth stops before the final time. This is the solution I get with the objectiveIntegrand formulation, since reaching the maximal value later is penalized. Both formulations give the same final value of x, but different trajectories.
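That difference can be illustrated with a tiny numerical sketch (hypothetical dynamics and numbers, not the actual model): two control strategies reach the same final value of x, but the integral cost 10 - x tells them apart because it penalizes arriving late.

```python
# Sketch: why objectiveIntegrand = (10 - x) prefers reaching the maximum
# early, while objective = -x(finalTime) cannot distinguish the strategies.
# Hypothetical dynamics der(x) = u, x(0) = 0, on t in [0, 10].

def simulate(u_of_t, dt=0.01, t_end=10.0):
    """Forward-Euler integration; returns final x and the integral of (10 - x)."""
    x, cost, t = 0.0, 0.0, 0.0
    while t < t_end:
        cost += (10.0 - x) * dt   # the objectiveIntegrand accumulates
        x += u_of_t(t) * dt       # der(x) = u
        t += dt
    return x, cost

# Fast strategy: full speed, then stop once x = 10 is reached.
x_fast, cost_fast = simulate(lambda t: 2.0 if t < 5.0 else 0.0)
# Slow strategy: constant speed, reaching x = 10 exactly at finalTime.
x_slow, cost_slow = simulate(lambda t: 1.0)

print(x_fast, x_slow)        # both final states are (about) 10
print(cost_fast, cost_slow)  # but the integral cost differs: ~25 vs ~50
```

Both trajectories are equally good for objective = -x(finalTime), while the integral formulation prefers the fast one.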

To formulate the problem along alternative 2), where we want to maximize f given the constraints g, is thus my main question now!

borlum
Joined: 2016-09-22
Derivative of cost

Okay, trying to deduce a bit: by having f as large as possible at each time t, are you not just looking to maximize the integral of f?

janpeter
Joined: 2010-09-11

I just tried now to use objectiveIntegrand = f_max - f, but that gives a similar solution to objective = -x(finalTime).

That means there is no pressure to reach the maximal x fast, but you get it in the end.

If I try objective = f_max - f, that just does not work: "java error occurred, details were printed".

I do not understand that either.

fredrik
Joined: 2011-08-26

What is the problem with objectiveIntegrand = f_max - f? It sounds like exactly what you want. Also, you can remove f_max, it won't make a difference.

For objective, you should probably do objective = f_max - f(finalTime).

janpeter
Joined: 2010-09-11

The way the problem is formulated originally is that f should be maximized (at all time instants, or however I should put it).

But then I get the runtime error "java error occurred, details were printed"... perhaps we should look more here?

To circumvent that I tried "objective = -x", but it gives the same error.

If I take "objectiveIntegrand = -x" then it works.

---

The problem for me is perhaps on a conceptual level.

When you describe dynamical systems it is sometimes convenient to express the model in terms of some "energy" or similar to be minimized given some constraints, and not to take on the burden of writing down the ODE to be solved. In my case it is about maximizing "f" given some constraints.

I thought JModelica could handle this kind of description.

--

Thus, I would like to use optimization for a sub-system to simplify the description of its dynamics.

When this subsystem is part of a larger system, I may want to formulate an optimization problem for the total problem.

Something like that.

Jan Peter

PS. Does it help if I post some code? The example is not much bigger than described above.

fredrik
Joined: 2011-08-26

Let's focus on the actual case you want to solve. My understanding is that this is objectiveIntegrand = -f. What is the complete error that you get using this?

Code for reproduction might help, but for starters I would just like to see the full error message.

janpeter
Joined: 2010-09-11

Ok. Using objectiveIntegrand = -f gives a reasonable solution, but to be interesting there also needs to be a penalty on the time it takes to reach the maximal point. Therefore objectiveIntegrand = -x works.

But the natural formulation of the problem is to use objective = -f; this, however, gives a runtime error with the key text: The class attribute 'objective' must have parameter or time variability.

In my eyes f is a function f(u,v) where u and v are time-varying and shall be optimized, i.e. f varies with time. The problem perhaps is that there are no dynamics between u and v and f, as there are from u and v to x, which does work.

Enclosed a screen-dump of the error text.

AttachmentSize
Screen Shot 2017-03-21 at 09.44.12.png 107.2 KB
fredrik
Joined: 2011-08-26

As I said before, to use objective, you should do objective=-f(finalTime). But I don't think this is what you want. It sounds like you just want objectiveIntegrand=-f.

But if you really want to also penalize the time it takes to reach the maximum, and if the maximal point f_max is known, you do

```
objectiveIntegrand = -w*f, objective = finalTime, finalTime(free=True)

constraint
  f(finalTime) = f_max;
```

where w is a weight parameter of your choosing. If f_max is not known, then I'm not sure what the best way to solve it is... It sounds like a bilevel optimization problem, which can't be naturally formulated with Optimica, but you could still solve it with some scripting (and a lot of computational time).

If you only want to reach f_max as fast as possible, remove objectiveIntegrand from the above.

janpeter
Joined: 2010-09-11

Hi

In my previous post I said that the natural formulation of the problem is with objective = -f. (i.e. not objectiveIntegrand = -f)

I got an error message that I do not understand. Do you understand it?

The documentation of Optimica in the JModelica manual is sparse.

Is there some other document available?

What I try to do with this example is to formulate a simple problem in two ways: one model formulation where I use an explicit solution of the optimisation of "f", so to say, since that is easy in this case; and then a model formulated differently, where I ask Optimica to do the optimisation of f at each time instant instead.

Simulation results of the two models should be the same.

The wider perspective is that there is a wide class of problems where f is not so easy, or even possible, to solve analytically, and there you badly need optimisation at each time instant.

The point with Optimica is to formulate the problem very close to how the user thinks of the problem, as I understand it. Therefore I am not satisfied with using "tricks" like replacing objective = -f with objectiveIntegrand = -x, although I understand that it will give the right solution too.

There is a Python package called PyFBA that does this kind of job for large problems:

http://journal.frontiersin.org/article/10.3389/fmicb.2016.00907/full

I have not used PyFBA and know very little about it but I think I understand the most basic underlying idea.

My idea was to try use the same idea for smaller problems in JModelica with Optimica.

fredrik
Joined: 2011-08-26

The error message is telling you that the objective can't have continuous variability, which is Modelica speak for that it can't be a function (of time), because that wouldn't make sense. I guess that you want to maximize some norm/functional of f, but it's not clear what norm you want. Do you have some equations for the natural formulation of your problem?

The longest description of Optimica can be found in https://lup.lub.lu.se/search/publication/599261

janpeter
Joined: 2010-09-11

Enclosed you find the original model and the model formulated as an optimization problem, together with a script for each that runs it and produces plots. I also attach an explanatory text. Look forward to your response / Jan Peter

AttachmentSize
FBA example explaining text.pdf 58.28 KB
fba1.mo 2.6 KB
fba1_test1.py.txt 1.84 KB
fba1.mop 2.62 KB
fba1_test2.py.txt 1.91 KB
fredrik
Joined: 2011-08-26

My conclusion is the same as before: You want objectiveIntegrand = -mu. This seems to correspond to what you do in fba1.mo.

janpeter
Joined: 2010-09-11

Hi,

Now I finally better understand, I think.

Optimisation of mu at each time instant must be translated to optimisation of mu over the whole period, and therefore we need to look at the integral, i.e. use objectiveIntegrand = -mu. Under the condition that the objective just accumulates over time, this implies that mu is optimised at each time instant. Something like that.

The more interesting problem is when the optimisation of mu at each time instant is more difficult. I see in the literature that people use linear programming at each time instant. Would Optimica manage such a problem with the formulation we have now, just using objectiveIntegrand = -mu?

Alternatively, is it more practical/effective to formulate the mu-optimisation as a time-discrete problem, and for that time-discrete system have an algorithm that computes a linear-programming solution at each time instant? The reactor with growing cells could still be continuous-time, but mu changes at discrete times. Is there perhaps some linear-programming module already available in Modelica that one could also use in JModelica?
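As a sketch of that idea (with made-up coefficients a, b, c, d and g_max standing in for the real model parameters), the pointwise problem "maximize f = a*u + b*v subject to g = c*u + d*v <= g_max, u, v >= 0" is a small linear program that scipy.optimize.linprog (available from SciPy 0.15 on) can solve directly:

```python
from scipy.optimize import linprog  # requires SciPy >= 0.15

# Hypothetical coefficients, standing in for the real model parameters.
a, b = 2.0, 1.0        # f = a*u + b*v  (to be maximized)
c, d = 1.0, 1.0        # g = c*u + d*v  (constrained)
g_max = 1.0

# linprog minimizes, so we minimize -f; one inequality row encodes g <= g_max.
res = linprog([-a, -b],
              A_ub=[[c, d]], b_ub=[g_max],
              bounds=[(0, None), (0, None)])

u_opt, v_opt = res.x
f_opt = -res.fun
print(u_opt, v_opt, f_opt)  # with these numbers: u = 1, v = 0, f = 2
```

Since f and g are linear, the optimum always sits at a vertex of the feasible region, which is why an LP solver at each sample is so cheap.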

Jan Peter

janpeter
Joined: 2010-09-11

Hi again,

I just want to add one improvement of the optimisation formulation of the original problem. In order to also capture the transient dynamics in the substrate level that we had in the original problem, it is natural to introduce an upper (time-variable) constraint on qGr and qEr that is related to the substrate concentration. Optimica handles this well.

The metabolic flows qGr and qEr can be put in a linear programming framework, see the enclosed Python script. In this simple case it is overkill to put the dynamics in this framework, but for a more comprehensive description of growth and metabolism it is very natural. See for instance the Python community cobrapy on constraint-based reconstruction and analysis at https://opencobra.github.io/cobrapy/

Fig 1 - diagram of the original model

Fig 2 - diagram the optimisation formulation of the dynamics

Fba1.mo - modelica model

fba1b.mop - optimica model

Fba1b_fig2.py - python script

yeast_linprog.py - python script that use linear-programming to determine qGr and qEr

It would be interesting to formulate a more challenging model with this linear programming framework. I suspect that an optimisation routine tailored to the linear programming structure would be much better than the much more general routines used in JModelica. But it is still fun to be able to work with simpler models directly in JModelica this way.

Jan Peter

AttachmentSize
fba1_test1_fig1.png 45.17 KB
fba1_test1_fig2.png 70.8 KB
fba1.mo 2.41 KB
fba1b.mop 2.6 KB
fba1b_test2.py.txt 1.77 KB
yeast_linprog.py.txt 954 bytes
fredrik
Joined: 2011-08-26

Yes, if you can find the optimal input by only considering the current state, without considering the dynamics, it would be much more computationally efficient and reliable to solve a mathematical program (linear or otherwise) in each discrete time instant rather than solving it as a dynamic optimization problem.

I think a good way of doing this in JModelica would be to instead only use JModelica for simulation using FMI:

1. Compute the optimal input for the current state at the current discrete time using whatever Python package you want. You could use native CasADi, but that is probably more complex than what you need.
2. Feed the FMU with the optimal input and simulate until the next discrete time.
3. Goto 1.
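A minimal sketch of that loop, with a forward-Euler step standing in for the FMU simulation and the same hypothetical coefficients as before (a, b, c, d, g_max are made up for illustration):

```python
from scipy.optimize import linprog  # requires SciPy >= 0.15

# Hypothetical model: der(x) = f*x with f = a*u + b*v, g = c*u + d*v <= g_max.
a, b, c, d, g_max = 2.0, 1.0, 1.0, 1.0, 1.0

x, dt, n_steps = 1.0, 0.1, 20
for _ in range(n_steps):
    # 1. Optimal input for the current state (here the LP happens to be
    #    state-independent; in general the bounds would depend on x).
    res = linprog([-a, -b], A_ub=[[c, d]], b_ub=[g_max],
                  bounds=[(0, None), (0, None)])
    u, v = res.x
    f = a * u + b * v
    # 2. Simulate until the next discrete time (Euler step in place of the FMU).
    x += f * x * dt
    # 3. Goto 1.

print(x)  # final state after 20 steps under per-step optimal inputs
```

With a real FMU, step 2 would instead be a short `simulate` call with the computed inputs, and the state is kept inside the FMU between calls.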

janpeter
Joined: 2010-09-11

Good. Can we simply take the small python script enclosed in the previous post and use that?

In x0_bounds and x1_bounds one needs upper bounds alpha*G and beta*E, and then G and E are inputs.

Output would be qGr and qEr (and perhaps mu) from the linprog result.

I have no idea how to "package" this Python script into an input-output time-discrete box that a Modelica model could interact with in JModelica.

Also, I found that in the JModelica Python environment I have no problem doing from scipy import optimize, but I cannot do from scipy.optimize import linprog.

If this packaging is reasonably simple to do, I would like to go on with the simple example we have.

*) I see that the JModelica environment has scipy version 0.11.0, while in my other environment (on the Mac) I have scipy version 0.18.1. Is there any possible conflict in upgrading JModelica's scipy to a version that contains linprog?

**) A straightforward structure of the simulation code could be as follows:

1) Compile the fmu

2) Load fmu and set initial values

3) Simulate one time step and get G and E and save results in some incremental way

4) Do linprog with G and E and get qGr and qEr

5) Save the current state and let that be the next initial values for the fmu

6) Go back to step 2) until the given number of simulation steps is done

7) Plot the results

Somewhat "clumsy", but I think it would still work.

fredrik
Joined: 2011-08-26

*) I can't really help you with scipy.optimize. I have barely used it.

**) Looks good, except that you won't need to explicitly get/set the state in each step. It's saved internally.

janpeter
Joined: 2010-09-11

Hi,

I found scipy difficult to upgrade from 0.11 to at least 0.15.0, which contains linear programming routines; it complained about "wheel...". But after looking around I found a package, Optlang, that does the job. It was straightforward to implement the Python script as sketched above. Enclosed is a figure that shows the (same) result with linear programming done at each sample interval; the interval was 0.033 hour. / Jan Peter

AttachmentSize
fba10_test2_ts_033.png 58.92 KB
fredrik
Joined: 2011-08-26

Great!