Category Archives: Conference

Models @ Runtime

While talking with Daniel Görlich at MDUCDE 2007, I learned about the new workshop he is organizing in the context of MODELS, “Model Driven Development of Advanced User Interfaces”. Digging into the conference, I found another workshop that also sounds interesting: Models @ Runtime. For me it’s funny, because every time I present Himalia, the question of code generation -vs- runtime interpretation is on the table.

 

What I actually think is that it shouldn’t be a public discussion. The user needs good response times, fair processor consumption, and so on, but how you provide that shouldn’t be his problem. For example, I don’t care whether SQL Server or Oracle generate specific code for each database definition or interpret the DB model on each query. And that’s good: it should behave as a black box for me.

 

I think there are basically four things to take into account:

1. Is one really faster than the other? People usually assume that the interpretation strategy is slower, but I think it depends heavily on the quality of the generated code versus the quality of the runtime, and on the performance advantages you can gain from having the model live @ runtime (for example, DBMSs use optimization techniques such as learning from queries at runtime to obtain better response times).

2. In terms of resource consumption, I think interpretation is better: you don’t need to replicate everything for each model, so the underlying layers can do their optimization work more effectively.

3. Extensibility/flexibility is the only field, as far as I can see, where generating code could be the better approach. In the DSL Tools book there is a very interesting discussion about what the authors call the “Customization Pit” and what you need to take into account when you support code-generation scenarios. Basically, if your high-abstraction language doesn’t provide the right hooks, you should provide a way to inject customizations somewhere in the process, and so code generation and partial classes can be the answer when you don’t know in advance which customization scenarios you will need to support (see the sketch after this list).

4. Before partial classes, maintainability was a big problem for code-generation tools. Nowadays I think there is no big difference between the two approaches.
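To make the partial-class hook concrete, here is a minimal C# sketch. Everything in it (the OrderForm name, its members, the OnSubmitting hook) is hypothetical, invented only to illustrate the pattern:

```csharp
using System;

// OrderForm.Generated.cs -- hypothetical file emitted by the code generator.
// It is overwritten on every regeneration, so it must never be edited by hand.
public partial class OrderForm
{
    public decimal Total { get; set; }

    public void Submit()
    {
        OnSubmitting(); // customization hook the generator deliberately leaves open
        Console.WriteLine($"Submitting order for {Total}");
    }

    // Declared partial: if the hand-written half never implements it,
    // the compiler simply removes the call above.
    partial void OnSubmitting();
}

// OrderForm.cs -- hand-written file, safe from regeneration.
public partial class OrderForm
{
    partial void OnSubmitting()
    {
        if (Total <= 0)
            throw new InvalidOperationException("Order total must be positive.");
    }
}
```

The design choice is exactly the one the DSL Tools book describes: the generated half never has to know whether a customization exists, so regenerating from the model can never clobber hand-written code.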

Finally, sometimes you simply need the model live @ runtime, so if you decide to generate code you are at least duplicating the effort. For example, you may need the model at runtime to modify it, to learn from it, to adapt it, and so on.
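For contrast, here is an equally minimal sketch of the interpretation strategy. FieldModel and FormInterpreter are hypothetical names; the point is only that the model stays live as plain data and can be changed while the program runs:

```csharp
using System;
using System.Collections.Generic;

// A toy form model that stays alive at runtime instead of being compiled away.
public record FieldModel(string Label, bool Required);

public class FormInterpreter
{
    // Because the model is plain data, the application (or the end user)
    // can inspect and change it while the program runs.
    public List<FieldModel> Fields { get; } = new();

    public void Render()
    {
        foreach (var field in Fields)
            Console.WriteLine($"{field.Label}{(field.Required ? " *" : "")}: [______]");
    }
}

public static class Demo
{
    public static void Main()
    {
        var form = new FormInterpreter();
        form.Fields.Add(new FieldModel("Name", Required: true));
        form.Render();

        // The "end user" extends the form at runtime; no regeneration step.
        form.Fields.Add(new FieldModel("Nickname", Required: false));
        form.Render();
    }
}
```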

 

With Himalia, I followed the runtime approach. Why? Because I think that in the long run, having the model live @ runtime will be far better for letting the end user modify his user interface, learning from the model, and so on. Obviously, as I don’t want to be hoist by my own petard, I have to dig deep into the customization scenarios and support the required hooks.

 

Luckily, this is the first time in a long while that I have found so many people converging on this point: more and more frameworks are being interpreted, and now there is even a workshop on the topic :)

Abstraction -vs- Testing

During MDUCDE 2007 in Seoul, just before my presentation, Asaf Degani gave a great talk about his work on UI correctness. Basically, he showed how inconsistencies between the user model (what the user thinks the application does) and the user interface model can produce very frightening problems. The example he presented was an inconsistency in an aircraft autopilot… (I have the full paper in my notebook, but I couldn’t find it online).

“The conceptual approach for generating correct and succinct user-models is based on the fact that not all the system’s internal states need to be individually presented to the user”.

That is, the user interface is a projection/view of the application. So the user model is different from the user interface model, which in turn is different from the application/machine. Asaf built the user model from the user manual, and he found that, in the name of reducing information overload, what the manual teaches is sometimes not what the application/machine actually does.

 

[Figure: user model vs. UI model]

 

Suppose the plane is flying at 5,000 ft. If the pilot enters 10,000 ft in the autopilot interface, the plane will climb to that altitude.

But if the pilot sets a new, lower altitude after the airplane has passed a special point, the airplane enters an undetermined state and continues climbing indefinitely. The problem arises because the pilot thinks (because the user manual tells him so) that this special point is calculated in a way that is not the real one. From the pilot’s point of view, the machine behaves non-deterministically: sometimes it works as expected, sometimes it doesn’t, and he never knows why. Why did they write the manual this way? Well… because with the information available in the user interface, the pilot couldn’t calculate the real special point, so they decided to leave out “the details” [1].
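A toy way to see the mismatch is to write both descriptions down as state machines and compare their predictions. Everything below (the states, the hidden afterCapturePoint condition, the method names) is my own hypothetical simplification, not Asaf’s actual model:

```csharp
using System;

// Hypothetical states; the real autopilot model is of course much richer.
public enum AutopilotState { Cruising, Climbing, Undetermined }

public static class AutopilotMachine
{
    // What the machine actually does when a lower altitude is entered while
    // climbing: past the (hidden) capture point it keeps climbing, undetermined.
    public static AutopilotState EnterLowerAltitude(bool afterCapturePoint) =>
        afterCapturePoint ? AutopilotState.Undetermined : AutopilotState.Cruising;
}

public static class PilotModel
{
    // What the manual says: a new target is always captured, so for the pilot
    // the Undetermined state simply does not exist.
    public static AutopilotState EnterLowerAltitude(bool afterCapturePoint) =>
        AutopilotState.Cruising;
}

public static class Demo
{
    public static void Main()
    {
        var machine = AutopilotMachine.EnterLowerAltitude(afterCapturePoint: true);
        var pilot = PilotModel.EnterLowerAltitude(afterCapturePoint: true);

        // Same input, different predictions: this divergence is exactly the
        // kind of user-model/machine inconsistency Asaf checks for.
        Console.WriteLine($"machine: {machine}, pilot expects: {pilot}");
    }
}
```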

 

[Figure: aircraft autopilot interface]

 

Obviously, Asaf concluded that the UI is incorrect because it doesn’t give the user all the information he needs to do the task. Maybe the manual is incorrect too, but what actually matters is that they are inconsistent.

 

From my point of view, this is a particular case of a wider topic that came up again and again during the conference, in every form you can imagine (from cars to aircraft autopilots to smart factories). When you use a different language for each world, you have to do a lot of extra testing work to keep them consistent, because what you are really doing is adding more translations. The bigger the difference between the languages, the higher the probability of translation errors between them. In some fields it may still be too difficult to unify both worlds in a single language, but in others we are just not taking the right approach to the problem.

 

The same thing happens between other models: software design, implementation, and so on. If we were able to merge them, we could avoid testing the translations.

 

[Figure: reducing translations]

 

Some time ago I had a similar conversation with Andrés about this topic (he had written a post about it). We agreed that when you express things at the right abstraction level, testing becomes useless, because it turns into a tautology. Would you test that the += operator works properly in C# or Java? I wouldn’t.

 

To put this in the context of Asaf’s work: if the aircraft autopilot and the manual were expressed in the same language, his work would be unnecessary. You wouldn’t need to prove the UI correct, because it would always be consistent with the user model [2].

 

Obviously, this approach obliges you to test your high-abstraction language… it is a trade-off, but in most cases that is a far simpler and better-bounded task than testing all the user interfaces, and sometimes it can even be “outsourced for free”, as we do with the += operator.

 

I think Eugenio Pace said exactly this in a different way a week ago, talking about SaaS at the Genexus Meeting. He proposed something like a law of conservation of mass for complexity in software: you can’t avoid complexity, but you can redistribute responsibilities, and that is what SaaS is about.

 

I think that is also what DSLs are about: shifting complexity around until, some day, a lot of testing and translation magically disappears and you can focus on what is really important for your business.

 

 

[1] Yes, this is how an aircraft autopilot works today, but don’t panic: it is a very unusual situation, and I believe the pilots can take back control of the airplane in any case. I can’t remember the name of the special point or its exact calculation, but that’s not the important thing here.

 

[2] Asaf is currently working on automatically generating the UI from a reduced user model, avoiding the testing altogether by producing user interfaces that are correct by design.

I’m going to MDUCDE 2007 in Seoul

I am traveling to Seoul (South Korea) to present “HIMALIA: Model-Driven User Interfaces Using Hypermedia, Controls And Patterns” at the First Workshop on Model-Driven User-Centered Engineering 2007. The presentation is scheduled for September 5 at 14:00, right after lunch. I am expecting to meet interesting people there and to have some productive exchanges with researchers in similar fields.

 

[Figure: flight route map]

 

I made this map to show the straight and VERY LONG flight. It is difficult to measure the time in the air because of the different time zones, but in any case, it isn’t less than 30 hours!!! I hope not to suffer too much jet lag, but I think it will be impossible to distinguish between jet lag and 30 hours of flying ;)

 

So, don’t expect much activity on this blog from September 1st to 9th. I will share the photos, and surely some comments, when I come back home.

 

BTW, if anyone knows the people at DICyT, could you hurry them up? I sent them a one-page letter three weeks ago, but they haven’t had time to look at it yet… come on! It’s one page! They would have to be receiving something like 1,000 letters a day to justify that, and I don’t think that’s the case.

Himalia will probably be presented in Seoul in September

I submitted an abstract to the First International Workshop on Model-Driven User-Centric Design & Engineering, which will take place in Seoul, Korea, on September 4-5 this year.

 

My submission was accepted. Now I have to make it happen. That means completing the final paper and figuring out how to get to the other side of the world ;)

 

If you will be in the area, please let me know; it would be great to meet there.
