Community:IonelVirgilPop about Reactor pattern


IonelVirgilPop about Reactor pattern (Revision ID: 11175)

Overall suggestion (score): -1 - reject

Review Summary: The main reason I propose to reject this pattern is that it appears to be, just like the so-called "Standard Enforcer Pattern", an attempt to generalize my "OOPMetrics" content pattern without citing the source.

Reviewer Confidence: High (after all, it looks like a generalization of my own pattern).

Problems: The main reason I propose to reject this pattern is that it appears to be, just like the so-called "Standard Enforcer Pattern", an attempt to generalize my "OOPMetrics" content pattern without citing the source. Even if this is a collaborative website, I believe the source should still be cited if the pattern was based on it.

There would have been ways of citing the source, such as putting a link to the OOPMetrics ontology pattern in the "Web References" and "Related CPs" sections.

The Reactor Pattern was submitted after the OOPMetrics (this can be checked on this website). Here are just a few reasons (there are others) why I believe it is a variant (generalization) of OOPMetrics:

- it uses metrics in the same manner as OOPMetrics; of course, here we have hasMeasure instead of hasOOPMetric.

- in the scenario, instead of the metrics used for OOP, this pattern has metrics regarding carbon, total energy, etc. They are used to detect things like waste output in the same manner I used OOPMetrics to detect design flaws.

- I believe that the so-called OntoMDL ontology doesn't even exist. It's just an excuse to create a scenario much like the God Class scenario I presented in the article and described on this website. If the author had understood her own ontology pattern, she would have been able to create this OntoMDL ontology and put a link under examples. That would have been something more than what OOPMetrics already has. But, of course, since OOPMetrics doesn't have an ontology example, neither does this pattern.

- The sloppy text: words without spaces between them, and the property "hasEnvironemntalCondition" that should be "hasEnvironmentalCondition" (as another reviewer observed), show that this was done in a rush.

- a lack of labels, as in the first version of OOPMetrics.

Moreover, both the "Reactor Pattern" and the "Standard Enforcer Pattern" were submitted AFTER THE DEADLINE, while "OOPMetrics" was submitted just before the deadline.

It can be clearly seen, if someone clicks on "history" at the top of the pattern page, that the first submission was on August 13, while the extended deadline was August 10. I understand that the time format on this website may not be Hawaii time, as in the call, but there are still three days between the two dates. Of course, the author could not have submitted the pattern earlier if it was based on my pattern, because I submitted my own pattern just a few hours before the deadline. Now I am glad that I did.

And I believe the article that should have been submitted on EasyChair, if one was submitted by this author as stated in the call, could not possibly have been submitted before the deadline either, unless one of the organizers/evaluators facilitated a further extension of the deadline for this author. I hope this author didn't have access to the article I sent via EasyChair as well, since normally the article is not public and should not be published until/unless accepted. Normally, only organizers/evaluators should have access to it.

Community Relevance: None (I don't see why we need two generalizations of OOPMetrics, such as both the "Reactor Pattern" and the "Standard Enforcer Pattern", other than to look different and to have better chances of at least one of them being accepted).

Relation to Best Practices: None (this pattern does not look like it was based on best practices for creating ontologies; it looks like it was made in 10 minutes based on OOPMetrics, without citing the source).

Reusability: It has worse reusability than OOPMetrics, even though it was meant to be more general. It attempts to appear more complicated by adding equivalences and comments instead of "real content".

Relations to Other Patterns: I believe it has a relation with the "Standard Enforcer Pattern" and with my "OOPMetrics", in the sense that Reactor Pattern + Standard Enforcer Pattern = a generalization of OOPMetrics.

Overall Understandability: A generalization of OOPMetrics would be useful, but I don't understand why this particular pattern would be useful, nor why we need two generalizations: Standard Enforcer and Reactor Pattern.

Clear Problem Description: Bad

Clear Relevance and Consequences: Bad

Clear Figures and Illustrations: I don't see why rdfs:subClassOf appears in the diagram so many times instead of the "inheritance" relation. There is too much text in the diagram.

Missing Information: Citations are missing from this pattern description. This website may be collaborative, but I still believe citations are required if a pattern was based on another one; otherwise one could do even stranger things, like simply copying an existing pattern and changing the name of the author. Perhaps the citation is also missing from the article that should have accompanied this pattern. Also, the domain is not stated; perhaps it's "general".

I gave a similar review to the "Standard Enforcer Pattern". I'm sorry if I repeated myself, but some "mistakes" seem to be common.

I am proud that one reviewer had such an appreciation of this pattern; after all, it is a generalization of my own pattern. But I still believe it is a bad generalization.

I would recommend that other reviewers take note of my review and compare these three patterns, or they may fall into the trap of following the principle: "Let's reject the original, so we can accept the 'copy'".

Posted: 2012-08-24 Last modified: 2012-09-11

04-09-2012 KarlHammar says:

The WOP 2012 pattern track chairs have reviewed the claims made in the above review, and find no evidence supporting these claims of plagiarism, neither in this ODP portal submission, nor in the pattern abstracts submitted via EasyChair. As a matter of fact, the pattern abstracts for both of the patterns claimed to be infringing (Reactive Processes and Conformance to Standards) were submitted, with similar level of detail as that presented here, through EasyChair several days before the OOPMetrics pattern was submitted, making such plagiarism impossible.

11-09-2012 IonelVirgilPop says:

I would like to make some comments on the comments of Mr. Hammar. I did not claim Ms. Solanki had plagiarized my work. As you can see in the above review, I never used the term "plagiarism". It is impossible for me to claim that as long as I haven't seen her two articles. I can only accuse Ms. Solanki of plagiarism once her articles are published, and only if I see that they are similar to my article without having cited the source (at least the URL of my publicly available ontology pattern). What I said was that her pattern(s) "appear to be" "a generalization" of my pattern, and that was based strictly on her ontology and on what I have seen on this website. But this website can always be updated, and references can be added if she considers her work derivative. It's not the website that's the problem; websites are constantly updated. Still, it would have been nice to add some references after I wrote those reviews. I guess she didn't consider it a derivative work. Regarding her articles, I only made some assumptions about what they may contain, based on what I have seen on this website. But again, I can't say more until I see the articles published.

Mr. Hammar said that the articles were submitted several days before mine. Of course, I can't verify that. Perhaps Mr. Hammar himself can't be 100% sure that the dates were not "tampered" with by any of the evaluators. I don't claim this was done; I just say that this is a strange guarantee from Mr. Hammar. But even if the articles were submitted several days before, I would like to inform Mr. Hammar that I submitted a similar article to the 18th International Conference on Knowledge Engineering and Knowledge Management (EKAW 2012), for which the extended deadline for full paper submission was at the beginning of May 2012, but it was not accepted (this can be checked, because many of the evaluators at this workshop were also on the program committee of EKAW 2012). I also gave a presentation at my university this spring based on similar material, but I didn't publish it.

Perhaps Mr. Hammar is right. Perhaps this is just a strange coincidence. Can you explain, then, why the same mistakes I made in my first version of the ontology (both here and in the EKAW 2012 submission), such as the fact that, even though I talked about modeling metrics, I actually modeled types of metrics, also appear in Ms. Solanki's Reactor pattern, where she talks about modeling processes but actually modeled types of processes, as was keenly observed by one of her reviewers in "VojtechSvatek about Reactor pattern"? Aren't there too many coincidences?

Perhaps it would have been better if this workshop had had more evaluators with the experience of Mr. Svatek, rather than so many PhD students and postdocs who, when they see two patterns with similar contents, can be tricked into picking the one that was packaged more nicely. Don't get me wrong: I appreciate workshops that put PhD students and postdocs on program committees, because it allows them to learn in practice how to write a review (after all, I'm a PhD student myself), but I believe they should either have their evaluations checked by more experienced people before making them official, or be mixed with more experienced people for every evaluated pattern. Otherwise you get the kind of review I received in "RinkeHoekstra about OOPMetrics", which is worth putting in a manual on how not to write a review. It starts from a wrong premise, namely that domain-specific ontology content patterns are not ontology content patterns, and bases the whole review around it. It also indirectly praises Ms. Solanki's submissions because they are more general. I should remind you that most content patterns on this website are domain-specific; after all, this is why you can specify the domain of your ontology content pattern on this website. And what happened to the third reviewer who, according to this website, was assigned to review my pattern? I was very motivated to send a pattern to this workshop when I saw its website. Now, after my experience with it, I'm starting to be very disappointed.
