Community:IonelVirgilPop about Standard Enforcer Pattern

IonelVirgilPop about Standard Enforcer Pattern (Revision ID: 11149)

Overall suggestion (score): -1 - reject

Review Summary: The main reason I propose to reject this pattern is that it appears to be, just like the so-called "Reactor Pattern", an attempt to generalize my "OOPMetrics" content pattern without citing the source.

Reviewer Confidence: High (after all, it looks like a generalization of my own pattern).

Problems: This pattern appears to be, just like the so-called "Reactor Pattern", an attempt to generalize my "OOPMetrics" content pattern without citing the source. Even on a collaborative website, I believe the source should still be cited if the pattern was based on it. There were ways to cite the source: a link to the OOPMetrics ontology could have been placed in the "Web References" section and in "Related CPs". The Standard Enforcer Pattern was submitted after OOPMetrics (this can be checked on this website), and it looks like a generalization of it; even some phrases look similar to me. Here are just a few of the reasons (there are others) why I believe it is a variant (generalization) of OOPMetrics:

-in my OOPMetrics ontology pattern, one of the competency questions is: "What are the software metrics for a particular project/package/class/method?"

-in the "Standard Enforcer Pattern", the description of ProcessEnforcingStandard is: "A process/operation/activity or serrvice [sic] that enforces one or more standard." (this looks like a generalization)

-the intent is also similar, although explained differently and for a general domain.

-the ontology is not very complex, but it was made to appear that way by adding many equivalences and textual descriptions/comments with little "real content". The "real content" and the idea seem mostly "borrowed" from OOPMetrics. It looks as if it was done in ten minutes based on OOPMetrics, but for a general domain and without citing the source.

-the scenario mentions a "set of descriptive metrics", but there is no class about metrics in this pattern, and it is unclear how the pattern can actually be of use in that scenario. It looks somewhat like my God Class scenario, except that it is for "algal biomass production". Here the so-called "guidelines" resemble the rules needed to decide whether a class is a God Class based on its metrics. Again, it looks as if it was created in ten minutes, based on OOPMetrics, without citing the source.

-labels are missing, just like in the first version of OOPMetrics.

Community Relevance: Bad (a generalization of OOPMetrics could be useful, but I don't believe this particular pattern is a useful generalization, and it does not cite the source).

Relation to Best Practices: I don't believe it was based on the best practices of ontology writing accumulated over the years; it looks as if it was done in ten minutes based on OOPMetrics.

Reusability: Bad (it was perhaps intended as a generalization, but it has some additional classes that are not necessary, and I don't think it is very reusable)

Relations to Other Patterns: I believe it has a relation with the "Reactor Pattern" and with my "OOPMetrics", in the sense that Reactor Pattern + Standard Enforcer Pattern = a generalization of OOPMetrics.

Unfortunately, neither one is mentioned as a Related CP.

Overall Understandability: A generalization of OOPMetrics would be useful, but I don't understand why this particular pattern would be useful, and I can't understand why we need two generalizations: Standard Enforcer and Reactor Pattern.

Clear Problem Description: Bad

Clear Relevance and Consequences: Bad

Clear Figures and Illustrations: I don't see why rdfs:subClassOf appears in the diagram so many times instead of the usual "inheritance" relation. There is too much text in the diagram.

Missing Information: citations are missing from this pattern description. The website may be collaborative, but I still believe citations are required if a pattern was based on another one; otherwise one could do even stranger things, such as simply copying an existing pattern and changing the author's name. Perhaps the citation is also missing from the article that should have accompanied this pattern. Also, the domain is not stated; perhaps it is "general".

Overall, it is a very strange pattern.

Posted: 2012-08-24 Last modified: 2012-09-11

04-09-2012 KarlHammar says:

The WOP 2012 pattern track chairs have reviewed the claims made in the above review and find no evidence supporting these claims of plagiarism, either in this ODP portal submission or in the pattern abstracts submitted via EasyChair. In fact, the pattern abstracts for both of the allegedly infringing patterns (Reactive Processes and Conformance to Standards) were submitted through EasyChair, with a similar level of detail to that presented here, several days before the OOPMetrics pattern was submitted, making such plagiarism impossible.

11-09-2012 IonelVirgilPop says:

I would like to make some comments on Mr. Hammar's remarks. I did not claim that Ms. Solanki had plagiarized my work. As you can notice in the review above, I never used the term "plagiarism". It is impossible for me to claim that as long as I have not seen her two articles. I could only accuse Ms. Solanki of plagiarism once her articles are published, if and only if I see that they are similar to my article without citing the source (at least the URL of my publicly available ontology pattern). What I said was that her pattern(s) "appear to be" "a generalization" of my pattern, and that was based strictly on her ontology and on what I have seen on this website. But a website can always be updated, and references can be added if she considers her work derivative. The website is not the problem; websites are constantly updated, but it would have been nice to add some references after I made those reviews. I guess she did not consider it a derivative work. Regarding her articles, I only made some assumptions about what they may contain, based on what I have seen on this website; again, I cannot say more until I see the articles published.

Mr. Hammar said that the articles were submitted several days before mine. Of course, I cannot verify that. Perhaps Mr. Hammar himself cannot be 100% sure that the dates were not "tampered" with by any of the evaluators. I do not claim this was done; I only say that this is a strange guarantee from Mr. Hammar. But even if the articles were submitted several days earlier, I would like to inform Mr. Hammar that I submitted a similar article to the 18th International Conference on Knowledge Engineering and Knowledge Management (EKAW 2012), for which the extended full-paper deadline was at the beginning of May 2012; it was not accepted (this can be checked, because many of the evaluators at this workshop were also on the EKAW 2012 program committee). I also gave a presentation at my university this spring based on similar material, although I did not publish it.

Perhaps Mr. Hammar is right; perhaps this is just a strange coincidence. Can you explain, then, why the same mistakes I made in my first version of the ontology (both here and in the EKAW 2012 submission), such as the fact that, even though I talked about modeling metrics, I actually modeled types of metrics, also appear in Ms. Solanki's Reactor pattern, where she talks about modeling processes but actually modeled types of processes, as was keenly observed by one of her reviewers in "VojtechSvatek about Reactor pattern"? Aren't there too many coincidences?

Perhaps it would have been better if this workshop had had more evaluators with the experience of Mr. Svatek, rather than so many PhD students and postdocs who, when they see two patterns with similar contents, can be tricked into picking the one that was packaged more nicely. Don't get me wrong: I appreciate workshops that put PhD students and postdocs on program committees, because it allows them to learn in practice how to write a review (after all, I am a PhD student myself), but I believe their evaluations should either be checked by more experienced people before being made official, or they should be mixed with more experienced reviewers for every evaluated pattern. Otherwise you get the kind of review I received in "RinkeHoekstra about OOPMetrics", a review worth including in a manual on how not to write a review. It starts from a wrong premise, namely that domain-specific ontology content patterns are not ontology content patterns, and bases the whole review on that. It also indirectly praises Ms. Solanki's submissions because they are more general. I should remind you that most content patterns on this website are domain-specific; after all, this is why you can specify the domain of your ontology content pattern on this website. And what happened to the third reviewer who, according to this website, was assigned to review my pattern? I was very motivated to send a pattern to this workshop when I saw its website; now, after my experience with it, I am starting to be very disappointed.
