Dr. Anu is an Assistant Professor of Computer Science at Montclair State University. He joined Montclair State in Fall 2018. Before joining Montclair State, he received a PhD in Software Engineering from North Dakota State University.
Courses Taught at Montclair State University:
- CSIT 416 - IT Project Management (Spring'19, Fall'18)
- CSIT 104 - Computational Concepts (Spring'19)
Courses Taught at North Dakota State University:
- CSCI 116 - Business Use of Computers (Spring'10, Fall'10, Spring'11)
Dr. Anu is interested in software development planning, analysis, and management, and particularly in improving software quality by improving the processes and practices involved in software development.
The primary focus of Dr. Anu's research is the development and empirical validation of effective methods for the verification and validation (V&V) of software artifacts.
Dr. Anu's Research Interests include:
Software Engineering, Requirements Engineering, Human Error in Software Engineering, Software Inspections, Software Quality Improvement, Empirical Software Engineering, Software Engineering Education, Human Factors in Cybersecurity
This research draws on Cognitive Psychology research on human errors to address a serious problem in Software Engineering: defects introduced during software development. Because software development is a human-centric process, we propose that most software defects can be traced back to failures of human cognition (also called human errors or mental errors). To have the greatest impact on software quality and to minimize the impact of defects, our research focuses on the earliest phase of software development: the requirements engineering phase.
Requirements inspections involve multiple inspectors independently reviewing a requirements document and reporting the faults they find. However, inspectors report both true faults and non-faults (false positives). We are using machine-learning-based approaches to validate requirements reviews: supervised classifiers that separate true faults from false positives. An important feature used to train our classifiers is the fault-type label attached to each review item (e.g., ambiguity, inconsistency, incorrectness, omission).
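The supervised-classification idea above can be illustrated with a minimal sketch. This is not Dr. Anu's actual pipeline; it is a toy Naive Bayes text classifier in pure Python, with invented review comments and labels, showing how inspection reports might be separated into true faults and false positives.

```python
# Minimal sketch: classifying inspection review comments as true faults
# vs. false positives with a bag-of-words Naive Bayes model.
# All review comments and labels below are illustrative, not real data.
from collections import Counter, defaultdict
import math


def tokenize(text):
    return text.lower().split()


class NaiveBayes:
    def __init__(self):
        self.class_counts = Counter()            # label -> #documents
        self.word_counts = defaultdict(Counter)  # label -> token counts
        self.vocab = set()

    def fit(self, samples, labels):
        for text, label in zip(samples, labels):
            self.class_counts[label] += 1
            for tok in tokenize(text):
                self.word_counts[label][tok] += 1
                self.vocab.add(tok)

    def predict(self, text):
        total = sum(self.class_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.class_counts:
            # log prior + Laplace-smoothed log likelihood of each token
            score = math.log(self.class_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for tok in tokenize(text):
                score += math.log((self.word_counts[label][tok] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label


# Toy training data: review comments labeled as fault / false-positive.
reviews = [
    "requirement omits error handling for invalid input",   # fault
    "ambiguous wording two interpretations possible",       # fault
    "inconsistent units between section 2 and section 4",   # fault
    "stylistic preference not an actual defect",            # false-positive
    "reviewer misunderstood the intended behavior",         # false-positive
]
labels = ["fault", "fault", "fault", "false-positive", "false-positive"]

clf = NaiveBayes()
clf.fit(reviews, labels)
print(clf.predict("wording is ambiguous and omits error handling"))  # → fault
```

A real system would replace the toy bag-of-words features with richer ones, such as the fault-type labels mentioned above, and train on genuine inspection data.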