

 Tim Menzies

Professor (full), PhD, CS, IEEE Fellow
SE, AI, data mining, programming languages




"Less, but better"

I find simple software solutions to seemingly hard problems (see examples).

So what can I simplify for you?

For Students

I seek talented grad students for AI+SE. Is that you?

For Industry

Ask me how to innovate. On time. On budget. Case studies:

My Funding

$12.5M (total). From many sources, e.g.:


  • Awards:
    • IEEE Fellow (2019)
    • Publications:
      • H-index: 56 (Apr'20).
      • Papers: 86 journal + 129 conference + 86 other
    • Government work:
      • NASA software research chair: 2002-2004
      • NSF panelist: 13 times (2003-2020)
    • Journal work:
      • assoc. ed.: CACM, TSE, TOSEM, JSS, EMSE, IST, ASEJ, IEEE Software, SQJ, Big Data Research, IET Software
    • Conference work:
      • co-general chair: ICSME'16, RAISE'19, PROMISE'05..'12
      • co-PC chair: PROMISE'20, SSBSE'17, NIER'15, ASE'12
      • artifacts co-chair: ICSE'20, ASE'20, FSE'18, FSE'16
      • program committees: ASE'20, ICSE'20, ESEM'20, FSE'19, ASE'19, MSR'19, SSBSE, PROMISE,...
    • Research students (total):
      • Ph.D.: 12 current. 12 past. Masters (by research): 31
      • Current graduate students (at the RAISE lab: real-world AI for SE):

    Zhe Yu, Andrew Hill, Huy Tu, Suvodeep Najumder, Joymallya Chakraborty, Rui Shu,
    Shrikanth Chandrasekaran, Xueqi (Sherry) Yang, Kewen Peng, Dylan Wilson, Andre Motta

    My prior graduates:

    Scott Chen, David Owen, Ashutosh Nandeshwar, Ekrem Kocaguneli, Abdel Salem Sayyad, Fayola Peters, Joe Krall,
    Greg Gay, Wei Fu, Vivek Nair, Amritanshu Agrawal, Jianfeng Chen, Rahul Krishna

    I challenge my students as follows:

    • Researchers usually seek ever more intricate and complex solutions;
    • Yet empirically & theoretically the world we can know is very simple;
    • So can you do "it" better, with less?

    Here is a sample of their results (and for more, see Google Scholar):

    Make your own (automated) Scientist


    Humans and AI can learn together which tests will fail sooner.


    AI can learn what matters most to a human operator.


    Humans can train an AI to find the documents they need to read.

    Software quality control


    We can learn plans on what to change to most improve quality.


    By looking at many projects, we can find very simple predictors of next month's bugs.


    Intelligent defect predictors can stop developers from wasting time fixing bad smells.


    Simple feature and instance selectors let software projects share privatized data without missing important patterns.


    Very simple social metrics can generate near-optimal predictors for software quality.


    Static code defect predictors have inherent limitations. But these limits can be fixed via a new, very simple learner that better understands the business goals.


    Surprisingly effective defect predictors can be built from simple static code attributes.

    Automated Decision Making
    (and Configuration)


    Security bugs can be uncovered even when hidden deep inside software data.


    Search-based methods can be 500 times faster than deep learners.


    Very simple optimizers can outperform overly complex deep learners.


    Very simple optimizers can dramatically improve the performance of data miners learning software quality predictors.


    Active learners can simplify and reduce the cost of search-based SE by orders of magnitude.


    Search-based SE methods can readily and critically assess long-held SE truisms.

    Finding General Lessons in SE


    We can find general lessons from 100s of software projects.


    The bellwether effect teaches us much about generality in SE.


    Even when projects collect data using different labels, we can still transfer lessons learned between them.


    Ultra-simple transfer learning methods (called "bellwethers") enable effective transfer of lessons learned.
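    The bellwether idea can be sketched in a few lines: among N projects, find the one whose learned model predicts best, on average, in all the other projects, then use it as the training source everywhere. Everything below is a toy illustration (made-up data, a trivial median-threshold "learner"); the published studies used real learners over static code metrics.

    ```python
    # Minimal sketch of the bellwether heuristic. Each project is a list
    # of (metric, defective?) rows -- all values here are hypothetical.

    def learn(rows):
        """Toy learner: flag modules whose metric exceeds the training median."""
        xs = sorted(x for x, _ in rows)
        cut = xs[len(xs) // 2]
        return lambda x: x > cut

    def accuracy(model, rows):
        return sum(model(x) == y for x, y in rows) / len(rows)

    def bellwether(projects):
        """Return the project whose model scores best, on average, elsewhere."""
        def mean_elsewhere(name):
            model = learn(projects[name])
            others = [rows for other, rows in projects.items() if other != name]
            return sum(accuracy(model, rows) for rows in others) / len(others)
        return max(projects, key=mean_elsewhere)

    projects = {
        "A": [(1, False), (2, False), (8, True), (9, True)],
        "B": [(1, False), (3, False), (7, True), (9, True)],
        "C": [(9, False), (10, False), (11, True), (12, True)],  # different metric scale
    }
    best = bellwether(projects)  # "B": its model transfers best to A and C
    ```

    Once found, a bellwether is cheap to exploit: new projects skip data collection and just reuse the bellwether's model until it stops working.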


    How to transfer lessons learned from past projects? Easy: clustering tools enable transferring lessons between software projects.


    Simple nearest-neighbor relevancy filtering produced one of the first general results in software analytics: defect predictors learned from Turkish toasters could be successfully applied to NASA flight software (and vice versa).
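    The relevancy-filtering trick above can be sketched as follows: before training on cross-project data, keep only the source rows nearest to the target project's rows, so irrelevant examples never reach the learner. The data here is invented for illustration; the real studies filtered static code metrics.

    ```python
    # Minimal sketch of nearest-neighbor relevancy filtering for
    # cross-project prediction (hypothetical 2-D feature vectors).
    from math import dist

    def nn_filter(source, target, k=2):
        """Keep only the k source rows nearest to each target row."""
        keep = set()
        for t in target:
            nearest = sorted(range(len(source)),
                             key=lambda i: dist(source[i], t))[:k]
            keep.update(nearest)
        return [source[i] for i in sorted(keep)]

    source = [(1, 1), (2, 2), (9, 9), (10, 10), (50, 50)]
    target = [(1.5, 1.5), (9.5, 9.5)]
    filtered = nn_filter(source, target, k=2)
    # the far-away outlier (50, 50) is dropped from the training data
    ```

    A defect predictor trained on `filtered` (rather than all of `source`) sees only examples that resemble the target project, which is what made toaster-to-flight-software transfer work.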

    Requirements engineering


    Seemingly complex, conflicting models can be tamed and controlled via some simple stochastic probing.


    Contrast set learners can explain enormous decision trees (6,000 nodes) learned from complex requirements models with just six rules.


    Contrast set learners find simple controllers in requirements models.
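    The core contrast-set move can be sketched in a few lines: rank attribute=value tests by how much more often they hold in the "best" examples than in the "rest", then report the top-ranked tests as a tiny rule. All names and rows below are invented for illustration; the published learners were considerably more sophisticated.

    ```python
    # Minimal sketch of a contrast-set learner over hypothetical
    # requirements options labelled "best" vs "rest".
    from collections import Counter

    def contrasts(best, rest):
        """Return attribute=value pairs, most contrasting first."""
        b, r = Counter(), Counter()
        for row in best:
            b.update(row.items())
        for row in rest:
            r.update(row.items())
        score = lambda p: b[p] / len(best) - r[p] / len(rest)
        return sorted(set(b) | set(r), key=score, reverse=True)

    best = [{"redundancy": "high", "cost": "low"},
            {"redundancy": "high", "cost": "mid"}]
    rest = [{"redundancy": "low", "cost": "low"},
            {"redundancy": "low", "cost": "mid"}]
    top = contrasts(best, rest)[0]  # the single test that best separates the classes
    ```

    The point of such learners is brevity: instead of a giant decision tree, the output is the handful of tests whose presence most distinguishes good outcomes from bad ones.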

    Effort estimation


    The effort to build complex software can be estimated by very simple equations.


    Active learners can easily estimate large software projects after just a few samples.

    Other applications


    To tame cloud computing, use next-generation algorithms.


    Text miners can succinctly summarize thousands of technical papers about SE.


    Data miners can greatly simplify and reduce the effort involved in data collection for community health studies.


    Simple contrast-set learners out-perform state-of-the-art optimizers for spacecraft control.


    The lesson of decades of expert systems research is that, for specific domains, human expertise can be readily captured in just a few rules.


    BTW: for the origins of the "Less, but better" mantra, see Dieter Rams' 10 principles for good design.