perlanalyst - analyse your Perl documents (without running them)
perlanalyst [OPTIONS] [FILES OR DIRECTORIES]
Perlanalyst is a tool to analyse your Perl documents. This is done via static analysis, i.e. the code is analysed without running it.
Before getting into all the gory details, here are some basic usage examples to help get you started.
    # find all subroutine declarations, recursively processing all
    # Perl files beneath the current directory
    perlanalyst -all Sub

    # the same, but show only the declaration of the subroutine "foo"
    perlanalyst -all Sub --filter Name=foo

    # the same, but asked as a question
    perlanalyst --question Sub::Name=foo

    # the same, but look in another directory
    perlanalyst -q Sub::Name=foo ~/perl5/lib/perl5/Test

    # see a list of the files that would be examined
    perlanalyst --list-files
Perlanalyst examines only files that end in .t, unless you specify the file names directly on the command line.
It can also list the files that would be analysed, without actually analysing them.
The program descends through the directory tree of the starting directories specified. If no file or directory is specified, perlanalyst descends through the current directory.
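The traversal behaves much like a recursive file search. As a rough sketch (using find as a stand-in, not perlanalyst itself; the directory and file names below are made up for the demonstration):

```shell
# Set up a small example tree (hypothetical paths).
mkdir -p /tmp/pa_demo/t /tmp/pa_demo/lib
echo 'use Test::More;' > /tmp/pa_demo/t/basic.t
echo 'package Foo;'    > /tmp/pa_demo/lib/Foo.pm

# Roughly what the file discovery does: descend from the starting
# directory and pick up the matching files.
find /tmp/pa_demo -name '*.t'
```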
Run the analysis of the given NAME.
Send the results of the analysis through this filter. Can be specified multiple times; the filters are run in the order in which they appear on the command line.
Ask the question of that name.
List available analyses.
List available filters.
List the files that would be examined. Does nothing else and exits afterwards.
List available questions.
An analysis examines a file in a simple way and returns a list of results. For example, a very simple analysis is: Give me all declarations of lexical variables.
A filter takes the results of an analysis and narrows them down. An example would be: Give me all lexical variables with the name "foo".
A question is an analysis followed by one or more filters.
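The analysis/filter/question split can be pictured as a pipeline. A loose shell analogy (with grep as a stand-in for the real analyses and filters, and a made-up toy file):

```shell
# A toy "document" with two subroutine declarations.
printf 'sub foo {\n}\nsub bar {\n}\n' > /tmp/pa_question.pl

# "Analysis": list all sub declarations found in the file.
grep -o 'sub [a-z]*' /tmp/pa_question.pl

# "Question" = the analysis piped through a "filter"
# (keep only the subroutine named foo).
grep -o 'sub [a-z]*' /tmp/pa_question.pl | grep 'foo$'
```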
This is a proof of concept that was hacked together whilst enjoying the 13th German Perl Workshop in Frankfurt. Hacking was done on the train to and from Frankfurt and at the workshop itself.
The results of each analysis are written to a database. Filters run on the data read from the database, so analyses and questions can be performed in separate steps.
Write more basic analyses using PPI and file operations. Write more filters and combine them in questions.
Higher-level analyses combine results from lower-level analyses. For example, to see which package variables are declared, we have to know where the keyword our is used and in what scope.
Perform an analysis once to get the initial data for a file, then re-run it only if the file has been modified.
The second one might be a simple RESTful server (using Dancer::Plugin::REST?).
The third one could be some kind of connection to Perl::Critic. Since this feels so natural and obvious to me, it might actually be the first one that should be built.
PPI is used for parsing the Perl documents.
Perl::Critic is a different kind of tool. It has the knowledge of experienced Perl programmers built in and tells you if your code smells.
Gregor Goldbach <firstname.lastname@example.org>
This software is copyright (c) 2011 by Gregor Goldbach.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.