Population grid code options
============================
The following chapter contains all grid code options, along with their descriptions.
There are 28 options that are not described yet.


Public options
--------------
The following options are meant to be changed by the user.
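
These options are typically set on a population object before evolving the grid. Below is a minimal sketch, assuming the usual Population interface of binary_c-python; the option values are purely illustrative.

.. code-block:: python

    from binarycpython.utils.grid import Population

    pop = Population()

    # Public grid options from the list below; values are illustrative
    pop.set(
        verbosity=1,                      # print some progress information
        num_cores=4,                      # use 4 cores for multiprocessing
        tmp_dir="/tmp/binary_c_python",   # where grid code and logs go
    )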


| **C_auto_logging**: Dictionary containing parameters to be logged by binary_c. The structure of this dictionary is as follows: the key is used as the header that the user can then catch, and the value at that key is a list of binary_c system parameters (like star[0].mass).
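
For example, a sketch of the expected structure (the header name and parameter names are illustrative):

.. code-block:: python

    # The key becomes the header line to catch in the output; the value
    # is a list of binary_c system parameters (illustrative names).
    pop.set(
        C_auto_logging={
            "MY_HEADER": ["model.time", "star[0].mass"],
        }
    )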

| **C_logging_code**: Variable to store the exact code that is used for the custom_logging. In this way the user can do more complex logging, as well as store these logging strings in files.
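
For example, a sketch of passing a logging string directly (the Printf statement follows binary_c's custom-logging convention; the header and variables are illustrative, so check the binary_c documentation for the exact parameter names):

.. code-block:: python

    pop.set(
        C_logging_code="""
        Printf("MY_HEADER %30.12e %g\\n",
               stardata->model.time,
               stardata->star[0].mass);
        """
    )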

| **HPC_force_join**: Integer, default 0. If 1, and the HPC variable ("slurm" or "condor") is 3, skip checking our own job and force the join.

| **HPC_rebuild_joinlist**: Integer, default 0. If 1, ignore the joinlist we would usually use and rebuild it automatically

| **Moe2017_options**: No description available yet

| **cache_dir**: No description available yet

| **combine_ensemble_with_thread_joining**: Boolean flag: if True, combine the ensemble output of all threads and return it to the user; if False, write it to data_dir/ensemble_output_{population_id}_{thread_id}.json

| **command_line**: No description available yet

| **condor**: Integer flag used to control HTCondor (referred to as Condor here) jobs. Default is 0, which means no Condor. 1 means launch Condor jobs. Do not manually set this to 2 (run Condor jobs) or 3 (join Condor job data) unless you know what you are doing; this is usually done for you.

| **condor_ClusterID**: Integer. Condor ClusterID variable, equivalent to Slurm's jobid. Jobs are numbered <ClusterID>.<Process>

| **condor_Process**: Integer. Condor Process variable, equivalent to Slurm's jobarrayindex. Jobs are numbered <ClusterID>.<Process>

| **condor_bash**: String. Points to the location of the "bash" command, e.g. /bin/bash, that is used in Condor launch scripts. This is set automatically on the submit machine, so if it is different on the nodes, you should set it manually.

| **condor_batchname**: String. Condor batchname option: this is what appears in condor_q. Defaults to "binary_c-condor"

| **condor_date**: String. Points to the location of the "date" command, e.g. /usr/bin/date, that is used in Condor launch scripts. This is set automatically on the submit machine, so if it is different on the nodes, you should set it manually.

| **condor_dir**: String. Working directory containing e.g. scripts, output, and logs (it should be available over NFS to all jobs). This directory should not exist when you launch the Condor jobs.

| **condor_env**: String. Points to the location of the "env" command, e.g. /usr/bin/env or /bin/env, that is used in Condor launch scripts. This is set automatically on the submit machine, so if it is different on the nodes, you should set it manually.

| **condor_extra_settings**: Dictionary. Place to put extra configuration for the Condor submit file. The key and value of the dict will become the key and value of a line in the Condor submit file. These are added after all the other settings (and before the command). Take care not to overwrite something without really meaning to do so.
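
For example, a sketch adding an extra submit-file line (the key/value shown here is illustrative; consult the HTCondor manual for valid submit commands):

.. code-block:: python

    # Each key/value pair becomes a "key = value" line in the submit file
    pop.set(
        condor_extra_settings={
            "priority": "10",
        }
    )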

| **condor_getenv**: Boolean. If True, the default, condor takes the environment at submission and copies it to the jobs. You almost certainly want this to be True.

| **condor_initial_dir**: String. Directory from which condor scripts are run. If set to the default, None, this is the directory from which your script is run.

| **condor_kill_sig**: String. Signal Condor should use to stop a process. Note that grid.py expects this to be "SIGINT" which is the default.

| **condor_memory**: Integer. In MB, the memory use (ImageSize) of the job.

| **condor_njobs**: Integer. Number of jobs that Condor will run

| **condor_postpone_join**: Integer. Used to delay the joining of Condor grid data. If 1, data is not joined, e.g. if you want to do it off the Condor grid (e.g. with more RAM). Default 0.

| **condor_postpone_submit**: Integer. Debugging tool: if 1, the Condor script is not submitted. Default 0.

| **condor_pwd**: String. Points to the location of the "pwd" command, e.g. /bin/pwd, that is used in Condor launch scripts. This is set automatically on the submit machine, so if it is different on the nodes, you should set it manually.

| **condor_q**: String. The condor_q command, usually "/usr/bin/condor_q", but this will depend on your HTCondor installation.

| **condor_requirements**: String. Condor job requirements. These are passed to Condor directly; you should read the HTCondor manual to learn about them. If no requirements exist, leave this as an empty string.

| **condor_should_transfer_files**: String. Condor's option to transfer files at the end of the job. You should set this to "YES"

| **condor_snapshot_on_kill**: Integer. If 1, we save a snapshot on SIGKILL before exit.

| **condor_stream_error**: Boolean. If True, we activate Condor's stderr stream. If False, this data is copied at the end of the job.

| **condor_stream_output**: Boolean. If True, we activate Condor's stdout stream. If False, this data is copied at the end of the job.

| **condor_submit**: String. The condor_submit command, usually "/usr/bin/condor_submit", but this will depend on your HTCondor installation.

| **condor_universe**: String. The HTCondor "universe": this is "vanilla" by default.

| **condor_warn_max_memory**: Integer. In MB: if the job memory (condor_memory) exceeds this, warn the user because this is usually a mistake.

| **condor_when_to_transfer_output**: String. Condor's option to decide when output files are transferred. You should usually set this to "ON_EXIT_OR_EVICT"

| **custom_generator**: No description available yet

| **custom_logging_func_memaddr**: Memory address where the custom_logging_function is stored. Input: int

| **do_analytics**: No description available yet

| **do_dry_run**: Whether to do a dry run to calculate the total probability for this run

| **dry_run_hook**: Function hook to be called for every system in a dry run. The function is passed a dict of the system parameters. Does nothing if None (the default).
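
For example, a minimal sketch, assuming the hook simply receives the dict of system parameters for each sampled system:

.. code-block:: python

    def my_dry_run_hook(system_dict):
        # Inspect or tally the sampled systems without evolving them
        print("dry-run system:", system_dict)

    pop.set(
        do_dry_run=True,
        dry_run_hook=my_dry_run_hook,
    )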

| **dry_run_num_cores**: No description available yet

| **ensemble_factor_in_probability_weighted_mass**: Flag to multiply all the ensemble results by 1/probability_weighted_mass

| **evolution_type**: Variable containing the type of evolution used for the grid: multiprocessing, linear processing, or possibly something else (e.g. for Slurm or Condor).

| **exit_after_dry_run**: If True, exits after a dry run. Default is False.

| **exit_code**: No description available yet

| **failed_systems_threshold**: Variable storing the maximum number of systems that are allowed to fail before logging their command line arguments to failed_systems log files

| **function_cache**: Boolean, default True. If True, we use a cache for certain function calls.

| **function_cache_TTL**: No description available yet

| **function_cache_default_maxsize**: Integer, default 256. The default maxsize of the cache. Should be a power of 2.

| **function_cache_default_type**: String. One of the following types: LRUCache, LFUCache, FIFOCache, MRUCache, RRCache, TTLCache, NullCache, NoCache. You can find details of what these mean in the Python cachetools manual, except for NoCache, which means no cache is used at all, and NullCache, which is a dummy cache that never matches, used for testing overheads.

| **function_cache_functions**: No description available yet

| **gridcode_filename**: Filename for the grid code. Set and used by the population object. TODO: allow the user to provide their own function, rather than only a generated function.

| **joinlist**: No description available yet

| **log_args**: Boolean to log the arguments.

| **log_args_dir**: Directory to log the arguments to.

| **log_dt**: Time between verbose logging output.

| **log_file**: Log file for the population object. Currently unused.

| **log_newline**: Newline character used at the end of verbose logging statements. This is \n (newline) by default, but \x0d (carriage return) might also be what you want.

| **log_runtime_systems**: Whether to log the runtime of the systems. Each system run by a thread is logged to a file stored in the tmp_dir (one file per thread). Don't use this if you are planning to run a lot of systems; it is mostly for debugging and finding systems that take long to run. Integer, default 0. If the value is 1, the systems are logged.

| **max_queue_size**: Maximum size of the queue that is used to feed the processes. Don't make this too big! Default: 1000. Input: int

| **modulo**: No description available yet

| **multiplicity_fraction_function**: Which multiplicity fraction function to use. 0: None, 1: Arenou 2010, 2: Raghavan 2010, 3: Moe and di Stefano (2017)

| **n_logging_stats**: Number of logging statistics used to calculate time remaining (etc.). E.g., if you set this to 10 the previous 10 calls to the verbose log will be used to construct an estimate of the time remaining.

| **num_cores**: The number of cores that the population grid will use. You can set this manually by entering an integer greater than 0. When 0, all logical cores are used; when -1, all physical cores are used. Input: int

| **num_cores_available**: No description available yet

| **original_command_line**: No description available yet

| **original_submission_time**: No description available yet

| **original_working_diretory**: No description available yet

| **parse_function**: Function that the user can provide to handle the output of binary_c. This function has to take the arguments (self, output). It is best not to return anything from this function; just store results in the self.grid_results dictionary, or write them to a file.
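
For example, a sketch of a parse function (the "MY_HEADER" header and the two columns are illustrative and should match whatever your logging code writes):

.. code-block:: python

    def my_parse_function(self, output):
        # output is the text binary_c produced for one system
        for line in output.splitlines():
            if not line.startswith("MY_HEADER"):
                continue
            _, time, mass = line.split()
            # accumulate results rather than returning them
            self.grid_results.setdefault("times", []).append(float(time))
            self.grid_results.setdefault("masses", []).append(float(mass))

    pop.set(parse_function=my_parse_function)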

| **print_stack_on_exit**: If True, prints a stack trace when the population's exit method is called.

| **repeat**: Factor of how many times a system should be repeated. Consider the evolution-splitting binary_c argument for repeating supernova kicks.

| **restore_from_snapshot_dir**: No description available yet

| **restore_from_snapshot_file**: No description available yet

| **return_after_dry_run**: If True, return immediately after a dry run (and don't run actual stars). Default is False.

| **run_zero_probability_system**: Whether to run the zero probability systems. Default: True. Input: Boolean

| **rungrid**: No description available yet

| **save_ensemble_chunks**: No description available yet

| **save_population_object**: No description available yet

| **save_snapshots**: No description available yet

| **slurm**: Integer flag used to control Slurm jobs. Default is 0, which means no Slurm. 1 means launch Slurm jobs. Do not manually set this to 2 (run Slurm jobs) or 3 (join Slurm job data) unless you know what you are doing; this is usually done for you.

| **slurm_array**: String. Override for Slurm's --array option, useful for rerunning jobs manually. Default None.

| **slurm_array_max_jobs**: Integer. Override for the max number of concurrent Slurm array jobs. Default None.

| **slurm_bash**: String. Points to the location of the "bash" command, e.g. /bin/bash, that is used in Slurm scripts. This is set automatically on the submit machine, so if it is different on the nodes, you should set it manually.

| **slurm_date**: String. Points to the location of the "date" command, e.g. /usr/bin/date, that is used in Slurm scripts. This is set automatically on the submit machine, so if it is different on the nodes, you should set it manually.

| **slurm_dir**: String. Working directory containing e.g. scripts, output, and logs (it should be available over NFS to all jobs). This directory should not exist when you launch the Slurm jobs.

| **slurm_env**: String. Points to the location of the "env" command, e.g. /usr/bin/env or /bin/env, that is used in Slurm scripts. This is set automatically on the submit machine, so if it is different on the nodes, you should set it manually.

| **slurm_extra_settings**: Dictionary of extra settings for Slurm to put in its launch script. Please see the Slurm documentation for the many options that are available to you.
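
For example, a sketch (the setting name is illustrative; see the Slurm documentation for valid options, and note that exactly how each key/value is rendered into the launch script is handled by binary_c-python):

.. code-block:: python

    pop.set(
        slurm_extra_settings={
            "account": "my_project",   # illustrative Slurm account name
        }
    )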

| **slurm_jobarrayindex**: Integer. Slurm job array index. Each job is numbered <slurm_jobid>.<slurm_jobarrayindex>.

| **slurm_jobid**: Integer. Slurm job id. Each job is numbered <slurm_jobid>.<slurm_jobarrayindex>.

| **slurm_jobname**: String which names the Slurm jobs, default "binary_c-python".

| **slurm_memory**: String. Memory required for the job. Should be in megabytes in a format that Slurm understands, e.g. "512MB" (the default).

| **slurm_njobs**: Integer. Number of Slurm jobs to be launched.

| **slurm_ntasks**: Integer. Number of CPUs required per array job: usually only need this to be 1 (the default).

| **slurm_partition**: String containing the Slurm partition name. You should check your local Slurm installation to find out partition information, e.g. using the sview command.

| **slurm_postpone_join**: Integer, default 0. If 1, do not join job results with Slurm; instead you have to do it later manually.

| **slurm_postpone_sbatch**: Integer, default 0. If set to 1, do not launch Slurm jobs with sbatch, just make the scripts that would have been submitted.

| **slurm_pwd**: String. Points to the location of the "pwd" command, e.g. /bin/pwd, that is used in Slurm scripts. This is set automatically on the submit machine, so if it is different on the nodes, you should set it manually.

| **slurm_sbatch**: String. The Slurm "sbatch" submission command, usually "/usr/bin/sbatch", but this will depend on your Slurm installation. By default this is set automatically.

| **slurm_time**: String. The time a Slurm job is allowed to take. Default is 0 which means no limit. Please check the Slurm documentation for required format of this option.

| **slurm_warn_max_memory**: String. If we set slurm_memory in excess of this, warn the user because this is usually a mistake. Default "1024MB".

| **source_file_filename**: Variable containing the source file containing lines of binary_c command line calls. These all have to start with binary_c.

| **start_at**: No description available yet

| **start_time**: No description available yet

| **status_dir**: Directory where grid status is stored

| **stop_queue**: No description available yet

| **symlink_latest_gridcode**: No description available yet

| **tmp_dir**: Directory where certain types of output are stored. The grid code is stored in that directory, as well as the custom logging libraries. Log files and other diagnostics will usually be written to this location, unless specified otherwise

| **verbosity**: Verbosity of the population code. Default is 0, in which case only errors are printed. Higher values will show more output, which is good for debugging.

| **weight**: Weight factor for each system. The calculated probability is multiplied by this. If the user wants each system to be repeated several times, then this variable should not be changed; rather, change the _repeat variable instead, as that handles the reduction in probability per system. This is useful for systems that have a process with some random element in it.

| **working_diretory**: No description available yet

Moe & di Stefano sampler options
--------------------------------
The following options are meant to be changed by the user.
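
For example, a sketch of passing these sampler options, assuming they can be supplied through the Moe2017_options dictionary listed above (check your binary_c-python version for the exact interface):

.. code-block:: python

    pop.set(
        Moe2017_options={
            "multiplicity_modulator": [1, 1, 0, 0],  # singles and binaries only
            "normalize_multiplicities": "merge",
        }
    )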


| **JSON**: No description available yet

| **Mmin**: Minimum stellar mass

| **multiplicity_model**: 
	        multiplicity model (as a function of log10M1)
	
	        You can use 'Poisson' which uses the system multiplicity
	        given by Moe and maps this to single/binary/triple/quad
	        fractions.
	
	        Alternatively, 'data' takes the fractions directly
	        from the data, but then triples and quadruples are
	        combined (and there are NO quadruples).
	        

| **multiplicity_modulator**: 
	        [single, binary, triple, quadruple]
	
	        e.g. [1,0,0,0] for single stars only
	             [0,1,0,0] for binary stars only
	
	        defaults to [1,1,0,0] i.e. singles and binaries
	        

| **normalize_multiplicities**: 
	        'norm': normalise so the whole population sums to 1.0
	                after applying the appropriate fractions
	                S/(S+B+T+Q), B/(S+B+T+Q), T/(S+B+T+Q), Q/(S+B+T+Q),
	                where (S,B,T,Q) = appropriate modulator * model(S,B,T,Q).
	                Note: if you set only one multiplicity_modulator
	                element to 1, and all the others to 0, then normalising
	                means that you effectively have the same number
	                of stars as single, binary, triple or quad (whichever
	                is non-zero), i.e. the multiplicity fraction is ignored.
	                This is probably not useful except for
	                testing purposes or comparing to old grids.

	        'raw'   : stick to what is predicted, i.e.
	                  S/(S+B+T+Q), B/(S+B+T+Q), T/(S+B+T+Q), Q/(S+B+T+Q)
	                  without normalisation
	                  (in which case the total probability is < 1.0 unless
	                  you use all of single, binary, triple and quadruple).

	        'merge' : e.g. if you only have single and binary,
	                  add the triples and quadruples to the binaries, so
	                  binaries represent all multiple systems.

	                  *** this is canonical binary population synthesis ***

	                  This only takes the maximum multiplicity into account,
	                  i.e. it does not multiply the resulting array by the
	                  multiplicity modulator again. This prevents the resulting
	                  array from always being 1 when only one multiplicity
	                  modulator element is non-zero.

	                  Note: if multiplicity_modulator == [1,1,1,1], this option
	                  does nothing (it is equivalent to 'raw').

| **q_high_extrapolation_method**: Same as q_low_extrapolation_method

| **q_low_extrapolation_method**: 
	        q extrapolation (below 0.15) method
	            none
	            flat
	            linear2
	            plaw2
	            nolowq
	        

| **ranges**: 

| **resolutions**: 

| **samplerfuncs**: No description available yet

Private options
---------------
The following options are not meant to be changed by the user, as they are used and set internally by the object itself. Their descriptions are still provided, but only for documentation purposes.


| **_Moe2017_JSON_data**: Location to store the loaded Moe&diStefano2017 dataset

| **_actually_evolve_system**: Whether to actually evolve the systems or just act as if (for testing). Used in _process_run_population_grid

| **_binary_c_config_executable**: Full path of the binary_c-config executable. This option is not used in the population object.

| **_binary_c_dir**: Directory where binary_c is stored. This option is not really used.

| **_binary_c_executable**: Full path to the binary_c executable. This option is not used in the population object.

| **_binary_c_shared_library**: Full path to the libbinary_c file. This option is not used in the population object.

| **_commandline_input**: String containing the arguments passed to the population object via the command line. Set and used by the population object.

| **_count**: Counter tracking which system the generator is on.

| **_custom_logging_shared_library_file**: filename for the custom_logging shared library. Used and set by the population object

| **_end_time_evolution**: Variable storing the end timestamp of the population evolution. Set by the object itself

| **_errors_exceeded**: Variable storing a Boolean flag whether the number of errors was higher than the set threshold (failed_systems_threshold). If True, then the command line arguments of the failing systems will not be stored in the failed_system_log files.

| **_errors_found**: Variable storing a Boolean flag whether errors from binary_c were encountered.

| **_evolution_type_options**: List containing the evolution type options.

| **_failed_count**: Variable storing the number of failed systems.

| **_failed_prob**: Variable storing the total probability of all the failed systems

| **_failed_systems_error_codes**: List storing the unique error codes raised by binary_c of the failed systems

| **_grid_variables**: Dictionary storing the grid_variables. These contain properties which are accessed by the _generate_grid_code function

| **_killed**: No description available yet

| **_loaded_Moe2017_data**: Internal variable storing whether the Moe and di Stefano (2017) data has been loaded into memory

| **_main_pid**: Main process ID of the master process. Used and set by the population object.

| **_population_id**: Variable storing a unique 32-char hex string.

| **_probtot**: Total probability of the population.

| **_queue_done**: No description available yet

| **_set_Moe2017_grid**: Internal flag whether the Moe and di Stefano (2017) grid has been loaded

| **_start_time_evolution**: Variable storing the start timestamp of the population evolution. Set by the object itself.

| **_store_memaddr**: Memory address of the store object for binary_c.

| **_system_generator**: Function object that contains the system generator function. This can be from a grid, or a source file, or a Monte Carlo grid.

| **_total_mass_run**: Counter for the total mass that this thread/process has run.

| **_total_probability_weighted_mass_run**: Counter for the total mass * probability for each system that this thread/process has run.

| **_total_starcount**: Variable storing the total number of systems in the generator. Used and set by the population object.

| **_zero_prob_stars_skipped**: Internal counter to track how many systems are skipped because they have 0 probability