pyhrf.jde.models module

class pyhrf.jde.models.ARN_BiG_BOLDSamplerInput(data, dt, typeLFD, paramLFD, hrfZc, hrfDuration)

    Bases: pyhrf.jde.models.BOLDSamplerInput

    cleanPrecalculations()

    makePrecalculations()

class pyhrf.jde.models.BOLDGibbsSampler(nb_iterations=3000, obs_hist_pace=-1.0, glob_obs_hist_pace=-1, smpl_hist_pace=-1.0, burnin=0.3, callback=<pyhrf.jde.samplerbase.GSDefaultCallbackHandler object>, response_levels=<pyhrf.jde.nrl.bigaussian.NRLSampler object>, beta=<pyhrf.jde.beta.BetaSampler object>, noise_var=<pyhrf.jde.noise.NoiseVarianceSampler object>, hrf=<pyhrf.jde.hrf.HRFSampler object>, hrf_var=<pyhrf.jde.hrf.RHSampler object>, mixt_weights=<pyhrf.jde.nrl.bigaussian.MixtureWeightsSampler object>, mixt_params=<pyhrf.jde.nrl.bigaussian.BiGaussMixtureParamsSampler object>, scale=<pyhrf.jde.hrf.ScaleSampler object>, stop_crit_threshold=-1, stop_crit_from_start=False, check_final_value=None)

    Bases: pyhrf.xmlio.Initable, pyhrf.jde.samplerbase.GibbsSampler

    cleanObservables()

    computeFit()

    computePMStimInducedSignal()

    compute_crit_diff(old_vals, means=None)

    default_nb_its = 3000

    getGlobalOutputs()

    initGlobalObservables()

    inputClass
        alias of WN_BiG_BOLDSamplerInput

    parametersComments = {
        'obs_hist_pace': 'See comment for samplesHistoryPaceSave.',
        'smpl_hist_pace': 'To save the samples at each iteration. '
                          'If x < 0: no save. '
                          'If 0 < x < 1: defines the fraction of iterations for which samples are saved. '
                          'If x >= 1: defines the step, in number of iterations, between saved samples '
                          '(x = 1 saves samples at every iteration).'}

    parametersToShow = ['nb_iterations', 'response_levels', 'hrf', 'hrf_var']

    saveGlobalObservables(it)

    stop_criterion(it)

    updateGlobalObservables()

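Each constructor keyword above plugs in a dedicated variable sampler; in practice only the scalar settings are usually adjusted. Below is a minimal, hypothetical instantiation sketch that keeps every default component sampler and only sets the iteration and history parameters; the inline comments interpret those parameters and are assumptions, except where the parametersComments entry above states the semantics.

    # Hypothetical sketch: all component samplers (response_levels, hrf,
    # noise_var, ...) keep the default objects listed in the signature above.
    from pyhrf.jde.models import BOLDGibbsSampler

    sampler = BOLDGibbsSampler(
        nb_iterations=3000,   # total number of Gibbs iterations
        burnin=0.3,           # presumably the fraction of iterations discarded as burn-in
        smpl_hist_pace=-1.0,  # x < 0: do not save a per-iteration sample history
        obs_hist_pace=-1.0,   # see the smpl_hist_pace comment above
    )
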
class pyhrf.jde.models.BOLDGibbsSampler_AR(nb_iterations=3000, obs_hist_pace=-1.0, glob_obs_hist_pace=-1, smpl_hist_pace=-1.0, burnin=0.3, callback=<pyhrf.jde.samplerbase.GSDefaultCallbackHandler object>, response_levels=<pyhrf.jde.nrl.ar.NRLARSampler object>, beta=<pyhrf.jde.beta.BetaSampler object>, noise_var=<pyhrf.jde.noise.NoiseVarianceARSampler object>, noise_arp=<pyhrf.jde.noise.NoiseARParamsSampler object>, hrf=<pyhrf.jde.hrf.HRFARSampler object>, hrf_var=<pyhrf.jde.hrf.RHSampler object>, mixt_weights=<pyhrf.jde.nrl.bigaussian.MixtureWeightsSampler object>, mixt_params=<pyhrf.jde.nrl.bigaussian.BiGaussMixtureParamsSampler object>, scale=<pyhrf.jde.hrf.ScaleSampler object>, drift=<pyhrf.jde.drift.DriftARSampler object>, drift_var=<pyhrf.jde.drift.ETASampler object>, stop_crit_threshold=-1, stop_crit_from_start=False, check_final_value=None)

    Bases: pyhrf.xmlio.Initable, pyhrf.jde.samplerbase.GibbsSampler

    cleanObservables()

    computeFit()

    computePMStimInducedSignal()

    compute_crit_diff(old_vals, means=None)

    default_nb_its = 3000

    getGlobalOutputs()

    initGlobalObservables()

    inputClass
        alias of ARN_BiG_BOLDSamplerInput

    parametersComments = {
        'obs_hist_pace': 'See comment for samplesHistoryPaceSave.',
        'smpl_hist_pace': 'To save the samples at each iteration. '
                          'If x < 0: no save. '
                          'If 0 < x < 1: defines the fraction of iterations for which samples are saved. '
                          'If x >= 1: defines the step, in number of iterations, between saved samples '
                          '(x = 1 saves samples at every iteration).'}

    parametersToShow = ['nb_iterations', 'response_levels', 'hrf', 'hrf_var']

    saveGlobalObservables(it)

    stop_criterion(it)

    updateGlobalObservables()

class pyhrf.jde.models.BOLDSamplerInput(data, dt, typeLFD, paramLFD, hrfZc, hrfDuration)

    Class holding the data needed by the sampler: BOLD time courses for each voxel,
    onsets and voxel topology. It also performs some precalculations, such as the
    convolution matrix based on the onsets (stackX).

    buildCosMat(paramLFD, ny)

    buildOtherMatX()

    buildParadigmConvolMatrix(zc, estimDuration, availableDataIndex, parData)

    buildParadigmSingleCondMatrix(zc, estimDuration, availableDataIndex, parData)

    buildPolyMat(paramLFD, n)

    calcDt(dtMin)

    chewUpOnsets(dt, hrfZc, hrfDuration)

    cleanMem()

    cleanPrecalculations()

    makePrecalculations()

    setLFDMat(paramLFD, typeLFD)
        Build the low-frequency basis from polynomial basis functions.

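As a rough illustration of the constructor above, the sketch below builds a sampler input. Every argument value is a placeholder chosen for the example (including the fmri_data object, the typeLFD string and the HRF settings); none of them is documented on this page.

    # Hypothetical sketch; every value below is an illustrative placeholder.
    from pyhrf.jde.models import BOLDSamplerInput

    sampler_input = BOLDSamplerInput(
        data=fmri_data,        # functional data object: BOLD series, onsets, voxel topology
        dt=0.6,                # time resolution (s) used for the estimated HRF
        typeLFD='polynomial',  # low-frequency drift basis type (see setLFDMat)
        paramLFD=4,            # order / parameter of the drift basis
        hrfZc=True,            # presumably: constrain the HRF to zero at its endpoints
        hrfDuration=25.0,      # HRF support duration (s)
    )
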
class pyhrf.jde.models.BOLDSampler_Multi_SessInput(data, dt, typeLFD, paramLFD, hrfZc, hrfDuration)

    Multi-session version of the class holding the data needed by the sampler: BOLD
    time courses for each voxel, onsets and voxel topology. It also performs some
    precalculations, such as the convolution matrix based on the onsets (stackX).

    buildCosMat(paramLFD, ny)

    buildOtherMatX()

    buildParadigmConvolMatrix(zc, estimDuration, availableDataIndex, parData)

    buildPolyMat(paramLFD, n)

    calcDt(dtMin)

    chewUpOnsets(dt, hrfZc, hrfDuration)

    cleanMem()

    cleanPrecalculations()

    makePrecalculations()

    setLFDMat(paramLFD, typeLFD)
        Build the low-frequency basis from polynomial basis functions.

class pyhrf.jde.models.CallbackCritDiff

    Bases: pyhrf.jde.samplerbase.GSDefaultCallbackHandler

    callback(it, variables, samplerEngine)

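CallbackCritDiff specialises the callback hook of GSDefaultCallbackHandler. The sketch below shows the general pattern, under the assumption that the sampler invokes callback(it, variables, samplerEngine) once per iteration; the subclass name and body are illustrative only.

    # Hypothetical callback handler following the same pattern as CallbackCritDiff.
    from pyhrf.jde.samplerbase import GSDefaultCallbackHandler

    class PrintIterCallback(GSDefaultCallbackHandler):

        def callback(self, it, variables, samplerEngine):
            # 'it' is the current iteration index; 'variables' and 'samplerEngine'
            # give access to the sampled quantities (contents depend on the sampler).
            if it % 100 == 0:
                print('Gibbs iteration %d' % it)
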
class pyhrf.jde.models.Drift_BOLDGibbsSampler(nb_iterations=3000, obs_hist_pace=-1, glob_obs_hist_pace=-1, smpl_hist_pace=-1, burnin=0.3, callback=<pyhrf.jde.samplerbase.GSDefaultCallbackHandler object>, response_levels=<pyhrf.jde.nrl.bigaussian_drift.NRL_Drift_Sampler object>, beta=<pyhrf.jde.beta.BetaSampler object>, noise_var=<pyhrf.jde.noise.NoiseVariance_Drift_Sampler object>, hrf=<pyhrf.jde.hrf.HRF_Drift_Sampler object>, hrf_var=<pyhrf.jde.hrf.RHSampler object>, mixt_weights=<pyhrf.jde.nrl.bigaussian.MixtureWeightsSampler object>, mixt_params=<pyhrf.jde.nrl.bigaussian.BiGaussMixtureParamsSampler object>, scale=<pyhrf.jde.hrf.ScaleSampler object>, drift=<pyhrf.jde.drift.DriftSampler object>, drift_var=<pyhrf.jde.drift.ETASampler object>, stop_crit_threshold=-1, stop_crit_from_start=False, check_final_value=None)

    Bases: pyhrf.xmlio.Initable, pyhrf.jde.samplerbase.GibbsSampler

    computeFit()

    default_nb_its = 3000

    inputClass
        alias of WN_BiG_Drift_BOLDSamplerInput

    parametersToShow = ['nb_iterations', 'response_levels', 'hrf', 'hrf_var']

class pyhrf.jde.models.Hab_WN_BiG_BOLDSamplerInput(data, dt, typeLFD, paramLFD, hrfZc, hrfDuration)

    Bases: pyhrf.jde.models.WN_BiG_BOLDSamplerInput

    cleanPrecalculations()

    makePrecalculations()

class pyhrf.jde.models.WN_BiG_BOLDSamplerInput(data, dt, typeLFD, paramLFD, hrfZc, hrfDuration)

    Bases: pyhrf.jde.models.BOLDSamplerInput

    cleanPrecalculations()

    makePrecalculations()

class pyhrf.jde.models.WN_BiG_Drift_BOLDSamplerInput(data, dt, typeLFD, paramLFD, hrfZc, hrfDuration)

    Bases: pyhrf.jde.models.BOLDSamplerInput

    cleanPrecalculations()

    makePrecalculations()

class pyhrf.jde.models.W_BOLDGibbsSampler(nb_iterations=3000, obs_hist_pace=-1.0, glob_obs_hist_pace=-1, smpl_hist_pace=-1.0, burnin=0.3, callback=<pyhrf.jde.samplerbase.GSDefaultCallbackHandler object>, response_levels=<pyhrf.jde.nrl.bigaussian.NRLSamplerWithRelVar object>, beta=<pyhrf.jde.beta.BetaSampler object>, noise_var=<pyhrf.jde.noise.NoiseVarianceSampler object>, hrf=<pyhrf.jde.hrf.HRFSamplerWithRelVar object>, hrf_var=<pyhrf.jde.hrf.RHSampler object>, mixt_weights=<pyhrf.jde.nrl.bigaussian.MixtureWeightsSampler object>, mixt_params=<pyhrf.jde.nrl.bigaussian.BiGaussMixtureParamsSamplerWithRelVar object>, scale=<pyhrf.jde.hrf.ScaleSampler object>, relevantVariable=<pyhrf.jde.wsampler.WSampler object>, stop_crit_threshold=-1, stop_crit_from_start=False, check_final_value=None)

    Bases: pyhrf.xmlio.Initable, pyhrf.jde.samplerbase.GibbsSampler

    default_nb_its = 3000

    inputClass
        alias of WN_BiG_BOLDSamplerInput

    parametersToShow = ['nb_iterations', 'response_levels', 'hrf', 'hrf_var']

class pyhrf.jde.models.W_Drift_BOLDGibbsSampler(nb_iterations=3000, obs_hist_pace=-1.0, glob_obs_hist_pace=-1, smpl_hist_pace=-1.0, burnin=0.3, callback=<pyhrf.jde.samplerbase.GSDefaultCallbackHandler object>, response_levels=<pyhrf.jde.nrl.bigaussian_drift.NRL_Drift_SamplerWithRelVar object>, beta=<pyhrf.jde.beta.BetaSampler object>, noise_var=<pyhrf.jde.noise.NoiseVariance_Drift_Sampler object>, hrf=<pyhrf.jde.hrf.HRF_Drift_SamplerWithRelVar object>, hrf_var=<pyhrf.jde.hrf.RHSampler object>, mixt_weights=<pyhrf.jde.nrl.bigaussian.MixtureWeightsSampler object>, mixt_params=<pyhrf.jde.nrl.bigaussian.BiGaussMixtureParamsSamplerWithRelVar object>, scale=<pyhrf.jde.hrf.ScaleSampler object>, condion_relevance=<pyhrf.jde.wsampler.W_Drift_Sampler object>, drift=<pyhrf.jde.drift.DriftSamplerWithRelVar object>, drift_var=<pyhrf.jde.drift.ETASampler object>, stop_crit_threshold=-1, stop_crit_from_start=False, check_final_value=None)

    Bases: pyhrf.xmlio.Initable, pyhrf.jde.samplerbase.GibbsSampler

    default_nb_its = 3000

    inputClass
        alias of WN_BiG_Drift_BOLDSamplerInput

    parametersToShow = ['nb_iterations', 'response_levels', 'hrf', 'hrf_var']

pyhrf.jde.models.computePl(drift, varP, dest=None)

pyhrf.jde.models.computeSumjaXh(nrl, matXh, dest=None)

pyhrf.jde.models.computeXh(hrf, varX, dest=None)

pyhrf.jde.models.computeYBar(varMBY, varPl, dest=None)

pyhrf.jde.models.computeYTilde(sumj_aXh, varMBY, dest=None)

pyhrf.jde.models.computeYTilde_Pl(sumj_aXh, yBar, dest=None)

pyhrf.jde.models.computehXQXh(hrf, matXQX, dest=None)

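These helpers appear to compute pieces of the JDE forward model, roughly y ≈ P·l (low-frequency drift) + sum_j a_j X_j h (stimulus-induced signal) + noise, each accepting an optional dest argument for in-place output. The sketch below shows how they would chain under that reading; the interpretation of every quantity is an assumption, and the variable names simply reuse the signature parameter names as placeholders.

    # Hedged sketch: how the helpers would chain (interpretations are assumptions).
    from pyhrf.jde.models import (computeXh, computeSumjaXh, computePl,
                                  computeYBar, computeYTilde, computehXQXh)

    Xh = computeXh(hrf, varX)                 # per-condition regressors X_j h
    sum_aXh = computeSumjaXh(nrl, Xh)         # stimulus-induced signal sum_j a_j X_j h
    Pl = computePl(drift, varP)               # low-frequency drift component P l
    y_bar = computeYBar(varMBY, Pl)           # data with the drift removed
    y_tilde = computeYTilde(sum_aXh, varMBY)  # data minus the stimulus-induced signal
    hXQXh = computehXQXh(hrf, matXQX)         # quadratic form h^T X^T Q X h
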
pyhrf.jde.models.permutation(x)

    Randomly permute a sequence, or return a permuted range.

    If x is a multi-dimensional array, it is only shuffled along its first index.

    Parameters
        x : int or array_like
            If x is an integer, randomly permute np.arange(x). If x is an array,
            make a copy and shuffle the elements randomly.

    Returns
        out : ndarray
            Permuted sequence or array range.

    Examples

    >>> np.random.permutation(10)
    array([1, 7, 4, 3, 0, 9, 2, 5, 8, 6])

    >>> np.random.permutation([1, 4, 9, 12, 15])
    array([15, 1, 9, 4, 12])

    >>> arr = np.arange(9).reshape((3, 3))
    >>> np.random.permutation(arr)
    array([[6, 7, 8],
           [0, 1, 2],
           [3, 4, 5]])

pyhrf.jde.models.rand(d0, d1, ..., dn)

    Random values in a given shape.

    Create an array of the given shape and populate it with random samples from a
    uniform distribution over [0, 1).

    Parameters
        d0, d1, ..., dn : int, optional
            The dimensions of the returned array; should all be positive.
            If no argument is given, a single Python float is returned.

    Returns
        out : ndarray, shape (d0, d1, ..., dn)
            Random values.

    See also
        random()

    Notes

    This is a convenience function. If you want an interface that takes a shape tuple
    as the first argument, refer to np.random.random_sample.

    Examples

    >>> np.random.rand(3, 2)
    array([[ 0.14022471,  0.96360618],  #random
           [ 0.37601032,  0.25528411],  #random
           [ 0.49313049,  0.94909878]]) #random

pyhrf.jde.models.randn(d0, d1, ..., dn)

    Return a sample (or samples) from the "standard normal" distribution.

    If positive int_like or int-convertible arguments are provided, randn generates an
    array of shape (d0, d1, ..., dn), filled with random floats sampled from a
    univariate "normal" (Gaussian) distribution of mean 0 and variance 1 (if any of the
    d_i are floats, they are first converted to integers by truncation). A single float
    randomly sampled from the distribution is returned if no argument is provided.

    This is a convenience function. If you want an interface that takes a tuple as the
    first argument, use numpy.random.standard_normal instead.

    Parameters
        d0, d1, ..., dn : int, optional
            The dimensions of the returned array; should all be positive.
            If no argument is given, a single Python float is returned.

    Returns
        Z : ndarray or float
            A (d0, d1, ..., dn)-shaped array of floating-point samples from the
            standard normal distribution, or a single such float if no parameters
            were supplied.

    See also
        random.standard_normal()
            Similar, but takes a tuple as its argument.

    Notes

    For random samples from N(mu, sigma^2), use:

        sigma * np.random.randn(...) + mu

    Examples

    >>> np.random.randn()
    2.1923875335537315  #random

    Two-by-four array of samples from N(3, 6.25):

    >>> 2.5 * np.random.randn(2, 4) + 3
    array([[-4.49401501,  4.00950034, -1.81814867,  7.29718677],  #random
           [ 0.39924804,  4.68456316,  4.99394529,  4.84057254]]) #random

pyhrf.jde.models.simulate_bold(output_dir=None, noise_scenario='high_snr', spatial_size='tiny', normalize_hrf=True)

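A minimal call sketch using only the defaults listed in the signature; what the function returns, and what it writes when output_dir is set, is not documented on this page, so the result name below is a placeholder.

    # Hypothetical usage: generate a small synthetic BOLD dataset with high-SNR noise.
    from pyhrf.jde.models import simulate_bold

    simu = simulate_bold(output_dir=None,           # None: assumed to keep results in memory only
                         noise_scenario='high_snr',
                         spatial_size='tiny',
                         normalize_hrf=True)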