| code | code_sememe | token_type | code_dependency |
---|---|---|---|
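The first sample below derives a smoothing parameter that feeds a smoothed-minimum function. As a primer, here is a self-contained sketch of the log-sum-exp soft-minimum idea; it is illustrative only, since hydpy's `smooth_min1` uses its own calibrated formulation:

```python
import math

def smooth_min_sketch(x, y, smoothpar):
    """Soft minimum that degrades to min(x, y) as smoothpar -> 0."""
    if smoothpar <= 0.0:
        return min(x, y)  # no smoothing when the parameter is zero
    # log-sum-exp style soft minimum
    return -smoothpar * math.log(math.exp(-x / smoothpar)
                                 + math.exp(-y / smoothpar))

print(round(smooth_min_sketch(-4.0, 1.5, 0.0), 2))   # -4.0 (hard minimum)
print(round(smooth_min_sketch(-4.0, -1.5, 1.0), 2))  # -4.08 (slightly below the hard minimum)
```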
def update(self):
"""Calculate the smoothing parameter value.
The following example is explained in some detail in module
|smoothtools|:
>>> from hydpy.models.dam import *
>>> parameterstep()
>>> highestremotedischarge(1.0)
>>> highestremotetolerance(0.0)
>>> derived.highestremotesmoothpar.update()
>>> from hydpy.cythons.smoothutils import smooth_min1
>>> from hydpy import round_
>>> round_(smooth_min1(-4.0, 1.5, derived.highestremotesmoothpar))
-4.0
>>> highestremotetolerance(2.5)
>>> derived.highestremotesmoothpar.update()
>>> round_(smooth_min1(-4.0, -1.5, derived.highestremotesmoothpar))
-4.01
Note that the example above corresponds to the example on function
|calc_smoothpar_min1|, due to the value of parameter
|HighestRemoteDischarge| being 1 m³/s. Doubling the value of
|HighestRemoteDischarge| also doubles the value of
|HighestRemoteSmoothPar| proportionally. This leads to the following
result:
>>> highestremotedischarge(2.0)
>>> derived.highestremotesmoothpar.update()
>>> round_(smooth_min1(-4.0, 1.0, derived.highestremotesmoothpar))
-4.02
This relationship between |HighestRemoteDischarge| and
|HighestRemoteSmoothPar| prevents any smoothing when
the value of |HighestRemoteDischarge| is zero:
>>> highestremotedischarge(0.0)
>>> derived.highestremotesmoothpar.update()
>>> round_(smooth_min1(1.0, 1.0, derived.highestremotesmoothpar))
1.0
In addition, |HighestRemoteSmoothPar| is set to zero if
|HighestRemoteDischarge| is infinity (because no actual value
will ever come into the vicinity of infinity, meaning no
value would be changed through smoothing anyway):
>>> highestremotedischarge(inf)
>>> derived.highestremotesmoothpar.update()
>>> round_(smooth_min1(1.0, 1.0, derived.highestremotesmoothpar))
1.0
"""
control = self.subpars.pars.control
if numpy.isinf(control.highestremotedischarge):
self(0.0)
else:
self(control.highestremotedischarge *
smoothtools.calc_smoothpar_min1(control.highestremotetolerance)
) | def function[update, parameter[self]]:
constant[Calculate the smoothing parameter value.
The following example is explained in some detail in module
|smoothtools|:
>>> from hydpy.models.dam import *
>>> parameterstep()
>>> highestremotedischarge(1.0)
>>> highestremotetolerance(0.0)
>>> derived.highestremotesmoothpar.update()
>>> from hydpy.cythons.smoothutils import smooth_min1
>>> from hydpy import round_
>>> round_(smooth_min1(-4.0, 1.5, derived.highestremotesmoothpar))
-4.0
>>> highestremotetolerance(2.5)
>>> derived.highestremotesmoothpar.update()
>>> round_(smooth_min1(-4.0, -1.5, derived.highestremotesmoothpar))
-4.01
Note that the example above corresponds to the example on function
|calc_smoothpar_min1|, due to the value of parameter
|HighestRemoteDischarge| being 1 m³/s. Doubling the value of
|HighestRemoteDischarge| also doubles the value of
|HighestRemoteSmoothPar| proportionally. This leads to the following
result:
>>> highestremotedischarge(2.0)
>>> derived.highestremotesmoothpar.update()
>>> round_(smooth_min1(-4.0, 1.0, derived.highestremotesmoothpar))
-4.02
This relationship between |HighestRemoteDischarge| and
|HighestRemoteSmoothPar| prevents any smoothing when
the value of |HighestRemoteDischarge| is zero:
>>> highestremotedischarge(0.0)
>>> derived.highestremotesmoothpar.update()
>>> round_(smooth_min1(1.0, 1.0, derived.highestremotesmoothpar))
1.0
In addition, |HighestRemoteSmoothPar| is set to zero if
|HighestRemoteDischarge| is infinity (because no actual value
will ever come into the vicinity of infinity, meaning no
value would be changed through smoothing anyway):
>>> highestremotedischarge(inf)
>>> derived.highestremotesmoothpar.update()
>>> round_(smooth_min1(1.0, 1.0, derived.highestremotesmoothpar))
1.0
]
variable[control] assign[=] name[self].subpars.pars.control
if call[name[numpy].isinf, parameter[name[control].highestremotedischarge]] begin[:]
call[name[self], parameter[constant[0.0]]] | keyword[def] identifier[update] ( identifier[self] ):
literal[string]
identifier[control] = identifier[self] . identifier[subpars] . identifier[pars] . identifier[control]
keyword[if] identifier[numpy] . identifier[isinf] ( identifier[control] . identifier[highestremotedischarge] ):
identifier[self] ( literal[int] )
keyword[else] :
identifier[self] ( identifier[control] . identifier[highestremotedischarge] *
identifier[smoothtools] . identifier[calc_smoothpar_min1] ( identifier[control] . identifier[highestremotetolerance] )
) | def update(self):
"""Calculate the smoothing parameter value.
The following example is explained in some detail in module
|smoothtools|:
>>> from hydpy.models.dam import *
>>> parameterstep()
>>> highestremotedischarge(1.0)
>>> highestremotetolerance(0.0)
>>> derived.highestremotesmoothpar.update()
>>> from hydpy.cythons.smoothutils import smooth_min1
>>> from hydpy import round_
>>> round_(smooth_min1(-4.0, 1.5, derived.highestremotesmoothpar))
-4.0
>>> highestremotetolerance(2.5)
>>> derived.highestremotesmoothpar.update()
>>> round_(smooth_min1(-4.0, -1.5, derived.highestremotesmoothpar))
-4.01
Note that the example above corresponds to the example on function
|calc_smoothpar_min1|, due to the value of parameter
|HighestRemoteDischarge| being 1 m³/s. Doubling the value of
|HighestRemoteDischarge| also doubles the value of
|HighestRemoteSmoothPar| proportionally. This leads to the following
result:
>>> highestremotedischarge(2.0)
>>> derived.highestremotesmoothpar.update()
>>> round_(smooth_min1(-4.0, 1.0, derived.highestremotesmoothpar))
-4.02
This relationship between |HighestRemoteDischarge| and
|HighestRemoteSmoothPar| prevents any smoothing when
the value of |HighestRemoteDischarge| is zero:
>>> highestremotedischarge(0.0)
>>> derived.highestremotesmoothpar.update()
>>> round_(smooth_min1(1.0, 1.0, derived.highestremotesmoothpar))
1.0
In addition, |HighestRemoteSmoothPar| is set to zero if
|HighestRemoteDischarge| is infinity (because no actual value
will ever come into the vicinity of infinity, meaning no
value would be changed through smoothing anyway):
>>> highestremotedischarge(inf)
>>> derived.highestremotesmoothpar.update()
>>> round_(smooth_min1(1.0, 1.0, derived.highestremotesmoothpar))
1.0
"""
control = self.subpars.pars.control
if numpy.isinf(control.highestremotedischarge):
self(0.0) # depends on [control=['if'], data=[]]
else:
self(control.highestremotedischarge * smoothtools.calc_smoothpar_min1(control.highestremotetolerance)) |
def niftilist_mask_to_array(img_filelist, mask_file=None, outdtype=None):
"""From the list of absolute paths to nifti files, creates a Numpy array
with the masked data.
Parameters
----------
img_filelist: list of str
List of absolute file paths to nifti files. All nifti files must have
the same shape.
mask_file: str
Path to a Nifti mask file.
Should be the same shape as the files in img_filelist.
outdtype: dtype
Type of the elements of the array, if not set will obtain the dtype from
the first nifti file.
Returns
-------
outmat:
Numpy array with shape N x np.count_nonzero(mask) containing the N files
as flat masked vectors.
mask_data:
The mask volume read from mask_file, whose nonzero indices map the flat
vectors back into 3D space.
"""
img = check_img(img_filelist[0])
if not outdtype:
outdtype = img.dtype
mask_data, _ = load_mask_data(mask_file)
indices = np.where(mask_data)
mask = check_img(mask_file)
outmat = np.zeros((len(img_filelist), np.count_nonzero(mask_data)),
dtype=outdtype)
for i, img_item in enumerate(img_filelist):
img = check_img(img_item)
if not are_compatible_imgs(img, mask):
raise NiftiFilesNotCompatible(repr_imgs(img), repr_imgs(mask_file))
vol = get_img_data(img)
outmat[i, :] = vol[indices]
return outmat, mask_data | def function[niftilist_mask_to_array, parameter[img_filelist, mask_file, outdtype]]:
constant[From the list of absolute paths to nifti files, creates a Numpy array
with the masked data.
Parameters
----------
img_filelist: list of str
List of absolute file paths to nifti files. All nifti files must have
the same shape.
mask_file: str
Path to a Nifti mask file.
Should be the same shape as the files in img_filelist.
outdtype: dtype
Type of the elements of the array, if not set will obtain the dtype from
the first nifti file.
Returns
-------
outmat:
Numpy array with shape N x np.count_nonzero(mask) containing the N files
as flat masked vectors.
mask_data:
The mask volume read from mask_file, whose nonzero indices map the flat
vectors back into 3D space.
]
variable[img] assign[=] call[name[check_img], parameter[call[name[img_filelist]][constant[0]]]]
if <ast.UnaryOp object at 0x7da1afef8b20> begin[:]
variable[outdtype] assign[=] name[img].dtype
<ast.Tuple object at 0x7da1afef99f0> assign[=] call[name[load_mask_data], parameter[name[mask_file]]]
variable[indices] assign[=] call[name[np].where, parameter[name[mask_data]]]
variable[mask] assign[=] call[name[check_img], parameter[name[mask_file]]]
variable[outmat] assign[=] call[name[np].zeros, parameter[tuple[[<ast.Call object at 0x7da1afef8280>, <ast.Call object at 0x7da1afefa500>]]]]
for taget[tuple[[<ast.Name object at 0x7da1afef8370>, <ast.Name object at 0x7da1afef8760>]]] in starred[call[name[enumerate], parameter[name[img_filelist]]]] begin[:]
variable[img] assign[=] call[name[check_img], parameter[name[img_item]]]
if <ast.UnaryOp object at 0x7da1afef9b70> begin[:]
<ast.Raise object at 0x7da1afefa2c0>
variable[vol] assign[=] call[name[get_img_data], parameter[name[img]]]
call[name[outmat]][tuple[[<ast.Name object at 0x7da1afef8340>, <ast.Slice object at 0x7da1afef89a0>]]] assign[=] call[name[vol]][name[indices]]
return[tuple[[<ast.Name object at 0x7da1afef85e0>, <ast.Name object at 0x7da1afef9a20>]]] | keyword[def] identifier[niftilist_mask_to_array] ( identifier[img_filelist] , identifier[mask_file] = keyword[None] , identifier[outdtype] = keyword[None] ):
literal[string]
identifier[img] = identifier[check_img] ( identifier[img_filelist] [ literal[int] ])
keyword[if] keyword[not] identifier[outdtype] :
identifier[outdtype] = identifier[img] . identifier[dtype]
identifier[mask_data] , identifier[_] = identifier[load_mask_data] ( identifier[mask_file] )
identifier[indices] = identifier[np] . identifier[where] ( identifier[mask_data] )
identifier[mask] = identifier[check_img] ( identifier[mask_file] )
identifier[outmat] = identifier[np] . identifier[zeros] (( identifier[len] ( identifier[img_filelist] ), identifier[np] . identifier[count_nonzero] ( identifier[mask_data] )),
identifier[dtype] = identifier[outdtype] )
keyword[for] identifier[i] , identifier[img_item] keyword[in] identifier[enumerate] ( identifier[img_filelist] ):
identifier[img] = identifier[check_img] ( identifier[img_item] )
keyword[if] keyword[not] identifier[are_compatible_imgs] ( identifier[img] , identifier[mask] ):
keyword[raise] identifier[NiftiFilesNotCompatible] ( identifier[repr_imgs] ( identifier[img] ), identifier[repr_imgs] ( identifier[mask_file] ))
identifier[vol] = identifier[get_img_data] ( identifier[img] )
identifier[outmat] [ identifier[i] ,:]= identifier[vol] [ identifier[indices] ]
keyword[return] identifier[outmat] , identifier[mask_data] | def niftilist_mask_to_array(img_filelist, mask_file=None, outdtype=None):
"""From the list of absolute paths to nifti files, creates a Numpy array
with the masked data.
Parameters
----------
img_filelist: list of str
List of absolute file paths to nifti files. All nifti files must have
the same shape.
mask_file: str
Path to a Nifti mask file.
Should be the same shape as the files in img_filelist.
outdtype: dtype
Type of the elements of the array, if not set will obtain the dtype from
the first nifti file.
Returns
-------
outmat:
Numpy array with shape N x np.count_nonzero(mask) containing the N files
as flat masked vectors.
mask_data:
The mask volume read from mask_file, whose nonzero indices map the flat
vectors back into 3D space.
"""
img = check_img(img_filelist[0])
if not outdtype:
outdtype = img.dtype # depends on [control=['if'], data=[]]
(mask_data, _) = load_mask_data(mask_file)
indices = np.where(mask_data)
mask = check_img(mask_file)
outmat = np.zeros((len(img_filelist), np.count_nonzero(mask_data)), dtype=outdtype)
for (i, img_item) in enumerate(img_filelist):
img = check_img(img_item)
if not are_compatible_imgs(img, mask):
raise NiftiFilesNotCompatible(repr_imgs(img), repr_imgs(mask_file)) # depends on [control=['if'], data=[]]
vol = get_img_data(img)
outmat[i, :] = vol[indices] # depends on [control=['for'], data=[]]
return (outmat, mask_data) |
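A NumPy-only sketch of the masking pattern the function above implements, with random arrays standing in for the nifti volumes; the file I/O through `check_img` and `load_mask_data` is omitted and all names here are illustrative:

```python
import numpy as np

vols = [np.random.rand(4, 4, 4) for _ in range(3)]  # stand-ins for nifti volumes
mask_data = vols[0] > 0.5                           # boolean mask volume
indices = np.where(mask_data)                       # 3D indices of masked voxels

# One flat vector of masked voxels per volume, as in the function above.
outmat = np.zeros((len(vols), np.count_nonzero(mask_data)))
for i, vol in enumerate(vols):
    outmat[i, :] = vol[indices]

print(outmat.shape)  # (3, <number of masked voxels>)
```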
def status(zpool=None):
'''
Return the status of the named zpool
zpool : string
optional name of storage pool
.. versionadded:: 2016.3.0
CLI Example:
.. code-block:: bash
salt '*' zpool.status myzpool
'''
ret = OrderedDict()
## collect status output
res = __salt__['cmd.run_all'](
__utils__['zfs.zpool_command']('status', target=zpool),
python_shell=False,
)
if res['retcode'] != 0:
return __utils__['zfs.parse_command_result'](res)
# NOTE: command output for reference
# =====================================================================
# pool: data
# state: ONLINE
# scan: scrub repaired 0 in 2h27m with 0 errors on Mon Jan 8 03:27:25 2018
# config:
#
# NAME STATE READ WRITE CKSUM
# data ONLINE 0 0 0
# mirror-0 ONLINE 0 0 0
# c0tXXXXCXXXXXXXXXXXd0 ONLINE 0 0 0
# c0tXXXXCXXXXXXXXXXXd0 ONLINE 0 0 0
# c0tXXXXCXXXXXXXXXXXd0 ONLINE 0 0 0
#
# errors: No known data errors
# =====================================================================
## parse status output
# NOTE: output is 'key: value' except for the 'config' key.
# multiple pools will repeat the output, so we switch pools when
# we see 'pool:'
current_pool = None
current_prop = None
for zpd in res['stdout'].splitlines():
if zpd.strip() == '':
continue
if ':' in zpd:
# NOTE: line is 'key: value' format, we just update a dict
prop = zpd.split(':')[0].strip()
value = ":".join(zpd.split(':')[1:]).strip()
if prop == 'pool' and current_pool != value:
current_pool = value
ret[current_pool] = OrderedDict()
if prop != 'pool':
ret[current_pool][prop] = value
current_prop = prop
else:
# NOTE: we append the line output to the last property
# this should only happen once we hit the config
# section
ret[current_pool][current_prop] = "{0}\n{1}".format(
ret[current_pool][current_prop],
zpd
)
## parse config property for each pool
# NOTE: the config property has some structured data
# sadly this data is in a different format than
# the rest and it needs further processing
for pool in ret:
if 'config' not in ret[pool]:
continue
header = None
root_vdev = None
vdev = None
dev = None
rdev = None
config = ret[pool]['config']
config_data = OrderedDict()
for line in config.splitlines():
# NOTE: the first line is the header
# we grab all the non-whitespace values
if not header:
header = line.strip().lower()
header = [x for x in header.split(' ') if x not in ['']]
continue
# NOTE: data is indented by 1 tab, then multiples of 2 spaces
# to differentiate root vdev, vdev, and dev
#
# we just strip the initial tab (can't use .strip() here)
if line[0] == "\t":
line = line[1:]
# NOTE: transform data into dict
stat_data = OrderedDict(list(zip(
header,
[x for x in line.strip().split(' ') if x not in ['']],
)))
# NOTE: decode the zfs values properly
stat_data = __utils__['zfs.from_auto_dict'](stat_data)
# NOTE: store stat_data in the proper location
if line.startswith(' ' * 6):
rdev = stat_data['name']
config_data[root_vdev][vdev][dev][rdev] = stat_data
elif line.startswith(' ' * 4):
rdev = None
dev = stat_data['name']
config_data[root_vdev][vdev][dev] = stat_data
elif line.startswith(' ' * 2):
rdev = dev = None
vdev = stat_data['name']
config_data[root_vdev][vdev] = stat_data
else:
rdev = dev = vdev = None
root_vdev = stat_data['name']
config_data[root_vdev] = stat_data
# NOTE: name already used as identifier, drop duplicate data
del stat_data['name']
ret[pool]['config'] = config_data
return ret | def function[status, parameter[zpool]]:
constant[
Return the status of the named zpool
zpool : string
optional name of storage pool
.. versionadded:: 2016.3.0
CLI Example:
.. code-block:: bash
salt '*' zpool.status myzpool
]
variable[ret] assign[=] call[name[OrderedDict], parameter[]]
variable[res] assign[=] call[call[name[__salt__]][constant[cmd.run_all]], parameter[call[call[name[__utils__]][constant[zfs.zpool_command]], parameter[constant[status]]]]]
if compare[call[name[res]][constant[retcode]] not_equal[!=] constant[0]] begin[:]
return[call[call[name[__utils__]][constant[zfs.parse_command_result]], parameter[name[res]]]]
variable[current_pool] assign[=] constant[None]
variable[current_prop] assign[=] constant[None]
for taget[name[zpd]] in starred[call[call[name[res]][constant[stdout]].splitlines, parameter[]]] begin[:]
if compare[call[name[zpd].strip, parameter[]] equal[==] constant[]] begin[:]
continue
if compare[constant[:] in name[zpd]] begin[:]
variable[prop] assign[=] call[call[call[name[zpd].split, parameter[constant[:]]]][constant[0]].strip, parameter[]]
variable[value] assign[=] call[call[constant[:].join, parameter[call[call[name[zpd].split, parameter[constant[:]]]][<ast.Slice object at 0x7da1b216b6d0>]]].strip, parameter[]]
if <ast.BoolOp object at 0x7da1b2168970> begin[:]
variable[current_pool] assign[=] name[value]
call[name[ret]][name[current_pool]] assign[=] call[name[OrderedDict], parameter[]]
if compare[name[prop] not_equal[!=] constant[pool]] begin[:]
call[call[name[ret]][name[current_pool]]][name[prop]] assign[=] name[value]
variable[current_prop] assign[=] name[prop]
for taget[name[pool]] in starred[name[ret]] begin[:]
if compare[constant[config] <ast.NotIn object at 0x7da2590d7190> call[name[ret]][name[pool]]] begin[:]
continue
variable[header] assign[=] constant[None]
variable[root_vdev] assign[=] constant[None]
variable[vdev] assign[=] constant[None]
variable[dev] assign[=] constant[None]
variable[rdev] assign[=] constant[None]
variable[config] assign[=] call[call[name[ret]][name[pool]]][constant[config]]
variable[config_data] assign[=] call[name[OrderedDict], parameter[]]
for taget[name[line]] in starred[call[name[config].splitlines, parameter[]]] begin[:]
if <ast.UnaryOp object at 0x7da1b216b8b0> begin[:]
variable[header] assign[=] call[call[name[line].strip, parameter[]].lower, parameter[]]
variable[header] assign[=] <ast.ListComp object at 0x7da1b216a6b0>
continue
if compare[call[name[line]][constant[0]] equal[==] constant[ ]] begin[:]
variable[line] assign[=] call[name[line]][<ast.Slice object at 0x7da1b21683a0>]
variable[stat_data] assign[=] call[name[OrderedDict], parameter[call[name[list], parameter[call[name[zip], parameter[name[header], <ast.ListComp object at 0x7da1b216b5b0>]]]]]]
variable[stat_data] assign[=] call[call[name[__utils__]][constant[zfs.from_auto_dict]], parameter[name[stat_data]]]
if call[name[line].startswith, parameter[binary_operation[constant[ ] * constant[6]]]] begin[:]
variable[rdev] assign[=] call[name[stat_data]][constant[name]]
call[call[call[call[name[config_data]][name[root_vdev]]][name[vdev]]][name[dev]]][name[rdev]] assign[=] name[stat_data]
<ast.Delete object at 0x7da1b206bca0>
call[call[name[ret]][name[pool]]][constant[config]] assign[=] name[config_data]
return[name[ret]] | keyword[def] identifier[status] ( identifier[zpool] = keyword[None] ):
literal[string]
identifier[ret] = identifier[OrderedDict] ()
identifier[res] = identifier[__salt__] [ literal[string] ](
identifier[__utils__] [ literal[string] ]( literal[string] , identifier[target] = identifier[zpool] ),
identifier[python_shell] = keyword[False] ,
)
keyword[if] identifier[res] [ literal[string] ]!= literal[int] :
keyword[return] identifier[__utils__] [ literal[string] ]( identifier[res] )
identifier[current_pool] = keyword[None]
identifier[current_prop] = keyword[None]
keyword[for] identifier[zpd] keyword[in] identifier[res] [ literal[string] ]. identifier[splitlines] ():
keyword[if] identifier[zpd] . identifier[strip] ()== literal[string] :
keyword[continue]
keyword[if] literal[string] keyword[in] identifier[zpd] :
identifier[prop] = identifier[zpd] . identifier[split] ( literal[string] )[ literal[int] ]. identifier[strip] ()
identifier[value] = literal[string] . identifier[join] ( identifier[zpd] . identifier[split] ( literal[string] )[ literal[int] :]). identifier[strip] ()
keyword[if] identifier[prop] == literal[string] keyword[and] identifier[current_pool] != identifier[value] :
identifier[current_pool] = identifier[value]
identifier[ret] [ identifier[current_pool] ]= identifier[OrderedDict] ()
keyword[if] identifier[prop] != literal[string] :
identifier[ret] [ identifier[current_pool] ][ identifier[prop] ]= identifier[value]
identifier[current_prop] = identifier[prop]
keyword[else] :
identifier[ret] [ identifier[current_pool] ][ identifier[current_prop] ]= literal[string] . identifier[format] (
identifier[ret] [ identifier[current_pool] ][ identifier[current_prop] ],
identifier[zpd]
)
keyword[for] identifier[pool] keyword[in] identifier[ret] :
keyword[if] literal[string] keyword[not] keyword[in] identifier[ret] [ identifier[pool] ]:
keyword[continue]
identifier[header] = keyword[None]
identifier[root_vdev] = keyword[None]
identifier[vdev] = keyword[None]
identifier[dev] = keyword[None]
identifier[rdev] = keyword[None]
identifier[config] = identifier[ret] [ identifier[pool] ][ literal[string] ]
identifier[config_data] = identifier[OrderedDict] ()
keyword[for] identifier[line] keyword[in] identifier[config] . identifier[splitlines] ():
keyword[if] keyword[not] identifier[header] :
identifier[header] = identifier[line] . identifier[strip] (). identifier[lower] ()
identifier[header] =[ identifier[x] keyword[for] identifier[x] keyword[in] identifier[header] . identifier[split] ( literal[string] ) keyword[if] identifier[x] keyword[not] keyword[in] [ literal[string] ]]
keyword[continue]
keyword[if] identifier[line] [ literal[int] ]== literal[string] :
identifier[line] = identifier[line] [ literal[int] :]
identifier[stat_data] = identifier[OrderedDict] ( identifier[list] ( identifier[zip] (
identifier[header] ,
[ identifier[x] keyword[for] identifier[x] keyword[in] identifier[line] . identifier[strip] (). identifier[split] ( literal[string] ) keyword[if] identifier[x] keyword[not] keyword[in] [ literal[string] ]],
)))
identifier[stat_data] = identifier[__utils__] [ literal[string] ]( identifier[stat_data] )
keyword[if] identifier[line] . identifier[startswith] ( literal[string] * literal[int] ):
identifier[rdev] = identifier[stat_data] [ literal[string] ]
identifier[config_data] [ identifier[root_vdev] ][ identifier[vdev] ][ identifier[dev] ][ identifier[rdev] ]= identifier[stat_data]
keyword[elif] identifier[line] . identifier[startswith] ( literal[string] * literal[int] ):
identifier[rdev] = keyword[None]
identifier[dev] = identifier[stat_data] [ literal[string] ]
identifier[config_data] [ identifier[root_vdev] ][ identifier[vdev] ][ identifier[dev] ]= identifier[stat_data]
keyword[elif] identifier[line] . identifier[startswith] ( literal[string] * literal[int] ):
identifier[rdev] = identifier[dev] = keyword[None]
identifier[vdev] = identifier[stat_data] [ literal[string] ]
identifier[config_data] [ identifier[root_vdev] ][ identifier[vdev] ]= identifier[stat_data]
keyword[else] :
identifier[rdev] = identifier[dev] = identifier[vdev] = keyword[None]
identifier[root_vdev] = identifier[stat_data] [ literal[string] ]
identifier[config_data] [ identifier[root_vdev] ]= identifier[stat_data]
keyword[del] identifier[stat_data] [ literal[string] ]
identifier[ret] [ identifier[pool] ][ literal[string] ]= identifier[config_data]
keyword[return] identifier[ret] | def status(zpool=None):
"""
Return the status of the named zpool
zpool : string
optional name of storage pool
.. versionadded:: 2016.3.0
CLI Example:
.. code-block:: bash
salt '*' zpool.status myzpool
"""
ret = OrderedDict()
## collect status output
res = __salt__['cmd.run_all'](__utils__['zfs.zpool_command']('status', target=zpool), python_shell=False)
if res['retcode'] != 0:
return __utils__['zfs.parse_command_result'](res) # depends on [control=['if'], data=[]]
# NOTE: command output for reference
# =====================================================================
# pool: data
# state: ONLINE
# scan: scrub repaired 0 in 2h27m with 0 errors on Mon Jan 8 03:27:25 2018
# config:
#
# NAME STATE READ WRITE CKSUM
# data ONLINE 0 0 0
# mirror-0 ONLINE 0 0 0
# c0tXXXXCXXXXXXXXXXXd0 ONLINE 0 0 0
# c0tXXXXCXXXXXXXXXXXd0 ONLINE 0 0 0
# c0tXXXXCXXXXXXXXXXXd0 ONLINE 0 0 0
#
# errors: No known data errors
# =====================================================================
## parse status output
# NOTE: output is 'key: value' except for the 'config' key.
# multiple pools will repeat the output, so we switch pools when
# we see 'pool:'
current_pool = None
current_prop = None
for zpd in res['stdout'].splitlines():
if zpd.strip() == '':
continue # depends on [control=['if'], data=[]]
if ':' in zpd:
# NOTE: line is 'key: value' format, we just update a dict
prop = zpd.split(':')[0].strip()
value = ':'.join(zpd.split(':')[1:]).strip()
if prop == 'pool' and current_pool != value:
current_pool = value
ret[current_pool] = OrderedDict() # depends on [control=['if'], data=[]]
if prop != 'pool':
ret[current_pool][prop] = value # depends on [control=['if'], data=['prop']]
current_prop = prop # depends on [control=['if'], data=['zpd']]
else:
# NOTE: we append the line output to the last property
# this should only happen once we hit the config
# section
ret[current_pool][current_prop] = '{0}\n{1}'.format(ret[current_pool][current_prop], zpd) # depends on [control=['for'], data=['zpd']]
## parse config property for each pool
# NOTE: the config property has some structured data
# sadly this data is in a different format than
# the rest and it needs further processing
for pool in ret:
if 'config' not in ret[pool]:
continue # depends on [control=['if'], data=[]]
header = None
root_vdev = None
vdev = None
dev = None
rdev = None
config = ret[pool]['config']
config_data = OrderedDict()
for line in config.splitlines():
# NOTE: the first line is the header
# we grab all the none whitespace values
if not header:
header = line.strip().lower()
header = [x for x in header.split(' ') if x not in ['']]
continue # depends on [control=['if'], data=[]]
# NOTE: data is indented by 1 tab, then multiples of 2 spaces
# to differentiate root vdev, vdev, and dev
#
# we just strip the initial tab (can't use .strip() here)
if line[0] == '\t':
line = line[1:] # depends on [control=['if'], data=[]]
# NOTE: transform data into dict
stat_data = OrderedDict(list(zip(header, [x for x in line.strip().split(' ') if x not in ['']])))
# NOTE: decode the zfs values properly
stat_data = __utils__['zfs.from_auto_dict'](stat_data)
# NOTE: store stat_data in the proper location
if line.startswith(' ' * 6):
rdev = stat_data['name']
config_data[root_vdev][vdev][dev][rdev] = stat_data # depends on [control=['if'], data=[]]
elif line.startswith(' ' * 4):
rdev = None
dev = stat_data['name']
config_data[root_vdev][vdev][dev] = stat_data # depends on [control=['if'], data=[]]
elif line.startswith(' ' * 2):
rdev = dev = None
vdev = stat_data['name']
config_data[root_vdev][vdev] = stat_data # depends on [control=['if'], data=[]]
else:
rdev = dev = vdev = None
root_vdev = stat_data['name']
config_data[root_vdev] = stat_data
# NOTE: name already used as identifier, drop duplicate data
del stat_data['name'] # depends on [control=['for'], data=['line']]
ret[pool]['config'] = config_data # depends on [control=['for'], data=['pool']]
return ret |
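The `config` parse above keys on indentation depth after stripping one leading tab. A standalone sketch of the same walk over an invented sample; the pool text and device names are illustrative only:

```python
sample = ("\tNAME        STATE     READ WRITE CKSUM\n"
          "\tdata        ONLINE       0     0     0\n"
          "\t  mirror-0  ONLINE       0     0     0\n"
          "\t    c0d0    ONLINE       0     0     0\n")

header = None
config_data = {}
root_vdev = vdev = None
for line in sample.splitlines():
    if line[0] == "\t":
        line = line[1:]                  # strip the initial tab
    if header is None:
        header = [x for x in line.strip().lower().split(" ") if x]
        continue
    stat_data = dict(zip(header, [x for x in line.strip().split(" ") if x]))
    if line.startswith(" " * 4):         # device level
        config_data[root_vdev][vdev][stat_data["name"]] = stat_data
    elif line.startswith(" " * 2):       # vdev level
        vdev = stat_data["name"]
        config_data[root_vdev][vdev] = dict(stat_data)
    else:                                # root vdev level
        root_vdev = stat_data["name"]
        config_data[root_vdev] = dict(stat_data)

print(sorted(config_data["data"]))  # ['cksum', 'mirror-0', 'name', 'read', 'state', 'write']
```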
def _return_value(content, ns):
"""Find the return value in a CIM response.
The xmlns is needed because everything in CIM is a million levels
of namespace indirection.
"""
doc = ElementTree.fromstring(content)
query = './/{%(ns)s}%(item)s' % {'ns': ns, 'item': 'ReturnValue'}
rv = doc.find(query)
return int(rv.text) | def function[_return_value, parameter[content, ns]]:
constant[Find the return value in a CIM response.
The xmlns is needed because everything in CIM is a million levels
of namespace indirection.
]
variable[doc] assign[=] call[name[ElementTree].fromstring, parameter[name[content]]]
variable[query] assign[=] binary_operation[constant[.//{%(ns)s}%(item)s] <ast.Mod object at 0x7da2590d6920> dictionary[[<ast.Constant object at 0x7da204565630>, <ast.Constant object at 0x7da2045671f0>], [<ast.Name object at 0x7da204566b30>, <ast.Constant object at 0x7da204565690>]]]
variable[rv] assign[=] call[name[doc].find, parameter[name[query]]]
return[call[name[int], parameter[name[rv].text]]] | keyword[def] identifier[_return_value] ( identifier[content] , identifier[ns] ):
literal[string]
identifier[doc] = identifier[ElementTree] . identifier[fromstring] ( identifier[content] )
identifier[query] = literal[string] %{ literal[string] : identifier[ns] , literal[string] : literal[string] }
identifier[rv] = identifier[doc] . identifier[find] ( identifier[query] )
keyword[return] identifier[int] ( identifier[rv] . identifier[text] ) | def _return_value(content, ns):
"""Find the return value in a CIM response.
The xmlns is needed because everything in CIM is a million levels
of namespace indirection.
"""
doc = ElementTree.fromstring(content)
query = './/{%(ns)s}%(item)s' % {'ns': ns, 'item': 'ReturnValue'}
rv = doc.find(query)
return int(rv.text) |
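A runnable sketch of the namespaced `find` used above; the XML payload and namespace URI are invented for illustration, not taken from a real CIM server:

```python
from xml.etree import ElementTree

ns = "http://example.org/cim"  # hypothetical namespace
content = '<r xmlns="%s"><ReturnValue>0</ReturnValue></r>' % ns
doc = ElementTree.fromstring(content)
rv = doc.find('.//{%(ns)s}%(item)s' % {'ns': ns, 'item': 'ReturnValue'})
print(int(rv.text))  # 0
```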
def consolidateBy(requestContext, seriesList, consolidationFunc):
"""
Takes one metric or a wildcard seriesList and a consolidation function
name.
Valid function names are 'sum', 'average', 'min', and 'max'.
When a graph is drawn where width of the graph size in pixels is smaller
than the number of datapoints to be graphed, Graphite consolidates the
values to prevent line overlap. The consolidateBy() function changes
the consolidation function from the default of 'average' to one of 'sum',
'max', or 'min'. This is especially useful in sales graphs, where
fractional values make no sense and a 'sum' of consolidated values is
appropriate.
Example::
&target=consolidateBy(Sales.widgets.largeBlue, 'sum')
&target=consolidateBy(Servers.web01.sda1.free_space, 'max')
"""
for series in seriesList:
# datalib will throw an exception, so it's not necessary to validate
# here
series.consolidationFunc = consolidationFunc
series.name = 'consolidateBy(%s,"%s")' % (series.name,
series.consolidationFunc)
series.pathExpression = series.name
return seriesList | def function[consolidateBy, parameter[requestContext, seriesList, consolidationFunc]]:
constant[
Takes one metric or a wildcard seriesList and a consolidation function
name.
Valid function names are 'sum', 'average', 'min', and 'max'.
When a graph is drawn where width of the graph size in pixels is smaller
than the number of datapoints to be graphed, Graphite consolidates the
values to prevent line overlap. The consolidateBy() function changes
the consolidation function from the default of 'average' to one of 'sum',
'max', or 'min'. This is especially useful in sales graphs, where
fractional values make no sense and a 'sum' of consolidated values is
appropriate.
Example::
&target=consolidateBy(Sales.widgets.largeBlue, 'sum')
&target=consolidateBy(Servers.web01.sda1.free_space, 'max')
]
for taget[name[series]] in starred[name[seriesList]] begin[:]
name[series].consolidationFunc assign[=] name[consolidationFunc]
name[series].name assign[=] binary_operation[constant[consolidateBy(%s,"%s")] <ast.Mod object at 0x7da2590d6920> tuple[[<ast.Attribute object at 0x7da207f01bd0>, <ast.Attribute object at 0x7da207f01570>]]]
name[series].pathExpression assign[=] name[series].name
return[name[seriesList]] | keyword[def] identifier[consolidateBy] ( identifier[requestContext] , identifier[seriesList] , identifier[consolidationFunc] ):
literal[string]
keyword[for] identifier[series] keyword[in] identifier[seriesList] :
identifier[series] . identifier[consolidationFunc] = identifier[consolidationFunc]
identifier[series] . identifier[name] = literal[string] %( identifier[series] . identifier[name] ,
identifier[series] . identifier[consolidationFunc] )
identifier[series] . identifier[pathExpression] = identifier[series] . identifier[name]
keyword[return] identifier[seriesList] | def consolidateBy(requestContext, seriesList, consolidationFunc):
"""
Takes one metric or a wildcard seriesList and a consolidation function
name.
Valid function names are 'sum', 'average', 'min', and 'max'.
When a graph is drawn where width of the graph size in pixels is smaller
than the number of datapoints to be graphed, Graphite consolidates the
values to prevent line overlap. The consolidateBy() function changes
the consolidation function from the default of 'average' to one of 'sum',
'max', or 'min'. This is especially useful in sales graphs, where
fractional values make no sense and a 'sum' of consolidated values is
appropriate.
Example::
&target=consolidateBy(Sales.widgets.largeBlue, 'sum')
&target=consolidateBy(Servers.web01.sda1.free_space, 'max')
"""
for series in seriesList:
# datalib will throw an exception, so it's not necessary to validate
# here
series.consolidationFunc = consolidationFunc
series.name = 'consolidateBy(%s,"%s")' % (series.name, series.consolidationFunc)
series.pathExpression = series.name # depends on [control=['for'], data=['series']]
return seriesList |
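`consolidateBy` only tags each series; the reduction itself happens later in Graphite's datalib. A toy sketch of that downstream idea, squeezing N points into roughly W buckets with a chosen function (not Graphite's actual code):

```python
def consolidate(points, width, func=max):
    """Reduce `points` to about `width` values, applying `func` per bucket."""
    step = max(1, len(points) // width)
    return [func(points[i:i + step]) for i in range(0, len(points), step)]

data = list(range(12))
print(consolidate(data, 4, func=sum))  # [3, 12, 21, 30]
print(consolidate(data, 4, func=max))  # [2, 5, 8, 11]
```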
def check_program(name):
"""
Uses the shell program "which" to determine whether the named program
is available on the shell PATH.
"""
with open(os.devnull, "w") as null:
try:
subprocess.check_call(("which", name), stdout=null, stderr=null)
except subprocess.CalledProcessError as e:
return False
return True | def function[check_program, parameter[name]]:
constant[
Uses the shell program "which" to determine whether the named program
is available on the shell PATH.
]
with call[name[open], parameter[name[os].devnull, constant[w]]] begin[:]
<ast.Try object at 0x7da1b14da2c0>
return[constant[True]] | keyword[def] identifier[check_program] ( identifier[name] ):
literal[string]
keyword[with] identifier[open] ( identifier[os] . identifier[devnull] , literal[string] ) keyword[as] identifier[null] :
keyword[try] :
identifier[subprocess] . identifier[check_call] (( literal[string] , identifier[name] ), identifier[stdout] = identifier[null] , identifier[stderr] = identifier[null] )
keyword[except] identifier[subprocess] . identifier[CalledProcessError] keyword[as] identifier[e] :
keyword[return] keyword[False]
keyword[return] keyword[True] | def check_program(name):
"""
Uses the shell program "which" to determine whether the named program
is available on the shell PATH.
"""
with open(os.devnull, 'w') as null:
try:
subprocess.check_call(('which', name), stdout=null, stderr=null) # depends on [control=['try'], data=[]]
except subprocess.CalledProcessError as e:
return False # depends on [control=['except'], data=[]] # depends on [control=['with'], data=['null']]
return True |
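On Python 3.3+, the same PATH check can be done without spawning a subprocess, which also sidesteps the case where `which` itself is missing. A sketch using `shutil.which`, offered as an alternative idiom rather than the snippet author's choice:

```python
import shutil

def check_program_alt(name):
    """Return True if `name` resolves to an executable on the PATH."""
    return shutil.which(name) is not None

print(check_program_alt("ls"))            # True on most Unix systems
print(check_program_alt("no-such-tool"))  # False
```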
def wd_addr(self):
"""
Gets the interface and direction as a `WINDIVERT_ADDRESS` structure.
:return: The `WINDIVERT_ADDRESS` structure.
"""
address = windivert_dll.WinDivertAddress()
address.IfIdx, address.SubIfIdx = self.interface
address.Direction = self.direction
return address | def function[wd_addr, parameter[self]]:
constant[
Gets the interface and direction as a `WINDIVERT_ADDRESS` structure.
:return: The `WINDIVERT_ADDRESS` structure.
]
variable[address] assign[=] call[name[windivert_dll].WinDivertAddress, parameter[]]
<ast.Tuple object at 0x7da1b0d0e050> assign[=] name[self].interface
name[address].Direction assign[=] name[self].direction
return[name[address]] | keyword[def] identifier[wd_addr] ( identifier[self] ):
literal[string]
identifier[address] = identifier[windivert_dll] . identifier[WinDivertAddress] ()
identifier[address] . identifier[IfIdx] , identifier[address] . identifier[SubIfIdx] = identifier[self] . identifier[interface]
identifier[address] . identifier[Direction] = identifier[self] . identifier[direction]
keyword[return] identifier[address] | def wd_addr(self):
"""
Gets the interface and direction as a `WINDIVERT_ADDRESS` structure.
:return: The `WINDIVERT_ADDRESS` structure.
"""
address = windivert_dll.WinDivertAddress()
(address.IfIdx, address.SubIfIdx) = self.interface
address.Direction = self.direction
return address |
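A self-contained ctypes sketch of the populate-a-C-struct pattern used above; the field layout is a guess for illustration, not the real `WINDIVERT_ADDRESS` definition:

```python
import ctypes

class AddressSketch(ctypes.Structure):
    _fields_ = [("IfIdx", ctypes.c_uint32),     # hypothetical layout
                ("SubIfIdx", ctypes.c_uint32),
                ("Direction", ctypes.c_uint8)]

addr = AddressSketch()
addr.IfIdx, addr.SubIfIdx = (3, 0)  # unpack an (interface, subinterface) pair
addr.Direction = 1
print(addr.IfIdx, addr.SubIfIdx, addr.Direction)  # 3 0 1
```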
def _create_models_for_relation_step(self, rel_model_name,
rel_key, rel_value, model):
"""
Create a new model linked to the given model.
Syntax:
And `model` with `field` "`value`" has `new model` in the database:
Example:
.. code-block:: gherkin
And project with name "Ball Project" has goals in the database:
| description |
| To have fun playing with balls of twine |
"""
model = get_model(model)
lookup = {rel_key: rel_value}
rel_model = get_model(rel_model_name).objects.get(**lookup)
data = guess_types(self.hashes)
for hash_ in data:
hash_['%s' % rel_model_name] = rel_model
try:
func = _WRITE_MODEL[model]
except KeyError:
func = partial(write_models, model)
func(data, None) | def function[_create_models_for_relation_step, parameter[self, rel_model_name, rel_key, rel_value, model]]:
constant[
Create a new model linked to the given model.
Syntax:
And `model` with `field` "`value`" has `new model` in the database:
Example:
.. code-block:: gherkin
And project with name "Ball Project" has goals in the database:
| description |
| To have fun playing with balls of twine |
]
variable[model] assign[=] call[name[get_model], parameter[name[model]]]
variable[lookup] assign[=] dictionary[[<ast.Name object at 0x7da1b1ad8850>], [<ast.Name object at 0x7da1b1adb490>]]
variable[rel_model] assign[=] call[call[name[get_model], parameter[name[rel_model_name]]].objects.get, parameter[]]
variable[data] assign[=] call[name[guess_types], parameter[name[self].hashes]]
for taget[name[hash_]] in starred[name[data]] begin[:]
call[name[hash_]][binary_operation[constant[%s] <ast.Mod object at 0x7da2590d6920> name[rel_model_name]]] assign[=] name[rel_model]
<ast.Try object at 0x7da1b1a11180>
call[name[func], parameter[name[data], constant[None]]] | keyword[def] identifier[_create_models_for_relation_step] ( identifier[self] , identifier[rel_model_name] ,
identifier[rel_key] , identifier[rel_value] , identifier[model] ):
literal[string]
identifier[model] = identifier[get_model] ( identifier[model] )
identifier[lookup] ={ identifier[rel_key] : identifier[rel_value] }
identifier[rel_model] = identifier[get_model] ( identifier[rel_model_name] ). identifier[objects] . identifier[get] (** identifier[lookup] )
identifier[data] = identifier[guess_types] ( identifier[self] . identifier[hashes] )
keyword[for] identifier[hash_] keyword[in] identifier[data] :
identifier[hash_] [ literal[string] % identifier[rel_model_name] ]= identifier[rel_model]
keyword[try] :
identifier[func] = identifier[_WRITE_MODEL] [ identifier[model] ]
keyword[except] identifier[KeyError] :
identifier[func] = identifier[partial] ( identifier[write_models] , identifier[model] )
identifier[func] ( identifier[data] , keyword[None] ) | def _create_models_for_relation_step(self, rel_model_name, rel_key, rel_value, model):
"""
Create a new model linked to the given model.
Syntax:
And `model` with `field` "`value`" has `new model` in the database:
Example:
.. code-block:: gherkin
And project with name "Ball Project" has goals in the database:
| description |
| To have fun playing with balls of twine |
"""
model = get_model(model)
lookup = {rel_key: rel_value}
rel_model = get_model(rel_model_name).objects.get(**lookup)
data = guess_types(self.hashes)
for hash_ in data:
hash_['%s' % rel_model_name] = rel_model # depends on [control=['for'], data=['hash_']]
try:
func = _WRITE_MODEL[model] # depends on [control=['try'], data=[]]
except KeyError:
func = partial(write_models, model) # depends on [control=['except'], data=[]]
func(data, None) |
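The closing try/except boils down to "use a per-model writer if one is registered, else fall back to a generic writer bound with `functools.partial`". A minimal standalone sketch of that dispatch, with all names invented:

```python
from functools import partial

def write_models(model, data, field=None):
    return "wrote %d %s row(s)" % (len(data), model)

_WRITE_MODEL = {}  # per-model overrides would be registered here

def writer_for(model):
    try:
        return _WRITE_MODEL[model]
    except KeyError:
        return partial(write_models, model)

print(writer_for("goal")([{"description": "To have fun"}], None))
# wrote 1 goal row(s)
```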
def download_icon_font(icon_font, directory):
"""Download given (implemented) icon font into passed directory"""
try:
downloader = AVAILABLE_ICON_FONTS[icon_font]['downloader'](directory)
downloader.download_files()
return downloader
except KeyError: # pragma: no cover
raise Exception("We don't support downloading font '{name}'".format(
name=icon_font)
) | def function[download_icon_font, parameter[icon_font, directory]]:
constant[Download given (implemented) icon font into passed directory]
<ast.Try object at 0x7da18f720640> | keyword[def] identifier[download_icon_font] ( identifier[icon_font] , identifier[directory] ):
literal[string]
keyword[try] :
identifier[downloader] = identifier[AVAILABLE_ICON_FONTS] [ identifier[icon_font] ][ literal[string] ]( identifier[directory] )
identifier[downloader] . identifier[download_files] ()
keyword[return] identifier[downloader]
keyword[except] identifier[KeyError] :
keyword[raise] identifier[Exception] ( literal[string] . identifier[format] (
identifier[name] = identifier[icon_font] )
) | def download_icon_font(icon_font, directory):
"""Download given (implemented) icon font into passed directory"""
try:
downloader = AVAILABLE_ICON_FONTS[icon_font]['downloader'](directory)
downloader.download_files()
return downloader # depends on [control=['try'], data=[]]
except KeyError: # pragma: no cover
raise Exception("We don't support downloading font '{name}'".format(name=icon_font)) # depends on [control=['except'], data=[]] |
def check_matching_coordinates(func):
"""Decorate a function to make sure all given DataArrays have matching coordinates."""
@functools.wraps(func)
def wrapper(*args, **kwargs):
data_arrays = ([a for a in args if isinstance(a, xr.DataArray)]
+ [a for a in kwargs.values() if isinstance(a, xr.DataArray)])
if len(data_arrays) > 1:
first = data_arrays[0]
for other in data_arrays[1:]:
if not first.metpy.coordinates_identical(other):
raise ValueError('Input DataArray arguments must be on same coordinates.')
return func(*args, **kwargs)
return wrapper | def function[check_matching_coordinates, parameter[func]]:
constant[Decorate a function to make sure all given DataArrays have matching coordinates.]
def function[wrapper, parameter[]]:
variable[data_arrays] assign[=] binary_operation[<ast.ListComp object at 0x7da1b22968c0> + <ast.ListComp object at 0x7da1b1d06440>]
if compare[call[name[len], parameter[name[data_arrays]]] greater[>] constant[1]] begin[:]
variable[first] assign[=] call[name[data_arrays]][constant[0]]
for taget[name[other]] in starred[call[name[data_arrays]][<ast.Slice object at 0x7da1b1d06a40>]] begin[:]
if <ast.UnaryOp object at 0x7da1b1d04640> begin[:]
<ast.Raise object at 0x7da1b1d047c0>
return[call[name[func], parameter[<ast.Starred object at 0x7da1b1d04610>]]]
return[name[wrapper]] | keyword[def] identifier[check_matching_coordinates] ( identifier[func] ):
literal[string]
@ identifier[functools] . identifier[wraps] ( identifier[func] )
keyword[def] identifier[wrapper] (* identifier[args] ,** identifier[kwargs] ):
identifier[data_arrays] =([ identifier[a] keyword[for] identifier[a] keyword[in] identifier[args] keyword[if] identifier[isinstance] ( identifier[a] , identifier[xr] . identifier[DataArray] )]
+[ identifier[a] keyword[for] identifier[a] keyword[in] identifier[kwargs] . identifier[values] () keyword[if] identifier[isinstance] ( identifier[a] , identifier[xr] . identifier[DataArray] )])
keyword[if] identifier[len] ( identifier[data_arrays] )> literal[int] :
identifier[first] = identifier[data_arrays] [ literal[int] ]
keyword[for] identifier[other] keyword[in] identifier[data_arrays] [ literal[int] :]:
keyword[if] keyword[not] identifier[first] . identifier[metpy] . identifier[coordinates_identical] ( identifier[other] ):
keyword[raise] identifier[ValueError] ( literal[string] )
keyword[return] identifier[func] (* identifier[args] ,** identifier[kwargs] )
keyword[return] identifier[wrapper] | def check_matching_coordinates(func):
"""Decorate a function to make sure all given DataArrays have matching coordinates."""
@functools.wraps(func)
def wrapper(*args, **kwargs):
data_arrays = [a for a in args if isinstance(a, xr.DataArray)] + [a for a in kwargs.values() if isinstance(a, xr.DataArray)]
if len(data_arrays) > 1:
first = data_arrays[0]
for other in data_arrays[1:]:
if not first.metpy.coordinates_identical(other):
raise ValueError('Input DataArray arguments must be on same coordinates.') # depends on [control=['if'], data=[]] # depends on [control=['for'], data=['other']] # depends on [control=['if'], data=[]]
return func(*args, **kwargs)
return wrapper |
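The decorator's shape generalizes beyond xarray. A runnable sketch that validates plain-list arguments the same way, used here as a stand-in check since `coordinates_identical` needs real DataArrays:

```python
import functools

def check_matching_lengths(func):
    """Decorate a function to require that all list arguments share a length."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        lists = ([a for a in args if isinstance(a, list)]
                 + [a for a in kwargs.values() if isinstance(a, list)])
        if len(lists) > 1:
            first = lists[0]
            for other in lists[1:]:
                if len(other) != len(first):
                    raise ValueError('Input list arguments must have the same length.')
        return func(*args, **kwargs)
    return wrapper

@check_matching_lengths
def add(xs, ys):
    return [x + y for x, y in zip(xs, ys)]

print(add([1, 2], [3, 4]))  # [4, 6]
```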
def _match_modes(self, remote_path, l_st):
"""Match mod, utime and uid/gid with locals one."""
self.sftp.chmod(remote_path, S_IMODE(l_st.st_mode))
self.sftp.utime(remote_path, (l_st.st_atime, l_st.st_mtime))
if self.chown:
self.sftp.chown(remote_path, l_st.st_uid, l_st.st_gid) | def function[_match_modes, parameter[self, remote_path, l_st]]:
constant[Match mod, utime and uid/gid with locals one.]
call[name[self].sftp.chmod, parameter[name[remote_path], call[name[S_IMODE], parameter[name[l_st].st_mode]]]]
call[name[self].sftp.utime, parameter[name[remote_path], tuple[[<ast.Attribute object at 0x7da18dc98d60>, <ast.Attribute object at 0x7da18dc987f0>]]]]
if name[self].chown begin[:]
call[name[self].sftp.chown, parameter[name[remote_path], name[l_st].st_uid, name[l_st].st_gid]] | keyword[def] identifier[_match_modes] ( identifier[self] , identifier[remote_path] , identifier[l_st] ):
literal[string]
identifier[self] . identifier[sftp] . identifier[chmod] ( identifier[remote_path] , identifier[S_IMODE] ( identifier[l_st] . identifier[st_mode] ))
identifier[self] . identifier[sftp] . identifier[utime] ( identifier[remote_path] ,( identifier[l_st] . identifier[st_atime] , identifier[l_st] . identifier[st_mtime] ))
keyword[if] identifier[self] . identifier[chown] :
identifier[self] . identifier[sftp] . identifier[chown] ( identifier[remote_path] , identifier[l_st] . identifier[st_uid] , identifier[l_st] . identifier[st_gid] ) | def _match_modes(self, remote_path, l_st):
"""Match mod, utime and uid/gid with locals one."""
self.sftp.chmod(remote_path, S_IMODE(l_st.st_mode))
self.sftp.utime(remote_path, (l_st.st_atime, l_st.st_mtime))
if self.chown:
self.sftp.chown(remote_path, l_st.st_uid, l_st.st_gid) # depends on [control=['if'], data=[]] |
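The same mode/time propagation can be tried locally with the `os` module. A sketch that copies this script's own permission bits and timestamps onto a temporary file, so no SFTP connection is needed:

```python
import os
import stat
import tempfile

l_st = os.stat(__file__)                         # the "local" stat to mirror
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, stat.S_IMODE(l_st.st_mode))       # match permission bits
os.utime(path, (l_st.st_atime, l_st.st_mtime))   # match access/modify times
print(oct(stat.S_IMODE(os.stat(path).st_mode)))
os.remove(path)
```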
def write_stats (self):
"""Write check statistic info."""
self.writeln()
self.writeln(_("Statistics:"))
if self.stats.downloaded_bytes is not None:
self.writeln(_("Downloaded: %s.") % strformat.strsize(self.stats.downloaded_bytes))
if self.stats.number > 0:
self.writeln(_(
"Content types: %(image)d image, %(text)d text, %(video)d video, "
"%(audio)d audio, %(application)d application, %(mail)d mail"
" and %(other)d other.") % self.stats.link_types)
self.writeln(_("URL lengths: min=%(min)d, max=%(max)d, avg=%(avg)d.") %
dict(min=self.stats.min_url_length,
max=self.stats.max_url_length,
avg=self.stats.avg_url_length))
else:
self.writeln(_("No statistics available since no URLs were checked.")) | def function[write_stats, parameter[self]]:
constant[Write check statistic info.]
call[name[self].writeln, parameter[]]
call[name[self].writeln, parameter[call[name[_], parameter[constant[Statistics:]]]]]
if compare[name[self].stats.downloaded_bytes is_not constant[None]] begin[:]
call[name[self].writeln, parameter[binary_operation[call[name[_], parameter[constant[Downloaded: %s.]]] <ast.Mod object at 0x7da2590d6920> call[name[strformat].strsize, parameter[name[self].stats.downloaded_bytes]]]]]
if compare[name[self].stats.number greater[>] constant[0]] begin[:]
call[name[self].writeln, parameter[binary_operation[call[name[_], parameter[constant[Content types: %(image)d image, %(text)d text, %(video)d video, %(audio)d audio, %(application)d application, %(mail)d mail and %(other)d other.]]] <ast.Mod object at 0x7da2590d6920> name[self].stats.link_types]]]
call[name[self].writeln, parameter[binary_operation[call[name[_], parameter[constant[URL lengths: min=%(min)d, max=%(max)d, avg=%(avg)d.]]] <ast.Mod object at 0x7da2590d6920> call[name[dict], parameter[]]]]] | keyword[def] identifier[write_stats] ( identifier[self] ):
literal[string]
identifier[self] . identifier[writeln] ()
identifier[self] . identifier[writeln] ( identifier[_] ( literal[string] ))
keyword[if] identifier[self] . identifier[stats] . identifier[downloaded_bytes] keyword[is] keyword[not] keyword[None] :
identifier[self] . identifier[writeln] ( identifier[_] ( literal[string] )% identifier[strformat] . identifier[strsize] ( identifier[self] . identifier[stats] . identifier[downloaded_bytes] ))
keyword[if] identifier[self] . identifier[stats] . identifier[number] > literal[int] :
identifier[self] . identifier[writeln] ( identifier[_] (
literal[string]
literal[string]
literal[string] )% identifier[self] . identifier[stats] . identifier[link_types] )
identifier[self] . identifier[writeln] ( identifier[_] ( literal[string] )%
identifier[dict] ( identifier[min] = identifier[self] . identifier[stats] . identifier[min_url_length] ,
identifier[max] = identifier[self] . identifier[stats] . identifier[max_url_length] ,
identifier[avg] = identifier[self] . identifier[stats] . identifier[avg_url_length] ))
keyword[else] :
identifier[self] . identifier[writeln] ( identifier[_] ( literal[string] )) | def write_stats(self):
"""Write check statistic info."""
self.writeln()
self.writeln(_('Statistics:'))
if self.stats.downloaded_bytes is not None:
self.writeln(_('Downloaded: %s.') % strformat.strsize(self.stats.downloaded_bytes)) # depends on [control=['if'], data=[]]
if self.stats.number > 0:
self.writeln(_('Content types: %(image)d image, %(text)d text, %(video)d video, %(audio)d audio, %(application)d application, %(mail)d mail and %(other)d other.') % self.stats.link_types)
self.writeln(_('URL lengths: min=%(min)d, max=%(max)d, avg=%(avg)d.') % dict(min=self.stats.min_url_length, max=self.stats.max_url_length, avg=self.stats.avg_url_length)) # depends on [control=['if'], data=[]]
else:
self.writeln(_('No statistics available since no URLs were checked.')) |
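The statistics lines rely on %-formatting against dictionaries. A two-line standalone reminder of that idiom, with made-up counts:

```python
link_types = {"image": 2, "text": 5, "video": 0, "audio": 1,
              "application": 0, "mail": 0, "other": 3}
print("Content types: %(image)d image, %(text)d text and %(other)d other."
      % link_types)
# Content types: 2 image, 5 text and 3 other.
```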
def render_config(config: Config, indent: str = "") -> str:
"""
Pretty-print a config in sort-of-JSON+comments.
"""
# Add four spaces to the indent.
new_indent = indent + " "
return "".join([
# opening brace + newline
"{\n",
# "type": "...", (if present)
f'{new_indent}"type": "{config.typ3}",\n' if config.typ3 else '',
# render each item
"".join(_render(item, new_indent) for item in config.items),
# indent and close the brace
indent,
"}\n"
]) | def function[render_config, parameter[config, indent]]:
constant[
Pretty-print a config in sort-of-JSON+comments.
]
variable[new_indent] assign[=] binary_operation[name[indent] + constant[ ]]
return[call[constant[].join, parameter[list[[<ast.Constant object at 0x7da204347ee0>, <ast.IfExp object at 0x7da204345a20>, <ast.Call object at 0x7da2054a7610>, <ast.Name object at 0x7da2054a56f0>, <ast.Constant object at 0x7da2054a76d0>]]]]] | keyword[def] identifier[render_config] ( identifier[config] : identifier[Config] , identifier[indent] : identifier[str] = literal[string] )-> identifier[str] :
literal[string]
identifier[new_indent] = identifier[indent] + literal[string]
keyword[return] literal[string] . identifier[join] ([
literal[string] ,
literal[string] keyword[if] identifier[config] . identifier[typ3] keyword[else] literal[string] ,
literal[string] . identifier[join] ( identifier[_render] ( identifier[item] , identifier[new_indent] ) keyword[for] identifier[item] keyword[in] identifier[config] . identifier[items] ),
identifier[indent] ,
literal[string]
]) | def render_config(config: Config, indent: str='') -> str:
"""
Pretty-print a config in sort-of-JSON+comments.
"""
# Add four spaces to the indent.
new_indent = indent + ' '
# opening brace + newline
# "type": "...", (if present)
# render each item
# indent and close the brace
return ''.join(['{\n', f'{new_indent}"type": "{config.typ3}",\n' if config.typ3 else '', ''.join((_render(item, new_indent) for item in config.items)), indent, '}\n']) |
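A cut-down, dependency-free sketch of the same rendering shape; `ConfigSketch` with plain-string items is a stand-in for the real `Config` class and its `_render` helper:

```python
from typing import List, NamedTuple

class ConfigSketch(NamedTuple):
    typ3: str
    items: List[str]

def render_sketch(config: ConfigSketch, indent: str = "") -> str:
    new_indent = indent + "    "
    return "".join([
        "{\n",
        f'{new_indent}"type": "{config.typ3}",\n' if config.typ3 else "",
        "".join(f"{new_indent}{item}\n" for item in config.items),
        indent,
        "}\n",
    ])

print(render_sketch(ConfigSketch("trainer", ['"lr": 0.001,'])))
# {
#     "type": "trainer",
#     "lr": 0.001,
# }
```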
def extract_subsection(im, shape):
r"""
Extracts the middle section of an image
Parameters
----------
im : ND-array
Image from which to extract the subsection
shape : array_like
Can either specify the size of the extracted section or the fractional
size of the image to extract.
Returns
-------
image : ND-array
An ND-array of size given by the ``shape`` argument, taken from the
center of the image.
Examples
--------
>>> import scipy as sp
>>> from porespy.tools import extract_subsection
>>> im = sp.array([[1, 1, 1, 1], [1, 2, 2, 2], [1, 2, 3, 3], [1, 2, 3, 4]])
>>> print(im)
[[1 1 1 1]
[1 2 2 2]
[1 2 3 3]
[1 2 3 4]]
>>> im = extract_subsection(im=im, shape=[2, 2])
>>> print(im)
[[2 2]
[2 3]]
"""
# Check if shape was given as a fraction
shape = sp.array(shape)
if shape[0] < 1:
shape = sp.array(im.shape) * shape
center = sp.array(im.shape) / 2
s_im = []
for dim in range(im.ndim):
r = shape[dim] / 2
lower_im = sp.amax((center[dim] - r, 0))
upper_im = sp.amin((center[dim] + r, im.shape[dim]))
s_im.append(slice(int(lower_im), int(upper_im)))
return im[tuple(s_im)] | def function[extract_subsection, parameter[im, shape]]:
constant[
Extracts the middle section of an image
Parameters
----------
im : ND-array
Image from which to extract the subsection
shape : array_like
Can either specify the size of the extracted section or the fractional
size of the image to extract.
Returns
-------
image : ND-array
An ND-array of size given by the ``shape`` argument, taken from the
center of the image.
Examples
--------
>>> import scipy as sp
>>> from porespy.tools import extract_subsection
>>> im = sp.array([[1, 1, 1, 1], [1, 2, 2, 2], [1, 2, 3, 3], [1, 2, 3, 4]])
>>> print(im)
[[1 1 1 1]
[1 2 2 2]
[1 2 3 3]
[1 2 3 4]]
>>> im = extract_subsection(im=im, shape=[2, 2])
>>> print(im)
[[2 2]
[2 3]]
]
variable[shape] assign[=] call[name[sp].array, parameter[name[shape]]]
if compare[call[name[shape]][constant[0]] less[<] constant[1]] begin[:]
variable[shape] assign[=] binary_operation[call[name[sp].array, parameter[name[im].shape]] * name[shape]]
variable[center] assign[=] binary_operation[call[name[sp].array, parameter[name[im].shape]] / constant[2]]
variable[s_im] assign[=] list[[]]
for taget[name[dim]] in starred[call[name[range], parameter[name[im].ndim]]] begin[:]
variable[r] assign[=] binary_operation[call[name[shape]][name[dim]] / constant[2]]
variable[lower_im] assign[=] call[name[sp].amax, parameter[tuple[[<ast.BinOp object at 0x7da1b06846a0>, <ast.Constant object at 0x7da1b06868c0>]]]]
variable[upper_im] assign[=] call[name[sp].amin, parameter[tuple[[<ast.BinOp object at 0x7da1b0686650>, <ast.Subscript object at 0x7da1b0686890>]]]]
call[name[s_im].append, parameter[call[name[slice], parameter[call[name[int], parameter[name[lower_im]]], call[name[int], parameter[name[upper_im]]]]]]]
return[call[name[im]][call[name[tuple], parameter[name[s_im]]]]] | keyword[def] identifier[extract_subsection] ( identifier[im] , identifier[shape] ):
literal[string]
identifier[shape] = identifier[sp] . identifier[array] ( identifier[shape] )
keyword[if] identifier[shape] [ literal[int] ]< literal[int] :
identifier[shape] = identifier[sp] . identifier[array] ( identifier[im] . identifier[shape] )* identifier[shape]
identifier[center] = identifier[sp] . identifier[array] ( identifier[im] . identifier[shape] )/ literal[int]
identifier[s_im] =[]
keyword[for] identifier[dim] keyword[in] identifier[range] ( identifier[im] . identifier[ndim] ):
identifier[r] = identifier[shape] [ identifier[dim] ]/ literal[int]
identifier[lower_im] = identifier[sp] . identifier[amax] (( identifier[center] [ identifier[dim] ]- identifier[r] , literal[int] ))
identifier[upper_im] = identifier[sp] . identifier[amin] (( identifier[center] [ identifier[dim] ]+ identifier[r] , identifier[im] . identifier[shape] [ identifier[dim] ]))
identifier[s_im] . identifier[append] ( identifier[slice] ( identifier[int] ( identifier[lower_im] ), identifier[int] ( identifier[upper_im] )))
keyword[return] identifier[im] [ identifier[tuple] ( identifier[s_im] )] | def extract_subsection(im, shape):
"""
Extracts the middle section of an image
Parameters
----------
im : ND-array
Image from which to extract the subsection
shape : array_like
Can either specify the size of the extracted section or the fractional
size of the image to extract.
Returns
-------
image : ND-array
An ND-array of size given by the ``shape`` argument, taken from the
center of the image.
Examples
--------
>>> import scipy as sp
>>> from porespy.tools import extract_subsection
>>> im = sp.array([[1, 1, 1, 1], [1, 2, 2, 2], [1, 2, 3, 3], [1, 2, 3, 4]])
>>> print(im)
[[1 1 1 1]
[1 2 2 2]
[1 2 3 3]
[1 2 3 4]]
>>> im = extract_subsection(im=im, shape=[2, 2])
>>> print(im)
[[2 2]
[2 3]]
"""
# Check if shape was given as a fraction
shape = sp.array(shape)
if shape[0] < 1:
shape = sp.array(im.shape) * shape # depends on [control=['if'], data=[]]
center = sp.array(im.shape) / 2
s_im = []
for dim in range(im.ndim):
r = shape[dim] / 2
lower_im = sp.amax((center[dim] - r, 0))
upper_im = sp.amin((center[dim] + r, im.shape[dim]))
s_im.append(slice(int(lower_im), int(upper_im))) # depends on [control=['for'], data=['dim']]
return im[tuple(s_im)] |
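A quick check of the fractional form of ``shape``: because any leading value below 1 is interpreted as a fraction of the image, the same call can crop to a relative window. A doctest-style sketch, assuming the same ``scipy as sp`` import used above:

>>> import scipy as sp
>>> im = sp.arange(16).reshape(4, 4)
>>> extract_subsection(im=im, shape=[0.5, 0.5]).shape
(2, 2)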
def team_2_json(self):
"""
        Transform this ariane_clip3 team object into an Ariane server JSON object.
        :return: Ariane JSON object
"""
LOGGER.debug("Team.team_2_json")
json_obj = {
'teamID': self.id,
'teamName': self.name,
'teamDescription': self.description,
'teamColorCode': self.color_code,
'teamOSInstancesID': self.osi_ids,
'teamApplicationsID': self.app_ids
}
return json.dumps(json_obj) | def function[team_2_json, parameter[self]]:
constant[
Transform this ariane_clip3 team object into an Ariane server JSON object.
:return: Ariane JSON object
]
call[name[LOGGER].debug, parameter[constant[Team.team_2_json]]]
variable[json_obj] assign[=] dictionary[[<ast.Constant object at 0x7da1b137e110>, <ast.Constant object at 0x7da1b137e6b0>, <ast.Constant object at 0x7da1b137dc90>, <ast.Constant object at 0x7da1b137f2e0>, <ast.Constant object at 0x7da1b137f9d0>, <ast.Constant object at 0x7da1b137e9e0>], [<ast.Attribute object at 0x7da1b137d0f0>, <ast.Attribute object at 0x7da1b137e860>, <ast.Attribute object at 0x7da1b137eaa0>, <ast.Attribute object at 0x7da1b137d8a0>, <ast.Attribute object at 0x7da1b137edd0>, <ast.Attribute object at 0x7da1b137ecb0>]]
return[call[name[json].dumps, parameter[name[json_obj]]]] | keyword[def] identifier[team_2_json] ( identifier[self] ):
literal[string]
identifier[LOGGER] . identifier[debug] ( literal[string] )
identifier[json_obj] ={
literal[string] : identifier[self] . identifier[id] ,
literal[string] : identifier[self] . identifier[name] ,
literal[string] : identifier[self] . identifier[description] ,
literal[string] : identifier[self] . identifier[color_code] ,
literal[string] : identifier[self] . identifier[osi_ids] ,
literal[string] : identifier[self] . identifier[app_ids]
}
keyword[return] identifier[json] . identifier[dumps] ( identifier[json_obj] ) | def team_2_json(self):
"""
        Transform this ariane_clip3 team object into an Ariane server JSON object.
        :return: Ariane JSON object
"""
LOGGER.debug('Team.team_2_json')
json_obj = {'teamID': self.id, 'teamName': self.name, 'teamDescription': self.description, 'teamColorCode': self.color_code, 'teamOSInstancesID': self.osi_ids, 'teamApplicationsID': self.app_ids}
return json.dumps(json_obj) |
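For context, a minimal usage sketch; the ``Team`` constructor keywords below are hypothetical and only illustrate that each field serialized above is read from the instance:

>>> team = Team(name='dev', description='dev team', color_code='#11ab3f')  # hypothetical kwargs
>>> payload = team.team_2_json()
>>> 'teamName' in payload
True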
def getPassage(self, urn, inventory=None, context=None):
""" Retrieve a passage
        :param urn: URN identifying the text's passage (Minimum depth: 1)
:type urn: text
:param inventory: Name of the inventory
:type inventory: text
:param context: Number of citation units at the same level of the citation hierarchy as the requested urn, immediately preceding and immediately following the requested urn to include in the reply
:type context: int
:rtype: str
"""
return self.call({
"inv": inventory,
"urn": urn,
"context": context,
"request": "GetPassage"
}) | def function[getPassage, parameter[self, urn, inventory, context]]:
constant[ Retrieve a passage
:param urn: URN identifying the text's passage (Minimum depth: 1)
:type urn: text
:param inventory: Name of the inventory
:type inventory: text
:param context: Number of citation units at the same level of the citation hierarchy as the requested urn, immediately preceding and immediately following the requested urn to include in the reply
:type context: int
:rtype: str
]
return[call[name[self].call, parameter[dictionary[[<ast.Constant object at 0x7da18fe91570>, <ast.Constant object at 0x7da18fe90100>, <ast.Constant object at 0x7da18fe922c0>, <ast.Constant object at 0x7da18fe92830>], [<ast.Name object at 0x7da18fe90d30>, <ast.Name object at 0x7da18fe92740>, <ast.Name object at 0x7da18fe92980>, <ast.Constant object at 0x7da18fe92020>]]]]] | keyword[def] identifier[getPassage] ( identifier[self] , identifier[urn] , identifier[inventory] = keyword[None] , identifier[context] = keyword[None] ):
literal[string]
keyword[return] identifier[self] . identifier[call] ({
literal[string] : identifier[inventory] ,
literal[string] : identifier[urn] ,
literal[string] : identifier[context] ,
literal[string] : literal[string]
}) | def getPassage(self, urn, inventory=None, context=None):
""" Retrieve a passage
        :param urn: URN identifying the text's passage (Minimum depth: 1)
:type urn: text
:param inventory: Name of the inventory
:type inventory: text
:param context: Number of citation units at the same level of the citation hierarchy as the requested urn, immediately preceding and immediately following the requested urn to include in the reply
:type context: int
:rtype: str
"""
return self.call({'inv': inventory, 'urn': urn, 'context': context, 'request': 'GetPassage'}) |
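A hedged usage sketch; the retriever class, endpoint, and CTS URN below are placeholders, and ``self.call`` is assumed to issue the underlying HTTP request:

>>> resolver = CTS('http://cts.example.org/api')  # hypothetical retriever class and endpoint
>>> reply = resolver.getPassage('urn:cts:latinLit:phi1294.phi002:1.1', context=1)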
def get_nsx_controller(self):
"""
        Get the nsx-controller configuration.
        Args:
            None
        Returns: A dictionary containing nsx-controller information.
            Returns a blank dict if no nsx-controller is configured.
        Raises: None
"""
urn = "urn:brocade.com:mgmt:brocade-tunnels"
config = ET.Element("config")
ET.SubElement(config, "nsx-controller", xmlns=urn)
output = self._callback(config, handler='get_config')
result = {}
element = ET.fromstring(str(output))
for controller in element.iter('{%s}nsx-controller'%urn):
result['name'] = controller.find('{%s}name'%urn).text
isactivate = controller.find('{%s}activate'%urn)
if isactivate is None:
result['activate'] = False
else:
result['activate'] = True
connection = controller.find('{%s}connection-addr'%urn)
if connection is None:
result['port'] = None
result['address'] = None
else:
result['port'] = connection.find('{%s}port'%urn).text
address = connection.find('{%s}address'%urn)
if address is None:
result['address'] = None
else:
result['address'] = address.text
return result | def function[get_nsx_controller, parameter[self]]:
constant[
Get the nsx-controller configuration.
Args:
    None
Returns: A dictionary containing nsx-controller information.
    Returns a blank dict if no nsx-controller is configured.
Raises: None
]
variable[urn] assign[=] constant[urn:brocade.com:mgmt:brocade-tunnels]
variable[config] assign[=] call[name[ET].Element, parameter[constant[config]]]
call[name[ET].SubElement, parameter[name[config], constant[nsx-controller]]]
variable[output] assign[=] call[name[self]._callback, parameter[name[config]]]
variable[result] assign[=] dictionary[[], []]
variable[element] assign[=] call[name[ET].fromstring, parameter[call[name[str], parameter[name[output]]]]]
for taget[name[controller]] in starred[call[name[element].iter, parameter[binary_operation[constant[{%s}nsx-controller] <ast.Mod object at 0x7da2590d6920> name[urn]]]]] begin[:]
call[name[result]][constant[name]] assign[=] call[name[controller].find, parameter[binary_operation[constant[{%s}name] <ast.Mod object at 0x7da2590d6920> name[urn]]]].text
variable[isactivate] assign[=] call[name[controller].find, parameter[binary_operation[constant[{%s}activate] <ast.Mod object at 0x7da2590d6920> name[urn]]]]
if compare[name[isactivate] is constant[None]] begin[:]
call[name[result]][constant[activate]] assign[=] constant[False]
variable[connection] assign[=] call[name[controller].find, parameter[binary_operation[constant[{%s}connection-addr] <ast.Mod object at 0x7da2590d6920> name[urn]]]]
if compare[name[connection] is constant[None]] begin[:]
call[name[result]][constant[port]] assign[=] constant[None]
call[name[result]][constant[address]] assign[=] constant[None]
return[name[result]] | keyword[def] identifier[get_nsx_controller] ( identifier[self] ):
literal[string]
identifier[urn] = literal[string]
identifier[config] = identifier[ET] . identifier[Element] ( literal[string] )
identifier[ET] . identifier[SubElement] ( identifier[config] , literal[string] , identifier[xmlns] = identifier[urn] )
identifier[output] = identifier[self] . identifier[_callback] ( identifier[config] , identifier[handler] = literal[string] )
identifier[result] ={}
identifier[element] = identifier[ET] . identifier[fromstring] ( identifier[str] ( identifier[output] ))
keyword[for] identifier[controller] keyword[in] identifier[element] . identifier[iter] ( literal[string] % identifier[urn] ):
identifier[result] [ literal[string] ]= identifier[controller] . identifier[find] ( literal[string] % identifier[urn] ). identifier[text]
identifier[isactivate] = identifier[controller] . identifier[find] ( literal[string] % identifier[urn] )
keyword[if] identifier[isactivate] keyword[is] keyword[None] :
identifier[result] [ literal[string] ]= keyword[False]
keyword[else] :
identifier[result] [ literal[string] ]= keyword[True]
identifier[connection] = identifier[controller] . identifier[find] ( literal[string] % identifier[urn] )
keyword[if] identifier[connection] keyword[is] keyword[None] :
identifier[result] [ literal[string] ]= keyword[None]
identifier[result] [ literal[string] ]= keyword[None]
keyword[else] :
identifier[result] [ literal[string] ]= identifier[connection] . identifier[find] ( literal[string] % identifier[urn] ). identifier[text]
identifier[address] = identifier[connection] . identifier[find] ( literal[string] % identifier[urn] )
keyword[if] identifier[address] keyword[is] keyword[None] :
identifier[result] [ literal[string] ]= keyword[None]
keyword[else] :
identifier[result] [ literal[string] ]= identifier[address] . identifier[text]
keyword[return] identifier[result] | def get_nsx_controller(self):
"""
        Get the nsx-controller configuration.
        Args:
            None
        Returns: A dictionary containing nsx-controller information.
            Returns a blank dict if no nsx-controller is configured.
        Raises: None
"""
urn = 'urn:brocade.com:mgmt:brocade-tunnels'
config = ET.Element('config')
ET.SubElement(config, 'nsx-controller', xmlns=urn)
output = self._callback(config, handler='get_config')
result = {}
element = ET.fromstring(str(output))
for controller in element.iter('{%s}nsx-controller' % urn):
result['name'] = controller.find('{%s}name' % urn).text
isactivate = controller.find('{%s}activate' % urn)
if isactivate is None:
result['activate'] = False # depends on [control=['if'], data=[]]
else:
result['activate'] = True
connection = controller.find('{%s}connection-addr' % urn)
if connection is None:
result['port'] = None
result['address'] = None # depends on [control=['if'], data=[]]
else:
result['port'] = connection.find('{%s}port' % urn).text
address = connection.find('{%s}address' % urn)
if address is None:
result['address'] = None # depends on [control=['if'], data=[]]
else:
result['address'] = address.text # depends on [control=['for'], data=['controller']]
return result |
def _determine_datatype(fields):
"""Determine the numpy dtype of the data."""
# Convert the NRRD type string identifier into a NumPy string identifier using a map
np_typestring = _TYPEMAP_NRRD2NUMPY[fields['type']]
# This is only added if the datatype has more than one byte and is not using ASCII encoding
# Note: Endian is not required for ASCII encoding
if np.dtype(np_typestring).itemsize > 1 and fields['encoding'] not in ['ASCII', 'ascii', 'text', 'txt']:
if 'endian' not in fields:
raise NRRDError('Header is missing required field: "endian".')
elif fields['endian'] == 'big':
np_typestring = '>' + np_typestring
elif fields['endian'] == 'little':
np_typestring = '<' + np_typestring
else:
raise NRRDError('Invalid endian value in header: "%s"' % fields['endian'])
return np.dtype(np_typestring) | def function[_determine_datatype, parameter[fields]]:
constant[Determine the numpy dtype of the data.]
variable[np_typestring] assign[=] call[name[_TYPEMAP_NRRD2NUMPY]][call[name[fields]][constant[type]]]
if <ast.BoolOp object at 0x7da18dc99060> begin[:]
if compare[constant[endian] <ast.NotIn object at 0x7da2590d7190> name[fields]] begin[:]
<ast.Raise object at 0x7da18dc98f70>
return[call[name[np].dtype, parameter[name[np_typestring]]]] | keyword[def] identifier[_determine_datatype] ( identifier[fields] ):
literal[string]
identifier[np_typestring] = identifier[_TYPEMAP_NRRD2NUMPY] [ identifier[fields] [ literal[string] ]]
keyword[if] identifier[np] . identifier[dtype] ( identifier[np_typestring] ). identifier[itemsize] > literal[int] keyword[and] identifier[fields] [ literal[string] ] keyword[not] keyword[in] [ literal[string] , literal[string] , literal[string] , literal[string] ]:
keyword[if] literal[string] keyword[not] keyword[in] identifier[fields] :
keyword[raise] identifier[NRRDError] ( literal[string] )
keyword[elif] identifier[fields] [ literal[string] ]== literal[string] :
identifier[np_typestring] = literal[string] + identifier[np_typestring]
keyword[elif] identifier[fields] [ literal[string] ]== literal[string] :
identifier[np_typestring] = literal[string] + identifier[np_typestring]
keyword[else] :
keyword[raise] identifier[NRRDError] ( literal[string] % identifier[fields] [ literal[string] ])
keyword[return] identifier[np] . identifier[dtype] ( identifier[np_typestring] ) | def _determine_datatype(fields):
"""Determine the numpy dtype of the data."""
# Convert the NRRD type string identifier into a NumPy string identifier using a map
np_typestring = _TYPEMAP_NRRD2NUMPY[fields['type']]
# This is only added if the datatype has more than one byte and is not using ASCII encoding
# Note: Endian is not required for ASCII encoding
if np.dtype(np_typestring).itemsize > 1 and fields['encoding'] not in ['ASCII', 'ascii', 'text', 'txt']:
if 'endian' not in fields:
raise NRRDError('Header is missing required field: "endian".') # depends on [control=['if'], data=[]]
elif fields['endian'] == 'big':
np_typestring = '>' + np_typestring # depends on [control=['if'], data=[]]
elif fields['endian'] == 'little':
np_typestring = '<' + np_typestring # depends on [control=['if'], data=[]]
else:
raise NRRDError('Invalid endian value in header: "%s"' % fields['endian']) # depends on [control=['if'], data=[]]
return np.dtype(np_typestring) |
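To make the endian handling concrete, a sketch of the expected results, assuming ``_TYPEMAP_NRRD2NUMPY`` maps the NRRD type string 'short' to NumPy's 'i2' (an assumption about the module-level table) and a little-endian host:

>>> _determine_datatype({'type': 'short', 'encoding': 'raw', 'endian': 'little'})
dtype('int16')
>>> _determine_datatype({'type': 'short', 'encoding': 'ascii'})  # text encodings skip the endian check
dtype('int16')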
def parse_region(self, include, region_type, region_end, line):
"""
Extract a Shape from a region string
"""
if self.coordsys is None:
raise DS9RegionParserError("No coordinate system specified and a"
" region has been found.")
else:
helper = DS9RegionParser(coordsys=self.coordsys,
include=include,
region_type=region_type,
region_end=region_end,
global_meta=self.global_meta,
line=line)
helper.parse()
self.shapes.append(helper.shape) | def function[parse_region, parameter[self, include, region_type, region_end, line]]:
constant[
Extract a Shape from a region string
]
if compare[name[self].coordsys is constant[None]] begin[:]
<ast.Raise object at 0x7da18dc05ab0> | keyword[def] identifier[parse_region] ( identifier[self] , identifier[include] , identifier[region_type] , identifier[region_end] , identifier[line] ):
literal[string]
keyword[if] identifier[self] . identifier[coordsys] keyword[is] keyword[None] :
keyword[raise] identifier[DS9RegionParserError] ( literal[string]
literal[string] )
keyword[else] :
identifier[helper] = identifier[DS9RegionParser] ( identifier[coordsys] = identifier[self] . identifier[coordsys] ,
identifier[include] = identifier[include] ,
identifier[region_type] = identifier[region_type] ,
identifier[region_end] = identifier[region_end] ,
identifier[global_meta] = identifier[self] . identifier[global_meta] ,
identifier[line] = identifier[line] )
identifier[helper] . identifier[parse] ()
identifier[self] . identifier[shapes] . identifier[append] ( identifier[helper] . identifier[shape] ) | def parse_region(self, include, region_type, region_end, line):
"""
Extract a Shape from a region string
"""
if self.coordsys is None:
raise DS9RegionParserError('No coordinate system specified and a region has been found.') # depends on [control=['if'], data=[]]
else:
helper = DS9RegionParser(coordsys=self.coordsys, include=include, region_type=region_type, region_end=region_end, global_meta=self.global_meta, line=line)
helper.parse()
self.shapes.append(helper.shape) |
def create_schema(self, check_if_exists=False, sync_schema=True,
verbosity=1):
"""
Creates the schema 'schema_name' for this tenant. Optionally checks if
the schema already exists before creating it. Returns true if the
schema was created, false otherwise.
"""
# safety check
_check_schema_name(self.schema_name)
cursor = connection.cursor()
if check_if_exists and schema_exists(self.schema_name):
return False
# create the schema
cursor.execute('CREATE SCHEMA %s' % self.schema_name)
if sync_schema:
call_command('migrate_schemas',
schema_name=self.schema_name,
interactive=False,
verbosity=verbosity)
        connection.set_schema_to_public()
        return True
constant[
Creates the schema 'schema_name' for this tenant. Optionally checks if
the schema already exists before creating it. Returns true if the
schema was created, false otherwise.
]
call[name[_check_schema_name], parameter[name[self].schema_name]]
variable[cursor] assign[=] call[name[connection].cursor, parameter[]]
if <ast.BoolOp object at 0x7da1b18c1990> begin[:]
return[constant[False]]
call[name[cursor].execute, parameter[binary_operation[constant[CREATE SCHEMA %s] <ast.Mod object at 0x7da2590d6920> name[self].schema_name]]]
if name[sync_schema] begin[:]
call[name[call_command], parameter[constant[migrate_schemas]]]
call[name[connection].set_schema_to_public, parameter[]]
return[constant[True]]
identifier[verbosity] = literal[int] ):
literal[string]
identifier[_check_schema_name] ( identifier[self] . identifier[schema_name] )
identifier[cursor] = identifier[connection] . identifier[cursor] ()
keyword[if] identifier[check_if_exists] keyword[and] identifier[schema_exists] ( identifier[self] . identifier[schema_name] ):
keyword[return] keyword[False]
identifier[cursor] . identifier[execute] ( literal[string] % identifier[self] . identifier[schema_name] )
keyword[if] identifier[sync_schema] :
identifier[call_command] ( literal[string] ,
identifier[schema_name] = identifier[self] . identifier[schema_name] ,
identifier[interactive] = keyword[False] ,
identifier[verbosity] = identifier[verbosity] )
identifier[connection] . identifier[set_schema_to_public] () | def create_schema(self, check_if_exists=False, sync_schema=True, verbosity=1):
"""
Creates the schema 'schema_name' for this tenant. Optionally checks if
the schema already exists before creating it. Returns true if the
schema was created, false otherwise.
"""
# safety check
_check_schema_name(self.schema_name)
cursor = connection.cursor()
if check_if_exists and schema_exists(self.schema_name):
return False # depends on [control=['if'], data=[]]
# create the schema
cursor.execute('CREATE SCHEMA %s' % self.schema_name)
if sync_schema:
call_command('migrate_schemas', schema_name=self.schema_name, interactive=False, verbosity=verbosity) # depends on [control=['if'], data=[]]
        connection.set_schema_to_public()
        return True
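Usage sketch; ``Client`` stands in for any tenant model exposing this mixin method:

tenant = Client(schema_name='acme')  # hypothetical tenant model
created = tenant.create_schema(check_if_exists=True, verbosity=0)
# created is False when the schema already existed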
def start_monitoring(seconds_frozen=SECONDS_FROZEN,
test_interval=TEST_INTERVAL):
"""Start monitoring for hanging threads.
    seconds_frozen - how long a thread must hang before its stack
                     trace is printed - default(10)
    test_interval - sleep time of the monitoring thread (in milliseconds)
                    - default(100)
"""
thread = StoppableThread(target=monitor, args=(seconds_frozen,
test_interval))
thread.daemon = True
thread.start()
return thread | def function[start_monitoring, parameter[seconds_frozen, test_interval]]:
constant[Start monitoring for hanging threads.
seconds_frozen - how long a thread must hang before its stack
                 trace is printed - default(10)
test_interval - sleep time of the monitoring thread (in milliseconds)
                - default(100)
]
variable[thread] assign[=] call[name[StoppableThread], parameter[]]
name[thread].daemon assign[=] constant[True]
call[name[thread].start, parameter[]]
return[name[thread]] | keyword[def] identifier[start_monitoring] ( identifier[seconds_frozen] = identifier[SECONDS_FROZEN] ,
identifier[test_interval] = identifier[TEST_INTERVAL] ):
literal[string]
identifier[thread] = identifier[StoppableThread] ( identifier[target] = identifier[monitor] , identifier[args] =( identifier[seconds_frozen] ,
identifier[test_interval] ))
identifier[thread] . identifier[daemon] = keyword[True]
identifier[thread] . identifier[start] ()
keyword[return] identifier[thread] | def start_monitoring(seconds_frozen=SECONDS_FROZEN, test_interval=TEST_INTERVAL):
"""Start monitoring for hanging threads.
    seconds_frozen - how long a thread must hang before its stack
                     trace is printed - default(10)
    test_interval - sleep time of the monitoring thread (in milliseconds)
                    - default(100)
"""
thread = StoppableThread(target=monitor, args=(seconds_frozen, test_interval))
thread.daemon = True
thread.start()
return thread |
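Usage sketch; ``stop`` is assumed to be the method ``StoppableThread`` exposes for shutting the monitor down, since that class is defined elsewhere in the module:

import time
monitor_thread = start_monitoring(seconds_frozen=5, test_interval=50)
time.sleep(1)          # stand-in for the workload being observed
monitor_thread.stop()  # assumed StoppableThread API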
def write_csv(fileobj, rows, encoding=ENCODING, dialect=DIALECT):
"""Dump rows to ``fileobj`` with the given ``encoding`` and CSV ``dialect``."""
csvwriter = csv.writer(fileobj, dialect=dialect)
csv_writerows(csvwriter, rows, encoding) | def function[write_csv, parameter[fileobj, rows, encoding, dialect]]:
constant[Dump rows to ``fileobj`` with the given ``encoding`` and CSV ``dialect``.]
variable[csvwriter] assign[=] call[name[csv].writer, parameter[name[fileobj]]]
call[name[csv_writerows], parameter[name[csvwriter], name[rows], name[encoding]]] | keyword[def] identifier[write_csv] ( identifier[fileobj] , identifier[rows] , identifier[encoding] = identifier[ENCODING] , identifier[dialect] = identifier[DIALECT] ):
literal[string]
identifier[csvwriter] = identifier[csv] . identifier[writer] ( identifier[fileobj] , identifier[dialect] = identifier[dialect] )
identifier[csv_writerows] ( identifier[csvwriter] , identifier[rows] , identifier[encoding] ) | def write_csv(fileobj, rows, encoding=ENCODING, dialect=DIALECT):
"""Dump rows to ``fileobj`` with the given ``encoding`` and CSV ``dialect``."""
csvwriter = csv.writer(fileobj, dialect=dialect)
csv_writerows(csvwriter, rows, encoding) |
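A short usage sketch; the file should be opened the way the ``csv`` module expects (binary mode on Python 2, text mode with ``newline=''`` on Python 3), and ENCODING/DIALECT fall back to the module defaults:

rows = [(u'name', u'city'), (u'Ada', u'London')]
with open('people.csv', 'w') as fileobj:  # on Python 3 add newline=''
    write_csv(fileobj, rows)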
def collapse_initials(name):
"""Remove the space between initials, eg T. A. --> T.A."""
if len(name.split(".")) > 1:
name = re.sub(r'([A-Z]\.)[\s\-]+(?=[A-Z]\.)', r'\1', name)
return name | def function[collapse_initials, parameter[name]]:
constant[Remove the space between initials, e.g. T. A. --> T.A.]
if compare[call[name[len], parameter[call[name[name].split, parameter[constant[.]]]]] greater[>] constant[1]] begin[:]
variable[name] assign[=] call[name[re].sub, parameter[constant[([A-Z]\.)[\s\-]+(?=[A-Z]\.)], constant[\1], name[name]]]
return[name[name]] | keyword[def] identifier[collapse_initials] ( identifier[name] ):
literal[string]
keyword[if] identifier[len] ( identifier[name] . identifier[split] ( literal[string] ))> literal[int] :
identifier[name] = identifier[re] . identifier[sub] ( literal[string] , literal[string] , identifier[name] )
keyword[return] identifier[name] | def collapse_initials(name):
"""Remove the space between initials, eg T. A. --> T.A."""
if len(name.split('.')) > 1:
name = re.sub('([A-Z]\\.)[\\s\\-]+(?=[A-Z]\\.)', '\\1', name) # depends on [control=['if'], data=[]]
return name |
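Two doctest-style checks of the regex above; spaces and hyphens between initials collapse, while the gap before the surname survives because the lookahead requires another initial:

>>> collapse_initials('T. A. Edison')
'T.A. Edison'
>>> collapse_initials('J.-P. Sartre')
'J.P. Sartre'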
def M(self, t, tips=None, gaps=None):
"""See docs for method in `Model` abstract base class."""
assert isinstance(t, float) and t > 0, "Invalid t: {0}".format(t)
with scipy.errstate(under='ignore'): # don't worry if some values 0
if ('expD', t) not in self._cached:
self._cached[('expD', t)] = scipy.exp(self.D * self.mu * t)
expD = self._cached[('expD', t)]
if tips is None:
# swap axes to broadcast multiply D as diagonal matrix
M = broadcastMatrixMultiply((self.A.swapaxes(0, 1) *
expD).swapaxes(1, 0), self.Ainv)
else:
M = broadcastMatrixVectorMultiply((self.A.swapaxes(0, 1)
* expD).swapaxes(1, 0), broadcastGetCols(
self.Ainv, tips))
if gaps is not None:
M[gaps] = scipy.ones(N_CODON, dtype='float')
#if M.min() < -0.01:
# warnings.warn("Large negative value in M(t) being set to 0. "
# "Value is {0}, t is {1}".format(M.min(), t))
M[M < 0] = 0.0
return M | def function[M, parameter[self, t, tips, gaps]]:
constant[See docs for method in `Model` abstract base class.]
assert[<ast.BoolOp object at 0x7da18fe91f60>]
with call[name[scipy].errstate, parameter[]] begin[:]
if compare[tuple[[<ast.Constant object at 0x7da18fe933d0>, <ast.Name object at 0x7da18fe92e90>]] <ast.NotIn object at 0x7da2590d7190> name[self]._cached] begin[:]
call[name[self]._cached][tuple[[<ast.Constant object at 0x7da20ed9a8f0>, <ast.Name object at 0x7da20ed9b820>]]] assign[=] call[name[scipy].exp, parameter[binary_operation[binary_operation[name[self].D * name[self].mu] * name[t]]]]
variable[expD] assign[=] call[name[self]._cached][tuple[[<ast.Constant object at 0x7da1b0b712a0>, <ast.Name object at 0x7da1b0b72590>]]]
if compare[name[tips] is constant[None]] begin[:]
variable[M] assign[=] call[name[broadcastMatrixMultiply], parameter[call[binary_operation[call[name[self].A.swapaxes, parameter[constant[0], constant[1]]] * name[expD]].swapaxes, parameter[constant[1], constant[0]]], name[self].Ainv]]
call[name[M]][compare[name[M] less[<] constant[0]]] assign[=] constant[0.0]
return[name[M]] | keyword[def] identifier[M] ( identifier[self] , identifier[t] , identifier[tips] = keyword[None] , identifier[gaps] = keyword[None] ):
literal[string]
keyword[assert] identifier[isinstance] ( identifier[t] , identifier[float] ) keyword[and] identifier[t] > literal[int] , literal[string] . identifier[format] ( identifier[t] )
keyword[with] identifier[scipy] . identifier[errstate] ( identifier[under] = literal[string] ):
keyword[if] ( literal[string] , identifier[t] ) keyword[not] keyword[in] identifier[self] . identifier[_cached] :
identifier[self] . identifier[_cached] [( literal[string] , identifier[t] )]= identifier[scipy] . identifier[exp] ( identifier[self] . identifier[D] * identifier[self] . identifier[mu] * identifier[t] )
identifier[expD] = identifier[self] . identifier[_cached] [( literal[string] , identifier[t] )]
keyword[if] identifier[tips] keyword[is] keyword[None] :
identifier[M] = identifier[broadcastMatrixMultiply] (( identifier[self] . identifier[A] . identifier[swapaxes] ( literal[int] , literal[int] )*
identifier[expD] ). identifier[swapaxes] ( literal[int] , literal[int] ), identifier[self] . identifier[Ainv] )
keyword[else] :
identifier[M] = identifier[broadcastMatrixVectorMultiply] (( identifier[self] . identifier[A] . identifier[swapaxes] ( literal[int] , literal[int] )
* identifier[expD] ). identifier[swapaxes] ( literal[int] , literal[int] ), identifier[broadcastGetCols] (
identifier[self] . identifier[Ainv] , identifier[tips] ))
keyword[if] identifier[gaps] keyword[is] keyword[not] keyword[None] :
identifier[M] [ identifier[gaps] ]= identifier[scipy] . identifier[ones] ( identifier[N_CODON] , identifier[dtype] = literal[string] )
identifier[M] [ identifier[M] < literal[int] ]= literal[int]
keyword[return] identifier[M] | def M(self, t, tips=None, gaps=None):
"""See docs for method in `Model` abstract base class."""
assert isinstance(t, float) and t > 0, 'Invalid t: {0}'.format(t)
with scipy.errstate(under='ignore'): # don't worry if some values 0
if ('expD', t) not in self._cached:
self._cached['expD', t] = scipy.exp(self.D * self.mu * t) # depends on [control=['if'], data=[]]
expD = self._cached['expD', t]
if tips is None:
# swap axes to broadcast multiply D as diagonal matrix
M = broadcastMatrixMultiply((self.A.swapaxes(0, 1) * expD).swapaxes(1, 0), self.Ainv) # depends on [control=['if'], data=[]]
else:
M = broadcastMatrixVectorMultiply((self.A.swapaxes(0, 1) * expD).swapaxes(1, 0), broadcastGetCols(self.Ainv, tips))
if gaps is not None:
M[gaps] = scipy.ones(N_CODON, dtype='float') # depends on [control=['if'], data=['gaps']] # depends on [control=['with'], data=[]]
#if M.min() < -0.01:
# warnings.warn("Large negative value in M(t) being set to 0. "
# "Value is {0}, t is {1}".format(M.min(), t))
M[M < 0] = 0.0
return M |
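For orientation, this is the standard diagonalized matrix exponential: with the rate matrix Q = A diag(D) A^-1, exp(Q*mu*t) = A exp(D*mu*t) A^-1, so only the per-``t`` diagonal factor needs to be cached. A minimal NumPy sketch of the same identity on a hypothetical 2x2 rate matrix:

import numpy, scipy.linalg
Q = numpy.array([[-1.0, 1.0], [2.0, -2.0]])  # toy rate matrix
D, A = numpy.linalg.eig(Q)                   # Q = A @ diag(D) @ inv(A)
Ainv = numpy.linalg.inv(A)
t = 0.5
M = (A * numpy.exp(D * t)) @ Ainv            # scales column j of A by exp(D[j] * t)
assert numpy.allclose(M, scipy.linalg.expm(Q * t))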
def get_command_info(self, peer_jid, command_name):
"""
Obtain information about a command.
:param peer_jid: JID of the peer to query
:type peer_jid: :class:`~aioxmpp.JID`
:param command_name: Node name of the command
:type command_name: :class:`str`
:rtype: :class:`~.disco.xso.InfoQuery`
:return: Service discovery information about the command
Sends a service discovery query to the service discovery node of the
command. The returned object contains information about the command,
such as the namespaces used by its implementation (generally the
:xep:`4` data forms namespace) and possibly localisations of the
commands name.
The `command_name` can be obtained by inspecting the listing from
:meth:`get_commands` or from well-known command names as defined for
example in :xep:`133`.
"""
disco = self.dependencies[aioxmpp.disco.DiscoClient]
response = yield from disco.query_info(
peer_jid,
node=command_name,
)
return response | def function[get_command_info, parameter[self, peer_jid, command_name]]:
constant[
Obtain information about a command.
:param peer_jid: JID of the peer to query
:type peer_jid: :class:`~aioxmpp.JID`
:param command_name: Node name of the command
:type command_name: :class:`str`
:rtype: :class:`~.disco.xso.InfoQuery`
:return: Service discovery information about the command
Sends a service discovery query to the service discovery node of the
command. The returned object contains information about the command,
such as the namespaces used by its implementation (generally the
:xep:`4` data forms namespace) and possibly localisations of the
commands name.
The `command_name` can be obtained by inspecting the listing from
:meth:`get_commands` or from well-known command names as defined for
example in :xep:`133`.
]
variable[disco] assign[=] call[name[self].dependencies][name[aioxmpp].disco.DiscoClient]
variable[response] assign[=] <ast.YieldFrom object at 0x7da18ede70d0>
return[name[response]] | keyword[def] identifier[get_command_info] ( identifier[self] , identifier[peer_jid] , identifier[command_name] ):
literal[string]
identifier[disco] = identifier[self] . identifier[dependencies] [ identifier[aioxmpp] . identifier[disco] . identifier[DiscoClient] ]
identifier[response] = keyword[yield] keyword[from] identifier[disco] . identifier[query_info] (
identifier[peer_jid] ,
identifier[node] = identifier[command_name] ,
)
keyword[return] identifier[response] | def get_command_info(self, peer_jid, command_name):
"""
Obtain information about a command.
:param peer_jid: JID of the peer to query
:type peer_jid: :class:`~aioxmpp.JID`
:param command_name: Node name of the command
:type command_name: :class:`str`
:rtype: :class:`~.disco.xso.InfoQuery`
:return: Service discovery information about the command
Sends a service discovery query to the service discovery node of the
command. The returned object contains information about the command,
such as the namespaces used by its implementation (generally the
:xep:`4` data forms namespace) and possibly localisations of the
commands name.
The `command_name` can be obtained by inspecting the listing from
:meth:`get_commands` or from well-known command names as defined for
example in :xep:`133`.
"""
disco = self.dependencies[aioxmpp.disco.DiscoClient]
response = (yield from disco.query_info(peer_jid, node=command_name))
return response |
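A usage sketch inside a coroutine; the service class name, JID, and command node are placeholders (the node follows :xep:`133`-style naming):

async def show_command_info(client):
    adhoc = client.summon(aioxmpp.AdHocClient)  # assumed service entry point
    info = await adhoc.get_command_info(
        aioxmpp.JID.fromstr('commands.example.net'),
        'http://jabber.org/protocol/admin#get-online-users',
    )
    print(info.features)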
def mainview(request, **criterias):
'View that handles all page requests.'
view_data = initview(request)
wrap = lambda func: ft.partial(func, _view_data=view_data, **criterias)
return condition(
etag_func=wrap(cache_etag),
last_modified_func=wrap(cache_last_modified) )\
(_mainview)(request, view_data, **criterias) | def function[mainview, parameter[request]]:
constant[View that handles all page requests.]
variable[view_data] assign[=] call[name[initview], parameter[name[request]]]
variable[wrap] assign[=] <ast.Lambda object at 0x7da2041d9870>
return[call[call[call[name[condition], parameter[]], parameter[name[_mainview]]], parameter[name[request], name[view_data]]]] | keyword[def] identifier[mainview] ( identifier[request] ,** identifier[criterias] ):
literal[string]
identifier[view_data] = identifier[initview] ( identifier[request] )
identifier[wrap] = keyword[lambda] identifier[func] : identifier[ft] . identifier[partial] ( identifier[func] , identifier[_view_data] = identifier[view_data] ,** identifier[criterias] )
keyword[return] identifier[condition] (
identifier[etag_func] = identifier[wrap] ( identifier[cache_etag] ),
identifier[last_modified_func] = identifier[wrap] ( identifier[cache_last_modified] ))( identifier[_mainview] )( identifier[request] , identifier[view_data] ,** identifier[criterias] ) | def mainview(request, **criterias):
"""View that handles all page requests."""
view_data = initview(request)
wrap = lambda func: ft.partial(func, _view_data=view_data, **criterias)
return condition(etag_func=wrap(cache_etag), last_modified_func=wrap(cache_last_modified))(_mainview)(request, view_data, **criterias) |
def dump(self):
'''Regurgitate the tables and rows'''
for table in self.tables:
print("*** %s ***" % table.name)
table.dump() | def function[dump, parameter[self]]:
constant[Regurgitate the tables and rows]
for taget[name[table]] in starred[name[self].tables] begin[:]
call[name[print], parameter[binary_operation[constant[*** %s ***] <ast.Mod object at 0x7da2590d6920> name[table].name]]]
call[name[table].dump, parameter[]] | keyword[def] identifier[dump] ( identifier[self] ):
literal[string]
keyword[for] identifier[table] keyword[in] identifier[self] . identifier[tables] :
identifier[print] ( literal[string] % identifier[table] . identifier[name] )
identifier[table] . identifier[dump] () | def dump(self):
"""Regurgitate the tables and rows"""
for table in self.tables:
print('*** %s ***' % table.name)
table.dump() # depends on [control=['for'], data=['table']] |
def deaggregate_record(decoded_data):
    '''Given decoded Kinesis record data, deaggregate it if it was packed using the
    Kinesis Producer Library into individual records. This method is a no-op for any
    records that are not aggregated (but still returns them).
decoded_data - the base64 decoded data that comprises either the KPL aggregated data, or the Kinesis payload directly.
return value - A list of deaggregated Kinesis record payloads (if the data is not aggregated, we just return a list with the payload alone)
'''
is_aggregated = True
#Verify the magic header
data_magic = None
    if len(decoded_data) >= len(aws_kinesis_agg.MAGIC):
data_magic = decoded_data[:len(aws_kinesis_agg.MAGIC)]
else:
print("Not aggregated")
is_aggregated = False
decoded_data_no_magic = decoded_data[len(aws_kinesis_agg.MAGIC):]
if aws_kinesis_agg.MAGIC != data_magic or len(decoded_data_no_magic) <= aws_kinesis_agg.DIGEST_SIZE:
is_aggregated = False
if is_aggregated:
#verify the MD5 digest
message_digest = decoded_data_no_magic[-aws_kinesis_agg.DIGEST_SIZE:]
message_data = decoded_data_no_magic[:-aws_kinesis_agg.DIGEST_SIZE]
md5_calc = md5.new()
md5_calc.update(message_data)
calculated_digest = md5_calc.digest()
if message_digest != calculated_digest:
return [decoded_data]
else:
#Extract the protobuf message
try:
ar = kpl_pb2.AggregatedRecord()
ar.ParseFromString(message_data)
return [mr.data for mr in ar.records]
except BaseException as e:
raise e
else:
return [decoded_data] | def function[deaggregate_record, parameter[decoded_data]]:
constant[Given decoded Kinesis record data, deaggregate it if it was packed using the
Kinesis Producer Library into individual records. This method is a no-op for any
records that are not aggregated (but still returns them).
decoded_data - the base64 decoded data that comprises either the KPL aggregated data, or the Kinesis payload directly.
return value - A list of deaggregated Kinesis record payloads (if the data is not aggregated, we just return a list with the payload alone)
]
variable[is_aggregated] assign[=] constant[True]
variable[data_magic] assign[=] constant[None]
if compare[call[name[len], parameter[name[decoded_data]]] greater_or_equal[>=] call[name[len], parameter[name[aws_kinesis_agg].MAGIC]]] begin[:]
variable[data_magic] assign[=] call[name[decoded_data]][<ast.Slice object at 0x7da20c992f50>]
variable[decoded_data_no_magic] assign[=] call[name[decoded_data]][<ast.Slice object at 0x7da20c991cc0>]
if <ast.BoolOp object at 0x7da2044c0070> begin[:]
variable[is_aggregated] assign[=] constant[False]
if name[is_aggregated] begin[:]
variable[message_digest] assign[=] call[name[decoded_data_no_magic]][<ast.Slice object at 0x7da20c992380>]
variable[message_data] assign[=] call[name[decoded_data_no_magic]][<ast.Slice object at 0x7da20c992cb0>]
variable[md5_calc] assign[=] call[name[md5].new, parameter[]]
call[name[md5_calc].update, parameter[name[message_data]]]
variable[calculated_digest] assign[=] call[name[md5_calc].digest, parameter[]]
if compare[name[message_digest] not_equal[!=] name[calculated_digest]] begin[:]
return[list[[<ast.Name object at 0x7da20e955300>]]] | keyword[def] identifier[deaggregate_record] ( identifier[decoded_data] ):
literal[string]
identifier[is_aggregated] = keyword[True]
identifier[data_magic] = keyword[None]
keyword[if] ( identifier[len] ( identifier[decoded_data] )>= identifier[len] ( identifier[aws_kinesis_agg] . identifier[MAGIC] )):
identifier[data_magic] = identifier[decoded_data] [: identifier[len] ( identifier[aws_kinesis_agg] . identifier[MAGIC] )]
keyword[else] :
identifier[print] ( literal[string] )
identifier[is_aggregated] = keyword[False]
identifier[decoded_data_no_magic] = identifier[decoded_data] [ identifier[len] ( identifier[aws_kinesis_agg] . identifier[MAGIC] ):]
keyword[if] identifier[aws_kinesis_agg] . identifier[MAGIC] != identifier[data_magic] keyword[or] identifier[len] ( identifier[decoded_data_no_magic] )<= identifier[aws_kinesis_agg] . identifier[DIGEST_SIZE] :
identifier[is_aggregated] = keyword[False]
keyword[if] identifier[is_aggregated] :
identifier[message_digest] = identifier[decoded_data_no_magic] [- identifier[aws_kinesis_agg] . identifier[DIGEST_SIZE] :]
identifier[message_data] = identifier[decoded_data_no_magic] [:- identifier[aws_kinesis_agg] . identifier[DIGEST_SIZE] ]
identifier[md5_calc] = identifier[md5] . identifier[new] ()
identifier[md5_calc] . identifier[update] ( identifier[message_data] )
identifier[calculated_digest] = identifier[md5_calc] . identifier[digest] ()
keyword[if] identifier[message_digest] != identifier[calculated_digest] :
keyword[return] [ identifier[decoded_data] ]
keyword[else] :
keyword[try] :
identifier[ar] = identifier[kpl_pb2] . identifier[AggregatedRecord] ()
identifier[ar] . identifier[ParseFromString] ( identifier[message_data] )
keyword[return] [ identifier[mr] . identifier[data] keyword[for] identifier[mr] keyword[in] identifier[ar] . identifier[records] ]
keyword[except] identifier[BaseException] keyword[as] identifier[e] :
keyword[raise] identifier[e]
keyword[else] :
keyword[return] [ identifier[decoded_data] ] | def deaggregate_record(decoded_data):
"""Given a Kinesis record data that is decoded, deaggregate if it was packed using the
Kinesis Producer Library into individual records. This method will be a no-op for any
records that are not aggregated (but will still return them).
decoded_data - the base64 decoded data that comprises either the KPL aggregated data, or the Kinesis payload directly.
return value - A list of deaggregated Kinesis record payloads (if the data is not aggregated, we just return a list with the payload alone)
"""
is_aggregated = True
#Verify the magic header
data_magic = None
if len(decoded_data) >= len(aws_kinesis_agg.MAGIC):
data_magic = decoded_data[:len(aws_kinesis_agg.MAGIC)] # depends on [control=['if'], data=[]]
else:
print('Not aggregated')
is_aggregated = False
decoded_data_no_magic = decoded_data[len(aws_kinesis_agg.MAGIC):]
if aws_kinesis_agg.MAGIC != data_magic or len(decoded_data_no_magic) <= aws_kinesis_agg.DIGEST_SIZE:
is_aggregated = False # depends on [control=['if'], data=[]]
if is_aggregated:
#verify the MD5 digest
message_digest = decoded_data_no_magic[-aws_kinesis_agg.DIGEST_SIZE:]
message_data = decoded_data_no_magic[:-aws_kinesis_agg.DIGEST_SIZE]
md5_calc = md5.new()
md5_calc.update(message_data)
calculated_digest = md5_calc.digest()
if message_digest != calculated_digest:
return [decoded_data] # depends on [control=['if'], data=[]]
else:
#Extract the protobuf message
try:
ar = kpl_pb2.AggregatedRecord()
ar.ParseFromString(message_data)
return [mr.data for mr in ar.records] # depends on [control=['try'], data=[]]
except BaseException as e:
raise e # depends on [control=['except'], data=['e']] # depends on [control=['if'], data=[]]
else:
return [decoded_data] |
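A usage sketch for a Lambda-style Kinesis handler; the event layout is the standard Kinesis trigger payload, and the base64 step satisfies the 'decoded' precondition from the docstring (``process`` is a hypothetical downstream consumer):

import base64

def handler(event, context):
    for record in event['Records']:
        decoded = base64.b64decode(record['kinesis']['data'])
        for payload in deaggregate_record(decoded):
            process(payload)  # hypothetical consumer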
def is_valid_mpls_label(label):
"""Validates `label` according to MPLS label rules
RFC says:
    This is a 20-bit field.
A value of 0 represents the "IPv4 Explicit NULL Label".
A value of 1 represents the "Router Alert Label".
A value of 2 represents the "IPv6 Explicit NULL Label".
A value of 3 represents the "Implicit NULL Label".
Values 4-15 are reserved.
"""
if (not isinstance(label, numbers.Integral) or
(4 <= label <= 15) or
            (label < 0 or label >= 2 ** 20)):
return False
return True | def function[is_valid_mpls_label, parameter[label]]:
constant[Validates `label` according to MPLS label rules
RFC says:
    This is a 20-bit field.
A value of 0 represents the "IPv4 Explicit NULL Label".
A value of 1 represents the "Router Alert Label".
A value of 2 represents the "IPv6 Explicit NULL Label".
A value of 3 represents the "Implicit NULL Label".
Values 4-15 are reserved.
]
if <ast.BoolOp object at 0x7da1b1a3d9c0> begin[:]
return[constant[False]]
return[constant[True]] | keyword[def] identifier[is_valid_mpls_label] ( identifier[label] ):
literal[string]
keyword[if] ( keyword[not] identifier[isinstance] ( identifier[label] , identifier[numbers] . identifier[Integral] ) keyword[or]
( literal[int] <= identifier[label] <= literal[int] ) keyword[or]
( identifier[label] < literal[int] keyword[or] identifier[label] >= literal[int] ** literal[int] )):
keyword[return] keyword[False]
keyword[return] keyword[True] | def is_valid_mpls_label(label):
"""Validates `label` according to MPLS label rules
RFC says:
    This is a 20-bit field.
A value of 0 represents the "IPv4 Explicit NULL Label".
A value of 1 represents the "Router Alert Label".
A value of 2 represents the "IPv6 Explicit NULL Label".
A value of 3 represents the "Implicit NULL Label".
Values 4-15 are reserved.
"""
    if not isinstance(label, numbers.Integral) or 4 <= label <= 15 or (label < 0 or label >= 2 ** 20):
return False # depends on [control=['if'], data=[]]
return True |
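Boundary checks as doctests; labels 0-3 are special-purpose but syntactically valid, the reserved 4-15 band is rejected, and values outside the non-negative 20-bit range fail:

>>> is_valid_mpls_label(0)
True
>>> is_valid_mpls_label(7)
False
>>> is_valid_mpls_label(-1)
False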
def generate_slug(value):
"""
Generates a slug using a Hashid of `value`.
COPIED from spectator.core.models.SluggedModelMixin() because migrations
don't make this happen automatically and perhaps the least bad thing is
to copy the method here, ugh.
"""
alphabet = app_settings.SLUG_ALPHABET
salt = app_settings.SLUG_SALT
hashids = Hashids(alphabet=alphabet, salt=salt, min_length=5)
return hashids.encode(value) | def function[generate_slug, parameter[value]]:
constant[
Generates a slug using a Hashid of `value`.
COPIED from spectator.core.models.SluggedModelMixin() because migrations
don't make this happen automatically and perhaps the least bad thing is
to copy the method here, ugh.
]
variable[alphabet] assign[=] name[app_settings].SLUG_ALPHABET
variable[salt] assign[=] name[app_settings].SLUG_SALT
variable[hashids] assign[=] call[name[Hashids], parameter[]]
return[call[name[hashids].encode, parameter[name[value]]]] | keyword[def] identifier[generate_slug] ( identifier[value] ):
literal[string]
identifier[alphabet] = identifier[app_settings] . identifier[SLUG_ALPHABET]
identifier[salt] = identifier[app_settings] . identifier[SLUG_SALT]
identifier[hashids] = identifier[Hashids] ( identifier[alphabet] = identifier[alphabet] , identifier[salt] = identifier[salt] , identifier[min_length] = literal[int] )
keyword[return] identifier[hashids] . identifier[encode] ( identifier[value] ) | def generate_slug(value):
"""
Generates a slug using a Hashid of `value`.
COPIED from spectator.core.models.SluggedModelMixin() because migrations
don't make this happen automatically and perhaps the least bad thing is
to copy the method here, ugh.
"""
alphabet = app_settings.SLUG_ALPHABET
salt = app_settings.SLUG_SALT
hashids = Hashids(alphabet=alphabet, salt=salt, min_length=5)
return hashids.encode(value) |
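Usage sketch; the exact slug depends on the configured alphabet and salt, so only its shape is checked:

slug = generate_slug(42)
assert isinstance(slug, str) and len(slug) >= 5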
def generational_replacement(random, population, parents, offspring, args):
"""Performs generational replacement with optional weak elitism.
This function performs generational replacement, which means that
the entire existing population is replaced by the offspring,
truncating to the population size if the number of offspring is
larger. Weak elitism may also be specified through the `num_elites`
keyword argument in args. If this is used, the best `num_elites`
individuals in the current population are allowed to survive if
they are better than the worst `num_elites` offspring.
.. Arguments:
random -- the random number generator object
population -- the population of individuals
parents -- the list of parent individuals
offspring -- the list of offspring individuals
args -- a dictionary of keyword arguments
Optional keyword arguments in args:
- *num_elites* -- number of elites to consider (default 0)
"""
num_elites = args.setdefault('num_elites', 0)
population.sort(reverse=True)
offspring.extend(population[:num_elites])
offspring.sort(reverse=True)
survivors = offspring[:len(population)]
return survivors | def function[generational_replacement, parameter[random, population, parents, offspring, args]]:
constant[Performs generational replacement with optional weak elitism.
This function performs generational replacement, which means that
the entire existing population is replaced by the offspring,
truncating to the population size if the number of offspring is
larger. Weak elitism may also be specified through the `num_elites`
keyword argument in args. If this is used, the best `num_elites`
individuals in the current population are allowed to survive if
they are better than the worst `num_elites` offspring.
.. Arguments:
random -- the random number generator object
population -- the population of individuals
parents -- the list of parent individuals
offspring -- the list of offspring individuals
args -- a dictionary of keyword arguments
Optional keyword arguments in args:
- *num_elites* -- number of elites to consider (default 0)
]
variable[num_elites] assign[=] call[name[args].setdefault, parameter[constant[num_elites], constant[0]]]
call[name[population].sort, parameter[]]
call[name[offspring].extend, parameter[call[name[population]][<ast.Slice object at 0x7da1b1344610>]]]
call[name[offspring].sort, parameter[]]
variable[survivors] assign[=] call[name[offspring]][<ast.Slice object at 0x7da1b1346170>]
return[name[survivors]] | keyword[def] identifier[generational_replacement] ( identifier[random] , identifier[population] , identifier[parents] , identifier[offspring] , identifier[args] ):
literal[string]
identifier[num_elites] = identifier[args] . identifier[setdefault] ( literal[string] , literal[int] )
identifier[population] . identifier[sort] ( identifier[reverse] = keyword[True] )
identifier[offspring] . identifier[extend] ( identifier[population] [: identifier[num_elites] ])
identifier[offspring] . identifier[sort] ( identifier[reverse] = keyword[True] )
identifier[survivors] = identifier[offspring] [: identifier[len] ( identifier[population] )]
keyword[return] identifier[survivors] | def generational_replacement(random, population, parents, offspring, args):
"""Performs generational replacement with optional weak elitism.
This function performs generational replacement, which means that
the entire existing population is replaced by the offspring,
truncating to the population size if the number of offspring is
larger. Weak elitism may also be specified through the `num_elites`
keyword argument in args. If this is used, the best `num_elites`
individuals in the current population are allowed to survive if
they are better than the worst `num_elites` offspring.
.. Arguments:
random -- the random number generator object
population -- the population of individuals
parents -- the list of parent individuals
offspring -- the list of offspring individuals
args -- a dictionary of keyword arguments
Optional keyword arguments in args:
- *num_elites* -- number of elites to consider (default 0)
"""
num_elites = args.setdefault('num_elites', 0)
population.sort(reverse=True)
offspring.extend(population[:num_elites])
offspring.sort(reverse=True)
survivors = offspring[:len(population)]
return survivors |
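A tiny sketch of the elitism path, assuming individuals compare by fitness the way inspyred's candidates do (plain numbers stand in for individuals here):

pop = [5, 3, 1]    # current population, higher is fitter
kids = [4, 2, 0]   # offspring
survivors = generational_replacement(None, pop, None, kids, {'num_elites': 1})
assert survivors == [5, 4, 2]  # the elite outlives the worst offspring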
def as_list(cls, protocol=Protocol.http, *args, **kwargs):
''' returns list views '''
return cls.as_view('list', protocol, *args, **kwargs) | def function[as_list, parameter[cls, protocol]]:
constant[ returns list views ]
return[call[name[cls].as_view, parameter[constant[list], name[protocol], <ast.Starred object at 0x7da20c6c7010>]]] | keyword[def] identifier[as_list] ( identifier[cls] , identifier[protocol] = identifier[Protocol] . identifier[http] ,* identifier[args] ,** identifier[kwargs] ):
literal[string]
keyword[return] identifier[cls] . identifier[as_view] ( literal[string] , identifier[protocol] ,* identifier[args] ,** identifier[kwargs] ) | def as_list(cls, protocol=Protocol.http, *args, **kwargs):
""" returns list views """
return cls.as_view('list', protocol, *args, **kwargs) |
def _create_value(self, data, name, spec):
""" Create the value for a field.
:param data: the whole data for the entity (all fields).
:param name: name of the initialized field.
:param spec: spec for the whole entity.
"""
field = getattr(self, 'create_' + name, None)
if field:
# this factory has a special creator function for this field
return field(data, name, spec)
value = data.get(name)
return spec.fields[name].clean(value) | def function[_create_value, parameter[self, data, name, spec]]:
constant[ Create the value for a field.
:param data: the whole data for the entity (all fields).
:param name: name of the initialized field.
:param spec: spec for the whole entity.
]
variable[field] assign[=] call[name[getattr], parameter[name[self], binary_operation[constant[create_] + name[name]], constant[None]]]
if name[field] begin[:]
return[call[name[field], parameter[name[data], name[name], name[spec]]]]
variable[value] assign[=] call[name[data].get, parameter[name[name]]]
return[call[call[name[spec].fields][name[name]].clean, parameter[name[value]]]] | keyword[def] identifier[_create_value] ( identifier[self] , identifier[data] , identifier[name] , identifier[spec] ):
literal[string]
identifier[field] = identifier[getattr] ( identifier[self] , literal[string] + identifier[name] , keyword[None] )
keyword[if] identifier[field] :
keyword[return] identifier[field] ( identifier[data] , identifier[name] , identifier[spec] )
identifier[value] = identifier[data] . identifier[get] ( identifier[name] )
keyword[return] identifier[spec] . identifier[fields] [ identifier[name] ]. identifier[clean] ( identifier[value] ) | def _create_value(self, data, name, spec):
""" Create the value for a field.
:param data: the whole data for the entity (all fields).
:param name: name of the initialized field.
:param spec: spec for the whole entity.
"""
field = getattr(self, 'create_' + name, None)
if field:
# this factory has a special creator function for this field
return field(data, name, spec) # depends on [control=['if'], data=[]]
value = data.get(name)
return spec.fields[name].clean(value) |
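The ``create_<name>`` hook lets subclasses take over individual fields; a hedged sketch of the pattern, with the factory base class and spec shape assumed from the surrounding module:

import datetime

class UserFactory(Factory):  # hypothetical subclass
    def create_joined(self, data, name, spec):
        # derive the value instead of cleaning it through the spec
        return data.get(name) or datetime.date.today()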
def addvector(self, vec):
"""
add a vector object to the layer of the current Vector object
Parameters
----------
vec: Vector
the vector object to add
Returns
-------
"""
vec.layer.ResetReading()
for feature in vec.layer:
self.layer.CreateFeature(feature)
self.init_features()
vec.layer.ResetReading() | def function[addvector, parameter[self, vec]]:
constant[
add a vector object to the layer of the current Vector object
Parameters
----------
vec: Vector
the vector object to add
Returns
-------
]
call[name[vec].layer.ResetReading, parameter[]]
for taget[name[feature]] in starred[name[vec].layer] begin[:]
call[name[self].layer.CreateFeature, parameter[name[feature]]]
call[name[self].init_features, parameter[]]
call[name[vec].layer.ResetReading, parameter[]] | keyword[def] identifier[addvector] ( identifier[self] , identifier[vec] ):
literal[string]
identifier[vec] . identifier[layer] . identifier[ResetReading] ()
keyword[for] identifier[feature] keyword[in] identifier[vec] . identifier[layer] :
identifier[self] . identifier[layer] . identifier[CreateFeature] ( identifier[feature] )
identifier[self] . identifier[init_features] ()
identifier[vec] . identifier[layer] . identifier[ResetReading] () | def addvector(self, vec):
"""
add a vector object to the layer of the current Vector object
Parameters
----------
vec: Vector
the vector object to add
Returns
-------
"""
vec.layer.ResetReading()
for feature in vec.layer:
self.layer.CreateFeature(feature) # depends on [control=['for'], data=['feature']]
self.init_features()
vec.layer.ResetReading() |
def apply_master_config(overrides=None, defaults=None):
'''
Returns master configurations dict.
'''
if defaults is None:
defaults = DEFAULT_MASTER_OPTS.copy()
if overrides is None:
overrides = {}
opts = defaults.copy()
opts['__role'] = 'master'
_adjust_log_file_override(overrides, defaults['log_file'])
if overrides:
opts.update(overrides)
opts['__cli'] = salt.utils.stringutils.to_unicode(
os.path.basename(sys.argv[0])
)
if 'environment' in opts:
if opts['saltenv'] is not None:
log.warning(
'The \'saltenv\' and \'environment\' master config options '
'cannot both be used. Ignoring \'environment\' in favor of '
'\'saltenv\'.',
)
# Set environment to saltenv in case someone's custom runner is
            # referencing __opts__['environment']
opts['environment'] = opts['saltenv']
else:
log.warning(
'The \'environment\' master config option has been renamed '
'to \'saltenv\'. Using %s as the \'saltenv\' config value.',
opts['environment']
)
opts['saltenv'] = opts['environment']
if six.PY2 and 'rest_cherrypy' in opts:
# CherryPy is not unicode-compatible
opts['rest_cherrypy'] = salt.utils.data.encode(opts['rest_cherrypy'])
for idx, val in enumerate(opts['fileserver_backend']):
if val in ('git', 'hg', 'svn', 'minion'):
new_val = val + 'fs'
log.debug(
'Changed %s to %s in master opts\' fileserver_backend list',
val, new_val
)
opts['fileserver_backend'][idx] = new_val
if len(opts['sock_dir']) > len(opts['cachedir']) + 10:
opts['sock_dir'] = os.path.join(opts['cachedir'], '.salt-unix')
opts['token_dir'] = os.path.join(opts['cachedir'], 'tokens')
opts['syndic_dir'] = os.path.join(opts['cachedir'], 'syndics')
# Make sure ext_mods gets set if it is an untrue value
# (here to catch older bad configs)
opts['extension_modules'] = (
opts.get('extension_modules') or
os.path.join(opts['cachedir'], 'extmods')
)
# Set up the utils_dirs location from the extension_modules location
opts['utils_dirs'] = (
opts.get('utils_dirs') or
[os.path.join(opts['extension_modules'], 'utils')]
)
# Insert all 'utils_dirs' directories to the system path
insert_system_path(opts, opts['utils_dirs'])
if overrides.get('ipc_write_buffer', '') == 'dynamic':
opts['ipc_write_buffer'] = _DFLT_IPC_WBUFFER
using_ip_for_id = False
append_master = False
if not opts.get('id'):
opts['id'], using_ip_for_id = get_id(
opts,
cache_minion_id=None)
append_master = True
# it does not make sense to append a domain to an IP based id
if not using_ip_for_id and 'append_domain' in opts:
opts['id'] = _append_domain(opts)
if append_master:
opts['id'] += '_master'
# Prepend root_dir to other paths
prepend_root_dirs = [
'pki_dir', 'cachedir', 'pidfile', 'sock_dir', 'extension_modules',
'autosign_file', 'autoreject_file', 'token_dir', 'syndic_dir',
'sqlite_queue_dir', 'autosign_grains_dir'
]
    # These can be set to syslog, so they are not necessarily actual paths on the system
for config_key in ('log_file', 'key_logfile', 'ssh_log_file'):
log_setting = opts.get(config_key, '')
if log_setting is None:
continue
if urlparse(log_setting).scheme == '':
prepend_root_dirs.append(config_key)
prepend_root_dir(opts, prepend_root_dirs)
# Enabling open mode requires that the value be set to True, and
# nothing else!
opts['open_mode'] = opts['open_mode'] is True
opts['auto_accept'] = opts['auto_accept'] is True
opts['file_roots'] = _validate_file_roots(opts['file_roots'])
opts['pillar_roots'] = _validate_file_roots(opts['pillar_roots'])
if opts['file_ignore_regex']:
# If file_ignore_regex was given, make sure it's wrapped in a list.
# Only keep valid regex entries for improved performance later on.
if isinstance(opts['file_ignore_regex'], six.string_types):
ignore_regex = [opts['file_ignore_regex']]
elif isinstance(opts['file_ignore_regex'], list):
ignore_regex = opts['file_ignore_regex']
opts['file_ignore_regex'] = []
for regex in ignore_regex:
try:
# Can't store compiled regex itself in opts (breaks
# serialization)
re.compile(regex)
opts['file_ignore_regex'].append(regex)
except Exception:
log.warning(
'Unable to parse file_ignore_regex. Skipping: %s',
regex
)
if opts['file_ignore_glob']:
# If file_ignore_glob was given, make sure it's wrapped in a list.
if isinstance(opts['file_ignore_glob'], six.string_types):
opts['file_ignore_glob'] = [opts['file_ignore_glob']]
# Let's make sure `worker_threads` does not drop below 3 which has proven
# to make `salt.modules.publish` not work under the test-suite.
if opts['worker_threads'] < 3 and opts.get('peer', None):
log.warning(
"The 'worker_threads' setting in '%s' cannot be lower than "
'3. Resetting it to the default value of 3.', opts['conf_file']
)
opts['worker_threads'] = 3
opts.setdefault('pillar_source_merging_strategy', 'smart')
# Make sure hash_type is lowercase
opts['hash_type'] = opts['hash_type'].lower()
# Check and update TLS/SSL configuration
_update_ssl_config(opts)
_update_discovery_config(opts)
return opts | def function[apply_master_config, parameter[overrides, defaults]]:
constant[
    Returns the master configuration dict.
]
if compare[name[defaults] is constant[None]] begin[:]
variable[defaults] assign[=] call[name[DEFAULT_MASTER_OPTS].copy, parameter[]]
if compare[name[overrides] is constant[None]] begin[:]
variable[overrides] assign[=] dictionary[[], []]
variable[opts] assign[=] call[name[defaults].copy, parameter[]]
call[name[opts]][constant[__role]] assign[=] constant[master]
call[name[_adjust_log_file_override], parameter[name[overrides], call[name[defaults]][constant[log_file]]]]
if name[overrides] begin[:]
call[name[opts].update, parameter[name[overrides]]]
call[name[opts]][constant[__cli]] assign[=] call[name[salt].utils.stringutils.to_unicode, parameter[call[name[os].path.basename, parameter[call[name[sys].argv][constant[0]]]]]]
if compare[constant[environment] in name[opts]] begin[:]
if compare[call[name[opts]][constant[saltenv]] is_not constant[None]] begin[:]
call[name[log].warning, parameter[constant[The 'saltenv' and 'environment' master config options cannot both be used. Ignoring 'environment' in favor of 'saltenv'.]]]
call[name[opts]][constant[environment]] assign[=] call[name[opts]][constant[saltenv]]
if <ast.BoolOp object at 0x7da1b2009750> begin[:]
call[name[opts]][constant[rest_cherrypy]] assign[=] call[name[salt].utils.data.encode, parameter[call[name[opts]][constant[rest_cherrypy]]]]
for taget[tuple[[<ast.Name object at 0x7da1b2008dc0>, <ast.Name object at 0x7da1b2008820>]]] in starred[call[name[enumerate], parameter[call[name[opts]][constant[fileserver_backend]]]]] begin[:]
if compare[name[val] in tuple[[<ast.Constant object at 0x7da1b2008f40>, <ast.Constant object at 0x7da1b2008400>, <ast.Constant object at 0x7da1b20081c0>, <ast.Constant object at 0x7da1b2008ca0>]]] begin[:]
variable[new_val] assign[=] binary_operation[name[val] + constant[fs]]
call[name[log].debug, parameter[constant[Changed %s to %s in master opts' fileserver_backend list], name[val], name[new_val]]]
call[call[name[opts]][constant[fileserver_backend]]][name[idx]] assign[=] name[new_val]
if compare[call[name[len], parameter[call[name[opts]][constant[sock_dir]]]] greater[>] binary_operation[call[name[len], parameter[call[name[opts]][constant[cachedir]]]] + constant[10]]] begin[:]
call[name[opts]][constant[sock_dir]] assign[=] call[name[os].path.join, parameter[call[name[opts]][constant[cachedir]], constant[.salt-unix]]]
call[name[opts]][constant[token_dir]] assign[=] call[name[os].path.join, parameter[call[name[opts]][constant[cachedir]], constant[tokens]]]
call[name[opts]][constant[syndic_dir]] assign[=] call[name[os].path.join, parameter[call[name[opts]][constant[cachedir]], constant[syndics]]]
call[name[opts]][constant[extension_modules]] assign[=] <ast.BoolOp object at 0x7da1b21eebf0>
call[name[opts]][constant[utils_dirs]] assign[=] <ast.BoolOp object at 0x7da1b21ee8c0>
call[name[insert_system_path], parameter[name[opts], call[name[opts]][constant[utils_dirs]]]]
if compare[call[name[overrides].get, parameter[constant[ipc_write_buffer], constant[]]] equal[==] constant[dynamic]] begin[:]
call[name[opts]][constant[ipc_write_buffer]] assign[=] name[_DFLT_IPC_WBUFFER]
variable[using_ip_for_id] assign[=] constant[False]
variable[append_master] assign[=] constant[False]
if <ast.UnaryOp object at 0x7da1b21ee0e0> begin[:]
<ast.Tuple object at 0x7da1b21edfc0> assign[=] call[name[get_id], parameter[name[opts]]]
variable[append_master] assign[=] constant[True]
if <ast.BoolOp object at 0x7da1b21edd20> begin[:]
call[name[opts]][constant[id]] assign[=] call[name[_append_domain], parameter[name[opts]]]
if name[append_master] begin[:]
<ast.AugAssign object at 0x7da1b21eda50>
variable[prepend_root_dirs] assign[=] list[[<ast.Constant object at 0x7da1b21ed8d0>, <ast.Constant object at 0x7da1b21ed8a0>, <ast.Constant object at 0x7da1b21ed870>, <ast.Constant object at 0x7da1b21ed840>, <ast.Constant object at 0x7da1b21ed810>, <ast.Constant object at 0x7da1b21ed7e0>, <ast.Constant object at 0x7da1b21ed7b0>, <ast.Constant object at 0x7da1b21ed780>, <ast.Constant object at 0x7da1b21ed750>, <ast.Constant object at 0x7da1b21ed720>, <ast.Constant object at 0x7da1b21ed6f0>]]
for taget[name[config_key]] in starred[tuple[[<ast.Constant object at 0x7da1b21ed630>, <ast.Constant object at 0x7da1b21ed600>, <ast.Constant object at 0x7da1b21ed5d0>]]] begin[:]
variable[log_setting] assign[=] call[name[opts].get, parameter[name[config_key], constant[]]]
if compare[name[log_setting] is constant[None]] begin[:]
continue
if compare[call[name[urlparse], parameter[name[log_setting]]].scheme equal[==] constant[]] begin[:]
call[name[prepend_root_dirs].append, parameter[name[config_key]]]
call[name[prepend_root_dir], parameter[name[opts], name[prepend_root_dirs]]]
call[name[opts]][constant[open_mode]] assign[=] compare[call[name[opts]][constant[open_mode]] is constant[True]]
call[name[opts]][constant[auto_accept]] assign[=] compare[call[name[opts]][constant[auto_accept]] is constant[True]]
call[name[opts]][constant[file_roots]] assign[=] call[name[_validate_file_roots], parameter[call[name[opts]][constant[file_roots]]]]
call[name[opts]][constant[pillar_roots]] assign[=] call[name[_validate_file_roots], parameter[call[name[opts]][constant[pillar_roots]]]]
if call[name[opts]][constant[file_ignore_regex]] begin[:]
if call[name[isinstance], parameter[call[name[opts]][constant[file_ignore_regex]], name[six].string_types]] begin[:]
variable[ignore_regex] assign[=] list[[<ast.Subscript object at 0x7da1b21ec610>]]
call[name[opts]][constant[file_ignore_regex]] assign[=] list[[]]
for taget[name[regex]] in starred[name[ignore_regex]] begin[:]
<ast.Try object at 0x7da1b21ec1c0>
if call[name[opts]][constant[file_ignore_glob]] begin[:]
if call[name[isinstance], parameter[call[name[opts]][constant[file_ignore_glob]], name[six].string_types]] begin[:]
call[name[opts]][constant[file_ignore_glob]] assign[=] list[[<ast.Subscript object at 0x7da1b213b9d0>]]
if <ast.BoolOp object at 0x7da1b213b910> begin[:]
call[name[log].warning, parameter[constant[The 'worker_threads' setting in '%s' cannot be lower than 3. Resetting it to the default value of 3.], call[name[opts]][constant[conf_file]]]]
call[name[opts]][constant[worker_threads]] assign[=] constant[3]
call[name[opts].setdefault, parameter[constant[pillar_source_merging_strategy], constant[smart]]]
call[name[opts]][constant[hash_type]] assign[=] call[call[name[opts]][constant[hash_type]].lower, parameter[]]
call[name[_update_ssl_config], parameter[name[opts]]]
call[name[_update_discovery_config], parameter[name[opts]]]
return[name[opts]] | keyword[def] identifier[apply_master_config] ( identifier[overrides] = keyword[None] , identifier[defaults] = keyword[None] ):
literal[string]
keyword[if] identifier[defaults] keyword[is] keyword[None] :
identifier[defaults] = identifier[DEFAULT_MASTER_OPTS] . identifier[copy] ()
keyword[if] identifier[overrides] keyword[is] keyword[None] :
identifier[overrides] ={}
identifier[opts] = identifier[defaults] . identifier[copy] ()
identifier[opts] [ literal[string] ]= literal[string]
identifier[_adjust_log_file_override] ( identifier[overrides] , identifier[defaults] [ literal[string] ])
keyword[if] identifier[overrides] :
identifier[opts] . identifier[update] ( identifier[overrides] )
identifier[opts] [ literal[string] ]= identifier[salt] . identifier[utils] . identifier[stringutils] . identifier[to_unicode] (
identifier[os] . identifier[path] . identifier[basename] ( identifier[sys] . identifier[argv] [ literal[int] ])
)
keyword[if] literal[string] keyword[in] identifier[opts] :
keyword[if] identifier[opts] [ literal[string] ] keyword[is] keyword[not] keyword[None] :
identifier[log] . identifier[warning] (
literal[string]
literal[string]
literal[string] ,
)
identifier[opts] [ literal[string] ]= identifier[opts] [ literal[string] ]
keyword[else] :
identifier[log] . identifier[warning] (
literal[string]
literal[string] ,
identifier[opts] [ literal[string] ]
)
identifier[opts] [ literal[string] ]= identifier[opts] [ literal[string] ]
keyword[if] identifier[six] . identifier[PY2] keyword[and] literal[string] keyword[in] identifier[opts] :
identifier[opts] [ literal[string] ]= identifier[salt] . identifier[utils] . identifier[data] . identifier[encode] ( identifier[opts] [ literal[string] ])
keyword[for] identifier[idx] , identifier[val] keyword[in] identifier[enumerate] ( identifier[opts] [ literal[string] ]):
keyword[if] identifier[val] keyword[in] ( literal[string] , literal[string] , literal[string] , literal[string] ):
identifier[new_val] = identifier[val] + literal[string]
identifier[log] . identifier[debug] (
literal[string] ,
identifier[val] , identifier[new_val]
)
identifier[opts] [ literal[string] ][ identifier[idx] ]= identifier[new_val]
keyword[if] identifier[len] ( identifier[opts] [ literal[string] ])> identifier[len] ( identifier[opts] [ literal[string] ])+ literal[int] :
identifier[opts] [ literal[string] ]= identifier[os] . identifier[path] . identifier[join] ( identifier[opts] [ literal[string] ], literal[string] )
identifier[opts] [ literal[string] ]= identifier[os] . identifier[path] . identifier[join] ( identifier[opts] [ literal[string] ], literal[string] )
identifier[opts] [ literal[string] ]= identifier[os] . identifier[path] . identifier[join] ( identifier[opts] [ literal[string] ], literal[string] )
identifier[opts] [ literal[string] ]=(
identifier[opts] . identifier[get] ( literal[string] ) keyword[or]
identifier[os] . identifier[path] . identifier[join] ( identifier[opts] [ literal[string] ], literal[string] )
)
identifier[opts] [ literal[string] ]=(
identifier[opts] . identifier[get] ( literal[string] ) keyword[or]
[ identifier[os] . identifier[path] . identifier[join] ( identifier[opts] [ literal[string] ], literal[string] )]
)
identifier[insert_system_path] ( identifier[opts] , identifier[opts] [ literal[string] ])
keyword[if] identifier[overrides] . identifier[get] ( literal[string] , literal[string] )== literal[string] :
identifier[opts] [ literal[string] ]= identifier[_DFLT_IPC_WBUFFER]
identifier[using_ip_for_id] = keyword[False]
identifier[append_master] = keyword[False]
keyword[if] keyword[not] identifier[opts] . identifier[get] ( literal[string] ):
identifier[opts] [ literal[string] ], identifier[using_ip_for_id] = identifier[get_id] (
identifier[opts] ,
identifier[cache_minion_id] = keyword[None] )
identifier[append_master] = keyword[True]
keyword[if] keyword[not] identifier[using_ip_for_id] keyword[and] literal[string] keyword[in] identifier[opts] :
identifier[opts] [ literal[string] ]= identifier[_append_domain] ( identifier[opts] )
keyword[if] identifier[append_master] :
identifier[opts] [ literal[string] ]+= literal[string]
identifier[prepend_root_dirs] =[
literal[string] , literal[string] , literal[string] , literal[string] , literal[string] ,
literal[string] , literal[string] , literal[string] , literal[string] ,
literal[string] , literal[string]
]
keyword[for] identifier[config_key] keyword[in] ( literal[string] , literal[string] , literal[string] ):
identifier[log_setting] = identifier[opts] . identifier[get] ( identifier[config_key] , literal[string] )
keyword[if] identifier[log_setting] keyword[is] keyword[None] :
keyword[continue]
keyword[if] identifier[urlparse] ( identifier[log_setting] ). identifier[scheme] == literal[string] :
identifier[prepend_root_dirs] . identifier[append] ( identifier[config_key] )
identifier[prepend_root_dir] ( identifier[opts] , identifier[prepend_root_dirs] )
identifier[opts] [ literal[string] ]= identifier[opts] [ literal[string] ] keyword[is] keyword[True]
identifier[opts] [ literal[string] ]= identifier[opts] [ literal[string] ] keyword[is] keyword[True]
identifier[opts] [ literal[string] ]= identifier[_validate_file_roots] ( identifier[opts] [ literal[string] ])
identifier[opts] [ literal[string] ]= identifier[_validate_file_roots] ( identifier[opts] [ literal[string] ])
keyword[if] identifier[opts] [ literal[string] ]:
keyword[if] identifier[isinstance] ( identifier[opts] [ literal[string] ], identifier[six] . identifier[string_types] ):
identifier[ignore_regex] =[ identifier[opts] [ literal[string] ]]
keyword[elif] identifier[isinstance] ( identifier[opts] [ literal[string] ], identifier[list] ):
identifier[ignore_regex] = identifier[opts] [ literal[string] ]
identifier[opts] [ literal[string] ]=[]
keyword[for] identifier[regex] keyword[in] identifier[ignore_regex] :
keyword[try] :
identifier[re] . identifier[compile] ( identifier[regex] )
identifier[opts] [ literal[string] ]. identifier[append] ( identifier[regex] )
keyword[except] identifier[Exception] :
identifier[log] . identifier[warning] (
literal[string] ,
identifier[regex]
)
keyword[if] identifier[opts] [ literal[string] ]:
keyword[if] identifier[isinstance] ( identifier[opts] [ literal[string] ], identifier[six] . identifier[string_types] ):
identifier[opts] [ literal[string] ]=[ identifier[opts] [ literal[string] ]]
keyword[if] identifier[opts] [ literal[string] ]< literal[int] keyword[and] identifier[opts] . identifier[get] ( literal[string] , keyword[None] ):
identifier[log] . identifier[warning] (
literal[string]
literal[string] , identifier[opts] [ literal[string] ]
)
identifier[opts] [ literal[string] ]= literal[int]
identifier[opts] . identifier[setdefault] ( literal[string] , literal[string] )
identifier[opts] [ literal[string] ]= identifier[opts] [ literal[string] ]. identifier[lower] ()
identifier[_update_ssl_config] ( identifier[opts] )
identifier[_update_discovery_config] ( identifier[opts] )
keyword[return] identifier[opts] | def apply_master_config(overrides=None, defaults=None):
"""
    Returns the master configuration dict.
"""
if defaults is None:
defaults = DEFAULT_MASTER_OPTS.copy() # depends on [control=['if'], data=['defaults']]
if overrides is None:
overrides = {} # depends on [control=['if'], data=['overrides']]
opts = defaults.copy()
opts['__role'] = 'master'
_adjust_log_file_override(overrides, defaults['log_file'])
if overrides:
opts.update(overrides) # depends on [control=['if'], data=[]]
opts['__cli'] = salt.utils.stringutils.to_unicode(os.path.basename(sys.argv[0]))
if 'environment' in opts:
if opts['saltenv'] is not None:
log.warning("The 'saltenv' and 'environment' master config options cannot both be used. Ignoring 'environment' in favor of 'saltenv'.")
# Set environment to saltenv in case someone's custom runner is
            # referencing __opts__['environment']
opts['environment'] = opts['saltenv'] # depends on [control=['if'], data=[]]
else:
log.warning("The 'environment' master config option has been renamed to 'saltenv'. Using %s as the 'saltenv' config value.", opts['environment'])
opts['saltenv'] = opts['environment'] # depends on [control=['if'], data=['opts']]
if six.PY2 and 'rest_cherrypy' in opts:
# CherryPy is not unicode-compatible
opts['rest_cherrypy'] = salt.utils.data.encode(opts['rest_cherrypy']) # depends on [control=['if'], data=[]]
for (idx, val) in enumerate(opts['fileserver_backend']):
if val in ('git', 'hg', 'svn', 'minion'):
new_val = val + 'fs'
log.debug("Changed %s to %s in master opts' fileserver_backend list", val, new_val)
opts['fileserver_backend'][idx] = new_val # depends on [control=['if'], data=['val']] # depends on [control=['for'], data=[]]
if len(opts['sock_dir']) > len(opts['cachedir']) + 10:
opts['sock_dir'] = os.path.join(opts['cachedir'], '.salt-unix') # depends on [control=['if'], data=[]]
opts['token_dir'] = os.path.join(opts['cachedir'], 'tokens')
opts['syndic_dir'] = os.path.join(opts['cachedir'], 'syndics')
# Make sure ext_mods gets set if it is an untrue value
# (here to catch older bad configs)
opts['extension_modules'] = opts.get('extension_modules') or os.path.join(opts['cachedir'], 'extmods')
# Set up the utils_dirs location from the extension_modules location
opts['utils_dirs'] = opts.get('utils_dirs') or [os.path.join(opts['extension_modules'], 'utils')]
# Insert all 'utils_dirs' directories to the system path
insert_system_path(opts, opts['utils_dirs'])
if overrides.get('ipc_write_buffer', '') == 'dynamic':
opts['ipc_write_buffer'] = _DFLT_IPC_WBUFFER # depends on [control=['if'], data=[]]
using_ip_for_id = False
append_master = False
if not opts.get('id'):
(opts['id'], using_ip_for_id) = get_id(opts, cache_minion_id=None)
append_master = True # depends on [control=['if'], data=[]]
# it does not make sense to append a domain to an IP based id
if not using_ip_for_id and 'append_domain' in opts:
opts['id'] = _append_domain(opts) # depends on [control=['if'], data=[]]
if append_master:
opts['id'] += '_master' # depends on [control=['if'], data=[]]
# Prepend root_dir to other paths
prepend_root_dirs = ['pki_dir', 'cachedir', 'pidfile', 'sock_dir', 'extension_modules', 'autosign_file', 'autoreject_file', 'token_dir', 'syndic_dir', 'sqlite_queue_dir', 'autosign_grains_dir']
# These can be set to syslog, so, not actual paths on the system
for config_key in ('log_file', 'key_logfile', 'ssh_log_file'):
log_setting = opts.get(config_key, '')
if log_setting is None:
continue # depends on [control=['if'], data=[]]
if urlparse(log_setting).scheme == '':
prepend_root_dirs.append(config_key) # depends on [control=['if'], data=[]] # depends on [control=['for'], data=['config_key']]
prepend_root_dir(opts, prepend_root_dirs)
# Enabling open mode requires that the value be set to True, and
# nothing else!
opts['open_mode'] = opts['open_mode'] is True
opts['auto_accept'] = opts['auto_accept'] is True
opts['file_roots'] = _validate_file_roots(opts['file_roots'])
opts['pillar_roots'] = _validate_file_roots(opts['pillar_roots'])
if opts['file_ignore_regex']:
# If file_ignore_regex was given, make sure it's wrapped in a list.
# Only keep valid regex entries for improved performance later on.
if isinstance(opts['file_ignore_regex'], six.string_types):
ignore_regex = [opts['file_ignore_regex']] # depends on [control=['if'], data=[]]
elif isinstance(opts['file_ignore_regex'], list):
ignore_regex = opts['file_ignore_regex'] # depends on [control=['if'], data=[]]
opts['file_ignore_regex'] = []
for regex in ignore_regex:
try:
# Can't store compiled regex itself in opts (breaks
# serialization)
re.compile(regex)
opts['file_ignore_regex'].append(regex) # depends on [control=['try'], data=[]]
except Exception:
log.warning('Unable to parse file_ignore_regex. Skipping: %s', regex) # depends on [control=['except'], data=[]] # depends on [control=['for'], data=['regex']] # depends on [control=['if'], data=[]]
if opts['file_ignore_glob']:
# If file_ignore_glob was given, make sure it's wrapped in a list.
if isinstance(opts['file_ignore_glob'], six.string_types):
opts['file_ignore_glob'] = [opts['file_ignore_glob']] # depends on [control=['if'], data=[]] # depends on [control=['if'], data=[]]
# Let's make sure `worker_threads` does not drop below 3 which has proven
# to make `salt.modules.publish` not work under the test-suite.
if opts['worker_threads'] < 3 and opts.get('peer', None):
log.warning("The 'worker_threads' setting in '%s' cannot be lower than 3. Resetting it to the default value of 3.", opts['conf_file'])
opts['worker_threads'] = 3 # depends on [control=['if'], data=[]]
opts.setdefault('pillar_source_merging_strategy', 'smart')
# Make sure hash_type is lowercase
opts['hash_type'] = opts['hash_type'].lower()
# Check and update TLS/SSL configuration
_update_ssl_config(opts)
_update_discovery_config(opts)
return opts |
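A minimal usage sketch for apply_master_config above. The override keys are hypothetical; any key from DEFAULT_MASTER_OPTS can be overridden, and the function is assumed to run in the salt.config module context shown:

# Hypothetical overrides; a truthy 'peer' plus worker_threads < 3 triggers the clamp.
overrides = {'worker_threads': 2, 'peer': {'.*': ['test.ping']}}
opts = apply_master_config(overrides)
assert opts['__role'] == 'master'
assert opts['worker_threads'] == 3   # reset to the documented minimum of 3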
def patch_spyder3(verbose=False):
    '''Patch Spyder to make it work with SoS files and the SoS kernel.'''
try:
# patch spyder/config/utils.py for file extension
from spyder.config import utils
src_file = utils.__file__
spyder_dir = os.path.dirname(os.path.dirname(src_file))
patch_file(src_file,
'''
(_("Cython/Pyrex files"), ('.pyx', '.pxd', '.pxi')),
(_("C files"), ('.c', '.h')),''',
'''
(_("Cython/Pyrex files"), ('.pyx', '.pxd', '.pxi')),
(_("SoS files"), ('.sos', )),
(_("C files"), ('.c', '.h')),''',
verbose=verbose)
#
# patch spyder/app/cli_options.py to add command line option --kernel
patch_file(os.path.join(spyder_dir, 'app', 'cli_options.py'),
'''help="String to show in the main window title")
options, args = parser.parse_args()''',
'''help="String to show in the main window title")
parser.add_option('--kernel', help="Jupyter kernel to start.")
options, args = parser.parse_args()''',
verbose=verbose)
#
# patch spyder/utils/sourcecode.py,
patch_file(os.path.join(spyder_dir, 'utils', 'sourcecode.py'),
"'Python': ('py', 'pyw', 'python', 'ipy')",
"'Python': ('py', 'pyw', 'python', 'ipy', 'sos')",
verbose=verbose)
patch_file(os.path.join(spyder_dir, 'utils', 'sourcecode.py'),
'''CELL_LANGUAGES = {'Python': ('#%%', '# %%', '# <codecell>', '# In[')}''',
'''CELL_LANGUAGES = {'Python': ('#%%', '# %%', '# <codecell>', '# In[', '%cell')}''',
verbose=verbose)
#
# patch spyder/app/mainwindow.py
patch_file(os.path.join(spyder_dir, 'app', 'mainwindow.py'),
'''
app.exec_()
''',
r'''
try:
if options.kernel == 'sos':
cfg_file = os.path.expanduser('~/.ipython/profile_default/ipython_config.py')
has_cfg = os.path.isfile(cfg_file)
if has_cfg and not os.path.isfile(cfg_file + '.sos_bak'):
os.rename(cfg_file, cfg_file + '.sos_bak')
with open(cfg_file, 'w') as cfg:
cfg.write("c.IPKernelApp.kernel_class = 'sos_notebook.spyder_kernel.SoS_SpyderKernel'\n")
app.exec_()
finally:
if options.kernel == 'sos':
os.remove(cfg_file)
if os.path.isfile(cfg_file + '.sos_bak'):
os.rename(cfg_file + '.sos_bak', cfg_file)
''',
verbose=verbose)
#
print('\nSpyder is successfully patched to accept .sos format and sos kernel.')
print('Use ')
print()
print(' $ spyder --kernel sos')
print()
print('to start spyder with sos kernel')
except Exception as e:
sys.exit('Failed to patch spyder: {}'.format(e)) | def function[patch_spyder3, parameter[verbose]]:
constant[Patch spyder to make it work with sos files and sos kernel ]
<ast.Try object at 0x7da18f723130> | keyword[def] identifier[patch_spyder3] ( identifier[verbose] = keyword[False] ):
literal[string]
keyword[try] :
keyword[from] identifier[spyder] . identifier[config] keyword[import] identifier[utils]
identifier[src_file] = identifier[utils] . identifier[__file__]
identifier[spyder_dir] = identifier[os] . identifier[path] . identifier[dirname] ( identifier[os] . identifier[path] . identifier[dirname] ( identifier[src_file] ))
identifier[patch_file] ( identifier[src_file] ,
literal[string] ,
literal[string] ,
identifier[verbose] = identifier[verbose] )
identifier[patch_file] ( identifier[os] . identifier[path] . identifier[join] ( identifier[spyder_dir] , literal[string] , literal[string] ),
literal[string] ,
literal[string] ,
identifier[verbose] = identifier[verbose] )
identifier[patch_file] ( identifier[os] . identifier[path] . identifier[join] ( identifier[spyder_dir] , literal[string] , literal[string] ),
literal[string] ,
literal[string] ,
identifier[verbose] = identifier[verbose] )
identifier[patch_file] ( identifier[os] . identifier[path] . identifier[join] ( identifier[spyder_dir] , literal[string] , literal[string] ),
literal[string] ,
literal[string] ,
identifier[verbose] = identifier[verbose] )
identifier[patch_file] ( identifier[os] . identifier[path] . identifier[join] ( identifier[spyder_dir] , literal[string] , literal[string] ),
literal[string] ,
literal[string] ,
identifier[verbose] = identifier[verbose] )
identifier[print] ( literal[string] )
identifier[print] ( literal[string] )
identifier[print] ()
identifier[print] ( literal[string] )
identifier[print] ()
identifier[print] ( literal[string] )
keyword[except] identifier[Exception] keyword[as] identifier[e] :
identifier[sys] . identifier[exit] ( literal[string] . identifier[format] ( identifier[e] )) | def patch_spyder3(verbose=False):
"""Patch spyder to make it work with sos files and sos kernel """
try:
# patch spyder/config/utils.py for file extension
from spyder.config import utils
src_file = utils.__file__
spyder_dir = os.path.dirname(os.path.dirname(src_file))
patch_file(src_file, '\n (_("Cython/Pyrex files"), (\'.pyx\', \'.pxd\', \'.pxi\')),\n (_("C files"), (\'.c\', \'.h\')),', '\n (_("Cython/Pyrex files"), (\'.pyx\', \'.pxd\', \'.pxi\')),\n (_("SoS files"), (\'.sos\', )),\n (_("C files"), (\'.c\', \'.h\')),', verbose=verbose)
#
# patch spyder/app/cli_options.py to add command line option --kernel
patch_file(os.path.join(spyder_dir, 'app', 'cli_options.py'), 'help="String to show in the main window title")\n options, args = parser.parse_args()', 'help="String to show in the main window title")\n parser.add_option(\'--kernel\', help="Jupyter kernel to start.")\n options, args = parser.parse_args()', verbose=verbose)
#
# patch spyder/utils/sourcecode.py,
patch_file(os.path.join(spyder_dir, 'utils', 'sourcecode.py'), "'Python': ('py', 'pyw', 'python', 'ipy')", "'Python': ('py', 'pyw', 'python', 'ipy', 'sos')", verbose=verbose)
patch_file(os.path.join(spyder_dir, 'utils', 'sourcecode.py'), "CELL_LANGUAGES = {'Python': ('#%%', '# %%', '# <codecell>', '# In[')}", "CELL_LANGUAGES = {'Python': ('#%%', '# %%', '# <codecell>', '# In[', '%cell')}", verbose=verbose)
#
# patch spyder/app/mainwindow.py
patch_file(os.path.join(spyder_dir, 'app', 'mainwindow.py'), '\n app.exec_()\n', '\n try:\n if options.kernel == \'sos\':\n cfg_file = os.path.expanduser(\'~/.ipython/profile_default/ipython_config.py\')\n has_cfg = os.path.isfile(cfg_file)\n if has_cfg and not os.path.isfile(cfg_file + \'.sos_bak\'):\n os.rename(cfg_file, cfg_file + \'.sos_bak\')\n with open(cfg_file, \'w\') as cfg:\n cfg.write("c.IPKernelApp.kernel_class = \'sos_notebook.spyder_kernel.SoS_SpyderKernel\'\\n")\n app.exec_()\n finally:\n if options.kernel == \'sos\':\n os.remove(cfg_file)\n if os.path.isfile(cfg_file + \'.sos_bak\'):\n os.rename(cfg_file + \'.sos_bak\', cfg_file)\n', verbose=verbose)
#
print('\nSpyder is successfully patched to accept .sos format and sos kernel.')
print('Use ')
print()
print(' $ spyder --kernel sos')
print()
print('to start spyder with sos kernel') # depends on [control=['try'], data=[]]
except Exception as e:
sys.exit('Failed to patch spyder: {}'.format(e)) # depends on [control=['except'], data=['e']] |
def roundfrac(intpart, fraction, digs):
"""Round or extend the fraction to size digs."""
f = len(fraction)
if f <= digs:
return intpart, fraction + '0'*(digs-f)
i = len(intpart)
if i+digs < 0:
return '0'*-digs, ''
total = intpart + fraction
nextdigit = total[i+digs]
if nextdigit >= '5': # Hard case: increment last digit, may have carry!
n = i + digs - 1
while n >= 0:
if total[n] != '9': break
n = n-1
else:
total = '0' + total
i = i+1
            n = 0
total = total[:n] + chr(ord(total[n]) + 1) + '0'*(len(total)-n-1)
intpart, fraction = total[:i], total[i:]
if digs >= 0:
return intpart, fraction[:digs]
else:
return intpart[:digs] + '0'*-digs, '' | def function[roundfrac, parameter[intpart, fraction, digs]]:
constant[Round or extend the fraction to size digs.]
variable[f] assign[=] call[name[len], parameter[name[fraction]]]
if compare[name[f] less_or_equal[<=] name[digs]] begin[:]
return[tuple[[<ast.Name object at 0x7da18f812260>, <ast.BinOp object at 0x7da18f810b80>]]]
variable[i] assign[=] call[name[len], parameter[name[intpart]]]
if compare[binary_operation[name[i] + name[digs]] less[<] constant[0]] begin[:]
return[tuple[[<ast.BinOp object at 0x7da18f813c40>, <ast.Constant object at 0x7da18f810ac0>]]]
variable[total] assign[=] binary_operation[name[intpart] + name[fraction]]
variable[nextdigit] assign[=] call[name[total]][binary_operation[name[i] + name[digs]]]
if compare[name[nextdigit] greater_or_equal[>=] constant[5]] begin[:]
variable[n] assign[=] binary_operation[binary_operation[name[i] + name[digs]] - constant[1]]
while compare[name[n] greater_or_equal[>=] constant[0]] begin[:]
if compare[call[name[total]][name[n]] not_equal[!=] constant[9]] begin[:]
break
variable[n] assign[=] binary_operation[name[n] - constant[1]]
variable[total] assign[=] binary_operation[binary_operation[call[name[total]][<ast.Slice object at 0x7da18f810f70>] + call[name[chr], parameter[binary_operation[call[name[ord], parameter[call[name[total]][name[n]]]] + constant[1]]]]] + binary_operation[constant[0] * binary_operation[binary_operation[call[name[len], parameter[name[total]]] - name[n]] - constant[1]]]]
<ast.Tuple object at 0x7da18f811b40> assign[=] tuple[[<ast.Subscript object at 0x7da18f8109a0>, <ast.Subscript object at 0x7da18f813550>]]
if compare[name[digs] greater_or_equal[>=] constant[0]] begin[:]
return[tuple[[<ast.Name object at 0x7da18f8122c0>, <ast.Subscript object at 0x7da18f813a30>]]] | keyword[def] identifier[roundfrac] ( identifier[intpart] , identifier[fraction] , identifier[digs] ):
literal[string]
identifier[f] = identifier[len] ( identifier[fraction] )
keyword[if] identifier[f] <= identifier[digs] :
keyword[return] identifier[intpart] , identifier[fraction] + literal[string] *( identifier[digs] - identifier[f] )
identifier[i] = identifier[len] ( identifier[intpart] )
keyword[if] identifier[i] + identifier[digs] < literal[int] :
keyword[return] literal[string] *- identifier[digs] , literal[string]
identifier[total] = identifier[intpart] + identifier[fraction]
identifier[nextdigit] = identifier[total] [ identifier[i] + identifier[digs] ]
keyword[if] identifier[nextdigit] >= literal[string] :
identifier[n] = identifier[i] + identifier[digs] - literal[int]
keyword[while] identifier[n] >= literal[int] :
keyword[if] identifier[total] [ identifier[n] ]!= literal[string] : keyword[break]
identifier[n] = identifier[n] - literal[int]
keyword[else] :
identifier[total] = literal[string] + identifier[total]
identifier[i] = identifier[i] + literal[int]
identifier[n] = literal[int]
identifier[total] = identifier[total] [: identifier[n] ]+ identifier[chr] ( identifier[ord] ( identifier[total] [ identifier[n] ])+ literal[int] )+ literal[string] *( identifier[len] ( identifier[total] )- identifier[n] - literal[int] )
identifier[intpart] , identifier[fraction] = identifier[total] [: identifier[i] ], identifier[total] [ identifier[i] :]
keyword[if] identifier[digs] >= literal[int] :
keyword[return] identifier[intpart] , identifier[fraction] [: identifier[digs] ]
keyword[else] :
keyword[return] identifier[intpart] [: identifier[digs] ]+ literal[string] *- identifier[digs] , literal[string] | def roundfrac(intpart, fraction, digs):
"""Round or extend the fraction to size digs."""
f = len(fraction)
if f <= digs:
return (intpart, fraction + '0' * (digs - f)) # depends on [control=['if'], data=['f', 'digs']]
i = len(intpart)
if i + digs < 0:
return ('0' * -digs, '') # depends on [control=['if'], data=[]]
total = intpart + fraction
nextdigit = total[i + digs]
if nextdigit >= '5': # Hard case: increment last digit, may have carry!
n = i + digs - 1
while n >= 0:
if total[n] != '9':
break # depends on [control=['if'], data=[]]
n = n - 1 # depends on [control=['while'], data=['n']]
else:
total = '0' + total
i = i + 1
n = 0
total = total[:n] + chr(ord(total[n]) + 1) + '0' * (len(total) - n - 1)
(intpart, fraction) = (total[:i], total[i:]) # depends on [control=['if'], data=[]]
if digs >= 0:
return (intpart, fraction[:digs]) # depends on [control=['if'], data=['digs']]
else:
return (intpart[:digs] + '0' * -digs, '') |
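A worked sketch of roundfrac above (illustrative values):

print(roundfrac('3', '14159', 2))   # ('3', '14')  nextdigit '1' < '5': plain truncation
print(roundfrac('2', '997', 2))     # ('3', '00')  the carry ripples into the integer part
print(roundfrac('1', '5', 3))       # ('1', '500') short fractions are padded with zeros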
def get_gosubdag(gosubdag=None):
"""Gets a GoSubDag initialized for use by a Grouper object."""
if gosubdag is not None:
if gosubdag.rcntobj is not None:
return gosubdag
else:
gosubdag.init_auxobjs()
return gosubdag
else:
go2obj = get_godag()
return GoSubDag(None, go2obj, rcntobj=True) | def function[get_gosubdag, parameter[gosubdag]]:
constant[Gets a GoSubDag initialized for use by a Grouper object.]
if compare[name[gosubdag] is_not constant[None]] begin[:]
if compare[name[gosubdag].rcntobj is_not constant[None]] begin[:]
return[name[gosubdag]] | keyword[def] identifier[get_gosubdag] ( identifier[gosubdag] = keyword[None] ):
literal[string]
keyword[if] identifier[gosubdag] keyword[is] keyword[not] keyword[None] :
keyword[if] identifier[gosubdag] . identifier[rcntobj] keyword[is] keyword[not] keyword[None] :
keyword[return] identifier[gosubdag]
keyword[else] :
identifier[gosubdag] . identifier[init_auxobjs] ()
keyword[return] identifier[gosubdag]
keyword[else] :
identifier[go2obj] = identifier[get_godag] ()
keyword[return] identifier[GoSubDag] ( keyword[None] , identifier[go2obj] , identifier[rcntobj] = keyword[True] ) | def get_gosubdag(gosubdag=None):
"""Gets a GoSubDag initialized for use by a Grouper object."""
if gosubdag is not None:
if gosubdag.rcntobj is not None:
return gosubdag # depends on [control=['if'], data=[]]
else:
gosubdag.init_auxobjs()
return gosubdag # depends on [control=['if'], data=['gosubdag']]
else:
go2obj = get_godag()
return GoSubDag(None, go2obj, rcntobj=True) |
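A short sketch of get_gosubdag above; get_godag and GoSubDag are assumed from the surrounding goatools-style module:

gosubdag = get_gosubdag()          # builds a GoSubDag over the full GO DAG, with counts (rcntobj)
gosubdag = get_gosubdag(gosubdag)  # passing an initialized object returns it unchanged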
def __allocate_clusters(self):
"""!
        @brief Performs cluster allocation using the leaves of the tree in the BANG directory (the smallest cells).
"""
leaf_blocks = self.__directory.get_leafs()
unhandled_block_indexes = set([i for i in range(len(leaf_blocks)) if leaf_blocks[i].get_density() > self.__density_threshold])
current_block = self.__find_block_center(leaf_blocks, unhandled_block_indexes)
cluster_index = 0
while current_block is not None:
if current_block.get_density() <= self.__density_threshold or len(current_block) <= self.__amount_threshold:
break
self.__expand_cluster_block(current_block, cluster_index, leaf_blocks, unhandled_block_indexes)
current_block = self.__find_block_center(leaf_blocks, unhandled_block_indexes)
cluster_index += 1
self.__store_clustering_results(cluster_index, leaf_blocks) | def function[__allocate_clusters, parameter[self]]:
constant[!
    @brief Performs cluster allocation using the leaves of the tree in the BANG directory (the smallest cells).
]
variable[leaf_blocks] assign[=] call[name[self].__directory.get_leafs, parameter[]]
variable[unhandled_block_indexes] assign[=] call[name[set], parameter[<ast.ListComp object at 0x7da1b0192b30>]]
variable[current_block] assign[=] call[name[self].__find_block_center, parameter[name[leaf_blocks], name[unhandled_block_indexes]]]
variable[cluster_index] assign[=] constant[0]
while compare[name[current_block] is_not constant[None]] begin[:]
if <ast.BoolOp object at 0x7da1b01923b0> begin[:]
break
call[name[self].__expand_cluster_block, parameter[name[current_block], name[cluster_index], name[leaf_blocks], name[unhandled_block_indexes]]]
variable[current_block] assign[=] call[name[self].__find_block_center, parameter[name[leaf_blocks], name[unhandled_block_indexes]]]
<ast.AugAssign object at 0x7da1b0193460>
call[name[self].__store_clustering_results, parameter[name[cluster_index], name[leaf_blocks]]] | keyword[def] identifier[__allocate_clusters] ( identifier[self] ):
literal[string]
identifier[leaf_blocks] = identifier[self] . identifier[__directory] . identifier[get_leafs] ()
identifier[unhandled_block_indexes] = identifier[set] ([ identifier[i] keyword[for] identifier[i] keyword[in] identifier[range] ( identifier[len] ( identifier[leaf_blocks] )) keyword[if] identifier[leaf_blocks] [ identifier[i] ]. identifier[get_density] ()> identifier[self] . identifier[__density_threshold] ])
identifier[current_block] = identifier[self] . identifier[__find_block_center] ( identifier[leaf_blocks] , identifier[unhandled_block_indexes] )
identifier[cluster_index] = literal[int]
keyword[while] identifier[current_block] keyword[is] keyword[not] keyword[None] :
keyword[if] identifier[current_block] . identifier[get_density] ()<= identifier[self] . identifier[__density_threshold] keyword[or] identifier[len] ( identifier[current_block] )<= identifier[self] . identifier[__amount_threshold] :
keyword[break]
identifier[self] . identifier[__expand_cluster_block] ( identifier[current_block] , identifier[cluster_index] , identifier[leaf_blocks] , identifier[unhandled_block_indexes] )
identifier[current_block] = identifier[self] . identifier[__find_block_center] ( identifier[leaf_blocks] , identifier[unhandled_block_indexes] )
identifier[cluster_index] += literal[int]
identifier[self] . identifier[__store_clustering_results] ( identifier[cluster_index] , identifier[leaf_blocks] ) | def __allocate_clusters(self):
"""!
    @brief Performs cluster allocation using the leaves of the tree in the BANG directory (the smallest cells).
"""
leaf_blocks = self.__directory.get_leafs()
unhandled_block_indexes = set([i for i in range(len(leaf_blocks)) if leaf_blocks[i].get_density() > self.__density_threshold])
current_block = self.__find_block_center(leaf_blocks, unhandled_block_indexes)
cluster_index = 0
while current_block is not None:
if current_block.get_density() <= self.__density_threshold or len(current_block) <= self.__amount_threshold:
break # depends on [control=['if'], data=[]]
self.__expand_cluster_block(current_block, cluster_index, leaf_blocks, unhandled_block_indexes)
current_block = self.__find_block_center(leaf_blocks, unhandled_block_indexes)
cluster_index += 1 # depends on [control=['while'], data=['current_block']]
self.__store_clustering_results(cluster_index, leaf_blocks) |
def stop_broadcast(self, broadcast_id):
"""
Use this method to stop a live broadcast of an OpenTok session
:param String broadcast_id: The ID of the broadcast you want to stop
        :rtype: A Broadcast object, which contains information about the broadcast: id, sessionId,
projectId, createdAt, updatedAt and resolution
"""
endpoint = self.endpoints.broadcast_url(broadcast_id, stop=True)
response = requests.post(
endpoint,
headers=self.json_headers(),
proxies=self.proxies,
timeout=self.timeout
)
if response.status_code == 200:
return Broadcast(response.json())
elif response.status_code == 400:
raise BroadcastError(
                'Invalid request. This response may indicate that the data in your '
                'request is invalid JSON.')
elif response.status_code == 403:
raise AuthError('Authentication error.')
elif response.status_code == 409:
raise BroadcastError(
'The broadcast (with the specified ID) was not found or it has already '
'stopped.')
else:
raise RequestError('OpenTok server error.', response.status_code) | def function[stop_broadcast, parameter[self, broadcast_id]]:
constant[
Use this method to stop a live broadcast of an OpenTok session
:param String broadcast_id: The ID of the broadcast you want to stop
        :rtype: A Broadcast object, which contains information about the broadcast: id, sessionId,
projectId, createdAt, updatedAt and resolution
]
variable[endpoint] assign[=] call[name[self].endpoints.broadcast_url, parameter[name[broadcast_id]]]
variable[response] assign[=] call[name[requests].post, parameter[name[endpoint]]]
if compare[name[response].status_code equal[==] constant[200]] begin[:]
return[call[name[Broadcast], parameter[call[name[response].json, parameter[]]]]] | keyword[def] identifier[stop_broadcast] ( identifier[self] , identifier[broadcast_id] ):
literal[string]
identifier[endpoint] = identifier[self] . identifier[endpoints] . identifier[broadcast_url] ( identifier[broadcast_id] , identifier[stop] = keyword[True] )
identifier[response] = identifier[requests] . identifier[post] (
identifier[endpoint] ,
identifier[headers] = identifier[self] . identifier[json_headers] (),
identifier[proxies] = identifier[self] . identifier[proxies] ,
identifier[timeout] = identifier[self] . identifier[timeout]
)
keyword[if] identifier[response] . identifier[status_code] == literal[int] :
keyword[return] identifier[Broadcast] ( identifier[response] . identifier[json] ())
keyword[elif] identifier[response] . identifier[status_code] == literal[int] :
keyword[raise] identifier[BroadcastError] (
literal[string]
literal[string] )
keyword[elif] identifier[response] . identifier[status_code] == literal[int] :
keyword[raise] identifier[AuthError] ( literal[string] )
keyword[elif] identifier[response] . identifier[status_code] == literal[int] :
keyword[raise] identifier[BroadcastError] (
literal[string]
literal[string] )
keyword[else] :
keyword[raise] identifier[RequestError] ( literal[string] , identifier[response] . identifier[status_code] ) | def stop_broadcast(self, broadcast_id):
"""
Use this method to stop a live broadcast of an OpenTok session
:param String broadcast_id: The ID of the broadcast you want to stop
        :rtype: A Broadcast object, which contains information about the broadcast: id, sessionId,
projectId, createdAt, updatedAt and resolution
"""
endpoint = self.endpoints.broadcast_url(broadcast_id, stop=True)
response = requests.post(endpoint, headers=self.json_headers(), proxies=self.proxies, timeout=self.timeout)
if response.status_code == 200:
return Broadcast(response.json()) # depends on [control=['if'], data=[]]
elif response.status_code == 400:
        raise BroadcastError('Invalid request. This response may indicate that the data in your request is invalid JSON.') # depends on [control=['if'], data=[]]
elif response.status_code == 403:
raise AuthError('Authentication error.') # depends on [control=['if'], data=[]]
elif response.status_code == 409:
raise BroadcastError('The broadcast (with the specified ID) was not found or it has already stopped.') # depends on [control=['if'], data=[]]
else:
raise RequestError('OpenTok server error.', response.status_code) |
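A hypothetical call site for stop_broadcast above, assuming an initialized OpenTok client and a broadcast id returned earlier by start_broadcast; the attribute read follows the docstring above:

try:
    broadcast = opentok.stop_broadcast(broadcast_id)
    print(broadcast.id)
except BroadcastError as exc:
    print('Broadcast not found or already stopped:', exc)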
def parse_impl(self):
"""
Parses the HTML content as a stream. This is far less memory
intensive than loading the entire HTML file into memory, like
BeautifulSoup does.
"""
# Cast to str to ensure not unicode under Python 2, as the parser
# doesn't like that.
parser = XMLParser(encoding=str('UTF-8'))
element_iter = ET.iterparse(self.handle, events=("start", "end"), parser=parser)
for pos, element in element_iter:
tag, class_attr = _tag_and_class_attr(element)
if tag == "h1" and pos == "end":
if not self.user:
self.user = element.text.strip()
elif tag == "div" and "thread" in class_attr and pos == "start":
participants = self.parse_participants(element)
thread = self.parse_thread(participants, element_iter, True)
self.save_thread(thread) | def function[parse_impl, parameter[self]]:
constant[
Parses the HTML content as a stream. This is far less memory
intensive than loading the entire HTML file into memory, like
BeautifulSoup does.
]
variable[parser] assign[=] call[name[XMLParser], parameter[]]
variable[element_iter] assign[=] call[name[ET].iterparse, parameter[name[self].handle]]
for taget[tuple[[<ast.Name object at 0x7da20c6a8a60>, <ast.Name object at 0x7da20c6a95a0>]]] in starred[name[element_iter]] begin[:]
<ast.Tuple object at 0x7da20c6a8c70> assign[=] call[name[_tag_and_class_attr], parameter[name[element]]]
if <ast.BoolOp object at 0x7da20c6a9660> begin[:]
if <ast.UnaryOp object at 0x7da20c6ab5b0> begin[:]
name[self].user assign[=] call[name[element].text.strip, parameter[]] | keyword[def] identifier[parse_impl] ( identifier[self] ):
literal[string]
identifier[parser] = identifier[XMLParser] ( identifier[encoding] = identifier[str] ( literal[string] ))
identifier[element_iter] = identifier[ET] . identifier[iterparse] ( identifier[self] . identifier[handle] , identifier[events] =( literal[string] , literal[string] ), identifier[parser] = identifier[parser] )
keyword[for] identifier[pos] , identifier[element] keyword[in] identifier[element_iter] :
identifier[tag] , identifier[class_attr] = identifier[_tag_and_class_attr] ( identifier[element] )
keyword[if] identifier[tag] == literal[string] keyword[and] identifier[pos] == literal[string] :
keyword[if] keyword[not] identifier[self] . identifier[user] :
identifier[self] . identifier[user] = identifier[element] . identifier[text] . identifier[strip] ()
keyword[elif] identifier[tag] == literal[string] keyword[and] literal[string] keyword[in] identifier[class_attr] keyword[and] identifier[pos] == literal[string] :
identifier[participants] = identifier[self] . identifier[parse_participants] ( identifier[element] )
identifier[thread] = identifier[self] . identifier[parse_thread] ( identifier[participants] , identifier[element_iter] , keyword[True] )
identifier[self] . identifier[save_thread] ( identifier[thread] ) | def parse_impl(self):
"""
Parses the HTML content as a stream. This is far less memory
intensive than loading the entire HTML file into memory, like
BeautifulSoup does.
"""
# Cast to str to ensure not unicode under Python 2, as the parser
# doesn't like that.
parser = XMLParser(encoding=str('UTF-8'))
element_iter = ET.iterparse(self.handle, events=('start', 'end'), parser=parser)
for (pos, element) in element_iter:
(tag, class_attr) = _tag_and_class_attr(element)
if tag == 'h1' and pos == 'end':
if not self.user:
self.user = element.text.strip() # depends on [control=['if'], data=[]] # depends on [control=['if'], data=[]]
elif tag == 'div' and 'thread' in class_attr and (pos == 'start'):
participants = self.parse_participants(element)
thread = self.parse_thread(participants, element_iter, True)
self.save_thread(thread) # depends on [control=['if'], data=[]] # depends on [control=['for'], data=[]] |
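The start/end streaming pattern used by parse_impl above, reduced to a standalone sketch (the file name and tag test are hypothetical; like parse_impl, this requires the markup to be well-formed XML):

import xml.etree.ElementTree as ET

for event, elem in ET.iterparse('messages.htm', events=('start', 'end')):
    if event == 'end':
        if elem.tag.endswith('h1'):
            print(elem.text)
        elem.clear()   # free each element once handled, keeping memory flat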
def _compile_dimension_size(self, base_index, array,
property, sized_elements):
"""Build one set of col widths or row heights."""
sort_index = base_index + 2
sized_elements.sort(key=lambda x: x[sort_index])
for element_data in sized_elements:
start, end = element_data[base_index], element_data[sort_index]
end += start
element, size = element_data[4:6]
# Find the total current size of the set
set_size = sum(array[start:end]) + (end-start-1)*self.margin
# Work out the extra space we need
extra_space_needed = getattr(size, property) - set_size
if extra_space_needed < 0: continue
# Distribute it among the entries
extra_space_each = extra_space_needed / (end-start)
for index in range(start, end):
array[index] += extra_space_each | def function[_compile_dimension_size, parameter[self, base_index, array, property, sized_elements]]:
constant[Build one set of col widths or row heights.]
variable[sort_index] assign[=] binary_operation[name[base_index] + constant[2]]
call[name[sized_elements].sort, parameter[]]
for taget[name[element_data]] in starred[name[sized_elements]] begin[:]
<ast.Tuple object at 0x7da1b2650a00> assign[=] tuple[[<ast.Subscript object at 0x7da1b2651e40>, <ast.Subscript object at 0x7da1b2652d40>]]
<ast.AugAssign object at 0x7da1b2652c20>
<ast.Tuple object at 0x7da1b26504f0> assign[=] call[name[element_data]][<ast.Slice object at 0x7da1b2650ac0>]
variable[set_size] assign[=] binary_operation[call[name[sum], parameter[call[name[array]][<ast.Slice object at 0x7da1b2650cd0>]]] + binary_operation[binary_operation[binary_operation[name[end] - name[start]] - constant[1]] * name[self].margin]]
variable[extra_space_needed] assign[=] binary_operation[call[name[getattr], parameter[name[size], name[property]]] - name[set_size]]
if compare[name[extra_space_needed] less[<] constant[0]] begin[:]
continue
variable[extra_space_each] assign[=] binary_operation[name[extra_space_needed] / binary_operation[name[end] - name[start]]]
for taget[name[index]] in starred[call[name[range], parameter[name[start], name[end]]]] begin[:]
<ast.AugAssign object at 0x7da18bcc9a50> | keyword[def] identifier[_compile_dimension_size] ( identifier[self] , identifier[base_index] , identifier[array] ,
identifier[property] , identifier[sized_elements] ):
literal[string]
identifier[sort_index] = identifier[base_index] + literal[int]
identifier[sized_elements] . identifier[sort] ( identifier[key] = keyword[lambda] identifier[x] : identifier[x] [ identifier[sort_index] ])
keyword[for] identifier[element_data] keyword[in] identifier[sized_elements] :
identifier[start] , identifier[end] = identifier[element_data] [ identifier[base_index] ], identifier[element_data] [ identifier[sort_index] ]
identifier[end] += identifier[start]
identifier[element] , identifier[size] = identifier[element_data] [ literal[int] : literal[int] ]
identifier[set_size] = identifier[sum] ( identifier[array] [ identifier[start] : identifier[end] ])+( identifier[end] - identifier[start] - literal[int] )* identifier[self] . identifier[margin]
identifier[extra_space_needed] = identifier[getattr] ( identifier[size] , identifier[property] )- identifier[set_size]
keyword[if] identifier[extra_space_needed] < literal[int] : keyword[continue]
identifier[extra_space_each] = identifier[extra_space_needed] /( identifier[end] - identifier[start] )
keyword[for] identifier[index] keyword[in] identifier[range] ( identifier[start] , identifier[end] ):
identifier[array] [ identifier[index] ]+= identifier[extra_space_each] | def _compile_dimension_size(self, base_index, array, property, sized_elements):
"""Build one set of col widths or row heights."""
sort_index = base_index + 2
sized_elements.sort(key=lambda x: x[sort_index])
for element_data in sized_elements:
(start, end) = (element_data[base_index], element_data[sort_index])
end += start
(element, size) = element_data[4:6]
# Find the total current size of the set
set_size = sum(array[start:end]) + (end - start - 1) * self.margin
# Work out the extra space we need
extra_space_needed = getattr(size, property) - set_size
if extra_space_needed < 0:
continue # depends on [control=['if'], data=[]]
# Distribute it among the entries
extra_space_each = extra_space_needed / (end - start)
for index in range(start, end):
array[index] += extra_space_each # depends on [control=['for'], data=['index']] # depends on [control=['for'], data=['element_data']] |
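The distribution arithmetic in _compile_dimension_size above, as a standalone worked example (numbers are illustrative):

array = [40, 40]                                                # current column widths
margin = 10
start, end = 0, 2
set_size = sum(array[start:end]) + (end - start - 1) * margin   # 80 + 10 = 90
needed = 120                                                    # element's required width
extra_each = (needed - set_size) / (end - start)                # 30 / 2 = 15.0
for index in range(start, end):
    array[index] += extra_each                                  # -> [55.0, 55.0]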
def _logstash(url, data):
'''
Issues HTTP queries to the logstash server.
'''
result = salt.utils.http.query(
url,
'POST',
header_dict=_HEADERS,
data=salt.utils.json.dumps(data),
decode=True,
status=True,
opts=__opts__
)
return result | def function[_logstash, parameter[url, data]]:
constant[
Issues HTTP queries to the logstash server.
]
variable[result] assign[=] call[name[salt].utils.http.query, parameter[name[url], constant[POST]]]
return[name[result]] | keyword[def] identifier[_logstash] ( identifier[url] , identifier[data] ):
literal[string]
identifier[result] = identifier[salt] . identifier[utils] . identifier[http] . identifier[query] (
identifier[url] ,
literal[string] ,
identifier[header_dict] = identifier[_HEADERS] ,
identifier[data] = identifier[salt] . identifier[utils] . identifier[json] . identifier[dumps] ( identifier[data] ),
identifier[decode] = keyword[True] ,
identifier[status] = keyword[True] ,
identifier[opts] = identifier[__opts__]
)
keyword[return] identifier[result] | def _logstash(url, data):
"""
Issues HTTP queries to the logstash server.
"""
result = salt.utils.http.query(url, 'POST', header_dict=_HEADERS, data=salt.utils.json.dumps(data), decode=True, status=True, opts=__opts__)
return result |
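A hypothetical invocation of _logstash above (the URL and payload are illustrative):

result = _logstash('http://logstash.example.com:8080',
                   {'tags': ['salt'], 'message': 'minion started'})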
def corrupt_input(data, sess, corrtype, corrfrac):
"""Corrupt a fraction of data according to the chosen noise method.
:return: corrupted data
"""
corruption_ratio = np.round(corrfrac * data.shape[1]).astype(np.int)
if corrtype == 'none':
return np.copy(data)
if corrfrac > 0.0:
if corrtype == 'masking':
return masking_noise(data, sess, corrfrac)
elif corrtype == 'salt_and_pepper':
return salt_and_pepper_noise(data, corruption_ratio)
else:
return np.copy(data) | def function[corrupt_input, parameter[data, sess, corrtype, corrfrac]]:
constant[Corrupt a fraction of data according to the chosen noise method.
:return: corrupted data
]
variable[corruption_ratio] assign[=] call[call[name[np].round, parameter[binary_operation[name[corrfrac] * call[name[data].shape][constant[1]]]]].astype, parameter[name[np].int]]
if compare[name[corrtype] equal[==] constant[none]] begin[:]
return[call[name[np].copy, parameter[name[data]]]]
if compare[name[corrfrac] greater[>] constant[0.0]] begin[:]
if compare[name[corrtype] equal[==] constant[masking]] begin[:]
return[call[name[masking_noise], parameter[name[data], name[sess], name[corrfrac]]]] | keyword[def] identifier[corrupt_input] ( identifier[data] , identifier[sess] , identifier[corrtype] , identifier[corrfrac] ):
literal[string]
identifier[corruption_ratio] = identifier[np] . identifier[round] ( identifier[corrfrac] * identifier[data] . identifier[shape] [ literal[int] ]). identifier[astype] ( identifier[np] . identifier[int] )
keyword[if] identifier[corrtype] == literal[string] :
keyword[return] identifier[np] . identifier[copy] ( identifier[data] )
keyword[if] identifier[corrfrac] > literal[int] :
keyword[if] identifier[corrtype] == literal[string] :
keyword[return] identifier[masking_noise] ( identifier[data] , identifier[sess] , identifier[corrfrac] )
keyword[elif] identifier[corrtype] == literal[string] :
keyword[return] identifier[salt_and_pepper_noise] ( identifier[data] , identifier[corruption_ratio] )
keyword[else] :
keyword[return] identifier[np] . identifier[copy] ( identifier[data] ) | def corrupt_input(data, sess, corrtype, corrfrac):
"""Corrupt a fraction of data according to the chosen noise method.
:return: corrupted data
"""
corruption_ratio = np.round(corrfrac * data.shape[1]).astype(np.int)
if corrtype == 'none':
return np.copy(data) # depends on [control=['if'], data=[]]
if corrfrac > 0.0:
if corrtype == 'masking':
return masking_noise(data, sess, corrfrac) # depends on [control=['if'], data=[]]
elif corrtype == 'salt_and_pepper':
return salt_and_pepper_noise(data, corruption_ratio) # depends on [control=['if'], data=[]] # depends on [control=['if'], data=['corrfrac']]
else:
return np.copy(data) |
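A minimal sketch of calling corrupt_input above; masking_noise, salt_and_pepper_noise and the TensorFlow 1.x session are assumed from the surrounding library:

import numpy as np
import tensorflow as tf

data = np.random.rand(8, 784).astype(np.float32)
with tf.Session() as sess:   # the session is only consulted by the masking branch
    noisy = corrupt_input(data, sess, corrtype='masking', corrfrac=0.3)
clean = corrupt_input(data, None, corrtype='none', corrfrac=0.3)   # returns a plain copy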
def add_data_set(self, data, series_type="line", name=None, **kwargs):
"""set data for series option in highcharts"""
self.data_set_count += 1
if not name:
name = "Series %d" % self.data_set_count
kwargs.update({'name':name})
if series_type == 'treemap':
self.add_JSsource('http://code.highcharts.com/modules/treemap.js')
series_data = Series(data, series_type=series_type, **kwargs)
series_data.__options__().update(SeriesOptions(series_type=series_type, **kwargs).__options__())
self.data_temp.append(series_data) | def function[add_data_set, parameter[self, data, series_type, name]]:
    constant[Set data for a series option in Highcharts.]
<ast.AugAssign object at 0x7da1b1de2080>
if <ast.UnaryOp object at 0x7da1b1de1870> begin[:]
variable[name] assign[=] binary_operation[constant[Series %d] <ast.Mod object at 0x7da2590d6920> name[self].data_set_count]
call[name[kwargs].update, parameter[dictionary[[<ast.Constant object at 0x7da1b1de39d0>], [<ast.Name object at 0x7da1b1de1360>]]]]
if compare[name[series_type] equal[==] constant[treemap]] begin[:]
call[name[self].add_JSsource, parameter[constant[http://code.highcharts.com/modules/treemap.js]]]
variable[series_data] assign[=] call[name[Series], parameter[name[data]]]
call[call[name[series_data].__options__, parameter[]].update, parameter[call[call[name[SeriesOptions], parameter[]].__options__, parameter[]]]]
call[name[self].data_temp.append, parameter[name[series_data]]] | keyword[def] identifier[add_data_set] ( identifier[self] , identifier[data] , identifier[series_type] = literal[string] , identifier[name] = keyword[None] ,** identifier[kwargs] ):
literal[string]
identifier[self] . identifier[data_set_count] += literal[int]
keyword[if] keyword[not] identifier[name] :
identifier[name] = literal[string] % identifier[self] . identifier[data_set_count]
identifier[kwargs] . identifier[update] ({ literal[string] : identifier[name] })
keyword[if] identifier[series_type] == literal[string] :
identifier[self] . identifier[add_JSsource] ( literal[string] )
identifier[series_data] = identifier[Series] ( identifier[data] , identifier[series_type] = identifier[series_type] ,** identifier[kwargs] )
identifier[series_data] . identifier[__options__] (). identifier[update] ( identifier[SeriesOptions] ( identifier[series_type] = identifier[series_type] ,** identifier[kwargs] ). identifier[__options__] ())
identifier[self] . identifier[data_temp] . identifier[append] ( identifier[series_data] ) | def add_data_set(self, data, series_type='line', name=None, **kwargs):
"""set data for series option in highcharts"""
self.data_set_count += 1
if not name:
name = 'Series %d' % self.data_set_count # depends on [control=['if'], data=[]]
kwargs.update({'name': name})
if series_type == 'treemap':
self.add_JSsource('http://code.highcharts.com/modules/treemap.js') # depends on [control=['if'], data=[]]
series_data = Series(data, series_type=series_type, **kwargs)
series_data.__options__().update(SeriesOptions(series_type=series_type, **kwargs).__options__())
self.data_temp.append(series_data) |
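A usage sketch, assuming a chart object exposing this method (python-highcharts style; the names below are illustrative, and tree_data is a hypothetical nested dataset):

    chart.add_data_set([1, 3, 2], series_type='line')     # auto-named 'Series 1'
    chart.add_data_set([4, 0, 5], series_type='line', name='baseline')
    chart.add_data_set(tree_data, series_type='treemap')  # also queues treemap.js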
def step(self, thumb=False):
"""Executes a single step.
Steps even if there is a breakpoint.
Args:
self (JLink): the ``JLink`` instance
            thumb (bool): boolean indicating whether to step in thumb mode
Returns:
``None``
Raises:
JLinkException: on error
"""
method = self._dll.JLINKARM_Step
if thumb:
method = self._dll.JLINKARM_StepComposite
res = method()
if res != 0:
raise errors.JLinkException("Failed to step over instruction.")
return None | def function[step, parameter[self, thumb]]:
constant[Executes a single step.
Steps even if there is a breakpoint.
Args:
self (JLink): the ``JLink`` instance
      thumb (bool): boolean indicating whether to step in thumb mode
Returns:
``None``
Raises:
JLinkException: on error
]
variable[method] assign[=] name[self]._dll.JLINKARM_Step
if name[thumb] begin[:]
variable[method] assign[=] name[self]._dll.JLINKARM_StepComposite
variable[res] assign[=] call[name[method], parameter[]]
if compare[name[res] not_equal[!=] constant[0]] begin[:]
<ast.Raise object at 0x7da1b17de680>
return[constant[None]] | keyword[def] identifier[step] ( identifier[self] , identifier[thumb] = keyword[False] ):
literal[string]
identifier[method] = identifier[self] . identifier[_dll] . identifier[JLINKARM_Step]
keyword[if] identifier[thumb] :
identifier[method] = identifier[self] . identifier[_dll] . identifier[JLINKARM_StepComposite]
identifier[res] = identifier[method] ()
keyword[if] identifier[res] != literal[int] :
keyword[raise] identifier[errors] . identifier[JLinkException] ( literal[string] )
keyword[return] keyword[None] | def step(self, thumb=False):
"""Executes a single step.
Steps even if there is a breakpoint.
Args:
self (JLink): the ``JLink`` instance
          thumb (bool): boolean indicating whether to step in thumb mode
Returns:
``None``
Raises:
JLinkException: on error
"""
method = self._dll.JLINKARM_Step
if thumb:
method = self._dll.JLINKARM_StepComposite # depends on [control=['if'], data=[]]
res = method()
if res != 0:
raise errors.JLinkException('Failed to step over instruction.') # depends on [control=['if'], data=[]]
return None |
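A usage sketch, assuming jlink is a hypothetical, already-connected and halted instance of this class:

    jlink.step()            # one instruction via JLINKARM_Step
    jlink.step(thumb=True)  # one instruction via JLINKARM_StepComposite
    # a non-zero return from the DLL raises JLinkException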
def serialize_op( cls, opcode, opdata, opfields, verbose=True ):
"""
Given an opcode (byte), associated data (dict), and the operation
fields to serialize (opfields), convert it
into its canonical serialized form (i.e. in order to
generate a consensus hash.
opdata is allowed to have extra fields. They will be ignored
Return the canonical form on success.
Return None on error.
"""
fields = opfields.get( opcode, None )
if fields is None:
log.error("BUG: unrecongnized opcode '%s'" % opcode )
return None
all_values = []
debug_all_values = []
missing = []
for field in fields:
if not opdata.has_key(field):
missing.append( field )
field_value = opdata.get(field, None)
if field_value is None:
field_value = ""
# netstring format
debug_all_values.append( str(field) + "=" + str(len(str(field_value))) + ":" + str(field_value) )
all_values.append( str(len(str(field_value))) + ":" + str(field_value) )
if len(missing) > 0:
log.error("Missing fields; dump follows:\n{}".format(simplejson.dumps( opdata, indent=4, sort_keys=True )))
raise Exception("BUG: missing fields '{}'".format(",".join(missing)))
if verbose:
log.debug("SERIALIZE: {}:{}".format(opcode, ",".join(debug_all_values) ))
field_values = ",".join( all_values )
return opcode + ":" + field_values | def function[serialize_op, parameter[cls, opcode, opdata, opfields, verbose]]:
constant[
Given an opcode (byte), associated data (dict), and the operation
fields to serialize (opfields), convert it
into its canonical serialized form (i.e. in order to
generate a consensus hash.
opdata is allowed to have extra fields. They will be ignored
Return the canonical form on success.
Return None on error.
]
variable[fields] assign[=] call[name[opfields].get, parameter[name[opcode], constant[None]]]
if compare[name[fields] is constant[None]] begin[:]
        call[name[log].error, parameter[binary_operation[constant[BUG: unrecognized opcode '%s'] <ast.Mod object at 0x7da2590d6920> name[opcode]]]]
return[constant[None]]
variable[all_values] assign[=] list[[]]
variable[debug_all_values] assign[=] list[[]]
variable[missing] assign[=] list[[]]
for taget[name[field]] in starred[name[fields]] begin[:]
if <ast.UnaryOp object at 0x7da207f98f10> begin[:]
call[name[missing].append, parameter[name[field]]]
variable[field_value] assign[=] call[name[opdata].get, parameter[name[field], constant[None]]]
if compare[name[field_value] is constant[None]] begin[:]
variable[field_value] assign[=] constant[]
call[name[debug_all_values].append, parameter[binary_operation[binary_operation[binary_operation[binary_operation[call[name[str], parameter[name[field]]] + constant[=]] + call[name[str], parameter[call[name[len], parameter[call[name[str], parameter[name[field_value]]]]]]]] + constant[:]] + call[name[str], parameter[name[field_value]]]]]]
call[name[all_values].append, parameter[binary_operation[binary_operation[call[name[str], parameter[call[name[len], parameter[call[name[str], parameter[name[field_value]]]]]]] + constant[:]] + call[name[str], parameter[name[field_value]]]]]]
if compare[call[name[len], parameter[name[missing]]] greater[>] constant[0]] begin[:]
call[name[log].error, parameter[call[constant[Missing fields; dump follows:
{}].format, parameter[call[name[simplejson].dumps, parameter[name[opdata]]]]]]]
<ast.Raise object at 0x7da18ede7a00>
if name[verbose] begin[:]
call[name[log].debug, parameter[call[constant[SERIALIZE: {}:{}].format, parameter[name[opcode], call[constant[,].join, parameter[name[debug_all_values]]]]]]]
variable[field_values] assign[=] call[constant[,].join, parameter[name[all_values]]]
return[binary_operation[binary_operation[name[opcode] + constant[:]] + name[field_values]]] | keyword[def] identifier[serialize_op] ( identifier[cls] , identifier[opcode] , identifier[opdata] , identifier[opfields] , identifier[verbose] = keyword[True] ):
literal[string]
identifier[fields] = identifier[opfields] . identifier[get] ( identifier[opcode] , keyword[None] )
keyword[if] identifier[fields] keyword[is] keyword[None] :
identifier[log] . identifier[error] ( literal[string] % identifier[opcode] )
keyword[return] keyword[None]
identifier[all_values] =[]
identifier[debug_all_values] =[]
identifier[missing] =[]
keyword[for] identifier[field] keyword[in] identifier[fields] :
keyword[if] keyword[not] identifier[opdata] . identifier[has_key] ( identifier[field] ):
identifier[missing] . identifier[append] ( identifier[field] )
identifier[field_value] = identifier[opdata] . identifier[get] ( identifier[field] , keyword[None] )
keyword[if] identifier[field_value] keyword[is] keyword[None] :
identifier[field_value] = literal[string]
identifier[debug_all_values] . identifier[append] ( identifier[str] ( identifier[field] )+ literal[string] + identifier[str] ( identifier[len] ( identifier[str] ( identifier[field_value] )))+ literal[string] + identifier[str] ( identifier[field_value] ))
identifier[all_values] . identifier[append] ( identifier[str] ( identifier[len] ( identifier[str] ( identifier[field_value] )))+ literal[string] + identifier[str] ( identifier[field_value] ))
keyword[if] identifier[len] ( identifier[missing] )> literal[int] :
identifier[log] . identifier[error] ( literal[string] . identifier[format] ( identifier[simplejson] . identifier[dumps] ( identifier[opdata] , identifier[indent] = literal[int] , identifier[sort_keys] = keyword[True] )))
keyword[raise] identifier[Exception] ( literal[string] . identifier[format] ( literal[string] . identifier[join] ( identifier[missing] )))
keyword[if] identifier[verbose] :
identifier[log] . identifier[debug] ( literal[string] . identifier[format] ( identifier[opcode] , literal[string] . identifier[join] ( identifier[debug_all_values] )))
identifier[field_values] = literal[string] . identifier[join] ( identifier[all_values] )
keyword[return] identifier[opcode] + literal[string] + identifier[field_values] | def serialize_op(cls, opcode, opdata, opfields, verbose=True):
"""
Given an opcode (byte), associated data (dict), and the operation
fields to serialize (opfields), convert it
into its canonical serialized form (i.e. in order to
generate a consensus hash.
opdata is allowed to have extra fields. They will be ignored
Return the canonical form on success.
Return None on error.
"""
fields = opfields.get(opcode, None)
if fields is None:
log.error("BUG: unrecongnized opcode '%s'" % opcode)
return None # depends on [control=['if'], data=[]]
all_values = []
debug_all_values = []
missing = []
for field in fields:
if not opdata.has_key(field):
missing.append(field) # depends on [control=['if'], data=[]]
field_value = opdata.get(field, None)
if field_value is None:
field_value = '' # depends on [control=['if'], data=['field_value']]
# netstring format
debug_all_values.append(str(field) + '=' + str(len(str(field_value))) + ':' + str(field_value))
all_values.append(str(len(str(field_value))) + ':' + str(field_value)) # depends on [control=['for'], data=['field']]
if len(missing) > 0:
log.error('Missing fields; dump follows:\n{}'.format(simplejson.dumps(opdata, indent=4, sort_keys=True)))
raise Exception("BUG: missing fields '{}'".format(','.join(missing))) # depends on [control=['if'], data=[]]
if verbose:
log.debug('SERIALIZE: {}:{}'.format(opcode, ','.join(debug_all_values))) # depends on [control=['if'], data=[]]
field_values = ','.join(all_values)
return opcode + ':' + field_values |
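A worked example of the canonical form (hypothetical opcode table): with opfields = {'?': ('name', 'value')}, each value is rendered netstring-style as '<len>:<value>' and the pieces are comma-joined:

    serialize_op('?', {'name': 'foo.test', 'value': ''}, opfields)
    # -> '?:8:foo.test,0:'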
def get_endpoint(self, endpoint=None):
"""Return interface URL endpoint."""
base_url = self.api_config.api_url
if not endpoint:
if 'localhost' in base_url:
endpoint = ''
else:
endpoint = ENDPOINTS[self.endpoint_type]
endpoint = '/'.join([p.strip('/') for p in (base_url, endpoint)]).strip('/')
self.logger.info("Using Endpoint: %s", endpoint)
return endpoint | def function[get_endpoint, parameter[self, endpoint]]:
constant[Return interface URL endpoint.]
variable[base_url] assign[=] name[self].api_config.api_url
if <ast.UnaryOp object at 0x7da1b0947190> begin[:]
if compare[constant[localhost] in name[base_url]] begin[:]
variable[endpoint] assign[=] constant[]
variable[endpoint] assign[=] call[call[constant[/].join, parameter[<ast.ListComp object at 0x7da1b0946050>]].strip, parameter[constant[/]]]
call[name[self].logger.info, parameter[constant[Using Endpoint: %s], name[endpoint]]]
return[name[endpoint]] | keyword[def] identifier[get_endpoint] ( identifier[self] , identifier[endpoint] = keyword[None] ):
literal[string]
identifier[base_url] = identifier[self] . identifier[api_config] . identifier[api_url]
keyword[if] keyword[not] identifier[endpoint] :
keyword[if] literal[string] keyword[in] identifier[base_url] :
identifier[endpoint] = literal[string]
keyword[else] :
identifier[endpoint] = identifier[ENDPOINTS] [ identifier[self] . identifier[endpoint_type] ]
identifier[endpoint] = literal[string] . identifier[join] ([ identifier[p] . identifier[strip] ( literal[string] ) keyword[for] identifier[p] keyword[in] ( identifier[base_url] , identifier[endpoint] )]). identifier[strip] ( literal[string] )
identifier[self] . identifier[logger] . identifier[info] ( literal[string] , identifier[endpoint] )
keyword[return] identifier[endpoint] | def get_endpoint(self, endpoint=None):
"""Return interface URL endpoint."""
base_url = self.api_config.api_url
if not endpoint:
if 'localhost' in base_url:
endpoint = '' # depends on [control=['if'], data=[]]
else:
endpoint = ENDPOINTS[self.endpoint_type] # depends on [control=['if'], data=[]]
endpoint = '/'.join([p.strip('/') for p in (base_url, endpoint)]).strip('/')
self.logger.info('Using Endpoint: %s', endpoint)
return endpoint |
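For example (hypothetical values), with ENDPOINTS[self.endpoint_type] == 'search':

    # api_url 'https://api.example.com/'  ->  'https://api.example.com/search'
    # api_url 'http://localhost:8000/'    ->  'http://localhost:8000'  (empty suffix)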
def delete_topic(self, project, topic, fail_if_not_exists=False):
"""Deletes a Pub/Sub topic if it exists.
:param project: the GCP project ID in which to delete the topic
:type project: str
:param topic: the Pub/Sub topic name to delete; do not
include the ``projects/{project}/topics/`` prefix.
:type topic: str
:param fail_if_not_exists: if set, raise an exception if the topic
does not exist
:type fail_if_not_exists: bool
"""
service = self.get_conn()
full_topic = _format_topic(project, topic)
try:
service.projects().topics().delete(topic=full_topic).execute(num_retries=self.num_retries)
except HttpError as e:
            # Status code 404 indicates that the topic was not found
if str(e.resp['status']) == '404':
message = 'Topic does not exist: {}'.format(full_topic)
self.log.warning(message)
if fail_if_not_exists:
raise PubSubException(message)
else:
raise PubSubException(
'Error deleting topic {}'.format(full_topic), e) | def function[delete_topic, parameter[self, project, topic, fail_if_not_exists]]:
constant[Deletes a Pub/Sub topic if it exists.
:param project: the GCP project ID in which to delete the topic
:type project: str
:param topic: the Pub/Sub topic name to delete; do not
include the ``projects/{project}/topics/`` prefix.
:type topic: str
:param fail_if_not_exists: if set, raise an exception if the topic
does not exist
:type fail_if_not_exists: bool
]
variable[service] assign[=] call[name[self].get_conn, parameter[]]
variable[full_topic] assign[=] call[name[_format_topic], parameter[name[project], name[topic]]]
<ast.Try object at 0x7da18f00fa00> | keyword[def] identifier[delete_topic] ( identifier[self] , identifier[project] , identifier[topic] , identifier[fail_if_not_exists] = keyword[False] ):
literal[string]
identifier[service] = identifier[self] . identifier[get_conn] ()
identifier[full_topic] = identifier[_format_topic] ( identifier[project] , identifier[topic] )
keyword[try] :
identifier[service] . identifier[projects] (). identifier[topics] (). identifier[delete] ( identifier[topic] = identifier[full_topic] ). identifier[execute] ( identifier[num_retries] = identifier[self] . identifier[num_retries] )
keyword[except] identifier[HttpError] keyword[as] identifier[e] :
keyword[if] identifier[str] ( identifier[e] . identifier[resp] [ literal[string] ])== literal[string] :
identifier[message] = literal[string] . identifier[format] ( identifier[full_topic] )
identifier[self] . identifier[log] . identifier[warning] ( identifier[message] )
keyword[if] identifier[fail_if_not_exists] :
keyword[raise] identifier[PubSubException] ( identifier[message] )
keyword[else] :
keyword[raise] identifier[PubSubException] (
literal[string] . identifier[format] ( identifier[full_topic] ), identifier[e] ) | def delete_topic(self, project, topic, fail_if_not_exists=False):
"""Deletes a Pub/Sub topic if it exists.
:param project: the GCP project ID in which to delete the topic
:type project: str
:param topic: the Pub/Sub topic name to delete; do not
include the ``projects/{project}/topics/`` prefix.
:type topic: str
:param fail_if_not_exists: if set, raise an exception if the topic
does not exist
:type fail_if_not_exists: bool
"""
service = self.get_conn()
full_topic = _format_topic(project, topic)
try:
service.projects().topics().delete(topic=full_topic).execute(num_retries=self.num_retries) # depends on [control=['try'], data=[]]
except HttpError as e:
        # Status code 404 indicates that the topic was not found
if str(e.resp['status']) == '404':
message = 'Topic does not exist: {}'.format(full_topic)
self.log.warning(message)
if fail_if_not_exists:
raise PubSubException(message) # depends on [control=['if'], data=[]] # depends on [control=['if'], data=[]]
else:
raise PubSubException('Error deleting topic {}'.format(full_topic), e) # depends on [control=['except'], data=['e']] |
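A usage sketch (hypothetical project and topic names; the message assumes _format_topic builds the projects/{project}/topics/{topic} path implied by the docstring):

    hook.delete_topic('my-project', 'my-topic')
    # missing topic: only a warning is logged
    hook.delete_topic('my-project', 'my-topic', fail_if_not_exists=True)
    # missing topic: raises PubSubException('Topic does not exist: projects/my-project/topics/my-topic')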
def deploy():
"""Deploy to production."""
_require_root()
if not confirm("This will apply any available migrations to the database. Has the database been backed up?"):
abort("Aborted.")
if not confirm("Are you sure you want to deploy?"):
abort("Aborted.")
with lcd(PRODUCTION_DOCUMENT_ROOT):
with shell_env(PRODUCTION="TRUE"):
local("git pull")
with open("requirements.txt", "r") as req_file:
requirements = req_file.read().strip().split()
try:
pkg_resources.require(requirements)
except pkg_resources.DistributionNotFound:
local("pip install -r requirements.txt")
except Exception:
traceback.format_exc()
local("pip install -r requirements.txt")
else:
puts("Python requirements already satisfied.")
with prefix("source /usr/local/virtualenvs/ion/bin/activate"):
local("./manage.py collectstatic --noinput", shell="/bin/bash")
local("./manage.py migrate", shell="/bin/bash")
restart_production_gunicorn(skip=True)
puts("Deploy complete.") | def function[deploy, parameter[]]:
constant[Deploy to production.]
call[name[_require_root], parameter[]]
if <ast.UnaryOp object at 0x7da1b02e5c90> begin[:]
call[name[abort], parameter[constant[Aborted.]]]
if <ast.UnaryOp object at 0x7da1b02e4e20> begin[:]
call[name[abort], parameter[constant[Aborted.]]]
with call[name[lcd], parameter[name[PRODUCTION_DOCUMENT_ROOT]]] begin[:]
with call[name[shell_env], parameter[]] begin[:]
call[name[local], parameter[constant[git pull]]]
with call[name[open], parameter[constant[requirements.txt], constant[r]]] begin[:]
variable[requirements] assign[=] call[call[call[name[req_file].read, parameter[]].strip, parameter[]].split, parameter[]]
<ast.Try object at 0x7da1b02e5cc0>
with call[name[prefix], parameter[constant[source /usr/local/virtualenvs/ion/bin/activate]]] begin[:]
call[name[local], parameter[constant[./manage.py collectstatic --noinput]]]
call[name[local], parameter[constant[./manage.py migrate]]]
call[name[restart_production_gunicorn], parameter[]]
call[name[puts], parameter[constant[Deploy complete.]]] | keyword[def] identifier[deploy] ():
literal[string]
identifier[_require_root] ()
keyword[if] keyword[not] identifier[confirm] ( literal[string] ):
identifier[abort] ( literal[string] )
keyword[if] keyword[not] identifier[confirm] ( literal[string] ):
identifier[abort] ( literal[string] )
keyword[with] identifier[lcd] ( identifier[PRODUCTION_DOCUMENT_ROOT] ):
keyword[with] identifier[shell_env] ( identifier[PRODUCTION] = literal[string] ):
identifier[local] ( literal[string] )
keyword[with] identifier[open] ( literal[string] , literal[string] ) keyword[as] identifier[req_file] :
identifier[requirements] = identifier[req_file] . identifier[read] (). identifier[strip] (). identifier[split] ()
keyword[try] :
identifier[pkg_resources] . identifier[require] ( identifier[requirements] )
keyword[except] identifier[pkg_resources] . identifier[DistributionNotFound] :
identifier[local] ( literal[string] )
keyword[except] identifier[Exception] :
identifier[traceback] . identifier[format_exc] ()
identifier[local] ( literal[string] )
keyword[else] :
identifier[puts] ( literal[string] )
keyword[with] identifier[prefix] ( literal[string] ):
identifier[local] ( literal[string] , identifier[shell] = literal[string] )
identifier[local] ( literal[string] , identifier[shell] = literal[string] )
identifier[restart_production_gunicorn] ( identifier[skip] = keyword[True] )
identifier[puts] ( literal[string] ) | def deploy():
"""Deploy to production."""
_require_root()
if not confirm('This will apply any available migrations to the database. Has the database been backed up?'):
abort('Aborted.') # depends on [control=['if'], data=[]]
if not confirm('Are you sure you want to deploy?'):
abort('Aborted.') # depends on [control=['if'], data=[]]
with lcd(PRODUCTION_DOCUMENT_ROOT):
with shell_env(PRODUCTION='TRUE'):
local('git pull')
with open('requirements.txt', 'r') as req_file:
requirements = req_file.read().strip().split()
try:
pkg_resources.require(requirements) # depends on [control=['try'], data=[]]
except pkg_resources.DistributionNotFound:
local('pip install -r requirements.txt') # depends on [control=['except'], data=[]]
except Exception:
traceback.format_exc()
local('pip install -r requirements.txt') # depends on [control=['except'], data=[]]
else:
puts('Python requirements already satisfied.') # depends on [control=['with'], data=['req_file']]
with prefix('source /usr/local/virtualenvs/ion/bin/activate'):
local('./manage.py collectstatic --noinput', shell='/bin/bash')
local('./manage.py migrate', shell='/bin/bash') # depends on [control=['with'], data=[]]
restart_production_gunicorn(skip=True) # depends on [control=['with'], data=[]] # depends on [control=['with'], data=[]]
puts('Deploy complete.') |
def chain_user_names(users, exclude_user, truncate=35):
"""Tag to return a truncated chain of user names."""
if not users or not isinstance(exclude_user, get_user_model()):
return ''
return truncatechars(
', '.join(u'{}'.format(u) for u in users.exclude(pk=exclude_user.pk)),
truncate) | def function[chain_user_names, parameter[users, exclude_user, truncate]]:
constant[Tag to return a truncated chain of user names.]
if <ast.BoolOp object at 0x7da20e9b3a90> begin[:]
return[constant[]]
return[call[name[truncatechars], parameter[call[constant[, ].join, parameter[<ast.GeneratorExp object at 0x7da20e9b1810>]], name[truncate]]]] | keyword[def] identifier[chain_user_names] ( identifier[users] , identifier[exclude_user] , identifier[truncate] = literal[int] ):
literal[string]
keyword[if] keyword[not] identifier[users] keyword[or] keyword[not] identifier[isinstance] ( identifier[exclude_user] , identifier[get_user_model] ()):
keyword[return] literal[string]
keyword[return] identifier[truncatechars] (
literal[string] . identifier[join] ( literal[string] . identifier[format] ( identifier[u] ) keyword[for] identifier[u] keyword[in] identifier[users] . identifier[exclude] ( identifier[pk] = identifier[exclude_user] . identifier[pk] )),
identifier[truncate] ) | def chain_user_names(users, exclude_user, truncate=35):
"""Tag to return a truncated chain of user names."""
if not users or not isinstance(exclude_user, get_user_model()):
return '' # depends on [control=['if'], data=[]]
return truncatechars(', '.join((u'{}'.format(u) for u in users.exclude(pk=exclude_user.pk))), truncate) |
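For example (hypothetical users): with users containing alice, bob and carol,

    chain_user_names(users, alice)               # -> 'bob, carol'
    chain_user_names(users, alice, truncate=6)   # -> 'bob, …' (Django truncatechars)
    chain_user_names(users, 'not-a-user')        # -> '' (exclude_user must be a User instance)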
def get_span(docgraph, node_id, debug=False):
"""
returns all the tokens that are dominated or in a span relation with
the given node. If debug is set to True, you'll get a warning if the
graph is cyclic.
Returns
-------
span : list of str
sorted list of token nodes (token node IDs)
"""
if debug is True and is_directed_acyclic_graph(docgraph) is False:
warnings.warn(
("Can't reliably extract span '{0}' from cyclical graph'{1}'."
"Maximum recursion depth may be exceeded.").format(node_id,
docgraph))
span = []
if docgraph.ns+':token' in docgraph.node[node_id]:
span.append(node_id)
for src_id, target_id, edge_attribs in docgraph.out_edges_iter(node_id,
data=True):
if src_id == target_id:
continue # ignore self-loops
# ignore pointing relations
if edge_attribs['edge_type'] != EdgeTypes.pointing_relation:
span.extend(get_span(docgraph, target_id))
return sorted(span, key=natural_sort_key) | def function[get_span, parameter[docgraph, node_id, debug]]:
constant[
returns all the tokens that are dominated or in a span relation with
the given node. If debug is set to True, you'll get a warning if the
graph is cyclic.
Returns
-------
span : list of str
sorted list of token nodes (token node IDs)
]
if <ast.BoolOp object at 0x7da18c4cf970> begin[:]
        call[name[warnings].warn, parameter[call[constant[Can't reliably extract span '{0}' from cyclical graph '{1}'. Maximum recursion depth may be exceeded.].format, parameter[name[node_id], name[docgraph]]]]]
variable[span] assign[=] list[[]]
if compare[binary_operation[name[docgraph].ns + constant[:token]] in call[name[docgraph].node][name[node_id]]] begin[:]
call[name[span].append, parameter[name[node_id]]]
for taget[tuple[[<ast.Name object at 0x7da18c4cf100>, <ast.Name object at 0x7da18c4cccd0>, <ast.Name object at 0x7da18c4cfbe0>]]] in starred[call[name[docgraph].out_edges_iter, parameter[name[node_id]]]] begin[:]
if compare[name[src_id] equal[==] name[target_id]] begin[:]
continue
if compare[call[name[edge_attribs]][constant[edge_type]] not_equal[!=] name[EdgeTypes].pointing_relation] begin[:]
call[name[span].extend, parameter[call[name[get_span], parameter[name[docgraph], name[target_id]]]]]
return[call[name[sorted], parameter[name[span]]]] | keyword[def] identifier[get_span] ( identifier[docgraph] , identifier[node_id] , identifier[debug] = keyword[False] ):
literal[string]
keyword[if] identifier[debug] keyword[is] keyword[True] keyword[and] identifier[is_directed_acyclic_graph] ( identifier[docgraph] ) keyword[is] keyword[False] :
identifier[warnings] . identifier[warn] (
( literal[string]
literal[string] ). identifier[format] ( identifier[node_id] ,
identifier[docgraph] ))
identifier[span] =[]
keyword[if] identifier[docgraph] . identifier[ns] + literal[string] keyword[in] identifier[docgraph] . identifier[node] [ identifier[node_id] ]:
identifier[span] . identifier[append] ( identifier[node_id] )
keyword[for] identifier[src_id] , identifier[target_id] , identifier[edge_attribs] keyword[in] identifier[docgraph] . identifier[out_edges_iter] ( identifier[node_id] ,
identifier[data] = keyword[True] ):
keyword[if] identifier[src_id] == identifier[target_id] :
keyword[continue]
keyword[if] identifier[edge_attribs] [ literal[string] ]!= identifier[EdgeTypes] . identifier[pointing_relation] :
identifier[span] . identifier[extend] ( identifier[get_span] ( identifier[docgraph] , identifier[target_id] ))
keyword[return] identifier[sorted] ( identifier[span] , identifier[key] = identifier[natural_sort_key] ) | def get_span(docgraph, node_id, debug=False):
"""
returns all the tokens that are dominated or in a span relation with
the given node. If debug is set to True, you'll get a warning if the
graph is cyclic.
Returns
-------
span : list of str
sorted list of token nodes (token node IDs)
"""
if debug is True and is_directed_acyclic_graph(docgraph) is False:
warnings.warn("Can't reliably extract span '{0}' from cyclical graph'{1}'.Maximum recursion depth may be exceeded.".format(node_id, docgraph)) # depends on [control=['if'], data=[]]
span = []
if docgraph.ns + ':token' in docgraph.node[node_id]:
span.append(node_id) # depends on [control=['if'], data=[]]
for (src_id, target_id, edge_attribs) in docgraph.out_edges_iter(node_id, data=True):
if src_id == target_id:
continue # ignore self-loops # depends on [control=['if'], data=[]]
# ignore pointing relations
if edge_attribs['edge_type'] != EdgeTypes.pointing_relation:
span.extend(get_span(docgraph, target_id)) # depends on [control=['if'], data=[]] # depends on [control=['for'], data=[]]
return sorted(span, key=natural_sort_key) |
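A usage sketch (hypothetical node IDs): for a syntax node 's1' dominating the tokens 'tok_2', 'tok_10' and 'tok_1',

    get_span(docgraph, 's1')   # -> ['tok_1', 'tok_2', 'tok_10']

natural_sort_key keeps 'tok_10' after 'tok_2', where a plain lexicographic sort would place it between 'tok_1' and 'tok_2'; pointing relations and self-loops are skipped during the descent.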
def resourceprep(string, allow_unassigned=False):
"""
Process the given `string` using the Resourceprep (`RFC 6122`_) profile. In
the error cases defined in `RFC 3454`_ (stringprep), a :class:`ValueError`
is raised.
"""
chars = list(string)
_resourceprep_do_mapping(chars)
do_normalization(chars)
check_prohibited_output(
chars,
(
stringprep.in_table_c12,
stringprep.in_table_c21,
stringprep.in_table_c22,
stringprep.in_table_c3,
stringprep.in_table_c4,
stringprep.in_table_c5,
stringprep.in_table_c6,
stringprep.in_table_c7,
stringprep.in_table_c8,
stringprep.in_table_c9,
))
check_bidi(chars)
if not allow_unassigned:
check_unassigned(
chars,
(
stringprep.in_table_a1,
)
)
return "".join(chars) | def function[resourceprep, parameter[string, allow_unassigned]]:
constant[
Process the given `string` using the Resourceprep (`RFC 6122`_) profile. In
the error cases defined in `RFC 3454`_ (stringprep), a :class:`ValueError`
is raised.
]
variable[chars] assign[=] call[name[list], parameter[name[string]]]
call[name[_resourceprep_do_mapping], parameter[name[chars]]]
call[name[do_normalization], parameter[name[chars]]]
call[name[check_prohibited_output], parameter[name[chars], tuple[[<ast.Attribute object at 0x7da18f721ba0>, <ast.Attribute object at 0x7da18f723c40>, <ast.Attribute object at 0x7da18f721930>, <ast.Attribute object at 0x7da18f7218a0>, <ast.Attribute object at 0x7da18f722230>, <ast.Attribute object at 0x7da20c6e70a0>, <ast.Attribute object at 0x7da20c6e7c10>, <ast.Attribute object at 0x7da20c6e7e20>, <ast.Attribute object at 0x7da20c6e7d90>, <ast.Attribute object at 0x7da20c6e7e80>]]]]
call[name[check_bidi], parameter[name[chars]]]
if <ast.UnaryOp object at 0x7da20c6e71c0> begin[:]
call[name[check_unassigned], parameter[name[chars], tuple[[<ast.Attribute object at 0x7da20c6e7e50>]]]]
return[call[constant[].join, parameter[name[chars]]]] | keyword[def] identifier[resourceprep] ( identifier[string] , identifier[allow_unassigned] = keyword[False] ):
literal[string]
identifier[chars] = identifier[list] ( identifier[string] )
identifier[_resourceprep_do_mapping] ( identifier[chars] )
identifier[do_normalization] ( identifier[chars] )
identifier[check_prohibited_output] (
identifier[chars] ,
(
identifier[stringprep] . identifier[in_table_c12] ,
identifier[stringprep] . identifier[in_table_c21] ,
identifier[stringprep] . identifier[in_table_c22] ,
identifier[stringprep] . identifier[in_table_c3] ,
identifier[stringprep] . identifier[in_table_c4] ,
identifier[stringprep] . identifier[in_table_c5] ,
identifier[stringprep] . identifier[in_table_c6] ,
identifier[stringprep] . identifier[in_table_c7] ,
identifier[stringprep] . identifier[in_table_c8] ,
identifier[stringprep] . identifier[in_table_c9] ,
))
identifier[check_bidi] ( identifier[chars] )
keyword[if] keyword[not] identifier[allow_unassigned] :
identifier[check_unassigned] (
identifier[chars] ,
(
identifier[stringprep] . identifier[in_table_a1] ,
)
)
keyword[return] literal[string] . identifier[join] ( identifier[chars] ) | def resourceprep(string, allow_unassigned=False):
"""
Process the given `string` using the Resourceprep (`RFC 6122`_) profile. In
the error cases defined in `RFC 3454`_ (stringprep), a :class:`ValueError`
is raised.
"""
chars = list(string)
_resourceprep_do_mapping(chars)
do_normalization(chars)
check_prohibited_output(chars, (stringprep.in_table_c12, stringprep.in_table_c21, stringprep.in_table_c22, stringprep.in_table_c3, stringprep.in_table_c4, stringprep.in_table_c5, stringprep.in_table_c6, stringprep.in_table_c7, stringprep.in_table_c8, stringprep.in_table_c9))
check_bidi(chars)
if not allow_unassigned:
check_unassigned(chars, (stringprep.in_table_a1,)) # depends on [control=['if'], data=[]]
return ''.join(chars) |
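A usage sketch (expected outcomes assume the standard stringprep tables):

    resourceprep('Home Desktop')     # -> 'Home Desktop', already in normal form
    resourceprep('cafe\u0301')       # -> 'café' after NFKC normalization
    resourceprep('foo\u0000bar')     # raises ValueError: NUL falls in table C.2.1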
def send_message(self, output):
"""
Send a message to the socket
"""
file_system_event = None
if self.my_action_input:
file_system_event = self.my_action_input.file_system_event or None
output_action = ActionInput(file_system_event,
output,
self.name,
"*")
Global.MESSAGE_DISPATCHER.send_message(output_action) | def function[send_message, parameter[self, output]]:
constant[
Send a message to the socket
]
variable[file_system_event] assign[=] constant[None]
if name[self].my_action_input begin[:]
variable[file_system_event] assign[=] <ast.BoolOp object at 0x7da1b2428340>
variable[output_action] assign[=] call[name[ActionInput], parameter[name[file_system_event], name[output], name[self].name, constant[*]]]
call[name[Global].MESSAGE_DISPATCHER.send_message, parameter[name[output_action]]] | keyword[def] identifier[send_message] ( identifier[self] , identifier[output] ):
literal[string]
identifier[file_system_event] = keyword[None]
keyword[if] identifier[self] . identifier[my_action_input] :
identifier[file_system_event] = identifier[self] . identifier[my_action_input] . identifier[file_system_event] keyword[or] keyword[None]
identifier[output_action] = identifier[ActionInput] ( identifier[file_system_event] ,
identifier[output] ,
identifier[self] . identifier[name] ,
literal[string] )
identifier[Global] . identifier[MESSAGE_DISPATCHER] . identifier[send_message] ( identifier[output_action] ) | def send_message(self, output):
"""
Send a message to the socket
"""
file_system_event = None
if self.my_action_input:
file_system_event = self.my_action_input.file_system_event or None # depends on [control=['if'], data=[]]
output_action = ActionInput(file_system_event, output, self.name, '*')
Global.MESSAGE_DISPATCHER.send_message(output_action) |
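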
def getStats(self):
"""Runs varnishstats command to get stats from Varnish Cache.
@return: Dictionary of stats.
"""
info_dict = {}
args = [varnishstatCmd, '-1']
if self._instance is not None:
args.extend(['-n', self._instance])
output = util.exec_command(args)
if self._descDict is None:
self._descDict = {}
for line in output.splitlines():
mobj = re.match('(\S+)\s+(\d+)\s+(\d+\.\d+|\.)\s+(\S.*\S)\s*$',
line)
if mobj:
fname = mobj.group(1).replace('.', '_')
info_dict[fname] = util.parse_value(mobj.group(2))
self._descDict[fname] = mobj.group(4)
return info_dict | def function[getStats, parameter[self]]:
    constant[Runs the varnishstat command to get stats from Varnish Cache.
@return: Dictionary of stats.
]
variable[info_dict] assign[=] dictionary[[], []]
variable[args] assign[=] list[[<ast.Name object at 0x7da1b10b1f30>, <ast.Constant object at 0x7da1b10b1900>]]
if compare[name[self]._instance is_not constant[None]] begin[:]
call[name[args].extend, parameter[list[[<ast.Constant object at 0x7da1b10b1300>, <ast.Attribute object at 0x7da1b10b02e0>]]]]
variable[output] assign[=] call[name[util].exec_command, parameter[name[args]]]
if compare[name[self]._descDict is constant[None]] begin[:]
name[self]._descDict assign[=] dictionary[[], []]
for taget[name[line]] in starred[call[name[output].splitlines, parameter[]]] begin[:]
variable[mobj] assign[=] call[name[re].match, parameter[constant[(\S+)\s+(\d+)\s+(\d+\.\d+|\.)\s+(\S.*\S)\s*$], name[line]]]
if name[mobj] begin[:]
variable[fname] assign[=] call[call[name[mobj].group, parameter[constant[1]]].replace, parameter[constant[.], constant[_]]]
call[name[info_dict]][name[fname]] assign[=] call[name[util].parse_value, parameter[call[name[mobj].group, parameter[constant[2]]]]]
call[name[self]._descDict][name[fname]] assign[=] call[name[mobj].group, parameter[constant[4]]]
return[name[info_dict]] | keyword[def] identifier[getStats] ( identifier[self] ):
literal[string]
identifier[info_dict] ={}
identifier[args] =[ identifier[varnishstatCmd] , literal[string] ]
keyword[if] identifier[self] . identifier[_instance] keyword[is] keyword[not] keyword[None] :
identifier[args] . identifier[extend] ([ literal[string] , identifier[self] . identifier[_instance] ])
identifier[output] = identifier[util] . identifier[exec_command] ( identifier[args] )
keyword[if] identifier[self] . identifier[_descDict] keyword[is] keyword[None] :
identifier[self] . identifier[_descDict] ={}
keyword[for] identifier[line] keyword[in] identifier[output] . identifier[splitlines] ():
identifier[mobj] = identifier[re] . identifier[match] ( literal[string] ,
identifier[line] )
keyword[if] identifier[mobj] :
identifier[fname] = identifier[mobj] . identifier[group] ( literal[int] ). identifier[replace] ( literal[string] , literal[string] )
identifier[info_dict] [ identifier[fname] ]= identifier[util] . identifier[parse_value] ( identifier[mobj] . identifier[group] ( literal[int] ))
identifier[self] . identifier[_descDict] [ identifier[fname] ]= identifier[mobj] . identifier[group] ( literal[int] )
keyword[return] identifier[info_dict] | def getStats(self):
"""Runs varnishstats command to get stats from Varnish Cache.
@return: Dictionary of stats.
"""
info_dict = {}
args = [varnishstatCmd, '-1']
if self._instance is not None:
args.extend(['-n', self._instance]) # depends on [control=['if'], data=[]]
output = util.exec_command(args)
if self._descDict is None:
self._descDict = {} # depends on [control=['if'], data=[]]
for line in output.splitlines():
mobj = re.match('(\\S+)\\s+(\\d+)\\s+(\\d+\\.\\d+|\\.)\\s+(\\S.*\\S)\\s*$', line)
if mobj:
fname = mobj.group(1).replace('.', '_')
info_dict[fname] = util.parse_value(mobj.group(2))
self._descDict[fname] = mobj.group(4) # depends on [control=['if'], data=[]] # depends on [control=['for'], data=['line']]
return info_dict |
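For example, a `varnishstat -1` output line such as

    client_conn              112014         9.56 Client connections accepted

matches the regex and yields info_dict['client_conn'] = 112014, with 'Client connections accepted' kept in self._descDict; dotted counter names (e.g. 'MAIN.cache_hit') are stored with '_' in place of '.'.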
def posix_to_dt_str(posix):
"""Reverse of str_to_datetime.
This is used by GCS stub to generate GET bucket XML response.
Args:
posix: A float of secs from unix epoch.
Returns:
A datetime str.
"""
dt = datetime.datetime.utcfromtimestamp(posix)
dt_str = dt.strftime(_DT_FORMAT)
return dt_str + '.000Z' | def function[posix_to_dt_str, parameter[posix]]:
constant[Reverse of str_to_datetime.
This is used by GCS stub to generate GET bucket XML response.
Args:
posix: A float of secs from unix epoch.
Returns:
A datetime str.
]
variable[dt] assign[=] call[name[datetime].datetime.utcfromtimestamp, parameter[name[posix]]]
variable[dt_str] assign[=] call[name[dt].strftime, parameter[name[_DT_FORMAT]]]
return[binary_operation[name[dt_str] + constant[.000Z]]] | keyword[def] identifier[posix_to_dt_str] ( identifier[posix] ):
literal[string]
identifier[dt] = identifier[datetime] . identifier[datetime] . identifier[utcfromtimestamp] ( identifier[posix] )
identifier[dt_str] = identifier[dt] . identifier[strftime] ( identifier[_DT_FORMAT] )
keyword[return] identifier[dt_str] + literal[string] | def posix_to_dt_str(posix):
"""Reverse of str_to_datetime.
This is used by GCS stub to generate GET bucket XML response.
Args:
posix: A float of secs from unix epoch.
Returns:
A datetime str.
"""
dt = datetime.datetime.utcfromtimestamp(posix)
dt_str = dt.strftime(_DT_FORMAT)
return dt_str + '.000Z' |
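For example, assuming the module's _DT_FORMAT is '%Y-%m-%dT%H:%M:%S':

    posix_to_dt_str(0)            # -> '1970-01-01T00:00:00.000Z'
    posix_to_dt_str(1234567890)   # -> '2009-02-13T23:31:30.000Z'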
def default_token_user_loader(self, token):
"""
Default token user loader
Accepts a token and decodes it checking signature and expiration. Then
loads user by id from the token to see if account is not locked. If
all is good, returns user record, otherwise throws an exception.
:param token: str, token string
:return: boiler.user.models.User
"""
try:
data = self.decode_token(token)
except jwt.exceptions.DecodeError as e:
raise x.JwtDecodeError(str(e))
except jwt.ExpiredSignatureError as e:
raise x.JwtExpired(str(e))
user = self.get(data['user_id'])
if not user:
msg = 'No user with such id [{}]'
raise x.JwtNoUser(msg.format(data['user_id']))
if user.is_locked():
msg = 'This account is locked'
raise x.AccountLocked(msg, locked_until=user.locked_until)
if self.require_confirmation and not user.email_confirmed:
msg = 'Please confirm your email address [{}]'
raise x.EmailNotConfirmed(
msg.format(user.email_secure),
email=user.email
)
# test token matches the one on file
if not token == user._token:
raise x.JwtTokenMismatch('The token does not match our records')
# return on success
return user | def function[default_token_user_loader, parameter[self, token]]:
constant[
Default token user loader
Accepts a token and decodes it checking signature and expiration. Then
loads user by id from the token to see if account is not locked. If
all is good, returns user record, otherwise throws an exception.
:param token: str, token string
:return: boiler.user.models.User
]
<ast.Try object at 0x7da20c794670>
variable[user] assign[=] call[name[self].get, parameter[call[name[data]][constant[user_id]]]]
if <ast.UnaryOp object at 0x7da20c796080> begin[:]
variable[msg] assign[=] constant[No user with such id [{}]]
<ast.Raise object at 0x7da20c794c70>
if call[name[user].is_locked, parameter[]] begin[:]
variable[msg] assign[=] constant[This account is locked]
<ast.Raise object at 0x7da1b24e8580>
if <ast.BoolOp object at 0x7da1b24e95d0> begin[:]
variable[msg] assign[=] constant[Please confirm your email address [{}]]
<ast.Raise object at 0x7da1b24eb0d0>
if <ast.UnaryOp object at 0x7da1b24eb580> begin[:]
<ast.Raise object at 0x7da1b24e9db0>
return[name[user]] | keyword[def] identifier[default_token_user_loader] ( identifier[self] , identifier[token] ):
literal[string]
keyword[try] :
identifier[data] = identifier[self] . identifier[decode_token] ( identifier[token] )
keyword[except] identifier[jwt] . identifier[exceptions] . identifier[DecodeError] keyword[as] identifier[e] :
keyword[raise] identifier[x] . identifier[JwtDecodeError] ( identifier[str] ( identifier[e] ))
keyword[except] identifier[jwt] . identifier[ExpiredSignatureError] keyword[as] identifier[e] :
keyword[raise] identifier[x] . identifier[JwtExpired] ( identifier[str] ( identifier[e] ))
identifier[user] = identifier[self] . identifier[get] ( identifier[data] [ literal[string] ])
keyword[if] keyword[not] identifier[user] :
identifier[msg] = literal[string]
keyword[raise] identifier[x] . identifier[JwtNoUser] ( identifier[msg] . identifier[format] ( identifier[data] [ literal[string] ]))
keyword[if] identifier[user] . identifier[is_locked] ():
identifier[msg] = literal[string]
keyword[raise] identifier[x] . identifier[AccountLocked] ( identifier[msg] , identifier[locked_until] = identifier[user] . identifier[locked_until] )
keyword[if] identifier[self] . identifier[require_confirmation] keyword[and] keyword[not] identifier[user] . identifier[email_confirmed] :
identifier[msg] = literal[string]
keyword[raise] identifier[x] . identifier[EmailNotConfirmed] (
identifier[msg] . identifier[format] ( identifier[user] . identifier[email_secure] ),
identifier[email] = identifier[user] . identifier[email]
)
keyword[if] keyword[not] identifier[token] == identifier[user] . identifier[_token] :
keyword[raise] identifier[x] . identifier[JwtTokenMismatch] ( literal[string] )
keyword[return] identifier[user] | def default_token_user_loader(self, token):
"""
Default token user loader
Accepts a token and decodes it checking signature and expiration. Then
loads user by id from the token to see if account is not locked. If
all is good, returns user record, otherwise throws an exception.
:param token: str, token string
:return: boiler.user.models.User
"""
try:
data = self.decode_token(token) # depends on [control=['try'], data=[]]
except jwt.exceptions.DecodeError as e:
raise x.JwtDecodeError(str(e)) # depends on [control=['except'], data=['e']]
except jwt.ExpiredSignatureError as e:
raise x.JwtExpired(str(e)) # depends on [control=['except'], data=['e']]
user = self.get(data['user_id'])
if not user:
msg = 'No user with such id [{}]'
raise x.JwtNoUser(msg.format(data['user_id'])) # depends on [control=['if'], data=[]]
if user.is_locked():
msg = 'This account is locked'
raise x.AccountLocked(msg, locked_until=user.locked_until) # depends on [control=['if'], data=[]]
if self.require_confirmation and (not user.email_confirmed):
msg = 'Please confirm your email address [{}]'
raise x.EmailNotConfirmed(msg.format(user.email_secure), email=user.email) # depends on [control=['if'], data=[]]
# test token matches the one on file
if not token == user._token:
raise x.JwtTokenMismatch('The token does not match our records') # depends on [control=['if'], data=[]]
# return on success
return user |
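A usage sketch of the failure modes (users_service is a hypothetical instance of this class; x is the module's exceptions namespace):

    try:
        user = users_service.default_token_user_loader(token)
    except (x.JwtDecodeError, x.JwtExpired):
        ...  # bad or stale token: ask the client to re-authenticate
    except (x.JwtNoUser, x.JwtTokenMismatch, x.AccountLocked, x.EmailNotConfirmed):
        ...  # token decodes, but the account cannot be used: reject the request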
def append(self, obj):
"""Append an object to end. If the object is a string, appends a
:class:`Word <Word>` object.
"""
if isinstance(obj, basestring):
return self._collection.append(Word(obj))
else:
return self._collection.append(obj) | def function[append, parameter[self, obj]]:
constant[Append an object to end. If the object is a string, appends a
:class:`Word <Word>` object.
]
if call[name[isinstance], parameter[name[obj], name[basestring]]] begin[:]
return[call[name[self]._collection.append, parameter[call[name[Word], parameter[name[obj]]]]]] | keyword[def] identifier[append] ( identifier[self] , identifier[obj] ):
literal[string]
keyword[if] identifier[isinstance] ( identifier[obj] , identifier[basestring] ):
keyword[return] identifier[self] . identifier[_collection] . identifier[append] ( identifier[Word] ( identifier[obj] ))
keyword[else] :
keyword[return] identifier[self] . identifier[_collection] . identifier[append] ( identifier[obj] ) | def append(self, obj):
"""Append an object to end. If the object is a string, appends a
:class:`Word <Word>` object.
"""
if isinstance(obj, basestring):
return self._collection.append(Word(obj)) # depends on [control=['if'], data=[]]
else:
return self._collection.append(obj) |
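A usage sketch (TextBlob-style WordList assumed):

    wl.append('cats')        # plain strings are wrapped as Word('cats')
    wl.append(Word('dogs'))  # anything else is appended unchanged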
def restore_expanded_state(self):
"""Restore all items expanded state"""
if self.__expanded_state is None:
return
for item in self.get_items()+self.get_top_level_items():
user_text = get_item_user_text(item)
is_expanded = self.__expanded_state.get(hash(user_text))
if is_expanded is not None:
item.setExpanded(is_expanded) | def function[restore_expanded_state, parameter[self]]:
constant[Restore all items expanded state]
if compare[name[self].__expanded_state is constant[None]] begin[:]
return[None]
for taget[name[item]] in starred[binary_operation[call[name[self].get_items, parameter[]] + call[name[self].get_top_level_items, parameter[]]]] begin[:]
variable[user_text] assign[=] call[name[get_item_user_text], parameter[name[item]]]
variable[is_expanded] assign[=] call[name[self].__expanded_state.get, parameter[call[name[hash], parameter[name[user_text]]]]]
if compare[name[is_expanded] is_not constant[None]] begin[:]
call[name[item].setExpanded, parameter[name[is_expanded]]] | keyword[def] identifier[restore_expanded_state] ( identifier[self] ):
literal[string]
keyword[if] identifier[self] . identifier[__expanded_state] keyword[is] keyword[None] :
keyword[return]
keyword[for] identifier[item] keyword[in] identifier[self] . identifier[get_items] ()+ identifier[self] . identifier[get_top_level_items] ():
identifier[user_text] = identifier[get_item_user_text] ( identifier[item] )
identifier[is_expanded] = identifier[self] . identifier[__expanded_state] . identifier[get] ( identifier[hash] ( identifier[user_text] ))
keyword[if] identifier[is_expanded] keyword[is] keyword[not] keyword[None] :
identifier[item] . identifier[setExpanded] ( identifier[is_expanded] ) | def restore_expanded_state(self):
"""Restore all items expanded state"""
if self.__expanded_state is None:
return # depends on [control=['if'], data=[]]
for item in self.get_items() + self.get_top_level_items():
user_text = get_item_user_text(item)
is_expanded = self.__expanded_state.get(hash(user_text))
if is_expanded is not None:
item.setExpanded(is_expanded) # depends on [control=['if'], data=['is_expanded']] # depends on [control=['for'], data=['item']] |
def forwards(self, orm):
"Write your forwards methods here."
# Note: Remember to use orm['appname.ModelName'] rather than "from appname.models..."
for translation in orm['people.PersonTranslation'].objects.all():
if translation.language in ['en', 'de']:
translation.roman_first_name = translation.first_name
translation.roman_last_name = translation.last_name
else:
translation.non_roman_first_name = translation.first_name
translation.non_roman_last_name = translation.last_name
translation.save() | def function[forwards, parameter[self, orm]]:
constant[Write your forwards methods here.]
for taget[name[translation]] in starred[call[call[name[orm]][constant[people.PersonTranslation]].objects.all, parameter[]]] begin[:]
if compare[name[translation].language in list[[<ast.Constant object at 0x7da1b00da260>, <ast.Constant object at 0x7da1b00da380>]]] begin[:]
name[translation].roman_first_name assign[=] name[translation].first_name
name[translation].roman_last_name assign[=] name[translation].last_name
call[name[translation].save, parameter[]] | keyword[def] identifier[forwards] ( identifier[self] , identifier[orm] ):
literal[string]
keyword[for] identifier[translation] keyword[in] identifier[orm] [ literal[string] ]. identifier[objects] . identifier[all] ():
keyword[if] identifier[translation] . identifier[language] keyword[in] [ literal[string] , literal[string] ]:
identifier[translation] . identifier[roman_first_name] = identifier[translation] . identifier[first_name]
identifier[translation] . identifier[roman_last_name] = identifier[translation] . identifier[last_name]
keyword[else] :
identifier[translation] . identifier[non_roman_first_name] = identifier[translation] . identifier[first_name]
identifier[translation] . identifier[non_roman_last_name] = identifier[translation] . identifier[last_name]
identifier[translation] . identifier[save] () | def forwards(self, orm):
"""Write your forwards methods here."""
# Note: Remember to use orm['appname.ModelName'] rather than "from appname.models..."
for translation in orm['people.PersonTranslation'].objects.all():
if translation.language in ['en', 'de']:
translation.roman_first_name = translation.first_name
translation.roman_last_name = translation.last_name # depends on [control=['if'], data=[]]
else:
translation.non_roman_first_name = translation.first_name
translation.non_roman_last_name = translation.last_name
translation.save() # depends on [control=['for'], data=['translation']] |
def start(self):
"""Get the start key of the first range.
None if RangeMap is empty or unbounded to the left.
"""
if self._values[0] is NOT_SET:
try:
return self._keys[1]
except IndexError:
# This is empty or everything is mapped to a single value
return None
else:
# This is unbounded to the left
return self._keys[0] | def function[start, parameter[self]]:
constant[Get the start key of the first range.
None if RangeMap is empty or unbounded to the left.
]
if compare[call[name[self]._values][constant[0]] is name[NOT_SET]] begin[:]
<ast.Try object at 0x7da1b26a5600> | keyword[def] identifier[start] ( identifier[self] ):
literal[string]
keyword[if] identifier[self] . identifier[_values] [ literal[int] ] keyword[is] identifier[NOT_SET] :
keyword[try] :
keyword[return] identifier[self] . identifier[_keys] [ literal[int] ]
keyword[except] identifier[IndexError] :
keyword[return] keyword[None]
keyword[else] :
keyword[return] identifier[self] . identifier[_keys] [ literal[int] ] | def start(self):
"""Get the start key of the first range.
None if RangeMap is empty or unbounded to the left.
"""
if self._values[0] is NOT_SET:
try:
return self._keys[1] # depends on [control=['try'], data=[]]
except IndexError: # This is empty or everything is mapped to a single value
return None # depends on [control=['except'], data=[]] # depends on [control=['if'], data=[]]
else: # This is unbounded to the left
return self._keys[0] |
def _do_zero_width(self):
"""Fetch and update zero width tables."""
self._do_retrieve(self.UCD_URL, self.UCD_IN)
(version, date, values) = self._parse_category(
fname=self.UCD_IN,
categories=('Me', 'Mn',)
)
table = self._make_table(values)
self._do_write(self.ZERO_OUT, 'ZERO_WIDTH', version, date, table) | def function[_do_zero_width, parameter[self]]:
constant[Fetch and update zero width tables.]
call[name[self]._do_retrieve, parameter[name[self].UCD_URL, name[self].UCD_IN]]
<ast.Tuple object at 0x7da18f58fc70> assign[=] call[name[self]._parse_category, parameter[]]
variable[table] assign[=] call[name[self]._make_table, parameter[name[values]]]
call[name[self]._do_write, parameter[name[self].ZERO_OUT, constant[ZERO_WIDTH], name[version], name[date], name[table]]] | keyword[def] identifier[_do_zero_width] ( identifier[self] ):
literal[string]
identifier[self] . identifier[_do_retrieve] ( identifier[self] . identifier[UCD_URL] , identifier[self] . identifier[UCD_IN] )
( identifier[version] , identifier[date] , identifier[values] )= identifier[self] . identifier[_parse_category] (
identifier[fname] = identifier[self] . identifier[UCD_IN] ,
identifier[categories] =( literal[string] , literal[string] ,)
)
identifier[table] = identifier[self] . identifier[_make_table] ( identifier[values] )
identifier[self] . identifier[_do_write] ( identifier[self] . identifier[ZERO_OUT] , literal[string] , identifier[version] , identifier[date] , identifier[table] ) | def _do_zero_width(self):
"""Fetch and update zero width tables."""
self._do_retrieve(self.UCD_URL, self.UCD_IN)
(version, date, values) = self._parse_category(fname=self.UCD_IN, categories=('Me', 'Mn'))
table = self._make_table(values)
self._do_write(self.ZERO_OUT, 'ZERO_WIDTH', version, date, table) |
def getFormattedResult(self, specs=None, decimalmark='.', sciformat=1,
html=True):
"""Formatted result:
1. If the result is a detection limit, returns '< LDL' or '> UDL'
2. Print ResultText of matching ResultOptions
3. If the result is not floatable, return it without being formatted
4. If the analysis specs has hidemin or hidemax enabled and the
result is out of range, render result as '<min' or '>max'
5. If the result is below Lower Detection Limit, show '<LDL'
        6. If the result is above Upper Detection Limit, show '>UDL'
7. Otherwise, render numerical value
:param specs: Optional result specifications, a dictionary as follows:
{'min': <min_val>,
'max': <max_val>,
'error': <error>,
'hidemin': <hidemin_val>,
'hidemax': <hidemax_val>}
:param decimalmark: The string to be used as a decimal separator.
default is '.'
:param sciformat: 1. The sci notation has to be formatted as aE^+b
2. The sci notation has to be formatted as a·10^b
3. As 2, but with super html entity for exp
4. The sci notation has to be formatted as a·10^b
5. As 4, but with super html entity for exp
By default 1
        :param html: if true, returns a string with the special characters
        escaped: e.g.: '&lt;' and '&gt;' (LDL and UDL for results like < 23.4).
"""
result = self.getResult()
# 1. The result is a detection limit, return '< LDL' or '> UDL'
dl = self.getDetectionLimitOperand()
if dl:
try:
res = float(result) # required, check if floatable
res = drop_trailing_zeros_decimal(res)
fdm = formatDecimalMark(res, decimalmark)
hdl = cgi.escape(dl) if html else dl
return '%s %s' % (hdl, fdm)
except (TypeError, ValueError):
logger.warn(
"The result for the analysis %s is a detection limit, "
"but not floatable: %s" % (self.id, result))
return formatDecimalMark(result, decimalmark=decimalmark)
choices = self.getResultOptions()
        # 2. Print ResultText of matching ResultOptions
match = [x['ResultText'] for x in choices
if str(x['ResultValue']) == str(result)]
if match:
return match[0]
# 3. If the result is not floatable, return it without being formatted
try:
result = float(result)
except (TypeError, ValueError):
return formatDecimalMark(result, decimalmark=decimalmark)
# 4. If the analysis specs has enabled hidemin or hidemax and the
# result is out of range, render result as '<min' or '>max'
specs = specs if specs else self.getResultsRange()
hidemin = specs.get('hidemin', '')
hidemax = specs.get('hidemax', '')
try:
belowmin = hidemin and result < float(hidemin) or False
except (TypeError, ValueError):
belowmin = False
try:
abovemax = hidemax and result > float(hidemax) or False
except (TypeError, ValueError):
abovemax = False
# 4.1. If result is below min and hidemin enabled, return '<min'
if belowmin:
fdm = formatDecimalMark('< %s' % hidemin, decimalmark)
            return fdm.replace('< ', '&lt; ', 1) if html else fdm
# 4.2. If result is above max and hidemax enabled, return '>max'
if abovemax:
fdm = formatDecimalMark('> %s' % hidemax, decimalmark)
            return fdm.replace('> ', '&gt; ', 1) if html else fdm
# Below Lower Detection Limit (LDL)?
ldl = self.getLowerDetectionLimit()
if result < ldl:
# LDL must not be formatted according to precision, etc.
# Drop trailing zeros from decimal
ldl = drop_trailing_zeros_decimal(ldl)
fdm = formatDecimalMark('< %s' % ldl, decimalmark)
            return fdm.replace('< ', '&lt; ', 1) if html else fdm
# Above Upper Detection Limit (UDL)?
udl = self.getUpperDetectionLimit()
if result > udl:
# UDL must not be formatted according to precision, etc.
# Drop trailing zeros from decimal
udl = drop_trailing_zeros_decimal(udl)
fdm = formatDecimalMark('> %s' % udl, decimalmark)
            return fdm.replace('> ', '&gt; ', 1) if html else fdm
# Render numerical values
return format_numeric_result(self, self.getResult(),
decimalmark=decimalmark,
sciformat=sciformat) | def function[getFormattedResult, parameter[self, specs, decimalmark, sciformat, html]]:
constant[Formatted result:
1. If the result is a detection limit, returns '< LDL' or '> UDL'
2. Print ResultText of matching ResultOptions
3. If the result is not floatable, return it without being formatted
4. If the analysis specs has hidemin or hidemax enabled and the
result is out of range, render result as '<min' or '>max'
5. If the result is below Lower Detection Limit, show '<LDL'
    6. If the result is above Upper Detection Limit, show '>UDL'
7. Otherwise, render numerical value
:param specs: Optional result specifications, a dictionary as follows:
{'min': <min_val>,
'max': <max_val>,
'error': <error>,
'hidemin': <hidemin_val>,
'hidemax': <hidemax_val>}
:param decimalmark: The string to be used as a decimal separator.
default is '.'
:param sciformat: 1. The sci notation has to be formatted as aE^+b
                      2. The sci notation has to be formatted as ax10^b
3. As 2, but with super html entity for exp
4. The sci notation has to be formatted as a·10^b
5. As 4, but with super html entity for exp
By default 1
    :param html: if true, returns a string with the special characters
escaped: e.g: '<' and '>' (LDL and UDL for results like < 23.4).
]
variable[result] assign[=] call[name[self].getResult, parameter[]]
variable[dl] assign[=] call[name[self].getDetectionLimitOperand, parameter[]]
if name[dl] begin[:]
<ast.Try object at 0x7da1b1d64940>
variable[choices] assign[=] call[name[self].getResultOptions, parameter[]]
variable[match] assign[=] <ast.ListComp object at 0x7da1b1d67f70>
if name[match] begin[:]
return[call[name[match]][constant[0]]]
<ast.Try object at 0x7da1b1d67490>
variable[specs] assign[=] <ast.IfExp object at 0x7da1b1d67610>
variable[hidemin] assign[=] call[name[specs].get, parameter[constant[hidemin], constant[]]]
variable[hidemax] assign[=] call[name[specs].get, parameter[constant[hidemax], constant[]]]
<ast.Try object at 0x7da1b1d664a0>
<ast.Try object at 0x7da1b23115a0>
if name[belowmin] begin[:]
variable[fdm] assign[=] call[name[formatDecimalMark], parameter[binary_operation[constant[< %s] <ast.Mod object at 0x7da2590d6920> name[hidemin]], name[decimalmark]]]
return[<ast.IfExp object at 0x7da1b2311270>]
if name[abovemax] begin[:]
variable[fdm] assign[=] call[name[formatDecimalMark], parameter[binary_operation[constant[> %s] <ast.Mod object at 0x7da2590d6920> name[hidemax]], name[decimalmark]]]
return[<ast.IfExp object at 0x7da1b2310040>]
variable[ldl] assign[=] call[name[self].getLowerDetectionLimit, parameter[]]
if compare[name[result] less[<] name[ldl]] begin[:]
variable[ldl] assign[=] call[name[drop_trailing_zeros_decimal], parameter[name[ldl]]]
variable[fdm] assign[=] call[name[formatDecimalMark], parameter[binary_operation[constant[< %s] <ast.Mod object at 0x7da2590d6920> name[ldl]], name[decimalmark]]]
return[<ast.IfExp object at 0x7da1b2313880>]
variable[udl] assign[=] call[name[self].getUpperDetectionLimit, parameter[]]
if compare[name[result] greater[>] name[udl]] begin[:]
variable[udl] assign[=] call[name[drop_trailing_zeros_decimal], parameter[name[udl]]]
variable[fdm] assign[=] call[name[formatDecimalMark], parameter[binary_operation[constant[> %s] <ast.Mod object at 0x7da2590d6920> name[udl]], name[decimalmark]]]
return[<ast.IfExp object at 0x7da1b2310400>]
return[call[name[format_numeric_result], parameter[name[self], call[name[self].getResult, parameter[]]]]] | keyword[def] identifier[getFormattedResult] ( identifier[self] , identifier[specs] = keyword[None] , identifier[decimalmark] = literal[string] , identifier[sciformat] = literal[int] ,
identifier[html] = keyword[True] ):
literal[string]
identifier[result] = identifier[self] . identifier[getResult] ()
identifier[dl] = identifier[self] . identifier[getDetectionLimitOperand] ()
keyword[if] identifier[dl] :
keyword[try] :
identifier[res] = identifier[float] ( identifier[result] )
identifier[res] = identifier[drop_trailing_zeros_decimal] ( identifier[res] )
identifier[fdm] = identifier[formatDecimalMark] ( identifier[res] , identifier[decimalmark] )
identifier[hdl] = identifier[cgi] . identifier[escape] ( identifier[dl] ) keyword[if] identifier[html] keyword[else] identifier[dl]
keyword[return] literal[string] %( identifier[hdl] , identifier[fdm] )
keyword[except] ( identifier[TypeError] , identifier[ValueError] ):
identifier[logger] . identifier[warn] (
literal[string]
literal[string] %( identifier[self] . identifier[id] , identifier[result] ))
keyword[return] identifier[formatDecimalMark] ( identifier[result] , identifier[decimalmark] = identifier[decimalmark] )
identifier[choices] = identifier[self] . identifier[getResultOptions] ()
identifier[match] =[ identifier[x] [ literal[string] ] keyword[for] identifier[x] keyword[in] identifier[choices]
keyword[if] identifier[str] ( identifier[x] [ literal[string] ])== identifier[str] ( identifier[result] )]
keyword[if] identifier[match] :
keyword[return] identifier[match] [ literal[int] ]
keyword[try] :
identifier[result] = identifier[float] ( identifier[result] )
keyword[except] ( identifier[TypeError] , identifier[ValueError] ):
keyword[return] identifier[formatDecimalMark] ( identifier[result] , identifier[decimalmark] = identifier[decimalmark] )
identifier[specs] = identifier[specs] keyword[if] identifier[specs] keyword[else] identifier[self] . identifier[getResultsRange] ()
identifier[hidemin] = identifier[specs] . identifier[get] ( literal[string] , literal[string] )
identifier[hidemax] = identifier[specs] . identifier[get] ( literal[string] , literal[string] )
keyword[try] :
identifier[belowmin] = identifier[hidemin] keyword[and] identifier[result] < identifier[float] ( identifier[hidemin] ) keyword[or] keyword[False]
keyword[except] ( identifier[TypeError] , identifier[ValueError] ):
identifier[belowmin] = keyword[False]
keyword[try] :
identifier[abovemax] = identifier[hidemax] keyword[and] identifier[result] > identifier[float] ( identifier[hidemax] ) keyword[or] keyword[False]
keyword[except] ( identifier[TypeError] , identifier[ValueError] ):
identifier[abovemax] = keyword[False]
keyword[if] identifier[belowmin] :
identifier[fdm] = identifier[formatDecimalMark] ( literal[string] % identifier[hidemin] , identifier[decimalmark] )
keyword[return] identifier[fdm] . identifier[replace] ( literal[string] , literal[string] , literal[int] ) keyword[if] identifier[html] keyword[else] identifier[fdm]
keyword[if] identifier[abovemax] :
identifier[fdm] = identifier[formatDecimalMark] ( literal[string] % identifier[hidemax] , identifier[decimalmark] )
keyword[return] identifier[fdm] . identifier[replace] ( literal[string] , literal[string] , literal[int] ) keyword[if] identifier[html] keyword[else] identifier[fdm]
identifier[ldl] = identifier[self] . identifier[getLowerDetectionLimit] ()
keyword[if] identifier[result] < identifier[ldl] :
identifier[ldl] = identifier[drop_trailing_zeros_decimal] ( identifier[ldl] )
identifier[fdm] = identifier[formatDecimalMark] ( literal[string] % identifier[ldl] , identifier[decimalmark] )
keyword[return] identifier[fdm] . identifier[replace] ( literal[string] , literal[string] , literal[int] ) keyword[if] identifier[html] keyword[else] identifier[fdm]
identifier[udl] = identifier[self] . identifier[getUpperDetectionLimit] ()
keyword[if] identifier[result] > identifier[udl] :
identifier[udl] = identifier[drop_trailing_zeros_decimal] ( identifier[udl] )
identifier[fdm] = identifier[formatDecimalMark] ( literal[string] % identifier[udl] , identifier[decimalmark] )
keyword[return] identifier[fdm] . identifier[replace] ( literal[string] , literal[string] , literal[int] ) keyword[if] identifier[html] keyword[else] identifier[fdm]
keyword[return] identifier[format_numeric_result] ( identifier[self] , identifier[self] . identifier[getResult] (),
identifier[decimalmark] = identifier[decimalmark] ,
identifier[sciformat] = identifier[sciformat] ) | def getFormattedResult(self, specs=None, decimalmark='.', sciformat=1, html=True):
"""Formatted result:
1. If the result is a detection limit, returns '< LDL' or '> UDL'
2. Print ResultText of matching ResultOptions
3. If the result is not floatable, return it without being formatted
4. If the analysis specs has hidemin or hidemax enabled and the
result is out of range, render result as '<min' or '>max'
5. If the result is below Lower Detection Limit, show '<LDL'
        6. If the result is above Upper Detection Limit, show '>UDL'
7. Otherwise, render numerical value
:param specs: Optional result specifications, a dictionary as follows:
{'min': <min_val>,
'max': <max_val>,
'error': <error>,
'hidemin': <hidemin_val>,
'hidemax': <hidemax_val>}
:param decimalmark: The string to be used as a decimal separator.
default is '.'
:param sciformat: 1. The sci notation has to be formatted as aE^+b
                      2. The sci notation has to be formatted as ax10^b
3. As 2, but with super html entity for exp
4. The sci notation has to be formatted as a·10^b
5. As 4, but with super html entity for exp
By default 1
    :param html: if true, returns a string with the special characters
escaped: e.g: '<' and '>' (LDL and UDL for results like < 23.4).
"""
result = self.getResult()
# 1. The result is a detection limit, return '< LDL' or '> UDL'
dl = self.getDetectionLimitOperand()
if dl:
try:
res = float(result) # required, check if floatable
res = drop_trailing_zeros_decimal(res)
fdm = formatDecimalMark(res, decimalmark)
hdl = cgi.escape(dl) if html else dl
return '%s %s' % (hdl, fdm) # depends on [control=['try'], data=[]]
except (TypeError, ValueError):
logger.warn('The result for the analysis %s is a detection limit, but not floatable: %s' % (self.id, result))
return formatDecimalMark(result, decimalmark=decimalmark) # depends on [control=['except'], data=[]] # depends on [control=['if'], data=[]]
choices = self.getResultOptions()
    # 2. Print ResultText of matching ResultOptions
match = [x['ResultText'] for x in choices if str(x['ResultValue']) == str(result)]
if match:
return match[0] # depends on [control=['if'], data=[]]
# 3. If the result is not floatable, return it without being formatted
try:
result = float(result) # depends on [control=['try'], data=[]]
except (TypeError, ValueError):
return formatDecimalMark(result, decimalmark=decimalmark) # depends on [control=['except'], data=[]]
# 4. If the analysis specs has enabled hidemin or hidemax and the
# result is out of range, render result as '<min' or '>max'
specs = specs if specs else self.getResultsRange()
hidemin = specs.get('hidemin', '')
hidemax = specs.get('hidemax', '')
try:
belowmin = hidemin and result < float(hidemin) or False # depends on [control=['try'], data=[]]
except (TypeError, ValueError):
belowmin = False # depends on [control=['except'], data=[]]
try:
abovemax = hidemax and result > float(hidemax) or False # depends on [control=['try'], data=[]]
except (TypeError, ValueError):
abovemax = False # depends on [control=['except'], data=[]]
# 4.1. If result is below min and hidemin enabled, return '<min'
if belowmin:
fdm = formatDecimalMark('< %s' % hidemin, decimalmark)
        return fdm.replace('< ', '&lt; ', 1) if html else fdm # depends on [control=['if'], data=[]]
# 4.2. If result is above max and hidemax enabled, return '>max'
if abovemax:
fdm = formatDecimalMark('> %s' % hidemax, decimalmark)
        return fdm.replace('> ', '&gt; ', 1) if html else fdm # depends on [control=['if'], data=[]]
# Below Lower Detection Limit (LDL)?
ldl = self.getLowerDetectionLimit()
if result < ldl:
# LDL must not be formatted according to precision, etc.
# Drop trailing zeros from decimal
ldl = drop_trailing_zeros_decimal(ldl)
fdm = formatDecimalMark('< %s' % ldl, decimalmark)
        return fdm.replace('< ', '&lt; ', 1) if html else fdm # depends on [control=['if'], data=['ldl']]
# Above Upper Detection Limit (UDL)?
udl = self.getUpperDetectionLimit()
if result > udl:
# UDL must not be formatted according to precision, etc.
# Drop trailing zeros from decimal
udl = drop_trailing_zeros_decimal(udl)
fdm = formatDecimalMark('> %s' % udl, decimalmark)
        return fdm.replace('> ', '&gt; ', 1) if html else fdm # depends on [control=['if'], data=['udl']]
# Render numerical values
return format_numeric_result(self, self.getResult(), decimalmark=decimalmark, sciformat=sciformat) |
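# A minimal, self-contained sketch (not part of the record above) isolating
# the hidemin/hidemax branch of getFormattedResult; `specs` and `result` are
# invented inputs, and '&lt;'/'&gt;' mirror the html-escaping behaviour.
def _hide_out_of_range(result, specs, html=True):
    hidemin = specs.get('hidemin', '')
    hidemax = specs.get('hidemax', '')
    if hidemin and result < float(hidemin):
        text = '< %s' % hidemin
        return text.replace('< ', '&lt; ', 1) if html else text
    if hidemax and result > float(hidemax):
        text = '> %s' % hidemax
        return text.replace('> ', '&gt; ', 1) if html else text
    return None

assert _hide_out_of_range(0.5, {'hidemin': '1'}) == '&lt; 1'
assert _hide_out_of_range(9.0, {'hidemax': '5'}, html=False) == '> 5'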
def process_user_input(self):
"""
This overrides the method in ConsoleMenu to allow for comma-delimited and range inputs.
Examples:
All of the following inputs would have the same result:
* 1,2,3,4
* 1-4
* 1-2,3-4
* 1 - 4
* 1, 2, 3, 4
Raises:
ValueError: If the input cannot be correctly parsed.
"""
user_input = self.screen.input()
try:
indexes = self.__parse_range_list(user_input)
# Subtract 1 from each number for its actual index number
indexes[:] = [x - 1 for x in indexes if 0 < x < len(self.items) + 1]
for index in indexes:
self.current_option = index
self.select()
except Exception as e:
return | def function[process_user_input, parameter[self]]:
constant[
This overrides the method in ConsoleMenu to allow for comma-delimited and range inputs.
Examples:
All of the following inputs would have the same result:
* 1,2,3,4
* 1-4
* 1-2,3-4
* 1 - 4
* 1, 2, 3, 4
Raises:
ValueError: If the input cannot be correctly parsed.
]
variable[user_input] assign[=] call[name[self].screen.input, parameter[]]
<ast.Try object at 0x7da1b133d840> | keyword[def] identifier[process_user_input] ( identifier[self] ):
literal[string]
identifier[user_input] = identifier[self] . identifier[screen] . identifier[input] ()
keyword[try] :
identifier[indexes] = identifier[self] . identifier[__parse_range_list] ( identifier[user_input] )
identifier[indexes] [:]=[ identifier[x] - literal[int] keyword[for] identifier[x] keyword[in] identifier[indexes] keyword[if] literal[int] < identifier[x] < identifier[len] ( identifier[self] . identifier[items] )+ literal[int] ]
keyword[for] identifier[index] keyword[in] identifier[indexes] :
identifier[self] . identifier[current_option] = identifier[index]
identifier[self] . identifier[select] ()
keyword[except] identifier[Exception] keyword[as] identifier[e] :
keyword[return] | def process_user_input(self):
"""
This overrides the method in ConsoleMenu to allow for comma-delimited and range inputs.
Examples:
All of the following inputs would have the same result:
* 1,2,3,4
* 1-4
* 1-2,3-4
* 1 - 4
* 1, 2, 3, 4
Raises:
ValueError: If the input cannot be correctly parsed.
"""
user_input = self.screen.input()
try:
indexes = self.__parse_range_list(user_input)
# Subtract 1 from each number for its actual index number
indexes[:] = [x - 1 for x in indexes if 0 < x < len(self.items) + 1]
for index in indexes:
self.current_option = index
self.select() # depends on [control=['for'], data=['index']] # depends on [control=['try'], data=[]]
except Exception as e:
return # depends on [control=['except'], data=[]] |
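# A hypothetical re-implementation (the real __parse_range_list is private
# and not shown here) of the comma/range grammar process_user_input accepts:
# "1-2, 4" expands to [1, 2, 4] before indexes are made zero-based.
def parse_range_list(text):
    indexes = []
    for part in text.split(','):
        piece = part.strip()
        if '-' in piece:
            start, end = (int(p) for p in piece.split('-', 1))
            indexes.extend(range(start, end + 1))
        else:
            indexes.append(int(piece))
    return sorted(set(indexes))

assert parse_range_list('1-2, 4') == [1, 2, 4]
assert [x - 1 for x in parse_range_list('1 - 4')] == [0, 1, 2, 3]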
def count(self, sqlTail=''):
        "Compiles the filters and counts the number of results. You can use sqlTail to add things such as ORDER BY"
        sql, sqlValues = self.getSQLQuery(count=True)
        return int(self.con.execute('%s %s' % (sql, sqlTail), sqlValues).fetchone()[0])
constant[Compile filters and counts the number of results. You can use sqlTail to add things such as order by]
<ast.Tuple object at 0x7da1b0a84940> assign[=] call[name[self].getSQLQuery, parameter[]]
return[call[name[int], parameter[call[call[call[name[self].con.execute, parameter[binary_operation[constant[%s %s] <ast.Mod object at 0x7da2590d6920> tuple[[<ast.Name object at 0x7da1b0a87430>, <ast.Name object at 0x7da1b0a87580>]]], name[sqlValues]]].fetchone, parameter[]]][constant[0]]]]] | keyword[def] identifier[count] ( identifier[self] , identifier[sqlTail] = literal[string] ):
literal[string]
identifier[sql] , identifier[sqlValues] = identifier[self] . identifier[getSQLQuery] ( identifier[count] = keyword[True] )
keyword[return] identifier[int] ( identifier[self] . identifier[con] . identifier[execute] ( literal[string] %( identifier[sql] , identifier[sqlTail] ), identifier[sqlValues] ). identifier[fetchone] ()[ literal[int] ]) | def count(self, sqlTail=''):
"""Compile filters and counts the number of results. You can use sqlTail to add things such as order by"""
(sql, sqlValues) = self.getSQLQuery(count=True)
return int(self.con.execute('%s %s' % (sql, sqlTail), sqlValues).fetchone()[0]) |
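# A sketch of the pattern count() wraps, run against sqlite3 directly; the
# table, query and bind values are invented for illustration.
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE results (score REAL)')
con.executemany('INSERT INTO results VALUES (?)', [(1.0,), (2.0,), (3.0,)])
sql, sqlValues = 'SELECT COUNT(*) FROM results WHERE score > ?', (1.5,)
sqlTail = ''  # e.g. 'ORDER BY score' in the method above
n = int(con.execute('%s %s' % (sql, sqlTail), sqlValues).fetchone()[0])
assert n == 2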
def _extract_error(self, resp):
"""
Extract the actual error message from a solr response.
"""
reason = resp.headers.get('reason', None)
full_response = None
if reason is None:
try:
# if response is in json format
reason = resp.json()['error']['msg']
except KeyError:
# if json response has unexpected structure
full_response = resp.content
except ValueError:
# otherwise we assume it's html
reason, full_html = self._scrape_response(resp.headers, resp.content)
full_response = unescape_html(full_html)
msg = "[Reason: %s]" % reason
if reason is None:
msg += "\n%s" % full_response
return msg | def function[_extract_error, parameter[self, resp]]:
constant[
Extract the actual error message from a solr response.
]
variable[reason] assign[=] call[name[resp].headers.get, parameter[constant[reason], constant[None]]]
variable[full_response] assign[=] constant[None]
if compare[name[reason] is constant[None]] begin[:]
<ast.Try object at 0x7da20e9b2020>
variable[msg] assign[=] binary_operation[constant[[Reason: %s]] <ast.Mod object at 0x7da2590d6920> name[reason]]
if compare[name[reason] is constant[None]] begin[:]
<ast.AugAssign object at 0x7da20e9b1870>
return[name[msg]] | keyword[def] identifier[_extract_error] ( identifier[self] , identifier[resp] ):
literal[string]
identifier[reason] = identifier[resp] . identifier[headers] . identifier[get] ( literal[string] , keyword[None] )
identifier[full_response] = keyword[None]
keyword[if] identifier[reason] keyword[is] keyword[None] :
keyword[try] :
identifier[reason] = identifier[resp] . identifier[json] ()[ literal[string] ][ literal[string] ]
keyword[except] identifier[KeyError] :
identifier[full_response] = identifier[resp] . identifier[content]
keyword[except] identifier[ValueError] :
identifier[reason] , identifier[full_html] = identifier[self] . identifier[_scrape_response] ( identifier[resp] . identifier[headers] , identifier[resp] . identifier[content] )
identifier[full_response] = identifier[unescape_html] ( identifier[full_html] )
identifier[msg] = literal[string] % identifier[reason]
keyword[if] identifier[reason] keyword[is] keyword[None] :
identifier[msg] += literal[string] % identifier[full_response]
keyword[return] identifier[msg] | def _extract_error(self, resp):
"""
Extract the actual error message from a solr response.
"""
reason = resp.headers.get('reason', None)
full_response = None
if reason is None:
try:
# if response is in json format
reason = resp.json()['error']['msg'] # depends on [control=['try'], data=[]]
except KeyError:
# if json response has unexpected structure
full_response = resp.content # depends on [control=['except'], data=[]]
except ValueError:
# otherwise we assume it's html
(reason, full_html) = self._scrape_response(resp.headers, resp.content)
full_response = unescape_html(full_html) # depends on [control=['except'], data=[]] # depends on [control=['if'], data=['reason']]
msg = '[Reason: %s]' % reason
if reason is None:
msg += '\n%s' % full_response # depends on [control=['if'], data=[]]
return msg |
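# A self-contained sketch of the JSON branch of _extract_error; FakeResp is
# an invented stand-in for the HTTP response object the method receives.
class FakeResp:
    headers = {}
    content = b''
    def json(self):
        return {'error': {'msg': 'undefined field foo'}}

resp = FakeResp()
reason = resp.headers.get('reason', None)
if reason is None:
    try:
        reason = resp.json()['error']['msg']
    except (KeyError, ValueError):
        reason = None
assert '[Reason: %s]' % reason == '[Reason: undefined field foo]'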
def set_cfme_caselevel(testcase, caselevels):
"""Converts tier to caselevel."""
tier = testcase.get("caselevel")
if tier is None:
return
try:
caselevel = caselevels[int(tier)]
except IndexError:
# invalid value
caselevel = "component"
except ValueError:
# there's already string value
return
testcase["caselevel"] = caselevel | def function[set_cfme_caselevel, parameter[testcase, caselevels]]:
constant[Converts tier to caselevel.]
variable[tier] assign[=] call[name[testcase].get, parameter[constant[caselevel]]]
if compare[name[tier] is constant[None]] begin[:]
return[None]
<ast.Try object at 0x7da204346740>
call[name[testcase]][constant[caselevel]] assign[=] name[caselevel] | keyword[def] identifier[set_cfme_caselevel] ( identifier[testcase] , identifier[caselevels] ):
literal[string]
identifier[tier] = identifier[testcase] . identifier[get] ( literal[string] )
keyword[if] identifier[tier] keyword[is] keyword[None] :
keyword[return]
keyword[try] :
identifier[caselevel] = identifier[caselevels] [ identifier[int] ( identifier[tier] )]
keyword[except] identifier[IndexError] :
identifier[caselevel] = literal[string]
keyword[except] identifier[ValueError] :
keyword[return]
identifier[testcase] [ literal[string] ]= identifier[caselevel] | def set_cfme_caselevel(testcase, caselevels):
"""Converts tier to caselevel."""
tier = testcase.get('caselevel')
if tier is None:
return # depends on [control=['if'], data=[]]
try:
caselevel = caselevels[int(tier)] # depends on [control=['try'], data=[]]
except IndexError:
# invalid value
caselevel = 'component' # depends on [control=['except'], data=[]]
except ValueError:
# there's already string value
return # depends on [control=['except'], data=[]]
testcase['caselevel'] = caselevel |
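# Usage sketch with invented inputs (set_cfme_caselevel above must be in
# scope): numeric tiers index into `caselevels`, out-of-range tiers fall
# back to "component", and existing string values pass through unchanged.
caselevels = ['critical', 'high', 'medium', 'low']
cases = [{'caselevel': '2'}, {'caselevel': '9'}, {'caselevel': 'smoke'}]
for case in cases:
    set_cfme_caselevel(case, caselevels)
assert [c['caselevel'] for c in cases] == ['medium', 'component', 'smoke']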
def _random_point(self):
"""Find an approximately random point in the flux cone."""
idx = np.random.randint(self.n_warmup,
size=min(2, np.ceil(np.sqrt(self.n_warmup))))
return self.warmup[idx, :].mean(axis=0) | def function[_random_point, parameter[self]]:
constant[Find an approximately random point in the flux cone.]
variable[idx] assign[=] call[name[np].random.randint, parameter[name[self].n_warmup]]
return[call[call[name[self].warmup][tuple[[<ast.Name object at 0x7da1b0060d90>, <ast.Slice object at 0x7da1b00602b0>]]].mean, parameter[]]] | keyword[def] identifier[_random_point] ( identifier[self] ):
literal[string]
identifier[idx] = identifier[np] . identifier[random] . identifier[randint] ( identifier[self] . identifier[n_warmup] ,
identifier[size] = identifier[min] ( literal[int] , identifier[np] . identifier[ceil] ( identifier[np] . identifier[sqrt] ( identifier[self] . identifier[n_warmup] ))))
keyword[return] identifier[self] . identifier[warmup] [ identifier[idx] ,:]. identifier[mean] ( identifier[axis] = literal[int] ) | def _random_point(self):
"""Find an approximately random point in the flux cone."""
idx = np.random.randint(self.n_warmup, size=min(2, np.ceil(np.sqrt(self.n_warmup))))
return self.warmup[idx, :].mean(axis=0) |
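# A sketch of the same sampling step with fake state; in the real sampler
# `warmup` holds flux-cone warmup points, here it is random data. The int()
# cast guards against numpy versions that reject a float `size`.
import numpy as np

n_warmup, warmup = 9, np.random.rand(9, 4)
idx = np.random.randint(n_warmup, size=min(2, int(np.ceil(np.sqrt(n_warmup)))))
point = warmup[idx, :].mean(axis=0)
assert point.shape == (4,)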
def assemble(experiments,
backend=None,
qobj_id=None, qobj_header=None, # common run options
shots=1024, memory=False, max_credits=None, seed_simulator=None,
default_qubit_los=None, default_meas_los=None, # schedule run options
schedule_los=None, meas_level=2, meas_return='avg',
memory_slots=None, memory_slot_size=100, rep_time=None, parameter_binds=None,
config=None, seed=None, # deprecated
**run_config):
"""Assemble a list of circuits or pulse schedules into a Qobj.
This function serializes the payloads, which could be either circuits or schedules,
to create Qobj "experiments". It further annotates the experiment payload with
header and configurations.
Args:
experiments (QuantumCircuit or list[QuantumCircuit] or Schedule or list[Schedule]):
Circuit(s) or pulse schedule(s) to execute
backend (BaseBackend):
If set, some runtime options are automatically grabbed from
backend.configuration() and backend.defaults().
If any other option is explicitly set (e.g. rep_rate), it
will override the backend's.
            If any other option is set in the run_config, it will
also override the backend's.
qobj_id (str):
String identifier to annotate the Qobj
qobj_header (QobjHeader or dict):
User input that will be inserted in Qobj header, and will also be
copied to the corresponding Result header. Headers do not affect the run.
shots (int):
            Number of repetitions of each circuit, for sampling. Default: 1024
memory (bool):
If True, per-shot measurement bitstrings are returned as well
(provided the backend supports it). For OpenPulse jobs, only
measurement level 2 supports this option. Default: False
max_credits (int):
Maximum credits to spend on job. Default: 10
seed_simulator (int):
Random seed to control sampling, for when backend is a simulator
default_qubit_los (list):
List of default qubit lo frequencies
default_meas_los (list):
List of default meas lo frequencies
schedule_los (None or list[Union[Dict[PulseChannel, float], LoConfig]] or
Union[Dict[PulseChannel, float], LoConfig]):
Experiment LO configurations
meas_level (int):
Set the appropriate level of the measurement output for pulse experiments.
meas_return (str):
Level of measurement data for the backend to return
For `meas_level` 0 and 1:
"single" returns information from every shot.
"avg" returns average measurement output (averaged over number of shots).
memory_slots (int):
Number of classical memory slots used in this job.
memory_slot_size (int):
Size of each memory slot if the output is Level 0.
rep_time (int): repetition time of the experiment in μs.
The delay between experiments will be rep_time.
Must be from the list provided by the device.
parameter_binds (list[dict{Parameter: Value}]):
List of Parameter bindings over which the set of experiments will be
executed. Each list element (bind) should be of the form
{Parameter1: value1, Parameter2: value2, ...}. All binds will be
executed across all experiments, e.g. if parameter_binds is a
length-n list, and there are m experiments, a total of m x n
experiments will be run (one for each experiment/bind pair).
seed (int):
DEPRECATED in 0.8: use ``seed_simulator`` kwarg instead
config (dict):
DEPRECATED in 0.8: use run_config instead
run_config (dict):
extra arguments used to configure the run (e.g. for Aer configurable backends)
Refer to the backend documentation for details on these arguments
Returns:
Qobj: a qobj which can be run on a backend. Depending on the type of input,
this will be either a QasmQobj or a PulseQobj.
Raises:
QiskitError: if the input cannot be interpreted as either circuits or schedules
"""
# deprecation matter
if config:
warnings.warn('config is not used anymore. Set all configs in '
'run_config.', DeprecationWarning)
run_config = run_config or config
if seed:
warnings.warn('seed is deprecated in favor of seed_simulator.', DeprecationWarning)
seed_simulator = seed_simulator or seed
# Get RunConfig(s) that will be inserted in Qobj to configure the run
experiments = experiments if isinstance(experiments, list) else [experiments]
qobj_id, qobj_header, run_config = _parse_run_args(backend, qobj_id, qobj_header,
shots, memory, max_credits, seed_simulator,
default_qubit_los, default_meas_los,
schedule_los, meas_level, meas_return,
memory_slots, memory_slot_size, rep_time,
parameter_binds, **run_config)
# assemble either circuits or schedules
if all(isinstance(exp, QuantumCircuit) for exp in experiments):
# If circuits are parameterized, bind parameters and remove from run_config
bound_experiments, run_config = _expand_parameters(circuits=experiments,
run_config=run_config)
return assemble_circuits(circuits=bound_experiments, qobj_id=qobj_id,
qobj_header=qobj_header, run_config=run_config)
elif all(isinstance(exp, Schedule) for exp in experiments):
return assemble_schedules(schedules=experiments, qobj_id=qobj_id,
qobj_header=qobj_header, run_config=run_config)
else:
raise QiskitError("bad input to assemble() function; "
"must be either circuits or schedules") | def function[assemble, parameter[experiments, backend, qobj_id, qobj_header, shots, memory, max_credits, seed_simulator, default_qubit_los, default_meas_los, schedule_los, meas_level, meas_return, memory_slots, memory_slot_size, rep_time, parameter_binds, config, seed]]:
constant[Assemble a list of circuits or pulse schedules into a Qobj.
This function serializes the payloads, which could be either circuits or schedules,
to create Qobj "experiments". It further annotates the experiment payload with
header and configurations.
Args:
experiments (QuantumCircuit or list[QuantumCircuit] or Schedule or list[Schedule]):
Circuit(s) or pulse schedule(s) to execute
backend (BaseBackend):
If set, some runtime options are automatically grabbed from
backend.configuration() and backend.defaults().
If any other option is explicitly set (e.g. rep_rate), it
will override the backend's.
            If any other option is set in the run_config, it will
also override the backend's.
qobj_id (str):
String identifier to annotate the Qobj
qobj_header (QobjHeader or dict):
User input that will be inserted in Qobj header, and will also be
copied to the corresponding Result header. Headers do not affect the run.
shots (int):
            Number of repetitions of each circuit, for sampling. Default: 1024
memory (bool):
If True, per-shot measurement bitstrings are returned as well
(provided the backend supports it). For OpenPulse jobs, only
measurement level 2 supports this option. Default: False
max_credits (int):
Maximum credits to spend on job. Default: 10
seed_simulator (int):
Random seed to control sampling, for when backend is a simulator
default_qubit_los (list):
List of default qubit lo frequencies
default_meas_los (list):
List of default meas lo frequencies
schedule_los (None or list[Union[Dict[PulseChannel, float], LoConfig]] or
Union[Dict[PulseChannel, float], LoConfig]):
Experiment LO configurations
meas_level (int):
Set the appropriate level of the measurement output for pulse experiments.
meas_return (str):
Level of measurement data for the backend to return
For `meas_level` 0 and 1:
"single" returns information from every shot.
"avg" returns average measurement output (averaged over number of shots).
memory_slots (int):
Number of classical memory slots used in this job.
memory_slot_size (int):
Size of each memory slot if the output is Level 0.
rep_time (int): repetition time of the experiment in μs.
The delay between experiments will be rep_time.
Must be from the list provided by the device.
parameter_binds (list[dict{Parameter: Value}]):
List of Parameter bindings over which the set of experiments will be
executed. Each list element (bind) should be of the form
{Parameter1: value1, Parameter2: value2, ...}. All binds will be
executed across all experiments, e.g. if parameter_binds is a
length-n list, and there are m experiments, a total of m x n
experiments will be run (one for each experiment/bind pair).
seed (int):
DEPRECATED in 0.8: use ``seed_simulator`` kwarg instead
config (dict):
DEPRECATED in 0.8: use run_config instead
run_config (dict):
extra arguments used to configure the run (e.g. for Aer configurable backends)
Refer to the backend documentation for details on these arguments
Returns:
Qobj: a qobj which can be run on a backend. Depending on the type of input,
this will be either a QasmQobj or a PulseQobj.
Raises:
QiskitError: if the input cannot be interpreted as either circuits or schedules
]
if name[config] begin[:]
call[name[warnings].warn, parameter[constant[config is not used anymore. Set all configs in run_config.], name[DeprecationWarning]]]
variable[run_config] assign[=] <ast.BoolOp object at 0x7da1b03963e0>
if name[seed] begin[:]
call[name[warnings].warn, parameter[constant[seed is deprecated in favor of seed_simulator.], name[DeprecationWarning]]]
variable[seed_simulator] assign[=] <ast.BoolOp object at 0x7da1b03968f0>
variable[experiments] assign[=] <ast.IfExp object at 0x7da1b0394130>
<ast.Tuple object at 0x7da1b03941f0> assign[=] call[name[_parse_run_args], parameter[name[backend], name[qobj_id], name[qobj_header], name[shots], name[memory], name[max_credits], name[seed_simulator], name[default_qubit_los], name[default_meas_los], name[schedule_los], name[meas_level], name[meas_return], name[memory_slots], name[memory_slot_size], name[rep_time], name[parameter_binds]]]
if call[name[all], parameter[<ast.GeneratorExp object at 0x7da1b0395210>]] begin[:]
<ast.Tuple object at 0x7da1b0394f40> assign[=] call[name[_expand_parameters], parameter[]]
return[call[name[assemble_circuits], parameter[]]] | keyword[def] identifier[assemble] ( identifier[experiments] ,
identifier[backend] = keyword[None] ,
identifier[qobj_id] = keyword[None] , identifier[qobj_header] = keyword[None] ,
identifier[shots] = literal[int] , identifier[memory] = keyword[False] , identifier[max_credits] = keyword[None] , identifier[seed_simulator] = keyword[None] ,
identifier[default_qubit_los] = keyword[None] , identifier[default_meas_los] = keyword[None] ,
identifier[schedule_los] = keyword[None] , identifier[meas_level] = literal[int] , identifier[meas_return] = literal[string] ,
identifier[memory_slots] = keyword[None] , identifier[memory_slot_size] = literal[int] , identifier[rep_time] = keyword[None] , identifier[parameter_binds] = keyword[None] ,
identifier[config] = keyword[None] , identifier[seed] = keyword[None] ,
** identifier[run_config] ):
literal[string]
keyword[if] identifier[config] :
identifier[warnings] . identifier[warn] ( literal[string]
literal[string] , identifier[DeprecationWarning] )
identifier[run_config] = identifier[run_config] keyword[or] identifier[config]
keyword[if] identifier[seed] :
identifier[warnings] . identifier[warn] ( literal[string] , identifier[DeprecationWarning] )
identifier[seed_simulator] = identifier[seed_simulator] keyword[or] identifier[seed]
identifier[experiments] = identifier[experiments] keyword[if] identifier[isinstance] ( identifier[experiments] , identifier[list] ) keyword[else] [ identifier[experiments] ]
identifier[qobj_id] , identifier[qobj_header] , identifier[run_config] = identifier[_parse_run_args] ( identifier[backend] , identifier[qobj_id] , identifier[qobj_header] ,
identifier[shots] , identifier[memory] , identifier[max_credits] , identifier[seed_simulator] ,
identifier[default_qubit_los] , identifier[default_meas_los] ,
identifier[schedule_los] , identifier[meas_level] , identifier[meas_return] ,
identifier[memory_slots] , identifier[memory_slot_size] , identifier[rep_time] ,
identifier[parameter_binds] ,** identifier[run_config] )
keyword[if] identifier[all] ( identifier[isinstance] ( identifier[exp] , identifier[QuantumCircuit] ) keyword[for] identifier[exp] keyword[in] identifier[experiments] ):
identifier[bound_experiments] , identifier[run_config] = identifier[_expand_parameters] ( identifier[circuits] = identifier[experiments] ,
identifier[run_config] = identifier[run_config] )
keyword[return] identifier[assemble_circuits] ( identifier[circuits] = identifier[bound_experiments] , identifier[qobj_id] = identifier[qobj_id] ,
identifier[qobj_header] = identifier[qobj_header] , identifier[run_config] = identifier[run_config] )
keyword[elif] identifier[all] ( identifier[isinstance] ( identifier[exp] , identifier[Schedule] ) keyword[for] identifier[exp] keyword[in] identifier[experiments] ):
keyword[return] identifier[assemble_schedules] ( identifier[schedules] = identifier[experiments] , identifier[qobj_id] = identifier[qobj_id] ,
identifier[qobj_header] = identifier[qobj_header] , identifier[run_config] = identifier[run_config] )
keyword[else] :
keyword[raise] identifier[QiskitError] ( literal[string]
literal[string] ) | def assemble(experiments, backend=None, qobj_id=None, qobj_header=None, shots=1024, memory=False, max_credits=None, seed_simulator=None, default_qubit_los=None, default_meas_los=None, schedule_los=None, meas_level=2, meas_return='avg', memory_slots=None, memory_slot_size=100, rep_time=None, parameter_binds=None, config=None, seed=None, **run_config): # common run options
    """Assemble a list of circuits or pulse schedules into a Qobj.

    This function serializes the payloads, which could be either circuits or schedules,
    to create Qobj "experiments". It further annotates the experiment payload with
    header and configurations.

    Args:
        experiments (QuantumCircuit or list[QuantumCircuit] or Schedule or list[Schedule]):
            Circuit(s) or pulse schedule(s) to execute

        backend (BaseBackend):
            If set, some runtime options are automatically grabbed from
            backend.configuration() and backend.defaults().
            If any other option is explicitly set (e.g. rep_rate), it
            will override the backend's.
            If any other option is set in the run_config, it will
            also override the backend's.

        qobj_id (str):
            String identifier to annotate the Qobj

        qobj_header (QobjHeader or dict):
            User input that will be inserted in Qobj header, and will also be
            copied to the corresponding Result header. Headers do not affect the run.

        shots (int):
            Number of repetitions of each circuit, for sampling. Default: 1024

        memory (bool):
            If True, per-shot measurement bitstrings are returned as well
            (provided the backend supports it). For OpenPulse jobs, only
            measurement level 2 supports this option. Default: False

        max_credits (int):
            Maximum credits to spend on job. Default: 10

        seed_simulator (int):
            Random seed to control sampling, for when backend is a simulator

        default_qubit_los (list):
            List of default qubit lo frequencies

        default_meas_los (list):
            List of default meas lo frequencies

        schedule_los (None or list[Union[Dict[PulseChannel, float], LoConfig]] or
                      Union[Dict[PulseChannel, float], LoConfig]):
            Experiment LO configurations

        meas_level (int):
            Set the appropriate level of the measurement output for pulse experiments.

        meas_return (str):
            Level of measurement data for the backend to return
            For `meas_level` 0 and 1:
                "single" returns information from every shot.
                "avg" returns average measurement output (averaged over number of shots).

        memory_slots (int):
            Number of classical memory slots used in this job.

        memory_slot_size (int):
            Size of each memory slot if the output is Level 0.

        rep_time (int): repetition time of the experiment in μs.
            The delay between experiments will be rep_time.
            Must be from the list provided by the device.

        parameter_binds (list[dict{Parameter: Value}]):
            List of Parameter bindings over which the set of experiments will be
            executed. Each list element (bind) should be of the form
            {Parameter1: value1, Parameter2: value2, ...}. All binds will be
            executed across all experiments, e.g. if parameter_binds is a
            length-n list, and there are m experiments, a total of m x n
            experiments will be run (one for each experiment/bind pair).

        seed (int):
            DEPRECATED in 0.8: use ``seed_simulator`` kwarg instead

        config (dict):
            DEPRECATED in 0.8: use run_config instead

        run_config (dict):
            extra arguments used to configure the run (e.g. for Aer configurable backends)
            Refer to the backend documentation for details on these arguments

    Returns:
        Qobj: a qobj which can be run on a backend. Depending on the type of input,
            this will be either a QasmQobj or a PulseQobj.

    Raises:
        QiskitError: if the input cannot be interpreted as either circuits or schedules
    """
# deprecation matter
if config:
warnings.warn('config is not used anymore. Set all configs in run_config.', DeprecationWarning)
run_config = run_config or config # depends on [control=['if'], data=[]]
if seed:
warnings.warn('seed is deprecated in favor of seed_simulator.', DeprecationWarning)
seed_simulator = seed_simulator or seed # depends on [control=['if'], data=[]]
# Get RunConfig(s) that will be inserted in Qobj to configure the run
experiments = experiments if isinstance(experiments, list) else [experiments]
(qobj_id, qobj_header, run_config) = _parse_run_args(backend, qobj_id, qobj_header, shots, memory, max_credits, seed_simulator, default_qubit_los, default_meas_los, schedule_los, meas_level, meas_return, memory_slots, memory_slot_size, rep_time, parameter_binds, **run_config)
# assemble either circuits or schedules
if all((isinstance(exp, QuantumCircuit) for exp in experiments)):
# If circuits are parameterized, bind parameters and remove from run_config
(bound_experiments, run_config) = _expand_parameters(circuits=experiments, run_config=run_config)
return assemble_circuits(circuits=bound_experiments, qobj_id=qobj_id, qobj_header=qobj_header, run_config=run_config) # depends on [control=['if'], data=[]]
elif all((isinstance(exp, Schedule) for exp in experiments)):
return assemble_schedules(schedules=experiments, qobj_id=qobj_id, qobj_header=qobj_header, run_config=run_config) # depends on [control=['if'], data=[]]
else:
raise QiskitError('bad input to assemble() function; must be either circuits or schedules') |
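# A usage sketch, assuming a Qiskit Terra release of the same era as this
# function (the import path and circuit API below come from that library):
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)
qc.h(0)
qc.cx(0, 1)
qc.measure([0, 1], [0, 1])
qobj = assemble(qc, shots=2048, memory=True)   # circuits in -> QasmQobj out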
def exclude_times(self, *tuples):
"""Adds multiple excluded times by tuple of (start, end, days) or by
TimeRange instance.
``start`` and ``end`` are in military integer times (e.g. - 1200 1430).
``days`` is a collection of integers or strings of fully-spelt, lowercased days
of the week.
"""
for item in tuples:
if isinstance(item, TimeRange):
self._excluded_times.append(item)
else:
self.exclude_time(*item)
return self | def function[exclude_times, parameter[self]]:
constant[Adds multiple excluded times by tuple of (start, end, days) or by
TimeRange instance.
``start`` and ``end`` are in military integer times (e.g. - 1200 1430).
``days`` is a collection of integers or strings of fully-spelt, lowercased days
of the week.
]
for taget[name[item]] in starred[name[tuples]] begin[:]
if call[name[isinstance], parameter[name[item], name[TimeRange]]] begin[:]
call[name[self]._excluded_times.append, parameter[name[item]]]
return[name[self]] | keyword[def] identifier[exclude_times] ( identifier[self] ,* identifier[tuples] ):
literal[string]
keyword[for] identifier[item] keyword[in] identifier[tuples] :
keyword[if] identifier[isinstance] ( identifier[item] , identifier[TimeRange] ):
identifier[self] . identifier[_excluded_times] . identifier[append] ( identifier[item] )
keyword[else] :
identifier[self] . identifier[exclude_time] (* identifier[item] )
keyword[return] identifier[self] | def exclude_times(self, *tuples):
"""Adds multiple excluded times by tuple of (start, end, days) or by
TimeRange instance.
``start`` and ``end`` are in military integer times (e.g. - 1200 1430).
``days`` is a collection of integers or strings of fully-spelt, lowercased days
of the week.
"""
for item in tuples:
if isinstance(item, TimeRange):
self._excluded_times.append(item) # depends on [control=['if'], data=[]]
else:
self.exclude_time(*item) # depends on [control=['for'], data=['item']]
return self |
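# A self-contained sketch of the tuple-vs-TimeRange dispatch above; the
# TimeRange constructor and the exclude_time() signature are assumptions
# for the demo, and `exclude_times` is the function defined above.
class TimeRange:
    def __init__(self, start, end, days):
        self.start, self.end, self.days = start, end, days

class Schedule:
    def __init__(self):
        self._excluded_times = []
    def exclude_time(self, start, end, days):
        self._excluded_times.append(TimeRange(start, end, days))
    exclude_times = exclude_times  # reuse the module-level function above

s = Schedule()
s.exclude_times((1200, 1430, ['monday']), TimeRange(900, 950, ['friday']))
assert len(s._excluded_times) == 2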
def _append_html_element(self, item, element, html, glue=" ",
after=True):
"""Appends an html value after or before the element in the item dict
:param item: dictionary that represents an analysis row
        :param element: id of the element before or after which the html
            must be added
        :param html: element to append
        :param glue: glue to use for appending
        :param after: whether the html must be added after (True) or
            before (False) the element"""
position = after and 'after' or 'before'
item[position] = item.get(position, {})
original = item[position].get(element, '')
if not original:
item[position][element] = html
return
item[position][element] = glue.join([original, html]) | def function[_append_html_element, parameter[self, item, element, html, glue, after]]:
constant[Appends an html value after or before the element in the item dict
:param item: dictionary that represents an analysis row
    :param element: id of the element before or after which the html
        must be added
    :param html: element to append
    :param glue: glue to use for appending
    :param after: whether the html must be added after (True) or
        before (False) the element]
variable[position] assign[=] <ast.BoolOp object at 0x7da2043479a0>
call[name[item]][name[position]] assign[=] call[name[item].get, parameter[name[position], dictionary[[], []]]]
variable[original] assign[=] call[call[name[item]][name[position]].get, parameter[name[element], constant[]]]
if <ast.UnaryOp object at 0x7da18fe90880> begin[:]
call[call[name[item]][name[position]]][name[element]] assign[=] name[html]
return[None]
call[call[name[item]][name[position]]][name[element]] assign[=] call[name[glue].join, parameter[list[[<ast.Name object at 0x7da18fe92860>, <ast.Name object at 0x7da18fe92ec0>]]]] | keyword[def] identifier[_append_html_element] ( identifier[self] , identifier[item] , identifier[element] , identifier[html] , identifier[glue] = literal[string] ,
identifier[after] = keyword[True] ):
literal[string]
identifier[position] = identifier[after] keyword[and] literal[string] keyword[or] literal[string]
identifier[item] [ identifier[position] ]= identifier[item] . identifier[get] ( identifier[position] ,{})
identifier[original] = identifier[item] [ identifier[position] ]. identifier[get] ( identifier[element] , literal[string] )
keyword[if] keyword[not] identifier[original] :
identifier[item] [ identifier[position] ][ identifier[element] ]= identifier[html]
keyword[return]
identifier[item] [ identifier[position] ][ identifier[element] ]= identifier[glue] . identifier[join] ([ identifier[original] , identifier[html] ]) | def _append_html_element(self, item, element, html, glue=' ', after=True):
"""Appends an html value after or before the element in the item dict
:param item: dictionary that represents an analysis row
    :param element: id of the element before or after which the html
        must be added
    :param html: element to append
    :param glue: glue to use for appending
    :param after: whether the html must be added after (True) or
        before (False) the element"""
position = after and 'after' or 'before'
item[position] = item.get(position, {})
original = item[position].get(element, '')
if not original:
item[position][element] = html
return # depends on [control=['if'], data=[]]
item[position][element] = glue.join([original, html]) |
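# A sketch of how the helper accumulates html on the item dict; since the
# function above never touches `self`, None suffices for the demo call.
item = {}
_append_html_element(None, item, 'Result', '<em>a</em>')
_append_html_element(None, item, 'Result', '<em>b</em>')
assert item['after']['Result'] == '<em>a</em> <em>b</em>'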
def _pep425_get_abi():
"""
:return:
A unicode string of the system abi. Will be something like: "cp27m",
"cp33m", etc.
"""
try:
soabi = sysconfig.get_config_var('SOABI')
if soabi:
if soabi.startswith('cpython-'):
return 'cp%s' % soabi.split('-')[1]
return soabi.replace('.', '_').replace('-', '_')
except (IOError, NameError):
pass
impl = _pep425_implementation()
suffix = ''
if impl == 'cp':
suffix += 'm'
if sys.maxunicode == 0x10ffff and sys.version_info < (3, 3):
suffix += 'u'
return '%s%s%s' % (impl, ''.join(map(str_cls, _pep425_version())), suffix) | def function[_pep425_get_abi, parameter[]]:
constant[
:return:
A unicode string of the system abi. Will be something like: "cp27m",
"cp33m", etc.
]
<ast.Try object at 0x7da1b08aee60>
variable[impl] assign[=] call[name[_pep425_implementation], parameter[]]
variable[suffix] assign[=] constant[]
if compare[name[impl] equal[==] constant[cp]] begin[:]
<ast.AugAssign object at 0x7da20c794c10>
if <ast.BoolOp object at 0x7da20c795d80> begin[:]
<ast.AugAssign object at 0x7da20c7957b0>
return[binary_operation[constant[%s%s%s] <ast.Mod object at 0x7da2590d6920> tuple[[<ast.Name object at 0x7da20c7962f0>, <ast.Call object at 0x7da20c795d50>, <ast.Name object at 0x7da20c796620>]]]] | keyword[def] identifier[_pep425_get_abi] ():
literal[string]
keyword[try] :
identifier[soabi] = identifier[sysconfig] . identifier[get_config_var] ( literal[string] )
keyword[if] identifier[soabi] :
keyword[if] identifier[soabi] . identifier[startswith] ( literal[string] ):
keyword[return] literal[string] % identifier[soabi] . identifier[split] ( literal[string] )[ literal[int] ]
keyword[return] identifier[soabi] . identifier[replace] ( literal[string] , literal[string] ). identifier[replace] ( literal[string] , literal[string] )
keyword[except] ( identifier[IOError] , identifier[NameError] ):
keyword[pass]
identifier[impl] = identifier[_pep425_implementation] ()
identifier[suffix] = literal[string]
keyword[if] identifier[impl] == literal[string] :
identifier[suffix] += literal[string]
keyword[if] identifier[sys] . identifier[maxunicode] == literal[int] keyword[and] identifier[sys] . identifier[version_info] <( literal[int] , literal[int] ):
identifier[suffix] += literal[string]
keyword[return] literal[string] %( identifier[impl] , literal[string] . identifier[join] ( identifier[map] ( identifier[str_cls] , identifier[_pep425_version] ())), identifier[suffix] ) | def _pep425_get_abi():
"""
:return:
A unicode string of the system abi. Will be something like: "cp27m",
"cp33m", etc.
"""
try:
soabi = sysconfig.get_config_var('SOABI')
if soabi:
if soabi.startswith('cpython-'):
return 'cp%s' % soabi.split('-')[1] # depends on [control=['if'], data=[]]
return soabi.replace('.', '_').replace('-', '_') # depends on [control=['if'], data=[]] # depends on [control=['try'], data=[]]
except (IOError, NameError):
pass # depends on [control=['except'], data=[]]
impl = _pep425_implementation()
suffix = ''
if impl == 'cp':
suffix += 'm' # depends on [control=['if'], data=[]]
if sys.maxunicode == 1114111 and sys.version_info < (3, 3):
suffix += 'u' # depends on [control=['if'], data=[]]
return '%s%s%s' % (impl, ''.join(map(str_cls, _pep425_version())), suffix) |
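# A sketch of the SOABI normalisation branch with sample strings; these are
# illustrative values, not read from the running interpreter.
for soabi in ('cpython-37m-x86_64-linux-gnu', 'pypy-41'):
    if soabi.startswith('cpython-'):
        abi = 'cp%s' % soabi.split('-')[1]                # -> 'cp37m'
    else:
        abi = soabi.replace('.', '_').replace('-', '_')   # -> 'pypy_41'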
def _did_receive_response(self, connection):
""" Receive a response from the connection """
if connection.has_timeouted:
bambou_logger.info("NURESTConnection has timeout.")
return
has_callbacks = connection.has_callbacks()
should_post = not has_callbacks
if connection.handle_response_for_connection(should_post=should_post) and has_callbacks:
callback = connection.callbacks['local']
callback(connection) | def function[_did_receive_response, parameter[self, connection]]:
constant[ Receive a response from the connection ]
if name[connection].has_timeouted begin[:]
call[name[bambou_logger].info, parameter[constant[NURESTConnection has timeout.]]]
return[None]
variable[has_callbacks] assign[=] call[name[connection].has_callbacks, parameter[]]
variable[should_post] assign[=] <ast.UnaryOp object at 0x7da1b0fc7280>
if <ast.BoolOp object at 0x7da1b0d1e440> begin[:]
variable[callback] assign[=] call[name[connection].callbacks][constant[local]]
call[name[callback], parameter[name[connection]]] | keyword[def] identifier[_did_receive_response] ( identifier[self] , identifier[connection] ):
literal[string]
keyword[if] identifier[connection] . identifier[has_timeouted] :
identifier[bambou_logger] . identifier[info] ( literal[string] )
keyword[return]
identifier[has_callbacks] = identifier[connection] . identifier[has_callbacks] ()
identifier[should_post] = keyword[not] identifier[has_callbacks]
keyword[if] identifier[connection] . identifier[handle_response_for_connection] ( identifier[should_post] = identifier[should_post] ) keyword[and] identifier[has_callbacks] :
identifier[callback] = identifier[connection] . identifier[callbacks] [ literal[string] ]
identifier[callback] ( identifier[connection] ) | def _did_receive_response(self, connection):
""" Receive a response from the connection """
if connection.has_timeouted:
bambou_logger.info('NURESTConnection has timeout.')
return # depends on [control=['if'], data=[]]
has_callbacks = connection.has_callbacks()
should_post = not has_callbacks
if connection.handle_response_for_connection(should_post=should_post) and has_callbacks:
callback = connection.callbacks['local']
callback(connection) # depends on [control=['if'], data=[]] |
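# A self-contained sketch with a stub connection; the attribute and method
# names mirror the ones the handler touches, everything else is invented.
fired = []
class StubConnection:
    has_timeouted = False
    callbacks = {'local': lambda connection: fired.append(connection)}
    def has_callbacks(self):
        return True
    def handle_response_for_connection(self, should_post):
        return True

conn = StubConnection()
if not conn.has_timeouted:
    has_callbacks = conn.has_callbacks()
    if conn.handle_response_for_connection(should_post=not has_callbacks) and has_callbacks:
        conn.callbacks['local'](conn)
assert fired == [conn]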
def validate(self):
"""Validate keys and values.
Check to make sure every key used is a valid Vorbis key, and
that every value used is a valid Unicode or UTF-8 string. If
any invalid keys or values are found, a ValueError is raised.
In Python 3 all keys and values have to be a string.
"""
if not isinstance(self.vendor, text_type):
if PY3:
raise ValueError("vendor needs to be str")
try:
self.vendor.decode('utf-8')
except UnicodeDecodeError:
raise ValueError
for key, value in self:
try:
if not is_valid_key(key):
raise ValueError("%r is not a valid key" % key)
except TypeError:
raise ValueError("%r is not a valid key" % key)
if not isinstance(value, text_type):
if PY3:
err = "%r needs to be str for key %r" % (value, key)
raise ValueError(err)
try:
value.decode("utf-8")
except Exception:
err = "%r is not a valid value for key %r" % (value, key)
raise ValueError(err)
return True | def function[validate, parameter[self]]:
constant[Validate keys and values.
Check to make sure every key used is a valid Vorbis key, and
that every value used is a valid Unicode or UTF-8 string. If
any invalid keys or values are found, a ValueError is raised.
In Python 3 all keys and values have to be a string.
]
if <ast.UnaryOp object at 0x7da1b20a95d0> begin[:]
if name[PY3] begin[:]
<ast.Raise object at 0x7da1b20aa8c0>
<ast.Try object at 0x7da1b20a82e0>
for taget[tuple[[<ast.Name object at 0x7da1b20ab0a0>, <ast.Name object at 0x7da1b20a9a80>]]] in starred[name[self]] begin[:]
<ast.Try object at 0x7da1b20a9720>
if <ast.UnaryOp object at 0x7da1b20aaf20> begin[:]
if name[PY3] begin[:]
variable[err] assign[=] binary_operation[constant[%r needs to be str for key %r] <ast.Mod object at 0x7da2590d6920> tuple[[<ast.Name object at 0x7da1b20a8700>, <ast.Name object at 0x7da1b20a9fc0>]]]
<ast.Raise object at 0x7da1b20ab6a0>
<ast.Try object at 0x7da1b20ab940>
return[constant[True]] | keyword[def] identifier[validate] ( identifier[self] ):
literal[string]
keyword[if] keyword[not] identifier[isinstance] ( identifier[self] . identifier[vendor] , identifier[text_type] ):
keyword[if] identifier[PY3] :
keyword[raise] identifier[ValueError] ( literal[string] )
keyword[try] :
identifier[self] . identifier[vendor] . identifier[decode] ( literal[string] )
keyword[except] identifier[UnicodeDecodeError] :
keyword[raise] identifier[ValueError]
keyword[for] identifier[key] , identifier[value] keyword[in] identifier[self] :
keyword[try] :
keyword[if] keyword[not] identifier[is_valid_key] ( identifier[key] ):
keyword[raise] identifier[ValueError] ( literal[string] % identifier[key] )
keyword[except] identifier[TypeError] :
keyword[raise] identifier[ValueError] ( literal[string] % identifier[key] )
keyword[if] keyword[not] identifier[isinstance] ( identifier[value] , identifier[text_type] ):
keyword[if] identifier[PY3] :
identifier[err] = literal[string] %( identifier[value] , identifier[key] )
keyword[raise] identifier[ValueError] ( identifier[err] )
keyword[try] :
identifier[value] . identifier[decode] ( literal[string] )
keyword[except] identifier[Exception] :
identifier[err] = literal[string] %( identifier[value] , identifier[key] )
keyword[raise] identifier[ValueError] ( identifier[err] )
keyword[return] keyword[True] | def validate(self):
"""Validate keys and values.
Check to make sure every key used is a valid Vorbis key, and
that every value used is a valid Unicode or UTF-8 string. If
any invalid keys or values are found, a ValueError is raised.
In Python 3 all keys and values have to be a string.
"""
if not isinstance(self.vendor, text_type):
if PY3:
raise ValueError('vendor needs to be str') # depends on [control=['if'], data=[]]
try:
self.vendor.decode('utf-8') # depends on [control=['try'], data=[]]
except UnicodeDecodeError:
raise ValueError # depends on [control=['except'], data=[]] # depends on [control=['if'], data=[]]
for (key, value) in self:
try:
if not is_valid_key(key):
raise ValueError('%r is not a valid key' % key) # depends on [control=['if'], data=[]] # depends on [control=['try'], data=[]]
except TypeError:
raise ValueError('%r is not a valid key' % key) # depends on [control=['except'], data=[]]
if not isinstance(value, text_type):
if PY3:
err = '%r needs to be str for key %r' % (value, key)
raise ValueError(err) # depends on [control=['if'], data=[]]
try:
value.decode('utf-8') # depends on [control=['try'], data=[]]
except Exception:
err = '%r is not a valid value for key %r' % (value, key)
raise ValueError(err) # depends on [control=['except'], data=[]] # depends on [control=['if'], data=[]] # depends on [control=['for'], data=[]]
return True |
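# A sketch of the key rule validate() relies on: per the Vorbis comment
# spec, keys must be ASCII 0x20-0x7D excluding '='. This approximates
# is_valid_key, which is defined elsewhere in the module.
def looks_like_valid_key(key):
    return bool(key) and all(0x20 <= ord(c) <= 0x7D and c != '=' for c in key)

assert looks_like_valid_key('TITLE')
assert not looks_like_valid_key('BAD=KEY')
assert not looks_like_valid_key('CAFÉ')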
def call(self, req, props=None):
"""
Executes a Barrister request and returns a response. If the request is a list, then the
        response will also be a list. If the request is an empty list, an error response is returned.
:Parameters:
req
The request. Either a list of dicts, or a single dict.
props
Application defined properties to set on RequestContext for use with filters.
For example: authentication headers. Must be a dict.
"""
resp = None
if self.log.isEnabledFor(logging.DEBUG):
self.log.debug("Request: %s" % str(req))
if isinstance(req, list):
if len(req) < 1:
resp = err_response(None, ERR_INVALID_REQ, "Invalid Request. Empty batch.")
else:
            resp = []
for r in req:
resp.append(self._call_and_format(r, props))
else:
resp = self._call_and_format(req, props)
if self.log.isEnabledFor(logging.DEBUG):
self.log.debug("Response: %s" % str(resp))
return resp | def function[call, parameter[self, req, props]]:
constant[
Executes a Barrister request and returns a response. If the request is a list, then the
    response will also be a list. If the request is an empty list, an error response is returned.
:Parameters:
req
The request. Either a list of dicts, or a single dict.
props
Application defined properties to set on RequestContext for use with filters.
For example: authentication headers. Must be a dict.
]
variable[resp] assign[=] constant[None]
if call[name[self].log.isEnabledFor, parameter[name[logging].DEBUG]] begin[:]
call[name[self].log.debug, parameter[binary_operation[constant[Request: %s] <ast.Mod object at 0x7da2590d6920> call[name[str], parameter[name[req]]]]]]
if call[name[isinstance], parameter[name[req], name[list]]] begin[:]
if compare[call[name[len], parameter[name[req]]] less[<] constant[1]] begin[:]
variable[resp] assign[=] call[name[err_response], parameter[constant[None], name[ERR_INVALID_REQ], constant[Invalid Request. Empty batch.]]]
if call[name[self].log.isEnabledFor, parameter[name[logging].DEBUG]] begin[:]
call[name[self].log.debug, parameter[binary_operation[constant[Response: %s] <ast.Mod object at 0x7da2590d6920> call[name[str], parameter[name[resp]]]]]]
return[name[resp]] | keyword[def] identifier[call] ( identifier[self] , identifier[req] , identifier[props] = keyword[None] ):
literal[string]
identifier[resp] = keyword[None]
keyword[if] identifier[self] . identifier[log] . identifier[isEnabledFor] ( identifier[logging] . identifier[DEBUG] ):
identifier[self] . identifier[log] . identifier[debug] ( literal[string] % identifier[str] ( identifier[req] ))
keyword[if] identifier[isinstance] ( identifier[req] , identifier[list] ):
keyword[if] identifier[len] ( identifier[req] )< literal[int] :
identifier[resp] = identifier[err_response] ( keyword[None] , identifier[ERR_INVALID_REQ] , literal[string] )
keyword[else] :
identifier[resp] =[]
keyword[for] identifier[r] keyword[in] identifier[req] :
identifier[resp] . identifier[append] ( identifier[self] . identifier[_call_and_format] ( identifier[r] , identifier[props] ))
keyword[else] :
identifier[resp] = identifier[self] . identifier[_call_and_format] ( identifier[req] , identifier[props] )
keyword[if] identifier[self] . identifier[log] . identifier[isEnabledFor] ( identifier[logging] . identifier[DEBUG] ):
identifier[self] . identifier[log] . identifier[debug] ( literal[string] % identifier[str] ( identifier[resp] ))
keyword[return] identifier[resp] | def call(self, req, props=None):
"""
Executes a Barrister request and returns a response. If the request is a list, then the
response will also be a list. If the request is an empty list, an invalid-request error response is returned.
:Parameters:
req
The request. Either a list of dicts, or a single dict.
props
Application defined properties to set on RequestContext for use with filters.
For example: authentication headers. Must be a dict.
"""
resp = None
if self.log.isEnabledFor(logging.DEBUG):
self.log.debug('Request: %s' % str(req)) # depends on [control=['if'], data=[]]
if isinstance(req, list):
if len(req) < 1:
resp = err_response(None, ERR_INVALID_REQ, 'Invalid Request. Empty batch.') # depends on [control=['if'], data=[]]
else:
resp = []
for r in req:
resp.append(self._call_and_format(r, props)) # depends on [control=['for'], data=['r']] # depends on [control=['if'], data=[]]
else:
resp = self._call_and_format(req, props)
if self.log.isEnabledFor(logging.DEBUG):
self.log.debug('Response: %s' % str(resp)) # depends on [control=['if'], data=[]]
return resp |
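A hedged usage sketch: server stands for whatever object exposes this call() method, and the request dict shape follows the JSON-RPC-like protocol implied by the code.

req = {"jsonrpc": "2.0", "id": 1, "method": "Calculator.add", "params": [1, 2]}
resp = server.call(req, props={"auth_token": "secret"})   # single dict in, dict out
batch = server.call([req, req])                           # list in, list out
empty = server.call([])                                   # invalid-request error response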
def build_graph (json_iter):
"""
construct the TextRank graph from parsed paragraphs
"""
global DEBUG, WordNode
graph = nx.DiGraph()
for meta in json_iter:
if DEBUG:
print(meta["graf"])
for pair in get_tiles(map(WordNode._make, meta["graf"])):
if DEBUG:
print(pair)
for word_id in pair:
if not graph.has_node(word_id):
graph.add_node(word_id)
try:
graph.edge[pair[0]][pair[1]]["weight"] += 1.0
except KeyError:
graph.add_edge(pair[0], pair[1], weight=1.0)
return graph | def function[build_graph, parameter[json_iter]]:
constant[
construct the TextRank graph from parsed paragraphs
]
<ast.Global object at 0x7da1b0191390>
variable[graph] assign[=] call[name[nx].DiGraph, parameter[]]
for taget[name[meta]] in starred[name[json_iter]] begin[:]
if name[DEBUG] begin[:]
call[name[print], parameter[call[name[meta]][constant[graf]]]]
for taget[name[pair]] in starred[call[name[get_tiles], parameter[call[name[map], parameter[name[WordNode]._make, call[name[meta]][constant[graf]]]]]]] begin[:]
if name[DEBUG] begin[:]
call[name[print], parameter[name[pair]]]
for taget[name[word_id]] in starred[name[pair]] begin[:]
if <ast.UnaryOp object at 0x7da1b0192fb0> begin[:]
call[name[graph].add_node, parameter[name[word_id]]]
<ast.Try object at 0x7da1b01909a0>
return[name[graph]] | keyword[def] identifier[build_graph] ( identifier[json_iter] ):
literal[string]
keyword[global] identifier[DEBUG] , identifier[WordNode]
identifier[graph] = identifier[nx] . identifier[DiGraph] ()
keyword[for] identifier[meta] keyword[in] identifier[json_iter] :
keyword[if] identifier[DEBUG] :
identifier[print] ( identifier[meta] [ literal[string] ])
keyword[for] identifier[pair] keyword[in] identifier[get_tiles] ( identifier[map] ( identifier[WordNode] . identifier[_make] , identifier[meta] [ literal[string] ])):
keyword[if] identifier[DEBUG] :
identifier[print] ( identifier[pair] )
keyword[for] identifier[word_id] keyword[in] identifier[pair] :
keyword[if] keyword[not] identifier[graph] . identifier[has_node] ( identifier[word_id] ):
identifier[graph] . identifier[add_node] ( identifier[word_id] )
keyword[try] :
identifier[graph] . identifier[edge] [ identifier[pair] [ literal[int] ]][ identifier[pair] [ literal[int] ]][ literal[string] ]+= literal[int]
keyword[except] identifier[KeyError] :
identifier[graph] . identifier[add_edge] ( identifier[pair] [ literal[int] ], identifier[pair] [ literal[int] ], identifier[weight] = literal[int] )
keyword[return] identifier[graph] | def build_graph(json_iter):
"""
construct the TextRank graph from parsed paragraphs
"""
global DEBUG, WordNode
graph = nx.DiGraph()
for meta in json_iter:
if DEBUG:
print(meta['graf']) # depends on [control=['if'], data=[]]
for pair in get_tiles(map(WordNode._make, meta['graf'])):
if DEBUG:
print(pair) # depends on [control=['if'], data=[]]
for word_id in pair:
if not graph.has_node(word_id):
graph.add_node(word_id) # depends on [control=['if'], data=[]] # depends on [control=['for'], data=['word_id']]
try:
graph.edge[pair[0]][pair[1]]['weight'] += 1.0 # depends on [control=['try'], data=[]]
except KeyError:
graph.add_edge(pair[0], pair[1], weight=1.0) # depends on [control=['except'], data=[]] # depends on [control=['for'], data=['pair']] # depends on [control=['for'], data=['meta']]
return graph |
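The try/except KeyError block above is the classic accumulate-or-create idiom for edge weights; graph.edge[u][v] is the networkx 1.x spelling. A self-contained sketch of the same idiom written for networkx >= 2.0:

import networkx as nx

g = nx.DiGraph()
for a, b in [("rank", "text"), ("rank", "text"), ("text", "graph")]:
    if g.has_edge(a, b):
        g[a][b]["weight"] += 1.0      # nx >= 2.0: edge data via g[a][b]
    else:
        g.add_edge(a, b, weight=1.0)
print(g["rank"]["text"]["weight"])    # 2.0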
def create_tenant(self, tenant_id, retentions=None):
"""
Create a tenant. Currently nothing can be set (to be fixed once the master
version of Hawkular-Metrics has a fixed implementation).
:param retentions: A set of retention settings, see Hawkular-Metrics documentation for more info
"""
item = { 'id': tenant_id }
if retentions is not None:
item['retentions'] = retentions
self._post(self._get_tenants_url(), json.dumps(item, indent=2)) | def function[create_tenant, parameter[self, tenant_id, retentions]]:
constant[
Create a tenant. Currently nothing can be set (to be fixed once the master
version of Hawkular-Metrics has a fixed implementation).
:param retentions: A set of retention settings, see Hawkular-Metrics documentation for more info
]
variable[item] assign[=] dictionary[[<ast.Constant object at 0x7da2044c0160>], [<ast.Name object at 0x7da2044c2140>]]
if compare[name[retentions] is_not constant[None]] begin[:]
call[name[item]][constant[retentions]] assign[=] name[retentions]
call[name[self]._post, parameter[call[name[self]._get_tenants_url, parameter[]], call[name[json].dumps, parameter[name[item]]]]] | keyword[def] identifier[create_tenant] ( identifier[self] , identifier[tenant_id] , identifier[retentions] = keyword[None] ):
literal[string]
identifier[item] ={ literal[string] : identifier[tenant_id] }
keyword[if] identifier[retentions] keyword[is] keyword[not] keyword[None] :
identifier[item] [ literal[string] ]= identifier[retentions]
identifier[self] . identifier[_post] ( identifier[self] . identifier[_get_tenants_url] (), identifier[json] . identifier[dumps] ( identifier[item] , identifier[indent] = literal[int] )) | def create_tenant(self, tenant_id, retentions=None):
"""
Create a tenant. Currently nothing can be set (to be fixed once the master
version of Hawkular-Metrics has a fixed implementation).
:param retentions: A set of retention settings, see Hawkular-Metrics documentation for more info
"""
item = {'id': tenant_id}
if retentions is not None:
item['retentions'] = retentions # depends on [control=['if'], data=['retentions']]
self._post(self._get_tenants_url(), json.dumps(item, indent=2)) |
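Usage might look like the following (client is assumed to be an instance of the surrounding Hawkular-Metrics client class; the retentions shape follows the Hawkular-Metrics documentation):

client.create_tenant('my-tenant')
client.create_tenant('metered-tenant',
                     retentions={'gauge': 7, 'availability': 30})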
def get(self):
'''Get a task from the queue when the token bucket allows it'''
if self.bucket.get() < 1:
return None
now = time.time()
self.mutex.acquire()
try:
task = self.priority_queue.get_nowait()
self.bucket.desc()
except Queue.Empty:
self.mutex.release()
return None
task.exetime = now + self.processing_timeout
self.processing.put(task)
self.mutex.release()
return task.taskid | def function[get, parameter[self]]:
constant[Get a task from the queue when the token bucket allows it]
if compare[call[name[self].bucket.get, parameter[]] less[<] constant[1]] begin[:]
return[constant[None]]
variable[now] assign[=] call[name[time].time, parameter[]]
call[name[self].mutex.acquire, parameter[]]
<ast.Try object at 0x7da1b1fe4490>
name[task].exetime assign[=] binary_operation[name[now] + name[self].processing_timeout]
call[name[self].processing.put, parameter[name[task]]]
call[name[self].mutex.release, parameter[]]
return[name[task].taskid] | keyword[def] identifier[get] ( identifier[self] ):
literal[string]
keyword[if] identifier[self] . identifier[bucket] . identifier[get] ()< literal[int] :
keyword[return] keyword[None]
identifier[now] = identifier[time] . identifier[time] ()
identifier[self] . identifier[mutex] . identifier[acquire] ()
keyword[try] :
identifier[task] = identifier[self] . identifier[priority_queue] . identifier[get_nowait] ()
identifier[self] . identifier[bucket] . identifier[desc] ()
keyword[except] identifier[Queue] . identifier[Empty] :
identifier[self] . identifier[mutex] . identifier[release] ()
keyword[return] keyword[None]
identifier[task] . identifier[exetime] = identifier[now] + identifier[self] . identifier[processing_timeout]
identifier[self] . identifier[processing] . identifier[put] ( identifier[task] )
identifier[self] . identifier[mutex] . identifier[release] ()
keyword[return] identifier[task] . identifier[taskid] | def get(self):
"""Get a task from queue when bucket available"""
if self.bucket.get() < 1:
return None # depends on [control=['if'], data=[]]
now = time.time()
self.mutex.acquire()
try:
task = self.priority_queue.get_nowait()
self.bucket.desc() # depends on [control=['try'], data=[]]
except Queue.Empty:
self.mutex.release()
return None # depends on [control=['except'], data=[]]
task.exetime = now + self.processing_timeout
self.processing.put(task)
self.mutex.release()
return task.taskid |
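A compressed, standalone sketch of the same rate-limited hand-off (names are stand-ins, not the real scheduler attributes): a token gate, a lock-guarded pop from the pending priority queue, and a deadline stamp before the task moves to the in-flight queue.

import threading
import time
try:
    import Queue                     # Python 2 naming, as in the code above
except ImportError:
    import queue as Queue

mutex = threading.Lock()
pending = Queue.PriorityQueue()
processing = Queue.Queue()
PROCESSING_TIMEOUT = 60.0

def get_task(tokens):
    if tokens < 1:                   # plays the role of self.bucket.get()
        return None
    with mutex:
        try:
            task = pending.get_nowait()
        except Queue.Empty:
            return None
        exetime = time.time() + PROCESSING_TIMEOUT   # re-queue deadline
        processing.put((exetime, task))
        return task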
def items(self):
"""
Returns a list of the named bitfields in the structure as
2-tuples of (key, value). Uses a clone so as to only read from
the underlying data once.
"""
temp = self.clone()
return [(f, getattr(temp, f)) for f in iter(self)] | def function[items, parameter[self]]:
constant[
Returns a list of the named bitfields in the structure as
2-tuples of (key, value). Uses a clone so as to only read from
the underlying data once.
]
variable[temp] assign[=] call[name[self].clone, parameter[]]
return[<ast.ListComp object at 0x7da1b23d58d0>] | keyword[def] identifier[items] ( identifier[self] ):
literal[string]
identifier[temp] = identifier[self] . identifier[clone] ()
keyword[return] [( identifier[f] , identifier[getattr] ( identifier[temp] , identifier[f] )) keyword[for] identifier[f] keyword[in] identifier[iter] ( identifier[self] )] | def items(self):
"""
Returns a list of the named bitfields in the structure as
2-tuples of (key, value). Uses a clone so as to only read from
the underlying data once.
"""
temp = self.clone()
return [(f, getattr(temp, f)) for f in iter(self)] |
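Usage is a plain snapshot iteration (StatusRegister is a hypothetical bitfield structure exposing this items() method):

flags = StatusRegister(raw_bytes)          # hypothetical bitfield structure
for name, value in flags.items():          # single pass over the raw data
    print('%s = %r' % (name, value))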
def peripheral_didUpdateValueForDescriptor_error_(self, peripheral, descriptor, error):
"""Called when descriptor value was read or updated."""
logger.debug('peripheral_didUpdateValueForDescriptor_error called')
# Stop if there was some kind of error.
if error is not None:
return
# Notify the device about the updated descriptor value.
device = device_list().get(peripheral)
if device is not None:
device._descriptor_changed(descriptor) | def function[peripheral_didUpdateValueForDescriptor_error_, parameter[self, peripheral, descriptor, error]]:
constant[Called when descriptor value was read or updated.]
call[name[logger].debug, parameter[constant[peripheral_didUpdateValueForDescriptor_error called]]]
if compare[name[error] is_not constant[None]] begin[:]
return[None]
variable[device] assign[=] call[call[name[device_list], parameter[]].get, parameter[name[peripheral]]]
if compare[name[device] is_not constant[None]] begin[:]
call[name[device]._descriptor_changed, parameter[name[descriptor]]] | keyword[def] identifier[peripheral_didUpdateValueForDescriptor_error_] ( identifier[self] , identifier[peripheral] , identifier[descriptor] , identifier[error] ):
literal[string]
identifier[logger] . identifier[debug] ( literal[string] )
keyword[if] identifier[error] keyword[is] keyword[not] keyword[None] :
keyword[return]
identifier[device] = identifier[device_list] (). identifier[get] ( identifier[peripheral] )
keyword[if] identifier[device] keyword[is] keyword[not] keyword[None] :
identifier[device] . identifier[_descriptor_changed] ( identifier[descriptor] ) | def peripheral_didUpdateValueForDescriptor_error_(self, peripheral, descriptor, error):
"""Called when descriptor value was read or updated."""
logger.debug('peripheral_didUpdateValueForDescriptor_error called')
# Stop if there was some kind of error.
if error is not None:
return # depends on [control=['if'], data=[]]
# Notify the device about the updated descriptor value.
device = device_list().get(peripheral)
if device is not None:
device._descriptor_changed(descriptor) # depends on [control=['if'], data=['device']] |
def sanitize_date(publication_date: str) -> str:
"""Sanitize lots of different date strings into ISO-8601."""
if re1.search(publication_date):
return datetime.strptime(publication_date, '%Y %b %d').strftime('%Y-%m-%d')
if re2.search(publication_date):
return datetime.strptime(publication_date, '%Y %b').strftime('%Y-%m-01')
if re3.search(publication_date):
return publication_date + "-01-01"
if re4.search(publication_date):
return datetime.strptime(publication_date[:-4], '%Y %b').strftime('%Y-%m-01')
s = re5.search(publication_date)
if s:
year, season = s.groups()
return '{}-{}-01'.format(year, season_map[season])
s = re6.search(publication_date)
if s:
return datetime.strptime(publication_date, '%Y %b %d-{}'.format(s.groups()[0])).strftime('%Y-%m-%d')
s = re7.search(publication_date)
if s:
return datetime.strptime(publication_date, '%Y %b %d-{}'.format(s.groups()[0])).strftime('%Y-%m-%d') | def function[sanitize_date, parameter[publication_date]]:
constant[Sanitize lots of different date strings into ISO-8601.]
if call[name[re1].search, parameter[name[publication_date]]] begin[:]
return[call[call[name[datetime].strptime, parameter[name[publication_date], constant[%Y %b %d]]].strftime, parameter[constant[%Y-%m-%d]]]]
if call[name[re2].search, parameter[name[publication_date]]] begin[:]
return[call[call[name[datetime].strptime, parameter[name[publication_date], constant[%Y %b]]].strftime, parameter[constant[%Y-%m-01]]]]
if call[name[re3].search, parameter[name[publication_date]]] begin[:]
return[binary_operation[name[publication_date] + constant[-01-01]]]
if call[name[re4].search, parameter[name[publication_date]]] begin[:]
return[call[call[name[datetime].strptime, parameter[call[name[publication_date]][<ast.Slice object at 0x7da18eb561d0>], constant[%Y %b]]].strftime, parameter[constant[%Y-%m-01]]]]
variable[s] assign[=] call[name[re5].search, parameter[name[publication_date]]]
if name[s] begin[:]
<ast.Tuple object at 0x7da18eb57070> assign[=] call[name[s].groups, parameter[]]
return[call[constant[{}-{}-01].format, parameter[name[year], call[name[season_map]][name[season]]]]]
variable[s] assign[=] call[name[re6].search, parameter[name[publication_date]]]
if name[s] begin[:]
return[call[call[name[datetime].strptime, parameter[name[publication_date], call[constant[%Y %b %d-{}].format, parameter[call[call[name[s].groups, parameter[]]][constant[0]]]]]].strftime, parameter[constant[%Y-%m-%d]]]]
variable[s] assign[=] call[name[re7].search, parameter[name[publication_date]]]
if name[s] begin[:]
return[call[call[name[datetime].strptime, parameter[name[publication_date], call[constant[%Y %b %d-{}].format, parameter[call[call[name[s].groups, parameter[]]][constant[0]]]]]].strftime, parameter[constant[%Y-%m-%d]]]] | keyword[def] identifier[sanitize_date] ( identifier[publication_date] : identifier[str] )-> identifier[str] :
literal[string]
keyword[if] identifier[re1] . identifier[search] ( identifier[publication_date] ):
keyword[return] identifier[datetime] . identifier[strptime] ( identifier[publication_date] , literal[string] ). identifier[strftime] ( literal[string] )
keyword[if] identifier[re2] . identifier[search] ( identifier[publication_date] ):
keyword[return] identifier[datetime] . identifier[strptime] ( identifier[publication_date] , literal[string] ). identifier[strftime] ( literal[string] )
keyword[if] identifier[re3] . identifier[search] ( identifier[publication_date] ):
keyword[return] identifier[publication_date] + literal[string]
keyword[if] identifier[re4] . identifier[search] ( identifier[publication_date] ):
keyword[return] identifier[datetime] . identifier[strptime] ( identifier[publication_date] [:- literal[int] ], literal[string] ). identifier[strftime] ( literal[string] )
identifier[s] = identifier[re5] . identifier[search] ( identifier[publication_date] )
keyword[if] identifier[s] :
identifier[year] , identifier[season] = identifier[s] . identifier[groups] ()
keyword[return] literal[string] . identifier[format] ( identifier[year] , identifier[season_map] [ identifier[season] ])
identifier[s] = identifier[re6] . identifier[search] ( identifier[publication_date] )
keyword[if] identifier[s] :
keyword[return] identifier[datetime] . identifier[strptime] ( identifier[publication_date] , literal[string] . identifier[format] ( identifier[s] . identifier[groups] ()[ literal[int] ])). identifier[strftime] ( literal[string] )
identifier[s] = identifier[re7] . identifier[search] ( identifier[publication_date] )
keyword[if] identifier[s] :
keyword[return] identifier[datetime] . identifier[strptime] ( identifier[publication_date] , literal[string] . identifier[format] ( identifier[s] . identifier[groups] ()[ literal[int] ])). identifier[strftime] ( literal[string] ) | def sanitize_date(publication_date: str) -> str:
"""Sanitize lots of different date strings into ISO-8601."""
if re1.search(publication_date):
return datetime.strptime(publication_date, '%Y %b %d').strftime('%Y-%m-%d') # depends on [control=['if'], data=[]]
if re2.search(publication_date):
return datetime.strptime(publication_date, '%Y %b').strftime('%Y-%m-01') # depends on [control=['if'], data=[]]
if re3.search(publication_date):
return publication_date + '-01-01' # depends on [control=['if'], data=[]]
if re4.search(publication_date):
return datetime.strptime(publication_date[:-4], '%Y %b').strftime('%Y-%m-01') # depends on [control=['if'], data=[]]
s = re5.search(publication_date)
if s:
(year, season) = s.groups()
return '{}-{}-01'.format(year, season_map[season]) # depends on [control=['if'], data=[]]
s = re6.search(publication_date)
if s:
return datetime.strptime(publication_date, '%Y %b %d-{}'.format(s.groups()[0])).strftime('%Y-%m-%d') # depends on [control=['if'], data=[]]
s = re7.search(publication_date)
if s:
return datetime.strptime(publication_date, '%Y %b %d-{}'.format(s.groups()[0])).strftime('%Y-%m-%d') # depends on [control=['if'], data=[]] |
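Assuming the module-level patterns re1..re7 and season_map cover the usual PubMed date shapes, the normalisation behaves roughly as follows (expected outputs shown as comments):

sanitize_date('2016 Mar 15')    # -> '2016-03-15'  (year month day)
sanitize_date('2016 Mar')       # -> '2016-03-01'  (year month)
sanitize_date('2016')           # -> '2016-01-01'  (year only)
sanitize_date('2016 Spring')    # -> '2016-<mm>-01', with <mm> from season_map
# Anything no pattern matches falls through, so the function returns None.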
def create_azure_storage_credentials(config, general_options):
# type: (dict, blobxfer.models.options.General) ->
# blobxfer.operations.azure.StorageCredentials
"""Create an Azure StorageCredentials object from configuration
:param dict config: config dict
:param blobxfer.models.options.General general_options: general options
:rtype: blobxfer.operations.azure.StorageCredentials
:return: credentials object
"""
creds = blobxfer.operations.azure.StorageCredentials(general_options)
endpoint = config['azure_storage'].get('endpoint') or 'core.windows.net'
for name in config['azure_storage']['accounts']:
key = config['azure_storage']['accounts'][name]
creds.add_storage_account(name, key, endpoint)
return creds | def function[create_azure_storage_credentials, parameter[config, general_options]]:
constant[Create an Azure StorageCredentials object from configuration
:param dict config: config dict
:param blobxfer.models.options.General general_options: general options
:rtype: blobxfer.operations.azure.StorageCredentials
:return: credentials object
]
variable[creds] assign[=] call[name[blobxfer].operations.azure.StorageCredentials, parameter[name[general_options]]]
variable[endpoint] assign[=] <ast.BoolOp object at 0x7da20e9b1930>
for taget[name[name]] in starred[call[call[name[config]][constant[azure_storage]]][constant[accounts]]] begin[:]
variable[key] assign[=] call[call[call[name[config]][constant[azure_storage]]][constant[accounts]]][name[name]]
call[name[creds].add_storage_account, parameter[name[name], name[key], name[endpoint]]]
return[name[creds]] | keyword[def] identifier[create_azure_storage_credentials] ( identifier[config] , identifier[general_options] ):
literal[string]
identifier[creds] = identifier[blobxfer] . identifier[operations] . identifier[azure] . identifier[StorageCredentials] ( identifier[general_options] )
identifier[endpoint] = identifier[config] [ literal[string] ]. identifier[get] ( literal[string] ) keyword[or] literal[string]
keyword[for] identifier[name] keyword[in] identifier[config] [ literal[string] ][ literal[string] ]:
identifier[key] = identifier[config] [ literal[string] ][ literal[string] ][ identifier[name] ]
identifier[creds] . identifier[add_storage_account] ( identifier[name] , identifier[key] , identifier[endpoint] )
keyword[return] identifier[creds] | def create_azure_storage_credentials(config, general_options):
# type: (dict, blobxfer.models.options.General) ->
# blobxfer.operations.azure.StorageCredentials
'Create an Azure StorageCredentials object from configuration\n    :param dict config: config dict\n    :param blobxfer.models.options.General general_options: general options\n    :rtype: blobxfer.operations.azure.StorageCredentials\n    :return: credentials object\n    '
creds = blobxfer.operations.azure.StorageCredentials(general_options)
endpoint = config['azure_storage'].get('endpoint') or 'core.windows.net'
for name in config['azure_storage']['accounts']:
key = config['azure_storage']['accounts'][name]
creds.add_storage_account(name, key, endpoint) # depends on [control=['for'], data=['name']]
return creds |
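The expected config shape is fully determined by the lookups in the code; a hedged example (account name and key are placeholders):

config = {
    'azure_storage': {
        'endpoint': 'core.windows.net',   # optional; this is the default
        'accounts': {
            'mystorageaccount': '<base64-account-key-or-sas-token>',
        },
    },
}
creds = create_azure_storage_credentials(config, general_options)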
def build(self, builder):
"""Build XML by appending to builder
:Example:
<FormData FormOID="MH" TransactionType="Update">
"""
params = dict(FormOID=self.formoid)
if self.transaction_type is not None:
params["TransactionType"] = self.transaction_type
if self.form_repeat_key is not None:
params["FormRepeatKey"] = str(self.form_repeat_key)
# mixins
self.mixin()
self.mixin_params(params)
builder.start("FormData", params)
# Ask children
for itemgroup in self.itemgroups:
itemgroup.build(builder, self.formoid)
if self.signature is not None:
self.signature.build(builder)
for annotation in self.annotations:
annotation.build(builder)
builder.end("FormData") | def function[build, parameter[self, builder]]:
constant[Build XML by appending to builder
:Example:
<FormData FormOID="MH" TransactionType="Update">
]
variable[params] assign[=] call[name[dict], parameter[]]
if compare[name[self].transaction_type is_not constant[None]] begin[:]
call[name[params]][constant[TransactionType]] assign[=] name[self].transaction_type
if compare[name[self].form_repeat_key is_not constant[None]] begin[:]
call[name[params]][constant[FormRepeatKey]] assign[=] call[name[str], parameter[name[self].form_repeat_key]]
call[name[self].mixin, parameter[]]
call[name[self].mixin_params, parameter[name[params]]]
call[name[builder].start, parameter[constant[FormData], name[params]]]
for taget[name[itemgroup]] in starred[name[self].itemgroups] begin[:]
call[name[itemgroup].build, parameter[name[builder], name[self].formoid]]
if compare[name[self].signature is_not constant[None]] begin[:]
call[name[self].signature.build, parameter[name[builder]]]
for taget[name[annotation]] in starred[name[self].annotations] begin[:]
call[name[annotation].build, parameter[name[builder]]]
call[name[builder].end, parameter[constant[FormData]]] | keyword[def] identifier[build] ( identifier[self] , identifier[builder] ):
literal[string]
identifier[params] = identifier[dict] ( identifier[FormOID] = identifier[self] . identifier[formoid] )
keyword[if] identifier[self] . identifier[transaction_type] keyword[is] keyword[not] keyword[None] :
identifier[params] [ literal[string] ]= identifier[self] . identifier[transaction_type]
keyword[if] identifier[self] . identifier[form_repeat_key] keyword[is] keyword[not] keyword[None] :
identifier[params] [ literal[string] ]= identifier[str] ( identifier[self] . identifier[form_repeat_key] )
identifier[self] . identifier[mixin] ()
identifier[self] . identifier[mixin_params] ( identifier[params] )
identifier[builder] . identifier[start] ( literal[string] , identifier[params] )
keyword[for] identifier[itemgroup] keyword[in] identifier[self] . identifier[itemgroups] :
identifier[itemgroup] . identifier[build] ( identifier[builder] , identifier[self] . identifier[formoid] )
keyword[if] identifier[self] . identifier[signature] keyword[is] keyword[not] keyword[None] :
identifier[self] . identifier[signature] . identifier[build] ( identifier[builder] )
keyword[for] identifier[annotation] keyword[in] identifier[self] . identifier[annotations] :
identifier[annotation] . identifier[build] ( identifier[builder] )
identifier[builder] . identifier[end] ( literal[string] ) | def build(self, builder):
"""Build XML by appending to builder
:Example:
<FormData FormOID="MH" TransactionType="Update">
"""
params = dict(FormOID=self.formoid)
if self.transaction_type is not None:
params['TransactionType'] = self.transaction_type # depends on [control=['if'], data=[]]
if self.form_repeat_key is not None:
params['FormRepeatKey'] = str(self.form_repeat_key) # depends on [control=['if'], data=[]]
# mixins
self.mixin()
self.mixin_params(params)
builder.start('FormData', params)
# Ask children
for itemgroup in self.itemgroups:
itemgroup.build(builder, self.formoid) # depends on [control=['for'], data=['itemgroup']]
if self.signature is not None:
self.signature.build(builder) # depends on [control=['if'], data=[]]
for annotation in self.annotations:
annotation.build(builder) # depends on [control=['for'], data=['annotation']]
builder.end('FormData') |
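builder follows the start/end protocol of xml.etree.ElementTree.TreeBuilder, so a plausible way to render a populated FormData instance (form below is assumed) is:

import xml.etree.ElementTree as ET

builder = ET.TreeBuilder()
form.build(builder)                        # form: a populated FormData (assumed)
element = builder.close()
print(ET.tostring(element))                # b'<FormData FormOID="MH" ...>'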
def publish(self, topic, payload, QoS):
"""
**Description**
Publish a new message to the desired topic with QoS.
**Syntax**
.. code:: python
# Publish a QoS0 message "myPayload" to topic "myTopic"
myAWSIoTMQTTClient.publish("myTopic", "myPayload", 0)
# Publish a QoS1 message "myPayloadWithQos1" to topic "myTopic/sub"
myAWSIoTMQTTClient.publish("myTopic/sub", "myPayloadWithQos1", 1)
**Parameters**
*topic* - Topic name to publish to.
*payload* - Payload to publish.
*QoS* - Quality of Service. Could be 0 or 1.
**Returns**
True if the publish request has been sent to paho. False if the request did not reach paho.
"""
return self._mqtt_core.publish(topic, payload, QoS, False) | def function[publish, parameter[self, topic, payload, QoS]]:
constant[
**Description**
Publish a new message to the desired topic with QoS.
**Syntax**
.. code:: python
# Publish a QoS0 message "myPayload" to topic "myTopic"
myAWSIoTMQTTClient.publish("myTopic", "myPayload", 0)
# Publish a QoS1 message "myPayloadWithQos1" to topic "myTopic/sub"
myAWSIoTMQTTClient.publish("myTopic/sub", "myPayloadWithQos1", 1)
**Parameters**
*topic* - Topic name to publish to.
*payload* - Payload to publish.
*QoS* - Quality of Service. Could be 0 or 1.
**Returns**
True if the publish request has been sent to paho. False if the request did not reach paho.
]
return[call[name[self]._mqtt_core.publish, parameter[name[topic], name[payload], name[QoS], constant[False]]]] | keyword[def] identifier[publish] ( identifier[self] , identifier[topic] , identifier[payload] , identifier[QoS] ):
literal[string]
keyword[return] identifier[self] . identifier[_mqtt_core] . identifier[publish] ( identifier[topic] , identifier[payload] , identifier[QoS] , keyword[False] ) | def publish(self, topic, payload, QoS):
"""
**Description**
Publish a new message to the desired topic with QoS.
**Syntax**
.. code:: python
# Publish a QoS0 message "myPayload" to topic "myTopic"
myAWSIoTMQTTClient.publish("myTopic", "myPayload", 0)
# Publish a QoS1 message "myPayloadWithQos1" to topic "myTopic/sub"
myAWSIoTMQTTClient.publish("myTopic/sub", "myPayloadWithQos1", 1)
**Parameters**
*topic* - Topic name to publish to.
*payload* - Payload to publish.
*QoS* - Quality of Service. Could be 0 or 1.
**Returns**
True if the publish request has been sent to paho. False if the request did not reach paho.
"""
return self._mqtt_core.publish(topic, payload, QoS, False) |
def _load_cpp4(self, filename):
"""Initializes Grid from a CCP4 file."""
ccp4 = CCP4.CCP4()
ccp4.read(filename)
grid, edges = ccp4.histogramdd()
self.__init__(grid=grid, edges=edges, metadata=self.metadata) | def function[_load_cpp4, parameter[self, filename]]:
constant[Initializes Grid from a CCP4 file.]
variable[ccp4] assign[=] call[name[CCP4].CCP4, parameter[]]
call[name[ccp4].read, parameter[name[filename]]]
<ast.Tuple object at 0x7da1b00fb250> assign[=] call[name[ccp4].histogramdd, parameter[]]
call[name[self].__init__, parameter[]] | keyword[def] identifier[_load_cpp4] ( identifier[self] , identifier[filename] ):
literal[string]
identifier[ccp4] = identifier[CCP4] . identifier[CCP4] ()
identifier[ccp4] . identifier[read] ( identifier[filename] )
identifier[grid] , identifier[edges] = identifier[ccp4] . identifier[histogramdd] ()
identifier[self] . identifier[__init__] ( identifier[grid] = identifier[grid] , identifier[edges] = identifier[edges] , identifier[metadata] = identifier[self] . identifier[metadata] ) | def _load_cpp4(self, filename):
"""Initializes Grid from a CCP4 file."""
ccp4 = CCP4.CCP4()
ccp4.read(filename)
(grid, edges) = ccp4.histogramdd()
self.__init__(grid=grid, edges=edges, metadata=self.metadata) |
def get_supported(versions=None, noarch=False):
"""Return a list of supported tags for each version specified in
`versions`.
:param versions: a list of string versions, of the form ["33", "32"],
or None. The first version will be assumed to support our ABI.
"""
supported = []
# Versions must be given with respect to the preference
if versions is None:
versions = []
major = sys.version_info[0]
# Support all previous minor Python versions.
for minor in range(sys.version_info[1], -1, -1):
versions.append(''.join(map(str, (major, minor))))
impl = get_abbr_impl()
abis = []
try:
soabi = sysconfig.get_config_var('SOABI')
except IOError as e: # Issue #1074
warnings.warn("{0}".format(e), RuntimeWarning)
soabi = None
if soabi and soabi.startswith('cpython-'):
abis[0:0] = ['cp' + soabi.split('-', 1)[-1]]
abi3s = set()
import imp
for suffix in imp.get_suffixes():
if suffix[0].startswith('.abi'):
abi3s.add(suffix[0].split('.', 2)[1])
abis.extend(sorted(list(abi3s)))
abis.append('none')
if not noarch:
arch = get_platform()
if sys.platform == 'darwin':
# support macosx-10.6-intel on macosx-10.9-x86_64
match = _osx_arch_pat.match(arch)
if match:
name, major, minor, actual_arch = match.groups()
actual_arches = [actual_arch]
if actual_arch in ('i386', 'ppc'):
actual_arches.append('fat')
if actual_arch in ('i386', 'x86_64'):
actual_arches.append('intel')
if actual_arch in ('i386', 'ppc', 'x86_64'):
actual_arches.append('fat3')
if actual_arch in ('ppc64', 'x86_64'):
actual_arches.append('fat64')
if actual_arch in ('i386', 'x86_64', 'intel', 'ppc', 'ppc64'):
actual_arches.append('universal')
tpl = '{0}_{1}_%i_%s'.format(name, major)
arches = []
for m in range(int(minor) + 1):
for a in actual_arches:
arches.append(tpl % (m, a))
else:
# arch pattern didn't match (?!)
arches = [arch]
else:
arches = [arch]
# Current version, current API (built specifically for our Python):
for abi in abis:
for arch in arches:
supported.append(('%s%s' % (impl, versions[0]), abi, arch))
# Has binaries, does not use the Python API:
supported.append(('py%s' % (versions[0][0]), 'none', arch))
# No abi / arch, but requires our implementation:
for i, version in enumerate(versions):
supported.append(('%s%s' % (impl, version), 'none', 'any'))
if i == 0:
# Tagged specifically as being cross-version compatible
# (with just the major version specified)
supported.append(('%s%s' % (impl, versions[0][0]), 'none', 'any'))
# No abi / arch, generic Python
for i, version in enumerate(versions):
supported.append(('py%s' % (version,), 'none', 'any'))
if i == 0:
supported.append(('py%s' % (version[0]), 'none', 'any'))
return supported | def function[get_supported, parameter[versions, noarch]]:
constant[Return a list of supported tags for each version specified in
`versions`.
:param versions: a list of string versions, of the form ["33", "32"],
or None. The first version will be assumed to support our ABI.
]
variable[supported] assign[=] list[[]]
if compare[name[versions] is constant[None]] begin[:]
variable[versions] assign[=] list[[]]
variable[major] assign[=] call[name[sys].version_info][constant[0]]
for taget[name[minor]] in starred[call[name[range], parameter[call[name[sys].version_info][constant[1]], <ast.UnaryOp object at 0x7da18f810520>, <ast.UnaryOp object at 0x7da18f813e50>]]] begin[:]
call[name[versions].append, parameter[call[constant[].join, parameter[call[name[map], parameter[name[str], tuple[[<ast.Name object at 0x7da18f8102b0>, <ast.Name object at 0x7da18f810850>]]]]]]]]
variable[impl] assign[=] call[name[get_abbr_impl], parameter[]]
variable[abis] assign[=] list[[]]
<ast.Try object at 0x7da18f812680>
if <ast.BoolOp object at 0x7da18f811e10> begin[:]
call[name[abis]][<ast.Slice object at 0x7da18f812980>] assign[=] list[[<ast.BinOp object at 0x7da18f811c90>]]
variable[abi3s] assign[=] call[name[set], parameter[]]
import module[imp]
for taget[name[suffix]] in starred[call[name[imp].get_suffixes, parameter[]]] begin[:]
if call[call[name[suffix]][constant[0]].startswith, parameter[constant[.abi]]] begin[:]
call[name[abi3s].add, parameter[call[call[call[name[suffix]][constant[0]].split, parameter[constant[.], constant[2]]]][constant[1]]]]
call[name[abis].extend, parameter[call[name[sorted], parameter[call[name[list], parameter[name[abi3s]]]]]]]
call[name[abis].append, parameter[constant[none]]]
if <ast.UnaryOp object at 0x7da18f8115d0> begin[:]
variable[arch] assign[=] call[name[get_platform], parameter[]]
if compare[name[sys].platform equal[==] constant[darwin]] begin[:]
variable[match] assign[=] call[name[_osx_arch_pat].match, parameter[name[arch]]]
if name[match] begin[:]
<ast.Tuple object at 0x7da18f811b70> assign[=] call[name[match].groups, parameter[]]
variable[actual_arches] assign[=] list[[<ast.Name object at 0x7da18f811000>]]
if compare[name[actual_arch] in tuple[[<ast.Constant object at 0x7da18f811ab0>, <ast.Constant object at 0x7da18f810310>]]] begin[:]
call[name[actual_arches].append, parameter[constant[fat]]]
if compare[name[actual_arch] in tuple[[<ast.Constant object at 0x7da18f810670>, <ast.Constant object at 0x7da18f811f90>]]] begin[:]
call[name[actual_arches].append, parameter[constant[intel]]]
if compare[name[actual_arch] in tuple[[<ast.Constant object at 0x7da18f811150>, <ast.Constant object at 0x7da18f811450>, <ast.Constant object at 0x7da18f813970>]]] begin[:]
call[name[actual_arches].append, parameter[constant[fat3]]]
if compare[name[actual_arch] in tuple[[<ast.Constant object at 0x7da18f8107f0>, <ast.Constant object at 0x7da18f812320>]]] begin[:]
call[name[actual_arches].append, parameter[constant[fat64]]]
if compare[name[actual_arch] in tuple[[<ast.Constant object at 0x7da18f8103a0>, <ast.Constant object at 0x7da18f8117e0>, <ast.Constant object at 0x7da18f811960>, <ast.Constant object at 0x7da18f810220>, <ast.Constant object at 0x7da18f811030>]]] begin[:]
call[name[actual_arches].append, parameter[constant[universal]]]
variable[tpl] assign[=] call[constant[{0}_{1}_%i_%s].format, parameter[name[name], name[major]]]
variable[arches] assign[=] list[[]]
for taget[name[m]] in starred[call[name[range], parameter[binary_operation[call[name[int], parameter[name[minor]]] + constant[1]]]]] begin[:]
for taget[name[a]] in starred[name[actual_arches]] begin[:]
call[name[arches].append, parameter[binary_operation[name[tpl] <ast.Mod object at 0x7da2590d6920> tuple[[<ast.Name object at 0x7da18f813370>, <ast.Name object at 0x7da18f812a40>]]]]]
for taget[name[abi]] in starred[name[abis]] begin[:]
for taget[name[arch]] in starred[name[arches]] begin[:]
call[name[supported].append, parameter[tuple[[<ast.BinOp object at 0x7da18f811d80>, <ast.Name object at 0x7da1b26acc70>, <ast.Name object at 0x7da1b26afa90>]]]]
call[name[supported].append, parameter[tuple[[<ast.BinOp object at 0x7da1b26ac910>, <ast.Constant object at 0x7da1b26ac730>, <ast.Name object at 0x7da1b26af190>]]]]
for taget[tuple[[<ast.Name object at 0x7da1b26aeb60>, <ast.Name object at 0x7da1b26aeec0>]]] in starred[call[name[enumerate], parameter[name[versions]]]] begin[:]
call[name[supported].append, parameter[tuple[[<ast.BinOp object at 0x7da1b26aeb90>, <ast.Constant object at 0x7da1b26ac490>, <ast.Constant object at 0x7da1b26aceb0>]]]]
if compare[name[i] equal[==] constant[0]] begin[:]
call[name[supported].append, parameter[tuple[[<ast.BinOp object at 0x7da1b26ad7e0>, <ast.Constant object at 0x7da1b26af610>, <ast.Constant object at 0x7da1b26ac880>]]]]
for taget[tuple[[<ast.Name object at 0x7da1b26ae740>, <ast.Name object at 0x7da1b26ad450>]]] in starred[call[name[enumerate], parameter[name[versions]]]] begin[:]
call[name[supported].append, parameter[tuple[[<ast.BinOp object at 0x7da1b26ad360>, <ast.Constant object at 0x7da1b26aed70>, <ast.Constant object at 0x7da1b26afbe0>]]]]
if compare[name[i] equal[==] constant[0]] begin[:]
call[name[supported].append, parameter[tuple[[<ast.BinOp object at 0x7da1b26af580>, <ast.Constant object at 0x7da1b26afe50>, <ast.Constant object at 0x7da1b26aff10>]]]]
return[name[supported]] | keyword[def] identifier[get_supported] ( identifier[versions] = keyword[None] , identifier[noarch] = keyword[False] ):
literal[string]
identifier[supported] =[]
keyword[if] identifier[versions] keyword[is] keyword[None] :
identifier[versions] =[]
identifier[major] = identifier[sys] . identifier[version_info] [ literal[int] ]
keyword[for] identifier[minor] keyword[in] identifier[range] ( identifier[sys] . identifier[version_info] [ literal[int] ],- literal[int] ,- literal[int] ):
identifier[versions] . identifier[append] ( literal[string] . identifier[join] ( identifier[map] ( identifier[str] ,( identifier[major] , identifier[minor] ))))
identifier[impl] = identifier[get_abbr_impl] ()
identifier[abis] =[]
keyword[try] :
identifier[soabi] = identifier[sysconfig] . identifier[get_config_var] ( literal[string] )
keyword[except] identifier[IOError] keyword[as] identifier[e] :
identifier[warnings] . identifier[warn] ( literal[string] . identifier[format] ( identifier[e] ), identifier[RuntimeWarning] )
identifier[soabi] = keyword[None]
keyword[if] identifier[soabi] keyword[and] identifier[soabi] . identifier[startswith] ( literal[string] ):
identifier[abis] [ literal[int] : literal[int] ]=[ literal[string] + identifier[soabi] . identifier[split] ( literal[string] , literal[int] )[- literal[int] ]]
identifier[abi3s] = identifier[set] ()
keyword[import] identifier[imp]
keyword[for] identifier[suffix] keyword[in] identifier[imp] . identifier[get_suffixes] ():
keyword[if] identifier[suffix] [ literal[int] ]. identifier[startswith] ( literal[string] ):
identifier[abi3s] . identifier[add] ( identifier[suffix] [ literal[int] ]. identifier[split] ( literal[string] , literal[int] )[ literal[int] ])
identifier[abis] . identifier[extend] ( identifier[sorted] ( identifier[list] ( identifier[abi3s] )))
identifier[abis] . identifier[append] ( literal[string] )
keyword[if] keyword[not] identifier[noarch] :
identifier[arch] = identifier[get_platform] ()
keyword[if] identifier[sys] . identifier[platform] == literal[string] :
identifier[match] = identifier[_osx_arch_pat] . identifier[match] ( identifier[arch] )
keyword[if] identifier[match] :
identifier[name] , identifier[major] , identifier[minor] , identifier[actual_arch] = identifier[match] . identifier[groups] ()
identifier[actual_arches] =[ identifier[actual_arch] ]
keyword[if] identifier[actual_arch] keyword[in] ( literal[string] , literal[string] ):
identifier[actual_arches] . identifier[append] ( literal[string] )
keyword[if] identifier[actual_arch] keyword[in] ( literal[string] , literal[string] ):
identifier[actual_arches] . identifier[append] ( literal[string] )
keyword[if] identifier[actual_arch] keyword[in] ( literal[string] , literal[string] , literal[string] ):
identifier[actual_arches] . identifier[append] ( literal[string] )
keyword[if] identifier[actual_arch] keyword[in] ( literal[string] , literal[string] ):
identifier[actual_arches] . identifier[append] ( literal[string] )
keyword[if] identifier[actual_arch] keyword[in] ( literal[string] , literal[string] , literal[string] , literal[string] , literal[string] ):
identifier[actual_arches] . identifier[append] ( literal[string] )
identifier[tpl] = literal[string] . identifier[format] ( identifier[name] , identifier[major] )
identifier[arches] =[]
keyword[for] identifier[m] keyword[in] identifier[range] ( identifier[int] ( identifier[minor] )+ literal[int] ):
keyword[for] identifier[a] keyword[in] identifier[actual_arches] :
identifier[arches] . identifier[append] ( identifier[tpl] %( identifier[m] , identifier[a] ))
keyword[else] :
identifier[arches] =[ identifier[arch] ]
keyword[else] :
identifier[arches] =[ identifier[arch] ]
keyword[for] identifier[abi] keyword[in] identifier[abis] :
keyword[for] identifier[arch] keyword[in] identifier[arches] :
identifier[supported] . identifier[append] (( literal[string] %( identifier[impl] , identifier[versions] [ literal[int] ]), identifier[abi] , identifier[arch] ))
identifier[supported] . identifier[append] (( literal[string] %( identifier[versions] [ literal[int] ][ literal[int] ]), literal[string] , identifier[arch] ))
keyword[for] identifier[i] , identifier[version] keyword[in] identifier[enumerate] ( identifier[versions] ):
identifier[supported] . identifier[append] (( literal[string] %( identifier[impl] , identifier[version] ), literal[string] , literal[string] ))
keyword[if] identifier[i] == literal[int] :
identifier[supported] . identifier[append] (( literal[string] %( identifier[impl] , identifier[versions] [ literal[int] ][ literal[int] ]), literal[string] , literal[string] ))
keyword[for] identifier[i] , identifier[version] keyword[in] identifier[enumerate] ( identifier[versions] ):
identifier[supported] . identifier[append] (( literal[string] %( identifier[version] ,), literal[string] , literal[string] ))
keyword[if] identifier[i] == literal[int] :
identifier[supported] . identifier[append] (( literal[string] %( identifier[version] [ literal[int] ]), literal[string] , literal[string] ))
keyword[return] identifier[supported] | def get_supported(versions=None, noarch=False):
"""Return a list of supported tags for each version specified in
`versions`.
:param versions: a list of string versions, of the form ["33", "32"],
or None. The first version will be assumed to support our ABI.
"""
supported = []
# Versions must be given with respect to the preference
if versions is None:
versions = []
major = sys.version_info[0]
# Support all previous minor Python versions.
for minor in range(sys.version_info[1], -1, -1):
versions.append(''.join(map(str, (major, minor)))) # depends on [control=['for'], data=['minor']] # depends on [control=['if'], data=['versions']]
impl = get_abbr_impl()
abis = []
try:
soabi = sysconfig.get_config_var('SOABI') # depends on [control=['try'], data=[]]
except IOError as e: # Issue #1074
warnings.warn('{0}'.format(e), RuntimeWarning)
soabi = None # depends on [control=['except'], data=['e']]
if soabi and soabi.startswith('cpython-'):
abis[0:0] = ['cp' + soabi.split('-', 1)[-1]] # depends on [control=['if'], data=[]]
abi3s = set()
import imp
for suffix in imp.get_suffixes():
if suffix[0].startswith('.abi'):
abi3s.add(suffix[0].split('.', 2)[1]) # depends on [control=['if'], data=[]] # depends on [control=['for'], data=['suffix']]
abis.extend(sorted(list(abi3s)))
abis.append('none')
if not noarch:
arch = get_platform()
if sys.platform == 'darwin':
# support macosx-10.6-intel on macosx-10.9-x86_64
match = _osx_arch_pat.match(arch)
if match:
(name, major, minor, actual_arch) = match.groups()
actual_arches = [actual_arch]
if actual_arch in ('i386', 'ppc'):
actual_arches.append('fat') # depends on [control=['if'], data=[]]
if actual_arch in ('i386', 'x86_64'):
actual_arches.append('intel') # depends on [control=['if'], data=[]]
if actual_arch in ('i386', 'ppc', 'x86_64'):
actual_arches.append('fat3') # depends on [control=['if'], data=[]]
if actual_arch in ('ppc64', 'x86_64'):
actual_arches.append('fat64') # depends on [control=['if'], data=[]]
if actual_arch in ('i386', 'x86_64', 'intel', 'ppc', 'ppc64'):
actual_arches.append('universal') # depends on [control=['if'], data=[]]
tpl = '{0}_{1}_%i_%s'.format(name, major)
arches = []
for m in range(int(minor) + 1):
for a in actual_arches:
arches.append(tpl % (m, a)) # depends on [control=['for'], data=['a']] # depends on [control=['for'], data=['m']] # depends on [control=['if'], data=[]]
else:
# arch pattern didn't match (?!)
arches = [arch] # depends on [control=['if'], data=[]]
else:
arches = [arch]
# Current version, current API (built specifically for our Python):
for abi in abis:
for arch in arches:
supported.append(('%s%s' % (impl, versions[0]), abi, arch)) # depends on [control=['for'], data=['arch']] # depends on [control=['for'], data=['abi']]
# Has binaries, does not use the Python API:
supported.append(('py%s' % versions[0][0], 'none', arch)) # depends on [control=['if'], data=[]]
# No abi / arch, but requires our implementation:
for (i, version) in enumerate(versions):
supported.append(('%s%s' % (impl, version), 'none', 'any'))
if i == 0:
# Tagged specifically as being cross-version compatible
# (with just the major version specified)
supported.append(('%s%s' % (impl, versions[0][0]), 'none', 'any')) # depends on [control=['if'], data=[]] # depends on [control=['for'], data=[]]
# No abi / arch, generic Python
for (i, version) in enumerate(versions):
supported.append(('py%s' % (version,), 'none', 'any'))
if i == 0:
supported.append(('py%s' % version[0], 'none', 'any')) # depends on [control=['if'], data=[]] # depends on [control=['for'], data=[]]
return supported |
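The exact tags depend on the interpreter and platform, but the ordering is always most-specific first. On a CPython 3.5 / 64-bit Linux build, for example, the output is roughly:

tags = get_supported()
# tags[0]  ~ ('cp35', 'cp35m', 'linux_x86_64')    implementation + ABI + platform
# ...
# tags[-1] ~ ('py3', 'none', 'any')               most generic, tried last
for impl_tag, abi_tag, plat_tag in tags:
    print('%s-%s-%s' % (impl_tag, abi_tag, plat_tag))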
def process_tls(self, data, name):
"""
Remote TLS processing - one address:port per line
:param data:
:param name:
:return:
"""
ret = []
try:
lines = [x.strip() for x in data.split('\n')]
for idx, line in enumerate(lines):
if line == '':
continue
sub = self.process_host(line, name, idx)
if sub is not None:
ret.append(sub)
except Exception as e:
logger.error('Error in file processing %s : %s' % (name, e))
self.roca.trace_logger.log(e)
return ret | def function[process_tls, parameter[self, data, name]]:
constant[
Remote TLS processing - one address:port per line
:param data:
:param name:
:return:
]
variable[ret] assign[=] list[[]]
<ast.Try object at 0x7da18f720130>
return[name[ret]] | keyword[def] identifier[process_tls] ( identifier[self] , identifier[data] , identifier[name] ):
literal[string]
identifier[ret] =[]
keyword[try] :
identifier[lines] =[ identifier[x] . identifier[strip] () keyword[for] identifier[x] keyword[in] identifier[data] . identifier[split] ( literal[string] )]
keyword[for] identifier[idx] , identifier[line] keyword[in] identifier[enumerate] ( identifier[lines] ):
keyword[if] identifier[line] == literal[string] :
keyword[continue]
identifier[sub] = identifier[self] . identifier[process_host] ( identifier[line] , identifier[name] , identifier[idx] )
keyword[if] identifier[sub] keyword[is] keyword[not] keyword[None] :
identifier[ret] . identifier[append] ( identifier[sub] )
keyword[except] identifier[Exception] keyword[as] identifier[e] :
identifier[logger] . identifier[error] ( literal[string] %( identifier[name] , identifier[e] ))
identifier[self] . identifier[roca] . identifier[trace_logger] . identifier[log] ( identifier[e] )
keyword[return] identifier[ret] | def process_tls(self, data, name):
"""
Remote TLS processing - one address:port per line
:param data:
:param name:
:return:
"""
ret = []
try:
lines = [x.strip() for x in data.split('\n')]
for (idx, line) in enumerate(lines):
if line == '':
continue # depends on [control=['if'], data=[]]
sub = self.process_host(line, name, idx)
if sub is not None:
ret.append(sub) # depends on [control=['if'], data=['sub']] # depends on [control=['for'], data=[]] # depends on [control=['try'], data=[]]
except Exception as e:
logger.error('Error in file processing %s : %s' % (name, e))
self.roca.trace_logger.log(e) # depends on [control=['except'], data=['e']]
return ret |
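The input contract is one address:port per line, with blank lines skipped; scanner below stands for whatever object carries this method:

data = (
    'example.com:443\n'
    '\n'                               # blank lines are skipped
    'mail.example.com:465\n'
)
results = scanner.process_tls(data, 'hosts.txt')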
def _run_somatic(paired, ref_file, target, out_file):
"""Run somatic calling with octopus, handling both paired and tumor-only cases.
Tweaks for low frequency, tumor only and UMI calling documented in:
https://github.com/luntergroup/octopus/blob/develop/configs/UMI.config
"""
align_bams = paired.tumor_bam
if paired.normal_bam:
align_bams += " %s --normal-sample %s" % (paired.normal_bam, paired.normal_name)
cores = dd.get_num_cores(paired.tumor_data)
# Do not try to search below 0.4% currently as leads to long runtimes
# https://github.com/luntergroup/octopus/issues/29#issuecomment-428167979
min_af = max([float(dd.get_min_allele_fraction(paired.tumor_data)) / 100.0, 0.004])
min_af_floor = min_af / 4.0
cmd = ("octopus --threads {cores} --reference {ref_file} --reads {align_bams} "
"--regions-file {target} "
"--min-credible-somatic-frequency {min_af_floor} --min-expected-somatic-frequency {min_af} "
"--downsample-above 4000 --downsample-target 4000 --min-kmer-prune 5 --min-bubble-score 20 "
"--max-haplotypes 200 --somatic-snv-mutation-rate '5e-4' --somatic-indel-mutation-rate '1e-05' "
"--target-working-memory 5G --target-read-buffer-footprint 5G --max-somatic-haplotypes 3 "
"--caller cancer "
"--working-directory {tmp_dir} "
"-o {tx_out_file} --legacy")
if not paired.normal_bam:
cmd += (" --tumour-germline-concentration 5")
if dd.get_umi_type(paired.tumor_data) or _is_umi_consensus_bam(paired.tumor_bam):
cmd += (" --allow-octopus-duplicates --overlap-masking 0 "
"--somatic-filter-expression 'GQ < 200 | MQ < 30 | SB > 0.2 | SD[.25] > 0.1 "
"| BQ < 40 | DP < 100 | MF > 0.1 | AD < 5 | CC > 1.1 | GQD > 2'")
with file_transaction(paired.tumor_data, out_file) as tx_out_file:
tmp_dir = os.path.dirname(tx_out_file)
do.run(cmd.format(**locals()), "Octopus somatic calling")
_produce_compatible_vcf(tx_out_file, paired.tumor_data, is_somatic=True)
return out_file | def function[_run_somatic, parameter[paired, ref_file, target, out_file]]:
constant[Run somatic calling with octopus, handling both paired and tumor-only cases.
Tweaks for low frequency, tumor only and UMI calling documented in:
https://github.com/luntergroup/octopus/blob/develop/configs/UMI.config
]
variable[align_bams] assign[=] name[paired].tumor_bam
if name[paired].normal_bam begin[:]
<ast.AugAssign object at 0x7da1b17a6050>
variable[cores] assign[=] call[name[dd].get_num_cores, parameter[name[paired].tumor_data]]
variable[min_af] assign[=] call[name[max], parameter[list[[<ast.BinOp object at 0x7da1b1845d80>, <ast.Constant object at 0x7da1b1847c10>]]]]
variable[min_af_floor] assign[=] binary_operation[name[min_af] / constant[4.0]]
variable[cmd] assign[=] constant[octopus --threads {cores} --reference {ref_file} --reads {align_bams} --regions-file {target} --min-credible-somatic-frequency {min_af_floor} --min-expected-somatic-frequency {min_af} --downsample-above 4000 --downsample-target 4000 --min-kmer-prune 5 --min-bubble-score 20 --max-haplotypes 200 --somatic-snv-mutation-rate '5e-4' --somatic-indel-mutation-rate '1e-05' --target-working-memory 5G --target-read-buffer-footprint 5G --max-somatic-haplotypes 3 --caller cancer --working-directory {tmp_dir} -o {tx_out_file} --legacy]
if <ast.UnaryOp object at 0x7da1b1844e20> begin[:]
<ast.AugAssign object at 0x7da1b18457b0>
if <ast.BoolOp object at 0x7da1b1845270> begin[:]
<ast.AugAssign object at 0x7da1b1845ea0>
with call[name[file_transaction], parameter[name[paired].tumor_data, name[out_file]]] begin[:]
variable[tmp_dir] assign[=] call[name[os].path.dirname, parameter[name[tx_out_file]]]
call[name[do].run, parameter[call[name[cmd].format, parameter[]], constant[Octopus somatic calling]]]
call[name[_produce_compatible_vcf], parameter[name[tx_out_file], name[paired].tumor_data]]
return[name[out_file]] | keyword[def] identifier[_run_somatic] ( identifier[paired] , identifier[ref_file] , identifier[target] , identifier[out_file] ):
literal[string]
identifier[align_bams] = identifier[paired] . identifier[tumor_bam]
keyword[if] identifier[paired] . identifier[normal_bam] :
identifier[align_bams] += literal[string] %( identifier[paired] . identifier[normal_bam] , identifier[paired] . identifier[normal_name] )
identifier[cores] = identifier[dd] . identifier[get_num_cores] ( identifier[paired] . identifier[tumor_data] )
identifier[min_af] = identifier[max] ([ identifier[float] ( identifier[dd] . identifier[get_min_allele_fraction] ( identifier[paired] . identifier[tumor_data] ))/ literal[int] , literal[int] ])
identifier[min_af_floor] = identifier[min_af] / literal[int]
identifier[cmd] =( literal[string]
literal[string]
literal[string]
literal[string]
literal[string]
literal[string]
literal[string]
literal[string]
literal[string] )
keyword[if] keyword[not] identifier[paired] . identifier[normal_bam] :
identifier[cmd] +=( literal[string] )
keyword[if] identifier[dd] . identifier[get_umi_type] ( identifier[paired] . identifier[tumor_data] ) keyword[or] identifier[_is_umi_consensus_bam] ( identifier[paired] . identifier[tumor_bam] ):
identifier[cmd] +=( literal[string]
literal[string]
literal[string] )
keyword[with] identifier[file_transaction] ( identifier[paired] . identifier[tumor_data] , identifier[out_file] ) keyword[as] identifier[tx_out_file] :
identifier[tmp_dir] = identifier[os] . identifier[path] . identifier[dirname] ( identifier[tx_out_file] )
identifier[do] . identifier[run] ( identifier[cmd] . identifier[format] (** identifier[locals] ()), literal[string] )
identifier[_produce_compatible_vcf] ( identifier[tx_out_file] , identifier[paired] . identifier[tumor_data] , identifier[is_somatic] = keyword[True] )
keyword[return] identifier[out_file] | def _run_somatic(paired, ref_file, target, out_file):
"""Run somatic calling with octopus, handling both paired and tumor-only cases.
Tweaks for low frequency, tumor only and UMI calling documented in:
https://github.com/luntergroup/octopus/blob/develop/configs/UMI.config
"""
align_bams = paired.tumor_bam
if paired.normal_bam:
align_bams += ' %s --normal-sample %s' % (paired.normal_bam, paired.normal_name) # depends on [control=['if'], data=[]]
cores = dd.get_num_cores(paired.tumor_data)
# Do not try to search below 0.4% currently as leads to long runtimes
# https://github.com/luntergroup/octopus/issues/29#issuecomment-428167979
min_af = max([float(dd.get_min_allele_fraction(paired.tumor_data)) / 100.0, 0.004])
min_af_floor = min_af / 4.0
cmd = "octopus --threads {cores} --reference {ref_file} --reads {align_bams} --regions-file {target} --min-credible-somatic-frequency {min_af_floor} --min-expected-somatic-frequency {min_af} --downsample-above 4000 --downsample-target 4000 --min-kmer-prune 5 --min-bubble-score 20 --max-haplotypes 200 --somatic-snv-mutation-rate '5e-4' --somatic-indel-mutation-rate '1e-05' --target-working-memory 5G --target-read-buffer-footprint 5G --max-somatic-haplotypes 3 --caller cancer --working-directory {tmp_dir} -o {tx_out_file} --legacy"
if not paired.normal_bam:
cmd += ' --tumour-germline-concentration 5' # depends on [control=['if'], data=[]]
if dd.get_umi_type(paired.tumor_data) or _is_umi_consensus_bam(paired.tumor_bam):
cmd += " --allow-octopus-duplicates --overlap-masking 0 --somatic-filter-expression 'GQ < 200 | MQ < 30 | SB > 0.2 | SD[.25] > 0.1 | BQ < 40 | DP < 100 | MF > 0.1 | AD < 5 | CC > 1.1 | GQD > 2'" # depends on [control=['if'], data=[]]
with file_transaction(paired.tumor_data, out_file) as tx_out_file:
tmp_dir = os.path.dirname(tx_out_file)
do.run(cmd.format(**locals()), 'Octopus somatic calling')
_produce_compatible_vcf(tx_out_file, paired.tumor_data, is_somatic=True) # depends on [control=['with'], data=['tx_out_file']]
return out_file |
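The template above is resolved with cmd.format(**locals()), which fills every {name} placeholder from the local variables in scope at the call site. A standalone illustration of that idiom:

cores, ref_file = 8, '/refs/GRCh38.fa'
cmd = 'octopus --threads {cores} --reference {ref_file}'
print(cmd.format(**locals()))
# octopus --threads 8 --reference /refs/GRCh38.fa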
def _set_statistics_oam(self, v, load=False):
"""
Setter method for statistics_oam, mapped from YANG variable /mpls_state/statistics_oam (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_statistics_oam is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_statistics_oam() directly.
YANG Description: OAM packet statistics
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=statistics_oam.statistics_oam, is_container='container', presence=False, yang_name="statistics-oam", rest_name="statistics-oam", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-statistics-oam', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='container', is_config=False)
except (TypeError, ValueError):
raise ValueError({
'error-string': """statistics_oam must be of a type compatible with container""",
'defined-type': "container",
'generated-type': """YANGDynClass(base=statistics_oam.statistics_oam, is_container='container', presence=False, yang_name="statistics-oam", rest_name="statistics-oam", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-statistics-oam', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='container', is_config=False)""",
})
self.__statistics_oam = t
if hasattr(self, '_set'):
self._set() | def function[_set_statistics_oam, parameter[self, v, load]]:
constant[
Setter method for statistics_oam, mapped from YANG variable /mpls_state/statistics_oam (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_statistics_oam is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_statistics_oam() directly.
YANG Description: OAM packet statistics
]
if call[name[hasattr], parameter[name[v], constant[_utype]]] begin[:]
variable[v] assign[=] call[name[v]._utype, parameter[name[v]]]
<ast.Try object at 0x7da18f723040>
name[self].__statistics_oam assign[=] name[t]
if call[name[hasattr], parameter[name[self], constant[_set]]] begin[:]
call[name[self]._set, parameter[]] | keyword[def] identifier[_set_statistics_oam] ( identifier[self] , identifier[v] , identifier[load] = keyword[False] ):
literal[string]
keyword[if] identifier[hasattr] ( identifier[v] , literal[string] ):
identifier[v] = identifier[v] . identifier[_utype] ( identifier[v] )
keyword[try] :
identifier[t] = identifier[YANGDynClass] ( identifier[v] , identifier[base] = identifier[statistics_oam] . identifier[statistics_oam] , identifier[is_container] = literal[string] , identifier[presence] = keyword[False] , identifier[yang_name] = literal[string] , identifier[rest_name] = literal[string] , identifier[parent] = identifier[self] , identifier[path_helper] = identifier[self] . identifier[_path_helper] , identifier[extmethods] = identifier[self] . identifier[_extmethods] , identifier[register_paths] = keyword[True] , identifier[extensions] ={ literal[string] :{ literal[string] : literal[string] , literal[string] : keyword[None] }}, identifier[namespace] = literal[string] , identifier[defining_module] = literal[string] , identifier[yang_type] = literal[string] , identifier[is_config] = keyword[False] )
keyword[except] ( identifier[TypeError] , identifier[ValueError] ):
keyword[raise] identifier[ValueError] ({
literal[string] : literal[string] ,
literal[string] : literal[string] ,
literal[string] : literal[string] ,
})
identifier[self] . identifier[__statistics_oam] = identifier[t]
keyword[if] identifier[hasattr] ( identifier[self] , literal[string] ):
identifier[self] . identifier[_set] () | def _set_statistics_oam(self, v, load=False):
"""
Setter method for statistics_oam, mapped from YANG variable /mpls_state/statistics_oam (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_statistics_oam is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_statistics_oam() directly.
YANG Description: OAM packet statistics
"""
if hasattr(v, '_utype'):
v = v._utype(v) # depends on [control=['if'], data=[]]
try:
t = YANGDynClass(v, base=statistics_oam.statistics_oam, is_container='container', presence=False, yang_name='statistics-oam', rest_name='statistics-oam', parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'callpoint': u'mpls-statistics-oam', u'cli-suppress-show-path': None}}, namespace='urn:brocade.com:mgmt:brocade-mpls-operational', defining_module='brocade-mpls-operational', yang_type='container', is_config=False) # depends on [control=['try'], data=[]]
except (TypeError, ValueError):
raise ValueError({'error-string': 'statistics_oam must be of a type compatible with container', 'defined-type': 'container', 'generated-type': 'YANGDynClass(base=statistics_oam.statistics_oam, is_container=\'container\', presence=False, yang_name="statistics-oam", rest_name="statistics-oam", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u\'tailf-common\': {u\'callpoint\': u\'mpls-statistics-oam\', u\'cli-suppress-show-path\': None}}, namespace=\'urn:brocade.com:mgmt:brocade-mpls-operational\', defining_module=\'brocade-mpls-operational\', yang_type=\'container\', is_config=False)'}) # depends on [control=['except'], data=[]]
self.__statistics_oam = t
if hasattr(self, '_set'):
self._set() # depends on [control=['if'], data=[]] |
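# Hedged sketch of the _utype coercion hook checked at the top of the setter
# above; the Wrapped class is purely illustrative, not part of pyangbind.
class Wrapped(object):
    def __init__(self, raw):
        self.raw = raw
    def _utype(self, v):
        return v.raw                                   # unwrap back to the raw value

v = Wrapped({"oam-packets": 0})
if hasattr(v, "_utype"):
    v = v._utype(v)                                    # same normalization step as the setter
print(v)                                               # {'oam-packets': 0}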
def get(self, sid):
"""
Constructs a TranscriptionContext
:param sid: The unique string that identifies the resource
:returns: twilio.rest.api.v2010.account.transcription.TranscriptionContext
:rtype: twilio.rest.api.v2010.account.transcription.TranscriptionContext
"""
return TranscriptionContext(self._version, account_sid=self._solution['account_sid'], sid=sid, ) | def function[get, parameter[self, sid]]:
constant[
Constructs a TranscriptionContext
:param sid: The unique string that identifies the resource
:returns: twilio.rest.api.v2010.account.transcription.TranscriptionContext
:rtype: twilio.rest.api.v2010.account.transcription.TranscriptionContext
]
return[call[name[TranscriptionContext], parameter[name[self]._version]]] | keyword[def] identifier[get] ( identifier[self] , identifier[sid] ):
literal[string]
keyword[return] identifier[TranscriptionContext] ( identifier[self] . identifier[_version] , identifier[account_sid] = identifier[self] . identifier[_solution] [ literal[string] ], identifier[sid] = identifier[sid] ,) | def get(self, sid):
"""
Constructs a TranscriptionContext
:param sid: The unique string that identifies the resource
:returns: twilio.rest.api.v2010.account.transcription.TranscriptionContext
:rtype: twilio.rest.api.v2010.account.transcription.TranscriptionContext
"""
return TranscriptionContext(self._version, account_sid=self._solution['account_sid'], sid=sid) |
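# Hedged usage sketch for the context accessor above, with placeholder
# credentials and SIDs; assumes the standard twilio-python client.
from twilio.rest import Client

client = Client("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "your_auth_token")
context = client.transcriptions.get("TRXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX")
transcription = context.fetch()                        # the context is lazy until fetched
print(transcription.status)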
def validate(self, folder):
"""Validate files and folders contained in this folder
It validates all of the files and folders contained in this
folder if some observers are interested in them.
"""
for observer in list(self.observers):
observer.validate(folder) | def function[validate, parameter[self, folder]]:
constant[Validate files and folders contained in this folder
It validates all of the files and folders contained in this
folder if some observers are interested in them.
]
for taget[name[observer]] in starred[call[name[list], parameter[name[self].observers]]] begin[:]
call[name[observer].validate, parameter[name[folder]]] | keyword[def] identifier[validate] ( identifier[self] , identifier[folder] ):
literal[string]
keyword[for] identifier[observer] keyword[in] identifier[list] ( identifier[self] . identifier[observers] ):
identifier[observer] . identifier[validate] ( identifier[folder] ) | def validate(self, folder):
"""Validate files and folders contained in this folder
It validates all of the files and folders contained in this
folder if some observers are interested in them.
"""
for observer in list(self.observers):
observer.validate(folder) # depends on [control=['for'], data=['observer']] |
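# Self-contained sketch of the observer fan-out implemented above; the
# PrintingObserver class is hypothetical, only its validate() hook matters.
class PrintingObserver(object):
    def validate(self, folder):
        print("validating %s" % folder)

class Watched(object):
    def __init__(self):
        self.observers = []
    def validate(self, folder):
        for observer in list(self.observers):          # the copy guards against observers mutating the list mid-loop
            observer.validate(folder)

w = Watched()
w.observers.append(PrintingObserver())
w.validate("/tmp/data")                                # -> validating /tmp/data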
def _make_proxy(self, varname, parent=None, constructor=MlabObjectProxy):
"""Creates a proxy for a variable.
XXX create and cache nested proxies also here.
"""
# FIXME why not just use gensym here?
proxy_val_name = "PROXY_VAL%d__" % self._proxy_count
self._proxy_count += 1
mlabraw.eval(self._session, "%s = %s;" % (proxy_val_name, varname))
res = constructor(self, proxy_val_name, parent)
self._proxies[proxy_val_name] = res
return res | def function[_make_proxy, parameter[self, varname, parent, constructor]]:
constant[Creates a proxy for a variable.
XXX create and cache nested proxies also here.
]
variable[proxy_val_name] assign[=] binary_operation[constant[PROXY_VAL%d__] <ast.Mod object at 0x7da2590d6920> name[self]._proxy_count]
<ast.AugAssign object at 0x7da18f58cbb0>
call[name[mlabraw].eval, parameter[name[self]._session, binary_operation[constant[%s = %s;] <ast.Mod object at 0x7da2590d6920> tuple[[<ast.Name object at 0x7da1b1042bf0>, <ast.Name object at 0x7da1b1041780>]]]]]
variable[res] assign[=] call[name[constructor], parameter[name[self], name[proxy_val_name], name[parent]]]
call[name[self]._proxies][name[proxy_val_name]] assign[=] name[res]
return[name[res]] | keyword[def] identifier[_make_proxy] ( identifier[self] , identifier[varname] , identifier[parent] = keyword[None] , identifier[constructor] = identifier[MlabObjectProxy] ):
literal[string]
identifier[proxy_val_name] = literal[string] % identifier[self] . identifier[_proxy_count]
identifier[self] . identifier[_proxy_count] += literal[int]
identifier[mlabraw] . identifier[eval] ( identifier[self] . identifier[_session] , literal[string] %( identifier[proxy_val_name] , identifier[varname] ))
identifier[res] = identifier[constructor] ( identifier[self] , identifier[proxy_val_name] , identifier[parent] )
identifier[self] . identifier[_proxies] [ identifier[proxy_val_name] ]= identifier[res]
keyword[return] identifier[res] | def _make_proxy(self, varname, parent=None, constructor=MlabObjectProxy):
"""Creates a proxy for a variable.
XXX create and cache nested proxies also here.
"""
# FIXME why not just use gensym here?
proxy_val_name = 'PROXY_VAL%d__' % self._proxy_count
self._proxy_count += 1
mlabraw.eval(self._session, '%s = %s;' % (proxy_val_name, varname))
res = constructor(self, proxy_val_name, parent)
self._proxies[proxy_val_name] = res
return res |
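# The proxy-naming scheme in isolation; the MATLAB session and MlabObjectProxy
# are stubbed away, so this only shows how successive proxy names are minted.
class NameAllocator(object):
    def __init__(self):
        self._proxy_count = 0
    def next_name(self):
        name = "PROXY_VAL%d__" % self._proxy_count     # same pattern as _make_proxy
        self._proxy_count += 1
        return name

alloc = NameAllocator()
print(alloc.next_name(), alloc.next_name())            # PROXY_VAL0__ PROXY_VAL1__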
def init(paths, output, **kwargs):
"""Init data package from list of files.
It will also infer tabular data's schemas from their contents.
"""
dp = goodtables.init_datapackage(paths)
click.secho(
json_module.dumps(dp.descriptor, indent=4),
file=output
)
exit(dp.valid) | def function[init, parameter[paths, output]]:
constant[Init data package from list of files.
It will also infer tabular data's schemas from their contents.
]
variable[dp] assign[=] call[name[goodtables].init_datapackage, parameter[name[paths]]]
call[name[click].secho, parameter[call[name[json_module].dumps, parameter[name[dp].descriptor]]]]
call[name[exit], parameter[name[dp].valid]] | keyword[def] identifier[init] ( identifier[paths] , identifier[output] ,** identifier[kwargs] ):
literal[string]
identifier[dp] = identifier[goodtables] . identifier[init_datapackage] ( identifier[paths] )
identifier[click] . identifier[secho] (
identifier[json_module] . identifier[dumps] ( identifier[dp] . identifier[descriptor] , identifier[indent] = literal[int] ),
identifier[file] = identifier[output]
)
identifier[exit] ( identifier[dp] . identifier[valid] ) | def init(paths, output, **kwargs):
"""Init data package from list of files.
It will also infer tabular data's schemas from their contents.
"""
dp = goodtables.init_datapackage(paths)
click.secho(json_module.dumps(dp.descriptor, indent=4), file=output)
exit(dp.valid) |
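# Hedged usage sketch; assumes the command is registered under a goodtables
# click group, so the exact CLI name may differ:
#
#   $ goodtables init data/*.csv > datapackage.json
#
# The same inference is reachable without click, via the call the command wraps:
import goodtables
dp = goodtables.init_datapackage(["data/example.csv"]) # hypothetical input path
print(dp.valid)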
def multi_plot_time(DataArray, SubSampleN=1, units='s', xlim=None, ylim=None, LabelArray=[], show_fig=True):
"""
plot the time trace for multiple data sets on the same axes.
Parameters
----------
DataArray : array-like
        array of DataObject instances for which to plot the time traces
SubSampleN : int, optional
Number of intervals between points to remove (to sub-sample data so
that you effectively have lower sample rate to make plotting easier
        and quicker).
xlim : array-like, optional
2 element array specifying the lower and upper x limit for which to
plot the time signal
LabelArray : array-like, optional
array of labels for each data-set to be plotted
show_fig : bool, optional
If True runs plt.show() before returning figure
if False it just returns the figure object.
(the default is True, it shows the figure)
Returns
-------
fig : matplotlib.figure.Figure object
The figure object created
ax : matplotlib.axes.Axes object
The axes object created
"""
unit_prefix = units[:-1] # removed the last char
if LabelArray == []:
LabelArray = ["DataSet {}".format(i)
for i in _np.arange(0, len(DataArray), 1)]
fig = _plt.figure(figsize=properties['default_fig_size'])
ax = fig.add_subplot(111)
for i, data in enumerate(DataArray):
ax.plot(unit_conversion(data.time.get_array()[::SubSampleN], unit_prefix), data.voltage[::SubSampleN],
alpha=0.8, label=LabelArray[i])
ax.set_xlabel("time (s)")
if xlim != None:
ax.set_xlim(xlim)
if ylim != None:
ax.set_ylim(ylim)
ax.grid(which="major")
legend = ax.legend(loc="best", frameon = 1)
frame = legend.get_frame()
frame.set_facecolor('white')
frame.set_edgecolor('white')
ax.set_ylabel("voltage (V)")
if show_fig == True:
_plt.show()
return fig, ax | def function[multi_plot_time, parameter[DataArray, SubSampleN, units, xlim, ylim, LabelArray, show_fig]]:
constant[
plot the time trace for multiple data sets on the same axes.
Parameters
----------
DataArray : array-like
        array of DataObject instances for which to plot the time traces
SubSampleN : int, optional
Number of intervals between points to remove (to sub-sample data so
that you effectively have lower sample rate to make plotting easier
        and quicker).
xlim : array-like, optional
2 element array specifying the lower and upper x limit for which to
plot the time signal
LabelArray : array-like, optional
array of labels for each data-set to be plotted
show_fig : bool, optional
If True runs plt.show() before returning figure
if False it just returns the figure object.
(the default is True, it shows the figure)
Returns
-------
fig : matplotlib.figure.Figure object
The figure object created
ax : matplotlib.axes.Axes object
The axes object created
]
variable[unit_prefix] assign[=] call[name[units]][<ast.Slice object at 0x7da1b288ce80>]
if compare[name[LabelArray] equal[==] list[[]]] begin[:]
variable[LabelArray] assign[=] <ast.ListComp object at 0x7da1b288d0c0>
variable[fig] assign[=] call[name[_plt].figure, parameter[]]
variable[ax] assign[=] call[name[fig].add_subplot, parameter[constant[111]]]
for taget[tuple[[<ast.Name object at 0x7da1b288d630>, <ast.Name object at 0x7da1b288d600>]]] in starred[call[name[enumerate], parameter[name[DataArray]]]] begin[:]
call[name[ax].plot, parameter[call[name[unit_conversion], parameter[call[call[name[data].time.get_array, parameter[]]][<ast.Slice object at 0x7da1b288d330>], name[unit_prefix]]], call[name[data].voltage][<ast.Slice object at 0x7da1b288d210>]]]
call[name[ax].set_xlabel, parameter[constant[time (s)]]]
if compare[name[xlim] not_equal[!=] constant[None]] begin[:]
call[name[ax].set_xlim, parameter[name[xlim]]]
if compare[name[ylim] not_equal[!=] constant[None]] begin[:]
call[name[ax].set_ylim, parameter[name[ylim]]]
call[name[ax].grid, parameter[]]
variable[legend] assign[=] call[name[ax].legend, parameter[]]
variable[frame] assign[=] call[name[legend].get_frame, parameter[]]
call[name[frame].set_facecolor, parameter[constant[white]]]
call[name[frame].set_edgecolor, parameter[constant[white]]]
call[name[ax].set_ylabel, parameter[constant[voltage (V)]]]
if compare[name[show_fig] equal[==] constant[True]] begin[:]
call[name[_plt].show, parameter[]]
return[tuple[[<ast.Name object at 0x7da1b2715de0>, <ast.Name object at 0x7da1b2715e40>]]] | keyword[def] identifier[multi_plot_time] ( identifier[DataArray] , identifier[SubSampleN] = literal[int] , identifier[units] = literal[string] , identifier[xlim] = keyword[None] , identifier[ylim] = keyword[None] , identifier[LabelArray] =[], identifier[show_fig] = keyword[True] ):
literal[string]
identifier[unit_prefix] = identifier[units] [:- literal[int] ]
keyword[if] identifier[LabelArray] ==[]:
identifier[LabelArray] =[ literal[string] . identifier[format] ( identifier[i] )
keyword[for] identifier[i] keyword[in] identifier[_np] . identifier[arange] ( literal[int] , identifier[len] ( identifier[DataArray] ), literal[int] )]
identifier[fig] = identifier[_plt] . identifier[figure] ( identifier[figsize] = identifier[properties] [ literal[string] ])
identifier[ax] = identifier[fig] . identifier[add_subplot] ( literal[int] )
keyword[for] identifier[i] , identifier[data] keyword[in] identifier[enumerate] ( identifier[DataArray] ):
identifier[ax] . identifier[plot] ( identifier[unit_conversion] ( identifier[data] . identifier[time] . identifier[get_array] ()[:: identifier[SubSampleN] ], identifier[unit_prefix] ), identifier[data] . identifier[voltage] [:: identifier[SubSampleN] ],
identifier[alpha] = literal[int] , identifier[label] = identifier[LabelArray] [ identifier[i] ])
identifier[ax] . identifier[set_xlabel] ( literal[string] )
keyword[if] identifier[xlim] != keyword[None] :
identifier[ax] . identifier[set_xlim] ( identifier[xlim] )
keyword[if] identifier[ylim] != keyword[None] :
identifier[ax] . identifier[set_ylim] ( identifier[ylim] )
identifier[ax] . identifier[grid] ( identifier[which] = literal[string] )
identifier[legend] = identifier[ax] . identifier[legend] ( identifier[loc] = literal[string] , identifier[frameon] = literal[int] )
identifier[frame] = identifier[legend] . identifier[get_frame] ()
identifier[frame] . identifier[set_facecolor] ( literal[string] )
identifier[frame] . identifier[set_edgecolor] ( literal[string] )
identifier[ax] . identifier[set_ylabel] ( literal[string] )
keyword[if] identifier[show_fig] == keyword[True] :
identifier[_plt] . identifier[show] ()
keyword[return] identifier[fig] , identifier[ax] | def multi_plot_time(DataArray, SubSampleN=1, units='s', xlim=None, ylim=None, LabelArray=[], show_fig=True):
"""
plot the time trace for multiple data sets on the same axes.
Parameters
----------
DataArray : array-like
        array of DataObject instances for which to plot the time traces
SubSampleN : int, optional
Number of intervals between points to remove (to sub-sample data so
that you effectively have lower sample rate to make plotting easier
        and quicker).
xlim : array-like, optional
2 element array specifying the lower and upper x limit for which to
plot the time signal
LabelArray : array-like, optional
array of labels for each data-set to be plotted
show_fig : bool, optional
If True runs plt.show() before returning figure
if False it just returns the figure object.
(the default is True, it shows the figure)
Returns
-------
fig : matplotlib.figure.Figure object
The figure object created
ax : matplotlib.axes.Axes object
The axes object created
"""
unit_prefix = units[:-1] # removed the last char
if LabelArray == []:
LabelArray = ['DataSet {}'.format(i) for i in _np.arange(0, len(DataArray), 1)] # depends on [control=['if'], data=['LabelArray']]
fig = _plt.figure(figsize=properties['default_fig_size'])
ax = fig.add_subplot(111)
for (i, data) in enumerate(DataArray):
ax.plot(unit_conversion(data.time.get_array()[::SubSampleN], unit_prefix), data.voltage[::SubSampleN], alpha=0.8, label=LabelArray[i]) # depends on [control=['for'], data=[]]
ax.set_xlabel('time (s)')
if xlim != None:
ax.set_xlim(xlim) # depends on [control=['if'], data=['xlim']]
if ylim != None:
ax.set_ylim(ylim) # depends on [control=['if'], data=['ylim']]
ax.grid(which='major')
legend = ax.legend(loc='best', frameon=1)
frame = legend.get_frame()
frame.set_facecolor('white')
frame.set_edgecolor('white')
ax.set_ylabel('voltage (V)')
if show_fig == True:
_plt.show() # depends on [control=['if'], data=[]]
return (fig, ax) |
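# Self-contained sketch of the same plotting pattern with plain numpy arrays
# standing in for DataObject instances, whose construction is library-specific.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 1e-3, 10000)                        # seconds
traces = [np.sin(2 * np.pi * 5e3 * t), np.cos(2 * np.pi * 5e3 * t)]
fig, ax = plt.subplots()
for i, v in enumerate(traces):
    ax.plot(t[::10] * 1e3, v[::10], alpha=0.8, label="DataSet %d" % i)  # sub-sample and convert s -> ms
ax.set_xlabel("time (ms)")
ax.set_ylabel("voltage (V)")
ax.legend(loc="best")
fig.savefig("time-traces.png")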
def visit_set(self, node):
"""return an astroid.Set node as string"""
return "{%s}" % ", ".join(child.accept(self) for child in node.elts) | def function[visit_set, parameter[self, node]]:
constant[return an astroid.Set node as string]
return[binary_operation[constant[{%s}] <ast.Mod object at 0x7da2590d6920> call[constant[, ].join, parameter[<ast.GeneratorExp object at 0x7da1b1e77ac0>]]]] | keyword[def] identifier[visit_set] ( identifier[self] , identifier[node] ):
literal[string]
keyword[return] literal[string] % literal[string] . identifier[join] ( identifier[child] . identifier[accept] ( identifier[self] ) keyword[for] identifier[child] keyword[in] identifier[node] . identifier[elts] ) | def visit_set(self, node):
"""return an astroid.Set node as string"""
return '{%s}' % ', '.join((child.accept(self) for child in node.elts)) |
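# The formatting step of visit_set in isolation: given already-rendered
# children, the visitor just joins them inside braces.
children = ["1", "2", "'a'"]
print("{%s}" % ", ".join(children))                    # {1, 2, 'a'}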
def pad_length(s):
"""
Appends characters to the end of the string to increase the string length per
IBM Globalization Design Guideline A3: UI Expansion.
https://www-01.ibm.com/software/globalization/guidelines/a3.html
:param s: String to pad.
:returns: Padded string.
"""
padding_chars = [
u'\ufe4e', # ﹎: CENTRELINE LOW LINE
u'\u040d', # Ѝ: CYRILLIC CAPITAL LETTER I WITH GRAVE
u'\u05d0', # א: HEBREW LETTER ALEF
u'\u01c6', # dž: LATIN SMALL LETTER DZ WITH CARON
u'\u1f8f', # ᾏ: GREEK CAPITAL LETTER ALPHA WITH DASIA AND PERISPOMENI AND PROSGEGRAMMENI
u'\u2167', # Ⅷ: ROMAN NUMERAL EIGHT
u'\u3234', # ㈴: PARENTHESIZED IDEOGRAPH NAME
u'\u32f9', # ㋹: CIRCLED KATAKANA RE
u'\ud4db', # 퓛: HANGUL SYLLABLE PWILH
u'\ufe8f', # ﺏ: ARABIC LETTER BEH ISOLATED FORM
u'\U0001D7D8', # 𝟘: MATHEMATICAL DOUBLE-STRUCK DIGIT ZERO
u'\U0001F6A6', # 🚦: VERTICAL TRAFFIC LIGHT
]
padding_generator = itertools.cycle(padding_chars)
target_lengths = {
six.moves.range(1, 11): 3,
six.moves.range(11, 21): 2,
six.moves.range(21, 31): 1.8,
six.moves.range(31, 51): 1.6,
six.moves.range(51, 71): 1.4,
}
if len(s) > 70:
target_length = int(math.ceil(len(s) * 1.3))
else:
for r, v in target_lengths.items():
if len(s) in r:
target_length = int(math.ceil(len(s) * v))
diff = target_length - len(s)
pad = u"".join([next(padding_generator) for _ in range(diff)])
return s + pad | def function[pad_length, parameter[s]]:
constant[
Appends characters to the end of the string to increase the string length per
IBM Globalization Design Guideline A3: UI Expansion.
https://www-01.ibm.com/software/globalization/guidelines/a3.html
:param s: String to pad.
:returns: Padded string.
]
variable[padding_chars] assign[=] list[[<ast.Constant object at 0x7da2044c00a0>, <ast.Constant object at 0x7da2044c1150>, <ast.Constant object at 0x7da2044c39a0>, <ast.Constant object at 0x7da2044c3b80>, <ast.Constant object at 0x7da2044c26b0>, <ast.Constant object at 0x7da2044c1420>, <ast.Constant object at 0x7da2044c2ce0>, <ast.Constant object at 0x7da2044c1270>, <ast.Constant object at 0x7da2044c1f30>, <ast.Constant object at 0x7da2044c3a00>, <ast.Constant object at 0x7da2044c0a90>, <ast.Constant object at 0x7da2044c3250>]]
variable[padding_generator] assign[=] call[name[itertools].cycle, parameter[name[padding_chars]]]
variable[target_lengths] assign[=] dictionary[[<ast.Call object at 0x7da2044c0a60>, <ast.Call object at 0x7da2044c22c0>, <ast.Call object at 0x7da2044c3880>, <ast.Call object at 0x7da2044c3c70>, <ast.Call object at 0x7da2044c03a0>], [<ast.Constant object at 0x7da2044c1e10>, <ast.Constant object at 0x7da2044c3700>, <ast.Constant object at 0x7da2044c10f0>, <ast.Constant object at 0x7da2044c0a30>, <ast.Constant object at 0x7da2044c32b0>]]
if compare[call[name[len], parameter[name[s]]] greater[>] constant[70]] begin[:]
variable[target_length] assign[=] call[name[int], parameter[call[name[math].ceil, parameter[binary_operation[call[name[len], parameter[name[s]]] * constant[1.3]]]]]]
variable[diff] assign[=] binary_operation[name[target_length] - call[name[len], parameter[name[s]]]]
variable[pad] assign[=] call[constant[].join, parameter[<ast.ListComp object at 0x7da2044c3a30>]]
return[binary_operation[name[s] + name[pad]]] | keyword[def] identifier[pad_length] ( identifier[s] ):
literal[string]
identifier[padding_chars] =[
literal[string] ,
literal[string] ,
literal[string] ,
literal[string] ,
literal[string] ,
literal[string] ,
literal[string] ,
literal[string] ,
literal[string] ,
literal[string] ,
literal[string] ,
literal[string] ,
]
identifier[padding_generator] = identifier[itertools] . identifier[cycle] ( identifier[padding_chars] )
identifier[target_lengths] ={
identifier[six] . identifier[moves] . identifier[range] ( literal[int] , literal[int] ): literal[int] ,
identifier[six] . identifier[moves] . identifier[range] ( literal[int] , literal[int] ): literal[int] ,
identifier[six] . identifier[moves] . identifier[range] ( literal[int] , literal[int] ): literal[int] ,
identifier[six] . identifier[moves] . identifier[range] ( literal[int] , literal[int] ): literal[int] ,
identifier[six] . identifier[moves] . identifier[range] ( literal[int] , literal[int] ): literal[int] ,
}
keyword[if] identifier[len] ( identifier[s] )> literal[int] :
identifier[target_length] = identifier[int] ( identifier[math] . identifier[ceil] ( identifier[len] ( identifier[s] )* literal[int] ))
keyword[else] :
keyword[for] identifier[r] , identifier[v] keyword[in] identifier[target_lengths] . identifier[items] ():
keyword[if] identifier[len] ( identifier[s] ) keyword[in] identifier[r] :
identifier[target_length] = identifier[int] ( identifier[math] . identifier[ceil] ( identifier[len] ( identifier[s] )* identifier[v] ))
identifier[diff] = identifier[target_length] - identifier[len] ( identifier[s] )
identifier[pad] = literal[string] . identifier[join] ([ identifier[next] ( identifier[padding_generator] ) keyword[for] identifier[_] keyword[in] identifier[range] ( identifier[diff] )])
keyword[return] identifier[s] + identifier[pad] | def pad_length(s):
"""
Appends characters to the end of the string to increase the string length per
IBM Globalization Design Guideline A3: UI Expansion.
https://www-01.ibm.com/software/globalization/guidelines/a3.html
:param s: String to pad.
:returns: Padded string.
""" # ﹎: CENTRELINE LOW LINE
# Ѝ: CYRILLIC CAPITAL LETTER I WITH GRAVE
# א: HEBREW LETTER ALEF
# dž: LATIN SMALL LETTER DZ WITH CARON
# ᾏ: GREEK CAPITAL LETTER ALPHA WITH DASIA AND PERISPOMENI AND PROSGEGRAMMENI
# Ⅷ: ROMAN NUMERAL EIGHT
# ㈴: PARENTHESIZED IDEOGRAPH NAME
# ㋹: CIRCLED KATAKANA RE
# 퓛: HANGUL SYLLABLE PWILH
# ﺏ: ARABIC LETTER BEH ISOLATED FORM
# 𝟘: MATHEMATICAL DOUBLE-STRUCK DIGIT ZERO
# 🚦: VERTICAL TRAFFIC LIGHT
padding_chars = [u'﹎', u'Ѝ', u'א', u'dž', u'ᾏ', u'Ⅷ', u'㈴', u'㋹', u'퓛', u'ﺏ', u'𝟘', u'🚦']
padding_generator = itertools.cycle(padding_chars)
target_lengths = {six.moves.range(1, 11): 3, six.moves.range(11, 21): 2, six.moves.range(21, 31): 1.8, six.moves.range(31, 51): 1.6, six.moves.range(51, 71): 1.4}
if len(s) > 70:
target_length = int(math.ceil(len(s) * 1.3)) # depends on [control=['if'], data=[]]
else:
for (r, v) in target_lengths.items():
if len(s) in r:
target_length = int(math.ceil(len(s) * v)) # depends on [control=['if'], data=[]] # depends on [control=['for'], data=[]]
diff = target_length - len(s)
pad = u''.join([next(padding_generator) for _ in range(diff)])
return s + pad |
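# Expansion-factor check in isolation: an 8-character string falls in the
# 1-10 bucket, so it is padded to 3x its length per the table above. Note
# that an empty string matches no bucket, so pad_length("") as written would
# hit an unbound target_length.
import math
s = "Settings"                                         # len 8 -> factor 3
target_length = int(math.ceil(len(s) * 3))
print(target_length - len(s))                          # 16 padding characters appended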
def promote_owner(self, stream_id, user_id):
''' promote user to owner in stream '''
req_hook = 'pod/v1/room/' + stream_id + '/membership/promoteOwner'
req_args = '{ "id": %s }' % user_id
status_code, response = self.__rest__.POST_query(req_hook, req_args)
self.logger.debug('%s: %s' % (status_code, response))
return status_code, response | def function[promote_owner, parameter[self, stream_id, user_id]]:
constant[ promote user to owner in stream ]
variable[req_hook] assign[=] binary_operation[binary_operation[constant[pod/v1/room/] + name[stream_id]] + constant[/membership/promoteOwner]]
variable[req_args] assign[=] binary_operation[constant[{ "id": %s }] <ast.Mod object at 0x7da2590d6920> name[user_id]]
<ast.Tuple object at 0x7da18bcc8130> assign[=] call[name[self].__rest__.POST_query, parameter[name[req_hook], name[req_args]]]
call[name[self].logger.debug, parameter[binary_operation[constant[%s: %s] <ast.Mod object at 0x7da2590d6920> tuple[[<ast.Name object at 0x7da18bcc8310>, <ast.Name object at 0x7da18bcc9c60>]]]]]
return[tuple[[<ast.Name object at 0x7da18bcca7d0>, <ast.Name object at 0x7da18bccafe0>]]] | keyword[def] identifier[promote_owner] ( identifier[self] , identifier[stream_id] , identifier[user_id] ):
literal[string]
identifier[req_hook] = literal[string] + identifier[stream_id] + literal[string]
identifier[req_args] = literal[string] % identifier[user_id]
identifier[status_code] , identifier[response] = identifier[self] . identifier[__rest__] . identifier[POST_query] ( identifier[req_hook] , identifier[req_args] )
identifier[self] . identifier[logger] . identifier[debug] ( literal[string] %( identifier[status_code] , identifier[response] ))
keyword[return] identifier[status_code] , identifier[response] | def promote_owner(self, stream_id, user_id):
""" promote user to owner in stream """
req_hook = 'pod/v1/room/' + stream_id + '/membership/promoteOwner'
req_args = '{ "id": %s }' % user_id
(status_code, response) = self.__rest__.POST_query(req_hook, req_args)
self.logger.debug('%s: %s' % (status_code, response))
return (status_code, response) |
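# The payload construction in isolation; the raw %s substitution assumes a
# numeric user_id -- a string id would need its own quoting to stay valid JSON.
user_id = 12345                                        # hypothetical Symphony user id
req_args = '{ "id": %s }' % user_id
print(req_args)                                        # { "id": 12345 }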
def write_patch_file(self, patch_file, lines_to_write):
"""Write lines_to_write to a the file called patch_file
:param patch_file: file name of the patch to generate
:param lines_to_write: lines to write to the file - they should be \n terminated
:type lines_to_write: list[str]
:return: None
"""
with open(patch_file, 'w') as f:
f.writelines(lines_to_write) | def function[write_patch_file, parameter[self, patch_file, lines_to_write]]:
    constant[Write lines_to_write to the file called patch_file
:param patch_file: file name of the patch to generate
:param lines_to_write: lines to write to the file - they should be
terminated
:type lines_to_write: list[str]
:return: None
]
with call[name[open], parameter[name[patch_file], constant[w]]] begin[:]
call[name[f].writelines, parameter[name[lines_to_write]]] | keyword[def] identifier[write_patch_file] ( identifier[self] , identifier[patch_file] , identifier[lines_to_write] ):
literal[string]
keyword[with] identifier[open] ( identifier[patch_file] , literal[string] ) keyword[as] identifier[f] :
identifier[f] . identifier[writelines] ( identifier[lines_to_write] ) | def write_patch_file(self, patch_file, lines_to_write):
"""Write lines_to_write to a the file called patch_file
:param patch_file: file name of the patch to generate
:param lines_to_write: lines to write to the file - they should be
terminated
:type lines_to_write: list[str]
:return: None
"""
with open(patch_file, 'w') as f:
f.writelines(lines_to_write) # depends on [control=['with'], data=['f']] |
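# Minimal usage sketch with a hypothetical patch body; newline-terminated
# lines are required because writelines() adds no separators of its own.
lines = ["--- a/foo.txt\n", "+++ b/foo.txt\n", "@@ -1 +1 @@\n", "-old\n", "+new\n"]
with open("example.patch", "w") as f:                  # same write path as the method above
    f.writelines(lines)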
def devicecore(self):
"""Property providing access to the :class:`.DeviceCoreAPI`"""
if self._devicecore_api is None:
self._devicecore_api = self.get_devicecore_api()
return self._devicecore_api | def function[devicecore, parameter[self]]:
constant[Property providing access to the :class:`.DeviceCoreAPI`]
if compare[name[self]._devicecore_api is constant[None]] begin[:]
name[self]._devicecore_api assign[=] call[name[self].get_devicecore_api, parameter[]]
return[name[self]._devicecore_api] | keyword[def] identifier[devicecore] ( identifier[self] ):
literal[string]
keyword[if] identifier[self] . identifier[_devicecore_api] keyword[is] keyword[None] :
identifier[self] . identifier[_devicecore_api] = identifier[self] . identifier[get_devicecore_api] ()
keyword[return] identifier[self] . identifier[_devicecore_api] | def devicecore(self):
"""Property providing access to the :class:`.DeviceCoreAPI`"""
if self._devicecore_api is None:
self._devicecore_api = self.get_devicecore_api() # depends on [control=['if'], data=[]]
return self._devicecore_api |
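# Self-contained sketch of the same lazy-initialization pattern; FakeAPI is a
# stand-in for the real DeviceCoreAPI factory.
class FakeAPI(object):
    pass

class LazyClient(object):
    def __init__(self):
        self._devicecore_api = None
    @property
    def devicecore(self):
        if self._devicecore_api is None:               # built on first access, then cached
            self._devicecore_api = FakeAPI()
        return self._devicecore_api

c = LazyClient()
assert c.devicecore is c.devicecore                    # repeat access returns the cached instance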
def stream(self, muted=values.unset, hold=values.unset, coaching=values.unset,
limit=None, page_size=None):
"""
Streams ParticipantInstance records from the API as a generator stream.
This operation lazily loads records as efficiently as possible until the limit
is reached.
The results are returned as a generator, so this operation is memory efficient.
:param bool muted: Whether to return only participants that are muted
:param bool hold: Whether to return only participants that are on hold
:param bool coaching: Whether to return only participants who are coaching another call
:param int limit: Upper limit for the number of records to return. stream()
guarantees to never return more than limit. Default is no limit
:param int page_size: Number of records to fetch per request, when not set will use
the default value of 50 records. If no page_size is defined
but a limit is defined, stream() will attempt to read the
limit with the most efficient page size, i.e. min(limit, 1000)
:returns: Generator that will yield up to limit results
:rtype: list[twilio.rest.api.v2010.account.conference.participant.ParticipantInstance]
"""
limits = self._version.read_limits(limit, page_size)
page = self.page(muted=muted, hold=hold, coaching=coaching, page_size=limits['page_size'], )
return self._version.stream(page, limits['limit'], limits['page_limit']) | def function[stream, parameter[self, muted, hold, coaching, limit, page_size]]:
constant[
Streams ParticipantInstance records from the API as a generator stream.
This operation lazily loads records as efficiently as possible until the limit
is reached.
The results are returned as a generator, so this operation is memory efficient.
:param bool muted: Whether to return only participants that are muted
:param bool hold: Whether to return only participants that are on hold
:param bool coaching: Whether to return only participants who are coaching another call
:param int limit: Upper limit for the number of records to return. stream()
guarantees to never return more than limit. Default is no limit
:param int page_size: Number of records to fetch per request, when not set will use
the default value of 50 records. If no page_size is defined
but a limit is defined, stream() will attempt to read the
limit with the most efficient page size, i.e. min(limit, 1000)
:returns: Generator that will yield up to limit results
:rtype: list[twilio.rest.api.v2010.account.conference.participant.ParticipantInstance]
]
variable[limits] assign[=] call[name[self]._version.read_limits, parameter[name[limit], name[page_size]]]
variable[page] assign[=] call[name[self].page, parameter[]]
return[call[name[self]._version.stream, parameter[name[page], call[name[limits]][constant[limit]], call[name[limits]][constant[page_limit]]]]] | keyword[def] identifier[stream] ( identifier[self] , identifier[muted] = identifier[values] . identifier[unset] , identifier[hold] = identifier[values] . identifier[unset] , identifier[coaching] = identifier[values] . identifier[unset] ,
identifier[limit] = keyword[None] , identifier[page_size] = keyword[None] ):
literal[string]
identifier[limits] = identifier[self] . identifier[_version] . identifier[read_limits] ( identifier[limit] , identifier[page_size] )
identifier[page] = identifier[self] . identifier[page] ( identifier[muted] = identifier[muted] , identifier[hold] = identifier[hold] , identifier[coaching] = identifier[coaching] , identifier[page_size] = identifier[limits] [ literal[string] ],)
keyword[return] identifier[self] . identifier[_version] . identifier[stream] ( identifier[page] , identifier[limits] [ literal[string] ], identifier[limits] [ literal[string] ]) | def stream(self, muted=values.unset, hold=values.unset, coaching=values.unset, limit=None, page_size=None):
"""
Streams ParticipantInstance records from the API as a generator stream.
This operation lazily loads records as efficiently as possible until the limit
is reached.
The results are returned as a generator, so this operation is memory efficient.
:param bool muted: Whether to return only participants that are muted
:param bool hold: Whether to return only participants that are on hold
:param bool coaching: Whether to return only participants who are coaching another call
:param int limit: Upper limit for the number of records to return. stream()
guarantees to never return more than limit. Default is no limit
:param int page_size: Number of records to fetch per request, when not set will use
the default value of 50 records. If no page_size is defined
but a limit is defined, stream() will attempt to read the
limit with the most efficient page size, i.e. min(limit, 1000)
:returns: Generator that will yield up to limit results
:rtype: list[twilio.rest.api.v2010.account.conference.participant.ParticipantInstance]
"""
limits = self._version.read_limits(limit, page_size)
page = self.page(muted=muted, hold=hold, coaching=coaching, page_size=limits['page_size'])
return self._version.stream(page, limits['limit'], limits['page_limit']) |
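# Hedged usage sketch for the generator above, with placeholder credentials
# and SIDs; assumes the standard twilio-python client.
from twilio.rest import Client

client = Client("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "your_auth_token")
conference = client.conferences("CFXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX")
for participant in conference.participants.stream(muted=True, limit=20):
    print(participant.call_sid)                        # lazily paged, at most 20 records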