Python - Get waveform/amplitude of mp3
Question: I was hoping I could find a way to get the amplitude data from an mp3 in
Python. Similar to Audacity, but I do not want a visual; a simple array of
values will do. I want my code to react to the sound at certain points when it
gets louder. I am using pygame to play the audio and was trying to convert it
to a sndarray, but it was only giving me the first 0.00018 seconds. Is there any
way I can get the whole mp3? It does not have to be real time, as I would like
to be able to react ahead of time anyway and will keep track of my position
using pygame.
I am building [this cloud](http://www.makeuseof.com/tag/build-cloud-lamp-sound-reactive-lightning/)
using a Raspberry Pi instead, for other features. I already have the lighting
work done and need it to react for the lightning; lightshowpi is not an option,
sadly. Any help would be greatly appreciated.
Edit: This is what I have so far, thanks to coder-don. It works, but it hangs
on the while loop and I do not know why. The mp3 I am using is rather long;
could that be the issue?
import os, sys
from gi.repository import Gst, GObject
Gst.init()
GObject.threads_init()
def get_peaks(filename):
global do_run
pipeline_txt = (
'filesrc location="%s" ! decodebin ! audioconvert ! '
'audio/x-raw,channels=1,rate=22050,endianness=1234,'
'width=32,depth=32,signed=(bool)TRUE !'
'level name=level interval=1000000000 !'
'fakesink' % filename)
pipeline = Gst.parse_launch(pipeline_txt)
level = pipeline.get_by_name('level')
bus = pipeline.get_bus()
bus.add_signal_watch()
peaks = []
do_run = True
def show_peak(bus, message):
global do_run
if message.type == Gst.MESSAGE_EOS:
pipeline.set_state(Gst.State.NULL)
do_run = False
return
# filter only on level messages
if message.src is not level or \
not message.structure.has_key('peak'):
return
peaks.append(message.structure['peak'][0])
# connect the callback
bus.connect('message', show_peak)
# run the pipeline until we got eos
pipeline.set_state(Gst.State.PLAYING)
ctx = GObject.MainContext()
while ctx and do_run:
ctx.iteration()
return peaks
def normalize(peaks):
_min = min(peaks)
print(_min)
_max = max(peaks)
print(_max)
d = _max - _min
return [(x - _min) / d for x in peaks]
if __name__ == '__main__':
filename = os.path.realpath(sys.argv[1])
peaks = get_peaks(filename)
print('Sample is %d seconds' % len(peaks))
print('Minimum is', min(peaks))
print('Maximum is', max(peaks))
peaks = normalize(peaks)
print(peaks)
Answer: Using [pydub](https://github.com/jiaaro/pydub), you can get `loudness` and
`highest amplitude` of an `mp3` file very easily. You can then use one of
these parameters to make code/light react to that.
From the [pydub](https://github.com/jiaaro/pydub/blob/master/API.markdown)
website,
**AudioSegment(…).max**
The highest amplitude of any sample in the AudioSegment. Useful for things
like normalization (which is provided in pydub.effects.normalize).
from pydub import AudioSegment
sound = AudioSegment.from_file("/path/to/sound.mp3", format="mp3")
peak_amplitude = sound.max
**AudioSegment(…).dBFS**
Returns the loudness of the AudioSegment in dBFS (db relative to the maximum
possible loudness). A Square wave at maximum amplitude will be roughly 0 dBFS
(maximum loudness), whereas a Sine Wave at maximum amplitude will be roughly
-3 dBFS.
from pydub import AudioSegment
sound = AudioSegment.from_file("/path/to/sound.mp3", format="mp3")
loudness = sound.dBFS
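To react over time rather than to the clip as a whole, one option (a minimal sketch, assuming pydub and its ffmpeg dependency are installed, and using a hypothetical /path/to/sound.mp3) is to slice the file into short chunks and compare each chunk's loudness against a threshold:
    from pydub import AudioSegment
    from pydub.utils import make_chunks

    sound = AudioSegment.from_file("/path/to/sound.mp3", format="mp3")
    chunk_ms = 50  # resolution of the reaction timeline
    loudness = [chunk.dBFS for chunk in make_chunks(sound, chunk_ms)]

    threshold = sound.dBFS + 6  # e.g. react when 6 dB above the loudness of the whole clip
    for i, level in enumerate(loudness):
        if level > threshold:
            print("Trigger lightning at %.2f s" % (i * chunk_ms / 1000.0))
Since the reaction does not have to be real time, the `loudness` list can be computed once up front and then indexed by playback position.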
|
Use python file/function across multiple local projects
Question: I have multiple python projects and each of them has a utility file, all of
them with the same functions.
How can I set up some sort of local python library which I can import to all
my local projects? I know rodeo has some functionality that you can use to
designate some functions/files to be available across multiple python
projects. Is there a way to do that outside of rodeo?
I do not want to create a library that is available through pip (since that
would mean exposing it to everyone)
Answer: You can create a local python library by creating a python package. It is as
easy as putting a file named `__init__.py` into your utils lib.
You can read more about the concept of [creating a python
package](http://stackoverflow.com/questions/13215386/creating-python-packages).
And here [about how to write a good
`__init__.py`](http://stackoverflow.com/questions/1944569/how-do-i-write-good-
correct-package-init-py-files)
To finish, I just need to mention the PYTHONPATH, and an almost similar
question [about importing from a parent
directory](http://stackoverflow.com/questions/8951255/import-script-from-a-parent-directory).
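As a minimal sketch of the PYTHONPATH route (the `~/shared/myutils` path and the `greet` function are hypothetical):
    # ~/shared/myutils/__init__.py
    def greet(name):
        return "Hello, %s" % name

    # in any project, before the import:
    import sys
    sys.path.append("/home/me/shared")   # or set PYTHONPATH=/home/me/shared

    from myutils import greet
    print(greet("world"))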
Hope it was helpful for you.
|
Rotate the image via opencv in my defined function with Python
Question: I would like to rotate an image in my defined function and save the result in
a parameter for further use in the main function.
The code is as below:
import cv2
def rotate(img1, img2): # rotate img1 and save it in img2
angle = 30 # rotated angle
h, w, c = img1.shape
m = cv2.getRotationMatrix2D((w/2, h/2), angle, 1)
img2 = cv2.warpAffine(img1, m, (w, h)) # rotate the img1 to img2
cv2.imwrite("rotate1.jpg", img2) # save the rotated image within the function, successfully!
img = cv2.imread("test.jpg")
img_out = None
rotate(img, img_out)
cv2.imwrite("rotate2.jpg", img_out) # save the rotated image in the main function, failed!
print("Finished!")
The result "img2" saved in function "rotate" is ok. But the one "img_out" from
the function parameter is failed to save.
What's the problem with it? How can I resolve it without using the global
variable? Thanks!
Answer: Rebinding a parameter to a new object inside a function is not visible to the
caller, so the assignment to `img2` never reaches the main program. You could also have a look
[here](http://stackoverflow.com/questions/575196/in-python-why-can-a-function-
modify-some-arguments-as-perceived-by-the-caller) for further reading.
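To illustrate the difference between rebinding a parameter and mutating the object it refers to, a minimal sketch (plain Python, no OpenCV needed):
    def rebind(x):
        x = [1, 2, 3]    # rebinds the local name only; the caller never sees this

    def mutate(x):
        x.append(99)     # mutates the object the caller also holds

    a = None
    rebind(a)
    print(a)             # still None

    b = []
    mutate(b)
    print(b)             # [99]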
What you can do is return an image as shown in the code below:
import cv2
def rotate(img1): # rotate img1 and return the rotated image
angle = 30 # rotated angle
h, w, c = img1.shape
m = cv2.getRotationMatrix2D((w/2, h/2), angle, 1)
img2 = cv2.warpAffine(img1, m, (w, h)) # rotate the img1 to img2
cv2.imwrite("rotate1.jpg", img2) # save the rotated image within the function, successfully!
return img2
img = cv2.imread("image.jpg")
img_out=rotate(img)
cv2.imwrite("rotate2.jpg", img_out) # save the rotated image in the main function, failed!
print("Finished!")
|
How can I improve my quick sort algorithm (Python)
Question: After reading about the quick sort algorithm, I decided to write my own
implementation before looking at any code. The code below is what I came up
with. Upon comparing my code with other implementations, I have observed that
rather than returning the sorted array from the quick sort function, other
implementations tend to take advantage of a list's mutability and simply run
the function on the unsorted array, which in turn sorts the array without
having to reference the function call. I am curious about the space/time
comparison between my code and the code from the book I am using, which I have
provided below. I am assuming that in terms of time the algorithms perform
rather similarly; maybe the concatenation operation I am performing has a
negative impact? In terms of space, since I am not modifying the input array
directly, I am creating/returning a new array, which is obviously inefficient,
and important because the main advantage of quick sort over merge sort is the
saved space. Overall I am just looking for some additional insight and any way
to improve my algorithm's efficiency.
My code:
from random import randint
def quick(arr):
if len(arr) == 1:
return arr
else:
pivot = arr[0]
R = len(arr)-1
L = 1
while L <= len(arr)-1 and R >= 1:
if R == L:
if arr[0] > arr[R]:
arr[0], arr[R] = arr[R], arr[0]
break
if arr[R] >= pivot:
R = R - 1
continue
if arr[L] <= pivot:
L = L + 1
continue
arr[L], arr[R] = arr[R], arr[L]
return quick(arr[:R]) + quick(arr[R:])
print quick([randint(0,1000) for i in range(1000)])
The book I am using, Problem Solving With Algorithms and Data Structures Using
Python by Brad Miller and David Ranum, provides this quick sort code:
def quickSort(alist):
quickSortHelper(alist,0,len(alist)-1)
def quickSortHelper(alist,first,last):
if first<last:
splitpoint = partition(alist,first,last)
quickSortHelper(alist,first,splitpoint-1)
quickSortHelper(alist,splitpoint+1,last)
def partition(alist,first,last):
pivotvalue = alist[first]
leftmark = first+1
rightmark = last
done = False
while not done:
while leftmark <= rightmark and alist[leftmark] <= pivotvalue:
leftmark = leftmark + 1
while alist[rightmark] >= pivotvalue and rightmark >= leftmark:
rightmark = rightmark -1
if rightmark < leftmark:
done = True
else:
temp = alist[leftmark]
alist[leftmark] = alist[rightmark]
alist[rightmark] = temp
temp = alist[first]
alist[first] = alist[rightmark]
alist[rightmark] = temp
return rightmark
# alist = [54,26,93,17,77,31,44,55,20]
# quickSort(alist)
# print(alist)
Answer: This is nice code.
Compared to a quicksort version that is done in place (using only one array),
yours may be a bit slower because of the copy/concatenation.
Quicksort's performance relies a lot on the choice of the pivot. By choosing the
first element, there are some cases where your code runs in quadratic time,
for example while sorting an already sorted array. The best-known
optimizations are:
* Choosing a better pivot, for example by applying Tukey's ninther (you avoid those worst cases almost certainly).
* Performing an insertion sort when the subarray is small enough (< 10 elements, for example).
Beyond that, there are a few variants of quicksort which run faster, like 3-way
quicksort using the Bentley-McIlroy scheme, or dual-pivot quicksort (used to sort
arrays of primitives in Java). The insertion-sort speedup is still applicable to
those.
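A minimal, hedged sketch of the first two optimizations, sorting in place (median-of-three is used here as a cheap stand-in for Tukey's ninther, and the cutoff of 10 is illustrative):
    from random import randint

    CUTOFF = 10   # below this size, insertion sort beats further recursion

    def quicksort(a, lo=0, hi=None):
        if hi is None:
            hi = len(a) - 1
        if hi - lo + 1 <= CUTOFF:
            insertion_sort(a, lo, hi)
            return
        p = partition(a, lo, hi)
        quicksort(a, lo, p - 1)
        quicksort(a, p + 1, hi)

    def partition(a, lo, hi):
        mid = (lo + hi) // 2
        # median-of-three pivot selection
        if a[mid] < a[lo]: a[lo], a[mid] = a[mid], a[lo]
        if a[hi] < a[lo]:  a[lo], a[hi] = a[hi], a[lo]
        if a[hi] < a[mid]: a[mid], a[hi] = a[hi], a[mid]
        a[mid], a[hi - 1] = a[hi - 1], a[mid]   # park the pivot just before the end
        pivot = a[hi - 1]
        i = lo
        for j in range(lo, hi - 1):
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi - 1] = a[hi - 1], a[i]       # move the pivot to its final place
        return i

    def insertion_sort(a, lo, hi):
        for i in range(lo + 1, hi + 1):
            key = a[i]
            j = i - 1
            while j >= lo and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key

    data = [randint(0, 1000) for _ in range(1000)]
    quicksort(data)
    print(sorted(data) == data)   # True
The partition here is the plain Lomuto scheme for brevity; swapping in a Bentley-McIlroy 3-way partition would additionally speed up inputs with many duplicate keys.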
|
Creating heavily nested python dictionaries in a clean programmatic way
Question: I am using a series of functions to populate a heavily nested dictionary. I am
wondering if there is a cleaner way to do this than the long chains of
assignments shown in the example below.
outputdict = {}
outputdict['x']={}
outputdict['x']['y']={}
outputdict['x']['y']['total_patients']=len(example_dict.keys())
outputdict['x']['y']['z']={}
for variable1 in variable1s:
outputdict['x']['y']['z'][str(variable1)]={}
outputdict['x']['y']['z'][str(variable1)]['total_patients']=function_1(example_dict, variable1).count()
for popn in ['total','male','female']:
outputdict['x']['y']['z'][str(variable1)][popn]={}
for age_bucket in np.linspace(40,60,5):
age_str = str(age_bucket)+'_'+str(age_bucket+5)
outputdict['x']['y']['z'][str(variable1)][popn][age_str]={}
outputdict['x']['y']['z'][str(variable1)][popn]["total"]={}
for res in restypes:
if popn == 'total':
codelist, ncodes = function_2(function_1(example_dict, variable1), res, age_bucket)
else:
codelist, ncodes = function_2_gender(function_1(example_dict, variable1), res, age_bucket, popn)
outputdict['x']['y']['z'][str(variable1)][popn][age_str][res]={}
outputdict['x']['y']['z'][str(variable1)][popn][age_str][res]['total_codes']=ncodes
outputdict['x']['y']['z'][str(variable1)][popn][age_str][res]['top_codes']=[]
for item in codelist:
disp = {"code": item[0][:2], "value":item[0][2], "count":item[1]}
outputdict['x']['y']['z'][str(variable1)][popn][age_str][res]['top_codes'].append(disp)
codelist, ncodes = list_top_codes(function_1(example_dict, variable1), res)
outputdict['x']['y']['z'][str(variable1)][popn]["total"][res]={}
outputdict['x']['y']['z'][str(variable1)][popn]["total"][res]['top_codes']=[]
for item in codelist:
disp = {"code": item[0][:2], "value":item[0][2], "count":item[1]}
outputdict['x']['y']['z'][str(variable1)][popn]["total"][res]['top_codes'].append(disp)
outputdict
Answer: You could use
[autovivification](https://en.wikipedia.org/wiki/Autovivification#Python) with
[`defaultdict`](https://docs.python.org/2/library/collections.html#collections.defaultdict).
This would allow you to skip the creation of the empty dicts, since they would be
created automatically whenever an undefined key is accessed:
from collections import defaultdict
dd = lambda: defaultdict(dd)
d = dd()
d['foo']['bar']['foobar'] = 1
So your code would look like the following:
outputdict = dd()
outputdict['x']['y']['total_patients']=len(example_dict.keys())
for variable1 in variable1s:
outputdict['x']['y']['z'][str(variable1)]['total_patients']=function_1(example_dict, variable1).count()
The other possible improvement would be storing the nested dictionary in a
variable so that you wouldn't have to type the full path everywhere:
for variable1 in variable1s:
nested = dd()
outputdict['x']['y']['z'][str(variable1)]=nested
nested['total_patients']=function_1(example_dict, variable1).count()
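One hedged caveat: if you later need plain nested dicts (for example to compare against a literal dict in a test), a small recursive converter such as this hypothetical helper works, reusing the `defaultdict` import from above:
    def to_plain_dict(d):
        # recursively turn the autovivified defaultdicts back into ordinary dicts
        if isinstance(d, defaultdict):
            return {k: to_plain_dict(v) for k, v in d.items()}
        return d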
|
ipdb is triggering ImportError
Question: ipdb is triggering an import error for me when I run my Django site locally.
I'm working on Python 2.7 and within a virtual environment.
`which ipdb` shows the path `(/usr/local/bin/ipdb)`, as does `which ipython`,
which surprised me since I thought it should show my venv path (but shouldn't
it work if it's global, anyway?). So I tried `pip install
--target=/path/to/venv ipdb` and now it shows up in `pip freeze` (which it
didn't before) but still gives me an import error.
`which pip` gives `/Users/myname/.virtualenvs/myenv/bin/pip/`
My path:
`/Users/myname/.virtualenvs/myenv/bin:/Users/myname/.venvburrito/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Users/myname/bin:/usr/local/bin`
Sys.path: `'/Users/myname/Dropbox/myenv',
'/Users/myname/.venvburrito/lib/python2.7/site-packages/pip-1.4.1-py2.7.egg',
'/Users/myname/.venvburrito/lib/python2.7/site-packages',
'/Users/myname/.venvburrito/lib/python2.7/site-
packages/setuptools-8.2-py2.7.egg',
'/Users/myname/.virtualenvs/myenv/lib/python27.zip',
'/Users/myname/.virtualenvs/myenv/lib/python2.7',
'/Users/myname/.virtualenvs/myenv/lib/python2.7/plat-darwin',
'/Users/myname/.virtualenvs/myenv/lib/python2.7/plat-mac',
'/Users/myname/.virtualenvs/myenv/lib/python2.7/plat-mac/lib-scriptpackages',
'/Users/myname/.virtualenvs/myenv/Extras/lib/python',
'/Users/myname/.virtualenvs/myenv/lib/python2.7/lib-tk',
'/Users/myname/.virtualenvs/myenv/lib/python2.7/lib-old',
'/Users/myname/.virtualenvs/myenv/lib/python2.7/lib-dynload',
'/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7',
'/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-
darwin',
'/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-
tk',
'/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-
mac',
'/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-
mac/lib-scriptpackages', '/Users/myname/.virtualenvs/myenv/lib/python2.7/site-
packages']`
If I run ipdb from the terminal, it works fine. I've tried restarting my
terminal.
Stacktrace:
Traceback (most recent call last):
File "/Users/myname/.virtualenvs/myenv/lib/python2.7/site-packages/django/core/handlers/base.py", line 149, in get_response
response = self.process_exception_by_middleware(e, request)
File "/Users/myname/.virtualenvs/myenv/lib/python2.7/site-packages/django/core/handlers/base.py", line 147, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Users/myname/.virtualenvs/myenv/lib/python2.7/site-packages/django/views/generic/base.py", line 68, in view
return self.dispatch(request, *args, **kwargs)
File "/Users/myname/.virtualenvs/myenv/lib/python2.7/site-packages/django/views/generic/base.py", line 88, in dispatch
return handler(request, *args, **kwargs)
File "/Users/myname/.virtualenvs/myenv/lib/python2.7/site-packages/django/views/generic/base.py", line 157, in get
context = self.get_context_data(**kwargs)
File "/Users/myname/Dropbox/blog/views.py", line 22, in get_context_data
import ipdb; ipdb.set_trace()
ImportError: No module named ipdb
Answer: I just set up a whole virtual env just to try this out because it must be a
simple fix. I managed to set up `ipdb` in my virtual env and I will write what
I did step by step.
$ virtualenv foo
$ cd foo
$ source ./bin/activate # activate venv
... at this point `which python` and `which pip` give me the right python
executable inside my virtual env. Then next:
(venv: foo)$ pip install ipython
At this point, `which ipython` gives me the right ipython executable inside my
virtual env. **It's important to make sure that it points to the right
executables**; if it shows the global executable instead of the right one,
re-activate your virtual env. It is crucial that ipython (and all your
executables) point to the right executables inside your virtualenv.
Then I'm gonna try importing ipdb:
(venv: foo)$ ipython
In [1]: import ipdb
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-1-2d6f026194dd> in <module>()
----> 1 import ipdb
ImportError: No module named 'ipdb'
Module not found, because it hasn't been installed yet. Let's do it:
(venv: foo)$ pip install ipdb
and try it again:
(venv: foo)$ ipython [ 16-05-24 22:28 ]
Python 3.5.1 (default, Jan 29 2016, 19:58:36)
Type "copyright", "credits" or "license" for more information.
IPython 4.2.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: import ipdb
In [2]:
It seems to work for me. I was using `zsh` and `python3`, but that shouldn't
matter. Your issue is most likely that `ipdb` is not being installed in the right
place, meaning pip is using the global executables instead of the ones from the
virtualenv.
From within my virtualenv you could see that ipdb is installed:
(venv: foo)$ find . -name ipdb
./lib/python3.5/site-packages/ipdb
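As one more hedged check (assuming the project's virtualenv really is `~/.virtualenvs/myenv`), you can confirm from inside the same environment that serves Django which interpreter and which ipdb are being used:
    # run via `python manage.py shell`
    import sys
    print(sys.executable)   # should point into ~/.virtualenvs/myenv/bin
    import ipdb             # an ImportError here means ipdb is missing for *this* interpreter
    print(ipdb.__file__)    # shows which site-packages it was loaded from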
I hope all this write up helps :)
|
How to pull from multiple lists to make a new directory
Question: I'm trying to pull data from two different lists in order to make a new
directory. But Python says that the second list `queue` is not defined when I
try to run it.
import csv
import time
import os, sys
from datetime import date
queuelist = ['ONE']
yearlist = ['2013']
year = str(date.today().year)
month = str(date.today().month)
for year in yearlist and queue in queuelist:
os.mkdirs('{0}\{1}'.format(queue,year))
Answer: You probably want either a nested loop ...
for year in yearlist:
for queue in queuelist:
# ... to do stuff for every possible year/queue combination
or you want to [`zip`](https://docs.python.org/2/library/functions.html#zip)
the two lists ...
for year, queue in zip(yearlist, queuelist):
# ... to do stuff with year-queue pairs of same index
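Putting either loop to work on the original goal, a hedged sketch (note that the standard-library function is `os.makedirs`, not `os.mkdirs`, and `os.path.join` avoids hard-coding the path separator):
    import os
    from itertools import product

    queuelist = ['ONE']
    yearlist = ['2013']

    for queue, year in product(queuelist, yearlist):   # every queue/year combination
        path = os.path.join(queue, year)
        if not os.path.isdir(path):
            os.makedirs(path)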
|
Import Pandas from a path
Question: I created a program to filter out rows of data that have empty cells, however,
the people that will be using this program do not have any libraries
installed, they only have Python 2.7. Is there a way to import Pandas via a
path from a network drive? I looked up similar questions, but I can't even
seem to find the path on my computer to Pandas (I installed all libraries with
Anaconda). Thanks for any help.
Answer: Have you tried adding the directory to `sys.path`?
import sys
sys.path.append("/mynetwork/path_to_pandas")  # the directory that contains the pandas package
import pandas
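To find out where Anaconda put pandas on your own machine (i.e. what you would copy to the network drive), a quick hedged check:
    import os
    import pandas
    pkg_dir = os.path.dirname(pandas.__file__)   # ...\site-packages\pandas
    print(pkg_dir)
    print(os.path.dirname(pkg_dir))              # ...\site-packages - this is what goes on sys.path
Keep in mind (a caution, not something tested here) that pandas ships compiled extensions and depends on numpy, so the copied packages need to match the users' Python version and platform.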
|
"Fast" interactive plot of a 2d numpy array in python
Question: I have a need to visualize a 2d numpy array in python. Not a contour plot, not
a surface plot. Plot a point on a z axis for every (x,y) element of the 2d
array. My data is a 1024 x 1024 array, but I suppose I could decimate it if I
had to. I need to be able to rotate the plot with mouse drags to see it from
different perspectives.
Matplotlib cannot do this, even for a 100 x 100 array. It is much much too
slow. A 100 x 100 array takes two or three seconds to redraw after dragging.
1024 x 1024 is out of the question.
mlab from Mayavi seems to have this capability, but the simplest trial crashes
on my system with wx errors. As far as I can tell, packages that provide fast
interactive rotation (e.g. VTK) are focused on rendering complex 3d shapes,
and don't provide a simple API for plotting data.
Can you suggest options?
My current setup:
OS X 10.11.4
python 2.7.11
numpy 1.11.0
matplotlib 1.5.1
mayavi 4.4.0
wx 3.0.0.0
Answer: This is an xyz plot with 20x20 decimation, which is about the highest that my
laptop will comfortably do. More points than this doesn't make much sense
anyway, since you wouldn't be able to tell the individual points apart
anymore.
from mpl_toolkits.mplot3d.axes3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
X = np.linspace(-3,3,1024)
Y = np.linspace(-3,3,1024)
X, Y = np.meshgrid(X, Y)
Z = np.exp(-(X**2+Y**2))
ax = fig.add_subplot(1, 1, 1, projection='3d')
ax.plot_wireframe(X, Y, Z, rstride=20, cstride=20)
plt.show(block=False)
raw_input('Press ENTER')
You could consider plotting via Gnuplot. It will handle interactive rotation
much faster than matplotlib. Example:
set isosamp 50
set xrange [-3:3]
set yrange [-3:3]
splot exp(-x**2-y**2)
There is a python module `Gnuplot.py` which should allow you to use it
directly from python.
|
Confirmation when installing with pip globally
Question: Sometimes when working with Python projects one can forget to activate a
virtual environment.
Is there a way to get an explicit confirmation when installing Python modules
with pip to the global scope instead of a virtual environment?
Answer: You can try to wrap `pip install`, e.g.:
import sys
import pip
def install(package):
pip.main(['install', package])
# Example
if __name__ == '__main__':
if not hasattr(sys, 'real_prefix'):
# replace this with your confirmation callback
print('Warning! installing in global scope!')
install('argh')
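A hedged sketch of what that confirmation callback might look like (Python 2 `raw_input`; the virtualenv checks are the usual `sys.real_prefix`/`sys.base_prefix` tests):
    import sys
    import pip

    def confirm_global_install(package):
        in_venv = hasattr(sys, 'real_prefix') or getattr(sys, 'base_prefix', sys.prefix) != sys.prefix
        if not in_venv:
            answer = raw_input("No virtualenv is active - install %r globally? [y/N] " % package)
            if answer.lower() != 'y':
                print('Aborted.')
                return
        pip.main(['install', package])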
Sources:
[Installing python module within
code](http://stackoverflow.com/a/15950647/165753)
[Python: Determine if running inside
virtualenv](http://stackoverflow.com/a/1883251/165753)
|
Trying to do a POST request using Python
Question: I'm trying to make an HTTP POST request using Python. I also need `totp`
authentication, and for this I'm using `pyotp`. For the request I'm using the
Requests library. But the code doesn't work and I don't know where the mistake
is. Do you have any ideas? I'm using Python 3.5.
This is the code:
import requests
import json
from requests.auth import HTTPDigestAuth
import pyotp
totp = pyotp.TOTP('base32secret3232')
url = 'http://3425325325664364345365'
datas = {
"github_url": "https://gist.github.com/323332/333333",
"contact_email": "[email protected]"
}
headers = {
'Accept': '*/*',
'Content-Length': '134',
'Content-Type': 'application/json',
'Host': '45654645654756456457547547',}
AUTH = HTTPDigestAuth('[email protected]', 'totp')
r = requests.post(url, json = datas, headers = headers, auth = AUTH)
print (r.status_code)
This is the error:
[](http://i.stack.imgur.com/28jej.jpg)
This is the second error:
[](http://i.stack.imgur.com/CLL9A.png)
Answer: After I read your question one more time, I realized that I missed the last
part.
I might be wrong because I've never used this package, but you are using Python
3.5 while `pyotp` only supports 2.7 -->
[pypi](https://pypi.python.org/pypi/pyotp)
|
IndexError: list index out of range in python, know the meaning, but don't understand why this happens
Question: I have the following code and I call the program writing:
python new.py -s 13 -p 5
But then, at line 63, I get the following error. I know what it means but I
can't understand why.
Traceback (most recent call last):
File "new.py", line 63, in <module>
rythm[i].append(rythm[last])
IndexError: list index out of range
And here is the code. What I am trying to do is spread the 0s equally among the 1s.
The first input is the length of the string of 0s and 1s, and the second
input is the number of 1s. Thank you!
import argparse
p = argparse.ArgumentParser()
p.add_argument("-pulses", help = "number of pulses", type = int)
p.add_argument("-slots", help = "length of the rythm", type = int)
args = p.parse_args()
slots = args.slots
pulses = args.pulses
pauses = slots - pulses
mod = pauses % pulses
rythm = []
if mod == 0:
x = slots/pauses
l = 0
while l<slots:
if l%x == 0:
rythm.append(1)
else:
rythm.append(0)
l = l + 1
print (rythm)
if mod != 0:
i = 0
j = 0
while i < pulses:
rythm.append([1])
i = i + 1
while j < pauses:
rythm.append([0])
j = j + 1
last = len(rythm)
last = last - 1
last_len = len(rythm[last])
x = slots%pauses
y = pauses - x
flag = True
while flag == True:
flag = False
if (last_len != 1) or (rythm[last] != 0):
flag = True
i = 0
while i < x:
rythm[i].append(rythm[last])
rythm.remove(rythm[last])
i = i + 1
y = y - x
x = x%y
last = len(rythm)
last = last - 1
last_len = len(rythm[last])
print (rythm)
Answer: `IndexError: list index out of range` means that you are accessing a
value at a position which doesn't exist.
For example:
a = range(10)  # len(a) is 10 and list indices start at 0, so the valid indices are 0..9
print(a[10])  # accessing a value by an index that doesn't exist raises an IndexError
|
Allowing for possible unknown file names to be processed in a python script
Question: I am using a script that overlays an input pdf onto another that is
essentially a letterhead. However, I am not sure how to automate the process
so that many files can be processed one at a time, without knowing in advance
what each file will be named. I am using Python 2.7.
from pyPdf import PdfFileWriter, PdfFileReader
output = PdfFileWriter()
input1 = PdfFileReader(file("example.pdf", "rb"))
# add page 1 from input1 to output document, unchanged
output.addPage(input1.getPage(0))
# add page 2 from input1, but first add a watermark from another pdf:
page2 = input1.getPage(0)
watermark = PdfFileReader(file("template.pdf", "rb"))
page2.mergePage(watermark.getPage(0))
output.addPage(page2)
# finally, write "output" to document-output.pdf
outputStream = file("example.pdf", "wb")
output.write(outputStream)
outputStream.close()
Answer: Import os and use os.listdir to look in a particular directory for the files:
<https://docs.python.org/2/library/os.html#os.listdir>
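A hedged sketch of how that fits the script from the question (the `input_dir`/`output_dir` names are hypothetical; the pyPdf calls are the same ones used above):
    import os
    from pyPdf import PdfFileWriter, PdfFileReader

    input_dir = "incoming"    # hypothetical folder holding the PDFs to stamp
    output_dir = "stamped"    # hypothetical folder for the results
    watermark = PdfFileReader(open("template.pdf", "rb"))

    for name in os.listdir(input_dir):
        if not name.lower().endswith(".pdf"):
            continue                              # skip anything that is not a PDF
        reader = PdfFileReader(open(os.path.join(input_dir, name), "rb"))
        output = PdfFileWriter()
        page = reader.getPage(0)
        page.mergePage(watermark.getPage(0))      # overlay the letterhead template
        output.addPage(page)
        with open(os.path.join(output_dir, name), "wb") as out:
            output.write(out)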
|
ValueError: invalid literal for int() with base 10
Question: I am a beginner in python. I came across this question in codewars.
_Jaden is known for some of his philosophy that he delivers via Twitter. When
writing on Twitter, he is known for almost always capitalizing every word.
Your task is to convert strings to how they would be written by Jaden Smith.
The strings are actual quotes from Jaden Smith, but they are not capitalized
in the same way he originally typed them._
Example :
_Not Jaden-Cased: "How can mirrors be real if our eyes aren't real"_
_Jaden-Cased: "How Can Mirrors Be Real If Our Eyes Aren't Real"_
This is my attempt (I am supposed to code using a function)
def toJadenCase(string):
l = len(string)
for i in range(0,l):
if string[i] == ' ':
y = string[i]
string[i+1] = chr(int(y)-32)
return srting
s = raw_input()
print toJadenCase(s)
When run, the following errors showed up
How can mirrors be real if our eyes aren't real (this is the input string)
Traceback (most recent call last):
File "jaden_smith.py", line 9, in <module>
print toJadenCase(s)
File "jaden_smith.py", line 6, in toJadenCase
string[i+1] = chr(int(y)-32)
ValueError: invalid literal for int() with base 10: ''
I couldn't understand these errors even after googling them. Any help would be
appreciated. It would also be great if other errors in my code were highlighted
and better code suggested.
Thanks in advance :D
Answer: **As Goodies points out, string should not be used as a variable name**
Following the [Zen of Python](http://c2.com/cgi/wiki?PythonPhilosophy), this
is technically a function that does exactly what you're trying to achieve:
def toJadenCase(quote):
return quote.title()
Edit:
Revised version to deal with apostrophes:
import string
def toJadenCase(quote):
return string.capwords(quote)
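A quick check against the example from the question shows why the revised version matters: `str.title()` also capitalizes the letter after an apostrophe, while `string.capwords()` does not:
    quote = "How can mirrors be real if our eyes aren't real"
    print(quote.title())            # How Can Mirrors Be Real If Our Eyes Aren'T Real
    import string
    print(string.capwords(quote))   # How Can Mirrors Be Real If Our Eyes Aren't Real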
|
Python class is created when module is imported
Question: I have a Python module that is written like so.
SomeClasses.py
class A():
def __init__(self):
print "Hi! A is instantiated!"
class B():
def __init__(self):
print "Hi! B is instantiated!"
a = A()
When the file is imported, class A is automatically instantiated.
>>> import a
Hi! A is instantiated!
Now, most of the time, this is exactly the behavior I want. However, sometimes
I do not want an entire class to be instantly created during the import
because of the overhead. I did consider creating an init() function.
>>> import SomeClasses
>>> SomeClasses.init()
Hi! A is instantiated!
However, this would break most of the preexisting code. I want to avoid
rewriting a lot of the existing code base. Can anyone suggest a way to tell
the module upon import to not create the class?
Btw, I am running Python 2.7 on Windows 7.
Answer: You could refactor `SomeClasses` and move most of it into another module:
# SomeClasses.py
# One of the few legitimate uses of import * outside of an interactive session.
from _SomeClasses import *
a = A()
# _SomeClasses.py
class A(object):
def __init__(self):
print "Hi! A is instantiated!"
class B(object):
def __init__(self):
print "Hi! B is instantiated!"
Then if you don't want the expensive initialization of `a`, you import
`_SomeClasses` and use that module. The other code that relies on `a` existing
will import `SomeClasses` and get the automatically-created `a` instance.
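A hedged sketch of how the two kinds of callers would use the refactored modules:
    # existing code keeps working unchanged:
    import SomeClasses
    print(SomeClasses.a)        # the instance is still created at import time

    # new, overhead-sensitive code imports only the class definitions:
    import _SomeClasses
    b = _SomeClasses.B()        # only what you explicitly instantiate gets created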
|
Tensorflow gradient is always zero
Question: I have written a small Tensorflow program which convolves an image patch by
the same convolution kernel `num_unrollings` times in a row, and then attempts
to minimize the mean squared difference between the resulting values and a
target output.
However, when I run the model with `num_unrollings` greater than 1, the
gradient of my loss (`tf_loss`) term with respect to the convolution kernel
(`tf_kernel`) is zero, so no learning occurs.
Here is the smallest code (python 3) I can come up with which reproduces the
problem, sorry about the length:
import tensorflow as tf
import numpy as np
batch_size = 1
kernel_size = 3
num_unrollings = 2
input_image_size = (kernel_size//2 * num_unrollings)*2 + 1
graph = tf.Graph()
with graph.as_default():
# Input data
tf_input_images = tf.random_normal(
[batch_size, input_image_size, input_image_size, 1]
)
tf_outputs = tf.random_normal(
[batch_size]
)
# Convolution kernel
tf_kernel = tf.Variable(
tf.zeros([kernel_size, kernel_size, 1, 1])
)
# Perform convolution(s)
_convolved_input = tf_input_images
for _ in range(num_unrollings):
_convolved_input = tf.nn.conv2d(
_convolved_input,
tf_kernel,
[1, 1, 1, 1],
padding="VALID"
)
tf_prediction = tf.reshape(_convolved_input, shape=[batch_size])
tf_loss = tf.reduce_mean(
tf.squared_difference(
tf_prediction,
tf_outputs
)
)
# FIXME: why is this gradient zero when num_unrollings > 1??
tf_gradient = tf.concat(0, tf.gradients(tf_loss, tf_kernel))
# Calculate and report gradient
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
gradient = session.run(tf_gradient)
print(gradient.reshape(kernel_size**2))
#prints [ 0. 0. 0. 0. 0. 0. 0. 0. 0.]
Thank you for your help!
Answer: The kernel is initialized to all zeros, so with more than one unrolling every
term of the gradient contains a factor that is itself zero (either the kernel or a
zero intermediate feature map), and the gradient stays identically zero. Try replacing
# Convolution kernel
tf_kernel = tf.Variable(
tf.zeros([kernel_size, kernel_size, 1, 1])
)
with something like:
# Convolution kernel
tf_kernel = tf.Variable(
tf.random_normal([kernel_size, kernel_size, 1, 1])
)
|
Plot variable value instead of name in Python
Question: I have a large number of files with data about specific dates but with random
(ugly) names that I'd like to assign to a more structured string "infile" that
I can then use to refer to the original filename. To be concrete, in the
following code sample:
file_25Jan1995 = 'random_file_name_x54r'
year = '1995'
month = 'Jan'
day = '25'
infile = 'file_'+day+month+year
print infile
print file_25Jan1995
This code produces the following output:
file_25Jan1995
random_file_name_x54r
My question is, how can I print (or pass to a function) the original filename
directly through the newly created string "infile"? So I'd like "print
some_method(infile)" to return "random_file_name_x54r". Is using a dict the
only way to do this?
Answer: Given that you have defined the variable, you can retrieve the value by name
from locals:
print(locals()[infile])
or by using eval:
print(eval(infile))
You probably don't want to do this, though. Since you needed to make all the
variables in the first place, you might as well put them in a dictionary.
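A hedged sketch of that dictionary approach, reusing the names from the question (the `files` dict itself is new):
    files = {}
    files['file_' + day + month + year] = 'random_file_name_x54r'

    infile = 'file_' + day + month + year
    print(files[infile])    # random_file_name_x54r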
* * *
One more suggestion... if you have the variables defined in a module, e.g.,
datasets.py, then you can fetch them from the module using getattr:
import datasets
print(getattr(datasets, infile))
|
Is there a way to count the number of elements of a certain name in an xml file using Python?
Question: I'm using Python 3.4 on a Windows 64-bit machine.
I currently have a xml file which has multiple hierarchies. There are a number
of elements going by the name "paragraph" in the xml tree. But they might be
on different hierarchies.
Is there any way to count the number of these elements in an easy way?
Traversal through the whole tree seems way too time-consuming.
Answer: If you were to use [`lxml.etree`](http://lxml.de/), then you would have a full
XPath support and can use [`count()`](https://developer.mozilla.org/en-
US/docs/Web/XPath/Functions/count):
import lxml.etree as ET
tree = ET.parse(xml)
paragraphs = tree.xpath('count(//p)')
print(paragraphs)
In
[`xml.etree.ElementTree`](https://docs.python.org/3/library/xml.etree.elementtree.html)
you would have to do it in Python via
[`findall()`](https://docs.python.org/3/library/xml.etree.elementtree.html#xml.etree.ElementTree.Element.findall)
and `len()` because of the [limited XPath
support](https://docs.python.org/3/library/xml.etree.elementtree.html#xpath-
support):
import xml.etree.ElementTree as ET
tree = ET.parse(xml)
paragraphs = tree.findall('.//p')  # './/' searches the whole tree
print(len(paragraphs))
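If building the full list just to take its length bothers you, a hedged alternative with the same standard-library module is to count lazily with `iter()`:
    import xml.etree.ElementTree as ET

    tree = ET.parse(xml)
    count = sum(1 for _ in tree.iter('p'))   # walks matching elements at any depth without storing them
    print(count)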
|
python multissh pool on windows
Question: I am trying to get a process pool to work on Windows, but after asking me for
the password once it asks me for the password again.
import os
import sys
import paramiko
import getpass
import socket
from multiprocessing import Pool
def processFunc(hostname):
handle = paramiko.SSHClient()
handle.set_missing_host_key_policy(paramiko.AutoAddPolicy())
handle.connect(hostname, username=user, password=pw)
print("child")
stdin, stdout, stderr = handle.exec_command("show clock")
cmdOutput = ""
while True:
try:
cmdOutput += stdout.next()
except StopIteration:
break
print("Got output from host %s:%s" % (hostname, cmdOutput))
handle.close()
user = "sup"
f= open('csip.txt','r')
hostnames = []
for line in f:
hostname = line.strip()
hostnames.append(hostname)
pw = getpass.getpass("Enter ssh password:")
if __name__ == "__main__":
pool = Pool(processes=4)
pool.map(processFunc, hostnames, 1)
pool.close()
pool.join()
Am I doing something wrong? The script should read the hostnames from the txt file,
get the password, and then invoke the process pool.
Answer: The below works, but I would like help improving it: I don't want to hardcode
the username and password.
import os
import sys
import paramiko
from multiprocessing import Pool
#Globals
Hostnames = []
f= open('csip.txt','r')
for line in f:
hname = line.strip()
Hostnames.append(hname)
def processFunc(Hostname):
handle = paramiko.SSHClient()
handle.set_missing_host_key_policy(paramiko.AutoAddPolicy())
handle.connect(Hostname, username="sup", password="123")
print("child")
stdin, stdout, stderr = handle.exec_command("show platform | i unknown")
cmdOutput = ""
while True:
try:
cmdOutput += stdout.next()
except StopIteration:
break
print("Got output from host %s:%s" % (Hostname, cmdOutput))
handle.close()
if __name__ == "__main__":
pool = Pool(processes=9)
pool.map(processFunc, Hostnames, 1)
pool.close()
pool.join()
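For what it's worth, the double password prompt is most likely because multiprocessing on Windows spawns fresh interpreter processes that re-import the module, so a module-level `getpass()` call runs again in every worker. A hedged sketch that keeps the prompt under `if __name__ == "__main__"` and passes the credentials to the workers, so nothing has to be hardcoded:
    import getpass
    from multiprocessing import Pool

    import paramiko

    def process_host(args):
        hostname, user, pw = args
        handle = paramiko.SSHClient()
        handle.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        handle.connect(hostname, username=user, password=pw)
        stdin, stdout, stderr = handle.exec_command("show clock")
        output = stdout.read()
        handle.close()
        return hostname, output

    if __name__ == "__main__":
        with open('csip.txt') as f:
            hostnames = [line.strip() for line in f if line.strip()]
        user = "sup"
        pw = getpass.getpass("Enter ssh password:")
        pool = Pool(processes=4)
        for hostname, output in pool.map(process_host, [(h, user, pw) for h in hostnames]):
            print("Got output from host %s:%s" % (hostname, output))
        pool.close()
        pool.join()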
|
How can I create Dictionary in Python from file, where values are a list
Question: I have a txt file and I want to read its values into a dictionary. Unlike a
common dictionary, the value of each `key` is a pair of values, for example:
tiger eat meat
tiger eat people
rabbit eat carrot
people can walk
trees has root
people has hand
I want to get a dictionary like this:
tiger, {eat, meat}, {eat, people}
rabbit, {eat, carrot}
trees, {has, root}
people, {can, walk}, {has, hand}
Should I just read the lines, split each one into 3 items, and store the first one
as the key and the remaining two as the values? Or is there a better way to
store the two values?
My objective is that, when I query what a tiger eats, I want to get the
answers `meat` and `people`.
Answer:
import collections
lines=[]
with open('data1', 'r') as f:
lines=list(map(lambda line:line.strip(), f.readlines()))
d, flag=collections.defaultdict(list), False
for line in lines:
temp=list(map(lambda x:x.strip(), line.split()))
d[temp[0]].append(temp[1:])
print(d)
Here is the output:
$ cat data1
tiger eat meat
tiger eat people
rabbit eat carrot
people can walk
trees has root
people has hand
$ python3 a.py
defaultdict(<class 'list'>, {'rabbit': [['eat', 'carrot']], 'trees': [['has', 'root']], 'tiger': [['eat', 'meat'], ['eat', 'people']], 'people': [['can', 'walk'], ['has', 'hand']]})
**And if you want this structure:**
$ python3 a.py
defaultdict(<class 'list'>, {'people': [{'can': 'walk'}, {'has': 'hand'}], 'tiger': [{'eat': 'meat'}, {'eat': 'people'}], 'trees': [{'has': 'root'}], 'rabbit': [{'eat': 'carrot'}]})
replace the 2nd last line in the script to:
d[temp[0]].append({temp[1]:temp[2]})
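With that second structure, answering "what does a tiger eat" becomes a short lookup over the stored pairs (a hedged sketch):
    eats = [pair['eat'] for pair in d['tiger'] if 'eat' in pair]
    print(eats)    # ['meat', 'people']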
|
Pandas Dataframe has no Plot function
Question: I'm trying to call `df.plot.scatter(...)` as shown
[here](http://pandas.pydata.org/pandas-docs/stable/visualization.html#scatter-
plot), where `df` is a `pandas.Dataframe` object.
But my IDE can't suggest any plot function when I initiate suggestions (though
it can suggest other `dataframe` members like `fillna()`, `to_json()` etc).
If I anyway write `df.plot.scatter(...)` and run it, it gives error:
AttributeError: 'function' object has no attribute 'scatter'
I use python 3.4 on windows 7. My IDE is PyCharm. These are the imports:
import pandas as pd
import matplotlib.pyplot as plt
Can it be about my python version, or maybe this function is removed from
pandas API? Thanks in advance.
Answer: I think your `pandas` version is older than `0.17.0`.
See [`DataFrame.plot.scatter`](http://pandas.pydata.org/pandas-
docs/version/0.17.0/generated/pandas.DataFrame.plot.scatter.html):
> New in version 0.17.0.
In older versions you can use:
df.plot(kind='scatter', x='col_1', y='col_2')  # scatter needs explicit x and y column names
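To confirm which case applies, a quick hedged check of the installed version:
    import pandas as pd
    print(pd.__version__)    # DataFrame.plot.scatter needs 0.17.0 or newer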
|
pandas read_csv is putting all values in one column and one row
Question: I've sought out an answer on multiple forums and YouTube but to no avail;
sorry in advance if it is widely available and my keywords just weren't right.
I'm attempting to execute a simple pandas.read_csv('.csv', sep=','). However,
the output I'm receiving is not splitting the data out into multiple columns
as I imagine it should.
I'm getting back all of my headers in one row, separated by commas. The same
is true for each line item tied to the respective headers.
I've tried setting this data up in a dataframe, manipulating the headers, and
manually adding the headers, all with no success.
For better understanding I've copied and pasted from Ipython notebook of what
I'm seeing:
In [15]:
import pandas as pd
pd.read_csv('C:\Users\Dale\Desktop\ShpData\TrackerTW0.csv',sep=',')
Out[15]:
PurchaseOrderNumber,ShipmentFinalDestinationCity,TransferPointCity,POType,PlannedMode,ProgramType,FreightPaymentTerms,ContainerNumber,BL/AWB#,Mode,ShipmentFinalDestinationLocation,CarrierSCAC,Carrier,Forwarder,BrandDesc,POLCity,PODCity,InDCOutlookDate,InDCOriginalDate,AnticipatedShipDate,PlannedStockedDate,ExFactoryActualDate(LT),OriginConsolActualDate(LT),DepartLoadPortActualDate(LT),FullOutGatefromOceanTerminal(CYorPort)ActualDate(LT),DPArrivalActualDate(LT),FreightAvailableActualDate(LT),DestConsolActualDate(LT),DomDepartActualDate(LT),YardArrivalActualDate(LT),CarrierDropActualDate(LT),InDCActualDate(LT),StockedActualDate(LT),Vessel,VesselETADischargePortCity,DPArrivalOutlookDate,VesselETADischargePortActualDate(LT),FullOutGatefromOceanTerminal(CYorPort)OutlookDate,StockedOutlookDate,ShipmentLeg#,Metrics,TotalShippedQty
0 1251708,Rugby,Tuticorin,Initial Order,Ocean,Re...
1 1262597,Rugby,Hong Kong,Initial Order,Ocean,Re...
Thanks
Answer: You might want to try this; you have about 40 columns.
import pandas as pd
df = pd.read_csv('input.csv', names=['PurchaseOrderNumber','ShipmentFinalDestinationCity','TransferPointCity','POType','PlannedMode','ProgramType','FreightPaymentTerms','ContainerNumber','BL/AWB#','Mode','ShipmentFinalDestinationLocation','CarrierSCAC','Carrier','Forwarder','BrandDesc','POLCity','PODCity','InDCOutlookDate','InDCOriginalDate','AnticipatedShipDate','PlannedStockedDate','ExFactoryActualDate(LT)','OriginConsolActualDate(LT)','DepartLoadPortActualDate(LT)','FullOutGatefromOceanTerminal(CYorPort)ActualDate(LT)','DPArrivalActualDate(LT)','FreightAvailableActualDate(LT)','DestConsolActualDate(LT)','DomDepartActualDate(LT)','YardArrivalActualDate(LT)','CarrierDropActualDate(LT)','InDCActualDate(LT)','StockedActualDate(LT)','Vessel','VesselETADischargePortCity','DPArrivalOutlookDate','VesselETADischargePortActualDate(LT)','FullOutGatefromOceanTerminal(CYorPort)OutlookDate','StockedOutlookDate','ShipmentLeg#','Metrics','TotalShippedQty'])
print df
|
How to find out if a class exists on an OrientDB using PyOrient?
Question: How is it possible to find out whether a class exists, so as to prevent a
'class x already exists in current database' error message?
I have seen the following
[question](https://stackoverflow.com/questions/28288268/check-class-creation-in-orientdb),
which gives answers in Java and SQL. I'm looking for the Python equivalent.
Answer: I created the following example in `pyorient`:
**MY STRUCTURE:**
[](http://i.stack.imgur.com/M72fL.png)
**PyORIENT CODE:**
import pyorient
db_name = 'Stack37277880'
print("Connecting to the server...")
client = pyorient.OrientDB("localhost",2424)
session_id = client.connect("root","root")
print("OK - sessionID: ",session_id,"\n")
if client.db_exists( db_name, pyorient.STORAGE_TYPE_PLOCAL ):
client.db_open(db_name, "root", "root")
dbClasses = client.command("SELECT name FROM (SELECT expand(classes) FROM metadata:schema)")
newClass = "MyClass"
classFound = False
for idx, val in enumerate(dbClasses):
if (val.name == newClass):
classFound = True
break
if (classFound != True):
client.command("CREATE CLASS " + newClass)
print("Class " + newClass + " correctly created")
else:
print("Class " + newClass + " already exists into the DB")
client.db_close()
**First Run Output:**
Connecting to the server...
OK - sessionID: 70
Class MyClass correctly created
**OrientDB Studio:**
[](http://i.stack.imgur.com/lRRRB.png)
**Second Run Output:**
Connecting to the server...
OK - sessionID: 74
Class MyClass already exists into the DB
Hope it helps
|
graceful interrupt of while loop in ipython notebook
Question: I'm running some data analysis in ipython notebook. A separate machine
collects some data and saves them to a server folder, and my notebook scans
this server periodically for new files, and analyzes them.
I do this in a while loop that checks every second for new files. Currently I
have set it up to terminate when some number of new files are analyzed.
However, I want to instead terminate upon a keypress.
I have tried try-catching a keyboard interrupt, as suggested here: [How to
kill a while loop with a
keystroke?](http://stackoverflow.com/questions/13180941/how-to-kill-a-while-
loop-with-a-keystroke)
but it doesn't seem to work with ipython notebook (I am using Windows).
Using openCV's keywait does work for me, but I was wondering if there are
alternative methods without having to import opencv.
I have also tried implementing a button widget that interrupts the loop, as
such:
from ipywidgets import widgets
import time
%pylab inline
button = widgets.Button(description='Press to stop')
display(button)
class Mode():
def __init__(self):
self.value='running'
mode=Mode()
def on_button_clicked(b):
mode.value='stopped'
button.on_click(on_button_clicked)
while True:
time.sleep(1)
if mode.value=='stopped':
break
But I see that the loop basically ignores the button presses.
Answer: You can trigger a `KeyboardInterrupt` in a Notebook via the menu "Kernel -->
Interrupt".
So use this:
try:
while True:
do_something()
except KeyboardInterrupt:
pass
as suggested [here](http://stackoverflow.com/questions/13180941/how-to-kill-a-
while-loop-with-a-keystroke) and click this menu entry.
|
Mocking download of a file using Python requests and responses
Question: I have some python code which successfully downloads an image from a URL,
using [requests](http://docs.python-requests.org/en/master/), and saves it
into `/tmp/`. I want to test this does what it should. I'm using
[responses](https://github.com/getsentry/responses) to test fetching of JSON
files, but I'm not sure how to mock the behaviour of fetching a file.
I assume it'd be similar to mocking a standard response, like the below, but I
think I'm blanking on how to set the `body` to be a file...
@responses.activate
def test_download():
responses.add(responses.GET, 'http://example.org/images/my_image.jpg',
body='', status=200,
content_type='image/jpeg')
#...
**UPDATE:** Following Ashafix's comment, I'm trying this (python 3):
from io import BytesIO
@responses.activate
def test_download():
with open('tests/images/tester.jpg', 'rb') as img1:
imgIO = BytesIO(img1.read())
responses.add(responses.GET, 'http://example.org/images/my_image.jpg',
body=imgIO, status=200,
content_type='image/jpeg')
imgIO.seek(0)
#...
But when, subsequently, the code I'm testing attempts to do the request I get:
a bytes-like object is required, not '_io.BytesIO'
Feels like it's almost right, but I'm stumped.
**UPDATE 2:** Trying to follow Steve Jessop's suggestion:
@responses.activate
def test_download():
with open('tests/images/tester.jpg', 'rb') as img1:
responses.add(responses.GET, 'http://example.org/images/my_image.jpg',
body=img1.read(), status=200,
content_type='image/jpeg')
#...
But this time the code being tested raises this:
I/O operation on closed file.
Surely the image should still be open inside the `with` block?
**UPDATE 3:** The code I'm testing is something like this:
r = requests.get(url, stream=True)
if r.status_code == 200:
with open('/tmp/temp.jpg', 'wb') as f:
r.raw.decode_content = True
shutil.copyfileobj(r.raw, f)
It seems to be that the final `shutil` line is generating the "I/O operation
on closed file." error. I don't understand this enough - the streaming of the
file - to know how best to mock this behaviour, to test the downloaded file is
saved to `/tmp/`.
Answer: First, to summarise my now overly long question... I'm testing some code
that's something like:
def download_file(url):
r = requests.get(url, stream=True)
if r.status_code == 200:
filename = os.path.basename(url)
with open('/tmp/%s' % filename, 'wb') as f:
r.raw.decode_content = True
shutil.copyfileobj(r.raw, f)
return filename
It downloads an image and, streaming it, saves it to `/tmp/`. I wanted to mock
the request so I can test other things.
@responses.activate
def test_downloads_file(self):
url = 'http://example.org/test.jpg'
with open('tests/images/tester.jpg', 'rb') as img:
responses.add(responses.GET, url,
body=img.read(), status=200,
content_type='image/jpg',
adding_headers={'Transfer-Encoding': 'chunked'})
filename = download_file(url)
# assert things here.
Once I had worked out the way to use `open()` for this, I was still getting
"I/O operation on closed file." from `shutil.copyfileobj()`. The thing that
stopped this was adding the `Transfer-Encoding` header, which is present in
the headers when I make the real request.
Any suggestions for other, better solutions are very welcome!
|
How do I use a variable as a function name in python
Question: How do I use a variable as a function name, so that I can have a list of
functions and initialize them in a loop? I'm getting the error I expected,
which is that a str object is not callable, but I don't know how to fix it. Thanks.
#Open protocol configuration file
config = configparser.ConfigParser()
config.read("protocol.config")
# Create new threads for each protocol that is configured
protocols = ["ISO", "CMT", "ASCII"]
threads = []
threadID = 0
for protocol in protocols:
if (config.getboolean(protocol, "configured") == True):
threadID = threadID + 1
function_name = config.get(protocol, "protocol_func")
threads.append(function_name(threadID, config.get(protocol, "port")))
# Start new threads
for thread in threads:
thread.start()
print ("Exiting Main Protocol Manager Thread")
Answer: If you put your set of valid `protocol_func`s in a specific module, you can
use `getattr()` to retrieve from that module:
import protocol_funcs
protocol_func = getattr(protocol_funcs, function_name)
threads.append(protocol_func(threadID, config.get(protocol, "port")))
* * *
Another approach is a decorator to register options:
protocol_funcs = {}
def protocol_func(f):
protocol_funcs[f.__name__] = f
return f
...thereafter:
@protocol_func
def some_protocol_func(id, port):
pass # TODO: provide a protocol function here
That way only functions decorated with `@protocol_func` can be used in the
config file, and the contents of that dictionary can be trivially iterated
over.
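Either way, the crucial step is turning the string from the config file into a callable before calling it; a hedged sketch of the loop body using the registry dictionary:
    function_name = config.get(protocol, "protocol_func")
    protocol_func = protocol_funcs[function_name]   # a KeyError here means the name was never registered
    threads.append(protocol_func(threadID, config.get(protocol, "port")))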
|
Code transformation from BS4 to lxml parser
Question: I am working on a project to extract specific information from locally stored
HTML files using BS4. As I have a considerably large number of files (>1
million), speed and performance are key for code that browses through all of
them. Until now I have been working with BS4, as I had used it before on a web
crawler and thought it was easy and handy. However, when it comes to big data,
BS4 is way too slow. I read about the `lxml` parser and `html.parser`, which
seem to be the easiest and fastest HTML parsers available in Python.
So my code right now looks like:
from bs4 import BeautifulSoup
import glob
import os
import re
import contextlib
@contextlib.contextmanager
def stdout2file(fname):
import sys
f = open(fname, 'w')
sys.stdout = f
yield
sys.stdout = sys.__stdout__
f.close()
def trade_spider():
os.chdir(r"C:\Users\XXX")
with stdout2file("output.txt"):
for file in glob.iglob('**/*.html', recursive=True):
with open(file, encoding="utf8") as f:
contents = f.read()
soup = BeautifulSoup(contents, "html.parser")
for item in soup.findAll("ix:nonfraction"):
if re.match(".*SearchTag", item['name']):
print(file.split(os.path.sep)[-1], end="| ")
print(item['name'], end="| ")
print(item.get_text())
break
trade_spider()
It opens a text file, goes into my set directory (os.chdir(..)), searches
through all files ending in .html, reads the content, and if it finds a tag
with the name attribute "SearchTag", it takes the related HTML text and prints
it to my open text file. After one match there is a break and it continues with
the next file. From what I have read, BS4 does all of this in memory, which
increases processing time significantly.
That's why I wanted to alter my code to use either lxml (preferred) or
html.parser.
Is anyone able to alter my code to use the lxml parser without changing the
simple initial idea I had for this?
Any help is appreciated, as I am totally stuck....
UPDATE:
import lxml.etree as et
import os
import glob
import contextlib
@contextlib.contextmanager
def stdout2file(fname):
import sys
f = open(fname, 'w')
sys.stdout = f
yield
sys.stdout = sys.__stdout__
f.close()
def skip_to(fle, line):
with open(fle) as f:
pos = 0
cur_line = f.readline().strip()
while not cur_line.startswith(line):
pos = f.tell()
cur_line = f.readline()
f.seek(pos)
return et.parse(f)
def trade_spider():
os.chdir(r"F:\04_Independent Auditors Report")
with stdout2file("auditfeesexpenses.txt"):
for file in glob.iglob('**/*.html', recursive=True):
xml = skip_to(file, "<?xml")
tree = xml.getroot()
nsmap = {"ix": tree.nsmap["ix"]}
fractions = xml.xpath("//ix:nonFraction[contains(@name, 'AuditFeesExpenses')]", namespaces=nsmap)
for fraction in fractions:
print(file.split(os.path.sep)[-1], end="| ")
print(fraction.get("name"), end="| ")
print(fraction.text, end=" \n")
break
trade_spider()
I get this error message:
Traceback (most recent call last):
File "C:/Users/6930p/PycharmProjects/untitled/Versuch/lxmlparser.py", line 43, in <module>
trade_spider()
File "C:/Users/6930p/PycharmProjects/untitled/Versuch/lxmlparser.py", line 33, in trade_spider
xml = skip_to(file, "<?xml")
File "C:/Users/6930p/PycharmProjects/untitled/Versuch/lxmlparser.py", line 26, in skip_to
return et.parse(f)
File "lxml.etree.pyx", line 3427, in lxml.etree.parse (src\lxml\lxml.etree.c:79720)
File "parser.pxi", line 1803, in lxml.etree._parseDocument (src\lxml\lxml.etree.c:116182)
File "parser.pxi", line 1823, in lxml.etree._parseFilelikeDocument (src\lxml\lxml.etree.c:116474)
File "parser.pxi", line 1718, in lxml.etree._parseDocFromFilelike (src\lxml\lxml.etree.c:115235)
File "parser.pxi", line 1139, in lxml.etree._BaseParser._parseDocFromFilelike (src\lxml\lxml.etree.c:110109)
File "parser.pxi", line 573, in lxml.etree._ParserContext._handleParseResultDoc (src\lxml\lxml.etree.c:103323)
File "parser.pxi", line 679, in lxml.etree._handleParseResult (src\lxml\lxml.etree.c:104936)
File "lxml.etree.pyx", line 324, in lxml.etree._ExceptionContext._raise_if_stored (src\lxml\lxml.etree.c:10656)
File "parser.pxi", line 362, in lxml.etree._FileReaderContext.copyToBuffer (src\lxml\lxml.etree.c:100828)
File "C:\Users\6930p\Anaconda3\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x8f in position 1789: character maps to <undefined>
Answer: There is a bit of work to tidy up the html, as per your html file on
[pastebin](http://pastebin.com/rsyVhThY); the following finds `nonFraction`
tags with name attributes containing `'AuditFeesExpenses'`:
import lxml.etree as et
def skip_to(fle, line):
with open(fle) as f:
pos = 0
cur_line = f.readline().strip()
while not cur_line.startswith(line):
pos = f.tell()
cur_line = f.readline()
f.seek(pos)
return et.parse(f)
xml = skip_to("/home/padraic/Downloads/sample_html_file.html","<?xml")
tree = xml.getroot()
# one mapping is None -> None: 'http://www.w3.org/1999/xhtml'
nsmap = {k: v for k, v in tree.nsmap.items() if k}
print(xml.xpath("//ix:nonFraction[contains(@name, 'AuditFeesExpenses')]", namespaces=nsmap))
Output:
[<Element {http://www.xbrl.org/2008/inlineXBRL}nonFraction at 0x7f5b9e91c560>, <Element {http://www.xbrl.org/2008/inlineXBRL}nonFraction at 0x7f5b9e91c5a8>]
To pull the text and name:
fractions = xml.xpath("//ix:nonFraction[contains(@name, 'AuditFeesExpenses')]", namespaces=nsmap)
for fraction in fractions:
print(fraction.get("name"))
print(fraction.text)
Which will give you:
ns19:AuditFeesExpenses
1,850
ns19:AuditFeesExpenses
2,400
Also, if you are just using the ix namespace, you can pull just that one:
tree = xml.getroot()
nsmap = {"ix":tree.nsmap["ix"]}
fractions = xml.xpath("//ix:nonFraction[contains(@name, 'AuditFeesExpenses')]", namespaces=nsmap)
for fraction in fractions:
print(fraction.get("name"))
print(fraction.text)
So the full working code:
def trade_spider():
os.chdir(r"C:\Users\Independent Auditors Report")
with stdout2file("auditfeesexpenses.txt"):
for file in glob.iglob('**/*.html', recursive=True):
xml = skip_to(file, "<?xml")
tree = xml.getroot()
nsmap = {"ix": tree.nsmap["ix"]}
fractions = xml.xpath("//ix:nonFraction[contains(@name, 'AuditFeesExpenses')]", namespaces=nsmap)
for fraction in fractions:
print(file.split(os.path.sep)[-1], end="| ")
print(fraction.get("name"), end="| ")
print(fraction.text, end="|")
In place of _os.chdir_ you can also:
for file in glob.iglob('C:/Users/Independent Auditors Report/**/*.html', recursive=True):
|
Python: read a text file and copy/paste directories listed in it to a new directory
Question: I am trying to read a .txt file that lists directory names and copy/paste the
listed directories into a new directory. I am pretty close to figuring it out
but need a function that copies the directory (not only its contents).
from distutils.dir_util import copy_tree
dst = '/Users/name/Desktop/Core/TEST'
f = open('/Users/name/Desktop/Core/Core_List.txt','r')
for i in f.readlines():
print i
copy_tree(i.strip(), dst)
f.close()
* * *
This is what ended up working:
from shutil import copytree
from os.path import join
dst = '/Users/name/Desktop/Core/TEST'
f = open('/Users/name/Desktop/Core/Core_List.txt','r')
for i in f.readlines():
print i
copytree(i.strip(), join(dst,i))
f.close()
Answer: Perhaps this
from shutil import copytree
from os.path import join
dst = '/Users/name/Desktop/Core/TEST'
with open('/Users/name/Desktop/Core/Core_List.txt') as f:
for src in f:
src = src.strip()  # drop the trailing newline before using it as a path
print src
copytree(src, join(dst, src))
Assuming src is relative to the working directory, it's somewhat more complex
if it's not.
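A variant that also names the copy after the source directory, so the directory itself (not just its contents) lands under `dst` (a sketch reusing the paths from the question):
    from shutil import copytree
    from os.path import join, basename, normpath
    dst = '/Users/name/Desktop/Core/TEST'
    with open('/Users/name/Desktop/Core/Core_List.txt') as f:
        for line in f:
            src = line.strip()
            if not src:  # skip blank lines
                continue
            # /some/path/Foo -> /Users/name/Desktop/Core/TEST/Foo
            copytree(src, join(dst, basename(normpath(src))))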
|
Django model instance full_clean method, is this right?
Question: What I want to do, is write code that will allow me to bulk-load Django object
instances from a csv file. Obviously I should check all the data first before
saving anything.
tl;dr: the full_clean() method doesn't catch an impending attempt to save None
in a field without `null=True`. Seems perverse. Is this by design, and if so
why? Django has fewer bugs than just about anything else I have ever worked
with, so "Bug!" seems most unlikely.
Full version. What I thought would work, is for each row, create an object
instance, populate fields with the data from the spreadsheet, and then invoke
the full_clean method. I.e. (in outline)
from django.core.exceptions import ValidationError
...
# upload a CSV file and open with a csvreader
errors=[]
for rownumber, row in enumerate(csvreader):
o = SomeDjangoModel()
o.somefield = row[0] # repeated for all input data row[1] ...
try:
reason = ""
o.full_clean()
except ValidationError as e:
reason = "Row:{} Reason:{}".format( rownumber, str(e))
errors.append( reason)
# reason, together with the row-number of the csv file, fully explains
# what is wrong.
# end of loop
if errors:
# display errors to the user for him to fix
else:
# repeat the loop, doing .save() instead of .full_clean()
# and get database integrity errors trying to save Null in non-null model field.
Trouble is, `.full_clean()` does not catch None values in fields without
`null=True`
What should I be doing? Ideas include
1. Wrap the whole thing in a transaction, do a batch of o.save() inside an exception handler, and roll the entire transaction back unless there were no errors. But why bother the database when probably 90% of attempts will error out in trivial ways?
2. Feed the data in through a form, even though there is no form-level per-row interaction with the user.
3. Manually test for None where it shouldn't be. But what else does .full_clean not check?
I can understand that ultimately the only way to catch database integrity
errors is to attempt to store the data, but why doesn't Django alone catch
None in a null=False field?
BTW This is Django 1.9.6
Added detail. This is relevant fields of the model definition
class OrderHistory( models.Model):
invoice_no = models.CharField( max_length=10, unique=True) # no default
invoice_val= models.DecimalField( max_digits=8, decimal_places=2) # no default
date = models.DateField( ) # no default
and this is what is happening, done from `python manage.py shell`, to
demonstrate that the .full_clean method fails to spot a None value in a non-nullable field:
>>> from orderhistory.models import OrderHistory
>>> from datetime import date
>>> o = OrderHistory( date=date(2010,3,17), invoice_no="21003163")
>>> o.invoice_val=None
>>> o.full_clean() # passes clean
>>> o.save() # attempt to save this one which has passed full_clean() validation
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/home/nigel/.virtualenvs/edge22/lib/python3.4/site-packages/django/db/models/base.py", line 708, in save
force_update=force_update, update_fields=update_fields)
File "/home/nigel/.virtualenvs/edge22/lib/python3.4/site-packages/django/db/models/base.py", line 736, in save_base
updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields)
File "/home/nigel/.virtualenvs/edge22/lib/python3.4/site-packages/django/db/models/base.py", line 820, in _save_table
result = self._do_insert(cls._base_manager, using, fields, update_pk, raw)
File "/home/nigel/.virtualenvs/edge22/lib/python3.4/site-packages/django/db/models/base.py", line 859, in _do_insert
using=using, raw=raw)
File "/home/nigel/.virtualenvs/edge22/lib/python3.4/site-packages/django/db/models/manager.py", line 122, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/home/nigel/.virtualenvs/edge22/lib/python3.4/site-packages/django/db/models/query.py", line 1039, in _insert
return query.get_compiler(using=using).execute_sql(return_id)
File "/home/nigel/.virtualenvs/edge22/lib/python3.4/site-packages/django/db/models/sql/compiler.py", line 1060, in execute_sql
cursor.execute(sql, params)
File "/home/nigel/.virtualenvs/edge22/lib/python3.4/site-packages/django/db/backends/utils.py", line 79, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "/home/nigel/.virtualenvs/edge22/lib/python3.4/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/home/nigel/.virtualenvs/edge22/lib/python3.4/site-packages/django/db/utils.py", line 95, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/home/nigel/.virtualenvs/edge22/lib/python3.4/site-packages/django/utils/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/home/nigel/.virtualenvs/edge22/lib/python3.4/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
django.db.utils.IntegrityError: null value in column "invoice_val" violates not-null constraint
DETAIL: Failing row contains (2, 21003163, , , 2010-03-17, , null, null, null, null, null, null).
>>>
>>> p = OrderHistory( invoice_no="21003164") # no date
>>> p.date=None
>>> p.full_clean() # this DOES error as it should
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/home/nigel/.virtualenvs/edge22/lib/python3.4/site-packages/django/db/models/base.py", line 1144, in full_clean
raise ValidationError(errors)
django.core.exceptions.ValidationError: {'date': ['This field cannot be null.']}
>>>
Answer: I have just repeated your steps in shell, and full_clean() triggers
ValidationError for None values:
>>> from orders.models import OrderHistory
>>> o = OrderHistory()
>>> o.full_clean()
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/Users/oz/.virtualenvs/full_clean_test/lib/python2.7/site-packages/django/db/models/base.py", line 1144, in full_clean
raise ValidationError(errors)
ValidationError: {'date': [u'This field cannot be null.'], 'invoice_val': [u'This field cannot be null.'], 'invoice_no': [u'This field cannot be blank.']}
I've tested it on fresh project with Django 1.9.6 and Python 2.7.10 on OSX and
Python 3.4.3 on Ubuntu.
Try removing all *.pyc files from your project. If that doesn't work, remove
your virtual env, create new one and reinstall your dependencies.
|
Extending snake in turtle graphics snake game
Question: So I'm working on a snake game in Python's turtle graphics and I'm required to
use turtle. Everything is working out fine, except I need the snake to
extend by one segment as it eats the objective. Can somebody please help?
from turtle import *
import random
def snakeGame():
setup(700, 700)
title("Snake Game!")
bgcolor("black")
Radius = 10
# Boundary
bPen = Pen()
bPen.ht()
bPen.color("Green")
bPen.up()
bPen.goto(-300, -300)
bPen.down()
for i in range(4):
bPen.ht()
bPen.fd(600)
bPen.lt(90)
# Player
playerPen = Pen()
playerPen.color("Green")
playerPen.shape("square")
playerPen.up()
playerPen.width(4)
# Create Circle
square = Pen()
square.shape("square")
square.color("Yellow")
square.up()
square.speed(0)
square.goto(-100, 100)
# Speed
speed = 2
# Movement functions
def up():
playerPen.setheading(90)
def down():
playerPen.setheading(270)
def left():
playerPen.setheading(180)
def right():
playerPen.setheading(0)
# Game Over
def gameOver():
pen = Pen()
pen.ht()
pen.color("white")
pen.write("GAME OVER!", align = "center", font = ("courier", 25, "bold"))
# Score
scoreVal = 0
# Key Prompts
onkey(left, "Left")
onkey(right, "Right")
onkey(up, "Up")
onkey(down, "Down")
listen()
# Infinity loop and game over
while True:
playerPen.fd(speed)
if playerPen.xcor() < -300 or playerPen.xcor() > 300:
gameOver()
playerPen.ht()
score = "Score = " + str(scoreVal)
sPen = Pen()
sPen.up()
sPen.ht()
sPen.goto(0, -100)
sPen.color("white")
sPen.write(score, align = "center", font = ("courier", 18, "bold"))
elif playerPen.ycor() < -300 or playerPen.ycor() > 300:
gameOver()
playerPen.ht()
score = "Score = " + str(scoreVal)
sPen = Pen()
sPen.up()
sPen.ht()
sPen.goto(0, -100)
sPen.color("white")
sPen.write(score, align = "center", font = ("courier", 18, "bold"))
# Collision with square
r1 = square.xcor() + 15
r2 = square.xcor() - 15
r3 = square.ycor() + 15
r4 = square.ycor() - 15
if (playerPen.xcor() >= r2 and playerPen.xcor() <= r1) and (playerPen.ycor() >= r4 and playerPen.ycor() <= r3):
square.goto(random.randint(-290, 290), random.randint(-290, 290))
scoreVal += 1
speed += 1
snakeGame()
done()
Answer: I've been working on this code for a while and the following is what I've come
up with.
import turtle
from random import randint
SCREEN = turtle.Screen()
SCREEN.title("Snake Game!")
SCREEN.setup(700,200)
SCREEN.bgcolor("black")
game_difficulty = 0
difficulty = turtle.Turtle()
difficulty.up()
difficulty.goto(0,-86.5)
difficulty.color("white")
difficulty.write("Choose the difficulty of the game:\n e for easy\n n for normal\n h for hard\n", align = "center", font = ("courier", 20, "bold"))
def difficulty_easy():
global game_difficulty
game_difficulty = 250
def difficulty_normal():
global game_difficulty
game_difficulty = 150
def difficulty_hard():
global game_difficulty
game_difficulty = 85
SCREEN.onkey(difficulty_easy, "e")
SCREEN.onkey(difficulty_easy, "E")
SCREEN.onkey(difficulty_normal, "n")
SCREEN.onkey(difficulty_normal, "N")
SCREEN.onkey(difficulty_hard, "h")
SCREEN.onkey(difficulty_hard, "H")
SCREEN.listen()
while game_difficulty != 150 and game_difficulty != 250 and game_difficulty != 85:
difficulty.ht()
difficulty.clear()
SCREEN.setup(700,700)
food = turtle.Turtle()
food.up()
food.shape("circle")
food.shapesize(0.9)
food.color("red")
frame = turtle.Turtle()
frame.ht()
frame.speed(1000)
frame.color("Green")
frame.up()
frame.goto(-310,-310)
frame.down()
for i in range (2):
for i in range(31):
frame.fd(20)
frame.lt(90)
frame.fd(620)
frame.bk(620)
frame.rt(90)
frame.lt(90)
for i in range(2):
frame.fd(620)
frame.lt(90)
snake = turtle.Turtle()
snake.up()
snake.shape("square")
snake.color("green")
snake_coor = [(0,0)]
stamps = []
dir_x = 0
dir_y = 0
Move = 0
LastMove = 0
stop = False
def getRandPos():
return ((randint(-15,15)*20,randint(-15,15)*20))
def actualiseDisplay():
tracer = SCREEN.tracer()
SCREEN.tracer(0)
snake.clearstamps(len(snake_coor))
food.goto(food_coor[0],food_coor[1])
for x,y in snake_coor:
snake.goto(x,y)
snake.stamp()
SCREEN.tracer(tracer)
def isSnakeAbove(random):
global snake_coor
times = 0
random = random
while times < len(snake_coor):
while random == snake_coor[times]:
random = ((randint(-15,15)*20,randint(-15,15)*20))
times = times + 1
return random
food_coor = isSnakeAbove(getRandPos())
def actualisePos():
global snake_coor, food_coor, stop
avance()
if isSelfCollision() or isBorderCollision():
stop = True
if isFoodCollision():
append()
food_coor = isSnakeAbove(getRandPos())
def loop():
if stop:
gameOver()
return
actualisePos()
actualiseDisplay()
SCREEN.ontimer(loop,game_difficulty)
def isSelfCollision():
global snake_coor, LastMove, Move
if len(snake_coor) >= 2:
if LastMove == 1 and Move == 2 or LastMove == 2 and Move == 1 or LastMove == 3 and Move == 4 or LastMove == 4 and Move == 3:
return True
return len(set(snake_coor)) < len(snake_coor)
def isFoodCollision():
sx,sy = snake_coor[0]
fx,fy = food_coor
if (sx >= fx - 10 and sx <= fx + 10) and (sy >= fy - 10 and sy <= fy + 10):
return True
def isBorderCollision():
x,y = snake_coor[0]
return not (-310 < x < 310 ) or not (-310 < y < 310 )
def avance():
global snake_coor
x, y = snake_coor[0]
x += dir_x*20
y += dir_y*20
snake_coor.insert(0, (x, y))
snake_coor.pop(-1)
def append():
global snake_coor
a = snake_coor[-1][:]
snake_coor.append(a)
def setDir(x,y):
global dir_x, dir_y
dir_x = x
dir_y = y
def right():
global LastMove, Move
LastMove = Move
Move = 1
setDir(1,0)
def left():
global LastMove, Move
LastMove = Move
Move = 2
setDir(-1,0)
def up():
global LastMove, Move
LastMove = Move
Move = 3
setDir(0,1)
def down():
global LastMove, Move
LastMove = Move
Move = 4
setDir(0,-1)
def gameOver():
d = turtle.Turtle()
d.up()
d.ht()
d.goto(0,-20)
d.color("white")
snake.ht()
snake.clearstamps(len(snake_coor))
food.ht()
frame.clear()
SCREEN.setup(300 , 200)
d.write("GAME OVER", align = "center", font = ("courier", 30, "bold"))
d.goto(0,-40)
G_O = "Score : "+str(len(snake_coor)-1)
d.write(G_O, align = "center", font = ("courier", 20, "bold"))
SCREEN.onclick(lambda*a:[SCREEN.bye(),exit()])
SCREEN.onkey(up, "Up")
SCREEN.onkey(down, "Down")
SCREEN.onkey(right, "Right")
SCREEN.onkey(left, "Left")
SCREEN.listen()
loop()
turtle.mainloop()
|
What is the most efficient way to flag a column of a dataframe by values of another dataframe in python/pandas?
Question: I've got a dataframe "A" (~500k records). It contains two columns:
"fromTimestamp" and "toTimestamp".
I've got a dataframe "B" (~5M records). It has some values and a column named
"actualTimestamp".
I want all of my rows in dataframe "B" where the value of "actualTimestamp" is
between the values of any "fromTimestamp" and "toTimestamp" pair to be
flagged.
I want something similar like this, but much more efficient code:
for index, row in A.iterrows():
cond1 = B['actual_timestamp'] >= row['from_timestamp']
cond2 = B['actual_timestamp'] <= row['to_timestamp']
B.ix[cond1 & cond2, 'corrupted_flag'] = True
What is the fastest/most efficient way to do this in python/pandas?
**Update:** Sample data
dataframe A (input):
from_timestamp to_timestamp
3 4
6 9
8 10
dataframe B (input):
data actual_timestamp
a 2
b 3
c 4
d 5
e 8
f 10
g 11
h 12
dataframe B (expected output):
data actual_timestamp corrupted_flag
a 2 False
b 3 True
c 4 True
d 5 False
e 8 True
f 10 True
g 11 False
h 12 False
Answer: You can use the [`intervaltree`](https://pypi.python.org/pypi/intervaltree)
package to build an [interval
tree](https://en.wikipedia.org/wiki/Interval_tree) from the timestamps in
DataFrame A, and then check if each timestamp from DataFrame B is in the tree:
from intervaltree import IntervalTree
tree = IntervalTree.from_tuples(zip(A['from_timestamp'], A['to_timestamp'] + 0.1))
B['corrupted_flag'] = B['actual_timestamp'].map(lambda x: tree.overlaps(x))
Note that you need to pad `A['to_timestamp']` slightly, as the upper bound of
an interval is not included as part of the interval in the `intervaltree`
package (although the lower bound is).
This method improved performance by a little more than a factor of `14` on
some sample data I generated (A = 10k rows, B = 100k rows). The performance
boost got bigger the more rows I added.
I've used the `intervaltree` package with `datetime` objects before, so the
code above should still work if your timestamps aren't integers like they are
in your sample data; you just might need to change how upper bounds are
padded.
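For instance, if the timestamps were pandas datetimes rather than integers, the padding might look like this (a sketch; it assumes the columns are datetime64 and that a one-nanosecond pad is acceptable):
    import pandas as pd
    from intervaltree import IntervalTree
    # Pad the upper bound so to_timestamp itself counts as inside the interval.
    tree = IntervalTree.from_tuples(
        zip(A['from_timestamp'], A['to_timestamp'] + pd.Timedelta(1, 'ns')))
    B['corrupted_flag'] = B['actual_timestamp'].map(tree.overlaps)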
|
Checking if global variables are defined in bottle
Question: I'm trying to use bottle to update information on a site fed in from commands
in a chat bot but am struggling to get information from one route to another
while checking if the variables are defined.
It works fine until I add:
if 'area' not in globals():
area = ''
if 'function' not in globals():
function = ''
if 'user' not in globals():
user = ''
if 'value' not in globals():
value = ''
To check if the variables have been defined. It works unless I set a value using
/in; otherwise it errors with:
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/bottle.py", line 862, in _handle
return route.call(**args)
File "/usr/local/lib/python3.5/dist-packages/bottle.py", line 1732, in wrapper
rv = callback(*a, **ka)
File "API.py", line 43, in botOut
return area + function + user + value
UnboundLocalError: local variable 'area' referenced before assignment
Full code:
from bottle import route, error, post, get, run, static_file, abort, redirect, response, request, template
Head = '''<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<link rel="stylesheet" href="style.css">
<script src="script.js"></script>
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
</head>
'''
foot = '''</body></html>'''
@route('/in')
def botIn():
global area
global function
global user
global value
area = request.query.area
function = request.query.function
user = request.query.user
value = request.query.value
print(area)
return "in"
@route('/out')
def botOut():
if 'area' not in globals():
area = ''
if 'function' not in globals():
function = ''
if 'user' not in globals():
user = ''
if 'value' not in globals():
value =''
return area + function + user + value
run (host='0.0.0.0', port=8080)
Answer: Instead of using 4 globals--which you then have to qualify with the `global`
keyword in several places--simply create one dict at the module level, and
store your state in that dict; no need to declare it `global` anywhere.
E.g.,
bot_state = {
'area': '',
'function': '',
'user': '',
'value': ''
}
@route('/in')
def botIn():
bot_state['area'] = request.query.area
bot_state['function'] = request.query.function
bot_state['user'] = request.query.user
bot_state['value'] = request.query.value
print(bot_state['area'])
return 'in'
@route('/out')
def botOut():
return ''.join([
bot_state['area'],
bot_state['function'],
bot_state['user'],
bot_state['value'],
])
Note that there are several more improvements I'd make to the code (e.g. each
route function should return a list of strings, not a string), but those are
the minimal changes I'd make in order to solve your immediate problem. Hope it
helps!
|
Execute multiple independent statements in SQLAlchemy Core?
Question: I'm using SQLAlchemy Core to run a few independent statements. **The
statements are to separate tables and unrelated**. Because of that I can't use
the standard `table.insert()` with multiple dictionaries of params passed in.
Right now, I'm doing this:
sql_conn.execute(query1)
sql_conn.execute(query2)
Is there any way I can run these in one shot instead of needing two back-and-
forths to the db? I'm on MySQL 5.7 and Python 2.7.11.
Answer: It is neither wise nor practical to run two queries at once.
It is not wise because allowing it gives hackers another way to do nasty things via
"SQL injection".
On the other hand, it is possible, but not necessarily practical. You would
create a stored procedure that contains any number of related (or unrelated)
queries in it, then `CALL` that procedure (see the sketch after the list below).
There are some things that _may_ make it impractical:
* The only way to get data in is via a finite number of scalar arguments.
* The output comes back as multiple resultsets; you need to code differently to see what happened.
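For illustration, a minimal sketch of the stored-procedure route from SQLAlchemy Core; the connection URL, procedure name and SQL body are placeholders, not taken from your schema:
    from sqlalchemy import create_engine, text
    engine = create_engine("mysql+pymysql://user:pw@localhost/dbname")  # placeholder URL
    # One-time setup: a hypothetical procedure wrapping two unrelated statements.
    create_proc = """
    CREATE PROCEDURE insert_both(IN a_val INT, IN b_val INT)
    BEGIN
        INSERT INTO table_a (col) VALUES (a_val);
        INSERT INTO table_b (col) VALUES (b_val);
    END
    """
    with engine.connect() as conn:
        conn.execute(text("DROP PROCEDURE IF EXISTS insert_both"))
        conn.execute(text(create_proc))
        # Both statements now run in a single round trip.
        conn.execute(text("CALL insert_both(:a, :b)"), {"a": 1, "b": 2})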
Roundtrip latency is _insignificant_ if you are on the same machine with the
MySQL server. It can usually be ignored even if the two servers are in the
same datacenter. Latency becomes important when the client and server are
separated by a long distance. For cross-Atlantic latency, we are talking over
100ms. Brazil to China is about 250ms. (Be glad we are no living on Jupiter.)
|
PySide: Emiting signal from QThread in new syntax
Question: I'm trying (and researching) with little success to emit a signal from a
working Qthread to the main window. I don't seem to understand how I should go
about this in the new syntax.
Here's a simple example.
from PySide.QtCore import *
from PySide.QtGui import *
import sys
import time
class Dialog(QDialog):
def __init__(self, parent=None):
super(Dialog, self).__init__(parent)
button = QPushButton("Test me!")
layout = QVBoxLayout()
layout.addWidget(button)
self.setLayout(layout)
#self.button.clicked.connect(self.test) ----> 'Dialog' object has no attribute 'button'
self.connect(button, SIGNAL('clicked()'), self.test)
self.workerThread = WorkerThread()
def test(self):
self.workerThread.start()
QMessageBox.information(self, 'Done!', 'Done.')
class WorkerThread(QThread):
def __init__(self, parent=None):
super(WorkerThread, self).__init__(parent)
def run(self):
time.sleep(5)
print "Thread done!"
app = QApplication(sys.argv)
dialog = Dialog()
dialog.show()
app.exec_()
I understand that if I didn't have another thread I'd create the signal inside
the Dialog class and connect it in the `__init__` but how can I create a
custom signal that can be emitted from WorkerThread and be used test()?
As a side question. You can see it commented out of the code that the new
syntax for connecting the signal errors out. Is it something in my
configurations?
I'm on OsX El Capitan, Python 2.7
Any help is highly appreciated! Thanks a lot
TL;DR: I'd like to emit a signal from the WorkerThread after 5 seconds so
that the test function displays the QMessageBox only after the WorkerThread is
done, using the new syntax.
Answer: Ok, it's been a long day trying to figure this out. My main resource was this:
<http://www.matteomattei.com/pyside-signals-and-slots-with-qthread-example/>
In the new syntax, in order to handle signals from different threads, you have
to create a class for your signal like so:
class WorkerThreadSignal(QObject):
workerThreadDone = Signal()
This is how the WorkerThread end up looking like:
class WorkerThread(QThread):
def __init__(self, parent=None):
super(WorkerThread, self).__init__(parent)
self.workerThreadSignal = WorkerThreadSignal()
def run(self):
time.sleep(3)
self.workerThreadSignal.workerThreadDone.emit()
And for the connections on the Dialog class:
self.workerThread = WorkerThread()
self.button.clicked.connect(self.test)
and:
self.workerThreadSignal = WorkerThreadSignal()
self.workerThread.workerThreadSignal.workerThreadDone.connect(self.success)
def success(self):
QMessageBox.warning(self, 'Warning!', 'Thread executed to completion!')
So the success method is called once the signal is emitted.
What took me the longest to figure out was this last line of code. I
originally thought I could connect directly to the WorkerThreadSignal class
but, at least in this case, it only worked once I backtracked its location:
from the Dialog `__init__` to the WorkerThread `__init__` and back to the
WorkerThreadSignal. I took this hint from the website mentioned above.
I find it strange that I have to create the same attributes on both classes;
maybe there's a way to create one shared variable I can refer to instead of
the current solution, but it works for now.
I hope this helps someone also stuck in this process!
PS: The syntax problem for the connection was also solved. So everything is
written with the new syntax, which is great.
|
How to get the cwd in a shell-dependend format?
Question: Since I'm using both Windows' `cmd.exe` and
[msysgit](/questions/tagged/msysgit "show questions tagged 'msysgit'")'s
`bash`, trying to access the Windows-path output by `os.getcwd()` is causing
Python to attempt accessing a path starting with a drive letter and a colon,
e.g. `C:\`, which `bash` correctly determines an invalid unix-path, which
instead should start with `/c/` in this example. But how can I modify a
Windows-path to become its [msys](/questions/tagged/msys "show questions
tagged 'msys'")-equivalent [iff](https://en.wikipedia.org/wiki/If_and_only_if
"if and only if") the script is running within `bash`?
Answer: Ugly but should work unless you create an environment variable `SHELL=bash`
for Windows:
def msysfy(dirname):
import os
try:
shell = os.environ['SHELL']
except KeyError: # by default, cmd.exe has no SHELL variable
shell = 'win'
if os.path.basename(shell)=='bash' and dirname[1] == ':':
return '/' + dirname[0].lower() + '/' + dirname[2:]
# don't worry about the other backslashes, msys handles them
else:
return dirname
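A quick usage example; the drive-letter rewrite only happens when the `SHELL` variable points at bash, otherwise the path comes back unchanged:
    import os
    # e.g. 'C:\\Users\\me' -> '/c' plus the rest of the path (msys copes with
    # the remaining backslashes, as noted above); under cmd.exe it is untouched.
    print(msysfy(os.getcwd()))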
|
Python: Append column to CSV from a different csv file
Question: I currently have a script which I want to use to combine csv data files. For
example I have a file called process.csv and file.csv but when I try to append
one to the other in a new file called 'all_files.csv', it appends to the
correct column but not from the top of the file.
What happens at the moment:
process/sec
08/03/16 11:19 0
08/03/16 11:34 0.1
08/03/16 11:49 0
08/03/16 12:03 0
08/03/16 12:13 0
08/03/16 12:23 0
file/sec
0
43.3
0
0
0
0
0
What I want:
process/sec file/sec
08/03/16 11:19 0 0
08/03/16 11:34 0.1 43.3
08/03/16 11:49 0 0
08/03/16 12:03 0 0
08/03/16 12:13 0 0
08/03/16 12:23 0 0
Here is my code (note: I removed all the excess code relating to an algorithm
I use for the `per_second` value and use a static value in this example):
def all_data(data_name,input_file_name,idx):
#Create file if first set of data
if data_name == 'first_set_of_data':
all_per_second_file = open("all_data.csv", 'wb')
#Append to file for all other data
else:
all_per_second_file = open("all_data.csv", 'a')
row_position=''
#For loop with index number to position rows after one another
#So not to rewrite new data to the same columns in all_data.csv
for number in range(0,idx):
row_position=row_position+','
with open(input_file_name, 'rb') as csvfile:
# get number of columns
for line in csvfile.readlines():
array = line.split(',')
first_item = array[0]
num_columns = len(array)
csvfile.seek(0)
reader = csv.reader(csvfile, delimiter=',')
#Columns to include Date and desired data
included_cols = [0, 3]
count =0
#Test value for example purposes
per_second=12
for row in reader:
#Create header
if count==1:
all_per_second_file.write(row_position+','+event_name+"\n")
#Intialise date column with first set of data
#first entry rate must be 0
if count ==2:
if event_name == 'first_set_of_data':
all_per_second_file.write(row_position+row[0]+",0\n")
else:
all_per_second_file.write(row_position+",0\n")
#If data after the first row =0 value should reset so data/sec should be 0, not a minus number
if count>2 and row[3]=='0':
if event_name == 'first_set_of_data':
all_per_second_file.write(row_position+row[0]+",0\n")
else:
all_per_second_file.write(row_position+",0\n")
#Otherwise calculate rate
elif count >=3:
if event_name == 'first_set_of_data':
all_per_second_file.write(row_position+row[0]+","+str("%.1f" % per_second)+"\n")
else:
all_per_second_file.write(row_position+","+str("%.1f" % per_second)+"\n")
count = count+1
all_per_second_file.close()
**Update in code:**
I have changed my script to the following which seems to work correctly:
def all_data(input_file_name):
a = pd.read_csv(per_second_address+input_file_name[0])
b = pd.read_csv(per_second_address+input_file_name[1])
c = pd.read_csv(per_second_address+input_file_name[2])
d = pd.read_csv(per_second_address+input_file_name[3])
b = b.dropna(axis=1)
c = c.dropna(axis=1)
d = d.dropna(axis=1)
merged = a.merge(b, on='Date')
merged = merged.merge(c, on='Date')
merged = merged.merge(d, on='Date')
merged.to_csv(per_second_address+"all_event_per_second.csv", index=False)
Answer: CSV file read/write operation is line-based.
* * *
Please check the below code with basic modules available with python:
process.csv contains:
time,process/sec
8/3/2016 11:19,0
8/3/2016 11:34,0
8/3/2016 11:49,1
8/3/2016 12:03,1
8/3/2016 12:13,0
8/3/2016 12:23,0
files.csv contains:
time,files/sec
8/3/2016 11:19,0
8/3/2016 11:34,2
8/3/2016 11:49,3
8/3/2016 12:03,4
8/3/2016 12:13,1
8/3/2016 12:23,0
Python code will create "combine.csv":
import csv
#Read both files
with open('process.csv', 'rb') as a:
reader = csv.reader(a,delimiter = ",")
process_csv = list(reader)
with open('files.csv', 'rb') as b:
reader = csv.reader(b,delimiter = ",")
data_csv = list(reader)
#Write into combine.csv
if len(process_csv) == len(data_csv):
with open('combine.csv', 'ab') as f:
writer = csv.writer(f,delimiter = ",")
for i in range(0,len(process_csv)):
temp_list = []
temp_list.extend(process_csv[i])
temp_list.append(data_csv[i][1])
writer.writerow(temp_list)
combine.csv has:
time,process/sec,files/sec
8/3/2016 11:19,0,0
8/3/2016 11:34,0,2
8/3/2016 11:49,1,3
8/3/2016 12:03,1,4
8/3/2016 12:13,0,1
8/3/2016 12:23,0,0
* * *
Code with pandas module.
import pandas as pd
a = pd.read_csv("process.csv")
b = pd.read_csv("files.csv")
b = b.dropna(axis=1)
merged = a.merge(b, on='time')
merged.to_csv("combine2.csv", index=False)
More info on pandas module, [click here !!!](http://pandas.pydata.org/)
|
Django 1.8 How do I add `sensitive_variables` or `sensitive_post_parameters` to a FormView method?
Question: When debugging an error with a form submit in Django, I noticed that the
user's password is in plain view in the debug "Request information" readout as
part of the POST parameters.
How do I wrap the `form_valid` (or maybe `dispatch`?) so that
`POST['password']` is hidden from the debugging information? I can't seem to
find the right combination of `@method_decorator` etc.
<https://docs.djangoproject.com/en/1.8/howto/error-reporting/#filtering-
sensitive-information>
from django.utils.decorators import method_decorator
from django.views.decorators.debug import sensitive_variables, sensitive_post_parameters
class ActivateView(FormView):
form_class = ActivatePasswordForm
template_name = 'activate.html'
def form_valid(self, form):
# erroneous function which has been fixed
do_something(form.cleaned_data['password'])
return super().form_valid(form)
I have tried:
@method_decorator(sensitive_variables)
def form_valid(self, form):
and:
@method_decorator(sensitive_post_parameters)
def form_valid(self, form):
but both bail out at:
Traceback:
File "venv/lib/python3.5/site-packages/django/core/handlers/base.py" in get_response
223. response = middleware_method(request, response)
File "venv/lib/python3.5/site-packages/django/middleware/clickjacking.py" in process_response
31. if response.get('X-Frame-Options', None) is not None:
Exception Type: AttributeError at /activate/
Exception Value: 'function' object has no attribute 'get'
and I've tried:
@method_decorator(sensitive_variables)
def dispatch(self, *args, **kwargs):
return super().dispatch(*args, **kwargs)
and:
@method_decorator(sensitive_post_parameters)
def dispatch(self, *args, **kwargs):
return super().dispatch(*args, **kwargs)
but both bail out at:
Traceback:
File "venv/lib/python3.5/site-packages/django/core/handlers/base.py" in get_response
132. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "venv/lib/python3.5/site-packages/django/views/generic/base.py" in view
71. return self.dispatch(request, *args, **kwargs)
File "venv/lib/python3.5/site-packages/django/utils/decorators.py" in _wrapper
34. return bound_func(*args, **kwargs)
Exception Type: TypeError at /activate/
Exception Value: decorator() got an unexpected keyword argument 'unique_id'
This is the type error mentioned in [the
docs](https://docs.djangoproject.com/en/1.8/topics/class-based-
views/intro/#decorating-the-class).
## Solved
Solution is a mixture of the two answers below from Dave and Alasdair. Thanks.
The `@sensitive_post_parameters` decorator will only take effect when
`DEBUG=False`, which explains why I wasn't seeing anything change. Also, the
method has to be called within the `@method_decorator`. So the correct code
is:
@method_decorator(sensitive_post_parameters('password', 'password_again'))
def dispatch(self, *args, **kwargs):
return super().dispatch(*args, **kwargs)
Answer: `sensitive_variables` is documented as a decorator rather than an argument to
a decorator. And even with the correct syntax, it's possible but unlikely that
"treated in a special way" means that the variables will not be visible in
debugging information.
Production servers should never be run with DEBUG=True, and sensitive
variables should never appear in any logs generated by a production server.
But when debugging, the goal is to have all the information necessary to track
down problems, which includes passwords. Let us know if the debug page
generator censors sensitive variables. That would be a surprise.
def sensitive_variables(*variables):
"""
Indicates which variables used in the decorated function are sensitive, so
that those variables can later be treated in a special way, for example
by hiding them when logging unhandled exceptions.
Two forms are accepted:
* with specified variable names:
@sensitive_variables('user', 'password', 'credit_card')
def my_function(user):
password = user.pass_word
credit_card = user.credit_card_number
...
* without any specified variable names, in which case it is assumed that
all variables are considered sensitive:
@sensitive_variables()
def my_function()
...
"""
|
Python console output to variable
Question: I have seen a few posts on this site and some others that cover a similar
topic don't quite seem to reach the result i am looking for with python v3. My
aim is to have a popup window containing two entry boxes for a username and a
password which i can then output as variables named username and password, to
then in turn use to login to a website which i already have scripted. The code
i have so far is:
from tkinter import *
def show_entry_fields():
print("Username: %s\nPassword: %s" % (e1.get(), e2.get()))
master = Tk()
Label(master, text="Username").grid(row=0)
Label(master, text="Password").grid(row=1)
e1 = Entry(master)
e2 = Entry(master)
e1.grid(row=0, column=1)
e2.grid(row=1, column=1)
Button(master, text='Quit', command=master.quit).grid(row=3, column=0, sticky=W, pady=4)
Button(master, text='Submit', command=show_entry_fields).grid(row=3, column=1, sticky=W, pady=4)
I am stuck on working out how to take the output that shows in the
console after pressing submit and turn these two lines into
variables. Any help or suggestions would be greatly appreciated. Thanks in
advance,
James
Answer: You can use tkinter's variables:
username = StringVar()
password = StringVar()
And then when you define the entrys, add argument `textvariable`:
e1 = Entry(master, textvariable = username)
e2 = Entry(master, textvariable = password)
To get the value from this variable, call `.get()` function on it.
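For example, a minimal sketch of how `show_entry_fields` could pick the values up (assuming `username` and `password` are created after `master = Tk()` and before the entries):
    def show_entry_fields():
        user = username.get()
        pwd = password.get()
        print("Username: %s\nPassword: %s" % (user, pwd))
        # user and pwd are now plain Python strings for your site-login code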
|
Fastest way to compute distance between each pair of points in python
Question: In my project I need to compute the euclidean distance between each pair of points stored
in an array. The input array is a 2D numpy array with 3 columns, which are the
coordinates (x,y,z), and each row defines a new point.
I'm usually working with 5000 - 6000 points in my test cases.
My first algorithm use Cython and my second numpy. I find that my numpy
algorithm is faster than cython.
edit: with 6000 points :
numpy 1.76 s / cython 4.36 s
Here's my cython code:
cimport cython
from libc.math cimport sqrt
@cython.boundscheck(False)
@cython.wraparound(False)
cdef void calcul1(double[::1] M,double[::1] R):
cdef int i=0
cdef int max = M.shape[0]
cdef int x,y
cdef int start = 1
for x in range(0,max,3):
for y in range(start,max,3):
R[i]= sqrt((M[y] - M[x])**2 + (M[y+1] - M[x+1])**2 + (M[y+2] - M[x+2])**2)
i+=1
start += 1
M is a memory view of the initial input array, flattened by numpy (`flatten()`)
before the call to the function `calcul1()`; R is a memory view of a 1D output
array used to store all the results.
Here's my Numpy code :
def calcul2(M):
return np.sqrt(((M[:,:,np.newaxis] - M[:,np.newaxis,:])**2).sum(axis=0))
Here M is the initial input array, but transposed by numpy (`transpose()`)
before the function call so that the coordinates (x,y,z) are rows and the
points are columns.
Moreover this numpy function is quite convenient because the array it returns
is well organised: it's an n by n array with n the number of points, and each
point has a row and a column. So for example the distance AB is stored at the
intersection of row A and column B.
Here's how I call them (cython function):
cpdef test():
cdef double[::1] Mf
cdef double[::1] out = np.empty(17998000,dtype=np.float64) # (6000² - 6000) / 2
M = np.arange(6000*3,dtype=np.float64).reshape(6000,3) # Example array with 6000 points
Mf = M.flatten() #because my cython algorithm need a 1D array
Mt = M.transpose() # because my numpy algorithm need coordinates as rows
calcul2(Mt)
calcul1(Mf,out)
Am I doing something wrong here ? For my project both are not fast enough.
1: Is there a way to improve my cython code in order to beat numpy's speed ?
2: Is there a way to improve my numpy code to compute even faster ?
3: Or any other solutions, but it must be a python/cython (like parallel
computing) ?
Thank you.
Answer: Not sure where you are getting your timings, but you can use
[`scipy.spatial.distance`](http://docs.scipy.org/doc/scipy/reference/spatial.distance.html):
import numpy as np
import scipy.spatial.distance as sd
M = np.arange(6000*3, dtype=np.float64).reshape(6000,3)
np_result = calcul2(M)
sp_result = sd.cdist(M.T, M.T) #Scipy usage
np.allclose(np_result, sp_result)
>>> True
Timings:
%timeit calcul2(M)
1000 loops, best of 3: 313 µs per loop
%timeit sd.cdist(M.T, M.T)
10000 loops, best of 3: 86.4 µs per loop
Importantly, its also useful to realize that your output is symmetric:
np.allclose(sp_result, sp_result.T)
>>> True
An alternative is to only compute the upper triangular of this array:
%timeit sd.pdist(M.T)
10000 loops, best of 3: 39.1 µs per loop
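If you later need the full square matrix back from that condensed form, `squareform` will expand it (a quick sketch):
    from scipy.spatial.distance import pdist, squareform
    full = squareform(pdist(M.T))  # same shape as cdist(M.T, M.T)
    np.allclose(full, sp_result)
    >>> True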
Edit: Not sure which index you want to zip, looks like you may be doing it
both ways? Zipping the other index for comparison:
%timeit sd.pdist(M)
10 loops, best of 3: 135 ms per loop
Still about 10x faster than your current NumPy implementation.
|
Parse and combine date and time (leap seconds) from strings - python
Question: I parse a date (format: YYYY-MM-DD HH:MM:SS) from a data file which contains
multiple lines of dates.
The problem is that the data contains **leap seconds**, so I'm not able to use
_datetime_. How can I take into account the leap seconds (0-60), so that at
the end I get the same result as if I had used _datetime.strptime_
from the string with the format above (thus, date+time), please?
I have already tried with _combine_ using _date_ for the date and _time_ for
the time string. Is it the right way or are there some others?
Thanks in advance.
Answer: Just [use `time.strptime()`](http://stackoverflow.com/a/21029510/4279):
#!/usr/bin/env python
import datetime as DT
import time
from calendar import timegm
utc_time_string = '2012-06-30 23:59:60'
utc_time_tuple = time.strptime(utc_time_string, "%Y-%m-%d %H:%M:%S")[:6]
utc_dt = DT.datetime(1970, 1, 1) + DT.timedelta(seconds=timegm(utc_time_tuple))
# -> datetime.datetime(2012, 7, 1, 0, 0)
If the input time is not in UTC then you could handle the leap second in the
`time_tuple` manually e.g., the `datetime` module may raise `ValueError` if
you pass the leap second directly or it may silently truncate `60` to `59` if
it encounters a leap second in indirect (internal) calls.
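For instance, a minimal sketch of handling it manually for a local (non-UTC) time, under the assumption that rounding the leap second down to :59 is acceptable for your data:
    local_time_string = '2012-06-30 23:59:60'
    tt = time.strptime(local_time_string, "%Y-%m-%d %H:%M:%S")  # accepts seconds up to 61
    # Clamp the leap second before handing the fields to datetime.
    local_dt = DT.datetime(tt.tm_year, tt.tm_mon, tt.tm_mday,
                           tt.tm_hour, tt.tm_min, min(tt.tm_sec, 59))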
|
Python using Google App Engine. Upload file feature that stores files in the GAE datastore
Question: I found code for a guestbook storing text in a datastore. I've been looking
all morning to find how would i modify my code to upload a file instead of
reading from the textfield. and displaying the file details after displaying
it. I would appreciate any help? or maybe there's an answer already out there
i just haven't found it. Here's my code so far:
import cgi
import datetime
import urllib
import wsgiref.handlers
from google.appengine.ext import db
from google.appengine.api import users
import webapp2
class Greeting(db.Model):
author = db.UserProperty()
content = db.StringProperty(multiline=True)
date = db.DateTimeProperty(auto_now_add=True)
def upload_key(upload_name=None):
return db.Key.from_path('Upload', upload_name or 'default_upload')
class MainPage(webapp2.RequestHandler):
def get(self):
self.response.out.write('<html><body>')
upload_name=self.request.get('upload_name')
greetings = db.GqlQuery("SELECT * "
"FROM Greeting "
"WHERE ANCESTOR IS :1 "
"ORDER BY date DESC LIMIT 10",
upload_key(upload_name))
for greeting in greetings:
if greeting.author:
self.response.out.write(
'<b>%s</b> wrote:' % greeting.author.nickname())
else:
self.response.out.write('An anonymous person wrote:')
self.response.out.write('<blockquote>%s</blockquote>' %
cgi.escape(greeting.content))
self.response.out.write("""
<form action="/sign?%s" method="post">
<div><textarea name="content" rows="3" cols="60"></textarea></div>
<div><input type="submit" value="Upload a File"></div>
</form>
<hr>
<form>Name: <input value="%s" name="upload_name">
<input type="submit" value="switch user"></form>
</body>
</html>""" % (urllib.urlencode({'upload_name': upload_name}),
cgi.escape(upload_name)))
class Upload(webapp2.RequestHandler):
def post(self):
upload_name = self.request.get('upload_name')
greeting = Greeting(parent=upload_key(upload_name))
if users.get_current_user():
greeting.author = users.get_current_user()
greeting.content = self.request.get('content')
greeting.put()
self.redirect('/?' + urllib.urlencode({'upload_name': upload_name}))
APP = webapp2.WSGIApplication([
('/', MainPage),
('/sign', Upload)
], debug=True)
def main():
APP.run()
if __name__ == '__main__':
main()
Answer: There are two basic approaches. The traditional approach, and the approach
you'll find the most samples for, is the Blobstore API. The new approach is
Google Cloud Storage. The advantages of the Blobstore is that there are more
existing samples, but the advantage of GCS is that the same code can work
outside the context of App Engine.
**APPROACH 1 - BLOBSTORE API - EASIER**
Here are the official [Blobstore
docs](https://cloud.google.com/appengine/docs/python/blobstore/#Python_Complete_sample_application)
with samples.
Here's a [similar Stack Overflow
question.](http://stackoverflow.com/questions/81451/upload-files-in-google-
app-engine)
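For orientation, here is a minimal sketch of the Blobstore route with webapp2; the handler names and URL paths are placeholders, not part of your existing app:
    from google.appengine.ext import blobstore
    from google.appengine.ext.webapp import blobstore_handlers
    import webapp2
    class UploadFormPage(webapp2.RequestHandler):
        def get(self):
            # The form must POST to a URL generated by the Blobstore API.
            upload_url = blobstore.create_upload_url('/upload')
            self.response.out.write(
                '<form action="%s" method="POST" enctype="multipart/form-data">'
                '<input type="file" name="file">'
                '<input type="submit" value="Upload a File"></form>' % upload_url)
    class UploadHandler(blobstore_handlers.BlobstoreUploadHandler):
        def post(self):
            blob_info = self.get_uploads('file')[0]  # BlobInfo: filename, size, key...
            self.redirect('/?uploaded=%s' % blob_info.key())
    APP = webapp2.WSGIApplication([
        ('/form', UploadFormPage),
        ('/upload', UploadHandler),
    ], debug=True)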
**APPROACH 2 - GOOGLE CLOUD STORAGE API - BETTER**
For Google Cloud Storage, the official client library is [gcloud-
python](https://github.com/googlecloudplatform/gcloud-python). Since this is
not part of the App Engine SDK, you will generally "vendor" it (include it
directly in your project) before you deploy the App Engine app using `pip -t`
flag, and modififying an `appengine_config.py` file. See the instructions for
that in ["Installing a
library"](https://cloud.google.com/appengine/docs/python/tools/using-
libraries-python-27). The short version of the story is that you do
mkdir lib
pip install gcloud-python -t lib
then add an `appengine_config.py` with the following lines:
from google.appengine.ext import vendor
# Third-party libraries are stored in "lib", vendoring will make
# sure that they are importable by the application.
vendor.add('lib')
Finally, we walk through using this API in a Python app [in this
tutorial](https://cloud.google.com/python/getting-started/using-cloud-storage)
|
Python 3 - webbrowser module - open() function
Question: I wrote a piece of code in Python; I'm a beginner and I'm learning modules.
import webbrowser as ac
ac.open("istihza.com")
It works correctly, but when I run it the site opens in Internet Explorer.
I want it to open in Google Chrome. Is there a parameter to change the
browser, or do I need something else?
Answer: Simply grab the appropriate controller instance and open the url with that:
import webbrowser as ac
chrome = ac.get('chrome')
chrome.open('istihza.com')
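If `get('chrome')` is not recognised on your system, `webbrowser.get()` also accepts a command line pointing at the browser executable; a sketch follows (the path is an assumption, adjust it to wherever Chrome is installed):
    import webbrowser as ac
    # Example Windows path; the '%s' placeholder receives the URL.
    chrome_path = 'C:/Program Files (x86)/Google/Chrome/Application/chrome.exe %s'
    ac.get(chrome_path).open('http://istihza.com')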
|
If and else statement in Google Places API script in Python
Question: I am attempting to write a script that will loop through list items and query
the google places api.
The problem is that some of the queries will return no results, while other
queries will.
The query results are gathered into lists. For every query that returns no
results I would like to insert a 'no results' string into the list.
This is the script I have so far (API Key is fake):
companies = ['company A', 'company B', 'company C']
#create list items to store API search results
google_name = []
place_id = []
formatted_address = []
#function to find company id and address from company names
def places_api_id():
api_key = 'AIzaSyAKCp1kN0cHvO7t_NlqMagergrghhehtsrht'
url = 'https://maps.googleapis.com/maps/api/place/textsearch/json'
#replace spaces within list items with %20
company_replaced = company.replace(' ', '%20')
final_url = url + '?query=' + company_replaced +'&key=' + api_key
json_obj = urllib2.urlopen(final_url)
data = json.loads(json_obj)
#if no results, insert 'no results'
if data['status'] == 'ZERO RESULTS':
google_name.append('no results')
place_id.append('no results')
formatted_address('no results')
#otherwise, insert the result into list
else:
for item in data['results']:
google_name.append(item['name'])
place_id.append(item['place_id'])
formatted_address.append(item['formatted_address'])
#run the script
for company in companies:
places_api_id()
Unfortunately when I run the script python produces the following error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-159-eadf5f84e27f> in <module>()
1 for company in companies:
----> 2 places_api_id()
3
<ipython-input-153-f0e25b871a0e> in places_api_id()
6 final_url = url + '?query=' + company_replaced +'&key=' + api_key
7 json_obj = urllib2.urlopen(final_url)
----> 8 data = json.loads(json_obj)
9 if data['status'] == 'ZERO RESULTS':
10 google_name.append('no results')
/usr/lib/python2.7/json/__init__.pyc in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
336 parse_int is None and parse_float is None and
337 parse_constant is None and object_pairs_hook is None and not kw):
--> 338 return _default_decoder.decode(s)
339 if cls is None:
340 cls = JSONDecoder
/usr/lib/python2.7/json/decoder.pyc in decode(self, s, _w)
364
365 """
--> 366 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
367 end = _w(s, end).end()
368 if end != len(s):
TypeError: expected string or buffer
I would really appreciate your help and advice on how to get this script
working, I've been staring at it for hours.
Thank you Kamil
UPDATE
I am now loooping the following list through the script:
companies = ['MARINE AND GENERAL MUTUAL LIFE ASSURANCE SOCIETY',
'KENTSTONE PROPERTIES LIMITED',
'ASHFORD CATTLE MARKET COMPANY LIMITED(THE)',
'ORIENTAL GAS COMPANY, LIMITED(THE)',
'BRITISH INDIA STEAM NAVIGATION COMPANY LIMITED',
'N & C BUILDING PRODUCTS LIMITED',
'UNION MARINE AND GENERAL INSURANCE COMPANY LIMITED,(THE)',
'00000258 LIMITED',
'METHODIST NEWSPAPER COMPANY LIMITED',
'LONDON AND SUBURBAN LAND AND BUILDING COMPANY LIMITED(THE)']
after I run the script this is what Google Places API returns in the google
name list:
[u'The Ashford Cattle Market Co Ltd',
u'Orient Express Hotels',
u'British-India Steam-Navigation Co Ltd',
u'N-Of-One, Inc.',
u'In-N-Out Burger',
u'In-N-Out Burger Distribution Center',
u"Wet 'n Wild Orlando",
u'In-N-Out Burger',
u'Alt-N Technologies (MDaemon)',
u'Model N Inc',
u"Pies 'n' Thighs",
u"Bethany Women's Center",
u"Jim 'N Nick's Bar-B-Q",
u"Steak 'n Shake",
u'New Orleans Ernest N. Morial Convention Center',
u"Jim 'N Nick's Bar-B-Q",
u"Jim 'N Nick's Bar-B-Q",
u"Jim 'N Nick's Bar-B-Q",
u'Theatre N at Nemours',
u'Model N',
u"Jim 'N Nick's Bar-B-Q",
u'Memphis Rock n Soul Museum',
u"Eat'n Park - Squirrel Hill",
u'Travelers',
u'American General Life Insurance Co',
u'258 Ltd Rd',
u'The Limited',
u'258, New IPCL Rd',
u'London Metropolitan Archives',
u'Hampstead Garden Suburb Trust Ltd']
Majority of the company names returned by Google are not even on the companies
list and also there are many more of them. I am really confused now.
Answer: The error is not at the `if`-line, but before. `json_obj` is a file-like
object, not a string, therefore you have to use `load`:
data = json.load(json_obj)
PS: if the status is not what you expect, you can just test if
`data['results']` is empty or not:
import json
import urllib2
from collections import namedtuple
API_KEY = 'AIzaSyAKCp1kN0cHvO7t_NlqMagergrghhehtsrht'
URL = 'https://maps.googleapis.com/maps/api/place/textsearch/json?query={q}&key={k}'
Place = namedtuple("Place", "google_name,place_id,formatted_address")
#function to find company id and address from company names
def places_api_id(company):
places = []
url = URL.format(q=urllib2.quote(company), k=API_KEY)
json_obj = urllib2.urlopen(url)
data = json.load(json_obj)
if not data['results']:
places.append(Place("no results", "no results", "no results"))
else:
for item in data['results']:
places.append(Place(item['name'], item['place_id'], item['formatted_address']))
return places
companies = ['company A', 'company B', 'company C']
places = []
for company in companies:
places.extend(places_api_id(company))
|
Python3: zip in range
Question: I'm new to Python and I'm trying to zip 2 lists together into 1, which I was
already able to do. I've got 2 lists with several things in them, but I'm
asking the user to input a number, which should determine the range. So i've
got List1: A1, A2, A3, A4, A5, A6 and List2: B1,B2,B3,B4,B5,B6 I know how to
display the 2 complete lists, but what I'm trying to do is, if the user enters
number "3", the zip should only display: (A1,B1), (A2,B2), (A3,B3) instead of
the whole list. So here's what I tried:
a = ["A1", "A2", "A3", "A4", "A5", "A6"]
b = ["B1", "B2", "B3", "B4", "B5", "B6"]
c = zip(a,b)
number = int(input("please enter number"))
for x in c:
print(x[:number])
But this doesn't work. I tried to look it up, but couldn't find anything. I
would be glad, if someone could help me out.
Answer: You can slice the result of `zip()` with
[`itertools.islice()`](https://docs.python.org/3/library/itertools.html#itertools.islice):
>>> from itertools import islice
>>> list(islice(c, number))
[('A1', 'B1'), ('A2', 'B2'), ('A3', 'B3')]
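Alternatively, since `zip()` in Python 3 returns a one-shot iterator, it can be simpler to materialise the pairs once and take ordinary list slices:
    >>> pairs = list(zip(a, b))
    >>> pairs[:number]
    [('A1', 'B1'), ('A2', 'B2'), ('A3', 'B3')]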
|
How to Freeze a pane while re-sizing a window that has frames in python using tkinter and pack?
Question: This is my first post on stackoverflow. I am finally posting because I can not
find this anywhere and have been searching for nearly 4 hours, but I am stuck.
Here is my code example:
import tkinter as tk
from tkinter import *
root = tk.Tk()
root.geometry("600x100+200+200")
leftverticalFrame = Frame(root)
leftverticalFrame.pack(side=LEFT)
middleverticlFrame = Frame(root)
middleverticlFrame.pack(expand=TRUE)
rightverticalFrame = Frame(root)
rightverticalFrame.pack(side=RIGHT)
right = tk.Label(rightverticalFrame, text="Right Vertical Status Frame", bg="yellow")
right.pack(side=tk.RIGHT, fill=BOTH)
left = tk.Label(leftverticalFrame, text = "Left Vertical Navigation Frame", bg="orange")
left.pack(side=tk.LEFT, fill=BOTH)
bottom = tk.Label(middleverticlFrame, text="Middle Vertical Frame", bg="blue")
bottom.pack(side=tk.BOTTOM, expand=True, fill=tk.BOTH)
root.mainloop()
What I am doing is merely trying to layout the frames individually within the
root because the frames will use different managers. The left frame is
functioning exactly as I want it to, as is the middle frame. The problem is
with the frame on the right.
Notice when you re-size the window making it more narrow, the right frame
comes into the "middle frame's territory". Now the strange thing is the middle
frame does not replicate the same behavior when it comes to the boundary of
the left frame. I want the right frame to behave the same as the middle frame.
Essentially I am trying to make the Left and Right fairly static, but the
middle frame more dynamic. Can anyone tell me what I am doing wrong please?
Answer: An important thing to remember about `pack` is that the `side` attribute
doesn't refer to the side of the window, it refers to the _side of the
remaining available space_. The causes the order in which you pack things and
the side that you pack them to be significant, because each time you pack
something you change the location and amount of remaining available space.
In this case, the problem is that you didn't specify the `side` attribute for
the middle frame, so it defaults to `"top"` (as in, "top of the remaining
space", _not_ "top of the window"). Since there's already something on the
left, this puts it at the top of the remaining space on the right. Then, when
you put the next item on the right, it's on the right but below the thing that
is on the top.
There are at least a couple ways to solve this. The first is to pack the left
and right sides first, and then pack the middle. In this case it doesn't
matter which side you put the middle frame:
leftverticalFrame.pack(side=LEFT)
rightverticalFrame.pack(side=RIGHT)
middleverticlFrame.pack(expand=TRUE, side=TOP)
The second solution is to leave them in the original order, but pack the
middle frame on the left or right instead of the top:
leftverticalFrame.pack(side=LEFT)
middleverticlFrame.pack(expand=TRUE, side=LEFT)
rightverticalFrame.pack(side=RIGHT)
These two variations will initially look identical, or perhaps nearly
identical depending on what else might be in the frames or in the window.
However, the behavior is different when you start to make the window too small
to fit all of the frames.
In such a case, tkinter must eventually start reducing the size of a widget.
It does this in the reverse order that they were packed (read: the last one to
be packed is the first one to be shrunk). That means that if you want the left
and right to be fixed as much as possible, you should pack the middle section
last.
* * *
pro tip: it makes your code easier to read and maintain if you group all of
your layout code together. Consider this code:
f1 = Frame(...)
f1.pack(...)
f2 = Frame(...)
f2.pack(...)
I think you'll find over time that your code is easier to read and maintain if
you write it like this:
f1 = Frame(...)
f2 = Frame(...)
...
f1.pack(...)
f2.pack(...)
...
I think it makes the code much easier to visualize, since all of the layout
for a given parent window is in one place rather than sprinkled throughout the
code.
|
convert object to json string after function that needs python dict
Question: I have a Python function/method that takes in a student and will show
a profile... I realized I need to return context as a JSON string. How do I do
that?
context["student"] = db.query_dict(student_profile_sql.format(student_id=self.kwargs["student_id"])
)[0]
appear(self.request, "Show profile", {
"student_name": context["student"]["first_name...
})
return context  # i need to return context as a json string - how can i do that?
How can i return context as a json string?
Answer: Import the `json` library:
import json
Then use `json.dumps`:
return json.dumps(context)
From the [Python
documentation](https://docs.python.org/2/library/json.html#json.dumps):
> **`json.dumps(obj, ...)`**
>
> Serialize obj to a JSON formatted `str`
|
Extract list from a string
Question: I am extracting data from the Google Adwords Reporting API via `Python`. I can
successfully pull the data and then hold it in a variable data.
data = get_report_data_from_google()
type(data)
str
Here is a sample:
data = 'ID,Labels,Date,Year\n3179799191,"[""SKWS"",""Exact""]",2016-05-16,2016\n3179461237,"[""SKWS"",""Broad""]",2016-05-16,2016\n3282565342,"[""SKWS"",""Broad""]",2016-05-16,2016\n'
I need to process this data more, and ultimately output a processed flat file
(Google Adwords API can return a CSV, but I need to pre-process the data
before loading it into a database.).
If I try to turn `data` into a `csv` object, and try to print each line, I get
one character per line like:
c = csv.reader(data, delimiter=',')
for i in c:
print(i)
['I']
['D']
['', '']
['L']
['a']
['b']
['e']
['l']
['s']
['', '']
['D']
['a']
['t']
['e']
So, my idea was to process each column of each line into a list, then add that
to a `csv` object. Trying that:
for line in data.splitlines():
print(line)
3179799191,"[""SKWS"",""Exact""]",2016-05-16,2016
What I actually find is that inside of the `str`, there is a list:
"[""SKWS"",""Exact""]"
This value is a "label"
[documentation](https://developers.google.com/adwords/api/docs/appendix/reports/adgroup-
performance-report#labels)
This list is formatted a bit weird - it has numerous parentheses in the value,
so trying to use a quote char, like ", will return something like this: [ SKWS
Exact ]. If I could get to [""SKWS"",""Exact""], that would be acceptable.
Is there a good way to extract a list object within a `str`? Is there a better
way to process and output this data to a csv?
Answer: You need to split the string first. `csv.reader` expects something that
provides a single line on each iteration, like a standard file object does. If
you have a string with newlines in it, split it on the newline character with
`splitlines()`:
>>> import csv
>>> data = 'ID,Labels,Date,Year\n3179799191,"[""SKWS"",""Exact""]",2016-05-16,2016\n3179461237,"[""SKWS"",""Broad""]",2016-05-16,2016\n3282565342,"[""SKWS"",""Broad""]",2016-05-16,2016\n'
>>> c = csv.reader(data.splitlines(), delimiter=',')
>>> for line in c:
... print(line)
...
['ID', 'Labels', 'Date', 'Year']
['3179799191', '["SKWS","Exact"]', '2016-05-16', '2016']
['3179461237', '["SKWS","Broad"]', '2016-05-16', '2016']
['3282565342', '["SKWS","Broad"]', '2016-05-16', '2016']
|
Python convert date string to timestamp
Question: I need to convert string type _Wed, 18 May 2016 11:21:35 GMT_ to timestamp, in
Python. I'm using:
datetime.datetime.strptime(string, format)
But I don't want to specify the format for the date type.
Answer: > But I don't want to specify the format for the date type.
Then, let the [`dateutil`](https://labix.org/python-dateutil) parser figure
that out:
>>> from dateutil.parser import parse
>>> parse("Wed, 18 May 2016 11:21:35 GMT")
datetime.datetime(2016, 5, 18, 11, 21, 35, tzinfo=tzutc())
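If you need a Unix timestamp rather than a datetime object, one way (given that the parsed value above is timezone-aware UTC) is:

    >>> import calendar
    >>> dt = parse("Wed, 18 May 2016 11:21:35 GMT")
    >>> calendar.timegm(dt.utctimetuple())
    1463570495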
|
From Python to Mathematica and back again
Question: I wrote a Python script, which produces an output that goes to a file. This is
read as an input file by Mathematica, that then uses it to make some
operations and finally returns another output file. In turn, this last file
should be read by the same initial Python script, to perform some more
operations.
My question is: what is the simplest (but efficient) way to do that?
I will write in the following a (very simplified) example of what I am dealing
with. I start with my python script `python_script.py`: this produces an array
`arr` that is saved in the file `"arr.txt"`
import numpy as np
arr = np.arange(9).reshape(3,3)
np.savetxt('arr.txt', arr, delimiter=' ')
This file is read by my Mathematica notebook `nb_Mathematica.nb`. This for
example could produce another array, in turn saved in another file,
`"arr2.txt"`
file = Import["arr.txt","Table"]
b = ArrayReshape[file, {3,3}]
c = {{1,1,1},{1,1,1},{1,1,1}}
d = b + c
Export["arr2.txt", d]
And now `"arr2.txt"` must be read by the original Python script. How is it
possible to do that? How in particular can I stop the Python script, start
Mathematica and then go back to the Python script?
Answer: One way to do this:
* Put your Mathematica code into a plain text file for example `make_arr.m`
* Use command line interface of Mathematica:
* `math -script make_arr.m`
* From python invoke the above with the [`subprocess`](https://docs.python.org/2/library/subprocess.html) module
* `subprocess.call(["math", "-script", "make_arr.m"])`
Optionally you can use command line arguments in the Mathematica script:
`file_name = $CommandLine[[4]]`
[Further to
read](http://reference.wolfram.com/language/tutorial/WolframLanguageScripts.html)
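A minimal sketch of the Python side, assuming the Mathematica code above is saved as `make_arr.m` and writes `arr2.txt` in the same directory:

    import subprocess
    import numpy as np

    arr = np.arange(9).reshape(3, 3)
    np.savetxt('arr.txt', arr, delimiter=' ')

    # blocks until Mathematica has finished and written arr2.txt
    subprocess.call(["math", "-script", "make_arr.m"])

    arr2 = np.loadtxt('arr2.txt')
    print(arr2)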
|
Import Errors with Python script run in R
Question: I have a Python program, which searches for an anomaly (First train, then
test). Now I need to start this Python program from RStudio. I have read about
`system('python myfirstpythonfile.py')`, but when I launch my Python program
in this way I have import errors with `numpy`, `scipy`, etc.
How can I launch my Python program from RStudio?
Answer: Having problems importing `numpy` or `scipy` suggests that your script is not
running in the correct Python _environment_. It is possible to install
multiple versions of Python on a computer, and which one is run when you type
`python` is determined by the `PATH` setting. It may be that when RStudio
executes your script (via `python myfirstpythonfile.py`) it is launching the
wrong Python — a version of Python on your computer that does not have the
`numpy` packages installed.
You can test if this is the case by running the following on the command line
and seeing what it outputs:
python -c "import sys; print(sys.executable)"
You can try the same from within RStudio:
system('python -c "import sys; print(sys.executable)"')
If it gives a different result, you can pass the result of the first as an
absolute path to python (changing /path/to/python for the correct value for
your system):
system('/path/to/python myfirstpythonfile.py')
As you mention in the comments that you are actually trying to use Python3,
then you may be able to simply do the following from within RStudio:
system('python3 myfirstpythonfile.py')
This will run your script using your installed Python3 and the associated
packages/libraries.
|
How to make an embedded python interpreter's local space share some variables with global space
Question: I have written a command line widget in my pyside program. The structure is
like this:
class CommandWidget(QWidget):
def __init__(self, parent = None):
super(CommandWidget, self).__init__(parent)
self.buffer=PyInterp(self)
self.buffer.initInterpreter(locals())
self........
class PyInterp(QTextEdit):
class InteractiveInterpreter(code.InteractiveInterpreter):
def __init__(self, locals):
code.InteractiveInterpreter.__init__(self, locals)
def runIt(self, command):
code.InteractiveInterpreter.runsource(self, command)
def __init__(self, parent = None):
super(PyInterp, self).__init__(parent)
I also have a mainwindow program running together with some other widgets. My
question is: how can I import some functions from another widget class into this
interpreter, run the function and output the result to the global space? Or in
other words, I want to share some variables between the local space of the
interpreter and the global mainwindow space. How can I achieve that?
**EDIT:** This is the data type I want to put into a signal.
class PosType(QObject):
def __init__(self, nx, ny, nz, start_pos, type):
self.nx = nx
self.ny = ny
self.nz = nz
self.start_pos = start_pos
self.type = type
This is the signal.
class PosSig(QObject):
sig = Signal(PosType)
def emit_sig(self, pos_data):
self.sig.emit(pos_data)
This is the function I want to put into the interpreter, so that when it is
called it will emit a signal.
def graphene(nx, ny, start_pos):
pos_info = PosType(nx = nx, ny = ny, nz = None, start_pos = start_pos, type = 1)
tmp_sig = PosSig()
tmp_sig.emit_sig(pos_info)
return
The above classes are in a file called ExposeFunc.py, and I plan to import
this .py file in the interpreter, then call the graphene function to emit the
signal.
In the mainwindow class, I have a slot.
def __init__(self):
#Interpreter Signals :
possig = PosSig()
possig.sig.connect(self.createObject)
@Slot(PosType)
def createObject(self, pos_info):
type = pos_info.type
if type == 1:
SharedItems.QS._FillData(pos_info.nx, pos_info.ny, start_pos)
return
Answer: There are a couple of mechanisms. You can use `QtCore.Signal()` and
`@QtCore.Slot()` if you have things that are being passed at a certain point.
You can put almost anything into a `Signal`; see the signal/slot example below. [Qt Signals
& Slots Documentation](http://doc.qt.io/qt-4.8/signalsandslots.html)
Another mechanism, which I am less versed in, would be the
[`QQueue`](http://doc.qt.io/qt-4.8/qqueue.html) class. The issue you need to
address is that you are passing data across threads, so data access needs to be
protected.
Signal/Slot example:
class myDataType(QObject):
def __init__(self, data):
self.data = data
...
class foo(QObject):
mySignal = QtCore.Signal(myDataType)
def __init__(self):
...
def someFunction(self, data):
            self.mySignal.emit(data)
class bar(QObject):
def __init__(self):
self.otherObject = foo()
self.otherObject.mySignal.connect(self.handler)
@QtCore.Slot(myDataType)
def handler(self, data):
do something with data
**ADDENDUM 1:**
Let's say you have a `QMainWindow`
class myMainWindow(QtGui.QMainWindow):
mainWindowSignal = QtCore.Signal(QObject)
def __init__(self, parent, *vargs, **kwargs):
...
self.myCommandWidget = CommandWidget(parent=self)
self.myCommandButton = QtGui.QPushButton("Press Me")
#This connects the button being clicked to a function.
self.myCommandButton.clicked.connect(self.button_pressed)
#This connects the Signal we made 'mainWindowSignal' to
# the do_something Slot in CommandWidget 'myCommandWidget'
self.mainWindowSignal.connect(self.myCommandWidget.do_something)
#This connects the Signal from 'myCommandWidget' 'dataReady'
# to Slot 'data_returned' to handle the data
self.myCommandWidget.dataReady.connect(self.dataReady)
self.data = "some data"
#We don't have to decorate this, but should
@QtCore.Slot()
def button_pressed(self):
            self.mainWindowSignal.emit(self.data)
@QtCore.Slot(str, int)
def data_returned(self, strValue, intValue):
#do something with the data.
#e.g.
self.command = strValue
self.retCode = intValue
class CommandWidget(QWidget):
dataReady = QtCore.Signal(str, int)
def __init__(self, parent=None):
#stuff you had here.
...
        @QtCore.Slot(QObject)
def do_something(self, data):
retStr = self.buffer.... #insert your function calls here
retInt = self.buffer....
            self.dataReady.emit(retStr, retInt)
|
Retrieve created_at timestamp from pgsql
Question: I'm a Python newbie. I wrote an sql query to retrieve created_at timestamp in
pgsql. When I called the method `strftime('%x')` on it, I got this error:
AttributeError: 'long' object has no attribute 'strftime'
This is the query:
SELECT created_at FROM rating WHERE user_id = 'xxxxx' ORDER BY id DESC LIMIT 2;
When I printed the result of the query, I merely got `[(3L,)]` which is just
one of the two created_at times expected. How do I convert this back to
python's datetime?
Answer: The error tells you that the value you got back is a `long`, not a datetime, so
`strftime` is not available on it. Also, when you do call `strftime` on a real
datetime you need to pass a format string, for example
`created_at.strftime('%y %B %d')`.
Finally, it's actually quicker to process and convert the time in SQL rather
than using `strftime`.
A simpler and more performant solution would be to just format the value in the SQL
itself:
SELECT to_char(created_at,'YY-MM-DD HH24:MI:SS') FROM rating WHERE user_id = 'xxxxx' ORDER BY id DESC LIMIT 2;
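A minimal sketch with psycopg2, assuming `conn` is an open connection and `created_at` is a timestamp column; each row then comes back as a ready-formatted string:

    cursor = conn.cursor()
    cursor.execute("SELECT to_char(created_at,'YY-MM-DD HH24:MI:SS') FROM rating WHERE user_id = 'xxxxx' ORDER BY id DESC LIMIT 2")
    for (created_at,) in cursor.fetchall():
        print created_at  # already a formatted string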
|
Cutting out a portion of video - python
Question: I have videos of length approximately 25 min each and I wish to cut a few
seconds from the start using python.
Searching about it, I stumbled upon the moviepy package for python. The
problem is, it takes up a lot of time even for a single video. Following is
the code snippet I use to cut 7 seconds from the start of a single video. The
write process consumes a lot of time. Is there a better way to cut the videos
using python?
from moviepy.editor import *
clip = VideoFileClip("video1.mp4").cutout(0, 7)
clip.write_videofile("test.mp4")
Please let me know if I have missed out any details.
Any help is appreciated. Thanks!
Answer: Try this and tell us if it is faster (if it can, it will extract the video
directly using ffmpeg, without decoding and reencoding):
from moviepy.video.io.ffmpeg_tools import ffmpeg_extract_subclip
ffmpeg_extract_subclip("video1.mp4", t1, t2, targetname="test.mp4")
If that doesn't help, have a look at the
[code](https://github.com/Zulko/moviepy/blob/master/moviepy/video/io/ffmpeg_tools.py#L27)
|
how to justify text in label in tkinter in python Need justify in tkinter
Question: In Tkinter in Python: I have a table built from different labels. How can I justify
the text that is in each label? Because it is a table, the texts in
different labels run together!
from tkinter import *
root=Tk()
a=Label(root,text='Hello World!')
a.pack()
a.place(x=200,y=200)
b=Label(root,text='Bye World')
b.pack()
b.place(x=200,y=100)
I want something to justify/center the text in a label, but what I have found is not
what I need. Please check this:
[link](http://s6.uplod.ir/i/00778/5p8kfr6qjgx3.png)
Answer: Instead of using .pack() I would use .grid():
<http://effbot.org/tkinterbook/grid.htm>
grid will allow better management of your components.
Find below an example of usage and management:
Label(root, text="First").grid(row=0, sticky=W)
Label(root, text="Second").grid(row=1, sticky=W)
    entry1 = Entry(root)
    entry2 = Entry(root)
    entry1.grid(row=0, column=1)
    entry2.grid(row=1, column=1)
    # checkbutton, image, button1 and button2 below stand for widgets
    # you would create the same way before placing them on the grid
    checkbutton.grid(columnspan=2, sticky=W)
image.grid(row=0, column=2, columnspan=2, rowspan=2,
sticky=W+E+N+S, padx=5, pady=5)
button1.grid(row=2, column=2)
button2.grid(row=2, column=3)
You would end up using grid options such as `sticky` and `padx` to "justify" your labels.
|
Multiple Postgres SELECT processes(django GET requests) stuck, causing 100% CPU usage
Question: I'll try to give as much information as I can here. Although a solution would
be great, I just want guidance on how to tackle the problem and how to view more
useful log files, etc., as I'm new to server maintenance. Any advice is
welcome.
Here's what's happenning in chronological order:
* I'm running 2 digitalocean droplets (Ubuntu 14.04 VPS)
* Droplet #1 running django, nginx, gunicorn
* Droplet #2 running postgres
* Everything runs fine for a month and suddenly the postgres droplet CPU usage spiked to 100%
* You can see htop log when this happens. I've attached a screenshot
* Another screenshot is nginx error.log, you can see that problem started at 15:56:14 where I highlighted with red box
* sudo poweroff the Postgres droplet and restart it doesn't fix the problem
* Restore postgres droplet to my last backup (20 hours ago) solves the problem but it keep happening again. This is 7th time in 2 days
I'll continue to do research and give more information. Meanwhile any opinions
are welcome.
Thank you.
[](http://i.stack.imgur.com/mm80Q.jpg)
[](http://i.stack.imgur.com/uyKer.jpg)
**Update 20 May 2016**
* Enabled slow query logging on Postgres server as recommended by _e4c5_
* 6 hours later, server freezed(100% CPU usage) again at 8:07 AM. I've attached all related screenshots
* Browser display 502 error if try to access the site during the freeze
* `sudo service restart postgresql` (and gunicorn, nginx on django server) does **NOT** fix the freeze (**I think this is a very interesting point**)
* However, restore Postgres server to my previous backup(now 2 days old) **does** fix the freeze
* The culprit **Postgres log** message is **Could not send data to client: Broken Pipe**
* The culprit **Nginx log** message is a simple django-rest-framework
api call which return only 20 items (each with some foreign-key data query)
**Update#2 20 May 2016** When the freeze occurs, I tried doing the following
in chronological order (turn off everything and turn them back on one-by-one)
* `sudo service stop postgresql` \--> cpu usage fall to 0-10%
* `sudo service stop gunicorn` \--> cpu usage stays at 0-10%
* `sudo service stop nginx`\--> cpu usage stays at to 0-10%
* `sudo service restart postgresql` \--> cpu usage stays at to 0-10%
* `sudo service restart gunicorn` \--> cpu usage stays at to 0-10%
* `sudo service restart nginx` \--> **cpu usage rose to 100% and stays there**
So this is not about server load or long query time then?
This is very confusing since if I restore database to my latest backup (2 days
ago), everything is back online even without touching nginx/gunicorn/django
server...
* * *
Update 8 June 2016 I turned on slow query logging. Set it to log queries that
take longer than 1000ms.
I got this one query showing up in the log many times.
SELECT
"products_product"."id",
"products_product"."seller_id",
"products_product"."priority",
"products_product"."media",
"products_product"."active",
"products_product"."title",
"products_product"."slug",
"products_product"."description",
"products_product"."price",
"products_product"."sale_active",
"products_product"."sale_price",
"products_product"."timestamp",
"products_product"."updated",
"products_product"."draft",
"products_product"."hitcount",
"products_product"."finished",
"products_product"."is_marang_offline",
"products_product"."is_seller_beta_program",
COUNT("products_video"."id") AS "num_video"
FROM "products_product"
LEFT OUTER JOIN "products_video" ON ( "products_product"."id" = "products_video"."product_id" )
WHERE ("products_product"."draft" = false AND "products_product"."finished" = true)
GROUP BY
"products_product"."id",
"products_product"."seller_id",
"products_product"."priority",
"products_product"."media",
"products_product"."active",
"products_product"."title",
"products_product"."slug",
"products_product"."description",
"products_product"."price",
"products_product"."sale_active",
"products_product"."sale_price",
"products_product"."timestamp",
"products_product"."updated",
"products_product"."draft",
"products_product"."hitcount",
"products_product"."finished",
"products_product"."is_marang_offline",
"products_product"."is_seller_beta_program"
HAVING COUNT("products_video"."id") >= 8
ORDER BY "products_product"."priority" DESC, "products_product"."hitcount" DESC
LIMIT 100
I know it's such an ugly query (generated by django aggregation). In English,
this query just means **_"give me a list of products that have at least 8
videos in them"._**
And here the EXPLAIN output of this query:
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=351.90..358.40 rows=100 width=933)
-> GroupAggregate (cost=351.90..364.06 rows=187 width=933)
Filter: (count(products_video.id) >= 8)
-> Sort (cost=351.90..352.37 rows=187 width=933)
Sort Key: products_product.priority, products_product.hitcount, products_product.id, products_product.seller_id, products_product.media, products_product.active, products_product.title, products_product.slug, products_product.description, products_product.price, products_product.sale_active, products_product.sale_price, products_product."timestamp", products_product.updated, products_product.draft, products_product.finished, products_product.is_marang_offline, products_product.is_seller_beta_program
-> Hash Right Join (cost=88.79..344.84 rows=187 width=933)
Hash Cond: (products_video.product_id = products_product.id)
-> Seq Scan on products_video (cost=0.00..245.41 rows=2341 width=8)
-> Hash (cost=88.26..88.26 rows=42 width=929)
-> Seq Scan on products_product (cost=0.00..88.26 rows=42 width=929)
Filter: ((NOT draft) AND finished)
    (11 rows)
**\--- Update 8 June 2016 #2 ---** Since there are many suggestions by many
people, I'll try to apply the fixes one by one and report back
periodically.
@e4c5 Here's the information you need:
You can think of my site somewhat like Udemy, an online course marketplace.
There are "Product"(course). Each product contain a number of videos. Users
can comment on both Product page itself and each Videos.
In many cases, I'll need to query a list of products order by number of TOTAL
comments it got(the sum of product comments AND comments on each Video of that
Product)
The django query that correspond to the EXPLAIN output above:
all_products_exclude_draft = Product.objects.all().filter(draft=False)
products_that_contain_more_than_8_videos = all_products_exclude_draft.annotate(num_video=Count('video')).filter(finished=True, num_video__gte=8).order_by('timestamp')[:30]
I just noticed that I (or some other dev on my team) hit the database twice with
these 2 Python lines.
Here's the django models for Product and Video:
from django_model_changes import ChangesMixin
class Product(ChangesMixin, models.Model):
class Meta:
ordering = ['-priority', '-hitcount']
seller = models.ForeignKey(SellerAccount)
priority = models.PositiveSmallIntegerField(default=1)
media = models.ImageField(blank=True,
null=True,
upload_to=download_media_location,
default=settings.MEDIA_ROOT + '/images/default_icon.png',
storage=FileSystemStorage(location=settings.MEDIA_ROOT))
active = models.BooleanField(default=True)
title = models.CharField(max_length=500)
slug = models.SlugField(max_length=200, blank=True, unique=True)
description = models.TextField()
product_coin_price = models.IntegerField(default=0)
sale_active = models.BooleanField(default=False)
sale_price = models.IntegerField(default=0, null=True, blank=True) #100.00
timestamp = models.DateTimeField(auto_now_add=True, auto_now=False, null=True)
updated = models.DateTimeField(auto_now_add=False, auto_now=True, null=True)
draft = models.BooleanField(default=True)
hitcount = models.IntegerField(default=0)
finished = models.BooleanField(default=False)
is_marang_offline = models.BooleanField(default=False)
is_seller_beta_program = models.BooleanField(default=False)
def __unicode__(self):
return self.title
def get_avg_rating(self):
rating_avg = self.productrating_set.aggregate(Avg("rating"), Count("rating"))
return rating_avg
def get_total_comment_count(self):
comment_count = self.video_set.aggregate(Count("comment"))
comment_count['comment__count'] += self.comment_set.count()
return comment_count
def get_total_hitcount(self):
amount = self.hitcount
for video in self.video_set.all():
amount += video.hitcount
return amount
def get_absolute_url(self):
view_name = "products:detail_slug"
return reverse(view_name, kwargs={"slug": self.slug})
def get_product_share_link(self):
full_url = "%s%s" %(settings.FULL_DOMAIN_NAME, self.get_absolute_url())
return full_url
def get_edit_url(self):
view_name = "sellers:product_edit"
return reverse(view_name, kwargs={"pk": self.id})
def get_video_list_url(self):
view_name = "sellers:video_list"
return reverse(view_name, kwargs={"pk": self.id})
def get_product_delete_url(self):
view_name = "products:product_delete"
return reverse(view_name, kwargs={"pk": self.id})
@property
def get_price(self):
if self.sale_price and self.sale_active:
return self.sale_price
return self.product_coin_price
@property
def video_count(self):
videoCount = self.video_set.count()
return videoCount
class Video(models.Model):
seller = models.ForeignKey(SellerAccount)
title = models.CharField(max_length=500)
slug = models.SlugField(max_length=200, null=True, blank=True)
story = models.TextField(default=" ")
chapter_number = models.PositiveSmallIntegerField(default=1)
active = models.BooleanField(default=True)
featured = models.BooleanField(default=False)
product = models.ForeignKey(Product, null=True)
timestamp = models.DateTimeField(auto_now_add=True, auto_now=False, null=True)
updated = models.DateTimeField(auto_now_add=False, auto_now=True, null=True)
draft = models.BooleanField(default=True)
hitcount = models.IntegerField(default=0)
objects = VideoManager()
class Meta:
unique_together = ('slug', 'product')
ordering = ['chapter_number', 'timestamp']
def __unicode__(self):
return self.title
def get_comment_count(self):
comment_count = self.comment_set.all_jing_jing().count()
return comment_count
def get_create_chapter_url(self):
return reverse("sellers:video_create", kwargs={"pk": self.id})
def get_edit_url(self):
view_name = "sellers:video_update"
return reverse(view_name, kwargs={"pk": self.id})
def get_video_delete_url(self):
view_name = "products:video_delete"
return reverse(view_name, kwargs={"pk": self.id})
def get_absolute_url(self):
try:
return reverse("products:video_detail", kwargs={"product_slug": self.product.slug, "pk": self.id})
except:
return "/"
def get_video_share_link(self):
full_url = "%s%s" %(settings.FULL_DOMAIN_NAME, self.get_absolute_url())
return full_url
def get_next_url(self):
current_product = self.product
videos = current_product.video_set.all().filter(chapter_number__gt=self.chapter_number)
next_vid = None
if len(videos) >= 1:
try:
next_vid = videos[0].get_absolute_url()
except IndexError:
next_vid = None
return next_vid
def get_previous_url(self):
current_product = self.product
videos = current_product.video_set.all().filter(chapter_number__lt=self.chapter_number).reverse()
next_vid = None
if len(videos) >= 1:
try:
next_vid = videos[0].get_absolute_url()
except IndexError:
next_vid = None
return next_vid
And here is the index of the Product and Video table I got from the command:
my_database_name=# \di
Note: this is photoshopped and includes some other models as well. [](http://i.stack.imgur.com/q83EG.jpg)
* * *
**\--- Update 8 June 2016 #3 ---** @Jerzyk As you suspected, after I inspected
all my code again, I found that I indeed did 'slicing-in-memory': I tried to
shuffle the first 10 results by doing this:
def get_queryset(self):
all_product_list = Product.objects.all().filter(draft=False).annotate(
num_video=Count(
Case(
When(
video__draft=False,
then=1,
)
)
)
).order_by('-priority', '-num_video', '-hitcount')
the_first_10_products = list(all_product_list[:10])
the_11th_product_onwards = list(all_product_list[10:])
random.shuffle(copy)
finalList = the_first_10_products + the_11th_product_onwards
Note: in the code above I need to count number of Video that is not in draft
status.
So this will be one of the thing I need to fix as well. Thanks. >_<
* * *
\--- Here are the related screenshots ---
_Postgres log when freezing occurs (log_min_duration = 500 milliseconds)_
[](http://i.stack.imgur.com/EzdlP.jpg)
_Postgres log (contunued from the above screenshot)_ [](http://i.stack.imgur.com/7IFqN.jpg)
_Nginx error.log in the same time period_ [](http://i.stack.imgur.com/KKeLa.jpg)
_DigitalOcean CPU usage graph just before freezing_ [](http://i.stack.imgur.com/SwVs7.jpg)
_DigitalOcean CPU usage graph just after freezing_ [](http://i.stack.imgur.com/K2gmV.jpg)
Answer: It's reasonable to suspect that your problems are caused by the slow query
in question. By itself each run of the query does not appear to be slow enough
to cause timeouts. However it's possible several of these queries are executed
concurrently and that could lead to the meltdown. There are two things that
you can do to speed things up.
## 1) Cache the result
The result of a long running query can be cached.
    from django.core.cache import cache

    def get_8x_videos():
        cache_key = 'products_videos_join'
        result = cache.get(cache_key, None)
        if not result:
            all_products_exclude_draft = Product.objects.all().filter(draft=False)
            result = all_products_exclude_draft.annotate(num_video=Count('video')).filter(finished=True, num_video__gte=8).order_by('timestamp')[:30]
            cache.set(cache_key, result)
        return result
This query now comes from memcache (or whatever you use for caching) that
means if you have two successive hits for the page that uses this in quick
succession, the second one will have no impact on the database. You can
control how long the object is cached in memory.
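For example, Django's `cache.set()` takes an optional timeout in seconds as a third argument (the 15 minutes here is an arbitrary choice):

    cache.set(cache_key, result, 60 * 15)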
## 2) Optimize the Query
The first thing that leaps out at you from the explain is that you are doing
sequential scan on both the `products_products` and `product_videos` tables.
Usually sequential scans are less desirable than index scans. However an index
scan _may not_ be used on this query because of the `COUNT()` and `HAVING`
`COUNT()` clauses you have on it as well as the massive `GROUP BY` clauses on
it.
Update:

Your query has a LEFT OUTER JOIN. It's possible that an INNER JOIN or a
subquery might be faster. In order to do that, we need to recognize that
grouping the `Video` table on `product_id` can give us the set of products
that have at least 8 videos.

    inner = RawSQL('SELECT product_id FROM products_video GROUP BY product_id HAVING COUNT(id) >= 8', params=[])
    Product.objects.filter(id__in=inner)

The above eliminates the LEFT OUTER JOIN and introduces a subquery. However
this doesn't give easy access to the actual number of videos for each product,
so this query in its present form may not be fully usable.
## 3) Improving indexes
While it may be tempting to create an index on `draft` and `finished` columns,
this will be futile as those columns do not have sufficient cardinality to be
good candidates for indexes. However it may still be possible to create a
conditional index. Again the conclusion can only be drawn after seeing your
tables.
|
Creating batches from custom dataset of images in Tensorflow
Question: I'm reading a list of .jpg images from disk, and I wanted to split it in
several batches. But I got a ValueError while trying to create the first
batch.
Here is my code:
import tensorflow as tf
import os
images_list = []
for i in range(6):
image = tf.read_file("{0}.jpg".format(i))
image_tensor = tf.image.decode_jpeg(image, channels=3)
image_tensor = tf.image.rgb_to_grayscale(image_tensor)
image_tensor = tf.image.resize_images(image_tensor, 28, 28)
image_tensor = tf.expand_dims(image_tensor, 0)
images_list.append(image_tensor)
batches, _ = tf.train.batch(images_list, batch_size=3, enqueue_many=True, capacity=6)
And this is the error message:
ValueError Traceback (most recent call last)
<ipython-input-77-a07e94cddf32> in <module>()
----> 1 batches, _ = tf.train.batch(images_list, batch_size=3, enqueue_many=True, capacity=6)
ValueError: too many values to unpack
Answer: Your error message is not linked to TensorFlow at all (you can see that the
ValueError was not thrown by TensorFlow).
If you look at the
[doc](https://www.tensorflow.org/versions/r0.8/api_docs/python/io_ops.html#batch),
you can see that `tf.train.batch()` returns a list of tensors (one value in
total), and you are trying to get two values when you write `batches, _ =
tf.train.batch(...)`.
That is why you get `ValueError: too many values to unpack`.
You just have to write instead:
batches = tf.train.batch(images_list, batch_size=3, enqueue_many=True, capacity=6)
|
How to get handle of a known data table
Question: I am hoping to figure out the answer of a seemingly simple question. Below I
was trying to get the handle of a known table, by the name of "TableMe". Being
able to print its name back on the screen would prove that I have got the
handle correctly.
from Spotfire.Dxp.Data import *
from Spotfire.Dxp.Application import *
# Trial #1
#dataTable = Document.Data.Tables["TableMe"]
# Trial #2
dataTable = Document.ActiveDataTableReference
print dataTable.Title
Both my Trial #1 and #2 had failed, for different reasons:
Trial #1:
AttributeError: 'getset_descriptor' object has no attribute 'Tables'
Trial #2:
AttributeError: 'getset_descriptor' object has no attribute 'Title'
I feel that this must be a simple question for any fluent IronPython
programmer. Can someone shed some light, please?
Answer: you don't need to import anything to access data tables:
for table in Document.Data.Tables:
print table.Name
print table.Id
print table.RowCount
print "---"
then to access a specific table:
table = Document.Data.Tables["TableMe"]
...or if you have the ID:
tID = "abc123"
table = Document.Data.Tables[tID]
...or by index (refer to the Data Table Properties dialog in Spotfire for the
order, make sure to start at zero):
table = Document.Data.Tables[0]
|
mod_wsgi apache with python-eve
Question: I tried to integrate my `eve` app into `apache`. I think I did everything correctly,
as shown in the flask documentation.
When I try to consume my `eve` collection... I get an error in the apache log:
Traceback (most recent call last):
File "/var/customers/webs/myapp/myapp.wsgi", line 7, in <module>
from run import app as application
File "/var/customers/webs/myapp/run.py", line 9, in <module>
app = Eve(__name__)
File "/usr/local/lib/python2.7/dist-packages/eve/flaskapp.py", line 139, in __init__
self.validate_domain_struct()
File "/usr/local/lib/python2.7/dist-packages/eve/flaskapp.py", line 252, in validate_domain_struct
raise ConfigException('DOMAIN dictionary missing or wrong.')
ConfigException: DOMAIN dictionary missing or wrong.
It seems that the app can't find my `settings.py`
My apache folder looks like:
/myapp
- myapp.wsgi
- run.py
- settings.py
if I start it directly using `python run.py`, everything works fine.
Answer: Check [this](http://stackoverflow.com/questions/36521156/eve-app-deployment-
errors-can-anyone-help-me-to-fix-it/36725390#36725390) answer. You can try to
pass the `settings.py` path via the `settings` named parameter of the `eve` app
initialization.
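A minimal sketch of what `run.py` could look like with that change (the absolute path is an assumption based on your folder layout):

    from eve import Eve

    app = Eve(settings='/var/customers/webs/myapp/settings.py')

    if __name__ == '__main__':
        app.run()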
|
PyXB module not recognised
Question: I have installed the pyxb module the regular way (python setup.py install) and here is
the output:
Found bundle in pyxb/bundles/common
Found bundle in pyxb/bundles/dc
Found bundle in pyxb/bundles/wssplat
Found bundle in pyxb/bundles/saml20
running install
running build
running build_py
running build_scripts
running install_lib
running install_scripts
changing mode of /usr/local/bin/pyxbgen to 755
changing mode of /usr/local/bin/pyxbwsdl to 755
changing mode of /usr/local/bin/pyxbdump to 755
running install_egg_info
Removing /usr/local/lib/python2.7/dist-packages/PyXB-1.2.4.egg-info
Writing /usr/local/lib/python2.7/dist-packages/PyXB-1.2.4.egg-info
However, I keep getting message:
ImportError: No module named pyxb
when running a script which contains:
import pyxb
import pyxb.binding
import pyxb.binding.saxer
import StringIO
import pyxb.utils.utility
import pyxb.utils.domutils
Does anyone have an idea why this may occur?
Answer: It turns out that it was a permission issue: when running the script as sudo,
it successfully imports pyxb. The setup.py script installed pyxb with these
permissions: `drwxr-s--- 7 root staff 4096 May 19 16:30 pyxb`
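One way to fix that without running the script as root is to make the installed package readable by everyone (a sketch; adjust the path to match your installation):

    sudo chmod -R o+rX /usr/local/lib/python2.7/dist-packages/pyxb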
|
How can I iterate through different CSV files with Python/Pandas by first column and item value?
Question: I have a folder which has over 100 CSV files, each with more than 40k rows. I
am trying to iterate through these files by the first column, which has the ID
numbers. My purpose is to find the rows that have the same ID numbers across
the CSV files and then create a new CSV file by concatenating/putting together
the rows that have the same ID number.
My current code is:
# Enters the folders in the directory
for root, dirs, files in os.walk(csv_directory):
for item in files:
if item.endswith(".csv"):
date_string = item.split(".")[1]
year_string = date_string[:4]
file_directory = os.path.join(root,item)
list_csv = []
print "Reading %s ..." % item
# Reads the .csv files
with open(file_directory , 'rb') as file:
reader = csv.reader(file, delimiter = ',')
next(reader)
next(reader)
next(reader)
next(reader)
# Takes all rows for ID, col2 ,col3 in the directory
for row in reader:
index = [0,1,8]
list_csv.append(row[i] for i in index)
list_csv.append(date_string)
list_total.append(list_csv)
print len(list_total) , "rows are added."
print "Total Number of Rows: " , len(list_total)
Any help would be much appreciated!!
Answer: You could use something along the following lines.
import pandas as pd
from os import listdir
from os.path import join
source_path, dst_path = 'source/path', 'dst/path'
Get all `.csv` files:
files = [f for f in listdir(source_path) if f.endswith('.csv')]
Read all `.csv` files and use `pd.concat()` to combine - with ~100 files at
40K rows each you'd have ~4m rows which should be manageable unless each file
has a large number of columns:
all_files = pd.concat([pd.read_csv(join(source_path, f_name), skiprows=4) for f_name in files])
Use `.groupby()` to group all files by `id` (assumed to be found in
`'id_column'`), and save all same-id files back to `.csv`:
files_by_id = all_files.groupby('id_column')
for id, data in files_by_id:
data.to_csv(join(dst_path, 'file_{}.csv'.format(id)))
|
SWIG c++ vector access in python
Question: This may be a noob question but here it goes. I have wrapped a 3d vector into
a python module using SWIG. Everything has compiled and I can import the
module and perform actions with it. I can't seem to figure out how to access
my vector in Python to store and change values in it. How do I store and
change my vector values in Python? My code is below and was written to test if
the STL `algorithm` header works with SWIG. It does seem to work, but I need to be
able to put values into my vector with Python.
header.h
#ifndef HEADER_H_INCLUDED
#define HEADER_H_INCLUDED
#include <vector>
using namespace std;
struct myStruct{
int vecd1, vecd2, vecd3;
vector<vector<vector<double> > >vec3d;
void vecSizer();
void deleteDuplicates();
double vecSize();
void run();
};
#endif // HEADER_H_INCLUDED
main.cpp
#include "header.h"
#include <vector>
#include <algorithm>
void myStruct::vecSizer()
{
vec3d.resize(vecd1);
for(int i = 0; i < vec3d.size(); i++)
{
vec3d[i].resize(vecd2);
for(int j = 0; j < vec3d[i].size(); j++)
{
vec3d[i][j].resize(vecd3);
}
}
}
void myStruct::deleteDuplicates()
{
vector<vector<vector<double> > >::iterator it;
sort(vec3d.begin(),vec3d.end());
it = unique(vec3d.begin(),vec3d.end());
vec3d.resize(distance(vec3d.begin(), it));
}
double myStruct::vecSize()
{
return vec3d.size();
}
void myStruct::run()
{
vecSizer();
deleteDuplicates();
vecSize();
}
from the terminal (Ubuntu)
import test #import the SWIG generated module
x = test.myStruct() #create an instance of myStruct
x.vecSize() #run vecSize() should be 0 since vector dimensions are not initialized
0.0
x.vec3d #see if vec3d exists and is of the correct type
<Swig Object of type 'vector< vector< vector< double > > > *' at 0x7fe6a483c8d0>
Thanks in advance!
Answer: It turns out that vectors are converted to immutable python objects when the
wrapper/interface is generated. So in short you cannot modify wrapped c++
vectors from python.
|
How to avoid SQL injection with "SELECT * FROM {table_name}"?
Question: In Python using Psycopg2 with the following code:
import psycopg2
import getpass
conn = psycopg2.connect("dbname=mydb user=%s" % getpass.getuser())
cursor = conn.cursor()
tables = ["user", "group", "partner", "product"]
for table in tables:
# with sql injection
cursor.execute("SELECT name FROM %s LIMIT 1" % (table,))
print "table", table, "result", len(cursor.fetchone())
# without sql injection
cursor.execute("SELECT name FROM %s LIMIT 1", (table,))
print "table", table, "result", len(cursor.fetchone())
The output was:
table res_partner result 1
Traceback (most recent call last):
File "my_psycopg2_example.py", line 16, in <module>
cursor.execute("SELECT name FROM %s LIMIT 1", (table,))
psycopg2.ProgrammingError: syntax error at or near "'res_partner'"
LINE 1: SELECT name FROM 'res_partner' LIMIT 1
With SQL injection it works fine.
But we don't want to create a security issue.
We read [this documentation](http://initd.org/psycopg/docs/usage.html#passing-
parameters-to-sql-queries) and in it found the following comment:
> Only variable values should be bound via this method: it shouldn’t be used
> to set table or field names. For these elements, ordinary string formatting
> should be used before running `execute()`.
But if we use "ordinary string formatting", we'll have SQL injection too.
What's a good way to manage this special case, and avoid SQL injection?
Answer: I think you're confusing the definition of SQL injection. SQL injection is an
_attack_ on your software where someone causes your SQL query to do something
you didn't want it to. String interpolation is not SQL injection. String
interpolation _can sometimes_ enable SQL injection, but not always. To see
that string interpolation isn't always unsafe, think about which of the
following is safest:
1. `sql = 'SELECT name FROM user'`
2. `sql = 'SELECT name FROM ' + 'user'`
3. `sql = 'SELECT name FROM %s' % ['user']`
4. `sql = 'SELECT name FROM {}'.format('user')`
Each of these lines of code does the exact same thing, so none of them can be
more or less safe than the others. In your exact example, there's no danger of
SQL injection, because you're just building a hardcoded SQL query string.
On the other hand, if your `table` value came from a user, then there could be
security issues:
* What if they pass the name of a table that exists, but you didn't want them to query?
table = 'secrets'
sql = 'SELECT name FROM %s LIMIT 1' % table
results in:
SELECT name FROM secrets LIMIT 1
* What if they pass [something](https://xkcd.com/327/) that is not actually a table name?
table = 'product; DROP TABLE user; --'
sql = 'SELECT name FROM %s LIMIT 1' % table
results in:
SELECT name FROM product;
DROP TABLE user;
-- LIMIT 1
You could prevent this by checking if the table name is allowed:
if table.lower() not in ["user", "group", "partner", "product"]:
raise Something('Bad table name: %r' % table)
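Putting it together, a sketch of the whitelist check in front of the ordinary string formatting (the `fetch_one_name` helper is just an illustrative name, mirroring the tables list from the question):

    tables = ["user", "group", "partner", "product"]

    def fetch_one_name(cursor, table):
        if table.lower() not in tables:
            raise ValueError('Bad table name: %r' % table)
        # safe: the table name was validated against a fixed whitelist
        cursor.execute("SELECT name FROM %s LIMIT 1" % (table,))
        return cursor.fetchone()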
|
For Loop Not iterating correctly With Functions Called
Question: I have been having some problems getting this for loop to iterate, call
functions, and then return to the loop and iterate again. It only runs twice
as of right now. I figure it has something to do with how I am calling functions
inside of the loop. Is there something I am missing?
#!/usr/bin/python
# -*- coding: utf-8 -*-
import os, csv, xlrd, sys
import pandas as pd
import numpy as np
from openpyxl import load_workbook
def PosFinder():
with open('FinalMutations.csv', 'w') as csvf:
writer = csv.writer(csvf, delimiter=' ')
csvf.close()
MutFinder()
def MutFinder():
df = pd.read_csv('mutation-table.csv', sep=None)
MutationList = df['Seq ID']
Positions = list(set(MutationList))
n = len(Positions)
for i in (0, n):
print(i)
MutationPos=Positions[i]
MutationFound=df[df['Seq ID'].str.contains(MutationPos)]
FreqCheck(MutationFound)
i+=1
print('Program Complete!')
def FreqCheck(MutationFound):
PFreqs=MutationFound.ix[:,3]
PFreqs=PFreqs.str.strip('%')
Freqs= PFreqs.astype(float)
if len(MutationFound)==1:
Check = all(i<10.0 for i in Freqs)
if Check in [False, 'False']:
ToExcel(MutationFound)
else:
Check = all(i<10.0 for i in Freqs)
if Check in [False, 'False']:
ConstantFreq(MutationFound)
def ConstantFreq(MutationFound):
PFreqs=MutationFound.ix[:,3]
PFreqs=PFreqs.str.strip('%')
Freqs= PFreqs.astype(float)
Flag= all(x==Freqs[0] for x in Freqs)
if Flag in [False, 'False']:
RangeCheck(MutationFound, Freqs)
def RangeCheck(MutationFound, Freqs):
minFreq= Freqs.min()
maxFreq= Freqs.max()
netFreq= maxFreq-minFreq
if netFreq>10:
ToExcel(MutationFound)
def ToExcel(MutationFound):
with open('FinalMutations.csv', 'a') as csvf:
writer = csv.writer(csvf, delimiter=' ')
for row in MutationFound:
writer.writerow(row)
###Start Program###
PosFinder()
Answer: The for loop currently iterates over `(0, n)`, which is just a tuple of the two
values `0` and `n`, so the body runs exactly twice. (The manual `i+=1` at the end
has no effect on a for loop either.)
Change it to `range(n)` to get all values `0`,`1`,...,`n-1`.
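Since the index itself isn't needed, you could also iterate over the positions directly; a sketch of the relevant part of `MutFinder`:

    for MutationPos in Positions:
        MutationFound = df[df['Seq ID'].str.contains(MutationPos)]
        FreqCheck(MutationFound)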
|
Python: Intersection of Two 2D Arrays
Question: I have data in `.csv` file called 'Max.csv':
Valid Date MAX
1/1/1995 51
1/2/1995 45
1/3/1995 48
1/4/1995 45
Another csv called 'Min.csv' looks like:
Valid Date MIN
1/2/1995 33
1/4/1995 31
1/5/1995 30
1/6/1995 39
I want to generate two dictionaries or any other suggested data structure so
that I can have two separate variables Max and Min in Python, respectively, as:
1/2/1995 45
1/4/1995 45
Valid Date MIN
1/2/1995 33
1/4/1995 31
i.e. select the elements from Max and Min so that only the common elements are
output.
I am thinking about using numpy.intersect1d, but that means I have to
separately compare the Max and Min first on date column, find the index of
common dates and then grab the second columns for Max and Min. This appears
too complicated and I feel there are smarter ways to intersect two curves Max
and Min.
Answer: You mention that:
> I have to separately compare the Max and Min first on date column, find the
> index of common dates and then grab the second columns for Max and Min. This
> appears too complicated...
Indeed this is fundamentally what you need to do, one way or the other; but
using the
[numpy_indexed](https://github.com/EelcoHoogendoorn/Numpy_arraysetops_EP)
package (disclaimer: I am its author), this isn't complicated in the
slightest:
import numpy_indexed as npi
common_dates = npi.intersection(min_dates, max_dates)
print(max_values[npi.indices(max_dates, common_dates)])
print(min_values[npi.indices(min_dates, common_dates)])
Note that this solution is fully vectorized (contains no loops on the python-
level), and as such is bound to be much faster than the currently accepted
answer.
Note2: this is assuming the date columns are unique; if not, you should
replace 'npi.indices' with 'npi.in_'
|
Find Maximum and minimum value in a matrix, Python
Question: How can I find the lowest value given the number of the exercise? I have this
code:
mat = []
calificaciones = []
#Captures student ID
def lmat (numeroest):
mattotal = []
for i in range (0, numeroest):
matricula = int(raw_input('Student ID : '))
mattotal.append (matricula)
return (mattotal)
#Captures grades
def numest (numeroest):
mattotal = []
calif = []
for i in range (0, numeroest):
numcal = input ('Introduce the ammount of grades: ')
for j in range (0, numcal):
matricula = int(input('Input the grades: '))
calif.append (matricula)
mattotal.append (calif)
return (mattotal)
So if a user inputs the number of the exercise, it will output the lowest grade
in said exercise (e.g., for exercise 2 it gives the lowest grade entered for that exercise).
def givelowest ():
row = input ('Enter the number of the exercise: ')
for ...
I want to make a for loop that looks for that row (the number of the exercise), then
gives the lowest number in said row.
Answer: To get the `min/max` of a column you can use a function with
`operator.itemgetter`:
from operator import itemgetter
def min_or_max(m, col, f):
return f(map(itemgetter(col), m))
Then call it passing the matrix, the column and the func to use, i.e. min or
max:

    In [22]: m = [[9, 2, 8],[4,6,8], [3, 1, 2]]
    In [23]: min_or_max(m, 2, max)
    Out[23]: 8
    In [24]: min_or_max(m, 2, min)
    Out[24]: 2
Or using indexing in a gen exp:
def func(m, col, f):
return f(row[col] for row in m)
If you want both in a single iteration:
from operator import itemgetter
def func(m, col):
mn, mx = float("inf"), float("-inf")
for i in map(itemgetter(col), m):
if mn > i:
mn = i
if mx < i:
mx = i
return mn, mx
m = [[9, 2, 8], [4, 6, 8], [3, 1, 2]]
mn, mx = func(m, 2)
|
Python Graphing Attribute Error
Question: I edited the code with the suggestions and currently receive this error:

    Traceback (most recent call last):
      File "C:\Users\Jonathan.HollowayMainPc\Documents\Inchimoku Kinko Hyo.py", line 111, in <module>
        ichimoku_chart()
      File "C:\Users\Jonathan.HollowayMainPc\Documents\Inchimoku Kinko Hyo.py", line 97, in ichimoku_chart
        facecolor='green', alpha=0.2, interpolate=True)
      File "C:\Python27\lib\site-packages\matplotlib\pyplot.py", line 2826, in fill_between
        interpolate=interpolate, **kwargs)
      File "C:\Python27\lib\site-packages\matplotlib\axes\_axes.py", line 4345, in fill_between
        raise ValueError("Argument dimensions are incompatible")
    ValueError: Argument dimensions are incompatible

My code is below; I'm not sure what is causing it. Any help would be appreciated.
import urllib
import string
import sys
import matplotlib
import pandas as pd
import matplotlib.pyplot as plt
import pandas.io.data as web
import datetime
#from stooq_helper_functions import data_to_dataframe
stocks = []
#^ list of for stocks
#for stock in stocks:
#Everything gets tabbed here.
stock = "ebay"
data = {'Close': [], 'High': [], 'Low': [], 'Open': [], 'Date':[], 'Volume':[]}
#^Above is done on each stock but only one for now to test.
url = 'http://chartapi.finance.yahoo.com/instrument/1.0/'+stock+'/chartdata;type=quote;range=1y/csv'
page = urllib.urlopen(url)
for line in page:
new_string = string.split(line, ',')
if len(new_string) == 6:
if new_string[0].isdigit() == True:
#print new_string
data[stock]= new_string
todays_high = float(data[stock][2])
todays_low = float(data[stock][3])
todays_open = float(data[stock][4])
todays_close = float(data[stock][1])
todays_volume = data[stock][5]
todays_date = data[stock][0]
data['High'].append(todays_high)
data['Low'].append(todays_low)
data['Open'].append(todays_open)
data['Date'].append(todays_date)
data['Close'].append(todays_close)
data['Volume'].append(todays_volume)
matplotlib.style.use('ggplot')
def ichimoku_chart():
global data, stock
# Prepare the data
#pos = len(data) - days
close_prices = pd.DataFrame(data['Close'])
high_prices = pd.DataFrame(data['High'])
low_prices = pd.DataFrame(data['Low'])
data['Date'] = pd.to_datetime(data['Date'], format='%Y%m%d')
# workaround, so matplotlib accepts date axis
#data['Date'].set_index('Date')
# Ichimoku chart components
# 1. Tenkan-sen (Conversion Line): (9-period high + 9-period low)/2))
period9_high = pd.rolling_max(high_prices, window=9)
period9_low = pd.rolling_min(low_prices, window=9)
tenkan_sen = (period9_high + period9_low) / 2
data['tenkan_sen'] = tenkan_sen
# 2. Kijun-sen (Base Line): (26-period high + 26-period low)/2))
period26_high = pd.rolling_max(high_prices, window=26)
period26_low = pd.rolling_min(low_prices, window=26)
kijun_sen = (period26_high + period26_low) / 2
data['kijun_sen'] = kijun_sen
# 3. Senkou Span A (Leading Span A): (Conversion Line + Base Line)/2))
# plotted 26 periods ahead
senkou_span_a = ((tenkan_sen + kijun_sen) / 2).shift(26)
data['senkou_span_a'] = senkou_span_a
# 4. Senkou Span B (Leading Span B): (52-period high + 52-period low)/2))
# plotted 22 periods ahead
period52_high = pd.rolling_max(high_prices, window=52)
period52_low = pd.rolling_min(low_prices, window=52)
senkou_span_b = ((period52_high + period52_low) / 2).shift(22)
data['senkou_span_b'] = senkou_span_b
# 5. The most current closing price plotted 22 time periods behind
chikou_span = close_prices.shift(-22)
data['chikou_span'] = chikou_span
#data = data[pos:]
date_values = data['Date'].values
fig = plt.figure()
plt.plot_date(date_values, data['Close'], '-', linewidth=1.4, label='Close')
plt.plot_date(date_values, data['tenkan_sen'], '-', label='Tenkan Sen')
plt.plot_date(date_values, data['kijun_sen'], '-', label='Kijun Sen')
plt.plot_date(date_values, data['senkou_span_a'], '-', linewidth=0)
plt.plot_date(date_values, data['senkou_span_b'], '-', linewidth=0)
plt.plot_date(date_values, data['chikou_span'], '-', label='Chikou Span')
plt.fill_between(date_values, data['senkou_span_a'], data['senkou_span_b'],
where=data['senkou_span_a'] >= data['senkou_span_b'],
facecolor='green', alpha=0.2, interpolate=True)
plt.fill_between(date_values, data['senkou_span_a'], data['senkou_span_b'],
where=data['senkou_span_a'] < data['senkou_span_b'],
facecolor='red', alpha=0.2, interpolate=True)
fig.set_tight_layout(True)
plt.legend(loc='upper left')
plt.show()
#if __name__ == '__main__':
#days = sys.argv[1]
#stock = sys.argv[2]
#ichimoku_chart(data_to_dataframe(stock + '.txt'), int(days))
ichimoku_chart()
Answer: There are multiple issues
* `url = 'http://chartapi.finance.yahoo.com/instrument/1.0/'+stock+'/chartdata;type=quote;range=1yr/csv'` should be `url = 'http://chartapi.finance.yahoo.com/instrument/1.0/'+stock+'/chartdata;type=quote;range=1y/csv'`, i.e. `range=1y` instead of `range=1yr`. Otherwise no data will be returned
* `high_prices` is a list but `rolling_max` expects a `DataFrame` (<http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.rolling_max.html>). Try `high_prices = pd.DataFrame(data['High'])`
* Even with those two issues addressed, your plotting function `plt.plot_date(date_values, data['Close'], '-', linewidth=1.4, label='Close')` will fail because `close_prices = data['Close']` will always be empty since no data is written to `data['Close']`
Some smaller issues:
* `todays_volume = data[stock][5]` has a newline character `\n` attached
* the line `data[stock]= new_string` is not needed, it is always overwritten by last read line
* * *
**Update for the edited code and new error message**
> ValueError: Argument dimensions are incompatible
If you look at the dimensions of your `DataFrames` you will see that they have
different shapes.
>>> date_values.shape
(252,)
>>> data['senkou_span_a'].shape
(252, 1)
Changing your parameter to `data['senkou_span_a'][0]` will give a plot. I
cannot tell whether the plot makes sense and shows the correct data but at
least the Python statement is formally correct.
|
Memory usage when reading lines from a piped subprocess stdout in python
Question: I just want to understand what happens in the "background" in terms of memory
usage when dealing with a subprocess.Popen() result and reading line by line.
Here's a simple example.
Given the following script `test.py` that prints "Hello" then waits 10s and
prints "world":
import sys
import time
print ("Hello")
sys.stdout.flush()
time.sleep(10)
print ("World")
Then the following script `test_sub.py` will call as a subprocess 'test.py',
redirect the stdout to a pipe and then read it line by line:
    import subprocess, time, os, sys
cmd = ["python3","test.py"]
p = subprocess.Popen(cmd,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT, universal_newlines = True)
for line in iter(p.stdout.readline, ''):
print("---" + line.rstrip())
In this case my question would be, when I run `test_sub.py` after it does the
subprocess call, it will print "Hello" then wait 10s until "world" comes and
then print it, **what happens to "Hello" during those 10s of waiting? Does it
get stored in memory until`test_sub.py` finishes, or does it get tossed away
in the first iteration?**
This may not matter to much for this example, but when dealing with really big
files it does.
Answer: > what happens to "Hello" during those 10s of waiting?
The `"Hello"` (in the parent) is available via `line` name until `.readline()`
returns the second time i.e., `"Hello"` lives _at the very least_ until the
output of `print("World")` is read in the parent.
If you mean what happens in the child process then after `sys.stdout.flush()`
there is no reason for `"Hello"` object to continue to live but it may e.g.,
see [Does Python intern strings?](http://stackoverflow.com/q/17679861/4279)
> Does it get stored in memory until test_sub.py finishes, or does it get
> tossed away in the first iteration?
After `.readline()` returns the second time, `line` refers to `"World"`. What
happens with `"Hello"` after that depends on the garbage collection in the
specific Python implementation i.e., even if `line` is `"World"`; the object
`"Hello"` may continue to live for some time. [Releasing memory in
Python](http://stackoverflow.com/q/15455048).
You could set `PYTHONDUMPREFS=1` envvar and run your code using a _debug_
`python` build, to see object that are alive when the `python` process exits.
For example, consider this code:
#!/usr/bin/env python3
import threading
import time
import sys
def strings():
yield "hello"
time.sleep(.5)
yield "world"
time.sleep(.5)
def print_line():
while True:
time.sleep(.1)
print('+++', line, file=sys.stderr)
threading.Thread(target=print_line, daemon=True).start()
for line in strings():
print('---', line)
time.sleep(1)
It demonstrates that `line` is not rebound until the second `yield`. The
output of `PYTHONDUMPREFS=1 ./python . |& grep "'hello'"` shows that `'hello'`
is still alive when `python` exits.
|
How to create file exception for multiple file analysis in Python
Question: I am analysing a large set of weather data (about 13000 files) and writing the
results to a file. Is there a way of implementing the code I have below in
such a way that it will ignore problematic files, that is, if a particular
file produces an error, can I make it skip this file and continue on to the
rest? Some sort of exception for glob? Files are in .oax format and output
file is .txt.
Around which segments would I need to include the try and exception commands?
import sharppy
import sharppy.sharptab.profile as profile
import sharppy.sharptab.interp as interp
import sharppy.sharptab.winds as winds
import sharppy.sharptab.utils as utils
import sharppy.sharptab.params as params
import sharppy.sharptab.thermo as thermo
import numpy as np
from StringIO import StringIO
import glob
import os
os.chdir('X:/ybbn_snding_data-oax/ybbn_snding_data-oax')
for file in glob.glob("*.oax"):
spc_file = open(file, 'r').read()
def parseSPC(spc_file):
## read in the file
data = np.array([l.strip() for l in spc_file.split('\n')])
## necessary index points
title_idx = np.where( data == '%TITLE%')[0][0]
start_idx = np.where( data == '%RAW%' )[0] + 1
finish_idx = np.where( data == '%END%')[0]
## create the plot title
data_header = data[title_idx + 1].split()
location = data_header[0]
time = data_header[1][:11]
## put it all together for StringIO
full_data = '\n'.join(data[start_idx : finish_idx][:])
sound_data = StringIO( full_data )
## read the data into arrays
p, h, T, Td, wdir, wspd = np.genfromtxt( sound_data, delimiter=',', comments="%", unpack=True )
return p, h, T, Td, wdir, wspd
pres, hght, tmpc, dwpc, wdir, wspd = parseSPC(spc_file)
prof = profile.create_profile(profile='default', pres=pres, hght=hght, tmpc=tmpc, \
dwpc=dwpc, wspd=wspd, wdir=wdir, missing=-9999, strictQC=True)
msl_hght = prof.hght[prof.sfc] # Grab the surface height value
#print "SURFACE HEIGHT (m MSL):",msl_hght
agl_hght = interp.to_agl(prof, msl_hght) # Converts to AGL
#print "SURFACE HEIGHT (m AGL):", agl_hght
msl_hght = interp.to_msl(prof, agl_hght) # Converts to MSL
#print "SURFACE HEIGHT (m MSL):",msl_hght
sfcpcl = params.parcelx( prof, flag=1 ) # Surface Parcel
fcstpcl = params.parcelx( prof, flag=2 ) # Forecast Parcel
mupcl = params.parcelx( prof, flag=3 ) # Most-Unstable Parcel
mlpcl = params.parcelx( prof, flag=4 ) # 100 mb Mean Layer Parcel
print mupcl.bplus, "," # J/kg
print mupcl.bminus, "," # J/kg
print mupcl.lclhght, "," # meters AGL
print mupcl.lfchght, "," # meters AGL
print mupcl.elhght, "," # meters AGL
print mupcl.li5, "," # C
sfc = prof.pres[prof.sfc]
p3km = interp.pres(prof, interp.to_msl(prof, 3000.))
p6km = interp.pres(prof, interp.to_msl(prof, 6000.))
p1km = interp.pres(prof, interp.to_msl(prof, 1000.))
mean_3km = winds.mean_wind(prof, pbot=sfc, ptop=p3km)
sfc_6km_shear = winds.wind_shear(prof, pbot=sfc, ptop=p6km)
sfc_3km_shear = winds.wind_shear(prof, pbot=sfc, ptop=p3km)
sfc_1km_shear = winds.wind_shear(prof, pbot=sfc, ptop=p1km)
print utils.comp2vec(mean_3km[0], mean_3km[1])[1], ","
print utils.comp2vec(sfc_6km_shear[0], sfc_6km_shear[1])[1], ","
srwind = params.bunkers_storm_motion(prof)
#print "Bunker's Storm Motion (right-mover) [deg,kts]:", utils.comp2vec(srwind[0], srwind[1])
#print "Bunker's Storm Motion (left-mover) [deg,kts]:", utils.comp2vec(srwind[2], srwind[3])
srh3km = winds.helicity(prof, 0, 3000., stu = srwind[0], stv = srwind[1])
srh1km = winds.helicity(prof, 0, 1000., stu = srwind[0], stv = srwind[1])
print srh3km[0], ","
stp_fixed = params.stp_fixed(sfcpcl.bplus, sfcpcl.lclhght, srh1km[0], utils.comp2vec(sfc_6km_shear[0], sfc_6km_shear[1])[1])
ship = params.ship(prof)
eff_inflow = params.effective_inflow_layer(prof)
ebot_hght = interp.to_agl(prof, interp.hght(prof, eff_inflow[0]))
etop_hght = interp.to_agl(prof, interp.hght(prof, eff_inflow[1]))
print ebot_hght, ","
print etop_hght, ","
effective_srh = winds.helicity(prof, ebot_hght, etop_hght, stu = srwind[0], stv = srwind[1])
print effective_srh[0], ","
ebwd = winds.wind_shear(prof, pbot=eff_inflow[0], ptop=eff_inflow[1])
ebwspd = utils.mag( ebwd[0], ebwd[1] )
print ebwspd, ",a"
scp = params.scp(mupcl.bplus, effective_srh[0], ebwspd)
stp_cin = params.stp_cin(mlpcl.bplus, effective_srh[0], ebwspd, mlpcl.lclhght, mlpcl.bminus)
#print "Supercell Composite Parameter:", scp
#print "Significant Tornado Parameter (w/CIN):", stp_cin
#print "Significant Tornado Parameter (fixed):", stp_fixed
f = open('nonstormdayvalues.txt','a')
a=str(mupcl.bplus)
f.write(a)
f.write(",")
b=str(mupcl.bminus)
f.write(b)
f.write(",")
c=str(mupcl.lclhght)
f.write(c)
f.write(",")
d=str(mupcl.elhght)
f.write(d)
f.write(",")
e=str(mupcl.li5)
f.write(e)
f.write(",")
g=str(utils.comp2vec(mean_3km[0], mean_3km[1])[1])
f.write(g)
f.write(",")
h=str(utils.comp2vec(sfc_6km_shear[0], sfc_6km_shear[1])[1])
f.write(h)
f.write(",")
i=str(srh3km[0])
f.write(i)
f.write(",")
j=str(ebot_hght)
f.write(j)
f.write(",")
k=str(etop_hght)
f.write(k)
f.write(",")
l=str(effective_srh[0])
f.write(l)
f.write(",")
m=str(ebwspd)
f.write(m)
f.write(",a")
f.close
Answer: Wrap the work you do for each file in a `try`/`except` block:

    try:
        # run the analysis for one file here; a bad file or a
        # disallowed operation will raise an exception
        ...
    except Exception as e:
        print(e)
        # or do something else if an error is raised

If you put this inside your `glob` loop, any file that raises an error is
simply skipped and the loop continues with the next file.
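Concretely, with `parseSPC` defined once outside the loop and the imports from the question, the loop could be wrapped like this (a sketch only; the omitted lines stand for the rest of the per-file analysis and the `f.write(...)` calls from the question):

    for file in glob.glob("*.oax"):
        try:
            spc_file = open(file, 'r').read()
            pres, hght, tmpc, dwpc, wdir, wspd = parseSPC(spc_file)
            prof = profile.create_profile(profile='default', pres=pres, hght=hght, tmpc=tmpc,
                                          dwpc=dwpc, wspd=wspd, wdir=wdir, missing=-9999, strictQC=True)
            # ... the rest of the per-file calculations and the f.write(...) calls ...
        except Exception as e:
            # report the problematic file and move on to the next one
            print("Skipping {0}: {1}".format(file, e))
            continue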
|
Python: How to get internal directory structure of Outlook PST file?
Question: Is there any Library for Python (Windows) to read the Internal Directory
structure of PST file eg. Inbox, Drafts, etc including Folders created by
Users.
Answer: as per <https://support.microsoft.com/en-us/kb/287070>
> Microsoft Outlook automatically stores messages, contacts, appointments,
> tasks, notes, and journal entries in one of the following two locations: In
> a personal storage folder, also known as a .pst file, on your hard disk
> drive. In a mailbox that is located on the server. Your mailbox is located
> on a server if you use Outlook with Microsoft Exchange Server.
So, based on this info, you should be able to use the `os` module:
import os
documentation for the library is here:
<https://docs.python.org/3/library/os.html>
The following example shows a simple use of scandir() to display all the files
(excluding directories) in the given path that don’t start with '.'. The
entry.is_file() call will generally not make an additional system call:
for entry in os.scandir(path):
if not entry.name.startswith('.') and entry.is_file():
print(entry.name)
For listing directories: `os.listdir(path)` returns the entries within the specified path; by adding further logic you should be able to achieve what you want. For example:
import os
cdirs = os.listdir("C:/")
print(cdirs)
or create a function to do this:
def file_check(path):
file_dirs = listdir(path)
#do something with this
return file_dirs
|
TypeError: object() takes no parameters - attempting to load a .txt file into a game
Question: I've been trying to work through [How to Write a Text Adventure in
Python](http://letstalkdata.com/2014/08/how-to-write-a-text-adventure-in-
python-part-2-the-world-space/) but have run into the error `TypeError:
object() takes no parameters` on the last line of the given code when I
attempt to run it in the command prompt. I tried to research what this error
means, but can't figure out how to correct it in the context of my code. What
is causing this error? I apologize if anything is unclear.
_world = {}
starting_position = (0, 0)
def load_tiles():
"""Parses a file that describes the world space into the _world object"""
with open('resources/map.txt', 'r') as f:
rows = f.readlines()
x_max = len(rows[0].split('\t'))
for y in range(len(rows)):
cols = rows[y].split('\t')
for x in range(x_max):
tile_name = cols[x].replace('\n', '')
if tile_name == 'StartingRoom':
global starting_position
starting_position = (x, y)
_world[(x, y)] = None if tile_name == '' else getattr(__import__('tiles'), tile_name)(x, y)
Answer: # Best guess
The object named by `tile_name` in your `tiles` module is a class that inherits
directly from `object` but has not overridden the `__init__()` method properly.
Show me the code in your tiles.py module and that will probably reveal the
reason.
## More info on probable cause
New-style classes in python2 inherit from `object` (by definition). If you
call a method on your class which you haven't defined, python tries to use the
super-class's method, so in this case if you haven't defined `__init__` then
python tries to use `object.__init__`. But `object.__init__` takes no
arguments (ever), and so that's what it's complaining about.
So for example, if you write the two files below and run `python main.py` you
can recreate one way of getting this bug more simply:
main.py
getattr(__import__('tiles'), 'Class')(1, 2)
tiles.py
class Class(object):
pass
## Alternative cause
Alternatively, you often get this bug if you call `super` with arguments
without thinking about what the superclass is, so this looks similar:
main.py
getattr(__import__('tiles'), 'Class')(1, 2)
tiles.py
class Class(object):
def __init__(self, *args):
super(Class, self).__init__(*args)
But you'll get a slightly different stack trace.
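For reference, the usual fix on the tiles side is to give each tile class an `__init__` that accepts the coordinates the loader passes in. A minimal sketch (the class name `StartingRoom` comes from the question's map file; the body is hypothetical):

    # tiles.py
    class StartingRoom(object):
        def __init__(self, x, y):
            # store the grid coordinates the world loader passes in
            self.x = x
            self.y = y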
|
Spider won't run after updating Scrapy
Question: As seems to frequently happen here, I am quite new to Python 2.7 and Scrapy.
Our project has us scraping website data, following some links for more
scraping, and so on. This was all working fine. Then I updated Scrapy.
Now when I launch my spider, I get the following error message (screenshot): http://i.stack.imgur.com/y8rPg.jpg
This wasn't coming up anywhere previously (none of my prior error messages
looked anything like this). I am now running scrapy 1.1.0 on Python 2.7. And
none of the spiders that had previously worked on this project are working.
I can provide some example code if need be, but my (admittedly limited)
knowledge of Python suggests to me that it's not even getting to my script
before bombing out.
**EDIT:** OK, so this code is supposed to start at the first authors page for
Deakin University academics on The Conversation, and go through and scrape how
many articles they have written and comments they have made.
import scrapy
from ltuconver.items import ConversationItem
from ltuconver.items import WebsitesItem
from ltuconver.items import PersonItem
from scrapy import Spider
from scrapy.selector import Selector
from scrapy.http import Request
import bs4
class ConversationSpider(scrapy.Spider):
name = "urls"
allowed_domains = ["theconversation.com"]
start_urls = [
'http://theconversation.com/institutions/deakin-university/authors']
#URL grabber
def parse(self, response):
requests = []
people = Selector(response).xpath('///*[@id="experts"]/ul[*]/li[*]')
for person in people:
item = WebsitesItem()
item['url'] = 'http://theconversation.com/'+str(person.xpath('a/@href').extract())[4:-2]
self.logger.info('parseURL = %s',item['url'])
requests.append(Request(url=item['url'], callback=self.parseMainPage))
soup = bs4.BeautifulSoup(response.body, 'html.parser')
try:
nexturl = 'https://theconversation.com'+soup.find('span',class_='next').find('a')['href']
requests.append(Request(url=nexturl))
except:
pass
return requests
#go to URLs are grab the info
def parseMainPage(self, response):
person = Selector(response)
item = PersonItem()
item['name'] = str(person.xpath('//*[@id="outer"]/header/div/div[2]/h1/text()').extract())[3:-2]
item['occupation'] = str(person.xpath('//*[@id="outer"]/div/div[1]/div[1]/text()').extract())[11:-15]
item['art_count'] = int(str(person.xpath('//*[@id="outer"]/header/div/div[3]/a[1]/h2/text()').extract())[3:-3])
item['com_count'] = int(str(person.xpath('//*[@id="outer"]/header/div/div[3]/a[2]/h2/text()').extract())[3:-3])
And in my Settings, I have:
BOT_NAME = 'ltuconver'
SPIDER_MODULES = ['ltuconver.spiders']
NEWSPIDER_MODULE = 'ltuconver.spiders'
DEPTH_LIMIT=1
Answer: Apparently my six.py file was corrupt (or something like that). After swapping
it out with the same file from a colleague, it started working again 8-\
|
wxpython table label background color overflowing the grid
Question: I have to make a wxpython table using grid.I have set the background color of
the table using grid.SetLabelBackgroundColour("green"). But it is overflowing
the grid and changing color of the area outside the header also. Can anybody
please help me in fixing this.
import wx
import wx.grid
class GridFrame(wx.Frame):
def __init__(self, parent):
wx.Frame.__init__(self, parent)
# Create a wxGrid object
grid = wx.grid.Grid(self, -1)
# Then we call CreateGrid to set the dimensions of the grid
# (100 rows and 10 columns in this example)
grid.CreateGrid(100, 10)
grid.SetLabelBackgroundColour("green")
# We can set the sizes of individual rows and columns
# in pixels
grid.SetRowSize(0, 60)
grid.SetColSize(0, 120)
# And set grid cell contents as strings
grid.SetCellValue(0, 0, 'wxGrid is good')
# We can specify that some cells are read.only
grid.SetCellValue(0, 3, 'This is read.only')
grid.SetReadOnly(0, 3)
# Colours can be specified for grid cell contents
grid.SetCellValue(3, 3, 'green on grey')
grid.SetCellTextColour(3, 3, wx.GREEN)
grid.SetCellBackgroundColour(3, 3, wx.LIGHT_GREY)
# We can specify the some cells will store numeric
# values rather than strings. Here we set grid column 5
# to hold floating point values displayed with width of 6
# and precision of 2
grid.SetColFormatFloat(5, 6, 2)
grid.SetCellValue(0, 6, '3.1415')
self.Show()
if __name__ == '__main__':
app = wx.App(0)
frame = GridFrame(None)
app.MainLoop()
Answer: I cannot find a way to force the label colour to apply to just the defined labels.
I don't know if the following would be of any use to you, but you could limit
the size of the frame so that the user does not see beyond the last column,
by setting a fixed frame size and removing the ability to resize it:
wx.Frame.__init__(self, parent, size=(950,500),style= wx.SYSTEM_MENU | wx.CAPTION | wx.CLOSE_BOX)
|
How to transfer data on SQL Table to Excel file in Python?
Question: So I have a working piece of code which creates and modifies data in a SQL
table. I now want to transfer all the data in the SQL table to an Excel file.
Which libraries would I use, and which functions in those libraries?
Answer: Here is an example that dumps an SQLite database to CSV; the database is
memory.db and the table is called Table1 in the example:
import os
import csv
import sqlite3
def db2csv(file,Table1):
con = sqlite3.connect("memory.db")
cur = con.cursor()
        # create the folder the CSV file goes into, if it does not exist yet
        folder = os.path.dirname(file)
        if folder and not os.path.exists(folder):
            os.makedirs(folder)
with open(file, 'w', newline='') as csvfile:
spamwriter = csv.writer(csvfile, delimiter=';', quotechar='|', quoting=csv.QUOTE_MINIMAL)
for row in cur.execute('SELECT * FROM Table1 '):
spamwriter.writerow(row)
con.commit()
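If an actual Excel (.xlsx) file is required rather than a CSV, one common route is pandas. A minimal sketch, assuming pandas plus an Excel writer such as openpyxl are installed, and reusing the memory.db / Table1 names from above:

    import sqlite3
    import pandas as pd

    con = sqlite3.connect("memory.db")
    df = pd.read_sql_query("SELECT * FROM Table1", con)  # load the whole table into a DataFrame
    df.to_excel("table1.xlsx", index=False)              # write it out as a real Excel file
    con.close()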
|
Getting file path from command line arguments in python
Question: I would like to read a file path from command line arguments, using argparse.
Is there any optimal way to check if the path is relative (file is in current
directory) or the complete path is given? (Other than checking the input and
adding current directory to file name if the path does not exist.)
Answer: As Display Name said, `os.path.isabs` along with `sys.argv` is probably the
best:
import sys
import os
fpath = sys.argv[-1]
print(os.path.isabs(fpath))
print(fpath)
output
>>>
True
C:\Users\310176421\Desktop\Python\print.py
>>>
some cmd stuff
C:\Users\310176421\Desktop\Python>python print.py C:\Users\310176421\Desktop\tes
t.txt
True
C:\Users\310176421\Desktop\test.txt
C:\Users\310176421\Desktop\Python>python print.py whatever
False
whatever
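Since the question mentions argparse, here is a minimal sketch of the same check with an argparse argument; `os.path.abspath` resolves a relative path against the current working directory:

    import argparse
    import os

    parser = argparse.ArgumentParser()
    parser.add_argument("path", help="file path, relative or absolute")
    args = parser.parse_args()

    print(os.path.isabs(args.path))    # True only if an absolute path was given
    print(os.path.abspath(args.path))  # full path in either case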
|
Want to convert sqlite code in Shared preference code
Question: I made a quiz application using an SQLite database but now I have to convert it
to SharedPreferences. How can I change it to use SharedPreferences?
Here is my code:
QuizActivity.java
import java.util.List;
import android.os.Bundle;
import android.app.Activity;
import android.content.Intent;
import android.util.Log;
import android.view.Menu;
import android.view.View;
import android.widget.Button;
import android.widget.RadioButton;
import android.widget.RadioGroup;
import android.widget.TextView;
public class QuizActivity extends Activity {
List<Question> quesList;
int score=0;
int qid=0;
Question currentQ;
TextView txtQuestion;
RadioButton rda, rdb, rdc;
Button butNext;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_quiz);
DbHelper db=new DbHelper(this);
quesList=db.getAllQuestions();
currentQ=quesList.get(qid);
txtQuestion=(TextView)findViewById(R.id.textView1);
rda=(RadioButton)findViewById(R.id.radio0);
rdb=(RadioButton)findViewById(R.id.radio1);
rdc=(RadioButton)findViewById(R.id.radio2);
butNext=(Button)findViewById(R.id.button1);
setQuestionView();
butNext.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
RadioGroup grp=(RadioGroup)findViewById(R.id.radioGroup1);
RadioButton answer=(RadioButton)findViewById(grp.getCheckedRadioButtonId());
Log.d("yourans", currentQ.getANSWER()+" "+answer.getText());
if(currentQ.getANSWER().equals(answer.getText()))
{
score++;
Log.d("score", "Your score"+score);
}
if(qid<5){
currentQ=quesList.get(qid);
setQuestionView();
}else{
Intent intent = new Intent(QuizActivity.this, ResultActivity.class);
Bundle b = new Bundle();
b.putInt("score", score); //Your score
intent.putExtras(b); //Put your score to your next Intent
startActivity(intent);
finish();
}
}
});
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
// Inflate the menu; this adds items to the action bar if it is present.
getMenuInflater().inflate(R.menu.activity_quiz, menu);
return true;
}
private void setQuestionView()
{
txtQuestion.setText(currentQ.getQUESTION());
rda.setText(currentQ.getOPTA());
rdb.setText(currentQ.getOPTB());
rdc.setText(currentQ.getOPTC());
qid++;
}
}
ResultActivity.java
import android.os.Bundle;
import android.app.Activity;
import android.view.Menu;
import android.widget.RatingBar;
import android.widget.TextView;
public class ResultActivity extends Activity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_result);
//get rating bar object
RatingBar bar=(RatingBar)findViewById(R.id.ratingBar1);
bar.setNumStars(5);
bar.setStepSize(0.5f);
//get text view
TextView t=(TextView)findViewById(R.id.textResult);
//get score
Bundle b = getIntent().getExtras();
int score= b.getInt("score");
//display score
bar.setRating(score);
switch (score)
{
case 1:
case 2: t.setText("Oopsie! Better Luck Next Time!");
break;
case 3:
case 4:t.setText("Hmmmm.. Someone's been reading a lot of trivia");
break;
case 5:t.setText("Who are you? A trivia wizard???");
break;
}
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
// Inflate the menu; this adds items to the action bar if it is present.
getMenuInflater().inflate(R.menu.activity_result, menu);
return true;
}
}
DBhelper.java
import java.util.ArrayList;
import java.util.List;
import android.content.ContentValues;
import android.content.Context;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;
public class DbHelper extends SQLiteOpenHelper {
private static final int DATABASE_VERSION = 1;
// Database Name
private static final String DATABASE_NAME = "triviaQuiz";
// tasks table name
private static final String TABLE_QUEST = "quest";
// tasks Table Columns names
private static final String KEY_ID = "id";
private static final String KEY_QUES = "question";
private static final String KEY_ANSWER = "answer"; //correct option
private static final String KEY_OPTA= "opta"; //option a
private static final String KEY_OPTB= "optb"; //option b
private static final String KEY_OPTC= "optc"; //option c
private SQLiteDatabase dbase;
public DbHelper(Context context) {
super(context, DATABASE_NAME, null, DATABASE_VERSION);
}
@Override
public void onCreate(SQLiteDatabase db) {
dbase=db;
String sql = "CREATE TABLE IF NOT EXISTS " + TABLE_QUEST + " ( "
+ KEY_ID + " INTEGER PRIMARY KEY AUTOINCREMENT, " + KEY_QUES
+ " TEXT, " + KEY_ANSWER+ " TEXT, "+KEY_OPTA +" TEXT, "
+KEY_OPTB +" TEXT, "+KEY_OPTC+" TEXT)";
db.execSQL(sql);
addQuestions();
//db.close();
}
private void addQuestions()
{
Question q1=new Question("Which company is the largest manufacturer" +
" of network equipment?","HP", "IBM", "CISCO", "CISCO");
this.addQuestion(q1);
Question q2=new Question("Which of the following is NOT " +
"an operating system?", "SuSe", "BIOS", "DOS", "BIOS");
this.addQuestion(q2);
Question q3=new Question("Which of the following is the fastest" +
" writable memory?","RAM", "FLASH","Register","Register");
this.addQuestion(q3);
Question q4=new Question("Which of the following device" +
" regulates internet traffic?", "Router", "Bridge", "Hub","Router");
this.addQuestion(q4);
Question q5=new Question("Which of the following is NOT an" +
" interpreted language?","Ruby","Python","BASIC","BASIC");
this.addQuestion(q5);
}
@Override
public void onUpgrade(SQLiteDatabase db, int oldV, int newV) {
// Drop older table if existed
db.execSQL("DROP TABLE IF EXISTS " + TABLE_QUEST);
// Create tables again
onCreate(db);
}
// Adding new question
public void addQuestion(Question quest) {
//SQLiteDatabase db = this.getWritableDatabase();
ContentValues values = new ContentValues();
values.put(KEY_QUES, quest.getQUESTION());
values.put(KEY_ANSWER, quest.getANSWER());
values.put(KEY_OPTA, quest.getOPTA());
values.put(KEY_OPTB, quest.getOPTB());
values.put(KEY_OPTC, quest.getOPTC());
// Inserting Row
dbase.insert(TABLE_QUEST, null, values);
}
public List<Question> getAllQuestions() {
List<Question> quesList = new ArrayList<Question>();
// Select All Query
String selectQuery = "SELECT * FROM " + TABLE_QUEST;
dbase=this.getReadableDatabase();
Cursor cursor = dbase.rawQuery(selectQuery, null);
// looping through all rows and adding to list
if (cursor.moveToFirst()) {
do {
Question quest = new Question();
quest.setID(cursor.getInt(0));
quest.setQUESTION(cursor.getString(1));
quest.setANSWER(cursor.getString(2));
quest.setOPTA(cursor.getString(3));
quest.setOPTB(cursor.getString(4));
quest.setOPTC(cursor.getString(5));
quesList.add(quest);
} while (cursor.moveToNext());
}
// return quest list
return quesList;
}
public int rowcount()
{
int row=0;
String selectQuery = "SELECT * FROM " + TABLE_QUEST;
SQLiteDatabase db = this.getWritableDatabase();
Cursor cursor = db.rawQuery(selectQuery, null);
row=cursor.getCount();
return row;
}
}
Question.java
public class Question {
private int ID;
private String QUESTION;
private String OPTA;
private String OPTB;
private String OPTC;
private String ANSWER;
public Question()
{
ID=0;
QUESTION="";
OPTA="";
OPTB="";
OPTC="";
ANSWER="";
}
public Question(String qUESTION, String oPTA, String oPTB, String oPTC,
String aNSWER) {
QUESTION = qUESTION;
OPTA = oPTA;
OPTB = oPTB;
OPTC = oPTC;
ANSWER = aNSWER;
}
public int getID()
{
return ID;
}
public String getQUESTION() {
return QUESTION;
}
public String getOPTA() {
return OPTA;
}
public String getOPTB() {
return OPTB;
}
public String getOPTC() {
return OPTC;
}
public String getANSWER() {
return ANSWER;
}
public void setID(int id)
{
ID=id;
}
public void setQUESTION(String qUESTION) {
QUESTION = qUESTION;
}
public void setOPTA(String oPTA) {
OPTA = oPTA;
}
public void setOPTB(String oPTB) {
OPTB = oPTB;
}
public void setOPTC(String oPTC) {
OPTC = oPTC;
}
public void setANSWER(String aNSWER) {
ANSWER = aNSWER;
}
}
activity_quiz.xml
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".QuizActivity" >
<LinearLayout
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentBottom="true"
android:layout_alignParentLeft="true"
android:layout_alignParentRight="true"
android:layout_alignParentTop="true"
android:orientation="vertical" >
<TextView
android:id="@+id/textView1"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="@string/largetext"
android:textAppearance="?android:attr/textAppearanceLarge" />
<RadioGroup
android:id="@+id/radioGroup1"
android:layout_width="match_parent"
android:layout_height="0dp"
android:layout_weight="0.04" >
<RadioButton
android:id="@+id/radio0"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:checked="true"
android:text="@string/radiobutton" />
<RadioButton
android:id="@+id/radio1"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="@string/radiobutton2" />
<RadioButton
android:id="@+id/radio2"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="@string/radiobutton3" />
</RadioGroup>
<Button
android:id="@+id/button1"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="@string/str_next" />
</LinearLayout>
</RelativeLayou
activity_result
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".ResultActivity" >
<LinearLayout
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentBottom="true"
android:layout_alignParentLeft="true"
android:layout_alignParentRight="true"
android:layout_alignParentTop="true"
android:orientation="vertical" >
<RatingBar
android:id="@+id/ratingBar1"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:numStars="5"
android:stepSize="1.0"
android:rating="0.0"/>
<TextView
android:id="@+id/textResult"
android:layout_width="match_parent"
android:layout_height="0dp"
android:layout_weight="0.08"
android:text="@string/largetext3"
android:textAppearance="?android:attr/textAppearanceLarge" />
</LinearLayout>
</RelativeLayout>
Answer: You can add helper methods that serialize the Question model to a JSONObject and
then store a JSONArray of those objects in a single String shared preference. Add
these new methods to Question.java:
    public JSONObject toJSONObject() throws JSONException {
JSONObject object = new JSONObject();
object.put("ID", ID);
object.put("QUESTION", QUESTION);
object.put("OPTA", OPTA);
object.put("OPTB", OPTB);
object.put("OPTC", OPTC);
object.put("ANSWER", ANSWER);
return object;
}
    public void fromJSONObject(JSONObject object) throws JSONException {
ID = object.getInt("ID");
QUESTION = object.getString("QUESTION");
OPTA = object.getString("OPTA");
OPTB = object.getString("OPTB");
OPTC = object.getString("OPTC");
ANSWER = object.getString("ANSWER");
}
After that, rewrite getAllQuestions() to build its list from a JSONArray of
toJSONObject() entries. Because a JSONArray has no unique-key mechanism, you will
need to check the data based on the ID value.
|
How to prevent inserting a record different from the model in python
Question: I have a data model that has some primitive and array-type properties, as
shown below:
class WordCollection:
"""First model"""
def __init__(self, **properties):
self.name = ""
self.status = CommonStatus.active
self.expire_time = time.time() + 1000 * 60 * 24 # after 1 day
self.created_date = time.time()
self.words = []
self.__dict__.update(properties)
This opens the door to a hack: for example, when I construct the class with a property
which is not part of the class, it can easily be abused.
collection = WordCollection(**{..., "hack_property":"large text or irrelative data"})
So I've reworked the class's initialize method:
class WordCollection:
"""Second model"""
def __init__(self, **properties):
self.name = properties["name"] if "name" in properties else ""
self.status = properties["active"] if "active" in properties else CommonStatus.active
self.expire_time = properties["expire_time"] if "expire_time" in properties else time.time() + 1000 * 60 * 24 # after 1 day
self.created_date = properties["created_date"] if "created_date" in properties else time.time()
self.words = properties["words"] if "words" in properties else []
But the above code does not solve the problem in full:
collection = WordCollection(**{..., "name":{"hack_property":"large text or irrelative data"}})
This is the last rebuilt code:
class WordCollection:
"""Third Model"""
def __init__(self, **properties):
self.name = properties["name"] if "name" in properties and isinstance(properties["name"], str) else ""
self.status = properties["active"] if "active" in properties \
and isinstance(properties["status"], int) else CommonStatus.active
....
The above revision solves my problem, but it brings conditional complexity, and I
believe there should be a better solution than the ones above.
Answer: A more standard formulation:
valid_properties = {'prop1', 'prop2', 'prop3'}
class WordCollection(object):
        def __init__(self, name="", status=None, **properties):
# This one is explicit, with a default that is specified in the call signature
# Defaults in the call signature are resolved when the class is imported
self.name = name
# This one is more dynamic - CommonStatus.active could change
# after the user imports the class, so we don't want it fixed.
# Instead, use a sentinel.
# I usually use None. If None is a valid value, best bet
# is to do something like this:
# sentinel = object()
# then use that instead of None.
self.status = CommonStatus.active if status is None else status
# This one we just assign -
self.words = []
# You don't _have_ to include a **kwargs if you don't want to.
# If you don't want _any_ surprise properties, just leave
# **properties out of the __init__, and only ones you explicit
# declare will be allowed.
# Explicit is better - they show up in tab completion/help
# But if you want to filter out input to a set of valid props...
filtered_props = {k:v for k,v in properties.items() if k in valid_properties}
self.__dict__.update(filtered_props)
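A quick check of the filtering behaviour (run in the same module, so that `CommonStatus` and `valid_properties` from above are available):

    collection = WordCollection(name="my words", hack_property="large text or irrelative data")
    print(hasattr(collection, "hack_property"))  # False - not in valid_properties, so it was dropped
    print(collection.name)                       # my words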
|
cURL request to Python (using multipart/form-data)
Question: I am trying to translate this cURL request:
curl -X POST "endpoint" -H 'Content-Type: multipart/form-data' -F "[email protected]"
So far I've got this:
requests.post(
endpoint,
headers={"Content-Type": "multipart/form-data"},
files={"config": ("conf.ttl", open("conf.ttl", "rb"), "text/turtle")}
)
But it doesn't work quite as expected. What is it I'm missing?
Answer: You shouldn't be setting "multipart/form-data" explicitly. It overwrites the rest
of the Content-Type header that requests generates, including the boundary
("multipart/form-data; boundary=4b9..."). There is no need to set the header;
requests will do that for you. You can see the request headers
(`r.request.headers`) in the example below:
import requests
endpoint = "http://httpbin.org/post"
r = requests.post(
endpoint,
files={"config": ("conf.ttl", open("conf.ttl", "rb"), "text/turtle")}
)
print r.request.headers
print r.headers
print r.text
gives:
{'Content-Length': '259', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'User-Agent': 'python-requests/2.10.0', 'Connection': 'keep-alive', 'Content-Type': 'multipart/form-data; boundary=4b99265adcf04931964cb96f48b53a36'}
{'Content-Length': '530', 'Server': 'nginx', 'Connection': 'keep-alive', 'Access-Control-Allow-Credentials': 'true', 'Date': 'Fri, 20 May 2016 20:50:05 GMT', 'Access-Control-Allow-Origin': '*', 'Content-Type': 'application/json'}
{
"args": {},
"data": "",
"files": {
"config": "curl -X POST \"endpoint\" -H 'Content-Type: multipart/form-data' -F \"[email protected]\"\n\n"
},
"form": {},
"headers": {
"Accept": "*/*",
"Accept-Encoding": "gzip, deflate",
"Content-Length": "259",
"Content-Type": "multipart/form-data; boundary=4b99265adcf04931964cb96f48b53a36",
"Host": "httpbin.org",
"User-Agent": "python-requests/2.10.0"
},
"json": null,
"origin": "84.92.144.93",
"url": "http://httpbin.org/post"
}
Whereas your code with the explicit header gives an error against the same URL:
{'Content-Length': '259', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'User-Agent': 'python-requests/2.10.0', 'Connection': 'keep-alive', 'Content-Type': 'multipart/form-data'}
{'Date': 'Fri, 20 May 2016 20:54:34 GMT', 'Content-Length': '291', 'Content-Type': 'text/html', 'Connection': 'keep-alive', 'Server': 'nginx'}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>
|
Python Regex - Match and Start()
Question: Let's say I need to find the word "water" in a string. This word cannot be
part of another word and it can't be preceded by punctuation (so I'm assuming
it can only be preceded by a " " or it's the beginning of the string). I need
to return the index of the word's first char "w". So I'm trying this code:
import re
s = re.search(r"(\A| )\bwater\b", "Need water")
return s.start() # This returns the index of the char " " :(
Is it possible to ignore the **(\A| )** part of the pattern so that s.start()
always returns the index of the char "w"? Or am I thinking about this wrong?
Answer: You can use
(?<!\S)\bwater\b
See the [regex demo](https://regex101.com/r/eT5vS1/1)
**Explanation:**
* `(?<!\S)` \- a negative lookbehind that fails the match if there is a non-whitespace character right before the whole word `water`
* `\bwater\b` \- the whole word `water`.
Because the lookbehind is zero-width, it is not part of the match, so `s.start()` points directly at the `w`.
Here is a [Python demo](http://ideone.com/vy1li5):
import re
s = re.search(r"(?<!\S)\bwater\b", "Need water")
if s:
print(s.start())
|
pytest exits with no error but with "collected 0 items"
Question: I have been trying to run unit tests using pytest in python. I had written a
module with one class and some methods inside that class. I wrote a unit test
for this module (with a simple assert statement to check equality of lists)
where I first instantiate the class with a list. Then I invoke a method on
that object (from the class). Both `test.py` and the script to be tested are
in the same folder. When I run `pytest` on it, I get "collected 0 items".
I am new to `pytest`, and but am able to run their examples successfully. What
am I missing here?
Running Python version 3.5.1 and pytest version 2.8.1 on Windows 7.
My test.py code:
from sort_algos import Sorts
def integer_sort_test():
myobject1 = Sorts([-100,10,-10])
assert myobject1.merge_sort() == [-101,-100,10]
sort_algos.py is a module containing class Sorts. merge_sort is a method under
Sorts.
Answer: `pytest` gathers tests according to a naming convention. By default any file
that is to contain tests must be named starting with `test_` and any function
in a file that should be treated as a test must also start with `test_`.
If you rename your test file to `test_sorts.py` and rename the example
function you provide above as `test_integer_sort`, then you will find it is
automatically collected and executed.
This test collecting behavior [can be
changed](https://pytest.org/latest/example/pythoncollection.html) to suit your
desires. Changing it will require learning about [configuration in
pytest](https://pytest.org/latest/customize.html).
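For example, if you would rather keep your current file and function names (test.py, integer_sort_test), the collection patterns can be overridden in a `pytest.ini` at the project root. This is only a sketch; adjust the globs to your own naming scheme:

    [pytest]
    python_files = test*.py
    python_functions = *_test test_*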
|
List Index Is Out of Range And I Don't Know Why
Question: So this is the error:
Traceback (most recent call last):
File "E:\python\cloud.py", line 593, in <module>
inventory()
File "E:\python\cloud.py", line 297, in inventory
print("Weapon Attack Damage: ", c.weaponAttack[i])
IndexError: list index out of range
These are the only parts of the code that have the "weaponAttack" function in
it. I honestly don't see why it is giving me this error.
class Cloud:
def __init__(self):
self.weaponAttack = list()
self.sp = 0
self.armor = list()
self.armorReduction = list()
self.weapon = list()
self.money = 0
self.lvl = 0
self.exp = 0
self.mexp = 100
self.attackPower = 0
self.hp = 100
self.mhp = 100
self.name = "Cloud"
c = Cloud()
armors = ["No Armor","Belice Armor","Yoron's Armor","Andrew's Custom Armor","Zeus' Armor"]
armorReduce = [.0, .025, .05, .10, .15]
c.armor.append(armors[0])
c.armorReduction.append(armorReduce[0])
w = random.randint(0, 10)
weapons = ["The sword of wizdom","The sword of kindness", "the sword of power", "the sword of elctricity", "the sword of fire", "the sword of wind", "the sword of ice", "the sword of self appreciation", "the sword of love", "the earth sword", "the sword of the universe"]
weaponAttack = [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]
c.weapon.append(weapons[w])
c.weapon.append(weaponAttack[w])
print("You have recieved the ", weapons[w])
print("")
print("It does ", weaponAttack[w]," attack power!")
print("")
for i in range(0, len(c.weapon)):
print(i)
print("Weapon: ", c.weapon[i])
print("Weapon Attack Damage: ", c.weaponAttack[i])
print("")
Here is the rest of the code, but before you read it, I'm warning you: it's long.
Either way, I'm pretty sure the lines of code above are where the problem is.
import random
import time
import sys
def asky():
ask = input("Would you like to check you player stats and inventory or go to the next battle? Say inventory for inventory or say next for the next battle: ")
if "inventory" in ask:
inventory()
elif "next" in ask:
user()
def Type(t):
t = list(t)
for a in t:
sys.stdout.write(a)
time.sleep(.02)
class Cloud:
def __init__(self):
self.weaponAttack = list()
self.sp = 0
self.armor = list()
self.armorReduction = list()
self.weapon = list()
self.money = 0
self.lvl = 0
self.exp = 0
self.mexp = 100
self.attackPower = 0
self.hp = 100
self.mhp = 100
self.name = "Cloud"
c = Cloud()
armors = ["No Armor","Belice Armor","Yoron's Armor","Andrew's Custom Armor","Zeus' Armor"]
armorReduce = [.0, .025, .05, .10, .15]
c.armor.append(armors[0])
c.armorReduction.append(armorReduce[0])
w = random.randint(0, 10)
weapons = ["The sword of wizdom","The sword of kindness", "the sword of power", "the sword of elctricity", "the sword of fire", "the sword of wind", "the sword of ice", "the sword of self appreciation", "the sword of love", "the earth sword", "the sword of the universe"]
weaponAttack = [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]
c.weapon.append(weapons[w])
c.weapon.append(weaponAttack[w])
print("You have recieved the ", weapons[w])
print("")
print("It does ", weaponAttack[w]," attack power!")
print("")
class Soldier:
def __init__(self):
dmg = random.randint(5,20)
self.lvl = 0
self.attackPower = dmg
self.hp = 100
self.mhp = 100
self.name = "Soldier"
s = Soldier()
def enemy():
ad = random.randint(0,2)
if ad >= 1: #Attack
Type("Soldier attacks!")
print("")
Type("Cloud Health: ")
print(c.hp)
Type("Enemy Health: ")
print(s.hp)
hm = random.randint(0, 2)
if hm == 0:
Type("Miss!")
print("")
elif hm > 0:
crit = random.randint(0,10)
if crit == 0:
print("CRITICAL HIT!")
crithit = int((s.attackPower) * (.5))
c.hp = c.hp - (s.attackPower + crithit)
elif crit >= 1:
c.hp = c.hp - s.attackPower
Type("Cloud Health: ")
print(c.hp)
Type("Enemy Health: ")
print(s.hp)
if c.hp <= 0:
adds = s.mhp - s.hp
s.hp = s.hp + adds
Type("GAME OVER")
print("")
Type("You Lost!")
print("")
elif s.hp <= 0:
adds = s.mhp - s.hp
s.hp = s.hp + adds
Type("GAME OVER")
print("")
Type("You Won!")
print("")
Type("You recieved 100 crystals to spend at the shop!")
print("")
c.money = c.money + 100
asky()
c.exp = c.exp + 100
else:
user()
elif ad == 0:#Defend
Type("Soldier Defends!")
print("")
Type("Cloud Health: ")
print(c.hp)
Type("Enemy Health: ")
print(s.hp)
if s.hp == s.mhp:
print("")
elif s.hp > (s.mhp - 15) and s.hp < s.mhp:
add = s.mhp - s.hp
s.hp = add + s.hp
Type("Cloud Health: ")
print(c.hp)
Type("Enemy Health: ")
print(s.hp)
elif s.hp < (s.mhp - 15):
s.hp = s.hp + 15
Type("Cloud Health: ")
print(c.hp)
Type("Enemy Health: ")
print(s.hp)
if c.hp <= 0:
adds = s.mhp - s.hp
s.hp = s.hp + adds
Type("GAME OVER")
print("")
Type("You Lost!")
print("")
elif s.hp <= 0:
adds = s.mhp - s.hp
s.hp = s.hp + adds
Type("GAME OVER")
print("")
Type("You Won!")
print("")
Type("You recieved 100 crystals to spend at the shop!")
print("")
c.money = c.money + 100
asky()
c.exp = c.exp + 100
else:
user()
def user():
User = input("attack or defend? ")
if "attack" in User:#attack
Type("Cloud attacks!")
print("")
Type("Cloud Health: ")
print(c.hp)
Type("Enemy Health: ")
print(s.hp)
hm = random.randint(0,4)
if hm == 0:
Type("Miss!")
print("")
elif hm > 0:
crit = random.randint(0,7)
if crit == 0:
print("CRITICAL HIT!")
crithit = int((c.weaponAttack[0]) * (.5))
s.hp = s.hp - (c.weaponAttack[0] + crithit)
elif crit >= 1:
s.hp = s.hp - c.weaponAttack[0]
Type("Cloud Health: ")
print(c.hp)
Type("Enemy Health: ")
print(s.hp)
if c.hp <= 0:
adds = s.mhp - s.hp
s.hp = s.hp + adds
Type("GAME OVER")
print("")
Type("You Lost!")
print("")
elif s.hp <= 0:
adds = s.mhp - s.hp
s.hp = s.hp + adds
Type("GAME OVER")
print("")
Type("You Won!")
print("")
Type("You recieved 100 crystals to spend at the shop!")
print("")
c.money = c.money + 100
c.exp = c.exp + 100
asky()
else:
enemy()
elif "defend" in User:#defend
Type("Cloud Heals!")
print("")
Type("Cloud Health: ")
print(c.hp)
Type("Enemy Health: ")
print(s.hp)
if c.hp == c.mhp:
Type("You are at the maximum amount of health. Cannot add more health.")
print("")
elif c.hp > (c.mhp - 15) and c.hp < c.mhp:
add = c.mhp - c.hp
c.hp = add + c.hp
Type("Cloud Health: ")
print(c.hp)
Type("Enemy Health: ")
print(s.hp)
elif c.hp <= (c.mhp - 15):
c.hp = c.hp + 15
Type("Cloud Health: ")
print(c.hp)
Type("Enemy Health: ")
print(s.hp)
if c.hp <= 0:
adds = s.mhp - s.hp
s.hp = s.hp + adds
Type("GAME OVER")
print("")
Type("You Lost!")
print("")
elif s.hp <= 0:
adds = s.mhp - s.hp
s.hp = s.hp + adds
Type("Congratulations!")
print("")
Type("You Won!")
print("")
Type("You recieved 100 crystals to spend at the shop!")
print("")
c.money = c.money + 100
c.exp = c.exp + 100
asky()
else:
enemy()
else:
Type("The option you have entered is not in the game database. Please try again")
print("")
user()
def inventory():
if c.exp == c.mexp:
print("LEVEL UP!")
c.exp = 0
adde = int((c.mexp) * (.5))
c.mexp = c.mexp + adde
c.sp = c.sp + 1
c.lvl = c.lvl + 1
if c.lvl > s.lvl:
s.lvl = s.lvl + 1
print("Level: ", c.lvl)
nextlvl = c.lvl + 1
print("Experience: ", c.exp, "/", c.mexp, "level", nextlvl)
print("Amount of Skill Points:", c.sp)
for i in range(0, len(c.weapon)):
print(i)
print("Weapon: ", c.weapon[i])
print("Weapon Attack Damage: ", c.weaponAttack[i])
print("")
for j in range(0, len(c.armor)):
print("Armor: ", c.armor[j])
print("Armor Damage Reduction: ", c.armorReduction[j])
print("")
print("Amount of Crystals: ", c.money)
print("")
print("")
print("Stats:")
print("Maximum Health: ", c.mhp)
print("Current Health: ", c.hp)
print("Your Name: ", c.name)
print("")
sn = input("To heal yourself, you need to go to the shop. Say, *shop* to go to the shop, say *name* to change your name, say, *next* to fight another battle, say, *level* to use your skill point(s), or say, *help* for help: ")
if "name" in sn:
c.name = input("Enter Your name here: ")
print("Success! Your name has been changed to ", c.name)
inventory()
elif "next" in sn:
Type("3")
print("")
Type("2")
print("")
Type("1")
print("")
Type("FIGHT!")
print("")
user()
elif "help" in sn:
Type("The goal of this game is to fight all the enemies, kill the miniboss, and finally, kill the boss! each time you kill an enemy you gain *crystals*, currency which you can use to buy weapons, armor, and health. You can spend these *crystals* at the shop. To go to the shop, just say *shop* when you are in your inventory. Although, each time you defeat an enemy, they get harder to defeat. Once you level up, you gain one skill point. This skill point is then used while in your inventory by saying the word *level*. You can use your skill point(s) to upgrade your stats, such as, your maximum health, and your attack power.")
print("")
inventory()
elif "shop" in sn:
shop()
elif "level" in sn:
skills()
else:
print("Level: ", c.lvl)
nextlvl = c.lvl + 1
print("Experience: ", c.exp, "/", c.mexp, "level", nextlvl)
print("Amount of Skill Points:", c.sp)
for i in range(0, len(c.weapon)):
print("Weapon:", c.weapon[i])
print("Weapon Attack Damage: ", c.weaponAttack[i])
print("")
for i in range(0, len(c.armor)):
print("Armor: ", c.armor[i])
print("Armor Damage Reduction: ", c.armorReduction[i])
print("")
print("Amount of Crystals: ", c.money)
print("")
print("")
print("Stats:")
print("Maximum Health: ", c.mhp)
print("Current Health: ", c.hp)
print("Attack Power: ", c.attackPower)
print("Your Name: ", c.name)
print("")
sn = input("To heal yourself, you need to go to the shop. Say, *shop* to go to the shop, say *name* to change your name, say, *next* to fight another battle, say, *level* to use your skill point(s), or say, *help* for help: ")
if "name" in sn:
c.name = input("Enter Your name here: ")
print("Success! Your name has been changed to ", c.name)
inventory()
elif "next" in sn:
Type("3")
print("")
Type("2")
print("")
Type("1")
print("")
Type("FIGHT!")
print("")
user()
elif "help" in sn:
Type("The goal of this game is to fight all the enemies, kill the miniboss, and finally, kill the boss! each time you kill an enemy you gain *crystals*, currency which you can use to buy weapons, armor, and health. You can spend these *crystals* at the shop. To go to the shop, just say *shop* when you are in your inventory. Although, each time you defeat an enemy, they get harder to defeat. Once you level up, you gain one skill point. This skill point is then used while in your inventory by saying the word *level*. You can use your skill point(s) to upgrade your stats, such as, your maximum health, and your attack power.")
print("")
inventory()
elif "shop" in sn:
shop()
elif "level" in sn:
skills()
def skills():
print("You have ", c.sp, "skill points to use.")
print("")
print("Upgrade attack power *press the number 1*")
print("")
print("Upgrade maximum health *press the number 2*")
print("")
skill = input("Enter the number of the skill you wish to upgrade, or say, cancel, to go back to your inventory screen.")
if "1" in skill:
sure = input("Are you sure you want to upgrade your character attack power in return for 1 skill point? *yes or no*")
if "yes" in sure:
c.sp = c.sp - 1
addsap = c.attackPower * .01
c.attackPower = c.attackPower + addsap
if "no" in sure:
skills()
elif "2" in skill:
sure = input("Are you sure you want to upgrade your maximum health in return for 1 skill point? *yes or no*")
if "yes" in sure:
c.sp = c.sp - 1
c.mhp = c.mhp + 30
if "no" in sure:
skills()
elif "cancel" in skill:
inventory()
else:
Type("The word or number you have entered is invalid. Please try again.")
print("")
skills()
def shop():
print("Welcome to Andrew's Blacksmith! Here you will find all the weapons, armor, and health you need, to defeat the horrid beast who goes by the name of Murlor! ")
print("")
print("Who's Murlor? *To ask this question, type in the number 1*")
print("")
print("Can you heal me? *To ask this question, type in the number 2*")
print("")
print("What weapons do you have? *To ask this question, type in the number 3*")
print("")
print("Got any armor? *To ask this question, type in the number 4*")
print("")
ask1 = input("Enter desired number here or say, cancel, to go back to your inventory screen. ")
if "1" in ask1:
def murlor():
Type("Murlor is a devil-like creature that lives deep among the caves of Bricegate. He has been terrorising the people of this village for centuries.")
print("")
print("What is Bricegate? *To choose this option, type in the number 1*")
print("")
print("Got any more information about this village? *To choose this option, type in the number 2*")
print("")
print("Thank you! *To choose this option, type in the number 3*")
ask3 = input("Enter desired number here, or say, cancel, to go back to the main shop screen. ")
if "1" in ask3:
Type("That's the name of this town.")
murlor()
elif "2" in ask3:
def askquest1():
quest1 = input("Well I DO know that there's this secret underground dungeon. It's VERY dangerous but it comes with a hug reward. If you ever concider it, could you get my lucky axe? I dropped it down a hole leading to the dungeon and i was too afraid to get it back. *If you accept the quest, say yes, if you want to go back, say, no.*")
if "yes" in quest1:
quest1()
elif "no" in quest1:
murlor()
else:
Type("The option you have selected is not valid. Please try again")
print("")
askquest1()
elif "3" in ask3:
shop()
else:
Type("The number or word you have entered is invalid. please try again.")
print("")
elif "2" in ask1:
def heal():
Type("Sure! That'll be 30 crystals.")
ask2 = input(" *say, okay, to confirm the purchase or say, no, to cancel the pruchase*")
if "okay" in ask2:
if c.money < 30:
Type("I'm sorry sir, but you don't have enough crystals to buy this.")
print("")
shop()
elif c.money >= 30:
c.money = c.money - 30
Type("30 crystals has been removed from your inventory.")
print("")
addn = c.mhp - c.hp
c.hp = c.hp + addn
Type("You have been healed!")
print("")
shop()
elif "no" in ask2:
shop()
else:
Type("The option you have chosen is invalid. Please try again")
print("")
heal()
elif "3" in ask1:
def swords():
print("Swords: ")
print("The Belice Sword: *Type 1 for this sword*")
print("Damage: 18")
print("Cost: 70 crystals")
print("")
print("The Sword of A Thousand Truths: *Type 2 for this sword*")
print("Damage: 28")
print("Cost: 100 crystals")
print("")
print("Spyro's Sword: *Type 3 for this sword*")
print("Damage: 32")
print("Cost: 125 crystals")
print("")
print("The Sword Of The Athens: *Type 4 for this sword*")
print("Damage: 36")
print("Cost: 150 crystals")
print("")
print("Coming Soon...")
sword = input("Enter the sword ID number or say cancel to go back to the main shop screen. You now have ", c.money, "crystals.")
if "1" in sword:
if c.money < 70:
Type("I'm sorry sir, but you don't have enough crystals to buy this.")
print("")
swords()
elif c.money >= 70:
c.money = c.money - 70
Type("70 crystals has been removed from your inventory.")
print("")
weapon.append("The Belice Sword")
Type("The Belice Sword has been added to your inventory!")
print("")
swords()
elif "2" in sword:
if c.money < 250:
Type("I'm sorry sir, but you don't have enough crystals to buy this.")
print("")
swords()
elif c.money >= 250:
c.money = c.money - 250
Type("250 crystals has been removed from your inventory. You now have ", c.money, "crystals.")
print("")
weapon.append("The Sword Of A Thousand Truths")
Type("The Sword Of A Thousand Truths has been added to your inventory!")
print("")
swords()
elif "3" in sword:
if c.money < 525:
Type("I'm sorry sir, but you don't have enough crystals to buy this.")
print("")
swords()
elif c.money >= 525:
c.money = c.money - 525
Type("525 crystals has been removed from your inventory. You now have ", c.money, "crystals.")
print("")
weapon.append("The Spyro's Sword")
Type("The Spyro's Sword has been added to your inventory!")
print("")
swords()
elif "4" in sword:
if c.money < 1050:
Type("I'm sorry sir, but you don't have enough crystals to buy this.")
print("")
swords()
elif c.money >= 1050:
c.money = c.money - 1050
Type("1050 crystals has been removed from your inventory. You now have ", c.money, "crystals.")
print("")
weapon.append("The Sword Of The Athens")
Type("The Sword Of The Athens has been added to your inventory!")
print("")
swords()
elif "cancel" in sword:
shop()
else:
Type("The number or word you have entered is invalid. Please try again.")
print("")
swords()
elif "4" in ask1:
def armory():
print("Armor:")
print("Belice Armor, ID: 1")
print("Damage Reduction: 2.5%")
print("Cost: 100 crystals")
print("")
print("Yoron's armor, ID: 2")
print("Damage Reduction: 5%")
print("Cost: 250 crystals")
print("")
print("Andrew's Custom Armor, ID: 3")
print("Damage Reduction: 10%")
print("Cost: 500 crystals")
print("")
print("Zeus' Armor, ID: 4")
print("Damage Reduction: 15%")
print("Cost: 1000 crystals")
print("")
print("Coming Soon...")
print("")
armor = input("Enter armor ID number, or type, cancel, to go back to the main shop menu.")
if "1" in armor:
if c.money < 100:
Type("I'm sorry sir, but you don't have enough crystals to buy this.")
print("")
armory()
elif c.money >= 100:
c.money = c.money - 100
Type("100 crystals has been removed from your inventory. You now have ", c.money, "crystals.")
print("")
weapon.append("Belice Armor")
Type("Belice Armor has been added to your inventory!")
print("")
armory()
elif "2" in armor:
if c.money < 250:
Type("I'm sorry sir, but you don't have enough crystals to buy this.")
print("")
armory()
elif c.money >= 250:
c.money = c.money - 250
Type("250 crystals has been removed from your inventory. You now have ", c.money, "crystals.")
print("")
weapon.append("Yoron's Armor")
Type("Yoron's Armor has been added to your inventory!")
print("")
armory()
elif "3" in armor:
if c.money < 500:
Type("I'm sorry sir, but you don't have enough crystals to buy this.")
print("")
armory()
elif c.money >= 500:
c.money = c.money - 500
Type("500 crystals has been removed from your inventory. You now have ", c.money, "crystals.")
print("")
weapon.append("Andrew's Custom Armor")
Type("Andrew's Custom Armor has been added to your inventory!")
print("")
armory()
elif "4" in armor:
if c.money < 1000:
Type("I'm sorry sir, but you don't have enough crystals to buy this.")
print("")
armory()
elif c.money >= 1000:
c.money = c.money - 1000
Type("1000 crystals has been removed from your inventory. You now have ", c.money, "crystals.")
print("")
weapon.append("Zeus' Armor")
Type("Zeus' Armor has been added to your inventory!")
print("")
armory()
elif "cancel" in armor:
shop()
else:
Type("The word or number you have entered is invalid. Please try again")
armory()
elif "cancel" in ask1:
inventory()
else:
Type("The number or word you have entered is invalid. Please try again.")
print("")
shop()
inventory()
Answer: I think your problem is here:
weapons = ["The sword of wizdom","The sword of kindness", "the sword of power", "the sword of elctricity", "the sword of fire", "the sword of wind", "the sword of ice", "the sword of self appreciation", "the sword of love", "the earth sword", "the sword of the universe"]
weaponAttack = [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]
c.weapon.append(weapons[w])
c.weapon.append(weaponAttack[w])
That last line should probably be:
    c.weaponAttack.append(weaponAttack[w])
As written, the attack value is appended to `c.weapon` instead, so `c.weaponAttack` stays empty and indexing it in `inventory()` raises the IndexError.
|
An elegant, readable way to read Butcher tableau from a file
Question: I'm trying to read a specifically formatted file (namely, the Butcher tableau)
in python 3.5. The file looks like this(tab separated):
S
a1 b11 b12 ... b1S
a2 b21 b22 ... b2S
...
aS bS1 bS2 ... bSS
0.0 c1 c2 ... cS
[tolerance]
for example, (tab separated)
2
0.0 0.0 0.0
1.0 0.5 0.5
0.0 0.5 0.5
0.0001
So my code looks like I'm writing in C. Is there a more Pythonic approach to
parsing this file? Maybe there are numpy methods that could be used here?
#the data from .dat file
S = 0 #method order, first char in .dat file
a = [] #S-dim left column of buther tableau
b = [] #S-dim matrix
c = [] #S-dim lower row
tolerance = 0 # for implicit methods
def parse_method(file_name):
'read the file_name, process lines, produce a Method object'
try:
with open('methods\\' + file_name) as file:
global S
S = int(next(file))
temp = []
for line in file:
temp.append([float(x) for x in line.replace('\n', '').split('\t')])
for i in range(S):
a.append(temp[i].pop(0))
b.append(temp[i])
global c
c = temp[S][1:]
global tolerance
tolerance = temp[-1][0] if len(temp)>S+1 else 0
except OSError as ioerror:
print('File Error: ' + str(ioerror))
Answer: Code -
from collections import namedtuple
def parse_file(file_name):
        with open(file_name, 'r') as f:  # use the file name that was passed in
file_content = f.readlines()
file_content = [line.strip('\n') for line in file_content]
s = int(file_content[0])
a = [float(file_content[i].split()[0]) for i in range(1, s + 1)]
b = [list(map(float, file_content[i].split()[1:]))
for i in range(1, s + 1)]
c = list(map(float, file_content[-2].split()))
tolerance = float(file_content[-1])
ButcherTableau = namedtuple('ButcherTableau', 's a b c tolerance')
bt = ButcherTableau(s, a, b, c, tolerance)
return bt
p = parse_file('a.txt')
print('S :', p.s)
print('a :', p.a)
print('b :', p.b)
print('c :', p.c)
print('tolerance :', p.tolerance)
Output -
S : 2
a : [0.0, 1.0]
b : [[0.0, 0.0], [0.5, 0.5]]
c : [0.0, 0.5, 0.5]
tolerance : 0.0001
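Since the question mentions numpy: the whole numeric block can also be handed to `np.loadtxt` in one call. A rough sketch (the function name is just illustrative), assuming the same layout as above with S on the first line, an optional tolerance on the last, and no trailing blank lines:

    import numpy as np

    def parse_file_np(path):
        with open(path) as f:
            lines = f.read().splitlines()
        s = int(lines[0])
        # rows 1 .. s+1 hold the a-column, the b-matrix and the final c-row
        table = np.loadtxt(lines[1:s + 2])
        a = table[:s, 0]
        b = table[:s, 1:]
        c = table[s, 1:]
        tolerance = float(lines[s + 2]) if len(lines) > s + 2 else 0.0
        return s, a, b, c, tolerance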
|
How to improve Python iteration performance over large files
Question: I have a reference file that is about 9,000 lines and has the following
structure: (index, size) - where index is unique but size may not be.
0 193532
1 10508
2 13984
3 14296
4 12572
5 12652
6 13688
7 14256
8 230172
9 16076
And I have a data file that is about 650,000 lines and has the following
structure: (cluster, offset, size) - where offset is unique but size is not.
446 0xdf6ad1 34572
447 0xdf8020 132484
451 0xe1871b 11044
451 0xe1b394 7404
451 0xe1d12b 5892
451 0xe1e99c 5692
452 0xe20092 6224
452 0xe21a4b 5428
452 0xe23029 5104
452 0xe2455e 138136
I need to compare each size value in the second column of the reference file
for any matches with the size values in the third column of the data file. If
there is a match, output the offset hex value (second column in the data file)
with the index value (first column in the reference file). Currently I am
doing this with the following code and just piping it to a new file:
#!/usr/bin/python3
import sys
ref_file = sys.argv[1]
dat_file = sys.argv[2]
with open(ref_file, 'r') as ref, open(dat_file, 'r') as dat:
for r_line in ref:
ref_size = r_line[r_line.find(' ') + 1:-1]
for d_line in dat:
dat_size = d_line[d_line.rfind(' ') + 1:-1]
if dat_size == ref_size:
print(d_line[d_line.find('0x') : d_line.rfind(' ')]
+ '\t'
+ r_line[:r_line.find(' ')])
dat.seek(0)
The typical output looks like this:
0x86ece1eb 0
0x16ff4628f 0
0x59b358020 0
0x27dfa8cb4 1
0x6f98eb88f 1
0x102cb10d4 2
0x18e2450c8 2
0x1a7aeed12 2
0x6cbb89262 2
0x34c8ad5 3
0x1c25c33e5 3
This works fine but takes about 50 minutes to complete for the given file sizes.
It has done its job, but as a novice I am always keen to learn ways to
improve my coding and share these learnings. My question is, what changes
could I make to improve the performance of this code?
Answer: You can build a dictionary `dic` from the reference file once and then do a
single lookup per data line (the following is pseudocode; I also assume sizes don't repeat):
    for index, size in the first file:
        dic[size] = index

    for cluster, offset, size in the second file:
        if size in dic:
            print dic[size], offset
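A runnable version of that idea, keeping the question's file layout and output format (offset, tab, index); a dict of lists also copes with reference sizes that repeat, and blank lines are assumed not to occur:

    #!/usr/bin/python3
    import sys
    from collections import defaultdict

    ref_file, dat_file = sys.argv[1], sys.argv[2]

    sizes = defaultdict(list)                    # size -> list of reference indices
    with open(ref_file) as ref:
        for line in ref:
            index, size = line.split()
            sizes[size].append(index)

    with open(dat_file) as dat:
        for line in dat:
            cluster, offset, size = line.split()
            for index in sizes.get(size, ()):    # no match -> nothing printed
                print(offset + '\t' + index)

This makes one pass over each file (roughly O(n + m)) instead of rescanning the data file for every reference line.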
|
Removing brackets from Scrapy json output
Question: Final part of my code is to load data from my scrapy pipeline to my pandas
dataframe.
A sample result is as below:
{"Message": ["\r\n", " Profanity directed toward staff. ", "\r\n Profanity directed toward warden ", " \r\n "], "Desc": "https://www.tdcj.state.tx.us/death_row/dr_info/nicholsjoseph.jpg"}
When loaded into the dataframe, the [] brackets are still in there along with
"\r\n". A quick search shows me that this is because of encoding and it is quite
common with scraping.
Can anybody give me an idea of a Pythonic way to get a cleaner output?
I am expecting something like
{"Message: "Profanity directed toward staff. Profanity directed toward warden", "Desc": "https://www.tdcj.state.tx.us/death_row/dr_info/nicholsjoseph.jpg"}
Edited to add item class and spider:
Item.py
from scrapy.item import Item, Field
from scrapy.loader import ItemLoader
from scrapy.loader.processors import TakeFirst, MapCompose, Join
class DeathItem(Item):
firstName = Field()
lastName = Field()
Age = Field()
Date = Field()
Race = Field()
County = Field()
Message = Field(
input_processor=MapCompose(unicode.strip),
output_processor=Join())
Desc = Field()
Mid = Field()
spider.py
from urlparse import urljoin
import scrapy
from texasdeath.items import DeathItem
class DeathSpider(scrapy.Spider):
name = "death"
allowed_domains = ["tdcj.state.tx.us"]
start_urls = [
"https://www.tdcj.state.tx.us/death_row/dr_executed_offenders.html"
]
def parse(self, response):
sites = response.xpath('//table/tbody/tr')
for site in sites:
item = DeathItem()
item['Mid'] = site.xpath('td[1]/text()').extract()
item['firstName'] = site.xpath('td[5]/text()').extract()
item['lastName'] = site.xpath('td[4]/text()').extract()
item['Age'] = site.xpath('td[7]/text()').extract()
item['Date'] = site.xpath('td[8]/text()').extract()
item['Race'] = site.xpath('td[9]/text()').extract()
item['County'] = site.xpath('td[10]/text()').extract()
url = urljoin(response.url, site.xpath("td[2]/a/@href").extract_first())
urlLast = urljoin(response.url, site.xpath("td[3]/a/@href").extract_first())
if url.endswith(("jpg","no_info_available.html")):
item['Desc'] = url
if urlLast.endswith("no_last_statement.html"):
item['Message'] = "No last statement"
yield item
else:
request = scrapy.Request(urlLast, meta={"item" : item}, callback =self.parse_details2)
yield request
else:
request = scrapy.Request(url, meta={"item": item,"urlLast" : urlLast}, callback=self.parse_details)
yield request
def parse_details(self, response):
item = response.meta["item"]
urlLast = response.meta["urlLast"]
item['Desc'] = response.xpath("//*[@id='body']/p[3]/text()").extract()
if urlLast.endswith("no_last_statement.html"):
item["Message"] = "No last statement"
return item
else:
request = scrapy.Request(urlLast, meta={"item": item}, callback=self.parse_details2)
return request
def parse_details2(self, response):
item = response.meta["item"]
item['Message'] = response.xpath("//div/p[contains(., 'Last Statement:')]/following-sibling::node()/descendant-or-self::text()").extract()
return item
I basically want clean text output to be loaded into my pandas dataframe, with all the
unwanted characters such as [], \r\n and \t left out; basically, for the data to appear
as it does on the web.
Answer: You need to tweak the way the extracted item field is post-processed. For that
`Scrapy` has the [Item
Loaders](http://doc.scrapy.org/en/latest/topics/loaders.html) with input and
output processors. In your case, you need the `Join()` and
`MapCompose(unicode.strip)`:
from scrapy.loader import ItemLoader
from scrapy.loader.processors import TakeFirst, MapCompose, Join
class MyItemLoader(ItemLoader):
default_output_processor = TakeFirst()
message_in = MapCompose(unicode, unicode.strip)
message_out = Join()
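A rough sketch of how the loader might then be used in `parse_details2` (the field name and XPath are taken from the question; treat this as an illustration, not a tested drop-in):

    def parse_details2(self, response):
        item = response.meta["item"]
        # Wrap the partially-filled item so the Message field goes through
        # MapCompose(unicode, unicode.strip) on input and Join() on output.
        loader = MyItemLoader(item=item, response=response)
        loader.add_xpath(
            'Message',
            "//div/p[contains(., 'Last Statement:')]"
            "/following-sibling::node()/descendant-or-self::text()")
        return loader.load_item()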
|
Python calculation error
Question: I am trying to implement an LCM finding algorithm. It needs to find LCM for
very large numbers.
LCM is found using the formula,
LCM(A, B) = (A * B) / GCD(A, B)
where A and B are two inputs.
Input: `226553150 1023473145`
So, `LCM = (226553150 * 1023473145) / 5`
It should be, `46374212988031350`.
But Python is giving `46374212988031352`, which is obviously an error. How can I solve
this problem?
Answer: You are using floating point math, because you used the `/` true division
operator. Floating point can only approximate large numbers, and the
difference you see is a result of that.
Use `//` floor division instead:
>>> (226553150 * 1023473145) // 5
46374212988031350
Floor division on integers never requires conversion to float, avoiding the
precision issues.
Alternatively, use the [`decimal`
module](https://docs.python.org/3/library/decimal.html) for higher-precision
math with real numbers:
>>> from decimal import Decimal
>>> Decimal('226553150') * Decimal('1023473145') / Decimal('5')
Decimal('46374212988031350')
This is slower than using `float`.
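Putting it together, a minimal sketch of the whole LCM computation that stays in integer arithmetic (this uses `math.gcd`, which is Python 3; on Python 2, `fractions.gcd` serves the same purpose):

    from math import gcd

    def lcm(a, b):
        # Integer-only arithmetic, so no floating point rounding is involved.
        return a * b // gcd(a, b)

    print(lcm(226553150, 1023473145))  # 46374212988031350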
|
Python matplotlib animating the path of an object
Question: I've been fiddling with [this](https://github.com/dm6718/Massive-Spring-
Pendulum/blob/master/Massive%20Spring%20Pendulum.py#L68) bit of Python code to
simulate a spring-pendulum system. I altered the equation slightly and it
plots fine. However, I also want to add a persistent trace after it like in
[this](https://upload.wikimedia.org/wikipedia/commons/4/45/Double-compound-
pendulum.gif) gif.
Here is my full code (I can't trim it down any more since you need the ODE
solved to generate the plotted data), the relevant bit is near the end:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from numpy import sin, cos, pi, array
spring_constant = 22.93
length = 0.16
mass = 0.1
# initial conditions
init = array([-0.35, 0, 0.08, 1]) # initial values
#array([theta, theta_dot, x, x_dot])
#Return derivatives of the array z (= [theta, theta_dot, x, x_dot])
def deriv(z, t, spring_k, spring_l, bob_mass):
k = spring_k
l = spring_l
m = bob_mass
g = 9.8
theta = z[0]
thetadot = z[1]
x = z[2]
xdot= z[3]
return array([
thetadot,
(-1.0/(l+x)) * (2*xdot*thetadot + g*sin(theta)),
xdot,
g*cos(theta) + (l+x)*thetadot**2 - (k/m)*x
])
#Create time steps
time = np.linspace(0.0,10.0,1000)
#Numerically solve ODE
y = odeint(deriv,init,time, args = (spring_constant, length, mass))
l = length
r = l+y[:,2]
theta = y[:,0]
dt = np.mean(np.diff(time))
x = r*sin(theta)
y = -r*cos(theta)
##MATPLOTLIB BEGINS HERE##
fig = plt.figure()
ax = fig.add_subplot(111, autoscale_on=False,
xlim=(-1.2*r.max(), 1.2*r.max()),
ylim=(-1.2*r.max(), 0.2*r.max()), aspect = 1.0)
ax.grid()
##ANIMATION STUFF BEGINS HERE##
line, = ax.plot([], [], 'o-', lw=2)
time_template = 'time = %.1fs'
time_text = ax.text(0.05, 0.9, '', transform=ax.transAxes)
def init():
line.set_data([], [])
time_text.set_text('')
return line, time_text
def animate(i):
thisx = [0, x[i]]
thisy = [0, y[i]]
line.set_data(thisx, thisy)
time_text.set_text(time_template%(i*dt))
return line, time_text
ani = animation.FuncAnimation(fig, animate, np.arange(1, len(y)),
interval=25, blit=True, init_func=init)
plt.show()
I tried making a list of points that gets appended to every time the animation
function is called, and then drawing all of the points accumulated so far each frame:
time_template = 'time = %.1fs'
time_text = ax.text(0.05, 0.9, '', transform=ax.transAxes)
foox = []
fooy = []
def init():
line.set_data([], [])
foo.set_data([], [])
time_text.set_text('')
return line, time_text, foo
def animate(i):
thisx = [0, x[i]]
thisy = [0, y[i]]
foox += [x[i]]
fooy += [y[i]]
line.set_data(thisx, thisy)
foo.set_data(foox, fooy)
time_text.set_text(time_template%(i*dt))
return line, time_text, foo
But I get
UnboundLocalError: local variable 'foox' referenced before assignment
Which I guess means it doesn't like it when you use a global variable? I'm not
sure how to keep a history of which points have been drawn without using a
variable outside of the animate() scope. Anyone know how?
Thank you.
**EDIT** :
I solved it. I was using += instead of .append() by mistake. Now I feel like
an idiot.
For posterity it should be:
def animate(i):
thisx = [0, x[i]]
thisy = [0, y[i]]
foox.append(x[i])
fooy.append(y[i])
line.set_data(thisx, thisy)
foo.set_data(foox, fooy)
time_text.set_text(time_template%(i*dt))
return line, time_text, foo
Answer: You are modifying global variables in your `animate` function without declaring
them as `global`.
`foo` and `line` are also redundant.
Other than that, your animation works well; you can run the following code to
see it:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from numpy import sin, cos, pi, array
spring_constant = 22.93
length = 0.16
mass = 0.1
# initial conditions
init = array([-0.35, 0, 0.08, 1]) # initial values
#array([theta, theta_dot, x, x_dot])
#Return derivatives of the array z (= [theta, theta_dot, x, x_dot])
def deriv(z, t, spring_k, spring_l, bob_mass):
k = spring_k
l = spring_l
m = bob_mass
g = 9.8
theta = z[0]
thetadot = z[1]
x = z[2]
xdot= z[3]
return array([
thetadot,
(-1.0/(l+x)) * (2*xdot*thetadot + g*sin(theta)),
xdot,
g*cos(theta) + (l+x)*thetadot**2 - (k/m)*x
])
#Create time steps
time = np.linspace(0.0,10.0,1000)
#Numerically solve ODE
y = odeint(deriv,init,time, args = (spring_constant, length, mass))
l = length
r = l+y[:,2]
theta = y[:,0]
dt = np.mean(np.diff(time))
x = r*sin(theta)
y = -r*cos(theta)
##MATPLOTLIB BEGINS HERE##
fig = plt.figure()
ax = fig.add_subplot(111, autoscale_on=False,
xlim=(-1.2*r.max(), 1.2*r.max()),
ylim=(-1.2*r.max(), 0.2*r.max()), aspect = 1.0)
ax.grid()
##ANIMATION STUFF BEGINS HERE##
line, = ax.plot([], [], 'o-', lw=2)
time_template = 'time = %.1fs'
time_text = ax.text(0.05, 0.9, '', transform=ax.transAxes)
foox = []
fooy = []
#foo.set_data(foox, fooy)
def init():
global line, time_text, foo
line.set_data([], [])
# foo.set_data([], [])
time_text.set_text('')
return line, time_text#, foo
def animate(i):
global foox, fooy, foo
thisx = [0, x[i]]
thisy = [0, y[i]]
foox += [x[i]]
fooy += [y[i]]
line.set_data(thisx, thisy)
# foo.set_data(foox, fooy)
time_text.set_text(time_template%(i*dt))
return line, time_text#, foo
ani = animation.FuncAnimation(fig, animate, np.arange(1, len(y)), interval=25, blit=False, init_func=init)
plt.show()
I've set `blit=False` because last I checked, `blit` was not working on OSX
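If you also want the persistent trace from the question, one minimal sketch is to replace the commented-out `foo` lines above with a real line object (the name `trace` is my own, not from the original code):

    trace, = ax.plot([], [], '-', lw=1)   # persistent path of the bob

    def animate(i):
        global foox, fooy
        thisx = [0, x[i]]
        thisy = [0, y[i]]
        foox.append(x[i])
        fooy.append(y[i])
        line.set_data(thisx, thisy)     # the pendulum arm for this frame
        trace.set_data(foox, fooy)      # every point visited so far
        time_text.set_text(time_template % (i*dt))
        return line, trace, time_text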
|
List Index Out of Range, Why Is This Happening?
Question: Here's the error I'm getting:
Traceback (most recent call last):
File "E:\python\cloud.py", line 34, in <module>
c = Cloud()
File "E:\python\cloud.py", line 18, in __init__
self.cweaponAttack = self.weaponAttack[0]
IndexError: list index out of range
I'm having trouble with my code and I have checked for spelling errors
everywhere but I haven't found any.
class Cloud:
def __init__(self):
self.weaponAttack = list()
self.cweaponAttack = self.weaponAttack[0]
self.sp = 1
self.armor = list()
self.armorReduction = list()
self.weapon = list()
self.cweapon = self.weapon
self.money = 10000
self.lvl = 0
self.exp = 0
self.mexp = 100
self.attackPower = 0
addaps = self.cweaponAttack * self.attackPower
self.dmg = self.cweaponAttack + addaps
self.hp = 100
self.mhp = 100
self.name = "Cloud"
c = Cloud()
armors = ["No Armor","Belice Armor","Yoron's Armor","Andrew's Custom Armor","Zeus' Armor"]
armorReduce = [0, .025, .05, .10, .15]
c.armor.append(armors[0])
c.armorReduction.append(armorReduce[0])
w = random.randint(0, 10)
weapons = ["The Sword of Wizdom","The Sword of Kindness", "The Sword of Power", "The Sword of Elctricity", "The Sword of Fire", "The Sword of Wind", "The Sword of Ice", "The Sword of Self Appreciation", "The Sword of Love", "The Earth Sword", "The Sword of The Universe"]
weaponAttacks = [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]
c.weapon.append(weapons[w])
c.weaponAttack.append(weaponAttacks[w])
print("You have recieved the ", weapons[w])
print("")
print("It does ", weaponAttacks[w]," attack power!")
print("")
The lines above are where I'm positive the error is coming from, but just in case,
here's the rest of the code. Warning: it's very long.
import random
import time
import sys
def asky():
ask = input("Would you like to check you player stats and inventory or go to the next battle? Say inventory for inventory or say next for the next battle: ")
if "inventory" in ask:
inventory()
elif "next" in ask:
user()
def Type(t):
t = list(t)
for a in t:
sys.stdout.write(a)
time.sleep(.035)
class Cloud:
def __init__(self):
self.weaponAttack = list()
self.cweaponAttack = self.weaponAttack[0]
self.sp = 1
self.armor = list()
self.armorReduction = list()
self.weapon = list()
self.cweapon = self.weapon
self.money = 10000
self.lvl = 0
self.exp = 0
self.mexp = 100
self.attackPower = 0
addaps = self.cweaponAttack * self.attackPower
self.dmg = self.cweaponAttack + addaps
self.hp = 100
self.mhp = 100
self.name = "Cloud"
c = Cloud()
armors = ["No Armor","Belice Armor","Yoron's Armor","Andrew's Custom Armor","Zeus' Armor"]
armorReduce = [0, .025, .05, .10, .15]
c.armor.append(armors[0])
c.armorReduction.append(armorReduce[0])
w = random.randint(0, 10)
weapons = ["The Sword of Wizdom","The Sword of Kindness", "The Sword of Power", "The Sword of Elctricity", "The Sword of Fire", "The Sword of Wind", "The Sword of Ice", "The Sword of Self Appreciation", "The Sword of Love", "The Earth Sword", "The Sword of The Universe"]
weaponAttacks = [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]
c.weapon.append(weapons[w])
c.weaponAttack.append(weaponAttacks[w])
print("You have recieved the ", weapons[w])
print("")
print("It does ", weaponAttacks[w]," attack power!")
print("")
class Soldier:
def __init__(self):
dmg = random.randint(5,20)
self.lvl = 0
self.attackPower = dmg
self.hp = 100
self.mhp = 100
self.name = "Soldier"
s = Soldier()
def enemy():
ad = random.randint(0,2)
if ad >= 1: #Attack
Type("Soldier attacks!")
print("")
Type("Cloud Health: ")
print(c.hp)
Type("Enemy Health: ")
print(s.hp)
hm = random.randint(0, 2)
if hm == 0:
Type("Miss!")
print("")
elif hm > 0:
crit = random.randint(0,10)
if crit == 0:
print("CRITICAL HIT!")
crithit = int((s.attackPower) * (.5))
c.hp = c.hp - (s.attackPower + crithit)
elif crit >= 1:
c.hp = c.hp - s.attackPower
Type("Cloud Health: ")
print(c.hp)
Type("Enemy Health: ")
print(s.hp)
if c.hp <= 0:
adds = s.mhp - s.hp
s.hp = s.hp + adds
Type("GAME OVER")
print("")
Type("You Lost!")
print("")
elif s.hp <= 0:
adds = s.mhp - s.hp
s.hp = s.hp + adds
Type("GAME OVER")
print("")
Type("You Won!")
print("")
Type("You recieved 100 crystals to spend at the shop!")
print("")
c.money = c.money + 100
asky()
c.exp = c.exp + 100
else:
user()
elif ad == 0:#Defend
Type("Soldier Defends!")
print("")
Type("Cloud Health: ")
print(c.hp)
Type("Enemy Health: ")
print(s.hp)
if s.hp == s.mhp:
print("")
elif s.hp > (s.mhp - 15) and s.hp < s.mhp:
add = s.mhp - s.hp
s.hp = add + s.hp
Type("Cloud Health: ")
print(c.hp)
Type("Enemy Health: ")
print(s.hp)
elif s.hp < (s.mhp - 15):
s.hp = s.hp + 15
Type("Cloud Health: ")
print(c.hp)
Type("Enemy Health: ")
print(s.hp)
if c.hp <= 0:
adds = s.mhp - s.hp
s.hp = s.hp + adds
Type("GAME OVER")
print("")
Type("You Lost!")
print("")
elif s.hp <= 0:
adds = s.mhp - s.hp
s.hp = s.hp + adds
Type("GAME OVER")
print("")
Type("You Won!")
print("")
Type("You recieved 100 crystals to spend at the shop!")
print("")
c.money = c.money + 100
asky()
c.exp = c.exp + 100
else:
user()
def user():
User = input("attack or defend? ")
if "attack" in User:#attack
Type("Cloud attacks!")
print("")
Type("Cloud Health: ")
print(c.hp)
Type("Enemy Health: ")
print(s.hp)
hm = random.randint(0,4)
if hm == 0:
Type("Miss!")
print("")
elif hm > 0:
crit = random.randint(0,7)
if crit == 0:
print("CRITICAL HIT!")
crithit = int((c.dmg) * (.5))
s.hp = s.hp - (c.dmg + crithit)
elif crit >= 1:
s.hp = s.hp - c.dmg
Type("Cloud Health: ")
print(c.hp)
Type("Enemy Health: ")
print(s.hp)
if c.hp <= 0:
adds = s.mhp - s.hp
s.hp = s.hp + adds
Type("GAME OVER")
print("")
Type("You Lost!")
print("")
elif s.hp <= 0:
adds = s.mhp - s.hp
s.hp = s.hp + adds
Type("GAME OVER")
print("")
Type("You Won!")
print("")
Type("You recieved 100 crystals to spend at the shop!")
print("")
c.money = c.money + 100
c.exp = c.exp + 100
asky()
else:
enemy()
elif "defend" in User:#defend
Type("Cloud Heals!")
print("")
Type("Cloud Health: ")
print(c.hp)
Type("Enemy Health: ")
print(s.hp)
if c.hp == c.mhp:
Type("You are at the maximum amount of health. Cannot add more health.")
print("")
elif c.hp > (c.mhp - 15) and c.hp < c.mhp:
add = c.mhp - c.hp
c.hp = add + c.hp
Type("Cloud Health: ")
print(c.hp)
Type("Enemy Health: ")
print(s.hp)
elif c.hp <= (c.mhp - 15):
c.hp = c.hp + 15
Type("Cloud Health: ")
print(c.hp)
Type("Enemy Health: ")
print(s.hp)
if c.hp <= 0:
adds = s.mhp - s.hp
s.hp = s.hp + adds
Type("GAME OVER")
print("")
Type("You Lost!")
print("")
elif s.hp <= 0:
adds = s.mhp - s.hp
s.hp = s.hp + adds
Type("Congratulations!")
print("")
Type("You Won!")
print("")
Type("You recieved 100 crystals to spend at the shop!")
print("")
c.money = c.money + 100
c.exp = c.exp + 100
asky()
else:
enemy()
else:
Type("The option you have entered is not in the game database. Please try again")
print("")
user()
def inventory():
if c.exp == c.mexp:
print("LEVEL UP!")
c.exp = 0
adde = int((c.mexp) * (.5))
c.mexp = c.mexp + adde
c.sp = c.sp + 1
c.lvl = c.lvl + 1
if c.lvl > s.lvl:
s.lvl = s.lvl + 1
print("")
print("")
print("Level: ", c.lvl)
print("")
nextlvl = c.lvl + 1
print("Experience: [", c.exp, "/", c.mexp, "]level", nextlvl)
print("")
print("Amount of Skill Points:", c.sp)
print("")
for i in range(0, len(c.weapon)):
print(i)
print("Weapon: ", c.weapon[i])
print("Weapon Attack Damage: ", c.weaponAttack[i])
print("")
for j in range(0, len(c.armor)):
print("Armor: ", c.armor[j])
print("Armor Damage Reduction: ", c.armorReduction[j])
print("")
print("Amount of Crystals: ", c.money)
print("")
print("")
print("Stats:")
print("")
print("Maximum Health: ", c.mhp)
print("")
print("Current Health: ", c.hp)
print("")
dtop = 100 * c.attackPower
print("Attack Power: Adds", dtop, "% of sword damage")
print("")
print("Overall Damage: ", c.dmg)
print("")
print("Your Name: ", c.name)
print("")
print("")
sn = input("To heal yourself, you need to go to the shop. Say, *shop* to go to the shop, say *name* to change your name, say, *next* to fight another battle, say, *level* to use your skill point(s), or say, *help* for help: ")
print("")
if "name" in sn:
c.name = input("Enter Your name here: ")
print("Success! Your name has been changed to ", c.name)
inventory()
elif "weapon" in sn:
weapChange()
elif "next" in sn:
Type("3")
print("")
Type("2")
print("")
Type("1")
print("")
Type("FIGHT!")
print("")
user()
elif "help" in sn:
def helpp():
Type("The goal of this game is to fight all the enemies, kill the miniboss, and finally, kill the boss! each time you kill an enemy you gain *crystals*, currency which you can use to buy weapons, armor, and health. You can spend these *crystals* at the shop. To go to the shop, just say *shop* when you are in your inventory. Although, each time you level up, they get harder to defeat. Once you level up, you gain one skill point. This skill point is then used while in your inventory by saying the word *level*. You can use your skill point(s) to upgrade your stats, such as, your maximum health, and your attack power.")
print("")
continu = input("Say, *back*, to go back to your inventory screen. ")
if "back" in continu:
inventory()
else:
Type("The word you have entered is invalid. Please try again.")
print("")
helpp()
elif "shop" in sn:
shop()
elif "level" in sn:
skills()
else:
print("Level: ", c.lvl)
print("")
nextlvl = c.lvl + 1
print("Experience: [", c.exp, "/", c.mexp, "]level", nextlvl)
print("")
print("Amount of Skill Points:", c.sp)
print("")
for i in range(0, len(c.weapon)):
print("Weapon:", c.weapon[i])
print("")
print("Weapon Attack Damage: ", c.weaponAttack[i])
print("")
for i in range(0, len(c.armor)):
print("Armor: ", c.armor[i])
print("")
print("Armor Damage Reduction: ", c.armorReduction[i])
print("")
print("Amount of Crystals: ", c.money)
print("")
print("")
print("Stats:")
print("")
print("Maximum Health: ", c.mhp)
print("")
print("Current Health: ", c.hp)
print("")
dtop = 100 * c.attackPower
print("Attack Power: Adds", dtop, "% of sword damage")
print("")
print("Your Name: ", c.name)
print("")
print("")
sn = input("To heal yourself, you need to go to the shop. Say, *shop* to go to the shop, say *name* to change your name, say, *next* to fight another battle, say, *level* to use your skill point(s), say, *weapon* to switch your current weapon, or say, *help* for help: ")
if "name" in sn:
c.name = input("Enter Your name here: ")
print("Success! Your name has been changed to ", c.name)
inventory()
elif "weapon" in sn:
weapChange()
elif "next" in sn:
Type("3")
print("")
Type("2")
print("")
Type("1")
print("")
Type("FIGHT!")
print("")
user()
elif "help" in sn:
def helpp():
Type("The goal of this game is to fight all the enemies, kill the miniboss, and finally, kill the boss! each time you kill an enemy you gain *crystals*, currency which you can use to buy weapons, armor, and health. You can spend these *crystals* at the shop. To go to the shop, just say *shop* when you are in your inventory. Although, each time you level up, they get harder to defeat. Once you level up, you gain one skill point. This skill point is then used while in your inventory by saying the word *level*. You can use your skill point(s) to upgrade your stats, such as, your maximum health, and your attack power. To switch out your weapons, type in, *weapon*.")
print("")
continu = input("Say, *back*, to go back to your inventory screen. ")
if "back" in continu:
inventory()
else:
Type("The word you have entered is invalid. Please try again.")
print("")
helpp()
helpp()
elif "shop" in sn:
shop()
elif "level" in sn:
skills()
def weapChange():
for i in range(0, len(c.weapon)):
print("Weapon:", "To equip", c.weapon[i], ",say", i)
print("Weapon Attack Damage: ", c.weaponAttack[i])
print("")
weapchoice = input("Enter the weapon ID to the sword you would like to equip, or say, *cancel*, to go back to your inventory. ")
print("")
if "0" in weapchoice:
c.cweapon = c.weapon[0]
c.cweaponAttack = c.weaponAttack[0]
print("Success!", c.weapon[0], "is now equipped!")
inventory()
elif "1" in weapchoice:
c.cweapon = c.weapon[1]
print("Success!", c.weapon[1], "is now equipped!")
inventory()
c.cweaponAttack = c.weaponAttack[1]
elif "2" in weapchoice:
c.cweaponAttack = c.weaponAttack[2]
c.cweapon = c.weapon[2]
print("Success!", c.weapon[2], "is now equipped!")
inventory()
elif "3" in weapchoice:
c.cweaponAttack = c.weaponAttack[3]
c.cweapon = c.weapon[3]
print("Success!", c.weapon[3], "is now equipped!")
inventory()
elif "4" in weapchoice:
c.cweaponAttack = c.weaponAttack[4]
c.cweapon = c.weapon[4]
print("Success!", c.weapon[4], "is now equipped!")
inventory()
elif "5" in weapchoice:
c.cweaponAttack = c.weaponAttack[5]
c.cweapon = c.weapon[5]
print("Success!", c.weapon[5], "is now equipped!")
inventory()
elif "6" in weapchoice:
c.cweaponAttack = c.weaponAttack[6]
c.cweapon = c.weapon[6]
print("Success!", c.weapon[6], "is now equipped!")
inventory()
elif "7" in weapchoice:
c.cweaponAttack = c.weaponAttack[7]
c.cweapon = c.weapon[7]
print("Success!", c.weapon[7], "is now equipped!")
inventory()
elif "8" in weapchoice:
c.cweaponAttack = c.weaponAttack[8]
c.cweapon = c.weapon[8]
print("Success!", c.weapon[8], "is now equipped!")
inventory()
elif "9" in weapchoice:
c.cweaponAttack = c.weaponAttack[9]
c.cweapon = c.weapon[9]
print("Success!", c.weapon[9], "is now equipped!")
inventory()
elif "cancel" in weapchoice:
inventory()
else:
Type("The word or number you have entered is invalid. Please try again.")
print("")
print("")
weapChange()
def skills():
print("")
print("You have", c.sp, "skill points to use.")
print("")
print("Upgrade attack power *press the number 1*")
print("")
print("Upgrade maximum health *press the number 2*")
print("")
skill = input("Enter the number of the skill you wish to upgrade, or say, cancel, to go back to your inventory screen. ")
print("")
if "1" in skill:
sure = input("Are you sure you want to upgrade your character attack power in return for 1 skill point? *yes or no* ")
print("")
if "yes" in sure:
if c.sp == 0:
Type("I'm sorry but you do not have sufficient skill points to upgrade your attack power. ")
print("")
skills()
elif c.sp >= 1:
c.sp = c.sp - 1
c.attackPower = float(c.attackPower + .1)
addsap = int(100 * c.attackPower)
print("Your attack power has been upgraded to deal", addsap, "% more damage")
skills()
else:
Type("How the fuck did you get negative skill points?! ")
print("")
skills()
if "no" in sure:
skills()
elif "2" in skill:
sure = input("Are you sure you want to upgrade your maximum health in return for 1 skill point? *yes or no* ")
print("")
if "yes" in sure:
if c.sp == 0:
Type("I'm sorry but you do not have sufficient skill points to upgrade your maximum health. ")
print("")
skills()
elif c.sp >= 1:
c.sp = c.sp - 1
c.mhp = c.mhp + 30
skills()
else:
Type("How the fuck did you get negative skill points?! ")
print("")
skills()
if "no" in sure:
skills()
elif "cancel" in skill:
inventory()
else:
Type("The word or number you have entered is invalid. Please try again.")
print("")
skills()
def shop():
print("")
Type("Welcome to Andrew's Blacksmith! Here you will find all the weapons, armor, and health you need, to defeat the horrid beast who goes by the name of Murlor! ")
print("")
print("")
print("Who's Murlor? *To ask this question, type in the number 1*")
print("")
print("Can you heal me? *To ask this question, type in the number 2*")
print("")
print("What weapons do you have? *To ask this question, type in the number 3*")
print("")
print("Got any armor? *To ask this question, type in the number 4*")
print("")
ask1 = input("Enter desired number here or say, cancel, to go back to your inventory screen. ")
print("")
if "1" in ask1:
def murlor():
Type("Murlor is a devil-like creature that lives deep among the caves of Bricegate. He has been terrorising the people of this village for centuries.")
print("")
print("")
print("What is Bricegate? *To choose this option, type in the number 1*")
print("")
print("Got any more information about this village? *To choose this option, type in the number 2*")
print("")
print("Thank you! *To choose this option, type in the number 3*")
print("")
ask3 = input("Enter desired number here, or say, cancel, to go back to the main shop screen. ")
print("")
if "1" in ask3:
def questionTown():
Type("That's the name of this town.")
print("")
print("")
town = input("Go back? *Say, yes, to go back to the previous screen*")
print("")
if "yes" in town:
murlor()
else:
Type("I'm sorry but the word you have entered is invalid. Please try again.")
print("")
print("")
questionTown()
questionTown()
elif "2" in ask3:
def askquest1():
Type("Well I DO know that there's this secret underground dungeon. It's VERY dangerous but it comes with a huge reward. If you ever consider it, could you get my lucky axe? I dropped it down a hole leading to the dungeon and i was too afraid to get it back. *If you accept the quest, say yes, if you want to go back, say, no.*")
quest1 = input(" ")
print("")
if "yes" in quest1:
quest1()
elif "no" in quest1:
murlor()
else:
Type("The option you have selected is not valid. Please try again")
print("")
print("")
askquest1()
askquest1()
elif "3" in ask3:
shop()
else:
Type("The number or word you have entered is invalid. please try again.")
print("")
print("")
murlor()
murlor()
elif "2" in ask1:
def heal():
if c.hp == c.mhp:
Type("I can't heal you because there's nothing to heal.")
print("")
print("")
shop()
elif c.hp > 10 and c.hp < c.mhp:
Type("Sure! That'll be 30 crystals.")
ask2 = input(" *say, okay, to confirm the purchase or say, no, to cancel the pruchase* ")
print("")
if "okay" in ask2:
if c.money < 30:
Type("I'm sorry sir, but you don't have enough crystals to buy this.")
print("")
print("")
shop()
elif c.money >= 30:
c.money = c.money - 30
Type("30 crystals has been removed from your inventory.")
print("")
print("")
addn = c.mhp - c.hp
c.hp = c.hp + addn
Type("You have been healed!")
print("")
print("")
shop()
elif "no" in ask2:
shop()
else:
Type("The option you have chosen is invalid. Please try again")
print("")
print("")
heal()
elif c.hp > 0 and c.hp <= 10:
Type("How are you still alive?!")
print("")
print("")
Type("That'll be 50 crystals.")
ask2 = input(" *say, okay, to confirm the purchase or say, no, to cancel the pruchase* ")
print("")
if "okay" in ask2:
if c.money < 30:
Type("I'm sorry sir, but you don't have enough crystals to buy this.")
print("")
print("")
shop()
elif c.money >= 30:
c.money = c.money - 30
Type("30 crystals has been removed from your inventory.")
print("")
print("")
addn = c.mhp - c.hp
c.hp = c.hp + addn
Type("You have been healed!")
print("")
print("")
shop()
elif "no" in ask2:
shop()
else:
Type("The option you have chosen is invalid. Please try again")
print("")
print("")
heal()
else:
Type("HELP!! IT'S THE WALKING DEAD!!")
print("")
print("")
shop()
heal()
user()
Answer: At the time the instance of the class is first created with `Cloud()`,
`self.weaponAttack` is an empty `list`, and there will be no such thing as an
index 0.
You may consider passing a non-empty list to `self.weaponAttack` as an
argument via the class constructor:
weaponAttacks = [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]
c = Cloud(weaponAttacks)
And your `class` becomes:
class Cloud:
'''This is the Cloud class etc.'''
weaponAttack = list()
def __init__(self, weaponAttacks):
self.weaponAttack = weaponAttacks
self.cweaponAttack = self.weaponAttack[0]
|
I installed multiple Ipython kernels, but after this I cannot import numpy, pandas
Question: I wanted to be able to use both python 2.x and 3.x so I installed multiple
kernels as follows, as per the instructions in this question ([Using both
Python 2.x and Python 3.x in IPython
Notebook](http://stackoverflow.com/questions/30492623/using-both-
python-2-x-and-python-3-x-in-ipython-notebook))
To configure the python2.7 environment:
    conda create -n py27 python=2.7
    source activate py27
    conda install notebook ipykernel
    ipython kernel install --user
and
To configure the python3.5 environment:
conda create -n py35 python=3.5
source activate py35
conda install notebook ipykernel
ipython kernel install --user
Now I can choose between Python 2 and 3 in the notebook. But when I try to
import either numpy or pandas I get the import error
**Import error: No module named numpy**
**I tried to uninstall Anaconda and reinstall it and then install jupyter
notebook. NOW I cannot even start jupyter notebook, it says 'Kernel Error'**
Can someone please help me out?
Answer: You need to do the following in each environment:
conda install numpy
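For example, a sketch using the environment names created in the question (pandas included since you import it too):

    source activate py27
    conda install numpy pandas

    source activate py35
    conda install numpy pandas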
You could also have done this on creation:
conda create -n py35 python=3.5 notebook ipykernel numpy
|
Sorting JSON by attribute's value using Python
Question: I have the following JSON file
{
"modifyDate": 1463899037000,
"champions": [
{
"id": 40,
"stats": {
"totalDeathsPerSession": 60,
"totalSessionsPlayed": 18,
"totalDamageTaken": 246343,
"totalQuadraKills": 0,
"totalTripleKills": 0,
"totalMinionKills": 272,
"maxChampionsKilled": 3,
"totalDoubleKills": 0,
"totalPhysicalDamageDealt": 121345,
"totalChampionKills": 11,
"totalAssists": 271,
"mostChampionKillsPerSession": 3,
"totalDamageDealt": 238803,
"totalFirstBlood": 0,
"totalSessionsLost": 7,
"totalSessionsWon": 11,
"totalMagicDamageDealt": 113241,
"totalGoldEarned": 176088,
"totalPentaKills": 0,
"totalTurretsKilled": 10,
"mostSpellsCast": 0,
"maxNumDeaths": 9,
"totalUnrealKills": 0
}
},
{
"id": 111,
"stats": {
"totalDeathsPerSession": 20,
"totalSessionsPlayed": 4,
"totalDamageTaken": 60371,
"totalQuadraKills": 0,
"totalTripleKills": 0,
"totalMinionKills": 247,
"maxChampionsKilled": 3,
"totalDoubleKills": 0,
"totalPhysicalDamageDealt": 35727,
"totalChampionKills": 4,
"totalAssists": 35,
"mostChampionKillsPerSession": 3,
"totalDamageDealt": 190815,
"totalFirstBlood": 0,
"totalSessionsLost": 2,
"totalSessionsWon": 2,
"totalMagicDamageDealt": 145353,
"totalGoldEarned": 30823,
"totalPentaKills": 0,
"totalTurretsKilled": 2,
"mostSpellsCast": 0,
"maxNumDeaths": 7,
"totalUnrealKills": 0
}
},
{
"id": 43,
"stats": {
"totalDeathsPerSession": 103,
"totalSessionsPlayed": 24,
"totalDamageTaken": 335867,
"totalQuadraKills": 0,
"totalTripleKills": 0,
"totalMinionKills": 828,
"maxChampionsKilled": 10,
"totalDoubleKills": 2,
"totalPhysicalDamageDealt": 170141,
"totalChampionKills": 77,
"totalAssists": 302,
"mostChampionKillsPerSession": 10,
"totalDamageDealt": 923985,
"totalFirstBlood": 0,
"totalSessionsLost": 7,
"totalSessionsWon": 17,
"totalMagicDamageDealt": 732367,
"totalGoldEarned": 242157,
"totalPentaKills": 0,
"totalTurretsKilled": 12,
"mostSpellsCast": 0,
"maxNumDeaths": 8,
"totalUnrealKills": 0
}
},
{
"id": 117,
"stats": {
"totalDeathsPerSession": 150,
"totalSessionsPlayed": 36,
"totalDamageTaken": 494142,
"totalQuadraKills": 0,
"totalTripleKills": 0,
"totalMinionKills": 2017,
"maxChampionsKilled": 8,
"totalDoubleKills": 5,
"totalPhysicalDamageDealt": 297987,
"totalChampionKills": 102,
"totalAssists": 418,
"mostChampionKillsPerSession": 8,
"totalDamageDealt": 1905782,
"totalFirstBlood": 0,
"totalSessionsLost": 13,
"totalSessionsWon": 23,
"totalMagicDamageDealt": 1577943,
"totalGoldEarned": 353798,
"totalPentaKills": 0,
"totalTurretsKilled": 15,
"mostSpellsCast": 0,
"maxNumDeaths": 12,
"totalUnrealKills": 0
}
},
{
"id": 254,
"stats": {
"totalDeathsPerSession": 13,
"totalSessionsPlayed": 2,
"totalDamageTaken": 43839,
"totalQuadraKills": 0,
"totalTripleKills": 0,
"totalMinionKills": 77,
"maxChampionsKilled": 8,
"totalDoubleKills": 0,
"totalPhysicalDamageDealt": 227018,
"totalChampionKills": 12,
"totalAssists": 8,
"mostChampionKillsPerSession": 8,
"totalDamageDealt": 247686,
"totalFirstBlood": 0,
"totalSessionsLost": 1,
"totalSessionsWon": 1,
"totalMagicDamageDealt": 3920,
"totalGoldEarned": 21321,
"totalPentaKills": 0,
"totalTurretsKilled": 0,
"mostSpellsCast": 0,
"maxNumDeaths": 9,
"totalUnrealKills": 0
}
}
],
"summonerId": 21193669
}
and I want to get the `id`s of the 3 `champions` that have the most
`totalSessionsPlayed`. To do this, I'd first sort the `champions` by
`totalSessionsPlayed` and then take the first 3 `id`s. How can I do this, or
is there maybe a better way to do this instead of sorting it first?
Answer: If I understand the problem right, you can use `heapq.nlargest` to [partially
sort](https://en.wikipedia.org/wiki/Partial_sorting) your array:
import json
import heapq
dat = json.loads("(your json here)")
champions = dat['champions']
tsp_getter = lambda x: x['stats']['totalSessionsPlayed']
largest = heapq.nlargest(3, champions, key = tsp_getter)
ids = [c['id'] for c in largest]
But plain `sorted` will probably work just as well as `nlargest` (you can benchmark
both to check):
    tsp_getter = lambda x: x['stats']['totalSessionsPlayed']
    largest = sorted(champions, key=tsp_getter, reverse=True)
    ids = [c['id'] for c in largest[:3]]
|
"One off" error in numpy.r_ array construction
Question: Suppose I want to construct an array in Python/numpy using the r_ operator
like so.
>>> import numpy as np
>>> np.r_[0.02:0.04:0.01]
array([ 0.02, 0.03])
>>> np.r_[0.04:0.06:0.01]
array([ 0.04, 0.05])
Both cases work as expected. If I change the limits though:
>>> np.r_[0.03:0.05:0.01] #?????
array([ 0.03, 0.04, 0.05])
Why does this happen? Is it something to do with inexact floating point
representations? Or is this a bug?
Answer: With a complex 'step' this uses `linspace`:
In [68]: np.r_[0.02:.04:3j]
Out[68]: array([ 0.02, 0.03, 0.04])
In [69]: np.r_[0.03:.05:3j]
Out[69]: array([ 0.03, 0.04, 0.05])
With the float 'step' it uses `arange`, whose documentation notes that results can be
inconsistent with non-integer steps. It recommends `linspace` for more control.
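To illustrate (a quick check; exact reprs may vary slightly by platform), the float-step case behaves like calling `arange` directly, whose length is essentially `ceil((stop - start) / step)`; floating point error pushes `(0.05 - 0.03) / 0.01` just above 2, so a third point appears:

    In [70]: np.arange(0.03, 0.05, 0.01)
    Out[70]: array([ 0.03,  0.04,  0.05])

    In [71]: (0.05 - 0.03) / 0.01
    Out[71]: 2.0000000000000004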
`np.mgrid` also accepts the pseudo-complex step notation.
Look in `/usr/lib/python3/dist-packages/numpy/lib/index_tricks.py` for more
details on how these classes work.
|
Call function from a file B in a loop in file A
Question: I have a Python file B with all my functions and a main code which runs in a loop
of 0.25 s, and I want to call this file in a loop from my file A. Does my weird idea
make sense? Here is what I did, but it only runs the loop from file B once:
#FileA
while 1:
from FileB import *
And my file B :
#FileB
while t<0.25:
#my stuff
Thanks.
PS: I forgot to mention that I can't modify file B.
Answer: The `import` statement only reads the target module one time.
If you have control of both files, I'd suggest that you make your loop a
function in file B:
def main():
while t<0.25:
#my stuff
if __name__ == '__main__':
main()
Then you can call it repeatedly from file A:
from fileB import main as Bmain
while 1:
Bmain()
If you don't have control of the source code for the files (meaning: if the
code comes from someone else), there are a few options. Probably the easiest
and fastest to code would be to use the
[`os.system(command)`](https://docs.python.org/3/library/os.html?highlight=os.system#os.system)
function to run the contents of fileB in a separate process.
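If file B really cannot be modified, a sketch of that idea using `subprocess` (the script filename is assumed):

    # FileA
    import subprocess

    while True:
        # Re-runs FileB.py from scratch on every iteration,
        # so its internal 0.25 s loop executes each time.
        subprocess.call(['python', 'FileB.py'])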
|
How can I Draw rectangle in programmatically
Question: How can I draw a rectangle (oblique projection view) programmatically (in Python)
by giving height, width and depth?
Answer: Take a look at Python's `turtle` module, [here is the v3.3
documentation](https://docs.python.org/3.3/library/turtle.html). On top of
height, width and depth, you will need to think about an angle for the
projection - I think this is typically 30/45 degrees.
To get you started... [adapting code by Y. Daniel
Liang](http://www.cs.armstrong.edu/liang/py/html/UsefulTurtleFunctions.html).
import turtle
w = 100
h = 50
d = 20
angle = 30
def drawRectangle(width, height):
turtle.right(90)
turtle.forward(height)
turtle.right(90)
turtle.forward(width)
turtle.right(90)
turtle.forward(height)
turtle.right(90)
turtle.forward(width)
turtle.penup()
turtle.goto(0, 0)
turtle.pendown()
drawRectangle(w, h)
turtle.left(angle)
turtle.forward(d)
turtle.right(angle)
drawRectangle(w, h)
|
numpy.ndarray object not callable in brute
Question: I am following the code in a "Python for Finance" book and trying to optimize
a function but am getting an error when following the code.
> TypeError: 'numpy.ndarray' object is not callable
The other forums on this error don't seem to be applicable.
Please let me know where I'm going wrong.
Code:
# import libraries
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
import scipy.optimize as spo
from mpl_toolkits.mplot3d import Axes3D
# define function
def fm(x, y):
return ( np.sin(x) + 0.05*x**2 + np.sin(y) + 0.05*y**2 )
# construct range vectors
x = np.linspace(-10, 10, 50)
y = np.linspace(-10, 10, 50)
X, Y = np.meshgrid(x, y)
Z = fm(X, Y)
# plot surface
fig = plt.figure(figsize=(9, 6))
ax = fig.gca(projection='3d')
surf = ax.plot_surface(X, Y, Z, rstride=2, cstride=2,cmap=mpl.cm.coolwarm,linewidth=0.5, antialiased=True)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('f(x, y)')
fig.colorbar(surf, shrink=0.5, aspect=5)
# define function for optimisation
def fo(x, y):
z = np.sin(x) + 0.05*x**2 + np.sin(y) + 0.05*y**2
if output == True:
print(x, y, z)
return z
# print each iteration?
output = True
rranges = (slice(-10, 10.1, 5), slice(-10, 10.1, 5))
params = (x,y)
# optimise
spo.brute(fo(x,y), ((-10, 10.1, 5),(-10, 10.1, 5)), finish=None)
Error:
Traceback (most recent call last):
File "<ipython-input-1-76b5e42b4ae6>", line 1, in <module>
runfile('C:/Users/Chris/Dropbox/Chris Personal/Learning Python Resources/Python for Finance/Chapter 9/trial_convex_optimisation.py', wdir='C:/Users/Chris/Dropbox/Chris Personal/Learning Python Resources/Python for Finance/Chapter 9')
File "C:\Users\Chris\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 699, in runfile
execfile(filename, namespace)
File "C:\Users\Chris\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 88, in execfile
exec(compile(open(filename, 'rb').read(), filename, 'exec'), namespace)
File "C:/Users/Chris/Dropbox/Chris Personal/Learning Python Resources/Python for Finance/Chapter 9/trial_convex_optimisation.py", line 46, in <module>
spo.brute(fo(x,y), [(-10, 10.1, 5),(-10, 10.1, 5)], finish=None)
File "C:\Users\Chris\Anaconda3\lib\site-packages\scipy\optimize\optimize.py", line 2604, in brute
Jout = vecfunc(*grid)
File "C:\Users\Chris\Anaconda3\lib\site-packages\numpy\lib\function_base.py", line 1811, in __call__
return self._vectorize_call(func=func, args=vargs)
File "C:\Users\Chris\Anaconda3\lib\site-packages\numpy\lib\function_base.py", line 1874, in _vectorize_call
ufunc, otypes = self._get_ufunc_and_otypes(func=func, args=args)
File "C:\Users\Chris\Anaconda3\lib\site-packages\numpy\lib\function_base.py", line 1836, in _get_ufunc_and_otypes
outputs = func(*inputs)
File "C:\Users\Chris\Anaconda3\lib\site-packages\scipy\optimize\optimize.py", line 2598, in _scalarfunc
return func(params, *args)
TypeError: 'numpy.ndarray' object is not callable
Answer: Your call to the `scipy.optimize.brute` function is wrong: you are passing `fo(x, y)`
(the result of evaluating the function) instead of the function object itself. You have
to pass `fo` and define `rranges` as follows:
def fo(xy):
x, y = xy
z = np.sin(x) + 0.05*x**2 + np.sin(y) + 0.05*y**2
if output == True:
print(x, y, z)
return z
rranges = (slice(-10, 10.1, 5), slice(-10, 10.1, 5))
spo.brute(fo, rranges, finish=None)
|
How to use install Theano on Python 2.7.9 on windows 8.1 Pro ?
Question: Also, it would be great if someone could help me with the installation of the
"nolearn" package too. Thanks!
Answer: I think the issue is that you don't have the import library for libpython.
This is needed for linking with MSVC on Windows and does not ordinarily come
with binary distributions.
It is probable that your Python distribution was built with something other than
MSVC and doesn't even have an import library to distribute, in which case you
would have to make one. There are some tools on the internet that explain how to do
that, but I'm a little hazy on the details and I don't have a Windows machine
right now.
Well, for more details you can speak with [windows tech
support](http://www.spotageek.com/services/windows-technical-support.php)
experts.
|
Python regex to extract phone numbers from string
Question: I am very new to regex. Using Python's re, I am looking to extract phone numbers
from the following multi-line string below:
Source = """<p><strong>Kuala Lumpur</strong><strong>:</strong> +60 (0)3 2723 7900</p>
<p><strong>Mutiara Damansara:</strong> +60 (0)3 2723 7900</p>
<p><strong>Penang:</strong> + 60 (0)4 255 9000</p>
<h2>Where we are </h2>
<strong> Call us on:</strong> +6 (03) 8924 8686
</p></div><div class="sys_two">
<h3 class="parentSchool">General enquiries</h3><p style="FONT-SIZE: 11px">
<strong> Call us on:</strong> +6 (03) 8924 8000
+ 60 (7) 268-6200 <br />
Fax:<br />
+60 (7) 228-6202<br />
Phone:</strong><strong style="color: #f00">+601-4228-8055</strong>"""
So when I compile the pattern, I should be able to find them using
phone = re.findall(pattern,source,re.DOTALL)
['+60 (0)3 2723 7900',
'+60 (0)3 2723 7900',
'+ 60 (0)4 255 9000',
'+6 (03) 8924 8686',
'+6 (03) 8924 8000',
'+ 60 (7) 268-6200',
'+60 (7) 228-6202',
'+601-4228-8055']
Please help me identify the right pattern.
Answer: Using the `re` module. The pattern `\+[-()\s\d]+?(?=\s*[+<])` matches a literal `+`
followed (lazily) by digits, spaces, parentheses and hyphens, stopping just before the
next `+` or `<`:
>>> import re
>>> Source = """<p><strong>Kuala Lumpur</strong><strong>:</strong> +60 (0)3 2723 7900</p>
<p><strong>Mutiara Damansara:</strong> +60 (0)3 2723 7900</p>
<p><strong>Penang:</strong> + 60 (0)4 255 9000</p>
<h2>Where we are </h2>
<strong> Call us on:</strong> +6 (03) 8924 8686
</p></div><div class="sys_two">
<h3 class="parentSchool">General enquiries</h3><p style="FONT-SIZE: 11px">
<strong> Call us on:</strong> +6 (03) 8924 8000
+ 60 (7) 268-6200 <br />
Fax:<br />
+60 (7) 228-6202<br />
Phone:</strong><strong style="color: #f00">+601-4228-8055</strong>"""
>>> for i in re.findall(r'\+[-()\s\d]+?(?=\s*[+<])', Source):
print i
+60 (0)3 2723 7900
+60 (0)3 2723 7900
+ 60 (0)4 255 9000
+6 (03) 8924 8686
+6 (03) 8924 8000
+ 60 (7) 268-6200
+60 (7) 228-6202
+601-4228-8055
>>>
|
When reading in a txt matrix, how can i skip first column
Question: I have a file that looks like this:
1 2 3 4 5 6 7
1 0 1 1 1 1 1 1
2 0 0 1 1 1 1 1
3 0 0 0 1 1 1 1
4 0 0 0 0 1 1 1
5 0 0 0 0 0 1 1
6 0 0 0 0 0 0 1
7 0 0 0 0 0 0 0
I want to read in only the 1s and 0s and ignore the top header row and the row
names (the first column).
So far I have the header row handled, but how can I skip the first column? My
code so far:
with open('file') as f:
next(f) #skips header row
content = [x.strip('\n') for x in f.readlines()]
I'm trying to use only base python and no libraries.
Answer: Use a simple indexing:
with open('file') as f:
next(f)
content = [x.strip().split()[1:] for x in f]
This will give you the split zeros and ones as a nested list.
If you don't want to split the lines, you can still use indexing to remove the
first character:
content = [x[1:].strip() for x in f]
Or, as a NumPythonic approach, you can use the `loadtxt()` function:
>>> import numpy as np
    >>> from io import StringIO
>>> np.loadtxt(StringIO(my_string), skiprows=1)[:,1:]
array([[ 0., 1., 1., 1., 1., 1., 1.],
[ 0., 0., 1., 1., 1., 1., 1.],
[ 0., 0., 0., 1., 1., 1., 1.],
[ 0., 0., 0., 0., 1., 1., 1.],
[ 0., 0., 0., 0., 0., 1., 1.],
[ 0., 0., 0., 0., 0., 0., 1.],
[ 0., 0., 0., 0., 0., 0., 0.]])
|
ValueError: invalid literal for long() with base 10: '5B'
Question: What I understand from this error is that there is a column of type long(), but
that column contains the value '5B', which isn't a long.
This is the line where the error occurs:
df_Company = df1.groupby(by=['manufacturer','quality_issue'], as_index=False) ['quality_issue2'].count()
I have checked all the column types of the dataframe df1, but there are no
columns with the type long. 5B is the name of a manufacturer, so I assume that
the column manufacturer has suddenly become of type long during this statement.
I checked what types the dataframe df1 has:
print (df1.dtypes)
manufacturer object
yearweek int64
quality_issue object
quality_issue2 object
I 'think' I have to do something with `df_Company.astype(long)` but it seems I
can't make it work. Does anyone have an idea how to fix this?
Note: the strange thing is that on my other computer, where I have Python 3.5.1,
the same code works just fine, but when I run the code on my current computer,
where I have Python 2.7.9, I get this long error.
Answer: The problem is different, see
[8381](https://github.com/pydata/pandas/issues/8381), but in my pandas version
`0.18.1` it works fine.
I think you can change `False` to `True` and then
[`reset_index`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.DataFrame.reset_index.html):
    df_Company = (df1.groupby(by=['manufacturer','quality_issue'], as_index=True)['quality_issue2']
                     .count()
                     .reset_index())
Differences between [`size`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.core.groupby.GroupBy.size.html) and
[`count`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.count.html) (see
[differences with numeric
values](http://stackoverflow.com/a/33346694/2901002)):
Sample with `string` values:
import pandas as pd
import numpy as np
df1=pd.DataFrame([['foo','foo','bar','bar','bar','oats'],
['foo','foo','bar','bar','bar','oats'],
[None,'foo','bar',None,'bar','oats']]).T
df1.columns=['manufacturer','quality_issue','quality_issue2']
print (df1)
manufacturer quality_issue quality_issue2
0 foo foo None
1 foo foo foo
2 bar bar bar
3 bar bar None
4 bar bar bar
5 oats oats oats
    df_Company = (df1.groupby(by=['manufacturer','quality_issue'], as_index=False)['quality_issue2']
                     .count())
print (df_Company)
manufacturer quality_issue quality_issue2
0 bar bar 2
1 foo foo 1
2 oats oats 1
    df_Company1 = (df1.groupby(by=['manufacturer','quality_issue'])['quality_issue2']
                      .size()
                      .reset_index(name='quality_issue2'))
print (df_Company1)
manufacturer quality_issue quality_issue2
0 bar bar 3
1 foo foo 2
2 oats oats 1
I think you can omit `['quality_issue2']`; the output is the same:
    df_Company1 = (df1.groupby(by=['manufacturer','quality_issue'])
                      .size()
                      .reset_index(name='quality_issue2'))
print (df_Company1)
manufacturer quality_issue quality_issue2
0 bar bar 3
1 foo foo 2
2 oats oats 1
|
Pretty formatting of long string
Question: The string
date2check = to_datetime(str(last_tx.year) + \
'-' + str(int(last_tx.month)-3) + \
'-' + str(last_tx.day) + \
' ' + str(last_tx.hour) + \
':' + str(last_tx.minute) + \
':' + str(last_tx.second))
works without problems, but I want to know if there is some way to re-write this
more appropriately (in a pythonic way). `last_tx` is a datetime object.
Answer: A pythonic way is to use the `datetime` module to get the date of 3 months ago:
datetime.strftime(last_tx-timedelta(90),'%Y-%m-%d %H:%M:%S')
Here is an example:
>>> from datetime import datetime, timedelta
>>> datetime.now()
datetime.datetime(2016, 5, 23, 23, 3, 34, 588744)
>>> datetime.strftime(datetime.now()-timedelta(90),'%Y-%m-%d %H:%M:%S')
'2016-03-24 23:03:38'
As @sparkandshine mentioned in a comment, since 90 days don't always represent 3
months, you can use `dateutil.relativedelta` to achieve an exact match.
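For example (a sketch; `relativedelta` comes from the third-party `python-dateutil` package, and the output reflects whenever you run it):

    >>> from dateutil.relativedelta import relativedelta
    >>> datetime.strftime(datetime.now() - relativedelta(months=3), '%Y-%m-%d %H:%M:%S')
    '2016-02-23 23:03:41'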
|
Error using python logging from Pydev
Question: I am using PyDev with Python 3.5 from an Aptana installation. All worked fine
until I decided to explore the logging module, which I had never used before. I
started with a new script from the tutorial:
import logging
logging.warning('Watch out!') # will print a message to the console
logging.info('I told you so') # will not print anything
in Pydev I have this error:
Traceback (most recent call last):
File "C:\Users\Tomasz\workspace\basicLogging.py", line 7, in <module>
logging.warning('Watch out!') # will print a message to the console
AttributeError: module 'logging' has no attribute 'warning'
I searched and found questions like: [python : install logging
module](http://stackoverflow.com/questions/32848251/python-install-logging-
module) with a similar problem but no solution. Obviously the problem is not
with the installation. When I run exactly the same script from CMD I get correct
output. At the moment it seems like PyDev gives me errors on most of my
scripts. If I come back to code which previously worked fine, I now get
this:
Traceback (most recent call last):
File "C:\Users\Tomasz\workspace\piClientFullQt.py", line 15, in <module>
from matplotlib.backends import qt_compat
File "C:\Users\Tomasz\AppData\Local\Programs\Python\Python35-32\lib\site-packages\matplotlib\__init__.py", line 122, in <module>
from matplotlib.cbook import is_string_like, mplDeprecation, dedent, get_label
File "C:\Users\Tomasz\AppData\Local\Programs\Python\Python35-32\lib\site-packages\matplotlib\cbook.py", line 33, in <module>
import numpy as np
File "C:\Users\Tomasz\AppData\Local\Programs\Python\Python35-32\lib\site-packages\numpy\__init__.py", line 180, in <module>
from . import add_newdocs
File "C:\Users\Tomasz\AppData\Local\Programs\Python\Python35-32\lib\site-packages\numpy\add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "C:\Users\Tomasz\AppData\Local\Programs\Python\Python35-32\lib\site-packages\numpy\lib\__init__.py", line 8, in <module>
from .type_check import *
File "C:\Users\Tomasz\AppData\Local\Programs\Python\Python35-32\lib\site-packages\numpy\lib\type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
File "C:\Users\Tomasz\AppData\Local\Programs\Python\Python35-32\lib\site-packages\numpy\core\__init__.py", line 58, in <module>
from numpy.testing.nosetester import _numpy_tester
File "C:\Users\Tomasz\AppData\Local\Programs\Python\Python35-32\lib\site-packages\numpy\testing\__init__.py", line 10, in <module>
from unittest import TestCase
File "C:\Users\Tomasz\AppData\Local\Programs\Python\Python35-32\lib\unittest\__init__.py", line 59, in <module>
from .case import (TestCase, FunctionTestCase, SkipTest, skip, skipIf,
File "C:\Users\Tomasz\AppData\Local\Programs\Python\Python35-32\lib\unittest\case.py", line 273, in <module>
class _CapturingHandler(logging.Handler):
AttributeError: module 'logging' has no attribute 'Handler'
I am not sure how this happened. If I do `print(sys.executable)` it gives the
same path
`C:\Users\Tomasz\AppData\Local\Programs\Python\Python35-32\python3.exe` in
both cases: CMD running fine and PyDev giving the error.
I have some problem with some Python variables in PyDev (I think) but can't
find out how to fix it.
**EDIT:** I looked at [this](http://stackoverflow.com/questions/5595276/pydev-eclipse-python-interpreters-error-stdlib-not-found?rq=1) question and tried the answers.
The location of the Python interpreter is correct and it looks like I have all
the libs I need:
C:\Users\Tomasz>python3 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())"
C:\Users\Tomasz\AppData\Local\Programs\Python\Python35-32\Lib\site-packages
And site-packages are already in the system PYTHONPATH.
I tried Restore Defaults in Window -> Preferences -> PyDev -> Interpreters ->
Python Interpreter.
**EDIT:** Following @Samuel's advice I tried:
import logging
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logging.warning('Watch out!') # will print a message to the console
logging.info('I told you so') # will not print anything
and in PyDev I have:
Traceback (most recent call last):
File "C:\Users\Tomasz\workspace\SCT2python\goodExamps\logging\basicLogging.py", line 3, in <module>
logger = logging.getLogger()
AttributeError: module 'logging' has no attribute 'getLogger'
It works fine if I run it from the command line as a script!!
**EDIT: THE SOLUTION** Thanks to @Samuel I figured out I had made an absolutely stupid
mistake! Before I started playing with the library I made a folder to keep my
scripts in and stupidly called it "logging", so my scripts were picking up that
folder instead of the standard library module. Renaming the folder solved the problem!
Answer: You need to init your logger instance:
import logging
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logger.warning('Watch out!')
logger.info('I told you so')
|
how to take a line from a text to use as input for a function python
Question: Hi guys, I have a text file with some geodata that looks like this:
[(-76.34666,40.006886),(-76.34666,40.073017),(-76.25411,40.073017),(76.25411,40.006886)]
[(-84.82031,38.403187)),(-84.82031,42.327133),(-80.51862,42.327133),(-80.51862,38.403187)]
Now I want to take it line by line as input for a polygon function. I first
tried to make it work with a single line before trying iteration, but it won't
work. This is my code for now:
from shapely.wkt import loads as load_wkt
from shapely.geometry import Point, Polygon
f = open('koordinat.txt', 'r')
line = f.readline()
p = Polygon(line)
print (p.centroid)
Every time I get the same error at `p = Polygon(line)`: "A LinearRing must have at
least 3 coordinate tuples". But when I take one of the lines and put it into the
function manually, it works fine.
Any help? Also, an example of a possible iteration would be nice :)
Answer: You're passing a string to `Polygon`, but it's expecting a list of coordinates
(numbers).
Try this:
import ast
line = ast.literal_eval(f.readline())
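A sketch of the full loop the question asks for, assuming the file name from the question and that every line parses cleanly as a list of (x, y) tuples:

    import ast
    from shapely.geometry import Polygon

    with open('koordinat.txt') as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            coords = ast.literal_eval(line)  # e.g. [(-76.34666, 40.006886), ...]
            p = Polygon(coords)
            print(p.centroid)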
|
"TypeError: list indices must be integers, not str" in Python 2.7
Question: I'm currently taking a Data Analysis course on Udacity. I'm having a bit of
hard time. I have I'm currently trying to convert some data types in some
dictionaries and I keep getting the error "TypeError: list indices must be
integers, not str" Now, it says it's a list but from my understanding all my
data is in a dictionary. Here's the code.
# Lesson 1 - Data Analysis
# Get & Open Data
import unicodecsv
import datetime as dt
def openCSV(filename):
with open(filename, "rb") as f:
reader = unicodecsv.DictReader(f)
return list(reader)
def parse_date(date):
if date == '':
return None
else:
return dt.strptime(date, "%y-%m-%d")
def parse_int(i):
if i == '':
return None
else:
return int(i)
enrollments = openCSV("enrollments.csv")
for enrollment in enrollments:
enrollments['cancel_date'] = parse_date(enrollments['cancel_date'])
enrollments['days_to_cancel'] = parse_int(enrollments['days_to_cancel'])
enrollments['is_canceled'] = enrollments['is_canceled'] == 'True'
enrollments['is_udacity'] = enrollments['is_udacity'] == 'True'
enrollments['join_date'] = parse_date(enrollments['join_date'])
# daily_engagement = openCSV("daily_engagement.csv")
# project_submissions = openCSV("project_submissions.csv")
enrollments[0]
Here is a sample of the contents of the file (the first two rows):
account_key,status,join_date,cancel_date,days_to_cancel,is_udacity,is_canceled
448,canceled,2014-11-10,2015-01-14,65,True,True
Answer: In your for loop, you get `enrollment` by iterating over `enrollments`, but you then
try to access `enrollments`'s keys instead of `enrollment`'s keys:
for enrollment in enrollments:
enrollment['cancel_date'] = parse_date(enrollment['cancel_date'])
enrollment['days_to_cancel'] = parse_int(enrollment['days_to_cancel'])
enrollment['is_canceled'] = enrollment['is_canceled'] == 'True'
enrollment['is_udacity'] = enrollment['is_udacity'] == 'True'
enrollment['join_date'] = parse_date(enrollment['join_date'])
Also, your helper functions can be simplified (note that with `import datetime as dt`
you need `dt.datetime.strptime`, and dates like `2014-11-10` need the `%Y` directive
for a four-digit year):
    def parse_date(date):
        return dt.datetime.strptime(date, "%Y-%m-%d") if date else None
    def parse_int(i):
        return int(i) if i else None
|